Share -F nfs ... failure
I am trying to share a file system by:
zfs set sharenfs=on ...
or
share -F nfs -o ro=ro1estw -d "home dirs" /export/tag/os_1103
ld.so.1: share: fatal: relocation error: file /usr/lib/fs/nfs/share: symbol issubdir: referenced symbol not found
Killed
My system described by uname -a:
SunOS ro1estw 5.10 Generic_141445-09 i86pc i386 i86pc Solaris
ldd -r /usr/lib/fs/nfs/share indeed says:
symbol not found: issubdir (/usr/lib/fs/nfs/share)
but on the other hand it seems the incriminated symbol really is defined...??? please see below
ldd /usr/lib/fs/nfs/share | awk '{print $3}' | xargs nm -Al | grep issubdir
/usr/lib/libshare.so.1: [79] | 76412| 621|FUNC |LOCL |2 |12 |issubdir
/usr/lib/libshare.so.1: [238] | 0| 0|FILE |LOCL |0 |ABS |issubdir.c
the package containing libshare.so.1 is
pkginfo -l SUNWcsu
PKGINST: SUNWcsu
NAME: Core Solaris, (Usr)
CATEGORY: system
ARCH: i386
VERSION: 11.10.0,REV=2005.01.21.16.34
BASEDIR: /
VENDOR: Sun Microsystems, Inc.
DESC: core software for a specific instruction-set architecture
PSTAMP: on10-patch-x20101116204123
INSTDATE: Jan 09 2011 05:44
HOTLINE: Please contact your local service provider
STATUS: completely installed
FILES: 1675 installed pathnames
smpatch update says that my system is up to date.
Can somebody support me with that?
Hi.
Try clearing LD_LIBRARY_PATH:
LD_LIBRARY_PATH=/usr/lib
export LD_LIBRARY_PATH
and try share again.
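As a sketch of why this helps (the /opt/oldlibs directory below is hypothetical): if a stale copy of libshare.so.1 sits in a directory that LD_LIBRARY_PATH searches before /usr/lib, the runtime linker binds against it and misses symbols the patched copy defines. Clearing the variable restores the default search order:

```shell
# Hypothetical: a stale library directory shadowing /usr/lib
LD_LIBRARY_PATH=/opt/oldlibs:/usr/lib
export LD_LIBRARY_PATH
# Clearing the variable restores the runtime linker's default search path
unset LD_LIBRARY_PATH
echo "LD_LIBRARY_PATH=${LD_LIBRARY_PATH:-<unset>}"
```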
Regards.
Similar Messages
-
Error mounting NFS share - mount.nfs: Operation not permitted
I've got an NFS share on a FreeBSD server which I mount via fstab.
It mounts automatically at boot and everything is fine.
However, if I unmount it and try to mount it again I get:
mount.nfs: Operation not permitted
I have tried vers=3 and nfsvers=3 in fstab, but to no avail.
rpcbind is allowed in /etc/hosts.allow.
Does anyone have any ideas?
fstab entry:
server:/path/to/files /mnt/files nfs ro,hard,intr,nfsvers=3 0 0
Tagging along, I have the same problem, although I have a different setup:
- Server = Arch linux
- Client1 = Debian Testing linux
- Client2 = Arch linux
On client1, I'm unable to mount all NFS shares: 2 out of 3 mount OK and the third fails with this error (both through fstab and manually):
# mount -a
mount.nfs4: access denied by server while mounting (null)
On Client2 I'm able to connect automatically and manually to all shares.
Maybe it is Debian-related, but the debian user forums have not been of much help...
THX for any input!
Last edited by zenlord (2010-03-04 12:07:04) -
Library stored on NFS share?
All,
Thinking about a new MB Air, but I've got a 65GB iTunes Library. When I'm not at home, I have my iPhone with me, so I listen to music there. When I'm at home, my WLAN is 802.11n, and my NAS has a 2x1G LAG connecting it to the network. I'm thinking of putting the library on an auto mounted NFS share.
NFS is a pretty resilient protocol, not at all unstable like SMB and AFP can be.
Anyone done something like this before? Success? Failure? Dragons?
i have my iTunes library (in fact, the entire iTunes folder including the library files) on a NAS and don't see these problems. however, i'm making sure that the share is mounted on my desktop before i launch iTunes.
you can have the NAS mount automatically at startup by adding the share(s) to your login items via system preferences > accounts > login item.
or, if you are comfortable with AppleScript, you could edit this script to suit your situation:
try
mount volume "afp://<AirPort Extreme Name>.local/<Share Volume Name>" as user name "<Share Username>" with password "<Share password>"
end try
this script just places the drive's icon on the desktop.
save it as an application and add that to your login items.
credit for the script goes to Tesserax. -
What is proper way of defining NFS shares?
i have two servers, serverA is solaris 9 and serverB is solaris 10.
in serverA, i define in /etc/dfs/dfstab:
share -F nfs -o root=serverB /dirB
then from serverB, i do a:
mount -F nfs serverA:/dirB /xdirB
when i do a (from serverB):
cd /xdirB
find . -print -mount -depth | cpio -pvdm /destination
i get permission denied errors on some directories. i found that if i first do a
chmod -R 777 /dirB
i will not get permission denied errors. which is understandable.
question is, how does one properly define NFS shares so that i don't have to make all dirs/files world readable?
If you are running DNS, it is likely that the IP address of your client does not resolve to 'serverB' but to something like 'serverB.company.com'. Those two strings do not match, and you are probably not granting root access to the client.
On the client, touch a file that doesn't exist. When it's created, who is the owner? If it's 'nobody', then that's almost certainly your problem.
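Darren's point in dfstab form, as a sketch: assuming the client's IP actually reverse-resolves to serverB.company.com (that FQDN is an assumption), grant both rw and root under the name that matches:

```
# /etc/dfs/dfstab on serverA (serverB.company.com is a hypothetical FQDN)
share -F nfs -o rw=serverB.company.com,root=serverB.company.com /dirB
```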
Darren -
Does file share preview support NFS for mounting in linux?
I've been experimenting with the file share preview and realized that CIFS doesn't really support a true file share with proper permissions.
Is it possible to use the file share with NFS?
thanks
Ricardo
RicardoK,
No, you can't mount an Azure file share via NFS. Azure file shares only support CIFS (SMB version 2.1). Although it doesn't support NFS, you can still mount it on a Linux system via CIFS. Install the "cifs-utils" package ("apt-get install cifs-utils" on Ubuntu). You can then mount it manually like this:
$ mount -t cifs \\\\mystorage.blob.core.windows.net\\mydata /mnt/mydata -o vers=2.1,dir_mode=0777,file_mode=0777,username=mystorageaccount,password=<apikeygoeshere>
Or you can add it to your /etc/fstab to have it mounted automatically at boot. Add the following line to your /etc/fstab file:
//mystorage.blob.core.windows.net/mydata /mnt/mydata cifs vers=2.1,dir_mode=0777,file_mode=0777,username=mystorageaccount,password=<apikeygoeshere>
It's not as good as having a real NFS export, but it's as good as you can get using Azure Storage at the moment. If you truly want NFS storage in Azure, the best approach is to create a Linux VM that you configure as an NFS file server, and create NFS exports that can be mounted on all of your Linux servers.
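If you go the Linux-VM route described above, the export itself is one line in /etc/exports (the path and subnet here are made-up placeholders):

```
# /etc/exports on the NFS-server VM (hypothetical path and subnet)
/srv/nfsdata  10.0.0.0/24(rw,sync,no_subtree_check)
```

After editing the file, `exportfs -ra` makes the NFS server re-read it.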
-Robert -
How to limit nfs share to a specific host
Hi all,
I used the following command in dfstab to limit sharing of a directory to a host called alert1.somedomain.com.
share -F nfs -o rw=alert1.somedomain.com /disk1/share
When I mount the directory on alert1, it can read the directory, but cannot write to it.
Thanks
Most likely the other security mechanisms are blocking you, perhaps just a simple matter of user permissions. Note that if you are trying to do this as root on alert1, then you also need to allow root in the share, otherwise the root userid is mapped to nobody.
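The suggestion above, sketched as a dfstab line: adding root= means root on alert1 is no longer mapped to nobody (whether you want to grant that is a policy decision):

```
share -F nfs -o rw=alert1.somedomain.com,root=alert1.somedomain.com /disk1/share
```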
-- richard -
IOMeter hangs when running to a NFS share from Windows Storage Server 2012
Hello,
I am trying to measure performance of NFS share coming from Windows Storage Server 2012 using IOMeter also running on windows Server 2012. I can create the share on WSS2012. Windows 2012 client does see the share. IOmeter does see the share, and I can start
running. But fairly quickly IOMeter gets an error and stops. After that, the NFS share on the client is not visible to IOMeter anymore. This happens every time.
I have used IOMeter to SMB shares a lot with no problem..
Thanks in advance,
BJ
1) Can you use NFS share with NFS clients normally? I mean is it I/O Meter who has issues with streaming or do other apps have similar problems? Say normal copy to / from NFS share?
2) What error exactly is popped up? Do you happen to have a screenshot?
-
Question on cluster 3.x and NFS shares
I'm going to depict a situation that under sc2.x worked just fine, but currently isn't working so well..
Let's attach some trivial names to machines just for grins -
I'm on dbserver1 and I want to share a filesystem over a private (ipmp'd) network - not my nodename that is - to a server called appserver1 :
dbserver1 has routable ip address of 10.0.0.1
appserver1 has routable ip address of 10.0.0.2
dbserver1-qfe1 is using ip address of 192.168.0.1
appserver1-qfe1 is using ip address of 192.168.0.2
all entries are in each server's local /etc/inet/hosts file
the nodename of each system is the corresponding ip address on the 10 net.
If I wanted to share /usr/local via the physical, I'd run from dbserver1
share -F nfs -o rw=appserver1 -d "comment" /usr/local
on appserver1 -
mount -F nfs dbserver1:/usr/local /mnt
However, I want to share some filesystem so it's only visible via the 192 subnet:
share -F nfs -o rw=appserver1-qfe1 -d "comment" /usr/local
on appserver1 -
mount -F nfs dbserver1-qfe1:/usr/local /mnt
currently mounting over the "public" works, but over the private returns "permission denied"
Interesting twist...
If I do this
share -F nfs -o rw -d "comment" /usr/local
and then try
mount -F nfs dbserver1-qfe1:/usr/local /mnt
it works...
I know I've depicted something that's fairly generic, but I'm just trying to understand what is being done differently in sc3.x with respect to nfs exports versus sc2.x.
thanks in advance,
Jeff
anything, anybody?
Just for additional clarification, this is a solaris 9 cluster running cluster 3.1...
Thanks again, -
Hello,
since Sunday I'm unable to mount NFS shares:
mount.nfs: No such device
The server-side is working fine, I can mount all shares from my FreeBSD Desktop machine.
I'm using netcfg and start rpcbind and nfs-common upon connection before mounting NFS shares (via netfs). Is this maybe related to some recent pacman updates? It was working flawlessly until Sunday.
As it turns out, it now works. I did load the nfs module manually during my troubleshooting, but it was already loaded or built into the kernel or whatever.
The thing that made it work is changing the nfs mount lines in /etc/fstab from the hostname of the server to the ip address of the server. I don't know why that worked on both machines since I could ping the hostname of the nfs server which is a Freenas server and it always worked before.
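The change described above, sketched as fstab lines (the server IP and export path here are made-up placeholders for the Freenas box):

```
# /etc/fstab: before (hostname) and after (IP address)
# freenas:/mnt/data     /mnt/data  nfs  defaults  0 0
192.168.1.10:/mnt/data  /mnt/data  nfs  defaults  0 0
```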
@ jasonwryan
rc.d start rpcbind && rc.d start nfs-common
start fine after being stopped and restarted. Have you replaced portmap with rpcbind in pacman? rpcbind superseded portmap a while back. gl.
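A quick way to check which port mapper is actually running (a sketch only; service and process names vary by distro):

```shell
# Report whichever port mapper is present; NFS mounts need one of them
if pgrep -x rpcbind >/dev/null 2>&1; then
    echo "rpcbind is running"
elif pgrep -x portmap >/dev/null 2>&1; then
    echo "legacy portmap is running (rpcbind superseded it)"
else
    echo "no port mapper running; NFS mounts will fail"
fi
```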
@.:B:.
lol, snide remark successfully detected. In my defense I was half guessing and half sniding (or some percentage thereof). I have to admit I do get a bit snippy over this since nfs is necessary for my little clients to run mpd and I gets a bit cranky when I gots no musics! Fueling my frustration, it seems I have to chase down nfs problems frequently after "pacman -Syu".
AlwaysOn Cluster reboot due to file share witness unavailability
Hi Team,
Has anyone come across this scenario in an AlwaysOn Availability Group (two node): the file share witness times out, RHS terminates, and the cluster node reboots. The file share witness is there for continuous failover, and if the resource is unavailable my expectation was that it should go offline and not impact the server or SQL Server. But instead it reboots the cluster node to rectify the issue.
Configuration
Windows Server 2012 R2 (VMs) - two node, file share witness (nfs)
Sql Server 2012 SP2
Errors
A component on the server did not respond in a timely fashion. This caused the cluster resource 'File Share Witness' (resource type 'File Share Witness', DLL 'clusres2.dll') to
exceed its time-out threshold. As part of cluster health detection, recovery actions will be taken. The cluster will try to automatically recover by terminating and restarting the Resource Hosting Subsystem (RHS) process that is running this resource. Verify
that the underlying infrastructure (such as storage, networking, or services) that are associated with the resource are functioning correctly.
The cluster Resource Hosting Subsystem (RHS) process was terminated and will be restarted. This is typically associated with cluster health detection and recovery of a resource.
Refer to the System event log to determine which resource and resource DLL is causing the issue.
Thanks,
-SreejitG
Thanks Elden. We were using the DFS name for the file share! We gave the actual file share name and it looks good now.
A few interesting facts: the failure happens exactly in the window between 12:30 PM and 1:30 AM, and it never recorded any error specific to DFS! Not sure if there was any daily maintenance or task during that window pertaining to DFS.
+ DFS is not supported or recommended by Microsoft
Do not use a file share that is part of a Distributed File System (DFS) Namespace.
https://technet.microsoft.com/en-us/library/cc770620%28v=ws.10%29.aspx?f=255&MSPPError=-2147217396
Thanks,
-SreejitG -
Greetings.
I am in the process of changing a system over to Solaris 9 (9/04) from Solaris 8 (we cannot move to Solaris 10 due to ClearCase incompatibilities).
We use flash archives in our jumpstart process. The master system is created using a very Spartan profile (SUNWCreq with a number of other required packages). There are also a number of additional tweaks made to the master system to stop unrequired services, daemons, etc. The only additional patches installed are the Java cluster patches.
I have been able to successfully jumpstart the jumpstart server host from CD. However any attempts to jumpstart other clients using the jumpstart server have failed. I suspect that it is related to the inability to copy the sysidcfg file during the jumpstart process.
The address for the jumpstart server is 10.1.1.1; the hostname is n1; the MAC address is 0:3:ba:35:80:88
The address for the jumpstart client is 10.1.1.34; the hostname is n34; the MAC address is 0:3:ba:14:c6:cd.
On the jumpstart server some of the relevant files are included below.
/etc/bootparams:
n34 root=n1:/jumpstart/OS/Solaris_9_2004-09/Solaris_9/Tools/Boot install=n1:/jumpstart/OS/Solaris_9_2004-09 boottype=:in sysid_config=n1:/jumpstart/Sysidcfg install_config=n1:/jumpstart rootopts=:rsize=32768
/etc/hosts:
# Internet host table
127.0.0.1 localhost
10.1.1.1 n1 oam1a loghost
10.1.1.2 n2 db1a
10.1.1.34 n34
/etc/ethers:
0:3:ba:14:c6:cd n34
/etc/dfs/dfstab:
share -F nfs -o ro,anon=0 /jumpstart
share -F nfs -o ro,anon=0 /jumpstart/OS/Solaris_9_2004-09
/tftpboot directory:
lrwxrwxrwx 1 root root 26 Aug 8 10:36 0A010122 -> inetboot.SUN4U.Solaris_9-1
lrwxrwxrwx 1 root root 26 Aug 8 10:36 0A010122.SUN4U -> inetboot.SUN4U.Solaris_9-1
-rwxr-xr-x 1 root root 152376 Aug 8 10:36 inetboot.SUN4U.Solaris_9-1
-rw-r--r-- 1 root root 313 Aug 8 10:36 rm.10.1.1.34
ls -l /jumpstart/Sysidcfg/sysidcfg:
-rw-r--r-- 1 root root 375 Aug 4 17:12 /jumpstart/Sysidcfg/sysidcfg
/jumpstart/Sysidcfg/sysidcfg:
system_locale=en_AU
timezone=Australia/NSW
name_service=none
root_password=<removed for this post>
terminal=xterm
network_interface=primary { protocol_ipv6=no netmask=255.255.240.0 default_route
=10.1.0.1 }
timeserver=localhost
timeserver=47.153.235.110
Once the install is started on the client the following output is generated (note the sysidcfg copy failure):
ok boot net - install
Res
LOM event: +21h32m35s host reset
etting ...
Netra 120 (UltraSPARC-IIe 648MHz), No Keyboard
OpenBoot 4.0, 1024 MB memory installed, Serial #51693261.
Ethernet address 0:3:ba:14:c6:cd, Host ID: 8314c6cd.
Executing last command: boot net - install
Boot device: /pci@1f,0/pci@1,1/network@c,1 File and args: - install
SunOS Release 5.9 Version Generic_117171-07 64-bit
Copyright 1983-2003 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
whoami: no domain name
Configuring /dev and /devices
Using RPC Bootparams for network configuration information.
Skipping interface eri1
Configured interface eri0
Searching for configuration file(s)...
cp: cannot create /etc/sysidcfg: Permission denied
chmod: WARNING: can't change /etc/sysidcfg
Using sysid configuration file 10.1.1.1:/jumpstart/Sysidcfg/sysidcfg
Search complete.
WARNING: IP: Hardware address '00:03:ba:35:80:88' trying to be our address 010.001.001.001!
WARNING: IP: Hardware address '00:03:ba:35:80:88' trying to be our address 010.001.001.001!
The IP address conflict is with the jumpstart server. The address for the jumpstart server is 10.1.1.1. The warning message is generated for a number of minutes after it starts. I figure that it is caused by the inability to copy the sysidcfg file.
This same system can be installed successfully using a Solaris 8 jumpstart configuration.
Note that the client system is currently installed with Solaris 8.
I did use the command "boot net -v - install" from the OK prompt, but no additional information was provided relating to when the sysidcfg file could not be copied.
If anyone has any ideas about what could be causing this problem or has any information about additional debugging which could be used to figure out this issue, I would greatly appreciate your thoughts.
Thanks in advance.
Cheers,
Jason.
Ideas... Hmm, none that seem quite right, but you could try some things.
If it gets the wrong IP that could explain why it fails to copy the sysidcfg file.
First you could try and do a snoop on the ethernet address;
snoop ether 0:3:ba:14:c6:cd
(you could also try the -v flag to increase the verbosity).
The things you should look for are arp/rarp requests. The jumpstart client will use rarp to determine its IP address; snoop will show you which server responds and what address it gets. Furthermore, it's a good idea to verify that the response to the bootparams requests comes from the correct server.
Of course there might be other oddnesses as well.
Was the data you provided extracts? If so, you should check /etc/ethers and /etc/bootparams for duplicate entries: make sure there are no other occurrences of the client's ethernet address in /etc/ethers, and no bogus entries starting with * or with the same hostname in /etc/bootparams.
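The duplicate check can be sketched with standard tools; the sample data below is made up to show the idea (on the real server you would feed it /etc/ethers):

```shell
# Print hostnames that appear more than once in ethers-style "MAC host" lines
printf '0:3:ba:14:c6:cd n34\n0:3:ba:99:99:99 n34\n8:0:20:aa:bb:cc n35\n' |
    awk '{print $2}' | sort | uniq -d
```

Any hostname printed (here, n34) has a duplicate entry that should be cleaned up.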
If you added the client manually, you could always try the Tools/rm_install_client and Tools/add_install_client scripts to add it again; these scripts sometimes detect problems with the configuration.
Lastly, you didn't show us your /etc/nsswitch.conf file, but I assume it has "files" first for the ethers, hosts and bootparams entries?
Good luck
//Magnus -
OVM 3.3.1: NFS storage is not available during repository creation
Hi, I have OVM Manager running on a separate machine, managing 3 servers running OVM Server in a server pool. One of the servers also exports an NFS share that all the other machines are able to mount and read/write to. I want to use this NFS share to create an OVM repository, but so far I have been unable to get it to work.
From this first screen shot we can see that the NFS file system was successfully added under storage tab and refreshed.
https://www.dropbox.com/s/fyscj2oynud542k/Screenshot%202014-10-11%2013.40.00.png?dl=0
But it is not available when adding a repository, as shown below. What can I do to make it show up here?
https://www.dropbox.com/s/id1eey08cdbajsg/Screenshot%202014-10-11%2013.40.19.png?dl=0
No luck with CLI either. Any thoughts?
OVM> create repository name=myrepo fileSystem="share:/" sharepath=myrepo
Configurable attribute by this name can't be found.
== NFS file system refreshed via CLI ===
OVM> refresh fileServer name=share
Command: refresh fileServer name=share
Status: Success
Time: 2014-10-11 13:28:14,811 PDT
JobId: 1413059293069
== file system info
OVM> show fileServer name=share
Command: show fileServer name=share
Status: Success
Time: 2014-10-11 13:28:28,770 PDT
Data:
FileSystem 1 = ff5d21be-906d-4388-98a2-08cb9ac59b43 [share]
FileServer Type = Network
Storage Plug-in = oracle.generic.NFSPlugin.GenericNFSPlugin (1.1.0) [Oracle Generic Network File System]
Access Host = 1.2.3.4
Admin Server 1 = 44:45:4c:4c:46:00:10:31:80:51:c6:c0:4f:35:48:31 [dev1]
Refresh Server 1 = 44:45:4c:4c:46:00:10:31:80:51:c6:c0:4f:35:48:31 [dev1]
Refresh Server 2 = 44:45:4c:4c:47:00:10:31:80:51:b8:c0:4f:35:48:31 [dev2]
Refresh Server 3 = 44:45:4c:4c:33:00:10:34:80:38:c4:c0:4f:53:4b:31 [dev3]
UniformExports = Yes
Id = 0004fb0000090000fb2cf8ac1968505e [share]
Name = share
Description = NFS exported /dev/sda1 (427GB) on dev1
Locked = false
== version details ==
OVM server:3.3.1-1065
Agent Version: 3.3.1-276.el6.7
Kernel Release: 3.8.13-26.4.2.el6uek.x86_64
Oracle VM Manager
Version: 3.3.1.1065
Build: 20140619_1065
Actually, OVM, as with all virtualization servers, is usually only the head of a comprehensive infrastructure. OVM seems quite easy at the start, but I'd suggest that you at least skim through the admin manual to get some understanding of the concepts behind it. OVS usually only provides the CPU horsepower, not the storage, unless you only want a single-server setup. If you plan on having a real multi-server setup, then you will need shared storage.
The shared storage for the server pool, as well as the storage repository, can be served from the same NFS server without issues. If you want to have a little testbed, then NFS is for you. It lacks some features that OCFS2 benefits from, like thin provisioning, reflinks and sparse files.
If you want to remove the NFS storage, then you'll need to remove any remainders of any OVM object, like storage repositories or server pool filesystems. Unpresent the storage repo and delete it afterwards. Also, I hope that you didn't create the NFS export directly on the root of the drive, since OVM wants to remove every file on the NFS export, and at the root of any volume there's the lost+found folder, which OVM, naturally, can't remove. Getting rid of such a storage repo can be a bit daunting.
Cheers,
budy -
Problem in accessing NFS using WebNFS.
Hi,
We are trying to access an NFS area from our web application, which is deployed in Weblogic server, by use of the WebNFS API.
Both our Weblogic server and the NFS area are on a Solaris box (5.8 release).
For NFS configuration the entries made to the config files as
/etc/dfs/sharetab
/ics_data - nfs root=anon
/etc/dfs/dfstab
share -F nfs -o root=anon /ics_data
For reference, the following commands do list the exported file system as
> df -k | grep ics
/dev/dsk/c1t13d0s6 1026367 12 964773 1% /ics_data
> /usr/sbin/showmount -e
export list for sunbom4:
/ics_data (everyone)
Also the nfs daemons are running
> ps -ef | grep nfs
root 9599 1 0 16:05:58 ? 0:00 /usr/lib/nfs/mountd
root 9601 1 0 16:05:58 ? 0:00 /usr/lib/nfs/nfsd -a 16
root 25505 1 0 15:14:25 ? 0:00 /usr/lib/nfs/lockd
daemon 25506 1 0 15:14:25 ? 0:00 /usr/lib/nfs/statd
Our java code as follows
XFile xfd = new XFile("nfs://[IPAddress]:2049//ics_data");
System.out.println("xfd.exists() = " + xfd.exists());
XFile xfd1 = new XFile("nfs://[IPAddress]:2049//ics_data/testFile.txt");
System.out.println("xfd1.exists() = " + xfd1.exists());
The output is
xfd.exists() = false
xfd1.exists() = false
We have confirmed the nfs port by
cat /etc/services | grep nfs
nfsd 2049/udp nfs # NFS server daemon (clts)
nfsd 2049/tcp nfs # NFS server daemon (cots)
Though the file actually exists at the specified location, we are not able to get it using XFile and an NFS URL. Kindly advise if we are missing something somewhere or taking a wrong approach.
We have also tried to associate the 'public' file handle with the shared file system by changing the entry in the /etc/dfs/dfstab file to
share -F nfs -o root=anon,public,log /ics_data
And our java code as
XFile xfd = new XFile("nfs://[IPAddress]:2049");
System.out.println("xfd.exists() = " + xfd.exists());
XFile xfd1 = new XFile("nfs://[IPAddress]:2049//testFile.txt");
System.out.println("xfd1.exists() = " + xfd1.exists());
But the same problem still persists.
Can any one please help us out to identify the problem?
Message was edited by:
Amit.Pol -
Hi all,
I am testing HA-NFS(Failover) on two node cluster. I have sun fire v240 ,e250 and Netra st a1000/d1000 storage. I have installed Solaris 10 update 6 and cluster packages on both nodes.
I have created one global file system (/dev/did/dsk/d4s7) and mounted as /global/nfs. This file system is accessible form both the nodes. I have configured ha-nfs according to the document, Sun Cluster Data Service for NFS Guide for Solaris, using command line interface.
The logical host pings from the NFS client, and I have mounted the share there using the logical hostname. For testing purposes I took one machine down. After this step the file system gives I/O errors (server and client), and when I run df it shows
df: cannot statvfs /global/nfs: I/O error.
I have configured with following commands.
#clnode status
# mkdir -p /global/nfs
# clresourcegroup create -n test1,test2 -p Pathprefix=/global/nfs rg-nfs
I have added logical hostname,ip address in /etc/hosts
I have commented hosts and rpc lines in /etc/nsswitch.conf
# clreslogicalhostname create -g rg-nfs -h ha-host-1 -N
sc_ipmp0@test1, sc_ipmp0@test2 ha-host-1
# mkdir /global/nfs/SUNW.nfs
Created one file called dfstab.user-home in /global/nfs/SUNW.nfs and that file contains follwing line
share -F nfs –o rw /global/nfs
# clresourcetype register SUNW.nfs
# clresource create -g rg-nfs -t SUNW.nfs ; user-home
# clresourcegroup online -M rg-nfs
Where did I go wrong? Can anyone provide a document on this?
Any help..?
Thanks in advance.
test1# tail -20 /var/adm/messages
Feb 28 22:28:54 testlab5 Cluster.SMF.DR: [ID 344672 daemon.error] Unable to open door descriptor /var/run/rgmd_receptionist_door
Feb 28 22:28:54 testlab5 Cluster.SMF.DR: [ID 801855 daemon.error]
Feb 28 22:28:54 testlab5 Error in scha_cluster_get
Feb 28 22:28:54 testlab5 Cluster.scdpmd: [ID 489913 daemon.notice] The state of the path to device: /dev/did/rdsk/d5s0 has changed to OK
Feb 28 22:28:54 testlab5 Cluster.scdpmd: [ID 489913 daemon.notice] The state of the path to device: /dev/did/rdsk/d6s0 has changed to OK
Feb 28 22:28:58 testlab5 svc.startd[8]: [ID 652011 daemon.warning] svc:/system/cluster/scsymon-srv:default: Method "/usr/cluster/lib/svc/method/svc_scsymon_srv start" failed with exit status 96.
Feb 28 22:28:58 testlab5 svc.startd[8]: [ID 748625 daemon.error] system/cluster/scsymon-srv:default misconfigured: transitioned to maintenance (see 'svcs -xv' for details)
Feb 28 22:29:23 testlab5 Cluster.RGM.rgmd: [ID 537175 daemon.notice] CMM: Node e250 (nodeid: 1, incarnation #: 1235752006) has become reachable.
Feb 28 22:29:23 testlab5 Cluster.RGM.rgmd: [ID 525628 daemon.notice] CMM: Cluster has reached quorum.
Feb 28 22:29:23 testlab5 Cluster.RGM.rgmd: [ID 377347 daemon.notice] CMM: Node e250 (nodeid = 1) is up; new incarnation number = 1235752006.
Feb 28 22:29:23 testlab5 Cluster.RGM.rgmd: [ID 377347 daemon.notice] CMM: Node testlab5 (nodeid = 2) is up; new incarnation number = 1235840337.
Feb 28 22:37:15 testlab5 Cluster.CCR: [ID 499775 daemon.notice] resource group rg-nfs added.
Feb 28 22:39:05 testlab5 Cluster.RGM.rgmd: [ID 375444 daemon.notice] 8 fe_rpc_command: cmd_type(enum):<5>:cmd=<null>:tag=<>: Calling security_clnt_connect(..., host=<testlab5>, sec_type {0:WEAK, 1:STRONG, 2:DES} =<1>, ...)
Feb 28 22:39:05 testlab5 Cluster.CCR: [ID 491081 daemon.notice] resource ha-host-1 removed.
Feb 28 22:39:17 testlab5 Cluster.RGM.rgmd: [ID 375444 daemon.notice] 8 fe_rpc_command: cmd_type(enum):<5>:cmd=<null>:tag=<>: Calling security_clnt_connect(..., host=<testlab5>, sec_type {0:WEAK, 1:STRONG, 2:DES} =<1>, ...)
Feb 28 22:39:17 testlab5 Cluster.CCR: [ID 254131 daemon.notice] resource group nfs-rg removed.
Feb 28 22:39:30 testlab5 Cluster.RGM.rgmd: [ID 224900 daemon.notice] launching method <hafoip_validate> for resource <ha-host-1>, resource group <rg-nfs>, node <testlab5>, timeout <300> seconds
Feb 28 22:39:30 testlab5 Cluster.RGM.rgmd: [ID 375444 daemon.notice] 8 fe_rpc_command: cmd_type(enum):<1>:cmd=</usr/cluster/lib/rgm/rt/hafoip/hafoip_validate>:tag=<rg-nfs.ha-host-1.2>: Calling security_clnt_connect(..., host=<testlab5>, sec_type {0:WEAK, 1:STRONG, 2:DES} =<1>, ...)
Feb 28 22:39:30 testlab5 Cluster.RGM.rgmd: [ID 515159 daemon.notice] method <hafoip_validate> completed successfully for resource <ha-host-1>, resource group <rg-nfs>, node <testlab5>, time used: 0% of timeout <300 seconds>
Feb 28 22:39:30 testlab5 Cluster.CCR: [ID 973933 daemon.notice] resource ha-host-1 added. -
I am trying to export a share using NFS in Workgroup Manager. I have what I think should work but the client behaves as if there is nothing being exported.
I wanted to check /etc/exports to make sure WGM had done things sanely but I find that it doesn't exist! Apple's documentation and man pages make reference to this file. I'm comfortable creating it 'by hand' but I'd like to know what OS X did with the stuff I set up in WGM first!
thanks,
sean
"showmount" is a generic NFS tool. It's not tied to NetInfo at all. Pretty much the only relation of NetInfo to the NFS server is the aforementioned behavior of the NFS server getting exports from NetInfo if the /etc/exports file didn't exist.
In general, when debugging NFS issues you'll always want to check /var/log/system.log.
Other useful tools include:
showmount -e
rpcinfo -p
And of course, wireshark when you need to dissect the packets on the wire.
Note that showmount and rpcinfo can be run from the NFS client too; you just need to include the server's name or IP address as an extra argument.
New in 10.5 are some helpful sub-commands included as part of the "nfsd" command:
nfsd -v status
nfsd checkexports
All of these tools have man pages that you can dig into if you need to.
HTH
--macko