NFS double share impossible?

Hello, let me introduce myself: I am 24 years old and want to learn many things about Linux, projects and development, because it is really powerful and this is how I see computing in the future.
Now my problem: I should say that I have read a lot of documentation about NFSv4, and the basic setup works (I don't have any write-permission problems or anything else):
computer1 /etc/exports : /mymountcomputer1 10.40.0.0/255.255.0.0(rw,fsid=0,insecure,nohide,no_subtree_check,no_acl,all_squash,sync)
computer1 mount command : sudo mount -t nfs4 computer2:/ /mymountcomputer1
computer2 /etc/exports : /mymountcomputer2 10.40.0.0/255.255.0.0(rw,fsid=0,insecure,nohide,no_subtree_check,no_acl,all_squash,sync)
computer2 mount command : sudo mount -t nfs4 computer1:/ /mymountcomputer2
/mymountcomputer1 == /mymountcomputer2 (I should be able to see the same content from both directions)
When I mount only one of them it works; when I mount both at the same time, nothing works.
Many thanks for any help with this situation.
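For reference, here is the kind of quick check that shows whether each export is actually visible from the other machine (just a sketch, reusing the hostnames above):
# re-read /etc/exports and show the active export table on each server
sudo exportfs -ra
sudo exportfs -v
# from the other machine, list what each server exports
showmount -e computer1
showmount -e computer2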

Hello, and thanks for helping with this problem.
I'll try to explain it better, and sorry for confusing everybody here.
I have a server with one folder per user (/prod/eminem, /prod/u2, /prod/jayz).
1) Each user mounts the "root" shared folder of the server (he sees /prod/eminem, /prod/u2, /prod/jayz under /share/server, so the paths are /share/server/eminem, /share/server/u2, /share/server/jayz).
2) The server mounts a folder that lives on the user's local hard drive (for example, the computer of jayz shares /home/photos/friday_night_with_snoop dogg).
So the server mounts each user's local folder, and each client mounts the server's root folder. Understood? (A quick sketch of both sides follows.)
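To make that concrete, here is a minimal sketch of the two sides; the hostnames (server, jayz), paths and export options are assumptions for illustration, not my exact files, and the details depend on the nfs-utils version:
# on the server (/etc/exports): export the per-user tree
/prod  10.40.0.0/255.255.0.0(rw,fsid=0,no_subtree_check,sync)
# on jayz's computer (/etc/exports): export his local photo folder
/home/photos  10.40.0.0/255.255.0.0(rw,no_subtree_check,sync)
# jayz (client) mounts the server's root
sudo mount -t nfs4 server:/ /share/server
# the server mounts jayz's folder into the tree it exports
sudo mount -t nfs4 jayz:/home/photos /prod/jayz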
Now:
As I see it, on the server you should see /home/photos/friday_night_with_snoop dogg under /prod/jayz >>> it works, no problem!
Then:
The problem is on the computer of jayz (the client): /share/server/jayz, which should show the same content as /home/photos/friday_night_with_snoop dogg, shows me nothing at all. (This is my problem.)
So I concluded that this may simply not be possible with NFS, did it with sshfs instead, and that worked.
I have to do it this way because the photos of jay-z are very large, I can't buy more storage, and I need something faster than the current solution.
My question: is there another solution that makes this work without using sshfs? Or have you ever made this layout work with high performance?
By the way, with sshfs: when the client computer reboots or shuts down, u2 and eminem can't work because the sshfs process is blocked and I always need to kill it.
Bonus question: can I write a .service unit that acts as a listener between the two computers (client and server), scripted in bash? If yes, how would you do it, with sshfs or any other solution? (A rough sketch of what I mean is at the end of this message.)
Many thanks
All the best to all.
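For the bonus question, this is roughly the kind of unit I have in mind, assuming systemd and sshfs; the unit name, hostname and paths are only placeholders, not a tested setup:
# /etc/systemd/system/jayz-photos.service  (hypothetical name and paths)
[Unit]
Description=Keep the jayz photo folder mounted on the server via sshfs
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
# -f keeps sshfs in the foreground so systemd can supervise it;
# reconnect and ServerAliveInterval help it recover when the client reboots
ExecStart=/usr/bin/sshfs -f -o reconnect,ServerAliveInterval=15 jayz:/home/photos /prod/jayz
ExecStop=/bin/fusermount -u /prod/jayz
Restart=on-failure
RestartSec=10
[Install]
WantedBy=multi-user.target
Enabled with "systemctl enable --now jayz-photos.service", systemd would restart the mount whenever it drops, which is more or less the "listener" behaviour I am after.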

Similar Messages

  • Accessing NFS mounted share in Finder no longer works in 10.5.3+

    I have set up an automounted NFS share previously with Leopard against a RHEL 5 server at the office. I had to jump through a few hoops to punch a hole through the appfirewall to get the share accessible in the Finder.
    A few months later, when I returned to the office after a consultancy stint and upgrades to 10.5.3 and 10.5.4, the NFS mount no longer works. I have investigated it today and I can't get it to run even with the appfirewall disabled.
    I've been doing some troubleshooting, and the interaction between statd, lockd and perhaps portmap seems a bit fishy, even with the appfirewall disabled. Both statd and lockd complain that they cannot register; lockd once and statd indefinitely.
    Jul 2 15:17:10 ySubmarine com.apple.statd[521]: rpc.statd: unable to register (SM_PROG, SM_VERS, UDP)
    Jul 2 15:17:10 ySubmarine com.apple.launchd[1] (com.apple.statd[521]): Exited with exit code: 1
    Jul 2 15:17:10 ySubmarine com.apple.launchd[1] (com.apple.statd): Throttling respawn: Will start in 10 seconds
    ... and rpcinfo -p gets connection refused unless I start portmap using the launchctl utility.
    This may be a bit obscure, and I'm not exactly an expert of NFS, so I wonder if someone else stumbled across this, and can point me in the right direction?
    Johan

    Sorry for my late response, but I have finally got around to some trial and error. I can mount the share using mount_nfs (but need to use sudo), and it shows up as a mounted disk in the Finder. However, when I start to browse a directory on the share that I can write to, I end up with the lockd and statd failures.
    $ mount_nfs -o resvport xxxx:/home /Users/yyyy/xxxx-home
    mount_nfs: /Users/yyyy/xxxx-home: Permission denied
    $ sudo mount_nfs -o resvport xxxx:/home /Users/yyyy/xxxx-home
    Jul 7 10:37:34 zzzz com.apple.statd[253]: rpc.statd: unable to register (SM_PROG, SM_VERS, UDP)
    Jul 7 10:37:34 zzzz com.apple.launchd[1] (com.apple.statd[253]): Exited with exit code: 1
    Jul 7 10:37:34 zzzz com.apple.launchd[1] (com.apple.statd): Throttling respawn: Will start in 10 seconds
    Jul 7 10:37:44 zzzz com.apple.statd[254]: rpc.statd: unable to register (SM_PROG, SM_VERS, UDP)
    Jul 7 10:37:44 zzzz com.apple.launchd[1] (com.apple.statd[254]): Exited with exit code: 1
    Jul 7 10:37:44 zzzz com.apple.launchd[1] (com.apple.statd): Throttling respawn: Will start in 10 seconds
    Jul 7 10:37:54 zzzz com.apple.statd[255]: rpc.statd: unable to register (SM_PROG, SM_VERS, UDP)
    Jul 7 10:37:54 zzzz com.apple.launchd[1] (com.apple.statd[255]): Exited with exit code: 1
    Jul 7 10:37:54 zzzz com.apple.launchd[1] (com.apple.statd): Throttling respawn: Will start in 10 seconds
    Jul 7 10:37:58 zzzz loginwindow[25]: 1 server now unresponsive
    Jul 7 10:37:59 zzzz KernelEventAgent[26]: tid 00000000 unmounting 1 filesystems
    Jul 7 10:38:02 zzzz com.apple.autofsd[40]: automount: /net updated
    Jul 7 10:38:02 zzzz com.apple.autofsd[40]: automount: /home updated
    Jul 7 10:38:02 zzzz com.apple.autofsd[40]: automount: no unmounts
    Jul 7 10:38:02 zzzz loginwindow[25]: No servers unresponsive
    ... and firewall wide open.
    I guess that the Finder somehow triggers file locking over NFS.

  • DNS Fails for NFS Server Shares

    When I boot, I get a message that DNS has failed for the NFS server mounts, and the shares do not mount. The message says, "mount.nfs: DNS resolution failed for server: name or service unknown." I have to mount the shares myself. Then when rebooting, I get the same error saying it can't unmount the shares.
    this is /etc/resolv.conf:
    $ cat /etc/resolv.conf
    # Generated by dhcpcd from eth0
    # /etc/resolv.conf.head can replace this line
    nameserver 208.67.222.222
    nameserver 208.67.220.220
    # /etc/resolv.conf.tail can replace this line
    this is /etc/conf.d/nfs:
    # Number of servers to be started up by default
    NFSD_OPTS=8
    # Options to pass to rpc.mountd
    # e.g. MOUNTDOPTS="-p 32767"
    MOUNTD_OPTS="--no-nfs-version 1 --no-nfs-version 2"
    # Options to pass to rpc.statd
    # N.B. statd normally runs on both client and server, and run-time
    # options should be specified accordingly. Specifically, the Arch
    # NFS init scripts require the --no-notify flag on the server,
    # but not on the client e.g.
    # STATD_OPTS="--no-notify -p 32765 -o 32766" -> server
    # STATD_OPTS="-p 32765 -o 32766" -> client
    STATD_OPTS=""
    # Options to pass to sm-notify
    # e.g. SMNOTIFY_OPTS="-p 32764"
    SMNOTIFY_OPTS=""
    Do I need to add some option to rpc.statd, or is there some other misconfiguration there? AFAIK it is the default. What else should I look at to fix this? I can ping the server by name, and log in with ssh by name, just fine. It's only the nfs that is failing with DNS.

    airman99 wrote:
    Yahoo! Good news, I've finally solved the problem on my laptop. It turned out to be merely a network timing issue.
    The error I was receiving was exactly correct and informative. When /etc/rc.d/netfs ran and executed a 'mount -a -t nfs...' the network was indeed NOT reachable. I am running networkmanager, and apparently during bootup, networkmanager gets loaded, but there is a delay between when networkmanager is loaded and when the network is available. In other words, networkmanager allows the boot process to continue before the network is available.
    My daemons are loaded in this order (rc.conf):
    DAEMONS=(syslog-ng hal dhcdbd networkmanager crond cups ntpdate ntpd portmap nfslock netfs)
    Consequently, if I add a delay to /etc/rc.d/netfs to allow time for the network to come up, then when the NFS shares are mounted, the network is up. In my case I had to add a 3 second delay.
    sleep 3
    I'm sure editing the system file /etc/rc.d/netfs isn't the best way to solve the problem, because the next upgrade that changes netfs will overwrite my fix. But I'll keep it until I figure out the "right" fix.
    The real solution is to not load networkmanager in the background, but to force startup to wait for the network to be up before continuing.
    There is the _netdev option you can use in fstab, but that doesn't always work:
    http://linux.die.net/man/8/mount
    _netdev
        The filesystem resides on a device that requires network access (used to prevent the system from attempting to mount these filesystems until the network has been enabled on the system).
    Alternatively, you could just add a cron job to do a mount -a with a sleep 20 in there or something. You might have to play with the sleep value a little to make sure it's long enough.
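    A rough illustration of both suggestions; the server name, export path and sleep value are placeholders:
    # /etc/fstab -- _netdev defers the mount until the network is up
    server:/export/data  /mnt/data  nfs  _netdev,defaults  0  0
    # or, as a crude fallback, retry from root's crontab shortly after boot
    @reboot sleep 20 && mount -a -t nfs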

  • Is installing from a NFS mounted share on Windows supported?

    In order to make our automation use the same structure, we need to put the Siebel Server for Windows software on a Linux box, export it as an NFS share, mount it from Windows, and then do the installation onto the Windows local disk. NFS is not native to Windows; has anyone tried this before, and is it officially supported? Thanks!

    I think so, if I understand your question correctly.
    We have installed our installation software on NFS for the purpose of automating our installation process. Each target server has the same structure and we use this automation for Windows and Linux.
    Don't know if it is supported by Siebel though.

  • Where to set UserID and Password for NFS file share in Central File Adapter

    Hi All,
    Does anyone know if/where it is possible to set the userID and password if you want to connect to an NFS fileshare with the central XI30 file adapter?
    Cheers,
    Frank

    Hi Frank,
    The file adapter uses the user ID SAPSERVICE... with the system ID of your SAP XI system appended. So if your system ID is EXD, the user ID that needs read/write access would be SAPSERVICEEXD.
    Have fun

  • Custom NFS share point directory showing up on all network machines

    Hi,
    I'm in the process of migrating our 10.4 PowerMac server to a Mac Pro (running 10.5). I've been trying to recreate our 10.4 server setup as much as possible and so far I've only come across one annoying issue.
    We have fink installed on the server and under our 10.4 setup the /sw directory was set up as an NFS automounted share point with a custom mount point of '/sw'. I.e. users logging into client machines saw a /sw directory and could work with that. This made it easier to add fink packages as I only needed to do this on one machine (the server). This setup worked very well under 10.4 and had been working stably for the last couple of years.
    As we now have (for another month or two at least) a mix of intel and Power PC machines, I don't want to share out the (intel) server version of fink to all clients. In Server Admin, I have chosen to set the NFS protocol options to specify the IP address of just one client (an intel machine). I am only using NFS to share this directory. The plan is to add more client IP addresses as we get more intel machines.
    This works for the one intel client machine. Logging in via the GUI or via ssh allows you to run programs located under the /sw directory. The problem is that a phantom /sw directory appears on all client machines, even though their IP addresses are not specified in Server Admin. The /sw directory has root/wheel permissions (for user/group) and attempting to list its contents returns 'Operation not permitted' (even with 'sudo ls /sw').
    If I use Directory Utility to remove the connection to the Directory server on our main server, then the /sw directory becomes owned by root/admin and I can remove it (it appears empty). Reconnecting to the Directory Server changes the permissions back to root/wheel. It is also worth noting that when I first installed fink on the server (in /sw) the act of making this a share point also changed the permissions on /sw to root/wheel meaning that I couldn't access the fink programs that I had only just installed (this forced me to reinstall fink in /Volumes/Data/fink).
    Has anyone else noticed this behavior? It almost seems like Server Admin is not honoring the list of IP addresses that are being listed as targets for client machines. I had planned to install fink locally on the PowerPC clients until we upgrade them to intel machines. However, I would then also have to install fink somewhere other than /sw as I can't write to that directory. I would presume that this behavior should happen on any NFS share point that is trying to automount to a custom mount point on a client. Can anyone else verify this?
    Regards,
    Keith

    As a footnote. I have now removed my shared fink installation. It is no longer listed as an NFS sharepoint in Server Admin and running the 'showmount -e' command does not list it. However, a /sw directory is still being created on the server and on the client machines on our network.
    This is perplexing and frustrating. There is no sharepoint any more. I rebooted the server but it makes no difference. I removed the /sw directory (on the server) by booting the machine in target firewire mode and removing it by using a 2nd machine. But following the restart, it appeared again.
    This suggests that once you make any custom mountpoint (using NFS sharing) then you will forever be stuck with a directory at the root level of all your clients which you can not remove.
    Keith

  • Ufsdump to windows nfs share directory

    Hello all,
    I want to ufsdump my Sun Solaris root file system to a Windows server NFS directory. Please tell me the exact command to do that.
    I am using the following command in single-user mode:
    ->ufsdump 0f / 192.xx.xx.xx:/D/ufsdump.
    When I try to use this I get a broken pipe error. Please tell me the exact command.
    The NFS directory share is /D/ufsdump and the server name is Backups (192.xx.xx.xx).
    thanks,
    vinay

    vk_sun wrote:
    I am using following command in single user mod
    ->ufsdump 0f / 192.xx.xx.xx:/D/ufsdump.
    #1 Your arguments are backward. With the options '0f', the next argument needs to be the device you are dumping to, not the device you are backing up. So the order is wrong.
    #2 192.xx.xx.xx:/D/ufsdump isn't a valid filename (at least not for you). You must first mount the remote filesystem to a spot on your local filesystem. Then you send the data to a file within that mount.
    Darren
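    A minimal sketch of the order Darren describes (the mount point and dump file name are made up):
    # first mount the Windows server's NFS share somewhere local
    mkdir -p /mnt/backup
    mount -F nfs 192.xx.xx.xx:/D/ufsdump /mnt/backup
    # then dump the root filesystem to a file inside that mount;
    # with '0f', the dump target comes before the filesystem being dumped
    ufsdump 0f /mnt/backup/root.dump /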

  • Synology NAS: no NFS support for encrypted folders / alternatives ?

    Dear all,
    I recently bought a Synology DS710+ NAS. It comes with DSM 2.3, and I am a bit disappointed to notice that encrypted shared folders cannot be exported using NFS. This is a problem since I need uid/gid and file permissions to be fully preserved, and that is not the case with CIFS or AFP.
    Why such a limitation? Can you think of reliable alternatives?
    Cheers,
    Aurélien.

    Might not be helpful since I know NOTHING about Synology devices...
    A friend of mine managed to make his Synology setup sweet, and I believe he customized NFS/network shares.
    http://befreely.blogspot.com/2010/04/nas-setup.html
    https://sites.google.com/a/befreely.dyn … x/synology
    Let me know if it helps!
    Last edited by anthonyclark (2010-09-24 01:11:17)

  • NFS not cooperating after portmap to rpcbind migration

    So yes, this is bugging me. I am starting to miss my series, and if I don't get this fixed quickly I'll have to go through a full detox.
    So I call upon you for assistance!
    As per this post on the frontpage, I replaced portmap with rpcbind and fixed my rc.conf (and the services script that I run after mounting my data partition, which requires user intervention, too), but no dice.
    Here's what I got:
    /etc/exports
    /var/data/series 10.0.0.20(ro,async,no_root_squash,subtree_check)
    /var/data/films 10.0.0.20(ro,async,no_root_squash,subtree_check)
    /etc/hosts.allow
    # /etc/hosts.allow
    # SSH access (open to the world since it's fortified anyway, right?)
    sshd: ALL
    vsftpd: ALL
    #lighttpd: 192.168.1.
    #mysqld : 192.168.1.
    nfsd: 10.0.0.20
    #lockd: 192.168.1.2
    #rquotad: 192.168.1.2
    rpcbind: 10.0.0.20
    rpc.mountd: 10.0.0.20
    rpc.statd: 10.0.0.20
    # End of file
    I really should throw out nfsd, since there's no such thing anymore. I've checked netstat for stuff listening worldwide (sounds fancy, eh) but other than those three rpc* services I could not find anything related (I disabled the idmapd stuff, or whatever that is, since the comments said only NFSv4 needs it).
    /etc/conf.d/nfs-common
    # Parameters to be passed to nfs-common (nfs clients & server) init script.
    # If you do not set values for the NEED_ options, they will be attempted
    # autodetected; this should be sufficient for most people. Valid alternatives
    # for the NEED_ options are "yes" and "no".
    # Do you want to start the statd daemon? It is not needed for NFSv4.
    NEED_STATD=
    # Options to pass to rpc.statd.
    # See rpc.statd(8) for more details.
    # N.B. statd normally runs on both client and server, and run-time
    # options should be specified accordingly. Specifically, the Arch
    # NFS init scripts require the --no-notify flag on the server,
    # but not on the client e.g.
    # STATD_OPTS="--no-notify -p 32765 -o 32766" -> server
    # STATD_OPTS="-p 32765 -o 32766" -> client
    STATD_OPTS="--no-notify -p 32765 -o 32766"
    # Options to pass to sm-notify
    # e.g. SMNOTIFY_OPTS="-p 32764"
    SMNOTIFY_OPTS="-p 32764"
    # Do you want to start the idmapd daemon? It is only needed for NFSv4.
    NEED_IDMAPD=no
    # Options to pass to rpc.idmapd.
    # See rpc.idmapd(8) for more details.
    IDMAPD_OPTS=
    # Do you want to start the gssd daemon? It is required for Kerberos mounts.
    NEED_GSSD=
    # Options to pass to rpc.gssd.
    # See rpc.gssd(8) for more details.
    GSSD_OPTS=
    # Where to mount rpc_pipefs filesystem; the default is "/var/lib/nfs/rpc_pipefs".
    PIPEFS_MOUNTPOINT=
    # Options used to mount rpc_pipefs filesystem; the default is "defaults".
    PIPEFS_MOUNTOPTS=
    /etc/conf.d/nfs-server
    # Parameters to be passed to nfs-server init script.
    # Options to pass to rpc.nfsd.
    # See rpc.nfsd(8) for more details.
    NFSD_OPTS=
    # Number of servers to start up; the default is 8 servers.
    NFSD_COUNT="2"
    # Where to mount nfsd filesystem; the default is "/proc/fs/nfsd".
    PROCNFSD_MOUNTPOINT=
    # Options used to mount nfsd filesystem; the default is "rw,nodev,noexec,nosuid".
    PROCNFSD_MOUNTOPTS=
    # Options for rpc.mountd.
    # If you have a port-based firewall, you might want to set up
    # a fixed port here using the --port option.
    # See rpc.mountd(8) for more details.
    MOUNTD_OPTS="--no-nfs-version 1 --no-nfs-version 2"
    # Do you want to start the svcgssd daemon? It is only required for Kerberos
    # exports. Valid alternatives are "yes" and "no"; the default is "no".
    NEED_SVCGSSD=
    # Options to pass to rpc.svcgssd.
    # See rpc.svcgssd(8) for more details.
    SVCGSSD_OPTS=
    I am launching rpcbind, nfs-common and nfs-server (in that order), and if I may believe the service scripts, everything goes well. However my client, an HDX-1000, does not see any NFS shares at all (they are NFSv3 shares). Several reboots of both the server and the client did not help (after fiddling with services and restarting them too, that is). If anybody has any clues: please!
    As a totally unrelated sidenote: suddenly my HDX-1000 has decided it does like Mediatomb and happily plays back whatever it streams (ever since I bought the bloody damn thing it has been refusing to do so - which is why I set up NFS in the first place). So I don't need to fix this, but just for the sake of it (and because I know how quirky this HDX-1000 is) I'd like to fix it so I have a fallback option in case that shiny metal thing decides to act up again.

    jealma wrote: Add mountd to your hosts.allow.
    I had not yet adapted my hosts.allow until I saw your post, but now I added rpcbind, rpc.mountd and rpc.statd. I also uncommented nfs, portmap and mountd, after which I couldn't mount my NFS share. After uncommenting mountd, I could mount again.
    There is no mountd anymore:
    [root@amalthea stijn]# netstat -puntal
    Active Internet connections (servers and established)
    Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
    tcp 0 0 0.0.0.0:57472 0.0.0.0:* LISTEN 3642/rpc.mountd
    tcp 0 0 0.0.0.0:2049 0.0.0.0:* LISTEN -
    tcp 0 0 0.0.0.0:55555 0.0.0.0:* LISTEN 3577/mediatomb
    tcp 0 0 0.0.0.0:35081 0.0.0.0:* LISTEN -
    tcp 0 0 10.0.0.15:6666 0.0.0.0:* LISTEN 3562/mpd
    tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 3592/rpcbind
    tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 3224/sshd
    tcp 0 0 0.0.0.0:32765 0.0.0.0:* LISTEN 3612/rpc.statd
    tcp 0 960 10.0.0.15:22 10.0.0.5:37903 ESTABLISHED 3547/sshd: stijn [p
    udp 0 0 0.0.0.0:2049 0.0.0.0:* -
    udp 0 0 10.0.0.15:44302 78.47.136.197:123 ESTABLISHED 3222/ntpd
    udp 0 0 0.0.0.0:795 0.0.0.0:* 3592/rpcbind
    udp 0 0 0.0.0.0:54820 0.0.0.0:* 3642/rpc.mountd
    udp 0 0 10.0.0.15:39719 212.68.197.145:123 ESTABLISHED 3222/ntpd
    udp 0 0 0.0.0.0:820 0.0.0.0:* 3612/rpc.statd
    udp 0 0 10.0.0.15:57921 85.158.108.151:123 ESTABLISHED 3222/ntpd
    udp 0 0 127.0.0.1:33730 0.0.0.0:* 3577/mediatomb
    udp 0 0 10.0.0.15:43468 81.246.92.140:123 ESTABLISHED 3222/ntpd
    udp 0 0 0.0.0.0:1900 0.0.0.0:* 3577/mediatomb
    udp 0 0 10.0.0.15:53102 212.3.228.111:123 ESTABLISHED 3222/ntpd
    udp 0 0 0.0.0.0:111 0.0.0.0:* 3592/rpcbind
    udp 0 0 10.0.0.15:39546 193.41.86.177:123 ESTABLISHED 3222/ntpd
    udp 0 0 0.0.0.0:60539 0.0.0.0:* -
    udp 0 0 0.0.0.0:32765 0.0.0.0:* 3612/rpc.statd
    udp 0 0 10.0.0.15:57855 79.99.122.30:123 ESTABLISHED 3222/ntpd
    Checking for rpc and mountd gives these results:
    [root@amalthea stijn]# whereis rpc
    rpc: /usr/sbin/rpc.idmapd /usr/sbin/rpc.statd /usr/sbin/rpc.mountd /usr/sbin/rpc.svcgssd /usr/sbin/rpc.gssd /usr/sbin/rpc.nfsd /etc/rpc /usr/include/rpc /usr/share/man/man3/rpc.3t.gz /usr/share/man/man3/rpc.3.gz /usr/share/man/man5/rpc.5.gz /usr/share/man/man3x/rpc.3t.gz /usr/share/man/man3x/rpc.3.gz
    [root@amalthea stijn]# whereis mountd
    mountd: /usr/share/man/man8/mountd.8.gz
    Tomk: I double-checked again - the hosts.{allow,deny} files are very verbose about their syntax. So I wasn't sure either, but some googling told me I am using the right format:
    daemon_list : client_list [ : shell_command ]
    If you ask me, it would be rather silly for the hosts files to allow/deny connections on such a rough basis. I understood tcp_wrappers to be a basic (yet in some ways configurable) traffic blocker, a bit more low-level than iptables is, for example. Also, if you can only say 'allow all incoming connections from any client' or 'block all connections from all clients', why are there two files then? Wouldn't it be easier just to use one file and set it to yes/no? I might be wrong of course.
    Anyway, I noticed whereis breaks on the dot; maybe it is expanded in the hosts files too (I hope not... that would be pesky). I will try with an ALL: ALL though, to see if that fixes anything.

  • Prevent double click / double submit

    Hi,
    I have a JSP with some input fields and some command buttons within a form. When the user clicks twice on one of the buttons two requests (including the form data) are sent.
    On the server side this results in unpredictable behavior in my business logic. So I decided to prevent these double clicks on buttons. Now I have the following question:
    Are there ADF means to achieve this, or do I have to use JavaScript (which I would prefer not to)?
    Thanks for you help.
    Tom

    Hi,
    another option, if you are using ADF Faces, is to set the "blocking" property of a command button to true, which will make a double click impossible. This, however, only prevents people from accidentally double-submitting a request, not from doing it on purpose. For the latter use case, follow Shay's hint.
    Frank

  • HAStoragePlus NFS for ZFS - nested ZFS mounts

    I have a two node cluster setup to be a HA nfs server for several zpools. Everything is going fine according to all instructions, except I can not seem to find any documents that discuss nested zfs mounts. Here's what I have:
    top-level zfs:
    # zfs list t1qc/img_prod
    NAME            USED  AVAIL  REFER  MOUNTPOINT
    t1qc/img_prod  18.8M  5.88T  1.07M  /t1qc/img_prod
    Descended from that zfs are many other zfs's (i.e., t1qc/img_prod/0000, t1qc/img_prod/0001, ... etc.)
    the top-level is setup under the HAStoragePlus as follows:
    # clresourcegroup create -p PathPrefix=/t1qc/img_prod improdqc1-rg
    # clreslogicalhostname create -g improdqc1-rg -h improdqc1 improdqc1-resource
    # clresource create -g improdqc1-rg -t SUNW.HAStoragePlus -p Zpools=t1qc improdqc1-hastp-resource
    # clresourcegroup online -M improdqc1-rg
    # clresource create -g improdqc1-rg -t SUNW.nfs -p Resource_dependencies=improdqc1-hastp-resource improdqc1-nfs-resource
    # clresourcegroup online -M improdqc1-rg
    Contents of /t1qc/img_prod/SUNW.nfs/dfstab.improdqc1-nfs-resource:
    share -F nfs -o rw -d "t1qc" /t1qc/img_prod
    Jump over to one of my other servers (Linux RHEL5) and mount the exported filesystem:
    # mount -t nfs4 improd1:/t1qc/img_prod /zfs_img_prod
    That works just fine; if I run an ls of that mounted filesystem, I see the listings for the descendant zfs's. However, if I try to access one of those:
    # ls /zfs_img_prod/0000
    ls: reading directory /zfs_img_prod/0000: Input/output error
    Jump over to one of my Solaris 10 servers and mount the exported filesystem:
    # mount -F nfs -o vers=4 improd1:/t1qc/img_prod /zfs_img_prod
    That works just fine; if I run an ls of that mounted filesystem, I see the listings for the descendant zfs's. However, if I try to access one of those:
    # ls /zfs_img_prod/0000
    # (empty listing, even though there are files/directories in that zfs on the server)
    This setup worked great without the cluster, i.e., just shared with zfs. Is this not possible under the cluster or am I missing something?
    Thanks.

    Yes, I've been using NFSv4 on the client side since I discovered that in relation to zfs without the cluster. You mentioned you were using OpenSolaris, maybe there's been a change there that I don't have because I'm running Solaris 10...
    If I add a zfs:
    # zfs create t1qc/img_prod/testzfs
    Share it on the server:
    # scswitch -n -M -j improdqc1-nfs-resource
    # share -F nfs -o rw -d "testzfs" /t1qc/img_prod/testzfs
    # scswitch -e -M -j improdqc1-nfs-resource
    On my client:
    # ls /zfs_img_prod
    testzfs
    # ls /zfs_img_prod/testzfs
    ls: reading directory /zfs_img_prod/testzfs: Input/output error
    # mount -o remount /zfs_img_prod
    # ls /zfs_img_prod/testzfs
    ... files are listed
    I have to be missing something here... a setting... something

  • NFS Performance

    I have 2 questions about NFS on 10.4.
    Client;
    Has NFS performance improved on the client side? Last time I tested was 10.3, and the sustained throughput was about 10-12 MB/s on a gig connection. This was from a Sun NFS server.
    Server;
    Has the performance improved? I am thinking about doing an Xsan NFS re-share to 50+ Linux machines in a compute farm. Will this work out well?
    I'm interested to hear from anybody doing heavy NFS serving.
    Thanks,
    David

    The client is somewhat lacking.
    On one test here (XServe G5 client talking to XServe RAID 5 array connected to XServe G5 NFS server) I get around 40MB/sec copying a file to the RAID over a gigabit ethernet network.
    By comparison, a Solaris machine talking to the same server gets almost 80MB/sec.
    So it sounds like it's improved some from when you last tested, but maybe not by as much as you'd like.
    Note that these tests were done on a single active client (or maybe some minor background traffic going on at the same time).
    As for the server side, I don't know quite where that tops out. A quick test here shows little difference in times even when multiple clients are writing to the RAID at the same time. The server might be able to keep up with the RAID speed.

  • NFS client question ... do I need rpc?

    Hi All!
    I am in the process of doing some security auditing and in the process I came across open rpc ports that I believe I don't actually need. I would like to solicit thoughts on this:
    I have solaris 8 and 10 machines importing NFS filesystems from a solaris 8 server. For the client machines, is there any reason to have rpc running?
    I tested this with a solaris 8 machine, turned off both lockd/statd (/etc/init.d/nfs.client stop) and rpc (/etc/init.d/rpc stop), made sure that the daemons were not running and tried to nfsmount a share on the client:
    mount -F nfs server:/share /mnt/tmp
    which seemed to work fine. So do I need lockd/statd and rpc or not?
    Rudolf

    As I said, I tested the ability to connect to NFS even in the absence of rpc / lockd / statd on the client. However, it seems mAbrante is right about these services being at least advisable on the client to allow file locking. This is an excerpt from the lockd manpage that I should have spotted before even asking this question:
    State information kept by the lock manager about these locking requests can be lost if the lockd is killed or the operating system is rebooted. Some of this information can be recovered as follows. When the server lock manager restarts, it waits for a grace period for all client-site lock managers to submit reclaim requests. Client-site lock managers, on the other hand, are notified by the status monitor daemon, statd(1M), of the restart and promptly resubmit previously granted lock requests. If the lock daemon fails to secure a previously granted lock at the server site, then it sends SIGLOST to a process.
    So I guess you don't need the rpc / lockd / statd but you will lose functionality ... you might be able to get away with it if the exported filesystem is read only ...
    Rudolf

  • Ports needed for a nfs client

    Hello -
    Which ports on a firewall running on a Solaris 10 machine should I open to make it an NFS client? I opened UDP/TCP port 111, but it didn't work. The file server is running Solaris 9.
    Thanks
    Rui

    If you use the WebNFS feature of Solaris it will only require port 2049. To use WebNFS, simply mount your NFS share as a URI.
    For example, replace:
    mount server:/share /mountpoint
    with
    mount nfs://server/share /mountpoint
    .. to use WebNFS instead.
    .7/M.

  • Is NFS the only supported sharing method?

    Is NFS the only file system sharing method certified for use in SAP NetWeaver 7?
    Are Samba or XFS supported?

    > I'm currently using NFS to share the following:
    >
    > 1.) /usr/sap/trans from Production to Quality Assurance and Development
    > 2.) /sapmnt/<sid> from the Central Instance to the associated Dialog Instances
    > 3.) A file system used for storage of interface files. These files would be used by non-SAP systems
    > 4.) A file system used to store the SAP applications CD images. This is used as a central repository when performing upgrades and installations
    Sounds good and reasonable.
    Why do you want to change?
    Markus
