NFS client problem

Hi,
I have set up an NFS environment on Solaris 9.
All the services are running on both the server and the client side.
When the client reboots I get an error that the mount point is not found,
like: /pani/notape is not found or does not exist.
When I umount /pani, I get it mounted again with some delay...
Can anybody tell me if there is a solution?
Thanks in advance.
Pani.

How does the client mount the share -- do you use automount maps or just an entry in vfstab? Did you try snooping the traffic to see whether the client is at least attempting to mount the share?
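
For reference, a static Solaris 9 mount would look something like the line below in /etc/vfstab (the server name and export path here are placeholders, not taken from the original post), and snoop can capture what happens while the client tries to mount it at boot:

nfsserver:/export/notape  -  /pani/notape  nfs  -  yes  rw,bg,hard,intr
# snoop -o /tmp/nfs-mount.cap host nfsserver

If the entry comes from an automount map instead, re-running "automount -v" after boot is another quick sanity check that the map is being read.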

Similar Messages

  • Svc:/network/nfs/client problem in Solaris 10

    Hi,
    I've been trying to figure out what the problem is with the scenario below for a long time, and so far I have not been able to.
    I have a SunFire X4100 system with Solaris 10 installed.
    Most of the services (ssh, ftp, etc.) do not come up each time I reboot the server, and after approximately 3 hours everything comes up fine by itself.
    I noticed that the service svc:/network/nfs/client is taking a very long time to start. This might be related to the problem.
    See the svcs output below; I hope this is useful :)
    bash-3.00# svcs -xv ssh
    svc:/network/ssh:default (SSH server)
    State: offline since Wed May 20 16:37:45 2009
    Reason: Service svc:/network/nfs/client:default is starting.
    See: http://sun.com/msg/SMF-8000-GE
    Path: svc:/network/ssh:default
    svc:/system/filesystem/autofs:default
    svc:/network/nfs/client:default
    See: man -M /usr/share/man -s 1M sshd
    Impact: 3 dependent services are not running:
    svc:/milestone/multi-user-server:default
    svc:/system/basicreg:default
    svc:/system/zones:default
    Any advice on this?
    Appreciate your help.

    Hi all,
    Issue resolved. A port was conflicting in /etc/services; I re-added the entry in /etc/services.
    Thanks
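    If you hit something similar, a rough way to see what ssh is actually waiting on and to look for duplicate or missing port entries is sketched below (the service names grepped for are just examples):
    # svcs -d svc:/network/ssh:default
    # svcs -xv svc:/network/nfs/client:default
    # egrep -n 'sunrpc|nfsd|lockd|statd' /etc/services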

  • NFS client problem "The document X could not be saved"

    Hi,
    Briefly: Debian Linux server (Lenny), OS X 10.5.7 client. The NFS server config is simple enough:
    /global 192.168.72.0/255.255.255.0(rw,root_squash,sync,insecure,no_subtree_check)
    This works well with our Linux clients, and generally it is OK with my OS X iMac. The OS X NFS client is configured through Directory Utility, with no "Advanced" options. The client can authenticate with NIS nicely, and NFS, on the whole, works. I can manipulate files with Finder, and create files on the command line with the usual tools.
    The problem is TextEdit, iWork and other Cocoa apps (not all). They can save a file once, but subsequently saving the file produces a "The document X.txt cannot be saved" error dialog. If I remove the file on the command line and re-save, the save succeeds. It is as if re-saving the document with the same name as an existing file causes issues. There seems to be no problem with file permissions, and when I save in a non-NFS-exported directory everything is fine.
    Has anyone spotted this problem before?
    Lawrence

    I doubt that "OS X NFS is fundamentally broken" seeing as how many people use it successfully.
    tcpdump (or more preferably: wireshark) might be useful in tracking down what's happening between the NFS client and NFS server. Sometimes utilities like fs_usage can be useful in tracking down the application/filesystem interaction.
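    For example, something along these lines on the Mac captures the NFS traffic for later inspection in Wireshark and traces what the app does at the filesystem layer (the interface name and server address are placeholders):
    sudo tcpdump -i en0 -s 0 -w /tmp/nfs-save.pcap host 192.168.72.1 and port 2049
    sudo fs_usage -w -f filesys TextEdit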
    It's usually a good idea to check the logs (e.g. /var/log/system.log) for possible clues in case an error/warning is getting logged around the same time as the failure. And if you can't reproduce the problem from the command line, then that can be a good indication that the issue is with the higher layers of the system.
    Oh, and if you think there's a bug in the OS, it never hurts to officially tell Apple that via a bug report:
    http://developer.apple.com/bugreporter/
    Even if it isn't a bug, they should still be able to work with you to help you figure out what's going on. They'll likely want know details about what exactly isn't working and will probably ask for things like a tcpdump capture file and/or an fs_usage trace.
    HTH
    --macko

  • Systemd nfs client mount share ???

    Just installed systemd, systemd-arch-units and initscripts-systemd as per the wiki, and all went well except mounting an NFS share from my server (networking is OK!).
    This is the old working mount command, run from rc.local before installing systemd:
    mount -t nfs4 -o rw,hard,async,intr,rsize=49152,wsize=49152,proto=tcp 192.168.0.250:/ /media/SERVER_NYTT &
    It did not mount at all with systemd, not even when run manually from the command line. According to the systemd log, netfs and nfs-common failed to start.
    So I tried from fstab instead:
    192.168.0.250:/ /media/SERVER_NYTT nfs rw,hard,async,intr,rsize=49152,wsize=49152,proto=tcp 0 0
    This did work, but 'mount' showed that systemd had mounted with the NFS default options instead of mine (wsize, rsize, ...). Still errors from systemd starting netfs and nfs.
    So I disabled netfs and nfs-common/rpcbind in rc.conf and created this systemd file (/etc/systemd/system/192.168.0.250.mount):
    [Unit]
    Description=ServerNfs
    Wants=network.target rpc-statd.service
    After=network.target rpc-statd.service
    [Mount]
    What=192.168.0.250:/
    Where=/media/SERVET_NYTT
    Type=nfs
    Options=rw,hard,async,intr,rsize=49152,wsize=49152,proto=tcp
    DirectoryMode=0777
    StandardOutput=syslog
    StandardError=syslog
    This is from the sparse wiki and 'man systemd.mount'. Now nothing happens. With my limited understanding I thought it would start the necessary services (Wants) and replace the entry in fstab.
    I will now enable the systemd services rpcbind and rpc-statd and see what happens.
    Overall the transition to systemd went very well indeed; slim, openbox, network, .xinitrc and e4rat all started OK, much to my surprise! There's still some fine-tuning to do, like nfs, possibly automounting, and finally weeding out unnecessary services.
    But any help with this NFS client problem is much appreciated.

    swanson wrote:
    Do you use any of this:
    Alternatively, you can mark these entries in /etc/fstab with the x-systemd.automount and x-systemd-device-timeout= options (see systemd.mount ..
    I did read the man page for systemd.mount but couldn't make out what to put in the fstab line.
    By the way, I renamed the mount file to the target mountpoint, but no success. I then reactivated the fstab line, with the new mountpoint in /mnt, and that worked fine with mount -a, except that the mount options are not what I set in fstab. They were before systemd; now they are the defaults for nfs4.
    Have a look here regarding the x-systemd.automount option. I seem to be the only one who has noticed a problem with this option; comment=systemd.automount works fine for automounting, though. I haven't changed the wiki because I'm still not sure if the problem is on my side.
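    For what it's worth, an fstab line combining the original options with the automount hint discussed above might look like this (an untested sketch; option spellings per systemd.mount(5), mountpoint as in the original command):
    192.168.0.250:/ /media/SERVER_NYTT nfs4 rw,hard,async,intr,rsize=49152,wsize=49152,proto=tcp,x-systemd.automount,x-systemd.device-timeout=10 0 0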

  • Problems enabling nfs client and server

    I just rebuilt Solaris 11.1 x86 on a SunFire X4640.
    I have problems enabling NFS.
    First I typed the following command:
    # svcs network/nfs/server
    disabled
    Second I typed the following command:
    #svcadm enable network/nfs/server
    # svcs network/nfs/server
    offline
    I did this 3 or 4 times without success...
    Any ideas?
    I'm holding up production here! Please help!!

    Let's rule out the easier stuff first...
    Do you have something shared?
    I think you need to have something shared before you can enable the nfs server service.
    Or, if you share something, the service is started automatically. See below.
    Thanks, Cindy
    # svcs -a | grep nfs
    disabled Feb_26 svc:/network/nfs/client:default
    disabled Feb_26 svc:/network/nfs/server:default
    disabled Feb_26 svc:/network/nfs/rquota:default
    # svcadm enable svc:/network/nfs/server:default
    # svcs | grep nfs
    disabled 13:51:52 svc:/network/nfs/server:default
    # zfs set share.nfs=on rpool/cindy
    # share
    rpool_cindy /rpool/cindy nfs sec=sys,rw
    # svcs | grep nfs
    online Feb_26 svc:/network/nfs/fedfs-client:default
    online Feb_27 svc:/network/nfs/status:default
    online Feb_27 svc:/network/nfs/cbd:default
    online Feb_27 svc:/network/nfs/mapid:default
    online Feb_27 svc:/network/nfs/nlockmgr:default
    online 13:52:35 svc:/network/nfs/rquota:default
    online 13:52:35 svc:/network/nfs/server:default
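    A quick way to confirm the same behaviour on your own box (a sketch of standard SMF checks, nothing release-specific assumed): 'svcs -xv' shows why the service is still offline and what it depends on, and a bare 'share' that prints nothing means nothing is exported yet.
    # svcs -xv network/nfs/server
    # share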

  • [Solved]NFS Client Not Mounting Shares

    Here is my setup:
    I have two Arch boxes that I am attempting to set up NFS shares on.  The box that is going to be the server is headless, FYI.  So far I have installed nfs-utils, started `rpc-idmapd` and `rpc-mountd` successfully on the server, and started `rpc-gssd` successfully on the client.
    The folder I am trying to share is the /exports folder.
    ls -l /exports
    produces
    total 8
    drwxrwxrw-+ 110 daniel 1004 4096 Dec 6 17:26 Movies
    drwxrwxrwx+ 13 daniel users 4096 Jan 8 19:12 TV-Shows
    On the server:
    /etc/exports
    # /etc/exports
    # List of directories exported to NFS clients. See exports(5).
    # Use exportfs -arv to reread.
    # Example for NFSv2 and NFSv3:
    # /srv/home hostname1(rw,sync) hostname2(ro,sync)
    # Example for NFSv4:
    # /srv/nfs4 hostname1(rw,sync,fsid=0)
    # /srv/nfs4/home hostname1(rw,sync,nohide)
    # Using Kerberos and integrity checking:
    # /srv/nfs4 gss/krb5i(rw,sync,fsid=0,crossmnt)
    # /srv/nfs4/home gss/krb5i(rw,sync,nohide)
    /exports 192.168.1.10(rw,fsid=0)
    On the client:
    showmount -e 192.168.1.91
    Export list for 192.168.1.91:
    /exports 192.168.1.10
    Everything is looking hunky-dory.  However, when I go to mount using
    sudo mount -t nfs4 192.168.1.91:/exports /mnt/Media
    the mount never takes place.  It just sits there and does nothing.  I CAN, however, kill the process with Ctrl-C.
    So does anybody have ANY idea why my shares aren't working?
    EDIT: Just thought I should mention that all of the data in the /exports folder is a mount --bind from /mnt/media.  All of the /mnt/media is contained on a USB external hard drive.  I did notice that there is an ACL.
    getfacl /exports
    getfacl: Removing leading '/' from absolute path names
    # file: exports/
    # owner: root
    # group: root
    user::rwx
    group::r-x
    other::r-x
    Last edited by DaBungalow (2014-01-10 03:18:05)

    I found what the problem was.  Apparently rpc-gssd was causing a problem.  Stopping it fixed everything.
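    In case anyone else lands here, the fix described above would look roughly like this on the client (assuming the stock rpc-gssd.service unit shipped with nfs-utils; gssd is only needed for Kerberos mounts):
    # systemctl stop rpc-gssd.service
    # systemctl disable rpc-gssd.service
    # mount -t nfs4 192.168.1.91:/exports /mnt/Media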

  • [solved] NFS client will not work correctly

    I have all my $HOME directories on an NFS server. So far I have used SUSE and Debian; now I want to switch to Arch, but the NFS client is not working correctly:
    I start "portmap nfslock nfsd netfs" via rc.conf. When I do an "rpcinfo -p <ip-arch-system>" I get the following:
    stefan:/home/stefan # rpcinfo -p 192.168.123.3
       Program Vers Proto   Port
        100000    2   tcp    111  portmapper
        100000    2   udp    111  portmapper
        100021    1   udp  32768  nlockmgr
        100021    3   udp  32768  nlockmgr
        100021    4   udp  32768  nlockmgr
        100003    2   udp   2049  nfs
        100003    3   udp   2049  nfs
        100003    4   udp   2049  nfs
        100021    1   tcp  48988  nlockmgr
        100021    3   tcp  48988  nlockmgr
        100021    4   tcp  48988  nlockmgr
        100003    2   tcp   2049  nfs
        100003    3   tcp   2049  nfs
        100003    4   tcp   2049  nfs
        100005    3   udp    891  mountd
        100005    3   tcp    894  mountd
    As you can see, "status" is missing, so statd is not running. It should look like the result on my SUSE box:
    stefan:/home/stefan # rpcinfo -p 192.168.123.2
       Program Vers Proto   Port
        100000    2   tcp    111  portmapper
        100000    2   udp    111  portmapper
        100024    1   udp  32768  status
        100021    1   udp  32768  nlockmgr
        100021    3   udp  32768  nlockmgr
        100021    4   udp  32768  nlockmgr
        100024    1   tcp  35804  status
        100021    1   tcp  35804  nlockmgr
        100021    3   tcp  35804  nlockmgr
        100021    4   tcp  35804  nlockmgr
    There is a "status" line, so statd is running.
    How can I fix this problem so that statd runs on my Arch box too?
    Last edited by stka (2007-06-10 15:59:48)

    The problem is solved.
    I use LDAP for authentication. During the setup of the LDAP client I copied nsswitch.ldap to nsswitch.conf. But the line for "hosts:" was:
    hosts:          dns ldap
    and my DNS has no localhost entry. After I changed this line to:
    hosts:          files dns ldap
    everything was OK. statd is now running and I can start migrating to Arch Linux ;-)
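    A quick way to verify the same thing (generic glibc/RPC tools, nothing Arch-specific assumed): check that localhost resolves through 'files' and that statd has registered with the portmapper:
    $ getent hosts localhost
    $ rpcinfo -p localhost | grep status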

  • Solaris 10 NFS caching problems with custom NFS server

    I'm facing a very strange problem with a pure-Java standalone application providing NFS server v2 service. The same application, targeted at JVM 1.4.2, runs on different environments (see below) without any problem.
    On Solaris 10 we have tried all kinds of mount parameters and service up/down configurations, but cannot solve the problem.
    We're in big trouble because the app is a mandatory component of a product due to go into production shortly.
    Details follow.
    System description
    Sun SPARC U4 with SunOS 5.10, patch level Generic_118833-33, 64-bit
    List of active NFS services
    disabled   svc:/network/nfs/cbd:default
    disabled   svc:/network/nfs/mapid:default
    disabled   svc:/network/nfs/client:default
    disabled   svc:/network/nfs/server:default
    disabled   svc:/network/nfs/rquota:default
    online       svc:/network/nfs/status:default
    online       svc:/network/nfs/nlockmgr:default
    NFS mount params (from /etc/vfstab)
    localhost:/VDD_Server  - /users/vdd/mnt nfs - vers=2,proto=tcp,timeo=600,wsize=8192,rsize=8192,port=1579,noxattr,soft,intr,noac
    Anomaly description
    The server side of NFS is provided by a Java standalone application, enabled only for NFS v2 and tested on different environments: MS Windows 2000, 2003 and XP, Linux Red Hat 10 32-bit, Linux Debian 2.6.x 64-bit, and SunOS 5.9. The Java application is distributed with a test program (also a Java standalone application) to validate the installation and configuration.
    The test program simply reads a file from the NFS file system exported by our main application (called VDD) and writes the same file back with a different name on the exported file system. At the end of the test, the written file has different contents from the one read. In-depth investigation shows the following behaviour:
    _ The read phase behaves correctly on both the server (VDD) and client (test app) sides, transporting the file with the correct contents.
    _ The write phase produces a file that is zero-filled for the first 90% of its length but ends correctly with the same sequence of bytes as the original file.
    _ Detailed write-phase behaviour:
    1_ Test app writes the first 512 bytes => VDD receives an NFS command with offset 0, count 512 and the correct byte contents;
    2_ Test app writes the next 512 bytes => VDD receives an NFS command with offset 0, count 1024 and WRONG byte contents: the first 512 bytes are zero-filled (previous write) and the last 512 bytes have the correct contents (current write).
    3_ Test app writes the next 512 bytes => VDD receives an NFS command with offset 0, count 1536 and WRONG byte contents: the first 1024 bytes are zero-filled (previous writes) and the last 512 bytes have the correct contents (current write).
    4_ and so on...
    Further tests
    We tested our VDD application on the same Solaris 10 system but with our test application on another (Linux) machine, contacting VDD via the Linux NFS client, and we don't see the wrong behaviour: our test program performed OK and the written file has the same contents as the one read.
    Has anyone faced a similar problem?
    We are a Sun ISV partner: do you think we have enough info to open a bug request with SDN?
    Any suggestions?
    Many thanks in advance,
    Maurizio.
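    One mount-side idea that might be worth a try (just a sketch, not a confirmed fix): adding forcedirectio to the existing vfstab options, so the Solaris client bypasses its page cache for this mount entirely rather than only disabling attribute caching with noac:
    localhost:/VDD_Server  - /users/vdd/mnt nfs - vers=2,proto=tcp,timeo=600,wsize=8192,rsize=8192,port=1579,noxattr,soft,intr,noac,forcedirectio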

    I finally got it working. I think my problem was that I was copying and pasting the /etc/pam.conf from Gary's guide into the pam.conf file.
    There were unseen carriage returns mucking things up. So following a combination of the two docs worked. Starting with:
    http://web.singnet.com.sg/~garyttt/Configuring%20Solaris%20Native%20LDAP%20Client%20for%20Fedora%20Directory%20Server.htm
    Then following the steps at "Authentication Option #1: LDAP PAM configuration " from this doc:
    http://docs.lucidinteractive.ca/index.php/Solaris_LDAP_client_with_OpenLDAP_server
    for the pam.conf, got things working.
    Note: ensure that your user has the shadowAccount value set in the objectClass

  • How to config nfs client in netware 6 SP5

    Hello, I'm having some problems finding information; it is difficult for a system that is already out of support. I need to access an NFS share from NetWare 6 SP5, and honestly I cannot find anything, as some links are broken. Does anyone have any ideas?

    The NFS documentation for NetWare 6.0 can be found here:
    Novell Documentation: NetWare 6 - Working with UNIX Machines
    Note that this covers the NFS service.
    From your message title, are you talking about connecting an NFS client to a NetWare server, or about using NetWare as an NFS client? In the latter case, you would need to purchase an extra product called NetWare NFS Gateway.

  • Wrong atime created by iMac as a NFS client

    We have iMac computers in classrooms, and they are running as NFS clients. We have found that a wrong access time (atime) is created by Mac OS X 10.5 and 10.6.
    A file that is created with O_CREAT on an NFS-mounted directory will have a wrong atime. For example, the year will appear as 1920 or 2037, depending on the IP address of the NFS client.
    This problem can also be found with Solaris NFS server and EMC Celerra.
    The following program will create a file having the wrong atime on an NFS-mounted directory:
    #include <fcntl.h>
    #include <stdlib.h>
    #include <unistd.h>
    int main(void) {
        int fd;
        /* The O_EXCL option will cause the wrong atime */
        fd = open("AIZU", O_CREAT | O_EXCL | O_RDWR, 0644);
        /* fd = open("AIZU", O_CREAT | O_RDWR, 0644); */
        close(fd);
        exit(0);
    }
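    A quick way to reproduce (the compiler invocation and source file name here are just an example): compile, run the binary inside the NFS-mounted directory, then check the access time with ls -lu:
    $ cc -o atime_test atime_test.c
    $ ./atime_test
    $ ls -lu AIZU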
    The phenomenon is quite similar to the following problem reported on FreeBSD in 2001:
    http://www.mail-arch...g/msg22084.html
    If anyone knows a solution to this problem, we would greatly appreciate your advice. Thanks in advance.

    The solution shown in
    http://www.mail-archive.com/[email protected]/msg22084.html
    should be applied to the current NFS modules of Snow Leopard.

  • Is NFS client data cacheing possible?

    Yesterday I was viewing an HD 1080 video with VLC and noticed that Activity Monitor was showing about 34MB/sec from my NAS box. My NAS box runs OpenSolaris (I was at Sun for over 20 years, and some habits die hard), and the 6GB video file was mounted on my 27" iMac (10.7.2) using NFSv3 (yes, I have a gigabit network).
    Being a long-term UNIX performance expert and regular DTrace user, I was able to confirm that VLC on Lion was reading the file at about 1.8MB/sec, and that the NFS server was being hit at 34MB/sec. Further investigation showed that the NFS client (Lion) was requesting each 32KB block 20 times!
    (Note: the default read size for NFSv3 over TCP is 32KB.)
    Digging deeper, I found that VLC was reading the file in 1786-byte blocks. I have concluded that Lion's NFSv3 client issues at least one 32KB read for each application call to read(2), and that no data is cached between reads (this fully accounts for the 20x overhead in this case).
    A workaround is to use, say, rsize=1024, which increases the number of NFS ops but dramatically reduces the bandwidth consumption (which means I might yet be able to watch HD video over wifi).
    That VLC issues such small reads is a bug, so I have also posted some notes on the vlc.org forums. But client-side caching would hide the issue from the network.
    So, the big question: is it possible to enable NFS client data caching in Lion?
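    For anyone wanting to try the rsize workaround, a mount along these lines should do it (the server name, export path and mountpoint are placeholders; see mount_nfs(8) for the exact option spellings on your release):
    sudo mount -t nfs -o vers=3,tcp,rsize=1024 nas:/export/video /Volumes/video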

    The problem solved itself mysteriously overnight, without any interference from me.
    The systems are again perfectly happily mounting the file space (650 clients, all at the same time, mounting up to 6 filesystems from the same server) and the server is happily serving again, as it has been for the past 2 years.
    My theory is that there was a network configuration clash, but considering that the last modification of the NIS hosts file was about 4 weeks ago, when the latest server was installed and has been serving ever since, I have no idea how such a clash could happen without interference in the config files. It is a mystery and I will have to make every effort to unravel it. One does not really like to sweep incidents like that, un-investigated, under the carpet.
    If anybody has any suggestions and thoughts on this matter please post them here.
    Lydia

  • NFS client test program

    I'm trying to write a test program that behaves like an NFS client. The code uses the standard RPC calls to talk to the NFS server. It compiles and runs fine on Linux. On Solaris, though, I have a problem establishing an RPC client initially.
    The test program performs the following operations:
    1. Uses the portmapper service on the server to establish the remote port associated with the MOUNT service.
    2. Creates a socket, sockfd.
    3. Uses bindresvport to bind a reserved port to the socket. The port number is in the reserved range (i.e. 600 to 1000ish).
    4. Connects this socket to the server.
    5. Creates a struct sockaddr_in with the appropriate address information for the server in addr.
    6. Calls clnttcp_create.
    The following code
    client = clnttcp_create(addr,             /* points at the server */
                            program,          /* MOUNT RPC program number */
                            program_version,  /* MOUNT RPC program version */
                            &sockfd,          /* connected socket using a reserved port on the local client */
                            0,
                            0);
    if (client == NULL) {
        clnt_pcreateerror("clnttcp_create");
    }
    returns "clnttcp_create: RPC: Remote system error - Address already in use".
    So it looks like port 688 (the local port selected, say, by bindresvport) is already in use? But that's the whole point: I need to provide clnttcp_create() with a socket that already has a port bound to it.
    The same code runs fine on a SUSE 11 client talking to an OpenSolaris 11 server. The same code run as a client on Solaris 11 fails with the above message when talking to either a SUSE 11 server or a Solaris 11 server.
    Any suggestions what I'm doing wrong? Is this the right forum in which to ask?
    many thanks
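    One quick check that might narrow this down (generic Solaris networking tools, nothing assumed about your code): see whether the reserved port bindresvport picked really is still held, e.g. lingering in TIME_WAIT from a previous run, and that the MOUNT service is registered where you expect:
    $ netstat -an | grep '\.688'
    $ rpcinfo -p <server>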

    Make sure you have the sub-folder com, and then another sub-folder underneath it, mastertech, and another sub-folder underneath that called sample. This is how packages are structured. They should be in your ep workspace directory structure if you're using Eclipse. In JBoss, you do not have to have the packages or any source code structure. Just jar your compiled code and save it in JBoss's deploy folder, which only accepts jars, wars and ears.

  • [Solved] NFS write problem

    I looked through the available threads and couldn't find anything that helped. I use VMware to develop on, and I need to connect to the VM filesystem with local dev tools. I set everything up according to the Arch wiki, with the caveat that the server VM is Ubuntu. I can mount using either the Ubuntu command
    mount -t nfs4 -o proto=tcp,port=2049 192.168.1.96:/ /root/www
    or the Arch command
    #mount -t nfs4 192.168.1.96:/ /root/www
    I can read and traverse directories, and I can create a file, but I cannot edit or delete a file that is already there.
    Ubuntu fstab
    # /etc/fstab: static file system information.
    # Use 'blkid' to print the universally unique identifier for a
    # device; this may be used with UUID= as a more robust way to name devices
    # that works even if disks are added and removed. See fstab(5).
    # <file system> <mount point> <type> <options> <dump> <pass>
    proc /proc proc nodev,noexec,nosuid 0 0
    # / was on /dev/sda1 during installation
    UUID=24e3b6ef-d2bc-4ff9-891c-411910f7ce24 / ext4 errors=remount-ro 0 1
    # swap was on /dev/sda5 during installation
    UUID=ee28c56a-6aaa-4861-9778-fb3f335dba0c none swap sw 0 0
    /dev/fd0 /media/floppy0 auto rw,user,noauto,exec,utf8 0 0
    /var/www /export/www none bind,rw 0 0
    I have to run the mount once the VM is up, so there is nothing in my fstab.
    client /etc/conf.d/nfs-common.conf
    # Parameters to be passed to nfs-common (nfs clients & server) init script.
    # If you do not set values for the NEED_ options, they will be attempted
    # autodetected; this should be sufficient for most people. Valid alternatives
    # for the NEED_ options are "yes" and "no".
    # Do you want to start the statd daemon? It is not needed for NFSv4.
    NEED_STATD=""
    # Options to pass to rpc.statd.
    # See rpc.statd(8) for more details.
    # N.B. statd normally runs on both client and server, and run-time
    # options should be specified accordingly.
    # STATD_OPTS="-p 32765 -o 32766"
    STATD_OPTS=""
    # Options to pass to sm-notify
    # e.g. SMNOTIFY_OPTS="-p 32764"
    SMNOTIFY_OPTS=""
    # Do you want to start the idmapd daemon? It is only needed for NFSv4.
    NEED_IDMAPD=""
    # Options to pass to rpc.idmapd.
    # See rpc.idmapd(8) for more details.
    IDMAPD_OPTS=""
    # Do you want to start the gssd daemon? It is required for Kerberos mounts.
    NEED_GSSD=""
    # Options to pass to rpc.gssd.
    # See rpc.gssd(8) for more details.
    GSSD_OPTS=""
    # Where to mount rpc_pipefs filesystem; the default is "/var/lib/nfs/rpc_pipefs".
    PIPEFS_MOUNTPOINT=""
    # Options used to mount rpc_pipefs filesystem; the default is "defaults".
    PIPEFS_MOUNTOPTS=""
    Here is the /etc/exports from Ubuntu
    /export 192.168.1.0/24(rw,fsid=0,insecure,no_subtree_check,async)
    /export/www 192.168.1.0/24(rw,nohide,insecure,no_subtree_check,async)
    Ubuntu nfs-kernel-server
    # Number of servers to start up
    # To disable nfsv4 on the server, specify '--no-nfs-version 4' here
    RPCNFSDCOUNT=8
    # Runtime priority of server (see nice(1))
    RPCNFSDPRIORITY=0
    # Options for rpc.mountd.
    # If you have a port-based firewall, you might want to set up
    # a fixed port here using the --port option. For more information,
    # see rpc.mountd(8) or http://wiki.debian.org/SecuringNFS
    RPCMOUNTDOPTS=--manage-gids
    # Do you want to start the svcgssd daemon? It is only required for Kerberos
    # exports. Valid alternatives are "yes" and "no"; the default is "no".
    NEED_SVCGSSD=no # no is default
    # Options for rpc.svcgssd.
    RPCSVCGSSDOPTS=
    # Options for rpc.nfsd.
    RPCNFSDOPTS=
    Ubuntu /etc/default/nfs-common
    # If you do not set values for the NEED_ options, they will be attempted
    # autodetected; this should be sufficient for most people. Valid alternatives
    # for the NEED_ options are "yes" and "no".
    # Do you want to start the statd daemon? It is not needed for NFSv4.
    NEED_STATD=
    # Options for rpc.statd.
    # Should rpc.statd listen on a specific port? This is especially useful
    # when you have a port-based firewall. To use a fixed port, set this
    # this variable to a statd argument like: "--port 4000 --outgoing-port 4001".
    # For more information, see rpc.statd(8) or http://wiki.debian.org/SecuringNFS
    STATDOPTS=
    # Do you want to start the idmapd daemon? It is only needed for NFSv4.
    NEED_IDMAPD=yes
    # Do you want to start the gssd daemon? It is required for Kerberos mounts.
    NEED_GSSD=no # no is default
    I appreciate any help you can give. Thanks,
    --jerry
    Last edited by jk121960 (2012-03-22 19:03:07)

    Basically, I have had very few problems with NFS - however, the Ubuntu version of Linux Mint had problems with uid/gids - they ended up as the complement of '-2'!
    I never use an exports file, but set up the server in rc.local, basically like so:
    exportfs -iv -o rw,insecure,no_root_squash /pub/usb 192.168.1.0/24:/pub
    Then, on the client side in /etc/fstab I have:
    {servername}:/pub /pub nfs defaults,noatime,noauto 0 0
    With Linux Mint (at least the GNOME version), I had to use 'sshfs' instead. Linux Mint LMDE is OK.
    Also - I have used NFS for more years than I care to remember, but ... I must admit that sshfs has its merits - especially if you remember to use '-o allow_other'.
    i.e. on the client side you can use:
    sshfs -o allow_other {servername}:/pub/usb /pub
    Hope this helps.
    Last edited by perbh (2012-03-22 18:12:05)
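    One more thing worth checking, given that the mountpoint in the question is under /root and the files are being edited as root: root squashing on the Ubuntu export. A sketch of /etc/exports with root squashing turned off (same layout as the original, with no_root_squash added; re-export with 'exportfs -arv' afterwards) would be:
    /export 192.168.1.0/24(rw,fsid=0,insecure,no_subtree_check,async,no_root_squash)
    /export/www 192.168.1.0/24(rw,nohide,insecure,no_subtree_check,async,no_root_squash)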

  • NFS locking problem, solaris 10, nfsd stops serving

    Hi all,
    First time poster, short time viewer.
    To set the scene...
    I have an x86 SAM-QFS installation under management. The SAM-QFS infrastructure consists of 2x metadata controllers and 2x file-service heads, which currently share out NFS (they are NFS servers).
    These systems are not clustered, nor do they have any form of "high availability". They can be failed over between, but this is very much a manual process.
    The file-service heads run mounted SAM-FS filesystems. These filesystems are shared out via /etc/dfs/dfstab-style exports to NFS clients (some 250 Mac OS X Tiger/Leopard clients) that use NFS for their home directories et al.
    So, here is where things become sad.
    For the past month we've been fighting with our nfsd. Unfortunately for us, at (apparently) completely random moments, our nfsd will simply stop responding to requests, cease to serve NFS to clients, and then silently stop processing any locking/transfer/mapping requests. An 'svcadm restart nfs/server' does nothing, as the process simply hangs in a state of re-attempting to stop/start nfsd. We find that the only way to get NFS to respond again is to completely reboot the host. Only at that point will NFS reshare, with the "shareall" command, based on our entries in /etc/dfs/dfstab.
    We have been through a lengthy support case with Sun. We are getting NO closer to understanding the problem, but we are providing Sun with every possible detail we can. We've even gotten to the point of writing crash dumps from the machine out to disk using commands such as 'uadmin 5 1' and then sending our entire vmcore outputs to Sun.
    We are stabbing in the dark suggesting it might be file-locking related, but we do not have enough visibility into the nfsd to be able to tell what is happening. We've tried increasing the number of allowed nfs_server daemons in /etc/default/nfs, as well as capping the maximum NFS version served at v3, but it hasn't helped us at all.
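    For reference, the tunables we mean live in /etc/default/nfs on Solaris 10; the relevant lines look roughly like this (the values shown are illustrative, not our exact settings):
    NFSD_SERVERS=1024
    LOCKD_SERVERS=256
    NFS_SERVER_VERSMAX=3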
    If anyone has any experience with what could possibly cause NFS to do this, in what (we thought) was a fairly common NFS environment, it would be appreciated. Our infrastructure is fairly grounded until we have a resolution to an issue Sun simply can't seem to pinpoint.
    Thank you.
    z.

    I have the same problem on a SAM-FS server. I have never come across a shittier system than SAM-FS. I feel for the people who have to support them. SAM-FS takes an easy concept and makes it an overwhelming problem.

  • How do I disable NFS client in Solaris 10

    I am trying to disable the NFS client in Solaris 10. In Solaris 9 I would simply rename /etc/rc2.d/S73nfs.client to /etc/rc2.d/s73nfs.client.
    Since /etc/rc2.d/S73nfs.client does not seem to exist in 10, I'm wondering how to do this.
    Thanks in advance for the help.
    Max

    Since /etc/rc2.d/S73nfs.client does not seem to exist in 10, I'm wondering how to do this.
    Read up on the new Solaris 10 Service Management Facility (SMF). Info at http://docs.sun.com/ There are a couple of tutorial docs at BigAdmin.
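    With SMF, that would be something along these lines (the autofs line is only needed if you also want automounted NFS paths gone):
    # svcadm disable network/nfs/client
    # svcadm disable system/filesystem/autofs
    # svcs network/nfs/client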
