Exportfs on NFS server causes server fence and reboots OVM3 server

Hi,
I am using OVM 3.1.1 manager and OVM servers of the same version. I have a single server pool with four nodes. My server repository and pool file system are shared from two different NFS servers and accessed over the back-end interface, which is bonded on all Dom0s. When I do 'exportfs -ra' or 'service nfs reload' on the NFS server with the pool file system, two of my four servers immediately fence and reboot. It does this pretty consistently; I can recreate it every time, and it just seems to reboot different servers from the pool each time. The NFS server with the repository does not produce the same result after the same NFS refresh commands.
The system log doesn't seem to catch any meaningful messages before the reboot. The o2cb parameters in /etc/sysconfig/o2cb are:
O2CB_IDLE_TIMEOUT_MS=60000
O2CB_HEARTBEAT_THRESHOLD=31
O2CB_RECONNECT_DELAY_MS=2000
O2CB_ENABLED=true
O2CB_STACK=o2cb
O2CB_KEEPALIVE_DELAY_MS=2000
O2CB_BOOTCLUSTER=618edbaefe604e5a
I appreciate any help to resolve this issue.
Thanks.

Is your cluster filesystem on NFS?
I'd bump up your O2CB_HEARTBEAT_THRESHOLD to either 61 or 91. In OVM 3.2.1, the default has been upped to 61. We're running at 91. My guess is the exportfs is causing NFS to restart, which makes your cluster filesystem vanish for a bit, which causes the fencing and rebooting.
OVM sure is fun...
I'd also strongly suggest you log an SR.
Rob
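
A minimal sketch of the change Rob suggests, assuming the stock /etc/sysconfig/o2cb layout shown above (the restart commands are typical for an OVM Server Dom0 but may differ on your build; do one node at a time, with its VMs migrated off):

# /etc/sysconfig/o2cb on each Dom0 -- raise the disk heartbeat threshold
# (61 is roughly 120 seconds of missed heartbeats before self-fencing, 91 roughly 180)
O2CB_HEARTBEAT_THRESHOLD=61
# restart the cluster stack so the new value takes effect
service o2cb restart
service ocfs2 restart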

Similar Messages

  • Solaris 9 NFS clients and Mac OS X 10.3.8 NFS Server issues

    I have a situation where I'm using a Mac OS X Server machine as my file server for a heterogeneous mix of clients, Solaris 9 being one of them. The NFS server portion of Mac OS X seems to have some quirks. I made the move to a Mac OS X server from a combination of Linux and Solaris because it was touted as a good multi-platform server solution, but my NFS woes are souring my opinion of it.
    I haven't been able to nail down the exact cause, but it seems that Sun's Gnome Desktop 2.0 has problems starting up. The 'gconfd-2' process starts and never finishes what it's doing. I suspect a problem with creating a lock file in the user's NFS-mounted home directory. My workaround was to disable the Gnome Desktop 2.0 option, but it isn't very pleasant because many of my users liked it.
    Another strange issue that plagues the Solaris 9 and Linux (Fedora 2) users is that Mozilla, which is the primary email client for my users, complains that it "Could not initialize the browser's security component." It goes on to suggest that there may be a problem with read/write access to the user's profile directory or that there's no more room. Googling didn't turn up much about this problem, but the home directory share is nowhere near full and the permissions are such that everyone can read and write to their own profile directory just fine. I've been able to work around this problem on Linux by removing the user's 'cert8.db' and 'key3.db' files before they run Mozilla, but this technique is failing to work on the Solaris 9 clients.
    All these problems seem to involve some strange file access issue, and the Linux and Solaris clients had no problems when I was using a Solaris box for home directory sharing via NFS, so it definitely seems like a problem with Mac OS X's NFS implementation.
    If anyone has come across this type of issue and has some information about a fix or a better workaround, I would love to hear from you. Thanks in advance!

    I just figured out today that the Gnome Desktop 2.0 problem is due to some part of Gnome not liking really long home directory paths. Mac OS X by default dictates that home directories be of the form /Network/Servers/(fqdn of file server)/Volumes/(volume name)(home directory share path), for example, /Network/Servers/xxx.myschool.edu/Volumes/Homes/userx. I haven't figured out what in Gnome doesn't like it, but it appears to be the gconf mechanism.
    The Mozilla strangeness still is happening though.
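    The Linux workaround mentioned above (deleting cert8.db and key3.db before starting Mozilla) can be scripted so users don't have to do it by hand; a rough sketch, assuming the old Mozilla suite profile layout under ~/.mozilla (the glob is a guess):
    #!/bin/sh
    # clear the per-profile certificate databases, then launch Mozilla
    for db in "$HOME"/.mozilla/*/*/cert8.db "$HOME"/.mozilla/*/*/key3.db; do
        [ -f "$db" ] && rm -f "$db"
    done
    exec mozilla "$@"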

  • Possible bug in the arch NFS server package?

    I run the NFS server on my Arch box, exporting to a Debian box and an Arch laptop. Whenever the Arch server reboots lately, the shares don't automatically remount properly on the clients. They're listed in mtab, but if I ls or try to access the directories, I get a permission denied error. I have to manually unmount the shares and then remount them again to be able to see/use them.
    The reason I think this is an Arch package problem is that I set up a share on the Debian box to share with Arch, and that worked perfectly. When I rebooted Debian as the server, the shares were automatically remounted on the Arch client, and when I rebooted Arch, the shares were again mounted properly on reboot.
    It's possible I'm doing something wrong with permissions, but it seems unlikely because 1) everything was working fine for a long time until recently, when I started noticing the behavior, 2) all the permissions on the shared directory are identical to the ones on the Arch shared directory, all user name UIDs are the same, same groups and GIDs, etc., 3) the shares mount perfectly well manually from the command line, and 4) I set up the Debian share/exports, etc. in about 2 minutes with no problem at all, while I've been dealing with this problem on Arch for 2 days now, changing options and going over everything multiple times until my head is spinning. It just seems unlikely that the configuration is wrong, although I guess anything is possible. I can provide all the permissions/group info, fstab info, /etc/exports info, etc. if anyone wants to take a closer look at it.
    So until this is sorted, I wondered if anyone else is having this problem, or if anyone had any ideas of something I might be overlooking. Again, everything *seems* to be set up right, but maybe there's some Arch-specific thing I'm missing. Thanks.

    OK, out of pure frustration I just grabbed the Gentoo init script, installed start-stop-daemon, modified the script to run as #!/bin/bash, stuck it in /etc/rc.d, rebooted, and everything works. I can reboot my computer and the clients reconnect, or restart the daemon and they reconnect.
    Here's the script I am using.
    #!/bin/bash
    # Copyright 1999-2005 Gentoo Foundation
    # Distributed under the terms of the GNU General Public License v2
    # $Header: /var/cvsroot/gentoo-x86/net-fs/nfs-utils/files/nfs,v 1.14 2007/03/24 10:14:43 vapier Exp $
    # This script starts/stops the following
    # rpc.statd if necessary (also checked by init.d/nfsmount)
    # rpc.rquotad if exists (from quota package)
    # rpc.nfsd
    # rpc.mountd
    # NB: Config is in /etc/conf.d/nfs
    opts="reload"
    # This variable is used for controlling whether or not to run exportfs -ua;
    # see stop() for more information
    restarting=no
    # The binary locations
    exportfs=/usr/sbin/exportfs
    gssd=/usr/sbin/rpc.gssd
    idmapd=/usr/sbin/rpc.idmapd
    mountd=/usr/sbin/rpc.mountd
    nfsd=/usr/sbin/rpc.nfsd
    rquotad=/usr/sbin/rpc.rquotad
    statd=/usr/sbin/rpc.statd
    svcgssd=/usr/sbin/rpc.svcgssd
    mkdir_nfsdirs() {
        local d
        for d in /var/lib/nfs/{rpc_pipefs,v4recovery,v4root} ; do
            [[ ! -d ${d} ]] && mkdir -p "${d}"
        done
    }
    mount_pipefs() {
        if grep -q rpc_pipefs /proc/filesystems ; then
            if ! grep -q "rpc_pipefs /var/lib/nfs/rpc_pipefs" /proc/mounts ; then
                mount -t rpc_pipefs rpc_pipefs /var/lib/nfs/rpc_pipefs
            fi
        fi
    }
    umount_pipefs() {
        if [[ ${restarting} == "no" ]] ; then
            if grep -q "rpc_pipefs /var/lib/nfs/rpc_pipefs" /proc/mounts ; then
                umount /var/lib/nfs/rpc_pipefs
            fi
        fi
    }
    start_gssd() {
        [[ ! -x ${gssd} || ! -x ${svcgssd} ]] && return 0
        local ret1 ret2
        ${gssd} ${RPCGSSDDOPTS}
        ret1=$?
        ${svcgssd} ${RPCSVCGSSDDOPTS}
        ret2=$?
        return $((${ret1} + ${ret2}))
    }
    stop_gssd() {
        [[ ! -x ${gssd} || ! -x ${svcgssd} ]] && return 0
        local ret1 ret2
        start-stop-daemon --stop --quiet --exec ${gssd}
        ret1=$?
        start-stop-daemon --stop --quiet --exec ${svcgssd}
        ret2=$?
        return $((${ret1} + ${ret2}))
    }
    start_idmapd() {
        [[ ! -x ${idmapd} ]] && return 0
        ${idmapd} ${RPCIDMAPDOPTS}
    }
    stop_idmapd() {
        [[ ! -x ${idmapd} ]] && return 0
        local ret
        start-stop-daemon --stop --quiet --exec ${idmapd}
        ret=$?
        umount_pipefs
        return ${ret}
    }
    start_statd() {
        # Don't start rpc.statd if already started by init.d/nfsmount
        killall -0 rpc.statd &>/dev/null && return 0
        start-stop-daemon --start --quiet --exec \
            $statd -- $RPCSTATDOPTS 1>&2
    }
    stop_statd() {
        # Don't stop rpc.statd if it's in use by init.d/nfsmount.
        mount -t nfs | grep -q . && return 0
        # Make sure it's actually running
        killall -0 rpc.statd &>/dev/null || return 0
        # Okay, all tests passed, stop rpc.statd
        start-stop-daemon --stop --quiet --exec $statd 1>&2
    }
    waitfor_exportfs() {
        local pid=$1
        ( sleep ${EXPORTFSTIMEOUT:-30}; kill -9 $pid &>/dev/null ) &
        wait $1
    }
    start() {
        # Make sure nfs support is loaded in the kernel #64709
        if [[ -e /proc/modules ]] && ! grep -qs nfsd /proc/filesystems ; then
            modprobe nfsd &> /dev/null
        fi
        # This is the new "kernel 2.6 way" to handle the exports file
        if grep -qs nfsd /proc/filesystems ; then
            if ! grep -qs "^nfsd[[:space:]]/proc/fs/nfsd[[:space:]]" /proc/mounts ; then
                mount -t nfsd nfsd /proc/fs/nfsd
            fi
        fi
        # now that nfsd is mounted inside /proc, we can safely start mountd later
        mkdir_nfsdirs
        mount_pipefs
        start_idmapd
        start_gssd
        start_statd
        # Exportfs likes to hang if networking isn't working.
        # If that's the case, then try to kill it so the
        # bootup process can continue.
        if grep -q '^/' /etc/exports &>/dev/null; then
            $exportfs -r 1>&2 &
            waitfor_exportfs $!
        fi
        if [ -x $rquotad ]; then
            start-stop-daemon --start --quiet --exec \
                $rquotad -- $RPCRQUOTADOPTS 1>&2
        fi
        start-stop-daemon --start --quiet --exec \
            $nfsd --name nfsd -- $RPCNFSDCOUNT 1>&2
        # Start mountd
        start-stop-daemon --start --quiet --exec \
            $mountd -- $RPCMOUNTDOPTS 1>&2
    }
    stop() {
        # Don't check NFSSERVER variable since it might have changed,
        # instead use --oknodo to smooth things over
        start-stop-daemon --stop --quiet --oknodo \
            --exec $mountd 1>&2
        # nfsd sets its process name to [nfsd] so don't look for $nfsd
        start-stop-daemon --stop --quiet --oknodo \
            --name nfsd --user root --signal 2 1>&2
        if [ -x $rquotad ]; then
            start-stop-daemon --stop --quiet --oknodo \
                --exec $rquotad 1>&2
        fi
        # When restarting the NFS server, running "exportfs -ua" probably
        # isn't what the user wants. Running it causes all entries listed
        # in xtab to be removed from the kernel export tables, and the
        # xtab file is cleared. This effectively shuts down all NFS
        # activity, leaving all clients holding stale NFS filehandles,
        # *even* when the NFS server has restarted.
        # That's what you would want if you were shutting down the NFS
        # server for good, or for a long period of time, but not when the
        # NFS server will be running again in short order. In this case,
        # then "exportfs -r" will reread the xtab, and all the current
        # clients will be able to resume NFS activity, *without* needing
        # to umount/(re)mount the filesystem.
        if [ "$restarting" = no ]; then
            # Exportfs likes to hang if networking isn't working.
            # If that's the case, then try to kill it so the
            # shutdown process can continue.
            $exportfs -ua 1>&2 &
            waitfor_exportfs $!
        fi
        stop_statd
        stop_gssd
        stop_idmapd
        umount_pipefs
    }
    case "$1" in
        start)
            start
            ;;
        stop)
            stop
            ;;
        reload)
            # Exportfs likes to hang if networking isn't working.
            # If that's the case, then try to kill it so the
            # bootup process can continue.
            $exportfs -r 1>&2 &
            waitfor_exportfs $!
            ;;
        restart)
            # See long comment in stop() regarding "restarting" and exportfs -ua
            restarting=yes
            stop
            start
            ;;
        *)
            echo "usage: $0 {start|stop|restart}"
            ;;
    esac
    exit 0
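
    For anyone wanting to try the same thing, a rough sketch of how such a script is typically wired up under Arch's old rc.d init (the daemon name 'nfs' is illustrative, not from the post):
    # save the script as /etc/rc.d/nfs and make it executable
    install -m 755 nfs /etc/rc.d/nfs
    # start it now
    /etc/rc.d/nfs start
    # to start it at boot, add 'nfs' to the DAEMONS array in /etc/rc.conf
    # (after whatever provides the portmapper, since NFS needs that running first)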

  • Problems setting up an NFS server

    Hi everybody,
    I just completed my first arch install. :-)
    I have a desktop and a laptop, and I installed Arch on the desktop (the laptop runs Ubuntu 9.10). I had a few difficulties here and there, but I now have the system up and running, and I'm very happy.
    I have a problem setting up an NFS server. With Ubuntu everything was working, so I'm assuming that the Ubuntu machine (client) is set-up correctly. I'm trying to troubleshoot the arch box (server) now.
    I followed this wiki article: http://wiki.archlinux.org/index.php/Nfs
    Now, I have these problems:
    - when I start the daemons, I get:
    [root@myhost ~]# /etc/rc.d/rpcbind start
    :: Starting rpcbind [FAIL]
    [root@myhost ~]# /etc/rc.d/nfs-common start
    :: Starting rpc.statd daemon [FAIL]
    [root@myhost ~]# /etc/rc.d/nfs-server start
    :: Mounting nfsd filesystem [DONE]
    :: Exporting all directories [BUSY] exportfs: /etc/exports [3]: Neither 'subtree_check' or 'no_subtree_check' specified for export "192.168.1.1/24:/home".
    Assuming default behaviour ('no_subtree_check').
    NOTE: this default has changed since nfs-utils version 1.0.x
    [DONE]
    :: Starting rpc.nfsd daemon [FAIL]
    - If I mount the share on the client with "sudo mount 192.168.1.20:/home /media/desktop", IT IS mounted but I can't browse it because I have no privileges to access the home directory for the user.
    my /etc/exports looks like this:
    # /etc/exports: the access control list for filesystems which may be exported
    # to NFS clients. See exports(5).
    /home 192.168.1.1/24(rw,sync,all_squash,anonuid=99,anongid=99))
    /etc/conf.d/nfs-common.conf:
    # Parameters to be passed to nfs-common (nfs clients & server) init script.
    # If you do not set values for the NEED_ options, they will be attempted
    # autodetected; this should be sufficient for most people. Valid alternatives
    # for the NEED_ options are "yes" and "no".
    # Do you want to start the statd daemon? It is not needed for NFSv4.
    NEED_STATD=
    # Options to pass to rpc.statd.
    # See rpc.statd(8) for more details.
    # N.B. statd normally runs on both client and server, and run-time
    # options should be specified accordingly. Specifically, the Arch
    # NFS init scripts require the --no-notify flag on the server,
    # but not on the client e.g.
    # STATD_OPTS="--no-notify -p 32765 -o 32766" -> server
    # STATD_OPTS="-p 32765 -o 32766" -> client
    STATD_OPTS="--no-notify"
    # Options to pass to sm-notify
    # e.g. SMNOTIFY_OPTS="-p 32764"
    SMNOTIFY_OPTS=""
    # Do you want to start the idmapd daemon? It is only needed for NFSv4.
    NEED_IDMAPD=
    # Options to pass to rpc.idmapd.
    # See rpc.idmapd(8) for more details.
    IDMAPD_OPTS=
    # Do you want to start the gssd daemon? It is required for Kerberos mounts.
    NEED_GSSD=
    # Options to pass to rpc.gssd.
    # See rpc.gssd(8) for more details.
    GSSD_OPTS=
    # Where to mount rpc_pipefs filesystem; the default is "/var/lib/nfs/rpc_pipefs".
    PIPEFS_MOUNTPOINT=
    # Options used to mount rpc_pipefs filesystem; the default is "defaults".
    PIPEFS_MOUNTOPTS=
    /etc/hosts.allow:
    nfsd: 192.168.1.0/255.255.255.0
    rpcbind: 192.168.1.0/255.255.255.0
    mountd: 192.168.1.0/255.255.255.0
    Any help would be very appreciated!

    Thanks, I finally got it working.
    I realized that even though both machines had the same group, my Ubuntu machine (client) group has GID 1000, while the Arch one has GID 1001. I created a group that has GID 1001 on the client, and now everything is working.
    I'm wondering why my Arch username and group both have 1001 rather than 1000 (which I suppose would be the default number for the first user created).
    Anyway, thanks again for your inputs.
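
    A quick way to catch that kind of mismatch is to compare the numeric IDs on both machines before blaming NFS; a rough sketch (the username and group name are placeholders):
    # run on both client and server and compare the numbers, not the names
    id youruser            # e.g. uid=1001(youruser) gid=1001 on Arch, 1000 on Ubuntu
    # then either add a matching group on the client, as was done here...
    groupadd -g 1001 archgroup
    usermod -aG archgroup youruser
    # ...or renumber one side so UID/GID agree, and chown the shared files afterwards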

  • Memory leak in NFS server

    Hi all,
    I have a problem with two SunFire 240s (4 GB of RAM) running Solaris 10 in a Veritas Cluster.
    These nodes are two NFS servers and they have 10 NFS clients.
    We have a memory leak on these servers. The memory utilization increases day by day.
    The memory seems to be allocated by the kernel and not by any particular process.
    So I would like to know if this is a common issue (NFS?) or a single case.
    Thanks in advance for your help
    Regards
    Daniele
    Edited by: Danx on Jan 2, 2008 5:23 PM

    That message relates to how the application deals with its threads, which for the most part isn't actually an issue. However, since it does have the potential to cause a leak under certain circumstances, we did make a change in 10.3 to address that issue, so I suggest you upgrade to that release.

  • NFS: server reboot - client problem with "file handle" - no unmont

    Hello,
    When I restart an NFS server and a client still has an NFS share from it mounted, that NFS share becomes unusable on the client: everything causes the client to complain about a stale "file handle", even trying to unmount that share!
    So my question is: how do I deal gracefully with an NFS server reboot and avoid this file handle problem?
    Apart from going to each client and unmounting the NFS share by hand?
    Thanks!


  • 7410 NFS server not responding

    Greetings,
    Anyone seeing "NFS server ... not responding" from a client of a 7410?
    I have a 7410 with a single J4400 (22 1TB drives @ RAID1, 1 spare, 1 logzilla). It's running 2009.09.01.3.0,1-1.8, which I believe is the latest and greatest. There is one client, a T2000 running Solaris 10. We're using NFS v3 to mount four shares from the 7410. Mount options look like this:
    box-nge2:/export/oracle/data     -       /oradata/data   nfs     -   yes rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,vers=3,noac,forcedirectio
    A fairly large (~1TB) Oracle database lives here (although there was very little activity in the database when the following occurred).
    Today I was cleaning up some old data files from a now unused Oracle instance. Pretty simple: rm -rf /oradata/data/DO-NOT-WANT/ . I was surprised to see the command appear to hang, and the "NFS server kwaltz-nge2 not responding still trying" message appear. Control-C eventually got me my prompt back.
    When the system became responsive again a few minutes later, I tried deleting files one at a time. Deleting some "large" data files (i.e. 200 GB or more) was taking more than 2 minutes and caused the NFS server not responding message. Smaller files would take a few seconds.
    Why is a simple "rm" command bringing the 7410 to its knees? Any thoughts? Thanks.
    Edited by: roymcmorran on Dec 23, 2009 3:44 PM

    After some update issues, we also lost NFS. I created a test share and exported it to one of our Solaris 10 hosts. I got an rpcbind failure. This failure wasn't corrected by restarting the NFS service, nor rebooting the 7410. Rather, I had to DISABLE the NFS service, then re-enable it. After that, all connectivity returned.
    Charles

  • Solaris 10 NFS caching problems with custom NFS server

    I'm facing a very strange problem with a pure Java standalone application providing NFS server v2 service. This same application, targeted at JVM 1.4.2, runs on several other environments (see below) without any problem.
    On Solaris 10 we have tried all kinds of mount parameters and system service up/down configurations, but cannot solve the problem.
    We're in big trouble because the app is a mandatory component for a product that has to be in production shortly.
    Details follows
    System description
    Sunsparc U4 with SunOS 5.10, patch level: Generic_118833-33, 64bit
    List of active NFS services
    disabled   svc:/network/nfs/cbd:default
    disabled   svc:/network/nfs/mapid:default
    disabled   svc:/network/nfs/client:default
    disabled   svc:/network/nfs/server:default
    disabled   svc:/network/nfs/rquota:default
    online       svc:/network/nfs/status:default
    online       svc:/network/nfs/nlockmgr:default
    NFS mount params (from /etc/vfstab)
    localhost:/VDD_Server  - /users/vdd/mnt nfs - vers=2,proto=tcp,timeo=600,wsize=8192,rsize=8192,port=1579,noxattr,soft,intr,noac
    Anomaly description
    The server side of NFS is provided by a java standalone application enabled only for NFS v2 and tested on different environments like: MS Windows 2000, 2003, XP, Linux RedHat 10 32bit, Linux Debian 2.6.x 64bit, SunOS 5.9. The java application is distributed with a test program (java standalone application) to validate main installation and configuration.
    The test program simply reads a file from the NFS file system exported by our main application (called VDD) and writes the same file with a different name on the VDD exported file system. At the end of the test, the written file has different contents from the one read. In-depth investigation shows the following behaviour:
    _ The read phase behaves correctly on both server (VDD) and client (test app) sides, transporting the file with correct contents.
    _ The write phase produces a file on the VDD file system that is zero-filled for the first 90% but ends correctly with the same sequence of bytes as the original read file.
    _ Detailed write phase behaviour:
    1_ Test app writes first 512 bytes => VDD receives an NFS command with offset 0, count 512 and correct byte contents;
    2_ Test app writes next 512 bytes => VDD receives an NFS command with offset 0, count 1024 and WRONG byte contents: the first 512 bytes are zero-filled (previous write) and the last 512 bytes have correct contents (current write).
    3_ Test app writes next 512 bytes => VDD receives an NFS command with offset 0, count 1536 and WRONG byte contents: the first 1024 bytes are zero-filled (previous writes) and the last 512 bytes have correct contents (current write).
    4_ and so on...
    Further tests
    We tested our VDD application on the same Solaris 10 system but with our test application on another (Linux) machine, contacting VDD via the Linux NFS client, and we don't see the wrong behaviour: our test program performed OK and the written file has the same contents as the one read.
    Has anyone faced a similar problem?
    We are Sun ISV partner: do you think we have enough info to open a bug request to SDN?
    Any suggestions?
    Many thanks in advance,
    Maurizio.


  • NFS server unmount error w/ bind-mount

    Hello, please let me know if I should change the thread title.
    so I'm sharing a folder through NFS between two arch-linux pc's.
    Host1 is my desktop, Host2 is my laptop.
    I boot up my computer, rpcbind is started on boot-up
    I launch the server on Host2
    I launch the client on Host1
    on Host2, I do a bind-mount from some directory to the one I'm exporting
    I then unmount it, and it unmounts w/o error
    then I redo the bind-mount
    then, on the client, I mount the server's NFS share to some directory
    then, on the client, I unmount it w/o error
    then, on the server, I try to unmount the bind-mounted directory, but it won't, says it's "busy"
    if I restart the server, then I can unmount the bind-mount w/o error, but I believe it should work without me having to restart the server every time
    I want to bind a different directory to the exported directory.
    server
    $ systemctl start nfs-server
    client
    $ systemctl start nfs-client.target
    server
    $ mount test/ nfs/ --bind
    client
    $ mount Host1:/srv/nfs /mnt/nfs -t nfs
    $ umount /mnt/nfs
    server
    $ umount /srv/nfs
    umount: /srv/nfs: target is busy
    (In some cases useful info about processes that
    use the device is found by lsof(8) or fuser(1).)
    the output of
    $ lsof | grep /srv/nfs
    $ lsof | grep /srv
    did not show anything that had either the substring "nfs" or "rpc" in it...
    below is the status of both hosts rpcbind @startup (they were identical)
    $ systemctl status rpcbind
    rpcbind.service - RPC bind service
    Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; static)
    Active: active (running) since Sat 2014-12-06 15:51:38 PST; 46s ago
    Process: 262 ExecStart=/usr/bin/rpcbind -w ${RPCBIND_ARGS} (code=exited, status=0/SUCCESS)
    Main PID: 264 (rpcbind)
    CGroup: /system.slice/rpcbind.service
    └─264 /usr/bin/rpcbind -w
    Dec 06 15:51:38 BabaLinux systemd[1]: Started RPC bind service.
    below is the status of the nfs-server @ startup then after it was launched
    $ systemctl status nfs-server
    nfs-server.service - NFS server and services
    Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; disabled)
    Active: inactive (dead)
    $ systemctl status nfs-server
    nfs-server.service - NFS server and services
    Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; disabled)
    Active: active (exited) since Sat 2014-12-06 15:53:44 PST; 21s ago
    Process: 483 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS (code=exited, status=0/SUCCESS)
    Process: 480 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
    Main PID: 483 (code=exited, status=0/SUCCESS)
    CGroup: /system.slice/nfs-server.service
    Dec 06 15:53:44 BabaLinux exportfs[480]: exportfs: /etc/exports [1]: Neither 'subtree_check' or 'no_subtree_check' specified for export "NeoZelux:/srv/nfs".
    Dec 06 15:53:44 BabaLinux exportfs[480]: Assuming default behaviour ('no_subtree_check').
    Dec 06 15:53:44 BabaLinux exportfs[480]: NOTE: this default has changed since nfs-utils version 1.0.x
    Dec 06 15:53:44 BabaLinux systemd[1]: Started NFS server and services.
    below is the status of the nfs-client @ startup then after it was launched
    $ systemctl status nfs-client.target
    nfs-client.target - NFS client services
    Loaded: loaded (/usr/lib/systemd/system/nfs-client.target; disabled)
    Active: inactive (dead)
    $ systemctl status nfs-client.target
    nfs-client.target - NFS client services
    Loaded: loaded (/usr/lib/systemd/system/nfs-client.target; disabled)
    Active: active since Sat 2014-12-06 15:54:18 PST; 4s ago
    Dec 06 15:54:18 NeoZelux systemd[1]: Starting NFS client services.
    Dec 06 15:54:18 NeoZelux systemd[1]: Reached target NFS client services.

    NFSv4 operates in "namespaces" and you're basically messing with its brain by bind-mounting (and unmounting) within the exported directory while it's live, without using the exportfs command (either via add/remove in /etc/exports or just putting it all on the command line). You'll want to read up on section 7 of the RFC:
    https://tools.ietf.org/html/rfc5661
    In essence you're confusing its brain -- when you add a bind mount, re-export; and vice versa, when removing it, deconfigure and re-export. It still might not work but you can try -- like Spider.007 I question your technique and methodology. It just feels like you're trying to do something... "wrong" here that should be designed in a better way.
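
    In practical terms that means driving the change through exportfs rather than swapping bind mounts underneath a live export. A minimal sketch using the paths from this thread (the replacement source directory is made up):
    # on the server: withdraw the export before touching the bind mount
    exportfs -u NeoZelux:/srv/nfs     # or 'exportfs -ua' to unexport everything
    umount /srv/nfs
    # bind the new source directory and export again
    mount --bind /path/to/other/dir /srv/nfs
    exportfs -r                       # re-reads /etc/exports and re-exports /srv/nfs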

  • Failed to invoke getMountedFileSystemUsage[/SASHome/nfs/sasconfig]: bonhamtest:/SASHome/nfs/sasconfig nfs server unreachable:

    Hi, getting getMountedFileSystemUsage[/SASHome/nfs/sasconfig]: bonhamtest:/SASHome/nfs/sasconfig nfs server unreachable:
    error with hyperic platform service file mount
    It looks to be a Sigar issue with NFS version 4 and the automounter.
    I was not able to find an open issue or topic on this.
    Linux RHEL on both client and server,
    also using idmapd.
    exportfs is
    /SASHome/nfs/sasconfig
    10.87.73.0/24(rw,sync,no_root_squash)
    10.97.73.0/24(rw,sync,no_root_squash)
    Nothing unique with the auto master server or client
    nfs version 4
    nfs server setting
    MOUNTD_NFS_V2="no"
    MOUNTD_NFS_V3="no"
    RPCNFSDARGS="-N 2 -N 3 -d"
    RPCMOUNTDOPTS="--debug all"
    RPCIDMAPDARGS="-vvv"
    RPCGSSDARGS="-vvv"
    RPCSVCGSSDARGS="-vvv"

  • [SOLVED] Can't start NFS server

    I've read the wiki page and I've been changing my config files accordingly. When I try to start it I get the following error (from journalctl):
    Nov 02 21:41:47 arch sudo[11201]: dennis : TTY=pts/0 ; PWD=/home/dennis ; USER=root ; COMMAND=/usr/bin/systemctl start nfsd.service rpc-idmapd.service rpc-mountd.service rpcbind.service
    Nov 02 21:41:47 arch sudo[11201]: pam_unix(sudo:session): session opened for user root by dennis(uid=0)
    Nov 02 21:41:47 arch systemd[1]: Starting NFS server...
    Nov 02 21:41:47 arch systemd[1]: Mounting RPC pipe filesystem...
    Nov 02 21:41:47 arch rpc.nfsd[11204]: rpc.nfsd: Unable to access /proc/fs/nfsd errno 2 (No such file or directory).
    Nov 02 21:41:47 arch rpc.nfsd[11204]: Please try, as root, 'mount -t nfsd nfsd /proc/fs/nfsd' and then restart rpc.nfsd to correct the problem
    Nov 02 21:41:47 arch rpc.nfsd[11204]: error starting threads: errno 38 (Function not implemented)
    Nov 02 21:41:47 arch systemd[1]: Started RPC Bind.
    Nov 02 21:41:47 arch systemd[1]: nfsd.service: main process exited, code=exited, status=1/FAILURE
    Nov 02 21:41:47 arch mount[11205]: mount: unknown filesystem type 'rpc_pipefs'
    Nov 02 21:41:47 arch systemd[1]: Failed to start NFS server.
    Nov 02 21:41:47 arch systemd[1]: Dependency failed for NFS Mount Daemon.
    Nov 02 21:41:47 arch systemd[1]: Job rpc-mountd.service/start failed with result 'dependency'.
    Nov 02 21:41:47 arch systemd[1]: Unit nfsd.service entered failed state
    Nov 02 21:41:47 arch systemd[1]: var-lib-nfs-rpc_pipefs.mount mount process exited, code=exited status=32
    Nov 02 21:41:47 arch systemd[1]: Failed to mount RPC pipe filesystem.
    Nov 02 21:41:47 arch systemd[1]: Dependency failed for NFSv4 ID-name mapping daemon.
    Nov 02 21:41:47 arch systemd[1]: Job rpc-idmapd.service/start failed with result 'dependency'.
    Nov 02 21:41:47 arch systemd[1]: Unit var-lib-nfs-rpc_pipefs.mount entered failed state
    Nov 02 21:41:47 arch sudo[11201]: pam_unix(sudo:session): session closed for user root
    When I run mount -t nfsd nfsd /proc/fs/nfsd it says:
    [dennis@arch ~]$ sudo mount -t nfsd nfsd /proc/fs/nfsd
    mount: unknown filesystem type 'nfsd'
    What to do? I've no idea how to solve this.
    edit : changed quote tags to code --Inxsible
    Last edited by snufkin (2012-11-03 07:50:52)

    alphaniner wrote:What happens if you just run systemctl start nfsd.service ?
    Then I got another error message.
    I found out what was wrong though. I had suspended the computer, which caused GNOME or whatever to mess with my session data. A reboot fixed the problem, although I'm sure a relog would've done the same.
    Last edited by snufkin (2012-11-03 07:51:13)

  • [SOLVED] Cannot connect to NFS server.

    I'm trying to setup a NFS share at home. I have one laptop configured as a server, and one desktop client. Both are running Arch. I can ping both in either way.
    For some reason, I cannot connect to the server. At one time, I could connect, but only while having an empty /etc/hosts.deny file (which obviously is very insecure). Now, I cannot even reproduce this.
    Thus, I believe the problem is caused by /etc/hosts.allow and/or /etc/hosts.deny.
    Here is my server's /etc/exports:
    /shared 192.168.x.xxx(rw,sync,no_subtree_check,no_root_squash)
    And here its /etc/hosts.allow:
    ALL:192.168.0.0/255.255.255.0
    And finally its /etc/hosts.deny:
    ALL: ALL: DENY
    Running 'rpcinfo -p' on the server shows output, while running 'rpcinfo -p <server-ip>' on the client prints out 'No remote programs registered.'
    From what I've read, when the client wants to connect to the server, the server first checks in hosts.allow to see if the client's allowed to connect. So in my case, it should be able connect.
    However, when I run
    sudo mount -t nfs 192.168.x.xxx:/ /mnt/SERVER
    it tells me
    mount.nfs: mount to NFS server '192.168.x.xxx:/' failed: RPC Error: Program not registered
    And even if I comment out both hosts.deny and hosts.allow, running the same command prints out
    mount.nfs: access denied by server while mounting 192.168.x.xxx:/
    I'm clueless. Any help would be greatly appreciated!
    Last edited by MrAllan (2009-03-20 15:05:24)

    tomk wrote:
    Missed a detail on first reading. Your server's exports file makes the /shared directory available to 192.168.x.xxx, but your client mount command tries to mount the server's / i.e. root directory. You can only mount directories that have been exported.
    btw, there's no need to conceal internal addresses like that - we all have them, and there's no way anyone can use them to track you down. My laptop address is 10.12.62.99, my server is 192.168.10.10, and my irssi/bittorrent/whatever machine is 192.168.10.69. I challenge anyone to hack me.
    But if they know his IP, couldn't they use the lan IP to gain access easier than if they didn't know the subnet addresses? Since the thread is solved, I figured I would ask since I've always wondered about it.
    Btw, for anyone have unresolved NFS issues, see this bug report. The arch devs aren't responding to it for some reason, can anyone tell me why?
    http://bugs.archlinux.org/task/13434
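
    To make tomk's point concrete: the client has to name the exported path itself, not the server's root, so with the exports line above the working mount is simply (address masked the same way as in the thread):
    sudo mount -t nfs 192.168.x.xxx:/shared /mnt/SERVER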

  • How do I set up a NFS server?

    Running 10.5 on a MacBook. Trying to create an NFS server to mount an ISO to a proprietary UNIX client, AIX 5.2. Looked under SHARING and Directory Utility. Not finding any documentation which seems to indicate how this is done.
    On UNIX, I would start NFS services, add the directory to the export list with appropriate permissions (which clients can access it, etc.) and it would be good to go (assuming name resolution/routing etc.). Any insight would be helpful.

    You're over my head with your UNIX knowledge, but I wonder if the 10.4.x method still works in 10.5.x?
    http://mactechnotes.blogspot.com/2005/09/mac-os-x-as-nfs-server.html
    Nope, I guess not, they removed NetInfoManager in 10.5.
    Few know as much as this guy about OSX & he has a little utility to do it, but requires 10.6+...
    http://www.williamrobertson.net/documents/nfs-mac-linux-setup.html
    Maybe some clues you haven't seen yet anyway???

  • Linux guy wants to install solaris 10 via nfs using centos as nfs server

    Well, in Linux, when installing this way I simply copy the DVD install image to an NFS share (or do mount -o loop "isoimage" /nfsmountdir), copy the boot.iso to a CD-ROM, do a "linux askmethod" during install, and then specify the NFS server and directory.
    I want to learn Sun and want to start by really understanding the install and how to set up slices etc. I would really like to do this over NFS using my CentOS box as the server, as this method, at least when installing Linux, is much, much faster.
    So can I do this, and if so, what are the commands? I did see some documentation, but it was for installing from a Sun NFS server, and I need to install from the CentOS box as the server. I already have the ISOs for the 5 CD-ROMs as well as the DVD image. If possible I would obviously like to use the DVD image.
    Thanks, and remember I am a Sun newbie (just bought my first Solaris book this week, so the more dummied-down answer would be the best one for me). If it can be done, could you give me the exact process?
    Thanks a million (and one).

    The Sun network installer scripts only work on Solaris, so there are no instructions from Sun for configuring the daemons needed on Linux.
    There are some guides on the net, but I don't know of one that tells the whole story.
    You might consider installing Solaris in a VM on the Linux box, then configuring JumpStart on that. You could then see how the daemons on it are configured and replicate that configuration on the Linux machine.
    Darren

  • NFS server from Solaris 10 Not supported in OVM 3.1.1 ?

    I am trying a POC system with OVM. The customer has an existing Solaris 10 NFS server.
    I am trying to use it as a repository, but OVM would not find any shares when discovering file servers.
    I am running OVM 3.1.1. Any pointers?

    Should work fine. Check your export and the permissions on the volume/directory you're exporting over NFS.
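
    For what it's worth, on the Solaris 10 side the export is usually defined in /etc/dfs/dfstab; a rough sketch (the path is illustrative, and OVM generally needs root access from the Dom0s, hence anon=0 -- check the OVM storage requirements for the exact options):
    # /etc/dfs/dfstab on the Solaris 10 NFS server
    share -F nfs -o rw,anon=0 /export/ovm_repo
    # make sure the NFS server service is running, then publish the share
    # svcadm enable network/nfs/server
    # shareall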
