NFS server unmount error w/ bind-mount

Hello, please let me know if I should change the thread title.
I'm sharing a folder over NFS between two Arch Linux PCs.
Host1 is my desktop, Host2 is my laptop.
I boot up the computers; rpcbind is started at boot-up on both.
I start the NFS server on Host2.
I start the NFS client on Host1.
On Host2, I bind-mount some directory onto the one I'm exporting.
I then unmount it, and it unmounts without error.
Then I redo the bind-mount.
Then, on the client, I mount the server's NFS share onto some directory.
Then, on the client, I unmount it without error.
Then, on the server, I try to unmount the bind-mounted directory, but it won't: it says the target is "busy".
If I restart the NFS server I can then unmount the bind mount without error, but I believe it should work without my having to restart the server every time.
The reason for all this: I want to bind a different directory onto the exported directory.
server
$ systemctl start nfs-server
client
$ systemctl start nfs-client.target
server
$ mount --bind test/ nfs/
client
$ mount -t nfs Host2:/srv/nfs /mnt/nfs
$ umount /mnt/nfs
server
$ umount /srv/nfs
umount: /srv/nfs: target is busy
(In some cases useful info about processes that
use the device is found by lsof(8) or fuser(1).)
The output of
$ lsof | grep /srv/nfs
$ lsof | grep /srv
did not show anything containing either the substring "nfs" or "rpc"...
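One more place that may be worth checking on the server is the kernel's own export table, since nfsd itself can hold a reference to an exported directory even when nothing shows up in lsof. A quick look, assuming nfs-utils is installed and nfsd is loaded:
$ exportfs -v
$ cat /proc/fs/nfs/exports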
Below is the status of rpcbind on both hosts at startup (they were identical):
$ systemctl status rpcbind
rpcbind.service - RPC bind service
Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; static)
Active: active (running) since Sat 2014-12-06 15:51:38 PST; 46s ago
Process: 262 ExecStart=/usr/bin/rpcbind -w ${RPCBIND_ARGS} (code=exited, status=0/SUCCESS)
Main PID: 264 (rpcbind)
CGroup: /system.slice/rpcbind.service
└─264 /usr/bin/rpcbind -w
Dec 06 15:51:38 BabaLinux systemd[1]: Started RPC bind service.
Below is the status of nfs-server on the server, at startup and then after it was started:
$ systemctl status nfs-server
nfs-server.service - NFS server and services
Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; disabled)
Active: inactive (dead)
$ systemctl status nfs-server
nfs-server.service - NFS server and services
Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; disabled)
Active: active (exited) since Sat 2014-12-06 15:53:44 PST; 21s ago
Process: 483 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS (code=exited, status=0/SUCCESS)
Process: 480 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
Main PID: 483 (code=exited, status=0/SUCCESS)
CGroup: /system.slice/nfs-server.service
Dec 06 15:53:44 BabaLinux exportfs[480]: exportfs: /etc/exports [1]: Neither 'subtree_check' or 'no_subtree_check' specified for export "NeoZelux:/srv/nfs".
Dec 06 15:53:44 BabaLinux exportfs[480]: Assuming default behaviour ('no_subtree_check').
Dec 06 15:53:44 BabaLinux exportfs[480]: NOTE: this default has changed since nfs-utils version 1.0.x
Dec 06 15:53:44 BabaLinux systemd[1]: Started NFS server and services.
Below is the status of nfs-client.target on the client, at startup and then after it was started:
$ systemctl status nfs-client.target
nfs-client.target - NFS client services
Loaded: loaded (/usr/lib/systemd/system/nfs-client.target; disabled)
Active: inactive (dead)
$ systemctl status nfs-client.target
nfs-client.target - NFS client services
Loaded: loaded (/usr/lib/systemd/system/nfs-client.target; disabled)
Active: active since Sat 2014-12-06 15:54:18 PST; 4s ago
Dec 06 15:54:18 NeoZelux systemd[1]: Starting NFS client services.
Dec 06 15:54:18 NeoZelux systemd[1]: Reached target NFS client services.

NFSv4 operates in "namespaces", and you're basically messing with its brain by bind-mounting (and unmounting) within the exported directory while it's live, without using the exportfs command (either by adding/removing the entry in /etc/exports or by putting it all on the command line). You'll want to read up on section 7 of the RFC:
https://tools.ietf.org/html/rfc5661
In essence you're confusing it: when you add a bind mount, re-export, and vice versa; when removing it, unexport (deconfigure) first and then re-export. It still might not work, but you can try. Like Spider.007, I question your technique and methodology; it just feels like you're trying to do something "wrong" here that should be designed in a better way.
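A minimal sketch of the sequence that suggestion amounts to, assuming the export is the NeoZelux:/srv/nfs entry shown in the exportfs warning above, and using /path/to/other_dir as a placeholder for whatever directory you actually want to publish:
server
$ exportfs -u NeoZelux:/srv/nfs    # withdraw the export so nfsd drops its reference
$ umount /srv/nfs                  # the bind mount should now release without "busy"
$ mount --bind /path/to/other_dir /srv/nfs
$ exportfs -r                      # re-export everything listed in /etc/exports
This still may not release the mount if a client is actively holding it, but unexporting before touching the underlying mount at least keeps exportfs in the loop, which is what the reply is suggesting.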

Similar Messages

  • Failed to invoke getMountedFileSystemUsage[/SASHome/nfs/sasconfig]: bonhamtest:/SASHome/nfs/sasconfig nfs server unreachable:

    Hi, I'm getting a getMountedFileSystemUsage[/SASHome/nfs/sasconfig]: bonhamtest:/SASHome/nfs/sasconfig nfs server unreachable
    error with the Hyperic platform service file mount.
    It looks to be a Sigar issue with NFS version 4 and the automounter; I was not able to find an open issue or topic on it.
    Linux RHEL on both client and server, also using idmapd.
    exportfs is
    /SASHome/nfs/sasconfig
    10.87.73.0/24(rw,sync,no_root_squash)
    10.97.73.0/24(rw,sync,no_root_squash)
    Nothing unique with the auto.master on server or client.
    NFS version 4.
    NFS server settings:
    MOUNTD_NFS_V2="no"
    MOUNTD_NFS_V3="no"
    RPCNFSDARGS="-N 2 -N 3 -d"
    RPCMOUNTDOPTS="--debug all"
    RPCIDMAPDARGS="-vvv"
    RPCGSSDARGS="-vvv"
    RPCSVCGSSDARGS="-vvv"

  • Can't mount Archlinux NFS-Server via Win 7

    Dear Forum,
    first of all: I'm neither a native English speaker nor do I have any experience in the field of Linux etc.
    Therefore please excuse typos and unclear sentences. I have read a lot of man pages and web forums but can't find the solution to my problem.
    I bought a Pogoplug Classic (version E-02) to use it as a Linux-based web server. I installed Arch Linux, updated all the packages and tried to get NFS to work (I followed these steps: https://wiki.archlinux.org/index.php/NFS ).
    My desktop machine runs Win 7 x64 and I connect to my Pogoplug via PuTTY/SSH.
    Shortcut to my problem:
    I can't connect to my NFS server; every time I try to do so, I just get something that looks like a small man page for the syntax of the "mount" command. Is my syntax wrong, or did I miss something essential?
    What i did in particular:
    - Installed nfs-utils via pacman
    - idmapd.conf:
    [General]
    Verbosity = 1
    Pipefs-Directory = /var/lib/nfs/rpc_pipefs
    Domain = localdomain
    [Mapping]
    Nobody-User = nobody
    Nobody-Group = nobody
    - /etc/exports
    /srv/nfs4/ *(rw,fsid=0,no_subtree_check)
    /srv/nfs4/tausch *(rw,no_subtree_check,async,insecure,nohide)
    I also created /mnt/tausch/
    - /etc/fstab:
    /mnt/tausch /srv/nfs4/tausch none bind 0 0
    - "showmount -e alarm" gives me this:
    /srv/nfs4/tausch *
    /srv/nfs4 *
    I can get this output via cmd.exe from my Windows machine. Does this mean my NFS server is running correctly?
    - Activated the NFS client in Win 7
    - After the successful "showmount" I did this:
    mount -t nfs4 -o alarm:/mnt/tausch /srv/nfs4/tausch
    That's where I am right now. Any hint would be appreciated!
    Merry christmas to all of you,
    greetings,
    nick

    As far as I know you have to have an NFS connector on the Win7 machine for it to use NFS; having NFS is not a native function of Windows.
    I think you are mounting /mnt/tausch from host alarm and mounting it onto /srv/nfs4/tausch on another machine (win7?).
    Otherwise it looks like you are bind-mounting /mnt/tausch onto /srv/nfs4/tausch on your Arch Linux machine and then, through that double mounting, trying to export it to the win7 machine. I don't know why you wouldn't just take the export from alarm and mount it on your win7 machine. Mounting an export from one machine and then exporting that to another machine will not show the other export unless you transpose it.
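    For what it's worth, once "Client for NFS" is enabled on Win7, the Windows-side mount is normally done from cmd.exe rather than with Linux mount syntax. A rough sketch only: the drive letter Z: and the anonymous-access option are assumptions, and the export path is taken from the showmount output above:
    C:\> showmount -e alarm
    C:\> mount -o anon \\alarm\srv\nfs4\tausch Z: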

  • Error starting ORMI-Server.  Unable to bind socket: Address already in use:

    hi,
    I run the following command, "start_oc4j.bat", to start OC4J.
    It starts successfully.
    Then I start "BPEL PM Server".
    I get the following error:
    "Error starting ORMI-Server. Unable to bind socket: Address already in use: JVM_Bind".
    I understand the above error, because both of them are trying to bind to the same port and both are trying to start an ORMI server. Can someone suggest how to get rid of this problem?
    I tried giving different port numbers to them, but that did not work. Let me know how to overcome this problem.
    with regards
    shaila

    You may have another OC4J running on your computer.
    You should change the port number in the file config/rmi.xml.
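    If it helps to confirm which process already owns the port before changing config/rmi.xml, netstat on Windows can show the listener and its PID. A sketch, where 23791 is only an assumed example and should be whatever port your rmi.xml actually configures:
    C:\> netstat -ano | findstr :23791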

  • Error: 0x800f0922 when trying to install NFS Server Feature on Windows 2012R2

    I'm trying to install the "Server for NFS" option under the File and Storage Services role on a Windows 2012 R2 server, but I keep receiving a 0x800f0922 error. I've tried the install via the GUI and PowerShell. I've also tried using the
    DISM utility with different source media for the install, but with the same result. I ran "SFC /scannow" to check for any missing files/registry keys but it returned no errors. Any thoughts?

    Thank you for the help! I did try the suggested steps, but the issue persisted. I was able to resolve it by removing software on the server that was using the NFS ports needed for a successful installation of the native NFS server.
    In our case it was an optional component of our backup software called "Veeam Backup vPower NFS". There is also a known issue with QLogic's SANsurfer software using the NFS ports.
    There were entries in the System event log that tipped me off:
    Event ID:      7000  The Server for NFS Open RPC (ONCRPC) Portmapper service failed to start due to the following error:
    A device attached to the system is not functioning.
    Event ID:      7001  The Server for NFS Driver service depends on the Server for NFS Open RPC (ONCRPC) Portmapper service which failed to
    start because of the following error:
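    For anyone hitting the same thing, one way to see up front which process is sitting on the standard ONC RPC / NFS ports (111 and 2049) is to map the listening PID back to a process name. A sketch, where 1234 stands in for whatever PID netstat reports:
    C:\> netstat -ano | findstr ":111 :2049"
    C:\> tasklist /FI "PID eq 1234"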

  • Error starting ORMI-Server. Uable to bind socket

    Hi,
    I get the following error whenever I start my Application Server on a Linux machine.
    If I deploy the EJB application onto an OC4J instance, the server refuses the connection with a Connection Refused error.
    Any insight about this issue will be helpful.
    Thanks,
    Mohan
    04/07/08 17:14:42 Error starting ORMI-Server. Unable to bind socket: Address already in use
    04/07/08 17:14:44 Oracle Application Server Containers for J2EE 10g (9.0.4.0.0) initialized
    04/07/08 17:14:44 java.lang.NullPointerException
    04/07/08 17:14:44 at com.evermind.server.rmi.RMIServer.run(RMIServer.java:470)
    04/07/08 17:14:44 at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:186)
    04/07/08 17:14:44 at java.lang.Thread.run(Thread.java:534)

    Hi Avi,
    of course you are right that the standalone OC4J is not the same as an OC4J instance within the Oracle Application Server, though from my point of view many J2EE-server-related questions or problems regarding the EJB and/or web container could be answered or handled the same way.
    That's why I'm a little bit irritated by your statement. From a forum user's perspective I'd hate to have to follow, e.g., all those Forms and Reports service discussions if I were a Java developer or deployer. So I would like to see all posts regarding the OC4J container here (no matter whether standalone or inside OAS). Nevertheless, everybody should clearly indicate the exact product version, including the distinction between standalone and OAS.
    Just my 0.02 about this. What do others think?
    Avi, if you really want all discussion about OAS, including OC4J topics, to happen within the "Application Server - General" forum, I'd suggest you add a forum description which makes that clear.
    Well, even if one can persuade you not to follow such a strict separation (as I argued against), it could still be beneficial to add a meaningful forum description (e.g. to remind people of the distinction between standalone OC4J and OAS).
    Thoughts?
    Regards,
    Eric

  • "Error Starting OMI-Server. Unable to bind socket.Permission denied. Listen

    Hi All,
    I installed Forms 10g successfully on my machine (Windows Vista); when I try to connect to Forms I get an error
    like this: "Error Starting OMI-Server. Unable to bind socket. Permission denied: Listen"
    Please, anyone help....
    Thanks
    sa.....

    Which version of Windows Vista? Bear in mind that Vista Home edition is not supported by Oracle (I think for all products). If that is the case, you should upgrade your OS to the Business or Ultimate edition.
    If that is not the case, please get in touch with Oracle Support. Not much is available on the internet or in forums (you seem to have posted this in the Forms forum as well).

  • Error starting ORMI-Server. Unable to bind socket...

    One of our BPEL App Servers was moved from a local network to a LAN; I believe its IP was changed. When we start it with opmnctl startall we get the following error message:
    Error starting ORMI-Server. Unable to bind socket: Cannot assign requested address
    There does not seem to be a resource issue with regard to ports; a netstat -p doesn't show any potential port conflicts. In a web search I came across the following suggested check:
    Solution: A) Check if your IP address or hostname has changed or if there is a conflict of resources in the network
    Can someone explain what needs to be fixed if the IP address or hostname of the BPEL App Server has changed? Which files need to be corrected to get the install to start up?

    You may have another OC4J running on your computer.
    You should change the port number in the file config/rmi.xml.
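    Since "Cannot assign requested address" generally means the process is trying to bind to an IP that is no longer configured on the host, a quick sanity check is whether the hostname your OC4J/OPMN configuration references still resolves to an address the machine actually has. A sketch:
    $ hostname
    $ getent hosts $(hostname)
    $ ip addr show    # or ifconfig -a, to compare against the resolved address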

  • Mounting NFS home directory error, please help!

    Hello everyone,
    We are working on setting up a network of Macs to a Linux server. The server is running NFS / NIS to authenticate, all the users have their home directories on the server and do not have any local accounts on any of the computers.
    We have a problem getting the home dirs automounted on the local computers. The users need their home folder set correctly to import settings and so on, and the home folders are stored on the server.
    We can connect to the NFS servers, we can get the computers to log in with NIS accounts, we can manually access their restricted info once we manually mount the NFS drive.
    However, we cannot get the home folders to automount in the home directory. We have one working Macintosh computer running Mac OS X 10.5, but we cannot get it to work on Snow Leopard (10.6).
    Did anything change in permissions or in the way Mac OS X handles NFS shares or automounting in the update to 10.6, Snow Leopard? Does anyone have any experience handling NFS/NIS mounting on Mac OS X 10.6?
    / Z.

    Hello everyone!
    Even though I only posted this recently, we had been working on it for two days. However, we just defeated this beast and managed to solve it!
    It turned out to be simple, as it often is:
    We changed /etc/auto_home to manually provide the IP of the NFS server, with a wildcard (*) for users and & for the username, then commented out the normal +auto_home entry.
    The server was already set up to allow the computers to access everything.
    The problem, in short, was that Mac OS X has a dedicated slot for mounting in /home, so you can't mount anything else there unless you replace the normal /etc/auto_home.
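    A sketch of what that /etc/auto_home might end up looking like (the server name and export path are placeholders; * and & are the automounter's wildcard and key-substitution characters):
    # /etc/auto_home
    # +auto_home        (the default Directory Services entry, commented out)
    *    nfsserver.example.com:/export/home/&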

  • Possible bug in the arch NFS server package?

    I run the NFS server on my Arch box, exporting to a Debian box and an Arch laptop. Whenever the Arch server reboots lately, the shares don't automatically remount properly on the clients. They're listed in mtab, but if I ls or try to access the directories, I get a permission denied error. I have to manually unmount the shares and then remount them to be able to see/use them.
    The reason I think this is an Arch package problem is that I set up a share on the Debian box to share with Arch, and that worked perfectly. When I rebooted Debian as the server, the shares were automatically remounted on the Arch client, and when I rebooted Arch, the shares were again mounted properly on reboot.
    It's possible I'm doing something wrong with permissions, but it seems unlikely because 1) everything was working fine for a long time until recently, when I started noticing the behavior, 2) all the permissions on the shared directory are identical to the ones on the Arch shared directory, all user name UIDs are the same, same groups and GIDs, etc., 3) the shares mount perfectly well manually from the command line, and 4) I set up the Debian share/exports etc. in about 2 minutes with no problem at all, while I've been dealing with this problem on Arch for 2 days now, after changing options and going over everything multiple times until my head is spinning. It just seems unlikely that a configuration is wrong, although I guess anything is possible. I can provide all the permissions/group info, fstab info, /etc/exports info, etc. if anyone wants to take a closer look at it.
    So until this is sorted, I wondered if anyone else is having this problem, or if anyone has ideas about something I might be overlooking. Again, everything *seems* to be set up right, but maybe there's some Arch-specific thing I'm missing. Thanks.

    OK, out of pure frustration I just grabbed the Gentoo init script, installed start-stop-daemon, modified the script to run as #!/bin/bash, stuck it in /etc/rc.d, rebooted, and everything works: I can reboot my computer and clients reconnect, or restart the daemon and they reconnect.
    Here's the script I am using.
    #!/bin/bash
    # Copyright 1999-2005 Gentoo Foundation
    # Distributed under the terms of the GNU General Public License v2
    # $Header: /var/cvsroot/gentoo-x86/net-fs/nfs-utils/files/nfs,v 1.14 2007/03/24 10:14:43 vapier Exp $
    # This script starts/stops the following
    #   rpc.statd if necessary (also checked by init.d/nfsmount)
    #   rpc.rquotad if exists (from quota package)
    #   rpc.nfsd
    #   rpc.mountd
    # NB: Config is in /etc/conf.d/nfs
    opts="reload"

    # This variable is used for controlling whether or not to run exportfs -ua;
    # see svc_stop() for more information
    restarting=no

    # The binary locations
    exportfs=/usr/sbin/exportfs
    gssd=/usr/sbin/rpc.gssd
    idmapd=/usr/sbin/rpc.idmapd
    mountd=/usr/sbin/rpc.mountd
    nfsd=/usr/sbin/rpc.nfsd
    rquotad=/usr/sbin/rpc.rquotad
    statd=/usr/sbin/rpc.statd
    svcgssd=/usr/sbin/rpc.svcgssd

    mkdir_nfsdirs() {
        local d
        for d in /var/lib/nfs/{rpc_pipefs,v4recovery,v4root} ; do
            [[ ! -d ${d} ]] && mkdir -p "${d}"
        done
    }

    mount_pipefs() {
        if grep -q rpc_pipefs /proc/filesystems ; then
            if ! grep -q "rpc_pipefs /var/lib/nfs/rpc_pipefs" /proc/mounts ; then
                mount -t rpc_pipefs rpc_pipefs /var/lib/nfs/rpc_pipefs
            fi
        fi
    }

    umount_pipefs() {
        if [[ ${restarting} == "no" ]] ; then
            if grep -q "rpc_pipefs /var/lib/nfs/rpc_pipefs" /proc/mounts ; then
                umount /var/lib/nfs/rpc_pipefs
            fi
        fi
    }

    start_gssd() {
        [[ ! -x ${gssd} || ! -x ${svcgssd} ]] && return 0
        local ret1 ret2
        ${gssd} ${RPCGSSDDOPTS}
        ret1=$?
        ${svcgssd} ${RPCSVCGSSDDOPTS}
        ret2=$?
        return $((${ret1} + ${ret2}))
    }

    stop_gssd() {
        [[ ! -x ${gssd} || ! -x ${svcgssd} ]] && return 0
        local ret1 ret2
        start-stop-daemon --stop --quiet --exec ${gssd}
        ret1=$?
        start-stop-daemon --stop --quiet --exec ${svcgssd}
        ret2=$?
        return $((${ret1} + ${ret2}))
    }

    start_idmapd() {
        [[ ! -x ${idmapd} ]] && return 0
        ${idmapd} ${RPCIDMAPDOPTS}
    }

    stop_idmapd() {
        [[ ! -x ${idmapd} ]] && return 0
        local ret
        start-stop-daemon --stop --quiet --exec ${idmapd}
        ret=$?
        umount_pipefs
        return ${ret}
    }

    start_statd() {
        # Don't start rpc.statd if already started by init.d/nfsmount
        killall -0 rpc.statd &>/dev/null && return 0
        start-stop-daemon --start --quiet --exec \
            $statd -- $RPCSTATDOPTS 1>&2
    }

    stop_statd() {
        # Don't stop rpc.statd if it's in use by init.d/nfsmount.
        mount -t nfs | grep -q . && return 0
        # Make sure it's actually running
        killall -0 rpc.statd &>/dev/null || return 0
        # Okay, all tests passed, stop rpc.statd
        start-stop-daemon --stop --quiet --exec $statd 1>&2
    }

    waitfor_exportfs() {
        local pid=$1
        ( sleep ${EXPORTFSTIMEOUT:-30}; kill -9 $pid &>/dev/null ) &
        wait $pid
    }

    svc_start() {
        # Make sure nfs support is loaded in the kernel #64709
        if [[ -e /proc/modules ]] && ! grep -qs nfsd /proc/filesystems ; then
            modprobe nfsd &> /dev/null
        fi
        # This is the new "kernel 2.6 way" to handle the exports file
        if grep -qs nfsd /proc/filesystems ; then
            if ! grep -qs "^nfsd[[:space:]]/proc/fs/nfsd[[:space:]]" /proc/mounts ; then
                mount -t nfsd nfsd /proc/fs/nfsd
            fi
        fi
        # now that nfsd is mounted inside /proc, we can safely start mountd later
        mkdir_nfsdirs
        mount_pipefs
        start_idmapd
        start_gssd
        start_statd
        # Exportfs likes to hang if networking isn't working.
        # If that's the case, then try to kill it so the
        # bootup process can continue.
        if grep -q '^/' /etc/exports &>/dev/null; then
            $exportfs -r 1>&2 &
            waitfor_exportfs $!
        fi
        if [ -x $rquotad ]; then
            start-stop-daemon --start --quiet --exec \
                $rquotad -- $RPCRQUOTADOPTS 1>&2
        fi
        start-stop-daemon --start --quiet --exec \
            $nfsd --name nfsd -- $RPCNFSDCOUNT 1>&2
        # Start mountd
        start-stop-daemon --start --quiet --exec \
            $mountd -- $RPCMOUNTDOPTS 1>&2
    }

    svc_stop() {
        # Don't check NFSSERVER variable since it might have changed,
        # instead use --oknodo to smooth things over
        start-stop-daemon --stop --quiet --oknodo \
            --exec $mountd 1>&2
        # nfsd sets its process name to [nfsd] so don't look for $nfsd
        start-stop-daemon --stop --quiet --oknodo \
            --name nfsd --user root --signal 2 1>&2
        if [ -x $rquotad ]; then
            start-stop-daemon --stop --quiet --oknodo \
                --exec $rquotad 1>&2
        fi
        # When restarting the NFS server, running "exportfs -ua" probably
        # isn't what the user wants. Running it causes all entries listed
        # in xtab to be removed from the kernel export tables, and the
        # xtab file is cleared. This effectively shuts down all NFS
        # activity, leaving all clients holding stale NFS filehandles,
        # *even* when the NFS server has restarted.
        # That's what you would want if you were shutting down the NFS
        # server for good, or for a long period of time, but not when the
        # NFS server will be running again in short order. In this case,
        # then "exportfs -r" will reread the xtab, and all the current
        # clients will be able to resume NFS activity, *without* needing
        # to umount/(re)mount the filesystem.
        if [ "$restarting" = no ]; then
            # Exportfs likes to hang if networking isn't working.
            # If that's the case, then try to kill it so the
            # shutdown process can continue.
            $exportfs -ua 1>&2 &
            waitfor_exportfs $!
        fi
        stop_statd
        stop_gssd
        stop_idmapd
        umount_pipefs
    }

    case "$1" in
        start)
            svc_start
            ;;
        stop)
            svc_stop
            ;;
        reload)
            # Exportfs likes to hang if networking isn't working.
            # If that's the case, then try to kill it so the
            # bootup process can continue.
            $exportfs -r 1>&2 &
            waitfor_exportfs $!
            ;;
        restart)
            # See the long comment in svc_stop() regarding "restarting" and exportfs -ua
            restarting=yes
            svc_stop
            svc_start
            ;;
        *)
            echo "usage: $0 {start|stop|restart|reload}"
            ;;
    esac
    exit 0

  • DNS Fails for NFS Server Shares

    When I boot, I get a message that DNS has failed for the NFS server mounts, and the shares do not mount. The message says, "mount.nfs: DNS resolution failed for server: name or service unknown." I have to mount the shares myself. Then when rebooting, I get the same error saying it can't unmount the shares.
    this is /etc/resolv.conf:
    $ cat /etc/resolv.conf
    # Generated by dhcpcd from eth0
    # /etc/resolv.conf.head can replace this line
    nameserver 208.67.222.222
    nameserver 208.67.220.220
    # /etc/resolv.conf.tail can replace this line
    this is /etc/conf.d/nfs:
    # Number of servers to be started up by default
    NFSD_OPTS=8
    # Options to pass to rpc.mountd
    # e.g. MOUNTDOPTS="-p 32767"
    MOUNTD_OPTS="--no-nfs-version 1 --no-nfs-version 2"
    # Options to pass to rpc.statd
    # N.B. statd normally runs on both client and server, and run-time
    # options should be specified accordingly. Specifically, the Arch
    # NFS init scripts require the --no-notify flag on the server,
    # but not on the client e.g.
    # STATD_OPTS="--no-notify -p 32765 -o 32766" -> server
    # STATD_OPTS="-p 32765 -o 32766" -> client
    STATD_OPTS=""
    # Options to pass to sm-notify
    # e.g. SMNOTIFY_OPTS="-p 32764"
    SMNOTIFY_OPTS=""
    Do I need to add some option to rpc.statd, or is there some other misconfiguration there? AFAIK it is the default. What else should I look at to fix this? I can ping the server by name, and log in with ssh by name, just fine. It's only the nfs that is failing with DNS.

    airman99 wrote:
    Yahoo! Good news, I've finally solved the problem on my laptop. The message I was receiving turned out merely to be a network timing issue.
    The error I was receiving was exactly correct and informative. When /etc/rc.d/netfs ran and executed a 'mount -a -t nfs...' the network was indeed NOT reachable. I am running networkmanager, and apparently during bootup, networkmanager gets loaded, but there is a delay between when networkmanager is loaded and when the network is available. In other words, networkmanager allows the boot process to continue before the network is available.
    My daemons are loaded in this order (rc.conf):
    DAEMONS=(syslog-ng hal dhcdbd networkmanager crond cups ntpdate ntpd portmap nfslock netfs)
    Consequently, if I add a delay to /etc/rc.d/netfs to allow time for the network to come up, then when the NFS shares are mounted, the network is up. In my case I had to add a 3 second delay.
    sleep 3
    I'm sure this isn't the best way to solve the problem, by editing the system file /etc/rc.d/netfs, because the next upgrade where changes occur to netfs, my fix will get overwritten. But I'll keep it until I figure out the "right" fix.
    The real solution is to not load networkmanager in the background, but to force startup to wait for the network to be up before continuing.
    there is the _netdev option you can use in fstab, but that doesn't always work:
    http://linux.die.net/man/8/mount
    _netdev
        The filesystem resides on a device that requires network access (used to prevent the system from attempting to mount these filesystems until the network has been enabled on the system).
    Alternatively, you could just add a cronjob to do a mount -a with a sleep 20 in there or something. You might have to play with the sleep value a little to make sure it's long enough; a sketch of both options follows below.
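    With placeholder names (the server, export, mount point and 20-second delay are all assumptions to adjust for your setup, and @reboot needs a cron that supports it):
    # /etc/fstab - mark the filesystem as needing the network
    server:/srv/nfs  /mnt/nfs  nfs  _netdev  0  0
    # root crontab - retry NFS mounts shortly after boot
    @reboot sleep 20 && mount -a -t nfs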

  • [SOLVED] Can't start NFS server

    I've read the wiki page and I've been changing my config files accordingly. When I try to start it I get the following error (from journalctl):
    Nov 02 21:41:47 arch sudo[11201]: dennis : TTY=pts/0 ; PWD=/home/dennis ; USER=root ; COMMAND=/usr/bin/systemctl start nfsd.
    service rpc-idmapd.service rpc-mountd.service rpcbind.service
    Nov 02 21:41:47 arch sudo[11201]: pam_unix(sudo:session): session opened for user root by dennis(uid=0)
    Nov 02 21:41:47 arch systemd[1]: Starting NFS server...
    Nov 02 21:41:47 arch systemd[1]: Mounting RPC pipe filesystem...
    Nov 02 21:41:47 arch rpc.nfsd[11204]: rpc.nfsd: Unable to access /proc/fs/nfsd errno 2 (No such file or directory).
    Nov 02 21:41:47 arch rpc.nfsd[11204]: Please try, as root, 'mount -t nfsd nfsd /proc/fs/nfsd' and then restart rpc.nfsd to correct the problem
    Nov 02 21:41:47 arch rpc.nfsd[11204]: error starting threads: errno 38 (Function not implemented)
    Nov 02 21:41:47 arch systemd[1]: Started RPC Bind.
    Nov 02 21:41:47 arch systemd[1]: nfsd.service: main process exited, code=exited, status=1/FAILURE
    Nov 02 21:41:47 arch mount[11205]: mount: unknown filesystem type 'rpc_pipefs'
    Nov 02 21:41:47 arch systemd[1]: Failed to start NFS server.
    Nov 02 21:41:47 arch systemd[1]: Dependency failed for NFS Mount Daemon.
    Nov 02 21:41:47 arch systemd[1]: Job rpc-mountd.service/start failed with result 'dependency'.
    Nov 02 21:41:47 arch systemd[1]: Unit nfsd.service entered failed state
    Nov 02 21:41:47 arch systemd[1]: var-lib-nfs-rpc_pipefs.mount mount process exited, code=exited status=32
    Nov 02 21:41:47 arch systemd[1]: Failed to mount RPC pipe filesystem.
    Nov 02 21:41:47 arch systemd[1]: Dependency failed for NFSv4 ID-name mapping daemon.
    Nov 02 21:41:47 arch systemd[1]: Job rpc-idmapd.service/start failed with result 'dependency'.
    Nov 02 21:41:47 arch systemd[1]: Unit var-lib-nfs-rpc_pipefs.mount entered failed state
    Nov 02 21:41:47 arch sudo[11201]: pam_unix(sudo:session): session closed for user root
    When I run mount -t nfsd nfsd /proc/fs/nfsd it says:
    [dennis@arch ~]$ sudo mount -t nfsd nfsd /proc/fs/nfsd
    mount: unknown filesystem type 'nfsd'
    What to do? I've no idea how to solve this.
    edit : changed quote tags to code --Inxsible
    Last edited by snufkin (2012-11-03 07:50:52)

    alphaniner wrote:What happens if you just run systemctl start nfsd.service ?
    Then I got another error message.
    I found out what was wrong though. I had suspended the computer, which caused GNOME or whatever to mess with my session data. A reboot fixed the problem, although I'm sure a relog would've done the same.
    Last edited by snufkin (2012-11-03 07:51:13)
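    For later readers: "unknown filesystem type 'nfsd'" (and the matching rpc_pipefs failure above) typically just means the nfsd and sunrpc kernel modules aren't loaded or no longer match the running kernel, so before rebooting it can be worth checking something like:
    $ lsmod | grep -E 'nfsd|sunrpc'
    $ sudo modprobe nfsd
    $ grep nfsd /proc/filesystems    # should list "nodev nfsd" once the module is in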

  • NFS: server reboot - client problem with "file handle" - no unmont

    Hello,
    when I restart an NFS server while a client still has an NFS share from it mounted, that NFS share becomes unusable on the client: everything makes the client complain about a stale "file handle", even trying to unmount that share!
    So my question is: how do I deal gracefully with an NFS server reboot and avoid this file-handle problem,
    apart from going to each client and unmounting the NFS share by hand?
    Thanks!

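    For the narrow "I can't even unmount it" part, the usual escape hatch on the client is a forced or, failing that, lazy unmount followed by a remount once the server is back. A sketch; /mnt/share is a placeholder and the final mount assumes an fstab entry for the share:
    $ umount -f /mnt/share    # force; may still fail if the handle is wedged
    $ umount -l /mnt/share    # lazy: detach now, clean up when no longer busy
    $ mount /mnt/share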

  • Testing ha-nfs in two node cluster (cannot statvfs /global/nfs: I/O error )

    Hi all,
    I am testing HA-NFS (failover) on a two-node cluster. I have a Sun Fire V240, an E250 and Netra ST A1000/D1000 storage. I have installed Solaris 10 Update 6 and the cluster packages on both nodes.
    I have created one global file system (/dev/did/dsk/d4s7) and mounted it as /global/nfs. This file system is accessible from both nodes. I have configured HA-NFS according to the document Sun Cluster Data Service for NFS Guide for Solaris, using the command-line interface.
    The logical host pings from the NFS client, and I have mounted the share there using the logical hostname. For testing purposes I took one machine down. After this step the file system gives an I/O error (on server and client), and when I run the df command it shows
    df: cannot statvfs /global/nfs: I/O error.
    I configured it with the following commands.
    #clnode status
    # mkdir -p /global/nfs
    # clresourcegroup create -n test1,test2 -p Pathprefix=/global/nfs rg-nfs
    I have added the logical hostname and IP address to /etc/hosts
    I have commented out the hosts and rpc lines in /etc/nsswitch.conf
    # clreslogicalhostname create -g rg-nfs -h ha-host-1 -N
    sc_ipmp0@test1, sc_ipmp0@test2 ha-host-1
    # mkdir /global/nfs/SUNW.nfs
    Created one file called dfstab.user-home in /global/nfs/SUNW.nfs; that file contains the following line:
    share -F nfs -o rw /global/nfs
    # clresourcetype register SUNW.nfs
    # clresource create -g rg-nfs -t SUNW.nfs user-home
    # clresourcegroup online -M rg-nfs
    Where did I go wrong? Can anyone provide a document on this?
    Any help?
    Thanks in advance.

    test1#  tail -20 /var/adm/messages
    Feb 28 22:28:54 testlab5 Cluster.SMF.DR: [ID 344672 daemon.error] Unable to open door descriptor /var/run/rgmd_receptionist_door
    Feb 28 22:28:54 testlab5 Cluster.SMF.DR: [ID 801855 daemon.error]
    Feb 28 22:28:54 testlab5 Error in scha_cluster_get
    Feb 28 22:28:54 testlab5 Cluster.scdpmd: [ID 489913 daemon.notice] The state of the path to device: /dev/did/rdsk/d5s0 has changed to OK
    Feb 28 22:28:54 testlab5 Cluster.scdpmd: [ID 489913 daemon.notice] The state of the path to device: /dev/did/rdsk/d6s0 has changed to OK
    Feb 28 22:28:58 testlab5 svc.startd[8]: [ID 652011 daemon.warning] svc:/system/cluster/scsymon-srv:default: Method "/usr/cluster/lib/svc/method/svc_scsymon_srv start" failed with exit status 96.
    Feb 28 22:28:58 testlab5 svc.startd[8]: [ID 748625 daemon.error] system/cluster/scsymon-srv:default misconfigured: transitioned to maintenance (see 'svcs -xv' for details)
    Feb 28 22:29:23 testlab5 Cluster.RGM.rgmd: [ID 537175 daemon.notice] CMM: Node e250 (nodeid: 1, incarnation #: 1235752006) has become reachable.
    Feb 28 22:29:23 testlab5 Cluster.RGM.rgmd: [ID 525628 daemon.notice] CMM: Cluster has reached quorum.
    Feb 28 22:29:23 testlab5 Cluster.RGM.rgmd: [ID 377347 daemon.notice] CMM: Node e250 (nodeid = 1) is up; new incarnation number = 1235752006.
    Feb 28 22:29:23 testlab5 Cluster.RGM.rgmd: [ID 377347 daemon.notice] CMM: Node testlab5 (nodeid = 2) is up; new incarnation number = 1235840337.
    Feb 28 22:37:15 testlab5 Cluster.CCR: [ID 499775 daemon.notice] resource group rg-nfs added.
    Feb 28 22:39:05 testlab5 Cluster.RGM.rgmd: [ID 375444 daemon.notice] 8 fe_rpc_command: cmd_type(enum):<5>:cmd=<null>:tag=<>: Calling security_clnt_connect(..., host=<testlab5>, sec_type {0:WEAK, 1:STRONG, 2:DES} =<1>, ...)
    Feb 28 22:39:05 testlab5 Cluster.CCR: [ID 491081 daemon.notice] resource ha-host-1 removed.
    Feb 28 22:39:17 testlab5 Cluster.RGM.rgmd: [ID 375444 daemon.notice] 8 fe_rpc_command: cmd_type(enum):<5>:cmd=<null>:tag=<>: Calling security_clnt_connect(..., host=<testlab5>, sec_type {0:WEAK, 1:STRONG, 2:DES} =<1>, ...)
    Feb 28 22:39:17 testlab5 Cluster.CCR: [ID 254131 daemon.notice] resource group nfs-rg removed.
    Feb 28 22:39:30 testlab5 Cluster.RGM.rgmd: [ID 224900 daemon.notice] launching method <hafoip_validate> for resource <ha-host-1>, resource group <rg-nfs>, node <testlab5>, timeout <300> seconds
    Feb 28 22:39:30 testlab5 Cluster.RGM.rgmd: [ID 375444 daemon.notice] 8 fe_rpc_command: cmd_type(enum):<1>:cmd=</usr/cluster/lib/rgm/rt/hafoip/hafoip_validate>:tag=<rg-nfs.ha-host-1.2>: Calling security_clnt_connect(..., host=<testlab5>, sec_type {0:WEAK, 1:STRONG, 2:DES} =<1>, ...)
    Feb 28 22:39:30 testlab5 Cluster.RGM.rgmd: [ID 515159 daemon.notice] method <hafoip_validate> completed successfully for resource <ha-host-1>, resource group <rg-nfs>, node <testlab5>, time used: 0% of timeout <300 seconds>
    Feb 28 22:39:30 testlab5 Cluster.CCR: [ID 973933 daemon.notice] resource ha-host-1 added.

  • Nfs server not responding, hanging startup after 10.10.3 update

    The system boots up fine and I enter my password, then it sits for over 6 minutes with a spinning wheel. The console shows:
    4/16/15 10:35:30.000 AM kernel[0]: nfs server localhost:/zz0Lo64HS_Mb5IMJ_JdkXi: not responding
    4/16/15 10:35:34.214 AM netbiosd[232]: name servers down?
    4/16/15 10:36:01.000 AM kernel[0]: nfs server localhost:/zz0Lo64HS_Mb5IMJ_JdkXi: not responding
    4/16/15 10:36:22.000 AM kernel[0]: nfs server localhost:/zz0Lo64HS_Mb5IMJ_JdkXi: dead
    4/16/15 10:37:33.515 AM apsd[77]: Reporting com.apple.main-thread is hung
    4/16/15 10:41:48.000 AM kernel[0]: nfs reconnect localhost:/zz0Lo64HS_Mb5IMJ_JdkXi: returned 4
    4/16/15 10:41:48.557 AM KernelEventAgent[96]: tid 54485244 received event(s) VQ_DEAD (32)
    4/16/15 10:41:48.557 AM KernelEventAgent[96]: tid 54485244 type 'mtmfs', mounted on '/Volumes/MobileBackups', from 'localhost:/zz0Lo64HS_Mb5IMJ_JdkXi', dead
    4/16/15 10:41:48.558 AM KernelEventAgent[96]: tid 54485244 force unmount localhost:/zz0Lo64HS_Mb5IMJ_JdkXi from /Volumes/MobileBackups
    4/16/15 10:41:48.559 AM KernelEventAgent[96]: tid 54485244 found 1 filesystem(s) with problem(s)
    After this the system carries on like normal. I'm not sure where MobileBackups came from; it seems to be a corrupt file somewhere. How do I fix this?

    Triple-click anywhere in the line below on this page to select it:
    /Library/Keychains
    Right-click or control-click the line and select
              Services ▹ Reveal in Finder (or just Reveal)
    from the contextual menu.* A folder should open with a subfolder named "Keychains" selected.
    Open the Info window. In the General section at the top, is the box marked Locked checked?
    Close the Info window. Inside the Keychains folder, there should be a file named exactly "apsd.keychain". Is that file present? Are there any files with a name such as "apsd.keychain.something"?
    *If you don't see the contextual menu item, copy the selected text to the Clipboard by pressing the key combination  command-C. In the Finder, select
              Go ▹ Go to Folder...
    from the menu bar and paste into the box that opens by pressing command-V. You won't see what you pasted because a line break is included. Press return.
