Possible bug in the Arch NFS server package?

I run the NFS server on my Arch box, exporting to a Debian box and an Arch laptop. Whenever the Arch server reboots lately, the shares don't automatically remount properly on the clients. They're listed in mtab, but if I ls or try to access the directories, I get a permission denied error. I have to manually unmount the shares and then remount them to be able to see/use them.
The reason I think this is an Arch package problem is that I set up a share on the Debian box to share with Arch, and that worked perfectly. When I rebooted Debian as the server, the shares were automatically remounted on the Arch client, and when I rebooted Arch, the shares were again mounted properly on reboot.
It's possible I'm doing something wrong with permissions, but it seems unlikely because: 1) everything was working fine for a long time until recently, when I started noticing the behavior; 2) all the permissions on the shared directory are identical to the ones on the Arch shared directory, all usernames/UIDs are the same, same groups and GIDs, etc.; 3) the shares mount perfectly well manually from the command line; and 4) I set up the Debian share/exports in about 2 minutes with no problem at all, while I've been dealing with this problem on Arch for 2 days now, changing options and going over everything multiple times until my head is spinning. It just seems unlikely that the configuration is wrong, although I guess anything is possible. I can provide all the permissions/group info, fstab info, /etc/exports info, etc. if anyone wants to take a closer look at it.
So until this is sorted, I wondered if anyone else is having this problem, or if anyone has ideas about something I might be overlooking. Again, everything *seems* to be set up right, but maybe there's some Arch-specific thing I'm missing. Thanks.

OK, out of pure frustration I just grabbed the Gentoo init script, installed start-stop-daemon, changed the script to run as #!/bin/bash, stuck it in /etc/rc.d, rebooted, and everything works. I can reboot my computer and the clients reconnect, or restart the daemon and they reconnect.
Here's the script I am using.
#!/bin/bash
# Copyright 1999-2005 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
# $Header: /var/cvsroot/gentoo-x86/net-fs/nfs-utils/files/nfs,v 1.14 2007/03/24 10:14:43 vapier Exp $
# This script starts/stops the following
# rpc.statd if necessary (also checked by init.d/nfsmount)
# rpc.rquotad if exists (from quota package)
# rpc.nfsd
# rpc.mountd
# NB: Config is in /etc/conf.d/nfs
opts="reload"
# This variable is used for controlling whether or not to run exportfs -ua;
# see stop() for more information
restarting=no
# The binary locations
exportfs=/usr/sbin/exportfs
gssd=/usr/sbin/rpc.gssd
idmapd=/usr/sbin/rpc.idmapd
mountd=/usr/sbin/rpc.mountd
nfsd=/usr/sbin/rpc.nfsd
rquotad=/usr/sbin/rpc.rquotad
statd=/usr/sbin/rpc.statd
svcgssd=/usr/sbin/rpc.svcgssd
mkdir_nfsdirs() {
    local d
    for d in /var/lib/nfs/{rpc_pipefs,v4recovery,v4root} ; do
        [[ ! -d ${d} ]] && mkdir -p "${d}"
    done
}

mount_pipefs() {
    if grep -q rpc_pipefs /proc/filesystems ; then
        if ! grep -q "rpc_pipefs /var/lib/nfs/rpc_pipefs" /proc/mounts ; then
            mount -t rpc_pipefs rpc_pipefs /var/lib/nfs/rpc_pipefs
        fi
    fi
}

umount_pipefs() {
    if [[ ${restarting} == "no" ]] ; then
        if grep -q "rpc_pipefs /var/lib/nfs/rpc_pipefs" /proc/mounts ; then
            umount /var/lib/nfs/rpc_pipefs
        fi
    fi
}

start_gssd() {
    [[ ! -x ${gssd} || ! -x ${svcgssd} ]] && return 0
    local ret1 ret2
    ${gssd} ${RPCGSSDDOPTS}
    ret1=$?
    ${svcgssd} ${RPCSVCGSSDDOPTS}
    ret2=$?
    return $((ret1 + ret2))
}

stop_gssd() {
    [[ ! -x ${gssd} || ! -x ${svcgssd} ]] && return 0
    local ret1 ret2
    start-stop-daemon --stop --quiet --exec ${gssd}
    ret1=$?
    start-stop-daemon --stop --quiet --exec ${svcgssd}
    ret2=$?
    return $((ret1 + ret2))
}

start_idmapd() {
    [[ ! -x ${idmapd} ]] && return 0
    ${idmapd} ${RPCIDMAPDOPTS}
}

stop_idmapd() {
    [[ ! -x ${idmapd} ]] && return 0
    local ret
    start-stop-daemon --stop --quiet --exec ${idmapd}
    ret=$?
    umount_pipefs
    return ${ret}
}

start_statd() {
    # Don't start rpc.statd if already started by init.d/nfsmount
    killall -0 rpc.statd &>/dev/null && return 0
    start-stop-daemon --start --quiet --exec \
        $statd -- $RPCSTATDOPTS 1>&2
}

stop_statd() {
    # Don't stop rpc.statd if it's in use by init.d/nfsmount.
    mount -t nfs | grep -q . && return 0
    # Make sure it's actually running
    killall -0 rpc.statd &>/dev/null || return 0
    # Okay, all tests passed, stop rpc.statd
    start-stop-daemon --stop --quiet --exec $statd 1>&2
}

waitfor_exportfs() {
    local pid=$1
    ( sleep ${EXPORTFSTIMEOUT:-30}; kill -9 $pid &>/dev/null ) &
    wait $pid
}

start() {
    # Make sure nfs support is loaded in the kernel #64709
    if [[ -e /proc/modules ]] && ! grep -qs nfsd /proc/filesystems ; then
        modprobe nfsd &> /dev/null
    fi
    # This is the new "kernel 2.6 way" to handle the exports file
    if grep -qs nfsd /proc/filesystems ; then
        if ! grep -qs "^nfsd[[:space:]]/proc/fs/nfsd[[:space:]]" /proc/mounts ; then
            mount -t nfsd nfsd /proc/fs/nfsd
        fi
    fi
    # now that nfsd is mounted inside /proc, we can safely start mountd later
    mkdir_nfsdirs
    mount_pipefs
    start_idmapd
    start_gssd
    start_statd
    # Exportfs likes to hang if networking isn't working.
    # If that's the case, then try to kill it so the
    # bootup process can continue.
    if grep -q '^/' /etc/exports &>/dev/null; then
        $exportfs -r 1>&2 &
        waitfor_exportfs $!
    fi
    if [ -x $rquotad ]; then
        start-stop-daemon --start --quiet --exec \
            $rquotad -- $RPCRQUOTADOPTS 1>&2
    fi
    start-stop-daemon --start --quiet --exec \
        $nfsd --name nfsd -- $RPCNFSDCOUNT 1>&2
    # Start mountd
    start-stop-daemon --start --quiet --exec \
        $mountd -- $RPCMOUNTDOPTS 1>&2
}

stop() {
    # Don't check NFSSERVER variable since it might have changed,
    # instead use --oknodo to smooth things over
    start-stop-daemon --stop --quiet --oknodo \
        --exec $mountd 1>&2
    # nfsd sets its process name to [nfsd] so don't look for $nfsd
    start-stop-daemon --stop --quiet --oknodo \
        --name nfsd --user root --signal 2 1>&2
    if [ -x $rquotad ]; then
        start-stop-daemon --stop --quiet --oknodo \
            --exec $rquotad 1>&2
    fi
    # When restarting the NFS server, running "exportfs -ua" probably
    # isn't what the user wants. Running it causes all entries listed
    # in xtab to be removed from the kernel export tables, and the
    # xtab file is cleared. This effectively shuts down all NFS
    # activity, leaving all clients holding stale NFS filehandles,
    # *even* when the NFS server has restarted.
    # That's what you would want if you were shutting down the NFS
    # server for good, or for a long period of time, but not when the
    # NFS server will be running again in short order. In this case,
    # "exportfs -r" will reread the xtab, and all the current
    # clients will be able to resume NFS activity, *without* needing
    # to umount/(re)mount the filesystem.
    if [ "$restarting" = no ]; then
        # Exportfs likes to hang if networking isn't working.
        # If that's the case, then try to kill it so the
        # shutdown process can continue.
        $exportfs -ua 1>&2 &
        waitfor_exportfs $!
    fi
    stop_statd
    stop_gssd
    stop_idmapd
    umount_pipefs
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    reload)
        # Exportfs likes to hang if networking isn't working.
        # If that's the case, then try to kill it so the
        # bootup process can continue.
        $exportfs -r 1>&2 &
        waitfor_exportfs $!
        ;;
    restart)
        # See long comment in stop() regarding "restarting" and exportfs -ua
        restarting=yes
        stop
        start
        ;;
    *)
        echo "usage: $0 {start|stop|restart|reload}"
        ;;
esac
exit 0
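Worth noting: the waitfor_exportfs trick in the script is a generic watchdog pattern: run the command in the background, start a second background job that kills it after a timeout, then wait on the command. A standalone sketch of the same idea (the helper name and timeouts are mine, not from the script):

```shell
#!/bin/bash
# Watchdog sketch: run a command, but kill -9 it if it outlives the timeout.
# run_with_timeout is a hypothetical helper, not part of the init script.
run_with_timeout() {
    local timeout=$1; shift
    "$@" &                       # run the real command in the background
    local pid=$!
    ( sleep "$timeout"; kill -9 "$pid" 2>/dev/null ) &   # the watchdog
    local watchdog=$!
    wait "$pid"                  # command's exit status, or 128+9 if killed
    local ret=$?
    kill "$watchdog" 2>/dev/null # cancel the watchdog if we finished in time
    return $ret
}

run_with_timeout 5 true && echo "finished in time"
run_with_timeout 1 sleep 10 || echo "killed by watchdog"
```

The script uses the same shape with `sleep ${EXPORTFSTIMEOUT:-30}` so a hung exportfs can't stall the whole boot.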

Similar Messages

  • AUR has a bug in the way it versions packages. [WORKED AROUND]

    If you check out my PKGBUILD in the AUR (http://aur.archlinux.org/packages.php?ID=29410), you'll notice its version on the page is "smooth-tasks-pkgver.txt", while the ${pkgver} variable in its PKGBUILD is "wip_2009_09_13".  `smooth-tasks-pkgver.txt' is a file the PKGBUILD uses to temporarily store the correct ${pkgver} of the package (not its mercurial revision), which is then used to update the arch package and PKGBUILD.
    It was an experiment to see how I could manipulate makepkg's standard versioning scheme to reflect the actual author's package version rather than the mercurial revision, and it works!  AUR just seems to have an issue with the way it parses the PKGBUILD.
    I figured it would be best to discuss this on the forums before filing an actual bug report.
    EDIT: changed the title from "... names packages" to "... versions packages", and edited the post body to correct my original misconception the bug lies in how the AUR names packages, and not versions them.
    Last edited by deltaecho (2009-09-13 21:48:06)

    I don't understand your logic, what is `[ 1 ]' for?  It reminds me of the pieces of code I occasionally come across that look something like:
    if (true) {
        /* Do something here */
    }
    I reckon it makes sense to someone; I've just never understood the logic behind such blocks.
    The above snippet of code is one way of creating a block comment within Bash scripts, since whatever is located between the `EOF' tags is piped to nowhere (and thus isn't displayed); by placing a block comment at the end of the PKGBUILD containing an explicit ${pkgver} declaration, I am able to dictate to the AUR what the package version should be, and still have the segment of code ignored by `makepkg' when the script is executed.
    EDIT: Unless you mean to use something like `(( 0 ))', which Bash will evaluate to false and thus ignore the proposition.  That would indeed make sense, but, in my opinion, isn't really any clearer than my implementation.
    Last edited by deltaecho (2009-09-13 23:22:10)
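For reference, the heredoc "block comment" trick described above looks something like this (the pkgver value is made up for illustration):

```shell
# A no-op heredoc: everything up to EOF is fed to ':' and discarded when
# the script actually runs, but a tool scanning the file line by line
# (as the AUR apparently does) still sees the pkgver= assignment inside.
: <<'EOF'
pkgver=1.2.3
EOF
echo "script continues normally"
```

When executed, the assignment inside the heredoc never takes effect; only a line-oriented parser would pick it up.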

  • Is this a possible bug on the Facebook app for iPhone?

    I tend to suffer from slight paranoia regarding security online and on my phone. I'll try to explain the worry I'm having as best I can.
    Basically, I had a personal video on my phone, (nothing horrible don't worry), but it's of a nature that would be embarrassing to be seen by other people.
    So this video was the latest thing I had recorded/taken with my phone. I went onto the Facebook app on my iPhone to post an older picture I had to my page.
    When I saw that this video was still on my phone, I panicked and worried that it could somehow get posted to my page without me actually posting it e.g. a bug whereby the last picture or video taken would go onto your Facebook without you actually selecting it or posting it. So I deleted it while still on the grid page where you choose your photos/videos and so it disappeared when I went back to the app.
    Can anyone with 100% honesty tell me one way or another if this is possible? If I haven't explained it enough let me know and I'll explain it further. Again, please be honest cos it's the best thing for me. Thanks.

    It may be in Facebook, but Facebook doesn't consider it a bug; it's the way Facebook works. If you gave the app permission to view your photos it gives Facebook access to ALL of your photos. To put your mind at ease go to your Facebook page on your computer and see if it's there.
    If you are concerned about privacy online you probably shouldn't have a Facebook page

  • HT4814 Is it possible to manage the mac mini server with server app before set up?

    The article says that using the Server app I need to enter the IP address, then the administrator name and password, but obviously on the first boot these are not configured yet. The article does not mention whether this is possible, or whether Server.app can only manage the server after its initial configuration with a monitor connected.

    From dim memory...  Load Server.app on the client box, or use Server.app on some other handy server. Then either configure DHCP with the server's MAC address (usually listed on the shipping box) and your intended static IP address and boot the new target server box, or just boot the new target server box and use Bonjour Browser or the command-line dns-sd tool to find the IP address the new server has acquired from DHCP. In either case, connect to the new target server via Connect To Server in Server.app using its IP address, specify the root user, and use the system serial number (again, usually listed on the shipping box) as the password.

  • Is it possible to make the ISE guest server redundant ?

    Hi,
    We have an ISE cluster of two ISE nodes.
    The ISE guest server works fine on the primary ISE node.
    The MAC address of the guest client is put in the 'GuestDevices' group after the user accepts the AUP policy.
    Then the ISE sends the CoA, and the client authenticates again and is put in the guest VLAN.
    But when the primary ISE is offline, I see the guest portal AUP page on the secondary ISE node.
    I can accept the AUP policy, but then I get an error message.
    On the secondary ISE I see that the CoA to the switch is sent, to clear the session to the primary ISE...
    But the CoA request should ask to clear the session to the secondary ISE (the primary ISE is offline).
    Is it possible to configure the ISE guest functionality redundantly in an ISE cluster?
    /SB

    The Guest portal can run on a node that assumes the Policy Services persona when the primary node with Administration persona is offline. However, it has the following restrictions:
    • Self-registration is not allowed
    • Device registration is not allowed
    • The AUP is shown at every login, even if "first login" is selected
    • Change Password is not allowed, and accounts are given access with the old password
    • Maximum Failed Login is not enforced
    http://www.cisco.com/en/US/docs/security/ise/1.0/user_guide/ise10_guest_pol.html#wp1126706

  • A possible bug in the Accordion scripts

    I've encountered what may be a bug in the handling of a single apostrophe in the tab of an accordion panel. When I use ' in the tagged element content displayed in the accordion tab, the character appears in the tab, but any processing of the panels stops. When I remove the ' everything is fine. When I swap ' for " (or any other special character), this also works. So I'm thinking the single quote is affecting the JavaScript somewhere along the line.
    I'm using the 1.4 release of Spry.
    BTW, all special characters in the panel content work just fine. It seems to be only the tab that is affected when the content there contains this single quote mark.
    Unless I'm totally out to lunch, or this is one of those magic things that self-corrects the minute you ask someone a question about it (I hate when that happens).
    Thanks for any help you can give me.
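If the tab text does end up inside a single-quoted JavaScript string (which is my assumption about the symptom above, not something the thread confirms), the usual workaround is to pre-escape the apostrophe as an HTML entity before it reaches the markup. A generic sketch, with a helper name of my own:

```shell
# Replace each literal apostrophe with the numeric entity &#39; so it can
# no longer terminate a single-quoted string in generated markup or JS.
escape_quotes() {
    printf '%s' "$1" | sed "s/'/\&#39;/g"
}

escape_quotes "it's the tab label"; echo
```

The same substitution can of course be done in whatever templating layer produces the accordion markup.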

    Hi solsenkp,
    I'm not seeing the problem. Can you post some sample markup?
    --== Kin ==--

  • Possible bug with the registration verification code, some follow up

    About this post I made earlier,
    https://bbs.archlinux.org/viewtopic.php?id=187286
    > I ran into this problem several times before.
    I mean that I finally made a successful registration after dozens of tries spanning the last two or three years.
    I just didn't know what went wrong, and never got around to searching for the solution.
    Luckily I figured it out today.
    And I rushed in to report it, hoping it will be fixed,
    and no more people will be put off by this again.
    After some searching, I see I am not alone. A list of titles:
    Can not register in bbs.archlinux.org
    Registration question
    Arch Forum User Registration is Broken
    [solved] Forum registration bug
    Arch forum registration question
    Forum account registration form problem
    Problem Registering for Forum -> date -u +%W$(uname)
    Forum registrations from Hawaii
    Forum registering issues, again
    Unable to register unless running linux
    Forum registration question does not work if DST is enabled
    It seems that Time Zone and DST are causing a bigger problem than they should.
    Maybe a registration captcha that isn't time-related would serve us better.

    Scimmia wrote:If you have your timezones set up correctly, this shouldn't be an issue at all.
    I don't know if it has been fixed, but when I tried to register 2 years ago, it continuously failed, even though I was quite sure I had entered everything correctly.  On a whim, I decided to check the box that said daylight savings time is in effect.  When I did that, my registration went through.  So while "shouldn't be an issue" is technically true, it was an issue for me.  I had to provide incorrect information in order to successfully register, and I only managed to do so by grasping at straws.
    For the record, daylight savings time is never in effect in Hawaii.

  • Is it possible to configure the OS X Server VPN Service to use Certificates?

    I was attempting to set up the VPN Service on OS X Server 4.0.3 (Yosemite) to use certificates instead of a private shared key.  It does not appear that the VPN Server in OS X Server is designed to use anything other than a private shared key (on the server side).  I was wondering if I was missing something?  The VPN Server works fine using the PSK (L2TP or PPTP) - I just thought I would experiment with certificates - but every example I am finding shows the PSK being used - although some of the "how to" tuturials allude to the fact that VPN certificates are supported for L2TP - but they don't provide any detail on how that functionality would be configured.  I tried creating both a VPN Server and VPN Client certificate - however - the certificates show up in the login keychain and do not appear in the certificate window in the Server app.  I was hoping that maybe the presence of a VPN Server Certificate would possibly enable an option to use it when configuring the VPN.
    ~Scott

    No, unfortunately the 'official' Apple VPN service does not have this ability. Furthermore, as Apple uses a heavily customised version of Racoon, you cannot cheat by trying to do this via the command line.
    You will have to use a completely different VPN server; Mac and iOS clients can do this, but not the Mac server side. I use StrongSwan running in a Linux virtual machine.

  • Possible Bug in "The Rules"

    After logging in through the firewall, regardless of whether I am logged into the forum or not, when I click on the link in
        "Welcome to Lenovo's Discussion Community - Please note The RULES when posting."
    I get a log-in window requiring user name and PW. Closing the window brings up an "Authorisation failed" message; using the FW PW, everything is fine.
    I know the rules are also in the Welcome forum, but I would hope most potential / new members will click on this link.
    Andy
    PS. This post has been marked as having URL and IMG, does the IMG come from the fact I copied and pasted the "Welcome" text?  
    Message Edited by andyP on 11-27-2007 04:45 PM
    Andy  ______________________________________
    Please remember to come back and mark the post that you feel solved your question as the solution, it earns the member + points
    Did you find a post helpfull? You can thank the member by clicking on the star to the left awarding them Kudos Please add your type, model number and OS to your signature, it helps to help you. Forum Search Option T430 2347-G7U W8 x64, Yoga 10 HD+, Tablet 1838-2BG, T61p 6460-67G W7 x64, T43p 2668-G2G XP, T23 2647-9LG XP, plus a few more. FYI Unsolicited Personal Messages will be ignored.
      Deutsche Community     Comunidad en Español    English Community Русскоязычное Сообщество
    PepperonI blog 

    Mark_Lenovo wrote:
    I don't encounter this issue with the rules, but have from time to time in other areas.  I expect this will cease to be an issue after Friday when the password protections are removed.
    You may not encounter this as you have admin rights; if it doesn't disappear after Friday, I'm sure I, or another member who has experienced this, will let you know.
    Mark_Lenovo wrote: As to your note above, the community management section heading was an error by applying permissions at the board vs category level.  I believe it should now be resolved.  Good catch - Thanks!
    Yes, it is resolved.

  • Problems setting up an NFS server

    Hi everybody,
    I just completed my first arch install. :-)
    I have a desktop and a laptop, and I installed Arch on the desktop (the laptop runs Ubuntu 9.10). I had a few difficulties here and there, but I now have the system up and running, and I'm very happy.
    I have a problem setting up an NFS server. With Ubuntu everything was working, so I'm assuming that the Ubuntu machine (client) is set up correctly. I'm trying to troubleshoot the Arch box (server) now.
    I followed this wiki article: http://wiki.archlinux.org/index.php/Nfs
    Now, I have these problems:
    - when I start the daemons, I get:
    [root@myhost ~]# /etc/rc.d/rpcbind start
    :: Starting rpcbind [FAIL]
    [root@myhost ~]# /etc/rc.d/nfs-common start
    :: Starting rpc.statd daemon [FAIL]
    [root@myhost ~]# /etc/rc.d/nfs-server start
    :: Mounting nfsd filesystem [DONE]
    :: Exporting all directories [BUSY] exportfs: /etc/exports [3]: Neither 'subtree_check' or 'no_subtree_check' specified for export "192.168.1.1/24:/home".
    Assuming default behaviour ('no_subtree_check').
    NOTE: this default has changed since nfs-utils version 1.0.x
    [DONE]
    :: Starting rpc.nfsd daemon [FAIL]
    - If I mount the share on the client with "sudo mount 192.168.1.20:/home /media/desktop", IT IS mounted but I can't browse it because I have no privileges to access the home directory for the user.
    my /etc/exports looks like this:
    # /etc/exports: the access control list for filesystems which may be exported
    # to NFS clients. See exports(5).
    /home 192.168.1.1/24(rw,sync,all_squash,anonuid=99,anongid=99))
    /etc/conf.d/nfs-common.conf:
    # Parameters to be passed to nfs-common (nfs clients & server) init script.
    # If you do not set values for the NEED_ options, they will be attempted
    # autodetected; this should be sufficient for most people. Valid alternatives
    # for the NEED_ options are "yes" and "no".
    # Do you want to start the statd daemon? It is not needed for NFSv4.
    NEED_STATD=
    # Options to pass to rpc.statd.
    # See rpc.statd(8) for more details.
    # N.B. statd normally runs on both client and server, and run-time
    # options should be specified accordingly. Specifically, the Arch
    # NFS init scripts require the --no-notify flag on the server,
    # but not on the client e.g.
    # STATD_OPTS="--no-notify -p 32765 -o 32766" -> server
    # STATD_OPTS="-p 32765 -o 32766" -> client
    STATD_OPTS="--no-notify"
    # Options to pass to sm-notify
    # e.g. SMNOTIFY_OPTS="-p 32764"
    SMNOTIFY_OPTS=""
    # Do you want to start the idmapd daemon? It is only needed for NFSv4.
    NEED_IDMAPD=
    # Options to pass to rpc.idmapd.
    # See rpc.idmapd(8) for more details.
    IDMAPD_OPTS=
    # Do you want to start the gssd daemon? It is required for Kerberos mounts.
    NEED_GSSD=
    # Options to pass to rpc.gssd.
    # See rpc.gssd(8) for more details.
    GSSD_OPTS=
    # Where to mount rpc_pipefs filesystem; the default is "/var/lib/nfs/rpc_pipefs".
    PIPEFS_MOUNTPOINT=
    # Options used to mount rpc_pipefs filesystem; the default is "defaults".
    PIPEFS_MOUNTOPTS=
    /etc/hosts.allow:
    nfsd: 192.168.1.0/255.255.255.0
    rpcbind: 192.168.1.0/255.255.255.0
    mountd: 192.168.1.0/255.255.255.0
    Any help would be very appreciated!
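For what it's worth, two things stand out in the paste above: the exportfs warning is asking for an explicit subtree_check option, and the exports line has an unbalanced extra ')' and uses a host address (192.168.1.1/24) where a network address is presumably intended. A hypothetical cleaned-up line:

```
# /etc/exports -- sketch only; adjust the network and squash options to taste
/home 192.168.1.0/24(rw,sync,all_squash,anonuid=99,anongid=99,no_subtree_check)
```

See exports(5) for the accepted host/network formats and option list.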

    Thanks, I finally got it working.
    I realized that even though both machines had the same group, my Ubuntu machine (client) group has GID 1000, while the Arch one has GID 1001. I created a group that has GID 1001 on the client, and now everything is working.
    I'm wondering why my Arch username and group both have 1001 rather than 1000 (which I suppose would be the default number for the first user created).
    Anyway, thanks again for your inputs.
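The GID mismatch found above is easy to spot ahead of time by printing the numeric IDs on both machines; with classic NFSv3-style exports, it's the numbers (not the names) that have to line up. A tiny helper to run on each host (the function name is mine):

```shell
# Print UID:GID for a user; compare the output across server and client.
uid_gid() {
    printf '%s:%s\n' "$(id -u "$1")" "$(id -g "$1")"
}

uid_gid root    # root is 0:0 everywhere
```

If the outputs differ for the user doing the mounting, permissions on the share will not behave as expected.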

  • NFS4 IDMAPD Possible Bug

    Generally NFSv4 works very well, but when restarting the server the ID mapping function does not work: all rights are just replaced by the user "nobody". From the daemon side, the start sequence is rpcbind, nfs-common, nfs-server. Actually, I just need to restart the nfs-common service, which starts idmapd, and the mapping works fine again. Of course I could add an additional line in rc.local, which is executed after rc.conf, but that would be a rather dirty hack.
    Hope this information is helpful.
    nfs-common.conf
    # Parameters to be passed to nfs-common (nfs clients & server) init script.
    # If you do not set values for the NEED_ options, they will be attempted
    # autodetected; this should be sufficient for most people. Valid alternatives
    # for the NEED_ options are "yes" and "no".
    # Do you want to start the statd daemon? It is not needed for NFSv4.
    NEED_STATD=yes
    # Options to pass to rpc.statd.
    # See rpc.statd(8) for more details.
    # N.B. statd normally runs on both client and server, and run-time
    # options should be specified accordingly. Specifically, the Arch
    # NFS init scripts require the --no-notify flag on the server,
    # but not on the client e.g.
    # STATD_OPTS="--no-notify -p 32765 -o 32766" -> server
    # STATD_OPTS="-p 32765 -o 32766" -> client
    STATD_OPTS=
    # Options to pass to sm-notify
    # e.g. SMNOTIFY_OPTS="-p 32764"
    SMNOTIFY_OPTS=""
    # Do you want to start the idmapd daemon? It is only needed for NFSv4.
    NEED_IDMAPD=yes
    # Options to pass to rpc.idmapd.
    # See rpc.idmapd(8) for more details.
    IDMAPD_OPTS=
    # Do you want to start the gssd daemon? It is required for Kerberos mounts.
    NEED_GSSD=no
    # Options to pass to rpc.gssd.
    # See rpc.gssd(8) for more details.
    GSSD_OPTS=
    # Where to mount rpc_pipefs filesystem; the default is "/var/lib/nfs/rpc_pipefs".
    PIPEFS_MOUNTPOINT=
    # Options used to mount rpc_pipefs filesystem; the default is "defaults".
    PIPEFS_MOUNTOPTS=
    nfs-server.conf is just default.

    I don't have anything to contribute, unfortunately, just want to confirm this.
    I'm running NIS, so I tried to start ypbind before nfs-common (through rc.conf), but it doesn't matter.
    DAEMONS=(syslog-ng network rpcbind ypbind nfs-common nfs-server netfs crond sshd ntpd dbus hal)
    Only a restart of nfs-common will start idmapd.
    Mounting the NFSv4 export on a client (before restarting nfs-common) spawns this message on the server:
    nfsd: nfsv4 idmapping failing: has idmapd not been started?
    After the rpc.idmapd daemon is started correctly no message appears when clients mounts the export.
    Here are the only two settings I changed in /etc/conf.d/nfs-common:
    # Do you want to start the statd daemon? It is not needed for NFSv4.
    NEED_STATD=no
    # Do you want to start the idmapd daemon? It is only needed for NFSv4.
    NEED_IDMAPD=yes
    (Tried with and without quotes btw: ="no"/=no)
    Even if I don't want STATD, it is starting up during boot anyway:
    rpc.statd[2998]: Version 1.1.6 Starting
    I want to convert the network to NFSv4 so testing this on Arch (test-server) and Fedora (client). Haven't tried NFSv4 server on another distro so don't know if the behaviour is the same..

  • DNS Fails for NFS Server Shares

    When I boot, I get a message that DNS has failed for the NFS server mounts, and the shares do not mount. The message says, "mount.nfs: DNS resolution failed for server: name or service unknown." I have to mount the shares myself. Then when rebooting, I get the same error saying it can't unmount the shares.
    this is /etc/resolv.conf:
    $ cat /etc/resolv.conf
    # Generated by dhcpcd from eth0
    # /etc/resolv.conf.head can replace this line
    nameserver 208.67.222.222
    nameserver 208.67.220.220
    # /etc/resolv.conf.tail can replace this line
    this is /etc/conf.d/nfs:
    # Number of servers to be started up by default
    NFSD_OPTS=8
    # Options to pass to rpc.mountd
    # e.g. MOUNTDOPTS="-p 32767"
    MOUNTD_OPTS="--no-nfs-version 1 --no-nfs-version 2"
    # Options to pass to rpc.statd
    # N.B. statd normally runs on both client and server, and run-time
    # options should be specified accordingly. Specifically, the Arch
    # NFS init scripts require the --no-notify flag on the server,
    # but not on the client e.g.
    # STATD_OPTS="--no-notify -p 32765 -o 32766" -> server
    # STATD_OPTS="-p 32765 -o 32766" -> client
    STATD_OPTS=""
    # Options to pass to sm-notify
    # e.g. SMNOTIFY_OPTS="-p 32764"
    SMNOTIFY_OPTS=""
    Do I need to add some option to rpc.statd, or is there some other misconfiguration there? AFAIK it is the default. What else should I look at to fix this? I can ping the server by name, and log in with ssh by name, just fine. It's only the nfs that is failing with DNS.

    airman99 wrote:
    Yahoo! Good news, I've finally solved the problem on my laptop. The message I was receiving turned out merely to be a network timing issue.
    The error I was receiving was exactly correct and informative. When /etc/rc.d/netfs ran and executed a 'mount -a -t nfs...' the network was indeed NOT reachable. I am running networkmanager, and apparently during bootup, networkmanager gets loaded, but there is a delay between when networkmanager is loaded and when the network is available. In other words, networkmanager allows the boot process to continue before the network is available.
    My daemons are loaded in this order (rc.conf):
    DAEMONS=(syslog-ng hal dhcdbd networkmanager crond cups ntpdate ntpd portmap nfslock netfs)
    Consequently, if I add a delay to /etc/rc.d/netfs to allow time for the network to come up, then when the NFS shares are mounted, the network is up. In my case I had to add a 3 second delay.
    sleep 3
    I'm sure this isn't the best way to solve the problem, by editing the system file /etc/rc.d/netfs, because the next upgrade where changes occur to netfs, my fix will get overwritten. But I'll keep it until I figure out the "right" fix.
    The real solution is to not load networkmanager in the background, but to force startup to wait for the network to be up before continuing.
    There is the _netdev option you can use in fstab, but that doesn't always work:
    http://linux.die.net/man/8/mount
    _netdev
        The filesystem resides on a device that requires network access (used to prevent the system from attempting to mount these filesystems until the network has been enabled on the system).
    Alternatively, you could just add a cron job that does a mount -a with a sleep 20 in there or something. You might have to play with the sleep value a little to make sure it's long enough.
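The sleep-and-hope approaches above can be made a bit less fragile with a retry loop: keep testing for the server before attempting the mount, instead of guessing a fixed delay. A sketch (the helper name, host address, and retry counts are mine):

```shell
# Retry a command until it succeeds or the attempts run out.
retry() {
    local tries=$1 delay=$2; shift 2
    local i
    for ((i = 0; i < tries; i++)); do
        "$@" && return 0
        sleep "$delay"
    done
    return 1
}

# e.g. wait up to ~30s for the NFS server to answer, then mount from fstab:
# retry 10 3 ping -c1 -W1 192.168.1.20 && mount -a -t nfs
```

Dropped into netfs or rc.local, this waits only as long as it has to, rather than a fixed sleep.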

  • [SOLVED] The terminal in the Arch installation

    I've noticed that the terminal in the Arch installation has interesting features: TAB autocompletion shows lots of information, and typing a command and then using the up/down arrows navigates only through the history of that command... I love it. I'm using Arch with LXDE, and its lxterminal does not have those features. Is it possible to install the Arch-installation terminal in LXDE? Or maybe adapt lxterminal somehow? Thanks.
    Last edited by David López (2012-09-03 22:52:50)

    Background info: http://mailman.archlinux.org/pipermail/ … 02683.html
    The config it uses: http://www.archlinux.org/packages/extra … sh-config/

  • Is it possible to refresh the client from serverside in j2ee application

    Hello
    Is it possible to refresh the client from the server side in a J2EE application server using JMS technology?
    If you know about it, please mail me at [email protected] or reply here.
    Thank you

    You can either use server push or client pull.
    Server push:
    The server pushes changes to the clients at a set time interval.
    You can make the client subscribe to a JMS topic created on the server side. At each interval, the server publishes a message and JMS distributes it to all the subscribers (clients).
    Client pull:
    Each client requests/inquires about changes from the server at a set time interval.
    You can't use JMS here (you can, but you would have to use P2P (queue), which is not recommended in this case).
    You can make the client post an HTTP request to get the latest data from the server at each interval.
    Alex
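Whatever the transport, the client-pull option boils down to a polling loop: fetch, compare with the last value, sleep, repeat. A shell sketch of the idea (the function name is mine, and the fetch command stands in for the real HTTP request):

```shell
# Poll with the given fetch command until its output changes, then print
# the new value. The fetch command stands in for e.g. a curl call.
poll_until_changed() {
    local interval=$1; shift
    local last cur
    last=$("$@")
    while cur=$("$@"); [ "$cur" = "$last" ]; do
        sleep "$interval"
    done
    printf '%s\n' "$cur"
}

# usage sketch: poll_until_changed 30 curl -s http://server/api/status
```

Server push inverts this: instead of each client looping, the server publishes once and the messaging layer fans it out.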

  • Error: 0x800f0922 when trying to install NFS Server Feature on Windows 2012R2

    I'm trying to install the "Server for NFS" option under the File and Storage Services role on a Windows 2012 R2 server, but I keep receiving an 0x800f0922 error. I've tried the install via the GUI and PowerShell. I've also tried using the DISM utility with different source media for the install, but with the same result. I ran "sfc /scannow" to check for any missing files/registry keys, but it returned no errors. Any thoughts?

    Thank you for the help!  I did try the suggested steps, but the issue persisted. I was able to resolve it by removing software on the server that was using the NFS ports needed for a successful installation of the native NFS server.
    In our case it was an optional component of our backup software called "Veeam Backup vPower NFS". There is also a known issue with QLogic's SANsurfer software using the NFS ports.
    There were entries in the System event log that tipped me off:
    Event ID:      7000  The Server for NFS Open RPC (ONCRPC) Portmapper service failed to start due to the following error:
    A device attached to the system is not functioning.
    Event ID:      7001  The Server for NFS Driver service depends on the Server for NFS Open RPC (ONCRPC) Portmapper service which failed to
    start because of the following error:
