Problems setting up an NFS server

Hi everybody,
I just completed my first arch install. :-)
I have a desktop and a laptop, and I installed Arch on the desktop (the laptop runs Ubuntu 9.10). I had a few difficulties here and there, but I now have the system up and running, and I'm very happy.
I have a problem setting up an NFS server. With Ubuntu everything was working, so I'm assuming that the Ubuntu machine (client) is set up correctly. I'm trying to troubleshoot the Arch box (server) now.
I followed this wiki article: http://wiki.archlinux.org/index.php/Nfs
Now, I have these problems:
- when I start the daemons, I get:
[root@myhost ~]# /etc/rc.d/rpcbind start
:: Starting rpcbind [FAIL]
[root@myhost ~]# /etc/rc.d/nfs-common start
:: Starting rpc.statd daemon [FAIL]
[root@myhost ~]# /etc/rc.d/nfs-server start
:: Mounting nfsd filesystem [DONE]
:: Exporting all directories [BUSY] exportfs: /etc/exports [3]: Neither 'subtree_check' or 'no_subtree_check' specified for export "192.168.1.1/24:/home".
Assuming default behaviour ('no_subtree_check').
NOTE: this default has changed since nfs-utils version 1.0.x
[DONE]
:: Starting rpc.nfsd daemon [FAIL]
- If I mount the share on the client with "sudo mount 192.168.1.20:/home /media/desktop", it IS mounted, but I can't browse it because I don't have the privileges to access the user's home directory.
my /etc/exports looks like this:
# /etc/exports: the access control list for filesystems which may be exported
# to NFS clients. See exports(5).
/home 192.168.1.1/24(rw,sync,all_squash,anonuid=99,anongid=99)
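The exportfs warning in the daemon output above is easy to silence by naming the subtree option explicitly. A possible version of the export line, keeping the same network and squash options already shown (a sketch, not a tested configuration):

```shell
# /etc/exports -- same export as above, with the subtree behaviour stated
# explicitly (no_subtree_check is the modern default and silences the warning)
/home 192.168.1.1/24(rw,sync,all_squash,anonuid=99,anongid=99,no_subtree_check)
```

After editing, running `exportfs -ra` as root re-reads the file without restarting the daemons.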
/etc/conf.d/nfs-common.conf:
# Parameters to be passed to nfs-common (nfs clients & server) init script.
# If you do not set values for the NEED_ options, they will be attempted
# autodetected; this should be sufficient for most people. Valid alternatives
# for the NEED_ options are "yes" and "no".
# Do you want to start the statd daemon? It is not needed for NFSv4.
NEED_STATD=
# Options to pass to rpc.statd.
# See rpc.statd(8) for more details.
# N.B. statd normally runs on both client and server, and run-time
# options should be specified accordingly. Specifically, the Arch
# NFS init scripts require the --no-notify flag on the server,
# but not on the client e.g.
# STATD_OPTS="--no-notify -p 32765 -o 32766" -> server
# STATD_OPTS="-p 32765 -o 32766" -> client
STATD_OPTS="--no-notify"
# Options to pass to sm-notify
# e.g. SMNOTIFY_OPTS="-p 32764"
SMNOTIFY_OPTS=""
# Do you want to start the idmapd daemon? It is only needed for NFSv4.
NEED_IDMAPD=
# Options to pass to rpc.idmapd.
# See rpc.idmapd(8) for more details.
IDMAPD_OPTS=
# Do you want to start the gssd daemon? It is required for Kerberos mounts.
NEED_GSSD=
# Options to pass to rpc.gssd.
# See rpc.gssd(8) for more details.
GSSD_OPTS=
# Where to mount rpc_pipefs filesystem; the default is "/var/lib/nfs/rpc_pipefs".
PIPEFS_MOUNTPOINT=
# Options used to mount rpc_pipefs filesystem; the default is "defaults".
PIPEFS_MOUNTOPTS=
/etc/hosts.allow:
nfsd: 192.168.1.0/255.255.255.0
rpcbind: 192.168.1.0/255.255.255.0
mountd: 192.168.1.0/255.255.255.0
Any help would be much appreciated!

Thanks, I finally got it working.
I realized that even though both machines had a group with the same name, on the Ubuntu machine (client) it has GID 1000, while on the Arch one it has GID 1001. I created a group with GID 1001 on the client, and now everything is working.
I'm wondering why my Arch user and group both got 1001 rather than 1000 (which I'd assume is the default for the first user created).
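For anyone hitting the same symptom: the mismatch shows up immediately when you compare numeric IDs on both machines. A minimal sketch (the group name `sharedgrp` is a placeholder, not from the thread):

```shell
# Print the numeric UID/GID of the current account; run this on both client
# and server. For NFSv2/v3 it is the *numbers*, not the names, that must
# match for permission checks to line up.
id "$(whoami)"

# If the GIDs differ, create a matching group on the client (as root; 1001
# is the server-side GID from this thread):
#   groupadd -g 1001 sharedgrp
```
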
Anyway, thanks again for your inputs.

Similar Messages

  • How do I set up an NFS server?

    Running 10.5 on a MacBook. I'm trying to create an NFS server to mount an ISO on a proprietary UNIX client, AIX 5.2. I've looked under Sharing and Directory Utility, and I'm not finding any documentation that indicates how this is done.
    On UNIX, I would start the NFS services, add the directory to the export list with appropriate permissions (which clients can access it, etc.), and it would be good to go (assuming name resolution/routing, etc.). Any insight would be helpful.

    You're over my head with your UNIX knowledge, but I wonder if the 10.4.x method still works in 10.5.x?
    http://mactechnotes.blogspot.com/2005/09/mac-os-x-as-nfs-server.html
    Nope, I guess not, they removed NetInfoManager in 10.5.
    Few know as much as this guy about OSX & he has a little utility to do it, but requires 10.6+...
    http://www.williamrobertson.net/documents/nfs-mac-linux-setup.html
    Maybe some clues you haven't seen yet anyway???

  • Problems setting up SQL Server 2014 Express Advanced

    I use Windows 8.1. After I install SQL Server 2014 Express I can't find Management Studio. I then try to connect from Visual Studio using SQLExpress as the server name, but get a message that the name is not correct. I then try SQLEXPRESS as the name, but that name is not accepted either. How do I know what the name of the server is?
    Also, I am unable to access the server through Management Studio because I can't find it. How can I get a shortcut on my desktop to access Management Studio?
    As you can tell, I am a novice. My programming has been limited to VBA.
    Thanks

    Hi,
    I guess you just installed the Database Engine Services of the Express edition and did not install SSMS. You can go to this link, click on Download, scroll down, and you will see the file below:
    MgmtStudio 64BIT\SQLManagementStudio_x64_ENU.exe
    (assuming you have the 64-bit edition). Download and install it.
    Please mark this reply as answer if it solved your issue, or vote as helpful if it helped, so that other forum members can benefit from it.
    My Technet Wiki Article
    MVP

  • Cannot enable nfs server

    Hello --
    I have some problems enabling the NFS server on an Ultra 60 running Solaris 10 06/06.
    First, svcs tells me the service is not enabled:
    bash-3.00# svcs -l /network/nfs/server
    fmri svc:/network/nfs/server:default
    name NFS server
    enabled false
    state disabled
    next_state none
    state_time Tue Aug 22 11:11:12 2006
    logfile /var/svc/log/network-nfs-server:default.log
    restarter svc:/system/svc/restarter:default
    contract_id
    dependency require_any/error svc:/milestone/network (online)
    dependency require_all/error svc:/network/nfs/nlockmgr (online)
    dependency optional_all/error svc:/network/nfs/mapid (online)
    dependency require_all/restart svc:/network/rpc/bind (online)
    dependency optional_all/none svc:/network/rpc/keyserv (online)
    dependency optional_all/none svc:/network/rpc/gss (online)
    dependency require_all/error svc:/system/filesystem/local (online)
    Then, I try to enable it using svcadm:
    bash-3.00# svcadm enable /network/nfs/server
    It returns to the prompt silently. However, when I check the status using svcs, it shows the service is still disabled:
    bash-3.00# svcs -l /network/nfs/server
    fmri svc:/network/nfs/server:default
    name NFS server
    enabled false
    state disabled
    next_state none
    state_time Tue Aug 22 11:35:25 2006
    logfile /var/svc/log/network-nfs-server:default.log
    restarter svc:/system/svc/restarter:default
    contract_id
    dependency require_any/error svc:/milestone/network (online)
    dependency require_all/error svc:/network/nfs/nlockmgr (online)
    dependency optional_all/error svc:/network/nfs/mapid (online)
    dependency require_all/restart svc:/network/rpc/bind (online)
    dependency optional_all/none svc:/network/rpc/keyserv (online)
    dependency optional_all/none svc:/network/rpc/gss (online)
    dependency require_all/error svc:/system/filesystem/local (online)
    I cannot find any useful info from the log file:
    [ Aug 22 11:07:25 Enabled. ]
    [ Aug 22 11:07:25 Executing start method ("/lib/svc/method/nfs-server start") ]
    [ Aug 22 11:07:26 Method "start" exited with status 0 ]
    [ Aug 22 11:07:26 Stopping because all processes in service exited. ]
    [ Aug 22 11:07:26 Executing stop method ("/lib/svc/method/nfs-server stop 97") ]
    [ Aug 22 11:07:26 Method "stop" exited with status 0 ]
    [ Aug 22 11:07:26 Executing start method ("/lib/svc/method/nfs-server start") ]
    [ Aug 22 11:07:26 Method "start" exited with status 0 ]
    [ Aug 22 11:07:26 Stopping because service disabled. ]
    [ Aug 22 11:07:26 Executing stop method ("/lib/svc/method/nfs-server stop 99") ]
    [ Aug 22 11:07:26 Method "stop" exited with status 0 ]
    [ Aug 22 11:11:11 Enabled. ]
    [ Aug 22 11:11:11 Executing start method ("/lib/svc/method/nfs-server start") ]
    [ Aug 22 11:11:11 Method "start" exited with status 0 ]
    [ Aug 22 11:11:11 Stopping because all processes in service exited. ]
    [ Aug 22 11:11:11 Executing stop method ("/lib/svc/method/nfs-server stop 101") ]
    [ Aug 22 11:11:12 Method "stop" exited with status 0 ]
    [ Aug 22 11:11:12 Executing start method ("/lib/svc/method/nfs-server start") ]
    [ Aug 22 11:11:12 Method "start" exited with status 0 ]
    [ Aug 22 11:11:12 Stopping because service disabled. ]
    [ Aug 22 11:11:12 Executing stop method ("/lib/svc/method/nfs-server stop 103") ]
    [ Aug 22 11:11:12 Method "stop" exited with status 0 ]
    Any help will be appreciated.
    --taolizhong

    The problem has been solved. It was because there was no share entry in /etc/dfs/dfstab. It appears that if there are no shares, the service won't stay running.
    --Taolizhong
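    For reference, a minimal sketch of what such a dfstab entry might look like on Solaris 10 (the path /export/home is a placeholder, not from the post):

```shell
# /etc/dfs/dfstab -- share commands run by shareall at boot, one per line.
# With at least one valid entry present, svc:/network/nfs/server stays
# online instead of exiting immediately after its start method succeeds.
share -F nfs -o rw /export/home
```

    After adding the entry, `svcadm enable network/nfs/server` (or `shareall` on a running system) should bring the service online.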

  • Problems Setting up NFS

    Hi.
    I've been having problems setting up NFS on a LAN at home.  I've read through all the Arch NFS-related threads and looked at (and tried some of) the suggestions, but without success.
    What I'm trying to do is set up an NFS share on a wireless laptop to copy one user's files from the laptop to a wired workstation.  The laptop is running Libranet 2.8.1. The workstation is running Arch Linux with the most up-to-date (as of today) software, including NFS.
    I have the portmap, nfslock, and nfsd daemons running via rc.conf, set to run in that order.
    I've re-installed nfs-utils and set up hosts.deny as follows:
    # /etc/hosts.deny
    ALL: ALL
    # End of file
    and hosts.allow as follows:
    # /etc/hosts.allow
    ALL: 192.168.0.0/255.255.255.0
    ALL: LOCAL
    # End of file
    In other words, I want to allow access on the machine to anything running locally or on the LAN.
    I have the shared directories successfully exported from the laptop (192.168.0.3), but I have not been able to mount the exported directories on the workstation (192.168.0.2).
    The mount command I'm using at the workstation:
    mount -t nfs -o rsize=8192,wsize=8192 192.168.0.3:/exports /exports
    There is an /exports directory on both machines at root with root permissions. There is an /etc/exports file on the NFS server machine that gives asynchronous read/write permissions to 192.168.0.2 for the /exports directory.
    I receive the following message:
    mount: RPC: Program not registered
    This appears to be a hosts.allow/hosts.deny type problem, but I can't figure out what I need to do to make RPC work properly.
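    "RPC: Program not registered" means the client asked the server's portmapper for a program (here, mountd) that it did not have registered, or never got an answer at all. A hedged diagnostic sketch, run from the client (192.168.0.3 is the server address from this post):

```shell
# Ask the server's portmapper which RPC programs are registered; for a v3
# mount, both mountd (100005) and nfs (100003) must appear in the list.
rpcinfo -p 192.168.0.3

# Ask the server's mountd directly which directories it is exporting.
showmount -e 192.168.0.3
```

    If these work when run locally on the server but fail from the client, the request is being dropped on the way in, which points at hosts.allow/hosts.deny or a firewall rather than the NFS configuration itself.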
    Some additional information, just in case it's helpful:
    This is the result of running rpcinfo:
       program vers proto   port
        100000    2   tcp    111  portmapper
        100000    2   udp    111  portmapper
        100024    1   udp  32790  status
        100024    1   tcp  32835  status
        100003    2   udp   2049  nfs
        100003    3   udp   2049  nfs
        100003    4   udp   2049  nfs
        100003    2   tcp   2049  nfs
        100003    3   tcp   2049  nfs
        100003    4   tcp   2049  nfs
        100021    1   udp  32801  nlockmgr
        100021    3   udp  32801  nlockmgr
        100021    4   udp  32801  nlockmgr
        100021    1   tcp  32849  nlockmgr
        100021    3   tcp  32849  nlockmgr
        100021    4   tcp  32849  nlockmgr
        100005    3   udp    623  mountd
        100005    3   tcp    626  mountd
    Running ps shows that the following processes are running:
    portmap
    rpc.statd
    nfsd
    lockd
    rpciod
    rpc.mountd
    Thanks for any help here!
    Regards,
    Win

    Hi ravster.
    Thanks for the suggestions. I had read the Arch NFS HowTo and, as you suggested, got rpcinfo output for the server machine. The Libranet NFS server is configured rather differently, since the NFS daemons are built into the default kernel.
    Since I needed to complete the transfer immediately, I gave up on NFS for now and used netcat (!) instead to transfer the files from the Libranet laptop to the Arch workstation. The laptop will be converted to Arch Linux soon, so I should be able to clear things up then. I haven't had any problems setting up NFS with homogeneous (i.e., all-Arch or all-Libranet) combinations of computers.
    Thanks again.
    Win

  • Solaris 10 NFS caching problems with custom NFS server

    I'm facing a very strange problem with a pure Java standalone application providing NFS server v2 service. This same application, targeted at JVM 1.4.2, runs on other environments (see below) without any problem.
    On Solaris 10 we have tried all kinds of mount parameters and system service up/down configurations, but cannot solve the problem.
    We're in big trouble because the app is a mandatory component of a product due to enter production shortly.
    Details follows
    System description
    Sunsparc U4 with SunOS 5.10, patch level: Generic_118833-33, 64bit
    List of active NFS services
    disabled   svc:/network/nfs/cbd:default
    disabled   svc:/network/nfs/mapid:default
    disabled   svc:/network/nfs/client:default
    disabled   svc:/network/nfs/server:default
    disabled   svc:/network/nfs/rquota:default
    online       svc:/network/nfs/status:default
    online       svc:/network/nfs/nlockmgr:default
    NFS mount params (from /etc/vfstab)
    localhost:/VDD_Server  - /users/vdd/mnt nfs - vers=2,proto=tcp,timeo=600,wsize=8192,rsize=8192,port=1579,noxattr,soft,intr,noac
    Anomaly description
    The server side of NFS is provided by a Java standalone application enabled only for NFS v2, tested on different environments: MS Windows 2000, 2003, XP, Linux Red Hat 10 32-bit, Linux Debian 2.6.x 64-bit, SunOS 5.9. The Java application is distributed with a test program (also a Java standalone application) to validate the main installation and configuration.
    The test program simply reads a file from the NFS file system exported by our main application (called VDD) and writes the same file back with a different name on the VDD exported file system. At the end of the test, the written file has different contents from the one read. In-depth investigation shows the following behaviour:
    - The read phase behaves correctly on both the server (VDD) and client (test app) sides, transporting the file with correct contents.
    - The write phase produces a file on the VDD file system that is zero-filled for the first 90% but ends correctly with the same sequence of bytes as the original read file.
    - Detailed write phase behaviour:
    1. Test app writes the first 512 bytes => VDD receives an NFS command with offset 0, count 512, and correct byte contents;
    2. Test app writes the next 512 bytes => VDD receives an NFS command with offset 0, count 1024, and WRONG byte contents: the first 512 bytes are zero-filled (previous write) and the last 512 bytes have the correct contents (current write).
    3. Test app writes the next 512 bytes => VDD receives an NFS command with offset 0, count 1536, and WRONG byte contents: the first 1024 bytes are zero-filled (previous writes) and the last 512 bytes have the correct contents (current write).
    4. And so on...
    Further tests
    We tested our VDD application on the same Solaris 10 system but with our test application on another (Linux) machine, contacting VDD via the Linux NFS client, and we don't see the wrong behaviour: our test program performed OK and the written file has the same contents as the one read.
    Has anyone faced a similar problem?
    We are Sun ISV partner: do you think we have enough info to open a bug request to SDN?
    Any suggestions?
    Many thanks in advance,
    Maurizio.

    I finally got it working. I think my problem was that I was copying and pasting the /etc/pam.conf from Gary's guide into the pam.conf file.
    There were unseen carriage returns mucking things up. So following a combination of the two docs worked. Starting with:
    http://web.singnet.com.sg/~garyttt/Configuring%20Solaris%20Native%20LDAP%20Client%20for%20Fedora%20Directory%20Server.htm
    Then following the steps at "Authentication Option #1: LDAP PAM configuration " from this doc:
    http://docs.lucidinteractive.ca/index.php/Solaris_LDAP_client_with_OpenLDAP_server
    for the pam.conf, got things working.
    Note: ensure that your user has the shadowAccount value set in the objectClass

  • NFS locking problem, solaris 10, nfsd stops serving

    Hi all,
    First time poster, short time viewer.
    To set the scene...
    I have an x86 SAM-QFS installation under management currently. The SAM-QFS infrastructure consists of 2x meta data controllers and 2x file service heads, that currently share out NFS (they are NFS servers).
    These systems are not clustered, nor do they have any form of "high availability". They can be failed between, but this is very much a manual process.
    The file service heads are running mounted SAM-FS filesystems. These file systems are shared out in the form of /etc/dfs/dfstab style exports to NFS clients (some 250 Mac OS X Tiger/Leopard clients) that use NFS for their home directories et al.
    So, here is where things become sad.
    For the past month, we've been fighting with our nfsd. Unfortunately for us, at (apparently) completely random moments, our nfsd will simply stop responding to requests, cease to serve NFS to clients, and silently stop processing any locking/transfer/mapping requests. An "svcadm restart nfs/server" does nothing, as the process simply hangs in a state of reattempting to stop/start nfsd. We find that the only way to get NFS to respond is to completely reboot the host. Only at this point will NFS reshare, via the "shareall" command, based upon our entries in /etc/dfs/dfstab.
    We have been though a lengthy support case with Sun. We are getting NO closer to understanding the problem, but we are providing Sun with every possible detail we can. We've even gotten to the point of writing crash dumps from the machine out to disk using commands such as 'uadmin 5 1' and then sending our entire vmcore outputs to Sun.
    We are stabbing in the dark suggesting it might be file-locking related, but we do not have enough visibility into the internals of nfsd to tell what is happening. We've tried increasing the number of allowed nfs_server daemons in /etc/default/nfs, as well as capping the maximum NFS version served at v3, but it hasn't helped us at all.
    If anyone has any experience with what could possibly cause NFS to do this, in what (we thought) was a fairly common NFS environment, it would be appreciated. Our infrastructure is fairly grounded until we have a resolution to an issue Sun simply can't seem to pinpoint.
    Thank you.
    z.

    I have the same problem on a samfs server. I have never come across a shittier system than a samfs system. I feel for people who have to support them. SAMFS takes an easy concept and makes it an overwhelming problem.

  • Problems setting up testing server CS5 with existing PhP site copied to hard drive

    A couple of years ago I dabbled in CS3 & PHP, but haven't touched it since 2009. I still have WAMP loaded on my machines. I'm now in a work situation where I was asked to see if I could make some simple copy updates to sites written in PHP. I copied the entire sites over to my hard drive, placed them in the WAMP www root folder, and followed the instructions for setting up the site in CS5 (which is pretty close to the CS3 setup). I made sure WAMP was running, and I'm still unable to connect. I am using "localhost" as part of the URL for the web server designation. I continue to get an "internal server error" message on these pages. I am able to type "http://localhost" into my browser, and it appears that WAMP is running properly. And I do remember to start WAMP before I start a DW session.
    So I kept experimenting. If I define a new PHP site and create a test document with simple PHP code, the connection to the testing server works and renders the code. It's just the existing PHP site I've copied over that generates the internal server message.
    Is there something basic I'm not understanding about viewing and manipulating a PHP site created by someone else using DW CS5?

    The first question: on what platform? Windows or Mac?
    I only succeeded in setting up a testing server on my Mac with PHP and MySQL after reading a book by David Powers. On Leopard I believe there are still problems connecting PHP to MySQL.

  • Problems setting up Remote Web Access in Windows Storage Server 2008 R2

    We just bought a WD Sentinel DX4000 Storage Server and went through the Remote Web Access wizard. It set up the router just fine, but when we got to "set up your domain name" we have been having headaches ever since. We chose the option to use our own domain name, which we have registered through GoDaddy. It then showed that yes, our provider was GoDaddy, but it told us we needed to update our service with GoDaddy, so we clicked the "Go to GoDaddy.com" button and it sent us to a page to purchase an SSL certificate. So we did that, but then had issues with it not being manageable in our GoDaddy account. So I went into the domain name with GoDaddy, purchased a certificate there, and got it set up on the domain name with GoDaddy.
    The problem is that when I try to go through the wizard again to set up the domain name, I still have to click the GoDaddy button to move to the next page, where I type my domain name, username, and password with GoDaddy. I set up the remote extension in the A records with GoDaddy as remote.mydomain.com, and put in my username and password with GoDaddy. After I click Next it tries to connect to the domain name service provider, but then gives me the following error messages over and over:
    The domain name was not set up for your server. Wait a few minutes and run the wizard again.
    An unexpected or unknown problem occurred. Please wait a few minutes, and then try again.
    Well, I have tried this several times and am about to start pulling my hair out. Can anyone please tell me what I may be doing wrong?
    Thanks for any help you may be able to provide.

    Hi,
    Please verify the installed certificate status; you can refer to the article below to install GoDaddy SSL certificates.
    Installing a GoDaddy Standard SSL Certificate on SBS 2008
    http://sbs.seandaniel.com/2009/02/installing-godaddy-standard-ssl.html
    Please Note: Microsoft is providing this information as a convenience to you. The sites are not controlled by Microsoft. Microsoft cannot make any representations regarding the quality, safety, or suitability of any software or information found there.
    There is a similar thread, please go through it to help troubleshoot this issue:
    SBS Essentials 2011 Domain Name Wizard Fails
    https://social.technet.microsoft.com/Forums/en-US/6644ec25-3071-4526-b878-9d6977589f88/sbs-essentials-2011-domain-name-wizard-fails?forum=smallbusinessserver2011essentials
    Best Regards,
    Mandy
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact [email protected]

  • NFS: server reboot - client problem with "file handle" - no unmount

    Hello,
    when I restart an NFS server while a client still has an NFS share from it mounted, that share becomes unusable on the client: everything makes the client complain about a stale "file handle", even trying to unmount the share!
    So my question is: how do I deal gracefully with an NFS server reboot and avoid this file handle problem?
    Apart from going to each client and unmounting the NFS share by hand?
    Thanks!
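    For what it's worth, a stuck share with a stale file handle can usually be detached without rebooting the client. A sketch of the commonly used commands (the mount point /mnt/share is a placeholder, not from the post):

```shell
# A stale NFS file handle blocks a normal umount; a lazy unmount detaches
# the mount point immediately and finishes cleanup once it is no longer busy.
umount -l /mnt/share

# Many systems also allow forcing the unmount of an unreachable NFS server:
umount -f /mnt/share
```

    Stale handles after a server reboot usually mean the exported filesystem's identity (device or fsid) changed across the reboot; pinning a stable fsid in the export options is the usual prevention.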

    Do I need to give these values manually again as they are moved from one environment to the other?
    When we transport ID objects, the values stored in the channel get lost and have to be manually entered... it happens even for DEV --> QA environments.
    Now your channel should reflect PROD-related data.
    2- Why is my receiver communication channel coming as
    Communication Channel | PRD_400 | RFC_QA3_Receiver:
    The channel name does not depend on the environment it is present in... verify the name once again!
    Regards,
    Abhishek.

  • Possible bug in the arch NFS server package?

    i run the nfs server on my arch box, exporting to a debian box and an arch laptop. whenever the arch server reboots lately, the shares don't automatically remount properly on the clients. they're listed in mtab, but if i ls or try to access the directories, i get a permission denied error. i have to manually unmount the shares and then remount them again to be able to see/use them.
    the reason i think this is an arch package problem is because i set up a share on the debian box to share with arch, and that worked perfectly. when i rebooted debian as the server, the shares were automatically remounted on the arch client, and when i rebooted arch, the shares were again mounted properly on reboot.
    it's possible i'm doing something wrong with permissions, but it seems unlikely because 1) everything was working fine for a long time until recently, when i started noticing the behavior, 2) all the permissions on the shared directory are identical to the ones on the arch shared directory, all usernames and UIDs are the same, same groups and GIDs, etc., 3) the shares mount perfectly well manually from the command line, and 4) i set up the debian share/exports, etc. in about 2 minutes with no problem at all, while i've been dealing with this problem on arch for 2 days now, after changing options and going over everything multiple times until my head is spinning. it just seems unlikely that a configuration is wrong, although i guess anything is possible. i can provide all that permissions/group info, fstab info, /etc/exports info, etc. if anyone wants to take a closer look at it.
    so until this is sorted, i wondered if anyone else is having this problem, or if anyone had any ideas of something i might be overlooking. again, everything *seems* to be set up right, but maybe there's some arch specific thing i'm overlooking. thanks.
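    For context, a client-side fstab entry for such a share typically looks like the sketch below (the server address, mount point, and options are placeholders, not the poster's actual settings):

    ```
    # /etc/fstab on the client (hypothetical values)
    192.168.1.20:/home  /media/desktop  nfs  rw,hard,intr,bg  0  0
    ```

    The `bg` option makes the mount retry in the background if the server is unreachable at boot, which is often relevant when clients come up before the server.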

    Ok, out of pure frustration I just grabbed the gentoo init script. Installed start-stop-daemon, modified the script to run as #!/bin/bash, stuck it in /etc/rc.d, rebooted, and everything works. I can reboot my computer and clients reconnect, or restart the daemon and they reconnect.
    Here's the script I am using.
    #!/bin/bash
    # Copyright 1999-2005 Gentoo Foundation
    # Distributed under the terms of the GNU General Public License v2
    # $Header: /var/cvsroot/gentoo-x86/net-fs/nfs-utils/files/nfs,v 1.14 2007/03/24 10:14:43 vapier Exp $

    # This script starts/stops the following:
    #   rpc.statd if necessary (also checked by init.d/nfsmount)
    #   rpc.rquotad if it exists (from the quota package)
    #   rpc.nfsd
    #   rpc.mountd
    # NB: Config is in /etc/conf.d/nfs

    opts="reload"

    # This variable is used for controlling whether or not to run exportfs -ua;
    # see stop_nfs() for more information
    restarting=no

    # The binary locations
    exportfs=/usr/sbin/exportfs
    gssd=/usr/sbin/rpc.gssd
    idmapd=/usr/sbin/rpc.idmapd
    mountd=/usr/sbin/rpc.mountd
    nfsd=/usr/sbin/rpc.nfsd
    rquotad=/usr/sbin/rpc.rquotad
    statd=/usr/sbin/rpc.statd
    svcgssd=/usr/sbin/rpc.svcgssd

    mkdir_nfsdirs() {
        local d
        for d in /var/lib/nfs/{rpc_pipefs,v4recovery,v4root} ; do
            [[ ! -d ${d} ]] && mkdir -p "${d}"
        done
    }

    mount_pipefs() {
        if grep -q rpc_pipefs /proc/filesystems ; then
            if ! grep -q "rpc_pipefs /var/lib/nfs/rpc_pipefs" /proc/mounts ; then
                mount -t rpc_pipefs rpc_pipefs /var/lib/nfs/rpc_pipefs
            fi
        fi
    }

    umount_pipefs() {
        if [[ ${restarting} == "no" ]] ; then
            if grep -q "rpc_pipefs /var/lib/nfs/rpc_pipefs" /proc/mounts ; then
                umount /var/lib/nfs/rpc_pipefs
            fi
        fi
    }

    start_gssd() {
        [[ ! -x ${gssd} || ! -x ${svcgssd} ]] && return 0
        local ret1 ret2
        ${gssd} ${RPCGSSDDOPTS}
        ret1=$?
        ${svcgssd} ${RPCSVCGSSDDOPTS}
        ret2=$?
        return $((${ret1} + ${ret2}))
    }

    stop_gssd() {
        [[ ! -x ${gssd} || ! -x ${svcgssd} ]] && return 0
        local ret1 ret2
        start-stop-daemon --stop --quiet --exec ${gssd}
        ret1=$?
        start-stop-daemon --stop --quiet --exec ${svcgssd}
        ret2=$?
        return $((${ret1} + ${ret2}))
    }

    start_idmapd() {
        [[ ! -x ${idmapd} ]] && return 0
        ${idmapd} ${RPCIDMAPDOPTS}
    }

    stop_idmapd() {
        [[ ! -x ${idmapd} ]] && return 0
        local ret
        start-stop-daemon --stop --quiet --exec ${idmapd}
        ret=$?
        umount_pipefs
        return ${ret}
    }

    start_statd() {
        # Don't start rpc.statd if already started by init.d/nfsmount
        killall -0 rpc.statd &>/dev/null && return 0
        start-stop-daemon --start --quiet --exec \
            $statd -- $RPCSTATDOPTS 1>&2
    }

    stop_statd() {
        # Don't stop rpc.statd if it's in use by init.d/nfsmount.
        mount -t nfs | grep -q . && return 0
        # Make sure it's actually running
        killall -0 rpc.statd &>/dev/null || return 0
        # Okay, all tests passed, stop rpc.statd
        start-stop-daemon --stop --quiet --exec $statd 1>&2
    }

    waitfor_exportfs() {
        local pid=$1
        ( sleep ${EXPORTFSTIMEOUT:-30}; kill -9 ${pid} &>/dev/null ) &
        wait ${pid}
    }

    start_nfs() {
        # Make sure nfs support is loaded in the kernel #64709
        if [[ -e /proc/modules ]] && ! grep -qs nfsd /proc/filesystems ; then
            modprobe nfsd &> /dev/null
        fi
        # This is the new "kernel 2.6 way" to handle the exports file
        if grep -qs nfsd /proc/filesystems ; then
            if ! grep -qs "^nfsd[[:space:]]/proc/fs/nfsd[[:space:]]" /proc/mounts ; then
                mount -t nfsd nfsd /proc/fs/nfsd
            fi
        fi
        # now that nfsd is mounted inside /proc, we can safely start mountd later
        mkdir_nfsdirs
        mount_pipefs
        start_idmapd
        start_gssd
        start_statd
        # Exportfs likes to hang if networking isn't working.
        # If that's the case, then try to kill it so the
        # bootup process can continue.
        if grep -q '^/' /etc/exports &>/dev/null; then
            $exportfs -r 1>&2 &
            waitfor_exportfs $!
        fi
        if [ -x $rquotad ]; then
            start-stop-daemon --start --quiet --exec \
                $rquotad -- $RPCRQUOTADOPTS 1>&2
        fi
        start-stop-daemon --start --quiet --exec \
            $nfsd --name nfsd -- $RPCNFSDCOUNT 1>&2
        # Start mountd
        start-stop-daemon --start --quiet --exec \
            $mountd -- $RPCMOUNTDOPTS 1>&2
    }

    stop_nfs() {
        # Don't check NFSSERVER variable since it might have changed,
        # instead use --oknodo to smooth things over
        start-stop-daemon --stop --quiet --oknodo \
            --exec $mountd 1>&2
        # nfsd sets its process name to [nfsd] so don't look for $nfsd
        start-stop-daemon --stop --quiet --oknodo \
            --name nfsd --user root --signal 2 1>&2
        if [ -x $rquotad ]; then
            start-stop-daemon --stop --quiet --oknodo \
                --exec $rquotad 1>&2
        fi
        # When restarting the NFS server, running "exportfs -ua" probably
        # isn't what the user wants. Running it causes all entries listed
        # in xtab to be removed from the kernel export tables, and the
        # xtab file is cleared. This effectively shuts down all NFS
        # activity, leaving all clients holding stale NFS filehandles,
        # *even* when the NFS server has restarted.
        # That's what you would want if you were shutting down the NFS
        # server for good, or for a long period of time, but not when the
        # NFS server will be running again in short order. In this case,
        # "exportfs -r" will reread the xtab, and all the current
        # clients will be able to resume NFS activity, *without* needing
        # to umount/(re)mount the filesystem.
        if [ "$restarting" = no ]; then
            # Exportfs likes to hang if networking isn't working.
            # If that's the case, then try to kill it so the
            # shutdown process can continue.
            $exportfs -ua 1>&2 &
            waitfor_exportfs $!
        fi
        stop_statd
        stop_gssd
        stop_idmapd
        umount_pipefs
    }

    case "$1" in
        start)
            start_nfs
            ;;
        stop)
            stop_nfs
            ;;
        reload)
            # Exportfs likes to hang if networking isn't working.
            # If that's the case, then try to kill it so the
            # bootup process can continue.
            $exportfs -r 1>&2 &
            waitfor_exportfs $!
            ;;
        restart)
            # See long comment in stop_nfs() regarding "restarting" and
            # exportfs -ua. The original Gentoo runscript used svc_stop and
            # svc_start here; this plain-bash port calls the functions directly.
            restarting=yes
            stop_nfs
            start_nfs
            ;;
        *)
            echo "usage: $0 {start|stop|restart|reload}"
            ;;
    esac
    exit 0

  • MacBook hangs, NFS server not responding

    Lately when I leave my MacBook running overnight, and I return the next morning it behaves very strangely. It's not hanging but the running applications can't write to disk anymore. Every application crashes, and I can't reboot because applications fail to stop. When I start a Terminal, I'm able to write to the disk, so that's not the problem. (I did echo test >test.txt and checked the contents)
    When I started looking into the logs I found these messages:
    codemonkey:~ starquake$ cat /var/log/kernel.log | grep nfs
    Sep 12 21:59:34 codemonkey kernel[0]: nfs server localhost:/MY7dCA0ta-lAHBe6QwNAgT: not responding
    Sep 12 22:00:05 codemonkey kernel[0]: nfs server localhost:/MY7dCA0ta-lAHBe6QwNAgT: not responding
    Sep 12 22:00:34 codemonkey kernel[0]: nfs server localhost:/MY7dCA0ta-lAHBe6QwNAgT: dead
    Sep 13 22:06:07 codemonkey kernel[0]: nfs server localhost:/o7BMhxqBTfZ86LKTFIBRy4: not responding
    Sep 13 22:06:38 codemonkey kernel[0]: nfs server localhost:/o7BMhxqBTfZ86LKTFIBRy4: not responding
    Sep 13 22:07:07 codemonkey kernel[0]: nfs server localhost:/o7BMhxqBTfZ86LKTFIBRy4: dead
    Sep 14 00:08:18 codemonkey kernel[0]: nfs server localhost:/xJFc3ycIMRFkFh4uu1oYQA: not responding
    Sep 14 00:08:49 codemonkey kernel[0]: nfs server localhost:/xJFc3ycIMRFkFh4uu1oYQA: not responding
    Sep 14 00:09:19 codemonkey kernel[0]: nfs server localhost:/xJFc3ycIMRFkFh4uu1oYQA: dead
    More info can be found here:
    http://pastebin.com/XLXMvCbC
    Things I tried that didn't fix the problem:
    Updated the firmware of my hard disk (ST95005620AS)
    Checked the partition and disk using the OS X Lion recovery tool

    I have the same issues; the pauses are getting longer and more frequent as time goes on. It seems to have set in around the 10.7.1 update.

  • Problems setting 'allow HTML snippet insertion' in Contribute 4

    I am trying to set the administrator option to allow insertion of HTML snippets (as described here). But the dialog box to change the settings does not include an "Allow HTML Snippet Insertion" checkbox.
    Any advice? Is this to do with the version of Contribute I'm using? Or is there some other administrator setting I need to set first?

    Hi ravster.
    Thanks for the suggestions. I had read the Arch NFS HowTo and, as you suggested, got rpcinfo for the server machine. The Libranet NFS server is configured rather differently, since the NFS daemons are built into the default kernel.
    Since I needed to complete the transfer immediately, I gave up on setting up NFS for now and used netcat (!) instead to transfer the files from the Libranet laptop to the Arch workstation. The laptop will be converted to Arch Linux soon, so I think I'll be able to clear things up then. I haven't had any problems setting up NFS with homogeneous (i.e., all Arch or all Libranet) computer combinations.
    Thanks again.
    Win

  • Setting up local testing server for coldfusion 9

    I could use some help with this thread in coldfusion forum...
    setting up local testing server for coldfusion 9 w dreamweaver on mac
    http://forums.adobe.com/thread/773350
    thanks

    I'm marking this as assumed answered, as you seem to have solved the problem in the other thread.

  • Setting up local testing server

    Hi,
    I am in the process of setting up a local server to test web sites with AJAX calls, and am wondering what some of the more common and easier tools/resources are for doing this. I have seen that MAMP and XAMPP are very common. I recently upgraded to Yosemite, so hopefully configuring a local server will not be a problem!?
    Thanks very much for any direction or feedback.

    Ah. I see the problem. I will update that user tip when Yosemite is released. For now, you can try this:
    Enable mod_userdir and include the configuration file
    In the mod_userdir configuration file, include user-specific configuration files
    In said user-specific configuration files, add some new directives to make Apache 2.4 happy. Specifically, you will need:
        Require all granted
    because Apache just needs that now.
    And you will need
        AddLanguage en .en
        LanguagePriority en fr de
        ForceLanguagePriority Fallback
    if you want to use Multiviews as per the User Tip.
    After a sudo apachectl graceful, it should work.
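    A minimal sketch of such a per-user configuration file, pulling the directives above together (the username and paths are placeholders, not from an actual setup):

    ```apache
    # /etc/apache2/users/username.conf (hypothetical path and username)
    <Directory "/Users/username/Sites/">
        Options Indexes MultiViews
        AllowOverride None
        # Apache 2.4 access control; replaces the old Order/Allow directives
        Require all granted
        # Needed for MultiViews content negotiation, as described above
        AddLanguage en .en
        LanguagePriority en fr de
        ForceLanguagePriority Fallback
    </Directory>
    ```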
    However, I should caution anyone considering this that I haven't used Apache in some time. Things have progressed since I first wrote that user tip. These days, I would recommend using nginx, Tomcat, or Node. I also probably wouldn't even recommend using a Mac. Use Vagrant on your Mac instead and set everything up on Linux. Everyone, including Apple, uses Linux for servers these days. Plus, in today's world of security-enhanced Linux, you are going to have a mess trying to port something from a Mac to Linux. The resemblances are getting more and more superficial. You haven't lived until you have been screaming insults at selinux for hours on end. Even with Apple's recent drop in software quality (and I'm not the only one that says that), you aren't likely to scream insults at your Mac for more than 30 minutes or so until you figure out how to work around the latest bugset. With modern tools like selinux and Gradle, you aren't going to get anything done without reading a few hundred pages of manual first - assuming you can find the manual.
    Bottom line - use your Mac to track down the documentation and read it, get Vagrant running, and just use Linux.

Maybe you are looking for