NFS Exports confusion

I am trying to export a share using NFS in Workgroup Manager. I have what I think should be a working setup, but the client behaves as if nothing is being exported.
I wanted to check /etc/exports to make sure WGM had done things sanely, but I find that it doesn't exist! Apple's documentation and man pages make reference to this file. I'm comfortable creating it 'by hand', but I'd like to know what OS X did with the stuff I set up in WGM first!
thanks,
sean

"showmount" is a generic NFS tool. It's not tied to NetInfo at all. Pretty much the only relation of NetInfo to the NFS server is the aforementioned behavior of the NFS server getting exports from NetInfo if the /etc/exports file didn't exist.
In general, when debugging NFS issues you'll always want to check /var/log/system.log.
Other useful tools include:
showmount -e
rpcinfo -p
And of course, wireshark when you need to dissect the packets on the wire.
Note that showmount and rpcinfo can be run from the NFS client too; you just need to include the server's name or IP address as an extra argument.
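For example, to query a server from a client (the hostname here is a placeholder, not a real machine):
showmount -e nfsserver.example.com
rpcinfo -p nfsserver.example.com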
New in 10.5 are some helpful sub-commands included as part of the "nfsd" command:
nfsd -v status
nfsd checkexports
All of these tools have man pages that you can dig into if you need to.
HTH
--macko

Similar Messages

  • NFS exports from Mac

    Hello,
    I have a little home network with a Mac and an NSLU2. The NSLU2 exports some directories (folders) via NFS to the Mac. Additionally, it makes backups of the files it exports.
    Now I want to do it the other way round: export the /Users directory from the Mac via NFS to the NSLU2 so it can back up /Users too. But the NFS export (Mac to NSLU2) isn't working.
    I used NFS Manager 2.8 for the NFS imports (works like a charm) and for the exports (doesn't work at all). NFS Manager doesn't show me the list of exported directories; neither does 'showmount -e' on the Mac, nor does 'showmount -e <Mac-IP-address>' on the NSLU2. Furthermore, 'rpcinfo -p' doesn't show an entry for nfs on port 2049. I had a look at some websites, but as far as I can tell I haven't done anything wrong.
    So, what's going wrong? I don't use Mac OS 10.4 Server. Could this be a problem?
    Regards
    Michael.
    Mac mini   Mac OS X (10.4.3)  

    I had the same question; NFS is so much more responsive than SMB but isn't so obviously accessible on OS X. I found this site:
    http://astcomm.net/tech/nfs_howto/server/
    Works perfectly, but you have to be careful with the firewall. I enabled UDP port 2049 (nfsd) and TCP and UDP ports 111 (portmapper), then used rpcinfo -p to see which UDP port mountd was using. Unfortunately, that port tends to change each time mountd is started, so you can't put a fixed value into System Preferences for the firewall. Still trying to work out how to resolve that.
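    A quick way to recheck which port mountd grabbed after each restart (grep just filters the rpcinfo listing shown elsewhere in this thread):
    rpcinfo -p | grep mountd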
    Mac OS X (10.4.3)

  • NFS exports

    Hi,
    I've changed the IP address of an iMac NFS client. In Workgroup Manager on the NFS server, I have updated the exports accordingly to include the client's new IP address - and if I do
    tpchn:jg # nidump exports .
    .. I can see the IP address listed for all the necessary NFS exports.
    However, it's not working on the client side.
    The client is in fact on a different subnet. I'm not sure whether that should make a difference as far as exports are concerned, but I can happily ssh between the client and the server, so basic network connectivity is fine.
    Anyway, on the client side, if I attempt to cd to one of the NFS filesystems, I get:
    lyon:admin # cd /data/raid02
    su: cd: /data/raid02: RPC prog. not avail
    lyon:admin #
    I have rebooted the client completely, same result. I've also done:
    tpchn:jg # killall -HUP mountd
    tpchn:jg #
    .. any suggestions?
    Thanks,
    James

    Hi Mike,
    Sorry for the misplaced post and thanks for your reply.
    It's not the server; other clients (on the same subnet) can see the same exported partitions without problem. Running rpcinfo on a working client gives:
    $ rpcinfo -p <server IP>
    program vers proto port
    100000 2 tcp 111 portmapper
    100000 2 udp 111 portmapper
    100024 1 udp 1021 status
    100024 1 tcp 1015 status
    100021 0 udp 1008 nlockmgr
    100021 1 udp 1008 nlockmgr
    100021 3 udp 1008 nlockmgr
    100021 4 udp 1008 nlockmgr
    100021 0 tcp 1014 nlockmgr
    100021 1 tcp 1014 nlockmgr
    100021 3 tcp 1014 nlockmgr
    100021 4 tcp 1014 nlockmgr
    100005 1 udp 984 mountd
    100005 3 udp 984 mountd
    100005 1 tcp 1012 mountd
    100005 3 tcp 1012 mountd
    100003 2 udp 2049 nfs
    100003 3 udp 2049 nfs
    100003 2 tcp 2049 nfs
    100003 3 tcp 2049 nfs
    ..whereas on the problematic client, on a new subnet, I get:
    $ rpcinfo -p <server IP>
    No remote programs registered.
    $
    The server was actually restarted last night for other reasons, but this has made no difference.
    Actually I think I've overlooked hosts.allow/deny .. more later
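    (If TCP wrappers turn out to be the culprit, a minimal sketch of server-side /etc/hosts.allow entries for the new subnet might look like the lines below. The subnet is an example, and the daemon names - and whether each service honors TCP wrappers at all - vary by platform:)
    portmap : 192.168.2.0/255.255.255.0
    mountd : 192.168.2.0/255.255.255.0
    nfsd : 192.168.2.0/255.255.255.0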

  • Can't mount NFS exports on Arch laptop

    I've browsed the forums, double checked the wiki, but for some reason I'm not able to mount my NFS exports. There must be some small issue not letting me do this, but I can't find it. This is my /etc/exports file on the server (desktop):
    /glados 192.168.0.99(rw,fsid=0,no_subtree_check)
    /glados/movies 192.168.0.99(rw,no_subtree_check)
    And the order of the rc.conf daemons on the server:
    DAEMONS=(syslog-ng dbus network @netfs crond rpcbind nfs-common nfs-server !hwclock ntpd)
    On the client (laptop):
    DAEMONS=(syslog-ng dbus network crond @bumblebeed @networkmanager @postgresql rpcbind nfs-common nfs-server sshd)
    $ showmount -e 192.168.1.101
    Export list for 192.168.1.101:
    /glados/movies 192.168.0.99
    /glados 192.168.0.99
    Now I try to mount:
    $ sudo mount -t nfs4 192.168.1.101:/movies server/
    Here it just freezes and nothing happens. I also configured /etc/idmapd.conf on both the server and the client, adding a domain.
    Nothing happens on the client side when I try to mount; it just hangs there. Both machines are running up-to-date Arch Linux.
    Any help is appreciated!
    Last edited by fbt (2012-09-18 03:47:46)

    I just thought that server/ was a typo.
    He's able to use showmount -e 192.168.1.101 from the machine with the IP 192.168.0.99, so I assumed his network is functioning OK ... but you know what they say about 'assume'. Which machine did you actually run the showmount command on? This might be stating the obvious, but if you ran it on 192.168.0.101, make sure 192.168.0.99 and 192.168.1.101 are communicating with each other.
    Also, I think you're not mounting the path correctly. You have movies exported as:
    /glados/movies 192.168.0.99(rw,no_subtree_check)
    But you're trying to mount it as:
    sudo mount -t nfs4 192.168.1.101:/movies server/
    Try mounting it something like this: 
    sudo mount -t nfs4 192.168.1.101:/glados/movies /server/movies
    That command assumes you want to put your NFS shares inside a folder called /server; also be aware that the directory you're trying to mount your shares into has to already exist.
    Incidentally, why are you mounting /glados and then also mounting directories under /glados such as /glados/movies? By mounting /glados as an NFS share you already have access to /glados/movies. Once /glados is mounted, you can symlink to or bind mount any directories under it to any location you want.
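    (A hedged aside, not from the thread: because /glados is exported with fsid=0, i.e. as an NFSv4 pseudo-root, NFSv4 clients usually address paths relative to that root, so the original :/movies form may in fact be correct. Both variants are worth trying, and a mount that hangs silently often points at a firewall or routing problem between the 192.168.0.x and 192.168.1.x subnets:)
    # NFSv4, path relative to the fsid=0 root (/glados):
    sudo mount -t nfs4 192.168.1.101:/movies /server/movies
    # plain NFSv3, full export path:
    sudo mount -t nfs 192.168.1.101:/glados/movies /server/movies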

  • Netboot with NFS - export what/where?

    Hello, Arch noob and first post here. Let me first express appreciation for the wiki, whose articles have helped me several times already outside of Arch. Great job!
    Now then - I'm trying to add Arch's live netboot to my current setup (based on Fog's menu system). After reading through several wiki articles, I'm at the point where I load the relevant parts of the arch.cfg syslinux config (contained on the ISO, so I cannot modify it), which downloads the kernel image and initrd and starts booting the system. I haven't configured the NFS export yet, as it's not clear to me where Arch expects it to be:
    The Diskless System article suggests /srv/arch, but doesn't mention WHAT should be in it.
    The PXE article suggests /mnt/archiso - this should work on my system without further configuration, as I export the entire /mnt with the crossmnt option to satisfy all the Ubuntu netboots.
    When I actually boot the system, it appears to want to mount $SERVER/run/archiso/bootmnt. Searching for this path yields some patches on a mailing list, but no documentation.
    If the /run/... path is the right one, can anyone tell me what should be there, and preferably, how to automatically prepare that path on bootup, since /run is a tmpfs mount?

    myxal wrote:
    Right - by live netboot, I mean a network-based equivalent of a LiveCD: a means to test-drive or install a new release on a clean slate, and a recovery tool for when GRUB on the disk dies for whatever reason.
    I've actually seen that article too, and have 2 issues with it:
    It's implied that servers on the internet are used to download the live system during boot. I'd rather avoid this and just use locally shared files, which would obviously download much faster.
    It doesn't work, at least not without further prerequisites that are missing from the article - the boot fails after a short while with "Could not find kernel image: menu.c32", which is ridiculous, as that file is definitely available over TFTP. Looking through the tftp logs, no "file not found" error appears.
    Arch Diskless expert here. 
    I would just go with the Diskless - NFS article. The path doesn't exactly matter, but /srv/tftp/archxx is a good start in case you have some other PXE installations. Install a PXE installation on your already-installed Arch system. If the architecture is the same as the client's, you can just point your pacman at your PXE installation directory.
    Export your /srv/tftp/archxx directory in /etc/exports; when tftp serves a file, it will come from /srv/tftp. You may want a separate partition for your PXE installation.
    Diskless doesn't care whether you are setting up a live diskless client or an install system. I prefer a live diskless system, since you basically use a live running system, and installing PXE takes the same steps as setting up a new client. Just point pacman at your installation directory and away you go.
    menu.c32 comes from the syslinux package under /usr/lib/syslinux/; copy it into your /boot/syslinux directory (for PXE clients it must be reachable over TFTP, alongside pxelinux.0). You will need pxelinux.0 in /srv/tftp and a pxelinux.cfg directory with syslinux-style configuration files, either a default file for all clients or MAC-address-specific ones (a sketch follows below).
    You will need a PXE ROM or iPXE to boot from the PXE server.
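    A minimal sketch of /srv/tftp/pxelinux.cfg/default, assuming the kernel and initramfs were copied to /srv/tftp/arch/, the NFS root lives at /srv/arch on 10.0.0.1, and the initramfs has NFS-root support (e.g. mkinitcpio's net hook); all paths and the IP are placeholders:
    DEFAULT menu.c32
    PROMPT 0
    TIMEOUT 50
    LABEL arch
        MENU LABEL Arch Linux (diskless)
        KERNEL arch/vmlinuz-linux
        APPEND initrd=arch/initramfs-linux.img ip=dhcp nfsroot=10.0.0.1:/srv/arch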
    Last edited by nomorewindows (2013-08-08 16:48:20)

  • NFS export external USB device fails

    I am trying to NFS export a FAT32 formatted external USB device, which fails with the error:
    /sbin/nfsd: Can't export /Volumes/<external>: Operation not supported (45)
    I am able to export internal/HFS drives, which have the "Owners Enabled: Yes" attribute, and therefore I assume I need to set the flag accordingly on my external drive.
    Despite the fact that the device has been assigned a UUID (it appears to be in place in .fseventsd, and running 'repair disk' echoes it in syslog), I get this error when running vsdbutil:
    vsdbutil: Couldn't update volume information for '/Volumes/<external>/': Invalid argument
    vsdbutil: no valid volume UUID found on '/Volumes/<external>/': Invalid argument
    And diskutil returns this:
    Permissions are not enabled on the disk (-9973)
    I attempted to add the uuid to /var/db/volinfo.database in order to set the permissions there, to no effect.
    I don't believe that I am the only person who has attempted this, but I can find no evidence to the contrary. Thank you.

    NFS exporting requires specific NFS serving support from the file system. Unfortunately, the "msdos" file system implementation doesn't currently support NFS exporting.
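    (A hedged workaround, not from the original reply: keep the data on the FAT32 device but inside an HFS+ disk image, and export the mounted image instead. The size, volume name, and paths below are examples:)
    hdiutil create -size 10g -fs HFS+ -volname NFSShare /Volumes/external/nfsshare.dmg
    hdiutil attach /Volumes/external/nfsshare.dmg
    # the image mounts at /Volumes/NFSShare, an HFS+ volume that nfsd can export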
    If you'd like that support added, I would strongly encourage filing a
    bug report/enhancement request with Apple:
    http://developer.apple.com/bugreporter/
    HTH
    --macko

  • Urgent: Oracle VM server reboot everytime write data to an NFS export

    This is a weird thing:
    Simply described: I have created a server pool holding just one server, without enabling clustering.
    This server has two NICs. One is used for the public connection - call it port1 - carrying the management network, heartbeat, etc.
    The other connects directly to an EMC NAS - call it port2.
    What I did was create a network of type Virtual Machine and add port2 to that network.
    When I created a virtual machine on the server, I added one VNIC for the public network (10.*.*.*) and one VNIC for connecting to the NAS (192.168.4.*).
    Then I installed Oracle Enterprise Linux 5.x/6.x (I've tried both), brought up the two VNICs inside the guest successfully, and mounted the NFS filesystem successfully. The fstab entry looks like this:
    192.168.4.60:/datafs /u02 nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600 0 0
    I can touch an empty file or delete a file from the NFS mount point. BUT when I open an empty file, modify a few lines, and try to save it, the whole OVM server reboots itself. NOT the VM - the OVM server. I tried copying files or dd'ing files to the mount point, always with the same result.
    I really don't understand what's happening. The NFS export itself should be fine, because when I mounted it in a VMware guest VM, read/write operations to the NFS mount point worked just fine!
    Any advice or comments are highly appreciated...

    Today I got the culprit from the server's console output:
    kernel BUG at drivers/net/bna/bnad.c:2102!
    invalid opcode: 0000 [#1] SMP
    kernel panic - not syncing: fatal exception interrupt
    pid: 8160, comm: netback/1 Tainted: G D 2.6.32.21-45xen #1
    Any idea on this? Should I go to Oracle Support directly? When I tried to log an SR, I couldn't find OVM in the product category list... does that mean we don't have Oracle VM support?

  • OS X extern drive ownership/permissions and NFS exporting

    - I have an external (250GB) firewire drive on OS X 10.4.9.
    - I want to have it available to local users of this Mac, but with ownership/permissions of created files/directories protected in the usual UNIX sense of unique UID/GID - files/directories created by one user cannot be read/written by other users of this Mac except as allowed by standard UNIX permission/group settings, e.g. those set with the 'chmod' command.
    - I want to NFS-serve this drive volume to a Linux NFS client (e.g. RHEL 4), again with files/directories protected in this same UID/GID UNIX sense. In our case the users' UID/GIDs will be made to match, but regardless, I want file/directory access on the Linux client restricted per UNIX permissions, with the protections on files/directories created by the Mac users remaining in place against Linux user access, and vice versa, as above.
    Is this feasible in Mac OS X (without OS X Server)?
    How does one go about achieving it?
    I have basic Netinfo Manager skills for creating NFS exports and starting NFS daemon services, but am not expert on all available export options. I have average linux IT NFS server/client and user management skills.
    Thanks,
    -Neil

    I don't know about networking with Linux, but I do know that for OS X users, enforcing permissions on an external drive without OS X Server is tricky.
    First, log in to your admin account. Right-click the drive, Get Info, expand Ownership & Permissions, and uncheck "Ignore ownership on this volume". Then set permissions accordingly.
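    (For reference, the same toggle can be flipped from the command line with vsdbutil; the volume name is an example:)
    sudo vsdbutil -a /Volumes/MyDrive # activate ownership on the volume
    vsdbutil -c /Volumes/MyDrive # check whether ownership is active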
    The problem is that any unprivileged user can log in to his own account, Get Info, recheck the box, and get ownership of the entire contents of the drive. This is possible even without the admin password.
    There is a workaround that will remove the Ignore Ownership box from the Get Info panel so that there will be no box for them to check. First make sure that the box is unchecked and that the permissions are set how you want. Then enable ACLs on the volume by entering this command in a Terminal window:
    sudo fsaclctl -p /Volumes/volumename -e
    Then restart Finder. Now there's no box for the unprivileged user to check. But I don't know where this setting is stored; perhaps the unprivileged user can find some command-line way of getting the box re-checked and thus getting ownership of everything.
    If there is some way you can get the data off of the external drive and onto the main boot drive you will have the best chance of keeping the data safe.

  • NFS exports and the mandatory no_root_squash

    We are running a SUSE11/OES11 cluster serving NSS volumes over NCP, NFS and AFP. Is the only feasible workaround for the NFS no_root_squash requirement to firewall the mountd port?
    If so, will having a list of 1,000+ IP numbers in the allow list for mountd have a significant impact on the cluster nodes? Unfortunately, on our University class B IPv4 site the allocated IP addresses are scattered, and the subset of PCs controlled by technicians (and therefore 'trusted') is not contiguous and neatly arranged.

    There is another workaround for the "no_root_squash" requirement. The text below is taken from the TID: Support | OES: Compatibility issues between NSS and NFS
    2. no_root_squash: Officially, this is mandatory, so care should be taken to limit what hosts can mount the export (as the root user of the NFS client host will be able to act as the root user on the NSS exported path).
    However, due to potential security concerns with allowing root access, some administrators choose to set this up another way. This alternative is so far considered experimental and not thoroughly tested. The key requirement seems to be that the user requesting the mount (typically root) has at least the Filescan right to the NSS volume. If root is "squashed" he is treated like "nobody", and typically "nobody" does not have access, neither on its own merits nor by being associated with any LUM-enabled user in eDir. However, an eDir user can be created and LUM-enabled, given the Filescan right to the NSS volume(s), and the UID assigned to that user can then be used as the "anonuid" for that particular export. So, for example, if the user in question was given UID 1011, then instead of "no_root_squash" the combination "root_squash,anonuid=1011" could be used (see the sketch after this quote).
    In that case, be sure to remember that even after mount, "squashed root user" will be treated as having whatever rights the anonuid user has been given. Also remember that if you use the "all_squash" parameter as well, all NFS client users (not just eDir users and not just root) will be treated as the anonuid user, and will be able to access the NSS volume.
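    (A sketch of what such an export line could look like on the Linux side; the path and host range are examples - only root_squash,anonuid=1011 comes from the TID text:)
    /media/nss/VOL1 10.20.0.0/16(rw,sync,root_squash,anonuid=1011)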
    On the other subject: I do not know the potential impact of 1000+ IP numbers in an allow list for mountd.
    Darcy

  • Systemd and nfs exports [solved]

    I recently switched my server over to systemd and now I cannot connect to the NFS share that it is exporting.
    Here is the entry in the /etc/fstab on the server:
    /dev/sdb1 /media/media ext4 defaults,noatime 0 1
    /media/media /nfs4exports/media none rw,bind 0 0
    Here is the /etc/systemd/system/media-media.mount :
    [Unit]
    Description=media
    Wants=network.target rpc-statd.service
    After=network.target rpc-statd.service
    [Mount]
    What=/media/media
    Where=/nfs4exports/media
    Type=nfs
    StandardOutput=syslog
    StandardError=syslog
    When I try to mount it from my workstation, the mount command just hangs:
    # mount -t nfs mars:/media /media/media
    Help
    Last edited by graysky (2012-05-10 17:01:08)

    The solution is NOT to create this file at all. Apparently, exports from the server do not require it. If I remove the file and reboot the server, I am able to connect from my workstation with no issues. For reference:
    $ ls -l /etc/systemd/system/multi-user.target.wants/
    total 0
    lrwxrwxrwx 1 root root 40 May 10 10:58 cpupower.service -> /usr/lib/systemd/system/cpupower.service
    lrwxrwxrwx 1 root root 38 May 10 10:58 cronie.service -> /usr/lib/systemd/system/cronie.service
    lrwxrwxrwx 1 root root 40 May 10 12:10 exportfs.service -> /usr/lib/systemd/system/exportfs.service
    lrwxrwxrwx 1 root root 42 May 10 10:59 lm_sensors.service -> /usr/lib/systemd/system/lm_sensors.service
    lrwxrwxrwx 1 root root 35 Apr 30 15:15 network.service -> /etc/systemd/system/network.service
    lrwxrwxrwx 1 root root 36 May 10 10:59 ntpd.service -> /usr/lib/systemd/system/ntpd.service
    lrwxrwxrwx 1 root root 36 May 10 11:33 rc-local.service -> /etc/systemd/system/rc-local.service
    lrwxrwxrwx 1 root root 40 May 2 22:37 remote-fs.target -> /usr/lib/systemd/system/remote-fs.target
    lrwxrwxrwx 1 root root 39 May 10 10:58 rpcbind.service -> /usr/lib/systemd/system/rpcbind.service
    lrwxrwxrwx 1 root root 42 May 10 12:10 rpc-mountd.service -> /usr/lib/systemd/system/rpc-mountd.service
    lrwxrwxrwx 1 root root 41 May 10 12:10 rpc-statd.service -> /usr/lib/systemd/system/rpc-statd.service
    lrwxrwxrwx 1 root root 43 May 10 10:58 sshdgenkeys.service -> /usr/lib/systemd/system/sshdgenkeys.service
    lrwxrwxrwx 1 root root 36 May 10 10:58 sshd.service -> /usr/lib/systemd/system/sshd.service
    lrwxrwxrwx 1 root root 41 May 10 11:06 syslog-ng.service -> /usr/lib/systemd/system/syslog-ng.service
    lrwxrwxrwx 1 root root 35 May 10 10:57 ufw.service -> /usr/lib/systemd/system/ufw.service
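    (For what it's worth, the unit as posted probably could never have worked: systemd derives a mount unit's file name from its Where= path, so a unit mounting /nfs4exports/media would have to be named nfs4exports-media.mount rather than media-media.mount, and a local bind mount would be Type=none with Options=bind rather than Type=nfs. A hypothetical corrected unit - redundant here, since the fstab bind line already covers it - would look like:)
    [Unit]
    Description=Bind media into the NFSv4 export tree
    [Mount]
    What=/media/media
    Where=/nfs4exports/media
    Type=none
    Options=bind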

  • Hide subdirectories in NFS exports

    Hi,
    on my server I have
    /data/subA
    /data/subB
    /data/subC
    /data/subD
    /data/subE
    I exported /data via NFS and everything works great.
    My problem is that I do not wish to export subD.
    All of them have the same permissions, which I can't change, because the server itself needs to be able to access subD.
    Also, since there are actually more than those 5 subdirectories, I'd like to avoid setting up individual exports for each subdirectory.
    The ideal solution for me would be to continue exporting just "/data" but somehow excluding that one subdirectory..
    I checked "man exports" and google but couldn't find anything useful.
    So, I faithfully turn to you.
    Mat

    Sin.citadel wrote: I have checked articles on NFS and symlinks, and found that they most likely won't work, since the symlinks are resolved by the client only, not by the server. But if it does work for you, please let me know. On the other hand, this symlink setup does work well with Samba, so you can try that if NFS doesn't work out.
    and that's what's helping me now!
    On the server, I created a symlink under /data named subD, pointing to /myhiddendir.
    When mounting /data on the client, the symlink is still there, but since the NFS mount is treated like a local directory, it points to /myhiddendir ON THE CLIENT as well. So the contents of "subD" are not exported, and that's exactly what I wanted.
    I guess that was the problem for demian; he would have needed to export (and mount on the client) /myhiddendir as well.
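    (In command form, the trick amounts to something like this, using the paths from the thread; moving the existing data out first is an assumed step:)
    mv /data/subD /myhiddendir # move the real data outside the export
    ln -s /myhiddendir /data/subD # leave a symlink in its place
    # NFS clients resolve the symlink locally, so the server's /myhiddendir stays hidden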

  • Qmaster NFS export  hozed my exports on MacOSX server

    While attempting to configure a Qmaster cluster, I noticed that at each reboot (after installing the Qmaster software) a new /etc/exports file is generated and mountd is run, clobbering the information in the LDAP server, which breaks clients' NFS mounts of user directories and the like. So far the quick fix is to simply remove /etc/exports, kill the currently running mountd, and restart mountd. But I'm wondering if anyone else is seeing this problem and whether they have come up with a better solution?
    Ian

    Here is the deal:
    1. No errors (NFS-related) that I can see in the logs.
    2. If I can't cp within the server itself, no other machine can either, even if it sees the export.
    3. 'sudo nfsd update' did not help.
    4. Here is the kicker: I added the DNS name of the server to the NFS client export list, and voilà!
    --> That's the fix. Not very obvious, given that 127.0.0.1 should suffice and is the correct way to export in Tiger.
    Anyway, thank you for the suggestion. I can stop banging my head against the wall now.

  • SUN NAS 5310 NFS exporting problem

    Dear Experts,
    I'm facing a strange problem with a SUN NAS 5310. A volume from this NAS is exported as an NFS share with these options: R/W access, map root user to the root user of a Unix machine, and the hosts placed in a trusted group. What is happening is that when I mount this exported volume on a Unix machine, it is mounted with permission 700 and owner root, and no other users can access it. I tried to change the permissions from the Unix machine but couldn't; it gives me the message "chmod :WARNING: can't change ....". There is no way on the NAS to change this that I can see. I tried exporting the folders under this volume and mounting them on Unix, and that went fine. I also tried other volumes used for Windows and they mounted normally!!! Only the mentioned volume refuses to change and mount normally.
    Any ideas will be appreciated.
    Thanks to you all.
    Regards, Amr.

    Hi,
    I blogged about this some time ago: http://vnull.pcnet.com.pl/blog/?p=87 - but in an inter-VM communication scenario. It could be adapted so that the physical ethernet in dom0 also runs at 1GigE with MTU=9000.
    Be sure to have NFS mounted as something like this: nointr,tcp,nfsvers=3,timeo=300,rsize=32768,wsize=32768
    Another trick is to tune driver settings, but this only helps in high packet-per-second scenarios (works great for e1000); you could also try enabling all offload settings for the driver (for both cases, consult man ethtool). If even that doesn't fully exploit the storage performance, you could try dividing the /OVS NFS mountpoint into several mountpoints based on performance characteristics. With 2 VMs that would mean, for example, three NFS mountpoints:
    1) /OVS
    2) /OVS/running_pool/01_vm1
    3) /OVS/running_pool/02_vm2
    This would exploit three TCP streams, but I'm not sure whether it's supported. Also be sure to tune the server's NFS kernel thread count (man nfsd -> rc.nfsd -> /etc/sysconfig/nfs -> RPCNFSDCOUNT="some_big_number"), as sketched below.
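    (A sketch of that thread-count tuning; 64 is an example value, and the config file location varies by distro:)
    echo 'RPCNFSDCOUNT="64"' >> /etc/sysconfig/nfs # persistent, RHEL-style systems
    rpc.nfsd 64 # or apply immediately, without a restart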

  • [Solved] NFS export at boot and net-auto-wired

    Hello Guys,
    I have a problem with my fileserver. Previously DHCP was enabled through 'systemctl enable dhcpcd@eth0' and everything worked fine. I changed the network to be configured using net-auto-wired, because it seemed a reasonable thing to do and because it allows for a fallback IP address in case DHCP fails. Now the problem is that at boot many daemons are started before the network is up and so don't work properly, especially nfsd, because exportfs can't resolve the names of the allowed client machines.
    Mär 18 11:32:22 bigbrain systemd[1]: Started NFS server.
    Mär 18 11:32:22 bigbrain systemd[1]: Starting NFS Mount Daemon...
    Mär 18 11:32:22 bigbrain systemd[1]: Starting NFSv4 ID-name mapping daemon...
    Mär 18 11:32:23 bigbrain systemd[1]: Started NFSv4 ID-name mapping daemon.
    Mär 18 11:32:24 bigbrain systemd[1]: Started Samba SMB/CIFS server.
    Mär 18 11:32:24 bigbrain systemd[1]: Started NFS Mount Daemon.
    Mär 18 11:32:28 bigbrain kernel: Installing knfsd (copyright (C) 1996 [email protected]).
    Mär 18 11:32:28 bigbrain tunnel-httpd.sh[321]: ssh: Could not resolve hostname example.com: Name or service not known
    Mär 18 11:32:28 bigbrain tunnel-httpd.sh[321]: ssh: Could not resolve hostname example.com: Name or service not known
    Mär 18 11:32:28 bigbrain ifplugd[318]: Link beat detected.
    Mär 18 11:32:28 bigbrain ifplugd[318]: Executing '/etc/ifplugd/netcfg.action eth0 up'.
    Mär 18 11:32:28 bigbrain ifplugd[318]: client: up
    Mär 18 11:32:28 bigbrain ifplugd[318]: client: loading stw-wh
    Mär 18 11:32:28 bigbrain ifplugd[318]: client: loading dhcp
    Mär 18 11:32:28 bigbrain kernel: NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
    Mär 18 11:32:28 bigbrain dhcpcd[397]: version 5.6.7 starting
    Mär 18 11:32:28 bigbrain kernel: NFSD: starting 90-second grace period
    Mär 18 11:32:28 bigbrain exportfs[354]: exportfs: Failed to resolve some-domain.example.com
    Mär 18 11:32:28 bigbrain exportfs[354]: exportfs: Failed to resolve some-domain.example.com
    Mär 18 11:32:28 bigbrain exportfs[354]: exportfs: Failed to resolve some-other-domain.example.com
    Mär 18 11:32:28 bigbrain exportfs[354]: exportfs: Failed to resolve some-other-domain.example.com
    Mär 18 11:32:28 bigbrain exportfs[354]: exportfs: Failed to resolve some-domain.example.com
    Mär 18 11:32:28 bigbrain exportfs[354]: exportfs: Failed to resolve some-domain.example.com
    Mär 18 11:32:28 bigbrain exportfs[354]: exportfs: Failed to resolve some-other-domain.example.com
    Mär 18 11:32:28 bigbrain exportfs[354]: exportfs: Failed to resolve some-other-domain.example.com
    Mär 18 11:32:28 bigbrain kernel: r8169 0000:04:00.0 eth0: link up
    Mär 18 11:32:28 bigbrain kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
    Mär 18 11:32:28 bigbrain systemd[1]: PID file /run/httpd/httpd.pid not readable (yet?) after start.
    Mär 18 11:32:29 bigbrain dhcpcd[397]: eth0: sending IPv6 Router Solicitation
    Mär 18 11:32:29 bigbrain dhcpcd[397]: eth0: broadcasting for a lease
    Mär 18 11:32:30 bigbrain dhcpcd[397]: eth0: offered 10.42.19.195 from 141.35.0.13
    Mär 18 11:32:30 bigbrain dhcpcd[397]: eth0: acknowledged 10.42.19.195 from 141.35.0.13
    Mär 18 11:32:30 bigbrain dhcpcd[397]: eth0: checking for 10.42.19.195
    Mär 18 11:32:30 bigbrain ntpd_intres[341]: host name not found: 0.pool.ntp.org
    Mär 18 11:32:30 bigbrain ntpd_intres[341]: host name not found: 1.pool.ntp.org
    Mär 18 11:32:30 bigbrain ntpd_intres[341]: host name not found: 2.pool.ntp.org
    Mär 18 11:32:32 bigbrain ntpd_intres[341]: host name not found: 0.pool.ntp.org
    Mär 18 11:32:32 bigbrain ntpd_intres[341]: host name not found: 1.pool.ntp.org
    Mär 18 11:32:32 bigbrain ntpd_intres[341]: host name not found: 2.pool.ntp.org
    Mär 18 11:32:33 bigbrain dhcpcd[397]: eth0: sending IPv6 Router Solicitation
    Mär 18 11:32:34 bigbrain dhcpcd[397]: eth0: leased 10.42.19.195 for 1800 seconds
    Mär 18 11:32:35 bigbrain dhcpcd[397]: forked to background, child pid 446
    Mär 18 11:32:35 bigbrain ifplugd[318]: client: :: dhcp up [done]
    Mär 18 11:32:35 bigbrain ifplugd[318]: Program executed successfully.
    Mär 18 11:32:36 bigbrain ntpd[336]: Listen normally on 5 eth0 10.42.19.195 UDP 123
    Mär 18 11:32:36 bigbrain ntpd[336]: peers refreshed
    Mär 18 11:32:36 bigbrain ntpd[336]: new interface(s) found: waking up resolver
    Mär 18 11:32:37 bigbrain dhcpcd[446]: eth0: sending IPv6 Router Solicitation
    Mär 18 11:32:38 bigbrain ntpd_intres[341]: DNS 0.pool.ntp.org -> 83.137.98.96
    Mär 18 11:32:38 bigbrain ntpd_intres[341]: DNS 1.pool.ntp.org -> 176.31.45.66
    Mär 18 11:32:38 bigbrain ntpd_intres[341]: DNS 2.pool.ntp.org -> 192.53.103.108
    What can I do to fix this?
    TIA
    Sunday
    Last edited by Sunday87 (2013-03-19 23:08:56)

    For future reference:
    net-auto-wired doesn't Wants=network.target at any point (neither when started nor when a connection is made), as dhcpcd.service does (so I actually had the same problem before I switched to net-auto-wired; I guess I just didn't notice it). Now I'm using netcfg.service, which does Wants=network.target and starts Before=network.target, so everything works fine. The only thing missing is a fallback static IP in case DHCP does not respond, but that is another question, so I will mark this solved.
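    (An alternative that doesn't depend on which network tool pulls in network.target: order the NFS daemon itself after the network is up via a systemd drop-in. The unit name nfsd.service matches Arch's packaging of that era but is an assumption:)
    # /etc/systemd/system/nfsd.service.d/after-network.conf (hypothetical)
    [Unit]
    Wants=network-online.target
    After=network-online.target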

  • NFS export group permissions failing to be applied

    I have several NFS shares, mounted on RHEL/Centos 4.5 clients. Only posix permissions are used, no acl. The RHEL client authenticates users through opendirectory on the server.
    jim and bob belong to the same group, staff
    There are two files on the nfs mount, one belongs to jim, one to bob.
    Both files have rw group permissions, and belong to group staff.
    On the server, or logged into the server via ssh, jim can edit and save bob's file, since he has group write permission.
    However, on the NFS mount, jim is not given permission to write to bob's file. Jim can delete bob's file, though.
    Similarly, bob cannot edit jim's file, though he is in the same group.
    The group and user names are identical across the systems, as are the group and user IDs, which is to be expected since they are served from the same directory.
    This problem has been affecting us for quite a while - from the original clean install of 10.4 through to the current 10.5.6 server.
    The issue has already been raised (and archived) at
    http://discussions.apple.com/thread.jspa?threadID=1442054&tstart=570
    with no useful result.
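    (A quick client-side sanity check for this kind of problem; the mount point is a placeholder:)
    id jim # confirm jim's groups and numeric gid match the server's
    ls -ln /mnt/nfsshare # list numeric uid/gid on files as seen over NFS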

    Hi friends, seems like we have an enemy in common. Well, I will keep this space updated if we come across any solutions. Thanks,
    Ricky.
    Edited by: user781890 on Aug 25, 2008 10:06 PM
