Solaris 10 NFS client and readdir

I have a Solaris 10 u5 client that mounts a directory over NFS from a Mac OS X server. The mount works fine, and programs and tools such as /bin/ls work OK. However, several programs I have that use the readdir (or readdir_r) library calls never return lists of files from this NFS-mounted directory (point these programs at a ufs/zfs file system and all works fine). I created a simple test using readdir and the same thing happens there: the only entries it will find/list are the "." and ".." directories for anything in the NFS-mounted name space.
I found a reference to the nfs:nfs3_shrinkreaddir and nfs:nfs_shrinkreaddir Solaris tunable parameters and placed them both in the /etc/system file and rebooted, but it did not change the behavior. I also tried setting nfs:nfs_disable_rddir_cache=1 and related settings, to no avail.
I also noticed that tar was dumping core reading this directory, but I have found a patch for tar that fixes this. It did not include any guidance on NFS parameters for Mac OS X or similar NFS v3 servers.
Is there some set of NFS settings I can apply so that this Solaris client can mount the file system and actually read its directories and files?

I believe I have found my problem, and it turns out to be only remotely related to NFS. The application is built for 32 bits and the O/S is an i386/x64 system. Apparently readdir fails in 32-bit mode when it is handed an "nfs" inode (presumably an inode number too large for a 32-bit ino_t); the same code works fine when compiled with -m64. So now I need to track down x64 builds of the failing packages.
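For anyone hitting the same thing, here is a rough way to confirm that diagnosis; the source file name, mount point, and compiler invocations below are assumptions for illustration only:

cc -o readdir_test32 readdir_test.c          # 32-bit build (the Solaris default)
cc -m64 -o readdir_test64 readdir_test.c     # 64-bit build
truss ./readdir_test32 /mnt/macnfs 2>&1 | grep getdents
# In a 32-bit, non-largefile binary, readdir()/getdents() fails with EOVERFLOW
# when the server returns a file ID that does not fit in a 32-bit ino_t; the
# 64-bit build (or a 32-bit build compiled with -D_FILE_OFFSET_BITS=64, which
# switches to readdir64/getdents64) lists the same directory without error.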

Similar Messages

  • Solaris 9 NFS clients and Mac OS X 10.3.8 NFS Server issues

    I have a situation where I'm using a Mac OS X Server machine as my file server for a heterogeneous mix of clients, Solaris 9 being one of them. The NFS server portion of Mac OS X seems to have some quirks. I made the move to a Mac OS X server from a combination of Linux and Solaris because it was touted as a good multi-platform server solution, but my NFS woes are souring my opinion of it.
    I haven't been able to nail down the exact cause, but it seems that Sun's Gnome Desktop 2.0 has problems starting up. The 'gconfd-2' process starts and never finishes what it's doing. I suspect a problem with creating a lock file in the user's NFS-mounted home directory. My workaround was to disable the Gnome Desktop 2.0 option, but it isn't very pleasant because many of my users liked it.
    Another strange issue that plagues the Solaris 9 and Linux (Fedora 2) users is that Mozilla, which is the primary email client for my users, complains that it "Could not initialize the browser's security component." It goes on to suggest that there may be a problem with read/write access to the user's profile directory or that there's no more room. Googling didn't turn up much about this problem, but the home directory share is nowhere near full and the permissions are such that everyone can read and write to their own profile directory just fine. I've been able to work around this problem on Linux by removing the user's 'cert8.db' and 'key3.db' files before they run Mozilla, but this technique is failing to work on the Solaris 9 clients.
    All these problems seem to involve some strange file access issue, and the Linux and Solaris clients had no problems when I was using a Solaris box for home directory sharing via NFS, so it definitely seems like a problem with Mac OS X's NFS implementation.
    If anyone has come across this type of issue and has some information about a fix or a better workaround, I would love to hear from you. Thanks in advance!

    I just figured out today that the Gnome Desktop 2.0 problem is due to some part of Gnome not liking really long home directory paths. Mac OS X by default dictates that home directories be of the form /Network/Servers/(fqdn of file server)/Volumes/(volume name)(home directory share path), for example, /Network/Servers/xxx.myschool.edu/Volumes/Homes/userx. I haven't figured out what in Gnome doesn't like it, but it appears to be the gconf mechanism.
    The Mozilla strangeness still is happening though.
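    As an aside, a hedged sketch of the cert8.db/key3.db workaround described above; ~/.mozilla is an assumed profile location and the exact path varies per profile:
    # remove the NSS certificate/key databases before the user starts Mozilla
    find ~/.mozilla \( -name cert8.db -o -name key3.db \) -print | xargs rm -f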

  • Problems enabling nfs client and server

    I just rebuilt Solaris 11.1 x86 on a SunFire X4640 and am having problems enabling NFS.
    First I typed the following command:
    # svcs network/nfs/server
    disabled
    Second, I typed the following command:
    # svcadm enable network/nfs/server
    # svcs network/nfs/server
    offline
    I did this 3 or 4 times without success...
    Any ideas?
    I'm holding up production here! Please help!!

    Let's rule out the easier stuff first...
    Do you have something shared?
    I think you need to have something shared before you can enable the nfs server service.
    Or, if you share something, the service is started automatically. See below.
    Thanks, Cindy
    # svcs -a | grep nfs
    disabled Feb_26 svc:/network/nfs/client:default
    disabled Feb_26 svc:/network/nfs/server:default
    disabled Feb_26 svc:/network/nfs/rquota:default
    # svcadm enable svc:/network/nfs/server:default
    # svcs | grep nfs
    disabled 13:51:52 svc:/network/nfs/server:default
    # zfs set share.nfs=on rpool/cindy
    # share
    rpool_cindy /rpool/cindy nfs sec=sys,rw
    # svcs | grep nfs
    online Feb_26 svc:/network/nfs/fedfs-client:default
    online Feb_27 svc:/network/nfs/status:default
    online Feb_27 svc:/network/nfs/cbd:default
    online Feb_27 svc:/network/nfs/mapid:default
    online Feb_27 svc:/network/nfs/nlockmgr:default
    online 13:52:35 svc:/network/nfs/rquota:default
    online 13:52:35 svc:/network/nfs/server:default

  • NFS - Solaris 10 client from Ubuntu server gives Rpcbind error

    Hello All,
    New to Solaris, and I've been scouring the Internet to find a solution, but none have been produced. I'll start by giving you details about the setups, and then go into the error:
    Server Setup:
    Ubuntu 8.04
    Exports file ->
    /home/<folder> <Solaris 10 Server DNS name>(rw,no_subtree_check,async)
    Client Setup:
    Solaris 10
    Set /etc/default/nfs to have NFS_CLIENT_VERSMAX=3
    Ran svcadm -v enable -r network/nfs/client and then tried
    mount -F nfs <Ubuntu Server DNS name>:/home/<folder> /mnt/test/
    and all I ever get is "Rpcbind failure - RPC: Timed Out", and then it says it's retrying: /mnt/test
    I've gotten the firewall out of the way, I can ping the Ubuntu server from the Solaris server and vice versa, and I'm able to mount the Ubuntu NFS share on another Ubuntu machine perfectly, but I can't get it to mount on the Solaris server. If I specify v3 of NFS, that doesn't change anything. If I specify v4 of NFS, I get the error that the file or folder doesn't exist on the Ubuntu server.
    Any ideas? Any more info needed?

    This is the exact same problem I've been having. My server is Ubuntu 8.10, and the client is Solaris 10. This is on my home network, so I'm pretty confident it isn't a network issue. I do NFS all the time at work between Solaris machines, but I'm stumped on this one. I've noticed there are similar threads on the topic with no real answer that I have found --
    http://www.linuxquestions.org/questions/linux-networking-3/nfs-server-on-ubuntu-doesnt-play-nice-with-nfs-client-on-solaris-626508/
    I did a dfshares from the Solaris box, and I actually get a response listing the shares. Even though I can see it I still can't mount it. Here is what I see:
    bash-3.00# dfshares tabasco
    RESOURCE SERVER ACCESS TRANSPORT
    tabasco:/media/Shared tabasco - -
    bash-3.00# mount -F nfs -o ro tabasco:/media/Shared /mnt
    nfs mount: tabasco:/media/Shared: No such file or directory
    bash-3.00#
    NFS is working on the server, as I can mount it locally (see below)
    root@tabasco:/# cat /etc/exports
    /media/Shared *(ro,sync)
    root@tabasco:/# mount tabasco:/media/Shared /mnt
    root@tabasco:/# cd /mnt
    root@tabasco:/mnt# ls
    Videos lost+found Music Pictures Other
    root@tabasco:/mnt#
    Yes... my server's name is tabasco... remember it's a home network... and I like Tabasco... :)
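    A hedged troubleshooting checklist for this situation (the host name tabasco and the share path are taken from the posts above; everything else is an assumption):
    rpcinfo -p tabasco      # are portmapper, mountd (100005) and nfs (100003) all registered?
    showmount -e tabasco    # does the export list really contain /media/Shared?
    # pin the NFS version and transport explicitly from the Solaris side
    mount -F nfs -o vers=3,proto=tcp tabasco:/media/Shared /mnt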

  • Wrong atime created by iMac as an NFS client

    We have iMac computers in classrooms, and they run as NFS clients. We have found that the wrong access time (atime) is created by Mac OS 10.5 and 10.6.
    A file that is opened using O_CREAT on the NFS-mounted directory will have the wrong atime. For example, the year will appear as 1920 or 2037 depending on the IP address of the NFS client.
    This problem can also be found with Solaris NFS server and EMC Celerra.
    The following program will create a file having the wrong atime on an NFS-mounted directory:
    #include <fcntl.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        int fd;
        /* the O_EXCL option causes the wrong atime */
        fd = open("AIZU", O_CREAT | O_EXCL | O_RDWR, 0644);
        /* fd = open("AIZU", O_CREAT | O_RDWR, 0644); */
        close(fd);
        exit(0);
    }
    The phenomenon is quite similar to the following problems on FreeBSD reported in 2001:
    http://www.mail-arch...g/msg22084.html
    If anyone knows a solution to this problem, we would greatly appreciate your advice. Thanks in advance.

    The solution shown in
    http://www.mail-archive.com/[email protected]/msg22084.html
    should be applied to the current NFS modules of Snow Leopard.
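    Until such a fix is in place, one possible user-space mitigation (a workaround sketch, not part of the referenced patch) is to force a fresh SETATTR right after the exclusive create, since the bogus times typically come from the create verifier that an NFSv3 server stores in the timestamp fields during exclusive create:
    # after the O_EXCL create, rewrite the timestamps explicitly; touch sets
    # both atime and mtime to the current time (in the C test above, the
    # equivalent would be a follow-up utimes() call on the new file)
    touch AIZU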

  • NFS client problem "The document X could not be saved"

    Hi,
    Briefly: Debian Linux server (Lenny), OS X 10.5.7 client. NFS server config is simple enough:
    /global 192.168.72.0/255.255.255.0(rw,root_squash,sync,insecure,no_subtree_check)
    This works well with our Linux clients, and generally it is OK with my OS X iMac. The OS X NFS client is configured through Directory Utility, with no "Advanced" options. The client can authenticate with NIS nicely, and NFS, on the whole, works. I can manipulate files with Finder, and create files on the command line with the usual tools.
    The problem is TextEdit, iWork and other Cocoa apps (not all). They can save a file once, but subsequently saving a file produces a "The document X.txt cannot be saved" error dialog. If I remove the file on the commandline and re-save, then the save succeeds. It is as if re-saving the document with the same name as an existing file causes issues. There seems to be no problem with file permissions. When I save in a non NFS exported directory everything is fine.
    Has anyone spotted this problem before?
    Lawrence

    I doubt that "OS X NFS is fundamentally broken" seeing as how many people use it successfully.
    tcpdump (or more preferably: wireshark) might be useful in tracking down what's happening between the NFS client and NFS server. Sometimes utilities like fs_usage can be useful in tracking down the application/filesystem interaction.
    It's usually a good idea to check the logs (e.g. /var/log/system.log) for possible clues in case an error/warning is getting logged around the same time as the failure. And if you can't reproduce the problem from the command line, then that can be a good indication that the issue is with the higher layers of the system.
    Oh, and if you think there's a bug in the OS, it never hurts to officially tell Apple that via a bug report:
    http://developer.apple.com/bugreporter/
    Even if it isn't a bug, they should still be able to work with you to help you figure out what's going on. They'll likely want to know details about what exactly isn't working and will probably ask for things like a tcpdump capture file and/or an fs_usage trace.
    HTH
    --macko
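    For concreteness, a minimal capture recipe along those lines (the interface name, server address, and capture file name are assumptions):
    # on the OS X client, capture NFS traffic while reproducing the failed save
    sudo tcpdump -i en0 -s 0 -w textedit-save.pcap host 192.168.72.1 and port 2049
    # in a second terminal, watch TextEdit's filesystem calls
    sudo fs_usage -w -f filesys TextEdit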

  • NFS Errors on Solaris 2.6 and 7 Systems running as a ClearCase client

    I read in a Rational document that when Sun fixed defect #4271267 they introduced a problem that causes EAGAIN to be returned to fsync() and close() system calls. The EAGAIN defect is being tracked by Sun as defect #4349744 and it is a problem for NFS clients running Solaris 2.6, 7, and 8.
    Sun has released a patch for this on Solaris 8 (108727-06 or later). Does anyone know which patches (patch IDs) need to be installed on Solaris 2.6 and Solaris 7 systems for this problem when ClearCase views or VOBs are on NetApp with Solaris servers?
    I am getting NFS errors and I am not able to view the files when I try accessing the VOBs from Solaris 2.6 and 7 systems. The ClearCase server is on Solaris 8 and the VOBs are stored on a NetApp filer.
    TIA
    Regards
    Saravanan.C.S

    Hello:
    I have the same problem with only an Adaptec AHA-2940, but it is bigger... I can't install Solaris at all because of this problem.
    I am sure that it isn't a hardware problem because I have 3 IBM PC Server 315 machines and I have the same problem with all of them.
    Any help would be greatly appreciated.
    Hello,
    I have an Intel box running Solaris 2.6 with Oracle 8i.
    There are 2 SCSI cards. The first one has the following devices: [0 = Seagate 9.5 GB drive] [4 = Seagate tape drive] [6 = TEAC CD-ROM] [7 = Adaptec 2940 SCSI card].
    The second has the following devices: [0 = Seagate 9.5 GB drive] [1 = Seagate 9.5 GB drive] [7 = Adaptec 2040 SCSI card].
    Solaris is partitioned as follows: 1 root drive, 1 /opt drive, and 1 /backup drive.
    My problem is periodically we get Transport errors.
    Dec 2 08:15:45 HAFC unix: WARNING: /pci@0,0/pci9004,7861@4 (adp1):
    Dec 2 08:15:45 HAFC unix: timeout: abort request, target=1 lun=0
    Dec 2 08:15:45 HAFC unix: WARNING: /pci@0,0/pci9004,7861@4 (adp1):
    Dec 2 08:15:45 HAFC unix: timeout: abort device, target=1 lun=0
    Dec 2 08:15:45 HAFC unix: WARNING: /pci@0,0/pci9004,7861@4 (adp1):
    Dec 2 08:15:45 HAFC unix: timeout: reset target, target=1 lun=0
    Dec 2 08:15:45 HAFC unix: WARNING: /pci@0,0/pci9004,7861@4 (adp1):
    Dec 2 08:15:45 HAFC unix: timeout: early timeout, target=1 lun=0
    Dec 2 08:15:45 HAFC unix: WARNING: /pci@0,0/pci9004,7861@4/cmdk@1,0 (Disk8):
    Dec 2 08:15:45 HAFC unix: SCSI transport failed: reason 'incomplete': retrying command
    When these errors come, they come by the thousands, sometimes locking up the system. Many times the errors indicate a disk and give a block error or two; if I replace the disk, the problem usually goes away. Sometimes replacing the cable clears the problem. Sometimes this happens when there is no real activity on the server. I have applied patch 111031-01, which was supposed to fix this problem, but it hasn't. Unless I have bad hardware. Is that possible?

  • NFS problem between RedHat Client and Solaris Server

    Hi all, we are experiencing a problem between a RedHat client and a Solaris 10 server. For the purposes of this post, I'll call the Redhat client server A and the Solaris 10 server B.
    Server B is exporting a filesystem that server A is trying to mount. Server A can successfully mount the exported file system, however, strange things are happening. If I change to the exported mount point on server A and create a file, the file is owned by nobody:nobody, not the user that created the file.
    A look at the file on server B shows the file has the correct UID and GID (ie the UID & GID of server A).
    The fstab file on server A looks like this:
    serverB:/data /data nfs4 rsize=32768,wsize=32768,hard,nointr,rw,bg,actimeo=0,timeo=300,suid 0 0
    Does anyone have an explanation for this?
    NB: There is a firewall between server A and server B. A firewall rule is in place to allow traffic between the two servers on port 2049
    Stewart

    Hi
    Quote: "If I change to the exported mount point on server A and create a file, the file is owned by nobody:nobody, not the user that created the file."
    On an NFS share, for security reasons, you normally don't have root privileges.
    A file created as the root user will be mapped to nobody:nobody.
    The behaviour you see is correct.
    If you want the file to be created as root, you have to export the filesystem with -o ro,anon=0.
    NFSv3 will be blocked by your firewall.
    Franco
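    For reference, a sketch of what the suggestion above would look like on the Solaris 10 server (the path /data comes from the original post; the option values are assumptions, and anon=0 maps requests that would otherwise be squashed to uid 0):
    # on server B
    share -F nfs -o rw,anon=0 /data
    # a narrower alternative is to grant root access only to the specific client:
    # share -F nfs -o rw,root=serverA /data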

  • Svc:/network/nfs/client problem in Solaris 10

    Hi,
    I've been trying to figure out the problem with the scenario below for a long time, and so far I have not been able to.
    I have an X4100 SunFire system with Solaris 10 installed.
    Most of the services (ssh, ftp, etc.) do not come up each time I reboot the server, and after approximately 3 hours everything comes up fine by itself.
    I noticed that the service svc:/network/nfs/client is taking a very long time to start; this might be related to the problem.
    See the svcs output below; hope this is useful :)
    bash-3.00# svcs -xv ssh
    svc:/network/ssh:default (SSH server)
    State: offline since Wed May 20 16:37:45 2009
    Reason: Service svc:/network/nfs/client:default is starting.
    See: http://sun.com/msg/SMF-8000-GE
    Path: svc:/network/ssh:default
    svc:/system/filesystem/autofs:default
    svc:/network/nfs/client:default
    See: man -M /usr/share/man -s 1M sshd
    Impact: 3 dependent services are not running:
    svc:/milestone/multi-user-server:default
    svc:/system/basicreg:default
    svc:/system/zones:default
    Any advice on this?
    Appreciate your help,

    Hi all,
    The issue is resolved. A conflicting port entry in /etc/services was the cause; I re-added the correct entry in /etc/services.
    Thanks
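    For reference, a hedged way to check for that kind of conflict; the stock Solaris 10 NFS-related entries in /etc/services normally look roughly like the comments below:
    egrep 'nfsd|lockd' /etc/services
    # expected stock entries (roughly):
    # nfsd    2049/udp    nfs
    # nfsd    2049/tcp    nfs
    # lockd   4045/udp
    # lockd   4045/tcp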

  • NFS4: Problem mounting NFS mount onto a Solaris 10 Client

    Hi,
    I am having problems mounting NFS mount point from a Linux-Server onto a Solaris 10 Client.
    In the following:
    My server IP = ..*.120
    Client IP = ..*.100
    Commands run on Client:
    ==================
    # mount -o vers=3 -F nfs 172.25.30.120:/scratch/pvfs2 /scratch/pvfs2
    nfs mount: 172.25.30.120: : RPC: Rpcbind failure - RPC: Unable to receive
    nfs mount: retrying: /scratch/pvfs2
    nfs mount: 172.25.30.120: : RPC: Rpcbind failure - RPC: Unable to receive
    nfs mount: 172.25.30.120: : RPC: Rpcbind failure - RPC: Unable to receive
    # mount -o vers=4 -F nfs 172.25.30.120:/scratch/pvfs2 /scratch/pvfs2
    nfs mount: 172.25.30.120:/scratch/pvfs2: No such file or directory
    # rpcinfo -p
    program vers proto port service
    100000 4 tcp 111 rpcbind
    100000 3 tcp 111 rpcbind
    100000 2 tcp 111 rpcbind
    100000 4 udp 111 rpcbind
    100000 3 udp 111 rpcbind
    100000 2 udp 111 rpcbind
    1073741824 1 tcp 36084
    100024 1 udp 42835 status
    100024 1 tcp 36086 status
    100133 1 udp 42835
    100133 1 tcp 36086
    100001 2 udp 42836 rstatd
    100001 3 udp 42836 rstatd
    100001 4 udp 42836 rstatd
    100002 2 tcp 36087 rusersd
    100002 3 tcp 36087 rusersd
    100002 2 udp 42838 rusersd
    100002 3 udp 42838 rusersd
    100011 1 udp 42840 rquotad
    100021 1 udp 4045 nlockmgr
    100021 2 udp 4045 nlockmgr
    100021 3 udp 4045 nlockmgr
    100021 4 udp 4045 nlockmgr
    100021 1 tcp 4045 nlockmgr
    100021 2 tcp 4045 nlockmgr
    100021 3 tcp 4045 nlockmgr
    100021 4 tcp 4045 nlockmgr
    # showmount -e 172.25.30.120 (Server)
    showmount: 172.25.30.120: RPC: Rpcbind failure - RPC: Unable to receive
    Commands on Server:
    ================
    program vers proto port
    100000 2 tcp 111 portmapper
    100000 2 udp 111 portmapper
    100021 1 tcp 49927 nlockmgr
    100021 3 tcp 49927 nlockmgr
    100021 4 tcp 49927 nlockmgr
    100021 1 udp 32772 nlockmgr
    100021 3 udp 32772 nlockmgr
    100021 4 udp 32772 nlockmgr
    100011 1 udp 796 rquotad
    100011 2 udp 796 rquotad
    100011 1 tcp 799 rquotad
    100011 2 tcp 799 rquotad
    100003 2 udp 2049 nfs
    100003 3 udp 2049 nfs
    100003 4 udp 2049 nfs
    100003 2 tcp 2049 nfs
    100003 3 tcp 2049 nfs
    100003 4 tcp 2049 nfs
    100005 1 udp 809 mountd
    100005 1 tcp 812 mountd
    100005 2 udp 809 mountd
    100005 2 tcp 812 mountd
    100005 3 udp 809 mountd
    100005 3 tcp 812 mountd
    100024 1 udp 854 status
    100024 1 tcp 857 status
    # showmount -e 172.25.30.120
    Export list for 172.25.30.120:
    /scratch/nfs 172.25.30.100,172.25.24.0/4
    /scratch/pvfs2 172.25.30.100,172.25.24.0/4
    Thank you, ~al

    I also tried to run Snoop on the client and wireshark on Server and following is what I see:
    On the server, upon issuing the mount command on the client:
    # tshark -i eth1
    Running as user "root" and group "root". This could be dangerous.
    Capturing on eth1
    0.000000 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    0.205570 172.25.30.100 -> 172.25.30.120 Portmap V2 GETPORT Call MOUNT(100005) V:3 UDP
    0.205586 172.25.30.120 -> 172.25.30.100 ICMP Destination unreachable (Port unreachable)
    0.207863 172.25.30.100 -> 172.25.30.120 Portmap V2 GETPORT Call MOUNT(100005) V:3 UDP
    0.207869 172.25.30.120 -> 172.25.30.100 ICMP Destination unreachable (Port unreachable)
    2.005314 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    4.011005 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    5.206109 Dell_70:ad:29 -> SunMicro_70:ff:17 ARP Who has 172.25.30.100? Tell 172.25.30.120
    5.206277 SunMicro_70:ff:17 -> Dell_70:ad:29 ARP 172.25.30.100 is at 00:14:4f:70:ff:17
    5.216157 172.25.30.100 -> 172.25.30.120 Portmap V2 GETPORT Call MOUNT(100005) V:3 UDP
    5.216170 172.25.30.120 -> 172.25.30.100 ICMP Destination unreachable (Port unreachable)
    On the client, upon issuing the mount command:
    # snoop -d bge1
    Using device /dev/bge1 (promiscuous mode)
    ? -> * ETHER Type=9000 (Loopback), size = 60 bytes
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    atlas-pvfs2 -> pvfs2-io-0-3 PORTMAP C GETPORT prog=100005 (MOUNT) vers=3 proto=UDP
    pvfs2-io-0-3 -> atlas-pvfs2 ICMP Destination unreachable (UDP port 111 unreachable)
    atlas-pvfs2 -> pvfs2-io-0-3 PORTMAP C GETPORT prog=100005 (MOUNT) vers=3 proto=UDP
    pvfs2-io-0-3 -> atlas-pvfs2 ICMP Destination unreachable (UDP port 111 unreachable)
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> * ETHER Type=9000 (Loopback), size = 60 bytes
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    pvfs2-io-0-3 -> * ARP C Who is 172.25.30.100, atlas-pvfs2 ?
    atlas-pvfs2 -> pvfs2-io-0-3 ARP R 172.25.30.100, atlas-pvfs2 is 0:14:4f:70:ff:17
    atlas-pvfs2 -> pvfs2-io-0-3 PORTMAP C GETPORT prog=100005 (MOUNT) vers=3 proto=UDP
    pvfs2-io-0-3 -> atlas-pvfs2 ICMP Destination unreachable (UDP port 111 unreachable)
    Also I see the following on Client:
    # rpcinfo -p pvfs2-io-0-3
    rpcinfo: can't contact portmapper: RPC: Rpcbind failure - RPC: Failed (unspecified error)
    When I try the above rpcinfo command on the client, the snoop and wireshark (ethereal) outputs on the client and server are as follows:
    Client # snoop -d bge1
    Using device /dev/bge1 (promiscuous mode)
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    atlas-pvfs2 -> pvfs2-io-0-3 TCP D=111 S=872 Syn Seq=2065245538 Len=0 Win=49640 Options=<mss 1460,nop,wscale 0,nop,nop,sackOK>
    pvfs2-io-0-3 -> atlas-pvfs2 ICMP Destination unreachable (TCP port 111 unreachable)
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> (multicast) ETHER Type=2004 (Unknown), size = 48 bytes
    ? -> (multicast) ETHER Type=0003 (LLC/802.3), size = 90 bytes
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> * ETHER Type=9000 (Loopback), size = 60 bytes
    pvfs2-io-0-3 -> * ARP C Who is 172.25.30.100, atlas-pvfs2 ?
    atlas-pvfs2 -> pvfs2-io-0-3 ARP R 172.25.30.100, atlas-pvfs2 is 0:14:4f:70:ff:17
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    atlas-pvfs2 -> pvfs2-io-0-3 TCP D=111 S=874 Syn Seq=2068043912 Len=0 Win=49640 Options=<mss 1460,nop,wscale 0,nop,nop,sackOK>
    pvfs2-io-0-3 -> atlas-pvfs2 ICMP Destination unreachable (TCP port 111 unreachable)
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> * ETHER Type=9000 (Loopback), size = 60 bytes
    Server # tshark -i eth1
    Running as user "root" and group "root". This could be dangerous.
    Capturing on eth1
    0.000000 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    0.313739 Cisco_3d:68:10 -> CDP/VTP/DTP/PAgP/UDLD CDP Device ID: MILEVA Port ID: GigabitEthernet1/0/16
    2.006422 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    3.483733 172.25.30.100 -> 172.25.30.120 TCP 865 > sunrpc [SYN] Seq=0 Win=49640 Len=0 MSS=1460 WS=0
    3.483752 172.25.30.120 -> 172.25.30.100 ICMP Destination unreachable (Port unreachable)
    4.009741 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    6.014524 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    6.551356 Cisco_3d:68:10 -> Cisco_3d:68:10 LOOP Reply
    8.019386 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    8.484344 Dell_70:ad:29 -> SunMicro_70:ff:17 ARP Who has 172.25.30.100? Tell 172.25.30.120
    8.484569 SunMicro_70:ff:17 -> Dell_70:ad:29 ARP 172.25.30.100 is at 00:14:4f:70:ff:17
    10.024411 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    12.030956 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    12.901333 Cisco_3d:68:10 -> CDP/VTP/DTP/PAgP/UDLD DTP Dynamic Trunking Protocol
    12.901421 Cisco_3d:68:10 -> CDP/VTP/DTP/PAgP/UDLD DTP Dynamic Trunking Protocol
    14.034193 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00  Cost = 0  Port = 0x8010
    15.691119 172.25.30.100 -> 172.25.30.120 TCP 866 > sunrpc [SYN] Seq=0 Win=49640 Len=0 MSS=1460 WS=0
    15.691138 172.25.30.120 -> 172.25.30.100 ICMP Destination unreachable (Port unreachable)
    16.038944 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    16.550760 Cisco_3d:68:10 -> Cisco_3d:68:10 LOOP Reply
    18.043886 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    20.050243 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    21.487689 172.25.30.100 -> 172.25.30.120 TCP 867 > sunrpc [SYN] Seq=0 Win=49640 Len=0 MSS=1460 WS=0
    21.487700 172.25.30.120 -> 172.25.30.100 ICMP Destination unreachable (Port unreachable)
    22.053784 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    24.058680 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    26.063406 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    26.558307 Cisco_3d:68:10 -> Cisco_3d:68:10 LOOP Reply
    Thank you for any help you can provide!
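    Both captures show the server answering port 111 with ICMP port unreachable, which points at rpcbind/portmap (or a host firewall) on the Linux server rather than at the Solaris client. Some hedged server-side checks (exact commands depend on the distribution):
    rpcinfo -p localhost          # is the portmapper registered and answering locally?
    netstat -ln | grep ':111 '    # is anything listening on TCP/UDP port 111?
    iptables -L -n                # is a host firewall rejecting port 111?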

  • NFS latency when Solaris 10 client mounts Linux NFS server(EMC NAS)

    Hello,
    One of our developers discovered a problem that for simplicity we call "latency". We have several 5.10 clients that show exactly the same symptoms when NFS-mounting our Celerra. The NAS is running a Linux variant "2.4.9-34.5406.EMC", but before you all jump on the "it's EMC's problem" bandwagon, let me explain. We set up an automated process (Perl) that watches an exported folder for the appearance of a request file (rand.req). When the request file comes in we rename it to (rand.sav) and then return a "report" named (rand.res). Very elegant, I thought, and it runs at near light speed when only Linux NFS clients mount the share and create, monitor, delete, etc. any files. In fact there is zero recorded latency from the time the report file appears to when the client detects it. But for all our Solaris 10 clients, they create the request file just fine, and the Perl process running on the Linux box sees the file instantaneously and returns the report, but it takes the Solaris client anywhere from 5 up to 50 seconds before it sees any change in status for any files the Linux box manipulates. I've tried every possible combination of mount -o options there is, including noac, rsize and wsize variants, vers=2, proto=udp, actimeo=0, etc. Nothing seems to be the magic bullet. nfsstat -c shows nothing out of the ordinary. There are no retransmits or dropped packets anywhere in between, no firewall loads, no connectivity delays whatsoever. I'm completely out of ideas. Any ideas or clues would be greatly appreciated!
    thanks
    Dave

    No specific recommendations. But maybe you can watch the cable and get more information.
    Set up a case where the file has been created, then have the client check and snoop the cable at the same time. Does the client actually issue a directory check (or is it just displaying cached information)? Does the response contain the new file?
    Something to test anyway...
    Darren
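    A hedged sketch of that suggestion (the interface and host names are placeholders): snoop NFS traffic on the Solaris client while the watcher polls, and see whether a READDIR/GETATTR request actually goes out for each check or whether the client is serving the directory from its cache.
    snoop -d e1000g0 -t a host nas and port 2049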

  • Solaris 10 IPMP and NetApp NFS v4 ACL

    Okay, here is my issue. I have one T5220 that has a failed NIC. IPMP is set up for active-standby and the NIC fails over on boot. I can reach the system through said interface and send traffic out the failed-to NIC (ssh to another server, run last, and I see the 10.255.249.196 address). The NFS ACL I have is limited to the shared IP address of the IPMP group (10.255.249.196), as that is what it should see. However, it appears that the NFS server is seeing the test IP (10.255.249.197) of the "failed-to" NIC. I added 10.255.249.197 to the NFS ACL and all is fine. ifconfig output:
    e1000g1: flags=219040803<UP,BROADCAST,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,FAILED,CoS> mtu 1500 index 3
    inet 10.255.249.198 netmask ffffff00 broadcast 10.255.249.255
    groupname prvdmz
    ether 0:21:28:24:3:1f
    nxge1: flags=209040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,CoS> mtu 1500 index 6
    inet 10.255.249.197 netmask ffffff00 broadcast 10.255.249.255
    groupname prvdmz
    ether 0:21:28:d:a4:6f
    nxge1:1: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 6
    inet 10.255.249.196 netmask ffffff00 broadcast 10.255.249.255
    netstat -rn output:
    10.255.249.0 10.255.249.196 U 1 1333 nxge1:1
    10.255.249.0 10.255.249.197 U 1 0 e1000g1
    10.255.249.0 10.255.249.197 U 1 0 nxge1
    DNS sets the host name of the system to 10.255.249.196. But if I leave the ACL as is, with the one IP address, and wait about 10 minutes after a boot, then I am able to mount the NFS share with the ACL only containing 10.255.249.196.
    Here are my /etc/hosts and hostname.<interface> files:
    bash-3.00# cat /etc/hosts
    # Internet host table
    ::1 localhost
    127.0.0.1 localhost
    10.255.249.196 mymcresprod2-pv
    10.255.249.197 mymcresprod2-pv_nxge1
    10.255.249.198 mymcresprod2-pv_e1000g1
    bash-3.00# cat /etc/hostname.e1000g1
    mymcresprod2-pv_e1000g1 netmask 255.255.255.0 broadcast + deprecated -failover up group prvdmz
    addif mymcresprod2-pv netmask 255.255.255.0 broadcast + up
    bash-3.00# cat /etc/hostname.nxge1
    mymcresprod2-pv_nxge1 netmask 255.255.255.0 broadcast + deprecated -failover up group prvdmz
    bash-3.00# more /etc/default/mpathd
    #pragma ident "@(#)mpathd.dfl 1.2 00/07/17 SMI"
    # Time taken by mpathd to detect a NIC failure in ms. The minimum time
    # that can be specified is 100 ms.
    FAILURE_DETECTION_TIME=10000
    # Failback is enabled by default. To disable failback turn off this option
    FAILBACK=yes
    # By default only interfaces configured as part of multipathing groups
    # are tracked. Turn off this option to track all network interfaces
    # on the system
    TRACK_INTERFACES_ONLY_WITH_GROUPS=yes
    I think the IPMP configuration is fine but could be wrong. Any ideas on this? I mean I can add the test IP address to the ACL if need be but that just seems to be a band-aid. Or am I completely nuts and it should work this way.
    Thanks,
    Ben

    Following up on my post...
    The moment I started to add more NFS shares, things slowed down big time when logging in.
    The only way out was to fully open a hole on the server for every client...
    I was able to lock the Linux server down somewhat, to fixed ports, and only open up those (111, 2049, 656, 32766-32769). But on the Solaris server, I can't seem to figure this out...
    Any one ?
    TIA...

  • How do I disable NFS client in Solaris 10

    I am trying to disable NFS client in Solaris 10. In Solaris 9 I would simply rename /etc/rc2.d/S73nfs.client to /etc/rc2.d/s73nfs.client
    Since /etc/rc2.d/nfs.client does not seem to exist in 10 I'm wondering how to do this.
    Thanks in advance for the help.
    Max

    Quote: "Since /etc/rc2.d/nfs.client does not seem to exist in 10 I'm wondering how to do this."
    Read up on the new Solaris 10 Service Management Facility (SMF). Info at http://docs.sun.com/ and there are a couple of tutorial docs at BigAdmin.
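    Concretely, the SMF equivalent of the old rc2.d rename would be along these lines:
    svcadm disable svc:/network/nfs/client:default   # persists across reboots
    svcs -a | grep nfs/client                        # should now report "disabled"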

  • Normal NFS client reaction when connection lost then re-established

    Say I have an NFS share mounted with defaults. There is no outstanding IO to the share and the server goes down. The server comes back up and there is an attempt to access the share. What should happen when this attempt is made? It seems to me that it should be like the server never went down. Instead, things behave more like the server never came back up: any application attempting to access it hangs indefinitely.
    Using the mount option 'soft' prevents this from happening, but it doesn't seem like one should have to risk 'silent data corruption' to prevent the client from going completely stupid when it loses contact during a period in which no contact should be necessary.

    This is HIGHLY inadvisable. The NFS protocol has never been, and probably never will be, highly robust. VPN tunnels are prone to latency issues and session drops as a result. For this reason smb or afp or any form of FTP or sFTP is far safer and more reliable. All of those other protocols have mechanisms for re-establishing a session lost due to latency, unlike NFS. Additionally, NFS is sufficiently embedded in most UNIX kernels to potentially lock or crash the NFS client when a session goes stale. More recent versions of many OSes, including Solaris 9, have overcome the crash conditions usually caused, but at the expense of NFS filesystem integrity...
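    One commonly used middle ground, sketched here under the assumption of a Solaris-style client (where these options exist and some may already be the default): keep hard-mount semantics, which avoid silent data loss, but allow the blocked processes to be interrupted.
    # intr lets a hung process be interrupted or killed instead of blocking
    # forever while the server is unreachable
    mount -F nfs -o hard,intr server:/export /mnt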

  • NFS client question ... do I need rpc?

    Hi All!
    I am in the process of doing some security auditing and in the process I came across open rpc ports that I believe I don't actually need. I would like to solicit thoughts on this:
    I have solaris 8 and 10 machines importing NFS filesystems from a solaris 8 server. For the client machines, is there any reason to have rpc running?
    I tested this with a Solaris 8 machine: turned off both lockd/statd (/etc/init.d/nfs.client stop) and rpc (/etc/init.d/rpc stop), made sure that the daemons were not running, and tried to NFS-mount a share on the client:
    mount -F nfs server:/share /mnt/tmp
    which seemed to work fine. So do I need lockd/statd and rpc or not?
    Rudolf

    As I said, I tested the ability to connect to NFS even in the absence of rpc / lockd / statd on the client. However, it seems mAbrante is right about these services being at least advisable on the client to allow file locking. This is an excerpt from the lockd manpage that I should have spotted before even asking this question:
    State information kept by the lock manager about these locking requests can be lost if the lockd is killed or the operating system is rebooted. Some of this information can be recovered as follows. When the server lock manager restarts, it waits for a grace period for all client-site lock managers to submit reclaim requests. Client-site lock managers, on the other hand, are notified by the status monitor daemon, statd(1M), of the restart and promptly resubmit previously granted lock requests. If the lock daemon fails to secure a previously granted lock at the server site, then it sends SIGLOST to a process.
    So I guess you don't need the rpc / lockd / statd but you will lose functionality ... you might be able to get away with it if the exported filesystem is read only ...
    Rudolf
