NFS mount problem

Hi everybody,
I need your help regarding the NFS server-client procedure.
In the /etc/hosts file of dcapp04 I have added the following line:
172.21.140.10 dcapp03
I have created a share with this command:
root@dcapp04 # share -F nfs -o rw=dcapp03 -d "all files" /disk3/allfiles
Then I went to dcapp03 and added the following line:
172.21.140.40 dcapp04
and mounted the share as root with this command:
root@dcapp03 # mount -F nfs dcapp04:/disk3/allfiles /mnt
The problem is that I can read the /mnt mount point BUT I cannot write to it.
When I run ls I see the following:
drwxr-xr-x 2 nobody nobody 36352 Jan 27 14:30 more_files
and when I try to vi a file I get the message: permission denied.
Please help me.
Regards,
Storkath

The rw option you set when sharing a directory applies only if the user already has permission to write to that directory. When root looks into a directory mounted over NFS, it loses its god powers and becomes a mortal user called nobody ;).
If you are trying to work as the root user, append this option to the share command on the NFS server: root=dcapp03
If the problem is with a regular user, check the permissions of the directory on the NFS server.
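A minimal editorial sketch of the corrected share command, assuming the same share path and client name as above (root=dcapp03 grants root on the client full root access to the export, so use it deliberately):

root@dcapp04 # share -F nfs -o rw=dcapp03,root=dcapp03 -d "all files" /disk3/allfiles

After re-running the share with the new options, unmount and remount /mnt on dcapp03, and root writes should succeed.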

Similar Messages

  • NFS4: Problem mounting NFS mount onto a Solaris 10 Client

    Hi,
    I am having problems mounting an NFS mount point from a Linux server onto a Solaris 10 client.
    In the following:
    Server IP = ..*.120
    Client IP = ..*.100
    Commands run on Client:
    ==================
    # mount -o vers=3 -F nfs 172.25.30.120:/scratch/pvfs2 /scratch/pvfs2
    nfs mount: 172.25.30.120: : RPC: Rpcbind failure - RPC: Unable to receive
    nfs mount: retrying: /scratch/pvfs2
    nfs mount: 172.25.30.120: : RPC: Rpcbind failure - RPC: Unable to receive
    nfs mount: 172.25.30.120: : RPC: Rpcbind failure - RPC: Unable to receive
    # mount -o vers=4 -F nfs 172.25.30.120:/scratch/pvfs2 /scratch/pvfs2
    nfs mount: 172.25.30.120:/scratch/pvfs2: No such file or directory
    # rpcinfo -p
    program vers proto port service
    100000 4 tcp 111 rpcbind
    100000 3 tcp 111 rpcbind
    100000 2 tcp 111 rpcbind
    100000 4 udp 111 rpcbind
    100000 3 udp 111 rpcbind
    100000 2 udp 111 rpcbind
    1073741824 1 tcp 36084
    100024 1 udp 42835 status
    100024 1 tcp 36086 status
    100133 1 udp 42835
    100133 1 tcp 36086
    100001 2 udp 42836 rstatd
    100001 3 udp 42836 rstatd
    100001 4 udp 42836 rstatd
    100002 2 tcp 36087 rusersd
    100002 3 tcp 36087 rusersd
    100002 2 udp 42838 rusersd
    100002 3 udp 42838 rusersd
    100011 1 udp 42840 rquotad
    100021 1 udp 4045 nlockmgr
    100021 2 udp 4045 nlockmgr
    100021 3 udp 4045 nlockmgr
    100021 4 udp 4045 nlockmgr
    100021 1 tcp 4045 nlockmgr
    100021 2 tcp 4045 nlockmgr
    100021 3 tcp 4045 nlockmgr
    100021 4 tcp 4045 nlockmgr
    # showmount -e 172.25.30.120 (Server)
    showmount: 172.25.30.120: RPC: Rpcbind failure - RPC: Unable to receive
    Commands on Server:
    ================
    # rpcinfo -p
    program vers proto port
    100000 2 tcp 111 portmapper
    100000 2 udp 111 portmapper
    100021 1 tcp 49927 nlockmgr
    100021 3 tcp 49927 nlockmgr
    100021 4 tcp 49927 nlockmgr
    100021 1 udp 32772 nlockmgr
    100021 3 udp 32772 nlockmgr
    100021 4 udp 32772 nlockmgr
    100011 1 udp 796 rquotad
    100011 2 udp 796 rquotad
    100011 1 tcp 799 rquotad
    100011 2 tcp 799 rquotad
    100003 2 udp 2049 nfs
    100003 3 udp 2049 nfs
    100003 4 udp 2049 nfs
    100003 2 tcp 2049 nfs
    100003 3 tcp 2049 nfs
    100003 4 tcp 2049 nfs
    100005 1 udp 809 mountd
    100005 1 tcp 812 mountd
    100005 2 udp 809 mountd
    100005 2 tcp 812 mountd
    100005 3 udp 809 mountd
    100005 3 tcp 812 mountd
    100024 1 udp 854 status
    100024 1 tcp 857 status
    # showmount -e 172.25.30.120
    Export list for 172.25.30.120:
    /scratch/nfs 172.25.30.100,172.25.24.0/4
    /scratch/pvfs2 172.25.30.100,172.25.24.0/4
    Thank you, ~al

    I also tried to run snoop on the client and wireshark on the server, and the following is what I see.
    On Server, upon issuing the mount command on the client:
    # tshark -i eth1
    Running as user "root" and group "root". This could be dangerous.
    Capturing on eth1
    0.000000 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    0.205570 172.25.30.100 -> 172.25.30.120 Portmap V2 GETPORT Call MOUNT(100005) V:3 UDP
    0.205586 172.25.30.120 -> 172.25.30.100 ICMP Destination unreachable (Port unreachable)
    0.207863 172.25.30.100 -> 172.25.30.120 Portmap V2 GETPORT Call MOUNT(100005) V:3 UDP
    0.207869 172.25.30.120 -> 172.25.30.100 ICMP Destination unreachable (Port unreachable)
    2.005314 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    4.011005 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    5.206109 Dell_70:ad:29 -> SunMicro_70:ff:17 ARP Who has 172.25.30.100? Tell 172.25.30.120
    5.206277 SunMicro_70:ff:17 -> Dell_70:ad:29 ARP 172.25.30.100 is at 00:14:4f:70:ff:17
    5.216157 172.25.30.100 -> 172.25.30.120 Portmap V2 GETPORT Call MOUNT(100005) V:3 UDP
    5.216170 172.25.30.120 -> 172.25.30.100 ICMP Destination unreachable (Port unreachable)
    On Client, upon issuing the mount command:
    # snoop -d bge1
    Using device /dev/bge1 (promiscuous mode)
    ? -> * ETHER Type=9000 (Loopback), size = 60 bytes
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    atlas-pvfs2 -> pvfs2-io-0-3 PORTMAP C GETPORT prog=100005 (MOUNT) vers=3 proto=UDP
    pvfs2-io-0-3 -> atlas-pvfs2 ICMP Destination unreachable (UDP port 111 unreachable)
    atlas-pvfs2 -> pvfs2-io-0-3 PORTMAP C GETPORT prog=100005 (MOUNT) vers=3 proto=UDP
    pvfs2-io-0-3 -> atlas-pvfs2 ICMP Destination unreachable (UDP port 111 unreachable)
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> * ETHER Type=9000 (Loopback), size = 60 bytes
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    pvfs2-io-0-3 -> * ARP C Who is 172.25.30.100, atlas-pvfs2 ?
    atlas-pvfs2 -> pvfs2-io-0-3 ARP R 172.25.30.100, atlas-pvfs2 is 0:14:4f:70:ff:17
    atlas-pvfs2 -> pvfs2-io-0-3 PORTMAP C GETPORT prog=100005 (MOUNT) vers=3 proto=UDP
    pvfs2-io-0-3 -> atlas-pvfs2 ICMP Destination unreachable (UDP port 111 unreachable)
    Also I see the following on Client:
    # rpcinfo -p pvfs2-io-0-3
    rpcinfo: can't contact portmapper: RPC: Rpcbind failure - RPC: Failed (unspecified error)
    When I try the above rpcinfo command on the Client, the snoop and wireshark (ethereal) outputs on Client and Server are as follows:
    Client # snoop -d bge1
    Using device /dev/bge1 (promiscuous mode)
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    atlas-pvfs2 -> pvfs2-io-0-3 TCP D=111 S=872 Syn Seq=2065245538 Len=0 Win=49640 Options=<mss 1460,nop,wscale 0,nop,nop,sackOK>
    pvfs2-io-0-3 -> atlas-pvfs2 ICMP Destination unreachable (TCP port 111 unreachable)
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> (multicast) ETHER Type=2004 (Unknown), size = 48 bytes
    ? -> (multicast) ETHER Type=0003 (LLC/802.3), size = 90 bytes
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> * ETHER Type=9000 (Loopback), size = 60 bytes
    pvfs2-io-0-3 -> * ARP C Who is 172.25.30.100, atlas-pvfs2 ?
    atlas-pvfs2 -> pvfs2-io-0-3 ARP R 172.25.30.100, atlas-pvfs2 is 0:14:4f:70:ff:17
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    atlas-pvfs2 -> pvfs2-io-0-3 TCP D=111 S=874 Syn Seq=2068043912 Len=0 Win=49640 Options=<mss 1460,nop,wscale 0,nop,nop,sackOK>
    pvfs2-io-0-3 -> atlas-pvfs2 ICMP Destination unreachable (TCP port 111 unreachable)
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> * ETHER Type=9000 (Loopback), size = 60 bytes
    Server # tshark -i eth1
    Running as user "root" and group "root". This could be dangerous.
    Capturing on eth1
    0.000000 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    0.313739 Cisco_3d:68:10 -> CDP/VTP/DTP/PAgP/UDLD CDP Device ID: MILEVA Port ID: GigabitEthernet1/0/16
    2.006422 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    3.483733 172.25.30.100 -> 172.25.30.120 TCP 865 > sunrpc [SYN] Seq=0 Win=49640 Len=0 MSS=1460 WS=0
    3.483752 172.25.30.120 -> 172.25.30.100 ICMP Destination unreachable (Port unreachable)
    4.009741 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    6.014524 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    6.551356 Cisco_3d:68:10 -> Cisco_3d:68:10 LOOP Reply
    8.019386 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    8.484344 Dell_70:ad:29 -> SunMicro_70:ff:17 ARP Who has 172.25.30.100? Tell 172.25.30.120
    8.484569 SunMicro_70:ff:17 -> Dell_70:ad:29 ARP 172.25.30.100 is at 00:14:4f:70:ff:17
    10.024411 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    12.030956 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    12.901333 Cisco_3d:68:10 -> CDP/VTP/DTP/PAgP/UDLD DTP Dynamic Trunking Protocol
    12.901421 Cisco_3d:68:10 -> CDP/VTP/DTP/PAgP/UDLD DTP Dynamic Trunking Protocol
    14.034193 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    15.691119 172.25.30.100 -> 172.25.30.120 TCP 866 > sunrpc [SYN] Seq=0 Win=49640 Len=0 MSS=1460 WS=0
    15.691138 172.25.30.120 -> 172.25.30.100 ICMP Destination unreachable (Port unreachable)
    16.038944 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    16.550760 Cisco_3d:68:10 -> Cisco_3d:68:10 LOOP Reply
    18.043886 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    20.050243 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    21.487689 172.25.30.100 -> 172.25.30.120 TCP 867 > sunrpc [SYN] Seq=0 Win=49640 Len=0 MSS=1460 WS=0
    21.487700 172.25.30.120 -> 172.25.30.100 ICMP Destination unreachable (Port unreachable)
    22.053784 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    24.058680 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    26.063406 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    26.558307 Cisco_3d:68:10 -> Cisco_3d:68:10 LOOP Reply
    Thank you for any help you can provide!!!
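    An editor's note, not from the thread: in both captures the server answers port 111 with ICMP "port unreachable" even though its own rpcinfo listing shows portmapper registered on 111, which usually means a host firewall on the server is rejecting remote portmapper traffic before it reaches the daemon. A hedged check on the Linux server (assuming iptables):
    # iptables -L -n | grep -E '111|REJECT'
    # rpcinfo -p localhost
    If rules are blocking port 111, permitting the client's subnet to reach portmapper, mountd, and nfs should let showmount and the mount proceed.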

  • Anyone else having problems with NFS mounts not showing up?

    Since Lion, I cannot see NFS shares anymore. The folder that had them is still there but the share will not mount. This worked fine in 10.6.
    nfs://192.168.1.234/volume1/video
    /Volumes/DiskStation/video
    resvport,nolocks,locallocks,intr,soft,wsize=32768,rsize=3276
    Any ideas?
    Thanks

    Since the NFS points show up in the Terminal app, go to the local mount directory (i.e. the mount location in the NFS Mounts using Disk Utility) and do the following:
    First create a link file:
    sudo ln -s full_local_path link_path_name
    sudo ln -s /Volumes/linux/projects/ linuxProjects
    Next create a new directory, say in the root of the host drive (i.e. Macintosh HDD):
    sudo mkdir new_link_storage_directory
    sudo mkdir /Volumes/Macintosh\ HDD/Links
    Move the above link file to the new directory:
    sudo mv link_path_name new_link_storage_directory
    sudo mv linuxProjects /Volumes/Macintosh\ HDD/Links/
    Then in Finder locate the new link storage directory; the link files should allow opening of these NFS mount points.
    Finally, after all links have been created and placed into the new directory, place it in the left sidebar. Now it works just like before.

  • Cannot access external NFS mounts under Snow Leopard

    I was previously running Leopard (10.5.x) and automounted an Ubuntu (9.04 Jaunty) Linux NFS mount from my iMac. I had set this up with Directory Utility; it was instantly functional and I never had any issues. After upgrading to Snow Leopard, I set up the same mount point on the same machine (using Disk Utility now), without changing any of the export settings, and Disk Utility stated that the external server had responded and appeared to be working correctly.
    However, when attempting to access the share, I get an 'Operation not permitted' error. I also cannot manually create the NFS mount using mount or mount_nfs, and I get a similar error if I try to cd into /net/<remote-machine>/<share>. I can see the shared folder in /net/<remote-machine>, but I cannot access it (cd, ls, etc). I can see on the Linux machine that the iMac has mounted the share (showmount -a), so the problem appears to be solely in the permissions. But I have not changed any of the permissions on the remote machine, and even then, they are blown wide open (777), so I'm not sure what is causing the issue. I have tried everything as both a regular user and as root. Any thoughts?
    On the Linux NFS server:
    % cat /etc/exports
    /share 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)
    % showmount -a
    All mount points on <server>:
    192.168.1.100:/share <-- <server> address
    192.168.1.101:/share <-- iMac address
    On the iMac:
    % rpcinfo -t 192.168.1.100 nfs
    program 100003 version 2 ready and waiting
    program 100003 version 3 ready and waiting
    program 100003 version 4 ready and waiting
    % mount
    trigger on /net/<server>/share (autofs, automounted, nobrowse)
    % mount -t nfs 192.168.1.100:/share /Volumes/share1
    mount_nfs: /Volumes/share1: Operation not permitted

    My guess is that the Linux server is refusing NFS requests coming from a non-reserved (<1024) source port. If that's the case, adding "insecure" to the Linux export options should get it working. (Note: requiring the use of reserved ports doesn't actually make things any more secure on most networks, so the name of the option is a bit misleading.)
    If you were previously able to mount that same export from a Mac, you must have been specifying the "-o resvport" option and doing the mounts as root (via sudo or automount which happens to run as root). So that may be another fix.
    HTH
    --macko
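    A minimal sketch of both fixes suggested above, reusing the export path and addresses quoted in the question. On the Linux server, add "insecure" to the export options and re-export:
    /share 192.168.1.0/24(rw,sync,insecure,no_subtree_check,no_root_squash)
    % exportfs -ra
    Or, on the iMac, mount from a reserved port as root:
    % sudo mount -t nfs -o resvport 192.168.1.100:/share /Volumes/share1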

  • Accessing NFS mounts in Finder

    I currently have trouble accessing NFS mounts with Finder. The mount is O.K.; I can access the directories on the NFS server in Terminal. However, in Finder, when I click on the mount, instead of seeing the contents of the NFS mount I only see the "Alias" icon. Logs show nothing.
    I am not sure when it last worked. It could well be that the problem only started after one of the latest Snow Leopard updates. I know it worked when I upgraded to Snow Leopard.
    Any ideas?

    Hello gvde,
    Two weeks ago I bought a NAS device that touted NFS as one of its features. As I am a fan of Unix boxes, I chose a NAS that would support that protocol. I was disappointed to find out that my MacBook would not connect to it. As mentioned in previous posts (by others) on this forum, I could see my NFS share via the command line, but not when using Finder. I was getting pretty upset and racking my brain trying to figure it out. I called the NAS manufacturer, which was no help. I used an Ubuntu LiveCD (which connected fine). I was about ready to give up. Then, in another forum, someone mentioned the NFS Manager app.
    After I installed the app and attempted to configure my NFS shares, the app stated something along the lines of (paraphrasing) "default permissions were incorrect". It then asked me if I would authenticate to have NFS Manager fix the problem. I was at my wits' end, so I thought why not. Long story short, this app saved me! My shares survive a reboot, Finder is quick and snappy with displaying the network shares, and all is right with the world. Maybe in 10.6.3 Apple will have fixed the default permissions issue. Try the app. It's donationware. I hope this post helps someone else.
    http://www.macupdate.com/info.php/id/5984/nfs-manager

  • WARNING:Expected NFS mount options: rsize>=32768,wsize>=32768,hard,

    While using "dbua" I encountered a problem on the screen saying "cannot open the specified control file" and I was directed to see the new alert log.
    The alert log has several lines with messages like:
    WARNING:NFS mount of file <PATH>control01.ctl on filesystem <FS_NAME> done with incorrect options
    WARNING:Expected NFS mount options: rsize>=32768,wsize>=32768,hard,
    WARNING:NFS mount of file <PATH>control02.ctl on filesystem <FS_NAME> done with incorrect options
    WARNING:Expected NFS mount options: rsize>=32768,wsize>=32768,hard,
    WARNING:NFS mount of file <PATH>/control03.ctl on filesystem <FS_NAME> done with incorrect options
    WARNING:Expected NFS mount options: rsize>=32768,wsize>=32768,hard,
    ORA-00210: cannot open the specified control file
    ORA-00202: control file: '<PATH>/control03.ctl'
    ORA-27054: NFS file system where the file is created or resides is not mounted with correct options
    The file system is actually mounted with these options: "nfs - yes rsize=4096,wsize=4096,hard,intr,vers=3".
    Would someone please help identify what is wrong?
    Thanks

    Thanks for your reply.
    The OS is Solaris on x86-64, and we use NetApp and NFS.
    I followed the instructions in Oracle Support Doc ID 781349.1, and as it turned out, the order of the options is significant.
    The document specifies these mount options using this order.
    rw,bg,rsize=32768,wsize=32768,hard,vers=3,nointr,timeo=600,proto=tcp,suid 0 0
    ** This fixed the problem
    FYI, we had these options (and order):
    rw,bg,hard,rsize=32768,wsize=32768,vers=3,nointr,timeo=600,proto=tcp,suid
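    A sketch of how the working option string might look as a complete Solaris /etc/vfstab entry; the NetApp hostname, export path, and mount point are placeholders, not from the original post:
    nas01:/vol/oradata - /u02/oradata nfs - yes rw,bg,rsize=32768,wsize=32768,hard,vers=3,nointr,timeo=600,proto=tcp,suid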

  • Strange delete behavior in Solaris 10 with NFS mounts

    We are using the Apache commons-io framework to delete a directory in a Solaris 10 environment. The code works well on our dev and qa boxes, but when we load it into our production environment we get intermittent failures where the files in a directory are not deleted, and therefore when we try to delete the directory itself, the delete fails.
    We suspect that this may be some kind of NFS problem in Solaris, where deleting a file may take longer than on a local drive, so the code reaches the directory delete before the OS has actually removed the files, and the directory delete fails because files are still present.
    Has anyone seen this in an NFS environment with Solaris? We are on Java 1.4.2_15 and we are using apache commons-io 1.3.1.

    The Apache commons-io framework contains a method to delete a directory by recursively deleting all files and subdirectories. Intermittently, we are seeing some of the files in a subdirectory remain, and then, when delete is called to remove the directory (from within the commons-io deleteDirectory method), we get an IOException. This only occurs on an NFS-mounted file system on our production system. Our dev and qa systems are also on an NFS mount, but it is a different one and appears to be loaded differently, and the behavior on dev and qa consistently works as expected.
    It appears to be some kind of latency issue related to the way Java deletes files on NFS, but there is no conclusive evidence so far.
    We have not tried this with a newer version of Java since we are presently constrained to 1.4 :-(
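    An editor's hedged suggestion, not raised in the thread: when a still-open file is deleted over NFS, the client "silly-renames" it to a hidden .nfsXXXX file, and such files keep the directory non-empty until the last handle is closed. A quick check on an affected directory:
    ls -la /path/to/dir | grep '\.nfs'
    If these show up, make sure all streams on files in the tree are closed before deleteDirectory runs.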

  • Solaris 10 NFS caching problems with custom NFS server

    I'm facing a very strange problem with a pure Java standalone application providing an NFS server v2 service. This same application, targeted at JVM 1.4.2, runs in different environments (see below) without any problem.
    On Solaris 10 we have tried all kinds of mount parameters and system service up/down configurations, but cannot solve the problem.
    We're in big trouble because the app is a mandatory component of a product due to enter production shortly.
    Details follow.
    System description
    Sunsparc U4 with SunOS 5.10, patch level: Generic_118833-33, 64bit
    List of active NFS services
    disabled   svc:/network/nfs/cbd:default
    disabled   svc:/network/nfs/mapid:default
    disabled   svc:/network/nfs/client:default
    disabled   svc:/network/nfs/server:default
    disabled   svc:/network/nfs/rquota:default
    online       svc:/network/nfs/status:default
    online       svc:/network/nfs/nlockmgr:default
    NFS mount params (from /etc/vfstab)
    localhost:/VDD_Server  - /users/vdd/mnt nfs - vers=2,proto=tcp,timeo=600,wsize=8192,rsize=8192,port=1579,noxattr,soft,intr,noac
    Anomaly description
    The server side of NFS is provided by a Java standalone application enabled only for NFS v2 and tested on different environments: MS Windows 2000, 2003, XP, Linux RedHat 10 32bit, Linux Debian 2.6.x 64bit, SunOS 5.9. The Java application is distributed with a test program (also a Java standalone application) to validate the main installation and configuration.
    The test program simply reads a file from the NFS file-system exported by our main application (called VDD) and writes the same file back with a different name to the VDD exported file-system. At the end of the test, the written file has different contents from the one read. In-depth investigation shows the following behaviour:
    _ The read phase behaves correctly on both server (VDD) and client (test app) sides, transporting the file with correct contents.
    _ The write phase produces a file on the VDD file-system that is zero-filled for the first 90% but ends correctly with the same sequence of bytes as the original file.
    _ Detailed write phase behaviour:
    1_ Test app writes first 512 bytes => VDD receives NFS command with offset 0, count 512 and correct byte contents;
    2_ Test app writes next 512 bytes => VDD receives NFS command with offset 0, count 1024 and WRONG byte contents: the first 512 bytes are zero-filled (previous write) and the last 512 bytes have correct contents (current write);
    3_ Test app writes next 512 bytes => VDD receives NFS command with offset 0, count 1536 and WRONG byte contents: the first 1024 bytes are zero-filled (previous writes) and the last 512 bytes have correct contents (current write);
    4_ and so on...
    Further tests
    We tested our VDD application on the same Solaris 10 system but with our test application on another (Linux) machine, contacting VDD via the Linux NFS client, and we don't see the wrong behaviour: our test program performed OK and the written file has the same contents as the one read.
    Has anyone faced a similar problem?
    We are a Sun ISV partner: do you think we have enough info to open a bug request with SDN?
    Any suggestions?
    Many thanks in advance,
    Maurizio.

    I finally got it working. I think my problem was that I was copying and pasting the /etc/pam.conf from Gary's guide into the pam.conf file.
    There were unseen carriage returns mucking things up. So following a combination of the two docs worked. Starting with:
    http://web.singnet.com.sg/~garyttt/Configuring%20Solaris%20Native%20LDAP%20Client%20for%20Fedora%20Directory%20Server.htm
    Then following the steps at "Authentication Option #1: LDAP PAM configuration " from this doc:
    http://docs.lucidinteractive.ca/index.php/Solaris_LDAP_client_with_OpenLDAP_server
    for the pam.conf got things working.
    Note: ensure that your user has the shadowAccount value set in the objectClass

  • Cannot write to NFS mount with finder

    Hi folks,
    I have moved from a G4 Cube running 10.4.9 to a new iMac running OS X 10.5.7 (including upgrading from the old machine).
    The NFS mounts I used to have with my G4 Cube are not working properly; new mounts I've created aren't working properly either.
    I can read/open files OK, but when I try to drag & drop files onto the NFS mount using the Finder I get errors:
    "You may need to enter the name and password for an administrator on this computer to change the item named test.jpg" [ stop ] [ continue ]
    clicking "continue" i get:
    "The item test.jpg contains one or more items you do not have permission to read. Do you want to copy the items you are allowed to read? [ stop ] [ continue ]
    Choosing continue again results in the file appearing in the NFS directory, but with 0 size and a time stamp of 1970.
    If I try to copy the same file using the Terminal, it works fine - so it is not a simple NFS permissions problem - it is something particular to the Finder.
    I am able to create a folder inside the NFS directory by using the Finder.
    I thought at first it might be related to the .DS_Store and similar files being written, so I tried turning off that behaviour:
    defaults write com.apple.desktopservices DSDontWriteNetworkStores true
    but that hasn't fixed the problem.
    There are no obvious messages in any of the logs.
    Any suggestions or pointers on how to fix this?

    Thanks for the reply.
    These articles appear to relate to sharing a Mac filesystem via NFS: exporting the data.
    I am referring to mounting an NFS filesystem from another server onto the Mac (Leopard) client.
    The mounting works fine: it's just the Finder that isn't behaving. The Finder worked in Tiger; it isn't working in Leopard.

  • NFS mount point does not allow file creation via java.io.File

    Folks,
    I have mounted an nfs drive to iFS on a Solaris server:
    mount -F nfs nfs://server:port/ifsfolder /unixfolder
    I can mkdir and touch files, no problem. They appear in iFS as I'd expect. However, if I write to the nfs mount via a JVM using java.io.File, I encounter the following problems:
    Only directories are created, unless I include the user that started the JVM in the oinstall unix group with the oracle user, because it's the oracle user that writes to iFS, not the user creating the files!
    I'm trying to create several files in a single directory via java.io.File, BUT only the first file is created. I've tried putting waits in the code to see if it is a timing issue, but it doesn't appear to be. Writing via java.io.File to either a native directory or a native nfs mountpoint works OK, i.e. a JUnit test against the native file system works, but not against an iFS mount point. Curiously, the same unit tests running on a PC with a Windows drive mapping to iFS work OK!! So why not via a unix NFS mapping?
    many thanks in advance.
    C

    Hi Diep,
    I have done as requested via Oracle TAR #3308936.995. As it happens, the problem is resolved. The resolution was to not create the file via java.io.File.createNewFile() before adding content via an OutputStream. If the File creation is left until the content is added, as shown below, the problem is resolved.
    Another quick question: is link creation via 'ln -fs' and 'ln -f' supported against an nfs mount point to iFS? (At the operating-system level, rather than adding a folder path relationship via the Java API.)
    many thanks in advance.
    public void createFile(String p_absolutePath, InputStream p_inputStream) throws Exception
    {
        File file = new File(p_absolutePath);
        // Oracle TAR Number: 3308936.995
        // Uncomment the line below to cause the failure: java.io.IOException: Operation not supported on transport endpoint
        //     at java.io.UnixFileSystem.createFileExclusively(Native Method)
        //     at java.io.File.createNewFile(File.java:828)
        //     at com.unisys.ors.filesystemdata.OracleTARTest.createFile(OracleTARTest.java:43)
        //     at com.unisys.ors.filesystemdata.OracleTARTest.main(OracleTARTest.java:79)
        //file.createNewFile();
        FileOutputStream fos = new FileOutputStream(file);
        byte[] buffer = new byte[1024];
        int noOfBytesRead = 0;
        while ((noOfBytesRead = p_inputStream.read(buffer, 0, buffer.length)) != -1)
        {
            fos.write(buffer, 0, noOfBytesRead);
        }
        p_inputStream.close();
        fos.flush();
        fos.close();
    }

  • Missing menu "File|NFS Mounts" in Disk Utility

    I had many NFS mounts set up in Disk Utility via "File|NFS Mounts", but now that option is missing, I can't see my mounts, and those mounts are no longer working, so I have two questions:
    1. Where can I see my old mounts listed?
    2. How can I make the NFS mounts work?

    The problem is Boot Camp:  It uses a hybrid GPT/MBR partitioning scheme - which ends up hiding the Recovery HD partition - which is an EFI physical partition (neither GPT nor MBR).
    I would expect a new version of Boot Camp to release - like real soon - because of the Recovery HD partition invisibility issue.
    In this article - it is suggested that rEFIt should be used to partition a hard drive that is going to support multiple boots including Mac OS X, LINUX, and Windows. 
    (http://wiki.onmac.net/index.php/Triple_Boot_via_BootCamp)
    The key in the article to using Boot Camp with rEFIt is this:
    "Run the Boot Camp Assistant and create the Windows XP driver cd. Then exit Boot Camp.  DO NOT PARTITION USING BOOT CAMP: you are only using Boot Camp for the drivers, not the partitioning."
    All partitioning is done in terminal mode using the "diskutil" command.
    rEFIt is used to update both the GPT and MBR records so that all partitions will be visible using its "gptsync" command.
    Then - you replace the standard Mac boot menu with the rEFIt boot menu.  THAT will show the Mac OS X partition, Recovery HD (an EFI partition), and the Windows partition.
    My caveat is that rEFIt - which is open sourced and available here:  http://refit.sourceforge.net
    has not been recently updated and tested with respect to Mac OS X Lion.
    Hope this helps!

  • Autofs timeout while accessing to remote NFS mount

    Following Apple recommendations, I switched to "Directory Utility" to configure NFS mounts. As far as I understand, if you do so, the mounts are handled by automount, which is itself called by autofs. The good thing about this is that autofs unmounts unused mounts (after a timeout of 3600 seconds as defined in /etc/autofs.conf). Any time you need the remote drive (a Finder call, ls in Terminal, opening a file), autofs remounts the resource. This is a nice behaviour... in theory.
    I'm using some code (written in IDL) that reads and writes on that remote NFS server once every 5 minutes. Theoretically, autofs should detect these accesses and keep the drive mounted. Unfortunately, this is not the case. The drive is unmounted 3600 seconds after I last accessed the mount through the Finder or any other application.
    There is apparently no way to remove this "automated unmounting" feature. I tried setting the timeout delay to a very large number (1 day), but it still disconnects me after this delay if I don't do anything other than run my IDL code. If I mount the NFS share with the "mount_nfs" command, it works perfectly, as it is not handled by autofs.
    I wonder, then, whether there is any recommendation on Apple's side in such a case, other than going back to a traditional mount_nfs.

    As you have discovered, automount/autofs is also an "auto-unmounter" and there is no way to remove that feature. Contrary to what one might think, the auto unmounting does NOT happen after a period of "inactivity" of the mount. This is because autofs has no way of knowing when an automounted file system was last accessed. So, instead it periodically attempts to unmount it - if it is busy it won't get unmounted - if it isn't busy it will get unmounted.
    You can't disable this - but you can make the periodic unmounting so infrequent as to effectively disable the feature. Try setting the AUTOMOUNT_TIMEOUT interval to something really large - like 315360000 (which would be 10 years).
    However, in theory, this auto-unmounting should not be a problem because if it does get unmounted then the next access to that file system should cause it to get mounted again. And all this should happen without the code that is accessing the automount knowing that it isn't always mounted. It should always be there when it is accessed. So, the usual response to someone asking how to disable the auto-unmounting is to ask why they think it is a problem.
    (Oh, and you don't have to use "mount_nfs" - just "mount" should work to manually mount an NFS file system (that saves a little typing).)
    HTH
    --macko
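    A sketch of the suggested tweak; /etc/autofs.conf is the file named in the question, and the value is the one proposed above. Running automount afterwards flushes its cache so the new interval takes effect:
    # in /etc/autofs.conf
    AUTOMOUNT_TIMEOUT=315360000
    # then:
    sudo automount -vc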

  • iPhoto 09 & storing "Originals" directory in an NFS-mounted partition

    In iPhoto 08, I had the Originals directory NFS mounted off a Linux server. On the Mac, I mounted it at /Volumes/pics and I had a symbolic link pointing from iPhoto Library/Originals to /Volumes/pics. Everything was hunky-dory.
    I upgraded to iPhoto 09 and now it will not import new pictures. I get the following error message:
    iPhoto cannot import your photos to this library because iPhoto cannot access the library.
    Does anyone have any ideas on how to store the original pictures on an NFS-mounted partition?

    Ok. I figured this out myself.
    I mounted the NFS volume directly on ~/Pictures/iPhoto Library/Originals. This seems to fix the problem.
    So, it's not the NFS mounting that's the issue. It's the symbolic link to /Volumes/pics.
    Weird.
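    A hypothetical sketch of the fix described above; the server name and export path are placeholders, not from the original post:
    sudo mount -t nfs linuxserver:/export/pics "/Users/yourname/Pictures/iPhoto Library/Originals"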

  • Since update to 10.4.10 NFS-mounts stopped mounting

    Since the update to 10.4.10 (with or without the security update, it didn't matter), my NFS mounts from a Linux machine via a WLAN router and AirPort to my MacBook Pro have stopped working.
    It did work, although (Apple-typically) unreliably, until the 10.4.10 update.
    rpcinfo -p machine shows:
    program vers proto port
    100000 2 tcp 111 portmapper
    100000 2 udp 111 portmapper
    100005 1 udp 797 mountd
    100005 1 tcp 800 mountd
    100005 2 udp 797 mountd
    100005 2 tcp 800 mountd
    100005 3 udp 797 mountd
    100005 3 tcp 800 mountd
    showmount -e machine:
    Exports list on dream:
    /var/mnt/hdd/Bilder 192.168.233.9/255.255.255.0
    /var/mnt/hdd/Tools 192.168.233.9/255.255.255.0
    /var/mnt/hdd/movie 192.168.233.9/255.255.255.0
    /var/mnt/hdd/Musik 192.168.233.9/255.255.255.0
    /hdd/usbstick 192.168.233.9/255.255.255.0
    showmount -d machine:
    Directories on dream:
    /var/mnt/hdd/Bilder
    /var/mnt/hdd/Musik
    /var/mnt/hdd/movie
    192.168.233.9/255.255.255.0
    Therefore everything looks OK on the Linux side; another Linux machine is able to mount the shares without problems.
    The console.log shows:
    NFS Portmap: RPC: Program not registered
    NFS Portmap: RPC: Program not registered
    Jul 15 18:11:50 alu automount[280]: Attempt to mount /automount/Servers/dream/hdd/Musik returned 1 (Operation not permitted)
    Jul 15 18:11:50 alu automount[280]: Attempt to mount /automount/Servers/dream/hdd/Bilder returned 1 (Operation not permitted)
    Jul 15 18:11:50 alu automount[280]: Attempt to mount /automount/Servers/dream/hdd/Tools returned 1 (Operation not permitted)
    Jul 15 18:11:50 alu automount[280]: Attempt to mount /automount/Servers/dream/hdd/movie returned 1 (Operation not permitted)
    NFS Portmap: RPC: Program not registered
    NFS Portmap: RPC: Program not registered
    [...Masses of those lines]
    NFS Portmap: RPC: Program not registered
    2007-07-15 18:14:55.992 NetInfo Manager[499] * -[NSCFString substringFromIndex:]: Range or index out of bounds
    NFS Portmap: RPC: Program not registered
    NFS Portmap: RPC: Program not registered
    NFS Portmap: RPC: Program not registered
    NFS Portmap: RPC: Program not registered
    [.... masses of those lines]
    I tried with NFS Manager - no luck.
    I checked the Netinfo-DB - no luck.
    I rebooted the MacBook Pro - no change.
    I rebooted the Linuxmachine - no change.
    I typed "mount -t nfs ..." into a terminal - no luck.
    It is working with Linux - what prevents Apple from getting it to work, too?

    This error appeared once and never again since.
    But NFS mounts still fail. A friend's Titanium, still on 10.4.9, has the usual problems with AirPort connection stability and believed-server-connection losses, but it connects like a charm to the very server the Aluminium doesn't like.
    I reinstalled 10.4.10, rebooted several times, repaired permissions, and searched system.log and console.log, but couldn't find a message pointing to a problem (with the exception of a message about a failing startup of my Flying Buttress firewall).
    Here is the latest system.log (some lines deleted for security reasons only):
    Jul 16 16:35:22 alu SystemStarter[1322]: authentication service (1332) did not complete successfully
    Jul 16 16:35:22 alu SystemStarter[1322]: Printing Services (1325) did not complete successfully
    Jul 16 16:35:24 alu Parallels: Unloading Network module...
    Jul 16 16:35:24 alu Parallels: Unloading ConnectUSB module...
    Jul 16 16:35:24 alu Parallels: Unloading Monitor module...
    Jul 16 16:35:28 alu SystemStarter[1322]: BrickHouse Firewall (1338) did not complete successfully
    Jul 16 16:35:29 alu SystemStarter[1322]: The following StartupItems failed to properly start:
    Jul 16 16:35:29 alu SystemStarter[1322]: /System/Library/StartupItems/AuthServer
    Jul 16 16:35:29 alu SystemStarter[1322]: - execution of Startup script failed
    Jul 16 16:35:29 alu SystemStarter[1322]: /System/Library/StartupItems/PrintingServices
    Jul 16 16:39:57 localhost kernel[0]: hi mem tramps at 0xffe00000
    Jul 16 16:39:58 localhost kernel[0]: PAE enabled
    Jul 16 16:39:58 localhost kernel[0]: standard timeslicing quantum is 10000 us
    Jul 16 16:39:58 localhost kernel[0]: vmpagebootstrap: 254317 free pages
    Jul 16 16:39:58 localhost kernel[0]: migtable_maxdispl = 71
    Jul 16 16:39:58 localhost kernel[0]: Enabling XMM register save/restore and SSE/SSE2 opcodes
    Jul 16 16:39:58 localhost kernel[0]: 89 prelinked modules
    Jul 16 16:39:58 localhost kernel[0]: ACPI CA 20060421
    Jul 16 16:39:58 localhost kernel[0]: AppleIntelCPUPowerManagement: ready
    Jul 16 16:39:58 localhost kernel[0]: AppleACPICPU: ProcessorApicId=0 LocalApicId=0 Enabled
    Jul 16 16:39:58 localhost kernel[0]: AppleACPICPU: ProcessorApicId=1 LocalApicId=1 Enabled
    Jul 16 16:39:58 localhost kernel[0]: Copyright (c) 1982, 1986, 1989, 1991, 1993
    Jul 16 16:39:58 localhost kernel[0]: The Regents of the University of California. All rights reserved.
    Jul 16 16:39:58 localhost kernel[0]: using 5242 buffer headers and 4096 cluster IO buffer headers
    Jul 16 16:39:58 localhost kernel[0]: Enabling XMM register save/restore and SSE/SSE2 opcodes
    Jul 16 16:39:58 localhost kernel[0]: Started CPU 01
    Jul 16 16:39:58 localhost kernel[0]: IOAPIC: Version 0x20 Vectors 64:87
    Jul 16 16:39:58 localhost kernel[0]: ACPI: System State [S0 S3 S4 S5] (S3)
    Jul 16 16:39:58 localhost kernel[0]: Security auditing service present
    Jul 16 16:39:58 localhost kernel[0]: BSM auditing present
    Jul 16 16:39:58 localhost kernel[0]: disabled
    Jul 16 16:39:58 localhost kernel[0]: rooting via boot-uuid from /chosen: 4EF96DEE-9FCF-4476-AD53-58BEA0AA953E
    Jul 16 16:39:58 localhost kernel[0]: Waiting on <dict ID="0"><key>IOProviderClass</key><string ID="1">IOResources</string><key>IOResourceMatch</key><string ID="2">boot-uuid-media</string></dict>
    Jul 16 16:39:58 localhost kernel[0]: USB caused wake event (EHCI)
    Jul 16 16:39:58 localhost kernel[0]: FireWire (OHCI) Lucent ID 5811 PCI now active, GUID 0016cbfffe66af32; max speed s400.
    Jul 16 16:39:58 localhost kernel[0]: Got boot device = IOService:/AppleACPIPlatformExpert/PCI0@0/AppleACPIPCI/SATA@1F,2/AppleAHCI/PRT2 @2/IOAHCIDevice@0/AppleAHCIDiskDriver/IOAHCIBlockStorageDevice/IOBlockStorageDri ver/FUJITSU MHV2100BH Media/IOGUIDPartitionScheme/Customer@2
    Jul 16 16:39:58 localhost kernel[0]: BSD root: disk0s2, major 14, minor 2
    Jul 16 16:39:59 localhost kernel[0]: CSRHIDTransitionDriver::probe:
    Jul 16 16:39:59 localhost kernel[0]: CSRHIDTransitionDriver::start before command
    Jul 16 16:39:59 localhost kernel[0]: CSRHIDTransitionDriver::stop
    Jul 16 16:39:59 localhost kernel[0]: IOBluetoothHCIController::start Idle Timer Stopped
    Jul 16 16:39:59 localhost kernel[0]: Jettisoning kernel linker.
    Jul 16 16:39:59 localhost kernel[0]: Resetting IOCatalogue.
    Jul 16 16:39:59 localhost kernel[0]: display: family specific matching fails
    Jul 16 16:39:59 localhost kernel[0]: Matching service count = 0
    Jul 16 16:39:59 localhost kernel[0]: Matching service count = 21
    Jul 16 16:39:59 localhost kernel[0]: Matching service count = 21
    Jul 16 16:39:59 localhost kernel[0]: Matching service count = 21
    Jul 16 16:39:59 localhost kernel[0]: Matching service count = 21
    Jul 16 16:39:59 localhost kernel[0]: Matching service count = 21
    Jul 16 16:39:59 localhost kernel[0]: display: family specific matching fails
    Jul 16 16:39:59 localhost kernel[0]: Previous Shutdown Cause: 0
    Jul 16 16:39:59 localhost kernel[0]: ath_attach: devid 0x1c
    Jul 16 16:39:59 localhost kernel[0]: mac 10.3 phy 6.1 radio 10.2
    Jul 16 16:39:59 localhost kernel[0]: IPv6 packet filtering initialized, default to accept, logging disabled
    Jul 16 16:40:00 localhost lookupd[47]: lookupd (version 369.6) starting - Mon Jul 16 16:40:00 2007
    Jul 16 16:40:03 localhost DirectoryService[55]: Launched version 2.1 (v353.6)
    Jul 16 16:40:05 localhost diskarbitrationd[45]: disk0s2 hfs 9AC36BC8-2C3E-3282-B08D-9C22EC354E35 Alu HD /
    Jul 16 16:40:07 localhost kernel[0]: yukonosx: Ethernet address 00:xx:xx:xx:xx - deleted
    Jul 16 16:40:07 localhost mDNSResponder: Couldn't read user-specified Computer Name; using default “Macintosh-00.........” instead
    Jul 16 16:40:07 localhost kernel[0]: AirPort_Athr5424ab: Ethernet address 00:xx:xx:xx:xx - deleted
    Jul 16 16:40:07 localhost mDNSResponder: Couldn't read user-specified local hostname; using default “Macintosh-00..........” instead
    Jul 16 16:40:08 localhost mDNSResponder: Adding browse domain local.
    Jul 16 16:40:09 localhost lookupd[70]: lookupd (version 369.6) starting - Mon Jul 16 16:40:09 2007
    Jul 16 16:40:09 localhost configd[43]: AppleTalk startup
    Jul 16 16:40:09 alu configd[43]: setting hostname to "alu.local"
    Jul 16 16:40:13 alu kernel[0]: Registering For 802.11 Events
    Jul 16 16:40:13 alu kernel[0]: [HCIController][setupHardware] AFH Is Supported
    Jul 16 16:40:15 alu configd[43]: AppleTalk startup complete
    Jul 16 16:40:15 alu configd[43]: AppleTalk shutdown
    Jul 16 16:40:15 alu configd[43]: AppleTalk shutdown complete
    Jul 16 16:40:18 alu configd[43]: AppleTalk startup
    Jul 16 16:40:20 alu mDNSResponder: getifaddrs ifa_netmask for fw0(7) Flags 8863 Family 2 169.254.113.87 has different family: 0
    Jul 16 16:40:20 alu mDNSResponder: SetupAddr invalid sa_family 0
    Jul 16 16:40:23 alu SystemStarter[51]: BrickHouse Firewall (102) did not complete successfully
    Jul 16 16:40:27 alu configd[43]: AppleTalk startup complete
    Jul 16 16:40:29 alu configd[43]: executing /System/Library/SystemConfiguration/Kicker.bundle/Contents/Resources/enable-net work
    Jul 16 16:40:29 alu configd[43]: posting notification com.apple.system.config.network_change
    Jul 16 16:40:29 alu lookupd[169]: lookupd (version 369.6) starting - Mon Jul 16 16:40:29 2007
    Jul 16 16:40:31 alu mDNSResponder: getifaddrs ifa_netmask for fw0(7) Flags 8863 Family 2 169.254.113.87 has different family: 0
    Jul 16 16:40:31 alu mDNSResponder: SetupAddr invalid sa_family 0
    Jul 16 16:40:31 alu /System/Library/CoreServices/loginwindow.app/Contents/MacOS/loginwindow: Login Window Application Started
    Jul 16 16:40:31 alu SystemStarter[51]: The following StartupItems failed to properly start:
    Jul 16 16:40:31 alu SystemStarter[51]: /Library/StartupItems/Firewall
    Jul 16 16:40:31 alu SystemStarter[51]: - execution of Startup script failed
    Jul 16 16:40:33 alu loginwindow[182]: Login Window Started Security Agent
    Jul 16 16:40:48 alu configd[43]: target=enable-network: disabled
    Jul 16 16:44:14 alu /System/Library/PrivateFrameworks/Apple80211.framework/Resources/airport: Currently connected to network WZ
    Jul 16 16:45:40 alu automount[222]: Can't get NFS_V3/TCP port for dream
    Jul 16 16:45:40 alu automount[222]: Can't get NFS_V2/TCP port for dream
    Jul 16 16:45:40 alu automount[222]: Attempt to mount /automount/static/mnt returned 1 (Operation not permitted)

  • Permissions issue with NFS mounted MyCloud

    First off, let me say I think this is a Linux issue and not a WDMyCloud issue, but I'm not sure, so here goes... I can rsync my stuff off the Linux systems on my LAN to back them up onto a WDMyCloud share, no problem. But then I can't get at some stuff with limited file permissions when I mount the WDMyCloud via NFS.
    Here's the problem. Let's say my directory tree on the WDMyCloud looks like "/shares/Stuff/L1/L2/L3", where the permissions on "L3" look like this:
    drwx------+ 3 fred share 4096 Jul 1 01:03 L3
    It's readable/writeable only by "fred". I want to preserve the permissions and ownerships on everything, so that if I have to do a restore and "rsync" it back onto another machine, they'll go back the way they were when I backed them up. I can see the contents of "L3" if I "ssh" into the MyCloud, *but* I *cannot* see the contents of "L3" if I try to look at it via the NFS mount - I get "ls: L3: permission denied". If I change the permissions on it, e.g.
    drwxrwxrwx+ 3 fred share 4096 Jul 1 01:03 L3
    then I can see the contents of "L3" just fine. So it's just the fact that, via the NFS mount, the MyCloud NFS server (or something?) won't give me access to it unless the permissions are open, even logged in as "root" on the machine where the MyCloud is NFS mounted. I tried creating "fred" as a user on the MyCloud *and* made sure the numerical UIDs and GIDs were the same on the Linux machine and the MyCloud - no dice. I haven't tried everything in the world yet (I haven't tried rebooting the MyCloud to see if some server hasn't picked up the changes to "/etc/passwd" or whatever; there's something called "idmapd" that I guess I should look into... etc.) But I thought maybe somebody here might have run into this or has a bright idea?

    I had some problems, and I modified the /etc/exports file to better meet my needs. This is its content:
    /nfs/jffs *(rw,sync,no_root_squash,no_subtree_check)
    /nfs *(rw,all_squash,sync,no_subtree_check,insecure,crossmnt,anonuid=999,anongid=1000)
    The first line/share allows you to change the permissions and uids below it when you mount it on your machine. In the second one, I mapped anonuid to the uid my user has on the WD (999), so when I mount that share on a machine, everything is written with that user id. If you modify the /etc/exports file, remember to run "exportfs -a" after you edit it to refresh the changes. Hope this helps with your problem.
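    A hypothetical client-side usage example for the second export above (the hostname and mount point are placeholders): with all_squash and anonuid=999,anongid=1000, everything created through the mount is owned by uid 999 / gid 1000, which avoids per-user permission mismatches at the cost of flattening ownership:
    # mount -t nfs wdmycloud:/nfs /mnt/wd
    # touch /mnt/wd/test && ls -ln /mnt/wd/test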

    Everything was working normally this morning until suddenly the Brush Tool stopped being able to add to a particular layer, but instead added to a layer below. A few tests found that the Brush Tool,Spot Healing Brush Tool, Dodge Tool, dont' work, and