iPhoto '09 & storing "Originals" directory on an NFS-mounted partition

In iPhoto '08, I had the Originals directory NFS-mounted off a Linux server. On the Mac, I mounted it at /Volumes/pics and had a symbolic link pointing from iPhoto Library/Originals to /Volumes/pics. Everything was hunky-dory.
I upgraded to iPhoto '09 and now it will not import new pictures. I get the following error message:
iPhoto cannot import your photos to this library because iPhoto cannot access the library.
Anyone have any ideas on how to store the original pictures on an NFS-mounted partition?

OK, I figured this out myself.
I mounted the NFS volume directly on ~/Pictures/iPhoto Library/Originals. This seems to fix the problem.
So it's not the NFS mounting that's the issue; it's the symbolic link to /Volumes/pics.
Weird.
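For anyone hitting the same thing, a minimal sketch of the direct mount; the server name and export path here are placeholders for your own:
# remove the old symlink and recreate Originals as a real directory (mount point)
rm ~/Pictures/"iPhoto Library"/Originals
mkdir ~/Pictures/"iPhoto Library"/Originals
# mount the NFS export directly at the Originals path (hypothetical server/export)
sudo mount -t nfs linuxserver:/export/pics ~/Pictures/"iPhoto Library"/Originals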

Similar Messages

  • iPhoto stopped storing originals in Masters folder

    Due to the hard drive being almost full on my iMac, about a year ago I moved the iPhoto library to an external drive, following the instructions on the forum, including how to make the version on the external hard drive the new default. Although iPhoto appears to be working well, I have discovered to my shock that the original raw images from my camera are no longer being stored under Masters when I import to iPhoto. They don't appear to have been since the move to the external hard drive last year. I have searched the Mac HD and they don't appear to be there either.
    Can anyone tell me what might have happened to the originals?
    In addition, does anyone know how to redirect the originals to be saved under Masters for future importing?

    Is there an Originals folder inside the Library package? That folder got renamed in recent versions. If so, are the files in that?
    Other than that, the only thing that would cause this is if you are running a Referenced Library.
    A Managed Library is the default setting: iPhoto copies files into the iPhoto Library when importing, and the files are then stored in the Library package.
    A Referenced Library is when iPhoto is NOT copying the files into the iPhoto Library when importing, because you made a change at iPhoto -> Preferences -> Advanced (you unchecked the option to copy files into the Library on import). The files are then stored wherever you put them and not in the Library package. In this scenario you are responsible for the file management.

  • NFS mount created with NetInfo not shown by Directory Utility in Leopard

    On Tiger I used to mount a few directories dynamically using NFS. To do so, I used NetInfo.
    I have upgraded to Leopard and the mounted directories are still working, although NetInfo is not present anymore. I was expecting to see these mount points and modify them using Directory Utility, which has replaced NetInfo, but they are not even shown in the Mounts panel of Directory Utility.
    Is there a way to see and modify NFS mount points previously created by NetInfo with the new Directory Utility?

    Thank you very much! I was able to recreate the static automount that I had previously had. I just had to create the "mounts" directory in /var/db/dslocal/nodes/Default/ and then I saved the following text as a .plist file within "mounts".
    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
    <key>dir</key>
    <array>
    <string>/Network/Backups</string>
    </array>
    <key>generateduid</key>
    <array>
    <string>0000000-0000-0000-0000-000000000000</string>
    </array>
    <key>name</key>
    <array>
    <string>server:/Backups</string>
    </array>
    <key>opts</key>
    <array>
    <string>url==afp://;AUTH=NO%20USER%20AUTHENT@server/Backups</string>
    </array>
    <key>vfstype</key>
    <array>
    <string>url</string>
    </array>
    </dict>
    </plist>
    I don't think the specific name of the .plist file matters, nor the value for "generateduid". I'm listing all this info assuming that someone out there might care.
    I assume this would work for SMB shares also... if SMB worked, which it hasn't on my system since I installed Leopard.
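    A sketch of the steps described above (the plist filename is arbitrary; restarting Directory Services this way is an assumption, not something confirmed in this thread):
    sudo mkdir -p /var/db/dslocal/nodes/Default/mounts
    sudo cp backups.plist /var/db/dslocal/nodes/Default/mounts/
    # assumption: kick Directory Services so it reads the new mount record
    sudo killall DirectoryService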

  • NFS Mounted Directory And Files Quit Responding

    I mounted a remote directory using NFS and I can access the mount point and all of its sub-directories and files. After a while, all of the sub-directories and files no longer respond when clicked; in column view there is no longer an icon nor any statistics for those files. If I go back and click on Network->Servers->myserver->its_subdirectories, it will eventually respond again.
    I have found no messages in the system log. And nfsstat shows no errors.
    I am using these mount parameters in the Directory Utility->Mounts tab:
    ro net -P -T -3
    Any idea why the NFS mounted directories and files quit responding?
    Thanks.

    I may have found an answer to my own question.
    It looks like automount will automatically unmount a file system if it has not been accessed in 10 minutes. This timeout can be changed using the automount command. I am going to try increasing this timeout value.
    Here is part of the man page:
    SYNOPSIS
    automount [-v] [-c] [-t timeout]
    -t timeout
    Set to timeout seconds the time after which an automounted file
    system will be unmounted if it hasn't been referred to within
    that period of time. The default is 10 minutes (600 seconds).
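    For example, a sketch of raising the timeout to one hour (the value is arbitrary):
    # re-run automount with a 3600-second unmount timeout
    sudo automount -t 3600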

  • Global NFS Mount & Cluster

    Dear All
    My development server is in a LAN environment and the other systems, QAS and PRD, are in the SZ2. For transport management configuration, we need to do the global NFS mounting. But as per my company policy, there is a security issue.
    The second issue is that if we mount /usr/sap/trans as global and also part of the NFS, then the cluster startup will fail. Please suggest whether the above directory should be part of the cluster or not.
    Regards
    Vimal Pathak

    Tiffany wrote:
    > We need to store information (objects) that are global to a cluster.
    The only way you can do this is to store the information in a database.
    > It's my understanding that anything stored in the servlet context is
    > visible to all servers,
    No. This is not true.
    > but it resides on a network drive. Wouldn't
    > each read of this servlet context info involve a directory read hit
    > with all its implied performance degradation?
    How about WebLogic Workspaces? Is this information replicated across clusters? Does it live on a network drive as a file as well?
    > Hoping someone can help us out here.
    Workspaces are not replicated.
    > Thanks for any help,
    > Tiffany
    Cheers
    - Prasad

  • Accessing NFS mounted share in Finder no longer works in 10.5.3+

    I previously set up an automounted NFS share with Leopard against a RHEL 5 server at the office. I had to jump through a few hoops to punch a hole through the appfirewall to get the share accessible in the Finder.
    A few months later, when I returned to the office after a consultancy stint and upgrades to 10.5.3 and 10.5.4, the NFS mount no longer works. I have investigated it today and I can't get it to run even with the appfirewall disabled.
    I've been doing some troubleshooting, and the interaction between statd, lockd and perhaps portmap seems a bit fishy, even with the appfirewall disabled. Both statd and lockd complain that they cannot register; lockd once and statd indefinitely.
    Jul 2 15:17:10 ySubmarine com.apple.statd[521]: rpc.statd: unable to register (SM_PROG, SM_VERS, UDP)
    Jul 2 15:17:10 ySubmarine com.apple.launchd[1] (com.apple.statd[521]): Exited with exit code: 1
    Jul 2 15:17:10 ySubmarine com.apple.launchd[1] (com.apple.statd): Throttling respawn: Will start in 10 seconds
    ... and rpcinfo -p gets connection refused unless I start portmap using the launchctl utility.
    This may be a bit obscure, and I'm not exactly an expert on NFS, so I wonder if someone else has stumbled across this and can point me in the right direction?
    Johan

    Sorry for my late response, but I have finally got around to some trial and error. I can mount the share using mount_nfs (but need to use sudo), and it shows up as a mounted disk in the Finder. However, when I start to browse a directory on the share that I can write to, I end up with the lockd and statd failures.
    $ mount_nfs -o resvport xxxx:/home /Users/yyyy/xxxx-home
    mount_nfs: /Users/yyyy/xxxx-home: Permission denied
    $ sudo mount_nfs -o resvport xxxx:/home /Users/yyyy/xxxx-home
    Jul 7 10:37:34 zzzz com.apple.statd[253]: rpc.statd: unable to register (SM_PROG, SM_VERS, UDP)
    Jul 7 10:37:34 zzzz com.apple.launchd[1] (com.apple.statd[253]): Exited with exit code: 1
    Jul 7 10:37:34 zzzz com.apple.launchd[1] (com.apple.statd): Throttling respawn: Will start in 10 seconds
    Jul 7 10:37:44 zzzz com.apple.statd[254]: rpc.statd: unable to register (SM_PROG, SM_VERS, UDP)
    Jul 7 10:37:44 zzzz com.apple.launchd[1] (com.apple.statd[254]): Exited with exit code: 1
    Jul 7 10:37:44 zzzz com.apple.launchd[1] (com.apple.statd): Throttling respawn: Will start in 10 seconds
    Jul 7 10:37:54 zzzz com.apple.statd[255]: rpc.statd: unable to register (SM_PROG, SM_VERS, UDP)
    Jul 7 10:37:54 zzzz com.apple.launchd[1] (com.apple.statd[255]): Exited with exit code: 1
    Jul 7 10:37:54 zzzz com.apple.launchd[1] (com.apple.statd): Throttling respawn: Will start in 10 seconds
    Jul 7 10:37:58 zzzz loginwindow[25]: 1 server now unresponsive
    Jul 7 10:37:59 zzzz KernelEventAgent[26]: tid 00000000 unmounting 1 filesystems
    Jul 7 10:38:02 zzzz com.apple.autofsd[40]: automount: /net updated
    Jul 7 10:38:02 zzzz com.apple.autofsd[40]: automount: /home updated
    Jul 7 10:38:02 zzzz com.apple.autofsd[40]: automount: no unmounts
    Jul 7 10:38:02 zzzz loginwindow[25]: No servers unresponsive
    ... and this is with the firewall wide open.
    I guess the Finder somehow triggers file locking over NFS.
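    Two things that might be worth trying, based on the symptoms above (both are assumptions, not confirmed fixes): load portmap so statd and lockd can register, or mount with NFS locking disabled so lockd is never consulted.
    # assumption: Leopard ships portmap as a launchd job at this path
    sudo launchctl load -w /System/Library/LaunchDaemons/com.apple.portmap.plist
    # alternative: handle locks locally so lockd/statd are not involved
    sudo mount_nfs -o resvport,nolocks,locallocks xxxx:/home /Users/yyyy/xxxx-home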

  • Cannot access external NFS mounts under Snow Leopard

    I was previously running Leopard (10.5.x) and automounted an Ubuntu (9.04 Jaunty) Linux NFS share from my iMac. I had set this up with Directory Utility, it was instantly functional, and I never had any issues. After upgrading to Snow Leopard, I set up the same mount point on the same machine (using Disk Utility now), without changing any of the export settings, and Disk Utility stated that the external server had responded and appeared to be working correctly.
    However, when attempting to access the share, I get an 'Operation not permitted' error. I also cannot manually create the NFS mount using mount or mount_nfs. I get a similar error if I try to cd into /net/<remote-machine>/<share>. I can see the shared folder in /net/<remote-machine>, but I cannot access it (cd, ls, etc.). I can see on the Linux machine that the iMac has mounted the share (showmount -a), so the problem appears to be solely in the permissions. But I have not changed any of the permissions on the remote machine, and even then, they are blown wide open (777), so I'm not sure what is causing the issue. I have tried everything as both a regular user and as root. Any thoughts?
    On the Linux NFS server:
    % cat /etc/exports
    /share 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)
    % showmount -a
    All mount points on <server>:
    192.168.1.100:/share <-- <server> address
    192.168.1.101:/share <-- iMac address
    On the iMac:
    % rpcinfo -t 192.168.1.100 nfs
    program 100003 version 2 ready and waiting
    program 100003 version 3 ready and waiting
    program 100003 version 4 ready and waiting
    % mount
    trigger on /net/<server>/share (autofs, automounted, nobrowse)
    % mount -t nfs 192.168.1.100:/share /Volumes/share1
    mount_nfs: /Volumes/share1: Operation not permitted

    My guess is that the Linux server is refusing NFS requests coming from a non-reserved (<1024) source port. If that's the case, adding "insecure" to the Linux export options should get it working. (Note: requiring the use of reserved ports doesn't actually make things any more secure on most networks, so the name of the option is a bit misleading.)
    If you were previously able to mount that same export from a Mac, you must have been specifying the "-o resvport" option and doing the mounts as root (via sudo or automount which happens to run as root). So that may be another fix.
    HTH
    --macko
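    A sketch of both suggested fixes, reusing the export and addresses shown above:
    # on the Linux server: add "insecure" to the export options in /etc/exports
    /share 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash,insecure)
    # then reload the export table
    sudo exportfs -ra
    # or, on the Mac: mount from a reserved port, as root
    sudo mount -t nfs -o resvport 192.168.1.100:/share /Volumes/share1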

  • Expdp fails to create .dmp files in NFS mount point in Solaris 10, Oracle 10g

    Dear folks,
    I am facing a weird issue while doing expdp with an NFS mount point. Kindly help me on this.
    ===============
    expdp system/manager directory=exp_dumps dumpfile=u2dw.dmp schemas=u2dw
    Export: Release 10.2.0.4.0 - 64bit Production on Wednesday, 31 October, 2012 17:06:04
    Copyright (c) 2003, 2007, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    ORA-39001: invalid argument value
    ORA-39000: bad dump file specification
    ORA-31641: unable to create dump file "/backup_db/dumps/u2dw.dmp"
    ORA-27040: file create error, unable to create file
    SVR4 Error: 122: Operation not supported on transport endpoint
    I have mounted it like this:
    mount -o hard,rw,noac,rsize=32768,wsize=32768,suid,proto=tcp,vers=3 -F nfs 172.20.2.204:/exthdd /backup_db
    NFS=172.20.2.204:/exthdd
    I have given read/write grants to public as well as to the specific user.

    782011 wrote:
    Hi sb92075,
    Thanks for your reply. Please find the below. I am able to touch files on the mount, and the export log files are also created, yet I still get the error message I showed in my previous post.
    # su - oracle
    Sun Microsystems Inc. SunOS 5.10 Generic January 2005
    You have new mail.
    oracle 201> touch /backup_db/dumps/u2dw.dmp.test
    oracle 202>
    I contend that Oracle is too dumb to lie & does not mis-report reality:
    27040, 00000, "file create error, unable to create file"
    // *Cause:  create system call returned an error, unable to create file
    // *Action: verify filename, and permissions
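    One workaround sometimes offered for ORA-27040 with SVR4 error 122 on Solaris NFS mounts (an assumption on my part, not something from this thread) is to have the client handle file locks locally via the llock mount option:
    # assumption: remount with local locking, keeping the original options
    umount /backup_db
    mount -o hard,rw,noac,rsize=32768,wsize=32768,suid,proto=tcp,vers=3,llock -F nfs 172.20.2.204:/exthdd /backup_db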

  • Anyone else having problems with NFS mounts not showing up?

    Since upgrading to Lion, I cannot see NFS shares anymore. The folder that had them is still there, but the share will not mount. This worked fine in 10.6.
    nfs://192.168.1.234/volume1/video
    /Volumes/DiskStation/video
    resvport,nolocks,locallocks,intr,soft,wsize=32768,rsize=3276
    Any ideas?
    Thanks

    Since the NFS mounts show up in the Terminal app, go to the local mount directory (i.e. the mount location in the NFS Mounts set up using Disk Utility) and do the following:
    First, create a link file:
    sudo ln -s full_local_path link_path_name
    sudo ln -s /Volumes/linux/projects/ linuxProjects
    Next, create a new directory, say in the root of the host drive (i.e. Macintosh HDD):
    sudo mkdir new_link_storage_directory
    sudo mkdir /Volumes/Macintosh\ HDD/Links
    Then move the above link file to the new directory:
    sudo mv link_path_name new_link_storage_directory
    sudo mv linuxProjects /Volumes/Macintosh\ HDD/Links/
    In the Finder, locate the new link storage directory; the link file inside it should allow opening of these NFS mount points.
    Finally, after all links have been created and placed into the new directory, place it in the left sidebar. Now it works just like before.

  • Strange delete behavior in Solaris 10 with NFS mounts

    We are using the Apache commons-io framework to delete a directory in a Solaris 10 environment. The code works well on our dev and QA boxes, but when we load it into our production environment we get intermittent failures where the files in a directory are not deleted, and therefore when we try to delete the directory itself, the delete fails.
    We suspect this may be some kind of NFS problem in Solaris, where it may take longer to delete a file than on a local drive; the code then reaches the deleteDirectory call before the OS has actually removed the files, and the directory delete fails because files are still present.
    Has anyone seen this in an NFS environment with Solaris? We are on Java 1.4.2_15 and we are using Apache commons-io 1.3.1.

    The Apache commons-io framework contains a method to delete a directory by recursively deleting all files and subdirectories. Intermittently, we are seeing some of the files in a subdirectory remain, and then when delete is called to remove the directory (from within the commons-io deleteDirectory method) we get an IOException. This only occurs on an NFS-mounted file system on our production system. Our dev and QA systems are also on NFS, but it is a different one and appears to be loaded differently, and the behavior there consistently works as expected.
    It appears to be some kind of latency issue related to the way Java deletes files on NFS, but we have no conclusive evidence so far.
    We have not tried this with a newer version of java since we are presently constrained to 1.4 :-(
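    One NFS behavior worth ruling out (an assumption, not something established in this thread): if a process still holds a deleted file open, the NFS client "silly-renames" it to a hidden .nfsXXXX file instead of removing it, which leaves the directory non-empty until the file handle is closed. A quick check on the production mount:
    # look for leftover silly-rename files in a directory that failed to delete
    ls -a /path/to/problem/dir | grep '^\.nfs'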

  • NFS4: Problem mounting NFS mount onto a Solaris 10 Client

    Hi,
    I am having problems mounting NFS mount point from a Linux-Server onto a Solaris 10 Client.
    In the following:
    Server IP: *.120
    Client IP: *.100
    Commands run on the client:
    ==================
    # mount -o vers=3 -F nfs 172.25.30.120:/scratch/pvfs2 /scratch/pvfs2
    nfs mount: 172.25.30.120: : RPC: Rpcbind failure - RPC: Unable to receive
    nfs mount: retrying: /scratch/pvfs2
    nfs mount: 172.25.30.120: : RPC: Rpcbind failure - RPC: Unable to receive
    nfs mount: 172.25.30.120: : RPC: Rpcbind failure - RPC: Unable to receive
    # mount -o vers=4 -F nfs 172.25.30.120:/scratch/pvfs2 /scratch/pvfs2
    nfs mount: 172.25.30.120:/scratch/pvfs2: No such file or directory
    # rpcinfo -p
    program vers proto port service
    100000 4 tcp 111 rpcbind
    100000 3 tcp 111 rpcbind
    100000 2 tcp 111 rpcbind
    100000 4 udp 111 rpcbind
    100000 3 udp 111 rpcbind
    100000 2 udp 111 rpcbind
    1073741824 1 tcp 36084
    100024 1 udp 42835 status
    100024 1 tcp 36086 status
    100133 1 udp 42835
    100133 1 tcp 36086
    100001 2 udp 42836 rstatd
    100001 3 udp 42836 rstatd
    100001 4 udp 42836 rstatd
    100002 2 tcp 36087 rusersd
    100002 3 tcp 36087 rusersd
    100002 2 udp 42838 rusersd
    100002 3 udp 42838 rusersd
    100011 1 udp 42840 rquotad
    100021 1 udp 4045 nlockmgr
    100021 2 udp 4045 nlockmgr
    100021 3 udp 4045 nlockmgr
    100021 4 udp 4045 nlockmgr
    100021 1 tcp 4045 nlockmgr
    100021 2 tcp 4045 nlockmgr
    100021 3 tcp 4045 nlockmgr
    100021 4 tcp 4045 nlockmgr
    # showmount -e 172.25.30.120 (Server)
    showmount: 172.25.30.120: RPC: Rpcbind failure - RPC: Unable to receive
    Commands on Server:
    ================
    program vers proto port service
    100000 2 tcp 111 portmapper
    100000 2 udp 111 portmapper
    100021 1 tcp 49927 nlockmgr
    100021 3 tcp 49927 nlockmgr
    100021 4 tcp 49927 nlockmgr
    100021 1 udp 32772 nlockmgr
    100021 3 udp 32772 nlockmgr
    100021 4 udp 32772 nlockmgr
    100011 1 udp 796 rquotad
    100011 2 udp 796 rquotad
    100011 1 tcp 799 rquotad
    100011 2 tcp 799 rquotad
    100003 2 udp 2049 nfs
    100003 3 udp 2049 nfs
    100003 4 udp 2049 nfs
    100003 2 tcp 2049 nfs
    100003 3 tcp 2049 nfs
    100003 4 tcp 2049 nfs
    100005 1 udp 809 mountd
    100005 1 tcp 812 mountd
    100005 2 udp 809 mountd
    100005 2 tcp 812 mountd
    100005 3 udp 809 mountd
    100005 3 tcp 812 mountd
    100024 1 udp 854 status
    100024 1 tcp 857 status
    # showmount -e 172.25.30.120
    Export list for 172.25.30.120:
    /scratch/nfs 172.25.30.100,172.25.24.0/4
    /scratch/pvfs2 172.25.30.100,172.25.24.0/4
    Thank you, ~al

    I also tried to run snoop on the client and wireshark on the server, and the following is what I see:
    On the server, upon issuing the mount command on the client:
    # tshark -i eth1
    Running as user "root" and group "root". This could be dangerous.
    Capturing on eth1
    0.000000 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    0.205570 172.25.30.100 -> 172.25.30.120 Portmap V2 GETPORT Call MOUNT(100005) V:3 UDP
    0.205586 172.25.30.120 -> 172.25.30.100 ICMP Destination unreachable (Port unreachable)
    0.207863 172.25.30.100 -> 172.25.30.120 Portmap V2 GETPORT Call MOUNT(100005) V:3 UDP
    0.207869 172.25.30.120 -> 172.25.30.100 ICMP Destination unreachable (Port unreachable)
    2.005314 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    4.011005 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    5.206109 Dell_70:ad:29 -> SunMicro_70:ff:17 ARP Who has 172.25.30.100? Tell 172.25.30.120
    5.206277 SunMicro_70:ff:17 -> Dell_70:ad:29 ARP 172.25.30.100 is at 00:14:4f:70:ff:17
    5.216157 172.25.30.100 -> 172.25.30.120 Portmap V2 GETPORT Call MOUNT(100005) V:3 UDP
    5.216170 172.25.30.120 -> 172.25.30.100 ICMP Destination unreachable (Port unreachable)
    On the client, upon issuing the mount command:
    # snoop -d bge1
    Using device /dev/bge1 (promiscuous mode)
    ? -> * ETHER Type=9000 (Loopback), size = 60 bytes
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    atlas-pvfs2 -> pvfs2-io-0-3 PORTMAP C GETPORT prog=100005 (MOUNT) vers=3 proto=UDP
    pvfs2-io-0-3 -> atlas-pvfs2 ICMP Destination unreachable (UDP port 111 unreachable)
    atlas-pvfs2 -> pvfs2-io-0-3 PORTMAP C GETPORT prog=100005 (MOUNT) vers=3 proto=UDP
    pvfs2-io-0-3 -> atlas-pvfs2 ICMP Destination unreachable (UDP port 111 unreachable)
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> * ETHER Type=9000 (Loopback), size = 60 bytes
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    pvfs2-io-0-3 -> * ARP C Who is 172.25.30.100, atlas-pvfs2 ?
    atlas-pvfs2 -> pvfs2-io-0-3 ARP R 172.25.30.100, atlas-pvfs2 is 0:14:4f:70:ff:17
    atlas-pvfs2 -> pvfs2-io-0-3 PORTMAP C GETPORT prog=100005 (MOUNT) vers=3 proto=UDP
    pvfs2-io-0-3 -> atlas-pvfs2 ICMP Destination unreachable (UDP port 111 unreachable)
    Also, I see the following on the client:
    # rpcinfo -p pvfs2-io-0-3
    rpcinfo: can't contact portmapper: RPC: Rpcbind failure - RPC: Failed (unspecified error)
    When I try the above rpcinfo command, the client snoop and server wireshark (ethereal) outputs are as follows:
    Client # snoop -d bge1
    Using device /dev/bge1 (promiscuous mode)
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    atlas-pvfs2 -> pvfs2-io-0-3 TCP D=111 S=872 Syn Seq=2065245538 Len=0 Win=49640 Options=<mss 1460,nop,wscale 0,nop,nop,sackOK>
    pvfs2-io-0-3 -> atlas-pvfs2 ICMP Destination unreachable (TCP port 111 unreachable)
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> (multicast) ETHER Type=2004 (Unknown), size = 48 bytes
    ? -> (multicast) ETHER Type=0003 (LLC/802.3), size = 90 bytes
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> * ETHER Type=9000 (Loopback), size = 60 bytes
    pvfs2-io-0-3 -> * ARP C Who is 172.25.30.100, atlas-pvfs2 ?
    atlas-pvfs2 -> pvfs2-io-0-3 ARP R 172.25.30.100, atlas-pvfs2 is 0:14:4f:70:ff:17
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    atlas-pvfs2 -> pvfs2-io-0-3 TCP D=111 S=874 Syn Seq=2068043912 Len=0 Win=49640 Options=<mss 1460,nop,wscale 0,nop,nop,sackOK>
    pvfs2-io-0-3 -> atlas-pvfs2 ICMP Destination unreachable (TCP port 111 unreachable)
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> * ETHER Type=9000 (Loopback), size = 60 bytes
    Server # tshark -i eth1
    Running as user "root" and group "root". This could be dangerous.
    Capturing on eth1
    0.000000 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    0.313739 Cisco_3d:68:10 -> CDP/VTP/DTP/PAgP/UDLD CDP Device ID: MILEVA Port ID: GigabitEthernet1/0/16
    2.006422 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    3.483733 172.25.30.100 -> 172.25.30.120 TCP 865 > sunrpc [SYN] Seq=0 Win=49640 Len=0 MSS=1460 WS=0
    3.483752 172.25.30.120 -> 172.25.30.100 ICMP Destination unreachable (Port unreachable)
    4.009741 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    6.014524 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    6.551356 Cisco_3d:68:10 -> Cisco_3d:68:10 LOOP Reply
    8.019386 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    8.484344 Dell_70:ad:29 -> SunMicro_70:ff:17 ARP Who has 172.25.30.100? Tell 172.25.30.120
    8.484569 SunMicro_70:ff:17 -> Dell_70:ad:29 ARP 172.25.30.100 is at 00:14:4f:70:ff:17
    10.024411 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    12.030956 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    12.901333 Cisco_3d:68:10 -> CDP/VTP/DTP/PAgP/UDLD DTP Dynamic Trunking Protocol
    12.901421 Cisco_3d:68:10 -> CDP/VTP/DTP/PAgP/UDLD DTP Dynamic Trunking Protocol
    14.034193 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    15.691119 172.25.30.100 -> 172.25.30.120 TCP 866 > sunrpc [SYN] Seq=0 Win=49640 Len=0 MSS=1460 WS=0
    15.691138 172.25.30.120 -> 172.25.30.100 ICMP Destination unreachable (Port unreachable)
    16.038944 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    16.550760 Cisco_3d:68:10 -> Cisco_3d:68:10 LOOP Reply
    18.043886 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    20.050243 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    21.487689 172.25.30.100 -> 172.25.30.120 TCP 867 > sunrpc [SYN] Seq=0 Win=49640 Len=0 MSS=1460 WS=0
    21.487700 172.25.30.120 -> 172.25.30.100 ICMP Destination unreachable (Port unreachable)
    22.053784 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    24.058680 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    26.063406 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    26.558307 Cisco_3d:68:10 -> Cisco_3d:68:10 LOOP Reply
    ~thank you for any help you can provide!!!
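    The captures above do point at the likely culprit: every request to port 111 on the server is answered with ICMP port unreachable, so the server's portmapper is either not listening on that interface or is firewalled. A hedged sketch of checks on the Linux server (command names vary by distro):
    # is the portmapper actually listening on port 111?
    netstat -ln | grep ':111 '
    # is a firewall rule dropping or rejecting port 111?
    iptables -L -n | grep 111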

  • Cannot write to NFS mount with finder

    Hi folks,
    I have moved from a G4 cube running 10.4.9 to a new iMac running OS X 10.5.7 (including upgrading from the old machine).
    The NFS mounts I used to have with my G4 cube are not working properly; new mounts I've created aren't working properly either.
    I can read/open files OK, but when I try to drag & drop files onto the NFS mount using the Finder I get errors:
    "You may need to enter the name and password for an administrator on this computer to change the item named test.jpg" [ stop ] [ continue ]
    Clicking "continue" I get:
    "The item test.jpg contains one or more items you do not have permission to read. Do you want to copy the items you are allowed to read?" [ stop ] [ continue ]
    Choosing continue again results in the file appearing in the NFS directory, but with 0 size and a time stamp of 1970.
    If I try to copy the same file using the Terminal, it works fine - so it is not a simple NFS permissions problem; it is something particular to the Finder.
    I am able to create a folder inside the NFS directory by using the Finder.
    I thought at first it might be related to the .DS_Store and similar files being written, so I tried turning off that behaviour:
    defaults write com.apple.desktopservices DSDontWriteNetworkStores true
    but that hasn't fixed the problem.
    There are no obvious messages in any of the logs.
    Any suggestions or pointers on how to fix this?

    Thanks for the reply.
    Those articles appear to relate to sharing a Mac filesystem via NFS: exporting the data.
    I am referring to mounting an NFS filesystem from another server onto the Mac (Leopard) client.
    The mounting works fine: it's just the Finder which isn't behaving. The Finder worked in Tiger; it isn't working in Leopard.
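    For what it's worth, one thing that might be worth testing (an assumption, not a confirmed fix from this thread) is whether the Finder's use of NFS file locking is the trigger, by mounting with locks handled locally, using the same option names that appear elsewhere on this page:
    # hypothetical: remount with local locks so Finder lock requests never reach the server
    sudo mount_nfs -o resvport,locallocks server:/export /Volumes/export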

  • NFS mount point does not allow file creation via java.io.File

    Folks,
    I have mounted an nfs drive to iFS on a Solaris server:
    mount -F nfs nfs://server:port/ifsfolder /unixfolder
    I can mkdir and touch files, no problem. They appear in iFS as I'd expect. However, if I write to the nfs mount via a JVM using java.io.File, I encounter the following problems:
    Only directories are created, unless I include the user that started the JVM in the oinstall unix group along with the oracle user, because it's the oracle user that writes to iFS, not the user creating the files!
    I'm trying to create several files in a single directory via java.io.File, BUT only the first file is created. I've tried putting waits in the code to see if it is a timing issue, but this doesn't appear to be the case. Writing via java.io.File to either a native directory or a native nfs mount point works OK, i.e. a JUnit test against the native file system works, but not against an iFS mount point. Curiously, the same unit tests running on a PC with a Windows drive mapping to iFS work OK!! So why not via a unix NFS mapping?
    many thanks in advance.
    C

    Hi Diep,
    I have done as requested via Oracle TAR #3308936.995. As it happens, the problem is resolved. The resolution has been to not create the file via java.io.File.createNewFile() before adding content via an output stream. If the file creation is left until the content is added, as shown below, the problem is resolved.
    Another quick question: is link creation via 'ln -fs' and 'ln -f' supported against an nfs mount point to iFS (at the operating-system level, rather than adding a folder path relationship via the Java API)?
    many thanks in advance.
    // requires java.io.File, java.io.FileOutputStream, java.io.InputStream
    public void createFile(String p_absolutePath, InputStream p_inputStream) throws Exception
    {
        File file = new File(p_absolutePath);
        // Oracle TAR Number: 3308936.995
        // Uncommenting the line below reproduces the failure:
        //   java.io.IOException: Operation not supported on transport endpoint
        //     at java.io.UnixFileSystem.createFileExclusively(Native Method)
        //     at java.io.File.createNewFile(File.java:828)
        //     at com.unisys.ors.filesystemdata.OracleTARTest.createFile(OracleTARTest.java:43)
        //     at com.unisys.ors.filesystemdata.OracleTARTest.main(OracleTARTest.java:79)
        //file.createNewFile();
        // letting FileOutputStream create the file (instead of createNewFile) avoids the error
        FileOutputStream fos = new FileOutputStream(file);
        byte[] buffer = new byte[1024];
        int noOfBytesRead = 0;
        while ((noOfBytesRead = p_inputStream.read(buffer, 0, buffer.length)) != -1)
        {
            fos.write(buffer, 0, noOfBytesRead);
        }
        p_inputStream.close();
        fos.flush();
        fos.close();
    }

  • Unable to do expdp on NFS mount point in solaris Oracle db 10g

    Dear folks,
    I am facing a weird issue while doing expdp with an NFS mount point. Kindly help me on this.
    ===============
    expdp system/manager directory=exp_dumps dumpfile=u2dw.dmp schemas=u2dw
    Export: Release 10.2.0.4.0 - 64bit Production on Wednesday, 31 October, 2012 17:06:04
    Copyright (c) 2003, 2007, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    ORA-39001: invalid argument value
    ORA-39000: bad dump file specification
    ORA-31641: unable to create dump file "/backup_db/dumps/u2dw.dmp"
    ORA-27040: file create error, unable to create file
    SVR4 Error: 122: Operation not supported on transport endpoint
    I have mounted it like this:
    mount -o hard,rw,noac,rsize=32768,wsize=32768,suid,proto=tcp,vers=3 -F nfs 172.20.2.204:/exthdd /backup_db
    NFS=172.20.2.204:/exthdd

    Hi Peter,
    Thanks for your reply. Please find the below. I am able to touch files on the mount, and the export log files are also created, yet I still get the error message I showed in my previous post.
    # su - oracle
    Sun Microsystems Inc. SunOS 5.10 Generic January 2005
    You have new mail.
    oracle 201> touch /backup_db/dumps/u2dw.dmp.test
    oracle 202>

  • Autofs timeout while accessing to remote NFS mount

    Following Apple recommendations, I switched to "Directory Utility" to configure NFS mounts. As far as I understand, if you do so, the mounts are handled by automount, which is itself called by autofs. The good thing about this is that autofs unmounts unused mounts (after a timeout of 3600 seconds, as defined in /etc/autofs.conf). Any time you need the remote drive (a Finder call, ls in Terminal, opening a file), autofs remounts the resource. This is a nice behaviour... in theory.
    I'm using some code (written in IDL) that reads and writes on that remote NFS server once every 5 minutes. Theoretically, autofs should be detecting these accesses and should keep the drive mounted. This is unfortunately not the case. The drive is unmounted 3600 seconds after I last accessed the mount through the Finder or with any other application.
    There is apparently no way to remove this "automated unmounting" feature. I tried to set the timeout delay to a very large number (1 day) but it still disconnects me after this delay if I don't do anything else than running my IDL code. If I mount the NFS share with the "mount_nfs" command, it works perfectly, as it is not handled by autofs.
    I wonder then if there is any recommendation on Apple's side in such a case, other than going back to traditional mount_nfs.

    As you have discovered, automount/autofs is also an "auto-unmounter" and there is no way to remove that feature. Contrary to what one might think, the auto unmounting does NOT happen after a period of "inactivity" of the mount. This is because autofs has no way of knowing when an automounted file system was last accessed. So, instead it periodically attempts to unmount it - if it is busy it won't get unmounted - if it isn't busy it will get unmounted.
    You can't disable this - but you can make the periodic unmounting so infrequent as to effectively disable the feature. Try setting the AUTOMOUNT_TIMEOUT interval to something really large - like 315360000 (which would be 10 years).
    However, in theory, this auto-unmounting should not be a problem because if it does get unmounted then the next access to that file system should cause it to get mounted again. And all this should happen without the code that is accessing the automount knowing that it isn't always mounted. It should always be there when it is accessed. So, the usual response to someone asking how to disable the auto-unmounting is to ask why they think it is a problem.
    (Oh, and you don't have to use "mount_nfs" - just "mount" should work to manually mount an NFS file system (that saves a little typing).)
    HTH
    --macko
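    A sketch of the suggested change, assuming the setting lives in /etc/autofs.conf as mentioned in the question:
    # /etc/autofs.conf -- make the periodic unmount attempt effectively never happen (10 years)
    AUTOMOUNT_TIMEOUT=315360000
    # flush the automounter so the new interval takes effect
    sudo automount -vc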
