Strange delete behavior in Solaris 10 with NFS mounts

We are using the Apache commons-io framework to delete a directory in a Solaris 10 environment. The code works well on our dev and QA boxes, but when we load it into our production environment we get intermittent failures: some of the files in a directory are not deleted, and the subsequent attempt to delete the directory itself therefore fails.
We suspect this may be some kind of NFS problem in Solaris: deleting a file over NFS may take longer than on a local drive, so the code reaches the directory delete before the OS has actually removed the files, and the directory delete fails because files are still present.
Has anyone seen this in an NFS environment with Solaris? We are on Java 1.4.2_15 and Apache commons-io 1.3.1.

The Apache commons-io framework contains a method that deletes a directory by recursively deleting all of its files and subdirectories. Intermittently, some of the files in a subdirectory remain, and when delete is then called on the directory itself (from within the commons-io deleteDirectory method) we get an IOException. This only occurs on an NFS-mounted file system on our production system. Our dev and QA systems are also on NFS, but a different one that appears to be loaded differently, and there the behavior is consistently as expected.
It appears to be some kind of latency issue related to the way Java deletes files on NFS, but we have no conclusive evidence so far.
We have not tried this with a newer version of Java, since we are presently constrained to 1.4 :-(
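
One workaround worth trying (not from the original thread; a minimal sketch assuming the failures really are transient) is to retry the recursive delete a few times with a short pause, using only Java 1.4 and commons-io 1.3.1 APIs:

    import java.io.File;
    import java.io.IOException;
    import org.apache.commons.io.FileUtils;

    public class RetryingDelete {
        // Retry FileUtils.deleteDirectory a few times, pausing between
        // attempts, to ride out transient NFS delete latency.
        public static void deleteDirectoryWithRetry(File dir) throws IOException {
            IOException last = null;
            for (int attempt = 0; attempt < 5; attempt++) {
                try {
                    FileUtils.deleteDirectory(dir);
                    return;
                } catch (IOException e) {
                    last = e;
                }
                try {
                    Thread.sleep(500L); // give the NFS server time to settle
                } catch (InterruptedException ie) {
                    throw last; // give up early if interrupted
                }
            }
            throw last;
        }
    }

Note that if some process still holds one of the files open, the NFS client keeps a ".nfsXXXX" silly-rename file in the directory until the file is closed, and no amount of retrying will empty the directory; listing the leftover file names before retrying would distinguish that case from simple latency.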

Similar Messages

  • Permissions issue with NFS mounted MyCloud

    First off, let me say I think this is a Linux issue and not a WDMyCloud issue, but I'm not sure, so here goes... I can rsync my stuff off the Linux systems on my LAN to back them up into a WDMyCloud share, no problem. But then I can't get at some of the stuff with limited file permissions when I mount the WDMyCloud via NFS. Here's the problem. Let's say my directory tree on the WDMyCloud looks like "/shares/Stuff/L1/L2/L3", where the permissions on "L3" look like this:
        drwx------+ 3 fred share 4096 Jul 1 01:03 L3
    It's readable/writable only by "fred". I want to preserve the permissions and ownerships on everything, so that if I have to do a restore and rsync it back onto another machine, they'll go back the way they were when I backed them up. I can see the contents of "L3" if I ssh into the MyCloud, *but* I *cannot* see the contents of "L3" if I try to look at it via the NFS mount; I get "ls: L3: permission denied". If I change the permissions on it, e.g.
        drwxrwxrwx+ 3 fred share 4096 Jul 1 01:03 L3
    then I can see the contents of "L3" just fine. So via the NFS mount the MyCloud NFS server (or something?) won't give me access to it unless the permissions are wide open, even logged in as "root" on the machine where the MyCloud is NFS mounted. I tried creating "fred" as a user on the MyCloud *and* made sure the numerical UID and GIDs were the same on the Linux machine and the MyCloud: no dice. I haven't tried everything in the world yet (I haven't tried rebooting the MyCloud to see if some server hasn't picked up the changes to "/etc/passwd" or whatever; there's something called "idmapd" that I guess I should look into... etc.) But I thought maybe somebody here might have run into this or have a bright idea?

    I had some problems, and I modified the /etc/exports file to better meet my needs. This is the content of it:
        /nfs/jffs *(rw,sync,no_root_squash,no_subtree_check)
        /nfs *(rw,all_squash,sync,no_subtree_check,insecure,crossmnt,anonuid=999,anongid=1000)
    The first line/share lets you change the permissions and uid below it once it is mounted on your machine. In the second one I mapped the anonuid to the uid my user has on the WD (999), so when I mount that share everything is written with that user id. If you modify the /etc/exports file, remember to run "exportfs -a" afterwards to refresh the changes. Hope this helps with your problem.

  • Anyone else having problems with NFS mounts not showing up?

    Since Lion, I cannot see NFS shares anymore. The folder that had them is still there, but the share will not mount. It worked fine in 10.6.
    nfs://192.168.1.234/volume1/video
    /Volumes/DiskStation/video
    resvport,nolocks,locallocks,intr,soft,wsize=32768,rsize=3276
    Any ideas?
    Thanks

    Since the NFS points show up in the Terminal app, go to the local mount directory (i.e. the mount location in the NFS Mounts section of Disk Utility) and do the following:
    First Create a Link file
    sudo ln -s full_local_path link_path_name
    sudo ln -s /Volumes/linux/projects/ linuxProjects
    Next Create a new directory say in the Root of the Host Drive (i.e. Macintosh HDD)
    sudo mkdir new_link_storage_directory
    sudo mkdir /Volumes/Macintosh\ HDD/Links
    Copy the Above Link file to the new directory
    sudo  mv link_path_name new_link_storage_directory
    sudo  mv linuxProjects /Volumes/Macintosh\ HDD/Links/
    Then in Finder locate the NEW_LINK_STORAGE_DIRECTORY, and the link file should allow opening of these NFS mount points.
    Finally, after all links have been created and placed into the NEW..DIRECTORY, place it into the left sidebar. Now it works just like before.

  • Expdp fails to create .dmp files on an NFS mount point in Solaris 10, Oracle 10g

    Dear folks,
    I am facing a weird issue while doing expdp to an NFS mount point. Kindly help me with this.
    ===============
    expdp system/manager directory=exp_dumps dumpfile=u2dw.dmp schemas=u2dw
    Export: Release 10.2.0.4.0 - 64bit Production on Wednesday, 31 October, 2012 17:06:04
    Copyright (c) 2003, 2007, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    ORA-39001: invalid argument value
    ORA-39000: bad dump file specification
    ORA-31641: unable to create dump file "/backup_db/dumps/u2dw.dmp"
    ORA-27040: file create error, unable to create file
    SVR4 Error: 122: Operation not supported on transport endpoint
    I have mounted like this:
    mount -o hard,rw,noac,rsize=32768,wsize=32768,suid,proto=tcp,vers=3 -F nfs 172.20.2.204:/exthdd /backup_db
    NFS=172.20.2.204:/exthdd
    I have given read/write grants to public as well as to the specific user.

    782011 wrote:
    Hi sb92075,
    Thanks for your reply. Please find the details below. I am able to touch files on the mount while exporting, and the log files are also created, yet I still get the error message I showed in the previous post.
    # su - oracle
    Sun Microsystems Inc. SunOS 5.10 Generic January 2005
    You have new mail.
    oracle 201> touch /backup_db/dumps/u2dw.dmp.test
    oracle 202>
    I contend that Oracle is too dumb to lie & does not mis-report reality:
    27040, 00000, "file create error, unable to create file"
    // *Cause:  create system call returned an error, unable to create file
    // *Action: verify filename, and permissions

  • Finder not refreshing file or folder names on NFS mounts

    We are experiencing an issue with NFS mounts on both 10.7 and 10.8 machines. After mounting NFS shares, file/folder names do not update in other Mac Finder windows until the share is unmounted and remounted on the Macs. This DOES NOT happen on our 10.6.8 and earlier machines.

    Cross posted
    http://forum.java.sun.com/thread.jspa?threadID=768693&tstart=0

  • Solaris 10 NFS caching problems with custom NFS server

    I'm facing a very strange problem with a pure Java standalone application providing NFS server v2 service. The same application, targeted at JVM 1.4.2, runs in other environments (see below) without any problem.
    On Solaris 10 we have tried all kinds of mount parameters and system service up/down configurations, but cannot solve the problem.
    We're in big trouble because the app is a mandatory component of a product due to enter production shortly.
    Details follows
    System description
    Sunsparc U4 with SunOS 5.10, patch level: Generic_118833-33, 64bit
    List of active NFS services
    disabled   svc:/network/nfs/cbd:default
    disabled   svc:/network/nfs/mapid:default
    disabled   svc:/network/nfs/client:default
    disabled   svc:/network/nfs/server:default
    disabled   svc:/network/nfs/rquota:default
    online       svc:/network/nfs/status:default
    online       svc:/network/nfs/nlockmgr:default
    NFS mount params (from /etc/vfstab)
    localhost:/VDD_Server  - /users/vdd/mnt nfs - vers=2,proto=tcp,timeo=600,wsize=8192,rsize=8192,port=1579,noxattr,soft,intr,noac
    Anomaly description
    The server side of NFS is provided by a Java standalone application enabled only for NFS v2 and tested in different environments: MS Windows 2000, 2003, XP, Linux RedHat 10 32-bit, Linux Debian 2.6.x 64-bit, SunOS 5.9. The Java application is distributed with a test program (a Java standalone application) to validate the main installation and configuration.
    The test program simply reads a file from the NFS file system exported by our main application (called VDD) and writes the same file back with a different name on the VDD-exported file system (a sketch of such a copy loop follows the list below). At the end of the test, the written file has different contents from the one read. In-depth investigation shows the following behaviour:
    _ The read phase behaves correctly on both the server (VDD) and client (test app) sides, transporting the file with correct contents.
    _ The write phase produces a file on the VDD file system that is zero-filled for the first 90% but ends correctly with the same sequence of bytes as the original file.
    _ Detailed write-phase behaviour:
    1_ Test app writes first 512 bytes => VDD receives an NFS command with offset 0, count 512 and correct byte contents;
    2_ Test app writes next 512 bytes => VDD receives an NFS command with offset 0, count 1024 and WRONG byte contents: the first 512 bytes are zero-filled (previous write) and the last 512 bytes have the correct contents (current write).
    3_ Test app writes next 512 bytes => VDD receives an NFS command with offset 0, count 1536 and WRONG byte contents: the first 1024 bytes are zero-filled (previous writes) and the last 512 bytes have the correct contents (current write).
    4_ and so on...
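    For reference, here is a minimal Java 1.4 sketch of the kind of copy loop such a test program presumably performs (the actual test app is not shown in this thread; the paths are hypothetical). A loop like this would normally produce WRITE requests at offsets 0, 512, 1024, and so on; the offset-0, growing-count requests described above are what the Solaris 10 client apparently sends instead:

        import java.io.FileInputStream;
        import java.io.FileOutputStream;
        import java.io.IOException;

        public class VddCopyTest {
            public static void main(String[] args) throws IOException {
                // Read a file from the VDD-exported NFS mount and write it
                // back under a different name, in 512-byte chunks (matching
                // the write size reported in the anomaly description).
                FileInputStream in = new FileInputStream("/users/vdd/mnt/input.dat");
                FileOutputStream out = new FileOutputStream("/users/vdd/mnt/output.dat");
                byte[] buf = new byte[512];
                int n;
                while ((n = in.read(buf)) > 0) {
                    out.write(buf, 0, n);
                }
                out.close();
                in.close();
            }
        }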
    Further tests
    We tested our VDD application on the same Solaris 10 system but with our test application on another (Linux) machine, contacting VDD via the Linux NFS client, and we don't see the wrong behaviour: our test program performed OK and the written file has the same contents as the one read.
    Has anyone faced a similar problem?
    We are Sun ISV partner: do you think we have enough info to open a bug request to SDN?
    Any suggestions?
    Many thanks in advance,
    Maurizio.

    I finally got it working. I think my problem was that I was copying and pasting the /etc/pam.conf from Gary's guide into the pam.conf file.
    There were unseen carriage returns mucking things up. So following a combination of the two docs worked. Starting with:
    http://web.singnet.com.sg/~garyttt/Configuring%20Solaris%20Native%20LDAP%20Client%20for%20Fedora%20Directory%20Server.htm
    Then following the steps at "Authentication Option #1: LDAP PAM configuration " from this doc:
    http://docs.lucidinteractive.ca/index.php/Solaris_LDAP_client_with_OpenLDAP_server
    for the pam.conf, got things working.
    Note: ensure that your user has the shadowAccount value set in the objectClass

  • How to umount a busy NFS mount in Solaris 2.6?

    How can I umount a busy NFS mount in Solaris 2.6 without rebooting the machine? I've tried to check which processes might hold the mount with the command 'fuser -c /home/user', but it reports nothing.
    I wish I had Solaris 8 so I could use 'umount -f'...
    Can anyone help me?

    If your NFS mount is under autofs' control, you could run into this issue. Give this a try.
    # /etc/init.d/autofs stop
    # fuser -ck /home/user
    # umount /home/user

  • NFS latency when a Solaris 10 client mounts a Linux NFS server (EMC NAS)

    Hello,
    One of our developers discovered a problem that for simplicity we call "latency". We have several 5.10 clients that show exactly the same symptoms when NFS-mounting our Celerra. The NAS is running a Linux variant "2.4.9-34.5406.EMC", but before you all jump on the "it's EMC's problem" bandwagon, let me explain. We set up an automated process (Perl) that watches an exported folder for the appearance of a request file (rand.req). When the request file comes in, we rename it to (rand.sav) and then return a "report" named (rand.res). Very elegant, I thought, and it runs at near lightspeed when only Linux NFS clients mount the share and create, monitor, delete, etc. any files. In fact there is zero recorded latency between the time the report file appears and when the client detects it. But for all our Solaris 10 clients: they create the request file just fine, and the Perl process running on the Linux box sees the file instantaneously and returns the report, but it then takes the Solaris client anywhere from 5 up to 50 seconds before it sees any change in status for any files the Linux box manipulates. I've tried every possible combination of mount -o options there is, including noac, rsize and wsize variants, vers=2, proto=udp, actimeo=0, etc. Nothing is the magic bullet. nfsstat -c shows nothing out of the ordinary. There are no retransmits or dropped packets anywhere in between, no firewall loads, no connectivity delays whatsoever. I'm completely out of ideas. Any ideas or clues would be greatly appreciated!
    thanks
    Dave

    No specific recommendations. But maybe you can watch the cable and get more information.
    Set up a case where the file has been created, then have the client check and snoop the cable at the same time. Does the client actually issue a directory check (or is it just displaying cached information)? Does the response contain the new file?
    Something to test anyway...
    Darren
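
    To put a number on the delay before snooping, a minimal Java 1.4 poll loop along these lines could be run on the Solaris client (not from the original thread; the file path is hypothetical). It records how long the client takes to notice a file created on the server side:

        import java.io.File;

        public class WatchDelay {
            public static void main(String[] args) throws InterruptedException {
                // Poll for the report file; File.exists() goes through the
                // NFS client's attribute/directory caches, so this measures
                // exactly the staleness the Linux-side Perl watcher never sees.
                File report = new File("/mnt/celerra/rand.res");
                long start = System.currentTimeMillis();
                while (!report.exists()) {
                    Thread.sleep(100L);
                }
                long elapsed = System.currentTimeMillis() - start;
                System.out.println("rand.res visible after " + elapsed + " ms");
            }
        }

    Running this while snooping the wire, as suggested above, would show whether the client even issues GETATTR/READDIR calls during the stale window or is just serving cached data.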

  • NFS mount error messages on Solaris 8; is there a patch?

    We recently purchased an EMC Celerra NS80 to serve as a front end to our Centerra archive solution. On a Solaris 8 box I've been seeing a large number of NFS errors in the /var/adm/messages file relating directly to the datamover on the NS80. I've tried everything I can think of from the array side of the house, to no avail. EMC states there is a fix for this problem, patch 113318-12, but according to the patch notes this is for an NFS problem with Solaris 9.
    The error is causing sporadic performance issues when our end users attempt to pull up the data that resides on the Celerra NFS mount, so I would really like to get it resolved. The specific error message is:
    NFS server DATAMOVER1 not responding still trying
    NFS server DATAMOVER1 ok
    The "not responding" and "ok" messages recur at intervals ranging from every few seconds to a 3-5 minute delay. Our database vendor insists something on the NS80 is introducing this error, but I cannot find anything to change to clear this up.
    Any input would be greatly appreciated.
    Thanks.

    SunOS www02.unix 5.10 Generic_127128-11 i86pc i386 i86pc
    10:58am up 22:51, 4 users, load average: 2.16, 2.26, 2.26
    /export/www
    cache hit rate: 99% (111069900 hits, 6674 misses)
    consistency checks: 459000 (458945 pass, 55 fail)
    modifies: 0
    garbage collection: 0
    /export/zero
    cache hit rate: 94% (2089349 hits, 110629 misses)
    consistency checks: 1497690 (1497075 pass, 615 fail)
    modifies: 0
    garbage collection: 0
    /export/saba
    cache hit rate: 97% (7677577 hits, 174056 misses)
    consistency checks: 10809059 (10801491 pass, 7568 fail)
    modifies: 0
    garbage collection: 0
    So 1 day uptime, that is much better. We rebooted it for a mirroring setup, and found that cachefs needs an fsck before it comes up. Can I not simply have it start afresh rather than attempt to keep the cache directory? (Obviously I can, but I mean in a boot-friendly manner.)

  • Strange window behavior with CS5

    I have begun experiencing strange behavior with PS CS5. I should begin with the machine configuration: 2.66GHz iMac (8,1) Intel Core Duo with 4GB RAM and OSX 10.6.6, Radeon HD 2600 Pro graphics card. The initial symptom showed up suddenly as an inability to display an image window; that is, I could open PS normally, everything would show up properly, but when I opened an image file of any sort, there would be no image window. Other cues indicated that the file was open; for example, the Window menu would show it selected, and the Layers palette would show the file's layers.
    It did not seem to matter what type of file was involved: PSD, JPG, TIF, whatever. Occasionally, this behavior would begin during a session. Sometimes it would happen right off the bat. Sometimes, stopping and restarting PS seemed to clean things up; other times, this had no effect. After messing around quite a bit with different permutations and combinations, all I could say was that the problem was random.
    I might add that I've had PS CS5 on this machine pretty much since it was announced, and had not had problems with it before. So this issue came along quite suddenly a couple of weeks ago, a little after I'd installed an update for Nik's Viveza 2, and I began to suspect that update, although why Viveza should do such a strange thing was a puzzle, and Viveza was never part of the files I was experiencing issues with. Anyway, I went back to an earlier version of Viveza and the problem persisted.
    Since I run Time Machine, I also went back to versions of the entire PS folder from a month back, and found the same problem. Along the way, I experimented with the files I was using with PS CS4 & CS3, which I also have installed on this machine. These two versions of PS run perfectly well. At least the CS4 version has precisely the same plugins as the CS5 folder, and this seems to discount the potential problem with plugins.
    To be sure about CS5, I completely deactivated, deinstalled, and reinstalled it. The same issues occurred. Now, I have to admit that I let PS run its update process so that I brought it back to the most recent version in this step of my troubleshooting. So, I can't be certain that the issue isn't with some update of CS5.
    I might add that I have been running CS5 in 32-bit mode, for compatibility with certain plugins. This may or may not be an issue, since I have limited experience with running in 64-bit mode. At this point, I should add a bit more about the confusing behavior.
    While messing around with CS5, I also suspected the GPU support, since my problem had to do with windows. When the problem presents itself, the usual cycle through window views that you get by pressing the "F" key seems to get messed up. In particular, the "F" key doesn't work at all. Next, if you use the View menu to select different views, the full screen with menu bar view will often have garbled pixels, while the full screen view will show a blank grey screen. Then, cycling back to a window view will show a window with blank grey contents. This is interesting since the starting point was no window at all; but going into full screen and then back will produce a window with bogus contents.
    Now, the next strange thing, having to do with the GPU, is that CS5 often fails to recognize the Radeon HD 2600 card. Oddly enough, it did originally. And I had it running in Basic mode. However, CS4 does recognize the GPU; and I can set it in any variety of modes: Basic, Normal, and Advanced. This behavior is not consistent at all either; that is, while CS5 almost always now fails to detect the GPU, sometimes it does. As near as I can tell, this failure to detect the card does not depend upon whether I start CS5 in 32-bit or 64-bit mode. OTOH, CS4 always recognizes the GPU.
    Another form of this odd behavior arises if I start CS5 cold, and then just open a new blank window. This almost always works. I get a blank white window, and I can paint on it with a brush, etc. However, if I go ahead at this point and open a file, then the new file takes over the window; and there are no tabs. If I select the original new file, usually labelled "Untitled", the window bar will show the change of filename but the window's contents and the layers palette will continue to show the old file that I had opened. Cycling through views can sometimes get back to the new Untitled file, but just selecting file windows never works.
    Yet another aspect of this messed up window behavior is that the workspace bar, which shows the Bridge, MiniBridge and view icons on the upper left and the various workspace options and CS Live on the upper right, is present but blank. OTOH, the tool bar is always displayed correctly, as is the horizontal bar that shows the various tool options. Likewise, the palettes for the workspace such as layers, swatches, adjustments, and so on, are always correctly displayed on the left.
    My next steps in trying to figure this out will be to ensure that I am running the latest of all plugins so that I can get everything running in 64-bit mode and try this again. However, the single factor that seems consistent in this whole mess is that whenever I do find this odd behavior, the performance preferences window will indicate that CS5 is not detecting the GPU. Whenever this happens, if I open up CS4 and check the same performance preference in it, the GPU is detected. In fact, I can have the two preference windows side by side on the screen: CS4 sees the GPU and CS5 does not. The counter-indication is that sometimes CS5 will work properly while the preferences window still shows that the GPU has not been detected. Having said that, I can add that if I have opened up CS5, and it has found the GPU, and I have set it into, say, Basic mode, and then later found CS5 messing up, and I go back and check the performance preferences again, the GPU has always been lost. So, CS5 can mess up windows with or without the GPU detected and active; but if it started with the GPU present, the GPU will show up as lost after the weird behavior starts.
    To add one more observation, I have had Macs and other computers with failing GPUs. This computer is showing none of the typical signs of that. Every other program seems to be working just fine, and I have a lot of these, many graphics intensive; e.g., Corel Painter 11, Parallels v6 running Windows 7, PS CS4, Lightroom v3, Aperture v3, and so on. It is just CS5 that is messing up.
    I could load up this message with screen shots of all this nonsense, but I'm not sure what that would add to the strange tale, other than I'm not pulling your legs. I am, however, pulling out my own hair.
    As I stated before, this began quite suddenly a couple of weeks ago, early Feb 2011. My best guess as to cause is some combination of incompatible updates between CS5, OSX, and maybe some plugin or other. I found unusual behavior with onOne's FocalPoint v2 last year after Apple updated GPU drivers in some OSX update, and cratered that program. It took onOne a month or two to fix that. I have CS5 on a laptop with the same OSX version, and on that I haven't seen these issues; but then, I haven't been using it as intensely over there. Also, that machine has different CPU and GPU models than this iMac.
    So, to the community: is anybody else starting to see this odd stuff? Am I alone?

    Here's the reason I say that the same plugins exist for both CS4 & CS5. Sometime last year, the internal hard drive on this computer died (as they so often do), and it was replaced under the AppleCare warranty. As a consequence, I decided to rebuild the OS and applications from scratch and bring back my user files from backup. Among other things, I reinstalled all of CS4 as well as CS5 (and I had to put CS3 back too, for reasons having to do with printing calibration images for Quad Tone RIP). Then I went to reinstall the plugins that I needed. I use Nik Software, onOne, Portrait Professional and various Topaz plugins. To my recollection, most of these install in PS CS3, CS4, CS5, Aperture, and Lightroom. In short, it is not that I've copied or moved plugin folders from one application folder to another; the installers automatically detect the presence of the various applications and put the plugins into each, as appropriate.
    The only 32-bit plugins that I have that don't fit into this category are from Vincent Versace. He provides special versions of certain Nik plugins (Tonal Contrast, Contrast Only, etc) that function with some Acme Educational extensions for performing B&W conversion, sharpening, blah blah blah. These plugins are only 32-bit and the extensions are only CS5 compatible. I've installed these ones only into CS5; so, you've got me. OTOH, I've had these from Vincent as soon as CS5 was available; and they've worked so far without problems. Perhaps I should just edit the extensions so I could use the latest plugins direct from Nik instead of the Versace special editions...
    I've checked fonts and repaired permissions. I am presently going through the rather tedious process of disabling plugins and extensions to see if anything yields consistent results. The rather random arrival of the strangeness makes ensuring that a change is really having an effect more difficult to validate, as you will appreciate.
    I apologize for my long-winded original post. I add all of the detail so that anyone who'd experienced similar strange behaviors might properly pattern-match what they were seeing. On the one hand, in finding the cause of a problem, you look for what's changed; and theoretically nothing is new. On the other hand, there's always so much changing behind the scenes with automatic updates of this, that, and the other thing that it's impossible to make the claim that nothing's new. 

  • NFS mount created with NetInfo not shown by Directory Utility in Leopard

    On Tiger I used to mount a few directories dynamically using NFS.
    To do so, I used NetInfo.
    I have upgraded to Leopard, and the mounted directories
    are still working, although NetInfo is not present anymore.
    I was expecting to see these mount points and
    modify them using Directory Utility, which has replaced NetInfo.
    But they are not even shown in the Mounts panel of Directory Utility.
    Is there a way to see and modify NFS mount points previously
    created by NetInfo with the new Directory Utility?

    Thank you very much! I was able to recreate the static automount that I had previously had. I just had to create the "mounts" directory in /var/db/dslocal/nodes/Default/ and then I saved the following text as a .plist file within "mounts".
    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
    <key>dir</key>
    <array>
    <string>/Network/Backups</string>
    </array>
    <key>generateduid</key>
    <array>
    <string>0000000-0000-0000-0000-000000000000</string>
    </array>
    <key>name</key>
    <array>
    <string>server:/Backups</string>
    </array>
    <key>opts</key>
    <array>
    <string>url==afp://;AUTH=NO%20USER%[email protected]/Backups</string>
    </array>
    <key>vfstype</key>
    <array>
    <string>url</string>
    </array>
    </dict>
    </plist>
    I don't think the specific name of the .plist file matters, or the value for "generateduid". I'm listing all this info assuming that someone out there might care.
    I assume this would work for SMB shares also... if SMB worked, which it hasn't on my system since I installed leopard.

  • Strange Permissions Behavior with Public/Private Drop Box

    Strange Permissions Behavior with Non-Course Drop Box
    In an effort to promote iTunes U on campus this semester (and to get people working with audio and video more) we're having a contest in which people can submit personal or group audio/video projects.
    This being an iTunes promo, we intend for students to submit their contributions via a drop box.
    To that end, I began experimenting with drop boxes in iTunes U, which I haven't done much of previously. I've created a course called "iTunes U Drop Box Test" under "Campus Events". Within that, I have two tabs: "Featured Submissions" and "Dropbox". My goal with this drop box was to allow faculty, students and college folks the ability to use the drop box ("college" being a role I've defined for those who don't fit into the faculty/student roles).
    When I first started experimenting, access to the "iTunes U Drop Box Test" course looked like this:
    --- Credentials (System) ---
    Edit: Administrator@urn:mace:itunesu.com:sites:lafayette.edu
    Download: Authenticated@urn:mace:itunesu.com:sites:lafayette.edu
    Download: Unauthenticated@urn:mace:itunesu.com:sites:lafayette.edu
    Download: All@urn:mace:itunesu.com:sites:lafayette.edu
    --- Credentials ----
    Download: College@urn:mace:lafayette.edu
    Download: Instructor@urn:mace:lafayette.edu
    Download: Instructor@urn:mace:lafayette.edu:classes:${IDENTIFIER}
    Download: Student@urn:mace:lafayette.edu
    Download: Student@urn:mace:lafayette.edu:classes:${IDENTIFIER}
    For the "Featured" Submissions tab, I gave the non-system credentials the "download" right, and for the "Dropbox" tab I gave the non-system credentials the "dropbox" right.
    My understanding of this setup is that everyone should have had the ability to view the course and the contents of the "Featured Submissions" tab and that those in the College/Instructor/Student roles would be able to upload files via the "Dropbox" tab ... but not see the contents of said tab after the files were uploaded (aside from any files they uploaded themselves).
    This is not the behavior we saw however. While the College/Instructor/Student roles could upload files to the dropbox, everyone (including the unauthenticated public) was able to see all of the contents of the dropbox.
    The only way I could get this to work as advertised was to change all of the system credentials save the "Administrator" to "No Access":
    --- Credentials (System) ---
    Edit: Administrator@urn:mace:itunesu.com:sites:lafayette.edu
    No Access: Authenticated@urn:mace:itunesu.com:sites:lafayette.edu
    No Access: Unauthenticated@urn:mace:itunesu.com:sites:lafayette.edu
    No Access: All@urn:mace:itunesu.com:sites:lafayette.edu
    Once I did this, everything worked as advertised: College/Instructor/Student roles could upload tracks, and the "Dropbox" tab would only display tracks they uploaded.
    So my question is ... is this the correct behavior for the drop box? It looks like when the system credentials are in play, they're simply overriding whatever the normal "view" rule is for the drop box, which doesn't seem right.

    Your current configuration where things work as you wanted does seem correct to me. You are not using any System Credentials to accomplish the functionality and that's fine.
    Here's some more info to clarify how / why this is working for you and why you had to set "No Access" for the System Credentials:
    The System Credential "Authenticated@..." is going to get assigned to any user that goes through your transfer script. Even if your transfer script assigns no credentials to a user, upon entering iTunes U they will have at least one: the "Authenticated@..." credential. Therefore, unless you block access using "No Access", any user that passes through your transfer script is going to be able to access the area in question.
    When you change values for "Unauthenticated@..." or "All@..." you are defining what someone that DOES NOT pass through your transfer script can do. You want both of those to be "No Access" at the top level of your site if you do not want unauthenticated visitors.
    The distinction between "Unauthenticated" and "All" is that "All" applies to all users whether they pass through the transfer script or not.
    Here's another way to remember things:
    User passes through your transfer script, iTunes U automatically assigns:
    Authenticated@....
    All@....
    User does not pass through your transfer script and instead accesses your iTunes U site through the derivable URL*, they get assigned:
    Unauthenticated@....
    All@....
    *The derivable URL for a site is: http://deimos.apple.com/WebObjects/Core.woa/Browse/site-domain-name

  • NFS4: Problem mounting NFS mount onto a Solaris 10 Client

    Hi,
    I am having problems mounting an NFS mount point from a Linux server onto a Solaris 10 client.
    In the following
    =My server IP ..*.120
    =Client IP ..*.100
    Commands run on Client:
    ==================
    # mount -o vers=3 -F nfs 172.25.30.120:/scratch/pvfs2 /scratch/pvfs2
    nfs mount: 172.25.30.120: : RPC: Rpcbind failure - RPC: Unable to receive
    nfs mount: retrying: /scratch/pvfs2
    nfs mount: 172.25.30.120: : RPC: Rpcbind failure - RPC: Unable to receive
    nfs mount: 172.25.30.120: : RPC: Rpcbind failure - RPC: Unable to receive
    # mount -o vers=4 -F nfs 172.25.30.120:/scratch/pvfs2 /scratch/pvfs2
    nfs mount: 172.25.30.120:/scratch/pvfs2: No such file or directory
    # rpcinfo -p
    program vers proto port service
    100000 4 tcp 111 rpcbind
    100000 3 tcp 111 rpcbind
    100000 2 tcp 111 rpcbind
    100000 4 udp 111 rpcbind
    100000 3 udp 111 rpcbind
    100000 2 udp 111 rpcbind
    1073741824 1 tcp 36084
    100024 1 udp 42835 status
    100024 1 tcp 36086 status
    100133 1 udp 42835
    100133 1 tcp 36086
    100001 2 udp 42836 rstatd
    100001 3 udp 42836 rstatd
    100001 4 udp 42836 rstatd
    100002 2 tcp 36087 rusersd
    100002 3 tcp 36087 rusersd
    100002 2 udp 42838 rusersd
    100002 3 udp 42838 rusersd
    100011 1 udp 42840 rquotad
    100021 1 udp 4045 nlockmgr
    100021 2 udp 4045 nlockmgr
    100021 3 udp 4045 nlockmgr
    100021 4 udp 4045 nlockmgr
    100021 1 tcp 4045 nlockmgr
    100021 2 tcp 4045 nlockmgr
    100021 3 tcp 4045 nlockmgr
    100021 4 tcp 4045 nlockmgr
    # showmount -e 172.25.30.120 (Server)
    showmount: 172.25.30.120: RPC: Rpcbind failure - RPC: Unable to receive
    Commands on Server:
    ================
    program vers proto port
    100000 2 tcp 111 portmapper
    100000 2 udp 111 portmapper
    100021 1 tcp 49927 nlockmgr
    100021 3 tcp 49927 nlockmgr
    100021 4 tcp 49927 nlockmgr
    100021 1 udp 32772 nlockmgr
    100021 3 udp 32772 nlockmgr
    100021 4 udp 32772 nlockmgr
    100011 1 udp 796 rquotad
    100011 2 udp 796 rquotad
    100011 1 tcp 799 rquotad
    100011 2 tcp 799 rquotad
    100003 2 udp 2049 nfs
    100003 3 udp 2049 nfs
    100003 4 udp 2049 nfs
    100003 2 tcp 2049 nfs
    100003 3 tcp 2049 nfs
    100003 4 tcp 2049 nfs
    100005 1 udp 809 mountd
    100005 1 tcp 812 mountd
    100005 2 udp 809 mountd
    100005 2 tcp 812 mountd
    100005 3 udp 809 mountd
    100005 3 tcp 812 mountd
    100024 1 udp 854 status
    100024 1 tcp 857 status
    # showmount -e 172.25.30.120
    Export list for 172.25.30.120:
    /scratch/nfs 172.25.30.100,172.25.24.0/4
    /scratch/pvfs2 172.25.30.100,172.25.24.0/4
    Thank you, ~al

    I also tried running snoop on the client and Wireshark on the server, and the following is what I see.
    On the server, upon issuing the mount command on the client:
    # tshark -i eth1
    Running as user "root" and group "root". This could be dangerous.
    Capturing on eth1
    0.000000 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    0.205570 172.25.30.100 -> 172.25.30.120 Portmap V2 GETPORT Call MOUNT(100005) V:3 UDP
    0.205586 172.25.30.120 -> 172.25.30.100 ICMP Destination unreachable (Port unreachable)
    0.207863 172.25.30.100 -> 172.25.30.120 Portmap V2 GETPORT Call MOUNT(100005) V:3 UDP
    0.207869 172.25.30.120 -> 172.25.30.100 ICMP Destination unreachable (Port unreachable)
    2.005314 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    4.011005 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    5.206109 Dell_70:ad:29 -> SunMicro_70:ff:17 ARP Who has 172.25.30.100? Tell 172.25.30.120
    5.206277 SunMicro_70:ff:17 -> Dell_70:ad:29 ARP 172.25.30.100 is at 00:14:4f:70:ff:17
    5.216157 172.25.30.100 -> 172.25.30.120 Portmap V2 GETPORT Call MOUNT(100005) V:3 UDP
    5.216170 172.25.30.120 -> 172.25.30.100 ICMP Destination unreachable (Port unreachable)
    On the client, upon issuing the mount command:
    # snoop -d bge1
    Using device /dev/bge1 (promiscuous mode)
    ? -> * ETHER Type=9000 (Loopback), size = 60 bytes
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    atlas-pvfs2 -> pvfs2-io-0-3 PORTMAP C GETPORT prog=100005 (MOUNT) vers=3 proto=UDP
    pvfs2-io-0-3 -> atlas-pvfs2 ICMP Destination unreachable (UDP port 111 unreachable)
    atlas-pvfs2 -> pvfs2-io-0-3 PORTMAP C GETPORT prog=100005 (MOUNT) vers=3 proto=UDP
    pvfs2-io-0-3 -> atlas-pvfs2 ICMP Destination unreachable (UDP port 111 unreachable)
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> * ETHER Type=9000 (Loopback), size = 60 bytes
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    pvfs2-io-0-3 -> * ARP C Who is 172.25.30.100, atlas-pvfs2 ?
    atlas-pvfs2 -> pvfs2-io-0-3 ARP R 172.25.30.100, atlas-pvfs2 is 0:14:4f:70:ff:17
    atlas-pvfs2 -> pvfs2-io-0-3 PORTMAP C GETPORT prog=100005 (MOUNT) vers=3 proto=UDP
    pvfs2-io-0-3 -> atlas-pvfs2 ICMP Destination unreachable (UDP port 111 unreachable)
    I also see the following on the client:
    # rpcinfo -p pvfs2-io-0-3
    rpcinfo: can't contact portmapper: RPC: Rpcbind failure - RPC: Failed (unspecified error)
    When I try the above rpcinfo command, the client snoop and server Wireshark (Ethereal) outputs are as follows:
    Client # snoop -d bge1
    Using device /dev/bge1 (promiscuous mode)
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    atlas-pvfs2 -> pvfs2-io-0-3 TCP D=111 S=872 Syn Seq=2065245538 Len=0 Win=49640 Options=<mss 1460,nop,wscale 0,nop,nop,sackOK>
    pvfs2-io-0-3 -> atlas-pvfs2 ICMP Destination unreachable (TCP port 111 unreachable)
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> (multicast) ETHER Type=2004 (Unknown), size = 48 bytes
    ? -> (multicast) ETHER Type=0003 (LLC/802.3), size = 90 bytes
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> * ETHER Type=9000 (Loopback), size = 60 bytes
    pvfs2-io-0-3 -> * ARP C Who is 172.25.30.100, atlas-pvfs2 ?
    atlas-pvfs2 -> pvfs2-io-0-3 ARP R 172.25.30.100, atlas-pvfs2 is 0:14:4f:70:ff:17
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    atlas-pvfs2 -> pvfs2-io-0-3 TCP D=111 S=874 Syn Seq=2068043912 Len=0 Win=49640 Options=<mss 1460,nop,wscale 0,nop,nop,sackOK>
    pvfs2-io-0-3 -> atlas-pvfs2 ICMP Destination unreachable (TCP port 111 unreachable)
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> * ETHER Type=9000 (Loopback), size = 60 bytes
    Server # tshark -i eth1
    Running as user "root" and group "root". This could be dangerous.
    Capturing on eth1
    0.000000 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    0.313739 Cisco_3d:68:10 -> CDP/VTP/DTP/PAgP/UDLD CDP Device ID: MILEVA Port ID: GigabitEthernet1/0/16
    2.006422 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    3.483733 172.25.30.100 -> 172.25.30.120 TCP 865 > sunrpc [SYN] Seq=0 Win=49640 Len=0 MSS=1460 WS=0
    3.483752 172.25.30.120 -> 172.25.30.100 ICMP Destination unreachable (Port unreachable)
    4.009741 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    6.014524 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    6.551356 Cisco_3d:68:10 -> Cisco_3d:68:10 LOOP Reply
    8.019386 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    8.484344 Dell_70:ad:29 -> SunMicro_70:ff:17 ARP Who has 172.25.30.100? Tell 172.25.30.120
    8.484569 SunMicro_70:ff:17 -> Dell_70:ad:29 ARP 172.25.30.100 is at 00:14:4f:70:ff:17
    10.024411 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    12.030956 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    12.901333 Cisco_3d:68:10 -> CDP/VTP/DTP/PAgP/UDLD DTP Dynamic Trunking Protocol
    12.901421 Cisco_3d:68:10 -> CDP/VTP/DTP/PAgP/UDLD DTP Dynamic Trunking Protocol
    14.034193 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    15.691119 172.25.30.100 -> 172.25.30.120 TCP 866 > sunrpc [SYN] Seq=0 Win=49640 Len=0 MSS=1460 WS=0
    15.691138 172.25.30.120 -> 172.25.30.100 ICMP Destination unreachable (Port unreachable)
    16.038944 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    16.550760 Cisco_3d:68:10 -> Cisco_3d:68:10 LOOP Reply
    18.043886 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    20.050243 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    21.487689 172.25.30.100 -> 172.25.30.120 TCP 867 > sunrpc [SYN] Seq=0 Win=49640 Len=0 MSS=1460 WS=0
    21.487700 172.25.30.120 -> 172.25.30.100 ICMP Destination unreachable (Port unreachable)
    22.053784 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    24.058680 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    26.063406 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    26.558307 Cisco_3d:68:10 -> Cisco_3d:68:10 LOOP Reply
    Thank you for any help you can provide!!!

  • Cannot write to NFS mount with finder

    hi folks
    i have moved from a G4 cube running 10.4.9 to a new iMac running OS X 10.5.7 (including upgrading from the old machine).
    the NFS mounts i used to have with my G4 cube are not working properly; new mounts i've created aren't working properly either.
    i can read/open files OK, but when i try to drag & drop files onto the NFS mount using the Finder i get errors:
    "You may need to enter the name and password for an administrator on this computer to change the item named test.jpg" [ stop ] [ continue ]
    clicking "continue" i get:
    "The item test.jpg contains one or more items you do not have permission to read. Do you want to copy the items you are allowed to read? [ stop ] [ continue ]
    Choosing continue again results in the file appearing in the NFS directory, but with 0 size, and a time stamp of 1970.
    if i try to copy the same file using the Terminal, it works fine - so it is not a simple NFS permissions problem - it is something particular to the Finder.
    i am able to create a folder inside the NFS directory by using the Finder.
    i thought at first it might be related to the .DS_Store and similar files being written, so i tried turning off that behaviour:
    defaults write com.apple.desktopservices DSDontWriteNetworkStores true
    but that hasn't fixed the problem
    there are no obvious messages in any of the logs
    any suggestions or pointers on how to fix this?

    thanks for the reply
    these articles appear to relate to sharing a mac filesystem via NFS: exporting the data.
    i am referring to mounting an NFS filesystem from another server onto the mac (leopard) client.
    the mounting works fine: it's just the finder which isn't behaving. the finder worked in tiger; it doesn't in leopard.
