Autofs timeout while accessing a remote NFS mount

Following Apple's recommendations, I switched to "Directory Utility" to configure NFS mounts. As far as I understand, if you do so, the mounts are handled by automount, which is itself called by autofs. The good thing about this is that autofs unmounts unused mounts (after a timeout of 3600 seconds, as defined in /etc/autofs.conf). Any time you need the remote drive (a Finder call, ls in Terminal, opening a file...), autofs remounts the resource. This is a nice behaviour... in theory.
I'm running some code (written in IDL) that reads from and writes to that remote NFS server once every 5 minutes. In theory, autofs should detect these accesses and keep the drive mounted. Unfortunately, this is not the case: the drive is unmounted 3600 seconds after I last accessed the mount through the Finder or any other application.
There is apparently no way to remove this "automated unmounting" feature. I tried setting the timeout to a very large value (1 day), but it still disconnects me after that delay if I do nothing other than run my IDL code. If I mount the NFS share with the "mount_nfs" command, it works perfectly, as it is not handled by autofs.
I wonder, then, if there is any recommendation on Apple's side for such a case, other than going back to a traditional mount_nfs.

As you have discovered, automount/autofs is also an "auto-unmounter", and there is no way to remove that feature. Contrary to what one might think, the auto-unmounting does NOT happen after a period of "inactivity" of the mount, because autofs has no way of knowing when an automounted file system was last accessed. Instead, it periodically attempts to unmount it: if the file system is busy, the unmount fails; if it isn't busy, it gets unmounted.
You can't disable this, but you can make the periodic unmounting so infrequent as to effectively disable the feature. Try setting the AUTOMOUNT_TIMEOUT interval to something really large, like 315360000 (which would be 10 years).
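For example, a minimal sketch of the change (assuming the stock /etc/autofs.conf layout, and that your release's automount accepts the -v and -c flags from its man page to re-read the configuration without a reboot):

    # /etc/autofs.conf
    AUTOMOUNT_TIMEOUT=315360000   # seconds; roughly 10 years

    # then flush the automounter's cache so it picks up the change
    sudo automount -vc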
However, in theory, this auto-unmounting should not be a problem: if the file system does get unmounted, the next access to it should cause it to be mounted again. And all this should happen without the code that is accessing the automount ever knowing that it isn't always mounted; it should always be there when it is accessed. So the usual response to someone asking how to disable the auto-unmounting is to ask why they think it is a problem.
(Oh, and you don't have to use "mount_nfs"; just "mount" should work to manually mount an NFS file system, which saves a little typing.)
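For example (the server name and paths here are placeholders):

    sudo mkdir -p /Volumes/data
    sudo mount -t nfs myserver:/export/data /Volumes/data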
HTH
--macko

Similar Messages

  • All ports timeout, cannot access system remotely

    I have a few computers in the office. My main tower is meant to be a DMZ. I usually have ports open for remote access, screen sharing, file sharing, HTTP and FTP access.
    Something has happened that has caused all external access to this box to fail, including screen sharing from behind the router. I have removed the router (AEBS) and there is still no outside access. I temporarily changed my NAT setting to another box within my network ("mac mini") and external access was available via my WAN IP.
    That said, it seems to be an issue with permissions on my main tower.
    I have tried Preferences > Security to turn the firewall on and off (currently off).
    I have run Disk Utility to try to fix permissions that way.
    I have flushed every cache I could find.
    I have flushed all my preferences.
    Still no joy.
    Any help with this situation would be greatly appreciated.
    Quad core
    10.6.7
    Firewall Settings:
      Mode:    Allow all incoming connections
      Firewall Logging:    No
      Stealth Mode:    No

    BUMP
    Anyone have any clue where I should look for a solution?

  • Security Exception while accessing a remote context

    Hi All,
    I am looking up a remote WebLogic context from within another WebLogic instance.
    "url" is t3://anothermachine:7001
    The moment I do a lookup() I get an authentication error.
    I guess I am using the "guest" user by default to look up the context on
    the remote machine.
    Why does the trace complain about user "system"?
    Any idea as to what is going wrong here?
    <code>
    import java.util.Properties;
    import javax.naming.Context;
    import javax.naming.InitialContext;

    // connect to the remote WebLogic JNDI tree and look up the bean
    Properties p = new Properties();
    p.put( Context.INITIAL_CONTEXT_FACTORY,
           "weblogic.jndi.WLInitialContextFactory" );
    p.put( Context.PROVIDER_URL, url );
    Context ctx = new InitialContext(p);
    ctx.lookup( "RemoteBean" );
    </code>
    This is the trace:
    Authentication for user system denied in realm wl_realm
    Start server side stack trace:
    java.lang.SecurityException: Authentication for user system denied in realm wl_realm
        at weblogic.security.acl.Realm.getAuthenticatedName(Realm.java:233)
        at weblogic.security.acl.internal.Security.authenticate(Security.java:135)
        at weblogic.security.acl.internal.Security.verify(Security.java:90)
        at weblogic.rmi.internal.BasicServerRef.handleRequest(BasicServerRef.java:242)
        at weblogic.rmi.internal.BasicExecuteRequest.execute(BasicExecuteRequest.java:22)
        at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:139)
        at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:120)
    End server side stack trace
        at weblogic.rmi.internal.BasicOutboundRequest.sendReceive(BasicOutboundRequest.java:85)
        at weblogic.rmi.cluster.ReplicaAwareRemoteRef.invoke(ReplicaAwareRemoteRef.java:262)
        at weblogic.rmi.cluster.ReplicaAwareRemoteRef.invoke(ReplicaAwareRemoteRef.java:229)
        at weblogic.rmi.internal.ProxyStub.invoke(ProxyStub.java:35)
        at $Proxy51.lookup(Unknown Source)
        at weblogic.jndi.internal.WLContextImpl.lookup(WLContextImpl.java:341)
        at javax.naming.InitialContext.lookup(InitialContext.java:347)
        at ...
    thanks,
    karthik.

    Shailesh,
    try adding this to your fileRealm.properties:
    acl.lookup.weblogic.jndi.path=system,everyone
    I assume that you are logging in as guest, i.e. not supplying any user to obtain the
    context, and "guest" does belong to the group "everyone".
    This obviously needs to be an entry in the remote WebLogic's fileRealm.properties
    file.
    Let me know if that works.
    karthik.
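    Alternatively, if you would rather authenticate as a specific user instead of relying on the guest ACL, something along these lines should also work (the username and password here are placeholders, not values from this thread):
    <code>
    import java.util.Properties;
    import javax.naming.Context;
    import javax.naming.InitialContext;

    Properties p = new Properties();
    p.put( Context.INITIAL_CONTEXT_FACTORY,
           "weblogic.jndi.WLInitialContextFactory" );
    p.put( Context.PROVIDER_URL, url );
    // authenticate explicitly instead of defaulting to "guest"
    p.put( Context.SECURITY_PRINCIPAL, "someuser" );        // placeholder
    p.put( Context.SECURITY_CREDENTIALS, "somepassword" );  // placeholder
    Context ctx = new InitialContext(p);
    ctx.lookup( "RemoteBean" );
    </code>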
    shailesh wrote:
    I am also having the same problem right now. Any ideas or solutions would be highly
    appreciated.
    thx, shailesh
    karthik <[email protected]> wrote:
    [karthik's original message, quoted in full above, snipped]

  • Can you re-export an NFS mount as an NFS share?

    If so, what is the downside?
    I'm asking because we currently have an iSCSI SAN, and a recent upgrade
    severely degraded iSCSI connectivity. Consequently, I can't mount my iSCSI
    volumes.
    Thanks,
    db

    Originally Posted by David Brown
    The filer/SAN NFS functionality is working normally. I can't access
    some of the iSCSI LUNs. Thinking of just using NFS as the backend.
    Which would be a better subforum?
    Thank you,
    db
    It depends on which Novell OS you are running... this subforum is for NetWare, but I suspect you are using OES Linux.
    I've never tried creating an NCP share on OES for a remote NFS mount on the server. My first guess would be that it is not allowed, and also not a good practice. You could, however, in this situation and if you are running an OES2 or OES 11 Linux server, try configuring an NFS mount on the OES server and then configuring the NCP share on top of that using Remote Manager on the server.
    What I would recommend, however, is to see whether the iSCSI issue cannot be fixed or worked around.
    Could you describe a bit more of the situation there, what happened, and what is not working on that end?
    -Willem

  • Expdp fails to create .dmp files on an NFS mount point in Solaris 10, Oracle 10g

    Dear folks,
    I am facing a weird issue while doing expdp to an NFS mount point. Kindly help me on this.
    ===============
    expdp system/manager directory=exp_dumps dumpfile=u2dw.dmp schemas=u2dw
    Export: Release 10.2.0.4.0 - 64bit Production on Wednesday, 31 October, 2012 17:06:04
    Copyright (c) 2003, 2007, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    ORA-39001: invalid argument value
    ORA-39000: bad dump file specification
    ORA-31641: unable to create dump file "/backup_db/dumps/u2dw.dmp"
    ORA-27040: file create error, unable to create file
    SVR4 Error: 122: Operation not supported on transport endpoint
    I have mounted like this:
    mount -o hard,rw,noac,rsize=32768,wsize=32768,suid,proto=tcp,vers=3 -F nfs 172.20.2.204:/exthdd /backup_db
    NFS=172.20.2.204:/exthdd
    I have given read/write grants to public as well as to the specific user.
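    For reference, the directory object setup presumably looks something like this (the path comes from the error above; the grants are as described, so this is an assumption, not output from the system):
    SQL> CREATE OR REPLACE DIRECTORY exp_dumps AS '/backup_db/dumps';
    SQL> GRANT READ, WRITE ON DIRECTORY exp_dumps TO PUBLIC;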

    782011 wrote:
    Hi sb92075,
    Thanks for your reply. Please find the details below. I am able to touch files there, and the export log file is also created, but I still get the error message I showed in the previous post.
    # su - oracle
    Sun Microsystems Inc. SunOS 5.10 Generic January 2005
    You have new mail.
    oracle 201> touch /backup_db/dumps/u2dw.dmp.test
    oracle 202>

    I contend that Oracle is too dumb to lie & does not mis-report reality:
    27040, 00000, "file create error, unable to create file"
    // *Cause:  create system call returned an error, unable to create file
    // *Action: verify filename, and permissions

  • Cannot access external NFS mounts under Snow Leopard

    I was previously running Leopard (10.5.x) and automounted an Ubuntu (9.04 Jaunty) Linux NFS share from my iMac. I had set this up with Directory Utility; it was instantly functional and I never had any issues. After upgrading to Snow Leopard, I set up the same mount point on the same machine (using Disk Utility now), without changing any of the export settings, and Disk Utility stated that the external server had responded and appeared to be working correctly.
    However, when attempting to access the share, I get an 'Operation not permitted' error. I also cannot manually create the NFS mount using mount or mount_nfs. I get a similar error if I try to cd into /net/<remote-machine>/<share>. I can see the shared folder in /net/<remote-machine>, but I cannot access it (cd, ls, etc.). I can see on the Linux machine that the iMac has mounted the share (showmount -a), so the problem appears to be solely in the permissions. But I have not changed any of the permissions on the remote machine, and even then, they are blown wide open (777), so I'm not sure what is causing the issue. I have tried everything as both a regular user and as root. Any thoughts?
    On the Linux NFS server:
    % cat /etc/exports
    /share 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)
    % showmount -a
    All mount points on <server>:
    192.168.1.100:/share <-- <server> address
    192.168.1.101:/share <-- iMac address
    On the iMac:
    % rpcinfo -t 192.168.1.100 nfs
    program 100003 version 2 ready and waiting
    program 100003 version 3 ready and waiting
    program 100003 version 4 ready and waiting
    % mount
    trigger on /net/<server>/share (autofs, automounted, nobrowse)
    % mount -t nfs 192.168.1.100:/share /Volumes/share1
    mount_nfs: /Volumes/share1: Operation not permitted

    My guess is that the Linux server is refusing NFS requests coming from a non-reserved (<1024) source port. If that's the case, adding "insecure" to the Linux export options should get it working. (Note: requiring the use of reserved ports doesn't actually make things any more secure on most networks, so the name of the option is a bit misleading.)
    If you were previously able to mount that same export from a Mac, you must have been specifying the "-o resvport" option and doing the mounts as root (via sudo, or via automount, which happens to run as root). So that may be another fix.
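    Concretely, the two fixes would look something like this (untested sketches based on the addresses above):
    # on the Linux server, add "insecure" to the export and re-export:
    /share 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash,insecure)
    # exportfs -ra
    # or, on the Mac, mount from a reserved port as root:
    sudo mount -o resvport -t nfs 192.168.1.100:/share /Volumes/share1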
    HTH
    --macko

  • Accessing NFS mounts in Finder

    I currently have trouble accessing NFS mounts with the Finder. The mount is OK; I can access the directories on the NFS server in Terminal. However, in the Finder, when I click on the mount, instead of seeing the contents of the NFS mount I only see the "Alias" icon. Logs show nothing.
    I am not sure when it last worked. It could well be that the problem only started after one of the latest Snow Leopard updates. I know it worked when I upgraded to Snow Leopard.
    Any ideas?

    Hello gvde,
    Two weeks ago I bought a NAS device that touted NFS as one of its features. As I am a fan of Unix boxes, I chose a NAS that would support that protocol. I was disappointed to find out that my MacBook would not connect to it. As mentioned in previous posts (by others) on this forum, I could see my NFS share via the command line, but not when using the Finder. I was getting pretty upset and racking my brain trying to figure it out. I called the NAS manufacturer, which was no help. I used a Ubuntu LiveCD (which connected fine). I was about ready to give up. Then, in another forum, someone mentioned the NFS Manager app.
    After I installed the app and attempted to configure my NFS shares, the app stated something along the lines of (paraphrasing) "default permissions were incorrect". It then asked me if I would authenticate to have NFS Manager fix the problem. I was at my wits' end, so I thought, why not. Long story short, this app saved me! My shares survive a reboot, the Finder is quick and snappy displaying the network shares, and all is right with the world. Maybe in 10.6.3 Apple will have fixed the default permissions issue. Try the app. It's donationware. I hope this post helps someone else.
    http://www.macupdate.com/info.php/id/5984/nfs-manager

  • PERFORMANCE while accessing remote database DB2 on AS/400 using WAS

    Subject: PERFORMANCE while accessing a remote database
    We have IBM WebSphere Application Server Standard Edition 3.5.3 running on
    an AS/400 iSeries server (V4R5, test) and a local DB2 database.
    I am using the AS/400 Developer Kit for Java JDBC driver (type 2, com.ibm.db2.jdbc.app.DB2Driver)
    to talk to the local database. The performance was very good.
    When I try to access a remote database (everything else the same as local) which is on another AS/400
    machine running V4R4 (we use it for production; the remote database) using the IBM Toolbox for Java JDBC driver
    (com.ibm.as400.access.AS400JDBCDriver, a type 4 driver), I see a 30 to 40% decrease in performance.
    Here, WAS is on the aforementioned V4R5 AS/400 machine.
    My questions: is the performance decrease due to
    1. the driver I am using? If so, are there alternative drivers for accessing the
    remote database that would boost performance?
    2. the release difference between the local (V4R5) and remote (V4R4) databases?
    3. Currently most users are on the remote database while we do this testing. Is that the cause?
    Or is there any other cause, other drivers, etc.? Suggestions and help are most welcome.
    Thank you.

    What about
    4. the data has to travel across the network.

  • Problem while accessing object in remote database

    Hi All,
    We have a procedure "UPDATE_CONV_DETAILS" created in the remote database in the "apps" schema. The synonym for the procedure is created in the billing schema (present in the remote database). A dblink is created in the local database through which we are trying to access the remote object "UPDATE_CONV_DETAILS".
    Dblink script:
    create public database link PRE_TO_CEL
    connect to BILLING
    identified by BILLING
    using 'MAP1';
    When trying to access the object from the local machine using schema_name.object_name, it works fine:
    SQL> DESC APPS.UPDATE_CONV_DETAILS@PRE_TO_CEL
    PROCEDURE APPS.UPDATE_CONV_DETAILS@PRE_TO_CEL
    Argument Name Type In/Out Default?
    IN_MOBILE VARCHAR2 IN
    IN_SERVICE_CODE VARCHAR2 IN
    IN_STATUS VARCHAR2 IN
    OUT_ERROR_CODE NUMBER OUT
    But when trying to access the same object using the synonym name, it gives the error:
    SQL> DESC UPDATE_CONV_DETAILS@PRE_TO_CEL
    ERROR:
    ORA-04043: object APPS.UPDATE_CONV_DETAILS does not exist
    Regards,
    Kirti

    To summarize: you have two schemas, apps and billing, both residing in the remote database. There is one procedure, on which you created a synonym, and you want to access it via that synonym. Your remote database name is "map1", and on your local database you created one dblink to access the remote database.
    ORA-04043: object string does not exist
    Cause: An object name was specified that was not recognized by the system. There are several possible causes:
    - An invalid name for a table, view, sequence, procedure, function, package, or package body was entered. Since the system could not recognize the invalid name, it responded with the message that the named object does not exist.
    - An attempt was made to rename an index or a cluster, or some other object that cannot be renamed.
    Action: Check the spelling of the named object and rerun the code. (Valid names of tables, views, functions, etc. can be listed by querying the data dictionary.)
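    One thing worth checking, as a sketch (it assumes the synonym or the execute grant is missing on the remote side, which this thread does not confirm): on the remote database, BILLING needs execute rights on the procedure and a synonym pointing at it, e.g.
    -- on the remote database, as APPS or a DBA:
    GRANT EXECUTE ON APPS.UPDATE_CONV_DETAILS TO BILLING;
    -- on the remote database, as BILLING:
    CREATE SYNONYM UPDATE_CONV_DETAILS FOR APPS.UPDATE_CONV_DETAILS;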

  • Error while accessing remote server using applet in JSP page

    Hi,
    We are accessing a data repository, MDSPlus. It is used for storing data such as signals in a tree-like structure. We are coding the client side in JSP.
    For this we are invoking an applet which uses the jar files of jScope (a Java tool for displaying waveforms). We get the following error when we try to access a remote server on the network, but it works fine with a local server.
    So kindly help.
    The error is:
    java.security.AccessControlException: access denied (java.net.SocketPermission 202.41.112.140:8000 connect,resolve)
    url mds:://202.41.112.140/SST_DAQ/11/\SST_DAQ::TOP.BOLOMETER:BOLO_1
    Use policytool.exe in the JDK or JRE installation directory to add socket access permission.
    The IP address mentioned in the error above is the computer to which we have to connect. SST_DAQ is the experiment name, 11 is the shot number, and BOLOMETER and BOLO_1 are the tree nodes.
    Please reply soon.
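    As the error text itself suggests, the applet needs an explicit socket permission. A minimal sketch of the policy entry (the exact policy file the browser plugin reads varies by JRE installation; policytool can add it for you):
    grant {
        permission java.net.SocketPermission "202.41.112.140:8000", "connect,resolve";
    };
    Signing the applet, or granting to a specific codeBase only, would be the more careful options.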

    Hi Frank,
    Are you using standalone OC4J or 9iAS? If you are using standalone OC4J, then you need to add a proper data source entry in the %OC4J_HOME%\j2ee\home\config\data-sources.xml file.
    If you are using 9iAS, then you can log in to the Enterprise Manager console and add the data source entry using the wizard provided by 9iAS.
    Ensure the case of the JNDI lookup string is correct, since it is case sensitive.
    Hope this helps.
    Abhijeet

  • Accessing NFS mounted share in Finder no longer works in 10.5.3+

    I had previously set up an automounted NFS share with Leopard against a RHEL 5 server at the office. I had to jump through a few hoops to punch a hole through the appfirewall to make the share accessible in the Finder.
    A few months later, when I returned to the office after a consultancy stint and upgrades to 10.5.3 and 10.5.4, the NFS mount no longer works. I investigated it today and I can't get it to run even with the appfirewall disabled.
    I've been doing some troubleshooting, and the interaction between statd, lockd and perhaps portmap seems a bit fishy, even with the appfirewall disabled. Both statd and lockd complain that they cannot register; lockd once and statd indefinitely.
    Jul 2 15:17:10 ySubmarine com.apple.statd[521]: rpc.statd: unable to register (SM_PROG, SM_VERS, UDP)
    Jul 2 15:17:10 ySubmarine com.apple.launchd[1] (com.apple.statd[521]): Exited with exit code: 1
    Jul 2 15:17:10 ySubmarine com.apple.launchd[1] (com.apple.statd): Throttling respawn: Will start in 10 seconds
    ... and rpcinfo -p gets connection refused unless I start portmap using the launchctl utility.
    This may be a bit obscure, and I'm not exactly an expert of NFS, so I wonder if someone else stumbled across this, and can point me in the right direction?
    Johan

    Sorry for my late response, but I have finally gotten around to some trial and error. I can mount the share using mount_nfs (but need to use sudo), and it shows up as a mounted disk in the Finder. However, when I start to browse a directory on the share that I can write to, I end up with the lockd and statd failures.
    $ mount_nfs -o resvport xxxx:/home /Users/yyyy/xxxx-home
    mount_nfs: /Users/yyyy/xxxx-home: Permission denied
    $ sudo mount_nfs -o resvport xxxx:/home /Users/yyyy/xxxx-home
    Jul 7 10:37:34 zzzz com.apple.statd[253]: rpc.statd: unable to register (SM_PROG, SM_VERS, UDP)
    Jul 7 10:37:34 zzzz com.apple.launchd[1] (com.apple.statd[253]): Exited with exit code: 1
    Jul 7 10:37:34 zzzz com.apple.launchd[1] (com.apple.statd): Throttling respawn: Will start in 10 seconds
    Jul 7 10:37:44 zzzz com.apple.statd[254]: rpc.statd: unable to register (SM_PROG, SM_VERS, UDP)
    Jul 7 10:37:44 zzzz com.apple.launchd[1] (com.apple.statd[254]): Exited with exit code: 1
    Jul 7 10:37:44 zzzz com.apple.launchd[1] (com.apple.statd): Throttling respawn: Will start in 10 seconds
    Jul 7 10:37:54 zzzz com.apple.statd[255]: rpc.statd: unable to register (SM_PROG, SM_VERS, UDP)
    Jul 7 10:37:54 zzzz com.apple.launchd[1] (com.apple.statd[255]): Exited with exit code: 1
    Jul 7 10:37:54 zzzz com.apple.launchd[1] (com.apple.statd): Throttling respawn: Will start in 10 seconds
    Jul 7 10:37:58 zzzz loginwindow[25]: 1 server now unresponsive
    Jul 7 10:37:59 zzzz KernelEventAgent[26]: tid 00000000 unmounting 1 filesystems
    Jul 7 10:38:02 zzzz com.apple.autofsd[40]: automount: /net updated
    Jul 7 10:38:02 zzzz com.apple.autofsd[40]: automount: /home updated
    Jul 7 10:38:02 zzzz com.apple.autofsd[40]: automount: no unmounts
    Jul 7 10:38:02 zzzz loginwindow[25]: No servers unresponsive
    ... and firewall wide open.
    I guess the Finder somehow triggers file locking over NFS.
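    If that guess is right, one workaround might be to disable NFS locking on that mount; Leopard's mount_nfs has a "nolocks" option (an untested sketch, reusing the placeholder paths above):
    $ sudo mount_nfs -o resvport,nolocks xxxx:/home /Users/yyyy/xxxx-home
    Locking-dependent applications may misbehave on such a mount, so this is more a diagnostic step than a fix.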

  • NFS Mounted Directory And Files Quit Responding

    I mounted a remote directory using NFS, and I can access the mount point and all of its sub-directories and files. After a while, all of the sub-directories and files stop responding when clicked; in column view there is no longer an icon or any statistics for those files. If I go back and click on Network->Servers->myserver->its_subdirectories, it will eventually respond again.
    I have found no messages in the system log. And nfsstat shows no errors.
    I am using these mount parameters with the Directory Utility->Mounts tab:
    ro net -P -T -3
    Any idea why the NFS mounted directories and files quit responding?
    Thanks.

    I may have found an answer to my own question.
    It looks like automount will automatically unmount a file system if it has not been accessed in 10 minutes. This time-out can be changed using the automount command. I am going to try increasing this time-out value.
    Here is part of the man page:
    SYNOPSIS
    automount [-v] [-c] [-t timeout]
    -t timeout
    Set to timeout seconds the time after which an automounted file
    system will be unmounted if it hasn't been referred to within
    that period of time. The default is 10 minutes (600 seconds).
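    For example, raising the timeout to 24 hours should look something like this (run as root; I have not verified whether the setting survives a reboot):
    sudo automount -t 86400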

  • Permanent NFS Mount

    Hi
    I'm trying to figure out how to create a permanent NFS mount on my 10.5.6 Server hosts. Using Directory Utility seems to only create autofs mounts, which I've had trouble with in the past on other platforms, so I'm not very trusting of it.
    /etc/fstab.hd is apparently ignored, so I'm not sure how else to get a permanent mount. Is it even possible on Leopard Server?
    Thanks.

    I also found that nested / hierarchical mounts don't work with the Apple version of Sun's automounter. I thought for sure that if I made a "multiple mounts" entry they would work, since AIUI in this case the automounter gets to see the whole proposed subtree at once from a single Directory Services lookup, but no, it doesn't work either. There are "multiple mounts" examples on both Apple's man page and Sun's, but on Sun's page the example is nested and on Apple's it isn't. I guess that's a kind of transparency, but a rather CYA-ish kind that leaves us out here wagging our jaws quite a bit when we expect it to behave like other automounters.
    However! Nested mounts with the 'net' option DO work. ?!
    And in this case, unlike the traditional Sun "multiple mounts" case, the automounter must build the tree with multiple Directory Services lookups, not just one. How can it even do that? Is it searching the directory instead of doing a simple lookup?
    I can load all the mounts into Open Directory as separate nodes, or whatever you call them, like this:
    cat > nested-example
    0x0A 0x5C 0x3A 0x2C dsRecTypeStandard:Mounts 3 dsAttrTypeStandard:RecordName dsAttrTypeStandard:VFSType dsAttrTypeStandard:VFSOpts
    terabithia\:/arrchive/incoming:nfs:nosuid,nodev,hard,intr,net
    terabithia\:/arrchive/Radio:nfs:nosuid,nodev,hard,intr,net
    terabithia\:/arrchive/backup:nfs:nosuid,nodev,hard,intr,net
    terabithia\:/arrchive/ebooks:nfs:nosuid,nodev,hard,intr,net
    terabithia\:/arrchive/fonts:nfs:nosuid,nodev,hard,intr,net
    terabithia\:/arrchive/movies:nfs:nosuid,nodev,hard,intr,net
    terabithia\:/arrchive/music/Antoine:nfs:nosuid,nodev,hard,intr,net
    terabithia\:/arrchive/music/Lauren:nfs:nosuid,nodev,hard,intr,net
    terabithia\:/arrchive/music/Roger:nfs:nosuid,nodev,hard,intr,net
    terabithia\:/arrchive/music/jen:nfs:nosuid,nodev,hard,intr,net
    terabithia\:/arrchive:nfs:nosuid,nodev,hard,intr,net
    ^D
    dsimport -g nested-example /Local/Default I -u someadminuser
    and they will show up under /Network/Servers/terabithia/arrchive. I can no longer choose the mountpoint myself, which is a disadvantage for more than vanity: with the Solaris automounter, it's possible to build a single nested tree on the client out of filesystems pulled in from a bunch of different NFS servers, while the Mac's 'net' naming convention straitjackets me into only rebuilding trees that exist within one NFS server.
    Also, this works on 10.4, too, though in that case of course you use netinfo or niload fstab instead of dsimport.
    Now can someone explain why 'net' suddenly works so much better? And is there a hidden downside to using it?

  • Static NFS Mounts

    Running OS X 10.5 Server
    I have found that I can mount an NFS export via the automount system through the auto_* configuration files in /etc. As I understand it, automount will mount the share on access and unmount it after a period of inactivity.
    1. Is there another way to "statically" mount an NFS volume, similar to Linux's /etc/fstab or Solaris's /etc/vfstab? I don't want this unmounted, ever. Back to Google for now....
    2. If I edit /etc/autofs.conf option AUTOMOUNT_TIMEOUT and set it to 0 will it leave all mounts up after first access? If not, what is the valid range for this parameter?

    I have seen this a couple of times now. Can you explain why Directory Utility is the place for something like this? As I understood it, this utility/application is for working with LDAP directories, not file system directories.
    Also, I believe that Directory Utility basically feeds these mounts into automount, which means they become unmounted after X seconds as defined in autofs.conf. We have a cron job that needs to access data on the mount once a day. It fails, however, because the share gets unmounted. I have considered putting a "cd /PATH &&" before my job, as sketched below, but that seems to be a bit of a hack.
    I'd really like to just mount the export and have it remain mounted regardless of what automount wants to do. Surely that is possible.
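    (For what it's worth, the hack would look something like this in the crontab; the mount path and job name are placeholders, and the leading cd makes autofs re-mount the share before the job runs:)
    0 2 * * * cd /Network/Servers/myserver/export && /usr/local/bin/nightly-job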
