Automount Home Directories Failed

Hi there,
I have a Solaris 10 server running a ZFS filesystem. After patching this server, the clients running Solaris 10 no longer mount their home directories.
I see that the /etc/dfs/dfstab file has the words "Error: Syntax" in front of the lines where the home directories are shared. Also, the autofs service is online, while the nfs/server service is offline*.
Any thoughts on what I should check? Any help will be greatly appreciated.
Thanks,
Wasim

Thanks a lot for the reply; here is what you asked for.
bash-3.00# svcs -xv nfs/server
svc:/network/nfs/server:default (NFS server)
State: offline since Tue Feb 22 09:56:10 2011
Reason: Start method is running.
See: http://sun.com/msg/SMF-8000-C4
See: man -M /usr/share/man -s 1M nfsd
See: /var/svc/log/network-nfs-server:default.log
Impact: This service is not running.
bash-3.00# dfshares
nfs dfshares:edison: RPC: Program not registered
bash-3.00# vi dfs/dfstab
"dfs/dfstab" 16 lines, 629 characters
# Do not modify this file directly.
# Use the sharemgr(1m) command for all share management
# This file is reconstructed and only maintained for backward
# compatibility. Configuration lines could be lost.
# Place share(1M) commands here for automatic execution
# on entering init state 3.
# Issue the command 'svcadm enable network/nfs/server' to
# run the NFS daemon processes and the share commands, after adding
# the very first entry to this file.
# share [-F fstype] [ -o options] [-d "<text>"] <pathname> [resource]
# .e.g,
# Error: Syntax share -F nfs -o rw -d "home directory" /tank/home
# Error: Syntax share -F nfs -o ro -d "local" /tank/local
bash-3.00# zfs get sharenfs tank/home
NAME       PROPERTY  VALUE                   SOURCE
tank/home  sharenfs  rw=soemgr,rw=soelab113  local
Well, I did try to correct the dfstab file, but it did not work. I don't know what was being used to share the home directories, but I do recall that the dfstab file did not look like the one above.
Any thoughts?
Wasim
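One detail worth noting in the zfs get output above: the sharenfs value carries two separate rw= options, while share(1M) and the sharenfs property expect a single access list with hosts separated by colons, which may well be what sharemgr is flagging as "Error: Syntax". A hedged sketch of that correction, using the dataset names from the thread (the ro option for tank/local is an assumption based on the dfstab line above):
<pre>
# a colon-separated access list is the documented share/sharenfs form
zfs set sharenfs='rw=soemgr:soelab113' tank/home
zfs set sharenfs='ro' tank/local     # assumption: mirrors the "-o ro" dfstab line
# then kick the stuck service and re-check
svcadm restart svc:/network/nfs/server:default
svcs -xv nfs/server
share    # the ZFS shares should now be listed
</pre>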

Similar Messages

  • Automount Home Directories from LDAP

    I have a Red Hat Linux LDAP/Kerberos server (IPA server) that, besides authentication, I also use as an NFS server sharing users' home directories.
    All information for the Solaris machines is provided from a custom DUAProfile in LDAP.
    Relevant autofs information in DUAProfile:
    serviceSearchDescriptor: automount:cn=default,cn=automount,dc=example,dc=org
    serviceSearchDescriptor:auto_master:automountMapName=auto.master,cn=default,cn=automount,dc=example,dc=org
    All users on the network have their home directories under /home
    I have a auto.home map on the server with key:
    * -rw,soft ipaserver.example.org:/home/&
    This setup works perfectly for our Linux clients, but not for Solaris.
    In Solaris, autofs seems to look in the LDAP tree for local users' home directories too, making them unavailable when logging in.
    This happens even though +auto_home comes after the local user mappings.
    t4 LOOKUP REQUEST: Tue Dec 25 22:08:36 2012
    t4 name=localuser[] map=auto.home opts= path=/home direct=0
    t4 LOOKUP REPLY : status=2
    Removing the autofs entries from the DUAProfile and specifying every user directly in /etc/auto_home works, with a delay in mounting.
    This is, however, a less than satisfactory solution.
    I thought about just removing the local users' mounts from /export/home to /home, but that does not seem like a good idea.
    How could I make this work the way I want with wildcards?
    Regards,
    Johan.
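    A wildcard entry under the rfc2307bis-style automount schema implied by the DUAProfile above would look roughly like the LDIF below; this is a sketch for comparison against the live tree (attribute names depend on the schema the IPA server actually uses):
    <pre>
    # hypothetical LDIF; DN components taken from the serviceSearchDescriptor
    dn: automountMapName=auto.home,cn=default,cn=automount,dc=example,dc=org
    objectClass: automountMap
    automountMapName: auto.home

    dn: automountKey=*,automountMapName=auto.home,cn=default,cn=automount,dc=example,dc=org
    objectClass: automount
    automountKey: *
    automountInformation: -rw,soft ipaserver.example.org:/home/&
    </pre>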

    I have now tried with a different share and mount point (/nethome) on a different test server.
    I verified that I can mount it through krb5, and automount works for the Red Hat Linux clients.
    ssh, su, and console login all work on Solaris 11, except that the home directory is not found through automount.
    root@solaris2:~# ldapclient list
    NS_LDAP_FILE_VERSION= 2.0
    NS_LDAP_BINDDN= uid=solaris,cn=sysaccounts,cn=etc,dc=example,dc=org
    NS_LDAP_BINDPASSWD= {XXX}XXXXXXXXXXXXXX
    NS_LDAP_SERVERS= server.example.org
    NS_LDAP_SEARCH_BASEDN= dc=example,dc=org
    NS_LDAP_AUTH= tls:simple
    NS_LDAP_SEARCH_REF= TRUE
    NS_LDAP_SEARCH_SCOPE= one
    NS_LDAP_SEARCH_TIME= 10
    NS_LDAP_CACHETTL= 6000
    NS_LDAP_PROFILE= solaris_authssl1
    NS_LDAP_CREDENTIAL_LEVEL= proxy
    NS_LDAP_SERVICE_SEARCH_DESC= passwd:cn=users,cn=accounts,dc=example,dc=org
    NS_LDAP_SERVICE_SEARCH_DESC= group:cn=groups,cn=compat,dc=example,dc=org
    NS_LDAP_SERVICE_SEARCH_DESC= netgroup:cn=ng,cn=compat,dc=example,dc=org
    NS_LDAP_SERVICE_SEARCH_DESC= ethers:cn=computers,cn=accounts,dc=example,dc=org
    NS_LDAP_SERVICE_SEARCH_DESC= automount:cn=default,cn=automount,dc=example,dc=org
    NS_LDAP_SERVICE_SEARCH_DESC= auto_master:automountMapName=auto.master,cn=default,cn=automount,dc=example,dc=org
    NS_LDAP_SERVICE_SEARCH_DESC= aliases:ou=aliases,ou=test,dc=example,dc=org
    NS_LDAP_SERVICE_SEARCH_DESC= printers:ou=printers,ou=test,dc=example,dc=org
    NS_LDAP_BIND_TIME= 5
    NS_LDAP_OBJECTCLASSMAP= shadow:shadowAccount=posixAccount
    NS_LDAP_OBJECTCLASSMAP= printers:sunPrinter=printerService
    root@solaris2:~# sharectl get autofs
    timeout=600
    automount_verbose=true
    automountd_verbose=true
    nobrowse=false
    trace=2
    environment=
    From /var/svc/log/system-filesystem-autofs\:default.log:
    t4 LOOKUP REQUEST: Wed Dec 26 12:28:43 2012
    t4 name=user02[] map=auto.nethome opts= path=/nethome direct=0
    t4 getmapent_ldap called
    t4 getmapent_ldap: key=[ user02 ]
    t4 ldap_match called
    t4 ldap_match: key =[ user02 ]
    t4 ldap_match: ldapkey =[ user02 ]
    t4 ldap_match: Requesting list for (&(objectClass=automount)(automountKey=user02)) in auto.nethome
    t4 ldap_match: __ns_ldap_list FAILED (2)
    t4 ldap_match: no entries found
    t4 ldap_match called
    t4 ldap_match: key =[ \2a ]
    t4 ldap_match: ldapkey =[ \2a ]
    t4 ldap_match: Requesting list for (&(objectClass=automount)(automountKey=\2a)) in auto.nethome
    t4 ldap_match: __ns_ldap_list FAILED (2)
    t4 ldap_match: no entries found
    t4 getmapent_ldap: exiting ...
    t4 do_lookup1: action=2 wildcard=FALSE error=2
    t4 LOOKUP REPLY : status=2
    The automount map is called auto.nethome.
    The key is: * -rw,soft server.example.org:/nethome/&
    Is it that Solaris automount doesn't like an asterisk (*) in an automount key?
    At least the local users' home directories work now that I am not trying to autofs-mount /home.
    Does anyone know what is wrong here?
    Thank you for your help.
    Regards,
    Johan.
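    The log above shows the Solaris automounter escaping the wildcard key as \2a (the RFC 4515 encoding of *) before querying LDAP, and the server returning nothing for that filter. One way to narrow this down is to issue both filter forms by hand from the Solaris box; a sketch reusing the server and base DN from the ldapclient output (add -D/-w bind options if anonymous search is not allowed):
    <pre>
    # escaped form, exactly as automountd sends it
    ldapsearch -h server.example.org \
      -b "automountMapName=auto.nethome,cn=default,cn=automount,dc=example,dc=org" \
      "(&(objectClass=automount)(automountKey=\2a))" automountInformation
    # presence test: lists every key under the map, showing whether entries exist at all
    ldapsearch -h server.example.org \
      -b "automountMapName=auto.nethome,cn=default,cn=automount,dc=example,dc=org" \
      "(&(objectClass=automount)(automountKey=*))" automountInformation
    </pre>
    If the presence test returns the wildcard entry but the escaped form does not, the directory server is not matching the escaped asterisk, which would square with the map working for Linux clients but not Solaris.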

  • Automounting home directories

    I really like how Solaris keeps home directories at /export/home/<username> and then mounts them at /home/<username> upon login. I tried to get the same functionality with OL6.3 but couldn't get the automounter to work.
    My setup is:
    /etc/auto.master contains:
    /home /etc/auto.home
    And /etc/auto.home contains:
    * :/export/home/&
    I restarted the services, but when any user logs in the system complains about not having a home directory. What am I missing?

    I have not configured autofs recently, but have the following example in my notes:
    <pre>
    # cat auto.master
    /nfs-photon01 /etc/auto.photon01 vers=3,rw,hard,proto=tcp,intr
    # cat auto.photon01
    * photon01.example.com:/&
    # mkdir /nfs-photon01
    # service autofs reload
    </pre>
    Does your /etc/auto.home file specify the NFS server?
    By the way, NFSv4 is the default in OL6, which requires that you export all NFS directories under one virtual root. For instance, if /ext/nfs is the NFS root (fsid=0), everything else that you want shared over NFSv4 must be accessible under /ext/nfs. Check your /etc/exports file. There are examples on the web; you should be able to find them by searching for "NFSv4 fsid=0".
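    For the original question, two hedged variants of /etc/auto.home, depending on whether the homes are local to the client or on an NFS server (the server name is illustrative). Note that a location starting with a bare colon is treated as a local device by the Linux automounter, so a local directory needs -fstype=bind:
    <pre>
    # /etc/auto.home, variant (a): local bind mount of /export/home/<user>
    *   -fstype=bind    :/export/home/&
    # variant (b): over NFS from a server
    *   -fstype=nfs,rw  nfsserver.example.com:/export/home/&
    </pre>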

  • Automount home directories from another computer

    Hello,
    after two days of work, I am writing here to find some help.
    I have a well-configured Leopard Server 10.5.8 which serves user accounts through Open Directory: network users can log in to all my Mac OS X clients, and home directories are automounted at /Network/Servers/myserver.com/Users/user1. myserver.com is the server for user1's home directory.
    Now I want (let's say) host1, a Leopard workstation, to become the home directory server for user1. So I created a local account on host1 with the network credentials (uid, gid, and password) and configured /etc/exports to export his home over NFS.
    How do I record in the LDAP server that his home directory is located not on myserver.com but on host1? That is, when user1 logs in on host2, host2 should automount host1:/Users/user1 at /Network/Servers/host1/Users/user1 or elsewhere.
    Note: I have already tested the configuration manually and it works!
    1. In WGM, I set /path/user1 as user1's default home directory.
    2. On the client (host2), I manually mounted host1:/Users/user1 at /path/user1.
    3. user1 logs into host2 and it works fine.
    But I cannot do this for every client and for every new such user! This is why I want to put this information (for each such user) in LDAP, to distribute it to the clients automatically.
    Thank you for your help,
    Joan
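    Since the mount and home-directory information lives in Open Directory records, dscl can inspect and set it centrally instead of touching each client; a sketch (record names illustrative, and the mount-record attributes are best copied from the record that already works rather than typed from memory):
    <pre>
    # inspect the existing, working mount record (note the escaped slash)
    dscl /LDAPv3/127.0.0.1 -read "/Mounts/myserver.com:\/Users"
    # create a parallel record for host1 mirroring those attributes, so clients
    # automount host1:/Users under /Network/Servers/host1; then repoint user1:
    dscl -u diradmin /LDAPv3/127.0.0.1 -create /Users/user1 \
      NFSHomeDirectory /Network/Servers/host1/Users/user1
    # the HomeDirectory attribute (the afp:// URL record) needs the same edit
    </pre>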

  • Automounting home directories from Redhat Linux OpenLDAP server

    We have an existing, functioning autofs environment here; at least the Linux boxes have no problem automounting user home directories.
    I am looking for a more comprehensive solution for getting our Macs integrated into this environment.
    What should the LDAP entries contain?
    What should the attribute mappings be set to?
    I have LDAP authentication working; the only thing left is automounting.
    Also, is there a way to get the NFS client to work over secure ports by default? Or is this a BSD thing?
    Thanks

    http://rajeev.name/blog/2007/12/09/integrating-leopard-autofs-with-ldap/
    There's some additional LDAP schema work that has to be done; Apple seems to have gone with the most absolutely bleeding-edge RFC for automounts, and then removed all legacy support.
    This covers most of the issues; however, there is one that I'm still unable to resolve:
    typically, a Linux box does autofs using an entry like
    "* -fstype=nfs foo:/home/&"
    LDAP uses a slightly different entry, but it works.
    I haven't for the life of me been able to get auto.home mounting from LDAP as easily as when it is defined in the file.
    The frustrating part is that the post gives a really good example LDIF, but it still doesn't seem to work.
    So while I have other automounts working wonderfully, the wildcarded home directories are still a bust.
    If you're willing to forgo using LDAP for autofs-mounting home, then hard-coding /etc/auto_home will fit the bill.
    But since the link seems to imply that it works, I'm wondering what's going on...
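    For completeness, the hard-coded fallback mentioned above, as a sketch (server name illustrative); on a Mac, /etc/auto_home is consulted through the /home auto_home line that ships in /etc/auto_master:
    <pre>
    # /etc/auto_home
    *   -fstype=nfs    foo.example.com:/home/&
    # then flush the automounter cache and reread the maps:
    sudo automount -vc
    </pre>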

  • Login crashes at loading home directories

    I hope someone can point me in the right direction. About two weeks ago I replaced our network router with a brand new one. There were no directions for manual installation, just a "wizard" to run for setup. BECAUSE I AM AN IDIOT, I used the closest computer to run the wizard: my Snow Leopard Server. The router wizard did not ask what settings I wanted for the router; instead it CHANGED THE IP OF MY SERVER! No client was able to log in. I finally got into the admin settings for the router and changed everything back to what it had been before: server manual address 192.168.0.10, router 192.168.0.1. It took me half a day, but I got the server IP changed and DNS working correctly; the router does DHCP.
    After that little glitch, most clients were OK. I had a handful, with OSes from 10.4 to 10.7 and no rhyme or reason to them, that could not log in ("you are unable to login at this time because an error occurred"). Accounts would log in fine on a different machine, but no account would log in on the handful. I deleted and re-added the server in Directory Utility and deleted prefs, with no luck. Quite a few clients, more than half, had weird sloooooooow login problems, taking 2 or 3 minutes to fully load the home directory, with occasional spinning beach balls after logging in.
    Fast forward: we had a huge power outage last week that lasted about 2 hours. When power came back on and I started up the server, Server Admin at first showed no services. I restarted, and my server returned to what I thought was normal, but now NO USER can log in. I do not know if this is related to my earlier problem or is a new development. When a login is attempted from any client, the logs suggest that Kerberos authentication succeeds, but the home directories fail to load, and the user is dumped to the helpful "you are unable to login because of an error" screen.
    What I have tried: checking DNS (sudo changeip -checkhostname returns correctly, IP addresses match and are correct, and the server's FQDN is correct). I can ping the server both by name and by IP from the client, and nslookup on the client returns correctly. I have checked the sharepoint for the home directories, and it appears to be shared correctly. If I log in to a client computer with a local login, then use Go > Connect to Server and log in with a user account, the user's home is loaded as a connected disk and everything is there. I have looked through the console log on the client and various logs in Server Admin, but I don't really know what to look for. I have gone as far as exporting my Open Directory database, demoting the server to standalone, re-promoting it to Open Directory master, and restoring from the database, all of which seemed to go well: I am able to connect to server accounts manually as above, and all my users are back. In Workgroup Manager, accounts show as normal, and home folders are located in the same place they have always been. I don't know what to try next. Users who do not have server accounts (Windows machines and Macs with local logins) can connect to the internet, and all is fine.
    I have searched support postings on several occasions but did not find any helpful suggestions.

    We ran into this issue too, because we forgot to enable the Network Mount for the users. Go to Sharing --> Share Point --> set up the Network Mount as a Home Directory Mount.

  • Stumped on AFP network home directories.

    Heyo,
    I've been RTFMing File Services, User Management, and Open Directory. I also looked at www.AFP548.com but didn't find anything helpful.
    We have a mixed environment, and Windows users aren't having any problem with network domain logins or smb shares. Mac clients can mount the network shares with afp, but network homes are a no-go.
    I made the changes needed for the firewall, and tried it with the firewall off just to be sure.
    The /Home share is automounted (not using the default /Users).
    Guest access is on in Sharing and AFP.
    Network Mount for /Home is set to Enable network mounting, AFP, and User Home Directories.
    SMB Windows homes are in the same directory and run without problems.
    Directory Access on the client saw the server and looks OK.
    The only reference I can find to the login attempt is in the Open Directory Password Service server log:
    Apr 23 2006 16:42:31 RSAVALIDATE: success.
    Apr 23 2006 16:42:31 USER: {0x00000000000000000000000000000001, netadmin} is the current user.
    Apr 23 2006 16:42:31 AUTH2: {0x00000000000000000000000000000001, netadmin} CRAM-MD5 authentication succeeded.
    Apr 23 2006 16:42:31 QUIT: {0x00000000000000000000000000000001, netadmin} disconnected.
    and OD LDAP log:
    Apr 23 16:42:31 ci slapd[81]: bind: invalid dn (netadmin)\n
    Nothing in the AFP log.
    Any thoughts on what I should try, or something obscure I may have missed when setting up Mac OS client network home directories with AFP?
    Thanks
    Mitch
    Server: 10.4.6
    Workstations: 10.4.6

    Getting closer.
    Kerberos wasn't running and the ODM wouldn't Kerberize.
    This thread sorted out that issue:
    http://discussions.apple.com/thread.jspa?messageID=2186542&#2186542
    Kerberos is running now, but Mac clients still cannot log in.
    hostname and sso_util info -g both resolve properly,
    but when I run "slapconfig -kerberize diradmin REALM_NAME",
    all looks good until the command (with the proper substitutions)
    "sso_util configure -r REALM_NAME -f /LDAPv3/127.0.0.1 -a diradmin -p diradmin_password -v 1 all"
    automatically runs, and I get a list of:
    SendInteractiveCommand: failed to get pattern.
    SendInteractiveCommand: failed to get pattern.
    SendInteractiveCommand: failed to get pattern.
    and "sso_util command failed with status 2".
    The sso_util command by itself spits out:
    Contacting the directory server
    Creating the service list
    Creating the service principals
    kadmin: Incorrect password while initializing kadmin interface
    SendInteractiveCommand: failed to get pattern.
    kadmin: Incorrect password while initializing kadmin interface
    SendInteractiveCommand: failed to get pattern.
    kadmin: Incorrect password while initializing kadmin interface
    SendInteractiveCommand: failed to get pattern.
    etc...
    even though the login/password are good.
    Any thoughts on what I should check or where I should go next?
    Thanks
    Mitch

  • How to specify one ethernet port for network home directories (other for normal filesharing)?

    So I'm trying to get home directories up and running on a 10.6.8 Xserve (waiting until I get my NFS sharepoints migrated to a Linux server [for other reasons] before moving up to 10.7 Server). I'm posting here since that will be happening in the next few weeks, and it might be applicable now (so I can at least get this resolved ahead of time).
    I have a different DNS entry for each ethernet port: server.office.domain.com at 192.168.0.11 for the first, and homes.services.internal at 192.168.0.10 for the second. DNS lookups for both resolve correctly (as do the reverse lookups).
    If I use Server Admin to pick a sharepoint as an automount for home directories, everything is fine, but it picks the server.office.domain.com hostname. That works just fine, but it is also the connection that feeds file sharing. I'd prefer to split the home directory traffic out onto the second ethernet port. So I tried duplicating the initial connection (since it can't be edited directly in Workgroup Manager) and changing the hostname to the internal one, but I get an error when attempting to log in (the client login screen gives a very helpful "Couldn't login because of an error" message) and I don't see anything in the server logs.
    The client machine shows the following line:
    Code:
    10/20/12 5:27:42.688 PM authorizationhost: ERROR | -[HomeDirMounter mountNetworkHomeWithURL:attributes:dirPath:username:] |
         PremountHomeDirectoryWithAuthentication( url=afp://homes.services.internal/Users,
         homedir=/Network/Servers/homes.services.internal/Volumes/HomeDirectories/Users/user123, name=user123 ) returned 45
    (added line breaks so it didn't extend off the page)
    So it looks like this is failing because the automount isn't in place, but I'm not sure how to work that out either (i.e., how do I add it while making sure it uses the internal hostname?).
    Any suggestions on getting this to work?
    I realize one solution is just to LACP the two ports, but that is a different ball of wax (I may do that later if I get a 4 port ethernet card and performance limitations demand it).

    A possible solution might be this.
    On ADSLBOX and CABLEBOX configure different subnets for the LAN, e.g.
    ADSLBOX:    192.168.1.0/24
    CABLEBOX: 192.168.2.0/24
    The MEDIABOX gets these static IPs:
    ADSL-LAN: 192.168.1.2
    CABLE-LAN: 192.168.2.2
    On the MEDIABOX, configure the two network interfaces using two routing tables.
    The ADSL-LAN routing table
    ip route add 192.168.1.0/24 dev eth0 src 192.168.1.2 table 1
    ip route add default via 192.168.1.1 table 1
    The CABLE-LAN routing table
    ip route add 192.168.2.0/24 dev eth1 src 192.168.2.2 table 2
    ip route add default via 192.168.2.1 table 2
    The main routing table
    ip route add 192.168.1.0/24 dev eth0 src 192.168.1.2
    ip route add 192.168.2.0/24 dev eth1 src 192.168.2.2
    # use the CABLE-LAN gateway as default, so general internet traffic from MEDIABOX runs over CABLEBOX
    ip route add default via 192.168.2.1
    define the lookup rules
    ip rule add from 192.168.1.2 table 1
    ip rule add from 192.168.2.2 table 2
    To test the setup:
    ip route show
    ip route show table 1
    ip route show table 2
    I don't know how to persist something like this in Arch Linux using netctl; it might require writing a special systemd unit. The above is a working example from a Red Hat box at my company.
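    On the persistence question: a hedged sketch of a oneshot systemd unit that replays the ip commands at boot (paths illustrative; put the commands above into the referenced script):
    <pre>
    # /etc/systemd/system/policy-routing.service
    [Unit]
    Description=Dual-LAN policy routing
    After=network.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/usr/local/bin/policy-routing.sh

    [Install]
    WantedBy=multi-user.target
    </pre>
    Enable it with "systemctl enable policy-routing.service".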

  • Home directories deleted

    I have four Solaris 10 x86_64 virtual servers on VMware which are hardened with DoD STIGs. Recently I was given approval to enable file sharing through NFS, which required enabling the following services:
    rpc/bind:default
    nfs/status:default
    nfs/mapid:default
    nfs/cbd:default
    nfs/nlockmgr:default
    nfs/rquota:default
    nfs/client:default
    nfs/server:default
    When I enabled these services, all of the subdirectories in /home were purged from the system. As far as I can tell this was the only negative effect, and there is nothing in the logs to indicate a problem.
    Please advise.
    Thanks!
    Peter

    Hi,
    Sorry for the late reply. There are no services in maintenance mode or that have failed to start; svcs -xv plainly returns nothing.
    After doing some reading about /etc/auto_master and /etc/auto_home per your suggestion, I learned that the standard /home directory is typically reserved for automounts (NIS or otherwise), as you said. I now see that my configuration was incorrect and that I should move the home directories to /export/home instead.
    The system is not configured as part of NIS or LDAP.
    Thanks for your help!
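    The layout the reply alludes to is the stock Solaris one, roughly as follows (a sketch; replace myhost with the machine's own hostname, for which automountd uses a loopback mount since the filesystem is local):
    <pre>
    # /etc/auto_master entry (present by default)
    /home   auto_home   -nobrowse
    # /etc/auto_home: wildcard entry mapping /home/<user> to /export/home/<user>
    *   myhost:/export/home/&
    </pre>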

  • Local access to Network Home directories

    Under Leopard, I want to allow a user to log in to the machine that hosts his network home directory, and access it locally from that machine.
    User joe is set up in Open Directory to use a network home directory that is served from machine joe-ws. In other words, his Home record points to afp://;AUTH=Client%20Krb%20v2@joe-ws/Users/joe. There is also a mount record in OD that causes joe-ws:/Users to auto mount as /Network/Servers/joe-ws/Users
    This is working perfectly -- Joe can log in anywhere on the network and see his files. He can also create portable home directories, sync them, and the like.
    Except that he can't log in on joe-ws itself: if he does, joe-ws tries to mount its own sharepoint via AFP in order to find Joe's home directory, and that isn't a happy situation.
    Is there any obvious way to do what I want?

    I have found the source of my problem and resolved it: it relates to the case-sensitivity of host names.
    What is supposed to happen is that automount and autofs are smart enough not to try to mount shares that are hosted locally. If, for example, there is a mount record in the directory asking for afp://joe-ws/Users to be mounted at /Network/Servers/joe-ws/Users, then on every machine but joe-ws, it'll happen. On joe-ws itself, automount just creates /Network/Servers/joe-ws as a link to /.
    In my case, there was a typo in the local DNS zone records, causing joe-ws to think its name was joe-ws.DOMAIN.com, whereas the mount records referred to joe-ws.domain.com (the difference being case).
    Therefore automount, running on joe-ws.DOMAIN.com, tried to mount a sharepoint hosted on joe-ws.domain.com. DNS sees these as the same host; automount doesn't, so it fails to apply the special magic that normally applies when you ask it to mount a sharepoint that is hosted locally.
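    A quick way to spot the mismatch described above (the record name is illustrative):
    <pre>
    hostname                                # what the machine believes it is called
    dscl /LDAPv3/127.0.0.1 -list /Mounts    # hostnames embedded in the mount records
    # the two must match byte-for-byte, including case, for the local-host
    # shortcut to apply
    </pre>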

  • How to configure Airport Extreme AFP disk sharing to host multiple users' home-directories (Lion, using autofs)

    I have this working, but only by completely bypassing access control, using guest access with read+write permissions.
    Do I need to buy Lion Server to do this? All my past Unix/Linux experience says Lion Server should _not_ be necessary.
    This seems like a simple and obvious setup objective, but it is proving to be harder than I would have imagined.
    Setup:
    multiple users sharing two Mac minis running OS X Lion,
    connected to an Airport Extreme (4th gen) with a USB disk shared (either via the disk password, the AEBS password, or the AEBS users' passwords).
    After much experimentation and web research, I finally managed to get the minis to automount the Airport Extreme's AFP-shared USB disk. Well, almost... It only works if, on the Airport, I set the guest access permissions to read+write and set the "Secure Shared Disks" method to "With disk password" or "With Airport Extreme password". In other words, it only works if I essentially bypass/disable access control by using the guest authentication mechanism for the AFP-shared disk.
    On the Lion side of this, I am automounting the users directories via "autofs". The config files for this are
    /etc/auto_master:
    # Automounter master map
    +auto_master            # Use directory service
    /net                    -hosts          -nobrowse,hidefromfinder,nosuid
    /home                   auto_home       -nobrowse,hidefromfinder
    /Network/Servers        -fstab
    /-                      -static
    /-                      auto_afp
    /etc/auto_afp:
    # Automounter AFP master map
    # https://discussions.apple.com/thread/3336384?start=0&tstart=0
    /afp/users -fstype=afp afp://;AUTH=No%20User%20Authent@<airport-hostname>/Users/
    Then, after rebooting and verifying read+write access to the /afp/users directories, I changed each user's home directory: in System Preferences > System > Users & Groups, right-click a user to access the Advanced Options, and change the Home directory field to point at the AFP-mounted /afp/users/Users/* home directories.
    I experimented with alternate UAM specifications, as well as both OS X and AEBS users & passwords. Using guest access is the only thing that has worked.
    Any pointers would be appreciated...
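    One way to separate autofs problems from AFP authentication problems is to try an authenticated (non-guest) mount by hand; a sketch, with the account name and hostname illustrative:
    <pre>
    mkdir /tmp/afptest
    mount_afp "afp://diskuser:secret@airport-extreme.local/Users" /tmp/afptest
    ls /tmp/afptest && umount /tmp/afptest
    </pre>
    If this works while the autofs map only mounts as guest, the failure is in how autofs passes credentials, which squares with the broken-AutoFS conclusion in the reply below.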

    Based on lots more experimentation, which confirms the information in a parallel discussion (cf. "Automount share as non ROOT or SYSTEM user!", https://discussions.apple.com/thread/3221944), I have concluded that the Lion 10.7.2 implementation of the AutoFS mechanism is broken. I submitted a bug report via apple.com/feedback.
    Workarounds..?
    Earlier I wondered if installing Lion Server was necessary. The more I contemplate this, the more I am convinced it _should_not_ be necessary. The client-server architecture is clear: my Macs are the file-server clients, and the Airport Extreme is supposed to act as the file server. The only thing installing Lion Server would do (besides enriching Apple) is let me configure one of the Macs as the file server. This would require it to be "always on" (thus enriching my electric utility as well). OK, an additional benefit would be configuring software RAID disks attached to the Lion server, but Time Machine has worked fine for me in the past, backing up to disks mounted on the Airport Extreme.
    One solution is to create a disk partition for each user and instruct each user to connect/authenticate to the Airport Extreme AFP share at login. The multiplicity of partitions is necessary since the first user to mount the AFP share takes ownership of it, blocking other users from accessing that partition. A user can "steal" ownership by reconnecting, but this leaves the other user's applications and open files dangling.
    This dysfunctional situation really *****. Before installing Lion, I put a 64 GB SSD (solid state disk) in each of our Macs. I did this expecting to easily configure the /Users/* data on external networked storage. I'm having a deja-vu "Bill Gates"-ware moment; problems like this were why I abandoned Windows.
    I will make a few more experiments using the deprecated /etc/fstab mechanism. Maybe that will bypass the brokenness of AutoFS...? Alternatively, I could try running Kerberos authentication to bypass whatever is broken in AutoFS, but that would require a running Kerberos daemon somewhere. Possibly I could configure a Kerberos service to run on both my Macs (without installing Lion Server)...?
    Stay tuned...

  • Home Directories not mounting

    I'm setting up an OS X network for the first time.
    I've got Open Directory network logins working, but I can't get the home directories to mount over the network. When logging in, a dialog box says that an error occurred and that the home directory is mounted via SMB or AFP.
    So I logged in as a local user on the client machine to poke around. I don't see the server listed in /Network/Servers, but I can manually do a Connect to Server, enter afp://server.dom.ain/Users/username, and it's fine. This afp:// URL is the same one specified as the user's home directory.
    I have verified that /Users is exported on the server.
    Do I need to go to every client and create an automount map for this, or is there something else I've forgotten?
    Thanks...
    Mac OS X 10.4.9 server and clients

    The first thing to do when you're having any kind of login problem is to ssh in to the client machine and run tail -f /var/log/system.log, then log in on the client machine and watch for clues.
    Step by step:
    1. Make sure Remote Login is enabled in the Sharing preferences on the client machine (you can turn it off when you're done if you're paranoid).
    2. On any other Mac (or SSH-equipped PC), run Terminal (in /Applications/Utilities) and type "ssh username@IP-of-client-machine", obviously replacing "username" and "IP-of-client" with your values, and no quotes of course. Note that "username" needs to be an administrative user. If you haven't logged in with Terminal before, keep in mind that it does not echo characters when you type the password. Just type it and press enter. You may have to type "yes" after that to set up the initial trust relationship between the two computers.
    3. Once you're logged in to the client machine, type "tail -f /var/log/system.log" (again, no quotes) and leave it like that. You now have one computer watching another computer's logs in "real time" -- VERY handy when you're troubleshooting a reproducible error.
    4. Go back to the client computer and log in with the problematic account. The other computer will show you everything being logged in system.log. Watch for clues that something is wrong (something couldn't be found, access denied, anything that doesn't sound too friendly).
    5. Figure out what they mean or copy/paste 'em here! The part that counts is anything that came up on the watching computer's screen from the moment you clicked "Log In" on the client computer to the moment you are back at your regular (deficient) desktop, confident it's not gonna do anything else.

  • Home directories from GUI work but not from command line

    I'm having trouble accessing home directories through SSH. After significant trouble, I reinstalled OS X Server 10.4.6 on each of my 24 Xserves. This is an HPC cluster with an Xserve RAID providing the storage space. I promoted the first Xserve to an Open Directory master and created two test users. I created two sharepoints from the Xserve RAID: one for general data and one for home directories. I enabled AFP on both, granted R/W access to the default group "staff" (of which my two test users are members), and set the home directory sharepoint ("HomeDir") to automount using AFP for users' home directories through WGM. If I use Remote Desktop to log in to one of the cluster nodes, the home directory seems to mount correctly. However, if I try to access the same user account through the command line, the home directory cannot be found.
    I can cd to /Network/Servers/headnode.domain.com/Volumes/HomeDir, but I cannot see any of the folders listed there. On the head node, I can verify that the user's home directory has been created; it seems to be fully populated. I've checked permissions, and they seem to be correct; but the fact that I cannot access it from the command line suggests a broader permissions issue.
    I've tried the identical setup using an NFS automount instead of AFP, with no success. I can't find any answers about command line/SSH access for this problem. Any help would be appreciated.
    Thanks,
    CF

    I've discovered something else in the course of troubleshooting this problem. If I log in as a test user through Remote Desktop to, say, node1.domain.com, the home directory mounts correctly, and as long as I do not reboot either headnode.domain.com or node1.domain.com, I can log in via SSH and access my home directory.
    Of course, if I do reboot, access no longer works. I've browsed through dozens of other posts and tried to follow other users' suggestions. I've manually created a hosts file and uploaded it to /etc/hosts on each node. I've double- and triple-checked DNS and DHCP: I have LDAP propagated through autodiscovery on DHCP, each node statically assigned, and DNS entries for each node. I also have computer entries in WGM, and I've used the FQDN of each node (node#.domain.com) for everything across the board.
    I'm also hitting the "authentication error" when I try to access my other AFP sharepoint. I can't figure this out.

  • Home Directories can't be deleted in Workgroup Manager

    I set up a home directory at the root level of my server to test it. I was successful, so I "thought" I knew what I was doing.
    I needed the directories to be on my Xserve RAID, as that's where the room is, and I expect to have 15-20 home directories.
    So I deleted the user folders at the root level and unshared them in Server Admin (probably the WRONG order).
    Now the path to the deleted directories still shows up in Workgroup Manager, and the little "minus" button is grayed out. I see no other way to delete it.
    Now I'm stuck: it appears that any time I try to create a new home directory, it "saves" quietly, but the user folder it creates is only 44k (although it includes all the home folders). When I attempt a login I get an error:
    "You are unable to log in to the user account "jeff" at this time. Logging in to the account failed because an error occurred. The home folder for the user account is located on an AFP or SMB server....."
    I tried exporting all my users, deleting them, and importing them... same issue.
    Any other ideas??
    Thanks

    Hi
    When you install OS X Server, by default it creates and shares Users, Groups, and Public. This has been the case ever since 10.2 came out, and Leopard Server continues this 'tradition'. If you delete any of these default folders after first unsharing them, the server will complain mightily and give you problems.
    If you require a sharepoint for your users' networked home folders to reside elsewhere, simply unshare the default folders, create similar folders wherever you want them (an Xserve RAID, for example), share those, and continue doing what you need to do.
    Whenever I have had to attend a site where the local admin has deleted these folders, more often than not it has required a rebuild (drastic, I know). I have had some limited success by stopping all the services (unfortunately this also means demoting your OD Master to Standalone) and recreating the default folders, named the same, at the root level of the server's boot drive. You can do it using the Finder or Terminal:
    sudo mkdir /Users
    Then restart the server. If on successful login the icon on the Users folder comes back then you should be OK.
    Hope this helps, Tony

  • Key-based SSH Authentication and AFP Home Directories

    I'm setting up some users with AFP home directories (hosted on an Xserve, with a couple of G5 towers as Open Directory clients). When logging in on the console of a G5 tower, the home directories work fine. The users can SSH into the Xserve using SSH key authentication. However, the users cannot SSH into the G5 towers using SSH key authentication and are instead asked for passwords, presumably because the AFP home directory is mounted with guest access (and thus the keys are unreadable) until the password is entered.
    Is there a known workaround for this? A different way of setting up the home directory mounting? I don't particularly want to go the mobile home directory route because, among other things, as far as I know mobile home directories only sync when a user logs in to the GUI. If that's not the case (that is, if they will sync when a user logs in to the machine with SSH), then I guess that would be a reasonable solution.
    Thanks in advance for any suggestions!

    That was just speculation on my part; I'm not sure exactly what's happening. I do know that until the user authenticates, the entire automount is mounted with guest access... and the user can't authenticate until the key file can be read. It may be that I was just encountering some transient failure, however.
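    A hedged workaround sketch for the key-readability problem described above: point sshd at per-user key files kept on local disk, so the keys are readable before the AFP home is mounted (paths illustrative; on 10.4 the server config is /etc/sshd_config):
    <pre>
    # in /etc/sshd_config on the G5 towers:
    #   AuthorizedKeysFile /etc/ssh/authorized_keys/%u
    # then stage each user's public key on local disk:
    sudo mkdir -p /etc/ssh/authorized_keys
    sudo cp joe_id.pub /etc/ssh/authorized_keys/joe
    sudo chmod 644 /etc/ssh/authorized_keys/joe
    </pre>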
