Automounting home directories

I really like how Solaris keeps home directories at /export/home/<username> and then mounts them at /home/<username> upon login. I tried to get this same functionality with OL6.3 but couldn't get the automounter to work.
My setup is:
/etc/auto.master contains
/home /etc/auto.home
and /etc/auto.home contains
* :/export/home/&
I restarted the services, but when any user logs in the system complains about not having a home directory. What am I missing?

I have not configured autofs recently, but have the following example in my notes:
<pre>
# cat auto.master
/nfs-photon01 /etc/auto.photon01 vers=3,rw,hard,proto=tcp,intr
# cat auto.photon01
* photon01.example.com:/&
# mkdir /nfs-photon01
# service autofs reload
</pre>
Does your /etc/auto.home file specify the NFS server?
By the way, NFSv4 is the default in OL6, and it requires that you export all NFS directories under one virtual root. For instance, if /ext/nfs is the NFS root (fsid=0), everything else that you want to share over NFSv4 must be accessible under /ext/nfs. Check your /etc/exports file. There are examples on the web; you should be able to find them by searching for "NFSv4 fsid=0".
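As a hypothetical sketch of that fsid=0 layout (the subnet is a placeholder; the paths follow the /ext/nfs example above), the server's /etc/exports might look like:

```
# /etc/exports -- hypothetical sketch; adjust the subnet to your network.
# fsid=0 marks /ext/nfs as the NFSv4 pseudo-root; everything else shared
# over NFSv4 must live under it.
/ext/nfs        192.168.1.0/24(rw,fsid=0,no_subtree_check)
/ext/nfs/home   192.168.1.0/24(rw,no_subtree_check)
```

After editing, run `exportfs -ra` on the server to re-read the exports.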

Similar Messages

  • Automount home directories from another computer

    Hello,
    after 2 days of work, I write here to find some help.
I have a well-configured Leopard Server 10.5.8 which serves user accounts through Open Directory: network users can log in to all my Mac OS X clients, and home directories are automounted at /Network/Servers/myserver.com/Users/user1. myserver.com is the server hosting user1's home directory.
Now I want (let's say) host1 (a Leopard workstation) to become the home directory server for user1. So I created a local account on host1 with the network credentials (uid, gid and password) and configured /etc/exports to export his home directory over NFS.
How do I tell the LDAP server that his home directory is located not on myserver.com but on host1? That is, when user1 logs in on host2, host2 should automount host1:/Users/user1 at /Network/Servers/host1/Users/user1 or elsewhere.
Note: I have already tested the configuration manually and it works!
1. In WGM, I set /path/user1 as user1's default home directory
2. On the client (host2), I manually mount host1:/Users/user1 to /path/user1
3. user1 logs in to host2, and it works fine.
But I cannot do this for every client and every new such user! This is why I want to put this information (for each such user) in LDAP, so it is automatically distributed to the clients.
    Thank you for your help,
    Joan
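For the NFS export on host1, a minimal /etc/exports entry in BSD syntax might look like the following (a sketch; the network range is a placeholder for your LAN):

```
# /etc/exports on host1 -- hypothetical sketch; restrict to your own subnet
/Users/user1 -alldirs -network 192.168.1.0 -mask 255.255.255.0
```

After editing, restart the NFS daemons on host1 so the new export is picked up.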


  • Automounting home directories from Redhat Linux OpenLDAP server

We have an existing, functioning autofs environment here. At least the Linux boxes have no problem automounting user home directories.
I am looking for a more comprehensive solution for getting our Macs integrated into this environment.
    What should the ldap entries contain?
What should the attribute mappings be set to?
    I have ldap authentication working - the only thing left is automounting.
    Also - is there a way to get the nfs client to work over secure ports by default? Or is this a BSD thing?
    Thanks

    http://rajeev.name/blog/2007/12/09/integrating-leopard-autofs-with-ldap/
There's some additional LDAP schema stuff that has to be done; Apple seems to have gone with the absolute bleeding-edge RFC for automounts - and then removed all legacy support.
    This covers most of the issues, however, there is one that I'm still unable to resolve:
    typically, a linux box does autofs using an entry like
    "* -fstype=nfs foo:/home/&"
LDAP uses a slightly different entry, but it works.
    I haven't for the life of me been able to get auto.home mounting from LDAP as easily as if it is defined in the file.
    The frustrating part is that the post gives a really good example LDIF; but it still doesn't seem to work.
    So while I have other automounts working wonderfully, the wildcarded home directories are still a bust.
    So if you're willing to forgo using LDAP for autofs mounting home, then hard-coding /etc/auto_home will fit the bill.
But since the link seems to imply that it works, I'm wondering what's going on...
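As a sketch of that hard-coded fallback, an /etc/auto_home on the Mac client could look like this (foo.example.com is a placeholder for your NFS server):

```
# /etc/auto_home -- hypothetical sketch; foo.example.com is a placeholder
* -fstype=nfs foo.example.com:/home/&
```

Run `sudo automount -vc` afterwards to flush the automounter cache and pick up the new map.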

  • Automount Home Directories from LDAP

I have a Red Hat Linux LDAP/Kerberos server (IPA server) that, besides authentication, I also use as an NFS server sharing users' home directories.
All information for the Solaris machines is provided from a custom DUAProfile in LDAP.
    Relevant autofs information in DUAProfile:
    serviceSearchDescriptor: automount:cn=default,cn=automount,dc=example,dc=org
    serviceSearchDescriptor:auto_master:automountMapName=auto.master,cn=default,cn=automount,dc=example,dc=org
    All users on the network have their home directories under /home
I have an auto.home map on the server with the key:
    * -rw,soft ipaserver.example.org:/home/&
This setup works perfectly for our Linux clients but not for Solaris.
In Solaris, autofs seems to look in the LDAP tree for local users' home directories too, making them unavailable at login,
even though +auto_home comes after the local user mappings.
    t4 LOOKUP REQUEST: Tue Dec 25 22:08:36 2012
    t4 name=localuser[] map=auto.home opts= path=/home direct=0
    t4 LOOKUP REPLY : status=2
Removing the autofs entries from the DUAProfile and specifying every user directly in /etc/auto_home works, albeit with a delay in mounting.
This is, however, a less than satisfactory solution.
I thought about just removing the local users' mounts to /home from /export/home, but that does not seem to be a good idea.
How could I make this work the way I want with wildcards?
    Regards,
    Johan.

    I have now tried with a different share and mountpoint (/nethome) on a different test server.
I verified that I can mount it through krb5, and automount works for Red Hat Linux clients.
ssh, su and console login work on Solaris 11, except for finding the home directory through automount.
    root@solaris2:~# ldapclient list
    NS_LDAP_FILE_VERSION= 2.0
    NS_LDAP_BINDDN= uid=solaris,cn=sysaccounts,cn=etc,dc=example,dc=org
    NS_LDAP_BINDPASSWD= {XXX}XXXXXXXXXXXXXX
    NS_LDAP_SERVERS= server.example.org
    NS_LDAP_SEARCH_BASEDN= dc=example,dc=org
    NS_LDAP_AUTH= tls:simple
    NS_LDAP_SEARCH_REF= TRUE
    NS_LDAP_SEARCH_SCOPE= one
    NS_LDAP_SEARCH_TIME= 10
    NS_LDAP_CACHETTL= 6000
    NS_LDAP_PROFILE= solaris_authssl1
    NS_LDAP_CREDENTIAL_LEVEL= proxy
    NS_LDAP_SERVICE_SEARCH_DESC= passwd:cn=users,cn=accounts,dc=example,dc=org
    NS_LDAP_SERVICE_SEARCH_DESC= group:cn=groups,cn=compat,dc=example,dc=org
    NS_LDAP_SERVICE_SEARCH_DESC= netgroup:cn=ng,cn=compat,dc=example,dc=org
    NS_LDAP_SERVICE_SEARCH_DESC= ethers:cn=computers,cn=accounts,dc=example,dc=org
    NS_LDAP_SERVICE_SEARCH_DESC= automount:cn=default,cn=automount,dc=example,dc=org
    NS_LDAP_SERVICE_SEARCH_DESC= auto_master:automountMapName=auto.master,cn=default,cn=automount,dc=example,dc=org
    NS_LDAP_SERVICE_SEARCH_DESC= aliases:ou=aliases,ou=test,dc=example,dc=org
    NS_LDAP_SERVICE_SEARCH_DESC= printers:ou=printers,ou=test,dc=example,dc=org
    NS_LDAP_BIND_TIME= 5
    NS_LDAP_OBJECTCLASSMAP= shadow:shadowAccount=posixAccount
    NS_LDAP_OBJECTCLASSMAP= printers:sunPrinter=printerService
    root@solaris2:~# sharectl get autofs
    timeout=600
    automount_verbose=true
    automountd_verbose=true
    nobrowse=false
    trace=2
    environment=
    From /var/svc/log/system-filesystem-autofs\:default.log:
    t4 LOOKUP REQUEST: Wed Dec 26 12:28:43 2012
    t4 name=user02[] map=auto.nethome opts= path=/nethome direct=0
    t4 getmapent_ldap called
    t4 getmapent_ldap: key=[ user02 ]
    t4 ldap_match called
    t4 ldap_match: key =[ user02 ]
    t4 ldap_match: ldapkey =[ user02 ]
    t4 ldap_match: Requesting list for (&(objectClass=automount)(automountKey=user02)) in auto.nethome
    t4 ldap_match: __ns_ldap_list FAILED (2)
    t4 ldap_match: no entries found
    t4 ldap_match called
    t4 ldap_match: key =[ \2a ]
    t4 ldap_match: ldapkey =[ \2a ]
    t4 ldap_match: Requesting list for (&(objectClass=automount)(automountKey=\2a)) in auto.nethome
    t4 ldap_match: __ns_ldap_list FAILED (2)
    t4 ldap_match: no entries found
    t4 getmapent_ldap: exiting ...
    t4 do_lookup1: action=2 wildcard=FALSE error=2
    t4 LOOKUP REPLY : status=2
    The automount map is called auto.nethome
    key is: * -rw,soft server.example.org:/nethome/&
Is it that Solaris automount doesn't like an asterisk (*) in an automount key?
At least the local users' home directories now work when I am not trying to autofs-mount to /home.
Does anyone know what is wrong here?
    Thank you for your help.
    Regards,
    Johan.
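For comparison, the wildcard entry in the directory would normally look something like this LDIF (a sketch using the rfc2307bis-style automount schema; the DNs follow the example.org layout above):

```
dn: automountMapName=auto.nethome,cn=default,cn=automount,dc=example,dc=org
objectClass: automountMap
automountMapName: auto.nethome

dn: automountKey=*,automountMapName=auto.nethome,cn=default,cn=automount,dc=example,dc=org
objectClass: automount
automountKey: *
automountInformation: -rw,soft server.example.org:/nethome/&
```

Note that the trace above shows Solaris escaping the key as \2a in its search filter, per the LDAP filter syntax rules, so it is worth checking whether your directory server returns the wildcard entry for a filter of (automountKey=\2a).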

  • Automount Home Directories Failed

    Hi There,
I have a Solaris 10 server running a ZFS filesystem.
After patching this server, the clients running Solaris 10 are no longer mounting the home directories.
I see that the /etc/dfs/dfstab file has the words "Error: Syntax" in front of the lines where the home directories are shared.
Also, the autofs service is up, while the nfs/server service is offline*.
Any thoughts on what I should check?
    any help will be greatly appreciated.
    thanks
    wasim.

Thanks a lot for the reply; here is what you need.
    svcs -xv nfs/server
    svc:/network/nfs/server:default (NFS server)
    State: offline since Tue Feb 22 09:56:10 2011
    Reason: Start method is running.
    See: http://sun.com/msg/SMF-8000-C4
    See: man -M /usr/share/man -s 1M nfsd
    See: /var/svc/log/network-nfs-server:default.log
    Impact: This service is not running.
    bash-3.00# dfshares
    nfs dfshares:edison: RPC: Program not registered
    bash-3.00# vi dfs/dfstab
    "dfs/dfstab" 16 lines, 629 characters
    # Do not modify this file directly.
    # Use the sharemgr(1m) command for all share management
    # This file is reconstructed and only maintained for backward
    # compatibility. Configuration lines could be lost.
    # Place share(1M) commands here for automatic execution
    # on entering init state 3.
    # Issue the command 'svcadm enable network/nfs/server' to
    # run the NFS daemon processes and the share commands, after adding
    # the very first entry to this file.
    # share [-F fstype] [ -o options] [-d "<text>"] <pathname> [resource]
    # .e.g,
    # Error: Syntax share -F nfs -o rw -d "home directory" /tank/home
    # Error: Syntax share -F nfs -o ro -d "local" /tank/local
    bash-3.00# zfs get sharenfs tank/home
    NAME PROPERTY VALUE SOURCE
    tank/home sharenfs rw=soemgr,rw=soelab113 local
Well, I did try to correct the dfstab file, but it did not work. I don't know what was being used to share the home directories, but I do recall that the dfstab file was not like the one above.
    any thoughts,
    wasim

  • How to configure Airport Extreme AFP disk sharing to host multiple users' home-directories (Lion, using autofs)

    I have this working, but only by completely bypassing access control, using guest access with read+write permissions.
    Do I need to buy Lion Server, to do this. All my past unix/linux experience says Lion Server should _not_ be necessary.
    This seems like a simple & obvious setup objective, but it is proving to be harder than I would imagine.
    Setup:
multiple users, sharing two Mac minis running OS X Lion,
connected to an Airport Extreme (4th gen) with a USB disk shared (either via disk password, AEBS password, or using AEBS users' passwords).
After much experimentation and web research, I finally managed to get the minis to auto-mount the Airport Extreme's AFP-shared USB disk. Well, almost... It only works if, on the Airport, I set the guest access permissions to read+write and set the "Secure Shared Disks" method to "With disk password" or "With Airport Extreme password". In other words, it only works if I essentially bypass/disable access control by using the guest authentication mechanism for the AFP-shared disk.
    On the Lion side of this, I am automounting the users directories via "autofs". The config files for this are
    /etc/auto_master:
    # Automounter master map
    +auto_master            # Use directory service
    /net                    -hosts          -nobrowse,hidefromfinder,nosuid
    /home                   auto_home       -nobrowse,hidefromfinder
    /Network/Servers        -fstab
    /-                      -static
    /-                      auto_afp
    /etc/auto_afp:
    # Automounter AFP master map
    # https://discussions.apple.com/thread/3336384?start=0&tstart=0
    /afp/users -fstype=afp afp://;AUTH=No%20User%[email protected]/Users/
    Then, after rebooting and verifying read+write access to the /afp/users directories, I change each user's home directory: In System Preferences > System > Users & Groups, I right-click over the users to access the Advanced Options, changing the Home directory field to point at the AFP-mounted /afp/users/Users/* home directories.
I experimented with alternate UAM specifications, as well as both OS X and AEBS users & passwords. Using guest access is the only thing that has worked.
    Any pointers would be appreciated...

    Based on lots more experimentation which confirms the information in a parallel discussion (cf. Automount share as non ROOT or SYSTEM user! https://discussions.apple.com/thread/3221944), I have concluded that the Lion 10.7.2 implementation of AutoFS mechanism is broken. I submitted a bug report via apple.com/feedback.
    Work arounds..?
Earlier I wondered if installing Lion OS X Server was necessary. The more I contemplate this, the more I am convinced it _should_not_ be necessary. The client-server architecture is clear: my Macs are the file-server clients and the Airport Extreme is supposed to act as the file server. The only thing installing Lion Server would do (besides enriching Apple.com) is let me configure one of the Macs as the file server. This would require it to be "always on" (thus enriching my electric utility as well). Okay, an additional benefit would be configuring software RAID disks attached to the Lion server, but Time Machine has worked fine for me in the past, backing up to disks mounted on the Airport Extreme.
One solution is to create a disk partition for each user and instruct each user to connect / authenticate to the Airport Extreme AFP share at login. The multiplicity of partitions is necessary since the first user to mount the AFP share takes ownership of it, blocking other users from accessing that disk partition. A user can "steal" ownership by reconnecting, but this will leave the other user's applications & open files dangling.
This dysfunctional situation really *****. Before installing Lion, I put a 64 GB SSD (solid state disk) in each of our Macs. I did this expecting to easily configure the /Users/* data on external networked storage. I'm having a déjà vu "Bill Gates"-ware moment; problems like this were why I abandoned Windoz.
I will make a few more experiments using the deprecated /etc/fstab mechanism. Maybe that will bypass the broken-ness of AutoFS...? Alternately, I guess I could also try running Kerberos authentication to bypass whatever is broken in AutoFS, but that would require running a Kerberos daemon somewhere. Possibly I could configure a Kerberos service to run on both my Macs (without installing Apple's Lion Server)...?
    Stay tuned...

  • Home Directories not mounting

    I'm setting up an OS X network for the first time.
I've got Open Directory-based network logins working, but I can't get the home directories to mount over the network. When logging in, a dialog box says that an error occurred and that the home directory is mounted via SMB or AFP.
So I log in as a local user on the client machine to poke around. I don't see the server listed in /Network/Servers, but I can manually do a Connect To Server and put in afp://server.dom.ain/Users/username and it works fine. This afp:// URL is the same as the one specified as the user's home directory.
    I have verified that /Users is exported on the server.
    Do I need to go in to every client and create an automount map for this or is there something else I've forgotten?
    Thanks...
    various   Mac OS X (10.4.9)   10.4.9 server and clients

    The first thing to do when you're having any kind of login problem is to ssh in to the client machine and tail -f /var/log/system.log, then log in to the client machine and watch for clues.
    Step by step:
    1. make sure Remote Login is enabled in the Sharing preferences on the client machine (you can turn it off when you're done if you're paranoid)
    2. on any other mac (or ssh equipped PC) run Terminal (in /Applications/Utilities) and type "ssh username@IP-of-client-machine" obviously replacing "username" and "IP-of-client" with your values, and no quotes of course. Note that "username" needs to be an administrative user. If you haven't logged in with Terminal before, keep in mind that it does not echo back characters when you type in the password. Just type it and press enter. You may have to type "yes" after that to set up the initial trust relationship between the two computers.
    3. Once you're logged in to the client machine, type "tail -f /var/log/system.log" (again, no quotes) and leave it like that. You now have one computer watching another computer's logs in "real time" -- VERY handy when you're troubleshooting a reproducible error.
    4. Go back to the client computer and log in with the problematic account. The other computer will show you everything being logged in system.log. Watch for clues that something is wrong. (something couldn't be found, access denied, anything that doesn't sound too friendly)
    5. Figure out what they mean or copy/paste 'em here! The part that counts is anything that came up on the watching computer's screen from the moment you clicked "Log In" on the client computer to the moment you are at your regular (deficient) desktop, confident it's not gonna do anything else.

  • Home directories from GUI work but not from command line

I'm having trouble accessing home directories through SSH. After significant trouble, I reinstalled OS X 10.4.6 Server on each of my 24 Xserves. This is an HPC cluster with an Xserve RAID providing the storage space. I promoted the first Xserve to an Open Directory master and created 2 test users. I created two share points on the Xserve RAID--one for general data and one for home directories. I enabled AFP on both, granted R/W access to the default group "staff" (of which my two test users are members) and set the home directory share point ("HomeDir") to automount using AFP for users' home directories through WGM. If I use Remote Desktop to log in to one of the cluster nodes, the home directory seems to mount correctly. However, if I try to access the same user account through the command line, the home directory cannot be found.
    I can cd to /Network/Servers/headnode.domain.com/Volumes/HomeDir; but I cannot see any of the folders listed there. On the head node, I can verify that the user's home directory has been created--it seems to be fully populated. I've checked permissions, and they seem to be correct; but the fact that I cannot access it from the command line seems to suggest that there's a greater permissions issue.
    I've tried doing the identical setup using an NFS automount instead of AFP with no success. I can't find any answers for command line/SSH access to this problem. Any help would be appreciated.
    Thanks,
    CF

    I've discovered something else in the course of troubleshooting this problem. If I login as a test user through remote desktop to, say, node1.domain.com; the home directory mounts correctly; and, as long as I do not reboot either headnode.domain.com or node1.domain.com, I can login via SSH and access my home directory.
    Of course, if I do reboot--access no longer works. I've browsed through dozens of other posts and tried to follow other users' suggestions. I've manually created a hosts file, which I've uploaded to /etc/hosts on each node. I've double and triple checked DNS and DHCP--I have LDAP propagated through autodiscovery on DHCP; I have each node statically assigned; and I have DNS entries for each node. I also have computer entries in WGM; and I've used the FQDN of each node (node#.domain.com) for everything across the board.
    I'm also hitting the "authentication error" when I try to access my other AFP sharepoint. I can't figure this out.

  • Key-based SSH Authentication and AFP Home Directories

    I'm setting up some users with AFP home directories (hosted on an Xserve, with a couple of G5 towers as Open Directory clients). When logging in on the console on a G5 tower, the home directories work fine. The users can SSH into the Xserve using SSH key authentication. However, the users can not SSH into the G5 towers using SSH key authentication, and are instead asked for passwords - presumably because the AFP home directory is mounted with guest access (and thus the keys are unreadable) before the password is entered.
    Is there a known workaround for this? A different way of setting up the home directory mounting? I don't particularly want to go the mobile home directory route, because (among other things), as far as I know, mobile home directories only sync when a user logs into the GUI. If that's not the case (that is, if they will sync when a user logs into the machine with SSH), then I guess that would be a reasonable solution.
    Thanks in advance for any suggestions!

    That was just speculation on my part; I'm not sure exactly what's happening. I do know that until the user authenticates, the entire automount is mounted with guest access... and that the user can't authenticate until the key file can be read. It may be the case that I was just encountering some transient failure or the like, however.

  • Stumped on AFP network home directories.

    Heyo,
Been RTFMing File Services, User Management and Open Directory. Also looked at www.AFP548.com but didn't find anything helpful.
We have a mixed environment, and Windows users aren't having any problem with network domain logins or using SMB shares. Mac clients can mount the network shares with AFP, but network homes are a no-go.
    Made the changes needed for the firewall and tried it with the firewall off just to be sure.
    The /Home share is automounted (not using the default /Users).
    Guest access is on in Sharing and AFP.
    Network Mount for /Home is set to Enable network mounting, AFP and User Home Directories.
    SMB Windows Homes are in the same directory and run without problems.
    Directory Access on the Client saw the server and looks ok.
    Only ref. I can find for the login attempt is under Open Directory Password Service Server Log:
    Apr 23 2006 16:42:31 RSAVALIDATE: success.
    Apr 23 2006 16:42:31 USER: {0x00000000000000000000000000000001, netadmin} is the current user.
    Apr 23 2006 16:42:31 AUTH2: {0x00000000000000000000000000000001, netadmin} CRAM-MD5 authentication succeeded.
    Apr 23 2006 16:42:31 QUIT: {0x00000000000000000000000000000001, netadmin} disconnected.
    and OD LDAP log:
    Apr 23 16:42:31 ci slapd[81]: bind: invalid dn (netadmin)\n
    Nothing in the AFP log.
    Any thoughts on what I should try or something obscure I may have missed when setting up MacOS client network home directories with AFP?
    Thanks
    Mitch
    Server: 10.4.6
    Workstations: 10.4.6

    Getting closer.
    Kerberos wasn't running and the ODM wouldn't Kerberize.
    This thread sorted out the issue:
    http://discussions.apple.com/thread.jspa?messageID=2186542&#2186542
Kerberos is running now, but Mac clients still can't log in.
    hostname and sso_util info -g both resolve properly.
but when I run "slapconfig -kerberize diradmin REALM_NAME"
all looks good until the command (with the proper substitutions)
    "sso_util configure -r REALM_NAME -f /LDAPv3/127.0.0.1 -a diradmin -p diradmin_password -v 1 all"
    automatically runs and I get a list of:
    SendInteractiveCommand: failed to get pattern.
    SendInteractiveCommand: failed to get pattern.
    SendInteractiveCommand: failed to get pattern.
and "sso_util command failed with status 2"
    the sso_util command by itself spits out
    Contacting the directory server
    Creating the service list
    Creating the service principals
    kadmin: Incorrect password while initalizing kadmin interface
    SendInteractiveCommand: failed to get pattern.
    kadmin: Incorrect password while initalizing kadmin interface
    SendInteractiveCommand: failed to get pattern.
    kadmin: Incorrect password while initalizing kadmin interface
    SendInteractiveCommand: failed to get pattern.
    etc...
even though the login/password are good.
Any thoughts on what I should check or where I should go next?
    Thanks
    Mitch
    iMac G5   Mac OS X (10.4.6)  

  • Getting rid of phantom home directories in WGM

    My users have home directories listed in Workgroup Manager that they are no longer using, but the buttons to edit and remove these entries are grayed out. Even when I try to make a new user with no preset, these entries show up in the list and cannot be modified. I have configured a new share to automount for home directories, and unshared and deleted the old folders, but WGM still insists on listing their paths. Restarting AFP and the server doesn't help.
    How do I convince WGM these folders don't exist?
    Thanks!
    Mitch

    These are automounts you had set up for your user homes, you need to delete the records for them.
Using the "All Records" (bullseye) tab (enable it in WGM Preferences), delete the outdated entries under "Mounts".
    - Norbert

  • How to specify one ethernet port for network home directories (other for normal filesharing)?

    So I'm trying to get Home Directories up and running on a 10.6.8 Xserve (waiting until I get my NFS sharepoints migrated to a Linux server [for other reasons] before moving up to 10.7 Server). But posting here since that will be happening in the next few weeks, and it might be applicable now (so I can at least get that resolved ahead of time).
    I have a different DNS entry for each ethernet port: server.office.domain.com at 192.168.0.11 for the first, and homes.services.internal at 192.168.0.10 for the second. DNS lookups for both resolve correctly (as does the reverse lookup).
    If I use the Server Admin to pick a sharepoint as an automount for Home Directories, everything is fine, but it picks the server.office.domain.com hostname. Picking that works just fine, but that is also the connection that feeds the filesharing. I'd prefer to split that home directory traffic out onto the second ethernet port. So I tried just duplicating the initial connection (since it can't be edited directly in Workgroup Manager) and changing the hostname to the internal one, but I get an error when attempting to log in (the client login screen gives a very helpful "Couldn't login because of an error" error message) and don't see anything in the server logs.
    The client machine shows the following line:
    Code:
    10/20/12 5:27:42.688 PM authorizationhost: ERROR | -[HomeDirMounter mountNetworkHomeWithURL:attributes:dirPath:username:] |
         PremountHomeDirectoryWithAuthentication( url=afp://homes.services.internal/Users,
         homedir=/Network/Servers/homes.services.internal/Volumes/HomeDirectories/Users/ user123, name=user123 ) returned 45
    (added line breaks so it didn't extend off the page)
    So it looks like this is failing because the automount isn't in place, but I'm not sure how to work that out either (i.e. how do I add that making sure it uses the internal hostname?).
    Any suggestions on getting this to work?
    I realize one solution is just to LACP the two ports, but that is a different ball of wax (I may do that later if I get a 4 port ethernet card and performance limitations demand it).

    A possible solution might be this.
    On ADSLBOX and CABLEBOX configure different subnets for the LAN, e.g.
    ADSLBOX:    192.168.1.0/24
    CABLEBOX: 192.168.2.0/24
    The MEDIABOX gets these static IPs:
    ADSL-LAN: 192.168.1.2
    CABLE-LAN: 192.168.2.2
    On the MEDIABOX, configure the two network interfaces using two routing tables.
    The ADSL-LAN routing table
    ip route add 192.168.1.0/24 dev eth0 src 192.168.1.2 table 1
    ip route add default via 192.168.1.1 table 1
    The CABLE-LAN routing table
    ip route add 192.168.2.0/24 dev eth1 src 192.168.2.2 table 2
    ip route add default via 192.168.2.1 table 2
    The main routing table
    ip route add 192.168.1.0/24 dev eth0 src 192.168.1.2
    ip route add 192.168.2.0/24 dev eth1 src 192.168.2.2
    # use the CABLE-LAN gateway as default, so general internet traffic from MEDIABOX runs over CABLEBOX
    ip route add default via 192.168.2.1
    define the lookup rules
    ip rule add from 192.168.1.2 table 1
    ip rule add from 192.168.2.2 table 2
    To test the setup:
    ip route show
    ip route show table 1
    ip route show table 2
    I don't know how to persist something like this in ArchLinux using netctl. Might require to write a special systemd unit for it. Above is a working example from a RedHat box at my company.
    Last edited by teekay (2013-12-04 07:42:22)
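I don't know netctl either, but as a sketch, a oneshot systemd unit (hypothetical name; assumes iproute2 lives at /usr/bin/ip as on Arch) could replay those commands at boot:

```
# /etc/systemd/system/policy-routing.service -- hypothetical unit name
[Unit]
Description=Dual-uplink policy routing for MEDIABOX
After=network.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/ip route add 192.168.1.0/24 dev eth0 src 192.168.1.2 table 1
ExecStart=/usr/bin/ip route add default via 192.168.1.1 table 1
ExecStart=/usr/bin/ip route add 192.168.2.0/24 dev eth1 src 192.168.2.2 table 2
ExecStart=/usr/bin/ip route add default via 192.168.2.1 table 2
ExecStart=/usr/bin/ip rule add from 192.168.1.2 table 1
ExecStart=/usr/bin/ip rule add from 192.168.2.2 table 2

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable policy-routing.service`; a wrapper script called from a netctl hook would work equally well.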

  • Workgroup Manager doesn't create home directories for OD accounts

I'm having an issue where home directories aren't created for OD accounts. My setup is as follows: the home directories are stored on the OD Master (the only Apple/OD/AD server on the network), and the home directory paths are filled in as afp://192.168.1.254/Customers, fakeuser, /Users/Customers/fakeuser
This same pathing scheme works fine for local accounts; however, for OD, clicking Create Home Directory and saving the account does nothing (no errors, no folders created). If I ftp into said account, I wind up being directed to /Users (definitely not the expected behaviour).
    I am deploying a web based upload system that I want to authenticate against OD users so as to share home folders and permissions with the ftp server, once I have this figured out I will be migrating a bunch of accounts to OD from local.

    In addition to potential DNS issues, it sounds like you may be using the wrong procedure to define the users' home directories. You should never have to specify the paths manually; instead, define the share point ("Customers" in your case) to be automounted, and then it should automatically show up in the list of available home folder locations, with all the necessary paths predefined. Here's the full procedure:
    1. Run Server Admin, and select: the server name in the sidebar -> File Sharing in the toolbar -> Volumes & Browse under that -> navigate to the /Customers folder in the column view.
    2. Make sure the folder is being shared (with it selected, you should see an "Unshare" button near the top right of the window); if not share it with the Share Button (then Save the change).
    3. Select the Share Point tab under the file browser (NOT the one above it), and select the Enable Automount checkbox. A dialog will open asking for the automount details; make sure the Directory is set to /LDAPv3/127.0.0.1, Protocol to AFP, and Use for is User home folders and group folders. OK the dialog, and be sure to click Save to make the change take effect.
    4. Run Workgroup Manager, and select Accounts in the toolbar -> Users (single person icon) tab under that -> some user account(s) you want to configure under that -> Home tab on the right.
    5. Select (None) from the location list and click Save (this wipes out any current setting, so we can rebuild it correctly).
    6. The Customers share point should be in the list of available locations (due to being configured for automount); select it, then click Create Home Now, and finally Save.

  • NFS and LDAP on different servers: Problems with location of home directories

    Dear Apple Experts.
    We are using an LDAP server for user authentication
    and an NFS server for home directories.
    Both are dedicated servers on different machines.
    On the NFS server there are directories
    /home/urpi
    for staff home directories
    and
    /home/students
    for student home directories.
    Both are mounted on the Mac minis in the
    /Users directory,
    so
    /Users/urpi
    contains home directories for staff and
    /Users/students
    contains home directories for students.
    Authentication works well and permissions are set as needed,
    but OS X reports missing home directories for LDAP-authenticated users,
    and the terminal shows a missing home directory;
    for me it is
    /home/urpi/fodrek
    I tried to mount the NFS share at /home, but that is not allowed.
    Is there any setting to specify additional directories where home directories are placed, please?
    I look forward to hearing from you.
    Yours faithfully
    Peter Fodrek

    So none of these machines are Snow Leopard servers?
    What exactly do you mean when you say you tried to mount the NFS share to home? Can you copy and paste the command and error?
    It sounds as though you don't actually have the NFS shares mounted. Assuming this is so, you might want to investigate how the automount command works so that your MacMinis mount the NFS shares on boot.
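    If the Mac minis are standard OS X clients, one way to do this is a wildcard automount map, so each user's directory is NFS-mounted on demand at login. This is a sketch only; the server name is a placeholder and the export paths are taken from the post above:

```
# /etc/auto_master -- add an entry pointing at a custom map
/Users/urpi    auto_urpi

# /etc/auto_urpi -- wildcard map: mounts nfsserver:/home/urpi/<user> on demand
*    -fstype=nfs    nfsserver.example.com:/home/urpi/&
```

    Reload the maps with `sudo automount -vc`. A second pair of entries could do the same for /Users/students.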
    If your NFS/LDAP server is an OS X 10.6 server, set the shares to be automounted as user/group directories. Make sure your LDAP server is providing correct information on the home directory location. If it is local, I think the home directories need to be in /Users. If your mounts are indeed working but you cannot login, you might consider making links from /Users to /home/urpi or /home/students on an account-by-account basis (could be done with a quick shell script).
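    The "quick shell script" mentioned above might look like the following. The paths and the linking approach are assumptions based on the layout described in the question; it would need to run as root on the client after the NFS mounts are in place:

```shell
#!/bin/sh
# Link every home directory found under $1 (the NFS mount point)
# into $2 (the directory OS X expects, e.g. /Users).
link_homes() {
    src="$1"
    dst="$2"
    for d in "$src"/*; do
        [ -d "$d" ] || continue
        user=$(basename "$d")
        # Skip accounts that already have something at the destination.
        [ -e "$dst/$user" ] || ln -s "$d" "$dst/$user"
    done
}

# Example invocation (run as root):
# link_homes /home/urpi /Users
```

    Re-running it is harmless, since existing entries are skipped.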

  • Creating Home Directories

    Hi,
    I'm still fairly new to Mac Servers (come across from a Windows background), and am having trouble creating the home directories for the users I've created.
    Initially I created the user (just bog standard users - no mail, no calendars etc), bound the client machine to the server in Directory Utility (all working ok so far), even added the client machine to workgroup manager.
    However, the user was unable to logon - just a shaking screen after each logon attempt. Confirmed the password etc, all ok.
    Deduced (after looking on here) that it may be because the user has no home folder (a prerequisite for 10.5, even though it doesn't tell you that). However, coming from a Windows background, I am unfamiliar with the syntax of network paths for Mac/Linux.
    The home folder location I've created is on the server: Server HD/Users/Shared/ and it is shared in Server manager as a Share Point. Actual folder permissions include Users: Read and Write, and share permissions are the same. AFP is on.
    In Workgroup manager, the syntax for the three fields I currently have is:
    Share point URL: afp://servername.domain.co.uk/Users/Shared
    Path to Home folder: username
    Full Path: /Network/Servers/servername.domain.co.uk/Users/Shared/username
    I click OK, then click on Create Home now, then Save and it returns the error: Unable to create Home Directory. The home directory could not be created because an error occurred.

    Hi
    +". . . The home folder location I've created is on the server: Server HD/Users/Shared . . ."+
    This is possibly where the problem lies? By default OSX Server, after installation, creates Users, Groups and Public as default share points. You only have to enable AFP and those shares are instantly available once users have been created to access them.
    Don't be tempted to delete the default Users and Groups folders as the Server will complain. There is already a default Shared folder that the Public folder resides in. Don't be tempted to delete these either.
    There is no need to create another shared directory within the top level User Directory as that is already being shared. Once you promote to OD Master and populate the node with users all you have to do is set the default Users folder to be auto-mounting for users Home folders. There is no further need to share it or define permissions. These are correctly set when the folder was initially created.
    In Workgroup Manager you should see the path as afp://fqdnofyourserver/Users. That's all you need. Simply select it and click Create Now and Save. Navigate to the Users folder and you should see the home directory has been created. There is no need either to tinker with permissions for individual users' home folders as these are correctly set at the time of creation. The default permissions model used for users' home folders is standard POSIX.
    For clients to access networked home folders correctly, it's a good idea for the clients to use the server's IP address to resolve DNS queries (assuming the DNS service is hosted on the server itself?).
    Unlike Microsoft, Apple don't tinker with Open Source OpenLDAP as much. They still modify it to suit their purposes but it's more standards based. If you don't want to use the default Users directory on the boot volume then simply un-share and un-automount and define a similar directory on another volume (a RAID for example) instead. Define it as a Share in Server Admin and set it for auto-mounting home directories. It will show in WGM with the correct path. Avoid long names and spaces if you can. You could stick with Users as it works.
    There is no need to resort to the command line in any of this, as all the tools you need are there in the interface. Provided DNS is correctly configured with both forward and reverse pointers, and you have not used .local as the basis for DNS, it does work as it's supposed to and it works well.
    Tony
