Automounting home directories from Redhat Linux OpenLDAP server

We have an existing, functioning autofs environment here; at least the Linux boxes have no problem automounting user home directories.
I am looking for a comprehensive solution for getting our Macs integrated into this environment.
What should the ldap entries contain?
What should the attribute mappings be set to?
I have ldap authentication working - the only thing left is automounting.
Also - is there a way to get the nfs client to work over secure ports by default? Or is this a BSD thing?
Thanks

http://rajeev.name/blog/2007/12/09/integrating-leopard-autofs-with-ldap/
There's some additional LDAP schema stuff that has to be done; Apple seems to have gone with the absolute bleeding-edge RFC for automounts, and then removed all legacy support.
This covers most of the issues; however, there is one that I'm still unable to resolve:
typically, a linux box does autofs using an entry like
"* -fstype=nfs foo:/home/&"
LDAP uses a slightly different entry, but it works.
For the life of me, I haven't been able to get auto.home mounting from LDAP as easily as when it is defined in the file.
The frustrating part is that the post gives a really good example LDIF, but it still doesn't seem to work.
So while I have other automounts working wonderfully, the wildcarded home directories are still a bust.
So if you're willing to forgo using LDAP for autofs-mounting home, then hard-coding /etc/auto_home will fit the bill.
But since the link seems to imply that it works, I'm wondering what's going on...
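For the record, the kind of wildcard map being described looks roughly like this in LDAP. This is a sketch, not a known-working config: the base DN (dc=example,dc=com), the server name foo, and the mount options are placeholders, and the objectClasses assume the RFC 2307bis-style automount schema that Leopard's autofs queries.

```ldif
# Container entry for the map itself (DN components are placeholders)
dn: automountMapName=auto.home,dc=example,dc=com
objectClass: top
objectClass: automountMap
automountMapName: auto.home

# Wildcard entry: the key is a literal "*"; & expands to the key at mount time
dn: automountKey=*,automountMapName=auto.home,dc=example,dc=com
objectClass: top
objectClass: automount
automountKey: *
automountInformation: -fstype=nfs foo:/home/&
```

The open question in this thread is whether Leopard's automountd actually honors the * key when it comes from LDAP; the map container and per-key entries themselves are the standard shape.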
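On the secure-ports question in the original post: the macOS NFS client can be made to use a reserved source port for every mount via /etc/nfs.conf, rather than per-map mount options. A minimal sketch; the option name is taken from the macOS nfs.conf(5) man page, so verify it on your release:

```
# /etc/nfs.conf -- have mount_nfs always use a reserved (<1024) source port
nfs.client.mount.options = resvport
```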

Similar Messages

  • Automount Home Directories from LDAP

    I have a Red Hat Linux LDAP/Kerberos server (IPA server) that, besides authentication, I also use as an NFS server sharing users' home directories.
    All information for the Solaris machines is provided by a custom DUAProfile in LDAP.
    Relevant autofs information in DUAProfile:
    serviceSearchDescriptor: automount:cn=default,cn=automount,dc=example,dc=org
    serviceSearchDescriptor:auto_master:automountMapName=auto.master,cn=default,cn=automount,dc=example,dc=org
    All users on the network have their home directories under /home
    I have an auto.home map on the server with the key:
    * -rw,soft ipaserver.example.org:/home/&
    This setup works perfectly for our Linux clients, but not for Solaris.
    In Solaris, autofs seems to look up local users' home directories in the LDAP tree as well, making them unavailable when logging in,
    even though +auto_home comes after the local user mappings.
    t4 LOOKUP REQUEST: Tue Dec 25 22:08:36 2012
    t4 name=localuser[] map=auto.home opts= path=/home direct=0
    t4 LOOKUP REPLY : status=2
    Removing the autofs entries from the DUAProfile and specifying every user directly in /etc/auto_home works, albeit with a delay in mounting.
    This is, however, a less than satisfactory solution.
    I thought about just removing the local user mounts to /home from /export/home, but that does not seem to be a good idea.
    How can I make this work the way I want with wildcards?
    Regards,
    Johan.

    I have now tried with a different share and mountpoint (/nethome) on a different test server.
    I have verified that I can mount it through krb5, and automount works for Red Hat Linux clients.
    ssh, su, and console login all work on Solaris 11, except that the home directory is not found through automount.
    root@solaris2:~# ldapclient list
    NS_LDAP_FILE_VERSION= 2.0
    NS_LDAP_BINDDN= uid=solaris,cn=sysaccounts,cn=etc,dc=example,dc=org
    NS_LDAP_BINDPASSWD= {XXX}XXXXXXXXXXXXXX
    NS_LDAP_SERVERS= server.example.org
    NS_LDAP_SEARCH_BASEDN= dc=example,dc=org
    NS_LDAP_AUTH= tls:simple
    NS_LDAP_SEARCH_REF= TRUE
    NS_LDAP_SEARCH_SCOPE= one
    NS_LDAP_SEARCH_TIME= 10
    NS_LDAP_CACHETTL= 6000
    NS_LDAP_PROFILE= solaris_authssl1
    NS_LDAP_CREDENTIAL_LEVEL= proxy
    NS_LDAP_SERVICE_SEARCH_DESC= passwd:cn=users,cn=accounts,dc=example,dc=org
    NS_LDAP_SERVICE_SEARCH_DESC= group:cn=groups,cn=compat,dc=example,dc=org
    NS_LDAP_SERVICE_SEARCH_DESC= netgroup:cn=ng,cn=compat,dc=example,dc=org
    NS_LDAP_SERVICE_SEARCH_DESC= ethers:cn=computers,cn=accounts,dc=example,dc=org
    NS_LDAP_SERVICE_SEARCH_DESC= automount:cn=default,cn=automount,dc=example,dc=org
    NS_LDAP_SERVICE_SEARCH_DESC= auto_master:automountMapName=auto.master,cn=default,cn=automount,dc=example,dc=org
    NS_LDAP_SERVICE_SEARCH_DESC= aliases:ou=aliases,ou=test,dc=example,dc=org
    NS_LDAP_SERVICE_SEARCH_DESC= printers:ou=printers,ou=test,dc=example,dc=org
    NS_LDAP_BIND_TIME= 5
    NS_LDAP_OBJECTCLASSMAP= shadow:shadowAccount=posixAccount
    NS_LDAP_OBJECTCLASSMAP= printers:sunPrinter=printerService
    root@solaris2:~# sharectl get autofs
    timeout=600
    automount_verbose=true
    automountd_verbose=true
    nobrowse=false
    trace=2
    environment=
    From /var/svc/log/system-filesystem-autofs\:default.log:
    t4 LOOKUP REQUEST: Wed Dec 26 12:28:43 2012
    t4 name=user02[] map=auto.nethome opts= path=/nethome direct=0
    t4 getmapent_ldap called
    t4 getmapent_ldap: key=[ user02 ]
    t4 ldap_match called
    t4 ldap_match: key =[ user02 ]
    t4 ldap_match: ldapkey =[ user02 ]
    t4 ldap_match: Requesting list for (&(objectClass=automount)(automountKey=user02)) in auto.nethome
    t4 ldap_match: __ns_ldap_list FAILED (2)
    t4 ldap_match: no entries found
    t4 ldap_match called
    t4 ldap_match: key =[ \2a ]
    t4 ldap_match: ldapkey =[ \2a ]
    t4 ldap_match: Requesting list for (&(objectClass=automount)(automountKey=\2a)) in auto.nethome
    t4 ldap_match: __ns_ldap_list FAILED (2)
    t4 ldap_match: no entries found
    t4 getmapent_ldap: exiting ...
    t4 do_lookup1: action=2 wildcard=FALSE error=2
    t4 LOOKUP REPLY : status=2
    The automount map is called auto.nethome
    key is: * -rw,soft server.example.org:/nethome/&
    Could it be that Solaris automount doesn't like an asterisk (*) in an automount key?
    At least the local users' home directories now work when I am not trying to autofs-mount to /home.
    Anyone know what is wrong here?
    Thank you for your help.
    Regards,
    Johan.
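    For comparison, here is roughly the shape automountd is looking for, sketched as LDIF. The container DN follows the DUAProfile above, but the exact IPA layout is an assumption; verify it with ldapsearch. Note that the \2a in the trace is just the RFC 4515 escaping of * in a search filter, so a map entry whose automountKey is a literal * should still match.

```ldif
# Map container under the location named in the DUAProfile
dn: automountmapname=auto.nethome,cn=default,cn=automount,dc=example,dc=org
objectClass: automountMap
automountMapName: auto.nethome

# Wildcard entry -- automountKey is a literal asterisk
dn: automountkey=*,automountmapname=auto.nethome,cn=default,cn=automount,dc=example,dc=org
objectClass: automount
automountKey: *
automountInformation: -rw,soft server.example.org:/nethome/&
```

    If an ldapsearch with the filter from the trace, (&(objectClass=automount)(automountKey=\2a)), returns nothing while the entry exists, the proxy account's read access to the automount subtree is the next thing to check, since __ns_ldap_list FAILED (2) only says no entries came back.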

  • Automount home directories from another computer

    Hello,
    after 2 days of work, I write here to find some help.
    I have a well-configured Leopard Server 10.5.8 which serves user accounts through Open Directory: network users can log in to all my Mac OS X clients, and home directories are automounted correctly at /Network/Servers/myserver.com/Users/user1. myserver.com is the server hosting user1's home directory.
    Now I want host1 (a Leopard workstation, say) to become the home directory server for user1. So I created a local account on host1 with network credentials (uid, gid and passwd) and configured /etc/exports to export that home directory over NFS.
    How do I record in the LDAP server that this home directory is located not on myserver.com but on host1? That is, when user1 logs in on host2, host2 should automount host1:/Users/user1 at /Network/Servers/host1/Users/user1 or elsewhere.
    Note: I have already tested the configuration manually, and it works!
    1. In WGM (Workgroup Manager), I put /path/user1 as user1's default Home Directory
    2. On the client (host2), I manually mount host1:/Users/user1 to /path/user1
    3. user1 logs into host2, it works fine.
    But I cannot do this for every client and for every new such user! This is why I want to put this information (for each such user) in LDAP, so it is automatically distributed to the clients.
    Thank you for your help,
    Joan
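    One approach that mirrors what the server already does for its own share is to add an NFS mount record for host1 to the Open Directory LDAP node, so that every client automounts it under /Network/Servers/host1, and then point user1's home at that path in WGM. A sketch with dscl; the directory admin name, node path, and attribute values are assumptions based on Apple's standard mount-record schema, so double-check them against a working record (e.g. the one for myserver.com):

```
# Create a mount record so all clients automount host1:/Users (names are examples)
dscl -u diradmin -p /LDAPv3/127.0.0.1 -create "/Mounts/host1:\/Users"
dscl -u diradmin -p /LDAPv3/127.0.0.1 -create "/Mounts/host1:\/Users" VFSLinkDir /Network/Servers
dscl -u diradmin -p /LDAPv3/127.0.0.1 -create "/Mounts/host1:\/Users" VFSType nfs
dscl -u diradmin -p /LDAPv3/127.0.0.1 -create "/Mounts/host1:\/Users" VFSOpts net
# Then set user1's home in WGM to /Network/Servers/host1/Users/user1
```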


  • Uninstall Oracle10g from RedHat Linux 5 Server and install Oracle11g

    Hi to all,
    I must uninstall Oracle 10g from a Red Hat Linux 5 server and then install Oracle 11g.
    How can I remove everything belonging to Oracle 10g so that I can install Oracle 11g afterwards?
    Thank you in advance,
    crystal

    crystal13 wrote:
    Hi to all,
    I must uninstall Oracle 10g from Red Hat Linux 5 Server and then I must install Oracle 11g.
    How I can remove all things of Oracle10g for install after Oracle 11g?
    Thank you in advance,
    crystal
    Check the following link:
    http://database.itags.org/oracle/267935/
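    In outline, removing 10g on Linux comes down to deinstalling through the OUI and then deleting what it leaves behind. The sketch below is a dry run that only prints the steps; the ORACLE_HOME and inventory paths are hypothetical, so verify them (e.g. via /etc/oratab) before swapping the echo for real execution:

```shell
#!/bin/sh
# Dry run: print each removal step instead of executing it.
ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1   # hypothetical location
run() { echo "+ $*"; }                            # change to: eval "$@"  when ready

# 0. First shut down databases and listeners owned by this home (sqlplus, lsnrctl)
run "$ORACLE_HOME/oui/bin/runInstaller" -deinstall  # 1. OUI deinstall
run rm -rf "$ORACLE_HOME"                           # 2. remove the home itself
run rm -rf /u01/app/oracle/oraInventory             # 3. central inventory
run rm -f /etc/oratab /etc/oraInst.loc              # 4. inventory pointer files
```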

  • Firefox 3.6 not compatible with home directories stored on AFP file server

    I just wanted to let everyone know that I have discovered, at least in my situation, that Firefox 3.6 does not work with user home directories stored on AFP file servers.
    My network consists of PPC 10.4.11 clients and a Mac OS X 10.6.2 server. User home directories are stored on the server, and the user is logged into a "Golden Triangle" LDAP domain, where the Mac clients bind to an OS X Server and the OS X Server is a member of the Active Directory domain.
    Worked perfectly fine in Firefox 3.5.7; now in 3.6 it will either not launch, freeze with the beachball, or show only the Firefox window and not the main browser window.
    This has happened before with a 3.0x update from a few months ago. I have posted a bug in the Bugzilla database and have outlined the bug on my personal MacPCSMB blog.
    http://www.macpcsmb.com
    https://bugzilla.mozilla.org/show_bug.cgi?id=542306
    Thanks
    Michael Yockey
    IT Administrator
    Yockey, Yockey and Schliem PC

    There is an update on the FireFox hosted AFP issue that I have uncovered:
    When users are rolled back to Firefox 3.5.7 (by installing FF 3.5.7 over 3.6), the following issue occurs:
    You launch Firefox and you get an error that states "XML scripting is not working; Firefox cannot open the window".
    This basically means that the plug-ins for Firefox 3.6 are still in the user's Firefox profile directory. These new plug-ins are not compatible with Firefox 3.5.7. You will have to go into the user's home directory manually, extract one specific file, and remove the profile folder. The catch is that the user still needs access to their bookmarks: if you simply delete the profile folder, the bookmarks are gone, though that is the simpler approach.
    It looks like Mozilla significantly changed the profile folder setup in FF 3.6, so a profile rollback or deletion is necessary.
    If you DO NOT have a good backup:
    To solve this issue do the following. This guide assumes you have the users home directory stored on an AFP server and you have open directory logins:
    1. The Firefox profile is located here according to Mozilla: http://support.mozilla.com/en-US/kb/Profiles . On Mac OS X it is located at ~/Library/Application Support/Firefox.
    2. Find and COPY the places.sqlite file. This is the Firefox bookmarks and history database. This file is very important to back up.
    3. Now take the user's Firefox profile and TRASH it.
    4. Now either have the user launch Firefox with their Open Directory login, or change their password and log in yourself. Open Firefox, and after it fully loads, quit the program. Copy the places.sqlite file back into the Firefox profile folder. You will have to do this manually for every user unless you make an AppleScript to take care of it.
    5. The program will now work again.
    The second option is to go into Retrospect or Time Machine (or whatever backup solution you use) and restore the user's profile directory to a point in time before Firefox was updated to 3.6 and then subsequently reverted back to 3.5.7. How to use backup software is way beyond the scope of this posting.
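    The five steps above can be sketched as a small script. PROFILE here is a placeholder: find the real folder under ~/Library/Application Support/Firefox/Profiles first, and treat this as an outline of the procedure rather than a drop-in tool:

```shell
#!/bin/sh
# Rescue places.sqlite (bookmarks/history), wipe the incompatible profile,
# then put the database back once Firefox has rebuilt the folder.
rescue_profile() {
    PROFILE=$1    # e.g. "$HOME/Library/Application Support/Firefox/Profiles/xxxx.default"
    BACKUP=$2     # somewhere safe to park places.sqlite
    mkdir -p "$BACKUP"
    cp "$PROFILE/places.sqlite" "$BACKUP/"   # step 2: save the bookmarks database
    rm -rf "$PROFILE"                        # step 3: trash the profile
    mkdir -p "$PROFILE"                      # step 4: (launch Firefox 3.5.7 once, then quit)
    cp "$BACKUP/places.sqlite" "$PROFILE/"   # step 4: restore the bookmarks
}
```

    Run it once per user, e.g. rescue_profile "/Users/jane/Library/Application Support/Firefox/Profiles/abcd1234.default" /tmp/ff-backup (the profile name here is hypothetical).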
    Thanks
    Mike Yockey
    www.MacPCSMB.com

  • Oracle 9.2.0.1  on Redhat Linux ADVANCED SERVER -PATCHES

    Hi,
    Can someone help me out?
    Does Oracle 9.2.0.1 on Redhat Linux ADVANCED SERVER have a patch
    set that makes it work with Developer/2000?
    If so, please let me know where we can get it.
    Thanks,
    narayana rao

    hi,
    I am trying to install oracle 9i rel2 on red hat adv server 2.1
    Installation and linking look fine, but during configuration
    the Database Configuration Assistant fails with 'End of communication channel'.
    This makes the software unusable, as we get the same error while trying to connect to the database too.
    Any ideas?
    Check the "Oracle Installation Errors" section at
    http://www.puschitz.com/OracleOnLinux.shtml
    In this section I explain what I did about the
    "end-of-file on communication channel" error
    on RH AS.
    Any feedback on this problem and solution is appreciated.
    Werner

  • Moving Portable Home Directories from one server to another

    I am in the process of migrating users from an older xserve running 10.3 with open directory to a new xserve running 10.5. So far, everything is looking good with the migration, the only major issue I'm running into in my testing is with Portable Home Directories. Presently, the portable home directory on the computer still points to the old server for existing user accounts after they are moved to the new open directory server. On the 10.3 server, the home directories are all mounted under /Volumes/Home, where on Leopard it appears it wants to create the shares under /Volumes/ServerName/Folder. Granted, at present the original server's Home Folders are on a fiber attached raid and in testing I don't have this available. Any suggestions on a way to test easily without moving the raid? Also, is there an easy way to do a mass change on user machines where if I move my raid over to the new server, I can make sure that users data is being backed up to the proper location?
    Sorry for the lengthy post, just trying to make sure I'm covering all my bases, heh.

    Antonio, thanks for the response. I do have one more question regarding this. On the client side, the mirrors.plist file references the old server's FQDN and share name. Because this will be moved over to the new server, is there an easy method to update the clients' mirrors.plist without breaking the PHD mirror? My big concern here is that either the users will not be able to synchronize PHDs, or we will have to re-establish all the PHDs from the client machines to the server. My thought here is simply using a CNAME to direct any traffic still trying to hit the old server name to the new server name.

  • Home directories from GUI work but not from command line

    I'm having trouble accessing home directories through SSH. After significant trouble, I reinstalled OS 10.4.6 Server on each of my 24 Xserves. This is an HPC cluster with an Xserve RAID providing the storage space. I promoted the first Xserve to an Open Directory master and created 2 test users. I created two sharepoints from the Xserve RAID--one for general data and one for home directories. I enabled AFP on both, granted R/W access to the default group "staff" (of which my two test users are members), and set the home directory sharepoint ("HomeDir") to automount using AFP for users' home directories through WGM. If I use Remote Desktop to log in to one of the cluster nodes, the home directory seems to mount correctly. However, if I try to access the same user account through the command line, the home directory cannot be found.
    I can cd to /Network/Servers/headnode.domain.com/Volumes/HomeDir; but I cannot see any of the folders listed there. On the head node, I can verify that the user's home directory has been created--it seems to be fully populated. I've checked permissions, and they seem to be correct; but the fact that I cannot access it from the command line seems to suggest that there's a greater permissions issue.
    I've tried doing the identical setup using an NFS automount instead of AFP with no success. I can't find any answers for command line/SSH access to this problem. Any help would be appreciated.
    Thanks,
    CF

    I've discovered something else in the course of troubleshooting this problem. If I login as a test user through remote desktop to, say, node1.domain.com; the home directory mounts correctly; and, as long as I do not reboot either headnode.domain.com or node1.domain.com, I can login via SSH and access my home directory.
    Of course, if I do reboot--access no longer works. I've browsed through dozens of other posts and tried to follow other users' suggestions. I've manually created a hosts file, which I've uploaded to /etc/hosts on each node. I've double and triple checked DNS and DHCP--I have LDAP propagated through autodiscovery on DHCP; I have each node statically assigned; and I have DNS entries for each node. I also have computer entries in WGM; and I've used the FQDN of each node (node#.domain.com) for everything across the board.
    I'm also hitting the "authentication error" when I try to access my other AFP sharepoint. I can't figure this out.

  • Automounting home directories

    I really like how Solaris keeps home directories at /export/home/<username> and then mounts them at /home/<username> upon login. I tried to get this same functionality with OL6.3, but I couldn't get the automounter to work.
    My setup is:
    /etc/auto.master contains:
    /home /etc/auto.home
    And /etc/auto.home contains:
    * :/export/home/&
    I restarted the services, but when any user logs in the system complains about not having a home directory. What am I missing?

    I have not configured autofs recently, but have the following example in my notes:
    <pre>
    # cat auto.master
    /nfs-photon01 /etc/auto.photon01 vers=3,rw,hard,proto=tcp,intr
    # cat auto.photon01
    * photon01.example.com:/&
    # mkdir /nfs-photon01
    # service autofs reload
    </pre>
    Does your /etc/auto.home file specify the NFS server?
    By the way, NFSv4 is the default in OL6, which requires that you export all NFS directories under one virtual root. For instance, if /ext/nfs is the NFS root (fsid=0), everything else that you want to share over NFSv4 must be accessible under /ext/nfs. Check your /etc/exports file. There are examples on the web; you should be able to find them by searching for "NFS4 fsid=0".
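    To illustrate the fsid=0 layout, a hypothetical /etc/exports might look like this (paths and client range are examples only):

```
# /ext/nfs is the NFSv4 pseudo-root; v4 clients see everything below it
/ext/nfs        192.168.1.0/24(ro,fsid=0)
/ext/nfs/home   192.168.1.0/24(rw)
```

    An NFSv4 client would then mount server:/home, not server:/ext/nfs/home, since paths are resolved relative to the pseudo-root.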

  • Automount Home Directories Failed

    Hi There,
    I have a Solaris 10 server that is running the ZFS filesystem.
    After patching this server, the clients running Sol 10 are not mounting the home directories anymore.
    I see that the /etc/dfs/dfstab file has the words "Error: Syntax" in front of the lines where the home directories are being shared.
    Also, the autofs service is up, while the nfs/server service is offline*.
    Any thoughts on what I should check?
    Any help will be greatly appreciated.
    thanks
    wasim.

    Thanks a lot for the reply; here is what you asked for.
    svcs -xv nfs/server
    svc:/network/nfs/server:default (NFS server)
    State: offline since Tue Feb 22 09:56:10 2011
    Reason: Start method is running.
    See: http://sun.com/msg/SMF-8000-C4
    See: man -M /usr/share/man -s 1M nfsd
    See: /var/svc/log/network-nfs-server:default.log
    Impact: This service is not running.
    bash-3.00# dfshares
    nfs dfshares:edison: RPC: Program not registered
    bash-3.00# vi dfs/dfstab
    "dfs/dfstab" 16 lines, 629 characters
    # Do not modify this file directly.
    # Use the sharemgr(1m) command for all share management
    # This file is reconstructed and only maintained for backward
    # compatibility. Configuration lines could be lost.
    # Place share(1M) commands here for automatic execution
    # on entering init state 3.
    # Issue the command 'svcadm enable network/nfs/server' to
    # run the NFS daemon processes and the share commands, after adding
    # the very first entry to this file.
    # share [-F fstype] [ -o options] [-d "<text>"] <pathname> [resource]
    # .e.g,
    # Error: Syntax share -F nfs -o rw -d "home directory" /tank/home
    # Error: Syntax share -F nfs -o ro -d "local" /tank/local
    bash-3.00# zfs get sharenfs tank/home
    NAME PROPERTY VALUE SOURCE
    tank/home sharenfs rw=soemgr,rw=soelab113 local
    Well, I did try to correct the dfstab file, but it did not work. I don't know what was being used to share the home directories, but I do recall that the dfstab file did not look like the one above.
    Any thoughts?
    wasim

  • Migration Assistant does not see Home Directories from AD Users

    We use AD as the authentication domain, but home directories are stored locally on the client computers. Hence they are backed up via Time Machine, and AD users can use Time Machine like any local user. But now one machine crashed, and we wanted to restore the full computer via Migration Assistant. Unfortunately, the AD users' home directories are not restored to the computer, although they are in the backup set. Is there a way to restore the whole computer from the backup set, including all user directories (including those of the AD users, which had been stored locally)?

    So, finally, after fourteen hours of unattended "migration," I let it continue overnight and in the morning found that the time remaining had not moved a minute.  I canceled MA and found that not one iota of data had transferred.
    As far as I'm concerned, Migration Assistant ranks lower on the Apple success list than Open Doc, Newton, Pink and Taligent.
    Now I have to manually install software I want to use on the MBA, apply licenses, and all the other stuff I would expect from Windows.
    NOT happy.

  • Migrate home directories from NW6.5/NSS to OES2/NSS

    Hi
    Is there a way to migrate a user's home directory from NW6.5 to OES2/Linux in the same tree, copy the trustees, and change the user's home directory setting in the user object?
    I've looked at the migration tool in OES2, but I can only copy data; it cannot copy trustees or change the home directory...
    /Jonas

    I use a utility called HOMES...by HBware...slick!
    I have also used a utility called (I think) Mass volume changer...that was slick too. HOMES is the more powerful, though. Both are freeware.
    --El
    Originally Posted by jonhol
    My mistake... Trustees are copied. Just my luck that when I tested I randomly selected a homedir, and that one had no trustees because someone had deleted the user object without deleting the directory...
    Tried with another homedir and it worked just fine.
    Just the problem with changing the home directory in the user object left to solve...
    /Jonas

  • 12.0.4 Migration from RedHat Linux 4.0 (32-bit) to RedHat Linux 5.0 (64-bit)

    Hi,
    1) I have migrated our E-bus Environment (Test Instance) from RHEL 4.0 32-bit to RHEL 5.0 (64-bit) as per the Note IDs 416301.1 and 471566.1.
    2) The Application has been tested by the test team after migration and it is working fine.
    3) I did not upgrade the JDK 1.5 from 32-bit to 64-bit.
    Is it a must to upgrade the JDK from 32-bit to 64-bit after the migration or will the application work fine if the JDK is left untouched?
    We also plan to upgrade to 12.1.1 and then to 12.1.3 on RHEL 5.0 64-bit which I think requires JDK 6.0 as per the note 752619.1.
    As part of the JDK upgrade, do I have to have JDK 6.0 64-bit for the application to work properly, or is JDK 6.0 32-bit good enough?
    Thanks a lot in advance.

    3) I did not upgrade the JDK 1.5 from 32-bit to 64-bit.
    Is it a must to upgrade the JDK from 32-bit to 64-bit after the migration, or will the application work fine if the JDK is left untouched?
    Keep the 32-bit version -- Using Latest Update of JDK 5.0 with Oracle E-Business Suite Release 12 [ID 384249.1], "Step 1: Download Latest Update of JDK 5.0"
    We also plan to upgrade to 12.1.1 and then to 12.1.3 on RHEL 5.0 64-bit which I think requires JDK 6.0 as per the note 752619.1.
    As part of the JDK upgrade, do I have to have JDK 6.0 64-bit for the application to work properly, or is JDK 6.0 32-bit good enough?
    The 32-bit version is good enough -- Using Latest Java 6.0 Update With Oracle E-Business Suite Release 12 [ID 455492.1], "Step 2.1: Download Latest JDK 6.0 Update" section.
    Thanks,
    Hussein

  • Oracle / Redhat Linux / Portal Server |  Sample Portal Error

    Team:
    I know I saw the listings for patch_CR067935.zip, which addresses what
    seems to be a similar problem with the JDBC Helper Service, but I am not
    sure this is the same issue.
    I just installed Portal Service Pack 1 and did all the procedures for the
    move to the Oracle database. I can even see the Oracle pools connecting
    in the Admin Console.
    ===================
    WHEN I TRY TO ACCESS THE SAMPLE PORTAL I GET THE FOLLOWING ERROR:
    Is this the patch_CR067935.zip issue or do I have another Issue?
    Thank you in advance................
    ####<Feb 22, 2002 4:16:52 PM EST> <Error> <HTTP> <localhost.localdomain>
    <portalServer> <ExecuteThread: '11' for queue: 'default'> <> <> <101018>
    <[WebAppServletContext(1573598,stockportal,/stockportal)] Servlet failed
    with ServletException>
    javax.servlet.ServletException: Received a null Portal object from the
    PortalManager.
         at
    com.bea.portal.appflow.servlets.internal.PortalWebflowServlet.setupPortalRequest(PortalWebflowServlet.java:194)
         at
    com.bea.portal.appflow.servlets.internal.PortalWebflowServlet.doGet(PortalWebflowServlet.java:99)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:740)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:853)
         at
    weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:265)
         at
    weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:200)
         at
    weblogic.servlet.internal.RequestDispatcherImpl.forward(RequestDispatcherImpl.java:215)
         at weblogic.servlet.jsp.PageContextImpl.forward(PageContextImpl.java:112)
         at jsp_servlet.__index._jspService(__index.java:92)
         at weblogic.servlet.jsp.JspBase.service(JspBase.java:27)
         at
    weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:265)
         at
    weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:304)
         at
    weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:200)
         at
    weblogic.servlet.internal.WebAppServletContext.invokeServlet(WebAppServletContext.java:2459)
         at
    weblogic.servlet.internal.ServletRequestImpl.execute(ServletRequestImpl.java:2039)
         at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:139)
         at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:120)

    Hi Mr. BigMAN,
    I'm not sure if the CLOB issue solved by patch_CR067935.zip has this symptom. In any case, you should install
    the patch because you will need it if you are going to use Portal 4.0 sp1 with Oracle. Everyone out there who is
    using Oracle with Portal 4.0 sp1 should install this patch (search for "67935" at this location:
    http://e-docs.bea.com/wlp/docs40/relnotes/relnotes.htm#246667 ).
    Back to your current problem: Did you run the loadSampleData script? Try running it after you install the
    patch. ( http://edocs.bea.com/wlp/docs40/deploygd/oraclnew.htm#1040434 ). Also, make sure you set up the JDBC
    Helper Service for all of the J2EE applications that use JDBC services. (
    http://edocs.bea.com/wlp/docs40/deploygd/oraclnew.htm#1064575 ). You need to configure the p13nApp, portal, and
    wlcsApp.
    Let me know if you continue to have problems with this and I'll see what I can do to help.
    "Mr. BigMAN" wrote:
    Team:
    I know I saw the listings for the patch_CR067935.zip which address what
    seems to be a similar problem with the JDBC Helper Service, but I am not
    sure this is the same issue.
    I just installed Portal Service Pack1 and did all the procedures for the
    move to Oracle Database. I even See the Oracle Pools COnnecting and in
    the Admin Console.
    ===================
    WHEN I TRY TO ACCESS THE SAMPLE PORTAL I GET THE FOLLOWING ERROR:
    Is this the patch_CR067935.zip issue or do I have another Issue?
    Thank you in advance................
    ####<Feb 22, 2002 4:16:52 PM EST> <Error> <HTTP> <localhost.localdomain>
    <portalServer> <ExecuteThread: '11' for queue: 'default'> <> <> <101018>
    <[WebAppServletContext(1573598,stockportal,/stockportal)] Servlet failed
    with ServletException>
    javax.servlet.ServletException: Received a null Portal object from the
    PortalManager.
    at
    com.bea.portal.appflow.servlets.internal.PortalWebflowServlet.setupPortalRequest(PortalWebflowServlet.java:194)
    at
    com.bea.portal.appflow.servlets.internal.PortalWebflowServlet.doGet(PortalWebflowServlet.java:99)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:740)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:853)
    at
    weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:265)
    at
    weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:200)
    at
    weblogic.servlet.internal.RequestDispatcherImpl.forward(RequestDispatcherImpl.java:215)
    at weblogic.servlet.jsp.PageContextImpl.forward(PageContextImpl.java:112)
    at jsp_servlet.__index._jspService(__index.java:92)
    at weblogic.servlet.jsp.JspBase.service(JspBase.java:27)
    at
    weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:265)
    at
    weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:304)
    at
    weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:200)
    at
    weblogic.servlet.internal.WebAppServletContext.invokeServlet(WebAppServletContext.java:2459)
    at
    weblogic.servlet.internal.ServletRequestImpl.execute(ServletRequestImpl.java:2039)
    at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:139)
    at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:120)--
    Ture Hoefner
    BEA Systems, Inc.
    2590 Pearl St.
    Suite 110
    Boulder, CO 80302
    www.bea.com

  • Mount homedir autofs with openldap server

    I'm having trouble mounting home directories on Mac clients running Leopard from a Linux OpenLDAP server. The login/password authentication works fine, but somehow autofs is not working correctly with the OpenLDAP server.
    I need some help with troubleshooting. From what I've read on the web, autofs is now supposed to work in Leopard.
    Thanks,
    Yasi

    Sounds like something you should be posting to the server or linux forums.
