Developer 6i Net8 client and OID/LDAP

Is it possible to get the Net8 client that comes with Developer 6i to speak LDAP to OID? Even though it has the 'i' suffix, it does not seem to have the 'internet' capabilities that could be used with OID. Is there a patch or an upgrade?
I tried to install the Oracle8i r2 and r3 client software to get a Net8 client with LDAP functionality, but the installer said I cannot install into an older ORACLE_HOME.
Any insight would be appreciated.
Also, please respond at the link below if you have any input on this issue.
"What is the benefit in using OID for names resolution?" @ http://technet.oracle.com:89/ubb/Forum60/HTML/000231.html

So you are telling me that the Net8 client version 8.0.6.0.0 that comes with Oracle Forms and Reports 6i is capable of using OID or any other LDAP directory service, instead of tnsnames.ora or Oracle Names, to do name lookups?
Can you please email me and tell me how?
BTW, I have both OID and IPlanet setup for name resolution, and it works fine with the Net8 client off of the Oracle 8i r3 CD.
;-)
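For reference, directory naming with an 8i-era Net8 client is normally switched on through sqlnet.ora and ldap.ora entries along these lines (a sketch only; the OID host, port, and admin context are placeholder values to adapt):

# sqlnet.ora - try LDAP first, then fall back to tnsnames.ora
NAMES.DIRECTORY_PATH = (LDAP, TNSNAMES)

# ldap.ora - which directory to use and which naming context to search
DIRECTORY_SERVER_TYPE = OID
DIRECTORY_SERVERS = (oidhost.example.com:389:636)
DEFAULT_ADMIN_CONTEXT = "dc=example,dc=com"

With these in place the client resolves a connect identifier by looking up cn=&lt;alias&gt;,cn=OracleContext under the admin context instead of reading tnsnames.ora.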

Similar Messages

  • Problem with OIM OID LDAP Sync configuration in 11g

    Hi Team,
    I am doing the OIM and OID LDAP Sync configuration, and it fails at the "Configuration Process" step.
    Also, in WebLogic the OIM Managed Server is in ADMIN mode, not in RUNNING mode.
    Please find both logs below.
    *********************************Weblogic Logs**********************************************
    Enter username to boot WebLogic server:weblogic
    Enter password to boot WebLogic server:
    <28-Sep-2012 14:07:44 o'clock BST> <Info> <Management> <BEA-141107> <Version: We
    bLogic Server 10.3.5.0 Fri Apr 1 20:20:06 PDT 2011 1398638 >
    <28-Sep-2012 14:07:47 o'clock BST> <Notice> <WebLogicServer> <BEA-000365> <Serve
    r state changed to STARTING>
    <28-Sep-2012 14:07:47 o'clock BST> <Info> <WorkManager> <BEA-002900> <Initializi
    ng self-tuning thread pool>
    <28-Sep-2012 14:07:48 o'clock BST> <Notice> <Log Management> <BEA-170019> <The s
    erver log file E:\Oracle\Middleware\user_projects\domains\IAM_domain\servers\oim
    server1\logs\oimserver1.log is opened. All server side log events will be writ
    ten to this file.>
    28-Sep-2012 14:07:56 oracle.security.am.common.nap.util.NAPLogger log
    SEVERE: Failed to communicate with any of configured Access Server, ensure that
    it is up and running.
    <28-Sep-2012 14:07:57 o'clock BST> <Notice> <Security> <BEA-090082> <Security in
    itializing using security realm myrealm.>
    <28-Sep-2012 14:08:04 o'clock BST> <Notice> <WebLogicServer> <BEA-000365> <Serve
    r state changed to STANDBY>
    <28-Sep-2012 14:08:04 o'clock BST> <Notice> <WebLogicServer> <BEA-000365> <Serve
    r state changed to STARTING>
    <28-Sep-2012 14:08:20 o'clock BST> <Warning> <oracle.jps.upgrade> <JPS-06003> <C
    annot migrate credential folder/key ADF/anonymous#oimBpelCredKey.Reason oracle.s
    ecurity.jps.service.credstore.CredentialAlreadyExistsException: JPS-01007: The c
    redential with map ADF and key anonymous#oimBpelCredKey already exists..>
    <28-Sep-2012 14:08:21 o'clock BST> <Warning> <oracle.adf.share.ADFContext> <BEA-
    000000> <Automatically initializing a DefaultContext for getCurrent.
    Caller should ensure that a DefaultContext is proper for this use.
    Memory leaks and/or unexpected behaviour may occur if the automatic initializati
    on is performed improperly.
    This message may be avoided by performing initADFContext before using getCurrent
    To see the stack trace for thread that is initializing this, set the logging lev
    el of oracle.adf.share.ADFContext to FINEST>
    <28-Sep-2012 14:08:24 o'clock BST> <Error> <Deployer> <BEA-149205> <Failed to in
    itialize the application 'oim [Version=11.1.1.3.0]' due to error oracle.iam.plat
    form.utils.OIMAppInitializationException:
    OIM application intialization failed because of the following reasons:
    oim-config.xml was not found in MDS Repository.
    Unable to find keystore ".xldatabasekey" in <DOMAIN_HOME>/config/fmwconfig/.
    Password for OIMSchemaPassword is not seeded in CSF.
    Password for xell is not seeded in CSF.
    Password for DataBaseKey is not seeded in CSF.
    Password for JMSKey is not seeded in CSF.
    Password for .xldatabasekey is not seeded in CSF.
    Password for default-keystore.jks is not seeded in CSF.
    Password for SOAAdminPassword is not seeded in CSF.
    oracle.iam.platform.utils.OIMAppInitializationException:
    OIM application intialization failed because of the following reasons:
    oim-config.xml was not found in MDS Repository.
    Unable to find keystore ".xldatabasekey" in <DOMAIN_HOME>/config/fmwconfig/.
    Password for OIMSchemaPassword is not seeded in CSF.
    Password for xell is not seeded in CSF.
    Password for DataBaseKey is not seeded in CSF.
    Password for JMSKey is not seeded in CSF.
    Password for .xldatabasekey is not seeded in CSF.
    Password for default-keystore.jks is not seeded in CSF.
    Password for SOAAdminPassword is not seeded in CSF.
    at oracle.iam.platform.utils.OIMAppInitializationListener.preStart(OIMAp
    pInitializationListener.java:145)
    at weblogic.application.internal.flow.BaseLifecycleFlow$PreStartAction.r
    un(BaseLifecycleFlow.java:282)
    at weblogic.security.acl.internal.AuthenticatedSubject.doAs(Authenticate
    dSubject.java:321)
    at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:
    120)
    at weblogic.application.internal.flow.BaseLifecycleFlow$LifecycleListene
    rAction.invoke(BaseLifecycleFlow.java:199)
    Truncated. see log file for complete stacktrace
    Caused By: oracle.iam.platform.utils.OIMAppInitializationException:
    OIM application intialization failed because of the following reasons:
    oim-config.xml was not found in MDS Repository.
    Unable to find keystore ".xldatabasekey" in <DOMAIN_HOME>/config/fmwconfig/.
    Password for OIMSchemaPassword is not seeded in CSF.
    Password for xell is not seeded in CSF.
    Password for DataBaseKey is not seeded in CSF.
    Password for JMSKey is not seeded in CSF.
    Password for .xldatabasekey is not seeded in CSF.
    Password for default-keystore.jks is not seeded in CSF.
    Password for SOAAdminPassword is not seeded in CSF.
    at oracle.iam.platform.utils.OIMAppInitializationListener.preStart(OIMAp
    pInitializationListener.java:145)
    at weblogic.application.internal.flow.BaseLifecycleFlow$PreStartAction.r
    un(BaseLifecycleFlow.java:282)
    at weblogic.security.acl.internal.AuthenticatedSubject.doAs(Authenticate
    dSubject.java:321)
    at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:
    120)
    at weblogic.application.internal.flow.BaseLifecycleFlow$LifecycleListene
    rAction.invoke(BaseLifecycleFlow.java:199)
    Truncated. see log file for complete stacktrace
    >
    <28-Sep-2012 14:08:24 o'clock BST> <Warning> <Munger> <BEA-2156203> <A version a
    ttribute was not found in element application in the deployment descriptor in E:
    \Oracle\Middleware\Oracle_IDM1\server\apps\spml-xsd.ear/META-INF/application.xml
    . A version attribute is required, but this version of the Weblogic Server will
    assume that the JEE5 is used. Future versions of the Weblogic Server will reject
    descriptors that do not specify the JEE version.>
    <28-Sep-2012 14:08:24 o'clock BST> <Warning> <Munger> <BEA-2156203> <A version a
    ttribute was not found in element application in the deployment descriptor in E:
    \Oracle\Middleware\user_projects\domains\IAM_domain\servers\oim_server1\tmp\_WL_
    user\spml-xsd\s8d2b9/META-INF/application.xml. A version attribute is required,
    but this version of the Weblogic Server will assume that the JEE5 is used. Futur
    e versions of the Weblogic Server will reject descriptors that do not specify th
    e JEE version.>
    <28-Sep-2012 14:08:24 o'clock BST> <Emergency> <Deployer> <BEA-149259> <Server '
    oim_server1' in cluster 'OIM_Cluster' is being brought up in administration stat
    e due to failed deployments.>
    Loading xalan.jar for XPathAPI.
    14:08:30 INFO [[STANDBY] ExecuteThread: '2' for queue: 'weblogic.kernel.Default
    (self-tuning)'] -
    ----------------- NEXAWEB SERVER LICENSE ------------------
    - Customer ID : 122
    - License type : Enterprise
    - Max unique IPs : unlimited
    - Max XUL sessions : unlimited
    - Max CPUs/server : unlimited
    - Clustering allowed : true
    - Expiration date : none
    Nexaweb Technologies Inc.(C)2000-2004. All Rights Reserved.
    Nexaweb Technologies Inc.
    10 Canal Park
    Cambridge, MA 02141
    Tel: 617.577.8100. Email: [email protected]
    14:08:31 INFO [[STANDBY] ExecuteThread: '2' for queue: 'weblogic.kernel.Default
    (self-tuning)'] - Clustering is OFF.
    14:08:31 INFO [[STANDBY] ExecuteThread: '2' for queue: 'weblogic.kernel.Default
    (self-tuning)'] - Servlet Engine: WebLogic Server 10.3.5.0 Fri Apr 1 20:20:06 PD
    T 2011 1398638 Oracle WebLogic Server Module Dependencies 10.3 Thu Mar 3 14:37:5
    2 PST 2011 Oracle WebLogic Server on JRockit Virtual Edition Module Dependencies
    10.3 Thu Feb 3 16:30:47 EST 2011
    14:08:31 INFO [[STANDBY] ExecuteThread: '2' for queue: 'weblogic.kernel.Default
    (self-tuning)'] - Servlet API Version: 2.5
    14:08:31 INFO [[STANDBY] ExecuteThread: '2' for queue: 'weblogic.kernel.Default
    (self-tuning)'] - Nexaweb Server Info = Nexaweb Server 3.3.1072
    14:08:31 INFO [[STANDBY] ExecuteThread: '2' for queue: 'weblogic.kernel.Default
    (self-tuning)'] - Nexaweb Server initialized successfully.
    <28-Sep-2012 14:08:34 o'clock BST> <Notice> <Log Management> <BEA-170027> <The S
    erver has established connection with the Domain level Diagnostic Service succes
    sfully.>
    <28-Sep-2012 14:08:34 o'clock BST> <Notice> <Cluster> <BEA-000197> <Listening fo
    r announcements from cluster using unicast cluster messaging>
    <28-Sep-2012 14:08:34 o'clock BST> <Notice> <Cluster> <BEA-000133> <Waiting to s
    ynchronize with other running members of OIM_Cluster.>
    <28-Sep-2012 14:09:04 o'clock BST> <Notice> <Server> <BEA-002613> <Channel "Defa
    ult[2]" is now listening on 127.0.0.1:14000 for protocols iiop, t3, CLUSTER-BROA
    DCAST, ldap, snmp, http.>
    <28-Sep-2012 14:09:04 o'clock BST> <Notice> <Server> <BEA-002613> <Channel "Defa
    ult[3]" is now listening on 0:0:0:0:0:0:0:1:14000 for protocols iiop, t3, CLUSTE
    R-BROADCAST, ldap, snmp, http.>
    <28-Sep-2012 14:09:04 o'clock BST> <Notice> <Server> <BEA-002613> <Channel "Defa
    ult[1]" is now listening on fe80:0:0:0:0:5efe:a2f:f22a:14000 for protocols iiop,
    t3, CLUSTER-BROADCAST, ldap, snmp, http.>
    <28-Sep-2012 14:09:04 o'clock BST> <Warning> <Server> <BEA-002611> <Hostname "UK
    SHWTOAP03A.skandia.co.uk", maps to multiple IP addresses: 10.47.242.42, 0:0:0:0:
    0:0:0:1>
    <28-Sep-2012 14:09:04 o'clock BST> <Notice> <Server> <BEA-002613> <Channel "Defa
    ult" is now listening on 10.47.242.42:14000 for protocols iiop, t3, CLUSTER-BROA
    DCAST, ldap, snmp, http.>
    <28-Sep-2012 14:09:04 o'clock BST> <Notice> <WebLogicServer> <BEA-000330> <Start
    ed WebLogic Managed Server "oim_server1" for domain "IAM_domain" running in Prod
    uction Mode>
    <28-Sep-2012 14:09:04 o'clock BST> <Notice> <WebLogicServer> <BEA-000365> <Serve
    r state changed to ADMIN>
    <28-Sep-2012 14:09:04 o'clock BST> <Notice> <WebLogicServer> <BEA-000360> <Serve
    r started in ADMIN mode>
    **********************************OIM OID Ldap Sync Configuration Logs****************************
    [2012-09-28T14:49:11.171+01:00] [as] [NOTIFICATION] [] [oracle.as.provisioning] [tid: 12] [ecid: 0000JcD8obD9pYjpp0_AiY1GPQHh000003,0] [[
    [OIM_CONFIG] Updating Ldap Sync Configuration
    [2012-09-28T14:49:11.171+01:00] [as] [TRACE:16] [] [oracle.as.provisioning] [tid: 12] [ecid: 0000JcD8obD9pYjpp0_AiY1GPQHh000003,0] [SRC_CLASS: LdapSync] [SRC_METHOD: configurationLdap] ENTRY
    [2012-09-28T14:49:11.171+01:00] [as] [TRACE] [] [oracle.as.provisioning] [tid: 12] [ecid: 0000JcD8obD9pYjpp0_AiY1GPQHh000003,0] [SRC_CLASS: oracle.as.install.oim.config.util.LdapSync] [SRC_METHOD: configurationLdap] Create the Database connection
    [2012-09-28T14:49:11.171+01:00] [as] [TRACE:16] [] [oracle.as.provisioning] [tid: 12] [ecid: 0000JcD8obD9pYjpp0_AiY1GPQHh000003,0] [SRC_CLASS: LdapSync] [SRC_METHOD: createDBConnection] ENTRY
    [2012-09-28T14:49:11.296+01:00] [as] [TRACE] [] [oracle.as.provisioning] [tid: 12] [ecid: 0000JcD8obD9pYjpp0_AiY1GPQHh000003,0] [SRC_CLASS: oracle.as.install.oim.config.util.LdapSync] [SRC_METHOD: configurationLdap] isLIBOVD:true
    [2012-09-28T14:49:11.312+01:00] [as] [TRACE:16] [] [oracle.as.provisioning] [tid: 12] [ecid: 0000JcD8obD9pYjpp0_AiY1GPQHh000003,0] [SRC_CLASS: LdapSync] [SRC_METHOD: closeDBConnection] ENTRY
    [2012-09-28T14:49:11.312+01:00] [as] [TRACE:16] [] [oracle.as.provisioning] [tid: 12] [ecid: 0000JcD8obD9pYjpp0_AiY1GPQHh000003,0] [SRC_CLASS: LdapSync] [SRC_METHOD: closeDBConnection] RETURN
    [2012-09-28T14:49:11.312+01:00] [as] [TRACE:16] [] [oracle.as.provisioning] [tid: 12] [ecid: 0000JcD8obD9pYjpp0_AiY1GPQHh000003,0] [SRC_CLASS: LdapSync] [SRC_METHOD: configurationLdap] RETURN
    [2012-09-28T14:49:11.312+01:00] [as] [NOTIFICATION] [] [oracle.as.provisioning] [tid: 12] [ecid: 0000JcD8obD9pYjpp0_AiY1GPQHh000003,0] [[
    Updated LDAP Server Details in mds schema
    [2012-09-28T14:49:11.312+01:00] [as] [TRACE:16] [] [oracle.as.provisioning] [tid: 12] [ecid: 0000JcD8obD9pYjpp0_AiY1GPQHh000003,0] [SRC_CLASS: LdapSync] [SRC_METHOD: configurationLdap] RETURN
    [2012-09-28T14:49:11.812+01:00] [as] [NOTIFICATION] [] [oracle.as.provisioning] [tid: 12] [ecid: 0000JcD8obD9pYjpp0_AiY1GPQHh000003,0] [OIM_CONFIG] Updated LDAPContainerRules.xml.
    [2012-09-28T14:49:11.812+01:00] [as] [TRACE:16] [] [oracle.as.provisioning] [tid: 12] [ecid: 0000JcD8obD9pYjpp0_AiY1GPQHh000003,0] [SRC_CLASS: mdsMetadata] [SRC_METHOD: loadEventhandler] RETURN
    [2012-09-28T14:49:14.687+01:00] [as] [NOTIFICATION] [] [oracle.as.provisioning] [tid: 12] [ecid: 0000JcD8obD9pYjpp0_AiY1GPQHh000003,0] [[
    [OIM_CONFIG] Created jobs using seedSchedulerData. Log location C:\Program Files\Oracle\Inventory\logs
    [2012-09-28T14:49:14.687+01:00] [as] [ERROR] [] [oracle.as.provisioning] [tid: 12] [ecid: 0000JcD8obD9pYjpp0_AiY1GPQHh000003,0] File not found[[
    java.io.FileNotFoundException: File not found
         at java.util.zip.ZipFile.open(Native Method)
         at java.util.zip.ZipFile.<init>(ZipFile.java:117)
         at java.util.jar.JarFile.<init>(JarFile.java:135)
         at java.util.jar.JarFile.<init>(JarFile.java:72)
         at oracle.as.install.oim.config.util.RoleSODJarUtil.updateFile(RoleSODJarUtil.java:32)
         at oracle.as.install.oim.config.OIMConfigManager.configureOIM(OIMConfigManager.java:783)
         at oracle.as.install.oim.config.OIMConfigManager.doExecute(OIMConfigManager.java:538)
         at oracle.as.install.engine.modules.configuration.client.ConfigAction.execute(ConfigAction.java:335)
         at oracle.as.install.engine.modules.configuration.action.TaskPerformer.run(TaskPerformer.java:87)
         at oracle.as.install.engine.modules.configuration.action.TaskPerformer.startConfigAction(TaskPerformer.java:104)
         at oracle.as.install.engine.modules.configuration.action.ActionRequest.perform(ActionRequest.java:15)
         at oracle.as.install.engine.modules.configuration.action.RequestQueue.perform(RequestQueue.java:63)
         at oracle.as.install.engine.modules.configuration.standard.StandardConfigActionManager.start(StandardConfigActionManager.java:158)
         at oracle.as.install.engine.modules.configuration.boot.ConfigurationExtension.kickstart(ConfigurationExtension.java:81)
         at oracle.as.install.engine.modules.configuration.ConfigurationModule.run(ConfigurationModule.java:83)
         at java.lang.Thread.run(Thread.java:662)
    [2012-09-28T14:49:14.687+01:00] [as] [NOTIFICATION] [] [oracle.as.provisioning] [tid: 12] [ecid: 0000JcD8obD9pYjpp0_AiY1GPQHh000003,0] [[
    [OIM_CONFIG] Failed configuration step Configure OIM Server
    [2012-09-28T14:49:14.702+01:00] [as] [ERROR] [] [oracle.as.install.engine.modules.configuration.standard.StandardConfigActionManager] [tid: 12] [ecid: 0000JcD8obD9pYjpp0_AiY1GPQHh000003,0] One or More configurations failed. Exiting
    [2012-09-28T14:49:14.702+01:00] [as] [NOTIFICATION] [] [oracle.as.install.engine.modules.statistics] [tid: 12] [ecid: 0000JcD8obD9pYjpp0_AiY1GPQHh000003,0] Install Adapter: Mark End for:CONFIG
    [2012-09-28T14:49:14.702+01:00] [as] [NOTIFICATION] [] [oracle.as.install.engine.modules.statistics] [tid: 12] [ecid: 0000JcD8obD9pYjpp0_AiY1GPQHh000003,0] Install Adapter: Mark End for:INTERVIEW
    [2012-09-28T14:49:14.702+01:00] [as] [NOTIFICATION] [] [oracle.as.install.engine.modules.statistics] [tid: 12] [ecid: 0000JcD8obD9pYjpp0_AiY1GPQHh000003,0] Install Adapter: Mark End for:INSTALL
    [2012-09-28T14:49:14.702+01:00] [as] [NOTIFICATION] [] [oracle.as.install.engine.modules.statistics] [tid: 12] [ecid: 0000JcD8obD9pYjpp0_AiY1GPQHh000003,0] Install Adapter: Mark End for:COPY
    [2012-09-28T14:49:14.702+01:00] [as] [NOTIFICATION] [] [oracle.as.install.engine.modules.statistics] [tid: 12] [ecid: 0000JcD8obD9pYjpp0_AiY1GPQHh000003,0] Install Adapter: Mark End for:LINK
    [2012-09-28T14:49:14.765+01:00] [as] [NOTIFICATION] [] [oracle.as.install.engine] [tid: 12] [ecid: 0000JcD8obD9pYjpp0_AiY1GPQHh000003,0] Setting valueOf(IS CONFIGURATION SUCCESSFUL) to:false. Value obtained from:USER
    [2012-09-28T15:11:21.461+01:00] [as] [NOTIFICATION] [] [oracle.as.install.engine] [tid: 11] [ecid: 0000JcD2jfD9pYjpp0_AiY1GPQHh000002,0] Setting valueOf(IS CONFIGURATION SUCCESSFUL) to:false. Value obtained from:USER
    [2012-09-28T15:11:27.914+01:00] [as] [NOTIFICATION] [] [oracle.as.install.engine] [tid: 11] [ecid: 0000JcD2jfD9pYjpp0_AiY1GPQHh000002,0] Setting valueOf(IS CONFIGURATION SUCCESSFUL) to:false. Value obtained from:USER
    Regards,
    Ravi.

    Your log files do give some hints. Please verify whether the following files, such as .xldatabasekey, are present in your environment:
    OIM application intialization failed because of the following reasons:
    oim-config.xml was not found in MDS Repository.
    Unable to find keystore ".xldatabasekey" in <DOMAIN_HOME>/config/fmwconfig/.
    Password for OIMSchemaPassword is not seeded in CSF.
    Password for xell is not seeded in CSF.
    Password for DataBaseKey is not seeded in CSF.
    Password for JMSKey is not seeded in CSF.
    Password for .xldatabasekey is not seeded in CSF.
    Password for default-keystore.jks is not seeded in CSF.
    Password for SOAAdminPassword is not seeded in CSF.
    I doubt whether OIM is properly installed in your environment; otherwise .xldatabasekey would be present in <DOMAIN_HOME>/config/fmwconfig.
    Also, as far as WebLogic starting in ADMIN mode is concerned, you may try the following:
    ps -eaf | grep AdminServer
    Kill the process.
    Then remove the .lok (lock) files:
    rm -rf /home/oracle/Oracle/Middleware/user_projects/domains/oimdomain/servers/oim_server1/tmp/*oim_server1.lok*
    rm -rf /home/oracle/Oracle/Middleware/user_projects/domains/oimdomain/servers/soa_server1/tmp/*soa_server1.lok*
    rm -rf /home/oracle/Oracle/Middleware/user_projects/domains/oimdomain/servers/AdminServer/tmp/*AdminServer.lok*
    After that,
    take a backup of /home/oracle/Oracle/Middleware/user_projects/domains/<DOMAIN_HOME>/servers/AdminServer/data/ldap/ldapfiles (that is, CUT this folder and save it in a backup folder).
    Share the result with us.

  • Help required with OIM-OID LDAP Sync and the GTC flat file connector

    Hi Experts,
    I am using OIM 11.1.1.5 with OID LDAP Sync enabled. OIM is protected with OAM 11.1.1.5.0 and almost everything is working normally.
    When I run a trusted flat-file GTC reconciliation into OIM, the users are created in OIM without any password, and because of that the users are not created in OID (LDAP Sync is enabled).
    The following exception is thrown:
    <Nov 13, 2011 9:48:21 AM CET> <Warning> <XELLERATE.GC.PROVIDER.RECONCILIATIONTRANSPORT> <BEA-000000> <FILE SUCCESSFULLY ARCHIVED : /home/oracle/OAM_ProtoTyping/TestCSV/Scheduled.csv>
    <Nov 13, 2011 9:48:21 AM CET> <Warning> <oracle.iam.callbacks.common> <IAM-2030146> <[CALLBACKMSG] Are applicable policies present for this async eventhandler ? : false>
    <Nov 13, 2011 9:48:22 AM CET> <Error> <oracle.iam.ldapsync.impl.eventhandlers.user> <IAM-3010021> <An error occurred while creating the user in LDAP.
    oracle.iam.platform.entitymgr.MissingRequiredAttributeException: [usr_password]
    at oracle.iam.platform.entitymgr.impl.EntityManagerImpl.checkRequired(EntityManagerImpl.java:1450)
    at oracle.iam.platform.entitymgr.impl.EntityManagerImpl.createEntity(EntityManagerImpl.java:263)
    at oracle.iam.ldapsync.impl.eventhandlers.user.UserCreateLDAPPostProcessHandler.createUser(UserCreateLDAPPostProcessHandler.java:261)
    at oracle.iam.ldapsync.impl.eventhandlers.user.UserCreateLDAPHandler.execute(UserCreateLDAPHandler.java:123)
    at oracle.iam.platform.kernel.impl.OrchProcessData.runPostProcessEvents(OrchProcessData.java:1166)
    at oracle.iam.platform.kernel.impl.OrchProcessData.runEvents(OrchProcessData.java:710)
    at oracle.iam.platform.kernel.impl.OrchProcessData.executeEvents(OrchProcessData.java:227)
    at oracle.iam.platform.kernel.impl.OrchestrationEngineImpl.resumeProcess(OrchestrationEngineImpl.java:675)
    at oracle.iam.platform.kernel.impl.OrchestrationEngineImpl.resumeProcess(OrchestrationEngineImpl.java:705)
    at oracle.iam.platform.kernel.impl.OrhestrationAsyncTask.execute(OrhestrationAsyncTask.java:108)
    at oracle.iam.platform.async.impl.TaskExecutor.executeUnmanagedTask(TaskExecutor.java:100)
    at oracle.iam.platform.async.impl.TaskExecutor.execute(TaskExecutor.java:70)
    at oracle.iam.platform.async.messaging.MessageReceiver.onMessage(MessageReceiver.java:68)
    at sun.reflect.GeneratedMethodAccessor1821.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at com.bea.core.repackaged.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:310)
    at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:182)
    at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:149)
    at com.bea.core.repackaged.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:89)
    at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
    at com.bea.core.repackaged.springframework.aop.support.DelegatingIntroductionInterceptor.doProceed(DelegatingIntroductionInterceptor.java:131)
    at com.bea.core.repackaged.springframework.aop.support.DelegatingIntroductionInterceptor.invoke(DelegatingIntroductionInterceptor.java:119)
    at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
    at com.bea.core.repackaged.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
    at $Proxy335.onMessage(Unknown Source)
    at weblogic.ejb.container.internal.MDListener.execute(MDListener.java:574)
    at weblogic.ejb.container.internal.MDListener.transactionalOnMessage(MDListener.java:477)
    at weblogic.ejb.container.internal.MDListener.onMessage(MDListener.java:380)
    at weblogic.jms.client.JMSSession.onMessage(JMSSession.java:4659)
    at weblogic.jms.client.JMSSession.execute(JMSSession.java:4345)
    at weblogic.jms.client.JMSSession.executeMessage(JMSSession.java:3822)
    at weblogic.jms.client.JMSSession.access$000(JMSSession.java:115)
    at weblogic.jms.client.JMSSession$UseForRunnable.run(JMSSession.java:5170)
    at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:528)
    at weblogic.work.ExecuteThread.execute(ExecuteThread.java:209)
    at weblogic.work.ExecuteThread.run(ExecuteThread.java:178)
    >
    Has anybody faced a similar kind of issue?
    I tried to use a post-process event handler on create, but while updating the password it says the user state is not in sync with OID.
    So I am unable to use post-process event handlers either.
    Regards,
    J

    Thanks Sunny,
    But the post-process event handler that resets/updates the password is not working on CREATE;
    the following error message is thrown:
    oracle.iam.platform.kernel.EventFailedException: Password reset failed because user JSMITH151 is not synchronized to the LDAP directory.
    at oracle.iam.ldapsync.impl.eventhandlers.user.util.LDAPUserHandlerUtil.resetPassword(LDAPUserHandlerUtil.java:203)
    at oracle.iam.ldapsync.impl.eventhandlers.user.UserResetPasswordLDAPHandler.execute(UserResetPasswordLDAPHandler.java:167)
    at oracle.iam.platform.kernel.impl.OrchProcessData.runPreProcessEvents(OrchProcessData.java:898)
    at oracle.iam.platform.kernel.impl.OrchProcessData.runEvents(OrchProcessData.java:634)
    at oracle.iam.platform.kernel.impl.OrchProcessData.executeEvents(OrchProcessData.java:227)
    at oracle.iam.platform.kernel.impl.OrchestrationEngineImpl.resumeProcess(OrchestrationEngineImpl.java:665)
    In OIM 11.1.1.3 I found the password was available for mapping in the GTC connector, but in OIM 11.1.1.5 Oracle has removed the password mapping attribute.
    Can you please suggest anything?
    I checked with Oracle Support; they say that in OIM 11.1.1.5 they have introduced a new post-process event handler which should generate the password on every trusted reconciliation event.
    But in my environment it is not behaving like that.
    Regards,
    J

  • Urgent: mapping between OID and iPlanet LDAP

    I am trying to configure the mapping between my iPlanet LDAP server (source) and OID (destination). My iPlanet DN is uid=sharam,ou=People,dc=xsj,dc=xilinx,dc=com and my OID DN is cn=sharam,cn=users,dc=xsj,dc=xilinx,dc=com.
    My mapping file looks like this:
    DomainRules
    dc=xilinx,dc=com:cn=users,dc=xsj,dc=xilinx,dc=com:cn=%,cn=users,dc=xsj,dc=xilinx
    AttributeRules
    givenname
    facsimiletelephonenumber
    departmentnumber
    mail
    uid::::cn
    telephonenumber
    pager
    employeenumber
    l
    sn
    title
    When I load this using ldapUploadAgentFile.sh, I get the following error in the ldap/odi/log/IPlanet.trc file. Any ideas what I am doing wrong?
    Trace Log Started at Mon Jul 08 11:28:47 PDT 2002
    IPlanetImport:Error in Mapping EngineODIException: DIP_GEN_UNKNOWN_FAILURE
    ODIException: DIP_GEN_UNKNOWN_FAILURE
    at oracle.ldap.odip.map.MapEngine.constructDN(MapEngine.java:258)
    at oracle.ldap.odip.map.MapEngine.mapDomains(MapEngine.java:196)
    at oracle.ldap.odip.map.MapEngine.map(MapEngine.java:172)
    at oracle.ldap.odip.engine.AgentThread.mapExecute(AgentThread.java:323)
    at oracle.ldap.odip.engine.AgentThread.execMapping(AgentThread.java:214)
    at oracle.ldap.odip.engine.AgentThread.run(AgentThread.java:124)
    Updated Attributes
    orclodipLastExecutionTime: 20020708112903
    orclOdipSynchronizationStatus: Mapping Failure;Agent Execution Not Attempted
    orclOdipSynchronizationErrors: Unknown Error Encountered
    [The same "IPlanetImport:Error in Mapping Engine" ODIException: DIP_GEN_UNKNOWN_FAILURE block, with an identical stack trace and the same "Mapping Failure;Agent Execution Not Attempted" status, repeats for every subsequent run through orclodipLastExecutionTime: 20020708113348.]

    Start the odisrv with the debug flag set to 16. This should give you a more detailed trace which might help you sort this.
    Hope this helps
    Vinodh R.
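    One other thing worth checking before re-running: in the DomainRules entry the third field (the DN construct rule) appears to be truncated. It ends in cn=users,dc=xsj,dc=xilinx rather than cn=users,dc=xsj,dc=xilinx,dc=com, so the constructed destination DN would not match the OID DN quoted above. Assuming the standard srcDomain:dstDomain:constructRule syntax of the DIP mapping file, the corrected line would look something like this (a sketch, not a verified fix):
    DomainRules
    dc=xilinx,dc=com:cn=users,dc=xsj,dc=xilinx,dc=com:cn=%,cn=users,dc=xsj,dc=xilinx,dc=com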

  • What is the best way to update similar OID and OAM LDAP attributes via OIM?

    Our environment uses OIM provisioning to an OID LDAP which is used by OAM.
    For legacy purposes, we need to populate both the Oracle "orcl*" attributes and the OAM "ob*" attributes in cases where they have the same or similar usage.
    Example: when a user is disabled in OIM we need to set orclisenabled="false" and obUserAccountControl="DEACTIVATED" in OID.
    What is the best way to accomplish this in OIM? My initial thought was to write a custom adapter, similar to the out-of-the-box OID Modify User adapter, which supports modifying multiple attributes.
    Is there a better way?

    You can create two tasks which will modify the two attributes in OID.
    On the Disable User task, call Task1, and on success of Task1 call Task2 (using the task-to-generate feature).
    That way you can make use of the OOTB connector alone.
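    Whichever adapter or task approach you choose, what ends up happening on the directory side is a single LDAP modify of both attributes. A minimal JNDI sketch of that operation is below; the host, credentials, and user DN are placeholders, and in OIM this logic would sit inside the adapter task rather than a standalone class:
    // Sketch: set both the legacy "orcl*" and the OAM "ob*" attribute in one LDAP modify.
    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.NamingException;
    import javax.naming.directory.BasicAttribute;
    import javax.naming.directory.DirContext;
    import javax.naming.directory.InitialDirContext;
    import javax.naming.directory.ModificationItem;

    public class DisableOidUser {
        public static void main(String[] args) throws NamingException {
            Hashtable<String, String> env = new Hashtable<String, String>();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
            env.put(Context.PROVIDER_URL, "ldap://oidhost.example.com:389"); // placeholder host
            env.put(Context.SECURITY_AUTHENTICATION, "simple");
            env.put(Context.SECURITY_PRINCIPAL, "cn=orcladmin");             // placeholder bind DN
            env.put(Context.SECURITY_CREDENTIALS, "password");               // placeholder password

            DirContext ctx = new InitialDirContext(env);
            try {
                ModificationItem[] mods = new ModificationItem[] {
                    new ModificationItem(DirContext.REPLACE_ATTRIBUTE,
                            new BasicAttribute("orclisenabled", "false")),
                    new ModificationItem(DirContext.REPLACE_ATTRIBUTE,
                            new BasicAttribute("obUserAccountControl", "DEACTIVATED"))
                };
                // Placeholder user DN; both attributes are replaced in one round trip.
                ctx.modifyAttributes("cn=jsmith,cn=users,dc=example,dc=com", mods);
            } finally {
                ctx.close();
            }
        }
    }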

  • Is there any way to configure iWS 6.0 to connect to an LDAP directory using SSL client and server authentication? Only SSL server authentication worked when I tried.

    As in my previous question, I followed the instructions below to set up the connection between iWS and an LDAP server.
    "Using SSL to Communicate with LDAP
    You should require your Administration Server to communicate with LDAP using SSL. To enable SSL on your Administration Server, perform the following steps:
    1.Access the Administration Server and choose the Global Settings tab.
    2.Click the Configure Directory Service link.
    3.Select Yes to use Secure Sockets Layer (SSL) for connections.
    4.Click Save Changes.
    5.Click OK to change your port to the standard port for LDAP over SSL. "
    Q1. Any other steps needed to setup client authentication (or mutual authentication)?
    Q2. Do I need to enable security for connection groups in order to have this setup to work?

    Check out:
    http://docs.iplanet.com/docs/manuals/enterprise/60sp1/ag/esecurty.htm#1008113
    You will need to turn on Client Auth as described above. Hope it helps.

  • OAS and LDAP or OAS and OID ???

    1) Is OAS and LDAP a good combination, or OAS and OID?
    How do we connect to and make use of LDAP from OAS?
    Please let me know.
    Thanks in advance.

    Get hold of whitepaper 774783.1, "LDAP Integration for Oracle Utilities Application Framework based products", from My Oracle Support.

  • Setting up OID/LDAP with SQL Developer?

    I have over 100 databases to add to SQL Developer. I use OID and would like to allow SQL Developer to use it. How do I set it up? A parameter file somewhere?

    I have installed SQL Developer on my laptop (Windows XP Pro), and we have the LDAP server and other database servers on Unix. I have done the following, and it works for me using OID/LDAP 9.2.0.7.
    To use OID-based name resolution,
    copy ojdbc14.jar from ORACLE_HOME (10.x)/jdbc/lib (Windows client)
    to <SQL Developer home>/jdev/lib/patches.
    Hope this helps.
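    A quick way to check that the driver itself can resolve names through OID is an LDAP-style thin JDBC URL; a minimal sketch follows (host, port, alias, and Oracle context DN are placeholders, and the ojdbc jar must be on the classpath):
    import java.sql.Connection;
    import java.sql.DriverManager;

    public class OidJdbcLookup {
        public static void main(String[] args) throws Exception {
            // Older drivers such as ojdbc14 need explicit registration; newer ones auto-register.
            Class.forName("oracle.jdbc.OracleDriver");
            // The connect identifier "orcl" is looked up in OID instead of tnsnames.ora.
            String url = "jdbc:oracle:thin:@ldap://oidhost.example.com:389/"
                    + "orcl,cn=OracleContext,dc=example,dc=com";
            Connection conn = DriverManager.getConnection(url, "scott", "tiger");
            System.out.println("Connected: " + !conn.isClosed());
            conn.close();
        }
    }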

  • Q & A on NET8 logging- and trace-related parameters

    Product: SQL*NET
    Date written: 1999-07-30
    Q & A on NET8 logging- and trace-related parameters
    ==================================================
    PURPOSE
    This note explains the NET8 parameters related to logging and tracing.
    Explanation
    1. Why is tracing used in NET8, and which components can be traced?
    Tracing describes the network events that take place while a network operation
    runs; that is, a detailed series of statements related to the trace is generated.
    With tracing turned on you can obtain more information about the internals of
    the NET8 components than is available in the log files.
    Because the same events that led to an error are also written to the trace
    files, this output can be used to determine the cause of the problem.
    Note: tracing requires plenty of disk space and can significantly degrade
    system performance; it is therefore recommended that tracing be enabled only
    when absolutely necessary.
    Example
    Reference Document
    << Components that can be traced using the trace facility >>
    * Network listener
    * Net8 components on the client and server
    * Connection Manager
    * Oracle Names Server
    * Oracle Names Control Utility
    * TNSPING utility
    2. Which parameters must be set to enable tracing?
    Tracing is enabled by setting specific trace parameters; they can be set
    through one of the following methods or utilities:
    * Component Configuration Files
    * Component Control Utilities
    * Oracle Trace
    To set tracing parameters using a component's configuration file:
    1) Set the following tracing parameters in the component's configuration file
    - SQLNET.ORA for client or server, LISTENER.ORA for the listener:
    TRACE_LEVEL_<CLIENT/LISTENER/SERVER>=(0/4/10/16)
    TRACE_DIRECTORY_<CLIENT/LISTENER/SERVER>=<directory name>
    LOG_DIRECTORY_<CLIENT/LISTENER/SERVER>=<directory name>
    2) If the configuration file was modified while the components were running,
    the components must be restarted for the changed parameters to take effect.
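    For example, a minimal client-side tracing setup in SQLNET.ORA would look like
    this (the directory is a placeholder; each parameter is described in the
    reference list below):
    TRACE_LEVEL_CLIENT = 16
    TRACE_DIRECTORY_CLIENT = /oracle/network/trace
    TRACE_FILE_CLIENT = cli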
    To set trace parameters using the component control utilities:
    1) For the listener, the TRACE command of the Listener Control Utility (lsnrctl)
    can set the trace level even while the listener is running.
    EX)
    RC80:/mnt3/rctest80> lsnrctl
    LSNRCTL for SVR4: Version 8.0.4.0.0 - Production on 01-SEP-98 15:16:52
    (c) Copyright 1997 Oracle Corporation. All rights reserved.
    Welcome to LSNRCTL, type "help" for information.
    LSNRCTL> trace admin
    Connecting to (ADDRESS=(PROTOCOL=ipc)(KEY=PNPKEY))
    Opened trace file: /mnt4/coe/app/oracle/product/8.0.4/network/trace/
    lsnr_coe.trc
    The command completed successfully
    LSNRCTL> trace off
    Connecting to (ADDRESS=(PROTOCOL=ipc)(KEY=PNPKEY))
    The command completed successfully
    LSNRCTL> exit
    RC80:/mnt3/rctest80>
    2) For Oracle Names, the TRACE_LEVEL command of the Names Control Utility
    (namesctl) can set the trace level even while Oracle Names is running.
    Note: for Connection Manager, the trace level can be set only in its
    configuration file, CMAN.ORA.
    Oracle Trace, part of Oracle Enterprise Manager (OEM), is a tracing tool that
    lets you set the trace parameters and view the trace data through a GUI.
    3. Are there other utilities that can interpret traced data?
    Trace Assistant can interpret the traced information in your *.trc files
    (generated in SQL*Net v2 format) or *.txt files (output produced by Oracle
    Trace and TRCFMT).
    This utility helps you diagnose and resolve problems caused by network issues
    by providing more information about:
    * the source and destination of trace files
    * the flow of packets between network nodes
    * which component of Net8 is failing
    * pertinent error codes
    Trace Assistant is run with the following command:
    trcasst [options] <filename>
    Trace Assistant Text Formatting Options
    -o Displays connectivity and Two Task Common (TTC) information.
    After the -o the following options may be used:
    c (for summary connectivity information)
    d (for detailed connectivity information)
    u (for summary TTC information)
    t (for detailed TTC information)
    q (displays SQL commands enhancing summary TTC
    information)
    -p Oracle Internal Use Only
    -s Displays statistical information
    -e Enables display of error information. After the -e, zero
    or one error decoding level may follow:
    0 or nothing (translates the NS error numbers dumped
    from the nserror function plus lists all
    other errors)
    1 (displays only the NS error translation from
    the nserror function)
    2 (displays error numbers without translation)
    If no options are supplied, -odt -e -s is used by default, producing detailed
    connectivity, Two-Task Common, error level 0, and statistical information.
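    For example, to request that same detailed output explicitly for a client
    trace file (the filename is a placeholder):
    trcasst -odt -e -s cli.trc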
    4. How does this differ from SQL*Net v2 tracing?
    Net8 tracing includes every option provided in the previous version, SQL*NET V2,
    and adds the Oracle Trace functionality.
    This allows the Oracle Trace Repository to manage your trace information
    through the OEM console.
    5. What are *.cdf and *.dat files?
    *.cdf and *.dat files are generated by Oracle Trace, and the trcfmt utility
    must be used to read them.
    trcfmt extracts the data held in the binary files (*.dat and *.cdf extensions)
    into plain text (.txt extension). To use this tool, run the following command:
    $ trcfmt collection.cdf
    Note: if the tool is run from a directory other than the one containing the
    .cdf and .dat files, the path must be included. If tracing information for
    several processes is collected into a single pair of .cdf and .dat files,
    the output will be extracted into files named process_id.txt.
    6. Which configuration files are involved in tracing, and which parameters
    can be set in them?
    ==========================================================================
    || SQLNET.ORA Parameters ||
    ==========================================================================
    DAEMON.TRACE_DIRECTORY
    Purpose: Controls the destination directory of the Oracle
    Enterprise Manager daemon trace file
    Default Value: $ORACLE_HOME/network/trace
    Description: Available in the Oracle Enterprise Manager Installation Guide
    Example: DAEMON.TRACE_DIRECTORY=/oracle/traces
    DAEMON.TRACE_LEVEL
    Purpose: Turns tracing on/off to a certain specified level for
    the Oracle Enterprise Manager daemon.
    Default Value: 0 or OFF
    Available Values:
    * 0 or OFF - No trace output
    * 4 or USER - User trace information
    * 10 or ADMIN - Administration trace information
    * 16 or SUPPORT - WorldWide Customer Support trace information
    Description: Available in the Oracle Enterprise Manager Installation Guide
    Example: DAEMON.TRACE_LEVEL=10
    DAEMON.TRACE_MASK
    Purpose: Specifies that only the Oracle Enterprise Manager daemon
    trace entries are logged into the trace file.
    Default Value: $ORACLE_HOME/network/trace
    Description: Available in the Oracle Enterprise Manager Installation Guide
    Example: DAEMON.TRACE_MASK=(106)
    LOG_DIRECTORY_CLIENT
    Purpose: Controls the directory for where the log file is written
    Default Value: Current directory where executable is started from.
    Example: LOG_DIRECTORY_CLIENT=/oracle/network/trace
    LOG_DIRECTORY_SERVER
    Purpose: Controls the directory for where the log file is written
    Default Value: Current directory where executable is started from.
    Valid in File: SQLNET.ORA
    Example: LOG_DIRECTORY_SERVER=/oracle/network/trace
    LOG_FILE_CLIENT
    Purpose: Controls the log output filename for an Oracle client.
    Default Value: SQLNET.LOG
    Example: LOG_FILE_CLIENT=client
    LOG_FILE_SERVER
    Purpose: Controls the log output filename for an Oracle server.
    Default Value: SQLNET.LOG
    Example: LOG_FILE_SERVER=svr
    NAMESCTL.TRACE_LEVEL
    Purpose: Indicates the level at which the NAMESCTL program should
    be traced.
    Default Value: OFF
    Values: OFF, USER, or ADMIN
    Example: NAMESCTL.TRACE_LEVEL=ADMIN
    NAMESCTL.TRACE_FILE
    Purpose: Indicates the file in which the NAMESCTL trace output is
    placed.
    Default Value: namesctl_PID.cdf and namesctl_PID.dat
    Example: NAMESCTL.TRACE_FILE=NMSCTL
    NAMESCTL.TRACE_DIRECTORY
    Purpose: Indicates the directory where trace output from the NAMESCTL
    utility is placed.
    Default Value: $ORACLE_HOME/network/trace
    Example: NAMESCTL.TRACE_DIRECTORY=/ORACLE/TRACE
    NAMESCTL.TRACE_UNIQUE
    Purpose: Indicates whether a process identifier is appended to the
    name of each trace file generated, so that several can coexist.
    Default Value: OFF
    Values: OFF or ON
    Example: NAMESCTL.TRACE_UNIQUE = ON
    TNSPING.TRACE_DIRECTORY
    Purpose: Control the destination directory of the trace file
    Default Value: $ORACLE_HOME/network/trace
    Example: TNSPING.TRACE_DIRECTORY=/oracle/traces
    TNSPING.TRACE_LEVEL
    Purpose: Turns tracing on/off to a certain specified level
    Default Value: 0 or OFF
    Available Values:
    * 0 or OFF - No trace output
    * 4 or USER - User trace information
    * 10 or ADMIN - Administration trace information
    * 16 or SUPPORT - WorldWide Customer Support trace information
    Example: TNSPING.TRACE_LEVEL=10
    TRACE_DIRECTORY_CLIENT
    Purpose: Control the destination directory of the trace file
    Default Value: $ORACLE_HOME/network/trace
    Example: TRACE_DIRECTORY_CLIENT=/oracle/traces
    TRACE_DIRECTORY_SERVER
    Purpose: Control the destination directory of the trace file
    Default Value: $ORACLE_HOME/network/trace
    Example: TRACE_DIRECTORY_SERVER=/oracle/traces
    TRACE_FILE_CLIENT
    Purpose: Controls the name of the client trace file
    Default Value: SQLNET.CDF and SQLNET.DAT
    Example: TRACE_FILE_CLIENT=cli
    TRACE_FILE_SERVER
    Purpose: Controls the name of the server trace file
    Default Value: SVR_PID.CDF and SVR_PID.DAT
    Example: TRACE_FILE_SERVER=svr
    TRACE_LEVEL_CLIENT
    Purpose: Turns tracing on/off to a certain specified level
    Default Value: 0 or OFF
    Available Values:
    * 0 or OFF - No trace output
    * 4 or USER - User trace information
    * 10 or ADMIN - Administration trace information
    * 16 or SUPPORT - WorldWide Customer Support trace information
    Example: TRACE_LEVEL_CLIENT=10
    TRACE_LEVEL_SERVER
    Purpose: Turns tracing on/off to a certain specified level
    Default Value: 0 or OFF
    Available Values:
    * 0 or OFF - No trace output
    * 4 or USER - User trace information
    * 10 or ADMIN - Administration trace information
    * 16 or SUPPORT - WorldWide Customer Support trace information
    Example: TRACE_LEVEL_SERVER=10
    TRACE_UNIQUE_CLIENT
    Purpose: Used to make each client trace file have a unique name to
    prevent each trace file from being overwritten with the next
    occurrence of the client. The PID is attached to the end of the filename.
    Default Value: OFF
    Example: TRACE_UNIQUE_CLIENT=ON
    USE_CMAN
    Purpose: If the session is in an Enhanced Discovery Network with a
    Names Server, this parameter forces all sessions to go
    through a Connection Manager to get to the server.
    Default Value: FALSE
    Values: TRUE or FALSE
    Example: USE_CMAN=TRUE
    ==========================================================================
    || LISTENER.ORA Parameters ||
    ==========================================================================
    LOG_DIRECTORY_listener_name
    Purpose: Controls the directory for where the log file is written
    Default Value: Current directory where executable is started from.
    Example: LOG_DIRECTORY_LISTENER=/oracle/traces
    LOG_FILE_listener_name
    Purpose: Specifies the filename where the log information is
    written
    Default Value: listener_name.log
    Example: LOG_FILE_LISTENER=lsnr
    TRACE_DIRECTORY_listener_name
    Purpose: Control the destination directory of the trace file
    Default Value: $ORACLE_HOME/network/trace
    Example: TRACE_DIRECTORY_LISTENER=/oracle/traces
    TRACE_FILE_listener_name
    Purpose: Controls the name of the listener trace file
    Default Value: LISTENER_NAME.CDF and LISTENER_NAME.DAT
    Example: TRACE_FILE_LISTENER=lsnr
    TRACE_LEVEL_listener_name
    Purpose: Turns tracing on/off to a certain specified level
    Default Value: 0 or OFF
    Available Values:
    * 0 or OFF - No trace output
    * 4 or USER - User trace information
    * 10 or ADMIN - Administration trace information
    * 16 or SUPPORT - WorldWide Customer Support trace information
    Example: TRACE_LEVEL_LISTENER=10
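    A corresponding listener.ora sketch for a listener named LISTENER; the directory and file names are illustrative only:
    # listener.ora - logging and tracing for a listener named LISTENER (illustrative values)
    LOG_DIRECTORY_LISTENER = /oracle/traces
    LOG_FILE_LISTENER = lsnr
    TRACE_LEVEL_LISTENER = ADMIN
    TRACE_DIRECTORY_LISTENER = /oracle/traces
    TRACE_FILE_LISTENER = lsnr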
    ==========================================================================
    || NAMES.ORA Parameters ||
    ==========================================================================
    NAMES.TRACE_DIRECTORY
    Purpose: Indicates the name of the directory to which trace files
    from a Names Server trace session are written.
    Default Value: platform specific
    Example: names.trace_directory = complete_directory_name
    NAMES.TRACE_FILE
    Purpose: Indicates the name of the output file from a Names Server trace session. The filename extension is always .trc
    Default Value: names
    Example: names.trace_file = filename
    NAMES.TRACE_LEVEL
    Purpose: Indicates the level at which the Names Server is to be
    traced.
    Default Value: OFF
    Example: names.trace_level = OFF
    NAMES.TRACE_UNIQUE
    Purpose: Indicates whether each trace file has a unique name, allowing multiple trace files to coexist. If the value is set to ON, a process identifier is appended to the name of each trace file generated.
    Default Value: OFF
    Example: names.trace_unique = ON
    names.trace_file = names_05.trc
    ==========================================================================
    CMAN.ORA Parameters
    ==========================================================================
    TRACING
    Default Value: NO
    Example: TRACING = NO
    7. Is there a way to keep logging information out of the listener.log file?
    When an application developed by a customer connects or disconnects through Net8, information about it is written to listener.log. With a large number of users connecting, listener.log can grow very quickly, fill the file system that holds $ORACLE_HOME, and end up hanging the database.
    Customers sometimes want to limit the amount of messages that can be written to listener.log, but no such feature is provided. However, listener logging can be turned ON or OFF through configuration.
    In Net8, setting "LOGGING_(the listener name)=off" in listener.ora stops the listener's logging.
    Of course, after setting the parameter you must stop and restart the listener for the changed parameter to take effect.
    Note: Is this parameter also valid in SQL*NET 2.3.x?
    Yes, it can be used. It works by setting the parameter in listener.ora in the same way as in Net8.
    EX)
    LOGGING_LISTENER=OFF
    This parameter disables the listener's logging entirely; it does not provide a way to filter and log only part of the information.
    This parameter is known to Net8 and, although it does not appear in the SQL*NET 2.3.x manuals, it can be used there normally.
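    A minimal sketch of the change and the restart, assuming the default listener name LISTENER:
    # listener.ora (assuming the default listener name LISTENER)
    LOGGING_LISTENER = OFF
    # stop and restart the listener so the new parameter takes effect
    lsnrctl stop
    lsnrctl start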

  • How to get All Users from OID LDAP

    Hi all,
    I have Oracle Internet Directory (OID) and have created the users in it manually.
    Now I want to extract all the users from OID. How can I get the users from OID?
    Any response will be appreciated. If someone could show me demo code for that, I would be grateful.
    Thanks and regards
    Pravy
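    For reference, the same ldapsearch approach used later in this note can be pointed at the users container to list every user. A minimal sketch, assuming the default user search base cn=users,dc=example,dc=com, an OID listening on port 3060, and the orcladmin bind (adjust host, port, password, and base DN to your realm):
    ldapsearch -h oid_host.example.com -p 3060 -D "cn=orcladmin" -w welcome1 -b "cn=users,dc=example,dc=com" -s sub "objectclass=inetorgperson" cn uid mail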

    hi,
    the notes from metalink:
    regards
    elvis
    Doc ID: Note:276688.1
    Subject: How to copy (export/import) the Portal database schemas of IAS 9.0.4 to another database
    Type: BULLETIN
    Status: PUBLISHED
    Content Type: TEXT/X-HTML
    Creation Date: 18-JUN-2004
    Last Revision Date: 05-AUG-2005
    How to copy (export/import) Portal database schemas of IAS 9.0.4 to another database
    Note 276688.1
    Download scripts Unix: Attachment 276688.1:1
    Download Perl scripts (Unix/NT) :Attachment 276688.1:2
    This article is being delivered in Draft form and may contain errors. Please use the MetaLink "Feedback" button to advise Oracle of any issues related to this article.
    HISTORY
    Version 1.0 : 24-JUN-2004: creation
    Version 1.1 : 25-JUN-2004: added a link to download the scripts from Metalink
    Version 1.2 : 29-JUN-2004: Import script: Intermedia indexes are recreated. Imported jobs are reassigned to Portal. ptlconfig replaces ptlasst.
    Version 1.3 : 09-JUL-2004: Additional updates. Usage of iasconfig.xml. Need only 3 environment variables to import.
    Version 1.4 : 18-AUG-2004: Remark about 9.2.0.5 and 10.1.0.2 database
    Version 1.5 : 26-AUG-2004: Duplicate job id
    Version 1.6 : 29-NOV-2004: Remark about WWC-44131 and WWSBR_DOC_CTX_54
    Version 1.7 : 07-JAN-2005: Attached perl scripts (for NT/Unix) at the end of the note
    Version 1.8 : 12-MAY-2005: added a work-around for the WWSTO_SESS_FK1 issue
    Version 1.9 : 07-JUL-2005: logoff trigger and 9.0.1 database export, import in 10g database
    Version 1.10: 05-AUG-2005: reference to the 10.1.2 note
    PURPOSE
    This document explains how to copy a Portal database schema from one database to another database.
    It allows restoring the Portal repository and the OID security associated with Portal.
    It can be used to go into production by physically copying a database from a development portal to a production environment, avoiding the use of the Portal export/import utilities.
    This note:
    uses export/import at the database level
    allows the export/import to be done between different platforms
    The scripts are Unix-based and written for the BASH shell. They can be adapted for other platforms.
    For those familiar with this technique in Portal 9.0.2, there is a list of the main differences with Portal 9.0.2 at the end of the note.
    These scripts are based on the experience of many people with Portal 9.0.2.
    The scripts are attached to the note. Download them here: Attachment 276688.1:1 : exp_schema_904.zip
    A new version of the scripts was written in Perl. You can also download them here: Attachment 276688.1:2 : exp_schema_904_v2.zip. They do exactly the same as the bash ones, but have the advantage of working on all platforms.
    SCOPE & APPLICATION
    This document is intended for Portal administrators. To use this note, you need basic DBA skills.
    This note is for Portal 9.0.4.x only. The notes for Portal 9.0.2 are:
    Note 228516.1 : How to copy (export/import) Portal database schemas of IAS 9.0.2 to another database
    Note 217187.1 : How to restore a cold backup of a Portal IAS 9.0.2 on another machine
    The note for Portal 10.1.2 is:
    Note 330391.1 : How to copy (export/import) Portal database schemas of IAS 10.1.2 to another database
    Method
    The method that we will follow in this document is the following:
    Export:
    - export of the 4 portal schemas of a database (DEV / development)
    - export the LDAP OID users and groups (optional)
    Install a new machine with fresh IAS installation (PROD / production)
    Import:
    - delete the new and empty portal schema on PROD
    - import the schemas in the production database in place of the deleted schemas
    - import the LDAP OID users and groups (optional)
    - modify the configuration such that the infrastructure uses the portal repository of the backup
    - modify the configuration such that the portal repository uses the OID, webcache and SSO of the new infrastructure
    The export and the import are divided in several steps. All of these steps are included in 2 sample scripts:
    export : exp_portal_schema.sh
    import : imp_portal_schema.sh
    In the 2 scripts, all the steps are run in one shot. This is just an example. Depending on the configuration and circumstances, the steps can also be run independently.
    Convention
    Development (DEV) is the name of the machine where the copied database resides
    Production (PROD) is the name of the machine to which the database is copied
    Prerequisite
    Some prerequisites first.
    A. Environment variables
    To run the import/export, you will need 3 environment variables. In the given scripts, they are defined in 'portal_env.sh'
    SYS_PASSWORD - the password of user sys in the Portal database
    IAS_PASSWORD - the password of IAS
    ORACLE_HOME - the ORACLE_HOME of the midtier
    The rest of the settings are found automatically by reading the iasconfig.xml file and querying the OID. It is done in 'portal_automatic_env.sh'. I wish to write a note on iasconfig.xml and the way to transform it into useful environment variables, but it is not done yet. In the meanwhile, you can read the old 902 doc, which explains the meaning of most variables :
    < Note 223438.1 : Shell script to find your portal passwords, settings and place them in environment variables on Unix >
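    A minimal sketch of portal_env.sh; the values below are placeholders and must be adapted to your own system:
    # portal_env.sh - placeholder values, adapt to your installation
    export SYS_PASSWORD=change_on_install                      # password of SYS in the Portal database
    export IAS_PASSWORD=welcome1                               # IAS instance password
    export ORACLE_HOME=/u01/app/oracle/product/ias904_midtier  # ORACLE_HOME of the midtier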
    B. Definition: Cutter database
    A 'Cutter Database' is the term used to designate a database created by RepCA or OUI that contains all the schemas used by an IAS 9.0.4 infrastructure, even if in most cases several schemas are not used.
    In Portal 9.0.4, the option to install only the portal repository in an empty database has been removed. It has been replaced by RepCA, a tool that creates an infrastructure database. The portal schemas live among all the infrastructure database schemas.
    This does not stop people from using 2 databases for running portal, one for OID and one for Portal. But in comparison with Portal 9.0.2, all schemas exist in both databases even if some are not used.
    The main idea of the Cutter database is to have only 1 database type, and in the future to simplify the upgrades of customer installations.
    For an installation where Portal and OID/SSO are in 2 separate databases, it looks like this
    Portal 9.0.2 / Portal 9.0.4
    Infrastructure database (INFRA_SID)
    - Portal 9.0.2: the infrastructure contains OID (used), OEM (used), Single Sign-on / orasso (used), Portal (not used)
    - Portal 9.0.4: the infrastructure contains OID (used), OEM (used), Single Sign-on / orasso (used), Portal (not used)
    Portal database (PORTAL_SID)
    - Portal 9.0.2: the custom Portal database contains Portal (used)
    - Portal 9.0.4: the custom Portal database (which is also an infrastructure) contains OID (not used), OEM (not used), Single Sign-on / orasso (not used), Portal (used)
    In any case, the note will assume there is only one single database. But it also works for a 2-database installation like the one explained above.
    C. Directory structure.
    The sample scripts given inside this note will be explained in the next paragraphs. But first, the scripts are done to use a directory structure that helps to classify the files.
    Here is a list of important files used during the process of export/import:
    File Name - Description
    exp_portal_schema.sh - Sample script that exports all the data needed from a development machine
    imp_portal_schema.sh - Sample script that imports all the data into a production machine
    portal_env.sh - Script that defines the env variables specific to your system (to configure)
    portal_automatic_env.sh - Helper script to get all the rest of the Portal settings automatically
    xsl - Directory containing all the XSL files (helper scripts)
    del_authpassword.xsl - Helper script to remove the authpassword tags in the DSML files
    portal_env_unix.sql - Helper script to get Portal settings from the iasconfig.xml file
    exp_data - Directory containing all the exported data
    portal_exp.dmp - Export at the database level of the portal, portal_app, ... database schemas
    iasconfig.xml - Copy of the iasconfig.xml of the DEV midtier; used to get the hostname and port of Webcache
    portal_users.xml - Export from LDAP of the OID users used by Portal (optional)
    portal_groups.xml - Export from LDAP of the OID groups used by Portal (optional)
    imp_log - Directory containing several spool and log files generated during the import
    import.log - Log file generated when running the imp command
    ptlconfig.log - Log generated by ptlconfig when rewiring portal to the infrastructure
    Some other spool files.
    D. Known limitations
    The scripts given in this note have the following known limitations:
    It does not copy the data stored in the SSO schema: external application definitions and the passwords stored for them.
    See the post-import steps: SSO migration, to know how to do it.
    The ssomig command resides in the Infrastructure Oracle home, while all Portal commands reside in the Midtier home, and in practice these 2 Oracle homes are usually not on the same machine. This is the reason.
    The export of the users in OID exports from the default user location:
    ldapsearch .... -b "cn=users,dc=domain,dc=com"
    This is not 100% correct. The users are by default stored in something like "cn=users,dc=domain,dc=com". So, if the users are stored in the default location, it works. But if this location (user install base) has been customized, it does not work.
    The reason is that such a setting usually means that the LDAP is highly customized, and I prefer that the administrator copy the real LDAP himself. The right command will probably depend on the customer case, so I preferred not to take the risk.
    orclCommonNicknameAttribute must match in the Target and Source OID .
    The orclCommonNicknameAttribute must match on both the source and target OID. By default this attribute is set to "uid", so if this has been changed, it must be changed in both systems.
    Reference Note 282698.1
    Migration of custom Java portlets.
    The script migrates all the data of Portal stored in the database. If you have custom Java portlets deployed on your development machine, you will need to copy them to the production system.
    Step 1 - Export in Development (DEV)
    To export a full Portal installation to another machine, you need to follow 3 steps:
    Export at the database level the portal schemas + related schemas
    Get the midtier hostname and port of DEV
    Export of the users and groups with LDAPSEARCH in 2 XML files
    A script combining all the steps is available here.
    A. Export the 4 portals schemas (DEV)
    You need to export 3 types of database schemas:
    The 4 portal schemas created by default by the portal installation :
    portal,
    portal_app,
    portal_demo,
    portal_public
    The schemas where your custom database portlets / providers reside (if any)
    - The custom schemas you have created for storing your portlet / provider code
    The schemas where your custom tables reside (if any)
    - Your custom schemas accessed by portal and containing only data (tables, views ...)
    You can get an approximate list of the schemas: default portal schemas (1) and database portlets schemas (2) with this query.
    SELECT USERNAME, DEFAULT_TABLESPACE, TEMPORARY_TABLESPACE
    FROM DBA_USERS
    WHERE USERNAME IN (user, user||'_PUBLIC', user||'_DEMO', user||'_APP')
    OR USERNAME IN (SELECT DISTINCT OWNER FROM WWAPP_APPLICATION$ WHERE NAME != 'WWV_SYSTEM');
    It still misses your custom schemas containing data only (3).
    We will export the 4 schemas and your custom ones in an export file with the user sys.
    Please, use a command like this one
    exp userid="'sys/change_on_install@dev as sysdba'" file=portal_exp.dmp grants=y log=portal_exp.log owner=(portal,portal_app,portal_demo,portal_public)
    The result is a dump file: 'portal_exp.dmp'. If you are using a 9.2.0.5 or 10.1.0.2 database, the format of the exp/imp dump file has changed. Please read this.
    B. Hostname and port
    For the URL to access the portal, you need the following 2 pieces of information to run the script imp_portal_schema.sh below:
    Webcache hostname
    Webcache listen port
    These values are contained in the iasconfig.xml file of the midtier.
    iasconfig.xml
    <IASConfig XSDVersion="1.0">
    <IASInstance Name="ias904.dev.dev_domain.com" Host="dev.dev_domain.com" Version="9.0.4">
    <OIDComponent AdminPassword="@BfgIaXrX1jYsifcgEhwxciglM+pXod0dNw==" AdminDN="cn=orcladmin" SSLEnabled="false" LDAPPort="3060"/>
    <WebCacheComponent AdminPort="4037" ListenPort="7782" InvalidationPort="4038" InvalidationUsername="invalidator" InvalidationPassword="@BR9LXXoXbvW1iH/IEFb2rqBrxSu11LuSdg==" SSLEnabled="false"/>
    <EMComponent ConsoleHTTPPort="1813" SSLEnabled="false"/>
    </IASInstance>
    <PortalInstance DADLocation="/pls/portal" SchemaUsername="portal" SchemaPassword="@BR9LXXoXbvW1c5ZkK8t3KJJivRb0Uus9og==" ConnectString="cn=asdb,cn=oraclecontext">
    <WebCacheDependency ContainerType="IASInstance" Name="ias904.dev.dev_domain.com"/>
    <OIDDependency ContainerType="IASInstance" Name="ias904.dev.dev_domain.com"/>
    <EMDependency ContainerType="IASInstance" Name="ias904.dev.dev_domain.com"/>
    </PortalInstance>
    </IASConfig>
    It corresponds to a portal URL like this:
    http://dev.dev_domain.com:7782/pls/portal
    The script exp_portal_schema.sh copies the iasconfig.xml file into the exp_data directory.
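    If you just want to check those two values quickly, here is a rough sketch with grep/sed against the sample layout above; the attached xsl/portal_env_unix helper does this more robustly, and the sed patterns assume the attribute layout shown in the example:
    # pull the Webcache hostname and listen port out of iasconfig.xml
    grep 'IASInstance' iasconfig.xml | sed -e 's/.*Host="\([^"]*\)".*/\1/'
    grep 'WebCacheComponent' iasconfig.xml | sed -e 's/.*ListenPort="\([^"]*\)".*/\1/'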
    C. Export the security: users and groups (optional)
    If you use other Single Sign-On users than the portal user, you probably need to restore the full security, i.e. the users and groups stored in OID, on the production machine. 5 steps need to be executed for this operation:
    Export the OID entries with LDAPSEARCH
    Before importing, change the domain in the generated files (optional)
    Before importing, remove the 'authpassword' attributes from the generated files
    Import them with LDAPADD
    Update the GUID/DN of the groups in portal tables
    Part 1 - LDAPSEARCH
    The typical commands to do this operation look like this:
    ldapsearch -h $OID_HOSTNAME -p $OID_PORT -X -b "cn=portal.040127.1384,cn=groups,dc=dev_domain,dc=com" -s sub "objectclass=*" > portal_group.xml
    ldapsearch -h $OID_HOSTNAME -p $OID_PORT -X -D "cn=orcladmin" -w $IAS_PASSWORD -b "cn=users,dc=dev_domain,dc=com" -s sub "objectclass=inetorgperson" > portal_users.xml
    Take care about the following points:
    The groups are stored in an LDAP container whose name contains the date of installation
    ( in this example: portal.040127.1384,cn=groups,dc=dev_domain,dc=com )
    If the dev and prod domains are different, the exported files contain the name of the development domain, in the form 'dc=dev_domain,dc=com', in a lot of places. The domain name needs to be replaced by the production domain name everywhere in the files.
    Ldapsearch uses the option '-X' to export to DSML (XML) files. This avoids a problem with the usual LDAP export format, LDIF: LDIF files are wrapped at 78 characters, and that wrapping makes it difficult to change the domain name contained in them. XML files are not wrapped and do not have this problem.
    A sample script to export the 2 XML files is given in step 3 - export the users and groups (optional) of the export script.
    Part 2 : change the domain in the DSML files
    If the dev and prod domains are different, the exported files contain the name of the development domain, in the form 'dc=dev_domain,dc=com', in a lot of places. The domain name needs to be replaced by the production domain name everywhere in the files.
    To do this, we can use these commands:
    cat exp_data/portal_groups.xml | sed -e "s/$DEV_DN/$PROD_DN/" > imp_log/portal_groups.xml
    cat exp_data/portal_users.xml | sed -e "s/$DEV_DN/$PROD_DN/" > imp_log/temp_users.xml
    Part 3 : Remove the authpassword attribute
    The export of all attributes of all users has also exported an automatically generated attribute in OID called 'authpassword'.
    'authpassword' is a list of automatically generated passwords for several types of application, and in general it cannot be imported. Also, there is no option in ldapsearch (that I know of) that allows removing an attribute. Rather than giving the ldapsearch command the very long list of all attributes except 'authpassword', we will remove the attribute after the export.
    For that we will use the fact that the DSML files are XML files. There is an XSLT processor in Oracle IAS, the executable '$ORACLE_HOME/bin/xml'. XSLT is a standard specification of the internet consortium W3C for transforming an XML file with the help of an XSL file.
    Here is the XSL file to remove the authpassword tag.
    del_authpassword.xsl
    <!--
    File : del_authpassword.xsl
    Version : 1.0
    Author : mgueury
    Description:
    Remove the authpassword from the DSML files
    -->
    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <xsl:output method="xml"/>
    <xsl:template match="*|@*|node()">
    <xsl:copy>
    <xsl:apply-templates select="*|@*|node()"/>
    </xsl:copy>
    </xsl:template>
    <xsl:template match="attr">
    <xsl:choose>
    <xsl:when test="@name='authpassword;oid'">
    </xsl:when>
    <xsl:when test="@name='authpassword;orclcommonpwd'">
    </xsl:when>
    <xsl:otherwise>
    <xsl:copy>
    <xsl:apply-templates select="*|@*|node()"/>
    </xsl:copy>
    </xsl:otherwise>
    </xsl:choose>
    </xsl:template>
    </xsl:stylesheet>
    And the command to make the transformation:
    xml -f -s del_authpassword.xsl -o imp_log/portal_users.xml imp_log/temp_users.xml
    Where:
    imp_log/portal_users.xml is the final file without authpassword tags
    imp_log/temp_users.xml is the input file with the authpassword tags that cannot be imported.
    Part 4 : LDAPADD
    The typical commands to do this operation look like this:
    ldapadd -h $OID_HOSTNAME -p $OID_PORT -D "cn=orcladmin" -w $IAS_PASSWORD -c -X portal_group.xml
    ldapadd -h $OID_HOSTNAME -p $OID_PORT -D "cn=orcladmin" -w $IAS_PASSWORD -c -X portal_users.xml
    Take care about the following points:
    Ldapadd uses the option '-c'. Existing users/groups generate an error, and the -c option allows continuing and ignoring these errors. The errors should nevertheless be checked to see whether they only concern existing entries.
    A sample script to import the 2 XML files is given in step 5 - import the users and groups (optional) of the import script.
    Part 5 : Update the GUID/DN
    In Portal 9.0.4, the update of the GUIDs is taken care of by PTLCONFIG during the import. (Import step 7)
    D. Example script for export
    Here is an example script that combines the 3 steps.
    Depending on your needs, you will:
    either execute all the steps
    or just execute the first one (export of the database users). That is enough if you just want to log in with the portal user on the production instance.
    If your portal repository resides in a 9.2.0.5 or 10.1.0.2 database, please read this.
    You can download all the scripts here: Attachment 276688.1:1
    Do not forget to modify the script to your needs and, above all, add the list of users as explained in point A above.
    exp_portal_schema.sh
    # BASH Script : exp_portal_schema.sh
    # Version : 1.3
    # Portal : 9.0.4.0
    # History :
    # mgueury - creation
    # Description:
    # This script exports a portal dump file from a dev instance
    # -------------------------- Environment variables --------------------------
    . portal_env.sh
    # In case you do not use portal_env.sh you have to define all the variables
    # For exporting the dump file only.
    # export SYS_PASSWORD=change_on_install
    # export PORTAL_TNS=asdb
    # For the security (optional)
    # export IAS_PASSWORD=welcome1
    # export PORTAL_USER=portal
    # export PORTAL_PASSWORD=A1b2c3de
    # export OID_HOSTNAME=development.domain.com
    # export OID_PORT=3060
    # export OID_DOMAIN_DN=dc=`echo $OID_HOSTNAME | cut -d '.' -f2,3,4,5,6 --output-delimiter=',dc='`
    # ------------------------------ Help function -----------------------------------
    function press_any_key() {
    if [ $PRESS_ANY_KEY_AFTER_EACH_STEP = "Y" ]; then
    echo
    echo Press enter to continue
    read ANY_KEY
    else
    echo
    fi
    }
    echo "------------------------------- Export ------------------------------------"
    # create a directory for the export
    mkdir exp_data
    # copy the env variables in the log just in case
    export > exp_data/exp_env_variable.txt
    echo "--------------------- step 1 - export"
    # export the portal users, but take care to add:
    # - your users containing DB providers
    # - your users containing data (tables)
    exp userid="'sys/$SYS_PASSWORD@$PORTAL_TNS as sysdba'" file=exp_data/portal_exp.dmp grants=y log=exp_data/portal_exp.log owner=(portal,portal_app,portal_demo,portal_public)
    press_any_key
    echo "--------------------- step 2 - store iasconfig.xml file of the MIDTIER"
    cp $MIDTIER_ORACLE_HOME/portal/conf/iasconfig.xml exp_data
    press_any_key
    echo "--------------------- step 3 - export the users and groups (optional)"
    # Export the groups and users from OID in 2 XML files (not LDIF)
    # The OID groups of portal are stored in GROUP_INSTALL_BASE that depends
    # of the installation date.
    # For the user, I use the default place. If it does not work,
    # you can find the user place with:
    # > exec dbms_output.put_line(wwsec_oid.get_user_search_base);
    # Get the GROUP_INSTALL_BASE used in security export
    sqlplus $PORTAL_USER/$PORTAL_PASSWORD@$PORTAL_TNS <<IASDB
    set serveroutput on
    spool exp_data/group_base.log
    begin
    dbms_output.put_line(wwsec_oid.get_group_install_base);
    end;
    IASDB
    export GROUP_INSTALL_BASE=`grep cn= exp_data/group_base.log`
    echo '--- Exporting Groups'
    echo 'creating portal_groups.xml'
    ldapsearch -h $OID_HOSTNAME -p $OID_PORT -X -s sub -b "$GROUP_INSTALL_BASE" -s sub "objectclass=*" > exp_data/portal_groups.xml
    echo '--- Exporting Users'
    echo 'creating portal_users.xml'
    ldapsearch -h $OID_HOSTNAME -p $OID_PORT -D "cn=orcladmin" -w $IAS_PASSWORD -X -s sub -b "cn=users,$OID_DOMAIN_DN" -s sub "objectclass=inetorgperson" > exp_data/portal_users.xml
    The script is meant to be run from the midtier.
    Step 2 - Install IAS in a new machine (PROD)
    A. Installation
    This note does not distinguish whether Portal shares the same database as Single Sign-On and OID. For simplicity, I will speak only about 1 database. But you could also create a second infrastructure database just for the portal repository. This is better for a production system, because the Portal repository is then the only product used in the 2nd database, and having 2 separate databases makes it easy to take a backup of the portal repository.
    On the production machine, you need a fresh install of IAS 9.0.4. Take care to use:
    the same IAS patchset (9.0.4.1, 9.0.4.2, ...) on the middle-tier and infrastructure as in development
    and the same character set as in development (or UTF8)
    The result will be 2 ORACLE_HOMES and 1 infrastructure database:
    the ORACLE_HOME of the infrastructure (SID:infra904)
    the ORACLE_HOME of the midtier (SID:ias904)
    an infrastructure database (SID:asdb)
    The new, empty Portal install should work fine before you go to the next step.
    B. About tablespaces (optional)
    The size of the tablespaces in production should match those of the development machine. If not, the tablespaces will autoextend. That is not really a concern, but it is slow. You should modify the tablespaces so that prod has as much space as dev.
    Also, it is safer to check that there is enough free space on the hard disk to import into the database.
    To modify the tablespace sizes, you can use the Oracle Enterprise Manager console:
    On Unix: . oraenv (choose infra904), then run: oemapp dbastudio
    On NT: Start / Programs / Oracle Application Server - infra904 / Enterprise Manager Console
    Launch standalone
    Choose the portal database (typically asdb.domain.com)
    Connect with a DBA user, sys or system
    Click Storage/Tablespaces
    Change the size of the PORTAL, PORTAL_DOC, PORTAL_LOGS, PORTAL_IDX tablespaces
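    Alternatively, if you prefer SQL*Plus over the console, a rough sketch; the datafile paths and sizes below are placeholders, not values from this note:
    -- resize an existing datafile of the PORTAL tablespace (path and size are placeholders)
    ALTER DATABASE DATAFILE '/u01/oradata/asdb/portal01.dbf' RESIZE 500M;
    -- or add a datafile to a tablespace that is too small (path and size are placeholders)
    ALTER TABLESPACE portal_doc ADD DATAFILE '/u01/oradata/asdb/portal_doc02.dbf' SIZE 200M;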
    C. Backup
    It could be a good idea to take a backup of the MIDTIER and INFRASTRUCTURE Oracle Homes at this point, so that if the import fails for any reason you can retest the import process as often as you want without needing to reinstall everything.
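    A rough sketch of such a backup, assuming the midtier is stopped first; the target and Oracle Home paths below are placeholders for your own layout:
    # stop the midtier processes, then archive both Oracle Homes (paths are placeholders)
    $MIDTIER_ORACLE_HOME/opmn/bin/opmnctl stopall
    tar -czf /backup/midtier_home.tar.gz -C /u01/app/oracle product/ias904_midtier
    tar -czf /backup/infra_home.tar.gz -C /u01/app/oracle product/infra904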
    Step 3 - Import in production (on PROD)
    The following script is a sample of an Unix script that combines all the steps to import a portal repository to the production machine.
    To import a portal repository and its users and groups into OID, you need to do 8 things:
    Stop the midtier to avoid errors while dropping the portal schema
    SQL*Plus with Portal
    Drop the 4 default portal schemas
    Create the portal users with the same passwords as the users you just deleted and give them grants (you need to create your own custom schemas too if you have some).
    Import the dump file
    Import the users and groups into OID (optional)
    SQL*Plus with SYS : Post import changes
    Recompile everything in the database
    Reassign the imported jobs to portal
    SQL*Plus with Portal : Post import changes
    Recreate the Portal intermedia indexes
    Correct an import error on wwsrc_preference$
    Make additional post import changes, by updating some portal tables, and replacing the development hostname, port or domain by the production ones.
    Rewire the portal repository with ptlconfig -dad portal
    Restart the midtier
    Here is a sample script to do this on Unix. You will need to adapt the script to your needs.
    imp_portal_schema.sh
    # BASH Script : imp_portal_schema.sh
    # Version : 1.3
    # Portal : 9.0.4.0
    # History :
    # mgueury - creation
    # Description:
    # This script imports a portal dump file and relinks it with an
    # infrastructure.
    # Script to be started from the MIDTIER
    # -------------------------- Environment variables --------------------------
    . portal_env.sh
    # Development and Production machine hostname and port
    # Example
    # .._HOSTNAME machine.domain.com (name of the MIDTIER)
    # .._PORT 7782 (http port of the MIDTIER)
    # .._DN dc=domain,dc=com (domain name in a LDAP way)
    # These values can be determined automatically with the iasconfig.xml file of dev
    # and prod. But if you do not know or remember the dev hostname and port, this
    # query should find it.
    # > select name, http_url from wwpro_providers$ where http_url like 'http%'
    # These variables are used in the
    # > step 4 - security / import OID users and groups
    # > step 6 - post import changes (PORTAL)
    # Set the env variables of the DEV instance
    rm /tmp/iasconfig_env.sh
    xml -f -s xsl/portal_env_unix.xsl -o /tmp/iasconfig_env.sh exp_data/iasconfig.xml
    . /tmp/iasconfig_env.sh
    export DEV_HOSTNAME=$WEBCACHE_HOSTNAME
    export DEV_PORT=$WEBCACHE_LISTEN_PORT
    export DEV_DN=dc=`echo $OID_HOSTNAME | cut -d '.' -f2,3,4,5,6 --output-delimiter=',dc='`
    # Set the env variables of the PROD instance
    . portal_env.sh
    export PROD_HOSTNAME=$WEBCACHE_HOSTNAME
    export PROD_PORT=$WEBCACHE_LISTEN_PORT
    export PROD_DN=dc=`echo $OID_HOSTNAME | cut -d '.' -f2,3,4,5,6 --output-delimiter=',dc='`
    # ------------------------------ Help function -----------------------------------
    function press_any_key() {
    if [ $PRESS_ANY_KEY_AFTER_EACH_STEP = "Y" ]; then
    echo
    echo Press enter to continue
    read ANY_KEY
    else
    echo
    fi
    }
    echo "------------------------------- Import ------------------------------------"
    # create a directory for the logs
    mkdir imp_log
    # copy the env variables in the log just in case
    export > imp_log/imp_env_variable.txt
    echo "--------------------- step 1 - stop the midtier"
    # This step is needed to avoid most case of ORA-01940: user connected
    # when dropping the portal user
    $MIDTIER_ORACLE_HOME/opmn/bin/opmnctl stopall
    press_any_key
    echo "--------------------- step 2 - drop and create empty users"
    sqlplus "sys/$SYS_PASSWORD@$PORTAL_TNS as sysdba" <<IASDB
    spool imp_log/drop_create_user.log
    ---- Drop users
    -- Warning: You need to stop all SQL*Plus connection to the
    -- portal schema before that else the drop will give an
    -- ORA-01940: cannot drop a user that is currently connected
    drop user portal_public cascade;
    drop user portal_app cascade;
    drop user portal_demo cascade;
    drop user portal cascade;
    ---- Recreate the users and give them grants
    -- The new users will have the same passwords as the users we just dropped
    -- above. Do not forget to add your exported custom users
    create user portal identified by $PORTAL_PASSWORD default tablespace portal;
    grant connect,resource,dba to portal;
    create user portal_app identified by $PORTAL_APP_PASSWORD default tablespace portal;
    grant connect,resource to portal_app;
    create user portal_demo identified by $PORTAL_DEMO_PASSWORD default tablespace portal;
    grant connect,resource to portal_demo;
    create user portal_public identified by $PORTAL_PUBLIC_PASSWORD default tablespace portal;
    grant connect,resource to portal_public;
    alter user portal_public grant connect through portal;
    start $MIDTIER_ORACLE_HOME/portal/admin/plsql/wwv/wdbigra.sql portal
    exit
    IASDB
    press_any_key
    echo "--------------------- step 3 - import"
    imp userid="'sys/$SYS_PASSWORD@$PORTAL_TNS as sysdba'" file=exp_data/portal_exp.dmp grants=y log=imp_log/import.log full=y
    press_any_key
    echo "--------------------- step 4 - import the OID users and groups (optional)"
    # Some errors will be raised when running the ldapadd because at least the
    # default entries will not be able to be inserted. Remove them from the
    # ldif file if you want to avoid them. Due to the flag '-c', ldapadd ignores
    # duplicate entries. Another more radical solution is to erase all the entries
    # of the users and groups in OID before to run the import.
    # Replace the domain name in the XML files.
    cat exp_data/portal_groups.xml | sed -e "s/$DEV_DN/$PROD_DN/" > imp_log/portal_groups.xml
    cat exp_data/portal_users.xml | sed -e "s/$DEV_DN/$PROD_DN/" > imp_log/temp_users.xml
    # Remove the authpassword attributes with a XSL stylesheet
    xml -f -s xsl/del_authpassword.xsl -o imp_log/portal_users.xml imp_log/temp_users.xml
    echo '--- Importing Groups'
    ldapadd -h $OID_HOSTNAME -p $OID_PORT -D "cn=orcladmin" -w $IAS_PASSWORD -c -X imp_log/portal_groups.xml -v
    echo '--- Importing Users'
    ldapadd -h $OID_HOSTNAME -p $OID_PORT -D "cn=orcladmin" -w $IAS_PASSWORD -c -X imp_log/portal_users.xml -v
    press_any_key
    echo "--------------------- step 5 - post import changes (SYS)"
    sqlplus "sys/$SYS_PASSWORD@$PORTAL_TNS as sysdba" <<IASDB
    spool imp_log/sys_post_changes.log
    ---- Recompile the invalid packages
    -- On the midtier, the script utlrp is not present. This step
    -- uses a copy of it stored in patch/utlrp.sql
    select count(*) INVALID_OBJECT_BEFORE from all_objects where status='INVALID';
    start patch/utlrp.sql
    set lines 999
    select count(*) INVALID_OBJECT_AFTER from all_objects where status='INVALID';
    ---- Jobs
    -- Reassign the JOBS imported to PORTAL. After the import, they belong
    -- incorrectly to the user SYS.
    update dba_jobs set LOG_USER='PORTAL', PRIV_USER='PORTAL' where schema_user='PORTAL';
    commit;
    exit
    IASDB
    press_any_key
    echo "--------------------- step 6 - post import changes (PORTAL)"
    sqlplus $PORTAL_USER/$PORTAL_PASSWORD@$PORTAL_TNS <<IASDB
    set serveroutput on
    spool imp_log/portal_post_changes.log
    ---- Intermedia
    -- Recreate the portal indexes.
    -- inctxgrn.sql is missing from the 9040 CD-ROMS. This is the bug 3536937.
    -- Fixed in 9041. The missing script is contained in the downloadable zip file.
    start patch/inctxgrn.sql
    start $MIDTIER_ORACLE_HOME/portal/admin/plsql/wws/ctxcrind.sql
    ---- Import error
    alter table "WWSRC_PREFERENCE$" add constraint wwsrc_preference_pk
    primary key (subscriber_id, id)
    using index wwsrc_preference_idx1;
    begin
    DBMS_RLS.ADD_POLICY ('', 'WWSRC_PREFERENCE$', 'WEBDB_VPD_POLICY',
    '', 'webdb_vpd_sec', 'select, insert, update, delete', TRUE,
    static_policy=>true);
    end ;
    /
    ---- Modify tables with full URLs
    -- If the domain name of prod and dev are different, this step is really important.
    -- It modifies the portal tables that contains reference to the hostname or port
    -- of the development machine. (For more explanation: see Addional steps in the note)
    -- groups (dn)
    update wwsec_group$
    set dn=replace( dn, '$DEV_DN', '$PROD_DN' );
    update wwsec_group$
    set dn_hash = wwsec_api_private.get_dn_hash( dn );
    -- users (dn)
    update wwsec_person$
    set dn=replace( dn, '$DEV_DN', '$PROD_DN' );
    update wwsec_person$
    set dn_hash = wwsec_api_private.get_dn_hash( dn );
    -- subscriber
    update wwsub_model$
    set dn=replace( dn, '$DEV_DN', '$PROD_DN' ), GUID=':1'
    where dn like '%$DEV_DN%';
    -- preferences
    update wwpre_value$
    set varchar2_value=replace( varchar2_value, '$DEV_DN', '$PROD_DN' )
    where varchar2_value like '%$DEV_DN%';
    update wwpre_value$
    set varchar2_value=replace( varchar2_value, '$DEV_HOSTNAME:$DEV_PORT', '$PROD_HOSTNAME:$PROD_PORT' )
    where varchar2_value like '%$DEV_HOSTNAME:$DEV_PORT%';
    -- page url items
    update wwv_things
    set title_link=replace( title_link, '$DEV_HOSTNAME:$DEV_PORT', '$PROD_HOSTNAME:$PROD_PORT' )
    where title_link like '%$DEV_HOSTNAME:$DEV_PORT%';
    -- web providers
    update wwpro_providers$
    set http_url=replace( http_url, '$DEV_HOSTNAME:$DEV_PORT', '$PROD_HOSTNAME:$PROD_PORT' )
    where http_url like '%$DEV_HOSTNAME:$DEV_PORT%';
    -- html links created by the RTF editor inside text items
    update wwv_text
    set text=replace( text, '$DEV_HOSTNAME:$DEV_PORT', '$PROD_HOSTNAME:$PROD_PORT' )
    where text like '%$DEV_HOSTNAME:$DEV_PORT%';
    -- Portlet metadata nls: help URL
    update wwpro_portlet_metadata_nls$
    set help_url=replace( help_url, '$DEV_HOSTNAME:$DEV_PORT', '$PROD_HOSTNAME:$PROD_PORT' )
    where help_url like '%$DEV_HOSTNAME:$DEV_PORT%';
    -- URL items (There is a trigger on this table building absolute_url automatically)
    update wwsbr_url$
    set absolute_url=replace( absolute_url, '$DEV_HOSTNAME:$DEV_PORT', '$PROD_HOSTNAME:$PROD_PORT' )
    where absolute_url like '%$DEV_HOSTNAME:$DEV_PORT%';
    -- Things attributes
    update wwv_thingattributes
    set value=replace( value, '$DEV_HOSTNAME:$DEV_PORT', '$PROD_HOSTNAME:$PROD_PORT' )
    where value like '%$DEV_HOSTNAME:$DEV_PORT%';
    commit;
    exit
    IASDB
    press_any_key
    echo "--------------------- step 7 - ptlconfig"
    # Configure portal such that portal uses the infrastructure database
    cd $MIDTIER_ORACLE_HOME/portal/conf/
    ./ptlconfig -dad portal
    cd -
    mv $MIDTIER_ORACLE_HOME/portal/logs/ptlconfig.log imp_log
    press_any_key
    echo "--------------------- step 8 - restart the midtier"
    $MIDTIER_ORACLE_HOME/opmn/bin/opmnctl startall
    date
    Each step can generate its own errors due to many factors. It is better to run the import step by step the first time.
    Do not forget to check the output of log files created during the various steps of the import:
    imp_log/drop_create_user.log - Spool from dropping and recreating the portal users
    imp_log/import.log - Import log file from importing the portal_exp.dmp file
    imp_log/sys_post_changes.log - Spool from the post-import changes made as SYS
    imp_log/portal_post_changes.log - Spool from the post-import changes made as PORTAL
    imp_log/ptlconfig.log - Log file of ptlconfig when rewiring the midtier
    Step 4 - Test
    A. Check the log files
    B. Test the website and see if it works fine.
    Step 5 - take a backup
    Take a backup of all ORACLE_HOMEs and databases to guard against hardware problems. You need to copy:
    All the files of the 2 ORACLE_HOME
    And all the database files.
    Step 6 - Additional steps
    Here are some additional steps.
    SSO external applications ( they are part of the orasso schema and are not imported yet )
    Page URL items ( they seem to store the full URL ) - included in imp_portal_schema.sh
    Web Providers ( the URL needs to be changed ) - included in imp_portal_schema.sh
    Text items edited with the RTF editor in IE and containing links - included in imp_portal_schema.sh
    Most of them are taken care of by the "step 8 - post import changes", except the first one.
    1. SSO import
    This script imports only Portal and the users/groups of OID, not the list of external applications contained in the orasso user.
    In Portal 9.0.4, there is a script called SSOMIG that resides in $INFRA_ORACLE_HOME/sso/bin and allows you to move :
    Definitions and user data for external applications
    Registration URLs and tokens for partner applications
    Connection information used by OracleAS Discoverer to access various data sources
    See:
    Oracle® Application Server Single Sign-On Administrator's Guide 10g (9.0.4) Part Number B10851-01
    14. Exporting and Importing Data
    2. Page items: the page URL items store the full URL.
    This is Bug 2661805 fixed in Portal 9.0.2.6.
    The following work-around is implemented in the post-import step of imp_portal_schema.sh
    -- page url items
    update wwv_things
    set title_link=replace( title_link, 'dev.dev_domain.com:7778', 'prod.prod_domain.com:7778' )
    where title_link like '%$DEV_HOSTNAME:$DEV_PORT%'
    3. Web Providers
    The URLs of the Web providers also need to change. Like the Page items, they contain the full path of the webserver.
    You can get the list of the URLs to change with this query:
    select name, http_url from PORTAL.WWPRO_PROVIDERS$ where http_url like '%';
    The following work-around is implemented in the post-import step of imp_portal_schema.sh
    -- web providers
    update wwpro_providers$
    set http_url=replace( http_url, 'dev.dev_domain.com:7778', 'prod.prod_domain.com:7778' )
    where http_url like '%$DEV_HOSTNAME:$DEV_PORT%'
    4. The production and development machines do not share the same domain
    If the domains of production and development are not the same, the DN (name in LDAP) of all users needs to change.
    Let's say from
    dc=dev_domain,dc=com -> dc=prod_domain,dc=com
    1. Before uploading the ldif files, all the strings in the 2 files that contain 'dc=dev_domain,dc=com' have to be replaced by 'dc=prod_domain,dc=com'
    2. In the wwsec_group$ and wwsec_person$ tables in portal, the DNs need to change too.
    The following work-around is implemented in the post-import step of imp_portal_schema.sh
    -- groups (dn)
    update wwsec_group$
    set dn=replace( dn, 'dc=dev_domain,dc=com', 'dc=prod_domain,dc=com' )
    update wwsec_group$
    set dn_hash = wwsec_api_private.get_dn_hash( dn )
    -- users (dn)
    update wwsec_person$
    set dn=replace( dn, 'dc=dev_domain,dc=com', 'dc=prod_domain,dc=com' )
    update wwsec_person$
    set dn_hash = wwsec_api_private.get_dn_hash( dn)
    5. Text items with HTML links
    Sometimes people store full URLs inside their text items; it happens mostly when they create links with the RichText Editor in IE.
    The following work-around is implemented in the post-import step of imp_portal_schema.sh
    -- html links created by the RTF editor inside text items
    update wwv_text
    set text=replace( text, 'dev.dev_domain.com:7778', 'prod.prod_domain.com:7778' )
    where text like '%$DEV_HOSTNAME:$DEV_PORT%'
    6. OID Custom password policy
    It happens quite often that people change the password policy of the OID server, because with the default policy the password expires after 60 days. If so, do not forget to make the same changes in the new installation.
    PROBLEMS
    1. Import log has some errors
    A. EXP-00091 -Exporting questionable statistics
    You can ignore this error.
    B. IMP-00017 - WWSRC_PREFERENCE$
    When importing, there is one import error:
    IMP-00017: following statement failed with ORACLE error 921:
    "ALTER TABLE "WWSRC_PREFERENCE$" ADD "
    IMP-00003: ORACLE error 921 encountered
    ORA-00921: unexpected end of SQL command
    The primary key is not created. You can create it with this command in SQL*Plus as the user portal, then re-add the missing VPD policy.
    alter table "WWSRC_PREFERENCE$" add constraint wwsrc_preference_pk
    primary key (subscriber_id, id)
    using index wwsrc_preference_idx1;
    begin
    DBMS_RLS.ADD_POLICY ('', 'WWSRC_PREFERENCE$', 'WEBDB_VPD_POLICY',
    '', 'webdb_vpd_sec', 'select, insert, update, delete', TRUE,
    static_policy=>true);
    end ;
    /
    Step 8 in the script "imp_portal_schema.sh" takes care of this.
    C. IMP-00017 - WWDAV$ASL
    . importing table "WWDAV$ASL"
    Note: table contains ROWID column, values may be obsolete   113 rows imported
    This error is normal; the table really contains a ROWID column.
    D. IMP-00041 - Warning: object created with compilation warnings
    This error is normal too. The packages giving these errors have dependencies on packages not yet imported. A recompilation is done after the import.
    E. ldapadd error 'cannot add add entries containing authpasswords'
    # ldap_add: DSA is unwilling to perform
    # ldap_add: additional info: You cannot add entries containing authpasswords.
    "authpasswords" are automatically generated values from the real password of the user stored in userpassword. These values do not have to be exported from ldap.
    In the import script, I remove the additional tag with a XSL stylesheet 'del_authpassword.xsl'. See above.
    F. IMP-00017: WWSTO_SESSION$
    IMP-00017: following statement failed with ORACLE error 2298:
    "ALTER TABLE "WWSTO_SESSION$" ENABLE CONSTRAINT "WWSTO_SESS_FK1""
    IMP-00003: ORACLE error 2298 encountered
    ORA-02298: cannot validate (PORTAL.WWSTO_SESS_FK1) - parent keys not found
    Here is a work-around for the problem. I will normally integrate it in a next version of the scripts.
    SQL> delete from WWSTO_SESSION_DATA$;
    7690 rows deleted.
    SQL> delete from WWSTO_SESSION$;
    1073 rows deleted.
    SQL> commit;
    Commit complete.
    SQL> ALTER TABLE "WWSTO_SESSION$" ENABLE CONSTRAINT "WWSTO_SESS_FK1";
    Table altered.
    G. IMP-00017 - ORACLE error 1 - DBMS_JOB.ISUBMIT
    This error can appear during the import when the import database is not empty and is already customized for some reason. For example, you export from an infrastructure and import into a database with a lot of other programs that use jobs, and unluckily the same job id.
    Due to the way the export/import of jobs is done, the jobs keep their id after the import, and they may conflict.
    IMP-00017: following statement failed with ORACLE error 1: "BEGIN DBMS_JOB.ISUBMIT(JOB=>42,WHAT=>'begin execute immediate " "''begin wwutl_cache_sys.process_background_inval; end;'' ; exc" "eption when others then wwlog_api.log(p_domain=> ''utl'', " " p_subdomain=>''cache'', p_name=>''background'', " " p_action=>''process_background_inval'', p_information => ''E" "rror in process_background_inval ''|| sqlerrm);end;', NEXT_DATE=" ">TO_DATE('2004-08-19:17:32:16','YYYY-MM-DD:HH24:MI:SS'),INTERVAL=>'SYSDATE " "+ 60/(24*60)',NO_PARSE=>TRUE); END;"
    IMP-00003: ORACLE error 1 encountered ORA-00001: unique constraint (SYS.I_JOB_JOB) violated
    ORA-06512: at "SYS.DBMS_JOB", line 97 ORA-06512: at line 1
    Solutions:
    1. use a freshly installed database,
    2. Since the conflicting jobs differ from one custom installation to another, there is no clear rule. But you can
    recreate the jobs lost after the import with other ids
    and/or change the job id of the other program before the import. This type of command can help you (you need to do it with SYS):
    select * from dba_jobs;
    update dba_jobs set job=99 where job=52;
    commit;
    2. Import in a RAC environment
    Be aware of the Bug 2479882 when the portal database is in a RAC database.
    Bug 2479882 : NEEDED TO BOUNCE DB NODES AFTER INSTALLING PORTAL 9.0.2 IN RAC NODE
    3. Intermedia
    After importing an environment, the intermedia indexes are invalid. To correct the error you need to run in SQL*Plus with Portal
    start $MIDTIER_ORACLE_HOME/portal/admin/plsql/wws/inctxgrn.sql
    start $MIDTIER_ORACLE_HOME/portal/admin/plsql/wws/ctxcrind.sql
    But $MIDTIER_ORACLE_HOME/portal/admin/plsql/wws/inctxgrn.sql is missing in IAS 9.0.4.0. This is Bug 3536937. Fixed in 9041. The missing scripts are contained in the downloadable zip file (exp_schema904.zip : Attachment 276688.1:1 ), directory sql. This means that practically in 9040, you have to run
    start sql/inctxgrn.sql
    start $MIDTIER_ORACLE_HOME/portal/admin/plsql/wws/ctxcrind.sql
    In the import script, it is done in the step 6 - recreate Portal Intermedia indexes.
    You cannot work around the problem without the scripts. Running ctxcrind.sql alone does not work. You will get this error:
    ORA-06510: PL/SQL: unhandled user-defined exception
    ORA-06512: at "PORTAL.WWERR_API_EXCEPTION", line 164
    ORA-06512: at "PORTAL.WWV_CONTEXT", line 1035
    ORA-06510: PL/SQL: unhandled user-defined exception
    ORA-06512: at "PORTAL.WWERR_API_EXCEPTION", line 164
    ORA-06512: at "PORTAL.WWV_CONTEXT", line 476
    ORA-06510: PL/SQL: unhandled user-defined exception
    ORA-20000: Oracle Text error:
    DRG-12603: CTXSYS does not own user datastore procedure: WWSBR_THING_CTX_69
    ORA-06512: at line 13
    4. ptlconfig
    If you try to run ptlconfig simply after an import you will get an error:
    Problem processing Portal instance: Configuring HTTP server settings : Installing cache data : SQL exception: ERROR: ORA-23421: job number 32 is not a job in the job queue
    This is because the import done by user SYS has imported the PORTAL jobs to the SYS schema in place of portal. The solution is to run
    update dba_jobs set LOG_USER='PORTAL', PRIV_USER='PORTAL' where schema_user='PORTAL';
    In the import script, it is done in the step 8 - post import changes.
    5. WWC-41417 - invalid credentials.
    When you try to login you get:
    Unexpected error encountered in wwsec_app_priv.process_signon (User-Defined Exception) (WWC-41417)
    An exception was raised when accessing the Oracle Internet Directory: 49: Invalid credentials
    Details
    Error:Operation: dbms_ldap.simple_bind_s
    OID host: machine.domain.com
    OID port number: 4032
    Entry DN: orclApplicationCommonName=PORTAL,cn=Portal,cn=Products,cn=OracleContext. (WWC-41743)
    Solution:
    - run secupoid.sql
    - rerun ptlconfig
    This problem has been seen after using ptlasst in place of ptlconfig.
    6. EXP-003 with a database 9.2.0.5 or 10.1.0.2
    In fact, the DB format of imp/exp has changed in 9.2.0.5 or 10.1.0.2. The EXP-3 error only occurs when the export from the 9.2.0.5.0 or 10.1.0.2.0 database is done with a lower release export utility, e.g. 9.2.0.4.0.
    Due to the way this note is written, the imp/exp utility used is the one of the midtier (9014). If your portal resides in a 9.2.0.5 database, it will not work. To work around the problem, there are 2 solutions:
    Change the script so that it uses the exp and imp commands of the database.
    Make a change to the 9.2.0.5 or 10.1.0.2 database to make it compatible with previous versions. The change is to modify an internal database view before exporting/importing the data.
    A work-around is given in Bug 3784697
    1. Make a note of the export definition of exu9tne from
    $OH/rdbms/admin/catexp.sql
    2. Copy this to a new file and add "UNION ALL select * from sys.exu9tneb" to the end of the definition
    3. Run this as sys against the DB to be exported.
    4. Export as required
    5. Put back the original definition of exu9tne
    eg: For 9204 the workaround view would be:
    CREATE OR REPLACE VIEW exu9tne (
    tsno, fileno, blockno, length) AS
    SELECT ts#, segfile#, segblock#, length
    FROM sys.uet$
    WHERE ext# = 1
    UNION ALL
    select * from sys.exu9tneb
    7. EXP-00006: INTERNAL INCONSISTENCY ERROR
    This is Bug 2906613.
    The work-around given in this bug is the following:
    - create the following view, connected as sys, before running export:
    CREATE OR REPLACE VIEW exu8con (
    objid, owner, ownerid, tname, type, cname,
    cno, condition, condlength, enabled, defer,
    sqlver, iname) AS
    SELECT o.obj#, u.name, c.owner#, o.name,
    decode(cd.type#, 11, 7, cd.type#),
    c.name, c.con#, cd.condition, cd.condlength,
    NVL(cd.enabled, 0), NVL(cd.defer, 0),
    sv.sql_version, NVL(oi.name, '')
    FROM sys.obj$ o, sys.user$ u, sys.con$ c,
    sys.cdef$ cd, sys.exu816sqv sv, sys.obj$ oi
    WHERE u.user# = c.owner# AND
    o.obj# = cd.obj# AND
    cd.con# = c.con# AND
    cd.spare1 = sv.version# (+) AND
    cd.enabled = oi.obj# (+) AND
    NOT EXISTS (
    SELECT owner, name
    FROM sys.noexp$ ne
    WHERE ne.owner = u.name AND
    ne.name = o.name AND
    ne.obj_type = 2)
    The modification of exu8con simply adds support for a constraint type that had not previously been supported by this view. There is no negative impact.
    8. WWSBR_DOC_CTX_54 is invalid
    After the recompilation of the package, one package remains invalid (in sys_post_changes.log):
    INVALID_OBJECT_AFTER
    1
    select owner, object_name from all_objects where status='INVALID'
    CTXSYS WWSBR_DOC_CTX_54
    CREATE OR REPLACE procedure WWSBR_DOC_CTX_54
    (rid in rowid, bilob in out NOCOPY blob)
    is begin PORTAL.WWSBR_CTX_PROCS.DOC_CTX(rid,bilob);end;
    This object is not used anymore by portal. The error can be ignored. The procedure can be removed too. This is Bug 3559731.
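    If you decide to remove the procedure, a minimal sketch, run as a DBA; the owner CTXSYS comes from the query output above:
    -- the leftover procedure is owned by CTXSYS, per the query output above
    DROP PROCEDURE ctxsys.wwsbr_doc_ctx_54;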
    9. You do not have permission to perform this operation. (WWC-44131)
    It seems that there are problems if
    - groups on the production machine do not reside in the default place in OID,
    - and the group creation base and group search base were changed.
    After this, the cloning of the repository works without problem. But it seems that the command 'ptlconfig -dad portal' does not reset the GUIDs and DNs of the groups correctly. I have not checked this yet.
    The solution seems to be to use the script given in the 9.0.2 Note 228516.1 and run group_sec.sql to reset all the DNs and GUIDs in the copied instance.
    10. Invalid Java objects when exporting from a 9.x database and importing in a 10g database
    If you export from a 9.x database and import in a 10g database, after running utlrp.sql, 18 Java objects will be invalid.
    select object_name, object_type from user_objects where status='INVALID'
    SQL> /
    OBJECT_NAME OBJECT_TYPE
    /556ab159_Handler JAVA CLASS
    /41bf3951_HttpsURLConnection JAVA CLASS
    /ce2fa28e_ProviderManagerClien JAVA CLASS
    /c5b98d35_ServiceManagerClient JAVA CLASS
    /d77cf2ab_SOAPServlet JAVA CLASS
    /649bf254_JavaProvider JAVA CLASS
    /a9164b8b_SpProvider JAVA CLASS
    /2ee43ac9_StatefulEJBProvider JAVA CLASS
    /ad45acec_StatelessEJBProvider JAVA CLASS
    /da1c4a59_EntityEJBProvider JAVA CLASS
    /66fdac3e_OracleSOAPHTTPConnec JAVA CLASS
    /939c36f5_OracleSOAPHTTPConnec JAVA CLASS
    org/apache/soap/rpc/Call JAVA CLASS
    org/apache/soap/rpc/RPCMessage JAVA CLASS
    org/apache/soap/rpc/Response JAVA CLASS
    /198a7089_Message JAVA CLASS
    /2cffd799_ProviderGroupUtils JAVA CLASS
    /32ebb779_ProviderGroupMgrProx JAVA CLASS
    18 rows selected.
    This is a known issue. It can be solved by applying one of the following patches, depending on your IAS version.
    Bug 3405173 - PORTAL 9.0.4.0.0 PATCH FOR 10G DB UPGRADE (FROM 9.0.X AND 9.2.X)
    Bug 4100409 - PORTAL 9.0.4.1.0 PATCH FOR 10G DB UPGRADE (FROM 9.0.X AND 9.2.X)
    Bug 4100417 - PORTAL 9.0.4.2.0 PATCH FOR 10G DB UPGRADE (FROM 9.0.X AND 9.2.X)
    11. Import : IMP-00003: ORACLE error 30510 encountered
    When importing Portal 9.0.4.x, the import on the database side may produce an error ORA-30510. The new perl scripts work around the issue in the portal_post_import.sql script, but the BASH scripts do not. If you use the BASH scripts, after the import please run this command manually in SQL*Plus logged in as portal.
    ---- Import error 2 - ORA-30510 when importing
    CREATE OR REPLACE TRIGGER logoff_trigger
    before logoff on schema
    begin
    -- Call wwsec_oid.unbind to close open OID connections if any.
    wwsec_oid.unbind;
    exception
    when others then
    -- Ignore all the errors encountered while unbinding.
    null;
    end logoff_trigger;
    /
    This is logged as Bug 4458413.
    12. Exporting from a 9.0.1 database and importing into a 9.2.0.5+ or 10g DB
    It could be that when exporting from a 9.0.1 database to a 10g database, the java classes do not get compiled correctly. The following errors are seen:
    ORA-29534: referenced object PORTAL.oracle/net/www/proto/https/HttpsURLConnection could not be resolved
    errors:: class oracle/net/www/proto/https/HttpsURLConnection
    ORA-29521: referenced name oracle/security/ssl/OracleSSLSocketFactoryImpl could not be found
    ORA-29521: referenced name oracle/security/ssl/OracleSSLSocketFactory could not be found
    In such a case, please apply the following patches after the import in the 10g database.
    Bug 3405173 PORTAL REPOS DB UPGRADE TO 10G: for Portal 9.0.4.0
    Bug 4100409 PORTAL REPOS DB UPGRADE TO 10G: for Portal 9.0.4.1
    Main Differences with Portal 9.0.2
If you are used to this technique with Portal 9.0.2, you may be interested in the main differences compared with the equivalent note for Portal 9.0.2.
Cutter database
- Portal 9.0.2: Portal can be part of an infrastructure database or of a custom external database. The portal schema is imported into an empty database.
- Portal 9.0.4: Portal can only be installed in a 'Cutter database', a database created with RepCA or OUI that always contains OID, DCM and so on... The portal schema is imported into a 'Cutter database' (new).
group_sec.sql
- Portal 9.0.2: group_sec.sql is used to correct the GUIDs of OID stored in Portal.
- Portal 9.0.4: ptlconfig -dad portal -oid is used to correct the GUIDs of OID stored in Portal (new).
1 script
- Portal 9.0.2: the import/export is divided into several steps with several scripts.
- Portal 9.0.4: the import script is done in one step. Additional steps are included in the script. This requires knowing the hostname and port of the original development machine (new).
Import
- Portal 9.0.2, the steps are: creation of an empty database; creation of the users with password=username; import.
- Portal 9.0.4, the steps are: creation of an iAS 10g infrastructure DB (RepCA or OUI); deletion of the new portal schemas (new); creation of the users with the same passwords as the schemas just dropped; import.
DAD
- Portal 9.0.2: the DAD needed to be changed.
- Portal 9.0.4: the passwords are not changed, so the DAD does not need to be changed.
Bugs
- Portal 9.0.2: 2 bugs were worked around by change_host.sh.
- Portal 9.0.4: some additional tables need to be updated manually before running ptlasst. This is #Bug:3762961#.
Export of LDAP
- Portal 9.0.2: the export is done in LDIF files. If prod and dev have different domains, it is quite difficult to change the domain name in these files due to the line wrapping at 78 characters.
- Portal 9.0.4: the export is done in XML files, in the DSML format (new). It is a lot easier to change the XML files if the domain name is different from PROD to DEV.
Download
- Portal 9.0.2: you have to cut and paste the scripts.
- Portal 9.0.4: the scripts are attached to the note; just download them.
Rewiring
- Portal 9.0.2 uses ptlasst:
ptlasst.csh -mode MIDTIER -i custom -s $PORTAL_USER -sp $PORTAL_PASSWORD -c $PORTAL_HOSTNAME:$PORTAL_DB_PORT:$PORTAL_SERVICE_NAME -sdad $PORTAL_DAD -o orasso -op $ORASSO_PASSWORD -odad orasso -host $MIDTIER_HOSTNAME -port $MIDTIER_HTTP_PORT -ldap_h $INFRA_HOSTNAME -ldap_p $OID_PORT -ldap_w $IAS_PASSWORD -pwd $IAS_PASSWORD -sso_c $INFRA_HOSTNAME:$INFRA_DB_PORT:$INFRA_SERVICE_NAME -sso_h $INFRA_HOSTNAME -sso_p $INFRA_HTTP_PORT -ultrasearch -oh $MIDTIER_ORACLE_HOME -mc false -mi true -chost $MIDTIER_HOSTNAME -cport_i $WEBCACHE_INV_PORT -cport_a $WEBCACHE_ADM_PORT -wc_i_pwd $IAS_PASSWORD -emhost $INFRA_HOSTNAME -emport $EM_PORT -pa orasso_pa -pap $ORASSO_PA_PASSWORD -ps orasso_ps -pp $ORASSO_PS_PASSWORD -iasname $IAS_NAME -verbose -portal_only
- Portal 9.0.4 uses ptlconfig (new):
ptlconfig -dad portal
Environment variables
- Portal 9.0.2: a lot of environment variables are needed.
- Portal 9.0.4: just 3 environment variables are needed: the password of SYS, the password of IAS, and the ORACLE_HOME of the midtier. All the rest is found in iasconfig.xml and LDAP (new).
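A minimal sketch of a 9.0.4 rewiring run under those assumptions (the location of ptlconfig under the midtier ORACLE_HOME is assumed here; adjust it to your installation):
export ORACLE_HOME=/u01/app/oracle/midtier
cd $ORACLE_HOME/portal/conf
./ptlconfig -dad portal
ptlconfig picks up the midtier and infrastructure details from iasconfig.xml and LDAP; the SYS and IAS passwords are the only credentials you should have to supply, as noted above.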
    TO DO
    - Check if the orclcommonapplication name fits SID.hostname
- Check what the import of an upgraded portal30 schema into a schema named portal gives
    - Explain how to copy the portal*.dbf files in place of export/import and the limitation of tra

  • OIM User Creation Error After OIM and OID Intregation

    Hi,
I am new to OIM and I am getting a popup error message during user creation from the OIM application, after OIM and OID integration through libOVD.
Error message: LDAP create event failed: orclguid attribute has duplicate value.
Please guide me in resolving this error.
    Thanks & Regards,
    Rajeev

    Hi,
Thanks for the reply... I checked Note 1307549.1 in MetaLink. In that note they tell us to modify some tables in the database. I have some questions regarding the following steps, please help.
    === ODM Solution / Action Plan ===
    1. Use the following query to find fields with "plain text" values:
    select svr.svr_name, spd.spd_field_name, svp.svp_key, svp_field_value
    from svp
    inner join spd on spd.spd_key = svp.spd_key
    inner join svr on svr.svr_key = svp.svr_key
    2. Set these plain text values to null after making backup of table.
*(kashyap:: Which field values do we have to change?)*
    3. Edit the Directory Server to re-set values.
    *(kashyap:: could you please explain this)*
    Expected error at this stage:
    -- no "System Error call admin...", but that makes sense since the values in question pertained directly to the Directory Server --

  • Error in OID ldap integration

    I'm trying to integrate Portal and OID authentication.
I followed all the documentation in conf_ldap.pdf but I get the error: Unexpected errors (WWC-41400).
Both tnsping extproc_connection_data
and lsnrctl status give the right result, as stated in the document.
So I've tried to run this command as the portal30_sso user:
    select WWSSO_AUTH_EXTERNAL.authenticate_user('portal30','portal30') from dual
    and I get the error:
    ORA-28576: lost RPC connection to external procedure agent
    ORA-06512: at "PORTAL30_SSO.WWSSO_AUTH_EXTERNAL", line 281
    ORA-06512: at line 1
Both tnsnames.ora and listener.ora seem to be configured fine.
I'm using the OID that comes with Oracle 8.1.7.0 and OiAS 1.0.2.1 for NT, on Windows 2000 SP1.
    Where is the problem?
Thanks in advance
    Mauro

    Here are some things to check:
I believe that some of the newer versions of Portal have a user
called "portal309_sso" instead of "portal30_sso". My examples
below use "portal30_sso". Use whatever user is appropriate for
    your version of Portal.
    If you have not yet installed OID (Oracle's LDAP server) none of
    this will work. Make sure OID is installed and running. OID can
    be installed in the same database that Portal uses.
    All of the following sql command steps must be executed as
    portal30_sso schema user, NOT portal30.
    Examples for NT:
    Copy the appropriate library file (ssoxldap.dll) used for the
    LDAP API callouts from the $PORTAL_HOME/portal30/admin/plsql/sso
    directory of the product installation into the appropriate place
    on the Login Server machine:
    Examples for NT copy:
F:\>copy \PORTAL_HOME\portal30\admin\plsql\sso\ssoxldap.dll ORACLE_HOME\bin
F:\>sqlplus portal30_sso/portal30_sso
SQL> create or replace library auth_ext as 'F:\Oracle\Ora8db\bin\ssoxldap.dll';
SQL> /
    Notice that you must type a forward slash on a line by itself
    after you execute the command.
    Make sure that your network connectivity is working.
    Make sure you have at least 1 service handler for PLSExtProc:
    Example:
    F:\>set ORACLE_HOME=F:\Oracle\Ora8db
    F:\>lsnrctl status
    PLSExtProc has 1 service handler(s)
    Make sure you can tnsping extproc_connection_data.
    Example:
    F:\>tnsping extproc_connection_data
    Attempting to contact (ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC0))
    OK (80 msec)
    F:\>
    If either of these two network connectivity checks fail nothing
    else will work.
    Next make sure you enter the correct information for the
    ssoldap.sql script. One small typing error will cause the problem
    you had. In the example below there are a couple of common
    mistakes people make. Make sure you type the full Search base.
    The value for the search base should be "cn=Login Server
    (portal30_sso)". Don't forget the cn= and be sure to put in the
spaces and capital letters where you see them. In the "Bind DN"
make sure you don't forget to put in the "cn=" in front of the
    "orcladmin".
    Example:
    sqlplus portal30_sso/portal30_sso
    @\oracle\isuites9i\portal30\admin\plsql\sso\ssoldap
    Host: 144.25.95.92
    Port: 389
    Search Base: cn=Login Server (portal30_sso)
    Unique Attribute: cn
    Bind DN: cn=orcladmin
    Bind Password: welcome
    Note: If you have already changed the password for cn=orcladmin
    in the OID LDAP server you must use that password instead of
    "welcome" for the "Bind Password:".
    Creating the users.ldif file for migrating existing users in the
    portal30 database schema.
    sqlplus portal30_sso/portal30_sso
    @f:\oracle\isuites9i\portal30\admin\plsql\sso\ssoldif
    Generating 'users.ldif' file for existing Portal users.
    Enter the desired file location.
    F:\oracle\admin\oiddb2\udump
    NOTE: The file location must be specified in the appropriate
    parameter in the init.ora file.
    Example (you should see a line like this in the init.ora file):
    UTL_FILE_DIR = F:\Oracle\admin\oid2111\udump
    This line specifies where to dump data the you want to migrate.
    If this line was not present in the init.ora file before you
    started your database you will have to restart the database for
    this step to succeed.
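(A hedged way to verify the setting without opening init.ora, from SQL*Plus as a suitably privileged user - not one of the original steps:)
SQL> show parameter utl_file_dir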
    Using the file that was created in the last step (users.ldif),
    add the entries to the LDAP directory. This example uses Oracle
    Internet Directory's ldapadd command line utility:
    Note. The following command is one long line. If you have already
    done this next step before you may want to go into OID and delete
    the existing data that is already in OID. Use the ODM (Oracle
    Directory Manager) tool to do this. Under "Entry management" make
    sure you delete any entries that you may have already created. If
    the directory entries already exist you will get an error when
    you run the next command indicating that the entries already
    exist. Because any previous entries you may have created may not
    be good those entries should be deleted.
ldapadd -h 144.25.95.92 -p 389 -D "cn=orcladmin" -w welcome -f f:\oracle\admin\oiddb2\udump\users.ldif
    Once these users are successfully added, you are ready to log
    into the Portal through the Login Server, authenticating against
    this LDAP directory.
    Make sure you login as a valid user that is under the "cn=Login
    Server (portal30_sso)" directory of your LDAP server.
    Example:
    Open your browser and go to the URL:
    http://ip_or_hostname:80/pls/portal30
    Click on the Login link
    Login as portal30_sso/portal30_sso
    Note: Assuming portal30_sso is a valid user in the LDAP server. I
believe that some of the newer versions of Portal have a user
    called "portal309_sso" instead of "portal30_sso".
    Hope this helps.
    Jay

  • Connection Errors between Oracle 8 Client and Oracle 8.1.6 EE

    Our current application has been developed using PowerBuilder 6.5 and Oracle 8.1.5. Since PB 6.5 does not have a "native driver" for 8.1.5, we have used their driver for Oracle 8.0.5 for Net8 connectivity to the server. Everything works fine.
Recently, I downloaded the current version of Oracle 8.1.6 EE from Technet to try out the compatibility of our software with the next version of 8i, and it failed. The error seems to be somewhere between the 8.0.5 Oracle Client software and the downloaded Oracle 8.1.6 EE Server software. Somehow we have lost the ability to declare cursors. I tried declaring a cursor in a procedure through SQL*Plus and it failed there as well. I have even tried the Oracle 8.0.6 Client, with the same results.
    Has anyone seen this? If so, what should I do?
    Thanks,
    Jason

    HOW TO SUPPORT TWO-TASK COMMON ERRORS - 1012295.102
    Troubleshooting
    ===============
    Two-task common errors are generally RDBMS related issues, but could be caused by a problem with SQL*Net, or an application (i.e. Pro*C).
    ORA-03106
========
Possible reasons for the ORA-03106 errors include:
    1. Incompatibilities between the client application and the RDBMS server.
    For example, version incompatibilities, or a client trying to use a feature not supported by the database kernel.
    2. When using database links or gateways.
    3. Network or SQL*Net problems.
    4. Corruptions.
    5. PL/SQL - language related.
Check the NLS_LANG variable on the client and on the server.
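As a hedged illustration of that last check (standard commands; the value shown is only a placeholder):
C:\>echo %NLS_LANG%
AMERICAN_AMERICA.WE8ISO8859P1
SQL> select * from nls_database_parameters where parameter like '%CHARACTERSET%';
Compare the client NLS_LANG character set with the database character set reported by the query.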

  • TROUBLE SHOOTING PROBLEM ON SQL*NET, NET8.X, AND NET SERVICES

Product: SQL*NET
Date written: 2002-11-24
TROUBLE SHOOTING PROBLEM ON SQL*NET, NET8.X, AND NET SERVICES
(GENERATING TRACE FILE OF SQL*NET, NET8.X, AND NET SERVICES)
==========================================================
PURPOSE
This article explains how to generate a trace of the Oracle Networking products in order to find the cause of a networking problem that you cannot resolve yourself, for example when no matching case can be found on http://metalink.oracle.com.
Explanation
When you run into a problem while using an Oracle Networking product, the first thing to examine is the networking protocol stack of the OS.
Some customers ask whether the Oracle Networking products could somehow seriously affect the behaviour of the OS protocol stack. That concern disappears once you understand that an Oracle Networking product is itself just one network client that uses the OS protocol stack, and therefore cannot influence how the OS protocol stack behaves in any way.
If you believe the OS protocol stack is configured correctly but the rest of the networking environment is hard to assess (for example, the host configuration looks fine but traffic has to pass through a firewall), and networking does not work well - connections to the Oracle Server fail or networking errors occur during a connection - the fastest way to a solution is to search http://metalink.oracle.com for the same case.
In the rare cases where no matching case can be found on http://metalink.oracle.com, you generate a trace file of the Oracle Networking product to find the cause.
The Oracle Networking product is called SQL*Net in Oracle 7.x, Net 8.x in Oracle 8.x, and Oracle Net Services in Oracle 9.x; for convenience, all of them are referred to here as Oracle Networking products.
Note also that the trace files discussed here are different from the Oracle Server trace files. Oracle Server trace files record only information about the database instance, while Oracle Networking trace files record only networking information.
The way to generate a trace file is the same from Oracle 7.x through 9.x.
Solution Description
1. When a problem is experienced on the client host, it is sometimes hard to pin down because it occurs only rarely, or because you suspect the network or the server side without being able to confirm it. In such cases, generate a client trace file on the client until the problem occurs again.
a. prompt$ echo $TNS_ADMIN
b. If $TNS_ADMIN is set: prompt$ cd $TNS_ADMIN
Otherwise: prompt$ cd $ORACLE_HOME/network/admin
c. vi sqlnet.ora
# add the following lines
TRACE_LEVEL_client=16
TRACE_FILE_client=<filename>
TRACE_DIRECTORY_client=<directory>
TRACE_UNIQUE_client=TRUE
:wq
prompt$
For example, if the client runs on Windows, do the following.
prompt> notepad sqlnet.ora
TRACE_LEVEL_client=16
TRACE_FILE_client=client
TRACE_DIRECTORY_client=D:\temp
TRACE_UNIQUE_client=TRUE
prompt>
d. Keep using the client software you were using, such as SQL*Plus.
e. Wait for the problem to occur again.
f. Check the dates of the generated trace files to see whether any were written at about the same time the problem occurred.
prompt> dir D:\temp
client_<PID1>.trc ...
client_<PID2>.trc ...
prompt>
g. Stop tracing.
prompt$ vi sqlnet.ora
TRACE_LEVEL_client=0
:wq
prompt$
h. Use the Trace Assistant to find the Oracle Networking error codes (TNS error codes, below) in the trace file.
prompt$ cd <directory of TRACE_DIRECTORY_client>
prompt$ trcasst <filename of TRACE_FILE_client>_<PID>.trc > trcasst.out
prompt$ vi -R trcasst.out
The Trace Assistant (trcasst command) is provided from Oracle 8.x onwards; using the Oracle 9.x Trace Assistant is recommended.
i. Use the oerr command to look up the error message for each TNS error.
For example, a common case is that the listener is not running, or a wrong tnsnames.ora configuration makes the client try to connect to a network address the listener is not servicing, and the errors TNS-12541, TNS-12560 and TNS-511 appear one after another. You can then see the message, explanation and resolution for each TNS error code as follows.
prompt$ oerr tns 12541
prompt$ oerr tns 12560
prompt$ oerr tns 511
j. Search http://metalink.oracle.com for the TNS error codes reported by the Trace Assistant and check whether the same case exists.
In a web browser:
1) Go to http://metalink.oracle.com, then login
In the upper right HTML frame:
2) Click the "Advanced" button.
In the lower right HTML frame:
3) Enter all the TNS error codes in the "Enter Keyword" text box,
for example: tns-12541 tns-12560 tns-511
4) Click the "Search" button.
k. If the search on http://metalink.oracle.com returns almost nothing, or the problem is very strange, see the end of this article.
2. In rare cases SQL*Plus, or another third-party application that uses the Oracle Client, works fine on the client, but the tnsping command alone raises an error. In that case, generate a trace file for the tnsping command.
prompt$ vi sqlnet.ora
TNSPING_TRACE_LEVEL=16
TNSPING_TRACE_DIRECTORY=<directory>
:wq
prompt$ tnsping <tns alias>
3. If you suspect a problem on the server, that is, on the listener, look at the listener log file first. For the listener log, see bulletin.18364.
4. Set up the listener trace file as follows.
Configuring listener tracing is the same as the client procedure described above, except for two points: the parameter names end with the name of the listener you want to trace, and the listener process must be restarted after the change.
prompt$ vi listener.ora
TRACE_LEVEL_<listener name>=16
TRACE_FILE_<listener name>=<filename>
TRACE_DIRECTORY_<listener name>=<directory>
:wq
prompt$ lsnrctl stop <listener name>
prompt$ lsnrctl start <listener name>
5. If searching http://metalink.oracle.com as above finds no matching case at all, or the error messages you found look so strange that they seem unrelated to the cause, try the following.
a. Check that the version of the Oracle software and the version of the OS are in the certification matrix.
In a web browser:
1) Go to http://metalink.oracle.com, then login
In the left menu:
2) Click the "Certify & Availability" menu item.
In the lower right HTML frame:
3) Select the product and version you are using and the OS and its version, and search.
The Certification Matrix shows this information very precisely.
Combinations of Oracle product and OS version that are not in the matrix are not certified, and using any Oracle product on an uncertified OS version is not warranted or supported for any reason.
If the product you are using is not certified, you must install a certified combination of product and OS version and try again.
For example, on Windows no Oracle product is certified on Windows Me or Windows XP Home Edition.
Products are certified on Windows 2000 Professional and above starting with Oracle 8.1.6, and on XP Professional and above starting with Oracle 9.0.1.
Oracle products are software, but of a very high level of completeness. However, an Oracle product runs as an OS process, and as time passes and many parts of the environment go through major upgrades, new versions of the Oracle software are produced for those environments and certification is carried out again.
That is why customers, especially those planning a large number of clients or an expensive server, should check the certification matrix thoroughly in advance.
Short of the Certification Matrix, you can assume that only the OS versions stated in the Installation Guide are certified.
b. On Unix, problems sometimes appear when an OS networking patch is applied after the Oracle software was installed.
In that case you must relink so that the Oracle executables are rebuilt for the changed OS environment.
For details, read the following article on http://metalink.oracle.com:
Note:131321.1 How to Relink Oracle Database Software on UNIX
c. If you cannot find the cause even in a certified environment, contact Oracle Support Services with the trace file. (Customers with a support contract only.)
Reference Documents
Chapter 17 Troubleshooting Oracle Net Services
Oracle 9i Net Services Administrator's Guide
SQL*Net, Net8.x, Net Services 9.x Network Administration Guide/Reference


  • OIM- OID Ldap Sync

    Hi Experts,
    I had configured OIM - OID Ldap Synchronization. Create/Modify/Delete of users are working as expected.
During user account creation, the user type is given as Role A or Role B in OIM. This user type exists as a group/role in OID: Role A and Role B are groups in OID, and the user DN is added under the matching group based on the user type from OIM.
Now the problem is: when I modify the user type of a user in OIM from Role A to Role B, in OID the user account is not added to the new group, and it is also not removed from the old group that was assigned earlier.
What changes need to be performed for group changes in OIM/OID? Please throw some pointers on this.
    Thanks in Advance,
    Sandeep.

    Any suggestions experts?
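(For anyone hitting the same thing, a hedged way to check what OID actually holds after the role change; the host, port, password and DNs below are placeholders, not values from this thread:)
ldapsearch -h <oid_host> -p 3060 -D "cn=orcladmin" -w <password> -b "cn=Groups,dc=example,dc=com" -s sub "(uniquemember=cn=<user>,cn=Users,dc=example,dc=com)" dn
If the new group's DN is not returned, the membership change never reached OID, and the role/group part of the OIM LDAP sync configuration is the place to look.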

Maybe you are looking for