DIP (Directory Integration Platform)

What user does DIP actually make modifications with? I'm trying to limit the actions of an OID plug-in so that it is only invoked after (post_modify) DIP makes the modification (i.e. when the synchronized database is modified).
Likewise, it would be useful to know how to make a plug-in fire only when someone other than DIP has made the modification.
Basically, this user would be entered under Plug-In Management -> "PLUG-IN NAME HERE" -> Optional Properties tab -> Plug-In Request Group.

DIP performs the operations as the profile. The identity used will be orclodipagentname=<profilename>,cn=subscriber profile,cn=changelog subscriber,cn=oracle internet directory
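Not from the thread itself: OID server plug-ins are written in PL/SQL, but the decision the question is after — fire only for (or only against) DIP-driven changes — reduces to a test on the bind DN of the modifying client. A minimal sketch of that test, using the container DN from the answer above (the plug-in API details are not shown and would differ in PL/SQL):

```python
# Illustrative only: OID server plug-ins are PL/SQL, and receive the bind DN
# of the client that made the change. This sketch shows the DN test a
# post_modify plug-in could apply to detect DIP synchronization profiles.
# The container DN below comes from the answer above.

DIP_AGENT_CONTAINER = (
    "cn=subscriber profile,cn=changelog subscriber,cn=oracle internet directory"
)

def is_dip_bind(bind_dn: str) -> bool:
    """True when the modifying identity is a DIP synchronization profile."""
    dn = bind_dn.lower().strip()
    return dn.startswith("orclodipagentname=") and dn.endswith(DIP_AGENT_CONTAINER)

# Fire only after DIP's own modifications:
print(is_dip_bind(
    "orclodipagentname=ADprofile,cn=subscriber profile,"
    "cn=changelog subscriber,cn=oracle internet directory"))   # True
# ...and skip (or invert the test) when anyone else made the change:
print(is_dip_bind("cn=orcladmin"))                             # False
```

The inverse condition (`not is_dip_bind(...)`) covers the second half of the question: firing only when someone besides DIP modified the entry.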

Similar Messages

  • OID DIP (Directory Integration Platform) Question

    Does anyone have any experience or knowledge with running numerous DIP Profiles on an OID? We're looking at 500+ DIP Profiles, each of them running very frequently, every few minutes. Will it be a problem for OID to handle this number of DIP Profiles?
Any help would be greatly appreciated. Thanks!

We are syncing user information from a non-LDAP directory. Unfortunately, our DIT structure is such that certain types of users (defined by geographic location) are contained within separate folders. We have over 500 different "types" of users, each with their own place (container) in the tree. This rules out having one DIP mapping file. Each office has its own mapping file and its own config file that queries based upon that location.
    Thanks.

  • Directory Integration Platform Configuration Assistant hanging

    Hi,
    Installing 10gAS (10.1.2.0.2) infrastructure on Linux ES4 (64-bit) - High Availability cluster, node 1.
The Directory Integration Platform CA is hanging when I retry it, having previously failed, probably due to a Load Balancer config error on my part (I had a timeout set too low (30s) and got a "Broken Pipe" error message).
    Having fixed this LB config, hit retry on the CA (still in the OUI) but it is just hanging.
    Tried deleting the /tmp/EM_CONFIG_INSTALL.lk file, as per the install doc, and retrying but no success.
    I am on retry attempt no. 7 and previous attempt (I think attempt no. 4) reported the following in dipca.log:
    oracle.ldap.oidinstall.backend.OIDCAException: Invalid Credentials
    at oracle.ldap.oidinstall.backend.OIDConfiguration.sslbind(OIDConfiguration.java:814)
    at oracle.ldap.oidinstall.backend.OIDConfiguration.<init>(OIDConfiguration.java:144)
    at oracle.ldap.oidinstall.backend.OIDConfigWrapper.configDIP(OIDConfigWrapper.java:463)
    This log has not been written to since this attempt.
    Any ideas?
    Thanks,
    Gavin

    Hi,
If the SSOCA fails, you may notice a Java stack trace. Can you please post the Java stack trace too? It might give a clue about the problem.
    Regards,
    Sandeep

  • Directory Integration Platform Configuration Assistant - Invalid Credential

    O/S: SuSE Enterprise 9
    Situation: During the installer for Oracle Application Server 10g Basic Installation/Portal, I see the following error:
"Directory Integration Platform Configuration failed. Please see log file: /home/oracle/product/10.1.2/OracleAS/infra/ldap/log/dipca.log"
    A review of dipca.log shows the following:
"oracle.ldap.oidinstall.backend.OIDCAException: Invalid Credentials at oracle.ldap.oidinstall.backend.OIDConfiguration.sslbind(OIDConfiguration.java:787)"
    The user authentication method set up on the server was local (/etc/passwd). Could this be my issue? If so, how do I correct it? Any thoughts as to what the issue is?

    You can have the BI, J2EE and webcache in a single midtier. That should not be an issue.
The password for b2b stored in OID may be out of sync with the password you have set. Find out what the password in OID is, and then reset your b2b schema password to the password stored in OID. To locate the b2b password in OID:
    Login to OID and traverse through the following nodes:
Entry Management | cn=OracleContext | Products | IAS | IAS Infrastructure Database | orclReferenceName | orclResourceName=B2B
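As an aside (not from the thread): the console path above corresponds roughly to a DN of the following shape. The literal RDN names here are an assumption for illustration and may vary between OID versions; verify against your own DIT before using them:

```python
# Hypothetical illustration of the DN the Entry Management path above walks
# down to. "db_reference_name" stands in for the orclReferenceName value of
# your infrastructure database; the RDN names are assumptions, not verified
# against any particular OID release.

def b2b_resource_dn(db_reference_name: str) -> str:
    rdns = [
        "orclResourceName=B2B",
        f"orclReferenceName={db_reference_name}",
        "cn=IAS Infrastructure Databases",
        "cn=IAS",
        "cn=Products",
        "cn=OracleContext",
    ]
    return ",".join(rdns)

print(b2b_resource_dn("orcl.example.com"))
```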
    Hope this will resolve your problem,
    Eng

  • Directory Integration Platform - DN Mapping Rule

    Hi all,
    Can anyone help me with setting up the mapping in a directory synchronization profile in DIP?
    Using the GUI, the three fields required are Source Container, DIP-OID Container and DN Mapping rule.
    The containers are obvious but what is the syntax for setting the DN mapping rule? I just want to match equality on the cn attributes in each container.
    Many thanks,
    Bernie

Are you using the AD-specific or the default schema?
Can't you simply set the selection criterion to "Present" instead of "Blank"? Of course, since you say the dn is not available, we need to see which field should provide the information.
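Not from the thread: real DIP DN mapping rules have their own placeholder syntax, but the effect Bernie is after — match cn equality between two containers — is just a DN rewrite. A sketch of that logic, assuming single-level cn RDNs directly under each container (the container DNs below are made-up examples):

```python
# Sketch of what "match cn equality between containers" does to a DN.
# Real DIP DN mapping rules use their own syntax; this only illustrates the
# transformation, assuming single-valued cn RDNs directly under each container.
# Both container DNs are hypothetical examples.

SRC_CONTAINER = "ou=people,dc=source,dc=com"
DST_CONTAINER = "cn=users,dc=oid,dc=com"

def map_dn(source_dn: str) -> str:
    """Rewrite cn=<x>,<source container> to cn=<x>,<destination container>."""
    rdn, _, container = source_dn.partition(",")
    if container.lower() != SRC_CONTAINER:
        raise ValueError(f"{source_dn} is not directly under {SRC_CONTAINER}")
    if not rdn.lower().startswith("cn="):
        raise ValueError(f"{source_dn} does not start with a cn RDN")
    return f"{rdn},{DST_CONTAINER}"

print(map_dn("cn=Bernie,ou=people,dc=source,dc=com"))
# cn=Bernie,cn=users,dc=oid,dc=com
```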

  • Jabber Directory Integration in hybrid mode

    We are running Jabber in cloud/hybrid mode.  Meaning we use WebEx connect cloud for IM/Presence but use our on premise CUCM for voice services.  This works great for the most part.
We recently had Cisco enable "Directory Integration" for our org domain. This allows us to export our users from AD, create a CSV file, and then upload this CSV file to Cisco's SFTP server to be imported into Jabber. Everything is scheduled and requires no user intervention to add/deactivate users. We have the import working and are able to get accounts created. The problem is that enabling Directory Integration has disabled our ability to manually update user profiles and our users' ability to upload their contact photo from the Jabber client.
From the docs it looks like we can specify a webserver that hosts our contact photos. But this webserver would need to be wide open, with no security, so that Jabber can access the photos directly. It seems like there has to be a better solution.
    Anyone else running a hybrid deployment and tackled the contact photos dilemma?

    Hi Venkat,
Yes, we installed a Windows 2008 server with AD LDS, configured CUCM to sync to this AD LDS, and then configured Jabber to use UDS (a CUCM 8.6(2) feature). This way Jabber has (indirect) access to the multi-forest AD data.
We used this document to configure the AD LDS (https://supportforums.cisco.com/docs/DOC-16356).
    Regards,
    Erik
PS. One thing we learned is that you have to be careful when selecting the "user-id" field. We initially used "mail" as the user id to make sure all accounts are unique, but found that when using Extension Mobility, the user-id input on the phone doesn't accept very long usernames (which you might encounter when integrating based on the mail attribute).

  • Help with Active Directory Integration and kerberos

    Hello,
I'm encountering a bug preventing me from using Active Directory integration with Kerberos:
    Our domain name is CORP.DOMAIN.COM.
    When we request the GC in this domain :
bash-3.00# nslookup -query=any _gc._tcp.corp.domain.com
Server: 1.2.1.6
Address: 1.2.1.6#53
** server can't find _gc._tcp.corp.domain.com: NXDOMAIN
    there is no answer.
    But when we request without corp, we find the servers :
bash-3.00# nslookup -query=any _gc._tcp.domain.com | grep sis
_gc._tcp.domain.com service = 0 100 3268 serveur02.corp.domain.com.
_gc._tcp.domain.com service = 0 100 3268 serveur01.corp.domain.com.
    bash-3.00#
Is it possible to add an option to enter the domain name where the _gc._tcp records reside?
    Thank you.

    Hello
The domain.com domain exists, but it's not our domain.
So when I enter domain.com, the search returns no results (nothing happens).
    our kdc.conf :
    [kdcdefaults]
    kdc_ports = 88,750
    [realms]
    CORP.DOMAIN.COM = {
    profile = /etc/krb5/krb5.conf
    database_name = /var/krb5/principal
    admin_keytab = /etc/krb5/kadm5.keytab
    acl_file = /etc/krb5/kadm5.acl
    kadmind_port = 749
    max_life = 8h 0m 0s
    max_renewable_life = 7d 0h 0m 0s
default_principal_flags = +preauth
}
krb5.conf:
    [libdefaults]
    default_realm = CORP.DOMAIN.COM
    default_checksum = rsa-md5
    [realms]
    CORP.DOMAIN.COM = {
    kdc = dc01.corp.domain.com
kdc = dc02.corp.domain.com
}
[domain_realm]
    .corp.domain.com = CORP.DOMAIN.COM
    corp.domain.com = CORP.DOMAIN.COM
In most setups, I would expect the GC records to be under corp.domain.com, but in my company they're under domain.com...
    Thank you,
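For reference (not from the thread): AD publishes global-catalog SRV records under `_gc._tcp.<forest-root domain>`, not necessarily under the machine's own domain, which matches the behaviour above where only the lookup without the corp label succeeds. A tiny, purely illustrative helper for building the query names:

```python
# Builds the DNS SRV owner names a client queries when locating a global
# catalog or a domain controller. GC records live under the forest root
# (_gc._tcp.<forest root>), which is why the corp.domain.com lookup above
# returns NXDOMAIN while the domain.com lookup succeeds.

def gc_srv_name(forest_root: str) -> str:
    """SRV owner name for global-catalog discovery in the given forest."""
    return f"_gc._tcp.{forest_root.strip('.').lower()}"

def ldap_srv_name(domain: str) -> str:
    """SRV owner name for plain LDAP/DC discovery in the given domain."""
    return f"_ldap._tcp.{domain.strip('.').lower()}"

print(gc_srv_name("domain.com"))        # _gc._tcp.domain.com
print(gc_srv_name("CORP.DOMAIN.COM"))   # _gc._tcp.corp.domain.com
```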

  • Active Directory integration: Invalid Token Error in Verification Service

    I'm having problems with Active Directory integration. I'm able to browse users in the task routing slip in JDeveloper. But I'm unable to login to the worklist application.
    Getting an "Invalid Token Error in Verification Service" error. Any pointers?
    <2007-06-12 21:40:36,843> <ERROR> <default.collaxa.cube.services> <PCException::<init>> Identity Service Configuration error.
    <2007-06-12 21:40:36,843> <ERROR> <default.collaxa.cube.services> <PCException::<init>> Identity Service Configuration file has error.
    <2007-06-12 21:40:36,859> <ERROR> <default.collaxa.cube.services> <PCRuntimeException::<init>> Identity Service Configuration error.
    <2007-06-12 21:40:36,859> <ERROR> <default.collaxa.cube.services> <PCRuntimeException::<init>> Identity Service Configuration file has error.
    <2007-06-12 21:40:36,859> <ERROR> <default.collaxa.cube.services> <::> WorkflowService:: VerificationService.destroyContext: invalid token: c9pHcmBFtc4q7/EY3xGAv/6hhfa6Hf5tllCb8ZYKtdSA/8/y0exRcwpjy0vWiWGgBPzuIh5Ur+l+ZHDNe0PKb9KiFScsKAG3JK1y+nIJtC827Rljhn8E+/BoF+ZIN6GFYn/iyo/6Mrlmz02Pg4QtetftO7eHJ01rEV5MmZFTXsg8iV6LQPnkAPjqmmsq+5bVYGGfSFpHX7FXk/0FrSabClKy6DKiwt/1Kp2Ldbj2RY8=
    <2007-06-12 21:40:36,890> <ERROR> <default.collaxa.cube.services> <::> ORABPEL-30503
    <2007-06-12 21:40:36,890> <ERROR> <default.collaxa.cube.services> <::>
    <2007-06-12 21:40:36,890> <ERROR> <default.collaxa.cube.services> <::> Invalid Token Error in Verification Service.
    <2007-06-12 21:40:36,890> <ERROR> <default.collaxa.cube.services> <::> Invalid Token Error in Verification Service. Received invalid token c9pHcmBFtc4q7/EY3xGAv/6hhfa6Hf5tllCb8ZYKtdSA/8/y0exRcwpjy0vWiWGgBPzuIh5Ur+l+ZHDNe0PKb9KiFScsKAG3JK1y+nIJtC827Rljhn8E+/BoF+ZIN6GFYn/iyo/6Mrlmz02Pg4QtetftO7eHJ01rEV5MmZFTXsg8iV6LQPnkAPjqmmsq+5bVYGGfSFpHX7FXk/0FrSabClKy6DKiwt/1Kp2Ldbj2RY8= in destroyContext
    <2007-06-12 21:40:36,890> <ERROR> <default.collaxa.cube.services> <::> Check the underlying exception and correct the error. Contact oracle support if error is not fixable.
    <2007-06-12 21:40:36,890> <ERROR> <default.collaxa.cube.services> <::>
    <2007-06-12 21:40:36,890> <ERROR> <default.collaxa.cube.services> <::>      at oracle.bpel.services.workflow.verification.impl.VerificationService.destroyContext(VerificationService.java:667)
    <2007-06-12 21:40:36,890> <ERROR> <default.collaxa.cube.services> <::>      at oracle.bpel.services.workflow.query.impl.TaskQueryService.destroyWorkflowContext(TaskQueryService.java:161)
    <2007-06-12 21:40:36,890> <ERROR> <default.collaxa.cube.services> <::>      at worklistapp.servlets.Logout.handleRequest(Logout.java:66)
    <2007-06-12 21:40:36,890> <ERROR> <default.collaxa.cube.services> <::>      at worklistapp.servlets.BaseServlet.doGet(BaseServlet.java:142)
    <2007-06-12 21:40:36,890> <ERROR> <default.collaxa.cube.services> <::>      at javax.servlet.http.HttpServlet.service(HttpServlet.java:743)
    <2007-06-12 21:40:36,890> <ERROR> <default.collaxa.cube.services> <::>      at javax.servlet.http.HttpServlet.service(HttpServlet.java:856)
    <2007-06-12 21:40:36,890> <ERROR> <default.collaxa.cube.services> <::>      at com.evermind.server.http.ResourceFilterChain.doFilter(ResourceFilterChain.java:64)
    <2007-06-12 21:40:36,890> <ERROR> <default.collaxa.cube.services> <::>      at oracle.security.jazn.oc4j.JAZNFilter$1.run(JAZNFilter.java:396)
    <2007-06-12 21:40:36,890> <ERROR> <default.collaxa.cube.services> <::>      at java.security.AccessController.doPrivileged(Native Method)
    <2007-06-12 21:40:36,890> <ERROR> <default.collaxa.cube.services> <::>      at javax.security.auth.Subject.doAsPrivileged(Subject.java:517)
    <2007-06-12 21:40:36,890> <ERROR> <default.collaxa.cube.services> <::>      at oracle.security.jazn.oc4j.JAZNFilter.doFilter(JAZNFilter.java:410)
    <2007-06-12 21:40:36,890> <ERROR> <default.collaxa.cube.services> <::>      at com.evermind.server.http.ServletRequestDispatcher.invoke(ServletRequestDispatcher.java:621)
    <2007-06-12 21:40:36,890> <ERROR> <default.collaxa.cube.services> <::>      at com.evermind.server.http.ServletRequestDispatcher.forwardInternal(ServletRequestDispatcher.java:368)
    <2007-06-12 21:40:36,890> <ERROR> <default.collaxa.cube.services> <::>      at com.evermind.server.http.HttpRequestHandler.doProcessRequest(HttpRequestHandler.java:866)
    <2007-06-12 21:40:36,890> <ERROR> <default.collaxa.cube.services> <::>      at com.evermind.server.http.HttpRequestHandler.processRequest(HttpRequestHandler.java:448)
    <2007-06-12 21:40:36,890> <ERROR> <default.collaxa.cube.services> <::>      at com.evermind.server.http.HttpRequestHandler.serveOneRequest(HttpRequestHandler.java:216)
    <2007-06-12 21:40:36,890> <ERROR> <default.collaxa.cube.services> <::>      at com.evermind.server.http.HttpRequestHandler.run(HttpRequestHandler.java:117)
    <2007-06-12 21:40:36,890> <ERROR> <default.collaxa.cube.services> <::>      at com.evermind.server.http.HttpRequestHandler.run(HttpRequestHandler.java:110)
    <2007-06-12 21:40:36,890> <ERROR> <default.collaxa.cube.services> <::>      at oracle.oc4j.network.ServerSocketReadHandler$SafeRunnable.run(ServerSocketReadHandler.java:260)
    <2007-06-12 21:40:36,890> <ERROR> <default.collaxa.cube.services> <::>      at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:303)
    <2007-06-12 21:40:36,890> <ERROR> <default.collaxa.cube.services> <::>      at java.lang.Thread.run(Thread.java:595)
    <2007-06-12 21:40:36,890> <ERROR> <default.collaxa.cube.services> <::> Caused by: BPEL-10555
    <2007-06-12 21:40:36,890> <ERROR> <default.collaxa.cube.services> <::>
    <2007-06-12 21:40:36,890> <ERROR> <default.collaxa.cube.services> <::> Identity Service Configuration error.
    <2007-06-12 21:40:36,890> <ERROR> <default.collaxa.cube.services> <::> Identity Service Configuration file has error.

    Hi Adina,
    thank you for your answer (questions)!
We use the 10.1.3.1 SOA Suite with the default jazn.com Security Provider, and the java.naming.security.principal property is set to oc4jadmin.
Interestingly, we redeployed our EAR and now it works again! There is no Invalid Token Error exception, even though we changed almost nothing...
    Can we debug it somehow?
    Where does this bug come from?
    Thanks!
    ric

  • TFS Integration Platform Error while Syncing - Caught and converted, The request was aborted: The request was canceled

    Hello,
    We are doing a 2-way version control sync between TFS 2013 servers. 
    TFS A - Project A is getting synched with TFS B - Project A.
    TFS A - Project A has got all the code and files, whereas TFS B - Project A is empty.
Previously we have synced TFS 2013 projects across servers without any errors or issues. Today we came across an issue (see the log at the end of this post) and hope Microsoft can provide some guidance. This is the problem we are facing:
We configured the TFS Integration Platform tool for a two-way VC sync and started the migration. Current status: discovery and analysis are complete, and the migration is halfway through, when I started to get these errors (see the log at the end of this post).
Note that the migration has not stopped...
After doing some research and googling, I understand that the reason for this error is that, by default, if an individual file transfer cannot complete 16 MB in 5 minutes, the operation fails. The suggested solution is to modify the chunk size in "TfsMigrationShell.exe.config". I have a few questions about this:
Question 1: Is the above understanding and solution correct? I also notice that this error appears for every huge file (approx. >30 MB). Is there a timeout setting at the target server? Is this something to change in the tool-level config or the target IIS-level config?
Question 2: Suppose I do not change the config setting, so huge files are skipped due to this error. After the migration of the other files finishes, will the tool retry the skipped (huge) files, or has it skipped them forever?
Question 3: Suppose I modify the chunk size in "TfsMigrationShell.exe.config". Which of the following do I then need to do (give me the best of the three)?
a) Restart the migration (by deleting and recreating the project on the second TFS server and restarting the migration)?
b) Create a new configuration and restart the migration?
c) Restart the sync service?
The concern is that we have three more jobs (TFS Integration tool configurations) running in parallel, syncing Work Items and VC of other TFS projects, and we want to make sure that restarting the TFS Integration Job Service will have no negative impact on the other jobs or on the current migration job...
Microsoft, please advise. Thanks!
    Log:
    [1/21/2015 10:55:39 AM] AsyncUploadFileState.Completed -- caught and converted:
    [1/21/2015 10:55:39 AM] Microsoft.TeamFoundation.VersionControl.Client.VersionControlException: C:\TfsIPData\10\ications\MDM Folder\Binaries\v9.5\DMS Web Service Release 1.5.zip: The request was aborted: The request was canceled. ---> System.Net.WebException: The request was aborted: The request was canceled. ---> System.IO.IOException: Cannot close stream until all bytes are written.
    [1/21/2015 10:55:39 AM] at System.Net.ConnectStream.CloseInternal(Boolean internalCall, Boolean aborting)
    [1/21/2015 10:55:39 AM] --- End of inner exception stack trace ---
    [1/21/2015 10:55:39 AM] at System.Net.ConnectStream.CloseInternal(Boolean internalCall, Boolean aborting)
    [1/21/2015 10:55:39 AM] at System.Net.ConnectStream.System.Net.ICloseEx.CloseEx(CloseExState closeState)
    [1/21/2015 10:55:39 AM] at System.Net.ConnectStream.Dispose(Boolean disposing)
    [1/21/2015 10:55:39 AM] at System.IO.Stream.Close()
    [1/21/2015 10:55:39 AM] at System.IO.Stream.Dispose()
    [1/21/2015 10:55:39 AM] at Microsoft.TeamFoundation.VersionControl.Client.FileUploader.UploadChunk(FileStream fileContentStream, Int64 end)
    [1/21/2015 10:55:39 AM] --- End of inner exception stack trace ---
    [1/21/2015 10:55:39 AM] at Microsoft.TeamFoundation.VersionControl.Client.Workspace.EndUploadFile(IAsyncResult asyncResult, PendingChange& pendingChange, Byte[]& uploadedContentBaselineFileGuid, Int64& uncompressedLength)
    [1/21/2015 10:55:39 AM] at Microsoft.TeamFoundation.VersionControl.Client.Workspace.AsyncUploadFileState.Completed(IAsyncResult asyncResult)
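The default limit described above (16 MB within 5 minutes before an individual transfer fails) can be sanity-checked with a little arithmetic; this sketch just makes the implied minimum link speed explicit:

```python
# Sanity-checking the default TFS Integration Platform upload limit quoted
# above: an individual file transfer must move 16 MB within 5 minutes,
# which implies a minimum sustained upload rate.

def min_throughput_kbps(chunk_mb: float = 16, window_s: float = 300) -> float:
    """Minimum sustained upload rate (KB/s) needed to beat the timeout."""
    return chunk_mb * 1024 / window_s

def max_file_mb(throughput_kbps: float, window_s: float = 300) -> float:
    """Largest amount (MB) a link at the given rate can move inside the window."""
    return throughput_kbps * window_s / 1024

rate = min_throughput_kbps()   # ~54.6 KB/s for the 16 MB / 5 min default
print(f"required rate: {rate:.1f} KB/s")
```

A link slower than roughly 55 KB/s to the remote server would therefore trip this limit on large files even with a reduced chunk size, which is consistent with the firewall/WAN scenario reported in the reply below failing only on big files.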

    Hi Charles,
    I have been doing in depth research and testing of this and here are my findings:
    Replies to your points:
1) It failed even after changing the UploadChunkSize value. I tried 1 MB, 500 KB, even 100 KB. It fails with the same error! Is there any other setting we can change for the timeouts on the target TFS server?
    2) The huge files are not skipped. The migration loops on the failed files and doesn't continue to process the next lot of files. So the migration is blocked on the huge files.
    3) OK. Thanks. 
    Please see below the finding on when migration succeeds and when it fails:-
    Scenario 1:
    a) Where Migration fails:
TFS A is in India and TFS B is in the UK. Both TFS servers are on different domains. The migration tool is installed on the same machine as TFS A. There is a firewall that allows access from the TFS A machine to the TFS B machine. The migration tool fails to work in this scenario when it encounters a file size of approx. >8 MB. I tried setting the UploadChunkSize value to 100 KB. Yet it failed!
    b) Where Migration works:
    However, if the files are approx less than 3MB then the migration works without any errors.
    Scenario 2 where TFS migration tool works:
TFS A is in India and TFS B is also in India (located close by). Both TFS servers are on different domains. The migration tool is installed on the same machine as TFS A. There is no firewall in this case. No changes were made to the UploadChunkSize value, and files of tens and hundreds of MBs got synced without any issue.
    Questions
Question 1: Can we conclude now that there is some issue with the firewall or network in Scenario 1? As mentioned above, Scenario 1 works if the file size is less than approx. 3 MB.
    Question 2: Is there any timeout setting on the Target TFS server which can be set so that it waits for the file without timing out?
    Thanks for any help...

  • Tutorial: Azure Active Directory integration with Igloo Software

    Click reply and tell us what you think:
    Tutorial: Azure Active Directory integration with Igloo Software
    Markus Vilcinskas, Knowledge Engineer, Microsoft Corporation

    Hello
Can you be a little clearer about what you have tested with the AirWatch MDM cloud? Which scenarios?
1) Device enrollment?
2) Access to the AirWatch console?
3) Access to the AirWatch self-service portal?
Following the steps, we do not get it working at all. By the way, some of the steps in this tutorial are unclear and outdated.
I finally figured out how things should look and made it work, but only with the device enrollment scenario from the mobile devices themselves, not from PCs and browsers or from the Access Panel.

  • Oracle.integration.platform.blocks.sdox.WLSSDODynamicStubHelper

    Hi All,
I have developed an ADF-BC service and created a service interface in it. I am trying to deploy it to the Admin Server but am getting the following error:
    [01:44:28 PM] Weblogic Server Exception: weblogic.application.WrappedDeploymentException: oracle.integration.platform.blocks.sdox.WLSSDODynamicStubHelper
    [01:44:28 PM] See server logs or server console for more details.
    [01:44:28 PM] weblogic.application.WrappedDeploymentException: oracle.integration.platform.blocks.sdox.WLSSDODynamicStubHelper
Do we need a SOA server to deploy this, or can it be deployed on the Admin Server as well? Please help me.
    Regards,
    Ramesh.

I am using FusionOrderDemo_R1PS4 for FOD, but I am trying to build it on SOA_11.1.1.6.0_GENERIC_111217.1257. I know that there could be some mismatches, but I want to fix them.
Is there any way to fix this without using an earlier SOA version?

  • Why to use DI Server when I do have Integration Platform for SOAP messages?

    Hi All,
A general question: what are the advantages of 'DI Server' versus 'Integration Platform' (and vice versa) when there is a need to integrate with a third-party system (e.g. exchanging SOAP messages)? I don't know if there is already any material, blog, etc. on this.
    Best Regards,
    Vangelis

    From the Safari menu bar, select
    Safari ▹ Preferences ▹ Extensions
    Turn all extensions OFF and test. If the problem is resolved, turn extensions back ON and then disable them one or a few at a time until you find the culprit.

  • How to start / stop Oracle Directory Integration service (Win 2k3 Server)

I'm running a standard OAS 10.1.2.0.2 instance on Windows Server 2003, and am having trouble starting and stopping ODISRV instances. Well, at least I think I am, because I can't find any way of telling whether they're actually running.
    The System Components in the Application Server Control web page shows the following to be all up and running:
    HTTP_Server
    Internet Directory
    OC4J_SECURITY
    Single Sign-On:orasso
    Management
    Which is backed up by opmnctl:
opmnctl status
Processes in Instance:
    ------------------------------------------------+---------
    ias-component | process-type | pid | status
    ------------------------------------------------+---------
    DSA | DSA | N/A | Down
    LogLoader | logloaderd | N/A | Down
    dcm-daemon | dcm-daemon | 1988 | Alive
    OC4J | OC4J_SECURITY | 5008 | Alive
    HTTP_Server | HTTP_Server | 2800 | Alive
    OID | OID | 4956 | Alive
    In the Application Server Control, when I select the Internet Directory instance, and then the Status for Directory Integration, it shows two Directory Integration Servers, which correspond to instances I've previously started using oidctl. However, I cannot see any odisrv processes running. The only relevant processes appear to be 2 x oidldapd, 1 x oidmon, and 2 x opmn.
When I try to stop these using oidctl, the command returns no errors, but the instances do not go away. There is also only one odisrv log file, from a week or so ago, which does not appear to get updated. It is very possible that I messed up the parameters of these instances when I started them, so they may well be in an invalid state at the moment.
    So I have the following questions:
- Is there any way I can clear all these instances out, back to a clean slate, to start again?
    - Should I be using opmnctl to start an ODISRV instance, or should I somehow configure it to run under opmnctl startall?
    Thanks,
    Barney

Once you register the integration server, ODISRV should auto-start via oidmon.
    To cleanup
    -- opmnctl stopall
    -- delete entries from ods.ods_process table, if there are any.
    -- Restart instance using opmnctl

  • Active Directory integration with call manager

    Hi,
    I am facing issues while Integrating the CCM to my Active Directory using AD Plug-in.
    SITE SETUP:
    1. Windows 2003 Parent Domain Controller located remotely with GC.
    2. Windows 2003 Child Domain for the Parent DC located Locally with GC.
    3. Cisco CallManager 4.1.3 sr3b
    My Requirement is to integrate CCM with my Windows 2003 AD.
    My Questions are:
    1. Do I need to Provide the Parent Domain name or the Child Domain name while performing the AD Plug-in Setup?
    2. Does my Call Manager need to have the Forest access of the Active Directory (i.e., Does it perform some modifications in the Parent Domain)?
3. Does the user account (which is used for Directory Integration) need to be a direct member of Schema Admins, or can membership come through some other domain admin group (i.e., admin user -> Child Domain Admins group -> Parent Domain and Schema Admins groups)?
Can anyone help me with this?
    Thanks,
    V.Kumar

    1. Do I need to Provide the Parent Domain name or the Child Domain name while performing the AD Plug-in Setup?
    Use the root domain, in this case the Parent domain.
Cisco does not recommend having Cisco Unified CallManager cluster service users in different domains, because response times while user data is being retrieved might be less than optimal if domain controllers for all included domains are not local.
    2. Does my Call Manager need to have the Forest access of the Active Directory (i.e., Does it perform some modifications in the Parent Domain)?
    Yes, actually all domains in the forest share the same Schema, which will be modified after running the AD plugin.
3. Does the user account (which is used for Directory Integration) need to be a direct member of Schema Admins, or can membership come through some other domain admin group (i.e., admin user -> Child Domain Admins group -> Parent Domain and Schema Admins groups)?
The account should be a member of the Schema Admins group in Active Directory; try the one in the parent domain.
    Correct permissions for CCMAdministration and similar example for your setup:
    http://www.cisco.com/en/US/products/sw/voicesw/ps556/products_implementation_design_guide_chapter09186a00806e8c04.html#wp1043057
    HTH

  • Active directory Integration with OBIEE

    Hi all,
Can anyone send me a link for Active Directory integration with OBIEE?
I have imported the users successfully and was able to log in to Analytics as an AD user.
But SSO is not possible. Kindly help me with this.
    Thanks,
    Haree.

Thanks for the reply, veeravalli.
I followed the same link and successfully imported all the users from AD into OBIEE; logging in is also possible.
But my requirement is to have Single Sign-On, i.e., users log on to their Windows PCs and access Oracle BI EE via a standard web browser with no further authentication required on their part.
    Thanks,
    Haree
