OIM - multivalued data propagation

OIM guru,
We are implementing an OIM project in which, among other things, we would like to propagate user roles (from HR) to AD. That means we need to deal with multivalued data in OIM.
Based on the available documentation we have to set up target resource reconciliation from CSV (via the GTC) and provisioning to AD.
CSV(HR) ---> OIM ---> AD
We have no idea how to set up propagation from the CSV account table to the AD process form, nor how to propagate multivalued data from the CSV account child table to AD group memberships.
Any idea is appreciated!
Jiri

Hi,
Since you are reconciling from CSV, my suggestion is to store the group names in a single field, separated by a delimiter.
Your CSV should look like this:
UserId,FirstName,LastName,Group
test1,test1,test1,Group1|Group2|Group3|Group4
Now when you reconcile the user, the UDF will be populated with
Group1|Group2|Group3|Group4
Once you have this value in the UDF, you can easily process it using simple String operations, as in the sketch below.
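For illustration, a minimal sketch of that String handling (the variable names and the println are placeholders, not anything from the connector):
    // Value reconciled into the UDF from the CSV group column
    String udfValue = "Group1|Group2|Group3|Group4";
    // "|" has to be escaped because String.split() takes a regular expression
    String[] groups = udfValue.split("\\|");
    for (String group : groups) {
        // here you would add each group to the AD group membership child table
        System.out.println(group.trim());
    }
Each element can then be inserted as a row of the AD group membership child table during provisioning.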
Please let me know if you have follow up questions.
Regards
Nitesh

Similar Messages

  • Data propagation problems w/ NIS+ to LDAP migration..

    Hello All,
    I'm running in to an issue performing an NIS+ to LDAP migration with Solaris 9.
    It all happens like this: NIS+ successfully populates the directory through the 'initialUpdateAction=to_ldap' option-- afterwards, no updates made directly to LDAP are ever pushed back into NIS+.
    I'm of the understanding (which might be incorrect) that after performing the initial update, NIS+ should simply act as a cache to the data stored in LDAP. Do I need to perform an 'initialUpdateAction=from_ldap' after populating LDAP to force the direction of the data propagation to change?
    I'm experienced with LDAP, so I'm comfortable everything is all right on that side, however, I'm not so sure about NIS+. Anyone out there who has gone through this migration who'd be willing to offer some assistance or advice would be greatly appreciated.
    Many thanks in advance..
    ..Sean.

    Well, you neglected to outline exactly how you accomplished your migration.
    Starting with Tiger Server using NetInfo as a standalone server, we created an Open Directory Master, as described in Apple's Open Directory Guide. By the time we'd finished that, we had an OD admin. From there, we did as I previously described -- exported with WGM from NetInfo, imported with WGM into LDAP, deleted with WGM from NetInfo.
    See http://support.apple.com/kb/TA23888?viewlocale=en_US
    This seems to be an article on how to re-create a password that's been lost. That's not really what we need, though. The OD admin account we created works fine for other services, just not for WGM. And other admin users we created work fine for other services, but not for WGM. The problem is that although admin users can log into many services, they can't log into WGM -- only root can.

  • The method to provision the OIM System Date to a target System

    Hi,
    I want to provision the OIM system date (date format: "YYYY-MM-DD HH:MI:SS") to a target system (DB type: Oracle).
    The column type in the target system is Date.
    I use a process adapter and assign the system date to the process data (the Date-type column) in the target system.
    It doesn't work.
    How do I do this?
    Please help me.

    - That's simple. You have already created this date-type variable in your process form. Now pass it along in whatever format it is in. In your code for creation in Oracle, do the date conversion as required using custom code. This works if you have written your own code and are not using the DBApp Tables connector. Do it as follows:
         // OIM system date format from the question: "YYYY-MM-DD HH:MI:SS"
         SimpleDateFormat input = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
         // Replace ORACLE_DB_DATE_FORMAT with the pattern your target column expects
         SimpleDateFormat output = new SimpleDateFormat("ORACLE_DB_DATE_FORMAT");
         Date date = input.parse("Pass form date over here"); // substitute the date string from the process form
         return output.format(date); // Pass this value to Oracle
    - If it's the DBApp Tables connector, then the connector should take care of this by itself.
    Thanks
    Sunny

  • RAC data propagation delay?

    Hi Experts,
    I have a multi-threaded app that connects to a RAC DB using OCI.
    Flow:
    1.) Get an expired resource
    2.) Assign it to a user
    Each thread executes the following sequence of queries:
    1.) SELECT id, data FROM table_name WHERE date_expiry = :min_date AND rownum = 1 FOR UPDATE;
    2.) UPDATE table_name SET date_expiry = trunc(sysdate) + 30, user = :user WHERE id = :id
    3.) COMMIT;
    -- :min_date is always <= trunc(sysdate)
    I expect that each row will only be assigned to a unique user.
    Apparently, this is not the case. It seems that some threads can still get a row even after the date_expiry has been updated.
    Is there a data propagation delay between RAC nodes?
    If there is, a thread could fetch a row even after another thread has updated it, if the two threads are connected to different nodes.
    This is the only reason I can think of.
    I tested this many times on a standalone DB, but I can't replicate the error.
    Please help!
    BTW, our oracle version is:
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production With the Partitioning, Real Application Clusters, OLAP, Data Mining and Real Application Testing options
    Edited by: user11912154 on Sep 17, 2009 3:01 AM

    user11912154 wrote:
    Each thread executes the following sequence of queries:
    1.) SELECT id, data FROM table_name WHERE date_expiry = :min_date AND rownum = 1 FOR UPDATE;
    Bad method IMO. I have experimented with it when designing a PL/SQL replication system (Oracle SE) and this approach was not very robust and did not work properly.
    A better method would be to use something like a dispatcher/thread manager that hands out the work. So instead of each thread trying to discover what work needs to be done - and running into concurrency issues - the manager process picks up a batch of work to be done and distributes it amongst the threads.
    Simplistic example: the thread manager fires off the SQL to find work and bulk fetches the first 50 rows. It closes the cursor, caches the 50 rowids and fires off 10 threads to process the first 10 rows. The manager sleeps for a few seconds, wakes up, checks how many threads are busy, finds that 8 are still busy and fires off 2 more threads to do the next 2 rowids. Repeat. When the cache has been processed (or when it is down to the last 10 rowids), the manager finds the next batch of work to do; see the sketch below.
    The key design issue is not to have threads competing to find work, as this means competing for access to the same resource and potential serialisation issues - threads stepping on one another's toes and getting hurt.
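    For illustration only, a minimal Java sketch of that dispatcher idea; fetchWorkBatch() and processRow() are hypothetical placeholders for the poster's SQL, not code from this thread:
        import java.util.Collections;
        import java.util.List;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        public class WorkDispatcher {
            // Only the dispatcher looks for work, so worker threads never compete to find rows.
            private final ExecutorService pool = Executors.newFixedThreadPool(10);

            public void run() throws InterruptedException {
                while (true) {
                    List<String> rowIds = fetchWorkBatch(50); // hypothetical: the SELECT that finds expired rows
                    if (rowIds.isEmpty()) break;
                    for (String rowId : rowIds) {
                        pool.submit(() -> processRow(rowId)); // hypothetical: UPDATE + COMMIT for one row
                    }
                    Thread.sleep(2000); // wake up periodically and fetch the next batch
                }
                pool.shutdown();
            }

            private List<String> fetchWorkBatch(int max) { return Collections.emptyList(); } // placeholder
            private void processRow(String rowId) { } // placeholder
        }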

  • OIM: Retrieve data from stored procedure,  pre populate it in Resource form

    Hi,
    I need to retrieve data from a stored procedure that I have created. This data needs to be pre-populated in the iPlanet resource form. The stored procedure will have one input string and one output string.
    Thankz,
    Sanjay Rulez

    1. Prepare the OIM Admin Console to interact with an Oracle database by adding the JDBC JAR files to the lib directory of the OIM console.
    2. Restart the OIM console and create a database resource.
    3. Create a pre-populate adapter task with a "Stored Procedure" type.
    4. Invoke your procedure.
    The main problems tend to be around datatype conversion between Java and the Oracle database.
    For instance, all Oracle NUMBER types are mapped to the Java Long type, so you have to deal with conversion between Integer and Long in the adapter task every time.
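    For reference, a minimal JDBC sketch of calling such a procedure (one IN string, one OUT string); the procedure name, connection details and class name are placeholders, not anything from the original post:
        import java.sql.CallableStatement;
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Types;

        public class PrepopCall {
            public static String fetchValue(String input) throws Exception {
                // Connection details are placeholders; an adapter task would normally reuse the IT resource settings
                try (Connection con = DriverManager.getConnection("jdbc:oracle:thin:@host:1521:SID", "user", "password");
                     CallableStatement cs = con.prepareCall("{call MY_PREPOP_PROC(?, ?)}")) {
                    cs.setString(1, input);                    // the IN string
                    cs.registerOutParameter(2, Types.VARCHAR); // the OUT string
                    cs.execute();
                    return cs.getString(2);                    // value to pre-populate on the resource form
                }
            }
        }
    The returned value would then be mapped to the iPlanet form field by the pre-populate adapter.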

  • Security Data Propagation

    Hi
    Since the propagation tool doesn't propagate some of the security data (such as global roles; for more details see http://download.oracle.com/docs/cd/E13155_01/wlp/docs103/prodOps/propToolAdvanced.html#wp1054464 ),
    we would like to use the import/export options in the WLS console to migrate the embedded LDAP data from one domain to another domain.
    Questions:
    Can we use this option for migrating global roles? This option will also move other data in the embedded LDAP associated with visitor roles etc., so can we use both the LDAP migration option and the propagation tool? In the LDAP migration option there is no way to select only the global roles.
    Is there a possibility of inconsistency between the LDAP data after the migration and the data moved by the propagation tool?
    Any ideas?

    Hi!
    Calling 'new InitialContext()' should pass the authenticated user automatically to the
    initial context request. You could also pass the parameters as listed below:
    Principal princ = request.getUserPrincipal();
    Properties prop = new Properties();
    prop.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
    prop.put(Context.PROVIDER_URL, "t3://host:port");
    prop.put(Context.SECURITY_PRINCIPAL, princ.getName());
    prop.put(Context.SECURITY_CREDENTIALS, ((weblogic.security.acl.User)princ).getCredential(princ));
    new InitialContext(prop);
    Make sure, that your realm implements the getCredential() method (this is not the case in WLS examples).
    regards,
    przemek
    sudarson schrieb:
    Realm based basic or form authentication, so that whenever user asks for anything
    under some directory(or context), login page/dialog box will be shown.
    Regards,
    Sudarson
    "Amar Pratap" <[email protected]> wrote:
    What kind of authentication ur using in the Servet/JSP?
    "sudarson" <[email protected]> wrote in message
    news:3c5e65a9$[email protected]..
    Hi All,
    If I use a realm to enter a certain web application, will the security context
    (whatever credentials the user provides) propagate through the session? And
    if I call an EJB from any of the servlets or JSPs, will the same security role be
    used to determine the authorization level?
    If yes, how should I create the context in that case? Or should I use
    new InitialContext() without an environment property hashtable?
    Any suggestion is welcome.
    TIA,
    Sudarson

  • OIM 10g data migration

    Hi,
    We have an existing OIM server with the below configuration
    OIM- 9.1.0.2 + BP11
    Weblogic server: 10.3.2.0
    OIM DB: 10g
    Target resources: AD, ACF2, Lotus Notes, Remedy, Novell- e directory
    Trusted resource: ODS
    We have to replicate this environment (with data) to an entirely new server (different host name and DB).
    We have already installed OIM on the new server with the same configuration as above.
    Can anyone help us with the proper sequence of migration?
    We have below entities in existing OIM
    Adapters, UDFs, Process Tasks, Resource Objects, Scheduled task, etc.
    Thanks,
    Garima

    Run the below sequence on day 1:
    1. Disable the access policy, if any; just remove the access policy from the role.
    2. Run trusted recon so that the users get created in OIM.
    3. Run the manager-mapping scheduled task if you have one, else ignore this step.
    4. Run target recon for all the target systems so that the existing accounts get reconciled and linked to the OIM users.
    5. Enable the access policy if it exists, i.e. attach the access policy to that role again.
    A better approach: don't import Lookup.ADGroupReconciliation.Lookup and Lookup.OrganizationReconciliation.Lookup; create them manually. Alternatively, you can drop all the lookup content on the dev machine and then export it as well.
    Hope you understand.
    --nayan

  • OIM Initial Data Load - Suggestion Please

    Hi,
    I have the following scenario :
    1. My client currently has 2 systems: 1. AD and 2. an HR application.
    2. The HR application has only employee information (first name, last name, employee number, etc.).
    3. AD has both employee and contractor information.
    4. The client wants the HR application to be the trusted source, but the IDM login ID of existing users should be the same as the AD sAMAccountName.
    I am using OIM 9.0 and the 9041 connector pack. What would be the best way to do the initial data loading in this case? Thanks in advance.

    Hi,
    Can you tell me how you relate an employee in HR to the corresponding record in AD? Then I will be in a better position to explain how you can do it.
    But even without this information, the following approach will solve your requirement:
    1. Do the trusted recon from AD.
    2. samAccountName will be mapped to the User ID field of the OIM profile.
    3. Do the trusted recon with HR. The matching key should be the answer to my question above.
    4. For the trusted recon with HR, remove the action "Create User" on the No Match Found event.
    Hope this will help.
    Regards
    Nitesh

  • OIM API Data Object Manager Functionality

    Hi all,
    I am working on creating a script for a client that will automate attaching handlers to data objects. In the design console this action can be performed on the Development Tools > Business Rule Definition > Data Object Manager screen. Does anyone know where in the API I can perform this operation?
    Regards,
    Luke

    After more investigation I have discovered that the DVT table in the oim schema stores the information about the event handlers that have been attached to a data object. I hope this helps shed more light on the situation. If anyone has any suggestions about where I can look for the API information please let me know. I have already looked through the java docs here:
    http://otndnld.oracle.co.jp/document/products/id_mgmt/idm_903/doc_cd/javadocs/operations/index.html

  • Need info on DDL and Data propagation

    Greetings Guys,
    I have a requirement for moving data and DDL from a lower environment to production. This would be done on some frequency, say weekly, to deploy the latest application code, and I would like to know the possible techniques and tools available in SQL Server. I am not sure I can set up replication for all the tables in the database, because if the database ever has to be restored from backup I suspect fixing the replication itself will become a big task. Currently we use MERGE statements for moving data between the environments and the Redgate SQL Compare API for moving DDL. Let me know if there are any other ways to do this.
    Environment:
    SQL SERVER 2008 R2
    WINDOWS 2008 R2
    With regards,
    Gopinath.

    You can also create an SSDT database project and publish the changes using it.
    See:
    http://www.techrepublic.com/blog/data-center/auto-deploy-and-version-your-sql-server-database-with-ssdt/
    http://blogs.msdn.com/b/ssdt/archive/2013/08/12/optimizing-scripts-for-faster-incremental-deployment.aspx
    http://schottsql.blogspot.in/2012/11/ssdt-publishing-your-project.html
    Visakh

  • OIM 9.1 User data set

    Hi,
    In release note of 9.1 it is mentioned that :
    Display of all OIM User attributes on the Step 3: Modify Connector Configuration page
    On the Step 3: Modify Connector Configuration page, the OIM - User data set now shows all the OIM User attributes. In the earlier release, the display of fields was restricted to the ones that were most commonly used.
    and
    Attributes of the ID field are editable
    On the Step 3: Modify Connector Configuration page, you can modify some of the attributes of the ID field. The ID field stores the value that uniquely identifies a user in Oracle Identity Manager and in the target system.
    Can anyone please guide me on how to get both of these? I am getting only a few fields of the user profile in the OIM - User data set and am also not able to modify the ID field.
    I am using OIM 9.1 on WebSphere Application Server 6.1.
    Thanks

    Unfortunately I do not have experience using the SPML generic connector. Have you read through all the documentation pertaining to the GTC?
    -Kevin

  • How to check that data was propagated to all nodes in cluster?

    Hi.
    We are using WebLogic 10.3.5 and Coherence 3.6. Both applications run in cluster mode and we are using a replicated cache as the Coherence topology. A NamedCache is used to store and retrieve data from the Coherence cluster. Now I have a task to measure the time that data propagation to all nodes takes. From my point of view, Coherence should raise some kind of event when each node in the cluster has been filled with the same data, or maybe there is a standard Coherence (WebLogic?) listener that provides such information.
    I would appreciate any help on how to solve this task.

    Jonathan.Knight wrote:
    Hi,
    If you are using a replicated cache then the time taken to replicate the data is the time taken to do a put. Coherence will not return from a put method call on a NamedCache until the data has reached all the nodes. That is why replicated caches are a bad idea for clusters with a lot of nodes where there are frequent updates as they are slow.
    JK
    Hi JK,
    actually, AFAIK, it is not 100% correct.
    From what I remember from an earlier discussion or email, replication in a replicated cache is synchronous to one other member (the lease owner), and asynchronous thereafter. The synchronous part of the protocol involves the mutating member and the entry lease owner (which may be the same). As I understand the lease owner orders the operations and resolves races between multiple mutators, and drives the asynchronous part of the replication to all other members.
    In short, total network cost is linear with nodes, but latency wise you do not need to wait until all updates actually took place on all other nodes (that would be a really sad scenario when some nodes are communicating slowly).
    Best regards,
    Robert
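    For rough measurement only, a minimal sketch that times a put on a NamedCache; per the caveat above, put() returning does not strictly guarantee that every node has already applied the update, and the cache name, key and value here are placeholders:
        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;

        public class PutTimer {
            public static void main(String[] args) {
                NamedCache cache = CacheFactory.getCache("my-replicated-cache"); // placeholder cache name
                long start = System.nanoTime();
                cache.put("key-1", "value-1"); // synchronous at least to the entry's lease owner
                long elapsedMs = (System.nanoTime() - start) / 1000000L;
                System.out.println("put() returned after " + elapsedMs + " ms");
                CacheFactory.shutdown();
            }
        }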

  • Data is not getting replicating to the destination db.

    I have set up Streams replication on 2 databases running Oracle 10.1.0.2 on Windows.
    The steps for setting up one-way replication between two Oracle databases using Streams at the schema level were followed from the Metalink doc.
    I entered a few records in the source DB, and the data is not getting replicated to the destination DB. Could you please guide me on how to analyse this problem and reach a solution?
    Configuration steps (as followed from the Metalink doc):
    ==================
    Set up ARCHIVELOG mode.
    Set up the Streams administrator.
    Set initialization parameters.
    Create a database link.
    Set up source and destination queues.
    Set up supplemental logging at the source database.
    Configure the capture process at the source database.
    Configure the propagation process.
    Create the destination table.
    Grant object privileges.
    Set the instantiation system change number (SCN).
    Configure the apply process at the destination database.
    Start the capture and apply processes.
    Section 2 : Create user and grant privileges on both Source and Target
    2.1 Create Streams Administrator :
    connect SYS/password as SYSDBA
    create user STRMADMIN identified by STRMADMIN;
    2.2 Grant the necessary privileges to the Streams Administrator :
    GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE,DBA to STRMADMIN;
    In 10g :
    GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE,DBA to STRMADMIN;
    execute DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE('STRMADMIN');
    2.3 Create streams queue :
    connect STRMADMIN/STRMADMIN
    BEGIN
    DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'STREAMS_QUEUE_TABLE',
    queue_name => 'STREAMS_QUEUE',
    queue_user => 'STRMADMIN');
    END;
    Section 3 : Steps to be carried out at the Destination Database PLUTO
    3.1 Add apply rules for the Schema at the destination database :
    BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name => 'SCOTT',
    streams_type => 'APPLY ',
    streams_name => 'STRMADMIN_APPLY',
    queue_name => 'STRMADMIN.STREAMS_QUEUE',
    include_dml => true,
    include_ddl => true,
    source_database => 'REP2');
    END;
    3.2 Specify an 'APPLY USER' at the destination database:
    This is the user who would apply all DML statements and DDL statements.
    The user specified in the APPLY_USER parameter must have the necessary
    privileges to perform DML and DDL changes on the apply objects.
    BEGIN
    DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name => 'STRMADMIN_APPLY',
    apply_user => 'SCOTT');
    END;
    3.3 Start the Apply process :
    DECLARE
    v_started number;
    BEGIN
    SELECT decode(status, 'ENABLED', 1, 0) INTO v_started
    FROM DBA_APPLY WHERE APPLY_NAME = 'STRMADMIN_APPLY';
    if (v_started = 0) then
    DBMS_APPLY_ADM.START_APPLY(apply_name => 'STRMADMIN_APPLY');
    end if;
    END;
    Section 4 :Steps to be carried out at the Source Database REP2
    4.1 Move LogMiner tables from SYSTEM tablespace:
    By default, all LogMiner tables are created in the SYSTEM tablespace.
    It is a good practice to create an alternate tablespace for the LogMiner
    tables.
    CREATE TABLESPACE LOGMNRTS DATAFILE 'logmnrts.dbf' SIZE 25M AUTOEXTEND ON
    MAXSIZE UNLIMITED;
    BEGIN
    DBMS_LOGMNR_D.SET_TABLESPACE('LOGMNRTS');
    END;
    4.2 Turn on supplemental logging for DEPT and EMPLOYEES table :
    connect SYS/password as SYSDBA
    ALTER TABLE scott.dept ADD SUPPLEMENTAL LOG GROUP dept_pk(deptno) ALWAYS;
    ALTER TABLE scott.EMPLOYEES ADD SUPPLEMENTAL LOG GROUP dep_pk(empno) ALWAYS;
    Note: If the number of tables are more the supplemental logging can be
    set at database level .
    4.3 Create a database link to the destination database :
    connect STRMADMIN/STRMADMIN
    CREATE DATABASE LINK PLUTO connect to
    STRMADMIN identified by STRMADMIN using 'PLUTO';
    Test the database link to be working properly by querying against the
    destination database.
    Eg : select * from global_name@PLUTO;
    4.4 Add capture rules for the schema SCOTT at the source database:
    BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name => 'SCOTT',
    streams_type => 'CAPTURE',
    streams_name => 'STREAM_CAPTURE',
    queue_name => 'STRMADMIN.STREAMS_QUEUE',
    include_dml => true,
    include_ddl => true,
    source_database => 'REP2');
    END;
    4.5 Add propagation rules for the schema SCOTT at the source database.
    This step will also create a propagation job to the destination database.
    BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
    schema_name => 'SCOTT',
    streams_name => 'STREAM_PROPAGATE',
    source_queue_name => 'STRMADMIN.STREAMS_QUEUE',
    destination_queue_name => 'STRMADMIN.STREAMS_QUEUE@PLUTO',
    include_dml => true,
    include_ddl => true,
    source_database => 'REP2');
    END;
    Section 5 : Export, import and instantiation of tables from
    Source to Destination Database
    5.1 If the objects are not present in the destination database, perform
    an export of the objects from the source database and import them
    into the destination database
    Export from the Source Database:
    Specify the OBJECT_CONSISTENT=Y clause on the export command.
    By doing this, an export is performed that is consistent for each
    individual object at a particular system change number (SCN).
    exp USERID=SYSTEM/manager@rep2 OWNER=SCOTT FILE=scott.dmp
    LOG=exportTables.log OBJECT_CONSISTENT=Y STATISTICS = NONE
    Import into the Destination Database:
    Specify STREAMS_INSTANTIATION=Y clause in the import command.
    By doing this, the streams metadata is updated with the appropriate
    information in the destination database corresponding to the SCN that
    is recorded in the export file.
    imp USERID=SYSTEM@pluto FULL=Y CONSTRAINTS=Y FILE=scott.dmp IGNORE=Y
    COMMIT=Y LOG=importTables.log STREAMS_INSTANTIATION=Y
    5.2 If the objects are already present in the destination database, there
    are two ways of instantiating the objects at the destination site.
    1. By means of Metadata-only export/import :
    Specify ROWS=N during Export
    Specify IGNORE=Y during Import along with above import parameters.
    2. By manually instantiating the objects
    Get the Instantiation SCN at the source database:
    connect STRMADMIN/STRMADMIN@source
    set serveroutput on
    DECLARE
    iscn NUMBER; -- Variable to hold instantiation SCN value
    BEGIN
    iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
    DBMS_OUTPUT.PUT_LINE ('Instantiation SCN is: ' || iscn);
    END;
    Instantiate the objects at the destination database with
    this SCN value. The SET_TABLE_INSTANTIATION_SCN procedure
    controls which LCRs for a table are to be applied by the
    apply process. If the commit SCN of an LCR from the source
    database is less than or equal to this instantiation SCN,
    then the apply process discards the LCR. Else, the apply
    process applies the LCR.
    connect STRMADMIN/STRMADMIN@destination
    BEGIN
    DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN(
    SOURCE_SCHEMA_NAME => 'SCOTT',
    source_database_name => 'REP2',
    instantiation_scn => &iscn );
    END;
    Enter value for iscn:
    <Provide the value of SCN that you got from the source database>
    Note:In 9i, you must instantiate each table individually.
    In 10g recursive=true parameter of DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN
    is used for instantiation...
    Section 6 : Start the Capture process
    begin
    DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'STREAM_CAPTURE');
    end;
    /

    Same problem, data not replicated.
    It is captured and propagated from the source, but not applied.
    Also, there are no apply errors in DBA_APPLY_ERROR. It looks like the problem is that the LCRs propagated from the source DB do not reach the target queue. Can I get any help on this?
    Queried results are as under:
    1. At source (capture process):
       Capture Process Number: CP01, Session ID: 16, Session Serial Number: 7, State: CAPTURING CHANGES, Total Redo Entries Scanned: 1010143, Total LCRs Enqueued: 72
    2. Data propagated from source:
       Total Time Executing in Seconds: 7, Total Events Propagated: 13, Total Bytes Propagated: 6731
    3. Apply at target (nothing is applied):
       Coordinator Process Name: A001, Session ID: 154, Session Serial Number: 33, State: APPLYING, Total Trans Received: 0, Total Trans Applied: 0, Total Apply Errors: 0
    4. At target (nothing in buffer):
       Queue Owner: STRMADMIN, Queue Name: STREAMS_QUEUE, Captured LCRs in Memory: 0, Spilled LCRs: 0, Total LCRs in Buffered Queue: 0

  • Japanese Data is not getting stored  if passes as Parameter.

    Hello Everyone,
    I have a setup of Red Hat Linux 7.1 (English, with Japanese support) running the Apache web server, Apache JServ and GNU-JSP, and another server running Japanese NT with Oracle 8.1.7 Japanese. I have a strange problem.
    I have created one JSP where I am using the charset EUC-JP, and for Apache JServ I am also using EUC-JP encoding. From this JSP page, if I insert a record with typed Japanese hard-coded in the SQL, it works smoothly; the value is stored in Oracle as-is.
    But if I create 2 different JSPs, e.g. DataEntry.jsp which calls InsertRecord.jsp, both with the same EUC-JP character set:
    I am using <% String mnippon_no = request.getParameter("nippon_no"); %>
    and when I try to display the value in the browser it shows junk characters, and it even stores junk characters.
    If I remove the character set from the called JSP, i.e. InsertRecord.jsp, it shows the exact Japanese value I passed, but when stored in the database it appears as junk characters.
    One important thing: using Perl, DBD and Oracle we have developed a number of applications, and these work really smoothly on the same server.
    In fact this is the first JSP program of my career, and with Japanese data at that, so please help me.
    Thanks in Advance
    Maruti Chavan


  • OIM(11.1.1.3.0) supports connector Database Applications Table(9.1.0.5.0)?

    Hi, has anyone been able to do a reconciliation from an Oracle database table (as a trusted source) to OIM, based on the following versions:
    Database: Oracle Database 11g Release 2 (11.2.0.1.0)
    Connector: Database Applications Table(9.1.0.5.0)
    OIM: 11.1.1.3.0
    I have similar issues as in the following post:
    Re: Creating GTC in OIM 11g (11.1.1.3.0)
    The basic issue is that we don't know how to map the reconciliation staging data set to the OIM user data set for such a reconciliation. The attributes listed in the OIM user data set are missing the following ones, which are supposed to be required:
    user type;
    employee type.
    Can anyone help?

    I made a little progress here. For the missing mandatory fields, once you are done with the mapping during the creation of the GTC, go to the Design Console to add the mandatory fields, such as user type, which are not available in the GTC creation process. Please see:
    => http://st-curriculum.oracle.com/obe/fmw/oim/10.1.4/oim/obe12_using_gtc_for_reconciliation/using_the_gtc.htm#t5
    Now I actually have exactly the same problem as this one:
    => Re: Creating GTC in OIM 11g (11.1.1.3.0)
    => Event Received
    => Data Validation Succeeded
    => No User Match Found
    => Creation Failed
    => Notes: ORA Error Code => ORA-01400: cannot insert NULL into () ORA Error Stack => ORA-06512: at "DEV_OIM.OIM_SP_RECONBLKUSERCRUD", line 722
    I am looking into this, will keep you posted.
    Ningfeng
