How to migrate LOVs to the target environment.

Hi,
I have around 200 LOVs with me.
How do I migrate them to the target environment?
Please let me know the migration process for LOVs.
Thanks in advance
Tusar

If you aren't familiar with the ADM, a simple way of migrating is using the EIM tables in combination with a database link between the environments, an Excel spreadsheet, or even an Access database.
Create an IFB file that will extract the specific records out of the S_LST_OF_VAL table. If you are using any hierarchy-based LOVs, be sure to have two separate exports; rows with TYPE = 'LOV_TYPE' are the LOV type definitions themselves, so they are exported first as the parent batch:
[Export LOV Parent]
TYPE = EXPORT
BATCH = 1001
TABLE = EIM_LST_OF_VAL
CLEAR INTERFACE TABLES = TRUE
EXPORT ALL ROWS = FALSE
EXPORT MATCHES = S_LST_OF_VAL, (TYPE = 'LOV_TYPE' AND LAST_UPD > '2008-01-01')
[Export LOV]
TYPE = EXPORT
BATCH = 1002
TABLE = EIM_LST_OF_VAL
CLEAR INTERFACE TABLES = TRUE
EXPORT ALL ROWS = FALSE
EXPORT MATCHES = S_LST_OF_VAL, (TYPE <> 'LOV_TYPE' AND LAST_UPD > '2008-01-01')
Using some transport mechanism (the easiest is a DB link), copy the LOVs from one environment to the other...
Pushing data:
INSERT INTO SIEBEL.EIM_LST_OF_VAL@DESTINATION
SELECT * FROM SIEBEL.EIM_LST_OF_VAL
Pulling data:
INSERT INTO SIEBEL.EIM_LST_OF_VAL
SELECT * FROM SIEBEL.EIM_LST_OF_VAL@SOURCE
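If the interface table might already contain rows from other jobs, a hedged refinement is to restrict the copy to the batches exported above. This sketch assumes the standard EIM batch column IF_ROW_BATCH_NUM and the batch numbers 1001/1002 from the IFB file:
INSERT INTO SIEBEL.EIM_LST_OF_VAL@DESTINATION
SELECT * FROM SIEBEL.EIM_LST_OF_VAL
WHERE IF_ROW_BATCH_NUM IN (1001, 1002);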
Then import the data:
[Import LOV Parent]
TYPE = IMPORT
BATCH = 1001
TABLE = EIM_LST_OF_VAL
ONLY BASE TABLES = S_LST_OF_VAL
[Import LOV]
TYPE = IMPORT
BATCH = 1002
TABLE = EIM_LST_OF_VAL
ONLY BASE TABLES = S_LST_OF_VAL
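After the import runs, a quick sanity check is to look at the row status in the interface table. A minimal sketch, assuming the standard EIM status column IF_ROW_STAT, where 'IMPORTED' is the value you want to see:
SELECT IF_ROW_BATCH_NUM, IF_ROW_STAT, COUNT(*)
FROM SIEBEL.EIM_LST_OF_VAL
GROUP BY IF_ROW_BATCH_NUM, IF_ROW_STAT;
Anything in a status other than IMPORTED is worth chasing in the EIM log.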
Personally I am not as familiar with the new ADM, so this is a quick and easy way of doing it. If you can't create a DB link due to security issues, export the data from the EIM table to an Access database or Excel. Good luck.

Similar Messages

  • How to update fields in the target table in correspondance with the source file values

    Environment: win7, SQL server 2008 R2
    Application: Microsoft Management SQL Studio 2008 R2, Business Intelligence 2008 - SSIS
    SSIS competency level: Novice
Problem: I have been trying to update some of the fields in the destination table (the Student table) in correspondence with the data set in the staging table and the SSN table. I was able to insert/load new data into the destination using a Lookup transformation with SSN as the driver (data mapping), but I couldn't work out how to update some of the fields in the Student table while keeping the original pn_id of both tables (the SSN and Student tables), because the pn_id already exists in the SSN table and the Student table.
There are other records also associated with the pn_id, so I am not allowed to update the pn_id in the destination tables. For example,
    SSN Table (pn_id,ssn)
    ('000616850',288258466)
    ('002160790',176268917)
Staging Table (ssn, id, pn_id, name, subject, grade, academic year, comments)
(288258466, 1001, '770616858', Sally Johnson, English, A, 2005, 'great student')
(176268917, 1002, '192160792', Will Smith, Math, C, 2014, 'no comments')
(444718562, 1003, '260518681', Mike Lira, Math, B, 2013, 'no comments')
Student Table (destination table) (id, pn_id, subject, academic year, grade, comments):
(1001, '000616850', '', '', NULL, '')
(1002, '002160790', '', '', NULL, '')
    Expected Results:
    My goal is to have student table updated as the following:
    Student Table
(1001, '000616850', 'English', 'A', 2005, 'great student')
(1002, '002160790', 'Math', 'C', 2014, 'no comments')
    please advise

Why can't you use a simple UPDATE command in an Execute SQL Task, as below?
    DROP TABLE SSN
    DROP TABLE STAGING
    DROP TABLE STUDENT
    CREATE TABLE SSN(pn_id VARCHAR(100),ssn BIGINT)
    INSERT INTO SSN VALUES('000616850',288258466)
    INSERT INTO SSN VALUES('002160790',176268917)
    CREATE TABLE Staging (ssn BIGINT, id INT, pn_id BIGINT, name VARCHAR(100), subject VARCHAR(100),grade VARCHAR(10), [academic year] INT, comments VARCHAR(100))
    INSERT INTO Staging VALUES(288258466, 1001, '770616858','Sally Johnson', 'English','A', 2005,'great student')
    INSERT INTO Staging VALUES(176268917, 1002, '192160792','Will Smith', 'Math','C', 2014,'no comments')
    INSERT INTO Staging VALUES(444718562, 1003, '260518681','Mike Lira', 'Math','B', 2013,'no comments')
CREATE TABLE Student(id INT, pn_id VARCHAR(100), subject VARCHAR(100), [academic year] INT, grade VARCHAR(10), comments VARCHAR(100))
    INSERT INTO Student VALUES(1001, '000616850', NULL,NULL,NULL ,NULL)
    INSERT INTO Student VALUES(1002, '002160790', NULL,NULL,NULL ,NULL)
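-- update through the alias B so the three-way join decides which Student rows are updated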
UPDATE B SET Subject = C.Subject, [academic year]=C.[academic year], grade=C.grade, comments=C.comments
    FROM SSN A INNER JOIN Student B
    ON A.pn_id=B.pn_id INNER JOIN Staging C
    ON A.ssn = C.ssn
    SELECT * FROM Student
    Regards, RSingh

  • How to delete rows in the target table using interface

    hi guys,
I have an interface with source src and target tgt; both have a company_code column. In the interface I need this: if a record with the company_code already exists in the target, we need to delete it and insert the new one from src, and if it is not available we just need to insert it.
Please tell me how to achieve this.
    Regards,
    sai.

    gatha wrote:
    For this do we need to apply CDC?
I am not clear on how to delete rows under the target. Can you please share the steps to be followed?
If you are able to track the deletes in your source data then you don't need CDC. If, however, you can't, then it might be an option.
I'll give you an example from what I'm working on currently.
We have an ODS, some 400+ tables. Some are needed 'real-time', so we are using CDC. Some are OK to be batch loaded overnight.
CDC captures the deletes no problem, so the standard knowledge modules, with a little tweaking for performance, are doing the job fine; it handles deletes.
The overnight batch process, however, cannot track a delete as it's physically gone by the time we run the scenarios, so we load all the inserts/updates using a last-modified date before we pull all the PKs from the source and delete them using a NOT EXISTS looking back at the collection (staging) table. We had to write our own KM for that.
All I'm saying to the OP is that whilst you have Insert/Update flags to set on the target datastore to influence the API code, there is nothing stopping you extending this logic with the UD flags if you wish and writing your own routines for what to do with the deletes. It all depends on how efficiently you can identify rows that have been deleted.
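For the OP's simple case, the end result the flow has to produce is just a delete of the matching keys followed by the insert. A minimal SQL sketch, assuming src and tgt have identical column layouts and company_code is the matching key:
DELETE FROM tgt
WHERE company_code IN (SELECT company_code FROM src);
INSERT INTO tgt
SELECT * FROM src;
In ODI you would normally get this behaviour by tweaking the IKM or the flags as described above rather than hand-coding it.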

  • How does Check Names in the Target Audience picker work?

Can anyone tell me how the Check Names button in the Target Audience picker works? We are having a problem with a specific audience in one of our web applications. I don't think the problem is the actual audience though:
The Target Audience is a global target audience named "XYZ". It is compiled and contains members. When an article is tagged with this audience all works fine as long as we use the Audience Picker dialog. But when using the Check Names button the audience is not saved. This happens when I edit properties on the article (not customized UI) and also when I edit the article in a customized UI. No ULS messages.
I have read that the Target Audience Picker searches for matches in AD Distribution Groups, SharePoint groups, and Global Audiences. When we write "XYZ" in the audience field and press the Check Names button, one suggestion is displayed, named "XYZ". When I write "XYZ N" and press the Check Names button, several suggestions are displayed: 4 "XYZ" and a couple of other suggestions "XYZ xx…" etc. We have no SharePoint groups starting with "XYZ", we have no AD Distribution Groups starting with "XYZ", and we have only one Global Audience starting with "XYZ". Where are all the "XYZ" suggestions coming from? Can anyone help, please?
    We are using SharePoint 2010.
    Best regards Heidi Lillebuen

    Hi Heidi,
    The target audience picker is similar to people picker, it searches the content of User Profile Service to find the matching users or groups.
    I did a test based on your description. In my testing, everything worked well.
    Please check whether there are some users whose display names are like ‘XYZ ***’.
    Please create a new global audience using a different name, compare the result.
    I hope this helps.
    Thanks,
    Wendy
    Wendy Li
    TechNet Community Support

  • Step by step procedure on how to duplicate subtree on the target side

I have a requirement where I have two types, 'A' and 'B'. On the target side I have 6 fields, out of which 2 have to repeat for type 'A' and the remaining 4 for type 'B'. I have to sort my records such that all type 'A' records come above and all type 'B' records come below that.
I had written a UDF for that initially and even used sort by key. I got to know that I can duplicate a subtree on the target side, and thus I can have one subtree for type 'A' and one for type 'B'.
If I get 10 records which can be either 'A' or 'B', randomly arranged, how should I go about Duplicate Subtree for that? Please explain with a scenario.

    Hi Jaya,
You just right-click the node which you want to duplicate, then you can choose the option Duplicate Subtree. Now you map it with your source element A twice, and similarly you duplicate the other type 4 times and map it with source element B.
    Your problem will be solved.
    Best Regards,
    vijay

  • How to sort columns in the target table

I have a simple mapping which I am trying to design. There's only one table on the source and one in the target. There are no filter conditions; the only thing is that I want the target table to be sorted.
    Literally, say
    Src is source table has 3 columns x,y,z
    Trg is dest table and has 3 columns a,b,c
    x--->a
    y---->b
    z---->c
    The SQL should be
    select x,y,z from src order by x,y.
I could do the mapping, but I could not get the ORDER BY generated.
    IKM used: IKM BIAPPS Oracle Incremental Update


  • How to gather stats on the target table

    Hi
    I am using OWB 10gR2.
    I have created a mapping with a single target table.
    I have checked the mapping configuration 'Analyze Table Statements'.
    I have set target table property 'Statistics Collection' to 'MONITORING'.
    My requirement is to gather stats on the target table, after the target table is loaded/updated.
    According to Oracle's OWB 10gR2 User Document (B28223-03, Page#. 24-5)
    Analyze Table Statements
    If you select this option, Warehouse Builder generates code for analyzing the target
    table after the target is loaded, if the resulting target table is double or half its original
    size.
My issue is that when my target table's size has not doubled or halved, the target table DOES NOT get analyzed.
I am looking for a way or setting in OWB 10gR2 to gather stats on my target table, no matter its size, after the target table is loaded/updated.
    Thanks for your help in advance...
    ~Salil

    Hi
Unfortunately we have to disable automatic stats gathering on the 10g database.
My requirement is to extract data from one database, load it into my TEMP tables, process it, and finally load it into my data warehouse tables.
So I need to make sure to analyze my TEMP tables after they are truncated, loaded, and subsequently updated, before I can process the data and load it into my data warehouse tables.
Also I need to truncate all TEMP tables after the load is completed to save space on my target database.
If we keep automatic stats gathering on for my target 10g database, then it might gather stats for those TEMP tables while they are empty.
Any ideas to overcome this issue are appreciated.
    Thanks
    Salil
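With automatic stats gathering disabled, one option is to call DBMS_STATS explicitly from a process flow step (or a post-mapping procedure) right after each TEMP table is loaded. A minimal sketch, with MY_TEMP standing in as a hypothetical TEMP table name:
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => USER,       -- schema owning the TEMP table
    tabname => 'MY_TEMP',  -- hypothetical table name
    cascade => TRUE);      -- gather index stats as well
END;
/
Because it runs after the load, the stats reflect the full tables; the next load cycle simply re-gathers them.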

  • How to migrate Oracle Report from one environment to other environment

    Hi all,
I registered a report and deployed it in the devr environment; now I want to migrate the report from devr to another environment.
I don't want to go through the same registering and deploying of the report in the other environment.
Please let me know how I should go ahead. I came across using FNDLOAD; can anyone please explain the process?
    Thanks,
    Vishal

    Sandeep,
    Yes, I think your command format is not correct.
    Try the one that Rod posted.
About the note on Metalink: it seems that it is under review.
Here are the basics of the note:
    1. Determine the owner of the workbook. Say UserA.
    2. Open an sqlplus session to the database.
    3. Run the following sql:
    SQL> set heading off
    SQL> set feedback off
    SQL> set echo off
    4. Now spool the result of the following sql to a file.
    SQL> spool c:\exp.bat
5. Run the sql statement.
NOTE: CHANGE DISCO_HOME
SQL> select '<Disco_Home>\discvr4\dis4adm /connect
EUL_owner_name/passwd@connect_string /export c:\'||rownum||'.eex /workbook "'||
doc_created_by||'.'||doc_name||'"'
from (
SELECT EUL4_DOCUMENTS.DOC_NAME, doc_created_by,
NVL(EUL4_EUL_USERS.EU_USERNAME, 'Document Not Shared') shared_with
FROM EUL4_ACCESS_PRIVS, EUL4_DOCUMENTS, EUL4_EUL_USERS
WHERE EUL4_DOCUMENTS.DOC_ID = EUL4_ACCESS_PRIVS.GD_DOC_ID(+)
AND EUL4_EUL_USERS.EU_ID(+) = EUL4_ACCESS_PRIVS.AP_EU_ID
)
where doc_created_by = 'UserA'
Here, Disco_Home is the location of the Discoverer 4 Home.
6. SQL> spool off
7. SQL> set feedback on
8. Now run the batch command file (exp.bat)
    Regards
    Roelie Viviers

  • How to migrate "Access Restrictions"  from One Environment to another Env

    Hi ,
Can anyone suggest something regarding the "Access Restrictions"? We need to move the "Access Restrictions" from one environment to another environment.
    Thanks & Regards
    Venkat.

Hi,
when using the Import Wizard while importing the Universe, please check the below option to migrate the Access Restrictions:
    "Keep Universe Overloads for imported Users and Groups"
    Regards,
    Vamsee

  • AppleScript: How to let AppleScript use the same environment as Bash shell does?

    Howdy!
    I can run AppleScript from Bash shell.
    I can also run Bash shell from AppleScript.
So my question is: when I run a Bash shell script from AppleScript, I am hit with a path problem.
    Suppose my AppleScript is under ~user/a.scpt. I want my AppleScript to know its current path (i.e. ~user/a.scpt).
    So a quick snippet test in my AppleScript shows that AppleScript always begins from the root directory:
    do shell script "pwd"
    So let's do a workaround AppleScript like this:
    tell application "Finder" to set currentDir to (target of front Finder window) as text
    do shell script "cd " & (quoted form of POSIX path of currentDir) & "; pwd"
This works. However, this approach poses a potential danger.
Suppose that when I run this AppleScript for a bash script, I inadvertently click another Finder window. That would result in execution of the bash script against that newly clicked Finder window, resulting in a big mess.
Is there a better way to let the AppleScript know its current path?

    The following is what your Applescript environment looks like, so plan accordingly.  If you want your Applescripts to be portable to other user environments, you should NOT depend on your environment, but rather embed all the environment knowledge you need into your script.
    Applescript "do shell script" environment
pwd:
/
/bin/ls -dlaeO@ .
drwxr-xr-x  36 root  wheel  - 1292 Feb 28 09:58 .
        /usr/bin/id -a
    uid=501(raharris)
    gid=20(staff)
    groups=20(staff),
    402(com.apple.sharepoint.group.1),
    12(everyone),
    61(localaccounts),
    79(_appserverusr),
    80(admin),
    81(_appserveradm),
    98(_lpadmin),
    403(com.apple.sharepoint.group.2),
    33(_appstore),
    100(_lpoperator),
    204(_developer),
    398(com.apple.access_screensharing),
    399(com.apple.access_ssh)
    $#
    0
    $0
    /path/to/your/shell/script
    printenv:      
    SHELL=/bin/bash
    TMPDIR=/var/folders/4t/t1djbq6j5pj951x44l9hwxtc0000gn/T/
    Apple_PubSub_Socket_Render=/tmp/launch-mSKKJo/Render
    USER=<username_running_the_Applescript>
    SSH_AUTH_SOCK=/tmp/launch-7D4zpF/Listeners
    __CF_USER_TEXT_ENCODING=0x1F5:0:0
    PATH=/usr/bin:/bin:/usr/sbin:/sbin
    __CHECKFIX1436934=1
    PWD=/
    HOME=/Users/<home_dir_of_running_user>
    SHLVL=2
    LOGNAME=<username_running_the_Applescript>
    DISPLAY=/tmp/launch-qKucki/org.macosforge.xquartz:0
    _=/usr/bin/printenv

• How to migrate: What is the best way to connect an old iMac to a new iMac

What is the best way to connect a 2006 iMac to a 2013 iMac? (Lion to Mountain Lion)
I have a TM backup of the older Mac. I understand using Setup Assistant is best. I cannot afford to lose any data.
Thanks.
    OGT

    By Firewire. However, you will need a Thunderbolt to Firewire adaptor from Apple to use on your new iMac.
    See Target Disk Mode.

  • T-code : CRMC_R3_ORG_GENERATE, How can I link to the target system??

    Hi, everyone.
First of all, thank you for reading my message.
We are facing a critical problem.
We want to download the customer master from the R/3 system to CRM (BBP600), but the problem is that there is no SALES AREA DATA! (We need the sales office value for some reason.)
Basically, CRM is linked to R/3 (the domestic system), and the customer data we want to download is in another R/3 (the foreign GmbH system).
We thought that the reason is that there is no Dist. Channel and Division in PPOMA_CRM,
so we executed the transaction CRMC_R3_ORG_GENERATE to download the sales area organization,
but the system we saw was not the one we hoped to link to.
How can we set the destination system we want to link to for T-code CRMC_R3_ORG_GENERATE?
If somebody knows the procedure to connect to the R/3 (Foreign GmbH) system, please help us.
    I really appreciate your help in advance.
    Thanks
    Best rgds,
    Hyo-ki

Thank you for your reply.
Before reading your advice, we deleted all site IDs and created a new site ID for the Foreign GmbH system.
After that, we can now connect to that system, and we can see the list of R3 sales area data.
But when we executed the 'Creation' button, the system showed us a red alert status in the bottom list screen of CRMC_R3_ORG_GENERATE.
We already have the dist. channel code list and division code list for R3 (domestic) in CRM,
and the other R3 (Foreign) has the same code lists for dist. channel and division.
For example,
R3 (domestic) has customer code 200341, and its sales area is 1000 / 20 / 10, and
R3 (Foreign) has the same customer under code 345201, and its sales area is 4100 / 20 / 10.
So I think the same dist. channel and division codes are what caused the system to show the red alert.
Am I correct?
Then, is there any good strategy to maintain customer master data (or any master data) in TWO R/3 systems with only one CRM,
using each system's dist. channel and division (the same codes)?

  • How to edit LOV of the status field in activity record type?

    Hi experts
I just want to edit the list of values of the Status field in the Activity record type. It is a read-only field; can anyone tell me if it is possible to edit it? It seems like it is referenced from a field called "EVENT_STATUS".
    Thanks,
    Tiger

    Tiger, at this time it is not possible to edit the values in this field.

• After all is working, how to find out if all the changes were applied to the target?

I finally got extract/data pump/replicat working.
I have a question though: how do I know if the target is in sync with the source DB?
Do I have to use GoldenGate Veridata? How do I install and configure Veridata?
    Thanks in advance.

Data validation is nontrivial in logical replication products, including GoldenGate. With a physical standby database, the data blocks are physically identical to the source database (and thus simple to validate). But if it's GoldenGate running SQL statements against the target, the physical data will be different. So data validation involves checking all the rows of any replicated table to make sure that all the data is indeed the same.
    Veridata is one way of doing this, but there are others, starting with simple SQL scripts that do count(distinct)s, or calculate hash values and compare them.
    A Google search for [oracle data comparison tool|https://www.google.com/search?q=data+comparison+too] shows a number of tools that do much the same thing.
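For example, here is a minimal sketch of the count/content comparison approach, assuming a database link TARGET_DB to the target and a hypothetical replicated table MY_TABLE:
SELECT COUNT(*) FROM MY_TABLE;
SELECT COUNT(*) FROM MY_TABLE@TARGET_DB;
-- rows on the source that are missing or different on the target:
SELECT * FROM MY_TABLE
MINUS
SELECT * FROM MY_TABLE@TARGET_DB;
Run the MINUS in both directions to catch extra rows on the target as well; for large tables, hash-based comparison or Veridata scales much better than pulling rows over a link.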
    Marc

  • How to migrate BPM 11g server to another BPM 11g server

    Hi All,
There is no change to the BPM version (BPMN Editor 11.1.1.3.0.6.84);
it is simply a physical migration: changed application server, DB, and server domain/IP.
Now we have hit problems when migrating the DB data. There are four schemas (DEV_MDS, DEV_SOAINFRA, DEV_ORABAM, DEV_ORASDPM) that cannot be used, including the installed config settings; they could not be used any more and could not be started.
Is there any guide on how to migrate just the user data, please?
    Thanks for any help!
    Regards,
    Katherine

Migration of an environment should really be done through backup tools that ensure a consistent point-in-time view of the data. We recently had a need to use import/export for an environment. I'll post those notes here to hopefully save some time if you have to use this approach. But I'll emphasize again that a backup/restore model is the better way to go.
    What we did
    We took an environment that had a small set of BPM processes and exported the data from the three core schemas.
    •     PS2_MDS
    •     PS2_SOAINFRA
    •     PS2_ORASDPM
    We then loaded that data into another domain, started the server, and interacted with the composites that had originally been deployed in another domain.
    What we didn’t do
    •     Export complex composites. We didn’t have access to things like AIA applications
    •     Export large amounts of data
    •     Test a broad set of features. We did use Workspace to create roles, view dashboard, and interact with task lists.
    What we learned
    We believe this confirmed the basic assumption that data can be exported from one BPM/SOA environment, then loaded and run in another environment. However the issues around user privileges, starting queues, etc, highlight the need to test the process in a broader context.
    Tools Used
    Data Pump
    Data Pump is an import/export tool provided by Oracle. A summary of features can be found at:
    http://wiki.oracle.com/page/Data+Pump+Export+%28expdp%29+and+Data+Pump+Import%28impdp%29
    SQL*Plus
    Used to run scripts
    Setup
    Create Users
    The data pump export will include privilege information in the export. And the data pump import will create users in the target database (assuming sufficient privileges for the user running the import). However we found that some roles related to AQ were not granted during the import. To resolve that issue we created scripts to pre-create the schema users and grant required privileges in the target environment.
    TODO:
    1.     Review the required privileges to see if there are data pump options that would resolve this
    2.     Verify that there are no additional privileges needed by applications in the target integration environment.
    Create Output Directory
    Data Pump needs to have access to the file system from the database.
    Make sure that the output directory is writable by the DB process.
    From SQL*Plus:
create directory dbexport as '/scratch/dbexport';
grant read, write on directory dbexport to system;
    Export Data For Selected Schemas
The Data Pump tool supports providing a list of schemas to export as part of a single operation. In this test case we exported the following core BPM schemas to a single file. The commands are in the do_exp.sh script (a hedged sketch of what they could look like follows the list).
    •     PS2_MDS
    •     PS2_SOAINFRA
    •     PS2_ORASDPM
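A hedged sketch of what the do_exp.sh command could look like; the directory and dump file names are taken from the import example below, and the exact parameters are an assumption, not the original script:
expdp system/welcome1 DIRECTORY=dbexport DUMPFILE=bpm.dmp \
SCHEMAS=PS2_MDS,PS2_SOAINFRA,PS2_ORASDPM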
    TODO:
    1.     Verify space required for the target database to ensure that there is sufficient space to export the data.
    Import Data into Target Schemas
    Data Pump supports mapping schema names and table spaces as part of the import specification. In this case we used the same Oracle instance, so the schemas were mapped to new names. The PS2 prefix was mapped to XYZ.
    Note that the OID transform property is required to avoid conflicts with existing object Identifiers.
    impdp system/welcome1 DIRECTORY=dbexport DUMPFILE=bpm.dmp transform=oid:n \
    REMAP_TABLESPACE=PS2_MDS:XYZ_MDS \
    REMAP_TABLESPACE=PS2_SOAINFRA:XYZ_SOAINFRA \
    REMAP_TABLESPACE=PS2_IAS_ORASDPM:XYZ_IAS_ORASDPM \
    REMAP_SCHEMA=PS2_MDS:XYZ_MDS \
    REMAP_SCHEMA=PS2_SOAINFRA:XYZ_SOAINFRA \
REMAP_SCHEMA=PS2_ORASDPM:XYZ_ORASDPM
    TODO:
    1.     Check the log for errors. The only errors we saw were related to trying to build stats on empty indexes.
    2.     Identify any other schemas being used
    3.     Verify if any other transform operators are needed
    Post Import
    Start Queues
SOA infrastructure makes use of queues in the DB server. While the queue structures were created, they were not started as part of the import. The start_queues.sql script starts the queues (a sketch of the kind of call it contains follows the TODO list below).
    TODO:
    1.     Verify if there are any other queues that need to be started/configured
    2.     Verify if there are any other Stream/Queuing configuration tasks needed
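A hedged sketch of the kind of call start_queues.sql would contain (DBMS_AQADM is the standard AQ administration package; the queue name here is only a representative placeholder, so check the actual queue names in your SOAINFRA schema):
BEGIN
  DBMS_AQADM.START_QUEUE(queue_name => 'XYZ_SOAINFRA.EDN_EVENT_QUEUE');
END;
/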
    Update Data Sources on WLS
    Assuming an existing installation of SOA, the datasources need to be updated to use the new schemas.
    Start Domain and Verify Logs
    It’s important to check the admin and managed server logs for errors related to composite instances and MDS operations.
    Verify Composite State
    Log into EM and ensure that the state of instances is consistent with the prior state.
    Verify Application Roles and Role Mappings
Since the composites are loaded from MDS, there will be no deployment to create application roles. If the environment is using LDAP or a DB, then the validation is to check the mappings through Workspace. If the old environment used a file store, then you'd have to create the role mappings through Workspace.
    Verify Workspace Operations on Existing Instances
    Verify that task list operations and dashboards are functional.
