Project related tables in work repository

Hi,
I am new to ODI.
There are two repositories - Master and Work.
I have read that ideally the master and work repositories should be in different schemas.
Where should we put our project-related tables? In the master/work repository, or should I create a separate schema for my project-related tables?
Thanks.

Hi,
The master repository contains a set of SNP_* metadata tables storing information about Security Manager, Topology Manager and versions, so whatever objects you create in those components are stored there.
The work repository contains a set of SNP_* metadata tables storing information about Designer and Operator, so the objects you create there, along with the execution logs, are stored in it.
So for project-related tables it is suggested to create a dedicated schema, so as not to mix them up with the repository schemas.
Makes sense?
Thanks,
Guru

Similar Messages

  • Error tables in work repository

    Hi All,
    I am interested in knowing how ODI stores the log information in the work repository. It is apparent that some tables are used, but it is not obvious which ones, or the relationships among them. Does anyone know about it? Any help is greatly appreciated.
    Edited by: Sankash on Feb 25, 2009 3:09 AM

    Hi Santos,
    Could you please post the query that returns the complete error message shown in Operator, like the one below?
    java.lang.Exception: Variable has no value: ODI_ACTUAL_JOURNAL_FINP1.odi_zip_base_directory
         at com.sunopsis.dwg.dbobj.SnpVarSess.getValue(SnpVarSess.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.bindSessVar(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.bindSessVar(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTaskPreTrt(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java)
         at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandSession.treatCommand(DwgCommandSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandBase.execute(DwgCommandBase.java)
         at com.sunopsis.dwg.cmd.e.i(e.java)
         at com.sunopsis.dwg.cmd.h.y(h.java)
         at com.sunopsis.dwg.cmd.e.run(e.java)
         at java.lang.Thread.run(Thread.java:595)
    I am not able to understand the relationships among the tables in the work repository.
    Hope you can help me.
    Edited by: user4810906 on 26-Feb-2009 06:16

  • Project related tables and link for services in Network Activity

    Dear Experts,
    My requirement is to find a table that stores the services of a network activity before the network is released, and a link from those services to another table (project-related, before the PR).
    Example:
    Network: XXX, Activity: XX, Service: 12345
    The Service field is updated in ESLL, but I was unable to link it to another table to get the project number. So please help me.
    Regards,
    Srikanth.

    Dear sushrut sheth,
    thank for the reply
    I looked in AFVC before posting the issue, but that table is updated only at the activity level, not for the individual service items.
    I need the table that stores the individual service line items.

  • Project Balance Table

    Hi experts!
    Which DB tables can I use to retrieve the project and WBS balances?
    I have to develop a report on project balances.
    Please help!
    Thank you all in advance!!!

    For PS, I suggest you use the standard reports for as long as they can be extended to satisfy your requirement.
    If your requirement is not satisfied that way, you can go for a development.
    Anyway, the tables you can look at are listed below.
    Please refer to this link:
    SAP Project System - A ready Reference ( Part 2 )
    PS tables:
    AFAB - Network relationships
    IMAK - Appropriation requests
    IMAV - Appropriation request variants
    IMPR - Capital investment program positions
    IMPU - Texts for capital investment program positions
    IMTP - Capital investment programs
    IMZO - CO object assignment
    PMCO - Cost structure of maintenance order
    PRHI - WBS hierarchy
    PROJ - Projects
    PRPS - Work breakdown structures
    RPSCO - Project info database
    These are the FI/CO tables which may be helpful to you in the PS context:
    BPEJ - Budget revision header
    BPEP - Line item period values
    BPGE - Overall/annual budget
    BPJA - Overall/annual budget
    COSP - Primary cost
    COSS - Secondary cost
    COEP - Actual cost
    COOI - Commitment
    COEJ - Planned cost
    Network: AFKO, AUFK
    Activity: AFVV, AFVC
    Activity element: AFVV, AFVC
    Milestone: MLST
    Budget: BPGE, BPJA
    Regards
    Nitin P.

  • What table stores Interface Filters in Work Repository

    Hi there
    I have a number of simple 1-to-1 interfaces (i.e. 1 source, 1 target), and each interface has its own filter between source and target.
    I would like to know how these filters are stored in the work repository database.
    I can see the filter text in the TXT field of SNP_TXT, but I would like to be able to extract the name of the source table, the name of the target table and the filter text in a single query.
    There seems to be a myriad of cross-referenced ID numbers across many tables to achieve this. It's all very confusing trying to follow the ID numbers to a source and target table.
    Is there any documentation that explains how the work repository tables are inter-referenced?
    Thanks in advance,
    N

    I have answered this in one of the previous threads:
    Re: trying to find join condition
    Hope that helps
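    The real SNP_* joins are covered in the linked thread. As a rough illustration of the ID-chasing pattern only (using hypothetical, simplified table and column names, not the actual work-repository schema), a query walking from an interface to its source, target and filter text might look like this SQLite sketch:

```python
# Illustrative only: toy tables mimicking the cross-referenced-ID pattern,
# NOT the real ODI SNP_* schema. Each artifact row carries an ID pointing
# at the next table; the filter text lives in a shared text table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE interface        (i_pop INTEGER PRIMARY KEY, target_table TEXT);
CREATE TABLE interface_src    (i_pop INTEGER, source_table TEXT);
CREATE TABLE interface_filter (i_pop INTEGER, i_txt INTEGER);
CREATE TABLE txt              (i_txt INTEGER PRIMARY KEY, txt TEXT);

INSERT INTO interface        VALUES (1, 'TGT_CUSTOMER');
INSERT INTO interface_src    VALUES (1, 'SRC_CUSTOMER');
INSERT INTO interface_filter VALUES (1, 100);
INSERT INTO txt              VALUES (100, 'SRC_CUSTOMER.STATUS = ''ACTIVE''');
""")

row = conn.execute("""
    SELECT s.source_table, i.target_table, t.txt
    FROM interface i
    JOIN interface_src    s ON s.i_pop = i.i_pop
    JOIN interface_filter f ON f.i_pop = i.i_pop
    JOIN txt              t ON t.i_txt = f.i_txt
""").fetchone()
print(row)  # ('SRC_CUSTOMER', 'TGT_CUSTOMER', "SRC_CUSTOMER.STATUS = 'ACTIVE'")
```

    The point is the shape of the joins: follow the ID column from each table to the next, and resolve the text ID last.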

  • Number of SNP tables created during master and work repository creation

    Hi All,
    I would like to know how many SNP tables are created during master and work repository creation in version 10.1.3.5; as I remember, it was around 147 in an earlier version.
    Any help will be appreciated.
    regards,
    Palash Chatterjee

    Hi Palash ,
    The ODI master repository has 58 SNP tables,
    and the work repository has 88 SNP tables,
    so in total 146 SNP tables are created during master and work repository creation in 10.1.3.5.
    Thanks,
    Sutirtha
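    As a trivial sanity check on the totals above:

```python
# Table counts as stated in the answer above.
master_tables = 58  # SNP_* tables in the master repository
work_tables = 88    # SNP_* tables in the work repository
total = master_tables + work_tables
print(total)  # 146
```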

  • Master and work repository  grouping

    Hi, what is the most efficient and industry-standard model for grouping master and work repositories?
    Is it good to group a master and related work repository for every technology, like DB2, SQL Server, etc.?
    Or
    is it better to group a master and related work repositories by project, where different technologies can be grouped in a single work repository, with multiple work repositories for different technologies?
    Please share your company standard or your views on the question above.
    Thanks
    Dev

    As far as I know, it is not recommended to create both the master and work repositories in one Oracle schema; it is better to create them as separate schemas.

  • SUNOPSIS to ODI migration : AS/400 error in work repository upgrade

    Hi,
    We're migrating SUNOPSIS v3.2.02.19 to ODI 10.1.3.5.0, with repositories on DB2 AS/400.
    We managed to migrate the master repository without any problem; now, when we try to migrate the work repository, we get the following error:
    java.sql.SQLException: [SQL0404] La valeur destinée à la colonne ou la variable TABLE_NAME est trop longue. (The value intended for the column or variable TABLE_NAME is too long.)
    at com.ibm.as400.access.JDError.throwSQLException(JDError.java:520)
    at com.ibm.as400.access.AS400JDBCStatement.commonExecute(AS400JDBCStatement.java:822)
    at com.ibm.as400.access.AS400JDBCPreparedStatement.executeUpdate(AS400JDBCPreparedStatement.java:1179)
    at com.sunopsis.sql.SnpsQuery.executeUpdate(SnpsQuery.java)
    at com.sunopsis.dwg.xml.DwgXmlSession.execute(DwgXmlSession.java)
    It seems that a column name, table name or variable name in the migration process is too long, but I have no idea which one. How can I solve this problem? Any solution or workaround?
    thanks for help !
    Edited by: julienpeltier on 19 oct. 2010 04:12
    Edited by: julienpeltier on 19 oct. 2010 04:13

    Check out the metalink document for upgrade
    Best Practices For Upgrading From Sunopsis To Oracle Data Integrator [ID 437223.1]
    Important things to be aware of if you are upgrading from Sunopsis V3:
    Sunopsis V3 uses JVM 1.3.1 while ODI 10.1.3 requires a minimum of JVM 1.4.2. This is not an issue if you are upgrading from Sunopsis v4.
    Sunopsis V3 allows mixed-case Context Codes, which are automatically converted to upper-case in V4 and ODI versions. To fix the Issue, manual updates of all Context Codes referenced in the Work Repository must be performed. These updates should be made on all Work Repositories related to a same Master Repository.
    In Sunopsis V3 DB2UDB Technology definition (\lib\scripts\xml\TECH_IBMDB2UDB.xml), The BLOB is defined with a size of 2G. The Master Repository creation fails at
    CREATE TABLE SNP_DATA ( I_DATA NUMERIC(10) NOT NULL, DATA_CONTENTS BLOB(2G)
    with error message :
    SQLCODE -355 SQLSTATE 42993 SQLERRMC: DATA_CONTENTS
    The error means that the column is too large for logging: 2G exceeds DB2's 1G limit for BLOB datatypes. This is fixed in Sunopsis 4.0.
    After upgrading Sunopsis v3 Master to ODI, the user will still run into this issue if he tries to export this upgraded Master and import it into another Master using mimport.bat.
    Solution: Modify the upgraded V3 Master Repository TECH_IBMDB2UDB.xml file and change the 2g to 100M. i.e.
    <Field name="DdlLongrawMask" type="java.lang.String"><![CDATA[BLOB(100M)]]></Field>
    Then try the import again.

  • Import work repository

    Hi all,
    Can anyone help me out on this:
    I want to migrate my work repository from one machine to another after we have done some additional work which includes new Interfaces, but also changed properties for certain columns. Using the "import workrepository" option I can choose between INSERT, UPDATE and INSERT_UPDATE synonym-mode.
    When I choose INSERT_UPDATE (which makes the most sense in my case, I think), it always fails with an error that an object is being updated by another user.
    However, when I choose INSERT mode, everything works fine: new objects are inserted and changes to existing objects are also applied.
    Am I missing something here? I cannot find anything about this in the docs...
    Thanks in advance,
    Steffen

    Hi Steffen,
    OK, it seems my environment was a little different; in my case there were two development work repositories.
    Anyway, I have already seen the same situation between a development work repository and an execution work repository, and the solution (at the project I am working on right now, with 25 ODI developers) is:
    Deployment steps:
    1) Development architecture:
    Every object gets a scenario; that means after you create a procedure, an interface, a variable... (any object!) you need to create its scenario. This way, the package that defines the process flow contains only scenarios.
    This architecture gives you a little more work at the first deployment, but allows better control of object (code) versions in complex processes, for instance when you need changes in a single step.
    2) Deployment between repositories:
    The first time for an object -> normal export from the development work repository (Designer) and import in "Insert-Update" mode into the execution work repository.
    After the first deployment:
    a) if you work with scenario versions, just deploy the new version (it is like a first time) and redeploy the "flow" package calling the new version at the related step (unless you are using "-SCEN_VERSION=-1" in the call);
    b) if you just "Regenerate" the scenario and keep the same version, then you need to delete the object in the execution work repository and import it again in "Insert-Update" mode.
    I hope this is helpful! Any doubt, please post a new message!

  • Workflow related tables. Urgent. Please help.

    Hi all,
    Could anybody please give me the list of workflow-related tables? I need to retrieve the following details:
               Type,
               Workflow Item ID,
               Workflow Item Unique ID, Text
               Creator, Created Date and Time,
               User created WI,
               Technical Status,
               Agent,
               Task ID, Task ID text,
               Element, Element Value.
    Please let me know at the earliest. Thanks in advance.

    <i>I don't think this is the right place for this, but I will give one reply.</i>
    <i>>What you think when a person is facing some issue will first read guidelines and then post.</i>
    Here is the <a href="https://wiki.sdn.sap.com/wiki/display/HOME/RulesofEngagement">link</a> to the guidelines. It is expected that somebody first reads the guidelines and then starts posting; it is the first line in the guidelines. So I expect it too. I don't see why I should expect it from someone who has at least 30 points, but not from someone who has never posted.
    About your analogy, it doesn't work because most of the time the functional consultant needs me to create the workflow, or he is part of the same project so we need each other. In this forum I spend my own precious time answering questions, and not for the points but because I want to contribute and help others. But I expect them to have tried to 'help' themselves first.
    And as long as there is no moderator for this forum, I will try to point people at the guidelines, so this forum does not end up generating a lot of 'noise' and driving off those who can maybe help the most (to name two: Kjetil Kilhavn and Mike Pokraka). I know they are thinking about skipping this forum because of the noise.
    <i>> Try to be a bit pragmatic. Will you do this to your peers or your junior.</i>
    Depending on the situation I will do this to my peers and juniors, mostly because they will learn more from finding things out by themselves than from pre-cooked answers. And learning is one of the main objectives here.
    And my remark about the points is because you keep asking for them.
    I hope this clarifies a little why I post answers like the one I gave to this somewhat lazy question (not the first one today).
    Martin

  • Error while creating a variable in a new work repository -- integrity constraint

    Hi,
    I have created a new master and work repository with new internal IDs.
    I am able to import projects and variables.
    While creating a new variable I am getting the error below:
    java.sql.BatchUpdateException: ORA-02291: integrity constraint (SNPW.FK_TXT) violated - parent key not found
    Has anyone gotten this error before?
    Is there any step I missed? Any help is appreciated.

    Hi,
    The table SNP_TXT in your work repository has a foreign key reference to the table SNP_ORIG_TXT on the column I_TXT_ORIG.
    When you insert a new variable for the project, a row is inserted into SNP_TXT, and before the insertion its I_TXT_ORIG value is checked against the table SNP_ORIG_TXT.
    Please run a select on these tables in your old repository and your new repository,
    and let me know.
    Reshma
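    The failure mode described above can be reproduced in miniature. This is an illustrative sketch only, using SQLite stand-ins that mirror the parent/child pair named in the error, not the real ODI schema:

```python
# Illustrative sketch (NOT the real ODI schema): snp_orig_txt acts as the
# parent table; snp_txt.i_txt_orig must reference an existing parent key,
# otherwise the insert fails with a "parent key not found" style error.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
conn.execute("CREATE TABLE snp_orig_txt (i_txt_orig INTEGER PRIMARY KEY)")
conn.execute("""CREATE TABLE snp_txt (
                  i_txt INTEGER PRIMARY KEY,
                  i_txt_orig INTEGER REFERENCES snp_orig_txt(i_txt_orig),
                  txt TEXT)""")

conn.execute("INSERT INTO snp_orig_txt VALUES (1)")
conn.execute("INSERT INTO snp_txt VALUES (10, 1, 'ok')")  # parent exists: fine

try:
    # Parent key 99 does not exist: same class of failure as FK_TXT above.
    conn.execute("INSERT INTO snp_txt VALUES (11, 99, 'boom')")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

    Comparing the I_TXT_ORIG values between the old and new repositories, as Reshma suggests, is the direct way to find which parent rows went missing.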

  • Insert/Update related tables

    Hello,
    I am attempting to take XML data - and using a transform, map it to a DB Adapter "merge" (insert if new, update if not) on two related tables. I have set up my db tables as follows...
    Table A
    ID (unique, primary key)
    param1
    param2
    Table B
    ID (foreign key related to Table A - ID)
    paramA
    There is a "TableA 1:M Table B" relationship set up in the DB as well as in the DB adapter TopLink project.
    If I do an initial "merge" (insert), everything works fine - and I get my new record in Table A - and several new related records inserted into Table B (way cool!).
    However, if I then try to repeat the merge after modifying something in the incoming data that is associated with Table B data, Table B records associated with that ID all get set to the last instance of Table B data in the input. I know that's confusing - here's an example...
    Initial input:
    <CAR>
    <ID>7</ID>
    <param1>Chevrolet</param1>
    <param2>Corvette</param2>
    <paramA>T-top</paramA>
    <paramA>Mag Wheels</paramA>
    <paramA>Hood Scoop</paramA>
    </CAR>
    Resulting Table Data:
    Table A:
    ID param1 param2
    7 Chevrolet Corvette
    Table B:
    ID paramA
    7 T-top
    7 Mag Wheels
    7 Hood Scoop
    Second run with the following input:
    <CAR>
    <ID>7</ID>
    <param1>Chevrolet</param1>
    <param2>Corvette</param2>
    <paramA>T-top</paramA>
    <paramA>Mag Wheels</paramA>
    <paramA>Front Hood Scoop</paramA>
    </CAR>
    Resulting data in DB...
    Resulting Table Data:
    Table A:
    ID param1 param2
    7 Chevrolet Corvette
    Table B:
    ID paramA
    7 Front Hood Scoop
    7 Front Hood Scoop
    7 Front Hood Scoop
    I have verified that my XSL transform appears to be working correctly.
    Am I using the db adapter "merge" function for something it's not designed to handle (i.e. related tables)? Do I have to code separate update adapter instances to update my related tables individually? I hope not - I was hoping TopLink would just take care of this for me. Maybe I'm too optimistic... ;-)
    Thanks for any help!
    Lon

    Never mind. I'm an idiot.
    Obviously, Table B needs another column to form a unique key, so that an update can change just that row. Sorry for wasting your time. :(
    Lon
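    Lon's fix can be sketched in miniature: give Table B a key that identifies each row (here a hypothetical line_no column alongside the ID), so an upsert can target a single row instead of rewriting every row that shares the foreign key. Table and column names mirror the example above, not any real adapter-generated schema:

```python
# Illustrative sketch of the composite-key fix, using SQLite's upsert.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table_a (id INTEGER PRIMARY KEY, param1 TEXT, param2 TEXT)")
conn.execute("""CREATE TABLE table_b (
                  id INTEGER,          -- FK to table_a
                  line_no INTEGER,     -- added column: makes each row addressable
                  parama TEXT,
                  PRIMARY KEY (id, line_no))""")

conn.execute("INSERT INTO table_a VALUES (7, 'Chevrolet', 'Corvette')")
rows = [(7, 1, 'T-top'), (7, 2, 'Mag Wheels'), (7, 3, 'Hood Scoop')]
conn.executemany("INSERT INTO table_b VALUES (?, ?, ?)", rows)

# Second run: only line 3 changed. The composite key lets the upsert
# update that single row rather than all rows with id = 7.
conn.execute("""INSERT INTO table_b VALUES (7, 3, 'Front Hood Scoop')
                ON CONFLICT(id, line_no) DO UPDATE SET parama = excluded.parama""")
print(conn.execute("SELECT parama FROM table_b ORDER BY line_no").fetchall())
# [('T-top',), ('Mag Wheels',), ('Front Hood Scoop',)]
```

    With only the ID as the identifying field, every row with id = 7 matches the merge, which is exactly the overwrite behaviour described above.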

  • How can I move the ODI Work Repository from one server to another server?

    How can I move the ODI Work Repository from one server to another server?

    Hi,
    If you would like to move your source models, target models and project contents from one work repository to another,
    i.e. from a Dev server to a Prod server:
    1. First, replicate the master repository connections manually, with the same naming conventions.
    2. Go to the Dev server work repository -> File tab -> click on "Export work repository" (save it in a folder).
    3. After exporting, you can view the XML files in the folder.
    4. Now open the Prod server and make sure you have already replicated the master repository details.
    5. Right-click on the model and import the source model in synonym mode insert_update (select the source model from the folder where your XML file is located).
    6. Similarly, import the target models, then the project.
    Now check. It should work.
    Thank you.

  • How to define join in physical layer between cube and relational table

    Hi
    I have aggregated data in an Essbase cube. I want to supplement the information in the cube with data from a relational source.
    I read the article http://community.altiusconsulting.com/blogs/altiustechblog/archive/2008/10/24/are-essbase-and-oracle-bi-enterprise-edition-obiee-a-match-made-in-heaven.aspx which describes how to do it.
    From this article I gather that I have to define a complex join in the physical layer between the cube imported from Essbase and my relational table.
    But when I use the Join Manager I am only able to define joins between tables from the relational source, not with the imported cube.
    In my case I am trying to join the risk dimension in the cube, based on risk_type_code (a Gen3 member), with risk_type_code in the relational table dt_risk_type.
    How can I create this join?
    Regards
    Dhwaj

    Hi
    This has worked: the BI server has joined the member from the Oracle database to the cube. So now, for a risk type id defined in the cube, I can view the risk type code and risk type name from the relational DB.
    But if I then want to find the aggregated risk amount for a risk type id, it brings back nothing. If I remove the join in the logical model, I get correct values. Is there a way to combine the physical cube with the relational model and still get the aggregated values from the cube?
    I have changed the risk amount column to sum in place of aggr_external, both in the logical and the physical model.
    Regards,
    Dhwaj

  • How to schedule two jobs from two different work repository at a time?

    Hi All,
    I have a scenario where I want to schedule two jobs at a time from two work repositories.
    Explanation:
    Master Repository-A
    Work Rep-B
    Work Rep-C
    Now I need to schedule two scenarios: one from work rep B and the other from work rep C.
    As we know, the odiparams batch file contains the connection details, and at any one time it can hold only one work repository name.
    Odiparams data:
    rem Repository Connection Information
    rem
    set ODI_SECU_DRIVER=oracle.jdbc.driver.OracleDriver
    set ODI_SECU_URL=jdbc:oracle:thin:@10.10.10.10:1521:orcl
    set ODI_SECU_USER=odi_snpm
    set ODI_SECU_ENCODED_PASS=aYyHZZCrzk5SjQ9rPOIUHp
    set ODI_SECU_WORK_REP=*ODI_LOCALDEV_WRKREP*
    set ODI_USER=SUPERVISOR
    set ODI_ENCODED_PASS=LELKIELGLJMDLKMGHEHJDBGBGFDGGH
    The scheduler agent will pick this information up from the odiparams file and update the schedule.
    So if I want to schedule two jobs, how is that possible?
    I tried everything I could think of but didn't find a proper solution.
    Edited by: user10765509 on Jul 21, 2010 4:58 AM

    You can do it in the following way:
    1. Copy/paste the original odiparams.bat file.
    2. Give it a name, say odiparams_a.bat.
    3. Specify the work repository A related information in odiparams_a.bat.
    4. Make another copy of the odiparams.bat file.
    5. Give it a name, say odiparams_b.bat.
    6. Specify the work repository B related information in odiparams_b.bat.
    7. Now make two copies of agentscheduler.bat, named agentscheduler_a.bat and agentscheduler_b.bat.
    8. Edit agentscheduler_a.bat and change
    call "%ODI_HOME%\bin\odiparams.bat"
    to
    call "%ODI_HOME%\bin\odiparams_a.bat"
    9. Edit agentscheduler_b.bat and change
    call "%ODI_HOME%\bin\odiparams.bat"
    to
    call "%ODI_HOME%\bin\odiparams_b.bat"
    10. Now start two scheduler agents by calling agentscheduler_a.bat and agentscheduler_b.bat.
    Thanks,
    Sutirtha
    PS: Take a backup of each and every file getting modified.
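    Steps 1 to 6 above are mechanical, so they can be scripted. A hedged sketch (the file names and the ODI_SECU_WORK_REP variable come from the thread; the paths and repository names here are illustrative):

```python
# Sketch: copy an odiparams.bat file and point the copy at a different
# work repository by rewriting the ODI_SECU_WORK_REP line. Everything
# else in the file is preserved as-is.
from pathlib import Path

def make_odiparams_copy(src: Path, dst: Path, work_rep: str) -> None:
    """Copy an odiparams file, pointing the copy at a different work repo."""
    lines = []
    for line in src.read_text().splitlines():
        if line.strip().lower().startswith("set odi_secu_work_rep="):
            line = f"set ODI_SECU_WORK_REP={work_rep}"
        lines.append(line)
    dst.write_text("\n".join(lines) + "\n")

# Example (assumes a local odiparams.bat; names are illustrative):
# make_odiparams_copy(Path("odiparams.bat"), Path("odiparams_a.bat"), "WORKREP_A")
# make_odiparams_copy(Path("odiparams.bat"), Path("odiparams_b.bat"), "WORKREP_B")
```

    The agentscheduler.bat copies (steps 7 to 9) would still be edited by hand, since only one line changes in each.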
