A multiple-sources-to-multiple-targets integration interface

I need to populate multiple target tables based on multiple source tables. Is there a way to do it using ODI?
Many thanks,
Irina

Tables on the target side are related through a foreign key. So I need to populate target table 1 with interface 1, then populate target table 2 with interface 2, using both source tables and the newly populated table 1, right?
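Yes, that load order is the crux of it. As a rough sketch of what the two interfaces effectively do (Python with sqlite3 standing in for the target schema; all table and column names here are invented for illustration):

```python
import sqlite3

# Sketch of the two-interface load order: the parent target must be
# populated before the child target that references it via FK.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE target1 (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""CREATE TABLE target2 (
    id INTEGER PRIMARY KEY,
    t1_id INTEGER NOT NULL REFERENCES target1(id),
    detail TEXT)""")

# "Interface 1": load the parent table from the source
conn.executemany("INSERT INTO target1 VALUES (?, ?)",
                 [(1, "alpha"), (2, "beta")])

# "Interface 2": load the child table, resolving the foreign key
# against the freshly populated target1
conn.executemany("INSERT INTO target2 VALUES (?, ?, ?)",
                 [(10, 1, "first"), (11, 2, "second")])

rows = conn.execute(
    "SELECT t2.detail, t1.name FROM target2 t2 "
    "JOIN target1 t1 ON t1.id = t2.t1_id ORDER BY t2.id").fetchall()
print(rows)  # [('first', 'alpha'), ('second', 'beta')]
```

The only hard requirement is that interface 1 has committed before interface 2 runs, which is exactly what sequencing the two interfaces in an ODI package gives you.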

Similar Messages

  • Multiple Target Data Sources

    JDeveloper v11.1.1.2
    Is it possible to set multiple Target Data Source iterators in the edit tree binding dialog?

    Hi Ananda
    Thank You very much for your reply!
    B) Yes, the same data is required in both applications, but not in exactly the same format or structure. E.g. Siebel is the Customer Hub for the client. All the customer data has to be migrated to multiple applications (say Oracle EBS & Oracle BRM) after a verification process, which happens at the end of the day. What I require is that the ODI interface should pull data from Siebel once & upload it to both Oracle EBS & Oracle BRM simultaneously, as per the mapping rules defined for each. Basically I wanted to avoid hitting the Siebel application twice for pulling customer data by creating two separate interfaces. Is it possible using ODI?
    C) Please take the same customer scenario as in B. The customer is inserted into Oracle EBS & Oracle BRM using two different interfaces executed sequentially in a package. I want to maintain atomicity, i.e. the customer is created either in both applications or in neither. If that particular customer fails in the 1st interface, it should not be tried in the 2nd interface. Also, if it fails in the 2nd interface, the 1st interface should be rolled back. Can this be achieved in ODI?
    Hope, the above would clear my query.
    Thank You Again
    Priyadarshi
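The all-or-nothing behavior asked for in (C) is, in database terms, a single transaction spanning both inserts. A minimal sketch of that behavior, with sqlite3 standing in for the two applications' tables (table names invented; real EBS/BRM loads would need a distributed transaction or compensating logic in the ODI package):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ebs_customer (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE brm_customer (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")

def load_customer(cid, name):
    """Insert the customer into both targets, or into neither."""
    try:
        with conn:  # one transaction: commits on success, rolls back on error
            conn.execute("INSERT INTO ebs_customer VALUES (?, ?)", (cid, name))
            conn.execute("INSERT INTO brm_customer VALUES (?, ?)", (cid, name))
        return True
    except sqlite3.Error:
        return False

print(load_customer(1, "ACME"))  # True: customer exists in both tables
print(load_customer(2, None))    # False: the 2nd insert violates NOT NULL,
                                 # so the 1st insert is rolled back as well
print(conn.execute("SELECT COUNT(*) FROM ebs_customer").fetchone()[0])  # 1
```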

  • Multiple Target Files as the number of times Item in source node

    Hi all
    I am new to XI. My scenario is file to file, and my data type structures for source and target are as follows.
    Data type for source:
    Source
         Header      1:unbounded
             Org     1:unbounded
    In the declaration of the target data type, the occurrence of all child nodes is 1:unbounded. I have used it in the Message type, but in the message mapping the occurrence of my target message type shows as 1:1.
    My objective is to replicate the entire Target as many times as Item occurs in the source, i.e. for multiple items in the source I want multiple target files. For this I have mapped the Item node of the source to Target (the parent node). But in the mapping test it only displays one Target structure for multiple nodes in the source. Please help me in solving this issue.

    Hi Satish,
    Use multi-mappings:
    When you create the message mapping, change the occurrence of the target from 1 to unbounded. This will allow you to create multiple target structures.
    Then map them as per your need, and you will see multiple outputs in the test.
    You just have to pay close attention to the context; for that, go through the mapping documents.
    Search for related documents on SDN and go through them.
    Regards,
    Shri
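Outside XI, the effect of such a 1:unbounded multi-mapping is simply one output document per source Item. A small Python sketch of that split (element names taken loosely from the post; this illustrates the idea, not XI itself):

```python
import xml.etree.ElementTree as ET

source = ET.fromstring(
    "<Source><Header>"
    "<Item><itemno>10</itemno></Item>"
    "<Item><itemno>20</itemno></Item>"
    "</Header></Source>")

# One Target document per Item occurrence, as a 1:unbounded
# multi-mapping produces one message (and hence one file) each.
targets = []
for item in source.iter("Item"):
    target = ET.Element("Target")
    ET.SubElement(target, "itemno").text = item.findtext("itemno")
    targets.append(ET.tostring(target, encoding="unicode"))

print(len(targets))  # 2
print(targets[0])    # <Target><itemno>10</itemno></Target>
```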

  • Multiple Target files as the item in source file

    Hi all,
    I am new to XI. My scenario is file to file, and my data type structures for source and target are as follows.
    Data type for source:
    Source
         Header      1:unbounded
             org       1:unbounded
             order     1:unbounded
         Item        1:unbounded
             itemno    1:unbounded
             matno     1:unbounded
    Data type for target:
    Target
         org       1:unbounded
         order     1:unbounded
         itemno    1:unbounded
         matno     1:unbounded
    In the declaration of the target data type, the occurrence of all child nodes is 1:unbounded. I have used it in the Message type, but in the message mapping the occurrence of my target message type shows as 1:1.
    My objective is to replicate the entire Target as many times as Item occurs in the source, i.e. for multiple items in the source I want multiple target files. For this I have mapped the Item node of the source to Target (the parent node). But in the mapping test it only displays one Target structure for multiple nodes in the source. Please help me in solving this issue.
    Full Points will be awarded
    Thanks & Regards
    Satish.

    Hi,
    If you want multiple targets, you need to use UseOneAsMany.
    Check the link below:
    http://help.sap.com/saphelp_nw70/helpdata/en/38/85b142fa26c811e10000000a1550b0/content.htm
    Thanks,
    RamuV

  • Transform data from multiple sources to single target in bpel 10.13.1.0

    Hi,
    I need to transform data from multiple sources to a single target schema.
    Can anyone please give me some idea?
    Regards
    janardhan

    We went another way to merge multiple sources of information into one target.
    First we created a wrapper variable with the structure of the target variable that we want to use in the transformation.
    With an assign element we filled the wrapper variable with information from the input variable of the BPEL process plus some additional information.
    Then, in the transformation, we used the wrapper variable as the source and the target variable as the target.
    We have used this approach in several BPEL processes and it works for us.
    Within Oracle SOA Suite 11g, there is also the mediator component, which provides simple routing
    and transformation functionality within a composite (SCA).
    That's all, I hope that helps.
    Michael

  • Single source to create multiple target nodes

    Hi Guys,
    I need to create multiple target nodes, one for each occurrence of the source node. How should I achieve it?
    Source node (1..999999) to Target node (1..1)
    please suggest.
    Regards
    Swapnil

    Hi Nutan,
    Sorry formatting got messed up so posting again.
    Sorry for the confusion. The target structure is 0..unbounded.
    Source structure ................................. Target structure
    Message 1 ........................................ Message 1
    ZHRMD_A07 (1..1) ................................. MT_EMPLOYEE (0..unbounded)
         E1PLOG1 (1..unbounded) ..................... Field1
    I need to create MT_EMPLOYEE multiple times depending on the occurrences of E1PLOG1.
    Regards
    Swapnil
    Edited by: Swapnil Bhalerao on Mar 3, 2010 12:41 PM
    Edited by: Swapnil Bhalerao on Mar 3, 2010 12:47 PM

  • Transforming multiple occurence source element to target

    hi,
    I need help in mapping multiple-occurrence source elements in a source schema to a target.
    I am unable to add a for-each on the source.
    I need to map between a source schema and a database schema.
    The source schema has elements with multiple occurrences, so how do I insert records? Looping the entire transformation is an overhead for the database.
    Can many-to-one mapping be done in BPEL?
    Please help.
    Edited by: 870953 on Jul 7, 2011 4:27 AM

    Hi,
    this is the forum for problems with Oracle SQL Developer Data Modeler. You need to find the right forum for your tool.
    Philip

  • Multiple target messages

    Hello,
    for our business process, we need to map a single source message to multiple target messages (of different message types).
    The (ABAP)mapping takes care of this mapping, so I put a transformation step after my receiver step, that invokes this mapping.
    What do I have to define after the transformation step: do I use a fork or a send-for-each-block or ... ? Since we're on SP15, can we use the "extended" option in the interface determination to facilitate our process ?
    Thanks for your answers !
    Kind regards,
    Frederik

    Hi,
    What is the SP you are using? If it is XI 3.0 with an SP lower than 14, then it is not possible; you need to use an abstract interface.
    And why do you need multi-mapping in the first place? You can do a simple mapping, and in the receiver determination you can add more business services/systems. For each service/system you will have a separate inbound interface and a separate interface mapping.
    Hope this solves your problem.
    Thanks,
    Prakash

  • SAP XI 3.0 Same source for different target in std Value mapping function

    Hi,
    We have replicated 4 value mapping entries from R3 to XI. All four have the same Context, Agency, Schema and Value on the source side, but on the target side each of the 4 has the same Context and Agency yet a different Schema and Value.
    To illustrate:
    Source: Context Agency Schema Value   |   Target: Context Agency Schema Value
            CS1     AS1    SS1    1       |           CT1     AT1    ST1    A
            CS1     AS1    SS1    1       |           CT1     AT1    ST2    A
            CS1     AS1    SS1    1       |           CT1     AT1    ST3    B
    This value mapping is not working, and we always get the source value as the result.
    We are wondering whether the reason is that we use the same source for different targets, but we are not 100% sure of it.
    When I read the documentation on value mapping, and when we use the value mapping standard function in graphical mapping, we pass the context, agency and schema of the source and target respectively, plus the source value, to get the target value; this combination is always unique in our case, as seen in the example above.
    Has anyone faced this kind of an issue, if yes I would appreciate if anyone could help us resolve this problem.
    Thanks in advance.
    regards,
    Advait

    Hi Advait,
    From the below, what I understand is that you are not able to do value mapping for the following:
    1     A
    2     A
    3     B
    Value mapping allows one-to-one mapping only, so set it up as below:
    1     1*A
    2     2*A
    3     3*B
    Then, in the graphical mapping in the Integration Repository, map it as follows:
    source field --> VALUEMAPPING --> UDF --> target field
    In the UDF, suppress the 1*, 2*, 3* prefix. Create a UDF with one input field:
    // strip the 2-character prefix that made the mapped values unique
    return input.substring(2);
    The advantage of using 1*, 2*, 3* etc. is that you have the option to use value mapping for 100 values, which I think is not normally the case for any interface.
    If you have the same source value, you can do the same thing for it.
    Hope this helps you resolve your query.
    Thanks & Regards
    Prabhat
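The prefix trick above is easy to check outside XI. A minimal Python sketch of the two steps (the VALUEMAPPING lookup, then the UDF-style prefix strip; the dictionary stands in for the value mapping table):

```python
# Make the mapped values unique by prefixing them ("1*A", "2*A", "3*B"),
# then strip the prefix in a final step, mirroring input.substring(2).
value_map = {"1": "1*A", "2": "2*A", "3": "3*B"}

def lookup(source_value):
    mapped = value_map[source_value]  # the VALUEMAPPING step
    return mapped[2:]                 # the UDF: drop the 2-char prefix

print([lookup(v) for v in ("1", "2", "3")])  # ['A', 'A', 'B']
```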

  • Loading Data to Multiple Targets in BI

    Hi Experts,
    I have one doubt regarding data loads to multiple targets in BI. To explain clearly: the scenario is to load data coming from one source system to multiple data targets in BI. In BW, we would just create multiple update rules from the InfoSource to the different targets; in the InfoPackage maintenance screen, under the Data Targets tab, we would select the targets we want to load, run the InfoPackage, and it would update the data in all the targets selected in the InfoPackage.
    But how do we implement this scenario in BI? Here we need to create individual DTPs, and there is no option to load the data simultaneously to multiple targets.
    So, is there any solution to implement this scenario in BI? Please explain.
    Thanks in advance
    Regards
    Ramakrishna Kamurthy

    Hi Dennis,
    No worries at all. I've been trying different approaches, and strangely it does seem to load data packages faster when going via an InfoSource (I don't understand why). However, it doesn't want to do it in parallel.
    Whereas when I loaded directly from the DataSource to the DataTarget, it processed data packages twice as slowly but three at a time. The net result was that the load without the InfoSource was faster. This can be seen in the DTP process monitor and in SM50.
    Both DTPs had the default setting in Settings for Batch Manager of 3 parallel processes.
    Our batch queues in SM50 have not been blocked with other processes.
    Has anyone else had problems with parallel processes when loading via an InfoSource?
    Thanks
    Adrian
    P.S.
    I think I've discovered two cases where an InfoSource may bring performance improvements:
    Filtering records
    Transformation A includes the common and simpler transformations, e.g. it sets a "Relevant" flag when certain conditions are met.
    Transformation B includes the complex transformations. At the beginning of B, you include a start routine that filters out records not marked "Relevant". That way it only does the complex work on relevant records.
    Time conversion
    If your DataSource has Fiscal Period but you want your DataTarget to have Calendar Month, you need to write a routine to convert if you extract directly from the DataSource to the DataTarget.
    Whereas, if the Fiscal Period is passed to an InfoSource, you can use time conversions or formulas to convert Fiscal Period to Calendar Month in a transformation between the InfoSource and the DataTarget.
    Edited by: Adrian Bell on Jul 31, 2008 9:33 AM

  • Source Optional vs Target Optional

    Using SDDM 3.3.0.747.
    I've got a situation with the Logical Model that is confusing me.  Can anyone shed some light for me?
    I have two Entities (i.e. the things that look like tables) in my logical model.  One table is Orders.  The other is Order Detail.
    If a row exists in the Order Detail table, it must be tied (via a PK/FK) to a row in the Orders table.  In other words, the Order Detail can't just be a random row -- it has to "belong" to an order.  There can be many order detail rows for a given Order (i.e. you can order multiple things on the same order, and each thing is stored on its own row in the Order Detail table).
    However, a single row in the Orders table doesn't necessarily have to be associated with any rows in the Orders Detail table.  For example, perhaps we just started the order and got interrupted before actually adding anything that we wanted to order.  So we can have an order number (PK in the Orders table) that doesn't yet tie to any rows in the Order Detail table.
    What I've just described seems to me to be a 1 to 0..M relationship, meaning that a single Order may be associated with any number of Order Detail rows, or none at all.  If the Orders table is on the left and the Order Detail table is on the right, I THINK I should see this connector: -|-----0<-
    I have set the Relation Properties as follows:
    Source Cardinality
    Source: Orders
    Source to Target Cardinality: - --<-*  (1 Order for many Order Details)
    Source Optional: UNCHECKED
    Target Cardinality
    Target: Order Detail
    Target to Source Cardinality: --1 (1 Order per Order Detail)
    Target Optional: CHECKED
    Now here's where my brain is getting all wonky: The O indicating an optional constraint is located on the Orders end of the connection line.  -|O-----<-   and to me, that feels backwards.  It feels like that's telling me that "multiple Order Detail lines can be connected to either zero or 1 order", and that's not correct.  An order detail line MUST be connected to an Order.  (Sure wish I could include a screenshot or two).
    I feel that the O should be on the Order Detail end of the line, which to me says "one order is associated with any number of detail lines, including zero".
    So to me, the position of the O feels wrong.
    I can move it into what I think is the "correct" position only by reversing the CHECKED and UNCHECKED status of the Source Optional and Target Optional boxes.  When I do that, the O moves, but the relation properties screen now appears wrong to me.
    I know this has to be really basic Data Modeling 101 stuff, but I'm just not getting it.  And I HAVE had my morning Starbucks, so that's not the trouble.
    Any help in getting me thinking straight?

    AH-HAH!!!  Now I get it.  If we forget Orders and Order Details and instead look at a list of Women and a list of Children, it makes more sense.
    There is a one-to-zero-or-many relationship between Women and Children.   I have a list of Women.  For each woman, it is her option to have children or not.  The option rests with the Woman. 
    But a child has no such option. If the child exists, it has no option as to whether or not it had a mother.
    So the words 'Target Optional' do, in fact, mean 'The Target Is Optional'. If I am looking at one woman, it is indeed optional as to whether or not that woman has children.  Children (target) are not required (i.e. they are optional) for every woman (source).  Therefore, there will be an O on the relationship cardinality line, indicating that the relationship is optional.
    What was hard to explain was the positioning of the O on the cardinality line.  The presence of the O simply means that the relationship is optional.  That much is easy.
    But I was expecting the O to be positioned on whichever end of the relationship is the optional one (i.e. children are optional, so the O should be positioned on the children's end of the line), and that is not true.  The position of the O indicates which entity the option rests with.  (Which, I contend, is still backwards, but at least now I can explain it.  I don't like it, but I can explain it.)  The woman may, at her option, have one or more children.  That's the way to translate the cardinality line into spoken words when the O is on the woman's (i.e. source) end of the line. 
    Philip, thank you for hanging in there with me.  Correct Answer awarded.

  • Single Logical Table Source VS Multiple Logical Table Source

    When is it appropriate to use a single logical table source vs. multiple logical table sources?

    Hi,
    Single logical table source: a logical fact/dimension table with a single physical source/table.
    Multiple logical table source: a logical fact/dimension table with multiple physical sources/tables.
    Mark if helpful/correct.
    thanks,
    prassu

  • Share Data Source between multiple report projects in Microsoft SQL Server 2012 Reporting Services

    I have a reports solution in 2012 which contains multiple report projects, one for each target deployment folder, and we use TFS 2012 for report deployment.
    We have a template project which has a bunch of template reports and all the data sources used in the different reports.
    When I develop a report, I cannot "Preview" in TFS, but deployment used to work fine while the reports solution was in TFS 2010 & Visual Studio 2008 R2. Since we moved to TFS 2012 & SSRS 2012, I am not able to deploy until I create all the necessary data sources for each project. Now all the developers are complaining that they cannot develop reports in TFS itself because they cannot preview (this problem existed previously), and deployment fails for each report with "Could not find specified rds file". I tried modifying the DataSources tag of the .rptproj file as below, but that did not help either.
    <DataSources>
    <ProjectItem>
    <Name>DB.rds</Name>
    <FullPath>Template\Data Source\DB.rds</FullPath>
    </ProjectItem>
    </DataSources>
    Is there a way I could share a Data Source between multiple projects in Microsoft SQL Server 2012 Reporting Services?
    Thanks in advance.
    Ione

    Hi ione721,
    According to your description, you want to create a shared data source which works for multiple projects, right?
    In Reporting Services, a shared data source is a set of data source connection properties that can be referenced by multiple reports, models, and data-driven subscriptions that run on a Reporting Services report server. However, it must live within one project.
    We can't make one data source work for multiple projects. In this scenario, we suggest you put those reports into one project; otherwise, you can only create one data source per project.
    If you have any question, please feel free to ask.
    Best Regards,
    Simon Hou

  • How to put multiple targets in ODI

    Hi All,
    Can I know how to put multiple targets in Oracle Data Integrator?
    Thanks in Advance!!

    You can have one target per interface. There are some KMs in ODI 11g that generate Oracle multi-table insert statements, or comparable Teradata ones. You could also hide additional inserts under the hood in a KM if it is a common design pattern.
    Cheers
    David
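The Oracle multi-table insert those KMs generate reads the source once and fans rows out to several targets. A Python/sqlite3 sketch of the same read-once, write-many idea (sqlite has no INSERT ALL, so the conditional routing is done in a loop; all names are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE src (id INTEGER, amount INTEGER)")
conn.execute("CREATE TABLE tgt_small (id INTEGER, amount INTEGER)")
conn.execute("CREATE TABLE tgt_large (id INTEGER, amount INTEGER)")
conn.executemany("INSERT INTO src VALUES (?, ?)",
                 [(1, 50), (2, 500), (3, 5000)])

# Read the source once, then route each row to a target, the way an
# Oracle conditional multi-table insert (INSERT ALL ... WHEN) would.
rows = conn.execute("SELECT id, amount FROM src").fetchall()
for id_, amount in rows:
    table = "tgt_small" if amount < 100 else "tgt_large"
    conn.execute(f"INSERT INTO {table} VALUES (?, ?)", (id_, amount))

print(conn.execute("SELECT COUNT(*) FROM tgt_small").fetchone()[0])  # 1
print(conn.execute("SELECT COUNT(*) FROM tgt_large").fetchone()[0])  # 2
```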

  • Restore single datafile from source database to target database.

    Here's my issue:
    Database Release : 11.2.0.3 across both the source and targets. (ARCHIVELOG mode)
    O/S: RHEL 5 (Tikanga)
    Database Storage: Using ASM on a stand-alone server (NOT RAC)
    Using Oracle GG to replicate changes on the Source to the Targets.
    My scenario:
    We utilize sequences to keep the primary keys intact, and these are replicated using GG. All of my schema tables are located in one tablespace/datafile and all of my indexes are in a separate tablespace (nothing is partitioned).
    In the event of media failure on the Target or my target schema being completely out of whack, is there a method where I can copy the datafile/tablespace from my source (which is intact) to my target?
    I know there are possibilites of
    1) Restore/recover the tablespace to an SCN or timestamp in the past and then use GoldenGate to replay the transactions (but this could take time depending on how far back I need to recover the tablespace and how many transactions have been processed with GG; this is not fool-proof).
    2) Use Data Pump to move the data from the source schema to the target schema (but the sequences are usually out of order if they haven't fired on the source; you get that 'sequence is defined for this session' message). I've tried this scenario.
    3) Alter the sequences to get them to the proper number using the start-with and increment-by clauses (again, this could take time depending on how many sequences are out of order).
    I would think you could
    1) back up the datafile/tablespace on the source,
    2)then copy the datafile to the target.
    3) startup mount;
    4) Newname the new file copied from the source (this is ASM)
    5) Restore the datafile/tablespace
    6) Recover the datafile/tablespace
    7) alter database open;
    Question 1: Do I need to also copy the backup piece from the source when I execute the backup tablespace on the source as indicated in my step 1?
    Question 2: Do I need to include "plus archivelog" when I execute the backup tablespace on the source as indicated in my step 1?
    Question 3: Do I need to execute an 'alter system switch logfile' on the Target when the recover in step 6 is completed?
    My scenario sounds like a cold backup but run in ARCHIVELOG mode, so the source could stay online while the backup is running.
    Just looking for alternate methods of recovery.
    Thanks,
    Jason

    Let me take another stab at sticking a fork into this myth about separating tables and indexes.
    Let's assume you have a production Oracle database environment with multiple users making multiple requests and the exact same time. This assumption mirrors reality everywhere except in a classroom where a student is running a simple demo.
    Let's further assume that the system looks anything like a real Oracle database system where the operating system has caching, the SAN has caching, and the blocks you are trying to read are split between memory and disk.
    Now you want to do some simple piece of work and assume there is an index on the ename column...
    SELECT * FROM emp WHERE ename = 'KING';
    The myth is that Oracle is going to, in parallel, read the index and the table segments better, faster, whatever, if they are in separate physical files mapped by separate logical tablespaces somehow to separate physical spindles.
    Apply some synapses to this myth and it falls apart.
    You issue your SQL statement and Oracle does what? It looks for those index blocks where? In memory. If it finds them it never goes to disk. If it does not it goes to disk.
    While all this is happening the hundreds or thousands of other users on the database are also making requests. Oracle is not going to stop doing work while it tries to find your index blocks.
    Now it finds the index block and decides to use the ROWID value to read the block containing the row with KING's data. Did it freeze the system? Did it lock out everyone else while it did this? Of course not. It puts your read request into the queue and, again, first checks memory to see if it needs to go to disk.
    Where in here is there anything that indicates an advantage to having separate physical files?
    And even if there was some theoretical reason why separate files might be better ... are they separate in the SAN's cache? No. Are they definitely located on separate stripes or separate physical disks? Of course not.
    Oracle uses logical mappings (tables and tablespaces) and SANs use logical mappings, so you, the DBA or developer, have no clue as to where anything is physically located.
    PS: Ouija Boards don't work either.
