Transforming multiple occurrence source elements to a target

Hi,
I need help mapping multiple-occurrence source elements in a source schema to a target. I am unable to add a for-each on the source side.
I need to map between a source schema and a database schema. The source schema has elements with multiple occurrences, so how do I insert the records? Looping the entire transformation is an overhead for the database.
Can a many-to-one mapping be done in BPEL?
Please help.
Edited by: 870953 on Jul 7, 2011 4:27 AM

Hi,
this is the forum for problems with Oracle SQL Developer Data Modeler. You need to find the right forum for your tool.
Philip

Similar Messages

  • How to Transform using multiple input sources to one target

    Hello - I'm trying to figure out a way in which I can map retrieved values from more than one table (defined via db adapter links) into a single target flat file (defined via file adapter). I can establish a 1-1 transformation process fine, but when I define a second transformation activity for another table I get an error when I try to run the BPEL process. Does anyone have any ideas on what I can try next? Thanks for helping out the rookie!
    P.S. I'm running SOA Suite 10.1.3.1 with Jdeveloper 10.1.3.1 and connecting to an MS-SQL Server database

    Hi,
    If I understand your question: you are getting several variables with values from the DB adapters, and you want to put these variables into the variable that is the input for the file adapter (this step is called the assignment), and you want to do the assignment with transformations.
    You have two ways to solve that:
    use the assign activity, which is much less comfortable,
    or
    ask for a feature enhancement via Metalink so that a transformation supports more than one input...

  • Transform data from multiple sources to single target in bpel 10.13.1.0

    Hi,
    I need to transform data from multiple sources to a single target schema.
    Can anyone give me some ideas?
    Regards
    janardhan

    We went another way to merge information from multiple sources into one target.
    First we created a wrapper variable with the structure of the target variable that we want to use in the transformation.
    With an assign element we filled the wrapper variable with information from the input variable of the BPEL process and some additional information.
    Then, in the transformation, we used the wrapper variable as the source and the target variable as the target.
    We have used this approach in several BPEL processes and it works for us.
    Within Oracle SOA Suite 11g there is the Mediator component, which provides simple routing
    and transformation functionality within a composite (SCA).
    That's all, I hope that helps.
    Michael
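The wrapper approach described above can be sketched in Oracle BPEL 10g assign syntax roughly as follows. All variable, part, query, and namespace names here are invented placeholders, not taken from the thread:

```xml
<assign name="FillWrapper">
  <!-- copy a fragment of the process input into the wrapper -->
  <copy>
    <from variable="inputVariable" part="payload"
          query="/ns1:ProcessRequest/ns1:header"/>
    <to variable="WrapperVar" part="payload"
        query="/ns2:Wrapper/ns2:header"/>
  </copy>
  <!-- copy a DB adapter result into another branch of the wrapper -->
  <copy>
    <from variable="InvokeDB_OutputVariable" part="OutputParameters"/>
    <to variable="WrapperVar" part="payload"
        query="/ns2:Wrapper/ns2:dbData"/>
  </copy>
</assign>
```

A single transform activity can then take WrapperVar as its one source and map it to the target variable.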

  • FCC's parameters for multiple occurrence on the target side

    Hi Experts,
    I have an issue with fixed file content conversion (FCC).
    A fixed-format file is being read into XI and converted to XML via FCC.
    I have changed the occurrence of the MT on the target side, which resulted in adding Messages and Message1 tags on the sender side as well. The structure has become:
    Messages
         -> Message1
              --> FCI0002_MT
                   ---> row
                        ----> Fields
    Target structure:
    Messages
         -> Message1
              --> FCI0002_MT
                   ---> row
                        ----> Fields
    If I use FCI0002_MT as the document name and row,* as the Recordset structure, then the message mapping fails at the MT node.
    What should the Document name and Recordset structure be in the FCC adapter?
    Thanks and Regards,
    Indu Khurana
    Edited by: Indu Khurana on Feb 6, 2009 9:29 AM
    Edited by: Indu Khurana on Feb 6, 2009 9:34 AM

    Hi Mugdha,
    Yes, it has passed FCC and is stuck in the mapping.
    But now if I change the Recordset structure to:
    Message1,1,FCI000012_Forecast_DATA2000_MT,1,row,*
    then it gives an error in the communication channel: "Mandatory parameter 'xml.keyfieldName': no value found".
    In my case I don't have any key field name.
    Kindly suggest.
    Thanks
    Indu
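For reference, when the file holds a single record type, a minimal sender-side FCC setup looks roughly like this. The field names and lengths are placeholders; only `row,*` is taken from the thread:

```text
Document Name:          FCI0002_MT
Document Namespace:     <namespace of the message type>
Recordset Structure:    row,*
row.fieldFixedLengths:  10,20,8
row.fieldNames:         field1,field2,field3
```

As far as I know, xml.keyfieldName only becomes mandatory when the Recordset Structure lists more than one record type with a variable (*) occurrence; with a single row,* entry, no key field should be required.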

  • Transformation of Optional Date Elements

    I am new to Workshop and I am performing a transformation between two XML files that I will call source and target for example. I have multiple time/date elements that are optional in the target XML schema. I want to know how to set their values to null (nil?) if there is no data in the corresponding source XML schema element. Any ideas?
    I get syntax errors with xquery set up like this:
    if (data($iter_requestXML1/mySourceDate) != "" ) then
    xs:date($iter_requestXML1/mySourceDate)
    else
    (nil)
    I want to do something like above, but I don't know the right syntax.
    However, if I leave the "else" clause as:
    else ()
    then I don't have a syntax error, but I get an exception whenever I try to reference the target XML element when there was no data for it.
    In the last example, I am thinking I get the reference error because the element was never initialized to NULL or NIL, but I don't know how to set it. Please help!

    Hi Ilona,
    I think you already have the solution. Create a new DSO. Load the data from your 'old' DSO month by month, starting with January; just post the data in that case. Next load February. In the update rules, read the active table of your 'new' DSO to get the related data for each record from the previous month. Calculate your new value and post the request.
    regards
    Siggi
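Returning to the XQuery question above: as far as I know, the usual workaround is to construct the optional target element conditionally, so that when the source has no data the element is either omitted entirely or emitted with xsi:nil, rather than trying to compute a date value for it. A rough, untested sketch (`myTargetDate` is a placeholder name, and an xsi:nil variant would require the element to be declared nillable in the target schema):

```xquery
(: omit the optional element entirely when the source is empty :)
if (data($iter_requestXML1/mySourceDate) != "")
then <myTargetDate>{ xs:date(data($iter_requestXML1/mySourceDate)) }</myTargetDate>
else ()
```

Because the element is then absent rather than present-but-empty, later code should test for its existence instead of assuming it was initialized.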

  • Splitting a file depending on the number of occurrences of an element

    Hi
    I have the following scenario, where I need to split a file depending on the number of occurrences of element e1.
    I have input xml file having structure as follows:
    <ROOT>
    <root1 attribute>
       <field1> </field1>
       <e1>
           <field2> ....</field2>
       </e1>
        <e1>
            <field2> .......</field2>
       </e1>
    </root1>
    </ROOT>
    <u>Element e1 has a "1 to unbounded" occurrence.</u>
    I would like to have output file structure as
    <ROOT>
    <root1 attribute>
       <field1> </field1>
       <e1>
           <field2> ....</field2>
       </e1>
    </root1>
    </ROOT>
    The key thing is that for every occurrence of element e1 an output file will be generated, and
    the attribute value of root1 and the element position will be included in the file name.
    Thanking you in advance.
    Regards
    Piyush

    Hi Piyush,
    Create a message mapping for the spliting you want.
    Under <b>Messages</b> tab
    <b>Source Message:-</b> Your message type name example
    <ROOT>
    <root1 attribute>
    <field1> </field1>
    <e1>
    <field2> ....</field2>
    </e1>
    <e1>
    <field2> .......</field2>
    </e1>
    </root1>
    </ROOT>
    <b>Note: Occurrence is 1</b>
    <b>Target Message:-</b> your message type name example
    <ROOT>
    <root1 attribute>
    <field1> </field1>
    <e1>
    <field2> ....</field2>
    </e1>
    </root1>
    </ROOT>
    <b>Note: Occurrence is 0 to unbounded</b>
    In the Design tab, do your graphical one-to-one mapping.
    <b>Note: <e1> of the source (0 to unbounded) should be mapped to <ROOT> of the target (0 to unbounded).</b>
    Then you can test your mapping by supplying test data.
    <b>While creating the interface mapping, please make sure that the target interface occurrence is <i>0 to unbounded</i>.</b>
    That's it, you are done.
    If you use a BPM, make sure you use a <b>transformation step</b> to convert the source messages to the target messages, a <b>multiline container element</b>, and a <b>ForEach block</b> to catch each message.

  • Mapping an XML from an input element to Target

    Hi ,
    I have an XML coming in source Element as below :
    <?xml version="1.0" encoding="UTF-8"?>
    <ResponsePayload>
       <RespString><?xml version="1.0" encoding="UTF-8"?>
    <Devices>
       <Device>1</Device>
       <Name>1</Name>
    </Devices></RespString>
    </ResponsePayload>
    and I need to map it to the target where the target structure is as below .
    <?xml version="1.0" encoding="UTF-8"?>
    <Devices>
       <Device/>
       <Name/>
    </Devices>
    The entire target XML arrives in a single source field, and that is what needs to be mapped.
    How do I perform this? Can anyone suggest methods to do this?
    Thanks
    Rajesh

    Add the libraries and tweak as needed; here's the gist.
    public class GetResponse extends AbstractTransformation {
         public void transform(TransformationInput arg0, TransformationOutput arg1)
                   throws StreamTransformationException {
              String inputPayload = convertInputStreamToString(arg0.getInputPayload()
                        .getInputStream());
              // Strip the wrapper elements so only the embedded XML remains.
              inputPayload = inputPayload.replaceAll("<ResponsePayload>", "");
              inputPayload = inputPayload.replaceAll("</ResponsePayload>", "");
              inputPayload = inputPayload.replaceAll("<RespString>", "");
              inputPayload = inputPayload.replaceAll("</RespString>", "");
              // Drop the outer XML declaration so only the embedded one is left.
              inputPayload = inputPayload.replaceFirst("<\\?xml[^>]*\\?>", "");
              String outputPayload = inputPayload.trim();
              try {
                   arg1.getOutputPayload().getOutputStream().write(
                             outputPayload.getBytes("UTF-8"));
              } catch (Exception exception1) {
                   throw new StreamTransformationException(exception1.getMessage(), exception1);
              }
         }
         public String convertInputStreamToString(InputStream in) {
              StringBuffer sb = new StringBuffer();
              try {
                   Reader reader = new BufferedReader(new InputStreamReader(in));
                   int ch;
                   while ((ch = reader.read()) > -1) {
                        sb.append((char) ch);
                   }
                   reader.close();
              } catch (Exception exception) {
                   // return whatever was read so far
              }
              return sb.toString();
         }
    }
    Mind you, the code would change if there's an empty node.
    Note also that the extra XML version and encoding declaration you get from the source has to be removed (the replaceFirst above handles that).
    Edited by: AMITH GOPALAKRISHNAN on Mar 22, 2011 4:57 PM

  • Multiple occurrence not working for extended IDocs

    Hi friends,
    I am doing a file-to-IDoc scenario in which multiple occurrence is not working for Z segments.
    It is an extended IDoc.
    Multiple segments are not being created; only one segment is created.
    What is the problem?
    Please help me.
    With regards,
    Srikanth Vipparla

    Hi Srikanth,
    You should map the node on the target side (the one you want to occur multiple times) to the node on the source side that determines how many times the target-side node has to occur. For example, let's say our source and target structures are like this:
    <Source>
                  <Element1>
                  <Element2>
                  <Element3>
    </Source>
    <Target>
                  <Element1>
                  <Element2>
                  <Element3>
    </Target>
    Now, if you want Target to occur n times when the Source node occurs n times, you should map the Target node to the Source node. Likewise, if you want Element1 on the target side to occur as many times as Element1 on the source side, you should do the same.
    Thanks and Regards,
    Sanjeev.
    PS: Reward points if helpful.

  • Detecting multiple occurrences of a value in data? (conundrum)

    Ok, if there are any good java utilities to help, this is where I'm going to find them...
    I'm trying to detect multiple occurrences in a data set. Piling the values into an array and using Arrays.sort() was a good start; I'm trying to find the maximum and minimum of the values that occur more than once in the set.
    However, when the data set becomes much larger, I can't hold the set in memory. Is there an elegant solution to this? I'm at a loss to find a nice way of doing it. It's an interesting problem...
    Thanks,
    Steve

    Hi Steve!
    You can do this with a bit of programming yourself :-) If all values don't fit into memory, just split them into smaller pieces. Do you need to filter out all duplicates, or only detect them?
    The idea:
    Create a new ArrayList or a Set of your choice and put some elements into it (a fixed number, or just wait for an OutOfMemoryError to be thrown and catch it). Then sort the list and write it to a temporary file, storing the file handle or name in a separate list. Go on like this until all your data is sorted.
    After that, build one loop fetching one value at a time from each file and comparing them; essentially this is mergesort. If equal values appear next to each other in the merged order, well, you have dupes :-)
    If needed I could look for the source...
    Cheers,
    Jan
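Jan's idea can be sketched in Java roughly like this. It is a simplified sketch under several assumptions (values are longs read from an iterator, sorted runs go to temp files, and a k-way merge flags adjacent equal values as duplicates while tracking their min and max); the class and method names are made up:

```java
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.Iterator;
import java.util.List;
import java.util.PriorityQueue;

public class DuplicateRange {

    // Sorts fixed-size chunks in memory, spills each sorted run to a temp
    // file, then streams a k-way merge over the runs. Equal adjacent values
    // in merged order are duplicates; we track only their min and max.
    public static long[] minMaxDuplicates(Iterator<Long> data, int chunkSize) throws IOException {
        List<File> runs = new ArrayList<>();
        List<Long> chunk = new ArrayList<>(chunkSize);
        while (data.hasNext()) {
            chunk.add(data.next());
            if (chunk.size() == chunkSize || !data.hasNext()) {
                Collections.sort(chunk);
                File f = File.createTempFile("run", ".txt");
                f.deleteOnExit();
                try (PrintWriter w = new PrintWriter(new FileWriter(f))) {
                    for (long v : chunk) w.println(v);
                }
                runs.add(f);
                chunk.clear();
            }
        }
        // One reader per run; a priority queue always yields the smallest head.
        List<BufferedReader> readers = new ArrayList<>();
        PriorityQueue<long[]> heads =
                new PriorityQueue<>(Comparator.comparingLong((long[] a) -> a[0]));
        for (int i = 0; i < runs.size(); i++) {
            BufferedReader r = new BufferedReader(new FileReader(runs.get(i)));
            readers.add(r);
            String line = r.readLine();
            if (line != null) heads.add(new long[]{Long.parseLong(line), i});
        }
        Long prev = null;
        long min = Long.MAX_VALUE, max = Long.MIN_VALUE;
        boolean found = false;
        while (!heads.isEmpty()) {
            long[] h = heads.poll();
            if (prev != null && prev == h[0]) {   // adjacent equal values = duplicate
                found = true;
                min = Math.min(min, h[0]);
                max = Math.max(max, h[0]);
            }
            prev = h[0];
            String line = readers.get((int) h[1]).readLine();
            if (line != null) heads.add(new long[]{Long.parseLong(line), h[1]});
        }
        for (BufferedReader r : readers) r.close();
        return found ? new long[]{min, max} : null;
    }

    public static void main(String[] args) throws IOException {
        List<Long> data = Arrays.asList(5L, 3L, 9L, 3L, 7L, 9L, 1L);
        long[] r = minMaxDuplicates(data.iterator(), 3);
        System.out.println(r[0] + ".." + r[1]); // duplicated values are 3 and 9
    }
}
```

Only chunkSize values plus one line per run are held in memory at a time, so the data set can be far larger than the heap.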

  • How to load data into an ODS from multiple InfoSources

    Hi all,
    I have been given a task to load data into an ODS from 3 InfoSources.
    Can someone please give me the flow?
    Thank you in advance.

    Hi Hara Pradhan,
    You have to create 3 update rules, specifying the 3 different InfoSources while creating each update rule. Then you have to create InfoPackages under each InfoSource. With this you can load the data into the same data target from multiple InfoSources.
    Hope it helps!
    Assign points if it helps!

  • In XI mapping, mapping multiple fields to a single target field

    Hi Friends,
    In XI mapping, how can multiple fields be mapped to a single target field?
    For example, my requirement is:
    Source fields (from an RFC BAPI structure):
    Empno                0-1
    EmpName              0-1
    Address              0-1
    Target field:
    Details              0-1
    The above three fields are passed to the Details field. Here I am using the concat function,
    but I need a line break after every field.
    Can you please help me with this requirement?
    Thanks in Advance,
    Sateesh N.

    If you want a line break between the three fields, then try
    passing a, b, and c to a UDF, and in the UDF you would have:
    return a + "\n" + b + "\n" + c;
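As a self-contained illustration of that return statement (outside XI; a real XI user-defined function would additionally receive a Container argument, and the class and method names here are made up):

```java
public class ConcatWithBreaks {

    // Concatenate three field values with a line break between each,
    // as in the body of a simple XI user-defined function.
    public static String concat(String a, String b, String c) {
        return a + "\n" + b + "\n" + c;
    }

    public static void main(String[] args) {
        // Empno, EmpName and Address end up on separate lines in Details.
        System.out.println(concat("Empno", "EmpName", "Address"));
    }
}
```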

  • Source Optional vs Target Optional

    Using SDDM 3.3.0.747.
    I've got a situation with the Logical Model that is confusing me.  Can anyone shed some light for me?
    I have two Entities (i.e. the things that look like tables) in my logical model.  One table is Orders.  The other is Order Detail.
    If a row exists in the Order Detail table, it must be tied (via a PK/FK) to a row in the Orders table.  In other words, the Order Detail can't just be a random row -- it has to "belong" to an order.  There can be many order detail rows for a given Order (i.e. you can order multiple things on the same order, and each thing is stored on its own row in the Order Detail table).
    However, a single row in the Orders table doesn't necessarily have to be associated with any rows in the Orders Detail table.  For example, perhaps we just started the order and got interrupted before actually adding anything that we wanted to order.  So we can have an order number (PK in the Orders table) that doesn't yet tie to any rows in the Order Detail table.
    What I've just described seems to me to be a 1..0M, meaning that a single Order may be associated with any number of Order Detail rows, or none at all.  If the Orders table is on the left and the Order Detail table is on the right, I THINK I should see this connector: -|-----0<-
    I have set the Relation Properties as follows:
    Source Cardinality
    Source: Orders
    Source to Target Cardinality: - --<-*  (1 Order for many Order Details)
    Source Optional: UNCHECKED
    Target Cardinality
    Target: Order Detail
    Target to Source Cardinality: --1 (1 Order per Order Detail)
    Target Optional: CHECKED
    Now here's where my brain is getting all wonky: The O indicating an optional constraint is located on the Orders end of the connection line.  -|O-----<-   and to me, that feels backwards.  It feels like that's telling me that "multiple Order Detail lines can be connected to either zero or 1 order", and that's not correct.  An order detail line MUST be connected to an Order.  (Sure wish I could include a screenshot or two).
    I feel that the O should be on the Order Detail end of the line, which to me says "one order is associated with any number of detail lines, including zero".
    So to me, the position of the O feels wrong.
    I can move it into what I think is the "correct" position only by reversing the CHECKED and UNCHECKED status of the Source Optional and Target Optional boxes.  When I do that, the O moves, but the relation properties screen now appears wrong to me.
    I know this has to be really basic Data Modeling 101 stuff, but I'm just not getting it.  And I HAVE had my morning Starbucks, so that's not the trouble.
    Any help in getting me thinking straight?

    AH-HAH!!!  Now I get it.  If we forget Orders and Order Details and instead look at a list of Women and a list of Children, it makes more sense.
    There is a one-to-zero-or-many relationship between Women and Children.   I have a list of Women.  For each woman, it is her option to have children or not.  The option rests with the Woman. 
    But a child has no such option. If the child exists, it has no option as to whether or not it had a mother.
    So the words 'Target Optional' do, in fact, mean 'The Target Is Optional'. If I am looking at one woman, it is indeed optional as to whether or not that woman has children.  Children (target) are not required (i.e. they are optional) for every woman (source).  Therefore, there will be an O on the relationship cardinality line, indicating that the relationship is optional.
    What was hard to explain was the positioning of the O on the cardinality line.  The presence of the O simply means that the relationship is optional.  That much is easy.
    But I was expecting the O to be positioned on whichever end of the relationship is the optional one (i.e. children are optional, so the O should be positioned on the children's end of the line), and that is not true.  The position of the O indicates which entity the option rests with.  (Which, I contend, is still backwards, but at least now I can explain it.  I don't like it, but I can explain it.)  The woman may, at her option, have one or more children.  That's the way to translate the cardinality line into spoken words when the O is on the woman's (i.e. source) end of the line. 
    Philip, thank you for hanging in there with me.  Correct Answer awarded.

  • Restore single datafile from source database to target database.

    Here's my issue:
    Database Release : 11.2.0.3 across both the source and targets. (ARCHIVELOG mode)
    O/S: RHEL 5 (Tikanga)
    Database Storage: Using ASM on a stand-alone server (NOT RAC)
    Using Oracle GG to replicate changes on the Source to the Targets.
    My scenario:
    We utilize sequences to keep the primary keys intact, and these are replicated using GG. All of my schema tables are located in one tablespace and datafile, and all of my indexes are in a separate tablespace (nothing is partitioned).
    In the event of media failure on the Target or my target schema being completely out of whack, is there a method where I can copy the datafile/tablespace from my source (which is intact) to my target?
    I know there are possibilites of
    1) restore/recover the tablespace to a SCN or timestamp in the past and then I could use GoldenGate to run the transactions in (but this could take time depending on how far back I need to recover the tablespace and how many transactions have processed with GG) (This is not fool-proof).
    2) Could use DataPump to move the data from the Source schema to the Target schema (but the sequences are usually out of order if they haven't fired on the source, get that 'sequence is defined for this session message'). I've tried this scenario.
    3) I could alter the sequences to get them to proper number using the start and increment by feature (again this could take time depending on how many sequences are out of order).
    I would think you could
    1) back up the datafile/tablespace on the source,
    2)then copy the datafile to the target.
    3) startup mount;
    4) Newname the new file copied from the source (this is ASM)
    5) Restore the datafile/tablespace
    6) Recover the datafile/tablespace
    7) alter database open;
    Question 1: Do I need to also copy the backup piece from the source when I execute the backup tablespace on the source as indicated in my step 1?
    Question 2: Do I need to include "plus archivelog" when I execute the backup tablespace on the source as indicated in my step 1?
    Question 3: Do I need to execute an 'alter system switch logfile' on the Target when the recover in step 6 is completed?
    My scenario sounds like a Cold Backup but running with Archivelog mode, so the source could be online while the database is running.
    Just looking for alternate methods of recovery.
    Thanks,
    Jason

    Let me take another stab at sticking a fork into this myth about separating tables and indexes.
    Let's assume you have a production Oracle database environment with multiple users making multiple requests and the exact same time. This assumption mirrors reality everywhere except in a classroom where a student is running a simple demo.
    Let's further assume that the system looks anything like a real Oracle database system where the operating system has caching, the SAN has caching, and the blocks you are trying to read are split between memory and disk.
    Now you want to do some simple piece of work and assume there is an index on the ename column...
    SELECT * FROM emp WHERE ename = 'KING';
    The myth is that Oracle is going to, in parallel, read the index and read the table segments better, faster, whatever, if they are in separate physical files mapped by separate logical tablespaces somehow to separate physical spindles.
    Apply some synapses to this myth and it falls apart.
    You issue your SQL statement and Oracle does what? It looks for those index blocks where? In memory. If it finds them it never goes to disk. If it does not it goes to disk.
    While all this is happening the hundreds or thousands of other users on the database are also making requests. Oracle is not going to stop doing work while it tries to find your index blocks.
    Now it finds the index block and decides to use the ROWID value to read the block containing the row with KING's data. Did it freeze the system? Did it lock out everyone else while it did this? Of course not. It puts your read request into the queue and, again, first checks memory to see if it needs to go to disk.
    Where in here is there anything that indicates an advantage to having separate physical files?
    And even if there was some theoretical reason why separate files might be better ... are they separate in the SAN's cache? No. Are they definitely located on separate stripes or separate physical disks? Of course not.
    Oracle uses logical mappings (tables and tablespaces) and SANS use logical mappings so you, the DBA or developer, have no clue as to where anything physically is located.
    PS: Ouija Boards don't work either.

  • Multiple data sources to an open hub destination?

    Hi,
    Is it possible to assign multiple data sources to an open hub destination?
    At present we are FTP'ing a file to users. Users need one more field added to this file. The information for this field belongs to another ODS. I checked the InfoSpoke, but it does not accept multiple data sources.
    Any ideas?
    Thanks!
    Edited by: BI Quest on Sep 4, 2008 8:12 PM

    You can create multiple transformations for an open hub destination. Set the destination of the open hub destination to a file on the application server, and then use a transfer mechanism to move the files to the desired destination.
    Regards,
    Pravin Karkhanis.

  • Multiple xml sources - JSP

    Hi to all,
    OK, this is my situation. I have XML data that is coming from servlets. I need to build a JSP page that will output the XML data, but from multiple servlets. Is there any way to do this? I have XSLT that transforms the XML data, but it has to be linked at runtime, since the servlets only output the data (I need a clear separation between the data presentation and the data itself).
    Could somebody shed some light, please?
    Thanks

    Well, not exactly.
    Let's say I have two HTML lists in my JSP page that need to be outputted. Each of these lists gets its data from a different servlet, and the servlets output XML. I have this JSP page that needs to take the two XML outputs (one from each servlet), link each with its respective XSLT file (dynamically!) and render the result in HTML.
    So I have multiple XML sources, not just one. Combining the XML data into a single document is exactly what I don't want, because that combination is irrelevant; I want to be able to reuse each output with other pages. (So joining the 2 sources is not acceptable.)
    Thank you
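One common way to do the runtime linking is the JAXP transformation API, which the JSP (or a helper class it calls) can invoke once per servlet output, writing each rendered fragment into the page. A sketch under the assumption that each servlet's XML response is available as a string; the class name, demo stylesheet, and demo data are all invented:

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerException;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class MultiSourceRenderer {

    // A tiny demo stylesheet: renders <items><item>..</item></items> as an HTML list.
    public static final String DEMO_XSLT =
        "<xsl:stylesheet version='1.0' xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
      + "<xsl:output method='html' omit-xml-declaration='yes'/>"
      + "<xsl:template match='/items'><ul><xsl:for-each select='item'>"
      + "<li><xsl:value-of select='.'/></li></xsl:for-each></ul></xsl:template>"
      + "</xsl:stylesheet>";

    // Applies one stylesheet to one XML source. A JSP would call this once
    // per servlet response, pairing each response with its own stylesheet.
    public static String render(String xml, String xslt) throws TransformerException {
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(xslt)));
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(xml)), new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws TransformerException {
        String listOne = "<items><item>one</item><item>two</item></items>";
        // Each XML source keeps its own stylesheet, so the fragments stay reusable.
        System.out.println(render(listOne, DEMO_XSLT));
    }
}
```

Because each source/stylesheet pair is rendered independently, the two XML outputs never have to be merged into one document.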
