How to map 1 XML document to 2 tables?

Can somebody tell me how to map one (nested) XML document to two Oracle tables using the XSQL Servlet? There are plenty of examples in the release documents about inserting one XML document into one table (using <xsql:insert-request ...>), but they don't say how to map one XML file to multiple tables.
Thanks.
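One commonly documented workaround is to target a view that spans both tables and let an INSTEAD OF trigger split each incoming row; a minimal sketch, in which all view, table, and column names are illustrative:

    CREATE OR REPLACE VIEW order_with_items_v AS
      SELECT o.order_id, o.customer_name, i.item_id, i.qty
      FROM   orders o, order_items i
      WHERE  i.order_id = o.order_id;

    CREATE OR REPLACE TRIGGER order_with_items_ins
      INSTEAD OF INSERT ON order_with_items_v
      FOR EACH ROW
    BEGIN
      -- parent row first; skip it if an earlier row of the same document created it
      INSERT INTO orders (order_id, customer_name)
      SELECT :NEW.order_id, :NEW.customer_name FROM dual
      WHERE NOT EXISTS (SELECT 1 FROM orders WHERE order_id = :NEW.order_id);
      -- then the child row
      INSERT INTO order_items (order_id, item_id, qty)
      VALUES (:NEW.order_id, :NEW.item_id, :NEW.qty);
    END;
    /

The <xsql:insert-request> element would then name the view (table="order_with_items_v") rather than a base table, with an XSLT transform flattening the nested document into one row per item.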

Post Author: slyder
CA Forum: Data Connectivity and SQL
Hi foghat, I'm not going to post all the fields because these tables are huge, but here are the fields I'm using:
Table1 = OE_OrdItem
Table2 = CT_CareProv
OE_OrdItem.OEORI_Doctor_DR -> CTCareProv.RowId
OE_OrdItem.OEORI_AuthoriseClinician_DR -> CTCareProv.RowId
Now my problem is as follows: OEORI_Doctor_DR = 533 and OEORI_AuthoriseClinician_DR = 233 (on one record). CTCareProv.Desc stores names, so if I link OEORI_Doctor_DR to the CTCareProv.RowId field in Crystal and put CTCareProv.Desc on the report, I get the person's name. But as soon as I link OEORI_AuthoriseClinician_DR to CTCareProv.RowId as well, I can't use CTCareProv.Desc again on my report, and this is where I'm getting stuck. Using normal SQL you would just use different aliases, as Yangster said, but I'm not sure how to do it in Crystal Reports.
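In plain SQL the two-alias version would look like this (a sketch using the table and field names from the post; the first output column is illustrative, and Desc may need quoting since DESC is a reserved word):

    SELECT oi.OEORI_RowId,                 -- illustrative key column
           doc."Desc"  AS DoctorName,      -- name via the first alias
           auth."Desc" AS AuthoriserName   -- name via the second alias
    FROM   OE_OrdItem  oi,
           CT_CareProv doc,
           CT_CareProv auth
    WHERE  doc.RowId  = oi.OEORI_Doctor_DR
    AND    auth.RowId = oi.OEORI_AuthoriseClinician_DR;

In Crystal Reports the usual equivalent is to add CT_CareProv to the report a second time in the Database Expert, which creates an aliased copy (e.g. CT_CareProv_1); link one copy to OEORI_Doctor_DR and the other to OEORI_AuthoriseClinician_DR, and each copy's Desc field can then be placed on the report independently.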

Similar Messages

  • Mapping nested XML to one table (sales order), XML to IDOC

    Hello,
    I have to map an XML file to the IDoc SALESORDER_CREATEFROMDAT2.SALESORDER_CREATEFROMDAT202.
    How can I map the longtext from the XML file to the IDoc structure?
    In the part of the XML file shown below, there can be n BPosition elements, each with n longtext lines. The longtext must be mapped to a table; in some cases it is a mapping from a nested to a normal (flat) table.
        <BPosition>
             <lpos>1</lpos>
             <bbl_sap_nr/>
             <milvonr/>
             <kurztitel/>
             <anzbest/>
             <anzliefer/>
             <kostenpflichtig/>
             <longtext>
                <line>pos1 zeile1</line>
                <line>pos1 zeile 2</line>
             </longtext>
          </BPosition>
    thanks for your help.

    Hi, I have to map this one XML to one IDoc.
    XML:
    <?xml version="1.0" encoding="UTF-8"?>
    <ns0:MT_Milver xmlns:ns0="http://ccssap.bfi.admin.ch/milver">
       <bestellung>
          <besteller>
             <bestellnr/>
             <auftragdefit/>
             <wempfdebit/>
             <bestelldat/>
             <lieferdat/>
             <anzpos/>
             <language/>
             <adrzeile1/>
             <adrzeile2/>
             <adrzeile3/>
             <adrzeile4/>
          </besteller>
          <Kopf>
             <Lkopf>Kopf1 zeile1</Lkopf>
             <Lkopf>Kopf1 zeile2</Lkopf>
          </Kopf>
          <BPosition>
             <lpos>1</lpos>
             <matnr/>
             <milvonr/>
             <kurztitel/>
             <anzbest/>
             <anzliefer/>
             <kostenpflichtig/>
             <bemerkung>
                <line>pos1 zeile1</line>
                <line>pos1 zeile 2</line>
             </bemerkung>
          </BPosition>
          <BPosition>
             <lpos>2</lpos>
             <matnr/>
             <milvonr/>
             <kurztitel/>
             <anzbest/>
             <anzliefer/>
             <kostenpflichtig/>
             <bemerkung>
                <line>pos2 zeile1</line>
                <line>pos2 zeile 2</line>
             </bemerkung>
          </BPosition>
       </bestellung>
    </ns0:MT_Milver>
    IDOC:
    The IDoc has a segment for the longtext (table). I have to map lpos and line into E1BPSDTEXT from SALESORDER_CREATEFROMDAT2. Is it clearer now?

  • Map FLAT file to oracle table using 9.04 version - PLS HELP!!!!

    Hello all
    I am having a problem with mapping a flat file to an Oracle table. The validation is successful. When I go to Project/Deployment Manager and try to deploy the mapping itself and the target table, it says successful, but the last step is another "Deploy", and this one fails, saying it could not locate the file (which is a flat file), even though it is there on the server.
    I have read all the online help and followed what it shows me, but it still does not work.
    Any ideas? Please provide a detailed answer if you know it.
    Thanks in advance

    Hello,
    just grant rights on the connector directory.
    Variant 1
    1. Connect as user SYS.
    2. GRANT READ, WRITE ON DIRECTORY <connector_name> TO <target_schema>;
    or
    Variant 2
    1. As user SYS or SYSTEM, grant CREATE ANY DIRECTORY to <target_schema>.
    2. Manually run CREATE DIRECTORY <connector_name> AS '<full_path_to_directory>';
    and enjoy :)
    PS: you can take <connector_name> from the CREATE_TABLE script that was created in the Generation phase!
    Kirill
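    Put together as runnable statements, the two variants look like this (a sketch; the directory name, path, and schema name are placeholders):
    -- Variant 1: as SYS, grant access on the connector's directory object
    GRANT READ, WRITE ON DIRECTORY flat_file_dir TO target_schema;
    -- Variant 2, step 1: as SYS or SYSTEM, let the target schema create directories
    GRANT CREATE ANY DIRECTORY TO target_schema;
    -- Variant 2, step 2: as the target schema, create the directory manually
    CREATE DIRECTORY flat_file_dir AS '/u01/owb/flat_files';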

  • Need help on Inbound Delivery - Mapping from IDoc to LIKP Table

    Hi,
    For the DESADV IDoc (Inbound Delivery) we are currently mapping data to the LIKP table for a subset of fields via function module INPUT_IDOC_DESADV. If I wanted to map the ABLAD (Unloading Point) field from the IDoc to the LIKP-ABLAD (Unloading Point) field, is this possible, and if yes, what work would be required?
    Also, could you please let me know how to find a user exit for the message type? (message type = DESADV)

    Hi Murphy,
    Try this custom function '002'.
    CALL CUSTOMER-FUNCTION '002'
           EXPORTING
                xekko     = ekko
                xlfa1     = lfa1
                xlfb1     = lfb1
                dobject   = object
           TABLES
                int_edidd = int_edidd
                xekpo     = xekpo
                xeket     = xeket
                dvbak     = xvbak
                dvbap     = xvbap
                dvbkd     = xvbkd
           EXCEPTIONS
                error_message_received        = 1
                data_not_relevant_for_sending = 2.
    Hope this will help.
    Regards,
    Ferry Lianto

  • N-m mapping generates blobs in ref-table

    Why does the schematool generate blobs in the reference table of an n-m mapping? I want to map an existing schema, and this isn't possible with blobs. I expected a table with the primary keys of each class.

    Abe White wrote:
    Why does the schematool generate blobs in the reference table of an n-m mapping? I want to map an existing schema, and this isn't possible with blobs. I expected a table with the primary keys of each class.
    Kodo fully supports n-m and m-m mappings. If you post your .jdo files and describe your schema somehow, we can probably help you find out where the problem lies.
    Hello,
    here I send you a short, contrived example which contains my two problems:
    - n-m mapping generates blobs in ref-table
    - multi-table inheritance mapping
    These are my classes:
    Class Person
    String firstname
    String lastname
    Class Employee extends Person
    long personalNumber
    Collection memberOf ( TYPE Project )
    Class Project
    String projectname
    Collection members ( TYPE Employee )
    The jdo-metadata:
    <?xml version="1.0"?>
    <jdo>
    <package name="test">
    <class name="Person" identity-type="application"
    objectid-class="PersonPK">
    <extension vendor-name="kodo" key="table" value="DB_PERSON" />
    <extension vendor-name="kodo" key="lock-column" value="VERSION_ID" />
    <extension vendor-name="kodo" key="class-column" value="none" />
    <field name="firstname" null-value="exception" primary-key="true">
    <extension vendor-name="kodo" key="data-column"
    value="FIRSTNAME_TXT" />
    <extension vendor-name="kodo" key="column-length" value="50"/>
    </field>
    <field name="lastname" null-value="exception" primary-key="true">
    <extension vendor-name="kodo" key="data-column"
    value="LASTNAME_TXT" />
    <extension vendor-name="kodo" key="column-length" value="50"/>
    </field>
    </class>
    <!-- I tried both -->
    <!-- <class name="Employee"
    persistence-capable-superclass="test.Person"> -->
    <class name="Employee" identity-type="application"
    objectid-class="PersonPK" persistence-capable-superclass="test.Person">
    <extension vendor-name="kodo" key="table" value="DB_EMPLOYEE" />
    <extension vendor-name="kodo" key="lock-column" value="VERSION_ID" />
    <extension vendor-name="kodo" key="class-column" value="none" />
    <field name="personalNumber" null-value="exception">
    <extension vendor-name="kodo" key="data-column"
    value="PERSONAL_NUM" />
    </field>
    <field name="memberOf" null-value="none">
    <extension vendor-name="kodo" key="inverse" value="members" />
    <extension vendor-name="kodo" key="table"
    value="DB_EMPLOYEE_PROJECT_REF" />
    <extension vendor-name="kodo" key="projectname-data-column"
    value="PROJECT_NAME" />
    <extension vendor-name="kodo" key="firstname-ref-column"
    value="EMPLOYEE_FIRSTNAME" />
    <extension vendor-name="kodo" key="lastname-ref-column"
    value="EMPLOYEE_LASTNAME" />
    </field>
    </class>
    <class name="Project" identity-type="application"
    objectid-class="ProjectPK">
    <extension vendor-name="kodo" key="table" value="DB_PROJECT" />
    <extension vendor-name="kodo" key="lock-column" value="VERSION_ID" />
    <extension vendor-name="kodo" key="class-column" value="none" />
    <field name="projectname" null-value="exception" primary-key="true">
    <extension vendor-name="kodo" key="data-column" value="NAME_TXT" />
    <extension vendor-name="kodo" key="column-length" value="50"/>
    </field>
    <field name="members" null-value="none">
    <extension vendor-name="kodo" key="inverse" value="memberOf" />
    <extension vendor-name="kodo" key="table"
    value="DB_EMPLOYEE_PROJECT_REF" />
    <extension vendor-name="kodo" key="firstname-data-column"
    value="EMPLOYEE_FIRSTNAME" />
    <extension vendor-name="kodo" key="lastname-data-column"
    value="EMPLOYEE_LASTNAME" />
    <extension vendor-name="kodo" key="projectname-ref-column"
    value="PROJECT_NAME" />
    </field>
    </class>
    </package>
    </jdo>
    Kodo generates this db-schema:
    Table db_person
    Column VERSION_ID ( Type int(11) )
    Column FIRSTNAME_TXT ( Type varchar(50) ) <PrimaryKey>
    Column LASTNAME_TXT ( Type varchar(50) ) <PrimaryKey>
    Table db_employee
    Column PERSONAL_NUM ( Type bigint(20) )
    Table db_project
    Column VERSION_ID ( Type int(11) )
    Column NAME_TXT ( Type varchar(50) ) <PrimaryKey>
    Table db_employee_project_ref
    Column MEMBERSX ( Type blob )
    Column MEMBEROFX ( Type blob )
    I expected this:
    Table db_person
    Column VERSION_ID ( Type int(11) )
    Column FIRSTNAME_TXT ( Type varchar(50) ) <PrimaryKey>
    Column LASTNAME_TXT ( Type varchar(50) ) <PrimaryKey>
    Table db_employee
    Column PERSONAL_NUM ( Type bigint(20) )
    **** I think Kodo needs this reference to the base-class ****
    Column FIRSTNAME_TXT ( Type varchar(50) ) <ForeignKey>
    Column LASTNAME_TXT ( Type varchar(50) ) <ForeignKey>
    Table db_project
    Column VERSION_ID ( Type int(11) )
    Column NAME_TXT ( Type varchar(50) ) <PrimaryKey>
    Table db_employee_project_ref
    **** the mapping should use the primary keys, no blobs ****
    Column EMPLOYEE_FIRSTNAME ( Type varchar(50) ) <ForeignKey>
    Column EMPLOYEE_LASTNAME ( Type varchar(50) ) <ForeignKey>
    Column PROJECT_NAME ( Type varchar(50) ) <ForeignKey>
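    Expressed as DDL, the expected reference table would be roughly this (a sketch derived from the listing above; the composite foreign key mirrors db_person's two-column primary key):
    CREATE TABLE db_employee_project_ref (
      EMPLOYEE_FIRSTNAME VARCHAR(50) NOT NULL,
      EMPLOYEE_LASTNAME  VARCHAR(50) NOT NULL,
      PROJECT_NAME       VARCHAR(50) NOT NULL,
      FOREIGN KEY (EMPLOYEE_FIRSTNAME, EMPLOYEE_LASTNAME)
        REFERENCES db_person (FIRSTNAME_TXT, LASTNAME_TXT),
      FOREIGN KEY (PROJECT_NAME) REFERENCES db_project (NAME_TXT)
    );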

  • Mapping multiple columns of a table to single dimension using odi

    Hi John,
    Can we map multiple columns of a table to a single dimnesion?
    For example, in RDBMS, for the employee details, Grade position etc will be in different columns, and in Planning these would be as members of one dimension.
    So while loading data from oracle to essbase can we map these multiple columns to single dimension?
    If yes how?

    Hi,
    In your staging area/target you can concatenate the columns.
    So in your interface, on your target datastore, pick the column which is going to hold the result of the concatenation.
    Then in the expression editor use the CONCAT function, or you could use ||
    e.g. CONCAT(<sourceCol1>, <sourceCol2>)
    or <sourceCol1> || <sourceCol2>
    Obviously you need to replace <sourceCol1> and <sourceCol2> with your source datastore columns.
    Cheers
    John
    http://john-goodwin.blogspot.com/
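    For example, with Oracle as the staging area, the generated expression would boil down to something like this (a sketch; table and column names are illustrative, and the '_' separator is optional but keeps member names readable):
    SELECT GRADE || '_' || POSITION AS EMPLOYEE_DETAIL
    FROM   EMP_DETAILS;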

  • Multiple columns (named the same originally) and mapped to the same lookup table are causing a Cube Build issue

    Hey folks, looking for some insight here.
    I've an implementation that contains some custom Enterprise columns mapped to lookup tables. In the instance I'm working with now, it looks like there was/is an issue with one of those columns. In this scenario, I have a column named ProjectType, created initially with that name and mapped to a lookup table. This field's name was then changed to Project Type. After that, it looks like another column was created, also called ProjectType. So now we have what I would have originally thought were two distinct columns, even though the names used are the same.
    Below is the error we're currently getting during the Cube Build Process...
    PWA:http://ps2010/PWA, ServiceApp:Project Web App, User:DOMAIN\user, PSI: SqlException occurred in DAL:  <Error><Class>1</Class><LineNumber>1</LineNumber><Number>4506</Number><Procedure>MSP_EpmProject_OlapView_B8546719-4D4C-473A-84B1-89DEDA2307E0</Procedure> 
    <Message>  System.Data.SqlClient.SqlError: Column names in each view or function must be unique. Column name 'ProjectType' in view or function 'MSP_EpmProject_OlapView_B8546719-4D4C-473A-84B1-89DEDA2307E0' is specified more than once.  </Message> 
    <CallStack>   
     at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)   
     at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)   
     at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)   
     at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString)   
     at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async)   
     at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result)   
     at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(DbAsyncResult result, String methodName, Boolean sendToPipe)   
     at System.Data.SqlClient.SqlCommand.ExecuteNonQuery()   
     at Microsoft.Office.Project.Server.DataAccessLayer.DAL.SubDal.ExecuteStoredProcedureNoResult(String storedProcedureName, SqlParameter[] parameters)  </CallStack>  </Error>
    I've tried deleting the one column, but the build still gives the above error.
    Any thoughts as to how the above could be resolved?
    Thanks! - M
    Michael Mukalian | Jan 2010 - Dec 2010 MVP SharePoint Services | MCTS: MOSS 2007 Configuration | http://www.mukalian.com/blog

    We tried taking it out of the cubes, and it builds fine.  The challenge we're having is in building the cubes with that custom field "ProjectType".  It's as if the cubes still hold some reference to it even when it's deleted.
    Since the OLAP View ('MSP_EpmProject_OlapView_{guid}') is recreated, would it be as simple as deleting that View, and trying to recreate?
    Thanks - M
    Michael Mukalian | Jan 2010 - Dec 2010 MVP SharePoint Services | MCTS: MOSS 2007 Configuration | http://www.mukalian.com/blog
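    The error itself is easy to reproduce outside Project Server, since a SQL Server view cannot expose two columns with the same name. A hypothetical repro (table and view names are illustrative):
    CREATE TABLE dbo.Proj (Id int, TypeA nvarchar(50), TypeB nvarchar(50));
    GO
    -- Fails with error 4506: both output columns are named ProjectType
    CREATE VIEW dbo.Bad_OlapView AS
    SELECT TypeA AS ProjectType, TypeB AS ProjectType FROM dbo.Proj;
    GO
    -- Succeeds: every exposed column name is unique
    CREATE VIEW dbo.Good_OlapView AS
    SELECT TypeA AS ProjectType, TypeB AS ProjectType2 FROM dbo.Proj;
    GO
    So as long as both custom fields surface as ProjectType, the generated OLAP view cannot be created.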

  • Creating a bitmap index on a partitioned table

    Dear friends,
    I am trying to create a bitmap index on a partitioned table but am receiving the following ORA error. Can you please let me know how to create a local bitmap index, as the message suggests?
    ERROR at line 1:
    ORA-25122: Only LOCAL bitmap indexes are permitted on partitioned tables
    Putting the keyword LOCAL in front just leads to a syntax error.
    Thanks in advance !!
    Somnath

    ORA-25122 Only LOCAL bitmap indexes are permitted on partitioned tables
    Cause: An attempt was made to create a global bitmap index on a partitioned table.
    Action: Create a local bitmap index instead
    Example of a Local Index Creation
    CREATE INDEX employees_local_idx ON employees (employee_id) LOCAL;
    The example is for a b-tree index, but I think it will work for bitmap as well.
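    It does; for a bitmap index the LOCAL keyword likewise goes at the end of the statement. The bitmap form of the same example (index and column names are illustrative):
    CREATE BITMAP INDEX employees_local_bmx
      ON employees (department_id) LOCAL;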

  • How to Maintain Surrogate Key Mapping (cross-reference) for Dimension Tables

    Hi,
    What would be the best approach on ODI to implement the Surrogate Key Mapping Table on the STG layer according to Kimball's technique:
    "Surrogate key mapping tables are designed to map natural keys from the disparate source systems to their master data warehouse surrogate key. Mapping tables are an efficient way to maintain surrogate keys in your data warehouse. These compact tables are designed for high-speed processing. Mapping tables contain only the most current value of a surrogate key— used to populate a dimension—and the natural key from the source system. Since the same dimension can have many sources, a mapping table contains a natural key column for each of its sources.
    Mapping tables can be equally effective if they are stored in a database or on the file system. The advantage of using a database for mapping tables is that you can utilize the database sequence generator to create new surrogate keys. And also, when indexed properly, mapping tables in a database are very efficient during key value lookups."
    We have a requirement to implement cross-reference mapping tables with natural and surrogate keys for each dimension table. These mapping tables will be populated automatically (inserts only) during the E-LT execution, right after inserting into the dimension table.
    Does anyone have any idea how to implement this in ODI?
    Thanks,
    Danilo

    Hi,
    first of all, please avoid bolding text. That said, according to Kimball (if I remember well) this is a 1:1 mapping, so no surrogate key is needed.
    After that, personally you could use a lookup table:
    http://www.odigurus.com/2012/02/lookup-transformation-using-odi.html
    or make a simple outer join filtering by your "Active_Flag" column (remember that this filter needs to be inside your outer join).
    Let us know
    Francesco
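    As a concrete starting point, the structure Kimball describes could be sketched like this for one dimension in an Oracle staging schema (all names are illustrative; one natural key column per source system, loaded insert-only right after the dimension insert):
    CREATE SEQUENCE customer_sk_seq;
    CREATE TABLE map_customer_sk (
      customer_sk NUMBER NOT NULL,     -- surrogate key used by the dimension
      erp_cust_id VARCHAR2(30),        -- natural key from source system 1
      crm_cust_id VARCHAR2(30),        -- natural key from source system 2
      CONSTRAINT pk_map_customer_sk PRIMARY KEY (customer_sk)
    );
    -- insert-only load, executed right after inserting into the dimension
    INSERT INTO map_customer_sk (customer_sk, erp_cust_id, crm_cust_id)
    SELECT customer_sk_seq.NEXTVAL, s.erp_cust_id, s.crm_cust_id
    FROM   stg_customer s
    WHERE  NOT EXISTS (SELECT 1 FROM map_customer_sk m
                       WHERE  m.erp_cust_id = s.erp_cust_id);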

  • Key mapping during the transportation (lookup tables)

    Hello,
    For lookup tables with key mapping, during the transport from DEV -> QA -> Prod we are asked to copy the development repository "without master data". How can we do that? And also, how are workflows and matching strategies transported?
    Thanks

    Hi
    For lookup tables with key mapping, during the transport from DEV -> QA -> Prod we are asked to copy the development repository "without master data". How can we do that? And also, how are workflows and matching strategies transported?
    When we move from DEV to QA to Prod, normally the remote system with which MDM interacts also moves through similar environments. In different environments the reference table data may not match, and hence it is advised not to move with the same data. You can do this by simply exporting the schema of the repository from DEV to QA and so on; that is, create a new repository in QA and Prod by using the option "Export from Schema".
    Matching strategies and workflows are not supported in this schema transport for MDM 5.5.
    They have to be created manually once the repository has been created.
    Good news is MDM 7.1 supports this.
    regards
    Ravi

  • Mapping XSD to a relational table

    Hi
    In JDeveloper, how do I go about mapping an XSD file to a relational table?
    I know some tools like XML Spy can do this; can JDeveloper do it?
    regards

    When using the DATAEXPORT command to export data for direct insertion into a relational database:
    ● The table to which the data is to be written must exist prior to data export
    ● Table and column names cannot contain spaces
    Check it against the syntax below.
    SET DATAEXPORTOPTIONS
    {
    DATAEXPORTDECIMAL 1;
    DataExportLevel "LEVEL0";
    DATAEXPORTCOLHEADER "Periods";
    };
    FIX("FY2011","Budget","Version1",FixedAssets,"Amount","622185","3011","BU_None" ,@LEVMBRS(Periods,0) AND @DESCENDANTS(Annual,0),@LEVMBRS(Currencies,0));
    DATAEXPORT "DSN" "bfp" "ClaritydataCS.dbo.BEAM_Output_Budget" "bfpcssql" "pBaUsDsGwEoTrd123";
    ENDFIX;

  • Problem mapping 1 column to different tables

    Hi,
    I have 3 Tables (I will give examples not exact tables but same structure and logic)
    Cars :
       ID  ( Car ID)
       Name
    Planes :
       ID ( Plane ID)
       Name
    Processes :
       ID ( Process ID )
       Type
       VehicleID
    Sample Processes Table Data:
    ID     Type      VehicleID
    1      1         1
    2      1         2
    3      2         1
    3      2         2
    When Type is 1, this is a Car, which means that VehicleID maps to the Cars table.
    When Type is 2, this is a Plane, which means that VehicleID maps to the Planes table.
    And so on (3, 4, 5, 6, ...) for additional tables.
    How can I map something like that? I cannot merge these tables; they all must stay separate.
    I used to handle this by writing native SQL with some functions, but with JPA I could not figure it out.
    Thanks again
    Regards.

    I do not know an easy way to do this in pure JPA. EclipseLink though is able to do this using a variable one to one mapping:
    http://wiki.eclipse.org/Introduction_to_Relational_Mappings_%28ELUG%29#Variable_One-to-One_Mapping
    This can be set using a customizer or JPA like annotations:
    @VariableOneToOne(
    targetInterface=Vehicle.class,
    cascade=PERSIST,
    fetch=LAZY,
    discriminatorColumn=@DiscriminatorColumn(name="TYPE", discriminatorType=INTEGER),
    discriminatorClasses={
    @DiscriminatorClass(discriminator="1", value=Cars.class),
    @DiscriminatorClass(discriminator="2", value=Planes.class)
    })
    Best Regards,
    Chris

  • Mapping Problem with 2 ALV Tables after sorting

    Hi,
    I have a context node INCIDENTS with a sub-node SUB_INCIDENTS.
    The sub-node is filled via a Supply Function.
    Both nodes are displayed in separate ALV Grids on the same view, which works pretty well.
    The only problem I face is when I try to sort one of the ALVs, then I get an error "The node specified in mapping ( SUB_INCIDENTS) could not be found ".
    I have no clue how to solve this; I already tried to fill the sub-node with the ALV standard function "ON_STD_FUNCTION_AFTE", with no effect.
    Can anybody imagine what the reason might be?
    Best regards, Steffen
    Edited by: Steffen Weber on Aug 27, 2008 2:55 PM

    In general, having two ALVs for a parent node and a sub-node is not supported. Here is a section from the online help that describes what happens when a sort occurs on the parent node in ALV:
    Important Exception: Sorting
    Here ALV has to use the entire dataset so that the data records can be arranged in the new order. For this purpose, the ALV component temporarily takes control of the internal data table and invalidates the corresponding context node of your application during this time. This ensures that the application cannot access the context node while the ALV component is editing the internal data table.
    Once the internal data table has been resorted, ALV rebuilds the context node, releases it again for the application, and displays the data accordingly.
    This ensures that the internal data table is never copied. This is important because large volumes of data would considerably impact performance and memory space.
    When you are planning your application, note the following side-effects of this mechanism:
    ●      When the context node is invalidated, information about current selections, and in particular the lead selection, is lost.
    ●      If your application has created subnodes for the context node, (master-detail scenario), these subnodes are lost as soon as the ALV component invalidates the context node. If the application then tries to access the subnodes, a runtime error occurs.
    Because of the invalidation described here, the sub-node's lead selection is lost and the sub-node itself is invalidated as well. This leads to the error you are encountering.

  • Mapping Issue with 7 source tables and one target table in one step

    Hi,
    Request for a small help in OWB.
    I am trying to map 7 source tables to a single target table in one step. These source tables are in an Oracle 10g database but have no PK/FK relationships.
    I am able to link one table to the target by mapping some of the columns, but when I come to the second table it gives an error message:
    AP18003: Connection target attribute group is already connected to an incompatible data source. Use a joiner or set operator to join the upstream data first before connecting it to this operator.
    As per the error message I used a Joiner operator and tried to map the second table to the target, but it still gives me the same error message. Could somebody give me a hand to get past this step?
    Thanks for your help in advance.
    Cheers,
    Krishna.
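    For orientation, the SQL that the Joiner should ultimately produce has roughly this shape (a sketch showing only three of the seven sources; table names and join columns are illustrative, since without PK/FK constraints you must supply the join conditions yourself):
    INSERT INTO target_table (id, name, order_date)
    SELECT s1.id, s2.name, s3.order_date
    FROM   src_one s1, src_two s2, src_three s3
    WHERE  s2.src_one_id = s1.id
    AND    s3.src_one_id = s1.id;
    In the mapping this means all seven tables feed the Joiner's input groups, and only the Joiner's single output group is connected to the target.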

    Hi,
    like this:
    Ingroup1
    - id -> Number(9,0)
    - name -> VARCHAR2(500)
    Ingroup2
    - my_id -> Number(9,0)
    - name -> VARCHAR2(500)
    Outgroup
    - id -> point to target_table.id
    - name -> point to target_table.name
    Not:
    Ingroup1
    - id -> Number(9,0)
    - name -> VARCHAR2(500)
    Ingroup2
    - name -> VARCHAR2(500)
    - my_id -> VARCHAR2(9)
    Regards
    Detlef

  • Decode statement in a mapping involving a source text file & table

    Hi All,
    Oracle 9i Warehouse Builder Client: 9.2.0.4.0
    Oracle 9i Warehouse Builder Repository: 9.2.0.2.0
    Oracle9i Enterprise Edition Release 9.2.0.5.0 - 64bit Production
    Does OWB allow the use of a DECODE statement in a mapping that involves a source text file and an Oracle table?
    My understanding is that it's not possible, since OWB makes use of SQL*Loader. A workaround could be to use an external table instead of the flat text file.
    However, I came across an old posting (June 2003) which says that this feature is available in OWB 9.2.
    Following is the url:
    IF THEN LOGIC from Flat File to Table
    Can someone please confirm this?
    Thanks in Advance.
    Regards,
    Vidyanand
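    For what it's worth, the external-table workaround would look roughly like this (a sketch; it assumes an existing directory object, and all names and the file layout are illustrative):
    CREATE TABLE src_file_ext (
      code VARCHAR2(10),
      name VARCHAR2(100)
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY src_dir
      ACCESS PARAMETERS (FIELDS TERMINATED BY ',')
      LOCATION ('source.txt')
    );
    -- once the file is visible as a table, DECODE is ordinary SQL
    INSERT INTO target_tab (status_desc, name)
    SELECT DECODE(code, 'A', 'Active', 'I', 'Inactive', 'Unknown'), name
    FROM   src_file_ext;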

    Hi all,
    If you want this mapping to validate correctly, you must:
    1. Right-click on the mapping, then Configure
    2. Right-click on Sql Loader Data Files, then Create
    3. Verify that the location name is correct and complete the Data File Name
    4. OK
    The validation is now OK.
    I hope this helps you
    Best Regards
    Samy
