Flat file as a source in a mapping

Dear all
I noticed that when you create a mapping with a Flat File operator as a source, OWB generates SQL*Loader code instead of a PL/SQL package.
Is there any parameter you have to change in order to generate PL/SQL code or, if not, how can you convert SQL*Loader code into PL/SQL?
Any help would be appreciated

Hi,
OWB generates SQL*Loader code for the Flat File operator; if you try to change the configuration parameter to PL/SQL it results in an error.
External tables would be a better choice, as they reduce processing time and disk space and also make it possible to use SQL and other complex transformations.
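By way of illustration, a minimal sketch of the external table approach (the directory path, file name and columns below are hypothetical examples, not from the original post):

-- Point a directory object at the folder that holds the flat file.
CREATE OR REPLACE DIRECTORY src_dir AS '/mnt/folderA/folderB';

-- Describe the file's layout once; Oracle then exposes the file as a table.
CREATE TABLE customers_ext (
  customer_id NUMBER,
  first_name  VARCHAR2(50),
  last_name   VARCHAR2(50)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY src_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
    MISSING FIELD VALUES ARE NULL
  )
  LOCATION ('source_file.txt')
)
REJECT LIMIT UNLIMITED;

-- The file can now be queried like any table, so PL/SQL mapping code
-- (joins, transformations) can read it directly:
-- SELECT * FROM customers_ext;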

Similar Messages

  • Flat file as a source - now more confused than before.

    Sorry to open this topic again - I just got confused about what needs to be done (I did read all of the posts on the previous topic - honest!) and about who did what and how it needs to be done. It's hard to be stupid sometimes :(
    here is what i have:
    OWB server (target + runtime repository) = RHEL 4
    Source (the location of the flat files) = shared drive/folder on Windows
    OWB client (my machine) = Windows (unfortunately)
    I followed the steps described in
    http://download-uk.oracle.com/docs/cd/B31080_01/doc/owb.102/b28223/def_flatfiles.htm#CHDHBECD
    created a samba mount point from the OWB server to the source files:
    /mnt/folderA/folderB/source_file.txt
    mapped the shared drive from the source onto my machine:
    Z:/folderA/folderB
    created a location for the flat files in OWB:
    TEST_LOCATION = \\windows\folderA\folderB\
    imported the flat files into OWB (had to go through the sample)
    created the map 'TARGETLOAD' - a simple load of the data from the flat file into the table
    validation produced the following message:
    VDL-2398: No data file specified. Generated code will use the sampled file name and the file location 'TEST_LOCATION'
    --which is fine, isn't it? It should be using this file in that location
    generation went OK.
    deployment produced this error:
    SQL*Loader-500: Unable to open file (/u01/app/oracle/product/10.2.0/owb_1/owb/bin/unix/\windowsfolderAfolderBTARGETLOAD.ctl)
    SQL*Loader-553: file not found
    SQL*Loader-509: System error: No such file or directory
    I checked this directory - there are 2 files there that were created by OWB:
    \\windows\folderA\folderB\TARGETLOAD.ctl
    \windowsfolderAfolderBTARGETLOAD.log
    Could anyone please enlighten me on what is going on? What do I have to change/amend/edit/tweak in order to solve this? All I want is to be able to load the data from a flat file on the source Windows box into the table on the OWB server.
    Any help/thoughts would be greatly appreciated.

    Hi GB
    Thank you very much for your reply.
    I cannot use external tables - my superiors decided against it, so I am lumbered with flat files (even the Jedi mind trick didn't work!).
    What I do is import the flat files from the Windows box. The location is specified as \\windows\folderA\folderB
    --Mount the drive onto unix then register the files location as the unix path.
    I have a samba mount point from the unix box to this Windows location. But what I can't do is select it as a location. I even tried mapping this mount point onto the Windows box, so that it looks like \\unix\mnt\folderA\folderB - and of course when I execute, I get the same error.
    I even tried configuring the SQL*Loader (right click on the map and select Configure -- Create), but I cannot do it even that way, because every time I try to create it, OWB automatically assigns the Windows path to it.
    IT told me to let them know what I need and they will create it (map the drive etc). The problem is, I can't think of how to do it! And I do not have the privileges to start 'trying things out'... well, they say if I break their security one more time, I am in trouble!
    I know it sounds sad, but could you please let me know what I need to have/do in order to make it work.
    Hope you do not mind my request, and thank you very much for all your kind help and advice
    Kind Regards
    Vix

  • Extraction, Flat File, Generic Data Source, Delta Initialization

    I have a couple of questions regarding data extraction.
    1. If you have as a data source a flat file, e.g. an Excel file, I know that you have to create the DataSource on the BW side. How do you upload updates - by selecting Delta Update when executing the next data load? Do you ever "convert" this Excel file into application tables to become an SAP source?
    2. Can you please give me an example of a situation where you had to create a generic DataSource? What is the difference between Time Stamp, Calend. Day and Numeric Pointer? Which one is most common to select?
    3. I read somewhere that a generic DataSource does not have a setup table. I thought that you have to have a setup table in order to load transaction data, otherwise you will lock the application tables. Where am I going wrong in my thinking, please?
    4. What are the steps in terms of InfoPackages before, during and after delta initialization? I believe that you can do it two ways:
    Full Update -> Initialize Delta Process (without Data Transfer) -> Delta Update, or
    Initialize Delta Process (with Data Transfer) -> Delta Update
    Am I right? What is the most common method and why?
    5. If you want to add a field to a DataSource after using it for 6 months, and you want to do it without re-initializing the delta queue: you add the field in RSA6, then provide info for ABAP to populate the new field (info: name of DataSource, extract structure, field added, name of the application table which contains the field). How does it work now that there is no setup table, it having been deleted after initialization? How does the delta queue know that it is going to receive data which has been expanded by one field, or does it not need to know at all?
    Thanks!

    Hi,
    1. If you have as a data source a flat file, e.g. an Excel file, I know that you have to create the DataSource on the BW side. How do you upload updates - by selecting Delta Update when executing the next data load? Do you ever "convert" this Excel file into application tables to become an SAP source?
    Once you create a DataSource for flat file extraction it is specific to the file source system, hence you cannot change it to an application-table DataSource.
    In the InfoPackage you can change the source to the application server instead of the desktop; there is no need to change the DS.
    2. Can you please give me an example of a situation where you had to create a generic DataSource? What is the difference between Time Stamp, Calend. Day and Numeric Pointer? Which one is most common to select?
    When we don't find a suitable standard extractor we go for a generic one (e.g. if I want sales information along with finance information in one DataSource, we generally don't get a standard one, hence we go for a generic DS).
    Check the below link for more about generic DSs:
    http://wiki.sdn.sap.com/wiki/display/BI/Generic+Extraction
    For delta capturing you can use:
    Timestamp (if the table has a timestamp field, the last changed records are stamped in that table, hence it is easy to get the delta based on the timestamp)
    Calday (if the table doesn't have a timestamp field, look for a calendar-day field, so the delta can be derived from the date on which documents were changed)
    Numeric pointer (if the table has neither of the above, we go for this option, where the delta is derived from an ever-increasing numeric value)
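    Conceptually, a timestamp-based delta is just a filter on the last extraction time. A schematic SQL illustration (the table and field names are hypothetical; in BW this selection happens inside the extractor, not in hand-written SQL):
    -- Pick up only the records changed since the previous delta run;
    -- :last_delta_timestamp is remembered by the delta mechanism.
    SELECT *
      FROM sales_documents
     WHERE changed_at > :last_delta_timestamp;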
    3. I read somewhere that a generic DataSource does not have a setup table. I thought that you have to have a setup table in order to load transaction data, otherwise you will lock the application tables. Where am I going wrong in my thinking, please?
    A generic DataSource is nothing but extracting data directly from the database table, without any interface between the applications/systems, which is why no setup table is involved.
    4. What are the steps in terms of InfoPackages before, during and after delta initialization? I believe that you can do it two ways:
    Full Update -> Initialize Delta Process (without Data Transfer) -> Delta Update, or
    Initialize Delta Process (with Data Transfer) -> Delta Update
    Am I right? What is the most common method and why?
    Correct
    5. If you want to add a field to a DataSource after using it for 6 months, and you want to do it without re-initializing the delta queue: you add the field in RSA6, then provide info for ABAP to populate the new field (info: name of DataSource, extract structure, field added, name of the application table which contains the field). How does it work now that there is no setup table, it having been deleted after initialization? How does the delta queue know that it is going to receive data which has been expanded by one field, or does it not need to know at all?
    Once you add the new field to the structure (DS) you will get data for it from that date onwards, not historical data, so the setup table does not come into it (delta records come from the delta queue, not from the setup table).
    If you want historic data for the new field then you need a setup table deletion and reload, etc.
    Hope it is clear.
    Regards,
    Satya

  • Flat file as a source table in my mapping

    Hi all..
    I am using OWB 10gR2. When I debug a mapping with a flat file and one target table, it throws an error that it doesn't support flat files and that you must access them through an external table.
    But I don't want to use an external table; I want the flat file to be my source file.
    Please help me out.
    Thanks in advance,
    Bharath.
    Edited by: user11342336 on Aug 11, 2009 4:26 AM

    hi sutirtha. You have already solved one of my issues. Hope you will solve this too :)
    Yes, it doesn't support flat files, but I am getting the following error:
    "DBG1047 :Mapping flat file operators are not supported as sources when running mappings in debug mode. To access flat files use external table operators"
    But I want to run or debug it without using external table operators.
    Thanks,
    Bharath,

  • Flat file conversion from source to destination

    Hi
    I'm new to SAP and was wondering if someone could help me with the following:
    I have a source text file (fields are FirstName and LastName) and would like to run it through PI and get as output a flat file with the FirstName and LastName concatenated.
    Example:
    SourceFile.txt:
    Jason
    Gates
    TargetFile.txt:
    Jason Gates
    The mapping works if I test it using the test facility in the Enterprise Services Builder, without any files.
    How can I execute the mapping using plain text flat files?
    Thanks in advance!
    Additional information:
    Source file has structure:
    First Name
    Last Name
    Target file has structure:
    Full name
    The objective is to:
    • Create data, message and service interface structures to represent the source and target.
    • Create the mapping program between source and target.
    • Test that the mapping works internally in the builder using the internal test facilities.
    • Configure the directory to pick up the source file, and content convert it to XML using a sender file channel.
    • Configure the receiver determination, interface determination, receiver agreement and target file channel.
    • Test using files placed on the file server.
    • Troubleshoot using the Runtime Workbench.
    • Once this works, increase the complexity, e.g. multiple people in the source file, etc.
    Edited by: NetSpike on Feb 17, 2010 2:25 PM

    https://wiki.sdn.sap.com/wiki/display/XI/FLATFILETOFLATFILE

  • Create IDoc from flat file - How to do the message mapping

    Hi everybody,
    I want to create an IDoc from a flat file.
    The file structure looks like this:
    MT_XYZ
    - Node001       0...1
      - Leaf001
    - Node002       0...unbounded
       - Node003    0...1
          - Leaf002
          - Leaf003
       - Node004    0...1
          - Leaf004
       - Node005    0...1
          - Leaf005
    I have created a mapping and all the other stuff to set up the IDoc adapter.
    Now when I test the interface the IDoc is created properly, but obviously no information from the nodes "Node003" to "Node005" is inserted in the IDoc. Only the information kept in the elements (leaves) of "Node001" is inserted.
    All IDoc segments and their "Segment" elements are linked to the root node of the file structure "MT_XYZ". I tried to link some IDoc segments only to "Node002", but then those segments are not created.
    So how can I set up the message mapping in such a way that the information from "Node003" to "Node005" is transported to the IDoc? Can anyone help me here?
    Thanks in advance for all answers!
    Regards,
    Torben
    Edited by: Torben Hönemann on Dec 14, 2009 4:26 AM

    Hi Torben,
    >>I want to create an IDoc from a flat file.
    So you are using File Content Conversion on the sender side, right?
    >>no information from the nodes "Node003" to "Node005" is inserted in the IDoc.
    Is this information available in the source XML (you can check in transaction SXMB_MONI -> Input Payload)? There is a limitation in the File Content Conversion of the file adapter: it can only create an XML structure up to 3 levels. Since these node details are at levels 4 and 5, they are probably missing in the source XML structure itself (after content conversion by the file adapter). Check that.
    >>So how can I set up the message mapping in a way that the information from "Node003" to "Node005" is transported to the IDoc? Can anyone help me here?
    So you need to take an alternative approach: do the File Content Conversion into a 3-level structure, then use a mapping to convert this three-level structure into your 5-level one, and then map that to the IDoc.
    Check these blogs for ideas:
    http://www.riyaz.net/blog/xipi-file-content-conversion-for-complex-structures/
    http://www.riyaz.net/blog/xipi-convert-flat-file-to-deeply-nested-xml-structures-using-only-graphical-mapping/
    Regards
    Suraj

  • Dynamic flat file name as source

    Hi All,
    I am working on ODI 11g, version 11.1.1.3.
    I have a flat file, say ABC01, as source; the flat file was successfully loaded into the target.
    But the next day a new file, say ABC02, with the same structure was generated by the client.
    Every day a new file is generated: ABC03, ABC04 and so on.
    How can I use these files to load the data without changing the file name each time?
    Can it be done dynamically?
    Please give me suggestions.
    Thanks in advance
    Prashant

    Hi Prashant,
    You can achieve this by doing the following:
    1. Create a variable called FILE_NAME.
    2. Go to your file datastore and replace the Resource Name with the variable #FILE_NAME. You then need to pass the correct file name to the variable.
    3. To get the filename into the variable, you can look into the following thread:
    Read FileNames in Directory
    Hope it helps
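    If the daily names follow a predictable pattern, the variable's refreshing query can often build the file name itself. A minimal sketch (the ABC## day-of-month rule is an assumption for illustration; adjust to the real naming convention):
    -- Refreshing query for the FILE_NAME variable: yields e.g. ABC02.
    SELECT 'ABC' || TO_CHAR(SYSDATE, 'DD') FROM DUAL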

  • Is a database table required for temporary interfaces with a flat file data set source?

    Folks,  this is the situation I have in ODI 11.1.1.7
    I have a temporary interface (yellow), called MJ_TEMP_INT, that pulls data from TWO data sets in the source into a temporary target (TEMP_TARG). The catch is one data set pulls from a table whereas the other data set pulls from a flat file. A union is done on the data sets.
    I then create another interface, called MJ_INT, that uses MJ_TEMP_INT as a source and whose target is a real db table called REAL_TARGET.
    Two questions:
    When I execute my second interface (MJ_INT), I get the message "ORA-00942: table or view does not exist" because it is looking for a real db table TEMP_TARG. Why must I have one? Because I am pulling from a flat file?
    On my second interface (MJ_INT), when I look at the property sheet of my source interface MJ_TEMP_INT (yellow), the checkbox next to "Use temporary interface as Derived table" is DISABLED. Why? Is it also because my temporary interface is pulling from a flat file?
    I have attached a file that shows a screen shot of my ODI studio.
    By the way, IF my temporary interface source has only one data set pulling from a db table into a temporary target table, say called MJ_TEMP2_TARG, and I then use this temporary interface as a source to another real db target table (REAL2_TARGET), THEN everything works. ODI does not require me to have a real db table MJ_TEMP2_TARG, the checkbox for "Use temporary interface as Derived table" is NOT DISABLED, and my REAL2_TARGET table gets populated.
    Thank you in advance.
    M. Jamal.

    Thanks SH. I thought so.
    Though I understand the reason to materialize the file in a staging area, that almost defeats the purpose of having a temporary interface in this case, if we have to save the data in a permanent db table first. I assume the db table sticks around and is not automatically dropped once the interface execution ends. If the db table sticks around, then I also must truncate it before executing the temporary interface each time. Right?
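    A minimal sketch of the pre-step this implies, assuming the TEMP_TARG table from the thread is kept between runs:
    -- Clear the materialized temporary target before each load so stale rows
    -- from the previous run are not carried into REAL_TARGET.
    TRUNCATE TABLE TEMP_TARG;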

  • How to import flat file as a source in DAC Client

    Dear All,
    I am doing some R&D and I have a problem: how can I import a flat file, which I am using as a source, into the DAC client?
    Can anyone please explain in detail, step by step?
    Regards
    Tarang Jain

    Hi Tarang,
    How long should it take to run the sample execution plan? When I run the 'CRM Automotive' EP, it runs for more than 3 hours and it is still running. My system config is 1 GB RAM with a Windows OS. I have also opened the DAC server and client in console mode and I am not able to see any errors. I have only got the task 'Load row into run table' completed successfully, and the next 10 have been in Running state for the past 2 hours.
    My DAC client has continuously shown the message below for 1 hour:
    'Sep 30, 2008 3:52:59 PM com.siebel.analytics.etl.client.data.model.ConnectionHandler getConnection
    INFO: Getting connection for: Model structure for W_ETL_RUN_STEP (RUN_STEP) (currently 0)
    Sep 30, 2008 3:52:59 PM com.siebel.analytics.etl.client.data.model.ResultSetParser logMinutesSeconds
    INFO: Query took 0 minutes, 0 seconds
    Sep 30, 2008 3:52:59 PM com.siebel.analytics.etl.client.data.model.ConnectionHandler cleanUp
    INFO: Returning connection for: Model structure for W_ETL_RUN_STEP (RUN_STEP) (currently 1)
    Sep 30, 2008 3:52:59 PM com.siebel.analytics.etl.client.data.model.ResultSetParser logMinutesSeconds
    INFO: Populating DACObjects took 0 minutes, 0 seconds
    Sep 30, 2008 3:52:59 PM com.siebel.analytics.etl.client.data.model.DACChildTableModel setParentID
    INFO: PARENT ROW ID IS: 95D45F88DBB268B3D19DB71B4E42D3FF
    Sep 30, 2008 3:52:59 PM com.siebel.analytics.etl.client.data.objectcomponents.ModelStruct setQuery
    INFO: SELECT
    INFO: SELECT
    RUN_STEP.DEPTH
    ,RUN_STEP.STEP_NAME
    ,RUN_STEP.STATUS
    ,RUN_STEP.START_TS
    ,RUN_STEP.END_TS
    ,RUN_STEP.STATUS_DESC
    ,RUN_STEP.PHY_SRC_NAME
    ,RUN_STEP.PHY_TRGT_NAME
    ,RUN_STEP.PHY_FOLDER_NAME
    ,RUN_STEP.SERVER_NAME
    ,RUN_STEP.PHASE_NAME
    ,RUN_STEP.EXEC_TYPE_NAME
    ,RUN_STEP.SUCESS_ROWS
    ,RUN_STEP.FAILED_ROWS
    ,RUN_STEP.ERROR_CODE
    ,RUN_STEP.DEFN_STEP_WID
    ,RUN_STEP.ROW_WID
    ,RUN_STEP.LAST_UPD
    ,RUN_STEP.APP_WID
    ,RUN_STEP.RUN_WID
    ,RUN_STEP.STEP_WID
    ,RUN_STEP.PHY_SRC_WID
    ,RUN_STEP.PHY_TRGT_WID
    ,RUN_STEP.PHY_FOLDER_WID
    ,A_W_ETL_APP.ROW_WID
    ,A_W_ETL_APP.NAME
    FROM W_ETL_RUN_STEP RUN_STEP
    INNER JOIN W_ETL_APP A_W_ETL_APP ON RUN_STEP.APP_WID=A_W_ETL_APP.ROW_WID
    WHERE
    (RUN_STEP.RUN_WID='95D45F88DBB268B3D19DB71B4E42D3FF')
    ORDER BY
    RUN_STEP.START_TS ASC,
    RUN_STEP.DEPTH ASC'
    Any comments on this?
    I had also copied the CRM Analytics source files into "D:\Informatica\PowerCenter8.1.1\server\infa_shared\SrcFiles"
    as mentioned in the appendix 'Configuring Universal Adapter for CRM Analytics'.
    With Regards,
    Prads

  • Flat file should have source field name and value

    Hi Friends
    I have an IDoc to flat file scenario where, in the target structure, the field name and the value should both be present; generally only the value is available in a flat file. The target structure is as below:
    e.g. RECTYPE is the field name and A is the value.
    RECTYPE,A
    DATEH,20111101
    TIMEH,173125
    RECTYPE,B
    ORDNUM,4500054536
    ORDITM,150
    SUPDAT,20090218
    PLNQTY,000000006
    MATNR,14B300
    BATCH,5697
    PLANT,3026
    DELIVTYPE,PO
    SUPPLIER,0000023305
    SUPNAME,Deutsche BP AG
    SUPADRS,Erkelenzer Strasse 20
    SUPCITY,Monchengladbach
    SUPPOST,41179
    SUPCOUN,DE
    TRUCK_NBR,7589
    BOND_FLG,X
    STG_LOC,PLCL
    LINE_ACTION,CRE
    RECTYPE,B
    ORDNUM,4500056721
    ORDITM,10
    SUPDAT,20090218
    PLNQTY,000000013
    MATNR,116703
    BATCH,6589
    PLANT,3026
    DELIVTYPE,PO
    SUPPLIER,0000023380
    SUPNAME,DOW Belgium NV
    SUPADRS,Havenlaan 7
    SUPCITY,Tessenderlo
    SUPPOST,3980
    SUPCOUN,BE
    TRUCK_NBR,7589
    BOND_FLG,X
    STG_LOC,PLCL
    LINE_ACTION,CHG
    RECTYPE,X
    DATEH,20111101
    TIMEH,173125
    RECORDS,3

    Hi,
    NameA.addHeaderLine
    Specify whether the text file will have a header line with column names. The following values are permitted:
    ■ 0 - No header line
    ■ 1 - Header line with column names from the XML document
    ■ 2 - As for 1, followed by a blank line
    ■ 3 - Header line is stored as NameA.headerLine in the configuration and is applied
    ■ 4 - As for 3, followed by a blank line
    This specification is only permitted if exactly one structure is defined.
    regards,
    ganesh.

  • Problem transforming a flat file to a structured Data Type and mapping to an IDoc

    Hi all,
    I have a file-to-IDoc scenario.
    The information is like this:
    1#!445#!AI12#!1#!20070214#!DVXXXXR#!201#!31GINHG876#!#!
    2#!#!ETC
    3#!000000000030008888#!#!3000#!#!10#!#!20070215
    4#!dades45#!b#!c#!d#!e#!f#!g
    5#!pos5
    where 1, 2, ... = key segments for the file adapter and #! is the field separator.
    this must go to a Data Type with substructures like this:
    (xml)
    .1
    ......2
    .3
    ......4
    ......5
    where 2 is inside 1 and 4-5 inside 3.
    The reason for this is that we can get unbounded repeats of 3/4/5 for one header.
    But XI reads the information as if all segments were headers.
    .1
    .2
    .3
    .4
    .5
    So the IDoc is created incorrectly.
    Where is the issue? In the Message Mapping all substructures are mapped with their defaults, and with the TEST option it works fine.
    Thanks in advance for your help.
    best regards
    Message was edited by:
            Federico Martin

    Dear people,
    due to license problems at the client, it's impossible to get the conversion agent.
    So now we have come back to trying to solve it with normal mapping.
    The next step is to try to convert it using 2 message mappings: DT (flat XML) to DT (structured), and that one to the IDoc.
    I suppose it can be done by adding a program at the interface mapping.
    I created the non-hierarchical data type to load the information and it's fine. But I am unable to get the first mapping OK, because XI has problems with the unbounded segments.
    If you agree and have time, I copy here the information and its structure.
    I receive (#! defines the separation and the key segment fields are 1,2,3,4,5):
    1#!445#!AH02#!1#!20070214#!DVPOSTER#!201#!31GINHGIN0#!#!
    2#!#!ETC
    3#!000000000030008888#!#!3000#!#!10#!#!20070215
    4#!dades45#!b#!c#!d#!e#!f#!g
    3#!000000000030008888#!#!3000#!#!10#!#!20070215
    3#!000000000030008888#!#!3000#!#!10#!#!20070215
    4#!dades45#!b#!c#!d#!e#!f#!g
    5#!pos5
    5#!pos5
    5#!pos5
    The destination must be:
    CASE A
    1 (1..1)
    ....2 (1..1)
    3 (1..unbounded)
    ....4 (0..1)
    3 (1..unbounded)
    3 (1..unbounded)
    ....4 (0..1)
    5 (1..unbounded)
    5 (1..unbounded)
    5 (1..unbounded)
    CASE B
    Or, in another case, with 5 as a child of 3 (like the example in the last mail):
    1#!445#!AH02#!1#!20070214#!DVPOSTER#!201#!31GINHGIN0#!#!
    2#!#!ETC
    3#!000000000030008888#!#!3000#!#!10#!#!20070215
    4#!dades45#!b#!c#!d#!e#!f#!g
    5#!pos5
    3#!000000000030008888#!#!3000#!#!10#!#!20070215
    5#!pos5
    3#!000000000030008888#!#!3000#!#!10#!#!20070215
    4#!dades45#!b#!c#!d#!e#!f#!g
    5#!pos5
    1 (1..1)
    ....2 (1..1)
    3 (1..unbounded)
    ....4 (0..1)
    ....5 (1..unbounded)
    3 (1..unbounded)
    ....5 (1..unbounded)
    3 (1..unbounded)
    ....4 (0..1)
    ....5 (1..unbounded)
    3 (1..unbounded)
    ....4 (0..1)
    ....5 (1..unbounded)
    Questions:
    How do I map Data Type (non-hierarchical) -> Data Type (hierarchical), and Data Type (hierarchical) -> IDoc? Is any loop or context object required?
    Thank you in advance for your help, and sorry for the long thread.

  • EDI flat file to X12 xml using XSLT mapping

    Hi all,
    I have a scenario EDI file -> XI -> file. Here on the source side it is a txt IDoc document. I have created an XSLT mapping to convert the txt document to X12 XML.
    Can anybody please suggest what message type I need to choose for the source inbound message?
    Thanks
    -Kulwant

    Hi
    It is not very clear from what you have explained above:
    1) What's the format when the message enters XI?
    2) At which stage of the flow is this XSLT mapping located?
    3) When you say that your XSLT converts txt to XML, what ROOT tag do you use?
    4) What's your incoming message structure?
    Make the above clear for better answers.
    Regards
    Vishnu

  • OBIEE flat file download as source to ODI

    Hello there,
    I have downloaded output from OBIEE Answers in "data" format (tab delimited).
    I now want to use this as a source for ODI ELT.
    Steps:
    1- I create a model
    Technology = File
    Logical Schema = FILE_DEMO_SRC
    2- Datastore.
    I apply the following:
    Resource name = location of .csv file
    File Format = Delimited
    Heading = 1 (Number of lines)
    Field Separator = tab
    Record Separator = MS-DOS
    3- Click reverse on the columns screen
    The following error occurs:
    SQLException: ORA-12899: value too large for column "ORA_DI_WORK"."SNP_REV_COL"."COL_HEADING"
    Can someone tell me if I am missing a step for using a tab-delimited file as a data source?
    Note that every column heading is below 30 characters in the extract.
    Thanks,

    Thanks John,
    I've tried this without the header row.
    When viewing the data it shows the first record only, and there are squares between each character in the columns.
    It looks like it's having great difficulty parsing the file...
    I've also tried changing the record separator to Unix.
    This resulted in all the rows being returned, but still with the little squares between each character.
    I wonder if anyone else has tried using an OBIEE data download (from Answers) as a source to ODI?
    Regards,

  • Create source structure - need to create a flat file or a table?

    Hi All,
    Can anyone let me know what needs to be created? Should it be a flat file or a table in the database?
    I am going to receive flat files from the source system.
    If it is a flat file, is there any way to create the flat file structure in DS using SQL query statements?
    Thanks
    Rajeev

    You can create a table with the same structure as the file and use the table instead, but how are you going to populate the table with the data from the flat file? You will need some application to do that. You can do this in DS by creating a file format, using the source file as input for that file format, and creating a template table as the target in the database datastore to which you want to load the data from this source file; then use that table as a source in other dataflows.
    You can also create the table with the same structure in the database, import it into the datastore, and use that as the target for the source file.

  • Loading data from flat file...

    Hello,
    I am experiencing a problem where I can't load data from a flat file into a table. I have reverse-engineered the flat file into ODI, but the thing is I don't know how to load the data that was reversed into an RDBMS table. Also, I don't know how to create this RDBMS table from within ODI so that it is reflected in the DB, nor how to load the data from the flat file into this table without having to add the columns manually in order to map between the flat file and the table.
    In conclusion, I need to know how to create an RDBMS table from within ODI on the database, and how to automatically map the flat file to the DB table and load the data into it.
    Regards,
    Hossam

    Hi Hossam,
    We can use an ODI procedure to create the table in the DB.
    Make sure you keep the column names in the table the same as the column names in the FLAT FILE so that it can automatically map the columns; see the sketch below.
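    A minimal sketch of the kind of DDL such a procedure step might run (the table and column names are hypothetical, chosen to match the flat file's fields):
    -- ODI procedure step (Oracle technology) creating the target table.
    -- Column names deliberately match the flat file fields so the columns
    -- can be mapped automatically by name.
    CREATE TABLE customer_stg (
      first_name VARCHAR2(50),
      last_name  VARCHAR2(50),
      city       VARCHAR2(50)
    );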
    And regarding loading data from a FLAT FILE (i.e. when our source is a FLAT FILE): up to ODI 10.1.3.4 we need to insert the datastore manually, since the file system cannot be reverse-engineered.
    Please let me know, Hossam, if I can assist you further.
    Thanks and Regards,
    Andy
