ODI target as flat file -- split

Hi, can anyone let me know how to split the flat-file output in ODI? I know there is a tool called xmlsplit for XML files, but I don't know how we can split flat files. Please guide me on this. Thank you!

1. Create two ODI variables, min_row_count and iteration_counter, and set both to 1.
2. Create another variable, max_row_count, and set it to 50000.
3. Create another variable, total_row_count, with a query in its refresh block: SELECT COUNT(*) FROM your_source_table
4. Create a temporary interface (or, if you are not comfortable with that, create the target table and reverse-engineer it into an ODI model) that uses your table as source and maps all the columns plus an extra column (let us call it rownbr) mapped as ROW_NUMBER() OVER (ORDER BY <some other column>). Execute this if you are on 10g (it is not needed on 11g).
5. Create another interface that writes to a flat file. Use your temporary interface (or the table, if you created one) as source and the flat file model as target.
6. Create a filter like: source.rownbr BETWEEN #min_row_count AND #max_row_count
7. Find a suitable standard IKM that loads to a file and use it for the interface. Generate a scenario.
8. Create a package that sets iteration_counter to 1, then invokes the scenario in the next step. If it succeeds, run an OS command that copies the output file to <some name>.#iteration_counter.dat in the desired directory. Once the command succeeds, increment iteration_counter by 1, and increment min_row_count and max_row_count by 50000. Test whether min_row_count > total_row_count. If yes, exit the loop (and send a mail, for example); otherwise branch back to the scenario run step. What you have is a loop that iterates the scenario over your table 50,000 rows at a time.
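The variable bookkeeping in the package loop above (counters, chunk window, file naming) can be sketched outside ODI. This is a hypothetical Python illustration of the logic only, not ODI code; the function and file names are made up:

```python
def export_in_chunks(rows, chunk_size=50000, base_name="out"):
    """Mimic the package loop: emit the source 'chunk_size' rows at a time.

    Returns (filename, chunk) pairs; in ODI the filter
    rownbr BETWEEN #min_row_count AND #max_row_count selects each chunk.
    """
    total_row_count = len(rows)
    min_row_count, max_row_count = 1, chunk_size
    iteration_counter = 1
    files = []
    while min_row_count <= total_row_count:
        # rownbr is 1-based, like ROW_NUMBER() OVER (...)
        chunk = rows[min_row_count - 1:max_row_count]
        files.append((f"{base_name}.{iteration_counter}.dat", chunk))
        iteration_counter += 1
        min_row_count += chunk_size
        max_row_count += chunk_size
    return files

# 120 rows with a chunk size of 50 produce out.1.dat, out.2.dat, out.3.dat
files = export_in_chunks(list(range(120)), chunk_size=50)
```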

Similar Messages

  • Odi target as xml

Hi,
I have a column in the XML target, and the data that gets loaded into this column (type varchar) has special symbols, for example:
AT&T
Bachelor's
When I run the interface, the output that I get is:
AT&amp;T
Bachelor&apos;s
How can I solve this problem?
Thank you!
P.S. The XML encoding is UTF-8.
    Edited by: 959411 on Oct 10, 2012 9:56 AM

Does anybody have an idea how to print special characters in an XML file? ODI prints the XML entities rather than the symbols such as & and ' in the output XML file.
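Worth noting: `&amp;` and `&apos;` in the file are well-formed XML, since `&` and `'` must be escaped inside element content, and any XML-aware reader turns them back into the literal characters. A minimal Python illustration:

```python
from xml.sax.saxutils import escape, unescape

# Writers must escape special characters in element content...
assert escape("AT&T") == "AT&amp;T"

# ...and XML-aware readers reverse it automatically.
assert unescape("AT&amp;T") == "AT&T"
# &apos; is not in unescape's default entity map, so supply it explicitly.
assert unescape("Bachelor&apos;s", {"&apos;": "'"}) == "Bachelor's"
```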

  • Odi-target table-primary key

    hi,
    Is it compulsory to have a primary key constraint on the target table in ODI?
    Can we have a target table without any primary key constraint?

    Hi,
    Yes, you can have a target table without a PK.
    Only for IKM Incremental Update do you need to define an update key (which does not have to exist in the back end). For IKM Control Append you do not need a PK defined.
    Thanks,
    Guru

  • Reading fixed length flatfile & splitting it into header and lineitem data

    Hi Friends,
    I am reading a fixed length flat file from application server into an Internal table.
    But the problem is that the data in the flat file is in the format below, with fixed start and end positions.
    1 - 78 -  control header
    1 - 581 - Invoice header data
    1 - 411 - Invoice Line item data
    1 - 45 -   trailer record
    There will be one control header and one trailer record per file, and the number of invoice headers and their line items can vary.
    There are unique identifiers to distinguish them, as below:
    Control header - starts with 'CHR'
    Invoice header - starts with '000'
    Invoice line item - starts with '001'
    Trailer record - starts with 'TRL'
    So its like
    CHR.......control  data..(79)000.....header data...(660)001....lineitem1...(1481)001...lineitem2....multiples of 411 and 581 and ends with... TRL...trailer record..
    (position)
    I first read the data set and store it in an internal table with a single 255-char field.
    By looping over the above ITAB I have to split it into header records and line item records.
    Did anyone face this kind of scenario before? If yes, I would appreciate any ideas on the logic to split this data.
    Any help in splitting up the data is highly appreciated.
    Regards,
    Simha
    ITAB declaration
    DATA: BEGIN OF ITAB OCCURS 0,
                   FIELD(255),
               END OF ITAB,
                lt_header type table of ZTHDR,
                lt_lineitem type table of ZTLINITM.

    Hi,
    I am sending sample code which resembles your requirement.
    data: BEGIN OF it_input OCCURS 0, " stores all the data, one line per row
            line type string,
          END OF it_input,
          it_header type TABLE OF string WITH HEADER LINE, " headers with corresponding items
          it_item   type TABLE OF string WITH HEADER LINE. " item data
    CALL FUNCTION 'GUI_UPLOAD'
      EXPORTING
        filename                = 'd:\test.txt'
        filetype                = 'ASC'
        has_field_separator     = ' '
        header_length           = 0
        read_by_line            = 'X'
        ignore_cerr             = abap_true
        replacement             = '#'
      TABLES
        data_tab                = it_input
      EXCEPTIONS
        file_open_error         = 1
        file_read_error         = 2
        no_batch                = 3
        gui_refuse_filetransfer = 4
        invalid_type            = 5
        no_authority            = 6
        unknown_error           = 7
        bad_data_format         = 8
        header_not_allowed      = 9
        separator_not_allowed   = 10
        header_too_long         = 11
        unknown_dp_error        = 12
        access_denied           = 13
        dp_out_of_memory        = 14
        disk_full               = 15
        dp_timeout              = 16
        OTHERS                  = 17.
    IF sy-subrc <> 0.
      MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
              WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
    ENDIF.
    LOOP AT  it_input.
       write : it_input-line.
    ENDLOOP.
    Before doing the steps below, cut the control record and the trailer record off from the input table, based on their lengths.
      split it_input-line AT '000' INTO TABLE it_header IN CHARACTER MODE.
      LOOP AT it_header.
        split it_header AT '001' INTO TABLE it_item IN CHARACTER MODE.
        write :/ it_header.
      ENDLOOP.
    After this you need to cut the records into the corresponding fields of the respective tables, by looping over the item and header tables declared above.
    I think this may solve your problem; you will need to put in some minor extra effort.
    Regards,
    Sree
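The same prefix-based dispatch (CHR / 000 / 001 / TRL, from the question above) can also be done record by record instead of with SPLIT ... AT. A rough Python sketch of that idea, with made-up sample records:

```python
def split_records(lines):
    """Dispatch fixed-format records to header/item lists by their type prefix."""
    headers, items = [], []
    control = trailer = None
    for line in lines:
        if line.startswith("CHR"):      # control header, one per file
            control = line
        elif line.startswith("TRL"):    # trailer record, one per file
            trailer = line
        elif line.startswith("000"):    # invoice header
            headers.append(line)
        elif line.startswith("001"):    # invoice line item
            items.append(line)
    return control, headers, items, trailer

recs = ["CHRxxx", "000inv1", "001itemA", "001itemB",
        "000inv2", "001itemC", "TRLyyy"]
control, headers, items, trailer = split_records(recs)
```

With real fixed-length data you would additionally slice each record into fields by position, as the original poster describes.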

  • 1 eBS source - 2 Planning targets - how to split data?

    hi
    I am using FDM ERPI to load eBS data to Hyperion Planning (eBS 11.5 and EPM 11.1.2.1).
    I have to load data into 2 Planning databases (which are in the same Planning application), so 2 target adapters and two locations.
    The two locations have the same source and the same mapping.
    But some accounts have to go to app1, whereas the others have to go to app2.
    But there is no information in GL about the target app.
    How could I do this?
    Use the EPMA table or a source file to see in which plan type each account is used?
    And when should I exclude the accounts: in the import action? In the exportToDat?
    Thanks in advance for your help, hoping this is clear...
    Fanny

    Hi
    Thanks to all of you for your answers.
    Finally, I chose to use the BefExportDat script: I create a temporary table into which I load the list of accounts I want to keep for the location. Then I delete from the TDATASEGxx table the rows related to the accounts I don't want to keep.
    There are lots of accounts and no easy way to identify which to load and which not.
    So using the mapping rules would create lots of rules, which is not very user friendly, even if I import the rules.
    But I wanted to use the drill-through from both applications :-(
    Fanny
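Fanny's approach (stage the wanted accounts in a temporary table, then delete everything else from the data segment) can be sketched in SQL. The table and column names below are illustrative stand-ins, not the real TDATASEGxx layout; sqlite is used here only to make the sketch runnable:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Stand-ins for TDATASEGxx and the temporary accounts-to-keep table
cur.execute("CREATE TABLE tdataseg (account TEXT, amount REAL)")
cur.execute("CREATE TEMP TABLE keep_accounts (account TEXT)")
cur.executemany("INSERT INTO tdataseg VALUES (?, ?)",
                [("1000", 10.0), ("2000", 20.0), ("3000", 30.0)])
cur.executemany("INSERT INTO keep_accounts VALUES (?)",
                [("1000",), ("3000",)])

# Delete the rows whose account is not wanted for this location
cur.execute("DELETE FROM tdataseg WHERE account NOT IN "
            "(SELECT account FROM keep_accounts)")
remaining = [r[0] for r in
             cur.execute("SELECT account FROM tdataseg ORDER BY account")]
```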

  • Split the Target Structure.

    Hi,
    I have Header and Detail in my target structure. In the output I have to create 2 separate files, one for Header and one for Detail. I don't want to go for BPM; is there any solution for this without using BPM?
    Thanks & Regards,
    Pragathi.

    Hi Pragathi
    Assuming you mean you have Header and Detail in the source structure and would like to create 2 messages on the target side, you have 2 options (since you mentioned you don't want to use BPM):
    1. Create 2 sets of message interfaces and related objects for the target structure and split the flow into 2, mapping the header to one message and the item to the other.
    2. Use a multi-mapping and map the header to message1 and the item to message2. This is the better option, as the interface will use one map for creating the target messages, and you can pass them on to 2 communication channels that will create the files.
    Hope this helps.
    Regards
    Prav

  • Message splitting 1:n without BPM error : 404   Not Found

    hi,
    Is your server updated with SPS14? Please check this once; it may be the problem.
    Thanks,
    Madhav.
    Note: points if useful

    hi,
    Please go through the limitations in this blog:
    A mapping-based message split will ultimately produce n individual messages, but not until it reaches the Adapter Engine (AE). Inside the Integration Engine (IE), the messages are grouped together and persisted as one bulk message. The bulk message is sent to the AE, which splits it into individual messages and persists them.
    Restrictions
    • Messages that result from the split in a mapping-based message split are sent using one AE. So only adapters running on the AE are supported. In particular, this means that target IDoc/HTTP message splits are not supported, since the IDoc/HTTP adapter is not part of the AE.
    • The target system of the message split cannot be an integration process.
    • Attachments from the original message are not appended to the messages resulting from the message split.
    So I think you won't be able to send the 2 different files to different locations; both files can only be sent together. When you configure the receiver determination in ID you need to include both receiver services as receivers, and thus you need to configure 4 interface determinations as well as 4 receiver agreements, because you are using only one interface mapping containing both interfaces.
    regards,
    navneet

  • Splitting a RFC

    Dear all,
    we receive the RFC CONTROLRECIPEDOWNLOAD (CRD) from the SAP system.
    Because a CRD can contain two (or more) messages for different receivers,
    we need to split the incoming messages and save them as files on the
    PI file system. After this split, every message contains only one recipe.
    We then read the messages from the file system and send them to a target system
    depending on the value in field WERK (Plant). Without splitting, we would send all contained
    recipes to the first plant we find via XPath.
    My questions:
    - The scenario does not use a BPM because we would like to avoid one (performance, monitoring ...).
      Would you use a BPM here?
    - How can I do the splitting in the best way? In the splitting step I would not like to do a value or structure mapping,
      just split the messages into single messages.
      I would take the RFC as source and target, set the occurrence on the signature tab to 0..unbounded for the target structure,
      and split the message for every header that appears.
      Would you do it the same way, or is there another (better) way?
    Thanks
    Regards
    Chris

    Hi,
    You can avoid using the SAP PI file system for storing the intermediate file.
    In the receiver determination step, use extended receiver determination to determine all the receivers from the value of field WERK (Plant).
    In the interface determination step, use a corresponding mapping for each plant, or use the same common mapping with different parameter values per plant, to filter out the CRD for that particular receiver (plant).
    Thanks,
    Arindam
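The per-plant filtering Arindam describes (one mapping per receiver that keeps only the recipes whose WERK matches) can be sketched as a small XML filter. The element names below are assumptions for illustration, not the real CONTROLRECIPEDOWNLOAD structure:

```python
import xml.etree.ElementTree as ET

# Assumed toy structure: one CRD bundling several recipes for different plants
SRC = """<CRD>
  <RECIPE><WERK>1000</WERK><NAME>r1</NAME></RECIPE>
  <RECIPE><WERK>2000</WERK><NAME>r2</NAME></RECIPE>
  <RECIPE><WERK>1000</WERK><NAME>r3</NAME></RECIPE>
</CRD>"""

def recipes_for_plant(xml_text, werk):
    """Keep only the recipes whose WERK matches the target plant."""
    root = ET.fromstring(xml_text)
    return [r.findtext("NAME") for r in root.findall("RECIPE")
            if r.findtext("WERK") == werk]

# The receiver determination step would derive the receiver list like this
plants = {r.findtext("WERK") for r in ET.fromstring(SRC).findall("RECIPE")}
```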

  • Populating the time dimension in ODI

    I need to populate my time dimension in ODI. I read a solution in this forum suggesting to create a time table/view in the source schema, reverse-engineer it in ODI, and then use it as a source to populate the time dimension. Is there another way to do this? One way I thought of was to use the ORDERDATE field in my ORDERS table (my source table in Oracle) and map it to my time dimension in SQL Server via an interface. But I also have DUEDATE, SHIPDATE and PAYDATE fields in my ORDERS table, and this approach would mean I have to map them through separate interfaces to the time dimension as well. I have created a procedure in the source schema (Oracle) and want to use it in ODI to populate the time dimension, but I am not sure whether that is possible in ODI. Could anyone help me with this please?
    Regards,
    Neel

    Hi Neelab,
    Sorry for my delay in replying; I had no time in the last few days...
    To get the four distinct keys into your time dimension, just add four instances of the dimension table to the interface, each one joined to one of the columns.
    I believe you load your time dimension from some table other than PRJ_TBL_TRANSACTION, because you have the HolidayType column in your time dimension...
    A view is one possible way to load the time table, but it depends on how the query performs.
    One way to do it in ODI is:
    - Create 4 interfaces, one for each column, to load 1 single table with 1 single date column. Don't worry about duplicated values at this point; you can just use "IKM Control Append", which performs better, and check the "Distinct" box (flow tab) in each interface.
    - Create a last interface from this temp table as source to the time dimension target table. Here use "IKM Incremental Update", set the "Update" option to "No", and check the "Distinct" box.
    As this table will have no more than about 6,200 records for the last 20 years, it will be a small table where you shouldn't have performance problems.
    These are some possible solutions, but I would like to add another way to think about it.
    From the table you show here, you have a simple time table with no special features, so let me suggest another way.
    - In the current approach you will join but not get the records that "fail" the join, since they will be excluded if a date does not exist in the time dimension.
    My suggestion:
    - Load the time dimension table from your source table.
    - As PK in the time dimension table, use the "Julian day".
    - On the ODI target fact table (datastore), create 4 reference constraints (one per column) to the time dimension.
    - In the interface, do not use the dimension as a source; transform the 4 dates to Julian and let the 4 constraints take care of whether they exist in the dimension table or not.
    OR
    - Look for the minimum "possible" date at your company.
    - Populate your time dimension with every single day from then until a future date (Dec 31, for instance).
    - Create a process to populate the future dates, executed at an interval you decide (once a year, once a month, as you wish), depending on how far ahead the dates are populated.
    - Use the "Julian date" as PK.
    - In the interface, just transform any date to "Julian date"; it will be in the time dimension, since it is naturally unique.
    You could substitute "YYYYMMDD" for the Julian date; that is a unique value too.
    I have presented 2 ways to be considered; which one to use depends on how important it is for the business to know whether a date was loaded or not.
    Someone could argue that loading the dates from the source, as opposed to pre-loading all dates, could help find errors from days that were not loaded. But as there are 4 source date columns (and we are talking about just one source table so far), if a loaded date matches a date whose load failed, there is no value in using the time dimension date to analyze this possibility.
    I defend the full time dimension load.
    Does this make sense and/or help you?
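Pre-populating the full calendar with a YYYYMMDD surrogate key, as suggested above, can be sketched like this; the column set is illustrative, and a real implementation would add holiday flags and the other attributes the dimension needs:

```python
from datetime import date, timedelta

def build_time_dimension(start, end):
    """One row per day from start to end inclusive, keyed by YYYYMMDD."""
    rows = []
    d = start
    while d <= end:
        rows.append({
            "date_key": int(d.strftime("%Y%m%d")),  # naturally unique surrogate key
            "cal_date": d,
            "year": d.year,
            "month": d.month,
            "day": d.day,
        })
        d += timedelta(days=1)
    return rows

# Spans a year boundary: 2012-12-30 .. 2013-01-02
dim = build_time_dimension(date(2012, 12, 30), date(2013, 1, 2))
```

Any fact date then maps to its key with `int(d.strftime("%Y%m%d"))`; no join against the dimension is needed at load time, which is the point of the "full time dimension load" argument above.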

  • OBIEE-ODI Lineage:Hiding or disabling lineage column

    Hi,
    I am working on an OBIEE-ODI lineage implementation. When I click on the lineage icon, it drills to the next level. For example, the first click shows the OBIEE logical table and logical column details; clicking the lineage icon in that report shows the OBIEE physical table and column details; the next click displays the ODI target details, and the next the underlying ODI source details. Once we reach the ultimate ODI source, I want the lineage icon to be disabled, or the column showing the lineage icon to be hidden. Any pointers on this would be of great help.
    Regards,
    Rajesh

    Hi,
    If you are on ODI 11g, remove the odiRef.getObjectName("BI_OBJ_ID.NEXTVAL") function call and just use the sequence BI_OBJ_ID.NEXTVAL wherever it is used.
    Regards,
    Rajesh

  • Very simple, but not working Table to Flat File

    I'm new to ODI, but I am having too much difficulty performing a very basic task. I have data in a table and I want to load it into a flat file. I've been all over this board, all over the documentation, and all over Google, and I cannot get this to work. Here is a rundown of what I have done so far:
    1. created a physical schema under FILE_GENERIC that points to my target directory
    2. created a logical schema that points to my new physical schema
    3. imported a test file as a model (very simple, two string columns, 50 chars each)
    4. set my parameters for my file (named columns, delimited, filename resource name, etc.)
    5. created a new interface
    6. dragged my new file model as the target
    7. dragged my source table as well
    8. mapped a couple of columns
    9. had to select two KMs: LKM - SQL to SQL (for the source) & IKM - SQL to File Append (for the target)
    10. execute
    Now, here is where I started hitting problems. This failed in the "load data" step with the following error:
    7000 : null : java.sql.SQLException: Column not found: C1_ERRCODE
    I found a note on the forum saying to change the "Staging Area Different From Target" setting on the Definition tab. Did that and selected the source table's schema in the dropdown, ran again. Second error:
    942 : 42000 : java.sql.SQLException: ORA-00942: table or view does not exist
    This occurred in the "Integration - insert new rows" step.
    The crazy thing is that in a step prior to this ("Export - load data"), the step succeeded and the data shows up in the output file!
    So why is ODI trying to export the data again? And why am I getting an ORA error when the target is a file?
    Any assistance is appreciated...
    A frustrated noob!
    Edited by: Lonnie Morgan (CALIBRE) on Aug 12, 2009 2:58 PM

    I found the answer, but I'm still not sure why this matters...
    Following this tutorial, I recreated my mapping:
    [http://www.oracle.com/technology/obe/fusion_middleware/ODI/ODIproject_table-to-flatfile/ODIproject_table-to-flatfile.htm]
    I was still getting the same error. I reimported my IKM and found that the "Multi-Connections" box had been unchecked before and was now checked. I may have unchecked it during my trial & error.
    Anyway, after running the mapping again with the box checked, the extract to a file worked.
    So I'm happy now. But I am also perturbed by the function of this checkbox and why it caused so much confusion within ODI.

  • Unable to find ODI_SAmple_DATA.zip file to work with oracle profiling.

    I am unable to find the ODI_SAmple_DATA.zip file to work with Oracle profiling. Any help regarding profiling? Do I need to copy it from the software installation folder? How is profiling different from, or related to, ODI data quality? Do we take the source data twice -
    1) for the ODI target load
    2) for profiling into entities?

    Try:
    http://www.oracle.com/technology/products/oracle-data-quality/pdf/oracledq_sample_data.zip
    and
    http://download.oracle.com/otn/nt/ias/101340/oracledq_sample_directory.zip
    Hope this helps.
    G

  • Duplicate Message ID issue in case of Multi mapping (without BPM)

    Hi Experts,
    I am doing a sample example for my requirement of converting a single source message into multiple target messages.
    For example, when the sender system sends 5 sales order details in a single message to PI, my inbound proxy class on the ECC R/3 receiver system must get these sales orders separately; that means the inbound proxy class method must be triggered separately for the 5 sales orders from the sender system. To achieve this, I have used the multi-mapping concept in the ESR (without BPM).
    This scenario is in Asynchronous mode.
    The screenshots below give the details of what I have configured so far.
    IN ESR
    IN ID
    IN SXMB_MONI of PI
    IN SXMB_MONI of ECC R/3 Receiver system (Error in Processing)
    Due to this error, the inbound proxy class method is not being triggered even for a single sales order's details.
    Please let me know how to tackle this duplicate message ID issue when we have multiple payloads to process in a single message.
    Please also let me know if there is any other workaround to fulfil this requirement.
    Thank you,
    Regards,
    Jagesh

    Hi Nunu,
    Check the below blog for restrictions.
    Multi-Mapping without BPM - Yes, it’s possible!
    Restrictions
    Messages that result from the split in a mapping-based message split are sent using one AE. So only adapters running on the AE are supported. In particular, this means that target IDOC message splits are not supported since the IDOC adapter is not part of the AE.
    Regards,
    Praveen.

  • Populate 2nd combo box based on value selected in 1st combo box

    I am still using Acrobat 6, though I may be upgrading soon to Acrobat 8. I have a form with two combo boxes; the first, "state", has values of MN and WI. Based on which value the user picks, I would like to populate a "county" combo box with the lists of counties that we deal with.
    Thanks,
    Gene

    One can set the option and export value using an array:

    // document level script
    // Master List of Lists
    // Each entry in this object literal is the name of a State.
    // Manually enter the State names into the state field combo box.
    // The associated value is the item list, where each item is a
    // name-value pair: ["county name", ["county code", "zip code"]]
    var oStateNames = {MN: [["-", ["", ""]],
                            ["St. Louis", ["MNStl", "55001"]],
                            ["Carlton", ["MNSCrl", "55002"]],
                            ["Pine", ["MNPin", "55003"]],
                            ["Cook", ["MNCok", "55004"]]
                           ],
                       WI: [["-", [" ", " "]],
                            ["Douglas", ["WIDou", "55005"]],
                            ["Bayfield", ["WIBay", "55006"]],
                            ["Burnette", ["WIBur", "55007"]],
                            ["Ashland", ["WIAsh", "55008"]]
                           ]
                      };

    // SetCountyEntries() on keystroke entry in state field
    function SetCountyEntries()
    {
      if (event.willCommit)
      {
        // Get the new counties list from the Master List.
        // Since the selection is being committed,
        // event.value contains the State name.
        var lst = oStateNames[event.value];
        // Clear the county list if there are no counties for the selected state
        this.getField("ee.address.county").clearItems();
        this.resetForm(["ee.address.code", "ee.address.zip"]);
        if ((lst != null) && (lst.length > 0))
          this.getField("ee.address.county").setItems(lst); // set option and export value
      }
    }
    // end document level script

    For the combo box "ee.address.county" one can create an array from the export value to populate the county code and zip code:

    // custom keystroke for county combo box
    if (event.willCommit && event.value != "") {
      // split county and zip codes
      var aCodes = this.getField(event.target.name).value.split(",");
      this.getField("ee.address.code").value = aCodes[0];
      this.getField("ee.address.zip").value = aCodes[1];
    }
    // end custom keystroke code

  • One Message, Call multiple IDOCS in PI 7.1

    Hello SDN!!
    We are trying to implement a scenario and are curious if anyone has done this. We are looking to have one message exposed to another system, which will be mapped and sent to four different IDocs on the same system. From some blogs it seems this really can't be done except with BPM. Is this the case, and if so, are there any good tips for doing the most basic BPM to complete this scenario? (Also, all the documentation seems to be for 7.0.)
    Cheers
    Devlin

    Hello. From the first response (mapping without BPM), it states it is not possible with IDoc:
    "Messages that result from the split in a mapping-based message split are sent using one AE. So only adapters running on the AE are supported. In particular, this means that target IDoc message splits are not supported since the IDoc adapter is not part of the AE."
    The other two blogs are nice, but the wrong direction: instead of collecting IDocs, I was hoping to have one message split out to 4 IDocs.
    Cheers
    Devlin
