Associating a header record to its detail record in SQL Loader

I have a SQL*Loader script that loads data into 2 separate tables, one containing header records and one containing detail records. Once the load is done there is no way to find out which detail item is associated with its header.
Is there a way to associate a header record with its corresponding detail records?
Edited by: 934858 on May 16, 2012 4:49 PM

The load script loads into 2 separate tables, as shown below.
load data
append
into table post1.thead
when (1:5) = 'THEAD'
TRAILING NULLCOLS
(file_type POSITION(1:5) CHAR,
file_line POSITION(6:15) INTEGER EXTERNAL,
business_date POSITION(16:23) DATE "YYYYMMDD")
load data
append
into table post1.tdetl
trailing nullcols
The data in the input file is as shown below...
THEAD0000000002201109142011091400000000000002091 1 0 -1
TTAIL0000000003000000
THEAD0000000012201109142011091400000000000002091 1 0 -1 20110914-1
TDETL0000000013CMPSPL1004 0 000000010000P0000000000000019990000000000000000200000
TTAIL0000000014000001
THEAD0000000021201109142011091400000000000002091 1 0 -1
TDETL0000000022EMPDSC1005 0 000
TDETL0000000023SCOUP 1006 0 973452 000
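One approach, which the 'Sql loader master detail tables' reply further down also uses, is to add a shared key column to both tables and fill it from a sequence in the control file: take nextval on each THEAD record and currval on the TDETL records that follow it. A minimal sketch, assuming a sequence named hd_seq and an extra hdr_id column added to post1.thead and post1.tdetl; the sequence name, the column name and the abbreviated tdetl field list are illustrative, not from the original post.
load data
infile 'post.dat'
append
into table post1.thead
when (1:5) = 'THEAD'
trailing nullcols
(hdr_id        expression "hd_seq.nextval",  -- new key value for every header record
 file_type     position(1:5)   char,
 file_line     position(6:15)  integer external,
 business_date position(16:23) date "YYYYMMDD")
into table post1.tdetl
when (1:5) = 'TDETL'
trailing nullcols
(hdr_id    expression "hd_seq.currval",      -- repeats the key of the preceding header
 file_type position(1:5)  char,
 file_line position(6:15) integer external)
As that reply also notes, the currval trick is sensitive to how SQL*Loader batches its inserts, so test it with your own data volumes (for example with a conventional-path load) before relying on it.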

Similar Messages

  • How to skip footer record in SQL*Loader input file

    I am using SQL*Loader in a batch import process.
    The input files to SQL*Loader have a header record and footer record - always the 1st and last records in the file.
    I need SQL*Loader to ignore these two records in every file when performing the import. I can easily ignore the header by using the SKIP function.
    Does anybody know how to ignore the last record (footer) in an input file??
    I do not want to physically pre-strip the footer since the business want all data files to have the header and footer records.

    Thanks - how do I use the when clause to specify the last line of the input file?
    I am presuming it requires me to have a unique identifier at a given position on that last line. If I don't have such an identifier can I still use your solution?
    Cheers

    Why not put an identifier at the end of your input file: echo "This_is_the_End" >> input_file ?
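    A minimal sketch of a control file that skips such an appended sentinel line via the WHEN clause; the table and field names here are illustrative, not from the thread:
    load data
    infile 'input_file'
    append
    into table target_table
    -- load every record except the appended footer sentinel
    when (1:15) != 'This_is_the_End'
    (col1 position(1:10)  char,
     col2 position(11:20) char)
    The header record can still be skipped with the SKIP option as described above.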

  • Need help, Trouble in uploading records using sql loader in Forms 6i

    Hi,
    I am trying to develop a screen for uploading records to a table by using a ctl file, batch file and sql loader.
    Env: Forms 6i, Oracle 8
    Table to be updated is: shy_upload_table
    My TNS entry looks similar to this:
    TEST_AXA.CNB.COM =
    (DESCRIPTION =
    (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 11.23.11.123)(PORT = 1234))
    )
    (CONNECT_DATA =
    (SID = axdabc)
    )
    )
    My intention is that whenever I press the upload_button, the table should be truncated and then loaded with the contents of the file.
    In the WHEN-BUTTON-PRESSED trigger of the upload_button I have the following code. I am always able to truncate the table, but I am not able to load it with the contents of the file. Can any of you help me fix this problem?
    declare
         var_control varchar2(256);
         VAR_DATA VARCHAR2(256);
         VAR_OUTPUT VARCHAR2(500);
         var_filename varchar2(256);
         str varchar2(50);
         cnt number;
    begin
         FORMS_DDL('TRUNCATE TABLE shy_upload_table ');
         select count(*) into cnt from shy_upload_table;
         message('count '||cnt);
         MESSAGE('');
    If NOT form_success Then
         MESSAGE('Upload Failed');
         MESSAGE('Upload Failed');           
    else
         set_item_property('DISPLAY_PB',enabled,property_true);
    -- whenever I run this, I can see display_pb enabled, which means form_success is true.
    end if;
         var_filename := :txt_filename;
    --I have tried with each of the below option,
    --sqlldr userid/[email protected] control=F:\ERP\file_upload.ctl
    --sqlldr userid/password@axdabc control=F:\ERP\file_upload.ctl
    --sqlldr userid/password@TEST_AXA.CNB.COM control=F:\ERP\file_upload.ctl
         VAR_DATA :='data=' || var_filename ;
         VAR_OUTPUT := var_control|| ' ' ||VAR_DATA;
         host('F:\a.bat');
    end;
    batch file contents...
    # I have tried with each of the below options
    sqlldr userid/[email protected] control=F:\ERP\file_upload.ctl data=F:\ERP\sample.txt log=F:\ERP\x.log bad=F:\ERP\x.bad
    #sqlldr userid/password@axdabc control=F:\ERP\file_upload.ctl data=F:\ERP\sample.txt log=F:\ERP\x.log bad=F:\ERP\x.bad
    #sqlldr userid/password@TEST_AXA.CNB.COM control=F:\ERP\file_upload.ctl data=F:\ERP\sample.txt log=F:\ERP\x.log bad=F:\ERP\x.bad
    pause
    Thanks
    vish

    Hi Francois,
    Thanks for responding, I am not very sure of what you want me to try out.
    When I double-click the batch file containing the below, the record gets inserted into the table. It is only when I upload using my form that it fails to insert the record.
    batch file contents...
    #sqlldr userid/password@TEST_AXA.CNB.COM control=F:\ERP\file_upload.ctl data=F:\ERP\sample.txt log=F:\ERP\x.log bad=F:\ERP\x.bad
    pause
    Thanks
    Vish
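    For what it is worth, a minimal sketch of building the whole sqlldr command line in the trigger and passing it straight to HOST, instead of calling a batch file that receives no arguments; the connect string, paths and the :txt_filename item are taken from the post above, and this is an untested outline rather than a verified fix:
    declare
        var_cmd varchar2(1000);
    begin
        -- assemble the complete sqlldr command, including the data file chosen on the form
        var_cmd := 'sqlldr userid/password@TEST_AXA.CNB.COM'
                || ' control=F:\ERP\file_upload.ctl'
                || ' data='  || :txt_filename
                || ' log=F:\ERP\x.log bad=F:\ERP\x.bad';
        host(var_cmd, no_screen);   -- NO_SCREEN runs the command without opening a console window
        if not form_success then
            message('Upload failed');
        end if;
    end;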

  • Extracting Multiple Logical Records through sql loader

    Hello gurus,
    I have a few questions regarding SQL*Loader. I am totally new to this; I have never used it before, this is the first time I am using it.
    1.
    How do I find the position number of a character or a number? Do I need to physically count the positions? Is there any specific way of counting, or should I use TextPad to do that?
    I know it sounds like a silly question, but I wanted to know if there is a better way of doing it.
    2.
    example data
    1119 Smith      1120 Yvonne
    1121 Albert     1130 Thomas
    The following control file extracts the logical records:
    INTO TABLE emp
         (empno POSITION(1:4)  INTEGER EXTERNAL,
          ename POSITION(6:15) CHAR)
    INTO TABLE emp
         (empno POSITION(17:20) INTEGER EXTERNAL,
          ename POSITION(21:30) CHAR)
    2. Can you please explain to me what the "NULLIF deptno = BLANKS" clause does?
    deptno POSITION(1:2)  INTEGER EXTERNAL(2)
                  NULLIF deptno=BLANKS,
    I really appreciate it ~
    Thanks

    Hi,
    The NULLIF means load blanks as NULL
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/ldr_field_list.htm#sthref1129
    Not sure I understand your first question. You must of course know the format of the file; TextPad could be an OK tool for determining that format for a fixed-column-width file.
    Regards
    Peter
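    To see the NULLIF answer in the context of a complete (if simplified) control file, here is a minimal sketch based on the documentation example quoted above; the deptno position and data file name are illustrative:
    LOAD DATA
    INFILE 'emp.dat'
    INTO TABLE emp
    (empno  POSITION(1:4)   INTEGER EXTERNAL,
     ename  POSITION(6:15)  CHAR,
     -- if columns 17-18 contain only blanks, load NULL instead of raising a conversion error
     deptno POSITION(17:18) INTEGER EXTERNAL(2) NULLIF deptno=BLANKS)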

  • (8I) Using '|' (pipe) as the record separator in SQL*Loader

    Product: ORACLE SERVER
    Date written: 2003-10-21
    ===============================================================
    (8I) Using '|' (pipe) as the record separator in SQL*Loader
    ===============================================================
    PURPOSE
    Starting with Oracle8i, you can specify the record terminator when using SQL*Loader.
    Explanation
    Before Oracle8i, the record separator was by default a line feed (carriage return, newline, and so on). To handle files that needed anything else, you had to supply options such as VAR or FIX, which was somewhat complicated and not flexible.
    Starting with Oracle8i, you can specify the record terminator when using SQL*Loader. When loading data that contains newline or carriage-return characters, or other special characters, you can specify the record terminator in hexadecimal.
    Example
    The following example uses '|' (pipe) as the record separator.
    To use a record separator, specify the appropriate value in the 'infile' clause of the SQL*Loader control file.
    In the example below, "str X'7c0a'" is specified in the 'infile' clause in order to use '|' (pipe) followed by a newline as the separator.
    --controlfile : test.ctl
    load data
    infile 'test.dat' "str X'7c0a'"
    into table test
    fields terminated by ',' optionally enclosed by '"'
    (col1, col2)
    --datafile: test.dat
    1,this is the first line of the first record
    this is the second|
    2,this is the first line of the second record
    this is the second|
    SQL> desc test
    Name Null? Type
    COL1 VARCHAR2(4)
    COL2 VARCHAR2(100)
    $ sqlldr scott/tiger control=test.ctl log=test.log
    Looking at the loaded data, you can see that data containing a carriage return was correctly loaded into a single column, as shown below.
    SQL> select * from test;
    COL1
    COL2
    1
    this is the first line of the first record
    this is the second
    2
    this is the first line of the second record
    this is the second
    RELATED DOCUMENT
    <Note:74719.1>

  • Sql loader master detail tables

    Hi
    I am trying to load master-detail tables using SQL*Loader. I have a sequence as the primary key in the master table and have to load that primary key into the detail table. Is there any way I can look up the primary key by name in the control file? Please, what is the best way to do this?
    Can I look up the primary key in the master table by the name in the detail table, which is jan09?
    e.g
    master data
    jan09,description
    detail data
    jan09,'test1','100,'y'
    LOAD DATA
    INFILE 'test.data'
    insert
    INTO TABLE master_table
    fields terminated by ',' optionally enclosed by '"' trailing nullcols
    (id position(1:2) "myseq.nextval",
    code char,
    creation_date date sysdate,
    last_update_date date sysdate,
    into TABLE BDETAILS
    fields terminated by ',' optionally enclosed by '"' trailing nullcols
    (id number "select id from master_table where code=" || :code
    name char,
    amt decimal,
    flag char
    )

    Hey, here is what you need.
    You will have to understand and test this properly.
    It took me quite some time before I could get to this stage.
    Here is the test data with all the commands.
    As far as the ID in the detail table is concerned, you have no option but to update that column once the loading is complete.
    1) create Table
    CREATE TABLE master_table(ID NUMBER,code VARCHAR2(50),creation DATE,last_update DATE);
    CREATE TABLE bdetails(ID NUMBER,NAME VARCHAR2(100),amt NUMBER,flag VARCHAR2(10));
    ALTER TABLE master_table ADD (chc VARCHAR2(10));
    2) Create Sequence
    CREATE SEQUENCE testseq INCREMENT BY 1;
    3) CTL FILE
    LOAD DATA
    INFILE 'c:\sqlldr\test.txt'   -- path of data file
    append
    INTO TABLE master_table
    when (1) = 'M'   -- check whether the record in the data file is a master record
    fields terminated by ',' optionally enclosed by '"' trailing nullcols
    (id expression "testseq.nextval",   -- select the next value from the sequence
    mcol1 filler,   -- the FILLER keyword is used to skip the first column of the data file
    chc,
    code,
    creation "sysdate",
    last_update "sysdate"
    )
    INTO TABLE BDETAILS
    when (1) = 'D'   -- check whether the record in the data file is a detail record
    fields terminated by ',' optionally enclosed by '"' trailing nullcols
    (id expression "testseq.currval",   -- unfortunately this picks up the last ID of the master record; session stuff going on here
    col1 filler position(1:2),   -- specifying POSITION(1:2) is very important, as you need to reset the cursor to the start of the line for the next record
    name,
    amt,
    flag
    )
    4) SQLLDR Command
    sqlldr scott/tiger data=c:\sqlldr\test.txt control=c:\sqlldr\1.ctl
    5) data file
    M,jan09,description
    D,test1,100,y
    M,feb09,description1
    D,test2,200,x
    M,mar09,description2
    D,test3,200,x
    Hope this helps you and gives you a lead on how to proceed.
    Cheers!!!
    Bhushan
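    Once the load has run, the master-detail association can be checked with a simple join on the shared sequence value. A quick verification query against the example tables above (not part of the original answer):
    -- show each detail row next to the master record whose sequence value it inherited
    SELECT m.id, m.code, d.name, d.amt, d.flag
    FROM   master_table m, bdetails d
    WHERE  d.id = m.id
    ORDER BY m.id;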

  • Header Record not repeated when detail records span to more than a page

    In RTF template,
    1. I have a for-each with code of
    <?for-each@section:G_HEADER?>
    2. Under the 'for-each', the header record is defined in a single table row.
    The row property for this table is set to 'Repeat as header row at the top of each page'
    3. Under this I have another table with fields for LINES data with its own 'for-each' code <?for-each:G_LINES?>
    4. This is followed by the 'end for-each' tags for the above two 'for-each' loops.
    However, when I generate the PDF output from Preview and the lines data spans more than one page, the header record does not repeat on the subsequent pages. Does anyone have any ideas on how to fix this?
    My XML Publisher version is 10.1.3.4.1 Build 130. I have sample data and the template, but I am not sure if I can upload them here.

    user6098416,
    I am also facing the same problem as you.
    My Template layout is as follows:
    <?for-each:G_HEADER?>
    Header table
    <?for-each:G_DTL1?>
    detail table  -->  (<?for-each:G_DTL2?> Detail Fields <?end for-each?> )
    <?end for-each?>   --> End for each for G_DTL1
    <?end for-each?>   --> End for-each for G_HEADER
    My problem is that the header table is not repeating on each page when the detail table data goes beyond one page.
    I tried the following but its not working:
    <?template:header?>
    Header table
    <?end template?>
    <?for-each:G_HEADER?>
    <?call@inlines:header?>
    <?for-each:G_DTL1?>
    detail table  -->  (<?for-each:G_DTL2?> Detail Fields <?end for-each?> )
    <?end for-each?>   --> End for each for G_DTL1
    <?end for-each?>   --> End for-each for G_HEADER
    I am not sure what's wrong here.
    Thanks
    Sandeep

  • I need to delete header record automaticaly when deleting detail

    Hi all,
    I have two master-detail tables.
    The relation between the header and the details is one-to-one.
    On my form I want to do the following:
    When the user deletes the detail record, the header record must be deleted automatically.
    In the key_delrecord trigger at form level I wrote this code:
    BEGIN
    IF :SYSTEM.CURSOR_BLOCK = 'DETAILS' THEN
    Delete_record;
    FORMS_DDL('COMMIT');
    Go_block('Data');
    Delete_record;
    FORMS_DDL('COMMIT');
    ELSE
    Delete_record;
    END IF;
    END;
    but this does not work.
    An error message appears at runtime: "cannot delete header while dependent detail exists".
    Please, how can I achieve this?
    Thanks in advance
    Edited by: [email protected] on Oct 4, 2009 11:01 AM

    Hi!
    You may set the relation's Delete Record Behavior property to Cascading.
    And change your trigger to:
    BEGIN
    IF :SYSTEM.CURSOR_BLOCK = 'DETAILS' THEN
    Go_block('Data');
    END IF;
    Delete_record;
    END;
    But do not use the FORMS_DDL built-in for a commit.
    Use the COMMIT_FORM built-in so Forms recognizes that you are committing the form.
    Regards

  • Use field from file header record in detail records

    Hi,
    I receive a file with different records type (transmission header, file header, detail records, file footer). The fields in each record type are different. The transmission header tells me who the sender of the file is. I need this information, because I am merging data of various senders into one table.
    When processing the files, I need to split each file according to recordtype. (I can not use the same logic for each recordtype). How can I populate an extra column in the detail files with the value from the header record?
    What I tried so far is the following:
    - Populate a global variable using a script: This does not seem to work in the context of dataflows.
    - Create a 'join' between header record and detail record. However there are no fields I can join, so Data Services is not happy with that.
    - Store the field value from the header record in a datastore (I chose a cache datastore; a memory datastore did not work in the batch scenario). I feel I can get this to work, but it seems quite an elaborate and expensive solution: first filter the header record from the file, write the required field to the cache datastore, then read the same file again but now filter the detail records, and join these with the datastore.
    Is there a better way to do this? I was hoping I could just use a script and read the first line of my file and move the value to a global variable....
    It would be great if someone could give me some tips...
    Many thanks,
    Jan.

    Hi Werner,
    I am not sure if I understand you correctly.
    I have declared $State as a global variable, not as a parameter.
    When I check my custom function I do get a warning that a variable is called which is not declared.
    I thought I did not have to worry about this as the variable does exist within the job where the function is called.
    It looks like I can not set a value to a global variable in a custom function.
    Is that correct, or is there something wrong with my syntax?
    Really the only reason I call a custom function is to set the global variable, so I could even make my function as simple as this:
    $State = 'NSW';
    Return 0;
    It still doesn't work...
    Any idea why?
    Thanks,
    Jan.

  • Content Conversion issue for header record

    Hi,
    We have a very urgent question on an issue here with one of our XI objects. 
    This is an inbound interface from an external system into R/3 & BW.  The inbound file has a header record (with about 8 fields) and detail records (about 900 fields per detail record). Data going into R/3 & BW don't have header records and everything goes in as detail records. One field from the header of this source file should be passed to the target structure at the detail level. Also, we are NOT using BPM.
    Can someone help us how we could define the file content conversion parameters for File adapter.
    Thanks in advance ......
    Prashant

    I'm so sorry, I wasn't subscribed to this thread and I didn't realize there were responses.
    If you have a message type made up of a Header with 1 occurrence and Detail with 1 to unbounded occurrences, you'd want to do the following in content conversion:
    Document Name - your message type
    Document Namespace - your message type namespace
    Recordset Structure - Header,1,Detail,*
    Recordset Structure - Ascending
    Then you'll need to set some of the parameters, depending on the layout of your incoming file. 
    As for the problem of having hundreds of fields, I'm less sure about that.
    Would it be possible to break your Detail data type down into smaller data types, each with fewer fields? You'd still have to maintain every field in content conversion, but at least they'd be in separate parameters, instead of all 900 in one tiny box.
    Here's a very rough example of what I mean:
    If you have 900 fields, instead of making one Detail data type, you could make 9 data types (Detail1, Detail2, Detail3, Detail4, Detail5, Detail6, Detail7, Detail8, Detail9), each with 100 fields in them (or more data types with even fewer fields each).
    Setting up the file content conversion would be more complex in this scenario, so it might be a toss-up whether it is worth breaking it up this way if it means configuring quite a few more parameters.
    For example,
    You'd have to declare your recordset structure like Header,1,Detail1,*,Detail2,*,Detail3,* etc., and you'd have to make sure to set .endSeparator to '0' for all of the first 8 details, so it would recognize that they were all on one line.
    I hope this helps a little bit.

  • IDOC Header Record

    Our scenario is: File(s) ---> XI ---> IDOC
    We are receiving 3 different files with employee information from 3 different countries and, using XI, the records will be posted into R/3. Each record from the file will be treated as an IDoc.
    The File is going to contain the Header Record, followed by the Detail records.
    The header record will determine from whom this file was sent and we need to map the header info with the Control record.
    How can I achieve the above task of separating the header record from the other detail records in the message mapping?
    I would Greatly appreciate the Help.

    hi Karen,
    if your target XML file will contain
    one header and many details (which have to be transformed into many IDocs), then you can do it in a
    1:N mapping:
    1 file, N IDocs.
    Just map the header so that it occurs many times
    (as many as the details section)
    if I understand your issue correctly
    Regards,
    michal

  • Extra delimiters in header record

    I am creating a header row with totals, doing a UNION ALL with the contents of the rest of the file, sorting, and then creating a flat file delimited by semicolons. The header record has extra semicolons in the last field. How can I get rid of those?

    This is because the downstream components see metadata comprised of a fixed number of rows. You need to split this process into more steps: 1) create the file detail records; 2) add the header. I suggest you see http://microsoft-ssis.blogspot.ca/2012/03/adding-header-row-to-flat-file.html
    as it closely matches what you did.
    Arthur My Blog

  • DMEE - Only header record is output in test run

    Dear Experts,
    I have created a format tree for tree type PAYM. I have a header, a trailer and a payment item segment. The header and trailer are at level 1 and the payment item segment is at level 2. There must be only one header and one trailer per file.
    When I test the format tree, I only get the header record in the display. The header record elements are all non-SAP fields. However, some fields of the payment item are mapped to SAP fields. But when I enter the values in test mode, the system only gives me the header record.
    Could anyone give me a solution? It will be very much appreciated.
    Thanks and regards,
    Vishal Thakur.

    Prashant,
    In transaction DMEE, there is no checkbox for a test run. I just select the tree type and the format tree, and then there is a button with the quick-info text 'Test active version', or we press F8.
    I noted your point that in a test run the line item details and the trailer data won't be output. But this is not the case with the SAP standard delivered format tree 'SAP_EXAMPLE'. If you test run this format tree, it shows all the details.
    Could you help in this case?
    Thanks,
    Vishal.

  • Plz help me for checking the Header record exists or not

    Hi,
    How do we check whether the header record exists or not? If it does not exist, I have to raise an exception. I want to do this using a UDF.
    Please help me, it's urgent.
    venkat

    Maybe you should read all the responses in your earlier threads, and read the rules of engagement as well:
    /thread/117188 [original link is broken]
    Regards
    Bhavesh

  • Generate target/out file with header record as Record Count ?

    Hi Kareem,
    Please try the below approach.
    Pipeline 1: Load the actual data (without the header record count) from source to target. Let's say your target file name is intermediate1.dat.
    Pipeline 2: Take the target from pipeline 1 as source and create the header with the count of the source file using an aggregator. The target file name for pipeline 2 will be your final file (header and detail data).
    Pipeline 3: Take the target of pipeline 1 again and do a 1-to-1 load to the target file of the second pipeline. In the session properties, don't forget to tick the "Append if Exists" check box for the third pipeline target.
    There may be other, simpler approaches as well. If you have no time in hand, try the above approach. Let me know if you find any issues.
    Thanks,
    Deeshan.

    Generate target/out file with header record as Record Count?
    Out file:
    ---------------------------
    Record Count: 2000
    Column1, Column2...
    Data, data........
