SQL*Loader control file parameters to avoid loading duplicate records

Hi All,
I am looking for an option in the control file that will stop SQL*Loader from loading duplicate records. I know that if we apply a unique constraint on the table itself, duplicate rows will be rejected and written to the bad file, but I have to do it in the control file so that SQL*Loader itself avoids loading duplicates. Can you please suggest which control file option enables this?
Can you please also tell me the exact difference between the bad file and the discard (reject) file? In what scenarios does SQL*Loader write to each?
Regards

Hey,
I don't think there is any option in the control file to avoid loading duplicate records.
On the difference between the bad and discard (reject) files: the bad file receives records rejected by SQL*Loader (for example, datatype or format errors) or by Oracle (for example, constraint violations), while the discard file receives records that were skipped because they failed the WHEN clause in the control file. Also try this link:
http://www.exforsys.com/content/view/1587/240/
Regards,
Sushant
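For reference, a minimal control file sketch showing where each file is declared (the file names, table, columns, and the WHEN filter below are made up for illustration):

LOAD DATA
INFILE 'emp.dat'
BADFILE 'emp.bad'        -- rows rejected by SQL*Loader or by Oracle (e.g. constraint violations)
DISCARDFILE 'emp.dsc'    -- rows skipped because they fail the WHEN clause below
APPEND
INTO TABLE emp
WHEN (deptno <> '99')
FIELDS TERMINATED BY ','
(empno, ename, deptno)

Note that the WHEN clause can only test fields of the incoming record; there is no control-file clause that can compare an incoming row against rows already loaded. So with a unique constraint on the table, duplicates land in the bad file; the usual control-file-only alternative is to load into a staging table and then INSERT ... SELECT DISTINCT into the target.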

Similar Messages

  • Avoid duplicate records

hi guys,
could you please let me know where the option for avoiding duplicate records is:
1. in the case of an InfoPackage?
2. in the case of a DTP?

Hi,
In the case of an InfoPackage in 3.5: Processing tab -> select 'Only PSA', 'Update Subsequently in Data Targets', and 'Ignore Double Data Records'.
In 7.0, the Processing tab defaults to 'Only PSA'.
In the case of a DTP: Update tab -> select 'Handle Duplicate Record Keys'.

• How to avoid duplicate records while joining two tables

Hi,
I am trying to join three tables; two of them are basically the same, one being a history table. So I wrote a query like:
select e.id,
       e.seqno,
       e.name,
       d.resdate,
       d.details
from employees e
join ((select * from dept)
      union
      (select * from dept_hist)) d
  on d.id = e.id and e.seqno = d.seqno
but this returns duplicate records.
Could anyone please tell me how to avoid duplicate records in this query?

Actually, once a record is processed it is moved to the history table, so the two tables should not hold the same records; I need records from both tables, which is why I did the union of the two, so d holds the union of both.
But I am still getting duplicate records, even with DISTINCT.
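Since UNION already removes rows that are identical across dept and dept_hist, leftover duplicates usually mean the same (id, seqno) exists in both tables with a different resDate or details, which DISTINCT on the full row cannot collapse. A sketch that keeps exactly one row per key (keeping the latest resDate is an assumption; adjust the ORDER BY to your rule):
SELECT e.id, e.seqno, e.name, d.resdate, d.details
FROM employees e
JOIN (SELECT d.*,
             ROW_NUMBER() OVER (PARTITION BY id, seqno
                                ORDER BY resdate DESC) AS rn
        FROM (SELECT id, seqno, resdate, details FROM dept
              UNION
              SELECT id, seqno, resdate, details FROM dept_hist) d) d
  ON d.id = e.id AND d.seqno = e.seqno
WHERE d.rn = 1;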

  • Avoiding duplicate records while inserting into the table

    Hi
I tried the following insert statement, where I want to avoid duplicate records during the insert itself,
but it gives me an error like "invalid identifier", though the column exists in the table.
Please let me know where I'm making the mistake.
INSERT INTO t_map tm (sn_id, o_id, txt, typ, sn_time)
   SELECT 100,
          sk.obj_id,
          sk.key_txt,
          sk.obj_typ,
          sysdate
     FROM s_key sk
    WHERE sk.obj_typ = 'AY'
      AND SYSDATE BETWEEN sk.start_date AND sk.end_date
      AND sk.obj_id IN (100170, 1001054)
      AND NOT EXISTS (SELECT 1
                        FROM t_map tm1
                       WHERE tm1.o_id = tm.o_id
                         AND tm1.sn_id = tm.sn_id
                         AND tm1.txt = tm.txt
                         AND tm1.typ = tm.typ
                         AND tm1.sn_time = tm.sn_time)

Then:
You are referencing the alias tm inside the subquery, but that alias belongs to the INSERT target and is not visible there. Do you want something like this?
INSERT INTO t_map (sn_id, o_id, txt, typ, sn_time)
   SELECT 100,
          sk.obj_id,
          sk.key_txt,
          sk.obj_typ,
          sysdate
     FROM s_key sk
    WHERE sk.obj_typ = 'AY'
      AND SYSDATE BETWEEN sk.start_date AND sk.end_date
      AND sk.obj_id IN (100170, 1001054)
      AND NOT EXISTS (SELECT 1
                        FROM t_map tm
                       WHERE sk.obj_id = tm.o_id
                         AND 100 = tm.sn_id
                         AND sk.key_txt = tm.txt
                         AND sk.obj_typ = tm.typ
                         AND sysdate = tm.sn_time)
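An alternative sketch: a MERGE performs the existence check and the insert in one statement, which sidesteps the alias-visibility problem entirely. This assumes (sn_id, o_id, txt, typ) is enough to identify a duplicate; sn_time is left out of the match, since comparing a stored timestamp against SYSDATE will practically never be true:
MERGE INTO t_map tm
USING (SELECT 100        AS sn_id,
              sk.obj_id  AS o_id,
              sk.key_txt AS txt,
              sk.obj_typ AS typ,
              SYSDATE    AS sn_time
         FROM s_key sk
        WHERE sk.obj_typ = 'AY'
          AND SYSDATE BETWEEN sk.start_date AND sk.end_date
          AND sk.obj_id IN (100170, 1001054)) src
ON (tm.sn_id = src.sn_id
    AND tm.o_id = src.o_id
    AND tm.txt = src.txt
    AND tm.typ = src.typ)
WHEN NOT MATCHED THEN
  INSERT (sn_id, o_id, txt, typ, sn_time)
  VALUES (src.sn_id, src.o_id, src.txt, src.typ, src.sn_time);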

• How to avoid duplicate records in a file-to-file scenario

Hi Guys,
Could you please provide a solution to avoid duplicate entries in a flat file, based on a key field?
I am asking in terms of standard functions, either at the message mapping level or by configuring the file adapter.
warm regards
mahesh.

hi mahesh,
write a module processor to check for duplicate records in the file adapter,
or eliminate the duplicate records with a Java/ABAP mapping.
Also check these links:
    Re: How to Handle this "Duplicate Records"
    Duplicate records
    Ignoring Duplicate Records--urgent
    Re: Duplicate records frequently occurred
    Re: Reg ODS JUNK DATA
    http://help.sap.com/saphelp_nw2004s/helpdata/en/d0/538f3b294a7f2de10000000a11402f/frameset.htm
    regards
    srinivas

• Oracle 10 - Avoiding Duplicate Records During Import Process

I have two databases on different servers (DB1 and DB2) and a dblink connecting the two. In DB2, I have 100 million records in table DB2Target.
I tried to load 100 million more records from table DB1Source into DB2Target, on top of the existing 400 million records, but after an hour I got a network error from DB2.
The load failed after inserting 70% of the records. Now I have three tasks: first, find the duplicate records between DB1 and DB2; second, find the remaining 30% of the records missing from DB2Target;
third, re-load those remaining 30%. What is the best solution?
To find duplicates:
SELECT COUNT(*), A, B
FROM DB2TARGET
GROUP BY A, B
HAVING COUNT(*) > 1;
For re-loading:
MERGE INTO DB2TARGET tgt
USING DB1SOURCE src
ON (tgt.A = src.A)
WHEN NOT MATCHED THEN
  INSERT (A, B)
  VALUES (src.A, src.B);
Thanks for any guidance.

When I execute this, I get the following error message:
SQL Error: ORA-02064: distributed operation not supported
02064. 00000 - "distributed operation not supported"
*Cause: One of the following unsupported operations was attempted:
  1. array execute of a remote update with a subquery that references a dblink, or
  2. an update of a long column with a bind variable and an update of a second column with a subquery that both references a dblink and a bind variable, or
  3. a commit issued in a coordinated session from an RPC procedure call with OUT parameters or a function call.
*Action: simplify remote update statement
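ORA-02064 means the MERGE cannot run in that distributed form. A common workaround is to stage the remote rows locally and then merge without touching the dblink. A sketch, run on the DB2 side (the dblink name db1link and the staging table are assumptions):
-- Task 2: rows in DB1Source that never made it to DB2Target
SELECT A, B FROM DB1SOURCE@db1link
MINUS
SELECT A, B FROM DB2TARGET;
-- Task 3: copy the remote rows into a local staging table, then merge locally
CREATE TABLE DB1SOURCE_STAGE AS
  SELECT A, B FROM DB1SOURCE@db1link;
MERGE INTO DB2TARGET tgt
USING DB1SOURCE_STAGE src
ON (tgt.A = src.A AND tgt.B = src.B)
WHEN NOT MATCHED THEN
  INSERT (A, B)
  VALUES (src.A, src.B);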

• How to avoid duplicate data loads from SAP R/3 to BI

Hi!
I have created a process chain that loads data into some ODS objects from R/3, where (in R/3) the DataSources/tables are updated daily.
I want to schedule the system such that if, on any day, the source data has not been updated (the tables are unchanged), that data should not be loaded into the ODS.
Can anyone suggest a mechanism so that I always have unique data in my data targets?
Please reply soon.
Thank you!
Pankaj K.

Hello Pankaj,
By setting the Unique Data Records option, you are essentially telling the system not to check the uniqueness of the records against the change log and the ODS active table.
Also, to avoid the problem of two requests being activated at the same time, make sure you select the options "Set Quality Status to 'OK' Automatically" and "Activate Data Automatically"; that way you can delete an individual request as required without having to delete all the data.
This avoids the situation where even the new request has to be deleted in order to remove the duplicate data.
Unless a timestamp field is available in the table on which the DataSource is built, it will be difficult to set up a delta load.
Check the underlying table for a timestamp field, or any other ascending counter field, that could be used to create a delta queue for this DataSource.
Let me know if this information is helpful or if you need anything further.
Thanks
Dharma.

• Scenario: Webservice - XI - BW. How to avoid duplicate records?

    Hi all,
Webservice --> XI --> BW.
A BPM is used to send the response back:
start -> Receive (request) -> Transformation (response map) -> Send (to BW) -> Send (response) -> stop.
We use the MSGID to maintain the uniqueness of each message coming from the webservice. Uniqueness is maintained for the combination sale-num:trans-num:sap-saletype:sale-type, as below; one MSGID is registered in XI for each unique message.
ex:   sale-num:trans-num:sap-saletype:sale-type
      1983:5837:E:NEW
If a duplicate message is received again, the response sent back to the webservice is "DUPLICATE_GUID:c9237200-0c69-2a80-dd11-79d5b47b213a".
This works correctly. The only problem is when XI is down, or a communication failure happens in the middle of processing, as in the example below.
A recent example: a webservice call failed three times, for these reasons.
First time:
"FAILED TO INVOKE WEB SERVICE OPERATION OS_CWUSales
Error receiving Web Service Response: Fatal Error: csnet read operation failed (No such file or directory) (11773)"
Second time:
MessageExpiredException: Message c9237200-0c69-2a80-dd11-79d5b47b213a (OUTBOUND) expired.
Third time:
"DUPLICATE_GUID:c9237200-0c69-2a80-dd11-79d5b47b213a"
Notice that on the second call the MSGID was registered, but because the server was down (or for some other reason) the message could not be processed further; the MSGID was registered without the message actually being processed. When the same call was retried a third time, it failed with "DUPLICATE GUID".
DUPLICATE GUID implies that the message was processed and the records were updated in the backend system, which did not actually happen here.
Final result:
The status in the webservice shows "updated in the receiving system", since the duplicate GUID indicates as much,
but the backend system was never updated, which is the problem.
Firstly, does anyone have suggestions on how to solve this?
Is there a better way to handle duplicates than the MSGID?
Please help me solve this.
Thanks & Regards
Deepthi.
Edited by: deepthi reddy on Jan 7, 2009 2:07 AM

>> My suggestion: you can have a webservice-BW synch-synch scenario without BPM. A sender SOAP adapter sends a synchronous request message, which is mapped to the BW request; the response comes back from BW, is mapped to the webservice response, and is sent to the webservice without any receiver SOAP adapter.
Thanks for the suggestion. Looks like a good idea.
>> Regarding the duplicate check: when your BW system gets the request message, it processes it and then sends a response message. Put a STATUS field in this response message, with S for success and E for error; send this response to the webservice and store it in a database for that request MSGID. Then, in the webservice application, check the response code for a MSGID and resend the message to BW only if it is E for error.
Initially they planned it the same way, but sometimes the response from BW comes back very late. So now, as soon as the request reaches XI, they immediately send the response back from XI itself with a hardcoded "OK" status; they do not wait for the BAPI response from BW.
So if the message is successful, the status goes back as "OK".
If the message is not successful, the status goes back blank, indicating a problem on the other side. They then check the SOAP fault message, which will contain errors like
"DUPLICATE_GUID:c9237200-0c69-2a80-dd11-79d5b47b213a" or
"FAILED TO INVOKE WEB SERVICE OPERATION".
Right now the only issues are the duplicate data and the response time. Using the MSGID solved the duplicate issue, but as a result we see many messages failing daily with this DUPLICATE error in production, which hampers performance.
So we are thinking of getting rid of the MSGID method and the BPM. From your first answer I think we can achieve this without BPM; after that I need to check how fast the response goes back to the webservice.
-Deepthi.

  • Avoiding duplicate records in report

Hi All,
I have a scenario where a delivery document is created in R/3, say on 7/1, with Act GI date "#" and all key figures "0". This is loaded into BI.
On 7/5 the delivery is goods-issued (PGI), and the status in R/3 changes to Act GI date "7/5" with a quantity of "100". When this is loaded into BI, it is published as duplicate records, i.e.:
Del doc   Created date   Act GI   Del. Ind   Qty
12345     1-Jul          #        #          0
12345     1-Jul          5-Jul    #          100
Please note that the data is loaded from a DSO into an InfoCube, and the DSO is in overwrite mode.
Any suggestions to overcome this problem?

Is Act GI date a key field in the DSO?
If yes, the data will not be overwritten, and two records will be loaded into the cube.
Make Act GI date a data field instead; this will result in only one record (12345, 1-Jul, 5-Jul, #, 100), since the key field values are then the same.
First make sure this is correct for all business scenarios.

  • BDC program - avoid duplicate records

Hello Experts,
I am doing a BDC and my logic is not working.
My requirement: if the company code, purchasing org., account group, and NAME1 of a vendor in the upload file match an existing vendor in the database, that particular record should not be uploaded.
it_success is the internal table holding the records to be uploaded after some validations.
Please check where I am going wrong:
loop at i_lfa1.
  loop at i_lfb1 where lifnr = i_lfa1-lifnr.
    loop at i_lfm1 where lifnr = i_lfb1-lifnr.
      main_table-lifnr = i_lfa1-lifnr.
      main_table-bukrs = i_lfb1-bukrs.
      main_table-ekorg = i_lfm1-ekorg.
      main_table-ktokk = i_lfa1-ktokk.
      main_table-name1 = i_lfa1-name1.
      append main_table.
    endloop.
  endloop.
endloop.
loop at it_success.
  loop at main_table where bukrs = it_success-bukrs_001
                       and ekorg = it_success-ekorg_002
                       and ktokk = it_success-ktokk_003
                       and name1 = it_success-name1_006.
    move-corresponding it_success to it_error.
    append it_error.
    delete main_table.
    if sy-subrc eq 0.
      delete it_success.
      e_fret-type = 'E'.
      e_fret-name = it_error-name1_006.
      e_fret-message = 'Vendor Name already exists'.
      append e_fret.
    endif.
  endloop.
endloop.
    Ravi
    Edited by: Julius Bussche on Oct 1, 2008 9:30 AM
Please use meaningful subject titles.

Hi,
loop at i_lfa1.
  loop at i_lfb1 where lifnr = i_lfa1-lifnr.
    read table i_lfm1 with key lifnr = i_lfb1-lifnr.
    if sy-subrc = 0.
      main_table-ekorg = i_lfm1-ekorg.
    endif.
    main_table-lifnr = i_lfa1-lifnr.
    main_table-bukrs = i_lfb1-bukrs.
    main_table-ktokk = i_lfa1-ktokk.
    main_table-name1 = i_lfa1-name1.
    append main_table.
  endloop.
endloop.
Try the above code.
Thanks,
Durai.V

  • BI statistics, Duplicate records in ST03N

Hi Friends,
We have applied BI statistics, and now we are checking query performance in ST03N.
In the Reporting Analysis view I can monitor query access. The problem is that this view shows the statistical data saved in both 0TCT_C02 (InfoCube) and 0TCT_VC02 (virtual cube), so the entries the view displays are duplicated.
How can I solve this?
Thanks in advance!

Hi,
Please implement OSS Note 1401235: "Avoid Duplicate records and handling of Virtual Cube call".
-Vikram

• Avoid duplicate standard recipe qty

Dear All,
I have found a problem while building a report. In transaction C203 we can see the product recipe. Generally there is only one recipe group per product, but for some products I have found two recipe groups, like 5....100 and 5...200; that is fine and it happens.
Now I need to fetch the standard quantity of the input materials versus the process order quantity of the input materials. Currently I fetch two recipe groups, e.g. 0001...820 for the first recipe group and 0001...820 for the second, but I need the quantity for only one recipe group. At the moment the standard quantity appears doubled against the process order quantity, because the BOM number (STLNR) is the same for both recipe groups.
In transaction COR3, on the master data tab, the particular recipe group (like 5...100) is defined, and this is reflected in table AFKO. But mainly I need the standard quantity of the recipe, so I looked at the STAS, STKO, and STPO tables. In STPO I can see the standard quantity of the input materials, and in STKO the product number and its batch size. The STLAL field, needed for linking, is in STAS and STKO, but not in STPO. Now in STPO I see, for example:
STLNR      IDNRK        Qty
00000639   0001...820   50
00000639   0001...820   50
In my report the standard quantity comes out as 100, but I want 50, because I have not found any link with which to filter on one BOM number (STLNR).
Are there any other tables I can look at, or what should I do?
Regards,
Shivam.

Hi Shivam,
You can use the DELETE ADJACENT DUPLICATES syntax to remove duplicate records from an internal table:
sort itab by stlnr idnrk.
delete adjacent duplicates from itab comparing stlnr idnrk.
Regards,
Mohammed Rasul.S

  • Loading ODS - Data record exists in duplicate within loaded data

BI Experts,
I am attempting to load an ODS with the Unique Data Records flag turned ON. The flat file I am loading is a crosswalk with four fields; the first three fields are used as key fields in order to make the records unique. I have had this issue before, but gave up in frustration and added an ascending count field to simply create a unique key. This time I would like to solve the problem if possible.
The errors refer to two data rows that are supposedly duplicates:
Data record 1 - Request / data package / data record: REQU_4CNUD93Q3RCC80XFBG2CZXJ0T / 000003 / 339
Data record 2 - Request / data package / data record: REQU_4CNUD93Q3RCC80XFBG2CZXJ0T / 000003 / 338
And here are the two records the error message refers to:
3   338   3902301480   19C*   *   J1JD
3   339   3902301510   19C*   *   J1Q5
As you can see, the combinations of my three key fields, (3902301480, 19C*, *) and (3902301510, 19C*, *), should not create a duplicate.
Is something off with the numbering of the data records? Am I looking in the wrong place? I have examined my flat file and cannot find duplicates, and the records BW says are duplicates are not. I am really having a hard time with this - any and all help greatly appreciated!

Thank you for the response, Sabuj...
I was about to answer your questions, but I wanted to try one more thing, and it actually worked. I simply moved the MOST unique key field to the TOP of my key field list; it was at the bottom before.
FYI for other people with this issue -
Apparently the ORDER of your key fields is important when trying to avoid creating duplicate records.
I am using four data fields, three of them as key fields. No combination of all three has a duplicate; however, when BW finds that the first two key fields match, it sometimes apparently ignores the third one, which would make the row unique. By simply changing the order of my key fields, I stopped getting the duplicate row errors.
Lesson - if you KNOW that your records are unique and you are STILL getting errors for duplicates, try changing the ORDER of your key fields.

• Master Data Load Failure - duplicate records

Hi Gurus,
I am a new member on SDN.
I now work in BW 3.5. I got a data load failure today; the error message says there are 5 duplicate records. The processing goes to the PSA and then to the InfoObject. I checked the PSA, and the data is available there. How can I avoid these duplicate records?
Please help me; I want to fix this issue immediately.
regards
Milu

Hi Milu,
If it is a direct update, you won't have any request for it.
The data goes directly to the master data tables, so the InfoObject has no Manage tab in which to see the request.
In the case of flexible update, you have update rules from your InfoSource to the InfoObject, so you can delete the request there.
Check this link for flexible update of master data:
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/37dda990-0201-0010-198f-9fdfefc02412

  • BI 7.0 - Duplicate Record Error while loading master data

I am working on BI 7.0 and trying to load master data into an InfoObject.
I created an InfoPackage and loaded the data into the PSA.
I created a transformation and a DTP, and when I execute the DTP I get an error about duplicate records.
I have read all the previous threads about duplicate record errors while loading master data, and most of them suggest checking the 'Ignore duplicate records' option in the InfoPackage. But in 7.0 the InfoPackage can only load to the PSA, and it has no option to ignore duplicate records.
My data loads into the PSA fine; I get this error while loading to the InfoObject using the DTP.
I would appreciate your help in resolving this issue.
Regards,
Ram.

Hi,
In 7.0 the duplicate handling for master data moved to the DTP: on the Update tab, select 'Handle Duplicate Record Keys' (as noted in the 'Avoid duplicate records' reply above).
Also refer to:
http://help.sap.com/saphelp_nw2004s/helpdata/en/45/2a8430131e03c3e10000000a1553f6/frameset.htm
http://help.sap.com/saphelp_nw2004s/helpdata/en/d0/538f3b294a7f2de10000000a11402f/frameset.htm
With rgds,
Anil Kumar Sharma .P
