Setting change to correct duplicate records

Hi Experts,
I tried loading a flat file to an ODS in the quality system and got some duplicate rows, which I had to correct manually in the PSA.
When I tried loading the same flat file to the same ODS in production, I did not get the 'duplicate records' error message.
Can you please tell me which setting has to be changed, either to allow duplicate records or to correct this, in the quality system?
Help would definitely be rewarded with points.
Thanks,
Santosh

Hi Santhosh,
Remove the check for unique data records.
<i><b>Unique Data Records</b></i>
With the Unique Data Records indicator, you determine whether only unique data records are to be updated to the ODS object. This means that you cannot load a data record whose key combination already exists in the ODS object; otherwise the load terminates. Only use this setting when you are sure that only unique data records will be loaded into the ODS object (for example, single documents). A typical application is the loading of mass data, since the setting improves load performance.
You can also deselect this indicator again, even if data has already been loaded into the ODS object. This can be necessary if you want to re-post deleted data records using a repair request (see: Tab Page: Updating). In this case, you need to deselect the Unique Data Records indicator before posting the repair request, after which you can set the Unique Data Records indicator once more. The regeneration of the metadata of the export DataSource, which takes place when the ODS object is reactivated, has no effect on the existing data mart delta method.
More info @ <a href="http://help.sap.com/saphelp_nw04/helpdata/en/a6/1205406640c442e10000000a1550b0/content.htm">ODS Object Settings</a>
Hope it Helps
Srini

Similar Messages

  • Duplicate records in ODS change log

    Hi Experts,
I have loaded records from R/3 to an ODS (PSA and then data target).
The records look good in the ODS active data.
Then I did a delta load from the ODS to an InfoCube, and what I received in the cube were the changed entries.
Then I checked the ODS, and the datamart status was checked.
But when I did a delta load again a couple of days later from the ODS to the InfoCube, I received some duplicate records. I checked the change log of the ODS and the duplicate records were there.
I am not sure why my ODS is transporting entries from an ODS request for which the datamart status was already checked.
(I cannot do a selective deletion because there are about 1000 duplicate entries, in random order.)
I believe my best bet would be to:
1. Delete both delta load requests (the request with correct records and the request with duplicate records) from the InfoCube.
2. Remove the datamart status from the ODS.
3. Delta load from the ODS to the InfoCube.
Please let me know if this will solve the issue, or whether I will still get the duplicate records in the InfoCube.
Also, do I need to delete the change log data?

    Hi Ramesh,
The entries in the active data of the ODS look good. No duplicates.
When I checked the change log,
I had entries in the ODS change log from two different requests.
Request 1 (full repair request via PSA) in the ODS (ODSR_xxxx1) has the same records as in R/3
(e.g., the entry for DocumentA has a key figure value of +$1000).
Request 2 (delta load via PSA) in the ODS (ODSR_xxxx2) has two records, with positive and negative key figures
(e.g., entries for DocumentA with key figure values of +$1000 and -$1000).
So I am not sure whether I should call these records 'duplicates'.
But I see 3 entries in the InfoCube for DocumentA, with key figure values of +$1000, -$1000 and +$1000.
I believe the ODS change log should actually cancel the earlier +$1000 and load the new +$1000 entry,
but it seems not to be replacing the old one.
I hope I am clear.

  • Document prints in reverse, what setting can I change to correct this?

    Document prints in reverse; what setting can I change to correct this? It is not a printer problem: documents print in the correct order when using other programs. Using Acrobat X Pro on a Mac.

    One more place to check.  Open a document you'd like to print, then click the 'Printer...' button.  Hit 'Yes' to the warning dialog box that comes up.  For your specific printer, this might already be enabled.  Change the drop-down menu that says 'Layout' to 'Paper Handling'.  It could be that the Page Order preference has been changed here too.  Just a thought!
    -David

  • Duplicate Records in Details for ECC data source. Help.

    Hello. First post on SDN. I have been searching prior posts but have come up empty. I am in the middle of creating a report linking directly into 4 tables in ECC 6.0. I am having trouble either getting the table links set up correctly or filtering out duplicate record sets that are being reported in the details section of my report. It appears that I have 119 records being displayed, when the parameter values should only yield 7. The details section is repeating the 7 records 17 times (there are 17 matching records for the parameter choices in one of the other tables, which I think is the cause).
    I think this is due to the other table links for my parameter values. But I need to keep the links the way they are for other aspects of the report (header information). The tables in question are using an Inner Join, Enforced Both, =. I tried the other link options, with no luck.
    I am unable to use the "Select Distinct Records" option in the Database menu, since this is not supported when connecting to ECC.
    Any ideas would be greatly appreciated.
    Thanks,
    Barret
    PS. I come from more of a Functional background, so development is sort of new to me. Take it easy on the newbie.

    If you can't establish links to bring back unique data, then use a group to display the data.
    Group the report by a field which is the lowest common denominator.
    Move all fields into the group footer and suppress the group header and details.
    You will not be able to use normal summaries, as they would count/sum all the duplicated data; use Running Totals instead and select "evaluate on change of" the introduced group.
    Ian

  • Duplicate records in TABLE CONTROL

    Hi folks,
    I am doing a module pool where my internal table (itab) data comes into a table control (ctrl). I select one record in the table control and then press a REFRESH push button.
    After pressing the refresh button, some new records come into that same internal table, and I then need to display the modified internal table data (with the new records added) in the table control.
    The modified internal table data does come into the table control, but at the end of the table control some records are repeated.
    Before it reaches the table control, I checked the modified itab and it contains the correct data, i.e. 15 records (previously I had 5 records; after the REFRESH button 10 more records were added). But when this table reaches the table control, it contains some 100 records; I should get only 15.
    Why are these records repeating, and how do I delete the duplicate records from the table control?
    Please suggest where I am making a mistake.
    A correct answer will be rewarded.
    Thanks & Regards

    Hi,
    Thanks for your help, but I should not refresh the internal table, as some records are already present. After pressing the REFRESH button, new records are appended to the existing table, and I then display the previous records as well as the new ones.
    I checked the internal table after the modification; it contains the actual number of records. But after it reaches the table control, more records appear.
    Is this a problem with scrolling, or what?
    Please suggest where I am making a mistake. My code is below.
    PROCESS BEFORE OUTPUT.
      MODULE status_0200.
      MODULE tc_shelf_change_tc_attr.
      LOOP AT object_tab1
           WITH CONTROL tablctrl
           CURSOR tablctrl-current_line.
        MODULE tc_shelf_get_lines.
      ENDLOOP.
    PROCESS AFTER INPUT.
      MODULE set_exit AT EXIT-COMMAND.
      LOOP AT object_tab1.
        CHAIN.
          FIELD: object_tab1-prueflos,
                 object_tab1-matnr.
          MODULE shelf_modify ON CHAIN-REQUEST.
        ENDCHAIN.
        FIELD object_tab1-idx
          MODULE shelf_mark ON REQUEST.
      ENDLOOP.
      MODULE shelf_user_command.
      MODULE user_command_0200.
    ***INCLUDE Y_RQEEAL10_STATUS_0200O01 .
    *&---------------------------------------------------------------------*
    *&      Module  STATUS_0200  OUTPUT
    *&---------------------------------------------------------------------*
    *       Set the GUI status and title bar.
    MODULE status_0200 OUTPUT.
      SET PF-STATUS 'MAIN'.
      SET TITLEBAR 'xxx'.
    ENDMODULE.                 " STATUS_0200  OUTPUT
    *&---------------------------------------------------------------------*
    *&      Module  tc_shelf_change_tc_attr  OUTPUT
    *&---------------------------------------------------------------------*
    *       Remove duplicate rows and pass the line count to the control.
    *       Note: DELETE ADJACENT DUPLICATES only removes neighbouring
    *       rows, so object_tab1 must be sorted by prueflos matnr for it
    *       to catch all duplicates.
    MODULE tc_shelf_change_tc_attr OUTPUT.
      DELETE ADJACENT DUPLICATES FROM object_tab1 COMPARING prueflos matnr.
      DESCRIBE TABLE object_tab1 LINES tablctrl-lines.
    ENDMODULE.                 " tc_shelf_change_tc_attr  OUTPUT
    *&---------------------------------------------------------------------*
    *&      Module  tc_shelf_get_lines  OUTPUT
    *&---------------------------------------------------------------------*
    *       Remember the number of visible table control lines.
    MODULE tc_shelf_get_lines OUTPUT.
      DATA g_tablctrl_lines LIKE sy-loopc.
      IF tablctrl-current_line > tablctrl-lines.
        STOP.
      ENDIF.
    * The name must be G_TABLCTRL_LINES: compute_scrolling_in_tc builds the
    * name dynamically as 'G_' + control name + '_LINES'. The original mix
    * of g_tc_shelf_lines / g_tc_tablctrl_lines broke that dynamic lookup.
      g_tablctrl_lines = sy-loopc.
    * REFRESH CONTROL tablctrl FROM SCREEN 0200.
    ENDMODULE.                 " tc_shelf_get_lines  OUTPUT
    ***INCLUDE Y_RQEEAL10_SHELF_MODIFYI01 .
    *&---------------------------------------------------------------------*
    *&      Module  shelf_modify  INPUT
    *&---------------------------------------------------------------------*
    *       Write the changed screen fields back to the internal table.
    MODULE shelf_modify INPUT.
      MODIFY object_tab1
        INDEX tablctrl-current_line.
    ENDMODULE.                 " shelf_modify  INPUT
    *&---------------------------------------------------------------------*
    *&      Module  set_exit  INPUT
    *&---------------------------------------------------------------------*
    *       Leave the program on an exit command.
    MODULE set_exit INPUT.
      LEAVE PROGRAM.
    ENDMODULE.                 " set_exit  INPUT
    *&---------------------------------------------------------------------*
    *&      Module  shelf_mark  INPUT
    *&---------------------------------------------------------------------*
    *       Single-line selection: clear any previously marked row, then
    *       transport the marked fields of the current line.
    MODULE shelf_mark INPUT.
      DATA g_shelf_wa2 LIKE LINE OF object_tab1.
      IF  tablctrl-line_sel_mode = 1
      AND object_tab1-idx = 'X'.
        LOOP AT object_tab1 INTO g_shelf_wa2
          WHERE idx = 'X'.
          g_shelf_wa2-idx = ''.
          MODIFY object_tab1
            FROM g_shelf_wa2
            TRANSPORTING idx.
        ENDLOOP.
      ENDIF.
      MODIFY object_tab1
        INDEX tablctrl-current_line
        TRANSPORTING idx plnty plnnr plnal.
    ENDMODULE.                 " shelf_mark  INPUT
    *&---------------------------------------------------------------------*
    *&      Module  shelf_user_command  INPUT
    *&---------------------------------------------------------------------*
    *       Let the table control handle its own scrolling OK codes.
    MODULE shelf_user_command INPUT.
      ok_code = sy-ucomm.
      PERFORM user_ok_tc USING    'TABLCTRL'
                                  'OBJECT_TAB1'
                         CHANGING ok_code.
      sy-ucomm = ok_code.
    ENDMODULE.                 " shelf_user_command  INPUT
    *&---------------------------------------------------------------------*
    *&      Module  user_command_0200  INPUT
    *&---------------------------------------------------------------------*
    *       Handle the remaining function codes of screen 0200.
    MODULE user_command_0200 INPUT.
      DATA v_line(3).
      CASE ok_code.
        WHEN 'LAST'.
          READ TABLE object_tab1 WITH KEY idx = 'X'.
          IF sy-subrc = 0.
    *       Collect earlier lots that already have a plan assigned.
            SELECT * FROM qals
                     WHERE enstehdat <= object_tab1-enstehdat
                     AND   plnty NE space
                     AND   plnnr NE space
                     AND   plnal NE space.
              IF sy-dbcnt > 0.
                IF qals-enstehdat = object_tab1-enstehdat.
                  CHECK qals-entstezeit < object_tab1-entstezeit.
                  MOVE-CORRESPONDING qals TO object_tab2.
                  APPEND object_tab2.
                ELSE.
                  MOVE-CORRESPONDING qals TO object_tab2.
                  APPEND object_tab2.
                ENDIF.
              ENDIF.
            ENDSELECT.
            SORT object_tab2 BY enstehdat entstezeit DESCENDING.
            LOOP AT object_tab2 TO 25.
              IF NOT object_tab2-prueflos IS INITIAL.
                APPEND object_tab2 TO object_tab1.
              ENDIF.
              CLEAR object_tab2.
            ENDLOOP.
          ENDIF.
        WHEN 'SAVE'.
          LOOP AT object_tab1 WHERE idx = 'X'.
            IF ( NOT object_tab1-plnty IS INITIAL AND
                 NOT object_tab1-plnnr IS INITIAL AND
                 NOT object_tab1-plnal IS INITIAL ).
              SELECT SINGLE * FROM qals INTO CORRESPONDING FIELDS OF wa_qals
                     WHERE prueflos = object_tab1-prueflos.
              IF sy-subrc = 0.
                wa_qals-plnty = object_tab1-plnty.
                wa_qals-plnnr = object_tab1-plnnr.
                wa_qals-plnal = object_tab1-plnal.
                UPDATE qals FROM wa_qals.
                IF sy-subrc <> 0.
                  MESSAGE e001 WITH 'Plan is not assigned to lot in SAP (update)'.
                ELSE.
                  v_line = tablctrl-current_line - ( tablctrl-current_line - 1 ).
                  DELETE object_tab1.
                ENDIF.
              ENDIF.
            ENDIF.
          ENDLOOP.
        WHEN 'BACK'.
          LEAVE PROGRAM.
        WHEN 'NEXT'.
          CALL SCREEN 300.
      ENDCASE.
    ENDMODULE.                 " user_command_0200  INPUT
    ***INCLUDE Y_RQEEAL10_USER_OK_TCF01 .
    *&---------------------------------------------------------------------*
    *&      Form  user_ok_tc
    *&---------------------------------------------------------------------*
    *       Strip the table control prefix from the OK code and handle
    *       the scrolling function codes.
    *      -->P_TC_NAME     name of the table control
    *      -->P_TABLE_NAME  name of the internal table
    *      <--P_OK_CODE     OK code; cleared when handled here
    FORM user_ok_tc  USING    p_tc_name TYPE dynfnam
                              p_table_name
                     CHANGING p_ok_code LIKE sy-ucomm.
      DATA: l_ok     TYPE sy-ucomm,
            l_offset TYPE i.
      SEARCH p_ok_code FOR p_tc_name.
      IF sy-subrc <> 0.
        EXIT.
      ENDIF.
      l_offset = strlen( p_tc_name ) + 1.
      l_ok = p_ok_code+l_offset.
      CASE l_ok.
        WHEN 'P--' OR                     "top of list
             'P-'  OR                     "previous page
             'P+'  OR                     "next page
             'P++'.                       "bottom of list
          PERFORM compute_scrolling_in_tc USING p_tc_name
                                                l_ok.
          CLEAR p_ok_code.
      ENDCASE.
    ENDFORM.                    " user_ok_tc
    *&---------------------------------------------------------------------*
    *&      Form  compute_scrolling_in_tc
    *&---------------------------------------------------------------------*
    *       Compute the new top line of the table control for a
    *       scrolling function code and reposition the cursor.
    *      -->P_TC_NAME  name of the table control
    *      -->P_OK_CODE  scrolling function code (P--, P-, P+, P++)
    FORM compute_scrolling_in_tc USING    p_tc_name
                                          p_ok_code.
      DATA l_tc_new_top_line TYPE i.
      DATA l_tc_name         LIKE feld-name.
      DATA l_tc_lines_name   LIKE feld-name.
      DATA l_tc_field_name   LIKE feld-name.
      FIELD-SYMBOLS <tc>    TYPE cxtab_control.
      FIELD-SYMBOLS <lines> TYPE i.
      ASSIGN (p_tc_name) TO <tc>.
    * Resolve the G_<control>_LINES variable filled in tc_shelf_get_lines.
      CONCATENATE 'G_' p_tc_name '_LINES' INTO l_tc_lines_name.
      ASSIGN (l_tc_lines_name) TO <lines>.
      IF <tc>-lines = 0.
    *   Table is empty: position on the first line.
        l_tc_new_top_line = 1.
      ELSE.
    *   Let the standard function module compute the new top line.
        CALL FUNCTION 'SCROLLING_IN_TABLE'
          EXPORTING
            entry_act      = <tc>-top_line
            entry_from     = 1
            entry_to       = <tc>-lines
            last_page_full = 'X'
            loops          = <lines>
            ok_code        = p_ok_code
            overlapping    = 'X'
          IMPORTING
            entry_new      = l_tc_new_top_line
          EXCEPTIONS
            OTHERS         = 0.
      ENDIF.
      GET CURSOR FIELD l_tc_field_name
                 AREA  l_tc_name.
      IF syst-subrc = 0.
        IF l_tc_name = p_tc_name.
          SET CURSOR FIELD l_tc_field_name LINE 1.
        ENDIF.
      ENDIF.
      <tc>-top_line = l_tc_new_top_line.
    ENDFORM.                              " COMPUTE_SCROLLING_IN_TC
    Thanks

  • Duplicate records problem

    Hi everyone,
    I'm having a little difficulty resolving a problem with a repeating field causing duplication of data in a report I'm working on, and was hoping someone on here can suggest something to help!
    My report is designed to detail library issues during a particular period, categorised by the language of the item issued. My problem is that on the SQL database that our library management system uses, it is possible for an item to have more than one language listed against it (some books will be in more than one language). When I list the loan records excluding the language data field, I get a list of distinct loan records. Bringing the language data into the report causes the loan record to repeat for each language associated with it, so if a book is in both English and French, the loan record will appear like this:
    LOAN RECORD NO.     LANGUAGE CODE
      123456                             ENG
      123456                             FRE
    So, although the loan only occurred once, I have two instances of it in my report.
    I am only interested in the language that appears first, and I can exclude duplicated records from the report page. I can also count only the distinct records to get an accurate overall total. My problem is that when I group the loan records by language code (I really need to do this, as there are millions of loan records held in the database), the distinct count stops being a solution: placed at that group level, it only excludes duplicates within the respective group it is placed in. So my report would display something like this:
    ENG     1
    FRE      1
    A distinct count of the whole report would give the correct total of 1, but a cumulative total of the figures calculated at the language code group level would total 2, and be incorrect. I've encountered similar results when using Running Totals evaluating on a formula that excludes repeated loan record numbers from the count, but again, when I group on the language code this goes out of the window.
    I need to find a way of grouping the loan records by language, with a total count of loan records alongside each grouping that accurately reflects how many loans of that language took place.
    Is this possible using a calculation formula when there are repeating fields, or do I need to find a way of merging the repeating language fields into one field, so that the report would appear like:
    LOAN RECORD     LANGUAGE CODE
      123456                      ENG, FRE
    Any suggestions would be greatly appreciated, as aside from this repeating language data there are quite a few other repeating database fields on the system that it would be nice to report on!
    Thanks!

    If you create a group on loan, then a group on language, and place the values in the group headers (loan ID in the loan header), you should only see each loan ID once. Place the language in the language group header and you should only see that one time; a group header returns the first value of a unique ID.
    Then, to calculate while avoiding the duplicates, use manual running totals. Create a set for each summary you want, and make sure each set has a different variable name.
    MANUAL RUNNING TOTALS
    RESET
    The reset formula is placed in a group header or report header to reset the summary to zero for each unique record it groups by:
    whileprintingrecords;
    NumberVar x := 0;
    CALCULATION
    The calculation formula is placed adjacent to the field or formula that is being calculated. (If there are duplicate values, create a group on the field being calculated on; if there are no duplicate records, the detail section is used.)
    whileprintingrecords;
    NumberVar x := x + {field to total}; // replace {field to total} with your field or formula
    DISPLAY
    The display formula is the sum of what is being calculated. It is placed in a group, page or report footer (generally the group footer of the group whose header holds the reset):
    whileprintingrecords;
    NumberVar x;
    x
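    If you would rather merge the repeating language field into one column at the database level (the "ENG, FRE" layout from the question), here is a minimal SQL sketch of the idea. It assumes an Oracle back end (11gR2 or later for LISTAGG) and a hypothetical child table LOAN_LANGUAGE holding one row per loan and language:
    SELECT loan_record_no,
           LISTAGG (language_code, ', ')
             WITHIN GROUP (ORDER BY language_code) AS language_codes
      FROM loan_language            -- hypothetical table and column names
     GROUP BY loan_record_no;
    Something like this, placed in a Crystal SQL command or a database view, yields one row per loan, so ordinary counts work again.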

  • 36 duplicate record  found. -- error while loading master data

    Hello BW Experts,
    Error while loading master data:
    (green light) Update PSA (50000 records posted): No errors
    (green light) Transfer rules (50000 -> 50000 records): No errors
    (green light) Update rules (50000 -> 50000 records): No errors
    (green light) Update (0 new / 50000 changed): No errors
    (red light) Processing end: errors occurred
         Processing 2 finished
         36 duplicate record found. 15718 recordings used in table /BIC/PZMATNUM
         36 duplicate record found. 15718 recordings used in table /BIC/XZMATNUM
    This error repeats with all the data packets.
    What could be the reason for the error, and how can it be corrected?
    Any suggestions appreciated.
    Thanks,
    BWer

    BWer,
    We have exactly the same issue when loading the infoobject 0PM_ORDER, the datasource is 0PM_ORDER_ATTR.
    The workaround we have been using is a 'manual push' from the details tab of the monitor. Weirdly, we don't have this issue in our test and dev systems, and even in production it doesn't occur some days. We have all the necessary settings enabled in the InfoPackage.
    Did you figure out a solution for your issue? If so, please let me know, and likewise I will.
    -Venkat

  • Data loader : Import -- creating duplicate records ?

    Hi all,
    Has anyone else encountered the behaviour where Oracle Data Loader creates duplicate records (even with the option duplicatecheckoption=externalid set)? When I check the "import request queue" view, the request parameters of the job look fine:
    Duplicate Checking Method == External Unique ID
    Action Taken if Duplicate Found == Overwrite Existing Records
    but Data Loader has created new records where the "External Unique ID" already exists.
    Very strange is that when I create the import manually (using the Import Wizard), exactly the same import works correctly! There the duplicate checking method works and the record is updated.
    I know the Data Loader has 2 methods, one for update and one for import; however, I would not expect the import to create duplicates if the record already exists, rather than doing nothing.
    Is anyone else experiencing the same? I hope this is not expected behaviour! By the way, the "Update" method works fine.
    thanks in advance, Juergen
    Edited by: 791265 on 27.08.2010 07:25
    Edited by: 791265 on 27.08.2010 07:26

    Sorry to hear about your duplicate records, Juergen. Hopefully you performed a small test load first, before a full load, which is a best practice for data import that we recommend in our documentation and courses.
    Sorry also to inform you that this is expected behavior --- Data Loader does not check for duplicates when inserting (aka importing). It only checks for duplicates when updating (aka overwriting). This is extensively documented in the Data Loader User Guide, the Data Loader FAQ, and in the Data Import Options Overview document.
    You should review all documentation on Oracle Data Loader On Demand before using it.
    These resources (and a recommended learning path for Data Loader) can all be found on the Data Import Resources page of the Training and Support Center. At the top right of the CRM On Demand application, click Training and Support, and search for "*data import resources*". This should bring you to the page.
    Pete

  • Duplicate records-problems

    Hi gurus,
    We created a text DataSource in R/3 and replicated it into BW 7.0.
    An InfoPackage (loading to PSA) and a data transfer process were created and included in a process chain.
    The job failed because of duplicate records.
    We have now discovered that the setting "Delivery of Duplicate Records" for this DataSource in BW is set to "Undefined".
    When creating the DataSource in R/3, there were no settings for the "Delivery of Duplicate Records".
    In BW, I've tried to change the setting "Delivery of Duplicate Data Records" to NONE, but when I go into change mode, the setting is not changeable.
    Does anyone have any suggestion on how to solve this problem?
    Thanks,
    @nne Therese

    Hi Muraly,
    I have the same issue. I am loading texts from R/3 to PSA using an InfoPackage with full update. From PSA I am using a DTP in delta mode with the option "valid records update, no reporting (request red)".
    It was running fine for the last few weeks: the transferred and added records were the same as in the PSA request every day.
    Suddenly the load to the InfoObject failed. I deleted the request from the InfoObject and reloaded using the DTP; it failed again. I tried loading with full update, as it is texts, and it failed again. When I analysed the error, it said duplicate records. So I changed the DTP by checking the option to handle duplicate records and loaded with full update. It worked fine: it transferred more than 50000 records, and the added records exactly matched the PSA request.
    I reset the DTP back to delta and loaded today, but the transferred records are 14000 and the added records (3000) are the same as in the PSA request. If you look at the history of loads, the number of records transferred and added to the InfoObject used to match the number of records in the PSA request every day.
    Why this difference now? In production I have no issues. Since I changed the DTP, will transporting it to production make any difference? This is my first time doing BI 7.0.
    Please advise me, and correct me if I am wrong.
    Thanks,
    Sudha..

  • ITunes changing aspect ratio of recorded TV shows

    I use Elgato's EyeTV to record TV programmes, and their Turbo264 plug-in to encode for the Apple TV. All the programmes added to my iTunes library are suddenly appearing with the aspect ratio of 1456*576.
    I am certain that this is an iTunes problem rather than an Elgato problem because:
    1) If you look at the file properties in the Finder, the files are the correct aspect ratio of 1024*576.
    2) I recorded an entire series in December and watched the first two episodes, which displayed correctly, but they appear to have become retrospectively stretched.

    Hi Richard,
    sounds like you are suffering from the same sort of issue I am;
    http://discussions.apple.com/thread.jspa?threadID=1377284&tstart=0
    Played-back programmes are ending up extremely letterboxed through iTunes, and if they weren't widescreen to start with they appear as a window in the centre of the screen.
    The weird thing is that if you open the iTunes library files using QuickTime, you will find they display in the same strange way until you change a couple of playback settings. Then they appear perfectly.
    It would seem that you can't make the same playback setting changes in iTunes.

  • Delete duplicate records based on condition

    Hi Friends,
    I am scratching my head over how to select one record from a group of duplicate records based on a column condition.
    Let's say I have a table with the following data:
    ID   START_DATE   END_DATE     ITEM_ID   MULT   RETAIL   |   RETAIL / MULT
    1    10/17/2008   1/1/2009     83        3      7        |   2.3333
    2    10/17/2008   1/1/2009     83        2      4        |   2
    3    10/17/2008   1/1/2009     83        2      4        |   2
    4    10/31/2008   1/1/2009     89        3      6        |   2
    5    10/31/2008   1/1/2009     89        4      10       |   2.5
    6    10/31/2008   1/1/2009     89        4      10       |   2.5
    7    10/31/2008   1/1/2009     89        6      6        |   1
    8    10/17/2008   10/23/2008   124       3      6        |   2
    From the above records, the rule to identify duplicates is based on START_DATE, END_DATE and ITEM_ID.
    Hence the duplicate sets are {1,2,3} and {4,5,6,7}.
    Now I want to keep the one record from each duplicate set which has the lowest value for retail/mult (retail divided by mult) and delete the rest.
    So from the above table data, for duplicate set {1,2,3}, the min(retail/mult) is 2. But records 2 & 3 have the same value, i.e. 2.
    In that case, pick either of those records and delete records 1 and 3 (or 1 and 2).
    Until now it was all pretty straightforward, and I was using the delete statement below.
    DELETE FROM table_x a
          WHERE ROWID >
                   (SELECT MIN (ROWID)
                      FROM table_x b
                     WHERE a.id = b.id
                       AND a.start_date = b.start_date
                       AND a.end_date = b.end_date
                       AND a.item_id = b.item_id);
    Due to sudden requirement changes I need to change my SQL.
    So, experts, please throw some light on how to get past this hurdle.
    Thanks,
    Raj.
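    One way to express the new rule directly is with an analytic ranking. Here is a minimal sketch, assuming Oracle 9i or later and the table/columns shown above, with ties broken by the lowest ID (matching "pick either"):
    DELETE FROM table_x
     WHERE ROWID IN
           (SELECT rid
              FROM (SELECT ROWID AS rid,
                           ROW_NUMBER () OVER (
                             PARTITION BY start_date, end_date, item_id
                             ORDER BY retail / mult, id) AS rn
                      FROM table_x)
             WHERE rn > 1);  -- rn = 1 is the row kept in each set
    Single-occurrence keys such as ID 8 only ever get rn = 1, so they survive untouched.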

    Well, it was my mistake that I forgot to mention one more point in my earlier post.
    Sentinel,
    Your UPDATE works perfectly if I am updating only the NEW_ID column.
    But I have to update the STATUS_ID as well for these duplicate records.
    ID   START_DATE   END_DATE     ITEM_ID   MULT   RETAIL   NEW_ID   STATUS_ID   |   RETAIL / MULT
    1    10/17/2008   1/1/2009     83        3      7        2        1           |   2.3333
    2    10/17/2008   1/1/2009     83        2      4                             |   2
    3    10/17/2008   1/1/2009     83        2      4        2        1           |   2
    4    10/31/2008   1/1/2009     89        3      6        7        1           |   2
    5    10/31/2008   1/1/2009     89        4      10       7        1           |   2.5
    6    10/31/2008   1/1/2009     89        4      10       7        1           |   2.5
    7    10/31/2008   1/1/2009     89        6      6                             |   1
    8    10/17/2008   10/23/2008   124       3      6                             |   2
    So if I have to update the STATUS_ID, then there must be a WHERE clause in the update statement:
    WHERE ROW_NUM = 1
      AND t2.id != t1.id
      AND t2.START_DATE = t1.START_DATE
      AND t2.END_DATE = t1.END_DATE
      AND t2.ITEM_ID = t1.ITEM_ID
    In fact, the entire WHERE clause of the inner select statement must be in the update's WHERE clause, which makes it totally impossible, as T2 is visible only within the first select statement.
    Any thoughts please?
    I appreciate your efforts.
    This is definitely a very good learning curve. In all my experience I was always writing straightforward UPDATE statements, but not like this one. Very interesting.
    Thanks,
    Raj.
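    A sketch of one way around the T2 visibility problem: drive a MERGE from a subquery that computes the surviving ID per duplicate set, then set both columns on the remaining rows. This assumes Oracle 10g or later, the keep-the-lowest-retail/mult rule from before, and that STATUS_ID is simply set to 1 as in the sample data (the ROWID join and the literal 1 are assumptions, not part of the original statement):
    MERGE INTO table_x t
    USING (SELECT rid, keep_id
             FROM (SELECT ROWID AS rid, id,
                          COUNT (*) OVER (
                            PARTITION BY start_date, end_date, item_id) AS cnt,
                          FIRST_VALUE (id) OVER (
                            PARTITION BY start_date, end_date, item_id
                            ORDER BY retail / mult, id) AS keep_id
                     FROM table_x)
            WHERE cnt > 1            -- only genuine duplicate sets
              AND id <> keep_id) s   -- leave the surviving row untouched
       ON (t.ROWID = s.rid)
     WHEN MATCHED THEN
       UPDATE SET t.new_id = s.keep_id, t.status_id = 1;
    Because the analytic work happens entirely inside the USING subquery, nothing in the outer UPDATE needs to see T2.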

  • Reconcil. acct is missing in master record, correct master record

    Dear Experts,
    I created an intercompany billing document, but it has not generated any accounting document.
    It gives the error "Reconcil. acct is missing in master record, correct master record".
    In my customer master record I have filled in the reconciliation account,
    and if I now want to make any changes to it, I am not able to do so.
    So what can be the alternative solution for this?

    hello, friend.
    You mentioned that the field for the recon account (company code view) is greyed out in change mode? It is possible your FI team has set this field as display-only (text control) while in change mode. Please check with your FI consultant; if so, the field should be made modifiable so that you can make changes.
    If this is not possible, then cancelling the intercompany transaction becomes the likely option.
    Please post feedback again.

  • Re: ICR - Duplicate records in FBICRC003A

    Hi Ralph,
    It seems that FBICA3 creates duplicate records in FBICRC003A if assigned items were subsequently unassigned. I believe this is due to a program error. These are the steps I went through to reproduce it:
    1) Post cross-company FI document (FB01)
    2) Run FBICS3
       FBICRC003A records 1 and 2 were created with RTYPE = 'blank'
    3) Run FBICA3
       FBICRC003A records 3 and 4 were created with RTYPE = '2'
    4) Run FBICR3 and unassign the items
       FBICRC003A records 3 and 4 were changed with RTYPE = '1'
       ZUONR was set to the record number of the unassigned record (ie: 1 or 2)
    5) Run FBICS3
    6) Run FBICA3
       FBICRC003A records 3 and 4 were changed with RTYPE = '2'
       Duplicate FBICRC003A records 5 and 6 were created with RTYPE = '2'
    Are you aware of this issue? FYI, OSS Notes 863630 and 1062292 are both implemented in our system.
    Thank you.
    Regards,
    Siong

    Hello Siong,
    This is not a known issue. I suggest that you create a service ticket.
    Best regards,
    Ralph

  • Duplicate records in BW Data Loads

    In my project I am facing duplicate records in data loads when I compare the PSA with the DSO. How do I check which records are duplicates, and is there any mechanism for doing so (through an Excel sheet or otherwise)? Please help me out. Thanks in advance for your quick response.
    Edited by: svadupu on Jul 6, 2011 3:09 AM

    Hi,
    Getting duplicate records in the PSA is fine, because no keys are set in the PSA and all the records come straight from the source.
    In the case of a standard DSO, records are always overwritten, so you would not get any duplicates.
    In case you are getting duplicate records in the PSA and need to find them:
    Go to PSA -> Manage -> PSA maintenance -> change the number of records from 1000 to the actual number of records that came in -> in the menu bar, go to List -> Save -> File -> change the path from the SAP directory to some other path and save the file.
    Open the file, take the columns forming the DSO keys together, and sort ascending. You will find the duplicate records in the PSA.
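    If the data can instead be queried directly on the database, here is a minimal sketch of the same duplicate check in SQL, where PSA_EXPORT, KEY1 and KEY2 are hypothetical stand-ins for the exported data and your DSO key columns:
    SELECT key1, key2, COUNT (*) AS cnt
      FROM psa_export            -- hypothetical export of the PSA key columns
     GROUP BY key1, key2
    HAVING COUNT (*) > 1         -- any row returned is a duplicated key
     ORDER BY cnt DESC;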

  • Duplicate records in material master

    Hi All,
    I am trying to init the material master and I am getting this error message:
    "281 duplicate record found. 0 recordings used in table /BI0/XMATERIAL"
    This is not the first time I am initializing. Deltas were running for a month, then I had to run a repair request to capture some changes, and when I ran the delta from then onwards I got this message.
    I started by deleting all the records and running this init packet.
    I have tried the error handling, and I also do not see any option for "ignore duplicate records" in the packet.
    I cannot see any error (red) records in the PSA either, even though the message says there are errors.
    Please advise.
    Thanks

    Hi,
    The duplicate record check is in the extraction program. I would suggest you do not deactivate it or comment it out.
    What you should do is go back to your material master records in the source system and sort out the materials that do not have a unique identifier. Once this is sorted out, you can re-run your delta; you shouldn't have the problem again. I once had the same problem with an HR extraction, and I had to go back to the source data and ask the business to correct the duplication. A record for an employee had been changed and there was an overlap of dates in the employee's records; the BW extraction program saw this as a duplicate record.
    I hope this helps.
    Do not forget to award the points, please.
    Regards,
    Jacob

Maybe you are looking for

  • Date format in the file datastore
  • Get Delay from MD4C Report
  • Bluetooth headset crashes OS 10.4.11
  • I am disappointed apple assistance in Italy, kindly can you help?
  • Making Data DVD from iMovie HD (5)