Duplicate Records in COPA cube

Dear Friends,
We have two BW systems with one R/3 system as the source. There are identical COPA cubes in both BW systems. We created the COPA cube in the second BW system recently and did a full load. When I validate the data between the two COPA cubes, I see duplicate records in the second COPA cube. I am wondering why this is so. Is there any setting on the R/3 side that I am missing, or anything else? Any ideas?
Thanks
Raj

Hi,
I am also facing the same problem. In ZCOPA_C01_Q06 I am getting double values. Kindly help with the steps, as I am new to BI.
Regards
Amarendra

Similar Messages

  • How to delete duplicate records in cube

    Hi,
    Can you help me with how to delete the duplicate records in my cube,
    and tell me some predefined cubes and DataSources for the MM and SD modules?

    Hi Anne,
    about "duplicate records" could you be more precise?.
    The must be at least one different Characteristic to distinguish one record from the other (at least Request ID). In order to delete Data from InfoCubes (selectively) use ABAP Report RSDRD_DELETE_FACTS (be carefull it does not request any confirmation as in RSA1 ...).
    About MM and SD Cubes see RSA1 -> Business Content -> InfoProvider by InfoAreas. See also for MetadataRepository about the same InfoProviders.
    About DataSources just execute TCode LBWE in you source sys: there you see all LO-Cockipt Extrators.
    Hope it helps (and if so remember reward points)
    GFV

  • Delete overlapping/duplicate records from cube

    Hi All,
    Kindly let me know how to delete overlapping requests from a cube. The cube is being loaded from various InfoSources, but there are records which get duplicated and are not wanted. How do I delete the duplicate records from the cube?
    Regards,
    dola

    I think what Arun said is perfectly right:
    use a DSO to consolidate the various requests from the different InfoSources,
    then load from the DSO to the cube. It is very much possible, though it will require a little work.
    The 'delete duplicate records' option is usually used for master data; with transaction data I don't think it is advisable.
    Regards,
    RK

  • Regarding allowing duplicate records

    Hi,
    From the R/3 system we are getting duplicate records. We need these duplicate records in the cube, so which option has to be selected at the cube to allow duplicates?
    Can anyone please let me know?

    Hi,
    Using transaction RSKC you can allow the BW system to accept special characters in the data coming from the source system. The list of characters can be obtained after analyzing the source system, or you can confirm it with the client.
    If it is very important, go to RSKC and edit it; if it is not required, go to the PSA and delete it.
    Thanks
    sudhakar.

  • Duplicate records in Cube Level ( BI 7.0 )

    Dear All
    I am working on BI 7.0 and have an issue. I am loading data from a flat file to an ODS and from the ODS to a cube. In the ODS the Overwrite option is selected; at cube level we have the Summation option. While loading the data from the flat file to the ODS the records are fine, and the load from the ODS to the cube also runs fine, but in the cube I am getting duplicate records.
    What are the best options to go ahead in such a situation?
    Regards
    KK

    I am sharing a case that occurred for me. Please see if it is applicable for you.
    Sometimes, when any type of problem occurs in the step that loads the cube, we restart the load. If the cube load prompts with 'the last load was unsuccessful... reload?', this problem may occur: it loads the records of the previous load as well.
    Verify what has been duplicated by comparing the ODS change log table with the cube load record count. If the number of records updated equals the total of the records in the different ODSR requests (in the change log table), delete the previous load in the cube (provided no other side effect is produced, e.g. from a start routine).
    Cheers.

  • Duplicate records in cube.

    Hi experts,
    I have checked my PSA, which has no duplicate records, but when I load the data to the cube, I get duplicate records in the cube.
    Can anyone help me with this?

    Hi Satish,
    Please check in R/3.
    You said it is a delta load: go to RSO2, select the required DataSource and press Enter,
    then click on the generic delta tab and check the settings you have given:
    Safety interval lower limit: give a suitable value so that the system knows from where exactly the data has to be loaded.
    Check the two options below:
    New status for changed records
    Additive delta
    Select the appropriate one.
    If helpful, please try this.
    Regards
    Swathi

  • Getting duplicate records in cube from each data packet.

    Hi Guys,
    I am using BI version 3.x and I am getting duplicate records in the cube. I have written code (a start routine) to delete these duplicate records, but it still gives the same result.
    The duplication depends on the number of packets.
    E.g.: if the number of packets is 2, I get 2 duplicate records;
    if the number of packets is 7, I get 7 duplicate records.
    How can I modify my code so that it keeps only one record and eliminates the duplicates? Any other solution is also welcome.
    Thanks in advance.
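
    For reference, a minimal sketch of what the body of such a 3.x start routine could look like, assuming the duplicates are exact copies arriving within the same data package; the key fields DOC_NUMBER and S_ORD_ITEM are placeholders (not taken from this thread) and must be replaced by whatever identifies a record uniquely in your model:

    * Hedged sketch: drop duplicate rows inside the current data package.
    * DOC_NUMBER / S_ORD_ITEM are assumed key fields - adjust to your model.
      SORT DATA_PACKAGE BY doc_number s_ord_item.
      DELETE ADJACENT DUPLICATES FROM DATA_PACKAGE
             COMPARING doc_number s_ord_item.
    * Note: this only catches duplicates within one packet. Duplicates spread
    * across packets (as described above) still get through, which is why an
    * intermediate DSO with overwrite is usually the cleaner fix.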

    Hi Andreas, Mayank,
    thanks for your reply.
    I created my own DSO, but it gives an error, and I tried with the standard DSO too; it still gives the same error, 'could not activate'.
    The error names the function module RSB1_OLTPSOURCE_GENERATE.
    I searched in R/3 but could not find it.
    Even the DSOs I created on a trial basis give the same problem.
    I think it is a problem on the Basis side.
    Please help if you have any idea.
    Thanks.

  • How to get rid of duplicate records generated from a hierarchical cube in SQL?

    Hi All,
    Database version: 10gR2.
    I am trying to aggregate data over two hierarchical dimensions, specifically organization and product.
    I am using one ROLLUP for each dimension, i.e. two ROLLUPs in the GROUP BY clause, to do the aggregation for every level of organization and product included in the hierarchy.
    The troubling part is that products that have data in the corresponding fact table are not always located at the lowest level (which is 6) of the product hierarchy.
    e.g.
    product_id                                              level
    0/01/0101/010102/01010201                               5    --> 01010201, at level 5, has data in the fact table
    0/01/0101/010103                                        4    --> 010103, at level 4, has data in the fact table as well
    0/02/0201/020102/02010203/0201020304/020102030405       6    --> at level 6 (the lowest level), has data in the fact table
    We have a flat product hierarchy stored in a table as below:
    prod_id      up_code_1  up_code_2  up_code_3  up_code_4  up_code_5  up_code_6
    01010201     0          01         0101       010102     01010201   NULL
    010103       0          01         0101       010103     NULL       NULL
    Due to the NULL in the level 6 column for 01010201, one duplicate record is generated when I run the query below; for 010103 there will be 2 duplicate records, and for 020102030405 there will be none.
    I encounter the same issue with the organization dimension.
    Currently I am using DISTINCT to get rid of the duplicate records, but it does not feel right to do it this way.
    So, I wonder if there is a more formal and standard way to do this?
    select distinct ORG_ID, DAY_ID,  TRADE_TYPE_ID, cust_id, PRODUCT_ID, QUANTITY_UNIT, COST_UNIT, SOURCE_ID,
          CONTRACT_AMOUNT, CONTRACT_COST, SALE_AMOUNT,SALE_COST, ACTUAL_AMOUNT, ACTUAL_COST, TRADE_COUNT
    from (     
    select  coalesce(UP_ORG_ID_6, UP_ORG_ID_5, UP_ORG_ID_4, UP_ORG_ID_3, UP_ORG_ID_2, UP_ORG_ID_1) as ORG_ID,
          a.day_id as day_id,        
          a.TRADE_TYPE_ID as TRADE_TYPE_ID,
          a.CUST_ID,
          coalesce(UP_CODE_6, UP_CODE_5, UP_CODE_4, UP_CODE_3, UP_CODE_2, UP_CODE_1) as product_id,
          QUANTITY_UNIT,
          COST_UNIT,
          A.SOURCE_ID as SOURCE_ID,
          SUM(CONTRACT_AMOUNT) as CONTRACT_AMOUNT,
          SUM(CONTRACT_COST) as CONTRACT_COST,
          SUM(SALE_AMOUNT) as SALE_AMOUNT,
          SUM(SALE_COST) as SALE_COST,
          SUM(ACTUAL_AMOUNT) as ACTUAL_AMOUNT,
          SUM(ACTUAL_COST) as ACTUAL_COST,
          SUM(TRADE_COUNT) as TRADE_COUNT     
    from DM_F_LO_SALE_DAY a, DM_D_ALL_ORG_FLAT B, DM_D_ALL_PROD_FLAT D --, DM_D_LO_CUST E
    where a.ORG_ID=B.ORG_ID
          and a.PRODUCT_ID=D.CODE
    group by rollup(UP_ORG_ID_1, UP_ORG_ID_2, UP_ORG_ID_3, UP_ORG_ID_4, UP_ORG_ID_5, UP_ORG_ID_6),
          a.TRADE_TYPE_ID,
          a.day_id,
          A.CUST_ID,
          rollup(UP_CODE_1, UP_CODE_2, UP_CODE_3, UP_CODE_4, UP_CODE_5, UP_CODE_6),
          a.QUANTITY_UNIT,
          a.COST_UNIT,
          a.SOURCE_ID );
    Note: GROUPING_ID does not seem to help; at least I did not find it useful in this scenario.
    any recommendation, links or ideas would be highly appreciated as always.
    Thanks

    Has anyone ever encountered this kind of problem?
    Any thought would be appreciated.
    Thanks.
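
    For reference, one standard way to suppress these extra rows without DISTINCT is to filter on GROUPING() in the HAVING clause: GROUPING(col) returns 0 when col is still a grouping column of the row and 1 when ROLLUP has rolled it away, and the duplicates appear exactly where a still-grouped column holds a data NULL. The sketch below illustrates the idea for the product dimension only, reusing the tables and columns from the query above (the same pattern would apply to the UP_ORG_ID columns); it is meant as an illustration, not a drop-in replacement for the full query.

    -- keep a row only if every column that is still grouped is non-NULL,
    -- i.e. drop the grouping sets that merely repeat a higher level
    select coalesce(up_code_6, up_code_5, up_code_4,
                    up_code_3, up_code_2, up_code_1) as product_id,
           sum(sale_amount)                          as sale_amount
    from   dm_f_lo_sale_day a, dm_d_all_prod_flat d
    where  a.product_id = d.code
    group by rollup(up_code_1, up_code_2, up_code_3,
                    up_code_4, up_code_5, up_code_6)
    having (grouping(up_code_2) = 1 or up_code_2 is not null)
       and (grouping(up_code_3) = 1 or up_code_3 is not null)
       and (grouping(up_code_4) = 1 or up_code_4 is not null)
       and (grouping(up_code_5) = 1 or up_code_5 is not null)
       and (grouping(up_code_6) = 1 or up_code_6 is not null);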

  • Don't load duplicate files to a cube through DTP

    Hi gurus, I want to know if it is possible to avoid loading the same data from files to a cube when doing this through a DTP.
    I thought of doing this with a step in a process chain that eliminates duplicate entries in the cube, but the problem is that I don't have any selection in the DTP; this is a full load.
    Regards..

    Neo,
    do you want to avoid loading duplicate files to a cube using a DTP, or avoid loading duplicates from the file to the cube?
    If it is avoiding duplicate records from the file to the cube, then ideally you can use a DSO in between.

  • How to delete the duplicate requests in a cube after compression.

    Hi experts,
    1. How do I delete duplicate requests in a cube after compression?
    2. How do I show a characteristic and a key figure side by side in a BEx query output?
    Regards,
    Nishuv.

    Hi,
    You cannot delete the request once it is compressed, as all the data will have been moved to the E table.
    If you have double records, you may use selective deletion.
    Check this thread:
    How to delete duplicate data from compressed requests?
    Regards,
    shikha

  • How to delete duplicate record in Query report

    Hi Experts,
    I have created an InfoSet and query in my SAP system, but I want to delete some duplicate records before the list output. Can we add some further code in the Extras coding section to delete the duplicates? How would I do that? Would you please give me a brief explanation?
    Joe

    Hi,
    You can try restricting the characteristic values in the filter area of the Query Designer so that you get the correct result.
    But I would still suggest that you do not keep the duplicate records in the cube, as they are not required and give you wrong results.
    So you can reload the correct records into the cube in order to avoid such problems in the future.
    Regards,
    Amit

  • Multiple records created in cube

    Hi BI Expert,
    I have got a DSO that passes data to an InfoCube. When it passes the data, the cube generates multiple records. I am using the following code, which was originally written by a BI consultant.
    PROGRAM UPDATE_ROUTINE.
    *$*$ begin of global - insert your declaration only below this line  -*
    TABLES: /bic/azo_sop_100,
            /bic/aZO_POP_100,
            /bic/azo_pop0600,
            /BIC/AZO_PRE_100,
            /BIC/AZO_SBP_100,
            /BIC/AZO_SDP_100.
    DATA: in    TYPE f,
          out   TYPE f,
          denom TYPE f,
          numer TYPE f,
          vc_df_preq    type /BI0/OIDF_PREQ.
    * Def. of 'credit documents': the following doc. categories are 'credit docs':
    *   return order (H)
    *   credit memo  (K)
    * Credit documents are delivered with a negative sign. The sign is switched
    * to positive to provide positive key figures in the cube.
    * The combination of the characteristics DEB_CRED and DOC-CLASS provides
    * a comfortable way to distinguish e.g. positive incoming orders from
    * order returns.
    * Definition of the 'debit documents': the following document types are
    * 'debit documents':
    *   return (H)
    *   credit memo request (K)
    * Debit documents are delivered with a negative sign. To write the key
    * figures into the cube as positive values, the sign is reversed.
    * The combination of the characteristics DEB_CRED and DOC-CLASS lets you
    * quickly distinguish e.g. between incoming orders and returns.
    DATA: deb_cred(2) TYPE c VALUE 'HK'.
    DATA: quot(1) TYPE c VALUE 'B'.
    CONSTANTS: c_msgty_e VALUE 'E'.
    * Variable declarations for
    * derivation of fiscal week {by SB: 18.10.2007}
    DATA: lv_createdon TYPE /bi0/oicreatedon,
          lv_period    LIKE t009b-poper,
          lv_year      LIKE t009b-bdatj,
          lv_fiscweek  TYPE /bi0/oicalweek,
          w_vendor     like /BIC/AZO_POP_100-VENDOR.
    * Deriving master data attribute (0MRP_CONTRL)
    * from 0MAT_PLANT {by SB: 18.10.2007}
    * Hashed table declaration for 0MAT_PLANT master data
    * DATA: it_mat_plant
    *       TYPE HASHED TABLE OF /bi0/pmat_plant
    *       WITH UNIQUE KEY plant mat_plant
    *       INITIAL SIZE 0.
    *$*$ end of global - insert your declaration only before this line   -*
    FORM compute_data_field
      TABLES   MONITOR STRUCTURE RSMONITOR "user defined monitoring
      USING    COMM_STRUCTURE LIKE /BIC/CS8ZO_SOP_5
               RECORD_NO LIKE SY-TABIX
               RECORD_ALL LIKE SY-TABIX
               SOURCE_SYSTEM LIKE RSUPDSIMULH-LOGSYS
      CHANGING RESULT LIKE /BIC/VZIB_SOP_1T-SUBTOT_1S
               RETURNCODE LIKE SY-SUBRC
               ABORT LIKE SY-SUBRC. "set ABORT <> 0 to cancel update
    *$*$ begin of routine - insert your code only below this line        -*
      DATA: VALUE LIKE COMM_STRUCTURE-NET_VALUE.
      DATA: US_RATE_TYPE LIKE COMM_STRUCTURE-RATE_TYPE.
      CLEAR RESULT.
      IF NOT COMM_STRUCTURE-SUBTOTAL_1 IS INITIAL AND
         COMM_STRUCTURE-DOC_CURRCY NE COMM_STRUCTURE-STAT_CURR.
        US_RATE_TYPE = COMM_STRUCTURE-RATE_TYPE.
        IF US_RATE_TYPE EQ SPACE.
          US_RATE_TYPE = 'M'.
        ENDIF.
      ENDIF.
      IF COMM_STRUCTURE-DOC_CURRCY NE COMM_STRUCTURE-STAT_CURR.
        CALL FUNCTION 'CONVERT_TO_STAT_CURRENCY'
             EXPORTING
                  DATE                 = COMM_STRUCTURE-ST_UP_DTE
                  DOCUMENT_AMOUNT      = COMM_STRUCTURE-SUBTOTAL_1
                  DOCUMENT_CURRENCY    = COMM_STRUCTURE-DOC_CURRCY
                  LOCAL_CURRENCY       = COMM_STRUCTURE-LOC_CURRCY
                  STAT_CURRENCY        = COMM_STRUCTURE-STAT_CURR
                  LOCAL_RATE           = COMM_STRUCTURE-EXCHG_RATE
                  STAT_RATE            = COMM_STRUCTURE-EXCHG_STAT
                  LOCAL_TYPE_OF_RATE   = US_RATE_TYPE
                  STAT_TYPE_OF_RATE    = US_RATE_TYPE
             IMPORTING
                  STATISTICAL_AMOUNT   = VALUE
             EXCEPTIONS
                  LOCAL_RATE_NOT_FOUND = 1
                  STAT_RATE_NOT_FOUND  = 2.
        CASE SY-SUBRC.
          WHEN 0.
            RESULT = VALUE.
            RETURNCODE = 0.
          WHEN 1.
            CLEAR MONITOR.
            MONITOR-msgno = '005'.
            MONITOR-msgid = 'SDBW'.
            MONITOR-msgty = c_msgty_e.
            MONITOR-msgv1 = COMM_STRUCTURE-DOC_NUMBER.
            MONITOR-msgv2 = COMM_STRUCTURE-ST_UP_DTE.
            MONITOR-msgv3 = COMM_STRUCTURE-DOC_CURRCY.
            MONITOR-msgv4 = COMM_STRUCTURE-LOC_CURRCY.
            append MONITOR.
            RETURNCODE = 4.
          WHEN 2.
            CLEAR MONITOR.
            MONITOR-msgno = '006'.
            MONITOR-msgid = 'SDBW'.
            MONITOR-msgty = c_msgty_e.
            MONITOR-msgv1 = COMM_STRUCTURE-DOC_NUMBER.
            MONITOR-msgv2 = COMM_STRUCTURE-ST_UP_DTE.
            MONITOR-msgv3 = COMM_STRUCTURE-DOC_CURRCY.
            MONITOR-msgv4 = COMM_STRUCTURE-STAT_CURR.
            append MONITOR.
            RETURNCODE = 4.
        ENDCASE.
      ELSE.
        RESULT = COMM_STRUCTURE-SUBTOTAL_1.
      ENDIF.
      IF COMM_STRUCTURE-DOC_CATEG CA DEB_CRED.
        RESULT = RESULT * ( -1 ).
      ENDIF.
    If the order type is 'Returns' or 'Credit Memo', the figure is changed to a positive value. But the cube has got both a negative and a positive value for the same order line:
    Order   Line   PO no.   0COST      0CML_OR_QTY
    89576   10     925401    130.60    1
    89576   10     925401   -130.60    1-
    Could you please help me as to how I can resolve this issue?
    Thanks.

    Hi,
    The code you provided does not duplicate records.
    Most probably the records are duplicated in the start routine, or the key figure is copied in the update rule (so you get 0AMOUNT and 0AMOUNT_01).
    So I would check your start routine and take a look at whether the key figures are copied.

  • Duplicate record with same primary key in Fact table

    Hi all,
    Can the fact table have duplicate records with the same primary key? When I checked a cube I could see records with the same primary key combination but different key figure values. My cube has 6 dimensions (including time, unit and data packet) and 2 key figures, so 6 fields combine to form the composite primary key of the fact table. When I checked the records in SE16 I could see duplicate records with the same primary key. There is no parallel loading happening for the cube.
    The BW system version is 3.1.
    The database is Oracle 10.2.
    I am not sure how this is possible.
    Regards,
    PM
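
    For what it is worth, one way to confirm such rows is a grouped count directly on the F fact table in the database (e.g. via SQL*Plus). The sketch below is only an illustration: the fact table name and the dimension-key column names are assumptions for a cube named ZCOPA_C01, not taken from this thread, so substitute the names of the actual cube.

    -- list dimension-key combinations that occur more than once in the F fact table
    SELECT key_zcopa_c01p, key_zcopa_c01t, key_zcopa_c01u,
           key_zcopa_c011, key_zcopa_c012, key_zcopa_c013,
           COUNT(*) AS row_cnt
    FROM   "/BIC/FZCOPA_C01"
    GROUP BY key_zcopa_c01p, key_zcopa_c01t, key_zcopa_c01u,
             key_zcopa_c011, key_zcopa_c012, key_zcopa_c013
    HAVING COUNT(*) > 1;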

    Hi Krish,
    I checked the data packet dimension also. Both records have the same dimension ID (141). Except for the key figure value there is no other difference in the fact table records. I know this is against the basic DBMS primary key rule, but I have records like this in the cube.
    Can this situation arise when the same record is in different data packets of the same request?
    Thx,
    PM

  • Duplicate Records error when processing transaction file....BPC 7.0

    Hi All,
    I have a situation. I am using BPC NW 7.0 and I have updated my dimension files. When I try to validate my transaction file, every single record validates successfully. But when I try to import the flat file into my application, I get a lot of duplicate record errors, and these are my questions:
    1. Will we get duplicate records in transaction files?
    2. Even if there are duplicates, since it is a cube, shouldn't it summarize them rather than raise an error and reject the records?
    3. Is there something I can do to accept duplicates? (I have checked the Replace option in the data package to overwrite similar records, but it applies only to account, category and entity.)
    4. In my case I see identical values in all my dimensions and the $ value is the only difference. Why is it not summing up?
    Your quickest reply is much appreciated.
    Thanks,
    Alex.

    Hi,
    I have the same problem.
    In my case the file that I want to upload has different rows that differ only in the nature column. In the conversion file I map the different natures to one internal nature, e.g.:
    cost1 --> cost
    cost2 --> cost
    cost3 --> cost
    My hope was that in BPC the nature cost would take the result cost = cost1 + cost2 + cost3.
    The result is that only the first record is uploaded and all the other records are rejected as duplicates.
    Any suggestion?

  • Locate and remove duplicate records in an InfoCube

    Hi!!
    We have found that the InfoCube 0PUR_C01 contains duplicate records for the month of April 2008; approx. 1.5 lakh records were extracted to this InfoCube, and similar situations may be occurring in the subsequent months.
    How do I locate these records and remove them from the InfoCube?
    How do I ensure that duplicate records are not extracted into the InfoCube?
    All answers/ links are welcome!!
    Yours Truly
    K Sengupto

    First:
    1. How do I locate duplicate records in an InfoCube, other than downloading all the records to an Excel file and using Excel functionality to locate them?
    This is not really possible, since an exact duplicate record would not exist: the records are sent to a cube with + and - signs that summarize the data accordingly, so your search for duplicate data becomes that much more troublesome.
    If you have a DSO to load from, delete the data for that month and reload; if possible, this is quicker and cleaner than removing duplicate records.
    If you had
    ABC|100 in your DSO and it got doubled,
    it would be
    ABC|+100
    ABC|+100
    against different requests in the cube - and added to this will be your correct deltas also.
