Getting duplicate data in Dimensions

Hi,
Whenever I associate a single product with a Dimension, duplicate data shows up, but as soon as more products are added the duplication disappears. I presume I am missing a configuration step in the pipeline, but I am not sure which one. Could anyone help?

Hey Vaibhav,
One reason I can think of is that you might be associating the data with both a property and a dimension. Also, can you elaborate on your question? I am not sure I understand it correctly.
Thanks
Ritwik
Cirrus10

Similar Messages

  • Getting duplicate data records for master data

    Hi All,
    When I run the process chain for the master data, I get duplicate data records. To handle this, I selected the options at InfoPackage level under Processing: 1) update PSA and subsequently data targets, and alternatively the option "Ignore double data records". But the load still failed with the error message "Duplicate Data Records". After I rescheduled the InfoPackage, the error message did not appear the next time.
    Can anyone help to resolve this issue?
    Regards
    KK

    Yes, for the first option you can write a routine. What is your data target? If it is a cube, there is a chance of duplicate records because of its additive nature; if it is an ODS, you can avoid this, because only the delta is updated.
    Regarding the time-dependent attributes, it is based on the date field; there are four types of slowly changing dimensions.
    Check the following links:
    http://help.sap.com/bp_biv135/documentation/Multi-dimensional_modeling_EN.doc
    http://www.intelligententerprise.com/info_centers/data_warehousing/showArticle.jhtml?articleID=59301280&pgno=1
    http://help.sap.com/saphelp_nw04/helpdata/en/dd/f470375fbf307ee10000009b38f8cf/frameset.htm
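
    As a quick sanity check before reloading, you can also look for the doubles directly in the extracted data. A minimal sketch in SQL, assuming a hypothetical staging table and key columns (adjust the names to your actual PSA layout):

    -- Any key combination with a count above 1 is a double record
    -- that "Ignore double data records" would have to absorb.
    SELECT material_id, valid_from, COUNT(*) AS occurrences
    FROM   staging_master_data
    GROUP BY material_id, valid_from
    HAVING COUNT(*) > 1;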

  • Getting Duplicate data Records error while loading the Master data.

    Hi All,
    We are getting a "Duplicate data records" error while loading the profit centre master data. The master data contains time-dependent attributes.
    The load is a direct update, so I set the request to red and tried to reload from the PSA, but it throws the same error.
    I checked in the PSA: the records with the same profit centre are shown in red.
    Could anyone give us suggestions to resolve this issue, please?
    Thanks & Regards,
    Raju

    Hi Raju,
            I assume there are no routines written in the update rules and that you are loading the data directly from R/3 (not from any ODS). If that is the case, it could be that the data maintained in R/3 has overlapping time intervals (since time-dependent attributes are involved). Check your PSA to see whether the same profit centre has time intervals that overlap. If so, you need to get this fixed in R/3. If there are no overlapping time intervals, you can simply increase the error tolerance limit in your InfoPackage and repeat the load.
    Hope this helps you.
    Thanks & Regards,
    Nithin Reddy.
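
    A self-join can flag the overlapping intervals before you fix them in R/3. A minimal sketch, assuming a hypothetical extract of the time-dependent attribute data (table and column names are placeholders):

    -- A pair where interval b starts before interval a ends is an
    -- overlap for the same profit centre.
    SELECT a.profit_center,
           a.valid_from AS from_a, a.valid_to AS to_a,
           b.valid_from AS from_b, b.valid_to AS to_b
    FROM   pc_attributes a,
           pc_attributes b
    WHERE  a.profit_center = b.profit_center
    AND    a.valid_from < b.valid_from   -- compare each pair only once
    AND    b.valid_from <= a.valid_to;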

  • Duplicate data coming in HANA table

    Hi,
    We are getting duplicate data: both old and new records are coming into HANA, and the table contains both old and new values in the "created on" field. Is it necessary to re-trigger the load for that particular table? Any help on this is appreciated.

    Hello Rama,
    there is a separate forum for HANA-related questions: the SAP HANA Development Center.
    Could you please ask this question there (as that seems to be a more relevant space for HANA issues)?
    Regards,
    Laszlo

  • Duplicate data records through DTP

    Hi Guys,
    I am loading duplicate data records to customer master data.
    The data up to PSA level is correct.
    Now, when I load it from the PSA to the customer master through a DTP
    and select the checkbox for handling duplicate data records on the Update tab, a message appears at the bottom saying
    ENTER VALID VALUE.
    After this message I am unable to click any function; the same message repeats again and again.
    Please give me a solution so that the above message no longer appears and
    I am able to execute the DTP.
    Thanks,
    Saurabh Jain.

    Hi,
    If you get duplicate data for your customer, there might be something wrong with your DataSource or with the data in the PSA. But anyway, leave the DTP by restarting RSA1. Edit or create the DTP again and press Save immediately after entering edit mode. Leave the DTP again and start editing it. That should do the trick.
    regards
    Siggi

  • Duplicate data records through DTP for attribute

    Hi Guys,
    I am loading data to customer master data, but it contains duplicate data in large volume.
    I have to load both attribute and text data.
    The data up to PSA level is correct, and the text data loads successfully.
    When I load the attribute data to the customer master, it fails due to duplicate data records.
    So in the DTP, on the Update tab, I select the checkbox for handling duplicate data records.
    As soon as I select this checkbox, a message appears at the bottom saying
    ENTER VALID VALUE.
    After this message I am unable to click any function; the same message repeats again and again.
    So I am unable to execute the DTP.
    A helpful answer will get full points.
    Please give me a solution so that the above message no longer appears and
    I am able to execute the DTP.
    Thanks,
    Saurabh Jain.

    Hi,
    If you get duplicate data for your customer, there might be something wrong with your DataSource or with the data in the PSA. But anyway, leave the DTP by restarting RSA1. Edit or create the DTP again and press Save immediately after entering edit mode. Leave the DTP again and start editing it. That should do the trick.
    regards
    Siggi

  • Report is showing duplicate data due to update

    Hi All,
    I need to eliminate duplicate data in a report, which occurs due to updates (not corrections) to the employee assignment in Oracle HRMS. I am already using the maximum effective start date in the WHERE clause on both tables, per_all_people_f and per_all_assignments_f.
    Regards,
    Ssali

    If you get duplicate data, change your "select" to "select unique".
    Maybe this is a specific Oracle E-Business Suite thing. If so, ask it in the E-Business Suite forum.
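
    With date-tracked HRMS tables, duplicates usually disappear once each row set is restricted to the records effective on a single date. A minimal sketch of that pattern, assuming the standard date-track columns (the column list and the as-of date are illustrative only):

    SELECT p.full_name, a.assignment_number
    FROM   per_all_people_f p,
           per_all_assignments_f a
    WHERE  a.person_id = p.person_id
    -- keep only the row version effective today in each date-tracked table
    AND    TRUNC(SYSDATE) BETWEEN p.effective_start_date AND p.effective_end_date
    AND    TRUNC(SYSDATE) BETWEEN a.effective_start_date AND a.effective_end_date;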

  • Not getting data in Dimension

    Hi all,
    I am creating a dimension and I have done the mapping as well. I get the data in the dimension table, but I am not able to see the data in the Dimension itself (i.e. when I right-click on the dimension I created, there is an option called Data Viewer, and from that I am not getting any data).
    Can anyone help me with this?

    Hi,
    Just try the code given below; hope that helps you out.
    Also, set a break-point just before the IF statement and check in debugging mode whether the i_mseg table has any values.
    " Guard: with an empty driver table, FOR ALL ENTRIES would select all rows.
    IF i_mseg[] IS NOT INITIAL.
      SELECT zzlcno zzlcdt
        INTO i_ekpo
        FROM ekpo
        FOR ALL ENTRIES IN i_mseg
        WHERE ebeln = i_mseg-ebeln.
        APPEND i_ekpo.
      ENDSELECT.
    ENDIF.
          OR
    " Same selection, but filling only the matching fields of the work area.
    IF i_mseg[] IS NOT INITIAL.
      SELECT zzlcno zzlcdt
        INTO CORRESPONDING FIELDS OF i_ekpo
        FROM ekpo
        FOR ALL ENTRIES IN i_mseg
        WHERE ebeln = i_mseg-ebeln.
        APPEND i_ekpo.
      ENDSELECT.
    ENDIF.
    Regards,
    Siddarth

  • This may be a duplicate, but I have a new iPad mini that I need to activate, and I need to get my data from my iPad 1, which has never been backed up. How do I get the data from the iPad 1 to the new mini? I have an iTunes acct and also an iCloud acct. Thanks

    I have a new iPad mini that I need to activate, but I need to get the data from my old iPad 1 and install it on the new mini. I have an iTunes account and also an iCloud account. How do I back up the iPad 1 data and then activate the mini? I assume I will need to transfer the SIM from the iPad 1 to the mini.
    Thanks for any info
    wino454

    How to Transfer Everything from an Old iPad to New iPad
    http://osxdaily.com/2012/03/16/transfer-old-ipad-to-new-ipad/
     Cheers, Tom

  • Getting duplicate records in cube from each data packet.

    Hi Guys,
    I am using the 3.x BI version and I am getting duplicate records in the cube. To delete these duplicate records I have written code, but it still gives the same result. Specifically, I wrote a start routine for deleting duplicate records.
    The duplication depends on the number of packets.
    E.g.: if the number of packets is 2, it gives me 2 duplicate records.
    If the number of packets is 7, it gives me 7 duplicate records.
    How can I modify my code so that it fetches only one record, eliminating the duplicates? Any other solution is welcome.
    Thanks in advance.

    Hi  Andreas, Mayank.
      Thanks for your reply.
      I created my own DSO, but it gives an error, and I tried with the standard DSO too; it still gives the same error, "could not activate".
    The error names the function module RSB1_OLTPSOURCE_GENERATE.
    I searched in R/3 but could not find it.
    Even the DSOs I created on a trial basis give the same problem.
    I think it is a problem on the BASIS side.
    Please help if you have any idea.
    Thanks.

  • Problem when creating a dimension using a dimension build rule file

    I got the following warning when I tried to load a dimension. I have a product dimension hierarchy which contains 6 levels. I manually created a dimension called Product which includes six generations. I also created a rule file using SQL. The SQL selects 6 columns and I map each column to a generation. When I load data to build the dimension, I always get the following warning and only part of the data gets loaded. Does anybody know why? Do I have to load parents first before loading children, or can I load them at the same time?
    Thanks
    \\Record #1 - Incorrect Parent [10] For Member [10] (3307)
    1     2     3     10     10     171     
    \\Record #2 - Incorrect Parent [1] For Member [1] (3307)
    1     1     1     8     8     39

    You are getting duplicate names. You need to prefix or suffix (or concatenate in SQL) to make the members unique. In the record 1 example you might want the output to look like:
    Product line 1, Group 2, Subgroup 3, Product family 10, Product 10, SKU 171
    Of course, I used full names to make it understandable; some would use one-, two- or three-letter abbreviations.
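
    A minimal sketch of that concatenation in the rule file's SQL, assuming hypothetical column names for the six levels (prefix each generation so that "10" under one parent can no longer collide with "10" under another):

    SELECT 'L'  || line_cd,     -- generation 1
           'G'  || group_cd,    -- generation 2
           'SG' || subgroup_cd, -- generation 3
           'F'  || family_cd,   -- generation 4
           'P'  || product_cd,  -- generation 5
           'S'  || sku_cd       -- generation 6
    FROM   product_hierarchy;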

  • Getting duplicates in Case  and Count clause in Report Generation

    Hi all,
    Let me explain the base first (just the section that is in the scope of this code), then I'll get to the code and my problem. I have a set of pre-defined tasks in a TASK table. I have a system which provisions user requests by allotting a particular task to each request. Each request is mapped to an instance of the (pre-defined) task; this is maintained in a separate table, TASK_INSTANCE, against the user request id. Each task has a pre-defined duration, and its completion date is stored in a column of data type TIMESTAMP.
    My scenario: I need to generate a report based on the completion date. The requirement is to give the count of tasks having a completion date of today, tomorrow, and the day after, grouped by task name.
    My problem is that I am getting duplicates even though I used DISTINCT. Duplicates through the join should not be possible, since I am grouping by task name, and each record in the TASK_INSTANCE table relates directly to a task id. For example, I get one row with the count satisfying the condition and the next row empty. I can't figure out why this is happening and need your help.
    Let me append the query below,
    SELECT task.task_name,
           (CASE
              WHEN (TRUNC (SYSDATE) - TRUNC (task_instance.ptd_pdd_date)) = -1
              THEN COUNT (task_instance.ptd_pdd_date)
            END)
             AS "1_day_behind",
           (CASE
              WHEN (TRUNC (SYSDATE) - TRUNC (task_instance.ptd_pdd_date)) = -2
              THEN COUNT (task_instance.ptd_pdd_date)
            END)
             AS "2_day_behind",
           (CASE
              WHEN (TRUNC (SYSDATE) - TRUNC (task_instance.ptd_pdd_date)) = -3
              THEN COUNT (task_instance.ptd_pdd_date)
            END)
             AS "3_day_behind"
    FROM   task, task_instance
    WHERE  task.task_id = task_instance.task_id
    AND    task_instance.status_id = 1
    AND    task_instance.ptd_pdd_date IS NOT NULL
    GROUP BY (TRUNC (SYSDATE) - TRUNC (task_instance.ptd_pdd_date)),
             task.task_name;
    task_instance.status_id = 1 refers to tasks in the "IN PROGRESS" state.
    This is a sample of the result set I am getting. In it, the task UI_Contact_Customer is repeated three times, with different counts in separate rows and an empty value in another row. I need to avoid those duplicates; please advise.
    TASK_NAME                     | 1_DAY_BEHIND | 2_DAY_BEHIND | 3_DAY_BEHIND
    ------------------------------+--------------+--------------+-------------
    UI_Conduct_Fiber_Plant_Survey |              |              |
    UI_Conduct_Site_Survey        |              |              |
    UI_ConductFiberSurvey_C       |              |              |
    UI_ConductSiteSurvey_C        |              |              |
    UI_Contact_Customer           |              |              |
    UI_Contact_Customer           |      10      |              |
    UI_Contact_Customer           |              |      12      |
    UI_Create_Account_Equip_C     |              |              |
    UI_Create_Account_Equipment   |              |              |
    UI_Create_CM_Ticket           |              |              |
    In the above result set, for the UI_Contact_Customer task in particular, ten of its instances have a completion date of tomorrow and 12 instances have the day after as the completion date. I need to get all of those in a single row, without duplicates.
    Thanks,
    Jeevanand.K

    Hey super dude,
    it really works fine, matching my requirement exactly.
    My hearty appreciation to you, friend, and a small appreciation for me too, because I formed the base query. :-)
    I used the query below and the requirement is complete. The key change is counting inside the CASE and grouping only by task name, so each task produces a single row.
    A big thanks for your super-fast response.
    SELECT task.task_name,
           COUNT (CASE
                    WHEN TRUNC (SYSDATE) - TRUNC (task_instance.ptd_pdd_date)
                         BETWEEN 1 AND 14
                    THEN 1
                  END)
             AS "TWO weeks older",
           COUNT (CASE
                    WHEN TRUNC (SYSDATE) - TRUNC (task_instance.ptd_pdd_date) = -1
                    THEN 1
                  END)
             AS "1_day_left",
           COUNT (CASE
                    WHEN TRUNC (SYSDATE) - TRUNC (task_instance.ptd_pdd_date) = -2
                    THEN 1
                  END)
             AS "2_day_left",
           COUNT (CASE
                    WHEN TRUNC (SYSDATE) - TRUNC (task_instance.ptd_pdd_date) = -3
                    THEN 1
                  END)
             AS "3_day_left",
           COUNT (CASE
                    WHEN TRUNC (SYSDATE) - TRUNC (task_instance.ptd_pdd_date) = -4
                    THEN 1
                  END)
             AS "4_day_left",
           COUNT (CASE
                    WHEN TRUNC (SYSDATE) - TRUNC (task_instance.ptd_pdd_date) = -5
                    THEN 1
                  END)
             AS "5_day_left",
           COUNT (CASE
                    WHEN TRUNC (SYSDATE) - TRUNC (task_instance.ptd_pdd_date) = -6
                    THEN 1
                  END)
             AS "6_day_left",
           COUNT (CASE
                    WHEN TRUNC (SYSDATE) - TRUNC (task_instance.ptd_pdd_date) >= -7
                    THEN 1
                  END)
             AS "After one week"
    FROM   task, task_instance
    WHERE  task.task_id = task_instance.task_id AND task_instance.status_id = 1
    GROUP BY task.task_name;
    Thanks,
    Jeevanand.K

  • BTREE and duplicate data items: over 300 people read this, nobody answers?

    I have a btree consisting of keys (a 4-byte integer) and data (an 8-byte integer).
    Both integral values are "most significant byte (MSB) first", since BDB does key compression, though I doubt there is much to compress with such a small key size. But MSB also allows me to use the default lexical order for comparison, and I'm cool with that.
    The special thing about it is that with a given key, there can be a LOT of associated data, thousands to tens of thousands. To illustrate, a btree with an 8192-byte page size has 3 levels, 0 overflow pages and 35208 duplicate pages!
    In other words, my keys have a large "fan-out". Note that I wrote "can", since some keys only have a few dozen or so associated data items.
    So I configure the b-tree for DB_DUPSORT. The default lexical ordering with set_dup_compare is OK, so I don't touch that. I'm getting the data items sorted as a bonus, but I don't need that in my application.
    However, I'm seeing very poor "put (DB_NODUPDATA) performance", due to a lot of disk read operations.
    While there may be a lot of reasons for this anomaly, I suspect BDB spends a lot of time tracking down duplicate data items.
    I wonder if, in my case, it would be more efficient to have a b-tree whose key is the combined (4-byte integer, 8-byte integer) with a zero-length or 1-length dummy data item (in case zero-length is not an option).
    I would lose the ability to iterate with a cursor using DB_NEXT_DUP, but I could simulate it using DB_SET_RANGE and DB_NEXT, checking whether my composite key still has the correct "prefix". That would be a pain in the butt for me, but still workable if there's no other solution.
    Another possibility would be to just add all the data integers as a single big giant data blob item associated with a single (unique) key. But maybe this is just doing what BDB does... and would probably exchange "duplicate pages" for "overflow pages"
    Or, the slowdown is a BTREE thing and I could use a hash table instead. In fact, what I don't know is how duplicate pages influence insertion speed. But the BDB source code indicates that in contrast to BTREE the duplicate search in a hash table is LINEAR (!!!) which is a no-no (from hash_dup.c):
         while (i < hcp->dup_tlen) {
              memcpy(&len, data, sizeof(db_indx_t));
              data += sizeof(db_indx_t);
              DB_SET_DBT(cur, data, len);
              /*
               * If we find an exact match, we're done. If in a sorted
               * duplicate set and the item is larger than our test item,
               * we're done. In the latter case, if permitting partial
               * matches, it's not a failure.
               */
              *cmpp = func(dbp, dbt, &cur);
              if (*cmpp == 0)
                   break;
              if (*cmpp < 0 && dbp->dup_compare != NULL) {
                   if (flags == DB_GET_BOTH_RANGE)
                        *cmpp = 0;
                   break;
              }
              /* ... */
    What's the expert opinion on this subject?
    Vincent
    Message was edited by:
    user552628

    Hi,
    > The special thing about it is that with a given key, there can be a LOT
    > of associated data, thousands to tens of thousands. To illustrate, a
    > btree with an 8192-byte page size has 3 levels, 0 overflow pages and
    > 35208 duplicate pages! In other words, my keys have a large "fan-out".
    > Note that I wrote "can", since some keys only have a few dozen or so
    > associated data items. So I configure the b-tree for DB_DUPSORT. The
    > default lexical ordering with set_dup_compare is OK, so I don't touch
    > that. I'm getting the data items sorted as a bonus, but I don't need
    > that in my application. However, I'm seeing very poor
    > "put (DB_NODUPDATA) performance", due to a lot of disk read operations.
    In general, performance slowly decreases when there are a lot of duplicates associated with a key. For the Btree access method, lookups and inserts have O(log n) complexity (the search time depends on the number of keys stored in the underlying tree). When doing puts with DB_NODUPDATA, leaf pages have to be searched in order to determine whether the data is a duplicate. Given that for each key there is (in most cases) a large number of associated data items (up to thousands or tens of thousands), an impressive number of pages has to be brought into the cache to check against the duplicate criterion.
    Of course, the problem of sizing the cache and the database's pages arises here. Your settings for these should tend toward large values, so that the cache can accommodate large pages (each hosting hundreds of records). Setting the cache and the page size to their ideal values is a process of experimentation.
    http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_conf/pagesize.html
    http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_conf/cachesize.html
    > While there may be a lot of reasons for this anomaly, I suspect BDB
    > spends a lot of time tracking down duplicate data items. I wonder if in
    > my case it would be more efficient to have a b-tree whose key is the
    > combined (4-byte integer, 8-byte integer) with a zero-length or
    > 1-length dummy data item (in case zero-length is not an option).
    Indeed, this should be the best alternative, but testing must be done first. Try this approach and provide us with feedback. You can have records with a zero-length data portion.
    Also, you could provide more information on whether or not you're using an environment and, if so, how you configured it. Have you thought of using multiple threads to load the data?
    > Another possibility would be to just add all the data integers as a
    > single big giant data blob item associated with a single (unique) key.
    > But maybe this is just doing what BDB does... and would probably
    > exchange "duplicate pages" for "overflow pages"
    This is a terrible approach, since bringing an overflow page into the cache is more time consuming than bringing in a regular page, so a performance penalty results. Also, processing the entire collection of keys and data implies more work from a programming point of view.
    > Or, the slowdown is a BTREE thing and I could use a hash table instead.
    > In fact, what I don't know is how duplicate pages influence insertion
    > speed. But the BDB source code indicates that in contrast to BTREE the
    > duplicate search in a hash table is LINEAR (!!!) which is a no-no (from
    > hash_dup.c).
    The Hash access method has, as you observed, a linear duplicate search, with a search time proportional to the number of items in the bucket. Combined with the fact that you don't want duplicate data, the Hash access method may not improve performance.
    This is a performance/tuning problem, and investigating it requires significant resources on our part. If you have a support contract with Oracle, please don't hesitate to raise your issue on MetaLink, or indicate that you want this issue taken private and we will create an SR for you.
    Regards,
    Andrei

  • DTP Error: Duplicate data record detected

    Hi experts,
    I have a problem loading data from a DataSource to a standard DSO.
    In the DataSource there are master data attributes whose key contains id_field.
    In the end routine I perform some operations which multiply the lines in the result package and fill a new date field, defined in the DSO (and also in the result_package definition).
    For example:
    Result_package before the end routine:
    Id_field | attr_a | attr_b | ... | attr_x | date_field
    1        | a1     | b1     | ... | x1     |
    2        | a2     | b2     | ... | x2     |
    Result_package after the end routine:
    Id_field | attr_a | attr_b | ... | attr_x | date_field
    1        | a1     | b1     | ... | x1     | d1
    2        | a1     | b1     | ... | x1     | d2
    3        | a2     | b2     | ... | x2     | d1
    4        | a2     | b2     | ... | x2     | d2
    The date_field (date type) is one of the key fields in the DSO.
    When I execute the DTP I get an error in the section "Update to DataStore Object": "Duplicate data record detected".
    "During loading, there was a key violation. You tried to save more than one data record with the same semantic key."
    As far as I know, the result_package key contains all fields except those of type i, p and f.
    In simulate mode (debugging) everything is correct and the status is green.
    In the DSO the "Unique Data Records" checkbox is unchecked.
    Any ideas?
    Thanks in advance.
    MG

    Hi,
          In the end routine, try:
    DELETE ADJACENT DUPLICATES FROM result_package COMPARING xxx yyy.
    Here xxx and yyy are the key fields, so that you eliminate the extra duplicate records (sort result_package by those fields first, since DELETE ADJACENT DUPLICATES only removes neighbouring rows).
    Or you can try:
        SORT itab_xxx BY field1 field2 field3 ASCENDING.
        DELETE ADJACENT DUPLICATES FROM itab_xxx COMPARING field1 field2 field3.
    This can be placed before you loop over your internal table (in case you are using an internal table and loops); itab_xxx is the internal table.
    field1, field2 and field3 may vary depending on your requirement.
    By using the lines above, you can get rid of the duplicates coming through the end routine.
    Regards
    Sunil
    Edited by: Sunny84 on Aug 7, 2009 1:13 PM

  • How to delete duplicate data from the PSA table

    Dear All,
    How do I delete duplicate data from the PSA table? I have the purchasing cube and I am getting the data from the item DataSource.
    In the PSA table I found some cancellation records: for such a record the quantity is negative while the value for the same record is positive.
    Because of this the quantity is updated correctly to the target, but the values are summed, giving the combined value of all normal and cancellation records.
    Please let me know how to handle these records when updating to the target.
    Thanks
    Regards,
    Sai

    Hi,
    Deleting the records in the PSA table is difficult, and how many would you delete?
    You can achieve this in different ways:
    1. Create a DSO and maintain suitable key fields; records will be overwritten based on those key fields.
    2. Write ABAP logic to delete the duplicate records at InfoPackage level; check with your ABAPer.
    3. Restrict the cancellation records at query level.
    Thanks,
    Phani.
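
    If you go with option 1 or 3, it helps to see first how the cancellation pairs net out. A minimal sketch in SQL against a hypothetical copy of the PSA data (table and column names are placeholders):

    -- Documents with more than one row are candidates for
    -- normal/cancellation pairs; check how quantity and value net.
    SELECT doc_number, doc_item,
           SUM(quantity)  AS net_quantity,
           SUM(doc_value) AS net_value
    FROM   psa_purchase_items
    GROUP BY doc_number, doc_item
    HAVING COUNT(*) > 1;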
