Duplicate data in cube

Hi,
we have two reports, and there is a difference in the data between them: one is showing exactly double the data.
Can anybody help me check whether the cube contains duplicate data?
Please also explain how to check the contents of a cube.
Thanks
Rajini

Hi Rajini,
As everybody said, you can use transaction LISTCUBE: enter the cube's technical name and execute. Then click on the field selection for output and pick the characteristics you want. Since you say the data is being doubled, don't select all the fields; select fields such as 0CALDAY and the characteristics through which you can track down the doubled records. Then execute, and you will find your data.
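If you would rather check in ABAP than page through LISTCUBE, a quick throwaway report along the lines of the sketch below counts repeated key combinations directly on the F fact table. It is only a sketch: the table /BIC/FZSALES and the columns KEY_ZSALEST and KEY_ZSALES1 are placeholders for your cube's actual fact table and dimension-key columns, and a real check would list every KEY_* column except the package dimension. Leaving the package (request) dimension out of the GROUP BY is the point: a request loaded twice then shows up with a count of 2.

    REPORT zcheck_cube_dups.

    " Minimal sketch, assuming a hypothetical cube ZSALES whose F fact
    " table is /BIC/FZSALES.
    TYPES: BEGIN OF ty_dup,
             key_t TYPE i,   " time dimension key (DIMID)
             key_1 TYPE i,   " first user-defined dimension key (DIMID)
             cnt   TYPE i,   " number of fact rows with this combination
           END OF ty_dup.

    DATA: lt_dup TYPE STANDARD TABLE OF ty_dup,
          ls_dup TYPE ty_dup.

    SELECT key_zsalest key_zsales1 COUNT( * )
      INTO TABLE lt_dup
      FROM /bic/fzsales
      GROUP BY key_zsalest key_zsales1
      HAVING COUNT( * ) > 1.

    LOOP AT lt_dup INTO ls_dup.
      WRITE: / ls_dup-key_t, ls_dup-key_1, ls_dup-cnt.
    ENDLOOP.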
My guess is that, since you are seeing double data, there are duplicate records, i.e. the data was loaded twice. So once you have checked the data and are sure the same data was loaded twice, delete one of the two request IDs.
Before the next load, there is an option in the DTP where you check a box whose label reads something like "Avoid Duplicate Records". It will prevent the same data from being loaded twice.
Guess this should solve your problem.
Guru

Similar Messages

  • To analyse duplicate data in cube

    Hi all,
    There is a delta update into the cube twice every day, but sometimes data gets duplicated in the cube even though it is correct in the ODS. The requests in the cube are compressed, so I have no request ID with which to view the data belonging to a particular request. Is there any way to find out what went wrong, and when, for the data that is getting duplicated?
    Thanks and Regards,
    Sananda

    Hi Sananda,
    Let me assume that you have this issue in 'Production' and duplication is happening intermittently.
    Since the 'results' are wrong, the data cannot be used for valid business decisions.
    Thus, it is best to turn off compression for a short while (e.g. a day or a week) and check a few things:
    - whether loads you don't expect are happening
    - whether 0RECORDMODE in the communication structure is being incorrectly interpreted in the update rules / start or end routines, etc.
    Let me know what you find out.
    Best wishes,
    Venu

  • Getting duplicate records in cube from each data packet.

    Hi Guys,
    I am using BI version 3.x and I am getting duplicate records in the cube. For deleting these duplicate records I have written code, a start routine, but it still gives the same result.
    The duplication depends on the number of data packets:
    E.g. if there are 2 packets, I get 2 duplicate records;
    if there are 7 packets, I get 7 duplicate records.
    How can I modify my code so that it keeps only one record and eliminates the duplicates? Any other solution is also welcome.
    Thanks in advance.
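    For reference, the usual shape of such a dedup start routine is sketched below (3.x style; DOC_NUMBER and S_ORD_ITEM are placeholders for whatever fields identify one logical record in your transfer structure). Note a limitation that matches the symptom above: a start routine sees only its own data packet, so it can never remove copies that arrive in different packets. For that case, stage the data through an ODS/DSO keyed on the logical record key; overwrite mode collapses the cross-packet duplicates before the cube.

        " Sketch of a 3.x start routine that removes duplicates inside
        " ONE data packet only. DOC_NUMBER and S_ORD_ITEM are
        " placeholder key fields.
        SORT DATA_PACKAGE BY doc_number s_ord_item.
        DELETE ADJACENT DUPLICATES FROM DATA_PACKAGE
               COMPARING doc_number s_ord_item.
        " A record delivered once per packet survives this: each call
        " of the routine processes only its own packet.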

    Hi Andreas, Mayank,
    Thanks for your reply.
    I created my own DSO, but it gives an error. I tried the standard DSO too, and it gives the same error: "could not activate".
    The error names the function module RSB1_OLTPSOURCE_GENERATE. I searched in R/3 but could not find it.
    Even the DSOs I created on a trial basis give the same problem.
    I think it is a problem on the Basis side.
    Please help if you have any idea.
    Thanks.

  • How to delete duplicate data from the PSA table

    Dear All,
    How can I delete duplicate data from the PSA table? I have the purchase cube, and I am getting the data from the item DataSource.
    In the PSA table I found some cancellation records: for such a record the quantity is negative while the value stays positive.
    Because of this, the quantity is updated correctly in the target, but the values are summed, so I get the combined total of the normal and the cancellation records.
    Please let me know how to delete this data while updating to the target.
    Thanks
    Regards,
    Sai

    Hi,
    Deleting records in the PSA table is difficult - and how many would you delete?
    You can achieve this in different ways:
    1. Create a DSO and maintain key fields; duplicate records will then be overwritten based on those key fields.
    2. Write ABAP logic to delete the duplicate records at the InfoPackage level; check with your ABAPer (a variation is sketched after this list).
    3. Restrict the cancellation records at query level.
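    As a variation on option 2: rather than physically deleting the cancellation rows, you can neutralize them in a 3.x start routine, so that the value nets out the same way the quantity already does. A sketch only, assuming the transfer structure exposes a cancellation indicator; RECORDMODE, VALUE and the flag 'R' are placeholders for your actual field names and reversal flag.

        " Align the sign of the value with the (already negative)
        " quantity on cancellation records, so the target totals net
        " out correctly.
        FIELD-SYMBOLS <ls_rec> LIKE LINE OF DATA_PACKAGE.

        LOOP AT DATA_PACKAGE ASSIGNING <ls_rec>
             WHERE recordmode = 'R'.   " placeholder reversal flag
          <ls_rec>-value = <ls_rec>-value * -1.
        ENDLOOP.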
    Thanks,
    Phani.

  • How to rectify the error message "duplicate data records found"

    Hi,
    How can I rectify the error "duplicate data records found" when there is no PSA?
    Also, please give me a brief description of RSRV.
    Thanks in advance,
    Ravi Alakunlta

    Hi Ravi,
    In the InfoPackage screen, on the Processing tab, check the option "Do not allow duplicate records".
    RSRV is used for repair and analysis purposes.
    If you find duplicate records in the F fact table, compress the cube; the duplicate records will then be summarized.
    Hope this helps.

  • Frequently occurring errors in loading data to a cube

    hi SAP gurus,
    can you give me the errors that frequently occur when loading data into a cube?
    giri

    Hi Giri,
    There are thousands of errors that can occur in any kind of environment. Some of the common ones:
    1. SID missing
    2. No alpha-conforming values found
    3. Replicate DataSource error
    4. BCD_OVERFLOW error
    5. Update rule inactive
    6. Duplicate master data found
    7. RFC connection lost
    8. Invalid characters while loading
    9. ALEREMOTE user is locked
    10. Lowercase letters not allowed
    11. A field mentioned in the error message is not mapped to any InfoObject in the transfer rules
    12. Object locked
    13. "Non-updated IDocs found in Source System"
    14. While loading master data, one of the data packages gets a red-light error message
    15. Extraction job aborted in R/3
    16. Repeat of last delta not possible
    17. DataSource not replicated
    18. DataSource/transfer structure not active
    19. IDoc or tRFC error
    Please find these links to help yourself out:
    Re: BW production support
    https://forums.sdn.sap.com/click.jspa?searchID=1844533&messageID=1842076
    /people/siegfried.szameitat/blog/2005/07/28/data-load-errors--basic-checks
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/3a699d90-0201-0010-bc99-d5c0e3a2c87b
    /people/valery.silaev/blog/2006/10/09/loading-please-wait
    Help on "Remedy Tickets resolution"
    Re: What is caller 01?
    Also check Siggi's weblog (the data-load-errors link above) for common errors in data loading.
    Assign points if it is helpful.
    Regards,
    Sreedhar

  • Duplicate records in cube.

    Hi, experts,
    I have checked my PSA and it has no duplicate records, but when I load the data into the cube, I get duplicate records there.
    Can anyone help me with this?

    Hi Satish,
    please check in R/3.
    You said it is a delta load: go to RSO2, select the relevant DataSource, and press Enter.
    Click on the Generic Delta tab and check the settings you have made there.
    Safety interval lower limit: give a suitable value here, so that the system knows from where exactly the data has to be loaded.
    Then check the two options below:
    New status for changed records
    Additive delta
    and select the appropriate one.
    If this is helpful, please try it.
    Regards
    Swathi

  • ABAP Routine to duplicate data

    Dear Experts:
    I need to duplicate data in a cube, changing the time characteristics. Can I do this with an ABAP routine? The duplication has to happen each time the InfoPackage is executed. If this is possible, where should I write the routine: in the start routine of the cube's update rules, or in the update rules of the time characteristics themselves?
    Thanks in advance.
    Pablo.
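    For what it is worth, the start routine of the cube's update rules is the natural place: a routine on a single time characteristic can only transform a value, it cannot create additional rows, whereas the start routine can append whole records to the data packet. A sketch (3.x style; CALMONTH is a placeholder for the time characteristic you want to shift):

        " Append a copy of every record with a shifted time
        " characteristic.
        DATA: ls_rec  LIKE LINE OF DATA_PACKAGE,
              lt_copy LIKE TABLE OF ls_rec.

        LOOP AT DATA_PACKAGE INTO ls_rec.
          " Naive shift by one month - a real routine must handle the
          " year boundary (e.g. 200712 + 1 is not a valid month).
          ls_rec-calmonth = ls_rec-calmonth + 1.
          APPEND ls_rec TO lt_copy.
        ENDLOOP.

        " Append the copies after the loop, so the loop does not
        " iterate over the rows it has just created.
        APPEND LINES OF lt_copy TO DATA_PACKAGE.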

    So something has gone bump somewhere in the export DataSource creation. The cube name ZBALANCE should not be an issue.
    One thing that might be a cause (although I think you get a message about it when you try to run the InfoPackage) is a missing fact view, but it's worth a look. Each cube should have a fact view: a union view of both the F and E fact tables. Normally the export DataSource uses this view to extract data from the cube.
    In SE12, search the views to see whether you have the fact view /BIC/VZBALANCEF. If it is missing, search OSS notes on "missing fact view". There is a program you can run, SAP_FACTVIEWS_RECREATE, which recreates all the fact views.

  • How to delete duplicate data generated from a CSV file?

    What I have done is read all the data in the CSV file without removing the duplicates, and display it. My main file has a known number of indexes (about 13), so it should hold 13 records with no duplicate values. Also, how can my Java program update the corresponding record in the main file each time an index is updated?
    I hope somebody can assist me with this; it would be really helpful.
    Thank you.
    -Rao-

    Hi Sudhir,
    In case you have edit access in your system, carry out the following procedure:
    1. Create an export DataSource on your cube.
    2. Create update rules to your cube using the datamart InfoSource.
    3. In the update rules, multiply all the key figures by -1 (see the sketch below the list).
    4. Create an InfoPackage and give the request ID of the duplicate request as the selection.
    5. Load the datamart request and validate the data.
    If you do not have edit access, there is no alternative: you would have to delete all the data and reconstruct the requests you need.
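    Step 3 is the only part that needs code: in the 3.x update rules, switch each key figure to routine mode and reverse its sign. A minimal sketch for one key figure (QUANTITY is a placeholder; repeat per key figure):

        " Return the key figure negated, so the datamart request is a
        " mirror image of the duplicate request.
        RESULT = COMM_STRUCTURE-quantity * -1.

        " RETURNCODE <> 0 would skip the record; ABORT <> 0 would
        " cancel the whole data package.
        RETURNCODE = 0.
        ABORT = 0.

    Loading this reversed request then cancels the duplicate, so the compressed cube nets back to the correct totals even though the duplicate request itself can no longer be deleted.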
    Bye
    Dinesh

  • Duplicate Data Records indicator / handle duplicate records

    Hi All,
    I am getting double data in two requests. How can I delete the extra data using the "Duplicate Data Records" indicator?
    I am not able to find this option in the PSA or in the DTP ("handle duplicate records").
    Can you help me find the option in the PSA/DTP?
    Regards
    Amit Srivastava

    What Arvind said is correct.
    But you can try this in an end routine; that may work (not sure, though), because there you deal with the entire RESULT_PACKAGE.
    Also, if the target you are talking about is a DSO, you can delete adjacent duplicates in the start routine while updating it into your next target - which can be a cube, for example. A sketch follows below.
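    A sketch of the end-routine idea (BW 7.x transformation; DOCNR and FISCPER are placeholders for your target's semantic key). If you go this route, also set the DTP semantic groups on the same fields, so records with the same key are guaranteed to arrive in the same package:

        " End routine: RESULT_PACKAGE holds every record of the package
        " after the field mappings, so exact duplicates can be dropped.
        SORT RESULT_PACKAGE BY docnr fiscper.
        DELETE ADJACENT DUPLICATES FROM RESULT_PACKAGE
               COMPARING docnr fiscper.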

  • Getting duplicate data records for master data

    Hi All,
    When the process chain for the master data ran, I got duplicate data records. For that I selected the InfoPackage options under Processing: 1) update PSA and subsequently data targets, and alternatively the option "Ignore double data records". But the load still failed with the error message "Duplicate data records". After I rescheduled the InfoPackage, the error message did not appear the next time.
    Can anyone help resolve this issue?
    Can any one help on this to resolve the issue.
    Regrasd
    KK

    Yes, for the first option you can write a routine. What is your data target? If it is a cube, there is a chance of duplicate records because of its additive nature; if it is an ODS, you can avoid this, because only the delta gets updated.
    Regarding the time-dependent attributes: they are based on the date field, and there are four types of slowly changing dimensions.
    Check the following links:
    http://help.sap.com/bp_biv135/documentation/Multi-dimensional_modeling_EN.doc
    http://www.intelligententerprise.com/info_centers/data_warehousing/showArticle.jhtml?articleID=59301280&pgno=1
    http://help.sap.com/saphelp_nw04/helpdata/en/dd/f470375fbf307ee10000009b38f8cf/frameset.htm

  • Virtual KF(as Date) in Cube and pass the variable value to this VKF runtime

    Hi ,
    The user would enter one date using a date variable at runtime.
    My cube also has one Completed Date key figure.
    I want to do a comparison based on the input variable and the existing one.
    Can I add a virtual key figure (as a date) to the cube, pass the variable value to this VKF at runtime, and do the calculation in the cube?
    I know I could do the same thing in a formula, but I have a somewhat different requirement which I am unable to explain here.
    So please let me know whether I can use a VKF, and if yes, how.
    Points would be thrown for all.
    Bapu

    It's the exact same posting as your last post. Please don't duplicate postings, so that we can help you in one thread rather than in many different threads.

  • Why do you compress the data in a cube

    Why do we compress the data in the fact table of a cube (0IC_C03) after loading stock initialization data into the cube?

    Hi,
    Generally, request compression is done to reduce the number of partitions of the F fact table, which improves query performance. It also reduces the number of records in the InfoCube by aggregating duplicate records, which in turn reduces the database size.
    Regards,
    Durgesh.

  • BTREE and duplicate data items: over 300 people read this, nobody answers?

    I have a btree consisting of keys (a 4 byte integer) - and data (a 8 byte integer).
    Both integral values are "most significant byte (MSB) first" since BDB does key compression, though I doubt there is much to compress with such small key size. But MSB also allows me to use the default lexical order for comparison and I'm cool with that.
    The special thing about it is that a given key can have a LOT of associated data items, thousands to tens of thousands. To illustrate: a btree with an 8192-byte page size has 3 levels, 0 overflow pages and 35208 duplicate pages!
    In other words, my keys have a large "fan-out". Note that I wrote "can", since some keys only have a few dozen or so associated data items.
    So I configure the b-tree for DB_DUPSORT. The default lexical ordering with set_dup_compare is OK, so I don't touch that. I'm getting the data items sorted as a bonus, but I don't need that in my application.
    However, I'm seeing very poor "put (DB_NODUPDATA) performance", due to a lot of disk read operations.
    While there may be a lot of reasons for this anomaly, I suspect BDB spends a lot of time tracking down duplicate data items.
    I wonder if in my case it would be more efficient to have a b-tree whose key is the combined (4-byte integer, 8-byte integer) and a zero-length or 1-length dummy data item (in case zero-length is not an option).
    I would lose the ability to iterate with a cursor using DB_NEXT_DUP, but I could simulate it using DB_SET_RANGE and DB_NEXT, checking whether my composite key still has the correct "prefix". That would be a pain in the butt for me, but still workable if there is no other solution.
    Another possibility would be to just add all the data integers as one giant data blob item associated with a single (unique) key. But maybe this is just doing what BDB does anyway, and it would probably exchange "duplicate pages" for "overflow pages".
    Or the slowdown is a BTREE thing, and I could use a hash table instead. In fact, what I don't know is how duplicate pages influence insertion speed. But the BDB source code indicates that, in contrast to BTREE, the duplicate search in a hash table is LINEAR (!!!), which is a no-no (from hash_dup.c):
     while (i < hcp->dup_tlen) {
          memcpy(&len, data, sizeof(db_indx_t));
          data += sizeof(db_indx_t);
          DB_SET_DBT(cur, data, len);
          /*
           * If we find an exact match, we're done. If in a sorted
           * duplicate set and the item is larger than our test item,
           * we're done. In the latter case, if permitting partial
           * matches, it's not a failure.
           */
          *cmpp = func(dbp, dbt, &cur);
          if (*cmpp == 0)
               break;
          if (*cmpp < 0 && dbp->dup_compare != NULL) {
               if (flags == DB_GET_BOTH_RANGE)
                    *cmpp = 0;
               break;
          }
          /* ... rest of the loop omitted in the original post ... */
     }
    What's the expert opinion on this subject?
    Vincent

    Hi,
    > However, I'm seeing very poor put (DB_NODUPDATA) performance, due to a lot of disk read operations.

    In general, performance slowly decreases when there are a lot of duplicates associated with a key. For the Btree access method, lookups and inserts have O(log n) complexity, so the search time depends on the number of keys stored in the underlying tree. When doing puts with DB_NODUPDATA, leaf pages have to be searched in order to determine whether the data is a duplicate. Given that (in most cases) each key has a large number of associated data items (up to thousands or tens of thousands), an impressive number of pages have to be brought into the cache to check against the duplicate criterion.
    Of course, the problem of sizing the cache and the database's pages arises here. These settings should tend toward large values, so that the cache can accommodate large pages (each hosting hundreds of records). Setting the cache and the page size to their ideal values is a process of experimentation.
    http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_conf/pagesize.html
    http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_conf/cachesize.html

    > I wonder if in my case it would be more efficient to have a b-tree whose key is the combined (4-byte integer, 8-byte integer) and a zero-length or 1-length dummy data item.

    Indeed, this should be the best alternative, but testing must be done first. Try this approach and give us feedback. Note that you can have records with a zero-length data portion.
    Also, could you provide more information on whether you are using an environment and, if so, how you configured it? Have you thought of using multiple threads to load the data?

    > Another possibility would be to just add all the data integers as a single giant data blob item associated with a single (unique) key.

    This is a terrible approach, since bringing an overflow page into the cache is more time-consuming than bringing in a regular page, so a performance penalty results. Processing the entire collection of keys and data also implies more work from a programming point of view.

    > Or, the slowdown is a BTREE thing and I could use a hash table instead.

    The Hash access method has, as you observed, a linear search through a duplicate set: finding the bucket is O(1), but the search within it is proportional to the number of items it holds. Combined with the fact that you don't want duplicate data, the Hash access method may well not improve performance.

    This is a performance/tuning problem, and investigating it requires significant resources on our part. If you have a support contract with Oracle, please don't hesitate to raise your issue on MetaLink, or indicate that you want this issue to be taken up privately, and we will create an SR for you.
    Regards,
    Andrei

  • Data in cube is different from PSA in the production system

    hi friends
    this is very urgent. The data in the PSA is fine and the same as in R/3. For example, I have sales (billing) for one article,
    2LIS_13_VDITM, which picked the data up from R/3. When I look at the records in the PSA they are good, but when I try to see the same record in the cube, it is not available there. A few records are filtered out between the PSA and the cube, which is leading to a lot of data inconsistency. There are no custom routines that could filter out the data, only standard SAP routines that update the data to the cube. What could be the problem? Any help is appreciated and will be rewarded. Thanks in advance for your kind replies.

    veda,
    In a cube, the data gets added up for similar records.
    Do you have the same number of records in PSA and cube?
    If yes, then maybe similar records exist and the key figure is getting summed up in the result.
    Also, how did you search for the same record in the cube? The characteristics go into the dimension tables, and all the fact table holds is dimension IDs and key figures.
    Do one thing: run a report on the cube and check whether the data is getting summed up.
    Or another workaround: put the PSA data into a flat (Excel) file and upload it into an ODS with the same records; you will then see whether multiple records exist, and thereby find out what the problem is due to. (A bad workaround, but you cannot do this check directly in production.)
    Arun
