Duplicate records in Cube Level ( BI 7.0 )
Dear All
I am working on BI 7.0 and have an issue. I am loading data from a flat file to an ODS, and from the ODS to a cube. In the ODS we selected the Overwrite option; at cube level we have the Summation option. While loading from the flat file to the ODS the records are fine, and the load from the ODS to the cube also runs fine, but in the cube I am getting duplicate records.
What are the best options to go ahead in such a situation?
Regards
KK
I am sharing a case that occurred for me. Please see if it applies to you.
Sometimes, when any type of problem occurs in the step that loads to the cube, we restart the load. If the cube load prompts 'the last load was unsuccessful.... reload?', this problem may occur: it will load the records from the previous load as well.
Verify what has been duplicated by comparing the ODS change-log table with the cube load's record count. If the number of records updated is the total of the records across the different ODSR requests (in the change-log table), delete the previous load in the cube (provided no other side effect is produced, e.g. from a start routine).
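The check described here can be expressed simply: if the cube load's updated-record count equals the sum over several change-log (ODSR) requests, the restarted load probably picked up the earlier request(s) again. A small Python sketch with hypothetical request names and counts:

```python
# Hypothetical record counts per change-log (ODSR) request
changelog_requests = {"ODSR_0001": 1200, "ODSR_0002": 300}

# Record count the suspect cube load reports as updated
cube_records_updated = 1500

# If the cube load equals the sum over several change-log requests,
# the restarted load likely re-read the earlier request(s) as well
duplicate_load_suspected = (
    cube_records_updated == sum(changelog_requests.values())
)
```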
Cheers.
Similar Messages
-
Getting duplicate records in cube from each data packet.
Hi Guys,
I am using BI version 3.x and I am getting duplicate records in the cube. To delete these duplicate records I have written code, actually a start routine, but it still gives the same result.
The duplication depends on the number of data packets.
E.g. if the number of packets is 2, I get 2 duplicate records;
if the number of packets is 7, I get 7 duplicate records.
How can I modify my code so that it fetches only one record, eliminating the duplicates? Any other solution is also welcome.
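One likely reason a start-routine dedup shows exactly one duplicate per packet is that each data packet is processed independently, so logic that only compares records inside the current packet never sees duplicates arriving in other packets. As an illustration (in Python rather than ABAP, with hypothetical field names), the deduplication has to remember keys across packet calls:

```python
# Illustrative sketch (not ABAP): dedup across packets by remembering
# keys already loaded. In BW this would be a start routine checking new
# keys against state that survives across packet calls.
seen_keys = set()  # persists across packet calls

def filter_packet(packet, key_fields=("doc_no", "item")):
    """Keep only records whose key has not been seen in any packet."""
    kept = []
    for rec in packet:
        key = tuple(rec[f] for f in key_fields)
        if key not in seen_keys:
            seen_keys.add(key)
            kept.append(rec)
    return kept

packet1 = [{"doc_no": "100", "item": "10", "qty": 5}]
packet2 = [{"doc_no": "100", "item": "10", "qty": 5}]  # same record again
assert filter_packet(packet1) == packet1  # first packet kept
assert filter_packet(packet2) == []       # cross-packet duplicate dropped
```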
Thanks in advance.
Hi Andreas, Mayank.
Thanks for your reply.
I created my own DSO, but it gives an error. I tried with the standard DSO too, and it still gives the same error: could not activate.
The error gives the name of function module RSB1_OLTPSOURCE_GENERATE.
I searched in R/3 but could not find it.
Even DSOs I created on a trial basis give the same problem.
I think it is a problem on the BASIS side.
Please help if you have any idea.
Thanks. -
Duplicate records in cube.
Hi,Xperts,
I have checked my PSA, which has no duplicate records, but when I load the data to the cube, I get duplicate records in the cube.
Can anyone help me with this?
Hi Satish,
Please check in R/3.
You said it is a delta load: go to RSO2, select the required DataSource, and press Enter.
Click the Generic Delta tab and check the settings you have given:
Safety interval lower limit: give a value here so the system knows from exactly where the data has to be loaded.
Check the two options below:
New status for changed records
Additive Delta
Select the appropriate one.
If this is helpful, please try it.
Regards
Swathi -
How to delete duplicate records in cube
Hi,
Can you help me delete the duplicate records in my cube,
and tell me some predefined cubes and DataSources for the MM and SD modules?
Hi Anne,
Could you be more precise about "duplicate records"?
There must be at least one characteristic that distinguishes one record from another (at least the Request ID). To delete data from InfoCubes selectively, use ABAP report RSDRD_DELETE_FACTS (be careful: it does not ask for any confirmation as RSA1 does).
For MM and SD cubes, see RSA1 -> Business Content -> InfoProviders by InfoArea. See also the Metadata Repository for the same InfoProviders.
For DataSources, just execute transaction LBWE in your source system: there you see all the LO-Cockpit extractors.
Hope it helps (and if so remember reward points)
GFV -
Delete overlapping/duplicate records from cube
Hi All,
Kindly let me know how to delete overlapping requests from a cube. The cube is loaded from various InfoSources, but some records get duplicated and are not wanted. How can I delete the duplicate records from the cube?
Regards,
dola
I think what Arun said is perfectly right:
use a DSO to consolidate the various requests from the different InfoSources,
then load from the DSO to the cube. It is very much possible, though it will require a little work.
The "delete duplicate records" option is usually used for master data; with transaction data I don't think it is advisable.
Regards,
RK -
How to get rid of duplicate records generated from a hierarchical cube in SQL?
Hi All,
Database version: 10gR2.
I am trying to aggregate data for two hierarchical dimensions, specifically organization and product.
I am using one ROLLUP for each dimension, i.e. two ROLLUPs in the GROUP BY clause, to do the aggregation for every level of organization and product included in the hierarchy.
The troubling part is that products that have data in the corresponding fact table are not always located at the lowest level (which is 6) of the product hierarchy.
e.g.
product_id level
0/01/0101/010102/01010201 5 -->01010201, at level 5 , has data in fact table
0/01/0101/010103 4 -->010103, at level 4, has data in fact table as well
0/02/0201/020102/02010203/0201020304/020102030405 6 --> at level 6 (the lowest level), has data in the fact table.
We have a flat product hierarchy stored in a table as below:
prod_id up_code_1 up_code_2 up_code_3 up_code_4 up_code_5 up_code_6
01010201 0 01 0101 010102 01010201 NULL
010103 0 01 0101 010103 NULL NULL
Due to the NULL at product level 6 for 01010201, when I run the query below one duplicate record is generated;
for 010103 there will be 2 duplicate records, and for 020102030405 none.
I encounter the same issue with the organization dimension.
Currently I am using DISTINCT to get rid of the duplicate records, but it doesn't feel right to do it this way.
So I wonder: is there a more formal and standard way to do this?
select distinct ORG_ID, DAY_ID, TRADE_TYPE_ID, cust_id, PRODUCT_ID, QUANTITY_UNIT, COST_UNIT, SOURCE_ID,
CONTRACT_AMOUNT, CONTRACT_COST, SALE_AMOUNT,SALE_COST, ACTUAL_AMOUNT, ACTUAL_COST, TRADE_COUNT
from (
select coalesce(UP_ORG_ID_6, UP_ORG_ID_5, UP_ORG_ID_4, UP_ORG_ID_3, UP_ORG_ID_2, UP_ORG_ID_1) as ORG_ID,
a.day_id as day_id,
a.TRADE_TYPE_ID as TRADE_TYPE_ID,
a.CUST_ID,
coalesce(UP_CODE_6, UP_CODE_5, UP_CODE_4, UP_CODE_3, UP_CODE_2, UP_CODE_1) as product_id,
QUANTITY_UNIT,
COST_UNIT,
A.SOURCE_ID as SOURCE_ID,
SUM(CONTRACT_AMOUNT) as CONTRACT_AMOUNT,
SUM(CONTRACT_COST) as CONTRACT_COST,
SUM(SALE_AMOUNT) as SALE_AMOUNT,
SUM(SALE_COST) as SALE_COST,
SUM(ACTUAL_AMOUNT) as ACTUAL_AMOUNT,
SUM(ACTUAL_COST) as ACTUAL_COST,
SUM(TRADE_COUNT) as TRADE_COUNT
from DM_F_LO_SALE_DAY a, DM_D_ALL_ORG_FLAT B, DM_D_ALL_PROD_FLAT D --, DM_D_LO_CUST E
where a.ORG_ID=B.ORG_ID
and a.PRODUCT_ID=D.CODE
group by rollup(UP_ORG_ID_1, UP_ORG_ID_2, UP_ORG_ID_3, UP_ORG_ID_4, UP_ORG_ID_5, UP_ORG_ID_6),
a.TRADE_TYPE_ID,
a.day_id,
A.CUST_ID,
rollup(UP_CODE_1, UP_CODE_2, UP_CODE_3, UP_CODE_4, UP_CODE_5, UP_CODE_6),
a.QUANTITY_UNIT,
a.COST_UNIT,
a.SOURCE_ID );
Note: GROUPING_ID does not seem to help; at least I didn't find it useful in this scenario.
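The standard SQL tools for rollup-generated NULLs are the GROUPING()/GROUPING_ID() functions, which distinguish an aggregate row from a real NULL level so the extra rows can be filtered. To see why the ragged hierarchy produces them at all, here is a Python sketch (hypothetical three-level paths, padded with None like the up_code columns): trailing None levels make the coalesced label repeat, and skipping the repeats per source row, instead of applying DISTINCT afterwards, avoids the duplicates while keeping the sums right.

```python
from collections import defaultdict

# Hypothetical facts: (hierarchy path padded with None, amount).
# The second fact's leaf sits above the lowest level, like 010103.
facts = [
    (("0", "01", "0101"), 10),
    (("0", "01", None), 5),
]

def rollup_prefixes(path):
    # Emit one prefix per rollup level, like GROUP BY ROLLUP:
    # (), ("0",), ("0","01"), ("0","01","0101")
    for i in range(len(path) + 1):
        yield path[:i]

totals = defaultdict(int)
for path, amount in facts:
    seen = set()
    for prefix in rollup_prefixes(path):
        # Coalesce: a group is labelled by its deepest non-None element
        label = next((p for p in reversed(prefix) if p is not None), "ALL")
        if label in seen:
            continue  # trailing None levels repeat the label -> skip
        seen.add(label)
        totals[label] += amount

# "01" aggregates both facts once each; no duplicate rows are produced
```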
any recommendation, links or ideas would be highly appreciated as always.
Thanks
Anyone ever encounter this kind of problem?
Any thoughts would be appreciated.
thanks -
Duplicate Records in COPA cube
Dear Friends,
We have two BW systems with one R/3 as the source system. There are identical COPA cubes in both BW systems. We created the COPA cube in the second BW system recently and did a full load. When I validate the data between the two COPA cubes, I see duplicate records in the second one. I am wondering why this is so. Is there any setting on the R/3 side that I am missing, or anything else? Any ideas?
Thanks
Raj
Hi,
I am also facing the same problem: in ZCOPA_C01_Q06 I am getting double values. Kindly help with the steps, as I am new to BI.
Regards
Amarendra -
Fact Table allows duplicate records
Does the fact table allow duplicate records?
What do you mean by duplicate records? It could be that what appears duplicate to you has some other technical key info that differs in the fact table.
At the technical level there wouldn't be duplicate records in the fact table (in SAP BW/BI there are two fact tables for each cube, which can itself cause some confusion) -
Master Data load fails because of Duplicate Records
Hi BW Experts,
I am loading historical data for an InfoObject using flexible update. First I tried to delete the data, but that was not possible because it is used in InfoCubes and an ODS. Since I am reworking those cubes and the ODS, I have to reload the whole data again. Anyway, without deleting, I tried loading the data into the InfoObject, but it threw an error that duplicate records were found. When I tried again, it threw an error that ALEREMOTE has locked the object, or that the lock could not be set for the object.
Please suggest me what to do in these scenario.
Please consider it as urgent.
Thanks in advance.
Sunil Morwal
Sunil,
First unlock the objects: go to SM12, enter the user name ALEREMOTE, choose List, then select the lock entries and delete them.
Then load the data from the PSA, or reload.
Remember that at InfoPackage level, on the Processing tab, you have the option "Ignore duplicate records".
Let me know the status.
Thanks
Ram
"BW is Everywhere"
Message was edited by: Ram -
How to create duplicate records in end routines
Hi
Key fields in DSO are:
Plant
Storage Location
MRP Area
Material
Changed Date
Data Fields:
Safety Stock
Service Level
MRP Type
Counter_1 (In flow Key figure)
Counter_2 (Out flow Key Figure)
n_ctr (Non Cumulative Key Figure)
For every record that comes in, we need to create a duplicate record. For the original record we set Counter_1 to 1 and Counter_2 to 0. For the duplicate record we update Changed Date to today's date, keep the rest of the values as they are, and set Counter_1 to 0 and Counter_2 to -1. Where is the best place to write this code in the DSO? Is it the end routine?
Please let me know a basic idea of the code.
Hi Uday,
I have same situation like Suneel and have written your logic in End routine DSO as follows:
DATA: l_t_duplicate_records TYPE TABLE OF TYS_TG_1,
      l_w_duplicate_record  TYPE TYS_TG_1.

LOOP AT RESULT_PACKAGE ASSIGNING <result_fields>.
* Copy the original record before its counters are changed
  MOVE-CORRESPONDING <result_fields> TO l_w_duplicate_record.
* Original record: in-flow counter 1, out-flow counter 0
  <result_fields>-/BIC/ZPP_ICNT = 1.
  <result_fields>-/BIC/ZPP_OCNT = 0.
* Duplicate record: today's date, in-flow 0, out-flow -1
  l_w_duplicate_record-CH_ON = sy-datum.
  l_w_duplicate_record-/BIC/ZPP_ICNT = 0.
  l_w_duplicate_record-/BIC/ZPP_OCNT = -1.
  APPEND l_w_duplicate_record TO l_t_duplicate_records.
ENDLOOP.

* Add the generated duplicates to the output package
APPEND LINES OF l_t_duplicate_records TO RESULT_PACKAGE.
I am getting the error below:
Duplicate data record detected (DS ZPP_O01 , data package: 000001 , data record: 4 ) RSODSO_UPDATE 19
I have a different requirement for the date. My requirement is to populate CH_ON as follows:
sort the records by key, get the latest CH_ON value per unique plant/storage location/material combination,
and populate that CH_ON value in the duplicate record.
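The "latest CH_ON per plant/storage location/material" can be computed in one pass before the duplicates are created, then applied to each duplicate. A sketch in Python (values and field order hypothetical; in the end routine this would be a pre-pass over RESULT_PACKAGE into an internal table keyed on plant, storage location, and material). Note that the duplicate-data-record error above likely occurs whenever a generated duplicate ends up with exactly the same key as an existing record, so the CH_ON chosen for the duplicate must differ from the original's for every record:

```python
from datetime import date

# Hypothetical records: (plant, sloc, material, ch_on)
records = [
    ("P1", "S1", "M1", date(2024, 1, 5)),
    ("P1", "S1", "M1", date(2024, 3, 2)),
    ("P2", "S1", "M2", date(2024, 2, 1)),
]

# Pass 1: latest CH_ON per (plant, sloc, material) combination
latest = {}
for plant, sloc, mat, ch_on in records:
    key = (plant, sloc, mat)
    if key not in latest or ch_on > latest[key]:
        latest[key] = ch_on

# Pass 2: the original keeps its CH_ON; each duplicate gets the
# latest CH_ON for its combination
duplicates = [
    (plant, sloc, mat, latest[(plant, sloc, mat)])
    for plant, sloc, mat, ch_on in records
]
```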
Please help me to resolve this issue.
Thanks,
Ganga -
Hi everyone,
I'm having a little difficulty resolving a problem with a repeating field causing duplication of data in a report I'm working on, and was hoping someone here can suggest something to help!
My report is designed to detail library issues during a particular period, categorised by the language of the item issued. My problem is that on the SQL database our library management system uses, an item can have more than one language listed against it (some books are in more than one language). When I list the loan records excluding the language field, I get a list of distinct loan records. Bringing the language field into the report causes the loan record to repeat for each language associated with it, so if a book is in both English and French, the loan record appears like this:
LOAN RECORD NO. LANGUAGE CODE
123456 ENG
123456 FRE
So, although the loan only occurred once I have two instances of it in my report.
I am only interested in the language that appears first, and I can exclude duplicated records from the report page. I can also count only the distinct records to get an accurate overall total. My problem is that when I group the loan records by language code (which I really need to do, as there are millions of loan records in the database), the distinct count stops being a solution: placed at group level, it only excludes duplicates within the group it is placed in. So my report would display something like this:
ENG 1
FRE 1
A distinct count of the whole report would give the correct total of 1, but a cumulative total of the figures calculated at the language-code group level would total 2, which is incorrect. I've encountered similar results using Running Totals that evaluate on a formula excluding repeated loan record numbers from the count, but again, grouping on the language code defeats this.
I need to find a way of grouping the loan records by language with a total count of loan records alongside each grouping that accurately reflects how many loans of that language took place.
Is this possible using a calculation formula when there are repeating fields, or do I need to find a way of merging the repeating language fields into one field so that the report would appear like:
LOAN RECORD LANGUAGE CODE
123456 ENG, FRE
Any suggestions would be greatly appreciated; aside from this repeating language data there are quite a few other repeating database fields on the system it would be nice to report on!
Thanks!
If you create a group on loan, then a group on language, and place the values in the group (the loan ID in the loan group header), you should see each loan ID only once.
Place the language in the language group and you should see it only once as well:
a group header returns the first value of a unique ID.
Then, to calculate while avoiding the duplicates, use manual running totals.
Create a set for each summary you want, and make sure each set has a different variable name.
MANUAL RUNNING TOTALS
RESET
The reset formula is placed in a group header (or report header) to reset the summary to zero for each unique record it groups by.
whileprintingrecords;
Numbervar X := 0;
CALCULATION
The calculation formula is placed adjacent to the field or formula being calculated. (If there are duplicate values, create a group on the field being calculated; if there are no duplicate records, the detail section is used.)
whileprintingrecords;
Numbervar X := X + 1; // or your formula
DISPLAY
The display is the sum of what is being calculated. It is placed in a group, page, or report footer (generally in the footer of the group whose header holds the reset).
whileprintingrecords;
Numbervar X;
X -
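The reset/calculate/display pattern can be sketched outside Crystal syntax. This Python sketch (illustrative data, loan IDs hypothetical) counts each loan once, in its first language group only, which is what the manual running total achieves when the calculation formula skips repeated loan IDs:

```python
# (loan_id, language) rows as Crystal would read them
rows = [(123456, "ENG"), (123456, "FRE"), (123457, "FRE")]

totals = {}      # per-language totals (one Numbervar per group)
counted = set()  # loan IDs already counted anywhere in the report

for loan_id, lang in sorted(rows, key=lambda r: r[1]):  # group by language
    totals.setdefault(lang, 0)      # RESET: zero when the group starts
    if loan_id not in counted:      # CALCULATION: skip repeated loans
        counted.add(loan_id)
        totals[lang] += 1

# DISPLAY: each loan is counted once, so the group figures
# sum to the distinct-count grand total
```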
How to delete duplicate record in Query report
Hi Experts,
I created an InfoSet and query in my SAP system, but I want to delete some duplicate records before the list output. Can we add some code in the Extras section to delete the duplicates, and how? Please give me a brief explanation.
Joe
Hi,
You can try restricting, in the filter area of Query Designer, the values of the characteristic that gives the correct result.
But I would still suggest not keeping the duplicate records in the cube, since they are not part of your requirement and give you wrong results.
So reload the correct records into the cube, in order to avoid such problems in the future as well.
Regards,
Amit -
Purchase Order Import inserts duplicate records in po_line_locations
Hi,
I'm running the standard Purchase Order Import program to import a few POs. We have only one shipment per item, so there is only one record for each line in po_line_locations. But after running the import, it inserts a duplicate record with the same quantity into po_line_locations. Basically it inserts the same item twice in the po_line_locations_all table, and the quantity is doubled at line level. I searched Metalink, but no hits for this so far.
This is in R12 (12.0.6).
Did anyone encounter this problem earlier? Any hints or comments would help.
Thanks in advance.
Edited by: user2343071 on Sep 2, 2009 3:54 PM
Hi,
Please debug the particular program with the help of an ABAPer; that may resolve your issue. Thank you. -
Duplicate record with same primary key in Fact table
Hi all,
Can the fact table have duplicate records with the same primary key? When I checked a cube, I could see records with the same primary-key combination but different key-figure values. My cube has 6 dimensions (including Time, Unit and Data Packet) and 2 key figures, so 6 fields combine to form the composite primary key of the fact table. When I checked the records in SE16, I could see duplicate records with the same primary key. There is no parallel loading happening for the cube.
BW system version: 3.1
Database: Oracle 10.2
I am not sure how this is possible.
Regards,
PM
Hi Krish,
I checked the data packet dimension also: both records have the same dimension ID (141). Apart from the key-figure values there is no difference between the fact table records. I know this is against the basic DBMS primary-key rule, but I have records like this in the cube.
Can this situation arise when the same record is in different data packets of the same request?
Thx,
PM
Hi,
We have 2 reports, and there is a difference between the two: one is showing exactly double the data.
Can anybody help me check whether the cube contains double data,
and explain how to check the contents of a cube?
Thanks
Rajini
Hi Rajini,
As everybody said, you can use transaction LISTCUBE: give the cube's technical name and execute. Then click on field selection for output and select the characteristics you want; since you say the data is doubled, don't select all the fields. You may want to select fields like 0CALDAY and characteristics through which you can track down the double records. Then execute, and you will find your data.
My guess is that, since the data is doubled, duplicate records exist, i.e. the data was loaded twice. Once you have checked the data and are sure the same data was loaded twice, delete one request ID.
Before loading data the next time, there is an option in the DTP where you have to check a box labelled something like "Avoid duplicate records"; it will avoid loading the same data twice.
Guess this should solve your problem.
Guru