Unique Data Record Settings in DSO
Hello Experts,
I have enabled the Unique Data Records setting in a DSO.
I get an error while activating the DSO through a process chain:
"Records already available in Activation Table"
"Duplicate Records"
The activation error specifies the request number and record number, but there are actually no duplicate records at the specified record number.
I have faced this issue several times; why does this happen?
I suspect the Unique Data Records setting does not work.
Regards,
KV
Hi,
I am not sure why the Unique Data Records setting is enabled. It is generally used when data with the same key will never arrive again, or when you flush the DSO and reload it every day. In normal cases duplicates can occur; for example, there can be two records with the same sales document number because a user changed the sales document twice.
So in your case I don't think you need the check mark. Let me know if you have any doubts.
Regards,
Viren
Similar Messages
-
Unique data record means you can't update a record from ECC with the same key.
Does "unique data record" mean you can't update a record from ECC with the same key fields?
Details: For example, I have two requests, Req1 and Req2, in a DSO with the Unique Data Records setting checked. On day one, Req1 loads a field "quantity" with the value 10 into the active data table. On day two, the record from Req1 cannot be overwritten by Req2 from ECC with the same key fields but a different quantity value of 20, because of the Unique Data Records setting. As a result, the delta load from ECC to the DSO fails. Is that right?
I think we can only use the Unique Data Records setting when going from DSO to cube, right?
Please give me a simple scenario in which we can use this setting.
I have already searched the threads and will assign points only for valuable information.
Thanks in advance.
York

Hi Les,
Unique Data Records:
With the Unique Data Records indicator, you determine whether only unique data records are to be updated to the ODS object. This means that you cannot load a data record into the ODS object if its key combination already exists in the system; otherwise, a termination occurs. Only use this setting when you are sure that only unique data records will be loaded into the ODS object (for example, single documents). A typical application is the loading of mass data, where it improves load performance.
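Conceptually, the activation check with this flag behaves like the sketch below. This is an illustration only, not SAP's actual activation code; the table and field names are made up for the example:

```abap
* Sketch of a "unique data records" check against the active table.
TYPES: BEGIN OF ty_doc,
         doc_number TYPE c LENGTH 10,   " semantic key
         quantity   TYPE i,
       END OF ty_doc.

DATA: lt_active TYPE HASHED TABLE OF ty_doc
                     WITH UNIQUE KEY doc_number,
      ls_new    TYPE ty_doc.

ls_new-doc_number = '0000004711'.
ls_new-quantity   = 10.

READ TABLE lt_active TRANSPORTING NO FIELDS
     WITH TABLE KEY doc_number = ls_new-doc_number.
IF sy-subrc = 0.
  " Key already exists in the active table: with the flag set,
  " activation terminates ("Records already available in Activation
  " Table") instead of overwriting the existing record.
  MESSAGE 'Duplicate data record' TYPE 'E'.
ELSE.
  INSERT ls_new INTO TABLE lt_active.
ENDIF.
```

This also shows why the setting speeds up mass loads: the system can insert new keys directly instead of reading and merging existing records.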
Hope it Helps
Srini -
How to select set of unique data records from internal table
Hi
I am looking for a command to select all unique data records from an internal table without using a loop. Does anybody know a command that could do this?
<b><u>An illustrating example:</u></b>
<i>Example:
Table content
a 1
a 2
a 3
b 1
b 2
c 1
c 2
c 3
d 1</i>
So I am looking for a command that should provide a, b, c & d for the first column, or 1, 2 & 3 for the second column.

Hi,
SELECT DISTINCT matnr
  FROM mara
  INTO TABLE i_mara.
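SELECT DISTINCT works against a database table. If the data is already in an internal table, a common loop-free alternative is SORT followed by DELETE ADJACENT DUPLICATES. A small sketch (the table and field names are assumptions for the example):

```abap
TYPES: BEGIN OF ty_row,
         col1 TYPE c LENGTH 1,
         col2 TYPE i,
       END OF ty_row.

DATA: lt_data     TYPE STANDARD TABLE OF ty_row,
      lt_distinct TYPE STANDARD TABLE OF ty_row.

" ... lt_data filled with a1, a2, a3, b1, b2, c1, c2, c3, d1 ...

lt_distinct = lt_data.                " work on a copy
SORT lt_distinct BY col1.             " duplicates must be adjacent
DELETE ADJACENT DUPLICATES FROM lt_distinct COMPARING col1.
" lt_distinct now holds one row per distinct col1 value: a, b, c, d
```

The same pattern with COMPARING col2 would yield the distinct values of the second column.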
Best regards,
Prashant -
How to see the data records for a dso based upon the request id?
Hi all,
How to see the data records based upon the request id on the dso.
I need to see the data for a dso based upon the request id 444493 loaded from another dso through repair full update.
Thanks

Hi,
Step 1: Select your request in the DSO's Requests tab.
Step 2: Select your DSO just above the Contents/Requests/Reconstruction tabs.
Step 3: Click Contents (the spectacles symbol) in the top area of your screen.
Step 4: Select the required fields.
Regards,
Suman -
DTP Error: Duplicate data record detected
Hi experts,
I have a problem loading data from a DataSource to a standard DSO.
In the DataSource there are master data attributes whose key contains id_field.
In the end routine I perform some operations that multiply the lines in the result package and fill a new date field, defined in the DSO (and also in the result_package definition).
E.g.:
Result_package before the end routine:
Id_field | attr_a | attr_b | ... | attr_x | date_field
1        | a1     | b1     |     | x1     |
2        | a2     | b2     |     | x2     |
Result_package after the end routine:
Id_field | attr_a | attr_b | ... | attr_x | date_field
1        | a1     | b1     |     | x1     | d1
2        | a1     | b1     |     | x1     | d2
3        | a2     | b2     |     | x2     | d1
4        | a2     | b2     |     | x2     | d2
The date_field (of type date) is one of the key fields in the DSO.
When I execute the DTP, I get an error in the section "Update to DataStore Object": "Duplicate data record detected".
"During loading, there was a key violation. You tried to save more than one data record with the same semantic key."
As far as I know, the result_package key contains all fields except those of type i, p, and f.
In simulation mode (debugging) everything is correct and the status is green.
In the DSO the "Unique Data Records" checkbox is unchecked.
Any ideas?
Thanks in advance.
MG

Hi,
In the end routine, try:
SORT RESULT_PACKAGE BY xxx yyy.
DELETE ADJACENT DUPLICATES FROM RESULT_PACKAGE COMPARING xxx yyy.
Here xxx and yyy are the key fields, so that you can eliminate the extra duplicate records. (Note that DELETE ADJACENT DUPLICATES only removes neighbouring rows, so sort by the same fields first.)
Or you can try:
SORT itab_xxx BY field1 field2 field3 ASCENDING.
DELETE ADJACENT DUPLICATES FROM itab_xxx COMPARING field1 field2 field3.
This can be done before you loop over your internal table (in case you are using internal tables and loops); itab_xxx is the internal table.
field1, field2 and field3 may vary depending on your requirement.
Using the lines above, you can get rid of duplicates coming through the end routine.
Regards
Sunil
Edited by: Sunny84 on Aug 7, 2009 1:13 PM -
Loading ODS - Data record exists in duplicate within loaded data
BI Experts,
I am attempting to load an ODS with the Unique Data Records flag turned ON. The flat file I am loading is a crosswalk with four fields; the first three fields are being used as key fields in order to make the records unique. I have had this issue before, but gave up in frustration and added an ascending count field to simply create a unique key. This time I would like to solve the problem if possible.
The errors come back referring to two data rows that are duplicate:
Data record 1 - Request / Data package / Data record: REQU_4CNUD93Q3RCC80XFBG2CZXJ0T/000003/ 339
Data record 2 - Request / data package / data record: REQU_4CNUD93Q3RCC80XFBG2CZXJ0T/000003/ 338
And below here are the two records that the error message refers to:
3 338 3902301480 19C* * J1JD
3 339 3902301510 19C* * J1Q5
As you can see, the combination of my three key fields should not be creating a duplicate: (3902301480, 19C*, *) and (3902301510, 19C*, *). (I had spelled out the asterisks because they turn the text bold.)
Is there something off with the numbering of the data records? Am I looking in the wrong place? I have examined my flat file and cannot find duplicates, and the records that BW says are duplicates are not. I am really having a hard time with this; any and all help greatly appreciated!

Thank you for the response, Sabuj...
I was about to answer your questions but I wanted to try one more thing, and it actually worked. I simply moved the MOST unique Key Field to the TOP of my Key Field list. It was at the bottom before.
FYI for other people with this issue -
Apparently the ORDER of your key fields is important when trying to avoid creating duplicate records.
I am using four data fields, and was using three of them as key fields. No combination of all three should have produced a duplicate; however, when BW finds that the first two key fields match, it sometimes apparently doesn't consider the third one, which would make the row unique. By simply changing the order of my key fields I was able to stop getting the duplicate row errors.
Lesson: if you KNOW that your records are unique and you are STILL getting errors for duplicates, try changing the ORDER of your key fields. -
Do Not Check Uniqueness of Data in Write Optimised DSO
Hello,
I am working with a write-optimized DSO that already has a billion records in it. The flag "Do Not Check Uniqueness of Data" is checked in the settings (meaning it definitely will not check for uniqueness of data). I am thinking of removing this flag and activating the DSO again, because without the flag the DSO offers request ID as an input in LISTCUBE, and without that LISTCUBE never returns results. (I have to analyze aggregations.)
I tried removing this flag and then activating the DSO in a production system with 17 million records, which took 5 minutes for activation (index creation). So the maths says that for a billion records the activation transport will take around 6 hours to move to production.
Questions:
How does this flag check the uniqueness of a record? Does it check the active table or the index?
To what extent will the DTP slow down subsequent data loads?
Any other factors/risks/precautions to be taken?
Let me know if the questions are not clear or if further inputs are required from my side.
Thanks.
Agasti

Hi,
Please go through the site :
/people/martin.mouilpadeti/blog/2007/08/24/sap-netweaver-70-bi-new-datastore-write-optimized-dso
As far as your questions are concerned, I hope the above blog will answer most of them; if it doesn't, please read the thread mentioned below.
Use of setting "Do Not Check Uniqueness of Data" for Write Optimized DSO
Regards
Raj -
Error while loading data from PSA to DSO using DTP
Hi,
I have a unique alphanumeric identifier of type CHAR, length 32. When I load the data from PSA to DSO using a DTP, I get the following error message:
"An error occurred while executing a transformation rule:
The exact error message is
Overflow converting from ' '
The error was triggered at the following point in the program:
GP4JJHUI6HD7NYAK6MVCDY4A01V 425
System response
Processing the data record has been terminated"
Any idea how I can resolve this?
Thanks

Hi,
First check whether there are any special characters in the data. If not, check the DataSource: on the Fields tab, check the format of the particular field (internal/external) and choose the internal format. Also check any conversion routine.
Then use semantic groups in the DTP.
Try it.
Thank you,
lokeeshM
Edited by: lmedaSAP_BI on Oct 20, 2010 6:44 AM -
Hi Experts
Please update me in detail, with examples if possible, on key fields and dimensions,
how data records are processed in a DSO and a cube,
and what 0RECORDMODE is in a DSO.
I tried to search but I can't find what I am looking for.

Dear User,
Key fields: Much like primary keys; if the key field combination is the same, the relevant data field values are overwritten.
Dimension: A collection of LOGICALLY RELATED InfoObjects; an angle of viewing the data from the fact table.
0RECORDMODE: Used to maintain change history. It has several different types of images, with which we can identify delta records.
Regards,
Ram. -
Error while loading data from PSA to DSO
Hi,
How do I identify the erroneous records in the DSO?
While loading the data from PSA to DSO through process chains, we get an error that says:
"Value '#' (hex. '0023') of characteristic 0BBP_DELREF contains invalid characters"
"Error when assigning SID: Action VAL_SID_CONVERT InfoObject 0BBP_DELREF"
There are no error records in PSA, but it seems some invalid characters exist.
Could you please help us find the error records in the DSO and correct them?

Hi,
These are errors BRAIN290 & RSDRO302.
The problem here is most likely that BW doesn't recognise a character you are trying to load. Generally the character is not "#",
as BW displays all symbols it does not recognise as "#". You should decode the actual value from the hex string. Note that hex values below 20 are not allowed in BW.
Please review Note 173241 and the document mentioned within.
This shows what characters are not allowed in BW characteristic values.
You should check if the character is allowed, and then you can solve the problem in one of the following ways:
1) add this character to the "permitted character" list in RSKC as described in the note.
2) correct the value in the source system.
3) correct the value in the PSA (to do this, you will need to delete the request from the ODS object and then you can change the disallowed character via the PSA maintenance).
4) follow Note 1075403 so that the characters HEX00 to HEX1F are not checked (this affects only characteristics that do not allow "lower case").
5) if you cannot use any of the above options, then you will need to create a routine in your transfer rules for the affected infoobject, and change the value to a character which is permitted in BW.
These are the usual ways to solve this issue.
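For option 5, a transfer or transformation routine can scrub the disallowed characters before the SID assignment. A minimal sketch, assuming the field is CHAR 60 (adjust the length and the replacement character to your InfoObject); character comparison against space is a simplification that catches the HEX00-HEX1F control range in common code pages:

```abap
* Replace control characters (below HEX20) with a space.
DATA: lv_value TYPE c LENGTH 60,   " assumed field length
      lv_off   TYPE i,
      lv_len   TYPE i.

lv_len = strlen( lv_value ).
WHILE lv_off < lv_len.
  IF lv_value+lv_off(1) < ' '.     " control characters sort below space
    lv_value+lv_off(1) = ' '.
  ENDIF.
  lv_off = lv_off + 1.
ENDWHILE.
```

A stricter variant would compare each character against the permitted set maintained in RSKC instead of only the control range.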
Rgds,
Colum -
Duplication Error while loading data in write optimized DSO
Hi Experts,
I have an issue. In BI7 I'm trying to load data into a write-optimized DSO from the Controlling DataSource 0CO_OM_CCA_10. I get the data properly in PSA, but while loading it into my write-optimized DSO I get a duplication error, although the key fields and data fields are properly placed in my data target.
Please let me know the solution to load it successfully.
Thanks in Advance.
Amit

Hi,
Thanks for your reply.
I'm getting this error message:
Diagnosis
During loading, there was a key violation. You tried to save more than
one data record with the same semantic key.
The problematic (newly loaded) data record has the following properties:
o DataStore object: GWFDSR02
o Request: DTPR_4BA3GF8JMQFVQ8YUENNZX3VG5
o Data package: 000006
o Data record number: 101
Although I have selected the key fields that identify a unique record, I still get the duplication error.
I have even referred to the BI Content for this DataSource and found that it has the same key fields as mine.
Debjani: I need unique records without duplication, and I'm doing a full load in the DTP.
What is to be done? Please help.
Thanks in advance.
Edited by: Amit Kotwani on Sep 26, 2008 10:56 AM -
Duplicate Error while loading data to Write Optimized DSO
Hi,
When I load data into a write-optimized DSO, I get the error "Duplicate Data Record Detected". I have Sales Document Number, Fiscal Year Variant and Billing Item as the semantic key in the DSO.
For this DSO I am getting data from a test ECC system, in which most of the Sales Document Number column is blank for this DataSource.
When I go into the error stack of the DSO, all the rows with a blank Sales Document Number are displayed. For all these rows, the Item Number is 10.
Am I getting this duplicate error because the Sales Document Number is blank and the Item Number is 10 for all of them? I read in other threads that a write-optimized DSO doesn't care about the key values; it loads the data even if the key values are the same.
Any help is highly appreciated.
Regards,
Murali

Hi Murali,
Is the Item Number a key field ?
When all the key fields are the same, the data gets aggregated depending on the setting made in the transformation for the key figures. The two options for key figures are:
1. Add up the key figures
2. Replace the key figure
Since the Sales Document Number is blank and the Item Number is the same, there is a possibility that the key figures for these records get added up or replaced, and because of this property of a standard DSO it might not throw an error.
Check the key figure value in the standard DSO for that Sales Document Number and Item Number and try to find out what it is. It may be the sum over all the common records, or the key figure value of the last common record.
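The "add up" option can be pictured with ABAP's COLLECT statement, which sums the numeric fields of rows whose character-like fields (the implicit key) are identical. This is a rough analogy only, with assumed field names, not the DSO's internal code:

```abap
* Two records with identical (blank) key fields get their key figure summed.
TYPES: BEGIN OF ty_rec,
         doc_no TYPE c LENGTH 10,   " semantic key part
         item   TYPE n LENGTH 6,    " semantic key part
         qty    TYPE i,             " key figure
       END OF ty_rec.

DATA: lt_target TYPE STANDARD TABLE OF ty_rec,
      ls_rec    TYPE ty_rec.

CLEAR ls_rec-doc_no.               " blank sales document number
ls_rec-item = '000010'.
ls_rec-qty  = 5.

COLLECT ls_rec INTO lt_target.     " first record inserted
COLLECT ls_rec INTO lt_target.     " same key: qty is summed, no error
```

The "replace" option would instead behave like a MODIFY on the existing row, keeping only the last value.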
Regards
Raj Rai -
Urgent : Error while loading data from PSA to DSO
Hi,
I am working on 7.0.
When I look at the data in PSA, I have one date field containing data in the proper date format. This date field is mapped to 0CALMONTH and 0CALYEAR, for which I have written a routine, and it is directly assigned to 0DOC_DATE.
I have written a start routine to do some modifications in SOURCE_PACKAGE.
But When i am loading data from PSA to DSO its giving below error:
" Exception wrong_date ; see long text RSTRAN 303
Diagnosis
An exception wrong_date was raised while executing function module
RST_TOBJ_TO_DERIVED_TOBJ .
System Response
Processing the corresponding record has been terminated.
Procedure
To analyze the cause, set a break point in the program of the
transformation at the call point of function module
RST_TOBJ_TO_DERIVED_TOBJ . Simulate the data transfer process to
investigate the cause. "
I am not sure why this error is coming up. Please guide me!

You can map the code to DOC_DATE.
The code is used to read from any date and convert that into Fiscal year/period.
Try this code.
data: l_fiscyear type t009b-bdatj.

call function 'FISCPER_FROM_CALMONTH_CALC'
  exporting
    id_date     = COMM_STRUCTURE-doc_date
    iv_periv    = 'V9'
  importing
    ev_fiscyear = l_fiscyear.

* result value of the routine
RESULT = l_fiscyear. -
Duplicate data record error when re-running a DTP
I have a DTP created to load a text DataSource, and it is included in the process chain. I have set delta mode so that it picks up additional records if they are available.
But I notice that when there are no additional records (for example, the earlier load had 269 records and the current load has the same 269 records), I get the error "Duplicate data record" and the status is set to red. I do not want to see an error in this case, since no new records were found; I want the status to be green when there are no new records.
Could you please suggest what settings will do that.
Regards
Raj

A delta DTP will fetch only unloaded requests (requests which do not exist in the target), not additional records.
Is the text datasource delta enabled ?
Do you have Infopackage of Update type Delta setup?
Did you run a Full DTP before Delta DTP?
Assuming the full InfoPackage has loaded 269 records to PSA, the same 269 will be loaded again (if there are no additional records in the source system).
Req 1 - 269 - Yesterday
Req 2 - 269 - Today
The full DTP run yesterday will load 269 records to the target.
The delta DTP run today will load Req 1 and Req 2 to the target (master data will be overwritten) - the reason is that the delta DTP acts like an init with data transfer.
You can start off with a delta DTP instead of a full one; if a full DTP is run before a delta DTP, make sure you delete the requests loaded by the full DTP.
This can be ignored here, as this is master data, which will be overwritten.
To get rid of the error, just check "Handle Duplicate Records" in the Update tab of the DTP. -
How to avoid 'duplicate data record' error message when loading master data
Dear Experts
We have a custom extractor on table CSKS called ZCOSTCENTER_ATTR. The settings of this DataSource are the same as those of 0COSTCENTER_ATTR. The problem is that when loading to BW, it seems that validity (DATEFROM and DATETO) is not taken into account. If a cost center has several entries with different validity periods, I get this duplicate data record error. There is no error when loading 0COSTCENTER_ATTR.
Enhancing 0COSTCENTER_ATTR to have one datasource instead of two is not an option.
I know that you can set ignore duplicates in the infopackage, but that is not a nice solution. 0COSTCENTER_ATTR can run without this!
Is there a trick you know to tell the system that the date fields are also part of the key??
Thank you for your help
Peter

Alessandro - ZCOSTCENTER_ATTR loads 0COSTCENTER, just like 0COSTCENTER_ATTR.
Siggi - I don't have the error message described in the note.
"There are duplicates of the data record 2 & with the key 'NO010000122077 &' for characteristic 0COSTCENTER &."
In PSA the records are marked red with the same message (MSG no 191).
As you see the key does not contain the date when the record is valid. How do I add it? How is it working for 0COSTCENTER_ATTR with the same records? Is it done on the R/3 or on the BW side?
Thanks
Peter