Updating Write Optimized DataStore Objects
Are there any APIs available to update write-optimized DataStore objects?
Hello,
You may wish to check the OSS note below:
<b>954550 - Unable to convert status for write-opt. DataStores</b>
This should solve the problem; if it doesn't, please raise a customer message with SAP.
Hope it Helps
Chetan
@CP..
Similar Messages
-
BI 7.0 Write Optimized Datastore - Deltas
Hello,
I have been using the 2LIS_03_BF and 2LIS_03_UM extractors to bring data into BI 7.0 (SP14) into a custom DataStore. We are not using the SAP-delivered cube.
I changed from a standard DSO to a write-optimized DSO to bring everything in from the _UM extractor; the custom standard DataStore was not bringing in all records because of its key structure.
I deleted the data in BI, deleted the 03 delta job in SM37, cleared RSA7, deleted the setup tables, and refilled the setup tables. I then ran an init without data transfer, followed by a full repair. Everything loaded perfectly.
Next Question - can I use the regular delta to bring in changes or do I have to use a DTP?
Thanks,
Lynda
Delta Administration: Write-Optimized DataStore objects
Data that is loaded into write-optimized DataStore objects is available immediately for further processing. The activation step that was previously necessary is no longer required. Note that the loaded data is not aggregated: if two data records with the same logical key are extracted from the source, both records are saved in the DataStore object, since the technical key is unique for each record. The record mode (0RECORDMODE) responsible for aggregation remains, however, so that the aggregation of data can take place at a later time in standard DataStore objects. A write-optimized DataStore object does not support image-based delta; it supports request-level delta, and you get a brand-new delta request for each data load.
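To make the technical-key behaviour concrete, here is a minimal Python sketch (the dict-based table, the `load_request` helper, and the field names are illustrative assumptions, not SAP APIs): every loaded record gets its own (request, data package, record number) key, so two records with the same logical key coexist instead of being aggregated.

```python
def load_request(table, request_id, records, package_size=2):
    """Append records to the active table under a generated technical key."""
    for i, rec in enumerate(records):
        # Stand-ins for the 0REQUEST, 0DATAPAKID, and 0RECORD fields
        tech_key = (request_id, i // package_size + 1, i + 1)
        table[tech_key] = rec  # never overwrites: the key is unique per record

active_table = {}
load_request(active_table, "REQU_001", [
    {"doc": "4711", "amount": 100},   # same logical key twice...
    {"doc": "4711", "amount": -100},
])
assert len(active_table) == 2         # ...both versions are kept
```

Because nothing is overwritten, the history of the data is retained and any aggregation is deferred to a downstream standard DSO or InfoCube.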
Since write-optimized DataStore objects do not have a change log, the system does not create delta (in the sense of a before image and an after image). When you update data into the connected InfoProviders, the system only updates the requests that have not yet been posted.
A write-optimized DataStore object supports request-level delta. In order to capture before- and after-image delta, you must post the latest request into further targets such as standard DataStore objects or InfoCubes. -
How do you update a write-optimized ODS? Are APIs available?
Hi Satya,
Some important points on Write Optimized ODS:
The system does not generate SIDs for write-optimized DataStore objects, and you do not need to activate them. This means that you can save and further process data quickly. Reporting is possible on the basis of these DataStore objects. However, SAP recommends that you use them as a consolidation layer and update the data to additional InfoProviders, standard DataStore objects, or InfoCubes.
For performance reasons, SID values are not created for the characteristics that are loaded. The data is still available for BEx queries. However, in comparison to standard DataStore objects, you can expect slightly worse performance because the SID values have to be created during reporting.
If you want to use write-optimized DataStore objects in BEx queries, SAP recommends that they have a semantic key and that you run a check to ensure that the data is unique. In this case, the write-optimized DataStore object behaves like a standard DataStore object. If the DataStore object does not have these properties, unexpected results may be produced when the data is aggregated in the query.
Regards,
Anil -
Multiple data loads in PSA with write optimized DSO objects
Dear all,
Could someone tell me how to deal with this situation?
We are using write-optimized DSO objects in our staging area. These DSOs are filled with full loads from a BOB SAP environment.
The content of these DSO objects is deleted before loading, but we would like to keep the data in the PSA for error tracking and troubleshooting. This also provides the opportunity to see the differences between two data loads.
For normal operation, only the most recent package in the PSA should be loaded into these DSO objects (as in normal data staging in BW 3.5 and before).
As far as we can see, it is not possible to load only the most recent data into the staging layer. This causes duplicate-record errors when there are multiple data loads in the PSA.
We already tried the "all new records" functionality in the DTP, but that only loads the oldest data package and does not process the new PSA loads.
Does any of you have a solution for this?
Thanks in advance.
Harald
Hi Ajax,
I did think about this, but it is more of a workaround. Call me naive, but it should work as it did in BW 3.5!
The proposed solution would require a lot of maintenance afterwards. Besides that, you also get a problem with PSA IDs after they have changed: if you use the option to delete the content of a PSA table via a process chain, it will fail when the DataSource is changed, due to a newly generated PSA table ID.
Regards,
Harald -
Error while updating data from DataStore object
Hi,
Currently we are doing a technical-only upgrade from BW 3.5 to BI 7.0.
We found errors during the process chain run in the further-processing step. This step is basically a delta load from a DSO to a cube.
The error messages are:
Error while updating data from DataStore object 0GLS_INV
Message no. RSMPC146
Job terminated in source system --> Request set to red
Message no. RSM078
That's all; the system gives no further error messages that explain this clearly.
I have applied SAP Note 1152453 and reactivated the DataSource, InfoSource, and data target.
Still no help here!?
Please advise if you encountered these errors before.
Thanks in advance.
Regards,
David
Edited by: David Tai Wai Tan on Oct 31, 2008 2:46 PM
Edited by: David Tai Wai Tan on Oct 31, 2008 2:50 PM
Edited by: David Tai Wai Tan on Oct 31, 2008 2:52 PM
Hi Vijay,
I got this error:
Runtime Errors MESSAGE_TYPE_X
Date and Time 04.11.2008 11:43:08
To process the problem further, contact your SAP system
administrator.
Using Transaction ST22 for ABAP Dump Analysis, you can look
at and manage termination messages, and you can also
keep them for a long time.
Short text of error message:
No start information on process LOADING
Long text of error message:
Diagnosis
For process LOADING, variant ZPAK_0SKIJ58741F4ASCSIYNV1PI9U, the
end should be logged for instance REQU_D4FIDCEKO82JUCJ8RWK6HZ9KX
under the log ID D4FIDCBHXPLZMP5T71JZQVUWX. However, no start has
been logged for this process.
System Response
No log has been written. The process (and consequently the chain)
has been terminated.
Procedure
If possible, restart the process.
Procedure for System Administration
Technical information about the message:
Message class....... "RSPC"
Number.............. 004
Variable 1.......... "D4FIDCBHXPLZMP5T71JZQVUWX"
Variable 2.......... "LOADING"
Variable 3.......... "ZPAK_0SKIJ58741F4ASCSIYNV1PI9U"
Variable 4.......... "REQU_D4FIDCEKO82JUCJ8RWK6HZ9KX"
Any idea? -
What is the main difference of Direct update DSO and Write optimized DSO
What is the main difference of Direct update DSO and Write optimized DSO?
Hi chandra:
Check this link.
http://help.sap.com/saphelp_nw04s/helpdata/en/f9/45503c242b4a67e10000000a114084/content.htm
You can find another difference on page 147, section "Reclustering DataStore Objects" of the document "Enterprise Data Warehousing"
http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/67efb9bb-0601-0010-f7a2-b582e94bcf8a?quicklink=index&overridelayout=true
>You can only use reclustering for standard DataStore objects and DataStore objects for direct update. You cannot use reclustering for write-optimized DataStore objects. User-defined multidimensional clustering is not available for write-optimized DataStore objects
Regards,
Francisco Milán.
Edited by: Francisco Milan on Aug 11, 2010 7:09 PM -
Error occurs while activating a 'Write Optimized' DSO.
I am getting the error "There is no PSA for InfoSource 'XXXX' and source system 'XXX'" while activating a newly defined DSO object.
I am able to activate standard DSOs; however, the error occurs while activating a 'Write Optimized' DSO.
Hi,
For a write-optimized DSO, check whether you have ticked the uniqueness-of-records check. If that check is set and two identical records come from the source in one load, you will get an error.
From SAP help
You can specify that you do not want to run a check to ensure that the data is unique. If you do not check the uniqueness of the data, the DataStore object table may contain several records with the same key. If you do not set this indicator, and you do check the uniqueness of the data, the system generates a unique index in the semantic key of the InfoObject. This index has the technical name "KEY". Since write-optimized DataStore objects do not have a change log, the system does not create delta (in the sense of a before image and an after image). When you update data into the connected InfoProviders, the system only updates the requests that have not yet been posted.
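As a rough illustration of the uniqueness option (plain Python, not SAP code; the table, the `insert` helper, and the `doc` semantic-key field are made up for the example), enabling the check behaves like the unique "KEY" index and rejects a second record with the same semantic key:

```python
class DuplicateKeyError(Exception):
    """Raised when the uniqueness check rejects a duplicate semantic key."""

def insert(table, semantic_index, tech_key, record, check_uniqueness):
    """Insert a record; optionally enforce a unique semantic key."""
    sem_key = record["doc"]  # 'doc' stands in for the semantic key fields
    if check_uniqueness and sem_key in semantic_index:
        raise DuplicateKeyError(f"duplicate semantic key {sem_key!r}")
    semantic_index.add(sem_key)
    table[tech_key] = record

tbl, idx = {}, set()
insert(tbl, idx, ("REQU_1", 1, 1), {"doc": "4711"}, check_uniqueness=True)
# A second record with doc "4711" would now raise DuplicateKeyError.
```

With the check disabled, the second insert would simply land under its own technical key and the table would hold both records.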
Thanks
Srikanth -
Duplicate Semantic Key in Write Optimized DSO
Gurus
When a write-optimized DSO is used, each record receives a unique technical key, so records with duplicate semantic keys can exist. (Of course, this assumes the 'Do Not Check Uniqueness of Data' indicator is set; otherwise the system generates a unique index "KEY" on the semantic key fields.)
See help https://help.sap.com/saphelp_crm60/helpdata/en/a6/1205406640c442e10000000a1550b0/frameset.htm
This means that the DSO can contain duplicate records.
My question is: What happens to these duplicates when a request level delta update is done to a Standard DSO or Infocube?
Do duplicates end up in the error stack, or are they simply aggregated in further loads? The latter would be a problem for reporting (double counting).
thanks
tony
Hi Tony,
It will aggregate the data in some undesired way.
Read on...
https://help.sap.com/saphelp_crm60/helpdata/en/b6/de1c42128a5733e10000000a155106/frameset.htm
If you want to use write-optimized DataStore objects in BEx queries, we recommend that they have a semantic key and that you run a check to ensure that the data is unique. In this case, the write-optimized DataStore object behaves like a standard DataStore object. If the DataStore object does not have these properties, unexpected results may be produced when the data is aggregated in the query.
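A tiny Python sketch (made-up data, not SAP code) of the double-counting risk: when the query aggregates over the semantic key, two stored versions of the same document are simply summed.

```python
from collections import defaultdict

rows = [                              # both copies kept by the WO-DSO
    {"doc": "4711", "amount": 100},
    {"doc": "4711", "amount": 100},
]
totals = defaultdict(int)
for r in rows:
    totals[r["doc"]] += r["amount"]   # query-time aggregation over the semantic key
assert totals["4711"] == 200          # double-counted: the true value is 100
```

This is exactly why SAP's recommendation above is to enforce uniqueness if the WO-DSO is to be queried directly.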
Hope it helps...
Regards,
Ashish -
Missing PARTNO field in Write Optimized DSO
Hi,
I have a write optimized DSO, for which the Partition has been deleted (reason unknown) in Dev system.
For the same DSO, partition parameters exists in QA and production.
Now while transporting this DSO to QA, I am getting an error "Old key field PARTNO has been deleted", and the DSO could not be activated in the target system.
Please let me know how to re-insert this technical key PARTNO in my DSO.
I presume it has something to do with the partitioning of the DSO.
Please help.
Hi,
Since the write-optimized DataStore object only consists of the table of active data, you do not have to activate the data, as is necessary with the standard DataStore object. This means that you can process data more quickly.
The loaded data is not aggregated; the history of the data is retained. If two data records with the same logical key are extracted from the source, both records are saved in the DataStore object. The record mode responsible for aggregation remains, however, so that the aggregation of data can take place later in standard DataStore objects.
The system generates a unique technical key for the write-optimized DataStore object. The standard key fields are not necessary with this type of DataStore object. If standard key fields exist anyway, they are called semantic keys so that they can be distinguished from the technical keys. The technical key consists of the Request GUID field (0REQUEST), the Data Package field (0DATAPAKID) and the Data Record Number field (0RECORD). Only new data records are loaded to this key.
You can specify that you do not want to run a check to ensure that the data is unique. If you do not check the uniqueness of the data, the DataStore object table may contain several records with the same key. If you do not set this indicator, and you do check the uniqueness of the data, the system generates a unique index in the semantic key of the InfoObject. This index has the technical name "KEY". Since write-optimized DataStore objects do not have a change log, the system does not create delta (in the sense of a before image and an after image). When you update data into the connected InfoProviders, the system only updates the requests that have not yet been posted.
PS: Excerpt from http://help.sap.com/saphelp_nw2004s/helpdata/en/b6/de1c42128a5733e10000000a155106/frameset.htm
Hope this helps.
Best Regards,
Rajani -
Queries on Write Optimized ODS
Hi All,
Do BEx queries work efficiently on a write-optimized ODS?
Regards,
Satya
Hi Satya,
Some important points on Write Optimized ODS:
The system does not generate SIDs for write-optimized DataStore objects, and you do not need to activate them. This means that you can save and further process data quickly. Reporting is possible on the basis of these DataStore objects. However, SAP recommends that you use them as a consolidation layer and update the data to additional InfoProviders, standard DataStore objects, or InfoCubes.
For performance reasons, SID values are not created for the characteristics that are loaded. The data is still available for BEx queries. However, in comparison to standard DataStore objects, you can expect slightly worse performance because the SID values have to be created during reporting.
If you want to use write-optimized DataStore objects in BEx queries, SAP recommends that they have a semantic key and that you run a check to ensure that the data is unique. In this case, the write-optimized DataStore object behaves like a standard DataStore object. If the DataStore object does not have these properties, unexpected results may be produced when the data is aggregated in the query.
Regards,
Anil -
Can a write-optimized DSO be used for Delta upload
Hi,
can any one please answer following..
1. Can a write-optimized DSO be used for delta upload?
2. Is industry-based content available in BI Content?
Thanks&Regards
Satya
Hi,
A write-optimized DataStore object does not support image-based delta; it supports request-level delta, and you get a brand-new delta request for each data load.
Since write-optimized DataStore objects do not have a change log, the system does not create delta (in the sense of a before image and an after image). When you update data into the connected InfoProviders, the system only updates the requests that have not yet been posted.
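Request-level delta can be sketched in a few lines of Python (the `delta_update` helper and the request names are illustrative, not a real DTP API): the target tracks which requests it has already posted and pulls only the new ones.

```python
def delta_update(source_requests, posted):
    """Transfer only the requests the target has not posted yet."""
    new = [r for r in source_requests if r not in posted]
    posted.update(new)
    return new

posted_to_cube = set()
# First run: everything is new.
assert delta_update(["REQU_1", "REQU_2"], posted_to_cube) == ["REQU_1", "REQU_2"]
# Later run with one new request: only that request is transferred.
assert delta_update(["REQU_1", "REQU_2", "REQU_3"], posted_to_cube) == ["REQU_3"]
```

Note that the granularity is the whole request, never individual before/after images of a record.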
A write-optimized DataStore object supports request-level delta. In order to capture before- and after-image delta, you must post the latest request into further targets such as standard DataStore objects or InfoCubes. -
Unable to delete request from write-optimized DSO (Error during rollback)
Hi Gurus,
I am trying to delete a delta request from a Write-Optimized DSO. This request was uploaded with a DTP from another Write-optimized DSO.
The actual overall status of the request is RED and the description of that status is now: 'Error during rollback of request DTPR_4JW6NLSVDUNYY3GTD6F4DQJWR; only rollback allowed'.
I checked the log of all request operations in the DataStore (from the same line where the red request is now), and I see my several attempts to delete this request under a RED radio button with the title Rollback. The details for this error are the following:
Could not delete request data from active table
Message no. RSODSO_ROLLBACK114
Diagnosis
The system could not delete the request data from the active table of a write-optimized DataStore object.
System Response
Write-optimized DataStore object: DTFISO02
Active table: /BIC/ADTFISO0200
Request: DTPR_4JW6NLSVDUNYY3GTD6F4DQJWR
Procedure
Search for Notes containing the key words "Delete write-optimized DSO PSA"
I am relatively new to SAP BI 7.0 and I do not know how to delete this request. Any help will be highly appreciated !!
Leticia
Hi Leticia:
Take a look at the SAP Notes below.
Note 1111065 - "701: Delta consistency check for write-optimized DSOs"
Note 1263877 - "70SP20: Delta consistency check for write-optimized DSOs"
Note 1125025 - "P17:PSA:DSO:ODSR missing in PSA process for write-opt. DSO"
Additionally, some ideas from the alternative presented on the blog by KMR might help you.
"How to generate a selective deletion program for info provider"
Regards,
Francisco Mílán. -
Deletion of requests from Write optimized DSO
Hi all,
We tried using report RSSM_DELETE_WO_DSO_REQUESTS to delete requests older than 60 days, via a process chain.
But this is not removing all requests older than 60 days. Quite a few are not getting deleted, and even on a second run the program deletes only one request at a time. Not sure why it behaves this way.
Is there any other alternative, such as deleting WDSO (write-optimized DataStore object) load requests and active-table data without deleting them from the target?
This also seems to work...?
Also, how should I ensure uniformity of retention timelines for the PSA and change log of this DSO flow? Is retaining the PSA for 15 days and the change log for 90 days fine? I want to ensure that no data deleted in the WDSO comes in again.
Can someone suggest, please?
-
Difference between - Write Optimized and Direct Update DSO
Hi Gurus,
I know the similarities of the Write Optimized and Direct Update DSO.
But I want to know what is the difference between the both.
Can any expert let me know the difference between the both please.
Thanks
Hi,
Write-Optimized DSO:
The write-optimized DSO has been designed to be the initial staging layer for source-system data, from where the data can be transferred to a standard DSO or an InfoCube.
The data is immediately available for update to further data targets, so we save the activation time. Reporting is also possible on it.
SAP recommends using the write-optimized DataStore object as an EDW inbound layer and updating the data into further targets such as standard DataStore objects or InfoCubes.
Direct Update DSO:
A DataStore object for direct update contains data in a single version; therefore the data is stored in the same form in which it was written to the DSO by the application. We can use this type of DSO for the APD (Analysis Process Designer).
In Integrated Planning (IP) we can use this DSO and enter data directly using transaction RSINPUT.
Thanks
Madhavi -
Update of write optimized DSO by csv file
Hi Gurus,
I have observed few things while trying to upload a write optimized (WO) DSO from a flat file in BI 7.0. The data flow is as follows:
data source -> transformation -> data target (DSO - WO).
From a test perspective, I loaded 5 records up to the PSA with an InfoPackage and subsequently updated them to the DSO using a DTP. When I check in Manage, I find records transferred = 5 and added = 5, which is okay.
Then I loaded the same 5 records to the PSA again and triggered the DTP. Now the DTP brings 10 records (5 + 5): records transferred = 10 and added = 10. When I checked the header tab in the DTP monitor, I found the selection brings both PSA request IDs.
Again I loaded the same 5 records to the PSA; a new PSA request ID was generated, and the DTP extracts this PSA ID along with the 2 old ones already transferred. Now records transferred = 10 and added = 15. Why transferred 10? I am getting confused here. I was expecting it to follow the same pattern, i.e. transferred 15 and added 15. There is no routine that generates additional records, so that should not be possible at all. Has anybody observed this strange behaviour?
Why is this happening? I was expecting it to bring only 5 records every time. Is this something specific to write-optimized DSOs, or am I doing something wrong? Is there any setting to tell it not to select the old PSA requests that were already updated to the DSO?
Please clarify.
Soumya
Message was edited by:
Soumya Mishra
Is your DTP full or delta?
Here is some interesting discussion.
/thread/348238 [original link is broken]
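The behaviour described above matches a full DTP. As a rough sketch (plain Python with illustrative names, not a DTP API): a full DTP re-reads every PSA request on each run, while a delta DTP transfers only the requests it has not yet processed.

```python
def dtp_extract(psa_requests, transferred, mode):
    """Return the number of records picked up in this run."""
    if mode == "full":
        picked = list(psa_requests)                  # all PSA requests, every run
    else:  # "delta"
        picked = [r for r in psa_requests if r not in transferred]
    transferred.update(picked)
    return sum(count for _, count in picked)

done = set()
psa = [("PSA_1", 5)]
assert dtp_extract(psa, done, "delta") == 5      # first run: 5 records
psa.append(("PSA_2", 5))
assert dtp_extract(psa, done, "delta") == 5      # delta: only the new 5
assert dtp_extract(psa, set(), "full") == 10     # full: re-reads all 10
```

In other words, switching the DTP extraction mode to delta would make each run pick up only the newest PSA request.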