4 LSA architecture - EDW layer - write optimized DSO settings

Dear Colleagues,
I have a question regarding the LSA architecture described in the following article:
http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/306f254c-1e3f-2d10-9da0-bcff4e35e0ef
When we activate an SAP Business Content data flow and want to add write-optimized DSOs in the EDW layer for a faster upload from the source system into SAP BI, should we always check the "do not check uniqueness of data" setting in the write-optimized DSO to avoid data upload errors?
Based on your experience, what would be your recommendation?
Cheers,

I would suggest setting it to ON:
- If the indicator is ON, the DSO will accept several records with the same semantic key and pass them on to the next layer.
- If the indicator is OFF, the same record cannot be loaded twice, and the load throws an error.
Have you seen this?
/people/martin.mouilpadeti/blog/2007/08/24/sap-netweaver-70-bi-new-datastore-write-optimized-dso
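To make the difference concrete, here is a small model in Python (illustrative only, not SAP code): with the indicator ON the uniqueness check is skipped and duplicates with the same semantic key are kept; with it OFF the unique index KEY on the semantic key rejects the second record.

class WriteOptimizedDSO:
    def __init__(self, skip_uniqueness_check):
        # skip_uniqueness_check mirrors the "do not check uniqueness of data" indicator
        self.skip_uniqueness_check = skip_uniqueness_check
        self.rows = []          # active table
        self.record_no = 0

    def load(self, request_id, semantic_key, amount):
        if not self.skip_uniqueness_check and any(r["key"] == semantic_key for r in self.rows):
            raise ValueError(f"Key violation: semantic key {semantic_key} already exists")
        self.record_no += 1
        # the technical key (request / data package / record number) is always unique
        self.rows.append({"request": request_id, "datapakid": 1,
                          "record": self.record_no, "key": semantic_key, "amount": amount})

# Indicator ON: duplicates are accepted and can be handled in the next layer.
dso_on = WriteOptimizedDSO(skip_uniqueness_check=True)
dso_on.load("REQ1", ("DOC1", "10"), 50)
dso_on.load("REQ2", ("DOC1", "10"), 70)      # kept as a second record

# Indicator OFF: the same semantic key cannot be loaded twice.
dso_off = WriteOptimizedDSO(skip_uniqueness_check=False)
dso_off.load("REQ1", ("DOC1", "10"), 50)
try:
    dso_off.load("REQ2", ("DOC1", "10"), 70)
except ValueError as err:
    print(err)                               # this is the duplicate-record error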
Edited by: Srinivas on Aug 24, 2010 2:16 PM

Similar Messages

  • Delta loading procedure from Write Optimized DSO to Infocube

    Hi All,
    We are using a write-optimized DSO in our project, to which I am loading data via the Standard DSO 0FI_GL_12.
    From the write-optimized DSO we are loading delta records into an InfoCube. Please provide your inputs on the following questions:
    1) I would like to know how delta records get loaded into the InfoCube when we use a write-optimized DSO, since there is no image concept in a write-optimized DSO.
    For example, with a Standard DSO we have a change log table, and the image concept allows the updated value to reach the cube.
    let us assume
    Active Table
    111            50
    111            70 (overwrite)
    Change Log Table
    111            -50        (X -- Before Image)
    111             70    ( '  ' -- After Image) symbol for after image is 'Space'
    So if we load this as a delta to the target, the two records above from the change log table get loaded into the CUBE, and the cube ends up with 70 as the updated value.
    If I am using a write-optimized DSO:
    Active Table
    111            50
    111            70 (overwrite)
    When these records are loaded into the cube, since the InfoCube is always additive, the total value will be 50 + 70 = 120, which is wrong.
    Please correct me: which feature will give the updated value of 70 in the cube from the write-optimized DSO?
    2) As the DataSource is delta-capable and has an ADDITIVE delta process, are only the delta records (based on request ID) loaded into the InfoCube with the updated key figure value?
    Thanks for your inputs; much appreciated.
    Regards,
    Madhu

    Hi Madhu,
    As a best practice, we use a WO-DSO in the initial layer and then a Standard DSO, purely for mass data load/staging purposes.
    Best practice: DataSource ----> WO-DSO ----> Standard DSO
    In your case: DataSource ----> Standard DSO ----> WO-DSO
    In both cases, if the data load design is not accurate, your cube will end up with incorrect entries.
    For example, today 9 AM: 111, 50 (in the active table).
    Data load to the cube the same day at 11 AM: the cube will then contain 111, 50.
    Same day, the value changes in the Standard DSO at 1 PM: 111, 70 (overwrite in the active table).
    If you load data to the cube the same day or the next day, it will have two records, one with value 50 and the other with 70. To avoid such scenarios we should plan the loads carefully, or change the DTP setting so that the source is the change log table.
    Coming to your case:
    After the load to the Standard DSO, load data to the WO-DSO with the DTP setting 'Delta Init. Extraction from' set to Change Log.
    The data in the WO-DSO then comes from the change log table, and you can load it to the cube in delta mode.
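    To make the arithmetic explicit, here is a small illustration in Python (not SAP code, just Madhu's numbers):

    cube_value = 0

    # Standard DSO path: the delta comes from the change log as before/after images.
    cube_value += 50            # first activation: after image +50 -> cube = 50
    cube_value += -50 + 70      # overwrite 50 -> 70: before image -50, after image +70
    print(cube_value)           # 70, the updated value

    # Write-optimized DSO path: no change log, both rows are treated as new records.
    cube_value = 0
    cube_value += 50 + 70       # both records added to the additive InfoCube
    print(cube_value)           # 120, the double count described in the question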

  • Write-optimized DSO to Standard DSO loading by delta request by request is not working

    Hi All,
    We have a data flow with a write-optimized DSO in the first layer, from which we load data into a Standard DSO.
    We are using a delta DTP with the option "Get All New Data Request By Request", but it is not working.
    If I have 2 requests that need to be updated from the write-optimized DSO to the Standard DSO, they are updated in a single request. We are on SAP BW 7.01, patch level 005.
    Please suggest.

    Hi Prasanna,
    For reasons of downward compatibility, the system behaves differently for DTPs that were created before SAP NetWeaver 7.0 Support Package Stack 13.
    DTPs for which the indicator is set only get the first new request, even if there is more than one new source request at the time of processing. This restricts the way in which these DTPs can be used in process chains, because requests accumulate in the source and the target may not contain the current data. The Retrieve Until No More New Data indicator is therefore displayed for these DTPs.
    I suggest you set this indicator and activate the DTP.
    If you set the Retrieve Until No More New Data indicator and then activate the DTP, once it completes processing, a DTP request checks whether there are any further requests in the source. If the source contains more requests, a new DTP request is automatically generated and processed. Once the DTP is activated, the indicator is no longer visible in DTP maintenance.
    This applies when the DTP is started by a process chain and also when you start the DTP directly from DTP maintenance.
    The indicator is set by default for new DTPs that get data request by request.
    If you do not select the indicator, the label for the Get All New Data Request By Request indicator changes to Get One Request Only. Once the DTP is activated, only this indicator is displayed in DTP maintenance.
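    As a rough illustration of what the indicator changes (Python sketch only, not the actual DTP implementation):

    def run_dtp(source_requests, retrieve_until_no_more_new_data):
        # source_requests: requests in the source not yet transferred to the target
        processed = []
        while source_requests:
            processed.append(source_requests.pop(0))   # one DTP request per source request
            if not retrieve_until_no_more_new_data:
                break      # "Get One Request Only": stop after the first new request
            # otherwise a new DTP request is generated automatically and the loop continues
        return processed

    print(run_dtp(["REQ1", "REQ2", "REQ3"], True))    # ['REQ1', 'REQ2', 'REQ3']
    print(run_dtp(["REQ1", "REQ2", "REQ3"], False))   # ['REQ1'] - the rest accumulate in the source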
    Regards,
    Sudheer.

  • Multiple data loads in PSA with write optimized DSO objects

    Dear all,
    Could someone tell me how to deal with this situation?
    We are using write-optimized DSO objects in our staging area. These DSOs are filled with full loads from a BOB SAP environment.
    The content of these DSO objects is deleted before loading, but we would like to keep the data in the PSA for error tracking and resolution. This also gives us the opportunity to see the differences between two data loads.
    For normal operation, the most recent package in the PSA should be loaded into these DSO objects (as in normal data staging in BW 3.5 and before).
    As far as we can see, it is not possible to load only the most recent data into the staging layer. This causes duplicate record errors when there are multiple data loads in the PSA.
    We already tried the "all new records" option in the DTP, but that only loads the oldest data package and does not process the newer PSA loads.
    Does any of you have a solution for this?
    Thanks in advance.
    Harald

    Hi Ajax,
    I did think about this, but it is more of a workaround. Call me naive, but it should work as it did in BW 3.5!
    The proposed solution will require a lot of maintenance afterwards. Besides that, you also get a problem with PSA IDs after they have changed: if you use the option to delete the content of a PSA table via the process chain, it will fail when the DataSource is changed, due to a newly generated PSA table ID.
    Regards,
    Harald

  • Standard DSO - Write Optimized DSO, key violation, same semantic key

    Hello everybody,
    I'm trying to load a write-optimized DSO from another Standard DSO, and the "famous" error is raised:
    During loading, there was a key violation. You tried to save more than
    one data record with the same semantic key.
    The problematic (newly loaded) data record has the following properties:
    o   DataStore object: ZSD_O09
    o   Request: DTPR_D7YTSFRQ9F7JFINY43QSH1FJ1
    o   Data package: 000001
    o   Data record number: 28474
    I've seen many previous posts regarding the same issue, but none quite like mine:
    [During loading, there was a key violation. You tried to save more than]
    [Duplicate data records at dtp]
    ...each of them suggests making changes to the semantic key. Here's my particular context:
    Dataflow goes: ZSD_o08 (Standard DSO) -> ZSD_o09 (Write-Optimized DSO)
    ZSD_o08 Semantic Keys:
    SK1
    SK2
    SK3
    ZSD_o09 Semantic Keys:
    SK1
    SK2
    SK3
    SK4 (value is taken in a routine as SY-DATUM-1)
    As far as I can see there are no repeated records for the semantic keys in ZSD_o08; this is confirmed by querying the active data table of the ZSD_o08 ODS. Looking at the temporary storage of the crashed DTP for the specific package that raised the error, I can't see anything odd either.
    Let's assume the semantic key must remain as it is currently set.
    Could you please advise? I look forward to your quick response. Thank you and best regards,
    Bernardo

    Hi  Bernardo:
    By maintaining the settings on your DTP you can indicate whether data should be extracted from the active table or the change log table, as described below.
    >-Double click on the DTP that transfers the data from the Standard DSO to the Write Optimized DSO and click on the "Extraction" Tab, on the group at the bottom select one of the 4 options:
    >Active Table (With Archive)
    >Active Table (Without Archive)
    >Active Table (Full Extraction Only)
    >Change Log
    >Hit the F1 key to access the documentation
    >
    >===================================================================
    >Indicator: Extract from Online Database
    >The settings in the group frame Extraction From... or Delta Extraction From... of the Data Transfer Process maintenance specify the source from which the data of the DTP is extracted.  For a full DTP, these settings apply to all requests started by the DTP. For a delta DTP, the settings only apply to the first request (delta initialization), since because of the delta logic, the following requests must all be extracted from the change log.
    >For Extraction from the DataStore Object, you have the following options:
    >Active Table (with Archive)
    >The data is read from the active table and from the archive or from a near-line storage if one exists. You can choose this option even if there is no active data archiving process yet for the DataStore object.
    >Active Table (Without Archive)
    >The data is only read from the active table. If there is data in the archive or in a near-line storage at the time of extraction, this data is not extracted.
    >Archive (Only Full Extraction)
    >The data is only read from the archive or from a near-line storage. Data is not extracted from the active table.
    >Change Log
    >The data is read from the change log of the DataStore object.
    >For Extraction from the InfoCube, you have the following options:
    >InfoCube Tables
    >Data is only extracted from the database (E table and F table and aggregates).
    >Archive (Only Full Extraction)
    >The data is only read from the archive or from a near-line storage.
    Have you modified the default settings on the DTP? How is the DTP configured right now (or how was it configured before your testing)?
    Hope this helps,
    Francisco Milán.

  • Write-Optimized DSO data deletion

    Hello All,
    We have a requirement to delete old data from a write-optimized DSO. The specific requirement is to delete data older than 15 days (requests older than 15 days) from the DSO. I could not find any process type that could be used in process chains to automatically delete the old data based on our settings (similar to the one used for PSA or change log deletion). Have any of you come across a process type or a batch job that can be scheduled to delete old data from a write-optimized DSO? Your help is much appreciated.
    Thanks,
    Veera

    Hi Nagesh,
    Thanks for your answer. However, the use of these function modules would still require custom development to identify the requests to be deleted. We are trying to see whether SAP has any standard process for deleting old requests from a write-optimized DSO, or whether anyone has achieved this with a minimal amount of customization.
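    To illustrate the kind of selection logic such a custom step would need (Python sketch only; the request list and the deletion call are placeholders, not SAP function modules):

    from datetime import date, timedelta

    # placeholder: assume this list comes from the request administration of the WO-DSO
    requests = [
        {"request_id": "REQ_A", "load_date": date(2010, 7, 1)},
        {"request_id": "REQ_B", "load_date": date.today() - timedelta(days=3)},
    ]

    cutoff = date.today() - timedelta(days=15)
    for req in requests:
        if req["load_date"] < cutoff:
            print("would delete", req["request_id"])   # replace with the actual deletion step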
    Thanks,
    Veera

  • Testing Delta Consistency for write-optimized DSO

    Hi,
    Please let me know how to test delta consistency for a write-optimized DSO.
    Is it just a matter of checking whether the "Check Delta Consistency" checkbox is set in the settings, or is there more to it?
    Please let me know.
    Thanks & Regards,
    Lavanya.

    Hi,
    The delta consistency flag for a W-O DSO is used to ensure data consistency between the W-O DSO and any target it is propagated to. For example, you have the data flow ZWDSO1 -> ZSDSO1, and ZWDSO1 has the flag 'Check Delta Consistency' set to ON. You load data from ZWDSO1 to ZSDSO1 via request 1.
    When the load is successful and you try to delete request 1 from ZWDSO1, deletion should not be possible; you will also not be able to change the flag to OFF, because the delta has already been propagated and changing the flag could break delta consistency.
    Now, if you first delete request 1 from ZSDSO1 and then try to delete request 1 from ZWDSO1, deletion should be possible.
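    A small model of this behaviour (Python, purely illustrative; the object names are taken from the example above):

    propagated = {"REQ1": {"ZSDSO1"}}   # requests loaded from ZWDSO1 and the targets they reached
    check_delta_consistency = True

    def delete_from_target(request, target):
        propagated.get(request, set()).discard(target)

    def delete_from_wodso(request):
        if check_delta_consistency and propagated.get(request):
            raise RuntimeError(request + " already propagated; deletion not possible")
        propagated.pop(request, None)
        print(request + " deleted from ZWDSO1")

    try:
        delete_from_wodso("REQ1")         # blocked: request 1 still exists in ZSDSO1
    except RuntimeError as err:
        print(err)

    delete_from_target("REQ1", "ZSDSO1")  # delete the request from the target first
    delete_from_wodso("REQ1")             # now deletion from the WO-DSO is possible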
    Hope the explanation helps!
    Regards,
    Rakesh

  • Load performance Write-Optimized DSO

    Dear all,
    I'm looking for practical tips on improving load performance from the PSA to a write-optimized DSO (41M records) via a DTP.
    All parameters that could be tweaked (e.g. packet size, batch jobs, the uniqueness-of-data flag, etc.) have been checked and optimized.
    However, this load remains extremely slow (the init load from the source system into BW is faster).
    The BW system runs on SP18.
    Please share all your tips our recommendations for us to solve this major issue.
    Your help is much appreciated!
    Thanks
    JvB

    Hi JvB,
    Here are some options I can think of; I will add more if I remember anything else:
    1) Increase the number of parallel processes in the DTP, i.e. DTP -> Goto -> Settings for Batch Monitor -> Number of parallel processes, to 4 or 5.
    2) Note 409641 - Examples of packet size dependency on ROIDOCPRMS
    The general formula for data transfer is:
    packet size = MAXSIZE * 1000 / transfer structure size
    but not more than MAXLINES,
    i.e. if MAXLINES is smaller than the result of the formula, only MAXLINES records are transferred into BW (see the small calculation sketch at the end of this reply).
    3) Check whether there are any locks or deadlocks in ST04, and check the system log in SM21; it will show you all the details. Also look for short dumps in ST22; there may be some.
    4) Check and analyze the DSO in RSRV. If there are any issues, repair them.
    5) If you are loading huge volumes of data, split the records with filters in the DTP, e.g. plant, calendar year or calendar month selections.
    Also try partitioning, creating indexes/secondary indexes, or archiving old data.
    Check these links for further reference:
    /people/martin.mouilpadeti/blog/2007/08/24/sap-netweaver-70-bi-new-datastore-write-optimized-dso
    /message/2987899#2987899 [original link is broken]
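    As referenced in point 2, here is the packet size formula as a small calculation (Python, illustrative numbers only; check your own ROIDOCPRMS values):

    def packet_size(maxsize_kb, transfer_structure_bytes, maxlines):
        # packet size = MAXSIZE * 1000 / transfer structure size, capped at MAXLINES
        return min(maxsize_kb * 1000 // transfer_structure_bytes, maxlines)

    print(packet_size(20000, 500, 50000))   # 40000 records per packet
    print(packet_size(20000, 500, 30000))   # 30000 - MAXLINES caps the packet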
    Good Luck.
    Regards
    Satish Arra
    Edited by: Satish Arra on Feb 7, 2009 9:28 PM

  • Error while deleting data from a write optimized DSO using a Process Chain?

    Dear BWers,
    I am facing a strange error while using process chains to delete data from a data target which is a write-optimized DSO. The process shows as failed, but the data does get deleted in the DSO. The error message is below. Did anybody have a similar problem? Any suggestions?
    Thanks
    Raj
    Error while deleting content of InfoCube/DataStore object ZLSD_G03
    Message no. RSDODSO153

    Please check whether you get any short dump in ST22 related to this issue.

  • Changes to write optimized DSO containing huge amount of data

    Hi Experts,
    We have appended two new fields to a DSO containing a huge amount of data (the new InfoObjects are an amount and a currency).
    We are able to make the changes in Development (with the DSO containing data). But when we try to transport the changes to our QA system, the transport hangs. The transport triggers a job which fills up the logs, so we have to kill the job, which aborts the transport.
    Has anyone of you had the same experience? Do we need to empty the DSO so that we can transport successfully? We really don't want to empty the DSOs, as it will take time to reload them.
    Any help?
    Thank you very much for your help.
    Best regards,
    Rose

    Emptying the DSO should not be necessary, not for a normal DSO and not for a write-optimized DSO.
    What is in the logs? Some sort of conversions for all the records?
    Marco

  • Duplicate Semantic Key in Write Optimized DSO

    Gurus
    When a write-optimized DSO is used, the system only generates a unique index KEY on the semantic key fields if the uniqueness of data is checked; with the Do not check uniqueness of data indicator set, no such index exists.
    See help https://help.sap.com/saphelp_crm60/helpdata/en/a6/1205406640c442e10000000a1550b0/frameset.htm
    This means that the DSO can contain duplicate records.
    My question is: What happens to these duplicates when a request level delta update is done to a Standard DSO or Infocube?
    Do the duplicates end up in the error stack? Or are they simply aggregated in further loads? That would be a problem for reporting (double counting).
    thanks
    tony

    Hi Tony,
    It will aggregate the data in some undesired way.
    Read on...
    https://help.sap.com/saphelp_crm60/helpdata/en/b6/de1c42128a5733e10000000a155106/frameset.htm
    If you want to use write-optimized DataStore objects in BEx queries, we recommend that they have a semantic key and that you run a check to ensure that the data is unique. In this case, the write-optimized DataStore object behaves like a standard DataStore object. If the DataStore object does not have these properties, unexpected results may be produced when the data is aggregated in the query.
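    A quick illustration of the double-counting risk (Python, made-up records, not SAP code):

    # Two loads of the same logical record sit side by side in the WO-DSO,
    # because there is no overwrite and (without the uniqueness check) no key violation.
    wo_dso = [
        {"doc": "1000", "item": "10", "amount": 50},   # first load
        {"doc": "1000", "item": "10", "amount": 50},   # same record loaded again
    ]

    # A query or an additive load simply sums the key figure per semantic key:
    total = sum(r["amount"] for r in wo_dso)
    print(total)   # 100 instead of 50 - the duplicate is aggregated, not filtered out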
    Hope it helps...
    Regards,
    Ashish

  • Problem loading data into write optimized dso.....

    Hi ,
    I am having a problem loading data from the PSA to a write-optimized DSO.
    I have changed a normal DSO into a write-optimized DSO. I have 2 DataSources to be loaded into the write-optimized DSO: one for demand and one for inventory.
    Loading the demand data from the PSA to the DSO works fine, without any error.
    Now, while loading the inventory data from the PSA to the DSO, I get the errors below:
    "Data Structures were changed. Start Transaction before hand"
    Exception CX_RS_FAILED logged
    I have tried reactivating the DSO, transformation and DTP and loading the data into the write-optimized DSO again, but no luck; I always get the above error message.
    Can someone please suggest what could be done to avoid the above error messages and load the data successfully?
    Thanks in advance.

    Hi,
    Check the transformations: are any routines written there?
    Check the data structures of your cube and DataSource as well.
    Are there any changes to the structure?
    Data will normally load fine up to the PSA; if the structure of the DSO has changed, this error may occur, so please check that.
    Check the blog below:
    /people/martin.mouilpadeti/blog/2007/08/24/sap-netweaver-70-bi-new-datastore-write-optimized-dso
    Let us know the status.
    Reg
    Pra

  • Missing PARTNO field in Write Optimized DSO

    Hi,
    I have a write-optimized DSO for which the partition has been deleted (reason unknown) in the Dev system.
    For the same DSO, partition parameters exist in QA and Production.
    Now, while transporting this DSO to QA, I am getting the error "Old key field PARTNO has been deleted", and the DSO could not be activated in the target system.
    Please let me know how I can re-insert this technical key PARTNO in my DSO.
    I presume it has something to do with the partitioning of the DSO.
    Please help.

    Hi,
    Since the write-optimized DataStore object only consists of the table of active data, you do not have to activate the data, as is necessary with the standard DataStore object. This means that you can process data more quickly.
    The loaded data is not aggregated; the history of the data is retained. If two data records with the same logical key are extracted from the source, both records are saved in the DataStore object. The record mode responsible for aggregation remains, however, so that the aggregation of data can take place later in standard DataStore objects.
    The system generates a unique technical key for the write-optimized DataStore object. The standard key fields are not necessary with this type of DataStore object. If standard key fields exist anyway, they are called semantic keys so that they can be distinguished from the technical keys. The technical key consists of the Request GUID field (0REQUEST), the Data Package field (0DATAPAKID) and the Data Record Number field (0RECORD). Only new data records are loaded to this key.
    You can specify that you do not want to run a check to ensure that the data is unique. If you do not check the uniqueness of the data, the DataStore object table may contain several records with the same key. If you do not set this indicator, and you do check the uniqueness of the data, the system generates a unique index in the semantic key of the InfoObject. This index has the technical name "KEY". Since write-optimized DataStore objects do not have a change log, the system does not create delta (in the sense of a before image and an after image). When you update data into the connected InfoProviders, the system only updates the requests that have not yet been posted.
    PS: Excerpt from http://help.sap.com/saphelp_nw2004s/helpdata/en/b6/de1c42128a5733e10000000a155106/frameset.htm
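    A minimal model of the technical key described above (Python, purely illustrative; only 0REQUEST, 0DATAPAKID and 0RECORD are real field names from the help text):

    import uuid

    active_table = []

    def load_request(records, data_package=1):
        request_guid = uuid.uuid4().hex.upper()        # stands in for 0REQUEST
        for record_no, rec in enumerate(records, 1):   # 0RECORD
            row = {"0REQUEST": request_guid, "0DATAPAKID": data_package, "0RECORD": record_no}
            row.update(rec)                            # semantic key fields + key figures
            active_table.append(row)

    # Two records with the same logical (semantic) key stay as two rows,
    # because the technical key is unique for every record:
    load_request([{"DOC": "4711", "AMOUNT": 50}])
    load_request([{"DOC": "4711", "AMOUNT": 70}])
    print(len(active_table))   # 2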
    Hope this helps.
    Best Regards,
    Rajani

  • Semantic keys for write optimized DSO

    Hi experts,
    Can anyone tell me more about the semantic key in a write-optimized DSO?
    I have a DSO concerning sales orders with 3 DataSources.
    How should the semantic key be defined?
    I have schedule line number, document number and position number in the semantic key, but when I load from the PSA, I get an error about duplicate data.
    Any clues ?
    Thanks.

    Hi Oliver,
    If you specify characteristics as semantic fields, then when an error record arrives during a load to the DSO, all subsequent records with the same characteristic (semantic) combination will not be updated into the DSO even if they are correct; they are written to the error stack to ensure data quality.
    In a write-optimized DSO the technical key fields are added automatically; in the semantic fields you specify the characteristics that should act as primary keys (but not exactly).
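    Roughly, the error stack behaviour can be pictured like this (Python sketch, not SAP code; field names are made up):

    def load_with_semantic_group(records, semantic_fields):
        loaded, error_stack, blocked = [], [], set()
        for rec in records:
            key = tuple(rec[f] for f in semantic_fields)
            if rec.get("has_error") or key in blocked:
                error_stack.append(rec)
                blocked.add(key)     # later records with the same combination follow into the stack
            else:
                loaded.append(rec)
        return loaded, error_stack

    records = [
        {"doc": "100", "item": "10", "sched": "1", "has_error": True},
        {"doc": "100", "item": "10", "sched": "2"},   # correct, but same doc/item -> error stack
        {"doc": "200", "item": "10", "sched": "1"},   # unaffected
    ]
    ok, errors = load_with_semantic_group(records, ("doc", "item"))
    print(len(ok), len(errors))   # 1 2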
    Thanks
    Sreekanth.

  • Changing of Write Optimized DSO to Standard DSO

    Dear Experts,
    I have created a few write-optimized (WO) DSOs based on the requirements, and I have also created a few reports on these WO DSOs.
    The problem is that when I create an InfoSet on a WO DSO, a Standard DSO and a cube (3 InfoProviders in total in this InfoSet), it throws an error when I display data from that InfoSet.
    I found out that the problem is with the WO DSO, so I want to change this WO DSO to a Standard DSO.
    FYI, we are in the Development stage only.
    If I copy the DSO and make the copy a Standard DSO, the reports I created on the original will be disturbed, so I am looking for an alternative solution.
    Regards,
    Phani.
    Edited by: Sai Phani on Nov 12, 2010 5:25 PM

    Hi Sai
    Write-optimized DSOs always help with optimal data load performance; I am sure you created the WO DSO to take advantage of exactly that.
    However, WO DSOs are not suitable when it comes to reporting.
    So instead of converting the WO DSO to a Standard DSO (where you would lose the data load performance advantage), why don't you connect the WO DSO to a Standard DSO, i.e. extract data from the WO DSO into a Standard DSO and use the Standard DSO for reporting? This would give you the benefit during data loads as well as reporting.
    Cheers
    Umesh
