Semantic keys for a write-optimized DSO

Hi experts,
Can anyone tell me more about the semantic key in a write-optimized DSO?
I have a DSO concerning sales orders with 3 DataSources.
How should the semantic key be defined?
I have schedule line number, document number and position number as the semantic key, but when I load from the PSA I get a duplicate data error.
Any clues?
Thanks.

Hi Oliver,
If you specify characteristics as semantic fields, then when an error record occurs during the load, all subsequent records with the same characteristic (semantic) combination are not updated into the DSO even if they are correct; they are written to the error stack to ensure data quality.
In a write-optimized DSO the technical key fields are generated automatically; in the semantic fields you specify the characteristics that should act as primary keys (roughly speaking).
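As a quick diagnostic (a minimal sketch, not from this thread: the table name /BIC/AZSALES00 and the field names DOC_NUMBER, S_ORD_ITEM and SCHED_LINE are placeholders for your own write-optimized DSO and its semantic key), you can list the semantic-key combinations that are stored more than once in the active table with a grouped SELECT:

* placeholder table /BIC/AZSALES00 = active table of the write-optimized DSO;
* replace the field names and lengths with your own semantic key fields
DATA: BEGIN OF ls_dup,
        doc_number TYPE c LENGTH 10,   " sales document number
        s_ord_item TYPE n LENGTH 6,    " position (item) number
        sched_line TYPE n LENGTH 4,    " schedule line number
        cnt        TYPE i,
      END OF ls_dup,
      lt_dup LIKE STANDARD TABLE OF ls_dup.

SELECT doc_number s_ord_item sched_line COUNT( * ) AS cnt
  FROM /bic/azsales00
  INTO CORRESPONDING FIELDS OF TABLE lt_dup
  GROUP BY doc_number s_ord_item sched_line
  HAVING COUNT( * ) > 1.

Such duplicates can only exist if the uniqueness check is (or was) switched off; when the check is active, the duplicate data error appears as soon as a load delivers a combination that already exists in the table or repeats within the same load.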
Thanks
Sreekanth.

Similar Messages

  • Duplicate Semantic Key in Write Optimized DSO

    Gurus
    When a write-optimized DSO is used, the semantic key fields get a unique index named KEY (assuming, of course, that the "Do not check uniqueness of data" indicator is not checked).
    See help https://help.sap.com/saphelp_crm60/helpdata/en/a6/1205406640c442e10000000a1550b0/frameset.htm
    If that indicator is checked, however, the DSO can contain duplicate records.
    My question is: what happens to these duplicates when a request-level delta update is done to a standard DSO or InfoCube?
    Do the duplicates end up in the error stack? Or are they simply aggregated in further loads? That would be a problem for reporting (double counting).
    thanks
    tony

    Hi Tony,
    The data will get aggregated in the further load, possibly in an undesired way.
    Read on...
    https://help.sap.com/saphelp_crm60/helpdata/en/b6/de1c42128a5733e10000000a155106/frameset.htm
    If you want to use write-optimized DataStore objects in BEx queries, we recommend that they have a semantic key and that you run a check to ensure that the data is unique. In this case, the write-optimized DataStore object behaves like a standard DataStore object. If the DataStore object does not have these properties, unexpected results may be produced when the data is aggregated in the query.
    Hope it helps...
    Regards,
    Ashish

  • Data archiving for Write Optimized DSO

    Hi Gurus,
    I am trying to archive data in a write-optimized DSO.
    It allows me to archive on a request basis, but it archives entire requests in the DSO (i.e. all the data).
    I want to archive from one chosen request to another (a selection of requests of my own).
    Please guide me.
    I found the details below on SDN; kindly check.
    Archiving for write-optimized DSOs follows request-based archiving, as opposed to the time-slice archiving of standard DSOs. This means that partial request archiving is not possible; only complete requests can be archived.
    The characteristic for the time slice can be a time characteristic present in the write-optimized DSO, or the request creation date/request loading date. You are not allowed to add additional InfoObjects for semantic groups; the default is 0REQUEST & 0DATAPAKID.
    The actual process of archiving remains the same, i.e.:
    Create a Data Archiving Process
    Create and schedule archiving requests
    Restore archiving requests (optional)
    Regards,
    kiruthika

    Hi,
    Please check the OSS Note below:
    http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes/sdn_oss_bw_whm/~form/handler%7b5f4150503d3030323030363832353030303030303031393732265f4556454e543d444953504c4159265f4e4e554d3d31313338303830%7d
    -Vikram

  • Testing Delta Consistency for write-optimized DSO

    Hi,
    Please let me know how to carry out testing of delta consistency for a write-optimized DSO.
    Is it just a matter of checking whether the "Check Delta Consistency" checkbox is set in the settings, or is there more to it?
    Please do let me know.
    Thanks & Regards,
    Lavanya.

    Hi,
    The delta consistency flag for a write-optimized DSO is used to ensure data consistency between the W-O DSO and any target it is propagated to. For example, suppose you have the dataflow ZWDSO1 -> ZSDSO1, ZWDSO1 has the flag 'Check Delta Consistency' set to ON, and you load data from ZWDSO1 to ZSDSO1 via request 1.
    Once the load is successful, if you try to delete request 1 from ZWDSO1, the deletion should not be possible; you will also not be able to change the flag to OFF, because the delta has already been propagated and changing the flag could break delta consistency.
    Now, if you delete request 1 from ZSDSO1 first and then try to delete request 1 from ZWDSO1, the deletion should be possible.
    Hope the explanation helps!
    Regards,
    Rakesh

  • Duplicate Error while loading data to Write Optimized DSO

    Hi,
    When I do a data load for the write-optimized DSO, I get the error "Duplicate Data Record Detected". I have Sales Document Number, Fiscal Year Variant & Billing Item as the semantic key in the DSO.
    For this DSO, I am getting data from a test ECC system, in which the Sales Document Number column is mostly blank for this DataSource.
    When I go into the error stack of the DSO, all the rows with a blank Sales Document Number are displayed. For all these rows, the Item Number is 10.
    Am I getting this duplicate error because the Sales Document Number is blank and the Item Number is 10 for all of them? I read in other threads that a write-optimized DSO doesn't care about the key values and loads the data even if the key values are the same.
    Any help is highly appreciated.
    Regards,
    Murali

    Hi Murali,
    Is the Item Number a key field?
    When all the key fields are the same, the data gets aggregated depending on the key figure setting in the transformation. The two options for key figures are:
    1. Add up the key figures
    2. Replace the key figure
    Since the Sales Document No. is blank and the Item Number is the same, there is a possibility that the key figures for these records get added up or replaced, and because of this property of the standard DSO it might not throw an error.
    Check the key figure value in the standard DSO for that Sales Document No. and Item No. and try to find out what the value is: it may be the sum over all the records with that key, or the key figure value of the last such record.
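    For illustration (values invented, not from this thread): suppose two records arrive with the same key, i.e. a blank Sales Document Number and Item Number 10, one with amount 100 and one with amount 50. With "Add up the key figures" the stored value becomes 150 (a double-counting risk); with "Replace the key figure" it becomes 50, because the last record wins.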
    Regards
    Raj Rai

  • Unable to delete request from write-optimized DSO (Error during rollback)

    Hi Gurus,
    I am trying to delete a delta request from a Write-Optimized DSO. This request was uploaded with a DTP from another Write-optimized DSO.
    The actual overall status of the request is RED and the description of that status is now: 'Error during rollback of request DTPR_4JW6NLSVDUNYY3GTD6F4DQJWR; only rollback allowed'.
    I checked the log of all request operations in the DataStore (from the same line where the red request is now) and I see several attempts to delete this request under a RED radio button with the title Rollback. The details for this error are the following:
    Could not delete request data from active table
    Message no. RSODSO_ROLLBACK114
    Diagnosis
    The system could not delete the request data from the active table of a write-optimized DataStore object.
    System Response
    Write-optimized DataStore object: DTFISO02
    Active table: /BIC/ADTFISO0200
    Request: DTPR_4JW6NLSVDUNYY3GTD6F4DQJWR
    Procedure
    Search for Notes containing the key words "Delete write-optimized DSO PSA"
    I am relatively new to SAP BI 7.0 and I do not know how to delete this request. Any help will be highly appreciated!
    Leticia

    Hi Leticia:
    Take a look at the SAP Notes below.
    Note 1111065 - "701: Delta consistency check for write-optimized DSOs"
    Note 1263877 - "70SP20: Delta consistency check for write-optimized DSOs"
    Note 1125025 - "P17:PSA:DSO:ODSR missing in PSA process for write-opt. DSO"
    Additionally, some ideas from the alternative approach presented in the blog by KMR might help you:
    "How to generate a selective deletion program for info provider"
    Regards,
    Francisco Mílán.

  • Write-Optimized DSO Problem

    Hi Friends,
    I have to create a process chain for a write-optimized DSO. I know that every day we have to delete the data from both the data target and the PSA, but I am confused about how to set up the daily deletion of PSA data for my DataSource.
    Can anybody help me with this issue?
    Thanks and Regards
    pedamarla

    Hi,
    While creating the process chain, select the PSA deletion process type ("Delete PSA"), enter the DataSource name and schedule it for a daily run. After that, run the load through the process chain.
    Cheers...
    Puneesh

  • Standard DSO - Write Optimized DSO, key violation, same semantic key

    Hello everybody,
    I'm trying to load a write-optimized DSO from a standard DSO, and then the "famous" error is raised:
    During loading, there was a key violation. You tried to save more than
    one data record with the same semantic key.
    The problematic (newly loaded) data record has the following properties:
    o   DataStore object: ZSD_O09
    o   Request: DTPR_D7YTSFRQ9F7JFINY43QSH1FJ1
    o   Data package: 000001
    o   Data record number: 28474
    I've seen many previous posts regarding the same issue, but none quite the same as mine:
    [During loading, there was a key violation. You tried to save more than]
    [Duplicate data records at dtp]
    ...each of them suggests making some changes to the semantic key. Here's my particular context:
    Dataflow goes: ZSD_o08 (Standard DSO) -> ZSD_o09 (Write-Optimized DSO)
    ZSD_o08 Semantic Keys:
    SK1
    SK2
    SK3
    ZSD_o09 Semantic Keys:
    SK1
    SK2
    SK3
    SK4 (value is taken in a routine as SY-DATUM-1)
    As far as I can see there are no repeated semantic-key records in ZSD_o08; this is confirmed by querying the active data table of the ZSD_o08 ODS. Looking at the temporary storage of the failed DTP for the data package in error, I cannot see anything odd either.
    Let's assume that the semantic key has to stay exactly as it is currently set.
    Could you please advise? I look forward to your quick response. Thank you and best regards,
    Bernardo

    Hi Bernardo:
    By maintaining the settings of your DTP you can indicate whether data should be extracted from the active table or the change log table, as described below.
    >-Double-click the DTP that transfers the data from the standard DSO to the write-optimized DSO, click on the "Extraction" tab, and in the group at the bottom select one of the 4 options:
    >Active Table (With Archive)
    >Active Table (Without Archive)
    >Archive (Full Extraction Only)
    >Change Log
    >Hit the F1 key to access the documentation
    >
    >===================================================================
    >Indicator: Extract from Online Database
    >The settings in the group frame Extraction From... or Delta Extraction From... of the Data Transfer Process maintenance specify the source from which the data of the DTP is extracted.  For a full DTP, these settings apply to all requests started by the DTP. For a delta DTP, the settings only apply to the first request (delta initialization), since because of the delta logic, the following requests must all be extracted from the change log.
    >For Extraction from the DataStore Object, you have the following options:
    >Active Table (with Archive)
    >The data is read from the active table and from the archive or from a near-line storage if one exists. You can choose this option even if there is no active data archiving process yet for the DataStore object.
    >Active Table (Without Archive)
    >The data is only read from the active table. If there is data in the archive or in a near-line storage at the time of extraction, this data is not extracted.
    >Archive (Only Full Extraction)
    >The data is only read from the archive or from a near-line storage. Data is not extracted from the active table.
    >Change Log
    >The data is read from the change log of the DataStore object.
    >For Extraction from the InfoCube, you have the following options:
    >InfoCube Tables
    >Data is only extracted from the database (E table and F table and aggregates).
    >Archive (Only Full Extraction)
    >The data is only read from the archive or from a near-line storage.
    Have you modified the default settings of the DTP? How is the DTP configured right now? (Or how was it configured before your testing?)
    Hope this helps,
    Francisco Milán.

  • Semantic group for a write-optimized DSO

    Hello,
    I am loading from a DataSource to a write-optimized DSO. While doing so, I am not able to set a semantic key/group for the DTP. I have turned error handling on using the setting 'Valid records update, no reporting (request RED)'.
    Is this setting not possible for a write-optimized DSO? If not, why not? And if it is, what am I missing?
    Thanks a lot in advance,
    Prakash

    Thank you,
    here are my findings:
    - if data has already been loaded with delta using the DTP, it is not possible to change the semantic groups any more; all requests have to be deleted first
    - the error handling must be turned on
    The issue is resolved.
    Marek

  • Can a write-optimized DSO be used for Delta upload

    Hi,
    can any one please answer following..
    1. Can a write-optimized DSO be used for delta upload?
    2. Is industry-based content available in BI Content?
    Thanks & Regards
    Satya

    Hi,
    A write-optimized DataStore object does not support image-based delta; it supports request-level delta, and you get a brand-new delta request for each data load.
    Since write-optimized DataStore objects do not have a change log, the system does not create delta (in the sense of a before image and an after image). When you update data into the connected InfoProviders, the system only updates the requests that have not yet been posted.
    Write-optimized DataStore objects support request-level delta. In order to capture before- and after-image deltas, you have to post the latest request to further targets such as standard DataStore objects or InfoCubes.

  • Error occurs while activating a 'Write Optimized' DSO.

    I am getting the error "There is no PSA for InfoSource 'XXXX' and source system 'XXX'" while activating a newly defined DSO object.
    I am able to activate standard DSOs; however, the error occurs while activating a write-optimized DSO.

    Hi,
    For a write-optimized DSO, check whether you have ticked the uniqueness-of-data check. If it is set and two identical records come from the source in one load, you will get an error.
    From SAP help
    You can specify that you do not want to run a check to ensure that the data is unique. If you do not check the uniqueness of the data, the DataStore object table may contain several records with the same key. If you do not set this indicator, and you do check the uniqueness of the data, the system generates a unique index in the semantic key of the InfoObject. This index has the technical name "KEY". Since write-optimized DataStore objects do not have a change log, the system does not create delta (in the sense of a before image and an after image). When you update data into the connected InfoProviders, the system only updates the requests that have not yet been posted.
    Thanks
    Srikanth

  • Missing PARTNO field in Write Optimized DSO

    Hi,
    I have a write-optimized DSO for which the partition has been deleted (reason unknown) in the Dev system.
    For the same DSO, partition parameters exist in QA and production.
    Now while transporting this DSO to QA, I am getting the error "Old key field PARTNO has been deleted", and the DSO could not be activated in the target system.
    Please let me know how I can re-insert this technical key field PARTNO in my DSO.
    I presume it has something to do with the partitioning of the DSO.
    Please Help.......

    Hi,
    Since the write-optimized DataStore object only consists of the table of active data, you do not have to activate the data, as is necessary with the standard DataStore object. This means that you can process data more quickly.
    The loaded data is not aggregated; the history of the data is retained. If two data records with the same logical key are extracted from the source, both records are saved in the DataStore object. The record mode responsible for aggregation remains, however, so that the aggregation of data can take place later in standard DataStore objects.
    The system generates a unique technical key for the write-optimized DataStore object. The standard key fields are not necessary with this type of DataStore object. If standard key fields exist anyway, they are called semantic keys so that they can be distinguished from the technical keys. The technical key consists of the Request GUID field (0REQUEST), the Data Package field (0DATAPAKID) and the Data Record Number field (0RECORD). Only new data records are loaded to this key.
    You can specify that you do not want to run a check to ensure that the data is unique. If you do not check the uniqueness of the data, the DataStore object table may contain several records with the same key. If you do not set this indicator, and you do check the uniqueness of the data, the system generates a unique index in the semantic key of the InfoObject. This index has the technical name "KEY". Since write-optimized DataStore objects do not have a change log, the system does not create delta (in the sense of a before image and an after image). When you update data into the connected InfoProviders, the system only updates the requests that have not yet been posted.
    PS: Excerpt from http://help.sap.com/saphelp_nw2004s/helpdata/en/b6/de1c42128a5733e10000000a155106/frameset.htm
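    To picture this, the active table of a write-optimized DSO looks roughly as follows (illustrative layout only; technical field names simplified and data invented):
    REQUEST    DATAPAKID  RECORD | DOC_NUMBER  ITEM | AMOUNT
    REQU_A...  000001     1      | 4711        10   | 100
    REQU_B...  000001     1      | 4711        10   |  50
    Both rows share the same semantic key (4711 / 10) but differ in the technical key, so both are stored.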
    Hope this helps.
    Best Regards,
    Rajani

  • Error during loading and deletion of write-optimized DSO

    Hey guys,
    I am using a write-optimized DSO, ZMYDSO, to store data from several sources (two DataSources and one DSO).
    I have disabled the uniqueness check in the DSO, but I have defined a semantic key on the fields ZCLIENT, ZGUID, ZSOURCE and ZPOSID, which are used in a non-unique index.
    In this case, I want to delete existing rows in the DSO. I execute these steps in the end routine. Here is the abstract coding:
    LOOP AT RESULT_PACKAGE ASSIGNING <RESULT_FIELDS>.
      "some other logic [...]
      DELETE FROM /BIC/AZMYDSO00
        WHERE /BIC/ZCLIENT = <RESULT_FIELDS>-/BIC/ZCLIENT
          AND /BIC/ZGUID   = <RESULT_FIELDS>-/BIC/ZGUID
          AND /BIC/ZSOURCE = <RESULT_FIELDS>-/BIC/ZSOURCE
          AND /BIC/ZPOSID  = <RESULT_FIELDS>-/BIC/ZPOSID.
    ENDLOOP.
    COMMIT WORK AND WAIT.
    During the loading (after the transformation step, in the update step), I get the following messages (not every time):
    1. Error while writing the data (RSAODS131).
    2. Could not save DataPackage xy in DataStore ZMYDSO (RSODSO_UPDATE027).
    Diagnosis: DataPackage XY could not be saved. Reasons could be a violation of key uniqueness (duplicate data) or a general database error.
    3. Error in the substep of updating the DataStore.
    I have checked the system log (SM21) and the system dumps (ST22) but I could not find an exact error description.
    I guess I am creating some inconsistencies or locks (I also checked SM12), so that the load process is interrupted. I also tried serial updating within the DTP (I reduced the number of batch processes to 1), but with no success.
    Perhaps the loading of one specific package takes longer, so that the following package overtakes its predecessor. Could that be a problem? Do you generally advise against deleting rows within the end routine?
    Regards,
    Philipp

    Hi,
    Is ZMYDSO the name of the DSO?
    And is this the end routine of the transformation that loads that same DSO?
    If so, that is something we never do.
    You are comparing the DSO with the data that is flowing in and then deleting the data from the DSO,
    which doesn't really make sense: while data is being loaded into a DSO (or a cube, or any table), the DSO (or cube) is locked exclusively against modifications; you can only read data from it.
    If your requirement is that records which already exist in the DSO should not arrive again, you can delete them from SOURCE_PACKAGE in the start routine, like below:
    * INTERNAL_TABLE and WORK_AREA are declared like the active table /BIC/AZMYDSO00;
    * <FIELDS> and <CONDITION> are placeholders for your own field list and selection
    SELECT <FIELDS> FROM /BIC/AZMYDSO00 INTO TABLE INTERNAL_TABLE WHERE <CONDITION>.
    LOOP AT INTERNAL_TABLE INTO WORK_AREA.
      DELETE SOURCE_PACKAGE
        WHERE /BIC/ZCLIENT = WORK_AREA-/BIC/ZCLIENT
          AND /BIC/ZGUID   = WORK_AREA-/BIC/ZGUID
          AND /BIC/ZSOURCE = WORK_AREA-/BIC/ZSOURCE
          AND /BIC/ZPOSID  = WORK_AREA-/BIC/ZPOSID.
    ENDLOOP.
    Or, if your requirement is to delete the old data from the DSO for the keys that are newly arriving, in order to load the new data into the DSO, you could do something like this in the start routine (note that a start routine works on SOURCE_PACKAGE, not RESULT_PACKAGE):
    * make sure SOURCE_PACKAGE is not empty before FOR ALL ENTRIES,
    * otherwise the SELECT would read the whole table
    IF SOURCE_PACKAGE IS NOT INITIAL.
      SELECT <FIELDS> FROM /BIC/AZMYDSO00 INTO TABLE INTERNAL_TABLE
        FOR ALL ENTRIES IN SOURCE_PACKAGE
        WHERE /BIC/ZCLIENT = SOURCE_PACKAGE-/BIC/ZCLIENT
          AND /BIC/ZGUID   = SOURCE_PACKAGE-/BIC/ZGUID
          AND /BIC/ZSOURCE = SOURCE_PACKAGE-/BIC/ZSOURCE
          AND /BIC/ZPOSID  = SOURCE_PACKAGE-/BIC/ZPOSID.
    ENDIF.
    * now derive the new values you want to write in the loop
    LOOP AT INTERNAL_TABLE INTO WORK_AREA.
      "code for manipulation of WORK_AREA
      "write a MODIFY statement to update the matching rows of SOURCE_PACKAGE
      MODIFY SOURCE_PACKAGE FROM WORK_AREA TRANSPORTING <FIELDS>
        WHERE /BIC/ZCLIENT = WORK_AREA-/BIC/ZCLIENT
          AND /BIC/ZGUID   = WORK_AREA-/BIC/ZGUID
          AND /BIC/ZSOURCE = WORK_AREA-/BIC/ZSOURCE
          AND /BIC/ZPOSID  = WORK_AREA-/BIC/ZPOSID.
    ENDLOOP.
    hope it helps,
    Regards,
    Joe

  • Write-Optimized DSO Activation Issue

    Hi Experts,
    Whenever I activate the (write-optimized) DSO, I get an error like
    "No PSA for InfoSource and source system in BI 7.0".
    Can you please provide a solution for this issue?
    Considerations:
    1. This write-optimized DSO doesn't contain any semantic keys; all fields are taken as data fields.
    2. I tried checking and unchecking the uniqueness-of-data checkbox.
    Thanking You,
    R.Dama

    Hi Experts,
    I know that only the active data table is available in a write-optimized DSO. I am not trying to add an activation step to a process chain.
    And I am not getting this error at data load time.
    What I mean is: the problem occurs while creating the write-optimized DSO, when the DSO object itself needs to be activated.
    The error appears in this initial step while creating the write-optimized DSO.
    Please help me and clarify why I am getting that error message.

  • 4 LSA architecture - EDW layer - write optimized DSO settings

    Dear Colleagues,
    I have a question regarding the 4 LSA architecture, which is described in the following article:
    http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/306f254c-1e3f-2d10-9da0-bcff4e35e0ef
    Suppose we activate an SAP Business Content dataflow and want to add write-optimized DSOs in the EDW layer to get a faster upload from the source system into SAP BI.
    Should we always check the "Do not check uniqueness of data" setting in the write-optimized DSO to avoid data upload errors?
    Based on your experience, what would be your recommendation?
    Cheers,

    I would suggest setting the check to ON:
    - If the check is ON, it allows loading several records with the same semantic key, which are then handled in the next layer.
    - If the check is OFF, it does not allow loading the same record twice and throws an error.
    Have you seen this?
    /people/martin.mouilpadeti/blog/2007/08/24/sap-netweaver-70-bi-new-datastore-write-optimized-dso
    Edited by: Srinivas on Aug 24, 2010 2:16 PM
