2LIS_04_P_ARBPL and Write Optimized

Hi All,
I need to load 2LIS_04_P_ARBPL into a write-optimized DSO for performance reasons, but this DataSource delivers a reversal indicator, which a write-optimized DSO does not handle. A standard DSO would take care of 0RECORDMODE, but a write-optimized DSO does not.
Is there any alternative to work around this?
Thanks,
Alex.

Closing this; it cannot be done.
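For reference, one workaround that is sometimes used in this situation (it is not confirmed anywhere in this thread) is to evaluate the reversal indicator directly in the transformation, since a write-optimized DSO simply stores what it receives and ignores 0RECORDMODE. A minimal end-routine sketch, assuming the source field ROCANCEL is mapped through to the target structure and ZQUANTITY is a hypothetical key figure:

LOOP AT RESULT_PACKAGE ASSIGNING <RESULT_FIELDS>.
  " Negate the key figure for reversal records, because the
  " write-optimized DSO will not apply 0RECORDMODE itself.
  " ROCANCEL and /BIC/ZQUANTITY are assumptions; adjust to the real structure.
  IF <RESULT_FIELDS>-ROCANCEL = 'X'.
    <RESULT_FIELDS>-/BIC/ZQUANTITY = <RESULT_FIELDS>-/BIC/ZQUANTITY * -1.
  ENDIF.
ENDLOOP.

Whether this is acceptable depends on the downstream consumers; the records are still stored twice (original and reversal), they are only sign-corrected.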

Similar Messages

  • What is the main difference of Direct update DSO and Write optimized DSO

    What is the main difference between a direct update DSO and a write-optimized DSO?

    Hi chandra:
      Check this link.
    http://help.sap.com/saphelp_nw04s/helpdata/en/f9/45503c242b4a67e10000000a114084/content.htm
    You can find another difference on page 147, section "Reclustering DataStore Objects" of the document "Enterprise Data Warehousing"
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/67efb9bb-0601-0010-f7a2-b582e94bcf8a?quicklink=index&overridelayout=true
    >You can only use reclustering for standard DataStore objects and DataStore objects for direct update. You cannot use reclustering for write-optimized DataStore objects. User-defined multidimensional clustering is not available for write-optimized DataStore objects
    Regards,
    Francisco Milán.
    Edited by: Francisco Milan on Aug 11, 2010 7:09 PM

  • Error during loading and deletion of write-optimized DSO

    Hey guys,
    I am using a write optimized DSO ZMYDSO to store data from several sources (two datasources and one DSO).
    I have disabled the uniqueness check in the DSO, but I defined a semantic key on the fields ZCLIENT, ZGUID, ZSOURCE and ZPOSID, which are used in a non-unique index.
    In the given case I want to delete existing rows in the DSO. I execute these steps in the end routine. Here is the abstract coding:
    LOOP AT RESULT_PACKAGE ASSIGNING <RESULT_FIELDS>.
      " some other logic [...]
      DELETE FROM /BIC/AZMYDSO00
        WHERE /BIC/ZCLIENT = <RESULT_FIELDS>-/BIC/ZCLIENT
          AND /BIC/ZGUID   = <RESULT_FIELDS>-/BIC/ZGUID
          AND /BIC/ZSOURCE = <RESULT_FIELDS>-/BIC/ZSOURCE
          AND /BIC/ZPOSID  = <RESULT_FIELDS>-/BIC/ZPOSID.
    ENDLOOP.
    COMMIT WORK AND WAIT.
    During the loading (after the transformation step, in the update step) I get these messages (not every time):
    1.     Error while writing the data. (RSAODS131)
    2.     Could not Save DataPackage xy in DataStore ZMYDSO (RSODSO_UPDATE027).
    Diagnosis: DataPackage XY could not be saved. Reasons therefore could be violation of key uniqueness (duplicate data) or general database error.
    3.     Error in the substep of updating DataStore.
    I have checked the system log (SM21) and the system dumps (ST22) but I could not find an exact error description.
    I guess I am creating some inconsistencies or locks (I also checked SM12) so that the load process is interrupted. I also tried serial updating within the DTP (I reduced the number of batch processes to 1). No success.
    Perhaps the loading of one specific package takes longer, so that the following package overtakes its predecessor. Could that be a problem? Do you generally advise against deleting rows within the end routine?
    Regards,
    Philipp

    Hi,
    is ZMYDSO the name of the DSO?
    And is this the end routine of the transformation while loading the same DSO?
    If so, we never do such a thing.
    You are comparing the DSO with the data that is flowing in and then deleting data from the DSO, which doesn't actually make sense: while data is being loaded into a DSO (or a cube, or any table), the DSO (or cube) is locked exclusively against modifications. You can only read data from it.
    If your requirement is that existing duplicate records should not arrive in the DSO, you can delete the data from SOURCE_PACKAGE in the start routine, like below:
    SELECT <fields> FROM /BIC/AZMYDSO00 INTO TABLE INTERNAL_TABLE WHERE <condition>.
    LOOP AT INTERNAL_TABLE INTO WORK_AREA.
      DELETE SOURCE_PACKAGE
        WHERE /BIC/ZCLIENT = WORK_AREA-/BIC/ZCLIENT
          AND /BIC/ZGUID   = WORK_AREA-/BIC/ZGUID
          AND /BIC/ZSOURCE = WORK_AREA-/BIC/ZSOURCE
          AND /BIC/ZPOSID  = WORK_AREA-/BIC/ZPOSID.
    ENDLOOP.
    Or, if your requirement is to replace the old data in the DSO for keys that are arriving again, so that the new data can be loaded, you could do something like this in the start routine:
    SELECT <fields> FROM /BIC/AZMYDSO00 INTO TABLE INTERNAL_TABLE
      FOR ALL ENTRIES IN SOURCE_PACKAGE
      WHERE /BIC/ZCLIENT = SOURCE_PACKAGE-/BIC/ZCLIENT
        AND /BIC/ZGUID   = SOURCE_PACKAGE-/BIC/ZGUID
        AND /BIC/ZSOURCE = SOURCE_PACKAGE-/BIC/ZSOURCE
        AND /BIC/ZPOSID  = SOURCE_PACKAGE-/BIC/ZPOSID.
    " Now update the new values you want to write in the loop.
    LOOP AT INTERNAL_TABLE INTO WORK_AREA.
      " code for manipulation of WORK_AREA
      " write a MODIFY statement to update SOURCE_PACKAGE (this is a start routine)
      MODIFY SOURCE_PACKAGE FROM WORK_AREA TRANSPORTING <fields> WHERE <key condition>.
    ENDLOOP.
    hope it helps,
    Regards,
    Joe

  • Difference between Write Optimized and Direct Update DSO

    Hi Gurus,
    I know the similarities of the Write Optimized and Direct Update DSO.
    But I want to know what is the difference between the both.
    Can any expert let me know the difference between the both please.
    Thanks

    Hi,
    Write-optimized DSO:
    The write-optimized DSO is designed as the initial staging of source system data, from where the data can be transferred to a standard DSO or an InfoCube.
    The data can be written immediately to the further data targets, and we save the activation time. Reporting is also possible on it.
    SAP recommends using a write-optimized DataStore object as the EDW inbound layer and updating the data into further targets such as standard DataStore objects or InfoCubes.
    Direct update DSO:
    A DataStore object for direct update contains data in a single version; the data is therefore stored in the same form in which it was written to the DSO by the application. We can use this type of DSO for the APD.
    In Integrated Planning (IP) we can use this DSO, and we can enter data into it directly using RSINPUT.
    Thanks
    Madhavi

  • Error occurs while activating a 'Write Optimized' DSO.

    I am getting the error "There is no PSA for InfoSource 'XXXX' and source system 'XXX'" while activating a newly defined DSO object.
    I am able to activate standard DSOs; the error only occurs while activating a write-optimized DSO.

    Hi,
    For a write-optimized DSO, check whether you have ticked the uniqueness check for the records. If that check is set and two identical records come from the source in one load, you will get an error.
    From SAP help
    You can specify that you do not want to run a check to ensure that the data is unique. If you do not check the uniqueness of the data, the DataStore object table may contain several records with the same key. If you do not set this indicator, and you do check the uniqueness of the data, the system generates a unique index in the semantic key of the InfoObject. This index has the technical name "KEY". Since write-optimized DataStore objects do not have a change log, the system does not create delta (in the sense of a before image and an after image). When you update data into the connected InfoProviders, the system only updates the requests that have not yet been posted.
    Thanks
    Srikanth
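    If the uniqueness check has to stay switched on and the source can deliver duplicates, one common remedy (not mentioned in this thread; the field names below are placeholders) is to drop the duplicates on the semantic key in the start routine before the package is written:

    " Minimal sketch: remove duplicates on the semantic key so the
    " write-optimized DSO's unique "KEY" index is not violated.
    " /BIC/ZDOCNO and /BIC/ZPOSNO are hypothetical semantic key fields.
    SORT SOURCE_PACKAGE BY /BIC/ZDOCNO /BIC/ZPOSNO.
    DELETE ADJACENT DUPLICATES FROM SOURCE_PACKAGE
      COMPARING /BIC/ZDOCNO /BIC/ZPOSNO.

    Note that this silently discards data, so it is only appropriate when the duplicates really are repetitions of the same record.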

  • Changes to write optimized DSO containing huge amount of data

    Hi Experts,
    We have appended two new fields to a DSO containing a huge amount of data (the new InfoObjects are an amount and a currency).
    We were able to make the changes in development (with the DSO containing data). But when we tried to
    transport the changes to our QA system, the transport hung. The transport triggers a job which
    filled up the logs, so we had to kill the job, which aborted the transport.
    Has anyone of you had the same experience? Do we need to empty the DSO so that we can transport
    successfully? We really don't want to empty the DSOs, as reloading them will take time.
    Any help?
    Thank you very much for your help.
    Best regards,
    Rose

    Emptying the DSO should not be necessary, not for a normal DSO and not for a write-optimized DSO.
    What exactly is in the logs; some sort of conversion for all the records?
    Marco

  • Write optimized DSO to Standard DSO loading by delta request by request is not working

    Hi All,
    We have a data flow whose first layer is a write-optimized DSO, and from there we load the data to a standard DSO.
    We are using a delta DTP with the option "Get All New Data Request By Request", but it is not working.
    For example, if I have 2 requests that need to be updated to the standard DSO from the write-optimized DSO, they are updated in a single request. We are on SAP BW 7.01, patch level 005.
    Please suggest ....

    Hi Prasanna,
    For reasons of downward compatibility, the system behaves differently for DTPs that were created before SAP NetWeaver 7.0 Support Package Stack 13:
    DTPs for which the indicator is set only get the first new request, even if there is more than one new source request at the time of processing. This restricts the way in which these DTPs can be used in process chains, because requests accumulate in the source and the target may not contain the current data. The Retrieve Until No More New Data indicator is therefore displayed for these DTPs.
    I suggest you set this indicator and activate the DTP.
    If you set the Retrieve Until No More New Data indicator and then activate the DTP, once it completes processing, a DTP request checks whether there are any further requests in the source. If the source contains more requests, a new DTP request is automatically generated and processed. Once the DTP is activated, the indicator is no longer visible in DTP maintenance.
    This applies when the DTP is started by a process chain and also when you start the DTP directly from DTP maintenance.
    The indicator is set by default for new DTPs that get data request by request.
    If you do not select the indicator, the label for the Get All New Data Request By Request indicator changes to Get One Request Only. Once the DTP is activated, only this indicator is displayed in DTP maintenance.
    Regards,
    Sudheer.

  • Error while loading data from write optimized ODS to cube

    Hi All,
    I am loading data from a write-optimized ODS to a cube.
    I generated the export DataSource and
    scheduled the InfoPackage with one selection for a full load.
    It then gave me the following errors in Transfer IDocs & TRFC:
    Info IDOC 1: IDOC with errors added
    Info IDOC 2: IDOC with errors added
    Info IDOC 3: IDOC with errors added
    Info IDOC 4: IDOC with errors added
    Data package 1 arrived in BW; Processing: selected number does not agree with transferred number
    Processing below that is green and
    shows an update of 4 new records for data package 1.
    Please provide input for the resolution.
    Thanks & Regards,
    Rashmi.

    Please let me know what more details you need.
    If I press F1 on the error I get the following message:
    Messages from source system
    see also Processing Steps Request
    These messages are sent by IDoc from the source system. Both the extractor itself as well as the service API can send messages. When errors occur, several messages are usually sent together.
    From the source system, there are several types of messages that can be differentiated by the so-called Info-IDoc-Status. The IDoc with status 2 plays a particular role here; it describes the number of records that have been extracted in a source system and sent to BI. The number of the records received in BI is checked against this information.
    Thanks & Regards,
    Rashmi.

  • Multiple data loads in PSA with write optimized DSO objects

    Dear all,
    Could someone tell me how to deal with this situation?
    We are using write-optimized DSO objects in our staging area. These DSOs are filled with full loads from a BOB SAP environment.
    The content of these DSO objects is deleted before loading, but we would like to keep the data in the PSA for error tracking and resolution. This also gives us the opportunity to see the differences between two data loads.
    For normal operation, the most recent package in the PSA should be loaded into these DSO objects (as with normal data staging in BW 3.5 and before).
    As far as we can see, it is not possible to load only the most recent data into the staging layer. This causes duplicate record errors when there are several data loads in the PSA.
    We already tried the DTP option "all new records", but that only loads the oldest data package and does not process the newer PSA loads.
    Does any of you have a solution for this?
    Thanks in advance.
    Harald

    Hi Ajax,
    I did think about this, but it is more of a workaround. Call me naive, but it should work as it did in BW 3.5!
    The proposed solution will require a lot of maintenance afterwards. Besides that, you also get a problem with changing PSA IDs: if you use the option to delete the content of a PSA table via a process chain, it will fail once the DataSource is changed, because a new PSA table ID is generated.
    Regards,
    Harald

  • 4 LSA architecture - EDW layer - write optimized DSO settings

    Dear Colleagues,
    I have a question regarding the 4 LSA architecture, available on the following article.
    http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/306f254c-1e3f-2d10-9da0-bcff4e35e0ef
    When we activate an SAP Business Content data flow and want to add a write-optimized DSO to the EDW layer to get a faster upload from the source system into SAP BI,
    should we always set the "do not check uniqueness of data" flag in the write-optimized DSO to avoid data load errors?
    Based on your experience, what would be your recommendation?
    Cheers,

    I would suggest switching the check ON:
    - If the check is ON, it will allow several records with the same semantic key to be loaded, to be handled in the next layer.
    - If the check is OFF, it will not allow the same record to be loaded twice and throws an error.
    Have you seen this?
    /people/martin.mouilpadeti/blog/2007/08/24/sap-netweaver-70-bi-new-datastore-write-optimized-dso
    Edited by: Srinivas on Aug 24, 2010 2:16 PM

  • Duplicate Semantic Key in Write Optimized DSO

    Gurus
    Duplicate semantic keys have a unique index KEY in the key fields of the DSO when WRITE OPTIMIZED DSO is used. (Of course this is assuming the Do not check uniqueness of data indicator is not checked..)
    See help https://help.sap.com/saphelp_crm60/helpdata/en/a6/1205406640c442e10000000a1550b0/frameset.htm
    This means that the DSO can contain duplicate records.
    My question is: What happens to these duplicates when a request level delta update is done to a Standard DSO or Infocube?
    Do duplicates end up in the error stack? Or are they simply aggregated in further loads? That would be a problem for reporting (double counting).
    thanks
    tony

    Hi Tony,
    It will aggregate the data in some undesired way.
    Read on...
    https://help.sap.com/saphelp_crm60/helpdata/en/b6/de1c42128a5733e10000000a155106/frameset.htm
    If you want to use write-optimized DataStore objects in BEx queries, we recommend that they have a semantic key and that you run a check to ensure that the data is unique. In this case, the write-optimized DataStore object behaves like a standard DataStore object. If the DataStore object does not have these properties, unexpected results may be produced when the data is aggregated in the query.
    Hope it helps...
    Regards,
    Ashish
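    For illustration (hypothetical figures, not from this thread): if the write-optimized DSO holds two rows with the same semantic key, each with an amount of 100, a request-based load into an InfoCube adds both rows, so a query on the cube shows 200; a standard DSO in overwrite mode keeps only one of the two rows, so a query there shows 100. Which result is correct depends on whether the duplicates are accidental repetitions or genuine postings.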

  • Problem loading data into write optimized dso.....

    Hi ,
    I am having a problem loading data from the PSA into a write-optimized DSO.
    I changed a normal DSO into a write-optimized DSO. I have two DataSources to be loaded into the write-optimized DSO: one for demand and one for inventory.
    The loading of the demand data from PSA to DSO works fine without any error.
    But while loading the inventory data from PSA to DSO, I get the errors below:
    "Data Structures were changed. Start Transaction before hand"
    Exception CX_RS_FAILED logged
    I have tried reactivating the DSO, transformation and DTP and loading the data into the write-optimized DSO again, but no luck; I always get the above error message.
    Can someone please suggest what could be done to avoid the above error messages and load the data successfully?
    Thanks in advance.

    Hi,
    Check the transformation: are there any routines written in it?
    Check the data structures of the cube/DSO and the DataSource as well:
    are there any changes to the structure?
    Data will normally load up to the PSA; if there are changes to the structure of the DSO, this error may occur, so please check that.
    Check the blog below:
    /people/martin.mouilpadeti/blog/2007/08/24/sap-netweaver-70-bi-new-datastore-write-optimized-dso
    Let us know the status.
    Reg
    Pra

  • Missing PARTNO field in Write Optimized DSO

    Hi,
    I have a write-optimized DSO for which the partition has been deleted (reason unknown) in the development system.
    For the same DSO, the partition parameters exist in QA and production.
    Now, while transporting this DSO to QA, I am getting the error "Old key field PARTNO has been deleted", and the DSO could not be activated in the target system.
    Please let me know how I can re-insert this technical key PARTNO in my DSO.
    I presume it has something to do with the partitioning of the DSO.
    Please help.

    Hi,
    Since the write-optimized DataStore object only consists of the table of active data, you do not have to activate the data, as is necessary with the standard DataStore object. This means that you can process data more quickly.
    The loaded data is not aggregated; the history of the data is retained. If two data records with the same logical key are extracted from the source, both records are saved in the DataStore object. The record mode responsible for aggregation remains, however, so that the aggregation of data can take place later in standard DataStore objects.
    The system generates a unique technical key for the write-optimized DataStore object. The standard key fields are not necessary with this type of DataStore object. If standard key fields exist anyway, they are called semantic keys so that they can be distinguished from the technical keys. The technical key consists of the Request GUID field (0REQUEST), the Data Package field (0DATAPAKID) and the Data Record Number field (0RECORD). Only new data records are loaded to this key.
    You can specify that you do not want to run a check to ensure that the data is unique. If you do not check the uniqueness of the data, the DataStore object table may contain several records with the same key. If you do not set this indicator, and you do check the uniqueness of the data, the system generates a unique index in the semantic key of the InfoObject. This index has the technical name "KEY". Since write-optimized DataStore objects do not have a change log, the system does not create delta (in the sense of a before image and an after image). When you update data into the connected InfoProviders, the system only updates the requests that have not yet been posted.
    PS: Excerpt from http://help.sap.com/saphelp_nw2004s/helpdata/en/b6/de1c42128a5733e10000000a155106/frameset.htm
    Hope this helps.
    Best Regards,
    Rajani
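    As a small illustration of the technical key described above (a sketch only; the DSO name and the field /BIC/ZDOCNO are hypothetical, and the active-table column names REQUEST, DATAPAKID and RECORD should be verified in SE11 for the concrete object):

    " All stored versions of one document in the active table of a
    " write-optimized DSO; the technical key (request, data package,
    " record number) keeps each row unique even when the semantic
    " key /BIC/ZDOCNO repeats.
    DATA: lt_active TYPE STANDARD TABLE OF /bic/azmywdso00,
          lv_docno  TYPE /bic/oizdocno.
    SELECT * FROM /bic/azmywdso00
      INTO TABLE lt_active
      WHERE /bic/zdocno = lv_docno
      ORDER BY request datapakid record.

    A semantic key value can therefore occur many times; since the DSO never overwrites, you have to decide yourself (for example via the request administration) which request holds the relevant version.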

  • Semantic keys for write optimized DSO

    Hi experts,
    Can anyone tell me more about the semantic key in a write-optimized DSO?
    I have a DSO concerning sales orders with 3 DataSources.
    How can you define the semantic key?
    I have schedule line number, document number and position number in the semantic key, but when I load from the PSA, I get an error about duplicate data.
    Any clues ?
    Thanks.

    Hi Oliver,
    If you specify characteristics as semantic fields, then when an error record occurs during the load, the subsequent records with the same characteristic (semantic) combination will not be updated into the DSO even if they are correct; they are written to the error stack to ensure data quality.
    In a write-optimized DSO the technical key fields are generated automatically; in the semantic fields you then specify the characteristics that should act as primary keys (though not exactly primary keys).
    Thanks
    Sreekanth.

  • Init from Write optimized DSO

    Hello,
    I have a requirement wherein a write-optimized DSO (WDSO) receives every day's delta, on which we apply all the business rules. This delta then needs to be loaded into a cube in the analytical layer, which is in a different system.
    Questions:
    1. Can we do an init without data transfer from the WDSO to the cubes while using update rules from the export DataSource of this WDSO to the cubes, and then run delta loads subsequently?
    2. Can we do an init without data transfer from the WDSO to the cubes while using a DTP from this export DataSource to the cubes in a different system, and then run delta loads subsequently?
    As mentioned, please be aware that source WDSO and the Cubes are in 2 different systems.
    thanks in advance for the help...
    Kumar

    Hello Chetan,
    Really appreciate your help on this.
    Guess I am failing to explain my requirement.
    The process you suggested works fine if it is a full load from the source WDSO to the cubes in the other system, but then I would have to drop and reload the delta in the WDSO every day.
    My actual requirement is to keep adding new delta requests to the WDSO every day and then pull ONLY the delta records (the requests loaded each day) into the cubes in the other system.
    To achieve this we have to load from the WDSO to the PSA in the other system (for the init or delta) and then pull the data into the cubes with a DTP.
    As said earlier, the init from the WDSO to the PSA using an InfoPackage works fine, but no subsequent deltas are pulled even though new records are added to the WDSO.
    Hope it makes sense....
    This makes me wonder whether we can load deltas from a WDSO at all. I appreciate your help.
    thanks
    Kumar
