Transportation optimization in SAP ECC 6.0

Hi,
We have a customer implementing SAP ECC 6.0 together with a third-party transportation planning system (TMS).
They want the TMS to optimize the planning of routes and carriers and then create shipments in SAP ECC 6.0. But I am a bit confused about how this is going to work, because the routes are already determined in the sales orders and deliveries in SAP ECC 6.0. Without rerunning ATP checks for these objects, how can the TMS pick the most cost-effective route?
So our customer wants the TMS to pick the best route for our sales orders. My response to them is that this is not how it works, since there will be no real-time interaction between SAP ECC 6.0 and the TMS (the response time is in three-hour intervals). Instead, they need to pick the route in SAP in the sales order and/or delivery, and send the delivery to the TMS, where the most cost-efficient carrier is picked.
Does anyone have experience from a project or issue like this? Is it possible to have the sales orders and/or deliveries in SAP ECC 6.0 rerouted by a TMS without rerunning ATP (and other functions)?
Thanks in advance,
Ola

Send the deliveries to the TMS and let the TMS create the shipments and the optimized route for delivering them. Once the shipment is set, the TMS sends the information back to SAP and the sales orders/deliveries are updated.
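A minimal sketch of that batch round trip (plain Python, not SAP or TMS code; the delivery numbers, routes, and carrier rate card are all made up for illustration): SAP hands over deliveries whose routes are already fixed, and the TMS only optimizes the carrier choice before sending shipment proposals back.

```python
from dataclasses import dataclass

@dataclass
class Delivery:
    number: str
    route: str        # route is already fixed in the SAP delivery
    weight_kg: float

# Hypothetical carrier rate card per route (what the TMS optimizes over)
RATES = {
    ("R001", "CARRIER_A"): 0.90,   # cost per kg
    ("R001", "CARRIER_B"): 0.75,
    ("R002", "CARRIER_A"): 1.10,
}

def plan_shipments(deliveries):
    """TMS side: group deliveries by route and pick the cheapest carrier.
    The route itself is not changed -- only the carrier is optimized."""
    shipments = {}
    for dlv in deliveries:
        carriers = {c: r for (route, c), r in RATES.items() if route == dlv.route}
        best = min(carriers, key=carriers.get)
        shipments.setdefault((dlv.route, best), []).append(dlv.number)
    return shipments

deliveries = [Delivery("80000001", "R001", 120.0),
              Delivery("80000002", "R001", 80.0),
              Delivery("80000003", "R002", 50.0)]
# Each (route, carrier) key would come back to SAP as one shipment document.
print(plan_shipments(deliveries))
```

This is only meant to show why no ATP rerun is needed in this setup: the TMS never touches the route or the confirmed quantities, it just assigns carriers and groups deliveries into shipments.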
Hope this helps.
Nathan

Similar Messages

  • 2 Models of hitachi HD?

    The only difference between these, other than price, is the "features". Can anyone tell me why the second one costs $40 more, and whether it is worth it?
    Thanks, Harris
    HITACHI Deskstar E7K500 HDS725050KLA360 (0A31619) 500GB 7200 RPM 16MB Cache SATA 3.0Gb/s Hard Drive - OEM
    Features
    • Native Command Queuing (NCQ) for superior performance
    • Rotational Vibration Safeguard for high density storage configurations
    • Iridium manganese chromium head sensor technology for high reliability
    • Low-power mode
    HITACHI Deskstar T7K500 HDT725050VLA360 (0A33437) 500GB 7200 RPM 16MB Cache SATA 3.0Gb/s Hard Drive - OEM
    Features
    • Native Command Queuing for maximum data speeds
    • Thermal Monitoring and Fly Height Control for high reliability
    • SMART Command Transport optimized response times

    Hello! The best information I have is that the E7k drive is an Enterprise grade drive while the T7K is more of a consumer grade drive. The E7K drive has a mean time between failure (MTBF) of 1.5 million hours while no comparable figure is given on the T7K drive. The T7K drive is measured as 50,000 start/stop cycles as is the E7K. Basically the E7K is a commercial grade drive while the T7K is the consumer grade drive. Tom

  • Does TR line card (ASR9K) support H-QoS?

    Hi,
    I have some questions regarding the 2nd-generation line cards for the ASR9k.
    Line Card Types
    The Cisco ASR 9000 Series line cards are available in Service Edge Optimized and Packet Transport Optimized variants.
    • Service Edge Optimized (SE) line cards are designed for customer deployments requiring enhanced quality of service (QoS).
    • Packet Transport Optimized (TR) line cards are designed for network deployments where basic QoS is required.
    Different line card types may be mixed within the same system.
    I already reviewed the ASR9k presentation from Xander Thuijs and found that:
    SE line cards support 256K queues/NP
    TR line cards support 8 queues/port
    I'm not sure whether the TR line cards support H-QoS.
    Could you please give more clarification on this?
    Br,
    Pipatpong

    Here is an example:
    RP/0/RSP0/CPU0:A9K-BNG#show qos int g 0/0/0/0 out
    Sun Oct 13 13:31:19.867 EDT
    Interface: GigabitEthernet0_0_0_0 output
    Bandwidth configured: 200000 kbps Bandwidth programed: 200000 kbps
    ANCP user configured: 0 kbps ANCP programed in HW: 0 kbps
    Port Shaper programed in HW: 200000 kbps
    Policy: xtp Total number of classes: 4
    Level: 0 Policy: xtp Class: class-default
    QueueID: N/A
    Shape CIR : NONE
    Shape PIR Profile : 0/4(S) Scale: 195 PIR: 199680 kbps  PBS: 2496000 bytes
    WFQ Profile: 0/9 Committed Weight: 10 Excess Weight: 10
    Bandwidth: 0 kbps, BW sum for Level 0: 0 kbps, Excess Ratio: 1
    Level: 1 Policy: xt Class: m1
    Parent Policy: xtp Class: class-default
    QueueID: 136 (Priority 1)
    Queue Limit: 4 kbytes Abs-Index: 4 Template: 0 Curve: 6
    Shape CIR Profile: INVALID
    Policer Profile: 53 (Single)
    Conform: 296 kbps (300 kbps) Burst: 3750 bytes (0 Default)
    Child Policer Conform: TX
    Child Policer Exceed: DROP
    Child Policer Violate: DROP
    Level: 1 Policy: xt Class: m2
    Parent Policy: xtp Class: class-default
    QueueID: 137 (Priority 2)
    Queue Limit: 8 kbytes Abs-Index: 6 Template: 0 Curve: 6
    Shape CIR Profile: INVALID
    Policer Profile: 54 (Single)
    Conform: 600 kbps (600 kbps) Burst: 7500 bytes (0 Default)
    Child Policer Conform: TX
    Child Policer Exceed: DROP
    Child Policer Violate: DROP
    Level: 1 Policy: xt Class: class-default
    Parent Policy: xtp Class: class-default
    QueueID: 138 (Priority Normal)
    Queue Limit: 2496 kbytes Abs-Index: 89 Template: 0 Curve: 0
    Shape CIR Profile: INVALID
    WFQ Profile: 0/9 Committed Weight: 10 Excess Weight: 10
    Bandwidth: 0 kbps, BW sum for Level 1: 0 kbps, Excess Ratio: 1
    This class-default will very likely get reused by other interfaces unless you have a "bandwidth remaining (percent)" in there.
    So every time you see a unique QueueID, you burn a queue.
    The reason the parent shaper at level 0 says N/A is that this is a hierarchical policy-map and it shapes the overall child classes, including class-default.
    So in this example, applying this policy-map multiple times will burn queues for class-default.
    In other words, this policy-map example uses 3 queues: one for the shaper (child class-default) and two PQs (priorities 1 and 2).
    policy-map xtp
    class class-default
      service-policy xt
      shape average 200 mbps
    end-policy-map
    RP/0/RSP0/CPU0:A9K-BNG#sh run policy-map xt
    Sun Oct 13 13:34:45.049 EDT
    policy-map xt
    class m1
      priority level 1
      police rate 300 kbps
    class m2
      priority level 2
      police rate 600 kbps
    class class-default
    end-policy-map
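The queue accounting above reduces to simple arithmetic, sketched here in plain Python (not IOS XR; the per-attachment count of 3 comes from the show output above, and treating the 8-queue TR figure as a strict per-port budget is a simplifying assumption):

```python
# Rough bookkeeping for hardware queue consumption (illustrative only).
# Per the show output above, each attachment of policy-map "xtp" burns
# 3 queues: the child class-default shaper plus the two priority queues.
QUEUES_PER_ATTACHMENT = 3
TR_QUEUES_PER_PORT = 8     # TR line card figure quoted in the question

def queues_used(attachments: int) -> int:
    """Queues burned when the same policy-map is applied on n interfaces;
    class-default gets a fresh QueueID per attachment here."""
    return attachments * QUEUES_PER_ATTACHMENT

def fits_tr_budget(attachments: int) -> bool:
    """Simplifying assumption: compare against the 8-queue TR port budget."""
    return queues_used(attachments) <= TR_QUEUES_PER_PORT

print(queues_used(2))      # two attachments of this policy-map
print(fits_tr_budget(3))   # three attachments would exceed the TR budget
```

So on a TR card, a third attachment of this particular hierarchy on the same port would already run past the quoted 8-queue limit, which is why the reuse of class-default queues (or a bandwidth-remaining configuration) matters.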

  • What is the difference between Service Edge Optimized line cards and Packet Transport line cards in the ASR 9000

    Hi Folks,
    What is the deep-dive difference between the Service Edge Optimized cards and the Packet Transport Optimized line cards in the ASR 9000?
    I know that QoS is the difference in general; for example, the Service Edge Optimized cards have advanced QoS features while the Packet Transport line cards have basic QoS features, but I would like to know some more details, please.
    Thanks

    There are four main differences (unfortunately I don't have a reference link).
    L3 Interfaces:
    TR: 8k/LC, SE: 20k/LC
    L2 Interfaces:
    TR: 16k/LC, SE: 64k/LC
    QoS:
    TR: only 8 Queues per port, 8k policers per Network Processor
    SE: 256k Queues per NP, 256k policers per NP
    ACL:
    TR: 24k ACE
    SE: 96k ACE
    The number of FIB/mFIB/MAC entries, L3 VRFs, and PWs available per TR/SE line card is the same.

  • Changes to write optimized DSO containing huge amount of data

    Hi Experts,
    We have appended two new fields to a DSO containing a huge amount of data (the new InfoObjects are amount and currency).
    We are able to make the changes in Development (with the DSO containing data). But when we try to
    transport the changes to our QA system, the transport hangs. The transport triggers a job which
    fills up the logs, so we have to kill the job, which aborts the transport.
    Has anyone of you had the same experience? Do we need to empty the DSO so we can transport
    successfully? We really don't want to empty the DSOs, as reloading them will take time.
    Any help?
    Thank you very much for your help.
    Best regards,
    Rose

    Emptying the DSO should not be necessary, neither for a standard DSO nor for a write-optimized DSO.
    What is in the logs; some sort of conversion of all the records?
    Marco

  • Hyper Transport Sync Flood Error (msi 785gm-e51)

    hello all,
    my pc
    x4-630
    his 5750
    vgen 2gb 1333
    corsair 450w
    I have a problem with my PC: sometimes it reboots while I am playing games. After the reboot there is an error message, "Hyper Transport Sync Flood Error".
    I already googled this problem, and the information I found says it is caused by my motherboard.
    How do I solve this?

    Quote from: Svet on 16-July-10, 17:45:09
    error in memtest or "Hyper Transport Sync Flood Error" ?
    Use normal English, not "r", "u" and such.
    What exactly? brand model?
    Have you tried with different one?
    In which dimm slot is installed?
    done >>Clear CMOS Guide<< with power cord removed,
    and 'Load bios optimized defaults" ?
    The error is the "Hyper Transport Sync Flood Error".
    The brand name is vgen;
    here is the full spec I got from the vgen website:
    DDR3 PC 10600/1333Mhz
    Category
    PC Memory
    Type
    Long-DIMM DDR3
    Size
    130 x 30 x 2 mm
    Status
    New
    Description
    capacity : 2GB
    speed : 1333 MHz
    CAL : CL 9
    Chip Configuration: 32M x 8 8 Chip
    Chipset : Major Brand
    Jenis IC :Tiny BGA
    Slot : DIMM 240 pin
    Tipe : Unbuffered
    Voltage : 1.5 V
    ECC : No
    Registered : No
    I never tried with other RAM.
    Honestly, I don't know what a DIMM slot is, but if I understand you correctly, it is in RAM slot number one.
    Yes, I already cleared the CMOS with the power cord removed, because I was afraid of getting an electric shock (I am not a PC expert).
    Oh, and I found this quote in another forum:
    After reading about 200 threads on this problem, I came to the conclusion that it's a voltage problem. It's not BIOS, an ATI series of video card, faulty CPU or RAM, or a motherboard. The frequency of AMD CPUs being mentioned, power needs, and the ability of BIOS to compensate for voltage changes led me to believe that this is a problem with under-voltage. I don't have the time to experiment with each and every variable, so I upped the voltages in BIOS on all elements of both RAM and the CPU. Before this, I was getting a reboot at random and increasingly irritating times (I do annoying grad homework on this thing). So, after upping the voltages, I haven't had one reboot for HTSFE. If anyone wants to go from here and mess with the individual voltages, go for it. It's definitely a voltage problem. Hope this helps.
    So I tested it: I raised every voltage I could change in the BIOS. So far it has been 2 days without a Hyper Transport error.
    Is it fine to raise all the voltages? I raised them to the highest "white" value (as I read it in the BIOS: grey = auto, white = normal, red = not recommended).

  • How to move a transport request from one client to the other

    Hi,
    I created a new transport request with a lot of elements (and also some modifications), but I made it in the "wrong" client on the system. Now my question is whether there is any possibility to change the client assignment of a transport request via SE01 (or a similar transaction). Copying and pasting the element list isn't optimal, because the transport request also contains table contents.
    Can you give me a hint?
    Thank you
    Kind regards
    Markus

    Yes, you can change the target client.
    Check the transport request properties and set the target client there.

  • Error in FM DDIF_NAMETAB_GET when deleting BI objects via transport request

    Dear all,
    When importing a transport request in which several types of BI objects are deleted (InfoCubes, DSOs, transformations, routines, DTPs, query elements, InfoSets, process chains), the import terminates with return code 12 due to an uncaught exception, but only when transporting from acceptance to production.
    Transporting from development to acceptance did not raise this exception.
    The ST22 dump (see below) refers in the "Contents of system fields" section to a DSO, and to post-import activities. The DSO and the associated tables could not be found (RSA1 & SE16), since they were deleted as intended.
    Some of the relevant notes that I have found do refer to DSO-related problems, but they all recommend installing SP19, which we have already installed (BW 7.0 Patch Level 23).
    Could you please assist in pointing out potential solutions based on the information from the ST22 Runtime Error below?
    Kind regards,
    PJ
    Runtime Errors         UNCAUGHT_EXCEPTION
    Except.                CX_RS_PROGRAM_ERROR
    Date and Time          08.02.2011 10:48:14
    Short text
        An exception occurred that was not caught.
    What happened?
        The exception 'CX_RS_PROGRAM_ERROR' was raised, but it was not caught anywhere
        along the call hierarchy.
        Since exceptions represent error situations and this error was not
        adequately responded to, the running ABAP program
        'CL_RSDD_DS====================CP' has to be
        terminated.
    Error analysis
        An exception occurred which is explained in detail below.
        The exception, which is assigned to class 'CX_RS_PROGRAM_ERROR', was not caught
        and therefore caused a runtime error.
        The reason for the exception is:
        Error in BW: Error in DDIF_NAMETAB_GET
    How to correct the error
        If the error occurs in a non-modified SAP program, you may be able to
        find an interim solution in an SAP Note.
        If you have access to SAP Notes, carry out a search with the following
        keywords:
        "UNCAUGHT_EXCEPTION" "CX_RS_PROGRAM_ERROR"
        "CL_RSDD_DS====================CP" or "CL_RSDD_DS====================CM001"
        "APPEND_DS_TEC_FIELDS"
    System environment
        SAP-Release 700
        Application server... "dp1ci"
        Network address...... "<removed>"
        Operating system..... "HP-UX"
        Release.............. "B.11.23";
        Hardware type........ "ia64"
        Character length.... 16 Bits
        Pointer length....... 64 Bits
        Work process number.. 45
        Shortdump setting.... "full"
        Database server... "spisap02"
        Database type..... "ORACLE"
        Database name..... "DP1"
        Database user ID.. "SAPBIW"
        Terminal................. " "
        Char.set.... "C"
        SAP kernel....... 700
        created (date)... "Dec 14 2009 20:47:35"
        create on........ "HP-UX B.11.23 U ia64"
        Database version. "OCI_102 (10.2.0.1.0) "
        Patch level. 236
        Patch text.. " "
        Database............. "ORACLE 10.1.0.*.*, ORACLE 10.2.0.*.*, ORACLE 11.2.*.*.*"
        SAP database version. 700
        Operating system..... "HP-UX B.11";
        Memory consumption
        Roll.... 5979936
        EM...... 0
        Heap.... 1459888
        Page.... 40960
        MM Used. 6988880
        MM Free. 415648
    User and Transaction
        Client.............. 000
        User................ "DDIC"
        Language key........ "E"
        Transaction......... " "
        Transactions ID..... "4D5111661004210BE10000000AFA2502"
        Program............. "CL_RSDD_DS====================CP"
        Screen.............. "SAPMSSY0 1000"
        Screen line......... 6
    Information on where terminated
        Termination occurred in the ABAP program "CL_RSDD_DS====================CP" -
         in "APPEND_DS_TEC_FIELDS".
        The main program was "RDDEXECU ".
        In the source code you have the termination point in line 61
        of the (Include) program "CL_RSDD_DS====================CM001".
        The program "CL_RSDD_DS====================CP" was started as a background job.
        Job Name....... "RDDEXECL"
        Job Initiator.. "DDIC"
        Job Number..... 10475900
    Source Code Extract
    Line  SourceCde
       31         RAISE EXCEPTION TYPE cx_rs_program_error
       32           EXPORTING
       33             text = 'Invalid DSO subtype'.                   "#EC NOTEXT
       34     ENDCASE.
       35
       36 *   get table name from DDIC
       37     CALL METHOD cl_rsd_odso=>get_tablnm
       38       EXPORTING
       39         i_odsobject = n_infoprov
       40         i_tabt      = l_tab
       41       IMPORTING
       42         e_tablnm    = l_tablnm
       43       EXCEPTIONS
       44         OTHERS      = 1.
       45
       46     IF sy-subrc <> 0.
       47       RAISE EXCEPTION TYPE cx_rs_program_error
       48         EXPORTING
       49           text = 'Error in CL_RSD_ODSO=>get_Tablnm'.        "#EC NOTEXT
       50     ENDIF.
       51
       52     CALL FUNCTION 'DDIF_NAMETAB_GET'
       53       EXPORTING
       54         tabname   = l_tablnm
       55       TABLES
       56         dfies_tab = l_t_comp
       57       EXCEPTIONS
       58         not_found = 1
       59         OTHERS    = 2.
       60     IF sy-subrc <> 0.
    >>>>>       RAISE EXCEPTION TYPE cx_rs_program_error
       62         EXPORTING
       63           text = 'Error in DDIF_NAMETAB_GET'.        "#EC NOTEXT
       64     ENDIF.
       65
       66
       67   ELSE.
       68 *   model table only needed for standard datastore objects, for
       69 *   write optimized DSOs target table is a changelog that is fully described by
       70 *   dta_pro (infoobjects)
       71     CHECK n_s_dta-odsotype = rsdod_c_type-standard.
       72
       73 *   get model table fields to add
       74     CALL METHOD cl_rsd_odso=>get_mod_tab
       75       IMPORTING
       76         e_mod_fld_ur = l_t_comp.
       77   ENDIF.
       78
       79 * according to T.B. ( 13.04.2007) the correct position is
       80 * not needed in the D version
    Contents of system fields
    Name     Val.
    SY-SUBRC 0
    SY-INDEX 0
    SY-TABIX 1
    SY-DBCNT 1
    SY-FDPOS 0
    SY-LSIND 0
    SY-PAGNO 0
    SY-LINNO 1
    SY-COLNO 1
    SY-PFKEY
    SY-UCOMM
    SY-TITLE Execute Post-Import Methods and XPRAs for Transport Request
    SY-MSGTY E
    SY-MSGID DA
    SY-MSGNO 300
    SY-MSGV1 /BIC/AV_AMOFCB40
    SY-MSGV2
    SY-MSGV3
    SY-MSGV4
    SY-MODNO 0
    SY-DATUM 20110208
    SY-UZEIT 104759
    SY-XPROG SAPLSYST
    SY-XFORM SYSTEM_HOOK_OPEN_DATASET

    Hi All
    We are also experiencing this same error when transporting deletions of multiple objects.
    Although the transport was cancelled in the destination system (production) it appears to have largely deleted all objects successfully except for DSOs.
    The DSOs still appear in the table RSDODSO (via SE11) but are not visible anywhere else.  When you try to view the DSO via RSDODS a warning appears saying "DataStore object to be deleted by transport -> delete only allowed".  If you try to delete them in this transaction the same ABAP runtime error as the transport appears.
    Any assistance would be greatly appreciated!
    Regards
    David

  • Edit Problem after changing and transporting a table view

    Hello Experts,
    I hope you can help me.
    I changed a Z-table in the Data Dictionary. For this table there exists a table view.
    After I added three new fields, I generated a new table maintenance dialog:
    Utilities -> Table Maintenance Generator
    Change -> Create maintenance screen -> OK
    After this I optimized the layout of the screen.
    On our development system my table view works perfectly;
    I can make changes in the table.
    Then I transported the new table view to our test system.
    There I started SM30 ("Maintain Table Views") and tried to change table entries.
    After a click on "Maintain", the following message appears:
    TK 730 Changes to Repository or cross-client Customizing are not permitted.
    The strangest thing is that it worked before my transport; I could change table entries.
    There were no changes to the user rights,
    and I can change a similar table on the test system.
    I compared the properties of both tables but could not find a difference.
    Does anybody have an idea where the error could be?

    Hi
    Caution: The table is cross-client
    Message no. TB113
    Diagnosis
    You are in the process of maintaining a cross-client table. You are using the standard table maintenance screen here, which is frequently used to maintain client-specific Customizing entries. This message serves to warn you that each change you make will have an effect on all other clients in the system.
    System response
    You have the authorization required for cross-client maintenance.
    Procedure
    Think carefully about the effect of your changes. Due to logical dependencies between client-specific Customizing and applications data on the one hand and cross-client Customizing data on the other hand, changing or deleting cross-client data could result in inconsistencies.
    Therefore such changes should be carefully planned and coordinated, taking all the clients in the system into consideration.

  • Missing PARTNO field in Write Optimized DSO

    Hi,
    I have a write-optimized DSO for which the partition has been deleted (reason unknown) in the Dev system.
    For the same DSO, partition parameters exist in QA and production.
    Now, while transporting this DSO to QA, I am getting the error "Old key field PARTNO has been deleted", and the DSO could not be activated in the target system.
    Please let me know how I can re-insert this technical key PARTNO into my DSO.
    I am presuming it has something to do with the partitioning of the DSO.
    Please help.

    Hi,
    Since the write-optimized DataStore object only consists of the table of active data, you do not have to activate the data, as is necessary with the standard DataStore object. This means that you can process data more quickly.
    The loaded data is not aggregated; the history of the data is retained. If two data records with the same logical key are extracted from the source, both records are saved in the DataStore object. The record mode responsible for aggregation remains, however, so that the aggregation of data can take place later in standard DataStore objects.
    The system generates a unique technical key for the write-optimized DataStore object. The standard key fields are not necessary with this type of DataStore object. If standard key fields exist anyway, they are called semantic keys so that they can be distinguished from the technical keys. The technical key consists of the Request GUID field (0REQUEST), the Data Package field (0DATAPAKID) and the Data Record Number field (0RECORD). Only new data records are loaded to this key.
    You can specify that you do not want to run a check to ensure that the data is unique. If you do not check the uniqueness of the data, the DataStore object table may contain several records with the same key. If you do not set this indicator, and you do check the uniqueness of the data, the system generates a unique index in the semantic key of the InfoObject. This index has the technical name "KEY". Since write-optimized DataStore objects do not have a change log, the system does not create delta (in the sense of a before image and an after image). When you update data into the connected InfoProviders, the system only updates the requests that have not yet been posted.
    PS: Excerpt from http://help.sap.com/saphelp_nw2004s/helpdata/en/b6/de1c42128a5733e10000000a155106/frameset.htm
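    The technical-key behaviour described above can be illustrated with a small sketch (plain Python, not BW internals; the table layout, request names, and document numbers are invented): because every loaded record gets a fresh (0REQUEST, 0DATAPAKID, 0RECORD) key, rows with identical semantic keys never overwrite each other.

```python
def load(table, request, packages):
    """Append records under a generated technical key, as the active table
    of a write-optimized DSO does -- no overwrite, no aggregation."""
    for datapakid, package in enumerate(packages, start=1):
        for recno, row in enumerate(package, start=1):
            technical_key = (request, datapakid, recno)   # always unique
            table[technical_key] = row                    # always a new row

active_table = {}
# The same semantic key ("4711") arrives in two requests; both rows are kept,
# so the history is retained and any aggregation can happen downstream.
load(active_table, "REQU_1", [[{"doc": "4711", "amount": 100}]])
load(active_table, "REQU_2", [[{"doc": "4711", "amount": 250}]])
print(len(active_table))
```

    This is also why no activation step is needed: appending under a unique technical key can never collide with existing data.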
    Hope this helps.
    Best Regards,
    Rajani

  • Error during loading and deletion of write-optimized DSO

    Hey guys,
    I am using a write optimized DSO ZMYDSO to store data from several sources (two datasources and one DSO).
    I have disabled the check of uniqueness in the DSO, but I defined a semantic key for the fields ZCLIENT, ZGUID, ZSOURCE, ZPOSID which are used in a non-unique index.
    In the given case, I want to delete existing rows in the DSO. I execute these steps in the endroutine. Here the abstract coding:
    LOOP AT RESULT_PACKAGE ASSIGNING <RESULT_FIELDS>.
    " some other logic [...]
      DELETE FROM /BIC/AZMYDSO00
        WHERE /BIC/ZCLIENT = <RESULT_FIELDS>-/BIC/ZCLIENT
          AND /BIC/ZGUID   = <RESULT_FIELDS>-/BIC/ZGUID
          AND /BIC/ZSOURCE = <RESULT_FIELDS>-/BIC/ZSOURCE
          AND /BIC/ZPOSID  = <RESULT_FIELDS>-/BIC/ZPOSID.
    ENDLOOP.
    COMMIT WORK AND WAIT.
    During the loading (after the transformation step, in the update step), I get these messages (not every time):
    1. Error while writing the data (RSAODS131).
    2. Could not save DataPackage XY in DataStore ZMYDSO (RSODSO_UPDATE027).
    Diagnosis: DataPackage XY could not be saved. Possible reasons are a violation of key uniqueness (duplicate data) or a general database error.
    3. Error in the substep of updating the DataStore.
    I have checked the system log (SM21) and the system dumps (ST22) but could not find an exact error description.
    I guess I am creating some inconsistencies or locks (I also checked SM12), so that the load process is interrupted. I also tried serial updating within the DTP (I reduced the number of batch processes to 1), with no success.
    Perhaps the loading of one specific package could take longer, so that the following package overtakes its predecessor. Could that be a problem? Do you generally advise against deleting rows within the end routine?
    Regards,
    Philipp

    Hi,
    Is ZMYDSO the name of the DSO?
    And is this the end routine of the transformation that loads the same DSO?
    If so: we never do such a thing.
    You are comparing the DSO with the data that is flowing in and then deleting that data from the DSO,
    which doesn't actually make sense, because while data is being loaded into a DSO (or a cube, or any table), the DSO (or cube) is locked exclusively against modifications of its data; you can only read from it.
    If your requirement is that existing duplicate records should not arrive in the DSO, you can delete them from the SOURCE_PACKAGE in the start routine, along these lines:
    SELECT FIELDS FROM /BIC/AZMYDSO00 INTO TABLE INTERNAL_TABLE WHERE <CONDITION>.
    LOOP AT INTERNAL_TABLE INTO WORK_AREA.
      DELETE SOURCE_PACKAGE
        WHERE /BIC/ZCLIENT = WORK_AREA-/BIC/ZCLIENT
          AND /BIC/ZGUID   = WORK_AREA-/BIC/ZGUID
          AND /BIC/ZSOURCE = WORK_AREA-/BIC/ZSOURCE
          AND /BIC/ZPOSID  = WORK_AREA-/BIC/ZPOSID.
    ENDLOOP.
    Or, if your requirement is that the old data for the same keys that are newly arriving must be deleted from the DSO, so that the new data can be loaded, you could do something like this in the start routine:
    SELECT FIELDS FROM /BIC/AZMYDSO00 INTO TABLE INTERNAL_TABLE FOR ALL ENTRIES IN SOURCE_PACKAGE
      WHERE /BIC/ZCLIENT = SOURCE_PACKAGE-/BIC/ZCLIENT
        AND /BIC/ZGUID   = SOURCE_PACKAGE-/BIC/ZGUID
        AND /BIC/ZSOURCE = SOURCE_PACKAGE-/BIC/ZSOURCE
        AND /BIC/ZPOSID  = SOURCE_PACKAGE-/BIC/ZPOSID.
    * now update the new values you want to write in the loop
    LOOP AT INTERNAL_TABLE INTO WORK_AREA.
      " code for manipulation of WORK_AREA
      " write a MODIFY statement to update the RESULT_PACKAGE
      MODIFY RESULT_PACKAGE FROM WORK_AREA TRANSPORTING FIELDS.
    ENDLOOP.
    hope it helps,
    Regards,
    Joe

  • Error while transporting transformations and datasource Urgent need Help

    Hi all,
    I have been working on this problem for a few days now. I created an ODS object, transformations, and a DTP, and I am using an R/3 DataSource for extraction. The ODS populates fine in the Dev box, and the transformations and the DataSource are fine in the Dev box. I tried to transport the objects from the Dev box to the QA box. Only the ODS transports properly; the transformations are not transported at all. Here is the error I get:
    Start of the after-import method RS_TRFN_AFTER_IMPORT for object type(s) TRFN (Activation Mode)
    No rule exists
    Rule 1 of transformation   () is not valid and is being deleted
    Rule 2 of transformation   () is not valid and is being deleted
    Error occurred during post-handling RS_After_import for TRFN L
    Ended with return code : ==> 8 <===    
    Please let me know what the problem could be. Please respond ASAP.
    If you have experienced this issue, please let me know how you solved it or a possible workaround. Any help will be greatly appreciated.
    So far we have been using only 3.5 objects. This is the first 7.0 (end-to-end) transport.

    Hi voodi,
    I just followed the exact sequence that you mentioned. I transported the
    DataSource first... then the ODS (write-optimized)... then the transformations... then the DTP & InfoPackage.
    1) DataSource ... no errors
    2) ODS ... no errors
    3) Transformations ... error, the same as mentioned in my earlier post
    4) DTP and InfoPackage ... a similar error to the one mentioned, and it says that the transformation does not exist.
    Could it be an issue with source system conversion? I don't know much about it; could you please tell me what it could be and how to resolve it? I am thinking along these lines because the source system in QA3 exists in the DataSources folder and has converted properly to QA3Client300. But under the InfoProviders folder (in Q03), under my DSO, the DataSource still shows Dev3Client200 (client not changed). If this is the issue, do you think I need to ask Basis support to modify the transport (internally)? Can that be done?
    Please give me your feedback. Thank you.

  • Error while transport of transformations

    Hi,
    I am transporting transformations from development to acceptance.
    I changed a write-optimized DSO to a standard DSO; that is the reason I am collecting everything again into a new request and moving it to acceptance.
    I am using a 4-system global landscape:
    a7a
    ana
    n6a
    f7a
    But while moving, I am getting an error referring to the N6Q system. N6Q is quality, not development or acceptance.
    Please let me know of any solution.
       Start of the after-import method RS_TRFN_AFTER_IMPORT for object type(s) TRFN (Activation Mode)
       No rule exists
       Source RSDS ZI_ISEG N6Q420 does not exist
       Start of the after-import method RS_TRFN_AFTER_IMPORT for object type(s) TRFN (Delete Mode)
       Errors occurred during post-handling RS_AFTER_IMPORT for TRFN L
       RS_AFTER_IMPORT belongs to package RS
       The errors affect the following components:
          BW-WHM (Warehouse Management)
       Post-import method RS_AFTER_IMPORT completed for TRFN L, date and time: 20080515051107
       Post-import methods of change/transport request GRDK938058 completed
            Start of subsequent processing ... 20080515051045

    Hi,
    Check whether your request is saved under a local package or under your own package.
    Also check whether the request got locked.
    Sridath

  • Transport error of request Standard Infocube to HANA Optimised Infocube migration

    Hi ,
    Migration of a standard InfoCube to a HANA-optimized InfoCube was done successfully in the Dev system, but while transporting that request to the PRD system,
    an error occurs. The error log is as follows:

    Hi Herman,
    Thanks for the reply. Is there an SAP Note or document for these steps?
    My system landscape is BW DEV -> BW PRD.
    Both BW systems were on an Oracle DB. We have migrated from the Oracle DB to a HANA DB.
    The PRD system InfoCube has 7 years of data, and it is a standard InfoCube, so we want to migrate all standard InfoCubes to HANA-optimized.
    Regards,
    Rajesh Behera

  • DSO Will Not Activate after Transport with rc=8

    Hello,
    I had a requirement to make two Purchasing DSOs (based on 2LIS_02_ITM & 2LIS_02_SCL) write-optimized. I made my changes in the development environment and transported them to QA. The transport went in with rc=8. All of my changes propagated; the DSOs are now write-optimized; however, they are not active, nor are the corresponding transformations and DTPs.
    I tried activating directly in QA and it would not work. I also tried re-importing my transport, as well as creating a new activation transport and sending it to QA. Nothing worked.
    Can anyone help or give me some ideas as to how I can get this activated? Thank you!!!
    Here is some of my error text from the transport log:
    Start of the after-import method RS_ODSO_AFTER_IMPORT for object type(s)
    The creation of the export DataSource failed
    Error during the retrieval of the logon data stored in secure storage
    Error during the retrieval of the logon data stored in secure storage
    Error when creating the export DataSource and dependent Objects
    Error when activating DataStore Object
    The creation of the export DataSource failed
    Error during the retrieval of the logon data stored in secure storage
    Error during the retrieval of the logon data stored in secure storage
    Error when creating the export DataSource and dependent Objects
    Error when activating DataStore Object
    Error/warning in dict. activator, detailed log    > Detail
    Index /BIC/AZ_02_ITM00-RDR (field REQUEST is not in the table)
    Index /BIC/AZ_02_ITM00-RDR (field DATAPAKID is not in the table)
    Index /BIC/AZ_02_ITM00-RDR (field RECORD is not in the table)
    Index /BIC/AZ_02_SCL00-RDR (field REQUEST is not in the table)
    Index /BIC/AZ_02_SCL00-RDR (field DATAPAKID is not in the table)
    Index /BIC/AZ_02_SCL00-RDR (field RECORD is not in the table)
    Table /BIC/AZ_02_ITM00 could not be activated
    Table /BIC/AZ_02_SCL00 could not be activated
    Return code..............: 8
    DDIC Object TABL /BIC/ has not been activated
    Error when resetting DataStore Object Z_02_ITM to the active version
    DDIC Object TABL /BIC/ has not been activated

    Well... after reading your post, it seems that only the Basis team can help you.
    You need to restore the connection from RSA1 (context menu of the source system). But to restore the connection you will need the destination's IP address or hostname and the background user ID/password. Moreover, your system needs to be open (SCC4) and modifiable (SE06). All this access normally remains with the Basis team.
    You can also go to SM59 -> ABAP connections -> pick your source system connection -> check the RFC.
    I think your RFC connection is still pointing to the system you copied from. But that may not be the only problem; even the password may be wrong, and you will only know that after a successful destination connection test.
    So, in a nutshell, contact the Basis team.
    Regards
    Anindya
