Mass data load into SAP R/3 - with XI?

Hi guys!
I have an issue: mass data migration into SAP R/3. Is XI a good solution? It will be about 60 GB of data. Or is there a better way to perform this data load?
Thanx a lot!
Olian

hi,
SAP doesn't recommend using XI for mass data migration,
and 60 GB is certainly too much.
Use LSMW for that purpose.
Regards,
michal

Similar Messages

  • Data load into SAP ECC from Non SAP system

    Hi Experts,
    I am very new to BODS, and I want to load historical data from a non-SAP source system into SAP R/3 tables like VBAK and VBAP using BODS. Can you please provide steps, documents, or guidelines on how to achieve this?
    Regards,
    Monil

    Hi
    In order to load into SAP, you have the following options:
    1. Use IDocs. There are several standard IDocs in ECC for specific objects (MATMAS for materials, DEBMAS for customers, etc.). You can generate and send IDocs as messages to the SAP target using BODS.
    2. Use LSMW programs to load into the SAP target. These programs require input files in specific layouts, generated using BODS.
    3. Direct input. The direct input method is to write ABAP programs targeting specific tables. This approach is very complex, so it needs a lot of thought.
    The OSS Notes supplied in previous messages are all excellent guidance to steer you in the right direction on the choice of load, etc.
    However, the data load into SAP needs to be object specific. So targeting merely the sales tables will not help, as the sales document data held in the VBAK and VBAP tables you mentioned is related to articles: these tables hold sales document data for already created articles. So if you want to specifically target these tables, you may need to prepare an LSMW program for the purpose.
    To answer your question on whether it is possible to load objects like materials, customers, vendors, etc. using BODS: yes, you can.
    Below is a standard list of IDocs that you can use for this purpose to load into the SAP ECC system from a non-SAP system.
    Customer Master - DEBMAS
    Article Master - ARTMAS
    Material Master - MATMAS
    Vendor Master - CREMAS
    Purchase Info Records (PIR) - INFREC
    The list is endless...
    In order to achieve this, you will need the functional design consultants to provide the ETL mapping from the legacy data to the IDoc target schema and fields (better to have the technical table names and fields too). You should then prepare the data after putting it through the standard check-table validations for each object, along with any business-specific conversion rules and validations. Having prepared this data, you can either generate flat-file output to load into SAP using LSMW programs or generate IDoc messages to the target SAP system.
    If you are going to post IDocs directly into the SAP target using BODS, you will need to create a partner profile for BODS to send IDocs and define the IDocs you need as inbound IDocs. There are a few more settings, such as RFC connectivity and authorizations, required for BODS to successfully send IDocs into the SAP target.
    Do let me know if you need more info on any specific queries or issues you may encounter.
    kind regards
    Raghu

  • Mass Data Loads into SNP Key figures

    Hi All,
    Does anyone have any knowledge of doing mass data uploads against key figures (e.g. Safety Stock Planned)? There is a transaction /SAPAPO/TSKEYFMAIN - Mass Maintenance of Time Series Key Figures, but it does not give me the option of loading thousands of materials at one time. Any thoughts would be appreciated.
    Rumi

    Rumi
    As Kaushik has mentioned, you can upload data from an InfoCube to the planning book for time series key figures, and you can read data maintained in Excel and upload it into your InfoCube.
    But I have a question: why can't you maintain the safety stock in the APO product master for the material/branch and use a macro to read the data from the product master and populate the safety stock key figure? Actually, you can maintain the safety stock in R/3, and as soon as you CIF the material to APO, the safety stock field in the product master will be populated; you can then read it using a macro.
    Thanks
    Aparna

  • Excel data transfer into SAP internal table with GUI_UPLOAD

    hi all,
    I am using an SRM4 system, and I want to develop a report which uploads data from Excel and converts it into an internal table.
    I know that many threads have been posted on this topic, but my requirement is slightly different: only one function module is available in the system, namely GUI_UPLOAD, and we do not want the user to have to save the file as tab-delimited before calling this FM; instead, the program should take care of all of this.
    Please suggest something as soon as possible.
    thanks,
    jigs.

    Dear Jigs,
    Please go through the following lines of code:
    *----------------------------------------------------------------------*
    *   D A T A   D E C L A R A T I O N                                    *
    *----------------------------------------------------------------------*
    TABLES: ANEP,
            BKPF.

    TYPES: BEGIN OF TY_TABDATA,
             MANDT   LIKE SY-MANDT,            " Client
             ZSLNUM  LIKE ZSHIFTDEPN-ZSLNUM,   " Serial number
             ZASSET  LIKE ZSHIFTDEPN-ZASSET,   " Original asset that was transferred
             ZYEAR   LIKE ZSHIFTDEPN-ZYEAR,    " Fiscal year
             ZPERIOD LIKE ZSHIFTDEPN-ZPERIOD,  " Fiscal period
             ZSHIFT1 LIKE ZSHIFTDEPN-ZSHIFT1,  " Shift no. 1
             ZSHIFT2 LIKE ZSHIFTDEPN-ZSHIFT1,  " Shift no. 2
             ZSHIFT3 LIKE ZSHIFTDEPN-ZSHIFT1,  " Shift no. 3
           END OF TY_TABDATA.

    * Internal table with header line holding the raw uploaded cells
    DATA: BEGIN OF IT_FILE_UPLOAD OCCURS 0.
            INCLUDE STRUCTURE ALSMEX_TABLINE.  " Rows for table with Excel data
    DATA: END OF IT_FILE_UPLOAD.

    * Work area and target table for the organized data
    DATA: WA_TABDATA TYPE TY_TABDATA,
          IT_TABDATA TYPE STANDARD TABLE OF TY_TABDATA.

    *----------------------------------------------------------------------*
    *   S E L E C T I O N - S C R E E N                                    *
    *----------------------------------------------------------------------*
    SELECTION-SCREEN: BEGIN OF BLOCK B1 WITH FRAME,
                      BEGIN OF BLOCK B2 WITH FRAME.
    PARAMETERS: P_FNAME LIKE RLGRAP-FILENAME OBLIGATORY.
    SELECTION-SCREEN: END OF BLOCK B2,
                      END OF BLOCK B1.

    *----------------------------------------------------------------------*
    *   E V E N T :  A T  S E L E C T I O N - S C R E E N                  *
    *----------------------------------------------------------------------*
    AT SELECTION-SCREEN ON VALUE-REQUEST FOR P_FNAME.
    * F4 value help for the file name
      CALL FUNCTION 'KD_GET_FILENAME_ON_F4'
        EXPORTING
          PROGRAM_NAME  = SYST-REPID
          DYNPRO_NUMBER = SYST-DYNNR
          FIELD_NAME    = ' '
          STATIC        = 'X'
          MASK          = '.'
        CHANGING
          FILE_NAME     = P_FNAME
        EXCEPTIONS
          MASK_TOO_LONG = 1
          OTHERS        = 2.
      IF SY-SUBRC <> 0.
        MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
                WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
      ENDIF.

    *----------------------------------------------------------------------*
    *   E V E N T :  S T A R T - O F - S E L E C T I O N                   *
    *----------------------------------------------------------------------*
    START-OF-SELECTION.
    * Upload the Excel file into the internal table.
      PERFORM UPLOAD_EXCEL_FILE.
    * Organize the uploaded data into another internal table.
      PERFORM ORGANIZE_UPLOADED_DATA.

    *----------------------------------------------------------------------*
    *   E V E N T :  E N D - O F - S E L E C T I O N                       *
    *----------------------------------------------------------------------*
    END-OF-SELECTION.

    *&---------------------------------------------------------------------*
    *&      Form  UPLOAD_EXCEL_FILE
    *&---------------------------------------------------------------------*
    *       Read the Excel file cell by cell into IT_FILE_UPLOAD
    *----------------------------------------------------------------------*
    FORM UPLOAD_EXCEL_FILE .
      CALL FUNCTION 'ALSM_EXCEL_TO_INTERNAL_TABLE'
        EXPORTING
          FILENAME                = P_FNAME
          I_BEGIN_COL             = 1
          I_BEGIN_ROW             = 3
          I_END_COL               = 7
          I_END_ROW               = 32000
        TABLES
          INTERN                  = IT_FILE_UPLOAD
        EXCEPTIONS
          INCONSISTENT_PARAMETERS = 1
          UPLOAD_OLE              = 2
          OTHERS                  = 3.
      IF SY-SUBRC <> 0.
        MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
                WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
      ENDIF.
    ENDFORM.                    " UPLOAD_EXCEL_FILE

    *&---------------------------------------------------------------------*
    *&      Form  ORGANIZE_UPLOADED_DATA
    *&---------------------------------------------------------------------*
    *       Map the cell-by-cell data into the structured target table
    *----------------------------------------------------------------------*
    FORM ORGANIZE_UPLOADED_DATA .
      SORT IT_FILE_UPLOAD BY ROW
                             COL.
      LOOP AT IT_FILE_UPLOAD.
    *   Each entry of IT_FILE_UPLOAD is one cell; COL determines which
    *   target field the value belongs to.
        CASE IT_FILE_UPLOAD-COL.
          WHEN 1.
            WA_TABDATA-ZSLNUM  = IT_FILE_UPLOAD-VALUE.
          WHEN 2.
            WA_TABDATA-ZASSET  = IT_FILE_UPLOAD-VALUE.
          WHEN 3.
            WA_TABDATA-ZYEAR   = IT_FILE_UPLOAD-VALUE.
          WHEN 4.
            WA_TABDATA-ZPERIOD = IT_FILE_UPLOAD-VALUE.
          WHEN 5.
            WA_TABDATA-ZSHIFT1 = IT_FILE_UPLOAD-VALUE.
          WHEN 6.
            WA_TABDATA-ZSHIFT2 = IT_FILE_UPLOAD-VALUE.
          WHEN 7.
            WA_TABDATA-ZSHIFT3 = IT_FILE_UPLOAD-VALUE.
        ENDCASE.
    *   A complete row has been processed: append it to the target table.
        AT END OF ROW.
          WA_TABDATA-MANDT = SY-MANDT.
          APPEND WA_TABDATA TO IT_TABDATA.
          CLEAR: WA_TABDATA.
        ENDAT.
      ENDLOOP.
    ENDFORM.                    " ORGANIZE_UPLOADED_DATA
    In the subroutine ORGANIZE_UPLOADED_DATA, the data is organized as per the structure declared above. Note that ALSM_EXCEL_TO_INTERNAL_TABLE reads the native Excel file via OLE (hence the UPLOAD_OLE exception), so the user does not need to save the file as tab-delimited first, which addresses your requirement.
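    As an aside: on releases where it exists, the function module TEXT_CONVERT_XLS_TO_SAP is another frequently cited way to read an Excel file without a tab-delimited save. The sketch below is only an illustration under that assumption; the report name and the row structure TY_ROW are placeholders that must match your sheet's column order, and the FM's exact interface should be verified in SE37 before relying on it.
    REPORT Z_EXCEL_UPLOAD_SKETCH.

    TYPE-POOLS: TRUXS.

    PARAMETERS: P_FILE TYPE RLGRAP-FILENAME OBLIGATORY.

    * Placeholder row structure; align it with the Excel column order.
    TYPES: BEGIN OF TY_ROW,
             ZSLNUM TYPE CHAR10,
             ZASSET TYPE CHAR12,
           END OF TY_ROW.
    DATA: IT_RAW  TYPE TRUXS_T_TEXT_DATA,        " work table used internally
          IT_DATA TYPE STANDARD TABLE OF TY_ROW. " converted rows

    START-OF-SELECTION.
    * Converts the Excel file directly; no tab-delimited save is needed.
      CALL FUNCTION 'TEXT_CONVERT_XLS_TO_SAP'
        EXPORTING
          I_LINE_HEADER        = 'X'             " skip the header row
          I_TAB_RAW_DATA       = IT_RAW
          I_FILENAME           = P_FILE
        TABLES
          I_TAB_CONVERTED_DATA = IT_DATA
        EXCEPTIONS
          CONVERSION_FAILED    = 1
          OTHERS               = 2.
      IF SY-SUBRC <> 0.
        MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
                WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
      ENDIF.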
    Regards,
    Abir

  • EDT Cat. 15 Data loaded to SAP but failed when updating

    Hi,
    I have a structure to load data into BP using KCLJ with Cat. 15.
    Here it is:
    AKTYP
    TYPE
    PARTNER
    ROLE1
    KUNNR
    KUNNR_EXT
    BU_GROUP
    FIBUKRS
    CHIND_ADDR
    NAME_CO
    NAME_ORG1
    NAME_ORG2
    STREET
    STR_SUPPL1
    STR_SUPPL2
    STR_SUPPL3
    LOCATION
    HOUSE_NUM1
    POST_CODE1
    CITY1
    COUNTRY
    REGION
    LANGU
    CHIND_TEL
    TEL_NUMBER
    TEL_EXTENS
    We had the data loaded into SAP with ROLE 000000 and TR0100, but somehow when we tried to update the address through ROLE TR0100, we got a dump and the update failed. However, the update works when we use ROLE 000000.
    Would anyone have a clue about this?
    Thanks

    I am new to EDT as well. I know Cat. 15, which can load BP; SAP has more categories that can load more objects using KCLJ. Here's my experience.
    Under transaction SIMGH, find the IMG structure External Data Transfer for SAP Banking. Under that structure, you can find Display Required and Optional Entry Fields for SEM Banking. When you enter 15 in the Category box, you'll see a whole list of fields for BP. You can use these fields to create your own structure.
    After you have created your structure, use Define Sender Structure under External Data Transfer for SAP Banking to define it. After that, it is done; you can try using KCLJ to load your BP.
    If you still have issues, it will most likely be the configuration.
    Enjoy.

  • Aggregating data loaded into different hierarchy levels

    I have some problems when I try to aggregate a variable called PRUEBA2_IMPORTE dimensioned by a time dimension (parent-child type).
    I read the help in the DML Reference of the OLAP Worksheet, and it says the following:
    When data is loaded into dimension values that are at different levels of a hierarchy, then you need to be careful in how you set status in the PRECOMPUTE clause in a RELATION statement in your aggregation specification. Suppose that a time dimension has a hierarchy with three levels: months aggregate into quarters, and quarters aggregate into years. Some data is loaded into month dimension values, while other data is loaded into quarter dimension values. For example, Q1 is the parent of January, February, and March. Data for March is loaded into the March dimension value. But the sum of data for January and February is loaded directly into the Q1 dimension value. In fact, the January and February dimension values contain NA values instead of data. Your goal is to add the data in March to the data in Q1. When you attempt to aggregate January, February, and March into Q1, the data in March will simply replace the data in Q1. When this happens, Q1 will only contain the March data instead of the sum of January, February, and March. To aggregate data that is loaded into different levels of a hierarchy, create a valueset for only those dimension values that contain data.
    DEFINE all_but_q4 VALUESET time
    LIMIT all_but_q4 TO ALL
    LIMIT all_but_q4 REMOVE 'Q4'
    Within the aggregation specification, use that valueset to specify that the detail-level data should be added to the data that already exists in its parent, Q1, as shown in the following statement.
    RELATION time.r PRECOMPUTE (all_but_q4)
    How do I do this for more than one dimension?
    Below I describe my case study:
    DEFINE T_TIME DIMENSION TEXT
    T_TIME
    200401
    200402
    200403
    200404
    200405
    200406
    200407
    200408
    200409
    200410
    200411
    2004
    200412
    200501
    200502
    200503
    200504
    200505
    200506
    200507
    200508
    200509
    200510
    200511
    2005
    200512
    DEFINE T_TIME_PARENTREL RELATION T_TIME <T_TIME T_TIME_HIERLIST>
    -----------T_TIME_HIERLIST-------------
    T_TIME H_TIME
    200401 2004
    200402 2004
    200403 2004
    200404 2004
    200405 2004
    200406 2004
    200407 2004
    200408 2004
    200409 2004
    200410 2004
    200411 2004
    2004 NA
    200412 2004
    200501 2005
    200502 2005
    200503 2005
    200504 2005
    200505 2005
    200506 2005
    200507 2005
    200508 2005
    200509 2005
    200510 2005
    200511 2005
    2005     NA
    200512 2005
    DEFINE PRUEBA2_IMPORTE FORMULA DECIMAL <T_TIME>
    EQ -
    aggregate(this_aw!PRUEBA2_IMPORTE_STORED using this_aw!OBJ262568349 -
    COUNTVAR this_aw!PRUEBA2_IMPORTE_COUNTVAR)
    T_TIME PRUEBA2_IMPORTE
    200401 NA
    200402 NA
    200403 2,00
    200404 2,00
    200405 NA
    200406 NA
    200407 NA
    200408 NA
    200409 NA
    200410 NA
    200411 NA
    2004 4,00 ---> here it's right!! but...
    200412 NA
    200501 5,00
    200502 15,00
    200503 NA
    200504 NA
    200505 NA
    200506 NA
    200507 NA
    200508 NA
    200509 NA
    200510 NA
    200511 NA
    2005 10,00 ---> here it must be 30,00, not 10,00
    200512 NA
    DEFINE PRUEBA2_IMPORTE_STORED VARIABLE DECIMAL <T_TIME>
    T_TIME PRUEBA2_IMPORTE_STORED
    200401 NA
    200402 NA
    200403 NA
    200404 NA
    200405 NA
    200406 NA
    200407 NA
    200408 NA
    200409 NA
    200410 NA
    200411 NA
    2004 NA
    200412 NA
    200501 5,00
    200502 15,00
    200503 NA
    200504 NA
    200505 NA
    200506 NA
    200507 NA
    200508 NA
    200509 NA
    200510 NA
    200511 NA
    2005 10,00
    200512 NA
    DEFINE OBJ262568349 AGGMAP
    AGGMAP
    RELATION this_aw!T_TIME_PARENTREL(this_aw!T_TIME_AGGRHIER_VSET1) PRECOMPUTE(this_aw!T_TIME_AGGRDIM_VSET1) OPERATOR SUM -
    args DIVIDEBYZERO YES DECIMALOVERFLOW YES NASKIP YES
    AGGINDEX NO
    CACHE NONE
    END
    DEFINE T_TIME_AGGRHIER_VSET1 VALUESET T_TIME_HIERLIST
    T_TIME_AGGRHIER_VSET1 = (H_TIME)
    DEFINE T_TIME_AGGRDIM_VSET1 VALUESET T_TIME
    T_TIME_AGGRDIM_VSET1 = (2005)
    Regards,
    Mel.

    Mel,
    There are several different types of "data loaded into different hierarchy levels", and the approach to solving the issue differs depending on the needs of the application.
    1. Data is loaded symmetrically at uniform mixed levels. Examples would include loading data at "quarter" in historical years but at "month" in the current year; it does /not/ include data loaded at both quarter and month within the same calendar period.
    = solved by the setting of status, or in 10.2 or later with the load_status clause of the aggmap.
    2. Data is loaded at both a detail level and its ancestor, as in your example case.
    = the aggregate command overwrites aggregate values based on the values of the children; this is the only repeatable thing it can do. The recommended way to solve this problem is to create 'self' nodes in the hierarchy representing the data loaded at the aggregate level, each of which is then added as one of the children of the aggregate node. This enables repeatable calculation as well as auditability of the resultant value.
    Also note the difference in behavior between the aggregate command and the aggregate function. In your example the aggregate function looks at '2005', finds a value, and returns it for a result of 10; the aggregate command would recalculate based on January and February for a result of 20.
    To solve your usage case I would suggest a hierarchy that looks more like this:
    DEFINE T_TIME_PARENTREL RELATION T_TIME <T_TIME T_TIME_HIERLIST>
    -----------T_TIME_HIERLIST-------------
    T_TIME H_TIME
    200401 2004
    200402 2004
    200403 2004
    200404 2004
    200405 2004
    200406 2004
    200407 2004
    200408 2004
    200409 2004
    200410 2004
    200411 2004
    200412 2004
    2004_SELF 2004
    2004 NA
    200501 2005
    200502 2005
    200503 2005
    200504 2005
    200505 2005
    200506 2005
    200507 2005
    200508 2005
    200509 2005
    200510 2005
    200511 2005
    200512 2005
    2005_SELF 2005
    2005 NA
    Resulting in the following cube:
    T_TIME PRUEBA2_IMPORTE
    200401 NA
    200402 NA
    200403 2,00
    200404 2,00
    200405 NA
    200406 NA
    200407 NA
    200408 NA
    200409 NA
    200410 NA
    200411 NA
    200412 NA
    2004_SELF NA
    2004 4,00
    200501 5,00
    200502 15,00
    200503 NA
    200504 NA
    200505 NA
    200506 NA
    200507 NA
    200508 NA
    200509 NA
    200510 NA
    200511 NA
    200512 NA
    2005_SELF 10,00
    2005 30,00
    3. Data is loaded at a level based upon another dimension; for example, product being loaded at 'UPC' in EMEA but at 'BRAND' in APAC.
    = this can currently only be solved by issuing multiple aggregate commands to aggregate the different regions with different input status, which unfortunately means that it is not compatible with compressed composites. We will likely add better support for this case in future releases.
    4. Data is loaded at both an aggregate level and a detail level, but the calculation is more complicated than a simple SUM operator.
    = this often requires the use of ALLOCATE to push the data down to the leaves so that the aggregate values can be correctly calculated during aggregation.

  • How to delete the data loaded into MySQL target table using Scripts

    Hi Experts
    I created a job with a validation transformation. The data that passes validation is loaded into a Pass table, and the data that fails is loaded into a Failed table.
    My requirement is: if any data was loaded into the Failed table, then I have to delete the data loaded into the Pass table using a script.
    In the script I have written the code as
    sql('database','delete from <tablename>');
    but the SQL query execution is raising an exception.
    How can I delete the data loaded into the MySQL target table using a script?
    Please guide me on this error.
    Thanks in Advance
    PrasannaKumar

    Hi Dirk Venken
    I got the solution; the mistake I made was that the query was not valid for MySQL. This works:
    sql('MySQL', 'truncate world.customer_salesfact_details')
    while the failing query was
    sql('MySQL', 'delete table world.customer_salesfact_details')
    (DELETE TABLE is not valid SQL; a DELETE statement needs the form 'delete from <table>'.)
    Thanks for your concern
    PrasannaKumar

  • Adding leading zeros before data loaded into DSO

    Hi
    In the PROD_ID field below, some IDs are missing their leading zeros when data is loaded into BI from SRM. The data type is character. If leading zeros are missing, the data activation of the DSO fails, and I have to add them manually in the PSA table. I want to add the leading zeros, where missing, before the data is loaded into the DSO. The total character length is 40, so e.g. if the value is 1502 there should be 36 zeros before it, and if the value is 265721 there should be 34 zeros. Only values of length 4 or 6 are coming in, so 34 or 36 leading zeros are always needed whenever the zeros are missing.
    Can we use the function module CONVERSION_EXIT_ALPHA_INPUT? As this is a character field, I'm not sure how to use it in that case. Do I need to convert the value to an integer first?
    Can someone please give me sample code? We're using the BW 3.5 data flow to load data into the DSO. Please also say where the code needs to be written: in the rule type or in the start routine.

    Hi,
    Check at InfoObject level what kind of conversion routine it uses: use transaction RSD1, enter your InfoObject, and display it.
    At DataSource level you can also see the external/internal format that is maintained.
    If your InfoObject uses the ALPHA conversion routine, it will get the leading zeros automatically.
    Also check in RSA3 how the data comes from the source.
    If you are receiving this issue for some records only, then you need to check those records.
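    If the InfoObject cannot use the ALPHA conversion routine and you do need a routine in the BW 3.5 transfer rules, a minimal sketch could look like the one below. CONVERSION_EXIT_ALPHA_INPUT right-justifies a purely numeric value and pads it with leading zeros to the full length of the output field, so no integer conversion is needed. The name TRAN_STRUCTURE-PROD_ID is an assumption standing in for your actual transfer structure field.
    * Transfer rule routine sketch: pad PROD_ID to 40 characters with
    * leading zeros. TRAN_STRUCTURE-PROD_ID is a placeholder name.
      DATA: L_PROD_ID(40) TYPE C.

      L_PROD_ID = TRAN_STRUCTURE-PROD_ID.

    * Pads a numeric value with leading zeros to the output length.
      CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
        EXPORTING
          INPUT  = L_PROD_ID
        IMPORTING
          OUTPUT = L_PROD_ID.

      RESULT = L_PROD_ID.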
    Thanks

  • How can I add the dimensions and load data into Planning applications?

    Please let me know how I can add the dimensions and load data into a Planning application without doing it manually.

    You can use tools like ODI, DIM, or HAL to load metadata and data into Planning applications.
    The data load can be done at the Essbase end using a rules file, but metadata changes should flow from Planning to Essbase through one of the above-mentioned tools. There are also many other ways to achieve the same.
    - Krish

  • Import data Automatically into SAP

    Hi all,
    Is it possible to import data automatically into SAP?
    For example: if the client has an online form (filled in through a website), is it possible to upload that data directly into SAP?
    On getting the alert or approval, they will add the document.
    Regards
    Anish

    Hi Anish,
    This can be done through a scheduled DTW job.
    Thanks,
    Gordon

  • Mass Data upload in SAP from 3rd party system

    Hi Experts.
    Can anyone help me with how to do a mass data upload in SAP? Currently, when anyone joins, a joining form is filled in by the employee, and that data is finally entered into SAP manually using various infotypes. Now I am planning to make that form available as a web page. The employee will go to the web page and fill in the data, the HR team will also fill in the required fields, and once the form is complete, the data should be updated in SAP in the respective infotypes (personal details in infotype 2, address in infotype 6, bank details in 9, and so on) in a single shot. Is there a BAPI or something similar with which this can be achieved?
    Thnx
    S Kumar

    You can try BAPI_BANK_CREATE for IT0009, BAPI_ADDRESSEMP_CREATE for IT0006, and BAPI_PERSDATA_CREATE for IT0002. Otherwise, you can also use the function module HR_MAINTAIN_MASTERDATA to create any infotype.
    Have a look also at the Life and Work Events functionality in SAP Portal (http://help.sap.com/erp2005_ehp_04/helpdata/EN/f6/263359f8c14ef98384ae7a2becd156/frameset.htm).
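    For illustration, a minimal sketch of what an HR_MAINTAIN_MASTERDATA call for an address record (IT0006) could look like. All values are placeholders, and the parameter names follow common usage of this function module; verify the exact interface in SE37 on your release. In practice the employee is usually locked first with BAPI_EMPLOYEE_ENQUEUE and unlocked afterwards with BAPI_EMPLOYEE_DEQUEUE.
    * Sketch only: create an IT0006 (address) record for one employee.
    DATA: LT_PROP   TYPE STANDARD TABLE OF PPROP WITH HEADER LINE,
          LS_RETURN TYPE BAPIRETURN1.

    * Each PPROP row proposes one field value for the new record.
    LT_PROP-INFTY = '0006'.         " Addresses
    LT_PROP-FNAME = 'P0006-STRAS'.  " Street (placeholder value)
    LT_PROP-FVAL  = 'Main Street 1'.
    APPEND LT_PROP.

    LT_PROP-FNAME = 'P0006-ORT01'.  " City (placeholder value)
    LT_PROP-FVAL  = 'Berlin'.
    APPEND LT_PROP.

    CALL FUNCTION 'HR_MAINTAIN_MASTERDATA'
      EXPORTING
        PERNR           = '00001234'   " placeholder personnel number
        ACTIO           = 'INS'        " create a new record
        BEGDA           = SY-DATUM
        ENDDA           = '99991231'
        INFTY           = '0006'
        SUBTY           = '1'          " e.g. permanent residence
        DIALOG_MODE     = '0'          " no dialog
      IMPORTING
        RETURN1         = LS_RETURN
      TABLES
        PROPOSED_VALUES = LT_PROP.

    IF LS_RETURN-MESSAGE IS NOT INITIAL.
      WRITE: / LS_RETURN-MESSAGE.
    ENDIF.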

  • Once the Add-on is loaded into SAP B1, it generates the following warning message

    Hi Experts ,
    Once the Add-on is loaded into SAP B1, it generates the following warning message: "Conversion from string """" to type 'Double' is not valid".
    Please find the attachment
    Thanks for the Support,
    Satish.

    Hi,
    At least one of the add-on's functions converts a string to a number. If that string is empty, it may cause this error. The value needs to be verified before converting.
    Thanks,
    Gordon

  • How to make data loaded into cube NOT ready for reporting

    Hi Gurus: Is there a way by which data loaded into a cube can be made NOT available for reporting?
    Please suggest.
    Thanks

    See, by default a request that has been loaded into a cube is available for reporting. Now, if you have an aggregate, the system needs this new request to be rolled up into the aggregate as well before it becomes available for reporting. The reason: queries are written against the cube, not against the aggregate, so you only know at runtime whether a query will hit a particular aggregate. This means that whether a query gets its data from the aggregate or from the cube, it should ultimately get the same data in both cases. Now, if a request is added to the cube but not to the aggregate, these two objects will contain different data. The system takes the safer route of not making the 'unrolled-up' data visible at all, rather than serving inconsistent data.
    Hope this helps...

  • AWM Newbie Question: How to filter data loaded into cubes/dimensions?

    Hi,
    I am trying to filter the amount of data loaded into my dimensions in AWM (e.g., I only want to load 1-2 years' worth of data for development purposes). I can't seem to find a place in AWM where you can specify a WHERE clause. Is there something else I must do to filter the data?
    Thanks

    Hi there,
    Which release of Oracle OLAP are you using? 10g? 11g?
    You can use database views to filter your dimension and cube data, and then map these views in AWM.
    Thanks,
    Stuart Bunby
    OLAP Blog: http://oracleOLAP.blogspot.com
    OLAP Wiki: http://wiki.oracle.com/page/Oracle+OLAP+Option
    OLAP on OTN: http://www.oracle.com/technology/products/bi/olap/index.html
    DW on OTN : http://www.oracle.com/technology/products/bi/db/11g/index.html

  • Data loads into multiple InfoObjects from 0EHS_PHRASE_TEXT DataSource.

    Dear Experts,
    I am working on an SAP HCM-BW 7.0 implementation and am trying to load data into 8 different InfoObjects (texts) through the 0EHS_PHRASE_TEXT (Phrases) extractor.
    There are many InfoPackages in place, created during the previous project, loading into different sets of standard BCT InfoObjects.
    After migrating the 0EHS_PHRASE_TEXT DataSource from the 3.x to the 7.0 version, I created different InfoObjects with transformations and DTPs, based on business requirements, to load them. The relevant data is available in 0EHS_PHRASE_TEXT when checked in the Extractor Checker (RSA3). However, when I create InfoPackages to load these InfoObjects, I don't see the Data Targets listed, which makes sense since the data is first loaded into the PSA and then into the Data Targets (in this case, InfoObjects with texts) using DTPs.
    My concern is that when I create a Process Chain to load master data into these objects, the following happens:
    1. Load 10 records into the PSA and then into InfoObject A. All 10 records are loaded into A.
    2. Load 5 records into the PSA and then into InfoObject B. The previous 10 + the new 5 records are loaded into B.
    3. And it continues like this for all 8 InfoObjects.
    My question is: is there a way to get the corresponding Data Target tab in the InfoPackages, so that the data is picked immediately from the PSA of the 0EHS_PHRASE_TEXT DataSource and loaded only into the corresponding InfoObjects?
    Or should I make use of transfer rules by restoring the 0EHS_PHRASE_TEXT DataSource from the 7.0 to the 3.x version?
    Your help is much appreciated.
    Thanks,
    Chandu

    Hi Andreas
    Thanks for replying and sorry for the confusion.
    The extractor is delivering 10 and then 5 because of different selection parameters in the InfoPackage, "selecting" which InfoObject I wish to load the texts for.
    Your understanding of my requirements is correct.
    Could you please elaborate on Option 1 a bit more?
    Regarding your Option 2, I don't want the DTPs to extract directly in full from the DataSource, because by doing so all the data, not just the relevant data, is loaded into each and every InfoObject, and I don't want that to happen. I want to load only the relevant data into the respective InfoObjects from the PSA using DTPs.
    I have tried to set up filters in the DTPs with different selection parameters and tried to load the relevant InfoObjects. For example, I am trying to load the ZEHS_SUBS InfoObject with EHS_INJ_SUB_SUBSTANCE as the selection parameter. There are 54 records in both the source system and the PSA for this parameter.
    Yet the request shows 54 transferred records but only the very last 1 added record, and when I check the target InfoObject (ZEHS_SUBS), it shows only 1 record. I have tried many combinations:
    1. Loaded only the EHS_INJ_SUB_SUBSTANCE data from the source system into the PSA and then tried to execute the relevant DTP.
    2. Checked 'Do Not Extract from PSA but Access Data Source (for Small Amounts of Data)' and tried it.
    3. Tried both Delta and Full extraction mode.
    4. Set a filter on the single value 'EHS_INJ_SUB_SUBSTANCE' to extract only this data.
    5. Set a filter excluding all the other single values.
    6. Checked 'Handle Duplicate Record Keys'.
    Still, only the last 1 record out of 54 shows up in the target InfoObject.
    Please let me know if you have any idea why this is happening.
    Thanks for your time.
    Chandu
