Preparing Master Data Package

Hi,
My question is about packages. As you know, we can create packages for script logic and transaction data files. But when I want to create a package for master data (cpmb\import_iobj_master), it seems impossible, because there is a "Set Selection" step. I can't modify the Set Selection part, and I couldn't find any option to choose a format for it.
My goal is that customers should only have to click the Finish button when they want to import master data.
Are there any documents about this, or has anyone prepared one?
Take it easy...

Hi,
We can modify the existing selection in the DM package.
Once your package is in the Modify Package state, follow the steps below to modify the script.
1. Click on the button beside the Selection of Process Chain tab, which takes you to a new screen.
2. In the new screen, you will find the Advanced tab at the right of the screen. Click on it and a new screen appears.
3. Once you open the new screen, you will find the PROMPT and TASK functions with whatever selections you are doing in the package.
4. If you want, you can modify the prompt members, for example changing dimensions from CATEGORY to CURRENCY, or you can delete them completely if you don't want them. Be sure about what you want to do, and once you have made your changes, save all the steps through to the package closure.
Hope the above steps help with your issue.
Thanks,
Naresh Kumar Reddy.K

Similar Messages

  • Purpose of Update/Overwrite Hierarchy in Import master data package

    Hi All,
    Please let me know what the difference is between the Update and Overwrite options in the Import Master Data package. When I tried running the package, both options altered the hierarchy of the existing members.
    My requirement is just to add new member IDs; if an ID already exists, it should not affect those records, and only the new master data records should be updated.
    Thanks & Regards,
    Ramanathan

    Hi Ramanathan,
    You can run the Import Master Data DM package in BPC to perform a delta master data load. When you run the DM package, you get an option for loading master data from BW/BI where you can select:
    1. MERGE: this keeps the old data intact and also loads the new master data, and
    2. COPY AND REPLACE: this copies the new data and deletes the old master data.
    As per your requirement, you need to select the first option. If you select the Update option, it will not delete old master data; instead it updates the members and hierarchy according to the parent-child design you have given in the dimension member sheet.
    Hope this corresponds to your requirement.
    Rgds,
    Poonam

  • Update and Overwrite Option in Import Master Data Package

    Hello,
    In the Import Master Data package there is the option "Select method for loading master data". I don't understand very well the difference between the "Update" and "Overwrite" options. I have tested both and I get the same result.
    Can someone explain the difference between the two options?
    Thanks,
    Mathieu

    Option Overwrite: all properties are overwritten. If they are not maintained in the source file, they are deleted in the member sheet.
    Option Update: only the properties maintained in the source file are overwritten. The other ones are not changed.
    D

  • Steps to prepare and upload legacy master data excel files into SAP?

    Hi ABAP experts,
    We have a brand new ECC system installed, somehow configured but with no master or transaction data loaded. It is a new, empty system. We also have some legacy data in Excel files. We want to start loading some data into the SAP sandbox step by step and see how it works: test some transactions, see if the loaded data is good, and run other initial tests.
    A few questions arise here:
    - Can someone tell me what the process is for loading this data into the SAP system?
    - Must this Excel file be reworked or prepared somehow (fields, columns, etc.) in order to be ready for upload to SAP?
    - Users asked me how to prepare their legacy Excel files so they are ready for upload in SAP format. Is this an ABAPer's job or a functional consultant's job?
    - Or should the Excel files be converted to .txt files and then imported into SAP? Does it really make a difference whether the files are in Excel or .txt format?
    - Should the ABAPer determine the structure of those Excel files (to be ready for upload), and if yes, what are the technical rules here?
    - What tools should be used for these initial data loads? CATT, LSMW, batch input, or something else?
    - At which point should we test the data? I guess after the initial load?
    - What tools are used in all the steps before?
    - If someone can provide me with a step-by-step scenario or guide for loading some kind of initial master data, from .xls file alignment to the actual upload, that would be great.
    You can email me an upload guide or some Excel/txt file examples and screenshot documents to exercise with.
    Your help is appreciated!
    Jon

    Hi,
    Excel sheet uploading:
    http://www.sap-img.com/abap/upload-direct-excel.htm
    http://www.sap-img.com/abap/excel_upload_alternative-kcd-excel-ole-to-int-convert.htm
    http://www.sapdevelopment.co.uk/file/file_upexcel.htm
    http://www.sapdevelopment.co.uk/ms/mshome.htm

  • DM Package Abort while loading master data - BPC 10 NW

    Hello gurus,
    I am trying to load master data from a BW InfoObject to BPC. The package runs successfully if I just load the master data (without selecting the hierarchy node). However, when I select the hierarchy node, the package aborts without giving any details in the package log.
    Please note that the hierarchy for the BW InfoObject is time-dependent.
    Therefore, I was wondering if there are other requirements that should be in place (such as new SAP Notes, etc.) or any other procedures that should be followed when loading time-dependent master data from BW to BPC.
    Thanks in advance for all your valuable guidance.
    Best regards,
    Vijay

    Hi Vijay,
    Please have a look at t-code ST22. If it shows the exception 'CX_SY_RANGE_OUT_OF_BOUNDS', apply the following note:
    1957783 - An exception 'CX_SY_RANGE_OUT_OF_BOUNDS' occurs while
    loading an InfoObject hierarchy on BW 740
    Regards,
    Noura

  • I need tables for prepare the master data templates for PM

    Dear Experts,
    Can we prepare the master data templates by using the tables? If it's possible, how can we prepare them and what do we have to consider? Is this the right way or the wrong one?
    Please suggest how to prepare the master data templates for ongoing use, so that any small corrections can be made at usage time.
    Please help me.
    thanks a lot in advance

    Dear Jalu,
    Master data templates have to be prepared based on the client's requirements, after understanding their business.
    For example, in the equipment master there are roughly 60 to 70 fields; no client requires all of them, and some are inherited.
    After the client requirements are frozen, record an LSMW and prepare the template based on that.
    Regards,
    MLN Prasad

  • Loading Master Data with Flexible Update

    Hi,
    I have created an InfoObject, Business Partner (a master data bearing characteristic). Its attributes are Region, Sales Person, and Industry Code, which are master data bearing characteristics as well.
    I need to load data into the Business Partner from a CSV file.
    The layout of the CSV file is -
    BP number, BP text (long Text), Region Code, Region Desc (Med Text), Sales Person Code, Sales Person Desc (Med Text), Industry Code, Industry description (Med Text).
    How do I define the InfoSource to load this data?
    Appreciate any help with this.
    Thank You,
    Prashanth

    Hi,
    First of all, I need to note that there are two kinds of InfoSources: with direct and with flexible update.
    If you choose a direct IS, then when creating the IS you just enter the name of the InfoObject into which you are going to load data. The system will create an IS (communication structure). Open this IS for changing. The system will propose the communication structure; click on the bottom icon "Transfer structure/Transfer rules" and choose your flat file system as the 'Source system'. Agree when the system asks you to save the assignments (up to 3 times). Activate the TRs. Then click in the DataSource field. You may see other DataSources there (for texts, attributes, and hierarchies). Choose the other DataSources one by one and activate them.
    Now you can create an InfoPackage for a load; you can choose what kind of package it is going to be: for loading texts, attributes, or hierarchies.
    Note that in this case the structure of the flat file is proposed by the system, and you just need to prepare the flat file corresponding to the proposed structure (different for each of the 3 possible DataSources). Execute the InfoPackage.
    If you use a flexible IS, then you may insert into the communication structure the fields that you think you may need in the master data. Note that here you may have not only attributes but TEXTS also. Save the communication structure. Assign a flat file source system and activate the TRs.
    After that, on the RSA1 data providers tab, right-click and insert the IO as a data target, choosing your IO. Refresh the screen. You'll see up to three data targets. Create update rules for each of them. In the URs, map the fields in the IS to the fields in the URs.
    Best regards,
    Eugene

  • How to automate handling of dimension members? (master data loads)

    Hi,
    We want our users to be able to upload master data (i.e. dimension member data) from a flat file. It is only possible to upload transactional data from a flat file via the Data Manager...
    Any suggestions?
    Thanks,
    Peter

    Hi Peter,
    1. Use BULK INSERT to load the flat file into SQL Server and automate it using a SQL Server job or a Windows scheduled job. Then call the AdminTask_MakeDim package in DM. This can also be scheduled in BPC DM.
    You can also use SSIS packages to load flat file data into the SQL Server table.
    2. SSIS packages are not complex; you should learn to use them. I basically learned to use SSIS packages after starting to work with the BPC Data Manager. You can find various online tutorials on SSIS packages.
    The flow of data should be:
    flat file -> staging table in the BPC database -> copy from the staging table to the mbr<dimension> table -> admin task to process the dimension.
    The structure of the staging table should be identical to the mbr<dimension> table, or the mbr<dimension> table's structure should be a subset of the staging table's.
    To schedule the BPC DM packages:
    Open the eDATA menu, 'Run Package' window. There is an option 'New automation'; use it to create schedules for SSIS packages (this works for both BPC standard packages and customized packages).
    Hope it helps.
    Hi Sam,
    You can try VBA in Excel: prepare the dimension member sheet and process it. Basically it depends on our knowledge of, and ease with, the technology.
    I never had a chance to try this. Visualising it, it should be possible, but make sure about the data types when loading into Excel.
    For example, if CompCode is 0310, Excel will load it as 310 and omit the leading 0.
    Try it and share with us how it works.
    Cheers
    Uma

  • Periodic Transfer of Master Data

    Hello,
    I need assistance understanding the underlying configuration related to master data updates in R/3 and the resulting entries in GTS.
    Our setup:
    GTS (7.2) is a plug-in to CRM
    CRM & R/3 in same logical system group (GTS)
    EWM in a unique logical system group (FEEDSYS) - based on the SAP config guide and direction, we split the feeder logical systems from the GTS logical systems; previously all logical systems had resided in a single logical system group (GTS)
    EWM > system for warehouse functions
    R/3 > feeder system for material master
    R/3 > GTS Plug-In activated (per SAP direction)
    R/3 > Configure Control Settings for Document Transfer > MMOA mapped to our purchase order doc type at logical system and logical system group level
    R/3 > Configure Control Settings for Document Transfer > MMOB mapped to our inbound delivery doc type (EL) at logical system and logical system group level for DTAVI
    R/3 > Configure Control Settings for Document Transfer > MMOC mapped to our movement types 101 and 601, in addition to others
    I mention the PO and inbound delivery setup because we do see changes to material quantity in R/3 after creating POs and the subsequent inbound deliveries.
    We are trying to implement GTS as a bonded warehouse, storing material as duty-unpaid only. We are not concerned with duty-paid status for this implementation.
    There has been an initial transfer of material to GTS from R/3 via /SAPSLL/MENU_LEGALR3. We have the GTS master data marked as BondedWH. Although t-code MMBE shows quantities for the material, they do not match the quantities displayed in the GTS stock overview.
    The SAP GTS config guide states, on page 40, to create a job for RBDMIDOC with a variant and have it scheduled. We still need to do this part, and it is my understanding that this is to sync the data. With this, I am also trying to understand the use of change pointers in R/3, in addition to enhancement project SLLLEG04 for the user exits for material masters. I don't know if we need to worry about the SLLLEG04 enhancement or not.
    I know there is more information I can provide about our current setup, but what major things are there to consider to get the stock overview quantities in GTS to actually reflect what's in MMBE, AND to have the stock displayed as duty-unpaid rather than duty-paid via the initial transfer function?
    Any advice or experience getting R/3 to communicate correctly with GTS, especially in light of a bonded warehouse, is greatly appreciated.
    Regards,
    Brian

    Hello Uwe,
    Our intent for GTS is the use of bonded warehouse functionality.
    As part of our preparation for go-live activities, we will stock the warehouse with material; this is a distribution center for service parts (replacement parts). The stocking activity will be done through STOs. As a result, we expect all the material showing up in GTS to be duty-unpaid. We will continue to receive material into the warehouse via STOs, as no base receipts are required at this time.
    When we process orders through the warehouse system (EWM), we expect the quantities in GTS to reflect what has been shipped (exports). As such, we will be able to reconcile any stock discrepancies with the authorities.
    My question about the initial transfer reflects my attempt to get some inventory showing in GTS, so that after I process the inbound deliveries based on the POs (STOs) I see the inventory building up in GTS.
    The use of the initial stock transfer may not be correct given my activities described above, but with very little documentation specifying the actual process of establishing a bonded warehouse, especially in our landscape, I am working through this on a trial-and-error basis.
    I hope this provides clarity on our goals. I will continue to research the change pointers, as I am still not sure of their relevance.
    Regards,
    Brian

  • Not able to see the records in master data

    Hi friends,
    My requirement needs the data to be loaded from an RDM table. I created a DataSource in R/3, and using the extractor checker I am able to see the data. I created two InfoPackages, one for attributes and the other for texts, replicated the DataSource, and loaded the data. I am able to see the data in the PSA, and when I go into the monitor for the InfoPackage, it says the data has been loaded and shows no errors. But I am not able to see this data in the master data InfoObject. Can anyone please help me out?
    Mouli

    Hi
    Check your InfoPackage settings: whether the load goes only up to the PSA or into the master data table as well.
    Open the master data table in SE11 and check the OBJVERS field to see whether the master data is activated or not (see the sketch below).
    Regards
    GSK
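    As an illustration of that OBJVERS check, here is a minimal hedged sketch; the table name /BIC/PYOURIOBJ is a placeholder for the actual P table of the InfoObject, which you can look up in SE11:

        REPORT z_check_md_active.

        * Hypothetical sketch: count the active (OBJVERS = 'A') records in
        * the master data P table of the InfoObject. Replace /BIC/PYOURIOBJ
        * with the real table name from SE11.
        DATA lv_cnt TYPE i.

        SELECT COUNT(*) FROM /bic/pyouriobj INTO lv_cnt
          WHERE objvers = 'A'.

        WRITE: / 'Active master data records:', lv_cnt.

    If the data is in the P table only with a modified version and no active records show up, activating the master data (attribute change run) is usually the missing step.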

  • Master Data Text Used as Line Items in ODS

    Hi Experts!
    I would just like to know if I can use the text I loaded for the master data Revenue Type from R/3 in BW.
    What I have done already is create a customized extractor in R/3 under the CO-PA-IO node in RSA6 (since I think this is where this data is coming from / should belong). A consultant of ours (before I got here) had already set up the other individual extractors for Sub Product A & B, Product Line, and others seen in the FS10N transaction, document line item details.
    Once I created it, I replicated it in BW and set up the InfoSource. I created an InfoObject as master data with text, and I loaded data from the InfoSource. After loading, when I maintain master data for the characteristic I created in BW, I can see the pulled data that I loaded from R/3. It is correct.
    Now this is where I am having a problem. I need to know how to use this master data text when I load line items (0EC_PCA_3) into my PCA ODS. As it is right now, I have set up the EC_PCA DataSource, replicated and activated it, and I have already added a Revenue Type characteristic to this DataSource and mapped it accordingly. However, I am unable to pull any data when I manage the contents of the ODS, although I can see it in maintain master data.
    How can I set it up so that I am also able to match my line items with their corresponding Revenue Type text in the ODS results? It is important to note that for the Sub Product A, B, and Product Line fields, the previous setup is able to do so, and I am able to see results for these in the ODS line item results. These were also set up as master data with text in BW by our previous consultant, and I followed the same procedure and setup.
    I feel that I am just missing a step in telling BW that the master data text should also be matched to the line items in the ODS during loading.
    Kindly assist; thanks in advance!

    Hi Simon,
    Thanks for the reply!
    First of all, yes, I think we are on the same page regarding what I want to do... I have created a master data InfoObject in BW, created a customized extractor in R/3 under the CO-PA-IO node, and mapped them to each other in BW, such that when I execute a package on the InfoSource in BW, I am able to load the master data text into the InfoObject Rev Type in BW.
    Kindly explain more about "you should have created the revenue type as a text datasource and loaded the data into the text and not the master data". Do you mean that I should have just created a regular DataSource (i.e., flexible update into any data target InfoSource) rather than a direct-update master data DataSource?
    However, if this is what you meant, please remember that the other characteristics such as Sub Product A, B, Product Lines, Sales Region, Business Center, etc. (as seen in the FS10N t-code per G/L) are reflected in the ODS per line item, and they are set up as master data. So I was thinking maybe I just missed a step in the mapping.
    Is this what you meant? My main goal is to have the line items I load into the ODS matched to the particular revenue type for that G/L's profit center (seen in the FS10N t-code in R/3).
    Thanks again.

  • Master Data is not Getting Updated Properly.

    Dear Expert,
    I am trying to pull the data with an InfoPackage, but the text is not getting updated.
    The master data object is 0BPARTNER. To update this master data, we have 2 InfoPackages,
    0BPARTNER_ATTR and 0BPARTNER_TEXT. But the text is not getting updated (fields like e-mail address, Address 1, Address 2, Address 3, Address 4, phone number, etc.).
    What could be the reason for this, and how can I solve this issue?
    Thanks,
    Utpal

    Hi,
    Could you please check the data in RSA3?
    If everything is there on the R/3 side, then please check whether you are giving any
    condition in the InfoPackage.
    Please run the InfoPackage without any conditions.
    Thanks,
    Saveen Kumar

  • Performance: reading huge amount of master data in end routine

    In our 7.0 system, a full load runs each day from DSO X to DSO Y, in which master data from six characteristics of DSO X is read into about 15 fields in DSO Y. The load involves about 2 million records, which are all transferred each day. The master data tables all contain between 2 and 4 million records. Before this load starts, DSO Y is emptied. DSO Y is write-optimized.
    At first, we designed this with the standard "master data reads", but this resulted in load times of 4 hours, because all master data is read with single lookups. We redesigned it and now fill all master data attributes in the end routine, after filling internal tables with the master data values corresponding to the data package:
    *   Read 0UCPREMISE into temp table
        SELECT ucpremise ucpremisty ucdele_ind
          FROM /BI0/PUCPREMISE
          INTO CORRESPONDING FIELDS OF TABLE lt_0ucpremise
          FOR ALL ENTRIES IN RESULT_PACKAGE
          WHERE ucpremise EQ RESULT_PACKAGE-ucpremise.
    And when we loop over the data package, we write something like:
        SORT lt_0ucpremise BY ucpremise.  "required for the BINARY SEARCH below

        LOOP AT RESULT_PACKAGE ASSIGNING <fs_rp>.
          READ TABLE lt_0ucpremise INTO ls_0ucpremise
            WITH KEY ucpremise = <fs_rp>-ucpremise
            BINARY SEARCH.
          IF sy-subrc EQ 0.
            <fs_rp>-ucpremisty = ls_0ucpremise-ucpremisty.
            <fs_rp>-ucdele_ind = ls_0ucpremise-ucdele_ind.
          ENDIF.
    *     all other MD reads
        ENDLOOP.
    So the above pattern is repeated for all the master data we need to read from. This method is quite a bit faster (1.5 hr), but we want to make it faster still. We noticed that reading the master data into the internal tables still takes a long time, and this has to be repeated for each data package. We want to change this. We have now tried a similar method, but we now load all master data into the internal tables without filtering on the data package, and we do this only once:
    *   Read 0UCPREMISE into temp table
        SELECT ucpremise ucpremisty ucdele_ind
          FROM /BI0/PUCPREMISE
          INTO CORRESPONDING FIELDS OF TABLE lt_0ucpremise.
    So when the first data package starts, it fills all the master data values, about 95% of which we would need anyway. So that the following data packages can use the same tables and don't need to fill them again, we placed the definitions of the internal tables in the global part of the end routine. In the global part we also write:
    DATA: lv_data_loaded TYPE C LENGTH 1.
    And in the method we write:
    IF lv_data_loaded IS INITIAL.
      lv_data_loaded = 'X'.   "mark the load as in progress
    * load all internal tables here
      lv_data_loaded = 'Y'.   "mark the load as complete
    ENDIF.

    * packages that arrive while the tables are still being filled wait here
    WHILE lv_data_loaded NE 'Y'.
      CALL FUNCTION 'ENQUEUE_SLEEP'
        EXPORTING
          seconds = 1.
    ENDWHILE.

    LOOP AT RESULT_PACKAGE ASSIGNING <fs_rp>.
    * assign all data
    ENDLOOP.
    This makes sure that another data package that has already started "sleeps" until the first data package is done filling the internal tables.
    Well, this all seems to work: it now takes 10 minutes to load everything into DSO Y. But I'm wondering if I'm missing anything. The system seems to handle loading all these records into internal tables just fine, but any improvements or critical remarks are very welcome.

    This is a great question, and you've clearly done a good job of investigating this, but there are some additional things you should look at and perhaps a few things you have missed.
    Zephania Wilder wrote:
    At first, we designed this with the standard "master data reads", but this resulted in load times of 4 hours, because all master data is read with single lookups.
    This is not accurate. After SP14, BW does a prefetch and buffers the master data values used in the lookup. Note [1092539|https://service.sap.com/sap/support/notes/1092539] discusses this in detail. The important thing, and most likely the reason you are seeing individual master data lookups on the DB, is that you must manually maintain the MD_LOOKUP_MAX_BUFFER_SIZE parameter to be larger than the number of lines of master data (from all characteristics used in lookups) that will be read. If you are seeing one SELECT statement per line, then something is going wrong.
    You might want to go back and test the master data lookups with this setting and see how fast it goes. If memory serves, the BW master data lookup uses an approach very similar to your second example (1.5 hrs), though I think it first loops through the source package and extracts the lists of required master data keys, which is probably faster than your "FOR ALL ENTRIES IN RESULT_PACKAGE" statement if RESULT_PACKAGE contains very many duplicate keys.
    I'm guessing you'll get down to at least the 1.5 hrs that you saw in your second example, but it is possible that it will get down quite a bit further.
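    For illustration, a minimal sketch of that key-extraction idea, written against the lt_0ucpremise example from the original post (the table and field names come from the post above, not from the actual BW prefetch implementation):

        * Hedged sketch: collect each lookup key only once, then hit the DB
        * with the de-duplicated key list instead of the full RESULT_PACKAGE
          DATA: lt_keys TYPE SORTED TABLE OF /bi0/pucpremise-ucpremise
                        WITH UNIQUE KEY table_line.

          LOOP AT RESULT_PACKAGE ASSIGNING <fs_rp>.
            INSERT <fs_rp>-ucpremise INTO TABLE lt_keys. "duplicates are rejected
          ENDLOOP.

          IF lt_keys IS NOT INITIAL. "FOR ALL ENTRIES on an empty table selects everything
            SELECT ucpremise ucpremisty ucdele_ind
              FROM /BI0/PUCPREMISE
              INTO CORRESPONDING FIELDS OF TABLE lt_0ucpremise
              FOR ALL ENTRIES IN lt_keys
              WHERE ucpremise EQ lt_keys-table_line.
          ENDIF.

    The sorted table with a unique key keeps the FOR ALL ENTRIES list as small as possible, which is exactly what helps when RESULT_PACKAGE contains many duplicate keys.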
    Zephania Wilder wrote:
    This makes sure that another data package that has already started "sleeps" until the first data package is done filling the internal tables.
    This sleeping approach is not necessary, as only one data package will be running at a time in any given process. I believe that the "global" internal table is not shared between parallel processes, so if your DTP is running with three parallel processes, this table will just get filled three times. Within a process, all data packages are processed serially, so all you need to do is check whether or not it has already been filled. Or are you doing something additional to export the filled lookup table into a shared memory location?
    Actually, you have your global data defined with the statement "DATA: lv_data_loaded TYPE C LENGTH 1.". I'm not completely sure, but I don't think that this data will persist from one data package to the next. Data defined in the global section using "DATA" is global to the package start, end, and field routines, but I believe it is discarded between packages. I think you need to use "CLASS-DATA: lv_data_loaded TYPE C LENGTH 1." to get the variables to persist between packages. Have you checked in the debugger that you are really only filling the table once per request and not once per package in your current setup? << This is incorrect - see the next posting for the correction.
    Otherwise the third approach is fine, as long as you are comfortable managing your process memory allocations and you know the maximum size your master data tables can reach. On the other hand, if your master data tables grow regularly, then you are eventually going to run out of memory and start seeing dumps.
    Hopefully that helps out a little bit. This was a great question. If I'm off-base with my assumptions above and you can provide more information, I would be really interested in looking at it further.
    Edited by: Ethan Jewett on Feb 13, 2011 1:47 PM

  • Extraction of certain master data not working

    Hi, we are on BI7 and some of the Business Content master data extractions are not working correctly. Some master data works fine, so communication between the systems is working, but some master data fields, e.g. 0co_area, 0chrt_accts and more, are not working. We get this with both 3.x and new DataSources.
    The data does get pulled through, but the upload just never goes green and keeps running. If one looks at the batch jobs (SM37) in the appropriate source system, they keep running until they are manually stopped.
    I am only experiencing this problem with data pulled from the production system. I am pulling it into both my test box and my production box, and it fails in both, whereas all other ECC systems are fine.
    In the monitor there is a warning, and the following are the details from the monitor:
    Overall status: Missing Messages or warnings
       Requests (messages):  Everything OK  (green)       Data request arranged  (green)
           Confirmed with:  OK (green)
       Extraction (messages): Missing messages (yellow)
           Data request received (green)
           Data selection scheduled (green)
           Missing message:  Number of sent records (yellow)
           Missing message: Selection completed (yellow)
       Transfer (IDocs and TRFC): Missing messages or warnings (yellow)
           Request IDoc: Application document posted (green)
           Info IDoc 1 : Application document posted (green)
           Data Package 1 : arrived in BW ; Processing : Selected number does not agree with transferred n (yellow)
           Info IDoc 2 : Application document posted (green)
       Processing (data packet) : Everything OK (green)
           Inbound Processing (32 records) : No errors (green)
           Update PSA (32 records posted) : No errors (green)
           Processing end : No errors (green)

    Thanks for the responses. The appropriate IDocs work fine with transaction BD87. I looked in SM58 earlier during the day and there was no error; I just checked again (8 PM) and found some entries with error texts for the BW background user: 6 errors within 10 seconds. Something to do with 'ERROR REQU... PG# IN 1'. I do not have authorisation at this moment to check the specific requests, but I am not sure whether the master data extractions would have caused these.
    In the job log for the problematic jobs, only 1 IDoc gets created, unlike the other successful ones that have 3 IDocs (not sure if this means anything).

  • Problem with Master Data Load

    Dear Experts,
    If somebody can help me with the following case, please give me a solution. I'm working on a BI 7.0 project where I needed to delete the master data for an InfoObject (material). The way I did this was through t-code "S14". After that, I tried to load the master data again, but the process broke and only half the data was loaded.
    This is the error:
    Second attempt to write record 'YY99993' to /BIC/PYYYY00006 failed
    Message no. RSDMD218
    Diagnosis
    During the master data update, the master data tables are read to determine which records of the data package that was passed have to be inserted, updated, or modified. Some records are inserted in the master data table by a concurrently running request between reading the tables at the start of package processing and the actual record insertion at the end of package processing.
    The master data update tries to overwrite the records inserted by the concurrently running process, but the database record modification returns an unexpected error.
    Procedure
    • Check if the values of the master data record with the key specified in this message are updated correctly.
    • Run the RSRV master data test "Time Overlaps of Load Requests" and enter the current request to analyze which requests are running concurrently and may have affected the master data update process.
    • Re-schedule the master data load process to avoid such situations in the future.
    • Read SAP Note 668466 to get more information about master data update scheduling.
    On the other hand, the SID table of the product master data is empty.
    Thanks for your help!
    Luis

    Dear Daya,
    Thanks for your help; I applied your suggestion and sent the following details to OSS:
    We are on BI 7.0 (system ID DXX).
    While loading master data for InfoObject XXXX00001 (the main characteristic in our system, like material), we are facing the following error:
    Yellow warning: "Second attempt to write record '2.347.263' to /BIC/XXXX00001 was successful"
    We are loading the master data from DataSource ZD_BW_XXXXXXX (from the APO system) through the DTP ZD_BW_XXXXX / XXX130 -> XXXX00001.
    The master data tables (S, P, X) are not updated properly.
    The following repair actions have been taken so far:
    1. Delete all related transactional and master data, checking all relations (t-code SLG1 -> RSDMD, MD_DEL)
    2. Follow the instructions from OSS Note 632931 (t-code RSRV)
    3. Run report RSDMD_CHECKPRG_ALL from t-code SE38 (using both the check and repair options).
    After deleting all the data, the previous tests were OK, but once we load new master data, the same problem appears again, and the report RSDMD_CHECKPRG_ALL gives the following error:
    "Characteristic XXXX00001: error found during this test."
    The RSRV check "Compare sizes of P and X and/or Q and Y tables for characteristic XXXX00001" shows the following:
    Characteristic XXXX00001: tables /BIC/PXXXX00001 and /BIC/XXXXX00001 are not consistent; 351.196 deviations.
    It seems that our problem is described in OSS Note 1143433 (SP13), even though we are already on SP16.
    Could somebody please help us and let us know how to solve the problem?
    Thanks for everything,
    Luis
