SMD (Shared master data tool)

Hi All,
Can anybody provide any pointers on SMD, the Shared Master Data tool, and how to use change pointers along with it to track changes in material master data?
Regards,
Rahul

hi,
Shared Master Data (SMD) is the component that drives the distribution of changes made to centrally maintained master data. The SMD component writes change pointers and their status to tables BDCP and BDCPS. For example, whenever you change data in a material, the changes are logged in two tables, CDHDR and CDPOS. The program RBDMIDOC then reads these change pointers and distributes the changes. Change document objects are the components in which the relevant fields are configured so that any changes to those fields are recorded; you can find them in transaction SCDO.
For the material master, the change document object is MATERIAL.
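To make that concrete, here is a minimal sketch that lists the changes logged for one material straight from CDHDR/CDPOS; the report name and material number are placeholders, and in practice RBDMIDOC evaluates the change pointers in BDCP/BDCPS for you.

* Minimal sketch: list the changed fields logged for one material.
* Change document object MATERIAL; the material number is only an example.
REPORT zshow_material_changes.

DATA: lt_cdhdr TYPE STANDARD TABLE OF cdhdr,
      lt_cdpos TYPE STANDARD TABLE OF cdpos,
      ls_cdpos TYPE cdpos.

SELECT * FROM cdhdr
  INTO TABLE lt_cdhdr
  WHERE objectclas = 'MATERIAL'
    AND objectid   = 'TEST-MATERIAL-001'.

IF lt_cdhdr IS NOT INITIAL.
  SELECT * FROM cdpos
    INTO TABLE lt_cdpos
    FOR ALL ENTRIES IN lt_cdhdr
    WHERE objectclas = lt_cdhdr-objectclas
      AND objectid   = lt_cdhdr-objectid
      AND changenr   = lt_cdhdr-changenr.
ENDIF.

* Each CDPOS line holds the table name, field name, old value and new value.
LOOP AT lt_cdpos INTO ls_cdpos.
  WRITE: / ls_cdpos-tabname, ls_cdpos-fname,
           ls_cdpos-value_old, ls_cdpos-value_new.
ENDLOOP.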
Reward points if you find this useful.
Rgds

Similar Messages

  • Shared Master Data - SMD Tool Configuration

    Hello Experts,
    I have a requirement to generate ALE IDocs for customer and vendor master changes, so we are planning to use the Shared Master Data tool of SAP.
    I have done the settings in BD52, BD50 and BD61. But do I need to configure BD64 as well?
    My requirement is to generate the IDoc in the same client: when master data is changed, IDocs should be triggered and stored in a shared folder for another system to pick up.
    So we need the source and destination to be the same client.
    I am not sure whether Shared Master Data will work within the same client. Please let me know your expert thoughts.

    hi,
    Shared Master Data (SMD) is the component that drives the distribution of changes made to centrally maintained master data. The SMD component writes change pointers and their status to tables BDCP and BDCPS. For example, whenever you change data in a material, the changes are logged in two tables, CDHDR and CDPOS. The program RBDMIDOC then reads these change pointers and distributes the changes. Change document objects are the components in which the relevant fields are configured so that any changes to those fields are recorded; you can find them in transaction SCDO.
    For the material master, the change document object is MATERIAL.
    Reward points if you find this useful.
    Rgds

  • Issue with Master data change workflow in GRC PC 10.1

    Hi,
    I have configured the workflow for master data changes in GRC PC 10.1; however, the approver is not able to view the request in the inbox, whereas the organization owner is able to see the review for the change request in the inbox.
    Please let me know if there is any configuration where we need to set the approvers for the workflow, so that the system creates a request for the approver.
    Regards,
    Giridhar

    Dear Giridhar,
    Please, check the following configuration:
    1. Activate the workflow (Path: SPRO -> GRC -> Shared Master Data Settings -> Activate Master Data Changes)
    2. Check whether the "Approval" checkbox is ticked for the selected entity
    3. If you activate master data changes, check whether the correct roles are indicated under Maintain Custom Agent Determination Rules in the workflow settings:
    Business Event: 0FN_MDCHG_APPR
    Role: Select the role you gave to the approver
    After performing this configuration, a task must appear in the work inbox.
    Best Regards,
    Fernando

  • Master Data Historization

    Hello,
    since DRM is used as a master data tool, is there any possibility to historize master data?
    Let's say I have a dimension for customers and a customer changes his name. How would historization work in DRM? Is there a possibility to automate this process?
    Kind Regards,
    Pascal

    DRM keeps track of all changes in the RM_Transaction_History table, and you can run an audit over those changes. If you are thinking beyond DRM, you could set up an in-house EDW, feed your metadata from DRM into it, and use the SCD types there to maintain the history and drive your reporting.

  • Master Data Services not available under shared feature while installing SQL server 2012

    Hi,
    I am trying to install Master Data Services but do not see the option to select MDS under the shared features when going through the SQL Server 2012 installation. I have the SQL Server 2012 SP1 (64-bit) install files. I have also installed SP2. I haven't found anything online about the issue.
    Can someone please advise?
    I have a screenshot of the installation screen which I will attach as soon as I am able to get my account verified. Thanks!

    Hi Revees,
    This might be a very naïve question, and also out of the original scope of the thread.
    We are thinking of going with the Developer edition. We have 2-3 developers and some other testers and business users.
    1) I understand that we need a developer license for each developer. But would we need a license for the business users? Can they have some sort of read access to the DBs?
    2) If a developer has an MSDN subscription, would they need to purchase the license too, assuming we purchase the Developer edition of the software (and do not download it using the MSDN subscription)?
    Thanks for your assistance!

  • SAP Master data migration tools

    Hi,
    I would like to know which SAP standard tools are available for master data migration.
    We have to migrate data from legacy systems to SAP, and we have to use only SAP standard master data migration tools.
    Kindly share the details.
    Thanks and Regards,
    Raveendra

    Raveendra,
    SAP migrates data from legacy systems using standard tools such as LSMW, BDC and BAPIs. Within LSMW you have the batch input, batch input recording, BAPI and IDoc options; depending on the requirement you can choose any one of them. A BAPI is advisable rather than the BDC method.
    Also, for the utilities industry SAP provides the IS-U migration workbench (EMIGALL).
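    To illustrate the BAPI option, here is a hedged sketch using the standard BAPI_MATERIAL_SAVEDATA for a single material with only a couple of fields filled; the material number and field values are placeholders, and a real load would also supply the description and the other mandatory view data mapped from the legacy file.

    * Sketch: create/change one material via BAPI_MATERIAL_SAVEDATA.
    * All values below are illustrative placeholders.
    DATA: ls_headdata    TYPE bapimathead,
          ls_clientdata  TYPE bapi_mara,
          ls_clientdatax TYPE bapi_marax,
          ls_return      TYPE bapiret2.

    ls_headdata-material   = 'TEST-MAT-001'.
    ls_headdata-ind_sector = 'M'.        "industry sector
    ls_headdata-matl_type  = 'FERT'.     "material type
    ls_headdata-basic_view = 'X'.        "maintain the basic data view

    ls_clientdata-base_uom  = 'EA'.
    ls_clientdatax-base_uom = 'X'.       "flag which field is being supplied

    CALL FUNCTION 'BAPI_MATERIAL_SAVEDATA'
      EXPORTING
        headdata    = ls_headdata
        clientdata  = ls_clientdata
        clientdatax = ls_clientdatax
      IMPORTING
        return      = ls_return.

    IF ls_return-type CA 'EA'.
      CALL FUNCTION 'BAPI_TRANSACTION_ROLLBACK'.
    ELSE.
      CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'
        EXPORTING
          wait = 'X'.
    ENDIF.

    WRITE: / ls_return-type, ls_return-message.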

  • New SAP Tools for faster Master Data Upload?

    Hi All!
    I am interested in getting some detailed information on different tools that are used for faster master data upload. Please let me know about the tools apart from LSMW, BAPI, eCATT and BDC.
    We basically want to know the following things:
    1. Which tools are available for this master data upload?
    For example SAP MDM, Info Shuttle, and other available new-age tools.
    2. How do they compare in performance and features?
    Please provide a detailed performance/feature comparison of the different new-age tools.
    Thanking you in advance.
    Regards,

    Hi Amar,
    regarding SAP MDM you can read advantages and characteristics here: http://www.sap.com/platform/netweaver/components/mdm/index.epx
    Hope this helps you,
    Vito

  • Master data sharing

    hi,
    In the extended star schema, master data is stored separately. Can this master data be shared across other star schemas as well?
    thanks.
    bwlearner

    Hello,
    I will explain with a simple sales example.
    1. Create these InfoObjects:
    CHARACTERISTICS:
    Sales Representative ID    - SRPID
    Sales Representative Name  - SNAME  (attribute 1 of SRPID)
    Sales Representative Age   - SAGE   (attribute 2 of SRPID)
    Sales Representative Phone - SPHONE (attribute 3 of SRPID)
    Sales Region               - SREG   (external characteristic in hierarchy)
    Sales Office               - SOFF   (external characteristic in hierarchy)
    KEY FIGURES:
    Product Quantity        - PQTY
    Sales Amount            - SALES
    Profit Amount           - PROFIT
    2. For the InfoObject SRPID, maintain the attributes listed above and the external characteristics in hierarchies.
    3. Load the master data using direct update.
    4. Create an InfoSource (flexible update) IS_SRP_TD with the following objects:
    SRPID, PQTY, SALES & PROFIT.
    5. Create an InfoCube (IC_SRP) and update rules using the InfoSource IS_SRP_TD, and load the transactional data.
    Now you can create a query on the InfoCube: drag and drop SRPID into the columns and the key figures into the rows. Then, on SRPID under columns, open the context menu (right-click) and add the 3 attributes; open the context menu again, choose Properties and Display Hierarchy, check Active and select the hierarchy name.
    Now save and execute your query; the report will display the InfoCube data with the attributes and the hierarchy.
    You can verify this with the following tables:
    Fact table - /BIC/FIC_SRP (this table has KEY_IC_SRP1, the dimension table key)
    Dimension table - /BIC/DIC_SRP1 (this table has the DIMID field, the dimension table key)
    This is where the fact and dimension tables are linked.
    The dimension table /BIC/DIC_SRP1 also has the SID_SRPID field, the master data ID that links it to the following SID table.
    SID table - /BIC/SSRPID (this table has the InfoObject field /BIC/SRPID, the SID (master data ID), CHCKFL, DATAFL, etc.)
    Via the SID table, the system then reaches the P, T and H (master data, text and hierarchy) tables.
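    As a rough illustration of how those tables link up, the generated tables from this example could be read like this (table and field names are the generated ones mentioned above; treat it as a sketch, not something to copy as-is):

    * Sketch: follow one fact table row to its master data value via the
    * dimension and SID tables generated for this example.
    DATA: ls_fact TYPE /bic/fic_srp,
          ls_dim  TYPE /bic/dic_srp1,
          ls_sid  TYPE /bic/ssrpid.

    SELECT SINGLE * FROM /bic/fic_srp INTO ls_fact.            "any fact row
    SELECT SINGLE * FROM /bic/dic_srp1 INTO ls_dim
      WHERE dimid = ls_fact-key_ic_srp1.                       "fact -> dimension
    SELECT SINGLE * FROM /bic/ssrpid INTO ls_sid
      WHERE sid = ls_dim-sid_srpid.                            "dimension -> SID

    WRITE: / 'Sales representative:', ls_sid-/bic/srpid.       "characteristic value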
    Hope this clears it up?
    Best Regards....
    Sankar Kumar
    +91 98403 47141

  • How the master data of multi language is shared in the info cubes

    hi friends,
    How is multi-language master data shared in the InfoCubes?
    bye bye
    venkata ramana reddy.G

    Hi Venkata,
    In the InfoCube you have the characteristic key. Then you have a text master data table whose table key is the characteristic value (the one in the cube) plus the language, with the text in that language held in further fields (short, medium, long).
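    A minimal sketch of such a read, assuming a custom characteristic whose generated text table is /BIC/TSRPID (naming follows the SRPID example earlier on this page; LANGU and TXTSH/TXTMD/TXTLG are the usual text table fields, and the value is a placeholder):

    * Sketch: read the medium text for one characteristic value in the logon language.
    DATA ls_text TYPE /bic/tsrpid.

    SELECT SINGLE * FROM /bic/tsrpid
      INTO ls_text
      WHERE /bic/srpid = 'REP001'      "characteristic value as stored in the cube
        AND langu      = sy-langu.     "language key
    IF sy-subrc = 0.
      WRITE: / ls_text-txtmd.
    ENDIF.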
    Hope this helps.
    Regards,
    Diego

  • Master data sharing using ext. star schema

    Hi,
    I have understood the concept of the extended star schema.
    I understand that one of the advantages of the extended star schema is that master data can be shared: since the master data is stored separately, other star schemas can also share this master data, provided it is of the same relevance (i.e. the same InfoObjects are used in those star schemas).
    Please confirm whether my understanding is right, and if so,
    any idea or suggestion on how to demonstrate this?
    Points will be given for good answers.
    thanks.
    bwlearner

    Hey,
    To map this to programming: master data tables are like a GLOBAL declaration and InfoCubes (fact tables) are like a LOCAL declaration. Any number of InfoCubes can access one particular master data table.
    Assume sales data coming from 5 regions. Here 0CUSTOMER and its attributes update the same master data table, while the data fields are stored in each InfoCube. So all 5 InfoCubes access the same master data tables via the dimension and SID tables.
    Clear?
    Best Regards....
    Sankar Kumar
    +91 98403 47141

  • Is LSMW tool only used to update master data??

    Hi experts,
    Is the LSMW tool only used to update master data, i.e. for updating the material master, customer master, etc.?
    Or can it be used to update configuration tables as well? When we do configuration, a CTS (transport) request is created, so how is the CTS generation handled?
    Can anyone please explain this to me in detail?
    Thanks in advance,
    Regards,
    N.Sreelatha

    Hi
    LSMW is used to transfer legacy data from one system to another.
    Obviously, configuration objects should be transported through CTS only, not with LSMW.
    Cheers,
    Hakim

  • Master data load - conversion tool

    Hi all,
    We need to extract data (around 60 million entries) from the current system, convert it, and upload it into cubes in the newly merged system.
    Does anyone have good advice about a tool that can accommodate such an amount of data?
    Thanks in advance for suggestions!
    Agnieszka

    Hi Agnieszka,
    I am not quite sure about your issue. Are you talking about just master data or transactional data as well? What type of conversion do you want to do?
    Please explain a bit more.
    regards
    Siggi
    PS: Welcome to the SDN!

  • Upload the master data in abap-hr using Lsmw tool

    Hi Friends,
    Can anyone help me with how to upload master data in ABAP-HR using LSMW?
    What needs to be filled in during the recording?
    If any screenshots are available, please provide them.
    Thanks and Regards,
    Sai.

    http://www.scmexpertonline.com/downloads/SCM_LSMW_StepsOnWeb.doc
    Try this .....

  • Steps to prepare and upload legacy master data excel files into SAP?

    Hi abap experts,
    We have a brand new installed ECC system, somewhat configured but with no master or transaction data loaded. It is a new, empty system. We also have some legacy data in Excel files. We want to start loading some data into the SAP sandbox step by step and see how it works: test some transactions, check whether the loaded data is good, and run other initial tests.
    A few questions are raised here:
    - Can someone tell me what the process of loading this data into the SAP system is?
    - Must this Excel file be reworked/prepared somehow (fields, columns, etc.) in order to be ready for upload to SAP?
    - Users asked me how to prepare their legacy Excel files so they are ready in SAP format for upload. Is this an ABAPer's job or a functional consultant's job?
    - Or should the Excel files be converted to .txt files and then imported into SAP? Does it really make a difference whether the files are in Excel or .txt format?
    - Should the ABAPer determine the structure of those Excel files (to be ready for upload), and if yes, what are the technical rules here?
    - What tools should be used for these initial data loads? CATT, LSMW, batch input, or something else?
    - At which point should we test the data? I guess after the initial load?
    - What tools are used in all the steps before that?
    - If someone can provide me with a step-by-step scenario or guide for loading some kind of initial master data, from .xls file alignment to the real upload, that would be great.
    You can email me an upload guide or some Excel/txt file examples and screenshot documents to exercise with.
    Your help is appreciated!
    Jon

    hi,
    For Excel sheet uploading, see:
    http://www.sap-img.com/abap/upload-direct-excel.htm
    http://www.sap-img.com/abap/excel_upload_alternative-kcd-excel-ole-to-int-convert.htm
    http://www.sapdevelopment.co.uk/file/file_upexcel.htm
    http://www.sapdevelopment.co.uk/ms/mshome.htm
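    As a small, hedged sketch of the approach those links describe: the standard function module ALSM_EXCEL_TO_INTERNAL_TABLE reads an .xls file cell by cell into a generic table, which you then map to your own structure (the file path and column/row limits below are just example values).

    * Sketch: read an Excel file into an internal table with ALSM_EXCEL_TO_INTERNAL_TABLE.
    DATA: lt_excel TYPE TABLE OF alsmex_tabline,
          ls_excel TYPE alsmex_tabline.

    CALL FUNCTION 'ALSM_EXCEL_TO_INTERNAL_TABLE'
      EXPORTING
        filename                = 'C:\temp\materials.xls'
        i_begin_col             = 1
        i_begin_row             = 2        "skip the header row
        i_end_col               = 10
        i_end_row               = 5000
      TABLES
        intern                  = lt_excel
      EXCEPTIONS
        inconsistent_parameters = 1
        upload_ole              = 2
        OTHERS                  = 3.

    IF sy-subrc <> 0.
      WRITE: / 'Upload failed.'.
    ELSE.
      * Each line of lt_excel holds ROW, COL and VALUE of one cell;
      * from here you map the cells into your target structure.
      LOOP AT lt_excel INTO ls_excel.
        WRITE: / ls_excel-row, ls_excel-col, ls_excel-value.
      ENDLOOP.
    ENDIF.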

  • Performance: reading huge amount of master data in end routine

    In our 7.0 system, a full load runs each day from DSO X to DSO Y, in which master data from six characteristics of DSO X is read into about 15 fields of DSO Y. DSO Y contains about 2 million records, which are all transferred each day. The master data tables each contain between 2 and 4 million records. Before this load starts, DSO Y is emptied. DSO Y is write-optimized.
    At first, we designed this with the standard "master data reads", but this resulted in load times of 4 hours, because all master data is read with single lookups. We redesigned it to fill all master data attributes in the end routine, after filling internal tables with the master data values corresponding to the data package:
    *   Read 0UCPREMISE into temp table
        SELECT ucpremise ucpremisty ucdele_ind
          FROM /BI0/PUCPREMISE
          INTO CORRESPONDING FIELDS OF TABLE lt_0ucpremise
          FOR ALL ENTRIES IN RESULT_PACKAGE
          WHERE ucpremise EQ RESULT_PACKAGE-ucpremise.
    *   Sort so that the BINARY SEARCH below finds the entries reliably
        SORT lt_0ucpremise BY ucpremise.
    And when we loop over the data package, we write something like:
        LOOP AT RESULT_PACKAGE ASSIGNING <fs_rp>.
          READ TABLE lt_0ucpremise INTO ls_0ucpremise
            WITH KEY ucpremise = <fs_rp>-ucpremise
            BINARY SEARCH.
          IF sy-subrc EQ 0.
            <fs_rp>-ucpremisty = ls_0ucpremise-ucpremisty.
            <fs_rp>-ucdele_ind = ls_0ucpremise-ucdele_ind.
          ENDIF.
    *all other MD reads
    ENDLOOP.
    So the above statement is repeated for all the master data we need to read. This method is quite a bit faster (1,5 hr), but we want to make it faster still. We noticed that reading the master data into the internal tables still takes a long time, and this has to be repeated for each data package. We want to change this. We have now tried a similar method, but we now load all master data into the internal tables without filtering on the data package, and we do this only once.
    *   Read 0UCPREMISE into temp table
        SELECT ucpremise ucpremisty ucdele_ind
          FROM /BI0/PUCPREMISE
          INTO CORRESPONDING FIELDS OF TABLE lt_0ucpremise.
    *   Sort so that the BINARY SEARCH lookups stay reliable
        SORT lt_0ucpremise BY ucpremise.
    So when the first data package starts, it fills all master data values, 95% of which we would need anyway. So that the following data packages can use the same tables and don't need to fill them again, we placed the definition of the internal tables in the global part of the end routine. In the global part we also write:
    DATA: lv_data_loaded TYPE C LENGTH 1.
    And in the method we write:
    IF lv_data_loaded IS INITIAL.
      lv_0bpartner_loaded = 'X'.
    * load all internal tables
      lv_data_loaded = 'Y'.
    ENDIF.
    WHILE lv_0bpartner_loaded NE 'Y'.
    * wait until the first data package has filled the lookup tables
      CALL FUNCTION 'ENQUEUE_SLEEP'
        EXPORTING
          seconds = 1.
    ENDWHILE.
    LOOP AT RESULT_PACKAGE
    * assign all data
    ENDLOOP.
    This makes sure that another data package that has already started "sleeps" until the first data package is done filling the internal tables.
    Well, this all seems to work: it now takes 10 minutes to load everything to DSO Y. But I'm wondering if I'm missing anything. The system seems to handle loading all these records into internal tables fine. Any improvements or critical remarks are very welcome.

    This is a great question, and you've clearly done a good job of investigating this, but there are some additional things you should look at and perhaps a few things you have missed.
    Zephania Wilder wrote:
    At first, we designed this with the standard "master data reads", but this resulted in load times of 4 hours, because all master data is read with single lookups.
    This is not accurate. After SP14, BW does a prefetch and buffers the master data values used in the lookup. Note [1092539|https://service.sap.com/sap/support/notes/1092539] discusses this in detail. The important thing, and most likely the reason you are probably seeing individual master data lookups on the DB, is that you must manually maintain the MD_LOOKUP_MAX_BUFFER_SIZE parameter to be larger than the number of lines of master data (from all characteristics used in lookups) that will be read. If you are seeing one select statement per line, then something is going wrong.
    You might want to go back and test with master data lookups using this setting and see how fast it goes. If memory serves, the BW master data lookup uses an approach very similar to your second example (1,5 hrs), though I think that it first loops through the source package and extracts the lists of required master data keys, which is probably faster than your statement "FOR ALL ENTRIES IN RESULT_PACKAGE" if RESULT_PACKAGE contains very many duplicate keys.
    I'm guessing you'll get down to at least the 1,5 hrs that you saw in your second example, but it is possible that it will get down quite a bit further.
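    If it helps while testing, a quick way to see whether that parameter has been maintained at all is to read it from the RSADMIN table (this is only a hedged check; maintaining the value itself is normally done with the standard report SAP_RSADMIN_MAINTAIN):

    * Sketch: check whether MD_LOOKUP_MAX_BUFFER_SIZE is maintained in RSADMIN.
    DATA lv_value TYPE rsadmin-value.

    SELECT SINGLE value FROM rsadmin
      INTO lv_value
      WHERE object = 'MD_LOOKUP_MAX_BUFFER_SIZE'.
    IF sy-subrc = 0.
      WRITE: / 'Current buffer size:', lv_value.
    ELSE.
      WRITE: / 'Parameter not maintained.'.
    ENDIF.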
    Zephania Wilder wrote:
    This makes sure that another data package that already started, "sleeps" until the first data package is done with filling the internal tables.
    This sleeping approach is not necessary, as only one data package will be running at a time in any given process. I believe that the "global" internal table is not shared between parallel processes, so if your DTP is running with three parallel processes, then this table will just get filled three times. Within a process, all data packages are processed serially, so all you need to do is check whether or not it has already been filled. Or are you doing something additional to export the filled lookup table into a shared memory location?
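    In code, that check-instead-of-sleep idea could look roughly like the sketch below, reusing the 0UCPREMISE lookup from the example above; the table name gt_0ucpremise is illustrative and assumed to be declared in the global part (whether that declaration needs DATA or CLASS-DATA to survive between packages is exactly the point discussed next).

    * Rough sketch: fill the global lookup table only if it is still empty.
    IF gt_0ucpremise IS INITIAL.
      SELECT ucpremise ucpremisty ucdele_ind
        FROM /bi0/pucpremise
        INTO CORRESPONDING FIELDS OF TABLE gt_0ucpremise.
      SORT gt_0ucpremise BY ucpremise.
    ENDIF.
    * ...then READ TABLE gt_0ucpremise ... BINARY SEARCH as in the earlier loop.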
    Actually, you have your global data defined with the statement "DATA: lv_data_loaded TYPE C LENGTH 1.". I'm not completely sure, but I don't think that this data will persist from one data package to the next. Data defined in the global section using "DATA" is global to the package start, end, and field routines, but I believe it is discarded between packages. I think you need to use "CLASS-DATA: lv_data_loaded TYPE C LENGTH 1." to get the variables to persist between packages. Have you checked in the debugger that you are really only filling the table once per request and not once per package in your current setup? << This is incorrect - see next posting for correction.
    Otherwise the third approach is fine as long as you are comfortable managing your process memory allocations and you know the maximum size that your master data tables can have. On the other hand, if your master data tables grow regularly, then you are eventually going to run out of memory and start seeing dumps.
    Hopefully that helps out a little bit. This was a great question. If I'm off-base with my assumptions above and you can provide more information, I would be really interested in looking at it further.
    Edited by: Ethan Jewett on Feb 13, 2011 1:47 PM
