Master data sharing

Hi,
In the extended star schema, master data is stored separately. Can this master data be shared across other star schemas as well?
Thanks,
bwlearner

Hello,
I will explain with a simple example based on sales.
1. Create these InfoObjects:
Characteristics:
Sales Representative ID    - SRPID
Sales Representative Name  - SNAME  (attribute 1 of SRPID)
Sales Representative Age   - SAGE   (attribute 2 of SRPID)
Sales Representative Phone - SPHONE (attribute 3 of SRPID)
Sales Region               - SREG   (external characteristic in hierarchy)
Sales Office               - SOFF   (external characteristic in hierarchy)
Key figures:
Product Quantity           - PQTY
Sales Amount               - SALES
Profit Amount              - PROFIT
2. For the InfoObject SRPID, maintain the attributes listed above and the external characteristics in hierarchies.
3. Using direct update, load the master data.
4. Create an InfoSource (flexible update) IS_SRP_TD with the following objects:
SRPID, PQTY, SALES & PROFIT.
5. Create an InfoCube (IC_SRP) and update rules using the InfoSource IS_SRP_TD, and load the transactional data.
Now you can create a query on the InfoCube: drag and drop SRPID into the columns and the key figures into the rows. Then, on SRPID under columns, open the context menu (right-click) and add the 3 attributes; open the context menu again, choose Properties, and under Display Hierarchy check Active and select the hierarchy name.
Now save and execute your query; the report will display the InfoCube data with the attributes and the hierarchy.
You can verify this with the following tables:
Fact table - /BIC/FIC_SRP  (this table has KEY_IC_SRP1, the dimension table key)
Dimension table - /BIC/DIC_SRP1 (this table has the DIMID field, the dimension table key)
This is where the fact and dimension tables are linked.
The same dimension table /BIC/DIC_SRP1 also has the SID_SRPID field, the master data ID that links to the SID table below.
SID table - /BIC/SSRPID (this table has the InfoObject /BIC/SRPID, the SID, CHCKFL, DATAFL, etc.)
Via the SID table, the system then reads the P, T & H tables.
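The join chain described above (fact -> dimension -> SID -> master data) can be sketched with a toy relational model. This is only an illustration: for brevity the S and P tables are collapsed into one, and all values are invented.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Master data table (S/P tables collapsed): shared, outside the cube
cur.execute("CREATE TABLE p_srpid (sid INTEGER PRIMARY KEY, srpid TEXT, sname TEXT, sage INTEGER)")
cur.execute("INSERT INTO p_srpid VALUES (1, 'REP01', 'Alice', 34)")

# Dimension table: DIMID plus the SID pointing into the master data
cur.execute("CREATE TABLE d_srp1 (dimid INTEGER PRIMARY KEY, sid_srpid INTEGER)")
cur.execute("INSERT INTO d_srp1 VALUES (10, 1)")

# Fact table: dimension key plus key figures
cur.execute("CREATE TABLE f_srp (key_ic_srp1 INTEGER, pqty INTEGER, sales REAL, profit REAL)")
cur.execute("INSERT INTO f_srp VALUES (10, 5, 500.0, 120.0)")

# Fact -> dimension -> SID -> master data, the linkage the tables above describe
row = cur.execute("""
    SELECT p.sname, f.sales
      FROM f_srp f
      JOIN d_srp1 d ON d.dimid = f.key_ic_srp1
      JOIN p_srpid p ON p.sid = d.sid_srpid
""").fetchone()
print(row)  # ('Alice', 500.0)
```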
Hope this clears things up for you.
Best Regards....
Sankar Kumar
+91 98403 47141

Similar Messages

  • Master data sharing using ext. star schema

    Hi,
    I have understood the concept of the extended star schema.
    My understanding is this: one of the advantages of the extended star schema is that master data can be shared. Since the master data is stored separately, other star schemas can also share this master data, provided it is relevant to them (i.e. the same InfoObjects are used in those star schemas).
    Please confirm whether my understanding is right, and if so,
    any idea or suggestion how to demonstrate this?
    Points will be given for good answers.
    thanks.
    bwlearner

    Hey,
    You can map this to programming concepts:
    master data tables are like GLOBAL declarations and InfoCubes (fact tables) are like LOCAL declarations. Any number of InfoCubes can access one particular master data table.
    Assume sales data coming from 5 regions. Here 0CUSTOMER and its attributes will update the same master data table, while the data fields are stored in the InfoCubes. So all 5 InfoCubes access the same master data tables using the dimension and SID tables.
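    The GLOBAL/LOCAL analogy above can be sketched as a toy model. The names (master_0customer, the regional "cubes") and figures are invented for illustration only:

```python
# One shared master-data table, two "InfoCubes" referencing it by SID
master_0customer = {1: {"name": "ACME", "city": "Berlin"}}   # shared, like a global

cube_region_north = [{"sid_customer": 1, "sales": 100.0}]    # local fact rows
cube_region_south = [{"sid_customer": 1, "sales": 250.0}]

def report(cube):
    # Every cube resolves its SIDs against the same shared master data
    return [(master_0customer[r["sid_customer"]]["name"], r["sales"]) for r in cube]

print(report(cube_region_north))  # [('ACME', 100.0)]

# Change the master data once; every cube's report sees the change
master_0customer[1]["name"] = "ACME AG"
print(report(cube_region_south))  # [('ACME AG', 250.0)]
```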
    Is that clear?
    Best Regards....
    Sankar Kumar
    +91 98403 47141

  • Master Data Services not available under shared feature while installing SQL server 2012

    Hi,
    I am trying to install Master Data Services but do not see the option to select MDS under the shared features when going through the SQL Server 2012 installation. I have the SQL Server 2012 SP1 (64-bit) install files. I have also installed SP2. I haven't found anything online about the issue.
    Can someone please advise?
    I have a screenshot of the installation screen which I will attach as soon as I am able to get my account verified. Thanks!

    Hi Revees,
    This might be very naïve and also outside the original scope of the thread question.
    We are thinking of going with the Developer edition. We have 2-3 developers and some other testers and business users.
    1) I understand that we need a developer license for each developer. But would we need a license for the business users? Can they have a sort of read access to the DBs?
    2) If a developer has an MSDN subscription, would they need to purchase the license too, assuming we purchase the Developer edition of the software (and not download it using the MSDN subscription)?
    Thanks for your assistance!

  • SMD (Shared master data tool)

    Hi All,
    Can anybody provide any pointers on SMD, that is, the shared master data tool, and how to use change pointers along with it to track changes in material master data?
    Regards,
    Rahul

    hi,
    Shared Master Data (SMD) is a component which drives the distribution of any changes to centrally maintained master data. The SMD component writes a change status to tables BDCP and BDCPS. For example, whenever you change any data with respect to a material, the changes are logged in two tables, CDHDR and CDPOS. The program RBDMIDOC then reads these changes and distributes them. Change document objects are the components with which fields are configured so that any changes to those fields are recorded. You can find them in transaction SCDO.
    For the material master, the change document object is MATERIAL.
    Reward points if you find this useful.
    Rgds

  • Shared Master Data - SMD Tool Configuration

    Hello Experts,
    I have a requirement to generate ALE IDocs for customer and vendor master changes, so we are planning to use the Shared Master Data tool of SAP.
    I have done the settings in BD52, BD50 and BD61. But do I need to configure BD64?
    My requirement is to generate the IDoc in the same client, in the sense that when master data is changed, IDocs are to be triggered and stored in a shared folder for another system to pick up.
    So we need to have the source and destination as the same client...
    Not sure whether Shared Master Data will work within the same client? Please let me know your expert thoughts.

    hi,
    Shared Master Data (SMD) is a component which drives the distribution of any changes to centrally maintained master data. The SMD component writes a change status to tables BDCP and BDCPS. For example, whenever you change any data with respect to a material, the changes are logged in two tables, CDHDR and CDPOS. The program RBDMIDOC then reads these changes and distributes them. Change document objects are the components with which fields are configured so that any changes to those fields are recorded. You can find them in transaction SCDO.
    For the material master, the change document object is MATERIAL.
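    The flow described above (change pointers written at change time, then read and dispatched by a program playing the role of RBDMIDOC) can be sketched as a toy model. The table and message-type names are only stand-ins, not real SAP structures:

```python
# Toy model of the change-pointer flow: changes are logged as pointers,
# and a dispatcher (the RBDMIDOC role) reads unprocessed pointers,
# "sends" them, and marks them processed.
change_pointers = []  # stands in for BDCP/BDCPS

def log_change(message_type, object_key):
    change_pointers.append(
        {"msgtype": message_type, "key": object_key, "processed": False}
    )

def dispatch():
    sent = []
    for cp in change_pointers:
        if not cp["processed"]:
            sent.append((cp["msgtype"], cp["key"]))  # a real system would create an IDoc here
            cp["processed"] = True
    return sent

log_change("MATMAS", "MAT-100")
log_change("MATMAS", "MAT-200")
print(dispatch())  # [('MATMAS', 'MAT-100'), ('MATMAS', 'MAT-200')]
print(dispatch())  # [] - already processed, nothing distributed twice
```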
    Reward points if you find this useful.
    Rgds

  • How the master data of multi language is shared in the info cubes

    Hi friends,
    How is multi-language master data shared in InfoCubes?
    Bye bye,
    Venkata Ramana Reddy G.

    Hi Venkata,
    In the InfoCube you have the key. Then you have a text master data table whose key (for the table) is the key of the value (the one in the cube) plus the language, with the text in that language as further fields (short, medium, long).
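    A minimal sketch of such a language-keyed text table (the keys, languages and texts are invented sample data):

```python
# Toy text table: keyed by characteristic value + language,
# with short/medium texts as data fields (like a BW text table)
text_table = {
    ("MAT-100", "EN"): {"short": "Pump", "medium": "Water pump"},
    ("MAT-100", "DE"): {"short": "Pumpe", "medium": "Wasserpumpe"},
}

def text_for(key, language, length="short"):
    entry = text_table.get((key, language))
    return entry[length] if entry else key  # fall back to the key itself

print(text_for("MAT-100", "EN"))  # Pump
print(text_for("MAT-100", "DE"))  # Pumpe
print(text_for("MAT-100", "FR"))  # MAT-100 (no French text maintained)
```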
    Hope this helps.
    Regards,
    Diego

  • Performance: reading huge amount of master data in end routine

    In our 7.0 system, a full load runs each day from DSO X to DSO Y, in which master data for six characteristics from DSO X is read into about 15 fields in DSO Y. DSO X contains about 2 million records, which are all transferred each day. The master data tables each contain between 2 and 4 million records. Before this load starts, DSO Y is emptied. DSO Y is write-optimized.
    At first we designed this with the standard "master data reads", but this resulted in load times of 4 hours, because all master data is read with single lookups. We redesigned it and now fill all master data attributes in the end routine, after filling internal tables with the master data values corresponding to the data package:
    *   Read 0UCPREMISE into temp table
        SELECT ucpremise ucpremisty ucdele_ind
          FROM /BI0/PUCPREMISE
          INTO CORRESPONDING FIELDS OF TABLE lt_0ucpremise
          FOR ALL ENTRIES IN RESULT_PACKAGE
          WHERE ucpremise EQ RESULT_PACKAGE-ucpremise.
    And when we loop over the data package, we write something like:
        LOOP AT RESULT_PACKAGE ASSIGNING <fs_rp>.
          READ TABLE lt_0ucpremise INTO ls_0ucpremise
            WITH KEY ucpremise = <fs_rp>-ucpremise
            BINARY SEARCH.
          IF sy-subrc EQ 0.
            <fs_rp>-ucpremisty = ls_0ucpremise-ucpremisty.
            <fs_rp>-ucdele_ind = ls_0ucpremise-ucdele_ind.
          ENDIF.
    *all other MD reads
    ENDLOOP.
    So the above statement is repeated for all the master data we need to read. This method is quite a bit faster (1.5 hrs), but we want to make it faster still. We noticed that reading the master data into the internal tables still takes a long time, and this has to be repeated for each data package. We want to change this. We have now tried a similar method, but load all master data into internal tables without filtering on the data package, and we do this only once.
    *   Read 0UCPREMISE into temp table
        SELECT ucpremise ucpremisty ucdele_ind
          FROM /BI0/PUCPREMISE
          INTO CORRESPONDING FIELDS OF TABLE lt_0ucpremise.
    So when the first data package starts, it fills all master data values, about 95% of which we would need anyway. So that the following data packages can use the same tables and don't need to fill them again, we placed the definitions of the internal tables in the global part of the end routine. In the global part we also write:
    DATA: lv_data_loaded TYPE C LENGTH 1.
    And in the method we write:
    IF lv_data_loaded IS INITIAL.
      lv_data_loaded = 'X'.
    * load all internal tables
      lv_data_loaded = 'Y'.
    ENDIF.
    WHILE lv_data_loaded NE 'Y'.
      CALL FUNCTION 'ENQUEUE_SLEEP'
        EXPORTING
          seconds = 1.
    ENDWHILE.
    LOOP AT RESULT_PACKAGE
    * assign all data
    ENDLOOP.
    This makes sure that another data package that has already started "sleeps" until the first data package is done filling the internal tables.
    Well, this all seems to work: it now takes 10 minutes to load everything to DSO Y. But I'm wondering if I'm missing anything. The system seems to handle loading all these records into internal tables fine, but any improvements or critical remarks are very welcome.
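    The package-level prefetch pattern described above (one set-based read per package, then in-memory lookups per row) can be sketched in Python with SQLite standing in for the master data table. Field names follow the post; the data is invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE p_ucpremise (ucpremise TEXT PRIMARY KEY, ucpremisty TEXT, ucdele_ind TEXT)")
con.executemany("INSERT INTO p_ucpremise VALUES (?, ?, ?)",
                [(f"P{i:04d}", "RES", "") for i in range(1000)])

package = [{"ucpremise": "P0042"}, {"ucpremise": "P0042"}, {"ucpremise": "P0007"}]

# Prefetch: one SELECT for all keys in the package (like FOR ALL ENTRIES),
# then cheap in-memory lookups while looping over the package.
keys = {r["ucpremise"] for r in package}
placeholders = ",".join("?" * len(keys))
buffer = {row[0]: row[1:] for row in con.execute(
    f"SELECT ucpremise, ucpremisty, ucdele_ind FROM p_ucpremise "
    f"WHERE ucpremise IN ({placeholders})", tuple(keys))}

for r in package:  # like LOOP AT RESULT_PACKAGE
    r["ucpremisty"], r["ucdele_ind"] = buffer[r["ucpremise"]]

print(package[0])  # {'ucpremise': 'P0042', 'ucpremisty': 'RES', 'ucdele_ind': ''}
```

    The point of the pattern is that the number of database round trips is one per package instead of one per row; the per-row work is a hash lookup.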

    This is a great question, and you've clearly done a good job of investigating this, but there are some additional things you should look at and perhaps a few things you have missed.
    Zephania Wilder wrote:
    At first, we designed this with the standard "master data reads", but this resulted in load times of 4 hours, because all master data is read with single lookups.
    This is not accurate. After SP14, BW does a prefetch and buffers the master data values used in the lookup. Note [1092539|https://service.sap.com/sap/support/notes/1092539] discusses this in detail. The important thing, and most likely the reason you are probably seeing individual master data lookups on the DB, is that you must manually maintain the MD_LOOKUP_MAX_BUFFER_SIZE parameter to be larger than the number of lines of master data (from all characteristics used in lookups) that will be read. If you are seeing one select statement per line, then something is going wrong.
    You might want to go back and test with master data lookups using this setting and see how fast it goes. If memory serves, the BW master data lookup uses an approach very similar to your second example (1.5 hrs), though I think that it first loops through the source package and extracts the lists of required master data keys, which is probably faster than your "FOR ALL ENTRIES IN RESULT_PACKAGE" statement if RESULT_PACKAGE contains very many duplicate keys.
    I'm guessing you'll get down to at least the 1.5 hrs that you saw in your second example, but it is possible that it will get down quite a bit further.
    Zephania Wilder wrote:
    This makes sure that another data package that already started, "sleeps" until the first data package is done with filling the internal tables.
    This sleeping approach is not necessary, as only one data package will be running at a time in any given process. I believe the "global" internal table is not shared between parallel processes, so if your DTP is running with three parallel processes, this table will just get filled three times. Within a process, all data packages are processed serially, so all you need to do is check whether or not it has already been filled. Or are you doing something additional to export the filled lookup table into a shared memory location?
    Actually, you have your global data defined with the statement "DATA: lv_data_loaded TYPE C LENGTH 1.". I'm not completely sure, but I don't think that this data will persist from one data package to the next. Data defined in the global section using "DATA" is global to the package start, end, and field routines, but I believe it is discarded between packages. I think you need to use "CLASS-DATA: lv_data_loaded TYPE C LENGTH 1." to get the variables to persist between packages. Have you checked in the debugger that you are really only filling the table once per request and not once per package in your current setup? << This is incorrect - see next posting for correction.
    Otherwise the third approach is fine as long as you are comfortable managing your process memory allocations and you know the maximum size that your master data tables can have. On the other hand, if your master data tables grow regularly, then you are eventually going to run out of memory and start seeing dumps.
    Hopefully that helps out a little bit. This was a great question. If I'm off-base with my assumptions above and you can provide more information, I would be really interested in looking at it further.
    Edited by: Ethan Jewett on Feb 13, 2011 1:47 PM
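    As a loose analogy only (this is Python semantics, not a statement about the ABAP runtime, whose exact lifetime rules are disputed above): instance attributes behave roughly like DATA (fresh per instance), while class attributes behave roughly like CLASS-DATA (one copy shared across instances):

```python
class EndRoutine:
    buffer = {}            # class attribute: one copy shared by all instances,
                           # roughly like CLASS-DATA persisting across packages

    def __init__(self):
        self.scratch = {}  # instance attribute: fresh per instance,
                           # roughly like DATA being discarded between packages

a, b = EndRoutine(), EndRoutine()
a.buffer["k"] = 1          # mutates the single shared class-level dict
print(b.buffer)   # {'k': 1} - shared
print(b.scratch)  # {} - per-instance
```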

  • Issue with Master data change workflow in GRC PC 10.1

    Hi,
    I have configured the workflow for master data changes in GRC PC 10.1; however, the approver is not able to view the request in the inbox, whereas the organization owner is able to see the change request for review in the inbox.
    Please let me know if there is any configuration where we need to set the approvers for the workflow, so that the system creates a request for the approver.
    Regards,
    Giridhar

    Dear Giridhar,
    Please, check the following configuration:
    1. Activate the workflow (path: SPRO -> GRC -> Shared Master Data Settings -> Activate Master Data Changes)
    2. See whether the "Approval" checkbox is ticked for the selected entity
    3. If you activate master data changes, please check whether the correct roles are indicated under Maintain Custom Agent Determination Rules in the workflow settings.
    Business Event: 0FN_MDCHG_APPR
    Role: Select the role you gave to the approver
    After performing this configuration, a task must appear in the work inbox.
    Best Regards,
    Fernando

  • How to insert e-mail into a customer master data

    Hi all,
    I need to insert an e-mail address into the customer's master data. Which BAPI can I use, and if possible, how?
    Let me explain the situation: we need to let some users change customer data. What we want is to create a transaction, say ZXD02, that only lets a user add or change the e-mail for a customer and update the notes (see attachment). Whatever the user does has to be replicated in the master data under General Data --> Address --> E-Mail.
    Any suggestions?
    Thanks a lot

    Hi Cristina Martínez Eguiarte
    We can implement field-level authorization in standard SAP IMG settings. There is no need for any exit, BAPI, or screen/transaction variant for this. Please go through my document, in which I have shared the process and settings. I did this for another field, and I hope you can do it for the e-mail ID field as well.
    Field level authorization for customer and vendor master in SAP.
    Please test and update.
    Thank$

  • BAPI/function module to create/update vendor master data

    Hi
    We are on ECC 5.0 and need to update vendor master data through a programmatic (non-dialog) interface with ABAP. What is a good function module that can be used to create/update vendor master data? I looked at BAPI_VENDOR_CREATE and did not find any input interfaces that can be passed to this BAPI.
    Previous experiences with the right BAPI for this purpose that can be shared are appreciated. <REMOVED BY MODERATOR>
    Edited by: Alvaro Tejada Galindo on Feb 26, 2008 5:58 PM

    Hi Kiran,
    If you want to load the vendor data into SAP, it's better to go with the LSMW batch input program:
    Object: 0040
    Method: 0001
    Program name: RFBIKR00
    Program type: B
    This would be a good choice. I recently did the same.

  • Picture on Item master Data

    Dear all experts,
    I want to put a picture on the item master data. I have already set up the picture folder under General Settings as C:/SAP Images.
    If I upload the picture to C:/SAP Images on my own computer, other users can't see it.
    I tried uploading it to the server (C:/SAP Images), but the problem still happens: only the SAP client on the server can see the picture.
    So where do I have to put the picture?
    Thanks

    Hi!
    The client should connect to the server automatically rather than asking for a user name and password.
    On the user machine, go to Run and enter the server name.
    1. It should connect without asking for a password; to achieve this, each client should be registered on the server machine (get help from your IT admin).
    2. The picture folder should be shared, and that folder should give full access to the user.
    3. Define the picture folder path in SAP as
    Server Name\Path...

  • Is it  possible to create  isu technical master data using EMIGALL

    Hi experts,
    I am new to EMIGALL, but I need to upload thousands of master data records as well as technical master data.
    Could anybody provide me with related IS-U master data upload documents, or otherwise suggest step-by-step guidelines for using EMIGALL to upload technical master data?
    Please send me a link if you have any document with EMIGALL screenshots.
    Thanks in advance.
    Moderator note - thread locked, no research done

    Post Author: swalker
    CA Forum: General
    yangster, thanks for the quick reply. I am not really sure about combining reports, so I looked into what you said. I tried to create a shared variable on the field that I need to share, but that field is a running total and I get an error when I try to run the report: "This field cannot be used because it must be evaluated later." So I am not sure what to do now. Is there a workaround for this? Thanks for any help.

  • Catalog group code data - is it Master data or Configuration data

    Hi all,
    Need some clarification on the difference between master data and configuration data. In CRM we use catalog code groups and codes for ticketing systems. The code groups and codes are defined once in SPRO (IMG) and consumed in catalog schemas at client level. I refer to these code groups and codes in SPRO as master data, because they are shared across modules and do not change frequently. However, I was recently corrected by my manager: since it is transportable, it cannot be master data; it is configuration data. I was tasked to contact SAP and get the correct/official answer. Is it configuration data or master data, and what is the determining factor?
    BTW, allow me to clarify that we would like to know the right answer for proper documentation; this is not a who's-right challenge. :)
    Best,
    Michael Sh

    1. "Server creates a new client thread for each newly accepted connection." Good. Make sure the server doesn't do any I/O in the accepting thread, not even construction of input and output streams.
    2. "If one of these client threads receives some data from the client, then:"
    3. "This client thread creates a new ControlThread object (and passes the received data to it)." Why? What does the ControlThread do that the client thread can't do? Does the client thread really create a new ControlThread per piece of received data? What's the purpose of the ControlThread here?
    4. "The ControlThread object in turn spawns a new thread." Again, why?
    From your description you appear to be creating:
    (a) a thread per connection
    (b) a thread per piece of received data
    (c) another thread per (b)
    What you probably need for a chat server is an input thread and an output thread per connection. The output thread should read a queue and the input thread should write to the appropriate queue(s), making sure you don't create a loop by sending the client's own data back to him.
    5. "Finally, this spawned thread will send data to all client apps." No. Use one output thread per connection, as described above. As writing to a socket can block, you shouldn't use a single thread for writes to more than one client; otherwise the whole system can stall due to one non-cooperating client.
    a) "How scalable is this server app?" It's about as non-scalable as it could possibly be. You will have an explosion of threads per item of data received, if you've described it correctly, and you already have too many threads per connection.
    b) "What would you have done differently (in terms of basic program structure)?" Almost everything: see above.
    "Perhaps creating a ControlThread object each time data needs to be sent to other client apps is a bad idea." Most definitely.
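    The suggested design (one output thread per connection, each draining its own queue, with the input side just enqueuing to the other clients) can be sketched as follows. Socket I/O is replaced with an in-memory list so the sketch is runnable:

```python
import queue
import threading

class Connection:
    """One chat connection with its own outbox queue and writer thread."""
    def __init__(self, name):
        self.name = name
        self.outbox = queue.Queue()
        self.sent = []  # stands in for the socket's output stream
        self.writer = threading.Thread(target=self._drain, daemon=True)
        self.writer.start()

    def _drain(self):
        # The per-connection output thread: blocks on its own queue only,
        # so one slow client cannot stall writes to the others.
        while True:
            msg = self.outbox.get()
            if msg is None:          # shutdown sentinel
                break
            self.sent.append(msg)    # a real server would write to the socket

def broadcast(sender, connections, msg):
    for c in connections:
        if c is not sender:          # don't echo the client's own data back
            c.outbox.put(msg)

alice, bob = Connection("alice"), Connection("bob")
broadcast(alice, [alice, bob], "hi from alice")
alice.outbox.put(None); bob.outbox.put(None)
alice.writer.join(); bob.writer.join()
print(alice.sent, bob.sent)  # [] ['hi from alice']
```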

  • Central Master Data Management

    Hi All,
    Can anyone tell me about central master data management?
    In what way is it different from consolidation and harmonization?
    I have gone through several documents but didn't get a clear idea of the topic.
    If anyone has knowledge of CMDM, please explain it in your own words.
    Thanks
    Narendra

    Consolidation and harmonization procedures are very different from CMDM (central master data management); however, oftentimes CMDM is not very useful without considering the other two as well.
    Consolidation essentially means creating identity around your master data across your landscape. More specifically, you want to identify duplicate records and merge them into one record. Harmonization is the process of pushing the newly cleansed data back out to your partner systems.
    Central master data management is the process of creating and managing enterprise-level attributes in one place. One common mistake people make is thinking that central master data management means having one place to create your records, containing ALL data. This is not correct; MDM focuses on enterprise-level attributes. In other words: which attributes are KPIs, shared across multiple processes in scope, shared across multiple systems, important for reporting, in need of ultimate quality, etc. Once you have determined your data model, you begin developing the workflow (if needed) around the creation of master data. This way you can easily keep your data clean and free of duplicates moving forward, following a consolidation and harmonization process.

  • SID table in the general tab and master data/text tab

    Hello Bi Experts,
    Take the 0MATERIAL InfoObject as an example:
    There is an SID table on the General tab, e.g. /BI0/SCOMP_CODE.
    There is another SID table for attributes on the Master data/texts tab, e.g. /BI0/XCOMP_CODE; there the navigation attributes have names like S__0COMPANY.
    The 0COMPANY InfoObject has its own SID table, /BI0/SCOMPANY.
    Can somebody explain the significance of the SID attribute table on the Master data/texts tab, and how it differs from the SID table of the attribute itself?
    Cheers,
    Stalin

    Hi,
    An SID is a surrogate ID generated by the system. The SID tables are created when we create a master data InfoObject. In the SAP BW star schema, a distinction is made between two self-contained areas: the InfoCube, and the master data tables/SID tables.
    The master data doesn't reside in the star schema itself; it resides in separate tables which are shared across all the star schemas in SAP BW. A numeric ID is generated which connects the dimension tables of the InfoCube to the master data tables.
    The dimension tables contain the DIM ID and the SID of a particular InfoObject. Using this SID, the attributes and texts of a master data InfoObject are accessed.
    The SID table is connected to the associated master data tables via the characteristic key.
    SID tables are like pointers in C.
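    The pointer analogy can be sketched as a toy SID table that hands out surrogate integer IDs for characteristic values (the values are invented):

```python
# Toy SID table: maps characteristic values to surrogate integer IDs,
# the way an S table maps characteristic values to SIDs.
sid_table = {}

def sid_for(value):
    if value not in sid_table:
        sid_table[value] = len(sid_table) + 1  # next free surrogate ID
    return sid_table[value]

print(sid_for("REP01"))  # 1
print(sid_for("REP02"))  # 2
print(sid_for("REP01"))  # 1 - same value, same SID
```

    Cubes then store only the small integer SID, while the value and its attributes live once in the shared master data tables.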
    Table name prefixes:
    M - view of the master data table
    Q - time-dependent master data table
    H - hierarchy table
    K - hierarchy SID table
    I - SID hierarchy structure
    J - hierarchy interval table
    S - SID table
    Y - time-dependent SID table
    T - text table
    F - fact table, uncompressed cube data (B-tree index)
    E - fact table, compressed cube data (bitmap index)
    For more info, go through the link below:
    http://www.sap-img.com/bw010.htm
    Regards,
    Marasa.
