MDM data distribution case study

Hello. I have been looking everywhere for a detailed case study or example of harmonizing master data on MDM and then distributing it to the master data clients (MDCs). Can anyone help me?
I am still unsure how this should work. It is one thing to normalize and clean data on the MDM server, but I am unsure how it is distributed back to the MDCs without compromising those systems.

Hi Mark,
As an introduction (at a higher conceptual level) to the MDM distribution scenarios, you can read the MDM Scenario Descriptions. They are at https://service.sap.com/instguidesnw04/ > Planning > SAP MDM > MDM 5.5 SP02 Scenario Descriptions.
As an example, you may refer to the Master Data Harmonization scenario.
Kind regards,
Markus

Similar Messages

  • SAP MDM case study, step-by-step implementation procedures

    Hi All,
    If anyone has case study documents describing MDM scenarios and step-by-step procedures for achieving them, please send them across to me.
    Regards,
    Pramod

    Hello Pramod,
    The following links will help you understand market scenarios for MDM and its integration.
    http://hosteddocs.ittoolbox.com/RD021507b.pdf
    demo
    http://www.sap.com/community/int/innovation/esoa/demo/MDM_demo/index.html
    http://www.asug.com/DesktopModules/Bring2mind/DMX/Download.aspx?TabId=66&DMXModule=370&Command=Core_Download&EntryId=3431&PortalId=0
    MDM
    http://www.asug.com/DesktopModules/Bring2mind/DMX/Download.aspx?TabId=66&DMXModule=370&Command=Core_Download&EntryId=1666&PortalId=0
    SAP Netweaver MDM Overview
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/b09b548d-7316-2a10-1fbb-894c838d8079
    SAP NETWEAVER MDM Leverage MDM in ERP Environments - An Evolutionary Approach -
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4059f477-7316-2a10-5fa1-88417f98ca93
    Master Data Management architecture patterns
    http://www-128.ibm.com/developerworks/db2/library/techarticle/dm-0703sauter/
    MDM and Enterprise SOA
    http://www.saplounge.be/Files/media/pdf/Lagae---MDM-and-Enterprise-SOA2007.10.10.pdf
    Effective Hierarchy Management Using SAP NetWeaver MDM for Retail
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/70ee0c9e-29a8-2910-8d93-ad34ec8af09b
    MDM World
    http://mdm.sitacorp.com/
    MDM: Master Data for Global business
    http://www.sitacorp.com/mdm.html
    MDM Master Data Management Hub Architecture
    http://blogs.msdn.com/rogerwolterblog/archive/2007/01/02/mdm-master-data-management-hub-architecture.aspx
    Improve Efficiency and Data Governance with SAP NetWeaver MDM
    http://www.sapnetweavermagazine.com/archive/Volume_03_(2007)/Issue_02_(Spring)/v3i2a12.cfm?session=
    Data Modeling in MDM
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/5d4211fa-0301-0010-9fb1-ef1fd91719b6
    http://www.sap.info/public/INT/int/index/Category-28943c61b1e60d84b-int/0/articlesVersions-31279471c9758576df
    SRM-MDM Catalog
    http://help.sap.com/saphelp_srmmdm10/helpdata/en/44/ec6f42f6e341aae10000000a114a6b/frameset.htm
    http://events.techtarget.com/mdm-ent/?Offer=DMwn716mdm
    http://viewer.bitpipe.com/viewer/viewDocument.do?accessId=6721869
    http://searchdatamanagement.bitpipe.com/data/search?site=sdmgt&cr=bpres&cg=VENDOR&sp=site_abbrev%Asdmgt&cp=bpres&st=1&qt=MasterDataManagement
    http://viewer.bitpipe.com/viewer/viewDocument.do?accessId=6721819
    http://www.dmreview.com/channels/master_data_management.html
    http://searchdatamanagement.techtarget.com/originalContent/0,289142,sid91_gci1287620,00.html?bucket=NEWS&topic=307330
    Hope these help you.
    Regds
    Ankit

  • MDG-F data distribution in case of add on deployment

    Hi Expert
    Can anyone describe the procedure for MDG-F data distribution in an add-on deployment scenario?
    Is it required to release an edition each time? If editions have to be replicated manually each time, then what is the meaning of the "Immediately Distribute Change Requests" checkbox in MDG 6.1? And is a data replication model required for data distribution as well?

    Hello Sanjay
    MDG-F uses a Flex model. You have to configure a replication model in any case.
    There are two ways to replicate data from MDG:
    1. Automatic replication - Tick "Immediately Distribute Change Requests" while creating the edition. With this tick mark, data is replicated automatically and there is no need to release the edition.
    2. Manual replication - When you want to replicate the data manually, do not tick the auto-distribution option. The data is stored in the edition, which you can release at any time. The only thing you need to ensure is that there is no open change request in the edition. Once you release the edition, it appears under the manual replication option.
    Kiran
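The two replication paths Kiran describes can be reduced to a toy model. This is plain Python, not the MDG API; the class and method names are illustrative only, and the checkbox name is mirrored from the thread above:

```python
# Toy model of the two MDG replication paths (illustrative, not the MDG API).
class Edition:
    def __init__(self, immediately_distribute=False):
        # Mirrors the "Immediately Distribute Change Requests" checkbox.
        self.immediately_distribute = immediately_distribute
        self.change_requests = []
        self.replicated = []

    def finalize_change_request(self, cr_id):
        self.change_requests.append(cr_id)
        if self.immediately_distribute:
            # Path 1 (automatic): the finalized change request replicates
            # right away; the edition never needs a manual release.
            self.replicated.append(cr_id)

    def release(self, open_change_requests=()):
        # Path 2 (manual): release is only possible once no change
        # request in the edition is still open.
        if open_change_requests:
            raise RuntimeError("cannot release: open change requests remain")
        self.replicated.extend(cr for cr in self.change_requests
                               if cr not in self.replicated)
```

Either way, a replication model must exist first; the sketch only covers *when* replication is triggered, not how it is configured.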

  • Organization structure for Case study .... need yr advice

    Hi … I am in the process of preparing an org structure for a company.
    The company's operations are as follows:
    The company has four factories in different locations and 3 Regional Distribution Centers (RDCs) across India. Each distribution center has 5 branches (sales offices), so there are 15 sales offices across India.
    Most materials are procured by the factories independently, while some materials are centrally procured. The factories produce finished goods (FG), which are either sold to customers directly (big regular customers), transferred to an RDC to cater to mid-size customers, or transferred to branches for retail customers. RDCs also transfer FG to branches.
    All factories, RDCs and branches have warehouse facilities; however, the branches' facilities are limited, since they support very few customers and deliveries are generally made directly from the RDC or factory.
    They also have one R&D department in another location to develop new products, and material is generally transferred from the factories to the R&D department. There is also a project division at the same (R&D) location, which handles commissioning of new plants and procurement of capital goods.
    Raw material, semi-finished goods and FG are transferred between factories, and FG material is also transferred between RDCs.
    The group also has a second company, which produces another line of products. The RDCs and branches of the first company also work for the second company (the same RDC can store FG produced by both companies), and branches can also hold FG material of both companies. The second company does not have its own R&D or capital division; the first company's R&D and capital divisions also work for the second company.
    Intercompany transfer between the two companies is a regular practice.
    > Proposed Org structure
    Company – Group Company
    Company Code – First company and Second company
    Factory Configurations
    For first company
    4 Factories will be 4 plants
    Factories Structure (Storage locations / warehouses and storage type)
    Since the factories hold different types of material (FG, raw material, WIP), I am confused about which option I should select:
    1) 3 storage locations (for each plant), one warehouse, 1 storage type
    2) 1 storage location and three warehouses (I think this is not possible in SAP, not sure??)
    3) 1 storage location, 1 warehouse and 3 storage types
    RDC structure
    RDCs have full-fledged warehouse facilities, so I am thinking of making them plants as well. Every RDC will have 6 storage locations: 1 for the RDC itself and 5 for the associated branches, and I feel each RDC storage location should also be associated with a separate warehouse.
    (Since the two companies produce different sets of products, they will have different materials; hence I am thinking of handling the products of both companies in the same plant, storage location and warehouse. Two storage types/areas could be defined to differentiate the material of the two companies.)
    Branches: will be storage locations under the RDC.
    While making branches storage locations, I have another question about valuation: since material is transferred from the plant and RDC to the branches and distributed to customers from there, the valuation of FG at the branch and the RDC should differ, as transportation, packing, unpacking and similar costs should be added at branch level.
    Therefore, if I make a branch a storage location, I lose the valuation part, while if I make branches plants, a lot of documents will be generated for each movement. Hence I am not sure which option to go for.
    R&D:
    Since the R&D department is involved in testing for new product development, and material generally has to be scrapped after testing, I am thinking of making it a plant with one storage location; I may not need warehouse management (WM) here.
    Project Division
    The project division may also be a separate plant with one storage location, and no warehouse needs to be linked to it.
    Purchase Organizations
    4 purchase organizations, one for each plant, and 1 centralized purchasing org for central procurement.
    All your valuable inputs are welcome... looking forward to the best possible configuration for the above case study.
    Regards

    Phew... a long, long question.
    The rule I generally follow is that the org structure should map reality.
    Company code - where Balance Sheet and Profit and Loss are prepared at the end of the year. If an entity does not do this, it is not a company code.
    Plant
    1) Each manufacturing facility is a plant (it could be in one location or several).
    2) Each separate location is a plant (even if it only stores goods). This may be overkill, but it helps when legal requirements change or the location is upgraded.
    So in your case, each RDC will be a plant (maybe more than one if necessary). R&D should be a separate plant without warehouse management.
    Storage location
    The only rule I follow is that there should not be any overlap of storage locations. So physically, any area of say 1 square foot should belong to one and only one storage location. Otherwise there is a lot of confusion during physical inventory. So logical locations are a no-no.
    I usually use a lot of storage locations.
    Don't have a warehouse spanning multiple plants (even though it is allowed).
    Purchasing organizations
    Have one for each location of purchase. If there is a team sitting in each plant procuring for only that plant, have it as a separate purchasing org (not strictly necessary, but makes authorization simple). For central purchasing, a central purchasing org should do nicely...
    Hope this helps,
    Lakshman

  • Master Data Distribution !

    Hi!
    I want to know the purpose of master data distribution for the following between the vendor and the customer:
    1. Material master
    2. Vendor master and customer master
    What is the purpose of linking our system with our vendor or customer with regard to master data?
    Please explain in detail.
      Thanks
      Rahul.

    Hi Rahul,
    We don't do master data distribution with a customer's or vendor's system.
    Master data distribution is done between distributed systems of the same organization, using ALE configuration. We link to customer or vendor systems not for transferring master data but for transferring transactional data such as purchase orders or sales orders.
    Master Data Distribution
    Rather than distributing the complete master data information, views of the master data can be distributed (for example, material sales data, material purchase data). Each view of the master data is stored in a separate message type.
    Users can specify which data elements in a master record are to be distributed.
    Various distribution strategies are supported:
    ·        Cross-system master data can be maintained centrally and then distributed. The final values are assigned locally.
    ·        A view of the master data can be maintained locally. In this case there is always one maintenance system for each view. After the master data has been maintained it is transferred to a central SAP system and distributed from there.
    Types of Distribution
    ·        Active distribution (PUSH)
    If the master data is changed (for example, new data, changes or deletions), a master data IDoc is created in the original system and is distributed by class as specified in the distribution model.
    ·        Requests (PULL)
    A request occurs when a client system needs information about master data held in the system. You can select specific information about the master data, for example, the plant data for a material.
    If you want to be notified of all subsequent changes to the master data, this has to be set up “manually” between the two systems. It is not yet possible for this to be done automatically in the distribution mechanism in the original system.
    Transferring the Master Data
    A distinction is made between transferring the entire master data and transferring only changes to the master data.
    If the entire master data is transferred, a master IDoc is created for the object to be distributed in response to a direct request from a system. Only the data that was actually requested is read and then sent. The customer specifies the size of the object to be distributed in a request report.
    If only changes are transferred, the master IDoc is created based on the logged changes.
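The view-based distribution and the PUSH/PULL distinction above can be sketched as a toy model. This is illustrative Python, not real ALE objects: the view names, system names, and payload fields are invented for the example, and real distribution uses message types and IDocs:

```python
# Toy model of view-based master data distribution (illustrative names,
# not real ALE message types): each view of the material master is its
# own distributable unit.
MATERIAL_MASTER = {
    "M-100": {
        "sales":      {"division": "01"},
        "purchasing": {"purchasing_group": "001"},
    },
}

# Distribution model: which receiving systems subscribe to which view.
DISTRIBUTION_MODEL = {"sales": ["SYS_B"], "purchasing": ["SYS_C"]}

def push(material_id, changed_view):
    """PUSH: a change in the original system creates one master data
    IDoc per receiver listed in the distribution model for that view."""
    payload = MATERIAL_MASTER[material_id][changed_view]
    return [{"receiver": r, "view": changed_view, "data": payload}
            for r in DISTRIBUTION_MODEL.get(changed_view, [])]

def pull(material_id, view):
    """PULL: a client system requests exactly the view it needs,
    e.g. only the plant data for one material."""
    return {"view": view, "data": MATERIAL_MASTER[material_id][view]}
```

Note how a pull returns only the requested view, matching the point above that only the data actually requested is read and sent.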
    Reward points for the useful answers,
    Aleem.

  • Case study: "Large?" labview programs flooded with different VIT's

    Case study: "Large?" LabVIEW programs flooded with different VITs
    Type of application: a computer with loads of individual hardware or other software connected, either onsite (different buses) or offsite (satellite/GSM/GPRS/radio etc.).
    Hardware description: little data "RPM", but communications to all devices are intact. More "RPM" when many VITs are involved.
    Size: 1000+ VITs in memory (goal). The total software has been tested and simulated with 400.
    I'm posting this after reading this thread (and actually I can't sleep and am bored as hell).
    Note: I do not use LVOOP (but sure, post OOP examples; I am starting to learn more and more by the day).
    Things I will discuss:
    CASE 1: memory usage using a plugin architecture
    CASE 2: memory usage using VITs (!)
    CASE 3: updating data structures
    CASE 4: shutdown of the whole system
    CASE 5: stability & health monitoring
    CASE 6: ini files
    CASE 7: when the hardware is getting crappy
    Total application overview:
    We have a main application. This main application is mostly empty and only holds plugin functionality (to register and administer plugins) plus an architecture that holds the following items:
    - a queued state machine for main application error handling
    - a queued state machine for status messages
    - a queued state machine for updating virtual variables
    - an event state machine for the GUI
    - some other stuff
    Other global functionality is:
    - user logins, user configurations and unique access levels
    - different nice tools like the good old BootP and other juicy stuff
    - supervision of variables (like the NI tag engine, but here we have our own data structures)
    - generation of virtual variables (so that the user can configure simple mathematical functions combining existing tags)
    - licensing of plugins (hell, we freelance programmers need some money too, don't we?)
    - handling of all communication between plugins themselves, or directly to a plugin and vice versa.
    And we won't talk about that (or marketing) of the main application here.
    Message Edited by Corny on 01-20-2010 08:52 AM

    CASE 3: updating data structures:
    As we do NOT use clusters here (that would just be consuming), we only use a 1D array of data that needs to be updated in different functional globals. If the number of VITs grows to the point where updating these data structures becomes the bottleneck, it would cause delays. And since in this example we use 250 serial interfaces (lol), we do not want to disrupt those with any delays. When this happens, does anyone know a good solution for transferring the data?
    A thought: perhaps sending it down to the plugin and letting the plugin handle it. This should save some time, but then again, if more VITs are added this would become a bottleneck once more and the queue would fill up after a while, unable to process fast enough. Any opinions?
    CASE 4: shutdown of the whole system
    Let's say we want to close it all down, but the VITs perhaps need to run a shutdown procedure towards the hardware, which can be heavy. If we ask them all to shut down together, we can use a notifier or user event to do the job. Well, what happens next is that the CPU jumps to the roof, and that can only cause data loss and trouble. The solution here was to let the plugin shut them down one by one: when one has shut down, begin with the next. Pro: the CPU will not jump to the moon. Con: shutdown is going to take a while. Be ready with a cup of coffee.
    Also, we do not want the main application to exit before the plugins do. The solution above solved this, as the plugin knows when all its VITs have been shut down and can then shut itself down. When all plugins are shut down, the application ends.
    Another solution is to use rendezvous and only shut the system down when all rendezvous have met.
    CASE 5: stability & health monitoring
    This IS using a lot of memory. How do I get it down? And has anyone experienced difficulties with LabVIEW using A LOT of memory? I want to know if something gets corrupted. The VITs send out error information in that case, but what if something weird happens: how can I monitor all the VITs in memory in an effective way, so that the application knows when one is malfunctioning (as a backup solution)?
    CASE 6: ini files
    Well, we all like them, even if XML is perhaps more fashionable. Now I've run some tests on large ini files, and the LabVIEW ini-file functions take ages to parse all that information. Perhaps a custom file structure in binary format or something similar would be better (and then create a separate configuration program)?
    CASE 7: when the hardware is getting crappy:
    Now, what if the system hits the limit and gradually exceeds the hardware requirements of the software? What to do then (thinking mostly of memory usage)? Install it on more servers and split the configurations? Is that the best way to solve this? Any opinions?
    Wow. Time for a coffee cup. Impressive if someone actually read all of this. My goal is to reach the 1000 VIT mark someday, so share any opinions, and just ask if something is unclear. I'm open to all input, since I can see the software hitting a memory barrier someday if I want to reach that 1000 mark, hehe.

  • Merging datasources - Confusion with SD Sales order case study

    Hello
    This is a continuation of my last post.
    My case study states that the fields required are on the order. See the excerpt:
    "The SD Consultant thinks this is an excellent report and he has advised that the transmissions are held in table NAST, the gross value is sub total 1 in table VBAP and the cost is also in VBAP. He thinks the rest of the fields are on the order."
    I have been able to identify NAST as the output/message status table and VBAP (2LIS_11_VAITM) as sales document item data, but what he means by "on the order" is not clear: the order header table, the schedule line, or the old sales order structure 2LIS_01_S260?
    The other confusion is that the payment terms and the complete delivery indicator are not in any of those DataSources. Is it a case of another term being used for the field in the order tables that I could not recognize, or will I have to look elsewhere?

    Why not create a wrapper for the BAPI, which generates the serial numbers and then calls BAPI_SALESORDER_CREATEFROMDAT2, passing it the serial number info along with everything else?
    Hope this helps.
    Sudha

  • OBIEE 11g RPD and case study document required for practice

    Hi OBIEE gurus,
    Could you please help me by posting a sample OBIEE 11g RPD and a case study document for creating Answers and Dashboards?
    I need to brush up my skills on creating Answers and Dashboards.
    Thanks in advance.

    Satya,
    have you looked at the sample app? It has many different cases and you can "play" with the data yourself. Quite powerful:
    http://www.oracle.com/technetwork/middleware/bi-foundation/obiee-samples-167534.html
    have fun
    alex

  • [ask] ucm security model case study

    hi fellow Stellent users,
    I have a question about a case study that I'm trying to solve.
    The case study: suppose a corporation named acme. I create security groups (public, internal, sensitive, secret), semantically a clearance level. Then I create hierarchical accounts based on acme's divisions:
    acme/finance
    acme/acct
    acme/marketing
    Then I create these virtual folders (primarily used in the WebDAV integration):
    /finance: account: acme/finance
    /acct: account: acme/acct
    /marketing: account: acme/marketing
    This seems OK: all users in the finance dept can only view/access/edit the /finance folder (and its contents).
    But there are new requirements. Suppose finance users want to create a subfolder under /finance, e.g. /finance/shared, but they want to share this folder so that it is accessible to acct and marketing users. How can I do this?
    I already tried creating a new account acme/finance/shared, assigning it to the /finance/shared folder, and adding that account to all users who need to access the folder. But there seems to be a problem: when I browse UCM with Windows Explorer (WebDAV) using a marketing user id, I can't see the /finance/shared folder, probably because the parent /finance folder is hidden/not permitted for them (the marketing guys).
    So what is the workaround for this problem? Can a user create a folder that can be shared with other accounts, under a parent folder that is not shared? What is the best practice in UCM for accomplishing this scenario, especially when working in a Windows/WebDAV environment? Are there any changes I must make to my current security model?
    thanks,
    your answers will be very appreciated. :)
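The visibility problem in the question can be modeled in a few lines. This is an illustrative sketch of hierarchical account matching, not the UCM security API; the account strings come from the scenario above:

```python
# Illustrative model of hierarchical account matching (not the UCM API):
# a user can read content whose account equals, or is nested under, one
# of the accounts granted to that user.
def has_access(user_accounts, folder_account):
    return any(folder_account == a or folder_account.startswith(a + "/")
               for a in user_accounts)

def visible_in_webdav(user_accounts, path_accounts):
    # WebDAV browsing only reaches a folder if every ancestor folder on
    # the path is readable too -- which is exactly what fails here.
    return all(has_access(user_accounts, a) for a in path_accounts)

marketing_user = ["acme/marketing", "acme/finance/shared"]
# The direct grant on the shared folder works...
assert has_access(marketing_user, "acme/finance/shared")
# ...but the walk /finance -> /finance/shared is blocked at the parent:
assert not visible_in_webdav(marketing_user,
                             ["acme/finance", "acme/finance/shared"])
```

The model suggests why workarounds usually involve moving the shared folder out from under /finance (e.g. a top-level /shared with its own account) rather than granting the child account alone.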

    Sapan, yes, I understand that, and I have read it also. The problem is we would rather manage the ROLES within UCM, such that subadmins can create roles etc. within UCM without having access to LDAP. Basically we would like to give role-creation access to a subadmin rather than set it up in LDAP, but at the same time we would like users to be authenticated via LDAP, because we want to use Single Sign-On.
    So basically the solution that I am looking for is following:
    1) Users get Authenticated ONLY via LDAP. No group mappings or filtering needs to be done (Use Group Filtering/Use Full Group Names in LDAP provider are NOT checked)
    2) Setup user's roles/groups within UCM by a Sub Admin.
    Basically what I would like to do is have several websites in our UCM, where each website can have subadmins who can grant/remove permissions for users that reside in UCM (external/internal, anyone). Moreover, I would like to give subadmins rights only to their OWN website; they should not be allowed to do any administration work for other websites that they are not subadmin for. Also, none of the users/subadmins should see any search results from other websites' data that they do not have permission for.
    This is a slightly complex requirement: first, I do not know if UCM is capable of this; second, I am a newbie with UCM (I have worked with Documentum in the past), so any suggestion is very welcome. Thanks!

  • Small Case Study

    I have one small case study that I am trying to solve to understand dimensional data modeling concepts better.
    We deal with lots of small securities (loans) every day. We generate time-series reports from this data; most of the time we look at market values and durations of the securities at an aggregated level.
    Every day we get thousands of securities, each having a type, category, coupon and an amount. E.g.:
    Sec Id | Type  | Category         | Coupon | Amount | Date
    100    | Arm   | SF_ARMS          | 5.0    | $1200  | 04/27/05
    101    | Arm   | SF_ARMS_TREASURY | 5.5    | $2000  | 04/27/05
    102    | Fixed | SHORT_TERM       | 5.5    | $1800  | 04/27/05
    103    | Fixed | LONG_TERM        | 6.0    | $1000  | 04/27/05
    Sec Id | Market_Value | Duration | Market | Calc | Batch | Date
    100    | 1350         | 3.12     | M1     | C1   | B1    | 04/28/05
    101    | 2100         | 2.5      | M2     | C1   | B1    | 04/28/05
    102    | 1900         | 3.0      | M1     | C1   | B1    | 04/28/05
    103    | 1100         | 2.7      | M1     | C1   | B1    | 04/28/05
    I have to produce a report like this:
                          Market Value | Duration
    Arm
      SF_ARMS                  X       |    X
      SF_ARMS_TREASURY         X       |    X
    Fixed
      SHORT_TERM               X       |    X
      LONG_TERM                X       |    X
    My questions are:
    1) The dimensions I have identified are: Securities, Market, Calculator, Batch, Time.
    2) Do we need two separate fact tables for market value and duration, or can they be in one?
    3) Should amount and coupon be security attributes, or sit in a separate fact table? According to one book, any numerical values should go into a fact table.
    4) What about type and category: are these attributes of the Security dimension?
    Any guidance in this direction will be highly appreciated.
    Shalu

    Shalu, here are a few more items to consider. I'll take a stab at these because I'm currently working on a similar investments cube (though with a lot more dimensions)
    - Market value and duration can go into a single fact table if they share the same dimensionality (as noted). Not sure of your application, but are you trying to do any market rate scenario analysis (i.e. what happens if the yield curve shifts up 50 b.p. or down 50 b.p.)? If so, then some variables (duration, avg life, convexity) will need to be dimensioned by scenario, while others (book value, for instance) do not fluctuate based on the scenario and therefore would be in a different cube.
    - Amount and Coupon rate should probably not be stored in the fact table. Having said this, you have several options:
    1) store as attributes in the securities dimension
    Pros: easy for users to select all securities that match a given amount or coupon rate
    Cons: Difficult to band these together on a report or to aggregate the totals (i.e. total market value of all securities in the 4.00 - 4.99% coupon rate band)
    2) store as hierarchies in the securities dimension
    Pros: both amt and coupon could be banded and summarized over the hierarchy, making banding reports very easy to do
    Cons: Difficult to impossible to easily show BOTH amount bands and coupon bands on a single report, since one OLAP query will only allow one hierarchy to be selected
    3) store as separate dimensions outside of securities
    Pros: easy to band, can show both bands simultaneously on a report
    Cons: creates 2 more dimensions that increases cube size (although you will find the new "compressed composites" in 10g work wonders on this)
    Note that all these points also apply to your #4 re: type and category.
    Just because I'm curious, what information do your dimensions "calculator", "batch", and "time" provide?
    Thanks,
    Scott
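The requested report boils down to a star join: a securities dimension carrying Type/Category, a fact row set carrying the measures, aggregated at the Type/Category level. A minimal sketch in plain Python (no OLAP engine), using the sample rows from the question:

```python
# Minimal sketch of the star join behind the requested report.
from collections import defaultdict

securities = {   # dimension table: sec_id -> descriptive attributes
    100: {"type": "Arm",   "category": "SF_ARMS"},
    101: {"type": "Arm",   "category": "SF_ARMS_TREASURY"},
    102: {"type": "Fixed", "category": "SHORT_TERM"},
    103: {"type": "Fixed", "category": "LONG_TERM"},
}
facts = [        # fact table: (sec_id, market_value, duration)
    (100, 1350, 3.12),
    (101, 2100, 2.5),
    (102, 1900, 3.0),
    (103, 1100, 2.7),
]

report = defaultdict(lambda: {"market_value": 0.0, "durations": []})
for sec_id, mv, dur in facts:
    dim = securities[sec_id]                 # star join via the key
    key = (dim["type"], dim["category"])
    report[key]["market_value"] += mv        # additive measure: summed
    report[key]["durations"].append(dur)     # non-additive: averaged below

for (typ, cat), row in sorted(report.items()):
    avg_dur = sum(row["durations"]) / len(row["durations"])
    print(f"{typ:6} {cat:18} {row['market_value']:>8.0f} {avg_dur:6.2f}")
```

This also illustrates Scott's point about shared dimensionality: market value and duration sit happily in one fact table here because both are keyed by the same (security, date, market, calc, batch) grain; only their aggregation rules differ.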

  • Data distribution in distributed caching scheme

    When using the distributed (partitioned) scheme in Coherence, how does data distribution happen among the nodes in the data grid? Is there an API to control it, or are there configuration settings for it?

    Hi 832093
    A distributed scheme works by allocating the data to partitions (by default there are 257 of these, but you can configure more for large clusters). The partitions are then allocated as evenly as possible to the nodes of the cluster, so each node owns a number of partitions. Partitions belong to a cache service, so you might have a cache service that is responsible for a number of caches, and a particular node will own the same partitions for all those caches. If you have a backup count > 0, then a backup of each partition is allocated to another node (on another machine, if you have more than one). When you put a value into the cache, Coherence basically performs a hash function on your key, which allocates the key to a partition and therefore to the node that owns that partition. In effect, a distributed cache works like a Java HashMap, which hashes keys and allocates them to buckets.
    You can have some control over which partition a key goes to if you use key association to co-locate entries in the same partition. You would normally do this to put related values in the same location, to make processing them on the server side more efficient in use cases where you might need to alter or query a number of related items. For example, in financial systems you might have a cache for Trades and a cache for TradeValuations in the same cache service. You can then use key association to allocate all the Valuations for a Trade to the same partition as the parent Trade. So if a Trade is mapped to partition 190 in the Trade cache, then all of the Valuations for that Trade map to partition 190 in the TradeValuations cache and hence live on the same node (in the same JVM process).
    You do not really want control over which nodes partitions are allocated to, as this could hamper Coherence's ability to evenly distribute partitions and allocate backups properly.
    JK
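JK's description can be reduced to a small sketch. This is illustrative Python, not the Coherence API: Coherence's actual hash and partition-assignment algorithms differ, and the node names are invented; only the default partition count of 257 comes from the answer above:

```python
# Illustrative sketch of Coherence-style partition ownership.
PARTITION_COUNT = 257            # the default partition count mentioned above
NODES = ["node1", "node2", "node3"]

def partition_for(key, associated_key=None):
    # The cache service hashes the key into one of a fixed set of
    # partitions. With key association, the *associated* (parent) key is
    # hashed instead, so related entries land in the same partition.
    k = associated_key if associated_key is not None else key
    return hash(k) % PARTITION_COUNT

def owner_of(partition):
    # Partitions are spread as evenly as possible across cluster nodes;
    # the owner of a partition owns it for every cache of the service.
    return NODES[partition % len(NODES)]

# A Trade and one of its Valuations co-locate via key association:
trade_part = partition_for("TRADE-190")
valuation_part = partition_for(("TRADE-190", "VAL-1"),
                               associated_key="TRADE-190")
assert trade_part == valuation_part
```

In real Coherence, this co-location is what implementing key association on the TradeValuation key achieves; which node ends up owning which partition is deliberately left to Coherence, as the answer advises.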

  • BI case study

    Hi!
    I would like to implement a case study in the area of FI or CO.
    Can someone give me a "How-to Guide" or other documentation in this area?
    My case:
    I would like to load data (accounting area, orders, etc.) from an SAP ECC 6.0 IDES system and analyze the data in SAP BW.
    Questions:
    What are the appropriate business objects (attributes, characteristics, InfoProviders, etc.)?
    Which objects do I need to load from the SAP ECC system?
    Is there a step-by-step example describing all the necessary steps?
    Thank you very much!
    regards
    Thom

    Hi,
    help.sap.com would be the best source for this.
    http://help.sap.com/saphelp_nw70/helpdata/EN/f1/b0833b33b0940ee10000000a11402f/frameset.htm
    http://sapdocs.info/2008/09/02/sap-bwa-step-by-step-guide/
    https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/a7f2f294-0501-0010-11bb-80e0d67c3e4a
    Hope it helps

  • Connection to MDM data manager

    Hi All .
    I have connected to the repository through the Console, but when I try to connect to the Data Manager nothing is displayed, just a blank screen. What can be the reason for this?
    Thanks

    Hi,
    What do you mean by "like a blank screen"? Are you able to see the Data Manager login window, where you need to provide user credentials? As soon as you click the Data Manager icon, it should open the login window; once you provide credentials, it opens Data Manager in record mode.
    Can you explain your issue in more detail? Are Import Manager and the other components working fine? Which MDM version are you working on?
    Also, if your last connected MDM server is not reachable (MDM 5.5), Data Manager and Import Manager sometimes hang because of the last database entry in the registry. Check this thread in case your problem is the same:
    Re: MDM Data Manager and Import Manager do not start
    Regards,
    Shiv

  • SAP BI Step by Step case study in connection with FI/CO

    Hi BI-experts!
    I would like to start with SAP BI. My goal is to become familiar with the connection of SAP BI to SAP FI/CO.
    Can someone tell me how to find a more or less real case study, e.g. from SAP Best Practices at
    http://help.sap.com/bp_biv270/ ?
    I would like to load master and transaction data from an SAP ERP system and analyze these data in SAP BI 7.0.
    Any helpful information will be much appreciated!
    Thank you!
    Axel S.

    Axel - normally a FICO implementation takes 6-9 months for a real case study and is highly dependent on the configuration of FICO at the individual site.
    You are best off looking at the help files for BI Content and using these as the basis for creating a working data model for a sample project, dependent on the source FICO config.

  • SAP BI Step by Step case study

    Hi BI-experts!
    I would like to start with SAP BI. My goal is to become familiar with the connection of SAP BI to SAP FI/CO.
    Can someone tell me how to find a more or less real case study, e.g. from SAP Best Practices at
    http://help.sap.com/bp_biv270/ ?
    I would like to load master and transaction data from an SAP ERP system and analyze these data in SAP BI 7.0.
    Any helpful information will be much appreciated!
    Thank you!
    Axel S.

    Hi,
    Please find the links below...
    Practical SAP BI Step by Step Guides
    Business Intelligence: Steps to get started with SAP BW
    https://www.sdn.sap.com/irj/sdn/wiki?path=/display/BI/StepstogetstartedwithSAPBW&
    SAP Business Information Warehouse Scenarios
    http://help.sap.com/bp_biv335/BI_EN/html/Bw.htm
    SAP BW Business Warehouse - Introduction
    http://www.thespot4sap.com/Articles/SAP_BW_Introduction.asp
    InfoObject, InfoCube, InfoSource, DataSource, communication structure, extract structure, etc.
    http://www.erpgenie.com/sapgenie/docs/MySAP%20BW%20Cookbook%20Vol%201.pdf
    Thanks==points
    Regards
    Sudheer
