Best practice for BI-BO 4.0 data model

Dear all,
We are planning to upgrade BOXI 3.1 to BO 4.0 next year and would like to know whether best practices exist for the BI data model. We have found some general BO 4.0 presentations, and it seems that enhancements and changes have been implemented; our goal is to better understand which BI data model best fits the BO 4.0 solution.
Have you found documentation or links to BI-BO 4.0 best practices to share?
Thanks in advance

Have a look at this document:
http://www.sdn.sap.com/irj/sdn/index?rid=/library/uuid/f06ab3a6-05e6-2c10-7e91-e62d6505e4ef#rating
Regards
Aban

Similar Messages

  • Best practices for loading apo planning book data to cube for reporting

    Hi,
    I would like to know whether there are any best practices for loading APO planning book data to a cube for reporting.
    I have seen two types of design:
    1) The planning book extractor data is loaded first to a cube within the APO BW system, and then transferred to the actual BW system. Reports are run from the actual BW system's cube.
    2) The planning book extractor data is loaded directly to a cube within the actual BW system.
    We do these data loads once a day, during evening hours.
    Rgds
    Gk

    Hi GK,
    What I have normally seen is:
    1) Data is extracted from the APO planning area to an APO cube (for backup purposes), weekly or monthly, depending on how much data change you expect and how critical it is for the business. Backups are mostly monthly for DP.
    2) Data is extracted from the APO planning area directly to a DSO in the staging layer of BW, and then to BW cubes, for reporting. This runs monthly for DP and daily for SNP.
    You can also use option 1 as you described: the APO cube is the backup cube, while the BW cube is the one you use for reporting, and the BW cube gets its data from the APO cube.
    The benefit in that case is that data has to be extracted from the planning area only once, so the planning area is available to jobs and users for more time. However, backup and reporting extraction get mixed together, so an issue in the flow could impact both the backup and the reporting. We have used this scenario recently and have yet to see the full impact.
    Thanks - Pawan

  • Best Practice for Oil and Gas Field Data Capture

    Hi All
    What is the best practice for capturing volume data of oil and gas commodities in SAP? There is a solution, [FDC|http://help.sap.com/miicont-fdc], that addresses the requirements; however, which parameters it asks for, what the process is, and how it calculates the different variables is not documented in the resource center.
    Appreciate any response.
    Regards
    Nayab

    Hi Zack,
    It will be easier for you to maintain the data in a single application. Every application must have the Category dimension, so you can use this dimension to maintain both the actual and the plan data.
    Hope this helps.

  • Best Practice for carrying forward monthly forecast data??

    Dear all:
    I am in the process of building a demo for monthly forecasting. I have 12 forecast categories, one for each period of the fiscal year. I am wondering whether there is a best practice for carrying data in the current forecast (month) category forward into the next forecast (month) category, besides running a standard copy at every month end.
    For instance, let's say I am in June, and I forecast in a category named "JunFcst" for the periods 2010.06, 2010.07, 2010.08 ... 2010.12. For the sake of the example, let's say I enter a value of 100 for each time period.
    When the next month comes, I would select "JulFcst", which is supposed to initially carry the numbers I entered in JunFcst. In other words, 2010.07 through 2010.12 in JulFcst should show a value of 100. There is no need to carry forward data prior to 2010.07, because those months already have actuals.
    I know I can easily carry data from JunFcst forward to the JulFcst category by running a standard copy data package, or perhaps by using some allocation logic. But if I choose this approach, I will have to run the copy or allocation package manually at every month end, or create and schedule 12 unique copy data packages, one for each month. Another approach is to use default logic to copy the data on the fly from JunFcst to JulFcst; I am less inclined to this because of the performance impact.
    Is there any way to create just one script logic that is intelligent enough to copy from one forecast category to another based on the current month? Is there a system function that can grab the current-month info within the script logic? (The date-to-category derivation is sketched at the end of this thread.)
    Thanks a bunch!
    Brian

    Which business rule (i.e. transformation, carry-forward) do you suggest? I looked into carry-forward but didn't think it would work, because it carries forward within the same category. Also, because we currently do not use carry-forward, we do not have a sub-dimension created...
    Thank you!
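    Leaving the script-logic syntax aside, the core of the question is deriving the current and previous months' category names from the system date. Below is a minimal sketch of that derivation in Java; the "MmmFcst" naming follows the example above, and everything else is illustrative, not any BPC API.

    import java.time.LocalDate;
    import java.time.format.DateTimeFormatter;
    import java.util.Locale;

    public class ForecastCategories {
        // Builds a category name such as "JunFcst" from a date, following
        // the naming convention used in the question above.
        static String categoryFor(LocalDate date) {
            return date.format(DateTimeFormatter.ofPattern("MMM", Locale.ENGLISH)) + "Fcst";
        }

        public static void main(String[] args) {
            LocalDate today = LocalDate.now();
            String source = categoryFor(today.minusMonths(1)); // e.g. "JunFcst"
            String target = categoryFor(today);                // e.g. "JulFcst"
            System.out.println("Copy " + source + " -> " + target
                    + " for periods from " + today.withDayOfMonth(1) + " onward");
        }
    }

    A scheduled job could compute these two names once a month and pass them as parameters to a single copy package, instead of maintaining twelve packages or running the copy by hand.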

  • Best Practice for SCVMM2012 setup in 3 data centers

    We are currently setting up SCVMM for our 2012 Hyper-V clusters. We have three data centers, each with one 2012 Hyper-V two-node cluster; the data centers are separated by a 10 Mb up / 50 Mb down pipe. There are three campuses.
    We plan to use SQL 2012 to house the VMMManagerDB.
    Is there any advantage to replicate this database to the other sites?
    Should we point the SCVMM to a single VMMManagerDB?
    In the past, for redundancy, we have VMM (2008) installed at each of the three campuses. We were planning to do the same for SCVMM 2012.
    Should we replicate the DB to the other 2 sites and have SCVMM point to the local replica?
    Thanks in advance

    For the best design of SCVMM, you can refer to the link below:
    http://blogs.technet.com/b/scvmm/archive/2012/07/24/now-available-infrastructure-planning-and-design-guide-for-system-center-2012-virtual-machine-manager.aspx
    I also recommend asking this question in the VMM forum.
    Mai Ali | My blog: Technical | Twitter: Mai Ali

  • Best Practices for loading large sets of Data

    Just a general question regarding an initial load with a large set of data.
    Does it make any sense to use a materialized view to help with load times for an initial load, or do I simply let the query run for as long as it takes?
    I'm just looking for advice on what the common approach is here.
    Thanks!
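    If this is Oracle, one pattern worth considering is pre-computing the expensive query into a materialized view and refreshing it on demand, so repeated loads reuse the stored result. A minimal sketch via JDBC follows; the connection details, table, and view names are hypothetical:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class InitialLoad {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection details; requires an Oracle JDBC driver.
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "loader", "secret");
                 Statement st = con.createStatement()) {

                // Pre-compute the expensive aggregation once; BUILD IMMEDIATE
                // populates the view at creation time.
                st.execute("CREATE MATERIALIZED VIEW mv_sales_load "
                         + "BUILD IMMEDIATE REFRESH COMPLETE ON DEMAND AS "
                         + "SELECT region, SUM(amount) AS total "
                         + "FROM sales GROUP BY region");

                // Later loads can refresh instead of re-running the full query.
                st.execute("BEGIN DBMS_MVIEW.REFRESH('MV_SALES_LOAD', 'C'); END;");
            }
        }
    }

    Whether this beats simply letting the query run depends on whether the same result set is loaded more than once; for a true one-off initial load, a materialized view mostly adds maintenance overhead.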


  • Best practice for real time requierement

    All,
    I am trying to find out the best practice for reporting against real-time data. Is it:
    1 - WebI against a universe/BEx query on top of hybrid cubes?
    2 - Crystal Reports directly against the ECC data?
    3 - another solution, such as Data Federator, or something different?
    I would like to know if anyone has had such a requirement and can share their experience. Did they face any big challenges with hybrid cubes?
    Thanks in advance for your help
    Philippe

    Well, their first requirement was to get real-time data: if I am in Xcelsius and click refresh, I want it to load my latest data.
    With Live Office, I can either schedule a Crystal report and get the data delayed, or use the Live Office option to refresh it right now. Is that a correct assumption?
    I was talking about BW, just in case they are willing to change the requirement from real time to every 5 minutes.
    Just so you know, we are also thinking of the following option:
    1 - modify the virtual provider on the CRM machine to get all the custom fields needed for the Xcelsius dashboard
    2 - build some interactive reports on top of these virtual providers within CRM
    3 - get the link to this report (it is one of the report features within CRM)
    4 - design and build the dashboard on top of it
    5 - export the SWF file to the CRM Web UI
    We are trying to see which one is best.
    Philippe

  • Best Practices for AP Power

    Can someone tell me or explain to me whether there are best practices for AP power on data or voice networks? Specifically, if you're tasked to do a wireless site survey with the intention that RRM will not be used, how would you configure the APs' power for data usage only, or for voice and data?
    I have never read anything that states a best practice for power. I tend to set APs at 6 or 12 mW for 2.4 GHz and 12 mW or 25 mW for 5 GHz, but there isn't anything to state whether my practice meets any type of best practice, so I'm just looking for supporting facts or an idea of what other wireless engineers use when they do site surveys.
    Any information or shared thoughts are much appreciated!
    Thanks,
    Mal

    Hello Mal,
    I'm sure your question will draw a number of responses from the different RF chefs here!
    I've been in Wi-Fi for a long time. When I sit down with educated customers, they ask the same question. Based on my experience designing high-availability, data-intensive networks, I follow the practice of designing for the lowest common denominator.
    With that said, the lowest common denominator is often a 5 GHz phone or even a 2.4 GHz Vocera badge. In either case, both devices live at or around 20 mW. I like to design my networks at 12.5 mW; I find this works better for RRM and also builds in headroom in case I need to adjust power on an AP.
    If a customer requires both a 2.4 GHz and a 5 GHz design, I will just design 5 GHz at 12.5 mW, as the 2.4 GHz network will surely fit inside the 5 GHz design.
    I hope this helps...

  • Best practice for external but secure access to internal data?

    We need external customers/vendors/partners to access some of our company data (view/add/edit). It's not as easy as segmenting those databases/tables/records out from the existing ones (and putting separate database(s) in the DMZ where our web server is). Our current solution is to have a port 1433 hole from the web server into our database server. The user credentials are not in any sort of web.config but rather are compiled into our DLLs, and that SQL login has read/write access to a very limited number of databases.
    Our security group says this is still not secure, but how else are we to do it? Even with a web service, there still has to be a hole somewhere. Is there any standard best practice for this?
    Thanks.

    Security is mainly about mitigation rather than being 100% secure; "we have unknown unknowns". The component needs to talk to SQL Server. You could continue to use HTTP to talk to SQL Server, perhaps even get SOAP transactions working, but personally I'd have more worries about using such a less-trodden path, since that is exactly the area where more security problems are discovered. I don't know your specific design issues, so there might be even more ways to mitigate the risk, but in general using a DMZ is a decent way to mitigate it. I would recommend asking your security team what they'd deem acceptable.
    http://pauliom.wordpress.com
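    To make the web-service option concrete, here is a minimal sketch of a DMZ intermediary that exposes one narrow read-only endpoint and uses a parameterized query, so only the DMZ host needs a path to port 1433. The endpoint, connection string, schema, and credentials are all hypothetical, and a real deployment would add TLS and authentication (the SQL Server JDBC driver must be on the classpath):

    import com.sun.net.httpserver.HttpServer;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class PartnerGateway {
        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8443), 0);
            // One narrow, read-only operation instead of a general SQL path.
            server.createContext("/order-status", exchange -> {
                String query = exchange.getRequestURI().getQuery(); // e.g. "id=12345"
                String body;
                try (Connection con = DriverManager.getConnection(
                            "jdbc:sqlserver://dbhost:1433;databaseName=Partners",
                            "svc_readonly", "secret");
                     PreparedStatement ps = con.prepareStatement(
                            "SELECT status FROM orders WHERE order_id = ?")) {
                    // Parameterized query: the caller never supplies SQL text.
                    ps.setInt(1, Integer.parseInt(query.replace("id=", "")));
                    try (ResultSet rs = ps.executeQuery()) {
                        body = rs.next() ? rs.getString("status") : "not found";
                    }
                } catch (Exception e) {
                    body = "error"; // never leak driver/SQL details to the caller
                }
                byte[] out = body.getBytes(StandardCharsets.UTF_8);
                exchange.sendResponseHeaders(200, out.length);
                exchange.getResponseBody().write(out);
                exchange.close();
            });
            server.start();
        }
    }

    This does not remove the hole; it narrows it to a single reviewable, read-only, parameterized operation, which is usually an easier conversation to have with a security team than a raw 1433 path.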

  • Best Practice for Using Static Data in PDPs or Project Plan

    Hi There,
    I want to build custom reports using PDPs and Project Plan data.
    What is the best practice for using "static/random data" (which is not available in MS Project 2013 columns) in PDPs and MS Project 2013?
    Should I add that data in a custom field (in MS Project 2013) or in PDPs?
    Thanks,
    EPM Consultant
    Noman Sohail

    Hi Dale,
    I have a project-level custom field, "Supervisor Name", that is used for Project Information.
    For the purpose of viewing that project-level custom field's data in project views, I have created a task-level custom field, "SupName", with the formula:
    [SupName] = [Supervisor Name]
    That shows the supervisor name in Schedule.aspx.
    ============
    Question: I want that project-level custom field "Supervisor Name" in My Work views (Tasks.aspx). The field is enabled in Tasks.aspx, but the data is not present; the column is blank.
    How can I get the data into My Work views?
    Noman Sohail

  • Best Practice for report output of CRM Notes field data

    My company has a requirement to produce a report with variable output, based upon a keyword search of our CRM request Notes data. Example: the business wants the report to return all service requests where the Notes field contains the word "pay", "payee", or "payment". As part of the report output, the business wants to freely select the output fields that accompany the notes data. Can anyone please advise on SAP's best practice for meeting a report requirement such as this? Is a custom ABAP application built? Does the data get moved to BW for reporting (and if so, how are notes handled)? Is the data moved to a separate system?

    Hi David,
    I would leave your query
    "Am I doing something wrong and did I miss something that would prevent this problem?"
    to the experts/ gurus out here on this forum.
    From my end, you can follow
    TOP 10 EXCEL TIPS FOR SUCCESS
    https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/204c3259-edb2-2b10-4a84-a754c9e1aea8
    Please follow the Xcelsius Best Practices at
    https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/a084a11c-6564-2b10-79ac-cc1eb3f017ac
    In order to reduce the size of xlf and swf files follow
    http://myxcelsius.com/2009/03/18/reduce-the-size-of-your-xlf-and-swf-files/
    Hope this helps to a certain extent.
    Regards
    Nikhil

  • Best Practice for SAP PI installation to share Data Base server with other

    Hi All,
    We are going for a PI three-tier installation, but I need a best-practice document on whether the PI installation should share a database server with other non-SAP applications. I have never seen SAP PI installed on a database server that other applications share. I do not know what the best practice is, but I am sure that sharing the database server with other non-SAP applications is not a clean architecture, so I need an SAP document on best practice to get this approved by management. If somebody has a document link, please let me know.
    With regards
    Sunil

    You should not mix different applications in one database.
    If you have a standard database license provided by SAP, this is not allowed. See these SAP notes for details:
    [581312 - Oracle database: licensing restrictions|https://service.sap.com/sap/bc/bsp/spn/sapnotes/index2.htm?numm=581312]
    [105047 - Support for Oracle functions in the SAP environment|https://service.sap.com/sap/bc/bsp/spn/sapnotes/index2.htm?numm=105047] -> number 23:
          23. External data in the SAP database
          Must be covered by an acquired database license (Note 581312).
          Permitted for administration tools and monitoring tools.
    In addition, we do not recommend using an SAP database with non-SAP software, since this constellation has considerable disadvantages.
    Regards, Michael

  • Best practice for data migration install v1.40 - Error 2732 Directory manag

    Hi
    I'm attempting to install SAP Best Practice for Data migration 1.40 on Win Server 2008 R2 (64 bit).
    Prerequisite error
    Installation program stops with missing file error
    The following file was not found
    ... \migration\InstallationWizard\BusinessObjects Data Services\setup.exe
    The file is necessary for successful installation. Please connect to internet or refer to Quick Guide (available on SAP note 1527151) for information regarding the above file.
    Windows installer log displays
    Error 2732 Directory Manager not initialized
    SAP note 1527151 does not exist or is internal.
    Any help on the root cause of the error would be appreciated, as the file does not exist in that folder in the installation zip file.
    The other prerequisite, .NET 3.5.1, is already met.
    The patch has been available since 20.11.2011, so I presume that it is a good installation set.
    Thanks,
    Alan

    Hi Alan,
    There are details on Data Migration v1.40 installations on the SAP website and marketplace. The link below should guide you to the right place; it has a PowerPoint presentation and other useful links as well.
    http://help.sap.com/saap/sap_bp/DMS_V140/DMS_US/html/index.htm
    Arun

  • In the Begining it's Flat Files - Best Practice for Getting Flat File Data

    I probably should have posed this question here before I delved into writing Java to get data for reports, but better late than never.
    Our ERP is written in COBOL. We have a third-party ODBC driver that allows us to access the data using a version of SQL. I have several Java sources compiled in my database that access the data and return something relevant. The Java sources are written in a procedural style rather than taking advantage of object-oriented programming with attributes and methods.
    Now that I am becoming more comfortable with the Java language, I would greatly appreciate any feedback as to best practices for incorporating Java into my database.
    My guess is that it would be helpful to model the ERP "tables" with Java classes that would have attributes, which correspond to the fields, and methods to return the attributes in an appropriate way. Does that sound reasonable? If so, is there a way to automate the task of modeling the tables? If not reasonable, what would you recommend?
    Thanks,
    Gregory

    Brother wrote:
    I probably should have posed this question here before I delved into writing Java to get data for reports, but better late than never.
    Our ERP is written in COBOL. We have a third-party ODBC driver that allows us to access the data using a version of SQL. I have several Java sources compiled in my database that access the data and return something relevant. The Java sources are written in a procedural style rather than taking advantage of object-oriented programming with attributes and methods.
    OO is a choice, not a mandate. Using Java in a procedural way is certainly not ideal, but given that it is existing code, I would look more into whether it is well-written procedural code rather than at the lack of OO.
    Now that I am becoming more comfortable with the Java language, I would greatly appreciate any feedback as to best practices for incorporating Java into my database.
    My guess is that it would be helpful to model the ERP "tables" with Java classes that would have attributes, which correspond to the fields, and methods to return the attributes in an appropriate way. Does that sound reasonable? If so, is there a way to automate the task of modeling the tables? If not reasonable, what would you recommend?
    Normally you create a data model driven by business need. You then implement, using whatever means seem expedient in terms of other business constraints, to closely model that data model.
    It is often the case that there is a strong correlation between data models and tables, but certainly in my experience it is rare that there are not other needs driven by the data model (such as how foreign keys and link tables are implemented and used).
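    To make the advice above concrete, here is a minimal sketch of mapping one ERP "table" to a Java class over the SQL layer the poster described; the table and column names are hypothetical:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.util.ArrayList;
    import java.util.List;

    // One class per ERP "table": fields mirror the columns, and a loader
    // method turns rows into objects the reporting code can work with.
    public class Customer {
        private final String id;
        private final String name;
        private final String region;

        public Customer(String id, String name, String region) {
            this.id = id;
            this.name = name;
            this.region = region;
        }

        public String getId() { return id; }
        public String getName() { return name; }
        public String getRegion() { return region; }

        // Loads customers for one region via the third-party SQL layer.
        public static List<Customer> byRegion(Connection con, String region) throws Exception {
            List<Customer> result = new ArrayList<>();
            try (PreparedStatement ps = con.prepareStatement(
                    "SELECT CUST_ID, CUST_NAME, REGION FROM CUSTOMER WHERE REGION = ?")) {
                ps.setString(1, region);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        result.add(new Customer(rs.getString("CUST_ID"),
                                                rs.getString("CUST_NAME"),
                                                rs.getString("REGION")));
                    }
                }
            }
            return result;
        }
    }

    On automating the mapping: java.sql.DatabaseMetaData.getColumns() can enumerate the columns a driver exposes, which is enough to generate skeleton classes like this one; but, as noted above, the classes worth keeping are usually shaped by the business data model rather than generated one-to-one from the tables.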

  • Best practice for including additional DLLs/data files with plug-in

    Hi,
    Let's say I'm writing a plug-in which calls code in additional DLLs, and I want to ship these DLLs as part of the plug-in. I'd like to know what is considered "best practice" here: whether this is OK at all (assuming, of course, that the uninstaller is set up to remove them correctly), and if so, where the best place to put the DLLs is.
    Is it considered OK to ship additional DLLs at all, or should I try to statically link everything?
    If it's OK to ship additional DLLs, should I install them in the same folder as the plug-in DLL (e.g. the .8BF or whatever), in a subfolder of the plug-in folder, or somewhere else?
    (I have the same question about shipping additional files too, such as data or resource files.)
    Thanks
                             -Matthew

