Best practice for storing OLAP and ROLAP data

Hi all,
We are looking at building OLAP cubes based on our existing relational data warehouse. Should we just build the OLAP cubes in another schema on the existing data warehouse database, or are there fundamental reasons for not storing the OLAP cubes on the same database or machine as the source data, i.e. the data warehouse star schema?
There is very little activity on this database during the daytime once all the ETLs complete overnight, so I can't see any reason to have another machine and database just for the OLAP cubes. Any comments on why this would be a good idea?
Thanks for any comments,
Brandon

Hi Jos,
Thanks, had a squizz. Looks like it needs its own tablespace (preferably striped across multiple disks), and undo and temp need to be examined to ensure a suitable config. No mention of a separate box.
Cheers
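
For illustration, the layout described above (a dedicated tablespace spread across multiple disks) could be created roughly like this. This is a minimal sketch assuming the python-oracledb driver; the user, DSN, sizes and datafile paths are all hypothetical:

```python
# Minimal sketch: give the cube structures their own tablespace with
# datafiles on separate disks. Credentials, DSN and paths are hypothetical.
import oracledb

conn = oracledb.connect(user="dba_user", password="changeme", dsn="dwhost/dwpdb")
with conn.cursor() as cur:
    cur.execute("""
        CREATE TABLESPACE olap_cubes
        DATAFILE '/u01/oradata/olap_cubes01.dbf' SIZE 10G,
                 '/u02/oradata/olap_cubes02.dbf' SIZE 10G
    """)
conn.close()
```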

Similar Messages

  • Best practices for loading APO planning book data to a cube for reporting

    Hi,
    I would like to know whether there are any best practices for loading APO planning book data to a cube for reporting.
    I have seen 2 types of Design:
    1) The Planning Book Extractor data is loaded first to the APO BW system within a cube, and then gets transferred to the actual BW system. Reports are run from the actual BW system cube.
    2) The Planning Book Extractor data is loaded directly to a cube within the Actual BW system.
    We do these data loads once a day, during evening hours.
    Rgds
    Gk

    Hi GK,
    What I have normally seen is:
    1) Data would be extracted from the APO Planning Area to an APO cube (for BACKUP purposes), weekly or monthly, depending on how much data change you expect, or how critical it is for the business. Backups are mostly monthly for DP.
    2) Data extracted from the APO planning area directly to a DSO in the staging layer in BW, and then to BW cubes, for reporting.
    For DP this is monthly; for SNP, daily.
    You can also use the option 1 that you mentioned. In this case, the APO cube is the backup cube, while the BW cube is the one you use for reporting, and the BW cube gets its data from the APO cube.
    The benefit in this case is that we have to extract data from the Planning Area only once, so the planning area is available to jobs/users for more time. However, backup and reporting extraction get mixed in this case, so issues in the flow could impact both the backup and the reporting. We have used this scenario recently and are yet to see the full impact.
    Thanks - Pawan

  • IDoc processing best practices - use of RBDAPP01 and RBDMANI2

    We are having performance problems processing inbound IDocs. The message type is SHPCON, and the transaction volume is very high. I am a functional consultant, not an ABAP developer, but I will try my best to explain our current setup.
    1) We have a number of message variants for the inbound SHPCON message, almost all of which are set to trigger immediately upon receipt under the "Processing by Function Module" setting.
    2) For messages that fail to process on the first try, we have a batch job running frequently using RBDMANI2.
    Almost every day we have instances where the RBDMANI2 job gets stuck running for a very long time. We frequently have multiple SHPCON IDocs coming in containing the same material number, and IDocs frequently fail because the material in the IDoc has become locked. Once the stuck batch job is cancelled and the job starts running again normally, the materials unlock and the failed IDocs begin processing. The variant for the RBDMANI2 batch job is currently set with a packet size of 1 and without parallel processing enabled.
    I am trying to determine the best practice for processing inbound IDocs such as these for maximum performance in a very high-volume system. I know that RBDAPP01 processes IDocs in status 64 and 66, and RBDMANI2 is used to reprocess IDocs in all statuses. I have been told that setting the messages to trigger immediately in WE20 can result in poor performance. So I am wondering if the best practice is to:
    1) Set messages in WE20 to "Trigger by background program"
    2) Have a batch job running RBDAPP01 to process inbound IDocs waiting in status 64
    3) Have a periodic batch job running RBDMANI2 to try to clean up any failed messages that can be processed
    I would be grateful if somebody more knowledgeable than myself could confirm the best practice for this process and comment on the correct packet size in the program variant, and on whether or not parallel processing is desirable. Because of the material locking issue, I felt that parallel processing was not desirable and might actually increase the material locking problem. I would welcome any comments.
    This appeared to be the correct area for this discussion based upon other similar discussions. If it is not, I would be grateful if a moderator could re-assign it to the correct area (if possible) or let me know the best place to post it. Thank you for your help.

    Hi Bob,
    Not sure if there is an official best practice, but SAP Note 1333417 (Performance problems when processing IDocs immediately) does state that immediate processing is not a good option for high volumes.
    I'm hoping that for SHPCON there is no dependency in the IDoc processing (i.e. it's not important whether they're processed in the same sequence or not); otherwise it would add another level of complexity.
    In the past, for high-volume IDoc processing we scheduled a background job with RBDAPP01 (with parallel processing) and RBDMANIN as a second step in the same job to re-process the IDocs that had errors due to locking issues. RBDMANI2 has a parallel processing option, but it was not needed in our case (actually, we specifically wouldn't want to parallel-process the errors, to avoid running into a lock issue again). In short, your steps 1-3 are correct, but 2 and 3 should rather be in the same job.
    Also I believe we had a designated server for the background jobs, which helped with the resource availability.
    As a side note, you might want to confirm that the performance issues are caused only by the high volume. An ABAPer or a Basis admin should be able to run a performance trace. There might be an inefficiency in the process that could be adding to the performance issue as well.
    Hope this helps.

  • MMO best practice: download music and heavy files to the user's hard disk?

    I have just downloaded a Hello Kitty MMO app for research (for my kid of course).
    I am developing my English-teaching app with LOADS of classical music, mp3 sentences and heavy backgrounds. Would the best idea be for the client to download these to their hard disk, i.e. I would not need to stream them and would therefore save a fortune on bandwidth charges from my ISP?
    Cheers

    I see what you mean, i.e. they have to get the file to their computer one way or another, BUT:
    a. If they are going to repeatedly use that file, e.g. a custom cursor or a classical piece of music, every week when they log on, then it would be better for them to have it on their hard disk, wouldn't it? If not, they would have to download it every time they log on. I take it that's why the Hello Kitty site makes you download 130 MB, so you have everything on your hard disk, i.e. you will be reusing all those assets MANY times in the future. The experience will be very FAST as you have it on your local disk and don't have to wait for streaming.
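
    That download-once pattern is straightforward to sketch: check the local disk first and only fetch over the network on a cache miss. A minimal, hypothetical Python example (the asset URL and cache directory are made up):

```python
# Minimal sketch of a download-once asset cache: fetch each asset over
# the network only if it isn't already on the local disk.
# The base URL and cache directory are hypothetical.
import urllib.request
from pathlib import Path

BASE_URL = "https://example.com/assets/"   # hypothetical asset host
CACHE_DIR = Path.home() / ".myapp_cache"

def get_asset(name: str) -> Path:
    """Return a local path for the asset, downloading it on first use."""
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    local = CACHE_DIR / name
    if not local.exists():                 # cache miss: pay bandwidth once
        urllib.request.urlretrieve(BASE_URL + name, str(local))
    return local                           # cache hit: instant, no streaming

# e.g. music = get_asset("classical_theme.mp3")
```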

  • What are the best practices for Database management and performance tuning?

    Hello,
    I want to ensure that I am using the best practices for managing and maintaining our Database.
    Is there any documentation out there that outlines how to maintain and ensure top performance out of our database?
    Thank you!
    John Sefton

    I appreciate the responses, however this is not the information I am looking for.
    I am specifically looking for best practices involving management and performance tuning.
    For example: are there tools that I can install that will monitor the size and response time of the database and alert me if there is degradation in performance?
    Are there specific periodic activities I should be doing to guarantee that my database will continue to function the way it is supposed to?
    Or is this a fire-and-forget solution that does not need this attention?
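
    For illustration, the kind of periodic check John is asking about (time a probe query, watch the database size, alert on degradation) can be sketched in a few lines. This minimal Python example uses the stdlib sqlite3 driver purely as a stand-in; the thresholds, probe query and alert mechanism are assumptions to swap for your own:

```python
# Minimal sketch of a periodic health check: time a probe query and
# check the database size, reporting alerts when thresholds are exceeded.
# Thresholds, probe query and alerting are assumptions, not a product.
import os
import sqlite3
import time

DB_PATH = "app.db"                 # hypothetical database file
MAX_RESPONSE_SECS = 0.5
MAX_SIZE_BYTES = 10 * 1024**3      # 10 GB

def check_health() -> list[str]:
    alerts = []
    start = time.perf_counter()
    with sqlite3.connect(DB_PATH) as conn:
        conn.execute("SELECT 1").fetchone()    # cheap probe query
    elapsed = time.perf_counter() - start
    if elapsed > MAX_RESPONSE_SECS:
        alerts.append(f"slow response: {elapsed:.3f}s")
    size = os.path.getsize(DB_PATH)
    if size > MAX_SIZE_BYTES:
        alerts.append(f"database too large: {size} bytes")
    return alerts                  # a non-empty list means: notify the DBA

print(check_health())
```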

  • Best practices to separate voice and data VLANs

    Hello all,
    I am coming to the community to get some advice on a specific subject.
    One of my customers is currently using VLAN access-lists to isolate its data traffic from its voice VLAN traffic.
    As most of us know, VLAN ACLs are very difficult to deploy and manage at an access-port level that is highly mobile. Because of these management issues they have been looking at a replacement solution based on firewalls, but apparently the price of that solution was far too high.
    Can someone guide me towards security best practices when it comes to data and voice VLAN traffic isolation, please?
    thanks
    Regards
    T.

    thomas.fayet wrote: Hi again Collin, may I ask what type of FW / switches / IOS version you are using for this topology? Also, is the media traffic going through your FW if one voice VLAN wants to talk to another voice VLAN? rgds
    Access Switches: 3560
    Distro: 4500 or 6500
    FW: ASA5510 or Juniper SSG 140 (phasing out the Junipers)
    It depends. In the drawing above, no voice traffic would leave the voice enclave until it talks to a remote site. If we add other sites to the drawing, at a minimum call-sig would traverse the firewall and depending on the location of the callers, all voice traffic may cross the firewall. All of that depends on how you have your call managers/vm/voice gateways designed and where the callers are.

  • Best Practices for Using Photoshop (and Computing in General)

    I've been seeing some threads that lead me to realize that not everyone knows the best practices for doing Photoshop work on a computer, or for doing conscientious computing in general.  I thought it might be a good idea for those of us with some experience to contribute and discuss best practices for making the Photoshop and computing experience more reliable and enjoyable.
    It'd be great if everyone would contribute their ideas, and especially their personal experience.
    Here are some of my thoughts on data integrity (this shouldn't be the only subject of this thread):
    Consider paying more for good hardware. Computers have almost become commodities, and price shopping abounds, but there are some areas where spending a few dollars more can be beneficial.  For example, the difference in price between a top-of-the-line high performance enterprise class hard drive and the cheapest model around with, say, a 1 TB capacity is less than a hundred bucks!  Disk drives do fail!  They're not all created equal.  What would it cost you in aggravation and time to lose your data?  Imagine it happening at the worst possible time, because that's exactly when failures occur.
    Use an Uninterruptible Power Supply (UPS).  Unexpected power outages are TERRIBLE for both computer software and hardware.  Lost files and burned out hardware are a possibility.  A UPS that will power the computer and monitor can be found at the local high tech store and doesn't cost much.  The modern ones will even communicate with the computer via USB to perform an orderly shutdown if the power failure goes on too long for the batteries to keep going.  Again, how much is it worth to you to have a computer outage and loss of data?
    Work locally, copy files elsewhere.  Photoshop likes to be run on files on the local hard drive(s).  If you are working in an environment where you have networking, rather than opening a file right off the network, then saving it back there, consider copying the file to your local hard drive then working on it there.  This way an unexpected network outage or error won't cause you to lose work.
    Never save over your original files.  You may have a library of original images you have captured with your camera or created.  Sometimes these are in formats that can be re-saved.  If you're going to work on one of those files (e.g., to prepare it for some use, such as printing), and it's a file type that can be overwritten (e.g., JPEG), as soon as you open the file save the document in another location, e.g., in Photoshop .psd format.
    Save your master files in several places.  While you are working in Photoshop, especially if you've done a lot of work on one document, remember to save your work regularly, and you may want to save it in several different places (or copy the file after you have saved it to a backup folder, or save it in a version management system).  Things can go wrong and it's nice to be able to go back to a prior saved version without losing too much work.
    Make Backups.  Back up your computer files, including your Photoshop work, ideally to external media.  Windows now ships with a quite good backup system, and external USB drives with surprisingly high capacity (e.g., Western Digital MyBook) are very inexpensive.  The external drives aren't that fast, but a backup you've set up to run late at night can finish by morning, and if/when you have a failure or loss of data you'll be very glad to have it.  And if you're really concerned with backup integrity, you can unplug an external drive and take it to another location.
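
    To make the backup habit concrete, here is a minimal Python sketch that mirrors a working folder to a timestamped backup folder; both paths are hypothetical and stand in for whatever backup tool you actually use:

```python
# Minimal sketch: copy the working folder to a timestamped backup folder
# on another drive. Both paths are hypothetical.
import shutil
from datetime import datetime
from pathlib import Path

WORK_DIR = Path.home() / "PhotoshopWork"   # local working files
BACKUP_ROOT = Path("E:/Backups")           # e.g. an external USB drive

def backup() -> Path:
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = BACKUP_ROOT / f"PhotoshopWork-{stamp}"
    shutil.copytree(WORK_DIR, dest)        # each run gets its own folder
    return dest

if __name__ == "__main__":
    print("backed up to", backup())
```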
    This stuff is kind of "motherhood and apple pie" but it's worth getting the word out I think.
    Your ideas?
    -Noel

    APC Back-UPS XS 1300.  $169.99 at Best Buy.
    Our power outages here are usually only a few seconds; this should give my server about 20 or 25 minutes run-time.
    I'm setting up the PowerChute software now to shut down the computer when 5 minutes of power is left.  The load with the monitor sleeping is 171 watts.
    This has surge protection and other nice features as well.
    -Noel

  • Best practice for BI-BO 4.0 data model

    Dear all,
    we are planning to upgrade BOXI 3.1 to BO 4.0 next year and would like to know if a best practice exists for the BI data model. We found some general BO 4.0 presentations, and it seems that enhancements and changes have been implemented: our goal is to better understand which BI data model best fits the BO 4.0 solution.
    Have you found documentation or links to BI-BO 4.0 best practices to share?
    Thanks in advance

    Have a look at this document:
    http://www.sdn.sap.com/irj/sdn/index?rid=/library/uuid/f06ab3a6-05e6-2c10-7e91-e62d6505e4ef#rating
    Regards
    Aban

  • The best practice for creating reports and dashboard

    Hello guys
    I am trying to put together a list of best practices on how to create reports and dashboards using the OBIEE presentation service. I know a lot of those dos and don'ts are just corporate words that don't apply consistently in real-world environments, but I'd still like to know whether Oracle has any officially defined best practices or not.
    The only best practice I can think of when it comes to building reports and dashboards is:
    Each subject area should contain only one star schema that holds data for a specific area of business information
    Is there anything else?
    Please advise
    Thanks

    Read this book to understand what a dashboard is, and what it should do and look like to be used by the end users. Very enlightening.
    Information Dashboard Design: The Effective Visual Communication of Data by Stephen Few. (There are a couple of other books by Stephen and, although I haven't read them yet, I anticipate they will be equally helpful.)
    This book was also helpful to me:
    http://www.amazon.com/Performance-Dashboards-Measuring-Monitoring-Managing/dp/0471724173
    I also found this book helpful in Best Practices...
    http://www.biconsultinggroup.com/knowledgebase.asp?CategoryID=337

  • Best practice for cache and CALCLOCKBLOCK settings?

    Hi,
    I want to implement Hyperion Planning.
    What are the best practices for Essbase settings to optimize performance?

    Personally, I would work out the application design before you consider performance settings. There are so many variables involved that to try to do it upfront is going to be difficult.
    That being said, each developer has their own preferred approach: some will automatically add certain settings to the Essbase.cfg file, or set certain application-level settings via EAS (Index Cache, Data Cache, Data File Cache).
    There are many posts discussing these topics in this forum, so I suggest you do a search and gather some opinions.
    Regards
    Stuart

  • Best practice for displaying a description and getting the Id from af:selectOneChoice

    HI Experts,
    I have a table DeptTable with two attributes: 1) DeptId and 2) DeptName.
    I want to display the list of all DeptName values in an <af:selectOneChoice> component, and in the backing bean or managed bean I want to get the corresponding DeptId for the DeptName selected in the <af:selectOneChoice>.
    To be precise, in the UI I want to see:
    Dept A
    Dept B
    Dept C
    And in the bean I want to get:
    101
    102
    103.
    If I select Dept A, I want to get 101.
    So, what is the best practice to achieve this?
    Regards,

    CJ,
    I vote for solution 1, as it is exactly your use case. As you said, you want to update his_status_id and her_status_id by selecting a STATUSES.short_txt_descr from a drop-down. This is exactly the LOV solution.
    View links are used for master-detail navigation (which you don't do here), and joining the data makes it difficult to update (and you would still need an LOV for the drop-down box).
    Timo

  • Best practice for Error logging and alert emails

    I have SQL Server 2012 SSIS. I have Excel files that are imported with an Excel Source and an OLE DB Destination. Scheduled jobs run the SSIS packages every night.
    I'm looking for advice on what the best practice is for a production environment. The requirements are as follows:
    1) If an error occurs in a task, an email is sent to the admin
    2) If an error occurs in a task, we have a log entry in a flat file or the DB
    Kenny_I

    Are you asking about the difference between using standard logging and event handlers? I prefer the latter, as standard logging will not always capture data in the way we desire. So we've developed a framework that adds the necessary functionality inside event handlers and logs the required data, in the required format, to a set of tables that we maintain internally.
    Visakh
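
    Outside of SSIS itself, the log-and-alert pattern behind both requirements (wrap each task, log failures, notify an admin) can be sketched generically. A minimal Python sketch follows; the SMTP host, addresses and file names are placeholders, not a real configuration:

```python
# Minimal sketch of the log-and-alert pattern: run each task, write
# failures to a flat-file log, and email the admin. SMTP host and
# addresses are placeholders.
import logging
import smtplib
from email.message import EmailMessage

logging.basicConfig(filename="etl_errors.log", level=logging.ERROR)

def alert_admin(subject: str, body: str) -> None:
    msg = EmailMessage()
    msg["From"] = "etl@example.com"        # placeholder sender
    msg["To"] = "admin@example.com"        # placeholder recipient
    msg["Subject"] = subject
    msg.set_content(body)
    try:
        with smtplib.SMTP("smtp.example.com") as smtp:   # placeholder host
            smtp.send_message(msg)
    except OSError:
        logging.error("could not send alert email", exc_info=True)

def run_task(name, task):
    try:
        task()
    except Exception as exc:
        logging.error("task %s failed: %s", name, exc, exc_info=True)
        alert_admin(f"ETL task {name} failed", str(exc))

def demo_task():
    raise RuntimeError("demo failure")

run_task("import_excel", demo_task)
```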

  • Best Practice for Multiple iTunes and One Account?

    My wife and I share our account for our iPhone Applications and Songs in iTunes.
    Can someone point me to a topic that covers a best practice for multiple iTunes syncing?
    I travel a lot and keep my iPhone synced to my MacBook Pro, while she uses the workstation at home to sync with. We keep the addresses, contacts and calendars synced through MobileMe; however, I'm trying to find the best way of pushing applications and songs I've bought over to the other computer, so they're still around for her to sync with while I'm traveling.
    Thoughts?

    Hi Steve,
    You might have been trying to find a solution for a long time. If I understood your question correctly, let me clarify a few points.
    You are trying to access a BEx query which is designed with exits in the background, based on the logic, and you are trying to call all the dimensions and key figures in a single connection, then map that data in the charts.
    Steve, try to make more connections based upon the logic and split them: use the same query, but split it into sales per customer group, sales per day and sales per week by making three different connections, and try that. You can merge the prompts from all connections.
    Hope this Helps!!!
    Sorry if I misunderstood your question.
    --SumanT

  • LUN size best practice for UC apps and VMware?

    Hi,
    We have UCS Manager v2.1 with FI 6248 direct FC attached to a NetApp with plenty of storage.
    Per the following doc, the LUN size for UC apps should be 500 GB - 1.5 TB, with 4 to 8 VMs per LUN.
    http://docwiki.cisco.com/wiki/UC_Virtualization_Storage_System_Design_Requirements#Best_Practices_for_Storage_Array_LUNs_for_Unified_Communications_Applications
    We have four B200 M3 blades, and 3 to 4 UC apps (CUCM, Unity, UCCX) will be hosted on each blade. We may add more VMs to the blades in the future.
    I am thinking four 1 TB LUNs, one for each blade (actually 8 LUNs in total: 4 boot LUNs for ESXi and 4 for the UC apps).
    What is the best practice (or common deployment) for LUN sizing and design?
    Thanks,
    Harry

    UC apps need low IO, nothing special; the reference VMware LUN design is OK.
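
    For what it's worth, the proposed layout can be sanity-checked against the docwiki guideline with a few lines of arithmetic (counts taken from the question above):

```python
# Sanity check of the proposed layout against the guideline
# (500 GB - 1.5 TB per LUN, 4 to 8 VMs per LUN). Counts from the question.
blades = 4
vms_per_app_lun = 4          # 3-4 UC apps per blade, one app LUN per blade
app_lun_size_tb = 1.0        # proposed app LUN size

boot_luns = blades           # one ESXi boot LUN per blade
app_luns = blades            # one UC app LUN per blade
print("total LUNs:", boot_luns + app_luns)                 # 8
print("size in range:", 0.5 <= app_lun_size_tb <= 1.5)     # True
print("VM density in range:", 4 <= vms_per_app_lun <= 8)   # True
```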

  • Best practice for carrying forward monthly forecast data?

    Dear all:
    I am in the process of building a demo for monthly forecasting. I have 12 forecast categories, one for each period of the fiscal year. However, I am wondering whether there is a best practice for carrying data in the current forecast (month) category forward into the next forecast (month) category, besides running a standard copy at every month end.
    For instance, let's say I am in June, and I forecast in a category named "JunFcst" for the periods 2010.06, 2010.07, 2010.08 ... 2010.12. For the purposes of the example, let's say I enter a value of 100 for each time period.
    When the next month comes, I would select "JulFcst", which is supposed to initially carry the numbers I entered in JunFcst. In other words, 2010.07 ~ 2010.12 in JulFcst should show a value of 100. There is no need to carry forward data prior to 2010.07 because those months already have actuals.
    I know I can easily carry data in the JunFcst category forward to JulFcst by running a standard copy data package, or perhaps by using some allocation logic. But if I choose this approach, I will have to run the copy or allocation package manually at every month end, or create/schedule 12 unique copy data packages, one for each month. Another approach is to use default logic to copy the data on the fly from JunFcst to JulFcst. I am less inclined to do this because of the performance impact.
    Is there any way to create just one script logic file that is intelligent enough to copy from one forecast category to another based on the current month? Is there a system function that can grab the current month info within the script logic?
    Thanks a bunch!
    Brian

    Which business rule (i.e. transformation, carry-forward) do you suggest? I looked into carry-forward, but I didn't think it would work, because it carries forward within the same category. Also, because we currently do not use carry-forward, we do not have a sub-dimension created...
    Thank you!
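
    BPC script logic has its own syntax, but the control flow Brian is after (derive the current and next forecast category names from today's date, then copy forward only the open periods) can be sketched language-neutrally. A minimal Python illustration with a hypothetical flat record layout; real BPC data lives in the cube:

```python
# Language-neutral sketch of the month-driven copy: derive the current
# and next forecast categories from today's date and copy forward only
# the open (future) periods. The record layout is hypothetical.
from datetime import date

def category_for(d: date) -> str:
    return d.strftime("%b") + "Fcst"              # e.g. "JunFcst"

def carry_forward(records: list[dict], today: date) -> list[dict]:
    src = category_for(today)
    nxt_year = today.year + (1 if today.month == 12 else 0)
    nxt_month = today.month % 12 + 1
    dst = category_for(date(nxt_year, nxt_month, 1))
    first_open = f"{nxt_year}.{nxt_month:02d}"    # earlier periods have actuals
    return [dict(r, category=dst) for r in records
            if r["category"] == src and r["period"] >= first_open]

data = [{"category": "JunFcst", "period": "2010.06", "value": 100},
        {"category": "JunFcst", "period": "2010.07", "value": 100}]
print(carry_forward(data, date(2010, 6, 30)))
# -> only the 2010.07 record, re-labelled "JulFcst"
```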
