Optimizing data load and calculation

Hi,
I have a cube that takes more than 2 hours to load and more than 3 hours to calculate (at its fastest build). There are times when my cube loads and calculates for more than 8 hours. My calculation only uses CALC ALL. I am very new to Essbase and couldn't find a way to minimize the build time of my cube.
Can anybody help? Here are some stats about my cube. I hope this helps.
Dimension Name      Type    Declared Size  Actual Size  Properties
===================================================================
ALL_ACCOUNTS        DENSE            7038         6141  Accounts <5> (Dynamic Calc)
ALL_LEDGERS         SPARSE              4            3  <1> (Label Only)
ALL_YEARS           SPARSE              3            1  <1> (Label Only)
ALL_MONTHS          SPARSE             22           22  Time <7> (Active Dynamic Time Series Members: Y-T-D, Q-T-D)
ALL_FUNCTIONS       SPARSE             55           54  <9>
ALL_AFFILIATES      SPARSE            715          696  <4>
ALL_BUSINESS_UNITS  SPARSE            452          440  <3>
ALL_MCC             SPARSE           1557         1536  <3>
Any suggestions would be greatly appreciated.
Thanks!
Joe

Joe,
There are too many potential optimizations to list and not enough detail to make any one or two suggestions. I can see some potential areas for improvement, but your best bet is to bring in a knowledgeable consultant for a couple of days to review the cube and make changes. For example, at one client, I made changes that brought a calculation down from 4+ hours to 5 minutes. It took changes to load rules, calc scripts and how they loaded their data. So it was not one thing, but multiple changes.
If you look at Jason's Hyperion Blog http://www.jasonwjones.com/?m=200908 , he describes taking a calculation down from 20 minutes to a few seconds. Again, not a single change, but a combination.
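Since the calc is just a CALC ALL against a cube whose only dense dimension (ALL_ACCOUNTS) is already tagged Dynamic Calc, one common first step is to aggregate only the sparse dimensions and set a few calculator options. A minimal sketch, assuming the dimension names from the stats above (the thread count and cache level are illustrative and need tuning per server; add ALL_MONTHS to the AGG if its upper levels are stored):

    /* Sketch: targeted sparse aggregation instead of CALC ALL */
    SET UPDATECALC OFF;   /* calculate all blocks, not just blocks marked dirty */
    SET AGGMISSG ON;      /* roll up #Missing values for faster consolidation */
    SET CALCPARALLEL 3;   /* illustrative: parallel calculation with 3 threads */
    SET CACHE HIGH;       /* use the CALCCACHEHIGH size from essbase.cfg */
    AGG ("ALL_FUNCTIONS", "ALL_AFFILIATES", "ALL_BUSINESS_UNITS", "ALL_MCC");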

Similar Messages

  • Selective data load and transformations

    Hi,
    Can you please clarify this for me?
    1. Selective data load and transformations can be done in:
        A.     Data package
        B.     Source system
        C.     Routine
        D.     Transformation Library-formulas
        E.     BI7 rule details
        F.     Anywhere else?
    If the above is correct, what is the order performance-wise?
    2. Can anyone tell me why not all the fields appear in the data package data selection tab, even though many are included in the datasource and data target?
    Tks in advance
    Suneth

    Hi Wijey,
    1. If you are talking about selective data load, you need to write an ABAP program in the infopackage for the field on which you want to select. The other way is to write a start routine in the transformations and delete all the records which you do not want. In the second method, you get all the data but delete the unwanted data, so that you process only the required data. Performance-wise, you need to observe: if the selection logic is complicated and takes a lot of time, the second option is better. Try both and decide for yourself which is better.
    2. Only the fields that are marked as available for selection in the DS are available as selection in the data package. That is how the system is.
    Thanks and Regards
    Subray Hegde

  • Performance issue - Loading and Calculating

    Hi,
    I have 5 GB of data. It takes 1 hr to load and 30 min to calculate. I did the following things to improve the performance:
    1) Sorted the data and loaded it in the order of largest sparse first, followed by smallest, and then dense
    2) Enabled parallel load, with 6 threads for prepare and 4 for writing
    3) Increased the data file cache to 400 MB, the data cache to 50 MB, and the index cache to 100 MB
    4) Calculated only 4 dimensions out of 9, of which 2 are dense and 2 are sparse
    5) Used parallel calculation with 3 threads and CALCTASKDIMS set to 2
    But I am not getting any improvements. While doing the calculation I got the following messages in the logs. I feel that CALCTASKDIMS is not working:
    [Fri Jan  6 22:01:54 2006]Local/tcm2006/tcm2006/biraprd/Info(1012679)
    Calculation task schedule [2870,173,33,10,4,1]
    [Fri Jan  6 22:01:54 2006]Local/tcm2006/tcm2006/biraprd/Info(1012680)
    Parallelizing using [1] task dimensions. Usage of Calculator cache caused reduction in task dimensions
    [Fri Jan  6 22:33:54 2006]Local/tcm2006/tcm2006/biraprd/Info(1012681)
    Empty tasks [2434,115,24,10,2,0]
    Can anyone tell me what the above log messages mean and what else can be done to improve the performance?
    Regards
    prsan

    It's not a problem with your CALCTASKDIMS.
    Calculation task schedule [2870,173,33,10,4,1] indicates that your parallel calc can start with 2870 calculations in parallel, after which 173 can be performed in parallel, then 33, 10, 4 and 1.
    Empty tasks [2434,115,24,10,2,0] means that many tasks don't need any calculation, either because there is no data or because they are marked clean due to intelligent calc.
    The problem lies with your calc cache setting: the log line "Usage of Calculator cache caused reduction in task dimensions" means Essbase fell back to a single task dimension because the calculator cache was too small. Try increasing the calc cache settings in your cfg file and use the cache high setting in your calc.
    Hope this works
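    The fix described above lives in two places: the calculator cache sizes in essbase.cfg and the SET commands in the calc script. A minimal sketch with illustrative values (the sizes are placeholders and should be derived from your sparse-dimension bitmap as described in the DBAG; cfg changes need a server restart):
    ; essbase.cfg - calculator cache sizes in bytes
    CALCCACHEHIGH    50000000
    CALCCACHEDEFAULT 10000000
    CALCCACHELOW      5000000
    Then in the calc script:
    SET CACHE HIGH;      /* use the CALCCACHEHIGH size configured above */
    SET CALCPARALLEL 3;  /* the 3 calc threads from the original test */
    SET CALCTASKDIMS 2;  /* honored only if the calculator cache is large enough */
    With a larger cache, the "Usage of Calculator cache caused reduction in task dimensions" message should go away and both task dimensions should be used.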

  • Increase No. of BGP while data load and how to bypass the DTP in Process Chain

    Hello All,
    We want to improve the performance of the loads. Currently we are loading the data from an external database through a DB link. Just to mention, we are on a BI 7 system. We are bypassing the PSA to load the data as quickly as possible. Unfortunately we cannot use the PSA, because load times are longer when we use it. So we are directly accessing views on the external database. The external database is also indexed as per our requirements.
    Currently our DTP is set to run on 10 parallel processes (in the DTP settings for Batch Manager, with job class A). Even though we set it to 10, we can see the loads running on only 3 or 4 background parallel processes. Not sure why. Does anyone know why it behaves like that and how to increase them?
    I want to split the load into three (different DTPs with different selections), with all three loading the data into the same InfoProvider in parallel. We have a routine in the selection that looks at a table to get the respective selection conditions, and all three DTPs kick off in parallel as part of the process chain.
    But in some cases we only get data for two DTPs or one (depending on the selection conditions). In this case, is there any way in a routine or in the process chain to say that if there is no selection for a DTP, then ignore that DTP or set it to success, so that the process chain continues?
    Really appreciate your help.

    Hi
    Sounds like a nice problem...
    Here is a response to your questions:
    Before I start, I just want to mention that I do not understand how you are bypassing the PSA if you are using a DTP. Be that as it may, I will respond regardless.
    When looking at performance, you need to identify where your problem is.
    First, execute your view directly on the database. Ask the DBA if you do not have access. If possible, perform a database explain on the view (this can also be done from within SAP, I think). This step is required to ensure that the view is not the cause of your performance problem. If it is, we need to implement steps to resolve that.
    If the view performs well, consider the following SAP BI ETL design changes:
    1. Are you loading deltas or full loads? When you have performance problems, the first thing to consider is making use of the delta queue (or changing the extraction to send only deltas to BI)
    2. Drop indexes before the load and re-create them after the load
    3. Make use of the BI 7.0 write-optimized DSO. This allows for much faster loads.
    4. Check if you do ABAP lookups during the load. If you do, consider loading the DSO that you are selecting from into memory and changing the lookup to read the in-memory table instead. This will save tremendous time in terms of DB I/O
    5. This will have cost implications, but the BI Accelerator will allow for much faster loads
    Good luck!

  • Flash xml data loading and unloading specs

    hi i am trying to get specification information that i cannot
    find anywhere else.
    i am working on a large flash project
    and i would like to load xml data into the same swf
    object/movieclip repeatedly.
    as i do not want the previously loaded items to unload, i need
    to know whether doing this will unload the items from the swf or just
    keep them in the library so they can be reposted without reloading.
    i cannot find any supporting documentation either way that
    tells me whether, if i load new content into a clip (i am aware
    levels overwrite), it will or will not unload this content.
    thanks in advance.
    mk

    this is awful for me -- i cant even get the clip to duplicate
    -- and i thought this would be the simplest solution to keeping
    everything cached for one page before and one page after current in
    the project.
    i have used a simpler clip to test the code and see if i am
    insane.
    // AS2 test case: duplicate a simple drawn clip
    duplicateMovieClip(_root.circle, "prv", 5); // copy _root.circle to a new clip "prv" at depth 5
    prv._x = 300;        // position the duplicate on stage
    prv._y = 300;
    prv._visible = true; // make sure the duplicate is visible
    prv.startDrag();     // attach it to the mouse
    this ALWAYS works when i use the _root.circle file of a green
    simple circle
    BUT
    when i change it to my main movie clip (which is loaded AND
    On screen -- it just doesnt duplicate at all!) -- i've even
    triggered it to go play frame 2 JUST IN CASE
    I've even set visibility to true JUST IN CASE
    ie all i do is change _root.circle to _root.cur
    and .... nada.
    AND _root.cur IS DEFINITELY on the screen and all xml
    components have been loaded into it. (it is a slide with a dynamic
    picture and dynamic type and it 100% works)
    has anyone had this insanity happen before?
    is this an error where flash cannot attach movie or duplicate
    a clip that has dynamic contents???

  • Data load and output problem

    Hi Experts,
    I am facing a peculiar problem. I have data flowing from R/3 to the master data InfoProvider 0Requi. I have validated the data between R/3 and the master data InfoProvider, and it is found to be good. This data is then loaded from 0Requi to a staging write-optimized DSO. There are routines in the transformations for the DSO. When I check and compare the data, I find that out of 625 records in the source only 599 are available in the target, and out of those 599 records, 17 are duplicates and 29 records have not been populated from source to target.
    Any help to solve the issue will be highly appreciated and thanked with suitable points.
    Thanks and Regards
    SHL

    Thank you very much Jen, Full points to you.
    There was nothing in the error stack. Sy_Subrc in the routine was giving the problem. It has been rectified and the Data is loading fine in the development system.
    Now I am in another peculiar situation.
    The routines, after debugging, work fine in the development system, but after transporting to the quality system for testing, they fail there. I am facing the same old problem there again. The transports were checked and done properly, and the ABAPer is satisfied with the transported code. If you can, please guide me. I am closing this thread and opening a new thread with the subject "In different behavior of Routines".
    Thank you once again Jen. Full points assigned to you.
    Kind Regards
    SHL

  • Data load and calc script

    Hi friend,
    In my cube I have one dimension where:
    1) all Level 0 members have consolidation operator ~
    2) Level 1 members also have consolidation operator ~
    3) but for Level 3 members the consolidation operator is +
    If I use the following calc, will it affect the dimension above?
    SET UPDATECALC OFF;
    SET AGGMISSG ON;
    CALC DIM (product);
    product is another dimension in the cube.
    My questions are:
    1) If I use SET AGGMISSG ON, will it affect other dimensions which are not calculated in that calculation script?
    2) I am loading data at both upper and lower levels of the led dimension, so if I use the above calc script, is there any impact on the led dimension?

    Hi,
    Have you tried it? What happened?
    My opinion is that "agg missing" doesn't have anything in common with consolidation (~). When (~) is used, the first child value is set to the upper level. So no matter whether the consolidation is (~) or (+), the upper value will be overwritten if agg missing is used.
    Also bear in mind that if, in the same dimension, a combination is used for a child and its upper-level parent, the parent value will also be overwritten if agg missing is used.
    I suggest you create a simple sample with only 2 dimensions and try it out! :)
    By the way, I always use (as the first command in a calc script):
    SET UPDATECALC OFF;
    to turn off intelligent calculation.
    Hope this helps,
    Grofaty
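    Grofaty's suggested experiment as a runnable sketch (the product dimension name comes from the thread; everything else is illustrative):
    /* Test: does AGGMISSG overwrite parents under (~) members? */
    SET UPDATECALC OFF;    /* turn off intelligent calculation */
    SET AGGMISSG ON;       /* parents are overwritten even where children are #Missing */
    CALC DIM ("product");  /* consolidate only product; the (~) dimension is not calculated */
    Load data at both the parent and level 0 of the other dimension, run this, and compare the parent values before and after.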

  • Measurement Probe Data Acquisition and Calculation

    Hi there,
    Can we do data acquisition from the measurement probe? I mean, can we put the data into the grapher so we can "convert" it to Excel or something? I need to show V(dc), V(p-p), I(dc), I(p-p), etc. in the grapher view.
    Second, according to my reference, V(dc) = V(p-p)/π or 0.318 x V(p-p), where π ≈ 3.14.
    The same applies to current: I(dc) = I(p-p)/π or 0.318 x I(p-p).
    In its calculation, MultiSim uses 0.333... or 0.32, and the results come out quite different. So, can anyone tell me where I can find the formulas MultiSim uses to calculate the circuit?
    Ghost Recon Team Leader
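    For what it's worth, the 0.318 factor is 1/π. For an ideal half-wave rectified sine wave with peak V(m), the average over a full period is
    V(dc) = (1/2π) · ∫[0..π] V(m)·sin(θ) dθ = V(m)/π ≈ 0.318 · V(m)
    so a result near 0.333 (= 1/3) would suggest MultiSim is applying a different approximation or measuring a different waveform. This is the textbook formula, not a statement about MultiSim's internals.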

    Hey, thanks dude..
    I found the example and now it works
    once again thanks a lot

  • Data load performance calculation

    Hi All,
    We have 4 R3 COPA datasources (A1, A2, A3 and A4) which get loaded into an ODS in BW.
    Now in our project the client is consolidating all company codes into 1 company code, hence the data volume for a particular datasource increases.
    My question is how to estimate the performance of the ODS load. I mean, before the change it may have taken 1 hour to load, but now, due to the increased volume of data from one datasource, the loading time will increase. Is there any way to anticipate how much more loading time it will take?
    For eg:-
    Before:
    A1  takes 30mins to load
    A2  takes 20mins to load
    A3  takes 35mins to load
    A4  takes 25mins to load
    Now suppose after the changes the data volume increases in A1 as the company code is getting centralised. How do I calculate the new load timings?
    I want to do a performance test.

    http://wiki.sdn.sap.com/wiki/display/ERPFI/COPAPerformanceImprovementusingSummarization+Levels
    How to Improve COPA Performance
    http://wiki.sdn.sap.com/wiki/display/BI/ConsultingNotesfor+CO-PA
    Check the load timing through ST13

  • Essbase Studio data load and other

    Hi There,
    I got my cube built from my data mart with one fact table and a bunch of dimensions, and deployed it successfully to the Essbase server. The questions I have are:
    1. I have another fact table with the same dimensions, and I need to load its data into the cube I built. How do I load the data from Essbase Studio, should I add that new fact table into my schemas? I know I can load the data through EAS, but that seems to defeat the purpose of Essbase Studio.
    2. Is there any way I can specify properties from Essbase Studio for certain members, for example setting an account level as TB Last or Avg? It seems you have to apply TB Last, etc. to a whole level from the Essbase Model Properties.
    Thanks

    Donny wrote:
    > Hi There,
    > I got my cube built from my data mart with one fact table and a bunch of dimensions, and deployed it successfully to the Essbase server. The questions I have are:
    > 1. I have another fact table with the same dimensions, and I need to load its data into the cube I built. How do I load the data from Essbase Studio, should I add that new fact table into my schemas? I know I can load the data through EAS, but that seems to defeat the purpose of Essbase Studio.
    Add the second fact table to your minischema with the proper joins.
    > 2. Is there any way I can specify properties from Essbase Studio for certain members, for example setting an account level as TB Last or Avg? It seems you have to apply TB Last, etc. to a whole level from the Essbase Model Properties.
    You should have columns in your account table for the property values (time balance values are F, L, and A for first, last, and average respectively). Then in the Essbase properties, you would specify to use an external source and give it the proper column name. Same thing for the skip values, variance reporting, consolidation, etc.
    > Thanks
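    To make the second answer concrete, here is a hypothetical account table sketch (the table and column names are illustrative, not required by Essbase Studio):
    ACCOUNT_NAME   TIMEBALANCE  CONSOLIDATION
    Ending_Cash    L            +
    Avg_Headcount  A            ~
    Revenue                     +
    In the Essbase model properties for the accounts hierarchy, you would then point the Time Balance property at the TIMEBALANCE column (the "external source"), so each member carries its own setting instead of a whole level sharing one.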

  • Create Document from Data Load and Link to Transaction Record for Long Text

    Hi,
    I have a DBConnect Oracle datasource which contains a large text field.  I would like to build a process that will, as part of the load, create a text file from the content of this large field, upload the file into BW and create the document association with the transaction record.
    Is anyone aware of a HOW-TO to create the BW document entries and upload the files using ABAP?  I thought that I had seen a HOW-TO or instructions approx a year ago, but cannot locate them now.
    Thanks in advance,
    Mel W.

    Hi,
    I hope this is the how to document you were looking for:
    http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/8046aa90-0201-0010-5e99-962948c83331
    -Vikram

  • Data Load Optimization

    Hi,
    I have a cube with the following dimension information, and it requires optimization for data load. Its data is cleared and reloaded every week from a SQL data source using a load rule. It loads 35 million records, and the data load alone, excluding calculation, takes 10 hrs. Is that common? Is there any change I should make to the structure to speed up the load, like changing Measures to sparse or changing the position of the dimensions? Also, the block size is large: 52,920 B (245 x 27 dense cells x 8 bytes each), which seems absurd. I have also listed the cache settings below, so please take a look and give me your suggestions.
    Dimension  Type    Dim Type  Members
    MEASURE    Dense   Accounts      245
    PERIOD     Dense   Time           27
    CALC       Sparse  None            1
    SCENARIO   Sparse  None            7
    GEO_NM     Sparse  None           50
    PRODUCT    Sparse  None         8416
    CAMPAIGN   Sparse  None           35
    SEGMENT    Sparse  None           32
    Cache settings (values in KB):
    Index cache setting: 1024
    Index cache current value: 1024
    Data file cache setting: 32768
    Data file cache current value: 0
    Data cache setting: 3072
    Data cache current value: 3049
    I would appreciate any help on this. Thanks!

    10 hrs is not acceptable even for that many rows. For my discussion, I'll assume a BSO cube.
    There are a few things to consider.
    First, what is the order of the columns in your load rule? Can you post the SQL? Is the SQL sorted as it comes in? Optimal for a load would be to have your sparse dimensions first, followed by the dense dimensions (preferably having one of the dense dimensions as columns instead of rows), for example your periods going across like Jan, Feb, Mar, etc.
    Second, do you have parallel data loading turned on? Look in the config for DLTHREADSPREPARE and DLTHREADSWRITE (see the sketch after this reply). With multithreading you can get better throughput.
    Third, how does the data get loaded? Is there any summation of the data before it is loaded, or do you have the load rule set to additive? Doing the summation in SQL would speed things up a lot, since each block would only get hit once.
    I have also seen network issues cause this, as transferring this many rows would be slow (as Krishna said), and I have seen cases where the number of joins in the SQL caused massive delays in preparing the data. Out of interest, how long does the actual query take if you just execute it from a SQL tool?
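    Parallel load is enabled in essbase.cfg rather than in the load rule. A minimal sketch of the settings mentioned above ("appname" and "dbname" are placeholders for your application and database, the thread counts are illustrative, and the server must be restarted after editing):
    ; essbase.cfg - pipeline the load across prepare and write threads
    DLSINGLETHREADPERSTAGE appname dbname FALSE
    DLTHREADSPREPARE       appname dbname 4
    DLTHREADSWRITE         appname dbname 4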

  • Data Load behaviour in Essbase

    Hello all-
    I am loading data from a flat file using a server rule file. In the rule file I have properties for a field where it replaces a name in the flat file with a member name in the outline, so it is somewhat like this:
    Replace   With
    Canada    00-200-SE
    Belgium   00-300-SE
    and so on
    Now in my flat file there was a new member, for example China, and the replacement for it was not present in the rule file. When the data was loaded into the system, it didn't reject that record; on the contrary, it loaded the values for China into the region which was above it and overwrote the values for the original one.
    Is this the normal behavior of Essbase? I was thinking that record should have been rejected.
    I know when we do a Lock & Send via the Add-in, if a member is not present in the outline it gives you a warning when you lock that sheet, and eventually, if you don't delete that member from the template, it will load data against the member above it.
    Is there a workaround for this problem, or is this just how it is?
    I am on Hyperion Planning / Essbase Version 9.3.1.
    Thanks

    Still thinking about how these properties affect the way the data is being loaded right now. I have gone through the DBAG and I don't see a reason why any of these properties might be affecting the load.
    ^^^
    Here's what I think is happening: China is not getting mapped, but the replacement for Belgium is occurring and resolves to a valid member name. Essbase sees China and doesn't recognize it (you knew all of this already).
    When the load occurs, Essbase says (okay, I am anthropomorphizing, but you get the idea) "Eh, I have no idea what China is, but 00-300-SE is the last good Country member I have, I will load there." Essbase is picking the last valid member and loading to that. I liken it to a lock and send from Excel with nested dimensions and non-repeating members. Essbase "looks up" a row, finds the valid member, and loads there.
    And yes, this is in the DBAG: http://download.oracle.com/docs/cd/E12825_01/epm.111/esb_dbag/ddlload.htm#ddlload1034271
    Search for "Unknown Member Fields" -- it's all the way at the bottom of the above link.
    In fact, to save you the trip, per the DBAG:
    If you are performing a data load and Essbase encounters an unknown member name, Essbase rejects the entire record. If there is a prior record with a member name for the missing member field, Essbase continues to the next record. If there is no prior record, the data load stops.
    Regards,
    Cameron Lackpour

  • Data load from ECC6 Unicode to BI7 MDMP

    Hello Contributors,
    We are upgrading our R3 system to ECC6 Unicode, but BI7 is still an MDMP system. I'm testing data loads and there are some errors related to Unicode conversion.
    ECC has multiple languages like English, Korean, Chinese etc...
    BI7 MDMP has English and Korean.
    Error 1: Korean characters are broken in 0CUSTOMER_TEXT.
    As you know, customer names don't have a language code in R3, and Korean text is used for Korean customers. After loading the data, English characters are okay, but Korean characters are broken and displayed as #.
    Error 2: Values of fields are merged into each other if there are Korean characters in 0CUST_SALES_ATTR.
    We have Korean characters in some of the attributes of 0CUST_SALES. So if a record has Korean characters in one of the fields, then the following fields get merged into each other. But if there are no Korean characters, the record is loaded correctly.
    For example:
    Values of the R3 extractor:
    Cust#  Name          Field1  Field2  Field3
    1234   Korean Text   ABCD    EFGH    JKLM
    3456   English Text  OPQR    STUV    WXYZ
    Values of the PSA table after loading:
    Cust#  Name            Field1  Field2  Field3
    1234   Korean Text AB  CDEF    GHJK    LM
    3456   English Text    OPQR    STUV    WXYZ
    (* if there are Korean characters, the following fields are divided and a part is updated into the previous field)
    Please help me if you have any ideas to resolve this issues.
    Best Regards
    HD Sung.

    Convert your BI system to Unicode.  MDMP is not supported in this scenario, as you have found.

  • Master Data/transactional Data Loading Sequence

    I am having trouble understanding the need to load master data prior to transactional data. If you load transactional data when there is no supporting master data, and you subsequently load the master data, are the SIDs established at that time, or will they not sync up?
    I feel in order to do a complete reload of new master data, I need to delete the data from the cubes, reload master data, then reload transactional data.  However, I can't explain why I think this.
    Thanks,  Keith

    A different approach is required for different data target scenarios. Below are just two scenarios out of many possibilities.
    Scenario A:
    Data target is a DataStore Object, with the indicator 'SIDs Generation upon Activation' set in the DSO maintenance.
    Using DTP for data loading.
    The following applies depending on the indicator 'No Update without Master Data' in DTP:
    - If the indicator is set, the system terminates activation if master data is missing and produces an error message.
    - If the indicator is not set, the system generates any missing SID values during activation.
    Scenario B:
    Data target has characteristic that is determined using transformation rules/update rules by reading master data attributes.
    If the attribute is not available during the data load to the data target, the system writes an initial value to the characteristic.
    When you reload the master data with attributes later, you need to delete the previous transaction data load and reload it, so that the transformation can re-determine the attribute values that are written to the characteristics in the data target.
    Hope this helps you understand.
