ASO data load happening slowly on Essbase 7.1.6

Hi All,
We are loading data files into an ASO cube. The same file used to load in seconds until last week, but this week it is taking hours. We have checked the server machine's performance but found nothing unusual.
Please help.

There is no export feature in v7 for ASO cubes; you are going to have to upgrade to get the functionality you are looking for.
In the meantime, you can try an MDX query instead of a report script, but I don't think it will yield much better results. You'll probably want to break the report/query up into smaller chunks, maybe one for each year of history or something like that.
If you plan to stay on 7 you should look at an alternate method for storing the history, for example staging your level 0 data in a relational database and reloading the ASO cube from the relational source. You do not want to count on having all your data locked up in an ASO cube with no way to get it out.
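
To illustrate the chunked-extract idea: MDX queries can be run from the MaxL shell (essmsh) with output spooled to a file, one year at a time. A minimal sketch only, with hypothetical names throughout (a Sales measure, an FY05 member in a Year dimension, an app/db called MyApp.MyAso); adjust to your own outline:

    /* spool one year of level 0 data to a text file via the MaxL shell */
    login admin password on localhost;
    spool on to 'export_fy05.txt';

    SELECT {[Measures].[Sales]} ON COLUMNS,
           NON EMPTY {Descendants([FY05], [Year].Levels(0))} ON ROWS
    FROM MyApp.MyAso;

    spool off;
    logout;

Run one such query per year and concatenate the spool files to rebuild a level 0 extract.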

Similar Messages

  • May I know how data load takes place in Essbase

    Hi all,
    Since I am new to Hyperion, I need to know some details about Essbase. Last month we faced an issue: whenever a data load takes place, it clears the previous years' data and loads only the current year's data. May I know the reason for this?
    I would also like to know how the data load takes place in Essbase.
    With thanks and regards,
    babu

    I am okay with the above statements... Now I am going to add something else.
    Besides the ways already mentioned of loading data from different files, we can also use a calc script to load data.
    You know how? Through DATACOPY we can copy data values into other members. While running a calc script that contains a DATACOPY, you can see in the session dialog box that a data load is in progress.
    Also keep in mind that you won't always be copying data from SQL, Excel, etc.; you may find yourself needing to copy data from one server to another, and that is another situation where this approach helps.
    I hope you understand this.
    Regards,
    Prabhas
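
    To make the DATACOPY idea concrete: a calc script statement can also be run inline from MaxL with execute calculation. A minimal sketch, assuming hypothetical Actual and Budget members on the BSO demo database Sample.Basic:

        /* copy all Actual data into Budget via an anonymous calc */
        execute calculation
            'DATACOPY "Actual" TO "Budget";'
        on Sample.Basic;

    The same DATACOPY line can instead be saved in a regular calc script and run from EAS.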

  • Urgent help required - ASO data loading

    I am working on 9.3.0 and want to do incremental data loading so that it won't affect my past data. I am still not sure whether it is doable or not. The question is:
    do I need to design a new aggregation after loading data?
    Thanks in advance

    Hi,
    An ASO cube will clear off all of its data if you make a structural change to it (i.e. if you change your outline; you can find out exactly which changes clear the data - for example, adding a new member clears it, whereas just adding a simple formula to an existing member might not).
    If you don't want to affect past data and yet want to load incrementally, ensure that all the members are already in the outline (take care of the Time dimension; have all the Time members in place), and then load using the 'add to existing values' option in the load rule.
    But remember, you can only do this without structural changes; otherwise you need to load everything all over again.
    It is good to design an aggregation, as it helps retrieval performance, but it is not mandatory.
    Sandeep Reddy Enti
    HCC
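
    On the aggregation question: materializing a default aggregation after a load is a single MaxL statement. A minimal sketch, assuming a hypothetical app/db MyApp.MyAso; the optional stopping clause caps how much the aggregate views may grow the database:

        /* build the recommended set of aggregate views after loading */
        execute aggregate process on database MyApp.MyAso
            stopping when total_size exceeds 1.5;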

  • Data load happening terribly slow

    Hi all,
    I opened up the quality server and made a change (removed its compounding attribute) at the definition level to an InfoObject present in the InfoCube.
    The InfoObject, InfoCube, and update rules are all active.
    When I scheduled the load, there was a huge variation in the data load speed.
    Before: 3 hrs for 60 lakh (6,000,000) records.
    Now: 3 hrs for 10 lakh (1,000,000) records.
    I am also observing in SM66 that the processes are all reading the NRIV table.
    Can anyone throw some insight on this scenario?
    Any useful input will be rewarded!
    Regards,
    Dhanya

    1. Yes, select main memory. How many entries do you expect in this dimension?
    In other terms, how many different combinations of characteristic values included in your dimension are to be posted?
    As a first guess you could enter 50'000 there, but please let me know the cardinality of this dimension.
    2. Whether or not there is master data doesn't matter. Your fact table is booked with DIMIDs as keys. Every time you book a record, the system checks in the dimension tables whether this combination of characteristic values already has a record in its dimension; if yes, fine, nothing to do. If a new combination comes, the system has to add a record to the dimension, so it will first look for the next number range value (= DIMID).
    In addition, the system will create master data SIDs as well (even if there is no master data). In each dimension table you'll find the corresponding master data SIDs for each of the InfoObjects belonging to the dimension.
    That's why filling empty cubes takes much more time than loading into a cube that already holds data. That's also why the more data you load, the less time it takes.
    Please also make sure that all your F table indexes are dropped (manage cube, performance tab, delete indexes prior to loading).
    This will considerably speed up initial loads.
    Understanding these concepts of BW data warehousing is of paramount importance in order to set the system up properly.
    Message was edited by:
            Olivier Cora

  • GL Data load using ODI to Essbase

    Hi, I am trying to load GL actuals data into an Essbase application using ODI. The source file has 10 columns and the target has 11 columns. We are using a rules file to load the data into Essbase: the rules file splits the 9th column into two columns and loads the data. When we test the rules file in Essbase, the data loads into the application; when we use the same rules file in an ODI interface, the data does not load and we get an "Unknown member" error for the member we are splitting out of the 9th column.
    Source file header:
    HSP_RATES ACCOUNT PERIOD YEAR SCENARIO VERSION CURRENCY ENTITY SFUND PROGRAM DATA
    Sample row:
    HSP_InputValue 611101 Jul FY13 ACTUAL Final Local 0000 SBNR AC0001PS0001 25000
    AC0001PS0001 is the concatenated string from GL; we split it into two columns using the rules file, mapping that one column to two dimensions (Program, Activity) in Essbase. Please suggest what might be the reason for the error, and how to do the mapping between source and target. Thanks, Sri

    In ODI, what you have to do is split the column in ODI itself. While you are mapping, you can use SQL functions (for example, SUBSTR on the source column) to map it to two different target columns, similar to the way you are doing it in the rules file.
    Regards
    Amarnath
    ORACLE | Essbase

  • ASO data loading...

    hi,
    i have a big problem... pls help me...
    I had a rule file in Essbase 6.5 for BSO, but now I have migrated the same outline to 7.2, and my rule file is giving me problems while loading data.
    I am loading data for the member Days, with values like 31/28/31/30..., for some upper-level member combinations. But now in ASO I can only load data for level 0 members, so the rule file is giving me wrong results after aggregation for the upper-level members. Is there any way I can load the data for the upper-level members too, or at any other level?
    please help...
    thanks...

    Modify the load rule to apply a prefix to your incoming member name in that field.
    Then add an alias to the first stored member of the incoming upper-level member that matches this new name.
    For instance:
    --- Ancestor2
    ...... --- Ancestor1
    .............. --- Child1 (Alias: Input_Ancestor2)
    Obviously, as shown above (hopefully the format is retained enough to make sense), your inputs can't have multiple input aliases to the child without devoting additional alias tables to it. However, this generally isn't an issue, because the inputs should trace a natural 'path' that allows at least one level 0 member for every upper-level input.
    It looks and sounds confusing, but the issue is simple from a technical perspective: alias the upper-level member down to an input member, then deal with it there as you would otherwise have dealt with the upper-level member as part of your calc.
    HTH.

  • ASO, MDX, Loading Level 0

    Maybe this is simpler than I am making it, but I am trying to load a level 0 file into my ASO app. I get two messages once the data load happens. The first is:
    Source Type    Source File        Operation Status
    Data file      e:\whatever.txt    Warning
    And then, when I click the line, below is what I get. And most of the time the database is empty, and I can't aggregate when there is no data.
    Parallel dataload enabled: [1] block prepare threads, [1] block write threads.
    Aggregate storage applications ignore update to derived cells. [5.04334e+007] cells skipped
    Data Load Elapsed Time : [81.702] seconds
    Database import completed ['FRaso'.'Finrpt']
    Output columns prepared: [0]
    I also get a Buffer line, which reports success.
    Once I do get data in, how do I aggregate things? And can I modify how things are being aggregated?

    Let me give you an example of what I have in my outline. Using the Months example, I have:
    Outline:
        Measure Accounts <11> (Label Only)
            - its children
        Month Time Dynamic <13> (Label Only)
            - Jan (~)
            - Feb (~)
            - ...
            - Dec (~)
            - Mthly (~) <12>
                - JAN_MTHLY (+) [0: [JAN]]
                - FEB_MTHLY (+) [0: [FEB] - [JAN]]
                - ...
                - DEC_MTHLY (+) [0: [DEC] - [NOV]]
    I tried to indent as best as I could, but this is the type of stuff I have. And you say I shouldn't have any of these formulas in the different leaves?
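
    On the 'how do I aggregate' question from the original post: once level 0 data is in, ASO aggregate views are materialized with MaxL. A minimal sketch, reusing the FRaso.Finrpt app/db names from the log above:

        /* let Essbase choose and build a default set of aggregate views */
        execute aggregate process on database FRaso.Finrpt;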

  • Data Loading in Oracle Apps

    Hi,
    We need to load data for around 6,00,000 (600,000) records in Oracle Apps R12.1.1.
    Please let me know the best practices for ensuring the data load happens as quickly as possible.
    Regards,
    V N

    For such a large volume, you should use the Oracle APIs / interface tables.
    Check the following to see what is available.
    Note: 462586.1 - Where are the Oracle® Release 12 (R12) API Reference Guide?
    https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=462586.1
    Note: 458225.1 - Release 12 Integration Repository
    https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=458225.1
    Note: 396116.1 - Oracle Integration Repository Documentation Resources Release 12
    https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=396116.1
    Hope this helps,
    Sandeep Gandhi

  • Data loading delay

    Hi Friends,
    I hope someone can answer a question about an issue I am seeing.
    The issue is: every day I load to one InfoCube, and whichever cube it is, it takes 2 hours per load - but once it took 5 hours. What might be the reason? I am confused by this; can anybody clarify?
    Regards,
    Balaji Reddy K.

    Reddy,
    1. Is the time taken to load to the PSA, or to load from the PSA to the cube? If it is loading to the PSA, then usually the problem lies at the extractor.
    2. If it is loading to the cube, then check whether statistics are being maintained for the cube; they would give an accurate picture of where the data load is taking up the most time.
    Do an SQL trace during the data load. If you find a lot of master data lookups, make sure that master data is loaded; and if there are a lot of lookups to table NRIV, check whether number range buffering is on so that DIM IDs get generated faster.
    Check whether the data load happens faster if you drop any existing indexes.
    Are you loading any aggregates after the data load? Check whether the aggregates are necessary, or whether they have been enabled for delta loads.
    If you have indexes active and there is a huge data load, then depending on the index, the data load can get delayed.
    If the cube is not compressed, the data load can sometimes be delayed as well.
    Also, while the data load is going on, check SM50 and SM37 to see whether the jobs are active - this means that the data load is active from both sides.
    Always update the statistics for the cube before and after the load; this helps in deciphering the time the data load takes. After activating the statistics, check table RSDDSTAT or the standard reports available as part of the BW technical content.
    Hope it helps,
    Arun
    Assign points if helpful

  • Data load in Essbase ASO cube

    Hi,
    I have not used ASO cubes before and have worked only on BSO cubes. Now I have a requirement to create a rules file to load data into an ASO Essbase cube. I created a data load rules file the way I would for a BSO cube, and it validates correctly. However, when I run the data load I get the following warning:
    "Aggregate storage applications ignore update to derived cells. [480] cells skipped"
    I investigated further and found that an ASO cube does not allow data loading at upper levels or on members calculated through formulas. I have since ensured that I am loading data only into level 0 members that are not calculated through a formula, but I am still unable to complete the data load and get the same warning.
    Could you please help me and let me know if there is anything else I am missing here?
    Thanks in advance...
    AKW

    Hi AKW,
    "Aggregate storage applications ignore update to derived cells. [480] cells skipped" is only a warning message. It means that that many cells were skipped, for reasons such as a member pointing to those cells being missing or derived.
    If you want to copy the data of your BSO cube to an ASO application, why don't you use partitioning? It will copy your whole data set from BSO to ASO (if the outline is common to both, copy any member of a sparse dimension, such as "Scenario 1", from the source, i.e. BSO, to the same member, "Scenario 1", in the target, i.e. ASO).
    This is only an alternate way. Thanks,
    Avneet Singh Bhatia
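
    As a quick way to see exactly which records are affected, a MaxL load can route rejected records to an error file (derived-cell skips still surface only as the warning above). A minimal sketch with hypothetical app/db, data file, and rules file names:

        /* load level 0 data and capture rejected records for inspection */
        import database MyApp.MyAso data
            from server data_file 'lev0.txt'
            using server rules_file 'ldlev0'
            on error write to 'lev0_load.err';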

  • Essbase 7.1 - Incremental data load in ASO

    Hi,
    Is there an incremental data loading feature in ASO version 7.1? Let's say I have the following data in an ASO cube:
    P1 G1 A1 100
    Now I get the following 2 rows as incremental data from a relational source:
    P1 G1 A1 200
    P2 G1 A1 300
    Once I load these rows using a rules file with the 'override existing values' option, will I have the following dataset in ASO?
    P1 G1 A1 200
    P2 G1 A1 300
    I know there is a data load buffer concept in ASO 7.1, and that it is the only way to improve data load performance. But I just wanted to check whether we can implement incremental loading in ASO or not.
    And one more thing: can 2 load rules run in parallel to load data into ASO cubes? As per my understanding, when we start loading data the cube is locked against any other insert/update. Please correct me if I'm wrong!
    Thanks!

    Hi,
    I think features such as incremental data loads only became available in version 9.3.1.
    The 'What's New' for Essbase 9.3.1 contains:
    Incrementally Loading Data into Aggregate Storage Databases
    The aggregate storage database model has been enhanced with the following features:
    ● An aggregate storage database can contain multiple slices of data.
    ● Incremental data loads complete in a length of time that is proportional to the size of the incremental data.
    ● You can merge all incremental data slices into the main database slice, or merge all incremental data slices into a single data slice while leaving the main database slice unchanged.
    ● Multiple data load buffers can exist on a single aggregate storage database. To save time, you can load data into multiple data load buffers at the same time.
    ● You can atomically replace the contents of a database or the contents of all incremental data slices.
    ● You can control the share of resources that a load buffer is allowed to use, and set properties that determine how missing and zero values, and duplicate values, in the data sources are processed.
    Cheers
    John
    http://john-goodwin.blogspot.com/
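
    For readers on 9.3.1 or later, the slice-based incremental workflow described above looks roughly like this in MaxL. A sketch only, assuming the ASOsamp.Sample demo database and hypothetical data file and rules file names:

        /* stage the incremental file in a load buffer, then commit it as a slice */
        alter database ASOsamp.Sample initialize load_buffer with buffer_id 1;

        import database ASOsamp.Sample data
            from server data_file 'incr.txt'
            using server rules_file 'ldincr'
            to load_buffer with buffer_id 1
            on error abort;

        import database ASOsamp.Sample data
            from load_buffer with buffer_id 1
            add values create slice;

        /* later, fold the incremental slices back into the main slice */
        alter database ASOsamp.Sample merge all data;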

  • Data load error in essbase studio

    I get the following error when trying to load an ASO cube using Essbase Studio (EPM 11.1.2). This error doesn't seem to be documented in any of the Essbase manuals. Question: does this error indicate an Essbase server issue or a data source issue? I'm thinking it's data source related, but my data source is an Oracle database, which I've used previously to load cubes without a problem. I've refreshed the source and can connect to it fine otherwise.
    Error:
    Data load started at: Fri Dec 03 08:52:21 EST 2010.      Data load elapsed time:  10 Minutes 23 Seconds.
    Failed to deploy Essbase cube.
    Caused by: Failed to load data into database: 8020.
    Caused by: Cannot execute a SQL query
    Caused by: Io exception: Socket read timed out
    Caused by: Socket read timed out
    Appreciate any help with this issue.

    When I have issues with Studio, I try to break the problem down slowly. I build my dimensions one at a time; if it breaks on a single dimension build, I trace the issue backwards and usually find it in the schema.
    Studio's role in life is to create SQL load rules, and as such it depends on a good schema definition. Unfortunately, the dimension build rules can't be opened in EAS with the Dataprep Editor (as regular load rules can) because they're binary and can do things that a normal load rule cannot (text measures, date measures, time-varying attributes, etc.). But that doesn't mean the .rul files are unreadable. If you're having trouble with a particular dimension build process, open the load rule it creates with something like Notepad, grab the SQL that Studio is generating, and drop it into Toad (or equivalent) to see whether it generates usable code. If not, there's something wrong with your modeling and you need to go back to the mini-schema.
    When you're able to build all the dimensions at the same time, you're almost there. If your issue comes when you build and load data, the final debugging steps go quickly. Towards that end, the data load rules (the ones that load data, as opposed to building dimensions) generated by Studio can be edited in EAS using the Dataprep Editor. If you know SQL load rules, you should be able to figure it out. If not, contact John Goodwin, OCS, or a partner and set up a consulting visit.

  • Data Load MAXLs in ASO

    Hi All,
    Greetings of the day !!!!
    I want to understand the difference between "add values create slice" and "override values create slice" as used in data load MaxL statements.
    Suppose we initialized a buffer and loaded data into it; then we can use the following two MaxL statements:
    1)
    import database AsoSamp.Sample data
    from load_buffer with buffer_id 1
    add values create slice;
    2)
    import database AsoSamp.Sample data
    from load_buffer with buffer_id 1
    override values create slice;
    Q1
    What I am thinking logically is: if I load data again at the same intersections from which the slice was created, ADD VALUES will add to it and OVERRIDE VALUES will overwrite it. E.g. if 100 was present earlier and we load 200 again, ADD will make it 300 and OVERRIDE will result in 200. Let me know if my understanding is correct.
    Q2
    Why do we use "create slice"? What is the use? Is it for better data load performance? Is it compulsory to merge the slices after data loading?
    Can't we just use add values or override values if we don't want to create a slice?
    Q3
    I saw two MaxLs for merging also: one was MERGE ALL DATA and the other was MERGE INCREMENTAL DATA. What's the difference? In which case do we use which?
    Please help me resolve my doubts. Thanks a lot!!!!

    Q1 - Your understanding is correct. The buffer commit specification determines how what is in the buffer is applied to what is already in the cube. Note that there are also buffer initialization specifications for 'sum' and 'use last' that apply only to data loaded to the buffer.
    Q2 - Load performance. Loading data to an ASO cube without 'create slice' takes time (per the DBAG) proportional to the amount of data already in the cube. So loading one value to a 100GB cube may take a very long time. Loading data to an ASO cube with 'create slice' takes time proportional to the amount of data being loaded - much faster in my example. There is no requirement to immediately merge slices, but it will have to be done to design / process aggregations or restructure the cube (in the case of restructure, it happens automatically IIRC). The extra slices are like extra cubes, so when you query Essbase now has to look at both the main cube and the slice. There is a statistic that tells you how much time Essbase spends querying slices vs querying the main cube, but no real guidance on what a 'good' or 'bad' number is! See http://docs.oracle.com/cd/E17236_01/epm.1112/esb_tech_ref/aggstor_runtime_stats.html.
    The other reason you might want to create a slice is that it's possible to overwrite (or even remove, by committing an empty buffer with the 'override incremental data' clause in the buffer commit specification) only the slice data without having to do physical or logical clears. So if you are continually updating current period data, for example, it might make sense to load that data to an incremental slice.
    Q3 - You can merge the incremental slices into the rest of the cube, or you can merge multiple incremental slices into one single incremental slice, but not into the rest of the cube. Honestly, I've only ever wanted to use the first option. I'm not really sure when or why you would want to do the second, although I'm sure it's in there for a reason.
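
    For reference, the operations discussed in Q2/Q3 each map to a single MaxL statement; a sketch against the same AsoSamp.Sample database used above:

        /* fold every incremental slice into the main database slice */
        alter database AsoSamp.Sample merge all data;

        /* or: combine all incremental slices into one slice, leaving the main slice untouched */
        alter database AsoSamp.Sample merge incremental data;

        /* replace (or, with an empty buffer, clear) only the incremental slices */
        import database AsoSamp.Sample data
            from load_buffer with buffer_id 1
            override incremental data;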

  • Data Load behaviour in Essbase

    Hello all,
    I am loading data from a flat file using a server rules file. In the rules file I have properties for a field that replace a name from the flat file with a member name from the outline, somewhat like this:
    Replace -> With
    Canada -> 00-200-SE
    Belgium -> 00-300-SE
    and so on.
    Now in my flat file there was a new member, for example China, whose replacement was not present in the rules file. When the data was loaded, the system didn't reject that record; on the contrary, it loaded the values for China into the region above it and overwrote the values for the original one.
    Is this the normal behavior of Essbase? I was thinking that record should have been rejected.
    I know that when we do a lock & send via the add-in and a member is not present in the outline, it gives a warning when you lock that sheet, and eventually, if you don't delete that member from the template, it will load data against the member above it.
    Is there a workaround for this problem, or is this just how it is?
    I am on Hyperion Planning / Essbase version 9.3.1.
    Thanks

    Still thinking about how these properties affect the way data is being loaded right now. I have gone through the DBAG and I don't see a reason why any of these properties might be affecting the load.^^^Here's what I think is happening: China is not getting mapped, but the replacement for Belgium occurs and resolves to a valid member name. Essbase sees China and doesn't recognize it (you knew all of this already).
    When the load occurs, Essbase says (okay, I am anthropomorphizing, but you get the idea) "Eh, I have no idea what China is, but 00-300-SE is the last good Country member I have, so I will load there." Essbase is picking the last valid member and loading to that. I liken it to a lock and send from Excel with nested dimensions and non-repeating members: Essbase "looks up" a row, finds the valid member, and loads there.
    And yes, this is in the DBAG: http://download.oracle.com/docs/cd/E12825_01/epm.111/esb_dbag/ddlload.htm#ddlload1034271
    Search for "Unknown Member Fields" -- it's all the way at the bottom of the above link.
    In fact, to save you the trip, per the DBAG:
    If you are performing a data load and Essbase encounters an unknown member name, Essbase rejects the entire record. If there is a prior record with a member name for the missing member field, Essbase continues to the next record. If there is no prior record, the data load stops.
    Regards,
    Cameron Lackpour

  • Incremental Data loading in ASO 7.1

    Hi,
    As per the 7.1 Essbase DBAG:
    "Data values are cleared each time the outline is changed structurally. Therefore, incremental data loads are supported
    only for outlines that do not change (for example, logistics analysis applications)."
    That means we can do incremental loading for ASO in 7.1 for an outline that doesn't change structurally. Now, what does it mean for an outline to change structurally? If we add a level 0 member to any dimension, does that count as a structural change to the outline?
    It also says that adding Accounts/Time members doesn't clear out the data, and that only adding/deleting/moving standard dimension members clears it. I'm totally confused here. Can anyone please explain?
    The following actions cause Analytic Services to restructure the outline and clear all data:
    ● Add, delete, or move a standard dimension member
    ● Add, delete, or move a standard dimension
    ● Add, delete, or move an attribute dimension
    ● Add a formula to a level 0 member
    ● Delete a formula from a level 0 member
    Edited by: user3934567 on Jan 14, 2009 10:47 PM

    Adding a Level 0 member is generally, if not always, considered to be a structural change to the outline. I'm not sure if I've tried to add a member to Accounts and see if the data is retained. This may be true because by definition, the Accounts dimension in an ASO cube is a dynamic (versus Stored) hierarchy. And perhaps since the Time dimension in ASO databases in 7.x is the "compression" dimension, there is some sort of special rule about being able to add to it -- although I can't say that I ever need to edit the Time dimension (I have a separate Years dimension). I have been able to modify formulas on ASO outlines without losing the data -- which seems consistent with your bullet points below. I have also been able to move around and change Attribute dimension members (which I would guess is generally considered a non-structural change), and change aliases without losing all my data.
    In general I just assume that I'm going to lose my ASO data. However, all of my ASO outlines are generated through EIS and I load to a test server first. If you're in doubt about losing the data -- try it in test/dev. And if you don't have test/dev, maybe that should be a priority. :) Hope this helps -- Jason.
