Essbase 7.1 - Incremental data load in ASO

Hi,
Is there an incremental data loading feature in ASO in version 7.1? Let's say I have the following data in an ASO cube:
P1 G1 A1 100
Now, I get the following 2 rows as per the incremental data from relational source:
P1 G1 A1 200
P2 G1 A1 300
So, once I load these rows using a rule file with the "override existing values" option, will I have the following dataset in ASO?
P1 G1 A1 200
P2 G1 A1 300
I know there is a data load buffer concept in ASO 7.1, and that it is the only way to improve data load performance. But I just wanted to check whether we can implement incremental loading in ASO or not.
And one more thing: can two load rules run in parallel to load data into ASO cubes? As per my understanding, when we start loading data, the cube is locked against any other insert/update. Please correct me if I'm wrong!
Thanks!

Hi,
I think features such as incremental data loads only became available in version 9.3.1.
The What's New for Essbase 9.3.1 contains:
Incrementally Loading Data into Aggregate Storage Databases
The aggregate storage database model has been enhanced with the following features:
• An aggregate storage database can contain multiple slices of data.
• Incremental data loads complete in a length of time that is proportional to the size of the incremental data.
• You can merge all incremental data slices into the main database slice or merge all incremental data slices into a single data slice while leaving the main database slice unchanged.
• Multiple data load buffers can exist on a single aggregate storage database. To save time, you can load data into multiple data load buffers at the same time.
• You can atomically replace the contents of a database or the contents of all incremental data slices.
• You can control the share of resources that a load buffer is allowed to use and set properties that determine how missing and zero values, and duplicate values, in the data sources are processed.
Cheers
John
http://john-goodwin.blogspot.com/
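
To make that workflow concrete, here is a minimal MaxL sketch of a slice-based incremental load on 9.3.1 or later (database, file, and rule file names are hypothetical; check the MaxL reference for your version):
alter database AsoSamp.Sample initialize load_buffer with buffer_id 1;
import database AsoSamp.Sample data from data_file '/data/incr_rows.txt' using server rules_file 'incrld' to load_buffer with buffer_id 1 on error write to '/logs/incrld.err';
import database AsoSamp.Sample data from load_buffer with buffer_id 1 override values create slice;
/* later, fold the incremental slice back into the main database slice */
alter database AsoSamp.Sample merge all data;
Note that 'override values' gives the behavior asked about above (P1 G1 A1 ends up as 200), while 'add values' would accumulate to 300.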

Similar Messages

  • Incremental Data loading in ASO 7.1

    Hi,
    As per the 7.1 Essbase DBAG:
    "Data values are cleared each time the outline is changed structurally. Therefore, incremental data loads are supported
    only for outlines that do not change (for example, logistics analysis applications)."
    That means we can have incremental loading for ASO in 7.1 as long as the outline doesn't change structurally. Now, what does "an outline that changes structurally" mean? If we add a level 0 member to any dimension, does that count as a structural change to the outline?
    It also says that adding an Accounts/Time member doesn't clear out the data, and that only adding/deleting/moving a standard dimension member will clear out the data. I'm totally confused here. Can anyone please explain?
    The following actions cause Analytic Services to restructure the outline and clear all data:
    ● Add, delete, or move a standard dimension member
    ● Add, delete, or move a standard dimension
    ● Add, delete, or move an attribute dimension
    ● Add a formula to a level 0 member
    ● Delete a formula from a level 0 member

    Adding a level 0 member is generally, if not always, considered to be a structural change to the outline. I'm not sure whether I've ever tried adding a member to Accounts to see if the data is retained. It may be retained because, by definition, the Accounts dimension in an ASO cube is a dynamic (versus stored) hierarchy. And perhaps since the Time dimension in ASO databases in 7.x is the "compression" dimension, there is some sort of special rule about being able to add to it -- although I can't say that I ever need to edit the Time dimension (I have a separate Years dimension). I have been able to modify formulas on ASO outlines without losing the data -- which seems consistent with your bullet points above. I have also been able to move around and change attribute dimension members (which I would guess is generally considered a non-structural change), and change aliases, without losing all my data.
    In general I just assume that I'm going to lose my ASO data. However, all of my ASO outlines are generated through EIS and I load to a test server first. If you're in doubt about losing the data -- try it in test/dev. And if you don't have test/dev, maybe that should be a priority. :) Hope this helps -- Jason.

  • Incremental Data load in SSM 7.0

    Hello all,
    I once raised a thread on SDN about how to automate data loads into SSM 7.0:
    Periodic data load for a model in SSM
    Now my new requirement is not to upload the whole data again, but only the new data (data arriving after the previous data load). Is there a way to do an incremental data load in SSM 7.0? Loading the whole of the fact data again and again will take a hit on the performance of the SSM system. Is there a workaround in case there is no solution?
    Thanks
    Vijay

    Vijay,
    In your PAS model you can build a procedure to remove data and then load that data to the correct time period.
    In PAS, to remove data but not the variable definitions from the database:
    Removing data for a particular variable
    REMOVE DATA SALES
    or if there were particular areas only within
    SELECT product P1
    SELECT customer C1
    REMOVE SELECTED SALES
    or remove all data
    REMOVE DATA * SURE
    or just a time period
    REMOVE DATA SALES BEFORE Jan 2008
    Then you would construct or modify your Load Procedure to load the data for the new period
    SET PERIOD {date range}
    Regards.
    Bpb

  • Performance Tuning Data Load for ASO cube

    Hi,
    Can anyone help with how to fine-tune a data load on an ASO cube?
    We have an ASO cube which loads around 110 million records from a total of 20 data files.
    18 of the data files have 4 million records each and the last two have around 18 million records each.
    On average, loading 4 million records takes 130 seconds.
    The data file has 157 data columns representing the period dimension.
    With a BSO cube, sorting the data file normally helps, but with ASO it does not seem to have any impact. Any suggestions on how to improve data load performance for an ASO cube?
    Thanks,
    Lian

    Yes, TimG, it sure looks identical - except for the last BSO reference.
    Well, never mind, as long as those that count remember where the words come from.
    To the original poster and to 960127 (come on, create a profile already, will you?):
    The sort order WILL matter IF you are using a compression dimension. In this case the compression dimension acts just like a BSO dense dimension: if you load part of it in one record, then when the next record comes along it has to be added to the already existing part. The ASO "load buffer" is really a file named <dbname.dat> that is built in your temp tablespace.
    The most recent records that fit in the ASO cache are retained there, so if a record is still in the cache it does not have to be reread from the disk drive. So you could (instead of sorting) create an ASO cache as large as your final .dat file; then the record would always still be at hand.
    BUT WAIT BEFORE YOU GO RAISING YOUR ASO CACHE. All operating systems use memory-mapped I/O, so even if a record is not in the cache it will likely still be in "Standby" memory (the dark blue memory as seen in Resource Monitor). This holds until the system runs out of "Free" memory (light blue in Resource Monitor).
    So, in conclusion: if your system still has Free memory, there is no need (in a data load) to increase your ASO cache. And if you are out of Free memory, then all increasing the ASO cache during a data load will do is slow down the other applications running on your system - so don't do it.
    Finally, if you have enough memory that the entire data file fits in Standby + Free memory, don't bother to sort it first. But if you do not have enough, then sort it.
    Of course, you have 20 data files, so I hope you do not have compression members spread out amongst these files!
    Also, you did not say whether you were using parallel load threads. If you really need 20 files, read up on parallel load buffers and parallel load scripts; that will make it faster.
    But if you do not really need 20 files and just broke them up to load in parallel, then create one single file and raise your DLTHREADSPREPARE and DLTHREADSWRITE settings. These will help even if you do go parallel, and really help if you don't but still keep 20 separate files.
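    To illustrate the parallel-buffer approach, here is a minimal MaxL sketch (application/database, file, and rule file names are hypothetical; the two imports must run from separate MaxL sessions to actually overlap, and the resource_usage values of concurrent buffers cannot total more than 1.0):
    alter database App.Db initialize load_buffer with buffer_id 1 resource_usage 0.5;
    alter database App.Db initialize load_buffer with buffer_id 2 resource_usage 0.5;
    /* session 1 */
    import database App.Db data from data_file '/data/file01.txt' using server rules_file 'ldrule' to load_buffer with buffer_id 1 on error write to 'file01.err';
    /* session 2 */
    import database App.Db data from data_file '/data/file02.txt' using server rules_file 'ldrule' to load_buffer with buffer_id 2 on error write to 'file02.err';
    /* finally, commit both buffers to the cube in a single statement */
    import database App.Db data from load_buffer with buffer_id 1, 2 add values;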

  • How to use incremental data load in OWB? can CDC be used?

    Hi,
    I am using Oracle 10g Release 2 and OWB 10g Release 1.
    I want to know how I can implement incremental data loads in OWB.
    Does OWB have such an implicit feature, like Informatica does?
    Can I use the CDC concept for this? Is it viable and compatible with my environment?
    What could be the other possible ways?

    Hi ,
    The current version of OWB does not provide functionality to directly use the CDC feature. You have to come up with your own strategy for incremental loading: for example, use the update dates if they are available on your source systems, or use CDC packages to pick up the changed data from your source systems.
    rgds
    mahesh

  • Essbase Studio Performance Issue : Data load into BSO cube

    Hello,
    Having successfully built my outline by member loading through Essbase Studio, I have tried to load data into my application, again with Studio. However, I was never able to complete the data load because it takes forever. Each time I tried to work with Studio in streaming mode (hoping to increase the query speed), the load got terminated due to the following error: Socket read timed out.
    In the Studio properties file I typed in oracle.jdbc.ReadTimeout=1000000000, but the result has not changed. Even if it did work, I am also not sure the streaming mode is going to provide a much faster alternative to the non-streaming mode. What I'd like to know is which Essbase settings I can change (either on the Essbase or the Studio server) in order to speed up my data load. I am loading into a block storage database with 3 dense, 8 sparse and 2 attribute dimensions. I filtered some dimensions and tried to load data to see exactly how long it takes to create a certain number of blocks. With the ODBC setting in Essbase Studio, it took 2.15 hours to load data into my application, where only 153 blocks were created with a block size of 24B. Assuming that in my real application the number of blocks created is going to be at least 1000 times more than this, I need to make some changes to the settings. I am transferring the data from an Oracle database, with 5 tables joined to a fact table (view) from the same data source. All the cache settings in Essbase are at their defaults. Would changing cache settings, buffer size or multiple threads help to increase the performance? Or what would you suggest I do?
    Thank you very much.

    Hello user13695196,
    (sorry, I no longer remember my system number here)
    Before any optimisation attempts in the Essbase (also Studio) environment, you should definitely make sure that your source data query performs well on the Oracle DB.
    I would recommend:
    1. Create in your DB source schema a view from your SQL statement (the one behind your data load rule).
    2. Query against this view with any GUI (SQL Developer, TOAD, etc.) to fetch all rows and measure the time it takes to complete. Also count the affected (returned) number of rows, for your information and for future comparison of results.
    If your query runs longer than you think is acceptable, then:
    a) check DB statistics,
    b) check and/or consider creating indexes,
    c) if you are unsure, then kindly ask your DBA for help. Usually they can help you very fast.
    (Don't be shy - a DBA is a human being like you and me :-) )
    Only when your SQL runs fast at the database (fast enough for you, or your DBA says it is the best you can achieve) should you move your effort over to Essbase.
    One hint in addition:
    We often had problems when using views for data loads (not only performance, but also other strange behavior). That's the reason I prefer to build directly on (persistent) tables.
    Just keep in mind: if nothing helps, create a table from your view and then query your data from this table for your Essbase data load. Normally, however, this should be your last option.
    Best Regards
    (also to you Torben :-) )
    Andre

  • Incremental Data Load

    Hi,
    I am trying to load data into a 6.5 application. In the file I have a number of lines of data with the exact same members but different data. We have to convert products that used to roll up to a total to only one member.
    For example:
    ABC Total
      ABC
      DEF
      GHI
    Now all the data in DEF and GHI will need to be accumulated in ABC.
    I tried the "Add to Existing Values" option in the data load rule, but that seems to only work with data that is there before the load.
    Any thoughts? I really appreciate it.

    Never Mind! I am sorry, I had a mistake in my load rule. I tried to delete the prior post but could not figure out how. Very Sorry.

  • Incremental data loading

    I've come across this question:
    Identify the two true statements about incremental loading.
    A. Allows for real time data access for end users.
    B. Creates subcubes along the main slice in the database.
    C. Materialization of slices is required to provide users the correct query results.
    D. Different materialized views may exist within a slice as compared to the main slice of the database.
    Can anyone state which two of these are correct, and why?
    Thanks

    I'm not sure, but I wish you good luck with your examination regardless.
    JB

  • Urgent help required - ASO data loading

    I am working on 9.3.0 and want to do incremental data loading so that it won't affect my past data. I am still not sure whether it is doable or not. Now the question is:
    do I need to design a new aggregation after loading data?
    Thanks in advance

    Hi,
    The ASO cube will clear off all the data if you make any structural changes to it (i.e. if you change your outline; and you can find out what exactly clears off the data - for example, if you add a new member, the data clears off, while if you just add a simple formula to an existing member, it might not).
    If you don't want to affect the past data and yet want to load incrementally, then ensure that all the members are already in the outline (take care of the time dimension; let all the time members be in place), and then you can load using the "add to existing values" option in your load rule.
    But remember, you can only do this without any structural changes. Otherwise, you need to load everything together again.
    It's good if you design an aggregation, as it helps retrieval performance, but it's not mandatory.
    Sandeep Reddy Enti
    HCC
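    As a minimal illustration of the load Sandeep describes (a sketch; database and file names are hypothetical, and note that "add to existing values" is set inside the rule file itself, not in the MaxL statement):
    import database App.Db data from data_file '/data/new_rows.txt' using server rules_file 'addld' on error write to 'addld.err';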

  • Maxl data load error

    Hi All,
    I'm trying to load data from a csv file to an Aggregate Storage cube.
    I keep getting a weird error: "Syntax error near ['$']".
    There is no $ sign in either the script or the flat file source from which I'm loading data.
    This same script worked earlier using the same rule file.
    I'm running the maxl from the EAS console, and not invoking it through a batch.
    I only had to make a change to the cube name and the spool and error file names in the script.
    In the data file I had to make changes to a few member names, and I remapped them through the rule file, where I ignored one particular column.
    I've validated the rule file more than a couple of times, but do not get any errors.
    I'm not sure why I get such an error.
    Can anyone tell me what I'm doing wrong? Or if any one has seen such an error before can you help me out?
    THanks,
    Anindyo

    Hey,
    I'm running the MaxL script through EAS, it doesn't seem like any of the records are going through.
    The script is below:
    login 'usr' 'pwd!' on 'ec2-184-72-157-215.compute-1.amazonaws.com';
    spool on to 'E:/dump/MaxL Scripts/ASO Scripts/Spool files/full_dt_ld_visa_aso_spool.log';
    import database 'VISA_ASO'.'VISA' data from data_file 'E:/dump/Data Load Files/ASO Load files Old format/master_data_load.csv' using server rules_file 'fulldtld' on error write to 'E:/dump/MaxL Scripts/ASO Scripts/Spool files/full_dt_ld_visa_aso_spool.err';
    spool off;
    logout;
    I rechecked a couple of times; it doesn't seem like I'm missing any quotes.
    Robb and Jeanette, thanks for your responses!
    I'm going to try with a smaller file and update this thread.
    Cheers!
    Anindyo

  • Data load in Essbase ASO cube

    Hi,
    I have not used ASO cubes before and have worked only on BSO cubes. Now I have a requirement to create a rule file to load data into an ASO Essbase cube. I have created a data load rule file the way I would for a BSO cube, and it validates correctly. However, when I run the data load I get the following warning:
    "Aggregate storage applications ignore update to derived cells. [480] cells skipped"
    I investigated further and found that an ASO cube does not allow data loads at upper levels or on members calculated through formulas. I have since ensured that I am loading data only into level 0 members that are not calculated through formulas. But I am still not able to do the data load, and I get the same warning.
    Could you please help me and let me know if there is anything else which I am missing here?
    Thanks in advance...
    AKW

    Hi AKW,
    "Aggregate storage applications ignore update to derived cells. [480] cells skipped"This is only a warning message that means only those many cells were skipped might be for some reasons like any member pointing to those cells will be missing.
    If you want to copy the Data of your BSO cube to an ASO Application why dont you use an PARTIONING it will copy your whole data from BSO to ASO (If Outline is common in both then copy any member of Sparse dimension like "Scenario 1" from Source i.e. BSO, to same member like "Scenario 1" in Target i.e ASO ),
    This is only an alternate wayThanks
    Avneet Singh Bhatia

  • Data Load MAXLs in ASO

    Hi All,
    Greetings of the day !!!!
    I want to understand the difference between "add values create slice" and "override values create slice" as used in data load MaxL statements.
    Suppose we initialized a buffer and loaded data into it; then we can use the following two MaxL statements:
    1)
    import database AsoSamp.Sample data
    from load_buffer with buffer_id 1
    add values create slice;
    2)
    import database AsoSamp.Sample data
    from load_buffer with buffer_id 1
    override values create slice;
    Q1
    What I am thinking, logically, is that if I load data again into the same intersections from which a slice was created, ADD VALUES will add it and OVERRIDE VALUES will overwrite it; e.g. if 100 was present earlier and we load 200 again, ADD will make it 300 and override will result in 200. Let me know if my understanding is correct.
    Q2
    Why do we use "create slice"? What is its purpose? Is it for better data load performance? Is it compulsory to merge the slices after the data load?
    Can't we just use add values or override values if we don't want to create a slice?
    Q3
    I also saw two MaxL statements for merging: one was MERGE ALL DATA and the other was MERGE INCREMENTAL DATA. What's the difference? In which case do we use which?
    Please help me resolve my doubts. Thanks a lot!

    Q1 - Your understanding is correct. The buffer commit specification determines how what is in the buffer is applied to what is already in the cube. Note that there are also buffer initialization specifications for 'sum' and 'use last' that apply only to data loaded to the buffer.
    Q2 - Load performance. Loading data to an ASO cube without 'create slice' takes time (per the DBAG) proportional to the amount of data already in the cube. So loading one value to a 100GB cube may take a very long time. Loading data to an ASO cube with 'create slice' takes time proportional to the amount of data being loaded - much faster in my example. There is no requirement to immediately merge slices, but it will have to be done to design / process aggregations or restructure the cube (in the case of restructure, it happens automatically IIRC). The extra slices are like extra cubes, so when you query Essbase now has to look at both the main cube and the slice. There is a statistic that tells you how much time Essbase spends querying slices vs querying the main cube, but no real guidance on what a 'good' or 'bad' number is! See http://docs.oracle.com/cd/E17236_01/epm.1112/esb_tech_ref/aggstor_runtime_stats.html.
    The other reason you might want to create a slice is that it's possible to overwrite (or even remove, by committing an empty buffer with the 'override incremental data' clause in the buffer commit specification) only the slice data without having to do physical or logical clears. So if you are continually updating current period data, for example, it might make sense to load that data to an incremental slice.
    Q3 - You can merge the incremental slices into the rest of the cube, or you can merge multiple incremental slices into one single incremental slice, but not into the rest of the cube. Honestly, I've only ever wanted to use the first option. I'm not really sure when or why you would want to do the second, although I'm sure it's in there for a reason.
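    For reference, the two merge statements map onto Q3 as follows (using AsoSamp.Sample from the question):
    /* fold every incremental slice into the main database slice */
    alter database AsoSamp.Sample merge all data;
    /* or: combine all incremental slices into a single slice, leaving the main slice unchanged */
    alter database AsoSamp.Sample merge incremental data;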

  • Incrementally Loading Data Using a Data Load Buffer

    Hi
    I am using Essbase 9.3.1 and am trying to use the "Replacing Database or Incremental Data Slice Contents" feature for my data loads into an ASO cube.
    I have 2 data sets: one of them is 2 years of history, and the other is the last 3 months, which changes on a daily basis. I looked at the DBAG, and it has exactly this scenario as an example for the feature. But I am not able to overwrite my volatile data set with my new file.
    Here is what I do:
    alter database ${1}.${2} initialize load_buffer with buffer_id ${6} resource_usage 0.3 property ignore_missing_values, ignore_zero_values ;
    import database ${1}.${2} data from data_file '${3}' using server rules_file '${4}' to load_buffer with buffer_id ${6} add values create slice on error write to '${5}' ;
    alter database ${1}.${2} merge incremental data;
    alter database ${1}.${2} clear aggregates ;
    execute aggregate process on database ${1}.${2} ;
    In fact, the data from my new (incremental) file does not even make it into the database; I checked, and it gets rejected.
    Am I doing something wrong here? How do I use the concept of a "data slice" and its incremental load feature?
    Can anyone please explain ?
    Thanks
    Mandar Joshi

    Hi,
    Just wondering if anyone has any input or feedback on my query. Or is my question a really stupid one that does not deserve any attention? :)
    Can someone explain how the "data slice" concept works?
    Thanks
    Mandar Joshi.
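    For readers hitting the same wall, here is a hedged sketch of the DBAG's slice-replacement pattern, reusing the placeholders from the script above (check the exact clause names in your version's MaxL reference): load the volatile data to a buffer, then atomically replace the incremental slice contents instead of merging:
    alter database ${1}.${2} initialize load_buffer with buffer_id ${6} resource_usage 0.3 property ignore_missing_values, ignore_zero_values;
    import database ${1}.${2} data from data_file '${3}' using server rules_file '${4}' to load_buffer with buffer_id ${6} on error write to '${5}';
    /* replaces the contents of all incremental slices with the buffer contents */
    import database ${1}.${2} data from load_buffer with buffer_id ${6} override incremental data;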

  • Data load error in essbase studio

    I get the following error when trying to load an ASO cube using Essbase Studio (EPM 11.1.2). This error doesn't seem to be documented in any of the Essbase manuals. Question: does this error indicate an Essbase server issue or a data source issue? I'm thinking it's data source related, but my data source is an Oracle database, which I've used previously to load cubes without a problem. I've refreshed the source and can connect to it fine otherwise.
    Error:
    Data load started at: Fri Dec 03 08:52:21 EST 2010.      Data load elapsed time:  10 Minutes 23 Seconds.
    Failed to deploy Essbase cube.
    Caused by: Failed to load data into database: 8020.
    Caused by: Cannot execute a SQL query
    Caused by: Io exception: Socket read timed out
    Caused by: Socket read timed out
    I'd appreciate any help with this issue.

    When I have issues with Studio, I try to break it down slowly. I build my dimensions one at a time; if it breaks on a single dimension build, I trace the issue backwards and usually find it in the schema.
    Studio's role in life is to create SQL load rules, and as such it depends on a good schema definition. Unfortunately, the dimension build rules can't be opened in EAS with the Dataprep Editor (like regular load rules) because they're binary and can do things that a normal load rule cannot (text measures, date measures, time-varying attributes, etc.). But that doesn't mean the .rul files are unreadable. If you're having trouble with a particular dimension build process, open the load rule it creates with something like Notepad, grab the SQL that Studio is generating, and drop it into Toad (or equivalent) to see if it generates usable code. If not, there's something wrong with your modeling and you need to go back to the mini-schema.
    When you're able to build all dimensions at the same time, you're almost there. If your issue comes when you want to build and load data, the final debugging steps go quickly. Towards that end, the data load rules (the ones that load data, as opposed to building dimensions) generated by Studio can be edited in EAS using the Dataprep Editor. If you know SQL load rules, you should be able to figure it out. If not, contact John Goodwin, OCS or a partner and set up a consulting visit.

  • Essbase 9.3.1, more time taken for data load.

    Hi,
    I am trying to load 15 GB of data (taken from two separate Oracle databases) into my ASO application directly.
    The load is very slow; what factors should I consider to make it faster?
    Will increasing the RAM size help in this context?
    I have gone through the Essbase 9.3.1 admin doc; it gives the hard disk and RAM requirements, but only for block storage.
    Is there any difference between that estimate and the ASO estimate?
    If anyone can guide me on what has to be taken care of while loading a cube with huge data volumes, I shall be very thankful.

    The statements which matter for aggregate storage are:
    DLSINGLETHREADPERSTAGE FALSE
    DLTHREADSPREPARE Sample Basic 3
    The write option (DLTHREADSWRITE Sample Basic 4) doesn't have any impact on the write speed of the data in ASO.
    These statements have to be included in the essbase.cfg file, which is the configuration file of the Essbase server.
    As per the document, should this be done through MaxL, ESSCMD or the Analytic Services console?
    Questions:
    1) Should we simply put these statements in the essbase.cfg file without a semicolon, and restart the server/application?
    2) Can these statements be executed via MaxL? If so, please let us know how.
    3) How do we know the current number of threads being used for read/write?
    Thanks
