Data Load MAXLs in ASO

Hi All,
Greetings of the day!
I want to understand the difference between "add values create slice" and "override values create slice" as used in data load MaxL statements.
Suppose we have initialized a buffer and loaded data into it; then we can use either of the following two MaxL statements:
1)
import database AsoSamp.Sample data
from load_buffer with buffer_id 1
add values create slice;
2)
import database AsoSamp.Sample data
from load_buffer with buffer_id 1
override values create slice;
Q1
What I am thinking logically is that if I load data again at the same intersections from which the slice was created, ADD VALUES will add it and OVERRIDE VALUES will overwrite it. E.g., if 100 was present earlier and we load 200 again, ADD will make it 300 and OVERRIDE will result in 200. Let me know if my understanding is correct.
Q2
Why do we use "create slice"? What is its use? Is it for better data load performance? Is it compulsory to merge the slices after data loading?
Can't we just use add values or override values if we don't want to create a slice?
Q3
I saw two MaxL statements for merging as well: one was MERGE ALL DATA and the other was MERGE INCREMENTAL DATA. What's the difference, and in which case do we use which?
Please help me resolve my doubts. Thanks a lot!

Q1 - Your understanding is correct. The buffer commit specification determines how the contents of the buffer are applied to what is already in the cube. Note that there are also buffer initialization specifications, 'sum' and 'use last', that apply only to data loaded to the buffer.
Q2 - Load performance. Loading data to an ASO cube without 'create slice' takes time (per the DBAG) proportional to the amount of data already in the cube, so loading one value to a 100GB cube may take a very long time. Loading data to an ASO cube with 'create slice' takes time proportional to the amount of data being loaded - much faster in my example. There is no requirement to merge slices immediately, but it will have to be done to design/process aggregations or to restructure the cube (in the case of a restructure, it happens automatically, IIRC). The extra slices are like extra cubes, so when you query, Essbase now has to look at both the main cube and the slice. There is a statistic that tells you how much time Essbase spends querying slices versus querying the main cube, but no real guidance on what a 'good' or 'bad' number is! See http://docs.oracle.com/cd/E17236_01/epm.1112/esb_tech_ref/aggstor_runtime_stats.html.
The other reason you might want to create a slice is that it's possible to overwrite (or even remove, by committing an empty buffer with the 'override incremental data' clause in the buffer commit specification) only the slice data without having to do physical or logical clears. So if you are continually updating current period data, for example, it might make sense to load that data to an incremental slice.
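For example, a minimal sketch of that pattern, reusing the AsoSamp.Sample and buffer_id 1 names from above (committing the buffer empty removes the incremental slice data; loading the refreshed current-period file into it first replaces the data instead):
alter database AsoSamp.Sample initialize load_buffer with buffer_id 1;
import database AsoSamp.Sample data
from load_buffer with buffer_id 1
override incremental data;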
Q3 - You can merge the incremental slices into the rest of the cube, or you can merge multiple incremental slices into one single incremental slice, but not into the rest of the cube. Honestly, I've only ever wanted to use the first option. I'm not really sure when or why you would want to do the second, although I'm sure it's in there for a reason.
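For reference, the two merge variants look like this in MaxL (same sample database as above):
alter database AsoSamp.Sample merge all data;
alter database AsoSamp.Sample merge incremental data;
The first folds every incremental slice into the main database slice; the second collapses all incremental slices into one incremental slice and leaves the main slice untouched.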

Similar Messages

  • DATA LOAD WARNINGS IN ASO CUBES

    Hi Everyone,
    While loading data into ASO cubes in Essbase we are getting warnings like "Data load stream contains 1.25797e+08 and [0] #missing cells". My data file has #missing values, 0's, and special characters like E. I want to load the complete data without warnings. Kindly let me know if anyone knows the solution: whether I need to change any settings in the rule file, or how to ignore those cells.
    Thanks,
    Vikram

    The warnings are really informational messages to let you know what it loaded and did not load. Which is fine, as those values tend to bloat a cube (the zeros). #missing is not going to load anyway, and the E is the exponential format of numbers, which should not be a problem. Excel will display it this way, but you can format it without the E. You don't mention whether you are doing this from EAS or MaxL, or what version you are on. In version 11, in EAS, there are options across the top of the load dialog to turn the loading of zeros and missing on or off. In MaxL, I don't see the syntax in the Tech Reference, but I thought it was there in 9.
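    For what it's worth, in versions with load buffers the zero/missing handling is expressed in MaxL when initializing the buffer rather than on the import itself; a minimal sketch, with the app/db name as a placeholder:
    alter database MyApp.MyDb initialize load_buffer with buffer_id 1
    property ignore_missing_values, ignore_zero_values;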

  • Data load in Essbase ASO cube

    Hi,
    I have not used ASO cubes before and have worked only on BSO cubes. Now I have a requirement to create a rule file to load data into an ASO Essbase cube. I created the data load rule file just as I would for a BSO cube, and it validates correctly. However, when I do the data load I get the following warning:
    "Aggregate storage applications ignore update to derived cells. [480] cells skipped"
    I investigated further and found that an ASO cube does not allow data loading at upper levels or on members calculated through formulas. After this I ensured that I am loading the data only into level-0 members that are not calculated through a formula. But I am still not able to do the data load, and I get the same warning.
    Could you please help me and let me know if there is anything else which I am missing here?
    Thanks in advance...
    AKW

    Hi AKW,
    "Aggregate storage applications ignore update to derived cells. [480] cells skipped": this is only a warning message; it means that many cells were skipped, possibly for reasons such as a member pointing to those cells being missing.
    If you want to copy the data of your BSO cube to an ASO application, why don't you use PARTITIONING? It will copy your whole data set from BSO to ASO (if the outline is common to both, copy any member of a sparse dimension, like "Scenario 1" from the source, i.e. BSO, to the same member, "Scenario 1", in the target, i.e. ASO).
    This is only an alternate way. Thanks
    Avneet Singh Bhatia

  • Performance Tuning Data Load for ASO cube

    Hi,
    Can anyone help with how to fine-tune a data load on an ASO cube?
    We have an ASO cube which loads around 110 million records from a total of 20 data files.
    18 of the data files have 4 million records each, and the last two have around 18 million records each.
    On average, loading 4 million records took 130 seconds.
    The data files have 157 data columns representing the Period dimension.
    With a BSO cube, sorting the data file normally helps, but with ASO it does not seem to have any impact. Any suggestions on how to improve the data load performance for an ASO cube?
    Thanks,
    Lian

    Yes TimG, it sure looks identical - except for the last BSO reference.
    Well, never mind, as long as those that count remember where the words come from.
    To the Original Poster and to 960127 (come on, create a profile already, will you?):
    The sort order WILL matter IF you are using a compression dimension. In this case the compression dimension acts just like a BSO dense dimension: if you load part of it in one record, then when the next record comes along it has to be added to the already existing part. The ASO "load buffer" is really a file named <dbname.dat> that is built in your temp tablespace.
    The most recent x records that fit in the ASO cache are retained in memory, so if a record is still there it will not have to be reread from the disk drive. So you could (instead of sorting) create an ASO cache as large as your final .dat file; then the records would always still be in the cache.
    BUT WAIT, BEFORE YOU GO RAISING YOUR ASO CACHE: all operating systems use memory-mapped IO, so even if a record is not in the cache it will likely still be on hand in "Standby" memory (the dark blue memory as seen in Resource Monitor). This continues until the system runs out of "Free" memory (light blue in Resource Monitor).
    So, in conclusion: if your system still has Free memory, there is no need (in a data load) to increase your ASO cache. And if you are out of Free memory, then all you will do by increasing the ASO cache during a data load is slow down the other applications running on your system - so don't do it.
    Finally, if you have enough memory that the entire data file fits in Standby + Free memory, then don't bother to sort it first. But if you do not have enough, then sort it.
    Of course, you have 20 data files, so I hope you do not have compression members spread out amongst these files!
    Also, you did not say whether you were using parallel load threads. If you need to have 20 files, read up on having parallel load buffers and parallel load scripts; that will make it faster.
    But if you do not really need 20 files and just broke them up to load in parallel, then create one single file and raise your DLTHREADSPREPARE and DLTHREADSWRITE settings. Heck, these will help even if you do go parallel, and really help if you don't but still keep 20 separate files.
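    As a rough sketch of the parallel load buffer approach (buffer IDs, resource_usage values, and file names are illustrative; note that to actually fill the two buffers concurrently, each import would run in its own MaxL session):
    alter database AsoSamp.Sample initialize load_buffer with buffer_id 1 resource_usage 0.5;
    alter database AsoSamp.Sample initialize load_buffer with buffer_id 2 resource_usage 0.5;
    import database AsoSamp.Sample data from data_file 'file01.txt' to load_buffer with buffer_id 1 on error abort;
    import database AsoSamp.Sample data from data_file 'file02.txt' to load_buffer with buffer_id 2 on error abort;
    /* commit both buffers to the cube in a single operation */
    import database AsoSamp.Sample data from load_buffer with buffer_id 1, 2 add values;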

  • Essbase 7.1 - Incremental data load in ASO

    Hi,
    Is there an incremental data loading feature in ASO version 7.1? Let's say I have the following data in the ASO cube:
    P1 G1 A1 100
    Now, I get the following 2 rows as the incremental data from the relational source:
    P1 G1 A1 200
    P2 G1 A1 300
    So, once I load these rows using a rule file with the 'override existing values' option, will I have the following dataset in ASO:
    P1 G1 A1 200
    P2 G1 A1 300
    I know there is a data load buffer concept in ASO 7.1, and that this is the only way to improve data load performance. But I just wanted to check whether we can implement incremental loading in ASO or not.
    And one more thing: can 2 load rules run in parallel to load data into ASO cubes? As per my understanding, when we start loading data the cube is locked for any other insert/update. Please correct me if I'm wrong!
    Thanks!

    Hi,
    I think features such as incremental data loads were only available from version 9.3.1.
    The What's New for Essbase 9.3.1 contains:
    Incrementally Loading Data into Aggregate Storage Databases
    The aggregate storage database model has been enhanced with the following features:
    ● An aggregate storage database can contain multiple slices of data.
    ● Incremental data loads complete in a length of time that is proportional to the size of the incremental data.
    ● You can merge all incremental data slices into the main database slice, or merge all incremental data slices into a single data slice while leaving the main database slice unchanged.
    ● Multiple data load buffers can exist on a single aggregate storage database. To save time, you can load data into multiple data load buffers at the same time.
    ● You can atomically replace the contents of a database or the contents of all incremental data slices.
    ● You can control the share of resources that a load buffer is allowed to use, and set properties that determine how missing, zero, and duplicate values in the data sources are processed.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Incrementally Loading Data Using a Data Load Buffer

    Hi
    I am using Essbase 9.3.1 and am trying to use the "Replacing Database or Incremental Data Slice Contents" feature for my data loads to the ASO cube.
    I have 2 data sets: one is 2 years of history, and the other is the last 3 months, which changes on a daily basis. I looked at the DBAG, and it has exactly this scenario as an example for this feature. But I am not able to overwrite my volatile data set with my new file.
    Here is what I do:
    alter database ${1}.${2} initialize load_buffer with buffer_id ${6} resource_usage 0.3 property ignore_missing_values, ignore_zero_values ;
    import database ${1}.${2} data from data_file '${3}' using server rules_file '${4}' to load_buffer with buffer_id ${6} add values create slice on error write to '${5}' ;
    alter database ${1}.${2} merge incremental data;
    alter database ${1}.${2} clear aggregates ;
    execute aggregate process on database ${1}.${2} ;
    In fact, the data from my new (incremental) file does not even make it to the database; I checked, and it gets rejected.
    Am I doing something wrong here? How do I use the concept of a "data slice" and its incremental load feature?
    Can anyone please explain?
    Thanks
    Mandar Joshi

    Hi,
    Just wondering if anyone had any input or feedback on my query, or is my question a really stupid one that does not deserve any attention? :)
    Can someone explain how the "data slice" concept works?
    Thanks
    Mandar Joshi.

  • Maxl data load error

    Hi All,
    I'm trying to load data from a csv file to an Aggregate Storage cube.
    I keep getting a weird error: "Syntax error near ['$']".
    There is no $ sign in either the script or the flat file source I'm loading data from.
    This same script worked earlier using the same rule file.
    I'm running the MaxL from the EAS console, and not invoking it through a batch.
    I only had to make a change to the cube name and the spool and error file names in the script.
    In the data file I had to change a few member names, and I remapped them through the rule file, where I ignored one particular column.
    I've validated the rule file more than a couple of times, but do not get any errors.
    I'm not sure why I get such an error.
    Can anyone tell me what I'm doing wrong? Or if any one has seen such an error before can you help me out?
    THanks,
    Anindyo

    Hey,
    I'm running the MaxL script through EAS; it doesn't seem like any of the records are going through.
    The script is below:
    login 'usr' 'pwd!' on 'ec2-184-72-157-215.compute-1.amazonaws.com';
    spool on to 'E:/dump/MaxL Scripts/ASO Scripts/Spool files/full_dt_ld_visa_aso_spool.log';
    import database 'VISA_ASO'.'VISA' data from data_file 'E:/dump/Data Load Files/ASO Load files Old format/master_data_load.csv' using server rules_file 'fulldtld' on error write to 'E:/dump/MaxL Scripts/ASO Scripts/Spool files/full_dt_ld_visa_aso_spool.err';
    spool off;
    logout;
    I rechecked a couple of times; it doesn't seem like I'm missing any quotes.
    Robb and Jeanette, thanks for your responses!
    I'm going to try with a smaller file and update this thread.
    Cheers!
    Anindyo

  • Maxl Error during data load - file size limit?

    Does anyone know if there is a file size limit when importing data into an ASO cube via MaxL? I have tried to execute:
    Import Database TST_ASO.J_ASO_DB data
    using server test data file '/XX/xXX/XXX.txt'
    using server rules_file '/XXX/XXX/XXX.rul'
    to load_buffer with buffer_id 1
    on error write to '/XXX.log';
    It errors out after about 10 minutes and gives "unexpected Essbase error 1130610". The file is about 1.5 gigs of data. The file location is right. I have tried the same code with a smaller file and it works. Do I need to increase my cache or anything? I also got "DATAERRORLIMIT" reached, and I can not find the log file for this...? Thanks!

    Have you looked in the data error log to see what kind of errors you are getting? The odds are high that you are trying to load data into calculated members (or upper-level members), resulting in errors. It is most likely the former.
    You specify the error file with the
    on error write to '/XXX.log';
    statement. Have you looked for this file to find out why you are getting errors? Do yourself a favor: load the smaller file and look at the error file to see what kind of error you are getting. It is possible that your error file is larger than your load file, since multiple errors on a single load item may result in a restatement of the entire load line for each error.
    This is a starting point for your exploration into the problem.
    DATAERRORLIMIT is set in the config file, default 1000, max 65000.
    NOMSGLOGGINGONDATAERRORLIMIT, if set to TRUE, just stops logging and continues the load when the data error limit is reached. I'd advise using this only in a test environment, since it doesn't solve the initial problem of data errors.
    Probably what you'll have to do is ignore some of the columns in the data load that load into calculated fields. If you have some upper-level members, you could put them in a skip-loading condition.
    Let us know what works for you.
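    For reference, both of those settings are essbase.cfg entries; a minimal sketch (values illustrative):
    DATAERRORLIMIT 65000
    NOMSGLOGGINGONDATAERRORLIMIT TRUE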

  • Auto-kick off MaxL script after Oracle GL data load?

    Hi guys, this question will involve 2 different modules: Hyperion and Oracle GL.
    My client's accounting department updates Oracle GL on a daily basis. My end-user client would like a script that automatically kicks off the existing MaxL script for our daily data load in Hyperion. Currently, the MaxL script is executed manually.
    What's the best approach to build a connection so that the two modules can communicate with each other? Can we use a timer to trigger the run? If so, how?

    #1 External scheduler.
    I've worked on Appworx, and it can build chains of dependent tasks. There are many other external schedulers, like Tivoli, etc.
    #2 As Daniel pointed out, you can use the Windows scheduler.
    For every successful GL load, add a file to a folder which is accessible to your Essbase task:
    COPY Nul C:\Hyperion\Scripts\Trigger\GL_Load_Finished.txt
    Create another bat file which is scheduled to run every 5 or 10 minutes (this should start just after your GL load scheduled task).
    Here is an example I have for a triggered Essbase job:
    IF EXIST %BASE_DIR%\Trigger\Full_Build_Started.txt (
        Echo "Full Build started"
    ) else (
        IF EXIST %BASE_DIR%\Trigger\Custom_Build_Started.txt (
            Echo "Custom Build started"
        ) else (
            IF EXIST %BASE_DIR%\Trigger\Post_Build_Batch_Started.txt (
                Echo "Post Build started"
            ) else (
                IF EXIST %BASE_DIR%\Trigger\Start_Full_Build.txt (
                    Echo "Trigger found, starting batch"
                    MOVE %BASE_DIR%\Trigger\Start_Full_Build.txt %BASE_DIR%\Trigger\Full_Build_Started.txt
                    call %BASE_DIR%\Scripts\Batch_Files\Monthly_Build_All_Cubes.bat
                ) else (
                    IF EXIST %BASE_DIR%\Trigger\Start_Custom_Build.txt (
                        Echo "Trigger found, starting Custom batch"
                        MOVE %BASE_DIR%\Trigger\Start_Custom_Build.txt %BASE_DIR%\Trigger\Custom_Build_Started.txt
                        call %BASE_DIR%\Scripts\Batch_Files\Monthly_Build_All_Cubes_Custom.bat
                    ) else (
                        IF EXIST %BASE_DIR%\Trigger\Start_Post_Build_Batch.txt (
                            Echo "Trigger found, starting Post Build batch"
                            MOVE %BASE_DIR%\Trigger\Start_Post_Build_Batch.txt %BASE_DIR%\Trigger\Post_Build_Batch_Started.txt
                            call %BASE_DIR%\Scripts\Batch_Files\Monthly_Post_Build_All_Cubes.bat
                        )
                    )
                )
            )
        )
    )
    So this bat file, if it finds Start_Full_Build.txt in the trigger location, renames it to Full_Build_Started.txt and calls the full build (likewise for the custom and post builds).
    Regards
    Celvin
    http://www.orahyplabs.com

  • Incremental Data loading in ASO 7.1

    Hi,
    As per the 7.1 Essbase DBAG:
    "Data values are cleared each time the outline is changed structurally. Therefore, incremental data loads are supported
    only for outlines that do not change (for example, logistics analysis applications)."
    That means we can have incremental loading for ASO in 7.1 for an outline which doesn't change structurally. Now, what is meant by an outline which changes structurally? If we add a level-0 member to any dimension, does that mean a structural change to the outline?
    It also says that adding Accounts/Time members doesn't clear out the data, and that only adding/deleting/moving standard dimension members will clear out the data. I'm totally confused here. Can anyone please explain?
    The following actions cause Analytic Services to restructure the outline and clear all data:
    ● Add, delete, or move a standard dimension member
    ● Add, delete, or move a standard dimension
    ● Add, delete, or move an attribute dimension
    ● Add a formula to a level 0 member
    ● Delete a formula from a level 0 member

    Adding a level-0 member is generally, if not always, considered to be a structural change to the outline. I'm not sure whether I've tried adding a member to Accounts to see if the data is retained. This may be true because, by definition, the Accounts dimension in an ASO cube is a dynamic (versus stored) hierarchy. And perhaps, since the Time dimension in ASO databases in 7.x is the "compression" dimension, there is some sort of special rule about being able to add to it, although I can't say that I ever need to edit the Time dimension (I have a separate Years dimension). I have been able to modify formulas on ASO outlines without losing the data, which seems consistent with your bullet points below. I have also been able to move around and change attribute dimension members (which I would guess is generally considered a non-structural change), and change aliases, without losing all my data.
    In general I just assume that I'm going to lose my ASO data. However, all of my ASO outlines are generated through EIS and I load to a test server first. If you're in doubt about losing the data -- try it in test/dev. And if you don't have test/dev, maybe that should be a priority. :) Hope this helps -- Jason.

  • MAXL SCRIPT TO EXECUTE IN BACKGROUND THE DATA LOAD

    Hi,
    I have a problem with a MaxL script: I don't know the command to execute a data load in the background. Does anyone know it? I would be very grateful if you could help me, because right now I have to load the data manually and then tick 'execute in background'.
    Thanks for your help
    Regards,

    If the two processes are in no way dependent on each other, why not just use two separate MaxL scripts and run / schedule them separately?
    If you really need to launch multiple MaxL operations against different cubes to run in the background from a single script you can only do this with a shell / command script, not 'natively' in MaxL. If you're on Windows and using CMD, for example, see the 'START' command.
    --EDIT: Crossed over with Sunil, think he pretty much covers it!
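    For example, a minimal Windows CMD sketch (the essmsh MaxL shell is assumed to be on the PATH, and the .mxl script paths are placeholders):
    REM Each START returns immediately, so both loads run in the background.
    START "load1" essmsh E:\Scripts\load_cube1.mxl
    START "load2" essmsh E:\Scripts\load_cube2.mxl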

  • Urgent help required - ASO data loading

    I am working on 9.3.0 and want to do incremental data loading so that it won't affect my past data. I am still not sure whether it is doable or not. Now the question is:
    do I need to design a new aggregation after loading data?
    Thanks in advance

    Hi,
    The ASO cube will clear off all the data if you make any structural changes to it (i.e. if you change your outline; you can find out what exactly clears off the data - for example, if you add a new member, it clears off, while if you just add a simple formula to an existing member, it might not clear off the data).
    If you don't want to affect the past data and yet want to load incrementally, then ensure that all the members are already in the outline (take care of the time dimension; let all the time members be in place), and then you can load using the 'add to existing values' option.
    But remember, you can only do this without any structural changes; otherwise, you need to load it all together again.
    It's good if you design an aggregation, as it helps retrieval performance, but it's not mandatory.
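    A minimal MaxL sketch of that load-then-aggregate pattern (app/db, data file, and rule file names are placeholders; the 'add to existing values' option itself lives in the rule file):
    import database MyApp.MyDb data from data_file 'incr.txt'
    using server rules_file 'ldincr' on error write to 'incr.err';
    /* optional, but helps retrieval performance */
    execute aggregate process on database MyApp.MyDb;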
    Sandeep Reddy Enti
    HCC

  • Training on CalcScripts, Reporting Scripts, MaxL and Data Loading

    Hi All
    I am new to this forum. I am looking for someone who can train me on topics like CalcScripts, Reporting Scripts, MaxL, and Data Loading.
    I am willing to pay for your time. Please let me know.
    Thanks

    Hi Friend,
    As you seem to be new to Essbase, you should first learn what Essbase and OLAP are and the difference between Dense & Sparse, then use the Essbase Tech Ref for further reference.
    After that, go through
    https://blogs.oracle.com/HyperionPlanning/ and start exploring CalcScripts, MaxL, etc.
    And all this for you free, free, free...
    Thanks,
    Avneet

  • ASO data load happening slowly on Essbase 7.1.6

    Hi All,
    We are loading data files into an ASO cube. The same file used to load in seconds until last week, but this week it is taking hours.
    We have checked the server machine's performance, but to no avail.
    Please help

    There is no export feature in v7 for ASO cubes; you are going to have to upgrade to get the functionality you are looking for.
    In the meantime, you can try an MDX query instead of a report script, but I don't think it will yield much better results. You'll probably want to break up the report/query into smaller chunks, maybe one for each year of history or something like that.
    If you plan to stay on 7, you should look at an alternate method for storing the history, for example staging your level-0 data in a relational database and reloading the ASO cube from the relational source. You do not want to count on having all your data locked up in an ASO cube with no way to get it out.

  • ASO Blank Field Fill with Data Load Rule

    In a block storage database I could fill a blank field with a text value by creating a field with text and then replacing a whole-word match with the preferred text. I am unable to get this to work in an ASO database. The Field Properties tab has the option, but it does not work when I try to use it. Has anyone else encountered this situation?

    Hi,
    Thank you both for your answers. But what confuses me is this: I created a rules file using a file with 12 columns. I selected the appropriate member for each column in Field Properties, and added the View member in the data load header. Then I get the error message: "This field is also defined in the header definition" for all fields. However, if I don't set the members in Field Properties and just set them in the data load header, I get another error message: "There is an unknown member (or no member) in the field name."
    Can you please help?
    Thank you!
