ASO Data clear

We are clearing data in the ASO cube by reloading the previous data files, using rule files to insert #Missing at the same combinations.
We are able to clear the data this way.
But my doubt is whether this will reduce the cube size, or whether we need to restructure the cube?

If you overwrite the existing data with #Missing in an ASO cube, then the cube size will be reduced.
Unless there is a requirement to distinguish between zeros and missing values in the database, it is better to replace zeros with missing values, which also reduces the size of the database.
You don't need any restructure.
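For reference, the threads below also discuss a MaxL alternative to reloading #Missing, available from version 11: clearing a region directly. A minimal sketch, with hypothetical application, database, and member names:

/* physical clear: removes the cells and reclaims space (slower) */
alter database MyApp.MyDb clear data in region '{([FY13],[Nov])}' physical;
/* logical clear: writes offsetting cells to a new slice (faster, but grows the cube) */
alter database MyApp.MyDb clear data in region '{([FY13],[Nov])}';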

Similar Messages

  • Partial data clear in ASO possible for multiple tuples?

    Hi,
    I am trying to do a partial data clear in an ASO cube. I need to clear FY12->Oct & FY13->Nov (consecutive periods). Here's what I tried:
    alter database 'GL_TXT'.'GL_TXT' clear data in region '{([FY12],[Oct]),([FY13],[Nov])}' physical;
    No error is thrown but the data isn't cleared either. The statement finishes execution almost immediately.
    I tried the UNION function but that didn't work either. Here's how my statement looks with the UNION function:
    alter database 'GL_TXT'.'GL_TXT' clear data in region '{UNION({([FY12],[Oct])},{([FY13],[Nov])})}' physical;
    Again, no error but no clear either. The UNION pulls the correct data set when used in a Select statement:
    SELECT UNION({([FY12],[Oct])},{([FY13],[Nov])}) ON COLUMNS FROM GL_TXT.GL_TXT;
    I can get it to clear if I write separate statements for each period but I want to have them in a single script as I suspect two scripts wouldn't be very efficient.
    Please help!
    Thanks,
    Shashi

    Thanks for your reply Vasavya! Running the region clear scripts twice (once for each month) is still faster for me than using the report script approach. I want to see if having both periods in one statement will improve the performance :)
    Regards,
    Shashi
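    For anyone hitting the same problem, a hedged sketch of a single combined statement; this is untested and assumes the region grammar accepts the Union set without the extra outer braces used above:
    alter database 'GL_TXT'.'GL_TXT' clear data in region
    'Union({([FY12],[Oct])},{([FY13],[Nov])})' physical;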

  • ASO Data base Size....

    When we right-click on the ASO app folder in Windows and check the properties of that folder, I can estimate the size of the database. That is how I have been sizing my ASO databases, and I am fine with what I am doing right now. I will definitely go through the DBAG to understand ASO in detail.
    I need insight or methods on how to calculate ASO database size.

    What you are doing is correct and a good practice for collecting the ASO cube size.
    To explain in detail:
    We can define the ASO cube size as the sum of the .otl file size and the .dat file size (the .dat file resides in the app/<appname>/default directory).
    You can collect the .dat file stats using the MaxL command below:
    query database testaso.testdb list all file information;
    But if you are keeping dimension-build and balance file archives in the same database folder, this approach cannot separate out the incremental growth of the archives.
    I prefer to define the ASO cube size as .otl + .dat + .otl backups + archives (interface and lev0 files).
    Only if you track the incremental growth of all these files can you estimate future disk space requirements.
    For that kind of tracking, taking the size of the entire application folder is correct.
    You can write a batch script to collect all this information.
    On Unix you can collect it as:
    cs=$(du -g $appname | tail -1 | awk '{ print $1 }')
    echo "size of $appname is: $cs."

  • Essbase default data clears taking place during FDM Batch Processing

    We have been trying to avoid the default FDM Essbase database clears (which are coded into the LOAD Action scripts) by writing our own data clears and adding them to one of the event scripts. This was to ensure that the data clear took place against the correct members/point-of-view.
    However, when using the Batch Processing functionality of FDM Workbench, we are seeing unexpected data clears taking place, indicating that FDM is still executing the default data clears. Is there any way to disable these? We also want to ensure that our own data clears are still executed (they are in the "BefExportToDat" event scripts). Please advise!
    These are the unexpected Essbase application log entries we are seeing:
    [Fri May 25 17:39:45 2012]Local/Monthly/Monthly/admin@Native Directory/140266738562816/Info(1012555)
    Clearing data from [Period 10] partition with fixed members [Entity(B1110); Scenario(Current Year)]
    ...
    [Fri May 25 17:39:46 2012]Local/Monthly/Monthly/admin@Native Directory/140266738562816/Info(1012555)
    Clearing data from [Period 10] partition with fixed members [Entity(B1110); Scenario(Current Year)]
    ...
    [Fri May 25 17:39:47 2012]Local/Monthly/Monthly/admin@Native Directory/140266738562816/Info(1012555)
    Clearing data from [Period 10] partition with fixed members [Entity(B1120); Scenario(Current Year)]
    ...
    [Fri May 25 17:39:48 2012]Local/Monthly/Monthly/admin@Native Directory/140266738562816/Info(1012555)
    Clearing data from [Period 10] partition with fixed members [Entity(B1130); Scenario(Current Year)]
    ...
    [Fri May 25 17:39:49 2012]Local/Monthly/Monthly/admin@Native Directory/140266738562816/Info(1012555)
    Clearing data from [Period 10] partition with fixed members [Entity(B1140); Scenario(Current Year)]
    ...
    [Fri May 25 17:39:49 2012]Local/Monthly/Monthly/admin@Native Directory/140266738562816/Info(1012555)
    Clearing data from [Period 10] partition with fixed members [Entity(B1210); Scenario(Current Year)]
    ...
    [Fri May 25 17:39:50 2012]Local/Monthly/Monthly/admin@Native Directory/140266738562816/Info(1012555)
    Clearing data from [Period 10] partition with fixed members [Entity(B1220); Scenario(Current Year)]

    Hi SH, yes you are right, I also realised this in the car on the way home! But don't you mean "A" rather than "M" (for append)?
    I have tested this and it appears to be the solution - setting the second character of the fifth segment of the filename to "A" has stopped the default Essbase clears from being executed. To summarise:
    To append to or replace data in FDM:
    FDM Web Client: select the "Append" or "Replace" option in the drop-down list, respectively.
    FDM Workbench Batch Processing: set the first character of the fifth segment of the filename to "A" or "R", respectively.
    To append to or replace data in Essbase:
    FDM Web Client: select the "1 - Merge" or "0 - Replace" option in the drop-down list in the Export stage of the workflow, respectively.
    FDM Workbench Batch Processing: set the second character of the fifth segment of the filename to "A" or "R", respectively.
    Thanks for your help!
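    To illustrate the summary with a hypothetical batch file name (the delimiter and the first four segments are placeholders that depend on your batch loader configuration; only the fifth segment follows the rule above):
    seg1~seg2~seg3~seg4~RA.txt
    Here the fifth segment "RA" would mean Replace in FDM (first character "R") and Append/Merge into Essbase (second character "A").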

  • Issue with Logical data clear in ASO

    Since we upgraded to version 11 we have done away with clearing data from the ASO cube using #MI files, and have started using the Clear Data In Region MDX functionality. From my understanding, when doing a logical clear it should load offsetting values to a new slice, so that a value of 0 is retrieved from the DB.
    The problem is that there is a 2-3 minute window between the clear script and the new data load where users are pulling not 0's but incomplete data. Again, from my understanding, this is not how a logical delete should act. This process runs every 2 hours, so there is a 2-3 minute window every 2 hours where the data users see may be incorrect. If we can't resolve this issue, we will have to go back to loading #MI to clear data from the ASO DB, which we are hoping to avoid. Also, we can't do a physical delete because it takes too long.
    Any ideas? Am I misinterpreting the Logical Delete functionality?
    Thanks in advance.

    Just to confuse matters, this problem is intermittent and I haven't been able to successfully replicate it in our Test environment.
    That would seem to indicate something else was going on in the DB that was interfering with the clear, but the logs aren't showing any errors, locks, etc. that could have caused the problem.
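    For reference, one hedged mitigation to test (hypothetical app/db names): merging the incremental slices and physically removing the zero-valued compensating cells after each clear/reload cycle, so compensating slices don't accumulate. Whether this helps with the 2-3 minute window would need testing in your environment:
    alter database MyApp.MyDb merge all data remove_zero_cells;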

  • ASO physical clear increases cube size?

    I am testing out performance and feasibility of a physical clear and have noticed that the cube size actually INCREASES after doing a physical delete. The cube size increased from 11.9 GB to 21.9 GB. Anyone know why?
    Our ASO cube is used as a forecasting application, and models are updated continually. Logical delete won't work because a model can be updated multiple times, and the direct quote from the manual below precludes the logical clear option:
    "Oracle does not recommend performing a second logical clear region operation on the same region, because the second operation does not clear the compensating cells created in the first operation and does not create new compensating cells."
    Here is the MaxL I used to do a physical clear of ~120 models from the cube:
    alter database Express.Express clear data in region
    '{[PM10113],
    [PM10123],
    [PM10124],
    [PM10140],
    [PM6503],
    [PM6507],
    [PM6508],
    [PM6509],
    [PM6520],
    [PM6528]}' physical;
    Any insight would be greatly appreciated.

    I am sorry, but I do not have my test system available, so I will have to do this from memory.
    I am surprised at this - it is what you would expect if you did a logical clear. When you look at the database statistics, does the number of cells reflect the original less the cleared area? And does it show no slices? If so, then you did do a physical clear, as your MaxL indicates.
    You might want to stop and start the application. Otherwise I will have to check some more.
    But given the size of the increase (almost doubled), I would wonder why you would do a clear as opposed to a reset. I am also wondering why you are doing a clear at all. Why not just do a send and let it land in an incremental slice? That way only the changed cells would be found in the slice. More importantly, the slices would be quite small and likely automatically "merged".
    Finally, I am wondering about the DBAG quote (page 982 in the PDF) you included. Again I would have to test, but I think they are only warning that the number of slices will start to build up, precisely "because the second operation does not clear the compensating cells created in the first operation and does not create new compensating cells". The net result would still be correct.
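    For reference, hedged MaxL sketches of the two alternatives mentioned above, reusing the poster's app/db names; reset wipes all data from the cube, and merge collapses incremental slices into the main database slice:
    alter database Express.Express reset;
    alter database Express.Express merge all data;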

  • Urgent help require-ASO data loading

    I am working on 9.3.0 and want to do incremental data loading so that it won't affect my past data. I am still not sure whether it is doable or not. Now the question is:
    Do I need to design a new aggregation after loading data?
    Thanks in advance

    Hi,
    The ASO cube will clear off all the data if you make certain structural changes to the cube (i.e. if you change your outline; you can find out exactly what clears off the data - for example, adding a new member clears the data, whereas just adding a simple formula to an existing member might not).
    If you don't want to affect the past data, and yet you want to load incrementally, then ensure that all the members are already in the otl (take care of the time dimension; let all the time members be in place), and then you can load using the 'add to existing values' option while you load.
    But remember, you can only do this without structural changes; otherwise, you need to load everything again.
    It's good if you design an aggregation, as it helps retrieval performance, but it's not mandatory.
    Sandeep Reddy Enti
    HCC
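    To make the reply above concrete, a minimal MaxL sketch of an incremental load followed by an optional aggregation; the app, database, and file names are hypothetical, and the 'add to existing values' option is set inside the rule file itself:
    /* load the incremental file; existing cells are summed per the rule file setting */
    import database MyApp.MyDb data from server text data_file 'inc.txt'
    using server rules_file 'incload' on error write to 'inc.err';
    /* optional: build an aggregation to restore retrieval performance */
    execute aggregate process on database MyApp.MyDb;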

  • Essbase Data Clears Off After Restructure

    Hello,
       When I am doing a dimension update in an ASO cube, the data file is getting cleared even though I select "Retain all data" when restructuring. I have done restructures a number of times and I have never faced this issue. Can anyone let me know what's happening here? It's pretty crucial!
    Thanks !!

    955124 wrote:
    I did Smart View retrievals and I didn't see any data; for clarity, I restored the cubes to the point before doing the update, and there was data.
    Checking the properties tells you for certain... ...you could have lost some data, you could have broken a formula, etc.
    Surprisingly, deleting a dimension and re-adding it does actually work, provided that the dimension has the same name before and after. But in any case, if you do something that Essbase knows is going to clear the data, you get a message after saving that says "In order to restructure, all data must be cleared", even though you have selected "Retain all data". Any chance you got that and clicked through? (It's easily done.)
    Possibilities for 'silent' data loss are that you have made previously level-zero members level-one or above, deleted the level-zero members that contained data (obviously), and/or changed the names of level-zero members.

  • ASO CUBE CLEARING DELAY

    Hi All,
    I have an ASO cube that is aggregated to about 5 GB. When I "CLEAR ALL DATA" it's taking in excess of 10-15 mins, which seems a rather long time. Is this normal? I'm sure in the past it wasn't doing this.
    Thanks
    Paul

    Hi Paul,
    1. 'Clear all data' might have taken more time because of other activity at the system/server end.
    2. If clearing this 5 GB cube previously took 10-15 minutes less than it does now, then I suspect your system/server was handling more requests at the time of this clear. Check with the admin to ensure that you are making a fair apples-to-apples comparison.
    3. Also check whether anything at the server end or storage end (i.e. SAN or NAS) has changed recently.
    Sandeep Reddy Enti
    HCC
    http://hyperionconsultancy.com/

  • ASO Partial clear in Essbase v 9.2.0

    Can anyone let me know how to clear partial data from an Aggregate Storage database in Essbase v9.2.0? We are trying to clear some data in our database and don’t want to clear out all the data.
    Thank you in Advance,
    Dan

    How are you trying to clear the data? If it is using MaxL and MDX, then I don't think that was available until version 11.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • How to store ASO dat file in different location

    Hello Everyone,
    I have a requirement where I need to store the ASO .dat file in a location other than the folder where the application artifacts are stored. In BSO, we can right-click the database, choose Edit > Properties, and in the Storage tab choose where the data and index files are stored. Do we have a similar option for ASO? The space we have for the application folders is limited, and we have a separate SAN drive for storing data and index files; that's where the BSO .pag files are stored. Since our ASO cube is growing, I want to move its data to where the BSO data is stored, I mean to the designated data drive. Any suggestions will be much appreciated.
    Thanks.

    Yes, ASO has the concept of 'tablespaces'. There are four (default, temp, log and metadata), but you probably just want to move the 'default' and 'temp' tablespaces to your SAN location. Temp will look really small (i.e. empty) until you run a restructure or aggregation, and then it will explode to the size of the default tablespace, so it's important not to miss it.
    See the section on 'Managing Storage for ASO Databases' here http://docs.oracle.com/cd/E17236_01/epm.1112/esb_dbag/asysadmn.html, or the 'alter tablespace' MaxL command here http://docs.oracle.com/cd/E17236_01/epm.1112/esb_tech_ref/maxl_alttabl.html.
    EDIT: Actually, looking at the documentation, I see that you can't move the metadata or log tablespaces. But you're probably only going to care about 'default' and 'temp' anyway.
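    To make that concrete, a minimal MaxL sketch using the 'alter tablespace' command from the second link; the application name and SAN paths here are hypothetical:
    alter tablespace MyAsoApp.'default' add file_location '/san/essbase/data';
    alter tablespace MyAsoApp.'temp' add file_location '/san/essbase/temp';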

  • ASO - .dat file in default directory

    Hi,
    Our .dat file in the 'default' directory on our server is 209,715,200 bytes. The sys admin says we need to move it as it's too big for that directory. They do this on the BSO cubes and identify a path where the 'Storage' resides; however, she doesn't see this option for ASO. Is this not possible in ASO, so that the .dat file must remain where it is? Or is there an alternative method to store the .dat file somewhere else?

    You mean managing the tablespaces? - http://docs.oracle.com/cd/E17236_01/epm.1112/eas_help/tablespc.html
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • FEBAN - Automatic EBS banking data clearing from Payroll checks

    Hi, I have a problem with the automatic check clearing of my EBS data from payroll. The document number for correct clearing is updated in table PAYR, field VBLNR. If I reissue the clearing with program RFEBAK30, it is not picking up the document to clear. In the output file, BELNR comes with * and not with the available document number.
    Thanks for any idea.

    Hi Christy,
    Just curious if you found the process to update the encashment.
    Thanks,
    Mahesh

  • Infotype data clear

    I want to clear all data from infotype 2005. Please do help.
    Reward points will be given.
    Regards,
    Quavi.

    Write a program:
    REPORT z_clear_it2005.
    TABLES: pa2005.
    " Selection screen: personnel numbers whose IT2005 records should be cleared
    SELECT-OPTIONS: s_pernr FOR pa2005-pernr.
    DATA: itab TYPE TABLE OF pa2005.
    START-OF-SELECTION.
      " Collect the infotype 2005 (Overtime) records for the selected employees
      SELECT * FROM pa2005
        INTO TABLE itab
        WHERE pernr IN s_pernr.
      " Delete them directly from the database table; note this bypasses the
      " HR infotype framework and its consistency checks
      DELETE pa2005 FROM TABLE itab.
    Reward points if it is useful.

  • ASO data slice export using MDX

    Hi
    Can anyone help with the syntax for exporting a data slice from an ASO cube using MDX and MaxL?

    Use the MaxL spool command to capture the output:
    http://docs.oracle.com/cd/E17236_01/epm.1112/esb_tech_ref/frameset.htm?maxl_commands_spool.html
    login test test on test;
    spool on to 'mxlDefectAll.txt';
    set column_width 36;
    SELECT NON EMPTY {([Actual])} ON COLUMNS,
    NON EMPTY
    CrossJoin (CrossJoin ([Product].children, [Market].children),
    CrossJoin([Year].Children, [Measures].children))
    ON ROWS
    FROM Sample.Basic;
    spool off;
    logout;
    Or use the MaxL-Perl interface:
    http://docs.oracle.com/cd/E17236_01/epm.1112/esb_tech_ref/frameset.htm?maxl_perl_examples.html
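    Depending on the version, a hedged alternative to spooling an MDX query is the MaxL database export, which for an ASO cube writes out the level-0 data; the app/db and file names here are hypothetical:
    export database MyAsoApp.MyDb data to data_file 'lev0_export.txt';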
