Essbase 11.1.2 defragmentation

Hi All,
We've hit something quite strange when running a defrag on a client's environment.
The method used was a parallel export of all data (MaxL command), followed by a linear import. It's a block storage model, and this method was chosen because in theory it should be faster than a dense restructure, and it guarantees the numbers in the database remain the same (I'm not 100% sure all data is at level 0).
In a development environment we successfully ran the script, which generated I think 9 files - 8 from the parallel export, plus one extra because a file went over 2GB. All imported to a new database fine and the resulting fragmentation ratio went to 1.0 - great!
In UAT we did the same, but the resulting fragmentation ratio got worse - dropping to 0.35. Obviously there is a difference somewhere, but the only things we have found so far are that the amount of data is much higher (around 3.5 times), so many more export files were generated as they frequently went over 2GB. The other difference is that dev seems to have bitmap encoding, while UAT uses RLE.
Has anyone seen anything similar - and is this the best/fastest way to defrag? Does the order in which the files are imported matter?
At the moment we're also experimenting in development with the data export and import binary commands via calc scripts - has anyone else had experience with this?
Many Thanks for any help,
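For reference, the export/import cycle described above can be sketched in MaxL roughly like this (the application/database and file names are placeholders, not the client's actual ones):

```
/* parallel export: one export thread per listed file */
export database app.db all data to data_file
    'exp1.txt', 'exp2.txt', 'exp3.txt', 'exp4.txt';

/* linear import into the cleared/rebuilt database, one file per statement */
import database app.db data from data_file 'exp1.txt'
    on error write to 'imp.err';
```

Essbase spills to additional files automatically when any one file passes 2GB, which matches the extra files we saw.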

Hi,
Thanks for your responses. Having spent a bit more time on it, it appears the clustering ratio drops to 0.35 as soon as the import process starts and stays there the whole time! It's an "all" data export/import we're doing, as the calc time after a level 0 import is too long. It is a fast import - 3.5 hours (compared to the export, which is over 14 hours).
We've thought about a restructure, but backups occur daily and the client won't let us turn them off, and it looks like a restructure would take longer than 15 hours (probably the greatest window we have between overnight processes). The export/import allows us to export one day and import the next.
Outlines are a bit different - there is an extra attribute dimension in development, but this shouldn't really make a difference being sparse and dynamic.
Maybe the parallel export creates 'fragmented' txt files, though this isn't what we saw in development. Given that it is parallel, it looks like it slices each thread by a sparse dimension.
For encoding - not sure why the environments are different in encoding but we'll try switching development to RLE which looks like the best option and see if it makes any difference.
Thanks for ideas so far,
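In case it helps anyone following along, the binary export/import experiment mentioned above looks roughly like this as a pair of calc scripts (file paths are placeholders):

```
/* export script: all data, binary format */
SET DATAEXPORTOPTIONS { DataExportLevel "ALL"; };
DATAEXPORT "Binfile" "/backup/alldata.bin";

/* import script, run against the cleared database */
DATAIMPORTBIN "/backup/alldata.bin";
```

Note that DATAIMPORTBIN assumes the outline is unchanged between export and import, so it isn't an option if the defrag is combined with outline changes.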

Similar Messages

  • Calc time vs. defragmentation

    I have a database with an average cluster ratio of .44. If I export and reload my data, it will go to 1.0, but as soon as I calc it goes back to .44. Under my current data settings, this calc takes a mere 5.7 seconds to run and retrieval time is fine. In an effort to improve the cluster ratio, I played with my dense/sparse settings, changed my time dimension to sparse, and was able to get a .995 cluster ratio after calculation; the problem is that the calc script now ran for 127 seconds, which is 22x longer. I know that either calc time is minimal by Essbase standards, but I'm still curious which way is "optimal". I would think it is always best to take the enhanced performance over the academic issue of cluster ratio, but I'm concerned at what point this becomes more than an academic question. How important is the cluster ratio, and what kind of implications are there for having a database that is more fragmented? Are there other things besides calc and retrieval time that maybe I'm not seeing on the surface that I should be concerned with? Since defragmentation should improve performance, is it worth it to sacrifice some performance for less fragmentation? Of course, as this database grows this will become more of an issue. Any input, thoughts and comments would be appreciated.

    Just my humble opinion: everybody's data has a different natural sparsity, and rather than think in terms of 'fragmentation', think in terms of the nature of your data. If you made EVERY dimension sparse except for Accounts, and had only one member in Accounts, your database would consist solely of single-cell data blocks that are 100% populated - as dense as you can get. The trade-off is that you would have a HUGE number of these small, highly compact data blocks, and your calc times would be enormous. As a general rule, you can take each of your densest dimensions in turn and make them "dense" in the outline until your data blocks approach 80k in size. The tradeoff is that not all cells in each data block will be populated, but you'll have fewer data blocks and your calcs will zoom. Your goal is not simply to minimize the number of data blocks, or to minimize the data block size, or to maximize the block density. Your goal is to reach a compromise position that maximizes the utility of the database. A good approach is to hit a nice compromise spot in terms of sparse/dense settings, then begin optimizing your calcs and consider converting highly sparse stored dimensions to attributes and such. These changes can make a tremendous impact on calc time. We just dropped our calc time on a database from 14 hours to 45 minutes and didn't even touch the dense/sparse settings. -dan
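    If you want to watch these numbers while experimenting, the block statistics (block size, density, average cluster ratio) are available in MaxL; a quick sketch with placeholder names:

    ```
    query database sample.basic get dbstats data_block;
    ```

    Block size is simply the product of the stored members of the dense dimensions times 8 bytes, so e.g. 12 months x 400 stored accounts x 8 bytes is about 38k, comfortably under the ~80k guideline above.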

  • Decimals mismatch after all data defragmentation

    Hi,
    We did a defragmentation of Essbase cubes by exporting, clearing and importing all data. There is a difference in decimals. Just to avoid re-aggregating the cubes, we did a full data export rather than going for a level 0 export and aggregation. Is there a possibility to avoid this difference in decimals?
    Before Defragmentation - OTHER EXPEN     -98177.2799999957
    After defragmentation - OTHER EXPEN     -98177.2799999976
    Regards,
    Ragav.

    Just to share my experience. I have also encountered a similar issue, where the rounding differs when I change the outline order. There is no dynamic calc, simply aggregation; some members are tagged as +, some as -.
    The rounding difference is noticeable when the numbers are very big, e.g. billions, and the difference shows from the 5th decimal onwards. The rounding difference is insignificant compared to the amount itself, so it is not causing an issue.
    I also notice similar behaviour in Excel when I sum very large numbers: there is a rounding difference. So I guess this is something to do with how binary floating point works.
    Read the following article:
    Read the following article:
    http://support.microsoft.com/kb/78113

  • Cluster ratio & Defragmentation

    Scenario:
    Before: cluster ratio = 1, index cache = 300MB
    After: after running a calc script (CALC DIM on most dimensions), cluster ratio = 0.68 and index cache usage has increased.
    I am running the calc script as part of a test and I am not loading any data. The data is already aggregated to the higher levels, yet I am still running the calc script for aggregation.
    According to the DBAG (Database Administrator's Guide):
    Fragmentation is unused disk space. Fragmentation is created when Essbase
    writes a data block to a new location on disk and leaves unused space in the former location of the data block.
    Block size increases because data from a data load or calculation is appended to the blocks; the blocks must therefore be written to the end of a data file.
    What does the statement "calculation is appended to the blocks" mean, given that my higher level blocks are already present?
    Does this mean that when I run a calculation my block size increases?
    If yes, why does it increase?
    If yes, the next time I run the calc script, why is my cluster ratio still at 0.68, and why doesn't it decrease further?
    NOTE: After fragmentation, all blocks are marked as dirty. Does this have anything to do with the decrease in the cluster ratio after running the calc script (all blocks are marked as clean)? I run the calc script again (blocks are marked as clean) but it still calculates all the blocks because intelligent calc is off. After this calc, does it again append the calculated values to the blocks?
    What is the logic behind this?
    I am not using intelligent calculation but my clearupdatestatus is set to after.

    Yes, defragmentation (restructuring) does throw away old (free-for-reuse) blocks.
    You must differentiate a little bit: dirty blocks != clean blocks != blocks available for reuse.
    Have a look at the documentation for restructuring; that's when Essbase runs through all blocks, tries to align them back in the best order for retrieval, and throws away the blocks available for reuse. That's also the reason Essbase should have at least twice as much free space left on the drive as your cube has in size.
    1 - I keep on calculating again and again, so new blocks get added again and again even though my blocks are clean, increasing my data file
    Yes - for intelligent calculation turned off AND certain conditions present which influence the stepping of a calculation (committed mode, parallel calc, cache size changes between calcs, ...).
    (or)
    2 - Is it like my blocks are clean, so it doesn't add any new blocks for my further calculation?
    Again yes :-) - for intelligent calculation turned on AND those conditions which influence the stepping of the calculation NOT present.
    If the block is relocated, the former position becomes free for reuse
    Does this mean the block holds both the old and new values, which increases the size of the block, so it needs to be relocated because the existing space is not sufficient?
    Quite so, but no single block holds both new and old data; each block holds its own data. A new block is created for the new data; the old block remains untouched apart from being flagged as available for reuse.
    The flagging for reuse does not mean the space is really lost, as it could be reused by a block which fits in. But I do not know whether reuse needs an exact match or is a less-than-or-equal comparison. In the latter case (which I would guess is the one used), small gaps would still be present, as not the whole space for the block would be used up.
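    As an aside, you don't have to export/reload to reclaim those free-for-reuse blocks; a forced dense restructure does it in place. A MaxL sketch (placeholder names):

    ```
    alter database sample.basic force restructure;
    ```

    Bear in mind the point above about disk space: the restructure temporarily needs roughly twice the cube's size on disk.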

  • Hyperion User Access and Defragmentation

    1. How do I create a new user in Shared Services, and how do I assign groups after that? What is the process in Hyperion Planning version 11.1.2.1? Please send me screenshots if you have any.
    2. What are the defragmentation process steps in Hyperion? I exported the data, cleared the data, and then imported the data - what should I do after that for aggregation? Please help me.
    Thanks & Regards

    Hi
    Defragmentation in Essbase includes the below steps
    Step 1: Export Lev-0 data
    Step 2: clear the cube
    Step 3: Load the data
    Step 4: Run a CALC ALL to aggregate the data
    or else
    you can do the below steps if you have a smaller volume of data
    Step 1: Export all-level data
    Step 2: Clear the cube/database
    Step 3: Load the exported all level data
    Hope it helps
    Thanks
    Ramya
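    Ramya's first set of steps can be sketched in MaxL like this (application/database and file names are placeholders):

    ```
    export database app.db level0 data to data_file 'lev0.txt';
    alter database app.db reset data;
    import database app.db data from data_file 'lev0.txt'
        on error write to 'load.err';
    execute calculation default on database app.db;
    ```

    The default calc is usually CALC ALL; substitute your own aggregation script if you have one.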

  • Cache Settings-Defragmentation

    Dear All,
    For buffered I/O, the default settings are index cache = 1024KB and data cache = 3MB.
    If we increase these caches, will there be any reduction in the time taken to perform defragmentation?
    Looking into the other side of Essbase:
    How does it actually perform defragmentation? I mean, what actually happens inside?
    To the extent of what I know (thanks to all those who have provided me the information on what happens inside when Essbase performs a calculation), during calculation new blocks are created and the old blocks are marked to be reused later.
    So, during defragmentation it removes all those blocks marked for reuse. To do that, it has to get the blocks into memory and check against each block.
    So, I think increasing cache sizes might reduce the time taken to perform defragmentation.
    Let me know your views. And correct me if I am wrong.
    Regards
    Amarnath
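    For what it's worth, the caches can be raised per database in MaxL before the defrag and set back afterwards; a sketch with placeholder names and sizes:

    ```
    alter database sample.basic set index_cache_size 256mb;
    alter database sample.basic set data_cache_size 512mb;
    ```

    The new sizes take effect when the database is restarted.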

    You mean to say...
    I didn't get what you were trying to say. Can you be more elaborate?
    The link you have provided only covers setting the cache sizes. It doesn't include any information about setting cache sizes for performing defragmentation.
    I would appreciate it if anyone could comment on this.
    I would appreciate if anyone can comment on this...
    Regards
    Amarnath

  • AGG and CALC DIM Essbase script recently started to grow our pag files

    We have an Essbase script that does nothing but AGG and CALC DIM, and it ran fine for months in that it did not grow our Workforce cube. Starting in late Jan it started to grow its .pag files. The Workforce cube used to be 7 GB in Dec 2010; it has grown to 10GB today. I tested running it and it grew our .pag files by 170MB the 2nd time and then by 70MB the 3rd time I ran it. Has anyone seen this?

    Thanks a million Cameron.
    1) I do dense restructures every night - apparently that does not remove all fragmentation.
    last questions:
    2) I exported level zero, cleared all data, then imported level zero data. That should clear up all fragmentation, shouldn't it?
    3) After importing level zero data, I ran a simple CALC DIM calc script on the Accounts dim only on this Workforce BSO cube, which is only 400MB. It took over 30 mins. On my second and third runs of the same calc script, it took 9 mins. My BSO cube grew a few MB. Can I assume that the blocks were built by the first run, and that all subsequent runs will stay around 9 mins since the blocks have now been built?
    Here is the calc script
    SET CACHE HIGH;
    SET UPDATECALC OFF;
    SET CLEARUPDATESTATUS OFF;
    SET LOCKBLOCK HIGH;
    SET AGGMISSG ON;
    SET CALCPARALLEL 3;
    FIX (febscenario,Working)
    FIX(@RELATIVE(TTC,0),@RELATIVE(TCI,0),@LEVMBRS("Project",0),@RELATIVE("Total Employees",0))
    FIX(FY11, FY12, "Jan":"Dec")
    FIX("HSP_InputValue","Local","USD")
    CALC DIM ("Account");
    CALC TWOPASS;
    ENDFIX
    ENDFIX /* &YearNext */
    ENDFIX
    ENDFIX
    4) When I calc only FY11, it takes 3 seconds on the first through fourth runs of the above calc. However, when I calc FY12, it takes over 30 mins on the first calc and 9 mins subsequently. Why is that? Should I use SET CALCONMISSINGBLK in my calc script?
    5) I am running the calc as the Essbase admin user. The level zero text file I loaded is only 460MB. After the calc, the BSO cube's .pag files are only 420MB. We are thinking of calc'ing older scenarios for historical purposes, but I am not sure if that will degrade calc performance. My experience has been that increasing the size of the BSO cube by calc'ing will degrade future calc times. Is that your experience?

  • Essbase Agg Calc script Running Inconsistently

    Hi All,
    We are seeing inconsistent completion times for one of our calc scripts that simply aggregates a single Entity dimension. It runs periodically throughout the day on an already aggregated database. The normal completion time is 20 minutes. We have observed that some runs can take up to 7 hours. This issue persists even if there are no users in the system. We had the SAN and the Essbase server monitored while running this calc, but no issues were found on either end. In the essbase log, it appears that Essbase is sitting idle for a period of time while the calc is running. Has anyone experienced this before?
    ------------------------Calc script -------------------
    SET CACHE HIGH;
    SET LOCKBLOCK HIGH;
    SET MSG SUMMARY;
    SET NOTICE LOW;
    SET UPDATECALC OFF;
    SET AGGMISSG ON;
    SET CALCPARALLEL 4;
    SET CALCTASKDIMS 4;
    /* Baseline fix. */
    FIX (@RELATIVE("YearTotal",0), @RELATIVE("ACCountInc",0), @RELATIVE("AccountLine",0), @RELATIVE("AccountOther",0), "FY02", "Working")
         Agg("Entity") ;
    ENDFIX
    This has become a major issue. Your input will really help us.
    Thanks in advance

    A couple of unknowns here, but here are a few tips:
    1. Run the script in MaxL and ensure that you log users out and kill all existing app processes. Even if this isn't doable in the long run, you want to test it to see whether the result is consistent. A lot of the time your process is simply waiting for other online processes to finish.
    2. Fragmentation of the BSO cube could be the cause. If you defrag the cube and the 1st run is fast but the 2nd run is slow, then you have created a lot of blocks that shouldn't be there. That's your problem, and you need to tune the way you agg.
    3. Check the Essbase statistics, especially that the average cluster ratio is 1.
    Daniel Poon
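    For tip 1, the logout part can be scripted in MaxL so it runs just before the agg; a sketch with a placeholder application name:

    ```
    alter system logout session on application app;
    alter application app disable connects;
    /* run the agg here, then re-enable user access */
    alter application app enable connects;
    ```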

  • Hyperion Essbase Error: 1002097 Unable to load database [PlanType1]

    Hello,
    I am running Hyperion Essbase 9.2.0.3 version. This has happened 3 times now since last 2 weeks. I had to restore the APP folder from previous backup.
    The message that I see is Error: 1002097 Unable to load database [PlanType1]
    I have searched Google with no luck. Does anyone know how to fix this, or why we get this message? I don't want to restore from backup every time.
    Thanks
    Azmat Bhatti

    I think you have a corrupted backup.
    Try running an essmsh script to recover the broken blocks.
    See also: Backups - Files still locked after beginarchive

  • "Error 1002097 Unable to Load database" while starting an Essbase App.

    Hello Essbase Experts,
    I am getting below error while starting an Essbase Application:
    "Error 1002097 Unable to Load database"
    Below is an extract from the Essbase Application Log:
    [2012-04-04T14:14:03.816-19:14] [RPA] [ESM-6] [ERROR] [32][] [ecid:1333566842161,0] [tid:2058170128] Unable to lock file
    [SID/essbase/user_projects/epmsystem/EssbaseServer/essbaseserver1/app/RPA/RPA/RPA.esm]. Essbase will try again after a short
    delay.
    [2012-04-04T14:14:04.821-19:14] [RPA] [ESM-5] [ERROR] [32][] [ecid:1333566842161,0] [tid:2058170128] Unable to lock file
    [SID/essbase/user_projects/epmsystem/EssbaseServer/essbaseserver1/app/RPA/RPA/RPA.esm]. Please make sure other processes do not
    access Essbase files while Essbase server is running.
    [2012-04-04T14:14:04.821-19:14] [RPA] [MBR-89] [WARNING] [1][] [ecid:1333566842161,0] [tid:2058170128] Unable to open
    [SID/essbase/user_projects/epmsystem/EssbaseServer/essbaseserver1/app/RPA/RPA/RPA.esm] for database
    [RPA][2012-04-04T14:14:04.821-19:14] [RPA] [SVR-97] [ERROR] [32][]
    [ecid:1333566842161,0] [tid:2058170128] Unable to load database [RPA]
    [2012-04-04T14:14:04.821-19:14] [RPA] [SVR-97] [ERROR] [32][]
    [ecid:1333566842161,0] [tid:2058170128] Unable to load database []
    [2012-04-04T14:14:04.835-19:14] [RPA] [SVR-97] [ERROR] [32][]
    [ecid:1333566842161,0] [tid:2058170128] Unable to load database []
    Please suggest pointers for starting the application.
    Thanks,
    KK

    *[2012-04-04T14:14:03.816-19:14] [RPA] [ESM-6] [ERROR] [32][] [ecid:1333566842161,0] [tid:2058170128] Unable to lock file*
    The solution should be straightforward. If the Essbase Agent is stopped via services.msc (the Services panel) on Windows while a process is currently running, or if there has been an abnormal termination of the Essbase Agent, orphaned ESSSVR processes can be left behind. Shut down the Essbase server using ESSCMD or MaxL, then check Task Manager to confirm the ESSBASE.exe process has ended; the ESSSVR.exe processes should end with it. If not, do an "End Process" on any ESSSVR.exe process that is still running, then start the Essbase service and start the application.
    ESSSVR.exe is the process which keeps an application alive. (The bad thing is that if you have many applications running, you have the same number of ESSSVR.exe processes in Task Manager, but you can't tell which one belongs to which application.)
    Reasons:
    Looking at your error, first of all you need to know what a lock is in EAS and what locks exist (right-click on the Essbase server name, EDIT->LOCK and EDIT->LOCKED OBJECTS, and if your database/outline appears in there, unlock it).
    Check any antivirus and backup software that may be scanning or running against your Essbase folders, as that can lock the files; also, any ESSSVR process stays behind when the Essbase Agent is stopped.
    Another way to get this error is when your page and index files are in one location and the rest (otl, rep, csc scripts) are on another drive: while a backup runs against the drive holding the OTL/csc files, index and page files can get created there even though your real page and index files are on a different drive, and because of this you get the "unable to load database" error.
    Hope this gives you some idea.
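    The clean shutdown described above can be done from MaxL rather than services.msc, which gives each ESSSVR process a chance to exit properly (placeholder application name):

    ```
    alter system unload application app;
    alter system shutdown;
    ```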

  • Essbase Error Error: 1002097 Unable to load database

    Hi,
    I am using Essbase 7.1. I am having a problem: when I try to run any calc, start an application or database, or do anything at all, Essbase gives me Error: 1002097 Unable to load database. I restarted the Essbase services but nothing happened. Any help would be great.
    Thank you,

    I am currently getting the same error; it happens randomly on different models each day at around the same time. I have found this error in the log for one model.
    Unable To Find Or Open [e:\Program_Files\Hyperion\Essbase\bin\APP\LAdPBurn\LAdPBurn\ess00011.pag]
    I think the data files are locked by something, I am checking with our server team to see when the backups run as there are no other processes running at this time in Essbase. Incidentally the model has 15 .pag files.

  • How to build the members in essbase based on levels using EAL

    Hi All,
    How can I restrict which members are built in Essbase, based on levels, using EAL?
    I have a requirement where, in one dimension, a level 1 member needs to be built as level 0 in Essbase.
    Please let me know the procedure if anyone has implemented or faced something similar.
    Thanks in advance,
    Kiran

    There's no such functionality in EAL to create Essbase outlines based on levels. The only workaround is to use mapping tables:
    1- create the dimension in essbase with the desired HFM level 1 members as level 0.
    2- set this specific dimension to mapping in your design grid within your EAL Bridge
    3- create the SQL Mapping table and start mapping those members from HFM to Essbase.
    Tanios

  • Data back up in Essbase

    Hi All,
    1. I want to take a backup of the complete database - which files need to be backed up (.pag, .ind, ...)?
    2. I want to copy data from one database to another database - which files need to be taken into consideration?
    3. Can we do these backups using MaxL?
    It's very urgent for me; waiting for your reply.
    Thanks.
    Kumar

    SSS wrote the questions above.
    If you just want to back up the Essbase database, you can shut down the Essbase services and then copy all files under the database folder.
    For metadata kept in SQL server, you may need to have a separate database backup.
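    If shutting down the services isn't an option, the hot-backup route in MaxL is to put the database in read-only archive mode, copy the files, then release it; a sketch with placeholder names:

    ```
    alter database app.db begin archive to file 'app_basic.arc';
    /* copy the .pag, .ind, .otl etc. at the OS level here */
    alter database app.db end archive;
    ```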

  • Auto / Silent install of Essbase 6.1.4.0 (Client)

    Does anyone know/have the switch(es) for an auto or silent install (client side) of the Excel Essbase Add-in version 6.1.4.0?
    Thank you,
    Blake

    Hi,
    InstallShield has a silent option that allows you to record the options that have to be selected by the user. You first have to launch the install and record a setup.iss file by launching setup.exe /r (or -r). Then setup.exe /f1, with the location and name of the .iss file, will do the trick. Consult the installshield.com knowledge base for more information (perform a search on setup.iss). Please be aware that, depending on the OS, the .iss file may need to be different.
    Eric
    Partake Consulting

  • Error 1030723 Unable to get UTF-8 locale when using Essbase API 11.1.1

    I have a question about how to connect to an Essbase server using the Essbase client API (11.1.1). I encountered the error "Unable to get UTF-8 locale" when I called the EssInit(pInitStruct, phInstance) API.
    However, I had no problem calling the API with the previous Essbase client APIs (7.1 or 9.3).
    I passed ESS_API_UTF8 to the usApiType field in the ESS_INIT_T struct. When I opened the essapi.h header file, I found that some new fields (listed below) have been added in the Essbase client API (11.1.1):
    ESS_TSA_API_typedef_struct(ess_init_t)
    ESS_TSA_ELEMENT(ESS_ULONG_T, Version); /* This should be set to ESS_API_VERSION */
    ESS_TSA_ELEMENT(ESS_PVOID_T, UserContext); /* void pointer to user's message context */
    ESS_TSA_ELEMENT(ESS_USHORT_T, MaxHandles); /* max number of context handles required */
    ESS_TSA_ELEMENT(ESS_SIZE_T, MaxBuffer); /* max size of buffer that can be allocated */
    ESS_TSA_ELEMENT(ESS_STR_T, LocalPath); /* local path to use for file operations */
    ESS_TSA_ELEMENT(ESS_STR_T, MessageFile); /* full path name of message database file */
    ESS_TSA_ELEMENT(ESS_PFUNC_T, AllocFunc); /* user-defined memory allocation function */
    ESS_TSA_ELEMENT(ESS_PFUNC_T, ReallocFunc); /* user-defined memory reallocation function */
    ESS_TSA_ELEMENT(ESS_PFUNC_T, FreeFunc); /* user-defined memory free function */
    ESS_TSA_ELEMENT(ESS_PFUNC_T, MessageFunc); /* user-defined message callback function */
    ESS_TSA_ELEMENT(ESS_STR_T, HelpFile); /* user-defined help file path */
    ESS_TSA_ELEMENT(ESS_PVOID_T, Ess_System); /* reserved for internal use */
    ESS_TSA_ELEMENT(ESS_USHORT_T, usApiType);
    ESS_TSA_ELEMENT(ESS_PCATCHFUNC_T, CatchFunc); /* user-defined kill-own-request signal callback function */
    ESS_TSA_ELEMENT(ESS_PCATCH_INIT_FUNC_T, CatchInitFunc); /* user-defined kill-own-request signal initialization callback function */
    ESS_TSA_ELEMENT(ESS_PCATCH_TERM_FUNC_T, CatchTermFunc); /* user-defined kill-own-request signal termination callback function */
    ESS_TSA_ELEMENT(ESS_PCOOKIE_CREATE_FUNC_T, CookieCreateFunc); /* user-defined cookie creation function */
    ESS_TSA_ELEMENT(ESS_PCOOKIE_DELETE_FUNC_T, CookieDeleteFunc); /* user-defined cookie creation function */
    } ESS_TSA_END(ESS_INIT_T);
    I could not find any documentation introducing the 11.1.1 API. What does the error "Unable to get UTF-8 locale" mean? How can I work around it? Do any environment parameters or paths need to be set?
    Please advise.

    Hi,
    The API documentation for V11 is available from :- http://download.oracle.com/docs/cd/E12825_01/epm.111/esb_apiref/frameset.htm?launch.htm
    Hopefully it might point you in the right direction.
    Cheers
    John
    http://john-goodwin.blogspot.com/
