Average Clustering Ratio
Hi,
As part of defragmentation, after clearing the data cube I loaded level 0 data into the BSO cube. After the data load completed, the "Average clustering ratio" was 0.34.
After running the agg, the "Average clustering ratio" was 0.43.
After a database restructure, the "Average clustering ratio" became 1.
I was expecting the average clustering ratio to be 1 after loading level 0 data, or else after running the agg, but it never was. Can you help me understand why this is happening?
Thanks for your time and help...
ACR reflects the extent to which blocks in the .pag files are physically stored in outline order.
I have a theory that if the load is performed in parallel (or even if the load file is not ordered the same way, as might be the case with the output of a parallel export), the blocks don't necessarily get loaded in that order.
A (non-parallel) restructure completely recreates the .pag files in exact outline order, which is why the ACR returns to 1.
See also Essbase Users: Dense restructure is not making database average clustering ratio 1
I wouldn't obsess over getting ACR to be 1.
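If you do want to check and reset it from a script rather than EAS, the steps above look roughly like this in MaxL (a sketch only; app/db names and credentials are placeholders, not from this thread):

```
/* log in to the Essbase server (placeholder credentials) */
login admin password on localhost;

/* show block statistics, including Average Clustering Ratio */
query database Sample.Basic get dbstats data_block;

/* force a dense restructure, which rewrites the .pag files in outline order */
alter database Sample.Basic force restructure;

/* the ACR should now be back at 1 */
query database Sample.Basic get dbstats data_block;

logout;
exit;
```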
Similar Messages
-
Average Clustering ratio is not going up to 1.0 even after dense restructure
Gurus,
good evening.
I am having a strange issue.
I did a force restructure using MaxL. The number of .pag files dropped from 14 to 9, but when I check the properties, the clustering ratio still shows the old value, contrary to the expected 1.0.
I was under the impression that the force restructure would remove fragmentation and take the Average Clustering Ratio to 1.0.
please advise. -
Index file increase with no corresponding increase in block numbers or Pag file size
Hi All,
Just wondering if anyone else has experienced this issue and/or can help explain why it is happening....
I have a BSO cube fronted by a Hyperion Planning app, in version 11.1.2.1.000
The cube is in its infancy but already contains 24M blocks, with a .pag file size of 12GB. We expect this to grow fairly rapidly over the next 12 months or so.
After performing a simple Agg script aggregating the sparse dimensions, the index file sits at 1.6GB.
When I then perform a dense restructure, the index file reduces to 0.6GB. The PAG file remains around 12GB (a minor reduction of 0.4GB occurs). The number of blocks remains exactly the same.
If I then run the Agg script again, the number of blocks again remains exactly the same, the PAG file increases by about 0.4GB, but the index file size leaps back to 1.6GB.
If I then immediately re-run the Agg script, the # blocks still remains the same, the PAG file increases marginally (less than 0.1GB) and the Index remains exactly the same at 1.6GB.
Subsequent passes of the Agg script have the same effect - a slight increase in the PAG file only.
Performing another dense restructure reverts the Index file to 0.6GB (exactly the same number of bytes as before).
I have tried running the Aggs using parallel calcs, and also as in series (ie single thread) and get exactly the same results.
I figured there must be some kind of fragmentation happening on the Index, but can't think of a way to prove it. At all stages of the above test, the Average Clustering Ratio remains at 1.00, but I believe this just relates to the data, rather than the Index.
After a bit of research, it seems older versions of Essbase used to suffer from this Index 'leakage', but that it was fixed way before 11.1.2.1.
I also found the following thread which indicates that the Index tags may be duplicated during a calc to allow a read of the data during the calc;
http://www.network54.com/Forum/58296/thread/1038502076/1038565646/index+file+size+grows+with+same+data+-
However, even if all the Index tags are duplicated, I would expect the maximum growth of the Index file to be 100%, right? But I am getting more than 160% growth (1.6GB / 0.6GB).
And what I haven't mentioned is that I am only aggregating a subset of the database, as my Agg script fixes on only certain members of my non-aggregating sparse dimensions (i.e. only 1 Scenario & Version).
The Index file growth in itself is not a problem. But the knock-on effect is that calc times increase - if I run back-to-back Aggs as above, the 2nd Agg calc takes 20% longer than the 1st. And with the expected growth of the model, this will likely get much worse.
Anyone have any explanation as to what is occurring, and how to prevent it...?
Happy to add any other details that might help with troubleshooting, but thought I'd see if I get any bites first.
The only other thing I think worth pointing out at this stage is that we have made the cube Direct I/O for performance reasons. I don't have much prior exposure to Direct I/O so don't know whether this could be contributing to the problem.
Thanks for reading.
alan.d wrote:
The only other thing I think worth pointing out at this stage is that we have made the cube Direct I/O for performance reasons. I don't have much prior exposure to Direct I/O so don't know whether this could be contributing to the problem.
Thanks for reading.
I haven't tried Direct I/O for quite a while, but I never got it to work properly. Not exactly the same issue as yours, but in the past it would spawn tons of .pag files. You might try duplicating your cube, changing it to buffered I/O, and running the same processes to see if it behaves the same way.
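If it helps, switching the copied cube to buffered I/O can be scripted in MaxL rather than done through EAS (a sketch; names and credentials are placeholders):

```
login admin password on localhost;

/* sets the pending I/O access mode; it takes effect when the database restarts */
alter database Sample.Basic set io_access_mode buffered;

logout;
```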
Sabrina -
Error: 1012704 Dynamic Calc processor cannot lock more than [25] ESM blocks
Dear All,
I get the following error in the Essbase console when I try to execute any calc script:
Error: 1012704 Dynamic Calc processor cannot lock more than [25] ESM blocks during the calculation, please increase CalcLockBlock setting and then retry (a small data cache setting could also cause this problem, please check the data cache size setting)
Please find below the detailed statistics of my Planning application's database and outline.
Please help, guys.
GetDbStats:
-------Statistics of AWRGPLAN:Plan1 -------
Dimension Name Type Declared Size Actual Size
===================================================================
HSP_Rates SPARSE 11 11
Account DENSE 602 420
Period DENSE 19 19
Year SPARSE 31 31
Scenario SPARSE 6 6
Version SPARSE 4 4
Currency SPARSE 10 10
Entity SPARSE 28 18
Departments SPARSE 165 119
ICP SPARSE 80 74
LoB SPARSE 396 344
Locations SPARSE 57 35
View SPARSE 5 5
Number of dimensions : 13
Declared Block Size : 11438
Actual Block Size : 7980
Declared Maximum Blocks : 3.41379650304E+015
Actual Maximum Blocks : 1.87262635317E+015
Number of Non Missing Leaf Blocks : 10664
Number of Non Missing Non Leaf Blocks : 2326
Number of Total Blocks : 12990
Index Type : B+ TREE
Average Block Density : 0.01503759
Average Sparse Density : 6.936782E-010
Block Compression Ratio : 0.001449493
Average Clustering Ratio : 0.3333527
Average Fragmentation Quotient : 19.3336
Free Space Recovery is Needed : No
Estimated Bytes of Recoverable Free Space : 0
GetDbInfo:
----- Database Information -----
Name : Plan1
Application Name : AWRGPLAN
Database Type : NORMAL
Status : Loaded
Elapsed Db Time : 00:00:05:00
Users Connected : 2
Blocks Locked : 0
Dimensions : 13
Data Status : Data has been modified
since last calculation.
Data File Cache Size Setting : 0
Current Data File Cache Size : 0
Data Cache Size Setting : 3128160
Current Data Cache Size : 3128160
Index Cache Size Setting : 1048576
Current Index Cache Size : 1048576
Index Page Size Setting : 8192
Current Index Page Size : 8192
Cache Memory Locking : Disabled
Database State : Read-write
Data Compression on Disk : Yes
Data Compression Type : BitMap Compression
Retrieval Buffer Size (in K) : 10
Retrieval Sort Buffer Size (in K) : 10
Isolation Level : Uncommitted Access
Pre Image Access : No
Time Out : Never
Number of blocks modified before internal commit : 3000
Number of rows to data load before internal commit : 0
Number of disk volume definitions : 0
Currency Info
Currency Country Dimension Member : Entity
Currency Time Dimension Member : Period
Currency Category Dimension Member : Account
Currency Type Dimension Member :
Currency Partition Member :
Request Info
Request Type : Data Load
User Name : admin@Native Directory
Start Time : Mon Aug 15 18:35:51 2011
End Time : Mon Aug 15 18:35:51 2011
Request Type : Customized Calculation
User Name : 6236@Native Directory
Start Time : Tue Aug 16 09:44:10 2011
End Time : Tue Aug 16 09:44:12 2011
Request Type : Outline Update
User Name : admin@Native Directory
Start Time : Tue Aug 16 10:50:02 2011
End Time : Tue Aug 16 10:50:02 2011
ListFiles:
File Type
Valid Choices: 1) Index 2) Data 3) Index|Data
>>Currently>> 3) Index|Data
Application Name: AWRGPLAN
Database Name: Plan1
----- Index File Information -----
Index File Count: 1
File 1:
File Name: C:\Oracle\Middleware\user_projects\epmsystem1\EssbaseServer\essbaseserver1\APP\AWRGPLAN\Plan1\ess00001.ind
File Type: INDEX
File Number: 1 of 1
File Size: 8,024 KB (8,216,576 bytes)
File Opened: Y
Index File Size Total: 8,024 KB (8,216,576 bytes)
----- Data File Information -----
Data File Count: 1
File 1:
File Name: C:\Oracle\Middleware\user_projects\epmsystem1\EssbaseServer\essbaseserver1\APP\AWRGPLAN\Plan1\ess00001.pag
File Type: DATA
File Number: 1 of 1
File Size: 1,397 KB (1,430,086 bytes)
File Opened: Y
Data File Size Total: 1,397 KB (1,430,086 bytes)
File Size Grand Total: 9,421 KB (9,646,662 bytes)
GetAppInfo:
-------Application Info-------
Name : AWRGPLAN
Server Name : GITSHYPT01:1423
App type : Non-unicode mode
Application Locale : English_UnitedStates.Latin1@Binary
Status : Loaded
Elapsed App Time : 00:00:05:24
Users Connected : 2
Data Storage Type : Multidimensional Data Storage
Number of DBs : 3
List of Databases
Database (0) : Plan1
Database (1) : Plan2
Database (2) : Plan3
ESM Block Issue
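For error 1012704, the usual remedy (as the message itself suggests) is to raise the CalcLockBlock limits in essbase.cfg and/or enlarge the data cache. A sketch, with illustrative values not tuned for this outline (comment syntax assumed to be the usual leading semicolon):

```
; essbase.cfg (server-wide; restart the Essbase server after editing)
CALCLOCKBLOCKHIGH    2000
CALCLOCKBLOCKDEFAULT 500
CALCLOCKBLOCKLOW     100
```

A calc script can then opt into the high limit with `SET LOCKBLOCK HIGH;` at the top, and the small 3MB data cache shown in the statistics above can be raised in MaxL with something like `alter database AWRGPLAN.Plan1 set data_cache_size 64m;`.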
Cheers..!! -
Sudden increase in import time
On 9/6 I refreshed my database by exporting level 0 data, resetting the database, and importing the data back in. The import took 984 seconds. On 9/19, I performed the exact same procedure, and the import took 5,838 seconds. While I had added some dimension members in the meanwhile, as well as some data, I can't see anything that would account for such a drastic increase in import time. The size of the file on 9/6 was 1,032,616 KB, and the file on 9/19 was 1,072,413 KB - not a big difference.
Statistics on 9/6
-------Statistics of Birdseye:Plan1 -------
Dimension Name Type Declared Size Actual Size
===================================================================
Accounts DENSE 2431 2254
Time Periods DENSE 23 14
Data Type SPARSE 8 8
Scenarios SPARSE 25 25
Versions SPARSE 5 5
Years SPARSE 7 7
Customer SPARSE 3281 2412
Product SPARSE 10786 10657
Commodity SPARSE 122 122
Dry - Frozen SPARSE 5 5
Family SPARSE 149 149
Label SPARSE 486 486
Number of dimensions : 12
Declared Block Size : 55913
Actual Block Size : 31556
Declared Maximum Blocks : 247722062000
Actual Maximum Blocks : 179932788000
Number of Non Missing Leaf Blocks : 373943
Number of Non Missing Non Leaf Blocks : 385909
Number of Total Blocks : 759852
Index Type : B+ TREE
Average Block Density : 0.2448663
Average Sparse Density : 0.0004222977
Block Compression Ratio : 0.01842294
Average Clustering Ratio : 0.5076456
Average Fragmentation Quotient : 7.292331
Statistics on 9/19
-------Statistics of Birdseye:Plan1 -------
Dimension Name Type Declared Size Actual Size
===================================================================
Accounts DENSE 2431 2254
Time Periods DENSE 23 14
Data Type SPARSE 13 13
Scenarios SPARSE 25 25
Versions SPARSE 5 5
Years SPARSE 7 7
Customer SPARSE 3284 2414
Product SPARSE 10804 10660
Commodity SPARSE 122 122
Dry - Frozen SPARSE 5 5
Family SPARSE 149 149
Label SPARSE 486 486
Number of dimensions : 12
Declared Block Size : 55913
Actual Block Size : 31556
Declared Maximum Blocks : 403588822000
Actual Maximum Blocks : 292715605000
Number of Non Missing Leaf Blocks : 401337
Number of Non Missing Non Leaf Blocks : 437215
Number of Total Blocks : 838552
Index Type : B+ TREE
Average Block Density : 0.2189758
Average Sparse Density : 0.0002864733
Block Compression Ratio : 0.01815302
Average Clustering Ratio : 0.5888489
Average Fragmentation Quotient : 3.811463
Any help would be greatly appreciated.
Looking at your statistics there is an increase in the dimension members, but that should NOT account for such a massive increase in loading time.
1.5 hours is just wrong... unless there is a problem in your database.
Try validating it just to make sure that it is OK.
On a different tack, it would be worth checking the network traffic and the job log to see whether anything else was running that may have caused this to go slow.
If you can't find anything, make a copy and then try it on the copy, just to see whether it was a one-off issue or it consistently takes this long...
-
Partition ERROR - 1023040 - msg from remote site : need to understand
hi,
I currently have a problem with the partitions between two cubes.
Architecture:
80 country databases (source)
1 world database (destination)
Process:
- The partitions are created dynamically by MaxL scripts:
spool on to $1;
Alter application $2 comment '**_batch_**';
Alter application $4 comment '**_batch_**';
Alter system load application $2;
Alter system load application $4;
Alter application $2 disable startup;
Alter application $4 disable startup;
Alter application $2 disable connects;
Alter application $4 disable connects;
/* Create replicated partition from the Country cube to the Mond (world) cube */
create or replace replicated partition $2.$3
AREA
'"S_R_N",
&curr_month,
&local_currency, "D_EURO",
@IDESCENDANTS("P_Produit"),
@LEVMBRS("M_Marche",1),"M_Marche",
@IDESCENDANTS("B_Marque"),
@IDESCENDANTS("U_Sourcing"),
@REMOVE (@DESCENDANTS("I_Masse"), @LIST ("I_55CCOM")), @DESCENDANTS("I_Divers"),
@IDESCENDANTS("NA_Nature"),MCX'
to $4.$5
AREA
'"S_R_N",
&curr_month,
"D_DEV", "D_EUR",
@IDESCENDANTS("P_Produit"),
@LEVMBRS("M_MixClient",0),"M_MixClient",
@IDESCENDANTS("B_Marque"),
@IDESCENDANTS("U_Sourcing"),
@REMOVE (@DESCENDANTS("I_Masse"), @LIST ("I_55CCOM")), @DESCENDANTS("I_Divers"),
@IDESCENDANTS("NA_Nature"),MCX,
&country_name'
mapped globally ('',D_$7, "D_EURO", "M_Marche") to (W_$6,D_DEV, "D_EUR", "M_MixClient");
refresh replicated partition $2.$3 to $4.$5 all data;
drop replicated partition $2.$3 to $4.$5;
Alter application $2 enable startup;
Alter application $4 enable startup;
Alter application $2 enable connects;
Alter application $4 enable connects;
Alter application $2 comment '**_enable_**';
Alter application $4 comment '**_enable_**';
Alter system unload application $2;
Alter system unload application $4;
Spool off;
Logout;
exit;
- After defragmenting the cubes, the country replications to the world cube are launched one by one, sequentially.
The order of the countries is not the same from one month to the next.
The process is run each month.
Symptoms:
- The partition refresh fails with the following message, but not systematically.
Message:
MAXL> refresh replicated partition PGC_ESP.Pgc_esp to PGC_MOND.Pgc_mond all data;
ERROR - 1023040 - msg from remote site [ [ Wed Nov. 29 10:21:03 2013] hprx1302/PGC_MOND/Pgc_mond/PGC_ADMIN/Error ( 1023040 ) msg from remote site [ [ Wed Nov. 29 10:21:02 2013] hprx1302 / PGC_ESP / Pgc_esp / PGC_ADMIN / Error (1023040) msg from remote site [ [ Wed Nov. 29 10:21:01 2013] hprx1302/PGC_MOND/Pgc_mond/PGC_ADMIN/Error ( 1042012 ) Network error [ 32] : Can not Send Data ]]] .
We note that the error occurs in the following cases:
- The error generally happens when the average clustering ratio is low (fragmented cube), for the source and/or destination cubes.
- When many replications have already run before: on the last 10 to 15 cubes remaining to replicate.
- We got the error once on the test environment, on the first cube, with an average clustering ratio of 0.96; but the test server is much less powerful.
We noticed that once we defragmented the source and destination cubes after the error occurred, the replication no longer failed.
Problem: defragmenting the world cube takes 10 hours.
We also made the following observation:
OK/INFO - 1051034 - Logging in user [PGC_ADMIN].
OK/INFO - 1051035 - Last login on Friday, November 29, 2013 10:19:46 AM.
OK/INFO - 1053012 - Object [Pgc_esp] is locked by user [PGC_ADMIN].
OK/INFO - 1053012 - Object [Pgc_mond] is locked by user [PGC_ADMIN].
OK/INFO - 1053012 - Object [54116855] is locked by user [PGC_ADMIN].
OK/INFO - 1053012 - Object [39843334] is locked by user [PGC_ADMIN].
OK/INFO - 1053013 - Object [54116855] unlocked by user [PGC_ADMIN].
OK/INFO - 1053013 - Object [39843334] unlocked by user [PGC_ADMIN].
WARNING - 1241137 - [Target] - Partition definition is not valid: [Cell count mismatch: [1279464568200] area for slice [1] members per dimension [63 1 2 1 6 26 7 245 1 37955 ]].
OK/INFO - 1053012 - Object [25586652] is locked by user [PGC_ADMIN].
OK/INFO - 1053012 - Object [11329970] is locked by user [PGC_ADMIN].
OK/INFO - 1053013 - Object [25586652] unlocked by user [PGC_ADMIN].
OK/INFO - 1053013 - Object [11329970] unlocked by user [PGC_ADMIN].
WARNING - 1241137 - [Source] - Partition definition is not valid: [Cell count mismatch: [47895484140] area for slice [1] members per dimension [63 1 6 7 2173 2 17 1 245 ]].
OK/INFO - 1053013 - Object [Pgc_esp] unlocked by user [PGC_ADMIN].
OK/INFO - 1053013 - Object [Pgc_mond] unlocked by user [PGC_ADMIN].
OK/INFO - 1051037 - Logging out user [PGC_ADMIN], active for 0 minutes.
OK/INFO - 1241124 - Partition replaced.
Given these findings, we need to understand what is happening.
Why do the partitions fail with errors?
Why do we get the message "Partition definition is not valid" in the logs when creating the partition?
Regards,
Oliv.
Hi SreekumarHariharan,
Thanks for your answer, but we have already tried all the solutions proposed in the Essbase FAQ.
a) Increase the values for NETDELAY and NETRETRYCOUNT in the essbase.cfg file, then restart the Essbase server.
We changed the two values in essbase.cfg, but it made no difference. The same error appears.
b) Make sure that all source members and target members used in the partition are in sync.
All members are different between source and target, but a mapping is defined in the partition (see the partition MaxL in my message below).
c) Validate the partition (look at the validation tab; it will give the numbers for each side of the partition, i.e. source area and target area).
Here are the logs from validating the partition:
WARNING - 1241137 - [Target] - Partition definition is not valid: [Cell count mismatch: [1279464568200] area for slice [1] members per dimension [63 1 2 1 6 26 7 245 1 37955 ]].
OK/INFO - 1053012 - Object [25586652] is locked by user [PGC_ADMIN].
OK/INFO - 1053012 - Object [11329970] is locked by user [PGC_ADMIN].
OK/INFO - 1053013 - Object [25586652] unlocked by user [PGC_ADMIN].
OK/INFO - 1053013 - Object [11329970] unlocked by user [PGC_ADMIN].
WARNING - 1241137 - [Source] - Partition definition is not valid: [Cell count mismatch: [47895484140] area for slice [1] members per dimension [63 1 6 7 2173 2 17 1 245 ]].
OK/INFO - 1053013 - Object [Pgc_esp] unlocked by user [PGC_ADMIN].
OK/INFO - 1053013 - Object [Pgc_mond] unlocked by user [PGC_ADMIN].
OK/INFO - 1051037 - Logging out user [PGC_ADMIN], active for 0 minutes.
OK/INFO - 1241124 - Partition replaced.
d) Rerun the partition script again.
The same error appears.
Thanks for your help.
Regards,
Oliv. -
MaxL Command for GETDBSTATS (Esscmd)
Hello,
I see in the documentation that the equivalent of the ESSCMD GETDBSTATS is "query database sample.basic get dbstats data_block".
Can someone let me know if there is a MaxL command to know the average clustering ratio of a DB?
Thanks,
- Krrish
Oops... my bad! I think I figured it out. The same command gives the average clustering ratio information as well.
-
Understanding differences between two cubes that should be equal
Hello everybody:
I have two cubes with the same configuration and outline (one is a copy of the other). On the second cube I added a new sparse dimension (Area) with 3 parent members and about 20 child members (all stored). All are empty except 'N/A' (a parent member), which is the member selected to hold all the information for the rest of the dimensions.
These are the statistics:
Cube 1 Cube 2
Number of existing blocks 28.222 28.222
Block size (B) 42.912 42.912
Potential number of blocks 1.176.252 29.406.300
Existing level 0 blocks 18.270 18.270
Existing upper-level blocks 9.952 9.952
Block density (%) 10,79 10,79
% of maximum blocks exist. 2,4 0,1
Compression ratio 1 1
Average clustering ratio 1 1
Index space (ind files) 8.024 8.024
Data space (pag files) 887.049 1.106.671
Why is there such a difference in space between the cubes? I know that the second has an extra dimension, but data is saved in only one of its members, and it is a sparse dimension. Therefore the block size and number of blocks are the same. And I have restructured both cubes.
There is a calc that takes 8 minutes on cube 1 and 16 minutes on cube 2. The differences show up in two subcalcs:
1) CALC DIM of one sparse dimension (different from 'Area'). It takes twice the time.
2) One member of one dimension (an 'accumulated data' member) based on UDAs. For the first cube it takes 12 seconds and for the second, 5 minutes. How is this difference possible?
Could somebody shed some light on these questions?
Thank you
Regards
Javier
Hello everybody:
TimG, you are right. That is my point. The fragmentation is dense, but I will check it again. I had not checked fragmentation because the second cube is a copy of the first one. I will also check this.
Srinivas, as TimG has said, I am talking about .pag space. I could understand the index being bigger, but not the saved data. I will try with a FIX statement as you say, but I think the data I/O should be the same. When Essbase looks up the index, the information required could be more (old information + new dimension information); but afterwards it reads data blocks that should have the same size, and it has to retrieve the same number of blocks.
And about the calc, it is one like this:
FIX(@UDA("UPE_MS_Last").....)
TD_ACCUM;
ENDFIX
Where TD_ACCUM formula is:
IF (@ISUDA(Accounts,"UPE_MS_Last"))
IF (@ISLEV(PERIOD,0))
TD_MONTHLY;
ENDIF;
ENDIF;
IF (@ISUDA(Accounts,"UPE_TR_Monthly"))
IF (@ISLEV(PERIOD,0))
@SUMRANGE(TD_MONTHLY,@CURRMBRRANGE(PERIOD,LEV,0,,0));
ENDIF;
ENDIF;
IF (@ISUDA(Accounts,"UPE_MS_Sum"))
IF (@ISLEV(PERIOD,0))
@SUMRANGE(TD_MONTHLY,@CURRMBRRANGE(PERIOD,LEV,0,,0));
ENDIF;
ENDIF;
IF (@ISUDA(Accounts,"UPE_MS_Avg"))
IF (@ISLEV(PERIOD,0))
@AVGRANGE(SKIPNONE,TD_MONTHLY,@CURRMBRRANGE(PERIOD,LEV,0,,0));
ENDIF;
ENDIF;
Thank you
Regards
Javier -
Remove fragmentation for large database
I have two databases, each with a page file size close to 80GB and an index of 4GB. The Average Clustering Ratio on them is close to 0.50. I am in a dilemma over how to defragment these databases. I have two options:
1> Export level 0 data, clear all data (using reset), re-import the level 0 data, and fire a calc all.
2> Export all data, clear all data (using reset), and re-import all data.
Here is the situation.
-> Exporting all data runs for 19 hours, hence I could not continue with option 2.
-> Option 1 works fine, but when I fire the calc after loading the level 0 data, the average clustering ratio goes back to 0.50. So the database is fragmented again, and I am back to the point where I started.
How do you guys suggest to handle this situation?
This is Essbase 7 (yeah, it is old).
The below old thread seems to be worth reading:
[Thread: Essbase Defragmentation|http://forums.oracle.com/forums/thread.jspa?threadID=713717&tstart=0]
Cheers,
-Natesh -
Greetings -
I am an Oracle DBA; someone with a DBA background always wants to be proactive in hopes of keeping the database at an optimal level. Essbase is new to me, but I do like the challenge.
My question is educational rather than an issue. I've read somewhere that it is good practice to reorg Essbase once in a while to keep performance at an optimal level. I usually back up my Essbase database, clear it, then re-import it, which has been working fine.
But rather than performing the export/import whenever I feel like it, I'd like to find indicators that would prompt me to do so.
1. Is there a way to check Essbase using a MaxL command to see whether I should reorg the database?
2. What about the .pag vs .ind files (page vs index)? Do I watch their size? If so, what do I look for?
3. At the AAS level, is there a property that would alert me to do a reorg,
etc..
Any input would be very helpful. A white paper or even a book that you can refer me to would be appreciated. With Oracle relational databases, I wrote many scripts, and there are many dictionary tables within the database from which I can inspect its health effectively. I am not sure whether there is anything like that in an Essbase database.
Your valuable input would be very appreciated.
A.J.
In AAS you can view the Average Clustering Ratio (higher is better; 1.0 means no fragmentation) by looking at the DB properties.
Regarding the size of the .pag and .ind files: always ensure the filesystem(s) assigned to the database have over 2 times the existing space for a block storage application. A dense restructure creates temporary files for both page (.pan) and index (.inn) files, which will be similar in size to the originals.
The Essbase DBAG (Database Admin Guide) and all documentation are at the web site below. It has some pretty good documentation on fragmentation and how to understand its level and deal with it. The process you are doing is a valid (good) method of handling fragmentation.
http://download.oracle.com/docs/cd/E10530_01/doc/index.htm
From the DBAG
"In ESSCMD, look at the Average Fragmentation Quotient that is returned when you execute the GETDBSTATS command. Use this table to evaluate whether the level of fragmentation is likely to be causing performance problems:
DB Size              Fragmentation Quotient Threshold
Up to 200 MB         60% or higher
Up to 2 GB           40% or higher
Greater than 2 GB    30% or higher"
MaxL:
login USER_NAME PASS on SERVER;
query database APP_NAME.DB_NAME get dbstats data_block;
You will need to apply some other options in MaxL to make the columns a usable format.
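For example, something along these lines widens the output and captures it to a file (a sketch; user, server, and file names are placeholders):

```
login USER_NAME PASS on SERVER;

/* capture output to a log file */
spool on to 'dbstats.log';

/* widen columns so statistic names and values are not truncated */
set column_width 40;

query database APP_NAME.DB_NAME get dbstats data_block;

spool off;
logout;
```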
Regards,
-John -
Remove database fragmentation in Essbase
Hi,
Can somebody suggest the best option to remove database fragmentation in Essbase on the production server?
I see the Average Clustering Ratio is 0.5, which I think should be higher.
The option I know of:
export data (all) > clear database > load data again.
Regards
Kumar
Either a full restructure or an export, clear, and import of data should remove the fragmentation.
Personally I prefer to export/import.
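The export/clear/import cycle can be sketched in MaxL like this (a sketch only; names and file paths are placeholders, and on older Essbase versions the exact import grammar may differ slightly):

```
login admin password on localhost;

/* 1. export level 0 data only (written to the app directory on the server) */
export database Sample.Basic level0 data to data_file 'lev0.txt';

/* 2. clear all data; the outline is kept */
alter database Sample.Basic reset data;

/* 3. reload the level 0 export, logging any rejected records */
import database Sample.Basic data from server data_file 'lev0.txt'
    on error write to 'lev0.err';

/* 4. rebuild the upper levels */
execute calculation default on Sample.Basic;

logout;
```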
Cheers
John
http://john-goodwin.blogspot.com/ -
Hello All-
Is there a way of measuring Fragmentation in Essbase database ?
Thanks!
Hi-
Thanks for the reply. I used the GETDBSTATS command and got the following:
Average Clustering Ratio : 1
Average Fragmentation Quotient : 52.61024
According to the DBAG, for the Average Clustering Ratio:
The average clustering ratio database statistic indicates the fragmentation level of the data (.pag) files. The maximum value, 1, indicates no fragmentation.
According to the DBAG, for the Average Fragmentation Quotient:
Large (greater than 2 GB): 30% or higher.
Any quotient above the high end of the range indicates that reducing fragmentation may help performance.
Now, judging from my results: according to the Average Clustering Ratio, my database is not fragmented at all. However, going by the Average Fragmentation Quotient, it is fragmented (and I am sure my database is fragmented right now). Do you know whether the Average Clustering Ratio gets a default value of 1 every time the application is restarted or the services are recycled?
Moreover, I was trying to see the "Average Fragmentation Quotient" via EAS and could not find where to view it; I was able to see the Average Clustering Ratio via EAS.
Thanks! -
Hi All,
A few of our calc scripts are running slow for EPM applications.
Some of the calc scripts are running fine, while a few others are running slow.
Can you suggest what needs to be checked?
Thanks
Hi,
The version is not mentioned.
Hope the below tuning methods are helpful:
1. Check that compression settings are still present. In EAS, expand the application and database. Right-click on the database > Edit > Properties > Storage tab. Check that your "Data compression" is not set to "No compression" and that "Pending I/O access mode" is set to "Buffered I/O". Sometimes the compression setting can revert to "no compression", causing the rapid growth of the data files on disk.
2. On the Statistics tab, check the "Average clustering ratio". This should be close to 1. If it is not, restructure your database by right-clicking on it and choosing "Restructure...". This will reduce any fragmentation caused by repeated data imports and exports. Fragmentation naturally reduces performance over time, but this can happen quite quickly when there are many data loads taking place.
3. Check the caches and block sizes.
a.Recommended block size: 8 to 100Kb
b.Recommended Index Cache:
Minimum=1 meg
Default=10 meg
Recommendation=Combined size of all ESS*.IND files if possible; otherwise as large as possible given the available RAM.
c.Recommended Data File Cache:
Minimum=8 meg
Default=32 meg
Recommendation=Combined size of all ESS*.PAG files if possible; otherwise as large as possible given the available RAM, up to a maximum of 2Gb.
NOTE this cache is not used if the database is buffered rather than direct I/O (Check “Storage” tab). Since all Planning databases are buffered, and most customers use buffered for native Essbase applications too, this cache setting is usually not relevant.
d. Recommended Data Cache:
Minimum=3 meg
Default=3 meg
Recommendation=0.125 * Combined size of all ESS*.PAG files, if possible, otherwise as large as possible given the available RAM.
A good indication of the health of the caches can be gained by looking at the “Hit ratio” for the cache on the Statistics tab in EAS. 1.0 is the best possible, lower means lower performance.
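The cache recommendations above can also be applied from MaxL instead of EAS. A sketch, with illustrative sizes (set them per the sizing guidance above, and note changes generally take effect when the database restarts):

```
/* index cache: ideally the combined size of all ESS*.IND files */
alter database Sample.Basic set index_cache_size 256m;

/* data cache: roughly 0.125 * combined size of all ESS*.PAG files */
alter database Sample.Basic set data_cache_size 512m;
```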
4. Check system resources:
Recommended virtual memory setting (NT systems): 2 to 3 times the RAM available. 1.5 times the RAM on older systems.
Recommended disk space:
A minimum of double the combined total of all .IND and .PAG files. You need double because you have to have room for a restructure, which will require twice the usual storage space whilst it is ongoing.
Please see the below document for reference:
Improving the Performance of Business Rules and Calculation Scripts (Doc ID 855821.1)
-Regards,
Priya -
Hi All
I am currently working on 2 cubes in AAS 9.3. The first is an ASO cube, which is also the source of a transparent partition, and the second is the target cube, which is BSO. I am working on optimizing the cubes and would appreciate it if someone could list the areas I need to focus on, along with some techniques I can use to optimize them. I am pasting a copy of the statistics for the BSO cube; please take a look and let me know what could be done.
Thanks in advance
Mik
GetDbState:
---------Database State---------
Description:
Allow Database to Start : Yes
Start Database when Application Starts : Yes
Access Level : None
Data File Cache Size : 167772160
Data Cache Size : 335544320
Aggregate Missing Values : No
Perform two pass calc when [CALC ALL;] : Yes
Create blocks on equation : No
Currency DB Name : N/A
Currency Conversion Type Member : N/A
Currency Conversion Type : N/A
Index Cache Size : 83886080
Index Page Size : 8192
Cache Memory Locking : Disabled
Data Compression on Disk : Yes
Data Compression Type : BitMap Compression
Retrieval Buffer Size (in K) : 1000
Retrieval Sort Buffer Size (in K) : 1000
Isolation Level : Uncommitted Access
Pre Image Access : Yes
Time Out after : 20 sec.
Number of blocks modified before internal commit : 3000
Number of rows to data load before internal commit : 0
Number of disk volume definitions : 0
I/O Access Mode (pending) : Buffered
I/O Access Mode (in use) : Buffered
Direct I/O Type (in use) : N/A
GetDbStats:
-------Statistics of App2:DB2 -------
Dimension Name Type Declared Size Actual Size
===================================================================
Period DENSE 2488 366
Measures DENSE 84 57
Disc SPARSE 320 318
CNon SPARSE 91 90
DType SPARSE 108 108
Product SPARSE 334 295
Region SPARSE 301 300
Sales SPARSE 19 19
Year SPARSE 7 6
Version SPARSE 5 4
Number of dimensions : 10
Declared Block Size : 208992
Actual Block Size : 20862
Declared Maximum Blocks : 2.10256646746E+014
Actual Maximum Blocks : 1.2473878176E+014
Number of Non Missing Leaf Blocks : 0
Number of Non Missing Non Leaf Blocks : 3560
Number of Total Blocks : 3560
Index Type : B+ TREE
Average Block Density : 0.05752085
Average Sparse Density : 2.853964E-009
Block Compression Ratio : 0.001341574
Average Clustering Ratio : 0.3333445
Average Fragmentation Quotient : 1.406705
Free Space Recovery is Needed : No
Estimated Bytes of Recoverable Free Space : 0
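The block figures in this listing follow directly from the dimension table: block size (in cells) is the product of the dense dimension sizes, and the maximum block counts are the product of the sparse dimension sizes. A quick check, using the declared/actual sizes from the GetDbStats output above:

```python
from functools import reduce
from operator import mul

# (declared, actual) sizes copied from the GetDbStats listing above
dense = {"Period": (2488, 366), "Measures": (84, 57)}
sparse = {
    "Disc": (320, 318), "CNon": (91, 90), "DType": (108, 108),
    "Product": (334, 295), "Region": (301, 300), "Sales": (19, 19),
    "Year": (7, 6), "Version": (5, 4),
}

def product(values):
    return reduce(mul, values, 1)

# Block size in cells = product of dense dimension sizes
declared_block_size = product(d for d, _ in dense.values())
actual_block_size = product(a for _, a in dense.values())

# Potential block count = product of sparse dimension sizes
declared_max_blocks = product(d for d, _ in sparse.values())
actual_max_blocks = product(a for _, a in sparse.values())
```

Running this reproduces the listing exactly: 208992 and 20862 cells per block, and 2.10256646746E+14 / 1.2473878176E+14 maximum blocks.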
GetDbInfo:
----- Database Information -----
Name : DB2
Application Name : App2
Database Type : NORMAL
Status : Loaded
Elapsed Db Time : 02:11:44:49
Users Connected : 2
Blocks Locked : 0
Dimensions : 10
Data Status : Data has not been modified since last calculation.
Data File Cache Size Setting : 0
Current Data File Cache Size : 0
Data Cache Size Setting : 335460960
Current Data Cache Size : 335460960
Index Cache Size Setting : 83886080
Current Index Cache Size : 83886080
Index Page Size Setting : 8192
Current Index Page Size : 8192
Cache Memory Locking : Disabled
Database State : Read-write
Data Compression on Disk : Yes
Data Compression Type : BitMap Compression
Retrieval Buffer Size (in K) : 1000
Retrieval Sort Buffer Size (in K) : 1000
Isolation Level : Uncommitted Access
Pre Image Access : No
Time Out : Never
Number of blocks modified before internal commit : 3000
Number of rows to data load before internal commit : 0
Number of disk volume definitions : 0
Currency Info
Currency Country Dimension Member :
Currency Time Dimension Member : Period
Currency Category Dimension Member : Measures
Currency Type Dimension Member :
Currency Partition Member :
Request Info
Request Type : Data Load
User Name : 123admin
Start Time : Sun Feb 10 23:28:42 2008
End Time : Sun Feb 10 23:28:44 2008
Request Type : Customized Calculation
User Name : 123admin
Start Time : Sun Feb 10 23:29:12 2008
End Time : Sun Feb 10 23:29:17 2008
Request Type : Outline Update
User Name : 123admin
Start Time : Sun Feb 10 03:53:55 2008
End Time : Sun Feb 10 03:53:56 2008
Output:
End output file:
Yes, I am creating periods before the actual data. Here's how it looks:
Period Time <17> (Alias: YTD) (Dynamic Calc)
Jan (+) <31> (Alias: January) (Dynamic Calc)
Feb (+) <31> (Alias: February) (Dynamic Calc)
Mar (+) <29> (Alias: March) (Dynamic Calc)
Apr (+) <31> (Alias: April) (Dynamic Calc)
May (+) <30> (Alias: May) (Dynamic Calc)
Jun (+) <31> (Alias: June) (Dynamic Calc)
Jul (+) <30> (Alias: July) (Dynamic Calc)
Aug (+) <31> (Alias: August) (Dynamic Calc)
Sep (+) <31> (Alias: September) (Dynamic Calc)
Oct (+) <30> (Alias: October) (Dynamic Calc)
Nov (+) <31> (Alias: November) (Dynamic Calc)
Dec (+) <30> (Alias: December) (Dynamic Calc)
Weeks (~) <5> (Label Only)
Qtr1 (~) <3> (Alias: Quarter 1) (Dynamic Calc)
Qtr2 (~) <3> (Alias: Quarter 2) (Dynamic Calc)
Qtr3 (~) <3> (Alias: Quarter 3) (Dynamic Calc)
Qtr4 (~) <3> (Alias: Quarter 4) (Dynamic Calc) -
Why is Compression so low?
Hi,
We got a very low compression ratio in our database. What are the factors which will effect the compression ratio?
Here are our DBStat details for your reference.
---------Database State---------
Description:
Allow Database to Start : Yes
Start Database when Application Starts : Yes
Access Level : None
Data File Cache Size : 33554432
Data Cache Size : 1024000000
Aggregate Missing values : Yes
Perform two pass calc when [CALC ALL;] : No
Create blocks on equation : No
Currency DB Name : N/A
Currency Conversion Type Member : N/A
Currency Conversion Type : N/A
Index Cache Size : 102400000
Index Page Size : 8192
Cache Memory Locking : Disabled
Data Compression on Disk : Yes
Data Compression Type : BitMap Compression
Retrieval Buffer Size (in K) : 10
Retrieval Sort Buffer Size (in K) : 10
Isolation Level : Uncommitted Access
Pre Image Access : Yes
Time Out after : 20 sec.
Number of blocks modified before internal commit : 3000
Number of rows to data load before internal commit : 0
Number of disk volume definitions : 0
I/O Access Mode (pending) : Buffered
I/O Access Mode (in use) : Buffered
Direct I/O Type (in use) : N/A
-------Statistics of App:DB -------
Dimension Name Type Declared Size Actual Size
===================================================================
Measures DENSE 5 4
Scenario DENSE 2 1
Account DENSE 10830 8791
All Years SPARSE 4 3
Time SPARSE 51 14
Source SPARSE 29 25
BU SPARSE 384 380
Product SPARSE 3073 2962
Department SPARSE 14124 13629
Number of dimensions : 9
Declared Block Size : 108300
Actual Block Size : 35164
Declared Maximum Blocks : 9.86006229627E+013
Actual Maximum Blocks : 5.752596465E+013
Number of Non Missing Leaf Blocks : 3119607
Number of Non Missing Non Leaf Blocks : 143700950
Number of Total Blocks : 146820557
Index Type : B+ TREE
Average Block Density : 0.121687
Average Sparse Density : 0.0002552249
Block Compression Ratio : 0.002126631
Average Clustering Ratio : 0.3870301
Average Fragmentation Quotient : 0.05602184
Free Space Recovery is Needed : No
Estimated Bytes of Recoverable Free Space : 0
----- Database Information -----
Name : DB
Application Name : APP
Database Type : NORMAL
Status : Loaded
Elapsed Db Time : 03:22:59:31
Users Connected : 11
Blocks Locked : 0
Dimensions : 9
Data Status : Data has been modified since last calculation.
Data File Cache Size Setting : 0
Current Data File Cache Size : 0
Data Cache Size Setting : 1023975680
Current Data Cache Size : 1023975680
Index Cache Size Setting : 102400000
Current Index Cache Size : 102400000
Index Page Size Setting : 8192
Current Index Page Size : 8192
Cache Memory Locking : Disabled
Database State : Read-write
Data Compression on Disk : Yes
Data Compression Type : BitMap Compression
Retrieval Buffer Size (in K) : 10
Retrieval Sort Buffer Size (in K) : 10
Isolation Level : Uncommitted Access
Pre Image Access : No
Time Out : Never
Number of blocks modified before internal commit : 3000
Number of rows to data load before internal commit : 0
Number of disk volume definitions : 0
Currency Info
Currency Country Dimension Member :
Currency Time Dimension Member : Time
Currency Category Dimension Member : Measures
Currency Type Dimension Member :
Currency Partition Member :
Request Info
Request Type : Data Load
User Name : admin
Start Time : Fri Apr 25 19:41:58 2008
End Time : Fri Apr 25 20:06:22 2008
Request Type : Customized Calculation
User Name : admin
Start Time : Fri Apr 25 20:20:37 2008
End Time : Sat Apr 26 06:24:23 2008
Request Type : Outline Update
User Name : admin
Start Time : Fri Apr 25 20:19:52 2008
End Time : Fri Apr 25 20:19:58 2008
Can you please direct me what iam missing here?
Thanks
Bhaskar

Hi Dave,
Looks like the dbstats are not in bytes. But when I see the database statistics in Admin Services the block size is 281312 B, which is 4 x 1 x 8791 x 8 as per the calculation. The block size numbers from dbstats are
Declared block size = 5 x 2 x 10830 = 108300
Actual block size = 4 x 1 x 8791 = 35164
So the block size is valid. Product and Department are very sparse dimensions. I didn't quite understand your statement "many of the db ratio statistics (including compression) were recorded on the blocks in memory and not the db as a whole". Can you please explain what it means?
Thanks
Bhaskar
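Bhaskar's arithmetic checks out: a block holds the product of the dense dimension sizes in cells, at 8 bytes per cell uncompressed. A rough back-of-envelope for what bitmap compression *should* achieve can be added on top: bitmap compression stores roughly one bit per cell (the bitmap) plus 8 bytes per non-missing cell, so the expected compression ratio tracks the block density. This is a simplified model (real headers and overheads differ), with figures taken from the GetDbStats listing above:

```python
BYTES_PER_CELL = 8  # each Essbase cell is an 8-byte double

# Dense actual sizes from the statistics: Measures x Scenario x Account
actual_cells = 4 * 1 * 8791
declared_cells = 5 * 2 * 10830

uncompressed_block_bytes = actual_cells * BYTES_PER_CELL  # what EAS reports

def bitmap_compressed_estimate(cells, density):
    """Rough bitmap-compressed block size: 1 bit per cell + 8 B per non-missing cell."""
    bitmap_bytes = cells / 8
    data_bytes = cells * density * BYTES_PER_CELL
    return bitmap_bytes + data_bytes

density = 0.121687  # Average Block Density from the listing
estimated_ratio = bitmap_compressed_estimate(actual_cells, density) / uncompressed_block_bytes
```

Under this model the expected ratio is roughly 0.14, far above the reported Block Compression Ratio of 0.002126631; that gap is consistent with Dave's point that some of the ratio statistics are recorded on the blocks in memory rather than the database as a whole.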