Essbase Database Size

Hi, Essbase version 11.1.1.3. We have 3 cubes where 2 cubes get consolidated into a 3rd cube: PRME (14 page files) and CCE (17 page files), but CONSOL is 70 page files.
CCE + PRME = CONSOL.
Is there any way to reduce this size (70 page files)? I tried exporting level 0 from CONSOL, reimporting, and running "CALC ALL", but the size stayed the same.
CONSOL
Dimension     Type     Members in dim     Members stored
Account     Dense     2253     1899
Period     Dense     35     13
Entity_bu     Sparse     126     125
Years     Sparse     11     11
Ds     Sparse     98     49
MS     Sparse     33     32
Costcenter     Sparse     1526     1070
scenario     Sparse     32     12
product     Sparse     576     320
version     Sparse     25     25

The compression setting for all 3 cubes is bitmap encoding. The dimension order for the 2 source cubes is the same, but the CONSOL cube's order is different.
CONSOL
Account     Dense
Period     Dense
Entity_bu     Sparse
Years     Sparse
Ds     Sparse
MS     Sparse
Costcenter     Sparse
scenario     Sparse
product     Sparse
version     Sparse
I am confused: in an optimized model, the dense dimension order should run from the largest to the smallest number of members. But in our case the 2 (Planning) cubes, CCE and PRME, have the dense order Period then Account, while the CONSOL (CCE + PRME) cube has Account first and then Period. Should I update CONSOL to match PRME and CCE, then reload the data and aggregate it?
Glenn, as you asked: this is only for reporting purposes. The cube is created through Planning, but data loads happen through Essbase. If I have to create an ASO cube, I need to pull it out of Planning; should I then create and maintain the cube in Essbase only?
Thanks
Radhika
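
If you do decide to test the reorder, a minimal MaxL sketch of the reload cycle might look like the following; Consol.Consol is a hypothetical app.db name, the file names are placeholders, and the dense/sparse order change itself is made in the outline via EAS or a Planning refresh, not in MaxL:

    /* back up level-0 data before touching the outline */
    export database Consol.Consol level0 data to data_file 'consol_lev0.txt';
    /* after the dimension order has been changed in the outline, clear and reload */
    alter database Consol.Consol reset data;
    import database Consol.Consol data from data_file 'consol_lev0.txt' on error write to 'consol_load.err';
    /* aggregate again */
    execute calculation default on Consol.Consol;

This is the same level-0 export/reload cycle you already ran; the only difference is that the outline's dense order is changed before the reload.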

Similar Messages

  • Using MaxL in batch script to get Essbase database size before proceeding

    I have a batch script which dynamically generates some MaxL before passing it to ESSMSH. I would like to insert a command at the beginning of the MaxL script, to get and assess the size of the database before proceeding. If the database is empty I would like the script to quit. This is to prevent the script from exporting data and overwriting the previous export file if the database is empty.
    The generated MaxL (modified) is currently as follows. What logic can I add to the MaxL script, to assess the database size and go to the "errorhandler" label if it is empty? Is this even possible?
    login %USER% identified by %PASS% on %SERVER%;
    iferror "errorhandler";
    /* log file */
    spool stdout on to '<directory>.log';
    iferror "errorhandler";
    /* error file */
    spool stderr on to '<directory>.err';
    iferror "errorhandler";
    /* export data */
    set timestamp on;
    execute calculation %APP%.%DB%.C_Export;
    iferror "errorhandler";
    /* reset database */
    alter database %APP%.%DB% reset data;
    iferror "errorhandler";
    /* import data */
    import database %APP%.%DB% data from server data_file "<filename>.txt" on error abort;
    iferror "errorhandler";
    /* calculate database */
    execute calculation default on %APP%.%DB%;
    iferror "errorhandler";
    /* error handler */
    spool off;
    define label "errorhandler";
    logout;
    exit;

    Hi Stuart,
    Yes I was aware of display database "app"."db"; however this will output a table rather than a specific value. And what logic can I then add to the MaxL script to assess the specific value of "Db Status" and go to the "errorhandler" label if it is empty?
    It would seem this needs to be done via a VBScript, to loop through the file rows to find the value before proceeding accordingly. But I was hoping there might be a way to do this within MaxL.
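    One hedged workaround, since MaxL itself cannot branch on the value of a statistic: run a small preliminary script like the sketch below (reusing the same %VARIABLE% substitution as the generated script), spool the statistics to a text file, and let the calling batch script parse the line reporting the number of existing blocks, skipping the export when it is zero. The file name 'dbstats.txt' and the parsing step on the batch side are assumptions.
    login %USER% identified by %PASS% on %SERVER%;
    /* write the block statistics to a file the batch wrapper can parse */
    spool on to 'dbstats.txt';
    query database %APP%.%DB% get dbstats data_block;
    spool off;
    logout;
    exit;
    The comparison itself still has to happen in the batch (or VBScript) wrapper, which matches your conclusion that pure MaxL cannot do it.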

  • Essbase database has exploded!

    Hi,
    We have an Essbase database which used to be around 1.9 GB until a few days ago. Recently the Scenario dimension was made dense to improve performance. This should have reduced the total number of blocks and increased the block size. In our case the total number of blocks has increased many times, hence the size has increased by 2.5 times. There has been an increase in our block size (which is something that is understandable).
    Also, the hourglass dimension order was changed to the hourglass-on-a-stick model. I am not sure if this would have caused this.
    There have been no changes to the outline.
    We are still investigating but any input would really help get to the bottom of this.
    Thanks,
    Amol

    Something doesn't make sense here.
    Recently the scenario dimension was made dense to improve performance. This should have reduced the total number of blocks and increased the block size. In our case the total number of blocks has increased many times - hence the size has increased by 2.5 times. There has been an increase to our block size (which is something that is understandable).
    Think about this (yes, I know, this is why you posted the question) -- you have taken a sparse dimension and made it dense -- if that is the only action, it must reduce the number of blocks. How can you have gotten both a larger block size and an increased number of blocks? The only answer I can come up with is block change + additional data. Compression has nothing to do with the number of blocks or their size (okay, the logical size, not the physically stored one). I'll also throw in that maybe the number of members in the still-sparse dimensions also grew and were loaded.
    However, if you made the block bigger (and I would question Scenario being dense, but I don't have access to your database) through a dense Scenario, and more data (more months, years, days, products, whatever) then sure, your IND and PAG files could get so big that at least the PAG file breaks the 2 gig limit and spools to new files. If your data storage has been so defined, it could be on separate volumes.
    The above change is why we have test, QA, and production environments, right? :)
    Of course, I've had clients tell me, with only a slight trace of irony, "There's no test like production," so maybe you're in the same boat.
    Regards,
    Cameron Lackpour
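    To make the block arithmetic concrete, here is a small illustration with made-up member counts (a BSO cell is 8 bytes in the expanded block; the potential block count is the product of the stored members of the sparse dimensions):
    Before: dense  = Account(100) x Period(12)                -> block size = 100 x 12 x 8 bytes = 9.6 KB
            sparse = Scenario(4) x Entity(50) x Product(200)  -> up to 4 x 50 x 200 = 40,000 blocks
    After making Scenario dense:
            dense  = Account(100) x Period(12) x Scenario(4)  -> block size = 38.4 KB
            sparse = Entity(50) x Product(200)                -> up to 50 x 200 = 10,000 blocks
    So, all else being equal, the block count can only go down when a sparse dimension is made dense; if it went up, something else (more data, more sparse members) changed as well.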

  • SQL azure database size not dropping down after deleting all table

    Dear all,
    I have a simple database on Azure for which I have deleted all table data. The size of the database is still showing 5 MB of data and I am charged for that. I have heard that this may happen from a clustered index getting fragmented.
    I have run a query I found on the internet against all my table indexes to show the percentage of fragmentation, and all report 0%.
    DBA work is not really my job, but what could it be, or how can I reduce that size?
    On premises I would use a shrink/compact operation, but that is not available in Azure, like some other DB actions.
    Thanks for the tips,
    regards

    User-created objects/data are not the only things stored in your database; you also have system objects and metadata, as Mike mentions above.
    Are you trying to avoid being charged if you're not storing data? Looking at the pricing table, you'll still be charged the $4.995 for the 0-100 MB database size range.

  • Unable to access Essbase Database

    Hi,
    The following Essbase error comes up when data is fetched from the Essbase database by a report:
    Failed to open database 'Database1' on server 'URL'
    Error reported by Essbase.
    Additional information:
    Syntax error loading filters - operation canceled
    Kindly tell me what could be the reasons behind it?
    Regards,
    Atul K

    Here is a post from the network54 board that discusses this problem. I used to get that error after I suspended my laptop, then restarted and was assigned a new IP address.
    Tim Tow
    Oracle ACE
    Applied OLAP, Inc

  • Unable to open Essbase Database

    Hi,
    The following Essbase error comes up when data is fetched from the Essbase database by a report:
    Failed to open database 'Database1' on server 'URL'
    Error reported by Essbase.
    Additional information:
    Syntax error loading filters - operation canceled
    Kindly tell me what could be the reasons behind it?
    Regards,
    Atul K

    Hi,
    The following could be the reasons for your problem:
    1- Try to restart the application and then try to start the database.
    2- If point 1 does not work, the application process 'ESSSVR' for that application may be hung. Stop all apps/DBs, check in Task Manager whether any ESSSVR process is still running, and kill it if so.
    3- Restart the Essbase server and try to restart the database again.
    4- If the problem still persists, your database may be corrupted; please restore it from backup.
    5- Also, from your posted error it looks like the Essbase internal application $DM_APP$ is either missing or corrupted, so please rebuild it in the following way:
    5.1. Shut down Essbase.
    5.2. Rename essbase.sec to essbasebackup.sec.
    5.3. Rename the $DM_APP$ directory, e.g., to Backup_$DM_APP$, if it exists; if it doesn't, do not worry about it.
    5.4. Restart Essbase. At this point a new essbase.sec is created, along with a new $DM_APP$ directory and the files associated with it.
    5.5. From EAS, make sure the Data Mining part is working.
    5.6. Shut down Essbase.
    5.7. Rename essbase.sec to essbasenew.sec.
    5.8. Rename essbasebackup.sec to essbase.sec.
    5.9. Start Essbase.
    5.10. Check to make sure the Data Mining part is working.
    5.11. Try renaming the application or loading the database.
    5.12. Once everything is working, remove essbasenew.sec.
    Hope this helps you.
    Atul K
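    Since the error message points at a filter, it may also be worth inspecting the security filters on the database before rebuilding anything. A minimal MaxL sketch, with hypothetical app/db/filter names:
    /* list the security filters defined on the database */
    display filter on database Sample.Basic;
    /* if a particular filter turns out to be corrupt, drop it and recreate it from your security scripts */
    drop filter Sample.Basic.badFilter;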

  • Tables are deleted but database size does not change in sql server 2008r2

    Hi All,
    20 GB of tables were deleted from my database, but the database size does not change and the disk is showing the same size.

    Hi,
    I ran the Disk Usage by Top Tables report and identified a couple of tables with unwanted data from the last 5 years. I deleted the data for the first 3 years and then ran the Disk Usage by Top Tables report again. When I compared the reports from before and after the data deletion, I noticed certain facts which do not match what I know or have learned from experts like you. The following are the points where I am looking for clarification:
    1. Reserved (KB) has been reduced. I was expecting Reserved (KB) to remain the same after the data deletion.
    2. The Data (KB) and Indexes (KB) fields have been reduced as expected. The Unused (KB) field has increased as expected.
    I was expecting the total space gained in the Data (KB) and Indexes (KB) fields to equal the space gained in the Unused (KB) field after deleting the data, but that is not the case. When I subtract the difference in the Reserved (KB) field (before and after the data deletion) from the total space gained by the data deletion, it equals the gain in the Unused (KB) field.
    I am not a SQL expert and am not questioning anything, just trying to understand whether we really gain space by deleting data from tables. I am also keen to get the concepts right, but my test of deleting some records confused me.
    Looking forward to all your expert advice.
    Thanks,
    Vennayat

  • What is the best practice on mailbox database size in exchange 2013

    Hi, 
    Does anybody have any links to good sites that give some pros and cons regarding mailbox database sizes in Exchange 2013? I've tried to Google it but haven't found any good answers. I would like to know whether or not I really need more than 5 mailbox databases in my Exchange environment.

    Hi,
    As far as I know, 2 TB is the recommended maximum database size for Exchange 2013 databases.
    Terence Yu
    TechNet Community Support

  • Error While Creating Essbase Database From Hyperion Planning

    Hi,
    While creating the Essbase database from 'Manage Database' in Hyperion Planning, I am getting the following error:
    com.hyperion.planning.olap.EssbaseException: Account (1060000)
    It gets stuck at Adding Dimensions.
    I have tried reconfiguring Planning, but no luck.
    My relational repository is MS SQL Server 2005, and Essbase and Shared Services are on a Linux box.
    I am getting the following error in the Planning log
    [12-Nov-2009 10:50:41]: Propegating external event[ FROM_ID: 68b6dbf1 Class: class com.hyperion.planning.sql.HspLock Object Type: -1 Primary Key: 50001 ]
    [12-Nov-2009 10:50:41]: Processing cube: Plan1
    [12-Nov-2009 10:50:41]: Setting System CFG properties for Attribute Dimensions
    [12-Nov-2009 10:50:41]: Adding dimension: Account
    [12-Nov-2009 10:50:41]: Closing outlines
    com.hyperion.planning.olap.EssbaseException: Account (1060000)
         at com.hyperion.planning.olap.HspEssbaseOutlineAPI.EssAddMemberEx(Native Method)
         at com.hyperion.planning.olap.HspCubeRefreshTask.addDimension(Unknown Source)
         at com.hyperion.planning.olap.HspCubeRefreshTask.addDimensionsAndMembers(Unknown Source)
         at com.hyperion.planning.olap.HspCubeRefreshTask.buildOutlines(Unknown Source)
         at com.hyperion.planning.olap.HspCubeRefreshTask.run(Unknown Source)
         at com.hyperion.planning.HspJSImpl.runCubeRefresh(Unknown Source)
         at com.hyperion.planning.HyperionPlanningBean.runCubeRefresh(Unknown Source)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:585)
         at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:294)
         at sun.rmi.transport.Transport$1.run(Transport.java:153)
         at java.security.AccessController.doPrivileged(Native Method)
         at sun.rmi.transport.Transport.serviceCall(Transport.java:149)
         at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:466)
         at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:707)
         at java.lang.Thread.run(Thread.java:595)
    Thanks for your help.
    Edited by: user524093 on Nov 12, 2009 4:38 PM

    Have you tried restarting the services and giving it a try?
    Which version of Essbase/Planning are you on?
    If your Essbase server name has more than 30 characters, please use NODENAME with short server name.

  • Need help in changing of Essbase Database

    Hello All,
    I have been given the task of making a few changes to an Essbase 7 database. Briefly, the task is as follows:
    1- There is already one application, XYZ, and its database exists only for HH (AA_TT / AA_TT).
    2- I have to create a new ABC Essbase application for 5 divisions of this company. Let's say the divisions are (AA, BB, CC).
    3- The new ABC Essbase application will be modeled after the existing AA_TT database.
    4- The changes I need to make in the Essbase database are as follows:
    (a) Load the current Time dimension (M01, M02, M03, etc.)
    (b) Load the current Division dimensions
    (c) Load the current Per. Code dimensions
    Since I am new and in learning mode, can someone please guide me on point number 4:
    1- What do I need to do?
    2- What steps do I need to follow?
    3- What do I need in order to finish these tasks?
    If you have anything beyond my thoughts and want to share it, please do, so I can finish this task as quickly as possible.
    I want to learn, and I like to study too, but right now I do not know how to start this task. I feel like I am in the middle of nowhere: I know where I am, but not where to go.
    Your positive reply will be appreciated.
    Thank you very much in advance.
    Fez

    As for adding members to the outline, you can either do it manually (not too painful for a Time Periods dimension), or if you have the members you need in a file, or available via SQL, then load 'em up using a Load Rule. Using the Parent/Child relationship is one of the preferred methods, but that means your source data needs to be in a parent / child relationship.
    There are other ways in the load rule, Generation, Level, etc, but if your source data isn't very good these methods of building the dimension can be a little harder to control.
    Here's a link to the DBAG for dataload rules.
    http://download.oracle.com/docs/cd/E10530_01/doc/epm.931/html_esb_dbag/frameset.htm?ddlintro.htm
    Edited by: RobertR3 on Apr 13, 2011 8:36 AM
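    As a concrete illustration of the load-rule approach, a parent/child dimension build driven from MaxL might look something like the sketch below; the application and database names (ABC.Plan1), the data file, and the rules file name are all placeholders, and the rules file itself is built in EAS:
    /* build the Division dimension from a parent/child flat file using a load rule */
    import database ABC.Plan1 dimensions
        from server text data_file 'divisions.txt'
        using server rules_file 'DivBld'
        on error write to 'dimbuild.err';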

  • How to get the database size of several system

    Hi
    We've got a lot of database systems in our landscape. I want to make a simple report to get the sizes of all our databases on a weekly basis.
    I tried to get an RZ20 value for this without success.
    Do you know an SQL request that I could run on all my databases to get this information? I didn't find any V$ view with this information.
    Thanks for your help,
    Florent

    "The database size" can be
    - the total size
    - the filled size
    - the filled size without undo/temp
    You can use DBACOCKPIT to centrally manage multiple instance and check the sizes.
    Markus

  • How to determine the database size corresponding to the nber records in DSO

    Hi Colleagues,
    I would like to determine the database size corresponding to my new BI project.
    I know the number of records uploaded into the DSO from the source system for the initialization phase.
    How can I deduce the database size / disk size corresponding to the number of records uploaded?
    Thanks,

    Hi Ram,
    I am on SAP BI release SAPKW70019.
    I do not have the Single Table Analysis option.
    In DB02 or ST04 I have the following options:
    - Space
    -- Space overview
    -> Database
    -- Overview
    -> Users
    -- Overview
    -- Detailed analysis
    -> Tablespaces
    -- Overview
    -- Detailed analysis
    -> Segments
    -- Overview
    -- Detailed analysis
    -- Detailed Analysis Aggregated
    -> Additional Functions
    -- Collector Logs
    -- BW Analysis
    Which of these should I go through?
    Thanks

  • Sharepoint 2010 - configuration database size limit

    Hi,
    We have the scenario below in our production farm. Ours is a SharePoint 2010 Enterprise Edition farm.
    Medium size farm (4 WFE, 4 App Servers, Content DBs in a separate tier. No SharePoint Search in this farm)
    The configuration DB has reached 250+ GB. Please clarify the queries below.
    1. What is the boundary limit for the configuration database size?
    2. How can we scale the configuration database?
    3. Can we add multiple configuration databases per farm?
    4. What are the best practices and regular maintenance activities to be done to keep the configuration DB size under control?
    It would be really great if someone could share some good knowledge in this area.

    1) Not aware of one
    2) You can add multiple files to the file group
    3) No
    Is the growth in the MDF or the LDF (log file)? If it is in the log file, are you using high availability (Clustering, Mirroring, Log Shipping, AlwaysOn)? If so, you need to run a BACKUP LOG periodically to maintain the LDF size, or allow you to truncate the LDF.
    Trevor Seward
    Follow or contact me at...
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

  • Best Trategy to reduce the Database Size

    Hi Everyone,
    In our client's landscape, the SAP systems have been upgraded to newer versions, but the client wants to retain one copy of each of the older production systems:
    1) SAP R/3 4.6C system (database size approx. 2 TB)
    2) SAP BW 3.0 (database size approx. 2 TB)
    Now the client wants us to reduce the database size via reorganization, because we have already archived the IDocs and links.
    The client has recommended:
    1) Oracle export/import: only an Oracle DBA can do this (ignore this one)
    2) Database reorganization: we have tried reorganization via BRtools but found it very tedious (ignore this one)
    3) SAP export/import: this is the way we want to reduce the database size
    Can anybody tell us how much free space we require at OS level to store the database exports of two databases totalling around 4 TB, and what would be the best strategy for reducing the database size?
    Roughly how much would the database size be reduced via SAP export/import?
    Thanks & Regards
    Deepak Gosain

    Hi,
    >Can anybody tell us how much free space we require at OS level to store the database exports
    >of two databases totalling around 4 TB, and what would be the best strategy for reducing the database size?
    The only realistic way to know is to do a system copy of the production system on a testbed system and to test the Database Export.
    If you really want to decrease the database size you will have to archive a lot more than the IDOC archiving object.
    Regards,
    Olivier

  • Best practice on mailbox database size & we need how many server for deployment exchange server 2013

    Dear all,
    We have an environment running Microsoft Exchange Server 2007 with the following specification:
    4 servers: Hub&CAS1 & Hub&CAS2 & Mailbox1 & Mailbox2 
    Operating System : Microsoft Windows Server 2003 R2 Enterprise x64
    6 mailbox databases
    1500 Mailboxes
    We need to upgrade our Exchange environment from 2007 to 2013 to fulfill the following requirements:
    I want to upgrade Exchange Server 2007 to Exchange Server 2013 and implement the following details:
    1500 mailboxes
    10GB or 15GB mailbox quota for each user
    How many servers and databases are required for this migration?
    Number of the servers:
    Number of the databases:
    Size of each database:
    Many thanks.

    You will also need to check server role requirement in exchange 2013. Please go through this link to calculate the server role requirement : http://blogs.technet.com/b/exchange/archive/2013/05/14/released-exchange-2013-server-role-requirements-calculator.aspx
    2 TB is the recommended maximum database size for Exchange 2013 databases.
    Here is the complete checklist to upgrade from exchange 2007 to 2013 : http://technet.microsoft.com/en-us/library/ff805032%28v=exchg.150%29.aspx
    Meanwhile, to reduce the risk and time consumed during the migration, you can have a look at this application (http://www.exchangemigrationtool.com/), which could also be a good approach for 1500 users. It can help ensure data security during the migration between Exchange 2007 and 2013.
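    As a rough back-of-the-envelope check, assuming the larger 15 GB quota is eventually consumed and the 2 TB guideline above is used as the ceiling:
    1500 mailboxes x 15 GB quota  = ~22.5 TB of potential mailbox data
    22.5 TB / 2 TB per database   = roughly 12 databases, before allowing for overhead, white space, and growth headroom
    The actual database and server counts should still come from the role requirements calculator linked above, which also factors in items like IOPS and database copies.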
