Regarding Cube size

Hi
Please tell me "how to know cube size in essbase "
i mean where we can see cube size
Regards
Siva

Hi,
Database -> properties -> storage tab will not give you the size of your current cube. Rather, it gives you a clear picture of how much storage you have accommodated, and on which drive.
Sandeep Reddy Enti
HCC
http://analytiks.blogspot.com
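A quick way to gauge a BSO cube's actual size is the combined size of its essn.pag and essn.ind files in the database directory. A hedged MaxL sketch of the same check (application and database names hypothetical):
/* report the page and index data file sizes for the database */
query database Sample.Basic get dbstats data_file_info;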

Similar Messages

  • How to calculate the HFM Cube size in SQL Server-2005

    Hi
    How to calculate the HFM Cube size in SQL Server-2005 ?
    The query below is used for Oracle. What is the equivalent query for SQL Server?
    SQL> select sum(bytes/1024/1024) from dba_segments where segment_name like 'FINANCIAL_%' and owner='HFM';
    SUM(BYTES/1024/1024)
    SQL> select sum(bytes/1024/1024) from dba_segments where segment_name like 'HSV FINANCIAL%' and owner='HFM';
    SUM(BYTES/1024/1024)
    Regards
    Smilee

    What is your objective? The subcube in HFM is a concept which applies to the application tier - not so much to the database tier. The size of the subcube is the unique number of data strips (data values for January - December inclusive, for example) for the given entity, currency triplet or Parent.Child node. You have to account for parent accounts and customs which don't exist in the database but are generated in RAM in the application tier.
    So, if your objective is to find the largest subcubes, you could do this by querying the database and counting the number of records per entity/value combination (DCE tables) or parent.child entity combination (DCN tables). I'm not versed in SQL, but I think the script you posted would just tell you the schema size and not the subcube sizes.
    Check out Accelatis.com for a third party software product that can do this for you. The feature is called the Subcube Analyzer and was written by the same team that wrote HFM, so they ought to know how this works :-)
    --chris
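    For the literal translation of the Oracle query above, a hedged SQL Server 2005 sketch using the catalog views (the FINANCIAL_ table-name pattern is assumed to match the Oracle version; like the Oracle query, this reports schema size, not subcube size):
    -- total size in MB of all tables whose names start with FINANCIAL_
    -- ([_] escapes the underscore, which is otherwise a LIKE wildcard)
    SELECT SUM(a.total_pages) * 8 / 1024.0 AS size_mb
    FROM sys.tables t
    JOIN sys.indexes i ON i.object_id = t.object_id
    JOIN sys.partitions p ON p.object_id = i.object_id AND p.index_id = i.index_id
    JOIN sys.allocation_units a ON a.container_id = p.partition_id
    WHERE t.name LIKE 'FINANCIAL[_]%';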

  • My cube size keeps on increasing in Essbase; how do I handle this?

    For example: on day 1 my cube size is 60 GB; after 2 days it grows to 80 GB. How do I handle this in Essbase?

    Maybe you can try a restructure of the cube. It will take some time, and while the restructure is being performed no one will be able to access the cube.
    All the .pag files will be consolidated into fewer files once it is done.
    We had around 170 .pag files, and after the restructure they were reduced to 130. You do not need to change any data inside the cube to do this.
    The process moves storage from the F drive to the H drive and back from H to F.
    Restructure to H drive:
    Create the directories on the H drive if they do not exist. (Not sure if this is necessary.)
    Go to the properties of the cube (storage tab), remove the F drive, and add the H drive.
    Stop and start the database.
    Go back to properties and ensure the H drive is there.
    Right-click on the cube and confirm the H drive is there.
    Then go to the cube and do a restructure.
    Restructure back to F drive:
    Go to the properties of the cube (storage tab), remove the H drive, and add the F drive.
    Stop and start the database.
    Go back to properties and ensure the F drive is there.
    Then go to the cube, right-click, and do a restructure back to F.
    Regards,
    Naveen
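    If the goal is simply to defragment rather than to relocate storage, a forced restructure alone may suffice; a minimal MaxL sketch (application and database names hypothetical):
    /* rewrite the .pag and .ind files compactly, removing fragmentation */
    alter database Sample.Basic force restructure;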

  • ASO: Cube size

    Hi,
    We are currently working on designing a system with the cube size estimated to be in the region of 400 GB to 600 GB. I wanted to explore Essbase ASO as an option. Has anyone implemented a cube of that size using ASO, and if so, how are the query performance and processing time? Are there any specific issues that we have to keep in mind? We would have about 12 dimensions.
    Thanks,
    Amol

    Hi Amol,
    1. On what basis did you estimate your cube at around 400 GB to 600 GB?
    2. If ASO is an option, its huge advantage lies in space; it does not take as much space as BSO.
    3. I have seen cubes whose size was around 300-400 GB in BSO; when the same cube was rebuilt as ASO, it consumed 40-45 GB.
    Hope this helps
    Sandeep Reddy Enti
    HCC
    http://hyperionconsutlancy.com/

  • How to find out Cube Size (Step by step process)

    Hi all,
    Can anybody tell me how I can find out the cube size?
    Thanks in advance.
    Vaibhav A.

    Hi,
    try Tcode DB02.
    Also, a simplified estimation of disk space for BW can be obtained by using the following formula:
    For each cube:
    Size in bytes =
    [(n + 3) x 10 bytes + (m x 17 bytes)] x (rows of initial load + rows of periodic load x no. of periods)
    n = number of dimensions
    m = number of key figures
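    A worked example with hypothetical numbers: for a cube with n = 5 dimensions, m = 10 key figures, 1,000,000 rows of initial load, and 100,000 rows of periodic load per month over 12 months:
    record size = (5 + 3) x 10 bytes + (10 x 17 bytes) = 250 bytes
    total rows = 1,000,000 + (100,000 x 12) = 2,200,000
    size = 250 x 2,200,000 = 550,000,000 bytes, i.e. roughly 550 MB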
    For more details please read the following:
    Estimating an InfoCube
    When estimating the size of an InfoCube, tables like fact and dimension tables are considered.
    However, the size of the fact table is the most important, since in most cases it will be 80-90% of the
    total storage requirement for the InfoCube.
    When estimating the fact table size consider the effect of compression depending on how many
    records with identical dimension keys will be loaded.
    The amount of data stored in the PSA and ODS has a significant impact on the disk space required.
    If data is stored in the PSA on more than a temporary basis, it is possible that more than 50%
    of total disk space will be allocated for this purpose.
    Dimension Tables
    • Identify all dimension tables for this InfoCube.
    • Estimate the size and number of records for a dimension table record. The size of one record can be calculated by summing the number of characteristics in the dimension table at 10 bytes each. Also, add 10 bytes for the key of the dimension table.
    • Estimate the number of records in the dimension table.
    • Adjust the expected number of records in the dimension table by expected growth.
    • Multiply the adjusted record count by the expected size of the dimension table record to obtain the estimated size of the dimension table.
    Fact Tables
    • Count the number of key figures the table will contain, assuming a key figure requires 17 bytes.
    • Every dimension table requires a foreign key in the fact table, so add 6 bytes for each key. Don't forget the three standard dimensions.
    • Estimate the number of records in the fact table.
    • Adjust the expected number of records in the fact table by expected growth.
    • Multiply the adjusted record count by the expected size of the fact table record to obtain the estimated size of the fact table.
    Regards,
    Marasa.
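    Applying the fact table steps above to the same hypothetical cube (5 dimensions plus the 3 standard ones, 10 key figures, 2,200,000 rows):
    fact record size = (10 x 17 bytes) + (8 x 6 bytes) = 218 bytes
    fact table size = 218 x 2,200,000 = 479,600,000 bytes, i.e. roughly 480 MB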

  • To find the Cube Size

    Hi guys,
    How do I find the current cube size? Are there any t-codes or function modules
    to figure it out?
    Thanks,
    Your help will be greatly appreciated

    Hi Vj,
    To find how much database space a cube is occupying:
    1) Go to SE11, type the PSA table name /bic/b********** and display the table. There is a "Technical settings" tab at the top; click it and it will display the tablespace and the size. For the cube, use the dimension table /bic/d********.
    2) If you are maintaining the statistics, you can check the reports of the statistics MultiProvider, or you can also check the statistics cube.
    3) To see the disk space used by an InfoCube, use transaction DB02 and sum all the involved tables (Detailed Analysis -> Object Name "<IC_NAME>"), i.e. the E, F and DIM tables.
    4) You can use RSRV -> All Elementary Tests -> Database -> Database information about InfoProvider tables to get the rows in the fact tables and dimension tables.
    You can also use the program SAP_INFOCUBE_DESIGNS.
    Hope this helps,
    Regards
    Karthik
    Assign points if helpful

  • Huge cube size difference between version 7.1 and 11.1

    Hello All,
    We are migrating from version 7.1 to 11.1. We did a full data export of each cube on 7.1 and did a server load on 11.1. We found that all the cubes on the new 11.1 server are almost 50% smaller than the same cubes on the 7.1 server. We did a detailed drill-through and found that there isn't any data loss in the cubes on the new server. Please let us know if this smaller cube size is normal or whether we missed something somewhere. I appreciate your comments!
    Thanks!

    "Your cube was probably really fragmented and reloading all the data will have reduced the fragmentation."
    Yup. You can prove this by taking those export files and recreating the database on your 7.1.x server. I'll bet you find the same thing.
    Take a gander at this post re fragmentation.
    I'd add to that posting that MaxL's force restructure is another way to defrag a database; I personally like the flat file export as I can be reasonably sure backup software won't choke on a flat file.
    Regards,
    Cameron Lackpour
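    A minimal MaxL sketch of the flat-file export/reload defragmentation described above (application, database, and file names hypothetical):
    /* export all data, clear the database, then reload from the flat file */
    export database Sample.Basic all data to data_file 'sample_export.txt';
    alter database Sample.Basic reset data;
    import database Sample.Basic data from data_file 'sample_export.txt' on error abort;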

  • ASO physical clear increases cube size?

    I am testing the performance and feasibility of a physical clear and have noticed that the cube size actually INCREASES after doing a physical delete. The cube size increased from 11.9 GB to 21.9 GB. Does anyone know why?
    Our ASO cube is used as a forecasting application, and models are updated constantly. A logical delete won't work because a model can be updated multiple times, and the following direct quote from the manual precludes the logical clear option.
    "Oracle does not recommend performing a second logical clear region operation on the same region, because the second operation does not clear the compensating cells created in the first operation and does not create new compensating cells."
    Here is the MDX I used to do a physical clear of ~120 models from the cube.
    alter database Express.Express clear data in region
    '{
    PM10113,
    PM10123,
    PM10124,
    PM10140,
    PM6503,
    PM6507,
    PM6508,
    PM6509,
    PM6520,
    PM6528
    }' Physical;
    Any insight would be greatly appreciated.

    I am sorry, but I do not have my test system available, so I will have to do this from memory.
    I am surprised at this; it is what you would expect if you did a logical clear. When you look at the database statistics, does the number of cells reflect the original less the cleared region? And does it show no slices? If so, then you did do a physical clear, as your MaxL indicates.
    You might want to stop and start the application. Otherwise I will have to check some more.
    But given the size of the increase (almost doubled), I would wonder why you would do a clear as opposed to a reset. Finally, I am wondering why you are doing a clear at all. Why not just do a send and let it create an incremental slice? That way only the changed cells would be found in the slice. More important, the slices would be quite small and likely automatically merged.
    Finally, regarding the DBAG quote (page 982 in the PDF) you included: again I would have to test, but I think they are only warning that the number of slices will start to build up, because "the second operation does not clear the compensating cells created in the first operation and does not create new compensating cells". The net result would still be correct.
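    If incremental slices become the approach, they can be consolidated periodically; a hedged MaxL sketch (database name taken from the statement above):
    /* merge all incremental data slices into the main database slice */
    alter database Express.Express merge all data;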

  • ASO cube size after migration to Essbase 9.3.1

    Hi,
    Our ASO cube sizes after migration from Essbase 7.1.2 to 9.3.1 have doubled or more.
    Do you have any suggestion?
    Thank you.
    Lorenzo

    You could try Essbase Security Dumper (ESD), available for download on www.dougware.com. It will export users, groups, and filters into a text dump that is easily read/modified and updated to another server.
    Caveats: only works with native security, and assigns random passwords to users that are created (a log file shows all upload tasks which can be used to look up the password assignments).
    It may help, although it's obviously not going to eliminate all the effort, as I don't believe there is a tool that will migrate native passwords from one box to another.
    There are other security tools available there that can help recover individual databases and such.

  • Does a UDA affect the cube size?

    Dear All,
    I have created the UDAs.
    Can anyone tell me whether UDAs affect the cube size?
    And how many attributes can we have in Essbase?
    Thanks in Advance.
    Edited by: user8815661 on Aug 2, 2012 12:17 AM

    Hi,
    UDAs are member labels that you create to extract data based on a particular characteristic.
    UDAs have no effect on the cube size.
    KosuruS

  • Doubts regarding CUBE

    Hi all
    I have some doubts regarding CUBE. Say I get SIP trunking from a service provider and I use a CUBE router to distribute those SIP trunks to customers who use an IP PBX to connect their IP phones.
    As I understand it, CUBE handles IP-to-IP communication, so in this scenario do I need a CUCM anywhere, or a CME router?
    I was told that I need DSP resources in the CUBE router to avoid any codec mismatch.
    Please guide me to clear up my basics, or provide me with any document to read.
    Regards
    Aateek

    Hi Aateek,
    You can start by checking the following links
    http://docwiki.cisco.com/wiki/Cisco_Unified_Border_Element_SIP_Trunk_Configuration_Example
    http://www.cisco.com/c/en/us/products/collateral/unified-communications/unified-border-element/white_paper_c11-620461.html
    HTH
    Manish

  • Query regarding cache sizes

    To optimize my calculation script I have set the compression type of my cube to RLE (the calculation script previously ran in 6 minutes and now takes 2 minutes; the data file exported using DATAEXPORT is the same).
    The maximum index cache is set to 4097152 KB (i.e. 3.9 GB). Is it OK to set the index cache so high even though my index file size is less than 1 GB?
    1) How do I determine that the maximum value for the data cache should be 36000000 KB? What factors do I need to take into consideration?
    2) The data cache maximum of 36000000 KB is 34.33 GB. Is that a practical approach?
    Regards
    Shenna

    Hi,
    Index Cache:
    The documentation suggests 1 MB of index cache for buffered I/O and 10 MB of index cache for direct I/O.
    While you can use this recommendation to start with, you're the right person to arrive at the actual figure by doing trials relevant to your environment.
    Data Cache:
    Again, the documentation suggests: data cache = 0.125 * the value of the data file cache size,
    where the suggested data file cache size = combined size of all essn.pag files, if possible; otherwise as large as possible. (The data file cache setting is not used if Essbase is set to use buffered I/O.)
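    A worked example with hypothetical numbers: if the essn.pag files total 8 GB, the suggested data file cache would be 8 GB (direct I/O only), and the data cache would be 0.125 x 8 GB = 1 GB.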
    It's prudent to do trials independently for each of the caches!
    It's worth reading all the posts of the thread @ Understanding Buffered I/O and Direct I/O to understand experts' opinions !
    Best of luck :)
    - Natesh

  • How to Query SSAS Cube Size Including Each Fact and Dimension Sizes?

    Microsoft recommends: “In general, the number of records per partition should not exceed 20 million. In addition, the size of a partition should not exceed 250 MB.” http://blogs.msdn.com/b/sqlcat/archive/2009/03/13/analysis-services-partition-size.aspx
    I am not able to find any queries that would show how many records, or how many MB, I have per partition.
    Please advise.

    To see how big your partitions are, simply open Visual Studio/BIDS and go to
    File --> Open --> Analysis Services Database ...
    Now navigate to your cube and its partitions, and you will see some additional columns showing the size of each partition. These are only available if you connect to an online database (as opposed to offline development).
    The RowCount is a bit more tricky. The easiest approach is to use BIDS Helper and update the estimated counts:
    https://bidshelper.codeplex.com/wikipage?title=Update%20Estimated%20Counts
    but be aware, this might take some time as it queries all the relational tables again!
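    As a rough query-based sanity check against the 20-million-records guideline, you can also run each partition's source query as a COUNT(*) against the relational warehouse. A hedged SQL sketch (the fact table and the by-year partition filter are hypothetical):
    -- approximate the record count feeding a hypothetical 2009 partition
    SELECT COUNT(*) AS partition_rows
    FROM dbo.FactInternetSales
    WHERE OrderDateKey BETWEEN 20090101 AND 20091231;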
    Regarding the 250 MB / 20M rule: this is just a guideline, and the actual sizes depend on your data and your requirements.
    hth,
    gerhard
    Gerhard Brueckl
    blogging @ http://blog.gbrueckl.at
    working @ http://www.pmOne.com

  • Help needed regarding font sizes

    I am generating PDF documents using Oracle Developer's Report Builder.
    When I generate and view those PDFs using the web previewer they appear quite fine, whereas when I host them on iAS the font sizes get bigger, so a lot of text gets truncated.
    Are there any parameters that have to be set for this? Please help.

    Hi Amit,
    Thanks a lot for your reply. It's really helpful for me.
    So, we usually enter these relationships manually only, right? Before going ahead with the custom program, could you please let me know whether there is any SAP note related to this? Once again, thanks a lot for your help.
    Regards,
    Kishore.

  • Few Queries regarding Message Size in XI

    Hi,
    I have posted the same question many times before but did not get a satisfactory reply.
    I have many file-to-IDoc scenarios.
    The file size is large. It is a CSV or tab-delimited file, and it can grow up to 100 MB.
    DEV has 4 GB of RAM; QA and production have 16 GB of RAM, with an Intel dual-core processor on each box.
    If I try to send a complete 60-100 MB file, I run into "LOCK table overflow".
    We have increased the table size parameter on the receiving CRM system, and the recordset parameter is raised from 500 to 5000 for all interfaces.
    Since I am not able to send a complete file, I am breaking it into chunks, as I do not have much choice.
    For certain interfaces with a large number of fields I am able to send only 2,000 records per file; for others, about 14,000 records per file.
    To take care of errors like "LOCK table overflow" and to automate the process, I have scheduled the report RSARFCEX. Otherwise I would end up submitting one file at a time and waiting for it to be processed (i.e. to create IDocs with status 64 in the receiving system), which takes a long time.
    I see here on SDN that people are able to send a large file in one go.
    Even if I try to split a file using XI (multi-mapping, extended interface determination, the works, et cetera), it fails for a file of 6 MB, even after increasing the number of work processes.
    I also tried the XI tuning parameters, but there was no change in the status quo. Finally, I am using an ABAP program in XI which splits the file (in a matter of minutes) so that the file adapter can pick it up.
    I am surprised by the XI system's performance. I am not using BPM; there is content conversion on the sender side.
    Would the experts on the forum please provide a solution?

    Hi Deepak,
    Did you check this weblog?
    /people/alessandro.guarneri/blog/2006/03/05/managing-bulky-flat-messages-with-sap-xi-tunneling-once-again--updated
    And this related thread:
    XML file size
    Regards,
    Krishna
    Message was edited by:
            Krishnamoorthy Ramakrishnan
