Increased calc time in Essbase cube

Hello-
I have an Essbase cube that just recently started experiencing very long calc times. The cube has been in a 9.3.1 environment for approximately 4 months, and there haven't been any major additions to the data, just adding Actuals every month. I had a CALC ALL running in approximately 2 minutes, and now it's running in 2+ hours; this essentially happened overnight. The size of the page file is 267 MB and the index file is 25 MB. The data cache and index cache are currently set at 400,000 KB and 120,000 KB respectively.
This is a 7-dimensional cube with 2 dense dimensions, Accounts and Time. My block size is high due to the large number of level 0 Accounts: 215,384 KB.
The number of index page writes and data block reads & writes is pretty high: 128,214 and 3,809,875 respectively. And the hit ratio on the data cache is 0.07.
I've tried adjusting the data cache and index cache both up and down... but with no success.
Any help would be appreciated.
Thanks!

Here are a couple of things to think about:
How big is the databasename.ind file? How often do you restructure the database? If the index file is large, it means the database is fragmented (you can also look at the database stats to help figure it out). Every time you run the CALC ALL, you cause fragmentation.
If you have aggregate missing values (AGGMISSG) turned off, that also increases the calc time. You will also find that calculating a loaded cube takes longer than calculating a cube with just level zero data.
If your database is not too big, you might try exporting level zero data, clearing the database, reloading the data, and doing your CALC ALL. (Of course, I would try this on a copy of the production database, not the actual database itself.) You might find it quicker than your straight calc.
Since you do a straight CALC ALL, you might also consider converting this to an ASO cube; you get rid of the CALC ALL entirely.
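The export/clear/reload cycle suggested above can be sketched in MaxL. This is only a sketch: appname.dbname, the file names, and the rules file name are placeholders, and a column-format export needs a load rule to reload it, so adapt it to your environment before running anything.

```maxl
/* export only level zero data, in column format */
export database appname.dbname level0 data in columns to data_file 'lev0.txt';

/* clear all data, then reload the level zero export via a load rule */
alter database appname.dbname reset data;
import database appname.dbname data from data_file 'lev0.txt'
    using server rules_file 'ldlev0' on error write to 'lev0.err';

/* re-aggregate */
execute calculation default on database appname.dbname;
```

As the poster says, try this on a copy of the database first and compare the total cycle time against the straight CALC ALL.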

Similar Messages

  • Cube maintenance time increases every time it runs

    Hi,
I have created a cube in AWM (10.1.0.4) with five dimensions. This cube is set not to preaggregate any levels, in order to keep the maintenance time short. The first time I maintained the cube it took about 3 minutes. To get an average maintenance time, I maintained the cube a few more times. Each time I did this, the cube took a bit longer to maintain, and the last time it took 20 minutes. Every time I checked the "aggregate the full cube" option, and no data was added to the source tables.
I also checked the tablespace that the AW is stored in, and it also grows with each run. It's now using 1.6 GB.
    The database version is 10.1.0.4.0
    Anyone have any ideas to what I can do?
    Regards
    Ragnar
edit: I did a few more runs; the last run time was 40 minutes and the tablespace is now using 4.1 GB, so I think this is where the problem is. Instead of overwriting the old data, it seems to just add to it, making the tablespace bigger each time.
    Message was edited by:
    rhaug

    Hi,
seems like I have resolved this problem now. I had made several cubes that were almost identical; the only difference was how they aggregated the data. One had full aggregation, one had skip-level aggregation, and the last had none. The reason I did this was to be able to compare maintenance times and see how this affected the response time of the cube. I am not sure what causes this, but I never managed to aggregate the cubes correctly. The cube with full aggregation took just a minute or two to maintain, and when I chose to view the data it took another minute.
So my impression was that it was aggregating all the data at runtime.
When I tried to maintain any of the cubes after this, I got various errors. Usually the maintenance failed when the tablespaces couldn't grow anymore; the temp tablespace was at this point beyond 20 GB.
I then thought that the names of the measures in the cube could have something to do with the errors I got, and renamed them so they were unique in the AW. The tablespaces grew large this time as well, but the maintenance stopped because of an out-of-memory error.
Then I deleted all cubes but one and tried to maintain it. After about 35 minutes it was done, and when I chose to view the data, it seemed to be precalculated and the response time was good. The tablespace containing the AW also seems normal in size, around 500 MB. I did several test runs during the night, and since yesterday the cube has successfully been maintained 15 times.
    So this brings me to my question.
Can an AW only contain one cube? Or is it just a user error on my part? It seems a bit weird to me that you can only have one cube using the same dimensions, so I'm not sure if this is the right way of doing it, but it works.
    Anyone have any input or similar experiences?
    Regards
    Ragnar

  • Problems in accessing an Essbase cube using Interactive Reporting Studio

    Hi All,
    I have developed a report in Interactive Reporting Studio for which Essbase is the source. When I try to process my query, I get the error below:
    "Request [Report] from user[username] was terminated since it exceeded the time limit"
    Any idea how the time limit can be increased on the Essbase side?

    I still cannot resolve the problem. I am pulling an attribute dimension into my report, and that seems to be the root cause of the problem. When I remove this attribute dimension and generate the report with the rest of the details, the report renders without any issues.
    Could this be because the attribute dimension is "Dynamic Calc and Store" in the Essbase cube, and IR does not support certain Essbase features such as dynamic attribute dimensions? This is quite urgent, so I would need a quick response from y'all.

  • Reduce Calc Time

    Afternoon everyone,
    We load data into our cube monthly, and when running the calc on the database it can take 2 to 3 days to complete. I appreciate that calc time can be determined by a wide variety of factors (number of dense/sparse dims, members, etc.), but looking at things from a system-resource view:
    The server has 8 CPU's.
    With total memory = 4194303 (according to server information within Application Manager)
    When calcing, approx 1500000 of memory is used.
    The start of the calc script defines the following parameters: 'SET CALCPARALLEL 4; SET UPDATECALC OFF;'
    Would increasing the SET CALCPARALLEL parameter from 4 to 6 be a viable approach to reducing calc time (especially given the amount of available resource on the server)?
    The server won't be used for anything else during the calc.

    CL wrote:
    Are you running 64 bit or 32 bit Essbase?
    32 bit maxes out at 4 CPUs for parallel calc; 64 bit can go to 8.
    You might want to look at the number of task dimensions set for parallel calculations.
    See: http://download.oracle.com/docs/cd/E10530_01/doc/epm.931/html_esb_techref/calc/set_calctaskdims.htm
    And your calculator cache is going to impact parallel calcs as well.
    All of this can go up in smoke if you have calculations that require Essbase to calculate in serial, such as cross dimensionals.
    There are lots of other possibilities re performance.
    1) Could the SAN/local drives be faster?
    2) Do you need to calc the whole database (I have no idea what your db is, only that you mention a monthly calc -- is it for just one month?)
    3) Partitioning the db by month<--This is probably a really good place to look although nothing is free.
    4) Going to ASO
    There are others as well.
    I appreciate that the above four thoughts are beyond your question; they're just food for thought.
    Regards,
    Cameron Lackpour

    ASO should be an option. Its rollup is much, much faster than BSO's.
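    On a 64-bit server, the change being discussed is just the first lines of the calc script. Wrapped in MaxL it might look like the sketch below; appname.dbname is a placeholder, and the CALCTASKDIMS value of 2 is an assumption to benchmark, not a recommendation.

    ```maxl
    execute calculation
      'SET CALCPARALLEL 6;
       SET CALCTASKDIMS 2;
       SET UPDATECALC OFF;
       CALC ALL;'
      on database appname.dbname;
    ```

    Time a run at CALCPARALLEL 4 and one at 6 on a copy of the database before changing production; as CL notes, serial-forcing constructs like cross-dims can erase any gain.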

  • Essbase cube optimization

    Hi All,
    I know a lot has been discussed about optimizing Essbase cubes, but I need to get some more guidance on the same.
    Scenario:
    Essbase app 11.1.2.2 with 2 dense and 6 sparse as below:
    Account ~500 stored (dense)
    Period ~13 stored (dense)
    Fin Element ~2000 stored (sparse)
    Entity ~10000 stored (sparse)
    Prod ~10000 stored (sparse)
    * Year, Version and Scenario are non-aggregating dimensions
    Stats: Block size ~50kb and lev0 size ~400MB
    The issue is that there are a few month-end calc scripts that take more than ~3-4 hrs per script to execute on the cube.
    I have done the following things so far:
    1- Reordered the outline into a custom hourglass shape: Period - Accounts - Fin Element - Prod - Entity - Yr - Version - Scenario
    2- Changed a few member properties
    3- Optimized the scripts
    However, the run time for the scripts is the same as it was earlier.
    A quick question:
    If I make the dimension 'Fin Element' dense, the block size becomes ~80 MB. I believe ~100 KB is the recommended maximum, but I need to confirm this.
    Kindly advise on what else I should try.
    Regards..
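    As a rough sanity check on the block sizes quoted above (this assumes every listed dense member is stored, at 8 bytes per cell, so it is only an estimate):

    ```latex
    \underbrace{500}_{\text{Account}} \times \underbrace{13}_{\text{Period}} \times 8\,\text{B}
    = 52{,}000\,\text{B} \approx 51\,\text{KB}
    ```

    Making Fin Element dense multiplies that by up to 2,000, i.e. on the order of 100 MB per block (label-only and dynamic members would bring it down toward the ~80 MB quoted). Either way it is orders of magnitude beyond the commonly cited 8 KB to 100 KB range, so a dense Fin Element is unlikely to help.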

    Hi
    This is all a little subjective, but...
    Are the upper-level members of your dense dimensions dynamic, or are you having to run CALC DIM commands against them within the lengthy business rules? If you are, then these are probably responsible for a fair chunk of the time. You could always pull them out and run them separately to see exactly how much time they take.
    JB
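    JB's idea of timing the dense CALC DIMs in isolation can be done with a throwaway anonymous calc in MaxL; this is a sketch, with the dimension names taken from the outline in the question and appname.dbname as a placeholder.

    ```maxl
    execute calculation
      'SET UPDATECALC OFF;
       CALC DIM ("Account", "Period");'
      on database appname.dbname;
    ```

    Compare this timing against the full month-end script to see how much of the 3-4 hours the dense aggregation accounts for.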

  • How to improve retrieval time using the Excel add-in

    I have an ASO cube. What can I do to improve retrieval time in the Excel add-in on version 9.3.1?
    Please advise

    1. Edit the essbase.cfg file with the following:
    AGENTDELAY 60
    NETDELAY 600
    NETRETRYCOUNT 800
    NETTCPCONNECTRETRYCOUNT 100
    AGENTTHREADS 60
    SERVERTHREADS 60
    AGTSVRCONNECTIONS 7
    2. Edit the database properties with the following:
    If on 32-bit Platform, set the Essbase Data retrieval buffers as:
    Buffer size(KB): 10
    Sort buffer size(KB): 40
    If on 64-bit Platform, set the Essbase Data retrieval buffers as:
    Buffer size(KB): 20
    Sort buffer size(KB): 40
    3. Add or Edit the following Registry settings on Essbase Server and Client machines
    a) Open the Registry
    b) Navigate to LocalMachine\System\CurrentControlSet\Services\TCPIP\Parameters
    c) Add a new DWORD value named TcpTimedWaitDelay, right-click and select Modify, select the decimal radio button, and type in 30. (The default value of this parameter is 2 minutes. This is how long it will take for a TCP/IP port that was used by the network for a connection to be released and made available again. 30 seconds is the minimum allowed by Microsoft.)
    d) Add a new DWORD value named MaxUserPort, right-click and select Modify, select the decimal radio button, and type in 65534. (The default value is 5000. This determines the highest port number TCP can assign when an application requests an available user port from the system.)
    e) Add a new DWORD value named MaxFreeTcbs, right-click and select Modify, select the decimal radio button, and type in 6250. (The default value is 2000. This determines the number of TCP control blocks (TCBs) the system creates to support active connections. Because each connection requires a control block, this value determines how many active connections TCP can support simultaneously. If all control blocks are used and more connection requests arrive, TCP can prematurely release connections in the TIME_WAIT state in order to free a control block for a new connection.) Also increase NETDELAY and NETRETRYCOUNT.
    If the application is crashing on retrieval, then you might be hitting an unpublished bug, 12319088: ON RETRIEVAL FROM EXCEL ADDIN FOR THREE TIMES, ESSBASE APPLICATION SHUTS DOWN.
    KosuruS
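    For what it's worth, step 2 can also be scripted in MaxL instead of clicking through EAS. This is a sketch: appname.dbname is a placeholder, and the sizes are the 32-bit values from the list above, in KB; double-check the statement names against your version's MaxL reference.

    ```maxl
    alter database appname.dbname set retrieve_buffer_size 10;
    alter database appname.dbname set retrieve_sort_buffer_size 40;
    ```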

  • Essbase cube/pag file size reduction-bitmap compression

    We are seeing some huge .pag files in our Essbase cube, and Oracle suggested changing to bitmap compression to see if that will help (currently, we have 'no compression').
    That said, what is the difference between using RLE versus bitmap encoding for BSO cubes?
    Would like to hear other's experiences.

    (currently, we have 'no compression')
    ^^^ You are going to be very happy. Very, very happy.
    You can read the Database Administrator's Guide; just search for "compression".
    Here it is: http://download.oracle.com/docs/cd/E17236_01/epm.1112/esb_dbag/dstalloc.html
    There are a bunch of caveats and rules about the "better" compression method. Having said that, I almost always use RLE, as it seems the most efficient with respect to disk space. This makes sense, as this method examines each block and applies the most efficient encoding: RLE, bitmap, or index-value pair. I can't see a calculation-time difference between bitmap and RLE.
    How on earth did you get into the position of no compression in the first place?
    Regards,
    Cameron Lackpour
    Edited by: CL on Jul 18, 2011 10:48 AM
    Whoops, a couple of things:
    1) After you apply the change in compression in EAS or via MaxL, you must stop and then restart the database for the setting to take effect.
    2) You must export the database, clear it out, and reload it using the new compression method. The change in #1 only affects new blocks, so otherwise you could have a mix of compressed and uncompressed blocks in your db, which won't do anything for storage space.
    3) The only way to really and truly know which method is more efficient, for both calculation (writing to disk could be a bit slower than it is now) and compression, is to test it. Try it as bitmap and write down sample calc times and IND and PAG file sizes, then try it as RLE and do the same.
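    Putting points 1 and 2 together as a MaxL sketch: the app/db and file names are placeholders, and the unload/load statements here are written from memory, so verify them against the MaxL reference for your release before running.

    ```maxl
    /* 1. export everything first */
    export database appname.dbname all data to data_file 'fullexp.txt';

    /* 2. change the compression method, then stop and restart
          the database so the setting takes effect */
    alter database appname.dbname set compression_type rle;
    alter application appname unload database dbname;
    alter application appname load database dbname;

    /* 3. clear and reload so every block is written with the new method */
    alter database appname.dbname reset data;
    import database appname.dbname data from data_file 'fullexp.txt' on error abort;
    ```

    Run the same sequence with `bitmap` in place of `rle` and compare calc times and IND/PAG sizes, per point 3.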

  • Linking existing Essbase Cube to Planning

    My business team created an Essbase cube, and now they want us to link that cube to Planning.
    I opened an SR and I was told the following:
    1. Ensure you have a test environment to create this new Planning app before moving to production.
    2. Create a new Planning app in the test environment with the same name as the production Essbase cube.
    3. Export all Essbase data.
    4. Copy over the outline to the new Essbase cube in the test env. You can copy over the rest of the Essbase objects as needed (calc scripts, report scripts, etc).
    5. Import the Essbase data into the new Essbase cube.
    6. Refresh Planning.
    Everything went well except for step 6. When I do the refresh from Planning, the Essbase outline gets updated with the one from Planning, which doesn't have anything since it was just created.
    I need to refresh the Planning outline with the one from Essbase that has the data the users have been working on.
    Thanks.

    Hi,
    You will have to define all the dimension members etc. in Planning. Because Essbase is the database, it gets updated with the Planning information; there are no shortcuts. But you can export your dimension members from the existing Essbase cube and load them into Planning using HAL/ODI/DIM.
    Use the Outline Extractor tool to get the existing dimensionality and create one metadata file per dimension.
    I believe Oracle Support thought that you wanted to create a new Essbase cube from an existing Planning cube.
    Cheers
    RS

  • Essbase Studio: Failed to deploy Essbase cube

    Hi
    I started working with Essbase Studio some time back, and I am able to deploy a BSO cube successfully using the TBCSample database which comes along with Essbase. Now I wanted to deploy an ASO cube; as no sample database is available, I thought I would create one. I extracted ASOsamp using ODI to CSV files, then bulk-inserted the CSV extracts into MSSQL 2003 server, which created 11 tables (Age, Geography, IncomeLevel, Measures, PaymentType, Product, Stores, Time, TransactionType, Year). The tables do not have any keys (primary, foreign), as the data is an Essbase export.
    I then successfully created the ASO cube schema using the newly created sample database in MSSQL and validated the cube schema without any errors.
    Essbase Property Setting:
    Measures Hierarchy is tagged as Dynamic Compression at dimension level
    Time, Product, and Year hierarchies are tagged as Multiple Hierarchies Enabled; Year does not have multiple hierarchies, but it has formulas for the Variance and Variance % members. Is there a way to tag Year as a Dynamic hierarchy?
    But when I try to deploy the cube to Essbase, I receive the following errors:
    Failed to deploy Essbase cube
    Caused By: Cannot end incremental build. Essbase Error(1060053): Outline has errors
    \\Record #1 - Member name (Time) already used
    + S Time + S
    \\Record #6 - Member name (1st Half) already used
    MTD + S 1st Half + S
    \\Record #7 - Member name (2nd Half) already used
    MTD + S 2nd Half + S
    \\Record #21 - Member name (Qtr1) already used
    Qtr1 + S Feb + S
    \\Record #22 - Member name (Qtr1) already used
    Qtr1 + S Jan + S
    \\Record #23 - Member name (Qtr1) already used
    Qtr1 + S Mar + S
    \\Record #24 - Member name (Qtr2) already used
    Qtr2 + S Apr + S
    \\Record #25 - Member name (Qtr2) already used
    Qtr2 + S Jun + S
    \\Record #26 - Member name (Qtr2) already used
    Qtr2 + S May + S
    \\Record #27 - Member name (Qtr3) already used
    Qtr3 + S Aug + S
    \\Record #28 - Member name (Qtr3) already used
    Qtr3 + S Jul + S
    \\Record #29 - Member name (Qtr3) already used
    Qtr3 + S Sep + S
    \\Record #30 - Member name (Qtr4) already used
    Qtr4 + S Dec + S
    \\Record #31 - Member name (Qtr4) already used
    Qtr4 + S Nov + S
    \\Record #32 - Member name (Qtr4) already used
    Qtr4 + S Oct + S
    \\Record #33 - Member name (Time) already used
    Time + S MTD + S
    \\Record #34 - Member name (Time) already used
    Time ~ S QTD ~ S
    \\Record #35 - Member name (Time) already used
    Time ~ S YTD ~ S
    \\Record #9 - Error adding Attribute to member QTD(Jan) (3320)
    \\Record #9 - Aggregate storage outlines only allow formulas in compression dimension or dynamic hierarchies.
    QTD + S QTD(Jan) + S [Jan]
    \\Record #10 - Error adding Attribute to member QTD(Apr) (3320)
    \\Record #10 - Aggregate storage outlines only allow formulas in compression dimension or dynamic hierarchies.
    QTD ~ S QTD(Apr) ~ S [Apr]
    \\Record #11 - Error adding Attribute to member QTD(Aug) (3320)
    \\Record #11 - Aggregate storage outlines only allow formulas in compression dimension or dynamic hierarchies.
    QTD ~ S QTD(Aug) ~ S [Jul]+[Aug]
    \\Record #12 - Error adding Attribute to member QTD(Dec) (3320)
    \\Record #12 - Aggregate storage outlines only allow formulas in compression dimension or dynamic hierarchies.
    QTD ~ S QTD(Dec) ~ S [Oct]+[Nov]+[Dec]
    \\Record #13 - Error adding Attribute to member QTD(Feb) (3320)
    \\Record #13 - Aggregate storage outlines only allow formulas in compression dimension or dynamic hierarchies.
    QTD ~ S QTD(Feb) ~ S [Jan]+[Feb]
    \\Record #14 - Error adding Attribute to member QTD(Jul) (3320)
    \\Record #14 - Aggregate storage outlines only allow formulas in compression dimension or dynamic hierarchies.
    QTD ~ S QTD(Jul) ~ S [Jul]
    \\Record #15 - Error adding Attribute to member QTD(Jun) (3320)
    \\Record #15 - Aggregate storage outlines only allow formulas in compression dimension or dynamic hierarchies.
    QTD ~ S QTD(Jun) ~ S [Apr]+[May]+[Jun]
    \\Record #16 - Error adding Attribute to member QTD(Mar) (3320)
    \\Record #16 - Aggregate storage outlines only allow formulas in compression dimension or dynamic hierarchies.
    QTD ~ S QTD(Mar) ~ S [Jan]+[Feb]+[Mar]
    \\Record #17 - Error adding Attribute to member QTD(May) (3320)
    \\Record #17 - Aggregate storage outlines only allow formulas in compression dimension or dynamic hierarchies.
    QTD ~ S QTD(May) ~ S [Apr]+[May]
    \\Record #18 - Error adding Attribute to member QTD(Nov) (3320)
    \\Record #18 - Aggregate storage outlines only allow formulas in compression dimension or dynamic hierarchies.
    QTD ~ S QTD(Nov) ~ S [Oct]+[Nov]
    \\Record #19 - Error adding Attribute to member QTD(Oct) (3320)
    \\Record #19 - Aggregate storage outlines only allow formulas in compression dimension or dynamic hierarchies.
    QTD ~ S QTD(Oct) ~ S [Oct]
    \\Record #20 - Error adding Attribute to member QTD(Sep) (3320)
    \\Record #20 - Aggregate storage outlines only allow formulas in compression dimension or dynamic hierarchies.
    QTD ~ S QTD(Sep) ~ S [Jul]+[Aug]+[Sep]
    \\Record #36 - Error adding Attribute to member YTD(Jan) (3320)
    \\Record #36 - Aggregate storage outlines only allow formulas in compression dimension or dynamic hierarchies.
    YTD + S YTD(Jan) + S [Jan]
    \\Record #37 - Error adding Attribute to member YTD(Apr) (3320)
    \\Record #37 - Aggregate storage outlines only allow formulas in compression dimension or dynamic hierarchies.
    YTD ~ S YTD(Apr) ~ S [Qtr1]+[Apr]
    \\Record #38 - Error adding Attribute to member YTD(Aug) (3320)
    \\Record #38 - Aggregate storage outlines only allow formulas in compression dimension or dynamic hierarchies.
    YTD ~ S YTD(Aug) ~ S [1st Half]+[Jul]+[Aug]
    \\Record #39 - Error adding Attribute to member YTD(Dec) (3320)
    \\Record #39 - Aggregate storage outlines only allow formulas in compression dimension or dynamic hierarchies.
    YTD ~ S YTD(Dec) ~ S [1st Half]+[Qtr3]+[Qtr4]
    \\Record #40 - Error adding Attribute to member YTD(Feb) (3320)
    \\Record #40 - Aggregate storage outlines only allow formulas in compression dimension or dynamic hierarchies.
    YTD ~ S YTD(Feb) ~ S [Jan]+[Feb]
    \\Record #41 - Error adding Attribute to member YTD(Jul) (3320)
    \\Record #41 - Aggregate storage outlines only allow formulas in compression dimension or dynamic hierarchies.
    YTD ~ S YTD(Jul) ~ S [1st Half]+[Jul]
    \\Record #42 - Error adding Attribute to member YTD(Jun) (3320)
    \\Record #42 - Aggregate storage outlines only allow formulas in compression dimension or dynamic hierarchies.
    YTD ~ S YTD(Jun) ~ S [1st Half]
    \\Record #43 - Error adding Attribute to member YTD(Mar) (3320)
    \\Record #43 - Aggregate storage outlines only allow formulas in compression dimension or dynamic hierarchies.
    YTD ~ S YTD(Mar) ~ S [Qtr1]
    \\Record #44 - Error adding Attribute to member YTD(May) (3320)
    \\Record #44 - Aggregate storage outlines only allow formulas in compression dimension or dynamic hierarchies.
    YTD ~ S YTD(May) ~ S [Qtr1]+[Apr]+[May]
    \\Record #45 - Error adding Attribute to member YTD(Nov) (3320)
    \\Record #45 - Aggregate storage outlines only allow formulas in compression dimension or dynamic hierarchies.
    YTD ~ S YTD(Nov) ~ S [1st Half]+[Qtr3]+[Oct]+[Nov]
    \\Record #46 - Error adding Attribute to member YTD(Oct) (3320)
    \\Record #46 - Aggregate storage outlines only allow formulas in compression dimension or dynamic hierarchies.
    YTD ~ S YTD(Oct) ~ S [1st Half]+[Qtr3]+[Oct]
    \\Record #47 - Error adding Attribute to member YTD(Sep) (3320)
    \\Record #47 - Aggregate storage outlines only allow formulas in compression dimension or dynamic hierarchies.
    YTD ~ S YTD(Sep) ~ S [1st Half]+[Qtr3]
    \\Record #2 - Incorrect Dimension [Year] For Member [ParentName] (3308)
    ParentName Consolidation DataStorage MemberName Consolidation DataStorage Formula
    \\Record #1 - Member name (Promotions) already used
    S Promotions S
    \\Record #2 - Incorrect Dimension [Promotions] For Member [ParentName] (3308)
    ParentName DataStorage MemberName DataStorage
    \\Record #3 - Member name (Promotions) already used
    Promotions S Coupon S
    \\Record #4 - Member name (Promotions) already used
    Promotions S Newspaper Ad S
    \\Record #5 - Member name (Promotions) already used
    Promotions S No Promotion S
    \\Record #6 - Member name (Promotions) already used
    Promotions S Temporary Price Reduction S
    \\Record #7 - Member name (Promotions) already used
    Promotions S Year End Sale S
    \\Record #2 - Incorrect Dimension [Payment Type] For Member [ParentName] (3308)
    ParentName DataStorage MemberName DataStorage
    \\Record #2 - Incorrect Dimension [Transation Type] For Member [ParentName] (3308)
    ParentName DataStorage MemberName DataStorage
    \\Record #22 - Member name (Home Entertainment) already used
    Home Entertainment + S Home Audio/Video + S
    \\Record #23 - Member name (Home Entertainment) already used
    Home Entertainment + S Televisions + S
    \\Record #24 - Member name (Other) already used
    Other + S Computers and Peripherals + S
    \\Record #25 - Incorrect Dimension [Product] For Member [ParentName] (3308)
    ParentName Consolidation DataStorage MemberName Consolidation DataStorage
    \\Record #26 - Member name (Personal Electronics) already used
    Personal Electronics + S Digital Cameras/Camcorders + S
    \\Record #27 - Member name (Personal Electronics) already used
    Personal Electronics + S Handhelds/PDAs + S
    \\Record #28 - Member name (Personal Electronics) already used
    Personal Electronics + S Portable Audio + S
    \\Record #31 - Member name (All Merchandise) already used
    Products + S All Merchandise + S
    \\Record #32 - Member name (High End Merchandise) already used
    Products ~ S High End Merchandise ~ S
    \\Record #33 - Member name (Systems) already used
    Systems + S Desktops + S
    \\Record #34 - Member name (Systems) already used
    Systems + S Notebooks + S
    \\Record #18 - Error adding Attribute to member Digital Recorders (3320)
    Home Audio/Video + S Digital Recorders + S
    \\Record #36 - Error adding Attribute to member Flat Panel (3320)
    Televisions + S Flat Panel + S
    \\Record #37 - Error adding Attribute to member HDTV (3320)
    Televisions + S HDTV + S
    \\Record #8 - Incorrect Dimension [Income Level] For Member [ParentName] (3308)
    ParentName DataStorage MemberName DataStorage
    \\Record #1 - Member name (Geography) already used
    S Geography S
    \\Record #2 - Error adding member 27425 (3317)
    \\Record #2 - Aggregate storage outlines only allow any shared member once in a stored hierarchy, including prototype.
    A M F GREENSBORO - NC S 27425 S 336
    \\Record #3 - Error adding member 36310 (3317)
    \\Record #3 - Aggregate storage outlines only allow any shared member once in a stored hierarchy, including prototype.
    ABBEVILLE - AL S 36310 S 334
    \\Record #4 - Error adding member 29620 (3317)
    \\Record #4 - Aggregate storage outlines only allow any shared member once in a stored hierarchy, including prototype.
    ABBEVILLE - SC S 29620 S 864
    \\Record #5 - Error adding member 67510 (3317)
    \\Record #5 - Aggregate storage outlines only allow any shared member once in a stored hierarchy, including prototype.
    ABBYVILLE - KS S 67510 S 316
    \\Record #6 - Error adding member 58001 (3317)
    \\Record #6 - Aggregate storage outlines only allow any shared member once in a stored hierarchy, including prototype.
    ABERCROMBIE - ND S 58001 S 701
    \\Record #7 - Error adding member 42201 (3317)
    \\Record #7 - Aggregate storage outlines only allow any shared member once in a stored hierarchy, including prototype.
    ABERDEEN - KY S 42201 S 502
    \\Record #8 - Error adding member 21001 (3317)
    \\Record #8 - Aggregate storage outlines only allow any shared member once in a stored hierarchy, including prototype.
    ABERDEEN - MD S 21001 S 410
    \\Record #9 - Error adding member 39730 (3317)
    \\Record #9 - Aggregate storage outlines only allow any shared member once in a stored hierarchy, including prototype.
    ABERDEEN - MS S 39730 S 601
    \\Record #10 - Error adding member 28315 (3317)
    \\Record #10 - Aggregate storage outlines only allow any shared member once in a stored hierarchy, including prototype.
    ABERDEEN - NC S 28315 S 910
    \\Record #11 - Error adding member 79311 (3317)
    \\Record #11 - Aggregate storage outlines only allow any shared member once in a stored hierarchy, including prototype.
    ABERNATHY - TX S 79311 S 806
    \\Record #12 - Error adding member 79601 (3317)
    \\Record #12 - Aggregate storage outlines only allow any shared member once in a stored hierarchy, including prototype.
    ABILENE - TX S 79601 S 915
    \\Record #13 - Error adding member 79608 (3317)
    \\Record #13 - Aggregate storage outlines only allow any shared member once in a stored hierarchy, including prototype.
    ABILENE - TX S 79608 S 915
    \\Record #14 - Error adding member 79698 (3317)
    \\Record #14 - Aggregate storage outlines only allow any shared member once in a stored hierarchy, including prototype.
    Are these errors due to the data source? If yes, what could be a possible workaround?
    Is there any problem with the Essbase properties I have set? If so, why don't I get any errors when I validate the cube schema?
    Please help; I am stuck here, unable to deploy the ASO cube.
    Thanks in advance

    Hi
    I have the same problem you have.
    Did you manage to solve it?
    Thanks in advance

  • View Display Error in OBIEE with Essbase Cubes

    Hi All,
    Currently we are generating reports from Essbase cubes.
    We have a hierarchy in OBIEE, and when we try to drill down one hierarchy (Tech Executive) we get the error below.
    " Error
    View Display Error
    Odbc driver returned an error (SQLExecDirectW).
    Error Details
    Error Codes: OPR4ONWY:U9IM8TAC:OI2DL65P
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 43113] Message returned from OBIS. [nQSError: 43119] Query Failed: [nQSError: 96002] Essbase Error: Unknown Member [H1_Rel_TecExec_Ini_appl].[Unknown] used in query (HY000)
    SQL Issued: SELECT s_0, s_1, s_2, s_3, s_4, s_5 FROM ( SELECT 0 s_0, "Defect7M"."H1_Rel_TecExec_Ini_appl"."H1 Initiative Name" s_1, "Defect7M"."H1_Rel_TecExec_Ini_appl"."H1 Tech Executive" s_2, SORTKEY("Defect7M"."H1_Rel_TecExec_Ini_appl"."H1 Initiative Name") s_3, SORTKEY("Defect7M"."H1_Rel_TecExec_Ini_appl"."H1 Tech Executive") s_4, "Defect7M"."Defect7M#1"."Defect7M - measure" s_5 FROM "Defect7M" WHERE ("Defect7M"."H1_Rel_TecExec_Ini_appl"."H1 Tech Executive" = 'Unknown') ) djm "
    Can someone help me resolve this error?
    Thanks,
    SatyaB

    Satya,
    Have you done anything to modify the Essbase drill logic within your BMM?
    Remember, when modeling Essbase you should just try to use the defaults first to ensure that everything works correctly the first time through. Then you can adjust any hierarchies, federate, etc.

  • Combining relational facts with dimensions from an Essbase cube

    Hi!
    I am having trouble combining relational measures (from EBS) with dimensions from an Essbase cube. The dimensions that we want to use for reporting (drilling etc) are in an Essbase cube and the facts are in EBS.
    I have managed to import both the EBS tables and the cube into OBIEE (11.1.15), and I have created a business model on the cube. For the cube, I converted the Accounts dimension to a value-based dimension; other than that, it was basically just drag and drop.
    In this business model I created a new logical table with an LTS consisting of three tables from the relational database.
    The relational data has an account key that conforms to the member key of the Accounts dimension in the Essbase cube. So in the Accounts dimension (in the BMM layer) I mapped the relational column to the correct column (which is already mapped to the cube); this column now has two sources, the relational table and the cube. This account key is also available in the LTS of my fact table.
    The content levels for the LTS in the fact table have all been set to detail level for the accounts dimension.
    So far I am able to report on the data from the fact table (only relational data) and I can combine this report with the account key from the accounts dimension (because this column is mapped to the relational source as well as the cube). But if I expand the report with a column (from the accounts dimension) that is mapped only to the cube (the alias column that contains the description of the account key), I get an error (nQSError 14025 - see below).
    Seeing as I have modeled the facts as connected to the dimension through the common accounts key, I cannot understand why OBIEE doesn't seem to understand which other columns - from the same dimension - to fetch.
    If this had been in a relational database I could have done this very easily with SQL; something along the lines of select * from relational_fact, dim_accounts where relational_fact.account_key=dim_accounts.account_key.
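Written out, the join the poster has in mind would look like this (table and column names are the poster's illustrative ones, not real schema objects):

```sql
-- Illustrative only: join the relational fact to the accounts dimension
-- on the shared account key.
SELECT *
FROM relational_fact f
JOIN dim_accounts d
  ON f.account_key = d.account_key;
```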
    Error message:
    [nQSError: 14025] No fact table exists at the requested level of detail
    Regards
    Mogens
    Edited by: user13050224 on Jun 19, 2012 6:40 AM

    Avneet gave you the beginnings of one way, but left out a couple of things. First, you would want to export level-zero data only. Second, the export needs to be in column format, and third, you need to make sure the load rule you use is set to be additive, otherwise the last row will overwrite the previous values.
    A couple of other ways I can think of to do this:
    Create a replicated partition that maps the 3 unused dimensions to null (pick the member at the top of the dimension in your mapping area)
    Create a report script to extract the data putting the three dimensions in the page so they don't show up.
    Use the custom-defined function JExport in a calc script to get what you want
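For the export route, a minimal MaxL sketch of a level-zero, column-format export (the application/database and file names here are placeholders, not from the thread):

```
/* Export only level-zero data, in column format, so it can be reloaded
   through a load rule; app.db and lev0.txt are placeholder names. */
export database app.db level0 data in columns to data_file 'lev0.txt';
```

Remember to make the load rule additive when reloading this file.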

  • Issue in integrating Essbase cubes with OBIEE

    Hi
    I am trying to use Essbase cubes as a data source in OBIEE for generating reports, but the issue is that no columns appear in the fact table of the cube in the BMM layer.
    Outline of cube is
    Revel(cube)
    (Hierachies)
    Time Time <5> (Label Only)
    Item <54> (Label Only) (Two Pass)
    DepInst <20> (Label Only)
    SFA_Flag <2>
    Deduction_Flag <2>
    Rating_Category <6>
    PD_Band <9>
    Product <17>
    Entity <4>
    CR_Agency <5>
    I am confused about how to generate reports without measures in the fact table.
    Regards
    Sandeep

    Hi Sandeep,
    in that case it's as I thought.
    "Or did you just not specify any measure hierarchy?" - you tried this:
    "In BMM layer i made this dimension as fact and tried to create reports but not use" - which isn't the way. First of all, your cube seems to be built quite bizarrely, since it doesn't even provide a default measure hierarchy, so I'd have your Essbase guys check that.
    As for the OBIEE side: the key is the physical layer; the BMM is already too late. In the physical cube object, you must define one of the hierarchies as the measure hierarchy (since your cube doesn't seem to provide one; see above):
    [http://hekatonkheires.blogspot.com/2010/02/obieeessbase-how-to-handle-missing.html]
    Cheers,
    C.

  • Cannot Lock and Send data to an Essbase cube

    Hi all,
    One of our customers is executing a macro script to lock and send data to the Essbase cube from an Excel sheet.
    They reported several cases where users submit their data and later discover that their changes are not in Essbase.
    The calls to EssVRetrieve (to lock the blocks) and EssVSendData are both returning successfully and there is no error message received while executing the above macros.
    I reviewed the application log file and found the following message:
    [Mon Nov 24 18:59:43 2008]Local/Applicn///Warning(1080014)
    Transaction [ 0xd801e0( 0x492b4bb0.0x45560 ) ] aborted due to status [1014031].
    I analysed the above message and found that the user is trying to lock the database when a lock has already been applied and some operation is being performed on it. Because of that, the transaction was aborted. But the customer says no concurrent operation was being performed at that time.
    Can anyone help me in this regard?
    Thanks,
    Raja

    The error message for error 1014031 is 'Essbase could not get a lock in the specified time.' The first thought I have is that perhaps some users have the 'Update Mode' option set in their Essbase options and thus, when they are retrieving data, they are inadvertently locking the data blocks. If that is the case, you will probably see this issue sporadically, as the locks are automatically released when the user disconnects from Essbase.
    To make it stop, you will have to go to every user's desktop and make sure they have that Essbase option turned off. Further, you will have to look at any worksheets they may use that have an Essbase options range name stored on them. The range name is stored as a string and includes a setting for update mode. Here is a sample that I created for this post, where I first turned 'on' update mode and then turned 'off' update mode:
    A1100000001121000000001100120_01-0000
    A1100000000121000000001100120_01-0000
    Note the 11th character in the first string is '1' which indicates that Update Mode is 'on'; in the second string, it is 'off'.
    This behavior, particularly with update mode, is one of the behaviors I disliked in Excel and one that pushed me to design our Dodeca product. In Dodeca, the administrator controls all Essbase options and can either set individual options to the value they want or allow the user to choose their own options. Most of our customers do not allow the user to set update mode.
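Following Tim's note about the 11th character, here is a quick shell sketch for inspecting a stored option string; the interpretation of that character comes from this post, not from official documentation, and the sample strings are the ones above:

```shell
# Report whether the 11th character of a stored Essbase option string is '1',
# which (per this post) means Update Mode is on.
check_update_mode() {
  c=$(printf '%s' "$1" | cut -c11)
  if [ "$c" = "1" ]; then echo "update mode ON"; else echo "update mode OFF"; fi
}
check_update_mode "A1100000001121000000001100120_01-0000"   # prints: update mode ON
check_update_mode "A1100000000121000000001100120_01-0000"   # prints: update mode OFF
```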
    Tim Tow
    Applied OLAP, Inc

  • Migrating Essbase cube across versions via file system

    A large BSO cube has been taking much longer to complete a 'calc all' in Essbase 11.1.2.2 than on Essbase 9.3.1, despite all Essbase.cfg, app, and db settings being the same (https://forums.oracle.com/thread/2599658).
    As a last resort, I've tried the following-
    1. Calc the cube on the 9.3.1 server.
    2. Use EAS Migration Wizard to migrate the cube from the 9.3.1 server to the 11.1.2.2 server.
    3. File system transfer of all ess*.ind and ess*.pag from 9.3.1\app\db folder to 11.1.2.2\app\db folder (at this point a retrieval from the 11.1.2.2 server does not yet return any data).
    4. File system transfer of the dbname.esm file from 9.3.1\app\db folder to 11.1.2.2\app\db folder (at this point a retrieval from the 11.1.2.2 server returns an "unable to load database dbname" error and an "Invalid transaction status for block -- Please use the IBH Locate/Fix utilities to find/fix the problem" error).
    5. File system transfer of the dbname.tct file from 9.3.1\app\db folder to 11.1.2.2\app\db folder (and voila! Essbase returns data from the 11.1.2.2 server and numbers match with the 9.3.1 server).
    This almost seems too good to be true. Can anyone think of any dangers of migrating apps this way? Has nothing changed in file formats between Essbase 9.x and 11.x? Won't not transferring the dbname.ind and dbname.db files cause any issues down the road? Thankfully we are soon moving to ASO for this large BSO cube, so this isn't a long-term worry.
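For reference, the file-system transfer in steps 3-5 can be sketched in shell. The folder layout and file names below are placeholders standing in for the real ARBORPATH\app\db locations (the "demo setup" lines only make the sketch self-contained; in practice the files already exist and both applications should be stopped during the copy):

```shell
# Placeholder source/target db folders for the 9.3.1 and 11.1.2.2 servers.
SRC=./hyp931/app/Sample/Basic
DST=./hyp1122/app/Sample/Basic

# Demo setup only -- in practice these files already exist.
mkdir -p "$SRC" "$DST"
touch "$SRC/ess00001.ind" "$SRC/ess00001.pag" "$SRC/Basic.esm" "$SRC/Basic.tct"

cp "$SRC"/ess*.ind "$SRC"/ess*.pag "$DST"/   # step 3: index and page files
cp "$SRC"/*.esm "$DST"/                      # step 4: kernel state file
cp "$SRC"/*.tct "$DST"/                      # step 5: transaction control file
ls "$DST"
```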

    Freshly install Essbase 11.1.2.2 on Windows Server 2008 R2 with the recommended hardware specification. After installation, configure 11.1.2.2 with the DB/schema.
    Take a full data backup of the Essbase applications, using a script export or by exporting directly from the cube.
    Use the EAS Migration Wizard to migrate the Essbase applications.
    After migrating the applications successfully, reload all the data into the cube.
    For the 4th point:
    The IBH error is generally caused by a mismatch between the index file and the PAG file while executing a calculation script. Possible solutions are available.
    The recommended procedure is:
    a)Disable all logins.
    alter application sample disable connects;
    b)Forcibly log off all users.
    alter system logout session on database sample.basic;
    c)Run the MaxL statement to get invalid block header information.
    alter database sample.basic validate data to local logfile 'invalid_blocks';
    d)Repair invalid block headers
    alter database sample.basic repair invalid_block_headers;
    Thanks,
    Sreekumar Hariharan

  • How to make data in Essbase cube equal to data in DW when drilling through

    Are there standard ways in Oracle BI + Essbase to keep data in the Essbase cube and the DW equal (corresponding)?
    For example, when we are drilling down from the cube to the DW, additional (new) data may have been loaded into the DW that is not yet loaded into the cube.
    So we have a situation where data in the cube does not correspond to data in the DW.

    I think rebuilding the cube on a more frequent basis does not solve the problem; there will still be a significant delay between the moment data is loaded into the DW and the moment the cube is updated.
    I thought of creating 2 tables in DW (“new” and “old”) and 2 cubes (“new” and “old”).
    So the process of loading data will look like this:
    1. We have corresponding data in table “old” and cube “old”. User always works with “old” objects.
    2. Load data to table “new”.
    3. Load data to cube “new” from table “new”.
    4. Rename the tables and cubes: “old” to “new”, “new” to “old”. At this point users start working with the updated cube and table.
    5. Add new changes to cube and table “new” (which now hold the old data).
    6. Go to step 2.
    But this way is too expensive (storage amount doubles).
    Maybe an easier way can be found?
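On the relational side, the swap in step 4 is typically done with renames inside one transaction or maintenance window. A hedged SQL sketch with placeholder table names (the cube rename would be done analogously on the Essbase side):

```sql
-- Placeholder names; a temporary name avoids a collision during the swap.
ALTER TABLE sales_old RENAME TO sales_tmp;
ALTER TABLE sales_new RENAME TO sales_old;
ALTER TABLE sales_tmp RENAME TO sales_new;
```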
