Cube Solve Step

Can someone please help me understand what the SOLVE step does (in cube_build_log)? DB version 11.2.0.1.
In our builds it takes the longest: a partition with just 2.5 million rows takes between 6 and 8 hours to solve. I see a lot of db file sequential reads in OEM (temp and workspace tablespaces).
* What are the important components? Is this step memory, IO, or CPU bound? Is there a parameter that can be tweaked, or a way to see what is really slowing it down? Is there a way to tune the solve process?
* Also, does this step always run single-threaded?
We do have 80+ measures in our cube.
Thanks in advance.
Edited by: user11159529 on May 13, 2011 8:28 AM

The SOLVE step (typically) takes leaf data in the cube and aggregates it up the hierarchies of all the dimensions. This is usually the most expensive step in a cube build and can be either CPU or IO bound depending on the details of the machine, configuration, and schema.
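On the single-threaded question: the build is not necessarily serial. In 11.2 you can request parallel build jobs through DBMS_CUBE.BUILD; a minimal sketch, with a hypothetical cube name (verify the exact parameter list against your 11.2 PL/SQL Packages reference):

BEGIN
  DBMS_CUBE.BUILD(
    script      => 'UNITS_CUBE USING (CLEAR, LOAD, SOLVE)',  -- hypothetical cube
    parallelism => 4);  -- number of parallel build jobs; 0 means serial
END;
/

As far as I know, this parallelism applies across partitions, so the solve of a single partition still runs as one job.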
Three things that can seriously slow down a partition build are:
* A poorly tuned database. The recommended database settings are described at http://wiki.oracle.com/page/OLAP+option+-DBASample+Scripts.
* Too many dimensions. Anything under 10 should be fine; over 15 it will get very slow. Between those numbers it will depend on other factors.
* Too many measures. I don't know why this is, but large numbers of measures (close to 100, say) can cause a serious slowdown. If you could break your cube into two cubes of 40 measures each, you may find that the combined build time for both is less than the time for your current cube. Ideally you would break the measures up based on shared sparsity patterns (e.g. if measure A is generally NULL when B is NULL and vice versa, put them in the same cube). A diagnostic query against the build log is sketched below.
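To see where the time is going, you can query the build log the original poster mentions. A sketch, assuming the default CUBE_BUILD_LOG table with COMMAND, STATUS, BUILD_OBJECT and TIME columns (check the actual column names in your schema):

SELECT command, status, build_object, time
FROM   cube_build_log
WHERE  build_id = (SELECT MAX(build_id) FROM cube_build_log)  -- latest build
ORDER  BY time;

Long gaps between consecutive rows point at the slow steps.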

Similar Messages

  • Cube Solve Time when using MAX Aggregation Operator

    Hello,
    We have created a cube to implement the count distinct measure we need.
    The cube contains only one measure (COUNT) and uses the MAX operator to aggregate across all other dimensions except for the one we want to count (which uses the SUM operator). We have set the precompute percent to 60% for the bottom partition and 0% for the top partition. The cube is compressed.
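    (For reference, the relational measure such a cube emulates is an ordinary count distinct; a sketch with hypothetical table and column names:)
    SELECT region, COUNT(DISTINCT customer_id) AS distinct_customers
    FROM   sales
    GROUP  BY region;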
    The problem is that the SOLVE step for a partition, when performing a COMPLETE cube build, seems to take a very long time and uses huge amounts of TEMPORARY tablespace.
    We have successfully created another cube on the same dataset which uses the SUM operator across all dimensions.
    That cube build completed in a reasonable amount of time, even though we had 5 stored measures and 80% aggregation for the top partition.
    Is this behaviour expected when using the MAX operator?
    Thank you,
    Vicky

    Thank you, David.
    As you said we are using mixed operators because we are doing a distinct count.
    We will try setting the precompute percent to 35%, although I'm a bit worried about query performance in that case.
    Neelesh, I think Atomic Refresh was set to TRUE during the last refresh, but the cube was the only object in the build script.
    No other cubes or dimensions were maintained in the same build so I don't think it could have affected the use of TEMP tablespace.
    Generally we don't use Atomic Refresh.
    Thank you,
    Vicky

  • Rubik's cube solver

    Hi All,
    Just thought I would share a silly side project I have been working on: a G-based Rubik's cube solver.
    It is far from perfect, but it brought me some joy, so I thought I would share it with the community. It mostly seems to work. I won't go into too many details here, but basically you start off with a clean cube, apply some random operations to it, and then get the algorithm to solve it. I suppose it would be easy enough to change it to allow user-defined starting positions, but it is basically a proof of concept for the algorithm, so I haven't bothered taking it any further. Note: the solving algorithm is based on Dan Knight's 7-step solution, which works quite well once you get the hang of it. The solution is far, far from optimal; in fact, the vast majority of cubes can be solved in 17 moves or fewer (believe it or not)...
    Attached is a zip of the files. Note: it uses the OpenG 3D buttons, so I manually added that library to the zip. There are controls on the left-hand side of the screen to apply the primitive transformations to the cube if you want to do it manually.
    The randomisation uses MathScript to allow starting with a known random number seed (is this possible with the normal random functions?); if you don't have MathScript installed, feel free to use any random number generator.
    I would be interested to hear comments or see other G code for solving the cube, or if I have left something out of the zip file.
    Happy cubing...
    nrp
    ps: my record for solving the cube is about 2 mins, which is nowhere near the pros! Oh well...
    nrp
    CLA
    Attachments:
    rubiccube_main_v2 Folder.zip (1118 KB)

    Very nice!
    nrp wrote:
    The randomisation used mathscript to allow for starting with a known random number seed (is this possible with the normal random functions?)
    Use e.g. the random white noise from the "signal processing..signal generation" palette. It has a seed input.
    nrp wrote:
    I would be interested to hear comments or see other G code for solving the cube, or if I have left something out of the zip file.
    Do you know the guys who made the LabVIEW robot solver?
    LabVIEW Champion. Do more with less code and in less time.

  • ASO cube Migration Steps from Backend

    Hi Gurus,
    What would be the Essbase ASO cube migration steps from the backend?
    Edited by: Hkag on 18-Apr-2013 04:11

    Found the answer.
    Edited by: Softperson on 19/8/2010 17:53

  • Solve cube, solve single measures in a cube

    We are using OWB 10gR2 and have an AW cube with two measures: one has the solve option YES, the other NO (on the Aggregator tab of the cube editor).
    Now we tried the following: when loading the cube using a mapping with the cube operator, we set the cube operator option "Solve the cube" to YES the first time and NO the second time (cleaning the cube in between, of course). The first time ALL measures were solved; the second time NONE of them was solved. What should be the effect of specifying different solve options for the measures in a cube? The values of this option seem to be ignored anyway. Is it not possible to solve one measure and not another in the same cube?
    By the way, in the beta releases the two option values were "on load" and "on demand", instead of the "YES" and "NO" we have in 10gR2. Comparing 10gR2 and the beta releases, has more changed than the labels? Is the intended semantics still "on load" and "on demand"?
    A lot of questions! Can anybody help on that topic? Thanks!

    With non-compressed cubes it is possible to solve one measure and not another. You will also need the latest database patch (10.2.0.2; bug 4550247 has details for the 10.1 patch) for it to work properly. In the production release of OWB this should be working; I think there were issues in the betas.
    The solve options on measures indicate which measures will be included in the primary solve. The solve indicator on the cube operator in the map, however, indicates whether this solve will be executed or not. So the map can just load data, or load and solve the data. There is a transformation function for executing solves, so solves can be scheduled independently from loading. It is also possible to solve measures independently from each other using this function (WB_OLAP_AW_PRECOMPUTE).
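    A minimal sketch of invoking it from PL/SQL; the signature here is assumed (check the OWB 10gR2 transformation reference for the exact parameters), and the AW, cube, and measure names are hypothetical:
    DECLARE
      v_result VARCHAR2(4000);  -- return value assumed; the real function may differ
    BEGIN
      v_result := WB_OLAP_AW_PRECOMPUTE(
                    p_aw_name      => 'SALES_AW',    -- hypothetical AW
                    p_cube_name    => 'SALES_CUBE',  -- hypothetical cube
                    p_measure_name => 'REVENUE');    -- solve just this measure
      DBMS_OUTPUT.PUT_LINE(v_result);
    END;
    /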
    Hope this helps.

  • Firefox cannot load websites but other programs can. I followed the problem-solving steps, but I still cannot access any website.

    I followed these steps (originally in Spanish):
    Firefox connection settings
    If you connect to the Internet through a proxy server that is having connection problems, you will not be able to load web pages. To check Firefox's proxy settings:
    1. At the top of the Firefox window, on the menu bar, click the Tools menu and select Options....
    2. Select the Advanced panel.
    3. Select the Network tab.
    4. In the Connection section, click Settings....
    5. Change the proxy settings:
    ◦ If you do not connect to the Internet through a proxy (or you do not know whether you do), select No proxy.
    ◦ If you connect to the Internet through a proxy, compare Firefox's settings with those of another browser (such as Internet Explorer; see Microsoft's proxy configuration guide) (or Safari; see Apple's proxy configuration guide).
    6. Close the Connection Settings window.
    7. Click OK to close the Options window.

    You need to allow the loopback connection in Firefox, as an "outgoing" connection.

  • How to find out Cube Size (Step by step process)

    Hi all,
    Can anybody tell me how I can find out the cube size?
    Thanks in advance.
    Vaibhav A.

    Hi,
    try Tcode DB02
    and
    A simplified estimation of disk space for the BW can be obtained by using the following formula:
    For each cube:
    Size in bytes =
    [(n + 3) x 10 bytes + (m x 17 bytes)]
    x (rows of initial load + rows of periodic load x no. of periods)
    n = number of dimensions
    m = number of key figures
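    As a quick worked example of the formula (all numbers invented): n = 8 dimensions, m = 20 key figures, 2,500,000 rows of initial load plus 500,000 rows per period over 12 periods:
    SELECT ((8 + 3) * 10 + 20 * 17)                 -- 450 bytes per fact row
           * (2500000 + 500000 * 12) AS size_bytes  -- 8,500,000 rows
    FROM dual;
    -- 450 * 8,500,000 = 3,825,000,000 bytes, roughly 3.6 GB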
    For more details please read the following:
    Estimating an InfoCube
    When estimating the size of an InfoCube, tables like fact and dimension tables are considered.
    However, the size of the fact table is the most important, since in most cases it will be 80-90% of the
    total storage requirement for the InfoCube.
    When estimating the fact table size consider the effect of compression depending on how many
    records with identical dimension keys will be loaded.
    The amount of data stored in the PSA and ODS has a significant impact on the disk space required. If data is kept in the PSA on more than a temporary basis, it is possible that more than 50% of the total disk space will be allocated for this purpose.
    Dimension Tables
    • Identify all dimension tables for this InfoCube.
    • The size and number of records need to be estimated for a dimension table record. The size of one record can be calculated by summing the number of characteristics in the dimension table at 10 bytes each. Also, add 10 bytes for the key of the dimension table.
    • Estimate the number of records in the dimension table.
    • Adjust the expected number of records in the dimension table by expected growth.
    • Multiply the adjusted record count by the expected size of the dimension table record to obtain the estimated size of the dimension table.
    Fact Tables
    • Count the number of key figures the table will contain, assuming a key figure requires 17 bytes.
    • Every dimension table requires a foreign key in the fact table, so add 6 bytes for each key. Don't forget the three standard dimensions.
    • Estimate the number of records in the fact table.
    • Adjust the expected number of records in the fact table by expected growth.
    • Multiply the adjusted record count by the expected size of the fact table record to obtain the estimated size of the fact table.
    Regards,
    Marasa.

  • (SOLVED) Step 2.9 on Beginner's Guide - Help

    I am 14 years old and pretty much a noob at networking and Linux.
    OK, I am in CHROOT. Just wondering: is step 2.9 optional if you already have internet access? I just did ping google.com and it worked.
    I don't see it saying anywhere: "If you have internet access then you can skip this step".
    Last edited by gogobebe2 (2013-12-26 00:04:03)

    Yes and no. Of course it's optional, even if pings don't work (someone might not have internet access at the time of installation), but if you do have access and want to set up your network, 2.9 is... still optional. You can set it up now, or in a few weeks.
    In the end you'll need to configure and enable the network profile you are going to use.
    Most live installations use DHCP (dynamic IP assignment), but you might not want DHCP for your computer, so I presume it comes enabled on the installation media but disabled once installed (not sure about this, though).
    Just decide how you want to set up your network (wired/wireless, dynamic/static IP) and proceed.

  • CUBE reporting; step-by-step guide required

    Hi,
    I'm new to Oracle and have been assigned the job of creating a CUBE reporting tool for a financial application. I have two questions:
    1. On the server side, what am I required to do? The database is fairly complex (fully normalized). I have read a few articles on creating dimensions, creating new databases, etc. Can anyone suggest the best approach and/or point me to a tutorial on the process and the tools required?
    2. We want a customizable reporting tool; we're using ASP.NET. Does anyone know of a good tool for this (in relation to question 1)?

    You mean an n-dimensional OLAP cube and not the CUBE() grouping function? Well, I would suggest you use the built-in functionality rather than rolling your own, so the best place to start is the documentation. Note that OLAP is only available with the Enterprise licence.
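    For contrast, CUBE() is a GROUP BY extension available in plain SQL; a small example with hypothetical table and column names:
    -- Computes subtotals for every combination of the listed columns,
    -- including the grand total:
    SELECT region, product, SUM(amount) AS total_amount
    FROM   sales
    GROUP  BY CUBE(region, product);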
    Cheers, APC

  • Webi report: Issue Cannot retrieve dimensions from Cube

    Hello Experts,
    We're facing an issue with a particular set of Webi reports in the Quality environment. The reports are BW reports based on OLAP BICS connectivity on top of Bex queries.
    Below is the scenario and the issue:
    1. The backend Bex queries have been transported to the Quality BW system and run fine from RSRT or Bex Analyzer.
    2. The Webi reports along with the BICS connections have been migrated to the Quality box, and the BICS connections have been edited so that they point to the Quality box Bex queries; they test and respond to the connection successfully.
    3. The Webi reports pointed at the Dev box in the Dev BOBJ system work fine, but the same reports whose connections were repointed in Quality as above give the error "Cannot Retrieve Dimensions from Cube".
    Steps taken:
    a. Deleted the reports and the BICS connection from the Quality environment and remigrated the reports and connections, then changed the connections, but the issue persists.
    b. Tried finding the root cause and found the following in the Bex query in Quality: there are filters in the Bex query, and all of them work fine except 0CALMONTH. We tried putting all sorts of variables on 0CALMONTH, but the moment a variable is put on 0CALMONTH this issue occurs. When we removed the variable from 0CALMONTH in the filters and restricted it to a single value, the report worked fine.
    However, this is not an issue in the Dev box, where all queries with similar variables work fine.
    So we're thinking this might be an issue on the BW backend, and the cube might have to be transported again to the Quality box.
    Any quick suggestions or inputs on this would be greatly appreciated.
    Thanks and regards,
    Abhishek

    Hi Abhishek,
    I am facing the same issue. Have you resolved it?
    Regards,
    Rajesh

  • How can I get a digital WDT that includes all samples, not just the one for the current time step?

    See the block labeled "digital data" in my attachment for reference. Currently, only the digital data point for the current time step can be seen (it is deleted before the next one appears). However, I would like it to display all the samples in a table, like the example found at this link under "Digital Waveform Control":
    http://zone.ni.com/reference/en-XX/help/371361H-01/lvconcepts/fp_controls_indicators/ 
    Many thanks for any suggestions! I am new to LabVIEW, so I appreciate your help.
    Attachments:
    myproject.vi (220 KB)

    Hey westerman111,
    If you're looking to have your display include the solution information from previous solver steps, you will need to buffer the previous data. The way to implement this in a Control Design & Simulation Loop is with the Memory.vi found under Control Design & Simulation > Simulation > Utilities > Memory.vi. It will allow you to save information generated earlier in the simulation for later solution steps.
    I've attached an example that should get you started in using the Memory.vi.
    I would also make sure that what you're looking to accomplish is suitable for the Control & Simulation Loop. I know you mentioned you're new to LabVIEW, so I wanted to make sure you're heading off in the right direction. Is there a particular reason why you are using the Control & Simulation Loop instead of a standard While or For Loop? The Control Design & Simulation Loop is unique in that it calculates the solution of a dynamic system at a prescribed time step with an ODE solver. It also provides the tools to interact with the model you are solving during execution. However, if you are simply looking to perform data acquisition and measurements (instead of dynamic model simulation), I would recommend using standard LabVIEW functions.
    Here are some useful references for getting started with both LabVIEW and the Control Design and Simulation Module.
    Tutorial: Getting Started with Simulation (Control Design and Simulation Module)
    http://zone.ni.com/reference/en-XX/help/371894G-01/lvsimhowto/sim_h_gs/
    Getting Started with LabVIEW
    http://digital.ni.com/manuals.nsf/websearch/ba2fb433a7940e7a862579d40070cc2c
    Tim A.
    National Instruments
    Attachments:
    myproject_edit.vi (249 KB)

  • Cache and performance issue in browsing SSAS cube using Excel for first time

    Hello Group Members,
    I am facing a cache and performance issue the first time I open an SSAS cube connection using Excel (via Data tab -> From Other Sources -> From Analysis Services) after the daily cube refresh. On an end user's system (8 GB RAM), the first open takes 10 minutes; from the next run onwards it opens quickly, within 10 secs.
    We have a daily ETL process running on high-end servers. The dedicated SSAS cube server has 8 cores and 64 GB RAM. In total we have 4 cubes, of which 3 get a full cube refresh and 1 an incremental refresh. We have seen that after the daily cube refresh it takes 10-odd minutes to open a cube on an end user's system, and from then on it opens really fast, within 10 secs. After a cube refresh, on server systems (16 GB RAM), it takes 2-odd minutes to open the cube.
    Is there any way we could reduce the time taken for the first attempt?
    Best Regards, Arka Mitra.

    Thanks Richard and Charlie,
    We have implemented the solutions/suggestions in our DEV environment and have seen a definite improvement. We are waiting for this to be deployed in the UAT environment to note the actual performance and time improvement when browsing the cube for the first time after the daily cube refresh.
    Guys,
    This is what we have done:
    We have 4 cube databases and each cube db has 1-8 cubes.
    1. We are doing daily cube refresh using SQL jobs as follows:
    <Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
    <Parallel>
    <Process xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2" xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100" xmlns:ddl200="http://schemas.microsoft.com/analysisservices/2010/engine/200" xmlns:ddl200_200="http://schemas.microsoft.com/analysisservices/2010/engine/200/200">
    <Object>
    <DatabaseID>FINANCE CUBES</DatabaseID>
    </Object>
    <Type>ProcessFull</Type>
    <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
    </Process>
    </Parallel>
    </Batch>
    2. Next we are creating a separate SQL job (Cache Warming - Profitability Analysis) for cube cache warming for each single cube in each cube db like:
    CREATE CACHE FOR [Profit Analysis] AS
    {[Measures].members}
    *[TIME].[FINANCIAL QUARTER].[FINANCIAL QUARTER]
    3. Finally after each cube refresh step, we are creating a new step of type T-SQL where we are calling these individual steps:
    EXEC dbo.sp_start_job N'Cache Warming - Profit Analysis';
    GO
    I will update the post after I receive the actual improvement figures from the UAT/Production environment.
    Best Regards, Arka Mitra.

  • How to create an unsolved cube with AWM?

    hi all,
    I read the "Oracle OLAP Developer's Guide to the OLAP API" and found there are two types of cubes: solved and unsolved. The document says: "... if all the data for a cube is specified by the DBA, then the cube is considered to be Solved. If some or all of the aggregate data must be calculated by Oracle OLAP, then the cube is unsolved ..."
    I tried with AWM 10.2.0.3.0A to create an unsolved cube, but I can't: all the cubes I created are solved cubes. To find out whether a cube is solved or unsolved, I wrote a program in Java to read information from the MTM package.
    Can someone tell me how to create an unsolved cube with AWM or other software, please?

    SH is a relational OLAP data model, which is quite different from the GLOBAL schema, which is based on an Analytic Workspace.
    If you change the aggregation method you will need to re-compute the whole cube, which can be a very big job! You might be able to force the unsolved status by de-selecting all the levels on the Rules tab in AWM. However, I think that by default analytic workspace OLAP models always present a fully solved cube to the outside world. This is the nature of the multi-dimensional model.
    Relationally, since keys are located in separate columns, a cube can be unsolved in that the key column contains values for only a single level from the corresponding dimension tables. If keys for more than one level of the same dimension appear in the fact key column, the cube is deemed solved.
    Therefore, I am not sure you are going to get the information you require from the API. To change the aggregation method you will have to switch off all pre-compute options and also disable the session cache, to prevent previously calculated data being returned when you change the aggregation method.
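    A sketch of checking that relational condition, with hypothetical names throughout:
    -- Counts how many distinct dimension levels appear in the fact table's
    -- key column; more than one suggests embedded-total (solved) data.
    SELECT COUNT(DISTINCT d.level_name) AS levels_present
    FROM   sales_fact f
    JOIN   time_dim   d ON d.dim_key = f.time_key;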
    Hope this helps
    Keith Laker
    Oracle EMEA Consulting
    BI Blog: http://oraclebi.blogspot.com/
    DM Blog: http://oracledmt.blogspot.com/
    BI on Oracle: http://www.oracle.com/bi/
    BI on OTN: http://www.oracle.com/technology/products/bi/
    BI Samples: http://www.oracle.com/technology/products/bi/samples/

  • [SOLVED] Kernel panic for an unknown reason

    [Solved] Steps done to solve it:
    0. Panic because I couldn't play starcraft2 (this step is sooo important! ;)
    1. Updated my mirrorlist.
    2. Forced a refresh on the package list from the new mirrorlist. "pacman -Syy"
    3. Downgraded the kernel.
    4. Deleted the latest kernel package from the pacman cache.
    5. Updated the system. (pacman -Syu)
    6. Thanked lilsirecho for helping out!
    7. Profit
    Hi all
    I just did a "pacman -Syu" this afternoon and after that I'm getting kernel panics for unknown reasons.
    edit: I had kernel 3.0.4 befure syu and I have kernel 3.0.7 now. I tried downgrading wine and the kernel with no success.
    It first happened when I tried to run StarCraft2 with wine. I even created a post looking for help because of that.
    Later on I tested another game, just in case, and it crashed too.
    Looks like a wine problem, right? Or even a graphics driver problem.
    That's what I thought too. But then I tried to update with "yaourt -Syu --aur" and it froze once again during the first package compression. And it fails every time I try to update with yaourt during the compression phase.
    What do these situations have in common?
    My opinion is that they all require a lot of resources. StarCraft 2 puts my PC at 100% almost all the time. The other game is pretty old, but it froze during an "environment loading" phase, right when the map was loading. The compression phase of yaourt also consumes a lot of resources.
    This is my view of the problem, but I could be completely wrong.
    What I need is help finding the real source of these kernel panics. I don't know where to look for the logs or the error reports when a kernel panic occurs.
    I hope someone can help me trace the problem somehow. I think I'm lost.
    ty in advance.
    cheers!
    Last edited by fatum (2011-11-10 23:16:10)

    lilsirecho wrote:
    Possibly caused by a mirror download.
    Perhaps you need to revert kernel and insure you have the latest mirrorlist and then syu again.
    When I saw your response I thought: "Why should that be true? I downgraded the kernel with no success. Doesn't make much sense".
    But then I did what you said:
    Downgraded the kernel back to 3.0.4.
    Moved the latest mirrorlist.pacnew I had to be the mirrorlist in use.
    Then did a "pacman -Syyu".
    And, miraculously, it works fine now.
    How in the world did you know that could be the reason? It would have been the last possibility I would have thought about, no doubt about that
    Thank you very much lilsirecho. Your post really helped me.
    Not gonna mark this as SOLVED yet. Yesterday I marked my other thread as solved too soon and I regretted my decision.
    I will leave a 1week time-frame to be 100% sure that it is fixed, just in case.

  • Cube content deletion is taking more time than usual.

    Hi Experts,
    We have a process chain which ideally should run every two hours. This chain has a "delete cube content" step before the new data is loaded into the cube. One run of the chain finishes fine while another takes much longer, so the slowdown is quite intermittent.
    In the process chain we are also deleting contents from the dimension tables (in the delete content step). We need your inputs to improve the performance of this step.
    Thanks & Regards
    Mayank Tyagi.

    Hi Mayank ,
    You can delete the indexes of the cube before deleting its contents. The concept is the same as with data loading: loads happen faster when indexes are deleted.
    If you have aggregates on this cube, they will also be adjusted.
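    The same idea in generic (non-BW) SQL, purely as an illustration with hypothetical names; in BW itself you would use the standard index delete/create process chain steps instead:
    DROP INDEX fact_sales_ix;                   -- drop the secondary index first
    DELETE FROM fact_sales WHERE load_id = 42;  -- the mass delete runs faster now
    CREATE INDEX fact_sales_ix ON fact_sales (load_id);  -- rebuild afterwards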
    Kind Regards,
    Ashutosh Singh
