Multiple Cube Refresh

Hi,
We have multiple cubes in a single analytic workspace, and we need to refresh more than one cube in parallel using the DBMS_CUBE package. Is that feasible? Can two cubes refresh in parallel? Won't there be a conflict over the read-write attach on the analytic workspace?
rgds,
Prakash S

This should work:

exec dbms_cube.build('cubeA, cubeB', parallelism => 2)

All partitions of cubeA will be built before cubeB is started, but if parallelism is sufficiently high, then they should both begin. It will not work to call dbms_cube.build in two different sessions.
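
If you want explicit control over the build steps as well, the same script syntax accepts a USING clause per cube. A minimal sketch (cubeA, cubeB, and the chosen steps are placeholders for your own):

-- One call covering two cubes, each with its own build steps;
-- a higher parallelism lets partitions of both cubes be processed at once.
exec dbms_cube.build('cubeA USING (CLEAR LEAVES, LOAD, SOLVE), cubeB USING (CLEAR LEAVES, LOAD, SOLVE)', parallelism => 4)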

Similar Messages

  • Multiple Cubes refresh in parallel

    Hi
    I have an analytic workspace where I have modelled a set of conformed dimensions and some dimensions specific to particular subject areas. There will be multiple cubes (some partitioned, some not) in this analytic workspace.
    I would like to know if these cubes can be refreshed in parallel. I tried the parallelism parameter provided by DBMS_CUBE and kicked off a refresh of two cubes, but when I check CUBE_BUILD_LOG the slave process is always 0 and the execution seems to have happened serially (see the query sketch after the log excerpt below).
    Please suggest how these cubes can be refreshed in parallel.
    Thanks

    FAILED RECORDS FROM THE LOG :
    ==========================
    127     0     FAILED     BUILD          BUILD     "(CLOB) <ERROR>
    <![CDATA[
    XOQ-01707: Oracle job "IncrMyCBMV_JOB$_812" failed while executing slave build "GLOBAL.UNITS_CUBE USING (CLEAR LEAVES, LOAD, SOLVE) AS OF SCN 1533406" with error "37162: ORA-37162: OLAP error
    XOQ-00703: error executing OLAP DML command "(UPDATE GLOBAL.GLOBAL : ORA-37605: error during OLAP AW UPDATE
    ORA-00600: internal error code, arguments: [kdliLockBlock], [9708], [16859386], [0], [0], [0], [0], [], [], [], [], []
    ORA-06512: at "SYS.DBMS_CUBE", line 234
    ORA-06512: at "SYS.DBMS_CUBE", line 316
    ORA-06512: at line 1
    ".]]>>
    </ERROR>"     GLOBAL     GLOBAL               02-APR-13 12.25.43.702000000 PM ASIA/CALCUTTA     (CLOB) BUILD price_cube, units_cube     DBMS_CUBE     0               4542     0     0     2     IncrMyCBMV
    127     0     FAILED     SLAVE     UNITS_CUBE     CUBE          GLOBAL     GLOBAL     P22:1999.10     IncrMyCBMV_JOB$_821     02-APR-13 12.25.42.673000000 PM ASIA/CALCUTTA     (CLOB) GLOBAL.UNITS_CUBE USING (CLEAR LEAVES, LOAD, SOLVE) AS OF SCN 1533406     DBMS_CUBE     1          S     6288     230     0     3     IncrMyCBMV
    127     0     FAILED     SLAVE     UNITS_CUBE     CUBE          GLOBAL     GLOBAL     P24:1999.12     IncrMyCBMV_JOB$_819     02-APR-13 12.25.32.533000000 PM ASIA/CALCUTTA     (CLOB) GLOBAL.UNITS_CUBE USING (CLEAR LEAVES, LOAD, SOLVE) AS OF SCN 1533406     DBMS_CUBE     1          S     6272     228     0     3     IncrMyCBMV
    127     0     FAILED     SLAVE     UNITS_CUBE     CUBE          GLOBAL     GLOBAL     P25:2000.01     IncrMyCBMV_JOB$_818     02-APR-13 12.25.30.505000000 PM ASIA/CALCUTTA     (CLOB) GLOBAL.UNITS_CUBE USING (CLEAR LEAVES, LOAD, SOLVE) AS OF SCN 1533406     DBMS_CUBE     1          S     6259     227     0     3     IncrMyCBMV
    127     0     FAILED     SLAVE     UNITS_CUBE     CUBE          GLOBAL     GLOBAL     P26:2000.02     IncrMyCBMV_JOB$_817     02-APR-13 12.25.28.477000000 PM ASIA/CALCUTTA     (CLOB) GLOBAL.UNITS_CUBE USING (CLEAR LEAVES, LOAD, SOLVE) AS OF SCN 1533406     DBMS_CUBE     1          S     6258     226     0     3     IncrMyCBMV
    127     0     FAILED     SLAVE     UNITS_CUBE     CUBE          GLOBAL     GLOBAL     P27:2000.03     IncrMyCBMV_JOB$_816     02-APR-13 12.25.26.449000000 PM ASIA/CALCUTTA     (CLOB) GLOBAL.UNITS_CUBE USING (CLEAR LEAVES, LOAD, SOLVE) AS OF SCN 1533406     DBMS_CUBE     1          S     6237     225     0     3     IncrMyCBMV
    127     0     FAILED     SLAVE     UNITS_CUBE     CUBE          GLOBAL     GLOBAL     P28:2000.04     IncrMyCBMV_JOB$_815     02-APR-13 12.25.24.421000000 PM ASIA/CALCUTTA     (CLOB) GLOBAL.UNITS_CUBE USING (CLEAR LEAVES, LOAD, SOLVE) AS OF SCN 1533406     DBMS_CUBE     1          S     6235     224     0     3     IncrMyCBMV
    127     0     FAILED     SLAVE     UNITS_CUBE     CUBE          GLOBAL     GLOBAL     P29:2000.05     IncrMyCBMV_JOB$_814     02-APR-13 12.25.22.393000000 PM ASIA/CALCUTTA     (CLOB) GLOBAL.UNITS_CUBE USING (CLEAR LEAVES, LOAD, SOLVE) AS OF SCN 1533406     DBMS_CUBE     1          S     6234     223     0     3     IncrMyCBMV
    127     0     FAILED     SLAVE     UNITS_CUBE     CUBE          GLOBAL     GLOBAL     P30:2000.06     IncrMyCBMV_JOB$_813     02-APR-13 12.25.20.349000000 PM ASIA/CALCUTTA     (CLOB) GLOBAL.UNITS_CUBE USING (CLEAR LEAVES, LOAD, SOLVE) AS OF SCN 1533406     DBMS_CUBE     1          S     6232     222     0     3     IncrMyCBMV
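
    A minimal sketch of checking for slave activity directly in CUBE_BUILD_LOG (column names as documented for the 11.2 cube build log; BUILD_ID 127 is taken from the log excerpt above):

    -- Rows with SLAVE_NUMBER > 0 mean slave builds were spawned;
    -- if every row shows SLAVE_NUMBER = 0, the build ran serially.
    SELECT slave_number, status, command, build_object, time
    FROM   cube_build_log
    WHERE  build_id = 127
    ORDER  BY time;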

  • I see 'enq: JI - contention' when building multiple cubes/partitions

    Version 11.2.0.3
    I can successfully build multiple partitions of a cube simultaneously by supplying the degree of parallelism that I want. I can also build multiple cubes and multiple partitions of multiple cubes by submitting separate jobs (one per cube) with parallelism set in the job (for number of partitions per job/cube).
    My goal was to refresh 2 cubes simultaneously, 2 partitions in parallel each, so that 4 partitions total were refreshing simultaneously. There were sufficient hardware resources (memory and processes) to do this. I tried to submit 2 jobs, one for each cube, with parallel 2 on each.
    What happens is that 3 partitions start loading, not 4. The smaller of the 2 cubes loads 2 partitions at a time, but the larger of the cubes starts loading only 1 partition and the other partition process waits with JI - contention.
    I understand that JI contention relates to one materialized view refresh blocking another refresh of the same MV. Yet simultaneous refresh of different partitions is supported for cube MVs.
    Because I see the large cube having the problem but not the smaller one, I wonder if adding more hash partitions to the AW$ (analytic workspace) table would allow more concurrent update processes. We have a high enough setting for processes and job_queue_processes, and enough available threads, etc.
    Will more hash subpartitions on the AW$ table allow for more concurrency for cube refreshes?

    It looks like the JI contention was coming from having multiple jobs submitted to update the SAME cube (albeit different partitions). Multiple jobs for different cubes (at most one job per cube) seem to avoid this issue. I thought there was only one job per cube, but that was not true.
    Still, if someone has some insight into creating more AW hash subpartitions, I'd like to hear it. I know how to do it, but I am not sure what the impact will be on load or solve times. I have read a few sources online indicating that it is a good idea to have as many subpartitions as logical cube partitions, and that it is a good idea to set the subpartition number to a power of two to ensure good balance.
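
    As a starting point, here is a sketch of checking the current subpartition count (AW$GLOBAL is a placeholder for your own workspace's backing table):

    -- Each analytic workspace is stored in a table named AW$<workspace_name>;
    -- the subpartition count is what the question above proposes increasing.
    SELECT table_name, COUNT(*) AS subpartition_count
    FROM   user_tab_subpartitions
    WHERE  table_name = 'AW$GLOBAL'
    GROUP  BY table_name;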

  • Number of users logged in during cube refresh activity

    I am using Hyperion 11.1.2.1 and running the Planning cube refresh activity using \Hyperion\Planning\bin\CubeRefresh.cmd.
    Is there any way to find out the total number of users logged in to a particular Planning application while the cube refresh activity is running for that application?

    I don't think the Planning repository has information about sessions.
    Let us know if you succeed.
    I was not talking about Essbase as in "Essbase". All users performing an activity in Planning (data refresh, data load, running a business rule) will show up in EAS sessions, so if you run the MaxL display session statement against the Planning application it will give you all the users.
    Now, OP, a search for "line count windows cmd" will give you ideas on how to count the lines, and that count will be the number of users (well, not exactly the number of users, but the number of sessions):
    Count lines in multiple files using Windows command prompt
    Regards
    Celvin

  • Cube refresh fails with an error below

    Hi,
    We are experiencing the problem below during a Planning application database refresh. We have been refreshing the database every day, but all of a sudden the following error started appearing in the log:
    Cube refresh failed with error: java.rmi.UnmarshalException: error unmarshalling return; nested exception is:
    java.io.EOFException
    When the database refresh is done manually from Workspace, it completes successfully, but when triggered from a Unix script it throws the above error.
    Could this be related to a provisioning issue, such as the user having been removed from MSAD? Please help me out on this.
    Thanks,
    mani
    Edited by: sdid on Jul 29, 2012 11:16 PM

    I work with 'sdid' and here is a better explanation of what exactly is going on.
    As part of our nightly schedule we have a unix shell script that executes refresh of essbase cubes from planning using the 'CubeRefresh.sh' shell script.
    Here is what our shell invocation looks like:
    /opt/hyperion/Planning/bin/CubeRefresh.sh /A:<cube name> /U:<user id> /P:<password> /R /D /FS
    Here is what 'CubeRefresh.sh' looks like -
    PLN_JAR_PATH=/opt/hyperion/Planning/bin
    export PLN_JAR_PATH
    . "${PLN_JAR_PATH}/setHPenv.sh"
    "${HS_JAVA_HOME}/bin/java" -classpath ${CLASSPATH} com.hyperion.planning.HspCubeRefreshCmd $1 $2 $3 $4 $5 $6 $7
    And here is what 'setHPenv.sh' looks like -
    HS_JAVA_HOME=/opt/hyperion/common/JRE/Sun/1.5.0
    export HS_JAVA_HOME
    HYPERION_HOME=/opt/hyperion
    export HYPERION_HOME
    PLN_JAR_PATH=/opt/hyperion/Planning/lib
    export PLN_JAR_PATH
    PLN_PROPERTIES_PATH=/opt/hyperion/deployments/Tomcat5/HyperionPlanning/webapps/HyperionPlanning/WEB-INF/classes
    export PLN_PROPERTIES_PATH
    CLASSPATH=${PLN_JAR_PATH}/HspJS.jar:${PLN_PROPERTIES_PATH}:${PLN_JAR_PATH}/hbrhppluginjar:${PLN_JAR_PATH}/jakarta-regexp-1.4.jar:${PLN_JAR_PATH}/hyjdbc.jar:${PLN_JAR_PATH}/iText.jar:${PLN_JAR_PATH}/iTextAsian.jar:${PLN_JAR_PATH}/mail.jar:${PLN_JAR_PATH}/jdom.jar:${PLN_JAR_PATH}/dom.jar:${PLN_JAR_PATH}/sax.jar:${PLN_JAR_PATH}/jaxp-api.jar:${PLN_JAR_PATH}/classes12.zip:${PLN_JAR_PATH}/db2java.zip:${PLN_JAR_PATH}/db2jcc.jar:${HYPERION_HOME}/common/CSS/9.3.1/lib/css-9_3_1.jar:${HYPERION_HOME}/common/CSS/9.3.1/lib/ldapbp.jar:${PLN_JAR_PATH}/log4j.jar:${PLN_JAR_PATH}/log4j-1.2.8.jar:${PLN_JAR_PATH}/hbrhppluginjar.jar:${PLN_JAR_PATH}/ess_japi.jar:${PLN_JAR_PATH}/ess_es_server.jar:${PLN_JAR_PATH}/commons-httpclient-3.0.jar:${PLN_JAR_PATH}/commons-codec-1.3.jar:${PLN_JAR_PATH}/jakarta-slide-webdavlib.jar:${PLN_JAR_PATH}/ognl-2.6.7.jar:${HYPERION_HOME}/common/CLS/9.3.1/lib/cls-9_3_1.jar:${HYPERION_HOME}/common/CLS/9.3.1/lib/EccpressoAll.jar:${HYPERION_HOME}/common/CLS/9.3.1/lib/flexlm.jar:${HYPERION_HOME}/common/CLS/9.3.1/lib/flexlmutil.jar:${HYPERION_HOME}/AdminServices/server/lib/easserverplugin.jar:${PLN_JAR_PATH}/interop-sdk.jar:${PLN_JAR_PATH}/HspCopyApp.jar:${PLN_JAR_PATH}/commons-logging.jar:${CLASSPATH}
    export CLASSPATH
    case $OS in
    HP-UX)
    SHLIB_PATH=${HYPERION_HOME}/common/EssbaseRTC/9.3.1/bin:${HYPERION_HOME}/Planning/lib:${SHLIB_PATH:-}
    export SHLIB_PATH
    ;;
    SunOS)
    LD_LIBRARY_PATH=${HYPERION_HOME}/common/EssbaseRTC/9.3.1/bin:${HYPERION_HOME}/Planning/lib:${LD_LIBRARY_PATH:-}
    export LD_LIBRARY_PATH
    ;;
    AIX)
    LIBPATH=${HYPERION_HOME}/common/EssbaseRTC/9.3.1/bin:${HYPERION_HOME}/Planning/lib:${LIBPATH:-}
    export LIBPATH
    ;;
    *)
    echo "$OS is not supported"
    ;;
    esac
    We have not made any changes to the shell invocation, 'CubeRefresh.sh', or 'setHPenv.sh'.
    For the past couple of days, the job that executes 'CubeRefresh.sh' has been failing with the error message below.
    Cube refresh failed with error: java.rmi.UnmarshalException: error unmarshalling return; nested exception is:
    java.io.EOFException
    This error is causing our Essbase cubes to not get refreshed from Planning through these batch jobs.
    On the other hand, the manual refresh from within Planning works.
    We are on Hyperion® Planning – System 9 - Version : 9.3.1.1.10
    Any help on this would be greatly appreciated.
    Thanks
    Andy
    Edited by: Andy_D on Jul 30, 2012 9:04 AM

  • Deleting a bad request from multiple cubes at a time?

    How do we delete a bad request from (or reconstruct) all the cubes at once, when a single InfoSource loads into multiple cubes?

    hi Bharath,
    try these links.
    http://help.sap.com/saphelp_nw04s/helpdata/en/ca/5c7b3cbd556915e10000000a114084/frameset.htm
    https://www.sdn.sap.com/irj/sdn/wiki?path=/display/espackages/maintenance%2brequest
    http://help.sap.com/saphelp_nw04/helpdata/en/80/1a65a8e07211d2acb80000e829fbfe/frameset.htm
    hope it helps.

  • Performance issue in browsing SSAS cube using Excel for first time after cube refresh

    Hello Group Members,
    This is a continuation of my earlier blog question -
    https://social.msdn.microsoft.com/Forums/en-US/a1e424a2-f102-4165-a597-f464cf03ebb5/cache-and-performance-issue-in-browsing-ssas-cube-using-excel-for-first-time?forum=sqlanalysisservices
    As that thread is marked as answered but my issue is not resolved, I am creating a new thread.
    I am facing a cache and performance issue the first time I open an SSAS cube connection using Excel (Data tab -> From Other Sources -> From Analysis Services) after the daily cube refresh. On an end user's system (8 GB RAM, but around 4 GB available), the first attempt takes 10 minutes to open the cube. From the next run onwards, it opens quickly, within 10 seconds.
    We have a daily ETL process running on high-end servers. The dedicated SSAS cube server has 8 cores and 64 GB RAM. In total we have 4 cube DBs, of which 3 get a full cube refresh and 1 an incremental refresh. We have seen that after the daily cube refresh it takes 10-odd minutes to open a cube on an end user's system; from the next time onwards it opens really fast, within 10 seconds. After a cube refresh, on the server systems (32 GB RAM, around 4 GB available), it takes 2-odd minutes to open the cube.
    Is there any way we could reduce the time taken for the first attempt?
    As mentioned in my previous thread, we have already implemented cube cache warming, but there is no improvement.
    Currently the cumulative size of all 4 cube DBs is more than 9 GB in production, with each cube DB containing 4 cubes on average and the largest cube DB being 3.5 GB. Now, the question is: how does Excel work with an SSAS cube after the daily cube refresh?
    Does Excel create a cache of the schema and data each time the cube is refreshed, and in doing so need to download the cube schema into Excel's memory? Downloading the schema and data of each cube database from server to client will take significant time depending on the bandwidth of the network connection.
    Is it in any way dependent on the client system's RAM? Today the biggest cube DB is 3.5 GB; tomorrow it will be 5-6 GB. Though the client system RAM is 8 GB, the available (free) RAM is around 4 GB. So what will happen then?
    Best Regards, Arka Mitra.

    Could you run the following two DMV queries filling in the name of the cube you're connecting to. Then please post back the row count returned from each of them (by copying them into Excel and counting the rows).
    I want to see if this is an issue I've run across before with thousands of dimension attributes and MDSCHEMA_CUBES performance.
    select [HIERARCHY_UNIQUE_NAME]
    from $system.mdschema_hierarchies
    where CUBE_NAME = 'YourCubeName'
    select [LEVEL_UNIQUE_NAME]
    from $system.mdschema_levels
    where CUBE_NAME = 'YourCubeName'
    Also, what version of Analysis Services is it? If you connect Object Explorer in Management Studio to SSAS, what's the exact version number it says on the top server node?
    http://artisconsulting.com/Blogs/GregGalloway
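
    Since the suspicion above involves MDSCHEMA_CUBES, it may also be worth timing that rowset directly. A sketch, using 'YourCubeName' as a placeholder just like the queries above:

    select [CUBE_NAME]
    from $system.mdschema_cubes
    where CUBE_NAME = 'YourCubeName'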

  • Maintaining multiple cubes in an AW

    How can I concurrently maintain multiple cubes in an AW? In a session I can attach the AW and start maintaining one cube, but I then cannot attach the same AW in another session to maintain another cube.
    Regards, Anirban

    AWM attaches AWs in 'read write' mode, so you cannot open a second AWM and attach the same AW. But all is not lost.
    Option 1: Select two cubes in AWM and then choose parallelism >= 2
    If neither of your cubes is partitioned, then they should be built in parallel. If your cubes are partitioned, then the server will build all the partitions of cube 1 first, but should start working on the partitions of cube 2 as soon as there are free processes, so they should overlap. Note that this will not happen if there is any dependency between the cubes. The USER_DEPENDENCIES view should tell you if this is true.
    Option 2: (11.2 only) Start the builds using DBMS_CUBE.BUILD
    Wherever possible the server (11.2) will attempt to build the cube in 'multi-writer' mode instead of in 'read write' mode. This means you could build two cubes in parallel from different SQL sessions. This will only happen for cube-only builds -- dimension builds always require 'read write' mode.
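
    For example, a minimal sketch of Option 2 (CUBE1 and CUBE2 are placeholder names; because this relies on multi-writer mode, it applies to cube-only builds on 11.2):

    -- Session 1:
    exec dbms_cube.build('CUBE1 USING (LOAD, SOLVE)', parallelism => 2)

    -- Session 2, started while session 1 is still running:
    exec dbms_cube.build('CUBE2 USING (LOAD, SOLVE)', parallelism => 2)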

  • Cube Refresh Performance Issue

    We are facing a strange performance issue related to cube refresh. A cube that used to take 1 hour to refresh is now taking around 3.5 to 4 hours, without any change in the environment. The data it processes is also almost the same as before. Only this cube, out of all the cubes in the workspace, has suffered this performance degradation over time.
    Details of the cube:
    This cube has 7 dimensions and 11 measures (a mix of SUM and AVG as aggregation operators). No compression. The cube is partitioned (48 partitions). The main source of the data is a materialized view that is partitioned in the same way as the cube.
    Data volume: 2,480,261 records in the source to be processed daily (almost evenly distributed across the partitions)
    Cube is refreshed with the below script
    DBMS_CUBE.BUILD(<<cube_name>>,'SS',true,5,false,true,false);
    Has anyone faced a similar issue? Please advise on what might be the cause of the performance degradation.
    Environment - Oracle Database 11g Enterprise Edition Release 11.2.0.3.0
    AWM - awm11.2.0.2.0A

    Take a look at DBMS_CUBE.BUILD documentation at http://download.oracle.com/docs/cd/E11882_01/appdev.112/e16760/d_cube.htm#ARPLS218 and DBMS_CUBE_LOG documentation at http://download.oracle.com/docs/cd/E11882_01/appdev.112/e16760/d_cube_log.htm#ARPLS72789
    You can also search this forum for more questions/examples about DBMS_CUBE.BUILD
    David Greenfield has covered many Cube loading topics in the past on this forum.
    Mapping to Relational tables
    Re: Enabling materialized view for fast refresh method
    DBMS CUBE BUILD
    CUBE_DFLT_PARTITION_LEVEL in 11g?
    Reclaiming space in OLAP 11.1.0.7
    Re: During a cube build how do I use an IN list for dimension hierarchy?
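
    For reference, the positional call in the question maps onto named parameters like this (a sketch based on the documented 11.2 DBMS_CUBE.BUILD signature; <<cube_name>> is the placeholder from the original post):

    BEGIN
      -- Equivalent of DBMS_CUBE.BUILD(<<cube_name>>, 'SS', true, 5, false, true, false):
      DBMS_CUBE.BUILD(
        script               => '<<cube_name>>',
        method               => 'SS',
        refresh_after_errors => TRUE,
        parallelism          => 5,
        atomic_refresh       => FALSE,
        automatic_order      => TRUE,
        add_dimensions       => FALSE);
    END;
    /

    Comparing the per-command timings in CUBE_BUILD_LOG between an old fast run and a recent slow run should show which phase (LOAD or SOLVE) has degraded.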

  • Creating a universe on Multiple Cubes or Bex Queries

    Dear SDNers,
    Can we create a single universe on multiple cubes, DSOs, or BEx queries, and how do we do this?
    Can you please tell me step by step?
    Thanks,
    Swathi.

    Hi Swathi,
    Yes. From Web Intelligence Rich Client or Desktop Intelligence you can combine multiple microcubes from a single universe: create a report from a BEx query, which produces a single microcube,
    then choose Edit Query --> Add Query --> select the same universe, which creates another microcube. You can then design a report from those microcubes. If you add an additional object to the query panel, the microcube is refreshed from the database.
    This Link help for you,
    [http://reports.is.ed.ac.uk/areas/itservices/busintel/TrainingMaterials/advUser/Lesson6-Step1.html]
    Thanking you,
    Praveen

  • Cube Refresh Failure

    I am running into some interesting issues where a particular cube refresh fails with the following message:
    10:08:14 ***Error Occured in BUILD_DRIVER: In __XML_SEQUENTIAL_LOADER: In __XML_UNIT_LOADER: In __XML_LOAD_MEAS_CC_PRT_ITEM: In ___XML_LOAD_TEMPPRG: The Q12_AW!Q12_RESPONDENTS_AUMQMA00015F_PRTCOMP dimension does not have a value numbered -157717.
    It looks like one of the partitions is corrupted, or something of that nature.
    I built a brand-new cube identical to the previous one and it refreshes fine (same facts). I am just curious whether there is any way to fix the previous cube without recreating the whole thing.
    Am I missing something very obvious here?
    Swapan.

    You might try dropping and recreating the composite Q12_RESPONDENTS_AUMQMA00015F_PRTCOMP, then reloading that partition.
    You can also drop a partition, but I have no idea what impact that has on the AW maintenance tools that use the Java API, i.e. AWM and OWB.
    Basically the answer is
    1) No you are not missing something obvious
    2) Recreating the whole thing is easier and more reliable. Especially if you have an EIF backup of the AW....

  • Multiple cubes questions

    Dear BW expert,
    I have a question about multiple cubes. What are the advantages and disadvantages of using multiple cubes for reporting performance?
    I have following scenarios:
    Cube2005, Cube2006, ...... Cube2xxx (the total keeps growing)
    vs.
    Cube1, ..... Cube12 (12 cubes in total), based on fiscal period. If I use this model, what if I want a report for a particular year? In that case I have to access all 12 cubes every time. If I build a MultiProvider on top of this for each year, is that possible?
    Pros & cons?
    Any suggestions are greatly appreciated, and points will be awarded!
    Weidong

    Hi there,
    Thanks for your reply!
    My case is that we have a huge volume of data for each year, so we decided to build a cube per year, but then we have to create a lot of cubes in advance, which makes maintenance harder. So we came up with the idea of using a cube per fiscal period/month, giving a fixed number of cubes (12); on top of the 12 cubes we would build a MultiProvider per year, and then a MultiProvider on top of the per-year MultiProviders - is that possible?
    The main reason is to keep the data size of each cube/ODS small. Does anyone have experience with cubes holding a large data volume?
    Any comments? Thanks in advance!
    Weidong

  • Essbase, Multiple Cubes, ODBC Strategy

    Hello All,
    I'm new to Essbase and am working on an upgrade project taking us from the v7 platform to v11 (with a temporary stop at v9). One obstacle we have faced is that multiple cubes have jobs that consume a lot of bandwidth and share the same ODBC driver to acquire the DB data, so a "bottleneck" effect occurs where some jobs don't run because the driver is busy working other jobs. I am looking for a best-practice solution that anyone has implemented to mitigate this bottleneck. Your help is appreciated; thank you.
    v/r
    Roy

    Great, thanks!
    However, since I am a newbie, can you elaborate on that process and how it is done? I really appreciate your help; that seems like the most viable solution. I believe the next thing we will do is split half our cubes onto one server and the other half onto another to load-balance between the two. Your explanation is truly appreciated.
    v/r
    Roy

  • Analysis across multiple cubes - OBIEE 10g/11g

    Hi,
    Does OBIEE support analysis across multiple cubes, or is that inherent to OBIEE?
    Regards,
    Junaid

    You can use union reports.
    For example:
    req 2) Product_Group column formula (fx): 'Budgeted'
             Budget column formula (fx): FILTER(Budget USING Product_Group IN ('alpha','beta'))
    req 3) Product_Group column formula (fx): 'Not Budgeted'
             Budget column formula (fx): FILTER(Budget USING Product_Group NOT IN ('alpha','beta'))
    Mark correct/helpful if it's correct/helpful.

  • What is the Cube refresh and how to determine cube refresh time for a cube

    hi,
    What is the meaning of cube refreshing? I believe it is clearing and updating the data in production on a weekly or monthly basis. How does it happen in general? Do we need to do it manually, or will there be a scheduler in production?
    Please clarify how to determine the refresh timing for a cube and what factors we need to consider.
    Thanks,
    bhavani.

    I can't give you specific guidelines as every client/cube is different based on needs. Here is some general info.
    Cube refresh is a pretty generic name for a number of things. It could be, as you say, clearing and updating the cube, or just updating it, or running calculations or restructure operations. It's pretty much whatever you need it to be for your environment.
    As to timing, there are two timings to consider. First, the frequency of refresh: is it monthly, weekly, daily, hourly? That depends on the application and the needs of the users. Second, how long the refresh takes: seconds, minutes, hours, etc. This has to be considered to meet the users' SLA and hopefully fit into a batch processing window.
    Refreshes can be done manually or automated with batch or shell scripts (depending on the OS) and scheduled through whatever scheduling package is on the box (Windows AT scheduler, cron, third party, etc.).
