Multiple Cubes refresh in parallel

Hi
I have an analytic workspace in which I have modelled a set of conformed dimensions and some dimensions specific to particular subject areas. This workspace will contain multiple cubes, some partitioned and some not.
I would like to know whether these cubes can be refreshed in parallel. I tried the parallelism parameter provided by DBMS_CUBE and kicked off a refresh of two cubes, but when I check CUBE_BUILD_LOG the slave process is always 0 and the execution appears to have run serially.
Please suggest how these cubes can be refreshed in parallel.
Thanks
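
For reference, a minimal sketch of the log check described above (hedged: the column names assume the default CUBE_BUILD_LOG table created via DBMS_CUBE_LOG in 11.2; adjust them to your own log table). A SLAVE_NUMBER of 0 on every row is consistent with a serial build.

SELECT build_id, slave_number, status, command, build_object, time
FROM   cube_build_log
ORDER  BY time DESC;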

FAILED RECORDS FROM THE LOG:
============================
127     0     FAILED     BUILD          BUILD     "(CLOB) <ERROR>
<![CDATA[
XOQ-01707: Oracle job "IncrMyCBMV_JOB$_812" failed while executing slave build "GLOBAL.UNITS_CUBE USING (CLEAR LEAVES, LOAD, SOLVE) AS OF SCN 1533406" with error "37162: ORA-37162: OLAP error
XOQ-00703: error executing OLAP DML command "(UPDATE GLOBAL.GLOBAL : ORA-37605: error during OLAP AW UPDATE
ORA-00600: internal error code, arguments: [kdliLockBlock], [9708], [16859386], [0], [0], [0], [0], [], [], [], [], []
ORA-06512: at "SYS.DBMS_CUBE", line 234
ORA-06512: at "SYS.DBMS_CUBE", line 316
ORA-06512: at line 1
".]]>>
</ERROR>"     GLOBAL     GLOBAL               02-APR-13 12.25.43.702000000 PM ASIA/CALCUTTA     (CLOB) BUILD price_cube, units_cube     DBMS_CUBE     0               4542     0     0     2     IncrMyCBMV
127     0     FAILED     SLAVE     UNITS_CUBE     CUBE          GLOBAL     GLOBAL     P22:1999.10     IncrMyCBMV_JOB$_821     02-APR-13 12.25.42.673000000 PM ASIA/CALCUTTA     (CLOB) GLOBAL.UNITS_CUBE USING (CLEAR LEAVES, LOAD, SOLVE) AS OF SCN 1533406     DBMS_CUBE     1          S     6288     230     0     3     IncrMyCBMV
127     0     FAILED     SLAVE     UNITS_CUBE     CUBE          GLOBAL     GLOBAL     P24:1999.12     IncrMyCBMV_JOB$_819     02-APR-13 12.25.32.533000000 PM ASIA/CALCUTTA     (CLOB) GLOBAL.UNITS_CUBE USING (CLEAR LEAVES, LOAD, SOLVE) AS OF SCN 1533406     DBMS_CUBE     1          S     6272     228     0     3     IncrMyCBMV
127     0     FAILED     SLAVE     UNITS_CUBE     CUBE          GLOBAL     GLOBAL     P25:2000.01     IncrMyCBMV_JOB$_818     02-APR-13 12.25.30.505000000 PM ASIA/CALCUTTA     (CLOB) GLOBAL.UNITS_CUBE USING (CLEAR LEAVES, LOAD, SOLVE) AS OF SCN 1533406     DBMS_CUBE     1          S     6259     227     0     3     IncrMyCBMV
127     0     FAILED     SLAVE     UNITS_CUBE     CUBE          GLOBAL     GLOBAL     P26:2000.02     IncrMyCBMV_JOB$_817     02-APR-13 12.25.28.477000000 PM ASIA/CALCUTTA     (CLOB) GLOBAL.UNITS_CUBE USING (CLEAR LEAVES, LOAD, SOLVE) AS OF SCN 1533406     DBMS_CUBE     1          S     6258     226     0     3     IncrMyCBMV
127     0     FAILED     SLAVE     UNITS_CUBE     CUBE          GLOBAL     GLOBAL     P27:2000.03     IncrMyCBMV_JOB$_816     02-APR-13 12.25.26.449000000 PM ASIA/CALCUTTA     (CLOB) GLOBAL.UNITS_CUBE USING (CLEAR LEAVES, LOAD, SOLVE) AS OF SCN 1533406     DBMS_CUBE     1          S     6237     225     0     3     IncrMyCBMV
127     0     FAILED     SLAVE     UNITS_CUBE     CUBE          GLOBAL     GLOBAL     P28:2000.04     IncrMyCBMV_JOB$_815     02-APR-13 12.25.24.421000000 PM ASIA/CALCUTTA     (CLOB) GLOBAL.UNITS_CUBE USING (CLEAR LEAVES, LOAD, SOLVE) AS OF SCN 1533406     DBMS_CUBE     1          S     6235     224     0     3     IncrMyCBMV
127     0     FAILED     SLAVE     UNITS_CUBE     CUBE          GLOBAL     GLOBAL     P29:2000.05     IncrMyCBMV_JOB$_814     02-APR-13 12.25.22.393000000 PM ASIA/CALCUTTA     (CLOB) GLOBAL.UNITS_CUBE USING (CLEAR LEAVES, LOAD, SOLVE) AS OF SCN 1533406     DBMS_CUBE     1          S     6234     223     0     3     IncrMyCBMV
127     0     FAILED     SLAVE     UNITS_CUBE     CUBE          GLOBAL     GLOBAL     P30:2000.06     IncrMyCBMV_JOB$_813     02-APR-13 12.25.20.349000000 PM ASIA/CALCUTTA     (CLOB) GLOBAL.UNITS_CUBE USING (CLEAR LEAVES, LOAD, SOLVE) AS OF SCN 1533406     DBMS_CUBE     1          S     6232     222     0     3     IncrMyCBMV

Similar Messages

  • Multiple Cube Refresh

    Hi,
    We have multiple cubes in a single workspace, and we need to refresh more than one cube in parallel using the DBMS_CUBE package. Is that feasible? Can two cubes refresh in parallel? Will there not be a conflict over read-write attach mode on the analytic workspace?
    rgds,
    Prakash S

    This should work:
    exec dbms_cube.build('cubeA, cubeB', parallelism=>2)
    All partitions of cube A will be built before cube B is started, but if parallelism is sufficiently high, then they should both begin. It will not work to call dbms_cube.build in two different sessions.
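    A slightly fuller sketch of the same call (CUBEA and CUBEB are placeholder cube names; a parallelism of 4 assumes enough free job slots, i.e. JOB_QUEUE_PROCESSES, for partitions of both cubes to build at once):
    BEGIN
      -- One call naming both cubes: the build coordinator can overlap them
      -- once slave processes come free.
      DBMS_CUBE.BUILD(
        script      => 'CUBEA, CUBEB',
        parallelism => 4);
    END;
    /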

  • I see 'enq: JI - contention' when building multiple cubes/partitions

    Version 11.2.0.3
    I can successfully build multiple partitions of a cube simultaneously by supplying the degree of parallelism that I want. I can also build multiple cubes and multiple partitions of multiple cubes by submitting separate jobs (one per cube) with parallelism set in the job (for number of partitions per job/cube).
    My goal was to refresh 2 cubes simultaneously, 2 partitions in parallel each, so that 4 partitions total were refreshing simultaneously. There were sufficient hardware resources (memory and processes) to do this. I tried to submit 2 jobs, one for each cube, with parallel 2 on each.
    What happens is that 3 partitions start loading, not 4. The smaller of the 2 cubes loads 2 partitions at a time, but the larger of the cubes starts loading only 1 partition and the other partition process waits with JI - contention.
    I understand that JI contention relates to one materialized view refresh blocking another refresh of the same MV. Yet simultaneous refresh of different partitions is supported for cube MVs.
    Because I see the large cube having the problem but not the smaller one, I wonder if adding more hash partitions to the AW$ (analytic workspace) table would allow more concurrent update processes. We have a high enough setting for processes and job_queue_processes, and enough available threads, etc.
    Will more hash subpartitions on the AW$ table allow for more concurrency for cube refreshes?

    It looks like the JI contention was coming from having multiple jobs submitted to update the SAME cube (albeit different partitions). Multiple jobs for different cubes (up to one job/cube each) seems to avoid this issue. I thought there was only one job per cube, but that was not true.
    Still, if someone has some insight into creating more AW hash subpartitions, I'd like to hear it. I know how to do it, but I am not sure what the impact will be on load or solve times. I have read a few sources online indicating that it is a good idea to have as many subpartitions as logical cube partitions, and that it is a good idea to set the subpartition number to a power of two to ensure good balance.
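    For orientation, here is one way to check the current subpartition layout before changing it (a sketch; AW$GLOBAL is a placeholder for your analytic workspace's backing table, queried as the AW owner):
    SELECT table_name, partition_name, COUNT(*) AS subpartitions
    FROM   user_tab_subpartitions
    WHERE  table_name = 'AW$GLOBAL'
    GROUP  BY table_name, partition_name;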

  • Maintaining multiple cubes in an AW

    How can I concurrently maintain multiple cubes in an AW? In a session I can attach only one AW and start maintaining one cube, so I cannot attach the same AW again to maintain another cube in another session.
    Regards, Anirban

    AWM attaches AWs in 'read write' mode, so you cannot open a second AWM and attach the same AW. But all is not lost.
    Option 1: Select two cubes in AWM and then choose parallelism >= 2
    If neither of your cubes is partitioned, then they should be built in parallel. If your cubes are partitioned, then the server will build all the partitions of cube 1 first, but should start working on the partitions of cube 2 as soon as there are free processes, so they should overlap. Note that this will not happen if there is any dependency between the cubes. The USER_DEPENDENCIES view should tell you if this is true.
    Option 2: (11.2 only) Start the builds using DBMS_CUBE.BUILD
    Wherever possible the server (11.2) will attempt to build the cube in 'multi-writer' mode instead of 'read write' mode. This means you could build two cubes in parallel from different SQL sessions. This will only happen for cube-only builds; dimension builds always require 'read write' mode.
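    A hedged example of the dependency check mentioned under Option 1 (CUBEA and CUBEB are placeholder names; filtering only on the names keeps the query independent of the exact TYPE values used for OLAP objects):
    SELECT name, type, referenced_name, referenced_type
    FROM   user_dependencies
    WHERE  name IN ('CUBEA', 'CUBEB')
       OR  referenced_name IN ('CUBEA', 'CUBEB');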

  • Number of users logged in during cube refresh activity

    I am using Hyperion 11.1.2.1 and running the Planning cube refresh activity using \Hyperion\Planning\bin\CubeRefresh.cmd.
    Is there any way to find out the total number of users logged in to a particular Planning application while the cube refresh activity is running for that application?

    I don't think the Planning repository has information about sessions.
    Let us know if you succeed.
    I was not talking about Essbase as in "Essbase": all users performing an activity in Planning (data refresh, data load, running a business rule) show up in EAS sessions, so if you run the MaxL display session command against the Planning application it will give you all the users.
    Now OP, you can search for "line count windows cmd" for ideas on how to count the lines; that count will be the number of users (well, not exactly the number of users, but the number of sessions).
    See also: Count lines in multiple files using Windows command prompt
    Regards
    Celvin

  • Cube refresh fails with an error below

    Hi,
    We are experiencing the problem below during the Planning application database refresh. We have been refreshing the database every day, but all of a sudden the error below started appearing in the log:
    Cube refresh failed with error: java.rmi.UnmarshalException: error unmarshalling return; nested exception is:
    java.io.EOFException
    When the database refresh is done manually from Workspace, it completes successfully. But when triggered from a Unix script, it throws the above error.
    Is it related to some provisioning issue, where the user has been removed from MSAD? Please help me out on this.
    Thanks,
    mani

    I work with 'sdid' and here is a better explanation of what exactly is going on.
    As part of our nightly schedule we have a Unix shell script that refreshes Essbase cubes from Planning using the 'CubeRefresh.sh' shell script.
    Here is what our call looks like:
    /opt/hyperion/Planning/bin/CubeRefresh.sh /A:<cube name> /U:<user id> /P:<password> /R /D /FS
    Here is what 'CubeRefresh.sh' looks like:
    PLN_JAR_PATH=/opt/hyperion/Planning/bin
    export PLN_JAR_PATH
    . "${PLN_JAR_PATH}/setHPenv.sh"
    "${HS_JAVA_HOME}/bin/java" -classpath ${CLASSPATH} com.hyperion.planning.HspCubeRefreshCmd $1 $2 $3 $4 $5 $6 $7
    And here is what 'setHPenv.sh' looks like:
    HS_JAVA_HOME=/opt/hyperion/common/JRE/Sun/1.5.0
    export HS_JAVA_HOME
    HYPERION_HOME=/opt/hyperion
    export HYPERION_HOME
    PLN_JAR_PATH=/opt/hyperion/Planning/lib
    export PLN_JAR_PATH
    PLN_PROPERTIES_PATH=/opt/hyperion/deployments/Tomcat5/HyperionPlanning/webapps/HyperionPlanning/WEB-INF/classes
    export PLN_PROPERTIES_PATH
    CLASSPATH=${PLN_JAR_PATH}/HspJS.jar:${PLN_PROPERTIES_PATH}:${PLN_JAR_PATH}/hbrhppluginjar:${PLN_JAR_PATH}/jakarta-regexp-1.4.jar:${PLN_JAR_PATH}/hyjdbc.jar:${PLN_JAR_PATH}/iText.jar:${PLN_JAR_PATH}/iTextAsian.jar:${PLN_JAR_PATH}/mail.jar:${PLN_JAR_PATH}/jdom.jar:${PLN_JAR_PATH}/dom.jar:${PLN_JAR_PATH}/sax.jar:${PLN_JAR_PATH}/xercesImpl.jar:${PLN_JAR_PATH}/jaxp-api.jar:${PLN_JAR_PATH}/classes12.zip:${PLN_JAR_PATH}/db2java.zip:${PLN_JAR_PATH}/db2jcc.jar:${HYPERION_HOME}/common/CSS/9.3.1/lib/css-9_3_1.jar:${HYPERION_HOME}/common/CSS/9.3.1/lib/ldapbp.jar:${PLN_JAR_PATH}/log4j.jar:${PLN_JAR_PATH}/log4j-1.2.8.jar:${PLN_JAR_PATH}/hbrhppluginjar.jar:${PLN_JAR_PATH}/ess_japi.jar:${PLN_JAR_PATH}/ess_es_server.jar:${PLN_JAR_PATH}/commons-httpclient-3.0.jar:${PLN_JAR_PATH}/commons-codec-1.3.jar:${PLN_JAR_PATH}/jakarta-slide-webdavlib.jar:${PLN_JAR_PATH}/ognl-2.6.7.jar:${HYPERION_HOME}/common/CLS/9.3.1/lib/cls-9_3_1.jar:${HYPERION_HOME}/common/CLS/9.3.1/lib/EccpressoAll.jar:${HYPERION_HOME}/common/CLS/9.3.1/lib/flexlm.jar:${HYPERION_HOME}/common/CLS/9.3.1/lib/flexlmutil.jar:${HYPERION_HOME}/AdminServices/server/lib/easserverplugin.jar:${PLN_JAR_PATH}/interop-sdk.jar:${PLN_JAR_PATH}/HspCopyApp.jar:${PLN_JAR_PATH}/commons-logging.jar:${CLASSPATH}
    export CLASSPATH
    case $OS in
    HP-UX)
        SHLIB_PATH=${HYPERION_HOME}/common/EssbaseRTC/9.3.1/bin:${HYPERION_HOME}/Planning/lib:${SHLIB_PATH:-}
        export SHLIB_PATH
        ;;
    SunOS)
        LD_LIBRARY_PATH=${HYPERION_HOME}/common/EssbaseRTC/9.3.1/bin:${HYPERION_HOME}/Planning/lib:${LD_LIBRARY_PATH:-}
        export LD_LIBRARY_PATH
        ;;
    AIX)
        LIBPATH=${HYPERION_HOME}/common/EssbaseRTC/9.3.1/bin:${HYPERION_HOME}/Planning/lib:${LIBPATH:-}
        export LIBPATH
        ;;
    *)
        echo "$OS is not supported"
        ;;
    esac
    We have not made any changes to either the shell or 'CubeRefresh.sh' or 'setHPenv.sh'
    For the past couple of days, the shell script that executes 'CubeRefresh.sh' has been failing with the error message below.
    Cube refresh failed with error: java.rmi.UnmarshalException: error unmarshalling return; nested exception is:
    java.io.EOFException
    This error is causing our Essbase cubes to not get refreshed from Planning cubes through these batch jobs.
    On the other hand, the manual refresh from within Planning works.
    We are on Hyperion Planning - System 9, version 9.3.1.1.10.
    Any help on this would be greatly appreciated.
    Thanks
    Andy

  • Bad request deletion at a time from multiple cubes?

    When a single InfoSource loads into multiple cubes, how do we delete a bad request from, or reconstruct, all of those cubes at one time?

    hi Bharath,
    try these links.
    http://help.sap.com/saphelp_nw04s/helpdata/en/ca/5c7b3cbd556915e10000000a114084/frameset.htm
    https://www.sdn.sap.com/irj/sdn/wiki?path=/display/espackages/maintenance%2brequest
    http://help.sap.com/saphelp_nw04/helpdata/en/80/1a65a8e07211d2acb80000e829fbfe/frameset.htm
    hope it helps.

  • Performance issue in browsing SSAS cube using Excel for first time after cube refresh

    Hello Group Members,
    This is a continuation of my earlier blog question -
    https://social.msdn.microsoft.com/Forums/en-US/a1e424a2-f102-4165-a597-f464cf03ebb5/cache-and-performance-issue-in-browsing-ssas-cube-using-excel-for-first-time?forum=sqlanalysisservices
    As that thread is marked as answer, but my issue is not resolved, I am creating a new thread.
    I am facing a cache and performance issue the first time I open an SSAS cube connection using Excel (Data tab -> From Other Sources -> From Analysis Services) after the daily cube refresh. On an end user's system (8 GB RAM, but only around 4 GB available), the first open takes 10 minutes; from the next run onwards it opens quickly, within 10 seconds.
    We have a daily ETL process running on high-end servers. The dedicated SSAS cube server has 8 cores and 64 GB RAM. In total we have 4 cube databases, of which 3 get a full cube refresh and 1 an incremental refresh. We have seen that after the daily cube refresh it takes 10-odd minutes to open the cube on an end user's system; from the next time onwards it opens really fast, within 10 seconds. After a cube refresh, on server systems (32 GB RAM, around 4 GB available), it takes 2-odd minutes to open the cube.
    Is there any way we could reduce the time taken for the first attempt?
    As mentioned in my previous thread, we have already implemented cube cache warming, but there is no improvement.
    Currently the cumulative size of all 4 cube DBs is more than 9 GB in production, with each cube DB containing 4 cubes on average and the largest cube DB at 3.5 GB. So the question is: how does Excel work with an SSAS cube after the daily cube refresh?
    Does Excel create a cache of the schema and data each time the cube is refreshed, and in doing so need to download the cube schema into Excel's memory? Downloading the schema and data of each cube database from server to client would take significant time, depending on the bandwidth of the network connection.
    Is it in any way dependent on client system RAM? Today the biggest cube DB is 3.5 GB; tomorrow it will be 5-6 GB. Though the client system has 8 GB RAM, the available or free RAM is around 4 GB. So what will happen then?
    Best Regards, Arka Mitra.

    Could you run the following two DMV queries, filling in the name of the cube you're connecting to? Then please post back the row count returned by each (by copying the results into Excel and counting the rows).
    I want to see if this is an issue I've run across before with thousands of dimension attributes and MDSCHEMA_CUBES performance.
    select [HIERARCHY_UNIQUE_NAME]
    from $system.mdschema_hierarchies
    where CUBE_NAME = 'YourCubeName'
    select [LEVEL_UNIQUE_NAME]
    from $system.mdschema_levels
    where CUBE_NAME = 'YourCubeName'
    Also, what version of Analysis Services is it? If you connect Object Explorer in Management Studio to SSAS, what's the exact version number it says on the top server node?
    http://artisconsulting.com/Blogs/GregGalloway

  • How to run multiple CodedUI Ordered Tests over multiple Test Agents for parallel execution using Test Controller

    We are using VS 2013. I need to run multiple Coded UI ordered tests in parallel on different agents.
    My requirement:
    Example: I have 40 Coded UI test scripts in a single solution/project. I want to run them in different OS environments (for example, 5 OSes). I have created 5 ordered tests with the same 40 test cases.
    I have one controller machine and 5 test agent machines. Now I want my tests to be distributed so that every agent gets 1 ordered test to execute.
    Machine_C = Controller (controls Machine_1,2,3,4,5)
    Machine_1 = Test Agent 1 (should execute Ordered Test 1, e.g. OS - Win 7)
    Machine_2 = Test Agent 2 (should execute Ordered Test 2, e.g. OS - Win 8)
    Machine_3 = Test Agent 3 (should execute Ordered Test 3, e.g. OS - Win 2008 Server)
    Machine_4 = Test Agent 4 (should execute Ordered Test 4, e.g. OS - Win 2012 Server)
    Machine_5 = Test Agent 5 (should execute Ordered Test 5, e.g. OS - Win 2003 Server)
    I have changed the "MinimumTestsPerAgent" app setting value to '1' in the controller's configuration file (QTController.exe.config).
    When I run the ordered tests from Test Explorer, every test agent shows an ordered test with status 'running', but only 2 of the 5 test agents actually execute test cases. The remaining 3 agents do not execute any test cases, yet their status still shows 'running' for a long time (more than 3 hours), after which they stop responding.
    I need to know how to configure my controller, or how to tell it to execute these tests in parallel on different test agents. This will help me reduce the script execution time.
    I am not sure what steps I am missing.
    It will be of great help if someone can guide me how this can be achieved.
    One more thing: can I run one Coded UI ordered test on one specific test agent?
    For example, I need to run Ordered Test 1 on the Win 7 OS (Test Agent 1) only.
    Thanks in Advance.

    Hi Divakar,
    Thank you for posting in MSDN forum.
    As far as I know, we cannot specify that a Coded UI ordered test run on a specific test agent; it is mainly the test controller that determines which ordered test is assigned to which test agent.
    Generally, if we want to run multiple Coded UI ordered tests over multiple test agents in parallel using a test controller,
    we need to change the MinimumTestsPerAgent property to 1 in the test controller configuration file (QTController.exe.config), as you said.
    And then we need to set the bucketSize to (number of tests / number of machines) in the test settings.
    For more information about how to set this bucketSize value, please refer the following blog.
    http://blogs.msdn.com/b/aseemb/archive/2010/08/11/how-to-run-automated-tests-on-different-machines-in-parallel.aspx
    You can refer this Jack's suggestion to run your coded UI ordered test in lab Environment or load test.
    https://social.msdn.microsoft.com/Forums/vstudio/en-US/661e73da-5a08-4c9b-8e5a-fc08c5962783/run-different-codedui-tests-simultaneously-on-different-test-agents-from-a-single-test-controller?forum=vstest
    Best Regards,

  • Spawning multiple approval tasks in parallel in OIM11g SOA Composite

    Hi,
    We are trying to implement the following scenario.
    1) We are trying to develop a SOA composite for AD Group Access
    2) The request dataset contains a child table for AD User Group Details which is as follows.
    <AttributeReference name="AD User Group Details" attr-ref="UD_ADUSRC" type="String" length="20" widget="text" available-in-bulk="true">
    <AttributeReference name="Group Name" attr-ref="Group Name" type="String" length="400" widget="lookup" available-in-bulk="true" lookup-code="Lookup.ADReconciliation.GroupLookup" primary="true"/>
    </AttributeReference>
    3) Consider the user is already provisioned to AD.
    4) User now tries to request for AD Group Access by using a request template
    5) The request dataset for the resource "AD Group Access" will be displayed, where the user would "Add" the group(s) to which (s)he wants access.
    6) Once the request is submitted, the associated SOA composite would be executed.
    7) Now, in the SOA composite the logic should be as follows:
    a. For each group selected, there is a corresponding dataApprover who should approve the request.
    b. Once the dataApprover approves the request it goes to the next approver who is securityApprover.
    c. Once the securityApprover approves the request, the request should go thru and the user should get the membership in the AD Group.
    d. Since "AD User Group Details" is a child form in the request dataset, the user can add multiple groups in the same request.
    e. If there are multiple groups selected in the same request, then the same request should spawn parallel approval tasks for all corresponding dataApprovers and securityApprovers.
    f. Then the user should get membership to those AD Groups for which the corresponding dataApprover and securityApprover had approved the request.
    g. If a dataApprover or securityApprover rejects the request, the user shouldn't get membership to the respective group. However, this shouldn't prevent the user from getting membership to other groups for which the dataApprover-securityApprover approval was given.
    The dataApprover and securityApprover for the groups are stored in a db table mapping to the corresponding group name.
    We have implemented a SOA composite whose logic works fine if we add only one group in the child table of the request dataset. As per the current implementation, when a user submits the request, the dataApprover and securityApprover for the selected group are fetched from the table, and global variables in the SOA composite are set with the IDs of the dataApprover and securityApprover using setVariableData. These are string variables used in the approval task. The approval task has two "Single Type" participants, dataApprover and securityApprover, which fetch the values of dataOwner and securityOwner from the global variables set using setVariableData.
    Now, as mentioned above, if multiple groups are added, like group1, group2, etc., then there should be multiple approval tasks spawned in parallel that will be approved/rejected by dataApprover1-securityApprover1, dataApprover2-securityApprover2, etc. Depending on the outcome (approve/reject), the user should get membership to the appropriate groups.
    Any inputs on how to modify the current composite to spawn multiple approval tasks in parallel depending on the number of groups added from the requestDataSet would be helpful.
    Regards,
    Swaroop

    With a single request ID you are a bit safer. The way to do it would be:
    1. Set the dataApprovers as a comma separated list of all the data approvers for all the groups.
    2. Set the securityApprovers as a comma separated list of all the security approvers for all the groups.
    3. In Human Task assign the first stage to all the dataApprovers and second stage to securityApprovers.
    Cons of this approach are:
    1. All the approvers would see all the data and might be confused about what they are approving.
    2. The securityApprovers for, say, group1 won't get the item until all the dataApprovers approve the request, even though the dataApprover has already approved the request for group1.
    3. It would be hard to implement the rejection cases, depending upon how you want to handle rejections. For example, what if a dataApprover rejects the request? Should the whole request be rejected? If so, what happens to the groups that have already been approved by other dataApprovers? The same goes for securityApprovers. And since you cannot modify the requested data once the request is submitted, you cannot remove the rejected groups from the request.
    4. Your provisioning won't trigger until all dataApprovers and all securityApprovers have approved the request.
    5. Any one approver from the comma separated list can approve the request, so you cannot ensure that all the approvers approve it. The workaround would be to create parallel stages in the human task and assign one group/approver to each parallel stage. This would mean hard-coding the number of parallel approvals that can be generated in your BPEL human task (which would again depend on the number of groups requested). To work around this you could use a BPEL external routing program where you can programmatically assign tasks, but again, since there is no entitlement-based request engine in OIM, there would be issues there too.
    As a workaround, make sure that you allow only one group to be requested per request and reject the request outright if multiple groups are requested in a single request. You will need to get the business to buy in on this one.
    Have heard through the grapevine that 12G, which is in the pipeline, will have an entitlement-based request engine and will also allow modification of request data once a request is submitted.
    HTH,
    BB

  • Can we run multiple sapinst programs in parallel?

    We are planning to upgrade our EP, XI and BW systems simultaneously. These SIDs all reside on the same server and the same OS. Is it possible to run multiple sapinst programs in parallel? I know it may require specifying separate ports for the 2nd and 3rd instances of sapinst; for example, sapinst 1 gets the default ports 21212 and 21213, sapinst 2 takes 21214 and 21215, and sapinst 3 takes 21216 and 21217.
    Has anyone attempted this sort of a thing before? Appreciate your thoughts.

    I've never done that before; maybe it works. However, it could run into problems if the sapinst instances try to access the same paths/files.
    Regards,
    Siddhesh

  • Cube Refresh Performance Issue

    We are facing a strange performance issue with cube refresh. A cube that used to take 1 hour to refresh now takes around 3.5 to 4 hours, without any change in the environment. The data it processes is also almost the same as before. Only this cube, out of all the cubes in the workspace, has suffered this performance degradation over time.
    Details of the cube:
    This cube has 7 dimensions and 11 measures (a mix of SUM and AVG as aggregation operators). No compression. The cube is partitioned (48 partitions). The main source of the data is a materialized view that is partitioned in the same way as the cube.
    Data volume: 2,480,261 records in the source to be processed daily (almost evenly distributed across the partitions).
    The cube is refreshed with the script below:
    DBMS_CUBE.BUILD(<<cube_name>>,'SS',true,5,false,true,false);
    Has anyone faced a similar issue? Please advise on what might be causing the performance degradation.
    Environment - Oracle Database 11g Enterprise Edition Release 11.2.0.3.0
    AWM - awm11.2.0.2.0A

    Take a look at DBMS_CUBE.BUILD documentation at http://download.oracle.com/docs/cd/E11882_01/appdev.112/e16760/d_cube.htm#ARPLS218 and DBMS_CUBE_LOG documentation at http://download.oracle.com/docs/cd/E11882_01/appdev.112/e16760/d_cube_log.htm#ARPLS72789
    You can also search this forum for more questions/examples about DBMS_CUBE.BUILD
    David Greenfield has covered many Cube loading topics in the past on this forum.
    Mapping to Relational tables
    Re: Enabling materialized view for fast refresh method
    DBMS CUBE BUILD
    CUBE_DFLT_PARTITION_LEVEL in 11g?
    Reclaiming space in OLAP 11.1.0.7
    Re: During a cube build how do I use an IN list for dimension hierarchy?
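    For reference, a hedged reading of the positional DBMS_CUBE.BUILD call in the question, rewritten with named parameters. The mapping assumes the 11.2 signature (script, method, refresh_after_errors, parallelism, atomic_refresh, automatic_order, add_dimensions); verify it against the documentation linked above. <<cube_name>> is the placeholder from the original post.
    BEGIN
      DBMS_CUBE.BUILD(
        script               => <<cube_name>>,
        method               => 'SS',
        refresh_after_errors => TRUE,
        parallelism          => 5,
        atomic_refresh       => FALSE,
        automatic_order      => TRUE,
        add_dimensions       => FALSE);
    END;
    /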

  • Creating a universe on Multiple Cubes or Bex Queries

    Dear SDNers,
    Can we create a single universe on multiple cubes, DSOs, or BEx queries, and how do we do this?
    Can you please tell me step by step.
    Thanks,
    Swathi.

    Hi Swathi,
    Yes. From Web Intelligence Rich Client/Deski you can build multiple microcubes from a single universe: create a single report from a BEx query and it becomes a single microcube.
    Then use Edit Query -> Add Query -> select the same universe; this query becomes another microcube, so you can design a report from those microcubes. If you add an additional object to the query panel, the microcube is updated from the database.
    This Link help for you,
    [http://reports.is.ed.ac.uk/areas/itservices/busintel/TrainingMaterials/advUser/Lesson6-Step1.html]
    Thanking you,
    Praveen

  • Cube Refresh Failure

    I am running into some interesting issues where a particular cube refresh fails with the following message:
    10:08:14 ***Error Occured in BUILD_DRIVER: In __XML_SEQUENTIAL_LOADER: In __XML_UNIT_LOADER: In __XML_LOAD_MEAS_CC_PRT_ITEM: In ___XML_LOAD_TEMPPRG: The Q12_AW!Q12_RESPONDENTS_AUMQMA00015F_PRTCOMP dimension does not have a value numbered -157717.
    It looks like one of the partitions is corrupted, or something of that nature.
    I built a brand new cube identical to the previous cube and it refreshes fine (same facts). I am just curious whether there is any way to fix the previous cube without recreating the whole thing.
    Am I missing something very obvious here?
    Swapan.

    You might try dropping and recreating the composite Q12_RESPONDENTS_AUMQMA00015F_PRTCOMP, then reloading that partition.
    You can also drop a partition, but I have no idea what impact that has on the AW maintenance tools that use the Java API, i.e. AWM and OWB.
    Basically the answer is
    1) No you are not missing something obvious
    2) Recreating the whole thing is easier and more reliable, especially if you have an EIF backup of the AW.

  • Multiple cubes questions

    Dear BW expert,
    I have a question about multiple cubes. What are the advantages and disadvantages of using multiple cubes for reporting performance?
    I have following scenarios:
    Cube2005, Cube2006, ...... Cube2xxx (the total keeps growing)
    vs.
    Cube1, ..... Cube12 (12 cubes in total), based on fiscal period. If I use this model, what about when I want a report for a specific year? In that case I have to access all 12 cubes every time. If I build a MultiProvider on top of these for each year, is that possible?
    Pros & cons?
    Any suggestions are greatly appreciated and points awarded!
    Weidong

    Hi there,
    Thanks for your reply!
    My case is that we have a huge volume of data for each year, so we decided to build a cube per year, but we have to create a lot of cubes in advance, which makes the maintenance work harder. Then we came up with the idea of using a cube per fiscal period/month, so we have a fixed number of cubes (12); on top of the 12 cubes we build a MultiProvider per year, and then a MultiProvider on top of the per-year MultiProviders. Is that possible?
    The main reason is to keep the data size of each cube/ODS small. Does anyone have experience with cubes holding a large data volume?
    Any comments? Thanks in advance!
    Weidong
