Maintain cube script clarification

Guys,
I have already posted a thread about the Maintain cube/AWM script in the Discoverer forum.
Thread ID
What is MaxJobQueues, RunSolve and CleanMeasures in the Maintain AWM script?
Sorry for posting the same question again in a different forum.
I am not able to understand what RunSolve and CleanMeasures mean when these parameter values change from true,false to false,true or false,false,
and why we would want to set MaxJobQueues.
Can anybody clarify?
Thanks for your time!
Nats

For RunSolve and CleanMeasures I'm not sure.
For MaxJobQueues I would say that if you specify some value, then when you run the maintenance task it will pick up each partition and run x processes in parallel, where x is the number you specified. It will not run x partitions in parallel; rather, for each partition the task will be divided into x processes that run in parallel. You can see these in the table called XML_LOAD_LOG in the OLAPSYS schema.
Hope this helps!
Thanks
Brijesh
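
For what it's worth, in 11g the analogous control is the parallelism argument of DBMS_CUBE.BUILD, which caps how many background jobs work on the build at once. A minimal, hedged sketch (the schema and cube names are placeholders, not from the thread):

```sql
-- Sketch only: parallelism plays the role MaxJobQueues plays in the
-- Maintain AW script; MY_SCHEMA.MY_CUBE is a hypothetical cube.
BEGIN
  DBMS_CUBE.BUILD(
    script      => 'MY_SCHEMA.MY_CUBE USING (LOAD, SOLVE)',
    parallelism => 4);  -- at most 4 jobs build partitions in parallel
END;
/
```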

Similar Messages

  • Cube script parameters

    Hello!
Finally we are starting our huge migration project from 10.2.0.4 to 11.2.0.3. We are now studying what to change in our OLAP build scripts. Originally we used OLAP DML, and now we would like to build cubes the "proper" way. My question concerns clearing of variables: is there any way to set up a parameter for dimension status for the clear step in a cube script?
Let's say I would like to define, before execution, the day I would like to clear from the sales cube. How can I accomplish this without OLAP DML programming?
    Thanks in advance!
    Regards,
    Kirill

    Kirill,
Since you are moving from 10g OLAP to 11g OLAP, here are a few points to keep in mind (for you and for others on this forum):
(1). Convert the AW's calc measure logic from OLAP DML to the new OLAP expression syntax. http://docs.oracle.com/cd/E11882_01/olap.112/e23381/toc.htm
(2). If there is any OLAP DML program to populate an attribute, move that logic to the relational side (i.e., into your dimension source view).
(3). Try to understand how XML can be used to manage the AW and its objects. There are lots of posts by David Greenfield on this forum.
(4). It's very important to understand the DBMS_CUBE package. Try to understand all its details, since you will use it a lot. http://docs.oracle.com/cd/E11882_01/appdev.112/e16760/d_cube.htm
(5). It is also important to use as little OLAP DML as possible, even for calc measures. Use this forum for any help you need with that. There should be only a few cases now where OLAP DML is necessary. Use the standard cube-aware OLAP DML statements: http://docs.oracle.com/cd/E11882_01/olap.112/e17122/dml_basics.htm#BABFDBDJ
(6). Logging is much more extensive now, so if you have any code that relied on olapsys.XML_LOAD_LOG, that needs to be changed as well.
    http://docs.oracle.com/cd/E11882_01/appdev.112/e16760/d_cube_log.htm#ARPLS72789
    (7). Use compressed cubes now. See David's post about how much to precompute: Question on cube build / query performance (11.2.0.3)
(8). If you have a RAC environment, then look at this new functionality: Pin dbms_cube.build parallel jobs to specific node on RAC
    (9). For any write-back to cube, see if you can use this tip: http://oracleolap.blogspot.com/2010/10/cell-level-write-back-via-plsql.html
    (10). If you had any custom work done to improve looping during queries, then keep in mind that it is provided out-of-the-box now by properties like: $LOOP_VAR and $LOOP_DENSE
    http://docs.oracle.com/cd/E11882_01/olap.112/e17122/dml_properties013.htm
    http://docs.oracle.com/cd/E11882_01/olap.112/e17122/dml_properties014.htm
    (11). Better features for handling security through AWM. http://docs.oracle.com/cd/E11882_01/olap.112/e17123/security.htm
    (12). Cube Materialized Views features also available. You may or may not need it.
    http://docs.oracle.com/cd/E11882_01/olap.112/e17123/admin.htm#CHDBCEGB
    (13). OLAP_TABLE function is still fully supported. OBIEE 11.1.1.5 uses OLAP_TABLE to generate queries. So if you like you can continue using OLAP_TABLE, or you can look at the new CUBE_TABLE function: http://docs.oracle.com/cd/E11882_01/server.112/e17118/functions042.htm
And finally... go through all of David Greenfield's posts. You will find a lot of good ideas on how to do things differently in 11g OLAP.
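
To make point (4) concrete, a minimal DBMS_CUBE.BUILD call looks like the sketch below. The cube name follows Oracle's GLOBAL sample schema and is only illustrative; adjust the CLEAR/LOAD/SOLVE steps to your own refresh policy.

```sql
-- Sketch: a simple build of one cube from its mapped source tables.
BEGIN
  DBMS_CUBE.BUILD('GLOBAL.UNITS_CUBE USING (CLEAR LEAVES, LOAD, SOLVE)');
END;
/
```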

  • Project Online - Cube script for building files

    Hi
    We have to use Project Online for our solution and we need to build a reporting database in SQL.
Instead of building the schema manually, are there any pre-built 2013 schemas available for use?
Also, are there any ETL scripts for building the cubes available, or are these only developed through paid services, i.e. Project Hosts, Agorain?
    Regards
    Sean 

    Hello,
    There are some SSIS package examples / blog posts you can start with but each organisation would have different requirements so it would be difficult to have a pre-built production SSIS package that suited all. The links below might help get you started
    with creating your custom SQL Reporting database / data warehouse:
    http://pwmather.wordpress.com/2014/03/26/projectonline-data-via-odata-and-ssis-in-sql-database-table-on-premise-msproject-sharepointonline-bi-ssrs-office365-cloud/
    http://nearbaseline.com/blog/2014/04/project-site-custom-list-reporting-using-ssis-odata-connector/
    http://msdn.microsoft.com/en-us/library/office/dn794163(v=office.15).aspx &
    http://www.microsoft.com/en-us/download/details.aspx?id=43736
To create an OLAP cube from your custom data warehouse would require you to write the code to do that. You could look at using one of the Microsoft partners to do all of this for you as a paid service.
    http://office.microsoft.com/en-gb/project/microsoft-project-partner-resources-ms-project-FX103802119.aspx
    Paul
    Paul Mather | Twitter |
    http://pwmather.wordpress.com | CPS

  • Rman script clarification

    Hi gurus,
I'm new to RMAN and I'm configuring it from EM.
My strategy is disk-based; I don't back up to tape with RMAN. I send the backups to tape with another program.
    The generated script is:
    [BEGIN SCRIPT]
    $rman_script="backup device type disk tag '%TAG' database;
    backup device type disk tag '%TAG' archivelog all not backed up;
    allocate channel for maintenance type 'SBT_TAPE';
    delete noprompt obsolete device type disk;
    release channel;
    &br_save_agent_env();
    &br_prebackup($l_db_connect_string, $l_is_cold_backup, $l_use_rcvcat, $l_db_10_or_higher, $l_backup_strategy, "TRUE");
    my $result = &br_backup();
    exit($result);
    [END SCRIPT]
The backup appears failed because it seems it's trying to back up to tape too. As seen on Metalink, I sent a command that is supposed to clear the tape channel.
    Here is the error message:
    [MESSAGE START]
    RMAN> allocate channel for maintenance type 'SBT_TAPE';
    released channel: ORA_DISK_1
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03009: failure of allocate command on ORA_MAINT_SBT_TAPE_1 channel at 10/09/2007 02:49:27
    ORA-19554: error allocating device, device type: SBT_TAPE, device name:
    ORA-27211: Failed to load Media Management Library
    Additional information: 22
    [MESSAGE END]
    It still appears failed.
    Can I remove the line
    allocate channel for maintenance type 'SBT_TAPE';
    Thanks, your help is greatly appreciated

I think you definitely need to delete that line if you are not using tape.
Can you also paste the output of
RMAN> show all;
enrico

Hi,
Thanks for replying. Here is the result of the "show all" command:
    using target database control file instead of recovery catalog
    RMAN configuration parameters are:
    CONFIGURE RETENTION POLICY TO REDUNDANCY 1;
    CONFIGURE BACKUP OPTIMIZATION OFF; # default
    CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
    CONFIGURE CONTROLFILE AUTOBACKUP ON;
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/u18/oracle/BACKUPPRODUCTION/%F';
    CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO BACKUPSET PARALLELISM 1;
    CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/u18/oracle/HOTBACKUP/%U';
    CONFIGURE MAXSETSIZE TO UNLIMITED; # default
    CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
    CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
    CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
    CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/appora/oracle/product/10.2.0/db_1/dbs/snapcf_giga10g.f'; # default
    I also executed this morning
    RMAN> CONFIGURE DEVICE TYPE 'SBT_TAPE' CLEAR;
    RMAN> CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' clear;
    I already tried CONFIGURE DEVICE TYPE 'SBT_TAPE' CLEAR; but not the second line, I'll see tonight if it's ok now.
    ...and I forgot to mention I'm running a 10gR2 database on Redhat AS 4
    Thanks and have a nice day,
    JF
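
For reference, once the tape channel is taken out, the EM-generated RMAN commands reduce to the following; this is just the original script with the allocate/release SBT_TAPE lines removed:

```
backup device type disk tag '%TAG' database;
backup device type disk tag '%TAG' archivelog all not backed up;
delete noprompt obsolete device type disk;
```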

  • Calc Script Clarification

    Hi,
    In our Hyperion Planning System there are 7 Dimensions. Two of them are Locations and Accounts.
There is a script, written earlier, which we need to understand. It is as follows:
VAR M1
M1 = 9998;
FIX(Budget, FY09, Monthly, P000)
FIX(Jan)
"Location A"
(
IF (M1 = 9998)
    "Finance Charge" = "Finance Charge" * 100;
ENDIF;
)
ENDFIX
ENDFIX
Please let us know the purpose of having "Location A" before the IF statement.

    Nilaksha,
    You're looking at an Essbase calc script/Hyperion Business Rule and within that, a member calc block.
Member calculation blocks start with a member, e.g., "Location A", then a left parenthesis, then code, then a right parenthesis.
What's odd about a member calc block is that inside it you can assign to the member you actually want to calculate, e.g., "Finance Charge", or to another member from the same dimension. Why would they be different here? Perhaps originally Location A was calculated along with Finance Charge and then got removed? That seems likely, as you can assign values to more than one member in a member calc block.
It's confusing -- you would think you'd only be able to write to one, but that isn't so:
"Location A"
(
IF (M1 = 9998)
"Finance Charge" = "Finance Charge" * 100;
"Location A" = something * something ;
ENDIF;
)
What's also interesting about member calc blocks is that you don't need to put the member defined in the block on the left-hand side of the equals sign, such as:
"Finance Charge"
(
IF (M1 = 9998)
"Finance Charge" * 100;
ENDIF;
)
It's basically like taking a formula out of the outline and putting it into a calc script.
    One last thing -- Essbase calc script variables (M1) can only be used within member calc blocks -- unfortunately not elsewhere.
Btw, I assume your example was redacted for public consumption, as the IF always fires: M1 is always equal to 9998.
    Regards,
    Cameron Lackpour

  • Toptalker Script Clarification

    Hello Community,
We've been experiencing high utilisation on our interface, to the point where the rx is fully saturated; see below:
    ukxxxx-rm01-xx#show int atM 0/1/0
    ATM0/1/0 is up, line protocol is up
      Hardware is MPC ATMSAR, address is 6073.5cd8.95a6 (bia 6073.5cd8.95a6)
      Description: Internet Connection
      MTU 1600 bytes, sub MTU 1600, BW 448 Kbit/sec, DLY 820 usec,
         reliability 255/255, txload 137/255, rxload 255/255
    I decided to apply the attached script to determine what might be causing the saturation.
Can someone please help me interpret the output?
    The culprit is the source ip 194.75.x.x to 80.229.x.x with prot ESP and AvgBits/s 1.19M and AVGpkt/s 175.
    I don't know how to interpret the results...
    Any explanation would greatly appreciated.
    Cheers
    Carlton

    What specifically don't you understand in the output?  The flows in question look to be VPN traffic between the two hosts.

  • Loading 0IC_C02 infocube -- some clarifications

    Hi BW experts,
I have loaded data into the 0IC_C02 (material movements) infocube using the datasource 2LIS_03_S195. It is an initialization load, and the data has been successfully loaded. Now my question is: as this cube is related to inventory management, is there any particular procedure to follow when loading, compressing and running queries on this cube? Your clarifications will be highly appreciated.
    thanks & regards.

    Hi Vamshi,
    Check these links:
    Inventory Management
    Re: Inventory Management
    Besides these you would find lots of links on IM in these forums.
    Bye
    Dinesh

  • DBMS CUBE BUILD

    Hi All
    I have been maintaining my cube till now using the analytical workspace manager.
I now want to give it a try using a DBMS_CUBE.BUILD SQL script.
Is the following script right?
BEGIN
  DBMS_CUBE.BUILD(
    'MY_SCHEMA.MY_CUBE USING (CLEAR VALUES, LOAD SYNCH, SOLVE)',
    'C',    -- refresh method
    false,  -- refresh after errors
    32,     -- parallelism
    true,   -- atomic refresh
    true,   -- automatic order
    false,  -- add dimensions
    'CUBE_DATA_REFRESH');  -- identify job
END;
/
Also, if I specify 'C' as my refresh method and do not specify CLEAR VALUES, will it still clear the values that already exist in the cube?
    Edited by: CelestialCitizen on Sep 12, 2011 9:39 AM
    Edited by: CelestialCitizen on Sep 12, 2011 9:52 AM

There are four variants of the CLEAR command.
- CLEAR VALUES. This will clear everything from your cube, including both leaf (i.e. loaded) and aggregate cells. This happens regardless of the refresh method you choose.
- CLEAR LEAVES. This will clear the leaf (i.e. loaded) values from your cube, but not the aggregated values. This happens regardless of the refresh method you choose.
- CLEAR AGGREGATES. This will clear the aggregated values from your cube, but will not touch the loaded values. This happens regardless of the refresh method you choose.
- CLEAR. This behaves differently depending on the refresh method you choose. If you specify 'C', for complete, then it behaves like CLEAR VALUES. If you specify any other method (e.g. 'S' for fast solve or '?' for force), then it behaves like CLEAR LEAVES.
    If you do not include any CLEAR command in your cube script, then no values will be cleared. This will happen regardless of the refresh method you choose. Specifically, then, the following script would not remove data from the cube even if the corresponding fact tables are empty.
BEGIN
  DBMS_CUBE.BUILD(
    'MY_SCHEMA.MY_CUBE USING (LOAD SYNCH, SOLVE)',
    'C',    -- refresh method
    false,  -- refresh after errors
    32,     -- parallelism
    true,   -- atomic refresh
    true,   -- automatic order
    false,  -- add dimensions
    'CUBE_DATA_REFRESH');  -- identify job
END;
/
The SYNCH and NO SYNCH options on the LOAD command only mean anything for dimension loads.

  • Enabling materialized view for fast refresh method

In AWM, when I select refresh method FAST and enable the materialized view, I get the error below:
"Refresh method fast requires materialized view logs and a previously run complete refresh of the cube mv". I need this FAST refresh of a cube materialized view so that it performs an incremental refresh and re-aggregation of only the changed rows in the source table.
Can anyone help me with this?

    If you want the cube to hold data even after it has been deleted from the relational table, then you should disable the MV on the cube.
    Synchronization with the source table is determined by the default "cube script".
- CLEAR, LOAD, SOLVE: This will synchronize your cube with the source table. It is a requirement for MVs.
- LOAD, SOLVE: This will allow your cube to contain data even after it has been removed from the source table. It sounds like you want this.
    Cube builds can be "incremental" in one of two ways.
    (1) You can have an "incremental LOAD" if the source table contains only the changed rows or if you use MV "FAST" or "PCT" refresh. Since you can't use MVs, you would need a source table with only the changed rows.
    (2) You will have an "incremental SOLVE" (a.k.a. "incremental aggregation") if there is no "CLEAR VALUES" or "CLEAR AGGREGATES" step and various other conditions hold.
To force a "complete LOAD" with an "incremental SOLVE", keep all rows in your source table and run the following build script:
LOAD, SOLVE
You could also run "CLEAR LEAVES, LOAD, SOLVE" to synchronize the cube with the table.
To force an "incremental LOAD" with a "complete SOLVE", make sure the source table contains only the changed rows and then run:
CLEAR AGGREGATES, LOAD, SOLVE
or
LOAD, CLEAR AGGREGATES, SOLVE
Finally, if you want both LOAD and SOLVE to be incremental, make sure the source table contains only the changed rows and then run:
LOAD, SOLVE
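
As a DBMS_CUBE.BUILD call, the fully incremental variant above might be sketched as follows (the schema and cube names are hypothetical; remember the source table must contain only the changed rows):

```sql
-- Sketch: incremental LOAD plus incremental SOLVE. There is no CLEAR
-- step, so existing cube data is kept and only changed cells re-aggregate.
BEGIN
  DBMS_CUBE.BUILD('MY_SCHEMA.SALES_CUBE USING (LOAD, SOLVE)');
END;
/
```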

  • What does AverageOfChildren aggregate function in SSAS 2005 actually do?

    Folks,
    Have any of you been playing around with SSAS 2005 to have worked out
    what the AverageOfChildren aggregate function actually does?
    I was expecting it to do the equivalent of a simple AVG() with a GROUP
    BY in SQL, but it seems to be doing something additional to that.
    No-one in the team I work in has been able to work out what exactly it
    is doing.
    Any info would be appreciated!
    Cheers,
    Kenneth

    This very question was recently discussed in the public SQL Server OLAP newsgroup:
    http://groups.google.com/group/microsoft.public.sqlserver.olap/msg/c662e201b99678bc
    >>
     Aggregate Function Average of Children does not work
This behavior occurs because AverageOfChildren, FirstChild, LastChild, FirstNonEmpty and LastNonEmpty are semi-additive and treat the Time dimension differently from the other dimensions. Please refer to the following link for details:
http://msdn2.microsoft.com/en-us/library/ms175356.aspx
The description in the above link (BOL) is not very clear about AverageOfChildren, and I have forwarded this feedback through the proper channel.
AverageOfChildren only applies when aggregating over the Time dimension. In fact, when you create a new measure in the cube and select its usage, you will see "average over time", which corresponds to AverageOfChildren.
To get the average behavior you want, you may want to define a Sum and a Count measure, then create a calculated measure (in the cube script) which divides the two base measures.
    >>
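
The Sum/Count workaround suggested above might be sketched as follows in the cube's MDX script (the measure names are hypothetical, not from the thread):

```mdx
// Sketch: a true average over all dimensions, built from two
// additive base measures instead of AverageOfChildren.
CREATE MEMBER CURRENTCUBE.[Measures].[Sales Avg] AS
  IIF([Measures].[Sales Count] = 0, NULL,
      [Measures].[Sales Sum] / [Measures].[Sales Count]),
  VISIBLE = 1;
```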

  • Mapping to Relational tables

    We are new to Oracle OLAP and AWM.
    These are the questions we have :
1. Is there a way to map to external files rather than the RDBMS for populating the cubes? If so, how in AWM?
2. If I want the cubes populated every day for a year, must the RDBMS tables (that the cube maps to) also hold a year of data?
Once the cube is populated with yesterday's data from the RDBMS, can I not remove yesterday's data from the RDBMS and load only the new data?
In that case, the cube should have yesterday's data plus the newly added data from today.
The intent is that the RDBMS should hold just one day or one week of data, but the cube should hold one year of data...

1. Is there a way to map to external files rather than RDBMS for populating the cubes ? If so, how in AWM ?
Define an RDBMS external table that maps to the external files and then map the cube to this table. See http://docs.oracle.com/cd/E11882_01/server.112/e22490/et_concepts.htm for more details.
    2. If i want to have the cubes populated every day for 1 year time, should the RDBMS also (that maps the cube) should have one year data ?.
    Once the cube is populated for yesterday from RDMS, can i not remove the yesterday's data from RDMS and load only the new data in RDBMS.
    In such a case, the cube should have yesterdays data and the newly added data from today.
The intent is that, the RDBMS should have just 1 day or 1 week data but the cube should have 1 year data...
The default behaviour of cubes defined in AWM is to add new data during each cube build and keep existing data (unless it is overridden in a cell). So if you build once a day from a fact table that contains only one day of data, then eventually the cube will contain a year of data. This is controlled by the "default cube script" for the cube. You need a cube script without a CLEAR command.
    Given this setup, you should think about using the LOAD SERIAL option in the DBMS_CUBE.BUILD script. For example
exec DBMS_CUBE.BUILD('MY_CUBE USING (LOAD SERIAL, SOLVE)')
This will cause the server to access the external table only once instead of once per partition. The per-partition load makes sense if there are indexes on the fact table, but that is not going to be true in your case. It also makes sense (in all cases) if you plan to partition the cube by DAY.
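
For point 1, a minimal external-table definition might look like the sketch below; the directory object, file name and columns are hypothetical placeholders:

```sql
-- Sketch: expose a flat file as a relational source the cube can map to.
-- Requires: CREATE DIRECTORY data_dir AS '/path/to/files';
CREATE TABLE sales_fact_ext (
  day_id     VARCHAR2(10),
  product_id VARCHAR2(10),
  amount     NUMBER
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('sales_fact.csv')
);
```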

  • What happened to Allocation & Forecast in OLAP 11gR1 (AWM)?

    Hello all,
    Does anybody know what happened to allocation and forecast features in OLAP 11gR1 (AWM)?
These were available as step types in "Calculation Plans" in 10gR2. In 11gR1, "Cube Scripts" seem to be the descendant of "Calculation Plans", but there are no allocation or forecast step types available in "Cube Scripts".
I suppose an OLAP DML script could be used for allocation/forecast, but that doesn't seem to be a feature for, let's say, a business analyst without programming knowledge (of OLAP DML).
    Best regards,
    Javor

You can create Allocation and Forecast steps only in 10g-style AWs, using AWM 11g.
The newer 11g-style AWs do not support Allocation and Forecast yet.

  • Need help regarding complex calculation using Max value and limiting data after Max date in MDX

I am working on a somewhat complex calculated measure in SSAS cube script mode.
Scenario / Data Set:

Date         A      B      C      A+B
5/29/2014    Null   34     Null   34
6/30/2014    Null   23     45     68
7/15/2014    25     -25    Null   0
8/20/2014    -34    Null   Null   -34
9/30/2014    25     Null   60     25
10/15/2014   45     -45    Null   0
11/20/2014   7      8      Null   15
    a) Need to capture latest non-null value of Column C based on date
    with above example it should be 60 as of 9/30/2014
    b) Need to capture column A+B for all dates.
c) Sum values from column (A+B) only for dates after the latest date from step a, i.e. after 9/30/2014.
With the above example that is the last 2 rows and the sum is 15.
d) Finally, add the values from step a and step c, which means the calc measure value should be 75.
I need to perform all this logic in MDX. I was able to successfully get steps a and b in separate calc measures; however, I am not sure how to limit the scope based on a date criterion, in this case date > max date (9/30/2014). Also, how should I add calculated members and regular members together?
    I was able to get max value of C based on date and max date to limit the scope.
    CREATE MEMBER CURRENTCUBE.[Measures].[LatestC] AS
    TAIL( 
      NONEMPTY(
        [Date].[Date].CHILDREN*[Measures].[C]),1).ITEM(0) ,visible=1;
    CREATE MEMBER CURRENTCUBE.[Measures].[MaxDateofC] AS
    TAIL( 
      NONEMPTY(
        [Date].[Date].CHILDREN,[Measures].[C]),1).ITEM(0).MemberValue ,visible=1;
Please help with a SCOPE statement to limit the aggregation of A+B to dates > MaxDateofC. Also, how do I then add this aggregation to the LatestC calc measure?
    Thank You
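
One hedged way to combine the two measures from the post without a SCOPE statement is a third calculated member that filters dates strictly after [Measures].[MaxDateofC]; here [Measures].[A plus B] is a hypothetical name for the existing A+B measure:

```mdx
// Sketch: LatestC plus the sum of A+B for dates after MaxDateofC.
CREATE MEMBER CURRENTCUBE.[Measures].[FinalValue] AS
  [Measures].[LatestC] +
  SUM(
    FILTER(
      [Date].[Date].CHILDREN,
      [Date].[Date].CURRENTMEMBER.MemberValue > [Measures].[MaxDateofC]),
    [Measures].[A plus B]),
  VISIBLE = 1;
```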

    Hi Peddi,
I gave TRUNC to both of the dates, but still the same issue. I think the problem is in returning the BlobDomain.
return blobDomain;
} catch (XDOException xdoe) {
    System.out.println("Exception in XDO :");
    throw new OAException("Exception in XDO : " + xdoe.getMessage());
} catch (SQLException sqle) {
    System.out.println("Exception in SQL :");
    throw new OAException("SQL Exception : " + sqle.getMessage());
} catch (OAException e) {
    System.out.println("Exception in OA :");
    throw new OAException("Unexpected Error :: " + e.getMessage());
}
    Thanks and Regards,
    Myvizhi

  • Oracle OLAP installation problem

    Hello,
I want to test Oracle OLAP and have run into a problem while installing the cube scripts.
I am following this document http://www.oracle.com/technetwork/database/options/olap/global-11g-readme-082667.html
It seems as if some steps are missing in the manual, but strangely I did not manage to find anyone online having the same problem, so it must be me overlooking something.
    SQL> @global_11g_create_cubes
    This procedure creates the cubes for the sample GLOBAL schema.
    You should be logged on as a DBA to execute this procedure.
    Specify file system directory containing this installation program.
    Example:
    c:\download\Global_Schema_11g or /home/oracle/Global_Schema_11g
    Directory: /home/oracle/olap
    Specify a password for the GLOBAL user.
    Enter password:
    Connected.
    Begin installation
    ... deleting GLOBAL AW (if it exists)
          dbms_cube.import_xml(xmlCLOB);
    ERROR at line 68:
    ORA-06550: line 68, column 7:
    PLS-00201: identifier 'DBMS_CUBE.IMPORT_XML' must be declared
    ORA-06550: line 68, column 7:
    PL/SQL: Statement ignored
    Any ideas?

    Hi Srini,
The MOS doc helped me to install OLAP, and the DBMS_CUBE library is no longer a problem. Now I get the following error, again with no similar error/solution to be found using Google:
    SQL> @global_11g_create_cubes.sql;
    This procedure creates the cubes for the sample GLOBAL schema.
    You should be logged on as a DBA to execute this procedure.
    Specify file system directory containing this installation program.
    Example:
    c:\download\Global_Schema_11g or /home/oracle/Global_Schema_11g
    Directory: /home/oracle/olap
    Specify a password for the GLOBAL user.
    Enter password:
    Connected.
    Begin installation
    ... deleting GLOBAL AW (if it exists)
    ... creating GLOBAL AW
      begin
    ERROR at line 1:
    ORA-37162: OLAP error
    ORA-33292: Insufficient permissions to access analytic workspace GLOBAL.GLOBAL
    using the specified access mode.
    XOQ-01600: OLAP DML error while executing DML "SYS.AWXML!R11_INITIALIZE_AW"
    ORA-06512: at "SYS.DBMS_CUBE", line 433
    ORA-06512: at "SYS.DBMS_CUBE", line 465
    ORA-06512: at "SYS.DBMS_CUBE", line 523
    ORA-06512: at "SYS.DBMS_CUBE", line 486
    ORA-06512: at "SYS.DBMS_CUBE", line 501
    ORA-06512: at "SYS.DBMS_CUBE", line 512
    ORA-06512: at line 2
Looking into the code of global_11g_create_cubes.sql, this is the command where the error happens:
dbms_cube.import_xml('GLOBAL_INSTALL', 'GLOBAL_MV.XML');

  • Loading incremental data

    Hi,
    I am using 11.1.0.7 DB with 11.1.0.7B AWM. I would like to load data incrementally.
Sometimes my fact data contains tuples which were loaded before (corrections/restatements), and when I load incrementally it just replaces the existing tuples in the cube.
Is there a better way to do an incremental load other than getting all the relevant data (from the historic tables), using GROUP BY, and then loading it?
    Thanks,

The term "incremental" has two common meanings in the context of loading data into a cube. First, it could refer to loading a subset of records from source tables. For example, a fact table has data for years 2005 - 2010 and data is added daily. The goal of an incremental load might be to load only those records that were added to or updated in the fact table yesterday (e.g., '29-MAR-2010'). Solutions (1), (2) and (3) apply to that situation.
"Incremental" might also be used to describe a situation where data read during a load changes, rather than replaces, data in the cube. For example, data such as the following already exists in the cube:
    28-MAR-2010 PRODUCT_1 CUSTOMER_1 100.00
    28-MAR-2010 PRODUCT_3 CUSTOMER_2 150.00
    and the following data is added to the fact table (and these are the only records for these time, product and customer values):
    28-MAR-2010 PRODUCT_1 CUSTOMER_1 15.00
    28-MAR-2010 PRODUCT_3 CUSTOMER_2 -25.00
    And the intent is to have data appear as follows in the cube:
    28-MAR-2010 PRODUCT_1 CUSTOMER_1 115.00
    28-MAR-2010 PRODUCT_3 CUSTOMER_2 125.00
    What you need to know is that data read from a table always replaces the data that exists in the cube. So, if you just load from the fact table into the cube the data will be:
    28-MAR-2010 PRODUCT_1 CUSTOMER_1 15.00
    28-MAR-2010 PRODUCT_3 CUSTOMER_2 -25.00
    There are two things that you could do that would yield the following data in the cube:
    28-MAR-2010 PRODUCT_1 CUSTOMER_1 115.00
    28-MAR-2010 PRODUCT_3 CUSTOMER_2 125.00
    A) You could load the following records from the fact table directly into the cube.
    28-MAR-2010 PRODUCT_1 CUSTOMER_1 115.00
    28-MAR-2010 PRODUCT_3 CUSTOMER_2 125.00
    28-MAR-2010 PRODUCT_1 CUSTOMER_1 15.00
    28-MAR-2010 PRODUCT_3 CUSTOMER_2 -25.00
    The SQL used to load the cube can do a SUM .... GROUP BY. The net result will be:
    28-MAR-2010 PRODUCT_1 CUSTOMER_1 115.00
    28-MAR-2010 PRODUCT_3 CUSTOMER_2 125.00
    (I think you might need to map the cube with joins to get the sum ... group by. Be sure to check the SQL in the cube_build_log to make sure you are getting the SQL you expect.)
B) You could load the following records into a separate cube (let's call it SALES_CUBE_UPDATE, while your main cube is named SALES_CUBE).
    28-MAR-2010 PRODUCT_1 CUSTOMER_1 15.00
    28-MAR-2010 PRODUCT_3 CUSTOMER_2 -25.00
As a post-load task, you can update the SALES_CUBE cube to be the sum of the current value of SALES plus the value in the SALES_CUBE_UPDATE cube. You would do this with OLAP DML code such as:
    sales_cube_sales_stored(sales_cube_measure_dim 'SALES') = sales_cube_sales_stored + sales_cube_update_sales
    If you use this method, you would ideally:
- Filter (in OLAP DML terms, LIMIT) the dimensions of the cube to only those values that have data in the SALES_CUBE_UPDATE cube, so you don't spend time looping over dimension values that don't have data.
    - Loop over the composite dimension. E.g.,
    SET sales_cube_sales_stored(sales_cube_measure_dim 'SALES') = sales_cube_sales_stored + sales_cube_update_sales ACROSS sales_cube_composite
    Or, if the cube is partitioned (almost all cubes benefit from partitioning) you will loop the partition template. E.g.,
    SET sales_cube_sales_stored(sales_cube_measure_dim 'SALES') = sales_cube_sales_stored + sales_cube_update_sales ACROSS sales_cube_prt_template
    In most cases, you will do this only at the leaf levels and then aggregate so the entire process will look something like this:
    1) Load data into the sales_cube_update cube using the LOAD command (create this using a cube script in AWM). Don't bother to aggregate as part of the load.
    2) Run an OLAP DML program such as:
    " Limit to lowest levels
    LIMIT time TO time_levelrel 'DAY'
    LIMIT product TO product_levelrel 'ITEM'
    LIMIT customer TO customer_levelrel 'CUSTOMER'
    " Keep only those values where data exists in the SALES_CUBE_UPDATE cube.
    LIMIT time KEEP sales_cube_update_sales NE na
    LIMIT product KEEP sales_cube_update_sales NE na
    LIMIT customer KEEP sales_cube_update_sales NE na
    " Add the values of the sales_cube_update cube to the values in the sales_cube.
    " Loops the partition template for better performance.
    SET sales_cube_sales_stored(sales_cube_measure_dim 'SALES') = sales_cube_sales_stored + sales_cube_update_sales ACROSS sales_cube_prt_template
    " Save the data.
    UPDATE
    COMMIT
3) Aggregate the cube (create an AGGREGATE command in an AWM cube script).
    Notes:
    - Be sure to clear data from the SALES_CUBE_UPDATE cube before or after you load new data into it. (E.g., use the OLAP DML CLEAR command.)
- If you will be running OLAP DML commands on data that exists in multiple partitions, you can parallelize the execution of the OLAP DML code. See the following post: http://oracleolap.blogspot.com/2010/03/parallel-execution-of-olap-dml.html
Well, a bit of a lengthy explanation, but I hope it helps. Good luck.
