Cube Maintenance

Hi Experts,
We have a Sales scenario. R/3 -> DSO -> Cube.
For a particular date, 12.11.2009, and a particular material, there are multiple records in the Cube.
E.g. DSO: Invoice1 / Material1 / Qty = 50.
But in the Cube the same record got loaded twice:
Cube: Invoice1 / Material1 / Qty = 50.
      Invoice1 / Material1 / Qty = 50.
How can I delete this particular record from the cube?
Can I delete the request for 12.11.2009 from the Cube and reload the request from the DSO to the Cube?
What is the feasible solution?
Thanks
Kumar.

HI,
You can find the request that loaded the duplicate records into the cube and compare it with the other requests; if the entire content is doubled, you can simply delete that request.
Otherwise, do a selective deletion for this particular combination in the InfoCube and reload the same selection from the ODS to the cube: as a repair full request in the case of a delta scenario, or otherwise as a normal full load for the selective combination.
Thanks
Sat

Similar Messages

  • Killing a cube maintenance job

    Hi,
    We have a query on the handling of cube maintenance jobs:
    If, because of some issue with the machine, we kill the AWM application process with the Windows Task Manager while it is running a cube build, the connection to the AW stays active and the Analytic Workspace remains attached in read-write mode. The impact is that before we can restart the cube maintenance job by re-attaching the AW, we have to restart the DB instance so that the hanging connection to the AW is cleared.
    Is there any way to "completely" kill this connection without having to restart the DB instance?
    (The AWM version in use is 10.2.0.3.0A.)
    Thanks,
    Piyush

    Something like this will work. You need to run it with sufficient privilege to query the v$session, v$aw_olap and dba_aws views (e.g. as SYS) and to issue ALTER SYSTEM KILL SESSION.
    CREATE OR REPLACE PROCEDURE sess_killer (awowner IN VARCHAR2,
                                             awname  IN VARCHAR2) AS
      mysid    NUMBER;
      serial   NUMBER;
      sql_stmt VARCHAR2(200);
      session_marked EXCEPTION;
      PRAGMA EXCEPTION_INIT(session_marked, -31); -- ORA-00031: session marked for kill
      CURSOR c1 IS
        SELECT s.sid, s.serial#
          FROM v$session s, v$aw_olap a, dba_aws d
         WHERE d.owner      = awowner
           AND d.aw_name    = awname
           AND d.aw_number  = a.aw_number
           AND a.session_id = s.sid;
    BEGIN
      OPEN c1;
      LOOP
        FETCH c1 INTO mysid, serial;
        EXIT WHEN c1%NOTFOUND;
        -- DBMS_OUTPUT.PUT_LINE('Session: ' || mysid || ', ' || serial);
        sql_stmt := 'ALTER SYSTEM KILL SESSION ''' || mysid || ', ' || serial || '''';
        BEGIN
          EXECUTE IMMEDIATE sql_stmt;
        EXCEPTION
          WHEN session_marked THEN
            NULL; -- session already marked for kill; move on to the next one
        END;
      END LOOP;
      CLOSE c1;
    END sess_killer;
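    A hedged example of calling the procedure (the AW owner and name here are hypothetical placeholders; run it as a user with the privileges noted above):

    ```sql
    -- Kill every OLAP session attached to the SCOTT.MYAW workspace
    -- (owner/name are placeholders for your own AW)
    BEGIN
      sess_killer('SCOTT', 'MYAW');
    END;
    /
    ```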

  • Cube maintenance time increases every time it runs

    Hi,
    I have created a cube in AWM (10.1.0.4) with five dimensions. The cube is set not to preaggregate any levels, in order to keep the maintenance time short. The first time I maintained the cube it took about 3 minutes. To get an average maintenance time I chose to maintain the cube a few more times. Each time I did this, the cube took a bit longer to maintain, and the last time it took 20 minutes. Every time I checked the "aggregate the full cube" option, and no data was added to the source tables.
    I also checked the tablespace that the AW is stored in, and it also grows with each run. It is now using 1.6 GB.
    The database version is 10.1.0.4.0
    Anyone have any ideas to what I can do?
    Regards
    Ragnar
    edit: I did a few more runs; the last run took 40 minutes and the tablespace is now using 4.1 GB, so I think this is where the problem is. Instead of overwriting the old data, it seems to just add to it, making the tablespace bigger each time.
    Message was edited by:
    rhaug

    Hi,
    It seems I have resolved this problem now. I had made several cubes that were almost identical; the only difference was how they aggregated the data. One used full aggregation, one skip-level aggregation, and the last none at all. The reason I did this was to compare maintenance times and see how they affected the cube's response time. I am not sure what caused it, but I never managed to aggregate the cubes correctly: the cube with full aggregation took just a minute or two to maintain, and when I chose to view the data it took another minute.
    So my impression was that it was aggregating all the data at runtime.
    When I tried to maintain any of the cubes after this, I got various errors. Usually the maintenance failed when the tablespaces couldn't grow any more; the temp tablespace was at this point beyond 20 GB.
    I then thought that the names of the measures in the cubes could have something to do with the errors, and renamed them so they were unique in the AW. The tablespaces grew large this time as well, but the maintenance stopped because of an out-of-memory error.
    Then I deleted all cubes but one and tried to maintain it. After about 35 minutes it was done, and when I chose to view the data it seemed to be precalculated and the response time was good. The tablespace containing the AW also seems normal in size, around 500 MB. I did several test runs during the night, and since yesterday the cube has been maintained successfully 15 times.
    So this brings me to my question:
    Can an AW only contain one cube? Or is this just a user error on my part? It seems a bit weird that you could only have one cube using the same dimensions, so I'm not sure this is the right way of doing it, but it works.
    Anyone have any input or similar experiences?
    Regards
    Ragnar

  • Inventory cube maintenance

    Hi,
    We have been using the inventory cube 0IC_C03 for the past 3 years and have about 100 million records. We have a few reports running off the cube. Because of the huge amount of data, query execution now takes a very long time, approximately 20 minutes.
    Are there any performance tuning measures I can take to speed up query execution? Please advise.
    Thanks.

    Hi Morpheus;
    You can do the compression manually (Manage InfoCube -> Collapse, selecting "With zero Elimination") or in a process chain.
    However, performance problems in reports make me raise several questions:
    - Did you check the database performance of the cube's tables? (RSRV -> All Elementary Tests -> Database -> Database Information about InfoProvider Tables -> your cube.) If one dimension holds more than 30% of the fact table's rows, that dimension is giving you problems. Use the maximum number of dimensions possible and define the heaviest ones as line-item dimensions (for material, for example).
    - Are you using "heavy" characteristics in your cube, like material document or sales/purchasing document numbers? If yes, can't they be removed? The number of records would decrease a lot. You could track them in a DSO instead.
    - Also check the number of records transferred and added to your cube (usually, when transferred is lower than added, you need to find out why this happens and figure out a way to prevent it).
    - Check the Support Package level too; a higher SP level allows higher performance in report execution.
    - Finally, check whether it isn't possible to add more filters to the query, and make sure "fixed" filters are in the global filter area and not in the "local" area.
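    The dimension-size check in the first point can also be run directly against the database. A minimal sketch, assuming a cube named ZSALES whose tables follow the usual BW naming (/BIC/D&lt;cube&gt;&lt;n&gt; for dimension tables, /BIC/F&lt;cube&gt; for the fact table; both names here are hypothetical):

    ```sql
    -- Compare one dimension table's row count against the fact table's;
    -- a ratio above roughly 30% suggests a line-item dimension candidate
    SELECT d.dim_rows,
           f.fact_rows,
           ROUND(100 * d.dim_rows / f.fact_rows, 1) AS dim_to_fact_pct
      FROM (SELECT COUNT(*) AS dim_rows  FROM "/BIC/DZSALES1") d,
           (SELECT COUNT(*) AS fact_rows FROM "/BIC/FZSALES")  f;
    ```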
    Regards;
    Ricardo

  • 11g Cube not showing any data with no Rejected records

    Hi David ,
    Strangely, one of my 11g cubes has stopped showing data as of today: the build shows all records rejected. However, when I look in the rejected records table I don't find any records, so I am not sure what is happening. When I take the queries AWM logs in CUBE_BUILD_LOG and run them against the database in the AWM schema, the records come back perfectly fine. I wonder how the same query can fire during the cube load yet return no data? My cube build script has only LOAD and AGGREGATE.
    After maintaining, my dimension data looks fine, but no data is populated after the cube maintenance. MVs are switched off across all dimensions and cubes.
    I navigated to CUBE_OPERATION_LOG but was not able to comprehend its content.
    Any advice?
    Thanks and Regards,
    DxP

    Hi David ,
    To be very frank, today is a very bad day... Please see my observations below:
    I executed the following to make sure that no key value present in the fact is missing from a dimension. All the queries below return no rows.
    select distinct owner_postn_wid from w_synm_rx_t_f
    minus
    select distinct row_wid from postn_dh;

    select distinct payer_type_wid from w_synm_rx_t_f
    minus
    select distinct row_wid from wc_ins_plan_dh;

    select distinct market_wid from w_synm_rx_t_f
    minus
    select distinct row_wid from w_product_dh;

    select distinct period_day_wid from w_synm_rx_t_f
    minus
    select distinct row_wid from w_daytime_d;

    select distinct contact_wid from w_synm_rx_t_f
    intersect
    select distinct row_wid from w_person_d;

    select distinct x_terr_type_wid from w_synm_rx_t_f
    minus
    select distinct row_wid from w_lov_d;
    ============================
    The queries below each return a count of 0 rows, ensuring no NULLs are present:
    select count(1) from w_synm_rx_t_f where contact_wid is null;
    select count(1) from w_synm_rx_t_f where owner_postn_wid is null;
    select count(1) from w_synm_rx_t_f where payer_type_Wid is null;
    select count(1) from w_synm_rx_t_f where period_day_wid is null;
    select count(1) from w_synm_rx_t_f where X_TERR_TYPE_WID is null;
    select count(1) from w_synm_rx_t_f where market_wid is null;
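    The six checks above can also be collapsed into a single scan of the fact table; a minimal sketch (COUNT(*) - COUNT(col) counts the NULLs in col):

    ```sql
    -- One pass over the fact table: every column should report 0 NULLs
    SELECT COUNT(*) - COUNT(contact_wid)     AS null_contact,
           COUNT(*) - COUNT(owner_postn_wid) AS null_owner_postn,
           COUNT(*) - COUNT(payer_type_wid)  AS null_payer_type,
           COUNT(*) - COUNT(period_day_wid)  AS null_period_day,
           COUNT(*) - COUNT(x_terr_type_wid) AS null_x_terr_type,
           COUNT(*) - COUNT(market_wid)      AS null_market
      FROM w_synm_rx_t_f;
    ```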
    +++++++++++++++++++++++++++++++++
    The cube build log has the entries below:
    796     0     STARTED     CLEAR VALUES     MKT_SLS_CUBE     CUBE          NNOLAP     NN_OLAP_POC     P14:JAN2010          17-AUG-11 07.12.08.267000000 PM +05:30          JAVA     1          C     47141     67     0     1     
    796     0     COMPLETED     CLEAR VALUES     MKT_SLS_CUBE     CUBE          NNOLAP     NN_OLAP_POC     P14:JAN2010          17-AUG-11 07.12.08.267000000 PM +05:30          JAVA     1          C     47141     67     0     2     
    796     0     STARTED     LOAD     MKT_SLS_CUBE     CUBE          NNOLAP     NN_OLAP_POC     P14:JAN2010          17-AUG-11 07.12.08.283000000 PM +05:30          JAVA     1          C     47142     68     0     1     
    796     0     SQL     LOAD     MKT_SLS_CUBE     CUBE     "<SQL>
    <![CDATA[
    SELECT /*+ bypass_recursive_check cursor_sharing_exact no_expand no_rewrite */
    T16_ROW_WID ALIAS_127,
    T13_ROW_WID ALIAS_128,
    T10_ROW_WID ALIAS_129,
    T7_ROW_WID ALIAS_130,
    T4_ROW_WID ALIAS_131,
    T1_ROW_WID ALIAS_132,
    SUM(T20_MKT_TRX) ALIAS_133,
    SUM(T20_MKT_NRX) ALIAS_134
    FROM
    (SELECT /*+ no_rewrite */
    T1."CONTACT_WID" T20_CONTACT_WID,
    T1."MARKET_WID" T20_MARKET_WID,
    T1."OWNER_POSTN_WID" T20_OWNER_POSTN_WID,
    T1."PAYER_TYPE_WID" T20_PAYER_TYPE_WID,
    T1."PERIOD_DAY_WID" T20_PERIOD_DAY_WID,
    T1."MKT_NRX" T20_MKT_NRX,
    T1."MKT_TRX" T20_MKT_TRX,
    T1."X_TERR_TYPE_WID" T20_X_TERR_TYPE_WID
    FROM
    NN_OLAP_POC."W_SYNM_RX_T_F" T1 )
    T20,
    (SELECT /*+ no_rewrite */
    T1."ROW_WID" T16_ROW_WID
    FROM
    NN_OLAP_POC."W_DAYTIME_D" T1 )
    T16,
    (SELECT /*+ no_rewrite */
    T1."ROW_WID" T13_ROW_WID
    FROM
    NN_OLAP_POC."W_PERSON_D" T1 )
    T13,
    (SELECT /*+ no_rewrite */
    T1."ROW_WID" T10_ROW_WID
    FROM
    NN_OLAP_POC."WC_INS_PLAN_DH" T1 )
    T10,
    (SELECT /*+ no_rewrite */
    T1."ROW_WID" T7_ROW_WID
    FROM
    NN_OLAP_POC."W_LOV_D" T1 )
    T7,
    (SELECT /*+ no_rewrite */
    T1."ROW_WID" T4_ROW_WID
    FROM
    NN_OLAP_POC."POSTN_DH" T1 )
    T4,
    (SELECT /*+ no_rewrite */
    T1."ROW_WID" T1_ROW_WID
    FROM
    NN_OLAP_POC."W_PRODUCT_DH" T1 )
    T1
    WHERE
    ((T20_PERIOD_DAY_WID = T16_ROW_WID)
    AND (T16_ROW_WID = 20100101)
    AND (T20_CONTACT_WID = T13_ROW_WID)
    AND (T20_PAYER_TYPE_WID = T10_ROW_WID)
    AND (T7_ROW_WID = T20_X_TERR_TYPE_WID)
    AND (T20_OWNER_POSTN_WID = T4_ROW_WID)
    AND (T20_MARKET_WID = T1_ROW_WID)
    AND ((T20_PERIOD_DAY_WID) IN ((20100107.000000) , (20100106.000000) , (20100128.000000) , (20100124.000000) , (20100121.000000) , (20100118.000000) , (20100115.000000) , (20100109.000000) , (20100125.000000) , (20100114.000000) , (20100111.000000) , (20100110.000000) , (20100104.000000) , (20100101.000000) , (20100129.000000) , (20100123.000000) , (20100117.000000) , (20100113.000000) , (20100108.000000) , (20100131.000000) , (20100120.000000) , (20100116.000000) , (20100119.000000) , (20100105.000000) , (20100102.000000) ,
    (20100130.000000) , (20100127.000000) , (20100122.000000) , (20100112.000000) , (20100103.000000) , (20100126.000000) ) ) )
    GROUP BY
    (T1_ROW_WID, T4_ROW_WID, T7_ROW_WID, T10_ROW_WID, T13_ROW_WID, T16_ROW_WID)
    ORDER BY
    T1_ROW_WID ASC NULLS LAST ,
    T4_ROW_WID ASC NULLS LAST ,
    T7_ROW_WID ASC NULLS LAST ,
    T10_ROW_WID ASC NULLS LAST ,
    T13_ROW_WID ASC NULLS LAST ,
    T16_ROW_WID ASC NULLS LAST ]]>
    </SQL>"     NNOLAP     NN_OLAP_POC     P14:JAN2010          17-AUG-11 07.12.08.627000000 PM +05:30          JAVA     1     MAP1     C     47142     68     0     2     
    796     0     COMPLETED     LOAD     MKT_SLS_CUBE     CUBE     "<CubeLoad
    LOADED="0"
    REJECTED="4148617"/>"     NNOLAP     NN_OLAP_POC     P14:JAN2010          17-AUG-11 07.12.40.486000000 PM +05:30          JAVA     1          C     47142     68     0     3     
    796     0     STARTED     UPDATE     MKT_SLS_CUBE     CUBE          NNOLAP     NN_OLAP_POC     P14:JAN2010          17-AUG-11 07.12.40.501000000 PM +05:30          JAVA     1          C     47143     69     0     1     
    796     0     COMPLETED     UPDATE     MKT_SLS_CUBE     CUBE          NNOLAP     NN_OLAP_POC     P14:JAN2010          17-AUG-11 07.12.40.548000000 PM +05:30          JAVA     1          C     47143     69     0     2     
    +++++++++++++++++
    You can observe a clear rejection of 4 million rows. I ran the above query and it returns my data successfully.
    I looked at the CUBE_REJECTED records, took a sample record, and plugged its values into the above query; it returns the data fine, with my measures and dimension WIDs (please see the filters on ROW_WID below):
    =========================
    SELECT /*+ bypass_recursive_check cursor_sharing_exact no_expand no_rewrite */
    T16_ROW_WID ALIAS_127,
    T13_ROW_WID ALIAS_128,
    T10_ROW_WID ALIAS_129,
    T7_ROW_WID ALIAS_130,
    T4_ROW_WID ALIAS_131,
    T1_ROW_WID ALIAS_132,
    SUM(T20_MKT_TRX) ALIAS_133,
    SUM(T20_MKT_NRX) ALIAS_134
    FROM
    (SELECT /*+ no_rewrite */
    T1."CONTACT_WID" T20_CONTACT_WID,
    T1."MARKET_WID" T20_MARKET_WID,
    T1."OWNER_POSTN_WID" T20_OWNER_POSTN_WID,
    T1."PAYER_TYPE_WID" T20_PAYER_TYPE_WID,
    T1."PERIOD_DAY_WID" T20_PERIOD_DAY_WID,
    T1."MKT_NRX" T20_MKT_NRX,
    T1."MKT_TRX" T20_MKT_TRX,
    T1."X_TERR_TYPE_WID" T20_X_TERR_TYPE_WID
    FROM
    NN_OLAP_POC."W_SYNM_RX_T_F" T1 )
    T20,
    (SELECT /*+ no_rewrite */
    T1."ROW_WID" T16_ROW_WID
    FROM
    NN_OLAP_POC."W_DAYTIME_D" T1 )
    T16,
    (SELECT /*+ no_rewrite */
    T1."ROW_WID" T13_ROW_WID
    FROM
    NN_OLAP_POC."W_PERSON_D" T1 )
    T13,
    (SELECT /*+ no_rewrite */
    T1."ROW_WID" T10_ROW_WID
    FROM
    NN_OLAP_POC."WC_INS_PLAN_DH" T1 )
    T10,
    (SELECT /*+ no_rewrite */
    T1."ROW_WID" T7_ROW_WID
    FROM
    NN_OLAP_POC."W_LOV_D" T1 )
    T7,
    (SELECT /*+ no_rewrite */
    T1."ROW_WID" T4_ROW_WID
    FROM
    NN_OLAP_POC."POSTN_DH" T1 )
    T4,
    (SELECT /*+ no_rewrite */
    T1."ROW_WID" T1_ROW_WID
    FROM
    NN_OLAP_POC."W_PRODUCT_DH" T1 )
    T1
    WHERE
    ((T20_PERIOD_DAY_WID = T16_ROW_WID)
    AND (T16_ROW_WID = 20100101)
    AND (T20_CONTACT_WID = T13_ROW_WID)
    AND (T20_PAYER_TYPE_WID = T10_ROW_WID)
    AND (T7_ROW_WID = T20_X_TERR_TYPE_WID)
    AND (T20_OWNER_POSTN_WID = T4_ROW_WID)
    AND (T20_MARKET_WID = T1_ROW_WID)
    AND T13_ROW_WID = 255811
    AND T7_ROW_WID = 122
    AND T4_ROW_WID =3
    AND T1_ROW_WID=230
    AND T10_ROW_WID = 26
    AND ((T20_PERIOD_DAY_WID) IN ((20100107.000000) , (20100106.000000) , (20100128.000000) , (20100124.000000) , (20100121.000000) , (20100118.000000) , (20100115.000000) , (20100109.000000) , (20100125.000000) , (20100114.000000) , (20100111.000000) , (20100110.000000) , (20100104.000000) , (20100101.000000) , (20100129.000000) , (20100123.000000) , (20100117.000000) , (20100113.000000) , (20100108.000000) , (20100131.000000) , (20100120.000000) , (20100116.000000) , (20100119.000000) , (20100105.000000) , (20100102.000000) ,
    (20100130.000000) , (20100127.000000) , (20100122.000000) , (20100112.000000) , (20100103.000000) , (20100126.000000) ) ) )
    GROUP BY
    (T1_ROW_WID, T4_ROW_WID, T7_ROW_WID, T10_ROW_WID, T13_ROW_WID, T16_ROW_WID)
    ORDER BY
    T1_ROW_WID ASC NULLS LAST ,
    T4_ROW_WID ASC NULLS LAST ,
    T7_ROW_WID ASC NULLS LAST ,
    T10_ROW_WID ASC NULLS LAST ,
    T13_ROW_WID ASC NULLS LAST ,
    T16_ROW_WID ASC NULLS LAST
    =================================
    The XML export of the cube is below:
    <!DOCTYPE Metadata [
    <!ENTITY % BIND_VALUES PUBLIC "OLAP BIND VALUES" "OLAP METADATA">
    %BIND_VALUES;
    ]>
    <Metadata
    Version="1.2"
    MinimumDatabaseVersion="11.2.0.1">
    <Cube
    ETViewName="MKT_SLS_CUBE_VIEW"
    Name="MKT_SLS_CUBE">
    <Measure>
    <BaseMeasure
    SQLDataType="NUMBER"
    ETMeasureColumnName="TRX"
    Name="TRX">
    <Description
    Type="LongDescription"
    Language="AMERICAN"
    Value="TRX">
    </Description>
    <Description
    Type="ShortDescription"
    Language="AMERICAN"
    Value="TRX">
    </Description>
    <Description
    Type="Description"
    Language="AMERICAN"
    Value="TRX">
    </Description>
    </BaseMeasure>
    </Measure>
    <Measure>
    <BaseMeasure
    SQLDataType="NUMBER"
    ETMeasureColumnName="NRX"
    Name="NRX">
    <Description
    Type="LongDescription"
    Language="AMERICAN"
    Value="NRX">
    </Description>
    <Description
    Type="ShortDescription"
    Language="AMERICAN"
    Value="NRX">
    </Description>
    <Description
    Type="Description"
    Language="AMERICAN"
    Value="NRX">
    </Description>
    </BaseMeasure>
    </Measure>
    <CubeMap
    Name="MAP1"
    IsSolved="False"
    Query="W_SYNM_RX_T_F"
    WhereClause="W_DAYTIME_D.ROW_WID = 20100101">
    <MeasureMap
    Name="TRX"
    Measure="TRX"
    Expression="W_SYNM_RX_T_F.MKT_TRX">
    </MeasureMap>
    <MeasureMap
    Name="NRX"
    Measure="NRX"
    Expression="W_SYNM_RX_T_F.MKT_NRX">
    </MeasureMap>
    <CubeDimensionalityMap
    Name="TIME"
    Dimensionality="TIME"
    MappedDimension="TIME.CALENDER.MONTHLY"
    JoinCondition="W_SYNM_RX_T_F.PERIOD_DAY_WID = W_DAYTIME_D.ROW_WID"
    Expression="W_DAYTIME_D.ROW_WID">
    </CubeDimensionalityMap>
    <CubeDimensionalityMap
    Name="CUSTOMER"
    Dimensionality="CUSTOMER"
    MappedDimension="CUSTOMER.CUSTOMER_HIERARCHY.DETAIL_LEVEL"
    JoinCondition="W_SYNM_RX_T_F.CONTACT_WID = W_PERSON_D.ROW_WID"
    Expression="W_PERSON_D.ROW_WID">
    </CubeDimensionalityMap>
    <CubeDimensionalityMap
    Name="INS_PLAN_DH"
    Dimensionality="INS_PLAN_DH"
    MappedDimension="INS_PLAN_DH.INS_PLAN.DETAIL"
    JoinCondition="W_SYNM_RX_T_F.PAYER_TYPE_WID = WC_INS_PLAN_DH.ROW_WID"
    Expression="WC_INS_PLAN_DH.ROW_WID">
    </CubeDimensionalityMap>
    <CubeDimensionalityMap
    Name="LIST_OF_VALUES"
    Dimensionality="LIST_OF_VALUES"
    MappedDimension="LIST_OF_VALUES.LOV_HIERARCHY.DETAIL_LEVEL"
    JoinCondition="W_LOV_D.ROW_WID = W_SYNM_RX_T_F.X_TERR_TYPE_WID"
    Expression="W_LOV_D.ROW_WID">
    </CubeDimensionalityMap>
    <CubeDimensionalityMap
    Name="POSITIONDH"
    Dimensionality="POSITIONDH"
    MappedDimension="POSITIONDH.POST_HIER.DETAIL"
    JoinCondition="W_SYNM_RX_T_F.OWNER_POSTN_WID = POSTN_DH.ROW_WID"
    Expression="POSTN_DH.ROW_WID">
    </CubeDimensionalityMap>
    <CubeDimensionalityMap
    Name="PRODH"
    Dimensionality="PRODH"
    MappedDimension="PRODH.PRODHIER.DETAILLVL"
    JoinCondition="W_SYNM_RX_T_F.MARKET_WID = W_PRODUCT_DH.ROW_WID"
    Expression="W_PRODUCT_DH.ROW_WID">
    </CubeDimensionalityMap>
    </CubeMap>
    <Organization>
    <AWCubeOrganization
    MVOption="NONE"
    SparseType="COMPRESSED"
    MeasureStorage="SHARED"
    NullStorage="MV_READY"
    CubeStorageType="NUMBER"
    PrecomputePercent="35"
    PrecomputePercentTop="0"
    PartitionLevel="TIME.CALENDER.MONTHLY"
    AW="&AW_NAME;">
    <SparseDimension
    Name="TIME"/>
    <SparseDimension
    Name="CUSTOMER"/>
    <SparseDimension
    Name="INS_PLAN_DH"/>
    <SparseDimension
    Name="LIST_OF_VALUES"/>
    <SparseDimension
    Name="POSITIONDH"/>
    <SparseDimension
    Name="PRODH"/>
    <DefaultBuild>
    <![CDATA[BUILD SPEC LOAD_AND_AGGREGATE (
      LOAD NO SYNCH,
      SOLVE
    )]]>
    </DefaultBuild>
    </AWCubeOrganization>
    </Organization>
    <Dimensionality
    Name="TIME"
    ETKeyColumnName="TIME"
    Dimension="TIME">
    </Dimensionality>
    <Dimensionality
    Name="CUSTOMER"
    ETKeyColumnName="CUSTOMER"
    Dimension="CUSTOMER">
    </Dimensionality>
    <Dimensionality
    Name="INS_PLAN_DH"
    ETKeyColumnName="INS_PLAN_DH"
    Dimension="INS_PLAN_DH">
    </Dimensionality>
    <Dimensionality
    Name="LIST_OF_VALUES"
    ETKeyColumnName="LIST_OF_VALUES"
    Dimension="LIST_OF_VALUES">
    </Dimensionality>
    <Dimensionality
    Name="POSITIONDH"
    ETKeyColumnName="POSITIONDH"
    Dimension="POSITIONDH">
    </Dimensionality>
    <Dimensionality
    Name="PRODH"
    ETKeyColumnName="PRODH"
    Dimension="PRODH">
    </Dimensionality>
    <Description
    Type="LongDescription"
    Language="AMERICAN"
    Value="MKT SLS CUBE">
    </Description>
    <Description
    Type="ShortDescription"
    Language="AMERICAN"
    Value="MKT SLS CUBE">
    </Description>
    <Description
    Type="Description"
    Language="AMERICAN"
    Value="MKT SLS CUBE">
    </Description>
    <ConsistentSolve>
    <![CDATA[SOLVE (
      SUM
        MAINTAIN COUNT
         OVER ALL
    )]]>
    </ConsistentSolve>
    </Cube>
    </Metadata>
    +++++++++++++++++++++++
    I dropped the AW, created a new one from the exported XML, maintained all dimensions, and then rebuilt. I still have the issue :(
    Anything you can highlight from the above?
    Thanks,
    DxP
    Also, I suspect it may be an issue related to the error below, raised when I click on one of my Position_Hier views from AWM; even when I select that view in SQL Developer, it throws the error after displaying the first couple of rows (while paging down):
    java.sql.SQLException: ORA-33674: Data block size 63 exceeds the maximum size of 60 bytes.
    at oracle.olap.awm.util.jdbc.SQLWrapper.execute(Unknown Source)
    at oracle.olap.awm.querydialog.PagedQueryDialog$1.construct(Unknown Source)
    at oracle.olap.awm.ui.SwingWorker$2.run(Unknown Source)
    at java.lang.Thread.run(Thread.java:595)
    Edited by: e_**** on Aug 17, 2011 8:41 PM

  • Duplicate dimension in AW Cube

    Hi:
    We have to include two copies of a customer dimension in a cube in our analytical workspace. The two copies are due to two FKs (caller_customer and receiver_customer) in the cube. Our customer dimension has > 5M records and several hierarchies. We suspect that including the same large customer dimension twice in our cube will increase the cube size and cube maintenance time substantially. Any ideas on this? Is there a workaround or a better way to do it?
    Regards.
    Umar.

    It sounds from your description that there is a single leaf value, Customer, that is rolled up differently depending on its role as caller or receiver. I am not sure why you need to implement two different dimensions; surely this is one dimension with two hierarchies based on the caller and receiver drill paths.
    Hope this helps,
    Keith
    Oracle Business Intelligence Product Management
    BI on Oracle: http://www.oracle.com/bi/
    BI on OTN: http://www.oracle.com/technology/products/bi/
    BI Beans http://www.oracle.com/technology/products/bib/index.html
    Discoverer: http://www.oracle.com/technology/products/discoverer/
    BI Software: http://www.oracle.com/technology/software/products/ias/devuse.html
    Documentation: http://www.oracle.com/technology/documentation/appserver1012.html
    BI Samples: http://www.oracle.com/technology/products/bi/samples/
    Blog: http://oraclebi.blogspot.com/

  • Maintain Cube Not Working

    Brijesh,
    I built my dimensions, levels and hierarchies successfully and also created the cube. Now that I've built my measures and run the maintenance, I'm not seeing any values in them, even though I know I should.
    Based on my mapping, the keys from the fact go to the right dimensions (and I even made a simpler, just one dimension --> measure cube as well), but no success. There are cases where I know I shouldn't get any data (based on the selected values), but when I make a valid selection I see only 0.00 displayed.
    Can you tell where I may have gone wrong here? Are the values made available for selection (the attributes) ONLY supposed to be the same one-to-one values available in the fact table?
    **I'm using the simple SUM aggregate function for my measures, and pretty much all the default configurations.
    Re: where are dimension attributes in AWM - cube viewer? (Brijesh Gaur, Nov 10, 2009, in response to mikeyp)
    An attribute is something related to a dimension: attributes are purely properties of the dimension, not of the fact. Now, you said the data is not visible in the cube and you are getting 0.00 even in the simpler case (a one-dimensional cube). There are many possible causes of values not showing up in a cube; some are mentioned below.
    1. All records were rejected in the cube maintenance process. Check the olapsys.xml_load_log table to see if you can find any rejected records.
    2. There is some breakage in the dimension's hierarchy. That also prevents data from summing up.
    Did you try the Global sample available on OTN? That should be a good starting point for you.
    You can check whether the cube is loaded as follows. Consider your one-dimensional cube, and find a member which exists in the dimension and also has some fact value associated with it.
    1. Limit your dimension to that value, like this:
    lmt <dim name> to '<value>'
    2. Then check the data in the cube. For a compressed composite cube:
    rpr cubename_prt_topvar
    For an uncompressed cube:
    rpr cubename_stored
    Step 2 should show the same value that is available in the fact.
    Thanks,
    Brijesh
    Re: where are dimension attributes in AWM - cube viewer? (mikeyp, Nov 13, 2009, in response to Brijesh Gaur)
    Brijesh,
    Thanks for your suggestions, and here are my results based on that.
    1. No records were rejected after running cube maintenance.
    2. I didn't limit my dimension to a specific value as you recommended, but made my member the same as my Long and Short description attributes using AWM. (It's a flat dimension, i.e. no level or hierarchy, since the dimension only has one value/meaningful field.)
    Based on those steps, I still didn't get the results I was looking for. The fact table has five values for that one dimension and I'm seeing 0.00 for four of them, and an inaccurate value for the last one (this after comparing with a simple aggregate query against the fact).
    Do you have any other possible reasons/solutions?
    **Loading the Global schema into our dev environment is out of my hands, unfortunately, so that's the reason for the prolonged learning curve.

    Brijesh,
    Here's the results of what you suggested:
    1. Creating the test dim and fact table with the simple case you provided was successful, and AWM was able to map the same values to a cube created on top of that model.
    2. I took it a step further and changed the dim values to be the same as in the existing dim table.
    2.b. I also replaced the test fact table values to mimic the existing values, so they would match what's available in the dim table, and here's where the fun/mystery begins.
    Scenario 1:
    I created the fact like this: select dim value, sum(msr) from <existing fact table>
    As you can easily tell, my values were already aggregated in the table, and they also match perfectly in the cube created by AWM; no issue.
    Scenario 2:
    I created the fact like this: select dim value, msr from <existing fact table>
    Quite clearly, my values are no longer aggregated but broken down across multiple occurrences of the dim values; I did this so I could verify that the "sum" would actually work when used in AWM.
    The results from scenario 2 led me back to the same issue as before, i.e. the values weren't rolled up when the cube was created. No records were rejected, only ONE measure value showed up (and it was still incorrect), and everything else was 0.00.
    I retrieved this error from the command program that runs in the background. It was generated right after running the maintain cube:
    <the system time> TRACE: In oracle.dss.metadataManager.............MDMMetadataProviderImpl92::..........MetadataProvider is created
    <the system time> PROBLEM: In oracle.dss.metadataManager.........MDMMetadataProviderImpl92::fillOlapObjectModel: Unable to retrieve AW metadata. Reason ORA-942
    BI Beans Graph version [3.2.3.0.28]
    <the system time> PROBLEM: In oracle.dss.graph.GraphControllerAdapter::public void perspectiveEvent( TDGEvent event ): inappropriate data: partial data is null
    <the system time> PROBLEM: In oracle.dss.graph.BILableLayout::logTruncatedError: legend text truncated
    Please tell me this helps shed some light on the main reason why no values are coming back; we really need to move forward with using Oracle cubes here.
    Thanks
    Mike

  • Missing AW Maintenance Details When Submitting to Oracle Job Queue

    Is there a way to know the number of added/deleted members or the processed/rejected records of a dimension/cube maintenance task that was submitted to the Oracle Job Queue?
    The following is the log of my recent maintenance task that was submitted to the Oracle Job Queue:
    18:59:03 Attached AW OLAP_TEST.AW_TEST in RW Mode.
    18:58:27 Completed Build(Refresh) of OLAP_TEST.AW_TEST Analytic Workspace.
    18:58:27 Finished Parallel Processing.
    18:58:21 Running Jobs: AWXML$_2534_1. Waiting for Tasks to Finish...
    18:58:21 Started 1 Finished 0 out of 1 Tasks.
    18:58:21 Running Jobs: AWXML$_2534_1.
    Usually, there would be a line in the XML_LOAD_LOG table where the added/deleted members or processed/rejected records can be found. In this case, there is none.

    Usually the entries you want appear before these lines (time-wise). The parallel processing only handles aggregating the fact partitions in parallel, not loading the base-level data.
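    When a job-queue build does write its detail rows, they can be pulled back with a query along these lines (a hedged sketch: the XML_AW, XML_DATE and XML_MESSAGE column names follow the 10g OLAPSYS.XML_LOAD_LOG layout and should be verified in your release):

    ```sql
    -- Most recent load-log messages for the OLAP_TEST.AW_TEST workspace
    SELECT xml_date, xml_message
      FROM olapsys.xml_load_log
     WHERE xml_aw LIKE '%AW_TEST%'
     ORDER BY xml_date DESC;
    ```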

  • Multi-level linking to master data via navig attribs: A saga in two parts

    The Saga, Part 1
    I have a requirement to provide access to a hierarchy belonging to a characteristic that is not contained in an infocube.  At first I thought I was going to have to add this characteristic to the cube, but then I realized the characteristic is a navigational attribute of another characteristic that is contained in the cube.
    The Question, Part 1
    Will users be able to use this navigational attribute as a selection criterion on reports, and will they be able to activate and use the display hierarchy associated with this characteristic?  (I'm anticipating the answer to this question will be yes.)
    The Saga, Part 2
    Assuming the answer to the above is yes, let's carry this one step further.  The navigational attribute described above (let's call it the "local" navigational attribute) also possesses navigational attributes, one of which users want to be able to use as a selection criterion on the same report (let's call this one the "remote" navigational attribute).  They also want to be able to use the display hierarchy on this remote attribute.
    The Question, Part 2
    I know this is a long shot, but can a navigational attribute of a navigational attribute be accessed in BEx?  Can that remote navigational attribute be used as a selection criterion?  Can its hierarchy be used in a query?  The scenario would look something like the following:
    BASIC CHAR (contained in infocube)
      |
       --> LOCAL NAVIG ATTRIB
              |
               --> REMOTE NAVIG ATTRIB (hierarchy needed in report)
    I'm anticipating the answer to Question #2 will be no, but I just wanted to be sure.  I'm guessing I'll have no choice but to add at least one of these characteristics to the cube.
    Thanks
    P.S.  We're running BW 3.1, BI Content 3.3.

    <b>The Question, Part 1
    Will users be able to use this navigational attribute as a selection criterion on reports, and will they be able to activate and use the display hierarchy associated with this characteristic? (I'm anticipating the answer to this question will be yes.)</b>
    The answer is yes. The benefit is that you do not need the IO in the cube and can still use it in the front end.
    <b>The Question, Part 2
    I know this is a long shot, but can a navigational attribute of a navigational attribute be accessed in BEx? Can that remote navigational attribute be used as a selection criterion? Can its hierarchy be used in a query? The scenario would look something like the following:
    BASIC CHAR (contained in infocube)
       --> LOCAL NAVIG ATTRIB
           --> REMOTE NAVIG ATTRIB (hierarchy needed in report)</b>
    When I read this thread, I was curious to see whether it works or not. I tried it with coorder, plant and plant category. Guess what, it didn’t work. I even flagged it “NAVIGATIONAL ATTRIBUTE INFOPROVIDER” in the IO maintenance to see if it reflects on the cube’s navigational attributes.
    No, this scenario won't work.
    So in the infocube maintenance you would need to see something like basic char__local navig__remote nav to make the scenario work, which is obviously unavailable in the cube maintenance.
    I would love to see someone make this work. It would reduce a lot of effort to realign data in the cube if there is a problem in the master data.

  • Error Occured in BUILD_DRIVER

    I am facing following error in cube maintenance while submitting maintenance job to oracle queue.
    ***Error Occured in BUILD_DRIVER:
    It doesn't give any other reason. I deployed everything on another machine with the same patches, and it works fine there, but on the second machine it doesn't. Also, when I manually maintain the cube from AWM, it works fine. I copied the exact maintenance script generated by AWM, and every time it gave me the above error.
    Can anyone help me out?

    Let me elaborate further.
    I have two different environments.
    Environment 1
    Oracle Database 10.2.0.2.0
    1 Interim Patch :
    5612127
    The scheduled cube maintenance script is working fine in this environment. The cube is built in the MY_OWNER schema, and the scheduler is run by the METADATA_OWNER user. However, the scheduled job runs under the ownership of "SYS", which is quite weird. In the end, the scheduled job runs successfully without any error and the cube is maintained properly.
    Environment 2:
    Oracle Database 10.2.0.2.0
    5 Interim Patches :
    5612127, 4939157, 5225799, 4639977, 5033476
    The scheduled cube maintenance script is not working in this environment. However, if I manually maintain the cube in AWM, it works absolutely fine. The exact error it says in XML_LOG is:
    11:30:49 Failed to Submit a Job to Build(Refresh) Analytic Workspace MY_OWNER.MY_ANALYTICAL_WORKSPACE.
    11:30:49 ***Error Occured in BUILD_DRIVER:
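    When the submitted job dies this early, the scheduler's own run log sometimes carries more detail than XML_LOAD_LOG does. A hedged sketch for checking it (the AWXML$ job-name pattern and the view are assumptions based on how a standard 10.2 install names these jobs):

    ```sql
    -- Sketch: check the scheduler's record of the AWXML$ build jobs.
    -- View and columns per standard Oracle 10.2; verify locally.
    SELECT job_name, status, error#, actual_start_date, additional_info
      FROM dba_scheduler_job_run_details
     WHERE job_name LIKE 'AWXML$%'
     ORDER BY actual_start_date DESC;
    ```

    The ERROR# and ADDITIONAL_INFO columns often point at a privilege or invalid-object problem that the BUILD_DRIVER message swallows; comparing the two environments' interim patches is also worth doing since they differ here.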

  • Loading one partition in AWM tries to aggregate all partitions

    I believe I'm seeing a difference in behavior between maintenance of a cube with a compressed composite vs. a cube with all dimensions sparse but not compressed.
    With the compressed composite cube, I execute cube maintenance (AWM 10.2.0.2), select 'Aggregate the cube for only the incoming data values', and click Finish. According to XML_LOAD_LOG, the records are loaded, and the aggregation occurs in only the partition(s) in which the data was loaded.
    With a fully-sparse cube (but NOT using a compressed composite), I follow all of the same steps, and use the same fact table. According to XML_LOAD_LOG, the records are loaded in the same amount of time as before, but the aggregation starts to loop over every partition (I'm partitioned along the Period-level of a Day-Wk-Pd-Qtr-Year time dimension). The log shows an entry for every period, not just the single period in the fact table, and it's taking about 90 seconds per period. For 48 periods, that adds over an hour of processing time unnecessarily.
    I tried it twice, thinking perhaps I clicked the wrong radio button, but that's not the case. I'm seeing very different (and detrimental) behavior.
    Has anyone else seen this?

    Addendum: The cube maintenance process also behaves badly when requesting the use of more than 1 processor. I have two cubes that I want to maintain at the same time. The server has 4 processors. I submit the job to the Oracle Job Queue and specify the use of 2 processors.
    The load log shows 48 3-line entries like these:
    Attached AW TEST1_SALESAW in MULTI Mode
    Started load of measures: Sales, Cost, Units from Cube LY4.CUBE. PD07 04 Partition
    Finished load of measures: Sales, Cost, Units from Cube LY4.CUBE. PD07 04 Partition. Processed 0 records.
    There is only one fiscal period in the fact table. But the multi-threaded process dutifully rolls through all 48 fiscal periods, and in doing so adds a significant amount of time to the process.
    NOTE: This is only an issue when using multiple processors. If I request only 1 processor, the "looping through all partitions" behavior does not occur.

  • Creating variable for attribute

    Hi guys,
    Is it possible to create a variable for an attribute? My requirement is that the user wants Serial # as one of the user selection criteria; it is an attribute of system code equipment in Dimension material. If not, is there any other possible way to get it into the user selection criteria when the report is run.
    Thanks in advance.

    You can't create a variable on an attribute of a characteristic.
    The other workaround is to make Serial # a navigational attribute, both in the IO (system code equipment) and in the cube maintenance too. That way the attribute will be available in the report like any other characteristic in the cube.

  • OWB and AWM for loading DWH

    Hello,
    I am on the following version:
    DB: Oracle 10.2.0.1 (both source DB and target DWH)
    OWB : 10.2.0.1
    AWM : 10.2.0.1
    I have the following process:
    From the source DB I use OWB to load relational dimensions (SCD 2). I use AWM to load data from these dimensional tables into an Analytic Workspace. I use OWB because I need to track changes (SCD), and AWM because OWB currently does not support parent-child dimensions whereas AWM does. I would love to load the AW from OWB directly if it had parent-child support.
    My question is:
    The whole load should be trigger-based. When my process (SQL) completes on the source table, the OWB process of loading the relational dimensions should start, which is not a problem; I am using run_owb_code.sql for this. Once run_owb_code is complete, I want AWM's maintain-data process to start, and I don't know how to do that. I tried saving the maintenance task to a script and executing it, but it doesn't seem to work. I don't want to open AWM and run cube maintenance manually every time the cube needs to be refreshed.
    Please note I don't have a cube in the relational world. My fact (cube) in the AW is a direct load from a table (as I don't need SCD here), so basically I use OWB just to load the dimensions alone. AWM creates the actual cube with the dimension tables loaded using OWB and the fact table from the source, so I don't think I can use "dbms_awm.refresh_awcube(...".
    Can somebody please help me in automating this load process?
    Thanks a lot in advance!
    Maruthi

    Hello,
    I am sorry, after some research, I came across this post
    Re: Cube and dimension refresh proactively
    As per the above post, the script that is generated using AWM will work if the AW is not attached by any user.
    I detached the AW and executed the script and the cube was populated with data.
    Thanks,
    Maruthi
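    For anyone automating the same chain, the script AWM saves can be wrapped in a DBMS_SCHEDULER job that the OWB post-process kicks off once the dimension load finishes. A minimal sketch, assuming the saved maintenance script has been wrapped into a stored procedure called REFRESH_MY_AW (that procedure name and the job name are placeholders, not anything AWM generates for you):

    ```sql
    -- Sketch: run the AWM-generated maintenance script on demand
    -- after the OWB dimension load. REFRESH_MY_AW is a hypothetical
    -- wrapper procedure around the saved script; rename to match
    -- your environment. The AW must not be attached by any user.
    BEGIN
      DBMS_SCHEDULER.create_job(
        job_name   => 'REFRESH_AW_JOB',
        job_type   => 'STORED_PROCEDURE',
        job_action => 'REFRESH_MY_AW',
        enabled    => FALSE);   -- started on demand, not on a calendar
    END;
    /
    -- At the end of the OWB load step:
    BEGIN
      DBMS_SCHEDULER.run_job('REFRESH_AW_JOB');
    END;
    /
    ```

    Leaving the job disabled and calling run_job explicitly keeps the refresh strictly trigger-based, as the original requirement asks.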

  • OLAP commands at sql*plus

    Hi there,
    I installed Analytic Workspace Manager 11g on my computer to manage OLAP cubes, using this page: http://www.oracle.com/technology/products/bi/olap/index.html. It is a very useful tool which really helped me with building and querying cubes, but I wasn't able to find anything about how to do all this at the command prompt in SQL*Plus.
    So, I would like to know how I can work with OLAP using commands at a proper command prompt like SQL*Plus. To be more specific, I need commands which:
    1. Define dimensions, levels, hierarchies and cubes at the command prompt!
    2. Load data into dimension tables and cubes!
    Also, could you give me a tutorial which organises all OLAP commands?
    Note: If there is no other way but to use the tool I just mentioned, it would be helpful to tell me!

    All metadata/object creation is via the OLAP API (which is Java) so you can't create dimensions and cubes via SQL Plus. You can, however, manage cube maintenance using the dbms_cube package (which can, of course, be called from SQL Plus). See http://download.oracle.com/docs/cd/B28359_01/appdev.111/b28419/d_cube.htm#CFHGDJAA
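    To illustrate the dbms_cube route from SQL*Plus, a hedged sketch (GLOBAL.UNITS_CUBE is a placeholder cube name borrowed from the common demo schema; see the linked documentation for the full parameter list):

    ```sql
    -- Sketch: maintain an existing cube from SQL*Plus via DBMS_CUBE (11g).
    -- GLOBAL.UNITS_CUBE is a hypothetical cube name; substitute your own.
    -- The default build performs a LOAD followed by a SOLVE (aggregate).
    EXEC DBMS_CUBE.BUILD('GLOBAL.UNITS_CUBE');

    -- An explicit build script can also be supplied:
    EXEC DBMS_CUBE.BUILD('GLOBAL.UNITS_CUBE USING (LOAD, SOLVE)');
    ```

    This only covers maintenance; defining the dimensions and cubes themselves still has to go through the OLAP API (AWM or a Java program), as noted above.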

  • 0FIGL_V10 Error

    Hi,
    I executed the query 0FIGL_V10_Q0001 on the 0FIGL_V10 virtual InfoCube, and it gives the error "characteristic financial statement item not available in InfoProvider". When I checked the objects in the virtual cube maintenance, the InfoObject is there.
    The errors are BRAIN W216 and BRAIN E101 when displaying the error message in Query Designer.
    Please help me out as this is urgent.

    You have to make sure of the following:
    1. The 0GLACCEXT InfoObject should be installed and loaded with master data, including hierarchy data.
        http://help.sap.com/bp_bw370/BBLibrary/documentation/B70_BB_ConfigGuide_EN_DE.doc
    2. Make sure the FM (RS_BCT_FIGL_DATA_GET_VC10) that pulls data from 0FIGL_C10 to 0FIGL_V10 is installed, taking
        0GLACCEXT into consideration. (0GLACCEXT is not available in 0FIGL_C10.)
