10gR2 Incremental aggregation performance

Hi, I've seen the Oracle presentation about 10gR2 where it shows that incremental data loads perform remarkably faster than they did in 10gR1, etc. However, I have not been able to repeat this performance on a small test cube. Could someone from Oracle please jump in and let me know if there are additional settings, design considerations, etc. needed to see these performance increases?
The cube I'm testing this on (borrowed from Mark Rittman) has the following setup:
qty_shipped <channel <time customer product promotion>>
qty_ordered <channel <time customer product promotion>>
I am using a global composite, but not compression. Also I am not doing a skip-level aggregation - I'm fully aggregating all dimensions.
When I load and aggregate the entire data (11,429 input records), I see the following performance:
Data load time: 1 sec
Aggregation time: 33 sec
Then I delete the database, recreate it from the XML template, and load the data in two pieces: an "initial" load (containing 11,411 records) and an "incremental" load (containing 18 records). These tables are populated based on time - the initial load has all history except for the last two days' worth of data, and the incremental load contains the last two days' worth.
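For reference, the two staging tables are just a time-based split of the full source - something like this (table and column names here are approximate, not the real ones):
create table sales_initial as
  select * from sales_fact
  where order_date < (select max(order_date) - 1 from sales_fact);  -- 11,411 rows
create table sales_incr as
  select * from sales_fact
  where order_date >= (select max(order_date) - 1 from sales_fact); -- 18 rows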
Here is the performance I see:
"Initial" load - solving the exact same way as the "full" load above except using the "initial" load table (11,411 recs), and then doing an "aggregate full cube":
Data load time: 1 sec
Aggregation time: 40 sec
*** Note: for some reason, aggregating the "initial" load table is always somewhat slower than the build using the "full" dataset - not sure why! I'm doing the exact same steps, starting with a clean cube... the only difference appears to be that the "initial" table has 18 fewer records than the full one. Any ideas?
"Incremental" load - here I load just the 18 records for the last two days worth of data. I repoint the cube mappings to the "incremental" load table, then load and aggregate using the "aggregate only for incoming values" method. I would expect this solve to be so fast I can barely measure it, but it isn't:
Data load time: 1 sec
Aggregation time: 17 sec
Obviously the incremental load is somewhat faster than doing a full load - but on the other hand, it's only loading 18 out of 11,000+ values. I would expect the whole thing to finish in under a second.
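For what it's worth, my understanding is that the two AWM build choices boil down to roughly this at the OLAP DML level (a sketch only - object names are made up, and I may be oversimplifying):
begin
  -- "aggregate full cube": everything in status
  dbms_aw.execute('allstat');
  dbms_aw.execute('aggregate qty_shipped_stored using sales_aggmap');
  -- "aggregate only for incoming values": status limited to the new cells
  dbms_aw.execute('limit time to ''26-MAY-05'' ''27-MAY-05''');
  dbms_aw.execute('aggregate qty_shipped_stored using sales_aggmap');
end;
/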
Could someone please shed some light on what is going on?
Thanks!
Scott

2. Are there any good reference materials on how to
set up partitioning for OLAP? I'm intrigued by your
comment re: parallel builds, etc. but I know nothing
about how to effectively set up and use local
composites, etc.

I don't know what reference materials are available
for this sort of thing.
Also, I neglected to point out in the last message
that, AFAIK, you can only use global composites
if you are NOT using compression. (At the OLAP DML
level, you can't have more than one partition
dimensioned by the same compressed composite
any more than you can have two different variables
dimensioned by the same compressed composite.)
To expand a bit on the issue of parallel
build (actually, parallel load / build):
AWM lets you use the job queue to do your
refresh in multiple sessions simultaneously.
These sessions use multiwriter mode, and
each one basically ACQUIRE's a partition,
loads and builds it, and then UPDATE's it;
then, it moves on to another partition. If two
partitions share a composite, then two sessions
can't ACQUIRE it simultaneously, so global
composites can interfere with parallelism. (If you
are refreshing more than one cube at a time, however,
it's possible that AWM may do one cube in one session
at the same time as it does another cube in a different
session - I'm not sure about this.)
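At the OLAP DML level, one such session's cycle looks roughly like this (a sketch with made-up names - AWM generates the real sequence):
begin
  dbms_aw.execute('aw attach sales_aw multi');    -- multiwriter attach
  dbms_aw.execute('acquire sales_cube_p1');       -- lock one partition
  -- ... load and aggregate just that partition here ...
  dbms_aw.execute('update multi sales_cube_p1');  -- write the partition back
  dbms_aw.execute('commit');
  dbms_aw.execute('release sales_cube_p1');       -- free it for another session
  dbms_aw.execute('aw detach sales_aw');
end;
/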
3. How does partitioning TIME work when time is a
dense (fastest varying) dimension of a variable? My
understanding (under Express) is that the fastest
varying dimensions are all grouped together on a
single database "page". If you partition on TIME, how
will it break up those pages? Does a "page" then

Suppose that you have a cube dimensioned by
TIME, PROD, and GEOG, and TIME is both dense
AND the fastest varying dimension. (TIME is
usually the fastest-varying, but occasionally a
different dimension might be dense and slowest
varying.)
For an unpartitioned cube, when you do your initial
load and build, the data will be grouped so that all
time periods for a given combination of PROD and
GEOG are stored contiguously in a LOB - which
means, physically speaking, that they are all on
the same page or small set of pages. When you
add new values to the TIME dimension, however, and
load some new data for those values, those values
will go at the end of the LOB - meaning they will
go on a different page.
This kind of maintenance can cause several problems.
First, it tends to fragment the index that the engine
uses to keep track of where all the data lives, which
slows down everything. Second, it means that when
you delete a bunch of contiguous TIME dimension
values (like, say, everything older than 1 year), you
may wind up having lots of pages that have some
deleted data and some undeleted data. This is bad
because it means that you need just as many page
reads to access your data AFTER you delete a
bunch of data.
On the other hand, if you partition by TIME, you put
limits on how bad these problems can get. If you
partition at the QUARTER level, then you can be sure
that when you want to delete an old quarter, all the
pages belonging to that QUARTER, and all the index
fragmentation that they represent, will be gone.
As to overhead - there's some, but last I heard it
didn't seem to be a big issue for anyone. You probably
want to avoid partitioning on too fine of a level, though,
for several reasons. First, there's some overhead involved
in building each partition. Second, fine-grained partitioning
does disrupt the "locality" of adjacent time periods -
if you partition on week, then the values for one week will
never be stored in the same page as the values for another
week. Finally, I don't think that AWM allows you to
precompute values for dimension positions "above" the
partitioning level. In other words, if you really need to
precompute quarterly totals, you can't partition on month -
you must partition on quarter or year.
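For instance, rolling off an old quarter might look like this (hypothetical member names; with QUARTER-level partitioning, the whole partition and its pages disappear together):
begin
  -- deleting the members discards the partition that held them
  dbms_aw.execute('maintain time delete ''JAN-04'' ''FEB-04'' ''MAR-04''');
  dbms_aw.execute('update');
  dbms_aw.execute('commit');
end;
/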
Hope this helps.
Dan

Similar Messages

  • Incremental aggregation in OWB

    Hi all
    I have a strange requirement to get an incremental aggregation done and stored in one particular column based on changes to other columns of a dimension. Suppose I have a dimension COUNTRY_DIM as follows
    COUNTRY_SK
    COUNTRY_CODE *( BUSINESS KEY)*
    COUNTRY_LOCATION *( TRIGGERING COLUMN)*
    POPULATION *( TRIGGERING COLUMN)*
    COLUMN_INDICATOR
    START_DATE
    END_DATE
    My source is
    COUNTRY_CODE
    COUNTRY_LOCATION
    POPULATION
    While loading the dimension, if there is a change in either country location or population, the column indicator will be set accordingly.
    Case 1: If there is no change, or a new record is being loaded into the dimension, the column indicator will be 0.
    Case 2: If the country location has changed for a particular code, say GB, the column indicator should be set to 1.
    Case 3: If both country location and population have changed, the column indicator needs to be set to 2.
    This seems to be an incremental aggregation where each triggering column is checked for a change and the indicator set appropriately. I have done this in Informatica, where we can easily set a variable within an expression, but I am not sure how we can do this in OWB.
    Any ideas?
    Birdy

    This seems to be an incremental aggregation where each triggering column is checked for a change and the indicator set appropriately. I have done this in Informatica, where we can easily set a variable within an expression, but I am not sure how we can do this in OWB.
    Here also you can take an expression operator and, in an output group column (say INDICATOR), define the logic:
    case when ( LEAD(COUNTRY_LOCATION, 1, 0) OVER (ORDER BY COUNTRY_LOCATION) = COUNTRY_LOCATION AND
                LEAD(POPULATION, 1, 0) OVER (ORDER BY POPULATION) = POPULATION )
         then '0'
         when ( LEAD(COUNTRY_LOCATION, 1, 0) OVER (ORDER BY COUNTRY_LOCATION) <> COUNTRY_LOCATION AND
                LEAD(POPULATION, 1, 0) OVER (ORDER BY POPULATION) <> POPULATION )
         then '2'
         when ( LEAD(COUNTRY_LOCATION, 1, 0) OVER (ORDER BY COUNTRY_LOCATION) <> COUNTRY_LOCATION )
         then '1'
    end
    (Mark the answer as helpful or correct if it is (top right).)
    Cheers
    Nawneet
    Edited by: Nawneet on Jun 14, 2010 5:58 AM

  • Link aggregation - performance overhead?

    Does anybody know if Solaris link aggregation incurs any performance degradation compared with non-redundant network connections?
    We've recently upgraded a client system and have enabled link aggregation to bind two interfaces (bge) to a logical aggregated interface.
    Apart from the server hardware upgrade, which brought a change from ce to bge interfaces, this is the only other significant network change.
    On this heavily used system, database network performance has degraded significantly, and being the only significant network change I'm wondering whether link aggregation could be a cause.
    Reading through Sunsolve articles and man pages, there doesn't appear to be anything categorically stating aggregation imposes a cost overhead.
    However, I'd be interested to hear from others if they've experienced this.
    For what it's worth, this is Solaris 10 10/09 on an Ultrasparc VII.

    So many considerations when you talk about performance: What system hardware is in use? How many interfaces? Is this a CoolThreads "T" server (where single-threaded programs don't run as well as multi-threaded ones)? What do you mean by "database network performance has degraded" - does the DB need to be tuned or adjusted for this? If using add-on cards, are they in the most optimal slot (on some platforms), and do you stay within the recommended number of interface boards supported by the system? A different storage array or config for your DB?
    when cases like these come in, there typically is a discussion that needs to take place around those questions as well as what/when/where and expectations like those described here: http://blogs.sun.com/hippy/entry/what_s_the_answer_to
    also understand that link aggregation does not necessarily mean that you will see the load balanced evenly between 2 or more aggregated interfaces. The default policy is "L4", which decides the outbound interface by TCP and UDP info found in the packet, not by simple source/destination or MAC address hashing, although you can set those with -P if you want. You can also set multiple policies (-P L2,L3).
    So it /could/ be related to the aggregation change you made, but you also upgraded this DB machine from some other system, so there ARE other differences. Were you using link aggregation on the previous system or Sun Trunking? What type of CPU was that other system?
    that's all I can think of, but hopefully you get the idea.

  • Incremental aggregation using dbms_awm.aggregate_awcube

    Is it possible to process an aggregation incrementally?
    I use the dbms_awm.aggregate_awcube procedure to do it.
    Regards

    You can save the XML templates in a server directory (one accessible through an Oracle DB directory alias), and then use PL/SQL to drop and recreate the AW from those XML files. I did that at a client site and it worked out very well. No Java or OLAP tooling was used - just a PL/SQL procedure. Every night, before the load, the AW$... table was dropped, recreated, and then loaded - all through PL/SQL. At this moment, I don't have access to that code. I will look for it.
    - Nasar
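    I don't have Nasar's code either, but the file-reading half of that approach can be sketched like this (directory, file, and AW names are hypothetical; the step that applies the template to rebuild the AW depends on your release):
    declare
         v_xml      clob;
         v_bfile    bfile   := bfilename('AW_TEMPLATES', 'units_cube.xml');
         v_dest_off integer := 1;
         v_src_off  integer := 1;
         v_lang     integer := dbms_lob.default_lang_ctx;
         v_warn     integer;
    begin
         dbms_lob.createtemporary(v_xml, true);
         dbms_lob.open(v_bfile, dbms_lob.lob_readonly);
         dbms_lob.loadclobfromfile(v_xml, v_bfile, dbms_lob.lobmaxsize,
                                   v_dest_off, v_src_off,
                                   dbms_lob.default_csid, v_lang, v_warn);
         dbms_lob.close(v_bfile);
         -- drop the stale AW here, then recreate it from v_xml and reload
         dbms_aw.execute('aw delete my_aw');
    end;
    /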

  • Exception Aggregation - Performance

    Guys,
    We have a query with ten calculated key figures, all of which have exception aggregation of summation at the material level. Currently, for just one plant, the query takes 8 minutes, which is far too slow.
    These 10 CKFs calculate different types of stock values; all of them have summation as the exception aggregation. The price is calculated on the fly using a replacement-path attribute value. Any suggestions on how to improve the performance of the query, or any other approach to handle this scenario?
    stock value 1 = quantity1 * price1
    stock value 10 = quantity10 * price10
    Please let me know if you need more info.
    Thanks,
    Varma.

    Hi Varma,
    Make sure that you have these Notes in the system:
    1416737
    1387593
    1396485
    Regards,
    Michael

  • 10gR2 Spatial + Partitioning Performance Issues

    I'm trying to get spatial working reasonably with a (range) partitioned table, containing a single 2D point value, all local indexes.
    If I query directly against a single partition, or have a restriction which "prunes" down to one partition, performance is reasonable, and the plan looks like this:
    Rows Row Source Operation
    1 SORT AGGREGATE (cr=3303 pr=0 pw=0 time=1598104 us)
    2596 PARTITION RANGE SINGLE PARTITION: 2 2 (cr=3303 pr=0 pw=0 time=1584119 us)
    2596 TABLE ACCESS BY LOCAL INDEX ROWID FOO PARTITION: 2 2 (cr=3303 pr=0 pw=0 time=1581494 us)
    2596 DOMAIN INDEX FOO_SDX (cr=707 pr=0 pw=0 time=1550312 us)
    If my query is a bit looser, and ends up hitting 2 or more partitions, things degrade substantially, and I end up with a plan like this:
    Rows Row Source Operation
    1 SORT AGGREGATE (cr=10472 pr=0 pw=0 time=6592543 us)
    5188 PARTITION RANGE INLIST PARTITION: KEY(INLIST) KEY(INLIST) (cr=10472 pr=0 pw=0 time=3349053 us)
    5188 TABLE ACCESS BY LOCAL INDEX ROWID FOO PARTITION: KEY(INLIST) KEY(INLIST) (cr=10472 pr=0 pw=0 time=6586055 us)
    5188 BITMAP CONVERSION TO ROWIDS (cr=5955 pr=0 pw=0 time=6539205 us)
    2 BITMAP AND (cr=5955 pr=0 pw=0 time=6539145 us)
    2 BITMAP CONVERSION FROM ROWIDS (cr=514 pr=0 pw=0 time=209088 us)
    5188 SORT ORDER BY (cr=514 pr=0 pw=0 time=206661 us)
    5188 DOMAIN INDEX FOO_SDX (cr=514 pr=0 pw=0 time=158447 us)
    12 BITMAP OR (cr=5441 pr=0 pw=0 time=7052201 us)
    6 BITMAP CONVERSION FROM ROWIDS (cr=2650 pr=0 pw=0 time=3356960 us)
    1000000 SORT ORDER BY (cr=2650 pr=0 pw=0 time=3173026 us)
    1000000 INDEX RANGE SCAN PK_FOO_IDX PARTITION: KEY(INLIST) KEY(INLIST) (cr=2650 pr=0 pw=0 time=193 us)(object id 63668)
    6 BITMAP CONVERSION FROM ROWIDS (cr=2791 pr=0 pw=0 time=3292124 us)
    1000000 SORT ORDER BY (cr=2791 pr=0 pw=0 time=3153435 us)
    1000000 INDEX RANGE SCAN PK_FOO_IDX PARTITION: KEY(INLIST) KEY(INLIST) (cr=2791 pr=0 pw=0 time=1000160 us)(object id 63668)
    Now this is a simple test case. My real situation is a bit more complex, with more data, more partitions, and another table joined in to do the partition pruning, but it comes down to the same issues.
    I've tried various hints, but have not been able to change the plan substantially.
    I've written a similar test case with btree indexes and it does not have these problems, and actually does pretty good with simple MBR type queries.
    I'll post another message with the spatial test case script...
    --Peter

    Here is the test script (kind of long):
    --create a partitioned table with local spatial index...
    create table foo (
         pid number not null, --partition id
         id number not null,
         location MDSYS.SDO_GEOMETRY null --needs to be null for CTAS to work
    )
    PARTITION BY RANGE (pid) (
    PARTITION P0 VALUES LESS THAN (1)
    );
    create index pk_foo_idx on foo(pid, id) local;
    alter table foo add constraint pk_foo
    primary key (pid, id) using index pk_foo_idx;
    INSERT INTO USER_SDO_GEOM_METADATA
    VALUES (
    'FOO',
    'LOCATION',
         mdsys.sdo_dim_array(
              mdsys.sdo_dim_element('Longitude', -180, 180, 50),
              mdsys.sdo_dim_element('Latitude', -90, 90, 50)
         ),
         8307);
    INSERT INTO USER_SDO_GEOM_METADATA
    VALUES (
    'FOO1',
    'LOCATION',
         mdsys.sdo_dim_array(
              mdsys.sdo_dim_element('Longitude', -180, 180, 50),
              mdsys.sdo_dim_element('Latitude', -90, 90, 50)
         ),
         8307);
    INSERT INTO USER_SDO_GEOM_METADATA
    VALUES (
    'FOO2',
    'LOCATION',
         mdsys.sdo_dim_array(
              mdsys.sdo_dim_element('Longitude', -180, 180, 50),
              mdsys.sdo_dim_element('Latitude', -90, 90, 50)
         ),
         8307);
    commit;
    --local spatial index on main partitioned table
    CREATE INDEX foo_sdx ON foo (location)
    INDEXTYPE IS MDSYS.SPATIAL_INDEX
         PARAMETERS ('layer_gtype=POINT') LOCAL;
    --staging tables for exchanging with partitions later
    create table foo1 as select * from foo where 1=2;
    create table foo2 as select * from foo where 1=2;
    declare
         v_lon number;
         v_lat number;
    begin
         for i in 1..1000000 loop
              v_lat := DBMS_RANDOM.value * 20;
              v_lon := DBMS_RANDOM.value * 20;
              insert into foo1 (pid, id, location) values
              (1, i, MDSYS.SDO_GEOMETRY(2001,8307,MDSYS.SDO_POINT_TYPE(v_lon,v_lat,null),NULL,NULL));
              insert into foo2 (pid, id, location) values
              (2, 1000000+i, MDSYS.SDO_GEOMETRY(2001,8307,MDSYS.SDO_POINT_TYPE(v_lon,v_lat,null),NULL,NULL));
         end loop;
    end;
    /
    commit;
    --index everything the same way
    create index pk_foo_idx1 on foo1(pid, id);
    alter table foo1 add constraint pk_foo1
    primary key (pid, id) using index pk_foo_idx1;
    create index pk_foo_idx2 on foo2(pid, id);
    alter table foo2 add constraint pk_foo2
    primary key (pid, id) using index pk_foo_idx2;
    CREATE INDEX foo_sdx1 ON foo1 (location)
    INDEXTYPE IS MDSYS.SPATIAL_INDEX
         PARAMETERS ('layer_gtype=POINT');
    CREATE INDEX foo_sdx2 ON foo2 (location)
    INDEXTYPE IS MDSYS.SPATIAL_INDEX
         PARAMETERS ('layer_gtype=POINT');
    exec dbms_stats.gather_table_stats(user, 'FOO', cascade=>true);
    exec dbms_stats.gather_table_stats(user, 'FOO1', cascade=>true);
    exec dbms_stats.gather_table_stats(user, 'FOO2', cascade=>true);
    alter table foo add partition p1 values less than (2);
    alter table foo add partition p2 values less than (3);
    alter table foo exchange partition p1 with table foo1 including indexes;
    alter table foo exchange partition p2 with table foo2 including indexes;
    drop table foo1;
    drop table foo2;
    --ok, now let's run some queries
    set timing on
    alter session set events '10046 trace name context forever, level 12';
    --easy one, single partition  (trace ET=0.18s)
    select count(*) from (
         select d.pid, d.id
         from foo partition(p1) d
         where
         sdo_filter(d.location, SDO_geometry(
              2003,8307,NULL,
              SDO_elem_info_array(1,1003,3),
              SDO_ordinate_array(0.1,0.1, 1.1,1.1))
         ) = 'TRUE'
    );
    Rows Row Source Operation
    1 SORT AGGREGATE (cr=3303 pr=0 pw=0 time=1598104 us)
    2596 PARTITION RANGE SINGLE PARTITION: 2 2 (cr=3303 pr=0 pw=0 time=1584119 us)
    2596 TABLE ACCESS BY LOCAL INDEX ROWID FOO PARTITION: 2 2 (cr=3303 pr=0 pw=0 time=1581494 us)
    2596 DOMAIN INDEX FOO_SDX (cr=707 pr=0 pw=0 time=1550312 us)
    --partition pruning works for 1 partition (trace ET=0.18s),
    --uses pretty much the same plan as above
    select count(*) from (
         select d.pid, d.id
         from foo d
         where
         d.pid = 1 and
         sdo_filter(d.location, SDO_geometry(
              2003,8307,NULL,
              SDO_elem_info_array(1,1003,3),
              SDO_ordinate_array(0.1,0.1, 1.1,1.1))
         ) = 'TRUE'
    );
    --here's where the trouble starts (trace ET=6.59s)
    select count(*) from (
         select d.pid, d.id
         from foo d
         where
         d.pid in (1,2) and
         sdo_filter(d.location, SDO_geometry(
              2003,8307,NULL,
              SDO_elem_info_array(1,1003,3),
              SDO_ordinate_array(0.1,0.1, 1.1,1.1))
         ) = 'TRUE'
    );
    Rows Row Source Operation
    1 SORT AGGREGATE (cr=10472 pr=0 pw=0 time=6592543 us)
    5188 PARTITION RANGE INLIST PARTITION: KEY(INLIST) KEY(INLIST) (cr=10472 pr=0 pw=0 time=3349053 us)
    5188 TABLE ACCESS BY LOCAL INDEX ROWID FOO PARTITION: KEY(INLIST) KEY(INLIST) (cr=10472 pr=0 pw=0 time=6586055 us)
    5188 BITMAP CONVERSION TO ROWIDS (cr=5955 pr=0 pw=0 time=6539205 us)
    2 BITMAP AND (cr=5955 pr=0 pw=0 time=6539145 us)
    2 BITMAP CONVERSION FROM ROWIDS (cr=514 pr=0 pw=0 time=209088 us)
    5188 SORT ORDER BY (cr=514 pr=0 pw=0 time=206661 us)
    5188 DOMAIN INDEX FOO_SDX (cr=514 pr=0 pw=0 time=158447 us)
    12 BITMAP OR (cr=5441 pr=0 pw=0 time=7052201 us)
    6 BITMAP CONVERSION FROM ROWIDS (cr=2650 pr=0 pw=0 time=3356960 us)
    1000000 SORT ORDER BY (cr=2650 pr=0 pw=0 time=3173026 us)
    1000000 INDEX RANGE SCAN PK_FOO_IDX PARTITION: KEY(INLIST) KEY(INLIST) (cr=2650 pr=0 pw=0 time=193 us)(object id 63668)
    6 BITMAP CONVERSION FROM ROWIDS (cr=2791 pr=0 pw=0 time=3292124 us)
    1000000 SORT ORDER BY (cr=2791 pr=0 pw=0 time=3153435 us)
    1000000 INDEX RANGE SCAN PK_FOO_IDX PARTITION: KEY(INLIST) KEY(INLIST) (cr=2791 pr=0 pw=0 time=1000160 us)(object id 63668)
    --this performs better but is ugly and non-general (trace ET=0.35s)
    select count(*) from (
         select d.pid, d.id
         from foo d
         where
         d.pid = 1 and
         sdo_filter(d.location, SDO_geometry(
              2003,8307,NULL,
              SDO_elem_info_array(1,1003,3),
              SDO_ordinate_array(0.1,0.1, 1.1,1.1))
         ) = 'TRUE'
         UNION ALL
         select d.pid, d.id
         from foo d
         where
         d.pid = 2 and
         sdo_filter(d.location, SDO_geometry(
              2003,8307,NULL,
              SDO_elem_info_array(1,1003,3),
              SDO_ordinate_array(0.1,0.1, 1.1,1.1))
         ) = 'TRUE'
    );

  • Aggregation at the lowest level in the cube

    Hi guys
    I designed a very simple test cube with one dimension (both MOLAP-driven).
    The dimension PRODUCT consists of three levels:
    - Group
    - Category
    - Product_detail
    PRODUCT_SRC table to load PRODUCT dimension:
    PR_GROUP_NAME PR_GROUP_ID PR_CATEGORY_NAME PR_CATEGORY_ID PR_DETAIL_NAME PRODUCT_DETAIL_ID
    dairy 1000 yoghurts 1000000 yoghurt_1 1000000000
    dairy 1000 yoghurts 1000000 yoghurt_2 1000000001
    dairy 1000 yoghurts 1000000 yoghurt_3 1000000002
    candy 1001 cookies 1000001 cookies_1 1000000003
    candy 1001 cookies 1000001 cookies_2 1000000004
    candy 1001 cookies 1000001 cookies_3 1000000005
    beverages 1002 juices 1000002 juice_1 1000000006
    beverages 1002 mineral water 1000003 mineral_water_1 1000000007
    beverages 1002 energy drink 1000004 energy_drink_1 1000000008
    The cube SALES has one measure:
    - Value_of_sales (sum aggr)
    SALES_SRC table to load SALES cube:
    VALUE PROD_ID ID
    1236 1000000002 2
    115 1000000006 3
    1697 1000000005 4
    12 1000000004 5
    168 1000000008 6
    1984 1000000005 7
    9684 1000000004 8
    84 1000000002 9
    8 1000000007 10
    498 1000000006 11
    4894 1000000008 12
    4984 1000000004 13
    448 1000000003 14
    4489 1000000004 15
    13 1000000001 16
    879 1000000004 17
    896 1000000006 18
    4646 1000000007 20
    I created the dimension PRODUCT and a mapping which loaded the data into the dimension. It worked perfectly. The hierarchy was created as I expected.
    Then I created the cube SALES and a mapping which should load the data into the cube. It is a very simple mapping - there are only two items on the canvas:
    - SALES_SRC table
    and
    - SALES cube
    and two lines:
    - from SALES_SRC.VALUE to SALES.VALUE_OF_SALES
    - from SALES_SRC.PROD_ID to SALES.PRODUCT_NAME
    Then I deployed everything and ran the mapping, which loaded the cube. But in my opinion the cube was not populated properly, because no aggregation was performed at the lowest level of the product hierarchy - only the value of the first occurrence of each product was stored. I mean:
    In SALES_SRC we have, for instance:
    VALUE PROD_ID ID
    1236 1000000002 2
    84 1000000002 9
    For me the value in the cube should be 1236 + 84 = 1320, but the value in the cube at the PRODUCT_DETAIL level for yoghurt_3 is only 1236 - the first occurrence of this product in SALES_SRC.
    Why hasn't the data been aggregated at the lowest level of the PRODUCT dimension hierarchy - is this the way OWB does such things?
    Should I manually aggregate the data before loading it into the cube (just use an Aggregator to aggregate the data at the lowest level)? If yes, what about incremental loading of data into the cube (the old value is simply replaced by the new one rather than summed in the cube)?
    In other vendors' data warehouse solutions, the cube is loaded as I expected in such a situation.
    I really don't know what to do. I would really appreciate any help from you.
    Thank you in advance
    Peter

    Hi David
    Thank you very much.
    Now I'm sure that I have to aggregate facts by myself at the lowest level of hierarchy in a dimension.
    Regards
    Peter
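    In other words, a pre-aggregating view (name hypothetical) between SALES_SRC and the cube mapping would do it:
    -- one row per lowest-level dimension member feeding the cube
    create or replace view sales_src_agg as
    select prod_id,
           sum(value) as value
    from   sales_src
    group  by prod_id;
    -- yoghurt_3 (1000000002) now arrives as a single row: 1236 + 84 = 1320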

  • Aggregating Cube dimensioned by Compressed Composite

    What is the optimum way to aggregate cubes dimensioned by compressed composites using DBMS_AW.EXECUTE? I am not able to figure out why the DBMS_CUBE package performs better than DBMS_AW.
    Test results from the sample Global Schema:
    Load 5 records and run the Aggregation using the following method (takes 2 seconds):
    exec DBMS_CUBE.BUILD('GLOBAL.UNITS_CUBE USING(SOLVE)','C',false,1,true,true,false);
    Load 5 records and run the Aggregation using the following method (takes 17 seconds):
    exec DBMS_AW.EXECUTE('aggregate units_cube_stored using units_cube_solve_aggmap;');
    Page 853 in OLAP DML Reference 11g Guide states:
    Aggregating Variables Dimensioned by Compressed Composites
    Keep the following points in mind when designing an aggregation specification for a
    variable dimensioned by a compressed composite:
    ■ RELATION statements in the aggregation specification must be coded following
    the guidelines given in "RELATION Statements for Compressed Composites" on
    page 9-52.
    ■ There is no support for parallel aggregation. Instead, use multiple sessions to
    compute variables or partitions that have their own compressed composites.
    ■ If possible, Oracle OLAP automatically performs incremental aggregation when
    you reaggregate a variable dimensioned by the compressed composite. In other
    words, Oracle OLAP determines what changes have occurred since the last
    aggregation, determines the smallest region of the variable that needs to be
    recomputed, and recomputes only that region.
    Consequently, there is no support for explicit incremental aggregation. You cannot
    aggregate a variable dimensioned by a compressed composite if the dimension
    status of the variable is limited. The status of the variable's dimensions must be
    ALLSTAT for the aggregation to succeed. You can, however, partition using a
    dense dimension with local compressed composites. In this way you can
    aggregate only those partitions that contain new data.

    One difference is that DBMS_CUBE will look to see if the cube needs solving before calling aggregate. So if you call
    exec DBMS_CUBE.BUILD('GLOBAL.UNITS_CUBE USING(SOLVE)','C',false,1,true,true,false);
    several times in a row, I could believe that it would become faster after the initial solve.
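    That is easy to observe - run the same build twice in a row with timing on; the second call should find nothing left to solve:
    SET TIMING ON
    -- first call performs the real solve
    exec DBMS_CUBE.BUILD('GLOBAL.UNITS_CUBE USING(SOLVE)','C',false,1,true,true,false);
    -- second call should return much faster
    exec DBMS_CUBE.BUILD('GLOBAL.UNITS_CUBE USING(SOLVE)','C',false,1,true,true,false);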

  • Re aggregation when a member is deleted from dimension

    Hi All,
    I am aware that deleting a member from a non-partitioned dimension of a cube will trigger a re-aggregation.
    The behaviour I am seeing is that this re-aggregation takes almost as long as a full initial solve. For example, a full initial solve takes around 30 minutes, whereas re-aggregating the cube after a dimension member is removed takes roughly 28 minutes. Re-aggregating the cube again right after that is quick, around 2 minutes.
    The member that was removed from the dimension does not have any data associated with it in the cube, so I was expecting the re-aggregation to be quick, as it does not affect any of the existing aggregated data.
    Could someone explain this behaviour?
    The cube is compressed and is partitioned by Time Dimension. I am on Oracle 11.2.0.2
    Thanks

    There certainly can be a performance difference between 'Fast Solve' and 'Complete'. When no dimension members are changed, the fast solve will usually be quicker because it engages incremental aggregation. But your situation is different because you are changing dimension members. In theory this should only impact the latest partition, but it does not, due to some known bugs. Here are two that I believe are relevant to your case. Neither is publicly visible at this point.
    BUG 12536825 - CHANGED RELATION WON'T RETURN THE RIGHT VALUE
    Bug 11934210 - CC USES FULL BUILD INSTEAD OF INCREMENTAL WHEN RELATION CHANGED
    The RELATION in both cases refers to the parent-child relationship in the dimension. When you add or remove a member, this relation is changed. The effect of bug 12536825 is that partitions that are not really involved (because there is no data for the changed member) are re-aggregated anyway. The effect of bug 11934210 is that these partitions can get fully reaggregated even though no data has changed in that partition.
    These bugs are not (as of writing) fixed in any public patch, but you may be able to get a one-off fix if this is seriously impacting your performance. I would open an SR describing the problem. You can refer to my name, the bugs above, and this post so that it will be properly forwarded.

  • Int not increased when incremented in method call

    Hi,
    Could someone please explain why my int value is not increased in value when the increment is performed within a method call?
    private void printString(int i) {
        if (i > 5) {
            System.out.println("Greater than 5!");
        } else {
            // i++ passes the OLD value of i, so the method recurses
            // with the same argument; printString(i + 1) would advance it
            printString(i++);
        }
    }
    Thanks

    jverd wrote:
    masijade. wrote:
    It is incremented, just after the print, as pointed out above.
    No, it's incremented before the printString method is recursively called, but after the argument's value (the value of the expression i++) has been determined.
    As it stands, if he calls that method with i <= 5, it will recursively call itself with the same value until the stack blows up.
    Yes, yes, I know. Stupid me. The print just simply doesn't get the incremented value.
    P.S. I didn't realise it was a recursive method. I saw "print" and thought it was simply an SOP (you know what they say, that you see what you expect to see). ;-)

  • Warning ALSB Statistics Manager BEA-473007 Aggregator did not receive statistics from ...

    Hi,
    I am using a cluster with osb_server1 and osb_server2. While starting the servers, I see the error below on Managed Server 2 (osb_server2) but only a warning on Managed Server 1 (osb_server1).
    Warning on managed server1(osb_server1)
    <Warning> <ALSB Statistics Manager> <BEA-473007> <Aggregator did not receive statistics from [osb_server2] for the aggregation performed for tick 1855320.>
    Error on managed server2(osb_server2)
    <Nov 24, 2011 11:23:00 AM UTC> <Error> <ALSB Statistics Manager> <BEA-473003> <Aggregation Server Not Available. Failed to get remote aggregator
    java.rmi.UnknownHostException: Could not discover URL for server 'osb_server1'
    at weblogic.protocol.URLManager.findURL(URLManager.java:145)
    at com.bea.alsb.platform.weblogic.topology.WlsRemoteServerImpl.getInitialContext(WlsRemoteServerImpl.java:94)
    at com.bea.alsb.platform.weblogic.topology.WlsRemoteServerImpl.lookupJNDI(WlsRemoteServerImpl.java:54)
    at com.bea.wli.monitoring.statistics.ALSBStatisticsManager.getRemoteAggregator(ALSBStatisticsManager.java:291)
    at com.bea.wli.monitoring.statistics.ALSBStatisticsManager.access$000(ALSBStatisticsManager.java:38)
    Truncated. see log file for complete stacktrace
    Please provide your solutions here.
    Thanks


  • RMAN: Incremental backup very slow

    Hi All,
    We have a data warehouse database of around 7 TB. Incremental backup performance is extremely poor - it takes approximately 14 hours to complete. We have also enabled block change tracking, but still failed to meet the target.
    Below are the DB info & RMAN configuration parameters:
    DB: 11.1.0.6
    OS: Linux 2.6.18-128.el5 x86_64
    System: 16 processors with two threads per CPU, i.e. 32 logical processors
    CONFIGURE RETENTION POLICY TO REDUNDANCY 5;
    CONFIGURE BACKUP OPTIMIZATION ON;
    CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
    CONFIGURE CONTROLFILE AUTOBACKUP ON;
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/d01copy/control_bkp/autobackup_control_file%F';
    CONFIGURE DEVICE TYPE DISK PARALLELISM 13 BACKUP TYPE TO COMPRESSED BACKUPSET;
    CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE MAXSETSIZE TO UNLIMITED; # default
    CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
    CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
    CONFIGURE COMPRESSION ALGORITHM 'BZIP2';
    CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
    CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u01/app/oracle/product/11.1.0/dbs/snapcf_PRODDB.f'; # default
    Thanks in advance.

    Thanks for the reply.
    Have you configured Compressed backups?
    --> Yes. with below command.
    RMAN> backup as compressed backupset incremental level 1 tag=$v_tag database;
    Have you allocated channels ?
    --> Yes.
    allocate channel backup_disk1 type disk format '$v_bdest/%U' maxpiecesize 10G;
    allocate channel backup_disk2 type disk format '$v_bdest/%U' maxpiecesize 10G;
    allocate channel backup_disk3 type disk format '$v_bdest/%U' maxpiecesize 10G;
    allocate channel backup_disk4 type disk format '$v_bdest/%U' maxpiecesize 10G;
    allocate channel backup_disk5 type disk format '$v_bdest/%U' maxpiecesize 10G;
    What is the Large pool size configured? try to increase.
    --> large_pool_size=1073741824
    Is backup to DISK or TAPE? mentioned in script?
    --> Backup goes to DISK only.
    How is DISK performance?
    --> How can we calculate DISK performance on LINUX?
    Regards,
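    One way to gauge disk throughput from inside the database is the RMAN I/O view, e.g. during or just after a backup runs (a sketch; it reports asynchronous backup I/O only):
    select filename, type, status, effective_bytes_per_second
    from   v$backup_async_io
    order  by effective_bytes_per_second;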

  • OSB 10gR3 Aggregation server in a Cluster configuration

    Hello everybody,
    I am trying to deploy an osb 10gr3 cluster with the following configuration
    *unix_machine_1
    ** AdminServer
    ** osb_server_1
    *unix_machine_2
    ** osb_server_2
    osb_server_1 & osb_server_2 belong to an osb_cluster
    Although all servers start up and are OK,
    I constantly get
    on osb_server_2 log : "Aggregation Server Not Available"
    and
    on osb_server_1 log : "Aggregator did not receive statistics from [osb_server_2] for the aggregation performed for tick xxxxx"
    (of course, in the sbconsole I can see reports only for osb_server_1)
    do you have any idea what I am doing wrong?
    regards
    ./ydes
    PS I am trying to follow the draft guide from http://blogs.oracle.com/fmw/2009/11/13/OSB_WP_HA_Final_Draft.pdf

    Along with the listen address, my problem was that the installation was in production mode and the hosts had more than one IP address assigned.
    The WLS instances were bound (listening) to the wrong IP address.
    Eduardo, thanks for your reply.
    regards
    ./ydes

  • How to parameterize SAP PO Performance Monitoring

    Hello,
    we have upgraded our SAP PI 7.1 to SAP PO 7.4 (Java only). Now I'd like to parameterize the Message Performance Monitoring. I cannot find the parameter that changes the number of days for which the performance data is available. It seems that the default value is 7 days, but I can't find the parameter to change. Please advise how to reset the parameter.
    Thanks for your support.
    kind regards
    Daniel

    Hey Leela,
    thanks for your advice, but unfortunately this isn't the value I'm looking for. The parameter "xiadapter.inbound.persistDuration.default" only defines for how many milliseconds messages and their payload are available.
    I'm searching for the parameter that changes the number of days I can display in the performance monitoring. For example, I can check the performance data from 1.12.2014 until 8.12.2014; on 9.12.2014 the aggregated data is lost.
    Which parameter do I have to change to keep a larger amount of aggregated performance data? I want to be able to see the performance data from 1.12.2014 one month later, not only one week.
    I have added a screenshot of my performance monitoring. In this view I can only see the performance data from the last 7 days (aggregated into days). I want to see the last 31 days in this view.
    Thanks for advice

  • Difference between Compression and Aggregation

    Hi,
    Can anybody explain the difference between compression and aggregation? Performance-wise, which is better? Please explain in detail.
    Thanks,
    Chinna

    Hi,
    suppose you have three characteristics in a cube, say X, Y, Z.
    Records that have the same combination of these characteristics but were loaded with different requests won't get aggregated.
    When you compress the cube, the request number is deleted and the records that have the same combination of these characteristics are aggregated.
    Coming to aggregates: if you build an aggregate on the characteristic 'X', it aggregates the records that have the same value for that particular characteristic.
    For example, say you have the records:
    x1, y1, z1, ... (some key figures)
    x1, y2, z1, ...
    x1, y1, z1, ...
    x3, y3, z3, ...
    If you compress them, you will get three records.
    If you build an aggregate on the characteristic 'X', you will get two records.
    So aggregates give a more aggregated level of data than compression.
    regards,
    haritha.
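    haritha's example can be replayed with throwaway rows in plain SQL to see the counts:
    with f as (
      select 'x1' x, 'y1' y, 'z1' z, 10 kf from dual union all
      select 'x1', 'y2', 'z1', 20 from dual union all
      select 'x1', 'y1', 'z1', 30 from dual union all
      select 'x3', 'y3', 'z3', 40 from dual
    )
    -- "compression": collapse by the full characteristic combination -> 3 rows
    select x, y, z, sum(kf) as kf from f group by x, y, z;
    -- an aggregate on X alone -> 2 rows:
    -- select x, sum(kf) as kf from f group by x;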
