Aggregation operators

Hi All,
I am trying to create a new data warehouse based on the Sales History (SH) example.
I've created 3 dimensions (time, product and customer) and a cube with all the dimensions created.
My cube has only one measure (product_prices) with the following aggregation operators:
- time : SUM
- product : SUM
- customer : SUM
These are the default aggregation operators for my dimensions (I think). What I want to do is change the aggregation operators for product and customer to <NONE> (translated from Portuguese), as I have in the SH example.
The problem is that I don't have the option <NONE> when I try to change the operator using the OLAP Manager (OEM). I tried to create it manually with the CWM package, but, as I am working with OLAP Release 1 (patched to 9.0.1.2.1), it is impossible to do so (only with Release 2 and CWM2...).
Thanks in advance for any help.
Bye,
Mario

The default aggregation operators are currently not used by any part of the product; all runtime query aggregations are always SUM. For now you should leave the settings alone, since the product only supports SUM.

Similar Messages

  • Usage of aggregation operators not supported by OWB

    I want to use the WSUM operator for aggregation in an AW cube. Since WSUM is not supported by OWB I updated the corresponding AGGMAP after deploying the cube to the AW. But whenever I execute the cube load mapping, the AGGMAP is overwritten with what was originally defined in the OWB repository.
    I would expect that only deploying the cube would overwrite my manual updates but not the execution of a mapping.
    Any explanations for that?? Other workarounds to use aggregation operators not supported by OWB? Do I really have to create another AGGMAP not touched by the mapping?
    Btw, we are using OWB10gR2.

    To create the XML you can use the AWXML Java API directly and get a string from the AWXML API representing the XML. This could be done from an OWB expert, with a procedure generated from it.
    You could hold the XML containing the ALTER CUBE and aggregation/solve definition in a procedure (in a table of VARCHARs, for example); the procedure would then be deployed and executed after the cube is deployed, to deploy the metadata.
    Then the solve can be run as a post-mapping trigger in the map.

  • Different aggregation operators for the measures in a compressed cube

    I am using OWB 10gR2 to create a cube and its dimensions (deployed into a 10gR2 database). Since the cube has 11 dimensions I set all dimensions to sparse and the cube to compressed. The cube has 4 measures; two of them have SUM as the aggregation operator for the TIME dimension, while the other two should have AVERAGE (or FIRST). I have SUM for all other dimensions.
    After loading data into the cube for the first time I realized that the aggregation for the TIME dimension was not always correct (although it sometimes was). It was really strange, because the aggregated values were either correct (for SUM and for AVERAGE) or only "near" the correct result (e.g. the average of 145.279 and 145.281 came out as 145.282 instead of 145.280, or 122+44+16 gave 180 instead of 182). For all other dimensions the aggregation was OK.
    Now I have the following questions:
    1. Is it possible to have different aggregations for different measures in the same COMPRESSED cube?
    2. Is it possible to have the AVERAGE or FIRST aggregation operator for measures in a COMPRESSED cube?
    For a 10gR1 database the answer would be NO, but for a 10gR2 database I do not know. I could not find the answer, neither in the Oracle documentation nor somewhere else.

    What I found in an Oracle presentation is that in 10gR2 the compressed cube enhancements support all aggregation methods except weighted methods (FIRST, LAST, MINIMUM, MAXIMUM and so on). It is from September 2005, so maybe something has changed since then.
    Regarding your question about the results, I think it is caused by the fact that calculations are made on doubles and then there is a compression, so maybe a little precision is lost :(. I really am curious whether it is because of numeric (precision loss) issues.
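As a quick sanity check on the precision-loss theory, here is a sketch using the numbers reported above. Pure double-precision rounding error turns out to be many orders of magnitude too small to explain a 182-versus-180 discrepancy, which would point to the aggregation step itself rather than floating-point arithmetic.

```python
# Quick check: is double rounding error large enough to explain the
# reported discrepancies (e.g. 122 + 44 + 16 giving 180 instead of 182)?
vals = [122.0, 44.0, 16.0]
total = sum(vals)
print(total)  # 182.0 -- exact: small integers are exactly representable as doubles

# The AVERAGE case: the error from binary representation is tiny.
avg = (145.279 + 145.281) / 2
print(abs(avg - 145.280))  # on the order of 1e-13 or smaller, not 0.002
```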

  • Different aggregation operator for two measures in single cube

    Hi,
    I have three measures, age_month, age_year and pages, for which the aggregation operators should be like this:
    for age_month and age_year
    time - max
    account - sum
    sales org - sum
    for pages
    time - sum
    account - sum
    sales org - sum
    I am creating a MOLAP cube in OWB and deploying to AWM. I can create the above structure in OWB, but when I deploy it to AWM I see that for all dimensions and all measures the aggregation operator is SUM. Of course I can change the aggregation operator at the cube level, but that changes all underlying measures too.
    Also, in the cube's XML I can see that the operator for the two measures is MAX, but the UI shows SUM. After the load, reports also calculate SUM instead of MAX along the time dimension.
    Any help would be highly appreciated.
    Thanks
    Brijesh

    If you have an existing cube (already defined and aggregation set up), then modifying the aggregation behavior, such as changing the order of dimensions or the aggregation operators, is not very robust (it is likely to fail because the internal objects cannot sync up to the modified definition). It is always better to take a backup of the cube definition and drop/rebuild a fresh cube with the new dimension order/aggregation operators.
    How can you have a compressed cube and also set the aggregation property per dimension?
    I was under the impression that if it is a compressed cube, then not only is there a restriction that all measures have the same aggregation properties (defined at cube level), but also the aggregation behavior defined at cube level should use the same operator (note: any one operator out of SUM, MAX or MIN) for all dimensions.
    Edited by: Shankar S. on Dec 19, 2008 11:54 AM
    Perhaps this additional restriction is only required if we intend to use the Cube as a source for relational queries using Cube organized MVs.
    Another way to do this is given below:
    I'm assuming that when you say that you require Max along Time and Sum along other dimensions, you mean that you want LAST (chronological) along time at the leaf level and SUM along the other dimensions. Also assuming that the Time hierarchy has the following levels: DAY, MONTH, QUARTER, YEAR. Big assumption :) as finding out Max along time requires the help of the fact/measure, whereas finding out Last along time requires us to use the dimension (along with dimension metadata) alone.
    Define 1 Cube: Cube1 ... Structure: Compressed Composite, datatype: DECIMAL
    Set Aggregation: SUM along all dimensions
    Create 3 stored measures ( age_m_sto, age_y_sto and pages)
    You may want to set description for age_m_sto/age_y_sto as "******** INTERNAL: DO NOT USE ******** Stored Measure used for age_month/age_year"
    Create 2 calculated measures - age_month and age_year defined as follows ...
    OLAP expression to be given in AWM:
    <age_month>: OLAP_DML_EXPRESSION('cube1_age_m_sto(time if time_levelrel eq ''DAY'' then time else statlast(limit(time to bottomdescendants using time_parentrel time(time time))))', DECIMAL)
    <age_year>: OLAP_DML_EXPRESSION('cube1_age_y_sto(time if time_levelrel eq ''DAY'' then time else statlast(limit(time to bottomdescendants using time_parentrel time(time time))))', DECIMAL)
    NOTE: The calculated measure performs the LAST along time action using the stored measure. For every higher level time dimension member, it reaches along the dimension to the last leaf level member and reads the stored measure for that lowest level time member.
    Map and Maintain Cube.
    From the SQL Cube View: Use only the columns corresponding to the stored measure pages and the calculated measures age_month, age_year (ignore the columns corresponding to age_m_sto, age_y_sto).
    HTH
    Shankar

  • How to setup Dimension Attributes in ERPI

    Hi All,
    I am currently in the process of setting up metadata rules in ERPI to extract hierarchies from EBS into EPMA Planning and HFM applications. Our hierarchies are already set up in these applications, but we need to replace this process with ERPI. I know ERPI does give the capability of handling some attributes, such as Data Storage Parent, Data Storage, Expense Reporting, Account Type, and Time Balance for balance sheet and income statement, OR using system defaults.
    My question is: how do I define these attributes in a way that keeps the ones that are currently in the system? How about the +/- aggregation operators, Two Pass Calculation, Smartlist, Data Type, etc.? If I use system defaults, will they overwrite the current ones?
    Please advise. I need to be able to keep the current attributes as they are and just update the metadata piece.
    Thanks Everyone. Any feedback will be greatly appreciated.

    Refer to pages 81-82 of the ERPI Administrator Guide (11.1.2.2)
    http://www.oracle.com/technetwork/middleware/epm/documentation/index.html

  • Distinct Count doesn't return the expected results

    Hi All,
    I was fighting a little trying to implement a Distinct Count measure over an account dimension in my cube. I read a couple of posts related to that and I followed the steps posted by the experts.
    I could process the cube but the results I'm getting are not correct. The cube is returning a higher value compared to the correct one calculated directly from the fact table.
    Here are the details:
    Query of my fact table:
    select distinct cxd_account_id,
              contactable_email_flag,
              case when recency_date>current_date-365 then '0-12' else '13-24' end RECENCY_DATE_ROLLUP,
              1 QTY_ACCNT
    from cx_bi_reporting.cxd_contacts
    where cxd_account_id<>-1 and recency_date >current_date-730;
    I have the following dimensions:
         Account (with 3 different hierarchies)
         Contactable Email Flag (Just 3 values, Y, N, Unknown)
         Recency_date (Just dimension members)
    All dimensions are sparse and the cube is a compressed one. I defined "MAXIMUM" as aggregate for Contactable Email flag and Recency date and at the end, SUM over Account.
    I saw that there is a patch to fix an issue when different aggregation rules are implemented in a compressed cube, and I asked the DBA folks to apply it. They told me that the patch cannot be applied because we already have a more recent version installed (the patch is for 11.2.0.1).
    These are the details of what we have installed:
          OLAP Analytic Workspace       11.2.0.3.0 VALID
          Oracle OLAP API 11.2.0.3.0 VALID
          OLAP Catalog 11.2.0.3.0 VALID
    Is there any other patch that needs to be applied to fix this issue? Or it's already included in the version we have installed (11.2.0.3.0)?
    Is there something wrong in the definition of my fact table and that's why I'm not getting the right results?
    Any help will be really appreciated!
    Thanks in advance,
    Martín

    I'm not sure I would have designed the dimensions/cubes the way you did, but there is another method by which you can obtain distinct counts.
    It basically relies on the basic OLAP DML expression language and can be put in a calculated measure, or you can create two calculated measures to contain each specific result. I use this method to calculate distinct counts when I want to calculate averages, etc.
    IF account_id ne -1 and (recency_date GT today-365) THEN -
    CONVERT(NUMLINES(UNIQUELINES(CHARLIST(Recency_date))) INTEGER)-
    ELSE IF account_id ne -1 and (recency_date GT today-730 and recency_date LE today-365) THEN -  
    CONVERT(NUMLINES(UNIQUELINES(CHARLIST(Recency_date))) INTEGER)-
    ELSE NA
    This exact code may not work in your case, but I think you can get the gist of the process involved.
    This assumes the aggregation operators are set to the default (SUM), but it may work with how you have them set.
    Regards,
    Michael Cooper
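The CHARLIST/UNIQUELINES/NUMLINES chain above is essentially a distinct count over a list of values: build the list, de-duplicate it, count what remains. A rough Python equivalent of that idea (an illustration of the concept, not a translation of the OLAP DML; the sample dates are made up):

```python
# Rough Python analogue of NUMLINES(UNIQUELINES(CHARLIST(...))):
# collect the values, de-duplicate, count.
def distinct_count(values):
    return len(set(values))

recency_dates = ["2013-01-05", "2013-01-05", "2013-03-10", "2014-02-01"]
print(distinct_count(recency_dates))  # 3
```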

  • Generating Derived Table

    Hi,
    I've just created in BO Designer 6.5 a derived table for the calculation of an aggregate function.
    The table is structured in the following way (for example):
    SELECT SUM(ColumnCount) AS Sum_Distinct_NDG
    FROM (SELECT COUNT(DISTINCT cod_sample) AS ColumnCount
               FROM table.sample) DTBL
    In the derived table I have only one numeric column, and I cannot join it with my fact table (table.sample).
    I thought to populate the derived table also with an "alias" of "table.sample.cod_sample" in order to join them; am I right? Anyway... I don't know how to do it.
    Can anybody help me to go on?
    Thanks in advance
    Riccardo

    Hi Riccardo,
    Assuming you have a table A with a column A1 and you want to count the distinct entries in this column, your SQL statement will look like this:
    select count(distinct A.A1) from A
    Assuming that the column A1 contains the following data:
    A1
    1
    1
    2
    2
    3
    Your SQL statement returns only one row with just one column:
    select count(distinct A.A1) from A
    3
    In your example you try then to build the sum of a table with a single line:
    select sum(B.alias1) from ( select count(distinct A.A1) as alias1 from A ) B
    which does not really make sense.
    Generally it does not make sense to apply two aggregation operators to the same database field at once.
    What is your high-level requirement here? In order to count the distinct entries, the distinct count operator should be enough.
    Regards,
    Stratos
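Stratos's point can be verified with a tiny in-memory example (SQLite here, standing in for the actual database): the inner query already collapses the table to a single row, so the outer SUM returns exactly the same value.

```python
import sqlite3

# Build the sample table A(A1) from the post: 1, 1, 2, 2, 3
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE A (A1 INTEGER)")
con.executemany("INSERT INTO A VALUES (?)", [(1,), (1,), (2,), (2,), (3,)])

# The distinct count alone...
inner = con.execute("SELECT COUNT(DISTINCT A1) FROM A").fetchone()[0]

# ...and the SUM wrapped around it: the wrapper adds nothing,
# because it sums a one-row result.
outer = con.execute(
    "SELECT SUM(alias1) FROM (SELECT COUNT(DISTINCT A1) AS alias1 FROM A)"
).fetchone()[0]

print(inner, outer)  # 3 3
```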

  • OR between groups in Row-Restrictions, priority - Universe Design

    Old school Universe Design, no SAP source, just plain Oracle.
    Two Questions here
    1 - Priority in rowrestrictions
    from the documentation:
    "You can specify which restriction to apply to a user that belongs to multiple groups using a universe. For example a user belongs to two groups, Sales with a restriction to view 5000 rows of data, and Marketing to view 10000 rows. When the user refreshes a report, the restriction associated with the lowest level group is applied. In the example above, if the Sales group had order 1 and Marketing had order 2, the restriction from Marketing (10000) would be used."
    I read in the documentation for row restrictions that the restriction at the top of the order list will be the one kicking in, in case a user has two conflicting restrictions.
    Does anyone have experience with this ?
    Does it work ?
    My experience is that it's not working. Both row restrictions get into the SQL with AND between them, as shown:
      AND  ( FLEX_SEGMENT5.FLEX_VALUE IN (030,033,090,805,041,062,048)
      AND  FLEX_SEGMENT5.FLEX_VALUE IN (041,048,062)
    the first line is my new restriction with priority 1,
    the second line is the other restriction with priority 6
    2 - OR between groups
    there is a Restriction option in Manage Row Restrictions where you can specify row restriction combinations using AND or OR.
    I have not been able to get this working.
    Does anyone have a positive experience here? Or have I misunderstood what this actually means?
    I would expect 
      WHERE  ( FLEX_SEGMENT5.FLEX_VALUE IN (030,033,090,805,041,062,048)
      OR FLEX_SEGMENT5.FLEX_VALUE IN (041,048,062)
    What I get is
    ( FLEX_SEGMENT5.FLEX_VALUE IN (030,033,090,805,041,062,048)
      AND  FLEX_SEGMENT5.FLEX_VALUE IN (041,048,062)
    Is this a known bug ?
    I have tried this in WEBI 3.1 and R2

    Hello,
    In fact, Priority is used for universe overloads where only one overload among several might apply. This is the case for connection overloads, table mappings, etc., but not for row restrictions.
    For row restrictions, you have AND/OR aggregation operators. In the Universe Design Tool, you have the option to choose how row restrictions will aggregate (click the "cog" icon):
    + Combine row restrictions using AND
    + Combine row restrictions using AND within group hierarchies and OR between groups
    In the first case, all row restrictions that might apply to a user will be aggregated with AND operators.
    In the second case, row restrictions that are inherited are aggregated with AND operators and the ones coming from the user groups a user belongs to are aggregated with OR operators.
    Example, with the following user group organisation:
    G1
      G11 (G11 belongs to G1 group)
    G2
      G21 (G21 belongs to G2 group)
    and user U belongs to G11 and G21 user groups.
    Final row restriction that applies to the user U is:
    (G1 AND G11) OR (G2 AND G21)
    Hope it helps.
    Cheers
    ~~cas
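The "(G1 AND G11) OR (G2 AND G21)" rule above can be sketched in plain Python: AND within each group hierarchy, OR between the resulting clauses. The group predicates below are hypothetical, not from the post.

```python
# Sketch of "combine with AND within group hierarchies, OR between groups".
# Each entry in `hierarchies` is the chain of restrictions along one
# top-level group path; predicates in a chain are ANDed, chains are ORed.
def row_allowed(row, hierarchies):
    return any(all(pred(row) for pred in path) for path in hierarchies)

g1  = lambda r: r["region"] == "EU"            # hypothetical restriction on G1
g11 = lambda r: r["country"] in ("DK", "SE")   # hypothetical restriction on G11
g2  = lambda r: r["region"] == "US"            # hypothetical restriction on G2
g21 = lambda r: r["state"] == "CA"             # hypothetical restriction on G21

# User U belongs to G11 and G21, so: (G1 AND G11) OR (G2 AND G21)
hierarchies = [[g1, g11], [g2, g21]]

print(row_allowed({"region": "EU", "country": "DK", "state": ""}, hierarchies))  # True
print(row_allowed({"region": "US", "country": "", "state": "NY"}, hierarchies))  # False
```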

  • ORA-12801 with ORA-00600 (urgent)

    Hi dear,
    I just migrated my database from Oracle 8.1.7.4 to Oracle 10g Release 2.
    I am getting the error:
    ORA-12801: error signaled in parallel query server P000
    ORA-00600: internal error code, arguments: [xtycsr3], [], [], [], [], [], [], []
    When I save a new record it works fine, but when I save a record in modify mode it gives the above error.
    I am using Forms 6i.
    O/S: Windows 2000 Server
    Kindly suggest a solution urgently; I am on a production database.
    Regards
    Tarun Mittal

    Symptoms:
    Internal Error may occur (ORA-600)
    ORA-600 [xtycsr3]
    Related To:
    Parallel Query (PQO)
    Description
    Select with GROUP BY may result in ORA-600[xtycsr3]
    on a parallel slave process, if select list contains
    a PLSQL function call on aggregation operators.
    Eg:
    SELECT 1, b.x, xty_func( max(1),SUM(100))
    FROM xty b
    GROUP BY 1, b.x;
    The full bug text (if published) can be seen at Bug 2459355
    This link will not work for UNPUBLISHED bugs.

  • Workspace manager vs. shadow tables

    Hi,
    I have the requirement to track any changes (insert/update/delete) on db tables.
    At the end the user should be able to view the change history of records in the GUI.
    The two possible methods are (in my opinion):
    a) workspace manger
    b) manage shadow tables manually (with triggers)
    Has anyone experience with workspace manager for this use case?
    What are the pros and contras of the two methods?
    Database is 10gR2
    regards
    Peter


  • Copy a mapping from one project to another project in the same repository

    Hi All,
    I am using OWB 10gR2. I have two projects in this repository, called dev_project and test_project. After completing mapping development in dev_project, I want to move the mappings from dev_project to test_project. I was able to copy the mapping from dev_project to test_project.
    However, when I open the mapping in the test_project project, I can see all the mapping operators, like sources, targets and all other operators. But the filter, join and aggregator operators don't have their properties (i.e., there are no filter conditions or join conditions).
    Please help me with how to carry those filter and join conditions from dev_project to test_project as well.
    Thanks,
    pv

    There is a OWB bug 8267898 described as:
    COPYING MAPPING BETWEEN PROJECTS IN OWB LOOSES ALL THE OPERATOR PROPERTIES
    Try to apply patch:
    Patch 8289030 - PSE FOR BASE BUG 8267898 ON TOP OF 10.2.0.4 FOR WINDOWS 32BIT (215)

  • Rolling up multiple leaves on a dimension into a single measure

    I'm sure this is a common problem but I cannot find any talk of it with my Google searching.
    The problem I am having is when I need to get a single measure, using an 'Average' aggregate, for multiple leaves in a dimension.
    For example, say I have a geography dimension and I need a single score for the three countries GB, HK and US combined, I might do this (though it won't be good enough):
    select avg(s.score) as score
    from my_cube_view s
    where s.geography in ('GB','HK','US')
    It's not good enough because the avg() needs weighting, since GB, HK and US do not each contain an equal number of facts (their individual scores will already have been averaged by the cube).
    Is there a way for the cube to do this calculation for me? Or should I, when wanting to rollup scores this way, revert to manually averaging up the fact scores?
    Hope that makes sense,
    Dominic
    Edited by: Dominic Watson on Sep 8, 2011 5:02 AM

    You should be able to generate weighted averages. Check the Aggregation Operators section of the OLAP User's Guide to see the different hierarchical aggregation options for averages.
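The weighting issue Dominic describes can be sketched as follows. The data below are made up for illustration: averaging the per-country averages gives a different answer than weighting by how many facts each country contributes, which is exactly why the naive avg() over the cube view is not good enough.

```python
# Made-up leaf facts per country: GB has 4 facts, HK has 1, US has 2.
facts = {"GB": [10, 20, 30, 40], "HK": [100], "US": [50, 60]}

# What the cube view hands back: each country's score already averaged.
country_avgs = {c: sum(v) / len(v) for c, v in facts.items()}

# Naive approach: average the three pre-aggregated averages (unweighted).
naive = sum(country_avgs.values()) / len(country_avgs)

# Correct approach: average over all underlying facts (fact-weighted).
all_facts = [x for v in facts.values() for x in v]
weighted = sum(all_facts) / len(all_facts)

print(naive, weighted)  # the two disagree whenever fact counts differ
```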

  • Analytic Workspace Manager vs Warehouse Builder

    When is it best to use Analytic Workspace Manager over Warehouse Builder to create the OLAP? Please advise.

    We are using OWB to create OLAP because you have your metadata properly defined in the design repository of OWB from where you can deploy to different databases and schemas. We are also using OWB to create tables and other relational objects instead of using SQL Developer or Toad to do so.
    Nevertheless there are some restrictions when using OWB: You cannot create programs with OWB (e.g. for limiting access to certain objects), not all aggregation operators are supported (e.g. the weighted aggregation operators like WSUM are not supported by OWB), you cannot create models, ...
    If you come to these restrictions you could write "after-deployment scripts", i.e. you deploy your dimensions and cubes from OWB and let the scripts do what you could not model with OWB.
    Hope this helps!

  • Access Query

    I have a query that shows each time a mold was used. Is it possible to have this query show only the last time it was used? Also, can I create another that shows the count for each one instead of each individual record?

    Does Mold # determine Model #, i.e. for each value of the former can there be only one value of the latter? This implies a one-to-many relationship type between models and moulds. Or can there be more than one value of Model # for each value of Mold #, i.e. the relationship type is many-to-many?
    If Mold # does determine Model #, and the values in your image tend to suggest this, then it looks to me like the query would be a simple aggregating one. In query design view click on the capital sigma symbol in the design ribbon. Then in the design grid leave the 'Total' row for the Mold # and Model # columns as Group By, but for the Build Date column select Max to show the latest date used per mould, or Count to show the number of times each is used. You can of course do both in the same query by adding Build Date twice to the design grid and using separate aggregation operators in each.
    If, on the other hand, Mold # does not determine Model #, then we will need to see the query's SQL statement to get an idea of its structure. Open the query and switch to SQL view. Copy the SQL statement and paste it into a reply here.
    Ken Sheridan, Stafford, England
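Translated into SQL, the aggregating query Ken describes looks roughly like this (table and column names are invented for illustration; SQLite's in-memory database stands in for Access here):

```python
import sqlite3

# Hypothetical mold-usage table standing in for the Access query's source.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE molds (mold_no TEXT, model_no TEXT, build_date TEXT)")
con.executemany(
    "INSERT INTO molds VALUES (?, ?, ?)",
    [("M1", "A", "2014-01-05"), ("M1", "A", "2014-03-10"), ("M2", "B", "2014-02-01")],
)

# Group By on Mold # and Model #, with Max (latest use) and Count
# (number of uses) both applied to Build Date, as described above.
rows = con.execute(
    "SELECT mold_no, model_no, MAX(build_date), COUNT(build_date) "
    "FROM molds GROUP BY mold_no, model_no"
).fetchall()
print(rows)
```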

  • SDE's Efficiency Depends on Oracle Spatial?

    Hello Everyone,
    I recently heard an argument that if you install ESRI's ArcSDE on top of Oracle Spatial, the server throughput declines drastically.
    How truthful is this statement?

    Srikrishna,
    ESRI's SDE only ever executes primary filter searching against Oracle Spatial tables.
    "ArcSDE uses the Oracle Spatial SDO_FILTER function to perform the primary spatial query. ArcSDE performs secondary filtering of the SDO_GEOMETRY based on the spatial relationship requested by the application"
    Quoted from: Spatial queries, Appendix D Oracle Spatial geometry type, ArcSDE Configuration and Tuning Guide for Oracle, ArcInfo 8.1
    IMHO this would mean that as the Oracle Spatial boys continue the performance and scalability improvements they have shown in 8.1.7 and 9i (via R+ trees, improved query optimisations etc.), ESRI's SDE will become a bottleneck for ESRI-based GIS applications.
    I can't see ESRI changing this attitude to Oracle secondary filtering in a hurry (or at least until their approach is shown to be slow).
    Also, you can't use the aggregation operators in 9i with ArcSDE 8.1 as it stands. Something I doubt will change in a hurry as well.
    regards
    Simon
