Aggregate table

Hi!
I have a problem when I try to use the Aggregate Persistence wizard, but only with some dimension tables.
This is the error message given by Job Manager:
Processed: 1 queries
Encountered 1 errors
[NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred.
[nQSError: 84008] [Aggregate Persistence] Error while processing aggregates (refer previous errors in log).
Statement preparation failed
I created all my dimensions with OWB, so I don't understand why it works with some tables and not with others.
Thanks for your help.

Hi, here is my error:
-------------------- Aggregate Manager (Error): [nQSError: 32003] The object "DATAWARE".."INFOCENTRE"."SA_Action00002041"."K" of type 'TABLE KEY': is missing a list of type 'COLUMN'.
: [nQSError: 84003] [Aggregate Persistence] Checkin failed.
EDIT*
I think I have found the cause. This error occurs when I select the most detailed level of a dimension hierarchy. I have DimA, DimB and a fact table. DimA has 2 hierarchy levels (DimA1 and DimA2) and DimB has 3 (DimB1, DimB2, DimB3). When I try to create aggregate tables using DimA2 or DimB3, I get the error above. How can I solve this?
Edited by: Yannis on 5 juil. 2010 01:42
EDIT 2*
I finally solved my problem by creating a new parent level above the most detailed level of the hierarchy.
Edited by: Yannis on 5 juil. 2010 02:33

Similar Messages

  • Errors when Creating Aggregate Tables in OBIEE 11.1.1.6 with SQL Server

    Hi All,
    I was trying to create an aggregate table in OBIEE 11.1.1.6 against SQL Server. The SQL was generated successfully as below, but when I used NQCMD to execute it, the following error appeared:
    1. SQL for creating Aggregate Table:
    create aggregates
    "ag_Measure"
    for "ASOBI_DTT_Demo"."Measure"("ValidVIPCnt")
    at levels ("ASOBI_DTT_Demo"."日期"."月", "ASOBI_DTT_Demo"."門市品牌"."門市品牌", "ASOBI_DTT_Demo"."門市類別"."門市類別", "ASOBI_DTT_Demo"."內創門市"."內創門市", "ASOBI_DTT_Demo"."門市/倉庫"."門市/倉庫", "ASOBI_DTT_Demo"."門市群組"."門市群組", "ASOBI_DTT_Demo"."門市行政區課"."行政區", "ASOBI_DTT_Demo"."門市地區"."城市")
    using connection pool "ASOBI_DTT_Demo"."ASOBI_System10"
    in "ASOBI_DTT_Demo"."ASOBI"."dbo";
    2. Error Message:
    "ag_Measure"
    for "ASOBI_DTT_Demo"."Measure"("ValidVIPCnt")
    at levels ("ASOBI_DTT_Demo"."日期"."月", "ASOBI_DTT_Demo"."門市品牌"."門市品牌", "ASOBI_DTT_Demo"."門市類別"."門市類別", "ASOBI_DTT_Demo"."內創門市"."內創門市", "ASOBI_DTT_Demo"."門市/倉庫"."門市/倉庫", "ASOBI_DTT_Demo"."門市群組"."門市群組", "ASOBI_DTT_Demo"."門市行政區課"."行政區", "ASOBI_DTT_Demo"."門市地區"."城市
    using connection pool "ASOBI_DTT_Demo"."ASOBI_System10"
    in "ASOBI_DTT_Demo"."ASOBI"."dbo"
    [343][State: 37000] [Microsoft][SQL Server Native Client 10.0][SQL Server]CREATE、DROP or ALTER 陳述式中使用未知的物件類型 'aggregates'。
    Statement execute failed
    Which means "Unknown object type 'aggregates' used in a CREATE, DROP or ALTER statement" in English.
    Can anyone give me a suggestion for this error? Many thanks!

    Hi Martin,
    I guess I was not clear enough. Let me try again.
    How does Aggregate Persistence work in OBIEE?
    Once you are done choosing options in the Aggregate Persistence wizard, it generates a special query.
    What query is it?
    If you look at the query, it is not ANSI-standard SQL (I would say DDL). As you might have noticed, there are no SQL Server datatypes, lengths, keys, constraints, etc. This query can only be understood by the BI Server.
    How do I issue this query?
    Since the logical query can only be understood by the BI Server, it has to be issued to the BI Server engine using a tool such as NQCMD.
    What does issuing this query via NQCMD do?
    The execution steps are as follows, the moment the query is issued via NQCMD:
    Aggregate Persistence wizard generates the query ---> issued to NQCMD ---> NQCMD passes the logical query to the BI Server ---> BI Server parses it and builds the corresponding physical DDL statements ---> issued to the database ---> if successful, the .RPD is automatically updated with the aggregated sources.
    How do I pass the query to BI Server using NQCMD?
    The format of issuing this logical query to BI Server using NQCMD is
    nqcmd -d <Data Source Name> -u <Analytics UserId> -p <Password> -s <command> > output.log
    where
    <Data Source Name>: the DSN name which OBIPS uses to talk to the Oracle BI Server. Yes, it's the very same DSN that can be found in InstanceConfig.xml.
    <Analytics UserID>: any user in OBIEE with admin privileges.
    <Password>: password of that OBIEE user.
    <Command>: the logical SQL command which you already have handy.
    Hope I was good this time..
    Dhar
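As a concrete sketch of the invocation format above (a small Python wrapper; the DSN, user, password and file name are placeholder values, not ones from this thread):

```python
import subprocess  # only needed if you actually execute the command

# Assemble the nqcmd call in the format described above.
# All values below are placeholders for illustration.
cmd = [
    "nqcmd",
    "-d", "AnalyticsWeb",    # DSN the Presentation Server uses to reach the BI Server
    "-u", "weblogic",        # an OBIEE user with admin privileges
    "-p", "password",
    "-s", "create_agg.sql",  # file containing the logical "create aggregates" SQL
]
print(" ".join(cmd))

# To actually run it (requires an OBIEE installation with nqcmd on PATH):
# with open("output.log", "w") as log:
#     subprocess.run(cmd, stdout=log, check=True)
```

The point is only the argument layout; running it needs a live BI Server.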

  • Error when creating aggregate table

    Hello,
    I am creating an aggregate table using the Aggregate Persistence Wizard. When I try to run the batch file, I receive the error: "Could not connect to the Oracle BI Server instance".
    Yet the Oracle BI Server is running, and I am able to run queries in Answers with no connection issues. (Pls see below.)
    Please help.
    Thanks,
    Felicity
    D:\OracleBI\server\Repository>create_agg.bat
    D:\OracleBI\server\Repository>nqcmd -d AnalyticsWeb -u Administrator -p Administrator -s D:\OracleBI\server\Repository\CREATE_AGG.sql
    Oracle BI Server
    Copyright (c) 1997-2009 Oracle Corporation, All rights reserved
    create aggregates
    "ag_SalesFacts"
    for "SupplierSales"."SalesFacts"("Net Weight Shipped","Units Ordered","Units Shipped","Dollars")
    at levels ("SupplierSales"."ProductsDim"."Type", "SupplierSales"."CustomersDim"
    ."SalesRep", "SupplierSales"."PeriodsDim"."Month")
    using connection pool "ORCL"."SUPPLIER CP"
    in "ORCL".."SUPPLIER2"
    create aggregates
    "ag_SalesFacts"
    for "SupplierSales"."SalesFacts"("Net Weight Shipped","Units Ordered","Units Shipped","Dollars")
    at levels ("SupplierSales"."ProductsDim"."Type", "SupplierSales"."CustomersDim"
    ."SalesRep", "SupplierSales"."PeriodsDim"."Month")
    using connection pool "ORCL"."SUPPLIER CP"
    in "ORCL".."SUPPLIER2"
    [10058][State: S1000] [NQODBC] [SQL_STATE: S1000] [nQSError: 10058] A general error has occurred.
    [nQSError: 37001] Could not connect to the Oracle BI Server instance.
    Statement preparation failed
    Processed: 1 queries
    Encountered 1 errors

    This may help you solve the issue: http://forums.oracle.com/forums/thread.jspa?messageID=3661598
    Check the comments in this blog: http://obiee101.blogspot.com/2008/11/obiee-aggregate-persistence-wizard.html
    It deals with user permissions for the database.
    Hope that answers your question.
    Cheers,
    kk

  • Filter not applied in answers because of aggregate tables

    Hi Gurus:
    I am having an issue with filters and aggregate tables.
    I have a report which I am filtering on a dimension.
    Now, if I don't expose that column on the report, it does not filter properly: it hits the aggregate tables, and hence the result is incorrect.
    I checked the physical SQL issued, and it doesn't even have that filter in the WHERE clause. The logical SQL, however, has it.
    If I do expose that column on the report table, then it filters properly, since it hits the base tables.
    I cannot include this column in the aggregated dimension, since it is not part of the hierarchy (snowflaked).
    Any idea why this is happening?
    Please help me.
    - Vinay

    Hi Vinay,
    The hints I gave you are not work-arounds or band-aids for OBIEE. They are features of OBIEE meant for a specific purpose.
    As far as best practice goes, OBIEE is meant strictly to source fully qualified data marts. You cannot expect highly summarized data within a few seconds from OBIEE if you don't have data rolled up and summarized at different levels.
    When we first started using this tool, replacing Actuate a couple of years ago, we thought our users would be happy seeing the reports on a click. But it was worse than Actuate, because we didn't have our data rolled up or aggregated.
    Now our users love the reports, because we have highly aggregated data sources such as Essbase, materialized views, query rewrites, indexes, partitioning, etc., and these are all outside OBIEE.
    Thanks
    Sai

  • Error when running report: "Error in aggregate table for InfoCube"

    Hi Experts
    We had a temporary error for which I would like to find the root cause.
    We were running a workbook based on a multiprovider. For a short period of time (around 10 minutes) we got the following error when we executed the workbook:
    "Error in aggregate table for InfoCube"
    There were no loads or aggregate rollups running on the cubes in the multiprovider.
    I see no short dumps in ST22 either.
    Has anybody seen this error before, and how can I trace how it occurred?
    Thanks in advance.
    Kind regards,
    Torben

    Hi Sneha,
    I suggest you run some RSRV tests.
    Go to transaction RSRV. There you will find tests for aggregates. Just perform them and see whether you get any discrepancies.
    Regds,
    Shashank
    Edited by: Shashank Dighe on Jan 4, 2008 10:51 AM

  • Has anyone managed to configure aggregate tables correctly?

    Hi all,
    Although OBIEE supports the use of aggregate tables in fact tables as an additional source (for performance improvements), has anyone actually correctly configured them for metrics that use AVGs?
    We are seeing that if you create an aggregate table which already stores calculated averages, then when those values are used in further aggregation at higher levels, the results are incorrect: they are based on the pre-calculated averages in the aggregate table, not on the average at the lowest grain of the fact. This is not a problem for SUM, MIN or MAX metrics, but it is for AVG metrics.
    This is partly logical, but it is a serious limitation on the use of aggregate tables. These tables are correct for all the dimensions included in the aggregate, but not when you remove one of those dimensions in an OBIEE request.
    The only workaround we see is to redefine the metric with a logical definition:
    SUM(column containing the values we want averaged) / SUM(column containing the row count)
    At the lowest detail this has no impact, but at the aggregated levels above, it ensures a correctly calculated average.
    Any advice? Opinions? Prior experiences? anything?

    Thanks Wildnight.
    Yes, it seems logical that with SUM everything would work correctly; however, I think this is only one common type of metric in use in any data warehouse. If anyone has any other ideas, let me know.
    There is also the possibility of playing with the aggregate configuration based on dimensions (maybe introducing some clever formula in the 'Others' dimension option), or of defining AVG in the actual metric definition in the column mappings and setting the aggregation rule to None.
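The trap described above is easy to demonstrate outside OBIEE. A minimal Python sketch with invented numbers (no OBIEE API involved) shows that rolling up pre-computed averages disagrees with the SUM(value) / SUM(count) workaround:

```python
# Invented daily sales rows: (region, day, amount).
rows = [("east", 1, 10.0), ("east", 2, 30.0), ("west", 1, 50.0)]

# The aggregate table stores, per region, the pre-computed total and row count.
agg = {}
for region, _day, amount in rows:
    total, count = agg.get(region, (0.0, 0))
    agg[region] = (total + amount, count + 1)

# Wrong: further aggregation by averaging the pre-computed averages.
avg_of_avgs = sum(t / c for t, c in agg.values()) / len(agg)

# Right: the SUM / SUM redefinition, which reproduces the lowest-grain average.
true_avg = sum(t for t, _ in agg.values()) / sum(c for _, c in agg.values())

print(avg_of_avgs)  # 35.0 = (20 + 50) / 2
print(true_avg)     # 30.0 = (10 + 30 + 50) / 3
```

Because SUM is distributive and AVG is not, only the total and the count can safely be re-aggregated after a dimension is dropped.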

  • Aggregate tables have many partitions per request

    We are having some performance issues dealing with aggregate tables and DB partitions. We are on BW 3.5 SP15 and use Oracle DB 9.2.0.6. After some analysis, we can see that for many of our aggregates there are sometimes as many as a hundred partitions in the aggregate's fact table. If we look at the InfoCube itself, there are only a few requests (for example, 10). However, we do delete and reload requests frequently. We understood that there should be only one partition per request in the aggregate (the InfoCube is NOT set up for partitioning by anything other than request).
    We suspect the high number of partitions is causing some performance issues, but we don't understand why they are being created.
    I have even tried deleting the aggregate (all aggregate F tables and partitions were dropped) and reloading, and we still see many more partitions than requests. (We also notice that many of the partitions have a very low record count, often fewer than 10 records.)
    We'd like to understand what is causing this. Could line item dimensions or high cardinality play a role?
    On a related topic:
    We have also seen an awful lot of empty partitions in both the InfoCube fact table and the aggregate fact table. I understand this is probably caused by the frequent deletion and reloading of requests, but I am surprised that the system does not do a better job of cleaning up these empty partitions automatically. (We are aware of program SAP_DROP_EMPTY_FPARTITIONS.)
    I am including some files which show these issues via screenshots and partition displays to help illustrate the issue.
    Any help would be appreciated.
    Any help would be appreciated.
    Brad Daniels
    302-275-1980
    215-592-2219

    Ideally the aggregates should get compressed by themselves; there could be some change runs that have affected the compression.
    Check the following:
    1. See if compressing the cube and rolling up the aggregates will merge the partitions.
    2. What is the delta mode for the aggregates (are you loading deltas for the aggregates, or full loads)?
    3. Aggregates are partitioned like the InfoCube, and since you are partitioning by request, the same is done on the aggregates. Select another partitioning characteristic if possible, because it is recommended that request not be used for partitioning.
    Arun
    Assign points if it helps..

  • How to refresh data in aggregate tables in Oracle BI

    Hello!
    How can I refresh the data in aggregate tables (created by a "create aggregates ..." statement) after the data in the corresponding database tables has been updated?
    We cannot simply use a "delete aggregates" statement and then "create aggregates" again, because "delete aggregates" eliminates all aggregates in a model, and there are several aggregates in our model that need to be refreshed at different times and intervals.
    Thanks

    903830 wrote:
    Hi folks,
    suppose we have table emp01 with 10 records, and we create another table emp02 as
    create table emp02 as select * from emp01;
    now both tables have identical data.
    How can we find out whether the "data" in the two tables is identical?
    Why wouldn't they be identical? You've just created one table as a copy of the other. They are identical in data terms.

  • How to configure an existing aggregate table?

    Generally I have created aggregated fact tables using the wizard, but now I have to use an existing aggregate fact table. Could someone help me configure it?

    If you want to use an existing aggregate fact table, then import the table and create the joins in the physical layer. Add this aggregate table as a logical table source (LTS) of your fact in the BMM and set the levels on its Content tab accordingly.
    Thanks
    Sandeep

  • Multiple Aggregate Tables in OBIEE

    We are experiencing a funky issue with the use of aggregate tables. Unfortunately, I am not seeing any obvious remedy in the documentation or blogs.
    Leveraging the Aggregate Persistence Wizard, we developed 2 aggregates for our data set.
    The 1st is a monthly aggregate with a rolled-up data set, and the 2nd is a daily aggregate with a more limited data set.
    For a set of monthly queries and dashboards both aggregates produce the correct results, but the monthly aggregate does it 10 times faster (under 5 seconds). However, for some reason the BI Server is choosing the daily aggregate instead.
    Any ideas on how we can force, or hint, the BI Server to use the monthly aggregate first when it can? Thanks,
    Michael

    The log, unfortunately, is too big to post in its entirety, but here is an example of the queries that go against the daily aggregate as opposed to the monthly:
    +++Administrator:7a0000:7a0001:----2009/06/10 09:04:48
    -------------------- General Query Info:
    Repository: Star, Subject Area: EnterpriseBI, Presentation: EVSalesReporting
    +++Administrator:7a0000:7a0001:----2009/06/10 09:05:14
    -------------------- Sending query to database named BONARDEV (id: <<6974073>>):
    select D3.c4 as c1,
    D3.c5 as c2,
    D3.c2 as c3,
    D3.c1 as c4,
    D3.c6 as c5,
    D3.c7 as c6,
    D3.c8 as c7,
    D3.c3 as c8
    from
    (select D1.c1 as c1,
    D1.c2 as c2,
    D1.c3 as c3,
    D1.c4 as c4,
    D1.c5 as c5,
    D1.c6 as c6,
    D1.c7 as c7,
    D1.c8 as c8
    from
    (select sum(T31738.Curr_Day00015AE1) as c1,
    D47.c8 as c2,
    D47.c4 as c3,
    T31723.Trade_Clas0001777C as c4,
    D47.c3 as c5,
    D47.c7 as c6,
    D47.c6 as c7,
    D47.c5 as c8,
    ROW_NUMBER() OVER (PARTITION BY T31723.Trade_Clas0001777C, D47.c4 ORDER BY T31723.Trade_Clas0001777C ASC, D47.c4 ASC) as c9
    from
    SA_Trade_C000176D4 T31723,
    SA_Organiz00015413 T31635,
    ag_Sales_Metrics_2 T31738,
    (select distinct D1.c1 + 1 as c1,
    D1.c2 as c2,
    D1.c3 as c3
    from
    (select min(D1.c1) over (partition by D1.c4) as c1,
    min(D1.c2) over (partition by D1.c4, D1.c3) as c2,
    D1.c3 as c3
    from
    (select Case when case D1.c5 when 1 then D1.c3 else NULL end is not null then Rank() OVER ( ORDER BY case D1.c5 when 1 then D1.c3 else NULL end ASC NULLS LAST ) end as c1,
    Case when case D1.c6 when 1 then D1.c3 else NULL end is not null then Rank() OVER ( PARTITION BY D1.c4 ORDER BY case D1.c6 when 1 then D1.c3 else NULL end ASC NULLS LAST ) end as c2,
    D1.c3 as c3,
    D1.c4 as c4
    from
    (select T31704.Date_ID00016FBC as c3,
    T31704.Calendar_Y00014B57 as c4,
    ROW_NUMBER() OVER (PARTITION BY T31704.Calendar_Y00014B57 ORDER BY T31704.Calendar_Y00014B57 DESC) as c5,
    ROW_NUMBER() OVER (PARTITION BY T31704.Calendar_Y00014B57, T31704.Date_ID00016FBC ORDER BY T31704.Calendar_Y00014B57 DESC, T31704.Date_ID00016FBC DESC) as c6
    from
    SA_Day00015E8E T31704
    ) D1
    ) D1
    ) D1
    ) D48,
    (select distinct min(D1.c1) over (partition by D1.c6) as c1,
    min(D1.c2) over (partition by D1.c6, D1.c4) as c2,
    D1.c3 as c3,
    D1.c4 as c4,
    D1.c5 as c5,
    D1.c6 as c6,
    D1.c7 as c7,
    D1.c8 as c8
    from
    (select Case when case D1.c9 when 1 then D1.c4 else NULL end is not null then Rank() OVER ( ORDER BY case D1.c9 when 1 then D1.c4 else NULL end ASC NULLS LAST ) end as c1,
    Case when case D1.c10 when 1 then D1.c4 else NULL end is not null then Rank() OVER ( PARTITION BY D1.c6 ORDER BY case D1.c10 when 1 then D1.c4 else NULL end ASC NULLS LAST ) end as c2,
    D1.c3 as c3,
    D1.c4 as c4,
    D1.c5 as c5,
    D1.c6 as c6,
    D1.c7 as c7,
    D1.c8 as c8
    from
    (select T31704.Calendar_M00014B4D as c3,
    T31704.Date_ID00016FBC as c4,
    T31704.Calendar_Q00014B51 as c5,
    T31704.Calendar_Y00014B57 as c6,
    T31704.Month_End_00014B6D as c7,
    T31704.Calendar_D00014B5B as c8,
    ROW_NUMBER() OVER (PARTITION BY T31704.Calendar_Y00014B57 ORDER BY T31704.Calendar_Y00014B57 DESC) as c9,
    ROW_NUMBER() OVER (PARTITION BY T31704.Calendar_Y00014B57, T31704.Date_ID00016FBC ORDER BY T31704.Calendar_Y00014B57 DESC, T31704.Date_ID00016FBC DESC) as c10
    from
    SA_Day00015E8E T31704
    ) D1
    ) D1
    ) D47
    where ( T31635.Organizati00014BB3 = T31738.Organizati00014BB3 and T31723.Trade_Clas0001777D = T31738.Trade_Clas0001777D and T31635.Business_U00014BC5 = 'EVD' and T31738.Date_ID00016FBC = D48.c3 and D47.c1 = D48.c1 and D47.c3 = '2009 / 05' and (T31723.Trade_Clas0001777C in ('Redemption', 'Sale')) and D47.c2 >= D48.c2 )
    group by T31723.Trade_Clas0001777C, D47.c3, D47.c4, D47.c5, D47.c6, D47.c7, D47.c8
    ) D1
    where ( D1.c9 = 1 )
    ) D3
    order by c8, c2, c1
    +++Administrator:7a0000:7a0001:----2009/06/10 09:05:14
    -------------------- Sending query to database named BONARDEV (id: <<6977524>>):
    select D4.c4 as c1,
    D4.c5 as c2,
    D4.c2 as c3,
    D4.c1 as c4,
    D4.c6 as c5,
    D4.c7 as c6,
    D4.c8 as c7,
    D4.c3 as c8
    from
    (select D1.c1 as c1,
    D1.c2 as c2,
    D1.c3 as c3,
    D1.c4 as c4,
    D1.c5 as c5,
    D1.c6 as c6,
    D1.c7 as c7,
    D1.c8 as c8
    from
    (select sum(T31738.Curr_Day00015AE1) as c1,
    D49.c3 as c2,
    D49.c5 as c3,
    T31723.Trade_Clas0001777C as c4,
    D49.c4 as c5,
    D49.c7 as c6,
    D49.c8 as c7,
    D49.c6 as c8,
    ROW_NUMBER() OVER (PARTITION BY T31723.Trade_Clas0001777C, D49.c5 ORDER BY T31723.Trade_Clas0001777C ASC, D49.c5 ASC) as c9
    from
    SA_Trade_C000176D4 T31723,
    SA_Organiz00015413 T31635,
    ag_Sales_Metrics_2 T31738,
    (select distinct min(D1.c1) over (partition by D1.c4) as c1,
    min(D1.c2) over (partition by D1.c4, D1.c3) as c2,
    D1.c3 as c3
    from
    (select Case when case D1.c5 when 1 then D1.c3 else NULL end is not null then Rank() OVER ( ORDER BY case D1.c5 when 1 then D1.c3 else NULL end ASC NULLS LAST ) end as c1,
    Case when case D1.c6 when 1 then D1.c3 else NULL end is not null then Rank() OVER ( PARTITION BY D1.c4 ORDER BY case D1.c6 when 1 then D1.c3 else NULL end ASC NULLS LAST ) end as c2,
    D1.c3 as c3,
    D1.c4 as c4
    from
    (select T31704.Date_ID00016FBC as c3,
    T31704.Calendar_Y00014B57 as c4,
    ROW_NUMBER() OVER (PARTITION BY T31704.Calendar_Y00014B57 ORDER BY T31704.Calendar_Y00014B57 DESC) as c5,
    ROW_NUMBER() OVER (PARTITION BY T31704.Calendar_Y00014B57, T31704.Date_ID00016FBC ORDER BY T31704.Calendar_Y00014B57 DESC, T31704.Date_ID00016FBC DESC) as c6
    from
    SA_Day00015E8E T31704
    ) D1
    ) D1
    ) D50,
    (select distinct min(D1.c1) over (partition by D1.c8) as c1,
    min(D1.c2) over (partition by D1.c8, D1.c5) as c2,
    D1.c3 as c3,
    D1.c4 as c4,
    D1.c5 as c5,
    D1.c6 as c6,
    D1.c7 as c7,
    D1.c8 as c8
    from
    (select Case when case D1.c9 when 1 then D1.c5 else NULL end is not null then Rank() OVER ( ORDER BY case D1.c9 when 1 then D1.c5 else NULL end ASC NULLS LAST ) end as c1,
    Case when case D1.c10 when 1 then D1.c5 else NULL end is not null then Rank() OVER ( PARTITION BY D1.c8 ORDER BY case D1.c10 when 1 then D1.c5 else NULL end ASC NULLS LAST ) end as c2,
    D1.c3 as c3,
    D1.c4 as c4,
    D1.c5 as c5,
    D1.c6 as c6,
    D1.c7 as c7,
    D1.c8 as c8
    from
    (select T31704.Calendar_D00014B5B as c3,
    T31704.Calendar_M00014B4D as c4,
    T31704.Date_ID00016FBC as c5,
    T31704.Calendar_Q00014B51 as c6,
    T31704.Month_End_00014B6D as c7,
    T31704.Calendar_Y00014B57 as c8,
    ROW_NUMBER() OVER (PARTITION BY T31704.Calendar_Y00014B57 ORDER BY T31704.Calendar_Y00014B57 DESC) as c9,
    ROW_NUMBER() OVER (PARTITION BY T31704.Calendar_Y00014B57, T31704.Date_ID00016FBC ORDER BY T31704.Calendar_Y00014B57 DESC, T31704.Date_ID00016FBC DESC) as c10
    from
    SA_Day00015E8E T31704
    ) D1
    ) D1
    ) D49
    where ( T31635.Organizati00014BB3 = T31738.Organizati00014BB3 and T31723.Trade_Clas0001777D = T31738.Trade_Clas0001777D and T31635.Business_U00014BC5 = 'EVD' and T31738.Date_ID00016FBC = D50.c3 and D49.c1 = D50.c1 and D49.c4 = '2009 / 05' and (T31723.Trade_Clas0001777C in ('Redemption', 'Sale')) and D49.c2 >= D50.c2 )
    group by T31723.Trade_Clas0001777C, D49.c3, D49.c4, D49.c5, D49.c6, D49.c7, D49.c8
    ) D1
    where ( D1.c9 = 1 )
    ) D4
    order by c8, c2, c1

  • Distinct count makes the BI Server choose the wrong aggregate table

    Hi experts,
    I have 4 dimension tables time, store, product and client and one fact table sales.
    The sources for table sales are 3 aggregate tables:
    agg_sales_1: aggregated sales for one product of one client in one store per day
    agg_sales_2: aggregated sales for one product in one store per day (all clients)
    agg_sales_3: aggregated sales for one store per day (all products, all clients)
    You can see that agg_sales_1 has a lot of rows, agg_sales_2 has fewer rows and agg_sales_3 has very few rows (one row per store and day)...
    What I need is: for all stores, the average sales per month (I don't care about products or clients - all of them).
    So I create:
    one fact logical column which has sum(sales), set to month at the time level: total_sales_per_month
    one fact logical column which has count(distinct(date)), set to month at the time level, which gives me how many days with sales there are in a month: '#_of_days_with_sales_in_month'
    and I want average_sales_per_month = total_sales_per_month / '#_of_days_with_sales_in_month'.
    So far so good:
    if in Presentation my report has day and total_sales_per_month, then the server chooses agg_sales_3 (which is the best solution);
    if in Presentation my report has day, total_sales_per_month and '#_of_days_with_sales_in_month', or just average_sales_per_month, then the server chooses agg_sales_1 (which is the worst solution).
    The question is why?
    Another clue:
    if I change the aggregation function from count(distinct()) to count() (which is no good for me), then the server again chooses the good table, agg_sales_3.
    So I'm thinking that the count(distinct()) function is causing this bad behavior...
    Any suggestions please...
    And Happy Holidays
    Thanks
    Nicolae Ancuta

    One of the dimension tables has joins to other fact tables, and the query is routed through unwanted dimension and fact tables. This is happening because of aggregate navigation in the fact sources, with the Content tab set to the detailed level. I'm trying to use aggregate functions...
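The two measures in the question can be checked outside OBIEE with a small Python sketch (the sample rows and names are invented, not from the thread): a COUNT(DISTINCT date) at month level is just the number of distinct sale days, and the average divides the monthly total by it.

```python
# Invented fact rows for one store in one month: (date, sales_amount).
fact = [("2009-05-01", 100.0), ("2009-05-01", 50.0),
        ("2009-05-02", 30.0)]

# sum(sales) rolled up to month level.
total_sales_per_month = sum(amount for _date, amount in fact)

# count(distinct date): days in the month that actually had sales.
days_with_sales = len({date for date, _amount in fact})

average_sales_per_month = total_sales_per_month / days_with_sales
print(total_sales_per_month, days_with_sales, average_sales_per_month)
# 180.0 2 90.0
```

Note why the BI Server cannot use agg_sales_3's day grain here is a separate question; the sketch only pins down what the measures should return.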

  • Creating aggregate tables

    Hi gurus,
    I have a problem with creating aggregate tables. I am using the Oracle Business Intelligence 11g R1 Cookbook for guidance, but I think the author left out some important steps and I cannot create them.
    I have a fact table and a dimension (fact_table, group_product_dim). I summarize them and now I have an aggregate fact table and an aggregate dimension (fact_table_agg, group_product_dim_agg):
    summary:
    fact_table
    fact_table_agg
    group_product_dim
    group_product_dim_agg
    Then I create the physical diagram for the base and aggregate combinations in the physical layer.
    Now I need to move these tables into the BMM. Which combination do I need to bring into the BMM layer? Both, just the basic combination, or just the aggregate combination?
    I move the basic combination (fact_table, group_product_dim) and try to add a second logical table source to the dimension group_product_dim. I create a new logical table source by right-clicking on the group_product_dim logical table and selecting the New Object | Logical Table Source option. This brings up the new Logical Table Source window.
    I select the physical table by clicking the + sign and choose the aggregate dimension table (group_product_dim_agg). Then I do the column mapping on the Column Mapping tab of the same Logical Table Source. Only a few columns are mapped and all the others are unmapped, because the aggregate dimension table doesn't store all the columns of the basic dimension group_product_dim. The last modification I make is on the Content tab of the Logical Table Source.
    I need to define the levels at which this logical table source can satisfy queries. This step is very important; any mistake here will cause the BI Server to use the wrong logical table source for queries.
    Here I have a problem, because I can't select a Logical Level in the "Aggregation content, group by" menu. I can't choose anything other than the default value, Column.
    Tnx,
    best regards

    1.) http://www.amazon.com/Oracle-Business-Intelligence-Cookbook-ebook/dp/B00DHCKTXQ/ref=wl_it_dp_o_pC_nS_nC?ie=UTF8&colid=322X8WO18RFEG&coliid=I13QJ38KK8PYTB
    2.) I created a NEW business model in the BMM and then dragged fact_table and group_product_dim into it. Do I need to drag the aggregate objects into the new business model as well?
    I don't; I drag only fact_table and group_product_dim and create a new Logical Table Source on group_product_dim, where I try to make the new source.
    Can you tell me what I need to do in the background? Do I need to map columns from the dimension to the aggregate dimension?

  • Aggregate table and outdated rows

    hi everybody,
    let's assume the following scenario:
    - one fact table, one aggregator, one target aggregate table with a sum column.
    - the aggregate table's loading type is set to "insert/update"; update matching is done on several columns.
    - the first load is started, the aggregated data is correct, no problems so far
    - one column is updated in the fact table (all rows)
    - now we have a new attribute set which doesn't match any row in the aggregate table, so a new row is inserted
    - the aggregate table now contains two rows, but only one is valid
    How can I avoid this without truncating the aggregate table before loading? Do I have to delete outdated aggregate rows before/after loading, or is there another solution?
    thanks for your help,
    peter
    Edited by: peter_AT on Nov 12, 2009 7:46 AM

    >
    Why can't you just update that column in your target table if you don't need the previous version of that row?
    >
    Thanks for your suggestion.
    But this can cause "ORA-30926: unable to get a stable set of rows in the source tables" - or am I wrong about this?
    A more detailed explanation:
    fact table t_src:
    SQL> select * from t_src;
    S1  S2
    A   B
    The property "match column when updating row" on columns a1 and a2 of aggregation table t_agg is set to "Yes".
    Aggregate table t_agg contains these rows:
    SQL> select * from t_agg;
    A1  A2         NUM
    A   B            1
    Now t_src is updated:
    SQL> update t_src set s1='B' where s1='A';
    1 row updated.
    SQL> select * from t_src;
    S1  S2
    B   B
    After the next load, table t_agg looks like this:
    SQL> select * from t_agg;
    A1  A2         NUM
    A   B            1
    B   B            1
    If I want to update column A1, I have to remove matching from there - which leads to ORA-30926 when loading next time.
    thanks,
    peter
    Edited by: peter_AT on Nov 12, 2009 7:47 AM
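Peter's scenario can be reproduced with a toy in-memory aggregate (a Python sketch; the dict stands in for t_agg and its key for the match columns a1/a2): an insert/update load updates or inserts matching keys, but it never removes a row whose key no longer occurs in the source, so the outdated row survives.

```python
def load(agg, src_rows):
    """Insert/update load: recompute counts from the source and merge
    them into the aggregate on the match columns (a1, a2)."""
    fresh = {}
    for a1, a2 in src_rows:
        fresh[(a1, a2)] = fresh.get((a1, a2), 0) + 1
    agg.update(fresh)  # matching keys updated, new keys inserted...
    return agg         # ...but keys absent from the source are kept.

agg = load({}, [("A", "B")])     # first load
print(agg)                       # {('A', 'B'): 1}

# All source rows are updated (s1 'A' -> 'B'), then the load runs again.
agg = load(agg, [("B", "B")])
print(agg)                       # {('A', 'B'): 1, ('B', 'B'): 1} -- stale row remains
```

Avoiding the stale row therefore needs either a truncate-and-reload, or an explicit delete of aggregate keys that no longer occur in the source, as discussed above.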

  • Logical SQL firing larger aggregate table instead of smaller one

    Hi
    When we process a request containing one particular column alone, along with some time dimension such as month or year, the logical SQL fires against the larger aggregate table instead of the smaller one. Please help us resolve this issue.
    The Oracle BI version we are using is 10.1.3.4.1.
    Thanks.

    Hi,
    Try posting in the OLAP forum.
    OLAP
    Thanks, Mark

  • Aggregate tables in Administration tool

    Hello!
    I have a problem when I want to create aggregate tables.
    I create the query with Aggregate Persistence, but when I run it in Job Manager it keeps running and never finishes.
    Can you help me, please?
    Regards, Karin

    11.5
    Edited by: 914091 on Mar 26, 2012 5:30 AM

  • OBIEE bypasses smaller aggregate table and queries largest aggregate table

    Hello,
    Currently we are experiencing something strange regarding queries that are generated.
    Scenario:
    We have 1 detail table and 3 aggregate tables in the RPD. For this scenario I will only refer to 2 of the Aggregates.
    Aggregate 1 (1 million rows):
    Contains data - Division, Sales Rep, Month, Sales
    Aggregate 2 (13 million rows):
    Contains data - Division, Product, Month, Sales
    Both tables are set at the appropriate dimension levels in the Business Model. Row counts have been updated in the physical layer of the RPD.
    When we create an Answers query that contains Division, Month and Sales, one would think that OBIEE would query the smaller and faster of the two tables. However, OBIEE queries the table with 13 million records, completely bypassing the smaller table. If we make the larger aggregate inactive, then OBIEE queries the smaller table. We can't figure out why OBIEE immediately goes to the larger table.
    Has anyone experienced something such as this? Any help would be greatly appreciated.
    Edited by: gwb on Aug 19, 2009 7:45 AM

    Have you tried changing the sort order of the logical table sources in your logical table?
    http://gerardnico.com/wiki/_media/temp/obiee_logical_table_sources_sort.jpg
    Set Aggregate 1 first.
    Cheers
    Nico
