Model Aggregate tables to speed up query processing

Hi,
Can anyone help me with these topics?
"Model Aggregate Tables to Speed up Query Processing"
"Model Partitions and Fragments to improve application performance and usability"
I am new to these concepts and haven't worked on them before.
Regards,
Arun

Arun wrote:
Thanks. This will definitely help me out. But I am facing an issue opening the repository in Online mode. Could you please tell me how to solve this?
Here the OBIEE server sits on a Unix environment.
I've created a repository on a Windows client (I have installed the OBIEE 11g developer client and the BI Admin tool on my local machine).
I would like to know how to open a repository file in online mode from the client side.
Regards,
Arun

Create a 'BI Server' type ODBC connection in your Windows environment pointing to your Unix box; then when you go to Open -> Online in the Admin tool you will see the entry as an option.
Better to start a new thread when your original question is answered.
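For reference, a minimal sketch of the ODBC DSN settings (the host name is hypothetical, and 9703 is only the usual default Oracle BI Server port, so verify yours):

    Driver   : Oracle BI Server
    Name     : BIServer_Unix
    Server   : unixbox.example.com
    Port     : 9703

Once the DSN exists, File -> Open -> Online in the Admin tool will list it, and you can supply the repository password and your credentials to open the RPD online.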

Similar Messages

  • RPD Modelling - Aggregate tables

    Hello,
    I have two facts which are Actuals and Budgets.
    F_Actuals - Month level Data, Date Fk being Date_WID(So it has data only for the first of the month)
    F_Budget - Day level Data, FK being Date_WID
    Dim_Date - Day, PK being Date_WID
    I have created aggregate tables for the facts at month level, and at other dimension levels. Now, when I add these tables in the BMM as additional sources for the facts, I have set the content level for the Date dimension at detail level (for the AGG tables), because although the data is really at month level, it exists against a single day within each month.
    I have been facing performance issues when Budget columns are pulled into the report, as F_Budget doesn't have any AGG table with respect to the other dimensions.
    So the Budget columns were included in the Actual AGG tables. Everything is fine up to this point. But I then have two columns in my presentation layer for Budget: one from the AGG and one from the detail Budget fact. To give the user only one column, I snowflaked the Actual AGG fact with F_Budget and set the content levels appropriately.
    Issue: the data doesn't match. As long as only columns from F_Actuals are pulled, the data is accurate, but once columns from F_Budget are pulled the data is wrong.
    Do you see anything wrong with the modelling, or what actually went wrong here?
    Thanks.

    I don't see any issues with the snowflaking, even though you snowflaked the Actual AGG tables to F_Budget. If your data doesn't match, I would check the content levels. As you said, the combination of columns from both facts is where the issue is, so check all the dimensions and the levels set in your facts, F_Budget in particular. A sketch of what those settings might look like follows.
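    As a minimal sketch (set on each logical table source's Content tab; the names are from your post, the level assignments are assumptions to check against your model):

        F_Actuals LTS (month-grain rows) : Dim_Date = Month
        F_Budget LTS (day-grain rows)    : Dim_Date = Day
        AGG LTSs                         : Dim_Date = Month, other dimensions = their aggregated levels

    Any source whose stored grain is a month must declare Month, even if its rows are stamped with the first day of the month; declaring Day (detail) for month-grain data is a common cause of mismatched numbers when columns from two facts are combined.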

  • Use aggregates/bwa to speed up DTP-processing

    Dear all,
    I would like to use the "use aggregates" option in the DTP definition in order to speed up loading data from a source InfoCube to a target DSO. My source InfoCube is fully indexed in BWA.
    Unfortunately, it seems this option is not supported yet ... when executing my DTP, data is read from the F/E fact table of the source InfoCube.
    Any ideas or experiences?
    Kind regards,
    Thomas

    Hi,
    Consider the following options:
    1) Load only the required data; add a filter if possible.
    2) Increase the number of parallel processes used to execute the DTP, and set the priority of the job on the application server to A.
    3) Check the transformation for custom code, if present:
         a) Check that you are using the complete key in the SELECT. Use the appropriate SORT / DELETE ADJACENT DUPLICATES options.
         b) Use the READ statement with BINARY SEARCH.
         c) In case of nested loops, use READ with the INDEX option.
         d) If possible, use field symbols / hashed tables.
    The above are some basic checks.
    Thanks
    Sharat

  • Speeding up query processing

    Hi,
    I was wondering if anyone had any suggestions on how to improve the performance of this code. All tables used are indexed. The calling routine for this procedure is executed inside a FOR loop. I am working on Oracle8i release 8.1.7.4.0.
    Thanks,
    Darren
    CREATE OR REPLACE PACKAGE aa_mars
    AS
         PROCEDURE get_mars_info;
    END aa_mars;
    /
    CREATE OR REPLACE PACKAGE BODY aa_mars
    AS
    PROCEDURE get_mars_info
    IS
         -- Variables
         TYPE MPANS IS TABLE OF by_recon_mpans.mpan_core%TYPE;
         mpan MPANS;
         lv_module_name VARCHAR2(30) := 'Populate MARS Alignment Info';
         lv_sqlerrm VARCHAR2(2000) := '';
         ln_sqlcode NUMBER := 0;
         lv_error_text VARCHAR2(2000) := '';
         vv_control VARCHAR2(20);
    BEGIN
         --dbms_output.put_line ('Started :'||TO_CHAR(SYSDATE,'DD/MM/YY HH24:MI:SS'));
         -- Pick up to 1999 MPANs not yet present in aa_mars_info
         SELECT a.mpan_core BULK COLLECT
         INTO mpan
         FROM by_recon_mpans a
         WHERE NOT EXISTS (SELECT 1
                           FROM aa_mars_info b
                           WHERE a.mpan_core = b.mpan_core)
         AND SUBSTR(a.mpan_core,1,2) IN ('13','18')
         AND ROWNUM < 2000;
    /********** MARS DETAILS ******************************/
         -- Bulk-bind the collected MPANs into one INSERT ... SELECT over the database link
         FORALL i IN mpan.FIRST..mpan.LAST
              INSERT INTO aa_mars_info
                   ( mpan_core
                   , reg_id
                   , reg_efd
                   , reg_etd
                   , meter_id
                   )
              SELECT r.ms_mpan_core mpan_core
              ,      r.register_channel_id reg_id
              ,      MIN(r.effective_from_date) reg_efd
              ,      MAX(DECODE(r.effective_to_date, TO_DATE('01-JAN-4000','DD-MON-YYYY'), TO_DATE('31-DEC-2999','DD-MON-YYYY'), r.effective_to_date)) reg_etd
              ,      m.short_meter_id meter_id
              FROM registers@link_to_mars r
              ,    meters@link_to_mars m
              WHERE m.meter_id = r.mtr_meter_id
              AND m.effective_from = r.mtr_effective_from
              AND r.ms_mpan_core = mpan(i)
              AND r.effective_to_date IN (SELECT MAX(r1.effective_to_date)
                                          FROM registers@link_to_mars r1
                                          WHERE r.ms_mpan_core = r1.ms_mpan_core)
              AND r.mtr_meter_id IN (SELECT e.mtr_meter_id
                                     FROM registers@link_to_mars e
                                     WHERE e.ms_mpan_core = r.ms_mpan_core
                                     AND e.effective_to_date = r.effective_to_date)
              GROUP BY r.ms_mpan_core
              ,        r.register_channel_id
              ,        m.short_meter_id;
         COMMIT;
    -- Exception handler
    EXCEPTION
         WHEN OTHERS THEN
              lv_sqlerrm := SQLERRM;
              ln_sqlcode := SQLCODE;
              Tracker_Error_Handler.write_error(v_module_name => lv_module_name
                                               ,v_error_code => ln_sqlcode
                                               ,v_error_message => lv_sqlerrm
                                               ,v_additional_text => lv_error_text
                                               ,v_error_type => Dctrack_Constants.c_error_fatal);
              dbms_output.put_line(lv_error_text);
              ROLLBACK;
    END get_mars_info;
    END aa_mars;
    /

    "We only have read access on the database over the link." A view can be read-only. I know, I know: what you mean is that you aren't allowed to put additional objects on the remote database. Still, I suggest you ask; it's in their interest to make your query as efficient as possible too, because they are the ones serving you the data, so it's dragging their system down as well.
    "... without me having to copy all the tables to a local area?" Well, that's effectively what you're doing at the moment anyway.
    "Do you have any further suggestions?" Yes, use the LIMIT clause in the bulk collect:
    --  declaration section
    CURSOR c_mpans IS
    SELECT a.mpan_core
    FROM by_recon_mpans a
    WHERE NOT EXISTS (SELECT 1
                      FROM aa_mars_info b
                      WHERE a.mpan_core = b.mpan_core)
    AND SUBSTR(a.mpan_core,1,2) IN ('13','18');
    BEGIN
       OPEN c_mpans;
       LOOP
           FETCH c_mpans BULK COLLECT INTO mpan LIMIT 2000;
           EXIT WHEN mpan.COUNT = 0;
           FORALL i IN mpan.FIRST..mpan.LAST
               -- the INSERT ... SELECT from the original procedure goes here
       END LOOP;
       CLOSE c_mpans;
    END;
    /
    At least then you only call the procedure once and execute the main query once.
    Cheers, APC

  • Aggregate tables in Administration tool

    Hello!
    I have a problem when I want to create aggregate tables.
    I create the script with the Aggregate Persistence wizard, but when I run it in Job Manager it keeps running and never finishes.
    Can you help me please?!
    Regards, Karin

    11.5

  • How to model in RPD aggregate tables with different years of data

    Can someone let me know how to accomplish the following in OBIEE?
    I want to create a logical fact table with multiple logical table sources. I have an aggregate table that only stores current year data. In OBIEE, if a user builds a report using data from the current year, I want the query to hit this aggregate table. My base fact table however stores all years of data. If a user builds a report using data from prior time periods, I want the query to hit the base fact table.
    And if you're curious, the aggregate only contains current year data because the ETL needs to do a full load each night due to the complexity. The high volume of data and the amount of time it takes to populate this aggregate means we only have time to populate the current year data.
    Thanks in advance.

    Yes, this situation involves both an aggregate table and fragmented data. I have already modeled the aggregate table correctly, specifying the logical content levels.
    I'm not familiar with how to set the fragmentation logic. I see the fragmentation content section. What do I enter in this section to specify that my AGG table is for current-year data? Do I need to enter something for both logical table sources?
    Please let me know if there is a link with examples or an explanation. Appreciate the responses.
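    As a sketch of the fragmentation setup (the logical column and variable names here are hypothetical), each logical table source gets a "Fragmentation content" expression on its Content tab, and "This source should be combined with other sources at this level" stays unchecked, because any given year lives in exactly one source:

        Aggregate LTS : "Sales"."Dim Date"."Year" = VALUEOF("CURRENT_YEAR")
        Base fact LTS : "Sales"."Dim Date"."Year" < VALUEOF("CURRENT_YEAR")

    CURRENT_YEAR would be a repository variable refreshed by an initialization block, so neither expression needs editing at year end. With both the fragmentation content and the aggregation content levels set on the two sources, a current-year filter routes to the AGG table and a prior-year filter routes to the base fact table; so yes, you enter something for both logical table sources.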

  • How to create index to speed up query on XMLTYPE table

    I have a table of XMLTYPE called gary_pass_xml. What kind of index can I create on the table to speed up this query?
    SELECT (Extract(Value(FareGroupNodes),'/FareGroup')) FareGroup
    FROM GARY_PASS_XML tx,
    TABLE(XMLSequence(Extract(Value(tx),'/FareSearchRS/FareGroup'))) FareGroupNodes
    WHERE existsnode(value(tx),'/FareSearchRS/FareGroup') = 1
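    One option, assuming Oracle 11g or later (the index name is hypothetical; on earlier releases a CTXXPATH index was the rough equivalent), is an XMLIndex restricted to the path the query actually touches:

        CREATE INDEX gary_pass_xml_ix ON gary_pass_xml (OBJECT_VALUE)
        INDEXTYPE IS XDB.XMLINDEX
        PARAMETERS ('PATHS (INCLUDE (/FareSearchRS/FareGroup))');

    This can let the optimizer satisfy the EXISTSNODE predicate and the XMLSequence expansion from the index instead of parsing every stored document.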


  • Aggregate tables in 10G

    Hi Experts,
    In OBI 10g, how do we use aggregate tables? How does the server know when it should use an aggregate table, and when to use the ordinary table when fetching the data?
    What is the purpose of the Aggregate Persistence wizard in the RPD?
    Thanks in advance,

    Aggregate Table (Aggregate Persistence Wizard)
    Aggregate Table: Aggregate tables store precalculated measures that have been aggregated over a set of dimensional attributes.
    This is a very useful technique for speeding up query response time in decision support systems. It eliminates the need for run-time calculation and delivers faster results to users; the calculations are done ahead of time and the results are stored in tables.
    The key point is that the aggregate table has far fewer rows than the non-aggregated table, so processing is quicker.
    Aggregate Persistence Wizard
    Go to: OBIEE Admin Tool > Utilities > Aggregate Persistence Wizard
    http://obiee101.blogspot.com/2008/11/obiee-aggregate-persistence-wizard.html
    http://obieetutorialguide.blogspot.com/2012/03/creating-aggregate-tables-in-obiee.html
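    For illustration, a minimal sketch of the logical SQL the wizard generates (all names here are hypothetical), which you then run through nqcmd against the BI Server:

        create aggregates
        "ag_sales"
        for "SalesModel"."Sales Facts"("Revenue")
        at levels ("SalesModel"."Date"."Month", "SalesModel"."Product"."Brand")
        using connection pool "orcl"."Sales CP"
        in "orcl".."AGG_TARGET";

    The BI Server creates and populates the physical table, then maps it into the repository as an additional logical table source with the stated content levels; those content levels are exactly how the server knows when the aggregate, rather than the ordinary table, can answer a query.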

  • OBIEE bypasses smaller aggregate table and queries largest aggregate table

    Hello,
    Currently we are experiencing something strange regarding queries that are generated.
    Scenario:
    We have 1 detail table and 3 aggregate tables in the RPD. For this scenario I will only refer to 2 of the Aggregates.
    Aggregate 1 (1 million rows):
    Contains data - Division, Sales Rep, Month, Sales
    Aggregate 2 (13 million rows):
    Contains data - Division, Product, Month, Sales
    Both tables are set at the appropriate dimension levels in the Business Model. Row counts have been updated in the physical layer in the RPD.
    When we create an Answers query that contains Division, Month and Sales, one would think that OBIEE would query the smaller and faster of the two tables. However, OBIEE queries the table with 13 million records, completely bypassing the smaller table. If we make the larger aggregate inactive, then OBIEE queries the smaller table. We can't figure out why OBIEE immediately goes to the larger table.
    Has anyone experienced something such as this? Any help would be greatly appreciated.

    Have you tried changing the sort order of the logical table sources in your logical table?
    (See http://gerardnico.com/wiki/_media/temp/obiee_logical_table_sources_sort.jpg)
    Set Aggregate 1 first.
    Cheers
    Nico

  • Aggregate tables

    Hello all,
    I am currently implementing several aggregates in the repository and I have following question...
    There is a certain fact table in the business model layer with two sources: the base table and the aggregate table.
    This fact table has several measures, and depending on which measures or which fields you pick in combination with a measure, Oracle BI decides which source it will use.
    I noticed that if I pick a measure that can be calculated from the aggregate table in combination with a measure that has to be calculated on the base table, Oracle BI creates two queries for one report.
    One query is sent to the aggregate table and the other to the base table, which means the report takes even longer to load, so the aggregate is of no use.
    Is there any way to configure Oracle BI so that, when a situation like this occurs, it only picks the base table and not the aggregate table?
    I hope someone can help..
    Thanks!

    This is along expected lines, and it is what you might expect in your case since both measures are from the same fact table; but if, say, one measure came from one fact table and the other measure from another fact table, you would not want both queries fired at the base tables, correct? That's one of the reasons we need to be careful when using aggregate tables. If you plan on mixing measures that span both aggregated and non-aggregated sources, it is better not to have the aggregate tables at all; you can leverage caching and database materialized views instead. It is better to use aggregate tables only when you are sure all the measures are at the same granularity and the data does not become stale frequently. A sketch of the materialized-view alternative follows.
    Thanks,
    Venkat
    http://oraclebizint.wordpress.com
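    For example, a minimal sketch of that alternative in Oracle (table and column names are hypothetical): a materialized view with query rewrite enabled gives aggregate-style speed while the database optimizer, not the RPD, decides when to use it:

        CREATE MATERIALIZED VIEW mv_sales_month
        ENABLE QUERY REWRITE
        AS
        SELECT d.month_wid, f.product_wid, SUM(f.sales_amt) AS sales_amt
        FROM   f_sales f
        JOIN   dim_date d ON d.date_wid = f.date_wid
        GROUP  BY d.month_wid, f.product_wid;

    Under the usual query-rewrite prerequisites, month-level queries against f_sales are rewritten to the MV transparently, so mixed-granularity reports do not split into two physical queries the way they can with RPD aggregate sources.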

  • Errors when Creating Aggregate Tables in OBIEE 11.1.1.6 within SQL server

    Hi All,
    I was trying to create an aggregate table in OBIEE 11.1.1.6 against SQL Server. The SQL was generated successfully, as shown below, but an error occurred when I used NQCMD to execute it:
    1. SQL for creating Aggregate Table:
    create aggregates
    "ag_Measure"
    for "ASOBI_DTT_Demo"."Measure"("ValidVIPCnt")
    at levels ("ASOBI_DTT_Demo"."日期"."月", "ASOBI_DTT_Demo"."門市品牌"."門市品牌", "ASOBI_DTT_Demo"."門市類別"."門市類別", "ASOBI_DTT_Demo"."內創門市"."內創門市", "ASOBI_DTT_Demo"."門市/倉庫"."門市/倉庫", "ASOBI_DTT_Demo"."門市群組"."門市群組", "ASOBI_DTT_Demo"."門市行政區課"."行政區", "ASOBI_DTT_Demo"."門市地區"."城市")
    using connection pool "ASOBI_DTT_Demo"."ASOBI_System10"
    in "ASOBI_DTT_Demo"."ASOBI"."dbo";
    2. Error message:
    "ag_Measure"
    for "ASOBI_DTT_Demo"."Measure"("ValidVIPCnt")
    at levels ("ASOBI_DTT_Demo"."日期"."月", "ASOBI_DTT_Demo"."門市品牌"."門市品牌", "ASOBI_DTT_Demo"."門市類別"."門市類別", "ASOBI_DTT_Demo"."內創門市"."內創門市", "ASOBI_DTT_Demo"."門市/倉庫"."門市/倉庫", "ASOBI_DTT_Demo"."門市群組"."門市群組", "ASOBI_DTT_Demo"."門市行政區課"."行政區", "ASOBI_DTT_Demo"."門市地區"."城市")
    using connection pool "ASOBI_DTT_Demo"."ASOBI_System10"
    in "ASOBI_DTT_Demo"."ASOBI"."dbo"
    [343][State: 37000] [Microsoft][SQL Server Native Client 10.0][SQL Server]CREATE、DROP or ALTER 陳述式中使用未知的物件類型 'aggregates'。
    Statement execute failed
    The error message means "Unknown object type 'aggregates' used in a CREATE, DROP or ALTER statement" in English.
    Can anyone give me a suggestion for this error?? Many thanks!!!

    Hi Martin,
    I guess I was not clear enough. Let me try again.
    How does Aggregate Persistence work in OBIEE?
    Once you are done choosing options in the Aggregate Persistence wizard, it generates a query.
    What query is it?
    If you look at the query, it is not like any ANSI-standard SQL (I would say DDL) query. As you might have noticed, there are no SQL Server data types, lengths, keys, constraints etc. This query can only be understood by the BI Server.
    How do I issue this query?
    Since the logical query can only be understood by the BI Server, it has to be issued to the BI Server engine using a tool such as NQCMD.
    What does issuing this query using NQCMD do?
    The execution steps, from the moment the query is issued via NQCMD, are:
    Aggregate Persistence Wizard generates the query ---> it is issued to NQCMD ---> NQCMD passes the logical query to the BI Server ---> the BI Server parses the query and builds the corresponding physical DDL statements ---> these are issued to the database ---> if successful, the RPD is automatically updated with the aggregate sources.
    How do I pass the query to BI Server using NQCMD?
    The format of issuing this logical query to BI Server using NQCMD is
    nqcmd -d <Data Source Name> -u <Analytics UserId> -p <Password> -s <command> > output.log
    where
    <Data Source Name> : Is the DSN name which OBIPS uses to talk to Oracle BI Server. Yes, it's the very same DSN that can be found in InstanceConfig.xml
    <Analytics UserID> : Any user in OBIEE with admin privileges.
    <Password> : Password of that OBIEE user.
    <Command> : Logical SQL Command which you already have handy.
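    For instance (the DSN, credentials and file name here are placeholders):
    nqcmd -d AnalyticsWeb -u weblogic -p Password123 -s create_agg.sql > output.log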
    Hope I was clearer this time.
    Dhar

  • Error when creating aggregate table

    Hello,
    I am creating an aggregate table using the Aggregate Persistence Wizard. When trying to run the batch file, I receive an error: "Could not connect to the Oracle BI Server instance".
    Yet the Oracle BI Server is running, and I am able to run queries in Answers with no connection issues (please see below).
    Please help.
    Thanks,
    Felicity
    D:\OracleBI\server\Repository>create_agg.bat
    D:\OracleBI\server\Repository>nqcmd -d AnalyticsWeb -u Administrator -p Administrator -s D:\OracleBI\server\Repository\CREATE_AGG.sql
    Oracle BI Server
    Copyright (c) 1997-2009 Oracle Corporation, All rights reserved
    create aggregates
    "ag_SalesFacts"
    for "SupplierSales"."SalesFacts"("Net Weight Shipped","Units Ordered","Units Shipped","Dollars")
    at levels ("SupplierSales"."ProductsDim"."Type", "SupplierSales"."CustomersDim"."SalesRep", "SupplierSales"."PeriodsDim"."Month")
    using connection pool "ORCL"."SUPPLIER CP"
    in "ORCL".."SUPPLIER2"
    create aggregates
    "ag_SalesFacts"
    for "SupplierSales"."SalesFacts"("Net Weight Shipped","Units Ordered","Units Shipped","Dollars")
    at levels ("SupplierSales"."ProductsDim"."Type", "SupplierSales"."CustomersDim"."SalesRep", "SupplierSales"."PeriodsDim"."Month")
    using connection pool "ORCL"."SUPPLIER CP"
    in "ORCL".."SUPPLIER2"
    [10058][State: S1000] [NQODBC] [SQL_STATE: S1000] [nQSError: 10058] A general error has occurred.
    [nQSError: 37001] Could not connect to the Oracle BI Server instance.
    Statement preparation failed
    Processed: 1 queries
    Encountered 1 errors

    See whether this helps you solve the issue: http://forums.oracle.com/forums/thread.jspa?messageID=3661598
    Also check the comments in this blog: http://obiee101.blogspot.com/2008/11/obiee-aggregate-persistence-wizard.html
    It deals with user permissions for the database.
    Hope this answers your question.
    Cheers,
    kk

  • Power Query; How do I reference a Power Pivot table from a Power Query query

    Hi,
    It's pretty awesome how you can define Extract, Transform and Load processes within Power Query without having to type in a single line of code. However, how do I reference a Power Pivot table from a Power Query query, so that I avoid repeatedly accessing the same data source (CSV) file, with a view to increasing performance?
    We are aware of the Reference sub-menu option in Power Query. However, the new query created by the "reference" option still seems to refresh data from the data source (CSV) rather than just referencing the base query. Is this understanding correct? There does seem to be a lot of hard disk activity when re-running the new query which is based on a base query rather than a data source, so we were hoping the new query would just need to reference the base query in memory rather than rescanning the hard disk. Is there any way to ensure that the reference query just rescans the base query in memory?
    Kind regards,
    Kieran.

    Hi Kieran,
    This sounds like something to suggest for a future release. At present, Power Query always re-runs the entire Power Query query when refreshed. The Reference feature is analogous to a SQL view, whereby the underlying query is always re-executed when it is queried, or in this case refreshed. Even something like using the Power Query cache to minimise the amount of data re-read from disk would help performance, but the cache is only used for preview data and is stored locally.
    It would be a good idea to suggest this feature to the Power BI team via the feedback smiley face.
    Regards,
    Michael Amadi
    Website: http://www.nimblelearn.com, Twitter: @nimblelearn
    Hi Michael,
    Glad to hear from you about this, and thanks to Kieran for raising a very good point to debate. I will be glad to see this in a future release.

  • Filter not applied in answers because of aggregate tables

    Hi Gurus:
    I am having an issue with filters and aggregate tables.
    I have a report which I am filtering on a dimension.
    Now, if I don't expose that column on the report, it does not filter properly: the query hits the aggregate tables and hence the result is not correct.
    I checked the physical SQL issued, and it doesn't even have that filter in the WHERE clause. However, the logical SQL has it.
    If I do expose that column on the report, then it filters properly, since the query hits the base tables.
    I cannot include this column in the aggregated dimension, since it is not part of the hierarchy (snowflaked).
    Any idea why is this happening?
    Please help me.
    - Vinay

    Hi Vinay,
    The hints I gave you are not work-arounds or band-aids for OBIEE. They are features of OBIEE meant for a specific purpose.
    As for best practice, OBIEE is meant strictly to source fully qualified data marts. You cannot expect highly summarized data within a few seconds from OBIEE if you don't have the data rolled up and summarized at the appropriate levels.
    When we first started using this tool, replacing Actuate a couple of years ago, we thought our users would be happy seeing the reports on a click. But it was worse than Actuate, because we didn't have our data rolled up or aggregated.
    Now our users love the reports, because we have highly aggregated data sources such as Essbase, materialized views, query rewrites, indexes, partitioning etc., and these are all outside OBIEE.
    Thanks
    Sai

  • How to refine data in Aggregate tables  in Oracle BI

    Hello!
    How do we refresh the data in aggregate tables (created by a "create aggregates ..." statement) after the data in the corresponding database tables has been updated?
    We cannot simply run the "delete aggregates" statement and then "create aggregates" again, because "delete aggregates" eliminates all the aggregates in a model, and our model has several aggregates that are meant to be refreshed at different times and on different schedules.
    Thanks

    903830 wrote:
    Hi folks,
    suppose we have a table emp01 with 10 records, and we create another table emp02 as:
    create table emp02 as select * from emp01;
    Now both tables have identical data. How do we find out whether the "data" in the two tables is identical?

    Why wouldn't they be identical? You've just created one table as a copy of the other. They are identical in data terms.
