Poor query performance

I have an Oracle 10g R2 (10.2.0.4) data warehouse with a fact table whose records are loaded in chronological order. Each month of records lives in its own tablespace. The table is range-partitioned by day and subpartitioned by a hash of the primary key into 8 subpartitions, giving daily partitions. A tablespace usually contains 7 datafiles of 16384 MB each, for a total of 114688 MB (about 115 GB). All datafiles are on a SAN within ASM disk groups.
Why do count queries against the fact table over the first three days of a month return orders of magnitude faster than count queries over the last three days of a month?
The schema is a star schema, and according to the explain plans both queries get a star transformation.
Is it the file size? What do you think?

As Oraclefusions said, I think we need to know more about your stats collection method. Are you using the Oracle-provided scheduler job with the default settings, or have you modified it?
When you look at the last_analyzed date for the table, how often are the statistics being updated?
Have you tried running explain plan for the two sets of data? If the plans do not vary, have you queried v$sql_plan for the actual execution plans to see how Oracle is solving the two queries?
HTH -- Mark D Powell --
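A minimal sketch of both checks, with the fact table and date column names assumed since they were not posted:

-- When were the partitions last analyzed, and do the newest ones have stats?
SELECT partition_name, num_rows, last_analyzed
FROM   user_tab_partitions
WHERE  table_name = 'FACT_TABLE'        -- hypothetical table name
ORDER  BY partition_position;

-- Run each count with runtime statistics, then pull the actual plan
-- from the cursor cache (valid 10gR2 syntax):
SELECT /*+ gather_plan_statistics */ COUNT(*)
FROM   fact_table                       -- hypothetical table name
WHERE  day_col >= DATE '2010-12-01'     -- hypothetical date column
AND    day_col <  DATE '2010-12-04';

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));

Comparing the Starts, A-Rows and Buffers columns between the first-of-month and last-of-month runs should show where the extra work is going.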

Similar Messages

  • Poor query performance in Prod.

I am facing lots of issues with my queries.
A query works fine in Dev, but after I transported it to Prod it takes too much time to retrieve the result.
Why am I facing this issue?
How can I do performance tuning for the query?
The query is built on a MultiProvider and it also jumps to the ODS for the ODS query.
But the query performance is really low and poor in Production.
And to my surprise, the query works perfectly and faster in Dev.
What can be the suggestion?
Please send documents for performance tuning, note numbers, etc.

Are data volumes huge in the Prod box? That may be the cause of the slow runtimes.
Look at the performance-improving techniques below:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cbd2d390-0201-0010-8eab-a8a9269a23c2
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/aec09790-0201-0010-8eb9-e82df5763455
    Business Intelligence Performance Tuning [original link is broken]
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/c8c4d794-0501-0010-a693-918a17e663cc
    Note 565725 - Optimizing the performance of ODS objects

  • Poor query performance when joining CONTAINS to another table

We recently began evaluating Oracle Text for a search solution. We need to be able to search a table that can have over 20 million rows. Each user may only have visibility to a tiny fraction of those rows. The goal is to have a single Oracle Text index that covers all of the searchable columns in the table (a multi-column datastore) and provides a score for each search result so that we can sort the results in descending order by score. What we're seeing is that query performance from TOAD is extremely fast when we write a simple CONTAINS query against the Oracle Text indexed table. However, when we first try to reduce the rows the CONTAINS query has to search by using a WITH clause, query performance degrades significantly.
    For example, we can find all the records a user has access to from our base table by the following query:
    SELECT d.duns_loc
    FROM duns d
    JOIN primary_contact pc
    ON d.duns_loc = pc.duns_loc
    AND pc.emp_id = :employeeID;
    This query can execute in <100 ms. In the working example, this query returns around 1200 rows of the primary key duns_loc.
    Our search query looks like this:
    SELECT score(1), d.*
    FROM duns d
    WHERE CONTAINS(TEXT_KEY, :search,1) > 0
    ORDER BY score(1) DESC;
    The :search value in this example will be 'highway'. The query can return 246k rows in around 2 seconds.
    2 seconds is good, but we should be able to have a much faster response if the search query did not have to search the entire table, right? Since each user can only "view" records they are assigned to we reckon that if the search operation only had to scan a tiny tiny percent of the TEXT index we should see faster (and more relevant) results. If we now write the following query:
    WITH subset
    AS
    (SELECT d.duns_loc
    FROM duns d
    JOIN primary_contact pc
    ON d.duns_loc = pc.duns_loc
AND pc.emp_id = :employeeID)
    SELECT score(1), d.*
    FROM duns d
    JOIN subset s
    ON d.duns_loc = s.duns_loc
    WHERE CONTAINS(TEXT_KEY, :search,1) > 0
    ORDER BY score(1) DESC;
For reasons we have not been able to identify, this query actually takes longer to execute than the sum of the durations of its contributing parts: it takes over 6 seconds to run. Neither we nor our DBA can figure out why it performs worse than the wide-open search. The wide-open search is not ideal, though, as the query would end up returning records the user doesn't have access to view.
Has anyone ever run into something like this? Any suggestions on what to look at or where to go? If anyone would like more information to help with diagnosis, let me know and I'll be happy to post it here.
    Thanks!!

Sometimes it can be good to separate the tables into separate sub-query factoring (WITH) clauses or inline views in the FROM clause, or to use an IN clause as a WHERE condition. Although there are some differences, using a sub-query factoring (WITH) clause is similar to using an inline view in the FROM clause. However, you should avoid duplication: you should not have the same table in two different places, as in your original query.
You should have indexes on any columns that the tables are joined on, your statistics should be current, and your domain index should be synchronized and optimized regularly, and periodically rebuilt or dropped and recreated, to keep it performing with maximum efficiency.
The following demonstration uses a composite domain index (CDI) with FILTER BY, as suggested by Roger, then shows the explained plans for your original query and various alternatives. Your original query has nested loops; all of the others have the same plan without the nested loops. You could also add index hints.
    SCOTT@orcl_11gR2> -- tables:
    SCOTT@orcl_11gR2> CREATE TABLE duns
      2    (duns_loc  NUMBER,
      3       text_key  VARCHAR2 (30))
      4  /
    Table created.
    SCOTT@orcl_11gR2> CREATE TABLE primary_contact
      2    (duns_loc  NUMBER,
      3       emp_id       NUMBER)
      4  /
    Table created.
    SCOTT@orcl_11gR2> -- data:
    SCOTT@orcl_11gR2> INSERT INTO duns VALUES (1, 'highway')
      2  /
    1 row created.
    SCOTT@orcl_11gR2> INSERT INTO primary_contact VALUES (1, 1)
      2  /
    1 row created.
    SCOTT@orcl_11gR2> INSERT INTO duns
      2  SELECT object_id, object_name
      3  FROM   all_objects
      4  WHERE  object_id > 1
      5  /
    76027 rows created.
    SCOTT@orcl_11gR2> INSERT INTO primary_contact
      2  SELECT object_id, namespace
      3  FROM   all_objects
      4  WHERE  object_id > 1
      5  /
    76027 rows created.
    SCOTT@orcl_11gR2> -- indexes:
    SCOTT@orcl_11gR2> CREATE INDEX duns_duns_loc_idx
      2  ON duns (duns_loc)
      3  /
    Index created.
    SCOTT@orcl_11gR2> CREATE INDEX primary_contact_duns_loc_idx
      2  ON primary_contact (duns_loc)
      3  /
    Index created.
    SCOTT@orcl_11gR2> -- composite domain index (cdi) with filter by clause
    SCOTT@orcl_11gR2> -- as suggested by Roger:
    SCOTT@orcl_11gR2> CREATE INDEX duns_text_key_idx
      2  ON duns (text_key)
      3  INDEXTYPE IS CTXSYS.CONTEXT
      4  FILTER BY duns_loc
      5  /
    Index created.
    SCOTT@orcl_11gR2> -- gather statistics:
    SCOTT@orcl_11gR2> EXEC DBMS_STATS.GATHER_TABLE_STATS (USER, 'DUNS')
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11gR2> EXEC DBMS_STATS.GATHER_TABLE_STATS (USER, 'PRIMARY_CONTACT')
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11gR2> -- variables:
    SCOTT@orcl_11gR2> VARIABLE employeeid NUMBER
    SCOTT@orcl_11gR2> EXEC :employeeid := 1
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11gR2> VARIABLE search VARCHAR2(100)
    SCOTT@orcl_11gR2> EXEC :search := 'highway'
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11gR2> -- original query:
    SCOTT@orcl_11gR2> SET AUTOTRACE ON EXPLAIN
    SCOTT@orcl_11gR2> WITH
      2    subset AS
      3        (SELECT d.duns_loc
      4         FROM      duns d
      5         JOIN      primary_contact pc
      6         ON      d.duns_loc = pc.duns_loc
      7         AND      pc.emp_id = :employeeID)
      8  SELECT score(1), d.*
      9  FROM   duns d
    10  JOIN   subset s
    11  ON     d.duns_loc = s.duns_loc
    12  WHERE  CONTAINS (TEXT_KEY, :search,1) > 0
    13  ORDER  BY score(1) DESC
    14  /
      SCORE(1)   DUNS_LOC TEXT_KEY
            18          1 highway
    1 row selected.
    Execution Plan
    Plan hash value: 4228563783
    | Id  | Operation                      | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT               |                   |     2 |    84 |   121   (4)| 00:00:02 |
    |   1 |  SORT ORDER BY                 |                   |     2 |    84 |   121   (4)| 00:00:02 |
    |*  2 |   HASH JOIN                    |                   |     2 |    84 |   120   (3)| 00:00:02 |
    |   3 |    NESTED LOOPS                |                   |    38 |  1292 |    50   (2)| 00:00:01 |
    |   4 |     TABLE ACCESS BY INDEX ROWID| DUNS              |    38 |  1102 |    11   (0)| 00:00:01 |
    |*  5 |      DOMAIN INDEX              | DUNS_TEXT_KEY_IDX |       |       |     4   (0)| 00:00:01 |
    |*  6 |     INDEX RANGE SCAN           | DUNS_DUNS_LOC_IDX |     1 |     5 |     1   (0)| 00:00:01 |
    |*  7 |    TABLE ACCESS FULL           | PRIMARY_CONTACT   |  4224 | 33792 |    70   (3)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("D"."DUNS_LOC"="PC"."DUNS_LOC")
       5 - access("CTXSYS"."CONTAINS"("D"."TEXT_KEY",:SEARCH,1)>0)
       6 - access("D"."DUNS_LOC"="D"."DUNS_LOC")
       7 - filter("PC"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
    SCOTT@orcl_11gR2> -- queries with better plans (no nested loops):
    SCOTT@orcl_11gR2> -- subquery factoring (with) clauses:
    SCOTT@orcl_11gR2> WITH
      2    subset1 AS
      3        (SELECT pc.duns_loc
      4         FROM      primary_contact pc
      5         WHERE  pc.emp_id = :employeeID),
      6    subset2 AS
      7        (SELECT score(1), d.*
      8         FROM      duns d
      9         WHERE  CONTAINS (TEXT_KEY, :search,1) > 0)
    10  SELECT subset2.*
    11  FROM   subset1, subset2
    12  WHERE  subset1.duns_loc = subset2.duns_loc
    13  ORDER  BY score(1) DESC
    14  /
      SCORE(1)   DUNS_LOC TEXT_KEY
            18          1 highway
    1 row selected.
    Execution Plan
    Plan hash value: 153618227
    | Id  | Operation                     | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT              |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |   1 |  SORT ORDER BY                |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |*  2 |   HASH JOIN                   |                   |    38 |  1406 |    82   (4)| 00:00:01 |
    |   3 |    TABLE ACCESS BY INDEX ROWID| DUNS              |    38 |  1102 |    11   (0)| 00:00:01 |
    |*  4 |     DOMAIN INDEX              | DUNS_TEXT_KEY_IDX |       |       |     4   (0)| 00:00:01 |
    |*  5 |    TABLE ACCESS FULL          | PRIMARY_CONTACT   |  4224 | 33792 |    70   (3)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("PC"."DUNS_LOC"="D"."DUNS_LOC")
       4 - access("CTXSYS"."CONTAINS"("TEXT_KEY",:SEARCH,1)>0)
       5 - filter("PC"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
    SCOTT@orcl_11gR2> -- inline views (sub-queries in the from clause):
    SCOTT@orcl_11gR2> SELECT subset2.*
      2  FROM   (SELECT pc.duns_loc
      3            FROM   primary_contact pc
      4            WHERE  pc.emp_id = :employeeID) subset1,
      5           (SELECT score(1), d.*
      6            FROM   duns d
      7            WHERE  CONTAINS (TEXT_KEY, :search,1) > 0) subset2
      8  WHERE  subset1.duns_loc = subset2.duns_loc
      9  ORDER  BY score(1) DESC
    10  /
      SCORE(1)   DUNS_LOC TEXT_KEY
            18          1 highway
    1 row selected.
    Execution Plan
    Plan hash value: 153618227
    | Id  | Operation                     | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT              |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |   1 |  SORT ORDER BY                |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |*  2 |   HASH JOIN                   |                   |    38 |  1406 |    82   (4)| 00:00:01 |
    |   3 |    TABLE ACCESS BY INDEX ROWID| DUNS              |    38 |  1102 |    11   (0)| 00:00:01 |
    |*  4 |     DOMAIN INDEX              | DUNS_TEXT_KEY_IDX |       |       |     4   (0)| 00:00:01 |
    |*  5 |    TABLE ACCESS FULL          | PRIMARY_CONTACT   |  4224 | 33792 |    70   (3)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("PC"."DUNS_LOC"="D"."DUNS_LOC")
       4 - access("CTXSYS"."CONTAINS"("TEXT_KEY",:SEARCH,1)>0)
       5 - filter("PC"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
    SCOTT@orcl_11gR2> -- ansi join:
    SCOTT@orcl_11gR2> SELECT SCORE(1), duns.*
      2  FROM   duns
      3  JOIN   primary_contact
      4  ON     duns.duns_loc = primary_contact.duns_loc
      5  WHERE  CONTAINS (duns.text_key, :search, 1) > 0
      6  AND    primary_contact.emp_id = :employeeid
      7  ORDER  BY SCORE(1) DESC
      8  /
      SCORE(1)   DUNS_LOC TEXT_KEY
            18          1 highway
    1 row selected.
    Execution Plan
    Plan hash value: 153618227
    | Id  | Operation                     | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT              |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |   1 |  SORT ORDER BY                |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |*  2 |   HASH JOIN                   |                   |    38 |  1406 |    82   (4)| 00:00:01 |
    |   3 |    TABLE ACCESS BY INDEX ROWID| DUNS              |    38 |  1102 |    11   (0)| 00:00:01 |
    |*  4 |     DOMAIN INDEX              | DUNS_TEXT_KEY_IDX |       |       |     4   (0)| 00:00:01 |
    |*  5 |    TABLE ACCESS FULL          | PRIMARY_CONTACT   |  4224 | 33792 |    70   (3)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("DUNS"."DUNS_LOC"="PRIMARY_CONTACT"."DUNS_LOC")
       4 - access("CTXSYS"."CONTAINS"("DUNS"."TEXT_KEY",:SEARCH,1)>0)
       5 - filter("PRIMARY_CONTACT"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
    SCOTT@orcl_11gR2> -- old join:
    SCOTT@orcl_11gR2> SELECT SCORE(1), duns.*
      2  FROM   duns, primary_contact
      3  WHERE  CONTAINS (duns.text_key, :search, 1) > 0
      4  AND    duns.duns_loc = primary_contact.duns_loc
      5  AND    primary_contact.emp_id = :employeeid
      6  ORDER  BY SCORE(1) DESC
      7  /
      SCORE(1)   DUNS_LOC TEXT_KEY
            18          1 highway
    1 row selected.
    Execution Plan
    Plan hash value: 153618227
    | Id  | Operation                     | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT              |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |   1 |  SORT ORDER BY                |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |*  2 |   HASH JOIN                   |                   |    38 |  1406 |    82   (4)| 00:00:01 |
    |   3 |    TABLE ACCESS BY INDEX ROWID| DUNS              |    38 |  1102 |    11   (0)| 00:00:01 |
    |*  4 |     DOMAIN INDEX              | DUNS_TEXT_KEY_IDX |       |       |     4   (0)| 00:00:01 |
    |*  5 |    TABLE ACCESS FULL          | PRIMARY_CONTACT   |  4224 | 33792 |    70   (3)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("DUNS"."DUNS_LOC"="PRIMARY_CONTACT"."DUNS_LOC")
       4 - access("CTXSYS"."CONTAINS"("DUNS"."TEXT_KEY",:SEARCH,1)>0)
       5 - filter("PRIMARY_CONTACT"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
    SCOTT@orcl_11gR2> -- in clause:
    SCOTT@orcl_11gR2> SELECT SCORE(1), duns.*
      2  FROM   duns
      3  WHERE  CONTAINS (duns.text_key, :search, 1) > 0
      4  AND    duns.duns_loc IN
      5           (SELECT primary_contact.duns_loc
      6            FROM   primary_contact
      7            WHERE  primary_contact.emp_id = :employeeid)
      8  ORDER  BY SCORE(1) DESC
      9  /
      SCORE(1)   DUNS_LOC TEXT_KEY
            18          1 highway
    1 row selected.
    Execution Plan
    Plan hash value: 3825821668
    | Id  | Operation                     | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT              |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |   1 |  SORT ORDER BY                |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |*  2 |   HASH JOIN SEMI              |                   |    38 |  1406 |    82   (4)| 00:00:01 |
    |   3 |    TABLE ACCESS BY INDEX ROWID| DUNS              |    38 |  1102 |    11   (0)| 00:00:01 |
    |*  4 |     DOMAIN INDEX              | DUNS_TEXT_KEY_IDX |       |       |     4   (0)| 00:00:01 |
    |*  5 |    TABLE ACCESS FULL          | PRIMARY_CONTACT   |  4224 | 33792 |    70   (3)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("DUNS"."DUNS_LOC"="PRIMARY_CONTACT"."DUNS_LOC")
       4 - access("CTXSYS"."CONTAINS"("DUNS"."TEXT_KEY",:SEARCH,1)>0)
       5 - filter("PRIMARY_CONTACT"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
    SCOTT@orcl_11gR2>

  • Poor query performance only with migrated 7.0 queries

    Dear Team,
We are facing a serious query performance issue after migrating queries from 3.5 to 7.0.
I executed a query in 3.5 with some variable values and it took a fraction of a second to display the output. The same migrated query with the same variable entries takes a very long time and gives a timeout error.
We are not using any aggregates at the InfoProvider level.
Both queries are based on the same cube, but the 3.5 query takes less time while the 7.0 query takes a very long time when more selection is done.
I checked for notes but didn't find a specific note for this scenario; I found notes only for general query performance improvement.
I want to know why the same 3.5 query takes a long time and gives a timeout error only in 7.0. Please suggest some notes or ideas related to this scenario.
    Regards,
    Chan

    Hi,
Queries in BI 7.0 are almost the same as queries in 3.x format.
In order to check whether the problem is in the query runtime (database time) or the Java runtime (probably rendering), try running the query from RSRT once in Java web and once in ABAP web.
If the problem occurs only in Java web, take the URL and add &profiling=X at the end.
After the query executes, you can use the statistics shown at the top of the page.
In my experience, the problem is usually in the rendering phase of the query. One thing that can be done is to limit the number of rows shown on each page; this can be done by changing the 0ANALYSIS web template - it's one of the web template parameters.
    Tomer.

  • Poor query performance in WebI on top of BEx queries

    Hello,
We are using Web Intelligence in BO 3.1 (SP0) to report on BEx queries (BW 7.0).
We find, however, that reporting on the same set of data is much quicker in BEx Analyzer than in WebI. In BEx it is acceptable, but in WebI it's not.
How can we optimize performance in this scenario? What tricks are there?
The amount of data is not huge (max 500,000 records in the cube).
    BR,
    Fredrik

    Hi,
Dennis is right. But if you need to upgrade (and you do), then going straight to the latest SP may be better. It is SP4. With SP3 the WebI option "Query Striping" was introduced, which improves performance as well.
    Also check
    http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/d0bf4691-cdce-2d10-45ba-d1ff39408eb4
    http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/109b7d63-7cab-2d10-2fbc-be5c61dcf110
    http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/006b1374-8f91-2d10-fe9c-f9fa12e2f595
    Regards
    -Seb.

  • Poor query performance when using date range

    Hello,
    We have the following ABAP code:
    select sptag werks vkorg vtweg spart kunnr matnr periv volum_01 voleh
          into table tab_aux
          from s911
          where vkorg in c_vkorg
            and werks in c_werks
            and sptag in c_sptag
            and matnr in c_matnr
    that is translated to the following Oracle query:
    SELECT
    "SPTAG" , "WERKS" , "VKORG" , "VTWEG" , "SPART" , "KUNNR" , "MATNR" , "PERIV" , "VOLUM_01" ,"VOLEH" FROM SAPR3."S911" WHERE "MANDT" = '003' AND "VKORG" = 'D004' AND "SPTAG" BETWEEN 20101201 AND 20101231 AND "MATNR" BETWEEN 000000000100000000 AND 000000000999999999;
Because the field SPTAG is not enclosed in apostrophes, the Oracle query performs very badly. Below are the execution plans and their costs, with and without the apostrophes. Please help me understand why I am getting this behaviour.
    ##WITH APOSTROPHES
    SQL> EXPLAIN PLAN FOR
      2  SELECT
      3  "SPTAG" , "WERKS" , "VKORG" , "VTWEG" , "SPART" , "KUNNR" , "MATNR" , "PERIV" , "VOLUM_01" ,"VOLEH" FROM SAPR3."S911" WHERE "MANDT" = '003' AND "VKORG" = 'D004' AND "SPTAG" BETWEEN '20101201' AND '20101231' AND "MATNR" BETWEEN '000000000100000000' AND '000000000999999999';
    Explained.
    SQL> SELECT PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY());
PLAN_TABLE_OUTPUT
| Id  | Operation                   | Name     | Rows  | Bytes | Cost (%CPU)|
|   0 | SELECT STATEMENT            |          |   932 | 62444 |   150   (1)|
|*  1 |  TABLE ACCESS BY INDEX ROWID| S911     |   932 | 62444 |   149   (0)|
|*  2 |   INDEX RANGE SCAN          | S911~VAC |   55M |       |     5   (0)|
    Predicate Information (identified by operation id):
    PLAN_TABLE_OUTPUT
       1 - filter("VKORG"='D004' AND "SPTAG">='20101201' AND
                  "SPTAG"<='20101231')
       2 - access("MANDT"='003' AND "MATNR">='000000000100000000' AND
                  "MATNR"<='000000000999999999')
    ##WITHOUT APOSTROPHES
    SQL> EXPLAIN PLAN FOR
      2  SELECT
      3  "SPTAG" , "WERKS" , "VKORG" , "VTWEG" , "SPART" , "KUNNR" , "MATNR" , "PERIV" , "VOLUM_01" ,"VOLEH" FROM SAPR3."S911" WHERE "MANDT" = '003' AND "VKORG" = 'D004' AND "SPTAG" BETWEEN 20101201 AND 20101231 AND "MATNR" BETWEEN '000000000100000000' AND '000000000999999999';
Explained.
SQL> SELECT PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY());
PLAN_TABLE_OUTPUT
| Id  | Operation                   | Name     | Rows  | Bytes | Cost (%CPU)|
|   0 | SELECT STATEMENT            |          |  2334 |  152K |   150   (1)|
|*  1 |  TABLE ACCESS BY INDEX ROWID| S911     |  2334 |  152K |   149   (0)|
|*  2 |   INDEX RANGE SCAN          | S911~VAC |   55M |       |     5   (0)|
    Predicate Information (identified by operation id):
    PLAN_TABLE_OUTPUT
       1 - filter("VKORG"='D004' AND TO_NUMBER("SPTAG")>=20101201 AND
                  TO_NUMBER("SPTAG")<=20101231)
       2 - access("MANDT"='003' AND "MATNR">='000000000100000000' AND
                  "MATNR"<='000000000999999999')
    Best Regards,
    Daniel G.

Volker,
Answering your question regarding the explain plan from ST05: as a quick workaround I created an index (S911~Z9), but I'd still like to solve this issue without the extra index, since the primary index would work fine as long as the date were sent to Oracle as a string and not as a number.
    SELECT                                                                         
      "SPTAG" , "WERKS" , "VKORG" , "VTWEG" , "SPART" , "KUNNR" , "MATNR" ,        
      "PERIV" , "VOLUM_01" , "VOLEH"                                               
    FROM                                                                           
      "S911"                                                                       
    WHERE                                                                          
      "MANDT" = :A0 AND "VKORG" = :A1 AND "SPTAG" BETWEEN :A2 AND :A3 AND "MATNR"  
      BETWEEN :A4 AND :A5                                                          
    A0(CH,3)  = 003              
    A1(CH,4)  = D004             
    A2(NU,8)  = 20101201  (NU means number correct?)       
    A3(NU,8)  = 20101231         
    A4(CH,18) = 000000000100000000
    A5(CH,18) = 000000000999999999
SELECT STATEMENT ( Estimated Costs = 10 , Estimated #Rows = 6 )
  3 FILTER
      Filter Predicates
    2 TABLE ACCESS BY INDEX ROWID S911
        ( Estim. Costs = 10 , Estim. #Rows = 6 )
        Estim. CPU-Costs = 247.566 Estim. IO-Costs = 10
      1 INDEX RANGE SCAN S911~Z9
          ( Estim. Costs = 7 , Estim. #Rows = 20 )
          Search Columns: 4
          Estim. CPU-Costs = 223.202 Estim. IO-Costs = 7
          Access Predicates Filter Predicates
    The table originally includes the following indexes:
    ###S911~0
    MANDT
    SSOUR
    VRSIO
    SPMON
    SPTAG
    SPWOC
    SPBUP
    VKORG
    VTWEG
    SPART
    VKBUR
    VKGRP
    KONDA
    KUNNR
    WERKS
    MATNR
    ###S911~VAC
    MANDT
    MATNR
    Number of entries: 61.303.517
    DISTINCT VKORG: 65
    DISTINCT SPTAG: 3107
    DISTINCT MATNR: 2939
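To see the conversion effect in isolation, here is a minimal sketch against a hypothetical demo table (not the real S911): comparing a VARCHAR2 column to numeric literals forces Oracle to wrap the column in TO_NUMBER(), which disqualifies it as an index search column.

CREATE TABLE s911_demo
  (mandt VARCHAR2(3),
   sptag VARCHAR2(8),
   pad   VARCHAR2(100));

CREATE INDEX s911_demo_idx ON s911_demo (mandt, sptag);

-- Numeric literals: the predicate becomes TO_NUMBER(sptag) BETWEEN ...,
-- so SPTAG can only be a filter, not an access predicate:
EXPLAIN PLAN FOR
SELECT * FROM s911_demo
WHERE  mandt = '003' AND sptag BETWEEN 20101201 AND 20101231;

-- String literals: the predicate stays sargable and SPTAG is usable
-- as an access predicate in the index range scan:
EXPLAIN PLAN FOR
SELECT * FROM s911_demo
WHERE  mandt = '003' AND sptag BETWEEN '20101201' AND '20101231';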

  • Poor query performance with BETWEEN

    I'm using Oracle Reports 6i.
I needed to add date range parameters (starting and ending dates) to a report. I used lexical parameters in the WHERE condition to handle the logic.
    If no dates given,
    Start_LEX := '/**/' and
    End_LEX := '/**/'
    If Start_date given,
    Start_LEX := 'AND t1.date >= :Start_date'
    If End_date given,
    End_LEX := 'AND t1.date <= :End_date'
    When I run the report with no dates or only one of the dates, it finishes in 3 to 8 seconds.
    But when I supply both dates, it takes > 5 minutes.
    So I did the following
If both dates are given and Start_date = End_date,
    Start_LEX := 'AND t1.date = :Start_date'
    End_LEX := '/**/'
    This got the response back to the 3 - 8 second range.
    I then tried this
If both dates are given and Start_date != End_date,
    Start_LEX := 'AND t1.date BETWEEN :Start_date AND :End_date'
    End_LEX := '/**/'
    This didn't help. The response was still in the 5+ minutes range.
    If I run the query outside of Oracle Reports, in PL/SQL Developer or SQLplus, it returns the same data in 3 - 8 seconds in all cases.
    Does anyone know what is going on in Oracle Reports when a date is compared with two values either separately or with a BETWEEN? Why does the query take approx. 60 times as long to execute?

    Hi,
First, compare the access plans when using BETWEEN and when using <= and >=.
Then try using NVL logic when forming the lexical parameters.
    Adinath Kamode
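The NVL approach Adinath mentions usually means building one static WHERE clause instead of swapping lexicals. A sketch using the bind names and column from the question:

AND t1.date >= NVL(:Start_date, t1.date)
AND t1.date <= NVL(:End_date, t1.date)

This way the same cursor covers all four parameter combinations, so you can compare one plan instead of four.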

  • Poor query performance when I have select ...xmlsequence in from clause

Here is my query, and it takes forever to execute. When I run the xmlsequence query alone, it takes a few seconds, but once it is added to the FROM clause and I execute the whole query below, it takes too long. Is there a better way to do the same thing?
What I am doing here: whatever is in t1 should not be in what is returned from the subquery (xmlsequence).
    select distinct t1.l_type l_type,
    TO_CHAR(t1.dt1,'DD-MON-YY') dt1 ,
    TO_CHAR(t2.dt2,'DD-MON-YY') dt2, t1.l_nm
from table1 t1, table2 t2,
(select k.column_value.extract('/RHS/value/text()').getStringVal() as lki
    from d_v dv
    , table(xmlsequence(dv.xmltypecol.extract('//c/RHS')))k
    ) qds
    where
    t2.l_id = t1.l_id
    and to_char(t1.l_id) not in qds.lki

    SQL> SELECT *
      2  FROM   TABLE(DBMS_XPLAN.DISPLAY);
    Plan hash value: 2820226721
    | Id  | Operation                            | Name                   | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                     |                        |  7611M|   907G|       |   352M  (1)|999:59:59 |
    |   1 |  HASH UNIQUE                         |                        |  7611M|   907G|  2155G|   352M  (1)|999:59:59 |
    |*  2 |   HASH JOIN                          |                        |  9343M|  1113G|       |    22M  (2)| 76:31:45 |
    |   3 |    TABLE ACCESS FULL                 | table2           |  1088 | 17408 |       |     7   (0)| 00:00:0
    |   4 |    NESTED LOOPS                      |                        |  8468M|   883G|       |    22M  (1)| 76:15:57 |
    |   5 |     MERGE JOIN CARTESIAN             |                        |  1037K|   108M|       |  4040   (1)| 00:00:49 |
    |   6 |      TABLE ACCESS FULL               | D_V        |  1127 | 87906 |       |    56   (0)| 00
    |   7 |      BUFFER SORT                     |                        |   921 | 29472 |       |  3984   (1)| 00:00:48 |
    |   8 |       TABLE ACCESS FULL              | table1                   |   921 | 29472 |       |     4   (0)| 00:00:01 |
    |*  9 |     COLLECTION ITERATOR PICKLER FETCH| XMLSEQUENCEFROMXMLTYPE |       |       |       |           
    Predicate Information (identified by operation id):
       2 - access("t2"."L_ID"="t1"."L_ID")
       9 - filter(TO_CHAR("t1"."L_ID")<>"XMLTYPE"."GETSTRINGVAL"("XMLTYPE"."EXTRACT"(VALUE(KOKB
               alue/text()')))
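One thing worth trying, as a sketch only (table and column names taken from the question, and note the MATERIALIZE hint is undocumented): factor the XMLSequence rows out once and express the exclusion as NOT EXISTS, so the pickler fetch is not re-driven inside a nested loop.

with qds as
 (select /*+ materialize */
         k.column_value.extract('/RHS/value/text()').getStringVal() as lki
  from   d_v dv,
         table(xmlsequence(dv.xmltypecol.extract('//c/RHS'))) k)
select distinct t1.l_type l_type,
       to_char(t1.dt1, 'DD-MON-YY') dt1,
       to_char(t2.dt2, 'DD-MON-YY') dt2,
       t1.l_nm
from   table1 t1, table2 t2
where  t2.l_id = t1.l_id
and    not exists
       (select null from qds where qds.lki = to_char(t1.l_id));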

  • OLAP Query performance

    Hi,
Does using compression and partitioning (by time) adversely affect reporting performance? I have an 8 GB cube with 13 dimensions built in 10.1.0.4. The cube was defined with 1 dense dimension and the other 12 sparse in a compressed composite. It was also partitioned by year. It takes close to 1 hour to build the cube. Since it is compressed, it is fully aggregated, I would assume. However, the performance of Discoverer queries on this cube has been pathetic! Any drill-downs or slice/dice operations take a long time to return if there are multiple dimensions on either edge of the crosstab. Also, when scrolling down, it freezes for a while and then brings the data. Sometimes it takes a couple of minutes!
What are the things I need to check to speed this up? I think I have already checked things like sparsity, SGA/PGA sizes, OLAP page pool, etc.
    Regards
    Suresh

    Hi Suresh,
    Before you can implement changes to improve performance, you need to understand the causes of the performance problems. Discoverer for OLAP uses the OLAP API for queries, and the OLAP API generates SQL to query an analytic workspace. There are a few broad possible causes of poor query performance:
    retrieving data from the AW is slow
    SQL execution is slow, perhaps because the SQL is inefficient
    SQL execution is fast, but the OLAP API is slow to fetch data
    Each of these causes demands a different approach. I'd suggest that you enable configuration parameters SQL_TRACE and TIMED_STATISTICS, generate some trace files, and use the tkprof utility to try to narrow down the cause of the trouble.
    Geof
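A sketch of how to generate and format those trace files for one session (standard parameters; tkprof runs at the OS prompt):

ALTER SESSION SET TIMED_STATISTICS = TRUE;
ALTER SESSION SET SQL_TRACE = TRUE;
-- ... run the slow Discoverer / OLAP API queries here ...
ALTER SESSION SET SQL_TRACE = FALSE;

-- Then, at the OS prompt, format the trace file found in USER_DUMP_DEST:
-- tkprof ora_12345.trc ora_12345.txt sort=exeela,fchela

The tkprof summary separates parse, execute and fetch times, which maps directly onto the three possible causes above.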

  • Query performance for WEBI on top of BEx

    Hello Everyone,
Is there a way to get the time taken by a WebI query execution on the BO side and on the BW side? Is this statistic stored somewhere?
Also, what is the best approach for comparing performance between the original BEx query and the WebI report on top of that BEx query?
    Thanks for your help.
    Aashish

    Hi,
I posted some links for the same question here:
    Poor query performance in WebI on top of BEx queries
    Maybe those can help you.
    Regards
    -Seb.

  • QUERY PERFORMANCE AND DATA LOADING PERFORMANCE ISSUES

What are the query performance issues we need to take care of? Please explain and let me know the T-codes.
What are the data loading performance issues we need to take care of? Please explain and let me know the T-codes.
Will reward full points.
Regards
Guru

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
8) Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
You can upload data from a data target (InfoCube and ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
9) Building secondary indexes on the tables for the selection fields optimizes these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much help when accessing data. In this case it is better to create secondary indexes with selection fields on the associated table using the ABAP Dictionary to improve selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
    Hope it Helps
    Chetan
    @CP..

  • SQL query performance question

    So I had this long query that looked like this:
SELECT a.BEGIN_DATE, a.END_DATE, a.DEAL_KEY, (select name from ideal dd where a.deal_key = dd.deal_key) DEALNAME, a.deal_term_key
FROM ideal d, ideal_term a,
     (select deal_key, deal_term_key, max(createdOn) maxdate
      from Ideal_term B
      where createdOn <= '03-OCT-12 10.03.00 AM'
      group by deal_key, deal_term_key) B
WHERE a.begin_date <= '20-MAR-09 01.01.00 AM'
  and a.end_date >= '19-MAR-09 01.00.00 AM'
  and a.deal_key = b.deal_key
  and a.deal_term_key = b.deal_term_key
  and a.createdOn = b.maxdate
  and d.deal_key = a.deal_key
  and d.name like 'MVPP1 B'
order by a.begin_date, a.deal_key, a.deal_term_key;
This performed very poorly for a record in one of the tables that has 43,000+ revisions: it took about 1 minute and 40 seconds. I asked the database guy at my company for help with it and he rewrote it like so:
    SELECT a.BEGIN_DATE, a.END_DATE, a.DEAL_KEY, (select name from ideal dd where a.deal_key = dd.deal_key) DEALNAME, a.deal_term_key
    FROM ideal d
    INNER JOIN (SELECT deal_key,
    deal_term_key,
    MAX(createdOn) maxdate
    FROM Ideal_term B2
    WHERE '03-OCT-12 10.03.00 AM' >= createdOn
    GROUP BY deal_key, deal_term_key) B1
    ON d.deal_key = B1.deal_key
    INNER JOIN ideal_term a
    ON B1.deal_key = A.deal_key
    AND B1.deal_term_key = A.deal_term_key
    AND B1.maxdate = a.createdOn
    AND d.deal_key = a.deal_key + 0
    WHERE a.begin_date <= '20-MAR-09 01.01.00 AM'
    AND a.end_date >= '19-MAR-09 01.00.00 AM'
    AND d.name LIKE 'MVPP1 B'
    ORDER BY a.begin_date, a.deal_key, a.deal_term_key
This works much better; it only takes 0.13 seconds. I've been trying to figure out why exactly his version performs so much better. His only explanation was that the "+ 0" in the WHERE clause prevented Oracle from using an index for that column, which had created a bad plan initially.
I think there has to be more to it than that, though. Can someone give me a detailed explanation of why the second version of the query performed so much faster?
Thanks.

I used Autotrace in SQL Developer. Is that sufficient? Here is the Autotrace and Explain output for the slow query:
and for the fast query:
I said that I thought there was more to it because when my team members and I looked at the reworked query the database guy sent us, our initial thought was that in the slow query some of the tables had no join conditions, so the query formed a Cartesian product, which resulted in a huge 43,000+ row matrix.
In his version all tables had joins properly defined, and in addition he had that +0, which told Oracle to ignore the index on the deal_key attribute of table ideal_term. I spoke with the database guy today and he confirmed our theory.
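For what it's worth, the "+ 0" is a classic optimizer trick rather than magic: adding 0 to a numeric column turns the predicate into an expression, and a plain index on that column cannot be used for an expression. A tiny hypothetical illustration:

-- assuming an index like: CREATE INDEX ideal_term_dk_idx ON ideal_term (deal_key);
SELECT * FROM ideal_term a WHERE a.deal_key     = 123;  -- index usable
SELECT * FROM ideal_term a WHERE a.deal_key + 0 = 123;  -- index not usable

So the speedup most likely came from steering the optimizer away from an index-driven nested-loop plan, not from the ANSI join syntax itself.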

  • How to improve query performance using infoset

I created one InfoSet including 4 characteristics and 3 DSOs, all of which are time-dependent. When the query runs, the system shows very poor performance; sometimes no data shows up in the BEx Analyzer at all. In that case I have to close the BEx Analyzer and open it again, after which it shows real results. It seems very strange. Does anybody have experience with InfoSet performance improvement? Please advise, thanks!

    Hi
An InfoSet itself doesn't hold any data, which in itself helps performance.
Also go through the tips below.
To find the query run-time, see:
Note 557870 - 'FAQ BW Query Performance'
Note 130696 - Performance trace in BW
    This info may be helpful.
    General tips
    Using aggregates and compression.
Using fewer and less complex cell definitions if possible.
1. Avoid using too many navigational attributes.
2. Avoid RKFs and CKFs.
3. Avoid too many characteristics in the rows.
Using T-codes ST03 or ST03N:
Go to transaction ST03 > switch to expert mode > from the left-side menu, under system load history and distribution for a particular day, check the query execution time.
    Statistical Records Part 4: How to read ST03N datasets from DB in NW2004
    How to read ST03N datasets from DB
    Try table rsddstats to get the statistics
Using cache memory will decrease the loading time of the report.
Run the reporting agent at night and send the results by email. This ensures use of the OLAP cache, so later report executions will retrieve the result faster from the OLAP cache.
    Also try
    1. Use different parameters in ST03 to see the two important parameters aggregation ratio and records transferred to F/E to DB selected.
    2. Use the program SAP_INFOCUBE_DESIGNS (Performance of BW infocubes) to see the aggregation ratio for the cube. If the cube does not appear in the list of this report, try to run RSRV checks on the cube and aggregates.
    Go to SE38 > Run the program SAP_INFOCUBE_DESIGNS
It will show dimension vs. fact table sizes in percent. If you take the speed of queries on a cube as the performance metric of the cube, measure the query runtime.
    3. To check the performance of the aggregates,see the columns valuation and usage in aggregates.
    Open the Aggregates...and observe VALUATION and USAGE columns.
    "---" sign is the valuation of the aggregate. You can say -3 is the valuation of the aggregate design and usage. ++ means that its compression is good and access is also more (in effect, performance is good). If you check its compression ratio, it must be good. -- means the compression ratio is not so good and access is also not so good (performance is not so good).The more is the positives...more is useful the aggregate and more it satisfies the number of queries. The greater the number of minus signs, the worse the evaluation of the aggregate. The larger the number of plus signs, the better the evaluation of the aggregate.
    if "-----" then it means it just an overhead. Aggregate can potentially be deleted and "+++++" means Aggregate is potentially very useful.
    In valuation column,if there are more positive sign it means that the aggregate performance is good and it is useful to have this aggregate.But if it has more negative sign it means we need not better use that aggregate.
    In usage column,we will come to know how far the aggregate has been used in query.
    Thus we can check the performance of the aggregate.
    Refer.
    http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/60/f0fb411e255f24e10000000a1550b0/frameset.htm
    performance ISSUE related to AGGREGATE
    Note 356732 - Performance Tuning for Queries with Aggregates
    Note 166433 - Options for finding aggregates (find optimal aggregates for an InfoCube)
    4. Run your query in RSRT and run the query in the debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug mode. This will tell you if it hit any aggregates while running. If it does not show any aggregates, you might want to redesign your aggregates for the query.
Also, your query performance can depend on the selection criteria; since you have given a selection on only one InfoProvider, just check whether you are selecting a huge amount of data in the report.
Check the query read mode in RSRT (whether it is A, X or H); the advisable read mode is X.
    5. In BI 7 statistics need to be activated for ST03 and BI admin cockpit to work.
    By implementing BW Statistics Business Content - you need to install, feed data and through ready made reports which for analysis.
    http://help.sap.com/saphelp_nw70/helpdata/en/26/4bc0417951d117e10000000a155106/frameset.htm
    /people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    http://help.sap.com/saphelp_nw04/helpdata/en/c1/0dbf65e04311d286d6006008b32e84/frameset.htm
    You can go to T-Code DB20 which gives you all the performance related information like
    Partitions
    Databases
    Schemas
    Buffer Pools
    Tablespaces etc
Use the tool RSDDK_CHECK_AGGREGATE in SE38 to check for corrupt aggregates.
If aggregates contain incorrect data, you must regenerate them.
Note 202469 - Using the aggregate check tool
Note 646402 - Programs for checking aggregates (as of BW 3.0B SP15)
You can find out whether an aggregate is useful or useless by checking the tables RSDDSTATAGGRDEF*.
Run the query in RSRT with statistics execution and you will get a STATUID; copy this and check it in the table.
This shows you exactly which InfoObjects the query hits; if any one of the objects is missing, it's a useless aggregate.
6. Check SE11 > table RSDDAGGRDIR. You can find the last callup in the table.
    Generate Report in RSRT
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
    Business Intelligence Journal Improving Query Performance in Data Warehouses
    http://www.tdwi.org/Publications/BIJournal/display.aspx?ID=7891
    Achieving BI Query Performance Building Business Intelligence
    http://www.dmreview.com/issues/20051001/1038109-1.html
    Assign points if useful
    Cheers
    SM

  • Inventory Ageing query performance

    Hi All,
I have created an inventory ageing query on our custom cube, which is a replica of 0IC_C03. We have data from 2003 onwards. The performance of the query is very poor; the system almost hangs. I tried to create aggregates to improve performance but the aggregate fill failed. What should I do to improve the performance, and why did the aggregate fill fail? The cube has compressed data. Please guide.
    Regards:
    Jitendra

In addition to the above posts,
check the points below and take action accordingly to increase the query performance.
Mainly check whether the cube data is compressed; compression will increase the performance of the query.
    1)If exclusions exist, make sure they exist in the global filter area. Try to remove exclusions by subtracting out inclusions.
    2)Check code for all exit variables used in a report.
    3)Check the read mode for the query. recommended is H.
    4)If Alternative UOM solution is used, turn off query cache.
    5)Use Constant Selection instead of SUMCT and SUMGT within formulas.
    6)Check aggregation and exception aggregation on calculated key figures. Before aggregation is generally slower and should not be used unless explicitly needed.
    7)Check if large hierarchies are used and the entry hierarchy level is as deep as possible. This limits the levels of the hierarchy that must be processed.
    Use SE16 on the inclusion tables and use the List of Value feature on the column successor and predecessor to see which entry level of the hierarchy is used.
    8)Within the free characteristics, filter on the least granular objects first and make sure those come first in the order.
    9)If hierarchies are used, minimize the number of nodes to include in the query results. Including all nodes in the query results (even the ones that are not needed or blank) slows down the query processing.
10) Check the usage of user exits involved in OLAP run time.
11) Turn on the BW Statistics: RSA1, choose Tools -> BW statistics for InfoCubes (choose OLAP and WHM for your relevant cubes).
    To check the Query Performance problem
    Use ST03N -> BW System load values to recognize the problem. Use the number given in table 'Reporting - InfoCubes:Share of total time (s)' to check if one of the columns %OLAP, %DB, %Frontend shows a high number in all InfoCubes.
    You need to run ST03N in expert mode to get these values
Based on the analysis and the values obtained above, check whether an aggregate is suitable, or adjust the OLAP settings, etc.

  • Removing Stats improves query performance

    Hi,
Today I got a rare question regarding stats collection:
in some cases, removing existing stats will improve query performance.
What is the reason behind this?
After a rigorous search I have landed here to hear from you all.
    Thanks in-advance.
    Vara

    Hi,
Before 11gR2 and adaptive cursor sharing, skewed data could result in a wrong execution plan (histograms and bind variable peeking could not fix it).
Sometimes statistics cannot be fresh enough, for example with temporary tables, but also with regular tables that are massively loaded/modified, resulting in a poor execution plan.
Also, the optimizer might use guesses for complex predicates, which can have the same consequences.
When no statistics are available, dynamic sampling may give a more accurate estimation of the cardinalities than stale statistics would.
    https://blogs.oracle.com/optimizer/entry/dynamic_sampling_and_its_impact_on_the_optimizer
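A quick sketch of how to see that last point yourself, on a hypothetical table T with a skewed column:

-- Delete the (possibly stale) statistics:
EXEC DBMS_STATS.DELETE_TABLE_STATS(USER, 'T')

-- With no stats, the optimizer falls back to dynamic sampling; the plan
-- output contains a note such as "dynamic sampling used for this statement":
EXPLAIN PLAN FOR SELECT * FROM t WHERE skewed_col = 42;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);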
