Poor query performance in Prod.

I am facing a lot of issues with my queries.
A query works fine in Dev, but after I transported it to Prod it takes too much time to retrieve the result.
Why am I facing this issue?
How can I do performance tuning for the query?
The query is built on a MultiProvider, and it also jumps to the ODS for the ODS query.
But the query performance is really poor in Production.
And, to my surprise, the query works perfectly and faster in Dev.
What would you suggest?
Please send documents for performance tuning, note numbers, etc.

Are the data volumes huge in the Prod box? That may be the cause of the slow runtimes.
Look at the performance improvement techniques below:
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cbd2d390-0201-0010-8eab-a8a9269a23c2
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/aec09790-0201-0010-8eb9-e82df5763455
Business Intelligence Performance Tuning [original link is broken]
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/c8c4d794-0501-0010-a693-918a17e663cc
Note 565725 - Optimizing the performance of ODS objects
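Before applying any of the techniques above, it helps to see where a query actually spends its time. A minimal sketch, not from this thread, assuming a BW 3.x system where query runtime statistics land in table RSDDSTAT with its standard fields: it splits runtime into database and OLAP time per InfoProvider, and a high ratio of records selected to records transferred is the classic signal that an aggregate would help.
SELECT infocube,
       SUM(qtimedb)   AS db_time,          -- time spent on the database
       SUM(qtimeolap) AS olap_time,        -- time spent in the OLAP processor
       SUM(qdbsel)    AS recs_selected,    -- records read on the database
       SUM(qdbtrans)  AS recs_transferred  -- records handed to the OLAP layer
FROM   rsddstat
GROUP  BY infocube
ORDER  BY db_time DESC;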

Similar Messages

  • Poor query performance when joining CONTAINS to another table

    We just recently began evaluating Oracle Text for a search solution. We need to be able to search a table that can have over 20+ million rows. Each user may only have visibility to a tiny fraction of those rows. The goal is to have a single Oracle Text index that represents all of the searchable columns in the table (multi-column datastore) and provides a score for each search result, so that we can sort the results in descending order by score. What we're seeing is that query performance from TOAD is extremely fast when we write a simple CONTAINS query against the Oracle Text indexed table. However, when we attempt to first reduce the rows the CONTAINS query needs to search, by using a WITH clause, query performance degrades significantly.
    For example, we can find all the records a user has access to from our base table by the following query:
    SELECT d.duns_loc
    FROM duns d
    JOIN primary_contact pc
    ON d.duns_loc = pc.duns_loc
    AND pc.emp_id = :employeeID;
    This query can execute in <100 ms. In the working example, this query returns around 1200 rows of the primary key duns_loc.
    Our search query looks like this:
    SELECT score(1), d.*
    FROM duns d
    WHERE CONTAINS(TEXT_KEY, :search,1) > 0
    ORDER BY score(1) DESC;
    The :search value in this example will be 'highway'. The query can return 246k rows in around 2 seconds.
    2 seconds is good, but we should be able to get a much faster response if the search query did not have to search the entire table, right? Since each user can only "view" records they are assigned to, we reckon that if the search operation only had to scan a tiny percentage of the TEXT index, we should see faster (and more relevant) results. If we now write the following query:
    WITH subset
    AS
    (SELECT d.duns_loc
    FROM duns d
    JOIN primary_contact pc
    ON d.duns_loc = pc.duns_loc
    AND pc.emp_id = :employeeID)
    SELECT score(1), d.*
    FROM duns d
    JOIN subset s
    ON d.duns_loc = s.duns_loc
    WHERE CONTAINS(TEXT_KEY, :search,1) > 0
    ORDER BY score(1) DESC;
    For reasons we have not been able to identify, this query actually takes longer to execute than the sum of the durations of its contributing parts: it takes over 6 seconds to run. Neither we nor our DBA can figure out why this query performs worse than a wide-open search. The wide-open search is not ideal, as the query would end up returning records the user doesn't have access to view.
    Has anyone ever run into something like this? Any suggestions on what to look at or where to go? If anyone would like more information to help in diagnosis, let me know and I'll be happy to provide it here.
    Thanks!!

    Sometimes it can be good to separate the tables into separate subquery factoring (WITH) clauses or inline views in the FROM clause, or an IN clause as a WHERE condition. Although there are some differences, using a subquery factoring (WITH) clause is similar to using an inline view in the FROM clause. However, you should avoid duplication: you should not have the same table in two different places, as in your original query.
    You should have indexes on any columns that the tables are joined on, your statistics should be current, and your domain index should have regular synchronization and optimization, and be periodically rebuilt (or dropped and recreated) to keep it performing with maximum efficiency.
    The following demonstration uses a composite domain index (CDI) with FILTER BY, as suggested by Roger, then shows the explained plans for your original query and for various alternatives. Your original query has nested loops; all of the others have the same plan without the nested loops. You could also add index hints.
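    As a minimal sketch of the domain index maintenance just mentioned (using the index name from the demonstration below; how often to run these is workload-dependent):
    EXEC CTX_DDL.SYNC_INDEX('duns_text_key_idx')               -- pick up pending DML
    EXEC CTX_DDL.OPTIMIZE_INDEX('duns_text_key_idx', 'FULL')   -- defragment the index
    ALTER INDEX duns_text_key_idx REBUILD;                     -- periodic full rebuild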
    SCOTT@orcl_11gR2> -- tables:
    SCOTT@orcl_11gR2> CREATE TABLE duns
      2    (duns_loc  NUMBER,
      3       text_key  VARCHAR2 (30))
      4  /
    Table created.
    SCOTT@orcl_11gR2> CREATE TABLE primary_contact
      2    (duns_loc  NUMBER,
      3       emp_id       NUMBER)
      4  /
    Table created.
    SCOTT@orcl_11gR2> -- data:
    SCOTT@orcl_11gR2> INSERT INTO duns VALUES (1, 'highway')
      2  /
    1 row created.
    SCOTT@orcl_11gR2> INSERT INTO primary_contact VALUES (1, 1)
      2  /
    1 row created.
    SCOTT@orcl_11gR2> INSERT INTO duns
      2  SELECT object_id, object_name
      3  FROM   all_objects
      4  WHERE  object_id > 1
      5  /
    76027 rows created.
    SCOTT@orcl_11gR2> INSERT INTO primary_contact
      2  SELECT object_id, namespace
      3  FROM   all_objects
      4  WHERE  object_id > 1
      5  /
    76027 rows created.
    SCOTT@orcl_11gR2> -- indexes:
    SCOTT@orcl_11gR2> CREATE INDEX duns_duns_loc_idx
      2  ON duns (duns_loc)
      3  /
    Index created.
    SCOTT@orcl_11gR2> CREATE INDEX primary_contact_duns_loc_idx
      2  ON primary_contact (duns_loc)
      3  /
    Index created.
    SCOTT@orcl_11gR2> -- composite domain index (cdi) with filter by clause
    SCOTT@orcl_11gR2> -- as suggested by Roger:
    SCOTT@orcl_11gR2> CREATE INDEX duns_text_key_idx
      2  ON duns (text_key)
      3  INDEXTYPE IS CTXSYS.CONTEXT
      4  FILTER BY duns_loc
      5  /
    Index created.
    SCOTT@orcl_11gR2> -- gather statistics:
    SCOTT@orcl_11gR2> EXEC DBMS_STATS.GATHER_TABLE_STATS (USER, 'DUNS')
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11gR2> EXEC DBMS_STATS.GATHER_TABLE_STATS (USER, 'PRIMARY_CONTACT')
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11gR2> -- variables:
    SCOTT@orcl_11gR2> VARIABLE employeeid NUMBER
    SCOTT@orcl_11gR2> EXEC :employeeid := 1
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11gR2> VARIABLE search VARCHAR2(100)
    SCOTT@orcl_11gR2> EXEC :search := 'highway'
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11gR2> -- original query:
    SCOTT@orcl_11gR2> SET AUTOTRACE ON EXPLAIN
    SCOTT@orcl_11gR2> WITH
      2    subset AS
      3        (SELECT d.duns_loc
      4         FROM      duns d
      5         JOIN      primary_contact pc
      6         ON      d.duns_loc = pc.duns_loc
      7         AND      pc.emp_id = :employeeID)
      8  SELECT score(1), d.*
      9  FROM   duns d
    10  JOIN   subset s
    11  ON     d.duns_loc = s.duns_loc
    12  WHERE  CONTAINS (TEXT_KEY, :search,1) > 0
    13  ORDER  BY score(1) DESC
    14  /
      SCORE(1)   DUNS_LOC TEXT_KEY
            18          1 highway
    1 row selected.
    Execution Plan
    Plan hash value: 4228563783
    | Id  | Operation                      | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT               |                   |     2 |    84 |   121   (4)| 00:00:02 |
    |   1 |  SORT ORDER BY                 |                   |     2 |    84 |   121   (4)| 00:00:02 |
    |*  2 |   HASH JOIN                    |                   |     2 |    84 |   120   (3)| 00:00:02 |
    |   3 |    NESTED LOOPS                |                   |    38 |  1292 |    50   (2)| 00:00:01 |
    |   4 |     TABLE ACCESS BY INDEX ROWID| DUNS              |    38 |  1102 |    11   (0)| 00:00:01 |
    |*  5 |      DOMAIN INDEX              | DUNS_TEXT_KEY_IDX |       |       |     4   (0)| 00:00:01 |
    |*  6 |     INDEX RANGE SCAN           | DUNS_DUNS_LOC_IDX |     1 |     5 |     1   (0)| 00:00:01 |
    |*  7 |    TABLE ACCESS FULL           | PRIMARY_CONTACT   |  4224 | 33792 |    70   (3)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("D"."DUNS_LOC"="PC"."DUNS_LOC")
       5 - access("CTXSYS"."CONTAINS"("D"."TEXT_KEY",:SEARCH,1)>0)
       6 - access("D"."DUNS_LOC"="D"."DUNS_LOC")
       7 - filter("PC"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
    SCOTT@orcl_11gR2> -- queries with better plans (no nested loops):
    SCOTT@orcl_11gR2> -- subquery factoring (with) clauses:
    SCOTT@orcl_11gR2> WITH
      2    subset1 AS
      3        (SELECT pc.duns_loc
      4         FROM      primary_contact pc
      5         WHERE  pc.emp_id = :employeeID),
      6    subset2 AS
      7        (SELECT score(1), d.*
      8         FROM      duns d
      9         WHERE  CONTAINS (TEXT_KEY, :search,1) > 0)
    10  SELECT subset2.*
    11  FROM   subset1, subset2
    12  WHERE  subset1.duns_loc = subset2.duns_loc
    13  ORDER  BY score(1) DESC
    14  /
      SCORE(1)   DUNS_LOC TEXT_KEY
            18          1 highway
    1 row selected.
    Execution Plan
    Plan hash value: 153618227
    | Id  | Operation                     | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT              |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |   1 |  SORT ORDER BY                |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |*  2 |   HASH JOIN                   |                   |    38 |  1406 |    82   (4)| 00:00:01 |
    |   3 |    TABLE ACCESS BY INDEX ROWID| DUNS              |    38 |  1102 |    11   (0)| 00:00:01 |
    |*  4 |     DOMAIN INDEX              | DUNS_TEXT_KEY_IDX |       |       |     4   (0)| 00:00:01 |
    |*  5 |    TABLE ACCESS FULL          | PRIMARY_CONTACT   |  4224 | 33792 |    70   (3)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("PC"."DUNS_LOC"="D"."DUNS_LOC")
       4 - access("CTXSYS"."CONTAINS"("TEXT_KEY",:SEARCH,1)>0)
       5 - filter("PC"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
    SCOTT@orcl_11gR2> -- inline views (sub-queries in the from clause):
    SCOTT@orcl_11gR2> SELECT subset2.*
      2  FROM   (SELECT pc.duns_loc
      3            FROM   primary_contact pc
      4            WHERE  pc.emp_id = :employeeID) subset1,
      5           (SELECT score(1), d.*
      6            FROM   duns d
      7            WHERE  CONTAINS (TEXT_KEY, :search,1) > 0) subset2
      8  WHERE  subset1.duns_loc = subset2.duns_loc
      9  ORDER  BY score(1) DESC
    10  /
      SCORE(1)   DUNS_LOC TEXT_KEY
            18          1 highway
    1 row selected.
    Execution Plan
    Plan hash value: 153618227
    | Id  | Operation                     | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT              |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |   1 |  SORT ORDER BY                |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |*  2 |   HASH JOIN                   |                   |    38 |  1406 |    82   (4)| 00:00:01 |
    |   3 |    TABLE ACCESS BY INDEX ROWID| DUNS              |    38 |  1102 |    11   (0)| 00:00:01 |
    |*  4 |     DOMAIN INDEX              | DUNS_TEXT_KEY_IDX |       |       |     4   (0)| 00:00:01 |
    |*  5 |    TABLE ACCESS FULL          | PRIMARY_CONTACT   |  4224 | 33792 |    70   (3)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("PC"."DUNS_LOC"="D"."DUNS_LOC")
       4 - access("CTXSYS"."CONTAINS"("TEXT_KEY",:SEARCH,1)>0)
       5 - filter("PC"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
    SCOTT@orcl_11gR2> -- ansi join:
    SCOTT@orcl_11gR2> SELECT SCORE(1), duns.*
      2  FROM   duns
      3  JOIN   primary_contact
      4  ON     duns.duns_loc = primary_contact.duns_loc
      5  WHERE  CONTAINS (duns.text_key, :search, 1) > 0
      6  AND    primary_contact.emp_id = :employeeid
      7  ORDER  BY SCORE(1) DESC
      8  /
      SCORE(1)   DUNS_LOC TEXT_KEY
            18          1 highway
    1 row selected.
    Execution Plan
    Plan hash value: 153618227
    | Id  | Operation                     | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT              |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |   1 |  SORT ORDER BY                |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |*  2 |   HASH JOIN                   |                   |    38 |  1406 |    82   (4)| 00:00:01 |
    |   3 |    TABLE ACCESS BY INDEX ROWID| DUNS              |    38 |  1102 |    11   (0)| 00:00:01 |
    |*  4 |     DOMAIN INDEX              | DUNS_TEXT_KEY_IDX |       |       |     4   (0)| 00:00:01 |
    |*  5 |    TABLE ACCESS FULL          | PRIMARY_CONTACT   |  4224 | 33792 |    70   (3)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("DUNS"."DUNS_LOC"="PRIMARY_CONTACT"."DUNS_LOC")
       4 - access("CTXSYS"."CONTAINS"("DUNS"."TEXT_KEY",:SEARCH,1)>0)
       5 - filter("PRIMARY_CONTACT"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
    SCOTT@orcl_11gR2> -- old join:
    SCOTT@orcl_11gR2> SELECT SCORE(1), duns.*
      2  FROM   duns, primary_contact
      3  WHERE  CONTAINS (duns.text_key, :search, 1) > 0
      4  AND    duns.duns_loc = primary_contact.duns_loc
      5  AND    primary_contact.emp_id = :employeeid
      6  ORDER  BY SCORE(1) DESC
      7  /
      SCORE(1)   DUNS_LOC TEXT_KEY
            18          1 highway
    1 row selected.
    Execution Plan
    Plan hash value: 153618227
    | Id  | Operation                     | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT              |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |   1 |  SORT ORDER BY                |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |*  2 |   HASH JOIN                   |                   |    38 |  1406 |    82   (4)| 00:00:01 |
    |   3 |    TABLE ACCESS BY INDEX ROWID| DUNS              |    38 |  1102 |    11   (0)| 00:00:01 |
    |*  4 |     DOMAIN INDEX              | DUNS_TEXT_KEY_IDX |       |       |     4   (0)| 00:00:01 |
    |*  5 |    TABLE ACCESS FULL          | PRIMARY_CONTACT   |  4224 | 33792 |    70   (3)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("DUNS"."DUNS_LOC"="PRIMARY_CONTACT"."DUNS_LOC")
       4 - access("CTXSYS"."CONTAINS"("DUNS"."TEXT_KEY",:SEARCH,1)>0)
       5 - filter("PRIMARY_CONTACT"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
    SCOTT@orcl_11gR2> -- in clause:
    SCOTT@orcl_11gR2> SELECT SCORE(1), duns.*
      2  FROM   duns
      3  WHERE  CONTAINS (duns.text_key, :search, 1) > 0
      4  AND    duns.duns_loc IN
      5           (SELECT primary_contact.duns_loc
      6            FROM   primary_contact
      7            WHERE  primary_contact.emp_id = :employeeid)
      8  ORDER  BY SCORE(1) DESC
      9  /
      SCORE(1)   DUNS_LOC TEXT_KEY
            18          1 highway
    1 row selected.
    Execution Plan
    Plan hash value: 3825821668
    | Id  | Operation                     | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT              |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |   1 |  SORT ORDER BY                |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |*  2 |   HASH JOIN SEMI              |                   |    38 |  1406 |    82   (4)| 00:00:01 |
    |   3 |    TABLE ACCESS BY INDEX ROWID| DUNS              |    38 |  1102 |    11   (0)| 00:00:01 |
    |*  4 |     DOMAIN INDEX              | DUNS_TEXT_KEY_IDX |       |       |     4   (0)| 00:00:01 |
    |*  5 |    TABLE ACCESS FULL          | PRIMARY_CONTACT   |  4224 | 33792 |    70   (3)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("DUNS"."DUNS_LOC"="PRIMARY_CONTACT"."DUNS_LOC")
       4 - access("CTXSYS"."CONTAINS"("DUNS"."TEXT_KEY",:SEARCH,1)>0)
       5 - filter("PRIMARY_CONTACT"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
    SCOTT@orcl_11gR2>

  • Poor query performance only with migrated 7.0 queries

    Dear Team,
    We are facing a serious query performance issue after migrating queries from 3.5 to 7.0.
    I executed a query in 3.5 with some variable values and it took a fraction of a second to display the output. The same migrated query with the same variable entries takes a very long time and gives a timeout error.
    We are not using any aggregates at the InfoProvider level.
    Both queries are based on the same cube, but the 3.5 query takes less time while the 7.0 query takes a very long time when more selections are made.
    I checked for notes but didn't find a specific note for this scenario; I found notes only for general query performance improvement.
    I want to know why the same 3.5 query takes a long time and times out only in 7.0. Please also suggest some notes or ideas related to this scenario.
    Regards,
    Chan

    Hi,
    Queries in BI 7.0 are almost the same as queries in 3.x format.
    In order to check whether the problem is in the query runtime (database time) or in the Java runtime (probably rendering), you should try running it from RSRT once in Java web and once in ABAP web.
    If the problem is only with Java web, then you should take the URL and add &profiling=X at the end.
    After the query executes you can use the statistics, which will be shown at the top of the page.
    In my experience, the problem is in the rendering phase of the query. One thing that can be done is to limit the number of rows shown on each page; that can be done by changing the 0ANALYSIS web template (it's one of the web template parameters).
    Tomer.

  • Problems with Full loads/Decreased query performance in Prod

    We have a table which serves as the base for a complex view. The table has roughly 10 million records, and it's a daily full load. (I know delta loads are much better for handling large data sets, but this information is very dynamic, and with the business time constraints and project deliverables, the best decision was to do a full load.)
    This is the process we follow:
    > Drop indexes (individual indexes on all columns used as joins inside the complex view)
    > Truncate table
    > Load data
    > Recreate indexes.
    All the above steps are performed from SAP Data Services through scripts and the sql() function to execute the commands, with no manual intervention whatsoever.
    After the job completes successfully, the view doesn't refresh at all (it sits there forever). The same job, run across the same volume of production data in the Test environment, performs much faster.
    The way I refresh the view is to manually log into SQL Developer, drop all the indexes on the parent table, and re-create them in the same order as the Data Services script. The view then performs very well until the next load (the next morning).
    Any suggestions would be very helpful.
    My main question is: why does it run fast when I drop and recreate the indexes manually, yet never complete when the indexes are created by the sql() function from the Data Services tool?
    Tried:
    Explain Plan (in Dev, Test, Prod): query cost varied across environments, but results came back with the same response times (in Production, after manual index creation).
    Tuning Advisor (only in Test): the DBA evaluated it as good.
    Thanks
    Nash
    DB Version Oracle 11.0.7
    Dataservices 3.2

    BluShadow and Harman
    Thank You!
    I'm using a regular view, not a materialized view. And yes, the query plan is completely different between Test and Production: in Test the query ran entirely on hash joins, whereas in Production the execution plan uses nested loops.
    I will try gathering statistics after the load and, as per BluShadow, will look at writing a function that makes the call to the database.
    Thank you all for taking the time. I will start testing this today and will extend the tests over a couple of days.
    Regards
    Nash
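    For reference, a minimal sketch of the nightly cycle with an explicit statistics-gathering step added after the index recreation, the usual cure when production falls back to a bad plan after a truncate-and-reload (table and index names are placeholders, not the actual Data Services script):
    DROP INDEX base_tab_col1_idx;
    TRUNCATE TABLE base_tab;
    -- ... bulk load ...
    CREATE INDEX base_tab_col1_idx ON base_tab (col1);
    -- refresh optimizer statistics; cascade => TRUE covers the fresh indexes:
    EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'BASE_TAB', cascade => TRUE)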

  • Poor query performance in WebI on top of BEx queries

    Hello,
    We are using Web Intelligence in BO 3.1 (SP0) to report on BEx queries (BW 7.0).
    We find, however, that reporting on the same set of data is much quicker in BEx Analyzer than in WebI. In BEx it is acceptable; in WebI it is not.
    How can we optimize performance in this scenario? What tricks are there?
    The amount of data is not huge (max 500,000 records in the cube).
    BR,
    Fredrik

    Hi,
    Dennis is right. But if you need to upgrade (and you do), then going to the latest SP, which is SP4, may be better. With SP3 the WebI option "Query Stripping" was introduced, which also improves performance.
    Also check
    http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/d0bf4691-cdce-2d10-45ba-d1ff39408eb4
    http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/109b7d63-7cab-2d10-2fbc-be5c61dcf110
    http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/006b1374-8f91-2d10-fe9c-f9fa12e2f595
    Regards
    -Seb.

  • Poor query performance when using date range

    Hello,
    We have the following ABAP code:
    select sptag werks vkorg vtweg spart kunnr matnr periv volum_01 voleh
          into table tab_aux
          from s911
          where vkorg in c_vkorg
            and werks in c_werks
            and sptag in c_sptag
            and matnr in c_matnr
    that is translated to the following Oracle query:
    SELECT
    "SPTAG" , "WERKS" , "VKORG" , "VTWEG" , "SPART" , "KUNNR" , "MATNR" , "PERIV" , "VOLUM_01" ,"VOLEH" FROM SAPR3."S911" WHERE "MANDT" = '003' AND "VKORG" = 'D004' AND "SPTAG" BETWEEN 20101201 AND 20101231 AND "MATNR" BETWEEN 000000000100000000 AND 000000000999999999;
    Because the field SPTAG is not enclosed in apostrophes, the Oracle query performs very badly. Below are the execution plans and their costs, with and without the apostrophes. Please help me understand why I am getting this behaviour.
    ##WITH APOSTROPHES
    SQL> EXPLAIN PLAN FOR
      2  SELECT
      3  "SPTAG" , "WERKS" , "VKORG" , "VTWEG" , "SPART" , "KUNNR" , "MATNR" , "PERIV" , "VOLUM_01" ,"VOLEH" FROM SAPR3."S911" WHERE "MANDT" = '003' AND "VKORG" = 'D004' AND "SPTAG" BETWEEN '20101201' AND '20101231' AND "MATNR" BETWEEN '000000000100000000' AND '000000000999999999';
    Explained.
    SQL> SELECT PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY());
    PLAN_TABLE_OUTPUT
    | Id  | Operation                   | Name     | Rows | Bytes | Cost (%CPU)|
    |   0 | SELECT STATEMENT            |          |  932 | 62444 |   150   (1)|
    |   1 | TABLE ACCESS BY INDEX ROWID | S911     |  932 | 62444 |   149   (0)|
    |   2 | INDEX RANGE SCAN            | S911~VAC |  55M |       |     5   (0)|
    Predicate Information (identified by operation id):
       1 - filter("VKORG"='D004' AND "SPTAG">='20101201' AND
                  "SPTAG"<='20101231')
       2 - access("MANDT"='003' AND "MATNR">='000000000100000000' AND
                  "MATNR"<='000000000999999999')
    ##WITHOUT APOSTROPHES
    SQL> EXPLAIN PLAN FOR
      2  SELECT
      3  "SPTAG" , "WERKS" , "VKORG" , "VTWEG" , "SPART" , "KUNNR" , "MATNR" , "PERIV" , "VOLUM_01" ,"VOLEH" FROM SAPR3."S911" WHERE "MANDT" = '003' AND "VKORG" = 'D004' AND "SPTAG" BETWEEN 20101201 AND 20101231 AND "MATNR" BETWEEN '000000000100000000' AND '000000000999999999';
    Explained.
    SQL> SELECT PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY());
    PLAN_TABLE_OUTPUT
    | Id  | Operation                   | Name     | Rows | Bytes | Cost (%CPU)|
    |   0 | SELECT STATEMENT            |          | 2334 |  152K |   150   (1)|
    |   1 | TABLE ACCESS BY INDEX ROWID | S911     | 2334 |  152K |   149   (0)|
    |   2 | INDEX RANGE SCAN            | S911~VAC |  55M |       |     5   (0)|
    Predicate Information (identified by operation id):
       1 - filter("VKORG"='D004' AND TO_NUMBER("SPTAG")>=20101201 AND
                  TO_NUMBER("SPTAG")<=20101231)
       2 - access("MANDT"='003' AND "MATNR">='000000000100000000' AND
                  "MATNR"<='000000000999999999')
    Best Regards,
    Daniel G.

    Volker,
    Answering your question regarding the explain plan from ST05: as a quick workaround I created an index (S911~Z9), but I would still like to solve this issue without the extra index, since the primary index would work fine if the date were sent to Oracle as a string rather than as a number.
    SELECT                                                                         
      "SPTAG" , "WERKS" , "VKORG" , "VTWEG" , "SPART" , "KUNNR" , "MATNR" ,        
      "PERIV" , "VOLUM_01" , "VOLEH"                                               
    FROM                                                                           
      "S911"                                                                       
    WHERE                                                                          
      "MANDT" = :A0 AND "VKORG" = :A1 AND "SPTAG" BETWEEN :A2 AND :A3 AND "MATNR"  
      BETWEEN :A4 AND :A5                                                          
    A0(CH,3)  = 003              
    A1(CH,4)  = D004             
    A2(NU,8)  = 20101201  (NU means number correct?)       
    A3(NU,8)  = 20101231         
    A4(CH,18) = 000000000100000000
    A5(CH,18) = 000000000999999999
    SELECT STATEMENT ( Estimated Costs = 10 , Estimated #Rows = 6 )
      3 FILTER
          Filter Predicates
        2 TABLE ACCESS BY INDEX ROWID S911
            ( Estim. Costs = 10 , Estim. #Rows = 6 )
            Estim. CPU-Costs = 247.566 Estim. IO-Costs = 10
          1 INDEX RANGE SCAN S911~Z9
              ( Estim. Costs = 7 , Estim. #Rows = 20 )
              Search Columns: 4
              Estim. CPU-Costs = 223.202 Estim. IO-Costs = 7
              Access Predicates Filter Predicates
    The table originally includes the following indexes:
    ###S911~0
    MANDT
    SSOUR
    VRSIO
    SPMON
    SPTAG
    SPWOC
    SPBUP
    VKORG
    VTWEG
    SPART
    VKBUR
    VKGRP
    KONDA
    KUNNR
    WERKS
    MATNR
    ###S911~VAC
    MANDT
    MATNR
    Number of entries: 61.303.517
    DISTINCT VKORG: 65
    DISTINCT SPTAG: 3107
    DISTINCT MATNR: 2939
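    To summarize the root cause visible in the two plans: SPTAG is a character column, so a numeric bind or literal forces TO_NUMBER("SPTAG") onto every row, turning the date range from an index access predicate into a mere filter. A minimal, self-contained sketch of the trap (demo table and index names are illustrative, not from the system above):
    CREATE TABLE s911_demo (mandt CHAR(3), sptag CHAR(8));
    CREATE INDEX s911_demo_idx ON s911_demo (sptag);
    -- numeric literals: Oracle rewrites the predicate as TO_NUMBER(sptag),
    -- so the index on the character column cannot be used for the range:
    SELECT * FROM s911_demo WHERE sptag BETWEEN 20101201 AND 20101231;
    -- string literals keep the predicate on the raw column, index usable:
    SELECT * FROM s911_demo WHERE sptag BETWEEN '20101201' AND '20101231';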

  • Poor query performance

    I have an Oracle 10g R2 10.2.0.4 data warehouse with a fact table that holds records that are input in chronological order. Each month of records is contained in its own tablespace. Each tablespace is partitioned by a range partition on the day and subpartitioned by a hash of the primary key into 8 subpartitions, which results in daily partitions. A tablespace usually contains 7 datafiles that are 16384m in size for a total of 114688m (115GB) in physical size. All datafiles are on a SAN within ASM disk groups.
    Why, when I query for counts in the fact table over the first three days of a month, does it return much faster (orders of magnitude faster) than when I query for counts over the last three days of a month?
    The schema is a star schema and according to the explain plans for both queries star transforms are being achieved.
    Is it the file size? What do you think?

    As Oraclefusions said, I think we need to know more about your stats collection method. Are you using the Oracle-provided scheduler job with the default settings, or have you modified it?
    When you look at the last_analyzed date for the table, how often are the statistics being updated?
    Have you tried running explain plan for the two sets of data? If the plans do not vary, have you queried v$sql_plan for the actual execution plans to see how Oracle is solving the two queries?
    HTH -- Mark D Powell --
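    A minimal sketch of those two checks (the fact table name is illustrative):
    -- 1. How fresh are the statistics on the daily partitions?
    SELECT partition_name, last_analyzed, num_rows
    FROM   user_tab_partitions
    WHERE  table_name = 'FACT_TABLE'
    ORDER  BY partition_position;
    -- 2. The plan actually used by the last statement run in this session,
    --    taken from the cursor cache rather than EXPLAIN PLAN's estimate:
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR);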

  • Poor query performance with BETWEEN

    I'm using Oracle Reports 6i.
    I needed to add Date range parameters (Starting and Ending dates) to a report. I used lexicals in the Where condition to handle the logic.
    If no dates given,
    Start_LEX := '/**/' and
    End_LEX := '/**/'
    If Start_date given,
    Start_LEX := 'AND t1.date >= :Start_date'
    If End_date given,
    End_LEX := 'AND t1.date <= :End_date'
    When I run the report with no dates or only one of the dates, it finishes in 3 to 8 seconds.
    But when I supply both dates, it takes > 5 minutes.
    So I did the following
    If both dates are given and Start_date = End date,
    Start_LEX := 'AND t1.date = :Start_date'
    End_LEX := '/**/'
    This got the response back to the 3 - 8 second range.
    I then tried this
    if both dates are given and Start_date != End date,
    Start_LEX := 'AND t1.date BETWEEN :Start_date AND :End_date'
    End_LEX := '/**/'
    This didn't help. The response was still in the 5+ minutes range.
    If I run the query outside of Oracle Reports, in PL/SQL Developer or SQLplus, it returns the same data in 3 - 8 seconds in all cases.
    Does anyone know what is going on in Oracle Reports when a date is compared with two values either separately or with a BETWEEN? Why does the query take approx. 60 times as long to execute?

    Hi,
    First, compare the access plans using BETWEEN and using <= and >=.
    Then try applying NVL logic when forming the lexical parameters.
    Adinath Kamode
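    One way to read the NVL suggestion above is to emit a single fixed predicate that covers all four cases (no dates, start only, end only, both) instead of swapping lexical fragments; a hedged sketch using the column and binds from the post, assuming t1.date is never null:
    AND t1.date >= NVL(:Start_date, t1.date)
    AND t1.date <= NVL(:End_date, t1.date)
    Because the predicate text never changes, the report reuses one execution plan regardless of which parameters are supplied; compare its access plan against the BETWEEN version before settling on it.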

  • Poor query performance when I have select ...xmlsequence in from clause

    Here is my query, and it takes forever to execute. When I run the xmlsequence subquery alone, it takes a few seconds, but once it is added to the FROM clause and I execute the whole query below, it takes far too long. Is there a better way to do the same thing?
    The intent is that whatever is in t1 should not be among what is returned from the subquery (xmlsequence).
    select distinct t1.l_type l_type,
           TO_CHAR(t1.dt1,'DD-MON-YY') dt1,
           TO_CHAR(t2.dt2,'DD-MON-YY') dt2,
           t1.l_nm
    from table1 t1, table2 t2,
         (select k.column_value.extract('/RHS/value/text()').getStringVal() as lki
          from d_v dv,
               table(xmlsequence(dv.xmltypecol.extract('//c/RHS'))) k
         ) qds
    where t2.l_id = t1.l_id
    and to_char(t1.l_id) not in qds.lki

    SQL> SELECT *
      2  FROM   TABLE(DBMS_XPLAN.DISPLAY);
    Plan hash value: 2820226721
    | Id  | Operation                            | Name                   | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                     |                        |  7611M|   907G|       |   352M  (1)|999:59:59 |
    |   1 |  HASH UNIQUE                         |                        |  7611M|   907G|  2155G|   352M  (1)|999:59:59 |
    |*  2 |   HASH JOIN                          |                        |  9343M|  1113G|       |    22M  (2)| 76:31:45 |
    |   3 |    TABLE ACCESS FULL                 | table2           |  1088 | 17408 |       |     7   (0)| 00:00:0
    |   4 |    NESTED LOOPS                      |                        |  8468M|   883G|       |    22M  (1)| 76:15:57 |
    |   5 |     MERGE JOIN CARTESIAN             |                        |  1037K|   108M|       |  4040   (1)| 00:00:49 |
    |   6 |      TABLE ACCESS FULL               | D_V        |  1127 | 87906 |       |    56   (0)| 00
    |   7 |      BUFFER SORT                     |                        |   921 | 29472 |       |  3984   (1)| 00:00:48 |
    |   8 |       TABLE ACCESS FULL              | table1                   |   921 | 29472 |       |     4   (0)| 00:00:01 |
    |*  9 |     COLLECTION ITERATOR PICKLER FETCH| XMLSEQUENCEFROMXMLTYPE |       |       |       |           
    Predicate Information (identified by operation id):
       2 - access("t2"."L_ID"="t1"."L_ID")
       9 - filter(TO_CHAR("t1"."L_ID")<>"XMLTYPE"."GETSTRINGVAL"("XMLTYPE"."EXTRACT"(VALUE(KOKBF$),'/RHS/value/text()')))

  • OLAP Query performance

    Hi,
    Does using compression and partitioning (by time) adversely affect reporting performance? I have an 8GB cube with 13 dimensions built in 10.1.0.4. The cube was defined with 1 dense dimension and the other 12 as sparse in a compressed composite, and it was partitioned by year. It takes close to 1 hour to build the cube, and since it is compressed, I would assume it is fully aggregated. However, the performance of Discoverer queries on this cube has been pathetic! Any drill-down or slice/dice takes a long time to return if there are multiple dimensions on either edge of the crosstab. Also, when scrolling down, it freezes for a while before bringing back the data; sometimes it takes a couple of minutes!
    What do I need to check to speed this up? I think I have already checked things like sparsity, SGA/PGA sizes, OLAP page pool, etc.
    Regards
    Suresh

    Hi Suresh,
    Before you can implement changes to improve performance, you need to understand the causes of the performance problems. Discoverer for OLAP uses the OLAP API for queries, and the OLAP API generates SQL to query an analytic workspace. There are a few broad possible causes of poor query performance:
    - retrieving data from the AW is slow
    - SQL execution is slow, perhaps because the SQL is inefficient
    - SQL execution is fast, but the OLAP API is slow to fetch data
    Each of these causes demands a different approach. I'd suggest that you enable the configuration parameters SQL_TRACE and TIMED_STATISTICS, generate some trace files, and use the tkprof utility to try to narrow down the cause of the trouble.
    Geof
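    A minimal sketch of that tracing setup, run in the session being tested (trace file name and location vary by instance):
    ALTER SESSION SET timed_statistics = TRUE;
    ALTER SESSION SET sql_trace = TRUE;
    -- ... run the slow Discoverer/OLAP query in this session, then format
    -- the trace file from the OS shell, for example:
    --   tkprof orcl_ora_12345.trc report.txt sys=no sort=exeela,fchela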

  • Query performance for WEBI on top of BEx

    Hello Everyone,
    Is there a way to get the time taken by a WebI query execution on the BO side and on the BW side? Is this statistic stored somewhere?
    Also, what is the best approach for comparing performance between the original BEx query and the WebI report on top of that BEx query?
    Thanks for your help.
    Aashish

    Hi,
    I posted some links for the same question here:
    Poor query performance in WebI on top of BEx queries
    Maybe those can help you.
    Regards
    -Seb.

  • Query performed well in dev but hanging on prod

    Hi DBAs,
    Urgent help is required on this issue.
    One of my SQL queries performed well on the dev server, but for the past week it has been hanging on the prod server.
    * How can I proceed to troubleshoot the problem?
    * What areas do I need to look at, and for what?
    Regards
    Asif

    Are both databases the same version?
    How recent is your dev data (if possible, update your dev from prod)?
    If a prod-to-dev copy is not possible
    ....AND the stats for both DBs are up to date
    ....AND there is a big difference in DB size between dev and prod
    ....=> you can export prod's stats and import them into dev, then run your query on dev to localise the issue, and start tuning the query on dev.
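    A minimal sketch of that statistics copy with DBMS_STATS (schema and stats-table names are illustrative):
    -- on prod: stage the schema statistics in a transportable table
    EXEC DBMS_STATS.CREATE_STAT_TABLE('APPOWNER', 'PROD_STATS')
    EXEC DBMS_STATS.EXPORT_SCHEMA_STATS('APPOWNER', 'PROD_STATS')
    -- move table APPOWNER.PROD_STATS to dev (exp/imp or a database link), then:
    EXEC DBMS_STATS.IMPORT_SCHEMA_STATS('APPOWNER', 'PROD_STATS')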

  • Query performs well in dev but hanging on prod

    Hi DBAs,
    Urgent help is required on this issue.
    One of my SQL queries performed well on the dev server, but for the past week it has been hanging on the prod server.
    * How can I proceed to troubleshoot the problem?
    * What areas do I need to look at, and for what?
    Regards
    Asif

    There could be plenty of reasons for this behaviour to occur. Since you didn't provide valid inputs, one really cannot offer pinpoint advice.
    In that case, I would simply recommend that you post the output of the explain plan after making sure your stats are up to date.
    hare krishna
    Alok

  • Query performance and data loading performance issues

    What query performance issues do we need to take care of? Please explain and let me know the transaction codes.
    What data-loading performance issues do we need to take care of? Please explain and let me know the transaction codes.
    Will reward full points.
    Regards,
    Guru

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8). Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube or ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9). Building secondary indexes on the tables for the selection fields optimizes these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much of a help when accessing data. In this case it is better to create secondary indexes with selection fields on the associated table using ABAP Dictionary to improve better selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
    Hope it Helps
    Chetan
    @CP..

  • Query performance and Transport

    Guys (those of you already working in the field may be able to better answer this question):
    Let's say an end user complains about a certain query's performance, and the developer ends up creating aggregates on the cube. Would he do all of this on the dev box and then transport to QA and on to the Production environment, or directly on the production environment without needing to transport anything? Any documentation on this would be helpful.
    Thanks,
    RG

    Aggregates are created directly in production.
    We create aggregates based on query statistics, which reflect the data read by the query and the various runtimes.
    Since these statistics are based on the data in prod, we develop directly in prod. Dev won't have any data, so you have no basis for creating the aggregate there. Even if you use statistics data from prod to create the aggregate in dev, you still cannot verify the performance improvement in dev, as it won't have the data prod has.
