Query Performance COST - HELP

Hello Experts,
Please help me: how can the table "digital_compatibility" be modified for faster query performance?
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bi
PL/SQL Release 10.2.0.1.0 - Production
CORE     10.2.0.1.0     Production
TNS for IBM/AIX RISC System/6000: Version 10.2.0.1.0 - Productio
NLSRTL Version 10.2.0.1.0 - Production
Record counts for the tables:
SELECT count(*) FROM DEVICE_TYPE; --421
SELECT count(*) FROM DIGITAL_COMPATIBILITY; --227757
CREATE TABLE DEVICE_TYPE (
     DEVICE_TYPE_ID          NUMBER(38,0),
     DEVICE_TYPE_MAKE        VARCHAR2(256 BYTE),
     DEVICE_TYPE_MODEL       VARCHAR2(256 BYTE),
     DEVICE_DISPLAY_NAME     VARCHAR2(256 BYTE),
     PARTNER_DEVICE_TYPE     VARCHAR2(256 BYTE),
     DEVICE_IMAGE_URL        VARCHAR2(256 BYTE),
     FOH_BUTTON_NAME         VARCHAR2(256 BYTE),
     FOH_ACTIVE_FLAG         CHAR(1 BYTE),
     BB_RETAIL_FLAG          CHAR(1 BYTE),
     DISPLAY_DESCRIPTION     VARCHAR2(256 BYTE),
     DEVICE_CATEGORY_ID      NUMBER(38,0),
     DEVICE_SUB_CATEGORY_ID  NUMBER(38,0),
     DEVICE_BRAND_ID         NUMBER(38,0),
     PARENT_ID               NUMBER(38,0),
     POWERED_BY              VARCHAR2(256 BYTE),
     CARRIER                 VARCHAR2(256 BYTE),
     CAPABILITY_SET_ID       NUMBER(38,0),
     CREATED_BY              VARCHAR2(32 BYTE),
     CREATED_DATE  DATE,
     UPDATED_BY  VARCHAR2(32 BYTE),
     UPDATED_DATE  DATE,
     POWERED_BY_DEVICE_TYPE    VARCHAR2(64 BYTE),
     OPERATING_SYSTEM          VARCHAR2(32 BYTE),
     OPERATING_SYSTEM_VERSION  VARCHAR2(32 BYTE),
     BROWSER                   VARCHAR2(32 BYTE),
     BROWSER_VERSION           VARCHAR2(32 BYTE),
     CLASSIFICATION            VARCHAR2(32 BYTE),
    CONSTRAINT  PK_DEVICE_TYPE  PRIMARY KEY ( DEVICE_TYPE_ID));
CREATE INDEX DEVICE_TYPE_IDX ON DEVICE_TYPE
    (CAPABILITY_SET_ID,
     UPPER(PARTNER_DEVICE_TYPE));
CREATE TABLE DIGITAL_COMPATIBILITY (
    DIGITAL_COMPATIBILITY_ID NUMBER NOT NULL ENABLE,
    CAPABILITY_SET_ID        NUMBER,
    OBJECT_TYPE              VARCHAR2(38 BYTE) NOT NULL ENABLE,
    CREATED_DATE DATE NOT NULL ENABLE,
    CREATED_BY VARCHAR2(38 BYTE) NOT NULL ENABLE,
    UPDATED_DATE DATE NOT NULL ENABLE,
    UPDATED_BY        VARCHAR2(38 BYTE) NOT NULL ENABLE,
    OBJECT_ID         VARCHAR2(114 BYTE),
    ENCODE_PROFILE_ID NUMBER);
CREATE INDEX ENCODE_PROFILE_ID_IDX ON DIGITAL_COMPATIBILITY
    (ENCODE_PROFILE_ID,
     OBJECT_ID,
     OBJECT_TYPE);
Query
=====
EXPLAIN PLAN FOR
SELECT  /*+ INDEX(dc, ENCODE_PROFILE_ID_IDX) */
DISTINCT dc.object_id AS title_id
FROM digital_compatibility dc,
  device_type dt
WHERE dc.capability_set_id        = dt.capability_set_id
AND upper(dt.partner_device_type) = :1
AND dc.object_id                 IN (:2, :3, :4, :5, :6, :7, :8, :9, :10, :11, :12, :13, :14, :15, :16, :17, :18, :19, :20, :21, :22, :23, :24, :25, :26, :27, :28, :29, :30, :31, :32, :33)
AND dc.object_type                =:"SYS_B_0";
Explain plan
| Id  | Operation                       | Name                  | Rows  | Bytes | Cost (%CPU)|
|   0 | SELECT STATEMENT                |                       |     2 |   472 |   274   (4)|
|   1 |  HASH UNIQUE                    |                       |     2 |   472 |   274   (4)|
|*  2 |   MAT_VIEW ACCESS BY INDEX ROWID| DIGITAL_COMPATIBILITY |     1 |    93 |    68   (3)|
|   3 |    NESTED LOOPS                 |                       |     2 |   472 |   273   (4)|
|*  4 |     INDEX FULL SCAN             | DEVICE_TYPE_IDX       |     4 |   572 |     1   (0)|
|*  5 |     INDEX FULL SCAN             | ENCODE_PROFILE_ID_IDX |     8 |       |    67   (3)|
Predicate Information (identified by operation id):
   2 - filter("DC"."CAPABILITY_SET_ID"="DT"."CAPABILITY_SET_ID")
   4 - access(UPPER("PARTNER_DEVICE_TYPE")=:1)
       filter(UPPER("PARTNER_DEVICE_TYPE")=:1)
   5 - access("DC"."OBJECT_TYPE"=:SYS_B_0)
       filter(("DC"."OBJECT_ID"=:2 OR "DC"."OBJECT_ID"=:3 OR "DC"."OBJECT_ID"=:4 OR
              "DC"."OBJECT_ID"=:5 OR "DC"."OBJECT_ID"=:6 OR "DC"."OBJECT_ID"=:7 OR
              "DC"."OBJECT_ID"=:8 OR "DC"."OBJECT_ID"=:9 OR "DC"."OBJECT_ID"=:10 OR
              "DC"."OBJECT_ID"=:11 OR "DC"."OBJECT_ID"=:12 OR "DC"."OBJECT_ID"=:13 OR
              "DC"."OBJECT_ID"=:14 OR "DC"."OBJECT_ID"=:15 OR "DC"."OBJECT_ID"=:16 OR
              "DC"."OBJECT_ID"=:17 OR "DC"."OBJECT_ID"=:18 OR "DC"."OBJECT_ID"=:19 OR
              "DC"."OBJECT_ID"=:20 OR "DC"."OBJECT_ID"=:21 OR "DC"."OBJECT_ID"=:22 OR
              "DC"."OBJECT_ID"=:23 OR "DC"."OBJECT_ID"=:24 OR "DC"."OBJECT_ID"=:25 OR
              "DC"."OBJECT_ID"=:26 OR "DC"."OBJECT_ID"=:27 OR "DC"."OBJECT_ID"=:28 OR
              "DC"."OBJECT_ID"=:29 OR "DC"."OBJECT_ID"=:30 OR "DC"."OBJECT_ID"=:31 OR
              "DC"."OBJECT_ID"=:32 OR "DC"."OBJECT_ID"=:33) AND "DC"."OBJECT_TYPE"=:SYS_B_0)
Note
   - 'PLAN_TABLE' is old version
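Side note on that warning: 'PLAN_TABLE' is old version usually means a stale local plan table left over from an earlier release. A common fix, assuming nothing else depends on the local copy, is:

-- Drop the stale local PLAN_TABLE; 10.2 then resolves the name through
-- the public synonym to the current global temporary plan table.
DROP TABLE plan_table;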
Trace
recursive calls     280
db block gets     16
consistent gets     97
physical reads     0
redo size     3224
bytes sent via SQL*Net to client     589
bytes received via SQL*Net from client     1598
SQL*Net roundtrips to/from client     2
sorts (memory)     4
sorts (disk)     0
                        Thanks ....

Your index on DIGITAL_COMPATIBILITY is on ENCODE_PROFILE_ID, OBJECT_ID, OBJECT_TYPE,
but you query on OBJECT_ID and OBJECT_TYPE.
How many rows do you identify with this? What is the PK?
The way it is now, it needs to read the full index, then the table; and as DEVICE_TYPE is small, it does a nested loop to it.
Makes sense.
If you added an index on OBJECT_ID, OBJECT_TYPE and CAPABILITY_SET_ID, Oracle would only need to read the index.
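A minimal sketch of that suggestion (the index name is invented; the two equality-filter columns lead, and CAPABILITY_SET_ID rides along so the join can be answered from the index):

-- Hypothetical covering index: OBJECT_ID and OBJECT_TYPE match the
-- query's filters, and CAPABILITY_SET_ID lets the join column be read
-- from the index without visiting the table.
CREATE INDEX DC_OBJ_TYPE_CAP_IDX ON DIGITAL_COMPATIBILITY
    (OBJECT_ID, OBJECT_TYPE, CAPABILITY_SET_ID);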

Similar Messages

  • Query Performance Tuning - Help

    Hello Experts,
    Good Day to all...
    TEST@ora10g>select * from v$version;
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
    PL/SQL Release 10.2.0.4.0 - Production
    "CORE     10.2.0.4.0     Production"
    TNS for IBM/AIX RISC System/6000: Version 10.2.0.4.0 - Productio
    NLSRTL Version 10.2.0.4.0 - Production
    SELECT fa.user_id,
           fa.notation_type,
           MAX(fa.created_date) maxDate,
           COUNT(*) bk_count
    FROM book_notations fa
    WHERE fa.user_id IN
        (SELECT user_id
         FROM
           (SELECT /*+ INDEX(f2,FBK_AN_ID_IDX) */
                   f2.user_id,
                   MAX(f2.notatn_id) f2_annotation_id
            FROM book_notations f2,
                 title_relation tdpr
            WHERE f2.user_id IN ('100002616221644',
                                 '100002616221645',
                                 '100002616221646',
                                 '100002616221647',
                                 '100002616221648')
              AND f2.pack_id = tdpr.pack_id
              AND tdpr.title_id = 93402
            GROUP BY f2.user_id
            ORDER BY 2 DESC)
         WHERE ROWNUM <= 10)
    GROUP BY fa.user_id,
             fa.notation_type
    ORDER BY 3 DESC;
    The cost of the query is too high.
    Below is the explain plan of the query
    | Id  | Operation                                  | Name                           | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                           |                                |    29 |  1305 |    52  (10)| 00:00:01 |
    |   1 |  SORT ORDER BY                             |                                |    29 |  1305 |    52  (10)| 00:00:01 |
    |   2 |   HASH GROUP BY                            |                                |    29 |  1305 |    52  (10)| 00:00:01 |
    |   3 |    TABLE ACCESS BY INDEX ROWID             | book_notations                 |    11 |   319 |     4   (0)| 00:00:01 |
    |   4 |     NESTED LOOPS                           |                                |    53 |  2385 |    50   (6)| 00:00:01 |
    |   5 |      VIEW                                  | VW_NSO_1                       |     5 |    80 |    29   (7)| 00:00:01 |
    |   6 |       HASH UNIQUE                          |                                |     5 |    80 |            |          |
    |*  7 |        COUNT STOPKEY                       |                                |       |       |            |          |
    |   8 |         VIEW                               |                                |     5 |    80 |    29   (7)| 00:00:01 |
    |*  9 |          SORT ORDER BY STOPKEY             |                                |     5 |   180 |    29   (7)| 00:00:01 |
    |  10 |           HASH GROUP BY                    |                                |     5 |   180 |    29   (7)| 00:00:01 |
    |  11 |            TABLE ACCESS BY INDEX ROWID     | book_notations                 |  5356 |   135K|    26   (0)| 00:00:01 |
    |  12 |             NESTED LOOPS                   |                                |  6917 |   243K|    27   (0)| 00:00:01 |
    |  13 |              MAT_VIEW ACCESS BY INDEX ROWID| title_relation                         |     1 |    10 |     1   (0)| 00:00:01 |
    |* 14 |               INDEX RANGE SCAN             | IDX_TITLE_ID                   |     1 |       |     1   (0)| 00:00:01 |
    |  15 |              INLIST ITERATOR               |                                |       |       |            |          |
    |* 16 |               INDEX RANGE SCAN             | FBK_AN_ID_IDX                  |  5356 |       |     4   (0)| 00:00:01 |
    |* 17 |      INDEX RANGE SCAN                      | FBK_AN_ID_IDX                  |   746 |       |     1   (0)| 00:00:01 |
    Table Details
    SELECT COUNT(*) FROM book_notations; --111367
    Columns
    user_id -- nullable field - VARCHAR2(50 BYTE)
    pack_id -- NOT NULL --NUMBER
    notation_type--     VARCHAR2(50 BYTE)     -- nullable field
    CREATED_DATE     - DATE     -- nullable field
    notatn_id     - VARCHAR2(50 BYTE)     -- nullable field      
    Index
    FBK_AN_ID_IDX - Non unique - Composite columns --> (user_id and pack_id)
    SELECT COUNT(*) FROM title_relation; --12678
    Columns
    pack_id - not null - number(38) - PK
    title_id - not null - number(38)
    Index
    IDX_TITLE_ID - Non Unique - TITLE_ID
    Please help...
    Thanks...

    Linus wrote:
    Thanks Bravid for your reply; highly appreciate that.
    So as you say, index creation on the NULL column doesn't have any impact. OK, fine.
    What happens to the execution plan, performance and the stats when you remove the index hint?
    Find below the Execution Plan and Predicate information
    "PLAN_TABLE_OUTPUT"
    "Plan hash value: 126058086"
    "| Id  | Operation                                  | Name                           | Rows  | Bytes | Cost (%CPU)| Time     |"
    "|   0 | SELECT STATEMENT                           |                                |    25 |  1125 |    55  (11)| 00:00:01 |"
    "|   1 |  SORT ORDER BY                             |                                |    25 |  1125 |    55  (11)| 00:00:01 |"
    "|   2 |   HASH GROUP BY                            |                                |    25 |  1125 |    55  (11)| 00:00:01 |"
    "|   3 |    TABLE ACCESS BY INDEX ROWID             | book_notations                 |    10 |   290 |     4   (0)| 00:00:01 |"
    "|   4 |     NESTED LOOPS                           |                                |    50 |  2250 |    53   (8)| 00:00:01 |"
    "|   5 |      VIEW                                  | VW_NSO_1                       |     5 |    80 |    32  (10)| 00:00:01 |"
    "|   6 |       HASH UNIQUE                          |                                |     5 |    80 |            |          |"
    "|*  7 |        COUNT STOPKEY                       |                                |       |       |            |          |"
    "|   8 |         VIEW                               |                                |     5 |    80 |    32  (10)| 00:00:01 |"
    "|*  9 |          SORT ORDER BY STOPKEY             |                                |     5 |   180 |    32  (10)| 00:00:01 |"
    "|  10 |           HASH GROUP BY                    |                                |     5 |   180 |    32  (10)| 00:00:01 |"
    "|  11 |            TABLE ACCESS BY INDEX ROWID     | book_notations                 |  5875 |   149K|    28   (0)| 00:00:01 |"
    "|  12 |             NESTED LOOPS                   |                                |  7587 |   266K|    29   (0)| 00:00:01 |"
    "|  13 |              MAT_VIEW ACCESS BY INDEX ROWID| title_relation                      |     1 |    10 |     1   (0)| 00:00:01 |"
    "|* 14 |               INDEX RANGE SCAN             | IDX_TITLE_ID                   |     1 |       |     1   (0)| 00:00:01 |"
    "|  15 |              INLIST ITERATOR               |                                |       |       |            |          |"
    "|* 16 |               INDEX RANGE SCAN             | FBK_AN_ID_IDX                  |  5875 |       |     4   (0)| 00:00:01 |"
    "|* 17 |      INDEX RANGE SCAN                      | FBK_AN_ID_IDX                  |   775 |       |     1   (0)| 00:00:01 |"
    "Predicate Information (identified by operation id):"
    "   7 - filter(ROWNUM<=10)"
    "   9 - filter(ROWNUM<=10)"
    "  14 - access(""TDPR"".""TITLE_ID""=93402)"
    "  16 - access((""F2"".""USER_ID""='100002616221644' OR ""F2"".""USER_ID""='100002616221645' OR "
    "              ""F2"".""USER_ID""='100002616221646' OR ""F2"".""USER_ID""='100002616221647' OR "
    "              ""F2"".""USER_ID""='100002616221648') AND ""F2"".""PACK_ID""=""TDPR"".""PACK_ID"")"
    "  17 - access(""FA"".""USER_ID""=""$nso_col_1"")"
    The cost is the same because the plan is the same. The optimiser chose to use that index anyway. The point is, now that you have removed it, the optimiser is free to choose other indexes or a full table scan if it wants to.
    Statistics
    BEGIN
    DBMS_STATS.GATHER_TABLE_STATS ('TEST', 'BOOK_NOTATIONS');
    END;
    "COLUMN_NAME"     "NUM_DISTINCT"     "NUM_BUCKETS"     "HISTOGRAM"
    "NOTATION_ID"     110269     1     "NONE"
    "USER_ID"     213     212     "FREQUENCY"
    "PACK_ID"     20     20     "FREQUENCY"
    "NOTATION_TYPE"     8     8     "FREQUENCY"
    "CREATED_DATE"     87     87     "FREQUENCY"
    "CREATED_BY"     1     1     "NONE"
    "UPDATED_DATE"     2     1     "NONE"
    "UPDATED_BY"     2     1     "NONE"
    After removing the hint, the query still shows the same COST.
    Autotrace
    recursive calls     1
    db block gets     0
    consistent gets     34706
    physical reads     0
    redo size     0
    bytes sent via SQL*Net to client     964
    bytes received via SQL*Net from client     1638
    SQL*Net roundtrips to/from client     2
    sorts (memory)     3
    sorts (disk)     0
    Output of query
    "USER_ID"     "NOTATION_TYPE"     "MAXDATE"     "COUNT"
    "100002616221647"     "WTF"     08-SEP-11     20000
    "100002616221645"     "LOL"     08-SEP-11     20000
    "100002616221644"     "OMG"     08-SEP-11     20000
    "100002616221648"     "ABC"     08-SEP-11     20000
    "100002616221646"     "MEH"     08-SEP-11     20000Thanks...I still don't know what we're working towards at the moment. WHat is the current run time? What is the expected run time?
    I can't tell you if there's a better way to write this query or if indeed there is another way to write this query because I don't know what it is attempting to achieve.
    I can see that you're accessing 100k rows from a 110k row table and it's using an index to look those rows up. That seems like a job for a full table scan rather than index lookups.
    David
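    To illustrate David's last point, a hedged sketch of forcing the full scan (simplified to the outer aggregation only; whether it actually wins has to be measured by comparing autotrace statistics against the indexed run):

    -- The FULL hint asks the optimizer to scan book_notations instead of
    -- probing FBK_AN_ID_IDX; compare 'consistent gets' between the runs.
    SELECT /*+ FULL(fa) */
           fa.user_id,
           fa.notation_type,
           MAX(fa.created_date) maxDate,
           COUNT(*) bk_count
    FROM book_notations fa
    GROUP BY fa.user_id, fa.notation_type
    ORDER BY 3 DESC;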

  • Query Performance Please Help

    Hi, can anybody tell me how I can improve the performance of this query? It takes forever to execute.
    PLEASE HELP
    select substr(d.name,1,14) "dist",
    sum(r.room_net_sq_foot) "nsf",
    sum(r.student_station_count) "sta",
    sum(distinct(r.cofte)) "fte"
    from b_fish_report r,
    g_efis_organization d
    where substr(r.organization_code,-2,2) = substr(d.code,-2,2) and
    d.organization_type = 'CNTY' and
    r.room_satisfactory_flag = 'Y' and
    substr(d.code,-2,2) between '01' and '72'
    -- rownum < 50
    group by d.name, r.organization_code
    order by d.name
    It has non-unique indexes on organization code.
    Thanks
    Asma.

    Asma,
    I tried your SQL on my tables T1 and T2. Indexes are on C1,C2,C3 and N1,N2,N3. The data in T1 and T2 are shown below with the explain plan (also called EP) listed. You really need to do an explain plan (free TOAD is easiest to do this in) and respond showing your EP results.
    By simply changing the optimizer mode to RULE I was able to get it to use indexes on both T1 and T2.
    T1 data
    C1     C2     C3     N1     N2
    001     Y     AAA     1     11
    002     Y     BBB     2     22
    003     Y     CCC     3     33
    111     N     DDD     4     44
    222     N     EEE     5     55
    333     Y     FFF     6     66
    070     Y     GGG     7     77
    071     N     HHH     8     88
    072     Y     III     9     99
    TEST     TEST     TEST     10     100
    T2 data
    C1     C2     C3     N1     N2
    001     CNTY     AAA     1     11
    002     CNTY     BBB     2     22
    003     CNTY     CCC     3     33
    111     XXX     DDD     4     44
    222     XXX     EEE     5     55
    333     CNTY     FFF     6     66
    070     CNTY     GGG     7     77
    071     XXX     HHH     8     88
    072     CNTY     III     9     99
    TEST     TEST     TEST     10     100
    These are the results when I run the SQL based on this data ...
    dist     nsf     sta     fte
    AAA     1     11     10
    BBB     2     22     20
    CCC     3     33     30
    FFF     6     66     60
    GGG     7     77     70
    III     9     99     90
    --[SQL 1] : with CHOOSE as the optimizer mode, which is normally the DEFAULT if no hint is specified
    select /*+ CHOOSE */
    substr(d.c3,1,14) "dist",
    sum(r.n1) "nsf",
    sum(r.n2) "sta",
    sum(distinct(r.n3)) "fte"
    from t1 r, t2 d
    where substr(r.c1,-2,2) = substr(d.c1,-2,2) and
    d.c2 = 'CNTY' and
    r.c2 = 'Y' and
    substr(d.c1,-2,2) between '01' and '72'
    group by d.c3, r.c1
    order by d.c3
    This is what the EP shows for your SQL (which will probably be the same for you once you do an EP on your actual SQL) ...
    SELECT STATEMENT Optimizer=CHOOSE (Cost=4 Card=1 Bytes=37)
    SORT (GROUP BY) (Cost=4 Card=1 Bytes=37)
    NESTED LOOPS (Cost=2 Card=1 Bytes=37)
    TABLE ACCESS (FULL) OF T1 (Cost=1 Card=1 Bytes=12)
    TABLE ACCESS (BY INDEX ROWID) OF T2 (Cost=1 Card=1 Bytes=25)
    INDEX (RANGE SCAN) OF I_NU_T2_C2 (NON-UNIQUE)
    Notice the FULL table scan of T1, which you don't want, and that neither C1 index is getting used (I've explained why below).
    --[SQL 2] : only changed the hint to RULE ...
    select /*+ RULE */
    substr(d.c3,1,14) "dist",
    sum(r.n1) "nsf",
    sum(r.n2) "sta",
    sum(distinct(r.n3)) "fte"
    from t1 r, t2 d
    where substr(r.c1,-2,2) = substr(d.c1,-2,2) and
    d.c2 = 'CNTY' and
    r.c2 = 'Y' and
    substr(d.c1,-2,2) between '01' and '72'
    group by d.c3, r.c1
    order by d.c3
    SELECT STATEMENT Optimizer=HINT: RULE
    SORT (GROUP BY)
    NESTED LOOPS
    TABLE ACCESS (BY INDEX ROWID) OF T2
    INDEX (RANGE SCAN) OF I_NU_T2_C2 (NON-UNIQUE)
    TABLE ACCESS (BY INDEX ROWID) OF T1
    INDEX (RANGE SCAN) OF I_NU_T1_C2 (NON-UNIQUE)
    Though the C2 index is getting used (your r.c2 = 'Y' part in the where clause), the main problem you're having here is that the JOIN column (C1 in both tables) is not getting used. So the join you have ...
    where substr(r.c1,-2,2) = substr(d.c1,-2,2)
    isn't using an index and you want it to. There are 2 solutions to correct this...
    Solution #1
    The first is to make a function-based index for the data. Since you're doing SUBSTR on C1, the C1 index does not contain that partial information, so it will not be used. Below is the syntax to make a function-based index for this partial data ...
    CREATE INDEX I_NU_T1_C1_SUBSTR ON T1 (SUBSTR(C1,-2,2));
    CREATE INDEX I_NU_T2_C1_SUBSTR ON T2 (SUBSTR(C1,-2,2));
    or also this way if it's still not using the above indexes ...
    CREATE INDEX I_NU_T1_C1_SUBSTR ON T1 (SUBSTR(C1,-2,2),C1);
    CREATE INDEX I_NU_T2_C1_SUBSTR ON T2 (SUBSTR(C1,-2,2),C1);
    Solution #2
    The second solution is to make another column in both tables and place this two-digit information in it, and then index the new column. That way the join will look like ...
    where r.c_new_column = d.c_new_column
    and
    r.c_new_column between '01' and '72'
    With this new column you will not need the substring in the BETWEEN clause at the end either. Also remember that BETWEEN on character values behaves differently than on numbers.
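    A minimal sketch of Solution #2 against the original tables (column and index names here are invented for illustration, and the new columns would need to be maintained on insert/update, e.g. by a trigger):

    ALTER TABLE b_fish_report ADD (org_suffix VARCHAR2(2));
    UPDATE b_fish_report SET org_suffix = SUBSTR(organization_code, -2, 2);
    CREATE INDEX i_nu_r_org_suffix ON b_fish_report (org_suffix);

    ALTER TABLE g_efis_organization ADD (code_suffix VARCHAR2(2));
    UPDATE g_efis_organization SET code_suffix = SUBSTR(code, -2, 2);
    CREATE INDEX i_nu_d_code_suffix ON g_efis_organization (code_suffix);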
    Final Notes
    I just tried creating the functional index and I can't get it to be used for some reason (I might not have the right amount of data), but I really think that is your best option here. As long as it uses the functional index you won't have to change your code. You might want to try using INDEX() in the hint to get it to be used, but hopefully it will use it right away. Try all 4 optimizer modes (CHOOSE, RULE, ALL_ROWS, FIRST_ROWS) in your primary hints to see if it will use the new function-based index.
    You really do need to get explain plan going. Even if you make these functional indexes you won't know if its going to be using them until you look at the EP results. You can do EP manually (the SQL of how to produce the results is in OTN, though I find free TOAD is by far the easiest) and you will still need to have run the utlxplan.sql script. Oracle I do think has some GUI tools, maybe in OEM, that have explain plan built in as well.
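    For reference, the manual explain-plan route mentioned above looks roughly like this (assuming PLAN_TABLE was created by utlxplan.sql; DBMS_XPLAN.DISPLAY is available from 9.2 on):

    EXPLAIN PLAN FOR
    SELECT d.name
    FROM g_efis_organization d
    WHERE d.organization_type = 'CNTY';

    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);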
    I hope this helps ya,
    Tyler D.

  • Query Performance Issue (help)

    I'm having huge performance issues with the following. The sub-INTERSECT query lists duplicates in table1 and table2... and deletes those results from table2. But the duplicate criteria are not looking at all fields, only those in the subquery....
    DELETE  FROM isw.accounts2     
           WHERE id_user||''||SYSTEM_ID||''||NM_DATABASE IN (
                  SELECT id_user||''||SYSTEM_ID||''||NM_DATABASE
                     FROM (
                           SELECT id_user, domain_name, system_name, user_description,
                                  user_dn, fl_system_user, dt_user_created,
                                  dt_user_modified, pw_changed, user_disabled,
                                  user_locked, pw_neverexpired, pw_expired,
                                  pw_locked, cd_geid, user_type, nm_database,
                                  cd_altname, fl_lob, cd_account_sid, system_id
                           FROM isw.accounts       -- accounts
                           WHERE SYSTEM_ID IN (SELECT SYSTEM_ID FROM SYSTEMS
                                               WHERE FL_LOB =  'type'  AND
                                               FL_SYSTEM_TYPE = 'Syst')
                           INTERSECT
                           SELECT id_user, domain_name, system_name, user_description,
                                  user_dn, fl_system_user, dt_user_created,
                                  dt_user_modified, pw_changed, user_disabled,
                                  user_locked, pw_neverexpired, pw_expired,
                                  pw_locked, cd_geid, user_type, nm_database,
                                  cd_altname, fl_lob, cd_account_sid, system_id
                           FROM isw.accounts2       --accounts_temp
                           WHERE SYSTEM_ID IN (SELECT SYSTEM_ID FROM SYSTEMS
                                               WHERE FL_LOB = 'type'
                                               AND FL_SYSTEM_TYPE =  'syst')
                )
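    A hedged sketch of an alternative to the concatenated-key comparison: a multi-column IN keeps the individual columns usable by indexes and avoids false matches such as 'A'||'BC' colliding with 'AB'||'C'. The INTERSECT branches are shortened to three columns here for brevity; the full column lists from the original would go in their place.

    -- Sketch only (not tested against this schema):
    DELETE FROM isw.accounts2
     WHERE (id_user, system_id, nm_database) IN (
           SELECT id_user, system_id, nm_database
             FROM (SELECT id_user, system_id, nm_database
                     FROM isw.accounts
                    WHERE system_id IN (SELECT system_id FROM systems
                                        WHERE fl_lob = 'type'
                                          AND fl_system_type = 'Syst')
                   INTERSECT
                   SELECT id_user, system_id, nm_database
                     FROM isw.accounts2
                    WHERE system_id IN (SELECT system_id FROM systems
                                        WHERE fl_lob = 'type'
                                          AND fl_system_type = 'syst')));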

    PLAN_TABLE_OUTPUT
    Plan hash value: 2030965500
    | Id  | Operation              | Name          | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT       |               |     1 |   623 |  2269   (7)| 00:00:28 |
    |*  1 |  FILTER                |               |       |       |            |          |
    |*  2 |   HASH JOIN SEMI       |               |     1 |   623 |   236   (2)| 00:00:03 |
    |   3 |    TABLE ACCESS FULL   | ACCOUNTS_BAX2 |     1 |   603 |   222   (1)| 00:00:03 |
    |*  4 |    TABLE ACCESS FULL   | SYSTEMS       |    15 |   300 |    14   (8)| 00:00:01 |
    |   5 |   VIEW                 |               |     1 |   117 |  2032   (7)| 00:00:25 |
    |   6 |    INTERSECTION        |               |       |       |            |          |
    |   7 |     SORT UNIQUE        |               |  2145 |   418K|            |          |
    |*  8 |      HASH JOIN         |               |  2145 |   418K|  1793   (8)| 00:00:22 |
    |*  9 |       TABLE ACCESS FULL| SYSTEMS       |    15 |   300 |    14   (8)| 00:00:01 |
    |* 10 |       TABLE ACCESS FULL| ACCOUNTS_BAX  |  2269 |   398K|  1779   (8)| 00:00:22 |
    |  11 |     SORT UNIQUE        |               |     1 |   588 |            |          |
    |* 12 |      HASH JOIN         |               |     1 |   588 |   236   (2)| 00:00:03 |
    |* 13 |       TABLE ACCESS FULL| ACCOUNTS_BAX2 |     1 |   568 |   222   (1)| 00:00:03 |
    |* 14 |       TABLE ACCESS FULL| SYSTEMS       |    15 |   300 |    14   (8)| 00:00:01 |

  • Query - Performance plzz help

    Hi All,
    I would like to tune the following query in a better manner
    SELECT hou.NAME organization_name
           ,haou.name parent_org_name
           ,msi.secondary_inventory_name sub_inventory_code
           ,msi.availability_type nettable_sub_inventory
           ,msib.segment1 item_name
           ,msib.description item_description
           ,mc.concatenated_segments category_name    
           ,msib.primary_uom_code item_uom_code
           ,XXTEST_TEST_ONHAND(msib.organization_id,msib.inventory_item_id,msi.secondary_inventory_name) AVAILABLE_ONHAND
           ,NVL((SELECT SUM(quantity_shipped - quantity_received)
                 FROM rcv_shipment_lines rmlv
                 WHERE rmlv.to_organization_id = msib.organization_id
                   AND rmlv.item_id = msib.inventory_item_id
                   AND rmlv.to_subinventory = msi.secondary_inventory_name
                   AND source_document_code IN ('REQ','INVENTORY')
                   AND rmlv.shipment_line_status_code in ('PARTIALLY RECEIVED','EXPECTED')),0) intransit_qunatity
           ,msib.organization_id
           ,msib.inventory_item_id 
           ,mic.category_set_id    
    FROM mtl_system_items_b msib
         ,hr_organization_units hou
         ,mtl_secondary_inventories msi
         ,mtl_item_categories mic
         ,mtl_categories_b_kfv mc 
         ,per_org_structure_versions posv
         ,per_org_structure_elements pose
         ,hr_all_organization_units haou
         ,per_organization_structures pos
    WHERE hou.organization_id = msi.organization_id
      AND msib.organization_id = hou.organization_id
      AND mic.inventory_item_id = msib.inventory_item_id
      AND mic.organization_id = msib.organization_id
      AND mc.category_id = mic.category_id
      AND mic.category_set_id = FND_PROFILE.VALUE('XXTEST_INV_INVENTORY_CAT_SET')
      AND pos.organization_structure_id = posv.organization_structure_id
      AND posv.org_structure_version_id = pose.org_structure_version_id
      AND haou.organization_id = pose.organization_id_parent
      AND pos.name = FND_PROFILE.VALUE('XXTEST_INV_ORG_HIERARCHY')
      AND pose.organization_id_child = msib.organization_id;
    Purpose:
    Actually this is for creating a form view, and the custom function encapsulates an Oracle Apps API. We could also put the custom function in the POST_QUERY of the form block, but I feel it is better to put it in the view itself. We are expecting this query to fetch around 500,000 records.
    Expected record counts:
    mtl_system_items_b - less than 100,000 records
    hr_organization_units - less than 1000 records
    mtl_secondary_inventories - less than 1000 records
    mtl_item_categories - less than 300,000 records
    mtl_categories_b_kfv - less than 100 records
    per_org_structure_versions - less than 1000 records
    per_org_structure_elements - less than 1000 records
    hr_all_organization_units - less than 1000 records
    per_organization_structures - less than 1000 records
    Version of DB
    10.2.0.4.0

    Aside from what others have said, your WHERE clause isn't doing much filtering, mostly joining, so I would guess (only a guess) that indexes wouldn't be used.
    Also, you're doing two things in the SELECT that may hurt performance, a function call and another SELECT statement.
    Can you turn that inline SELECT into an outer join in the main query? What does that function call do? Can you embed the logic into your statement? Otherwise it's going to get called 500,000 times.
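    Along those lines, a hedged sketch of folding the scalar subquery into the main query: pre-aggregate rcv_shipment_lines once in an inline view and outer-join it (old-style (+) joins to match the original; trimmed to the relevant tables):

    SELECT msib.organization_id,
           msib.inventory_item_id,
           msi.secondary_inventory_name sub_inventory_code,
           NVL(rsl.intransit_quantity, 0) intransit_quantity
    FROM   mtl_system_items_b msib,
           mtl_secondary_inventories msi,
           (SELECT to_organization_id, item_id, to_subinventory,
                   SUM(quantity_shipped - quantity_received) intransit_quantity
              FROM rcv_shipment_lines
             WHERE source_document_code IN ('REQ', 'INVENTORY')
               AND shipment_line_status_code IN ('PARTIALLY RECEIVED', 'EXPECTED')
             GROUP BY to_organization_id, item_id, to_subinventory) rsl
    WHERE  msi.organization_id = msib.organization_id
    AND    rsl.to_organization_id (+) = msib.organization_id
    AND    rsl.item_id (+) = msib.inventory_item_id
    AND    rsl.to_subinventory (+) = msi.secondary_inventory_name;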

  • Poor query performance when joining CONTAINS to another table

    We just recently began evaluating Oracle Text for a search solution. We need to be able to search a table that can have over 20+ million rows. Each user may only have visibility to a tiny fraction of those rows. The goal is to have a single Oracle Text index that represents all of the searchable columns in the table (multi-column datastore) and provide a score for each search result so that we can sort the search results in descending order by score. What we're seeing is that query performance from TOAD is extremely fast when we write a simple CONTAINS query against the Oracle Text indexed table. However, when we attempt to first reduce the rows the CONTAINS query needs to search by using a WITH clause, we find that the query performance degrades significantly.
    For example, we can find all the records a user has access to from our base table by the following query:
    SELECT d.duns_loc
    FROM duns d
    JOIN primary_contact pc
    ON d.duns_loc = pc.duns_loc
    AND pc.emp_id = :employeeID;
    This query can execute in <100 ms. In the working example, this query returns around 1200 rows of the primary key duns_loc.
    Our search query looks like this:
    SELECT score(1), d.*
    FROM duns d
    WHERE CONTAINS(TEXT_KEY, :search,1) > 0
    ORDER BY score(1) DESC;
    The :search value in this example will be 'highway'. The query can return 246k rows in around 2 seconds.
    2 seconds is good, but we should be able to have a much faster response if the search query did not have to search the entire table, right? Since each user can only "view" records they are assigned to we reckon that if the search operation only had to scan a tiny tiny percent of the TEXT index we should see faster (and more relevant) results. If we now write the following query:
    WITH subset
    AS
    (SELECT d.duns_loc
    FROM duns d
    JOIN primary_contact pc
    ON d.duns_loc = pc.duns_loc
    AND pc.emp_id = :employeeID)
    SELECT score(1), d.*
    FROM duns d
    JOIN subset s
    ON d.duns_loc = s.duns_loc
    WHERE CONTAINS(TEXT_KEY, :search,1) > 0
    ORDER BY score(1) DESC;
    For reasons we have not been able to identify, this query actually takes longer to execute than the sum of the durations of the contributing parts. This query takes over 6 seconds to run. Neither we nor our DBA can figure out why this query performs worse than a wide-open search. The wide-open search is not ideal, as the query would end up returning records the user doesn't have access to view.
    Has anyone ever run into something like this? Any suggestions on what to look at or where to go? If anyone would like more information to help in diagnosis, then let me know and I'll be happy to provide it here.
    Thanks!!

    Sometimes it can be good to separate the tables into separate sub-query factoring (with) clauses or inline views in the from clause, or an in clause as a where condition. Although there are some differences, using a sub-query factoring (with) clause is similar to using an inline view in the from clause. However, you should avoid duplication: you should not have the same table in two different places, as in your original query. You should have indexes on any columns that the tables are joined on, your statistics should be current, and your domain index should have regular synchronization and optimization, and periodically be rebuilt or dropped and recreated, to keep it performing with maximum efficiency.

    The following demonstration uses a composite domain index (cdi) with filter by, as suggested by Roger, then shows the explained plans for your original query and various others. Your original query has nested loops. All of the others have the same plan without the nested loops. You could also add index hints.
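    On the maintenance point, the regular synchronization and optimization mentioned above look like this (standard CTX_DDL calls; the index name is the one from the demonstration below):

    EXEC CTX_DDL.SYNC_INDEX('duns_text_key_idx');
    EXEC CTX_DDL.OPTIMIZE_INDEX('duns_text_key_idx', 'FULL');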
    SCOTT@orcl_11gR2> -- tables:
    SCOTT@orcl_11gR2> CREATE TABLE duns
      2    (duns_loc  NUMBER,
      3       text_key  VARCHAR2 (30))
      4  /
    Table created.
    SCOTT@orcl_11gR2> CREATE TABLE primary_contact
      2    (duns_loc  NUMBER,
      3       emp_id       NUMBER)
      4  /
    Table created.
    SCOTT@orcl_11gR2> -- data:
    SCOTT@orcl_11gR2> INSERT INTO duns VALUES (1, 'highway')
      2  /
    1 row created.
    SCOTT@orcl_11gR2> INSERT INTO primary_contact VALUES (1, 1)
      2  /
    1 row created.
    SCOTT@orcl_11gR2> INSERT INTO duns
      2  SELECT object_id, object_name
      3  FROM   all_objects
      4  WHERE  object_id > 1
      5  /
    76027 rows created.
    SCOTT@orcl_11gR2> INSERT INTO primary_contact
      2  SELECT object_id, namespace
      3  FROM   all_objects
      4  WHERE  object_id > 1
      5  /
    76027 rows created.
    SCOTT@orcl_11gR2> -- indexes:
    SCOTT@orcl_11gR2> CREATE INDEX duns_duns_loc_idx
      2  ON duns (duns_loc)
      3  /
    Index created.
    SCOTT@orcl_11gR2> CREATE INDEX primary_contact_duns_loc_idx
      2  ON primary_contact (duns_loc)
      3  /
    Index created.
    SCOTT@orcl_11gR2> -- composite domain index (cdi) with filter by clause
    SCOTT@orcl_11gR2> -- as suggested by Roger:
    SCOTT@orcl_11gR2> CREATE INDEX duns_text_key_idx
      2  ON duns (text_key)
      3  INDEXTYPE IS CTXSYS.CONTEXT
      4  FILTER BY duns_loc
      5  /
    Index created.
    SCOTT@orcl_11gR2> -- gather statistics:
    SCOTT@orcl_11gR2> EXEC DBMS_STATS.GATHER_TABLE_STATS (USER, 'DUNS')
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11gR2> EXEC DBMS_STATS.GATHER_TABLE_STATS (USER, 'PRIMARY_CONTACT')
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11gR2> -- variables:
    SCOTT@orcl_11gR2> VARIABLE employeeid NUMBER
    SCOTT@orcl_11gR2> EXEC :employeeid := 1
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11gR2> VARIABLE search VARCHAR2(100)
    SCOTT@orcl_11gR2> EXEC :search := 'highway'
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11gR2> -- original query:
    SCOTT@orcl_11gR2> SET AUTOTRACE ON EXPLAIN
    SCOTT@orcl_11gR2> WITH
      2    subset AS
      3        (SELECT d.duns_loc
      4         FROM      duns d
      5         JOIN      primary_contact pc
      6         ON      d.duns_loc = pc.duns_loc
      7         AND      pc.emp_id = :employeeID)
      8  SELECT score(1), d.*
      9  FROM   duns d
    10  JOIN   subset s
    11  ON     d.duns_loc = s.duns_loc
    12  WHERE  CONTAINS (TEXT_KEY, :search,1) > 0
    13  ORDER  BY score(1) DESC
    14  /
      SCORE(1)   DUNS_LOC TEXT_KEY
            18          1 highway
    1 row selected.
    Execution Plan
    Plan hash value: 4228563783
    | Id  | Operation                      | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT               |                   |     2 |    84 |   121   (4)| 00:00:02 |
    |   1 |  SORT ORDER BY                 |                   |     2 |    84 |   121   (4)| 00:00:02 |
    |*  2 |   HASH JOIN                    |                   |     2 |    84 |   120   (3)| 00:00:02 |
    |   3 |    NESTED LOOPS                |                   |    38 |  1292 |    50   (2)| 00:00:01 |
    |   4 |     TABLE ACCESS BY INDEX ROWID| DUNS              |    38 |  1102 |    11   (0)| 00:00:01 |
    |*  5 |      DOMAIN INDEX              | DUNS_TEXT_KEY_IDX |       |       |     4   (0)| 00:00:01 |
    |*  6 |     INDEX RANGE SCAN           | DUNS_DUNS_LOC_IDX |     1 |     5 |     1   (0)| 00:00:01 |
    |*  7 |    TABLE ACCESS FULL           | PRIMARY_CONTACT   |  4224 | 33792 |    70   (3)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("D"."DUNS_LOC"="PC"."DUNS_LOC")
       5 - access("CTXSYS"."CONTAINS"("D"."TEXT_KEY",:SEARCH,1)>0)
       6 - access("D"."DUNS_LOC"="D"."DUNS_LOC")
       7 - filter("PC"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
    SCOTT@orcl_11gR2> -- queries with better plans (no nested loops):
    SCOTT@orcl_11gR2> -- subquery factoring (with) clauses:
    SCOTT@orcl_11gR2> WITH
      2    subset1 AS
      3        (SELECT pc.duns_loc
      4         FROM      primary_contact pc
      5         WHERE  pc.emp_id = :employeeID),
      6    subset2 AS
      7        (SELECT score(1), d.*
      8         FROM      duns d
      9         WHERE  CONTAINS (TEXT_KEY, :search,1) > 0)
    10  SELECT subset2.*
    11  FROM   subset1, subset2
    12  WHERE  subset1.duns_loc = subset2.duns_loc
    13  ORDER  BY score(1) DESC
    14  /
      SCORE(1)   DUNS_LOC TEXT_KEY
            18          1 highway
    1 row selected.
    Execution Plan
    Plan hash value: 153618227
    | Id  | Operation                     | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT              |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |   1 |  SORT ORDER BY                |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |*  2 |   HASH JOIN                   |                   |    38 |  1406 |    82   (4)| 00:00:01 |
    |   3 |    TABLE ACCESS BY INDEX ROWID| DUNS              |    38 |  1102 |    11   (0)| 00:00:01 |
    |*  4 |     DOMAIN INDEX              | DUNS_TEXT_KEY_IDX |       |       |     4   (0)| 00:00:01 |
    |*  5 |    TABLE ACCESS FULL          | PRIMARY_CONTACT   |  4224 | 33792 |    70   (3)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("PC"."DUNS_LOC"="D"."DUNS_LOC")
       4 - access("CTXSYS"."CONTAINS"("TEXT_KEY",:SEARCH,1)>0)
       5 - filter("PC"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
    SCOTT@orcl_11gR2> -- inline views (sub-queries in the from clause):
    SCOTT@orcl_11gR2> SELECT subset2.*
      2  FROM   (SELECT pc.duns_loc
      3            FROM   primary_contact pc
      4            WHERE  pc.emp_id = :employeeID) subset1,
      5           (SELECT score(1), d.*
      6            FROM   duns d
      7            WHERE  CONTAINS (TEXT_KEY, :search,1) > 0) subset2
      8  WHERE  subset1.duns_loc = subset2.duns_loc
      9  ORDER  BY score(1) DESC
    10  /
      SCORE(1)   DUNS_LOC TEXT_KEY
            18          1 highway
    1 row selected.
    Execution Plan
    Plan hash value: 153618227
    | Id  | Operation                     | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT              |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |   1 |  SORT ORDER BY                |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |*  2 |   HASH JOIN                   |                   |    38 |  1406 |    82   (4)| 00:00:01 |
    |   3 |    TABLE ACCESS BY INDEX ROWID| DUNS              |    38 |  1102 |    11   (0)| 00:00:01 |
    |*  4 |     DOMAIN INDEX              | DUNS_TEXT_KEY_IDX |       |       |     4   (0)| 00:00:01 |
    |*  5 |    TABLE ACCESS FULL          | PRIMARY_CONTACT   |  4224 | 33792 |    70   (3)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("PC"."DUNS_LOC"="D"."DUNS_LOC")
       4 - access("CTXSYS"."CONTAINS"("TEXT_KEY",:SEARCH,1)>0)
       5 - filter("PC"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
    SCOTT@orcl_11gR2> -- ansi join:
    SCOTT@orcl_11gR2> SELECT SCORE(1), duns.*
      2  FROM   duns
      3  JOIN   primary_contact
      4  ON     duns.duns_loc = primary_contact.duns_loc
      5  WHERE  CONTAINS (duns.text_key, :search, 1) > 0
      6  AND    primary_contact.emp_id = :employeeid
      7  ORDER  BY SCORE(1) DESC
      8  /
      SCORE(1)   DUNS_LOC TEXT_KEY
            18          1 highway
    1 row selected.
    Execution Plan
    Plan hash value: 153618227
    | Id  | Operation                     | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT              |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |   1 |  SORT ORDER BY                |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |*  2 |   HASH JOIN                   |                   |    38 |  1406 |    82   (4)| 00:00:01 |
    |   3 |    TABLE ACCESS BY INDEX ROWID| DUNS              |    38 |  1102 |    11   (0)| 00:00:01 |
    |*  4 |     DOMAIN INDEX              | DUNS_TEXT_KEY_IDX |       |       |     4   (0)| 00:00:01 |
    |*  5 |    TABLE ACCESS FULL          | PRIMARY_CONTACT   |  4224 | 33792 |    70   (3)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("DUNS"."DUNS_LOC"="PRIMARY_CONTACT"."DUNS_LOC")
       4 - access("CTXSYS"."CONTAINS"("DUNS"."TEXT_KEY",:SEARCH,1)>0)
       5 - filter("PRIMARY_CONTACT"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
    SCOTT@orcl_11gR2> -- old join:
    SCOTT@orcl_11gR2> SELECT SCORE(1), duns.*
      2  FROM   duns, primary_contact
      3  WHERE  CONTAINS (duns.text_key, :search, 1) > 0
      4  AND    duns.duns_loc = primary_contact.duns_loc
      5  AND    primary_contact.emp_id = :employeeid
      6  ORDER  BY SCORE(1) DESC
      7  /
      SCORE(1)   DUNS_LOC TEXT_KEY
            18          1 highway
    1 row selected.
    Execution Plan
    Plan hash value: 153618227
    | Id  | Operation                     | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT              |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |   1 |  SORT ORDER BY                |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |*  2 |   HASH JOIN                   |                   |    38 |  1406 |    82   (4)| 00:00:01 |
    |   3 |    TABLE ACCESS BY INDEX ROWID| DUNS              |    38 |  1102 |    11   (0)| 00:00:01 |
    |*  4 |     DOMAIN INDEX              | DUNS_TEXT_KEY_IDX |       |       |     4   (0)| 00:00:01 |
    |*  5 |    TABLE ACCESS FULL          | PRIMARY_CONTACT   |  4224 | 33792 |    70   (3)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("DUNS"."DUNS_LOC"="PRIMARY_CONTACT"."DUNS_LOC")
       4 - access("CTXSYS"."CONTAINS"("DUNS"."TEXT_KEY",:SEARCH,1)>0)
       5 - filter("PRIMARY_CONTACT"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
    SCOTT@orcl_11gR2> -- in clause:
    SCOTT@orcl_11gR2> SELECT SCORE(1), duns.*
      2  FROM   duns
      3  WHERE  CONTAINS (duns.text_key, :search, 1) > 0
      4  AND    duns.duns_loc IN
      5           (SELECT primary_contact.duns_loc
      6            FROM   primary_contact
      7            WHERE  primary_contact.emp_id = :employeeid)
      8  ORDER  BY SCORE(1) DESC
      9  /
      SCORE(1)   DUNS_LOC TEXT_KEY
            18          1 highway
    1 row selected.
    Execution Plan
    Plan hash value: 3825821668
    | Id  | Operation                     | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT              |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |   1 |  SORT ORDER BY                |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |*  2 |   HASH JOIN SEMI              |                   |    38 |  1406 |    82   (4)| 00:00:01 |
    |   3 |    TABLE ACCESS BY INDEX ROWID| DUNS              |    38 |  1102 |    11   (0)| 00:00:01 |
    |*  4 |     DOMAIN INDEX              | DUNS_TEXT_KEY_IDX |       |       |     4   (0)| 00:00:01 |
    |*  5 |    TABLE ACCESS FULL          | PRIMARY_CONTACT   |  4224 | 33792 |    70   (3)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("DUNS"."DUNS_LOC"="PRIMARY_CONTACT"."DUNS_LOC")
       4 - access("CTXSYS"."CONTAINS"("DUNS"."TEXT_KEY",:SEARCH,1)>0)
       5 - filter("PRIMARY_CONTACT"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
    SCOTT@orcl_11gR2>

  • Tuning query performance

    Dear experts,
    I have a question regarding as the performance of a BW query.
    It takes 10 minutes to display about 23 thousand lines.
    This query reads its data from an ODS object.
    According to the "where" clause in the "select" statement, monitored via the Oracle session while the query was running, I created an index for this ODS object.
    After rerunning the query, I found that the index was used by Oracle when reading this table (the estimated cost dropped to 2 from about 3000).
    However, it takes the same time as before.
    Is there any other reason, or other factors, that I should consider in tuning the performance of this query?
    Thanks in advance

    Hi David,
    Query performance when reporting on an ODS object is slower compared to InfoCubes, InfoSets, MultiProviders etc., because a DSO has no aggregates and other performance techniques.
    Basically, for a DSO/ODS you need to turn on the BEx reporting flag, which again is an overhead for query execution and affects performance.
    To improve performance when reporting on an ODS you can create secondary indexes from the BW workbench.
    Please check the below links.
    [Re: performance issues of ODS]
    [Which criteria to follow to pick InfoObj. as secondary index of ODS?]
    Hope this helps.
    Regards,
    Haritha.
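    For illustration only: at the database level, a BW secondary index on an ODS active table is an ordinary B-tree index. The table, index, and column names below are invented, and in practice the index should be created from the BW workbench so it survives transports and activations:

    -- Hypothetical: active table of an ODS named ZODS1, indexed on cost center
    CREATE INDEX "/BIC/AZODS100~Z01" ON "/BIC/AZODS100" ("COSTCENTER");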

  • Query Performance on ODS

    Hi all,
    I have an ODS with around 23000 records. The ODS has around 20 fields in it (3 key fields). I built a query on the ODS. The user only inputs a Cost Center hierarchy. The report does not have any calculations; it is a direct view of the fields, but it takes a long time to run if I select the 1st, 2nd, or 3rd level of the hierarchy as the starting point. If I select the 5th or 6th level of the hierarchy then the query output is fast. With just 23000 records in the ODS I thought query performance should be fast for any level of the hierarchy.
    I even created an index on Cost Center; even then, no improvement in performance. Is there any way to achieve faster query performance on the ODS?
    Thanks,
    Prabhu.

    Technical content cubes [if installed] give more generic statistics of query run time...
    The RSRT option mentioned by Sudheer is a more targeted approach.
    The same RSRT can be helpful for doing the following.... [from help.sap]
    Definition
    The read mode determines how the OLAP processor gets data during navigation. You can set the mode in Customizing for an InfoProvider and in the Query Monitor for a query.
    Use
    The following types are supported:
    1. Query to be read when you navigate or expand hierarchies (H)
    The amount of data transferred from the database to the OLAP processor is the smallest in this mode. However, it has the highest number of read processes.
    In the following mode, Query to read data during navigation, the data for the fully expanded hierarchy is requested for a hierarchy drilldown. In the Query to be read when you navigate or expand hierarchies mode, the data across the hierarchy is aggregated and transferred to the OLAP processor on the hierarchy level that is the lowest in the start list. When expanding a hierarchy node, the children of this node are then read.
    You can improve the performance of queries with large presentation hierarchies by creating aggregates on a middle hierarchy level that is greater than or the same as the hierarchy start level.
    2. Query to read data during navigation (X)
    The OLAP processor only requests data that is needed for each navigational status of the query in the Business Explorer. The data that is needed is read for each step in the navigation.
    In contrast to the Query to be read when you navigate or expand hierarchies mode, presentation hierarchies are always imported completely on a leaf level here.
    The OLAP processor can read data from the main memory when the nodes are expanded.
    When accessing the database, the best aggregate table is used and, if possible, data is aggregated in the database.
    3. Query to read all data at once (A)
    There is only one read process in this mode. When you execute the query in the Business Explorer, all data in the main memory area of the OLAP processor that is needed for all possible navigational steps of this query is read. During navigation, all new navigational states are aggregated and calculated from the data in the main memory.
    The read mode Query to be read when you navigate or expand hierarchies significantly improves performance in almost all cases compared to the other two modes. The reason for this is that only the data the user wants to see is requested in this mode.
    Compared to Query to be read when you navigate or expand hierarchies, the setting Query to read data during navigation only affects performance for queries with presentation hierarchies.
    Unlike the other two modes, the setting Query to read all data at once also has an effect on performance for queries with free characteristics. The OLAP processor aggregates on the corresponding query view. For this reason, the aggregation concept, that is, working with pre-aggregated data, is least supported in the Query to read all data at once mode.
    We recommend you choose the mode Query to be read when you navigate or expand hierarchies.
    Only choose a different read mode in exceptional circumstances. The read mode Query to read all data at once may be of use in the following cases:
    - The InfoProvider does not support selection. The OLAP processor reads significantly more data than the query needs anyway.
    - A user exit is active in a query. This prevents data from already being aggregated in the database.

  • Query performance puzzling

    Version: Application Express 3.0.1.00.12
    DB: 11.1.0.7.0
    Running a query: select to_char(max(to_date(datetime,'DD/MON/YYYY:HH24:MI:SS')),'DD/MON/YYYY:HH24:MI:SS') "Latest upload time" from TBL;
    gives me a result in .109 sec in SQL DEVELOPER
    But the same query in APEX (this is the only report on the page!) takes 60+ seconds...
    0.02: print column headings
    0.02: rows loop: 15 row(s)
    24/NOV/2009:12:01:29
    62.47: Region: Hits per second
    Why the difference???
    Is APEX not utilizing the index in the same way as SQL Developer? The table has over 9 million rows, but as the index is utilized it should return as fast as in SQL Developer.
    Thanks,

    Thanks for the responses; I didn't get much further though. Here are the actions I took:
    a) Disabled pagination - no changes
    b) Recreated region - no changes
    Sorting wasn't enabled in both the above cases.
    c) Put a trace in place by following http://download.oracle.com/docs/cd/E14373_01/appdev.32/e11838/debug.htm#BABGDGEH
    The trace file DID NOT generate, no idea why!
    When I placed the &p_trace=YES, the application asked me to re-login and upon login went to the page, but the trace file in user_dump_dest wasn't generated.
    NEED help in understanding why the trace is not working!! I'm using the built-in XDB HTTP server, if that's any help. (A possible server-side fallback is sketched after this list.)
    d) The index is used automatically for MAX, if available; here's an explain plan of the query performing fast in SQL*Plus:
    SQL> /
    Latest upload time
    24/NOV/2009:12:01:29
    Execution Plan
    Plan hash value: 2141038945
    | Id | Operation                  | Name          | Rows | Bytes | Cost (%CPU)| Time     |
    |  0 | SELECT STATEMENT           |               |    1 |     8 |  80212  (1)| 00:16:03 |
    |  1 |  SORT AGGREGATE            |               |    1 |     8 |            |          |
    |  2 |   INDEX FULL SCAN (MIN/MAX)| TBL1_UNIQ_IDX |  16M |  123M |            |          |
    SQL> l
    1* select to_char(max(to_date(datetime,'DD/MON/YYYY:HH24:MI:SS')),'DD/MON/YYYY:HH24:MI:SS') "Latest upload time" from TBL1
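    On the missing trace file from (c): a possible server-side fallback, assuming the module name APEX sets for your sessions (check V$SESSION.MODULE on your system first), is to enable tracing with DBMS_MONITOR instead of &p_trace:

    BEGIN
      DBMS_MONITOR.SERV_MOD_ACT_TRACE_ENABLE(
        service_name => 'SYS$USERS',             -- check V$SESSION.SERVICE_NAME
        module_name  => 'APEX:APPLICATION 100',  -- hypothetical; check V$SESSION.MODULE
        binds        => TRUE);
    END;
    /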

  • Improve query performance

    Hi,
    I am executing one query and it takes 40-45 mins; can anybody tell me where the issue is, because I have an index on the SUBSCRIPTION table?
    The query is spending its time in the Nested Loop. Can anybody please help to improve the query performance?
    Select count(unique individual_id)
    from SUBSCRIPTION S ,SOURCE D WHERE S.ORDER_DOCUMENT_KEY_CD=D.FULFILLMENT_KEY_CD AND prod_abbr='TOH'
    and to_char(source_start_dt,'YYMM')>='1010' and mke_mag_source_type_cd='D';
    select count(*) from source; ----------3,425,131
    select count(*) from subscription;---------394,517,271
    Below is the explain plan
    Plan
    SELECT STATEMENT CHOOSE Cost: 219 Bytes: 38 Cardinality: 1
    13 SORT GROUP BY Bytes: 38 Cardinality: 1                                                   
    12 PX COORDINATOR                                              
         11 PX SEND QC (RANDOM) SYS.:TQ10001 Bytes: 38 Cardinality: 1                                         
         10 SORT GROUP BY Bytes: 38 Cardinality: 1                                    
         9 PX RECEIVE Bytes: 38 Cardinality: 1                               
              8 PX SEND HASH SYS.:TQ10000 Bytes: 38 Cardinality: 1                          
              7 SORT GROUP BY Bytes: 38 Cardinality: 1                     
              6 TABLE ACCESS BY LOCAL INDEX ROWID TABLE SUBSCRIPTION Cost: 21 Bytes: 3,976 Cardinality: 284                
                   5 NESTED LOOPS Cost: 219 Bytes: 604,276 Cardinality: 15,902           
              2 PX BLOCK ITERATOR      
                   1 TABLE ACCESS FULL TABLE SOURCE Cost: 72 Bytes: 1,344 Cardinality: 56
                   4 PARTITION HASH ALL Cost: 2 Cardinality: 284 Partition #: 12 Partitions accessed #1 - #16     
                   3 INDEX RANGE SCAN INDEX XAK1SUBSCRIPTION Cost: 2 Cardinality: 284 Partition #: 12 Partitions accessed #1 - #16
    Please suggest
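    One thing worth checking (a hedged sketch, not a confirmed fix): to_char(source_start_dt,'YYMM') >= '1010' both hides the column from any index on SOURCE_START_DT and, because 'YYMM' drops the century, would also match years such as 1999 ('9912' >= '1010' as strings). Assuming the intent is "October 2010 or later" and that these filter columns belong to SOURCE, as the full scan of SOURCE in the plan suggests, the filter can be written against the date column directly:
    -- Sketch only: table/column ownership is an assumption.
    SELECT COUNT(DISTINCT s.individual_id)
    FROM   subscription s,
           source       d
    WHERE  s.order_document_key_cd  = d.fulfillment_key_cd
    AND    d.prod_abbr              = 'TOH'
    AND    d.mke_mag_source_type_cd = 'D'
    AND    d.source_start_dt        >= DATE '2010-10-01';
    With the filter sargable, an index on SOURCE_START_DT (if one exists) becomes usable and the optimizer gets a much better row estimate for SOURCE, which is usually what decides between the nested loop and a hash join here.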

    It eliminates the hidden conversion from char to number. I don't know the indexes/partitioning on the TC table; do you?
    drop table test;
    create table test as select level id, sysdate + level/24/60/60 datum from dual connect by level < 10000;
    create index idx1 on test(datum);
    analyze table test compute statistics;
    explain plan for select count(*) from test where to_char(datum,'YYYYMMDD') > '20120516';   
    SELECT * FROM TABLE(dbms_xplan.display);
    PLAN_TABLE_OUTPUT                                                              
    Plan hash value: 3467505462                                                    
    | Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |    
    |   0 | SELECT STATEMENT   |      |     1 |     7 |     7  (15)| 00:00:01 |    
    |   1 |  SORT AGGREGATE    |      |     1 |     7 |            |          |    
    |*  2 |   TABLE ACCESS FULL| TEST |   500 |  3500 |     7  (15)| 00:00:01 |    
    Predicate Information (identified by operation id):                            
       2 - filter(TO_CHAR(INTERNAL_FUNCTION("DATUM"),'YYYYMMDD')>'20120516')       
    explain plan for select count(*) from test where datum > trunc(sysdate);   
    SELECT * FROM TABLE(dbms_xplan.display);
    PLAN_TABLE_OUTPUT                                                              
    Plan hash value: 2330213601                                                    
    | Id  | Operation             | Name | Rows  | Bytes | Cost (%CPU)| Time     | 
    |   0 | SELECT STATEMENT      |      |     1 |     7 |     7  (15)| 00:00:01 | 
    |   1 |  SORT AGGREGATE       |      |     1 |     7 |            |          | 
    |*  2 |   INDEX FAST FULL SCAN| IDX1 |  9999 | 69993 |     7  (15)| 00:00:01 | 
    Predicate Information (identified by operation id):                            
       2 - filter("DATUM">TRUNC(SYSDATE@!))                                        
    drop index idx1;
    create index idx1 on test(to_number(to_char(datum,'YYYYMMDD')));
    analyze table test compute statistics;
    explain plan for select count(*) from test where to_number(to_char(datum,'YYYYMMDD')) > 20120516;   
    SELECT * FROM TABLE(dbms_xplan.display);
    PLAN_TABLE_OUTPUT                                                              
    Plan hash value: 227046122                                                     
    | Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |     
    |   0 | SELECT STATEMENT  |      |     1 |     5 |     2   (0)| 00:00:01 |     
    |   1 |  SORT AGGREGATE   |      |     1 |     5 |            |          |     
    |*  2 |   INDEX RANGE SCAN| IDX1 |     1 |     5 |     2   (0)| 00:00:01 |     
    Predicate Information (identified by operation id):                            
       2 - access(TO_NUMBER(TO_CHAR(INTERNAL_FUNCTION("DATUM"),'YYYYMMDD'))>       
                  20120516)                                                        
    explain plan for select count(*) from test where datum > trunc(sysdate);   
    SELECT * FROM TABLE(dbms_xplan.display);
    PLAN_TABLE_OUTPUT                                                              
    Plan hash value: 3467505462                                                    
    | Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |    
    |   0 | SELECT STATEMENT   |      |     1 |     7 |     7  (15)| 00:00:01 |    
    |   1 |  SORT AGGREGATE    |      |     1 |     7 |            |          |    
    |*  2 |   TABLE ACCESS FULL| TEST |  9999 | 69993 |     7  (15)| 00:00:01 |    
    Predicate Information (identified by operation id):                            
       2 - filter("DATUM">TRUNC(SYSDATE@!))                                        
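    Applied to the question above, the same idea means either rewriting the date filter sargably (as sketched earlier) or, if the string comparison has to stay, creating a function-based index that matches the expression exactly. A sketch only; it assumes SOURCE_START_DT lives on the SOURCE table, and statistics need regathering so the optimizer can cost the new index:
    -- Hypothetical index name; the expression must match the query's predicate verbatim.
    CREATE INDEX source_start_yymm_idx
        ON source (TO_CHAR(source_start_dt, 'YYMM'));
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname => USER,
        tabname => 'SOURCE',
        cascade => TRUE);
    END;
    /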

  • Query performance and data loading performance issues

    What are the query performance issues we need to take care of? Please explain and let me know the relevant transaction codes; this is urgent.
    What are the data loading performance issues we need to take care of? Please explain and let me know the transaction codes.
    Will reward full points.
    Regards
    Guru

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program supports this option.
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8). Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube and ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9)Build secondary indexes on the tables for the selection fields to optimize these tables for reading and reduce extraction time (see the sketch after this list). If your selection fields are not key fields on the table, primary indexes are not much of a help when accessing data. In this case it is better to create secondary indexes with the selection fields on the associated table, using the ABAP Dictionary, to improve selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
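    As an illustration of points 6 and 9, here is a minimal sketch of the kind of secondary index meant. All names are hypothetical, and in practice the index would be defined in the ABAP Dictionary (SE11) rather than with raw SQL:
    -- Hypothetical DataSource table extracted by document date and plant, neither of
    -- which is part of the primary key; an index on the selection fields lets the
    -- collection job range-scan instead of reading the whole table.
    CREATE INDEX zdelivery_hdr_sel_idx
        ON zdelivery_hdr (doc_date, plant);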
    Hope it Helps
    Chetan
    @CP..

  • How to improve query performance built on an ODS

    Hi,
    I've built a report on the FI_GL ODS (BW 3.5). The report execution takes almost 1 hour.
    Is there any method to improve or optimize the performance of a query built on an ODS?
    The ODS holds a huge volume of data, ~300 million records for 2 years.
    Thanks in advance,
    Guru.

    Hi Raj,
    Here are a few tips which help you improve your query performance.
    Checklist for Query Performance
    1. If exclusions exist, make sure they exist in the global filter area. Try to remove exclusions by subtracting out inclusions.
    2. Use Constant Selection to ignore filters in order to move more filters to the global filter area. (Use ABAPer to test and validate that this ensures better code)
    3. Within structures, make sure the filter order exists with the highest level filter first.
    4. Check code for all exit variables used in a report.
    5. Move Time restrictions to a global filter whenever possible.
    6. Within structures, use user exit variables to calculate things like QTD, YTD. This should generate better code than using overlapping restrictions to achieve the same thing. (Use ABAPer to test and validate that this ensures better code).
    7. When queries are written on multiproviders, restrict to InfoProvider in global filter whenever possible. MultiProvider (MultiCube) queries require additional database table joins to read data compared to those queries against standard InfoCubes (InfoProviders), and you should therefore hardcode the infoprovider in the global filter whenever possible to eliminate this problem.
    8. Move all global calculated and restricted key figures to local so as to analyze any filters that can be removed and moved to the global definition in a query. Then you can change the calculated key figure and go back to utilizing the global calculated key figure if desired.
    9. If Alternative UOM solution is used, turn off query cache.
    10. Set read mode of query based on static or dynamic. Reading data during navigation minimizes the impact on the R/3 database and application server resources because only data that the user requires will be retrieved. For queries involving large hierarchies with many nodes, it would be wise to select Read data during navigation and when expanding the hierarchy option to avoid reading data for the hierarchy nodes that are not expanded. Reserve the Read all data mode for special queries, for instance, when a majority of the users need a given query to slice and dice against all dimensions, or when the data is needed for data mining. This mode places heavy demand on database and memory resources and might impact other SAP BW processes and tasks.
    11. Turn off formatting and results rows to minimize Frontend time whenever possible.
    12. Check for nested hierarchies. Always a bad idea.
    13. If "Display as hierarchy" is being used, look for other options to remove it to increase performance.
    14. Use Constant Selection instead of SUMCT and SUMGT within formulas.
    15. Do a review of the order of restrictions in formulas. Do as many restrictions as you can before calculations. Try to avoid calculations before restrictions.
    16. Check Sequential vs Parallel read on Multiproviders.
    17. Turn off warning messages on queries.
    18. Check to see if performance improves by removing text display (Use ABAPer to test and validate that this ensures better code).
    19. Check to see where currency conversions are happening if they are used.
    20. Check aggregation and exception aggregation on calculated key figures. Before aggregation is generally slower and should not be used unless explicitly needed.
    21. Avoid Cell Editor use if at all possible.
    22. Make sure queries are regenerated in production using RSRT after changes to statistics, consistency changes, or aggregates.
    23. Within the free characteristics, filter on the least granular objects first and make sure those come first in the order.

  • Query performance in two environments

    Hi all,
    I have developed simple select queries on a MultiProvider and I am facing issues with query performance in the quality box. A query runs pretty fast in dev and returns results, while the same one dumps in the quality environment, giving a time-out error. This seems all the more strange because our dev box has comparatively more records than the quality environment right now.
    On analyzing the query path in both environments, we noticed that the query does an index scan in dev but not in the quality environment, especially when the selection is such that the query is supposed to return a lot of records. Since the query does a sequential scan in quality, it dumps. Is there any setting that I need to make separately in the quality environment?
    Any tips on query optimization would be great help. Thanks
    Regards
    Niranjana

    Execute some of the RSRT tests in QA for the query using the "Execute + Debug" option, and use the tests for MultiProvider and database checks in it; try to compare with Dev as well.
    Hope it Helps
    Chetan
    @CP..
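    One common reason for an index scan in one environment and a full (sequential) scan in the other is missing or stale optimizer statistics on the quality system. A hedged sketch, assuming the underlying database is Oracle and using the standard BW fact-table name pattern, of how to compare and refresh them (in BW this is normally triggered from InfoCube management or BRCONNECT rather than ad hoc SQL):
    -- Run in both DEV and QA and compare: when were the BW fact tables last analyzed?
    SELECT table_name, num_rows, last_analyzed
    FROM   user_tables
    WHERE  table_name LIKE '/BIC/F%';
    -- If QA is missing statistics, refresh them, e.g. for the whole schema.
    BEGIN
      DBMS_STATS.GATHER_SCHEMA_STATS(ownname => USER, cascade => TRUE);
    END;
    /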

  • System/Query Performance: What to look for in these tcodes

    Hi
    I have been researching on system/query performance in general in the BW environment.
    I have seen tcodes such as
    ST02 :Buffer/Table analysis
    ST03 :System workload
    ST03N: Workload monitor (newer version of ST03)
    ST04 : Database monitor
    ST05 : SQL trace
    ST06 : Operating system monitor
    ST66:
    ST21:
    ST22: ABAP runtime error (short dump) analysis
    SE30: ABAP runtime analysis
    RSRT:Query performance
    RSRV: Analysis and repair of BW objects
    For example, Note 948066 provides descriptions of these tcodes but what I am not getting are thresholds and their implications. e.g. ST02 gave “tune summary” screen with several rows and columns (?not sure what they are called) with several numerical values.
    Is there some information on these rows/columns such as the typical range for each of these columns and rows; and the acceptable figures, and which numbers under which columns suggest what problems?
    Basically some type of a metric for each of these indicators provided by these performance tcodes.
    Something similar to when you are using an operating system: CPU utilization consistently over 70% may suggest the need to upgrade the CPU, while over 90% suggests your system is about to crash, etc.
    I will appreciate some guidelines on the use of these tcodes and from your personal experience, which indicators you pay attention to under each tcode and why?
    Thanks

    hi Amanda,
    I forgot something: SAP provides the EarlyWatch report. If you have Solution Manager, you can generate it yourself; in the EarlyWatch report there are red, yellow, and green lights for the parameters.
    http://help.sap.com/saphelp_sm40/helpdata/EN/a4/a0cd16e4bcb3418efdaf4a07f4cdf8/frameset.htm
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/e0f35bf3-14a3-2910-abb8-89a7a294cedb
    EarlyWatch focuses on the following aspects:
    - Server analysis
    - Database analysis
    - Configuration analysis
    - Application analysis
    - Workload analysis
    EarlyWatch Alert – a free part of your standard maintenance contract with SAP – is a preventive service designed to help you take rapid action before potential problems can lead to actual downtime. In addition to EarlyWatch Alert, you can also decide to have an EarlyWatch session for a more detailed analysis of your system.
    Ask your Basis team for an EarlyWatch sample report; the parameters in EarlyWatch should cover what you are looking for, with red, yellow, and green indicators.
    Understanding Your EarlyWatch Alert Reports
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4b88cb90-0201-0010-5bb1-a65272a329bf
    hope this helps.

  • Query Performance - Query very slow to run

    I have built a query to show payroll costings per month per employee by cost centre for the current fiscal year. The cost centres are selected with a hierarchy variable - it's quite a large hierarchy. The problem is the query takes ages to run - nearly ten minutes. It's built on a DSO so I can't aggregate it. Is there anything I can do to improve performance?

    Hi Joel,
    Walkthrough Checklist for Query Performance:
    1. If exclusions exist, make sure they exist in the global filter area. Try to remove exclusions by subtracting out inclusions.
    2. Use Constant Selection to ignore filters in order to move more filters to the global filter area. (Use ABAPer to test and validate that this ensures better code)
    3. Within structures, make sure the filter order exists with the highest level filter first.
    4. Check code for all exit variables used in a report.
    5. Move Time restrictions to a global filter whenever possible.
    6. Within structures, use user exit variables to calculate things like QTD, YTD. This should generate better code than using overlapping restrictions to achieve the same thing. (Use ABAPer to test and validate that this ensures better code).
    7. When queries are written on multiproviders, restrict to InfoProvider in global filter whenever possible. MultiProvider (MultiCube) queries require additional database table joins to read data compared to those queries against standard InfoCubes (InfoProviders), and you should therefore hardcode the infoprovider in the global filter whenever possible to eliminate this problem.
    8. Move all global calculated and restricted key figures to local so as to analyze any filters that can be removed and moved to the global definition in a query. Then you can change the calculated key figure and go back to utilizing the global calculated key figure if desired.
    9. If Alternative UOM solution is used, turn off query cache.
    10. Set read mode of query based on static or dynamic. Reading data during navigation minimizes the impact on the R/3 database and application server resources because only data that the user requires will be retrieved. For queries involving large hierarchies with many nodes, it would be wise to select Read data during navigation and when expanding the hierarchy option to avoid reading data for the hierarchy nodes that are not expanded. Reserve the Read all data mode for special queries, for instance, when a majority of the users need a given query to slice and dice against all dimensions, or when the data is needed for data mining. This mode places heavy demand on database and memory resources and might impact other SAP BW processes and tasks.
    11. Turn off formatting and results rows to minimize Frontend time whenever possible.
    12. Check for nested hierarchies. Always a bad idea.
    13. If "Display as hierarchy" is being used, look for other options to remove it to increase performance.
    14. Use Constant Selection instead of SUMCT and SUMGT within formulas.
    15. Do review of order of restrictions in formulas. Do as many restrictions as you can before calculations. Try to avoid calculations before restrictions.
    16. Check Sequential vs Parallel read on Multiproviders.
    17. Turn off warning messages on queries.
    18. Check to see if performance improves by removing text display (Use ABAPer to test and validate that this ensures better code).
    19. Check to see where currency conversions are happening if they are used.
    20. Check aggregation and exception aggregation on calculated key figures. Before aggregation is generally slower and should not be used unless explicitly needed.
    21. Avoid Cell Editor use if at all possible.
    22. Make sure queries are regenerated in production using RSRT after changes to statistics, consistency changes, or aggregates.
    23. Within the free characteristics, filter on the least granular objects first and make sure those come first in the order.
    24. Leverage characteristics or navigational attributes rather than hierarchies. Using a hierarchy requires reading temporary hierarchy tables and creates additional overhead compared to characteristics and navigational attributes. Therefore, characteristics or navigational attributes result in significantly better query performance than hierarchies, especially as the size of the hierarchy (e.g., the number of nodes and levels) and the complexity of the selection criteria increase.
    25. If hierarchies are used, minimize the number of nodes to include in the query results. Including all nodes in the query results (even the ones that are not needed or blank) slows down the query processing. The "not assigned" nodes in the hierarchy should be filtered out, and you should use a variable to reduce the number of hierarchy nodes selected.
    Regards
    Vivek Tripathi
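    Since the report sits on a DSO (so aggregates are not available), one further option, sketched here with entirely hypothetical names, is a secondary index on the DSO's active table covering the characteristics the query filters on; in BW this is defined in the Indexes folder of the DSO maintenance screen rather than with raw SQL:
    -- Hypothetical: a DSO named ZPYCST_O01 would have the active table /BIC/AZPYCST_O0100;
    -- an index on the cost-centre and fiscal-period fields the query restricts on keeps
    -- the database from scanning the whole active table for every hierarchy selection.
    CREATE INDEX "/BIC/AZPYCST_O0100~Z01"
        ON "/BIC/AZPYCST_O0100" (costcenter, fiscper);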
