Inlist Iterator or Filter - Performance issue or bug?

Hi,
Case1:
SQL Statement
SELECT
/*+
FIRST_ROWS */
FROM
"/BI0/PMAT_PLANT"
WHERE
"PLANT" = :A0 AND "MAT_PLANT" = :A1 AND ( "OBJVERS" = :A2 AND "CHANGED" = :A3 OR "OBJVERS" = :A4
AND "CHANGED" = :A5 ) AND ROWNUM <= :A6#
Execution Plan
SELECT STATEMENT ( Estimated Costs = 5 , Estimated #Rows = 1 )
5 COUNT STOPKEY
5 INLIST ITERATOR
5 TABLE ACCESS BY INDEX ROWID /BI0/PMAT_PLANT
INDEX RANGE SCAN /BI0/PMAT_PLANT~Z0
-> Runtime: > 1000s (!!!!!)
Case 2:
SQL Statement
SELECT
/*+
FIRST_ROWS */
FROM
"/BI0/PMAT_PLANT"
WHERE
"PLANT" = :A0 AND "MAT_PLANT" = :A1 AND "OBJVERS" BETWEEN :A2 AND :A3 AND "CHANGED" BETWEEN :A4 AND :A5 <= :A6#
Execution Plan
SELECT STATEMENT ( Estimated Costs = 5 , Estimated #Rows = 1 )
5 COUNT STOPKEY
5 FILTER
5 TABLE ACCESS BY INDEX ROWID /BI0/PMAT_PLANT
INDEX RANGE SCAN /BI0/PMAT_PLANT~Z0
-> Runtime: < 1s (!!!!!)
What am I missing? Any hints?
THX in advance...
Paul
P.S. Oracle 9.2.0.3

"The max. number of returned rows is 2 in both cases."
That's presumably because of the ROWNUM. The two queries ask different questions, so the database expects them (potentially) to return different results, and would probably do so without the STOPKEY.
Primary key: PLANT, MAT_PLANT, OBJVERS
"And OBJVERS is only "A" or "M"."
Irrelevant; that applies in both cases.
"And the optimizer calculates costs of 5 for each statement."
Optimizer costs have no objective value, so this doesn't mean anything.
"So the very high runtime in one case is very interesting."
No, it's obvious. Look at the queries you posted. The first query says:
return me every row where a particular value of OBJVERS is matched with a particular value of CHANGED, or another particular value of OBJVERS is matched with another particular value of CHANGED
whereas the second query says
return me every row where OBJVERS has one of a range of values and CHANGED has one of a range of values.
It's precisely because OBJVERS has so few values that you have to do so much work in the first query. Basically you're checking all the rows that match on PLANT and MAT_PLANT. The second query just needs to weed out the rows where CHANGED is in a particular range of values.
Of course, the execution plans are dependent on which values you put into the variables. For instance, this query:
SELECT  * FROM
"/BI0/PMAT_PLANT"
WHERE "PLANT" =  1 AND "MAT_PLANT" = 2
AND ( "OBJVERS" = 'A' AND "CHANGED" = '01/01/02'
OR "OBJVERS" = 'M' AND "CHANGED" = '02/01/02')
AND ROWNUM <= 2
is not the same as
SELECT  * FROM
"/BI0/PMAT_PLANT"
WHERE "PLANT" =  1 AND "MAT_PLANT" = 2
AND ( "OBJVERS" = 'M' AND "CHANGED" = '01/01/02'
OR "OBJVERS" = 'M' AND "CHANGED" = '02/01/02')
AND ROWNUM <= 2
but it is the same as
SELECT  * FROM
"/BI0/PMAT_PLANT"
WHERE "PLANT" =  1 AND "MAT_PLANT" = 2
AND ("CHANGED" = '01/01/02' OR "CHANGED" = '02/01/02')
AND ROWNUM <= 2
Capice?
Cheers, APC

Similar Messages

  • Expression Filter Performance Issues / Misuse?

    I'm currently evaluating the Expression Filter functionality for a new requirement. The basic idea of the requirement is that I have a logging table that I want to get "interesting" records from. The way I want to set it up is to exclude known, "uninteresting", records or record patterns.
    So as far as implementation goes, I was considering a table of expressions containing Expression Filter entries for the "uninteresting" records, and checking these against my logging table using the EVALUATE operator, looking for a 0 result.
    In my testing I wanted to return results where the EVALUATE operator equals 1, to see whether my expressions are correct. In doing this I experienced significant performance issues. For example, my test filter matches 72 rows out of 61657 possible entries, and it took Oracle almost 10 minutes to evaluate the expression. I tried it with and without an Expression Filter index, with no noticeable change in execution time. The test case and query are provided below.
    Is this the right use case for Expression Filter? Am I misunderstanding how it works? What am I doing wrong?
    Test Case:
    Version
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    PL/SQL Release 11.2.0.1.0 - Production
    CORE    11.2.0.1.0      Production
    TNS for 32-bit Windows: Version 11.2.0.1.0 - Production
    NLSRTL Version 11.2.0.1.0 - Production
    Objects & Query:
    CREATE TABLE expressions( white_list VARCHAR2(200));
    CREATE TABLE data
    AS
    SELECT OBJECT_ID
         , OWNER
         , OBJECT_NAME
         , CREATED
         , LAST_DDL_TIME
    FROM   DBA_OBJECTS;
    BEGIN
      -- Create the empty Attribute Set --
      DBMS_EXPFIL.CREATE_ATTRIBUTE_SET('exptype');
      -- Define elementary attributes of EXF$TABLE_ALIAS type --
      DBMS_EXPFIL.ADD_ELEMENTARY_ATTRIBUTE('exptype','data',
                                            EXF$TABLE_ALIAS('test_user.data'));
    END;
    /
    BEGIN
      DBMS_EXPFIL.ASSIGN_ATTRIBUTE_SET('exptype','expressions','white_list');
    END;
    /
    INSERT INTO expressions(white_list) VALUES('data.owner=''TEST_USER'' AND data.created BETWEEN TO_DATE(''08/03/2010'',''MM/DD/YYYY'') AND TO_DATE(''08/05/2010'',''MM/DD/YYYY'')');
    exec dbms_stats.gather_table_stats(USER,'EXPRESSIONS');
    exec dbms_stats.gather_table_stats(USER,'DATA');
    CREATE INDEX expIndex ON Expressions (white_list) INDEXTYPE IS EXFSYS.EXPFILTER
      PARAMETERS ('STOREATTRS (data.owner,data.object_name,data.created)
                   INDEXATTRS (data.owner,data.object_name,data.created)');
    SELECT /*+ gather_plan_statistics */ data.* FROM data, expressions WHERE EVALUATE(white_list,exptype.getVarchar(data.rowid)) = 1;
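    For reference, the exclusion form described above (keep only the logging rows that match no white-list expression, i.e. EVALUATE returns 0 for every stored expression) might look like this sketch against the same test objects:
    -- Sketch only: return logging rows matched by no white_list expression.
    SELECT d.*
    FROM   data d
    WHERE  NOT EXISTS (SELECT 1
                       FROM   expressions e
                       WHERE  EVALUATE(e.white_list, exptype.getVarchar(d.rowid)) = 1);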
    DROP TABLE expressions PURGE;
    BEGIN
            DBMS_EXPFIL.DROP_ATTRIBUTE_SET(attr_set => 'exptype');
    END;
    /
    DROP TABLE data PURGE;

    Hi,
    If the queries you are using are already stable, then rather than modifying the query you can try other options to improve query performance, such as data compression of the cube, creation of aggregates, placing the cube on BIA, or creating a cache for the query.
    Best Regards,
    Prashant Vankudre.

  • Inlist iterator

    What is an INLIST ITERATOR?
    From a performance point of view, is it better to write a query using the OR or the IN operator if the column being referred to has an index on it?

    http://asktom.oracle.com/pls/ask/f?p=4950:8:16232527426353483178::NO::F4950_P8_DISPLAYID,F4950_P8_CRITERIA:8906695863428
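    In short, an INLIST ITERATOR is the plan operation Oracle uses to repeat an indexed lookup once for each value in an IN-list (or in a set of OR-ed equalities on the same column). A minimal sketch (table and index names are illustrative, not from this thread):
    create table t1 (id number, n1 number);
    create index t1_i1 on t1(id);
    set autotrace traceonly explain
    select n1 from t1 where id in (10, 20, 30);
    set autotrace off
    With a usable index you would expect a plan of this shape, with one index probe per list value:
    INLIST ITERATOR
      TABLE ACCESS (BY INDEX ROWID) OF 'T1'
        INDEX (RANGE SCAN) OF 'T1_I1'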

  • Top N query with INLIST Iterator performance problem

    I have a top N query that is giving me problems on Oracle 11.2.0.3.
    First of all, I have a query like the following (simplified from the real query, but produces the same problem):
        select /*+ gather_plan_statistics */ * from (
          select rowid
          from payer_subscription ps
          where  ps.subscription_status = :i_subscription_status
          and ps.merchant_id           = :merchant_id2
          order by transaction_date desc
        ) where rownum <= :i_rowcount;
    This query works well. It can very efficiently find me the top 10 rows for a massive data set, using an index on merchant_id, subscription_status, transaction_date.
        | Id  | Operation                     | Name        | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
        |   0 | SELECT STATEMENT              |             |      1 |        |     10 |00:00:00.01 |       4 |
        |*  1 |  COUNT STOPKEY                |             |      1 |        |     10 |00:00:00.01 |       4 |
        |   2 |   VIEW                        |             |      1 |     11 |     10 |00:00:00.01 |       4 |
        |*  3 |    INDEX RANGE SCAN DESCENDING| SODTEST2_IX |      1 |    100 |     10 |00:00:00.01 |       4 |
        -------------------------------------------------------------------------------------------------------
    As you can see, the actual rows (A-Rows) at each stage are 10, which is correct.
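    For reference, the index implied by the plan and the description would be something like this sketch (name taken from the plan, column order as described in the text; not confirmed by the poster):
        create index sodtest2_ix on payer_subscription
          (merchant_id, subscription_status, transaction_date);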
    Now, I have a requirement to get the top N records for a set of merchant_Ids, so if I change the query to include two merchant_ids, the performance tanks:
        select /*+ gather_plan_statistics */ * from (
          select  rowid
          from payer_subscription ps
          where  ps.subscription_status = :i_subscription_status
              and (ps.merchant_id = :merchant_id or
                   ps.merchant_id = :merchant_id2 )
          order by transaction_date desc
        ) where rownum <= :i_rowcount;
        | Id  | Operation               | Name        | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
        |   0 | SELECT STATEMENT        |             |      1 |        |     10 |00:00:00.17 |     178 |       |       |          |
        |*  1 |  COUNT STOPKEY          |             |      1 |        |     10 |00:00:00.17 |     178 |       |       |          |
        |   2 |   VIEW                  |             |      1 |    200 |     10 |00:00:00.17 |     178 |       |       |          |
        |*  3 |    SORT ORDER BY STOPKEY|             |      1 |    200 |     10 |00:00:00.17 |     178 |  2048 |  2048 | 2048  (0)|
        |   4 |     INLIST ITERATOR     |             |      1 |        |  42385 |00:00:00.10 |     178 |       |       |          |
        |*  5 |      INDEX RANGE SCAN   | SODTEST2_IX |      2 |    200 |  42385 |00:00:00.06 |     178 |       |       |          |
    Notice now that there are 42K rows coming out of the two index range scans; Oracle is no longer aborting the index range scan when it reaches 10 rows. What I thought would happen is that Oracle would get at most 10 rows for each merchant_id, knowing that at most 10 rows are to be returned by the query. Then it would sort those 10 + 10 rows and output the top 10 based on the transaction date, but it refuses to do that.
    Does anyone know how I can get the performance of the first query when I need to pass a list of merchants into the query? I could probably get the performance using a UNION ALL, but the list of merchants is variable, and could be anywhere from 1 or 2 to several hundred, which makes that a bit unworkable.

    Across the two merchant_ids there are about 42K rows (this is in test; on Prod there could be several million). In the first query example, Oracle can answer the query in about 4 logical IOs and without even doing a sort, as it uses the index to scan and get the relevant rows in order.
    In the second case, I hoped it would pull 10 rows for each merchant_id and then sort the resulting 20 rows to find the top 10 ordered by transaction_date, but instead it is scanning far more rows than it needs to.
    In my example, it takes 4 logical IOs to answer the first query, but ~ 170 to answer the second, while I think it is achievable in 8 or so. For example, this query does what I want, but it is not a feasible option due to how many merchant_id's I may have to deal with:
    select /*+ gather_plan_statistics */ *
    from (
      select *
      from (
        select * from (
          select  merchant_id, transaction_date
          from payer_subscription ps
          where  ps.subscription_status = :i_subscription_status
          and ps.merchant_id = :merchant_id
          order by transaction_date desc
        ) where rownum <= :i_rowcount
        union all
        select * from (
          select  merchant_id, transaction_date
          from payer_subscription ps
          where  ps.subscription_status = :i_subscription_status
          and ps.merchant_id = :merchant_id2
          order by transaction_date desc
        ) where rownum <= :i_rowcount
      ) order by transaction_date desc
    ) where rownum <= :i_rowcount;
    | Id  | Operation                          | Name        | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
    |   0 | SELECT STATEMENT                   |             |      1 |        |     10 |00:00:00.01 |    6 |          |       |          |
    |*  1 |  COUNT STOPKEY                     |             |      1 |        |     10 |00:00:00.01 |    6 |          |       |          |
    |   2 |   VIEW                             |             |      1 |     20 |     10 |00:00:00.01 |    6 |          |       |          |
    |*  3 |    SORT ORDER BY STOPKEY           |             |      1 |     20 |     10 |00:00:00.01 |    6 |  2048 |  2048 | 2048  (0)|
    |   4 |     VIEW                           |             |      1 |     20 |     20 |00:00:00.01 |    6 |          |       |          |
    |   5 |      UNION-ALL                     |             |      1 |        |     20 |00:00:00.01 |    6 |          |       |          |
    |*  6 |       COUNT STOPKEY                |             |      1 |        |     10 |00:00:00.01 |    3 |          |       |          |
    |   7 |        VIEW                        |             |      1 |    100 |     10 |00:00:00.01 |    3 |          |       |          |
    |*  8 |         INDEX RANGE SCAN DESCENDING| SODTEST2_IX |      1 |    100 |     10 |00:00:00.01 |    3 |          |       |          |
    |*  9 |       COUNT STOPKEY                |             |      1 |        |     10 |00:00:00.01 |    3 |          |       |          |
    |  10 |        VIEW                        |             |      1 |     11 |     10 |00:00:00.01 |    3 |          |       |          |
    |* 11 |         INDEX RANGE SCAN DESCENDING| SODTEST2_IX |      1 |    100 |     10 |00:00:00.01 |    3 |          |       |          |
    ---------------------------------------------------------------------------------------------------------------------------------------
    This UNION ALL query completes in 6 logical IOs; the original query I posted with 2 IDs takes 178 to return the same results.

  • Performance Impact with OR concatenation / Inlist Iterator

    Hello guys,
    Is there any performance impact of using OR concatenations versus IN-lists?
    The function of both is the "same":
    1) Concatenation (OR-processing)
    SELECT * FROM emp WHERE mgr# = 1 OR job = 'YOURS';
    - Similar to a query rewrite into 2 separate queries
    - Which are then ‘concatenated’
    2) Inlist Iterator
    SELECT * FROM dept WHERE d# IN (10, 20, 30);
    - Iteration over an enumerated value list
    - Every value executed separately
    - Same as a concatenation of 3 "OR-ed" values
    So I want to know if there is any performance impact when using IN-lists instead of OR concatenations.
    Thanks and Regards
    Stefan

    The note is very misleading and far from complete; but there is one critical point of difference that you need to observe. It's talking about using a tablescan to deal with an IN-list (and that's NOT "in-list iteration"); my comments start by saying "if there is a suitable indexed access path."
    The note, by the way, describes a transformation to a UNION ALL - clearly that would be inefficient if there were no indexed access path. (Given the choice between one tablescan and several consecutive tablescans, which option would you choose ?).
    The note, in effect, is just about a slightly more subtle version of "why isn't oracle using my index". For "shorter" lists you might get an indexed iteration, for "longer" lists you might get a tablescan.
    Remember, Metalink is not perfect; most of it is just written by ordinary people who learned about Oracle in the normal fashion.
    Quick example to demonstrate the difference between concatenation and iteration:
    drop table t1;
    create table t1 as
    select
         rownum     id,
         rownum     n1,
         rpad('x',100)     padding
    from
         all_objects
    where
         rownum <= 10000;
    create index t1_i1 on t1(id);
    execute dbms_stats.gather_table_stats(user,'t1')
    set autotrace traceonly explain
    select
         /*+ use_concat(t1) */
         n1
    from
         t1
    where
         id in (10,20,30,40,50,60,70,80,90,100);
    set autotrace off
    The execution plan I got from 8.1.7.4 was as follows, showing the transformation to a UNION ALL; this is concatenation, and it required 10 query block optimisations (which were all done three times):
    Execution Plan
       0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=20 Card=10 Bytes=80)
       1    0   CONCATENATION
       2    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
       3    2       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
       4    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
       5    4       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
       6    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
       7    6       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
       8    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
       9    8       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
      10    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
      11   10       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
      12    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
      13   12       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
      14    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
      15   14       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
      16    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
      17   16       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
      18    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
      19   18       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
      20    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
      21   20       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
    This is the execution plan I got from 9.2.0.8, which doesn't transform to the UNION ALL, and only needs to optimise one query block.
    Execution Plan
       0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3 Card=10 Bytes=80)
       1    0   INLIST ITERATOR
       2    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=3 Card=10 Bytes=80)
       3    2       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=2 Card=10)
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk

  • Known performance issue bugs and patches for R12.1.3

    Hi Team,
    We have upgraded Oracle Applications from 12.1.1 to 12.1.3.
    I want to apply the patches for the known performance bugs in R12.1.3.
    Please let me know the details.
    Thanks,

    Are you currently facing any performance issues on 12.1.3?
    Tuning All Layers Of E-Business Suite – Performance Topics
    http://www.oracle.com/technetwork/apps-tech/collab2011-tuning-ebusiness-421966.pdf
    • Start with Best Practices : (note: 1121043.1)
    • SQL Tuning
    – Trace files
    – SQLT output (note: 215187.1)
    – Trace Analyzer (note: 224270.1)
    – AWR Report (note: 748642.1)
    – AWR SQL Report (awrsqrpt.sql)
    – 11g SQL Monitoring
    – SQL Tuning Advisor
    • PL/SQL Tuning
    – Product logs
    – PL/SQL Profiler (note: 808005.1)
    • Reports Tracing
    – note: 111311.1
    • Database Tuning
    – AWR Report (note: 748642.1)
    – ADDM report (note: 250655.1)
    – Automated Session History (ASH) Report
    – LTOM output (note: 352363.1)
    • Forms Tuning
    • Forms Tracing (note: 373548.1)
    • FRD Log (note: 445166.1)
    – Generic note: 438652.1
    • Middletier Tuning
    – JVM Logs
    – JVM Sizing (note: 362851.1)
    – JDBC Tuning (note: 278868.1)
    • OS
    – OSWatcher (note: 301137.1)

  • Performance Issue while Joining two Big Tables

    Hello Experts,
    We have the following scenario, wherein we need the Sales rep associated with a Sales Order. This information is available in the VBPA table, with Sales Order, Sales Order Item and Partner Function being the key.
    Now I'm interested in only one Partner Function, e.g. 'ZP'. This table has around 120 million records.
    I tried both options:
    Option 1 - Join this table(VBPA) with Sales Order Item table(VBAP) within the Data Foundation Layer of the Analytic View and doing the filtering on Partner Function
    Option 2 - Create a Attribute View for VBPA having filtering on Partner Function and then join this Attribute View in the Logical Join Layer with the Data Foundation table.
    Both these options are killing the performance.
    Is there any way to achieve this?
    Your expert opinion is greatly appreciated!!
    Thanks & regards,
    Jomy

    Hi,
    Lars is correct. You may have to spend a little bit more time and give a bigger picture.
    I have used this join. It takes about 2 to 3 seconds to execute this join for me. My data volume is less than yours.
    You must have used a left outer join when joining the attribute view (with the constant filter ZP as specified in your first post) to the data foundation. Please cross-check once again, as sometimes my fat finger has inadvertently changed the join type and I had to go back and fix it. If this is a left outer join or a referential join, HANA does not perform the join if you are not requesting any field from the attribute view on table VBPA. This used to be a problem due to a bug in SP4 but got fixed in SP5.
    However, if you have performed this join in the data foundation, it does enforce the join even if you did not ask for any fields from the VBPA table, the reason being that you have put a constant filter ZR there (the LIPS->VBPA join in the data foundation, as specified in one of your later replies).
    If any measure you are selecting in the analytic view is a restricted measure or a calculated measure that needs some field from VBPA, then the join will be enforced, as you would agree. This is where I had the most trouble. My join itself is not bad, but my business requirement to get the current value of a partner attribute in a higher-level calculation view sent too much data from the analytic view to the calculation view.
    Please send the full diagram of your model and the vizplan. Also, if you are using a front end (like Analysis Office), please trap the SQL sent from the front-end tool and include it in the message. Even a straight SQL statement in which you have detected this performance issue will be helpful.
    Ramana

  • A Performance Issue

    Hello Oracle Gurus,
    I have hit a performance issue, described below; can anyone help me figure out how to improve it?
    Database Version: 10.2.0.4.0
    Table: non_fin_hist, a table partitioned by range on trxn_dt; each partition holds one year of data.
    Index: there is a global non-partitioned index created on trxn_dt.
    Query1: SELECT * FROM non_fin_hist t WHERE t.trxn_dt >= SYSDATE - 5 AND t.trxn_dt < SYSDATE;
    Row Count:
    SQL> SELECT COUNT(*)
    2 FROM non_fin_hist t
    3 WHERE t.trxn_dt >= SYSDATE -5
    4 AND t.trxn_dt < SYSDATE;
    COUNT(*)
    37805
    Explain plan as below:
    SELECT STATEMENT, GOAL = ALL_ROWS                                        30     28    4508
      FILTER
        TABLE ACCESS BY GLOBAL INDEX ROWID   CSTDEV.NON_FIN_HIST             30     28    4508
          INDEX RANGE SCAN   CSTDEV.NON_FIN_HIST_TRXN_DT_IDX                  6     24
    Query2: SELECT * FROM non_fin_hist t WHERE t.trxn_dt >= SYSDATE - 100 AND t.trxn_dt < SYSDATE - 95;
    Rows Count:
    SQL> SELECT COUNT(*)
    2 FROM non_fin_hist t
    3 WHERE t.trxn_dt >= SYSDATE - 100
    4 AND t.trxn_dt < SYSDATE - 95;
    COUNT(*)
    25674
    Explain plan as below:
    SELECT STATEMENT, GOAL = ALL_ROWS                          8337    32261    5194021
      FILTER
        PARTITION RANGE ITERATOR                               8337    32261    5194021
          TABLE ACCESS FULL   CSTDEV.NON_FIN_HIST              8337    32261    5194021
    My question is: why do the costs of the two queries differ so much? And why does query 2 use a partition range iterator instead of the global index scan, even though its result set is smaller?
    Thanks a lot.
    Tony

    808370 wrote: (question quoted above)
    This is not my strong suit, so I could be way off base, but let me hazard a guess, and if I get shot down we'll both learn something...
    In your first query, you were asking for data at most 5 days old, based on trxn_dt; presumably it would only need to hit one partition. In the second query, you were asking for data between 95 and 100 days old. Does that span multiple partitions, and represent a significantly greater percentage of the table? Clearly the two queries are selecting different portions of the table, and different percentages of the table, so I would expect a difference in the plans.
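    One quick way to check is to look at the Pstart/Pstop columns of the execution plan; a sketch using the table from the post (DBMS_XPLAN.DISPLAY is standard in 10.2):
    EXPLAIN PLAN FOR
    SELECT * FROM non_fin_hist t
    WHERE t.trxn_dt >= SYSDATE - 100 AND t.trxn_dt < SYSDATE - 95;
    -- Show the plan, including partition start/stop (Pstart/Pstop) columns:
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);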
    Hello.
    Thanks for your kind reply. Both queries ask for 5 days of data, and one partition holds one year of data. I think each query spans only one partition.
    Tony
    Edited by: 808370 on May 9, 2011 7:30 AM

  • Performance Issue with the query

    Hi Experts,
    While working on PeopleSoft today, I got stuck on a problem where one of the queries performs really badly. Even with the statistics up to date, the query does not perform well: on one of the tables it spends a lot of time doing IO (db file sequential read waits). If I delete the stats for that table and let dynamic sampling take place, the query works fine and there is no IO wait on the table.
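    For what it's worth, here is a sketch of making that workaround stick: delete the statistics and lock them so a scheduled gather job does not bring them back. The owner and table names are assumptions (SYSADM as a typical PeopleSoft schema; PS_TL_PAYABLE_TIME as the table the plan below spends its IO on); adjust to your environment.
    BEGIN
      -- Assumed owner/table; both procedures are standard DBMS_STATS calls.
      DBMS_STATS.DELETE_TABLE_STATS(ownname => 'SYSADM', tabname => 'PS_TL_PAYABLE_TIME');
      DBMS_STATS.LOCK_TABLE_STATS(ownname => 'SYSADM', tabname => 'PS_TL_PAYABLE_TIME');
    END;
    /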
    Here is the query
    SELECT A.BUSINESS_UNIT_PC, A.PROJECT_ID,  E.DESCR,  A.EMPLID,  D.NAME,  C.DESCR,  A.TRC,
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 1, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 2, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 3, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 4, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 5, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 6, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 7, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 8, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 9, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 10, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 11, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 12, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 13, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 14, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 15, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 16, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 17, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 18, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 19, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 20, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 21, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 22, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 23, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 24, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 25, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 26, 'DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 27, 'MM-DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 28, 'MM-DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 29, 'MM-DD'), A.TL_QUANTITY, 0)),
      SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 30, 'MM-DD'), A.TL_QUANTITY, 0)),
      SUM( A.EST_GROSS),
      DECODE( A.TRC, 'ROVA1', 0, 'ROVA2', 0, ( SUM( A.EST_GROSS)/100) * 9.75),
      '2012-07-01',
      '2012-07-31',
      SUM(     DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 1, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 2, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 3, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 4, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 5, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 6, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 7, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 8, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 9, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 10, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 11, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 12, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 13, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 14, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 15, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 16, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 17, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 18, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 19, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 20, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 21, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 22, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 23, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 24, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 25, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 26, 'DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 27, 'MM-DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 28, 'MM-DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 29, 'MM-DD'), A.TL_QUANTITY, 0) +
         DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 30, 'MM-DD'), A.TL_QUANTITY, 0)),
      DECODE( A.CURRENCY_CD, 'USD', '$', 'GBP', '£', 'EUR', '€', 'AED', 'D', 'NGN', 'N', ' '),
      DECODE(SUBSTR( F.GP_PAYGROUP, 1, 2), 'NG', 'NG', F.PER_ORG),
      DECODE(TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'MM'),
          TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'MM'),
          TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'Mon ')
      || TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'YYYY'), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'Mon ')
      || TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'YYYY')  || ' / '  || TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'Mon ')
      || TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'YYYY')),
      C.TRC,  TO_CHAR(C.EFFDT,'YYYY-MM-DD'),
      E.BUSINESS_UNIT, 
      E.PROJECT_ID
    FROM
         PS_TL_PAYABLE_TIME A, 
         PS_TL_TRC_TBL C,
         PS_PROJECT E, 
         PS_PERSONAL_DATA D,
           PS_PERALL_SEC_QRY D1, 
         PS_JOB F, 
         PS_EMPLMT_SRCH_QRY F1
    WHERE
         D.EMPLID = D1.EMPLID
    AND      D1.OPRID   = 'TMANI'
    AND      F.EMPLID   = F1.EMPLID
    AND      F.EMPL_RCD = F1.EMPL_RCD
    AND      F1.OPRID   = 'TMANI'
    AND      A.DUR BETWEEN TO_DATE('2012-07-01','YYYY-MM-DD') AND TO_DATE('2012-07-31','YYYY-MM-DD')
    AND      C.TRC   = A.TRC
    AND      C.EFFDT =  (SELECT
                   MAX(C_ED.EFFDT) 
                  FROM
                   PS_TL_TRC_TBL C_ED
                     WHERE
                   C.TRC = C_ED.TRC 
                    AND C_ED.EFFDT <= SYSDATE)
    AND      E.BUSINESS_UNIT = A.BUSINESS_UNIT_PC
    AND      E.PROJECT_ID    = A.PROJECT_ID
    AND      A.EMPLID        = D.EMPLID
    AND      A.EMPLID        = F.EMPLID
    AND      A.EMPL_RCD      = F.EMPL_RCD
    AND      F.EFFDT         =  (SELECT
                        MAX(F_ED.EFFDT) 
                      FROM
                        PS_JOB F_ED
                        WHERE
                        F.EMPLID = F_ED.EMPLID 
                      AND      F.EMPL_RCD = F_ED.EMPL_RCD
                         AND      F_ED.EFFDT <= SYSDATE)
    AND      F.EFFSEQ =  (SELECT
                   MAX(F_ES.EFFSEQ) 
                   FROM
                   PS_JOB F_ES
                     WHERE
                   F.EMPLID = F_ES.EMPLID 
                   AND F.EMPL_RCD = F_ES.EMPL_RCD
                      AND F.EFFDT  = F_ES.EFFDT)
    AND      F.GP_PAYGROUP  = DECODE(' ', ' ', F.GP_PAYGROUP, ' ')
    AND      A.CURRENCY_CD  = DECODE(' ', ' ', A.CURRENCY_CD, ' ')
    AND      DECODE(SUBSTR( F.GP_PAYGROUP, 1, 2), 'NG', 'L', DECODE( F.PER_ORG, 'CWR', 'L', 'E')) = DECODE(' ', ' ', DECODE(SUBSTR( F.GP_PAYGROUP, 1, 2), 'NG', 'L', DECODE( F.PER_ORG, 'CWR', 'L', 'E')), 'L', 'L', 'E', 'E', 'A', DECODE(SUBSTR( F.GP_PAYGROUP, 1, 2), 'NG', 'L', DECODE( F.PER_ORG, 'CWR', 'L', 'E')))
    AND ( A.EMPLID, A.EMPL_RCD) IN  (SELECT
                             B.EMPLID,
                             B.EMPL_RCD 
                        FROM
                             PS_TL_GROUP_DTL B
                                    WHERE
                              B.TL_GROUP_ID = DECODE('ER012', ' ', B.TL_GROUP_ID, 'ER012'))
    AND      E.PROJECT_USER1   = DECODE(' ', ' ', E.PROJECT_USER1, ' ')
    AND      A.PROJECT_ID      = DECODE(' ', ' ', A.PROJECT_ID, ' ')
    AND      A.PAYABLE_STATUS <>
      CASE
        WHEN to_number(TO_CHAR(sysdate, 'DD')) < 15
        THEN
          CASE
            WHEN A.DUR > last_day(add_months(sysdate, -2))
            THEN ' '
            ELSE 'NA'
          END
        ELSE
          CASE
            WHEN A.DUR > last_day(add_months(sysdate, -1))
            THEN ' '
            ELSE 'NA'
          END
      END
    AND      A.EMPLID = DECODE(' ', ' ', A.EMPLID, ' ')
    GROUP BY A.BUSINESS_UNIT_PC,
           A.PROJECT_ID,
           E.DESCR,
         A.EMPLID,
           D.NAME,
           C.DESCR,
           A.TRC,
           '2012-07-01',
           '2012-07-31',
           DECODE( A.CURRENCY_CD, 'USD', '$', 'GBP', '£', 'EUR', '€', 'AED', 'D', 'NGN', 'N', ' '),
           DECODE(SUBSTR( F.GP_PAYGROUP, 1, 2), 'NG', 'NG', F.PER_ORG),
           DECODE(TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'MM'), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'MM'), TO_CHAR(to_date('2012-07-31',      'YYYY-MM-DD'), 'Mon ')
           || TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'YYYY'), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'Mon ')
           || TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'YYYY')  || ' / '
           || TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'Mon ')  || TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'YYYY')),
           C.TRC,  TO_CHAR(C.EFFDT,'YYYY-MM-DD'),
           E.BUSINESS_UNIT,  E.PROJECT_ID
    HAVING SUM( A.EST_GROSS) <> 0
    ORDER BY 1,
            2,
             5,
              6;
    Here is the screenshot of the IO wait:
    [https://lh4.googleusercontent.com/-6PFW2FSK3yE/UCrwUbZ0pvI/AAAAAAAAAPA/eHM48AOC0Uo]
    Edited by: Oceaner on Aug 14, 2012 5:38 PM

    Here is the execution plan with all the statistics present
    PLAN_TABLE_OUTPUT
    Plan hash value: 1575300420
    | Id  | Operation                                   | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                            |                    |     1 |   237 |  2703   (1)| 00:00:33 |
    |*  1 |  FILTER                                     |                    |       |       |            |          |
    |   2 |   SORT GROUP BY                             |                    |     1 |   237 |            |          |
    |   3 |    CONCATENATION                            |                    |       |       |            |          |
    |*  4 |     FILTER                                  |                    |       |       |            |          |
    |*  5 |      FILTER                                 |                    |       |       |            |          |
    |*  6 |       HASH JOIN                             |                    |     1 |   237 |  1695   (1)| 00:00:21 |
    |*  7 |        HASH JOIN                            |                    |     1 |   204 |  1689   (1)| 00:00:21 |
    |*  8 |         HASH JOIN SEMI                      |                    |     1 |   193 |  1352   (1)| 00:00:17 |
    |   9 |          NESTED LOOPS                       |                    |       |       |            |          |
    |  10 |           NESTED LOOPS                      |                    |     1 |   175 |  1305   (1)| 00:00:16 |
    |* 11 |            HASH JOIN                        |                    |     1 |   148 |  1304   (1)| 00:00:16 |
    |  12 |             JOIN FILTER CREATE              | :BF0000            |       |       |            |          |
    |  13 |              NESTED LOOPS                   |                    |       |       |            |          |
    |  14 |               NESTED LOOPS                  |                    |     1 |   134 |   967   (1)| 00:00:12 |
    |  15 |                NESTED LOOPS                 |                    |     1 |   103 |   964   (1)| 00:00:12 |
    |* 16 |                 TABLE ACCESS FULL           | PS_PROJECT         |   197 |  9062 |   278   (1)| 00:00:04 |
    |* 17 |                 TABLE ACCESS BY INDEX ROWID | PS_TL_PAYABLE_TIME |     1 |    57 |     7   (0)| 00:00:01 |
    |* 18 |                  INDEX RANGE SCAN           | IDX$$_C44D0007     |    16 |       |     3   (0)| 00:00:01 |
    |* 19 |                INDEX RANGE SCAN             | IDX$$_3F450003     |     1 |       |     2   (0)| 00:00:01 |
    |* 20 |               TABLE ACCESS BY INDEX ROWID   | PS_JOB             |     1 |    31 |     3   (0)| 00:00:01 |
    |  21 |             VIEW                            | PS_EMPLMT_SRCH_QRY |  5428 | 75992 |   336   (2)| 00:00:05 |
    PLAN_TABLE_OUTPUT
    |  22 |              SORT UNIQUE                    |                    |  5428 |   275K|   336   (2)| 00:00:05 |
    |* 23 |               FILTER                        |                    |       |       |            |          |
    |  24 |                JOIN FILTER USE              | :BF0000            | 55671 |  2827K|   335   (1)| 00:00:05 |
    |  25 |                 NESTED LOOPS                |                    | 55671 |  2827K|   335   (1)| 00:00:05 |
    |  26 |                  TABLE ACCESS BY INDEX ROWID| PSOPRDEFN          |     1 |    20 |     2   (0)| 00:00:01 |
    |* 27 |                   INDEX UNIQUE SCAN         | PS_PSOPRDEFN       |     1 |       |     1   (0)| 00:00:01 |
    |* 28 |                  TABLE ACCESS FULL          | PS_SJT_PERSON      | 55671 |  1739K|   333   (1)| 00:00:04 |
    |  29 |                CONCATENATION                |                    |       |       |            |          |
    |  30 |                 NESTED LOOPS                |                    |     1 |    63 |     2   (0)| 00:00:01 |
    |* 31 |                  INDEX FAST FULL SCAN       | PSASJT_OPR_CLS     |     1 |    24 |     2   (0)| 00:00:01 |
    |* 32 |                  INDEX UNIQUE SCAN          | PSASJT_CLASS_ALL   |     1 |    39 |     0   (0)| 00:00:01 |
    |  33 |                 NESTED LOOPS                |                    |     1 |    63 |     1   (0)| 00:00:01 |
    |* 34 |                  INDEX UNIQUE SCAN          | PSASJT_OPR_CLS     |     1 |    24 |     1   (0)| 00:00:01 |
    |* 35 |                  INDEX UNIQUE SCAN          | PSASJT_CLASS_ALL   |     1 |    39 |     0   (0)| 00:00:01 |
    |  36 |                NESTED LOOPS                 |                    |     1 |    63 |     1   (0)| 00:00:01 |
    |* 37 |                 INDEX UNIQUE SCAN           | PSASJT_OPR_CLS     |     1 |    24 |     1   (0)| 00:00:01 |
    |* 38 |                 INDEX UNIQUE SCAN           | PSASJT_CLASS_ALL   |     1 |    39 |     0   (0)| 00:00:01 |
    |* 39 |            INDEX UNIQUE SCAN                | PS_PERSONAL_DATA   |     1 |       |     0   (0)| 00:00:01 |
    |  40 |           TABLE ACCESS BY INDEX ROWID       | PS_PERSONAL_DATA   |     1 |    27 |     1   (0)| 00:00:01 |
    |* 41 |          INDEX FAST FULL SCAN               | PS_TL_GROUP_DTL    |   323 |  5814 |    47   (3)| 00:00:01 |
    |  42 |         VIEW                                | PS_PERALL_SEC_QRY  |  7940 | 87340 |   336   (2)| 00:00:05 |
    |  43 |          SORT UNIQUE                        |                    |  7940 |   379K|   336   (2)| 00:00:05 |
    |* 44 |           FILTER                            |                    |       |       |            |          |
    |* 45 |            FILTER                           |                    |       |       |            |          |
    |  46 |             NESTED LOOPS                    |                    | 55671 |  2663K|   335   (1)| 00:00:05 |
    |  47 |              TABLE ACCESS BY INDEX ROWID    | PSOPRDEFN          |     1 |    20 |     2   (0)| 00:00:01 |
    |* 48 |               INDEX UNIQUE SCAN             | PS_PSOPRDEFN       |     1 |       |     1   (0)| 00:00:01 |
    PLAN_TABLE_OUTPUT
    |* 49 |              TABLE ACCESS FULL              | PS_SJT_PERSON      | 55671 |  1576K|   333   (1)| 00:00:04 |
    |  50 |            CONCATENATION                    |                    |       |       |            |          |
    |  51 |             NESTED LOOPS                    |                    |     1 |    63 |     2   (0)| 00:00:01 |
    |* 52 |              INDEX FAST FULL SCAN           | PSASJT_OPR_CLS     |     1 |    24 |     2   (0)| 00:00:01 |
    |* 53 |              INDEX UNIQUE SCAN              | PSASJT_CLASS_ALL   |     1 |    39 |     0   (0)| 00:00:01 |
    |  54 |             NESTED LOOPS                    |                    |     1 |    63 |     1   (0)| 00:00:01 |
    |* 55 |              INDEX UNIQUE SCAN              | PSASJT_OPR_CLS     |     1 |    24 |     1   (0)| 00:00:01 |
    |* 56 |              INDEX UNIQUE SCAN              | PSASJT_CLASS_ALL   |     1 |    39 |     0   (0)| 00:00:01 |
    |  57 |            CONCATENATION                    |                    |       |       |            |          |
    |  58 |             NESTED LOOPS                    |                    |     1 |    63 |     1   (0)| 00:00:01 |
    |* 59 |              INDEX UNIQUE SCAN              | PSASJT_OPR_CLS     |     1 |    24 |     1   (0)| 00:00:01 |
    |* 60 |              INDEX UNIQUE SCAN              | PSASJT_CLASS_ALL   |     1 |    39 |     0   (0)| 00:00:01 |
    |  61 |             NESTED LOOPS                    |                    |     1 |    63 |     1   (0)| 00:00:01 |
    |* 62 |              INDEX UNIQUE SCAN              | PSASJT_OPR_CLS     |     1 |    24 |     1   (0)| 00:00:01 |
    |* 63 |              INDEX UNIQUE SCAN              | PSASJT_CLASS_ALL   |     1 |    39 |     0   (0)| 00:00:01 |
    |  64 |            NESTED LOOPS                     |                    |     1 |    63 |     3   (0)| 00:00:01 |
    |  65 |             INLIST ITERATOR                 |                    |       |       |            |          |
    |* 66 |              INDEX RANGE SCAN               | PSASJT_CLASS_ALL   |     1 |    39 |     2   (0)| 00:00:01 |
    |* 67 |             INDEX RANGE SCAN                | PSASJT_OPR_CLS     |     1 |    24 |     1   (0)| 00:00:01 |
    |  68 |        TABLE ACCESS FULL                    | PS_TL_TRC_TBL      |   922 | 30426 |     6   (0)| 00:00:01 |
    |  69 |      SORT AGGREGATE                         |                    |     1 |    20 |            |          |
    |* 70 |       INDEX RANGE SCAN                      | PSAJOB             |     1 |    20 |     3   (0)| 00:00:01 |
    |  71 |      SORT AGGREGATE                         |                    |     1 |    14 |            |          |
    |* 72 |       INDEX RANGE SCAN                      | PS_TL_TRC_TBL      |     2 |    28 |     2   (0)| 00:00:01 |
    |  73 |      SORT AGGREGATE                         |                    |     1 |    23 |            |          |
    |* 74 |       INDEX RANGE SCAN                      | PSAJOB             |     1 |    23 |     3   (0)| 00:00:01 |
    |* 75 |     FILTER                                  |                    |       |       |            |          |
    PLAN_TABLE_OUTPUT
    |* 76 |      FILTER                                 |                    |       |       |            |          |
    |* 77 |       HASH JOIN                             |                    |     1 |   237 |   974   (2)| 00:00:12 |
    |* 78 |        HASH JOIN                            |                    |     1 |   226 |   637   (1)| 00:00:08 |
    |* 79 |         HASH JOIN                           |                    |     1 |   193 |   631   (1)| 00:00:08 |
    |  80 |          NESTED LOOPS                       |                    |       |       |            |          |
    |  81 |           NESTED LOOPS                      |                    |     1 |   179 |   294   (1)| 00:00:04 |
    |  82 |            NESTED LOOPS                     |                    |     1 |   152 |   293   (1)| 00:00:04 |
    |  83 |             NESTED LOOPS                    |                    |     1 |   121 |   290   (1)| 00:00:04 |
    |* 84 |              HASH JOIN SEMI                 |                    |     1 |    75 |   289   (1)| 00:00:04 |
    |* 85 |               TABLE ACCESS BY INDEX ROWID   | PS_TL_PAYABLE_TIME |     1 |    57 |   242   (0)| 00:00:03 |
    |* 86 |                INDEX SKIP SCAN              | IDX$$_C44D0007     |   587 |       |   110   (0)| 00:00:02 |
    |* 87 |               INDEX FAST FULL SCAN          | PS_TL_GROUP_DTL    |   323 |  5814 |    47   (3)| 00:00:01 |
    |* 88 |              TABLE ACCESS BY INDEX ROWID    | PS_PROJECT         |     1 |    46 |     1   (0)| 00:00:01 |
    |* 89 |               INDEX UNIQUE SCAN             | PS_PROJECT         |     1 |       |     0   (0)| 00:00:01 |
    |* 90 |             TABLE ACCESS BY INDEX ROWID     | PS_JOB             |     1 |    31 |     3   (0)| 00:00:01 |
    |* 91 |              INDEX RANGE SCAN               | IDX$$_3F450003     |     1 |       |     2   (0)| 00:00:01 |
    |* 92 |            INDEX UNIQUE SCAN                | PS_PERSONAL_DATA   |     1 |       |     0   (0)| 00:00:01 |
    |  93 |           TABLE ACCESS BY INDEX ROWID       | PS_PERSONAL_DATA   |     1 |    27 |     1   (0)| 00:00:01 |
    |  94 |          VIEW                               | PS_EMPLMT_SRCH_QRY |  5428 | 75992 |   336   (2)| 00:00:05 |
    |  95 |           SORT UNIQUE                       |                    |  5428 |   275K|   336   (2)| 00:00:05 |
    |* 96 |            FILTER                           |                    |       |       |            |          |
    |  97 |             NESTED LOOPS                    |                    | 55671 |  2827K|   335   (1)| 00:00:05 |
    |  98 |              TABLE ACCESS BY INDEX ROWID    | PSOPRDEFN          |     1 |    20 |     2   (0)| 00:00:01 |
    |* 99 |               INDEX UNIQUE SCAN             | PS_PSOPRDEFN       |     1 |       |     1   (0)| 00:00:01 |
    |*100 |              TABLE ACCESS FULL              | PS_SJT_PERSON      | 55671 |  1739K|   333   (1)| 00:00:04 |
    | 101 |             CONCATENATION                   |                    |       |       |            |          |
    | 102 |              NESTED LOOPS                   |                    |     1 |    63 |     2   (0)| 00:00:01 |
    PLAN_TABLE_OUTPUT
    |*103 |               INDEX FAST FULL SCAN          | PSASJT_OPR_CLS     |     1 |    24 |     2   (0)| 00:00:01 |
    |*104 |               INDEX UNIQUE SCAN             | PSASJT_CLASS_ALL   |     1 |    39 |     0   (0)| 00:00:01 |
    | 105 |              NESTED LOOPS                   |                    |     1 |    63 |     1   (0)| 00:00:01 |
    |*106 |               INDEX UNIQUE SCAN             | PSASJT_OPR_CLS     |     1 |    24 |     1   (0)| 00:00:01 |
    |*107 |               INDEX UNIQUE SCAN             | PSASJT_CLASS_ALL   |     1 |    39 |     0   (0)| 00:00:01 |
    | 108 |             NESTED LOOPS                    |                    |     1 |    63 |     1   (0)| 00:00:01 |
    |*109 |              INDEX UNIQUE SCAN              | PSASJT_OPR_CLS     |     1 |    24 |     1   (0)| 00:00:01 |
    |*110 |              INDEX UNIQUE SCAN              | PSASJT_CLASS_ALL   |     1 |    39 |     0   (0)| 00:00:01 |
    | 111 |         TABLE ACCESS FULL                   | PS_TL_TRC_TBL      |   922 | 30426 |     6   (0)| 00:00:01 |
    | 112 |        VIEW                                 | PS_PERALL_SEC_QRY  |  7940 | 87340 |   336   (2)| 00:00:05 |
    | 113 |         SORT UNIQUE                         |                    |  7940 |   379K|   336   (2)| 00:00:05 |
    |*114 |          FILTER                             |                    |       |       |            |          |
    |*115 |           FILTER                            |                    |       |       |            |          |
    | 116 |            NESTED LOOPS                     |                    | 55671 |  2663K|   335   (1)| 00:00:05 |
    | 117 |             TABLE ACCESS BY INDEX ROWID     | PSOPRDEFN          |     1 |    20 |     2   (0)| 00:00:01 |
    |*118 |              INDEX UNIQUE SCAN              | PS_PSOPRDEFN       |     1 |       |     1   (0)| 00:00:01 |
    |*119 |             TABLE ACCESS FULL               | PS_SJT_PERSON      | 55671 |  1576K|   333   (1)| 00:00:04 |
    | 120 |           CONCATENATION                     |                    |       |       |            |          |
    | 121 |            NESTED LOOPS                     |                    |     1 |    63 |     2   (0)| 00:00:01 |
    |*122 |             INDEX FAST FULL SCAN            | PSASJT_OPR_CLS     |     1 |    24 |     2   (0)| 00:00:01 |
    |*123 |             INDEX UNIQUE SCAN               | PSASJT_CLASS_ALL   |     1 |    39 |     0   (0)| 00:00:01 |
    | 124 |            NESTED LOOPS                     |                    |     1 |    63 |     1   (0)| 00:00:01 |
    |*125 |             INDEX UNIQUE SCAN               | PSASJT_OPR_CLS     |     1 |    24 |     1   (0)| 00:00:01 |
    |*126 |             INDEX UNIQUE SCAN               | PSASJT_CLASS_ALL   |     1 |    39 |     0   (0)| 00:00:01 |
    | 127 |           CONCATENATION                     |                    |       |       |            |          |
    | 128 |            NESTED LOOPS                     |                    |     1 |    63 |     1   (0)| 00:00:01 |
    |*129 |             INDEX UNIQUE SCAN               | PSASJT_OPR_CLS     |     1 |    24 |     1   (0)| 00:00:01 |
    PLAN_TABLE_OUTPUT
    |*130 |             INDEX UNIQUE SCAN               | PSASJT_CLASS_ALL   |     1 |    39 |     0   (0)| 00:00:01 |
    | 131 |            NESTED LOOPS                     |                    |     1 |    63 |     1   (0)| 00:00:01 |
    |*132 |             INDEX UNIQUE SCAN               | PSASJT_OPR_CLS     |     1 |    24 |     1   (0)| 00:00:01 |
    |*133 |             INDEX UNIQUE SCAN               | PSASJT_CLASS_ALL   |     1 |    39 |     0   (0)| 00:00:01 |
    | 134 |           NESTED LOOPS                      |                    |     1 |    63 |     3   (0)| 00:00:01 |
    | 135 |            INLIST ITERATOR                  |                    |       |       |            |          |
    |*136 |             INDEX RANGE SCAN                | PSASJT_CLASS_ALL   |     1 |    39 |     2   (0)| 00:00:01 |
    |*137 |            INDEX RANGE SCAN                 | PSASJT_OPR_CLS     |     1 |    24 |     1   (0)| 00:00:01 |
    | 138 |      SORT AGGREGATE                         |                    |     1 |    20 |            |          |
    |*139 |       INDEX RANGE SCAN                      | PSAJOB             |     1 |    20 |     3   (0)| 00:00:01 |
    | 140 |      SORT AGGREGATE                         |                    |     1 |    14 |            |          |
    |*141 |       INDEX RANGE SCAN                      | PS_TL_TRC_TBL      |     2 |    28 |     2   (0)| 00:00:01 |
    | 142 |      SORT AGGREGATE                         |                    |     1 |    23 |            |          |
    |*143 |       INDEX RANGE SCAN                      | PSAJOB             |     1 |    23 |     3   (0)| 00:00:01 |
    | 144 |      SORT AGGREGATE                         |                    |     1 |    14 |            |          |
    |*145 |       INDEX RANGE SCAN                      | PS_TL_TRC_TBL      |     2 |    28 |     2   (0)| 00:00:01 |
    | 146 |      SORT AGGREGATE                         |                    |     1 |    20 |            |          |
    |*147 |       INDEX RANGE SCAN                      | PSAJOB             |     1 |    20 |     3   (0)| 00:00:01 |
    | 148 |      SORT AGGREGATE                         |                    |     1 |    23 |            |          |
    |*149 |       INDEX RANGE SCAN                      | PSAJOB             |     1 |    23 |     3   (0)| 00:00:01 |
    ------------------------------------------------------------------------------------------------------------------
    Most of the I/O wait occurs at step 17, though the explain plan alone does not show this. But when we run this query from the PeopleSoft application (online page), it is visible through Grid Control SQL Monitoring.
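    Since the waits only show up in Grid Control SQL Monitoring, the same report can also be pulled from SQL*Plus. A minimal sketch, assuming 11g+ with the Tuning Pack licensed; the SQL_ID below is a placeholder, not taken from the post:
    SET LONG 1000000 PAGESIZE 0
    SELECT DBMS_SQLTUNE.REPORT_SQL_MONITOR(
             sql_id       => '0abc123def456',   -- placeholder: the statement's real SQL_ID
             type         => 'TEXT',
             report_level => 'ALL') AS report
    FROM dual;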
    The second part of the question continues....

  • Report Performance Issue - Activity

    Hi gurus,
    I'm developing an Activity report against the transactional database (online real-time objects).
    The purpose of the report is to list all contact-related activities, plus the activities NOT related to any contact, by activity owner (user id).
    In order to fulfil that requirement I've created 2 reports:
    1) All activities related to a contact -- Report A
    pulls in Activity ID, Activity Type, Status, Contact ID
    2) All activities not related to a contact UNION all activities related to a contact (base report) -- Report B
    To get the list of activities not related to a contact I'm using an Advanced filter based on the result of another request, which I think is the part that slows down the query:
    <Activity ID not equal to any Activity ID in Report B>
    Has anyone encountered performance issues due to an advanced filter in an analysis before?
    Any input is really appreciated.
    Thanks in advance,
    Fina

    Fina,
    Union is always the last option. If you can get all the records in one report, do not use a union.
    Since all the records you are targeting are in the Activity subject area, it is not necessary to combine reports. Add a column with the following logic (see the sketch below):
    if contact id is null (or = 'Unspecified') then owner name else contact name
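    In an Answers column formula that could look like the following. This is only a sketch; the logical table and column names are assumptions, not taken from the original report:
    CASE
      WHEN "Contact"."Contact ID" IS NULL OR "Contact"."Contact ID" = 'Unspecified'
      THEN "Activity"."Owner Name"
      ELSE "Contact"."Contact Name"
    END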
    Hopefully this helps.

  • Interested in performance issues? Read this! If you can explain it, you're a master Jedi!

    This is the question we will try to answer...
    What is the bottleneck (hardware) of Adobe Premiere Pro CS6?
    I used PPBM5 as a benchmark testing template.
    All the data and logs have been collected using performance counters.
    First of all, let me describe my computer...
    Operating System
    Microsoft Windows 8 Pro 64-bit
    CPU
    Intel Xeon E5 2687W @ 3.10GHz
    Sandy Bridge-EP/EX 32nm Technology
    RAM
    Corsair Dominator Platinum 64.0 GB DDR3
    Motherboard
    EVGA Corporation Classified SR-X
    Graphics
    PNY Nvidia Quadro 6000
    EVGA Nvidia GTX 680   // Yes, I created bench stats for both cards
    Hard Drives
    16.0GB Romex RAMDISK (RAID)
    556GB LSI MegaRAID 9260-8i SATA3 6GB/s 5 disks with Fastpath Chip Installed (RAID 0)
    I have other RAID arrays installed, but they are not relevant to the present post...
    PSU
    Cosair 1000 Watts
    After many days of tests, I want to share my results with the community and comment on them.
    CPU Introduction
    I tested my CPU, pushed it to its maximum speed to understand where the limit is and whether I can reach it, and logged all the results precisely in a graph (see picture 1).
    Intro: I tested my E5 Xeon 2687W (8 cores with Hyper-Threading - 16 threads) to find out whether programs can use all of it. I used Prime95 to get the result.  // I know this seems ordinary, but you will understand soon...
    The result: Yes, I can get 100% of my CPU with 1 program using 20 threads in parallel. The CPU gives everything it can!
    Comment: I put the 3 I/O measures (CPU, disk, RAM) on the graph of my computer during the test...
    (picture 1)
    Disk Introduction
    I tested my disks, pushed them to maximum speed to understand where the limit is, and logged all the results precisely in a graph (see picture 2).
    Intro: I tested my 556GB RAID 0 (LSI MegaRAID 9260-8i SATA3 6GB/s, 5 disks, with the FastPath chip installed) to see whether I can reach maximum disk usage (0% idle time).
    The result: As you can see in picture 2, yes, I can max out my drives at ~1.2 GB/sec read/write, steady!
    Comment: I put the 3 I/O measures (CPU, disk, RAM) on the graph of my computer during the test to see the impact of transferring many GB of data during ~10 sec...
    (picture 2)
    Now I know my limits!  It's time to dig deeper into the subject!
    PPBM5 (H.264) Result
    I rendered the sequence (H.264) using Adobe Media Encoder.
    The result:
    My CPU is not used at 100%; it hovers around 50%.
    My disk is totally idle!
    All processes are idle except Adobe Media Encoder.
    The transfer rate looks like a wave (up and down), probably caused by (encode... write... encode... write...)  // It's OK, ~5 MB/sec transfer rate!
    CPU power management gives 100% of the clock to the CPU during the encoding process (it's OK, the clock is stable during the process).
    RAM: more than enough!  39 GB of RAM free after the test!  // Excellent
    ~65 threads opened by Adobe Media Encoder (good: threads are a sign the program tries to use many cores!)
    GPU load on the card also looks like a wave (up and down): ~40% GPU usage during the encoding process.
    GPU RAM usage reaches 1.2 GB (no problem for the GTX 680, and no problem for the Quadro 6000 with its 6 GB of RAM!)
    Comment/Question: CPU is free (50%), disks are free (99%), GPU is free (60%), RAM is free (62%); my computer is not pushed to its limit during the encoding process.  Why?  Is there some time delay in the encoding process?
    Other: The Quadro 6000 and the GTX 680 give the same result!
    (picture 3)
    PPBM5 (Disk Test) Result (RAID LSI)
    I rendered the sequence (Disk Test) using Adobe Media Encoder onto my RAID 0 LSI disk.
    The result:
    My CPU is not used at 100%.
    My disk waves up and down, but stays far, far from its limit!
    All processes are idle except Adobe Media Encoder.
    The transfer rate waves up and down, probably caused by (buffering... write... buffering... write...)  // It's OK, ~375 MB/sec peak transfer rate!  Easy!
    CPU power management gives 100% of the clock to the CPU during the encoding process (it's OK, the clock is stable during the process).
    RAM: more than enough!  40.5 GB of RAM free after the test!  // Excellent
    ~48 threads opened by Adobe Media Encoder (good: threads are a sign the program tries to use many cores!)
    GPU load on the card = 0 (this kind of encoding does not use the GPU).
    GPU RAM usage is 400 MB (not used for encoding).
    Comment/Question: CPU is free (65%), disks are free (60%), GPU is free (100%), RAM is free (63%); my computer is not pushed to its limit during the encoding process.  Why?  Is there some time delay in the encoding process?
    (picture 4)
    PPBM5 (Disk Test) Result (Directly to RAM drive)
    I rendered the same sequence (Disk Test) using Adobe Media Encoder directly onto my RAM drive.
    Comment/Question: Look at the transfer rate in picture 5. It is exactly the same speed as with my RAID 0 LSI controller.  Impossible!  Look in the same picture at the transfer rate I can reach with the RAM drive (> 3.0 GB/sec steady), yet disk usage never even reaches 30%.  CPU is idle (70%), disk is idle (100%), GPU is idle (100%) and RAM is free (63%).  // This kind of result leaves me REALLY confused.  It smells like a bug and a big problem with hardware and I/O usage in CS6!
    (picture 5)
    PPBM5 (MPEG-DVD) Result
    I rendered the sequence (MPEG-DVD) using Adobe Media Encoder.
    The result:
    My CPU is not used at 100%.
    My disk is totally idle!
    All processes are idle except Adobe Media Encoder.
    The transfer rate waves up and down, probably caused by (encoding... write... encoding... write...)  // It's OK, ~2 MB/sec transfer rate!  A real joke!
    CPU power management gives 100% of the clock to the CPU during the encoding process (it's OK, the clock is stable during the process).
    RAM: more than enough!  40 GB of RAM free after the test!  // Excellent
    ~80 threads opened by Adobe Media Encoder (a lot of threads, but that's OK in multi-threaded apps!)
    GPU load on the card = 100% (this uses the maximum of my GPU).
    GPU RAM usage reaches 1 GB.
    Comment/Question: CPU is free (70%), disks are free (98%), GPU is loaded (MAX), RAM is free (63%); my computer is pushed to its limit during the encoding process for the GPU only.  So for this kind of encoding, the speed limit is set by the slowest component (the video card's GPU).
    Other: The Quadro 6000 is slower than the GTX 680 for this kind of encoding (~20 s slower than the GTX).
    (picture 6)
    Encoding a single FULL HD AVCHD clip to H.264 (Premiere Pro CS6)
    You can see the result in the picture.
    Comment/Question: CPU is free (55%), disks are free (99%), GPU is free (90%), RAM is free (65%); my computer is not pushed to its limit during the encoding process.  Why?  Adobe Premiere seems to have a bug in its thread management.  My hardware is idle!  I understand AVCHD can be very difficult to decode, but where is the waste?  My computer is willing, but the software is not!
    (picture 7)
    Render composition using 3D Raytracer in After Effects CS6
    You can look the result in the picture.
    Comment: The GPU seems to be the bottleneck when using After Effects.  CPU is free (99%), disks are free (98%), memory is free (60%), and it depends on the settings and the type of project.
    Other: The Quadro 6000 and the GTX 680 give the same composition rendering time.
    (picture 8)
    Conclusion
    There is nothing you can do (I think) with CS6 to get better performance right now.  The GTX 680 is the best consumer-grade card and the Quadro 6000 is the best professional card.  Both cards give really similar results (I will probably return my GTX 680 since I don't really get any better performance from it).  I have not used a Tesla card with my Quadro, but currently neither Premiere Pro nor After Effects uses multiple GPUs.  I tried to use both cards together (GTX and Quadro), but After Effects gives priority to the slower card (in this case, the GTX 680).
    Premiere Pro, I'm speechless!  Premiere Pro is not able to get the maximum performance out of my computer.  Not just 10% or 20% short, but 60% on average.  I'm a programmer; multi-threaded apps are difficult to manage and I can understand Adobe's programmers.  But if anybody has comments about this post, tricks or any kind of solution, please comment.  It seems to be a bug...
    Thank you.

    Patrick,
    I can't explain everything, but let me give you some background as I understand it.
    The first issue is that CS6 has a far less efficient internal buffering or caching system than CS5/5.5. That is why the MPEG encoding in CS6 is roughly 2-3 times slower than the same test with CS5. There is some 'under-the-hood' processing going on that causes this significant performance loss.
    The second issue is that AME does not handle regular memory and inter-process memory very well. I have described this here: Latest News
    As to your test results, there are some other noteworthy things to mention. 3D Ray tracing in AE is not very good in using all CUDA cores. In fact it is lousy, it only uses very few cores and the threading is pretty bad and does not use the video card's capabilities effectively. Whether that is a driver issue with nVidia or an Adobe issue, I don't know, but whichever way you turn it, the end result is disappointing.
    The overhead AME carries in our tests is something we are looking into and the next test will only use direct export and no longer the AME queue, to avoid some of the problems you saw. That entails other problems for us, since we lose the capability to check encoding logs, but a solution is in the works.
    You see very low GPU usage during the H.264 test, since there are only very few accelerated parts in the timeline, in contrast to the MPEG2-DVD test, where there is rescaling going on and that is CUDA accelerated. The disk I/O test suffers from the problems mentioned above and is the reason that my own Disk I/O results are only 33 seconds with the current test, but when I extend the duration of that timeline to 3 hours, the direct export method gives me 22 seconds, although the amount of data to be written, 37,092 MB has increased threefold. An effective write speed of 1,686 MB/s.
    There are a number of performance issues with CS6 that Adobe is aware of, but whether they can be solved and in what time, I haven't the faintest idea.
    Just my $ 0.02

  • RE: Case 59063: performance issues w/ C TLIB and Forte3M

    Hi James,
    Could you give me a call, I am at my desk.
    I had meetings all day and couldn't respond to your calls earlier.
    -----Original Message-----
    From: James Min [mailto:jmin@brio.forte.com]
    Sent: Thursday, March 30, 2000 2:50 PM
    To: Sharma, Sandeep; Pyatetskiy, Alexander
    Cc: sophia@forte.com; kenl@forte.com; Tenerelli, Mike
    Subject: Re: Case 59063: performance issues w/ C TLIB and Forte 3M
    Hello,
    I just want to reiterate that we are very committed to working on
    this issue, and that our goal is to find out the root of the problem. But
    first I'd like to narrow down the avenues by process of elimination.
    Open Cursor is something that is commonly used in today's RDBMS. I
    know that you must test your query in ISQL using some kind of execute
    immediate, but Sybase should be able to handle an open cursor. I was
    wondering if your Sybase expert commented on the fact that the server is
    not responding to a commonly used command like 'open cursor'. According to
    our developer, we are merely following the API from Sybase, and open cursor
    is not something that particularly slows down a query for several minutes
    (except maybe the very first time). The logs show that Forte is waiting for
    a status from the DB server. Actually, using prepared statements and open
    cursor ends up being more efficient in the long run.
    Some questions:
    1) Have you tried to do a prepared statement with an open cursor in your ISQL
    session (a sketch follows these questions)? If so, did it have the same slowness?
    2) How big is the table you are querying? How many rows are there? How many
    are returned?
    3) When there is a hang in Forte, is there disk activity or CPU usage on
    the database server side? On the Forte side? Absolutely no activity at all?
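    For question 1, here is a minimal isql sketch of such a test; the table name and LIKE predicate are placeholders, not taken from the customer's schema:
    declare like_cur cursor for
        select * from big_table where name_col like 'ABC%'
    go
    open like_cur
    fetch like_cur
    close like_cur
    deallocate cursor like_cur
    go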
    We actually have a Sybase set-up here, and if you wish, we could test out
    your database and Forte PEX here. Since your queries seem to be running
    off of only one table, this might be the best option, as we could look at
    everything here, in house. To do this:
    a) BCP out the data into a flat file. (character format to make it portable)
    b) we need a script to create the table and indexes.
    c) the Forte PEX file of the app to test this out.
    d) the SQL statement that you issue in ISQL for comparison.
    If the situation warrants, we can give a concrete example of
    possible errors/bugs to a developer. Dial-in is still an option, but to be
    able to look at the TOOL code, database setup, etc. without the limitations
    of dial-up may be faster and more efficient. Please let me know if you can
    provide this, as well as the answers to the above questions, or if you have
    any questions.
    Regards,
    At 08:05 AM 3/30/00 -0500, Sharma, Sandeep wrote:
    James, Ken:
    FYI, see attached response from our Sybase expert, Dani Sasmita. She has
    already tried what you suggested and results are enclosed.
    ++
    Sandeep
    -----Original Message-----
    From: SASMITA, DANIAR
    Sent: Wednesday, March 29, 2000 6:43 PM
    To: Pyatetskiy, Alexander
    Cc: Sharma, Sandeep; Tenerelli, Mike
    Subject: Re: FW: Case 59063: Select using LIKE has performance
    issues
    w/ CTLIB and Forte 3M
    We did that trick already.
    When it is hanging, I can see what it is doing.
    It is doing OPEN CURSOR, but it is not clear what the exact statement of the
    cursor it is trying to open is.
    When we run the query directly against Sybase, not through Forte, it clearly
    does not open any cursor.
    And running it directly against Sybase many times, the response is always
    consistently fast.
    It is only when the query runs from Forte to Sybase that it opens a cursor.
    But again, in the Forte code, Alex is not using any cursor.
    In trying to capture the query, we even tried to audit every statement coming
    to Sybase. Same thing: just an open cursor. No cursor declaration anywhere.
    ==============================================
    James Min
    Technical Support Engineer - Forte Tools
    Sun Microsystems, Inc.
    1800 Harrison St., 17th Fl.
    Oakland, CA 94612
    james.min@sun.com
    510.869.2056
    ==============================================
    Support Hotline: 510-451-5400
    CUSTOMERS open a NEW CASE with Technical Support:
    http://www.forte.com/support/case_entry.html
    CUSTOMERS view your cases and enter follow-up transactions:
    http://www.forte.com/support/view_calls.html

    Earthlink wrote:
    Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something? Well, we're missing a lot here.
    Like:
    - a database version
    - how you tested
    - what data you have, how it is distributed and indexed
    and so on.
    If you want to find out what's going on, then use a TRACE with wait events.
    All the necessary steps are explained in these threads:
    HOW TO: Post a SQL statement tuning request - template posting
    http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
    Another nice one is RUNSTATS:
    http://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551378329289980701
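    In short, the usual approach from those links is: enable extended SQL trace with wait events in the session, run both procedures, then format the trace file with tkprof. A minimal sketch (the tracefile identifier is arbitrary):
    ALTER SESSION SET tracefile_identifier = 'pipeline_test';
    ALTER SESSION SET events '10046 trace name context forever, level 8';
    -- run the with_pipeline and no_pipeline procedures here
    ALTER SESSION SET events '10046 trace name context off';
    -- then, on the server: tkprof <tracefile>.trc pipeline_test.txt sys=no sort=exeela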

  • iOS 8.1+ Performance Issue

    Hello,
    I have encountered a serious performance bug in an Adobe AIR iOS application on devices running iOS 8.1 or later. Approximately 1-2 minutes in, the fps drops to 7 or lower without any interaction with the app. This is very noticeable: the app looks frozen for about 0.5 seconds. The bug doesn't appear in every session.
    Devices tested: iPad Mini iOS 8.1.1, iPhone 6 iOS 8.2. iPod Touch 4 iOS 6 is working correctly.
    Air SDK versions: 15 and 17 tested.
    I can track the bug using Adobe Scout. There is a noticeable spike at frame time 1.16, and the framerate drops to 7.0. The app spends much of its time in the function "Runtime overhead". Sometimes the top activity is "Running AS3 attached to frame" or "Waiting For Next Frame" instead of "Runtime overhead".
    The bug can be reproduced in an empty application with a single bitmap on the stage. Open the app and wait two minutes; the bug should appear. If not, just close and relaunch the app.
    Bugbase link: Bug#3965160 - iOS 8.1+ Performance Issue
    Miska Savela

    Hi
    I'd already activated Messages and entered the 6-digit code I was presented with into my iPhone. I can receive text messages from non-iOS users on my iMac and can reply to those messages.
    I just can't send a new message from scratch to a non-iOS user :-s
    Thanks
    Baz

  • Performance issues with LOV bindings in 3-tier BC4J architecture

    We are running BC4J and JClient (JDeveloper 9.0.3.4/9iAS 9.0.2) in a 3-tier architecture, and we have performance problems.
    One of our problems is combo boxes with LOV bindings. The view objects that provide data for the LOV bindings contain simple queries on tables with only 4-10 rows, and there are no view links or entity objects attached to these views.
    Creating the LOV binding and setting the model for the combo box takes about 1 second per combo box.
    We have tried most of the tips in http://otn.oracle.com/products/jdev/tips/muench/jclientperf/index.html, but they do not seem to help with our problem.
    The performance is OK (if not great) when the same code is running as 2-tier.
    Does anyone have any good suggestions?

    I can recommend that you look at the following two bugs in Metalink: Bug 2640945 and Bug 3621502
    They are related to the disabling of the TCP socket-level acknowledgement which slows down remote communications for EJB components using ORMI (the protocol used by Oracle OC4J) to communicate between remote EJB client and server.
    A BC4J Application Module deployed as an EJB suffers this same network latency penalty due to the TCP acknowledgement.
    A customer sent me information (that you'll see there as a part of Bug# 3621502) like this on a related issue:
    We found our application runs very slowly in 3-tier mode (JClient, BC4J deployed
    as an EJB Session Bean on 9iAS server 9.0.2 enterprise edition). We spent a lot
    of time tuning our code, but that helped very little. Eventually, we found
    the problem seemed to happen at the TCP level. There is a 200ms delay at the
    TCP level. After we read some documents about the Nagle algorithm, we disabled a
    registry key (TcpDelAckTicks) in Windows 2000 on both client and server. This
    made our program a lot faster.
    Anyway, we think we should provide our clients a better solution than
    changing the Windows registry for them; for example, there may be a way to
    disable Nagle's algorithm through java.net.Socket.setTcpNoDelay(true), in BC4J,
    or somewhere in our code. We have not figured it out yet.
    Bug 2640945 was fixed in Oracle Application Server 10g (v9.0.4) and it now disables this TCP Acknowledgement on the server side in that release. In the BugDB, I see backport patches available for earlier 9.0.3 and 9.0.2 releases of IAS as well.
    Bug 3621502 is requesting that that same disabling also be performed on the client side by the ORMI code. I have received a test patch from development to try out, but haven't had the chance yet.
    The customer's workaround in the interim was to disable this TCP Acknowledgement at the OS level by modifying a Windows registry setting as noted above.
    See Also http://support.microsoft.com/default.aspx?kbid=328890
    "New registry entry for controlling the TCP Acknowledgment (ACK) behavior in Windows XP and in Windows Server 2003" which documents that the registry entry to change disable this acknowledgement has a different name in Windows XP and Windows 2003.
    Hope this info helps. It would be useful to hear back from you on whether this helps your performance issue.

  • Performance issues with dynamic action (PL/SQL)

    Hi!
    I'm having performance issues with a dynamic action that is triggered on a button click.
    I have 5 drop-down lists to select the columns the users want to filter on, 5 drop-down lists to select an operation and 5 boxes to input values.
    After that, there is a filter button that just submits the page based on the selected filters.
    This part works fine, the data is filtered almost instantaneously.
    After this, I have 3 column selectors and 3 boxes where users put the values they wish to update the filtered rows to.
    There is an update button that calls the dynamic action (the procedure written below).
    It should be straightforward; the only performance concern could be the decode section, because I need to cover the cases where the user wants to set a value to null ('@') and where he wants to update fewer than 3 columns (he leaves '').
    Hence P99_X_UC1 || ' = decode('  || P99_X_UV1 ||','''','|| P99_X_UC1  ||',''@'',null,'|| P99_X_UV1  ||')
    However, when I finally click the update button, my browser freezes and nothing happens to the table.
    Can anyone help me solve this and improve the speed of the update?
    Regards,
    Ivan
    P.S. The code for the procedure is below:
    create or replace
    PROCEDURE DWP.PROC_UPD
    (P99_X_UC1 in VARCHAR2,
    P99_X_UV1 in VARCHAR2,
    P99_X_UC2 in VARCHAR2,
    P99_X_UV2 in VARCHAR2,
    P99_X_UC3 in VARCHAR2,
    P99_X_UV3 in VARCHAR2,
    P99_X_COL in VARCHAR2,
    P99_X_O in VARCHAR2,
    P99_X_V in VARCHAR2,
    P99_X_COL2 in VARCHAR2,
    P99_X_O2 in VARCHAR2,
    P99_X_V2 in VARCHAR2,
    P99_X_COL3 in VARCHAR2,
    P99_X_O3 in VARCHAR2,
    P99_X_V3 in VARCHAR2,
    P99_X_COL4 in VARCHAR2,
    P99_X_O4 in VARCHAR2,
    P99_X_V4 in VARCHAR2,
    P99_X_COL5 in VARCHAR2,
    P99_X_O5 in VARCHAR2,
    P99_X_V5 in VARCHAR2,
    P99_X_CD in VARCHAR2,
    P99_X_VD in VARCHAR2
    ) IS
    l_sql_stmt varchar2(32600);                       -- buffer for the dynamically built UPDATE
    p_table_name varchar2(30) := 'DWP.IZV_SLOG_DET'; 
    BEGIN
    -- Build the UPDATE: three SET clauses (decode maps '' -> keep current value,
    -- '@' -> set NULL, anything else -> new value) and six ANDed WHERE predicates.
    l_sql_stmt := 'update ' || p_table_name || ' set '
    || P99_X_UC1 || ' = decode('  || P99_X_UV1 ||','''','|| P99_X_UC1  ||',''@'',null,'|| P99_X_UV1  ||'),'
    || P99_X_UC2 || ' = decode('  || P99_X_UV2 ||','''','|| P99_X_UC2  ||',''@'',null,'|| P99_X_UV2  ||'),'
    || P99_X_UC3 || ' = decode('  || P99_X_UV3 ||','''','|| P99_X_UC3  ||',''@'',null,'|| P99_X_UV3  ||') where '||
    P99_X_COL  ||' '|| P99_X_O  ||' ' || P99_X_V  || ' and ' ||
    P99_X_COL2 ||' '|| P99_X_O2 ||' ' || P99_X_V2 || ' and ' ||
    P99_X_COL3 ||' '|| P99_X_O3 ||' ' || P99_X_V3 || ' and ' ||
    P99_X_COL4 ||' '|| P99_X_O4 ||' ' || P99_X_V4 || ' and ' ||
    P99_X_COL5 ||' '|| P99_X_O5 ||' ' || P99_X_V5 || ' and ' ||
    P99_X_CD   ||       ' = '         || P99_X_VD ;
    --dbms_output.put_line(l_sql_stmt); 
    EXECUTE IMMEDIATE l_sql_stmt;
    END;
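    For illustration, with hypothetical inputs (the column names, filter values and typed literals below are made up, not from the original page), the procedure ends up executing an UPDATE like this:
    update DWP.IZV_SLOG_DET set
      COL_A = decode('NEW', '', COL_A, '@', null, 'NEW'),  -- user typed 'NEW': set to NEW
      COL_B = decode('',    '', COL_B, '@', null, ''),     -- user left '': keep current value
      COL_C = decode('@',   '', COL_C, '@', null, '@')     -- user typed '@': set to NULL
    where COL_F1 = 'X' and COL_F2 = 'Y' and COL_F3 = 'Z'
      and COL_F4 = 'W' and COL_F5 = 'V' and LOAD_DT = 'D1';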

    Hi Ivan,
    I do not think that the decode is performance-relevant. Maybe the update hangs because some other transaction has uncommitted changes to one of the affected rows, or because the where clause is not selective enough and updates a huge number of records.
    Besides that - and I might be wrong, because I only know part of your app - this code looks like it has a huge SQL injection vulnerability. Maybe you should consider re-writing your logic in static SQL. If that is not possible, you should make sure that the user input only contains allowed values, e.g. by white-listing the P99_X_On parameters (i.e. make sure they only contain known values like '=', '<', ...), and by using dbms_assert.enquote_name/enquote_literal on the other P99_X_nnn parameters.
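    A minimal sketch of that validation; the function name, parameter names and the operator white-list are illustrative, not from the original code:
    create or replace function safe_predicate(
      p_col in varchar2,   -- column name coming from a page item
      p_op  in varchar2,   -- comparison operator coming from a page item
      p_val in varchar2    -- literal value coming from a page item
    ) return varchar2 is
    begin
      -- white-list the operator: only known comparison operators pass
      if p_op not in ('=', '<', '>', '<=', '>=', '<>', 'LIKE') then
        raise_application_error(-20001, 'illegal operator: ' || p_op);
      end if;
      -- simple_sql_name raises ORA-44003 if p_col is not a valid SQL name;
      -- enquote_literal quotes the value and rejects unmatched embedded quotes
      return dbms_assert.simple_sql_name(p_col) || ' ' || p_op || ' ' ||
             dbms_assert.enquote_literal(p_val);
    end safe_predicate;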
    Regards,
    Christian
