Using Composite Index Performance Issue

Hi,
Need help:
I have a table 'TABLEA' which has a composite index on CODE and DEPT_CODE.
While fetching data from 'TABLEA', the WHERE condition uses only CODE and not DEPT_CODE.
Does filtering on only the CODE column, and not the DEPT_CODE column, affect performance?
Any help would be much appreciated.

See the test case below
SQL> create table test_emp
  2  (
  3  emp_ssn number,
  4  emp_name varchar2(50),
  5  emp_state varchar2(15),
  6  emp_city varchar2(20)
  7  );
Table created
SQL> create index test_emp_idx on test_emp(emp_ssn,emp_name);
Index created
SQL> insert into test_emp values (123456789,'Robben','New York','Buffalo');
1 row inserted
SQL> insert into test_emp values (223456789,'Jack','Florida','Miami');
1 row inserted
SQL> insert into test_emp values (323456789,'Peter','Texas','Dallas');
1 row inserted
SQL> insert into test_emp values (423456789,'Johny','Georgia','Atlanta');
1 row inserted
SQL> insert into test_emp values (523456789,'Carmella','California','San Diego');
1 row inserted
SQL> commit;
Commit complete
SQL> explain plan for select /*+ index(test_emp test_emp_idx) */ * from test_emp where emp_ssn = 323456789;
Explained
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 2345760695
| Id  | Operation                   | Name         | Rows  | Bytes | Cost (%CPU)
|   0 | SELECT STATEMENT            |              |     1 |    61 |   163   (1)
|   1 |  TABLE ACCESS BY INDEX ROWID| TEST_EMP     |     1 |    61 |   163   (1)
|*  2 |   INDEX RANGE SCAN          | TEST_EMP_IDX |     1 |       |     2   (0)
Predicate Information (identified by operation id):
   2 - access("EMP_SSN"=323456789)
Note
   - dynamic sampling used for this statement
18 rows selected
SQL> explain plan for select /*+ INDEX_SS(test_emp test_emp_idx) */ * from test_emp where emp_name = 'Robben';
Explained
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 85087452
| Id  | Operation                   | Name         | Rows  | Bytes | Cost (%CPU)
|   0 | SELECT STATEMENT            |              |     1 |    61 |    15   (0)
|   1 |  TABLE ACCESS BY INDEX ROWID| TEST_EMP     |     1 |    61 |    15   (0)
|*  2 |   INDEX SKIP SCAN           | TEST_EMP_IDX |     1 |       |    11   (0)
Predicate Information (identified by operation id):
   2 - access("EMP_NAME"='Robben')
       filter("EMP_NAME"='Robben')
Note
   - dynamic sampling used for this statement
19 rows selected
Thanks,
Andy
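
For reference, here is a minimal sketch of how the same check could be run against the real table; TABLEA, CODE and DEPT_CODE are the names given in the question, while the index name TABLEA_IDX is only assumed for illustration:

-- Hypothetical index name; the composite index is assumed to be on (CODE, DEPT_CODE)
explain plan for
  select * from tablea where code = :some_code;
select * from table(dbms_xplan.display);
-- If the plan shows INDEX RANGE SCAN on TABLEA_IDX with access("CODE"=:SOME_CODE),
-- the composite index is still usable even though DEPT_CODE is not referenced,
-- because CODE is its leading column. A predicate on DEPT_CODE alone would instead
-- need an INDEX SKIP SCAN (as in the second test above) or a separate index.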

Similar Messages

  • SQL Query not using Composite Index

    Hi,
    Please look at the below query:
    SELECT pde.participant_uid,
           pde.award_code,
           pde.award_type,
           SUM(decode(pde.distribution_type,
                      'FORFEITURE',
                      pde.forfeited_quantity * pde.sold_price * cc.rate,
                      pde.distributed_quantity * pde.sold_price * cc.rate)) AS gross_Amt_pref_Curr
      FROM part_distribution_exec pde,
           currency_conversion cc,
           currency off_curr
     WHERE pde.participant_uid = 4105
       AND off_curr.currency_iso_code = pde.offering_currency_iso_code
       AND cc.from_currency_uid = off_curr.currency_uid
       AND cc.to_currency_uid = 1
       AND cc.latest_flag = 'Y'
     GROUP BY pde.participant_uid,
              pde.award_code,
              pde.award_type
    In Oracle 9i this query takes 6 seconds with a cost of 616, because it does not use the composite index Currency_conversion_idx(From_currency_uid, To_currency_uid, Latest_flag). I wondered why the index was not used, so I dropped and recreated it; now the query uses the index. But after inserting many rows, or after about a day, the same query again stops using the index, so the index has to be dropped and recreated every day.
    I don't want to drop and recreate the index daily; I need a permanent solution.
    Can anyone tell me why this index seems to go stale after a period of time? Please take some time to look at this issue.
    -Sankar
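
    A common cause of this behaviour is stale optimizer statistics rather than the index itself; dropping and recreating the index 'works' mainly because it refreshes the statistics on the index. As a minimal sketch (table name taken from the post, options are illustrative, assuming the current schema owns the table), regathering statistics could be scheduled instead of the daily rebuild:

    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname    => USER,
        tabname    => 'CURRENCY_CONVERSION',
        cascade    => TRUE,                          -- also gathers statistics on its indexes
        method_opt => 'FOR ALL COLUMNS SIZE AUTO');
    END;
    /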

    Hi David,
    This is Sankar here. Thankyou for your reply.
    I've got the plan table output for this problematic query; please go through it and help me understand why the index CURRENCY_CONVERSION_IDX is used now, and why it stops being used when the query is executed after a day or after inserting some records...
    PLAN_TABLE_OUTPUT
    | Id | Operation | Name | Rows | Bytes | Cost |
    | 0 | SELECT STATEMENT | | 26 | 15678 | 147 |
    | 1 | TABLE ACCESS BY INDEX ROWID | PART_AWARD_PAYOUT_SCHEDULE | 1 | 89 | 2 |
    |* 2 | INDEX UNIQUE SCAN | PART_AWARD_PAYOUT_SCHEDULE_PK1 | 61097 | | 1 |
    | 3 | SORT AGGREGATE | | 1 | 67 | |
    |* 4 | FILTER | | | | |
    |* 5 | INDEX RANGE SCAN | PART_AWARD_PAYOUT_SCHEDULE_PK1 | 1 | 67 | 2 |
    | 6 | SORT AGGREGATE | | 1 | 94 | |
    |* 7 | FILTER | | | | |
    |* 8 | TABLE ACCESS BY INDEX ROWID | PART_AWARD_PAYOUT_SCHEDULE | 1 | 94 | 3 |
    |* 9 | INDEX RANGE SCAN | PART_AWARD_PAYOUT_SCHEDULE_PK1 | 1 | | 2 |
    |* 10 | FILTER | | | | |
    |* 11 | HASH JOIN | | 26 | 15678 | 95 |
    |* 12 | HASH JOIN OUTER | | 26 | 11596 | 91 |
    |* 13 | HASH JOIN | | 26 | 10218 | 86 |
    | 14 | VIEW | | 1 | 82 | 4 |
    | 15 | SORT GROUP BY | | 1 | 116 | 4 |
    |* 16 | TABLE ACCESS BY INDEX ROWID | PART_AWARD_LEDGER | 1 | 116 | 2 |
    |* 17 | INDEX RANGE SCAN | PARTICIPANT_UID_IDX | 1 | | 1 |
    |* 18 | HASH JOIN OUTER | | 26 | 8086 | 82 |
    |* 19 | HASH JOIN | | 26 | 6006 | 71 |
    | 20 | NESTED LOOPS | | 36 | 5904 | 66 |
    | 21 | NESTED LOOPS | | 1 | 115 | 65 |
    | 22 | TABLE ACCESS BY INDEX ROWID | CURRENCY_CONVERSION | 18 | 756 | 2 |
    |* 23 | INDEX RANGE SCAN | KLS_IDX_CURRENCY_CONV | 3 | | 1 |
    | 24 | VIEW | | 1 | 73 | 4 |
    | 25 | SORT GROUP BY | | 1 | 71 | 4 |
    | 26 | TABLE ACCESS BY INDEX ROWID| PART_AWARD_VALUE | 1 | 71 | 2 |
    |* 27 | INDEX RANGE SCAN | PAV_PARTICIPANT_UID_IDX | 1 | | 1 |
    | 28 | TABLE ACCESS BY INDEX ROWID | PARTICIPANT_AWARD | 199 | 9751 | 1 |
    |* 29 | INDEX UNIQUE SCAN | PARTICIPANT_AWARD_PK1 | 100 | | |
    |* 30 | INDEX FAST FULL SCAN | PARTICIPANT_AWARD_TYPE_PK1 | 147 | 9849 | 4 |
    | 31 | VIEW | | 1 | 80 | 10 |
    | 32 | SORT GROUP BY | | 1 | 198 | 10 |
    |* 33 | TABLE ACCESS BY INDEX ROWID | CURRENCY_CONVERSION | 1 | 42 | 2 |
    | 34 | NESTED LOOPS | | 1 | 198 | 8 |
    | 35 | NESTED LOOPS | | 2 | 312 | 4 |
    | 36 | TABLE ACCESS BY INDEX ROWID| PART_DISTRIBUTION_EXEC | 2 | 276 | 2 |
    |* 37 | INDEX RANGE SCAN | IND_PARTICIPANT_UID | 1 | | 1 |
    | 38 | TABLE ACCESS BY INDEX ROWID| CURRENCY | 1 | 18 | 1 |
    |* 39 | INDEX UNIQUE SCAN | CURRENCY_AK | 1 | | |
    |* 40 | INDEX RANGE SCAN | CURRENCY_CONVERSION_AK | 2 | | 1 |
    | 41 | VIEW | | 1 | 53 | 4 |
    | 42 | SORT GROUP BY | | 1 | 62 | 4 |
    |* 43 | TABLE ACCESS BY INDEX ROWID | PART_AWARD_VESTING | 1 | 62 | 2 |
    |* 44 | INDEX RANGE SCAN | PAVES_PARTICIPANT_UID_IDX | 1 | | 1 |
    | 45 | TABLE ACCESS FULL | AWARD | 1062 | 162K| 3 |
    | 46 | TABLE ACCESS BY INDEX ROWID | CURRENCY | 1 | 18 | 2 |
    |* 47 | INDEX UNIQUE SCAN | CURRENCY_AK | 102 | | 1 |
    Predicate Information (identified by operation id):
    2 - access("PAPS"."AWARD_CODE"=:B1 AND "PAPS"."PARTICIPANT_UID"=4105 AND "PAPS"."AWARD_TYPE"=:B2
    "PAPS"."INSTALLMENT_NUM"=1)
    4 - filter(4105=:B1)
    5 - access("PAPS"."AWARD_CODE"=:B1 AND "PAPS"."PARTICIPANT_UID"=4105 AND "PAPS"."AWARD_TYPE"=:B2)
    7 - filter(4105=:B1)
    8 - filter("PAPS"."STATUS"='OPEN')
    9 - access("PAPS"."AWARD_CODE"=:B1 AND "PAPS"."PARTICIPANT_UID"=4105 AND "PAPS"."AWARD_TYPE"=:B2)
    10 - filter("CC_A_P_CURR"."FROM_CURRENCY_UID"= (SELECT /*+ */ "CURRENCY"."CURRENCY_UID" FROM
    "EWAPDBO"."CURRENCY" "CURRENCY" WHERE "CURRENCY"."CURRENCY_ISO_CODE"=:B1))
    11 - access("SYS_ALIAS_7"."AWARD_CODE"="A"."AWARD_CODE")
    12 - access("SYS_ALIAS_7"."AWARD_CODE"="PVS"."AWARD_CODE"(+))
    13 - access("SYS_ALIAS_8"."AWARD_CODE"="PALS"."AWARD_CODE" AND
    "SYS_ALIAS_8"."AWARD_TYPE"="PALS"."AWARD_TYPE")
    16 - filter(TRUNC("PAL1"."LEDGER_ENTRY_DATE")<=TRUNC(SYSDATE@!) AND "PAL1"."ALLOC_TYPE"='IPU')
    17 - access("PAL1"."PARTICIPANT_UID"=4105)
    filter("PAL1"."PARTICIPANT_UID"=4105)
    18 - access("SYS_ALIAS_8"."AWARD_CODE"="PDES"."AWARD_CODE"(+) AND
    "SYS_ALIAS_8"."AWARD_TYPE"="PDES"."AWARD_TYPE"(+))
    19 - access("SYS_ALIAS_7"."AWARD_CODE"="SYS_ALIAS_8"."AWARD_CODE")
    23 - access("CC_A_P_CURR"."TO_CURRENCY_UID"=1 AND "CC_A_P_CURR"."LATEST_FLAG"='Y')
    27 - access("PAV"."PARTICIPANT_UID"=4105)
    filter("PAV"."PARTICIPANT_UID"=4105)
    29 - access("SYS_ALIAS_7"."AWARD_CODE"="SYS_ALIAS_9"."AWARD_CODE" AND
    "SYS_ALIAS_7"."PARTICIPANT_UID"=4105)
    30 - filter("SYS_ALIAS_8"."PARTICIPANT_UID"=4105)
    33 - filter("CC"."LATEST_FLAG"='Y')
    37 - access("PDE"."PARTICIPANT_UID"=4105)
    filter("PDE"."PARTICIPANT_UID"=4105)
    39 - access("OFF_CURR"."CURRENCY_ISO_CODE"="PDE"."OFFERING_CURRENCY_ISO_CODE")
    40 - access("CC"."FROM_CURRENCY_UID"="OFF_CURR"."CURRENCY_UID" AND "CC"."TO_CURRENCY_UID"=1)
    43 - filter("PV"."VESTING_DATE"<=SYSDATE@!)
    44 - access("PV"."PARTICIPANT_UID"=4105)
    filter("PV"."PARTICIPANT_UID"=4105)
    47 - access("CURRENCY"."CURRENCY_ISO_CODE"=:B1)
    Note: cpu costing is off
    93 rows selected.
    Please help me out...
    -Sankar

  • Using Reference Cursor Performance Issue in Report

    Hi,
    Are reference cursors supposed to be faster than a normal query? The reason I am asking is that I am using a reference cursor query in the data model and it has a performance issue in the report: it takes much longer to run than if I just run the same reference cursor query in SQL*Plus. The difference is significant. Any input is very much appreciated!
    Thanks,
    Marilyn

    Per Metalink, this is bug 4372868 on 9.0.4.x. It was fixed in 10.1.2.0.2 and does not have a backport for any 9.0.4 version.
    Also the 9.0.4 version is already desupported. Please see the note:
    Note 307042.1
    Topic: Desupport Notices - Oracle Products
    Title: Oracle Reports 10g 9.0.4 & 9.0.4.x
    Action plan:
    If you are still on a 9.0.4.x version of Oracle Reports and have no plan yet to migrate to 10.1.2.0.2, take the same query you are using in your reference cursor and use it as a plain SQL query in your report's data model.

  • Why isn't the composite index used on a partitioned table?

    A table is range-list partitioned.
    Then, I create a composite index named INDEX_N2(COLUMN_005, COLUMN_002). --COLUMN_005 AND COLUMN_002 ARE VARCHAR2
    When I run the SQL
    select 1
    from table1 T1
    where T1.COLUMN_005 = 'x'
    and T1.COLUMN_002 = 'y'
    and view the explain plan, INDEX_N2 is used.
    But when I join COLUMN_005 to another table's column, the index is not used.
    The SQL is like below.
    select 1
    from table1 T1, table2 T2
    where T1.COLUMN_005 = T2.COLUMN_002 --T2.COLUMN_002 IS VARCHAR2
    and T1.COLUMN_002 = 'y'
    The explain plan is like below
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
    | 0 | SELECT STATEMENT | | 426 | 44730 | 65367 (1)| 00:13:05 | | |
    |* 1 | HASH JOIN | | 426 | 44730 | 65367 (1)| 00:13:05 | | |
    | 2 | PARTITION RANGE ALL| | 8 | 792 | 5 (0)| 00:00:01 | 1 | 32 |
    | 3 | PARTITION LIST ALL| | 8 | 792 | 5 (0)| 00:00:01 | 1 | 8 |
    |* 4 | TABLE ACCESS FULL| TABLE1 | 8 | 792 | 5 (0)| 00:00:01 | 1 | 256 |
    | 5 | PARTITION RANGE ALL| | 1112K| 6519K| 65355 (1)| 00:13:05 | 1 | 34 |
    | 6 | PARTITION LIST ALL| | 1112K| 6519K| 65355 (1)| 00:13:05 | 1 | LAST |
    | 7 | TABLE ACCESS FULL| TABLE2 | 1112K| 6519K| 65355 (1)| 00:13:05 | 1 | 442 |
    Query Block Name / Object Alias (identified by operation id):
    1 - SEL$1
    4 - SEL$1 / T1@SEL$1
    7 - SEL$1 / T2@SEL$1
    Predicate Information (identified by operation id):
    1 - access("T1"."COLUMN_005"="T2"."COLUMN_002")
    4 - filter("T1"."COLUMN_002"='y')
    Column Projection Information (identified by operation id):
    1 - (#keys=1)
    2 - "T1"."COLUMN_005"[VARCHAR2,150]
    3 - "T1"."COLUMN_005"[VARCHAR2,150]
    4 - "T1"."COLUMN_005"[VARCHAR2,150]
    5 - "T2"."COLUMN_002"[VARCHAR2,20]
    6 - "T2"."COLUMN_002"[VARCHAR2,20]
    7 - "T2"."COLUMN_002"[VARCHAR2,20]
    Note
    - dynamic sampling used for this statement
    My question is:
    Both columns in the WHERE clause are columns of the index.
    Why can't it be used?
    Thanks

    I created a new index on COLUMN_002 (INDEX_N3)
    and changed the SQL to
    select 1 from table1 T1 where
    T1.COLUMN_002 = '848K 36892'
    This time INDEX_N3 is used.
    But when I change the SQL to
    select 1 from table1 T1 where
    T1.COLUMN_002 = '848K 36892'
    and T1.COLUMN_004 = '1000'
    the explain plan shows a full scan.
    Why?
    Thanks.
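
    A note on the join case above: the only constant predicate on T1 is COLUMN_002 = 'y'; COLUMN_005 appears only in the join condition, so INDEX_N2(COLUMN_005, COLUMN_002) cannot be range-scanned because its leading column has no bounded value. As a minimal sketch (table and column names are from the post; the index name INDEX_N4 is made up), an index with COLUMN_002 leading could let the constant filter drive an index access:

    -- Hypothetical index with the constant-filtered column in the leading position
    create index index_n4 on table1 (column_002, column_005);

    explain plan for
      select 1
        from table1 t1, table2 t2
       where t1.column_005 = t2.column_002
         and t1.column_002 = 'y';
    select * from table(dbms_xplan.display);

    Whether the optimizer actually chooses it still depends on the selectivity of COLUMN_002 = 'y' and on the chosen join method.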

  • Statistics,  Rebuilding Indexes, Performance Issues

    I have a query which used to run in 2-3 seconds.
    After gathering stats and rebuilding indexes,
    this query is now taking 25 seconds.
    Is there any way that I can get my performance back?
    Collected stats:
    First using ANALYZE TABLE COMPUTE STATISTICS
    Second using DBMS_STATS.GATHER_TABLE_STATS
    Level 12 Tracing
    TKPROF: Release 9.2.0.6.0 - Production on Sun Jun 29 13:23:11 2008
    Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
    Sort options: prsela exeela fchela
    count = number of times OCI procedure was executed
    cpu = cpu time in seconds executing
    elapsed = elapsed time in seconds executing
    disk = number of physical reads of buffers from disk
    query = number of buffers gotten for consistent read
    current = number of buffers gotten in current mode (usually for update)
    rows = number of rows processed by the fetch or execute call
    SELECT ped.addrs_typ, ped.bnk_addrs_seq_no, ped.clm_case_no,
    ped.eft_payee_seq_no, ped.partition_desgntr,
    ped.payee_bnk_acct_typ, ped.payee_eft_dtl_no,
    ped.paye_bnk_acct_no, ped.paye_bnk_nm, ped.paye_bnk_rtng_no,
    ped.row_updt_sys_id, ped.vrsn_no, el.clm_payee_no
    FROM payee_eft_detail ped, eft_payee_lnk el, clm_payee cp
    WHERE ped.curr_row_ind = 'A'
    AND cp.curr_row_ind = 'A'
    AND cp.clm_payee_no = el.clm_payee_no
    AND cp.mail_zip = 'XXXXXX'
    AND ped.paye_bnk_rtng_no = 'XXXXXX'
    AND ped.paye_bnk_acct_no = 'XXXXXXX'
    AND ped.payee_bnk_acct_typ = 'XXXX'
    AND ped.eft_payee_seq_no = el.eft_payee_seq_no
    call count cpu elapsed disk query current rows
    Parse 1 0.02 0.01 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 1 23.46 22.91 0 1292083 0 0
    total 3 23.48 22.93 0 1292083 0 0
    Misses in library cache during parse: 1
    Optimizer goal: CHOOSE
    Parsing user id: 117
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    SQL*Net message to client 1 0.00 0.00
    OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
    call count cpu elapsed disk query current rows
    Parse 1 0.02 0.01 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 1 23.46 22.91 0 1292083 0 0
    total 3 23.48 22.93 0 1292083 0 0
    Misses in library cache during parse: 1
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    SQL*Net message to client 1 0.00 0.00
    OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
    call count cpu elapsed disk query current rows
    Parse 0 0.00 0.00 0 0 0 0
    Execute 0 0.00 0.00 0 0 0 0
    Fetch 0 0.00 0.00 0 0 0 0
    total 0 0.00 0.00 0 0 0 0
    Misses in library cache during parse: 0
    1 user SQL statements in session.
    0 internal SQL statements in session.
    1 SQL statements in session.
    Trace file compatibility: 9.02.00
    Sort options: prsela exeela fchela
    1 session in tracefile.
    1 user SQL statements in trace file.
    0 internal SQL statements in trace file.
    1 SQL statements in trace file.
    1 unique SQL statements in trace file.
    41 lines in trace file.
    ****************-----==========================================*****
    PLAN_TABLE_OUTPUT
    | Id | Operation | Name | Rows | Bytes | Cost | Pstart| Pstop |
    | 0 | SELECT STATEMENT | | 1 | 73 | 5 | | |
    | 1 | NESTED LOOPS | | 1 | 73 | 5 | | |
    | 2 | NESTED LOOPS | | 12 | 708 | 4 | | |
    | 3 | TABLE ACCESS BY GLOBAL INDEX ROWID| PAYEE_EFT_DETAIL_T | 12 | 540 | 1 | ROWID | ROW L |
    |* 4 | INDEX RANGE SCAN | TEST_PAYEE_EFT_DETAIL_T_IE21 | 12 | | 3 | | |
    | 5 | TABLE ACCESS BY GLOBAL INDEX ROWID| EFT_PAYEE_LNK_T | 1 | 14 | 1 | ROWID | ROW L |
    |* 6 | INDEX RANGE SCAN | EFT_PAYEE_LNK_PK | 1 | | 1 | | |
    |* 7 | INDEX RANGE SCAN | CLM_PAYEE_T_IE10 | 1 | 14 | | | |
    Predicate Information (identified by operation id):
    4 - access("PED"."PAYE_BNK_RTNG_NO"='XXXXXXX' AND "PED"."PAYE_BNK_ACCT_NO"='XXXXXXXXXX' AND
    "PED"."PAYEE_BNK_ACCT_TYP"='CHK' AND "PED"."CURR_ROW_IND"='A')
    6 - access("PED"."EFT_PAYEE_SEQ_NO"="LNK"."EFT_PAYEE_SEQ_NO")
    7 - access("LNK"."CLM_PAYEE_NO"="CP"."CLM_PAYEE_NO" AND "CP"."MAIL_ZIP"='XXXXXX' AND "CP"."CURR_ROW_IND"='A')
    ==++++++++++++++*************************=+++++++++++++----------------=========
    TKPROF: Release 9.2.0.6.0 - Production on Sun Jun 29 19:28:39 2008
    Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
    Sort options: prsela exeela fchela
    count = number of times OCI procedure was executed
    cpu = cpu time in seconds executing
    elapsed = elapsed time in seconds executing
    disk = number of physical reads of buffers from disk
    query = number of buffers gotten for consistent read
    current = number of buffers gotten in current mode (usually for update)
    rows = number of rows processed by the fetch or execute call
    SELECT ped.addrs_typ, ped.bnk_addrs_seq_no, ped.clm_case_no,
    ped.eft_payee_seq_no, ped.partition_desgntr,
    ped.payee_bnk_acct_typ, ped.payee_eft_dtl_no,
    ped.paye_bnk_acct_no, ped.paye_bnk_nm, ped.paye_bnk_rtng_no,
    ped.row_updt_sys_id, ped.vrsn_no, el.clm_payee_no
    FROM payee_eft_detail ped, eft_payee_lnk el, clm_payee cp
    WHERE ped.curr_row_ind = 'A'
    AND cp.curr_row_ind = 'A'
    AND cp.clm_payee_no = el.clm_payee_no
    AND cp.mail_zip = 'XXXXXX'
    AND ped.paye_bnk_rtng_no = 'XXXXXXXXXX'
    AND ped.paye_bnk_acct_no = 'XXXXXXXX'
    AND ped.payee_bnk_acct_typ = 'CHK'
    AND ped.eft_payee_seq_no = el.eft_payee_seq_no
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 1 23.30 22.75 0 1292083 0 0
    total 3 23.30 22.75 0 1292083 0 0
    Misses in library cache during parse: 0
    Optimizer goal: CHOOSE
    Parsing user id: 117
    Rows Row Source Operation
    0 NESTED LOOPS
    214395 NESTED LOOPS
    214395 TABLE ACCESS BY GLOBAL INDEX ROWID PAYEE_EFT_DETAIL_T PARTITION: ROW LOCATION ROW LOCATION
    214395 INDEX RANGE SCAN TEST_PAYEE_EFT_DETAIL_T_IE21 (object id 160840)
    214395 TABLE ACCESS BY GLOBAL INDEX ROWID EFT_PAYEE_LNK_T PARTITION: ROW LOCATION ROW LOCATION
    214395 INDEX RANGE SCAN EFT_PAYEE_LNK_PK (object id 75455)
    0 INDEX RANGE SCAN CLM_PAYEE_T_IE10 (object id 71871)
    alter session set sql_trace=false
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 0 0.00 0.00 0 0 0 0
    total 2 0.00 0.00 0 0 0 0
    Misses in library cache during parse: 1
    Optimizer goal: CHOOSE
    Parsing user id: 117
    ALTER SESSION SET SQL_TRACE = TRUE
    call count cpu elapsed disk query current rows
    Parse 0 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 0 0.00 0.00 0 0 0 0
    total 1 0.00 0.00 0 0 0 0
    Misses in library cache during parse: 0
    Misses in library cache during execute: 1
    Optimizer goal: CHOOSE
    Parsing user id: 117
    OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
    call count cpu elapsed disk query current rows
    Parse 2 0.00 0.00 0 0 0 0
    Execute 3 0.00 0.00 0 0 0 0
    Fetch 1 23.30 22.75 0 1292083 0 0
    total 6 23.30 22.75 0 1292083 0 0
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
    call count cpu elapsed disk query current rows
    Parse 0 0.00 0.00 0 0 0 0
    Execute 0 0.00 0.00 0 0 0 0
    Fetch 0 0.00 0.00 0 0 0 0
    total 0 0.00 0.00 0 0 0 0
    Misses in library cache during parse: 0
    3 user SQL statements in session.
    0 internal SQL statements in session.
    3 SQL statements in session.
    Trace file compatibility: 9.02.00
    Sort options: prsela exeela fchela
    1 session in tracefile.
    3 user SQL statements in trace file.
    0 internal SQL statements in trace file.
    3 SQL statements in trace file.
    3 unique SQL statements in trace file.
    57 lines in trace file.
    Message was edited by:
    user644525

    I have a query which use to run 2-3 sec
    After taking stats and rebuliding indexes
    This Query is taking 25 sec
    Is there anyway that I can get mt performance back?
    PLAN_TABLE_OUTPUT
    | Id  | Operation                            |  Name                         | Rows  | Bytes | Cost  | Pstart| Pstop |
    |   0 | SELECT STATEMENT                     |                               |     1 |    73 |     5 |       |       |
    |   1 |  NESTED LOOPS                        |                               |     1 |    73 |     5 |       |       |
    |   2 |   NESTED LOOPS                       |                               |    12 |   708 |     4 |       |       |
    |   3 |    TABLE ACCESS BY GLOBAL INDEX ROWID| PAYEE_EFT_DETAIL_T            |    12 |   540 |     1 | ROWID | ROW L |
    |*  4 |     INDEX RANGE SCAN                 | TEST_PAYEE_EFT_DETAIL_T_IE21  |    12 |       |     3 |       |       |
    |   5 |    TABLE ACCESS BY GLOBAL INDEX ROWID| EFT_PAYEE_LNK_T               |     1 |    14 |     1 | ROWID | ROW L |
    |*  6 |     INDEX RANGE SCAN                 | EFT_PAYEE_LNK_PK              |     1 |       |     1 |       |       |
    |*  7 |   INDEX RANGE SCAN                   | CLM_PAYEE_T_IE10              |     1 |    14 |       |       |       |
    Predicate Information (identified by operation id):
       4 - access("PED"."PAYE_BNK_RTNG_NO"='XXXXXXX' AND "PED"."PAYE_BNK_ACCT_NO"='XXXXXXXXXX' AND
                  "PED"."PAYEE_BNK_ACCT_TYP"='CHK' AND "PED"."CURR_ROW_IND"='A')
       6 - access("PED"."EFT_PAYEE_SEQ_NO"="LNK"."EFT_PAYEE_SEQ_NO")
       7 - access("LNK"."CLM_PAYEE_NO"="CP"."CLM_PAYEE_NO" AND "CP"."MAIL_ZIP"='XXXXXX' AND "CP"."CURR_ROW_IND"='A')
    ==++++++++++++++*************************=+++++++++++++----------------=========
    SELECT    ped.addrs_typ, ped.bnk_addrs_seq_no, ped.clm_case_no,
                    ped.eft_payee_seq_no, ped.partition_desgntr,
                    ped.payee_bnk_acct_typ, ped.payee_eft_dtl_no,
                    ped.paye_bnk_acct_no, ped.paye_bnk_nm, ped.paye_bnk_rtng_no,
                    ped.row_updt_sys_id, ped.vrsn_no, el.clm_payee_no
               FROM payee_eft_detail ped, eft_payee_lnk el, clm_payee cp
              WHERE ped.curr_row_ind = 'A'
                AND cp.curr_row_ind = 'A'
                AND cp.clm_payee_no = el.clm_payee_no
                AND cp.mail_zip = '16803'
                AND ped.paye_bnk_rtng_no = '111000614'
                AND ped.paye_bnk_acct_no = '1586266775'
                AND ped.payee_bnk_acct_typ = 'CHK'
                AND ped.eft_payee_seq_no = el.eft_payee_seq_no
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1     23.30      22.75          0    1292083          0           0
    total        3     23.30      22.75          0    1292083          0           0
    Misses in library cache during parse: 0
    Optimizer goal: CHOOSE
    Parsing user id: 117 
    Rows     Row Source Operation
          0  NESTED LOOPS 
    214395   NESTED LOOPS 
    214395    TABLE ACCESS BY GLOBAL INDEX ROWID PAYEE_EFT_DETAIL_T PARTITION: ROW LOCATION ROW LOCATION
    214395     INDEX RANGE SCAN TEST_PAYEE_EFT_DETAIL_T_IE21 (object id 160840)
    214395    TABLE ACCESS BY GLOBAL INDEX ROWID EFT_PAYEE_LNK_T PARTITION: ROW LOCATION ROW LOCATION
    214395     INDEX RANGE SCAN EFT_PAYEE_LNK_PK (object id 75455)
          0   INDEX RANGE SCAN CLM_PAYEE_T_IE10 (object id 71871)
    Trying to recreate my response to this thread that was posted yesterday...
    The query is performing 1.3 million logical reads, per the "query" column in the TKPROF output. 1.3 million 8KB (or 16KB) logical reads takes a fair amount of time, and is consuming 23.30 CPU seconds of time (execution time is 22.75 seconds). The explain plan and the row source execution plan are identical, although the predicted cardinality numbers are much lower than the actual number of rows returned, per the TKPROF output (12 rows versus 214,395 rows). The CLM_PAYEE table seems to have the greatest number of restrictions placed on it, yet Oracle is joining that table last.
    You found, per my recommendation, that adding the hint /*+ LEADING(CP) */ significantly decreased the execution time to less than one second. You may not have collected statistics on the indexes, even though you collected statistics on the tables. This may also be a sign that one or more histograms are required for the cardinality estimates to be close to accurate.
    Did your GATHER_TABLE_STATS commands look similar to the following:
    DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'PAYEE_EFT_DETAIL',CASCADE=>TRUE);
    DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'EFT_PAYEE_LNK',CASCADE=>TRUE);
    DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'CLM_PAYEE',CASCADE=>TRUE);
    Charles Hooper
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.
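
    For clarity, a minimal sketch of how the LEADING hint mentioned above would be written into the posted query (the literal values are the masked ones from the TKPROF output, not real data):

    SELECT /*+ LEADING(cp) */
           ped.addrs_typ, ped.bnk_addrs_seq_no, ped.clm_case_no,
           ped.eft_payee_seq_no, ped.partition_desgntr,
           ped.payee_bnk_acct_typ, ped.payee_eft_dtl_no,
           ped.paye_bnk_acct_no, ped.paye_bnk_nm, ped.paye_bnk_rtng_no,
           ped.row_updt_sys_id, ped.vrsn_no, el.clm_payee_no
      FROM payee_eft_detail ped, eft_payee_lnk el, clm_payee cp
     WHERE ped.curr_row_ind = 'A'
       AND cp.curr_row_ind = 'A'
       AND cp.clm_payee_no = el.clm_payee_no
       AND cp.mail_zip = 'XXXXXX'
       AND ped.paye_bnk_rtng_no = 'XXXXXX'
       AND ped.paye_bnk_acct_no = 'XXXXXXX'
       AND ped.payee_bnk_acct_typ = 'XXXX'
       AND ped.eft_payee_seq_no = el.eft_payee_seq_no

    This asks the optimizer to start the join from CLM_PAYEE, which carries the most selective restrictions, instead of joining it last.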

  • Non-Batch Index Performance Issue

    I have two tables that hold identical data: one receives the data via a batch import, the other receives data in real time at about 10 rows per second, every second, every day. Both tables have the same index applied and a similar number of rows.
    The batch import table seems to make use of the index and responds to queries (using 'where') very quickly, while the table with the real-time data flow takes considerably longer to respond to the query.
    What is the best way to optimize the index on tables that do not receive batch imports?

    Thanks for the replies,
    Stats are gathered in the DB Maintenance Window every night - using the Oracle recommended 'auto' sample size. The stats on the table and indexes are no more than 24 hours old at any given time.
    Also, the tkprof output is below:
    select to_date(MSGDATETIME, 'YYYY-MM-DD HH24:MI:SS') DTG, MGHOSTNAME, PIXIDN, PIXTXT
    from dmblue.fw16master
    where msghostname = '10.104'
    and to_char(to_date(msgdatetime, 'YYYY-MM-DD HH24:MI:SS'), 'Mon DD YYYY / HH24') like 'Sep 29 2006 / 09'
    and msglevel = 'Info'
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 90 26.85 237.42 750925 757238 0 1335
    total 92 26.85 237.43 750925 757238 0 1335
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: SYS
    Rows Row Source Operation
    1335 TABLE ACCESS FULL FW16MASTER (cr=757238 pr=750925 pw=0 time=242940908 us)
    Any ideas?
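
    One thing that stands out in the TKPROF output above is the full table scan: the TO_CHAR(TO_DATE(msgdatetime, ...)) LIKE predicate wraps msgdatetime in functions, so no ordinary index on that column can be used. A minimal sketch of a range-style rewrite (table and column names are copied from the post; the index name is made up, and it assumes msgdatetime is stored as text in 'YYYY-MM-DD HH24:MI:SS' format, as the original query implies):

    -- Hypothetical composite index covering the filter columns
    create index fw16master_host_dt_idx
      on dmblue.fw16master (msghostname, msgdatetime, msglevel);

    select to_date(msgdatetime, 'YYYY-MM-DD HH24:MI:SS') dtg,
           mghostname, pixidn, pixtxt
      from dmblue.fw16master
     where msghostname = '10.104'
       and msgdatetime >= '2006-09-29 09:00:00'   -- plain range on the stored string
       and msgdatetime <  '2006-09-29 10:00:00'   -- replaces the TO_CHAR(...) LIKE filter
       and msglevel    = 'Info';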

  • Using Blobs - possible performance issues

    Hello,
    We are considering using BLOBs in our Oracle 9i database( supposed to be migrated to 10g soon)
    Here is the scenario
    ·     20,000 BLOBs per year, kept for 5 years
    ·     100 inserts per day
    ·     200 reads per day
    ·     Maximum BLOB size is 150 KB
    What will be the performance impact of using BLOBs on INSERT/SELECT?
    Would it be better to put the BLOBs in a different tablespace?
    Thanks in advance
    Alexander

    Hello,
    We are considering using BLOBs in our Oracle 9i
    database( supposed to be migrated to 10g soon)
    Here is the scenario
    ·     20 000 BLOB/year keeping during 5 years
    ·     100 inserts per day
    ·     200 reads per day
    ·     Maximum BLOB size is 150 Kb
    What will be performance impact of using Blobs on
    INSERT/SELECT?
    Are these stored in the tables or outside the tables using locators?
    Would it be better to separate BLOBs on the different
    tablespace ?
    Yes, it is better to store them in a different tablespace.
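
    As a minimal sketch of what that could look like (table, column and tablespace names are made up for illustration; the storage options shown are common choices, not requirements):

    -- BLOB column with its LOB segment placed in a dedicated tablespace
    create table doc_store (
      doc_id   number primary key,
      doc_blob blob
    )
    lob (doc_blob) store as (
      tablespace lob_data           -- keeps LOB I/O away from the main data tablespace
      disable storage in row        -- 150 KB BLOBs would be stored out of line anyway
      chunk 32768
    );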

  • RE: Case 59063: performance issues w/ C TLIB and Forte3M

    Hi James,
    Could you give me a call, I am at my desk.
    I had meetings all day and couldn't respond to your calls earlier.
    -----Original Message-----
    From: James Min [mailto:jminbrio.forte.com]
    Sent: Thursday, March 30, 2000 2:50 PM
    To: Sharma, Sandeep; Pyatetskiy, Alexander
    Cc: sophiaforte.com; kenlforte.com; Tenerelli, Mike
    Subject: Re: Case 59063: performance issues w/ C TLIB and Forte 3M
    Hello,
    I just want to reiterate that we are very committed to working on
    this issue, and that our goal is to find out the root of the problem. But
    first I'd like to narrow down the avenues by process of elimination.
    Open Cursor is something that is commonly used in today's RDBMS. I
    know that you must test your query in ISQL using some kind of execute
    immediate, but Sybase should be able to handle an open cursor. I was
    wondering if your Sybase expert commented on the fact that the server is
    not responding to commonly used command like 'open cursor'. According to
    our developer, we are merely following the API from Sybase, and open cursor
    is not something that particularly slows down a query for several minutes
    (except maybe the very first time). The logs show that Forte is waiting for
    a status from the DB server. Actually, using prepared statements and open
    cursor ends up being more efficient in the long run.
    Some questions:
    1) Have you tried to do a prepared statement with open cursor in your ISQL
    session? If so, did it have the same slowness?
    2) How big is the table you are querying? How many rows are there? How many
    are returned?
    3) When there is a hang in Forte, is there disk-spinning or CPU usage in
    the database server side? On the Forte side? Absolutely no activity at all?
    We actually have a Sybase set-up here, and if you wish, we could test out
    your database and Forte PEX here. Since your queries seems to be running
    off of only one table, this might be the best option, as we could look at
    everything here, in house. To do this:
    a) BCP out the data into a flat file. (character format to make it portable)
    b) we need a script to create the table and indexes.
    c) the Forte PEX file of the app to test this out.
    d) the SQL staement that you issue in ISQL for comparison.
    If the situation warrants, we can give a concrete example of
    possible errors/bugs to a developer. Dial-in is still an option, but to be
    able to look at the TOOL code, database setup, etc. without the limitations
    of dial-up may be faster and more efficient. Please let me know if you can
    provide this, as well as the answers to the above questions, or if you have
    any questions.
    Regards,
    At 08:05 AM 3/30/00 -0500, Sharma, Sandeep wrote:
    James, Ken:
    FYI, see attached response from our Sybase expert, Dani Sasmita. She has
    already tried what you suggested and results are enclosed.
    ++
    Sandeep
    -----Original Message-----
    From: SASMITA, DANIAR
    Sent: Wednesday, March 29, 2000 6:43 PM
    To: Pyatetskiy, Alexander
    Cc: Sharma, Sandeep; Tenerelli, Mike
    Subject: Re: FW: Case 59063: Select using LIKE has performance
    issues
    w/ CTLIB and Forte 3M
    We did that trick already.
    When it is hanging, I can see what is doing.
    It is doing OPEN CURSOR. But not clear the exact statement of the cursor
    it is trying to open.
    When we run the query directly to Sybase, not using Forte, it is clearly
    not opening any cursor.
    And running it directly to Sybase many times, the response is always
    consistently fast.
    It is just when the query runs from Forte to Sybase, it opens a cursor.
    But again, in the Forte code, Alex is not using any cursor.
    In trying to capture the query,we even tried to audit any statementcoming
    to Sybase. Same thing, just open cursor. No cursor declaration anywhere.
    ==============================================
    James Min
    Technical Support Engineer - Forte Tools
    Sun Microsystems, Inc.
    1800 Harrison St., 17th Fl.
    Oakland, CA 94612
    james.minsun.com
    510.869.2056
    ==============================================
    Support Hotline: 510-451-5400
    CUSTOMERS open a NEW CASE with Technical Support:
    http://www.forte.com/support/case_entry.html
    CUSTOMERS view your cases and enter follow-up transactions:
    http://www.forte.com/support/view_calls.html

    Earthlink wrote:
    Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something? Well, we're missing a lot here.
    Like:
    - a database version
    - how did you test
    - what data do you have, how is it distributed, indexed
    and so on.
    If you want to find out what's going on then use a TRACE with wait events.
    All necessary steps are explained in these threads:
    HOW TO: Post a SQL statement tuning request - template posting
    http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
    Another nice one is RUNSTATS:
    http://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551378329289980701

  • Performance Issue Problem

    SELECT PSPHI
             A~PSPNR
             B~POST1
             A~POST1
      INTO TABLE T_PRPS
      FROM PRPS AS A
      JOIN PROJ AS B
      ON PSPHI = B~PSPNR
      WHERE PSPHI IN S_PSPHI.
    The user won't enter the project number, so the first SELECT query will fetch the entire table's records because S_PSPHI is a select-option.
    After getting the records from PRPS, I pass these records to the AFPO table to fetch network numbers.
    IF T_PRPS[] IS NOT INITIAL.
    SELECT AUFNR INTO TABLE T_AFPO
                   FROM AFPO
                   FOR ALL ENTRIES IN T_PRPS
                   WHERE PROJN = T_PRPS-PSPNR
                   %_HINTS ORACLE 'INDEX("AFPO" "AFPO~002")'.
    ENDIF.
    After getting records from AFPO, I pass these records to AUFK to fetch the text and other data.
    IF T_AFPO[] IS NOT INITIAL.
    SELECT AUFNR
               PSPEL
               KTEXT
               INTO TABLE T_AUFK
               FROM AUFK
               FOR ALL ENTRIES IN T_AFPO
               WHERE AUFNR = T_AFPO-AUFNR.
    ENDIF.
    The second select query is taking long time.
    At first the 2nd and 3rd select queries are written by using JOIN,but performance issue came.so i split the JOIN condition.After that also i got same performance problem.Then i used Secondary index syntax.time reduced to 17mins,but user wants it less time.Please any body vl gIve solution for this as soon as possible.
    My analysis is the field which is ther in  where condition is not a primary key filed and records in the database table also hav large in amount.
    Thanks&Regards,
    R.P.Sastry

    It looks like the problem is entirely due to the large amount of data. You are extracting a large number of entries from PRPS and then using all of the results to get orders from AFPO via a non-unique secondary index. This will take time and there's not much you can do about it except run it in the background.
    Or enter the project number.
    Rob
    Edited by: Rob Burbank on Feb 20, 2008 9:27 AM
    Edited by: Rob Burbank on Feb 20, 2008 9:29 AM

  • Performance Issue - Index is not used when a zero padded string is queried

    Hi All,
    I have a table T1 which has many columns. One such column, say C1, is a varchar2(20). T1 has 10 million rows and there is an index called I1 on C1. Stats are current for both the table and the index. These are the scenarios:
    Scenario 1
    select *   from T1 where C1 = '0013206263' --Uses index I1
    187 ms
    Scenario 2
    select *   from T1 where C1 = '8177341863' --Uses index I1
    203 ms
    Scenario 3
    select *   from T1 where C1 = '0000000945' --Uses Full Table Scan --Very Slow
    45 seconds
    When I force the sql to use the index through a hint, it is working fine:
    Scenario 4
    select /*+ INDEX (t1 i1) */  *   from T1 where C1 = '0013206263' --Uses index I1
    123 ms
    Scenario 5
    select /*+ INDEX (t1 i1) */  *   from T1 where C1 = '8177341863' --Uses index I1
    201 ms
    Scenario 6
    select /*+ INDEX (t1 i1) */  *   from T1 where C1 = '0000000945' --Uses index I1
    172 ms
    Is there any reason for this performance issue? Why does the optimizer go for a full table scan in Scenario 3?
    Edited by: user539954 on May 14, 2009 12:22 PM
    Edited by: user539954 on May 14, 2009 12:32 PM

    user539954 wrote:
    Please see the replies below:
    - How many distinct values for C1 out of that 10 million rows? I'm guessing that histograms were created for C1, correct?
    =>7 million distinct c1 values. I have not gathered a histogram yet. Should I try that?
    SQL> explain plan for select * from T1  where C1 = '0000000954';
    | Id  | Operation         | Name           | Rows  | Bytes | Cost (%CPU)|
    |   0 | SELECT STATEMENT  |                |   244K|    19M| 26228   (5)|
    |   1 |  TABLE ACCESS FULL| T1 |   244K|    19M| 26228   (5)|
    SQL> explain plan for select * from T1  where C1 = '0033454555';
    | Id  | Operation                   | Name               | Rows  | Bytes | Cost (%CPU)|
    |   0 | SELECT STATEMENT            |                    |   532 | 43624 |   261   (0)|
    |   1 |  TABLE ACCESS BY INDEX ROWID| T1     |   532 | 43624 |   261   (0)|
    |   2 |   INDEX RANGE SCAN          | I1 |   532 |       |     2   (0)|
    It's possible you do have a histogram, even though you didn't plan on creating it, if you're running 10g.
    In the absence of a histogram and with 7M distinct keys in 10M rows, Oracle should have predicted 2 rows for both queries, not 244,000 and 532.
    If you do have a histogram, you probably need to get rid of it.
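
    As a minimal sketch of checking for and removing such a histogram (T1 and C1 are the names used in this thread; it assumes the statistics are gathered from the owning schema):

    -- Check whether a histogram exists on C1
    select column_name, histogram, num_distinct
      from user_tab_col_statistics
     where table_name = 'T1'
       and column_name = 'C1';

    -- If HISTOGRAM is not NONE, regathering with SIZE 1 removes it
    begin
      dbms_stats.gather_table_stats(
        ownname    => user,
        tabname    => 'T1',
        method_opt => 'FOR COLUMNS C1 SIZE 1',
        cascade    => true);
    end;
    /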
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    "Science is more than a body of knowledge; it is a way of thinking"
    Carl Sagan
    To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.

  • Performance issue - Index not being used

    Hi,
    I have the following scenario
    - I have a table Account with a PK -Account_ID, for which there is a index PK_ACCOUNT_ID_IDX
    - Another table Account_fact has a PK - Account_ID, Month_Year, for which there is a index PK_ACCOUNT_FACT_IDX
    I have a join in these two table. The query is
    select a.ACCOUNT_ID, b.Cost
    From Account a, Account_fact b
    Where a.account_id = b.Account_id
    and a.Include_flag = 'Y'
    The explain plan for this query tells me it's doing a full table scan and not using the index, and the cost of the query is very high.
    Can any one suggest what is wrong here.
    Regards
    Deepak

    >> Should I be creating a bitmap index?
    A bitmap index is not for this case.
    http://download-east.oracle.com/docs/cd/B10501_01/server.920/a96533/data_acc.htm#8131
    Also, bitmap indexes are risky on transactional tables; they are not meant for them.
    As I said, in this case create an index on all the columns you are using in the query.
    For example, (INCLUDE_FLAG, ACCOUNT_ID) on the ACCOUNT table
    and (ACCOUNT_ID, COST) on the other table.
    But keep in mind that if you select any other column from these tables, the optimizer might again opt for a full table scan.
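
    As a minimal sketch of the indexes suggested above (the index names are made up; table and column names are from the thread):

    -- Covering indexes so the query can be answered from the indexes alone
    create index account_flag_id_idx      on account      (include_flag, account_id);
    create index account_fact_id_cost_idx on account_fact (account_id, cost);

    Because the query selects only ACCOUNT_ID and COST and filters on INCLUDE_FLAG, both row sources can then be resolved without visiting the tables.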

  • Performance issue with drop and re-create index

    My database table has about 2 million records. The index on the table was not optimized, so we created a new index, let's call it index2. So this table now has the original index (index1) and index2. We then inserted data into this table from the other box. It was running for a few weeks.
    Suddenly we noticed that a query which used to take a few seconds now took more than a minute. The execution plan was using index2, which technically should be faster. We checked that the statistics were up to date, and they were. So then we dropped the new index, re-ran the query, and it completed in 10 seconds; it was using the old index. This puzzled me since the point of index2 was to make things better. So we re-created index2 and generated stats for the index, re-ran the query, and it completed in 5 seconds.
    Every time we timed the query, I shut down and restarted the box to clear all caches, so all the times I have given are cold, not cached. The execution plans using index2 for the 1-minute and 5-second runs are nearly the same, with only minor differences in cost and cardinality. Any ideas why index2 took 1 minute before, yet after the drop and re-create takes only 5 seconds?
    The reason I want to find the cause is to ensure that this doesn't happen again, since it's impossible for me to re-create the index every time I see this issue. Any thoughts would be helpful.

    Firstly, the indexes are different: index1 is only on the time column, whereas index2 is a composite index consisting of 3 columns.
    Here are the details. The tests that I did were last Friday, 3/31. Yesterday and today, when I executed the same query, I got longer times: yesterday it took 9 seconds and today 17 seconds. The stats job kicked in on both days and the stats are up to date. Nothing gets deleted from this table; rows are only added.
    3/31
    Original
    Elapsed: 00:01:02.17
    Execution Plan
    0 SELECT STATEMENT Optimizer=CHOOSE (Cost=6553 Card=9240 Bytes
    =203280)
    1 0 SORT (UNIQUE) (Cost=6553 Card=9240 Bytes=203280)
    2 1 INDEX (FULL SCAN) OF 'EVENT_NA_TIME_ETYPE' (NON-UNIQ
    UE) (Cost=15982 Card=2306303 Bytes=50738666)
    drop index EVENT_NA_TIME_ETYPE
    Elapsed: 00:00:11.91
    Execution Plan
    0 SELECT STATEMENT Optimizer=CHOOSE (Cost=7792 Card=9275 Bytes
    =204050)
    1 0 SORT (UNIQUE) (Cost=7792 Card=9275 Bytes=204050)
    2 1 TABLE ACCESS (BY INDEX ROWID) OF 'EVENT' (Cost=2092
    Card=2284254 Bytes=50253588)
    3 2 INDEX (RANGE SCAN) OF 'EVENT_TIME_NDX' (NON-UNIQUE
    ) (Cost=6740 Card=2284254)
    create index EVENT_NA_TIME_ETYPE ON EVENT(NET_ADDRESS,TIME,EVENT_TYPE);
    BEGIN
    SYS.DBMS_STATS.GENERATE_STATS('USER','EVENT_NA_TIME_ETYPE',0);
    end;
    Elapsed: 00:00:05.14
    Execution Plan
    0 SELECT STATEMENT Optimizer=CHOOSE (Cost=6345 Card=9275 Bytes
    =204050)
    1 0 SORT (UNIQUE) (Cost=6345 Card=9275 Bytes=204050)
    2 1 INDEX (FULL SCAN) OF 'EVENT_NA_TIME_ETYPE' (NON-UNIQ
    UE) (Cost=12878 Card=2284254 Bytes=50253588)
    4/3
    Elapsed: 00:00:09.70
    Execution Plan
    0 SELECT STATEMENT Optimizer=CHOOSE (Cost=6596 Card=9316 Bytes
    =204952)
    1 0 SORT (UNIQUE) (Cost=6596 Card=9316 Bytes=204952)
    2 1 INDEX (FULL SCAN) OF 'EVENT_NA_TIME_ETYPE' (NON-UNIQ
    UE) (Cost=11696 Card=2409400 Bytes=53006800)
    Statistics
    0 recursive calls
    0 db block gets
    11933 consistent gets
    9676 physical reads
    724 redo size
    467 bytes sent via SQL*Net to client
    503 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    1 sorts (memory)
    0 sorts (disk)
    3 rows processed
    4/4
    Elapsed: 00:00:17.99
    Execution Plan
    0 SELECT STATEMENT Optimizer=CHOOSE (Cost=6681 Card=9421 Bytes
    =207262)
    1 0 SORT (UNIQUE) (Cost=6681 Card=9421 Bytes=207262)
    2 1 INDEX (FULL SCAN) OF 'EVENT_NA_TIME_ETYPE' (NON-UNIQ
    UE) (Cost=12110 Card=2433800 Bytes=53543600)
    Statistics
    0 recursive calls
    0 db block gets
    12279 consistent gets
    9423 physical reads
    2608 redo size
    467 bytes sent via SQL*Net to client
    503 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    1 sorts (memory)
    0 sorts (disk)
    3 rows processed
    SQL> select index_name,clustering_factor,blevel,leaf_blocks,distinct_keys from user_indexes where index_name like 'EVENT%';
    INDEX_NAME            CLUSTERING_FACTOR  BLEVEL  LEAF_BLOCKS  DISTINCT_KEYS
    EVENT_NA_TIME_ETYPE             2393170       2        12108        2395545
    EVENT_PK                          32640       2         5313        2286158
    EVENT_TIME_NDX                    35673       2         7075        2394055
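
    Since the cost of the INDEX FULL SCAN in these plans tracks the size of the index, a minimal sketch (object names are from the post) of watching the index statistics over time may help show whether EVENT_NA_TIME_ETYPE simply becomes less compact between rebuilds:

    -- Dictionary statistics for all indexes on the table
    select index_name, blevel, leaf_blocks, clustering_factor, last_analyzed
      from user_indexes
     where table_name = 'EVENT';

    -- Space actually used inside the index (note: this briefly locks the index)
    analyze index event_na_time_etype validate structure;
    select lf_blks, del_lf_rows, pct_used from index_stats;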

  • Performance issue with Oracle Text index

    Hi Experts,
    We are on Oracle 11.2.0.3 on Solaris 10. I have implemented Oracle Text in our environment and I am facing a strange performance issue.
    One SQL with a CONTAINS clause is taking forever - more than 20 minutes and still does not complete. This SQL has a CONTAINS clause, an EXISTS clause and a NOT EXISTS clause.
    If I remove the EXISTS and NOT EXISTS clauses it completes fast, but with those two clauses it just takes forever. It is late night so I am not able to post the table and SQL query details and will do so tomorrow, but based on this general description, are there any pointers for me to review?
    sql query doing fine:
    SELECT
        U.CLNT_OID, U.USR_OID, S.MAILADDR
    FROM
        access_usr U
        INNER JOIN access_sia S
            ON S.USR_OID = U.USR_OID AND S.CLNT_OID = U.CLNT_OID
        WHERE U.CLNT_OID = 'ABCX32S'
        AND CONTAINS(LAST_NAME , 'TO%' ) >0
    --sql query that hangs forever:
    SELECT
        U.CLNT_OID, U.USR_OID, S.MAILADDR
    FROM
        access_usr U
        INNER JOIN access_sia S
            ON S.USR_OID = U.USR_OID AND S.CLNT_OID = U.CLNT_OID
        WHERE U.CLNT_OID = 'ABCX32S'
        AND CONTAINS(LAST_NAME , 'TO%' ) >0
    and exists (--one clause here with a few table joins)
    and not exists (--one clause here with a few table joins);
    --Another strange thing I found: if instead of 'TO%' in this SQL I use 'ZZ%' or 'L1%', it works fast, but for 'TO%' it goes slow with those two EXISTS/NOT EXISTS clauses!
    I will be most thankful for the inputs.
    OrauserN

    Hi Barbara,
    First of all, thanks a lot for reviewing the issue.
    Unfortunately, making the change to empty_stoplist did not work out. I am copying here today the entire SQL that has this issue and will be most thankful for more insights/pointers on what can be done.
    Here is the entire sql:
    SELECT U.CLNT_OID,
           U.USR_OID,
           S.EMAILADDRESS,
           U.FIRST_NAME,
           U.LAST_NAME,
           S.JOBCODE,
           S.LOCATION,
           S.DEPARTMENT,
           S.ASSOCIATEID,
           S.ENTERPRISECOMPANYCODE,
           S.EMPLOYEEID,
           S.PAYGROUP,
           S.PRODUCTLOCALE
      FROM    ACCESS_USR U
           INNER JOIN
              ACCESS_SIA S
           ON S.USR_OID = U.USR_OID AND S.CLNT_OID = U.CLNT_OID
    WHERE     U.CLNT_OID = 'G39NY3D25942TXDA'
           AND EXISTS
                  (SELECT 1
                     FROM ACCESS_USR_GROUP_XREF UGX
                          INNER JOIN ACCESS_GROUP RELG
                             ON     RELG.CLNT_OID = UGX.CLNT_OID
                                AND RELG.GROUP_OID = UGX.GROUP_OID
                          INNER JOIN ACCESS_GROUP G
                             ON     G.CLNT_OID = RELG.CLNT_OID
                                AND G.GROUP_TYPE_OID = RELG.GROUP_TYPE_OID
                    WHERE     UGX.CLNT_OID = U.CLNT_OID
                          AND UGX.USR_OID = U.USR_OID
                          AND G.GROUP_OID = 920512943
                          AND UGX.INCLUDED = 1)
           AND NOT EXISTS
                      (SELECT 1
                         FROM    ACCESS_USR_GROUP_XREF UGX
                              INNER JOIN
                                 ACCESS_GROUP G
                              ON     G.CLNT_OID = UGX.CLNT_OID
                                 AND G.GROUP_OID = UGX.GROUP_OID
                        WHERE     UGX.CLNT_OID = U.CLNT_OID
                              AND UGX.USR_OID = U.USR_OID
                              AND G.GROUP_OID = 920512943
                              AND UGX.INCLUDED = 1)
           AND CONTAINS (U.LAST_NAME, 'Bon%') > 0;
    Like I said before if the EXISTS and NOT EXISTS clause are removed it works in sub-second. But with those EXISTS and NOT EXISTS CLAUSE IT TAKES ANY WHERE FROM 25 minutes to more than one hour.
    Note also that it was not TO% but Bon% in the CONTAINS clause that is giving the issue - sorry, that was wrong on my part.
    Also please see below the ORACLE TEXT index defined on the table ACCESS_USER:
    --definition of preferences used in the index:
    SET SERVEROUTPUT ON size unlimited
    WHENEVER SQLERROR EXIT SQL.SQLCODE
    DECLARE
       v_err       VARCHAR2 (1000);
       v_sqlcode   NUMBER;
       v_count     NUMBER;
    BEGIN
       ctxsys.ctx_ddl.create_preference ('cust_lexer', 'BASIC_LEXER');
       ctxsys.ctx_ddl.set_attribute ('cust_lexer', 'base_letter', 'YES'); -- removes diacritics
    EXCEPTION
       WHEN OTHERS
       THEN
          v_err := SQLERRM;
          v_sqlcode := SQLCODE;
          v_count := INSTR (v_err, 'DRG-10701');
          IF v_count > 0
          THEN
             DBMS_OUTPUT.put_line (
                'The required preference named CUST_LEXER with BASIC LEXER is already set up');
          ELSE
             RAISE;
          END IF;
    END;
    DECLARE
       v_err       VARCHAR2 (1000);
       v_sqlcode   NUMBER;
       v_count     NUMBER;
    BEGIN
       ctxsys.ctx_ddl.create_preference ('cust_wl', 'BASIC_WORDLIST');
       ctxsys.ctx_ddl.set_attribute ('cust_wl', 'SUBSTRING_INDEX', 'true'); -- to improve performance
    EXCEPTION
       WHEN OTHERS
       THEN
          v_err := SQLERRM;
          v_sqlcode := SQLCODE;
          v_count := INSTR (v_err, 'DRG-10701');
          IF v_count > 0
          THEN
             DBMS_OUTPUT.put_line (
                'The required preference named CUST_WL with BASIC WORDLIST is already set up');
          ELSE
             RAISE;
          END IF;
    END;
    --now below is the code of the index:
    CREATE INDEX ACCESS_USR_IDX3 ON ACCESS_USR
    (FIRST_NAME)
    INDEXTYPE IS CTXSYS.CONTEXT
    PARAMETERS('LEXER cust_lexer WORDLIST cust_wl SYNC (ON COMMIT)');
    CREATE INDEX ACCESS_USR_IDX4 ON ACCESS_USR
    (LAST_NAME)
    INDEXTYPE IS CTXSYS.CONTEXT
    PARAMETERS('LEXER cust_lexer WORDLIST cust_wl SYNC (ON COMMIT)');
    The strange thing is that, like I said, if I remove the EXISTS clause the query returns very fast. Also, if I modify the query to use only the NOT EXISTS clause and remove the EXISTS clause, it returns in less than one second. Also, if I remove the EXISTS clause and use only the NOT EXISTS clause, it returns in less than 4 seconds. But with both clauses it runs forever!
    When I tried dbms_xplan.display_cursor to get the query plan (for the case with both the EXISTS and NOT EXISTS clauses in the query), it said the previous statement's sql_id was 0 or something like that, so I was not able to see the query plan. I will keep trying to get this plan (it takes 25 minutes to one hour each time, but I will get this info soon). Again, any pointers are most helpful.
    Regards
    OrauserN
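
    Since DBMS_XPLAN.DISPLAY_CURSOR without arguments only reports on the last statement of the same session, a minimal sketch of pulling the plan for the long-running statement from a second session while it is still executing (the username filter is an assumption; identify the session however is convenient):

    -- From another session: find the sql_id of the running statement
    select sid, sql_id, sql_child_number
      from v$session
     where status = 'ACTIVE'
       and username = 'APP_USER';   -- hypothetical user running the slow query

    -- Then display its plan and whatever runtime statistics are available
    select *
      from table(dbms_xplan.display_cursor('<sql_id from above>', null, 'ALLSTATS LAST'));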

  • Anyone using a 12 Core Mac Pro? I have HORRIBLE performance issues .. Help!

    After the latest 10.7.4 Mac OS X update I have extremely horrible performance issues with AE ... and they were not so great before the update.
    It is still stabilizing ... but a 1:19 clip in SD is taking 12 HOURS to analyze and stabilize!!!
    The 12 cores are barely being used and this problem has been an issue since I purchased the suite over a year ago.
    Does anyone else have problems using AE on their 12 CORE MAC PRO?
    REPLY ONLY IF YOU HAVE A 12 CORE MAC PRO PLEASE.
    There must be a problem because since the update ... Adobe Encore is PERFECT .. and ALL 12 CORES MAX OUT and the encoding is quick!!
    I also have major problems between PP and AE using Dynamic link .. and slow renders in PP.
    Everything else works fine .. other apps / other vendors.
    I am calling Adobe today.

    Thank you for your time.
    I am using a 12 Core 2.93 GHz with 32 GB RAM and an NVidia GTX 285.
    I also have a Areca 1212 PCI RAID Card. ( NOTE: I HAVE A SERIOUS RAID DRIVER PROBLEM NOW. THE RAID IS DISCONNECTED )
    Mac OS X 10.7.4
    Adobe Master Suite 5.5
    The Apple "Console" app logs a whole lot of these three Adobe related errors:
    1. 5/18/12 2:10:16.260 AM aeselflink: CFURLCreateWithString was passed this invalid URL string: '/System/Library/Frameworks/System.framework' (a file system path instead of an URL string). The URL created will not work with most file URL functions. CFURLCreateWithFileSystemPath or CFURLCreateWithFileSystemPathRelativeToBase should be used instead.
    2.) 5/18/12 2:10:16.331 AM aeselflink: -[NSMenu menuID]: unrecognized selector sent to instance 0x1183100e0
    5/18/12 2:10:17.333 AM aeselflink: -[NSMenu menuID]: unrecognized selector sent to instance 0x115625740
    3.) 5/18/12 2:11:18.596 AM [0x0-0x9b09b].com.adobe.aerendercore: You have at least one output module template that refers to a missing output plug-in.  Please check your Output Module Templates.
    Only half of my hyperthreaded processors are active when using AE "Warp Stabilizer". This issue was addressed before in the Forums. There was no solution. I don't know if it is any better in CS 6.
    Also, the automatic saving of all linked compositions while using the dynamic link feature between PP and AE causes huge, unbearable waiting times. My guess is that the new "Global Performance Cache" fixes this ... which I consider a patch for a terrible problem, but they sell it as a feature (I'll go into that later).
    Question:
    What RAID card are you using?
    Do you use the Warp Stabilizer?

  • WILL A BIG INDEX CAUSE A PERFORMANCE ISSUE?

    If there are a lot of inserts into an indexed table, the data will grow; and if the index is huge, can that really cause a performance issue?
    Is there a document in Metalink that says if the index is 50% of the data then we have to rebuild it? What are the basis and threshold for rebuilding an index?

    A big index by itself won't cause a performance issue. There are other circumstances you should consider for the index.
    First of all, which kind of index are you talking about? There are several kinds of indexes in Oracle. Assuming you are talking about a regular B*Tree index, you should consider factors such as selectivity and cardinality. If the indexed column has evenly distributed values, the index will be highly selective. If the indexed column is highly skewed, then in order for the index not to become a real bottleneck you should gather histograms, so selectivity can be calculated at execution time; when a query retrieves a highly selective data range the index won't slow performance, otherwise a full table scan will be considered the better access path.
    Rebuilding indexes is an operation performed when the index becomes invalid, or when migrating the index to a new tablespace, but not when you suspect the index has become 'fragmented'; in that case you should use the COALESCE command instead. Oracle provides efficient algorithms to keep the index balanced.
    ~ Madrid
    http://hrivera99.blogspot.com/
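
    As a minimal sketch of the two operations mentioned above (the index and tablespace names are made up for illustration):

    -- Coalesce: merges adjacent, sparsely populated leaf blocks in place;
    -- much cheaper than a rebuild
    alter index big_tab_idx coalesce;

    -- Rebuild: for an UNUSABLE index or a move to another tablespace
    alter index big_tab_idx rebuild tablespace idx_data online;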
