Table scan

I have an update statement on a table of 3 million records. A script inserts records into this table and then updates two fields on the newly inserted rows. The insert works perfectly fine, but the script hangs when the update runs. I checked the Enterprise Manager Console and it shows a full table scan in progress that would take 8 hours to complete. I would like to know what is causing this. Thanks for your help.

You should be doing the update on the PK columns; the PK has an index, so the update should use an index scan, which would be fast. Check the explain plan, for example with the sketch below.
--Girish
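
As a quick check (a sketch only; my_table, col_a, col_b and id are hypothetical stand-ins, not names from the post above), you can run the update through EXPLAIN PLAN and confirm it accesses the table via the primary-key index rather than a full scan:

EXPLAIN PLAN FOR
UPDATE my_table
SET col_a = :new_a,
    col_b = :new_b
WHERE id = :pk_value;   -- filter on the indexed PK column

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
-- expect an index access step (e.g. INDEX UNIQUE SCAN) rather than TABLE ACCESS FULL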

Similar Messages

  • Data Federator Full Table scan

    Hi,
    Is it possible to prevent a full table scan when creating a join between tables of 2 catalogues in Data Federator?
    This is seriously hampering development time within Data Federator.
    I am working with the IDT beta to create a universe based on multiple sources. The delay when creating joins is so large that we may have to revert to the Universe Design Tool. I have posted it here because the Data Federator gurus will know about IDT, since it incorporates DF within itself in BI 4.0.
    Any inputs will be great. In case this is in the wrong forum, please move it accordingly.
    VFernandes

    The issue was fixed when the GA version was released; it was only present in the Beta.

  • Select statement in a function does Full Table Scan

    All,
    I have been coding a stored procedure that writes 38K rows in less than a minute. If I add another column which requires a call to a package and 4 functions within that package, it runs for about 4 hours. I have confirmed that, due to problems in one of the functions, the code does full table scans. The package and all of its functions were written by other contractors who are long gone.
    Please note that case_number_in (VARCHAR2) and effective_date_in (DATE) are parameters sent to the problem function and I have verified through TOAD’s debugger that their values are correct.
    The table ps2_benefit_register has over 40 million rows, but case_number is indexed on that table.
    The table ps1_case_fs has more than 20 million rows and also has an index on case_number.
    Select #1 – causes a full table scan and takes a couple of hours to write the same 38K rows.
    {code}
    SELECT max(a2.application_date)
    INTO l_app_date
    FROM dwfssd.ps2_benefit_register a1, dwfssd.ps2_case_fs a2
    WHERE a2.case_number = case_number_in and
    a1.case_number = a2.case_number and
    a2.application_date <= effective_date_in and
    a1.DOCUMENT_TYPE = 'F';
    {code}
    Select #2 – runs fine – hard-coding the values makes the code write the same 38K rows in a few minutes.
    {code}
    SELECT max(a2.application_date)
    INTO l_app_date
    FROM dwfssd.ps2_benefit_register a1, dwfssd.ps2_case_fs a2
    WHERE a2.case_number = 'A006438' and
    a1.case_number = a2.case_number and
    a2.application_date <= '01-Apr-2009' and
    a1.DOCUMENT_TYPE = 'F';
    {code}
    Why does using the passed parameter values in the first select statement cause a full table scan?
    Thank you for your help,
    Seyed
    Edited by: user11117178 on Jul 30, 2009 6:22 AM
    Edited by: user11117178 on Jul 30, 2009 6:23 AM
    Edited by: user11117178 on Jul 30, 2009 6:24 AM
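
    One way to see what the optimizer actually does with the parameter values (a sketch; it assumes the statement can be run standalone on 10g with bind variables in place of the parameters):
    SELECT /*+ GATHER_PLAN_STATISTICS */ MAX(a2.application_date)
    FROM dwfssd.ps2_benefit_register a1, dwfssd.ps2_case_fs a2
    WHERE a2.case_number = :case_number_in
    AND a1.case_number = a2.case_number
    AND a2.application_date <= :effective_date_in
    AND a1.DOCUMENT_TYPE = 'F';
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
    If a parameter's datatype does not match the indexed column (for example, a VARCHAR2 value compared to a NUMBER column), Oracle converts the column rather than the value and the index cannot be used; the plan and its predicate section above would show that.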

    Hello Dan,
    Thank you for your input. The function is not deterministic, therefore I am providing you with the explain plan. By version number, if you are referring to the database version, we are running 10g.
    PLAN_TABLE_OUTPUT
    Plan hash value: 2132048964
    | Id  | Operation                     | Name                    | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
    |   0 | SELECT STATEMENT              |                         |   324K|    33M|  3138   (5)| 00:00:38 |       |       |
    |*  1 |  HASH JOIN                    |                         |   324K|    33M|  3138   (5)| 00:00:38 |       |       |
    |   2 |   BITMAP CONVERSION TO ROWIDS |                         |     3 |     9 |     1   (0)| 00:00:01 |       |       |
    |*  3 |    BITMAP INDEX FAST FULL SCAN| IDX_PS2_ACTION_TYPES    |       |       |            |          |       |       |
    |   4 |   PARTITION RANGE ITERATOR    |                         |   866K|    87M|  3121   (4)| 00:00:38 |   154 |   158 |
    |   5 |    TABLE ACCESS FULL          | PS2_FS_TRANSACTION_FACT |   866K|    87M|  3121   (4)| 00:00:38 |   154 |   158 |
    Predicate Information (identified by operation id):
       1 - access("AL1"."ACTION_TYPE_ID"="AL2"."ACTION_TYPE_ID")
       3 - filter("AL2"."ACTION_TYPE"='1' OR "AL2"."ACTION_TYPE"='2' OR "AL2"."ACTION_TYPE"='S')
    Thank you very much,
    Seyed                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          
                                                             

  • URGENT HELP Required: Solution to avoid Full table scan for a PL/SQL query

    Hi Everyone,
    When I checked the EXPLAIN PLAN for the SQL query below, I saw that full table scans are going on against both tables, TABLE_A and TABLE_B.
    UPDATE TABLE_A a
    SET a.current_commit_date =
    (SELECT MAX (b.loading_date)
    FROM TABLE_B b
    WHERE a.sales_order_id = b.sales_order_id
    AND a.sales_order_line_id = b.sales_order_line_id
    AND b.confirmed_qty > 0
    AND b.data_flag IS NULL
    OR b.schedule_line_delivery_date >= '23 NOV 2008')
    Though TABLE_A is a small table having nearly 1 lakh (100,000) records, TABLE_B is a huge table having nearly 2.5 crore (25 million) records.
    I created an index on TABLE_B covering all of its fields used in the WHERE clause, but the explain plan is still showing a FULL TABLE SCAN.
    When I run the query, it takes a very long time to execute (more than 1 day) and each time I have to kill the session.
    Please please help me in optimizing this.
    Thanks,
    Sudhindra

    Check the instruction again, you're leaving out information we need in order to help you, like optimizer information.
    - Post your exact database version, that is: the result of select * from v$version;
    - Don't use TOAD's execution plan, but use
    SQL> explain plan for <your_query>;
    SQL> select * from table(dbms_xplan.display);
    (You can execute that in TOAD as well).
    Don't forget you need to use the {noformat}{noformat} tag in order to post formatted code/output/execution plans etc.
    It's also explained in the instruction.
    When was the last time statistics were gathered for table_a and table_b?
    You can find out by issuing the following query:
    select table_name
    , last_analyzed
    , num_rows
    from user_tables
    where table_name in ('TABLE_A', 'TABLE_B');
    Can you also post the results of these counts:
    select count(*)
    from table_b
    where confirmed_qty > 0;
    select count(*)
    from table_b
    where data_flag is null;
    select count(*)
    from table_b
    where schedule_line_delivery_date >= /* assuming you're using a date, and not a string*/ to_date('23 NOV 2008', 'dd mon yyyy');
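
    If the OR was meant to apply only inside the MAX subquery, the predicate also needs parentheses; without them the OR branch escapes the correlation on sales_order_id/sales_order_line_id, so every row of TABLE_B can qualify. A sketch of the update with explicit grouping and an explicit date (assuming that intent):
    UPDATE table_a a
    SET a.current_commit_date =
    (SELECT MAX (b.loading_date)
    FROM table_b b
    WHERE a.sales_order_id = b.sales_order_id
    AND a.sales_order_line_id = b.sales_order_line_id
    AND ((b.confirmed_qty > 0 AND b.data_flag IS NULL)
    OR b.schedule_line_delivery_date >= to_date('23 NOV 2008', 'dd mon yyyy')));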

  • Query optimization - Query is taking long time even there is no table scan in execution plan

    Hi All,
    The below query is taking a very long time to execute even though all the required indexes are present.
    Also, there is no table scan in the execution plan. I did a lot of research but I am unable to find a solution.
    Please help, this is required very urgently. Thanks in advance. :)
    WITH cte
    AS (
    SELECT Acc_ex1_3
    FROM Acc_ex1
    INNER JOIN Acc_ex5 ON (
    Acc_ex1.Acc_ex1_Id = Acc_ex5.Acc_ex5_Id
    AND Acc_ex1.OwnerID = Acc_ex5.OwnerID
    )
    WHERE (
    cast(Acc_ex5.Acc_ex5_92 AS DATETIME) >= '12/31/2010 18:30:00'
    AND cast(Acc_ex5.Acc_ex5_92 AS DATETIME) < '01/31/2014 18:30:00'
    )
    )
    SELECT DISTINCT R.ReportsTo AS directReportingUserId
    ,UC.UserName AS EmpName
    ,UC.EmployeeCode AS EmpCode
    ,UEx1.Use_ex1_1 AS PortfolioCode
    ,(
    SELECT TOP 1 TerritoryName
    FROM UserTerritoryLevelView
    WHERE displayOrder = 6
    AND UserId = R.ReportsTo
    ) AS BranchName
    ,GroupsNotContacted AS groupLastContact
    ,GroupCount AS groupTotal
    FROM ReportingMembers R
    INNER JOIN TeamMembers T ON (
    T.OwnerID = R.OwnerID
    AND T.MemberID = R.ReportsTo
    AND T.ReportsTo = 1
    )
    INNER JOIN UserContact UC ON (
    UC.CompanyID = R.OwnerID
    AND UC.UserID = R.ReportsTo
    )
    INNER JOIN Use_ex1 UEx1 ON (
    UEx1.OwnerId = R.OwnerID
    AND UEx1.Use_ex1_Id = R.ReportsTo
    )
    INNER JOIN (
    SELECT Accounts.AssignedTo
    ,count(DISTINCT Acc_ex1_3) AS GroupCount
    FROM Accounts
    INNER JOIN Acc_ex1 ON (
    Accounts.AccountID = Acc_ex1.Acc_ex1_Id
    AND Acc_ex1.Acc_ex1_3 > '0'
    AND Accounts.OwnerID = 109
    )
    GROUP BY Accounts.AssignedTo
    ) TotalGroups ON (TotalGroups.AssignedTo = R.ReportsTo)
    INNER JOIN (
    SELECT Accounts.AssignedTo
    ,count(DISTINCT Acc_ex1_3) AS GroupsNotContacted
    FROM Accounts
    INNER JOIN Acc_ex1 ON (
    Accounts.AccountID = Acc_ex1.Acc_ex1_Id
    AND Acc_ex1.OwnerID = Accounts.OwnerID
    AND Acc_ex1.Acc_ex1_3 > '0'
    )
    INNER JOIN Acc_ex5 ON (
    Accounts.AccountID = Acc_ex5.Acc_ex5_Id
    AND Acc_ex5.OwnerID = Accounts.OwnerID
    )
    WHERE Accounts.OwnerID = 109
    AND Acc_ex1.Acc_ex1_3 NOT IN (
    SELECT Acc_ex1_3
    FROM cte
    )
    GROUP BY Accounts.AssignedTo
    ) TotalGroupsNotContacted ON (TotalGroupsNotContacted.AssignedTo = R.ReportsTo)
    WHERE R.OwnerID = 109
    Please mark it as an answer/helpful if you find it as useful. Thanks, Satya Prakash Jugran
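
    One detail worth checking in the CTE above (a sketch only; it assumes Acc_ex5_92 is, or can be made, a DATETIME column, which is a guess): wrapping the column in CAST inside the WHERE clause generally prevents an index seek on it, even when a suitable index exists. Comparing the raw column keeps the predicate sargable:
    SELECT Acc_ex1_3
    FROM Acc_ex1
    INNER JOIN Acc_ex5 ON (
    Acc_ex1.Acc_ex1_Id = Acc_ex5.Acc_ex5_Id
    AND Acc_ex1.OwnerID = Acc_ex5.OwnerID
    )
    -- compare the column directly instead of CAST(Acc_ex5.Acc_ex5_92 AS DATETIME)
    WHERE Acc_ex5.Acc_ex5_92 >= '20101231 18:30:00'
    AND Acc_ex5.Acc_ex5_92 < '20140131 18:30:00';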

    Hi All,
    Thanks for the replies.
    I have optimized that query to make it run in a few seconds.
    Here is my final query.
    select ReportsTo as directReportingUserId, 
    UserName AS EmpName, 
    EmployeeCode AS EmpCode,
    Use_ex1_1 AS PortfolioCode,
    BranchName,
    GroupInfo.groupTotal,
    GroupInfo.groupLastContact,
    case when exists
    (select 1 from ReportingMembers RM
    where RM.ReportsTo =  UserInfo.ReportsTo
    and RM.MemberID <> UserInfo.ReportsTo
    ) then 0  else UserInfo.ReportsTo end as memberid1,
    (select code from Regions where ownerid=109 and  name=UserInfo.BranchName) as BranchCode,
    ROW_NUMBER() OVER (ORDER BY directReportingUserId) AS ROWNUMBER
    FROM 
    (select distinct R.ReportsTo, UC.UserName, UC.EmployeeCode,UEx1.Use_ex1_1,
    (select top 1 TerritoryName 
    from UserTerritoryLevelView
    where displayOrder = 6
    and UserId = R.ReportsTo) as BranchName,
    Case when R.ReportsTo = Accounts.AssignedTo then Accounts.AssignedTo else 0 end as memberid1
    from ReportingMembers R
    INNER JOIN TeamMembers T ON (T.OwnerID = R.OwnerID AND T.MemberID = R.ReportsTo AND T.ReportsTo = 1)
    inner join UserContact UC on (UC.CompanyID = R.OwnerID and UC.UserID = R.ReportsTo )
    inner join Use_ex1 UEx1 on (UEx1.OwnerId = R.OwnerID and UEx1.Use_ex1_Id = R.ReportsTo)
    inner join Accounts on (Accounts.OwnerID = 109 and Accounts.AssignedTo = R.ReportsTo)
    union
    select distinct R.ReportsTo, UC.UserName, UC.EmployeeCode,UEx1.Use_ex1_1,
    (select top 1 TerritoryName 
    from UserTerritoryLevelView
    where displayOrder = 6
    and UserId = R.ReportsTo) as BranchName,
    Case when R.ReportsTo = Accounts.AssignedTo then Accounts.AssignedTo else 0 end as memberid1
    from ReportingMembers R
    --INNER JOIN TeamMembers T ON (T.OwnerID = R.OwnerID AND T.MemberID = R.ReportsTo)
    inner join UserContact UC on (UC.CompanyID = R.OwnerID and UC.UserID = R.ReportsTo)
    inner join Use_ex1 UEx1 on (UEx1.OwnerId = R.OwnerID and UEx1.Use_ex1_Id = R.ReportsTo)
    inner join Accounts on (Accounts.OwnerID = 109 and Accounts.AssignedTo = R.ReportsTo)
    where R.MemberID = 1
    ) UserInfo
    inner join 
    (
    select directReportingUserId, sum(Groups) as groupTotal, SUM(GroupsNotContacted) as groupLastContact
    from
    (
    select distinct R.ReportsTo as directReportingUserId, Acc_ex1_3 as GroupName, 1 as Groups,
    case when Acc_ex5.Acc_ex5_92 between GETDATE()-365*10 and GETDATE() then 1 else 0 end as GroupsNotContacted
    FROM ReportingMembers R
    INNER JOIN TeamMembers T 
    ON (T.OwnerID = R.OwnerID AND T.MemberID = R.ReportsTo AND T.ReportsTo = 1)
    inner join Accounts on (Accounts.OwnerID = 109 and Accounts.AssignedTo = R.ReportsTo)
    inner join Acc_ex1 on (Acc_ex1.OwnerID = 109 and Acc_ex1.Acc_ex1_Id = Accounts.AccountID and Acc_ex1.Acc_ex1_3 > '0')
    inner join Acc_ex5 on (Acc_ex5.OwnerID = 109 and Acc_ex5.Acc_ex5_Id = Accounts.AccountID )
    --where TerritoryID in ( select ChildRegionID  RegionID from RegionWithSubRegions where OwnerID =109 and RegionID = 729)
    union 
    select distinct R.ReportsTo as directReportingUserId, Acc_ex1_3 as GroupName, 1 as Groups,
    case when Acc_ex5.Acc_ex5_92 between GETDATE()-365*10 and GETDATE() then 1 else 0 end as GroupsNotContacted
    FROM ReportingMembers R
    INNER JOIN TeamMembers T 
    ON (T.OwnerID = R.OwnerID AND T.MemberID = R.ReportsTo)
    inner join Accounts on (Accounts.OwnerID = 109 and Accounts.AssignedTo = R.ReportsTo)
    inner join Acc_ex1 on (Acc_ex1.OwnerID = 109 and Acc_ex1.Acc_ex1_Id = Accounts.AccountID and Acc_ex1.Acc_ex1_3 > '0')
    inner join Acc_ex5 on (Acc_ex5.OwnerID = 109 and Acc_ex5.Acc_ex5_Id = Accounts.AccountID )
    --where TerritoryID in ( select ChildRegionID  RegionID from RegionWithSubRegions where OwnerID =109 and RegionID = 729)
    where R.MemberID = 1
    ) GroupWiseInfo
    group by directReportingUserId
    ) GroupInfo
    on UserInfo.ReportsTo = GroupInfo.directReportingUserId
    Please mark it as an answer/helpful if you find it as useful. Thanks, Satya Prakash Jugran

  • Associative Array vs Table Scan

    Still new to PL/SQL, but very keen to learn. I wondered if somebody could advise me whether I should use a collection (such as an associative array) instead of repeating a table scan within a loop for the example below. I need to read from an input table of experiment data and, if the EXPERIMENT_ID does not already exist in my EXPERIMENTS table, add it. Here is the code I have so far. My instinct is that my code is inefficient. Would it be more efficient to scan the EXPERIMENTS table only once and store the list of IDs in a collection, then scan the collection within the loop?
    -- Create any new Experiment IDs if needed
    open CurExperiments;
    loop
    -- Fetch the explicit cursor
    fetch CurExperiments
    into vExpId, dExpDate;
    exit when CurExperiments%notfound;
    -- Check to see if already exists
    select count(id)
    into iCheckExpExists
    from experiments
    where id = vExpId;
    if iCheckExpExists = 0 then
    -- Experiment ID is not already in table so add a row
    insert into experiments
    (id, experiment_date)
    values(vExpId, dExpDate);
    end if;
    end loop;
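
    For comparison, a set-based alternative that avoids both the explicit cursor loop and the per-row existence check (a sketch only; input_experiments, exp_id and exp_date are hypothetical stand-ins for whatever CurExperiments selects from):
    -- insert only the experiment IDs that are not already present
    insert into experiments
    (id, experiment_date)
    select src.exp_id, src.exp_date
    from input_experiments src
    where not exists (select 1
    from experiments e
    where e.id = src.exp_id);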

    Except that rownum is assigned after the result set
    is computed, so the whole table will have to be
    scanned.
    Really?
    SQL> explain plan for select * from i;
    Explained.
    SQL> select * from table( dbms_xplan.display );
    PLAN_TABLE_OUTPUT
    Plan hash value: 1766854993
    | Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |      |   910K|  4443K|   630   (3)| 00:00:08 |
    |   1 |  TABLE ACCESS FULL| I    |   910K|  4443K|   630   (3)| 00:00:08 |
    8 rows selected.
    SQL> explain plan for select * from i where rownum=1;
    Explained.
    SQL> select * from table( dbms_xplan.display );
    PLAN_TABLE_OUTPUT
    Plan hash value: 2766403234
    | Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |     5 |     2   (0)| 00:00:01 |
    |*  1 |  COUNT STOPKEY     |      |       |       |            |          |
    |   2 |   TABLE ACCESS FULL| I    |     1 |     5 |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter(ROWNUM=1)
    14 rows selected.

  • Help with table scan

    I have a problem with full table scans that make very slow the performance of a report.
    The test case is below. It looks like, when the column is queried directly from the table, the index is used. If I use the same select against the view, then I get a table scan.
    I would appreciate any idea on how to optimize it.
    Thanks a lot for the help.
    mj
    <pre>
    create table test1 (id1 number , id2 number, id3 number, col1 varchar(10),col2 varchar(50), col3 varchar(100));
    create table test2 (id4 number , id5 number, id6 number, col4 varchar(10),col5 varchar(50), col6 varchar(100));
    ALTER TABLE test1 ADD CONSTRAINT PK_test1 PRIMARY KEY(ID1) USING INDEX REVERSE;
    create index index1 on test1(ID2);
    create index index2 on test1(ID3,col2 );
    ALTER TABLE test2 ADD CONSTRAINT PK_test2 PRIMARY KEY(ID4) USING INDEX REVERSE;
    create or replace view test_view as select t1.*,
    case (select t2.id4 from test2 t2 where t1.id2 = t2.id5 and t2.id6 = -1)
    when t1.id2 then t1.id3
    else t1.id2
    end as main_id
    from test1 t1 ;
    create or replace view test_view2 as select * from test_view; --(required by security levels)
    select * from test1 where id2 =1000;
    select * from test_view where id2 = 1000;
    select * from test_view2 where id2 = 1000;
    SQL> select * from test_view where id2 = 1000;
    Elapsed: 00:00:00.00
    Execution Plan
    Plan hash value: 1970977999
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 125 | 1 (0)| 00:00:01 |
    |* 1 | TABLE ACCESS FULL | TEST2 | 1 | 39 | 2 (0)| 00:00:01 |
    | 2 | TABLE ACCESS BY INDEX ROWID| TEST1 | 1 | 125 | 1 (0)| 00:00:01 |
    |* 3 | INDEX RANGE SCAN | INDEX1 | 1 | | 1 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    1 - filter("T2"."ID5"=:B1 AND "T2"."ID6"=(-1))
    3 - access("T1"."ID2"=1000)
    SQL> select * from test_view where main_id = 1000;
    Elapsed: 00:00:00.03
    Execution Plan
    Plan hash value: 3806368241
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 125 | 4 (0)| 00:00:01 |
    |* 1 | TABLE ACCESS FULL | TEST2 | 1 | 39 | 2 (0)| 00:00:01 |
    |* 2 | FILTER | | | | | |
    | 3 | TABLE ACCESS FULL| TEST1 | 1 | 125 | 2 (0)| 00:00:01 |
    |* 4 | TABLE ACCESS FULL| TEST2 | 1 | 39 | 2 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    1 - filter("T2"."ID5"=:B1 AND "T2"."ID6"=(-1))
    2 - filter(CASE WHEN "T1"."ID2"= (SELECT /*+ */ "T2"."ID4" FROM
    MJ42."TEST2" "T2" WHERE "T2"."ID5"=:B1 AND "T2"."ID6"=(-1)) THEN
    "T1"."ID3" ELSE "T1"."ID2" END =1000)
    4 - filter("T2"."ID5"=:B1 AND "T2"."ID6"=(-1))
    SQL>
    </pre>

    If you think about what the two queries are doing, it is easy to see why the first uses an index and the second does not.
    Your first query:
    SELECT * FROM test_view WHERE id2 = 1000
    explicitly uses an indexed column from test1 in the predicate. Oracle can use the index to identify the correct row from test1. Having found that single row in test1, it uses the full scan of test2 to resolve the case statement.
    Your second query:
    SELECT * FROM test_view WHERE main_id = 1000
    uses the result of the case statement as the predicate. Oracle has no way of determining which row from test1 to use initially, so it must full scan both tables.
    John

  • Preventing Discoverer using Full Table Scans with Decode in a View

    Hi Forum,
    Hope you can help; this involves a performance issue when creating a report/query in Discoverer.
    I have a Discoverer report that currently takes less than 5 seconds to run. After I add a condition to bring back Batch Status = 'Posted', we cancelled the query after it had been running for 20 minutes, as this is way too long. If I remove the condition, the query time goes back to less than 5 seconds. Changing the condition to Batch Status = 'Unposted' returns the query in seconds.
    I've been doing some digging and have found the database view that is linked to the Journal Batches folder in Discoverer. See the end of this post.
    I think the problem is with the column using DECODE. When querying the column in TOAD, the value 'P' is returned, but in Discoverer the condition is done on the value 'Posted'. I'm not too sure how DECODE works, but I think this could be causing some sort of issue with full table scans.
    Any idea how we get around this?
    SELECT
    JOURNAL_BATCH1.JE_BATCH_ID,
    JOURNAL_BATCH1.NAME,
    JOURNAL_BATCH1.SET_OF_BOOKS_ID,
    GL_SET_OF_BOOKS.NAME,
    DECODE( JOURNAL_BATCH1.STATUS,
    '+', 'Unable to validate or create CTA',
    '+*', 'Was unable to validate or create CTA',
    '-','Invalid or inactive rounding differences account in journal entry',
    '-*', 'Modified invalid or inactive rounding differences account in journal entry',
    '<', 'Showing sequence assignment failure',
    '<*', 'Was showing sequence assignment failure',
    '>', 'Showing cutoff rule violation',
    '>*', 'Was showing cutoff rule violation',
    'A', 'Journal batch failed funds reservation',
    'A*', 'Journal batch previously failed funds reservation',
    'AU', 'Showing batch with unopened period',
    'B', 'Showing batch control total violation',
    'B*', 'Was showing batch control total violation',
    'BF', 'Showing batch with frozen or inactive budget',
    'BU', 'Showing batch with unopened budget year',
    'C', 'Showing unopened reporting period',
    'C*', 'Was showing unopened reporting period',
    'D', 'Selected for posting to an unopened period',
    'D*', 'Was selected for posting to an unopened period',
    'E', 'Showing no journal entries for this batch',
    'E*', 'Was showing no journal entries for this batch',
    'EU', 'Showing batch with unopened encumbrance year',
    'F', 'Showing unopened reporting encumbrance year',
    'F*', 'Was showing unopened reporting encumbrance year',
    'G', 'Showing journal entry with invalid or inactive suspense account',
    'G*', 'Was showing journal entry with invalid or inactive suspense account',
    'H', 'Showing encumbrance journal entry with invalid or inactive reserve account',
    'H*', 'Was showing encumbrance journal entry with invalid or inactive reserve account',
    'I', 'In the process of being posted',
    'J', 'Showing journal control total violation',
    'J*', 'Was showing journal control total violation',
    'K', 'Showing unbalanced intercompany journal entry',
    'K*', 'Was showing unbalanced intercompany journal entry',
    'L', 'Showing unbalanced journal entry by account category',
    'L*', 'Was showing unbalanced journal entry by account category',
    'M', 'Showing multiple problems preventing posting of batch',
    'M*', 'Was showing multiple problems preventing posting of batch',
    'N', 'Journal produced error during intercompany balance processing',
    'N*', 'Journal produced error during intercompany balance processing',
    'O', 'Unable to convert amounts into reporting currency',
    'O*', 'Was unable to convert amounts into reporting currency',
    'P', 'Posted',
    'Q', 'Showing untaxed journal entry',
    'Q*', 'Was showing untaxed journal entry',
    'R', 'Showing unbalanced encumbrance entry without reserve account',
    'R*', 'Was showing unbalanced encumbrance entry without reserve account',
    'S', 'Already selected for posting',
    'T', 'Showing invalid period and conversion information for this batch',
    'T*', 'Was showing invalid period and conversion information for this batch',
    'U', 'Unposted',
    'V', 'Journal batch is unapproved',
    'V*', 'Journal batch was unapproved',
    'W', 'Showing an encumbrance journal entry with no encumbrance type',
    'W*', 'Was showing an encumbrance journal entry with no encumbrance type',
    'X', 'Showing an unbalanced journal entry but suspense not allowed',
    'X*', 'Was showing an unbalanced journal entry but suspense not allowed',
    'Z', 'Showing invalid journal entry lines or no journal entry lines',
    'Z*', 'Was showing invalid journal entry lines or no journal entry lines', NULL ),
    DECODE( JOURNAL_BATCH1.ACTUAL_FLAG, 'A', 'Actual', 'B', 'Budget', 'E', 'Encumbrance', NULL ),
    JOURNAL_BATCH1.DEFAULT_PERIOD_NAME,
    JOURNAL_BATCH1.POSTED_DATE,
    JOURNAL_BATCH1.DATE_CREATED,
    JOURNAL_BATCH1.DESCRIPTION,
    DECODE( JOURNAL_BATCH1.AVERAGE_JOURNAL_FLAG, 'N', 'Standard', 'Y', 'Average', NULL ),
    DECODE( JOURNAL_BATCH1.BUDGETARY_CONTROL_STATUS, 'F', 'Failed', 'I', 'In Process', 'N', 'N/A', 'P', 'Passed', 'R', 'Required', NULL ),
    DECODE( JOURNAL_BATCH1.APPROVAL_STATUS_CODE, 'A', 'Approved', 'I', 'In Process', 'J', 'Rejected', 'R', 'Required', 'V','Validation Failed','Z', 'N/A',NULL ),
    JOURNAL_BATCH1.CONTROL_TOTAL,
    JOURNAL_BATCH1.RUNNING_TOTAL_DR,
    JOURNAL_BATCH1.RUNNING_TOTAL_CR,
    JOURNAL_BATCH1.RUNNING_TOTAL_ACCOUNTED_DR,
    JOURNAL_BATCH1.RUNNING_TOTAL_ACCOUNTED_CR,
    JOURNAL_BATCH1.PARENT_JE_BATCH_ID,
    JOURNAL_BATCH2.NAME
    FROM
    GL_JE_BATCHES JOURNAL_BATCH1,
    GL_JE_BATCHES JOURNAL_BATCH2,
    GL_SETS_OF_BOOKS
    GL_SET_OF_BOOKS
    WHERE
    JOURNAL_BATCH1.PARENT_JE_BATCH_ID = JOURNAL_BATCH2.JE_BATCH_ID (+) AND
    JOURNAL_BATCH1.SET_OF_BOOKS_ID = GL_SET_OF_BOOKS.SET_OF_BOOKS_ID AND
    GL_SECURITY_PKG.VALIDATE_ACCESS( JOURNAL_BATCH1.SET_OF_BOOKS_ID ) = 'TRUE' WITH READ ONLY
    Thanks,
    Lance
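
    One approach that sometimes helps in this situation (a sketch only, not tested against this view): expose the raw status code alongside the decoded label, so the Discoverer condition can be STATUS = 'P' rather than a comparison against the decoded text. Whether this removes the full scan still depends on an index existing on STATUS, which is an assumption:
    SELECT JOURNAL_BATCH1.JE_BATCH_ID,
    JOURNAL_BATCH1.STATUS AS STATUS_CODE, -- condition on this raw code in Discoverer
    DECODE( JOURNAL_BATCH1.STATUS, 'P', 'Posted', 'U', 'Unposted', /* ... */ NULL ) AS STATUS_DESC
    FROM GL_JE_BATCHES JOURNAL_BATCH1
    WHERE JOURNAL_BATCH1.STATUS = 'P';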

    Discoverer created its own SQL.
    Please see below the SQL Inspector Plan:
    Before Condition
    SELECT STATEMENT
    SORT GROUP BY
    VIEW SYS
    SORT GROUP BY
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    TABLE ACCESS BY INDEX ROWID GL.GL_CODE_COMBINATIONS
    AND-EQUAL
    INDEX RANGE SCAN GL.GL_CODE_COMBINATIONS_N2
    INDEX RANGE SCAN GL.GL_CODE_COMBINATIONS_N1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUES_N1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUE_SETS
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUE_SETS_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES_TL
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUES_TL_U1
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUE_NORM_HIER_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_LINES
    INDEX RANGE SCAN GL.GL_JE_LINES_N1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_HEADERS
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_DAILY_CONVERSION_TYPES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_SOURCES_TL
    INDEX UNIQUE SCAN GL.GL_JE_SOURCES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_CATEGORIES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    INDEX UNIQUE SCAN GL.GL_BUDGET_VERSIONS_U1
    INDEX UNIQUE SCAN GL.GL_ENCUMBRANCE_TYPES_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_BATCHES
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_PERIODS
    INDEX RANGE SCAN GL.GL_PERIODS_U1
    After Condition
    SELECT STATEMENT
    SORT GROUP BY
    VIEW SYS
    SORT GROUP BY
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    TABLE ACCESS FULL GL.GL_JE_BATCHES
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_HEADERS
    INDEX RANGE SCAN GL.GL_JE_HEADERS_N1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    INDEX UNIQUE SCAN GL.GL_ENCUMBRANCE_TYPES_U1
    INDEX UNIQUE SCAN GL.GL_DAILY_CONVERSION_TYPES_U1
    INDEX UNIQUE SCAN GL.GL_BUDGET_VERSIONS_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_SOURCES_TL
    INDEX UNIQUE SCAN GL.GL_JE_SOURCES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_CATEGORIES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_LINES
    INDEX RANGE SCAN GL.GL_JE_LINES_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    TABLE ACCESS BY INDEX ROWID GL.GL_CODE_COMBINATIONS
    INDEX UNIQUE SCAN GL.GL_CODE_COMBINATIONS_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_PERIODS
    INDEX RANGE SCAN GL.GL_PERIODS_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUES_N1
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUE_NORM_HIER_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES_TL
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUES_TL_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUE_SETS
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUE_SETS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    _________________________________

  • How to avoid full Table scan when using Rule based optimizer (Oracle817)

    1. We have an Oracle 8.1.7 DB, and the optimizer_mode is set to "RULE"
    2. There are three indexes on table cm_contract_supply, which is a large table having 28732830 Rows, and average row length 149 Bytes
    COLUMN_NAME INDEX_NAME
    PROGRESS_RECID XAK11CM_CONTRACT_SUPPLY
    COMPANY_CODE XIE1CM_CONTRACT_SUPPLY
    CONTRACT_NUMBER XIE1CM_CONTRACT_SUPPLY
    COUNTRY_CODE XIE1CM_CONTRACT_SUPPLY
    SUPPLY_TYPE_CODE XIE1CM_CONTRACT_SUPPLY
    VERSION_NUMBER XIE1CM_CONTRACT_SUPPLY
    CAMPAIGN_CODE XIF1290CM_CONTRACT_SUPPLY
    COMPANY_CODE XIF1290CM_CONTRACT_SUPPLY
    COUNTRY_CODE XIF1290CM_CONTRACT_SUPPLY
    SUPPLIER_BP_ID XIF801CONTRACT_SUPPLY
    COMMISSION_LETTER_CODE XIF803CONTRACT_SUPPLY
    COMPANY_CODE XIF803CONTRACT_SUPPLY
    COUNTRY_CODE XIF803CONTRACT_SUPPLY
    COMPANY_CODE XPKCM_CONTRACT_SUPPLY
    CONTRACT_NUMBER XPKCM_CONTRACT_SUPPLY
    COUNTRY_CODE XPKCM_CONTRACT_SUPPLY
    SUPPLY_SEQUENCE_NUMBER XPKCM_CONTRACT_SUPPLY
    VERSION_NUMBER XPKCM_CONTRACT_SUPPLY
    3. We are querying the table for a particular contract_number and version_number. We want to avoid full table scan.
    SELECT /*+ INDEX(XAK11CM_CONTRACT_SUPPLY) */
    rowid, pms.cm_contract_supply.*
    FROM pms.cm_contract_supply
    WHERE
    contract_number = '0000000000131710'
    AND version_number = 3;
    However, despite giving the hint, the query results are fetched after a full table scan.
    Execution Plan
    0 SELECT STATEMENT Optimizer=RULE (Cost=1182 Card=1 Bytes=742)
    1 0 TABLE ACCESS (FULL) OF 'CM_CONTRACT_SUPPLY' (Cost=1182 Card=1 Bytes=742)
    4. I have tried giving
    SELECT /*+ FIRST_ROWS + INDEX(XAK11CM_CONTRACT_SUPPLY) */
    rowid, pms.cm_contract_supply.*
    FROM pms.cm_contract_supply
    WHERE
    contract_number = '0000000000131710'
    AND version_number = 3;
    and
    SELECT /*+ CHOOSE + INDEX(XAK11CM_CONTRACT_SUPPLY) */
    rowid, pms.cm_contract_supply.*
    FROM pms.cm_contract_supply
    WHERE
    contract_number = '0000000000131710'
    AND version_number = 3;
    But it does not work.
    Is there some way, without changing the optimizer mode and without creating an additional index, that we can make it use an index instead of a full table scan?
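
    For what it is worth (a sketch, not verified against this schema): the INDEX hint is ignored unless its first argument is the table name or alias used in the FROM clause; naming only the index is not enough. Something like:
    SELECT /*+ INDEX(cs XPKCM_CONTRACT_SUPPLY) */
    rowid, cs.*
    FROM pms.cm_contract_supply cs
    WHERE cs.contract_number = '0000000000131710'
    AND cs.version_number = 3;
    Even with valid hint syntax, whether the index can actually help depends on its leading columns matching the predicate.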

    David,
    Here is my test on a Oracle 10g database.
    SQL> create table mytable as select * from all_tables;
    Table created.
    SQL> set autot traceonly
    SQL> alter session set optimizer_mode = choose;
    Session altered.
    SQL> select count(*) from mytable;
    Execution Plan
       0      SELECT STATEMENT Optimizer=CHOOSE
       1    0   SORT (AGGREGATE)
       2    1     TABLE ACCESS (FULL) OF 'MYTABLE' (TABLE)
    Statistics
              1  recursive calls
              0  db block gets
             29  consistent gets
              0  physical reads
              0  redo size
            223  bytes sent via SQL*Net to client
            276  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL> analyze table mytable compute statistics;
    Table analyzed.
    SQL>  select count(*) from mytable
      2  ;
    Execution Plan
       0      SELECT STATEMENT Optimizer=CHOOSE (Cost=11 Card=1)
       1    0   SORT (AGGREGATE)
       2    1     TABLE ACCESS (FULL) OF 'MYTABLE' (TABLE) (Cost=11 Card=1
              788)
    Statistics
              1  recursive calls
              0  db block gets
             29  consistent gets
              0  physical reads
              0  redo size
            222  bytes sent via SQL*Net to client
            276  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL> disconnect
    Disconnected from Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - 64bit Production
    With the Partitioning, Oracle Label Security, OLAP and Data Mining options

  • How can i make the optimiser to skip this full table scan ??

    Hi,
    I am trying to tune the query below. I have checked all the possibilities to avoid the full table scan on vhd_calldesk_archive, but I am unable to find the predicate in the WHERE clause that is making the optimiser choose a full table scan on the vhd_calldesk_archive table, which is a very large one. How can I make the optimiser skip this full table scan?
    Please check the below sql script and explain plan ,
    SELECT a.call_id, a.entry_date,
    NVL (INITCAP (b.full_name), caller_name) AS caller_name,
    c.description AS org_desc, a.env_id, i.env_desc, a.appl_id,
    d.appl_desc, a.module_id, e.module_desc, a.call_type_id,
    f.call_type_desc, a.priority, a.upduserid,
    INITCAP (g.full_name) AS lastupdated_username, a.call_desc, h.mode_desc,
    a.received_time,a.assignment_team, a.status,
    ROUND (lcc.pkg_com.fn_datediff ('MI',
    a.entry_date,
    a.status_date
    )) AS elapsed_time,
    ROUND (lcc.pkg_com.fn_datediff ('MI',
    a.entry_date,
    a.status_date
    )) AS resolved_min,
    CASE
    WHEN a.orgid in (1,100,200) THEN a.orgid
    ELSE j.regionorgid
    END AS region
    ,(SELECT coalesce(MAX(upddate),a.upddate) FROM lcc.vhd_callstatus stat WHERE stat.call_id = a.call_id
    ) as stat_upddate
    ,(SELECT team_desc from lcc.vhd_teams t where t.team_id = a.assignment_team) as team_desc
    ,a.eta_date
    ,coalesce(a.caller_contact, b.telephone) AS caller_contact
    ,coalesce(a.caller_email, b.email) as email
    ,a.affected_users
    ,a.outage_time
    ,a.QA_DONE
    ,a.LAST_ACTION_TEAM
    ,a.LAST_ACTION_USER
    ,INITCAP (k.full_name) AS last_action_username
    ,a.last_action_date
    ,l.team_desc as last_action_teamdesc
    ,a.refid
    ,INITCAP (lu.full_name) AS logged_name
    ,a.pmreview
    ,a.status as main_status
    FROM lcc.vhd_calldesk_archive a
    LEFT OUTER JOIN lcc.lcc_userinfo_details b ON b.user_name = a.caller_id
    INNER JOIN lcc.com_organization c ON c.code = a.orgid
    INNER JOIN lcc.vhd_applications d ON d.appl_id = a.appl_id
    INNER JOIN lcc.vhd_modules e ON e.appl_id = a.appl_id AND e.module_id = a.module_id
    INNER JOIN lcc.vhd_calltypes f ON f.call_type_id = a.call_type_id
    INNER JOIN lcc.com_rptorganization j ON j.orgid = a.orgid AND j.tree = 'HLPDK'
    LEFT OUTER JOIN lcc.lcc_userinfo_details g ON g.user_name = a.upduserid
    LEFT OUTER JOIN lcc.vhd_callmode h ON h.mode_id = a.mode_id
    LEFT OUTER JOIN lcc.vhd_environment i ON i.appl_id = a.appl_id AND i.env_id = a.env_id
    LEFT OUTER JOIN lcc.lcc_userinfo_details k ON k.user_name = a.last_action_user
    LEFT OUTER JOIN lcc.vhd_teams l ON l.team_id = a.last_action_user
    LEFT OUTER JOIN (select CALL_ID,upduserid FROM lcc.VHD_CALLDESK_HISTORY P where upddate
    in ( select min(upddate) from lcc.VHD_CALLDESK_HISTORY Q WHERE Q.CALL_ID = P.CALL_ID
    group by call_id)) ku
    ON ku.call_id = a.call_id
    LEFT OUTER JOIN lcc.lcc_userinfo_details lu ON NVL(ku.upduserid,A.upduserid) = lu.user_name;
    | Id | Operation | Name | Rows | Bytes | Cost |
    | 0 | SELECT STATEMENT | | 2104 | 3667K| 37696 |
    | 1 | UNION-ALL | | | | |
    | 2 | NESTED LOOPS OUTER | | 2103 | 3665K| 37683 |
    | 3 | VIEW | | 2103 | 3616K| 35580 |
    | 4 | NESTED LOOPS OUTER | | 2103 | 823K| 35580 |
    | 5 | NESTED LOOPS OUTER | | 2103 | 774K| 33477 |
    | 6 | NESTED LOOPS OUTER | | 2103 | 685K| 31374 |
    | 7 | NESTED LOOPS | | 2103 | 636K| 29271 |
    | 8 | NESTED LOOPS | | 2103 | 603K| 27168 |
    | 9 | NESTED LOOPS OUTER | | 2103 | 558K| 25065 |
    | 10 | NESTED LOOPS OUTER | | 2103 | 515K| 22962 |
    | 11 | NESTED LOOPS | | 2103 | 472K| 20859 |
    | 12 | NESTED LOOPS | | 2103 | 429K| 18756 |
    | 13 | NESTED LOOPS OUTER | | 4826 | 890K| 13930 |
    | 14 | NESTED LOOPS OUTER | | 4826 | 848K| 9104 |
    | 15 | NESTED LOOPS | | 4826 | 754K| 4278 |
    |* 16 | TABLE ACCESS FULL | COM_RPTORGANIZATION | 75 | 1050 | 3 |
    | 17 | TABLE ACCESS BY INDEX ROWID | VHD_CALLDESK | 64 | 9344 | 57 |
    |* 18 | INDEX RANGE SCAN | VHD_CALLDSK_ORGID | 2476 | | 7 |
    | 19 | VIEW PUSHED PREDICATE | | 1 | 20 | 1 |
    |* 20 | FILTER | | | | |
    | 21 | TABLE ACCESS BY INDEX ROWID | VHD_CALLDESK_HISTORY | 1 | 20 | 2 |
    |* 22 | INDEX RANGE SCAN | VHD_CALLDSK_HIST_CALLID_IDX | 1 | | 1 |
    |* 23 | FILTER | | | | |
    | 24 | SORT GROUP BY NOSORT | | 1 | 12 | 2 |
    | 25 | TABLE ACCESS BY INDEX ROWID | VHD_CALLDESK_HISTORY | 1 | 12 | 2 |
    |* 26 | INDEX RANGE SCAN | VHD_CALLDSK_HIST_CALLID_IDX | 1 | | 1 |
    | 27 | TABLE ACCESS BY INDEX ROWID | VHD_CALLMODE | 1 | 9 | 1 |
    |* 28 | INDEX UNIQUE SCAN | VHD_CALLMOD_MODID_PK | 1 | | |
    | 29 | TABLE ACCESS BY INDEX ROWID | VHD_APPLICATIONS | 1 | 20 | 1 |
    |* 30 | INDEX UNIQUE SCAN | VHD_APPL_APPLID_PK | 1 | | |
    | 31 | TABLE ACCESS BY INDEX ROWID | VHD_CALLTYPES | 1 | 21 | 1 |
    |* 32 | INDEX UNIQUE SCAN | VHD_CALLTYP_ID_PK | 1 | | |
    | 33 | TABLE ACCESS BY INDEX ROWID | VHD_TEAMS | 1 | 21 | 1 |
    |* 34 | INDEX UNIQUE SCAN | VHD_TEAMID_PK | 1 | | |
    | 35 | TABLE ACCESS BY INDEX ROWID | VHD_ENVIRONMENT | 1 | 21 | 1 |
    |* 36 | INDEX UNIQUE SCAN | VHD_ENV_APLENVID_PK | 1 | | |
    | 37 | TABLE ACCESS BY INDEX ROWID | VHD_MODULES | 1 | 22 | 1 |
    |* 38 | INDEX UNIQUE SCAN | VHD_MOD_APLMOD_ID_PK | 1 | | |
    | 39 | TABLE ACCESS BY INDEX ROWID | COM_ORGANIZATION | 1 | 16 | 1 |
    |* 40 | INDEX UNIQUE SCAN | COM_ORG_PK | 1 | | |
    | 41 | TABLE ACCESS BY INDEX ROWID | LCC_USERINFO_DETAILS | 1 | 24 |
    |* 42 | INDEX UNIQUE SCAN | LCCUSERINFOIND | 1 | | |
    | 43 | TABLE ACCESS BY INDEX ROWID | LCC_USERINFO_DETAILS | 1 | 43 |
    |* 44 | INDEX UNIQUE SCAN | LCCUSERINFOIND | 1 | | |
    | 45 | TABLE ACCESS BY INDEX ROWID | LCC_USERINFO_DETAILS | 1 | 24 | 1
    |* 46 | INDEX UNIQUE SCAN | LCCUSERINFOIND | 1 | | |
    | 47 | TABLE ACCESS BY INDEX ROWID | LCC_USERINFO_DETAILS | 1 | 24 | 1
    |* 48 | INDEX UNIQUE SCAN | LCCUSERINFOIND | 1 | | |
    | 49 | NESTED LOOPS OUTER | | 1 | 1785 | 13 |
    | 50 | VIEW | | 1 | 1761 | 12 |
    | 51 | NESTED LOOPS OUTER | | 1 | 1656 | 12 |
    | 52 | NESTED LOOPS OUTER | | 1 | 1632 | 11 |
    | 53 | NESTED LOOPS OUTER | | 1 | 1608 | 10 |
    | 54 | NESTED LOOPS | | 1 | 1565 | 9 |
    | 55 | NESTED LOOPS | | 1 | 1549 | 9 |
    | 56 | NESTED LOOPS | | 1 | 1535 | 9 |
    | 57 | NESTED LOOPS OUTER | | 1 | 1513 | 8 |
    | 58 | NESTED LOOPS OUTER | | 1 | 1492 | 7 |
    | 59 | NESTED LOOPS | | 1 | 1471 | 6 |
    | 60 | NESTED LOOPS | | 1 | 1450 | 5 |
    | 61 | NESTED LOOPS OUTER | | 1 | 1430 | 4 |
    | 62 | NESTED LOOPS OUTER | | 1 | 1421 | 3 |
    | 63 | TABLE ACCESS FULL | VHD_CALLDESK_ARCHIVE | 1 | 1401 | 2 |
    | 64 | VIEW PUSHED PREDICATE | | 1 | 20 | 1 |
    |* 65 | FILTER | | | | |
    | 66 | TABLE ACCESS BY INDEX ROWID | VHD_CALLDESK_HISTORY | 1 | 20 | 2 |
    |* 67 | INDEX RANGE SCAN | VHD_CALLDSK_HIST_CALLID_IDX | 1 | | 1 |
    |* 68 | FILTER | | | | |
    | 69 | SORT GROUP BY NOSORT | | 1 | 12 | 2 |
    | 70 | TABLE ACCESS BY INDEX ROWID| VHD_CALLDESK_HISTORY | 1 | 12 | 2 |
    |* 71 | INDEX RANGE SCAN | VHD_CALLDSK_HIST_CALLID_IDX | 1 | | 1 |
    | 72 | TABLE ACCESS BY INDEX ROWID | VHD_CALLMODE | 1 | 9 | 1 |
    |* 73 | INDEX UNIQUE SCAN | VHD_CALLMOD_MODID_PK | 1 | | |
    | 74 | TABLE ACCESS BY INDEX ROWID | VHD_APPLICATIONS | 1 | 20 | 1 |
    |* 75 | INDEX UNIQUE SCAN | VHD_APPL_APPLID_PK | 1 | | |
    | 76 | TABLE ACCESS BY INDEX ROWID | VHD_CALLTYPES | 1 | 21 | 1 |
    |* 77 | INDEX UNIQUE SCAN | VHD_CALLTYP_ID_PK | 1 | | |
    | 78 | TABLE ACCESS BY INDEX ROWID | VHD_TEAMS | 1 | 21 | 1 |
    |* 79 | INDEX UNIQUE SCAN | VHD_TEAMID_PK | 1 | | |
    | 80 | TABLE ACCESS BY INDEX ROWID | VHD_ENVIRONMENT | 1 | 21 | 1 |
    |* 81 | INDEX UNIQUE SCAN | VHD_ENV_APLENVID_PK | 1 | | |
    | 82 | TABLE ACCESS BY INDEX ROWID | VHD_MODULES | 1 | 22 | 1 |
    |* 83 | INDEX UNIQUE SCAN | VHD_MOD_APLMOD_ID_PK | 1 | | |
    | 84 | TABLE ACCESS BY INDEX ROWID | COM_RPTORGANIZATION | 1 | 14 | |
    |* 85 | INDEX UNIQUE SCAN | COM_RPTORG_PK | 1 | | |
    | 86 | TABLE ACCESS BY INDEX ROWID | COM_ORGANIZATION | 1 | 16 | |
    |* 87 | INDEX UNIQUE SCAN | COM_ORG_PK | 1 | | |
    | 88 | TABLE ACCESS BY INDEX ROWID | LCC_USERINFO_DETAILS | 1 | 43 |
    |* 89 | INDEX UNIQUE SCAN | LCCUSERINFOIND | 1 | | |
    | 90 | TABLE ACCESS BY INDEX ROWID | LCC_USERINFO_DETAILS | 1 | 24 |
    |* 91 | INDEX UNIQUE SCAN | LCCUSERINFOIND | 1 | | |
    | 92 | TABLE ACCESS BY INDEX ROWID | LCC_USERINFO_DETAILS | 1 | 24 | 1
    |* 93 | INDEX UNIQUE SCAN | LCCUSERINFOIND | 1 | | |
    | 94 | TABLE ACCESS BY INDEX ROWID | LCC_USERINFO_DETAILS | 1 | 24 | 1
    |* 95 | INDEX UNIQUE SCAN | LCCUSERINFOIND | 1 | | |
    Predicate Information (identified by operation id):
    16 - filter("J"."TREE"='HLPDK')
    18 - access("J"."ORGID"="A"."ORGID")
    20 - filter( EXISTS (SELECT /*+ */ 0 FROM "LCC"."VHD_CALLDESK_HISTORY" "Q" WHERE "Q"."CALL_ID"=:B1 GROUP BY
    "Q"."CALL_ID" HAVING MIN("Q"."UPDDATE")=:B2))
    22 - access("SYS_ALIAS_2"."CALL_ID"="A"."CALL_ID")
    23 - filter(MIN("Q"."UPDDATE")=:B1)
    26 - access("Q"."CALL_ID"=:B1)
    28 - access("H"."MODE_ID"(+)="A"."MODE_ID")
    30 - access("D"."APPL_ID"="A"."APPL_ID")
    32 - access("F"."CALL_TYPE_ID"="A"."CALL_TYPE_ID")
    34 - access("L"."TEAM_ID"(+)="A"."LAST_ACTION_TEAM")
    36 - access("I"."APPL_ID"(+)="A"."APPL_ID" AND "I"."ENV_ID"(+)="A"."ENV_ID")
    38 - access("E"."APPL_ID"="A"."APPL_ID" AND "E"."MODULE_ID"="A"."MODULE_ID")
    40 - access("C"."CODE"="A"."ORGID")
    42 - access("K"."USER_NAME"(+)="A"."LAST_ACTION_USER")
    44 - access("B"."USER_NAME"(+)="A"."CALLER_ID")
    46 - access("G"."USER_NAME"(+)="A"."UPDUSERID")
    48 - access("LU"."USER_NAME"(+)=NVL("SYS_ALIAS_4"."UPDUSERID_162","SYS_ALIAS_4"."UPDUSERID_25"))
    65 - filter( EXISTS (SELECT /*+ */ 0 FROM "LCC"."VHD_CALLDESK_HISTORY" "Q" WHERE "Q"."CALL_ID"=:B1 GROUP BY
    "Q"."CALL_ID" HAVING MIN("Q"."UPDDATE")=:B2))
    67 - access("SYS_ALIAS_2"."CALL_ID"="SYS_ALIAS_1"."CALL_ID")
    68 - filter(MIN("Q"."UPDDATE")=:B1)
    71 - access("Q"."CALL_ID"=:B1)
    73 - access("H"."MODE_ID"(+)="SYS_ALIAS_1"."MODE_ID")
    75 - access("D"."APPL_ID"="SYS_ALIAS_1"."APPL_ID")
    77 - access("F"."CALL_TYPE_ID"="SYS_ALIAS_1"."CALL_TYPE_ID")
    79 - access("L"."TEAM_ID"(+)=TO_NUMBER("SYS_ALIAS_1"."LAST_ACTION_USER"))
    81 - access("I"."APPL_ID"(+)="SYS_ALIAS_1"."APPL_ID" AND "I"."ENV_ID"(+)="SYS_ALIAS_1"."ENV_ID")
    83 - access("E"."APPL_ID"="SYS_ALIAS_1"."APPL_ID" AND "E"."MODULE_ID"="SYS_ALIAS_1"."MODULE_ID")
    85 - access("SYS_ALIAS_1"."ORGID"="J"."ORGID" AND "J"."TREE"='HLPDK')
    87 - access("C"."CODE"="SYS_ALIAS_1"."ORGID")
    89 - access("B"."USER_NAME"(+)="SYS_ALIAS_1"."CALLER_ID")
    91 - access("SYS_ALIAS_1"."UPDUSERID"="G"."USER_NAME"(+))
    93 - access("K"."USER_NAME"(+)="SYS_ALIAS_1"."LAST_ACTION_USER")
    95 - access("LU"."USER_NAME"(+)=NVL("SYS_ALIAS_3"."UPDUSERID_162","SYS_ALIAS_3"."UPDUSERID_25"))
    Note: cpu costing is off

    I've tried to look through your SQL and changed it a bit. Of course, not tested :-)
    Your problem isn't the archive table! I tried to remove the 2 selects from the select-clause. Furthermore, you have a lot of nested loops in your explain plan, which is a performance killer. Try getting rid of them, perhaps using /*+ USE_HASH(?,?) */ (see the note after the rewritten query below).
    SELECT a.call_id, a.entry_date,
           NVL (INITCAP (b.full_name), caller_name) AS caller_name, c.description AS org_desc, a.env_id, i.env_desc, a.appl_id,
           d.appl_desc, a.module_id, e.module_desc, a.call_type_id, f.call_type_desc, a.priority, a.upduserid,
           INITCAP (g.full_name) AS lastupdated_username, a.call_desc, h.mode_desc, a.received_time, a.assignment_team, a.status,
           ROUND (lcc.pkg_com.fn_datediff ('MI', a.entry_date, a.status_date)) AS elapsed_time,
           ROUND (lcc.pkg_com.fn_datediff ('MI', a.entry_date, a.status_date)) AS resolved_min,
           CASE
              WHEN a.orgid IN (1, 100, 200)
                 THEN a.orgid
              ELSE j.regionorgid
           END AS region,
           COALESCE (stat.upddate, a.upddate) AS stat_upddate,
           t.team_desc, a.eta_date,
           COALESCE (a.caller_contact, b.telephone) AS caller_contact,
           COALESCE (a.caller_email, b.email) AS email, a.affected_users,
           a.outage_time, a.qa_done, a.last_action_team, a.last_action_user,
           INITCAP (k.full_name) AS last_action_username, a.last_action_date,
           l.team_desc AS last_action_teamdesc, a.refid,
           INITCAP (lu.full_name) AS logged_name, a.pmreview,
           a.status AS main_status
      FROM lcc.vhd_calldesk_archive a, lcc.lcc_userinfo_details b, lcc.com_organization c,
           lcc.vhd_applications d, lcc.vhd_modules e, lcc.vhd_calltypes f, lcc.com_rptorganization j,
           lcc.lcc_userinfo_details g, lcc.vhd_callmode h, lcc.vhd_environment i, lcc.lcc_userinfo_details k,
           lcc.vhd_teams l,
          (SELECT call_id, upduserid
           FROM lcc.vhd_calldesk_history p
           WHERE upddate IN (SELECT   MIN (upddate)
                             FROM lcc.vhd_calldesk_history q
                             WHERE q.call_id = p.call_id
                             GROUP BY call_id)) ku,
           lcc.lcc_userinfo_details lu,
    (SELECT call_id, MAX (upddate) AS upddate
           FROM lcc.vhd_callstatus
           GROUP BY call_id) stat,
           lcc.vhd_teams t
      WHERE a.caller_id        = b.user_name(+)
        AND a.orgid            = c.code
        AND a.appl_id          = d.appl_id
        AND a.appl_id          = e.appl_id
        AND a.module_id        = e.module_id
        AND a.call_type_id     = f.call_type_id
        AND a.orgid            = j.orgid
        AND j.tree             = 'HLPDK'
        AND a.upduserid        = g.user_name(+)
        AND a.mode_id          = h.mode_id(+)
        AND a.appl_id          = i.appl_id(+)
        AND a.env_id           = i.env_id(+)
        AND a.last_action_user = k.user_name(+)
        AND a.last_action_user = l.team_id(+)
        AND a.call_id          = ku.call_id(+)
        AND NVL (ku.upduserid, a.upduserid) = lu.user_name(+)
        AND a.call_id          = stat.call_id
        AND a.assignment_team  = t.team_id;
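
    As a side note on the USE_HASH hint mentioned above (a sketch only, reusing the aliases from this query): the hint takes the table aliases of the row sources you want hash-joined, for example:
    SELECT /*+ USE_HASH(a j) */ a.call_id, j.regionorgid
    FROM lcc.vhd_calldesk_archive a, lcc.com_rptorganization j
    WHERE j.orgid = a.orgid
    AND j.tree = 'HLPDK';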

  • Slow queries and full table scans in spite of context index

    I have defined a USER_DATASTORE, which uses a PL/SQL procedure to compile data from several tables. The master table has 1.3 million rows, and one of the fields being joined is a CLOB field.
    The resulting token table has 65,000 rows, which seems about right.
    If I query the token table for a word, such as "ORACLE" in the token_text field, I see that the token_count is 139. This query returns instantly.
    The query against the master table is very slow, taking about 15 minutes to return the 139 rows.
    Example query:
    select hnd from master_table where contains(myindex,'ORACLE',1) > 0;
    I've run a sql_trace on this query, and it shows full table scans on both the master table and the DR$MYINDEX$I table. Why is it doing this, and how can I fix it?

    After looking at the tuning FAQ, I can see that this is doing a functional lookup instead of an indexed lookup. But why, when the rows are not constrained by any structural query, and how can I get it to do an indexed lookup instead?
    Thanks in advance,
    Annie

  • Serial table scan with direct path read compared to db file scattered read

    Hi,
    The environment
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit
    8K block size
    db_file_multiblock_read_count is 128
    show sga
    Total System Global Area 1.6702E+10 bytes
    Fixed Size                  2219952 bytes
    Variable Size            7918846032 bytes
    Database Buffers         8724152320 bytes
    Redo Buffers               57090048 bytes
    16GB of SGA with 8GB of db buffer cache.
    -- database is built on Solid State Disks
    -- SQL trace and wait events
    DBMS_MONITOR.SESSION_TRACE_ENABLE ( waits=>true )
    -- The underlying table is called tdash. It has 1.7 Million rows based on data in all_objects. NO index
    TABLE_NAME                             Rows Table Size/MB      Used/MB    Free/MB
    TDASH                             1,729,204        15,242       15,186         56
    TABLE_NAME                     Allocated blocks Empty blocks Average space/KB Free list blocks
    TDASH                                 1,943,823        7,153              805                0
    Objectives
    To show that, for serial table scans, a database built on solid state disks (SSD) gains far less over magnetic disks (HDD) than it does for random reads with index scans.
    Approach
    We want to read the first 100 rows of tdash table randomly into buffer, taking account of wait events and wait times generated. The idea is that on SSD the wait times will be better compared to HDD but not that much given the serial nature of table scans.
    The code used
    ALTER SESSION SET TRACEFILE_IDENTIFIER = 'test_with_tdash_ssdtester_noindex';
    DECLARE
            type array is table of tdash%ROWTYPE index by binary_integer;
            l_data array;
            l_rec tdash%rowtype;
    BEGIN
            SELECT
                    a.*
                    ,RPAD('*',4000,'*') AS PADDING1
                    ,RPAD('*',4000,'*') AS PADDING2
            BULK COLLECT INTO
            l_data
            FROM ALL_OBJECTS a;
            DBMS_MONITOR.SESSION_TRACE_ENABLE ( waits=>true );
            FOR rs IN 1 .. 100
            LOOP
                    BEGIN
                            SELECT * INTO l_rec FROM tdash WHERE object_id = l_data(rs).object_id;
                    EXCEPTION
                      WHEN NO_DATA_FOUND THEN NULL;
                    END;
            END LOOP;
    END;
    /
    The server is rebooted prior to any tests.
    When run with defaults, the optimizer (although some attribute this to the execution engine) chooses direct path read into the PGA in preference to db file scattered read.
    With this choice it takes 6,520 seconds to complete the query. The results are shown below
    SQL ID: 78kxqdhk1ubvq
    Plan Hash: 1148949653
    SELECT *
    FROM
    TDASH WHERE OBJECT_ID = :B1
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.01       0.00          2         47          0           0
    Execute    100      0.00       0.00          1         51          0           0
    Fetch      100     10.88    6519.89  194142802  194831012          0         100
    total      201     10.90    6519.90  194142805  194831110          0         100
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 96  (SSDTESTER)   (recursive depth: 1)
    Rows     Row Source Operation
          1  TABLE ACCESS FULL TDASH (cr=1948310 pr=1941430 pw=0 time=0 us cost=526908 size=8091 card=1)
    Rows     Execution Plan
          0  SELECT STATEMENT   MODE: ALL_ROWS
          1   TABLE ACCESS   MODE: ANALYZED (FULL) OF 'TDASH' (TABLE)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      Disk file operations I/O                        3        0.00          0.00
      db file sequential read                         2        0.00          0.00
      direct path read                          1517504        0.05       6199.93
      asynch descriptor resize                      196        0.00          0.00
    DECLARE
            type array is table of tdash%ROWTYPE index by binary_integer;
            l_data array;
            l_rec tdash%rowtype;
    BEGIN
            SELECT
                    a.*
                    ,RPAD('*',4000,'*') AS PADDING1
                    ,RPAD('*',4000,'*') AS PADDING2
            BULK COLLECT INTO
            l_data
            FROM ALL_OBJECTS a;
            DBMS_MONITOR.SESSION_TRACE_ENABLE ( waits=>true );
            FOR rs IN 1 .. 100
            LOOP
                    BEGIN
                            SELECT * INTO l_rec FROM tdash WHERE object_id = l_data(rs).object_id;
                    EXCEPTION
                      WHEN NO_DATA_FOUND THEN NULL;
                    END;
            END LOOP;
    END;
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        0      0.00       0.00          0          0          0           0
    Execute      1      3.84       4.03        320      48666          0           1
    Fetch        0      0.00       0.00          0          0          0           0
    total        1      3.84       4.03        320      48666          0           1
    Misses in library cache during parse: 0
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 96  (SSDTESTER)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                       1        0.00          0.00
      SQL*Net message from client                     1        0.00          0.00
    SQL ID: 9babjv8yq8ru3
    Plan Hash: 0
    BEGIN DBMS_OUTPUT.GET_LINES(:LINES, :NUMLINES); END;
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           1
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.00       0.00          0          0          0           1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 96  (SSDTESTER)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                       1        0.00          0.00
      SQL*Net message from client                     1        0.00          0.00
    OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      2      3.84       4.03        320      48666          0           2
    Fetch        0      0.00       0.00          0          0          0           0
    total        3      3.84       4.03        320      48666          0           2
    Misses in library cache during parse: 0
    Misses in library cache during execute: 1
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                       2        0.00          0.00
      SQL*Net message from client                     2        0.00          0.00
      log file sync                                   1        0.00          0.00
    OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        9      0.01       0.00          2         47          0           0
    Execute    129      0.01       0.00          1         52          2           1
    Fetch      140     10.88    6519.89  194142805  194831110          0         130
    total      278     10.91    6519.91  194142808  194831209          2         131
    Misses in library cache during parse: 9
    Misses in library cache during execute: 8
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file sequential read                         5        0.00          0.00
      Disk file operations I/O                        3        0.00          0.00
      direct path read                          1517504        0.05       6199.93
      asynch descriptor resize                      196        0.00          0.00
      102  user  SQL statements in session.
       29  internal SQL statements in session.
      131  SQL statements in session.
        1  statement EXPLAINed in this session.
    Trace file: mydb_ora_16394_test_with_tdash_ssdtester_noindex.trc
    Trace file compatibility: 11.1.0.7
    Sort options: default
           1  session in tracefile.
         102  user  SQL statements in trace file.
          29  internal SQL statements in trace file.
         131  SQL statements in trace file.
          11  unique SQL statements in trace file.
           1  SQL statements EXPLAINed using schema:
               ssdtester.plan_table
                 Schema was specified.
                 Table was created.
                 Table was dropped.
    1531657  lines in trace file.
         6520  elapsed seconds in trace file.
    I then force the query not to use direct path read by invoking:
    ALTER SESSION SET EVENTS '10949 trace name context forever, level 1';  -- no direct path read
    With this event set, the optimizer uses db file scattered read predominantly and the query takes 4,299 seconds to finish, which is around 34% faster than with direct path read (the default).
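    As a side note not taken from the trace above: on 11g a similar effect can often be obtained with the undocumented parameter _serial_direct_read. This is an assumption about a hidden setting, so treat it as a sketch for a test system only, and note that the accepted values differ between patch levels (TRUE/FALSE on early 11.2, NEVER/AUTO/ALWAYS on later ones).
        -- Sketch (assumption, hidden parameter, test systems only):
        ALTER SESSION SET "_serial_direct_read" = FALSE;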
    The report is shown below
    SQL ID: 78kxqdhk1ubvq
    Plan Hash: 1148949653
    SELECT *
    FROM
    TDASH WHERE OBJECT_ID = :B1
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          2         47          0           0
    Execute    100      0.00       0.00          2         51          0           0
    Fetch      100    143.44    4298.87  110348670  194490912          0         100
    total      201    143.45    4298.88  110348674  194491010          0         100
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 96  (SSDTESTER)   (recursive depth: 1)
    Rows     Row Source Operation
          1  TABLE ACCESS FULL TDASH (cr=1944909 pr=1941430 pw=0 time=0 us cost=526908 size=8091 card=1)
    Rows     Execution Plan
          0  SELECT STATEMENT   MODE: ALL_ROWS
          1   TABLE ACCESS   MODE: ANALYZED (FULL) OF 'TDASH' (TABLE)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      Disk file operations I/O                        3        0.00          0.00
      db file sequential read                    129759        0.01         17.50
      db file scattered read                    1218651        0.05       3770.02
      latch: object queue header operation            2        0.00          0.00
    DECLARE
            type array is table of tdash%ROWTYPE index by binary_integer;
            l_data array;
            l_rec tdash%rowtype;
    BEGIN
            SELECT
                    a.*
                    ,RPAD('*',4000,'*') AS PADDING1
                    ,RPAD('*',4000,'*') AS PADDING2
            BULK COLLECT INTO
            l_data
            FROM ALL_OBJECTS a;
            DBMS_MONITOR.SESSION_TRACE_ENABLE ( waits=>true );
            FOR rs IN 1 .. 100
            LOOP
                    BEGIN
                            SELECT * INTO l_rec FROM tdash WHERE object_id = l_data(rs).object_id;
                    EXCEPTION
                      WHEN NO_DATA_FOUND THEN NULL;
                    END;
            END LOOP;
    END;
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        0      0.00       0.00          0          0          0           0
    Execute      1      3.92       4.07        319      48625          0           1
    Fetch        0      0.00       0.00          0          0          0           0
    total        1      3.92       4.07        319      48625          0           1
    Misses in library cache during parse: 0
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 96  (SSDTESTER)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                       1        0.00          0.00
      SQL*Net message from client                     1        0.00          0.00
    SQL ID: 9babjv8yq8ru3
    Plan Hash: 0
    BEGIN DBMS_OUTPUT.GET_LINES(:LINES, :NUMLINES); END;
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           1
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.00       0.00          0          0          0           1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 96  (SSDTESTER)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                       1        0.00          0.00
      SQL*Net message from client                     1        0.00          0.00
    OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      2      3.92       4.07        319      48625          0           2
    Fetch        0      0.00       0.00          0          0          0           0
    total        3      3.92       4.07        319      48625          0           2
    Misses in library cache during parse: 0
    Misses in library cache during execute: 1
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                       2        0.00          0.00
      SQL*Net message from client                     2        0.00          0.00
      log file sync                                   1        0.00          0.00
    OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        9      0.01       0.00          2         47          0           0
    Execute    129      0.00       0.00          2         52          2           1
    Fetch      140    143.44    4298.87  110348674  194491010          0         130
    total      278    143.46    4298.88  110348678  194491109          2         131
    Misses in library cache during parse: 9
    Misses in library cache during execute: 8
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file sequential read                    129763        0.01         17.50
      Disk file operations I/O                        3        0.00          0.00
      db file scattered read                    1218651        0.05       3770.02
      latch: object queue header operation            2        0.00          0.00
      102  user  SQL statements in session.
       29  internal SQL statements in session.
      131  SQL statements in session.
        1  statement EXPLAINed in this session.
    Trace file: mydb_ora_26796_test_with_tdash_ssdtester_noindex_NDPR.trc
    Trace file compatibility: 11.1.0.7
    Sort options: default
           1  session in tracefile.
         102  user  SQL statements in trace file.
          29  internal SQL statements in trace file.
         131  SQL statements in trace file.
          11  unique SQL statements in trace file.
           1  SQL statements EXPLAINed using schema:
               ssdtester.plan_table
                 Schema was specified.
                 Table was created.
                 Table was dropped.
    1357958  lines in trace file.
         4299  elapsed seconds in trace file.
    I note that there are 1,517,504 direct path read waits with a total wait time of nearly 6,200 seconds. In comparison, with direct path read disabled there are 1,218,651 db file scattered read waits with a total wait time of 3,770 seconds. My understanding is that direct path read can use single- or multi-block reads into the PGA, whereas db file scattered reads do multi-block reads into multiple discontiguous SGA buffers. So is it possible that, given the higher number of direct path waits, the engine cannot always do multi-block reads (contiguous buffers within the PGA) and hence reverts to single-block reads, which results in more calls and more waits?
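    One way to confirm which read path a run actually used — a sketch I am adding here, not part of the original trace — is to compare the session statistics 'physical reads direct' and 'physical reads cache' before and after the test:
        -- Sketch: direct-path vs buffered physical reads for the current session
        SELECT sn.name, st.value
        FROM   v$mystat st, v$statname sn
        WHERE  st.statistic# = sn.statistic#
        AND    sn.name IN ('physical reads direct', 'physical reads cache');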
    Appreciate any advise and apologies for being long winded.
    Thanks,
    Mich

    Hi Charles,
    I am doing your tests for t1 table using my server.
    Just to clarify my environment is:
    I did the whole of this test on my server. My server has I7-980 HEX core processor with 24GB of RAM and 1 TB of HDD SATA II for test/scratch backup and archive. The operating system is RHES 5.2 64-bit installed on a 120GB OCZ Vertex 3 Series SATA III 2.5-inch Solid State Drive.
    Oracle version installed was 11g Enterprise Edition Release 11.2.0.1.0 -64bit. The binaries were created on HDD. Oracle itself was configured with 16GB of SGA, of which 7.5GB was allocated to Variable Size and 8GB to Database Buffers.
    For Oracle tablespaces including SYS, SYSTEM, SYSAUX, TEMPORARY, UNDO and redo logs, I used file systems on a 240GB OCZ Vertex 3 Series SATA III 2.5-inch Solid State Drive. With 4K Random Read at 53,500 IOPS and 4K Random Write at 56,000 IOPS (manufacturer’s figures), this drive is probably one of the fastest commodity SSDs using NAND flash memory with Multi-Level Cell (MLC). Now my T1 table, created as per your script, has the following rows and blocks (8K block size):
    SELECT
      NUM_ROWS,
      BLOCKS
    FROM
      USER_TABLES
    WHERE
      TABLE_NAME='T1';
      NUM_ROWS     BLOCKS
      12000000     178952
    which is pretty much identical to yours.
    Then I run the query as brelow
    set timing on
    ALTER SESSION SET TRACEFILE_IDENTIFIER = 'test_bed_T1';
    ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 8';
    SELECT
            COUNT(*)
    FROM
            T1
    WHERE
            RN=1;
    which gives
      COUNT(*)
         60000
    Elapsed: 00:00:05.29
    tkprof output shows
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        2      0.02       5.28     178292     178299          0           1
    total        4      0.02       5.28     178292     178299          0           1
    Compared to yours:
    Fetch        2      0.60       4.10     178493     178498          0           1
    It appears to me that my CPU utilisation is an order of magnitude better but my elapsed time is worse!
    Now, the way I see it, elapsed time = CPU time + wait time. Further down I have:
    Rows     Row Source Operation
          1  SORT AGGREGATE (cr=178299 pr=178292 pw=0 time=0 us)
      60000   TABLE ACCESS FULL T1 (cr=178299 pr=178292 pw=0 time=42216 us cost=48697 size=240000 card=60000)
    Rows     Execution Plan
          0  SELECT STATEMENT   MODE: ALL_ROWS
          1   SORT (AGGREGATE)
      60000    TABLE ACCESS   MODE: ANALYZED (FULL) OF 'T1' (TABLE)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                       3        0.00          0.00
      SQL*Net message from client                     3        0.00          0.00
      Disk file operations I/O                        3        0.00          0.00
      direct path read                             1405        0.00          4.68
    Your direct path reads are
      direct path read                             1404        0.01          3.40
    which indicates to me that your disks are faster than mine, whereas my CPU seems to be faster than yours.
    With db file scattered read I get
    Elapsed: 00:00:06.95
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        2      1.22       6.93     178293     178315          0           1
    total        4      1.22       6.94     178293     178315          0           1
    Rows     Row Source Operation
          1  SORT AGGREGATE (cr=178315 pr=178293 pw=0 time=0 us)
      60000   TABLE ACCESS FULL T1 (cr=178315 pr=178293 pw=0 time=41832 us cost=48697 size=240000 card=60000)
    Rows     Execution Plan
          0  SELECT STATEMENT   MODE: ALL_ROWS
          1   SORT (AGGREGATE)
      60000    TABLE ACCESS   MODE: ANALYZED (FULL) OF 'T1' (TABLE)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                       2        0.00          0.00
      Disk file operations I/O                        3        0.00          0.00
      db file sequential read                         1        0.00          0.00
      db file scattered read                       1414        0.00          5.36
      SQL*Net message from client                     2        0.00          0.00
    compared to your
      db file scattered read                       1415        0.00          4.16
    On the face of it, this test of mine shows about a 21% improvement with direct path read compared to db file scattered read. So now I can go back and revisit my original test results:
    First default with direct path read
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.01       0.00          2         47          0           0
    Execute    100      0.00       0.00          1         51          0           0
    Fetch      100     10.88    6519.89  194142802  194831012          0         100
    total      201     10.90    6519.90  194142805  194831110          0         100
    CPU ~ 11 sec, elapsed ~ 6520 sec
    wait stats
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      direct path read                          1517504        0.05       6199.93
    roughly 0.004 sec for each I/O.
    Now with db file scattered read I get:
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          2         47          0           0
    Execute    100      0.00       0.00          2         51          0           0
    Fetch      100    143.44    4298.87  110348670  194490912          0         100
    total      201    143.45    4298.88  110348674  194491010          0         100
    CPU ~ 143 sec, elapsed ~ 4299 sec
    and waits:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file sequential read                    129759        0.01         17.50
      db file scattered read                    1218651        0.05       3770.02
    roughly 17.5/129759 = .00013 sec for each single-block I/O and 3770.02/1218651 = .0030 sec for each multi-block I/O.
    Now my theory is that the improvement comes from the large buffer cache (8,320 MB) inducing some read-ahead (asynchronous pre-fetch). Read-aheads behave like quasi-logical I/Os and are cheaper than physical I/O. So when there is a large buffer cache and read-ahead can be done, is using the buffer cache a better choice than the PGA?
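    One way to probe that theory — my own sketch, assuming the test user can see V$BH and DBA_OBJECTS — is to count how many blocks of the table sit in the buffer cache after the buffered (db file scattered read) run:
        -- Sketch: blocks of T1 currently held in the buffer cache
        SELECT COUNT(*) AS cached_blocks
        FROM   v$bh b, dba_objects o
        WHERE  b.objd = o.data_object_id
        AND    o.object_name = 'T1'
        AND    b.status <> 'free';
    A large count after the buffered run, versus essentially zero after the direct path run, would support the read-ahead/caching explanation.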
    Regards,
    Mich

  • Simple Query in Oracle Linked Table in MS Access causes full table scan.

    I am running a very simple query in MS ACCESS to a linked Oracle table as follows:
    Select *
    From EXPRESS_SERVICE_EVENTS --(the linked table name refers to EXPRESS.SERVICE_EVENTS)
    Where performed > MyDate()
    or
    Select *
    From EXPRESS_SERVICE_EVENTS --(the linked table name refers to EXPRESS.SERVICE_EVENTS)
    Where performed > [Forms]![MyForm]![Date1]
    We have over 50 machines and this query runs fine on over half of these, using an Oracle Index on the "performed" field. Running exactly the same thing on the other machines causes a full table scan, therefore ignoring the Index (all machines access the same Access DB).
    Strangely, if we write the query as follows:
    Select *
    From EXPRESS_SERVICE_EVENTS
    Where performed > #09/04/2009 08:00#
    it works fast everywhere!
    Any help on this 'phenomenon' would be appreciated.
    Things we've done:
    Checked regional settings, ODBC driver settings, MS Access settings (as in Tools->Options), we have the latest XP and Office service packs, and re-linked all Access Tables on both the slow and fast machines independantly).

    Primarily, thanks gdarling for your reply. This solved our problem.
    Just a small note to those who may be using this thread.
    Although this might not be the reason, my PC had Oracle 9iR2 installed with Administrative Tools, whereas user machines had the same thing installed but as a Runtime Installation. For some reason, my PC did not have 'bind date' etc. as an option in the workarounds, but user machines did have this workaround option. Strangely, although I did not have the option, my (ODBC) query was running as expected, but user queries were not.
    When we set the workaround checkbox accordingly, the queries then run as expected (fast).
    Once again,
    Thanks
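    As a footnote (my own sketch, not posted in the thread): the 'bind date' workaround corresponds to the Oracle ODBC driver binding TIMESTAMP parameters as DATE; when the parameter arrives as a TIMESTAMP, the indexed DATE column can be implicitly converted and the index ignored. On a 10g or later database you can see what datatype a given machine actually sent by looking at the captured binds for the statement (the SQL_ID below is a placeholder):
        -- Sketch: what datatype did the client bind for this cursor?
        SELECT name, datatype_string, value_string
        FROM   v$sql_bind_capture
        WHERE  sql_id = '&sql_id';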

  • Taking more time in INDEX RANGE SCAN compared to the full table scan

    Hi all ,
    Below are the version og my database.
    SQL> select * from v$version;
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
    PL/SQL Release 10.2.0.4.0 - Production
    CORE 10.2.0.4.0 Production
    TNS for HPUX: Version 10.2.0.4.0 - Production
    NLSRTL Version 10.2.0.4.0 - Production
    I have gathered the table statistics and the plan has changed for this SQL statement.
    SELECT P1.COMPANY, P1.PAYGROUP, P1.PAY_END_DT, P1.PAYCHECK_OPTION,
    P1.OFF_CYCLE, P1.PAGE_NUM, P1.LINE_NUM, P1.SEPCHK  FROM  PS_PAY_CHECK P1
    WHERE P1.FORM_ID = :1 AND P1.PAYCHECK_NBR = :2 AND
    P1.CHECK_DT = :3 AND P1.PAYCHECK_OPTION <> 'R'
    Plan before gathering stats:
    Plan hash value: 3872726522
    | Id  | Operation         | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |              |       |       | 14306 (100)|          |
    |   1 |  TABLE ACCESS FULL| PS_PAY_CHECK |     1 |    51 | 14306   (4)| 00:02:52 |
    Plan after gathering stats:
    Operation                                      Object Name            Rows   Bytes   Cost
    SELECT STATEMENT Optimizer Mode=CHOOSE                                    1              4
      TABLE ACCESS BY INDEX ROWID                  SYSADM.PS_PAY_CHECK        1     51      4
        INDEX RANGE SCAN                           SYSADM.PS0PAY_CHECK        1             3
    After gathering stats the plan looks good, but when I execute the query it takes 5 hours; before gathering stats it finished within 2 hours. I do not want to restore my old statistics. Below are the data for the tables, and when I observe the run I see a lot of db file scattered reads.
    NAME                                 TYPE        VALUE
    _optimizer_cost_based_transformation string      OFF
    filesystemio_options                 string      asynch
    object_cache_optimal_size            integer     102400
    optimizer_dynamic_sampling           integer     2
    optimizer_features_enable            string      10.2.0.4
    optimizer_index_caching              integer     0
    optimizer_index_cost_adj             integer     100
    optimizer_mode                       string      choose
    optimizer_secure_view_merging        boolean     TRUE
    plsql_optimize_level                 integer     2
    SQL> select count(*) from sysadm.ps_pay_check;
      COUNT(*)
       1270052
    SQL> select num_rows, blocks from dba_tables where table_name = 'PS_PAY_CHECK';
      NUM_ROWS     BLOCKS
       1270047      63166
    Event                                 Waits    Time (s)  Avg (ms)  % Time  Wait Class
    db file sequential read           1,584,677       6,375         4    36.6  User I/O
    db file scattered read            2,366,398       5,689         2    32.7  User I/O
    Please let me know why it is taking more time with the INDEX RANGE SCAN compared to the full table scan.
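    One sanity check worth adding here (my note, not from the thread): look at the actual runtime plan and, where the patch level supports the +PEEKED_BINDS modifier, the bind values the optimizer peeked, since a plan that looks cheap for the peeked binds can be very expensive for the real ones. The SQL_ID below is a placeholder:
        -- Sketch: actual plan and peeked binds for the cached cursor (10.2+)
        SELECT *
        FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL, 'ALLSTATS LAST +PEEKED_BINDS'));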

    suresh.ratnaji wrote:
    NAME                                 TYPE        VALUE
    _optimizer_cost_based_transformation string      OFF
    filesystemio_options                 string      asynch
    object_cache_optimal_size            integer     102400
    optimizer_dynamic_sampling           integer     2
    optimizer_features_enable            string      10.2.0.4
    optimizer_index_caching              integer     0
    optimizer_index_cost_adj             integer     100
    optimizer_mode                       string      choose
    optimizer_secure_view_merging        boolean     TRUE
    plsql_optimize_level                 integer     2
    please let me know why it taking more time in INDEX RANGE SCAN compare to the full table scan?
    Suresh,
    Any particular reason why you have a non-default value for the hidden parameter _optimizer_cost_based_transformation?
    On my 10.2.0.1 database, its default value is "linear". What happens when you reset the value of the hidden parameter to default?
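    If the plan is to put the hidden parameter back to its default, a sketch of the reset would look like the following; since it is a hidden parameter, change it only under guidance from Oracle Support:
        -- Sketch: revert the hidden parameter (Oracle Support guidance advised)
        ALTER SESSION SET "_optimizer_cost_based_transformation" = 'linear';
        -- or remove a persistent setting from the spfile:
        -- ALTER SYSTEM RESET "_optimizer_cost_based_transformation" SCOPE=SPFILE SID='*';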

  • Tables in subquery resulting in full table scans

    Hi,
    This is related to a P1 bug, 13009447. The customer recently upgraded to 10g and has reported this type of problem for the second time.
    Problem Description:
    All the tables in the sub-queries are being accessed with full table scans and hence the query runs for hours.
    Here is the query
    SELECT /*+ PARALLEL*/
    act.assignment_action_id
    , act.assignment_id
    , act.tax_unit_id
    , as1.person_id
    , as1.effective_start_date
    , as1.primary_flag
    FROM pay_payroll_actions pa1
    , pay_population_ranges pop
    , per_periods_of_service pos
    , per_all_assignments_f as1
    , pay_assignment_actions act
    , pay_payroll_actions pa2
    , pay_action_classifications pcl
    , per_all_assignments_f as2
    WHERE pa1.payroll_action_id = :b2
    AND pa2.payroll_id = pa1.payroll_id
    AND pa2.effective_date
    BETWEEN pa1.start_date
    AND pa1.effective_date
    AND act.payroll_action_id = pa2.payroll_action_id
    AND act.action_status IN ('C', 'S')
    AND pcl.classification_name = :b3
    AND pa2.consolidation_set_id = pa1.consolidation_set_id
    AND pa2.action_type = pcl.action_type
    AND nvl (pa2.future_process_mode, 'Y') = 'Y'
    AND as1.assignment_id = act.assignment_id
    AND pa1.effective_date
    BETWEEN as1.effective_start_date
    AND as1.effective_end_date
    AND as2.assignment_id = act.assignment_id
    AND pa2.effective_date
    BETWEEN as2.effective_start_date
    AND as2.effective_end_date
    AND as2.payroll_id = as1.payroll_id
    AND pos.period_of_service_id = as1.period_of_service_id
    AND pop.payroll_action_id = :b2
    AND pop.chunk_number = :b1
    AND pos.person_id = pop.person_id
    AND (
    as1.payroll_id = pa1.payroll_id
    OR pa1.payroll_id IS NULL
    AND NOT EXISTS
    SELECT /*+ PARALLEL*/ NULL
    FROM pay_assignment_actions ac2
    , pay_payroll_actions pa3
    , pay_action_interlocks int
    WHERE int.locked_action_id = act.assignment_action_id
    AND ac2.assignment_action_id = int.locking_action_id
    AND pa3.payroll_action_id = ac2.payroll_action_id
    AND pa3.action_type IN ('P', 'U')
    AND NOT EXISTS
    SELECT /*+ PARALLEL*/
    NULL
    FROM per_all_assignments_f as3
    , pay_assignment_actions ac3
    WHERE :b4 = 'N'
    AND ac3.payroll_action_id = pa2.payroll_action_id
    AND ac3.action_status NOT IN ('C', 'S')
    AND as3.assignment_id = ac3.assignment_id
    AND pa2.effective_date
    BETWEEN as3.effective_start_date
    AND as3.effective_end_date
    AND as3.person_id = as2.person_id
    ORDER BY as1.person_id
    , as1.primary_flag DESC
    , as1.effective_start_date
    , act.assignment_id
    FOR UPDATE OF as1.assignment_id
    , pos.period_of_service_id
    Here is the execution plan for this query. We tried adding hints in the sub-queries to force index use, but it is still doing full table scans.
    We suspect some database parameter is causing this issue.
    In the plan:
    - Full table scans on tables in the first sub-query:
      PAY_PAYROLL_ACTIONS, PAY_ASSIGNMENT_ACTIONS, PAY_ACTION_INTERLOCKS
    - Full table scans on tables in the second sub-query:
      PER_ALL_ASSIGNMENTS_F, PAY_ASSIGNMENT_ACTIONS
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 29 398.80 2192.99 238706 4991924 2383 0
    Fetch 1136 378.38 1921.39 0 4820511 0 1108
    total 1166 777.19 4114.38 238706 9812435 2383 1108
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 41 (APPS) (recursive depth: 1)
    Rows Execution Plan
    0 SELECT STATEMENT MODE: ALL_ROWS
    0 FOR UPDATE
    0 PX COORDINATOR
    0 PX SEND (QC (ORDER)) OF ':TQ10009' [:Q1009]
    0 SORT (ORDER BY) [:Q1009]
    0 PX RECEIVE [:Q1009]
    0 PX SEND (RANGE) OF ':TQ10008' [:Q1008]
    0 HASH JOIN (ANTI BUFFERED) [:Q1008]
    0 PX RECEIVE [:Q1008]
    0 PX SEND (HASH) OF ':TQ10006' [:Q1006]
    0 BUFFER (SORT) [:Q1006]
    0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'PER_ALL_ASSIGNMENTS_F' (TABLE) [:Q1006]
    0 NESTED LOOPS [:Q1006]
    0 NESTED LOOPS [:Q1006]
    0 NESTED LOOPS [:Q1006]
    0 HASH JOIN (ANTI) [:Q1006]
    0 BUFFER (SORT) [:Q1006]
    0 PX RECEIVE [:Q1006]
    0 PX SEND (HASH) OF ':TQ10002'
    0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'PAY_ASSIGNMENT_ACTIONS' (TABLE)
    0 NESTED LOOPS
    0 NESTED LOOPS
    0 NESTED LOOPS
    0 NESTED LOOPS
    0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'PAY_PAYROLL_ACTIONS' (TABLE)
    0 INDEX MODE: ANALYZED (UNIQUE SCAN) OF 'PAY_PAYROLL_ACTIONS_PK' (INDEX (UNIQUE)
    0 INDEX MODE: ANALYZED (RANGE SCAN) OF 'PAY_POPULATION_RANGES_N4' (INDEX)
    0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'PER_PERIODS_OF_SERVICE' (TABLE)
    0 INDEX MODE: ANALYZED (RANGE SCAN) OF 'PER_PERIODS_OF_SERVICE_N3' (INDEX)
    0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'PER_ALL_ASSIGNMENTS_F' (TABLE)
    0 INDEX MODE: ANALYZED (RANGE SCAN) OF 'PER_ASSIGNMENTS_N4' (INDEX)
    0 INDEX MODE: ANALYZED (RANGE SCAN) OF 'PAY_ASSIGNMENT_ACTIONS_N51' (INDEX)
    0 PX RECEIVE [:Q1006]
    0 PX SEND (HASH) OF ':TQ10005' [:Q1005]
    0 VIEW OF 'VW_SQ_1' (VIEW) [:Q1005]
    0 HASH JOIN [:Q1005]
    0 BUFFER (SORT) [:Q1005]
    0 PX RECEIVE [:Q1005]
    0 PX SEND (BROADCAST) OF ':TQ10000'
    0 TABLE ACCESS MODE: ANALYZED (FULL) OF 'PAY_PAYROLL_ACTIONS' (TABLE)
    0 HASH JOIN [:Q1005]
    0 PX RECEIVE [:Q1005]
    0 PX SEND (HASH) OF ':TQ10004' [:Q1004]
    0 PX BLOCK (ITERATOR) [:Q1004]
    0 TABLE ACCESS MODE: ANALYZED (FULL) OF 'PAY_ASSIGNMENT_ACTIONS' (TABLE) [:Q1004]
    0 BUFFER (SORT) [:Q1005]
    0 PX RECEIVE [:Q1005]
    0 PX SEND (HASH) OF ':TQ10001'
    0 TABLE ACCESS MODE: ANALYZED (FULL) OF 'PAY_ACTION_INTERLOCKS' (TABLE)
    0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'PAY_PAYROLL_ACTIONS' (TABLE) [:Q1006]
    0 INDEX MODE: ANALYZED (UNIQUE SCAN) OF 'PAY_PAYROLL_ACTIONS_PK' (INDEX (UNIQUE)) [:Q1006]
    0 INDEX MODE: ANALYZED (UNIQUE SCAN) OF 'PAY_ACTION_CLASSIFICATIONS_PK' (INDEX (UNIQUE))[:Q1006]
    0 INDEX MODE: ANALYZED (RANGE SCAN) OF 'PER_ASSIGNMENTS_F_PK' (INDEX (UNIQUE)) [:Q1006]
    0 PX RECEIVE [:Q1008]
    0 PX SEND (HASH) OF ':TQ10007' [:Q1007]
    0 VIEW OF 'VW_SQ_2' (VIEW) [:Q1007]
    0 FILTER [:Q1007]
    0 HASH JOIN [:Q1007]
    0 BUFFER (SORT) [:Q1007]
    0 PX RECEIVE [:Q1007]
    0 PX SEND (BROADCAST) OF ':TQ10003'
    0 TABLE ACCESS MODE: ANALYZED (FULL) OF 'PER_ALL_ASSIGNMENTS_F' (TABLE)
    0 PX BLOCK (ITERATOR) [:Q1007]
    0 TABLE ACCESS MODE: ANALYZED (FULL) OF 'PAY_ASSIGNMENT_ACTIONS' (TABLE) [:Q1007]
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    enq: KO - fast object checkpoint 32 0.02 0.12
    os thread startup 8 0.02 0.19
    PX Deq: Join ACK 198 0.00 0.04
    PX Deq Credit: send blkd 167116 1.95 1103.72
    PX Deq Credit: need buffer 327389 1.95 266.30
    PX Deq: Parse Reply 148 0.01 0.03
    PX Deq: Execute Reply 11531 1.95 1901.50
    PX qref latch 23060 0.00 0.60
    db file sequential read 108199 0.17 22.11
    db file scattered read 9272 0.19 51.74
    PX Deq: Table Q qref 78 0.00 0.03
    PX Deq: Signal ACK 1165 0.10 10.84
    enq: PS - contention 73 0.00 0.00
    reliable message 27 0.00 0.00
    latch free 218 0.00 0.01
    latch: session allocation 11 0.00 0.00
    Thanks in advance
    Suresh PV

    Hi,
    welcome,
    how does the query perform if you remove all the PARALLEL hints? Most of the waits are related to parallel execution.
    Herald ten Dam
    http://htendam.wordpress.com
    PS. Use "{code}" for showing your code and explain plans, it looks nicer
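    A quick way to test that suggestion without editing the statement (my sketch, not from the thread) is to force serial execution for the session and compare the elapsed time and wait profile; note that depending on the version an explicit PARALLEL hint may still win, so actually removing the hints, as suggested above, is the definitive test.
        -- Sketch: run the statement serially for comparison
        ALTER SESSION DISABLE PARALLEL QUERY;
        -- ... run and trace the statement again ...
        ALTER SESSION ENABLE PARALLEL QUERY;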

  • Finding the Text of SQL Query causing Full Table Scans

    Hi,
    does anyone have a SQL script that shows the complete SQL text of queries that have caused a full table scan?
    Please also let me know how soon such a script needs to be run: does it work only while the query is running, or also after it completes (and if so, for how long, e.g. until the next restart)?
    Your help is appreciated.
    Thx,
    Mayuran

    Finding the Text of SQL Query Causing Full Table Scan
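    No script was actually posted in this thread. A minimal sketch of the usual approach is to join V$SQL_PLAN to V$SQL; it only covers statements still cached in the shared pool, so it works while the cursor remains cached (roughly until it is aged out or the instance restarts), not just while the query is running:
        -- Sketch: SQL text of cached statements whose plan contains a full table scan
        SELECT DISTINCT s.sql_id, p.object_owner, p.object_name, s.sql_text
        FROM   v$sql_plan p, v$sql s
        WHERE  p.address      = s.address
        AND    p.hash_value   = s.hash_value
        AND    p.child_number = s.child_number
        AND    p.operation    = 'TABLE ACCESS'
        AND    p.options      = 'FULL'
        AND    p.object_owner NOT IN ('SYS', 'SYSTEM');
    For statements already aged out of the shared pool, the AWR views DBA_HIST_SQL_PLAN and DBA_HIST_SQLTEXT (separately licensed) are the equivalent historical source.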
