Full Table Scan issue

Hi,
I have two tables (TAB1, TAB2).
I wrote a SQL query that joins these two tables along with some other tables.
Col1 and Col2 in TAB1 are defined as CHAR(8).
Col1 and Col2 in TAB2 are defined as VARCHAR(8).
In the query's WHERE clause I wrote:
TAB2.Col1 = TRIM(TAB1.Col1)
and TAB2.Col2 = TRIM(TAB1.Col2)
Problem: as soon as I add TRIM(), the query does a full table scan and runs very slowly.
How can I fix this issue?
Thanks

Sorry, I did not give full details earlier, but here is some more info:
(11) TABLE ACCESS FULL TAB1 [Analyzed]
(11) Blocks: 430 Est. Rows: 62,770 of 62,770 Cost: 67
(19) TABLE ACCESS FULL STAT_ONE [Analyzed]
(19) Blocks: 821 Est. Rows: 53,580 of 53,580 Cost: 63
Index? You mean on COL1 and COL2? No idea, I will find out.
Thanks
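
A common fix for this CHAR-vs-VARCHAR pattern (a sketch only; the index name and the RPAD length are assumptions about your schema) is to keep functions off the column you want indexed access on, either by indexing the trimmed expression or by padding the VARCHAR side instead:

-- Option 1: a function-based index on the trimmed CHAR(8) columns of TAB1,
-- so predicates like TAB2.Col1 = TRIM(TAB1.Col1) can be resolved by an index.
-- (Index name is made up for the example.)
CREATE INDEX tab1_trim_idx ON TAB1 (TRIM(Col1), TRIM(Col2));

-- Option 2: leave TAB1 untouched and pad the VARCHAR side instead, so an
-- ordinary index on TAB1(Col1, Col2) stays usable. This assumes the TAB2
-- values are stored without trailing blanks.
SELECT TAB1.Col1, TAB1.Col2
FROM   TAB1, TAB2
WHERE  TAB1.Col1 = RPAD(TAB2.Col1, 8)
AND    TAB1.Col2 = RPAD(TAB2.Col2, 8);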

Similar Messages

  • Full table scan issue with optional params

    Hi
    I have the following problem when writing PL/SQL queries with optional search conditions in the SQL statements.
    For example, a select query with conditions like this:
    select .......
    from
    table1 t1 join ...
    table2 t2 join ...
    where
    ( :param1 = -1 or t1.col1 = :param1 )
    and
    ( :param2 = -1 or t2.col1 = :param2 )
    This is mainly to support optional search conditions: when param1 or param2 is not supplied it is passed as -1, so that part of the WHERE clause becomes true and does no filtering.
    However, with this approach the indexed condition columns are not used and a full scan happens instead.
    If we write dynamic SQL (i.e. remove the condition from the WHERE clause entirely, based on the value of the param), the query performs as usual. The other option is to duplicate the same query in an if ... then ... else block for the different values of param1 and param2, but in that case we would end up with a huge stored procedure.
    So what is the solution for this kind of scenario? Do we have to end up with dynamic SQL (string concatenation to build the query at runtime with only the required conditions)?
    Please help me with this. What is the usual approach for such a selection query? Most search pages allow several selection criteria to be omitted, so this should be a very common problem.
    Thanks
    -Rasika

    Hello
    Have a look at this link to AskTom. It shows how to specify a dynamic WHERE clause AND use bind variables in the resulting SQL statement.
    HTH
    David
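
    The basic shape of that approach (a rough sketch; the procedure, table, column and bind names are illustrative, not from the original post) is to append each predicate only when its parameter is supplied, while still referencing every bind exactly once so the cursor can be opened with a fixed USING list:

    CREATE OR REPLACE PROCEDURE search_demo (
      p_param1 IN NUMBER,   -- pass -1 (or NULL) when the filter is not wanted
      p_param2 IN NUMBER,
      p_result OUT SYS_REFCURSOR
    ) AS
      l_sql VARCHAR2(4000);
    BEGIN
      l_sql := 'select t1.*
                  from table1 t1
                  join table2 t2 on t2.t1_id = t1.id
                 where 1 = 1';

      -- Append a real, indexable predicate only when the parameter is supplied;
      -- otherwise add an always-true predicate that still consumes the bind,
      -- so the USING list below never changes.
      IF p_param1 IS NOT NULL AND p_param1 <> -1 THEN
        l_sql := l_sql || ' and t1.col1 = :b1';
      ELSE
        l_sql := l_sql || ' and (:b1 is null or 1 = 1)';
      END IF;

      IF p_param2 IS NOT NULL AND p_param2 <> -1 THEN
        l_sql := l_sql || ' and t2.col1 = :b2';
      ELSE
        l_sql := l_sql || ' and (:b2 is null or 1 = 1)';
      END IF;

      OPEN p_result FOR l_sql USING p_param1, p_param2;
    END search_demo;
    /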

  • Data Federator Full Table scan

    Hi,
    Is it possible to prevent a full table scan when creating a join between tables from two catalogues in Data Federator?
    This is seriously hampering development time in Data Federator.
    I am working with the IDT beta to create a universe based on multiple sources. The delay when creating joins is so large that we may revert to Universe Design Tool. I have posted this here because Data Federator gurus may know of a tweak for IDT, since it incorporates DF within itself in BI 4.0.
    Any inputs would be great. In case this is in the wrong forum, please move it accordingly.
    VFernandes

    The issue was fixed when the GA version was released; it was present only in the beta.

  • URGENT HELP Required: Solution to avoid Full table scan for a PL/SQL query

    Hi Everyone,
    When I checked the EXPLAIN PLAN for the SQL query below, I saw that full table scans are occurring on both tables, TABLE_A and TABLE_B.
    UPDATE TABLE_A a
    SET a.current_commit_date =
    (SELECT MAX (b.loading_date)
    FROM TABLE_B b
    WHERE a.sales_order_id = b.sales_order_id
    AND a.sales_order_line_id = b.sales_order_line_id
    AND b.confirmed_qty > 0
    AND b.data_flag IS NULL
    OR b.schedule_line_delivery_date >= '23 NOV 2008')
    Though TABLE_A is a small table with nearly 100,000 (1 lakh) records, TABLE_B is a huge table with nearly 25 million (2.5 crore) records.
    I created an index on TABLE_B covering all the columns used in the WHERE clause, but the explain plan still shows a FULL TABLE SCAN.
    When I run the query it takes a very long time to execute (more than one day), and each time I have to kill the session.
    Please help me optimize this.
    Thanks,
    Sudhindra

    Check the instruction again, you're leaving out information we need in order to help you, like optimizer information.
    - Post your exact database version, that is: the result of select * from v$version;
    - Don't use TOAD's execution plan, but use
    SQL> explain plan for <your_query>;
    SQL> select * from table(dbms_xplan.display);
    (You can execute that in TOAD as well.)
    Don't forget you need to use the {noformat}{noformat} tag in order to post formatted code/output/execution plans etc.
    It's also explained in the instruction.
    When was the last time statistics were gathered for table_a and table_b?
    You can find out by issuing the following query:
    select table_name
         , last_analyzed
         , num_rows
      from user_tables
     where table_name in ('TABLE_A', 'TABLE_B');
    Can you also post the results of these counts:
    select count(*)
      from table_b
     where confirmed_qty > 0;
    select count(*)
      from table_b
     where data_flag is null;
    select count(*)
      from table_b
     where schedule_line_delivery_date >= /* assuming you're using a date, and not a string */ to_date('23 NOV 2008', 'dd mon yyyy');
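
    If the statistics turn out to be missing or stale, they can be refreshed with DBMS_STATS before re-checking the plan. A minimal sketch, assuming the tables are in your own schema:

    BEGIN
      -- Gather fresh optimizer statistics on both tables and their indexes.
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname          => USER,
        tabname          => 'TABLE_A',
        cascade          => TRUE,
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname          => USER,
        tabname          => 'TABLE_B',
        cascade          => TRUE,
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
    END;
    /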

  • Preventing Discoverer from using Full Table Scans with Decode in a View

    Hi Forum,
    Hope you can help; this involves a performance issue when creating a report/query in Discoverer.
    I have a Discoverer report that currently takes less than 5 seconds to run. After I add a condition to bring back Batch Status = 'Posted', we cancelled the query after 20 minutes as this is way too long. If I remove the condition, the query time goes back to less than 5 seconds. Changing the condition to Batch Status = 'Unposted' returns results in seconds.
    I've been doing some digging and have found the database view that is linked to the Journal Batches folder in Discoverer. See the end of this post.
    I think the problem is with the column that uses DECODE. When querying the column in TOAD the value 'P' is returned, but in Discoverer the condition is applied to the decoded value 'Posted'. I'm not too sure how DECODE works, but I think this could be what is causing the full table scans.
    Any idea how we can get around this?
    SELECT
    JOURNAL_BATCH1.JE_BATCH_ID,
    JOURNAL_BATCH1.NAME,
    JOURNAL_BATCH1.SET_OF_BOOKS_ID,
    GL_SET_OF_BOOKS.NAME,
    DECODE( JOURNAL_BATCH1.STATUS,
    '+', 'Unable to validate or create CTA',
    '+*', 'Was unable to validate or create CTA',
    '-','Invalid or inactive rounding differences account in journal entry',
    '-*', 'Modified invalid or inactive rounding differences account in journal entry',
    '<', 'Showing sequence assignment failure',
    '<*', 'Was showing sequence assignment failure',
    '>', 'Showing cutoff rule violation',
    '>*', 'Was showing cutoff rule violation',
    'A', 'Journal batch failed funds reservation',
    'A*', 'Journal batch previously failed funds reservation',
    'AU', 'Showing batch with unopened period',
    'B', 'Showing batch control total violation',
    'B*', 'Was showing batch control total violation',
    'BF', 'Showing batch with frozen or inactive budget',
    'BU', 'Showing batch with unopened budget year',
    'C', 'Showing unopened reporting period',
    'C*', 'Was showing unopened reporting period',
    'D', 'Selected for posting to an unopened period',
    'D*', 'Was selected for posting to an unopened period',
    'E', 'Showing no journal entries for this batch',
    'E*', 'Was showing no journal entries for this batch',
    'EU', 'Showing batch with unopened encumbrance year',
    'F', 'Showing unopened reporting encumbrance year',
    'F*', 'Was showing unopened reporting encumbrance year',
    'G', 'Showing journal entry with invalid or inactive suspense account',
    'G*', 'Was showing journal entry with invalid or inactive suspense account',
    'H', 'Showing encumbrance journal entry with invalid or inactive reserve account',
    'H*', 'Was showing encumbrance journal entry with invalid or inactive reserve account',
    'I', 'In the process of being posted',
    'J', 'Showing journal control total violation',
    'J*', 'Was showing journal control total violation',
    'K', 'Showing unbalanced intercompany journal entry',
    'K*', 'Was showing unbalanced intercompany journal entry',
    'L', 'Showing unbalanced journal entry by account category',
    'L*', 'Was showing unbalanced journal entry by account category',
    'M', 'Showing multiple problems preventing posting of batch',
    'M*', 'Was showing multiple problems preventing posting of batch',
    'N', 'Journal produced error during intercompany balance processing',
    'N*', 'Journal produced error during intercompany balance processing',
    'O', 'Unable to convert amounts into reporting currency',
    'O*', 'Was unable to convert amounts into reporting currency',
    'P', 'Posted',
    'Q', 'Showing untaxed journal entry',
    'Q*', 'Was showing untaxed journal entry',
    'R', 'Showing unbalanced encumbrance entry without reserve account',
    'R*', 'Was showing unbalanced encumbrance entry without reserve account',
    'S', 'Already selected for posting',
    'T', 'Showing invalid period and conversion information for this batch',
    'T*', 'Was showing invalid period and conversion information for this batch',
    'U', 'Unposted',
    'V', 'Journal batch is unapproved',
    'V*', 'Journal batch was unapproved',
    'W', 'Showing an encumbrance journal entry with no encumbrance type',
    'W*', 'Was showing an encumbrance journal entry with no encumbrance type',
    'X', 'Showing an unbalanced journal entry but suspense not allowed',
    'X*', 'Was showing an unbalanced journal entry but suspense not allowed',
    'Z', 'Showing invalid journal entry lines or no journal entry lines',
    'Z*', 'Was showing invalid journal entry lines or no journal entry lines', NULL ),
    DECODE( JOURNAL_BATCH1.ACTUAL_FLAG, 'A', 'Actual', 'B', 'Budget', 'E', 'Encumbrance', NULL ),
    JOURNAL_BATCH1.DEFAULT_PERIOD_NAME,
    JOURNAL_BATCH1.POSTED_DATE,
    JOURNAL_BATCH1.DATE_CREATED,
    JOURNAL_BATCH1.DESCRIPTION,
    DECODE( JOURNAL_BATCH1.AVERAGE_JOURNAL_FLAG, 'N', 'Standard', 'Y', 'Average', NULL ),
    DECODE( JOURNAL_BATCH1.BUDGETARY_CONTROL_STATUS, 'F', 'Failed', 'I', 'In Process', 'N', 'N/A', 'P', 'Passed', 'R', 'Required', NULL ),
    DECODE( JOURNAL_BATCH1.APPROVAL_STATUS_CODE, 'A', 'Approved', 'I', 'In Process', 'J', 'Rejected', 'R', 'Required', 'V','Validation Failed','Z', 'N/A',NULL ),
    JOURNAL_BATCH1.CONTROL_TOTAL,
    JOURNAL_BATCH1.RUNNING_TOTAL_DR,
    JOURNAL_BATCH1.RUNNING_TOTAL_CR,
    JOURNAL_BATCH1.RUNNING_TOTAL_ACCOUNTED_DR,
    JOURNAL_BATCH1.RUNNING_TOTAL_ACCOUNTED_CR,
    JOURNAL_BATCH1.PARENT_JE_BATCH_ID,
    JOURNAL_BATCH2.NAME
    FROM
    GL_JE_BATCHES JOURNAL_BATCH1,
    GL_JE_BATCHES JOURNAL_BATCH2,
    GL_SETS_OF_BOOKS
    GL_SET_OF_BOOKS
    WHERE
    JOURNAL_BATCH1.PARENT_JE_BATCH_ID = JOURNAL_BATCH2.JE_BATCH_ID (+) AND
    JOURNAL_BATCH1.SET_OF_BOOKS_ID = GL_SET_OF_BOOKS.SET_OF_BOOKS_ID AND
    GL_SECURITY_PKG.VALIDATE_ACCESS( JOURNAL_BATCH1.SET_OF_BOOKS_ID ) = 'TRUE' WITH READ ONLY
    Thanks,
    Lance

    Discoverer created its own SQL.
    Please see the SQL Inspector plan below:
    Before Condition
    SELECT STATEMENT
    SORT GROUP BY
    VIEW SYS
    SORT GROUP BY
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    TABLE ACCESS BY INDEX ROWID GL.GL_CODE_COMBINATIONS
    AND-EQUAL
    INDEX RANGE SCAN GL.GL_CODE_COMBINATIONS_N2
    INDEX RANGE SCAN GL.GL_CODE_COMBINATIONS_N1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUES_N1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUE_SETS
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUE_SETS_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES_TL
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUES_TL_U1
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUE_NORM_HIER_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_LINES
    INDEX RANGE SCAN GL.GL_JE_LINES_N1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_HEADERS
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_DAILY_CONVERSION_TYPES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_SOURCES_TL
    INDEX UNIQUE SCAN GL.GL_JE_SOURCES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_CATEGORIES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    INDEX UNIQUE SCAN GL.GL_BUDGET_VERSIONS_U1
    INDEX UNIQUE SCAN GL.GL_ENCUMBRANCE_TYPES_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_BATCHES
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_PERIODS
    INDEX RANGE SCAN GL.GL_PERIODS_U1
    After Condition
    SELECT STATEMENT
    SORT GROUP BY
    VIEW SYS
    SORT GROUP BY
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    TABLE ACCESS FULL GL.GL_JE_BATCHES
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_HEADERS
    INDEX RANGE SCAN GL.GL_JE_HEADERS_N1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    INDEX UNIQUE SCAN GL.GL_ENCUMBRANCE_TYPES_U1
    INDEX UNIQUE SCAN GL.GL_DAILY_CONVERSION_TYPES_U1
    INDEX UNIQUE SCAN GL.GL_BUDGET_VERSIONS_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_SOURCES_TL
    INDEX UNIQUE SCAN GL.GL_JE_SOURCES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_CATEGORIES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_LINES
    INDEX RANGE SCAN GL.GL_JE_LINES_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    TABLE ACCESS BY INDEX ROWID GL.GL_CODE_COMBINATIONS
    INDEX UNIQUE SCAN GL.GL_CODE_COMBINATIONS_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_PERIODS
    INDEX RANGE SCAN GL.GL_PERIODS_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUES_N1
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUE_NORM_HIER_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES_TL
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUES_TL_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUE_SETS
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUE_SETS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    _________________________________
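
    The difference can be reproduced outside Discoverer: a condition on the decoded label cannot use a plain index on STATUS, while a condition on the raw code (or an index on the DECODE expression itself) can. A sketch only; the index name is made up, the DECODE is abbreviated, and any custom index on an EBS table should be reviewed before being created:

    -- Filtering on the decoded label: a plain index on STATUS cannot be used,
    -- because the predicate is on DECODE(STATUS, ...), not on STATUS itself.
    SELECT b.name
    FROM   gl_je_batches b
    WHERE  DECODE(b.status, 'P', 'Posted', 'U', 'Unposted', NULL) = 'Posted';

    -- Filtering on the raw code: an ordinary index on STATUS can be used.
    SELECT b.name
    FROM   gl_je_batches b
    WHERE  b.status = 'P';

    -- Alternatively, a function-based index on the DECODE expression (it must
    -- match the expression in the view exactly) lets the original predicate
    -- use an index as well.
    CREATE INDEX gl_je_batches_status_fbi
      ON gl_je_batches (DECODE(status, 'P', 'Posted', 'U', 'Unposted', NULL));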

  • Tables in subquery resulting in full table scans

    Hi,
    This is related to a P1 bug, 13009447. The customer recently upgraded to 10g and has reported this type of problem for the second time.
    Problem Description:
    All the tables in the sub-queries are being accessed with full table scans, and hence the query runs for hours.
    Here is the query
    SELECT /*+ PARALLEL*/
    act.assignment_action_id
    , act.assignment_id
    , act.tax_unit_id
    , as1.person_id
    , as1.effective_start_date
    , as1.primary_flag
    FROM pay_payroll_actions pa1
    , pay_population_ranges pop
    , per_periods_of_service pos
    , per_all_assignments_f as1
    , pay_assignment_actions act
    , pay_payroll_actions pa2
    , pay_action_classifications pcl
    , per_all_assignments_f as2
    WHERE pa1.payroll_action_id = :b2
    AND pa2.payroll_id = pa1.payroll_id
    AND pa2.effective_date
    BETWEEN pa1.start_date
    AND pa1.effective_date
    AND act.payroll_action_id = pa2.payroll_action_id
    AND act.action_status IN ('C', 'S')
    AND pcl.classification_name = :b3
    AND pa2.consolidation_set_id = pa1.consolidation_set_id
    AND pa2.action_type = pcl.action_type
    AND nvl (pa2.future_process_mode, 'Y') = 'Y'
    AND as1.assignment_id = act.assignment_id
    AND pa1.effective_date
    BETWEEN as1.effective_start_date
    AND as1.effective_end_date
    AND as2.assignment_id = act.assignment_id
    AND pa2.effective_date
    BETWEEN as2.effective_start_date
    AND as2.effective_end_date
    AND as2.payroll_id = as1.payroll_id
    AND pos.period_of_service_id = as1.period_of_service_id
    AND pop.payroll_action_id = :b2
    AND pop.chunk_number = :b1
    AND pos.person_id = pop.person_id
    AND (as1.payroll_id = pa1.payroll_id
    OR pa1.payroll_id IS NULL)
    AND NOT EXISTS
    (SELECT /*+ PARALLEL*/ NULL
    FROM pay_assignment_actions ac2
    , pay_payroll_actions pa3
    , pay_action_interlocks int
    WHERE int.locked_action_id = act.assignment_action_id
    AND ac2.assignment_action_id = int.locking_action_id
    AND pa3.payroll_action_id = ac2.payroll_action_id
    AND pa3.action_type IN ('P', 'U'))
    AND NOT EXISTS
    (SELECT /*+ PARALLEL*/
    NULL
    FROM per_all_assignments_f as3
    , pay_assignment_actions ac3
    WHERE :b4 = 'N'
    AND ac3.payroll_action_id = pa2.payroll_action_id
    AND ac3.action_status NOT IN ('C', 'S')
    AND as3.assignment_id = ac3.assignment_id
    AND pa2.effective_date
    BETWEEN as3.effective_start_date
    AND as3.effective_end_date
    AND as3.person_id = as2.person_id)
    ORDER BY as1.person_id
    , as1.primary_flag DESC
    , as1.effective_start_date
    , act.assignment_id
    FOR UPDATE OF as1.assignment_id
    , pos.period_of_service_id
    Here is the execution plan for this query. We tried adding hints in the sub-queries to force index use, but it is still doing full table scans.
    We suspect some DB parameter is causing this issue.
    In the plan:
    - Full table scans on tables in the first sub-query
    PAY_PAYROLL_ACTIONS, PAY_ASSIGNMENT_ACTIONS, PAY_ACTION_INTERLOCKS
    - Full table scans on tables in Second sub-query
    PER_ALL_ASSIGNMENTS_F, PAY_ASSIGNMENT_ACTIONS
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 29 398.80 2192.99 238706 4991924 2383 0
    Fetch 1136 378.38 1921.39 0 4820511 0 1108
    total 1166 777.19 4114.38 238706 9812435 2383 1108
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 41 (APPS) (recursive depth: 1)
    Rows Execution Plan
    0 SELECT STATEMENT MODE: ALL_ROWS
    0 FOR UPDATE
    0 PX COORDINATOR
    0 PX SEND (QC (ORDER)) OF ':TQ10009' [:Q1009]
    0 SORT (ORDER BY) [:Q1009]
    0 PX RECEIVE [:Q1009]
    0 PX SEND (RANGE) OF ':TQ10008' [:Q1008]
    0 HASH JOIN (ANTI BUFFERED) [:Q1008]
    0 PX RECEIVE [:Q1008]
    0 PX SEND (HASH) OF ':TQ10006' [:Q1006]
    0 BUFFER (SORT) [:Q1006]
    0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'PER_ALL_ASSIGNMENTS_F' (TABLE) [:Q1006]
    0 NESTED LOOPS [:Q1006]
    0 NESTED LOOPS [:Q1006]
    0 NESTED LOOPS [:Q1006]
    0 HASH JOIN (ANTI) [:Q1006]
    0 BUFFER (SORT) [:Q1006]
    0 PX RECEIVE [:Q1006]
    0 PX SEND (HASH) OF ':TQ10002'
    0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'PAY_ASSIGNMENT_ACTIONS' (TABLE)
    0 NESTED LOOPS
    0 NESTED LOOPS
    0 NESTED LOOPS
    0 NESTED LOOPS
    0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'PAY_PAYROLL_ACTIONS' (TABLE)
    0 INDEX MODE: ANALYZED (UNIQUE SCAN) OF 'PAY_PAYROLL_ACTIONS_PK' (INDEX (UNIQUE)
    0 INDEX MODE: ANALYZED (RANGE SCAN) OF 'PAY_POPULATION_RANGES_N4' (INDEX)
    0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'PER_PERIODS_OF_SERVICE' (TABLE)
    0 INDEX MODE: ANALYZED (RANGE SCAN) OF 'PER_PERIODS_OF_SERVICE_N3' (INDEX)
    0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'PER_ALL_ASSIGNMENTS_F' (TABLE)
    0 INDEX MODE: ANALYZED (RANGE SCAN) OF 'PER_ASSIGNMENTS_N4' (INDEX)
    0 INDEX MODE: ANALYZED (RANGE SCAN) OF 'PAY_ASSIGNMENT_ACTIONS_N51' (INDEX)
    0 PX RECEIVE [:Q1006]
    0 PX SEND (HASH) OF ':TQ10005' [:Q1005]
    0 VIEW OF 'VW_SQ_1' (VIEW) [:Q1005]
    0 HASH JOIN [:Q1005]
    0 BUFFER (SORT) [:Q1005]
    0 PX RECEIVE [:Q1005]
    0 PX SEND (BROADCAST) OF ':TQ10000'
    0 TABLE ACCESS MODE: ANALYZED (FULL) OF 'PAY_PAYROLL_ACTIONS' (TABLE)
    0 HASH JOIN [:Q1005]
    0 PX RECEIVE [:Q1005]
    0 PX SEND (HASH) OF ':TQ10004' [:Q1004]
    0 PX BLOCK (ITERATOR) [:Q1004]
    0 TABLE ACCESS MODE: ANALYZED (FULL) OF 'PAY_ASSIGNMENT_ACTIONS' (TABLE) [:Q1004]
    0 BUFFER (SORT) [:Q1005]
    0 PX RECEIVE [:Q1005]
    0 PX SEND (HASH) OF ':TQ10001'
    0 TABLE ACCESS MODE: ANALYZED (FULL) OF 'PAY_ACTION_INTERLOCKS' (TABLE)
    0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'PAY_PAYROLL_ACTIONS' (TABLE) [:Q1006]
    0 INDEX MODE: ANALYZED (UNIQUE SCAN) OF 'PAY_PAYROLL_ACTIONS_PK' (INDEX (UNIQUE)) [:Q1006]
    0 INDEX MODE: ANALYZED (UNIQUE SCAN) OF 'PAY_ACTION_CLASSIFICATIONS_PK' (INDEX (UNIQUE))[:Q1006]
    0 INDEX MODE: ANALYZED (RANGE SCAN) OF 'PER_ASSIGNMENTS_F_PK' (INDEX (UNIQUE)) [:Q1006]
    0 PX RECEIVE [:Q1008]
    0 PX SEND (HASH) OF ':TQ10007' [:Q1007]
    0 VIEW OF 'VW_SQ_2' (VIEW) [:Q1007]
    0 FILTER [:Q1007]
    0 HASH JOIN [:Q1007]
    0 BUFFER (SORT) [:Q1007]
    0 PX RECEIVE [:Q1007]
    0 PX SEND (BROADCAST) OF ':TQ10003'
    0 TABLE ACCESS MODE: ANALYZED (FULL) OF 'PER_ALL_ASSIGNMENTS_F' (TABLE)
    0 PX BLOCK (ITERATOR) [:Q1007]
    0 TABLE ACCESS MODE: ANALYZED (FULL) OF 'PAY_ASSIGNMENT_ACTIONS' (TABLE) [:Q1007]
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    enq: KO - fast object checkpoint 32 0.02 0.12
    os thread startup 8 0.02 0.19
    PX Deq: Join ACK 198 0.00 0.04
    PX Deq Credit: send blkd 167116 1.95 1103.72
    PX Deq Credit: need buffer 327389 1.95 266.30
    PX Deq: Parse Reply 148 0.01 0.03
    PX Deq: Execute Reply 11531 1.95 1901.50
    PX qref latch 23060 0.00 0.60
    db file sequential read 108199 0.17 22.11
    db file scattered read 9272 0.19 51.74
    PX Deq: Table Q qref 78 0.00 0.03
    PX Deq: Signal ACK 1165 0.10 10.84
    enq: PS - contention 73 0.00 0.00
    reliable message 27 0.00 0.00
    latch free 218 0.00 0.01
    latch: session allocation 11 0.00 0.00
    Thanks in advance
    Suresh PV

    Hi,
    welcome.
    How does the query perform if you delete all the PARALLEL hints? Most of the waits are related to parallel execution.
    Herald ten Dam
    http://htendam.wordpress.com
    PS: Use the {code} tag for showing your code and explain plans; it looks nicer.
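
    A quick way to test that suggestion without editing every hint (a sketch) is to disable parallel query at session level, re-run the statement, and compare the timings and wait events:

    -- Force serial execution for this session only.
    ALTER SESSION DISABLE PARALLEL QUERY;

    -- (re-run the problem SELECT here and compare elapsed time / waits)

    -- Restore the default behaviour afterwards.
    ALTER SESSION ENABLE PARALLEL QUERY;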

  • Why does a query go for a FULL TABLE SCAN instead of using the index

    I have two tables:
    table 1 has 1400 rows and more than 30 columns; one of them is named 'site_code', and an index was created on this column;
    table 2 has 150 rows and more than 20 columns; its primary key is also 'site_code'.
    The two tables were analysed with dbms_stats.gather_table_stats()...
    When I run the explain for the 2 SQLs below:
    select * from table1 where site_code='XXXXXXXXXX';
    select * from table2 where site_code='XXXXXXXXXX';
    the Oracle explain report certainly shows an 'Index scan'.
    But the problem arises when I try to explain the SQL
    select *
    from table1, table2
    where table1.site_code = table2.site_code
    The explain report shows:
    select .....
    FULL Table1 Scan
    FULL Table2 Scan
    Why?

    Nikolay Ivankin wrote:
    Try to use a hint, but I really doubt it will be faster.
    BluShadow wrote:
    No, using hints should only be advised when investigating an issue, not recommended for production code, as it assumes that, as a developer, you know better than the Oracle Optimizer how the data is distributed in the data files, how the data is going to grow and change over time, and how best to access that data for performance.
    Nikolay Ivankin wrote:
    Yes, you are absolutely right. But aren't we performing such an investigation? ;-)
    BluShadow wrote:
    The way you wrote it made it sound that a hint would be the solution, not just something for investigation.
    Nikolay Ivankin wrote:
    select * from .. always performs a full scan, so limit your query.
    BluShadow wrote:
    No, select * will not always perform a full scan, that's just selecting all the columns. A select without a where clause, or with a where clause that has low selectivity, will result in full table scans.
    Nikolay Ivankin wrote:
    But this is what I meant.
    BluShadow wrote:
    But not what you said.
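
    To illustrate that point with the tables from the question (the literal value is a placeholder): when the only predicate is the join itself, every row of both tables is needed, so full scans plus a hash join are usually the cheapest plan; add a selective filter and the index on site_code becomes attractive again.

    -- No selective filter: all rows from both tables are required,
    -- so the optimizer reasonably chooses full table scans.
    select *
    from   table1, table2
    where  table1.site_code = table2.site_code;

    -- Selective filter: only a handful of rows qualify, so an index
    -- scan on site_code is now the cheaper plan.
    select *
    from   table1, table2
    where  table1.site_code = table2.site_code
    and    table1.site_code = 'XXXXXXXXXX';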

  • Query is doing full table scan

    Hi All,
    The query below is doing a full table scan. Many threads from the application trigger this query, and each one does a full table scan. Can you please tell me how to improve the performance of this query?
    The environment is 11.2.0.3 RAC (4 nodes). There is a unique index on (VZ_ID, LOGGED_IN). The table row count is 2,501,103.
    Query is :-
    select ccagentsta0_.LOGGED_IN as LOGGED1_404_, ccagentsta0_.VZ_ID as VZ2_404_, ccagentsta0_.ACTIVE as ACTIVE404_, ccagentsta0_.AGENT_STATE as AGENT4_404_,
    ccagentsta0_.APPLICATION_CODE as APPLICAT5_404_, ccagentsta0_.CREATED_ON as CREATED6_404_, ccagentsta0_.CURRENT_ORDER as CURRENT7_404_,
    ccagentsta0_.CURRENT_TASK as CURRENT8_404_, ccagentsta0_.HELM_ID as HELM9_404_, ccagentsta0_.LAST_UPDATED as LAST10_404_, ccagentsta0_.LOCATION as LOCATION404_,
    ccagentsta0_.LOGGED_OUT as LOGGED12_404_, ccagentsta0_.SUPERVISOR_VZID as SUPERVISOR13_404_, ccagentsta0_.VENDOR_NAME as VENDOR14_404_
    from AGENT_STATE ccagentsta0_ where ccagentsta0_.VZ_ID='v790531'  and ccagentsta0_.ACTIVE='Y';
    Table Scan                                                       AGENT_STATE                                                2.366666667
    Table Scan                                                       AGENT_STATE                                                0.3666666667
    Table Scan                                                       AGENT_STATE                                                1.633333333
    Table Scan                                                       AGENT_STATE                                                       0.75
    Table Scan                                                       AGENT_STATE                                                1.866666667
    Table Scan                                                       AGENT_STATE                                                2.533333333
    Table Scan                                                       AGENT_STATE                                                0.5333333333
    Table Scan                                                       AGENT_STATE                                                       1.95
    Table Scan                                                       AGENT_STATE                                                        0.8
    Table Scan                                                       AGENT_STATE                                                0.2833333333
    Table Scan                                                       AGENT_STATE                                                1.983333333
    Table Scan                                                       AGENT_STATE                                                        2.5
    Table Scan                                                       AGENT_STATE                                                1.866666667
    Table Scan                                                       AGENT_STATE                                                1.883333333
    Table Scan                                                       AGENT_STATE                                                        0.9
    Table Scan                                                       AGENT_STATE                                                2.366666667
    But the explain plan shows the query using the index.
    Explain plan output:-
    PLAN_TABLE_OUTPUT
    Plan hash value: 1946142815
    | Id  | Operation                   | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |                 |     1 |   106 |   244   (0)| 00:00:03 |
    |*  1 |  TABLE ACCESS BY INDEX ROWID| AGENT_STATE     |     1 |   106 |   244   (0)| 00:00:03 |
    |*  2 |   INDEX RANGE SCAN          | AGENT_STATE_IDX |   229 |       |     4   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("CCAGENTSTA0_"."ACTIVE"='Y')
       2 - access("CCAGENTSTA0_"."VZ_ID"='v790531')
    The value I have given for VZ_ID is a dummy value picked from the table. I don't see the actual values, since the query comes in with bind variables. Please let me know your suggestions on this.
    Thanks,
    Mani

    Hi,
    But I am not getting what the issue is. It's a simple select query, and there is an index on the leading column (VZ_ID, the PK). The explain plan says it is using the index and it selects only a fraction of the rows from the table. So why is it doing an FTS? Why does the optimizer see it as a query that should do an FTS?
    The rule-based optimizer would have picked the plan with the index. The cost-based optimizer, however, is picking the plan with the lowest cost. Apparently, the lowest cost plan is the one with the full table scan. And the optimizer isn't necessarily wrong about this.
    Reading data from a table via index probes is only efficient when selecting a relatively small percentage of rows. For larger percentages, a full table scan is generally better.
    Consider a simple example: a query that selects from a table with biographies for all people on the planet. Suppose you are interested in all people from a certain country.
    select * from all_people where country='Vatican'
    would return only 800 rows (as Vatican is an extremely small country with a population of just 800 people). For this case, obviously, using an index would be very efficient.
    Now if we run this query:
    select * from all_people where country = 'India'
    we'd be getting over a billion rows. For this case, a full table scan would be several thousand times faster.
    Now consider the third case:
    select * from all_people where country = :b1
    What plan should the optimizer choose? The value of the :b1 bind variable is generally not known at parse time; it is passed by the user after the query has been parsed, at run time.
    In this case, one of two scenarios takes place: either the optimizer relies on some built-in default selectivities (basically, it takes a wild guess), or the optimizer postpones taking the final decision until the
    first time the query is run, 'peeks' the value of the bind, and optimizes the query for this case.
    It means that if the query is first parsed with :b1 = 'India', a plan with a full table scan will be generated and cached for subsequent use. Until the cursor is aged out of the library cache
    or invalidated for some reason, this will be the plan for this query.
    If the first time it was called with :b1='Vatican', then an index-based plan will be picked.
    Either way, bind peeking only gives good results if the subsequent usage of the query is of the same kind as the first usage, i.e. in the first case it will be efficient if the query is always run for countries with big populations,
    and in the second case if it is always run for countries with small populations.
    This mechanism is called 'bind peeking' and it's one of the most common causes of performance problems. In 11g, there are more sophisticated mechanisms, such as cardinality feedback, but they don't always work as expected.
    This mechanism is the most likely explanation for your issue. However, without proper diagnostic information we cannot be 100% sure.
    Best regards,
      Nikolay
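
    One way to confirm this is to look at the plan that the cached cursor actually used, including the peeked bind values, rather than at EXPLAIN PLAN output. A sketch; the SQL_ID below is a placeholder you would first look up in V$SQL:

    -- Find the statement and its child cursors.
    select sql_id, child_number, plan_hash_value, executions
    from   v$sql
    where  sql_text like 'select ccagentsta0_.LOGGED_IN%';

    -- Show the real plan of a cached child cursor, including peeked binds.
    select *
    from   table(dbms_xplan.display_cursor('abcd1234efgh5', 0, 'TYPICAL +PEEKED_BINDS'));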

  • Problem of full table scan on a partitioned table

    hi all
    There is a table called "si_sync_operation" that has 171,040 rows.
    I partitioned that table into a new table called "si_sync_operation_par" with 7 partitions.
    I issued the following statements:
    SELECT * FROM si_sync_operation_par;
    SELECT * FROM si_sync_operation;
    The explain plan shows that the cost of the first statement is 1626 and that of the second statement is 1810.
    The "cost" of the full table scan on the partitioned table is lower than that of the non-partitioned table. That's fine.
    But the "Bytes" of the full table scan on the partitioned table is 5,761,288,680 and that of the non-partitioned table is 263,743,680.
    Why does the full table scan on the partitioned table access more bytes than the one on the non-partitioned table?
    And how can a statement that accesses more bytes result in a lower cost?
    Thank you very much.

    As Hemant mentioned, the bytes reported are an approximate number of bytes. As far as cost is concerned, according to Tom it's just a number and we should not compare queries by their cost (search asktom.oracle.com for more information).
    SQL> drop table non_part purge;
    Table dropped.
    SQL> drop table part purge;
    Table dropped.
    SQL>
    SQL> CREATE TABLE non_part
      2        (id  NUMBER(5),
      3         dt    DATE);
    Table created.
    SQL>
    SQL> CREATE TABLE part
      2        (id  NUMBER(5),
      3         dt    DATE)
      4         PARTITION BY RANGE(dt)
      5         (
      6         PARTITION part1_jan2008 VALUES LESS THAN(TO_DATE('01/02/2008','DD/MM/YYYY')),
      7         PARTITION part2_feb2008 VALUES LESS THAN(TO_DATE('01/03/2008','DD/MM/YYYY')),
      8         PARTITION part3_mar2008 VALUES LESS THAN(TO_DATE('01/04/2008','DD/MM/YYYY')),
      9         PARTITION part4_apr2008 VALUES LESS THAN(TO_DATE('01/05/2008','DD/MM/YYYY')),
    10         PARTITION part5_may2008 VALUES LESS THAN(TO_DATE('01/06/2008','DD/MM/YYYY'))
    11       );
    Table created.
    SQL>
    SQL>
    SQL> insert into non_part select rownum, trunc(sysdate) - rownum from dual connect by level <= 140;
    140 rows created.
    Execution Plan
    Plan hash value: 1731520519
    | Id  | Operation                     | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | INSERT STATEMENT              |      |     1 |     2   (0)| 00:00:01 |
    |   1 |  COUNT                        |      |       |            |          |
    |*  2 |   CONNECT BY WITHOUT FILTERING|      |       |            |          |
    |   3 |    FAST DUAL                  |      |     1 |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - filter(LEVEL<=140)
    SQL>
    SQL> insert into part select * from non_part;
    140 rows created.
    Execution Plan
    Plan hash value: 1654070669
    | Id  | Operation         | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | INSERT STATEMENT  |          |   140 |  3080 |     3   (0)| 00:00:01 |
    |   1 |  TABLE ACCESS FULL| NON_PART |   140 |  3080 |     3   (0)| 00:00:01 |
    Note
       - dynamic sampling used for this statement
    SQL>
    SQL> commit;
    Commit complete.
    SQL>
    SQL> set line 10000
    SQL> set autotrace traceonly exp
    SQL> select * from non_part;
    Execution Plan
    Plan hash value: 1654070669
    | Id  | Operation         | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |          |   140 |  3080 |     3   (0)| 00:00:01 |
    |   1 |  TABLE ACCESS FULL| NON_PART |   140 |  3080 |     3   (0)| 00:00:01 |
    Note
       - dynamic sampling used for this statement
    SQL> select * from part;
    Execution Plan
    Plan hash value: 3392317243
    | Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
    |   0 | SELECT STATEMENT    |      |   140 |  3080 |     9   (0)| 00:00:01 |       |       |
    |   1 |  PARTITION RANGE ALL|      |   140 |  3080 |     9   (0)| 00:00:01 |     1 |     5 |
    |   2 |   TABLE ACCESS FULL | PART |   140 |  3080 |     9   (0)| 00:00:01 |     1 |     5 |
    Note
       - dynamic sampling used for this statement
    SQL>
    SQL> exec dbms_stats.gather_table_stats(user, 'non_part');
    PL/SQL procedure successfully completed.
    SQL> exec dbms_stats.gather_table_stats(user, 'part');
    PL/SQL procedure successfully completed.
    SQL>
    SQL>
    SQL> select * from non_part;
    Execution Plan
    Plan hash value: 1654070669
    | Id  | Operation         | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |          |   140 |  1540 |     3   (0)| 00:00:01 |
    |   1 |  TABLE ACCESS FULL| NON_PART |   140 |  1540 |     3   (0)| 00:00:01 |
    SQL> select * from part;
    Execution Plan
    Plan hash value: 3392317243
    | Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
    |   0 | SELECT STATEMENT    |      |   140 |  1540 |     9   (0)| 00:00:01 |       |       |
    |   1 |  PARTITION RANGE ALL|      |   140 |  1540 |     9   (0)| 00:00:01 |     1 |     5 |
    |   2 |   TABLE ACCESS FULL | PART |   140 |  1540 |     9   (0)| 00:00:01 |     1 |     5 |
    SQL>
    After analyzing the tables, notice that the Bytes column has changed value.

  • Avoid full table scan help...

    Hi,
    I have a SQL statement with some filters, and all the filter columns have indexes. The table is huge. Even though an index exists, the explain plan shows a full table scan; it is not recognizing the index. I used an index hint /*+ INDEX (SYM.SYM_DEPL,SYM.SYDB_DE_N18) */, but it still does not recognize the index in the explain plan, still goes for a full table scan, and the query takes more time.
    Please help me resolve this issue so that it uses the index rather than a full table scan.

    user13301356 wrote:
    I have a SQL statement with some filters, and all the filter columns have indexes. The table is huge. Even though an index exists, the explain plan shows a full table scan; it is not recognizing the index. I used an index hint /*+ INDEX (SYM.SYM_DEPL,SYM.SYDB_DE_N18) */, but it still does not recognize the index in the explain plan, still goes for a full table scan, and the query takes more time.
    Please help me resolve this issue so that it uses the index rather than a full table scan.
    What is your database version? Are all the columns in the table indexed? Copy and paste the query that you are executing.
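
    One thing worth checking on the hint itself (a general note on INDEX hint syntax, not something confirmed in this thread): the hint should reference the query alias (or the bare table name as written in the FROM clause) and the index name, without schema prefixes; otherwise the optimizer silently ignores it. A sketch with a hypothetical column and bind:

    -- Likely ignored: schema-qualified names in the hint do not match the
    -- table reference in the query.
    SELECT /*+ INDEX (SYM.SYM_DEPL, SYM.SYDB_DE_N18) */ *
    FROM   sym.sym_depl d
    WHERE  d.some_col = :val;

    -- Recognized: use the alias ("d") and the bare index name.
    SELECT /*+ INDEX (d SYDB_DE_N18) */ *
    FROM   sym.sym_depl d
    WHERE  d.some_col = :val;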

  • Prompt on DATE forces FULL TABLE SCAN

    When using a prompt on a datetime field, OBIEE sends SQL to the database with a TIMESTAMP literal.
    Because of the TIMESTAMP, the Oracle database does a full table scan. The field ATIST is a date with time in the physical database.
    By default ATIST was configured as TIMESTAMP in the RPD physical layer. The SQL request is sent to an Oracle 10 database.
    This is the query sent to the database:
    -------------------- Sending query to database named PlantControl1 (id: <<10167>>):
    select distinct T1471.ATIST as c1,
    T1471.GUTMENGEMELD2 as c2
    from
    AGRUECK T1471 /* Fact_ARBEITSGANGMELDUNGEN */
    where ( T1471.ATIST = TIMESTAMP '2005-04-01 13:48:05' )
    order by c1, c2
    The result takes more than half a minute to appear.
    Because OBIEE is using "TIMESTAMP", the database performs a full table scan instead of using the index.
    Using TO_DATE instead of TIMESTAMP, the result appears after a second.
    select distinct T1471.ATIST, T1471.GUTMENGEMELD2 as c2
    from
    AGRUECK T1471 /* Fact_ARBEITSGANGMELDUNGEN */
    where ( T1471.ATIST = to_date('2005.04.01 13:48:05', 'yyyy.mm.dd hh24:mi:ss') );
    Is there any way of resolving the issue?
    PS: When the field ATIST is configured as DATE in the physical layer, the SQL performs well because it uses "to_date" instead of "timestamp". But this cuts off the time part of the date. When it is configured as DATETIME, OBIEE uses TIMESTAMP again.
    What I need is a working date + time field.
    Has anybody encountered a similar problem?

    To be honest I haven't come across many scenarios where the Time has been important. Most of our reporting stops at Day level.
    What is the real-world business question being asked here that requires day + time?
    Incidentally if you change your datatype on the base table you will see it works fine.
    CREATE TABLE daytime( daytime TIMESTAMP );
    CREATE UNIQUE INDEX dt ON daytime  (daytime)
    SQL> set autotrace traceonly
    SQL> SELECT * FROM daytime
      2  WHERE daytime = TIMESTAMP '2007-04-01 13:00:45';
    no rows selected
    Execution Plan
    Plan hash value: 3985400340
    | Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |      |     1 |    13 |     1   (0)| 00:00:01 |
    |*  1 |  INDEX UNIQUE SCAN| DT   |     1 |    13 |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - access("DAYTIME"=TIMESTAMP' 2007-04-01 13:00:45.000000000')
    Statistics
              1  recursive calls
              0  db block gets
              1  consistent gets
              0  physical reads
              0  redo size
            242  bytes sent via SQL*Net to client
            362  bytes received via SQL*Net from client
              1  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              0  rows processed
    SQL>
    However, if it's a DATE it would appear to do some internal function call, which I guess is the source of the problem ...
    | Id  | Operation         | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |         |     1 |     9 |     2   (0)| 00:00:01 |
    |*  1 |  TABLE ACCESS FULL| DAYTIME |     1 |     9 |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter(INTERNAL_FUNCTION("DAYTIME")=TIMESTAMP' 2007-04-01
                  13:00:45.000000000')
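
    A common SQL-level workaround for that mismatch (a sketch, assuming the physical column really is a DATE; it is not something proposed above, and in OBIEE it would have to be driven from the physical layer) is to make the conversion happen on the literal rather than on the column, so the index on the DATE column stays usable:

    -- The conversion is applied to the literal, not to the column,
    -- so an index on the DATE column "daytime" can still be used.
    SELECT *
    FROM   daytime
    WHERE  daytime = CAST(TIMESTAMP '2007-04-01 13:00:45' AS DATE);

    -- Equivalent, spelled out with TO_DATE:
    SELECT *
    FROM   daytime
    WHERE  daytime = TO_DATE('2007-04-01 13:00:45', 'yyyy-mm-dd hh24:mi:ss');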

  • Cursors - large number of repeated full table scans

    Hi,
    If I execute the query below in SQL*Plus:
    select distinct cod_acct_no, a.cod_cc_brn, a.cod_prod
    from account_master a, product_master b
    where a.prod_no = b.prod_no
    and b.cod_sc_pkg = var_pi_cod_sc_pkg
    and cod_acct_stat not in ( 1,5)
    UNION
    select distinct a.cod_acct_no, a.cod_cc_brn, a.cod_prod
    from account_master a, charge_account sc
    where sc.cod_acct_no = a.cod_acct_no
    and sc.cod_sc_pkg = var_pi_cod_sc_pkg
    and cod_acct_stat not in ( 1,5)
    I get results within a minute. ACCOUNT_MASTER is the big table in this join, and only one full table scan happens on it.
    If the same query is used in a PL/SQL cursor, it takes a very long time, and after analysis it was found that multiple full table scans are happening on the ACCOUNT_MASTER table. Why is this happening? Any help please.
    Regards
    Sridhar

    Hi,
    First thing, you don't need to issue a distinct for each Select section in a Union: Union automatically does that, by sorting and returning distinct values of both result sets combined.
    But most importantly, the performance is poor because you're running a loop, not because the query itself is any worse than what it was in SQL*Plus.
    When you run a cursor for loop, the PL/SQL engine has to switch context to the SQL engine repeatedly to fetch the next row, one at a time, and even though you may
    not notice a performance degradation when you have a small resultset, as it scales up the problem becomes more visible.
    What is it you're performing inside the loop?
    Bottom line: you should avoid the cursor for loop solution and do it in a single statement, or at least use BULK COLLECT/FORALL operations; that should increase your performance dramatically.
    If you can post a little more about your problem, instead of the solution you initially thought of, it would help us help you. Basically, please provide your Oracle version, table structures and sample data for them,
    along with the expected output or data manipulation.
    Regards,
    Sitja.
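
    A minimal sketch of the BULK COLLECT idea, using the first branch of the query from the post (the limit size, the variable declaration and the per-row processing are illustrative assumptions):

    DECLARE
      var_pi_cod_sc_pkg  product_master.cod_sc_pkg%TYPE := NULL;  -- placeholder value

      CURSOR c_accts IS
        SELECT DISTINCT a.cod_acct_no, a.cod_cc_brn, a.cod_prod
        FROM   account_master a, product_master b
        WHERE  a.prod_no = b.prod_no
        AND    b.cod_sc_pkg = var_pi_cod_sc_pkg
        AND    a.cod_acct_stat NOT IN (1, 5);

      TYPE t_acct_tab IS TABLE OF c_accts%ROWTYPE;
      l_accts t_acct_tab;
    BEGIN
      OPEN c_accts;
      LOOP
        -- Fetch in batches: one context switch per 500 rows instead of one per row.
        FETCH c_accts BULK COLLECT INTO l_accts LIMIT 500;
        EXIT WHEN l_accts.COUNT = 0;

        FOR i IN 1 .. l_accts.COUNT LOOP
          NULL;  -- per-row processing goes here
        END LOOP;
      END LOOP;
      CLOSE c_accts;
    END;
    /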

  • Bitmap index column goes for full table scan

    Hi all,
    Database : 10g R2
    OS : Windows xp
    My select query is:
    SELECT tran_id, city_id, valid_records
    FROM transaction_details
    WHERE type_id=101;
    And the Explain Plan is :
    Plan
    SELECT STATEMENT ALL_ROWS  Cost: 29  Bytes: 8,876  Cardinality: 634
      1 TABLE ACCESS FULL TABLE TRANSACTION_DETAILS  Cost: 29  Bytes: 8,876  Cardinality: 634
    The total number of rows in the table is 1800;
    the distinct values of type_id are 101, 102 and 103,
    so I created a bitmap index on it.
    CREATE BITMAP INDEX btmp_typeid ON transaction_details
    (type_id)
    LOGGING
    NOPARALLEL;
    After creating the index, the explain plan shows the same. Why does it go for a full table scan?
    Kindly share your ideas on this.

    >
    I am sorry for being ignorant, can you please cite any scenario of locking due to bitmap indices? A link can be useful as well.
    >
    See my full reply in this thread
    Bitmap index for FKs on Fact tables
    >
    ETL is affected because DML operations (INSERT/UPDATE/DELETE) on tables with bitmapped indexes can have serious performance issues due to the serialization involved. Updating a single bitmapped column value (e.g. from 'M' to 'F' for gender) requires both bitmapped index blocks to be locked until the update is complete. A bitmap index stores ROWID ranges (min rowid - max rowid) that can span many, many records. The entire 'range' of rowids is locked in order to change just one value.
    To change from 'M', the 'M' rowid range for that one row is locked and the ROWID must be removed from the range by clearing the bit. To change to 'F', the 'F' rowid range needs to be found and locked, and the bit that corresponds to that rowid set. No other rows with rowids in the range can be changed, since this is a serial operation. If the range includes 1000 rows and they all need to be changed, it takes 1000 serial operations.

  • Source (Oracle 11g) with Oracle 8i using a DB link, with full table scan

    Hi,
    I'm using Oracle 8i and I'm facing an issue while using DB links.
    SourceDB1: I'm using Oracle 8i (source).
    select * from tab1 tabl
    where id in
    (select id from stab1@SourceDB1 where updt_seq_num > 167720 and work_grp_id in (2900,2901))
    With this, I am able to retrieve the data.
    But today we migrated one of our sources from Oracle 8i to Oracle 11g.
    When I execute
    select * from tab2 tabl
    where id in
    (select id from stab2@SourceDB2 where updt_seq_num > 167720 and work_grp_id in (2900,2901))
    we are not able to retrieve the data, as it is doing a full table scan.
    With the Oracle 8i source it uses an INDEX scan, but with the Oracle 11g source it does a full table scan.
    Could you please advise how to resolve this issue for the Oracle 11g source?
    Please let us know if you need any information.

    Blocks that are read via full table scans are stored in the buffer cache, but putting them at the MRU end ensures that they don't push the rest of the useful blocks out of the buffer cache. In your example, if you're full scanning a 2 GB table (T) with a 500 MB buffer cache, the first block that is read from T is put at the MRU end of the buffer cache, displacing the previous block that was at the MRU end of the cache. The next block that is read from T is also put at the MRU end of the buffer cache, displacing the previous block that was at the MRU end of the cache, so block #2 from T displaces block #1 from T. So, you're reading 2 GB of data, but you're constantly purging the older blocks, so you're only really using that last block of the buffer cache.
    Justin

  • Full Table Scans on Auto Ship Confirm Report (WSHRDASC) Causing Performance Issues

    Severe performance issue with Auto Ship Confirm report WSHRDASC.
    From the statspack reports, a single SQL statement is currently consuming approximately 50% of all physical disk I/O.
    Two full table scans are coming from this problem query:
    Rows Row Source Operation
    SORT ORDER BY
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    TABLE ACCESS FULL WSH_NEW_DELIVERIES
    TABLE ACCESS BY INDEX ROWID HZ_CUST_ACCOUNTS
    INDEX UNIQUE SCAN HZ_CUST_ACCOUNTS_U1 (object id 81392)
    TABLE ACCESS BY INDEX ROWID HZ_PARTIES
    INDEX UNIQUE SCAN HZ_PARTIES_U1 (object id 172682)
    TABLE ACCESS BY INDEX ROWID WSH_EXCEPTIONS
    INDEX RANGE SCAN WSH_EXCEPTIONS_N9 (object id 133587)
    TABLE ACCESS BY INDEX ROWID WSH_DELIVERY_LEGS
    INDEX RANGE SCAN WSH_DELIVERY_LEGS_N1 (object id 46224)
    TABLE ACCESS BY INDEX ROWID WSH_DOCUMENT_INSTANCES
    INDEX RANGE SCAN WSH_DOCUMENT_INSTANCES_N1 (object id 46405)
    TABLE ACCESS FULL WSH_PICKING_BATCHES
    TABLE ACCESS BY INDEX ROWID FND_LOOKUP_VALUES
    INDEX RANGE SCAN FND_LOOKUP_VALUES_U1 (object id 34010)
    Please help. I have applied one patch for this issue, Patch 5531283, but the same issue remains.

    Hi;
    What is your EBS version?
    If note "Full Table Scans on Auto Ship Confirm Report (WSHRDASC) Causing Performance Issues" [ID 393014.1] doesn't help, then I suggest raising an SR.
    Regards
    Helios
