Physical read & logical read

Hi
What is meant by a physical read and a logical read in Oracle? Can anybody explain this clearly?
Aqueel.

A physical read is a read from disk; a logical read is a read from memory.
When a session (query) requires information (data blocks), it first looks in the buffer cache (SGA). If it finds the required data block there, it reads it from memory - that is a logical read. If it doesn't find the required block in memory, it reads it from disk (the datafile) - that is a physical read.
People generally aim to reduce physical reads and read as much as possible from memory. I was under the same wrong impression until I came across a good paper by tuning expert Cary Millsap, 'Why You Should Focus on LIOs Instead of PIOs'. Also search http://asktom.oracle.com for 'logical reads'; there are good discussions on this point.
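For example, in SQL*Plus you can see both counters for a single statement with autotrace (a minimal sketch; the table name is just a placeholder):

SET AUTOTRACE TRACEONLY STATISTICS
SELECT COUNT(*) FROM scott.emp;
-- The statistics section then shows 'consistent gets' and 'db block gets'
-- (together, the logical reads) alongside 'physical reads'.
SET AUTOTRACE OFF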
Jaffar

Similar Messages

  • Physical reads, writes and logical reads,writes

    hi there,
    is there any way to find the physical reads and writes, and logical reads and writes,
    other than the statspack report?

    It depends on which version of Oracle you are using.
    If you are using 10g, you can get very comprehensive information from the dictionary views.
    If you are on 9i or earlier, use v$sysstat and look for 'physical reads' and 'session logical reads'.
    The values derived from v$sysstat are cumulative, i.e. counted since the instance started.
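    For example, a minimal sketch against v$sysstat (instance-wide, cumulative since startup; 'db block changes' is the closest counter to logical writes):
    SELECT name, value
    FROM   v$sysstat
    WHERE  name IN ('physical reads', 'physical writes',
                    'session logical reads', 'db block changes');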
    Jaffar

  • Logical reads VS physical reads

    hello all,
    What is the difference between logical reads and physical reads? I do get the part about a logical read (from the buffer) and a physical read (from disk). I also know physical reads are bad, but is it true that in the Oracle world logical reads are bad too? If so, why? Could you please explain which one to look for? I am going through an AWR report and I see segments by logical/physical reads.

    The first and foremost difference is that physical reads are done from the hard disk, and this is always going to be slower than memory. That's why it is said that physical I/Os must be reduced. Logical I/Os are better because they are done from memory. The reason logical I/Os are not "free" either is that each logical I/O requires access protected by latches, so lots of latch gets put them into contention, and latch contention makes logical I/O slower. So it is better to do the work in fewer I/Os, even when they are logical.
    Lots of logical I/Os probably mean that you are accessing unnecessary data which may not be required.
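    As a quick check (a sketch, not a tuning prescription), you can look at the latch that protects buffer access:
    SELECT name, gets, misses, sleeps
    FROM   v$latch
    WHERE  name = 'cache buffers chains';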
    HTH
    Aman....

  • Find physical reads and logical reads ?

    Hi,
    how can I find out the physical reads and logical reads?

    Well, I would suggest you read the statspack/AWR report. As Amit suggested, it has a load profile section, which would be helpful in finding these details. Also, which version are you on? If you are on 10g, Enterprise Manager can give you a compare-period report, where you can compare two different periods and check which particular part has changed.
    In addition to this, look up these metrics in the documentation. Physical reads and logical reads are statistics the system is constantly recording, and from 10g onwards Oracle keeps track of deviations in these statistics, which makes it easier for you to manage and monitor them.
    I shall try to find the names of some views related to this and post them.
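    As a starting point, here is a sketch against the AWR history views (assumes 10g or later and a Diagnostics Pack licence):
    SELECT sn.begin_interval_time, st.stat_name, st.value
    FROM   dba_hist_sysstat st
           JOIN dba_hist_snapshot sn
             ON sn.snap_id = st.snap_id
            AND sn.dbid = st.dbid
            AND sn.instance_number = st.instance_number
    WHERE  st.stat_name IN ('physical reads', 'session logical reads')
    ORDER  BY sn.begin_interval_time;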
    Aman....

  • Logical read Vs Physical Reads

    Hi,
    I want to know how we should balance logical reads and physical reads on the database.
    I generally assume logical reads are always better, but if too many sessions access the same
    objects it will cause latch contention.
    If it is a physical read, it will take more time to read from disk than from the SGA.
    If I look at the statspack report (load profile, per second / per transaction) I see:
    Logical reads: 59,846.54 840.03
    Physical reads: 1,095.91 15.38
    Is there any standard for logical reads on a database? If there is latch contention, can we
    decrease the SGA to avoid the contention, so that there is a balance between physical reads
    and logical reads?

    amitbansode wrote:
    I want to know how we should balance the logical reads and physical reads on the database.
    There is no balance, as that implies some kind of "perfect ratio" for logical I/O vs. physical I/O.
    A high percentage of physical I/O can be perfectly acceptable and normal and correct for a specific database (e.g. think of a system collecting telemetry data where 90% or more of all I/O is writing new telemetry into the database and the remaining 10% is querying the data, with old data being aged out from the database using partition drops which is negligible I/O).
    A high percentage of logical I/O can be indicative of a serious application design problem - where 80GB of database data is read and read again and again and again.. resulting in over a TB of logical I/O. (actually saw this in production database some years ago)
    So there is no balance (e.g. not true that physical I/O = BAD and logical I/O = GOOD). No perfect cache hit ratio figure that tells you that the database is doing the right amounts of logical and physical I/O.
    And I want to emphasise what Mark said - tuning requires you to identify the performance problem first, before trying to solve it.
    It is very dangerous to take one metric, like the I/O cache hit ratio, and attempt to tune that. It alone is meaningless. Just like memory utilisation alone is useless and CPU utilisation alone is useless, it does not by any means point to an actual performance problem. E.g. 100% CPU utilisation can mean the hardware has insufficient horsepower, rather than performance issues related to application design, database settings or kernel configuration.
    I often repeat the following mantra here on OTN - a fundamental concept IMO for software engineering:
    A solution is only as good as the problem definition.
    Identify the problem first - correctly and comprehensively. And then solve it.

  • Understanding Statistics io and Logical reads - is logical reads information correct

    Hi,
    This question arose during a performance test on SQL Server 2012 with SP2.
    In the following example, the table has only one column, of data type INT.
    After inserting 592 records the SP does only 1 logical read, but as soon as one more record is inserted the SP reports 2 logical reads. Why?
    In the code, I have highlighted the difference between
    STATISTICS IO logical reads and sys.dm_exec_procedure_stats.total_logical_reads
    to understand the difference between these two figures.
    set nocount on
    GO
    create table dbo.test_storage_and_logical_reads
    (
    employee_number int --primary key
    )
    GO
    CREATE procedure dbo.test_sp_logical_reads
    as
    begin
    select
    employee_number
    from dbo.test_storage_and_logical_reads
    order by employee_number desc
    end
    go
    insert into dbo.test_storage_and_logical_reads
    (employee_number)
    VALUES (1)
    GO 592
    EXEC sp_spaceused 'dbo.test_storage_and_logical_reads'
    set statistics io on
    GO
    exec dbo.test_sp_logical_reads
    GO
    ----Table 'test_storage_and_logical_reads'. Scan count 1, logical reads 1, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    ---- But sys.dm_exec_procedure_stats.total_logical_reads is reporting 3 instead.
    truncate table dbo.test_storage_and_logical_reads
    GO
    insert into dbo.test_storage_and_logical_reads
    (employee_number)
    VALUES (1)
    GO 593
    EXEC sp_spaceused 'dbo.test_storage_and_logical_reads'
    exec dbo.test_sp_logical_reads
    GO
    ----Table 'test_storage_and_logical_reads'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    ---- But sys.dm_exec_procedure_stats.total_logical_reads is reporting 4 instead.
    --drop procedure dbo.test_sp_logical_reads
    --drop table dbo.test_storage_and_logical_reads
    GO
    NB: I do understand the logical and physical reads. Thanks.

    Hi.
    I still need to test the scenario, but if you read the definition of this DMV it says 'Returns aggregate performance statistics for cached stored procedures', so I suspect (though I am not sure) that the 3 can be due to the aggregated output given by this DMV. I will test it
    on SQL Server 2012 SP2 and get back to you.
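    One way to check the aggregation theory (a sketch; it assumes the procedure is still in the plan cache) is to compare the DMV's running totals with its execution count:
    SELECT OBJECT_NAME(object_id, database_id) AS proc_name,
           execution_count,
           total_logical_reads,
           total_logical_reads / execution_count AS avg_logical_reads_per_exec
    FROM   sys.dm_exec_procedure_stats
    WHERE  object_id = OBJECT_ID('dbo.test_sp_logical_reads');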
    Hi Shanky / Sean Gallardy
    I think Sean Gallardy created the "test_sp_logical_reads" table in the master DB. I had that same issue when I accidentally created the table in master, but when I create the same table in a user DB such as "sample1",
    it only allocates 1 page for the 592 rows.
    hi Asam,
    Let me narrow down your question, if my understanding is correct ...
    <<When you have the free space in a page of a Heap, why SQL Server is assigning New page for a New Record>>
    The answer is in the PFS bytes.
    PFS bytes are not fully reset until the page is reallocated. On deallocation, the only bit in the PFS byte that's changed is the allocation status bit - this makes it very easy to roll back a deallocation.
    --Before inserting the 593rd record, please execute the query below
    --Note: replace "database1" with your DB name
    DBCC TRACEON (3604);
    DBCC IND ('database1', 'test_storage_and_logical_reads', 1);
    --Result
    PageFID PagePID IAMFID IAMPID ObjectID IndexID PartitionNumber PartitionID iam_chain_type PageType IndexLevel NextPageFID NextPagePID PrevPageFID PrevPagePID
    1 2770 NULL NULL 517576882 0 1 72057594039828480 In-row data 10 NULL 0 0 0 0
    1 2769 1 2770 517576882 0 1 72057594039828480 In-row data 1 0 0 0 0 0
    --you're interested in the second record (page)
    DBCC PAGE ('database1', 1, 2769,3) WITH TABLERESULTS;
    --you will find the 38th row as
    --PFS (1:1) = 0x64 MIXED_EXT ALLOCATED 100_PCT_FULL
    --which means your page is full
    --if you try inserting only 300 rows it will show PFS (1:1) = 0x61 MIXED_EXT ALLOCATED 50_PCT_FULL
    please refer to the links below
    http://blogs.msdn.com/b/sqlserverstorageengine/archive/2006/07/08/under-the-covers-gam-sgam-and-pfs-pages.aspx
    http://aboutsqlserver.com/2013/12/17/sql-server-storage-engine-heap-tables/
    "This page has a PFS byte value of 0x04 - how can it be full when its not allocated?"
    The answer is that PFS bytes are not fully reset until the page is reallocated. On deallocation, the only bit in the PFS byte that's changed is the allocation status bit - this makes it very easy to rollback a deallocation.
    Here's an example. Using a database with a simple table with one row.
    A DBCC PAGE of the IAM page includes:
    PFS (1:1) = 0x70 IAM_PG MIXED_EXT ALLOCATED 0_PCT_FULL
    If I run the following:
    BEGIN TRANSACTION
    DROP TABLE T1
    GO
    And then do the DBCC PAGE again, the output now includes:
    PFS (1:1) = 0x30 IAM_PG MIXED_EXT 0_PCT_FULL
    And if I roll back the transaction, the DBCC PAGE output reverts to:
    PFS (1:1) = 0x70 IAM_PG MIXED_EXT ALLOCATED 0_PCT_FULL
    Thanks
    Saravana Kumar C

  • Logical Reads for a SQL ID

    Dear Experts,
    Any idea how we can find out Logical reads for a SQL ID over a 24 hour period? AWR shows physical reads info.. any data dictionary view that captures logical I/O?
    Thanks

    Figured it out.. This is it:
    col Time_Taken for 999999999999
    col log_reads for 9999999999999
    select sql_id, sum(disk_reads_delta) as phy_reads, sum(BUFFER_GETS_DELTA) as log_reads, sum(ELAPSED_TIME_DELTA/1000000) as Time_Taken
    from dba_maint.DB_HIST_SQLSTATS
    where PARSING_SCHEMA_NAME='USERNAME' and BEGIN_INTERVAL_TIME>='03-JAN-13' and sql_id in ('abcdefghijkl')
    group by sql_id order by  phy_reads, log_reads, time_taken desc;
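    For reference, an equivalent sketch using the standard AWR views (DBA_HIST_SQLSTAT holds per-snapshot deltas and DBA_HIST_SNAPSHOT the interval times; requires the Diagnostics Pack, and the SQL_ID below is just the placeholder from the query above):
    select s.sql_id,
           sum(s.disk_reads_delta)           as phy_reads,
           sum(s.buffer_gets_delta)          as log_reads,
           sum(s.elapsed_time_delta)/1000000 as time_taken
    from   dba_hist_sqlstat s
           join dba_hist_snapshot sn
             on sn.snap_id = s.snap_id
            and sn.dbid = s.dbid
            and sn.instance_number = s.instance_number
    where  s.sql_id = 'abcdefghijkl'
    and    sn.begin_interval_time >= systimestamp - interval '24' hour
    group by s.sql_id;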

  • Buffer gets and logical reads

    Hello !
    Does the counter for the buffer gets event include the logical reads?
    Does the buffer gets metric include reads from the undo buffer?
    Thanks and regards,
    Pavel

    Hi,
    buffer gets = number of times a block was requested from the buffer cache. A buffer get always results in a logical read. Depending on whether or not a copy of the block is available in the buffer cache, a logical read may or may not involve a physical read. So "buffer gets" and "logical reads" are basically synonyms and are often used interchangeably.
    Oracle doesn't have a special "undo buffer". Undo blocks are stored in rollback segments in the UNDO tablespace, and are managed in the same way data blocks are (they're even protected by redo). If a consistent get requires reading from the UNDO tablespace, then the statistics counters will show that, i.e. there will be one more consistent get in your autotrace.
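    For example, you can watch your own session's counters before and after a query (a minimal sketch; in these statistics, 'db block gets' + 'consistent gets' = 'session logical reads'):
    SELECT sn.name, ms.value
    FROM   v$mystat ms
           JOIN v$statname sn ON sn.statistic# = ms.statistic#
    WHERE  sn.name IN ('db block gets', 'consistent gets',
                       'session logical reads', 'physical reads');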
    For more information and some examples, see a thread at askTom:
    http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:549546900346542976
    Best regards,
    Nikolay

  • Buffer_gets and 'session logical reads'

    When I compare the following values:
    select sum(buffer_gets) from v$sqlarea
    and
    select value from v$sysstat where name='session logical reads'
    it usually happens that sum(buffer_gets) > 'session logical reads'.
    Does buffer_gets include both logical and physical reads? As long as the SGA keeps all the SQL statements, I guess sum(buffer_gets) will be bigger than 'session logical reads'. Is this correct?
    Thxs

    Hi,
    buffer gets = number of times a block was requested from the buffer cache. A buffer get always results in a logical read. Depending on whether or not a copy of the block is available in the buffer cache, a logical read may or may not involve a physical read. So "buffer gets" and "logical reads" are basically synonyms and are often used interchangeably.
    Oracle doesn't have a special "undo buffer". Undo blocks are stored in rollback segments in the UNDO tablespace, and are managed in the same way data blocks are (they're even protected by redo). If a consistent get requires reading from the UNDO tablespace, then the statistics counters will show that, i.e. there will be one more consistent get in your autotrace.
    For more information and some examples, see a thread at askTom:
    http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:549546900346542976
    Best regards,
    Nikolay

  • About Logical Reads

    Problem:
    1.
    If you run some SQL code and look at the logical reads, is it beneficial for performance to have a low number of logical reads no matter what?
    2.
    What is the definition of logical reads?
    I'm a beginner in performance tuning.

    Logical reads is a metric that can be used to evaluate query plans. Fewer logical reads is usually better. However, it's not the best metric; that's wallclock time, in my opinion. In particular, looking blindly at logical reads may lead you to think that a plan with lots of CPU-expensive hashing operations is better than a simpler plan with nested loop joins.
    The nice thing with logical reads over wallclock time, is that logical reads is not affected by other operations on the server.
    OP, overall I agree with Erland: all other things being equal, fewer logical reads is better than more logical reads.
    But sometimes other things are not all equal - it does you little good to trade logical reads for a ton of CPU instead!
    However, logical reads *are* affected, dramatically, by other operations on the server - they are not as purely "logical" as one might like! If your buffers are empty and you have to do physical IO to get the data, even the *logical* reads show higher numbers! Exactly why this is the case is obscure, or complicated, or something, but it is. Similarly, even contention with other processes, especially for buffer space, can affect the "logical" reads to a smaller or larger extent.
    But the good news is that sometimes logical reads are really, really cheap, so sometimes you can get a query that does 100,000 logical reads but runs in a tenth of a second anyway. Whether it's worth a battle to cut the logical reads down then is up to you.
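    For example, you can capture both metrics for the query under test (a minimal sketch; the table name is just a placeholder):
    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;
    SELECT COUNT(*) FROM dbo.SomeTable;  -- replace with the query you are tuning
    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;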
    HTH,
    Josh

  • Scan count VS logical Reads

    I have two versions of a query.
    1.  Scan count 136, logical reads 2776907,CPU time = 230848 ms,  elapsed time = 55753 ms.
    2. Scan count 0, logical reads 89225788, CPU time = 437696 ms,  elapsed time = 46195 ms.
    The main difference I can see is that query 2 takes less elapsed time. Which one is better here? Both execution plans have lots of hash joins.
    Srinivasan

    Hi Srinivasan,
    Scan counts should be as low as possible. As for logical reads, what you want is for reads to be satisfied from the data cache rather than from the drive, so reads showing up as logical rather than physical is the preferred pattern.
    Scan count: number of index or table scans performed.
    Logical reads: number of pages read from the data cache.
    For more info you can refer to the MSDN link => http://msdn.microsoft.com/en-us/library/ms184361.aspx
    Regards Harsh

  • AWR's buffer cache reads and logical reads

    In the AWR report, under the "Segments by Logical Reads" section, there is a total of logical reads, which I assume is in units of blocks. Under the "IOStat by Function summary" section, there are buffer cache reads in units of bytes. Shouldn't the number of logical reads x 8k (if the block size is 8k) equal the number of bytes of buffer cache reads?
    They are not equal, not even close. Does anybody know why? Thanks

    Hi,
    buffer gets = number of times a block was requested from the buffer cache. A buffer get always results in a logical read. Depending on whether or not a copy of the block is available in the buffer cache, a logical read may or may not involve a physical read. So "buffer gets" and "logical reads" are basically synonyms and are often used interchangeably.
    Oracle doesn't have a special "undo buffer". Undo blocks are stored in rollback segments in the UNDO tablespace, and are managed in the same way data blocks are (they're even protected by redo). If a consistent get requires reading from the UNDO tablespace, then the statistics counters will show that, i.e. there will be one more consistent get in your autotrace.
    For more information and some examples, see a thread at askTom:
    http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:549546900346542976
    Best regards,
    Nikolay

  • Logical reads per second

    I have two databases - one is a clone of the other, made a few months ago. Database A has somewhat more data, since it's the active production database, but not significantly more - perhaps 10% greater. They are on different boxes. Database A is on a Sun 280R 2-processor box. Database B is on a Dell 2950 with 2 dual-core processors. So this isn't exactly comparing apples to apples. However, when I run the same query on the two databases, I get radically different results. Against Database A, the query takes about 7 minutes. On Database B, it takes about 2 seconds. Logical reads per second on Database A reach 80,000-90,000; on Database B, they're about 3,000. There are a few configuration differences (both databases use automatic memory management):
    Parameter                       Database A      Database B
    db_file_multiblock_read_count   64              16
    log_buffer                      14290432        2104832
    open_cursors                    1250            300
    sga_max_size                    4194304000      536870912
    sga_target                      2634022912      536870912
    shared_pool_reserved_size       38587596        7340032
    The timings were taken off-hours so neither database would be busy. I'm baffled by the extreme difference in execution times. Any help appreciated!
    Thanks,
    Harry

    OK, let's start here....
    Database A (TEMPOP)
    SQL> show parameter optimizer
    NAME TYPE VALUE
    optimizer_dynamic_sampling integer 2
    optimizer_features_enable string 10.2.0.3
    optimizer_index_caching integer 0
    optimizer_index_cost_adj integer 100
    optimizer_mode string ALL_ROWS
    optimizer_secure_view_merging boolean TRUE
    SQL> show parameter db_file_multi
    NAME TYPE VALUE
    db_file_multiblock_read_count integer 64
    SQL> show parameter db_block_size
    NAME TYPE VALUE
    db_block_size integer 8192
    ===================================================
    Database B (TEMPO11)
    SQL> show parameter optimizer
    NAME TYPE VALUE
    optimizer_dynamic_sampling integer 2
    optimizer_features_enable string 10.2.0.1
    optimizer_index_caching integer 0
    optimizer_index_cost_adj integer 100
    optimizer_mode string ALL_ROWS
    optimizer_secure_view_merging boolean TRUE
    SQL> show parameter db_file_multi
    NAME TYPE VALUE
    db_file_multiblock_read_count integer 16
    SQL> show parameter db_block_size
    NAME TYPE VALUE
    db_block_size integer 8192
    =================================================================
    Now for the query that's causing the problem:
    SELECT dsk_document_attribute.value_text inspect_permit_no,
              NVL (activity_task_list.revised_due_date,
                   activity_task_list.default_due_date)
                 inspect_report_due_date,
              agency_interest.master_ai_id agency_interest_id,
              agency_interest.master_ai_name agency_interest_name,
              get_county_code_single (agency_interest.master_ai_id)
                 parish_or_county_code,
              agency_interest_address.physical_address_line_1 inspect_addr_1,
              agency_interest_address.physical_address_line_2 inspect_addr_2,
              agency_interest_address.physical_address_line_3 inspect_addr_3,
              agency_interest_address.physical_address_municipality inspect_city,
              agency_interest_address.physical_address_state_code state_id,
              agency_interest_address.physical_address_zip inspect_zip,
              person.master_person_first_name person_first_name,
              person.master_person_middle_initial person_middle_initial,
              person.master_person_last_name person__last_name,
              SUBSTR (person_telecom.address_or_phone, 1, 14) person_phone,
              activity_task_list.requirement_id
       FROM dsk_document_attribute,
            agency_interest,
            activity_task_list,
            agency_interest_address,
            dsk_central_file dsk_aaa,
            dsk_central_file dsk_frm,
            person,
            person_telecom
       WHERE agency_interest.int_doc_id = 0
             AND agency_interest.master_ai_id =
                   agency_interest_address.master_ai_id
             AND agency_interest.int_doc_id = agency_interest_address.int_doc_id
             AND agency_interest.master_ai_id = dsk_frm.master_ai_id
             AND dsk_aaa.int_doc_id = activity_task_list.int_doc_id
             AND dsk_frm.int_doc_id = dsk_document_attribute.int_doc_id
             AND dsk_frm.doc_type_specific_code =
                   dsk_document_attribute.doc_type_specific_code
             AND dsk_frm.activity_category_code = 'PER'
             AND dsk_frm.activity_class_code = 'GNP'
             AND dsk_frm.activity_type_code IN ('MAB', 'NAB', 'REB')
             AND dsk_frm.program_code = '80'
             AND dsk_frm.doc_type_general_code = 'FRM'
             AND dsk_frm.doc_type_specific_code = 'PERSET'
             AND dsk_aaa.doc_template_id = 2000
             AND dsk_frm.master_ai_id = dsk_aaa.master_ai_id
             AND dsk_frm.activity_category_code = dsk_aaa.activity_category_code
             AND dsk_frm.program_code = dsk_aaa.program_code
             AND dsk_frm.activity_class_code = dsk_aaa.activity_class_code
             AND dsk_frm.activity_type_code = dsk_aaa.activity_type_code
             AND dsk_frm.activity_year = dsk_aaa.activity_year
             AND dsk_frm.activity_num = dsk_aaa.activity_num
             AND dsk_document_attribute.doc_attribute_code = 'PERMIT_NO'
             AND activity_task_list.requirement_id IN ('3406', '3548', '3474')
             AND activity_task_list.reference_task_id = 0
             AND NVL (activity_task_list.status_code, '$$$') <> '%  '
             AND person.master_person_id(+) =
                   f_get_gp_contact (agency_interest.master_ai_id)
             AND person.int_doc_id(+) = 0
             AND person.master_person_id = person_telecom.master_person_id(+)
             AND person.int_doc_id = person_telecom.int_doc_id(+)
              AND person_telecom.telecom_type_code(+) = 'wp';
    Here's the explain plan for Database A, where the query takes 7-8 minutes or more:
    PLAN_TABLE_OUTPUT
    | Id  | Operation                           | Name                       | Rows  | Bytes | Cost (%CPU)|
    |   0 | SELECT STATEMENT                    |                            |     1 |   253 |    34   (3)|
    |   1 |  NESTED LOOPS                       |                            |     1 |   253 |    34   (3)|
    |   2 |   NESTED LOOPS                      |                            |     1 |   224 |    32   (0)|
    |   3 |    NESTED LOOPS OUTER               |                            |     1 |   169 |    31   (0)|
    |   4 |     NESTED LOOPS OUTER              |                            |     1 |   144 |    29   (0)|
    |   5 |      NESTED LOOPS                   |                            |     1 |   122 |    27   (0)|
    |   6 |       NESTED LOOPS                  |                            |     1 |    81 |    26   (0)|
    PLAN_TABLE_OUTPUT
    |   7 |        NESTED LOOPS                 |                            |     1 |    48 |    19   (0)|
    |   8 |         INLIST ITERATOR             |                            |       |       |            |
    |*  9 |          TABLE ACCESS BY INDEX ROWID| ACTIVITY_TASK_LIST         |     1 |    21 |    17   (0)|
    |* 10 |           INDEX RANGE SCAN          | ACTIVITY_TASK_LIST_FK11    |   106 |       |     4   (0)|
    |* 11 |         TABLE ACCESS BY INDEX ROWID | DSK_CENTRAL_FILE           |     1 |    27 |     2   (0)|
    |* 12 |          INDEX UNIQUE SCAN          | PK_DSK_CENTRAL_FILE        |     1 |       |     1   (0)|
    |* 13 |        TABLE ACCESS BY INDEX ROWID  | DSK_CENTRAL_FILE           |     1 |    33 |     7   (0)|
    |* 14 |         INDEX RANGE SCAN            | CF_MASTER_AI_ID_IND        |     9 |       |     2   (0)|
    |  15 |       TABLE ACCESS BY INDEX ROWID   | AGENCY_INTEREST            |     1 |    41 |     1   (0)|
    |* 16 |        INDEX UNIQUE SCAN            | PK_AGENCY_INTEREST         |     1 |       |     0   (0)|
    |  17 |      TABLE ACCESS BY INDEX ROWID    | PERSON                     |     1 |    22 |     2   (0)|
    PLAN_TABLE_OUTPUT
    |* 18 |       INDEX UNIQUE SCAN             | PK_PERSON                  |     1 |       |     1   (0)|
    |  19 |     TABLE ACCESS BY INDEX ROWID     | PERSON_TELECOM             |     1 |    25 |     2   (0)|
    |* 20 |      INDEX UNIQUE SCAN              | PK_PERSON_TELECOM          |     1 |       |     1   (0)|
    |  21 |    TABLE ACCESS BY INDEX ROWID      | AGENCY_INTEREST_ADDRESS    |     1 |    55 |     1   (0)|
    |* 22 |     INDEX UNIQUE SCAN               | PK_AGENCY_INTEREST_ADDRESS |     1 |       |     0   (0)|
    |  23 |   TABLE ACCESS BY INDEX ROWID       | DSK_DOCUMENT_ATTRIBUTE     |     1 |    29 |     1   (0)|
    |* 24 |    INDEX UNIQUE SCAN                | PK_DSK_DOCUMENT_ATTRIBUTE  |     1 |       |     0   (0)|
    Predicate Information (identified by operation id):
    PLAN_TABLE_OUTPUT
       9 - filter("ACTIVITY_TASK_LIST"."REFERENCE_TASK_ID"=0 AND
                  NVL("ACTIVITY_TASK_LIST"."STATUS_CODE",'$$$')<>'%  ')
      10 - access("ACTIVITY_TASK_LIST"."REQUIREMENT_ID"=3406 OR
                  "ACTIVITY_TASK_LIST"."REQUIREMENT_ID"=3474 OR "ACTIVITY_TASK_LIST"."REQUIREMENT_ID"=3548)
      11 - filter("DSK_AAA"."DOC_TEMPLATE_ID"=2000 AND "DSK_AAA"."ACTIVITY_CLASS_CODE"='GNP' AND
                  "DSK_AAA"."PROGRAM_CODE"='80' AND "DSK_AAA"."ACTIVITY_CATEGORY_CODE"='PER' AND
                  ("DSK_AAA"."ACTIVITY_TYPE_CODE"='MAB' OR "DSK_AAA"."ACTIVITY_TYPE_CODE"='NAB' OR
                  "DSK_AAA"."ACTIVITY_TYPE_CODE"='REB'))
      12 - access("ACTIVITY_TASK_LIST"."INT_DOC_ID"="DSK_AAA"."INT_DOC_ID")
      13 - filter("DSK_FRM"."ACTIVITY_CLASS_CODE"='GNP' AND "DSK_FRM"."PROGRAM_CODE"='80' AND
    PLAN_TABLE_OUTPUT
                  "DSK_FRM"."DOC_TYPE_SPECIFIC_CODE"='PERSET' AND "DSK_FRM"."ACTIVITY_CATEGORY_CODE"='PER' AND
                  "DSK_FRM"."DOC_TYPE_GENERAL_CODE"='FRM' AND ("DSK_FRM"."ACTIVITY_TYPE_CODE"='MAB' OR
                  "DSK_FRM"."ACTIVITY_TYPE_CODE"='NAB' OR "DSK_FRM"."ACTIVITY_TYPE_CODE"='REB') AND
                  "DSK_FRM"."ACTIVITY_TYPE_CODE"="DSK_AAA"."ACTIVITY_TYPE_CODE" AND
                  "DSK_FRM"."ACTIVITY_YEAR"="DSK_AAA"."ACTIVITY_YEAR" AND
                  "DSK_FRM"."ACTIVITY_NUM"="DSK_AAA"."ACTIVITY_NUM")
      14 - access("DSK_FRM"."MASTER_AI_ID"="DSK_AAA"."MASTER_AI_ID")
      16 - access("AGENCY_INTEREST"."MASTER_AI_ID"="DSK_FRM"."MASTER_AI_ID" AND
                  "AGENCY_INTEREST"."INT_DOC_ID"=0)
      18 - access("PERSON"."MASTER_PERSON_ID"(+)="F_GET_GP_CONTACT"("AGENCY_INTEREST"."MASTER_AI_ID
                  ") AND "PERSON"."INT_DOC_ID"(+)=0)
    PLAN_TABLE_OUTPUT
      20 - access("PERSON"."MASTER_PERSON_ID"="PERSON_TELECOM"."MASTER_PERSON_ID"(+) AND
                  "PERSON_TELECOM"."TELECOM_TYPE_CODE"(+)='wp' AND
                  "PERSON"."INT_DOC_ID"="PERSON_TELECOM"."INT_DOC_ID"(+))
      22 - access("AGENCY_INTEREST"."MASTER_AI_ID"="AGENCY_INTEREST_ADDRESS"."MASTER_AI_ID" AND
                  "AGENCY_INTEREST_ADDRESS"."INT_DOC_ID"=0)
      24 - access("DSK_FRM"."INT_DOC_ID"="DSK_DOCUMENT_ATTRIBUTE"."INT_DOC_ID" AND
                  "DSK_DOCUMENT_ATTRIBUTE"."DOC_ATTRIBUTE_CODE"='PERMIT_NO' AND
                  "DSK_DOCUMENT_ATTRIBUTE"."DOC_TYPE_SPECIFIC_CODE"='PERSET')============================================================================
    Here's the explan plan output for Database B, where the query takes 2-3 seconds:
    PLAN_TABLE_OUTPUT
    | Id  | Operation                           | Name                       | Rows  | Bytes | Cost (%CPU)|
    |   0 | SELECT STATEMENT                    |                            |     1 |   289 |    39   (0)|
    |   1 |  NESTED LOOPS OUTER                 |                            |     1 |   289 |    39   (0)|
    |   2 |   NESTED LOOPS                      |                            |     1 |   260 |    37   (0)|
    |   3 |    NESTED LOOPS                     |                            |     1 |   205 |    36   (0)|
    |   4 |     NESTED LOOPS OUTER              |                            |     1 |   172 |    35   (0)|
    |   5 |      NESTED LOOPS                   |                            |     1 |   145 |    34   (0)|
    |   6 |       NESTED LOOPS                  |                            |     1 |   104 |    33   (0)|
    PLAN_TABLE_OUTPUT
    |   7 |        NESTED LOOPS                 |                            |     1 |    61 |    26   (0)|
    |   8 |         INLIST ITERATOR             |                            |       |       |            |
    |*  9 |          TABLE ACCESS BY INDEX ROWID| ACTIVITY_TASK_LIST         |     1 |    25 |    24   (0)|
    |* 10 |           INDEX RANGE SCAN          | ACTIVITY_TASK_LIST_FK11    |   145 |       |     4   (0)|
    |* 11 |         TABLE ACCESS BY INDEX ROWID | DSK_CENTRAL_FILE           |     1 |    36 |     2   (0)|
    |* 12 |          INDEX UNIQUE SCAN          | PK_DSK_CENTRAL_FILE        |     1 |       |     1   (0)|
    |* 13 |        TABLE ACCESS BY INDEX ROWID  | DSK_CENTRAL_FILE           |     1 |    43 |     7   (0)|
    |* 14 |         INDEX RANGE SCAN            | CF_MASTER_AI_ID_IND        |     9 |       |     2   (0)|
    |  15 |       TABLE ACCESS BY INDEX ROWID   | AGENCY_INTEREST            |     1 |    41 |     1   (0)|
    |* 16 |        INDEX UNIQUE SCAN            | PK_AGENCY_INTEREST         |     1 |       |     0   (0)|
    |  17 |      TABLE ACCESS BY INDEX ROWID    | PERSON                     |     8 |   216 |     1   (0)|
    PLAN_TABLE_OUTPUT
    |* 18 |       INDEX UNIQUE SCAN             | PK_PERSON                  |     1 |       |     0   (0)|
    |  19 |     TABLE ACCESS BY INDEX ROWID     | DSK_DOCUMENT_ATTRIBUTE     |     1 |    33 |     1   (0)|
    |* 20 |      INDEX UNIQUE SCAN              | PK_DSK_DOCUMENT_ATTRIBUTE  |     1 |       |     0   (0)|
    |  21 |    TABLE ACCESS BY INDEX ROWID      | AGENCY_INTEREST_ADDRESS    |     1 |    55 |     1   (0)|
    |* 22 |     INDEX UNIQUE SCAN               | PK_AGENCY_INTEREST_ADDRESS |     1 |       |     0   (0)|
    |  23 |   TABLE ACCESS BY INDEX ROWID       | PERSON_TELECOM             |     1 |    29 |     2   (0)|
    |* 24 |    INDEX UNIQUE SCAN                | PK_PERSON_TELECOM          |     1 |       |     1   (0)|
    Predicate Information (identified by operation id):
    PLAN_TABLE_OUTPUT
       9 - filter("ACTIVITY_TASK_LIST"."REFERENCE_TASK_ID"=0 AND
                  NVL("ACTIVITY_TASK_LIST"."STATUS_CODE",'$$$')<>'%  ')
      10 - access("ACTIVITY_TASK_LIST"."REQUIREMENT_ID"=3406 OR
                  "ACTIVITY_TASK_LIST"."REQUIREMENT_ID"=3474 OR "ACTIVITY_TASK_LIST"."REQUIREMENT_ID"=3548)
      11 - filter("DSK_AAA"."DOC_TEMPLATE_ID"=2000 AND "DSK_AAA"."ACTIVITY_CLASS_CODE"='GNP' AND
                  "DSK_AAA"."PROGRAM_CODE"='80' AND "DSK_AAA"."ACTIVITY_CATEGORY_CODE"='PER' AND
                  ("DSK_AAA"."ACTIVITY_TYPE_CODE"='MAB' OR "DSK_AAA"."ACTIVITY_TYPE_CODE"='NAB' OR
                  "DSK_AAA"."ACTIVITY_TYPE_CODE"='REB'))
      12 - access("ACTIVITY_TASK_LIST"."INT_DOC_ID"="DSK_AAA"."INT_DOC_ID")
      13 - filter("DSK_FRM"."DOC_TYPE_SPECIFIC_CODE"='PERSET' AND
    PLAN_TABLE_OUTPUT
                  "DSK_FRM"."ACTIVITY_CLASS_CODE"='GNP' AND "DSK_FRM"."PROGRAM_CODE"='80' AND
                  "DSK_FRM"."DOC_TYPE_GENERAL_CODE"='FRM' AND "DSK_FRM"."ACTIVITY_CATEGORY_CODE"='PER' AND
                  ("DSK_FRM"."ACTIVITY_TYPE_CODE"='MAB' OR "DSK_FRM"."ACTIVITY_TYPE_CODE"='NAB' OR
                  "DSK_FRM"."ACTIVITY_TYPE_CODE"='REB') AND "DSK_FRM"."ACTIVITY_TYPE_CODE"="DSK_AAA"."ACTIVITY_TY
                  PE_CODE" AND "DSK_FRM"."ACTIVITY_YEAR"="DSK_AAA"."ACTIVITY_YEAR" AND
                  "DSK_FRM"."ACTIVITY_NUM"="DSK_AAA"."ACTIVITY_NUM")
      14 - access("DSK_FRM"."MASTER_AI_ID"="DSK_AAA"."MASTER_AI_ID")
      16 - access("AGENCY_INTEREST"."MASTER_AI_ID"="DSK_FRM"."MASTER_AI_ID" AND
                  "AGENCY_INTEREST"."INT_DOC_ID"=0)
      18 - access("PERSON"."MASTER_PERSON_ID"(+)="F_GET_GP_CONTACT"("AGENCY_INTEREST"."MASTER_AI_ID
                  ") AND "PERSON"."INT_DOC_ID"(+)=0)
    PLAN_TABLE_OUTPUT
      20 - access("DSK_FRM"."INT_DOC_ID"="DSK_DOCUMENT_ATTRIBUTE"."INT_DOC_ID" AND
                  "DSK_DOCUMENT_ATTRIBUTE"."DOC_ATTRIBUTE_CODE"='PERMIT_NO' AND
                  "DSK_DOCUMENT_ATTRIBUTE"."DOC_TYPE_SPECIFIC_CODE"='PERSET')
      22 - access("AGENCY_INTEREST"."MASTER_AI_ID"="AGENCY_INTEREST_ADDRESS"."MASTER_AI_ID" AND
                  "AGENCY_INTEREST_ADDRESS"."INT_DOC_ID"=0)
      24 - access("PERSON"."MASTER_PERSON_ID"="PERSON_TELECOM"."MASTER_PERSON_ID"(+) AND
                  "PERSON_TELECOM"."TELECOM_TYPE_CODE"(+)='wp' AND
                  "PERSON"."INT_DOC_ID"="PERSON_TELECOM"."INT_DOC_ID"(+))===============================================================================

  • Scan count 17 logical reads 176543 showing in execution plan

    Hello,
    I am getting scan count 17 and logical reads 145634, and sometimes the query takes 2 minutes and sometimes 5 seconds. The logical reads show up against big tables (Products and Orders) that have about 10 million records. After the procedure executes, it returns only 7008
    records as output.
    SELECT Cust.Name
    FROM  dbo.Customers Cust WITH (NOLOCK)
    INNER JOIN Products Prod WITH (NOLOCK) ON Cust.ID = Prod.CustID
    INNER JOIN dbo.Orders  Ords WITH (NOLOCK) on Cust.RepID = prod.ProdId
    INNER JOIN dbo.[Address] Adds WITH (NOLOCK) on Prod.id = Adds.Id 
    WHERE ords.pickeddate between @startdate and @enddate
    Please do the needful.

    >>>>How to reduce logical reads against big tables?
    Have useful indexes. But please show the execution plan of the query, as David pointed out.
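    For example, a hypothetical index for the date-range filter in the posted query (the column choice is an assumption from the query text, not from the real schema; the actual plan should drive the design):
    CREATE NONCLUSTERED INDEX IX_Orders_pickeddate
    ON dbo.Orders (pickeddate);
    -- add INCLUDE columns for whatever the plan shows it still has to look up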
    Best Regards, Uri Dimant, SQL Server MVP

  • STATISTICS IO: Scan Count vs Logical reads

    Hello
    I'm doing some performance research. I have an index with the following key order: ClientId, Active, ProductId. Active is a bit field telling whether the Product is active or not; there can be more inactive products than active ones, but there is always at least one active product.
    When I'm executing
    SELECT * FROM [table] WHERE ClientId = [id] AND ProductId IN (1,2,3,5,7,9,20)
    I'm getting following result: Scan count 1, logical reads 490
    When I lead SQL Server to the right paths by including the two possible values of Active, executing the following SQL:
    SELECT * FROM [table] WHERE ClientId = [id] AND ProductId IN (1,2,3,5,7,9,20) AND Active IN (0,1)
    I'm getting following results: Scan count 14, logical reads 123
    With this information, which version would you say is faster, and why?
    When I ran this query 1000 times with different ClientId values I got an average time of 172 ms for the first query and 155 ms for the second one. I have been told that scan count is very expensive... from this example it seems that the cost of 1 scan count is about 20 logical reads?

    When I ran this query 1000 times with different ClientId values I got an average time of 172 ms for the first query and 155 ms for the second one. I have been told that scan count is very expensive... from this example
    it seems that the cost of 1 scan count is about 20 logical reads?
    Typically the pages are forced out of buffer (DBCC DROPCLEANBUFFERS) when doing such a measurement:
    http://www.sqlusa.com/bestpractices2005/executioninms/
    BOL: "Use DBCC DROPCLEANBUFFERS to test queries with a cold buffer cache without shutting down and restarting the server."
    LINK: http://technet.microsoft.com/en-us/library/ms187762.aspx
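    For example, a typical cold-cache measurement sequence on a test server (a sketch; the literal [table] and the ClientId value are placeholders from the question):
    CHECKPOINT;               -- flush dirty pages so DROPCLEANBUFFERS can empty the cache
    DBCC DROPCLEANBUFFERS;    -- start the next run with a cold buffer cache
    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;
    SELECT * FROM [table] WHERE ClientId = 42 AND ProductId IN (1,2,3,5,7,9,20);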
    Kalman Toth, Database & OLAP Architect
