Consistent reads

Hello
I'm trying to understand what a consistent get is and the only definition I can find is:
Consistent gets is a statistic showing the number of buffers that are obtained in consistent read (CR) mode
I understand the db block gets statistic to be the number of blocks read from the buffer cache, i.e. they didn't need to be loaded from disk, and physical reads is self-explanatory. But what is CR mode?
Cheers
David
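
If it helps, you can watch these statistics change for your own session. A minimal sketch (a sketch, not authoritative; the statistic names come from V$STATNAME/V$MYSTAT, which require SELECT privilege on the V$ views) — run it before and after the query of interest and compare the deltas:

```sql
-- Show this session's logical and physical I/O statistics.
-- "consistent gets" = buffers read in consistent read (CR) mode,
-- "db block gets"   = buffers read in current mode.
SELECT sn.name, ms.value
FROM   v$statname sn
JOIN   v$mystat   ms ON ms.statistic# = sn.statistic#
WHERE  sn.name IN ('db block gets', 'consistent gets', 'physical reads');
```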

Hello John
I think I'm becoming a little confused about the differences here. Take the following two execution plans:
Execution Plan
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=17 Card=2 Bytes=54)
   1    0   TABLE ACCESS (FULL) OF 'CD_GROUP' (Cost=17 Card=2 Bytes=54
Statistics
          0  recursive calls
          6  db block gets
        265  consistent gets
         57  physical reads
          0  redo size
        228  bytes sent via SQL*Net to client
        248  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          1  rows processed
Execution Plan
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1 Bytes=27)
   1    0   TABLE ACCESS (BY INDEX ROWID) OF 'CD_GROUP' (Cost=2 Card=1 Bytes=27)
   2    1     INDEX (UNIQUE SCAN) OF 'CD_GROUP_PK' (UNIQUE) (Cost=1 Card=2)
Statistics
          0  recursive calls
          0  db block gets
          3  consistent gets
          0  physical reads
          0  redo size
        243  bytes sent via SQL*Net to client
        248  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          1  rows processed

The thing that is confusing me here is the relationship between the physical reads and the consistent/db block gets. In the first query, I understand that for the buffer cache to be read, the blocks need to be loaded if they are not already there, so that accounts for the physical reads. The consistent gets account for most of the data that is read, but then there are 6 db block gets issued as well. Why would these need to be issued?
Sorry, this is probably really basic stuff but I'm just struggling to understand the difference between consistent gets and db block gets in this instance.
David
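
For reference, the statistics above can be reproduced in SQL*Plus with AUTOTRACE; a sketch (the table name is taken from the plans above, but the column name in the second query is a hypothetical stand-in for the real primary key column):

```sql
SET AUTOTRACE TRACEONLY STATISTICS EXPLAIN

-- Full table scan: many consistent gets, a few db block gets,
-- and physical reads if the cache is cold.
SELECT * FROM cd_group;

-- Unique index access: a handful of consistent gets, no db block gets.
-- (group_id is a hypothetical name for the CD_GROUP_PK column.)
SELECT * FROM cd_group WHERE group_id = 1;

SET AUTOTRACE OFF
```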

Similar Messages

  • Same Execution plan But different consistent read values

    hi,
    my db version 10.2.0.3
    os version solaris 10.
    i have a query which has the same execution plan but different consistent read values when optimizer_mode is RULE and CHOOSE.
    what may be the cause of that?
    thanks,
    Here is the query:
    SELECT *
    FROM XXX
    WHERE id = 4567
    RULE based:
    | Id | Operation | Name |
    | 0 | SELECT STATEMENT | |
    | 1 | TABLE ACCESS BY INDEX ROWID| XXX|
    | 2 | INDEX RANGE SCAN | XXX_INX_ID |
    Note
    - 'PLAN_TABLE' is old version
    - rule based optimizer used (consider using cbo)
    Statistics
    1 recursive calls
    0 db block gets
    5 consistent gets
    0 physical reads
    0 redo size
    1973 bytes sent via SQL*Net to client
    492 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    1 rows processed
    COST Based:
    | Id | Operation | Name | Rows | Bytes | Co
    st (%CPU)|
    | 0 | SELECT STATEMENT | | 1 | 107 |
    4 (0)|
    | 1 | TABLE ACCESS BY INDEX ROWID| APPOINTMENT | 1 | 107 |
    4 (0)|
    | 2 | INDEX RANGE SCAN | APPO_INX_MASTERAPPOID | 1 | |
    3 (0)|
    Note
    - 'PLAN_TABLE' is old version
    Statistics
    0 recursive calls
    0 db block gets
    48120 consistent gets
    0 physical reads
    0 redo size
    1973 bytes sent via SQL*Net to client
    492 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    1 rows processed

    873792 wrote:
    hi,
    my db version 10.2.0.3
    RULE based:
    The Rule Based Optimizer is NOT supported for V10+
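
    For comparison runs like this, the optimizer mode can be switched per session; a sketch (XXX as in the query above):

```sql
-- Run the same query under both optimizer modes and compare
-- the autotrace statistics. Note the RBO is desupported in 10g.
ALTER SESSION SET optimizer_mode = RULE;
SELECT * FROM XXX WHERE id = 4567;

ALTER SESSION SET optimizer_mode = CHOOSE;
SELECT * FROM XXX WHERE id = 4567;
```

    In practice, fresh statistics (DBMS_STATS.GATHER_TABLE_STATS) usually matter more than the mode setting.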

  • SQL execution plan does not change, but consistent reads get higher

    Hi,
    Our SQL query's execution plan does not change, but consistent reads get higher in our test environment; the query looks fine in our development and production environments.
    In the Development instance the trace is:
    STAT #18446744071526492680 id=1 cnt=1 pid=0 pos=1 obj=7151 op='TABLE ACCESS BY INDEX ROWID MEMBER_CMS_SITE_ACCESS (cr=4 pr=0 pw=0 time=153 us cost=3 size=68 card=1)'
    STAT #18446744071526492680 id=2 cnt=1 pid=1 pos=1 obj=7152 op='INDEX UNIQUE SCAN MEMBER_SITE__MEMBERID_SITEID (cr=3 pr=0 pw=0 time=104 us cost=2 size=0 card=1)'
    So - we read 3 blocks from the index and then 1 from the table ( to make 4)
    In the Test instance the trace is:
    STAT #18446744071411593144 id=1 cnt=1 pid=0 pos=1 obj=7151 op='TABLE ACCESS BY INDEX ROWID MEMBER_CMS_SITE_ACCESS (cr=112 pr=0 pw=0 time=2820 us cost=3 size=70 card=1)'
    STAT #18446744071411593144 id=2 cnt=1 pid=1 pos=1 obj=7152 op='INDEX UNIQUE SCAN MEMBER_SITE__MEMBERID_SITEID (cr=3 pr=0 pw=0 time=90 us cost=2 size=0 card=1)'
    We read 3 blocks from the index but the table needs 109 more which cannot possibly be right.
    It looks like we are applying UNDO and those 109 are the work needed to do so.
    I have tried flushing the shared pool, and killing some sessions that could possibly be the cause... but with nothing in V$TRANSACTION
    I could understand this if there was a long-running transaction that's created a lot of dirty blocks, but there are no transactions at all... so it's not that.
    Thanks

    The SQL is very simple, with only one predicate in the WHERE clause, and that column is indexed, like this:
    "select * from table where a=:b0"
    Column a is indexed, but Oracle chose a full table scan (FTS) rather than an index scan, maybe because of out-of-date stats.
    So I updated the stats, expecting Oracle to switch to the index scan. In fact, Oracle did not change the existing FTS plan to the index until I flushed the entire shared pool. That's the problem.
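
    The undo-application effect discussed above is easy to demonstrate: an uncommitted change in one session forces other sessions to rebuild read-consistent copies of the modified blocks, inflating consistent gets without any physical reads. A sketch (the last_seen column and bind names are illustrative, not from the original schema):

```sql
-- Session 1: modify many rows but do not commit.
UPDATE member_cms_site_access SET last_seen = SYSDATE;

-- Session 2: the same single-row lookup now has to apply undo to the
-- modified blocks it visits, so cr= climbs while pr= can stay 0.
SELECT * FROM member_cms_site_access
WHERE  member_id = :b0 AND site_id = :b1;

-- Session 1: commit (or roll back), and the consistent gets drop back.
COMMIT;
```

    Delayed block cleanout after a large committed transaction can produce a similar one-off inflation even when V$TRANSACTION is empty.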

  • Consistent Reads & Physical Reads

    hi all,
    what are consistent reads & physical reads?

    There is no absolute value of the buffer cache hit ratio that is good or bad. The ratio depends first on the application's DML activity and the resources available, and second it is easily distorted (in either direction, high or low) by poorly performing SQL. You can have a 99.9% hit ratio on a poorly performing database.
    The Performance and Tuning Guide contains information on how to tune the db buffer cache, and Oracle will tune the cache automatically if you set it to do so, with version 9.2 and up, via automatic SGA memory management. The Guide leans toward the old days of small OLTP environments. A significant number of hash joins and hash aggregations can result in a lower average ratio on a well-performing database.
    If you are using manual SGA memory management, then you need to determine whether the buffer pool is too small; if you think the pool size should be adequate, then you need to hunt for the bad SQL (or, in a non-OLTP environment, just the cause of the low ratio value).
    HTH -- Mark D Powell --
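
    For what it's worth, the ratio Mark describes can be computed from V$SYSSTAT; a sketch using the classic formula:

```sql
-- Buffer cache hit ratio: 1 - physical reads / logical reads.
-- Per the discussion above, treat this as a rough indicator only.
SELECT ROUND(1 - phy.value / (db.value + con.value), 4) AS hit_ratio
FROM   v$sysstat phy, v$sysstat db, v$sysstat con
WHERE  phy.name = 'physical reads'
AND    db.name  = 'db block gets'
AND    con.name = 'consistent gets';
```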

  • Unable to relate consistent reads with number of physical block reads

    Hi,
    The question: we have observed that the consistent reads are much higher than the total number of buffers required to return the results.
    I flushed the buffer cache before executing the query and also queried V$BH to check the buffer details for these objects; after the flush, before firing the query, we have no buffers for these tables, which is expected.
    The plan does db file sequential reads, which read a single block at a time.
    Please take a close look at the "TABLE ACCESS BY INDEX ROWID CMPGN_DIM (cr=45379 pr=22949 pw=0 time=52434931 us)" line in the row source plan below.
    Here we have only 22949 physical reads, i.e. 22949 data buffers, but we are seeing 45379 consistent gets.
    Note: the CMPGN_DIM and AD_GRP tables are in a 4M block size tablespace and we have only the default db_cache_size. My database block size is 8192.
    Can you please help me understand how 22949 sequential reads result in 45379 consistent gets?
    Even the V$BH buffer details match the physical reads.
    Query row source plan from the 10046 trace:
    27 SORT ORDER BY (cr=92355 pr=47396 pw=0 time=359030364 us)
    27 WINDOW SORT (cr=92355 pr=47396 pw=0 time=359030088 us)
    27 NESTED LOOPS OUTER (cr=92355 pr=47396 pw=0 time=359094569 us)
    27 NESTED LOOPS OUTER (cr=92276 pr=47395 pw=0 time=359041825 us)
    27 VIEW (cr=92197 pr=47393 pw=0 time=358984314 us)
    27 UNION-ALL (cr=92197 pr=47393 pw=0 time=358984120 us)
    26 HASH GROUP BY (cr=92197 pr=47393 pw=0 time=358983665 us)
    9400 VIEW (cr=92197 pr=47393 pw=0 time=359094286 us)
    9400 COUNT (cr=92197 pr=47393 pw=0 time=359056676 us)
    9400 VIEW (cr=92197 pr=47393 pw=0 time=359009672 us)
    9400 SORT ORDER BY (cr=92197 pr=47393 pw=0 time=358972063 us)
    9400 HASH JOIN OUTER (cr=92197 pr=47393 pw=0 time=358954170 us)
    9400 VIEW (cr=92191 pr=47387 pw=0 time=349796124 us)
    9400 HASH JOIN (cr=92191 pr=47387 pw=0 time=349758517 us)
    94 TABLE ACCESS BY INDEX ROWID CMPGN_DIM (cr=45379 pr=22949 pw=0 time=52434931 us)
    50700 INDEX RANGE SCAN IDX_CMPGN_DIM_UK1 (cr=351 pr=349 pw=0 time=1915239 us)(object id 55617)
    60335 TABLE ACCESS BY INDEX ROWID AD_GRP (cr=46812 pr=24438 pw=0 time=208234661 us)
    60335 INDEX RANGE SCAN IDX_AD_GRP2 (cr=613 pr=611 pw=0 time=13350221 us)(object id 10072801)
    7 VIEW (cr=6 pr=6 pw=0 time=72933 us)
    7 HASH GROUP BY (cr=6 pr=6 pw=0 time=72898 us)
    162 PARTITION RANGE SINGLE PARTITION: 4 4 (cr=6 pr=6 pw=0 time=45363 us)
    162 PARTITION HASH SINGLE PARTITION: 676 676 (cr=6 pr=6 pw=0 time=44690 us)
    162 INDEX RANGE SCAN PK_AD_GRP_DTL_FACT PARTITION: 3748 3748 (cr=6 pr=6 pw=0 time=44031 us)(object id 8347241)
    1 FAST DUAL (cr=0 pr=0 pw=0 time=9 us)
    25 TABLE ACCESS BY INDEX ROWID AD_GRP (cr=79 pr=2 pw=0 time=29817 us)

    I think that I understand your question. The consistent gets statistic (CR) indicates the number of times blocks were accessed in memory, and doing so possibly required undo to be applied to the blocks to provide a consistent get. The physical read statistic (PR) indicates the number of blocks that were accessed from disk. The consistent gets statistic may be very close to the physical reads statistic, or very different depending on several factors. A test case might best explain why the CR and PR statistics may differ significantly. First, creating the test objects:
    CREATE TABLE T1 AS
    SELECT
      ROWNUM C1,
      1000000-ROWNUM C2,
      RPAD(TO_CHAR(ROWNUM),800,'X') C3
    FROM
      DUAL
    CONNECT BY
      LEVEL<=1000000;
    CREATE INDEX INT_T1_C1 ON T1(C1);
    CREATE INDEX INT_T1_C2 ON T1(C2);
    CREATE TABLE T2 AS
    SELECT
      ROWNUM C1,
      1000000-ROWNUM C2,
      RPAD(TO_CHAR(ROWNUM),800,'X') C3
    FROM
      DUAL
    CONNECT BY
      LEVEL<=100000;
    COMMIT;
    EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T1',CASCADE=>TRUE,ESTIMATE_PERCENT=>NULL)
    EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T2',CASCADE=>TRUE,ESTIMATE_PERCENT=>NULL)
    We now have 2 tables: the first with 1,000,000 rows (about 8 rows per block) and 2 indexes, and the second with 100,000 rows and no indexes.
    SELECT
      TABLE_NAME,
      PCT_FREE,
      NUM_ROWS,
      BLOCKS
    FROM
      USER_TABLES
    WHERE
      TABLE_NAME IN ('T1','T2');
    TABLE_NAME   PCT_FREE   NUM_ROWS     BLOCKS
    T1                 10    1000000     125597
    T2                 10     100000      12655
    COLUMN INDEX_NAME FORMAT A10
    SELECT
      INDEX_NAME,
      BLEVEL,
      LEAF_BLOCKS,
      DISTINCT_KEYS DK,
      CLUSTERING_FACTOR CF,
      NUM_ROWS
    FROM
      USER_INDEXES
    WHERE
      TABLE_NAME IN ('T1','T2');
    INDEX_NAME     BLEVEL LEAF_BLOCKS         DK         CF   NUM_ROWS
    INT_T1_C1           2        2226    1000000     125000    1000000
    INT_T1_C2           2        2226    1000000     125000    1000000
    Now a test script to try a couple of experiments with the two tables:
    SET LIN 120
    SET AUTOTRACE TRACEONLY STATISTICS EXPLAIN
    SET TIMING ON
    SPOOL C:\MYTEST.TXT
    ALTER SESSION SET STATISTICS_LEVEL=TYPICAL;
    ALTER SYSTEM FLUSH BUFFER_CACHE;
    ALTER SYSTEM FLUSH BUFFER_CACHE;
    ALTER SESSION SET TRACEFILE_IDENTIFIER = 'TEST1';
    ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 8';
    SELECT /*+ USE_HASH(T1 T2) */
      T1.C1,
      T2.C2,
      T1.C3
    FROM
      T1,
      T2
    WHERE
      T1.C2=T2.C2
      AND T1.C2 BETWEEN 900000 AND 1000000;
    ALTER SYSTEM FLUSH BUFFER_CACHE;
    ALTER SYSTEM FLUSH BUFFER_CACHE;
    ALTER SESSION SET TRACEFILE_IDENTIFIER = 'TEST2';
    SELECT /*+ USE_NL(T1 T2) */
      T1.C1,
      T2.C2,
      T1.C3
    FROM
      T1,
      T2
    WHERE
      T1.C2=T2.C2
      AND T1.C2 BETWEEN 900000 AND 1000000;
    ALTER SYSTEM FLUSH BUFFER_CACHE;
    ALTER SYSTEM FLUSH BUFFER_CACHE;
    ALTER SESSION SET TRACEFILE_IDENTIFIER = 'TEST3';
    SELECT /*+ USE_HASH(T1 T2) */
      T1.C1,
      T2.C2,
      T1.C3
    FROM
      T1,
      T2
    WHERE
      T1.C1=T2.C1
      AND T1.C1 BETWEEN 1 AND 100000;
    ALTER SYSTEM FLUSH BUFFER_CACHE;
    ALTER SYSTEM FLUSH BUFFER_CACHE;
    ALTER SESSION SET TRACEFILE_IDENTIFIER = 'TEST4';
    SELECT /*+ USE_NL(T1 T2) */
      T1.C1,
      T2.C2,
      T1.C3
    FROM
      T1,
      T2
    WHERE
      T1.C1=T2.C1
      AND T1.C1 BETWEEN 1 AND 100000;
    ALTER SESSION SET TRACEFILE_IDENTIFIER = 'TEST5';
    SELECT /*+ USE_NL(T1 T2) FIND_ME */
      T1.C1,
      T2.C2,
      T1.C3
    FROM
      T1,
      T2
    WHERE
      T1.C1=T2.C1
      AND T1.C1 BETWEEN 1 AND 100000;
    SET AUTOTRACE OFF
    ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT OFF';
    SPOOL OFF
    Test script output follows (note that the script was executed twice so that statistics related to the hard parse would be excluded):
    SQL> SELECT /*+ USE_HASH(T1 T2) */
      2    T1.C1,
      3    T2.C2,
      4    T1.C3
      5  FROM
      6    T1,
      7    T2
      8  WHERE
      9    T1.C2=T2.C2
    10    AND T1.C2 BETWEEN 900000 AND 1000000;
    100000 rows selected.
    Elapsed: 00:00:22.65
    Execution Plan
    Plan hash value: 488978626                                                                                             
    | Id  | Operation                    | Name      | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |                     
    |   0 | SELECT STATEMENT             |           | 99999 |    77M|       | 20139   (1)| 00:04:02 |                     
    |*  1 |  HASH JOIN                   |           | 99999 |    77M|  1664K| 20139   (1)| 00:04:02 |                     
    |*  2 |   TABLE ACCESS FULL          | T2        |   100K|   488K|       |  3435   (1)| 00:00:42 |                     
    |   3 |   TABLE ACCESS BY INDEX ROWID| T1        |   100K|    77M|       | 12733   (1)| 00:02:33 |                     
    |*  4 |    INDEX RANGE SCAN          | INT_T1_C2 |   100K|       |       |   226   (1)| 00:00:03 |                     
    Predicate Information (identified by operation id):                                                                    
       1 - access("T1"."C2"="T2"."C2")                                                                                     
       2 - filter("T2"."C2">=900000 AND "T2"."C2"<=1000000)                                                                
       4 - access("T1"."C2">=900000 AND "T1"."C2"<=1000000)                                                                
    Statistics
              0  recursive calls                                                                                           
              0  db block gets                                                                                             
          37721  consistent gets                                                                                           
          25226  physical reads                                                                                            
              0  redo size                                                                                                 
       82555058  bytes sent via SQL*Net to client                                                                          
          73722  bytes received via SQL*Net from client                                                                    
           6668  SQL*Net roundtrips to/from client                                                                         
              0  sorts (memory)                                                                                            
              0  sorts (disk)                                                                                              
         100000  rows processed                                                                                            
    STAT lines from the 10046 trace:
    STAT #37 id=1 cnt=100000 pid=0 pos=1 obj=0 op='HASH JOIN  (cr=37721 pr=25226 pw=0 time=106305676 us)'
    STAT #37 id=2 cnt=100000 pid=1 pos=1 obj=48144 op='TABLE ACCESS FULL T2 (cr=12511 pr=12501 pw=0 time=13403966 us)'
    STAT #37 id=3 cnt=100000 pid=1 pos=2 obj=48141 op='TABLE ACCESS BY INDEX ROWID T1 (cr=25210 pr=12725 pw=0 time=103903740 us)'
    STAT #37 id=4 cnt=100000 pid=3 pos=1 obj=48143 op='INDEX RANGE SCAN INT_T1_C2 (cr=6877 pr=225 pw=0 time=503602 us)'
    Elapsed: 00:00:00.01
    SQL> SELECT /*+ USE_NL(T1 T2) */
      2    T1.C1,
      3    T2.C2,
      4    T1.C3
      5  FROM
      6    T1,
      7    T2
      8  WHERE
      9    T1.C2=T2.C2
    10    AND T1.C2 BETWEEN 900000 AND 1000000;
    100000 rows selected.
    Elapsed: 00:00:20.17
    Execution Plan
    Plan hash value: 1773329022                                                                                            
    | Id  | Operation                   | Name      | Rows  | Bytes | Cost (%CPU)| Time     |                              
    |   0 | SELECT STATEMENT            |           | 99999 |    77M|   303K  (1)| 01:00:43 |                              
    |   1 |  TABLE ACCESS BY INDEX ROWID| T1        |     1 |   810 |     3   (0)| 00:00:01 |                              
    |   2 |   NESTED LOOPS              |           | 99999 |    77M|   303K  (1)| 01:00:43 |                              
    |*  3 |    TABLE ACCESS FULL        | T2        |   100K|   488K|  3435   (1)| 00:00:42 |                              
    |*  4 |    INDEX RANGE SCAN         | INT_T1_C2 |     1 |       |     2   (0)| 00:00:01 |                              
    Predicate Information (identified by operation id):                                                                    
       3 - filter("T2"."C2">=900000 AND "T2"."C2"<=1000000)                                                                
       4 - access("T1"."C2"="T2"."C2")                                                                                     
           filter("T1"."C2">=900000 AND "T1"."C2"<=1000000)                                                                
    Statistics
              0  recursive calls                                                                                           
              0  db block gets                                                                                             
         250219  consistent gets                                                                                           
          25227  physical reads                                                                                            
              0  redo size                                                                                                 
       82555058  bytes sent via SQL*Net to client                                                                          
          73722  bytes received via SQL*Net from client                                                                    
           6668  SQL*Net roundtrips to/from client                                                                         
              0  sorts (memory)                                                                                            
              0  sorts (disk)                                                                                              
         100000  rows processed                                                                                            
    STAT lines from the 10046 trace:
    STAT #36 id=1 cnt=100000 pid=0 pos=1 obj=48141 op='TABLE ACCESS BY INDEX ROWID T1 (cr=250219 pr=25227 pw=0 time=61410637 us)'
    STAT #36 id=2 cnt=200001 pid=1 pos=1 obj=0 op='NESTED LOOPS  (cr=231886 pr=12727 pw=0 time=3000840 us)'
    STAT #36 id=3 cnt=100000 pid=2 pos=1 obj=48144 op='TABLE ACCESS FULL T2 (cr=18344 pr=12501 pw=0 time=14103896 us)'
    STAT #36 id=4 cnt=100000 pid=2 pos=2 obj=48143 op='INDEX RANGE SCAN INT_T1_C2 (cr=213542 pr=226 pw=0 time=1929742 us)'
    SQL> SELECT /*+ USE_HASH(T1 T2) */
      2    T1.C1,
      3    T2.C2,
      4    T1.C3
      5  FROM
      6    T1,
      7    T2
      8  WHERE
      9    T1.C1=T2.C1
    10    AND T1.C1 BETWEEN 1 AND 100000;
    100000 rows selected.
    Elapsed: 00:00:20.35
    Execution Plan
    Plan hash value: 689276421                                                                                             
    | Id  | Operation                    | Name      | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |                     
    |   0 | SELECT STATEMENT             |           | 99999 |    77M|       | 20143   (1)| 00:04:02 |                     
    |*  1 |  HASH JOIN                   |           | 99999 |    77M|  2152K| 20143   (1)| 00:04:02 |                     
    |*  2 |   TABLE ACCESS FULL          | T2        |   100K|   976K|       |  3435   (1)| 00:00:42 |                     
    |   3 |   TABLE ACCESS BY INDEX ROWID| T1        |   100K|    76M|       | 12733   (1)| 00:02:33 |                     
    |*  4 |    INDEX RANGE SCAN          | INT_T1_C1 |   100K|       |       |   226   (1)| 00:00:03 |                     
    Predicate Information (identified by operation id):                                                                    
       1 - access("T1"."C1"="T2"."C1")                                                                                     
       2 - filter("T2"."C1">=1 AND "T2"."C1"<=100000)                                                                      
       4 - access("T1"."C1">=1 AND "T1"."C1"<=100000)                                                                      
    Statistics
              0  recursive calls                                                                                           
              0  db block gets                                                                                             
          37720  consistent gets                                                                                           
          25225  physical reads                                                                                            
              0  redo size                                                                                                 
       82555058  bytes sent via SQL*Net to client                                                                          
          73722  bytes received via SQL*Net from client                                                                    
           6668  SQL*Net roundtrips to/from client                                                                         
              0  sorts (memory)                                                                                            
              0  sorts (disk)                                                                                              
         100000  rows processed                                                                                            
    STAT lines from the 10046 trace:
    STAT #38 id=1 cnt=100000 pid=0 pos=1 obj=0 op='HASH JOIN  (cr=37720 pr=25225 pw=0 time=69225424 us)'
    STAT #38 id=2 cnt=100000 pid=1 pos=1 obj=48144 op='TABLE ACCESS FULL T2 (cr=12511 pr=12501 pw=0 time=13204971 us)'
    STAT #38 id=3 cnt=100000 pid=1 pos=2 obj=48141 op='TABLE ACCESS BY INDEX ROWID T1 (cr=25209 pr=12724 pw=0 time=66504913 us)'
    STAT #38 id=4 cnt=100000 pid=3 pos=1 obj=48142 op='INDEX RANGE SCAN INT_T1_C1 (cr=6876 pr=224 pw=0 time=604405 us)'
    SQL> SELECT /*+ USE_NL(T1 T2) */
      2    T1.C1,
      3    T2.C2,
      4    T1.C3
      5  FROM
      6    T1,
      7    T2
      8  WHERE
      9    T1.C1=T2.C1
    10    AND T1.C1 BETWEEN 1 AND 100000;
    100000 rows selected.
    Elapsed: 00:00:28.11
    Execution Plan
    Plan hash value: 1467726760                                                                                            
    | Id  | Operation                   | Name      | Rows  | Bytes | Cost (%CPU)| Time     |                              
    |   0 | SELECT STATEMENT            |           | 99999 |    77M|   303K  (1)| 01:00:43 |                              
    |   1 |  TABLE ACCESS BY INDEX ROWID| T1        |     1 |   806 |     3   (0)| 00:00:01 |                              
    |   2 |   NESTED LOOPS              |           | 99999 |    77M|   303K  (1)| 01:00:43 |                              
    |*  3 |    TABLE ACCESS FULL        | T2        |   100K|   976K|  3435   (1)| 00:00:42 |                              
    |*  4 |    INDEX RANGE SCAN         | INT_T1_C1 |     1 |       |     2   (0)| 00:00:01 |                              
    Predicate Information (identified by operation id):                                                                    
       3 - filter("T2"."C1">=1 AND "T2"."C1"<=100000)                                                                      
       4 - access("T1"."C1"="T2"."C1")                                                                                     
           filter("T1"."C1"<=100000 AND "T1"."C1">=1)                                                                      
    Statistics
              0  recursive calls                                                                                           
              0  db block gets                                                                                             
         250218  consistent gets                                                                                           
          25225  physical reads                                                                                            
              0  redo size                                                                                                 
       82555058  bytes sent via SQL*Net to client                                                                          
          73722  bytes received via SQL*Net from client                                                                    
           6668  SQL*Net roundtrips to/from client                                                                         
              0  sorts (memory)                                                                                            
              0  sorts (disk)                                                                                              
         100000  rows processed                                                                                            
    STAT lines from the 10046 trace:
    STAT #26 id=1 cnt=100000 pid=0 pos=1 obj=48141 op='TABLE ACCESS BY INDEX ROWID T1 (cr=250218 pr=25225 pw=0 time=80712592 us)'
    STAT #26 id=2 cnt=200001 pid=1 pos=1 obj=0 op='NESTED LOOPS  (cr=231885 pr=12725 pw=0 time=4601151 us)'
    STAT #26 id=3 cnt=100000 pid=2 pos=1 obj=48144 op='TABLE ACCESS FULL T2 (cr=18344 pr=12501 pw=0 time=17704737 us)'
    STAT #26 id=4 cnt=100000 pid=2 pos=2 obj=48142 op='INDEX RANGE SCAN INT_T1_C1 (cr=213541 pr=224 pw=0 time=2683089 us)'
    SQL> SELECT /*+ USE_NL(T1 T2) FIND_ME */
      2    T1.C1,
      3    T2.C2,
      4    T1.C3
      5  FROM
      6    T1,
      7    T2
      8  WHERE
      9    T1.C1=T2.C1
    10    AND T1.C1 BETWEEN 1 AND 100000;
    100000 rows selected.
    Elapsed: 00:00:17.81
    Execution Plan
    Plan hash value: 1467726760                                                                                            
    | Id  | Operation                   | Name      | Rows  | Bytes | Cost (%CPU)| Time     |                              
    |   0 | SELECT STATEMENT            |           | 99999 |    77M|   303K  (1)| 01:00:43 |                              
    |   1 |  TABLE ACCESS BY INDEX ROWID| T1        |     1 |   806 |     3   (0)| 00:00:01 |                              
    |   2 |   NESTED LOOPS              |           | 99999 |    77M|   303K  (1)| 01:00:43 |                              
    |*  3 |    TABLE ACCESS FULL        | T2        |   100K|   976K|  3435   (1)| 00:00:42 |                              
    |*  4 |    INDEX RANGE SCAN         | INT_T1_C1 |     1 |       |     2   (0)| 00:00:01 |                              
    Predicate Information (identified by operation id):                                                                    
       3 - filter("T2"."C1">=1 AND "T2"."C1"<=100000)                                                                      
       4 - access("T1"."C1"="T2"."C1")                                                                                     
           filter("T1"."C1"<=100000 AND "T1"."C1">=1)                                                                      
    Statistics
              0  recursive calls                                                                                           
              0  db block gets                                                                                             
         250218  consistent gets                                                                                           
              0  physical reads                                                                                            
              0  redo size                                                                                                 
       82555058  bytes sent via SQL*Net to client                                                                          
          73722  bytes received via SQL*Net from client                                                                    
           6668  SQL*Net roundtrips to/from client                                                                         
              0  sorts (memory)                                                                                            
              0  sorts (disk)                                                                                              
         100000  rows processed                                                                                            
    STAT lines from the 10046 trace:
    STAT #36 id=1 cnt=100000 pid=0 pos=1 obj=48141 op='TABLE ACCESS BY INDEX ROWID T1 (cr=250218 pr=0 pw=0 time=6000438 us)'
    STAT #36 id=2 cnt=200001 pid=1 pos=1 obj=0 op='NESTED LOOPS  (cr=231885 pr=0 pw=0 time=2401295 us)'
    STAT #36 id=3 cnt=100000 pid=2 pos=1 obj=48144 op='TABLE ACCESS FULL T2 (cr=18344 pr=0 pw=0 time=1400071 us)'
    STAT #36 id=4 cnt=100000 pid=2 pos=2 obj=48142 op='INDEX RANGE SCAN INT_T1_C1 (cr=213541 pr=0 pw=0 time=2435627 us)'
    So, what does the above mean? Why would a forced execution plan change the number of consistent gets? Why would skipping the buffer cache flush cause the PR statistic to drop to 0, yet leave the CR statistic unchanged?
    Charles Hooper
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.

  • Sata II hdd consistently reading 48-50C incorrect temp

    Since I bought my PC my hdd has consistently read high temps. I'm using Speedfan to measure my temperatures. From what I understand this is getting the reading from S.M.A.R.T. Don't know why, but it's not reporting correctly, as the drive does not feel that hot when I touch it. I'll have to conclude that there is a fault somewhere.
    This had me worried for a while but I'm over it now. It's interesting, though, that you can't read this temp from the BIOS.

    I've got one reading 56C, and during heavy video conversions it will peg 64C. It reads the same on all 6 SATA ports, with 78 CFM of air. I also have 2 more, at 36C and 42C average temps. They all measure about the same; actually, the 36C one (boot w/swap) reads the warmest with a digital probe, at 28C to 35C during heavy use. The others are just slightly cooler on the top side close to the spindle (where I can get to it the easiest).
    I take the temps with a grain of salt, but they're good to have as a gauge of system health.
    Note, I can't really measure fan speed, but it sure sounds like 78 CFM!

  • ORA-08176: consistent read failure; rollback data not available

    Hi,
    We implemented UNDO management on our servers and started getting these errors for a few of our programs:
    ORA-08176: consistent read failure; rollback data not available
    These errors were not coming when we were using the old rollback segments and we have not changed any code on our server.
    1. What is possibly causing these errors?
    2. Why did they not surface with rollback segments, but started appearing when we implemented AUM and a temporary tablespace (instead of a fixed tablespace used as the temporary tablespace)?
    Our environment:
    RDBMS Version: 9.2.0.5
    Operating System and Version: Windows 2000 AS SP5
    Thanks
    Satish

    Not much in the alert.log. I looked at the trace file; it also does not have much information:
    ORA-12012: error on auto execute of job 7988306
    ORA-20006: ORA-20001: Following error occured in Lot <4407B450Z2 Operation 7131> Good Bad rollup.ORA-08176: consistent read failure; rollback data not available
    ORA-06512: at "ARIES.A_SP$WRAPPER_ROLLUPS", line 106
    ORA-06512: at line 1
    *** SESSION ID:(75.13148) 2004-11-23 09:16:14.281
    *** 2004-11-23 09:16:14.281
    ORA-12012: error on auto execute of job 7988556
    ORA-20006: ORA-20006: Following error occured in Lot <3351A497V1 Operation 7295> For No FL Rollup, Updating T_GOOD.ORA-08176: consistent read failure; rollback data not available
    ORA-06512: at "ARIES.A_SP$WRAPPER_ROLLUPS", line 106
    ORA-06512: at line 1
    *** SESSION ID:(75.16033) 2004-11-23 09:28:10.703
    *** 2004-11-23 09:28:10.703
    The version we have is :
    Oracle9i Enterprise Edition Release 9.2.0.5.0 - Production
    PL/SQL Release 9.2.0.5.0 - Production
    CORE 9.2.0.6.0 Production
    TNS for 32-bit Windows: Version 9.2.0.5.0 - Production
    NLSRTL Version 9.2.0.5.0 - Production
    Thanks
    Satish

  • Interpreting tkprof execute query (consistent read)

    Hi all,
    I have a stored procedure which executes a bit of dynamic SELECT on an Oracle Text enabled column.
    On first execution I get the following stats;
    call    count   cpu      elapsed          disk      query    current       rows
    Parse   1       0.07     0.10                0          0          0          0
    Execute 1       4.72     5.55                0       5001          0          0
    Fetch   1       0.00     0.00                0          0          0          0
    total   3       4.79     5.65                0       5001          0          0
    If I call the stored procedure with the same parameters again immediately, I get the following:
    call    count   cpu      elapsed          disk      query    current       rows
    Parse   1       0.00     0.00                0          0          0          0
    Execute 1       0.01     0.00                0          0          0          0
    Fetch   1       0.00     0.00                0          0          0          0
    total   3       0.01     0.00                0          0          0          0
    On a development system where there aren't other updates going on, can anybody suggest why I suffer this degradation in performance on the first run?
    The EXPLAIN plan is exactly the same for 1st and 2nd run.
    Regards

    Just as a followup, using Tim Hall's [CTXCAT example|http://www.oracle-base.com/articles/9i/FullTextIndexingUsingOracleText9i.php], you can see the number of consistent gets going way down for the second call
    SQL> SELECT id, price, name
      2  FROM   my_items
      3  WHERE  CATSEARCH(description, 'Bike', 'price BETWEEN 1 AND 5')> 0;
    Statistics
             10  recursive calls
              0  db block gets
             24  consistent gets
              0  physical reads
              0  redo size
            744  bytes sent via SQL*Net to client
            385  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              5  rows processed
    SQL> /
    Statistics
              3  recursive calls
              0  db block gets
              4  consistent gets
              0  physical reads
              0  redo size
            744  bytes sent via SQL*Net to client
            385  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              5  rows processed
    SQL> ed
    Wrote file afiedt.buf
      1  SELECT id, price, name
      2  FROM   my_items
      3* WHERE  CATSEARCH(description, 'Bike', 'price BETWEEN 4 AND 17')> 0
    SQL> /
    14 rows selected.
    Statistics
            110  recursive calls
              0  db block gets
            195  consistent gets
              0  physical reads
              0  redo size
           1120  bytes sent via SQL*Net to client
            385  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
             14  rows processed
    SQL> /
    14 rows selected.
    Statistics
              3  recursive calls
              0  db block gets
              4  consistent gets
              0  physical reads
              0  redo size
           1120  bytes sent via SQL*Net to client
            385  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              14  rows processed
    That leads me to suspect that there is some caching going on with the index. If we could see the raw trace file, there may well be some recursive SQL that gets fired off as part of the index maintenance.
    Justin

  • Why not use Redo log for consistent read

    Oracle 11.1.0.7:
    This might be a stupid question.
    As I understand it, if a select was issued at 7:00 AM and the data that the select is going to read changed at 7:10 AM, Oracle will still return the data as it existed at 7:00 AM. And for this Oracle needs the data in the undo segments.
    My question is: since redo also has past and current information, why can't the redo logs be used to retrieve that information? Why is undo required when redo already has all that information?

    user628400 wrote:
    Thanks. I get that piece, but isn't it the same problem with UNDO? It's overwritten as it expires, and there is no guarantee until we specifically ask Oracle to guarantee the UNDO retention. I guess I am trying to understand that UNDO was created for efficiency purposes, so that there is less performance overhead compared to reading and writing from redo. And you also said:
    >
    If data was changed to 100 to 200 wouldn't both the values be there in redo logs. As I understand:
    1. Insert row with value 100 at 7:00 AM and commit. 100 will be writen to redo log
    2. update row to 200 at 8:00 AM and commit. 200 will be written to redo log
    So in essence 100 and 200 are both there in the redo logs, and if the select was issued at 7:00, the data could be read from the redo log too. Please correct me if I am understanding it incorrectly.
    I guess you didn't understand the explanation I gave. It's not the old data that is kept in redo; it's the change vector that is kept, which is useful to "recover" the data when it's gone, but not useful as such for a select statement. In an undo block, by contrast, the actual old value is kept. You must remember that an undo block is still just a block, which can contain data just like a normal block containing a table such as EMP. So redo holds not 100 and 200 themselves, but the change vectors for those changes, which are used to recover the transaction based on its SCN, and they would be read in that order as well. Reading old data from undo is quite simple for Oracle: the transaction table in the undo segment, which holds the entry for the transaction, knows where the old data is kept in the undo segment. You may have seen XIDUSN, XIDSLOT and XIDSEQ in the transaction id; these are nothing but the information about where the undo data is kept. So for reading old data, unlike redo, undo plays the key role.
    About the expiry of undo, you must know that only INACTIVE undo extents are marked as expired. Active extents, which hold the records of an ongoing transaction, are never marked for it. You can come back after a lifetime and, if the undo is still there, your old data will have been kept safe by Oracle, since it's useful for multiversioning. Undo retention is about keeping the old data after commit, something which you need not do yourself if you are on 11g and using the Total Recall feature!
    HTH
    Aman....
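    Aman's point, that undo stores real old row values while redo stores only change vectors, can be seen directly with a flashback query, which asks Oracle to rebuild a past version of the data from undo. A minimal sketch, assuming a hypothetical table t(id, val) whose row was updated from 100 to 200 as in the example above, and assuming the undo has not yet been overwritten:

    ```sql
    -- Sketch only: t(id, val) is a hypothetical table matching the example,
    -- and the undo for the 8:00 AM update must still be available.

    -- Current version of the row: the value 200 from the 8:00 AM update.
    SELECT val FROM t WHERE id = 1;

    -- The row as it stood 90 minutes ago: Oracle rebuilds the old version
    -- by applying the before-images stored in undo, the same mechanism
    -- every consistent read uses internally.
    SELECT val
    FROM   t AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '90' MINUTE)
    WHERE  id = 1;
    ```

    If the undo has already been reused, the second query fails with ORA-01555 (snapshot too old), the same shortage that ordinary consistent reads can run into.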

  • Single-statement 'write consistency' on read committed?

    Please note that in the following I'm only concerned about single-statement read committed transactions. I do realize that for a multi-statement read committed transaction Oracle does not guarantee transaction set consistency without techniques like select for update or explicit hand-coded locking.
    According to the documentation Oracle guarantees 'statement-level transaction set consistency' for queries in read committed transactions. In many cases, Oracle also provides single-statement write consistency. However, when an update based on a consistent read tries to overwrite changes committed by other transactions after the statement started, it creates a write conflict. Oracle never reports write conflicts on read committed. Instead, it automatically handles them based on the new values for the target table columns referenced by the update.
    Let's consider a simple example. Again, I do realize that the following design might look strange or even sloppy, but the ability to produce a quality design when needed is not an issue here. I'm simply trying to understand Oracle's behavior on write conflicts in a single-statement read committed transaction.
    A valid business case behind the example is rather common: a financial institution with two-stage funds transfer processing. First, you submit a transfer (put the transfer amounts in the 'pending' column of the account) while the whole financial transaction is in doubt. Second, after you have got all the necessary confirmations, you clear all the pending transfers, making the corresponding account balance changes, resetting the pending amount and marking the accounts cleared by setting the cleared date. Neither stage should leave the data in an inconsistent state: sum(amount) over all rows should not change, and sum(pending) over all rows should always be 0 after either stage:
    Setup:
    create table accounts (
      acc int primary key,
      amount int,
      pending int,
      cleared date
    );
    Initially the table contains the following:
    ACC AMOUNT PENDING CLEARED
    1 10 -2
    2 0 2
    3 0 0 26-NOV-03
    So, there is a committed database state with a pending funds transfer of 2 dollars from acc 1 to acc 2. Let's submit another transfer of 1 dollar from acc 1 to acc 3 but do not commit it yet in SQL*Plus Session 1:
    update accounts
    set pending = pending - 1, cleared = null where acc = 1;
    update accounts
    set pending = pending + 1, cleared = null where acc = 3;
    ACC AMOUNT PENDING CLEARED
    1 10 -3
    2 0 2
    3 0 1
    And now let's clear all the pending transfers in SQL*Plus Session 2 in a single-statement read-committed transaction:
    update accounts
    set amount = amount + pending, pending = 0, cleared = sysdate
    where cleared is null;
    Session 2 naturally blocks. Now commit the transaction in session 1. Session 2 readily unblocks:
    ACC AMOUNT PENDING CLEARED
    1 7 0 26-NOV-03
    2 2 0 26-NOV-03
    3 0 1
    Here we go - the results produced by a single-statement read committed transaction in session 2 are inconsistent: the second funds transfer has not completed in full. Session 2 should have produced the following instead:
    ACC AMOUNT PENDING CLEARED
    1 7 0 26-NOV-03
    2 2 0 26-NOV-03
    3 1 0 26-NOV-03
    Please note that we would have gotten the correct results if we ran the transactions in session 1 and session 2 serially. Please also note that no update has been lost. The type of isolation anomaly observed is usually referred to as a 'read skew', which is a variation of 'fuzzy read' a.k.a. 'non-repeatable read'.
    But if in the session 2 instead of:
    -- scenario 1
    update accounts
    set amount = amount + pending, pending = 0, cleared = sysdate
    where cleared is null;
    we issued:
    -- scenario 2
    update accounts
    set amount = amount + pending, pending = 0, cleared = sysdate
    where cleared is null and pending <> 0;
    or even:
    -- scenario 3
    update accounts
    set amount = amount + pending, pending = 0, cleared = sysdate
    where cleared is null and (pending * 0) = 0;
    We'd have gotten what we really wanted.
    I'm very well aware of the 'select for update' or serializable isolation level solution for the problem. Also, I could present a working example for precisely the above scenario for a major database product, providing the results that I would consider to be correct. That is, the interleaved execution of the transactions has the same effect as if they completed serially. Naturally, no extra hand-coded locking techniques like select for update or explicit locking are involved.
    And now let's try to understand what just has happened. Playing around with similar trivial scenarios one could easily figure out that Oracle clearly employs different strategies when handling update conflicts based on the new values for the target table columns, referenced by the update. I have observed the following cases:
    A. The column values have not changed: Oracle simply resumes using the current version of the row. It's perfectly fine because the database view presented to the statement (and hence the final state of the database after the update) is no different from what would have been presented if there had been no conflict at all.
    B. The row (including the columns being updated) has changed, but the predicate columns haven't (see scenario 1): Oracle resumes using the current version of the row. Formally, this is acceptable too as the ANSI read committed by definition is prone to certain anomalies anyway (including the instance of a 'read skew' we've just observed) and leaving behind somewhat inconsistent data can be tolerated as long as the isolation level permits it. But please note - this is not a 'single-statement write consistent' behavior.
    C. Predicate columns have changed (see scenario 2 or 3): Oracle rolls back and then restarts the statement, making it look as if it did indeed present a consistent view of the database to the update statement. However, what seems confusing is that sometimes Oracle restarts when it isn't necessary, e.g. when the new values for predicate columns don't change the predicate itself (scenario 3). In fact, it's a bit more complicated: I also observed restarts on some index column changes, and triggers and constraints change things a bit too, but for the sake of simplicity let's not go there yet.
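    One way to observe the restart in case (C) directly, rather than infer it from the results, is a diagnostic row-level trigger: it fires once per row the update actually processes, so if Oracle rolls back and restarts the statement, the same rows appear twice in the output (the DBMS_OUTPUT buffer is not transactional, so the first pass's lines survive the rollback). A sketch against the accounts table above; the trigger is illustrative only and should be dropped afterwards:

    ```sql
    -- Diagnostic only: log each row the UPDATE visits; a restart shows up
    -- as the same rows being logged a second time.
    CREATE OR REPLACE TRIGGER accounts_bu
    BEFORE UPDATE ON accounts
    FOR EACH ROW
    BEGIN
      DBMS_OUTPUT.PUT_LINE('visiting acc=' || :OLD.acc ||
                           ', pending=' || :OLD.pending);
    END;
    /
    SET SERVEROUTPUT ON
    -- Re-run scenario 2 in session 2 while session 1 holds its uncommitted
    -- update; after session 1 commits, the log shows the rows visited both
    -- before and after the restart.
    UPDATE accounts
    SET    amount = amount + pending, pending = 0, cleared = SYSDATE
    WHERE  cleared IS NULL AND pending <> 0;
    ```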
    And here come the questions, assuming that (B) is not a bug, but the expected behavior:
    1. Does anybody know why it has never been documented in detail when exactly Oracle restarts automatically on write conflicts, given that there are cases when it should restart but won't? Many developers would hesitate to depend on the feature as long as it's not 'official'. Hence, the lack of information makes it virtually useless for critical database applications, and a careful app developer would be forced to use either the serializable isolation level or hand-coded locking for a single-statement update transaction.
    If, on the other hand, it's been documented, could anybody please point me to the bit in the documentation that:
    a) Clearly states that Oracle might restart an update statement in a read committed transaction because otherwise it would produce inconsistent results.
    b) Unambiguously explains the circumstances when Oracle does restart.
    c) Gives clear and unambiguous guidelines on when Oracle doesn't restart and therefore when to use techniques like select for update or the serializable isolation level in a single-statement read committed transaction.
    2. Does anybody have a clue what the motivation was for this peculiar design choice of restarting for only a certain subset of write conflicts? What was so special about them? Since (B) is acceptable for read committed, why does Oracle bother with automatic restarts in (C) at all?
    3. If, on the other hand, Oracle envisions statement-level write consistency as an important advantage over other mainstream DBMSs, as is clear from the handling of (C), does anybody have any idea why Oracle wouldn't fix (B) using well-known techniques and always produce consistent results?

    I'm intrigued that this posting has attracted so little interest. The behaviour described is not intuitive and seems to be undocumented in Oracle's manuals.
    Does the lack of response indicate:
    (1) Nobody thinks this is important
    (2) Everybody (except me) already knew this
    (3) Nobody understands the posting
    For the record, I think it is interesting. Having spent some time investigating this, I believe the behaviour described is correct, consistent and understandable. But I would be happier if Oracle documented it in the Transaction sections of the manual.
    Cheers, APC

  • About database nature of read consistency,

    Hi All,
    can anybody explain me what is consistent read, logical read and physical read?

    Hi,
    A consistent read occurs when the database must read a cloned copy of a block that has recently been changed. This is a core concept of the Oracle database that allows data to be changed and read at the same time. A user must be able to read all data relevant to the transaction as of the time it started.
    For instance, let's assume I start doing a full table scan of a large table at noon. Another user comes by at 12:05 and deletes a few records from the same table. Oracle preserves the before-image of the table as of the time I started my full table scan, so that I don't get some results from noon and some results from 12:05. This is called read consistency. A consistent read is what a session does to achieve this, using the UNDO tablespace.
    A logical read is a read from memory; for example, a row that has been recently read and is now stored in the database buffer cache. The second time a query wants that row, Oracle will detect that it's in the cache and won't fetch it from disk again. This is a logical read, also called a consistent get.
    A Physical read is when the database will fetch the required blocks from the disk.
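    These three kinds of read can be watched for your own session by joining V$MYSTAT to V$STATNAME; a minimal sketch, assuming the session has SELECT privilege on the V$ views:

    ```sql
    -- Snapshot the session's read statistics; run it before and after a
    -- query and compare the values to see how the query was satisfied.
    SELECT sn.name, ms.value
    FROM   v$mystat ms
           JOIN v$statname sn ON sn.statistic# = ms.statistic#
    WHERE  sn.name IN ('consistent gets', 'db block gets', 'physical reads');
    ```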

  • Ram issues-reading only one side

    My Lombard tells me I have 192mb of ram, so I always figured 128 in the top and the original 64 in the bottom. I went to check out some ram upgrades today at the shop to get up to 512mb. When we opened it up to test the new ram, we learned that I do have 128mb in the top, but there is already a 256mb in the bottom. So for some reason my comp is only reading half of each card. We even tried the new ram card and it also only read half of it. We tried different combinations and it didn't change. It's only reading half of each ram card.
    Anyone have any ideas why or what i can do about it? Thanks for any suggestions.
    Blake

    Hello. So I have taken it apart and played with it a bit more.
    First of all, each ram card has 4 black boxes (modules) per side. So maybe that's a bad sign right off.
    The one card still has a sticker on it from where it was purchased, it reads PC133 , Ibook 256 Wilkins Technology. The modules are Samsung but i think the board is not.
    The other card looks physically about the same. I am assuming that it's a 128 because both slots consistently read 1/2 of whichever card is inserted, in this case reading 64. The modules on it are Samsung and the board also appears to be Samsung. Its sticker reads Samsung Korea 9926. If this card is in the computer alone, in either the top or bottom slot, the computer will not boot up. It just gives me an icon with the finder logo flashing to a ? on it.
    The 256 card will allow the comp to boot up no problem when it's the only card installed, and the computer will show 128mb if this 256 card is in either slot. And if I add the other card (that cannot work alone), then it adds another 64mb to the total, giving me 192mb total. And this is consistent with the cards in either slot.
    In summary, the 128 card will not work alone, in either slot. But if the other card is installed, the 128 card will provide me with 64mb more ram, installed in either slot.
    Pretty weird...
    Could this have anything to do with needing to upgrade firmware? Probably a stupid question but just a thought...

  • Buffer Busy Waits in a Read-Mostly Database?

    11gR2 Standard Edition on Linux x86_64.
    The database consists of two large tables (12GB+), one column of each of which has an Oracle Text index on it. Once a month, the two tables are refreshed from elsewhere, the Text indexes are updated, and then they sit there for the rest of the month, effectively read-only as users perform full text searches. The instance runs in 20GB of RAM, of which 16GB is given over to the (8K, default) buffer cache, 1GB SGA, 2GB PGA.
    The principal recurring wait event on this database is buffer busy waits, for data blocks (i.e., not undo segment headers), and the data blocks are those of the two tables (which have default freelists, freelist groups, initrans and maxtrans).
    I get that during the monthly refresh, when there's loads of inserts happening, there could be lots of buffer busy waits. Since that refresh happens at weekends out-of-hours, waits during that time are not of any great concern.
    My question is why there would be any such waits during the database's 'read-only' period, in between refreshes. I can positively guarantee that no DML is taking place then, yet the buffer busy waits still occur, from time to time.
    On a possibly related note, why would I see lots of "consistent reads" during the 'read-only' period? The data isn't changing at all, so why would the database be busy doing consistent reads when current reads (I would have thought) would be good enough to get the data in the state it's actually at?

    Catfive Lander wrote:
    11gR2 Standard Edition on Linux x86_64.
    The database consists of two large tables (12GB+), one column of each of which has an Oracle Text index on it. Once a month, the two tables are refreshed from elsewhere, the Text indexes are updated, and then they sit there for the rest of the month, effectively read-only as users perform full text searches. The instance runs in 20GB of RAM, of which 16GB is given over to the (8K, default) buffer cache, 1GB SGA, 2GB PGA.
    The principle recurring wait event on this database is buffer busy waits, for data blocks (i.e., not undo segment headers) -and the data blocks are those of the two tables (which have default freelists, freelist groups and initrans and maxtrans).
    I get that during the monthly refresh, when there's loads of inserts happening, there could be lots of buffer busy waits. Since that refresh happens at weekends out-of-hours, waits during that time are not of any great concern.
    My question is why there would be any such waits during the database's 'read-only' period, in between refreshes. I can positively guarantee that no DML is taking place then, yet the buffer busy waits still occur, from time to time.
    On a possibly related note, why would I see lots of "consistent reads" during the 'read-only' period? The data isn't changing at all, so why would the database be busy doing consistent reads when current reads (I would have thought) would be good enough to get the data in the state it's actually at?
    Catfive,
    Are you running 11.2.0.1 or 11.2.0.2? If you are running 11.2.0.1 there are at least two bugs fixed by 11.2.0.2 to correct problems that lead to buffer busy waits. You mentioned that this is a "mostly" read only database where you are experiencing these waits - does that mean that there might be some inserts, updates, and deletes (possibly auditing related?)? One of the bug reports found on Metalink (MOS) is this one:
    Doc ID 9341448.8, Bug 9341448 - "Buffer block contention on full block which keeps being tried for space"
    How did you determine that the buffer busy waits were related to these two tables? Did you check V$SEGMENT_STATISTICS, monitor the session level wait events, create a 10046 trace at level 8 or 12, or use some other method? Are these tables typically read using parallel execution? Is there any chance that the application is performing SELECT ... FOR UPDATE?
    Have you checked V$SESSION_EVENT to see which sessions waited on buffer busy waits? How severe are the buffer busy waits - 10 seconds in a 24 hour period, 1 minute in a 20 minute time period? Are you backing up this database using RMAN and comparing the change in the buffer busy waits before and after RMAN completes its backup?
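    For the V$SEGMENT_STATISTICS check mentioned above, a query along these lines shows which segments are actually accumulating the waits; narrowing by owner or object name is left as an exercise:

    ```sql
    -- Which segments have accumulated buffer busy waits since startup?
    SELECT owner, object_name, object_type, value
    FROM   v$segment_statistics
    WHERE  statistic_name = 'buffer busy waits'
    AND    value > 0
    ORDER  BY value DESC;
    ```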
    I wonder if using SGA_TARGET could lead to buffer busy waits during an automatic buffer cache resize operation?
    Regarding seeing "consistent reads" during the read only period, that should be expected when blocks are read from the buffer cache. Jonathan Lewis explained it well in at least one of the threads that he contributed to on OTN, but I cannot find that thread at the moment. Essentially (in as few words as possible), you will see current mode block accesses when the data blocks are being changed and consistent reads (consistent gets) when the blocks are being read. This thread includes comments that suggest what to check to determine if undo had to be applied to perform the consistent gets:
    Index consists 1.5mln blocks, but full scan gets 11mln blocks
    Edit:
    I found the thread with Jonathan's comment:
    high consistent read during parse call | tkprof output
    "If you're not doing a current read then the only alternative is to do a consistent read.
    Typically you do current reads because you want to change a block"
    Charles Hooper
    Co-author of "Expert Oracle Practices: Oracle Database Administration from the Oak Table"
    http://hoopercharles.wordpress.com/
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.
    Edited by: Charles Hooper on Jan 5, 2011 8:45 AM
    Added link to second OTN thread

  • How to read explain & tkprof

    Hi, I have a query which takes 15 s and I want it to be processed in 1 s.
    Here are the explain plan and tkprof output. How do I read these? I only see a difference in the number of rows from aliquot.
    SQL Statement which produced this data:
      select * from table(dbms_xplan.display)
    PLAN_TABLE_OUTPUT
    | Id  | Operation                               |  Name                        | Rows  | Bytes |TempSpc| Cost  |
    |   0 | SELECT STATEMENT                        |                              |    31 |  3534 |       |  3950 |
    |   1 |  FILTER                                 |                              |       |       |       |       |
    |*  2 |   HASH JOIN                             |                              |    31 |  3534 |       |  3950 |
    |*  3 |    HASH JOIN                            |                              |    31 |  3441 |       |  3947 |
    |*  4 |     HASH JOIN                           |                              |  2697 |   273K|       |  3613 |
    |*  5 |      HASH JOIN                          |                              |  2697 |   250K|       |  2764 |
    |*  6 |       TABLE ACCESS FULL                 | ALIQUOT                      | 14573 |   156K|       |  1197 |
    |*  7 |       HASH JOIN                         |                              | 52552 |  4310K|       |  1478 |
    |   8 |        TABLE ACCESS FULL                | SDG_USER                     |   411 |  2466 |       |     2 |
    |*  9 |        HASH JOIN                        |                              |   105K|  8005K|       |  1472 |
    |  10 |         TABLE ACCESS FULL               | SDG                          |   411 |  1644 |       |     3 |
    |* 11 |         HASH JOIN                       |                              |   105K|  7595K|  5552K|  1465 |
    |  12 |          TABLE ACCESS FULL              | SAMPLE                       |   283K|  2218K|       |   649 |
    |* 13 |          HASH JOIN                      |                              |   105K|  6774K|       |   232 |
    |* 14 |           HASH JOIN                     |                              |    36 |  2124 |       |     8 |
    |* 15 |            HASH JOIN                    |                              |    98 |  4410 |       |     5 |
    |  16 |             NESTED LOOPS                |                              |     1 |    35 |       |     2 |
    |  17 |              TABLE ACCESS BY INDEX ROWID| U_PROTOCOL_VARIABLE          |     1 |    33 |       |     2 |
    |* 18 |               INDEX RANGE SCAN          | AK_U_PROTOCOL_VARIABLE       |     1 |       |       |     1 |
    |* 19 |              INDEX UNIQUE SCAN          | PK_U_PROTOCOL_VARIABLE_USER  |     1 |     2 |       |       |
    |  20 |             TABLE ACCESS FULL           | U_PROTOCOL_VALUE_USER        |  1075 | 10750 |       |     2 |
    |* 21 |            TABLE ACCESS FULL            | U_REF_DESIGNATION_USER       |    16 |   224 |       |     2 |
    |  22 |           TABLE ACCESS FULL             | SAMPLE_USER                  |   283K|  1940K|       |   221 |
    |  23 |      TABLE ACCESS FULL                  | TEST                         |   630K|  5544K|       |   650 |
    |  24 |     TABLE ACCESS FULL                   | TEST_USER                    |   630K|  4312K|       |   138 |
    |  25 |    TABLE ACCESS FULL                    | WORKSHEET_ALIQUOT_TYPE       |    25 |    75 |       |     2 |
    |  26 |   INDEX UNIQUE SCAN                     | PK_OPERATOR_GROUP            |     1 |     4 |       |       |
    |  27 |   INDEX UNIQUE SCAN                     | PK_OPERATOR_GROUP            |     1 |     4 |       |       |
    |  28 |   INDEX UNIQUE SCAN                     | PK_OPERATOR_GROUP            |     1 |     4 |       |       |
    |  29 |   INDEX UNIQUE SCAN                     | PK_OPERATOR_GROUP            |     1 |     4 |       |       |
    |  30 |   INDEX UNIQUE SCAN                     | PK_OPERATOR_GROUP            |     1 |     4 |       |       |
    |  31 |   INDEX UNIQUE SCAN                     | PK_OPERATOR_GROUP            |     1 |     4 |       |       |
    Predicate Information (identified by operation id):
       2 - access("SYS_ALIAS_1"."WORKSHEET_ALIQUOT_TYPE_ID"="U_REF_DESIGNATION_USER"."U_ALIQUOT_TYPE")
        3 - access("TEST_USER"."U_PROTOCOL_ID"="U_PROTOCOL_VALUE_USER"."U_PROTOCOL_ID" AND "SYS_ALIAS_3"."TEST_ID"
                   ="TEST_USER"."TEST_ID")
       4 - access("SYS_ALIAS_4"."ALIQUOT_ID"="SYS_ALIAS_3"."ALIQUOT_ID")
       5 - access("SYS_ALIAS_5"."SAMPLE_ID"="SYS_ALIAS_4"."SAMPLE_ID")
       6 - filter("SYS_ALIAS_4"."PLATE_ID" IS NOT NULL AND ("SYS_ALIAS_4"."STATUS"='P' OR "SYS_ALIAS_4"."STATUS"=
                  'V'))
        7 - access("SDG_USER"."U_CLIENT_TYPE"="U_REF_DESIGNATION_USER"."U_CLIENT_TYPE" AND "SYS_ALIAS_6"."SDG_ID"=
                   "SDG_USER"."SDG_ID")
       9 - access("SYS_ALIAS_6"."SDG_ID"="SYS_ALIAS_5"."SDG_ID")
      11 - access("SYS_ALIAS_5"."SAMPLE_ID"="SAMPLE_USER"."SAMPLE_ID")
      13 - access("SAMPLE_USER"."U_BOX_POSITION"="U_REF_DESIGNATION_USER"."U_REFERENCE_POSITION")
      14 - access("U_PROTOCOL_VALUE_USER"."U_PROTOCOL_VARIABLE_VALUE"="U_REF_DESIGNATION_USER"."U_TEST_TECHNIQUE")
      15 - access("U_PROTOCOL_VALUE_USER"."U_PROTOCOL_VARIABLE_ID"="U_PROTOCOL_VARIABLE_USER"."U_PROTOCOL_VARIABL
                  E_ID")
      18 - access("SYS_ALIAS_2"."NAME"='VALUE_Technique')
      19 - access("U_PROTOCOL_VARIABLE_USER"."U_PROTOCOL_VARIABLE_ID"="SYS_ALIAS_2"."U_PROTOCOL_VARIABLE_ID")
      21 - filter("U_REF_DESIGNATION_USER"."U_REFERENCE_STATUS"='A')
    Note: cpu costing is off

    tkprof
    TKPROF: Release 9.2.0.1.0 - Production on Mon May 21 17:32:12 2007
    Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.
    Trace file: d:\oracle\admin\nautt\udump\nautt_ora_956.trc
    Sort options: default
    count    = number of times OCI procedure was executed
    cpu      = cpu time in seconds executing
    elapsed  = elapsed time in seconds executing
    disk     = number of physical reads of buffers from disk
    query    = number of buffers gotten for consistent read
    current  = number of buffers gotten in current mode (usually for update)
    rows     = number of rows processed by the fetch or execute call
    alter session set sql_trace true
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        0      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        0      0.00       0.00          0          0          0           0
    total        1      0.00       0.00          0          0          0           0
    Misses in library cache during parse: 0
    Misses in library cache during execute: 1
    Optimizer goal: CHOOSE
    Parsing user id: 65 
    select VALUE
    from
    nls_session_parameters where PARAMETER='NLS_NUMERIC_CHARACTERS'
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.12          0          3          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1      0.00       0.00          0          0          0           1
    total        3      0.00       0.12          0          3          0           1
    Misses in library cache during parse: 1
    Optimizer goal: CHOOSE
    Parsing user id: 65 
    Rows     Row Source Operation
          1  FIXED TABLE FULL X$NLS_PARAMETERS
    select VALUE
    from
    nls_session_parameters where PARAMETER='NLS_DATE_FORMAT'
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.04          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1      0.00       0.00          0          0          0           1
    total        3      0.00       0.04          0          0          0           1
    Misses in library cache during parse: 1
    Optimizer goal: CHOOSE
    Parsing user id: 65 
    Rows     Row Source Operation
          1  FIXED TABLE FULL X$NLS_PARAMETERS
    select VALUE
    from
    nls_session_parameters where PARAMETER='NLS_CURRENCY'
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.05          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1      0.00       0.00          0          0          0           1
    total        3      0.00       0.05          0          0          0           1
    Misses in library cache during parse: 1
    Optimizer goal: CHOOSE
    Parsing user id: 65 
    Rows     Row Source Operation
          1  FIXED TABLE FULL X$NLS_PARAMETERS
    select to_char(9,'9C')
    from
    dual
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1      0.00       0.00          0          3          0           1
    total        3      0.00       0.00          0          3          0           1
    Misses in library cache during parse: 1
    Optimizer goal: CHOOSE
    Parsing user id: 65 
    Rows     Row Source Operation
          1  TABLE ACCESS FULL DUAL
    SELECT ali.aliquot_id ,
    ctr.u_aliquot_type
    FROM ( SELECT test_user.u_protocol_id,sample_user.u_box_position,sdg_user.u_client_type,aliquot.aliquot_id 
    FROM lims_sys.sdg,lims_sys.sdg_user,lims_sys.sample,lims_sys.sample_user,lims_sys.aliquot,lims_sys.test,lims_sys.test_user 
    WHERE sdg.sdg_id = sdg_user.sdg_id 
    AND sdg.sdg_id = sample.sdg_id 
    AND sample.sample_id = sample_user.sample_id 
    AND sample.sample_id = aliquot.sample_id 
    AND aliquot.aliquot_id = test.aliquot_id 
    AND test.test_id = test_user.test_id 
    AND aliquot.plate_id is not null 
    AND aliquot.status in ('V','P')
    ) ali, 
    ( SELECT u_protocol_value_user.u_protocol_id,u_protocol_value_user.u_protocol_variable_value 
    FROM lims_sys.u_protocol_value_user,lims_sys.u_protocol_variable_user,lims_sys.u_protocol_variable 
    WHERE u_protocol_value_user.u_protocol_variable_id = u_protocol_variable_user.u_protocol_variable_id 
    AND u_protocol_variable_user.u_protocol_variable_id = u_protocol_variable.u_protocol_variable_id 
    AND u_protocol_variable.name = 'VALUE_Technique' 
    ) prt, 
    ( SELECT u_ref_designation_user.u_reference_position, u_ref_designation_user.u_client_type, u_ref_designation_user.u_test_technique,u_ref_designation_user.u_aliquot_type, worksheet_aliquot_type.name 
    FROM lims_sys.u_ref_designation_user,lims_sys.worksheet_aliquot_type 
    WHERE worksheet_aliquot_type.worksheet_aliquot_type_id = u_ref_designation_user.u_aliquot_type 
    AND u_ref_designation_user.u_reference_status = 'A' 
    ) ctr 
    WHERE prt.u_protocol_variable_value = ctr.u_test_technique
    AND ali.u_protocol_id = prt.u_protocol_id 
    AND ali.u_client_type = ctr.u_client_type
    AND ali.u_box_position = ctr.u_reference_position
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.37       0.80          0         90          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1      2.76      14.81      35581      29680          0           2
    total        3      3.14      15.61      35581      29770          0           2
    Misses in library cache during parse: 1
    Optimizer goal: CHOOSE
    Parsing user id: 65 
    Rows     Row Source Operation
          2  FILTER 
          2   HASH JOIN 
          2    HASH JOIN 
        368     HASH JOIN 
        368      HASH JOIN 
         16       TABLE ACCESS FULL ALIQUOT
    127696       HASH JOIN 
        411        TABLE ACCESS FULL SDG_USER
    255392        HASH JOIN 
        411         TABLE ACCESS FULL SDG
    255392         HASH JOIN 
    283925          TABLE ACCESS FULL SAMPLE
    255392          HASH JOIN 
       1472           HASH JOIN 
        330            HASH JOIN 
          1             NESTED LOOPS 
          1              TABLE ACCESS BY INDEX ROWID U_PROTOCOL_VARIABLE
          1               INDEX RANGE SCAN AK_U_PROTOCOL_VARIABLE (object id 70871)
          1              INDEX UNIQUE SCAN PK_U_PROTOCOL_VARIABLE_USER (object id 70873)
       1075             TABLE ACCESS FULL U_PROTOCOL_VALUE_USER
         16            TABLE ACCESS FULL U_REF_DESIGNATION_USER
    283925           TABLE ACCESS FULL SAMPLE_USER
    630844      TABLE ACCESS FULL TEST
    630844     TABLE ACCESS FULL TEST_USER
         25    TABLE ACCESS FULL WORKSHEET_ALIQUOT_TYPE
          0   INDEX UNIQUE SCAN PK_OPERATOR_GROUP (object id 70347)
          0   INDEX UNIQUE SCAN PK_OPERATOR_GROUP (object id 70347)
          0   INDEX UNIQUE SCAN PK_OPERATOR_GROUP (object id 70347)
          0   INDEX UNIQUE SCAN PK_OPERATOR_GROUP (object id 70347)
          0   INDEX UNIQUE SCAN PK_OPERATOR_GROUP (object id 70347)
          0   INDEX UNIQUE SCAN PK_OPERATOR_GROUP (object id 70347)
    select 'x'
    from
    dual
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        2      0.00       0.00          0          0          0           0
    Execute      2      0.00       0.00          0          0          0           0
    Fetch        2      0.00       0.00          0          6          0           2
    total        6      0.00       0.00          0          6          0           2
    Misses in library cache during parse: 1
    Optimizer goal: CHOOSE
    Parsing user id: 65 
    Rows     Row Source Operation
          1  TABLE ACCESS FULL DUAL
    begin :id := sys.dbms_transaction.local_transaction_id; end;
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        2      0.00       0.00          0          0          0           0
    Execute      2      0.00       0.00          0          0          0           2
    Fetch        0      0.00       0.00          0          0          0           0
    total        4      0.00       0.00          0          0          0           2
    Misses in library cache during parse: 1
    Optimizer goal: CHOOSE
    Parsing user id: 65 
    OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        9      0.37       1.03          0         93          0           0
    Execute     10      0.00       0.00          0          0          0           2
    Fetch        7      2.76      14.81      35581      29689          0           8
    total       26      3.14      15.84      35581      29782          0          10
    Misses in library cache during parse: 7
    Misses in library cache during execute: 1
    OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse       51      0.00       0.00          0          0          0           0
    Execute     52      0.00       0.00          0          0          0           0
    Fetch       52      0.00       0.00          0        110          0          51
    total      155      0.00       0.00          0        110          0          51
    Misses in library cache during parse: 2
       10  user  SQL statements in session.
       51  internal SQL statements in session.
       61  SQL statements in session.
    Trace file: d:\oracle\admin\nautt\udump\nautt_ora_956.trc
    Trace file compatibility: 9.00.01
    Sort options: default
           1  session in tracefile.
          10  user  SQL statements in trace file.
          51  internal SQL statements in trace file.
          61  SQL statements in trace file.
          12  unique SQL statements in trace file.
          561  lines in trace file.
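For reference, a trace/report cycle like the one shown above is typically produced along these lines. This is only a sketch: the trace-file path in the comment is the one TKPROF reported in this output, and the report file name is illustrative.

```sql
-- In the session to be traced:
ALTER SESSION SET sql_trace = TRUE;

-- ... run the statements of interest ...

ALTER SESSION SET sql_trace = FALSE;

-- Then format the raw trace file from the OS command line, e.g.:
--   tkprof d:\oracle\admin\nautt\udump\nautt_ora_956.trc report.txt sort=default
```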

    In this case the actual number of rows in ALIQUOT is 16,
    whereas the CBO thinks there are 156K. I read 14K instead of 156K.
    Suppose the actual number of rows can vary from 0 to 500;
    would the cardinality hint setting be 500?

    It would have to be small enough for the cost based optimizer to realize it should start its access plan from the other end. If 500 is enough to do that, then that's fine. Just experiment.
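
    For reference, the experiment might look something like this. This is only a sketch: the table alias, the predicate column, and the bind variable are assumptions for illustration, not taken from your actual query.

    ```sql
    -- Sketch: tell the CBO to assume roughly 500 rows will come back from
    -- ALIQUOT, instead of the ~156K it currently estimates. The alias "a"
    -- and the SAMPLE_ID predicate are hypothetical; adapt to the real query.
    SELECT /*+ CARDINALITY(a 500) */ a.*
    FROM   aliquot a
    WHERE  a.sample_id = :sample_id;
    ```

    If 500 is not small enough to flip the access plan, lowering the hinted value further toward the real row count is the usual next step.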
    Groet,
    Rob.

  • Problems reading AEBS 5th gen. configuration

    Since purchasing a new model Airport Extreme I've been having difficulty consistently reading its configuration using the Airport Utility from a 4th generation model wirelessly.  Has anyone encountered similar issues?  Without making any changes to any one of the devices in my wifi network, I'll be able to see the 5th generation model after the Airport Utility is opened and scans for devices. I can then connect to any one of the other devices on my network, except the 5th gen. one (which is the device connected to my internet connection). It will try connecting for a few minutes and finally stop with a notice telling me to check my connection and re-scan. This shouldn't be necessary, since I've confirmed that all of the devices on the network still have internet access, including the one I'm running the Airport Utility on. This means there is a connection between the two devices, but the Airport Utility cannot read the configuration.  I have a strong suspicion that there is a firmware/software problem with the new 5th gen. Airport Extreme communicating with earlier models via the Airport Utility.
    So I was wondering if anyone else has seen anything similar?

    It looks like I am having a similar issue with my 5th generation AEBS; this is my first AEBS, though, as I switched after my other wireless router died.
    Everything is kosher when I first power up or reboot the 5th gen. AEBS. 
    I have an iMac (wireless) and a wired PC laptop (non-apple) which I use for work connected to the AEBS, plus a few other Apple devices.  I use VNC to remote connect to iMac (running Snow Leopard) and this works well right after I reboot the AEBS.  I've also got a USB disk attached to the AEBS for use with Time Machine, which again works well right after rebooting the AEBS.
    However, after a few hours the AEBS appears to disappear. Time Machine stops working because it can't find the disk anymore, and the Airport Utility on my iMac can't even detect the AEBS when scanning, although the Airport Utility on my Windows laptop can still connect to it. VNC stops working from my PC and iPhone/iPad because they can't find the iMac anymore. I can still access my USB disks by using the IP address and "connecting to server" in Finder, but Finder doesn't report them automatically anymore, and Time Machine still can't see the disk when I do this. Despite all that, each of these systems can still connect to the internet, and my PC can still talk to the network printer I have.
    The only solution is to reboot the AEBS at which point everything starts working again.
