10g buffer keep pool

Hi,
If I create a db_keep_cache_size pool and specify that a table's storage buffer_pool is the keep area, and further assume that this table is somewhat large (9 GB) and contains, say, a year's worth of data, but 99 percent of queries only hit the 30 most current days: what portion of the table is going to be loaded into the keep pool? Only those blocks accessed by the query, or does the whole table get loaded? This makes a big difference, as I could make the pool about 1/10th of 9 GB (acceptable) as opposed to 9 GB (unacceptable).
What about if the keep pool fills up... will entire objects get pushed out, or just some of an object's blocks?
Thanks,
Chris

user10896542 wrote:
Hi,
If I create a db_keep_cache_size pool and specify that a table's storage buffer_pool is the keep area, and further assume that this table is somewhat large (9 GB) and contains, say, a year's worth of data, but 99 percent of queries only hit the 30 most current days: what portion of the table is going to be loaded into the keep pool? Only those blocks accessed by the query, or does the whole table get loaded? This makes a big difference, as I could make the pool about 1/10th of 9 GB (acceptable) as opposed to 9 GB (unacceptable).
Hi,
Only the data used by the query would be cached. But you said 99% of queries hit the 30 most current days, so there is a chance that the 1% of queries hitting older data will cause that older data to be cached in the keep pool as well.
What about if the keep pool fills up... will entire objects get pushed out, or just some of an object's blocks?
Oracle has an LRU algorithm which decides which blocks remain and which go out. No, it does not push out entire objects, only the blocks that Oracle decides are no longer needed according to the LRU algorithm.
Thanks,
Chris
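
For what it's worth, one way to verify that only the touched blocks end up cached is to count the buffers held for the object in V$BH. This is only a sketch; the owner and table name are placeholders, and the count covers all cached buffers for the object (which, for a segment assigned to the KEEP pool, is where those buffers live):

select count(*) cached_blocks
from   v$bh
where  objd = (select data_object_id
               from   dba_objects
               where  owner = 'SCOTT'
               and    object_name = 'BIG_TABLE');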

Similar Messages

  • Cannot Load Index Into Keep Pool

    I've successfully loaded my ctxsys.context index into the keep pool (works great) using the web page: http://www.oracle.com/technology/products/text/htdocs/mem_load.html .
    I'm now trying the techniques to load a regular index (that joins to my text index).
    I've executed:
    alter index PROD_SVC_NDX storage (buffer_pool keep);
    Then I tried the hint INDEX_FFS on the base table:
    select /*+ INDEX_FFS ( INDEXDATA , PROD_SVC_NDX ) */ count(svc_code) FROM INDEXDATA ;
    The optimizer shows a fast full scan in the plan:
    INDEX FAST FULL SCAN| PROD_SVC_NDX |524K| 2047K|251 (2)| 0:00:04
    When I query the buffer cache, only 9% of the index is in the keep pool:
    Owner     Name            Type     Cache    %Blocks    Pool    BSize
    SYSTEM    PROD_SVC_NDX    INDEX    1,248          9    KEEP    8,192
    How do you load regular index blocks into keep pool?
    What would be the best Oracle forum for this question?
    Joe

    I think the optimizer will always use a full scan for such a query, regardless of hints. What you need to do is fetch all rows individually. This can be done in a PL/SQL block with an outer loop which fetches all the indexed values, and an inner select that performs an indexed lookup with each value. For example:
    Given a table with a primary key index:
    create table foo (pk number primary key, bar varchar2(2));
    Populated:
    insert into foo values (1, 'aa');
    insert into foo values (2, 'ab');
    insert into foo values (3, 'ac');
    We could do this:
    declare
      v_bar varchar2(2);
    begin
      for q in (select pk from foo) loop
        select bar into v_bar from foo where pk = q.pk;
      end loop;
    end;
    /
    It may well not be necessary to fetch every row - you should experiment and see whether maybe fetching every 10th row - or even every 100th row - is sufficient to fetch all index blocks into the SGA.
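    As a rough illustration of the "fetch every Nth row" idea (the modulus counter and the sampling interval below are hypothetical additions, not part of the original post):
    declare
      v_bar foo.bar%type;
      v_n   pls_integer := 0;
    begin
      -- probe the table via the primary key for every 10th value only
      for q in (select pk from foo order by pk) loop
        v_n := v_n + 1;
        if mod(v_n, 10) = 1 then
          select bar into v_bar from foo where pk = q.pk;
        end if;
      end loop;
    end;
    /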
    - Roger

  • ODI 10g - session keeps a table locked

    Hi,
    We have a random issue with an ODI session that keeps a lock on a table, even though replication is finished and the session becomes inactive.
    It generated deadlocks, as a trigger has to update the target table.
    What happened:
    - the user application creates rows (13)
    - an ODI scenario replicates the rows (contract table)
    - a 2nd scenario based on the same source, with another subscriber, runs a stored procedure to create records in another table (around 30 rows, positions table)
    This 2nd scenario locked the target table, and when the run of the procedure finished and committed, the lock was not released.
    - ODI replicates another table (price) 30 minutes later; a trigger on the target updates the positions table with new values
    ---> the trigger failed with a deadlock (ORA-00060)
    ---> ODI failed as the trigger raised the error back
    This issue happened after 10 hours of the same activity without problems, chained many times, but suddenly the lock became persistent (more than 4 hours).
    What can I do?
    We use ODI 10g 10.1.3.5.0 - RDBMS 10.2.0.4.

    Hi !
    For small tables which are mostly accessed with full table scans you can use:
    ALTER TABLE <table_name> STORAGE (BUFFER_POOL KEEP);
    The KEEP pool should be properly sized; with this setting, once the table has been read, Oracle will avoid flushing it out of the pool.
    T

  • PUT large table in the keep pool?

    I would like to buffer a table in the keep pool:
    The table I would like to put in the keep pool is 200Mb (which is really large). The corresponding indices have a size of 70MB.
    Currently, many blocks of the table are stored in the default pool. But some blocks of that huge table are not accessed for a long time, so they are purged from the default pool due to the LRU strategy (when blocks of other tables are queried and buffered).
    If the table (which is supposed to be in the keep pool) is queried again, the queried blocks have to be reloaded, plus the corresponding index blocks.
    All these reloads add up to 600 MB in three hours, which could be saved if the table and its indices were stored in the keep pool.
    The table itself has only one key field :-(
    The table has many, many indices, because the table is accessed by many different fields :-(
    The table is changed by updates and inserts 400 times in 3 hours :-( ----> indices are updated :-(
    I cannot split that table :-(
    I cannot increase the size of the cache :-(
    I don't know if response times (of queries targeting that table) would decrease significantly if the table were stored in the keep pool.
    What do you think? Would there be a significant change in terms of the response time?
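    For reference, a minimal sketch of the statements this would involve (the object names and the 300M figure are placeholders derived from the 200 MB table plus 70 MB of indexes mentioned above, and assume the memory can actually be spared):
    alter system set db_keep_cache_size = 300M scope=both;
    alter table big_lookup_table storage (buffer_pool keep);
    alter index big_lookup_ix1   storage (buffer_pool keep);
    -- blocks already in the default pool are unaffected; only blocks read
    -- after the ALTER are placed in the KEEP pool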

    Hi abapers,
    the error is here:
    DATA: lr_error TYPE REF TO cx_address_bcs.
    DATA: lv_message TYPE string.
    loop at tab_destmail_f into wa_destmail_f.   "or assigning fs
    TRY.
        recipient = cl_cam_address_bcs=>create_internet_address( wa_destmail_f-smtp_addr ).
    >>>>>>>>    send_request->add_recipient( recipient ).
      CATCH cx_address_bcs INTO lr_error.
        lv_message = lr_error->get_text( ).
    ENDTRY.
    endloop.
    The messages are:
    Access via 'NULL' object reference not possible.
    Runtime error:   OBJECTS_OBJREF_NOT_ASSIGNED
    Exception:       CX_SY_REF_IS_INITIAL
    Date and time:   21.01.2010 19:00:55
    The content of lv_message is:
    {O:322*CLASS=CX_ADDRESS_BCS}
    Exception CX_ADDRESS_BCS occurred (program: CL_CAM_ADDRESS_BCS============CP, include CL_CAM_ADDRESS_BCS============CM01W, line: 45)
    I don't find the solution, any saving code?
    This is very important to me. Thanks a lot for your help!

  • I need help in configuring keep pool.

    Hi All,
    From my AWR report my db is experiencing small-table full scans. Now I am planning to configure the keep pool. I need to know how to calculate the keep pool size, and what the impact/implications will be if I configure it.
    Please give advice for me.
    Regards,
    Kiran
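    As a starting point for the size calculation, a sketch like the following sums the segments you plan to assign to the KEEP pool (the owner and segment names are placeholders):
    select sum(bytes)/1024/1024 mb_needed
    from   dba_segments
    where  (owner, segment_name) in (('SCOTT','LOOKUP1'),
                                     ('SCOTT','LOOKUP2'));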

    Aman,
    I thought I'd written at least one "official" note on it, but I can't find anything. However, here's a reference to an old news thread (2003) where the old behaviour was noted for 8i and 9i.
    http://groups.google.com/group/comp.databases.oracle.server/browse_frm/thread/a62e495f62feb9f2/f3627c964d63071c
    My comments start at note 16.
    There's probably a news item somewhere where I point out that the behaviour changed in 10g (possibly only R2).
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk

  • FTS on small Materialized View, should I cache it in the KEEP Pool ?

    Hi all,
    I have a small MV (1773 rows) that is used in a query JOIN (the query and the explain plan are attached below). Although I have already created an index on the MV, it is always accessed with a FTS in the query.
    I read a tuning tip that a small table accessed with FTS should be cached in the KEEP pool, with this command:
    ALTER TABLE ITT.MV_CONVERT_UOM STORAGE (BUFFER_POOL KEEP);
    Should I do this ?
    Thank you for your help,
    xtanto.
    Query & explain PLAN :
    SELECT so_id_hdr, product_ord, qty_ord, UOM, MV.UOM_B, MV.UOM_K
    FROM SALESORDER_D SOD
    JOIN MV_CONVERT_UOM MV ON MV.PRODUCT = SOD.PRODUCT_ORD
    WHERE SO_id_hdr = 31944
    Plan hash value: 1323612888
    | Id  | Operation                         | Name             | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | SELECT STATEMENT                  |                  |     5 |   225 |     5  (20)| 00:00:01 |        |      |            |
    |   1 |  PX COORDINATOR                   |                  |       |       |            |          |        |      |            |
    |   2 |   PX SEND QC (RANDOM)             | :TQ10001         |     5 |   225 |     5  (20)| 00:00:01 |  Q1,01 | P->S | QC (RAND)  |
    |*  3 |    HASH JOIN                      |                  |     5 |   225 |     5  (20)| 00:00:01 |  Q1,01 | PCWP |            |
    |   4 |     BUFFER SORT                   |                  |       |       |            |          |  Q1,01 | PCWC |            |
    |   5 |      PX RECEIVE                   |                  |     5 |   135 |     2   (0)| 00:00:01 |  Q1,01 | PCWP |            |
    |   6 |       PX SEND BROADCAST           | :TQ10000         |     5 |   135 |     2   (0)| 00:00:01 |        | S->P | BROADCAST  |
    |   7 |        TABLE ACCESS BY INDEX ROWID| SALESORDER_D     |     5 |   135 |     2   (0)| 00:00:01 |        |      |            |
    |*  8 |         INDEX RANGE SCAN          | SALESORDER_D_FKH |     5 |       |     1   (0)| 00:00:01 |        |      |            |
    |   9 |     PX BLOCK ITERATOR             |                  |  1773 | 31914 |     2   (0)| 00:00:01 |  Q1,01 | PCWC |            |
    |  10 |      MAT_VIEW ACCESS FULL         | MV_CONVERT_UOM   |  1773 | 31914 |     2   (0)| 00:00:01 |  Q1,01 | PCWP |            |
    Predicate Information (identified by operation id):
       3 - access("MV"."PRODUCT"="SOD"."PRODUCT_ORD")
       8 - access("SOD"."SO_ID_HDR"=31944)

    Hi Leo, below is execution plan for the query you gave :
    Plan hash value: 1323612888
    | Id  | Operation                         | Name             | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | SELECT STATEMENT                  |                  |     5 |   200 |     5  (20)| 00:00:01 |        |      |            |
    |   1 |  PX COORDINATOR                   |                  |       |       |            |          |        |      |            |
    |   2 |   PX SEND QC (RANDOM)             | :TQ10001         |     5 |   200 |     5  (20)| 00:00:01 |  Q1,01 | P->S | QC (RAND)  |
    |*  3 |    HASH JOIN                      |                  |     5 |   200 |     5  (20)| 00:00:01 |  Q1,01 | PCWP |            |
    |   4 |     BUFFER SORT                   |                  |       |       |            |          |  Q1,01 | PCWC |            |
    |   5 |      PX RECEIVE                   |                  |     5 |   110 |     2   (0)| 00:00:01 |  Q1,01 | PCWP |            |
    |   6 |       PX SEND BROADCAST           | :TQ10000         |     5 |   110 |     2   (0)| 00:00:01 |        | S->P | BROADCAST  |
    |   7 |        TABLE ACCESS BY INDEX ROWID| SALESORDER_D     |     5 |   110 |     2   (0)| 00:00:01 |        |      |            |
    |*  8 |         INDEX RANGE SCAN          | SALESORDER_D_FKH |     5 |       |     1   (0)| 00:00:01 |        |      |            |
    |   9 |     PX BLOCK ITERATOR             |                  |  1773 | 31914 |     2   (0)| 00:00:01 |  Q1,01 | PCWC |            |
    |  10 |      MAT_VIEW ACCESS FULL         | MV_CONVERT_UOM   |  1773 | 31914 |     2   (0)| 00:00:01 |  Q1,01 | PCWP |            |
    I have tried using index hints like the one below, but it still does a FTS.
    EXPLAIN PLAN FOR     
    SELECT /*+ INDEX(MV_CONVERT_UOM MV_CONVERT_UOM_IDX1) */sod.so_id_hdr ,sod.product_ord ,
         sod.qty_ord ,sod.uom ,mv.uom_b ,
         mv.uom_k FROM SALESORDER_D sod ,
         MV_CONVERT_UOM mv WHERE mv.product = sod.product_ord AND
         sod.so_id_hdr = 31944
    what to do now ?
    Thank you,
    xtanto

  • Identifying candidates for KEEP pool

    Hi,
    recently a DBA I know was asked to make a list of candidates for KEEP pool in buffer cache. He's not familiar with the application. Is there an automated way to do this, or is this something that only someone familiar with the business logic can do manually?
    I googled a bit on the subject, but haven't found much except for Mr. Burleson's scripts (who has a history of giving idiotic advice so I don't trust him) or rather general theoretical discussions (like the thread on AskTom). I'm interested in someone who has done it (or learned why this cannot be done) himself sharing his experience.
    Thanks in advance!
    Best regards,
    Nikolay

    rp0428 wrote:
    Jonathan - none of those referenced articles seem to have a recommendation as to what is appropriate for the KEEP cache. There is a mention of a large frequently used object.
    I thought that small lookup tables that were frequently used were the best candidates for the KEEP pool. Is this not the case?
    That suggestion was a consequence of a bug with the touch count algorithm that has been fixed. If blocks were read by TABLESCAN then their touch count was never incremented, so they would always fall off the end of the buffer cache LRU fairly promptly, hence putting them into a KEEP cache would solve the problem. However if the table was being used as a lookup table - with primary key access - the problem didn't arise and if it were a popular lookup then the normal LRU mechanism would keep it cached.
    Can you provide any links to your books or articles that have a recommendation on how to select objects for the keep pool?
    I don't think there are any - I think it's only possible to make a highly generic statement and then think about individual cases and watch out for unexpected side effects. (That's what my comments above, and the articles, are about.)
    From my perspective - every time I've tried to beat the LRU by creating a KEEP pool I haven't managed to do any better than the default LRU and all I've managed to do is limit the flexibility of the system by carving a load of memory off the main cache.
    I have used the KEEP pool successfully - but not for "KEEP"ing; I've used it simply to cache LOBs in a small part of the cache. I've also had a couple of successes with the RECYCLE pool as a damage limitation mechanism.
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    Author: <b><em>Oracle Core</em></b>

  • Difference between Keep pool vs Recycle pool vs Default pool

    Good Morning experts ;
    I need to know the differences between the Keep pool, the Recycle pool, and the Default pool,
    and how they behave differently from each other.
    Thanks in advance ..

    8f953842-815b-4d8c-833d-f2a3dd51e602 wrote:
      Thanks for your answer  MARG.
    If I pin an object into the keep pool, does the entire object (all blocks) come into the buffer pool?
    But you say that, depending on the query plan,
    Oracle will place only portions of objects into the buffer cache at any one time.
    Example :
    >> TO pin a table
    SQL> alter table emp  storage (buffer_pool keep);             
    This table has 1 million records and contains 'n' columns.
    Consider that I need output from the name, emp_id and salary columns only,
    i.e. who is getting a salary of more than $8000.
    Oracle will show the required output. As per your explanation, I can't guess...
    Question: how will Oracle place only portions of objects into the buffer instead of the entire object?
    Please elaborate a little more.
    Oracle uses blocks.  The rows are in blocks.  When you ask for a column in a row, Oracle has to get the block.  When you ask for a couple of columns from many rows, Oracle has to get many blocks.  Oracle makes copies of blocks.  Oracle has to manage possibly many people accessing the same  or different rows in those blocks.  Each one needs to have the block appear as though it did when the transaction started.
    Oracle has many ways to get the blocks.  It can look in the SGA, if an appropriate one is not there it can read it from disk, or it may decide to read many blocks at once from a disk, or it could even decide to just read as much as it can into a user's PGA, perhaps also going to undo in any of those ways to make a read consistent copy for the user.
    So when you look at statistics for a session, you might see sequential gets or scattered gets.  The former is often from index access, then a single block is gotten from wherever, and placed in the SGA.  The latter is often from scanning, and the blocks are scattered about, as they aren't necessarily going to be gotten in an order.  Remember, an Oracle block may be a number of operating system blocks, and a multi[-oracle]-block read may be a lot of data.
    So, with all these blocks going into the SGA, it has to decide what stays and what goes.  It uses a least-recently-used (LRU) algorithm to eject blocks, and may read blocks into the middle or the end of the list, depending.  That's why the default buffer pool works so well, anything continually accessed will in the grand scheme of things be kept hot and stay there.  When SGA's were much smaller, it was a lot easier to have not-quite-so-hot things get ejected and written out, only to be read in soon after, so the alternate pools would allow those places to be kept, or recycled, as arbitrarily defined.
    So think of blocks as the portion of objects in the SGA.  There usually are multiple copies of blocks.

  • Keep pool LRU algorithms

    Hi! every one,
    Can anyone tell me whether the keep pool follows the LRU algorithm or not? If yes, please explain.

    Hi,
    I'm taking my words back, today is not my day, better go to bed and take some care of this flu.
    Use of the KEEP Pool (By Don)
    The KEEP database buffer pool is configured using the BUFFER_POOL_KEEP initialization parameter, which looks like this:
    BUFFER_POOL_KEEP = '100,2'
    In Oracle9i, this becomes DB_KEEP_CACHE_SIZE and is specified in megabytes.
    The two specified parameters are, respectively, the number of buffers from the default pool to assign to the KEEP pool and the number of LRU (least recently used) latches to assign to the KEEP pool. The minimum number of buffers assigned to the pool is 50 times the number of assigned latches. The KEEP pool, as its name implies, is used to store object data that shouldn’t be aged out of the buffer pool, such as lookup information and specific performance-enhancing indexes. The objects are assigned to the KEEP pool either through their creation statement or by specifically assigning them to the pool using the ALTER command. Blocks already in the default pool are not affected by the ALTER command, only subsequently accessed blocks are.
    The KEEP pool should be sized such that it can hold all the blocks from all of the tables created with the buffer pool set to KEEP.
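    To make the two forms concrete, here is a rough sketch (the parameter values and table name are illustrative only, not from the quoted text):
    -- 8i-style init.ora setting: <number of buffers>,<number of LRU latches>
    -- BUFFER_POOL_KEEP = '100,2'
    -- 9i and later: size the keep cache directly
    ALTER SYSTEM SET db_keep_cache_size = 100M SCOPE=BOTH;
    -- assign an object; only blocks read after this point go to the KEEP pool
    ALTER TABLE lookup_codes STORAGE (BUFFER_POOL KEEP);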
    Regards,
    Francisco Munoz Alvarez

  • Keep pool

    hi gurus,
    Is there any query which fetches the objects (tables) in the keep pool?
    For me, if I query one particular table, the elapsed time per execution in the AWR report is showing almost 4 min.
    For all other tables it is showing at most 13 sec.
    Will we get any benefit if we put that particular table in the keep pool?
    I just want to cross-check whether this table is in the keep pool.
    Regards,
    Vamsi.
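    As a quick cross-check, segments assigned to the KEEP pool can be listed from DBA_SEGMENTS (a sketch; filter by owner as needed - this shows the assignment only, not whether the blocks are currently cached):
    select owner, segment_name, segment_type
    from   dba_segments
    where  buffer_pool = 'KEEP'
    order  by owner, segment_name;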

    I've repeated myself from another thread, but this kind of tuning should be one of the last things you look at, not the first.
    If the data from this table really is being constantly aged out of the buffer cache then you should understand why before moving it.
    Look at the particular SQL that is accessing this data rather than the table itself, if you want a hand, post it here, using the following
    How to post a SQL tuning request - HOW TO: Post a SQL statement tuning request - template posting
    When your query takes too long - When your query takes too long ...
    However, shot in the dark, does this offending table contain LOB/CLOB data?

  • Table size exceeds Keep Pool Size (db_keep_cache_size)

    Hello,
    We have a situation where one of our applications has been performing badly since last week.
    After some analysis, it was found this was due to data increase in a table that was stored in KEEP POOL.
    After the data increase, the table size exceeded db_keep_cache_size.
    I was of the opinion that in such cases KEEP POOL will still be used but the remaining data will be brought in as needed from the table.
    But, I ran some tests and found it is not the case. If the table size exceeds db_keep_cache_size, then KEEP POOL is not used at all.
    Is my inference correct here ?
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
    PL/SQL Release 11.2.0.2.0 - Production
    CORE    11.2.0.2.0      Production
    TNS for Linux: Version 11.2.0.2.0 - Production
    NLSRTL Version 11.2.0.2.0 - Production
    Setup
    SQL> show parameter keep                    
    NAME                                 TYPE        VALUE
    buffer_pool_keep                     string
    control_file_record_keep_time        integer     7
    db_keep_cache_size                   big integer 4M
    SQL>
    SQL>     
    SQL> create table t1 storage (buffer_pool keep) as select * from all_objects union all select * from all_objects;
    Table created.
    SQL> set autotrace on
    SQL>
    SQL> exec print_table('select * from user_segments where segment_name = ''T1''');
    PL/SQL procedure successfully completed.
    SQL> set serveroutput on
    SQL> exec print_table('select * from user_segments where segment_name = ''T1''');
    SEGMENT_NAME                  : T1
    PARTITION_NAME                :
    SEGMENT_TYPE                  : TABLE
    SEGMENT_SUBTYPE               : ASSM
    TABLESPACE_NAME               : HR_TBS
    BYTES                         : 16777216
    BLOCKS                        : 2048
    EXTENTS                       : 31
    INITIAL_EXTENT                : 65536
    NEXT_EXTENT                   : 1048576
    MIN_EXTENTS                   : 1
    MAX_EXTENTS                   : 2147483645
    MAX_SIZE                      : 2147483645
    RETENTION                     :
    MINRETENTION                  :
    PCT_INCREASE                  :
    FREELISTS                     :
    FREELIST_GROUPS               :
    BUFFER_POOL                   : KEEP
    FLASH_CACHE                   : DEFAULT
    CELL_FLASH_CACHE              : DEFAULT
    PL/SQL procedure successfully completed.
    DB_KEEP_CACHE_SIZE=4M
    SQL> select count(*) from t1;
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              9  recursive calls
              0  db block gets
           2006  consistent gets
           2218  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL> /
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1940  consistent gets
           1937  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    DB_KEEP_CACHE_SIZE=10M
    SQL> connect / as sysdba
    Connected.
    SQL>
    SQL> alter system set db_keep_cache_size=10M scope=both;
    System altered.
    SQL>
    SQL> connect hr/hr@orcl
    Connected.
    SQL>
    SQL> show parameter keep
    NAME                                 TYPE        VALUE
    buffer_pool_keep                     string
    control_file_record_keep_time        integer     7
    db_keep_cache_size                   big integer 12M
    SQL>
    SQL> set autotrace on
    SQL>
    SQL> select count(*) from t1;
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1940  consistent gets
           1937  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL> /
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1940  consistent gets
           1937  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    DB_KEEP_CACHE_SIZE=20M
    SQL> connect / as sysdba
    Connected.
    SQL>
    SQL> alter system set db_keep_cache_size=20M scope=both;
    System altered.
    SQL>
    SQL> connect hr/hr@orcl
    Connected.
    SQL>
    SQL> show parameter keep
    NAME                                 TYPE        VALUE
    buffer_pool_keep                     string
    control_file_record_keep_time        integer     7
    db_keep_cache_size                   big integer 20M
    SQL> set autotrace on
    SQL> select count(*) from t1;
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1943  consistent gets
           1656  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL> /
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1943  consistent gets
              0  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    Only with a 20M db_keep_cache_size do I see no physical reads.
    Does it mean that if the db_keep_cache_size < table size, there is no caching for that table ?
    Or am I missing something ?
    Rgds,
    Gokul
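    (One side note on the 10M setting showing up as 12M: the keep cache is allocated in whole granules, so the value you set is rounded up. The current allocation can be checked with a query such as this sketch:)
    select name, block_size, current_size
    from   v$buffer_pool;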

    Hello Jonathan,
    Many thanks for your response.
    Here is the test I ran;
    SQL> select buffer_pool,blocks from dba_tables where owner = 'HR' and table_name = 'T1';
    BUFFER_     BLOCKS
    KEEP          1977
    SQL> select count(*) from v$bh where objd = (select data_object_id from dba_objects where owner = 'HR' and object_name = 'T1');
      COUNT(*)
          1939
    SQL> show parameter db_keep_cache_size
    NAME                                 TYPE        VALUE
    db_keep_cache_size                   big integer 20M
    SQL>
    SQL> alter system set db_keep_cache_size = 5M scope=both;
    System altered.
    SQL> select count(*) from hr.t1;
      COUNT(*)
        135496
    SQL> select count(*) from v$bh where objd = (select data_object_id from dba_objects where owner = 'HR' and object_name = 'T1');
      COUNT(*)
            992
    I think my inference is wrong and, as you said, I am indeed seeing the effect of the tail end of the scan flushing the start of the table.
    Rgds,
    Gokul

  • KEEP POOL and count(*)

    Hello,
    I resized db_keep_cache_size and altered the tables and indexes -> storage (buffer_pool keep).
    Now, I think, I have to select * from the tables.
    Is the command select count(*) from table an equivalent, please?
    If I run select count(*), disk activity is at 100% and it takes 2 minutes. But when I run a script containing
    set termout off
    select * from table;
    set termout on
    it takes a very, very long time and disk activity is maybe at 5%. Could you help me with this please?
    Thank you very much! :)
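    For what it's worth, a commonly suggested warm-up pattern is sketched below (the hint and column name are placeholders; counting a column that is not indexed stops the count being satisfied from an index alone, and whether a serial full scan actually populates the buffer cache can still depend on version and direct-path read decisions):
    select /*+ full(t) */ count(some_unindexed_column)
    from   big_table t;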

    Ondrej T. wrote:
    I'm creating an application for only one user. The data in the tablespace is static - writing is not possible, only reading.
    There are 4 tables (7+7+3+18) GB.
    I want to put them into the keep pool (allocated 40 GB).
    I altered the tables and indexes. But the data will only be in the pool after executing
    select * from tables
    When I run this command, execution is very slow. Disk usage - 5%.
    1) Why? Termout is off...
    When I run the app, there will be a check whether the tables are in the pool; if not (server restart), it will execute select * from the tables.
    So, why is it so slow?
    (When I run select count(*) from table, disk usage is 100%)
    Reading 40 GB of data from disk will take a while. Btw, do you have enough RAM to keep the indexes of these tables?
    Have you waited until your first select complete? What about second run?
    Why don't you use an in-memory database solution such as TimesTen?
    Regards
    Gokhan

  • Keep Pool question

    If I specify db_keep_cache_size as 200M and I am presently keeping 150M of tables and indexes in the keep pool, what happens when the size of these segments goes beyond 200M? Does the query throw an error?
    Thank You

    No.
    You simply start ageing out blocks which, by definition, you had hoped to be keeping in memory. Therefore, your performance will suffer, because subsequent reads will be from disk, not memory.
    As a side-effect, the hit ratio on the keep pool will fall. So a keep pool hit ratio that is not very close to 100% is a pretty sure-fire indication that the keep pool is smaller than the tables you've asked it to store.
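    A sketch of that check against V$BUFFER_POOL_STATISTICS (instance-wide figures since startup):
    select name,
           1 - physical_reads / nullif(db_block_gets + consistent_gets, 0) as hit_ratio
    from   v$buffer_pool_statistics
    where  name = 'KEEP';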

  • Table caching in keep pool!!

    Hello,
    I have read in several threads on the net that we rarely need to cache any table (99.9999...% of people don't need it). We have a table which is frequently used during our service flow; the size of the table is approx. 981 MB (8M rows) and memory_target is set to 22 GB.
    We also have requests to cache most of the frequently accessed tables; some are small and some are moderate in size like the one above. I myself am not convinced to fulfil this request, but I also don't have a strong logical explanation for why caching would become a bottleneck in the future.
    Is it the case that keeping the said table in the cache requires much more space in the cache, and that otherwise it will eventually be aged out, for example in the case of updates?
    Please guide me in this regard.
    Regards,
    Raja.

    There is a dependency on what else is in the pool. Presumably a non-default pool would be smaller than a default pool, and a misunderstanding of block copy mechanisms (which of course vary by version) could lead to undersizing the pool, kicking off needed blocks unnecessarily, to where it would have been better to just leave everything in the default pool and let Larry sort it out.
    Of course full scans into the PGA, "what is small?", direct I/O and a gazillion other things mean it depends, depends, depends. Not to mention (see http://jonathanlewis.wordpress.com/2010/03/20/not-keeping/) that large objects assigned to the KEEP pool will always do buffered IO and never direct IO (regardless of the size of the object scan), if bugs don't stop the object from going in in the first place.

  • Needing to add keep pool to SGA, sizing and checking for room?

    Hi all,
    I'm needing to experiment with pinning a table and index (recommended by COTS product vendor) to see if it helps performance.
    I'm trying to set up a keep pool...and put the objects in it
    I've gone into the database, and found that I will need to set up a keep pool:
    SQL> show parameter keep
    NAME TYPE VALUE
    buffer_pool_keep string
    control_file_record_keep_time integer 7
    db_keep_cache_size big integer 0
    That being said, and I'm having a HUGE senior moment right now...how
    do I go about making sure I have enough room to make a little keep
    pool?
    I've looked at my objects I want to put in there, and one is about
    .675 MB, and the other is about .370 MB. So, roughly a little more
    than 1MB
    Looking at my SGA parameters:
    SQL> show parameter sga
    NAME TYPE VALUE
    lock_sga boolean FALSE
    pre_page_sga boolean FALSE
    sga_max_size big integer 572M
    sga_target big integer 572M
    Now...how do I find out what is being used in SGA, to make sure I have room?
    I've been searching around, and trying to come up with some queries. I
    came up with this one:
    SQL> select name, value / (1024*1024) size_mb from v$sga;
    NAME SIZE_MB
    Fixed Size 1.97846222
    Variable Size 232.002007
    Database Buffers 332
    Redo Buffers 6.01953125
    From this, it appears everything is being used....so, not sure what to
    do from here.
    Suggestions and links greatly appreciated!
    cayenne

    SELECT SIZE_FOR_ESTIMATE, BUFFERS_FOR_ESTIMATE, ESTD_PHYSICAL_READ_FACTOR, ESTD_PHYSICAL_READS
      FROM V$DB_CACHE_ADVICE
        WHERE NAME          = 'KEEP'
         AND BLOCK_SIZE    = (SELECT VALUE FROM V$PARAMETER WHERE NAME = 'db_block_size')
         AND ADVICE_STATUS = 'ON';
    SELECT   ds.BUFFER_POOL,
             Substr(do.object_name,1,9) object_name,
             ds.blocks                  object_blocks,
             Count(* )                  cached_blocks
    FROM     dba_objects do,
             dba_segments ds,
             v$bh v
    WHERE    do.data_object_id = v.objd
             AND do.owner = ds.owner (+)
             AND do.object_name = ds.segment_name (+)
             AND do.object_type = ds.segment_type (+)
             AND ds.BUFFER_POOL IN ('KEEP','RECYCLE')
    GROUP BY ds.BUFFER_POOL,
             do.object_name,
             ds.blocks
    ORDER BY do.object_name,
             ds.BUFFER_POOL;
