Scripts to identify small tables for keep pool


Hi
Please find below a script to list segments smaller than 1 GB:
select sum(bytes)/(1024*1024*1024) size_in_gb,
       segment_name
from   user_segments
where  bytes < 1073741824   -- 1 GB
group  by segment_name
order  by 1;
Please try it yourself!
Regards,
Anand.

Similar Messages

  • Identifying candidates for KEEP pool

    Hi,
    recently a DBA I know was asked to make a list of candidates for KEEP pool in buffer cache. He's not familiar with the application. Is there an automated way to do this, or is this something that only someone familiar with the business logic can do manually?
I googled a bit on the subject, but haven't found much except for Mr. Burleson's scripts (he has a history of giving idiotic advice, so I don't trust him) or rather general theoretical discussions (like the thread on AskTom). I'm interested in someone who has done it (or learned why this cannot be done) sharing his experience.
    Thanks in advance!
    Best regards,
    Nikolay

    rp0428 wrote:
Jonathan - none of those referenced articles seem to have a recommendation as to what is appropriate for the KEEP cache. There is a mention of a large frequently used object.
    I thought that small lookup tables that were frequently used were the best candidates for the KEEP pool. Is this not the case?
    That suggestion was a consequence of a bug with the touch count algorithm that has been fixed. If blocks were read by TABLESCAN then their touch count was never incremented, so they would always fall off the end of the buffer cache LRU fairly promptly, hence putting them into a KEEP cache would solve the problem. However if the table was being used as a lookup table - with primary key access - the problem didn't arise and if it were a popular lookup then the normal LRU mechanism would keep it cached.
Can you provide any links to your books or articles that have a recommendation on how to select objects for the keep pool?
I don't think there are any - I think it's only possible to make a highly generic statement and then think about individual cases and watch out for unexpected side effects. (That's what my comments above, and the articles, are about).
    From my perspective - every time I've tried to beat the LRU by creating a KEEP pool I haven't managed to do any better than the default LRU and all I've managed to do is limit the flexibility of the system by carving a load of memory off the main cache.
I have used the KEEP pool successfully - but not for "KEEP"ing; I've used it simply to cache LOBs in a small part of the cache. I've also had a couple of successes with the RECYCLE pool as a damage-limitation mechanism.
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    Author: <b><em>Oracle Core</em></b>

  • Table in Keep Buffer Pool

    Hi DBAs,
I need to identify the database tables and indexes which I can put in the KEEP pool. I need your help.
How can I identify the objects which are good candidates for the KEEP pool? (I want to put in all the tables/indexes which are more than 50% in the db buffer.)
    How can I cache these table and indexes in KEEP Pool ?
    Thanks
    -Samar-

    Hi,
    IMO tables in the KEEP pool should be relatively small, and they should fit in the pool in their entirety.
    Above all those tables should be lookup tables as opposed to fact tables.
To get them in the pool, issue:
alter table <table_name> storage (buffer_pool keep);
    Sybrand Bakker
    Senior Oracle DBA
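To put the advice above into practice, here is a sketch (assuming standard DBA_SEGMENTS columns; the 10 MB cut-off and the excluded schemas are arbitrary illustrations, not a recommendation) that lists small tables and generates the corresponding ALTER statements:

```sql
-- Sketch: list tables under 10 MB (arbitrary cut-off) and generate
-- the DDL to move them to the KEEP pool.
-- Review the candidate list manually before running any generated DDL.
SELECT 'ALTER TABLE ' || owner || '.' || segment_name ||
       ' STORAGE (BUFFER_POOL KEEP);'  AS ddl_stmt,
       ROUND(bytes / 1024 / 1024, 2)   AS size_mb
FROM   dba_segments
WHERE  segment_type = 'TABLE'
AND    bytes < 10 * 1024 * 1024
AND    owner NOT IN ('SYS', 'SYSTEM')
ORDER  BY bytes;
```

The output is only a list of candidates; as noted above, the tables should ideally be frequently used lookup tables that fit in the pool in their entirety.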

  • Keep pool

    hi gurus,
Is there any query which fetches the objects (tables) in the keep pool?
For me, if I query one particular table, the elapsed time per execution in the AWR report shows almost 4 min.
For all other tables it shows at most 13 sec.
Would we get any benefit if we put that particular table in the keep pool?
I just want to cross-check whether this table is in the keep pool.
    Regards,
    Vamsi.

    Repeated myself from another thread but this kind of tuning should be one of the last things you look at, not the first.
If the data from this table really is being constantly aged out of the buffer cache then you should understand why before moving it.
    Look at the particular SQL that is accessing this data rather than the table itself, if you want a hand, post it here, using the following
    How to post a SQL tuning request - HOW TO: Post a SQL statement tuning request - template posting
    When your query takes too long - When your query takes too long ...
    However, shot in the dark, does this offending table contain LOB/CLOB data?
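To cross-check the pool assignment directly (a sketch; 'YOUR_SCHEMA' and 'YOUR_TABLE' are placeholders for the actual names):

```sql
-- Which buffer pool the table is assigned to (DEFAULT, KEEP or RECYCLE).
SELECT owner, table_name, buffer_pool
FROM   dba_tables
WHERE  owner = 'YOUR_SCHEMA'          -- placeholder
AND    table_name = 'YOUR_TABLE';     -- placeholder

-- How many of its blocks are currently in the buffer cache, in any pool.
SELECT COUNT(*) AS cached_blocks
FROM   v$bh
WHERE  objd = (SELECT data_object_id
               FROM   dba_objects
               WHERE  owner = 'YOUR_SCHEMA'      -- placeholder
               AND    object_name = 'YOUR_TABLE');
```

If BUFFER_POOL shows DEFAULT, the table is not in the keep pool; the cached block count then tells you whether it is being aged out at all.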

  • Script to find out table and index candidates to keep in the buffer pool

    I am looking for a script to find out tables and indexes to keep in the buffer pool.
    Could you help me on this ?
    thanks...
    Markus

This is more of an open question, as you know your data well and we do not. Caching tables in the default buffer pool is okay, but they might age out after not being used. Instead you can use the KEEP pool to cache small/popular tables, as the KEEP pool is intended to keep them fully cached.
Here are some links on KEEP pool caching:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/memory.htm#sthref410
    http://www.dba-oracle.com/oracle_tips_cache_small_fts.htm
    http://www.dba-oracle.com/t_script_automate_keep_pool_tables_indexes.htm
    http://www.dba-oracle.com/oracle_news/news_caching_keep_pool_large_objects_clob_blob.htm
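One way to shortlist candidates from the current cache behaviour (a sketch for simple, non-partitioned segments; the "more than half cached" threshold is an arbitrary illustration):

```sql
-- Objects with many blocks currently in the buffer cache, compared
-- to their on-disk size. Small objects that stay heavily cached are
-- possible (not certain) KEEP pool candidates.
SELECT   o.owner, o.object_name, o.object_type,
         COUNT(*)  AS cached_blocks,
         s.blocks  AS total_blocks
FROM     v$bh b
JOIN     dba_objects  o ON o.data_object_id = b.objd
JOIN     dba_segments s ON s.owner = o.owner
                       AND s.segment_name = o.object_name
WHERE    o.owner NOT IN ('SYS', 'SYSTEM')
AND      o.object_type IN ('TABLE', 'INDEX')
GROUP BY o.owner, o.object_name, o.object_type, s.blocks
HAVING   COUNT(*) > 0.5 * s.blocks        -- more than half cached
ORDER BY cached_blocks DESC;
```

As the discussion above notes, a snapshot like this only shows what the LRU is already keeping; it cannot by itself prove that a KEEP pool would do better.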

  • FTS on small Materialized View, should I cache it in the KEEP Pool ?

    Hi all,
I have a small MV (1773 rows) that is used in a query JOIN (the query & the explain plan are attached below). Although I have already created an index for the MV, the query always does a FTS on it.
    I read a Tuning tips, that FTS on small table should be cached in the KEEP POOL, with this command :
    ALTER TABLE ITT.MV_CONVERT_UOM STORAGE (BUFFER_POOL KEEP);
    Should I do this ?
    Thank you for your help,
    xtanto.
    Query & explain PLAN :
    SELECT so_id_hdr, product_ord, qty_ord, UOM, MV.UOM_B, MV.UOM_K
    FROM SALESORDER_D SOD
    JOIN MV_CONVERT_UOM MV ON MV.PRODUCT = SOD.PRODUCT_ORD
    WHERE SO_id_hdr = 31944
    Plan hash value: 1323612888
    | Id  | Operation                         | Name             | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | SELECT STATEMENT                  |                  |     5 |   225 |     5  (20)| 00:00:01 |        |      |            |
    |   1 |  PX COORDINATOR                   |                  |       |       |            |          |        |      |            |
    |   2 |   PX SEND QC (RANDOM)             | :TQ10001         |     5 |   225 |     5  (20)| 00:00:01 |  Q1,01 | P->S | QC (RAND)  |
    |*  3 |    HASH JOIN                      |                  |     5 |   225 |     5  (20)| 00:00:01 |  Q1,01 | PCWP |            |
    |   4 |     BUFFER SORT                   |                  |       |       |            |          |  Q1,01 | PCWC |            |
    |   5 |      PX RECEIVE                   |                  |     5 |   135 |     2   (0)| 00:00:01 |  Q1,01 | PCWP |            |
    |   6 |       PX SEND BROADCAST           | :TQ10000         |     5 |   135 |     2   (0)| 00:00:01 |        | S->P | BROADCAST  |
    |   7 |        TABLE ACCESS BY INDEX ROWID| SALESORDER_D     |     5 |   135 |     2   (0)| 00:00:01 |        |      |            |
    |*  8 |         INDEX RANGE SCAN          | SALESORDER_D_FKH |     5 |       |     1   (0)| 00:00:01 |        |      |            |
    |   9 |     PX BLOCK ITERATOR             |                  |  1773 | 31914 |     2   (0)| 00:00:01 |  Q1,01 | PCWC |            |
    |  10 |      MAT_VIEW ACCESS FULL         | MV_CONVERT_UOM   |  1773 | 31914 |     2   (0)| 00:00:01 |  Q1,01 | PCWP |            |
    Predicate Information (identified by operation id):
       3 - access("MV"."PRODUCT"="SOD"."PRODUCT_ORD")
       8 - access("SOD"."SO_ID_HDR"=31944)

    Hi Leo, below is execution plan for the query you gave :
    Plan hash value: 1323612888
    | Id  | Operation                         | Name             | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | SELECT STATEMENT                  |                  |     5 |   200 |     5  (20)| 00:00:01 |        |      |            |
    |   1 |  PX COORDINATOR                   |                  |       |       |            |          |        |      |            |
    |   2 |   PX SEND QC (RANDOM)             | :TQ10001         |     5 |   200 |     5  (20)| 00:00:01 |  Q1,01 | P->S | QC (RAND)  |
    |*  3 |    HASH JOIN                      |                  |     5 |   200 |     5  (20)| 00:00:01 |  Q1,01 | PCWP |            |
    |   4 |     BUFFER SORT                   |                  |       |       |            |          |  Q1,01 | PCWC |            |
    |   5 |      PX RECEIVE                   |                  |     5 |   110 |     2   (0)| 00:00:01 |  Q1,01 | PCWP |            |
    |   6 |       PX SEND BROADCAST           | :TQ10000         |     5 |   110 |     2   (0)| 00:00:01 |        | S->P | BROADCAST  |
    |   7 |        TABLE ACCESS BY INDEX ROWID| SALESORDER_D     |     5 |   110 |     2   (0)| 00:00:01 |        |      |            |
    |*  8 |         INDEX RANGE SCAN          | SALESORDER_D_FKH |     5 |       |     1   (0)| 00:00:01 |        |      |            |
    |   9 |     PX BLOCK ITERATOR             |                  |  1773 | 31914 |     2   (0)| 00:00:01 |  Q1,01 | PCWC |            |
    |  10 |      MAT_VIEW ACCESS FULL         | MV_CONVERT_UOM   |  1773 | 31914 |     2   (0)| 00:00:01 |  Q1,01 | PCWP |            |
I have tried using an index hint like below, but it still does a FTS.
    EXPLAIN PLAN FOR     
    SELECT /*+ INDEX(MV_CONVERT_UOM MV_CONVERT_UOM_IDX1) */sod.so_id_hdr ,sod.product_ord ,
         sod.qty_ord ,sod.uom ,mv.uom_b ,
         mv.uom_k FROM SALESORDER_D sod ,
         MV_CONVERT_UOM mv WHERE mv.product = sod.product_ord AND
         sod.so_id_hdr = 31944
What should I do now?
    Thank you,
    xtanto

  • Table caching in keep pool!!

    Hello,
I have read several threads on the net saying that we rarely need to cache any table (99.99...% of people don't need it). We have a table which is frequently used during our service flow; the size of the table is about 981 MB (8M rows) and memory_target is set to 22G.
We also have requests to cache most of the frequently accessed tables; some are small and some are moderate in size like the above. I myself am not convinced I should fulfill this request, but I also don't have a strong logical explanation that caching will become a bottleneck in the future.
Is it explainable that keeping the said table in cache requires much more space in the cache, and if there is not enough, it will eventually be aged out? For example, in the case of updates?
    Please guide me in this regard.
    Regards,
    Raja.

    There is a dependency on what else is in the pool. Presumably a non-default pool would be smaller than a default pool, and a misunderstanding of block copy mechanisms (which of course vary by version) could lead to undersizing the pool, kicking off needed blocks unnecessarily, to where it would have been better to just leave everything in the default pool and let Larry sort it out.
Of course full scans into the PGA, what is small?, direct I/O and a gazillion other things mean it depends, depends, depends. Not to mention that large objects assigned to the KEEP pool will always do buffered IO and never direct IO (regardless of the size of the object scan) - see http://jonathanlewis.wordpress.com/2010/03/20/not-keeping/ - if bugs don't stop the object from going in in the first place.

  • Table size exceeds Keep Pool Size (db_keep_cache_size)

    Hello,
    We have a situation where one of our applications started performing bad since last week.
    After some analysis, it was found this was due to data increase in a table that was stored in KEEP POOL.
    After the data increase, the table size exceeded db_keep_cache_size.
    I was of the opinion that in such cases KEEP POOL will still be used but the remaining data will be brought in as needed from the table.
    But, I ran some tests and found it is not the case. If the table size exceeds db_keep_cache_size, then KEEP POOL is not used at all.
    Is my inference correct here ?
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
    PL/SQL Release 11.2.0.2.0 - Production
    CORE    11.2.0.2.0      Production
    TNS for Linux: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production
Setup:
    SQL> show parameter keep                    
    NAME                                 TYPE        VALUE
    buffer_pool_keep                     string
    control_file_record_keep_time        integer     7
    db_keep_cache_size                   big integer 4M
    SQL>
    SQL>     
    SQL> create table t1 storage (buffer_pool keep) as select * from all_objects union all select * from all_objects;
    Table created.
    SQL> set autotrace on
    SQL>
    SQL> exec print_table('select * from user_segments where segment_name = ''T1''');
    PL/SQL procedure successfully completed.
    SQL> set serveroutput on
    SQL> exec print_table('select * from user_segments where segment_name = ''T1''');
    SEGMENT_NAME                  : T1
    PARTITION_NAME                :
    SEGMENT_TYPE                  : TABLE
    SEGMENT_SUBTYPE               : ASSM
    TABLESPACE_NAME               : HR_TBS
    BYTES                         : 16777216
    BLOCKS                        : 2048
    EXTENTS                       : 31
    INITIAL_EXTENT                : 65536
    NEXT_EXTENT                   : 1048576
    MIN_EXTENTS                   : 1
    MAX_EXTENTS                   : 2147483645
    MAX_SIZE                      : 2147483645
    RETENTION                     :
    MINRETENTION                  :
    PCT_INCREASE                  :
    FREELISTS                     :
    FREELIST_GROUPS               :
    BUFFER_POOL                   : KEEP
    FLASH_CACHE                   : DEFAULT
    CELL_FLASH_CACHE              : DEFAULT
PL/SQL procedure successfully completed.
DB_KEEP_CACHE_SIZE=4M
    SQL> select count(*) from t1;
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              9  recursive calls
              0  db block gets
           2006  consistent gets
           2218  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL> /
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1940  consistent gets
           1937  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
          1  rows processed
DB_KEEP_CACHE_SIZE=10M
    SQL> connect / as sysdba
    Connected.
    SQL>
    SQL> alter system set db_keep_cache_size=10M scope=both;
    System altered.
    SQL>
    SQL> connect hr/hr@orcl
    Connected.
    SQL>
    SQL> show parameter keep
    NAME                                 TYPE        VALUE
    buffer_pool_keep                     string
    control_file_record_keep_time        integer     7
    db_keep_cache_size                   big integer 12M
    SQL>
    SQL> set autotrace on
    SQL>
    SQL> select count(*) from t1;
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1940  consistent gets
           1937  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL> /
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1940  consistent gets
           1937  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
          1  rows processed
DB_KEEP_CACHE_SIZE=20M
    SQL> connect / as sysdba
    Connected.
    SQL>
    SQL> alter system set db_keep_cache_size=20M scope=both;
    System altered.
    SQL>
    SQL> connect hr/hr@orcl
    Connected.
    SQL>
    SQL> show parameter keep
    NAME                                 TYPE        VALUE
    buffer_pool_keep                     string
    control_file_record_keep_time        integer     7
    db_keep_cache_size                   big integer 20M
    SQL> set autotrace on
    SQL> select count(*) from t1;
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1943  consistent gets
           1656  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL> /
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1943  consistent gets
              0  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
          1  rows processed
Only with a 20M db_keep_cache_size do I see no physical reads.
    Does it mean that if the db_keep_cache_size < table size, there is no caching for that table ?
    Or am I missing something ?
    Rgds,
    Gokul

    Hello Jonathan,
    Many thanks for your response.
    Here is the test I ran;
    SQL> select buffer_pool,blocks from dba_tables where owner = 'HR' and table_name = 'T1';
    BUFFER_     BLOCKS
    KEEP          1977
    SQL> select count(*) from v$bh where objd = (select data_object_id from dba_objects where owner = 'HR' and object_name = 'T1');
      COUNT(*)
          1939
    SQL> show parameter db_keep_cache_size
    NAME                                 TYPE        VALUE
    db_keep_cache_size                   big integer 20M
    SQL>
    SQL> alter system set db_keep_cache_size = 5M scope=both;
    System altered.
    SQL> select count(*) from hr.t1;
      COUNT(*)
        135496
    SQL> select count(*) from v$bh where objd = (select data_object_id from dba_objects where owner = 'HR' and object_name = 'T1');
      COUNT(*)
       992
I think my inference is wrong and, as you said, I am indeed seeing the effect of the tail end flushing the start of the table.
    Rgds,
    Gokul
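The "table bigger than pool" situation in this test can be made visible directly by comparing the total size of KEEP-assigned segments against the configured pool (a sketch using standard dictionary views):

```sql
-- Total size of segments assigned to the KEEP pool versus
-- the configured db_keep_cache_size, both in MB.
SELECT (SELECT NVL(SUM(bytes), 0) / 1024 / 1024
        FROM   dba_segments
        WHERE  buffer_pool = 'KEEP')        AS keep_segments_mb,
       (SELECT value / 1024 / 1024
        FROM   v$parameter
        WHERE  name = 'db_keep_cache_size') AS keep_pool_mb
FROM   dual;
```

If keep_segments_mb exceeds keep_pool_mb, the tail of a full scan will flush the start of the table, which matches the behaviour observed above.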

  • Full Table Scans for small tables... in Oracle10g v.2

    Hi,
The query optimizer may choose to do a full table scan instead of an index scan when the table is small (it consists of fewer blocks than the value of db_file_multiblock_read_count).
So, I tried to see it using the dept table...
    SQL> select blocks , extents , bytes from USER_segments where segment_name='DEPT';
        BLOCKS    EXTENTS      BYTES
             8          1      65536
    SQL> SHOW PARAMETER DB_FILE_MULTIBLOCK_READ_COUNT;
    NAME                                 TYPE        VALUE
    db_file_multiblock_read_count        integer     16
    SQL> explain plan for SELECT * FROM DEPT
      2    WHERE DEPTNO=10
      3  /
    Explained
    SQL>
    SQL>  select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 2852011669
    | Id  | Operation                   | Name    | Rows  | Bytes | Cost (%CPU)| Tim
    |   0 | SELECT STATEMENT            |         |     1 |    23 |     1   (0)| 00:
    |   1 |  TABLE ACCESS BY INDEX ROWID| DEPT    |     1 |    23 |     1   (0)| 00:
    |*  2 |   INDEX UNIQUE SCAN         | PK_DEPT |     1 |       |     0   (0)| 00:
    Predicate Information (identified by operation id):
       2 - access("DEPTNO"=10)
14 rows selected
So, according to the above remarks,
    What may be the reason of not performing full table scan...????
    Thanks...
    Sim

No, I have not generalized... In the Oracle extract I have posted there is the word "might"... I just want to find an example in which, selecting from a small table, the query optimizer does perform a full scan instead of a type of index scan.
Sorry for that... I don't mean to be rude :)
See the following...
    create table index_test(id int, name varchar2(10));
    create index index_test_idx on index_test(id);
    insert into index_test values(1, 'name');
    commit;
    -- No statistics
    select * from index_test where id = 1;
    SELECT STATEMENT ALL_ROWS-Cost : 3
      TABLE ACCESS FULL MAXGAUGE.INDEX_TEST(1) ("ID"=1)
    -- If first rows mode?
    alter session set optimizer_mode = first_rows;
    select * from index_test where id = 1;
    SELECT STATEMENT FIRST_ROWS-Cost : 802
      TABLE ACCESS BY INDEX ROWID MAXGAUGE.INDEX_TEST(1)
       INDEX RANGE SCAN MAXGAUGE.INDEX_TEST_IDX (ID) ("ID"=1)
    -- If statistics is gathered
    exec dbms_stats.gather_table_stats(user, 'INDEX_TEST', cascade=>true);
    alter session set optimizer_mode = first_rows;
    select * from index_test where id = 1;
    SELECT STATEMENT ALL_ROWS-Cost : 2
      TABLE ACCESS BY INDEX ROWID MAXGAUGE.INDEX_TEST(1)
       INDEX RANGE SCAN MAXGAUGE.INDEX_TEST_IDX (ID) ("ID"=1)
    alter session set optimizer_mode = all_rows;
    select * from index_test where id = 1;
    SELECT STATEMENT ALL_ROWS-Cost : 2
      TABLE ACCESS BY INDEX ROWID MAXGAUGE.INDEX_TEST(1)
   INDEX RANGE SCAN MAXGAUGE.INDEX_TEST_IDX (ID) ("ID"=1)
See some dramatic changes from the difference in parameters and statistics?
    Jonathan Lewis has written a great book on cost mechanism of Oracle optimizer.
    It will tell almost everything about your questions...
    http://www.amazon.com/Cost-Based-Oracle-Fundamentals-Jonathan-Lewis/dp/1590596366/ref=sr_1_1?ie=UTF8&s=books&qid=1195209336&sr=1-1

  • PUT large table in the keep pool?

    I would like to buffer a table in the keep pool:
    The table I would like to put in the keep pool is 200Mb (which is really large). The corresponding indices have a size of 70MB.
    Actually, many blocks of table are stored in the default pool. But some blocks of that huge table are not accessed for a long time. Thus they are purged from the default pool due to the LRU strategy (when blocks of other tables are queried and buffered)
    If the table (which is supposed to be in the keep pool) is queried again, the queried blocks have to be reloaded plus the corresponding index.
    All these reloads make up 600Mb in three hours, which could be saved, if the table and its indices are stored in the keep pool.
    The table itself has only one key field :-(
    The table has many many indices, because the table is accessed by many different fields :-(
    The table is changed by updates and insert 400 times in 3 hours :-( ---->indices are updated :-(
    I can not split that table :-(
    I can not increase the size of the cache :-(
I don't know if response times (of queries targeting that table) would decrease significantly if the table is stored in the keep pool.
What do you think? Would there be a significant change in terms of the response time?
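Before deciding, it may help to measure how much physical I/O the table and its indexes actually cause (a sketch; V$SEGMENT_STATISTICS is a standard view, and 'YOUR_SCHEMA' is a placeholder):

```sql
-- Physical reads accumulated per segment since instance startup.
-- If this table's reloads dominate the schema, a KEEP pool may help;
-- if not, carving memory off the default cache will probably hurt.
SELECT owner, object_name, object_type, value AS physical_reads
FROM   v$segment_statistics
WHERE  statistic_name = 'physical reads'
AND    owner = 'YOUR_SCHEMA'          -- placeholder
ORDER  BY value DESC;
```

Comparing this figure before and after the change (over a comparable workload period) is the only reliable way to tell whether the 600 MB of reloads actually went away.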


  • Is it really another error about full table scans for small tables....?????

    Hi ,
    I have posted the following :
    Full Table Scans for small tables... in Oracle10g v.2
    and the first post of Mr. Chris Antognini was that :
    "I'm sorry to say that the documentation is wrong! In fact when a full table scan is executed, and the blocks are not cached, at least 2 I/O are performed. The first one to get the header block (where the extent map is stored) and the second to access the first and, for a very small table, only extent."
    Is it really wrong....????
    Thanks...
    Sim

    Fredrik,
I did not say in any way that the documentation on this point is wrong...
    In my first post , i have inserted a link to a thread made in another forum:
    Full Table Scans for small tables... in Oracle10g v.2
    Christian Antognini has written that the documentation is wrong....
    I'm sorry to say that the documentation is wrong!
    In fact when a full table scan is executed, and the
    blocks are not cached, at least 2 I/O are performed. The
    first one to get the header block (where the extent map
    is stored) and the second to access the first and, for a
very small table, only extent.
I'm just wondering if he is right!
    Thanks..
    Sim

  • SEQUENCE Object for Small Tables Only?

    QUOTE from a recent thread: "Long term solution should be to use SEQUENCE with NO CACHE for small tables instead of identity and benefit from performance improvement for large tables."
    Thread:
    http://social.msdn.microsoft.com/Forums/sqlserver/en-US/cf63d145-7084-4371-bde0-eb3b917c7163/identity-big-jump-100010000-a-feature?forum=transactsql
    How about using SEQUENCE objects for large tables? Thanks.
    Kalman Toth Database & OLAP Architect
    SQL Server 2014 Design & Programming
    New Book / Kindle: Exam 70-461 Bootcamp: Querying Microsoft SQL Server 2012

    Well Erland, either you calm down your manager (with a martini?) or use NO CACHE.
    QUOTE: "This could cause a sequence to run out of numbers much more quickly than an IDENTITY value. It could also cause managers to become upset that values are missing, in which case they’ll need to simply get over it and accept that
    there will be numbers missing.
    If you need SQL Server to use every possible value, configure a cache setting of NO CACHE. This will cause the sequence to work much like the IDENTITY property. However, it will impact the sequence performance due to the additional metadata writes."
    LINK: Microsoft SQL Server: The Sequencing Solution
    Kalman Toth Database & OLAP Architect
    SQL Server 2014 Design & Programming
    New Book / Kindle: Exam 70-461 Bootcamp: Querying Microsoft SQL Server 2012
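The NO CACHE option quoted above looks like this in practice (a minimal T-SQL sketch; the schema and sequence names are illustrative):

```sql
-- A sequence with NO CACHE: every generated value is persisted to
-- metadata immediately, so no numbers are lost on an unexpected
-- shutdown, at the cost of an extra write per NEXT VALUE FOR call.
CREATE SEQUENCE dbo.OrderNumbers AS int
    START WITH 1
    INCREMENT BY 1
    NO CACHE;

SELECT NEXT VALUE FOR dbo.OrderNumbers;  -- 1 on the first call
```

With the default cached behaviour, a range of values is reserved in memory, which is faster but can leave gaps after a crash or restart.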

  • Needing to add keep pool to SGA, sizing and checking for room?

    Hi all,
    I'm needing to experiment with pinning a table and index (recommended by COTS product vendor) to see if it helps performance.
    I'm trying to set up a keep pool...and put the objects in it
    I've gone into the database, and found that I will need to set up a keep pool:
    SQL> show parameter keep
    NAME TYPE VALUE
    buffer_pool_keep string
    control_file_record_keep_time integer 7
    db_keep_cache_size big integer 0
    That being said, and I'm having a HUGE senior moment right now...how
    do I go about making sure I have enough room to make a little keep
    pool?
    I've looked at my objects I want to put in there, and one is about
    .675 MB, and the other is about .370 MB. So, roughly a little more
    than 1MB
    Looking at my SGA parameters:
    SQL> show parameter sga
    NAME TYPE VALUE
    lock_sga boolean FALSE
    pre_page_sga boolean FALSE
    sga_max_size big integer 572M
    sga_target big integer 572M
    Now...how do I find out what is being used in SGA, to make sure I have room?
    I've been searching around, and trying to come up with some queries. I
    came up with this one:
    SQL> select name, value / (1024*1024) size_mb from v$sga;
    NAME SIZE_MB
    Fixed Size 1.97846222
    Variable Size 232.002007
    Database Buffers 332
    Redo Buffers 6.01953125
    From this, it appears everything is being used....so, not sure what to
    do from here.
    Suggestions and links greatly appreciated!
    cayenne

    SELECT SIZE_FOR_ESTIMATE, BUFFERS_FOR_ESTIMATE, ESTD_PHYSICAL_READ_FACTOR, ESTD_PHYSICAL_READS
      FROM V$DB_CACHE_ADVICE
        WHERE NAME          = 'KEEP'
         AND BLOCK_SIZE    = (SELECT VALUE FROM V$PARAMETER WHERE NAME = 'db_block_size')
         AND ADVICE_STATUS = 'ON';
    SELECT   ds.BUFFER_POOL,
             Substr(do.object_name,1,9) object_name,
             ds.blocks                  object_blocks,
             Count(* )                  cached_blocks
    FROM     dba_objects do,
             dba_segments ds,
             v$bh v
    WHERE    do.data_object_id = v.objd
             AND do.owner = ds.owner (+)
             AND do.object_name = ds.segment_name (+)
             AND do.object_type = ds.segment_type (+)
             AND ds.BUFFER_POOL IN ('KEEP','RECYCLE')
    GROUP BY ds.BUFFER_POOL,
             do.object_name,
             ds.blocks
    ORDER BY do.object_name,
         ds.BUFFER_POOL;
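To see how the SGA is currently carved up (and whether the auto-tuned buffer cache could give up a granule or two for a small keep pool), V$SGA_DYNAMIC_COMPONENTS shows the live component sizes (a sketch):

```sql
-- Current size of each dynamically resizable SGA component.
-- With sga_target set, a slightly smaller DEFAULT buffer cache
-- frees room for a small db_keep_cache_size.
SELECT component,
       ROUND(current_size / 1024 / 1024) AS current_mb
FROM   v$sga_dynamic_components
WHERE  current_size > 0
ORDER  BY current_size DESC;
```

For a ~1 MB pair of objects, a keep pool of one granule (typically 4 MB or 16 MB depending on SGA size) would be more than enough.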

  • How to identify the source column and source table for a measure

    Does anyone have a query that I can use to positively identify the source column and source table for a cube measure in an SSAS cube?  Visual Studio shows ID, Name, and Source, but it is nearly worthless in a large cube and database.
    Also - the same for a dimension would be great.
    If no query exists for this, can someone please explain how to find the source column/table for a measure and for a dimension?
    Thanks.

    DMVs don’t expose the DataSourceView content. AMO is much better suited for object model operations like
    this than the DMVs. PowerShell is also sometimes an option, but in this case C# code would be much easier because analyzing the contents of the DataSourceView is much easier using the .Net DataSet class.
    Hope this helps.
    Reeves
    Denver, CO

  • Any solution for keeping table length fixed?

Is there a solution for keeping the table length fixed irrespective of the number of lines? We want tables that extend to the end of the page irrespective of the number of rows, for our purchase orders, invoices, etc.
    We are using RTF templates.

Is that the same requirement you had?
How can blank rows be printed for an XML output report?
