Caching of tables

Hi All
I want some info on caching tables in the shared pool.
1) How many tables can or should I cache in the shared pool? You would say it depends on the shared pool size, but can you let me know what percentage of the shared pool?
2) Also, which tables should I cache in the shared pool?
--- Tables accessed frequently
--- Small tables (what size is suggested?)
What other considerations do I need to make while caching?
I have around 20 small tables (not more than 500 rows each) and around 5 PL/SQL programs, all of which will run at different times.
But they will not all access all 20 tables; only 5-7 tables at a time.
Please help
Thanks
Ashwin N.

Oracle uses an LRU algorithm to determine what data/SQL/PL/SQL should remain in memory for later use. This algorithm works very well in 99% of cases. Only when it is disturbed, e.g. by big loads, is it sometimes necessary to tell Oracle what should be 'kept' and what not. Usually this is determined as part of a performance tuning phase.
Therefore, my suggestion would be that in general you don't explicitly keep or recycle objects. Only when your requirements are such that performance becomes an issue if you do nothing should you take a look at which objects should be kept and which ones not. Typically, load/stage tables should not.
Oh, and before this confuses you (like many before you): ALTER TABLE ... CACHE; does not KEEP the blocks in memory; rather, it puts the blocks of the table at the beginning of the LRU list after a full table scan instead of at the end. ALTER TABLE ... STORAGE (BUFFER_POOL KEEP); does keep the db blocks in memory, and ALTER TABLE ... STORAGE (BUFFER_POOL RECYCLE); makes sure blocks are aged out immediately.
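For reference, the three variants look like this in SQL (the table name is just a placeholder):

```sql
-- puts blocks at the MRU end of the LRU list after a full table scan;
-- it does NOT pin them in memory
ALTER TABLE some_table CACHE;

-- caches blocks in the separately sized KEEP buffer pool
ALTER TABLE some_table STORAGE (BUFFER_POOL KEEP);

-- blocks in the RECYCLE pool are aged out quickly
ALTER TABLE some_table STORAGE (BUFFER_POOL RECYCLE);
```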
Hope this helps,
L.

Similar Messages

  • Is it possible to cache an index and not cache the table?

    Can someone point me to the syntax? I have been messing with it and can't get the CACHE command on the indexes to work. I don't want to cache the table, just the index blocks.

    I have two joins between tables with denormalized data that join on non-unique columns. The indexes I am using have high clustering factors. I have no way of solving this right now.
    In performance tests the queries use a lot of physical I/O and take a long time to return. If I run them a second time, they still use a lot of logical I/O, but return quickly. I have enough CPU to handle the logical I/O and I need to speed up the queries.
    I don't have enough memory to cache the tables' data, but I do have enough to cache the indexes. When I run a 10046 trace, virtually all of the work is done in the index searches, so I was hoping to cache the indexes in order to speed up the queries.
    Again, I can't solve the data issues and I am not concerned about the high logical I/O since there is limited concurrency and I have plenty of CPU.
    I guess my only other option is to find out which table in the join would benefit most from caching and cache that table, since these are big tables and I can really only cache one of them.
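    One way to cache only the index blocks is to assign just the indexes to a KEEP pool; a sketch, with a hypothetical index name and pool size:

    ```sql
    -- a KEEP pool must be sized explicitly; it is not auto-managed
    ALTER SYSTEM SET db_keep_cache_size = 100M;

    -- assign only the index segment to the KEEP pool; the tables stay in DEFAULT
    ALTER INDEX my_join_idx STORAGE (BUFFER_POOL KEEP);
    ```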

  • Purge Cache by Table not behaving as expected

    Hello Gurus,
    We are using OBIEE 10.1.3.3.1.
    We have requirement where we need to Purge the cache by Table.
    So I am using ODBC function:
    Call SAPurgeCacheByTable ('DBName','Catalog','Schema','Physical Table');
    so when I run this from cmd prompt :
    nqcmd -d AnalyticsWeb -u Username -p password -s C:\OracleBI\Server\Bin\clear_cache.txt
    I got the result message as succeeded like below:
    [59118] Operation SAPurgeCacheByTable succeeded!
    row count 1
    Processed Queries 1
    But the thing is, the local cache and web cache are not getting purged (the .TBL files).
    Please help to resolve this issue.
    Thanks
    Kanna

    Hi,
    the problem is that OBIEE 10.1.3.4.0 has a bug; see the support document SAPurgeCacheByTable and SAPurgeCacheByDatabase not working [ID 787797.1].
    The bug is logged as Bug 6906535 and it is fixed in the next release which is 11.1.
    There is currently no fix for 10.1.3.4.
    Support proposes using SAPurgeAllCache() as a workaround.
    I hope it helps.
    Regards,
    Gianluca Ancarani
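    For reference, the workaround can be scripted the same way as the per-table purge; a sketch of the script file contents (the call itself is taken from the support note, the file name from the original post):

    ```sql
    -- clear_cache.txt: purge the entire BI Server cache instead of one table
    Call SAPurgeAllCache();
    ```

    It would then be run with the same nqcmd command line shown in the original post.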

  • Database cache remote table

    Does anyone know if database caching can work with a remote database via a link?
    My app server is going to work with an 8i database. Our DBA is going to create a dblink to an 8.0.5 database. I want to cache data from the 8.0.5 database on the app server.
    So can I cache a table such as scott.emp@remotedb?
    P.S. I can't upgrade the 8.0.5 database quite yet because the application does not allow me to go to 8.1.7.


  • Aggregate query on global cache group table

    Hi,
    I set up two global cache grid nodes. As we know, global cache groups are dynamic.
    A cache group can be dynamically loaded by primary key or foreign key, as I understand it.
    There are three records in the Oracle base table; one record is loaded on node A and the other two records on node B.
    Oracle:
    1 Java
    2 C
    3 Python
    Node A:
    1 Java
    Node B:
    2 C
    3 Python
    If I run select count(*) on node A or node B, the result is 1 and 2 respectively.
    The questions are:
    How can I get the real count, 3?
    Is it reasonable to do this query on a global cache group table?
    One idea I have is to create another read-only node for aggregation queries, but it seems weird.
    Thanks very much.
    Regards,
    Nesta

    Do you mean something like
    UPDATE sometable SET somecol = somevalue;
    where you are updating all rows (or where you may use a WHERE clause that matches many rows and is not an equality)?
    This is not something you can do in one step with a GLOBAL DYNAMIC cache group. If the number of rows that would be affected is small and you know the keys of every row that must be updated, then you could simply execute multiple individual updates. If the number of rows is large or you do not know all the keys in advance, then maybe you could adopt the approach of ensuring that all relevant rows are already in the local cache grid node via LOAD CACHE GROUP ... WHERE ... Alternatively, if you do not need Grid functionality, you could consider using a single cache with a non-dynamic (explicitly loaded) cache group and just pre-load all the data.
    I would not try and use JTA to update rows in multiple grid nodes in one transaction; it will be slow and you would have to know which rows are located in which nodes...
    Chris
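    A sketch of the pre-loading approach mentioned above (cache group name and predicate are hypothetical):

    ```sql
    -- bring all rows matching the predicate into the local grid node first,
    -- committing in batches to bound transaction log usage
    LOAD CACHE GROUP cg_sometable
    WHERE (sometable.somecol = 'somevalue')
    COMMIT EVERY 256 ROWS;
    ```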

  • Cache 1000 tables at a time?

    hi,
    I want to cache 1000 tables at a time.
    How can I cache them? It could be either AWT or SWT.
    Thanks in advance
    :)

    TimesTen is a database. It supports the creation of 1000s of tables. For caching, tables are encapsulated within cache groups. You can create thousands of cache groups. You are not limited to one cache group and one table; that would not be a very useful product :-)
    I would recommend that you read the very good Introduction and Cache User's Guides to be found here: http://docs.oracle.com/cd/E21901_01/welcome.html
    They explain the basic concepts related to using TimesTen as a cache as well as a lot of other more in-depth information. Once you have done that you may then have other questions that the forum can help you with.
    Regards,
    Chris
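    For reference, a minimal cache group definition looks roughly like this (table and columns are hypothetical; the Cache User's Guide has the full syntax):

    ```sql
    CREATE ASYNCHRONOUS WRITETHROUGH CACHE GROUP cg_emp
    FROM hr.employees
    ( emp_id NUMBER NOT NULL,
      name   VARCHAR2(50),
      PRIMARY KEY (emp_id)
    );
    ```

    One such statement per root table; thousands of cache groups can coexist in one datastore.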

  • CACHE Oracle Tables

    Hello Gurus,
    We are building a new application and have identified that a few tables will be accessed very frequently. To decrease I/O we are planning to CACHE these tables. I am not sure if we made the right decision. My question is: what are the things you need to consider before caching Oracle tables?
    Any help greatly appreciated. Thanks.
    select * from V$VERSION;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    CORE 11.2.0.3.0 Production
    TNS for IBM/AIX RISC System/6000: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production

    OK, so you want to use multiple buffer pools and to put these tables into the keep pool?
    Why do you believe that this will improve performance? Oracle's default algorithm for aging out blocks that are seldom used is pretty good for the vast majority of applications. Why do you believe that you can identify what blocks will most benefit from caching better than Oracle? Why do you believe that you wouldn't be better off giving whatever KEEP pool cache size you would allocate to the DEFAULT pool and letting Oracle's cache algorithm cache whatever it determines is appropriate? It is possible that there is something that you know about your application that allows you to make this sort of determination. But in the vast majority of cases I've seen, people who have tried to do so end up hurting performance at least a little, because they force Oracle at the margin to age out blocks that it would benefit from caching and to cache blocks that it would benefit from aging out.
    Do you understand the maintenance impact of using multiple buffer caches? If you are using a vaguely recent version of Oracle and using any of the automatic memory management features, Oracle does not automatically manage the non-default buffer caches. That increases the probability that using non-default buffer caches is going to create performance problems since humans are much less efficient at recognizing and reacting to changing memory utilization and substantially increases the amount of monitoring and work that the DBAs need to do on the system (which, in turn, increases the risk that they make a mistake).
    Justin
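    If, after weighing the above, you still decide to use a KEEP pool, remember that it must be sized and monitored by hand; a sketch (size and table name are placeholders):

    ```sql
    -- the KEEP pool is not resized by automatic memory management
    ALTER SYSTEM SET db_keep_cache_size = 256M;

    ALTER TABLE app_owner.hot_table STORAGE (BUFFER_POOL KEEP);

    -- verify per-pool activity afterwards to see whether it actually helps
    SELECT name, physical_reads, db_block_gets, consistent_gets
    FROM   v$buffer_pool_statistics;
    ```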

  • Cached temp tables

    Hi
    in Oracle 10g you can specify a CACHE option for temporary tables, and it does show up when you look at DBA_TABLES. I wonder, does it actually make any difference in the way the table functions? In fact I would like the table to be cached (maybe not in the SGA, but in some other area, the PGA perhaps).
    Any input is welcome
    Regards
    Andrew

    I just discovered that there was an SP1 release of WebLogic 6.1. This release includes a configuration to specify the number of cached statements per connection, which can be set to zero.
    Thanks for your time.
    -Larry
    "Larry" <[email protected]> wrote:
    >
    I think I am having a problem with the Prepared Statement Cache.
    My setup: WL6.1, Informix DBMS, Informix JDBC driver, WL Connection Pool.
    I am trying to do a series of SQL statements including creating, populating, querying, and finally dropping a temp table. This works fine the first time. The second time, I can create the temp table, but I get a SQLException when I try to populate it, with the message: "Table XXXX has been dropped, altered or renamed".
    I'm guessing that the second SQL statement uses a cached statement attached to a different connection. Since temp tables are scoped to the connection, this doesn't work.
    Is there a way to disable statement caching, or better, just turn it off for specific queries?
    (PS - I found something that looked promising on this list, but the patch did not work on WL6.1. It would have allowed me to set a STATEMENT_CACHE_SIZE property on my pool to zero.)
    Thanks for any help
    -Larry

  • Caching/Pinning tables and indexes - Howto?

    Hi all,
    I've hit upon a request by a COTS vendor to do something I'm not terribly familiar with: they want to 'cache' or 'pin' a table and an index in memory.
    I was researching, and saw something to the effect of doing, for Table1:
    ALTER TABLE Table1 CACHE;
    However, the vendor mentioned some examples that seemed to indicate creating a KEEP pool (a separate buffer cache pool?) and then doing something like:
    ALTER TABLE Table1 STORAGE (BUFFER_POOL KEEP);
    Can someone give me some insight into the difference between these two concepts, links on how to do it, etc.?
    Thanks in advance!
    cayenne

    burleson wrote:
    Here is the script that I use to automate the assignment of tables into the KEEP pool.
    BEWARE: This script is not for beginners:
    http://www.rampant-books.com/t_oracle_keep_pool_assignment.htm
    Hope this helps . . .
    Dear Mr. Burleson,
    I note that the article referenced makes the following comment about Oracle's suggestion for good candidates:
    "It is easy to locate segments that are less than 10% of the size of their data buffer, but Oracle does not have a mechanism to track I/O at the segment level. To get around this issue, some DBAs place each segment into an isolated tablespace, so that the AWR can show the total I/O."
    Oracle 9i introduced v$segstat - which tracks several different statistics at segment level. Statspack (when taking snapshots at level 7) and the AWR both capture and report segment level statistics. These statistics include the physical reads in 9i, and the number of segment scans in 10g.
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
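    The segment-level statistics Jonathan mentions can be queried directly; for example:

    ```sql
    -- per-segment physical reads, available since 9i; no need for
    -- isolated tablespaces to see per-segment I/O
    SELECT owner, object_name, value
    FROM   v$segment_statistics
    WHERE  statistic_name = 'physical reads'
    ORDER  BY value DESC;
    ```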

  • Livecache data cache usage - table monitor_caches

    Hi Team,
    We have a requirement to capture the data cache usage of liveCache on an hourly basis.
    Instead of doing this manually by going into LC10 and copying the data into an Excel sheet, is there a table which captures this data periodically that we can use to get the report in a single shot?
    "monitor_caches" is one table which holds this data, but we are not sure how to get the data from it, or how to view its contents.
    As "monitor_caches" is a MaxDB table, I am not sure how I can get the data from it. I have never worked with MaxDB before.
    Has anyone had this requirement?
    Warm Regards,
    Venu

    Hi,
    For cache usage, the tables below can be referred to:
    Data Cache Usage - total (table MONITOR_CACHES)
    Data Cache Usage - OMS Data (table MONITOR_CACHES)
    Data Cache Usage - SQL Data (table MONITOR_CACHES)
    Data Cache Usage - History/Undo (table MONITOR_CACHES)
    Data Cache Usage - OMS History (table MONITOR_CACHES)
    Data Cache Usage - OMS Rollback (table MONITOR_CACHES)
    Out Of Memory Exceptions (table SYSDBA.MONITOR_OMS)
    OMS Terminations (table SYSDBA.MONITOR_OMS)
    Heap Usage (table OMS_HEAP_STATISTICS)
    Heap Usage in KB (table OMS_HEAP_STATISTICS)
    Maximum Heap Usage in KB (table ALLOCATORSTATISTICS)
    System Heap in KB (table ALLOCATORSTATISTICS)
    Parameter OMS_HEAP_LIMIT (KB) (dbmrfc command param_getvalue OMS_HEAP_LIMIT)
    For reporting purpose , look into the following BW extractors and develop BW report.
    /SAPAPO/BWEXDSRC APO -> BW: Data Source - Extractor
    /SAPAPO/BWEXTRAC APO -> BW: Extractors for Transactional Data
    /SAPAPO/BWEXTRFM APO -> BW: Formula to Calculate a Key Figure
    /SAPAPO/BWEXTRIN APO -> BW: Dependent Extractors
    /SAPAPO/BWEXTRMP APO -> BW: Mapping Extractor Structure Field
    Hope this helps.
    Regards,
    Deepak Kori

  • Cache agent table update transaction size

    Is there a way to impose a transaction size limit, a "commit every n rows", on read-only cache group updates?
    Specifically for single-table cache groups.

    An unexpectedly large number of updates (> 1,000,000 rows) was made to an Oracle table with 89 columns referenced by a read-only cache group. The Cache Agent started an incremental update for this cache group and, during the update, the datastore ran out of space, so the update was rolled back. All of the update and rollback records went into the lognnn files, using up most of the disk bandwidth. After the rollback completed, the Cache Agent started a refresh for that interval and the same failure/rollback sequence started again. This update failure/rollback cycle continued until the "datastore full" message was noticed in the log and I was able to pause the automatic refresh of this one table. Then I manually refreshed the cache group with "commit every n rows".
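    The manual recovery step described above can be sketched as follows (cache group name is hypothetical):

    ```sql
    -- pause the automatic refresh that keeps failing
    ALTER CACHE GROUP cg_big_table SET AUTOREFRESH STATE PAUSED;

    -- refresh manually, committing in batches so one huge transaction
    -- cannot exhaust the datastore log space
    REFRESH CACHE GROUP cg_big_table COMMIT EVERY 1000 ROWS;
    ```

    Note that COMMIT EVERY applies only to this manual REFRESH; as far as I know there is no equivalent knob for the automatic incremental refresh itself.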

  • Cache database tables at startup

    Is there a way to cache certain database tables when starting up the WebLogic 6.1
    Server? We are researching if there is a way to do this for our lookup tables
    (ex: State, Country, Timezone). We know that an alternative would be to add
    these to our JSP pages, but we prefer it be managed in the database.
    Thanks in advance for any help provided.

    If it's read-only data, you can just cache it in a singleton pattern.
    Peace,
    Cameron Purdy
    Tangosol, Inc.
    "Adrienne" <[email protected]> wrote in message
    news:3c600d66$[email protected]..
    >
    Is there a way to cache certain database tables when starting up theWebLogic 6.1
    Server? We are researching if there is a way to do this for our lookuptables
    (ex: State, Country, Timezone). We know that an alternative would be toadd
    these to our JSP pages, but we prefer it be managed in the database.
    Thanks in advance for any help provided.

  • When cache log table modified "nologging" , Does any problem occur?

    test environment:
        *. readonly cache group:
    create readonly cache group cg_tb_test1
    autorefresh interval 1 seconds
    from
    TB_TEST1
    (       C1      tt_integer,
             C2      CHAR (10),
             C3      tt_integer,
             C4      CHAR (10),
             C5      CHAR (10),
             C6      CHAR (10),
             C7      CHAR (10),
             C8      CHAR (10),
             C9      tt_integer,
             C10     DATE,
      PRIMARY KEY (C1)
    );
        *. oracle's tables
        SQL> select * from tab;
             TB_TEST1                       TABLE
             TT_06_147954_L                 TABLE
             TT_06_AGENT_STATUS             TABLE
            TT_06_AR_PARAMS                TABLE
            TT_06_CACHE_STATS              TABLE
            TT_06_DATABASES                TABLE
            TT_06_DBSPECIFIC_PARAMS        TABLE
            TT_06_DB_PARAMS                TABLE
            TT_06_DDL_L                    TABLE
            TT_06_DDL_TRACKING             TABLE
            TT_06_LOG_SPACE_STATS          TABLE
            TT_06_SYNC_OBJS                TABLE
            TT_06_USER_COUNT               TABLE
    15 rows selected.
        SQL>
    After the cache group was generated, lots of archive logs were generated. So I modified the log table "TT_06_147954_L" to "nologging".
    Will any problems occur?
    Thank you.

    If you ever need to recover the Oracle database, or this table, in any way then you are hosed and things will break. Also, I'm pretty sure this is not supported. Why is the logging a problem?
    Chris

  • Unable to delete Order does not exist in live cache but in table POSMAPN

    Hi Experts,
    We are facing an issue where a purchase order is not available in liveCache (which means it has no GUID) but exists in the database table POSMAPN. We have tried to delete it using the standard SAP inconsistent order deletion program and also using the BAPI BAPI_POSRVAPS_DELMULTI, but we are not able to delete it.
    Can anybody suggest a method by which we can get rid of this order from the system?
    Thanks a lot.
    Best Regards,
    Chandan

    Hi Chandan,
    Apologies for taking your question from the wrong perspective. If you want to delete the order, you need to re-CIF it from ECC so that it comes and sits in liveCache. Once done, try using the BAPI.
    If you are not successful with the above approach, try running the consistency report /SAPAPO/SDRQCR21 in the APO system
    so that it first corrects the inconsistency between ECC and APO (liveCache + DB tables), and then use the BAPI to delete the PO.
    Not sure if you have tried this. If it does not solve your problem, you need to check SAP Notes.
    Thanks,
    Babu Kilari

  • More than one root table ,how to design cache group ?

    hi,
    each cache group can have only one root table and many child tables. If my relational model is:
    A (id number, name, ..., primary key id)
    B (id number, ..., primary key id)
    A_B_rel (aid number, bid number, foreign key aid references A(id), foreign key bid references B(id))
    and my select statement is "select ... from a, b, a_b_rel where ...",
    I want to cache these three tables; how should I create the cache groups?
    My design is three AWTs: cache group A for A, cache group B for B, and cache group AB for a_b_rel.
    Is there a better solution?

    As you have discovered, you cannot put all three of these tables into one cache group. For READONLY cache groups the solution is simple, put two of the tables (say A and A_B) in one cache group and the other table (B) in a different cache group and make sure that both use the same AUTOREFRESH interval.
    For your case, using AWT cache groups, the situation is a bit more complicated. You must cache the tables as two different cache groups as mentioned above, but you cannot define a foreign key relationship in TimesTen between tables in different cache groups. Hence you will need to add logic to your application to check and enforce the 'missing' foreign key relationship (B + A_B in this example) to ensure that you do not inadvertently insert data that would violate the FK relationship defined in Oracle. Otherwise you could insert invalid data in TimesTen and it would then fail to propagate to Oracle.
    Chris
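    A sketch of the two-cache-group layout described above, using the poster's model (column lists abbreviated; the exact types are assumptions):

    ```sql
    -- cache group 1: root A with child A_B_rel; this FK stays inside the group
    CREATE ASYNCHRONOUS WRITETHROUGH CACHE GROUP cg_a_ab
    FROM a
    ( id   NUMBER NOT NULL,
      name VARCHAR2(100),
      PRIMARY KEY (id)
    ),
    a_b_rel
    ( aid NUMBER NOT NULL,
      bid NUMBER NOT NULL,
      PRIMARY KEY (aid, bid),
      FOREIGN KEY (aid) REFERENCES a (id)
    );

    -- cache group 2: B on its own; the FK from a_b_rel.bid to b.id cannot be
    -- declared across cache groups, so the application must enforce it
    CREATE ASYNCHRONOUS WRITETHROUGH CACHE GROUP cg_b
    FROM b
    ( id NUMBER NOT NULL,
      PRIMARY KEY (id)
    );
    ```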
