Multiple keep caches

hi
db_cache_size
db_keep_cache_size
db_recycle_cache_size
The above parameters are used to set up multiple buffer caches for the default block size.
If we set db_8k_cache_size, how will we specify KEEP and RECYCLE caches for it? Because Oracle says that if we have multiple tablespaces with different block sizes, then we can have all three of the above caches for all the different block size tablespaces.
Regards

Multiple buffer pools are only available for the standard block size. Non-standard block size caches have a single DEFAULT pool.
See
http://download-west.oracle.com/docs/cd/B13789_01/server.101/b10743/memory.htm#i16408
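A minimal, illustrative JDBC sketch of the resulting parameter combinations (connection details, user, and sizes are hypothetical, and it assumes the standard block size is not 8K, otherwise DB_8K_CACHE_SIZE would be invalid):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class BufferCacheParameters {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection; the user needs the ALTER SYSTEM privilege.
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/orcl", "system", "oracle");
             Statement st = con.createStatement()) {
            // Standard block size: DEFAULT, KEEP and RECYCLE pools are all available.
            st.execute("ALTER SYSTEM SET db_cache_size = 2G SCOPE=BOTH");
            st.execute("ALTER SYSTEM SET db_keep_cache_size = 512M SCOPE=BOTH");
            st.execute("ALTER SYSTEM SET db_recycle_cache_size = 256M SCOPE=BOTH");
            // Non-standard 8K block size tablespaces: a single DEFAULT-only cache;
            // no KEEP or RECYCLE variant exists for non-standard block sizes.
            st.execute("ALTER SYSTEM SET db_8k_cache_size = 1G SCOPE=BOTH");
        }
    }
}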

Similar Messages

  • Set big table KEEP cache

    Hello,
    Version 10.2.0.4 on Linux Red Hat 4.
    In my data warehouse I have a table whose size is 5GB.
    This is the most important table, and most of the queries join it.
    Currently i have 32GB of memory in my machine.
    The SGA+PGA = 17GB
    Free memory in the machine is about 15GB:
    NAME                                 TYPE        VALUE
    sga_max_size                       big integer 12G
    sga_target                          big integer 12G
    pga_aggregate_target          big integer 5G
    Each night this table is being TRUNCATED and populated again with new data.
    I am thinking of setting this table in the KEEP cache.
    I would like to get your feedback on whether you think it's the right thing to do.
    Thanks

    Hi,
    There is a best practice for this configuration (from the Oracle documentation):
    "A good candidate for a segment to put into the KEEP pool is a segment that is smaller than 10% of the size of the DEFAULT buffer pool and has incurred at least 1% of the total I/Os in the system."
    Hope this helps,
    Cheers.
    Cuneyt
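    A hedged, illustrative JDBC sketch of checking that guideline before assigning the segment to the KEEP pool (owner, table name, and connection details are made up); with a 5GB table and a 12GB SGA the 10% test would most likely fail, which is the point of the quoted guideline:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class KeepPoolCheck {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dwhost:1521/dwh", "system", "oracle");
                 Statement st = con.createStatement()) {

                // Size of the candidate segment (hypothetical owner/table).
                long segmentBytes;
                try (ResultSet rs = st.executeQuery(
                        "SELECT bytes FROM dba_segments " +
                        "WHERE owner = 'DWH' AND segment_name = 'BIG_FACT_TABLE'")) {
                    rs.next();
                    segmentBytes = rs.getLong(1);
                }

                // Current size of the DEFAULT pool (assumes the standard block size pool).
                long defaultPoolBytes;
                try (ResultSet rs = st.executeQuery(
                        "SELECT current_size FROM v$buffer_pool WHERE name = 'DEFAULT'")) {
                    rs.next();
                    defaultPoolBytes = rs.getLong(1) * 1024L * 1024L;  // reported in MB
                }

                if (segmentBytes < defaultPoolBytes / 10) {
                    // Within the ~10% guideline: assign the table to the KEEP pool.
                    st.execute("ALTER TABLE dwh.big_fact_table STORAGE (BUFFER_POOL KEEP)");
                } else {
                    System.out.println("Segment exceeds 10% of the DEFAULT pool; leave it in DEFAULT.");
                }
            }
        }
    }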

  • Keep cache option

    Could anyone explain: if I use the KEEP cache option for a static table, will it have an impact on SGA memory? (I believe the table is stored in the buffer cache.)

    According to the Oracle documentation, "A good candidate for a segment to put into the KEEP pool is a segment that is smaller than 10% of the size of the DEFAULT buffer pool and has incurred at least 1% of the total I/Os in the system."
    So if you load a large table into the buffer cache, it will occupy a lot of memory that could otherwise be free for other processes. If the buffer cache then becomes insufficient, database performance will worsen.

  • How do I configure multiple JBoss caches for a standalone application

    How do I configure multiple JBoss caches for a standalone application running on a single JVM? Please advise and provide sample code if any.
    Thanks
    NAgs

    [http://www.jboss.org]
    Locking this thread.

  • Multiple key cache lookup cases for the same values

    Hi,
    Just curious whether someone else on this forum has dealt with this use case: we'd like to use the Coherence cache to store objects of say class Foo with fields a and b (Foo(a,b)) using a as the key. The named cache is backed by a database and puts will insert into the corresponding Foo table and gets will either result in a cache hit or read through using a CacheStore implementation we'd write.
    Now, for the 2nd requirement, we need to look up the same objects using field b (or a combination of different fields for that matter). Currently we are thinking of a 2nd named cache that maps b onto Foo(a,b) with a possible optimization that the 2nd cache will map b onto a so the client doing the get can turn around and query the first cache using a. Puts in the first cache will add entries to the second cache to keep the 2nd cache up to date with a -> b mappings. The optimization prevents Foo being stored in two caches.
    Note that we will not store all entries for Foo in the cache as the sheer number of expected entries makes this option not feasible hence we cannot rely on a cache query (using indexes) to look the object up.
    Any comments on this approach or ideas on how to implement this differently?
    Thanks!
    Marcel.

    Hi Marcel,
    That is correct, QueryMap only operates on entries that are in-memory; there is no way to "query-through" a cachestore for example.
    Given that, I think that your proposed approach (of maintaining a separate b->a mapping) makes sense.
    thanks,
    -Rob
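    For reference, a minimal sketch of that two-cache approach (the cache names and the Foo class below are illustrative, not from this thread):

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class FooRepository {
        // Primary cache keyed by a (may be backed by a CacheStore for read-through).
        private final NamedCache fooByA = CacheFactory.getCache("foo-by-a");
        // Small index cache mapping b -> a, kept in sync on every put.
        private final NamedCache aByB = CacheFactory.getCache("foo-a-by-b");

        public void put(Foo foo) {
            fooByA.put(foo.getA(), foo);
            aByB.put(foo.getB(), foo.getA());
        }

        public Foo getByA(Object a) {
            return (Foo) fooByA.get(a);          // cache hit or read-through
        }

        public Foo getByB(Object b) {
            Object a = aByB.get(b);              // index lookup; avoids storing Foo twice
            return a == null ? null : (Foo) fooByA.get(a);
        }

        // Illustrative value class; keys and fields are assumed serializable.
        public static class Foo implements java.io.Serializable {
            private final Object a, b;
            public Foo(Object a, Object b) { this.a = a; this.b = b; }
            public Object getA() { return a; }
            public Object getB() { return b; }
        }
    }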

  • IOS 8.1 Does Not Download All iCloud Mail/ Keeps Cache?

    Hello,
    I have set up and enabled an iCloud e-mail account on all of my devices and was successfully able to use the Mail app in Mac OS X Yosemite to copy mail from my Yahoo! inbox to my iCloud inbox. All e-mail copied from Yahoo! Mail appears in the OS X Yosemite Mail app as well as on the iCloud.com site; however, my iPad Air 2 and iPhone 5, both running iOS 8.1, do not download all of the mail from iCloud. I have tried the following on both iOS 8.1 devices with no success.
    - Disable iCloud mail and re-enable
    - Sign completely out of iCloud and sign in again, then reconfigure all iCloud services including mail
    The result of both steps is that the incomplete list of e-mail in the iCloud inbox just reappears in the Mail app instead of downloading from iCloud again.  It looks like the incomplete list of e-mail is stored on the iOS device and signing out of or disabling iCloud mail just hides it; re-enabling or signing in merely shows it again.  This appears to work differently than my Yahoo! Mail configuration where if I sign out of or delete the Yahoo! account and reconfigure, iOS downloads a fresh copy of all e-mail in the inbox and all other folders.  Here are my two questions:
    - How do I get my iPhone and iPad Air 2 to download ALL e-mail from my iCloud mail account?
    - Are my suspicions true? Does iOS actually store a cached copy of iCloud mail on the device when the service is disabled and just "redisplays" it when re-enabled?
    Thanks!

    I had the same problem. My iCloud has been a mess since the iOS 8.1 update.
    Probably your iCloud isn't working either.

  • Having 1 router and multiple WCCP cache devices: cisco and non-cisco

    I have a 6500 running WCCPv1. I have two devices single connected to the 6500: a CE565 and a non-cisco device that does WCCPv1. The 6500 is configured for WCCP redirection. What happens to the requests ? Are they serviced by both devices in parallel ? Is only one device servicing the request ? Load balanced ? I know a cluster won't be formed because the device is non-cisco. BTW, the non-cisco device only support WCCP v1.

    Will this detection between devices work if the non-cisco device is not really a cache engine, but a web filter that uses WCCP? In reality, my ideal goal would be that traffic is redirected to the web filter (non-Cisco), gets filtered, then goes back to the Catalyst, and is then redirected again to the cache engine to be cached. But I am not sure this will happen due to routing. So I guess it is either one or the other, correct? I don't have the option to connect the web filter to another box, nor the Cache Engine. I thought they would not detect each other at all and the router would be making a decision there. How do they detect each other? Via which protocol? WCCP?

  • FYI: Cause of multiple front caches in server environment

    Hi All.
    Long time lurker, first time poster here. With this post I would just like to contribute with some information about a problem we recently solved at my work place.
    We have Coherence deployed in an app server environment. We discovered that our distributed schemes ended up having two front caches by looking at JMX. We got several instances of the same type of CacheMBean with different 'loader' arguments, like this:
    Coherence:type=Cache,service=DistributedCache,name=SomeCache,nodeId=3,tier=front,loader=x
    Coherence:type=Cache,service=DistributedCache,name=SomeCache,nodeId=3,tier=front,loader=y
    By trial and error we finally concluded that the cause was that, unless told otherwise, Coherence uses the context class loader (Thread#getContextClassLoader()) to fetch the front cache. In our case the context class loader differed depending on the entry point of our application: web calls had one context class loader and EJB calls another. The undesired side effect was that we ended up with separate front caches for web and EJB calls, which is of course not ideal.
    In order to fix this, one can specify which class loader to use when fetching the cache from CacheFactory: http://download.oracle.com/otn_hosted_doc/coherence/330/com/tangosol/net/CacheFactory.html#getCache(java.lang.String, java.lang.ClassLoader)
    The documentation already makes clear that the class loader is used for serialization; it is apparently also used for more than that.
    For us the fix was to specify getClass().getClassLoader(), which varies only with the type of the calling instance, not with the entry point. As a side note, this also worked well for us when mixing transactional and non-transactional interactions with the caches.
    Best regards,
    Alexander

    Hi Alexander,
    depending on which appserver you use, you would probably want to go with a classloader which is a common root of the EJBs and web-apps. Depending on where your data classes reside, and what appserver you use, this may be an ejb-tier-wide classloader, or the ear classloader.
    Best regards,
    Robert
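    For reference, a minimal sketch of the fix described above (cache name illustrative): fetch the NamedCache with an explicit class loader instead of relying on the context class loader.

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class CacheHolder {
        public NamedCache getSomeCache() {
            // getClass().getClassLoader() varies only with the declaring class,
            // not with the entry point (web vs. EJB), so both paths share one front cache.
            return CacheFactory.getCache("SomeCache", getClass().getClassLoader());
        }
    }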

  • Cannot connect to multiple different cache clusters via ExtendTCP

    Hi,
    I'm trying to have two different ExtendTCP configurations for accessing different cache clusters, but I cannot get it to work. Essentially, the server configurations look like this:
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
        <caching-scheme-mapping>
            <cache-mapping>
                <cache-name>testcache1</cache-name>
                <scheme-name>testcache1-distributed</scheme-name>
            </cache-mapping>
        </caching-scheme-mapping>
        <caching-schemes>
            <distributed-scheme>
                <scheme-name>testcache1-distributed</scheme-name>
                <lease-granularity>member</lease-granularity>
                <backing-map-scheme>
                    <local-scheme/>
                </backing-map-scheme>
                <autostart>true</autostart>
            </distributed-scheme>
            <proxy-scheme>
                <scheme-name>testcache1-proxy</scheme-name>
                <service-name>ExtendTcpProxyService</service-name>
                <thread-count>5</thread-count>
                <acceptor-config>
                    <tcp-acceptor>
                        <local-address>
                            <address>localhost</address>
                            <port>9098</port>
                        </local-address>
                    </tcp-acceptor>
                </acceptor-config>
                <autostart>true</autostart>
            </proxy-scheme>
        </caching-schemes>
    </cache-config>
    and
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
        <caching-scheme-mapping>
            <cache-mapping>
                <cache-name>testcache2</cache-name>
                <scheme-name>testcache2-distributed</scheme-name>
            </cache-mapping>
        </caching-scheme-mapping>
        <caching-schemes>
            <distributed-scheme>
                <scheme-name>testcache2-distributed</scheme-name>
                <lease-granularity>member</lease-granularity>
                <backing-map-scheme>
                    <local-scheme/>
                </backing-map-scheme>
                <autostart>true</autostart>
            </distributed-scheme>
            <proxy-scheme>
                <scheme-name>testcache2-proxy</scheme-name>
                <service-name>ExtendTcpProxyService</service-name>
                <thread-count>5</thread-count>
                <acceptor-config>
                    <tcp-acceptor>
                        <local-address>
                            <address>localhost</address>
                            <port>9099</port>
                        </local-address>
                    </tcp-acceptor>
                </acceptor-config>
                <autostart>true</autostart>
            </proxy-scheme>
        </caching-schemes>
    </cache-config>
    And the client configuration looks like this:
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
        <caching-scheme-mapping>
            <cache-mapping>
                <cache-name>testcache1</cache-name>
                <scheme-name>testcache1-remote</scheme-name>
            </cache-mapping>
            <cache-mapping>
                <cache-name>testcache2</cache-name>
                <scheme-name>testcache2-remote</scheme-name>
            </cache-mapping>
        </caching-scheme-mapping>
        <caching-schemes>
            <remote-cache-scheme>
                <scheme-name>testcache1-remote</scheme-name>
                <service-name>ExtendTcpCacheService</service-name>
                <initiator-config>
                    <tcp-initiator>
                        <remote-addresses>
                            <socket-address>
                                <address>localhost</address>
                                <port>9098</port>
                            </socket-address>
                        </remote-addresses>
                        <connect-timeout>10s</connect-timeout>
                    </tcp-initiator>
                    <outgoing-message-handler>
                        <request-timeout>5s</request-timeout>
                    </outgoing-message-handler>
                </initiator-config>
            </remote-cache-scheme>
            <remote-cache-scheme>
                <scheme-name>testcache2-remote</scheme-name>
                <service-name>ExtendTcpCacheService</service-name>
                <initiator-config>
                    <tcp-initiator>
                        <remote-addresses>
                            <socket-address>
                                <address>localhost</address>
                                <port>9099</port>
                            </socket-address>
                        </remote-addresses>
                        <connect-timeout>10s</connect-timeout>
                    </tcp-initiator>
                    <outgoing-message-handler>
                        <request-timeout>5s</request-timeout>
                    </outgoing-message-handler>
                </initiator-config>
            </remote-cache-scheme>
        </caching-schemes>
    </cache-config>
    Now the problem is that Coherence apparently only creates the first tcp initiator, and when I try to access the second cache 'testcache2', Coherence looks for this cache in the remote cache identified by this first tcp initiator, which obviously does not have this cache. Accessing 'testcache1' works just fine.
    This is the output:
    2007-11-13 11:15:19.676 Oracle Coherence GE 3.3/387p4 <D5> (thread=main, member=n/a): Started: TcpInitiator(Running=true, ThreadCount=0, PingInterval=0, PingTimeout=5000, RequestTimeout=5000, ConnectTimeout=10000, RemoteAddresses=[172.16.16.248:9098], KeepAliveEnabled=true, TcpDelayEnabled=false, ReceiveBufferSize=0, SendBufferSize=0, LingerTimeout=-1)
    2007-11-13 11:15:19.678 Oracle Coherence GE 3.3/387p4 <D5> (thread=main, member=n/a): Opening Socket connection to 172.16.16.248:9098
    2007-11-13 11:15:19.680 Oracle Coherence GE 3.3/387p4 <Info> (thread=main, member=n/a): Connected to 172.16.16.248:9098
    Exception in thread "main" com.tangosol.io.pof.PortableException (Remote: An exception occurred while processing a CacheEnsureRequest) java.lang.IllegalArgumentException: No scheme for cache: "testcache2"
         at com.tangosol.net.DefaultConfigurableCacheFactory.findSchemeMapping(DefaultConfigurableCacheFactory.java:476)
         at com.tangosol.net.DefaultConfigurableCacheFactory.ensureCache(DefaultConfigurableCacheFactory.java:270)
         at com.tangosol.net.CacheFactory.getCache(CacheFactory.java:689)
         at com.tangosol.net.CacheFactory.getCache(CacheFactory.java:667)
         at com.tangosol.coherence.component.net.extend.proxy.CacheServiceProxy.ensureNamedCacheProxy(CacheServiceProxy.CDB:14)
         at com.tangosol.coherence.component.comm.messageFactory.CacheServiceFactory$CacheEnsureRequest.onRun(CacheServiceFactory.CDB:13)
         at com.tangosol.coherence.component.comm.message.Request.run(Request.CDB:13)
         at com.tangosol.coherence.component.net.extend.proxy.CacheServiceProxy.process(CacheServiceProxy.CDB:1)
         at com.tangosol.coherence.component.comm.Channel.onReceive(Channel.CDB:104)
         at com.tangosol.coherence.component.comm.Connection.onReceive(Connection.CDB:7)
         at com.tangosol.coherence.component.comm.ConnectionManager$MessageExecuteTask.run(ConnectionManager.CDB:6)
         at com.tangosol.coherence.component.util.DaemonPool$WrapperTask.run(DaemonPool.CDB:24)
         at com.tangosol.coherence.component.util.DaemonPool$Daemon.onNotify(DaemonPool.CDB:49)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:35)
         at java.lang.Thread.run(Thread.java:613)
    Btw, this is with version 3.3 patch 5.
    Is there something wrong with my configurations, or is this a bug in Coherence ?
    Any help appreciated,
    Tom

    Hi Tom,
    You need to use different service names:
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
        <caching-scheme-mapping>
            <cache-mapping>
                <cache-name>testcache1</cache-name>
                <scheme-name>testcache1-remote</scheme-name>
            </cache-mapping>
            <cache-mapping>
                <cache-name>testcache2</cache-name>
                <scheme-name>testcache2-remote</scheme-name>
            </cache-mapping>
        </caching-scheme-mapping>
        <caching-schemes>
            <remote-cache-scheme>
                <scheme-name>testcache1-remote</scheme-name>
                <service-name>ExtendTcpCacheService1</service-name>
                <initiator-config>
                    <tcp-initiator>
                        <remote-addresses>
                            <socket-address>
                                <address>localhost</address>
                                <port>9098</port>
                            </socket-address>
                        </remote-addresses>
                        <connect-timeout>10s</connect-timeout>
                    </tcp-initiator>
                    <outgoing-message-handler>
                        <request-timeout>5s</request-timeout>
                    </outgoing-message-handler>
                </initiator-config>
            </remote-cache-scheme>
            <remote-cache-scheme>
                <scheme-name>testcache2-remote</scheme-name>
                <service-name>ExtendTcpCacheService2</service-name>
                <initiator-config>
                    <tcp-initiator>
                        <remote-addresses>
                            <socket-address>
                                <address>localhost</address>
                                <port>9099</port>
                            </socket-address>
                        </remote-addresses>
                        <connect-timeout>10s</connect-timeout>
                    </tcp-initiator>
                    <outgoing-message-handler>
                        <request-timeout>5s</request-timeout>
                    </outgoing-message-handler>
                </initiator-config>
            </remote-cache-scheme>
        </caching-schemes>
    </cache-config>
    Regards,
    user601849

  • Data processing in multiple caches in the same node

    We are using a partitioned cache to load data (multiple types of data) into multiple named caches. We plan to have all related data in one partition, and we have achieved this using key association.
    Now I want to do processing within that node and do some reconciliation of the data from the various sources. We tried an entry processor, but we want to consolidate data from multiple named caches on the node. In a very naive form, I think of each named cache as a table, and I am looking for ways to have a processor do some processing on the related data.
    I see we could use a combination of Invocable object, Invocation service and entry processors, but I am unable to implement it successfully.
    Can you please point me to any reference implementation where I can do processing on the data node without transferring data back to the client.
    Also any reference implementation of Map reduce (at the server side) in coherence would be helpful.
    Regards
    Ganesan

    Hi Ganesan
    A common approach to perform processing in the grid is to execute background threads in response to backing map listener events. The processing is co-located with data because the listener will be called in the JVM that owns the data. The thread can then make Coherence calls to access caches just like any other Coherence client.
    The Coherence Incubator has numerous examples of this at http://coherence.oracle.com/display/INCUBATOR/Home. The Incubator Common component includes an event processing package that simplifies the handling of events. See the messaging pattern for an example: Message.java and MessageEventManager.java
    I am not sure I answered your question but I hope the information helps.
    Paul
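    A hedged sketch of that pattern (cache names and the reconciliation step are illustrative; the listener would be declared via the <listener> element of the cache's backing map scheme): the listener fires on the storage member and hands the work to a background thread, which then reads the related caches co-located on the same node.

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.util.MapEvent;
    import com.tangosol.util.MultiplexingMapListener;

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class ReconciliationListener extends MultiplexingMapListener {
        // Never do heavy work on the event dispatch thread; offload it.
        private static final ExecutorService WORKER = Executors.newFixedThreadPool(4);

        @Override
        protected void onMapEvent(final MapEvent evt) {
            final Object key = evt.getKey();
            WORKER.submit(new Runnable() {
                public void run() {
                    // Key association means the related entries live in the same
                    // partition/JVM, so these gets are served locally.
                    NamedCache source1 = CacheFactory.getCache("source1-data");
                    NamedCache source2 = CacheFactory.getCache("source2-data");
                    Object s1 = source1.get(key);
                    Object s2 = source2.get(key);
                    // reconcile(key, s1, s2);  // application-specific consolidation
                }
            });
        }
    }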

  • Safari keeps opening a page from cache, though the origin page is available again

    This is weird. Safari keeps opening a site with this error message:
    501 Method Not Implemented
    Method Not Implemented
    GET to /indexfr.php not supported.
    But when I open the site with Firefox or Opera, the website is displayed correctly. Even when I use curl/wget from Terminal, all the HTML is returned correctly. About a month ago all browsers (Firefox, Safari, Opera, curl) showed the same 501 Method Not Implemented. After the remote site recovered, Firefox could display the page correctly, while Safari couldn't. It seems that Safari keeps displaying the cached page.
    I've tried emptying the cache (from the menu) many times, tried pressing Shift-Command-R, tried deleting the cache folder (~/Library/Cache/Safari), and I've also upgraded to the Safari 3 beta. None of it worked.
    IMHO, when the remote site responded with the 501 HTTP response, Safari cached 'the http response', and I don't know how to delete it. This is very annoying since I have to switch to Firefox to browse that one site.
    FYI: The remote site has more than one IP address; maybe I got this response from one of them. So is it possible that Safari keeps caching the IP address and doesn't use round-robin behaviour? I don't think so, though, because when I edited /etc/hosts to manually point that site to one of the IP addresses, Safari still displayed the 501 error message.
    So my main question is: how do I completely reset Safari, as if it were a brand-new computer?
    Sorry for my bad english
    Powerbook G4 1.3 12'   Mac OS X (10.4.10)  

    Ok, i've tried to put a question mark at the end of the url, but still got 501 Method not implemented.
    I'm sure that my ISP doesn't cache web pages. Because when i tried with curl or wget (from command line), i got full html code. It is ok to browse the site with Firefox and Opera. And when i set my http proxy to our ISP's proxy, i can browse that site !!!
    Here i provide some screenshots, please click:
    Firefox OK, Safari Don't
    Safari Don't ok. Proxy off. and my ISP doesn't do transparent caching.
    Safari Don't ok. curl is fine!
    Powerbook G4 1.3 12'   Mac OS X (10.4.10)

  • To keep query in buffer cache

    I want to ask how to keep a query in the buffer cache permanently.
    A query that runs frequently in a database will tend to stay in the buffer cache because of the LRU algorithm; LRU will not flush it out of the buffer cache while it keeps running frequently. But if I want to place a query in the buffer cache explicitly, what should I do? Please answer soon.

    I recently completed the Oracle DBA courseware, and in an interview the interviewer asked what I would do if he wanted to place a query explicitly in the buffer cache.
    As a fresher my knowledge is limited and I am trying to improve it continuously.
    The interviewer was a senior Oracle DBA. Based on my knowledge, I told him that we can keep objects such as tables and indexes in the KEEP cache by declaring them so, or that if the same query runs continuously it will automatically remain cached thanks to LRU. Putting a query itself in the cache was new to me; I searched for an answer but couldn't find one, so I finally decided to put the question on the forum.
    I am trying hard to get a job as a DBA, but so far in vain, and it is becoming frustrating. I have great hopes in Oracle; beyond that, I have enjoyed learning Oracle since the beginning of my graduation, which is why I decided to make my career in Oracle, and specifically as a DBA.
    Now let's see what happens.
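    For what it is worth, a hedged JDBC sketch (object names and connection details are made up) of the two closest real options: assigning the underlying table/index to the KEEP buffer pool, and, on 11g and later, caching the query's result set with the RESULT_CACHE hint. Neither literally pins a SQL statement's blocks in the buffer cache.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class KeepExamples {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/orcl", "system", "oracle");
                 Statement st = con.createStatement()) {
                // (a) Keep the object's blocks in the KEEP buffer pool.
                st.execute("ALTER TABLE app.lookup_codes STORAGE (BUFFER_POOL KEEP)");
                st.execute("ALTER INDEX app.lookup_codes_pk STORAGE (BUFFER_POOL KEEP)");

                // (b) Cache the query result itself in the server result cache (11g+).
                try (ResultSet rs = st.executeQuery(
                        "SELECT /*+ RESULT_CACHE */ code, description FROM app.lookup_codes")) {
                    while (rs.next()) {
                        System.out.println(rs.getString(1) + " = " + rs.getString(2));
                    }
                }
            }
        }
    }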

  • Timesten Database caching multiple oracle instances

    Hi All,
    I need to cache two Oracle users (schemas) which are running on different IPs. Can TimesTen support this?
    How do I cache another data store if one user is already cached in TimesTen?

    This means one TimesTen database can cache only a single Oracle database?
    CJ>>    Correct. One TimesTen database can cache data from one Oracle database. Note that a TimesTen instance (installation) supports multiple TimesTen databases.
    Is there any other way to shrink permsize ?
    CJ>>    There is no other way to shrink PermSize. And you can only shrink it if the currently configured size is larger than you really need.
    How can i deploy timesten on RAC?
    CJ>>   The same way as you would for a single instance database. A RAC database is just a single database. TimesTen fully supports RAC.
    Is there any similar feature available like RAC ?
    CJ>>   TimesTen provides the 'Cache Grid' feature, where multiple TimesTen caches that cache the same Oracle database work together to act as a single distributed cache.

  • "clean cache" - the ball keeps spinning

    I'm relatively new to Mac. When I click "clean cache" the spinning ball shows up and doesn't stop.
    Normally it doesn't take more than 1 minute maximum to clean?
    I don't use it often, only after using PayPal or similar.
    Is there a way to fix it ?
    thanks

    Manually clear the caches for the browser yourself. Safari keeps cache in
    /Users/account/Library/Caches/Metadata
    There are folders for history, bookmarks and site icons as well.
    Repair your disk drive; it sounds like it could have a directory problem.

  • CACHE Oracle Tables

    Hello Gurus,
    We are building a new application and identified that a few tables will be accessed very frequently. To decrease I/O we are planning to CACHE these tables. I am not sure if we made the right decision. My question is: what are the things you need to consider before caching Oracle tables?
    Any help greatly appreciated. Thanks.
    select * from V$VERSION
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    CORE    11.2.0.3.0      Production
    TNS for IBM/AIX RISC System/6000: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production

    OK, so you want to use multiple buffer pools and to put these tables into the keep pool?
    Why do you believe that this will improve performance? Oracle's default algorithm for aging out blocks that are seldomly used is pretty good for the vast majority of applications. Why do you believe that you can identify what blocks will most benefit from caching better than Oracle? Why do you believe that you wouldn't be better off giving whatever KEEP pool cache size you would allocate to the DEFAULT pool and letting Oracle's cache algorithm cache whatever it determines is appropriate? It is possible that there is something that you know about your application that allows you to make this sort of determination. But in the vast majority of cases I've seen, people that have tried to do so end up hurting performance at least a little because they're forcing Oracle at the margin to age out blocks that it would benefit from caching and to cache blocks that it would benefit from aging out.
    Do you understand the maintenance impact of using multiple buffer caches? If you are using a vaguely recent version of Oracle and using any of the automatic memory management features, Oracle does not automatically manage the non-default buffer caches. That increases the probability that using non-default buffer caches is going to create performance problems since humans are much less efficient at recognizing and reacting to changing memory utilization and substantially increases the amount of monitoring and work that the DBAs need to do on the system (which, in turn, increases the risk that they make a mistake).
    Justin
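    In that spirit, a small illustrative JDBC sketch (connection details made up) that reads V$BUFFER_POOL_STATISTICS, so you can see what each configured pool is actually doing before and after any change:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class BufferPoolStats {
        public static void main(String[] args) throws Exception {
            String sql = "SELECT name, physical_reads, db_block_gets + consistent_gets AS logical_reads "
                       + "FROM v$buffer_pool_statistics";
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/orcl", "system", "oracle");
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery(sql)) {
                while (rs.next()) {
                    String name = rs.getString(1);
                    long physicalReads = rs.getLong(2);
                    long logicalReads = rs.getLong(3);
                    double hitRatio = logicalReads == 0 ? 0.0
                            : 1.0 - ((double) physicalReads / logicalReads);
                    // A rough per-pool cache hit ratio; trends matter more than absolute values.
                    System.out.printf("%-8s physical_reads=%d hit_ratio=%.3f%n",
                            name, physicalReads, hitRatio);
                }
            }
        }
    }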
