Policy of Cache and Database

Is it possible to always look a class (MyProject) up in the cache (because none of its directly mapped attributes are volatile),
while always querying a one-to-many attribute toward another class (MyQuota) in the database (because there is a sort option)?
For the moment:
The MyProject class uses a Full identity map.
The MyQuota class uses a SoftCacheWeak identity map.
The _myQuota attribute of MyProject doesn't use indirection.

TopLink queries return objects, in this case instances of MyProject; queries do not return attributes of objects. When accessing an attribute from an object, TopLink does not re-issue a query: the data is simply returned through the Java reference.
If a change is made to the one-to-many collection, that change should be made through MyProject's collection, and the cached versions will be updated. If the change to the collection is being made outside of TopLink, and those changes are required in the MyProject object, then the MyProject object will have to be refreshed from the database.
If the collection is the only attribute that changes and you do not want to have to refresh the MyProject object to get the changes, then I would recommend not mapping the one-to-many, but maintaining it manually in your code and always executing separate queries for the collection data.
--Gordon
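
A minimal sketch of that last suggestion, assuming OracleAS TopLink 10g package names; the "project" back-reference and "position" sort field on MyQuota are hypothetical placeholders for the real mappings:

import oracle.toplink.expressions.ExpressionBuilder;
import oracle.toplink.queryframework.ReadAllQuery;
import oracle.toplink.sessions.Session;
import java.util.Vector;

// MyProject itself still resolves from its Full identity map as usual;
// only the quota list is fetched with a separate, always-refreshing query.
public class QuotaLoader {
    public static Vector loadQuotas(Session session, Object projectId) {
        ReadAllQuery query = new ReadAllQuery(MyQuota.class);
        ExpressionBuilder quota = query.getExpressionBuilder();
        // "project" is a hypothetical one-to-one mapping back to MyProject
        query.setSelectionCriteria(quota.get("project").get("id").equal(projectId));
        query.addOrdering(quota.get("position").ascending()); // hypothetical sort field
        // refresh the identity map from the database rather than trusting cached rows
        query.refreshIdentityMapResult();
        return (Vector) session.executeQuery(query);
    }
}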

Similar Messages

  • Forms/Reports: Role of the Database cache and Web cache

    Hello oracle experts,
    I am running a purely Forms and Reports based environment (9iAS).
    My questions are:
    a. Is it possible to use features from the Web Cache and
    Database Cache to boost the performance of my applications?
    b. Are all components monitorable from the OEM?
    Please guide me so that I can configure my OEM to monitor my
    Forms and Reports services.
    Thanks in advance for your reply.
    Kind regards
    Yogeeraj

    Hi BradW,
    The way this is supposed to be done in Web Cache is by keeping separate copies of a cached page for different types of browsers, distinguished by the User-Agent header.
    In the case of a cache miss, Web Cache expects origin servers to return the appropriate version of the page based on browser type, and the page from the origin server is simply forwarded back to the browser.
    If the page is cacheable, Web Cache then retains a separate copy for each User-Agent header value.
    And when there is a hit on this cached page, Web Cache returns the version of the page whose User-Agent header matches the request.
    Check out the config screen titled "Header Association" for this feature.
    As for forwarding requests to different origin servers based on the User-Agent header value, Web Cache does not have such a capability.

  • Is there a difference between Web and Database Cache

    What, if any, is the difference between Web Cache and Database Cache?

    There is good documentation in the form of PDF files at the following general URL
    "http://otn.oracle.com/products/ias"
    and specifically at:
    http://technet.oracle.com/docs/products/ias/doc_index.htm
    I have read them all and most things are documented in great detail.
    I'm trying to get the Web Cache working, but it core dumps immediately after I try to start it. No install errors were reported, and everything else I have tested seems to work. If you get it working, could you please send me a note on anything special you did? I really need the Web Cache to work.

  • Transactional Caches and Write Through

    I've been trying to implement the use of multiple caches, each with write-through, all within a transaction.
    The CacheFactory.commitTransactionCollection(..) method only seems to work correctly if the first TransactionMap throws an exception in the database code.
    If the second TransactionMap throws an exception, the caches do not appear to roll back correctly.
    I can wrap the whole operation in a JDBC transaction that rolls back the database correctly, but the caches are not all rolled back because they are committed one by one.
    For example, I write to two transaction maps, each one created from a separate cache. When committing the transaction maps, the second transaction map causes a database exception. It appears the first transaction map has already committed its objects and doesn't roll back.
    Is it possible to use Coherence with multiple transaction maps and get all the caches and databases rolled back?
    I've also been trying to look at using coherence-tx.rar, as described in the forums, within WebLogic, but I'm getting @@@@@ Failed to commit: javax.transaction.SystemException: Could not contact coordinator at null+SMARTPC:7001+null+t3+
    (SMARTPC being my PC name)
    Has anybody else had this problem? Bonus points for describing how to fix it!
    Mike

    > The transaction support in Coherence is for Local
    > Transactions. Basically, what this means is that the
    > first phase of the commit ("prepare") acquires locks
    > and ensures that there are no conflicts. The second
    > phase ("commit") does nothing but push data out to
    > the caches.
    This means that once prepare succeeds (all locks acquired), commit will try to copy local data into the base map. If there is a failure on any put, rollback will undo any changes made. All locks are cleared at the end.
    > The problem is that when you are using a
    > CacheStore module, the exception is occurring during
    > the second phase.
    If you start using a CacheStore module, then the database update has to be part of the atomic procedure.
    > For this reason, write-through and cache transactions
    > are not a supported combination.
    This is not true for a cache transaction that updates a single cache entry, right?
    > For single-cache-entry updates, CacheStore operations
    > are fully fault-tolerant in that the cache and
    > database are guaranteed to be consistent during any
    > server failure (including failures during partial
    > updates). While the mechanisms for fault-tolerance
    > vary, this is true for both write-through and
    > write-behind caches.
    For the write-through case, I believe the database and cache are atomically updated.
    > Coherence does not support two-phase CacheStore
    > operations across multiple CacheStore instances. In
    > other words, if two cache entries are updated,
    > triggering calls to CacheStore modules sitting on
    > separate servers, it is possible for one database
    > update to succeed and for the other to fail.
    But once we have multiple CacheStore modules, as soon as one atomic write-through put succeeds, the database is already updated for that specific put. There is no way to roll back the database update (although we can roll back the cache update). Therefore, you may end up with partial commits in situations where multiple cache entries are updated across different CacheStore modules.
    If I use write-behind CacheStore modules, can I roll back entirely and avoid partial commits, since writes are not immediately propagated to the database? So in essence, write-behind cache stores are no different from local transactions... Is my understanding correct?
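
    For reference, here is a rough sketch of the two-map local-transaction pattern under discussion, using the Coherence 3.x TransactionMap API; the cache names and keys are made up, and, as discussed above, a CacheStore failure during the second map's commit phase can still leave the first map's data already pushed:

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.util.TransactionMap;
    import java.util.Arrays;
    import java.util.Collection;

    public class TwoCacheTx {
        public static void transfer() {
            NamedCache cacheA = CacheFactory.getCache("cache-a"); // illustrative names
            NamedCache cacheB = CacheFactory.getCache("cache-b");

            TransactionMap txA = CacheFactory.getLocalTransaction(cacheA);
            TransactionMap txB = CacheFactory.getLocalTransaction(cacheB);
            TransactionMap[] txs = { txA, txB };
            for (int i = 0; i < txs.length; i++) {
                txs[i].setTransactionIsolation(TransactionMap.TRANSACTION_REPEATABLE_GET);
                txs[i].setConcurrency(TransactionMap.CONCUR_PESSIMISTIC);
                txs[i].begin();
            }

            txA.put("k1", "v1");
            txB.put("k2", "v2");

            // commitTransactionCollection prepares (locks) all maps, then commits;
            // a CacheStore failure while committing the second map cannot un-commit
            // data already pushed for the first map.
            Collection coll = Arrays.asList(txs);
            if (!CacheFactory.commitTransactionCollection(coll, 1)) {
                CacheFactory.rollbackTransactionCollection(coll);
            }
        }
    }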

  • Transactions involving multiple caches and a database

    Hi all,
    I'm curious if the following is possible with the transaction support in Coherence.
    I have the need to write data to two caches and a database from within my Tomcat container, and the whole operation must be atomic.
    Example:
    Write to database (this is NOT via a CacheStore)
    Write to cache 1 (uses write-through to database)
    Write to cache 2 (uses write-through to database)
    If any operation fails, the whole transaction needs to rollback. This feels like an XA transaction, but it looks like it won't work like I expect because the cache must be the last resource, but I have two caches. The ordering of operations is not important.
    Thanks,
    Rob

    MagnusE wrote:
    When using write-through there is (as far as I know) no way to get fully transactional behaviour (assuming you have more than one cache node), since each node is responsible for persisting its own data items (they will each use a separate connection to the database).
    If you, on the other hand, use a "cache beside" pattern, this can be made transactional using XA. As long as both caches belong to the same cache service they count as a single "last resource"...
    /Magnus
    You also need to use the same transaction isolation and concurrency settings for them to count as the same last resource. Practically, you should only have a single CacheAdapter instance enrolled in the same transaction.
    Best regards,
    Robert
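
    To make the "cache beside" ordering concrete, a non-XA sketch follows (placeholder cache names and SQL; for real atomicity you would instead enroll a single CacheAdapter in the XA transaction as Robert describes):

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import java.sql.Connection;
    import java.sql.PreparedStatement;

    public class CacheBesideSketch {
        public static void update(Connection con, Object key, Object value) throws Exception {
            NamedCache cache1 = CacheFactory.getCache("cache-1"); // placeholder names
            NamedCache cache2 = CacheFactory.getCache("cache-2");

            con.setAutoCommit(false);
            try {
                // placeholder SQL; the database is the system of record, commit it first
                PreparedStatement ps = con.prepareStatement("UPDATE my_table SET val = ? WHERE id = ?");
                ps.setObject(1, value);
                ps.setObject(2, key);
                ps.executeUpdate();
                ps.close();
                con.commit();
            } catch (Exception e) {
                con.rollback(); // nothing was cached yet, so cache and DB stay consistent
                throw e;
            }

            // Only touch the caches after a successful commit; if a put fails,
            // fall back to invalidation so no stale entry is left behind.
            try {
                cache1.put(key, value);
                cache2.put(key, value);
            } catch (RuntimeException e) {
                cache1.remove(key);
                cache2.remove(key);
            }
        }
    }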

  • Database Smart Flash Cache and Data Guard

    We are on Oracle 11.2.0.2 on OEL 5 and are considering buying Fusion-io (PCIe flash drives) for our Production (Primary) database.
    My question is: can we use physical Data Guard with the Primary side using Database Smart Flash Cache (keeping tables and/or partitions in the Flash Cache) and have a Standby side that does not have Database Smart Flash Cache?

    Yes you can.
    Data Guard will not check what type of drive you are using. As long as you meet the database and network requirements it will work.
    http://www.oracle.com/technetwork/database/features/availability/maa-090890.html
    Check out these white papers if you have concerns
    "Migration to Automatic Storage Management (ASM)"
    "Best Practices for Creating a Low-Cost Storage Grid for Oracle Databases"
    In addition here's an example of using two different file systems with Data Guard
    http://pythianpang.wordpress.com/2009/07/07/data-guard-asm-primary-to-filesystem-physical-standby-using-rman-duplicate/
    Best Regards
    mseberg

  • Portal and Database Cache

    All,
    Any pointers on using Portal with Database Cache? Can we identify Portal tables as 'hot', so that Portal page and content area rendering will be faster? I was told that Portal and iCache have different approaches to caching. What if I turned off Portal caching and let iCache do it for me?
    Would appreciate some response.
    Thanks
    Sanjay

    The caching Portal provides isn't a product. It's an optimization in the architecture: basically, storing information wherever we can so that it doesn't have to be regenerated.

  • Questions on cache and general RSRT settings for plancube

    Hi,
    we would like to:
    1) set request status 1 in RSRT for our plan queries, in order to automatically refresh the query after executing a planning function (the problem we have now is that the results of a planning function are not automatically updated in the query; only when doing something else, like executing another function, saving, or checking locks, do the results become visible).
    2) activate the delta cache for our plan queries.
    we have read OSS note 1136163 on RSRT settings. It says:
    Aggregation level "A" is implemented internally by the automatically created query "P/!!1P" (plan buffer query). This query acts like an InfoProvider. It reads the data of the database or provides the data from InfoProvider "P", it adds the data of the delta buffer "Dp" (or the delta buffer Dpi if P is a MultiProvider with several PartProviders Pi that can be planned) and transfers the data manager as data of the InfoProvider "A" of type "ALVL". The query "P/!!1P" can use aggregates and the cache; this is exactly like each normal query in "P". If "P" is a MultiProvider, it is useful to set PARTITIONMODE to "1".
                  For the query "P/!!1P" that is created automatically for an aggregation level or for all aggregation levels using the InfoProvider P, we recommend the following setting:
                  Read mode "H", request status "1", cache mode "1" or higher, delta cache "true" and SP grouping "1".
                  Furthermore, the selection to use the structure element (KIDSEL) should be "true".
                  The input-ready queries in "A" should not, and cannot, use a cache. The request status is irrelevant since queries in "A" are automatically set to current data. The delta buffer does not currently support hierarchy processing. Therefore, aggregation level "A" cannot completely support read mode "H". For input-ready queries at A:
                  Read mode "X", request status "0", cache mode "0".
                 The delta cache and SP grouping are not visible
    Problems we have:
    1) Query P/!!1P (PCA_AGQF/!!1PCA_AGQF in our example) does not allow changing the request status (it is greyed out). It now has value 0 instead of 1. It also does not allow activating the delta cache flag. How can we change this? In RSDIPROP we have set PARTITIONMODE to 1 for the MultiProvider and activated the delta cache flag...
    2) Can we use the cache / delta cache principle for our plan queries? If so, how do we ensure these settings remain activated in RSRT?
    regards
    Dries

    Hi,
    To change the cache settings for your cube:
    Open the cube in RSA1 and click 'Change'.
    - click the 'Environment' menu;
    - expand 'InfoProvider Properties';
    - select the option 'Change'.
    You will be able to set the cache mode for this provider.
    I don't think it is possible to use the cache for a MultiProvider.
    Regards,
    Amit

  • Cache and/or Connection problems under load

    I have a Kodo web app that's been running just fine in
    production for many months now. However, recently the web
    traffic has shot up by a huge amount, literally overnight.
    But unfortunately, it's caused the app to fail very ungracefully
    under the strain.
    It's been a crazy few days, and I haven't been able to do
    very much analysis because of higher priorities. But from
    what I have been able to glean, it now looks like Kodo is
    the most likely culprit. From what I've read in other messages
    here, it appears others may have been experiencing similar
    problems.
    My environment: Redhat Linux 8, Postgres 7.3.4 with the
    included JDBC3 driver, Apache 1.3.x, Tomcat 4.1.x and the
    webapp connector. Similar behavior was seen with Apache 2.x,
    Tomcat 4.1.x and the JK2 connector (that was on the new machine
    I set up to handle the new traffic, which, of course, died the
    night before).
    As I mentioned, this app has been running reliably for
    months with no problems. But when placed under heavy load,
    it appears to get into some sort of pathological state where
    it slows down dramatically (asymptotically?) to the point where
    it's effectively locked up. In one case, where the app was
    left running for several hours in this state, requests were
    taking 90 minutes to complete (normal is 1-5 seconds).
    From what I can deduce, there seem to be four things
    going on, three of which have been mentioned in recent threads
    here:
    1) Excessive memory consumption. When the app is
    operating normally, I see fairly flat memory usage for
    the JVM process. Under load, the JVM steadily expands
    until it hits its heap limit. I've gotten OutOfMemory
    exceptions with a heap size of 350MB, which should be plenty.
    2) Level 2 cache locking issues. I've seen dozens of
    threads waiting on a lock in the DataCache code. Not sure
    if there's a deadlock happening here or just that the
    threads are waiting on a lock that's being held for a long time.
    3) Database Connection leaks or contention. I see threads
    spinning in the DataSource code trying to get a connection.
    I also see dozens of connections from the Postgres side which
    seem to be sitting idle, but in the middle of a transaction.
    When things get bad, I also see exceptions being thrown because
    of timeouts waiting for a connection to become available. It's
    a web app, PMs should not be tied up for more than a few seconds.
    4) CPU usage pegged or nearly so for the JVM. I suspect
    this is related to #3. Something very bad is going on here.
    If I stop all inbound requests to the JVM when it's in this
    bad state, it will continue to burn CPU at 90%+ for a very
    long time. I think it will eventually finish what it's doing,
    but I haven't had the luxury of waiting for it. It's definitely
    not a linear slowdown proportional to the load.
    Attached are my kodo.properties file and some thread stack
    traces along with some comments. Any advice would be greatly
    appreciated. This is not a complicated app nor am I doing
    anything unusual. It doesn't seem logical that Kodo could
    breakdown so dramatically under load, so I'm hoping it's some
    sort of interaction thing that I can work around.
    Thanks.
    Ron Hitchens {mailto:[email protected]} RonSoft Technologies
    (510) 494-9597 (Home Office) http://www.ronsoft.com
    (707) 924-3878 (fax) Bit Twiddling At Its Finest
    "Born with a broken heart" -Kenny Wayne Shepard

    Please read prior posts regarding the level 2 cache. It is unusable under stress
    as far as I am concerned. Basically the entire cache gets locked on any database
    read, which makes it very unscalable.
    Are you using 2.5.3? It will request a connection from the pool every time it
    resolves a reference to a PC, even if it is cached in the PM and therefore Kodo
    does not need to read anything. As a result, if you iterate over 100 objects in your
    query and for each object resolve a reference to another object (always the
    same), Kodo will request 100 database connections from the pool (and note
    that they issue a rollback every time they return a connection to the pool, so
    getting a connection might be fairly expensive).
    In conjunction with the level 2 cache contention, this causes the application to go
    into a stupor.
    Try to go back to 2.5.2 (or maybe 2.5.4, which they promised in the near future
    with a workaround), or use "persistent-manager" connection retention if you
    discard the PM after each HTTP invocation - it will take care of the connection
    pooling issue. As far as the L2 cache goes, I was unable to find any workaround so
    far - see if you might be better off without the cache. You might be, if your object
    graph is not very complex.
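
    For what it's worth, a minimal sketch of the discard-the-PM-per-request pattern this workaround assumes, using the standard javax.jdo API (the exact Kodo connection-retention property name is documented in the Kodo manual, so it is not guessed here):

    import javax.jdo.PersistenceManager;
    import javax.jdo.PersistenceManagerFactory;

    public class RequestScopedPm {
        public static void handleRequest(PersistenceManagerFactory pmf) {
            PersistenceManager pm = pmf.getPersistenceManager();
            try {
                // ... run the queries and navigate objects for this HTTP request ...
            } finally {
                pm.close(); // always discard the PM so its connection returns to the pool
            }
        }
    }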
    "Ron Hitchens" <[email protected]> wrote in message
    news:[email protected]...
    >
    I have a Kodo web app that's been running just fine in
    production for many months now. However, recently the web
    traffic has shot up by a huge amount, literally overnight.
    But unfortunately, it's caused the app to fail very ungracefully
    under the strain.
    It's been a crazy few days, and I haven't been able to do
    very much analysis because of higher priorities. But from
    what I have been able to gleen, it now looks like Kodo is
    the most likely culprit. From what I've read in other messages
    here, it appears others may have been experiencing similar
    problems.
    My environment: Redhat Linux 8, Postgres 7.3.4 with the
    included JDBC3 driver, Apache 1.3.x, Tomcat 4.1.x and the
    webapp connector. Similar behavior was seen with Apache 2.x,
    Tomcat 4.1.x and the JK2 connector (that was on the new machine
    I setup to handle the new traffic, which, of course, died the
    night before).
    As I mentioned, this app has been running reliably for
    months with no problems. But when placed under heavy load,
    it appears to get into some sort of pathological state where
    it slows down dramatically (asymptotically?) to the point where
    it's effectively locked up. In one case, where the app was
    left running for several hours in this state, requests were
    taking 90 minutes to complete (normal is 1-5 seconds).
    From what I can deduce, there seem to be four things
    going on, three of which have been mentioned in recent threads
    here:
    1) Excessive memory consumption. When the app is
    operating normally, I see fairly flat memory usage for
    the JVM process. Under load, the JVM steadily expands
    until it hits its heap limit. I've gotten OutOfMemory
    exceptions with a heap size of 350MB, which should be plenty.
    2) Level 2 cache locking issues. I've seen dozens of
    threads waiting on a lock in the DataCache code. Not sure
    if there's a deadlock happening here or just that the
    threads are waiting on a lock that's being held for a long time.
    3) Database Connection leaks or contention. I see threads
    spinning in the DataSource code trying to get a connection.
    I also see dozens of connections from the Postgres side which
    seem to be sitting idle, but in the middle of a transaction.
    When things get bad, I also see exceptions being thrown because
    of timeouts waiting for a connection to become available. It's
    a web app, PMs should not be tied up for more than a few seconds.
    4) CPU usage pegged or nearly so for the JVM. I suspect
    this is related to #3. Something very bad is going on here.
    If I stop all inbound requests to the JVM when it's in this
    bad state, it will continue to burn CPU at 90%+ for a very
    long time. I think it will eventually finish what it's doing,
    but I haven't had the luxury of waiting for it. It's definitely
    not a linear slowdown proportional to the load.
    Attached are my kodo.properties file and some thread stack
    traces along with some comments. Any advice would be greatly
    appreciated. This is not a complicated app nor am I doing
    anything unusual. It doesn't seem logical that Kodo could
    breakdown so dramatically under load, so I'm hoping it's some
    sort of interaction thing that I can work around.
    Thanks.
    Ron Hitchens {mailto:[email protected]} RonSoft Technologies
    (510) 494-9597 (Home Office) http://www.ronsoft.com
    (707) 924-3878 (fax) Bit Twiddling At Its Finest
    "Born with a broken heart" -Kenny Wayne Shepard
    With cache enabled, 2.5.3
    Here the app had recently slowed down and then effectively locked up.
    There were many outstanding web requests that were not receiving output.
    At this point most threads seemed to be waiting at the same location.
    There were a large number of active database connections and most of
    them had open transactions (according to pg_stat_activity). The app
    was not responding to any web requests.
    It would seem that db transactions had been started, then the threads
    got stuck for a long time on a synchronization lock in the cache lookup.
    Below are two randomly chosen thread stack dumps.
    Thread-72[1] where
    [1] java.lang.Object.wait (native method)
    [2] java.lang.Object.wait (Object.java:429)
    [3] oswego.cs.dl.util.concurrent.WriterPreferenceReadWriteLock$ReaderLock.acquire (WriterPreferenceReadWriteLock.java:169)
    [4] com.solarmetric.kodo.runtime.datacache.AbstractCacheImpl.acquireReadLock (AbstractCacheImpl.java:384)
    [5] com.solarmetric.kodo.runtime.datacache.TimedDataCache.acquireReadLock (TimedDataCache.java:256)
    [6] com.solarmetric.kodo.runtime.datacache.DataCacheStoreManager.load (DataCacheStoreManager.java:595)
    [7] com.solarmetric.kodo.runtime.StateManagerImpl.loadFields (StateManagerImpl.java:2,330)
    [8] com.solarmetric.kodo.runtime.StateManagerImpl.isLoaded (StateManagerImpl.java:897)
    [9] com.europeasap.data.City.jdoGetname (null)
    [10] com.europeasap.data.City.getName (City.java:39)
    [11] com.europeasap.form.CustomerBookingForm.populateDepartureCityInfo (CustomerBookingForm.java:922)
    [12] com.europeasap.form.CustomerBookingForm.onetimeInit (CustomerBookingForm.java:871)
    [13] com.europeasap.form.CustomerBookingForm.populatePackageInfo (CustomerBookingForm.java:880)
    [14] com.europeasap.action.CustomizeTrip.perform (CustomizeTrip.java:66)
    [15] org.apache.struts.action.ActionServlet.processActionPerform (ActionServlet.java:1,787)
    [16] org.apache.struts.action.ActionServlet.process (ActionServlet.java:1,586)
    [17] org.apache.struts.action.ActionServlet.doGet (ActionServlet.java:492)
    [18] javax.servlet.http.HttpServlet.service (HttpServlet.java:740)
    [19] javax.servlet.http.HttpServlet.service (HttpServlet.java:853)
    [20] org.apache.catalina.core.ApplicationFilterChain.internalDoFilter (ApplicationFilterChain.java:247)
    [21] org.apache.catalina.core.ApplicationFilterChain.doFilter (ApplicationFilterChain.java:193)
    [22] org.apache.catalina.core.StandardWrapperValve.invoke (StandardWrapperValve.java:260)
    [23] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:643)
    [24] org.apache.catalina.core.StandardPipeline.invoke (StandardPipeline.java:480)
    [25] org.apache.catalina.core.ContainerBase.invoke (ContainerBase.java:995)
    [26] org.apache.catalina.core.StandardContextValve.invoke (StandardContextValve.java:191)
    [27] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:643)
    [28] org.apache.catalina.authenticator.AuthenticatorBase.invoke (AuthenticatorBase.java:493)
    [29] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:641)
    [30] org.apache.catalina.core.StandardPipeline.invoke (StandardPipeline.java:480)
    [31] org.apache.catalina.core.ContainerBase.invoke (ContainerBase.java:995)
    [32] org.apache.catalina.core.StandardContext.invoke (StandardContext.java:2,415)
    [33] org.apache.catalina.core.StandardHostValve.invoke (StandardHostValve.java:180)
    [34] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:643)
    [35] org.apache.catalina.valves.ErrorDispatcherValve.invoke (ErrorDispatcherValve.java:170)
    [36] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:641)
    [37] org.apache.catalina.valves.ErrorReportValve.invoke (ErrorReportValve.java:172)
    [38] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:641)
    [39] org.apache.catalina.core.StandardPipeline.invoke (StandardPipeline.java:480)
    [40] org.apache.catalina.core.ContainerBase.invoke (ContainerBase.java:995)
    [41] org.apache.catalina.core.StandardEngineValve.invoke (StandardEngineValve.java:174)
    [42] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:643)
    [43] org.apache.catalina.core.StandardPipeline.invoke (StandardPipeline.java:480)
    [44] org.apache.catalina.core.ContainerBase.invoke (ContainerBase.java:995)
    [45] org.apache.catalina.connector.warp.WarpRequestHandler.handle (null)
    [46] org.apache.catalina.connector.warp.WarpConnection.run (null)
    [47] java.lang.Thread.run (Thread.java:534)
    Thread-64[1] where
    [1] java.lang.Object.wait (native method)
    [2] java.lang.Object.wait (Object.java:429)
    [3] oswego.cs.dl.util.concurrent.WriterPreferenceReadWriteLock$ReaderLock.acquire (WriterPreferenceReadWriteLock.java:169)
    [4] com.solarmetric.kodo.runtime.datacache.AbstractCacheImpl.acquireReadLock (AbstractCacheImpl.java:384)
    [5] com.solarmetric.kodo.runtime.datacache.TimedDataCache.acquireReadLock (TimedDataCache.java:256)
    [6] com.solarmetric.kodo.runtime.datacache.DataCacheStoreManager.load (DataCacheStoreManager.java:595)
    [7] com.solarmetric.kodo.runtime.StateManagerImpl.loadField (StateManagerImpl.java:2,248)
    [8] com.solarmetric.kodo.runtime.StateManagerImpl.isLoaded (StateManagerImpl.java:899)
    [9] com.europeasap.data.HotelPrices.jdoGetseasonalPrices (null)
    [10] com.europeasap.data.HotelPrices.normalizeIndex (HotelPrices.java:113)
    [11] com.europeasap.data.HotelPrices.getCost (HotelPrices.java:45)
    [12] com.europeasap.logic.CostHelper.findLowestHotel (CostHelper.java:181)
    [13] com.europeasap.logic.CostHelper.computeBasePackageCost (CostHelper.java:297)
    [14] com.europeasap.logic.CostHelper.computeFinalPackageCost (CostHelper.java:246)
    [15] com.europeasap.form.CustomerBookingForm.updateDisplayCosts (CustomerBookingForm.java:1,440)
    [16] com.europeasap.form.CustomerBookingForm.updateCustomizeDisplayInfo (CustomerBookingForm.java:1,407)
    [17] com.europeasap.action.CustomizeTrip.perform (CustomizeTrip.java:68)
    [18] org.apache.struts.action.ActionServlet.processActionPerform (ActionServlet.java:1,787)
    [19] org.apache.struts.action.ActionServlet.process (ActionServlet.java:1,586)
    [20] org.apache.struts.action.ActionServlet.doGet (ActionServlet.java:492)
    [21] javax.servlet.http.HttpServlet.service (HttpServlet.java:740)
    [22] javax.servlet.http.HttpServlet.service (HttpServlet.java:853)
    [23] org.apache.catalina.core.ApplicationFilterChain.internalDoFilter (ApplicationFilterChain.java:247)
    [24] org.apache.catalina.core.ApplicationFilterChain.doFilter (ApplicationFilterChain.java:193)
    [25] org.apache.catalina.core.StandardWrapperValve.invoke (StandardWrapperValve.java:260)
    [26] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:643)
    [27] org.apache.catalina.core.StandardPipeline.invoke (StandardPipeline.java:480)
    [28] org.apache.catalina.core.ContainerBase.invoke (ContainerBase.java:995)
    [29] org.apache.catalina.core.StandardContextValve.invoke (StandardContextValve.java:191)
    [30] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:643)
    [31] org.apache.catalina.authenticator.AuthenticatorBase.invoke (AuthenticatorBase.java:493)
    [32] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:641)
    [33] org.apache.catalina.core.StandardPipeline.invoke (StandardPipeline.java:480)
    [34] org.apache.catalina.core.ContainerBase.invoke (ContainerBase.java:995)
    [35] org.apache.catalina.core.StandardContext.invoke (StandardContext.java:2,415)
    [36] org.apache.catalina.core.StandardHostValve.invoke (StandardHostValve.java:180)
    [37] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:643)
    [38] org.apache.catalina.valves.ErrorDispatcherValve.invoke (ErrorDispatcherValve.java:170)
    [39] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:641)
    [40] org.apache.catalina.valves.ErrorReportValve.invoke (ErrorReportValve.java:172)
    [41] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:641)
    [42] org.apache.catalina.core.StandardPipeline.invoke (StandardPipeline.java:480)
    [43] org.apache.catalina.core.ContainerBase.invoke (ContainerBase.java:995)
    [44] org.apache.catalina.core.StandardEngineValve.invoke (StandardEngineValve.java:174)
    [45] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:643)
    [46] org.apache.catalina.core.StandardPipeline.invoke (StandardPipeline.java:480)
    [47] org.apache.catalina.core.ContainerBase.invoke (ContainerBase.java:995)
    [48] org.apache.catalina.connector.warp.WarpRequestHandler.handle (null)
    [49] org.apache.catalina.connector.warp.WarpConnection.run (null)
    [50] java.lang.Thread.run (Thread.java:534)
    While running slow, 2.5.3
    At this point, the app had been running several hours normally, then
    apparently slowed down and locked up while I was away. When looking
    at the app threads and database activity, everything appeared idle.
    No transactions seemed to be open in the db. But the app was not
    behaving normally. Web requests that did not make use of JDO worked
    fine (but slow). But requests that hit the db either blocked or were
    very slow to respond.
    Looking back at the log, there had been a large number of requests
    that threw exceptions because they could not get a connection within
    five seconds.
    Most threads were idle, waiting on read, but some were in the state
    shown by the following two stack dumps. Unlike the cache threads above,
    they did not seem to be waiting for a lock to be granted; they seemed
    to be spinning in the connection management code, apparently trying
    to get a connection. I suspended and resumed the same thread repeatedly
    and it always seemed to be doing the same thing. Single stepping was
    very difficult because the debugger was slow to respond, apparently
    because other threads were also busy spinning.
    Postgres indicated that there were lots of connections open and
    that they were all idle, so there should not have been a shortage
    of connections in the pool. There are two mysteries here: 1) why
    can't this thread get a connection? and 2) why is it busy spinning?
    Thread-56[1] where
    [1] com.solarmetric.datasource.PreparedStatementCache$CacheAwareConnection.prepareStatement (PreparedStatementCache.java:184)
    [2] com.solarmetric.datasource.PreparedStatementCache$CacheAwareConnection.prepareStatement (PreparedStatementCache.java:169)
    [3] com.solarmetric.datasource.ConnectionWrapper.prepareStatement (ConnectionWrapper.java:199)
    [4] com.solarmetric.kodo.impl.jdbc.schema.dict.AbstractDictionary.isClosed (AbstractDictionary.java:1,912)
    [5] com.solarmetric.kodo.impl.jdbc.SQLExecutionManagerImpl.getConnectionFromFactory (SQLExecutionManagerImpl.java:186)
    [6] com.solarmetric.kodo.impl.jdbc.SQLExecutionManagerImpl.getConnection (SQLExecutionManagerImpl.java:147)
    [7] com.solarmetric.kodo.impl.jdbc.runtime.JDBCStoreManager.newSQLExecutionManager (JDBCStoreManager.java:828)
    [8] com.solarmetric.kodo.impl.jdbc.runtime.JDBCStoreManager.getSQLExecutionManager (JDBCStoreManager.java:714)
    [9] com.solarmetric.kodo.impl.jdbc.runtime.JDBCStoreManager.getDatastoreConnection (JDBCStoreManager.java:287)
    [10] com.solarmetric.kodo.runtime.datacache.DataCacheStoreManager.getDatastoreConnection (DataCacheStoreManager.java:465)
    [11] com.solarmetric.kodo.runtime.datacache.DataCacheStoreManager.load (DataCacheStoreManager.java:591)
    [12] com.solarmetric.kodo.runtime.StateManagerImpl.loadFields (StateManagerImpl.java:2,330)
    [13] com.solarmetric.kodo.runtime.StateManagerImpl.isLoaded (StateManagerImpl.java:897)
    [14] com.europeasap.data.City.jdoGetname (null)
    [15] com.europeasap.data.City.getName (City.java:39)
    [16] com.europeasap.form.CustomerBookingForm.populateDepartureCityInfo (CustomerBookingForm.java:922)
    [17] com.europeasap.form.CustomerBookingForm.onetimeInit (CustomerBookingForm.java:871)
    [18] com.europeasap.form.CustomerBookingForm.populatePackageInfo (CustomerBookingForm.java:880)
    [19] com.europeasap.action.CustomizeTrip.perform (CustomizeTrip.java:66)
    [20] org.apache.struts.action.ActionServlet.processActionPerform (ActionServlet.java:1,787)
    [21] org.apache.struts.action.ActionServlet.process (ActionServlet.java:1,586)
    [22] org.apache.struts.action.ActionServlet.doGet (ActionServlet.java:492)
    [23] javax.servlet.http.HttpServlet.service (HttpServlet.java:740)
    [24] javax.servlet.http.HttpServlet.service (HttpServlet.java:853)
    [25] org.apache.catalina.core.ApplicationFilterChain.internalDoFilter (ApplicationFilterChain.java:247)
    [26] org.apache.catalina.core.ApplicationFilterChain.doFilter (ApplicationFilterChain.java:193)
    [27] org.apache.catalina.core.StandardWrapperValve.invoke (StandardWrapperValve.java:260)
    [28] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:643)
    [29] org.apache.catalina.core.StandardPipeline.invoke (StandardPipeline.java:480)
    [30] org.apache.catalina.core.ContainerBase.invoke (ContainerBase.java:995)
    [31] org.apache.catalina.core.StandardContextValve.invoke (StandardContextValve.java:191)
    [32] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:643)
    [33] org.apache.catalina.authenticator.AuthenticatorBase.invoke (AuthenticatorBase.java:493)
    [34] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:641)
    [35] org.apache.catalina.core.StandardPipeline.invoke (StandardPipeline.java:480)
    [36] org.apache.catalina.core.ContainerBase.invoke (ContainerBase.java:995)
    [37] org.apache.catalina.core.StandardContext.invoke (StandardContext.java:2,415)
    [38] org.apache.catalina.core.StandardHostValve.invoke (StandardHostValve.java:180)
    [39] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:643)
    [40] org.apache.catalina.valves.ErrorDispatcherValve.invoke (ErrorDispatcherValve.java:170)
    [41] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:641)
    [42] org.apache.catalina.valves.ErrorReportValve.invoke (ErrorReportValve.java:172)
    [43] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:641)
    [44] org.apache.catalina.core.StandardPipeline.invoke (StandardPipeline.java:480)
    [45] org.apache.catalina.core.ContainerBase.invoke (ContainerBase.java:995)
    [46] org.apache.catalina.core.StandardEngineValve.invoke (StandardEngineValve.java:174)
    [47] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:643)
    [48] org.apache.catalina.core.StandardPipeline.invoke (StandardPipeline.java:480)
    [49] org.apache.catalina.core.ContainerBase.invoke (ContainerBase.java:995)
    [50] org.apache.catalina.connector.warp.WarpRequestHandler.handle (null)
    [51] org.apache.catalina.connector.warp.WarpConnection.run (null)
    [52] java.lang.Thread.run (Thread.java:534)
    Thread-56[1] where
    [1] com.solarmetric.datasource.DataSourceImpl$AbstractPool.findConnection (DataSourceImpl.java:826)
    [2] com.solarmetric.datasource.DataSourceImpl$AbstractPool.getConnection (DataSourceImpl.java:605)
    [3] com.solarmetric.datasource.DataSourceImpl.getConnection (DataSourceImpl.java:363)
    [4] com.solarmetric.datasource.DataSourceImpl.getConnection (DataSourceImpl.java:356)
    [5] com.solarmetric.kodo.impl.jdbc.runtime.DataSourceConnector.getConnection (DataSourceConnector.java:63)
    [6] com.solarmetric.kodo.impl.jdbc.SQLExecutionManagerImpl.getConnectionFromFactory (SQLExecutionManagerImpl.java:185)
    [7] com.solarmetric.kodo.impl.jdbc.SQLExecutionManagerImpl.getConnection (SQLExecutionManagerImpl.java:147)
    [8] com.solarmetric.kodo.impl.jdbc.runtime.JDBCStoreManager.newSQLExecutionManager (JDBCStoreManager.java:828)
    [9] com.solarmetric.kodo.impl.jdbc.runtime.JDBCStoreManager.getSQLExecutionManager (JDBCStoreManager.java:714)
    [10] com.solarmetric.kodo.impl.jdbc.runtime.JDBCStoreManager.getDatastoreConnection (JDBCStoreManager.java:287)
    [11] com.solarmetric.kodo.runtime.datacache.DataCacheStoreManager.getDatastoreConnection (DataCacheStoreManager.java:465)
    [12] com.solarmetric.kodo.runtime.datacache.DataCacheStoreManager.initialize (DataCacheStoreManager.java:519)
    [13] com.solarmetric.kodo.runtime.StateManagerImpl.loadInitialState (StateManagerImpl.java:215)
    [14] com.solarmetric.kodo.runtime.PersistenceManagerImpl.getObjectByIdFilter (PersistenceManagerImpl.java:1,278)
    [15] com.solarmetric.kodo.runtime.PersistenceManagerImpl.getObjectById (PersistenceManagerImpl.java:1,179)
    [16] com.solarmetric.kodo.runtime.datacache.query.CacheAwareQuery$CachedResultList.get (CacheAwareQuery.java:432)
    [17] java.util.AbstractList$Itr.next (AbstractList.java:421)
    [18] com.europeasap.form.CustomerBookingForm.populateDepartureCityInfo (CustomerBookingForm.java:919)
    [19] com.europeasap.form.CustomerBookingForm.onetimeInit (CustomerBookingForm.java:871)
    [20] com.europeasap.form.CustomerBookingForm.populatePackageInfo (CustomerBookingForm.java:880)
    [21] com.europeasap.action.CustomizeTrip.perform (CustomizeTrip.java:66)
    [22] org.apache.struts.action.ActionServlet.processActionPerform (ActionServlet.java:1,787)
    [23] org.apache.struts.action.ActionServlet.process (ActionServlet.java:1,586)
    [24] org.apache.struts.action.ActionServlet.doGet (ActionServlet.java:492)
    [25] javax.servlet.http.HttpServlet.service (HttpServlet.java:740)
    [26] javax.servlet.http.HttpServlet.service (HttpServlet.java:853)
    [27] org.apache.catalina.core.ApplicationFilterChain.internalDoFilter (ApplicationFilterChain.java:247)
    [28] org.apache.catalina.core.ApplicationFilterChain.doFilter (ApplicationFilterChain.java:193)
    [29] org.apache.catalina.core.StandardWrapperValve.invoke (StandardWrapperValve.java:260)
    [30] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:643)
    [31] org.apache.catalina.core.StandardPipeline.invoke (StandardPipeline.java:480)
    [32] org.apache.catalina.core.ContainerBase.invoke (ContainerBase.java:995)
    [33] org.apache.catalina.core.StandardContextValve.invoke (StandardContextValve.java:191)
    [34] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:643)
    [35] org.apache.catalina.authenticator.AuthenticatorBase.invoke (AuthenticatorBase.java:493)
    [36] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:641)
    [37] org.apache.catalina.core.StandardPipeline.invoke (StandardPipeline.java:480)
    [38] org.apache.catalina.core.ContainerBase.invoke (ContainerBase.java:995)
    [39] org.apache.catalina.core.StandardContext.invoke (StandardContext.java:2,415)
    [40] org.apache.catalina.core.StandardHostValve.invoke (StandardHostValve.java:180)
    [41] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:643)
    [42] org.apache.catalina.valves.ErrorDispatcherValve.invoke (ErrorDispatcherValve.java:170)
    [43] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:641)
    [44] org.apache.catalina.valves.ErrorReportValve.invoke (ErrorReportValve.java:172)
    [45] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:641)
    [46] org.apache.catalina.core.StandardPipeline.invoke (StandardPipeline.java:480)
    [47] org.apache.catalina.core.ContainerBase.invoke (ContainerBase.java:995)
    [48] org.apache.catalina.core.StandardEngineValve.invoke (StandardEngineValve.java:174)
    [49] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:643)
    [50] org.apache.catalina.core.StandardPipeline.invoke (StandardPipeline.java:480)
    [51] org.apache.catalina.core.ContainerBase.invoke (ContainerBase.java:995)
    [52] org.apache.catalina.connector.warp.WarpRequestHandler.handle (null)
    [53] org.apache.catalina.connector.warp.WarpConnection.run (null)
    [54] java.lang.Thread.run (Thread.java:534)
    With cache disabled, 2.4.3
    This run was an accident. I inadvertently ran the app with the older
    2.4.3 version of Kodo, with the cache disabled. This one got into trouble
    almost immediately. It also seemed to lock up with lots of open transactions
    in the db. It's also interesting that these two threads also seem to be
    hanging around the same method as in 2.5.3.
    Thread-63[1] where 0x9f9
    [1] com.solarmetric.datasource.PreparedStatementCache$CacheAwareConnection.prepareStatement (PreparedStatementCache.java:184)
    [2] com.solarmetric.datasource.ConnectionWrapper.prepareStatement (ConnectionWrapper.java:377)
    [3] com.solarmetric.kodo.impl.jdbc.SQLExecutionManagerImpl.prepareStatementInternal (SQLExecutionManagerImpl.java:807)
    [4] com.solarmetric.kodo.impl.jdbc.SQLExecutionManagerImpl.executePreparedQueryInternal (SQLExecutionManagerImpl.java:761)
    [5] com.solarmetric.kodo.impl.jdbc.SQLExecutionManagerImpl.executeQueryInternal (SQLExecutionManagerImpl.java:691)
    [6] com.solarmetric.kodo.impl.jdbc.SQLExecutionManagerImpl.executeQuery (SQLExecutionManagerImpl.java:372)
    [7] com.solarmetric.kodo.impl.jdbc.SQLExecutionManagerImpl.executeQuery (SQLExecutionManagerImpl.java:356)
    [8] com.solarmetric.kodo.impl.jdbc.ormapping.ClassMapping.loadByPK (ClassMapping.java:950)
    [9] com.solarmetric.kodo.impl.jdbc.runtime.JDBCStoreManager.initialize (JDBCStoreManager.java:263)
    [10] com.solarmetric.kodo.runtime.StateManagerImpl.loadInitialState (StateManagerImpl.java:174)
    [11] com.solarmetric.kodo.runtime.PersistenceManagerImpl.getObjectByIdFilter (PersistenceManagerImpl.java:1,023)
    [12] com.solarmetric.kodo.runtime.PersistenceManagerImpl.getObjectById (PersistenceManagerImpl.java:942)
    [13] com.solarmetric.kodo.impl.jdbc.ormapping.OneToOneMapping.load (OneToOneMapping.java:147)
    [14] com.solarmetric.kodo.impl.jdbc.runtime.JDBCStoreManager.load (JDBCStoreManager.java:375)
    [15] com.solarmetric.kodo.runtime.StateManagerImpl.loadField (StateManagerImpl.java:2,035)
    [16] com.solarmetric.kodo.runtime.StateManagerImpl.isLoaded (StateManagerImpl.java:720)
    [17] com.europeasap.data.CityMarkup.jdoGetcity (null)
    [18] com.europeasap.data.CityMarkup.getCity (CityMarkup.java:30)
    [19] com.europeasap.logic.CostHelper.getCityMarkup (CostHelper.java:81)
    [20] com.europeasap.logic.CostHelper.computeBasePackageCost (CostHelper.java:289)
    [21] com.europeasap.logic.CostHelper.computeFinalPackageCost (CostHelper.java:246)
    [22] com.europeasap.form.CustomerBookingForm.updateDisplayCosts (CustomerBookingForm.java:1,440)
    [23] com.europeasap.form.CustomerBookingForm.updateCustomizeDisplayInfo (CustomerBookingForm.java:1,407)
    [24] com.europeasap.action.CustomizeTrip.perform (CustomizeTrip.java:68)
    [25] org.apache.struts.action.ActionServlet.processActionPerform (ActionServlet.java:1,787)
    [26] org.apache.struts.action.ActionServlet.process (ActionServlet.java:1,586)
    [27] org.apache.struts.action.ActionServlet.doPost (ActionServlet.java:510)
    [28] javax.servlet.http.HttpServlet.service (HttpServlet.java:760)
    [29] javax.servlet.http.HttpServlet.service (HttpServlet.java:853)
    [30] org.apache.catalina.core.ApplicationFilterChain.internalDoFilter (ApplicationFilterChain.java:247)
    [31] org.apache.catalina.core.ApplicationFilterChain.doFilter (ApplicationFilterChain.java:193)
    [32] org.apache.catalina.core.StandardWrapperValve.invoke (StandardWrapperValve.java:260)
    [33] org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext (StandardPipeline.java:643)
    [34] org.apache.catalina.core.StandardPipeline.invoke (StandardPipeline.java:480)
    [35] org.apache.catalina.core.ContainerBase.invoke (ContainerBase.java:995)
    [36] org.apache.catal

  • Cache and performance issue in browsing SSAS cube using Excel for first time

    Hello Group Members,
    I am facing a cache and performance issue for the first time, when I try to open an SSAS cube connection using Excel (using the Data tab -> From Other Sources -> From Analysis Services) after the daily cube refresh. On end users'
    systems (8 GB RAM), it takes 10 minutes to open the cube the first time. From the next run onwards, it opens up quickly, within 10 secs.
    We have a daily ETL process running on high-end servers. The configuration of the dedicated SSAS cube server is 8 cores, 64 GB RAM. In total we have 4 cubes - of which 3 are full cube refreshes and 1 is an incremental refresh. We have seen that after the
    daily cube refresh, it takes 10-odd minutes to open the cube on end users' systems. From the next time onwards, it opens up really fast, within 10 secs. After the cube refresh, on server systems (16 GB RAM), it takes 2-odd minutes to open the cube.
    Is there any way we could reduce the time taken for the first attempt?
    Best Regards, Arka Mitra.

    Thanks Richard and Charlie,
    We have implemented the solution/suggestions in our DEV environment and we have seen a definite improvement. We are waiting for this to be deployed in the UAT environment to note down the actual performance and time improvement while browsing the cube for the
    first time after the daily cube refresh.
    Guys,
    This is what we have done:
    We have 4 cube databases and each cube db has 1-8 cubes.
    1. We are doing daily cube refresh using SQL jobs as follows:
    <Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
    <Parallel>
    <Process xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2" xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100" xmlns:ddl200="http://schemas.microsoft.com/analysisservices/2010/engine/200" xmlns:ddl200_200="http://schemas.microsoft.com/analysisservices/2010/engine/200/200">
    <Object>
    <DatabaseID>FINANCE CUBES</DatabaseID>
    </Object>
    <Type>ProcessFull</Type>
    <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
    </Process>
    </Parallel>
    </Batch>
    2. Next we are creating a separate SQL job (Cache Warming - Profitability Analysis) for cube cache warming for each single cube in each cube db like:
    CREATE CACHE FOR [Profit Analysis] AS
    {[Measures].members}
    * [TIME].[FINANCIAL QUARTER].[FINANCIAL QUARTER]
    3. Finally after each cube refresh step, we are creating a new step of type T-SQL where we are calling these individual steps:
    EXEC dbo.sp_start_job N'Cache Warming - Profit Analysis';
    GO
    I will update the post after I receive the actual improvement figures from the UAT / Production environment.
    Best Regards, Arka Mitra.

  • Purpose of Retention Policy Recovery Window and Redundancy

    Hi,
    Good Evening,
    I have some queries regarding the RMAN Retention Policy Recovery Window and Redundancy.
    1. Are there any conditions for setting the retention policy recovery window, redundancy, and control_file_record_keep_time? What is the relationship between these 3 parameters?
    2. Explain the scenario if I set control_file_record_keep_time=4, Redundancy=3 and Recovery Window=7.
    3. If I set Redundancy=3 and Recovery Window=7, my backup location only keeps 3 copies of the backup based on the redundancy, so what is the purpose of Recovery Window=7? Please give an example.
    4. If I change the values to Recovery Window=3 and Redundancy=7, what will happen? How many days of backups will be available in my FRA location? Explain with one scenario.
    Thanks in advance.
    Vijay.

    Hi,
    Take a look at the following doc contents:
    Configuring the Backup Retention Policy
    As explained in "Backup Retention Policies", the backup retention policy specifies which backups must be retained to meet your data recovery requirements. This policy can be based on a recovery window or redundancy. Use the CONFIGURE RETENTION POLICY command to specify the retention policy.
    So you have the option to configure the retention policy based on either a recovery window or redundancy.
    Read what the doc says about both:
    Recovery Window-Based Retention Policy ==>RMAN does not consider any full or level 0 incremental backup as obsolete if it falls within the recovery window.  Additionally, RMAN retains all archived logs and level 1 incremental backups that are needed to recover to a random point within the window.
    Redundancy-Based Retention Policy==>The REDUNDANCY parameter of the CONFIGURE RETENTION POLICY command specifies how many full or level 0 backups of each datafile and control file that RMAN should keep. If the number of full or level 0 backups for a specific datafile or control file exceeds the REDUNDANCY setting, then RMAN considers the extra backups as obsolete. The default retention policy is REDUNDANCY 1.
    RMAN> show RETENTION POLICY;
    using target database control file instead of recovery catalog
    RMAN configuration parameters for database with db_unique_name DDTEST are:
    CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
    RMAN> CONFIGURE RETENTION POLICY TO REDUNDANCY 3;
    old RMAN configuration parameters:
    CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
    new RMAN configuration parameters:
    CONFIGURE RETENTION POLICY TO REDUNDANCY 3;
    new RMAN configuration parameters are successfully stored
    RMAN> show RETENTION POLICY;
    RMAN configuration parameters for database with db_unique_name DDTEST are:
    CONFIGURE RETENTION POLICY TO REDUNDANCY 3;
    RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
    old RMAN configuration parameters:
    CONFIGURE RETENTION POLICY TO REDUNDANCY 3;
    new RMAN configuration parameters:
    CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
    new RMAN configuration parameters are successfully stored
    RMAN> show RETENTION POLICY;
    RMAN configuration parameters for database with db_unique_name DDTEST are:
    CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
    CONTROL_FILE_RECORD_KEEP_TIME: this parameter applies only to records in the control file that are circularly reusable (such as archive log records and various backup records). Ref doc: CONTROL_FILE_RECORD_KEEP_TIME
    So I believe you can find the answers to your questions in the details above.
    HTH

  • Coherence and database backend updates

    Hi
    I am new to Coherence; I like the features of Coherence such as the replicated cache, cache-through, etc.
    My question is: if I am using Coherence with cache-through and partitioned caching, and I have a back-end update on the data through an Oracle database stored procedure, how does the Coherence cache get the latest data changed by the stored procedure? Is there any event-driven mechanism to invalidate the cache so it reloads the data, or is that not good practice in this scenario?
    Rgds
    Anil

    Hi Anil,
    it really depends on what you need to achieve.
    There is a very good wiki which describes most of the things you can do with Coherence at the url: http://wiki.tangosol.com/display/COH33UG/Coherence+3.3+Home
    However, since you have an existing database model which you want to retain, because you want the data to still reside in the database, depending on the consistency requirements you might not be totally free in how you represent the data in Coherence.
    The best feature of Coherence to significantly reduce the load on the database is the write-behind cache.
    Write-behind functionality allows you to coalesce multiple updates to the same DB row into a single update as data is written out only after a certain amount of time thereby combining the changes from multiple updates to a single one.
    It also allows ripe updates to multiple cached entries for which the primary copies reside in the same cache node to be written out in the same database operation (preferably in batch mode).
    Due to these behaviors write-behind has a profound effect on write-heavy applications.
    However, that mode of operation requires that any logic that needs to query the data-set consistently, and all operations changing the data-set, go through the cache, because the database is not guaranteed to be consistent. Therefore it might not be good for you.
    Another approach is that if you want to do your DB changes directly in the DB, you can simply cache data in whatever structures that suit your access patterns in a read-through cache, and if there are any changes to the database you invalidate entries which are stale.
    The cache structures can be whatever which you choose appropriate to your logic, you can cache single entries, you can cache entire top-down object hierarchies, you can cache query results keyed by the query parameters.
    The point is that you are free to choose the most appropriate structure of what to cache as opposed to the caching features of other frameworks which choose the caching structures to be aligned to their classes and not your needs.
    Just keep in mind that, without doing serious locking (which adversely affects both read and write performance), a change might occur to one or more entries between your reads of any two or more entries from the cache. This means that when you use multiple entries from the cache, there might not be any transaction-set in the database which contains all of the entries in the state in which you got them.
    So if you need any such guarantees, then the data you need those guarantees on must reside in a single cache entry, and that cache entry must have been retrieved from the database with a transaction which provides those guarantees in the first place (if you read data from the database with READ_COMMITTED isolation and with multiple queries, you don't get that consistency even from the database, as some of the entries read by earlier operations in the transaction might have been overwritten by another transaction that committed before the subsequent read operations in your transaction).
    There can be other approaches as well.
    It really all depends on your access patterns and without knowing more about that it is hard to suggest the correct solution.
    Best regards,
    Robert
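
    As a rough illustration of the invalidation approach, assuming a read-through cache named "my-cache" and some external change-detection wiring (Coherence itself does not ship a database-to-cache trigger):

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class StaleEntryInvalidator {
        private final NamedCache cache = CacheFactory.getCache("my-cache"); // assumed name

        // Call this from whatever change detection you build on the DB side,
        // e.g. polling a change-log table that the stored procedure also writes,
        // or an AQ/JMS message sent after the procedure commits.
        public void onDatabaseChange(Object key) {
            // Evict the stale entry; the next get(key) re-reads it through
            // the configured CacheLoader/CacheStore.
            cache.remove(key);
        }
    }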

  • How can I make WL 8.1 flush the cache and/or pool for 1.1 EJBs

    Hi,
    I'm using 1.1 deployment descriptors for my CMP entity bean that were previously
    used in the WL 5.1 version of my project.
    Things do get deployed, but I've observed confusing information when monitoring
    the EJB via the Admin Console.
    It appears that the WebLogic container is not flushing the cache and/or pool
    after the bean has finished processing and a sufficient time has expired
    (i.e. the idle-timeout-seconds).
    From what I've understood from the on-line information, each EJB has its
    own cache (since I've not done anything special for that), and the instance in
    the cache is only passivated when the cache is full and the server needs to activate
    another instance. On passivation, it appears to be returning the instance to the
    pool. But it's unclear/undocumented when the pool is cleared, if at all.
    What I want is:
    1. A way to get my cached instance passivated.
    2. A way to get my pooled instance flushed.
    The reason I'm looking into this is because in my case it appears that the cached/pooled
    instances are contributing to OutOfMemory errors and, because of the nature of the requirements,
    etc., we need the cache size to be high for certain processing.
    Thanks
    Parasher

    I think it's probably best to contact technical support about this.
    There are different patches for different versions of WLS.
    I'd mention 'CR128026' to them to get started.
    -thorick
    "Parasher" <[email protected]> wrote:
    >
    Hi,
    Thank you for your reply !
    How can I get more information about this patch and the patch itself
    Is there a way I can look it up online or do I have to contact the support
    folks
    and if so what should I need to tell them to convey which patch I'm talking
    about.
    Thank you in advance.
    Parasher
    "thorick" <[email protected]> wrote:
    Hi,
    If you use 'Database' concurrency, then there is a patch available for
    some 8.1 service packs to enable idle-timeout-seconds on the cache.
    I believe that this will be a standard feature with the next service pack.
    There is no comparable mechanism for the pool in 8.1; that is a feature
    coming with the next major release of WLS. If the 8.1 patch works for you,
    it can save you memory during off-peak usage times. Note that this patch
    does not work for 'Exclusive' concurrency.
    -thorick

  • Cache and Load Balancing with Oracle APEX Listener

    Hi,
    I intend to use only HTTP access.
    How do I implement caching and load balancing with the Oracle APEX Listener?
    Is it possible to do this with the standalone running APEX Listener?
    Thanks in advance for any tips/documentation/references.
    Kind Regards.

    Hi,
    I think this question is best asked in the APEX Listener forum:
    ORDS, SODA & JSON in the Database
    Kind regards
    Sandro

  • Cache and Load Balancing for the Oracle APEX Listener

    Hi,
    I intend to use only HTTP access.
    My database is Oracle 11gR2, SE, 32 bit.
    How do I implement caching and load balancing with the Oracle APEX Listener?
    Is it possible to do this with the standalone running APEX Listener?
    Thanks in advance for any tips/documentation/references.
    Kind Regards.

    Error. To be closed.
