Refresh Ahead Cache with JPA

I am trying to use refresh-ahead caching with JpaCacheStore. My backing-map config is given below. I am using the same JPA example as given in the Coherence tutorial. The cache only loads the data from the database when the server starts. When I change the data in the DB, the change is not reflected in the cache. I am not sure I am doing the right thing. Need your help!!
<backing-map-scheme>
                    <read-write-backing-map-scheme>
                         <!--Define the cache scheme-->
                         <internal-cache-scheme>
                              <local-scheme>
                                    <expiry-delay>1m</expiry-delay>
                              </local-scheme>
                         </internal-cache-scheme>
                         <cachestore-scheme>
                              <class-scheme>
                                   <class-name>com.tangosol.coherence.jpa.JpaCacheStore</class-name>
                                   <init-params>
                                        <!--
                                        This param is the entity name
                                        This param is the fully qualified entity class
                                        This param should match the value of the
                                        persistence unit name in persistence.xml
                                        -->
                                        <init-param>
                                             <param-type>java.lang.String</param-type>
                                             <param-value>{cache-name}</param-value>
                                        </init-param>
                                        <init-param>
                                             <param-type>java.lang.String</param-type>
                                             <param-value>com.oracle.handson.{cache-name}</param-value>
                                        </init-param>
                                        <init-param>
                                             <param-type>java.lang.String</param-type>
                                             <param-value>JPA</param-value>
                                        </init-param>
                                   </init-params>
                              </class-scheme>
                         </cachestore-scheme>
                     <refresh-ahead-factor>0.5</refresh-ahead-factor>
                    </read-write-backing-map-scheme>
               </backing-map-scheme>
Thanks in advance.
John

I guess this is the answer (from the documentation):
Sorry for the dumb question :)
Note: For use with Partitioned (Distributed) and Near cache
topologies: Read-through/write-through caching (and variants) are
intended for use only with the Partitioned (Distributed) cache
topology (and by extension, Near cache). Local caches support a
subset of this functionality. Replicated and Optimistic caches should
not be used.
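For anyone hitting the same symptom: once the scheme runs on a Partitioned (Distributed) service, refresh-ahead can be verified by repeatedly reading the same key and timing the gets. The probe below is not from the thread; the cache name "Employees" and the key are placeholders for whatever your cache mapping and JPA entity actually use.
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class RefreshAheadProbe {
    public static void main(String[] args) throws InterruptedException {
        NamedCache cache = CacheFactory.getCache("Employees"); // hypothetical cache name
        Object key = Long.valueOf(150L);                       // hypothetical key
        try {
            for (int i = 0; i < 10; i++) {
                long start = System.currentTimeMillis();
                Object value = cache.get(key);
                // Once an entry is past refresh-ahead-factor * expiry-delay, this
                // get should return the old value quickly while JpaCacheStore.load()
                // runs on a background worker thread instead of blocking us.
                System.out.println("get took " + (System.currentTimeMillis() - start)
                        + " ms: " + value);
                Thread.sleep(20000L); // poll more often than the 1m expiry-delay
            }
        } finally {
            CacheFactory.shutdown();
        }
    }
}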

Similar Messages

  • After REFRESH the cached object is not consistent with the database table

    After REFRESH, the cached object is not consistent with the database table. Why?
    I created a JDBC connection with the Oracle database (HR schema) using JDeveloper(10.1.3) and then I created an offline database (HR schema)
    in JDeveloper from the existing database tables (HR schema). Then I made some updates to the JOBS database table using SQL*Plus.
    Then I returned to the JDeveloper tool and refreshed the HR connection, but I found that none of the changes appeared in the offline database table JOBS in
    JDeveloper.
    How can I make JDeveloper's offline tables synchronize with the underlying database tables?

    qkc,
    Once you create an offline table, it's just a copy of a table definition as of the point in time you brought it in from the database. Refreshing the connection, as you describe it, just refreshes the database browser, not any offline objects. If you want to synchronize the offline table, right-click the offline table and choose "Generate or Reconcile Objects" to reconcile the object to the database. I just tried this in 10.1.3.3 (not the latest 10.1.3, I know), and it works properly.
    John

  • Read-Through Caching with expiry-delay and near cache (front scheme)

    We are experiencing a problem with our custom CacheLoader and a near cache together with an expiry-delay on the backing map scheme.
    I was under the assumption that it was possible to have an expiry-delay configured on the backing scheme and that the near cache object would be evicted when the backing object was evicted. But according to our tests, we have to put an expiry-delay on the front scheme too.
    Is my assumption correct that there is no automatic eviction on the near cache (front scheme)?
    With this config, the near cache is never cleared:
                 <near-scheme>
                      <scheme-name>config-part</scheme-name>
                      <front-scheme>
                            <local-scheme />
                      </front-scheme>
                      <back-scheme>
                            <distributed-scheme>
                                  <scheme-ref>config-part-distributed</scheme-ref>
                            </distributed-scheme>
                      </back-scheme>
                <autostart>true</autostart>
                </near-scheme>
                <distributed-scheme>
                      <scheme-name>config-part-distributed</scheme-name>
                      <service-name>partDistributedCacheService</service-name>
                      <thread-count>10</thread-count>
                      <backing-map-scheme>
                            <read-write-backing-map-scheme>
                                  <read-only>true</read-only>
                                  <scheme-name>partStatusScheme</scheme-name>
                                  <internal-cache-scheme>
                                        <local-scheme>
                                              <scheme-name>part-eviction</scheme-name>
                                              <expiry-delay>30s</expiry-delay>
                                        </local-scheme>
                                  </internal-cache-scheme>
                                  <cachestore-scheme>
                                        <class-scheme>
                                              <class-name>net.jakeri.test.PingCacheLoader</class-name>
                                        </class-scheme>
                                  </cachestore-scheme>
                                  <refresh-ahead-factor>0.5</refresh-ahead-factor>
                            </read-write-backing-map-scheme>
                      </backing-map-scheme>
                      <autostart>true</autostart>
                      <local-storage system-property="tangosol.coherence.config.distributed.localstorage">true</local-storage>
                </distributed-scheme>
    With this config (an expiry-delay added on the front-scheme), the near cache does get cleared:
            <near-scheme>
                      <scheme-name>config-part</scheme-name>
                      <front-scheme>
                            <local-scheme>
                                 <expiry-delay>15s</expiry-delay>
                            </local-scheme>
                      </front-scheme>
                      <back-scheme>
                            <distributed-scheme>
                                  <scheme-ref>config-part-distributed</scheme-ref>
                            </distributed-scheme>
                      </back-scheme>
                <autostart>true</autostart>
                </near-scheme>
                <distributed-scheme>
                      <scheme-name>config-part-distributed</scheme-name>
                      <service-name>partDistributedCacheService</service-name>
                      <thread-count>10</thread-count>
                      <backing-map-scheme>
                            <read-write-backing-map-scheme>
                                  <read-only>true</read-only>
                                  <scheme-name>partStatusScheme</scheme-name>
                                  <internal-cache-scheme>
                                        <local-scheme>
                                              <scheme-name>part-eviction</scheme-name>
                                              <expiry-delay>30s</expiry-delay>
                                        </local-scheme>
                                  </internal-cache-scheme>
                                  <cachestore-scheme>
                                        <class-scheme>
                                              <class-name>net.jakeri.test.PingCacheLoader</class-name>
                                        </class-scheme>
                                  </cachestore-scheme>
                                  <refresh-ahead-factor>0.5</refresh-ahead-factor>
                            </read-write-backing-map-scheme>
                      </backing-map-scheme>
                      <autostart>true</autostart>
                      <local-storage system-property="tangosol.coherence.config.distributed.localstorage">true</local-storage>
                </distributed-scheme>

    Hi Jakkke,
    The Near Cache scheme allows you to have configurable levels of cache coherency, from the most basic expiry-based cache to an invalidation-based cache to a data-versioning cache, depending on the coherency requirements. The Near Cache is commonly used to achieve the read performance of a replicated cache without losing the scalability of a partitioned cache; this is achieved by keeping a subset of the data (based on MRU or MFU) in the <front-scheme> of the near cache and the complete set of data in the <back-scheme>. Updates to the <back-scheme> can automatically trigger events to invalidate the entries in the <front-scheme>, based on the invalidation strategy (present, all, none, auto) configured for the near cache.
    If you want to expire the entries in both the <front-scheme> and the <back-scheme>, you need to specify an expiry-delay on both schemes, as you did in your last example. If you are expiring the items in the <back-scheme> only so that they get reloaded from the cache store while the <front-scheme> keys stay the same (only the values should be refreshed), then you need not set an expiry-delay on the <front-scheme>; instead, set the invalidation strategy to present. But if you want a different set of entries in the <front-scheme> after a specified expiry delay, then you need to configure it on the <front-scheme> as well.
    The near cache can keep front-scheme and back-scheme data in sync, but the expiry of entries is not synced. The front scheme always holds a subset of the back scheme.
    Hope this helps!
    Cheers,
    NJ
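    To see the eviction behaviour NJ describes, one can peek at the front map directly from client code. Below is a rough probe, not from the original thread: the cache name "config-part-test" and key "part-1" are made up, and it assumes the first (no front-scheme expiry) configuration above.
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.net.cache.NearCache;

    public class FrontMapProbe {
        public static void main(String[] args) throws InterruptedException {
            NamedCache cache = CacheFactory.getCache("config-part-test"); // hypothetical name
            Object key = "part-1";                                        // hypothetical key
            cache.get(key); // read-through pulls the value into the front map
            NearCache near = (NearCache) cache;
            System.out.println("front map after get:    " + near.getFrontMap().keySet());
            Thread.sleep(35000L); // wait past the 30s back-scheme expiry-delay
            // Without a front-scheme expiry-delay, the stale entry can only leave
            // the front map if the back map emits an event for its key.
            System.out.println("front map after expiry: " + near.getFrontMap().keySet());
            CacheFactory.shutdown();
        }
    }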

  • Unable to get refresh-ahead functionality working

    We are attempting to implement a cache with refresh-ahead so that the calling code never experiences any latency due to a synchronous load from the cache loader, which happens when requesting an object that has expired (so just standard refresh-ahead as documented). I can't seem to get the configuration to recognize the refresh-ahead setting at all - it behaves the same way regardless of whether or not the <refresh-ahead-factor> setting is included, and never does an asynchronous load. All the other settings in the configuration work as expected. I have included the configuration, what I expect the behaviour to be, and what it currently is.
    Is there anything else in other configuration files, etc. that needs to be set up to enable asynchronous loads via the refresh-ahead-factor? Note I have tried upping the worker threads via the thread-count in tangosol-coherence-override.xml (no difference).
    One thing I couldn't seem to find in the documents is when the asynchronous load will occur. I assume it is near real time after a triggering 'get' past the refresh-ahead threshold, but if not, that could explain the behaviour. Is there some setting that controls this, or does it happen as soon as a worker thread gets to it?
    In my tests I am running a single cluster node on my local machine (version 3.3 Grid Edition in Development mode). Note the 20-second expiry and 0.5 refresh-ahead factor are just for test purposes to see the behaviour; in production it will be 12 hours with a higher refresh factor such as 0.75.
    Current behaviour (appears to be standard read-through, ignoring the refresh-ahead-factor):
    - a request for an object that does not exist in the cache blocks the calling code until the object is loaded and put into the cache via the cache loader
    - a request for an object that exists, made before the expiry period has passed (before 20 seconds since load in this configuration), returns the object from the cache
    - a request for an object that exists, made after the expiry period has passed (after 20 seconds since load in this configuration), blocks the calling code until the object is loaded and put into the cache via the cache loader
    - no requests ever appear to trigger asynchronous loads via the cache loader
    Expected behaviour, given the 0.5 refresh-ahead-factor:
    - a request for an object that does not exist in the cache blocks the calling code until the object is loaded and put into the cache via the cache loader (same as above)
    - a request for an object that exists, made before the expiry period has passed, returns the object from the cache, and triggers an asynchronous reload of the object via the cache loader if requested after the refresh-ahead threshold. So in this example I would expect that if I requested an object out of the cache between 10 and 20 seconds after it was loaded, it would be returned from the cache immediately and an asynchronous reload of the object via the cache loader would be triggered.
    - a request for an object that exists in the cache but has not been requested during the 10-20 second window blocks the calling code until it is loaded and put into the cache via the cache loader
    Here is the entry from our coherence-cache-config.xml
    <distributed-scheme>
        <scheme-name>rankedcategories-cache-all-scheme</scheme-name>
        <service-name>DistributedCache</service-name>
        <backing-map-scheme>
            <read-write-backing-map-scheme>
                <scheme-name>rankedcategoriesLoaderScheme</scheme-name>
                <internal-cache-scheme>
                    <local-scheme>
                        <scheme-ref>rankedcategories-eviction</scheme-ref>
                    </local-scheme>
                </internal-cache-scheme>
                <cachestore-scheme>
                    <class-scheme>
                        <class-name>com.abe.cache.coherence.rankedcategories.RankedCategoryCacheLoader</class-name>
                    </class-scheme>
                </cachestore-scheme>
                <refresh-ahead-factor>0.5</refresh-ahead-factor>
            </read-write-backing-map-scheme>
        </backing-map-scheme>
        <autostart>true</autostart>
    </distributed-scheme>
    <local-scheme>
        <scheme-name>rankedcategories-eviction</scheme-name>
        <expiry-delay>20s</expiry-delay>
    </local-scheme>

    Hi Leonid,
    Yes, it works as expected. Refresh-ahead works approximately like this:
    public Object get(Object oKey) {
        Entry entry = getEntry(oKey);
        if (entry != null && entry is about to expire) {
            schedule refresh();
        }
        return entry.getValue();
    }
    If you want to reload infrequently accessed entries, you can do something like this (obviously this will not work if the cache is size-limited):
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
        <caching-scheme-mapping>
            <cache-mapping>
                <cache-name>fdg*</cache-name>
                <scheme-name>fdg-cache-all-scheme</scheme-name>
            </cache-mapping>
        </caching-scheme-mapping>
        <caching-schemes>
            <distributed-scheme>
                <scheme-name>fdg-cache-all-scheme</scheme-name>
                <service-name>DistributedCache</service-name>
                <backing-map-scheme>
                    <!--
                    Read-write-backing-map caching scheme.
                    -->
                    <read-write-backing-map-scheme>
                        <scheme-name>categoriesLoaderScheme</scheme-name>
                        <internal-cache-scheme>
                            <local-scheme>
                                <expiry-delay>10s</expiry-delay>
                                <flush-delay>2s</flush-delay>
                                <listener>
                                        <class-scheme>
                                            <class-name>com.sgcib.fundingplatform.coherence.ReloadListener</class-name>
                                            <init-params>
                                                <init-param>
                                                    <param-type>string</param-type>
                                                    <param-value>{cache-name}</param-value>
                                                </init-param>
                                                <init-param>
                                                    <param-type>com.tangosol.net.BackingMapManagerContext</param-type>
                                                    <param-value>{manager-context}</param-value>
                                                </init-param>
                                            </init-params>
                                        </class-scheme>
                                </listener>
                            </local-scheme>
                        </internal-cache-scheme>
                        <cachestore-scheme>
                            <class-scheme>
                                <class-name>com.sgcib.fundingplatform.coherence.DBCacheStore</class-name>
                            </class-scheme>
                        </cachestore-scheme>
                        <refresh-ahead-factor>0.5</refresh-ahead-factor>
                    </read-write-backing-map-scheme>
                </backing-map-scheme>
                <autostart>true</autostart>
            </distributed-scheme>
            <local-scheme>
                <scheme-name>categories-eviction</scheme-name>
                <expiry-delay>10s</expiry-delay>
                <flush-delay>2s</flush-delay>
            </local-scheme>
        </caching-schemes>
    </cache-config>
    package com.sgcib.fundingplatform.coherence;
    import com.tangosol.net.BackingMapManagerContext;
    import com.tangosol.net.DefaultConfigurableCacheFactory;
    import com.tangosol.net.cache.CacheEvent;
    import com.tangosol.util.MapEvent;
    import com.tangosol.util.MultiplexingMapListener;
    import java.util.Map;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ThreadFactory;
    /**
     * @author dimitri  Nov 26, 2008
     */
    public class ReloadListener extends MultiplexingMapListener {
        String                                  m_sCacheName;
        DefaultConfigurableCacheFactory.Manager m_bmmManager;
        ExecutorService m_executorService = Executors.newSingleThreadExecutor(new ThreadFactory() {
            public Thread newThread(Runnable runnable) {
                Thread thread = new Thread(runnable);
                thread.setDaemon(true);
                return thread;
            }
        });

        public ReloadListener(String sCacheName, BackingMapManagerContext ctx) {
            m_sCacheName = sCacheName;
            m_bmmManager = (DefaultConfigurableCacheFactory.Manager) ctx.getManager();
        }

        protected void onMapEvent(MapEvent evt) {
            // react only to synthetic deletes, i.e. entries removed by expiry
            if (evt.getId() == MapEvent.ENTRY_DELETED && ((CacheEvent) evt).isSynthetic()) {
                m_executorService.execute(
                    new ReloadRequest(m_bmmManager.getBackingMap(m_sCacheName), evt.getKey()));
            }
        }

        public void finalize() {
            m_executorService.shutdownNow();
        }

        class ReloadRequest implements Runnable {
            Map    m_map;
            Object m_oKey;

            public ReloadRequest(Map map, Object oKey) {
                m_map  = map;
                m_oKey = oKey;
            }

            public void run() {
                // a get against the backing map triggers a read-through reload
                m_map.get(m_oKey);
            }
        }
    }
    package com.sgcib.fundingplatform.coherence;
    import java.util.Collection;
    import java.util.Map;
    import java.util.Date;
    import com.tangosol.net.cache.CacheStore;
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.util.Base;
    public class DBCacheStore extends Base implements CacheStore {
        /**
         * Return the value associated with the specified key, or null if the
         * key does not have an associated value in the underlying store.
         *
         * @param oKey key whose associated value is to be returned
         * @return the value associated with the specified key, or
         *         null if no value is available for that key
         */
        public Object load(Object oKey) {
            CacheFactory.log("load(" + oKey + ") invoked on " + Thread.currentThread().getName(),
                CacheFactory.LOG_DEBUG);
            return new Date().toString();
        }

        /**
         * Return the values associated with each of the specified keys in the
         * passed collection. If a key does not have an associated value in
         * the underlying store, then the returned map will not have an entry
         * for that key.
         *
         * @param colKeys a collection of keys to load
         * @return a Map of keys to associated values for the specified keys
         */
        public Map loadAll(Collection colKeys) {
            throw new UnsupportedOperationException();
        }

        public void erase(Object oKey) {
            // no-op: this test store is read-only
        }

        public void eraseAll(Collection colKeys) {
            // no-op: this test store is read-only
        }

        public void store(Object oKey, Object oValue) {
            // no-op: this test store is read-only
        }

        public void storeAll(Map mapEntries) {
            // no-op: this test store is read-only
        }

        // Test harness
        public static void main(String[] asArg) {
            try {
                NamedCache cache = CacheFactory.getCache("fdg-test");
                cache.get("bar"); // requested only once; from then on the
                                  // listener keeps reloading it
                while (true) {
                    CacheFactory.log("foo= " + cache.get("foo"), CacheFactory.LOG_DEBUG);
                    Thread.sleep(1000l);
                }
            } catch (Throwable oops) {
                err(oops);
            } finally {
                CacheFactory.shutdown();
            }
        }
    }
    Regards,
    Dimitri

  • Periodic refresh of cache

    Hi
    Will the configuration below ensure that the distributed cache (internal local cache scheme) is refreshed at regular intervals, given that high-units and low-units are set to zero?
    The use case is to expire all entries in the cache at regular intervals and reload them from the cache store.
    <distributed-scheme>
        <scheme-name>example-distributed</scheme-name>
        <service-name>DistributedCache1</service-name>
        <backing-map-scheme>
            <read-write-backing-map-scheme>
                <scheme-name>DBCacheLoaderScheme</scheme-name>
                <internal-cache-scheme>
                    <local-scheme>
                        <high-units>0</high-units>
                        <low-units>0</low-units>
                        <expiry-delay>60s</expiry-delay>
                    </local-scheme>
                </internal-cache-scheme>
                <cachestore-scheme>
                    <class-scheme>
                        <class-name>com.test.DBCacheStore</class-name>
                        <init-params>
                            <init-param>
                                <param-type>java.lang.String</param-type>
                                <param-value>locations</param-value>
                            </init-param>
                            <init-param>
                                <param-type>java.lang.String</param-type>
                                <param-value>{cache-name}</param-value>
                            </init-param>
                        </init-params>
                    </class-scheme>
                </cachestore-scheme>
                <read-only>true</read-only>
                <refresh-ahead-factor>0.5</refresh-ahead-factor>
            </read-write-backing-map-scheme>
        </backing-map-scheme>
        <thread-count>10</thread-count>
        <autostart>true</autostart>
    </distributed-scheme>
    With the above config I see that all objects are being expired at 60s (expiry-delay) * 0.5 (refresh-ahead-factor) = 30 secs, as low-units and high-units are set to zero. Please clarify whether this configuration works in all cases.
    Thanks
    sunder

    Hi sunder,
    refresh-ahead does not on its own guarantee a refresh.
    Refresh-ahead is only ever consulted if there was a get()-like operation (a read operation specifically directed against the entry; queries are not suitable for triggering refresh-ahead) against the entry within the interval designated by the refresh-ahead-factor.
    Let's suppose you have refresh-ahead set to 0.25 and expiry-delay set to 1 minute. If there is a get against the entry in the last 15 seconds (= 0.25 * 1 minute) before it expires, then there is going to be an asynchronous fetch of the entry from the cache store. This means that the triggering read will see the old value, but once the asynchronous fetch completes, it will replace the old value with the newly fetched value in the cache.
    But if there are no triggering reads, then the entry will simply expire instead of being refreshed.
    Best regards,
    Robert
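    In other words, the refresh window sits at the tail end of the expiry interval; only reads landing inside it schedule the asynchronous reload. A trivial sketch of the arithmetic from the answer above:
    public class RefreshWindow {
        public static void main(String[] args) {
            long expiryMillis = 60 * 1000L; // expiry-delay of 1 minute
            double factor     = 0.25;       // refresh-ahead-factor
            long windowStart  = (long) (expiryMillis * (1 - factor));
            System.out.println("refresh-triggering gets: between " + windowStart
                    + " ms and " + expiryMillis + " ms after the entry was loaded");
            // prints: between 45000 ms and 60000 ms after the entry was loaded
        }
    }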

  • How to create a cache for JPA Entities using an EJB

    Hello everybody! I have recently gotten started with JPA 2.0 (I use EclipseLink) and EJB 3.1, and I am having trouble figuring out how best to implement a cache for my JPA entities using an EJB.
    In the following I try to describe my problem. I know it is a bit verbose, but I hope somebody will help me. (I highlighted the core of my problem in bold, in case you want to first decide whether you can/want to help and only then spend another couple of minutes understanding the domain.)
    I have the following JPA Entities:
    @Entity
    class Genre {
        private String name;
        @OneToMany(mappedBy = "genre", cascade = {CascadeType.MERGE, CascadeType.PERSIST})
        private Collection<Novel> novels;
    }

    @Entity
    class Novel {
        @ManyToOne(cascade = {CascadeType.MERGE, CascadeType.PERSIST})
        private Genre genre;
        private String titleUnique;
        @OneToMany(mappedBy = "novel", cascade = {CascadeType.MERGE, CascadeType.PERSIST})
        private Collection<NovelEdition> editions;
    }

    @Entity
    class NovelEdition {
        private String publisherNameUnique;
        private String year;
        @ManyToOne(optional = false, cascade = {CascadeType.PERSIST, CascadeType.MERGE})
        private Novel novel;
        @ManyToOne(optional = false, cascade = {CascadeType.MERGE, CascadeType.PERSIST})
        private Catalog appearsInCatalog;
    }

    @Entity
    class Catalog {
        private String name;
        @OneToMany(mappedBy = "appearsInCatalog", cascade = {CascadeType.MERGE, CascadeType.PERSIST})
        private Collection<NovelEdition> novelsInCatalog;
    }
    The idea is to have several Novels, each belonging to a specific Genre, for which more than one edition can exist (different publisher, year, etc.). For simplicity, a NovelEdition can belong to just one Catalog, with such a Catalog represented by a text file like:
    FILE 1:
    Catalog: Name Of Catalog 1
    "Title of Novel 1", "Genre1 name", "Publisher1 Name", 2009
    "Title of Novel 2", "Genre1 name", "Publisher2 Name", 2010
    FILE 2:
    Catalog: Name Of Catalog 2
    "Title of Novel 1", "Genre1 name", "Publisher2 Name", 2011
    "Title of Novel 2", "Genre1 name", "Publisher1 Name", 2011
    Each entity has an associated Stateless EJB that acts as a DAO, using a transaction-scoped EntityManager. For example:
    @Stateless
    public class NovelDAO extends AbstractDAO<Novel> {
        @PersistenceContext(unitName = "XXX")
        private EntityManager em;

        protected EntityManager getEntityManager() {
            return em;
        }

        public NovelDAO() {
            super(Novel.class);
        }

        //NovelDAO specific methods
    }
    I am interested in the moment when the catalog files are parsed and the corresponding entities are built (I usually read a whole batch of catalogs at a time).
    Since the parsing is a string-driven procedure, I don't want to repeat actions like novelDAO.getByName("Title of Novel 1"), so I would like to use a centralized cache for mappings of type string-identifier -> entity object.
    Currently I use 3 objects:
    1) The file parser, which does something like:
    final CatalogBuilder catalogBuilder = ...; //JNDI lookup
    //for each file:
        String catalogName = parseCatalogName(file);
        catalogBuilder.setCatalogName(catalogName);
        //for each novel edition:
            String title = parseNovelTitle();
            String genre = parseGenre();
            catalogBuilder.addNovelEdition(title, genre, publisher, year);
        //end foreach
    catalogBuilder.build();
    2) The CatalogBuilder is a Stateful EJB which uses the Cache, gets re-initialized every time a new catalog file is parsed, and gets "removed" after a catalog is persisted.
    @Stateful
    public class CatalogBuilder {
        @PersistenceContext(unitName = "XXX", type = PersistenceContextType.EXTENDED)
        private EntityManager em;
        @EJB
        private Cache cache;
        private Catalog catalog;

        @PostConstruct
        public void initialize() {
            catalog = new Catalog();
            catalog.setNovelsInCatalog(new ArrayList<NovelEdition>());
        }

        public void addNovelEdition(String title, String genreStr, String publisher, String year) {
            Genre genre = cache.findGenreCreateIfAbsent(genreStr); //##
            Novel novel = cache.findNovelCreateIfAbsent(title, genre); //##
            NovelEdition novEd = new NovelEdition();
            novEd.setNovel(novel);
            //novEd.set publisher, year, catalog
            catalog.getNovelsInCatalog().add(novEd);
        }

        public void setCatalogName(String name) {
            catalog.setName(name);
        }

        @Remove
        public void build() {
            em.merge(catalog);
        }
    }
    3) Finally, the problematic bean: Cache. For CatalogBuilder I used an EXTENDED persistence context (which I need, as the parser executes several successive transactions) together with a Stateful EJB; but in this case I am not really sure what I need. In fact, the cache:
    Should stay in memory until the parser is finished with its job, but no longer (it should not be a singleton), as the parsing is a very particular activity which happens rarely.
    Should keep all of the entities in context, and should return managed entities from the methods marked with ##; otherwise the attempt to persist the catalog will fail (duplicated INSERTs).
    Should use the same persistence context as the CatalogBuilder.
    What I have now is:
    @Stateful
    public class Cache {
        @PersistenceContext(unitName = "XXX", type = PersistenceContextType.EXTENDED)
        private EntityManager em;
        @EJB
        private sessionbean.GenreDAO genreDAO;
        //DAOs for other cached entities

        Map<String, Genre> genreName2Object = new TreeMap<String, Genre>();

        @PostConstruct
        public void initialize() {
            for (Genre g : genreDAO.findAll()) {
                genreName2Object.put(g.getName(), em.merge(g));
            }
        }

        public Genre findGenreCreateIfAbsent(String genreName) {
            if (genreName2Object.containsKey(genreName)) {
                return genreName2Object.get(genreName);
            }
            Genre g = new Genre();
            g.setName(genreName);
            g.setNovels(new ArrayList<Novel>());
            genreDAO.persist(g);
            genreName2Object.put(g.getName(), em.merge(g));
            return g;
        }
    }
    But honestly I couldn't find a solution which satisfies these 3 points at the same time. For example, using another stateful bean with an extended persistence context (PC) would work for the 1st parsed file, but I have no idea what should happen from the 2nd file on. Indeed, for the 1st file the PC will be created and propagated from CatalogBuilder to Cache, which will then use the same PC. But after build() returns, the PC of CatalogBuilder should (I guess) be removed and re-created during the successive parsing, although the PC of Cache should stay "alive": shouldn't an exception be thrown in this case? Another problem is what to do when the Cache bean is passivated. Currently I get the exception:
    "passivateEJB(), Exception caught ->
    java.io.IOException: java.io.IOException
    at com.sun.ejb.base.io.IOUtils.serializeObject(IOUtils.java:101)
    at com.sun.ejb.containers.util.cache.LruSessionCache.saveStateToStore(LruSessionCache.java:501)"
    Hence, I have no idea how to implement my cache. Can you please tell me how you would solve the problem?
    Many thanks!
    Bye

    Hi Chris,
    thanks for your reply!
    I've tried to add the following to persistence.xml (although I've read that EclipseLink uses the L2 cache by default):
    <shared-cache-mode>ALL</shared-cache-mode>
    Then I replaced the Cache bean with a stateless bean which has methods like:
    Genre findGenreCreateIfAbsent(String genreName) {
        Genre genre = genreDAO.findByName(genreName);
        if (genre != null) {
            return genre;
        }
        genre = ...; //build a new genre object
        genreDAO.persist(genre);
        return genre;
    }
    As far as I understood, the shared cache should automatically store the genre and avoid querying the DB multiple times for the same genre, but unfortunately this is not the case: if I use a FINE logging level, I see really a lot of SELECT queries, which I didn't see with my "home made" Cache...
    I am really confused.. :(
    Thanks again for helping + bye
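    One way to narrow this down is to ask the provider, via the standard JPA 2.0 Cache API, whether the shared cache actually holds the entity after the first read. A small sketch, assuming the persistence unit "XXX" from the posts above; the getId() accessor on Genre is hypothetical, since the entity snippets don't show an id field.
    import javax.persistence.Cache;
    import javax.persistence.EntityManager;
    import javax.persistence.EntityManagerFactory;
    import javax.persistence.Persistence;

    public class SharedCacheCheck {
        public static void main(String[] args) {
            EntityManagerFactory emf = Persistence.createEntityManagerFactory("XXX");
            EntityManager em = emf.createEntityManager();
            try {
                // Load one Genre so the provider has a chance to cache it.
                Genre genre = (Genre) em.createQuery("select g from Genre g")
                                        .setMaxResults(1)
                                        .getSingleResult();
                // The Cache handle reports on the shared (L2) cache.
                Cache cache = emf.getCache();
                System.out.println("L2 cache holds the genre: "
                        + cache.contains(Genre.class, genre.getId())); // getId() is hypothetical
            } finally {
                em.close();
                emf.close();
            }
        }
    }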

  • Event-driven refresh of cached shared datasets

    We have a growing business intelligence solution deployed on SQL Server 2008 R2. All reports are in SSRS and are executed via web browser by the end-users throughout the day.
    We have been investigating how to improve performance, and so far it appears that converting report datasets to shared datasets and then caching those shared datasets provides the greatest opportunity. (We can't snapshot or cache the individual reports themselves as they all contain customized content based upon the end user's roles and permissions.)
    Creation of the shared datasets and setting them to be cached is easy, and we're seeing really nice performance improvements.
    However, we haven't been able to find a way to refresh the cached data without using the SSRS calendar-based schedule. Our ETL process completes anywhere from 8 am to sometime after noon depending on data availability from source systems, so we can't just pick a point in time to refresh the cached data.
    Ideally we'd like to add commands to the end of the SSIS workstream that loads the database, and have these commands expire the existing cached datasets and refresh their data.
    We've been able to use the SetCacheOptions stored procedure to expire the cached datasets, ensuring that the next run of the top-level report will pull new data - but the performance hit on that first execution is significant. We'd rather have the caches refresh systematically when ETL completes so that our end users always see the fastest performance with the most current data.

    Hi GottaLoveSQL,
    As we know, there is no explicit setting for conditionally expiring the report cache. To work around this issue, you can use a stored procedure for the report dataset and create a parameter you can pass from the report to indicate that the report instance cache should be explicitly flushed. Then, in the stored procedure, check this parameter and expire the cache using sp_start_job. After that, in the SSIS package which performs the ETL process, add a SQL Task which starts a subscription job that will reload the dataset cache. For more information, please see:
    http://social.technet.microsoft.com/Forums/en-US/sqlreportingservices/thread/eb0dd2a6-9d2f-4430-9e88-020377e4fafd.
    Regards,
    Mike Yin
    TechNet Community Support

  • Local Cache with write-behind backing map

    Hi there,
    I am a Coherence newb, so I hope my question isn't too naive. I have been experimenting with Coherence using a write-behind JPA backing map, but I have only been able to make it work with a distributed cache. Because of my specific database RAC architecture, I need to ensure that entries written to the database from a given WLS node are restricted to a specific RAC node. In my view, using a local cache rather than a distributed cache should solve my problem, although if there is a way of configuring my cache so this works I'd appreciate the info.
    So, the short form of the question: can I back a local cache with a write-behind JPA map?
    Cheers,
    Ron

    Hi Ron,
    The configuration for <local-scheme> allows you to add a cache store but you cannot use write-behind, only write-through.
    Presumably you do not want the data to be shared by the different WLS nodes, i.e. if one node puts data in the cache and that is eventually written to a RAC node, that data cannot be seen in the cache by other WLS nodes. Or do you want all the WLS nodes to share the data but just write it to different RAC nodes?
    If you use a local-scheme then the data will only be local to that WLS node and not shared.
    I can think of a possible way to do what you want but it depends on the answer to the above question.
    JK

  • Refresh program cache

    Hi,
    I need to refresh a program's cache.
    My problem is:
    - run the program
    - call a FM (function module)
    - change data
    - call the FM again to refresh the data
    The FM still gets the old data.
    How can I solve this?

    This is the full source code...
    FORM bom_carica_modello USING par_matnr
                                  par_stlal
                                  par_stlan
                                  par_werks
                         CHANGING par_tab LIKE it_tab[]
                                  par_wa  LIKE wa_tab.

      DATA: it_bom_item OCCURS 0 WITH HEADER LINE.
      REFRESH it_bom_item.

      DATA: BEGIN OF it_diba OCCURS 0.
              INCLUDE STRUCTURE stpox.
      DATA: END   OF it_diba.
      REFRESH it_diba.

      CALL FUNCTION 'CS_BOM_EXPL_MAT_V2'
        EXPORTING
          datuv                 = p_datuv
          capid                 = 'PP01'
          mehrs                 = ' '
          mtnrv                 = par_matnr
          stlal                 = par_stlal
          stlan                 = par_stlan
          werks                 = par_werks
        TABLES
          stb                   = it_diba
    *     matcat                =
        EXCEPTIONS
          alt_not_found         = 1
          call_invalid          = 2
          material_not_found    = 3
          missing_authorization = 4
          no_bom_found          = 5
          no_plant_data         = 6
          no_suitable_bom_found = 7
          conversion_error      = 8
          OTHERS                = 9.

      IF sy-subrc <> 0.
        MESSAGE ID 'ZM' TYPE 'S' NUMBER 099
              WITH 'Alternativa richiesta come modello non valida'.
      ENDIF.

      REFRESH par_tab.
      CLEAR par_wa.

      LOOP AT it_diba.
        IF it_diba-datub GE p_datub.
    *     Drop the expired items ("Cancella le posizioni scadute")
          CLEAR: par_wa.
          MOVE-CORRESPONDING it_diba TO par_wa.
          MOVE it_diba-meins TO par_wa-meins.
    *     Unit of measure conversion ("Conversione UM")
          CALL FUNCTION 'CONVERSION_EXIT_RUNIT_OUTPUT'
            EXPORTING
              input  = it_diba-meins
    *         language = sy-langu
            IMPORTING
              output = par_wa-meins.
          APPEND par_wa TO par_tab.
        ENDIF.
      ENDLOOP.
    ENDFORM.                    " carica_modello
    I have also added a WAIT command and a COMMIT WORK... but the problem is still the same...

  • About Leveraging database security with JPA...

    I've googled the web but haven't found anything about treating security as an aspect of development with JPA and TopLink Essentials, even though you can integrate VPD with TopLink... http://www.oracle.com/technology/products/ias/toplink/doc/1013/main/_html/dblgcfg008.htm
    What would be the best way to:
    1- Track a user's specific behavior
    2- Implement fine-grained access control
    from the database?
    Even if it's not in the spec... what do you think could be a design pattern for leveraging these Oracle database features?
    Best Regards,
    -Gregory

    Gregory,
    Using VPD with an ORM solution involves two pieces of functionality:
    1. An isolated cache, so that entities read from a table using VPD cannot accidentally be accessed by other application threads. TopLink Essentials does support this through its JPA extensions:
    http://www.oracle.com/technology/products/ias/toplink/jpa/resources/toplink-jpa-extensions.html
    2. An approach for configuring the user credentials in the connections. Oracle TopLink provides exclusive connections with event call-backs for this as well as proxy authentication support. We do not currently support these options within TopLink Essentials.
    To address #2 using JPA and TopLink Essentials, I would need to know more about your architecture. Assuming you are using JPA in EE with session beans and JTA transactions, you could look up the JDBC connection directly from the container within your transaction (prior to your first JPA query) and invoke your VPD user-configuration stored procedure.
    If you would like to work through the specifics of your requirements and then post the final solution back here you can contact me directly: douglas.clarke at oracle.com.
    Doug
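    To make the second point concrete, the callout might look roughly like the sketch below; this is an illustration, not TopLink's API. The datasource JNDI name "jdbc/AppDS" and the procedure app_security.set_user_context are invented. It would run inside the JTA transaction before the first JPA query; note Doug's caveat that without an exclusive connection, a pooled session's VPD context can leak to other users.
    import java.sql.CallableStatement;
    import java.sql.Connection;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;

    public class VpdContextSetter {
        public void applyVpdContext(String userName) throws Exception {
            // Both names below are placeholders for your own datasource and procedure.
            DataSource ds = (DataSource) new InitialContext().lookup("jdbc/AppDS");
            Connection con = ds.getConnection(); // enlisted in the current JTA transaction
            try {
                CallableStatement cs =
                    con.prepareCall("{ call app_security.set_user_context(?) }");
                try {
                    cs.setString(1, userName);
                    cs.execute();
                } finally {
                    cs.close();
                }
            } finally {
                con.close(); // returns the handle to the pool
            }
        }
    }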

  • How to use Oracle partitioning with JPA @OneToOne reference?

    Hi!
    A little bit late in the project we have realized that we need to use Oracle partitioning both for performance and admin of the data. (Partitioning by range (month) and after a year we will move the oldest month of data to an archive db)
    We have an object model with an main/root entity "Trans" with @OneToMany and @OneToOne relationships.
    How do we use Oracle partitioning on the @OneToOne relationships?
    (We'd rather not change the model as we already have millions of rows in the db.)
    On the main entity "Trans" we use: partition by range (month) on a date column.
    And on all @OneToMany we use: partition by reference (as they have a primary-foreign key relationship).
    But for the @OneToOne key for the referenced object, the key is placed in the main/source object as the example below:
    @Entity
    public class Employee {
        @Id
        @Column(name="EMP_ID")
        private long id;
        @OneToOne(fetch=FetchType.LAZY)
        @JoinColumn(name="ADDRESS_ID")
        private Address address;
    }
    EMPLOYEE (table)
    EMP_ID  FIRSTNAME  LASTNAME  SALARY  ADDRESS_ID
    1       Bob        Way       50000   6
    2       Sarah      Smith     60000   7
    ADDRESS (table)
    ADDRESS_ID  STREET      CITY     PROVINCE  COUNTRY  P_CODE
    6           17 Bank St  Ottawa   ON        Canada   K2H7Z5
    7           22 Main St  Toronto  ON        Canada   L5H2D5
    From the Oracle documentation: "Reference partitioning allows the partitioning of two tables related to one another by referential constraints. The partitioning key is resolved through an existing parent-child relationship, enforced by enabled and active primary key and foreign key constraints."
    How can we use "partition by reference" on @OneToOne relationsships or are there other solutions?
    Thanks for any advice.
    /Mats

    Crosspost! How to use Oracle partitioning with JPA @OneToOne reference?

  • How do I refresh a table with a bind variable using a return listener?

    JDev 11.1.2.1.0.
    I am trying to refresh a table with a bind variable after a record is added.
    The main page has a button which, on click, calls a task flow as an inline document. This popup task flow allows the user to insert a record. It has its own transaction and does not share data controls.
    Upon task flow return, the calling button's return dialog listener is invoked which should allow the user to see the newly created item in the table. The returnListener code:
        // retrieve the bind variable and clear it of any values used to filter the table results
        BindingContainer bindings = ADFUtils.getBindings();
        AttributeBinding attr = (AttributeBinding)bindings.getControlBinding("pBpKey");
        attr.setInputValue("");
        // execute the table so it returns all rows
        OperationBinding operationBinding = bindings.getOperationBinding("ExecuteWithParams");
        operationBinding.execute();
        // set the table's iterator to the newly created row
        DCIteratorBinding iter = (DCIteratorBinding) bindings.get("AllCustomersIterator");
        Object customerId = AdfFacesContext.getCurrentInstance().getPageFlowScope().get("newCustomerId");
        iter.setCurrentRowWithKeyValue((String)customerId);
        // refresh the page
        AdfFacesContext.getCurrentInstance().addPartialTarget(this.getFilterText());
        AdfFacesContext.getCurrentInstance().addPartialTarget(this.getCustomerTable());
    But the table does not refresh... The bind variable's inputText component is empty. The table flickers as if it updates, but no new values are displayed, just the ones that were previously filtered or shown.
    I can do the EXACT SAME code in a button's actionListener that I click manually and the table will refresh fine. I'm really confused and have spent almost all day on this problem.
    Will

    Both options invoke the create new record task flow. The first method runs the "reset" code shown above through the calling button's returnListener once the task flow is complete. The second method is simply a button which, after the new record is added and the task flow returns, runs the "reset" code by my clicking it manually.
    I'm thinking that the returnListener code runs before some kind of automatic ppr happens on the table. I think this because the table contents flicker to show all customers (like I intend) but then goes back to displaying the restricted contents a split second later.
    Yes, the table is in the page that invokes the taskflow.
    Here are some pictures:
    http://williverstravels.com/JDev/Forums/Threads/2337410/Step1.jpg
    http://williverstravels.com/JDev/Forums/Threads/2337410/Step2.jpg
    http://williverstravels.com/JDev/Forums/Threads/2337410/Step3.jpg
    Step 1 - invoke the new record task flow
    Step 2 - enter data and click Finish
    Step 3 - bind parameter / table filter cleared. Table flickers with all values. Table reverts to the previously filtered values.

  • Using a partitioned cache with off-heap storage for backup data

    Hi,
    Is it possible to define a partitioned cache (with data on the heap) with off-heap storage for backup data?
    I think it could be worthwhile to do so, as backup data is associated with a different access pattern.
    If so, what are the impacts of such off-heap storage for backup data?
    In particular, what are the impacts on performance?
    Thanks.
    Regards,
    Dominique

    Hi,
    It seems that using a scheme for the backup-store is broken in the latest version of Coherence; I got an exception with your setup.
    2010-07-24 12:21:16.562/7.969 Oracle Coherence GE 3.6.0.0 <Error> (thread=DistributedCache, member=1): java.lang.NullPointerException
         at com.tangosol.net.DefaultConfigurableCacheFactory.findSchemeMapping(DefaultConfigurableCacheFactory.java:466)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage$BackingManager.isPartitioned(PartitionedCache.java:10)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage.instantiateBackupMap(PartitionedCache.java:24)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage.setCacheName(PartitionedCache.java:29)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$ServiceConfig$ConfigListener.entryInserted(PartitionedCache.java:17)
         at com.tangosol.util.MapEvent.dispatch(MapEvent.java:266)
         at com.tangosol.util.MapEvent.dispatch(MapEvent.java:226)
         at com.tangosol.util.MapListenerSupport.fireEvent(MapListenerSupport.java:556)
         at com.tangosol.util.ObservableHashMap.dispatchEvent(ObservableHashMap.java:229)
         at com.tangosol.util.ObservableHashMap$Entry.onAdd(ObservableHashMap.java:270)
         at com.tangosol.util.SafeHashMap.put(SafeHashMap.java:244)
         at com.tangosol.coherence.component.util.ServiceConfig$Map.put(ServiceConfig.java:43)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$StorageIdRequest.onReceived(PartitionedCache.java:45)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.java:11)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.java:33)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService.onNotify(PartitionedService.java:3)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onNotify(PartitionedCache.java:3)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.java:42)
         at java.lang.Thread.run(Thread.java:619)
    Tracing in the debugger has shown that the problem is in the PartitionedCache$Storage#setCacheName(String) method: it calls instantiateBackingMap(String) before setting the __m_CacheName field.
    It is broken in 3.6.0b17229
    PS: using an asynchronous wrapper around the disk-based backup storage should reduce the performance impact.
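    Spelling the PS out: the idea is to wrap the disk-based store so that backup writes happen off the service thread. A rough sketch against Coherence's BinaryStore interface follows; Coherence also ships an async-store-manager / AsyncBinaryStore for this purpose, so treat this as illustration only (error handling and queue bounds omitted).
    import com.tangosol.io.Binary;
    import com.tangosol.io.BinaryStore;
    import java.util.Iterator;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class AsyncBackupStore implements BinaryStore {
        private final BinaryStore delegate; // the disk-based store being wrapped
        private final ExecutorService executor = Executors.newSingleThreadExecutor();

        public AsyncBackupStore(BinaryStore delegate) {
            this.delegate = delegate;
        }

        public Binary load(Binary binKey) {
            return delegate.load(binKey); // reads stay synchronous
        }

        public void store(final Binary binKey, final Binary binValue) {
            // writes no longer block the cache service thread
            executor.execute(new Runnable() {
                public void run() { delegate.store(binKey, binValue); }
            });
        }

        public void erase(final Binary binKey) {
            executor.execute(new Runnable() {
                public void run() { delegate.erase(binKey); }
            });
        }

        public void eraseAll() {
            executor.execute(new Runnable() {
                public void run() { delegate.eraseAll(); }
            });
        }

        public Iterator keys() {
            return delegate.keys();
        }
    }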

  • Create and schedule an iBot to seed the cache with a saved query

    Hi all,
    May I know how to create and schedule an iBot to seed the cache with a saved query (an iBot that runs right after the daily load to reseed the cache)?

    Here is the documentation:
    10g
    http://download.oracle.com/docs/cd/E10415_01/doc/bi.1013/b31767.pdf
    11g
    http://download.oracle.com/docs/cd/E21764_01/bi.1111/e10544/delivers.htm#i1060405

  • Refresh TopLink cache from trigger

    How can the TopLink cache be updated in response to a change made in the database by some external process or procedure?
    This question was posted some time back, and one of the suggestions was to create a trigger on the table holding the data and implement a callout to TopLink to refresh the cache. I would appreciate it if anyone could let me know where I can find more information on implementing such a callout method from a trigger on the database table.
    We are accessing the TopLink objects from an OC4J container, where a singleton is managing the calls to the TopLink objects. We already have methods in place to refresh the cached objects based on timeouts, but the new requirement is to refresh the objects only if the data has changed in the database.
    Thanks
    Ahmad

    I get a URL error on this thread: How to refresh cache in TopLink, turn off cache
    Discussion Forums Error
    We cannot process your request at this time. Please try again later.
    Thanks
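    Since that link is dead, here is the gist of the callout idea from the question, sketched against the TopLink 10.1.3 session API. The session name "default" is a placeholder for whatever sessions.xml defines, and the database trigger would reach this code through whatever channel you choose (AQ, an HTTP endpoint, a polled table, ...).
    import oracle.toplink.sessions.Session;
    import oracle.toplink.tools.sessionmanagement.SessionManager;

    public class ToplinkCacheInvalidator {
        public void invalidate(Class entityClass) {
            // "default" is a placeholder for the session name in sessions.xml.
            Session session = SessionManager.getManager().getSession("default");
            // Mark all cached instances of the class invalid; subsequent queries
            // go back to the database instead of the identity map.
            session.getIdentityMapAccessor().invalidateClass(entityClass);
        }
    }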
