Write-behind cache not removing entries after upgrade to 3.2

We recently upgraded tangosol.jar and coherence.jar from version 3.0 to version 3.2. After the upgrade, our write-behind caches began consuming all available memory and crashing the JVMs because the entries were not being removed from the cache after being written to the database. We rolled back to the 3.0 jars without making any other modifications and the caches behave as expected. We'd really like to move to 3.2 for the improved network fault tolerance, but we need to resolve this issue first.
What changes were made in 3.2 with respect to write-behind caches that might cause this issue? I've reviewed our configuration and our code and can't find anything unusual, but I'm not sure what I should be looking for.
Any ideas?

I've opened an SR, but I haven't heard back. In the meantime, I've continued digging and I've noticed something strange - in the store() method of our backing map implementation, we take the entry that we just persisted and remove it from the backing map.
In my small-scale local tests, the size of the map is 1 when we enter store() and is 0 when we leave, as expected. If we process another entry using the 3.0 jars, it's again 1 and then 0. However, it gets more interesting with the 3.2 jars - the size of the map is 1 when we enter store() the first time and 0 when we leave, but if we process another entry, the size is 2 when we enter and 1 when we leave. This pattern continues such that both values increase by 1 every time we process an entry.
This would imply that we're either removing the entries incorrectly, or they're somehow being reinserted into the map.
Any ideas?
Here's the body of our method (with a bunch of sysouts added to the normal logging because this app won't run correctly under a debugger):
        /**
         * Store the specified value under the specified key in the underlying
         * store, then remove that key from the internal map and hence from
         * the cache itself. This method is intended to support both key/value
         * creation and value update for a specific key.
         *
         * @param oKey   key to store the value under
         * @param oValue value to be stored
         *
         * @throws UnsupportedOperationException if this implementation or the
         *                                       underlying store is read-only
         */
        public void store(Object oKey, Object oValue) {
            RemoveOnStoreRWBackingMap mapBacking = RemoveOnStoreRWBackingMap.this;
            System.out.println("map storing  " + oKey);
            System.out.println("Size before = " + mapBacking.entrySet().size());
            Iterator entries = mapBacking.entrySet().iterator();
            while (entries.hasNext()) {
                System.out.println("entry = " + entries.next());
            }

            String storeClassName = getCacheStore().getClass().getName();
            Logger log = Logger.getLogger(storeClassName);
            log.debug(storeClassName + ": In store method.  Storing " + oKey);

            // let the wrapped CacheStore persist the entry, tracking failures
            long cFailuresBefore = getStoreFailures();
            log.debug(storeClassName + ": failures before=" + cFailuresBefore);
            super.store(oKey, oValue);
            long cFailuresAfter = getStoreFailures();
            log.debug(storeClassName + ": failures after=" + cFailuresAfter);

            // only evict the entry if the store succeeded
            if (cFailuresBefore == cFailuresAfter) {
                log.debug(storeClassName + ": About to remove");
                mapBacking = RemoveOnStoreRWBackingMap.this;
                Converter converter = mapBacking.getContext().getKeyToInternalConverter();
                System.out.println("removed " + mapBacking.remove(converter.convert(oKey)));
//              System.out.println("removed " + mapBacking.getInternalCache().remove(converter.convert(oKey)));
                log.debug(storeClassName + ": Removed");
            }

            // declared but currently unused
            Converter converter = RemoveOnStoreRWBackingMap.this.getContext().getKeyFromInternalConverter();
            System.out.println("Size after = " + mapBacking.entrySet().size());
        }
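
One way to narrow this down might be to watch the internal cache directly: a MapListener registered on it would log every insert/update/delete, so a key that store() just removed would show up again as an insert event if the write-behind machinery is putting it back. Below is a minimal diagnostic sketch; the class name is hypothetical, and it assumes that getInternalCache() on our backing map returns an ObservableMap, which is worth verifying against the Coherence version in use.

    import com.tangosol.util.MapEvent;
    import com.tangosol.util.MapListener;

    /**
     * Diagnostic-only listener: logs every mutation of the backing map's
     * internal cache so that re-inserted keys are visible in the output.
     */
    public class InternalCacheTracer implements MapListener {
        public void entryInserted(MapEvent evt) {
            System.out.println("internal cache INSERT key=" + evt.getKey());
        }

        public void entryUpdated(MapEvent evt) {
            System.out.println("internal cache UPDATE key=" + evt.getKey());
        }

        public void entryDeleted(MapEvent evt) {
            System.out.println("internal cache DELETE key=" + evt.getKey());
        }
    }

Registering it once from the backing map's constructor, e.g. getInternalCache().addMapListener(new InternalCacheTracer()), and re-running the two-entry test should show whether the "2 before, 1 after" reading comes from an insert event that fires after our remove() returns, which would point at a behavioral change in the 3.2 write-behind queue rather than at the remove() call itself.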

Similar Messages

  • Pacman: could not remove entry from cache

    Hi,
    I've been getting "error: could not remove entry 'openbox' from cache" message while trying to uninstall openbox. Why?

    Any third-party software that doesn't install by drag-and-drop into the Applications folder, and uninstall by drag-and-drop to the Trash, is a system modification.
    Whenever you remove system modifications, they must be removed completely, and the only way to do that is to use the uninstallation tool, if any, provided by the developers, or to follow their instructions. If the software has been incompletely removed, you may have to re-download or even reinstall it in order to finish the job.
    I never install system modifications myself, and I don't know how to uninstall them. You'll have to do your own research to find that information.
    Here are some general guidelines to get you started. Suppose you want to remove something called “BrickMyMac” (a hypothetical example.) First, consult the product's Help menu, if there is one, for instructions. Finding none there, look on the developer's website, say www.brickmyrmac.com. (That may not be the actual name of the site; if necessary, search the Web for the product name.) If you don’t find anything on the website or in your search, contact the developer. While you're waiting for a response, download BrickMyMac.dmg and open it. There may be an application in there such as “Uninstall BrickMyMac.” If not, open “BrickMyMac.pkg” and look for an Uninstall button.
    You generally have to reboot in order to complete an uninstallation.
    If you can’t remove software in any other way, you’ll have to erase and install OS X. Never install any third-party software unless you're sure you know how to uninstall it; otherwise you may create problems that are very hard to solve.
    You may be advised by others to try to remove complex system modifications by hunting for files by name, or by running "utilities" that purport to remove software. I don't give such advice. Those tactics often will not work and may make the problem worse.

  • Write behind cache, DB down, when should the system stop taking new data in

    Hello:
    We are trying to use Coherence for our custom ESB, which is brokering payloads of various size between consumer and provider applications.
    Before Coherence, stopping our DB meant organization-wide outage for critically important business services.
    Since we have at least 40G of RAM in the production environment, we believe that our app can use the Coherence write-behind option to tolerate at least several hours' worth of DB outage.
    We are currently using a near cache backed by distributed cache in write-behind mode.
    9 business service JVMs (storage enabled=false) use 30 storage enabled JVMs.
    IMPORTANT: We need to create an automated alerting facility that determines when the amount of unsaved data reaches a critical level after the DB goes down. This alert should help us decide when our application should stop accepting inbound traffic.
    It is hard to use the QueueSize parameter for that because our payload memory footprint can vary from 1KB to 3MB.
    We do not expire any entries, in order to support queries against the cache during a DB outage.
    Our experiments with various flavors of overflow-scheme resulted in OutOfMemoryError, so we decided to implement a RAM-only cache as a first step.
    <near-scheme>
      <scheme-name>message_payload_scheme</scheme-name>
      <front-scheme>
        <local-scheme>
          <scheme-ref>limited_entities_front_scheme</scheme-ref>
          <high-units>100</high-units>
        </local-scheme>
      </front-scheme>
      <back-scheme>
        <distributed-scheme>
          <backing-map-scheme>
            <read-write-backing-map-scheme>
              <internal-cache-scheme>
                <local-scheme>
                  <scheme-ref>limited_bytes_scheme</scheme-ref>
                  <high-units>199229440</high-units>
                </local-scheme>
              </internal-cache-scheme>
              <cachestore-scheme>
                <class-scheme>
                  <class-name>com.comp.MessagePayloadStore</class-name>
                </class-scheme>
              </cachestore-scheme>
              <read-only>false</read-only>
              <write-delay-seconds>3</write-delay-seconds>
              <write-requeue-threshold>2147483646</write-requeue-threshold>
            </read-write-backing-map-scheme>
          </backing-map-scheme>
          <autostart>true</autostart>
        </distributed-scheme>
      </back-scheme>
    </near-scheme>

    <local-scheme>
      <scheme-name>limited_entities_front_scheme</scheme-name>
      <eviction-policy>LRU</eviction-policy>
      <unit-calculator>FIXED</unit-calculator>
    </local-scheme>

    <local-scheme>
      <scheme-name>limited_bytes_scheme</scheme-name>
      <eviction-policy>HYBRID</eviction-policy>
      <unit-calculator>BINARY</unit-calculator>
    </local-scheme>

    Good info ... I feel like I need to restate my original question along with a couple of new questions caused by the discussion above.
    Q1. Does Coherence evict 'dirty', 'queued', or 'unsaved' objects for the cache configuration provided above?
    The answer should be 'NO'; otherwise Coherence is unsafe to use as a system of record, since it should not just drop unsaved information on the floor.
    Q2. What happens to the front tier of the near + partitioned write-behind cache described above when the amount of unsaved data exceeds the maximum cache capacity defined via high-units?
    I would expect map.put() to start throwing exceptions: the cache storage is full, so it should not accept more data.
    Q3. How can I determine the moment when the amount of dirty data in bytes(!), not in objects, hits 85% of the maximum allowed cache capacity configured in bytes (using the high-units param and the BINARY calculator)? A 'DirtyUnits' counter can probably be built with some lower-level Coherence API. Can we use this API? (A rough monitoring sketch follows this message.)
    Please understand that we purchased Coherence for reliability: for making our system independent from short DB outages, and for keeping our business services up and running when the DBAs need some time for admin operations like rebuilding an index.
    Performance benefits are secondary and are not as obvious for our system, which uses primary keys only and has a well-tuned, co-located Oracle back-end.
    We simply cannot put Coherence into production unless we prove that Coherence can reliably hold the data and give us information about an approaching crisis (the cache filling up with unsaved data).
    If possible, forward this message to Cameron Purdy, who presented Coherence to our team several months ago.
    Thanks,
    Vasili Smaliak
    Applications Architect, Enterprise App Integration
    GMAC ResCap
    [email protected]
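
    Regarding Q3 above: a rough starting point (a count of queued entries rather than a true dirty-bytes figure) could be to poll the Coherence cache MBeans over JMX and alert when the write-behind QueueSize attribute grows past a threshold. The sketch below is only an approximation; the ObjectName pattern, the threshold value, and the assumption that JMX management is enabled for the cluster are environment-specific and should be verified.

        import java.lang.management.ManagementFactory;
        import java.util.Set;
        import javax.management.MBeanServer;
        import javax.management.ObjectName;

        /**
         * Rough alerting sketch: polls the Coherence cache MBeans and warns when
         * a write-behind queue grows past a threshold. QueueSize is a count of
         * queued entries, not bytes, so this only approximates "dirty data".
         * The ObjectName pattern and threshold are assumptions to be adjusted.
         */
        public class WriteBehindQueueMonitor {
            private static final long ALERT_THRESHOLD = 50000; // hypothetical limit

            public static void checkQueues() throws Exception {
                MBeanServer server = ManagementFactory.getPlatformMBeanServer();
                Set<ObjectName> cacheBeans =
                        server.queryNames(new ObjectName("Coherence:type=Cache,*"), null);

                for (ObjectName bean : cacheBeans) {
                    Object queueSize;
                    try {
                        queueSize = server.getAttribute(bean, "QueueSize");
                    } catch (Exception e) {
                        continue; // front-tier/near caches may not expose QueueSize
                    }
                    if (queueSize instanceof Number
                            && ((Number) queueSize).longValue() > ALERT_THRESHOLD) {
                        System.err.println("ALERT: write-behind backlog on " + bean
                                + ", QueueSize=" + queueSize);
                    }
                }
            }
        }

    Running checkQueues() periodically on a node that can see the cluster MBeans, and multiplying the count by a typical binary payload size, would give a crude version of the 85%-of-capacity alert described above.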

  • Write-Behind Caching and Re-entrant Calls

    Support Team -
         The Coherence User Guide states that:
         "The CacheStore implementation must not call back into the hosting cache service. This includes OR/M solutions that may internally reference Coherence cache services. Note that calling into another cache service instance is allowed, though care should be taken to avoid deeply nested calls (as each call will "consume" a cache service thread and could result in deadlock if a cache service threadpool is exhausted)."
         I have load-tested a use case wherein I have two caches: ABCache and BACache. ABCache is accessed by the application for write operations, BACache is accessed by the application for read operations. ABCache is a write-behind cache whose CacheStore populates BACache by reversing the key and value of each cache entry stored in ABCache (a sketch of such a store follows this message).
         The solution worked under load with no issues.
         But can I use it? Or is it too dangerous?
         My write-behind thread-count setting is left at default (0). The documentation states that
         "If zero, all relevant tasks are performed on the service thread."
         What does this mean? Can I re-enter the caching service if my thread-count is zero?
         Thank you,
         Denis.

    Dimitri -
         I am not sure I fully understand your answer:
         1. "Your test worked because the write-behind backing map invokes CacheStore methods asynchronously, on a write-behind thread." In my configuration, I have the default value for thread-count, which is zero. According to the documentation, that means that CacheStore methods would be executed by the service thread and not by the write-behind thread. Do I understand this correctly?
         2. "It will fail if the CacheStore method needs to be invoked synchronously on a service thread." I am not sure what the purpose of the "service thread" is. In which scenarios would the CacheStore method need to be invoked synchronously on a service thread?
         Thank you,
         Denis.
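
     For reference, the reversing CacheStore described above might look roughly like the sketch below; the class name is hypothetical, and whether the call into BACache from here is safe with thread-count left at zero is exactly the open question in this thread, so this is an illustration of the use case rather than a recommendation.

         import com.tangosol.net.CacheFactory;
         import com.tangosol.net.NamedCache;
         import com.tangosol.net.cache.AbstractCacheStore;

         /**
          * Sketch of the store described above: every entry written to ABCache
          * is mirrored into BACache with key and value swapped. Calling into
          * another cache service from a CacheStore is the behavior under
          * discussion here, so treat this as illustration only.
          */
         public class ReversingCacheStore extends AbstractCacheStore {
             public void store(Object oKey, Object oValue) {
                 NamedCache baCache = CacheFactory.getCache("BACache");
                 baCache.put(oValue, oKey); // reversed: the value becomes the key
             }

             public Object load(Object oKey) {
                 return null; // write-only store: nothing to load back into ABCache
             }
         }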

  • Write-Behind Caching and Limited Internal Cache Size

    Let's say I have a write-behind cache and configure its internal cache to be of a fixed limited size, e.g. 10000 units. What would happen if more than 10000 units are added to the write-behind cache within the write-delay period? Would my CacheStore's storeAll() get all of the added values or would some of the values be missed because of the internal cache size limitation?

    Hi Denis,
         > If an entry is removed while it is still in the
         > write-behind queue, it will be removed from the queue
         > and CacheStore.store(oKey, oValue) will be invoked
         > immediately.
         >
         > Regards,
         > Dimitri
         Dimitri,
         Just to confirm that I understand it right: if there is a queued update to a key which is then remove()-ed from the cache, the following happens:
         First CacheStore.store(key, queuedUpdateValue) is invoked.
         Afterwards CacheStore.erase(key) is invoked.
         Both synchronously to the remove() call.
         I expected that only erase() would be invoked. (A small experiment to check this is sketched after this message.)
         BR,
         Robert
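
     A simple way to observe this ordering is to put a key into a write-behind cache and remove it before the write delay elapses, then watch which CacheStore callbacks fire in the store's own logging. In the sketch below, "test-write-behind" is a hypothetical cache name that must map to a write-behind scheme whose CacheStore logs its store() and erase() calls.

         import com.tangosol.net.CacheFactory;
         import com.tangosol.net.NamedCache;

         /**
          * Experiment: queue an update in a write-behind cache and remove the
          * key before the write-delay elapses, then check the CacheStore's log
          * to see whether store() and/or erase() fired and in which order.
          */
         public class RemoveWhileQueuedExperiment {
             public static void main(String[] args) {
                 NamedCache cache = CacheFactory.getCache("test-write-behind");
                 cache.put("key-1", "value-1"); // lands in the write-behind queue
                 cache.remove("key-1");         // removed before the delay elapses
                 CacheFactory.shutdown();
             }
         }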

  • Write-Behind Caching and Old Values

    Is there a way to access the old value cached in the write-behind cache for the same key from the CacheStore's store() or storeAll() method?

    > I have a business POJO with three parts: partA,
         > partB, partC inside. Each of these three parts is
         > persisted by a separate SQL. So, every time I persist
         > my POJO, up to 3 SQLs may be executed.
         I understand.
         > When a change happens in my POJO, it goes onto the
         > write-behind queue. In my CacheStore.store() or
         > CacheStore.storeAll() I would like to be able to make
         > an intelligent decision about which of the three
         > parts: partA, partB or partC has actually changed and
         > only run the SQL updates for the changed parts. This
         > would allow me to avoid massive amounts of
         > unnecessary SQL updates for the parts that did not
         > change.
         Right. Keep in mind that there are two conditions that you must be aware of:
         1) Multiple updates could have occurred to the object, meaning that the database update would have to "roll up" the results of multiple changes to the object.
         2) Some or all of the updates could have already occurred to the database. This may be a little trickier to understand, but it reflects the possible machine failure conditions that occurred while a write-behind was in progress.
         Although the latter are unlikely, they should be accounted for, and of course they are harder to test for with certainty. As a result, the updates to the information (the CacheStore implementation) must be built in an "idempotent" manner, i.e. allowing it to be executed more than once with no additional side-effects. (A sketch of this pattern follows this message.)
         > If I had access to the POJO stored under the same key
         > before the new value was put in cache, I could use
         > equals() on each of the three parts to find out
         > exactly which one of them changed.
         While this is true, you would need to compare the "known previous database state" version, not just the "old" version.
         > Of course, if this functionality is not available, I
         > would have to create dirty flags for each of the
         > three POJO parts. But I can't really clear my POJO's
         > flags and recache the POJO from within the store() or
         > storeAll(), right?
         Yes, but remember that those flags are "could be dirty" flags, because of the above failure modes that I described.
         Peace,
         Cameron Purdy
         Tangosol Coherence: The Java Data Grid
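
     To make the idempotence point above concrete, below is a rough sketch of the "compare with the state known to be in the database, update only the parts that differ" approach. BusinessPojo, loadCurrentDbState() and the updatePartX() hooks are hypothetical placeholders for the application's real types and SQL; re-running the method with the same value is harmless because an already-applied part compares equal and is skipped.

         /**
          * Idempotent partial-update sketch for a three-part POJO. The abstract
          * hooks and nested interface are hypothetical placeholders standing in
          * for the application's real persistence code and business object.
          */
         public abstract class PartWiseStoreSketch {

             /** Would be called from CacheStore.store(oKey, oValue). */
             public void storeIdempotently(Object oKey, BusinessPojo pojoNew) {
                 BusinessPojo pojoDb = loadCurrentDbState(oKey); // null if no row yet

                 if (pojoDb == null || !equalsSafe(pojoNew.getPartA(), pojoDb.getPartA())) {
                     updatePartA(oKey, pojoNew.getPartA());
                 }
                 if (pojoDb == null || !equalsSafe(pojoNew.getPartB(), pojoDb.getPartB())) {
                     updatePartB(oKey, pojoNew.getPartB());
                 }
                 if (pojoDb == null || !equalsSafe(pojoNew.getPartC(), pojoDb.getPartC())) {
                     updatePartC(oKey, pojoNew.getPartC());
                 }
             }

             private static boolean equalsSafe(Object o1, Object o2) {
                 return o1 == null ? o2 == null : o1.equals(o2);
             }

             // Hypothetical hooks for the application's persistence code.
             protected abstract BusinessPojo loadCurrentDbState(Object oKey);
             protected abstract void updatePartA(Object oKey, Object partA);
             protected abstract void updatePartB(Object oKey, Object partB);
             protected abstract void updatePartC(Object oKey, Object partC);

             /** Minimal placeholder for the business object with three parts. */
             public interface BusinessPojo {
                 Object getPartA();
                 Object getPartB();
                 Object getPartC();
             }
         }

     Comparing against the database state rather than against the previously cached value is what makes the method safe to execute more than once, which covers both the rolled-up multiple updates and the partially applied write-behind cases described above.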

  • Time machine fails backing up with a "backup disk not found" error after upgrade to Lion 10.7.2

    Time Machine fails backing up with a "backup disk not found" error after upgrading to Lion 10.7.2. I've erased and reformatted the drive, reconnected Time Machine (it saw it), and began a new backup. The backup starts again but fails before completing with a "backup drive not found" error. I used the 10.7.2 combo update. Any help on how to fix this problem would be appreciated.

    Arghhhhhhhhhhhhhhhhhhh!!!!!!
    After waiting since October for a fix, I have now upgraded the firmware on the Time Capsule and installed the new Airport Utility, released yesterday, and the situation is even worse.
    Up until now, the Airport Utility had an option to disconnect all drives manually and the Time Capsule would then work until the next reboot – a temporary (?) work-around.
    Now that option does not exist in the new-look Airport Utility. And the Airport Utility installation can’t be rolled back and the old utility won’t restore.
    The sparsebundle is still not accessible. Has anyone worked out a work-around in the new environment yet?
    I have another Time Machine backup working fine to a trusty old LaCie drive, so I erased the one on my Time Capsule to see if that works. I have renamed the Capsule and the Time Capsule drive.  But building another full backup will take at least two days, and I shall be away from tomorrow and am reluctant to leave the Capsule and computer up and running for a week. And I’ll bet the sparsebundle will still be nowhere to be found.
    How can Apple screw up so badly when they are to become the richest company in the entire world and – soon – will have more cash in the bank than the entire United States? Can’t they afford someone who really can sort this out? Especially when a Firewire connected hard drive – my trusty and simple LaCie – works fine.
    Words, almost, fail me.

  • Wi-Fi is not working properly after upgrading to 7.1.1 on my iPhone 4s

    Dear Apple,
    I really appreciate your work. Right now I am suffering from the problem mentioned in the subject: sometimes my Wi-Fi works well and sometimes it shows as dead, and it has not been working well since upgrading my phone to 7.1.1.
    It is my humble request that you please find a solution and provide it as soon as possible; my work depends entirely on Wi-Fi.
    Sometimes Settings also hangs: when I go into Settings and tap on Wi-Fi, it hangs and shows that Wi-Fi is not available. Please provide me with the best solution.
      Thanks & Regards
        Sam
      +91-92555-04628

    Settings - General - Reset - Reset Network Settings.

  • Write-Behind Caching and Multiple Puts

    What happens when two consecutive puts are performed on the write-behind cache for the same key? Will CacheStore's store() or storeAll() be invoked once for every put() or only once for the last put() (the one which overrode the previous cached values)?

    Hi Denis,
         If you use write-behind, there will be no unnecessary database updates - only the last put() will result in a database update.
         Regards,
         Dimitri

  • Elements 11 is not running anymore after upgrading my Mac to Mountain Lion

    Elements 11 is not running anymore after upgrading my Mac to Mountain Lion

    PSE11 does work OK with Mountain Lion but you need to delete the saved prefs and application state.
    Quit Elements.
    Launch Finder, click on Go (hold down the Option key), and click on Library when it appears in the list below Home.
    Delete the following files from the Preferences folder:
    com.adobe.PhotoshopElements.plist    
    Adobe Photoshop Elements 11 Paths
    Adobe Photoshop Elements 11 Settings
    Delete this file from the Saved Application State folder:
    com.adobe.PhotoshopElements.savedState

  • HT201412 My text messages go to my iPad but not my iPhone after upgrading to iOS 7.0.2, does anyone know a fix?

    My text messages go to my iPad but not my iPhone after upgrading to iOS 7.0.2, does anyone know a fix?

    They are not text messages, they are iMessages.
    Turn on iMessage on the iPhone.

  • iPhone calendar does not retain entries after 2 months

    iPhone calendar does not retain entries after 2 months

    OMG. I just did that sync thing and IT ALL CAME BACK!!!
    I only hoped it would fix it for the future. Wow - where WERE those entries, that they could return? Is that the Cloud?
    Thanks.
    Low-tech elder

  • HT1688 My iPhone 5 does not ring anymore after upgrading to iOS 7.0.2

    My iPhone 5 does not ring anymore after upgrading to iOS 7.0.2.  It did not happen right away, but started maybe 2 weeks ago.  I tried turning "Do Not Disturb" on and allowing everyone to call, as well as turning it off, but it only vibrates when calls come in.  I also turned my phone off and back on, and it still only vibrates when calls and messages come in.

    I found another thread which said there is a silent switch next to the volume buttons. That was the problem.

  • Contact addresses not showing up after upgrading to iOS 7

    Contact addresses are not showing up after upgrading to iOS 7

  • Hi, my iPhone could not be activated after upgrading it from iOS 4 to iOS 6; it says this is because the activation server is unavailable and there is no SIM card in the iPhone. Can anyone help me, please?

    Hi, my iPhone could not be activated after upgrading it from iOS 4 to iOS 6; it says this is because the activation server is unavailable and there is no SIM card in the iPhone. Can anyone help me, please?

    It is 99% certain this is a hacked/jailbroken iPhone set up to use a different carrier than intended. You have to put in the SIM from the original carrier or ask them to unlock the iPhone. Neither we nor Apple can help you with this.
