Eviction of objects
Hello.
I have an object model where one A object contains a collection of B objects
and an association to a "current" B object, which is also in the collection.
The B class references other persistent objects and collections.
class A {
    Collection b;
    B currentB;
}
class B {
    Collection c;
    D d;
    E e;
}
Now, to conserve memory, I don't want any B objects other than the current one
in memory. So when I set currentB to a new value, I want to free the old one
for garbage collection.
Do I need to handle this myself using pm.evict(), or will Kodo do it for me
automatically?
Regards,
Dag Hoidahl
[email protected]
This is happening inside a transaction, but I'm not concerned about the
memory usage within transactions as such.
If I understand correctly, then the objects in the Collections (implemented
as Vectors in my code, by the way), are not hard referenced, so they will be
garbage collected as needed. Then there's nothing to worry about, I guess.
--Dag
"Patrick Linskey" <[email protected]> wrote in message
news:ardi0m$p44$[email protected]..
Calling pm.evict() will actually not help -- it will only free the state
of the object.
Is this happening in a transaction, or outside a transaction?
By default, Kodo does not maintain hard references to objects in the PM
cache. So the object will be eligible for garbage collection. This is
configurable with the CacheReferenceSize property.
If the code is happening in a transaction, then we by default do hold
onto a hard ref until the transaction commits. This is also configurable.
-Patrick
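As a rough illustration of the idea behind a bounded hard-reference buffer (plain Java; this is not Kodo's implementation, and the class name is invented), a fixed-size LRU map keeps hard references to only the N most recently used objects, letting older ones become garbage-collectable:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Keeps hard references to at most maxSize recently used values; older
// entries fall out of the map and become eligible for garbage collection
// (assuming nothing else holds a reference to them).
class HardRefBuffer<K, V> extends LinkedHashMap<K, V> {
    private final int maxSize;

    HardRefBuffer(int maxSize) {
        super(16, 0.75f, true); // access-order, so iteration order is LRU
        this.maxSize = maxSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxSize;
    }
}
```

A setting like CacheReferenceSize conceptually bounds such a buffer; objects outside it are held only weakly by the PM cache.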
Patrick Linskey [email protected]
SolarMetric Inc. http://www.solarmetric.com
Similar Messages
-
Eviction of objects based on aging time and not cache size
Hi
I am using a Coherence cluster (Extend) and wish to implement an eviction policy for each object inserted into the cache.
From the docs I have read, I understand customized eviction policies are ALL size-based and not time-based (meaning eviction is triggered when the cache is full, not when a cache object's aging time is reached).
Is there a way to implement such an eviction?
Hi Reem,
You can expire cache entries based on time by setting the expiry-delay in the cache configuration, for example
<distributed-scheme>
<scheme-name>SampleMemoryExpirationScheme</scheme-name>
<backing-map-scheme>
<local-scheme>
<expiry-delay>10s</expiry-delay>
</local-scheme>
</backing-map-scheme>
<serializer>
<class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
</serializer>
<autostart>true</autostart>
</distributed-scheme>
The above configuration expires entries 10 seconds after they have been put into the cache.
You can find more information here: http://download.oracle.com/docs/cd/E14526_01/coh.350/e14509/appcacheelements.htm#BABDHGHJ
Regards,
JK -
We have a need to be able to explicitly reset the TopLink global cache from time to time. We have a significant number of ReadOnly (lookup & metadata) objects that we want to retain in cache, and so want to avoid using the session.getIdentityMapAccessor().initializeAllIdentityMaps() API.
Some of our descriptors use inheritance policies, and there are both ReadOnly and Updateable child descriptors involved in these hierarchies, so we cannot make use of the session.getIdentityMapAccessor().initializeIdentityMap(Class) API.
For Non-ReadOnly child descriptors (involving inheritance) our only option is to explicitly remove (or possibly invalidate) the object in the global cache. We chose the remove approach using the session.getIdentityMapAccessor().removeFromIdentityMap(object) API. We first query for all objects (ReadAllQuery - check cache only) for a specific Class, and then iterate over the results calling removeFromIdentityMap(object) to evict the objects of that class from the cache.
This approach should be safe, as none of our ReadOnly objects would ever reference a Non-ReadOnly object. We are also aware of the issues involving in-flight transactions and non-deterministic results in those cases. We will be resetting the cache only under special circumstances and under controlled conditions.
After using the above approach, we are now encountering a NPE whenever we query for objects that involve inheritance. We see a database query get executed against the DB to retrieve the target object from the database, but when TopLink goes to query for a referenced object that involves inheritance the NPE occurs.
Here's the stack trace:
[container] -- null
java.lang.NullPointerException
at oracle.toplink.publicinterface.DatabaseRow.getIndicatingNoEntry(DatabaseRow.java:269)
at oracle.toplink.publicinterface.DatabaseRow.get(DatabaseRow.java:244)
at oracle.toplink.publicinterface.InheritancePolicy.classFromRow(InheritancePolicy.java:260)
at oracle.toplink.internal.descriptors.ObjectBuilder.buildObject(ObjectBuilder.java:363)
at oracle.toplink.queryframework.ObjectLevelReadQuery.buildObject(ObjectLevelReadQuery.java:455)
at oracle.toplink.queryframework.ReadObjectQuery.executeObjectLevelReadQuery(ReadObjectQuery.java:424)
at oracle.toplink.queryframework.ObjectLevelReadQuery.executeDatabaseQuery(ObjectLevelReadQuery.java:811)
at oracle.toplink.queryframework.DatabaseQuery.execute(DatabaseQuery.java:620)
at oracle.toplink.queryframework.ObjectLevelReadQuery.execute(ObjectLevelReadQuery.java:779)
at oracle.toplink.queryframework.ReadObjectQuery.execute(ReadObjectQuery.java:388)
at oracle.toplink.publicinterface.Session.internalExecuteQuery(Session.java:2073)
at oracle.toplink.publicinterface.Session.executeQuery(Session.java:988)
at oracle.toplink.publicinterface.Session.executeQuery(Session.java:960)
at oracle.toplink.publicinterface.Session.executeQuery(Session.java:873)
We understand that objects involved in descriptor inheritance hierarchies all reside in the parent (root) identityMap.
We've verified our cache reset logic and verified that the cache appears to be reset properly (ReadOnly objects are still in cache, NonReadOnly objects have been successfully evicted from cache).
Any ideas on why this might be happening? Are there any special caveats with calling the removeFromIdentityMap(object) api when inheritance is involved?
Any other steps need to be taken?
Thanks in advance.
...Steve
Any takers on this? I'm still hoping to get some feedback/suggestions on solving this issue.
I did some further follow-up investigation. I get the exact same NullPointerException when I invalidate objects in cache instead of explicitly removing them using the session.getIdentityMapAccessor().removeFromIdentityMap(Object) api.
After doing the reset of the cache, (per the approach outlined above), as a test, I cycled through all of the descriptors, created a ReadAllQuery for each one and executed it within a try/catch block to catch and report the NPE. It seems only some of the Descriptors that had all objects in their identityMap invalidated throw the NPE. There does not appear to be an obvious pattern or reason for the NPE. Some of the descriptors are dead simple and only involve simple directToField mappings!
TopLink source code is not available for all of the components reported in the stack trace, so I can't identify the reason or logic behind what might be causing the NPE. I don't understand what the DatabaseRow.getIndicatingNoEntry(DatabaseRow) is doing nor why the InheritancePolicy is involved. The Descriptors don't involve inheritance!
Are there any guidelines or caveats regarding invalidating all objects in an IdentityMap? Is there any other cache life-cycle/re-initialization that needs to be considered? Or is it simply not possible to partially re-initialize/reset the IdentityMaps in a Database/Server Session?
Thanks for any help/advice offered.
...Steve -
Hi,
Table 2 (the state transitions table) in the spec says that evict on a persistent-dirty instance
leaves its state unchanged. It does not say that an exception should be thrown as Kodo does. The
spec in section 12.5.1 implies that eviction may be delayed. In other words, the behavior that I
would expect, if I am correctly interpreting the murky depths of the spec here, is that evicting a
persistent-dirty instance has no immediate effect, but after the transaction completes, the instance
is then evicted. Evict marks it for eviction and this mark overrides the behavior indicated by the
transaction's RetainValues property.
If I were deciding implementation behavior, I would make all evictions inside a transaction put a
mark, even if they immediately evict the object, so that when the transaction completes, all evicted
objects are, in fact, evicted at transaction completion. So a PNT or PC object that is evicted and
reused within a transaction would get evicted again when the transaction completed. At transaction
completion, the eviction mark would be removed. As you can tell, I kind of like expected behavior
to be simply described and uniformly implemented. It makes an application programmer's life a lot
easier.
David Ezzio
David-
I believe you are correct. I have made a bug report about this:
http://bugzilla.solarmetric.com:8080/show_bug.cgi?id=124
David Ezzio <[email protected]> wrote:
Hi,
Table 2 (the state transitions table) in the spec says that evict on a persistent-dirty instance
leaves its state unchanged. It does not say that an exception should be thrown as Kodo does. The
spec in section 12.5.1 implies that eviction may be delayed. In other words, the behavior that I
would expect, if I am correctly interpreting the murky depths of the spec here, is that evicting a
persistent-dirty instance has no immediate effect, but after the transaction completes, the instance
is then evicted. Evict marks it for eviction and this mark overrides the behavior indicated by the
transaction's RetainValues property.
If I were deciding implementation behavior, I would make all evictions inside a transaction put a
mark, even if they immediately evict the object, so that when the transaction completes, all evicted
objects are, in fact, evicted at transaction completion. So a PNT or PC object that is evicted and
reused within a transaction would get evicted again when the transaction completed. At transaction
completion, the eviction mark would be removed. As you can tell, I kind of like expected behavior
to be simply described and uniformly implemented. It makes an application programmer's life a lot
easier.
David Ezzio--
Marc Prud'hommeaux [email protected]
SolarMetric Inc. http://www.solarmetric.com
Kodo Java Data Objects Full featured JDO: eliminate the SQL from your code -
JdoPostLoad() does not get executed!!!
Kodo 2.4.3
Sequence
1. Create PC (docket)
2. Persist it
3. evict it
4. Read one of its fields of PC type - jdoPostLoad() isn't triggered; OR read one of its simple fields which is part of the default fetch group - jdoPostLoad() gets triggered.
After evictAll() I am reading
pmi.currentTransaction().begin();
nd = createDocket();
touchDocketMilestones(nd);
pmi.currentTransaction().commit();
pmi.evictAll();
pmi.currentTransaction().begin();
nd.getGoal();        // PC field, so not in the default fetch group: does NOT trigger jdoPostLoad()
nd.getDescription(); // String field from the default fetch group: triggers jdoPostLoad()
I agree that the specs are rather vague about it. My point is that jdoPreClear()
and jdoPostLoad() logically should go in pairs - if jdoPreClear() is invoked on
evict, jdoPostLoad() should be invoked on first field access (which means the
default-fetch-group fields would always have to be loaded first).
If jdoPostLoad() is not invoked on the transition from hollow to persistent-clean, we will
have no way to reinitialize transient fields the same way we do when an
object is read the first time.
"Patrick Linskey" <[email protected]> wrote in message
news:[email protected]...
Alex,
It is not clear to me that jdoPostLoad() should be triggered in that
situation.
Evicting the object makes it transition to Hollow, meaning that all
fields in the object are marked as not loaded.
Subsequently, you access a field that is not in the default-fetch-group.
After this call completes, the default-fetch-group is still unloaded. So,
it is not clear to me that the jdoPostLoad() callback should have been
invoked.
Section 10.1 of the spec states that InstanceCallbacks.jdoPostLoad() is
invoked after the dfg is loaded. So, to invoke jdoPostLoad() after
loading a non-dfg field in a hollow object would be in violation of the
spec -- unless we first loaded the dfg, we'd be executing jdoPostLoad()
before loading the dfg.
I don't believe that the spec states that the dfg must be loaded when a
non-dfg field is loaded. Given this assumption on my part, I believe that
our behavior is correct.
-Patrick
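A toy model (plain Java, not Kodo internals; all names invented) of the rule Patrick describes: the post-load callback fires only when the default fetch group (dfg) is loaded, so reading a non-dfg field on a hollow instance fires nothing:

```java
// Toy state model: postLoad fires only when the default fetch group (dfg)
// becomes loaded, matching the spec's rule that jdoPostLoad() runs after
// the dfg is loaded.
class ToyInstance {
    boolean dfgLoaded = false;
    int postLoadCalls = 0;

    void readDfgField() {
        if (!dfgLoaded) {     // loading the dfg ...
            dfgLoaded = true;
            postLoadCalls++;  // ... triggers the post-load callback
        }
    }

    void readNonDfgField() {
        // The non-dfg field is loaded individually; the dfg stays unloaded,
        // so no post-load callback fires.
    }
}
```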
On Fri, 11 Apr 2003 16:44:49 -0400, Alex Roytman wrote:
--
Patrick Linskey
SolarMetric Inc. -
Hi,
I am using Kodo 4.0.1 and SQL Server. I have the following Folder-like
structure. In the UI you can move Folders around, and at certain moments I
get a NullPointerException.
public abstract class Folder extends AbstractFolderEntry {
    private List<AbstractFolderEntry> children = null;

    public Folder() {
        super();
        // contains both (sub)Folders and DataElements
        children = new ArrayList<AbstractFolderEntry>();
    }

    public List<AbstractFolderEntry> getChildren() {
        return Collections.unmodifiableList(children);
    }
}
Caused by: java.lang.NullPointerException
at java.util.Collections$UnmodifiableCollection.<init>(Unknown Source)
at java.util.Collections$UnmodifiableList.<init>(Unknown Source)
at java.util.Collections.unmodifiableList(Unknown Source)
at com.ces.core.domain.Folder.getChildren(Folder.java:150)
at com.ces.core.application.task.MoveAbstractFolderEntryTask.body
My .jdo:
<class name="Folder">
<field name="children">
<collection element-type="AbstractFolderEntry" />
<extension vendor-name="kodo" key="inverse-logical" value="parent"/>
</field>
</class>
Collections.unmodifiableList() does a check for null and throws a
NullPointerException. I never set the collection to null. Adding the
following check to getChildren() seems to solve the problem:
if (children == null) {
    // This should not happen, but it does, and seems
    // to be a bug in Kodo 4.0.1.
    // To init children, call size(); otherwise it will
    // cause a NullPointerException later on.
    children.size();
}
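For what it's worth, the NPE in the stack trace originates in the argument check inside Collections.unmodifiableList itself, which a minimal standalone snippet (independent of Kodo; class name invented) reproduces:

```java
import java.util.Collections;
import java.util.List;

public class UnmodifiableNullDemo {
    // Returns true if wrapping a null list throws NullPointerException,
    // which is exactly the check seen in the Folder.getChildren() trace.
    static boolean throwsNpeOnNull() {
        List<String> children = null;
        try {
            Collections.unmodifiableList(children);
            return false;
        } catch (NullPointerException expected) {
            return true;
        }
    }
}
```

So the question is not why unmodifiableList throws, but why the enhanced field read sometimes leaves `children` unloaded (null) in the first place.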
kind regards,
Christiaan
Hi David,
actually the "children == null" check doesn't trigger Kodo to load the children field. The same check is used within the Collections.unmodifiableList(List l) method, where this code throws the NullPointerException:
if (l == null) {
    throw new NullPointerException();
}
I suspect that the problem is caused by the fact that I have two threads working on the objects at the same time (though javax.jdo.option.Multithreaded is set to true). One thread evicts the objects from the cache at certain intervals; the other thread performs certain operations on the Folder object. I haven't had a chance to build a testcase for this (will do asap), but in my unit test, which doesn't have the two-thread situation, it never occurs.
<David Ezzio> wrote in message news:[email protected]...
> Hi Christiaan,
> You said that if you add the code,
> if (children == null)
> {
>     children.size();
> }
> to the method that returns
> Collections.unmodifiableList(children);
> that it avoids a NPE. Correct?
> If so, it would indicate that enhancement has a bug. This behavior indicates that the first code is triggering Kodo's field loading while the second is not. What is very curious is that sometimes the field loads and sometimes not, as the NPE is not always happening.
> Do you have other constructors for Folder that might not initialize the children list? Have you tried removing the instance initialization "... children = null"? This might be confusing the enhancer. Decompiling the enhanced class and taking a look at what's happening in the constructors and the getChildren method would probably tell us whether this is an enhancement bug. Can you do that and post the relevant sections? I find that DJDecompiler works very nicely.
> Let us know how you make out.
> Thanks,
> David Ezzio -
Best practice - caching objects
What is the best practice when many transactions require a persistent
object that does not change?
For example, in an ASP model supporting many organizations, the organization is
required by many persistent objects in the model. I would rather look the
organization object up once and keep it around.
It is my understanding that once the persistence manager is closed the
organization can no longer be part of new transactions with other
persistence managers. Aside from looking it up for every transaction, is
there a better solution?
Thanks in advance
Gary
The problem with using object ID fields instead of PC object references in your
object model is that it makes your object model less useful and intuitive.
Taken to the extreme (replacing all object references with their IDs), you
will end up with objects like rows in a JDBC result set. Plus, if you use a PM per
HTTP request, it will not do you any good, since the organization data won't be in
the PM anyway - it might even be slower (no optimizations such as Kodo batch
loads).
So we do not do it.
What you can do:
1. Do nothing special; just use the JVM-level or distributed cache provided by
Kodo. You will not need to access the database to get your organization data, but
the object-creation cost in each PM is still there (do not forget that the cache we
are talking about is a state cache, not a PC object cache). Good because it is
transparent.
2. Designate a single application-wide PM for all your read-only big
things - lookup screens etc. Use a PM per request for the rest. Not
transparent - it affects your application design.
3. If a large portion of your system is read-only, use PM pooling. We did it
pretty successfully. The requirement is to be able to recognize all PCs
which are updateable and evict/makeTransient those when the PM is returned to
the pool (Kodo has a nice extension in PersistenceManagerImpl for removing
all managed objects of a certain class), so you do not have stale data in your
PM. You can use Apache Commons Pool to do the pooling, and make sure your pool
is able to shrink. It is transparent and increases performance considerably.
This is the approach we use.
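A minimal sketch of the pooling idea in option 3, in plain Java (a simple deque stands in for Apache Commons Pool, and the cleanup hook models the evict/makeTransient step; none of this is Kodo API):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Consumer;
import java.util.function.Supplier;

// Minimal object pool sketch: borrow() reuses a pooled instance when one is
// available, otherwise creates a new one; returnToPool() runs a cleanup hook
// (the analogue of evicting updateable PCs before the PM is reused).
class SimplePool<T> {
    private final Deque<T> idle = new ArrayDeque<>();
    private final Supplier<T> factory;

    SimplePool(Supplier<T> factory) { this.factory = factory; }

    synchronized T borrow() {
        return idle.isEmpty() ? factory.get() : idle.pop();
    }

    synchronized void returnToPool(T pm, Consumer<T> cleanup) {
        cleanup.accept(pm); // e.g. evict/makeTransient updateable objects
        idle.push(pm);
    }

    synchronized int idleCount() { return idle.size(); }
}
```

A production pool would also shrink idle capacity and validate instances on return, which is what Commons Pool provides out of the box.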
"Gary" <[email protected]> wrote in message
news:[email protected]...
>
What is the best practice when many transactions requires a persistent
object that does not change?
For example, in a ASP model supporting many organizations, organization is
required for many persistent objects in the model. I would rather look the
organization object up once and keep it around.
It is my understanding that once the persistence manager is closed the
organization can no longer be part of new transactions with other
persistence managers. Aside from looking it up for every transaction, is
there a better solution?
Thanks in advance
Gary -
Hi all,
I have implemented my own eviction strategy (com.tangosol.util.Cache.EvictionPolicy). How can I associate it with my cache - is it done in code or in the XML config? I found nothing in the documentation...
I use Tangosol 2.5.
BTW: I implemented only the requestEviction method; I don't know the goal of the entryTouched method. Can you give me an example?
Thx,
-emmanuel
Hi,
Shortly after the previous posts in this thread, I changed my code so that it does exactly as was suggested, i.e.
> Set cacheKeySet = cache.keySet(expiryFilter);
> cache.keySet().removeAll(cacheKeySet);
However, in Coherence 2.5.1 we have just seen this error in all of our cache servers' logs (sometimes more than once in the same cache server in quick succession):
2005-11-03 22:28:50.743 Tangosol Coherence 2.5.1/293 <Error> (thread=DistributedCacheWorker:0, member=1): Terminating DistributedCache due to unhandled exception: java.lang.IllegalStateException
2005-11-03 22:28:50.744 Tangosol Coherence 2.5.1/293 <Error> (thread=DistributedCacheWorker:0, member=1): java.lang.IllegalStateException
at java.util.HashMap$HashIterator.remove(HashMap.java:804)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache.onRemoveAllRequest(DistributedCache.CDB:45)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache$RemoveAllRequest$RemoveJob.run(DistributedCache.CDB:2)
at com.tangosol.coherence.component.util.DaemonPool$Daemon.onNotify(DaemonPool.CDB:9)
at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:34)
at java.lang.Thread.run(Thread.java:595)
2005-11-03 22:28:50.781 Tangosol Coherence 2.5.1/293 <D5> (thread=DistributedCache, member=1): Service DistributedCache left the cluster
I believe this is the first time we have seen this error since the change has been in place.
On a couple of cache servers this error also appears immediately afterwards:
2005-11-03 22:28:50.853 Tangosol Coherence 2.5.1/293 <Error> (thread=DistributedCache, member=1): BackingMapManager com.tangosol.net.DefaultConfigurableCacheFactory$Manager: failed to release a cache: aCache
2005-11-03 22:28:50.854 Tangosol Coherence 2.5.1/293 <Error> (thread=DistributedCache, member=1): (Wrapped) java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at com.tangosol.util.ThreadGate.doWait(ThreadGate.java:474)
at com.tangosol.util.ThreadGate.close(ThreadGate.java:232)
at com.tangosol.util.WrapperConcurrentMap.lock(WrapperConcurrentMap.java:89)
at com.tangosol.net.cache.ReadWriteBackingMap.terminateWriteThread(ReadWriteBackingMap.java:2027)
at com.tangosol.net.cache.ReadWriteBackingMap.release(ReadWriteBackingMap.java:1486)
at com.tangosol.net.DefaultConfigurableCacheFactory.release(DefaultConfigurableCacheFactory.java:2004)
at com.tangosol.net.DefaultConfigurableCacheFactory$Manager.releaseBackingMap(DefaultConfigurableCacheFactory.java:2176)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache$Storage.invalidate(DistributedCache.CDB:27)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache.releaseAllStorage(DistributedCache.CDB:16)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache.onExit(DistributedCache.CDB:11)
at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:57)
at java.lang.Thread.run(Thread.java:595)
In the cache client, the thread in question appears to be blocked (since last night):
java.lang.Object.wait(Native Method)
java.lang.Object.wait(Object.java:474)
com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache$BinaryMap.waitPolls(DistributedCache.CDB:9)
com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache$BinaryMap.removeAll(DistributedCache.CDB:95)
com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache$BinaryMap$KeySet.removeAll(DistributedCache.CDB:1)
com.tangosol.util.ConverterCollections$ConverterCollection.removeAll(ConverterCollections.java:509)
...
Have you seen this before? Is there a solution to this? Is this improved in Coherence 3.0? (I'm guessing the answer is most likely yes :-) )
Regards,
Rohan Talip
P.S. For reference, this is our current JVM:
# java -version
java version "1.5.0_02"
Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_02-b09)
Java HotSpot(TM) Server VM (build 1.5.0_02-b09, mixed mode)
P.P.S. We are running on Red Hat Enterprise Linux ES 3 and we have disabled the Native POSIX Threading Library by setting an environment variable: LD_ASSUME_KERNEL=2.4.19 -
Setting the property "Cache Level" of the pcd object to "None".
Hi all
I have an EP 6.0 on NW04 SPS 17. I need to solve a problem and found note 960975. My question is: how can we set the property "Cache Level" of the PCD object to "None"? Where should I go - NWA, Visual Admin, Configtool, or somewhere else?
Many thanks before.
Regards
Agoes
Message was edited by: Agoes Boedi Poerwanto
Hi Agoes,
By using the tool Support Desk -> Portal Content Directory -> PCD Administration you can do this. Please note that this tool should only be used in debugging situations.
There is a new section "Release a Unit from the cache cluster wide" in this tool. With this new functionality, you can remove an object from the cache on all nodes in the cluster. If the object is still in use, it will be reread immediately from the database
Releasing the entire PCD cache can severely affect performance. Hence, if there are inconsistencies suspected with a single object, e.g. a role or an iview, the new section "Release a Unit from the cache cluster wide" can be used to evict the given object from the cache on all nodes in the cluster.
Cheers,
shyam -
Expiry-delay: how to evict an entry without "touching" it?
I have a cache (called pending mutation cache) with an expiry-delay set to 30 seconds and a listener (com.tangosol.net.events.EventInterceptor, EntryEvent.Type.REMOVED) configured on the same cache.
The objective: when an entry is evicted, the event interceptor code will be executed (some delayed operation we want to perform).
But entries are not evicted after 30 seconds. They will be evicted when an attempt is made to retrieve this entry after the 30 seconds delay. (based on note in documentation)
In my case, no client accesses the cache entries, so even though entries are expired, the event interceptor is never triggered.
This code is part of a pending mutation pattern (delayed cache modifications - based on expiry-delay of pending mutation cache) I am trying to implement.
Any ideas how to achieve this? Is there an elegant way to introduce a daemon thread that will sweep all entries in this cache and trigger the eviction?
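As a generic illustration (plain Java, not Coherence API; a real implementation would instead schedule a task that touches the NamedCache so its own expiry logic runs), a sweeper that evicts entries older than a TTL could look like this, driven periodically by a ScheduledExecutorService:

```java
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.LongSupplier;

// Sketch of an active-expiry sweeper: entries carry their insertion time,
// and sweep() evicts anything older than ttlMillis, which is where a
// removal listener/interceptor would fire.
class SweepingCache<K, V> {
    private static final class Timestamped<V> {
        final V value; final long insertedAt;
        Timestamped(V value, long insertedAt) { this.value = value; this.insertedAt = insertedAt; }
    }

    private final Map<K, Timestamped<V>> map = new ConcurrentHashMap<>();
    private final long ttlMillis;
    private final LongSupplier clock; // injectable clock, for testing

    SweepingCache(long ttlMillis, LongSupplier clock) {
        this.ttlMillis = ttlMillis;
        this.clock = clock;
    }

    void put(K key, V value) { map.put(key, new Timestamped<>(value, clock.getAsLong())); }

    boolean containsKey(K key) { return map.containsKey(key); }

    // Remove all expired entries; returns how many were evicted.
    int sweep() {
        int evicted = 0;
        long now = clock.getAsLong();
        for (Iterator<Map.Entry<K, Timestamped<V>>> it = map.entrySet().iterator(); it.hasNext(); ) {
            if (now - it.next().getValue().insertedAt >= ttlMillis) {
                it.remove(); // the eviction hook would fire here
                evicted++;
            }
        }
        return evicted;
    }
}
```

In practice you would schedule sweep() every few seconds, e.g. with Executors.newSingleThreadScheduledExecutor().scheduleAtFixedRate(...).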
Tks! -
Removing an object from the Kodo datacache doesn't always work.
Hi,
We're using a very old version of kodo - 3.4.1 but it seems to work well.
We seem to have come across an intermittent issue where, if we remove an object from the datacache using the DataCache.commit call, the object doesn't actually seem to get removed. So the application continues operating with the out-of-date cached object. We get no exception or error, though.
Has anyone seen anything like this before? I can't seem to find any available known-issues list or bug database for Kodo; I guess this isn't available?
Thanks,
Dan
The size will refer to the individual DataCache size.
KodoPersistenceManager.getDataStoreCache() or getDataStoreCache("myCache") will return the default or named datacaches respectively.
You can evict the L2 cache content on the returned DataCache instance from the previous methods. -
Hi,
I set up the product successfully. But now my requirement is a bit different: I want to refresh the objects present in the cache at a particular interval of time.
Once the specified time is reached, the values need to be refreshed from the database. Can anyone shed some light on this problem?
Thanks in advance.
Bye,
cg
Hi Chidam,
I just moved this thread here from the Test Forum.
The most elegant approach is for you to implement a MapListener to hook into the expiry events (with cache items set to expire after a specified time period), at which point you can force a reload of the expired item.
If you look at com.tangosol.net.cache.CacheEvent, there is a method called isSynthetic() which returns true for synthetic (internal) events such as eviction. An eviction will appear as a synthetic delete event.
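In plain-Java terms (a toy model, not Coherence API; names invented), the suggestion amounts to reloading on synthetic delete events and ignoring ordinary removals:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Toy model of the MapListener suggestion: on a synthetic (eviction/expiry)
// delete event, reload the value from the backing store; ignore ordinary
// deletes that the application performed on purpose.
class ReloadOnEvict<K, V> {
    final Map<K, V> cache = new HashMap<>();
    final Function<K, V> loader; // stands in for the database load

    ReloadOnEvict(Function<K, V> loader) { this.loader = loader; }

    // 'synthetic' models CacheEvent.isSynthetic() being true for evictions
    void onDelete(K key, boolean synthetic) {
        if (synthetic) {
            cache.put(key, loader.apply(key)); // force a refresh
        }
    }
}
```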
Jon Purdy
Tangosol, Inc. -
Querying objects not in the NamedCache
The wiki topic on querying (http://wiki.tangosol.com/display/COH32UG/Querying+the+Cache ) points out that a query will "apply only to currently cached data".
This seems fairly logical because it seems unreasonable to expect the cache to hold onto information that it has already evicted.
If you were to design a DAO layer (following http://wiki.tangosol.com/display/COH32UG/Managing+an+Object+Model ) using the first of the following architectures:
1. Direct Cache access:
App <---> NamedCache <---> CacheStore <---> DB
2. Direct Cache and DB-DAO access:
App
|
CacheAwareDAO <---> CacheStore <---> DB
|
NamedCache
|
CacheStore
|
DB
you would then have a situation where you would not be able to query evicted data.
So by using the 2nd strategy I assume you would probably always want to bypass the cache for all queries other than by primary key, to ensure that you are always querying the entire persistent population.
This seems a little coarse grained and also reduces the utility of the Coherence cache (unless the bulk of your queries are by primary key).
Can anybody tell me if my assumption is wrong and if there are any usage strategies the mitigate this aspect?
Thx,
Ben
Hi Rob,
> Why would you need 2 separate caches?
The first cache would have an eviction policy and cache values, but would not have indexes;
the second would not have eviction and would not store data, but would have index updates on changes.
This way you have a fully indexed but not stored data-set, similar to the difference between stored and indexed attributes in Lucene.
> Why not just
> maintain a index within each cache so that every
> entry causes the index to get updated inline (i.e.
> synchronously within the call putting the data into
> the cache)?
>
You cannot manually maintain an index, because that is not a configurable extension point (it is not documented how an index should be updated manually). You have to rely on Coherence to do it for you upon changes to entries in the owned partitions.
And since Coherence removes index references to evicted or removed data, the index would not know about the non-cached data.
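For reference, this is what declaring an index looks like with the real Coherence 3.x API; the cache name and method name here are made up. The index is then maintained entirely by Coherence as entries change:

```java
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.util.extractor.ReflectionExtractor;
import com.tangosol.util.filter.EqualsFilter;
import java.util.Set;

public class IndexExample {
    public static void main(String[] args) {
        // An index is declared once; Coherence updates it itself as entries
        // in the owned partitions change. There is no supported way to
        // update it by hand.
        NamedCache cache = CacheFactory.getCache("trades");
        cache.addIndex(new ReflectionExtractor("getSymbol"), /*ordered*/ false, null);

        // Filter queries consult the index transparently, but only over the
        // entries currently held in the cache, as discussed above.
        Set results = cache.entrySet(new EqualsFilter("getSymbol", "ORCL"));
        System.out.println(results.size() + " matching entries currently cached");
    }
}
```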
Or did I misunderstand how you imagine the indexes being maintained? Did you envision an index separate from what Coherence has?
> (You may have to change Coherence to do this.....)
Changing Coherence was exactly what I was trying to avoid. I tried to stay within the documented extension points and allowed operations, although it is possible that I still strayed outside them.
Of course, if changing Coherence is an option, then adding a way to exclude eviction events from index updates is probably the optimal solution.
> And I don't think that the write-behind issue would
> be a problem, as the current state cache of the cache
> (and it's corresponding index) reflects the future
> state of the backing store (which accordingly to
> Coherence's resilience guarantee will definitely
> occur).
>
The index on the second cache in the write-behind scenario would be out of sync only if the second cache is updated by invocations to the cache store of the first cache. If it is updated upon changes to the backing map, then it won't be. Obviously, if you have only one cache rather than two, it cannot be out of sync.
> So you would have a situation where cache evictions
> occur regularly but the index just overflows to disk
> in such a fashion that relevant portions of it can be
> recalled in an intelligent fashion, leveraging some
> locality of reference for example.
>
I don't really see how this could be done. AFAIK, all the indexes Coherence maintains are kept in memory and do not overflow to disk, though I may be wrong on this; then again, I may have misunderstood what you mean by index handling.
> a) you leverage locality of reference by using as
> much keyed data access as possible
> b) have Coherence do the through-reading
> c) use database DAO for range querying
> d) if you were to use Hibernate for (c), you might be
> able to double dip by using Coherence as an L2 cache.
> (I don't know if this unecessarily duplicates cached
> data....)
>
> Any thoughts on this?
a: If you already know the ids, this is the optimal solution, provided cache hit rates can be driven high enough. If you have to query for the ids first, the latency might be too high.
b: Read-through can become suboptimal since, AFAIK, the cache store currently reads rows one by one; only read-ahead uses loadAll (I may be wrong on this). Loading from the database can also be optimized for multiple-id loads, making it faster than the same access via the cache store. So with read-through it is very important that the cache hit rate be very high for performance-relevant data.
c: Use a database DAO for complex querying, possibly for almost anything more complex than straight top-down queries. Run performance tests of both solutions, try to exploit partition affinity, and design data structures whose indexes can be queried with as few queries and index accesses as possible.
d: You cannot query the Hibernate second-level cache via Coherence, because Hibernate second-level caches do not contain structured data; they hold the column values serialized into byte arrays (into separate byte[]s or a single byte[], I don't remember which).
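Point b could also be addressed on the client side by collecting cache misses and loading them from the database in one bulk call rather than one by one. A sketch, with plain Maps standing in for the cache and the database:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class BulkLoader {
    // Fetch many ids: one pass over the cache, then a single bulk database
    // round trip for the misses (simulated here with a Map lookup; real code
    // would issue one SELECT ... WHERE id IN (...) for all of them).
    public static Map getAll(Map cache, Map database, Collection ids) {
        Map result = new HashMap();
        List misses = new ArrayList();
        for (Object id : ids) {
            Object value = cache.get(id);
            if (value != null) {
                result.put(id, value);
            } else {
                misses.add(id);
            }
        }
        for (Object id : misses) {
            Object value = database.get(id);
            if (value != null) {
                result.put(id, value);
                cache.put(id, value);   // warm the cache for next time
            }
        }
        return result;
    }
}
```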
Best regards,
Robert -
As a simple test, I started up Coherence on windows using the coherence.cmd file, keeping all the default options as they are. I added coherence.jar and tangosol.jar to my classpath, and wrote a small test just putting a few objects in a cache, and then pulling them out using get and entrySet.
When I print out the objects that I get back with the get call, I see from the memory address that if I get the same key multiple times, I get back a reference to the same object every time, which is what I want.
I'd like to have it work the same way when I do a query. In my test when I tried it out, if I run the same query twice, each one getting back 2 results, I wind up with 4 new objects in my JVM. In our case, the objects that we're caching are basically immutable and don't get updated by our application. Is there a way to tell Coherence to reuse the objects that it's already given me in previous results?
Thanks in advance.
Thanks for the reply Robert.
> For a local cache this can be true (provided that the
> entry was not changed). For a replicated cache this
> may or may not be true; for a distributed or near
> cache it most likely would not be true.
I'm not sure what you mean when you say that it most
likely would not be true. Is there a way to
guarantee that it would be true?
For local cache and near cache it depends on eviction settings and explicit changes to the cache.
For a replicated cache it depends on eviction settings and explicit changes to the cache, and also on whether the replicated cache decides to relinquish its references to the object form and store only the binary form (I am not sure what, if anything, can trigger this).
> See above. Also, this is not really a good idea
> anyway as it leads to thread-safety issues.
Are there thread-safety issues if our client is only
using these objects in a read-only capacity?
Yes, there could be.
Imagine that you hold on to an object returned by Coherence and use the information in it in a read-only way, but Coherence changes that object while you are still reading from it, because someone on a different thread put a new entry into the cache for the same key.
If really no one changes the data-set, i.e. it is populated once and never changed afterwards, then there are no thread-safety problems.
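In that populate-once case, the reference reuse asked about above can be approximated by interning results on the client side. A minimal sketch, with java.util.Map standing in for the NamedCache (which implements Map):

```java
import java.util.HashMap;
import java.util.Map;

public class InterningView {
    private final Map cache;                      // e.g. a NamedCache
    private final Map canonical = new HashMap();  // key -> first instance seen

    public InterningView(Map cache) {
        this.cache = cache;
    }

    // Return the same instance for the same key on every call. Only safe if
    // entries are effectively immutable, for the reasons discussed above.
    public synchronized Object get(Object key) {
        Object value = canonical.get(key);
        if (value == null) {
            value = cache.get(key);               // may deserialize a fresh copy
            if (value != null) {
                canonical.put(key, value);
            }
        }
        return value;
    }
}
```

The same idea extends to query results by interning each returned entry under its key.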
> My concern is that we would have thousands of objects
> in the cache that we'd want to be able to query many
> different ways, but I didn't want to have all the
> overhead of all the duplicate objects in our JVM.
> I'm new to Coherence...am I looking at this
> correctly? Is this even a valid concern?
What would be the memory footprint of the result-set for a single thread's query, in a unit of work that cannot be made independent of the previous one? Multiply that by the number of threads that saturates your box (if you don't depend on external resources, this is practically the number of cores you allocated to the JVM). Would this really be so huge?
You could probably write some more about what you are trying to solve.
Best regards,
Robert -
Failed object is Object ID not the JDO Instance
Hi guys,
With 2.3.0B5, I tried parsing the JDOUserException thrown when an opt lock failure occurs. I found
that the nested JDOException had a failed object as I expected. But the failed object was not the
JDO instance but rather its object id. Thinking this strange, I looked over the spec, and sure
enough it is unspecific about the object that should be returned as the failed object. I suspect
that the spec is unspecific because no one considered the possibility of using anything other than the
JDO instance. It certainly makes sense to me that it would be the JDO instance since one reason for
parsing the exception is to evict the offending instances.
David Ezzio
Good point. We'll try to change it to the actual object.
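The parsing described in this thread might look roughly like the sketch below under the JDO 1.0 API, with an extra getObjectById step needed as long as getFailedObject() returns the id rather than the instance (not tested against Kodo):

```java
import javax.jdo.JDOException;
import javax.jdo.JDOUserException;
import javax.jdo.PersistenceManager;

public class OptLockHandler {
    // Walk the nested exceptions of an optimistic-lock failure and evict the
    // offending instances. With Kodo 2.3.0B5, getFailedObject() returns the
    // object id, so it must be resolved via getObjectById() before evicting.
    public static void evictFailedInstances(PersistenceManager pm, JDOUserException e) {
        Throwable[] nested = e.getNestedExceptions();
        if (nested == null) {
            return;
        }
        for (int i = 0; i < nested.length; i++) {
            if (nested[i] instanceof JDOException) {
                Object failed = ((JDOException) nested[i]).getFailedObject();
                if (failed != null) {
                    // failed is the object id, not the instance
                    pm.evict(pm.getObjectById(failed, false));
                }
            }
        }
    }
}
```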