NullPointerException invoking Processor on Partitioned Cache

Hi all,
We are getting a NullPointerException when trying to invoke an EntryProcessor on Service B from within an EntryProcessor of Service A on a partitioned cache.
The strange thing is that it works fine when there is only one node, but fails as soon as there is more than one node. I guess that makes some sense, in that there won't be any serialization if there is only one node.
Can anyone suggest what may be causing the problem from the stack trace?
16:20:19,715 2012-05-02 16:20:19.715/3.272 Oracle Coherence GE 3.7.1.3 <D6> (thread=Proxy:FeedHandlerExtendTcpProxyService:TcpAcceptor, member=3): Opened: Channel(Id=1136375156, Open=true, Connection=0x000001370E233D590A660314E7BFF3B54A24F82E01F98987CE652E1F3C5DD30E)
16:20:29,913 2012-05-02 16:20:29.910/13.467 Oracle Coherence GE 3.7.1.3 <D5> (thread=Proxy:FeedHandlerExtendTcpProxyService:TcpAcceptorWorker:1, member=3): An exception occurred while processing a InvokeRequest for Service=Proxy:FeedHandlerExtendTcpProxyService:TcpAcceptor: (Wrapped: Failed request execution for TradesPartitionedCache service on Member(Id=3, Timestamp=2012-05-02 16:20:18.135, Address=10.102.3.20:8088, MachineId=63987, Location=site:,machine:LONW00067144,process:9356, Role=cache)) Portable(com.tangosol.util.WrapperException): (Wrapped: Failed request execution for PositionPartitionedCache service on Member(Id=2, Timestamp=2012-05-02 16:13:27.876, Address=10.102.3.20:8090, MachineId=63987, Location=site:,machine:LONW00067144,process:12892, Role=cache)) null
     at com.tangosol.util.Base.ensureRuntimeException(Base.java:288)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.tagException(Grid.CDB:36)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onInvokeRequest(PartitionedCache.CDB:68)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$InvokeRequest.run(PartitionedCache.CDB:1)
     at com.tangosol.coherence.component.util.DaemonPool$WrapperTask.run(DaemonPool.CDB:1)
     at com.tangosol.coherence.component.util.DaemonPool$WrapperTask.run(DaemonPool.CDB:32)
     at com.tangosol.coherence.component.util.DaemonPool$Daemon.onNotify(DaemonPool.CDB:63)
     at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
     at java.lang.Thread.run(Unknown Source)
Caused by: Portable(com.tangosol.util.WrapperException): (Wrapped: Failed request execution for PositionPartitionedCache service on Member(Id=2, Timestamp=2012-05-02 16:13:27.876, Address=10.102.3.20:8090, MachineId=63987, Location=site:,machine:LONW00067144,process:12892, Role=cache)) null
     at com.tangosol.io.pof.ThrowablePofSerializer.deserialize(ThrowablePofSerializer.java:57)
     at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3316)
     at com.tangosol.io.pof.PofBufferReader.readObject(PofBufferReader.java:2604)
     at com.tangosol.io.pof.ConfigurablePofContext.deserialize(ConfigurablePofContext.java:368)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.readObject(Service.CDB:1)
     at com.tangosol.coherence.component.net.Message.readObject(Message.CDB:1)
     at com.tangosol.coherence.component.net.message.SimpleResponse.read(SimpleResponse.CDB:6)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.deserializeMessage(Grid.CDB:19)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:31)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService.onNotify(PartitionedService.CDB:3)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onNotify(PartitionedCache.CDB:3)
     ... 2 more
Caused by: Portable(java.lang.NullPointerException)
     at com.tangosol.io.pof.ThrowablePofSerializer.deserialize(ThrowablePofSerializer.java:57)
     at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3316)
     at com.tangosol.io.pof.PofBufferReader.readObject(PofBufferReader.java:2604)
     at com.tangosol.io.pof.PortableException.readExternal(PortableException.java:150)
     at com.tangosol.io.pof.ThrowablePofSerializer.deserialize(ThrowablePofSerializer.java:59)
     ... 12 more
thanks
Edited by: bish on 02-May-2012 17:42

Hi,
Have you tested that your EntryProcessors both serialize and deserialize properly? The last time I saw a stack trace similar to that (yesterday, in fact) it was an error deserializing an EntryProcessor. If your code works with a single node then that would point to a serialization issue.
Unit testing all your POF classes is always a good idea...
InvocableMap.EntryProcessor original = ... create the object you want to test ...
ConfigurablePofContext pofContext = new ConfigurablePofContext("... name of your POF config file ...");
Binary binary = ExternalizableHelper.toBinary(original, pofContext);
InvocableMap.EntryProcessor result = (InvocableMap.EntryProcessor) ExternalizableHelper.fromBinary(binary, pofContext);
... do some assertions to make sure the "result" matches the "original" ...
On another note - calling a second EntryProcessor from inside the first EntryProcessor could lead to problems unless you are very careful to avoid thread starvation and cache/service re-entrancy.
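Fleshing that sketch out, a minimal JUnit round-trip test might look like the following (MyEntryProcessor and my-pof-config.xml are hypothetical stand-ins for your own processor class and POF configuration file):
import com.tangosol.io.pof.ConfigurablePofContext;
import com.tangosol.util.Binary;
import com.tangosol.util.ExternalizableHelper;
import com.tangosol.util.InvocableMap;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class MyEntryProcessorPofTest {
    @Test
    public void shouldRoundTripThroughPof() {
        // MyEntryProcessor is a hypothetical processor with a sensible equals()
        InvocableMap.EntryProcessor original = new MyEntryProcessor("someTradeId", 42);

        // Use the same POF configuration file the cluster uses
        ConfigurablePofContext pofContext = new ConfigurablePofContext("my-pof-config.xml");

        // Serialize and then deserialize the processor
        Binary binary = ExternalizableHelper.toBinary(original, pofContext);
        InvocableMap.EntryProcessor result =
                (InvocableMap.EntryProcessor) ExternalizableHelper.fromBinary(binary, pofContext);

        // If this fails (or throws), the processor will also fail on the wire
        assertEquals(original, result);
    }
}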
JK

Similar Messages

  • Using a partitioned cache with off-heap storage for backup data

    Hi,
    Is it possible to define a partitioned cache (with data on the heap) with off-heap storage for backup data?
    I think it could be worthwhile to do so, as backup data is associated with a different access pattern.
    If so, what are the impacts of such off-heap storage for backup data?
    Particularly, what are the impacts on performance?
    Thanks.
    Regards,
    Dominique

    Hi,
    It seems that using a scheme for the backup-store is broken in the latest version of Coherence; I've got an exception using your setup.
    2010-07-24 12:21:16.562/7.969 Oracle Coherence GE 3.6.0.0 <Error> (thread=DistributedCache, member=1): java.lang.NullPointerException
         at com.tangosol.net.DefaultConfigurableCacheFactory.findSchemeMapping(DefaultConfigurableCacheFactory.java:466)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage$BackingManager.isPartitioned(PartitionedCache.java:10)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage.instantiateBackupMap(PartitionedCache.java:24)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage.setCacheName(PartitionedCache.java:29)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$ServiceConfig$ConfigListener.entryInserted(PartitionedCache.java:17)
         at com.tangosol.util.MapEvent.dispatch(MapEvent.java:266)
         at com.tangosol.util.MapEvent.dispatch(MapEvent.java:226)
         at com.tangosol.util.MapListenerSupport.fireEvent(MapListenerSupport.java:556)
         at com.tangosol.util.ObservableHashMap.dispatchEvent(ObservableHashMap.java:229)
         at com.tangosol.util.ObservableHashMap$Entry.onAdd(ObservableHashMap.java:270)
         at com.tangosol.util.SafeHashMap.put(SafeHashMap.java:244)
         at com.tangosol.coherence.component.util.ServiceConfig$Map.put(ServiceConfig.java:43)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$StorageIdRequest.onReceived(PartitionedCache.java:45)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.java:11)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.java:33)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService.onNotify(PartitionedService.java:3)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onNotify(PartitionedCache.java:3)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.java:42)
         at java.lang.Thread.run(Thread.java:619)
    Tracing in a debugger has shown that the problem is in the PartitionedCache$Storage#setCacheName(String) method: it calls instantiateBackupMap(String) before setting the __m_CacheName field.
    It is broken in 3.6.0b17229
    PS: using an asynchronous wrapper around disk-based backup storage should reduce the performance impact.

  • What happens to lock on expired item in partitioned cache?

    With a partitioned cache, what happens if an object with a lock on it expires?
    In other words, if I put it in with an expiry, something locks it, and it expires while the lock is present, what happens?

    Hi mesocyclone,
    The lock/unlock API is completely orthogonal to the data-related API (get, put, invoke, etc.). Presence or absence of the data has no effect on the lock.
    Regards,
    Gene
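    To illustrate Gene's point, here is a minimal hedged sketch (cache name, key and timings are made up) showing that expiry removes the data but leaves the lock in place until it is explicitly released:
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class LockVersusExpiry {
        public static void main(String[] args) throws InterruptedException {
            NamedCache cache = CacheFactory.getCache("orders");    // hypothetical cache name

            cache.put("order-1", "some value", 5000L);  // entry expires after 5 seconds
            cache.lock("order-1", -1);                  // block until the lock is obtained

            Thread.sleep(10000L);                       // let the entry expire while locked

            System.out.println(cache.get("order-1"));   // prints null - the data has expired...
            cache.unlock("order-1");                    // ...but the lock was still held until now
        }
    }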

  • Does arch support " Intel Core i7-920 processor(8MB L3 Cache, 2."

    Hi, I'm new to Linux as well as Arch, but would like to try Arch, since a lot of Linux users say Arch is good. I got my new home PC with an "Intel® Core™ i7-920 processor (8MB L3 Cache, 2.66GHz)". Does Arch support this processor to its full potential? I did a search in the forums and could not find the answer. Thanks in advance.

    As an x86 compatible CPU, why would it not be supported? Newer stuff is backwards compatible; you should worry about older stuff not being supported anymore, due to the new instructions introduced in more recent products.
    Also, we've had this kind of question before: http://bbs.archlinux.org/viewtopic.php?id=88585
    Welcome to the forums ganeshsarathi - be sure to read the stickies. Our BBS search function is a bit shot, it's best to use google (or another engine that can filter on a specific URL).

  • Concurrency with Entry Processors on Replicated Cache

    Hi,
    In the documentation it says that entry processors on replicated caches are executed on the initiating node.
    How is concurrency handled in this situation?
    What happens if two or more nodes are asked to execute something on an entry at the same time?
    What happens if the node initiating the execution is a storage disabled node?
    Thanks!

    Jonathan.Knight wrote:
    In a distributed cache the entry processor runs on the node that owns the cache entry. In a replicated cache the same entries are on all nodes, so I think one of the questions was what happens in this scenario. I presume the EP only executes on one of the nodes - it would not make sense for it to execute on all nodes - but which one does it use? Is there still a concept of owner for a replicated cache or is it random?
    At this point I would have coded a quick experiment to prove what happens but unfortunately I am a tad busy right now.
    JK

    Hi Jonathan,
    in the replicated cache there is still a notion of ownership of an entry, in Coherence terms it is called a lease. It is always owned by the last node to have carried out a successful modification on it, where modification may be a put/remove but it can also be a lock operation. Lease granularity is per entry.
    Practically the lock operation in the code Dimitri pasted serves two purposes. First it ensures no other nodes can lock it, second it brings the lease to the locking node, so it can correctly execute the entry-processor locally on the entry.
    Best regards,
    Robert
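    The code Dimitri posted is not reproduced in this thread, but the lock-based pattern Robert describes might look roughly like the sketch below (cache name and key are made up):
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class ReplicatedCacheLeaseExample {
        public static void main(String[] args) {
            NamedCache cache = CacheFactory.getCache("replicated-prices");  // hypothetical replicated cache

            Object key = "EUR/USD";
            // Locking pulls the lease for this entry onto the local node and
            // prevents other nodes from locking it at the same time.
            if (cache.lock(key, -1)) {
                try {
                    Object value = cache.get(key);
                    // ... apply the processor logic locally against "value" ...
                    cache.put(key, value);
                } finally {
                    cache.unlock(key);
                }
            }
        }
    }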

  • Invoke methods from remote cache

    Hi, Guys
         I want to invoke methods from remote cache node WITHOUT joining the cluster.
         Do you provide some mechanism to implement this?
         Currently, I set up an empty cache which joined the same cluster to invoke methods.
         Thanks for your support.

    > I want to invoke methods from remote cache node WITHOUT joining the cluster.
    > Do you provide some mechanism to implement this?
         Absolutely. It is the "client/server" extension to Coherence, which is called Coherence*Extend.
         See:
         http://wiki.tangosol.com/display/COH32UG/Configuring+and+Using+Coherence*Extend
         Peace,
         Cameron Purdy
         Tangosol Coherence: The Java Data Grid

  • Partition Cache

    Hi all ,
    Can anyone help me to set up a partitioned cache for more than 2 nodes, with all the information on how to set up the server-side cache-config.xml and the client-side cache-config.xml?
    The purpose is load-balancing.
    thanks in advance
    Vinod Yadav

    Hi Vinod,
    The Best Practices document below can be useful for what you are trying to achieve:
    http://coherence.oracle.com/display/COH35UG/Best+Practices
    Thanks,
    Cris

  • Partition-level Transactions - Entry Processor access "other" cache.

    Hi there,
    I'm playing around with Coherence 3.7 and like the look of the new Partition-level Transactions. I'm trying to implement an Entry Processor that accesses an "other" cache in order to create an audit record.
    The slides from Brian's SIG presentation show the following code for an example EP doing this:
    public void process(Entry entry) {
        // Update an entry in another cache.
        ((BinaryEntry) entry).getBackingMapContext("othercache").getBackingMap().put("othercachekey", value);
    }
    The problem I'm facing is that the API doesn't seem to have an implementation of BinaryEntry.getBackingMapContext(String cacheName). It just has a no-arg version which accesses the cache of the current entry. It's not just an error in the API docs, as the code doesn't compile either.
    Any ideas what to do here?
    Cheers,
    Steve

    Care to expand on that, Charlie?
    Reason I ask is that since I posted my reply to JK, I noticed that I was getting classcast errors on my audit record insert in the server logs. I had to resort to the use of "converters" to get my newly created audit record into binary format before the "put" into the second (also distributed) cache was successful without errors:
    BackingMapManagerContext ctx = ((BinaryEntry)entry).getContext();
    ctx.getBackingMapContext("PositionAudit").getBackingMap().put(
    ctx.getKeyToInternalConverter().convert(pa.getAuditId()),
    ctx.getValueToInternalConverter().convert(pa));
    The "PositionAudit" cache is the one I want to create a new entry in each time an EntryProcessor is invoked against Position objects in the "Position" cache. The object I'm creating for that second cache, "pa" is the newly created audit object based on data in the "position" object the entry processor is actually invoked against.
    So the requirement is pretty simple, at least in my mind: Position objects in the "positions" cache get updated by an EntryProcessor. When the EntryProcessor fires, it must also write an "audit" record/object to a second cache, based upon data in the Position object just manipulated. As all of the caches are distributed caches - which store their data in Binary format, AFAIK - I still have to do the explicit conversion to binary format of my new audit object to get it successfully "put" into the audit cache.
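    Pulling those pieces together, a hedged sketch of the whole process() method might look like this (Position, PositionAudit and their accessors are hypothetical, and POF serialization of the processor itself is omitted):
    import com.tangosol.net.BackingMapManagerContext;
    import com.tangosol.util.BinaryEntry;
    import com.tangosol.util.InvocableMap;
    import com.tangosol.util.processor.AbstractProcessor;

    public class PositionUpdateProcessor extends AbstractProcessor {
        public Object process(InvocableMap.Entry entry) {
            // Update the Position entry this processor was invoked against.
            Position position = (Position) entry.getValue();
            // ... apply the business update to "position" ...
            entry.setValue(position);

            // Build the audit record and put it, already converted to binary form,
            // into the co-located "PositionAudit" cache (same partition via key association).
            PositionAudit pa = new PositionAudit(position);
            BackingMapManagerContext ctx = ((BinaryEntry) entry).getContext();
            ctx.getBackingMapContext("PositionAudit").getBackingMap().put(
                    ctx.getKeyToInternalConverter().convert(pa.getAuditId()),
                    ctx.getValueToInternalConverter().convert(pa));

            return null;
        }
    }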
    Seems to still be quite a bit of messing around (including the KeyAssociator stuff to make sure these objects are in the same partitions in the first place for all this to work), but at least I now get "transactionally atomic" operations across both the positions and positions audit caches, something that couldn't be done from an EP prior to 3.7.
    As I say, it works now. Just want to make sure I'm going about it the right way. :)
    Any comments appreciated.
    Cheers,
    Steve

  • Best way to determine insertion order of items in cache for FIFO?

    I want to implement a FIFO queue. I plan on one producer placing unprocessed Orders into a cache. Then multiple consumers will each invoke an EntryProcessor which gets the oldest unprocessed order, sets it processed=true and returns it. What's the best way to determine the oldest object based on insertion order? Should I timestamp the objects with a trigger when they're added to the cache and then index by that value? Or is there a better way - maybe something Coherence automatically saves when objects are inserted? Also, it's not critical that the processing order be precisely FIFO; close is good enough.
    Also, since the consumer won't know the key value for the object it will receive, how could the consumer call something like this so it doesn't violate Constraints on Re-entrant Calls? http://wiki.tangosol.com/display/COH34UG/Constraints+on+Re-entrant+Calls
    Thanks,
    Andrew
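    For the trigger-plus-index idea raised in the question, a minimal sketch might look like the following (the Order class and its createdMillis property are hypothetical; with POF the trigger would need registering in your POF configuration rather than relying on Java serialization, and timestamps taken on different nodes are only approximately ordered, which matches the "close is good enough" requirement):
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.util.MapTrigger;
    import com.tangosol.util.MapTriggerListener;
    import com.tangosol.util.extractor.ReflectionExtractor;

    // Stamps each Order with an insertion timestamp as it is put into the cache.
    public class OrderTimestampTrigger implements MapTrigger, java.io.Serializable {
        public void process(MapTrigger.Entry entry) {
            Order order = (Order) entry.getValue();       // Order is hypothetical
            if (order.getCreatedMillis() == 0L) {         // only stamp on first insert
                order.setCreatedMillis(System.currentTimeMillis());
                entry.setValue(order);
            }
        }

        public static void register() {
            NamedCache orders = CacheFactory.getCache("orders");
            orders.addMapListener(new MapTriggerListener(new OrderTimestampTrigger()));
            // Sorted index so "oldest unprocessed order" queries stay cheap.
            orders.addIndex(new ReflectionExtractor("getCreatedMillis"), true, null);
        }
    }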

    Ok, I think I can see where you are coming from now...
    By using a queue for each FIX session you will be experiencing some latency as data is pushed around inside the cluster between the 'owning node' for the order and the location of the queue; but if this is acceptable then great. The number of hops within the cluster, and hence the latency, will depend on where and how you detect changes to your orders. The advantage of assigning specific orders to each queue is that this will not change should the cluster rebalance; however you should consider what happens if the node controlling a specific FIX session is lost - do you recover from the FIX log? If so, where is that log kept? Remember to consider what happens if your cluster splits, such that the node with the FIX session is still alive, but is separated from the rest of the cluster. In examining these failure cases you may decide that it is easier to use Coherence's in-built partitioning to assign orders to sessions rather than an attribute of the order object.
    snidely_whiplash wrote:
    Only changes to orders which result in a new order or replace needing to be sent cause an action by the FIX session.
    There are several different mechanisms you could use to detect changes to your orders and hence decide if they need to be enqueued:
    1. Use a post trigger that is fired on order insert/update and performs the filtering of changes and if necessary adds the item to the FIX queue
    2. Use a cache store that does the same as (1)
    3. Use an entry processor to perform updates to the order object (as I believe you previously mentioned) and performs logic in (1)
    4. Use a CQC on the order cache
    5. A map listener on the order cache
    The big difference between 1-3 and 4, 5 is that the CQC is i) a SPOF ii) not likely located in the same place as your order object or the queue (assuming that queue is in fact an object in another cache), iii) asynchronously fired hence introducing latency. Also note that the CQC will store your order objects locally whereas a map listener will not.
    (1) and (3) will give you access to both old and new values should that be necessary for your filtering logic.
    Note you must be careful not to make any re-entrant calls with any of 1-3. That means if you are adding something to a FIX queue object in another cache (say using an entry processor) then it should be on a different cache service.
    snidely_whiplash wrote:
    If I move to a CacheStore-based setup instead of the CQC-based one, then any change to an order, including changes made when executions or rejects return on the FIX session, will result in the store() method being called, which means it will be called unnecessarily a lot. It would be nice if I could specify that the CacheStore only store() certain types of changes, ie. those that would result in sending a FIX message. Anything like that possible?
    There is negligible overhead in Coherence calling your store() method; assuming that your code can decide whether anything FIX-related needs to be done based only on the new value of the order object, this should be very fast indeed.
    snidely_whiplash wrote:
    What's a partitioned "token cache"?
    This is a technique I have used in the past for running services. You create a new partitioned cache into which you place 'tokens' representing a user-defined service that needs to be run. The insertion/deletion of a token in the backing map fires a backing map listener to start/stop a service (note there are 2 causes of insert/delete in a backing map - i) a user, ii) cluster repartitioning). In this case that service might be a FIX session. If you need to designate a specific member on which a service needs to run then you could add the member id to the token object; however be aware that, unless you write your own partitioning strategy, the token will likely not live on the cache member the token indicates, in which case you would want a full map listener or CQC to listen for tokens rather than a backing map listener.
    I hope that's useful rather than confusing!
    Paul

  • Updating a hierarchical data structure from an entry processor

    I have a tree-like data structure that I am attempting to update from an AbstractProcessor.
    Imagine that one value is a collection of child value keys, and I want to add a new child node in the tree. This requires updating the parent node (which contains the list of child nodes), and adding the child value which is a separate entry.
    I would rather not combine all bits of data into one value (which could make for a large serialized object), as sometimes I prefer to access (read-only) the child values directly. The child and the parent values live in the same partition in the partitioned cache, though, so get access should be local.
    However, I am attempting to call put() on the same cache to add a child value which is apparently disallowed. It makes sense that a blocking call is involved in this operation, as it needs to push out this data to the cluster member that has the backup value for the same operation, but is there a general problem with performing any kind of re-entrant work on Coherence caches from an entry processor for any value that is not the value you are processing? I get the assertion below.
    I am fine with the context blocking (preventing reads or writes on the parent node value) until the child completes, presuming that I handle deadlock prevention myself due to the order in which values are accessed.
    Is there any way to do this, either with entry processors or not? My code previously used lock, get and put to operate on the tree (which worked), but I am trying to convert this code to use entry processors to be more efficient.
    2008-12-05 16:05:34.450 (ERROR)[Coherence/Logger@9219882 3.4/405]: Assertion failed: poll() is a blocking call and cannot be called on the Service thread
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.poll(Grid.CDB:4)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.poll(Grid.CDB:11)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.put(DistributedCache.CDB:30)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.put(DistributedCache.CDB:1)
         at com.tangosol.util.ConverterCollections$ConverterCacheMap.put(ConverterCollections.java:2433)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$ViewMap.put(DistributedCache.CDB:1)
         at com.tangosol.coherence.component.util.SafeNamedCache.put(SafeNamedCache.CDB:1)
         at com.tangosol.net.cache.CachingMap.put(CachingMap.java:928)
         at com.tangosol.net.cache.CachingMap.put(CachingMap.java:887)
         at com.tangosol.net.cache.NearCache.put(NearCache.java:286)
         at com.conduit.server.properties.CLDistributedPropertiesManager$UpdatePropertiesProcessor.process(CLDistributedPropertiesManager.java:249)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$Storage.invoke(DistributedCache.CDB:20)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onInvokeRequest(DistributedCache.CDB:50)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$InvokeRequest.run(DistributedCache.CDB:1)
         at com.tangosol.coherence.component.net.message.requestMessage.DistributedCacheKeyRequest.onReceived(DistributedCacheKeyRequest.CDB:12)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:9)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:130)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onNotify(DistributedCache.CDB:3)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:37)
         at java.lang.Thread.run(Thread.java:637)

    Hi,
    reentrant calls to the same Coherence service are very much recommended against.
    For more about it, please look at the following Wiki page:
    http://wiki.tangosol.com/display/COH34UG/Constraints+on+Re-entrant+Calls
    Best regards,
    Robert

  • Data processing in multiple caches in the same node

    We are using a partitioned cache to load data (multiple types of data) into multiple named caches. In one partition, we plan to have all related data, and we have achieved this using key association.
    Now I want to do processing within that node and do some reconciliation of the data from various sources. We tried an entry processor, but we want to consolidate all the data from multiple named caches on the node. In a very naive form, I am thinking of each named cache as a table, and I am looking at ways to have a processor that will do some processing on the related data.
    I see we could use a combination of an Invocable object, the Invocation service and entry processors, but I am unable to implement it successfully.
    Can you please point me to any reference implementation where I can do processing at the data node without transferring data back to the client?
    Also any reference implementation of Map reduce (at the server side) in coherence would be helpful.
    Regards
    Ganesan

    Hi Ganesan
    A common approach to perform processing in the grid is to execute background threads in response to backing map listener events. The processing is co-located with data because the listener will be called in the JVM that owns the data. The thread can then make Coherence calls to access caches just like any other Coherence client.
    The Coherence Incubator has numerous examples of this at http://coherence.oracle.com/display/INCUBATOR/Home. The Incubator Common component includes an event processing package that simplifies the handling of events. See the messaging pattern for an example: Message.java and MessageEventManager.java
    I am not sure I answered your question but I hope the information helps.
    Paul

  • Calling another cache from within AbstractProcessor.process fails, why?

    I want to enrich a certain data object in a distributed cache via the EntryProcessor framework. The new data is looked up from another nearby cache.
    However, when calling otherCache.get(id) Coherence throws an exception saying that it does not allow blocking calls from within a processor:
    [java] Caused by: com.tangosol.util.AssertionException: poll() is a blocking call and cannot be called on the Service thread...
    Is this fixable? What am I missing? Does anyone recommend alternative ways to read data from other caches inside AbstractProcessor?
    Thanks in advance,
    -Stefan
    This is the calling code that executes in the node where the primary object is:
    public class ExternalPropertiesUpdateProcessor extends AbstractProcessor implements PortableObject {
        @Override
        public Object process(InvocableMap.Entry entry) {
            Session session = null;
            if (entry.isPresent()) {
                session = (Session) entry.getValue();
                NamedCache cidCache = CacheFactory.getCache("cidCache");
                Long oldCid = new Long(97);
                CidConversion cidCon = (CidConversion) cidCache.get(oldCid); // * it fails here with the above exception!
                session.setCid(cidCon.getNewCid());
                session.setState(Session.SESSION_CLOSED);
                entry.setValue(session); // make session changes durable in cache
            }
            return session;
        }
    }
    Edited by: [email protected] on Dec 10, 2009 9:35 AM

    Ben Stopford wrote:
    Hi Rob
    You mention:
    "If you call back to the same cache service via an invocation service, you still risk a deadlock or a livelock due to thread-pool depletion. It looks like it is working, then you may run into the problem. It is not 100% safe."
    ...which sparked my interest. Would you mind elaborating a little further? I'd always assumed that the thread pools for separate services were independent and thus there was no chance of deadlock from cross-service calls such as this.
    Thanks in advance
    Ben

    Hi Ben,
    as I mentioned, it can cause a problem if you indirectly call back to a service which was on the call stack.
    Let's see an example:
    You send an entry processor to a cache in service A, which synchronously (with the query() method) calls to Invocation service I in which the invocable agent sends an entry-processor to service A (again) targeted to an entry on the same node.
    Let's for the sake of the simplicity of the example suppose service A has a thread pool of size 2.
    You send two such entry-processors on entries residing in the same node.
    We now have a possible deadlock. If both entry-processors entered the first process() method (doesn't matter what entry as long as they are on the same node), they can both proceed into the invocation service call, but from that none of them can proceed further to the inner invoke() call, because there are no more free threads in service A!!! Poof, deadlock, both threads will wait until a thread becomes available... and none will become available because none of the service calls can proceed.
    Of course, starting with 3.5 the Guardian will kill one of the threads (at least), and in earlier versions it will also time out, but the point is that they will not be able to complete in this scenario.
    Best regards,
    Robert
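    To make Robert's scenario concrete, here is a hedged sketch of the anti-pattern he describes (service and cache names are invented, serialization details are omitted); it is shown only to illustrate what to avoid:
    import com.tangosol.net.AbstractInvocable;
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.InvocationService;
    import com.tangosol.net.NamedCache;
    import com.tangosol.util.InvocableMap;
    import com.tangosol.util.extractor.ReflectionExtractor;
    import com.tangosol.util.processor.AbstractProcessor;
    import com.tangosol.util.processor.ExtractorProcessor;

    // ANTI-PATTERN: with a worker pool of size N on the cache service owning
    // "orders", N concurrent OuterProcessor invocations can exhaust the pool and
    // deadlock, because each blocked outer worker waits for InnerTask's
    // re-entrant invoke() against that same service.
    public class OuterProcessor extends AbstractProcessor {
        public Object process(InvocableMap.Entry entry) {
            InvocationService svc =
                    (InvocationService) CacheFactory.getService("MyInvocationService"); // hypothetical service name
            // query() blocks this cache-service worker thread until the agent finishes.
            svc.query(new InnerTask(entry.getKey()), null);
            return null;
        }

        public static class InnerTask extends AbstractInvocable implements java.io.Serializable {
            private final Object key;

            public InnerTask(Object key) {
                this.key = key;
            }

            public void run() {
                NamedCache orders = CacheFactory.getCache("orders"); // same cache service as OuterProcessor
                // Re-entrant call back into the same cache service: it needs another
                // free worker thread, which may not exist if they are all parked in
                // OuterProcessor above.
                setResult(orders.invoke(key, new ExtractorProcessor(new ReflectionExtractor("toString"))));
            }
        }
    }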

  • Partitioned & Replicated configuration

    Hi,
    I need to build a configuration with 6 dedicated cache nodes (ie. cache servers).
    Each of these nodes obviously needs to be outside the application environment (ie. outside the weblogic/websphere box).
    Components deployed on weblogic/websphere would invoke calls on this cache cluster (containing the above 6 nodes).
    Each node is expected to be part of a partition carrying a unique set of data (ie. each node will contain a different set of data). Furthermore, for the sake of failover, each of these 6 partitioned nodes needs to have a failover capability.
    Can anyone post a sample configuration (XML) file for the above setup?
    regards
    Mike.

    Just to re-iterate, the requirements listed are provided by the Coherence Partitioned Cache Service. See:
    http://wiki.tangosol.com/display/COH32UG/Partitioned+Cache+Service
    You would use a cache configuration file (XML file) to specify the cache. To enumerate the requirements and how they work:
    > I need to build a configuration with 6 dedicated cache nodes (ie. cache servers).
    Use the built-in cache server module. See:
    http://www.tangosol.com/downloads/javadoc/320/com/tangosol/net/DefaultCacheServer.html#main(java.lang.String[])
    I suggest that the caches should be configured with <local-storage> as false and the cache servers should specify -Dtangosol.coherence.distributed.localstorage=true on the command line. See:
    http://wiki.tangosol.com/display/COH32UG/distributed-scheme#distributed-scheme-localstorage
    > Each of these nodes obviously needs to be outside the application environment (ie. outside the weblogic/websphere box).
    If (as mentioned above) the caches are configured with <local-storage> as false, then the same cache config file when used inside WebLogic or WebSphere will attach to those same caches, but the data will only be managed by the dedicated "cache servers".
    > Components deployed on weblogic/websphere would invoke calls on this cache cluster (containing the above 6 nodes).
    Yes, this is exactly how the Coherence Partitioned Cache Service works.
    > Each node is expected to be part of a partition carrying a unique set of data (ie. each node will contain a different set of data).
    Yes, this is exactly how the Coherence Partitioned Cache Service works.
    > Furthermore, for the sake of failover, each of these 6 partitioned nodes needs to have a failover capability.
    Yes, this is exactly how the Coherence Partitioned Cache Service works. By default, there is one level of backup (one server can die at a time without losing data). You can adjust this up or down by setting the <backup-count> element. See:
    http://wiki.tangosol.com/display/COH32UG/distributed-scheme#distributed-scheme-backupcount
    I hope this makes it very clear :)
    Peace,
    Cameron Purdy
    Tangosol Coherence: Clustered Caching for Java

  • Processor upgrade on Satellite L300-1BV

    I've already upgraded the laptop from a T1600 (Celeron), to a T3200 (Pentium Dual Core). But does anyone know how far I can upgrade in the 478 P range of processors?
    If so, could you supply the processor numbers?
    Message was edited: Model number changed in subject

    It's a Satellite L300-1BV and NOT a Satellite L300D-1BV.
    That is a big difference!
    However, the Intel GL40 Express chipset supports these CPUs:
    Intel Celeron Processor 575 (1M Cache, 2.00 GHz, 667 MHz FSB)
    Intel Celeron Processor 585 (1M Cache, 2.16 GHz, 667 MHz FSB)
    Intel Celeron Processor T1700 (1M Cache, 1.83 GHz, 667 MHz FSB)
    Intel Celeron Processor T3100 (1M Cache, 1.90 GHz, 800 MHz FSB)
    Intel Celeron Processor T1600 (1M Cache, 1.66 GHz, 667 MHz FSB)
    Intel Celeron Processor T3000 (1M Cache, 1.80 GHz, 800 MHz FSB)
    Intel Celeron Processor 900 (1M Cache, 2.20 GHz, 800 MHz FSB)
    Intel Celeron Processor T3500 (1M Cache, 2.10 GHz, 800 MHz FSB)
    Intel Celeron Processor T3300 (1M Cache, 2.00 GHz, 800 MHz FSB)
    Intel Celeron Processor 925 (1M Cache, 2.30 GHz, 800 MHz FSB)
    Source: http://ark.intel.com/products/35501/Intel-82GL40-Graphics-and-Memory-Controller-Hub

  • Getting All Entries from a cache

    Hi Folks,
         Just a small interesting observation. In an attempt to get back all the data from my partitioned cache I tried the following approaches:
     //EntrySet
     NamedCache cache = CacheFactory.getCache("MyCache");
     Iterator<Entry<MyKeyObj, MyObj>> iter = cache.entrySet().iterator();
     //iterate over entries and get values
     //KeySet & getAll
     NamedCache cache = CacheFactory.getCache("MyCache");
     Map results = cache.getAll(cache.keySet());
     Iterator<Entry<MyKeyObj, MyObj>> iter = results.entrySet().iterator();
     //iterate over entries and get values
     Retrieving ~47k objects from 4 nodes takes 21 seconds using the entrySet approach and 10 seconds for the keySet/getAll approach.
     Does that sound right to you? That implies that the entrySet iterator is lazily loaded using get(key) for each entry.
         Regards,
         Max

    Hi Gene,
     I actually posted the question because we are currently performance-tuning our application, and there are scenarios where (due to having a large amount of badly organized legacy code with the bottom layers ported to Coherence) there are lots of invocations getting all the entries from some caches, sometimes even hundreds of times during the processing of an HTTP request.
     In some cases (typically with caches having a low cache-size) we found that the entrySet-AlwaysFilter solution was way faster than the keySet-getAll solution, which was about as fast as the solution iterating over the cache (new HashMap(cache)).
     I just wanted to ask if there are some rules of thumb on up to what cache size it is efficient to use the AlwaysFilter on distributed caches, and where it starts to be better to use the keySet-getAll approach (from a naive test case, keySet-getAll seemed to be better upwards of a couple of thousand entries).
     Also, we are considering moving some of the caches (static data mostly, with usually less than 1000 entries, sometimes even as few as a dozen entries in a named cache, and in very few cases as many as 40000 entries) to a replicated topology, which is why I asked about the effect of using replicated caches...
         I expect the entrySet-AlwaysFilter to be slower than the iterating solution, since it effectively does the same, and also has some additional filter evaluation to be done.
     The keySet-getAll will be something similar to the iterating solution, I guess.
         What is to be known about the implementation of the values() method?
         Can it be worth using in some cases? Does it give us an instant snapshot in case of replicated caches? Is it faster than the entrySet(void) in case of replicated caches?
         Thanks and best regards,
         Robert
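     A side-by-side sketch of the three bulk-read approaches discussed in this thread (the cache name is illustrative, raw types are used to match the Coherence 3.x API, and actual timings will depend on your data and cluster):
     import com.tangosol.net.CacheFactory;
     import com.tangosol.net.NamedCache;
     import com.tangosol.util.filter.AlwaysFilter;
     import java.util.Map;
     import java.util.Set;

     public class BulkReadComparison {
         public static void main(String[] args) {
             NamedCache cache = CacheFactory.getCache("MyCache");

             // 1. Plain entrySet(): the iterator may fetch values piecemeal,
             //    which matches the slower timings reported above.
             Set lazyEntries = cache.entrySet();

             // 2. keySet() + getAll(): two bulk round trips to the storage nodes.
             Map viaGetAll = cache.getAll(cache.keySet());

             // 3. entrySet(Filter): a query evaluated in parallel on the storage nodes.
             Set filteredEntries = cache.entrySet(new AlwaysFilter());

             System.out.println(lazyEntries.size() + " / " + viaGetAll.size()
                     + " / " + filteredEntries.size() + " entries returned");
         }
     }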
