Partition-level Transactions - Entry Processor access "other" cache.

Hi there,
Playing around with Coherence 3.7 and like the look of the new Partition-level Transactions. I'm trying to implement an Entry Processor that accesses an "other" cache in order to create an audit record.
The slides from Brian's SIG presentation show the following code for an example EP doing this:
public void process(Entry entry) {
    // Update an entry in another cache.
    ((BinaryEntry) entry).getBackingMapContext("othercache")
            .getBackingMap().put("othercachekey", value);
}
The problem I'm facing is that the API doesn't seem to have an implementation of BinaryEntry.getBackingMapContext(String cacheName). It just has a no-arg version, which returns the context of the current entry's own cache. And it's not just an error in the API docs: the code doesn't compile either.
Any ideas what to do here?
Cheers,
Steve

Care to expand on that, Charlie?
The reason I ask is that since I posted my reply to JK, I noticed I was getting ClassCastExceptions on my audit record insert in the server logs. I had to resort to using converters to get my newly created audit record into binary format before the put into the second (also distributed) cache succeeded without errors:
BackingMapManagerContext ctx = ((BinaryEntry) entry).getContext();
ctx.getBackingMapContext("PositionAudit").getBackingMap().put(
        ctx.getKeyToInternalConverter().convert(pa.getAuditId()),
        ctx.getValueToInternalConverter().convert(pa));
The "PositionAudit" cache is the one I want to create a new entry in each time an EntryProcessor is invoked against Position objects in the "Position" cache. The object I'm creating for that second cache, "pa", is the newly created audit object, based on data in the Position object the entry processor is actually invoked against.
So the requirement is pretty simple, at least in my mind: Position objects in the "positions" cache get updated by an EntryProcessor. When the EntryProcessor fires, it must also write an "audit" record/object to a second cache, based upon data in the Position object just manipulated. As all of the caches are distributed caches - which store their data in Binary format, AFAIK - I still have to do the explicit conversion to binary format of my new audit object to get it successfully "put" into the audit cache.
Seems to still be quite a bit of messing around (including the KeyAssociator stuff to make sure these objects are in the same partitions in the first place for all this to work), but at least I now get "transactionally atomic" operations across both the positions and positions audit caches, something that couldn't be done from an EP prior to 3.7.
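For illustration, the co-location requirement above can be sketched in plain Java. This is a toy model of partition assignment, not the Coherence API: a real implementation would use a KeyAssociator (or have the audit key implement KeyAssociation), and the class names here are hypothetical.

```java
// Sketch of key association: an audit key declares the Position key it is
// associated with, so both hash to the same partition. In Coherence this
// would be done with a KeyAssociator or KeyAssociation; the classes below
// are purely illustrative.
public class KeyAssociationDemo {

    /** Audit key that is co-located with its parent Position key. */
    static final class AuditKey {
        final String positionKey;  // the key this audit record belongs to
        final long auditId;

        AuditKey(String positionKey, long auditId) {
            this.positionKey = positionKey;
            this.auditId = auditId;
        }

        /** The key whose partition this key should share. */
        String associatedKey() {
            return positionKey;
        }
    }

    /** Simplified partition assignment: hash modulo partition count. */
    static int partitionOf(Object key, int partitionCount) {
        return Math.floorMod(key.hashCode(), partitionCount);
    }

    /** Partition for an audit key, derived from its associated Position key. */
    static int partitionOfAudit(AuditKey key, int partitionCount) {
        return partitionOf(key.associatedKey(), partitionCount);
    }

    public static void main(String[] args) {
        int parts = 257;
        AuditKey audit = new AuditKey("POS-42", 1001L);
        // The audit entry lands in the same partition as its Position entry,
        // which is what makes a partition-level transaction across both
        // caches possible.
        System.out.println(partitionOf("POS-42", parts) == partitionOfAudit(audit, parts)); // true
    }
}
```

Because the audit key delegates to the Position key for partitioning, the two entries always end up on the same member, so the entry processor can touch both backing maps locally.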
As I say, it works now. Just want to make sure I'm going about it the right way. :)
Any comments appreciated.
Cheers,
Steve

Similar Messages

  • Write-Behind batch behavior in EP partition level transactions

    Hi,
    We use EntryProcessors to perform updates on multiple entities stored in the same cache partition. According to the documentation, Coherence handles all the updates in a "sandbox" and then commits them atomically to the cache backing map.
    The question is, when using write-behind, does Coherence guarantee that all entries updated in the same "partition level transaction" will be present in the same "storeAll" operation?
    Again, according to the documentation, the write-behind thread behavior is the following:
    1. The thread waits for a queued entry to become ripe.
    2. When an entry becomes ripe, the thread dequeues all ripe and soft-ripe entries in the queue.
    3. The thread then writes all ripe and soft-ripe entries, either via store() (if there is only a single ripe entry) or storeAll() (if there are multiple ripe/soft-ripe entries).
    4. The thread then repeats from (1).
    If all entries updated in the same partition level transaction become ripe or soft-ripe at the same instant they will all be present in the storeAll operation. If they do not become ripe/soft-ripe in the same instant, they may not be all present.
    So it all depends on the behavior of the commit of the partition-level transaction: if all entries get the same update timestamp, they will all become ripe at the same time.
    Does anyone know what is the behavior we can expect regarding this issue?
    Thanks.

    Hi,
    That comment is still correct for 12.1 and 3.7.1.10.
    I've checked the Coherence APIs and the ReadWriteBackingMap behavior, and although partition-level transactions are atomic, the updated entries are added one by one to the write-behind queue. For each added entry, Coherence uses the current time to calculate when that entry will become ripe, so there is no guarantee that all entries in the same partition-level transaction will become ripe at the same time.
    This leads me to another question.
    We have a use case where we want to split a large entity we are storing in coherence into several smaller fragments. We use EntryProcessors and partition level transactions to guarantee atomicity in operations that need to update more than one fragment of the same entity. This guarantees that all fragments of the same entity are fully consistent. The cached fragments are then persisted into database using write-behind.
    The problem now is how to guarantee that all fragments are fully consistent in the database. If we just rely on the Coherence write-behind mechanism we will have eventual consistency in the DB, but in case of a multi-server failure the entity may become inconsistent in the database, which is a risk we wouldn't like to take.
    Is there any other option/pattern that would allow us to either store all updates done on the entity or no update at all?
    Probably if, in the EntryProcessor, we identify which entities were updated and place them in another persistence queue as a whole, we will be able to achieve this, but this is a tricky workaround that we wouldn't like to use.
    Thanks.
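    The ripe-time behavior discussed in this sub-thread can be illustrated with a toy model. This is not the real ReadWriteBackingMap, and it assumes a write-batch-factor of zero (i.e. no soft-ripe entries are picked up early); it only shows why entries enqueued at slightly different times can fall into different storeAll() batches.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Toy model of a write-behind queue: each entry becomes "ripe" at
// enqueueTime + writeDelay. Entries committed by one partition-level
// transaction are enqueued one by one, so they can get different ripe
// times and end up in different storeAll() batches.
public class WriteBehindBatchDemo {

    static final class QueuedEntry {
        final String key;
        final long ripeAt;

        QueuedEntry(String key, long enqueuedAt, long writeDelay) {
            this.key = key;
            this.ripeAt = enqueuedAt + writeDelay;
        }
    }

    /** Drain every entry that is ripe at the given instant (one storeAll batch). */
    static List<String> drainRipe(Deque<QueuedEntry> queue, long now) {
        List<String> batch = new ArrayList<>();
        while (!queue.isEmpty() && queue.peekFirst().ripeAt <= now) {
            batch.add(queue.pollFirst().key);
        }
        return batch;
    }

    public static void main(String[] args) {
        long writeDelay = 100;
        Deque<QueuedEntry> queue = new ArrayDeque<>();
        // One partition-level transaction updates A and B, but the commit
        // enqueues them at slightly different times (t=0 and t=5).
        queue.addLast(new QueuedEntry("A", 0, writeDelay));
        queue.addLast(new QueuedEntry("B", 5, writeDelay));

        // The write-behind thread wakes at t=100: only A is ripe.
        System.out.println(drainRipe(queue, 100)); // [A]
        // The next pass at t=105 picks up B in a separate batch.
        System.out.println(drainRipe(queue, 105)); // [B]
    }
}
```

    In other words, atomic commit to the backing map does not by itself guarantee atomic arrival at the database; a non-zero write-batch-factor widens the window in which entries are batched together, but does not make it a guarantee.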

  • Concurrency with Entry Processors on Replicated Cache

    Hi,
    The documentation says that entry processors in replicated caches are executed on the initiating node.
    How is concurrency handled in this situation?
    What happens if two or more nodes are asked to execute something on an entry at the same time?
    What happens if the node initiating the execution is a storage disabled node?
    Thanks!

    Jonathan.Knight wrote:
    In a distributed cache the entry processor runs on the node that owns the cache entry. In a replicated cache the same entries are on all nodes, so I think one of the questions was what happens in this scenario. I presume the EP only executes on one of the nodes - it would not make sense for it to execute on all nodes - but which one does it use? Is there still a concept of an owner for a replicated cache, or is it random?
    At this point I would have coded a quick experiment to prove what happens but unfortunately I am a tad busy right now.
    JK

    Hi Jonathan,
    in the replicated cache there is still a notion of ownership of an entry, in Coherence terms it is called a lease. It is always owned by the last node to have carried out a successful modification on it, where modification may be a put/remove but it can also be a lock operation. Lease granularity is per entry.
    Practically the lock operation in the code Dimitri pasted serves two purposes. First it ensures no other nodes can lock it, second it brings the lease to the locking node, so it can correctly execute the entry-processor locally on the entry.
    Best regards,
    Robert

  • Transactional entry processors?

    Does the new "full XA support" also make it possible to roll back changes performed using entry processors, or is it still only "put/putAll" operations that can be performed transactionally?
    /Magnus

    That is very good news - we have for a long time wanted to use entry processors for "path" updates but previously not been able to use them with XA!!!!!
    /Magnus

  • Can not access cache in entry processor

    Why can't the cache in which the entry object exists be accessed from within an entry processor?
    The calculation in an entry processor will not always be a single grid-object operation.
    One object may be related to other objects.
    If we can't do such things in an entry processor, we have to run the calculation outside of it. However, calculation outside of the grid would not be a very efficient way of generating results.
    Does anyone have good idea on this?

    Hi, Robert,
    I understood your meaning.
    An Invocable is an alternative solution for processing data. Used outside of an entry processor, no deadlock would happen.
    I paste a piece of my code here. The following code cannot work properly. Just like you said, one cannot wait for a result inside an entry processor. My original purpose was to get the "Nation" property from another data object. Now I can only do it outside of the entry processor.
    public class UpdateFxProcessor extends AbstractProcessor {
        public Object process(Entry entry) {
            HtWsMeasuresUsdFx hfx = (HtWsMeasuresUsdFx) entry.getValue();
            InvocationService service = (InvocationService)
                    CacheFactory.getConfigurableCacheFactory()
                            .ensureService("InvocationService");
            Map map = service.query(new NationInvocable(hfx.getWsNationCode()), null);
            String nation = (String) map.get(service.getCluster().getLocalMember());
            hfx.setNation(nation);
            // ... other logic for calculating hfx ...
            return null;
        }
    }
    My data grid environment will store more than 10 million HtWsMeasuresUsdFx objects. I hope that the calculation on these objects can be executed in parallel. Maybe I misunderstood the function of the entry processor. If we use an Invocable outside of the entry processor, it is hard to control its calculation in parallel, isn't it?
    Best regards,
    Copper

  • Accessing Cache From an Entry Processor

    Is it possible to access another cache to perform operations on when calling cache.invoke()? In this call I pass in the key and a processor that invokes another cache.

    Hi Dan,
         If that other cache is in a cache service different from the cache service of the cache on which the entry-processor is running, then you can.
         Otherwise you should not, because your code will be prone to deadlocks.
         Best regards,
         Robert

  • My MacBook uses Mac OS X Version 10.6.8. I am unable to access some websites, such as Facebook, Google, etc, but I can access other websites such as MSN, Amazon. I have emptied the Safari cache with no change. Appreciate any help.

    My MacBook uses Mac OS X Version 10.6.8. I am unable to access some websites, such as Facebook and Google, but I can access other sites such as MSN and Amazon.  I have emptied the Safari cache with no change.  Appreciate any help in solving this problem.


  • Accessing others LCR$_ROW_RECORD in same transaction

    Hi all,
    I'm writing the code for a DML apply handler, may I access others LCR$_ROW_RECORD for other tables than the one current for the apply handler but in the same transaction?
    example:
    PROCEDURE myApplyHandlerDML(in_any IN ANYDATA) IS
      lcr          SYS.LCR$_ROW_RECORD;
      rc           PLS_INTEGER; 
    BEGIN
      rc := in_any.GETOBJECT(lcr);
      ... I need to access other LCR$_ROW_RECORDs of other tables, but in the same originating transaction
    thanks for any help, bye
    aldo

    Hello Aldo,
    The DML Handler works on a single logical change record (LCR). The apply process evaluates a single LCR and then passes it to the DML Handler (if defined), hence the DML Handler cannot see the other LCRs of the same transaction and does not know how many more LCRs are going to come to it.
    If you would like to access all the LCRs in a transaction, then you will need to instruct the apply process to enqueue the same into another queue as persistent messages (usually LCRs in apply queue are buffered - in memory).
    This can be done using DBMS_APPLY_ADM.SET_ENQUEUE_DESTINATION and DBMS_APPLY_ADM.SET_EXECUTE APIs. For this you will have to create a different queue and specify that in SET_ENQUEUE_DESTINATION. Then you can process all the LCRs in the new queue with a procedure (similar to your DML Handler).
    Hope this helps.
    Thanks,
    Rijesh

  • Updating a hierarchical data structure from an entry processor

    I have a tree-like data structure that I am attempting to update from an AbstractProcessor.
    Imagine that one value is a collection of child value keys, and I want to add a new child node in the tree. This requires updating the parent node (which contains the list of child nodes), and adding the child value which is a separate entry.
    I would rather not combine all bits of data into one value (which could make for a large serialized object), as sometimes I prefer to access (read-only) the child values directly. The child and the parent values live in the same partition in the partitioned cache, though, so get access should be local.
    However, I am attempting to call put() on the same cache to add a child value which is apparently disallowed. It makes sense that a blocking call is involved in this operation, as it needs to push out this data to the cluster member that has the backup value for the same operation, but is there a general problem with performing any kind of re-entrant work on Coherence caches from an entry processor for any value that is not the value you are processing? I get the assertion below.
    I am fine with the context blocking (preventing reads or writes on the parent node value) until the child completes, presuming that I handle deadlock prevention myself due to the order in which values are accessed.
    Is there any way to do this, either with entry processors or not? My code previously used lock, get and put to operate on the tree (which worked), but I am trying to convert this code to use entry processors to be more efficient.
    2008-12-05 16:05:34.450 (ERROR)[Coherence/Logger@9219882 3.4/405]: Assertion failed: poll() is a blocking call and cannot be called on the Service thread
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.poll(Grid.CDB:4)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.poll(Grid.CDB:11)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.put(DistributedCache.CDB:30)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.put(DistributedCache.CDB:1)
         at com.tangosol.util.ConverterCollections$ConverterCacheMap.put(ConverterCollections.java:2433)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$ViewMap.put(DistributedCache.CDB:1)
         at com.tangosol.coherence.component.util.SafeNamedCache.put(SafeNamedCache.CDB:1)
         at com.tangosol.net.cache.CachingMap.put(CachingMap.java:928)
         at com.tangosol.net.cache.CachingMap.put(CachingMap.java:887)
         at com.tangosol.net.cache.NearCache.put(NearCache.java:286)
         at com.conduit.server.properties.CLDistributedPropertiesManager$UpdatePropertiesProcessor.process(CLDistributedPropertiesManager.java:249)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$Storage.invoke(DistributedCache.CDB:20)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onInvokeRequest(DistributedCache.CDB:50)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$InvokeRequest.run(DistributedCache.CDB:1)
         at com.tangosol.coherence.component.net.message.requestMessage.DistributedCacheKeyRequest.onReceived(DistributedCacheKeyRequest.CDB:12)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:9)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:130)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onNotify(DistributedCache.CDB:3)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:37)
         at java.lang.Thread.run(Thread.java:637)

    Hi,
    reentrant calls to the same Coherence service are very much recommended against.
    For more about it, please look at the following Wiki page:
    http://wiki.tangosol.com/display/COH34UG/Constraints+on+Re-entrant+Calls
    Best regards,
    Robert

  • What triggers a write-through/write-behind of entry processor changes?

    What triggers a write-through/write-behind when a cache entry is modified by a custom entry processor (subclass of AbstractProcessor)? Is it simply the call to Entry.setValue() that triggers this or are entry modifications detected in some other way?
    The reason I am asking is that in our Coherence cluster it looks like some entry modifications through entry processors have not triggered a write-behind, and I see no logical pattern as to which ones have and which ones haven't except that some specific entries are just never written to our database. We see from our logs that our implementation of the CacheStore.store() method is not called in these cases, and we also see that the cache entry has been modified successfully.
    We are using Coherence 3.3 on a three machine cluster with 8 nodes on each machine, accessed from clients through a TCP extend proxy.
    Regards,
    Mikael Carlstedt
    mBlox Inc
    Edited by: user3849225 on 16-Sep-2010 04:57

    Hi Mikael
    Calling setValue() will result in a call to the CacheStore.store() method, unless the value you are setting is the same as the existing entry value. If you are using write-behind then storeAll() will be called instead of store() when there are multiple entries waiting to be stored. Write-behind will also coalesce entries, so that only the last value for a given key is stored.
    What patch level are you using?
    Paul
    Edited by: pmackin on Sep 17, 2010 12:08 AM
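    The coalescing behavior Paul mentions can be illustrated with a toy queue (a sketch, not the real ReadWriteBackingMap internals): successive updates to the same key replace each other, so a storeAll() pass only ever sees the latest value per key.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy illustration of write-behind coalescing: newer queued values replace
// older ones for the same key, so storeAll() receives one value per key.
public class CoalescingQueueDemo {

    private final Map<String, String> pending = new LinkedHashMap<>();

    /** Queue a value for write-behind; a newer value replaces an older one. */
    void enqueue(String key, String value) {
        pending.put(key, value);
    }

    /** What a storeAll() pass would receive. */
    Map<String, String> drain() {
        Map<String, String> batch = new LinkedHashMap<>(pending);
        pending.clear();
        return batch;
    }

    public static void main(String[] args) {
        CoalescingQueueDemo q = new CoalescingQueueDemo();
        q.enqueue("acct-1", "v1");
        q.enqueue("acct-2", "v1");
        q.enqueue("acct-1", "v2");     // coalesces with the earlier update
        System.out.println(q.drain()); // {acct-1=v2, acct-2=v1}
    }
}
```

    One consequence worth noting for the original question: an intermediate value that is overwritten before its write-delay expires is never seen by the CacheStore at all.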

  • Spawning new entry processors from within an existing entry processor

    Is it possible / legal to spawn a new entry processor (to operate on a different cache) from within an existing entry processor?
    E.g. we have a parent and a child cache. We will receive an update of the parent and start an entry processor to handle it. Off the back of the parent update we will also need to update some child entries in another cache, and so need to start a new entry processor for the child entries. Is it legal to do this?

    Hi Ghanshyam,
    yes, in case of (a), you would be mixing different types in the same cache. There is nothing wrong with that from Coherence's point of view, as long as all code which is supposed to access such objects in their deserialized form is able to handle this situation.
    This means that you need to use special extractors for creating indexes, and you need to write your filters, entry processors and aggregators appropriately to take this into account. But that's all it means.
    The EntryProcessor on the child could be invoked, so long as there are more service threads configured. This allows retaining partition affinity. I don't think this is technically illegal.

    It is problematic, as invoking an entry-processor from another entry-processor in the same cache service can lead to deadlock/livelock situations. You won't necessarily find this out in a simple test, whether or not you get an exception.
    But even if it is not technically guarded against, firing a second entry-processor consumes an additional thread from the thread pool. If you get into a situation where all (or more than half) of your entry-processors try to fire an additional entry-processor and there are no more threads in the thread pool, then some or all of them will be waiting for a thread to become available, and of course none will become available, because the blocked parent entry-processors themselves occupy the pool.
    However, none of them can back off as all are waiting for the fired entry-processor to complete. Poof, no processing is possible on your cache service.
    Another problematic situation which can arise if entry processors are fired from entry processors is that your entry-processors may deadlock on entries (entry processors executing on some entries and trying to execute on another entry on which another entry processor executes and also tries to execute on the first entry). In this case the entry-processors would wait on each other to execute.
    No code running in the cache server invoked by Coherence is supposed to access a cache service from code running in the threads of the same cache service, except for a couple of specifically named operations which only release resources but not consume additional new ones.
    Best regards,
    Robert
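    The thread-starvation scenario Robert describes reduces to simple arithmetic (a sketch, not the actual Coherence service thread pool):

```java
// Arithmetic sketch of the thread-starvation scenario: with a pool of size
// N, if every running entry-processor fires a child entry-processor and
// blocks waiting for it, all N threads are occupied by blocked parents and
// no thread is left to run any child.
public class PoolStarvationDemo {

    /** Threads left over for child tasks when `blockedParents` parents block. */
    static int freeThreadsForChildren(int poolSize, int blockedParents) {
        return Math.max(0, poolSize - blockedParents);
    }

    public static void main(String[] args) {
        int poolSize = 10;
        // Every thread runs a parent EP that fires a child and waits:
        System.out.println(freeThreadsForChildren(poolSize, 10)); // 0 -> no child can ever run
        // With at most half the pool running parents, children can still run:
        System.out.println(freeThreadsForChildren(poolSize, 5));  // 5
    }
}
```

    This is why "more than half of the pool firing children" is the tipping point: beyond it, the remaining threads cannot cover the pending children.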

  • Strange read-through operation after entry processor work

    Hi.
    We use the combination of a cache listener and an entry processor to do some actions when data comes into Coherence. We use Oracle Coherence Version 3.5.3/465.
    Just after the entry processor has set the new value for the entry, a new "get" operation is called on the cache and a JDBC hit is done for this key.
    Here is the entry processor:
    public Object process(Entry entry) {
        if (!entry.isPresent()) {
            // No entities exist for this CoreMatchingString - creating new Matching unit
            MatchingUnit newUnit = new MatchingUnit(newTrade);
            entry.setValue(newUnit, true);
            return null;
        }
        ((MatchingUnit) entry.getValue()).processMatching(newTrade);
        return null;
    }
    Very interestingly, if I use entry.setValue(value) without the second parameter, I receive the db hit right on the setValue method. According to the docs, setValue() with one parameter returns the previous value, so it's logical that the cache hit (and therefore the db hit) happens on set. But I use the overloaded version void setValue(java.lang.Object oValue, boolean fSynthetic), which is said to be lightweight and should not fetch the previous version of the object. But this fetch happens anyway! Not on setValue itself, but just after the process() method is called.
    Actually it's strange that Coherence tries to fetch the previous value in a case where it didn't exist! The cache.invoke(matchingStr, new CCPEntryProcessor(ccp)) is invoked on a non-existing record, which is created on invocation. Maybe it's a bug, or a place for optimization.
    Thanks

    bitec wrote:
    Thanks, Robert, for such a detailed answer.
    Still not clear for me why synthetic inserts are debatable. There are lots of cases when the client simply updates/inserts the record (using setValue()) and does not need to receive the previous value. If he needs it, he will launch the method:

    Hi Anton,
    it is debatable because the purpose of the fSynthetic flag is NOT so that you can optimize a cache store operation away. Synthetic event means that this is not real change in the data set triggered by the user, it is something Coherence has done on the backing map / on the cache for its own reasons and decisions to be able to provide high-availability to the data, and it only changes that particular Coherence node's subset of data but does not have a meaning related to the actually existing full data set. Such reasons are partition movement and cache eviction (or possibly any other reasons why Coherence would want to change the content of the backing map without telling the user that anything has changed).
    If you set the synthetic flag, you are trying to masquerade a real data change as an event which Coherence decided to trigger. This is why it is debatable. Also, synthetic backing map events may not always lead to dispatching cache events (for partition rebalance they definitely not). This optimization may also be extended to synthetic cache events.
    java.lang.Object setValue(java.lang.Object oValue)
    and receive the previous value. If he doesn't, he calls:
    void setValue(java.lang.Object oValue, boolean fSynthetic)
    and DOESN'T receive the previous value, as the method is marked void. Thus he cannot get the previous value at all using this API, except via a direct manual db call.

    Yep, because Coherence is not interested in the old value in case of a synthetic event. The synthetic methods exist so that an entry can be changed in Coherence (usually by Coherence itself) in a way that indicates a synthetic event, so that listeners are not notified.
    Some valid uses for such functionality for setValue invoked by user code could be compacting some cached value and replacing the value the backing map stores with the compacted representation, which does not mean a change in the meaning of the actual cached value, only the representation changes. Of course if the setValue(2) method does not actually honor the synthetic flag, then such functionality will still incur all the costs of a normal setValue(1) call.
    But the previous value is fetched by Coherence itself just after process(), and the client doesn't get it anyway!

    But any listeners on the cache may need to get it, due to cache semantics reasons.
    In this case I regard this as a bug, because a client using this API doesn't expect the cache hit to take place (there is no return value for this overloaded setValue() method), but it happens anyway and leads to some extra problems resulting from the read-through mechanism.
    I would not regard it as a bug, it is probably the case of documenting a possible optimization too early, when it ultimately did not get implemented. I definitely would not try to abuse it to set a value without triggering a db fetch, as again, the intention of the synthetic flag is related not only to the cache loader functionality, but also to events and marking that a change indicates a real data change or a Coherence data management action.
    Now I understand why coherence does not know, whether this is inserted or updated value, thanks for details.
    Anton.
    *Edited: I thought about this problem from the point of view of the Oracle user, but maybe this additional hit is necessary for event producers, which need to create events containing old/new values. In that case this seems to be the correct behaviour... It seems that I need some other workaround to avoid the db hit. The best workaround is an empty load() method for the cache store...

    You can try to find a workaround, but it is an ugly can of worms, because of the scenario where more than one thread tries to load the same entry, and some of them try to load it for a real reason.
    You may try to put some data into a thread-local to indicate that you don't really want to load that particular key. The problem is that, depending on configuration and race conditions, your cache loader may not be called to clean up that thread-local, as some other thread may be invoked right at that moment, in which case its return value is going to be returned to all the other threads too, so you may end up with a polluted thread-local.
    Best regards,
    Robert
    Edited by: robvarga on Oct 15, 2010 4:25 PM
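    The synthetic-event semantics Robert describes can be sketched with a toy cache (hypothetical classes, not the Coherence API): a synthetic change updates the backing map but skips listener notification.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of synthetic-event semantics: a synthetic change is one the
// cache makes for its own housekeeping, so listeners are not told about it.
public class SyntheticEventDemo {

    interface Listener {
        void onChange(String key, String value);
    }

    private final Map<String, String> map = new HashMap<>();
    private final List<Listener> listeners = new ArrayList<>();

    void addListener(Listener l) {
        listeners.add(l);
    }

    void setValue(String key, String value, boolean synthetic) {
        map.put(key, value);
        if (!synthetic) {
            // Only real data changes are dispatched to listeners.
            for (Listener l : listeners) {
                l.onChange(key, value);
            }
        }
    }

    public static void main(String[] args) {
        SyntheticEventDemo cache = new SyntheticEventDemo();
        List<String> seen = new ArrayList<>();
        cache.addListener((k, v) -> seen.add(k + "=" + v));

        cache.setValue("a", "1", false); // real change: listeners notified
        cache.setValue("b", "2", true);  // synthetic: listeners skipped
        System.out.println(seen);        // [a=1]
    }
}
```

    This also shows why a listener that needs old/new values forces the cache to know the previous value, which is the read-through cost discussed above; the flag controls event dispatch, not cache-store behavior.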

  • Simple get vs entry processor for processsing 'get' requests

    We're wondering about the best way to handle a simple get where we want to do some processing on the request:
    1. we might want to authenticate/authorize a proxy as well as an 'implied' user sitting behind the proxy
    2. we might need to look up the cache entry by modifying the key presented by the user: for example, changing capitalization
    3. we might even have multiple aliases for the cache entry but only store it once to avoid data duplication; in this case, we'd need to resolve an alias to another name entirely.
    Would it be best to use an entry processor to do this (a 'GetProcessor') or is there a better way using simple 'get' in the basic api with some server-side logic to intercept the request and do the processing? If the latter, can you please explain how to do it?
    And please point out any performance considerations if you can.
    Thanks!
    Edited by: Murali on Apr 25, 2011 2:51 PM

    Hi Murali,
    You would probably be better off using an Invocable and InvocationService for what you want to do. The main reason for this would be points 2 and 3 where you say you might want to modify the key or have aliases (I presume you mean aliases for keys).
    If you use a get or an EntryProcessor these two requests would be routed to the storage member that owns the key of the get request or EntryProcessor invoke. If you then wanted to modify the key the get or EntryProcessor may now be on the wrong node as a different node may own the new key.
    If your data access requests are all coming from client applications over Extend, and not from within cluster members, then you could intercept calls to the cache on the Extend proxy and do your extra processing and key modification there. There are a couple of different ways of doing this, depending on what version of Coherence you are using and whether this is restricted to a few caches or all caches. Coherence 3.6 and above make this easier, as they introduce methods and configuration as part of the security API that allow you to intercept calls and easily wrap caches. It is still possible in 3.5, but a bit more work.
    Probably the easiest way on an Extend proxy is to create a wrapper class that wraps the real cache and intercepts the required methods. You can do this by extending WrapperNamedCache and overriding the methods you want to intercept, such as get(). Actually making Coherence use your wrapper instead of the real cache can be done a number of ways, depending again on which version of Coherence you have.
    Are all your data access requirements just gets, or do you intend to use Filter queries? Obviously any query made against a cache where the Filter targets the key rather than the value would fail if the filter used unmodified key values. You would also need to cope with getAll requests.
    If you can expand a bit on where the requests for data will come from (i.e. are they all form Extend clients) and which version of Coherence you have then it would be possible to give a better answer as right now there are quite a few possibilities.
    JK
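    To make the wrapper suggestion concrete, here is a sketch of key normalization and alias resolution using a plain Map. In a real deployment this logic would live in a WrapperNamedCache subclass on the Extend proxy; the alias table and the lower-casing rule below are assumptions for illustration.

```java
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;

// Sketch of a key-normalizing cache wrapper: capitalization is normalized
// and aliases are resolved to a single canonical key before the lookup,
// so the entry is stored only once.
public class NormalizingCacheWrapper {

    private final Map<String, String> backing;
    private final Map<String, String> aliases; // alias -> canonical key

    NormalizingCacheWrapper(Map<String, String> backing,
                            Map<String, String> aliases) {
        this.backing = backing;
        this.aliases = aliases;
    }

    /** Normalize capitalization, then resolve any alias to the real key. */
    private String resolve(String key) {
        String normalized = key.toLowerCase(Locale.ROOT);
        return aliases.getOrDefault(normalized, normalized);
    }

    String get(String key) {
        return backing.get(resolve(key));
    }

    public static void main(String[] args) {
        Map<String, String> backing = new HashMap<>();
        backing.put("acme corp", "some-value");
        Map<String, String> aliases = Map.of("acme", "acme corp");

        NormalizingCacheWrapper cache =
                new NormalizingCacheWrapper(backing, aliases);
        System.out.println(cache.get("ACME Corp")); // some-value (case normalized)
        System.out.println(cache.get("Acme"));      // some-value (alias resolved)
    }
}
```

    Doing this on the proxy, before the request is routed, avoids the problem JK raises: the owning member is computed from the already-normalized key, so the request never lands on the wrong node.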

  • Entry Processor and worker thread question

    We have a cache service configured with a thread count of 10. When I run an entry processor to do some processing on all entries of the cache, I see in the logs that each entry is processed sequentially on the same worker thread, even though multiple threads are configured for the cache service. I was hoping the entry processor would run in parallel on multiple threads for different entries. Am I missing something here? Appreciate all the help.
Here is the log; it looks like the entry processor runs sequentially on the same thread:
2011/04/06 15:58:55:489 [DistributedCacheWorker:6] INFO ENTRY PROCESSOR ON ACCOUNT 1872
2011/04/06 15:58:55:490 [DistributedCacheWorker:6] INFO ENTRY PROCESSOR ON ACCOUNT 38570
2011/04/06 15:58:55:490 [DistributedCacheWorker:6] INFO ENTRY PROCESSOR ON ACCOUNT 38856
2011/04/06 15:58:55:491 [DistributedCacheWorker:6] INFO ENTRY PROCESSOR ON ACCOUNT 1586279
2011/04/06 15:58:55:492 [DistributedCacheWorker:6] INFO ENTRY PROCESSOR ON ACCOUNT 38570

    Hello,
Entry processors will use multiple threads; however, they will only use one thread per partition.
My guess is that the entries you are seeing processed all belong to the same partition.
    hth,
    -Dave
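One way to confirm Dave's guess is to check which partition each key maps to. A hedged sketch, assuming Coherence's PartitionedService and KeyPartitioningStrategy APIs; the cache name "accounts" and the sample keys (taken from the log above) are illustrative, and this needs a running cluster:

```java
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.net.PartitionedService;

/**
 * Diagnostic sketch: print the partition owning each sample key.
 * If they all report the same partition, sequential processing on a
 * single worker thread is the expected behaviour.
 */
public class PartitionCheck
    {
    public static void main(String[] args)
        {
        NamedCache cache = CacheFactory.getCache("accounts"); // hypothetical name
        PartitionedService service =
            (PartitionedService) cache.getCacheService();

        for (Object oKey : new Object[] {1872, 38570, 38856})
            {
            int nPart = service.getKeyPartitioningStrategy()
                               .getKeyPartition(oKey);
            System.out.println(oKey + " -> partition " + nPart);
            }
        }
    }
```

Also check whether your keys use key association (KeyAssociation or a custom KeyAssociator), since associated keys are deliberately co-located in one partition.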

  • Entry processor performance and serialization

    Our application invokes an entry processor from a .Net extend client. We are seeing very poor performance the first few times the processor is invoked (typically 60ms vs 2ms). I was expecting this to be due to the overhead of the initial TCP connection but this doesn't appear to be the case. Here is a typical set of timings from the beginning of invoking the EP on the client:
    5ms: Object starts being deserialized on proxy
    7ms: Deserialization is finished, and serialization begins
    9ms: Serialization is complete.
    55ms: Deserialization begins on the storage node
    58ms: Deserialization complete on storage node
58.5ms: Entry processor invocation complete
    60ms: Serialization of result is complete
    My questions are:
    - What is happening between the end of serialization of the object by the proxy and deserialization starting on the storage node?
    - Given that the cache is POF enabled, why is the proxy deserializing and reserializing the entry processor?
    Version is 3.7.1.6

    Hi Rob,
Config is below. defer-key-association-check is not set. The timings were established by logging the POF serialization in each process, so it's definitely the proxy that is doing it.
    Regards,
    Dave
    Client config:
<?xml version="1.0"?>
<cache-config xmlns="http://schemas.tangosol.com/cache">
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>Cache</cache-name>
      <scheme-name>remote-scheme</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>
  <caching-schemes>
    <remote-cache-scheme>
      <scheme-name>remote-scheme</scheme-name>
      <service-name>ExtendProxyService</service-name>
      <initiator-config>
        <tcp-initiator>
          <remote-addresses>
            <socket-address>
              <address>localhost</address>
              <port>1234</port>
            </socket-address>
          </remote-addresses>
          <connect-timeout>5s</connect-timeout>
        </tcp-initiator>
        <outgoing-message-handler>
          <request-timeout>1s</request-timeout>
        </outgoing-message-handler>
        <serializer>
          <class-name>Tangosol.IO.Pof.ConfigurablePofContext, Coherence</class-name>
          <init-params>
            <init-param>
              <param-type>string</param-type>
              <param-value>assembly://..../pof-config.xml</param-value>
            </init-param>
          </init-params>
        </serializer>
      </initiator-config>
    </remote-cache-scheme>
  </caching-schemes>
</cache-config>
    Server cache config:
<?xml version="1.0"?>
<cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
              xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config coherence-cache-config.xsd">
  <defaults>
    <serializer>pof</serializer>
  </defaults>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>Cache</cache-name>
      <scheme-name>cache-distributed</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>
  <caching-schemes>
    <distributed-scheme>
      <scheme-name>cache-distributed</scheme-name>
      <service-name>PartitionedCache</service-name>
      <thread-count>10</thread-count>
      <partition-count>2039</partition-count>
      <backing-map-scheme>
        <local-scheme/>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
    <proxy-scheme>
      <service-name>ExtendProxyService</service-name>
      <thread-count>5</thread-count>
      <acceptor-config>
        <tcp-acceptor>
          <local-address>
            <address>localhost</address>
            <port system-property="tangosol.coherence.proxy.port"></port>
          </local-address>
        </tcp-acceptor>
      </acceptor-config>
      <proxy-config>
        <cache-service-proxy>
          <class-name>....InteceptorCacheService</class-name>
          <init-params>
            <init-param>
              <param-type>com.tangosol.net.CacheService</param-type>
              <param-value>{service}</param-value>
            </init-param>
          </init-params>
        </cache-service-proxy>
      </proxy-config>
    </proxy-scheme>
  </caching-schemes>
</cache-config>
