Concurrent access to a data structure by threads

Hi,
I have a list of objects which is read by a single thread, but several writer threads can write into it.
The reader reads the objects off the list and does some processing.
The reader thread must wait for a notification from the writer threads that the list has been written into.
So each writer thread writes into the list and then notifies the reader. Each time the reader receives a notification it starts its processing and then checks whether all the writers are done; if they aren't, it waits again, and if all the writers are done it does a final processing step and exits.
I was able to synchronise the writers, but I am not able to synchronise the reader thread with the writer threads.
My code is as follows:
class Data {
    private List<String> resultList = Collections.synchronizedList(new ArrayList<String>());
    private boolean modified = false;

    public List<String> getResultList() {
        return this.resultList;
    }
    public void setResultList(List<String> resultList) {
        this.resultList = resultList;
    }
    public void addElementToResultList(String str) {
        this.resultList.add(str);
    }
    public synchronized boolean IsModified() {
        return this.modified;
    }
    public synchronized void setModified(boolean modified) {
        this.modified = modified;
    }
}
class CallerTest {
    Data myData = new Data();
    List<String> resultList = null;
    int counter = 0;

    public synchronized void incrementCounter() {
        this.counter++;
    }
    public synchronized int getCounter() {
        return this.counter;
    }
    public void setFinalResult(List<String> resultList) {
        this.resultList = resultList;
    }
    public void TestMeth() {
        int numWriters = 5;
        ReaderThread rt = new ReaderThread(this, numWriters, myData);
        rt.start();
        for (int i = 0; i < numWriters; i++) {
            WriterThread wt = new WriterThread(this, myData);
            wt.start();
        }
        try {
            rt.join();
        } catch (Exception x) {
        }
    }
}
class ReaderThread extends Thread {
    CallerTest caller = null;
    int numWriters = 0;
    Data data = null;

    public ReaderThread() {
    }
    public ReaderThread(CallerTest caller, int numWriters, Data data) {
        this.caller = caller;
        this.data = data;
        this.numWriters = numWriters;
    }
    public void run() {
        try {
            while (!data.IsModified()) {
                data.getResultList().wait();
            }
            mergeResults(); // this is some processing
            if (caller.getCounter() < this.numWriters) {
                data.setModified(false);
            } else {
                mergeResults();
                caller.setFinalResult(finalResult);
            }
        } catch (Exception ent) {
            log.error("The exception is in remote result reader ", ent);
        }
    }
}
class WriterThread extends Thread {
    private CallerTest caller = null;
    private Data data = null;

    WriterThread() {
    }
    WriterThread(CallerTest caller, Data data) {
        this.caller = caller;
        this.data = data;
    }
    public void run() {
        String str = getStringName();
        writeResult(str);
    }
    public String getStringName() {
        // some processing here
    }
    public synchronized void writeResult(String result) {
        try {
            if (result != null) {
                data.addElementToResultList(result);
                caller.incrementCounter();
                data.setModified(true);
                data.getResultList().notifyAll();
            }
        } catch (Exception ent) {
            log.error("exception in WriterThread write remote results", ent);
        }
    }
}
However, on running the program I get an exception:
[ERROR] [Mon Jan 25 12:13:49 2010] [Thread-59| WriterThread] exception in WriterThread write remote results
java.lang.IllegalMonitorStateException
at java.lang.Object.notifyAll(Native Method)
WriterThread.writeResults(WriterThread.java:328)
WriterThread.run(WriterThread.java:83)
Please can you tell me how to synchronise the reader and the writers?
I would greatly appreciate your help.
Thanks,
Helen
Edited by: hreese on Jan 25, 2010 11:08 AM

As far as I can see (not too far, because you didn't use the CODE tags to make your code readable), your code is a little bit of a mess.
You can start by defining which kinds of errors can be thrown and handling the exceptions properly.
So use at least:
try{
  block
}catch(Exception e){
  e.printStackTrace();
}
Try to find out where the exception is thrown and debug the code.
Don't place a bunch of code on the forum, because most people won't read it all.
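For what it's worth: the IllegalMonitorStateException happens because writeResult() calls notifyAll() on the list while holding the WriterThread's own monitor, not the list's; wait() has the same requirement. A minimal sketch of the handshake done correctly (class and method names here are invented for illustration, not taken from the original code):

```java
import java.util.ArrayList;
import java.util.List;

public class ReaderWriterDemo {
    private final List<String> results = new ArrayList<String>();
    private final int numWriters;
    private int finished = 0;

    ReaderWriterDemo(int numWriters) {
        this.numWriters = numWriters;
    }

    // Writers add a result and notify the reader, all inside synchronized(results),
    // so notifyAll() is always called while holding the list's monitor.
    void write(String s) {
        synchronized (results) {
            results.add(s);
            finished++;
            results.notifyAll();
        }
    }

    // The reader waits on the same monitor until every writer has reported in.
    List<String> awaitAll() throws InterruptedException {
        synchronized (results) {
            while (finished < numWriters) {
                results.wait(); // releases the monitor while waiting
            }
            return new ArrayList<String>(results);
        }
    }

    public static void main(String[] args) throws Exception {
        final ReaderWriterDemo demo = new ReaderWriterDemo(5);
        for (int i = 0; i < 5; i++) {
            final int id = i;
            new Thread(new Runnable() {
                public void run() {
                    demo.write("result-" + id);
                }
            }).start();
        }
        System.out.println(demo.awaitAll().size()); // prints 5
    }
}
```

The two key points: both wait() and notifyAll() are invoked inside synchronized (results), and the reader re-checks its condition in a while loop rather than an if, so spurious wakeups and early notifications are harmless.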

Similar Messages

  • Accessing tcp_t data structure

    I need to access the tcp_t data structure which is implemented in Solaris 10.
    How can a C program be written to access the tcp_t data structure?

    Look at /usr/include/inet/tcp.h - it defines the structure tcp_t.
    Add the following line to your source file:
    #include <inet/tcp.h>
    I'm not sure that this information is exactly what you are asking for.
    Probably you can find good examples of source code on google.com
    Thanks.
    Nik

  • Using a single data structure in a desktop application

    Hello,
    I am programming an application that needs to constantly access a data structure, for instance to add / edit / update / search data. Several graphical user interfaces need to modify this data structure. I was wondering about easy ways to use the data structure throughout the whole application. One solution I found was to use the singleton pattern for my data structure, though lots of people have recommended me to avoid that pattern. What are better ways of accessing that single data structure from all of those GUIs?
    Thank you,
    Alfredo

    Not that there is anything wrong with the Singleton pattern, but I don't see how it would help you in this case.
    Just create the DataStructure once and let every GUI that needs to use it have a reference to it.
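    To make "have a reference to it" concrete, here is a minimal sketch of constructor injection (all class names are invented for illustration):

```java
import java.util.ArrayList;
import java.util.List;

// The one shared data structure; no singleton needed.
class DataStore {
    private final List<String> items = new ArrayList<String>();
    synchronized void add(String s) { items.add(s); }
    synchronized int size() { return items.size(); }
}

// Each GUI component receives the shared instance in its constructor.
class PanelA {
    private final DataStore store;
    PanelA(DataStore store) { this.store = store; }
    void onUserInput(String s) { store.add(s); }
}

class PanelB {
    private final DataStore store;
    PanelB(DataStore store) { this.store = store; }
    int count() { return store.size(); }
}

public class SharedReferenceDemo {
    public static void main(String[] args) {
        DataStore store = new DataStore(); // created once at startup
        PanelA a = new PanelA(store);
        PanelB b = new PanelB(store);
        a.onUserInput("hello");
        System.out.println(b.count()); // prints 1
    }
}
```

    The wiring happens once at startup, which gives the same "one instance everywhere" effect as a singleton without the global state.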

  • Unexpected error occurred :concurrent access to HashMap attempted

    While running ALBPM 5.7 we got this error. It looks like the ALBPM workflow engine is using a HashMap in an unsynchronized way. Is this a known issue, and is there a workaround for it?
    This error happened shortly after a possible blip in the database server, with exception message which said:
    Message:
    The connectivity to the BEA AquaLogic™ BPM Server database has been successful restablished.
    Any thoughts/insight/past experience....
    Looks like we should be using a Hashtable instead of a HashMap (or at least a synchronized HashMap).
    This is best done at creation time, to prevent accidental unsynchronized access to the map:
    Map m = Collections.synchronizedMap(new HashMap(...));
    See Exception message below
    Message:
    An unexpected error occurred while running an automatic item.
    Details: Connector [ffmaeng_ENGINE_DB_FUEGOLABS_ARG:SQL:Oracle (ALI)] caused an exception when getting a resource of type [0].
    Detail:Connector [ffmaeng_ENGINE_DB_FUEGOLABS_ARG:SQL:Oracle (ALI)] caused an exception when getting a resource of type [0].
    Caused by: concurrent access to HashMap attempted by Thread[ET(49),5,Execution Thread Pool]
    fuego.connector.ConnectorException: Connector [ffmaeng_ENGINE_DB_FUEGOLABS_ARG:SQL:Oracle (ALI)] caused an exception when getting a resource of type [0].
    Detail:Connector [ffmaeng_ENGINE_DB_FUEGOLABS_ARG:SQL:Oracle (ALI)] caused an exception when getting a resource of type [0].
    at fuego.connector.ConnectorException.exceptionOnGetResource(ConnectorException.java:95)
    at fuego.connector.ConnectorTransaction.getResource(ConnectorTransaction.java:285)
    at fuego.connector.JDBCHelper.getConnection(JDBCHelper.java:43)
    at fuego.server.service.EngineConnectorService.getConnection(EngineConnectorService.java:260)
    at fuego.server.service.EngineConnectorService.getEngineConnection(EngineConnectorService.java:160)
    at fuego.transaction.TransactionAction.getEngineHandle(TransactionAction.java:180)
    at fuego.server.execution.EngineExecutionContext.getEngineHandle(EngineExecutionContext.java:352)
    at fuego.server.execution.EngineExecutionContext.persistInstances(EngineExecutionContext.java:1656)
    at fuego.server.execution.EngineExecutionContext.persist(EngineExecutionContext.java:1010)
    at fuego.transaction.TransactionAction.beforeCompletion(TransactionAction.java:133)
    at fuego.connector.ConnectorTransaction.beforeCompletion(ConnectorTransaction.java:654)
    at fuego.connector.ConnectorTransaction.commit(ConnectorTransaction.java:330)
    at fuego.transaction.TransactionAction.commit(TransactionAction.java:303)
    at fuego.transaction.TransactionAction.startBaseTransaction(TransactionAction.java:470)
    at fuego.transaction.TransactionAction.startTransaction(TransactionAction.java:540)
    at fuego.transaction.TransactionAction.start(TransactionAction.java:213)
    at fuego.server.execution.DefaultEngineExecution.executeImmediate(DefaultEngineExecution.java:118)
    at fuego.server.execution.DefaultEngineExecution.executeAutomaticWork(DefaultEngineExecution.java:58)
    at fuego.server.execution.EngineExecution.executeAutomaticWork(EngineExecution.java:42)
    at fuego.server.execution.ToDoItem.executeAutomaticWork(ToDoItem.java:264)
    at fuego.server.execution.ToDoItem.run(ToDoItem.java:531)
    at fuego.component.ExecutionThread.processMessage(ExecutionThread.java:754)
    at fuego.component.ExecutionThread.processBatch(ExecutionThread.java:734)
    at fuego.component.ExecutionThread.doProcessBatch(ExecutionThread.java:140)
    at fuego.component.ExecutionThread.doProcessBatch(ExecutionThread.java:132)
    at fuego.fengine.ToDoQueueThread$PrincipalWrapper.processBatch(ToDoQueueThread.java:432)
    at fuego.component.ExecutionThread.work(ExecutionThread.java:818)
    at fuego.component.ExecutionThread.run(ExecutionThread.java:397)
    Caused by: java.util.ConcurrentModificationException: concurrent access to HashMap attempted by Thread[ET(49),5,Execution Thread Pool]
    at java.util.HashMap.onExit(HashMap.java:226)
    at java.util.HashMap.transfer(HashMap.java:690)
    at java.util.HashMap.resize(HashMap.java:676)
    at java.util.HashMap.addEntry(HashMap.java:1049)
    at java.util.HashMap.put(HashMap.java:561)
    at fuego.lang.cache.CacheStatistic.lock(CacheStatistic.java:246)
    at fuego.lang.cache.TimedMultiValuatedCache.getLocked(TimedMultiValuatedCache.java:282)
    at fuego.lang.cache.TimedPool.get(TimedPool.java:80)
    at fuego.connector.impl.BaseJDBCPooledConnector.getConnection(BaseJDBCPooledConnector.java:140)
    at fuego.connector.impl.BaseJDBCConnector.getResource(BaseJDBCConnector.java:222)
    at fuego.connector.ConnectorTransaction.getResource(ConnectorTransaction.java:280)
    ... 26 more

    Hi BalusC,
    I forgot to mention one thing: the exception I described occurs very rarely. The application is in production and they get this exception about once in 3 months. Is there any way to reproduce the same exception a number of times, to check whether it has been fixed after installing the updates you mentioned? If you have any information regarding this exception, please send it to me.
    Thank You.

  • How to use multithreading to access sensor data?

    I'm trying to access sensor data on another thread but keep getting the error - No overload for 'OnSensorChanged' matches delegate 'System.Threading.ThreadStart'. What am I doing wrong?

    public string str { get; set; }

    public void OnSensorChanged(SensorEvent e)
    {
        if (e != null)
            str = e.Values[2].ToString("0.0");
    }

    public void button_OnClick(object sender, EventArgs eventArgs)
    {
        Setup setup = new Setup();
        Thread newThread = new Thread(new ThreadStart(setup.OnSensorChanged));
        newThread.Start();
        newThread.Join();
        newThread.Abort();
        _text.Text = str;
    }

    Hi Kringle,
    >>No overload for 'OnSensorChanged' matches delegate 'System.Threading.ThreadStart'
    This exception indicates that the method signature of OnSensorChanged does not match the signature defined by the ThreadStart delegate, which takes no arguments. Use a ParameterizedThreadStart instead (note that its parameter type is object, so OnSensorChanged would need to accept an object and cast it to SensorEvent) and pass the SensorEvent to Start, for example:
    Thread newThread = new Thread(new ParameterizedThreadStart(setup.OnSensorChanged));
    newThread.Start(new SensorEvent());
    Reference:
    https://msdn.microsoft.com/en-us/library/system.threading.parameterizedthreadstart%28v=vs.110%29.aspx

  • Concurrent access to changing data structure

    I would like to design a data structure that would allow multiple read only threads to access it but periodically allow the structure to be modified or completely refreshed by one other thread. I would prefer to not have to synchronize access to the structure by the read only threads just to allow one thread to modify it once in a while. Can the object be locked and the other threads blocked just when the one thread is modifying the structure until it completes?
    Is there a common or recommended design for this situation?

    So you want it to block while the writer is writing, but not otherwise, correct?
    Check out Doug Lea's concurrency package. I've never used it, but it seems quite popular and I gather it's got a lot of handy tools. It might have what you're looking for. Google for it. It (or something based on it) might even be included in 1.5/5.0, IIRC.
    Barring that, I think that if your reads are quick (i.e., just retrieving a single value) and you don't have a ton of threads all doing a lot of very, very frequent reads, then you can just synchronize all access to the structure's state. Uncontended locks are quick to obtain and release, so unless your reads take a relatively long time or they're very densely packed in time, you won't get many collisions.
    If you do need to implement something like this, then what I'm thinking of is to have a flag that indicates whether there's a read or write going on. If there's a read going on, other reads can go on, but no writes. If there's a write going on, no other writes or reads can occur. Access to the flag would have to be synchronized, readers/writers would have to wait/notify each other, so you might not end up saving that much performance-wise anyway. I'm not even sure if there's a good way to make it work--it might just end up being an overcomplicated, non-working double-checked lock.
    Start with just syncing all access to the structure's state--both read and write--and then run some tests to see if your performance requirements are met.
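    Following up on the 1.5 point: Doug Lea's package did become java.util.concurrent in Java 5, and its ReentrantReadWriteLock gives exactly the behaviour described - many concurrent readers, one exclusive writer. A minimal sketch (class and key names are invented):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadMostlyMap {
    private final Map<String, String> map = new HashMap<String, String>();
    private final ReadWriteLock lock = new ReentrantReadWriteLock();

    // Many readers may hold the read lock at once; they block only
    // while a writer holds the write lock.
    public String get(String key) {
        lock.readLock().lock();
        try {
            return map.get(key);
        } finally {
            lock.readLock().unlock();
        }
    }

    // The write lock is exclusive: it waits for readers to drain,
    // then blocks all other readers and writers until released.
    public void put(String key, String value) {
        lock.writeLock().lock();
        try {
            map.put(key, value);
        } finally {
            lock.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        ReadMostlyMap m = new ReadMostlyMap();
        m.put("a", "1");
        System.out.println(m.get("a")); // prints 1
    }
}
```

    This avoids hand-rolling the read/write flag described above, which, as noted, tends to end up as an overcomplicated double-checked lock.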

  • Storing a lot of data in an indexed data structure for quick access.

    I'm designing an app. which will need to store a large amount of data in memory. Records will be flowing into the app. via a socket. The app will receive about 30 records/second which is about 108,000 records/hour and about 600,000 records/day. I need to store the records in an indexed data structure so that I can access them quickly. For example, at 9:00am I will need to access records received at 8:30am, 8:35am, 8:40am, etc. This program will be multithreaded and as I understand Vector is the only data structure that is thread safe. Is Vector my only choice? How do I access objects in a Vector using an index? Is there something better that I can use?

    "Is Vector my only choice?" If you want to access the objects by key then you should use something like a HashMap. But if you want to access them by an array index then an ArrayList would be more appropriate.
    "As I understand, Vector is the only data structure that is thread safe." You can get a thread-safe version of any Collection object by using the Collections.synchronizedCollection method.
    "How do I access objects in a Vector using an index?" I'd suggest you read the API documentation, and probably the Sun tutorial on Collections at http://java.sun.com/docs/books/tutorial/collections/index.html
    "600,000 records/day." Unless you plan to dump old data after a short period of time, you may want to consider using a database to avoid running out of memory.
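    As a rough sketch of the "records indexed by arrival time" idea combined with the synchronized-wrapper advice above (the one-minute bucketing is my assumption, not something stated in the original post):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.SortedMap;
import java.util.TreeMap;

public class TimeIndexedStore {
    // Sorted map keyed by arrival minute, wrapped for thread safety.
    private final SortedMap<Long, List<String>> byMinute =
            Collections.synchronizedSortedMap(new TreeMap<Long, List<String>>());

    public void add(long timestampMillis, String record) {
        Long minute = timestampMillis / 60000L;
        // A check-then-act sequence needs its own synchronized block,
        // even on a synchronized wrapper (which locks per call, not per sequence).
        synchronized (byMinute) {
            List<String> bucket = byMinute.get(minute);
            if (bucket == null) {
                bucket = new ArrayList<String>();
                byMinute.put(minute, bucket);
            }
            bucket.add(record);
        }
    }

    public List<String> recordsAt(long timestampMillis) {
        synchronized (byMinute) {
            List<String> bucket = byMinute.get(timestampMillis / 60000L);
            return bucket == null ? new ArrayList<String>() : new ArrayList<String>(bucket);
        }
    }

    public static void main(String[] args) {
        TimeIndexedStore store = new TimeIndexedStore();
        store.add(0L, "r1");
        store.add(30000L, "r2"); // same minute as r1
        System.out.println(store.recordsAt(10000L).size()); // prints 2
    }
}
```

    Looking up "records received at 8:30am" then becomes a single keyed get on the minute bucket.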

  • How to synchronize concurrent access to static data in ABAP Objects

    Hi,
    1) First of all I would like to know the scope of static (CLASS-DATA) data of an ABAP Objects class: if a static data variable is changed, is that change visible to all concurrent processes on the same application server?
    2) If that is the case, how can concurrent access to such data (which can be shared between many processes) be controlled? In C one could use semaphores, and in Java synchronized methods and the monitor concept. But what controls are available in ABAP for controlling concurrent access to in-memory data?
    Many thanks for your help!
    Regards,
    Christian

    Hello Christian
    Here is an example that shows that the static attributes of a class are not shared between two reports that are linked via SUBMIT statement.
    *& Report  ZUS_SDN_OO_STATIC_ATTRIBUTES
    REPORT  zus_sdn_oo_static_attributes.
    DATA:
      gt_list        TYPE STANDARD TABLE OF abaplist,
      go_static      TYPE REF TO zcl_sdn_static_attributes.
    * CONSTRUCTOR method of class ZCL_SDN_STATIC_ATTRIBUTES:
    **METHOD constructor.
    *** define local data
    **  DATA:
    **    ld_msg    TYPE bapi_msg.
    **  ADD id_count TO md_count.
    **ENDMETHOD.
    * Static public attribute MD_COUNT (type i), initial value = 1
    PARAMETERS:
      p_called(1)  TYPE c  DEFAULT ' ' NO-DISPLAY.
    START-OF-SELECTION.
    * Initial state of static attribute:
    *    zcl_sdn_static_attributes=>md_count = 0
      syst-index = 0.
      WRITE: / syst-index, '. object: static counter=',
               zcl_sdn_static_attributes=>md_count.
      DO 5 TIMES.
    *   Every time sy-index is added to md_count
        CREATE OBJECT go_static
          EXPORTING
            id_count = syst-index.
        WRITE: / syst-index, '. object: static counter=',
                 zcl_sdn_static_attributes=>md_count.
    *   After the 3rd round we start the report again (via SUBMIT)
    *   and return the result via list memory.
    *   If the value of the static attribute is not reset we would
    *   start with initial value of md_count = 7 (1+1+2+3).
        IF ( p_called = ' '  AND
             syst-index = 3 ).
          SUBMIT zus_sdn_oo_static_attributes EXPORTING LIST TO MEMORY
            WITH p_called = 'X'
          AND RETURN.
          CALL FUNCTION 'LIST_FROM_MEMORY'
            TABLES
              listobject = gt_list
            EXCEPTIONS
              not_found  = 1
              OTHERS     = 2.
          IF sy-subrc <> 0.
    * MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
    *         WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
          ENDIF.
          CALL FUNCTION 'DISPLAY_LIST'
    *       EXPORTING
    *         FULLSCREEN                  =
    *         CALLER_HANDLES_EVENTS       =
    *         STARTING_X                  = 10
    *         STARTING_Y                  = 10
    *         ENDING_X                    = 60
    *         ENDING_Y                    = 20
    *       IMPORTING
    *         USER_COMMAND                =
            TABLES
              listobject                  = gt_list
            EXCEPTIONS
              empty_list                  = 1
              OTHERS                      = 2.
          IF sy-subrc <> 0.
    * MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
    *         WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
          ENDIF.
        ENDIF.
      ENDDO.
    * Result: in the 2nd run of the report (via SUBMIT) we get
    *         the same values for the static counter.
    END-OF-SELECTION.
    Regards
      Uwe

  • Shared data - concurrent access

    Entity beans are best used when shared data is being concurrently accessed.
    Could you please clarify this statement . How does it different than shared data - concurrent access by jdbc DAO class ?

    jverd wrote:
    "I have no idea what you're asking. I cannot provide clarification until you do."
    OK, let me explain further. My question is: whenever we need shared concurrent data access in an application, is an entity bean the best choice? Why? Does it really outsmart the JDBC DAO approach whenever it comes to the realm of shared concurrent data access?
    For example, I can think of a system such as an online auction house application. That needs concurrent shared data access, right?
    Do you think use of an entity bean is best here instead of the JDBC DAO approach? Why?

  • Is Serialization a thread safe / concurrent access safe action?

    Hi,
    Is Serialization a thread safe / concurrent access safe operation? If not, how can one make sure a Serializable object won't be modified during the serialization process?
    Thanks!

    Jrm wrote:
    "Is Serialization a thread safe / concurrent access safe operation?" Serialization is not inherently thread-safe. It is up to you to make it thread-safe.
    "If not, how can one make sure a Serializable object won't be modified during the serialization process?" Control access to the objects you want to serialize so that no modifications occur during serialization.
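    To make "control access during serialization" concrete, one common approach is to serialize while holding the same lock the mutators use. A minimal sketch (class names invented for illustration):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

class Counter implements Serializable {
    private static final long serialVersionUID = 1L;
    private int value;

    synchronized void increment() { value++; }
    synchronized int get() { return value; }

    // Serialization holds the same monitor as the mutators, so no
    // modification can happen while the object is being written out.
    byte[] toBytes() throws IOException {
        synchronized (this) {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            ObjectOutputStream oos = new ObjectOutputStream(bos);
            oos.writeObject(this);
            oos.close();
            return bos.toByteArray();
        }
    }
}

public class SafeSerializeDemo {
    public static void main(String[] args) throws IOException {
        Counter c = new Counter();
        c.increment();
        byte[] bytes = c.toBytes();
        System.out.println(bytes.length > 0); // prints true
    }
}
```

    If the object graph spans several objects with separate locks, you would need to take all of them in a fixed order, or snapshot the state into an immutable copy first and serialize the copy.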

  • How to access data structures in C dll from java thru JNI?

    We have been given APIs (a collection of C functions) from a vendor.
    The SDK from the vendor consists of: Libpga.DLL, Libpga.h and Libpga.lib, along with a sample program Receiver.h (I don't know whether it is written in C or C++; I guess .c stands for C files?).
    Considering that I don't know C or C++ (except that I can understand what that program is doing) and I have experience in VB6 and Java, in order to build an interface based on this API I have two options left: use these DLLs either from VB or from Java.
    As far as I know, calling this DLL in VB requires all the data structures and methods to be declared in VB; I guess that is not the case with Java (? I'm not sure).
    I experimented with calling these functions from Java through JNI, and I succeeded by writing a wrapper DLL. My question is whether I have to declare all the constants and data structures defined in the libpga.h file in Java, in order to use them in my Java program?
    Any suggesstion would be greatly appreciated,
    Vini

    1. There are generators around that claim to generate suitable wrappers, given some DLL input. I suggest you search Google. Try JACE, jni, wrapper, generator, .... Also, search back through this forum, where there have been suggestions made.
    2. In general, you will need to supply wrappers, and if you want to use data from the "C side" in java, then you will need java objects that hold the data.

  • Which data structure?

    Hi there...
    Could you let me know what is the most efficient data structure to use (in terms of storage and access) if I have to store around 1 million <key,value> pairs?
    Thanx :)
    Tariq

    Yes, concurrency is when two or more threads act upon the same data or data structure (concurrent access, etc.). Your question has nothing to do with concurrency, but rather with Collections. But yes, use a HashMap or Hashtable, depending on whether one thread or multiple threads will be accessing the same data. A database might come in handy as well, such as MySQL or JavaDB (Apache Derby).
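    A bare-bones illustration of the HashMap suggestion (pre-sizing the map is my addition; it avoids repeated rehashing while loading a million entries):

```java
import java.util.HashMap;
import java.util.Map;

public class MillionPairs {
    public static void main(String[] args) {
        // Pre-size for ~1M entries so the table never needs to grow
        // (default load factor 0.75, so 2M capacity covers 1M entries).
        Map<Integer, String> map = new HashMap<Integer, String>(2000000);
        for (int i = 0; i < 1000000; i++) {
            map.put(i, "value-" + i);
        }
        System.out.println(map.get(42)); // prints value-42
        System.out.println(map.size());  // prints 1000000
    }
}
```

    For multi-threaded access, wrap it with Collections.synchronizedMap as mentioned above.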

  • Is timesten still using T-tree as data structure?

    I just came across this paper - http://www.memdb.com/paper.pdf . The researcher does some experiments showing that a concurrent B-tree algorithm is actually faster than T-tree operations. What do you think about this paper? Do you think he is using an inefficient algorithm to access the T-tree? Or does TimesTen already know the limitations of T-trees and has it changed the internal data structure?

    Yes, we are aware of the comparisons between T-Trees, concurrent B-trees etc. At the moment TimesTen still uses T-trees but this may change in the future :-)
    Chris

  • Clusters as data structures

    I am looking for the best and simplest way to create and manage data structures in Labview.
    Most often I use clusters as data structures, however the thing I don't like about this approach is that when I pass the cluster to a subroutine, the subroutine needs a local copy of the cluster.
    If I change the cluster later (say I add a new data member), then I need to go through all the subroutines with local copies of the cluster and make the edit of the new member (delete/save/relink to sub-vi, etc).
    On a few occasions in the past, I've tried NI GOOP, but I find the extra overhead associated with this approach cumbersome, I don't want to have to write 'get' and 'set' methods for every integer and string, I like being able to access the cluster/object data via the "unbundle by name" feature.
    Is there a simple or clever way of having a single global reference to a data object (say a cluster) that is shared by a group of subroutines and which can then be used as a template as an input or output parameter? I might guess the answer is no because Labview is interpreted and so the data object has to be passed as a handle, which I guess is how GOOP works, and I have the choice of putting in the extra energy up front (using GOOP) or later (using clusters if I have to edit the data structure). Would it be advisable to just use a data cluster as a global variable?
    I'm curious how other programmers handle this. Is GOOP pretty widely used? Is it the best approach for creating maintainable LV software ?
    Alex

    Alex,
    Encapsulation of data is critical to maintaining a large program. You need global, but restricted, access to your data structures. You need a method that guarantees serial, atomic access so that your exposure to race conditions is minimized. Since LabVIEW is inherently multi-threaded, it is very easy to shoot yourself in the foot. I can feel your pain when you mention writing all those get and set VIs. However, I can tell you that it is far less painful than trying to debug a race condition. Making a LabVIEW object also forces you to think through your program structure ahead of time - not something we LabVIEW programmers are accustomed to doing, but very necessary for large program success. I have used three methods of data encapsulation.
    NI GOOP - You can get NI GOOP from the tutorial Graphical Object Oriented Programming (GOOP). It uses a code interface node to store the strict typedef data cluster. The wizard eases maintenance. Unfortunately, the code interface node forces you through the UI thread any time you access data, which dramatically slows performance (about an order of magnitude worse than the next couple of methods).
    Functional Globals - These are also called LV2 style globals or shift register globals. The zip file attached includes an NI-Week presentation on the basics of how to use this approach with an amusing example. The commercial Endevo GOOP toolkit now uses this method instead of the code interface node method.
    Single-Element Queues - The data is stored in a single element queue. You create the database by creating the queue and stuffing it with your data. A get function is implemented by popping the data from the queue, doing an unbundle by name, then pushing the data back into the queue. A set is done by popping the data from the queue, doing a bundle by name, then pushing the data back into the queue. You destroy the data by destroying the queue with a force destroy. By always pulling the element from the queue before doing any operation, you force any other caller to wait for the queue to have an element before executing. This serializes access to your database. I have just started using this approach and do not have a good example or lots of experience with it, but can post more info if you need it. Let me know.
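    The single-element-queue pattern is not LabVIEW-specific; for readers coming from Java, a rough analogue (my translation, not from the original post) uses a capacity-1 blocking queue:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class SingleElementQueueDemo {
    // The "database" lives as the sole element of a capacity-1 queue.
    private final BlockingQueue<Map<String, Integer>> q =
            new ArrayBlockingQueue<Map<String, Integer>>(1);

    SingleElementQueueDemo() {
        q.add(new HashMap<String, Integer>()); // stuff the queue with the data
    }

    // "get": pop the data, read it, push it back.
    // Any other caller blocks on take() until the element returns.
    int get(String key) throws InterruptedException {
        Map<String, Integer> data = q.take();
        try {
            Integer v = data.get(key);
            return v == null ? 0 : v;
        } finally {
            q.put(data);
        }
    }

    // "set": pop, modify, push back; the queue itself serializes access.
    void set(String key, int value) throws InterruptedException {
        Map<String, Integer> data = q.take();
        try {
            data.put(key, value);
        } finally {
            q.put(data);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        SingleElementQueueDemo db = new SingleElementQueueDemo();
        db.set("x", 7);
        System.out.println(db.get("x")); // prints 7
    }
}
```

    As described above, always removing the element before operating on it is what forces every other caller to wait, giving serial, atomic access without an explicit lock.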

  • Updating a hierarchical data structure from an entry processor

    I have a tree-like data structure that I am attempting to update from an AbstractProcessor.
    Imagine that one value is a collection of child value keys, and I want to add a new child node in the tree. This requires updating the parent node (which contains the list of child nodes), and adding the child value which is a separate entry.
    I would rather not combine all bits of data into one value (which could make for a large serialized object), as sometimes I prefer to access (read-only) the child values directly. The child and the parent values live in the same partition in the partitioned cache, though, so get access should be local.
    However, I am attempting to call put() on the same cache to add a child value which is apparently disallowed. It makes sense that a blocking call is involved in this operation, as it needs to push out this data to the cluster member that has the backup value for the same operation, but is there a general problem with performing any kind of re-entrant work on Coherence caches from an entry processor for any value that is not the value you are processing? I get the assertion below.
    I am fine with the context blocking (preventing reads or writes on the parent node value) until the child completes, presuming that I handle deadlock prevention myself due to the order in which values are accessed.
    Is there any way to do this, either with entry processors or not? My code previously used lock, get and put to operate on the tree (which worked), but I am trying to convert this code to use entry processors to be more efficient.
    2008-12-05 16:05:34.450 (ERROR)[Coherence/Logger@9219882 3.4/405]: Assertion failed: poll() is a blocking call and cannot be called on the Service thread
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.poll(Grid.CDB:4)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.poll(Grid.CDB:11)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.put(DistributedCache.CDB:30)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.put(DistributedCache.CDB:1)
         at com.tangosol.util.ConverterCollections$ConverterCacheMap.put(ConverterCollections.java:2433)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$ViewMap.put(DistributedCache.CDB:1)
         at com.tangosol.coherence.component.util.SafeNamedCache.put(SafeNamedCache.CDB:1)
         at com.tangosol.net.cache.CachingMap.put(CachingMap.java:928)
         at com.tangosol.net.cache.CachingMap.put(CachingMap.java:887)
         at com.tangosol.net.cache.NearCache.put(NearCache.java:286)
         at com.conduit.server.properties.CLDistributedPropertiesManager$UpdatePropertiesProcessor.process(CLDistributedPropertiesManager.java:249)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$Storage.invoke(DistributedCache.CDB:20)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onInvokeRequest(DistributedCache.CDB:50)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$InvokeRequest.run(DistributedCache.CDB:1)
         at com.tangosol.coherence.component.net.message.requestMessage.DistributedCacheKeyRequest.onReceived(DistributedCacheKeyRequest.CDB:12)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:9)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:130)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onNotify(DistributedCache.CDB:3)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:37)
         at java.lang.Thread.run(Thread.java:637)

    Hi,
    Reentrant calls to the same Coherence service are very much recommended against.
    For more about it, please look at the following Wiki page:
    http://wiki.tangosol.com/display/COH34UG/Constraints+on+Re-entrant+Calls
    Best regards,
    Robert
