Thread-safe increment()

class Counter {
    int count = 0;
    int increment() {
        int n = count;
        count = n + 1;
        return n;
    }
}

Can you make it thread-safe?
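A minimal sketch of one way to make it thread-safe, assuming the method should keep returning the previous value, is to replace the int with a java.util.concurrent.atomic.AtomicInteger:

    import java.util.concurrent.atomic.AtomicInteger;

    class Counter {
        private final AtomicInteger count = new AtomicInteger(0);

        // getAndIncrement() atomically returns the old value and adds one,
        // matching the original return-the-previous-value behaviour.
        int increment() {
            return count.getAndIncrement();
        }
    }

Declaring increment() as synchronized (and reading count only through a synchronized getter) would give the same guarantee with a plain int field.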

I need a good book for these topics:
Core Libraries: I/O, JavaBeans, Language and Utilities, Math
Integration Libraries: JDBC, JNDI, RMI
Java Language: Data Types, Expressions, Objects, Syntax
Media Programming: Image I/O, Java 2D, Print Service
Support Libraries: Internationalization, Java Management Extensions (JMX), Networking, Security, XML
Tools: Annotation Processing, Compiler, Deployment, Java Platform Debugger Architecture (JPDA), Javadoc
User Interface Libraries: Abstract Window Toolkit, Accessibility, Drag-and-Drop Data Transfer, Input Method Framework, Swing
Virtual Machine: Byte Codes, Hot Spot, Memory Management, Thread Management

Similar Messages

  • Hashtable really thread safe or is there a bug in it ?

    Hashtable is said to be thread-safe, so I think it should be true.
    So I just want someone to tell me what is wrong with my following observation:
    I downloaded the source code of jdk 1.3.1 and read the file Hashtable.java, that contains the implementation code of Hashtables.
    Lots of methods (like get(), put(), contains(), ...) are declared public and synchronized. So far, all is good.
    But the function size() is defined as follows:
    public int size() {
         return count;
    }
    where count is an instance variable defined as follows:
    // The total number of entries in the hash table.
    private transient int count;
    As the function size() is not synchronized, it can retrieve the value of "count" when another method is in the process of updating it.
    For instance, the method put() (that is synchronized) increments the variable "count" with this instruction:
    count++;
    But we know that operations on a variable of type "int" are atomic, so this issue doesn't harm the thread safety of the code.
    But there is another issue with multithreading, that deals with caching of data in the working memory of each thread.
    Suppose the following scenario:
    1) Thread A calls method size(). This method reads the value of "count" and puts it in the working memory of thread A.
    2) Thread B calls method put(). This method increments "count" using instruction "count++".
    3) Thread A calls method size() again. This method reads the value of "count" from the working memory of the thread, and this cached value may not be the value updated by thread B.
    As size() is not synchronized and "count" is not declared "volatile", multiple threads may retrieve a stale value, because the content of the thread's working memory is not in sync with main memory.
    So, here I see a problem.
    Is there something wrong in this observation?

    Is there something wrong in this observation?
    Unfortunately not, as far as I can tell.
    Recent analysis of Java's synchronized support has revealed the danger in such unsynchronized code blocks, so it may simply not have been realised that size() is not thread-safe.
    If this is an issue, use Collections.synchronizedMap - in Sun's JDK 1.3, at least, all of its methods are synchronized, except for iterators.
    Regards,
    -Troy
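    A minimal sketch of the kind of fix being discussed (hypothetical, not the actual JDK source): keep the increments inside synchronized methods, and make the counter volatile so that an unsynchronized size() still sees the latest write.

        class SimpleHashtable {
            // volatile guarantees that a read in size() sees the most recent
            // write performed inside a synchronized put()/remove()
            private volatile int count;

            public synchronized void put(Object key, Object value) {
                // ... insert the entry ...
                count++;          // still done under the lock, so increments don't race
            }

            public int size() {   // unsynchronized read, but no longer stale
                return count;
            }
        }

    Declaring size() itself as synchronized would work equally well; Collections.synchronizedMap effectively does that wrapping for you.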

  • Is singleton thread safe?

    hello all,
    please help me by answering these questions?
    The singleton pattern calls for the creation of a single instance that can be shared between classes.
    Can singletons be used in a multithreaded environment? Is a singleton thread-safe?
    Are threading problems a consequence of the pattern or programming language?
    thank you very much,
    Hiru

    Hi,
    Before explaining whether a singleton is thread-safe, I want to explain what "thread safe" means here.
    When multiple threads execute a single instance of a program and therefore share memory, multiple threads could possibly be attempting to read and write to the same place in memory. If we have a multithreaded program, we will have multiple threads processing the same instance. What happens when Thread-A examines instance variable x? Notice how Thread-B has just incremented instance variable x. The problem here is that Thread-A has written to the instance variable x and is not expecting that value to change unless Thread-A explicitly does so. Unfortunately Thread-B is thinking the same thing regarding itself; the only problem is they share the same variable.
    First method to make a program thread-safe: Avoidance
    To ensure we have our own unique variable instance for each thread, we simply move the declaration of the variable from within the class to within the method using it. We have now changed our variable from an instance variable to a local variable. The difference is that, for each call to the method, a new variable is created; therefore, each thread has its own variable. Before, when the variable was an instance variable, the variable was shared by all threads processing that class instance. The following thread-safe code has a subtle, yet important, difference.
    Second defense: Partial synchronization
    Thread synchronization is an important technique to know, but not one you want to throw at a solution unless required. Any time you synchronize blocks of code, you introduce bottlenecks into your system. When you synchronize a code block, you tell the JVM that only one thread may be within this synchronized block of code at a given moment. If we run a multithreaded application and a thread runs into a synchronized code block being executed by another thread, the second thread must wait until the first thread exits that block.
    It is important to accurately identify which code block truly needs to be synchronized and to synchronize as little as possible. In our example, we assume that making our instance variable a local variable is not an option.
    Third defense: Whole synchronization
    Here you make the whole class thread-safe by synchronizing it as a whole.
    These views are by Phillip Bridgham, M.S., a technologist for Comtech Integrated Systems. I'm inspired by this and am posting the same here.

    Was there a point in all of that? The posted singleton is thread-safe. Furthermore, some of that was misleading. A local variable is only duplicated if the method is synchronized, a premise I did not see established. Also, it is misleading to say that only one thread can be in a synchronized block of code at any time, because the same block may be using different monitors at runtime, in which case two threads could be in the same block of code at the same time.
    private Object lock;

    public void doSomething() {
        lock = new Object();
        synchronized (lock) {
            // Do something.
        }
    }

    It is not guaranteed that only one thread can enter that synchronized block, because every thread that calls doSomething() will end up synchronizing on a different monitor.
    This is a trivial example and obviously something no competent developer would do, but it illustrates that the statement assumes premises that I have not seen established. It would be more accurate to say that only one Thread can enter a synchronized block so long as it uses the same monitor.
    It's also not noted in the discussion of local variables vs instance variables that what he says only applies to primitives. When it comes to actual Objects, just because the variable holding the reference is unique to each Thread does not make use of it thread-safe if it points to an Object to which references are held elsewhere.
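    A minimal sketch of the point being made here: if the lock object is created once and never reassigned, every caller synchronizes on the same monitor and the block really is mutually exclusive (illustrative code, not from the original post).

        private final Object lock = new Object();   // shared, never reassigned

        public void doSomething() {
            synchronized (lock) {
                // Only one thread at a time can be here, because all
                // threads contend for the same monitor.
            }
        }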

  • Are i++, i-- atomic and thread safe for int i

    I want to minimize the sync time multiple threads take to contend for a shared pool, which is implemented as a FIFO. My plan is to:
    1. Use an int variable "count" to track the free object count, which eliminates the need for synchronized methods size() and isEmpty()
    2. Increment and decrement count when implementing push() and pop()
    3. Minimize the number of statements in the synchronized blocks within push() and pop(), which in this case excludes count++ and count-- provided these operations are atomic and thread safe
    Have I lost something ?

    So, may I draw a conclusion?
    To minimize contention, I use a FIFO instead of a LIFO (stack) and build it from scratch without extending an existing Collection-related class:
    1  public class FIFO {
    2      private volatile int count;
    3      private Object head, tail;
    4      public Object pop() {
    5          synchronized (tail) {
    6              if (count >= 3) {
    7                  Object tail_origin = tail;
    8                  // Re-organizing the linked list skipped
    9                  synchronized (this) {
    10                     count--;
    11                 }
    12                 return tail_origin;
    13             }
    14             else {
    15                 throw new PoolEmptyException();
    16             }
    17         }
    18     }
    19     public void push(Object o) {
    20         synchronized (head) {
    21             // Re-organizing the linked list skipped
    22             head = o;
    23             synchronized (this) {
    24                 count++;
    25             }
    26         }
    27     }
    28     public int sizeOf() {
    29         return count;
    30     }
    31 }
    1. The reason I chose a FIFO is to avoid contention between push() and pop() -- they are NOT declared synchronized; instead they only lock head and tail respectively. (Sorry, I borrowed the names push and pop from a stack.)
    2. pop() only works when count >= 3, to avoid head == tail after pop() and hence a further call to push() blocking a concurrent call to pop(), and vice versa (see lines 5 and 20)
    3. The goal is to minimize the time and scope of contention. In lines 9 and 23, I lock the whole pool object only when modifying count, which is not atomic.
    4. This pool is used for a thread pool supporting many concurrent short-lived HTTP requests
    Have I missed something ?
    Thanks a lot!
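    For what it's worth, count++ and count-- are not atomic even on a volatile int (each is a read-modify-write). A minimal sketch of the usual alternative, using java.util.concurrent.atomic.AtomicInteger for the counter (illustrative names, not the original code):

        import java.util.concurrent.atomic.AtomicInteger;

        public class PoolCounter {
            private final AtomicInteger count = new AtomicInteger(0);

            void afterPush() {
                count.incrementAndGet();   // atomic; no synchronized (this) block needed
            }

            void afterPop() {
                count.decrementAndGet();
            }

            public int sizeOf() {
                return count.get();        // may be momentarily stale, but never corrupt
            }
        }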

  • Discussion: AtomicBuffer - lock-free and thread-safe

    Hi,
    Since I'm always interested in discussions, I posted the following code for a lock-free, thread-safe buffer implementation.
    In this case the buffer stores (Message,Throwable, Level) triplets but that's not the point here.
    The interesting parts are addEntry() and getNextSlot(). It's actually the first time, that I tried to make use of the atomic package in a real application.
    I'm looking forward to any kind of valuable comment, may it be criticism or praise :)
    Cheers, Daniel
    /**
     * Array-based buffer for storing message-throwable-level triples.
     * Uses CAS for lock-free, thread-safe access.
     * @author dgloeckner
     */
    public class AtomicBuffer {
         private final int mSize;
         private Object[] mMessageBuffer;
         private Throwable[] mThrowableBuffer;
         private Level[] mLevels;
         private AtomicInteger mNextSlot;
         private AtomicInteger mDiscardedMessages;

         public AtomicBuffer(int pSize) {
              super();
              mSize = pSize;
              mMessageBuffer = new Object[pSize];
              mThrowableBuffer = new Throwable[pSize];
              mLevels = new Level[pSize];
              mNextSlot = new AtomicInteger(0);
              mDiscardedMessages = new AtomicInteger(0);
         }

         /**
          * Uses CAS for lock-free, thread-safe buffer access.
          * @param pMessage log message
          * @param pThrowable throwable for the message (may be null)
          * @param pLevel log level
          */
         public void addEntry(Object pMessage, Throwable pThrowable, Level pLevel) {
              int lSlot = getNextSlot();
              if (lSlot < mSize) {
                   mMessageBuffer[lSlot] = pMessage;
                   mLevels[lSlot] = pLevel;
                   mThrowableBuffer[lSlot] = pThrowable;
              } else {
                   // Buffer is full
                   mDiscardedMessages.incrementAndGet();
              }
         }

         /**
          * Returns the next free slot.
          * Only increments the next slot counter if free space is left.
          * @return next free slot
          */
         protected int getNextSlot() {
              int lNextSlot = 0;
              // CAS loop: check whether the value read from mNextSlot
              // is smaller than the buffer size. If not, the loop exits.
              // If yes, the loop tries to increment mNextSlot using CAS.
              do {
                   lNextSlot = mNextSlot.get();
                   if (lNextSlot >= mSize) {
                        break;
                   }
              } while (!mNextSlot.compareAndSet(lNextSlot, lNextSlot + 1));
              return lNextSlot;
         }

         public int getSize() {
              return mSize;
         }

         public int getDiscardedMessages() {
              return mDiscardedMessages.get();
         }

         public Level[] getLevels() {
              return mLevels;
         }

         public Object[] getMessages() {
              return mMessageBuffer;
         }

         public Throwable[] getThrowables() {
              return mThrowableBuffer;
         }

         /**
          * Returns the number of filled slots.
          * @return number of filled slots
          */
         public int getFilledSlots() {
              return mNextSlot.get();
         }
    }

    Hi,
    OK, so we were talking about different specifications.
    If you are sure that before the point in time you mentioned only writes take place, and after that point only reads, then it's an absolutely good situation.
    Sorry, I was very wrong, you're right - some (or most) implementations use hardware CAS. Hey that's a nice feature, isn't it?
    I can see that you are using the loop to determine whether another thread has accessed the counter (or something similar). I think this technique is great;
    however, you might end up with a thread waiting for a single buffer space while the others proceed (although this is not likely).
    Anyway, I don't really know why would I want to use a code this complicated.
    What about these lines:
    public void addEntry(Object pMessage, Throwable pThrowable, Level pLevel) {
        int lSlot = mNextSlot.incrementAndGet();
        // So this has incremented the value (now remember to instantiate with -1) atomically,
        // fast, with no locking and no loops.
        // Here I don't really worry about the real value of this int, as I only want to know
        // whether my entry fits into the buffer.
        if (lSlot < mSize) {
            mMessageBuffer[lSlot] = pMessage;
            mLevels[lSlot] = pLevel;
            mThrowableBuffer[lSlot] = pThrowable;
        }
    }

    public int getDiscardedMessages() {
        // Here I can reuse the information (i.e. the difference), which you would have discarded.
        return mNextSlot.get() - mSize;
    }

    To use this code for general purposes, one could modify it like this:

    // in the constructor:
    fillableSize = pSize;
    mSize = pSize + OVERFLOW; // where OVERFLOW is a number that satisfies the conditions in a given environment

    public boolean isFull() {
        return mNextSlot.get() > fillableSize;
    }

    // I am sure that there is a thread periodically checking / storing the log itself,
    // whose core should be something like this:
    if (currentAtomicBuffer.isFull()) { // or a given time period is exceeded
        AtomicBuffer newAtomicBuffer = new AtomicBuffer(BUFFER_SIZE);
        logThisToDatabase(lastAtomicBuffer);    // log the previous one - there should not be any writer threads
        lastAtomicBuffer = currentAtomicBuffer; // this is the previous one - note that there might be some threads writing to it, but this does not mean any problem
        currentAtomicBuffer = newAtomicBuffer;  // from now on, write to the new buffer
    }

    For monitoring purposes, you can still use getDiscardedMessages(): it will tell you whether OVERFLOW is set right. Anyway, it should always return 0. However, one might want to catch the InterruptedException (thrown in the thread that empties the buffer) and handle it with a last logThisToDatabase call on lastAtomicBuffer and currentAtomicBuffer as well.
    Kind Regards,
    Zoltan
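    A small side note on the reservation idea above (illustrative, not from the thread): AtomicInteger.getAndIncrement() returns the pre-increment value, so using it instead of incrementAndGet() gives each caller a unique slot index without the "instantiate with -1" trick:

        public void addEntry(Object pMessage, Throwable pThrowable, Level pLevel) {
            // Each caller atomically reserves the next index; no lock, no CAS loop.
            int lSlot = mNextSlot.getAndIncrement();
            if (lSlot < mSize) {
                mMessageBuffer[lSlot] = pMessage;
                mLevels[lSlot] = pLevel;
                mThrowableBuffer[lSlot] = pThrowable;
            } else {
                mDiscardedMessages.incrementAndGet();
            }
        }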

  • SSRS - Is there a multi thread safe way of displaying information from a DataSet in a Report Header?

    In order to dynamically display data in the report header based on the current record of the dataset, we started using shared variables. We initially used ReportItems!SomeTextbox.Value, but we noticed that when SomeTextbox was not rendered in the body
    (usually because a comment section grew to occupy most of the page, if not more than one page), the ReportItem printed a blank/null value.
    So, a method was defined in the Code section of the report that would set the value to the shared variable:
    public shared Params as String
    public shared Function SetValues(Param as String ) as String
    Params = Param
    Return Params 
    End Function
    Which would be called in the detail section of the tablix, then in the header a textbox would hold the following expression:
    =Code.Params
    This worked beautifully since it now didn't matter that the body section didn't have the SetValues call; the variable persisted and the header displayed the correct value. Our problem now is that when the report is being called in different threads with
    different data, the shared/static variable gets modified by all the reports being run at the same time.
    So far I've tried several things:
    - The variables need to be shared, otherwise the value set in the Body can't be seen by the header.
    - Using Hashtables behaves exactly like the ReportItem option.
    - Using a C# DLL with non static variables to take care of this, didn't work because apparently when the DLL is being called by the Body generates a different instance of the DLL than when it's called from the header.
    So is there a way to deal with this issue in a multi thread safe way?
    Thanks in advance!
     

    Hi Angel,
    Per my understanding, you want to dynamically display the group data in the report header. You have set a page break based on the group, so when clicking to the next page the report header changes according to the value in the group; when you use
    the shared variables you get the multi-thread safety problem, right?
    I have tested in my local environment and can reproduce the issue. Given the multi-threading problem, the better way is to use a Hashtable in the custom code. You mentioned that you tried the Hashtable but got
    the same result as using ReportItems!TextBox.Value; that can be caused by the logic of the code not working correctly.
    Please refer to the custom code below, which works fine and gets the expected value displayed on every page:
    Shared ht As System.Collections.Hashtable = New System.Collections.Hashtable

    Public Function SetGroupHeader(ByVal group As Object _
            , ByRef groupName As String _
            , ByRef userID As String) As String
        Dim key As String = groupName & userID
        If Not group Is Nothing Then
            Dim g As String = CType(group, String)
            If Not (ht.ContainsKey(key)) Then
                ' must be the first pass so set the current group to group
                ht.Add(key, g)
            Else
                If Not (ht(key).Equals(g)) Then
                    ht(key) = g
                End If
            End If
        End If
        Return ht(key)
    End Function
    Use this expression in the textbox of the report header:
    =Code.SetGroupHeader(ReportItems!Language.Value, "GroupName", User!UserID)
    The links below about the Hashtable and the multi-thread safety problem are for your reference:
    http://stackoverflow.com/questions/2067537/ssrs-code-shared-variables-and-simultaneous-report-execution
    http://sqlserverbiblog.wordpress.com/2011/10/10/using-custom-code-functions-in-reporting-services-reports/
    If you still have any problem, please feel free to ask.
    Regards
    Vicky Liu

  • How can I use a Selector in a thread safe way?

    Hello,
    I'm using a server socket with a java.nio.channels.Selector concurrently from 3 different threads (this number may change in the future).
    From the javadoc: Selectors are themselves safe for use by multiple concurrent threads; their key sets, however, are not.
    Following this advice, I wrote code in this way:
    List keys = new LinkedList(); // private key list for each thread
    while (true) {
        keys.clear();
        synchronized (selector) {
            int num = selector.select();
            if (num == 0)
                continue;
            Set selectedKeys = selector.selectedKeys();
            // I expected this code to produce disjoint key sets on each thread...
            keys.addAll(selectedKeys);
            selectedKeys.clear();
        }
        Iterator it = keys.iterator();
        while (it.hasNext()) {
            SelectionKey key = (SelectionKey) it.next();
            if ((key.readyOps() & SelectionKey.OP_ACCEPT) == SelectionKey.OP_ACCEPT) {
                Socket s = serverSocket.accept();
                SocketChannel sc = s.getChannel();
                sc.configureBlocking(false);
                sc.register(selector, SelectionKey.OP_READ);
            } else if ((key.readyOps() & SelectionKey.OP_READ) == SelectionKey.OP_READ) {
                // .....
            }
        }
    }

    Unfortunately, synchronizing on the selector didn't have the effect I expected. When another thread select()s, it sees the same key list as the thread that select()ed previously. When control arrives at serverSocket.accept(), one thread goes ahead and the other two catch an IllegalBlockingModeException.
    I don't want to just handle this exception; the right thing to do is to give disjoint key sets to each thread. How can I achieve this goal?
    Thanks in advance

    A single thread won't be enough because after reading data from the socket I do some processing on it that may take long.
    So dispatch that processing to a separate thread.
    Most of this processing is I/O bound.
    I/O bound on the socket, or on something else? If it's I/O bound on the socket, that's even more of a reason to use a single thread.
    Anyway, I think I'll use a single thread with the selector, put incoming data in a queue and let the other 2 or 3 threads read from it.
    Precisely. Ditto for outbound data.
    Thanks for your replies. But I'm still curious: why is a selector thread-safe if it can't be used by multiple threads because of its semantics?
    It can, but there are synchronization issues to overcome (with Selector.wakeup()), and generally the cost of solving these is much higher than the cost of a single-threaded selector solution with worker threads for the application processing.
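    A minimal sketch of the single-selector-thread pattern recommended above, with the slow per-connection work handed off to a small worker pool (class names, port and buffer size are illustrative, not from the original post):

        import java.net.InetSocketAddress;
        import java.nio.ByteBuffer;
        import java.nio.channels.*;
        import java.util.Iterator;
        import java.util.concurrent.*;

        public class SelectorLoop {
            private final ExecutorService workers = Executors.newFixedThreadPool(3);

            public void run() throws Exception {
                Selector selector = Selector.open();
                ServerSocketChannel server = ServerSocketChannel.open();
                server.socket().bind(new InetSocketAddress(8080));
                server.configureBlocking(false);
                server.register(selector, SelectionKey.OP_ACCEPT);

                // Only this thread ever touches the selector and its key sets.
                while (true) {
                    if (selector.select() == 0) continue;
                    Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                    while (it.hasNext()) {
                        SelectionKey key = it.next();
                        it.remove();
                        if (key.isAcceptable()) {
                            SocketChannel sc = server.accept();
                            sc.configureBlocking(false);
                            sc.register(selector, SelectionKey.OP_READ);
                        } else if (key.isReadable()) {
                            ByteBuffer buf = ByteBuffer.allocate(4096);
                            ((SocketChannel) key.channel()).read(buf);
                            buf.flip();
                            // Hand the potentially slow processing to a worker thread,
                            // keeping the selector loop itself single-threaded.
                            workers.submit(() -> process(buf));
                        }
                    }
                }
            }

            private void process(ByteBuffer data) {
                // application-specific work goes here
            }
        }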

  • Is the Illustrator SDK thread-safe?

    After searching this forum and the Illustrator SDK documentation, I can't find any references to a discussion about threading issues using the Illustrator C++ SDK. There is only a reference in some header files as to whether menu text is threaded, without any explanation.
    I take this to mean that probably the Illustrator SDK is not "thread-safe" (i.e., it is not safe to make API calls from arbitrary threads; you should only call the API from the thread that calls into your plug-in). Does anyone know this to be the case, or not?
    If it is the case, the normal way I'd write a plug-in to respond to requests from other applications for drawing services would be through a mutex-protected queue. In other words, when Illustrator calls the plug-in at application startup time, the plug-in could set up a mutually exclusive lock (a mutex), start a thread that could respond to requests from other applications, and request periodic idle processing time from the application. When such a request arrived from another application at an arbitrary time, the thread could respond by locking the queue, adding a request to the queue for drawing services in some format that the plug-in would define, and unlocking the queue. The next time the application called the plugin with an idle event, the queue could be locked, pulled from, and unlocked. Whatever request had been pulled could then be serviced with Illustrator API calls. Does anyone know whether that is a workable strategy for Illustrator?
    I assume it probably is, because that seems to be the way the ScriptingSupport.aip plug-in works. I did a simple test with three instances of a Visual Basic generated EXE file. All three were able to make overlapping requests to Illustrator, and each request was worked upon in turn, with intermediate results from each request arriving in turn. This was a simple test to add some "Hello, World" text and export some jpegs,
    repeatedly.
    Any advice would be greatly appreciated!
    Glenn Picher
    Dirigo Multimedia, Inc.
    [email protected]


  • Is the Memory Suite thread safe?

    Hi all,
    Is the memory suite thread safe (at least when used from the Exporter context)?
    I ask because I have many threads getting and freeing memory and I've found that I get back null sometimes. This, I suspect, is the problem that's all the talk in the user forum with CS6 crashing with CUDA enabled. I'm starting to suspect that there is a memory management problem when there is also a lot of memory allocation and freeing going on by the CUDA driver. It seems that the faster the nVidia card the more likely it is to crash. That would suggest the CUDA driver (ie the code that manages the scheduling of the CUDA kernels) is in some way coupled to the memory use by Adobe or by Windows alloc|free too.
    I replaced the memory functions with _aligned_malloc|free and it seems far more reliable. Maybe it's because the OS malloc|free are thread safe or maybe it's because it's pulling from a different pool of memory (vs the Memory Suite's pool or the CUDA pool)
    comments?
    Edward

    Zac Lam wrote:
    The Memory Suite does pull from a specific memory pool that is set based on the user-specified Memory settings in the Preferences.  If you use standard OS calls, then you could end up allocating memory beyond the user-specified settings, whereas using the Memory Suite will help you stick to the Memory settings in the Preferences.
    When you get back NULL when allocating memory, are you hitting the upper boundaries of your memory usage?  Are you getting any error code returned from the function calls themselves?
    I am not hitting the upper memory bounds - I have several customers that have 10's of Gb free.
    There is no error return code from the ->NewPtr() call.
         PrMemoryPtr (*NewPtr)(csSDK_uint32 byteCount);
    A NULL pointer is how you detect a problem.
    Note that changing the size of the ->ReserveMemory() doesn't seem to make any difference as to whether you'll get a memory ptr or NULL back.
    btw my NewPtr size is either
         W x H x sizeof(PrPixelFormat_YUVA4444_32f)
         W x H x sizeof(PrPixelFormat_YUVA4444_8u)
    and happens concurrently on #cpu's threads (eg 16 to 32 instances at once is pretty common).
    The more processing power that the nVidia card has seems to make it fall over faster.
    eg I don't see it at all on a GTS 250 but do on a GTX 480, Quadro 4000 & 5000 and GTX 660
    I think there is a threading issue and an issue with the Memory Suite's pool and how it interacts with the CUDA memory pool. - note that CUDA sets RESERVED (aka locked) memory which can easily cause a fragmenting problem if you're not using the OS memory handler.

  • Can use the same thread safe variable in the different processes?

    Hello,
    Can I use the same thread-safe variable in different processes? My application has a log file used to record some events; the log file will be accessed by different processes. Is there any way to synchronize access to the log file with CVI?
    David

    Limiting concurrent access to shared resources is better obtained by using locks: once created, the lock can be acquired by one requester at a time by calling CmtGetLock, the others being blocked in the same call until the lock is free. If you do not want to block a process you can use CmtTryToGetLock instead.
    Don't forget to discard locks when you have finished using them, or at program end.
    Alternatively you can PostDeferredCall a unique function (executed in the main thread) to write the log, passing the appropriate data to it.
    Proud to use LW/CVI from 3.1 on.

  • Native library NOT thread safe - how to use it via JNI?

    Hello,
    has anybody ever tried to use a native library from JNI, when the library is not thread safe?
    The library (a Windows DLL) was up to now used in an MFC app and thus was only used by one user - that meant one thread - at a time.
    Now we would like to use the library like a "server": many Java clients connect to the library at the same time via JNI. That would mean each client makes its calls to the library in its own thread. Because the library is not thread safe, this would cause problems.
    Now we discussed loading the library several times - separately for each client (for each thread).
    Is this possible at all? How can we do that?
    And do you think we can solve the problem in this way?
    Are there other ways to use the library, though it is not thread safe?
    Any ideas welcome.
    Thanks for any contributions to the discussion, Ina

    (1)
    has anybody ever tried to use a native library from
    JNI, when the library (Windows DLL) is not thread safe?
    Now we want many Java clients.
    That would mean each client makes its calls
    to the library in its own thread. Because the library
    is not thread safe, this would cause problems.
    Right. And therefore you have to encapsulate the DLL behind a properly synchronized interface class.
    Now the details of how you have to do that depends: (a) does the DLL contain state information other than TLS? (b) do you know which methods are not thread-safe?
    Depending on (a), (b) two extremes are both possible:
    One extreme would be to get an instance of the interface to the DLL from a factory method you'll have to write, where the factory method will block until it can give you "the DLL". Every client thread would obtain "the DLL", then use it, then release it. That would make the whole thing a "client-driven" "dedicated" server. If a client forgets to release the DLL, everybody else is going to be locked out. :-(
    The other extreme would be just to mirror the DLL methods, and mark the relevant ones as synchronized. That should be doable if (a) is false, and (b) is true.
    (2)
    Now we discussed to load the library several times -
    separately for each client (for each thread).
    Is this possible at all? How can we do that?
    And do you think we can solve the problem in this way?
    The DLL is going to be mapped into the process address space on first usage. More Java threads just means adding more references to the same DLL instance.
    That would not result in thread-safe behavior.
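    A minimal sketch of the second extreme described above (mirror the DLL's methods and mark them synchronized); the class, method and library names are hypothetical, not from the original post:

        public final class NativeLibGateway {
            static {
                System.loadLibrary("nativelib");   // hypothetical DLL name
            }

            // Every caller goes through this one instance, so the synchronized
            // method guarantees that only one thread is inside the DLL at a time.
            private static final NativeLibGateway INSTANCE = new NativeLibGateway();

            public static NativeLibGateway getInstance() {
                return INSTANCE;
            }

            private NativeLibGateway() { }

            public synchronized String doWork(String input) {
                return nativeDoWork(input);        // hypothetical native method
            }

            private native String nativeDoWork(String input);
        }

    As the answer above points out, this only helps if the DLL keeps no per-client state other than TLS.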

  • Is Persistence context thread safe?  nop

    In my code, I have a servlet and 2 stateless session beans.
    In the servlet, I use JNDI to find a stateless bean (A) and invoke a method methodA in A:
    (Stateless bean A):
    @EJB
    StatelessBeanB beanB;
    methodA(obj) {
        beanB.save(obj);
    }
    (Stateless bean B, where the container injects a persistence context):
    @PersistenceContext private EntityManager em;
    save(obj) {
        em.persist(obj);
    }
    It is said that the entity manager is not thread-safe, so it should not be an instance variable. But in the code above, the variable "em" is an instance variable of stateless bean B. Does it make sense to put it there using the @PersistenceContext annotation? Is it then thread-safe when the manager is injected by the container?
    Since B is a stateless session bean, I think each request will share the instance variables of class B, which is not thread-safe.
    So, could you please give me some guidance on this problem and make it clear to me?
    Any help is appreciated. Thank you.
    And what if I change stateless bean B to:
    (Stateless bean B, where the container injects a persistence unit):
    @PersistenceUnit private EntityManagerFactory emf;
    save(obj) {
        EntityManager em = emf.createEntityManager();
        // begin tran
        em.persist(obj);
        // commit tran
        // close em
    }
    The problem is I have several stateless beans like B; if each one has an emf injected, is there any problem?

    Hi Jacky,
    An EntityManager object is not thread-safe. However, that's not a problem when accessing an EntityManager from EJB bean instance state. The EJB container guarantees that no more than one thread has access to a particular bean instance at any given time. In the case of stateless session beans, the container uses as many distinct bean instances as there are concurrent requests.
    Storing an EntityManager object in servlet instance state would be a problem, since in the servlet programming model any number of concurrent threads share the same servlet instance.
    --ken
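    A minimal sketch of the two situations ken describes (hypothetical class names; each class would live in its own file): instance state is fine in a stateless bean, while a servlet should keep only the thread-safe EntityManagerFactory and create a short-lived EntityManager per request.

        import javax.ejb.Stateless;
        import javax.persistence.*;
        import javax.servlet.http.*;

        // Safe: the container guarantees one thread per bean instance at a time.
        @Stateless
        public class OrderService {
            @PersistenceContext
            private EntityManager em;   // instance state is fine here

            public void save(Object entity) {
                em.persist(entity);
            }
        }

        // In a servlet, many threads share one instance, so keep only the factory.
        public class OrderServlet extends HttpServlet {
            @PersistenceUnit
            private EntityManagerFactory emf;   // the factory is thread-safe

            @Override
            protected void doPost(HttpServletRequest req, HttpServletResponse resp) {
                EntityManager em = emf.createEntityManager();
                try {
                    // use em for this request only
                } finally {
                    em.close();
                }
            }
        }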

  • Web service client proxy thread safe?

    Is the jax-ws web service client proxy and port objects thread safe in Weblogic 10.3.4?
    Been searching through the docs but can't find any info on this.

    I have searched, and it seems that with CXF it's safe to do something like this.
    Does nobody know whether this is safe using Metro?

  • Thread-safe design pattern for encapsulating Collections?

    Hi,
    This must be a common problem, but I just can't get my head around it.
    Basically I'm reading in data from a process, and creating/updating data structures from this data. I've got a whole bunch of APTProperty objects (each with a name/value) stored in a synchronized HashMap encapsulated by a class called APTProperties. It has methods such as:
    addProperty(APTProperty) // puts to hashmap
    getProperty(String name) // retrieves from hashmap
    I also want clients to be able to iterate through all the APTProperties in a thread-safe manner, i.e. they aren't responsible for synchronizing on the Map before getting the iterator. (See the Collections.synchronizedMap() API docs.)
    Is there any way of doing this?
    Or should I just have a method called
    Iterator propertyIterator()
    which returns the corresponding iterator of the HashMap (but then I can't really make it thread-safe)?
    I'd rather not make the internal HashMap visible to calling clients, if possible, because I don't want people tinkering with it, if you know what I mean.
    Hope this makes sense.
    Keith

    In that case, I think you need to provide your own locking mechanism.

    public class APTProperties {
        // the add, get and remove methods call lock(), execute their own logic, then call unlock()

        // your locking mechanism
        private Thread owner; // doesn't need to be volatile due to synchronized access

        public synchronized void lock() {
            while (owner != null) {
                try {
                    wait();
                } catch (InterruptedException e) {
                    // keep waiting for the lock
                }
            }
            owner = Thread.currentThread();
        }

        public synchronized void unlock() {
            if (owner == Thread.currentThread()) {
                owner = null;
                notifyAll();
            } else {
                throw new IllegalMonitorStateException("Thread: " + Thread.currentThread() + " does not own the lock");
            }
        }
    }

    Now you don't gain a lot from this code, since a client has to use it as:
    objAPTProperties.lock();
    Iterator ite = objAPTProperties.propertyIterator();
    ... do whatever is needed
    objAPTProperties.unlock();
    But if you know how the iterator will be used, you can make the client unaware of the locking mechanism. Let's say your clients will always go all the way to the end of the iterator. In that case, you can use a decorator iterator to handle the locking:

    public class APTProperties {

        public Iterator getThreadSafePropertyIterator() {
            // wrap the iterator of the encapsulated map (field not shown here)
            return new ThreadSafeIterator(propertyMap.values().iterator());
        }

        private class ThreadSafeIterator implements Iterator {
            private Iterator itr;
            private boolean locked;
            private boolean doneIterating;

            public ThreadSafeIterator(Iterator itr) {
                this.itr = itr;
            }

            public boolean hasNext() {
                return doneIterating ? false : itr.hasNext();
            }

            public Object next() {
                lockAsNecessary();
                Object obj = itr.next();
                unlockAsNecessary();
                return obj;
            }

            public void remove() {
                lockAsNecessary();
                itr.remove();
                unlockAsNecessary();
            }

            private void lockAsNecessary() {
                if (!locked) {
                    lock();
                    locked = true;
                }
            }

            private void unlockAsNecessary() {
                if (!hasNext()) {
                    unlock();
                    doneIterating = true;
                }
            }
        }
    } // APTProperties ends

    The code is right out of my head, so it may have some logic problems, but the basic idea should work.
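    An alternative sketch, not from this thread, with signatures simplified for illustration: hand clients an iterator over a snapshot copy, so no client-visible locking is needed at all (the trade-off is that the snapshot may be slightly stale).

        import java.util.*;

        public class APTProperties {
            private final Map properties = Collections.synchronizedMap(new HashMap());

            public void addProperty(String name, Object value) {
                properties.put(name, value);
            }

            public Object getProperty(String name) {
                return properties.get(name);
            }

            // Copy the values while holding the map's lock, then let the caller
            // iterate the copy without any further synchronization.
            public Iterator propertyIterator() {
                synchronized (properties) {
                    return new ArrayList(properties.values()).iterator();
                }
            }
        }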

  • Thread safe ?

    Hi
    This code is currently running on my companys server in a java class implementing the singleton pattern:
    public static ShoppingAssistant getInstance() {
        if (instance == null) {
            synchronized (ShoppingAssistant.class) {
                instance = new ShoppingAssistant();
            }
        }
        return instance;
    }

    1. Is there really a need for synchronizing the method?
    2. If you choose to synchronize it, why not just synchronize the whole method?
    3. Is this method really thread-safe? As I see it, a thread A could be preempted after checking instance and seeing that it is null. Then another thread B could get to run, also see that instance is null, and go on and create the instance. Thereafter B modifies some of the instance's instance variables. After that B is preempted and A gets to run. A, thinking instance is null, goes on and creates a new instance of the class and assigns it to instance, thus overwriting the old instance and creating a faulty state.
    By altering the code like this I think that problem is solved, although the code will probably run slower. Is that correct?

    public static ShoppingAssistant getInstance() {
        synchronized (ShoppingAssistant.class) {
            if (instance == null) {
                instance = new ShoppingAssistant();
            }
        }
        return instance;
    }

    Regards, Mattias

    public class Singleton {
        private static final Singleton instance = new Singleton();

        public static Singleton getInstance() {
            return instance;
        }
    }

    This seems like it's so obviously the standard solution, why do people keep trying to give lazy solutions? Do they expect them to be more efficient than this?
    I can see one possible performance gain with a lazy solution:
    If you use something else in that class (besides getInstance()), but don't use getInstance, then you don't take the performance hit of instantiating the thing.
    Of course, this is dubious at best because how often do you have other static methods in your singleton class? And not use the instance? And have construction be so expensive that a single unnecessary one has a noticeably detrimental impact on your app's performance?
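    If lazy initialization really is wanted, a common alternative not shown above is the initialization-on-demand holder idiom, which relies on the JVM's class-loading guarantees instead of explicit synchronization; a minimal sketch:

        public class ShoppingAssistant {
            private ShoppingAssistant() { }

            // The nested class is not loaded, and the instance not created,
            // until getInstance() is called for the first time.
            private static class Holder {
                static final ShoppingAssistant INSTANCE = new ShoppingAssistant();
            }

            public static ShoppingAssistant getInstance() {
                return Holder.INSTANCE;
            }
        }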
