Synchronized - volatile

Can somebody confirm the following thought?

int count;
synchronized (this) {
    ++count;
}

is safer than

volatile int count;
++count;

I understand that the probability of having trouble with the volatile version is extremely small, but am I correct in saying that this probability is not zero? If I'm right, is there any use for volatile variables in "production" software which cannot fail?

The synchronized code is correct because it makes the read-increment-write sequence atomic with respect to any other thread that synchronizes on the same lock. The volatile code is not correct if other threads also update the count variable: ++count is a separate read and write, so two threads can interleave and lose an update.
"I understand that the probability of having trouble with the volatile version is extremely small."

I don't believe you can make any prediction about the possibility of problems by not properly synchronizing your code. A good guess is that you almost certainly will have a problem with the volatile code.

"But am I correct in saying that this probability is not zero? If I'm right, is there any use for volatile variables in 'production' software which cannot fail?"

Yes, there is a usage of volatile that is correct -- if you merely read or write a volatile variable, each thread will see the proper value. You could achieve the same goal by using synchronized code, but you would have to remember to put all accesses of the variable in synchronized code, and that implementation may be slower than simply making the variable volatile.

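For example, here is a minimal sketch contrasting the two correct uses (class and field names are just for illustration): a compound update such as an increment needs a lock (or an AtomicInteger), while a flag that is only read and written whole can simply be volatile.

public class CounterVsFlag {
    private int count;                     // compound updates: guard with a lock
    private volatile boolean shutdown;     // plain reads/writes: volatile is enough

    public synchronized void incrementCount() {
        ++count;                           // read-increment-write happens under one lock
    }

    public synchronized int getCount() {
        return count;                      // readers use the same lock to see the latest value
    }

    public void requestShutdown() {
        shutdown = true;                   // volatile write: becomes visible to all threads
    }

    public boolean isShutdownRequested() {
        return shutdown;
    }
}
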
Similar Messages

  • Use of volatile modifier

    Can anyone explain to me the use of the volatile modifier?

    What does volatile do?
    This is probably best explained by comparing the effects that volatile and synchronized have on a method. volatile is a field modifier, while synchronized modifies code blocks and methods. So we can specify three variations of a simple accessor using these two keywords:
    int i1; int geti1() {return i1;}
    volatile int i2; int geti2() {return i2;}
    int i3; synchronized int geti3() {return i3;}
    geti1() accesses the value currently stored in i1 in the current thread. Threads can have local copies of variables, and the data does not have to be the same as the data held in other threads. In particular, another thread may have updated i1 in its thread, but the value in the current thread could be different from that updated value. In fact Java has the idea of a "main" memory, and this is the memory that holds the current "correct" value for variables. Threads can have their own copy of data for variables, and the thread copy can be different from "main" memory. So in fact, it is possible for "main" memory to have a value of 1 for i1, for thread1 to have a value of 2 for i1 and for thread2 to have a value of 3 for i1 if thread1 and thread2 have both updated i1 but those updated values have not yet been propagated to "main" memory or other threads.
    On the other hand, geti2() effectively accesses the value of i2 from "main" memory. A volatile variable is not allowed to have a local copy of a variable that is different from the value currently held in "main" memory. Effectively, a variable declared volatile must have its data synchronized across all threads, so that whenever you access or update the variable in any thread, all other threads immediately see the same value. Of course, it is likely that volatile variables have a higher access and update overhead than "plain" variables, since the reason threads can have their own copy of data is for better efficiency.
    Well if volatile already synchronizes data across threads, what is synchronized for? Well there are two differences. Firstly synchronized obtains and releases locks on monitors which can force only one thread at a time to execute a code block, if both threads use the same monitor (effectively the same object lock). That's the fairly well known aspect to synchronized. But synchronized also synchronizes memory. In fact synchronized synchronizes the whole of thread memory with "main" memory. So executing geti3() does the following:
    1. The thread acquires the lock on the monitor for object this (assuming the monitor is unlocked, otherwise the thread waits until the monitor is unlocked).
    2. The thread memory flushes all its variables, i.e. it has all of its variables effectively read from "main" memory (JVMs can use dirty sets to optimize this so that only "dirty" variables are flushed, but conceptually this is the same. See section 17.9 of the Java language specification).
    3. The code block is executed (in this case setting the return value to the current value of i3, which may have just been reset from "main" memory).
    4. (Any changes to variables would normally now be written out to "main" memory, but for geti3() we have no changes.)
    5. The thread releases the lock on the monitor for object this.
    So where volatile only synchronizes the value of one variable between thread memory and "main" memory, synchronized synchronizes the value of all variables between thread memory and "main" memory, and locks and releases a monitor to boot. Clearly synchronized is likely to have more overhead than volatile.
    I got the above information from the following link:
    http://www.javaperformancetuning.com/news/qotm030.shtml
    We can also describe the volatile modifier as follows.
    Volatile: the volatile modifier is mainly used with multiple threads. Java allows threads to keep private working copies of shared variables (caches). These working copies need to be updated with the master copies in main memory.
    It is possible for that data to get out of sync. To avoid this data corruption, use the volatile modifier or synchronized. volatile means everything is done in main memory only, not in the private working copies (caches); volatile primitives cannot be cached.
    So volatile guarantees that any thread will read the most recently written value, because the values live in main memory, not in a cache. Volatile fields can also be slower than non-volatile fields for the same reason, but volatile is useful to avoid visibility problems (a small sketch at the end of this message illustrates the effect).
    The above information I got from:
    http://cephas.net/blog/2003/02/17/using-the-volatile-keyword-in-java/
    That's all; I think you now understand what the volatile modifier is. If you still have any doubts in Java, you can reach me at [email protected]
    With Regards,
    M.Sudheer.
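    A small sketch of the visibility effect described above (assuming a typical HotSpot JVM): with the volatile modifier removed from the field, the reader thread may keep using a cached value and never terminate.
    public class VisibilityDemo {
        // Without volatile, the reader thread may never see the main thread's update.
        static volatile boolean stop = false;

        public static void main(String[] args) throws InterruptedException {
            Thread reader = new Thread(new Runnable() {
                public void run() {
                    while (!stop) {
                        // busy-wait until the write to 'stop' becomes visible
                    }
                    System.out.println("Reader saw stop == true");
                }
            });
            reader.start();

            Thread.sleep(1000);
            stop = true;   // volatile write: guaranteed to become visible to the reader
            reader.join();
        }
    }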

  • Question on synchronized method / block and Thread Cache

    Hi all,
    I came across a blog post at http://thejavacodemonkey.blogspot.com/2007/08/making-your-java-class-thread-safe.html which talks about making Java classes thread-safe. But I was surprised to see that a thread can have my class's instance variable in its cache. Also, I have a couple of questions on the things posted in the blog. I want to get the opinion of the Java experts here.
    1. Given the example class
    class MyClass {
         private int x;
         public synchronized void setX(int x) { this.x = x; }
         public synchronized int getX() { return this.x; }
    }
    Having the following instructions in two threads won't guarantee that it will print 1 and 2, as the other thread can get the lock and call setX to modify the value in between - am I right?
    (obj is a MyClass instance available to both threads.)
    Thread 1:
    obj.setX(1);
    System.out.println(obj.getX());
    Thread 2:
    obj.setX(2);
    System.out.println(obj.getX());
    It will print 1 and 2 (in any order) only if I synchronize these calls on "obj" as follows - is my understanding correct?
    Thread 1:
    synchronized (obj) {
        obj.setX(1);
        System.out.println(obj.getX());
    }
    Thread 2:
    synchronized (obj) {
        obj.setX(2);
        System.out.println(obj.getX());
    }
    2. If my understanding on point 1 (given above) is right, what the blog post says is wrong, as I cannot even expect my thread 1 to print 1 and thread 2 to print 2. Then again, a question arises as to why a thread can have an object's instance variable value in its cache and why I need to make my instance variable volatile. Can anyone explain this in detail? Won't the thread always refer to the heap for an object's instance variable value?
    Thanks in advance,
    With regards,
    R Kaja Mohideen

    Your basic understanding (as far as I can understand what you've written) seems to be correct. If you run your first 2 threads, you can get "11", "12", "21", or "22" (ignoring newlines). If you run your second 2 threads, you can get "12" or "21".
    I'm not sure I follow your second point about your thread's "cache". I think you are asking about the visibility of changes between threads, and, no, there is no concept of a shared "heap" in the memory model. Basically, (conceptually) each thread has its own working memory, and it only shares updates to that memory when it has to (i.e. when a synchronization point is encountered, e.g. synchronized, volatile, etc). If every thread were forced to work out of a shared "heap", Java on multi-core systems would be fairly useless.

  • Servletcontext question

    If I have an array of integers in the ServletContext and I will be doing reads only on it, should this be declared as synchronized/volatile?
    Also, how do I pull out that array from the ServletContext in say a function that is under the ServletContext but does not have the import javax.servlet.* and javax.servlet.http.*? Do I have to just pass the array from the servlet?
    Thanks

    Reme wrote:
    "If I have an array of integers in the ServletContext and I will be doing reads only on it, should this be declared as synchronized/volatile?"
    Wrap it in a javabean with only a getter.
    "Also, how do I pull out that array from the ServletContext in say a function that is under the ServletContext but does not have the import javax.servlet.* and javax.servlet.http.*? Do I have to just pass the array from the servlet?"
    I don't understand you. Please use the right terminology at the correct places.
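    A minimal sketch of the "wrap it in a javabean with only a getter" suggestion; the attribute name and bean class are made up for illustration:
    import javax.servlet.ServletContext;

    // Immutable holder: the array is copied in and only copies are handed out,
    // so concurrent readers can never observe a modification.
    public class LookupCodes {
        private final int[] codes;

        public LookupCodes(int[] codes) {
            this.codes = codes.clone();
        }

        public int[] getCodes() {
            return codes.clone();
        }
    }

    // Published once, e.g. in the servlet's init():
    //     context.setAttribute("lookupCodes", new LookupCodes(values));
    // Read anywhere the ServletContext is available:
    //     LookupCodes lookup = (LookupCodes) context.getAttribute("lookupCodes");
    //     int[] values = lookup.getCodes();
    If the array is set up once before any requests are served and never modified afterwards, the usual practice is that no synchronization or volatile is needed for the reads.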

  • Is volatile necessary for variables only accessed inside synchronized block

    Hi,
    I am using ExecutorService to execute a set of threads. Then the calling thread needs to wait until all of them are done to reuse the thread pool to run another set of threads (so I can't use the ExecutorService.shutdown() to wait for all of the threads at this point). So I write a simple monitor as below to coordinate the threads.
    My question is: will it work? Someone suggests that it might not work because it will busy-spin on the non-volatile int, which may or may not be updated with the current value from another thread depending on the whims of the JVM. But I believe that variables accessed inside synchronized blocks should always be current. Can anyone please help me clarify this? I'd really appreciate it.
    /** Simple synchronization class to allow a thread to wait until a set of threads are done. */
    class ThreadCoordinator {
        private int totalActive = 0;

        public synchronized void increment() {
            totalActive++;
            notifyAll();
        }

        public synchronized void decrement() {
            totalActive--;
            notifyAll();
        }

        public synchronized void waitForAll() {
            while (totalActive != 0) {
                try {
                    wait();
                } catch (InterruptedException e) {
                    // ignore
                }
            }
        }
    }

    Don't do that. Just save the Futures returned by the ExecutorService, and call get() on them all. This will only return when all the tasks have finished.
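    A minimal sketch of that approach, assuming the tasks are Runnables submitted to an existing pool (names are just for illustration):
    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class WaitForBatch {
        public static void main(String[] args) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(4);

            // Submit the first batch and keep the Futures.
            List<Future<?>> futures = new ArrayList<Future<?>>();
            for (int i = 0; i < 10; i++) {
                final int id = i;
                futures.add(pool.submit(new Runnable() {
                    public void run() {
                        System.out.println("task " + id + " running");
                    }
                }));
            }

            // get() blocks until the corresponding task has finished,
            // so after this loop the whole batch is done.
            for (Future<?> f : futures) {
                f.get();
            }

            // The same pool can now be reused for the next batch.
            pool.shutdown();
        }
    }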

  • Volatile vs synchronized

    Does anyone have any opinions or references on the performance advantage (or disadvantage) of using a volatile variable as opposed to synchronizing on some object to reference it?

    Here's a little code snippet I dreamed up and the results
    public class Test {
        static volatile int thisInt = 0;
        static int thatInt = 0;
        static int theirInt = 0;

        public static void testVolatile() {
            long startTime = System.currentTimeMillis();
            for (int i = 0; i < 100000000; i++)
                thisInt = i;
            long stopTime = System.currentTimeMillis();
            System.out.println("Total time for volatile: " + (stopTime - startTime));
        }

        public static synchronized void testSynchronized() {
            long startTime = System.currentTimeMillis();
            for (int i = 0; i < 100000000; i++)
                thatInt = i;
            long stopTime = System.currentTimeMillis();
            System.out.println("Total time for synchronized: " + (stopTime - startTime));
        }

        public static void testFineSynchronized() {
            long startTime = System.currentTimeMillis();
            for (int i = 0; i < 100000000; i++) {
                synchronized (Test.class) {
                    theirInt = i;
                }
            }
            long stopTime = System.currentTimeMillis();
            System.out.println("Total time for fine synchronized: " + (stopTime - startTime));
        }

        public static void main(String args[]) {
            Test.testVolatile();
            Test.testSynchronized();
            Test.testFineSynchronized();
        }
    }
    java version "1.3.1"
    Java(TM) 2 Runtime Environment, Standard Edition (build 1.3.1-b24)
    Java HotSpot(TM) Client VM (build 1.3.1-b24, mixed mode)
    The computer is a single processor PII 400Mhz
    Total time for volatile: 1332
    Total time for synchronized: 1572
    Total time for fine synchronized: 95107

  • Which object's monitor does a synchronized method acquire?

    from the Java Tutorial for concurrency programming:
    " When a thread invokes a synchronized method, it automatically acquires the intrinsic lock _for that method's object_ and releases it when the method returns. The lock release occurs even if the return was caused by an uncaught exception. "
    what exactly does this mean?
    do synchronized methods acquire the monitors for objects of type java.lang.reflect.Method?
    please consider this code:
    public class Foo {
      private int counter = 0;
      public synchronized void incriment() { counter++; }
      public synchronized void decriment() { counter--; }
    }
    Foo f = new Foo();
    Class[] sig = new Class[0];
    Method m = f.getClass().getMethod("incriment", sig);
    // ok. so "m" is the relevant method object.
    f.incriment(); // <-- is the monitor for "m" ,
                          // or the monitor for "f", acquired?
    My reading of the Concurrency Tutorial is that synchronized methods use the monitors of java.lang.reflect.Method objects?
    and thus, Foo is not thread safe, right?
    however, this simple change makes Foo thread-safe?
    public class Foo {
      private volatile int counter = 0; // "volatile"
      public void incriment() { counter++; }
      public void decriment() { counter--; }
    }
    thanks.
    Edited by: kogose on Feb 23, 2009 7:13 PM

    kogose wrote:
    "what exactly does this mean?"
    tensorfield wrote:
    "It means you're complicating things. If a method is synchronized, it is. You don't need to go beyond that. The method is synchronized."
    jverd wrote:
    "Not true. You have to know what it means for a method to be synchronized. Often people come in with the erroneous impression that it somehow prevents you from using or accessing the object in any other thread."
    tensorfield wrote:
    "It's very simple. If a synchronized method is called at the same time from many threads, only one call will be executed at a time. The calls will be lined up and performed one after the other in sequence. AND because synchronization is on a per-object basis, when one synchronized method is being called from one thread, all synchronized methods of that same object are blocked for calling from other threads. Simple as that."
    No, it's not that simple, and as stated, that is not correct. In particular, you didn't mention that for an instance method, all the various threads have to be trying to call instance methods on the same object in order for execution to be sequential.
    You really can't understand Java's syncing without understanding how it relates to locks, and what it means for a method to be synchronized in terms of which lock it acquires.
    Edited by: jverd on Feb 25, 2009 2:47 PM
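    To make that concrete: a synchronized instance method locks the monitor of the object the method is invoked on (f in the example above), not a java.lang.reflect.Method object, and marking the counter volatile does not make ++ atomic. A small sketch:
    public class Counter {
        private int counter = 0;

        // Locks the monitor of the instance it is called on ("this"),
        // i.e. c.increment() locks c.
        public synchronized void increment() { counter++; }

        // Equivalent form that makes the lock explicit:
        public void incrementExplicit() {
            synchronized (this) {
                counter++;
            }
        }

        // NOT equivalent: volatile only guarantees visibility; counter++ is still
        // a separate read and write, so concurrent updates can be lost.
        // private volatile int counter;
        // public void increment() { counter++; }
    }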

  • Use of volatile for variable in jdk 1.6 on Linux platforms

    Hello,
    I ran the following code on my computer (dual core). This code just creates two threads: one adds one to a value contained in an object and the other subtracts one from it:
    // File: MultiCore.java
    // Synopsis: At the end of some executions the value of a is not equal
    //           to 0 (at least for one of the threads) even if we do the
    //           same number of ++ and --
    // Java Context:  java version "1.6.0_11"
    //                Java(TM) SE Runtime Environment (build 1.6.0_11-b03)
    //                Java HotSpot(TM) Server VM (build 11.0-b16, mixed mode)
    // Linux Context: Linux 2.6.27-12-generic i686 GNU/Linux
    // Author: L. Philippe
    // Date: 03/10/09

    class MaData {
        public int a;

        public MaData() { a = 0; }
        public synchronized void setA(int val) { a = val; }
        public synchronized int getA() { return a; }
    }

    class MonThread extends Thread {
        MaData md;
        int nb, nb_iter;

        public MonThread(MaData ref, int rnb) {
            md = ref;
            nb = rnb;
            nb_iter = 1000;
        }

        public void run() {
            if (nb == 0) {
                for (int i = 0; i < nb_iter; i++) {
                    // Increment MaData
                    md.setA(md.getA() + 1);
                }
            } else {
                for (int i = 0; i < nb_iter; i++) {
                    // Decrement MaData
                    md.setA(md.getA() - 1);
                }
            }
            System.out.println(Thread.currentThread().getName() + " a= " + md.a);
        }
    }

    public class MultiCore {
        volatile static MaData md;

        public static void main(String args[]) {
            try {
                // Data to be shared
                md = new MaData();
                MonThread mt1 = new MonThread(md, 0);
                MonThread mt2 = new MonThread(md, 1);
                mt1.start();
                mt2.start();
                mt1.join();
                mt2.join();
            } catch (Exception ex) {
                System.out.println(ex);
            }
        }
    }
    This is the result I got:
    Thread-0 a= -734
    Thread-1 a= -801
    This is OK for the first one, but the second should obviously be 0. To me that means that volatile does not work and the threads just access their caches on different cores. Can someone check that?
    Thanks,
    Laurent

    Why should the second line obviously be zero?
    I don't think that even if you make 'a' volatile, setA(getA() + 1) suddenly becomes atomic, because it is really:
    int temp = getA();
    // if the other thread updates a here, temp does not get updated
    temp = temp + 1;
    setA(temp);
    So you'll need synchronized increment/decrement methods on MaData, or use an AtomicInteger.
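    A minimal sketch of the AtomicInteger approach mentioned above, reusing the poster's class name for illustration:
    import java.util.concurrent.atomic.AtomicInteger;

    class MaData {
        private final AtomicInteger a = new AtomicInteger(0);

        // incrementAndGet/decrementAndGet perform the read-modify-write
        // as a single atomic operation, so no synchronized block is needed.
        public void increment() { a.incrementAndGet(); }
        public void decrement() { a.decrementAndGet(); }
        public int getA() { return a.get(); }
    }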

  • JMM: legal to optimize non-volatile flag out of particular loop condition?

    Does the Java Memory Model allow the JIT compiler to optimize a non-volatile flag out of loop conditions in code like the following...
    class NonVolatileConditionInLoop {
      private int number;
      private boolean writingReady = true; // non-volatile, always handled inside synchronized block

      public synchronized void setNumber(int n) {
        while (!writingReady) { // non-volatile flag in loop condition
          try { wait(); }
          catch (InterruptedException e) { e.printStackTrace(); }
        }
        this.number = n;
        this.writingReady = false;
        notifyAll();
      }

      public synchronized int getNumber() {
        while (writingReady) { // non-volatile flag in loop condition
          try { wait(); }
          catch (InterruptedException e) { e.printStackTrace(); }
        }
        this.writingReady = true;
        notifyAll();
        return number;
      }
    }
    ...so that it will execute like this:
    class NonVolatileConditionInLoopHacked {
      private int number;
      private boolean writingReady = true; // non-volatile, always handled inside synchronized block

      public synchronized void setNumber(int n) {
        if (!writingReady) { // moved out of loop condition
          while (true) {
            try { wait(); }
            catch (InterruptedException e) { e.printStackTrace(); }
          }
        }
        this.number = n;
        this.writingReady = false;
        notifyAll();
      }

      public synchronized int getNumber() {
        if (writingReady) { // moved out of loop condition
          while (true) {
            try { wait(); }
            catch (InterruptedException e) { e.printStackTrace(); }
          }
        }
        this.writingReady = true;
        notifyAll();
        return number;
      }
    }
    This question was recently discussed in one of the threads at the New To Java forum: http://forums.sun.com/thread.jspa?messageID=11001801#11001801
    My take on it is that optimization like above is legal. From the perspective of single-threaded program, repeated checks for writingReady are redundant because it is not modified within the loop. As far as I understand, unless explicitly forced by volatile modifier (and in our case it is not), optimizing compiler "has a right" to optimize based on single-thread execution assumption.
    Opposite opinion is that JMM prohibits such an optimization because methods containing the loop(s) are synchronized.

    One of the problems with wait() and your proposed optimization is that "interrupts and spurious wakeups are possible" from wait(). See http://java.sun.com/javase/6/docs/api/java/lang/Object.html#wait() Therefore your wait() would loop without re-checking the condition if this optimization occurred and an interrupt or spurious wake-up happened. For this reason I do not believe writingReady would be hoisted out of the loop. Also, the code isn't even equivalent: once all the threads wake up due to the notifyAll(), they would spin in the while(true) and wait() again. I don't think the JMM prohibits such an optimization because the methods containing the loop(s) are synchronized, but because they contain a wait(). The wait() is a kind of temporary flow-control escape out of the loop.
    Example:
    writingReady is true
    Thread A calls getNumber(). It waits().
    Thread B calls setNumber(). It calls notifyAll() and writingReady is now false;
    Thread A wakes up in getNumber(), stays in the while(true) loop, and wait()s again. // Big problem.

  • Synchronizing many reader threads with one writer thread?

    Hi
    I was wondering if there is a way in java to allow different threads to read an object simultaneously however to block them all only when the object is being updated. Here is an example:
    I have the following object which is shared between many Servlet instances:
    public class ActiveFlights {
         private final HashMap<String, Flight> flights;
         private final Object lock;

         public ActiveFlights() {
              flights = new HashMap<String, Flight>();
              lock = new Object();
         }

         public void updateFlights(ArrayList<FlightData> newFlights) {
              synchronized (lock) {
                   //some code which updates the HashMap
              }
         }

         public ArrayList<Flight> getSomeFlights() {
              ArrayList<Flight> wantedFlights = new ArrayList<Flight>();
              synchronized (lock) {
                   //some code which selects flights from the HashMap
              }
              return wantedFlights;
         }
    }
    Now all the Servlet doGet() functions call the getSomeFlights() method. There is also a Timer object which calls the updateFlights() method once every few minutes.
    I need the synchronized blocks so that the Timer doesn't try to update my object while it is being read by a Servlet; however, this is not optimal, since it also causes each Servlet doGet() to block the others even though it would be possible for two doGet()s to read the object simultaneously.
    Is there a better way of doing this?
    Thanks
    Aharon

    It is highly unlikely this is a real performance issue for you. Unless you know this is a bottleneck, you don't need to change your code.
    However, as an exercise, you can just use ConcurrentHashMap for lockless Map access. However, there is still a risk of getting a read in the middle of a write.
    Instead you can take a snapshot copy of the Map and use this snapshot for reads. See below.
    In terms of coding style:
    - I suggest you use the Map and List interfaces wherever possible.
    - You don't need to create a separate lock object; the Map will do the same job.
    - You can create the HashMap on the same line as it is declared, removing the need to define a constructor.
    public class ActiveFlights {
         private final Map<String, Flight> flights = new HashMap<String, Flight>();
         private volatile Map<String, Flight> flightCopy = new HashMap<String, Flight>();

         public void updateFlights(List<FlightData> newFlights) {
              //some code which updates the HashMap
              // this takes a snapshot copy of the flights.  Use the copy for reads.
              flightCopy = new HashMap<String, Flight>(flights);
         }

         public List<Flight> getSomeFlights() {
              // take a copy of the reference; neither the reference, nor the map it refers to, will change over the life of this method call.
              final Map<String, Flight> flightCopy = this.flightCopy;
              final List<Flight> wantedFlights = new ArrayList<Flight>();
              //some code which selects flightCopy from the HashMap
              return wantedFlights;
         }
    }
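    Another common way to get "many concurrent readers, one exclusive writer" (not mentioned in the replies, just a sketch reusing the poster's Flight and FlightData types) is a ReentrantReadWriteLock:
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.locks.ReadWriteLock;
    import java.util.concurrent.locks.ReentrantReadWriteLock;

    public class ActiveFlightsRW {
        private final Map<String, Flight> flights = new HashMap<String, Flight>();
        private final ReadWriteLock lock = new ReentrantReadWriteLock();

        public void updateFlights(List<FlightData> newFlights) {
            lock.writeLock().lock();   // exclusive: blocks all readers and writers
            try {
                // some code which updates the map
            } finally {
                lock.writeLock().unlock();
            }
        }

        public List<Flight> getSomeFlights() {
            List<Flight> wantedFlights = new ArrayList<Flight>();
            lock.readLock().lock();    // shared: many readers may hold this lock at once
            try {
                // some code which selects flights from the map
            } finally {
                lock.readLock().unlock();
            }
            return wantedFlights;
        }
    }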

  • Reflecting a synchronized field

    If I have a class which defines a synchronized field, and I use reflection to get the corresponding Field object, do the set and get methods of that Field object acquire the appropriate locks (on the object or class, depending on whether it is an instance or a static field) before setting and getting? I haven't found anything helpful here or via Google.

    From the 1.4.2 JLS (I suspect little has changed)
    8.3.1 Field Modifiers
    FieldModifiers:
         FieldModifier
         FieldModifiers FieldModifier
    FieldModifier: one of
         public protected private
         static final transient volatile
    There is no synchronized keyword here. Why modifying a field with synchronized does not cause the compiler to choke, I don't know.
    In response to the poster who said that the volatile modifier renders concurrent access to a field impossible, this is completely untrue. The volatile modifier tells the VM that a primitive value should always be stored in main memory and not in a Thread Local cache. This ensures that concurrent access of such variables is always done on the one "copy" available and not on a possibly out-of-synch version local to the Thread.
    I am not certain if the JLS and the VM spec together require VMs to ensure that object references are not cached, although this would make sense. In any event, it does not hurt performance if you make reference variables volatile, because the actual object is always to be found on the heap.

  • Performance problem with synchronized singleton

    I'm using the singleton pattern to cache incoming JMS Message data from a 3rd party. I'm seeing terrible performance though, and I think it's because I've misunderstood something.
    My singleton class stores incoming JMS messages in a HashMap, so that successive messages can be checked to see if they are a new piece of data, or an update to an earlier one.
    I followed the traditional examples of a private constructor and a public getInstance method, and applied the double-checked locking to the latter. However, a colleague then suggested that all my other methods in the same class should also be synchronized - is this the case or am I creating an unnecessary performance bottleneck? Or have I unwittingly created that bottleneck elsewhere?
    package com.mycode;

    import java.util.HashMap;
    import java.util.Iterator;

    public class DataCache {
        private volatile static DataCache uniqueInstance;
        private HashMap<String, DataCacheElement> dataCache;

        private DataCache() {
            if (dataCache == null) {
                dataCache = new HashMap<String, DataCacheElement>();
            }
        }

        public static DataCache getInstance() {
            if (uniqueInstance == null) {
                synchronized (DataCache.class) {
                    if (uniqueInstance == null) {
                        uniqueInstance = new DataCache();
                    }
                }
            }
            return uniqueInstance;
        }

        public synchronized void put(String uniqueID, DataCacheElement dataCacheElement) {
            dataCache.put(uniqueID, dataCacheElement);
        }

        public synchronized DataCacheElement get(String uniqueID) {
            DataCacheElement dataCacheElement = (DataCacheElement) dataCache.get(uniqueID);
            return dataCacheElement;
        }

        public synchronized void remove(String uniqueID) {
            dataCache.remove(uniqueID);
        }

        public synchronized int getCacheSize() {
            return dataCache.keySet().size();
        }

        /**
         * Flushes all objects from the cache that are older than the
         * expiry time.
         * @param expiryTime (long milliseconds)
         */
        public synchronized void flush(long expiryTime) {
            String uniqueID;
            long currentDate = System.currentTimeMillis();
            long compareDate = currentDate - (expiryTime);
            Iterator<String> iterator = dataCache.keySet().iterator();
            while (iterator.hasNext()) {
                // Get element by unique key
                uniqueID = (String) iterator.next();
                DataCacheElement dataCacheElement = (DataCacheElement) get(uniqueID);
                // get time from element
                long lastUpdatedDate = dataCacheElement.getUpdatedDate();
                // if time is greater than 1 day, remove element from cache
                if (lastUpdatedDate < compareDate) {
                    remove(uniqueID);
                }
            }
        }

        public synchronized void empty() {
            dataCache.clear();
        }
    }

    m0thr4 wrote:
    "I [...] applied the double-checked locking"
    SunFred wrote:
    "Which is broken. http://www.ibm.com/developerworks/java/library/j-dcl.html"
    m0thr4 wrote:
    "From the link: 'The theory behind double-checked locking is perfect. Unfortunately, reality is entirely different. The problem with double-checked locking is that there is no guarantee it will work on single or multi-processor machines. The issue of the failure of double-checked locking is not due to implementation bugs in JVMs but to the current Java platform memory model. The memory model allows what is known as "out-of-order writes" and is a prime reason why this idiom fails.'
    I had a read of that article and have a couple of questions about it:
    1. The article was written way back in May 2002 - is the issue they describe relevant to Java 6's memory model?"
    DCL will work starting with 1.4 or 1.5, if you make the variable you're testing volatile. However, there's no reason to do it.
    Lazy instantiation is almost never appropriate, and for those rare times when it is, use a nested class to hold your instance reference. (There are examples if you search for them.) I'd be willing to bet lazy instantiation is not appropriate in your case, so you don't need to muck with syncing or DCL or any of that nonsense.
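    A minimal sketch of that nested-class (initialization-on-demand holder) idiom, using the DataCache name from the question:
    public class DataCache {

        private DataCache() {
            // expensive setup here
        }

        // The JVM does not initialize Holder until getInstance() first references it,
        // and class initialization is guaranteed to be thread-safe, so no volatile
        // field or double-checked locking is needed.
        private static class Holder {
            static final DataCache INSTANCE = new DataCache();
        }

        public static DataCache getInstance() {
            return Holder.INSTANCE;
        }
    }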

  • Synchronizing on an int shouldn't be necessary, should it?

    The following example consists of 2 threads that both increment a static int by 1 and decrement it by 1.
    Both threads are stopped abruptly after half a second.
    What I don't understand is why there are other results possible than 0, 1 or 2.
    Synchronizing on a shared object helps, but this shouldn't be necessary. What is going on here?
    import java.io.*;
    import java.util.*;
    public class Test {
        static int a1 = 0;

        public static void main(String[] args) throws Exception {
           Thread t1 = new Thread(new Runnable() {
              public void run() {
                 while (true) { a1++; a1--; }
              }
           });
           Thread t2 = new Thread(new Runnable() {
              public void run() {
                 while (true) { a1++; a1--; }
              }
           });
           t1.start();
           t2.start();
           Thread.sleep(500);
           t1.stop();
           t2.stop();
           System.out.println("Finished: " + a1);
        } //main
    } // Test

    Found this:
    "In order to provide good performance in the absence of synchronization, the compiler, runtime, and cache are generally allowed to reorder ordinary memory operations as long as the currently executing thread cannot tell the difference. (This is referred to as within-thread as-if-serial semantics.) Volatile reads and writes, however, are totally ordered across threads; the compiler or cache cannot reorder volatile reads and writes with each other. Unfortunately, the JMM did allow volatile reads and writes to be reordered with respect to ordinary variable reads and writes, meaning that we cannot use volatile flags as an indication of what operations have been completed."
    http://www-106.ibm.com/developerworks/java/library/j-jtp02244.html#6.0
    So it looks like it can be done at any level :(
    /Kaj

    Actually, that looks like it prohibits reordering a++/a-- if a is volatile, which the OP said he did.
    I've also heard, though, that the JMM is broken and that some VMs don't implement volatile correctly. I don't know which ones and what problems they have, but that may have something to do with what he's observing.

  • Atomic operation and volatile variables

    Hi ,
    I have one volatile variable declared as
    private volatile long _volatileKey=0;
    This variable is being incremented (++_volatileKey) by a method which is not synchronized. Could there be a problem if more than one thread tries to change the variable?
    In short, is the ++ operation atomic in the case of volatile variables?
    Thanks
    Sumukh

    Google for "sun java volatile": http://www.google.co.uk/search?q=sun+java+volatile
    http://www.javaperformancetuning.com/tips/volatile.shtml
    The volatile modifier requests that the Java VM always access the shared copy of the variable, so that its most current value is always read. If two or more threads access a member variable, AND one or more threads might change that variable's value, AND ALL of the threads do not use synchronization (methods or blocks) to read and/or write the value, then that member variable must be declared volatile to ensure all threads see the changed value.
    Note however that volatile has been incompletely implemented in most JVMs. Using volatile may not help to achieve the results you desire (yes this is a JVM bug, but its been low priority until recently).
    http://cephas.net/blog/2003/02/17/using_the_volatile_keyword_in_java.html
    Careful, volatile is ignored or at least not implemented properly on many common JVM's, including (last time I checked) Sun's JVM 1.3.1 for Windows.
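    To answer the original question directly: ++ on a volatile long is not atomic even though the field is volatile, because the increment is a separate read, add, and write, so concurrent increments can be lost. A minimal sketch of the usual fix with java.util.concurrent.atomic (JDK 5+); class and method names are just for illustration:
    import java.util.concurrent.atomic.AtomicLong;

    public class KeyGenerator {
        // Replaces "private volatile long _volatileKey": the increment is
        // performed as a single atomic read-modify-write.
        private final AtomicLong key = new AtomicLong(0);

        public long nextKey() {
            return key.incrementAndGet();
        }
    }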

  • Concurrent, non-synchronized, lazy initialization: is this correct?

    Guys, I would appreciate a sanity check on some code of mine.
    I have a class, call it State. Every instance is associated with a key of some type. Each user will know the key it wants to use. It will present that key to State to request the corresponding State instance. They must always receive back that sole instance that corresponds to key. The keys are unknown until runtime, so you must lazy initialize a State instance when first presented with a given key.
    If concurrency was not an issue, then State could have lazy initialization code like this:
    /** Contract: never contains null values. */
    private static final Map<Object,State> map = new HashMap<Object,State>();

    public static State get(Object key) {
         State state = map.get(key);
         if (state == null) {
              state = new State();
              map.put(key, state);
         }
         return state;
    }

    private State() {}     // CRITICAL: private to ensure that get is the only place instances are created

    But heavy concurrency on get is present in my application. While I could trivially make the above code safe by synchronizing get, that would cause it to bottleneck the entire application. Vastly superior would be the use of some sort of concurrent data structure that would only bottleneck during the relatively rare times when new State instances must be created, but would allow fast concurrent retrievals when the State instance has already been created, which is the vast majority of the time.
    I think that the unsynchronized code below does the trick:
    /** Contract: never contains null values. */
    private static final ConcurrentMap<Object,State> map = new ConcurrentHashMap<Object,State>();

    public static State get(Object key) {
         State current = map.get(key);
         if (current != null) return current;
         State candidate = new State();
         current = map.putIfAbsent(key, candidate);
         return (current == null) ? candidate : current;
    }

    Here is how it works: most of the time, just the first two lines
    State current = map.get(key);
    if (current != null) return current;
    will be executed because the mapping will exist. This will be really fast, because ConcurrentHashMap is really fast (it's always slower than HashMap, but maybe only 2X slower for the scenario that I will use it in; see https://www.ibm.com/developerworks/java/library/j-benchmark2/#dsat ).
    If the relevant State instance is not present in map, then we speculatively create it. Because get is unsynchronized, this may or may not be the one actually put into map and returned by the subsequent lines of code, because another thread supplying the same key may be concurrently executing the same lines of code. That's why it's named candidate.
    Next--and this is the real critical step--use ConcurrentMap's atomic putIfAbsent method to ensure that just the first thread to call the method for a given key is the one who succeeds in putting its candidate in map. We reassign current to the result of putIfAbsent. Its value will be null only if there was no previous mapping for key but this call to putIfAbsent succeeded in putting candidate into map; thus, candidate needs to be returned in this case. But if current is not null, then some other thread with the same key must have been concurrently calling this code and was the thread which succeeded in putting its State into map; thus in this case current will have that other thread's value, so return it (with this thread's candidate being a waste; oh well).
    Note that this problem is similar in spirit to the infamous double-checked locking idiom. Since that is such a treacherous minefield (see http://www.cs.umd.edu/~pugh/java/memoryModel/DoubleCheckedLocking.html), I would really appreciate another set of eyes looking over my code.
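    A minimal sketch of the same lookup written with ConcurrentHashMap.computeIfAbsent, assuming a JDK 8 or later runtime where that method is available; it computes at most one State per key and avoids the throw-away candidate:
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    public class State {
        private static final ConcurrentMap<Object, State> map = new ConcurrentHashMap<Object, State>();

        public static State get(Object key) {
            // The mapping function runs at most once per key; other threads
            // requesting the same key wait briefly and then see the same instance.
            return map.computeIfAbsent(key, k -> new State());
        }

        private State() {}   // private: get() is the only place instances are created
    }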

    Hi Bbatman,
    Thank you so much for your answer.

    "Pietblock: I believe that your double checked locking proposal is still wrong."

    OK, I do believe you.

    "Did you read the link that I provided in my initial post? It discusses some known solutions, the best of which for singletons is a value helper class, followed by making the reference to your singleton be volatile (assuming that you are using jdk 1.5+)."

    Yes, I did read that article some time ago, when I first got warned about double checking. I never quite understood the keyword "volatile". Now, when reading it again, I see some light glimmering.

    "But your proposal has some strange elements. First, both the singleton field as well as the accessor method are always static -- why did you make them instance based in your code? In order to call them, you already need an instance, which defeats the purpose! Are you sure about that choice?"

    Yes, that is deliberate. The intended use is to define a static instance of SingletonReference in a class that needs to be instantiated as a singleton. That reference is then used to obtain the singleton.

    "Assuming that leaving out static on those was a typo, the only innovation in your proposal is the claim that 'Executing an instance method on the new singleton prevents inlining'. Where did you get that idea from? It's news to me. Instance methods often get inlined, not just static methods."

    The non-static field is not a typo, but intentional. What you call an "innovation", my assumption that I prevented inlining, is exactly what I was not very sure about. I can't imagine how an interpreter or JIT compiler would navigate around that construct, but nevertheless, I am not sure. Your remark suggests that it is unsafe and I will reconsider my code (change it).

    "The reason why most double checked locking solutions fail, including I believe yours, is due to completely unintuitive behavior of optimizing Java runtime compilers; things can happen in a different time order than you expect if you do not use some sort of locking. Specifically, typical double checked locking fails because there is no lock on the singleton, which means that writes to it can occur in strange order, which means that your null-check on it may not prevent multiple instances from being created."

    Yes, you convinced me.
    Thank you so much for looking into it
    Piet
