Atomic operation?

While going through the Java docs I encountered the following statement about atomic actions:
" An atomic action cannot stop in the middle: it either happens completely, or it doesn't happen at all. No side effects of an atomic action are visible until the action is complete."
So I understand this as: a thread does not give up the CPU (does not move out of the running state) until it completes the atomic action (a sequence of steps), so the atomic action done by one thread is visible to all threads. I mean to say that no other thread executes until the thread executing the atomic action completes, so any modification made by the atomic action will be visible to other threads.
Is my understanding right?
Edited by: Muralidhar on Feb 16, 2013 5:37 PM

teasp wrote:
Atomic operation is a totally different thing from visibility. When a thread is executing an atomic operation, no other thread has a chance to disturb the action, but that does not guarantee other threads can "see" the result of the operation immediately.

Well, if it isn't Mr. Concurrency Expert.
Based on the ignorance and hostility you demonstrated in your other thread, I wouldn't start dishing out answers to people just yet.
Offense very much intended.
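To illustrate the distinction teasp is drawing (a sketch of my own, not from the thread): the classes in `java.util.concurrent.atomic` give you both properties at once, because their operations are atomic *and* have volatile read/write semantics, so every thread sees the latest value.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicDemo {
    public static void main(String[] args) throws InterruptedException {
        final AtomicInteger counter = new AtomicInteger(0);

        // Two threads each perform 100,000 increments.
        // incrementAndGet() is atomic (no lost updates) AND gives
        // visibility (volatile semantics), unlike a plain int field.
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                counter.incrementAndGet();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();

        System.out.println(counter.get()); // always 200000
    }
}
```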

Similar Messages

  • Atomic operations (official support?)

    The Solaris kernel has the following functions for atomic operations (see
    below). I can find no evidence that they are officially supported by Sun
    Microsystems. Anyone have more info? I'm writing a device driver, and need
    atomic functionality. Are these functions safe to use?
    $ nm -p /dev/ksyms | grep atomic
    00000268641708 T atomic_add_16
    00000268641708 T atomic_add_16_nv
    00000268641780 T atomic_add_32
    00000268641780 T atomic_add_32_nv
    00000268641812 T atomic_add_64
    00000268641812 T atomic_add_64_nv
    00000268641812 T atomic_add_long
    00000268641812 T atomic_add_long_nv
    00000268641876 T atomic_and_32
    00000268641876 T atomic_and_uint
    00000268641844 T atomic_or_32
    00000268641844 T atomic_or_uint

    Safe to use.
    Check /usr/include/sys/atomic.h
    /*
     * Copyright (c) 1996-1999 by Sun Microsystems, Inc.
     * All rights reserved.
     */
    #ifndef _SYS_ATOMIC_H
    #define _SYS_ATOMIC_H
    #pragma ident "@(#)atomic.h 1.7 99/08/15 SMI"
    #include <sys/types.h>
    #include <sys/inttypes.h>
    #ifdef __cplusplus
    extern "C" {
    #endif
    /* Add delta to target */
    extern void atomic_add_16(uint16_t *target, int16_t delta);
    extern void atomic_add_32(uint32_t *target, int32_t delta);
    extern void atomic_add_long(ulong_t *target, long delta);
    extern void atomic_add_64(uint64_t *target, int64_t delta);
    /* logical OR bits with target */
    extern void atomic_or_uint(uint_t *target, uint_t bits);
    extern void atomic_or_32(uint32_t *target, uint32_t bits);
    /* logical AND bits with target */
    extern void atomic_and_uint(uint_t *target, uint_t bits);
    extern void atomic_and_32(uint32_t *target, uint32_t bits);
    /*
     * As above, but return the new value. Note that these _nv() variants are
     * substantially more expensive on some platforms than the no-return-value
     * versions above, so don't use them unless you really need to know the
     * new value atomically (e.g. when decrementing a reference count and
     * checking whether it went to zero).
     */
    extern uint16_t atomic_add_16_nv(uint16_t *target, int16_t delta);
    extern uint32_t atomic_add_32_nv(uint32_t *target, int32_t delta);
    extern ulong_t atomic_add_long_nv(ulong_t *target, long delta);
    extern uint64_t atomic_add_64_nv(uint64_t *target, int64_t delta);
    /* If *target == cmp, set *target = newval; return old value */
    extern uint32_t cas32(uint32_t *target, uint32_t cmp, uint32_t newval);
    extern ulong_t caslong(ulong_t *target, ulong_t cmp, ulong_t newval);
    extern uint64_t cas64(uint64_t *target, uint64_t cmp, uint64_t newval);
    extern void *casptr(void *target, void *cmp, void *newval);
    /*
     * Generic memory barrier used during lock entry, placed after the
     * memory operation that acquires the lock to guarantee that the lock
     * protects its data. No stores from after the memory barrier will
     * reach visibility, and no loads from after the barrier will be
     * resolved, before the lock acquisition reaches global visibility.
     */
    extern void membar_enter(void);
    /*
     * Generic memory barrier used during lock exit, placed before the
     * memory operation that releases the lock to guarantee that the lock
     * protects its data. All loads and stores issued before the barrier
     * will be resolved before the subsequent lock update reaches visibility.
     */
    extern void membar_exit(void);
    /*
     * Arrange that all stores issued before this point in the code reach
     * global visibility before any stores that follow; useful in producer
     * modules that update a data item, then set a flag that it is available.
     * The memory barrier guarantees that the available flag is not visible
     * earlier than the updated data, i.e. it imposes store ordering.
     */
    extern void membar_producer(void);
    /*
     * Arrange that all loads issued before this point in the code are
     * completed before any subsequent loads; useful in consumer modules
     * that check to see if data is available and read the data.
     * The memory barrier guarantees that the data is not sampled until
     * after the available flag has been seen, i.e. it imposes load ordering.
     */
    extern void membar_consumer(void);
    #if defined(_LP64) || defined(_ILP32)
    #define atomic_add_ip atomic_add_long
    #define atomic_add_ip_nv atomic_add_long_nv
    #define casip caslong
    #endif
    #ifdef __cplusplus
    }
    #endif
    #endif /* _SYS_ATOMIC_H */
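    For Java readers, the cas32()-style compare-and-swap declared above ("if *target == cmp, set *target = newval") corresponds to compareAndSet() in java.util.concurrent.atomic. A minimal sketch of those semantics (class name is my own):

    ```java
    import java.util.concurrent.atomic.AtomicInteger;

    public class CasDemo {
        public static void main(String[] args) {
            AtomicInteger target = new AtomicInteger(13);

            // Succeeds only when the current value equals the expected one.
            boolean swapped = target.compareAndSet(13, 17);
            System.out.println(swapped + " " + target.get()); // true 17

            // Fails: the value is now 17, not 13, so nothing is stored.
            swapped = target.compareAndSet(13, 99);
            System.out.println(swapped + " " + target.get()); // false 17
        }
    }
    ```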

  • Stupid question about atomic operations

    Why there are separate atomic classes? Why not make all primitive operations atomic? Is this a performance issue?

    Currently, I'm reading this: http://www.oreilly.com/catalog/jthreads3/chapter/ch05.pdf . It says that, for example, the ++ operation is not atomic.

    ++ by nature is a 3-step process: read, add, write. I suppose it would have been technically possible to make that an atomic operation, but it probably would have involved complicating the JLS and JMM, and possibly hurting the overall performance of all operations.
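    The read-add-write interleaving can be demonstrated directly (a sketch of my own; the class and field names are invented). Racing ++ on a plain field loses updates, while wrapping the same three steps in a synchronized block makes them one indivisible unit:

    ```java
    public class PlusPlusDemo {
        static int unsafeCount = 0;            // plain field: ++ is read, add, write
        static int safeCount = 0;
        static final Object lock = new Object();

        public static void main(String[] args) throws InterruptedException {
            Runnable task = () -> {
                for (int i = 0; i < 100_000; i++) {
                    unsafeCount++;             // three steps; threads can interleave
                    synchronized (lock) {      // read-add-write now one unit
                        safeCount++;
                    }
                }
            };
            Thread t1 = new Thread(task), t2 = new Thread(task);
            t1.start(); t2.start();
            t1.join(); t2.join();

            // unsafeCount may come out below 200000 (lost updates);
            // safeCount is always exactly 200000.
            System.out.println(unsafeCount + " " + safeCount);
        }
    }
    ```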

  • Atomic operations

    Hello,
    are there atomic operations available with Sun Studio on Linux? On Solaris there is atomic.h, gcc has Intel's __sync_* intrinsics, so I am looking for an alternative that doesn't involve writing assembler code.

    clamage45 wrote:
    Solaris 10 and OpenSolaris have support for atomic operations, via <atomic.h>. You can use it in programs created with Sun Studio.

    Er, I must have missed something. I know how to use atomics on Solaris (with any compiler) or with gcc (on any platform), but I don't know how to use atomics with Sun Studio on Linux (the case I am asking about).

  • Atomic operation across two distributed cache's

    Hi,
    Let's say I've got CacheA and CacheB, both distributed, and I need to perform a put() on each cache. What happens if the machine explodes exactly after the moment I invoke the put() on CacheA, but before I call put() on CacheB?
    I need to perform these two put()s as an atomic operation, but what I can gather from the transaction documentation is that even if I use a transaction, there is a possibility that the first put() can succeed and the second fail, without a rollback of the first taking place.
    Is this the case? And if so, what can I do to work around this?
    Cheers,
    Henric

    Hi Henric,
    if the client machine on which the put() is called explodes, then you are out of luck. To be able to correct such problems, I think it is recommended that your client code be idempotent, so that such operations can be retried and completed.
    Best regards,
    Robert

  • Setting field an atomic operation?

    Is setting a field an atomic operation in Java? I'm not talking about using a mutator function, I'm talking about just using the assignment operator.
    From what I can tell there's only one opcode involved, which is putfield.

    Setting a variable's value is atomic for everything but non-volatile longs and doubles.
    There is no such thing as an "object" field. You're thinking of references, and they are atomic as well. (Note that you're not copying an object, just setting a reference to point to the same object as another reference.)
    Note, however, that if you have something like this: Foo foo = new Foo(); that while the assignment of the reference value is atomic, the construction of the Foo object is not.
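    The standard cure for that last point is safe publication (a sketch of my own; the Foo class here is invented): publish the reference through a volatile field, or give Foo final fields, so a reader that sees the reference non-null is guaranteed to see the fully constructed object.

    ```java
    public class Publication {
        static class Foo {
            final int value;              // final fields are safely published
            Foo(int value) { this.value = value; }
        }

        // volatile: the write of the reference happens-after the constructor's
        // writes, so no thread can observe a half-built Foo through 'shared'.
        static volatile Foo shared;

        public static void main(String[] args) throws InterruptedException {
            Thread writer = new Thread(() -> shared = new Foo(42));
            writer.start();
            writer.join();
            System.out.println(shared.value); // 42
        }
    }
    ```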

  • Assigning  an object atomic operation?

    Is assigning a reference to an object using = an atomic operation, or might it be interrupted in the middle of the assignment in a multithreaded application?

    The JLS #17.1 defines stores as atomic except in the case of long and double (#17.4).

  • Memory barriers in atomic operations

    According to the Apple documentation there exist pairs of functions to perform atomic operations, such as OSAtomicIncrement32()/OSAtomicIncrement32Barrier().
    http://developer.apple.com/documentation/Darwin/Reference/ManPages/man3/OSAtomicIncrement32Barrier.3.html
    "Barriers strictly order memory access on a weakly-ordered architecture such as PPC. All loads and stores executed in sequential program order before the barrier will complete before any load or store executed after the barrier."
    However, it is not clear whether the memory barrier must be used:
    - AFAIK PPC is not the only architecture where instructions can be executed out of order; Intel processors also execute instructions out of order.
    - In which cases must the memory-barrier version be used to avoid problems?
    "If you are unsure which version to use, prefer the barrier variants as they are safer."
    Is this sentence a joke? It is neither precise nor formal.

    I think that this paragraph is quite imprecise. Particularly:
    "Most code will want to use the barrier functions to insure that memory shared between threads is properly synchronized".
    "Most code" must be divided into the specific cases where memory barriers are necessary or not.
    "For example, if you want to initialize a shared data structure and then atomically increment a variable to indicate that the initialization is complete, then you must use OSAtomicIncrement32Barrier() to ensure that the stores to your data structure complete before the atomic add. Likewise, the consumer of that data structure must use OSAtomicDecrement32Barrier(), in order to ensure that their loads of the structure are not executed before the atomic decrement"
    Atomic operations only deal with fundamental variables (for data structures there exist other synchronization techniques). If you want to do two operations atomically (initialize and increment), that implies two atomic operations, so you should not use atomic operations in this case. Instead you should wrap both operations in a critical section, for instance by means of an NSLock.
    "On the other hand, if you are simply incrementing a global counter, then it is safe and potentially much faster to use OSAtomicIncrement32()".
    That is an atomic operation over an atomic variable, but it is only true when one core exists. Synchronization between the pipelines of several cores is not correctly managed in Intel processors (assembler atomic operations are only guaranteed within a specific core). If you don't know the deployment machine, you must use the memory-barrier version (just in case). However, if only one thread modifies the atomic variable, the atomic operation without a memory barrier is faster and guarantees that the other (reader) threads will always find the variable in a valid state (either completely modified or not modified at all).
    "If you are unsure which version to use, prefer the barrier variants as they are safer."
    Yes, that is the easy answer IF YOU ARE UNSURE what you are doing or what atomic operations are intended for. However, it would be a good idea to tell your clients in which cases memory barriers are needed in connection with atomic operations and in which cases they are not.

  • Atomic operation and volatile variables

    Hi ,
    I have one volatile variable declared as
    private volatile long _volatileKey=0;
    This variable is being incremented (++_volatileKey) by a method which is not synchronized. Could there be a problem if more than one thread tries to change the variable?
    In short, is the ++ operation atomic in the case of volatile variables?
    Thanks
    Sumukh

    Google: http://www.google.co.uk/search?q=sun+java+volatile
    http://www.javaperformancetuning.com/tips/volatile.shtml
    The volatile modifier requests that the Java VM always access the shared copy of the variable, so that its most current value is always read. If two or more threads access a member variable, AND one or more threads might change that variable's value, AND ALL of the threads do not use synchronization (methods or blocks) to read and/or write the value, then that member variable must be declared volatile to ensure all threads see the changed value.
    Note however that volatile has been incompletely implemented in most JVMs. Using volatile may not help to achieve the results you desire (yes, this is a JVM bug, but it's been low priority until recently).
    http://cephas.net/blog/2003/02/17/using_the_volatile_keyword_in_java.html
    Careful, volatile is ignored or at least not implemented properly on many common JVM's, including (last time I checked) Sun's JVM 1.3.1 for Windows.
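    The usual fix for Sumukh's question (a sketch of my own; the field name comes from the question above, the class name is invented): volatile gives visibility but ++ is still a read-modify-write, so replace the volatile long with an AtomicLong and increment in one indivisible step.

    ```java
    import java.util.concurrent.atomic.AtomicLong;

    public class KeyGenerator {
        // Replaces: private volatile long _volatileKey = 0;
        private final AtomicLong _volatileKey = new AtomicLong(0);

        long nextKey() {
            return _volatileKey.incrementAndGet(); // atomic read-modify-write
        }

        public static void main(String[] args) throws InterruptedException {
            final KeyGenerator gen = new KeyGenerator();
            Runnable task = () -> {
                for (int i = 0; i < 50_000; i++) gen.nextKey();
            };
            Thread t1 = new Thread(task), t2 = new Thread(task);
            t1.start(); t2.start();
            t1.join(); t2.join();
            System.out.println(gen._volatileKey.get()); // always 100000
        }
    }
    ```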

  • Atomic operation on array elements

    For array elements of primitive types (32 bits and less): byte, short, int. Are array operations atomic?
    For example, define byte b[] = new byte[1];
    thread A executes b[0]++; thread B reads b[0]. or both threads writing..
    Question:
    1. Is it possible to get an inconsistent value when reading b[0]?
    2. Do you think the JVM or OS knows that two threads work on the same element and synchronizes them to ensure value consistency?
    As a consequence of synchronization, is there a speed difference if I have two threads writing to b[0] and b[1] versus two threads writing to the same element b[0]? I checked; there seems to be none.

    "Suppose I do not want to synchronize access to array elements."
    Then you can't guarantee correct behavior of things like ++.
    "It's going to degrade performance."
    Have you measured this, so that you know it will be a problem, or are you just assuming?
    "...but would like to understand whether the JVM enforces the same atomic rule on array elements as it does with variables."
    ++ is NOT atomic on anything, whether variables or array elements. All threading issues are the same for array elements as for variables.
    "Can anybody comment on performance? Is it faster to modify different elements of the array versus the same index? Shouldn't the same be slower?"
    Try it yourself and see if you can even see a meaningful difference.
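    If per-element atomic read-modify-write is what's wanted, java.util.concurrent.atomic has an array variant (a sketch of my own, echoing the b[0]++ example from the question):

    ```java
    import java.util.concurrent.atomic.AtomicIntegerArray;

    public class ArrayIncrement {
        public static void main(String[] args) throws InterruptedException {
            // b[0]++ on a plain array is not atomic; AtomicIntegerArray
            // provides atomic operations on individual elements.
            final AtomicIntegerArray b = new AtomicIntegerArray(2);

            Runnable task = () -> {
                for (int i = 0; i < 100_000; i++) {
                    b.incrementAndGet(0); // atomic increment of element 0
                }
            };
            Thread t1 = new Thread(task), t2 = new Thread(task);
            t1.start(); t2.start();
            t1.join(); t2.join();

            System.out.println(b.get(0)); // always 200000
        }
    }
    ```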

  • Socket.writeByte, socket.flush - atomic operations?

    I've written a framework to facilitate RPC calls to a PHP server from  flash.  It's called flashmog.
    I  have a bug that I'm trying to root out.  The way flashmog is set up is  that you can make a single socket connection to a server and flashmog  will multiplex different RPC calls over that single socket using a very  simple protocol.  Basically every RPC call gets serialized into a byte  array and then crammed onto the socket, preceded by an UnsignedInt which  indicates how long the serialized RPC is.  This is handled by a method  I've written in a class that extends the flash.net.Socket class:
            public function executeRPC(serviceName:String, methodName:String, methodParams:Array):void {
                if (!this.connected) {
                    log.write('RPCSocket.executeRPC failed. ' + methodName + ' attempted on service ' + serviceName + ' while not connected', Log.HIGH);
                    throw new Error('RPCSocket.executeRPC failed. ' + methodName + ' attempted on service ' + serviceName + ' while not connected.');
                }
                var rpc:Array = new Array();
                rpc[0] = serviceName;
                rpc[1] = methodName;
                rpc[2] = methodParams;
                var serializedRPC:ByteArray = serialize(rpc);
                if (!serializedRPC) {
                    log.write('RPCSocket.executeRPC failed. Serialization failed for method ' + methodName + ' on service ' + serviceName, Log.HIGH);
                    dispatchEvent(new IOErrorEvent(IOErrorEvent.IO_ERROR, false, false, 'RPCSocket.executeRPC failed. Serialization failed for method ' + methodName + ' on service ' + serviceName));
                    return;
                }
                super.writeUnsignedInt(serializedRPC.length); // send a message length value
                super.writeBytes(serializedRPC);
                super.flush();
            } // executeRPC
    This works fine for small messages and low-bandwidth situations, but when I start cramming large amounts of data down the socket (say 180,000 bytes per second) then things get out of whack and the server starts complaining about unreasonable message length values. The message length value is transmitted as the first few bytes of an RPC call (see the writeUnsignedInt call above) and indicates how long the upcoming RPC call is in bytes.
    It seems that the server is trying to interpret parts of my RPC data as a message length indicator. This means the sync is off.
    Question 1: Is there any way Flash would interrupt the execution of the code in this function such that part of a separate RPC call might be transmitted in the middle of a prior RPC call? Or is this code atomic? Can I reasonably depend on Flash to load my data into the socket in the order that I call it?
    Question 2: Can anyone recommend a better approach to sending these RPC calls? Ideally something that would be able to synchronize itself continuously rather than depending on in-order transmission of data I cram onto the socket.

    This test was pretty carefully designed. The packets being sent are all about 1000 or 2000 bytes.  The max size for a packet is much much larger (500 Kbytes) and the error I'm getting is complaining about a message length of 959,459,634 which is impossible.
    It has occurred to me to try and packet-ize my rpc calls but that's a lot of extra effort.  I was rather hoping to rely on the code to pour data into the socket in order and retrieve it in order.  This is supposed to be guaranteed by the type of socket in use.
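    For reference, the length-prefixed framing scheme itself is sound as long as reads and writes are whole-frame. A minimal Java sketch of the same idea (class and method names are my own), writing each message as a 4-byte length followed by the payload, with readFully() preventing a fragmented stream from desynchronizing the reader:

    ```java
    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;

    public class Framing {
        // Write one frame: 4-byte big-endian length, then the payload bytes.
        static void writeFrame(DataOutputStream out, byte[] payload) throws IOException {
            out.writeInt(payload.length);
            out.write(payload);
            out.flush();
        }

        // Read one frame; readFully blocks until the whole payload arrives,
        // so partial reads cannot be mistaken for a new length header.
        static byte[] readFrame(DataInputStream in) throws IOException {
            int length = in.readInt();
            byte[] payload = new byte[length];
            in.readFully(payload);
            return payload;
        }

        public static void main(String[] args) throws IOException {
            ByteArrayOutputStream buffer = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(buffer);
            writeFrame(out, "first rpc".getBytes("UTF-8"));
            writeFrame(out, "second rpc".getBytes("UTF-8"));

            DataInputStream in = new DataInputStream(
                    new ByteArrayInputStream(buffer.toByteArray()));
            System.out.println(new String(readFrame(in), "UTF-8"));  // first rpc
            System.out.println(new String(readFrame(in), "UTF-8"));  // second rpc
        }
    }
    ```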

  • Are the read and write operations atomic for an array in a local variable.

    Hi,
    I would like to know when you access an array in a local variable, is it an atomic operation?
    Thanks,
    Mat

    Thanks for the comments. I agree with you. However, in my case, race conditions and synchronization are not issues. Therefore, the only thing that matters to me is that the write and read operations on the array must be atomic. I know that I can implement that with an LV2-style global, but I want to avoid it if possible.
    If writing and reading to an array are atomic operations then I can simply use local or global variables.
    All I need to know is: Is reading or writing an array in a local variable an atomic operation?
    Thanks,
    Mat

  • Atomic reads and writes to memory

    In Solaris 10 running on a 64-bit x86 processor, are reads/writes of an int variable (written in C, compiled with gcc) atomic operations? I have read on various forums that memory reads and writes of primitives are atomic, but there are people who question this statement.
    I know there are atomic_ops provided by Solaris 10 for both user space and the kernel, but I am only interested in ensuring that an update to an int is atomic.

    ps10x# cat fred.c
    main() {
      int i;
      i=13;
      i=17;
    }
    ps10x# gcc -S fred.c
    ps10x# cat fred.s
            .file   "fred.c"
            .text
    .globl main
            .type   main, @function
    main:
            leal    4(%esp), %ecx
            andl    $-16, %esp
            pushl   -4(%ecx)
            pushl   %ebp
            movl    %esp, %ebp
            pushl   %ecx
            subl    $20, %esp
            movl    $13, -8(%ebp)
            movl    $17, -8(%ebp)
            addl    $20, %esp
            popl    %ecx
            popl    %ebp
            leal    -4(%ecx), %esp
            ret
            .size   main, .-main
            .ident  "GCC: (GNU) 4.3.4"
    So it seems pretty clear that setting the variable is "movl $13, -8(%ebp)",
    and while I'm no expert on x86 assembler, I'm fairly sure a single movl instruction is atomic.
    However, the real issue is that the compiler might decide to stash a temporary copy of the variable in a register.
    And that could throw off the "atomic" nature of things.
    I believe that the volatile keyword is supposed to help prevent those kinds of issues.

  • Is update by filter atomic?

    Hi.
    Let's say we update the cache via a Filter and an UpdaterProcessor using invokeAll. Is this update atomic per object in the cache? Is there any case where Coherence has found an object in the cache by the filter, but somebody changed this object before it was updated in the cache?
    Thank you.
    Edited by: St. on Aug 8, 2012 6:12 AM

    Hi,
    invokeAll runs the EP on a number of entries (selected by the Filter). The invokeAll is not a single atomic operation (for all the entries as a whole), but invoking the EP on each entry from the filter is an atomic operation.
    So, as you said, there can be a case where you do invokeAll and another thread updates an entry before the EP runs on that entry.
    reg
    Dasun.

  • Is ServletContext.setAttribute() atomic?

    If my servlet makes just one call to ServletContext.setAttribute() (or session.setAttribute()), is it a must to synchronize this single call? Assume any two threads could execute the setAttribute() call simultaneously.
    This question came to mind because I realized that even setting a primitive double isn't guaranteed to be done atomically by the JVM spec. Therefore, if two threads modify the same double variable at the same time, problems could happen: one thread's data gets PARTIALLY overwritten by the other thread.
    Now, if two threads simultaneously call setAttribute() on the ServletContext, would the changes of one thread PARTIALLY overwrite the changes made by the other? If the call only contains atomic write operations, then one operation can completely overwrite the other and we have no problem. But if setAttribute() contains some non-atomic operations, the partial-update problem could happen.

    Atomic or thread-safe? I believe the former is true whereas the latter is probably false.
    - Saish
