Socket.writeByte, socket.flush - atomic operations?

I've written a framework to facilitate RPC calls to a PHP server from Flash. It's called flashmog.
I have a bug that I'm trying to root out. The way flashmog is set up is that you can make a single socket connection to a server and flashmog will multiplex different RPC calls over that single socket using a very simple protocol. Basically, every RPC call gets serialized into a byte array and then crammed onto the socket, preceded by an unsigned int that indicates how long the serialized RPC is. This is handled by a method I've written in a class that extends the flash.net.Socket class:
        public function executeRPC(serviceName:String, methodName:String, methodParams:Array):void {
            if (!this.connected) {
                log.write('RPCSocket.executeRPC failed. ' + methodName + ' attempted on service ' + serviceName + ' while not connected', Log.HIGH);
                throw new Error('RPCSocket.executeRPC failed. ' + methodName + ' attempted on service ' + serviceName + ' while not connected.');
            }
            var rpc:Array = new Array();
            rpc[0] = serviceName;
            rpc[1] = methodName;
            rpc[2] = methodParams;
            var serializedRPC:ByteArray = serialize(rpc);
            if (!serializedRPC) {
                log.write('RPCSocket.executeRPC failed. Serialization failed for method ' + methodName + ' on service ' + serviceName, Log.HIGH);
                dispatchEvent(new IOErrorEvent(IOErrorEvent.IO_ERROR, false, false, 'RPCSocket.executeRPC failed. Serialization failed for method ' + methodName + ' on service ' + serviceName));
                return; // don't write a length header for a payload that doesn't exist
            }
            super.writeUnsignedInt(serializedRPC.length); // send a message length value
            super.writeBytes(serializedRPC);
            super.flush();
        } // executeRPC
This works fine for small messages and low-bandwidth situations, but when I start cramming large amounts of data down the socket (say 180,000 bytes per second), things get out of whack and the server starts complaining about unreasonable message length values. The message length value is transmitted as the first few bytes of an RPC call (see the writeUnsignedInt call above) and indicates how long the upcoming RPC call is in bytes.
It seems that the server is trying to interpret parts of my RPC data as a message length indicator. This means the sync is off.
Question 1: Is there any way Flash would interrupt the execution of the code in this function such that part of a separate RPC call might be transmitted in the middle of a prior RPC call? Or is this code atomic? Can I reasonably depend on Flash to load my data into the socket in the order that I call it?
Question 2: Can anyone recommend a better approach to sending these RPC calls? Ideally something that would be able to synchronize itself continuously rather than depending on in-order transmission of the data I cram onto the socket.

This test was pretty carefully designed. The packets being sent are all about 1,000 or 2,000 bytes. The maximum size for a packet is much, much larger (500 KB), and the error I'm getting complains about a message length of 959,459,634, which is impossible.
It has occurred to me to try to packetize my RPC calls, but that's a lot of extra effort. I was rather hoping to rely on the code to pour data into the socket in order and retrieve it in order. This is supposed to be guaranteed by the type of socket in use.
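
For what it's worth, TCP (which flash.net.Socket rides on) delivers the bytes you write in order, but it does not preserve message boundaries: a 1,000-byte frame can arrive split across several reads, and a reader that treats whatever bytes happen to be available as a complete frame will eventually interpret payload bytes as a length header, which matches the symptom above. Below is a minimal Java sketch of the receiving side of this length-prefixed framing with a sanity check on the length; the real server here is PHP, so the class name, the MAX_FRAME constant, and the use of DataInputStream are purely illustrative, and it assumes the socket's default big-endian byte order.
    import java.io.DataInputStream;
    import java.io.IOException;
    import java.io.InputStream;

    // Illustrative receive side of the framing protocol: read a 4-byte unsigned
    // length, sanity-check it, then read exactly that many payload bytes before
    // touching the next frame.
    public class FrameReader {
        private static final int MAX_FRAME = 500 * 1024; // matches the 500 KB cap mentioned above

        private final DataInputStream in;

        public FrameReader(InputStream in) {
            this.in = new DataInputStream(in);
        }

        public byte[] readFrame() throws IOException {
            long length = in.readInt() & 0xFFFFFFFFL; // unsigned 32-bit, big-endian like writeUnsignedInt
            if (length == 0 || length > MAX_FRAME) {
                // A length like 959,459,634 means the stream is out of sync; without a
                // resync marker in the protocol, the only safe recovery is to drop the connection.
                throw new IOException("implausible frame length: " + length);
            }
            byte[] payload = new byte[(int) length];
            in.readFully(payload); // keep reading until the whole frame has arrived
            return payload;
        }
    }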

Similar Messages

  • Atomic operations (official support?)

    The Solaris kernel has the following functions for atomic operations (see
    below). I can find no evidence that they are officially supported by Sun
    Microsystems. Anyone have more info? I'm writing a device driver, and need
    atomic functionality. Are these functions safe to use?
    $ nm -p /dev/ksyms | grep atomic
    00000268641708 T atomic_add_16
    00000268641708 T atomic_add_16_nv
    00000268641780 T atomic_add_32
    00000268641780 T atomic_add_32_nv
    00000268641812 T atomic_add_64
    00000268641812 T atomic_add_64_nv
    00000268641812 T atomic_add_long
    00000268641812 T atomic_add_long_nv
    00000268641876 T atomic_and_32
    00000268641876 T atomic_and_uint
    00000268641844 T atomic_or_32
    00000268641844 T atomic_or_uint

    Safe to use.
    Check /usr/include/sys/atomic.h:
    /*
     * Copyright (c) 1996-1999 by Sun Microsystems, Inc.
     * All rights reserved.
     */
    #ifndef _SYS_ATOMIC_H
    #define _SYS_ATOMIC_H
    #pragma ident "@(#)atomic.h 1.7 99/08/15 SMI"
    #include <sys/types.h>
    #include <sys/inttypes.h>
    #ifdef __cplusplus
    extern "C" {
    #endif
    /* Add delta to target */
    extern void atomic_add_16(uint16_t *target, int16_t delta);
    extern void atomic_add_32(uint32_t *target, int32_t delta);
    extern void atomic_add_long(ulong_t *target, long delta);
    extern void atomic_add_64(uint64_t *target, int64_t delta);
    /* logical OR bits with target */
    extern void atomic_or_uint(uint_t *target, uint_t bits);
    extern void atomic_or_32(uint32_t *target, uint32_t bits);
    /* logical AND bits with target */
    extern void atomic_and_uint(uint_t *target, uint_t bits);
    extern void atomic_and_32(uint32_t *target, uint32_t bits);
    /*
     * As above, but return the new value. Note that these _nv() variants are
     * substantially more expensive on some platforms than the no-return-value
     * versions above, so don't use them unless you really need to know the
     * new value atomically (e.g. when decrementing a reference count and
     * checking whether it went to zero).
     */
    extern uint16_t atomic_add_16_nv(uint16_t *target, int16_t delta);
    extern uint32_t atomic_add_32_nv(uint32_t *target, int32_t delta);
    extern ulong_t atomic_add_long_nv(ulong_t *target, long delta);
    extern uint64_t atomic_add_64_nv(uint64_t *target, int64_t delta);
    /* If target == cmp, set target = newval; return old value */
    extern uint32_t cas32(uint32_t *target, uint32_t cmp, uint32_t newval);
    extern ulong_t caslong(ulong_t *target, ulong_t cmp, ulong_t newval);
    extern uint64_t cas64(uint64_t *target, uint64_t cmp, uint64_t newval);
    extern void *casptr(void *target, void *cmp, void *newval);
    /*
     * Generic memory barrier used during lock entry, placed after the
     * memory operation that acquires the lock to guarantee that the lock
     * protects its data. No stores from after the memory barrier will
     * reach visibility, and no loads from after the barrier will be
     * resolved, before the lock acquisition reaches global visibility.
     */
    extern void membar_enter(void);
    /*
     * Generic memory barrier used during lock exit, placed before the
     * memory operation that releases the lock to guarantee that the lock
     * protects its data. All loads and stores issued before the barrier
     * will be resolved before the subsequent lock update reaches visibility.
     */
    extern void membar_exit(void);
    /*
     * Arrange that all stores issued before this point in the code reach
     * global visibility before any stores that follow; useful in producer
     * modules that update a data item, then set a flag that it is available.
     * The memory barrier guarantees that the available flag is not visible
     * earlier than the updated data, i.e. it imposes store ordering.
     */
    extern void membar_producer(void);
    /*
     * Arrange that all loads issued before this point in the code are
     * completed before any subsequent loads; useful in consumer modules
     * that check to see if data is available and read the data.
     * The memory barrier guarantees that the data is not sampled until
     * after the available flag has been seen, i.e. it imposes load ordering.
     */
    extern void membar_consumer(void);
    #if defined(_LP64) || defined(_ILP32)
    #define atomic_add_ip atomic_add_long
    #define atomic_add_ip_nv atomic_add_long_nv
    #define casip caslong
    #endif
    #ifdef __cplusplus
    }
    #endif
    #endif /* _SYS_ATOMIC_H */

  • Stupid question about atomic operations

    Why are there separate atomic classes? Why not make all primitive operations atomic? Is this a performance issue?

    Currently, I'm reading this:
    http://www.oreilly.com/catalog/jthreads3/chapter/ch05.pdf. It says that, for example, the ++ operation is not atomic.

    ++ by nature is a three-step process: read, add, write. I suppose it would have been technically possible to make that an atomic operation, but it probably would have involved complicating the JLS and JMM, and possibly hurting the overall performance of all operations.
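
    A minimal sketch of that three-step problem and of the separate atomic classes the question asks about (class and variable names are mine): two threads incrementing a plain int lose updates, while AtomicInteger.incrementAndGet() performs the whole read-modify-write atomically.
        import java.util.concurrent.atomic.AtomicInteger;

        public class IncrementDemo {
            static int plainCounter = 0;
            static final AtomicInteger atomicCounter = new AtomicInteger();

            public static void main(String[] args) throws InterruptedException {
                Runnable work = () -> {
                    for (int i = 0; i < 100_000; i++) {
                        plainCounter++;                  // read, add, write: not atomic
                        atomicCounter.incrementAndGet(); // single atomic operation
                    }
                };
                Thread t1 = new Thread(work);
                Thread t2 = new Thread(work);
                t1.start(); t2.start();
                t1.join(); t2.join();
                // plainCounter is often less than 200000; atomicCounter is always 200000.
                System.out.println(plainCounter + " vs " + atomicCounter.get());
            }
        }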

  • Atomic operation?

    When I was going through the Java docs I encountered the following statement about atomic actions:
    "An atomic action cannot stop in the middle: it either happens completely, or it doesn't happen at all. No side effects of an atomic action are visible until the action is complete."
    I understand this as: a thread does not give up (does not move out of the running state) until it completes the atomic action (a sequence of steps), so an atomic action done by one thread is visible to all threads. I mean to say that no other thread executes until the thread executing the atomic action completes, so any modification done by the atomic action will be visible to other threads.
    Is my understanding right?
    Edited by: Muralidhar on Feb 16, 2013 5:37 PM

    teasp wrote:
    Atomic operation is a totally different thing from visibility. When a thread is executing an atomic operation, no other thread has a chance to disturb the action, but it does not guarantee that other threads can "see" the result of the operation immediately.

    Well, if it isn't Mr. Concurrency Expert. Based on the ignorance and hostility you demonstrated in your other thread, I wouldn't start dishing out answers to people just yet.
    Offense very much intended.
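
    A small Java sketch of the two separate concepts the thread is arguing about, under the usual Java memory model rules (class and field names are made up): volatile gives visibility but not atomicity for a compound action, while a synchronized block gives both.
        public class AtomicityVsVisibility {
            private volatile boolean ready = false; // volatile write is visible to other threads
            private int value = 0;

            // Visible but not atomic: another thread may interleave between the two writes.
            public void publishNonAtomically(int v) {
                value = v;
                ready = true;
            }

            // Atomic with respect to other synchronized(this) blocks, and the lock
            // release/acquire also makes the writes visible to the next reader.
            public synchronized void publishAtomically(int v) {
                value = v;
                ready = true;
            }

            public synchronized int readIfReady() {
                return ready ? value : -1;
            }
        }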

  • Atomic operations

    Hello,
    are there atomic operations available with Sun Studio on Linux? On Solaris there is atomic.h, gcc has Intel's __sync_* intrinsics, so I am looking for an alternative that doesn't involve writing assembler code.

    clamage45 wrote:
    Solaris 10 and OpenSolaris have support for atomic operations, via <atomic.h>. You can use it in programs created with Sun Studio.

    Er, I must have missed something. I know how to use atomics on Solaris (with any compiler) or with gcc (on any platform), but I don't know how to use atomics with Sun Studio on Linux (the case I am asking about).

  • Atomic operation across two distributed cache's

    Hi,
    Let's say I've got CacheA and CacheB, both distributed, and I need to perform a put() on each cache. What happens if the machine explodes exactly after the moment I invoked the put() on CacheA, but before I call put() on CacheB?
    I need to perform these two puts() as an atomic operation, but what I can gather from the transaction documentation is that even if I use a transaction there is a possibility that the first put() can succeed and the second fail, without a rollback of the first taking place.
    Is this the case? And if so, what can I do to work around this?
    Cheers,
    Henric

    Hi Henric,
    if the client machine on which the put() is called explodes, then you are out of luck. To be able to correct such problems, I think it is recommended that your client code be idempotent, so that such operations can be retried and completed.
    Best regards,
    Robert
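
    A hypothetical sketch of that idempotent-retry idea; plain ConcurrentHashMaps stand in for CacheA and CacheB here rather than the real distributed cache API, and the retry count is arbitrary. Because put(key, value) with the same arguments can be repeated safely, the client (or a recovery job) can simply re-run the whole operation after a failure until both puts have succeeded.
        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        public class IdempotentTwoCachePut {
            private final Map<String, String> cacheA = new ConcurrentHashMap<>();
            private final Map<String, String> cacheB = new ConcurrentHashMap<>();

            public void putBoth(String key, String value) {
                int attempts = 0;
                while (true) {
                    try {
                        cacheA.put(key, value); // repeating this is harmless
                        cacheB.put(key, value); // if we died before this line, a retry completes it
                        return;
                    } catch (RuntimeException e) {
                        if (++attempts >= 3) {
                            throw e; // give up and let a recovery job retry later
                        }
                    }
                }
            }
        }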

  • Setting field an atomic operation?

    Is setting a field an atomic operation in Java? I'm not talking about using a mutator function, I'm talking about just using the assignment operator.
    From what I can tell there's only one opcode involved, which is putfield.

    Setting a variable's value is atomic for everything but non-volatile longs and doubles.
    There is no such thing as an "object" field. You're thinking of references, and they are atomic as well. (Note that you're not copying an object, just setting a reference to point to the same object as another reference.)
    Note, however, that if you have something like this: Foo foo = new Foo(); then while the assignment of the reference value is atomic, the construction of the Foo object is not.
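
    A minimal sketch of that last point (class and field names are made up): the reference write itself is atomic, but without a happens-before edge a reader may see the reference before the constructor's writes; a volatile field (or final fields inside Foo) closes that gap.
        public class PublicationDemo {
            static class Foo {
                int x;                     // not final, so it needs safe publication
                Foo(int x) { this.x = x; }
            }

            static Foo unsafeFoo;          // plain field: a reader could observe Foo before x is written
            static volatile Foo safeFoo;   // volatile write publishes the fully constructed object

            public static void main(String[] args) {
                new Thread(() -> {
                    unsafeFoo = new Foo(42);
                    safeFoo = new Foo(42);
                }).start();

                // The reader never sees a torn reference, but only safeFoo (or a Foo
                // whose fields are all final) is guaranteed to appear fully initialized.
                while (safeFoo == null) { Thread.onSpinWait(); }
                System.out.println(safeFoo.x);
            }
        }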

  • Assigning  an object atomic operation?

    Is assigning a reference to an object using = an atomic operation? Or might it be interrupted in the middle of the assignment in a multithreaded application?

    The JLS #17.1 defines stores as atomic except in the case of long and double (#17.4).
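
    A tiny illustration of that long/double exception, with invented names: a plain long write may be performed as two 32-bit halves on some JVMs, so a concurrent reader could see a mix of old and new bits, while declaring the field volatile makes the 64-bit write atomic (and visible).
        public class LongTearing {
            long plainValue;            // writes not guaranteed atomic
            volatile long safeValue;    // writes guaranteed atomic

            void update(long v) {
                plainValue = v;  // another thread might observe a half-written value
                safeValue = v;   // always observed as either the old or the new value
            }
        }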

  • Memory barriers in atomic operations

    According to the Apple documentation there exist pairs of functions to perform atomic operations, such as OSAtomicIncrement32()/OSAtomicIncrement32Barrier().
    http://developer.apple.com/documentation/Darwin/Reference/ManPages/man3/OSAtomicIncrement32Barrier.3.html
    "Barriers strictly order memory access on a weakly-ordered architecture such as PPC. All loads and stores executed in sequential program order before the barrier will complete before any load or store executed after the barrier."
    However, it is not clear whether the memory barrier version must be used:
    - AFAIK PPC is not the only case where instructions can be executed out of order; Intel processors also execute instructions out of order.
    - In which cases must the memory barrier version be used to avoid problems?
    "If you are unsure which version to use, prefer the barrier variants as they are safer."
    Is this sentence a joke? It is neither precise nor formal.

    I think that this paragraph is quite imprecise. Particularly:
    "Most code will want to use the barrier functions to insure that memory shared between threads is properly synchronized".
    "Most code" must be divided into the specific cases where memory barriers are necessary or not.
    "For example, if you want to initialize a shared data structure and then atomically increment a variable to indicate that the initialization is complete, then you must use OSAtomicIncrement32Barrier() OSAtomicIncrement32Barrier() to ensure that the stores to your data structure complete before the atomic add. Likewise, the consumer of that data structure must use OSAtomicDecrement32Barrier(), in order to ensure that their loads of the structure are not executed before the atomic decrement"
    Atomic operation only deals with fundamental variables (for data structures there exist other synchronization techniques). If you want to do two operations atomically (initialize and increment) it implies two atomic operations, so you should not use atomic operations in this case. Instead you should wrap both operations in a critical section, for instance, by means of an NSLock.
    "On the other hand, if you are simply incrementing a global counter, then it is safe and potentially much faster to use OSAtomicIncrement32()".
    That is an atomic operation over an atomic variable, but that is true only when one core exists. Synchronization between the pipelines of several cores is not correctly managed in the Intel processors (assembler atomic operations are only guarantee within an specific core). If you don't know the deployment machine, you must use the memory barrier version (just in case). However, if there is only one thread that modifies the atomic variable, the atomic operation without memory barrier is faster and guarantee that the other (reader) threads always will find the variable in a valid state (either completely modified or not modified at all).
    "If you are unsure which version to use, prefer the barrier variants as they are safer."
    Yes, that is the easy answer IF YOU ARE UNSURE what you are doing or what atomic operations are intended for. However, it would be a good idea to tell your clients in which cases memory barriers are needed in connection with atomic operations and in which cases they are not.
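
    For readers coming from Java, here is a rough analogue of the producer/consumer ordering being debated (class and field names are invented): the volatile flag plays the role of the barrier variant, ensuring the data written before the flag is visible once the flag itself is seen.
        public class BarrierAnalogy {
            private final int[] data = new int[16];
            private volatile boolean ready = false; // acts like the store/load barrier pair

            // Producer: fill the structure, then set the flag last.
            void produce() {
                for (int i = 0; i < data.length; i++) {
                    data[i] = i * i;
                }
                ready = true; // volatile write: earlier stores cannot be reordered after it
            }

            // Consumer: check the flag first, then read the data.
            int consume() {
                while (!ready) {          // volatile read: later loads cannot be reordered before it
                    Thread.onSpinWait();
                }
                return data[data.length - 1]; // guaranteed to see the producer's writes
            }
        }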

  • Error message when access WLS: active sockets and socket readers configuration

    Hi,
    I got the following error when I tried to access the WLS using a program to get
    the mbeans data.
    This error happens when I have 3 or more servers running ( 1 admin server, 2 or
    more managed servers). With cluster with more than 2 servers running, this error
    also occurs.
    <Sep 10, 2001 8:35:01 PM CDT> <Warning> <JavaSocketMuxer> <There are: '3' active
    sockets, but the maximum number of socket readers allowed by the configuration
    is: '2', you may want to alter your configuration.>
    I increased the socket readers from 33% to 66%, but I still got the same error.
    I'm using WLS version 6.0 sp2
    My configuration is:
    Execute Threads = 15,
    Socket Readers = 33% or 66%
    Does anyone know how to fix this? I would really appreciate any suggestions.
    thanks,
    Kieu

    thank you, I just found out about setting those sockets using command line options
    an hour ago. But thanks a lot.
    -Kieu
    Kaye Wilcox <[email protected]> wrote:
    Kieu,
    You could try increasing the number of execute threads, you can do this
    via
    the admin console on the <server> --> Tuning tab.
    See http://edocs.bea.com/wls/docs60/perform/WLSTuning.html#1104317 for
    guidelines on setting the thread pool size and the number of socket readers.
    Here is a link that talks about socket communication in a cluster
    http://edocs.bea.com/wls/docs60/cluster/features.html#1007001.

  • Two Socket errors: Socket write and Socket reset

    Does anyone know the reasons and remedies for the errors
    java.net.SocketException: Software caused connection abort: socket write error
    and
    java.net.SocketException: Connection reset by peer: socket write error
    Please reply if someone knows.
    thanks

    These are basically the same error triggered in different places.
    This occurs when the other end of the connection closes while the client is still writing to it.

  • Persistent socket connection - socket write error

    I'm connecting to a server using sockets. My application acts like a client, sending requests, and also like a server, listening for notifications. I was using a client socket for the first task (send a message when required) and a server socket for the second (permanently listen for incoming messages). This needs to be some kind of persistent connection.
    It happens now that I'm required to send and receive using the same port. The only way I found to do this is to have a single socket (client) bound to a certain port. It connects to the server and one thread keeps listening for incoming messages (by reading the input stream) while another process is launched whenever I need to send a message (by writing to the socket's output stream). The socket is created only once (when the application starts) and its output/input streams are reused whenever needed.
    This works well for a while. However, when I try to send a message after some idle time (let's say 20 minutes) strange things happen. The first attempt to send a message returns success (although nothing is actually received by the server). The second attempt returns java.net.SocketException: Software caused connection abort: socket write error. I don't understand this behaviour. Can there be a timeout? I only write to the socket after testing if it's connected. So why is this failing? Also, any other ideas on how to send and receive using the same port? A different and better approach, maybe?
    Thanks in advance

    Socket.isConnected just tells you whether you have personally called Socket.connect() or new Socket(host, port,...). It doesn't tell you anything about the state of the connection.
    You should certainly issue periodic application 'pings' at suitable intervals, and many application protocols do this. For example, Java RMI reuses connections that are less than 15 seconds old but only if they pass a ping test.
    In general however you can't insist on a persistent connection over TCP/IP, especially if you have this kind of hardware in the circuit. What you can do is recognize when the connection has been lost and form a new one. The network is going to fail somewhere some time and your program has to be robust against that.
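
    A minimal Java sketch of that advice, with placeholder host, port, and ping byte: send a periodic application-level ping and rebuild the connection when a write fails, rather than trusting Socket.isConnected().
        import java.io.IOException;
        import java.io.OutputStream;
        import java.net.Socket;

        public class ReconnectingClient {
            private final String host;
            private final int port;
            private Socket socket;

            ReconnectingClient(String host, int port) {
                this.host = host;
                this.port = port;
            }

            private synchronized OutputStream out() throws IOException {
                if (socket == null || socket.isClosed()) {
                    socket = new Socket(host, port); // (re)open the connection on demand
                }
                return socket.getOutputStream();
            }

            // Called from a scheduled task every N seconds, and before real sends.
            synchronized void pingOrReconnect() {
                try {
                    OutputStream out = out();
                    out.write(0x00); // application-level heartbeat byte (placeholder)
                    out.flush();
                } catch (IOException e) {
                    // The connection is dead: drop it so the next call opens a new one.
                    try { if (socket != null) socket.close(); } catch (IOException ignored) {}
                    socket = null;
                }
            }
        }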

  • Atomic operation and volatile variables

    Hi ,
    I have one volatile variable declared as
    private volatile long _volatileKey=0;
    This variable is being incremented (++_volatileKey) by a method which is not synchronized. Could there be a problem if more than one thread tries to change the variable?
    In short, is the ++ operation atomic in the case of volatile variables?
    Thanks
    Sumukh

    Google: sun java volatile (http://www.google.co.uk/search?q=sun+java+volatile)
    http://www.javaperformancetuning.com/tips/volatile.shtml
    The volatile modifier requests the Java VM to always access the shared copy of the variable, so that its most current value is always read. If two or more threads access a member variable, AND one or more threads might change that variable's value, AND ALL of the threads do not use synchronization (methods or blocks) to read and/or write the value, then that member variable must be declared volatile to ensure all threads see the changed value.
    Note however that volatile has been incompletely implemented in most JVMs. Using volatile may not help to achieve the results you desire (yes this is a JVM bug, but its been low priority until recently).
    http://cephas.net/blog/2003/02/17/using_the_volatile_keyword_in_java.html
    Careful, volatile is ignored or at least not implemented properly on many common JVM's, including (last time I checked) Sun's JVM 1.3.1 for Windows.
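
    To answer the original question directly: no, ++ on a volatile long is not atomic; it is still a separate read and write, so two threads can hand out the same key. A small sketch of the usual fix (class and method names invented):
        import java.util.concurrent.atomic.AtomicLong;

        public class VolatileKeyFix {
            private volatile long _volatileKey = 0;           // visibility only; ++ is NOT atomic
            private final AtomicLong _atomicKey = new AtomicLong();

            long nextKeyUnsafe() {
                return ++_volatileKey;                        // can produce duplicate keys under contention
            }

            long nextKeySafe() {
                return _atomicKey.incrementAndGet();          // atomic read-modify-write
            }
        }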

  • Atomic operation on array elements

    For array elements of primitive types (32 bits and less): byte, short, int. Are array operations atomic?
    For example, define byte b[] = new byte[1];
    thread A executes b[0]++; thread B reads b[0], or both threads write.
    Questions:
    1. Is it possible to get an inconsistent value reading b[0]?
    2. Do you think the JVM or OS knows that two threads work on the same element and synchronizes them to ensure value consistency?
    As a consequence of synchronization, is there a speed difference if I have two threads writing to b[0] and b[1] OR two threads writing to the same element b[0]? I checked, there seems to be none.

    Suppose I do not want to synchronize access to array elements.
    Then you can't guarantee correct behavior of things like ++.
    It's going to degrade performance.
    Have you measured this, so that you know it will be a problem, or are you just assuming?
    ...but would like to understand whether the JVM enforces the same atomic rule on array elements as it does with variables.
    ++ is NOT atomic on anything--variables or array elements. All threading issues are the same for array elements as for variables.
    Can anybody comment on performance? Is it faster to modify different elements of an array versus the same index? Shouldn't the same be slower?
    Try it yourself and see if you can even see a meaningful difference.
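
    A small sketch of the point above, with invented names: ++ on b[0] is still read-add-write, so updates can be lost, while AtomicIntegerArray gives per-element atomic increments without explicit locking.
        import java.util.concurrent.atomic.AtomicIntegerArray;

        public class ArrayIncrement {
            public static void main(String[] args) throws InterruptedException {
                int[] plain = new int[1];
                AtomicIntegerArray atomic = new AtomicIntegerArray(1);

                Runnable work = () -> {
                    for (int i = 0; i < 100_000; i++) {
                        plain[0]++;                    // lost updates possible
                        atomic.incrementAndGet(0);     // atomic per-element update
                    }
                };
                Thread a = new Thread(work), b = new Thread(work);
                a.start(); b.start();
                a.join(); b.join();
                System.out.println(plain[0] + " vs " + atomic.get(0)); // atomic is always 200000
            }
        }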

  • Spawn socket from socket

    Hi, I'm trying to make a server backend for a game. The design is intended to be such that an initial socket connection is made with the client and that socket is then passed off to a server thread. Later on in the thread (this is a two player game) once it's verified that both parties are ready and connected I'd like to then spawn another socket from the currently active socket and pass that new socket off to a chat server thread so that the main game communication and the chat communication are in separate threads and sockets for better efficiency. But when I try to use the Socket clone method I get the error:
    clone() has protected access in java.lang.Object
    so how do I do this? Thanks!

    John_Musbach wrote:
    Hi, I'm trying to make a server backend for a game. The design is intended to be such that an initial socket connection is made with the client and that socket is then passed off to a server thread. Later on in the thread (this is a two player game) once it's verified that both parties are ready and connected I'd like to then spawn another socket from the currently active socket and pass that new socket off to a chat server thread so that the main game communication and the chat communication are in separate threads and sockets for better efficiency.

    You can't clone it.
    First you should try to use the first socket until you determine that that socket is a bottleneck.
    If it is in fact a bottleneck then you would need to do the following.
    1. The server sends a message to the client that says "start second session"
    2. The client connects (second time) to the server, perhaps using a different port.
    3. The server accepts and spins the thread.
    4. The client sends whatever info is necessary to identify itself so that it can be correctly tied to the first socket.
    You might note that if the socket is the bottleneck, how likely is it that your network is not the bottleneck? Another socket won't fix a network that is full.
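
    A hypothetical sketch of steps 1-4 (the port, the line-based handshake, and the class name are all invented): instead of cloning a socket, the server hands the client a session id on the first connection, and the client repeats that id when it opens the second (chat) connection so the server can tie the two sockets together.
        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;
        import java.io.PrintWriter;
        import java.net.ServerSocket;
        import java.net.Socket;
        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        public class SecondConnectionServer {
            private static final Map<String, Socket> gameSockets = new ConcurrentHashMap<>();

            public static void main(String[] args) throws IOException {
                try (ServerSocket listener = new ServerSocket(5000)) {
                    while (true) {
                        Socket s = listener.accept();
                        new Thread(() -> handle(s)).start();
                    }
                }
            }

            private static void handle(Socket s) {
                try {
                    BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
                    PrintWriter out = new PrintWriter(s.getOutputStream(), true);
                    String hello = in.readLine();                   // "GAME" or "CHAT <sessionId>"
                    if (hello != null && hello.startsWith("CHAT ")) {
                        String sessionId = hello.substring(5);
                        Socket game = gameSockets.get(sessionId);   // tie the chat socket to the game socket
                        out.println(game != null ? "OK" : "UNKNOWN SESSION");
                    } else {
                        String sessionId = java.util.UUID.randomUUID().toString();
                        gameSockets.put(sessionId, s);
                        out.println("SESSION " + sessionId);        // client reuses this on the second connection
                    }
                } catch (IOException e) {
                    // connection dropped; real code would also clean up gameSockets here
                }
            }
        }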
