Thread safe architecture problem

I am developing an app that uses a proxy pattern to "wrap" instances of a few classes. The consumer of the proxy does not need an instance of the inner class, and the inner class is shared among all of the proxy consumers. However, I need the ability for the underlying class to be changed as needed (reloaded through a ClassLoader, to be more precise). The problem is that I am trying to move this into a multi-threaded environment and am running into a few problems. I know that synchronization could solve them, but that will adversely affect performance, because only one thread will be able to execute the code at any given time, given that all of the proxies use the same underlying class. The reason the proxies all wrap the same instance is to eliminate constantly creating instances to handle the processing (the classes are referenced from jars at runtime and instantiated via reflection).
Does anyone have any suggestions on how to structure this better?

There are probably better ways, but one simple way would be to use a ReadWriteLock. In the proxy: acquire the read lock upon invocations to the underlying "real" instance. When you want to switch implementations, acquire the write lock (it will wait for all readers to finish, and then prevent any readers from executing while you swap in your new state).
This way, if your old implementation needs to be "shut down", you can be sure that no one is relying on it when you do so. On the other hand, it doesn't prevent concurrent usage of the underlying instance while it's in service.
See java.util.concurrent.locks.ReadWriteLock (Java 1.5) or Doug Lea's util.concurrent library for Java versions before that.
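Here is a minimal sketch of that idea, assuming a hypothetical Handler interface for the reloadable class (the names are illustrative, not from the original post):

import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical interface implemented by the reloadable class.
interface Handler {
    String process(String input);
}

// Proxy shared by all consumers; the wrapped instance can be swapped at runtime.
class ReloadableProxy implements Handler {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private Handler delegate;

    ReloadableProxy(Handler initial) {
        this.delegate = initial;
    }

    public String process(String input) {
        lock.readLock().lock();          // many callers may invoke concurrently
        try {
            return delegate.process(input);
        } finally {
            lock.readLock().unlock();
        }
    }

    // Called after reloading the class through a new ClassLoader.
    void swap(Handler replacement) {
        lock.writeLock().lock();         // waits for in-flight calls to drain
        try {
            delegate = replacement;
        } finally {
            lock.writeLock().unlock();
        }
    }
}

Readers proceed in parallel during normal operation; the swap briefly blocks them, which is exactly the trade-off described above.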

Similar Messages

  • RDBMS non-thread-safe issue

    <snip>
    This (as you probably know) is due to the fact that the code provided by
    most of the DB vendors is not thread safe.
    <snip>
    Sean's comment, above, speaks to an issue that is causing some concern
    within my (large - 180 projects in development) organisation.
    May I please ask the forum if there are others out there who have an
    understanding of / concern with this "problem"?
    My perception (quite possibly flawed) of the "problem" is that the RDBMS
    cannot multi-thread data access objects. So we find ourselves in a situation
    where we can achieve scalability in just about all other
    performance-sensitive areas of a system's technical architecture (we're
    using DCE -- but have found that Encina is not advisable except where there
    is a true requirement for heterogeneous distributed 2 phase commit, which we
    don't often see.....) but when we go to hit on the RDBMS, we go back to good
    ole single-threading.
    In certain circumstances, this shortfall of RDBMS technology -- I won't
    mention any names, of course, but the initials are "Oracle" -- seems to be
    hindering our achievement of a desired technical architecture.
    Is this a "Pro*C / PL/SQL stored procedures" problem or is it something that
    is in the RDBMSs' DNA?
    How can we get around it?
    Any comments?
    Regards
    Jon

    Jon
    I agree. But it is the best solution within the constraints of existing
    technology. At least we don't use a process per client. 10 replicated
    copies of a service could service the needs of 100 clients.
    Eric
    At 13:15 6/09/96 EST, you wrote:
    Eric
    Thanks for your response. Yep. I realise that the issue I've presented is
    clearly not something that Forte causes or is responsible for in any way.
    Forte can, as you've pointed out, actually help in this area. But I don't
    think that getting Forte to spawn another instance of a data access server
    is really the best solution. I.e., that's not what we tend to have in mind
    when we think about "scalability". The best solution is -- perhaps -- to
    get the RDBMS people to thread-safe all code and libraries. I have pointedly
    asked Oracle for a position on this -- and got the usual blank stare.
    I also considered whether people might get upset at me for posting what is
    clearly a non-Forte-specific question in a Forte forum. But then I went ahead
    and did it anyway. Justification being (assumption follows) that the kind of
    people who hang out on the Forte forum may tend to be more
    architecture-oriented than your run of the mill VB / SQL*Net / PL/SQL stored
    procedures kinda guy/gal, and may be using or considering Forte (plug for
    Forte follows) precisely because it clearly enables a superior architecture.
    Should probably post to the comp.database.oracle forum, but I just don't know
    them as well.
    Regards
    Jon
    From: Eric Gold
    To: McLeod, Jon
    Cc: [email protected]
    Subject: Re: RDBMS non-thread-safe issue
    Date: Friday, 6 September 1996 11:21AM
    Jon
    In response to this message.....read below...
    Jon,
    Go ahead ask the Forum any questions you want. This "problem"
    is not a problem in Forte. What we allow you to do is "replicate" your
    data access services so that each one runs inside its own
    process. Each one of these processes (aka partitions) has its
    own connection to the database. The routing to the replicated
    partitions is transparent to the clients. Clients send a
    message like "DatabaseService.GetCustomer()" and then the Forte
    router sees which replicated copy of the service is not currently
    processing a request and routes it to that free replicate. You
    can dynamically increase or decrease the number of replicated
    copies of the service easily.
    We call this feature "load balancing" in Forte. It is achieved
    by checking a box in the data access service object definition.
    You can dynamically increase/decrease the number of replicates
    and also dynamically move replicates to other nodes in the environment.
    This approach assumes that you are using application driven
    security and not database security. Each replicated copy
    of the service is using the same generic username/password
    to connect to the database.
    I am forwarding this answer to forte-users because others
    might not completely understand this feature.
    Eric
    Eric Gold
    Technical Director
    Forte Australia
    Voice: 61-2-9926-1403
    Fax: 61-2-9926-1401

  • SSRS - Is there a multi thread safe way of displaying information from a DataSet in a Report Header?

    In order to dynamically display data in the Report Header based on the current record of the Dataset, we started using Shared Variables. We initially used ReportItems!SomeTextbox.Value, but we noticed that when SomeTextbox was not rendered in the body
    (usually because a comment section grew to occupy most of the page, if not more than one page), the ReportItem printed a blank/null value.
    So, a method was defined in the Code section of the report that would set the value to the shared variable:
    Public Shared Params As String
    Public Shared Function SetValues(Param As String) As String
        Params = Param
        Return Params
    End Function
    Which would be called in the detail section of the tablix, then in the header a textbox would hold the following expression:
    =Code.Params
    This worked beautifully since it now didn't matter that the body section didn't have the SetValues call; the variable persisted and the Header displayed the correct value. Our problem now is that when the report is called in different threads with
    different data, the variable, being shared/static, gets modified by all the reports being run at the same time.
    So far I've tried several things:
    - The variables need to be shared, otherwise the value set in the Body can't be seen by the header.
    - Using Hashtables behaves exactly like the ReportItem option.
    - Using a C# DLL with non-static variables to take care of this didn't work, because apparently when the DLL is called from the Body it generates a different instance of the DLL than when it's called from the header.
    So is there a way to deal with this issue in a multi thread safe way?
    Thanks in advance!
     

    Hi Angel,
    Per my understanding, you want to dynamically display the group data in the report header. You have set a page break based on the group, so when you go to the next page the report header changes according to the value in the group, but when you use
    the shared variables you get the multi-thread safety problem, right?
    I have tested on my local environment and can reproduce the issue. Given the multi-threading problem, the better way is to use a Hashtable in the custom code. You mentioned that you tried a Hashtable but finally got
    the same result as using ReportItems!TextBox.Value; that can be caused by logic in the code that doesn't work correctly.
    Please refer to the custom code below, which works fine and gets the expected value displayed on every page:
    Shared ht As System.Collections.Hashtable = New System.Collections.Hashtable

    Public Function SetGroupHeader(ByVal group As Object _
                                   , ByRef groupName As String _
                                   , ByRef userID As String) As String
        Dim key As String = groupName & userID
        If Not group Is Nothing Then
            Dim g As String = CType(group, String)
            If Not (ht.ContainsKey(key)) Then
                ' must be the first pass so set the current group to group
                ht.Add(key, g)
            Else
                If Not (ht(key).Equals(g)) Then
                    ht(key) = g
                End If
            End If
        End If
        Return ht(key)
    End Function
    Use this expression in the textbox of the report header:
    =Code.SetGroupHeader(ReportItems!Language.Value,"GroupName", User!UserID)
    Links below about the Hashtable and the multi-thread safety problem, for your reference:
    http://stackoverflow.com/questions/2067537/ssrs-code-shared-variables-and-simultaneous-report-execution
    http://sqlserverbiblog.wordpress.com/2011/10/10/using-custom-code-functions-in-reporting-services-reports/
    If you still have any problem, please feel free to ask.
    Regards
    Vicky Liu

  • Is the Illustrator SDK thread-safe?

    After searching this forum and the Illustrator SDK documentation, I can't find any references to a discussion about threading issues using the Illustrator C++ SDK. There is only a reference in some header files as to whether menu text is threaded, without any explanation.
    I take this to mean that probably the Illustrator SDK is not "thread-safe" (i.e., it is not safe to make API calls from arbitrary threads; you should only call the API from the thread that calls into your plug-in). Does anyone know this to be the case, or not?
    If it is the case, the normal way I'd write a plug-in to respond to requests from other applications for drawing services would be through a mutex-protected queue. In other words, when Illustrator calls the plug-in at application startup time, the plug-in could set up a mutually exclusive lock (a mutex), start a thread that could respond to requests from other applications, and request periodic idle processing time from the application. When such a request arrived from another application at an arbitrary time, the thread could respond by locking the queue, adding a request to the queue for drawing services in some format that the plug-in would define, and unlocking the queue. The next time the application called the plugin with an idle event, the queue could be locked, pulled from, and unlocked. Whatever request had been pulled could then be serviced with Illustrator API calls. Does anyone know whether that is a workable strategy for Illustrator?
    I assume it probably is, because that seems to be the way the ScriptingSupport.aip plug-in works. I did a simple test with three instances of a Visual Basic generated EXE file. All three were able to make overlapping requests to Illustrator, and each request was worked upon in turn, with intermediate results from each request arriving in turn. This was a simple test to add some "Hello, World" text and export some jpegs,
    repeatedly.
    Any advice would be greatly appreciated!
    Glenn Picher
    Dirigo Multimedia, Inc.
    [email protected]
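    The queue-and-drain strategy described above, sketched in Java (the language used elsewhere on this page) purely to illustrate the pattern; the real plug-in would be C++ against the Illustrator SDK, and every name here is hypothetical:

    import java.util.ArrayDeque;
    import java.util.Queue;

    // Requests arrive on arbitrary threads; only the idle handler, which the host
    // application invokes on its own thread, actually services them.
    class DrawingRequestQueue {
        private final Queue<Runnable> pending = new ArrayDeque<Runnable>();

        // Called from any thread (e.g. the listener that talks to other applications).
        void submit(Runnable request) {
            synchronized (pending) {        // the "mutex-protected queue"
                pending.add(request);
            }
        }

        // Called only from the host's thread during idle processing.
        void drainOnIdle() {
            while (true) {
                Runnable next;
                synchronized (pending) {
                    next = pending.poll();
                }
                if (next == null) {
                    return;
                }
                next.run();                 // safe: we are on the host's thread here
            }
        }
    }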


  • Is the Memory Suite thread safe?

    Hi all,
    Is the memory suite thread safe (at least when used from the Exporter context)?
    I ask because I have many threads getting and freeing memory and I've found that I get back null sometimes. This, I suspect, is the problem that's all the talk in the user forum with CS6 crashing with CUDA enabled. I'm starting to suspect that there is a memory management problem when there is also a lot of memory allocation and freeing going on by the CUDA driver. It seems that the faster the nVidia card the more likely it is to crash. That would suggest the CUDA driver (ie the code that manages the scheduling of the CUDA kernels) is in some way coupled to the memory use by Adobe or by Windows alloc|free too.
    I replaced the memory functions with _aligned_malloc|free and it seems far more reliable. Maybe it's because the OS malloc|free are thread safe or maybe it's because it's pulling from a different pool of memory (vs the Memory Suite's pool or the CUDA pool)
    comments?
    Edward

    Zac Lam wrote:
    The Memory Suite does pull from a specific memory pool that is set based on the user-specified Memory settings in the Preferences.  If you use standard OS calls, then you could end up allocating memory beyond the user-specified settings, whereas using the Memory Suite will help you stick to the Memory settings in the Preferences.
    When you get back NULL when allocating memory, are you hitting the upper boundaries of your memory usage?  Are you getting any error code returned from the function calls themselves?
    I am not hitting the upper memory bounds - I have several customers that have 10's of Gb free.
    There is no error return code from the ->NewPtr() call.
         PrMemoryPtr (*NewPtr)(csSDK_uint32 byteCount);
    A NULL pointer is how you detect a problem.
    Note that changing the size of the ->ReserveMemory() doesn't seem to make any difference as to whether you'll get a memory ptr or NULL back.
    btw my NewPtr size is either
         W x H x sizeof(PrPixelFormat_YUVA4444_32f)
         W x H x sizeof(PrPixelFormat_YUVA4444_8u)
    and happens concurrently on #cpu's threads (eg 16 to 32 instances at once is pretty common).
    The more processing power that the nVidia card has seems to make it fall over faster.
    eg I don't see it at all on a GTS 250 but do on a GTX 480, Quadro 4000 & 5000 and GTX 660
    I think there is a threading issue and an issue with the Memory Suite's pool and how it interacts with the CUDA memory pool. - note that CUDA sets RESERVED (aka locked) memory which can easily cause a fragmenting problem if you're not using the OS memory handler.

  • Native library NOT thread safe - how to use it via JNI?

    Hello,
    has anybody ever tried to use a native library from JNI, when the library is not thread safe?
    The library (Windows DLL) was up to now used in an MFC App and thus was only used by one user - that meant one thread - at a time.
    Now we would like to use the library like a "server": many Java clients connect at the same time to the library via JNI. That would mean each client makes its calls to the library in its own thread. Because the library is not thread safe, this would cause problems.
    Now we have discussed loading the library several times - separately for each client (for each thread).
    Is this possible at all? How can we do that?
    And do you think we can solve the problem in this way?
    Are there other ways to use the library, though it is not thread safe?
    Any ideas welcome.
    Thanks for any contributions to the discussion, Ina

    (1)
    has anybody ever tried to use a native library from
    JNI, when the library (Windows DLL) is not thread safe?
    Now we want many Java clients.
    That would mean each client makes its calls
    to the library in its own thread. Because the library
    is not thread safe, this would cause problems.
    Right. And therefore you have to encapsulate the DLL behind a properly synchronized interface class.
    Now the details of how you have to do that depends: (a) does the DLL contain state information other than TLS? (b) do you know which methods are not thread-safe?
    Depending on (a), (b) two extremes are both possible:
    One extreme would be to get an instance of the interface to the DLL from a factory method you'll have to write, where the factory method will block until it can give you "the DLL". Every client thread would obtain "the DLL", then use it, then release it. That would make the whole thing a "client-driven" "dedicated" server. If a client forgets to release the DLL, everybody else is going to be locked out. :-(
    The other extreme would be just to mirror the DLL methods, and mark the relevant ones as synchronized. That should be doable if (a) is false, and (b) is true.
    (2)
    Now we discussed to load the library several times -
    separately for each client (for each thread).
    Is this possible at all? How can we do that?
    And do you think we can solve the problem in this
    way?
    The DLL is going to be mapped into the process address space on first usage. More Java threads just means adding more references to the same DLL instance.
    That would not result in thread-safe behavior.
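    A minimal sketch of that "synchronized interface class" idea: every call into the non-thread-safe DLL is funneled through one lock. The class, method, and library names below are hypothetical, not from the original post.

    public final class NativeLibGateway {

        static {
            System.loadLibrary("legacylib");   // assumed name of the non-thread-safe DLL
        }

        // Raw JNI binding to the native function.
        private static native String nativeCall(String request);

        private static final Object LOCK = new Object();

        // Every client thread goes through here, so at most one thread
        // is ever inside the DLL at a time.
        public static String call(String request) {
            synchronized (LOCK) {
                return nativeCall(request);
            }
        }
    }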

  • Is Persistence context thread safe?  nop

    In my code, I have a servlet and 2 stateless session beans.
    In the servlet, I use JNDI to find a stateless bean (A) and invoke a method methodA in A:
    (Stateless bean A):
    @EJB
    StatelessBeanB beanB;

    methodA(obj) {
        beanB.save(obj);
    }

    (Stateless bean B, where the container injects a persistence context):
    @PersistenceContext private EntityManager em;

    save(obj) {
        em.persist(obj);
    }

    It is said the entity manager is not thread safe, so it should not be an instance variable. But in the code above, the variable "em" is an instance variable of stateless bean B. Does it make sense to put it there using the persistence context annotation? Is it then thread safe when the manager is injected by the container?
    Since B is a stateless session bean, I think each request will share the instance variable of the B class, which is not a thread-safe way.
    So, could you please give me some guidance on this problem and make it clear for me?
    Anything is appreciated. Thank you.
    And what if I change stateless bean B to:
    (Stateless bean B, where the container injects a persistence unit):
    @PersistenceUnit private EntityManagerFactory emf;

    save(obj) {
        em = emf.createEntityManager();
        // begin tran
        em.persist(obj);
        // commit tran
        // close em
    }

    The problem is I have several stateless beans like B; if each one has an emf injected, is there any problem?

    Hi Jacky,
    An EntityManager object is not thread-safe. However, that's not a problem when accessing an EntityManager from EJB bean instance state. The EJB container guarantees that no more than one thread has access to a particular bean instance at any given time. In the case of stateless session beans, the container uses as many distinct bean instances as there are concurrent requests.
    Storing an EntityManager object in servlet instance state would be a problem, since in the servlet programming model any number of concurrent threads share the same servlet instance.
    --ken
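    For completeness, here is a minimal sketch of the application-managed alternative the question describes (@PersistenceUnit plus a per-call EntityManager). The bean name and transaction handling are illustrative assumptions, not from the thread:

    import javax.ejb.Stateless;
    import javax.persistence.EntityManager;
    import javax.persistence.EntityManagerFactory;
    import javax.persistence.PersistenceUnit;

    @Stateless
    public class BeanB {

        // The factory is thread-safe, so it is fine as instance state.
        @PersistenceUnit
        private EntityManagerFactory emf;

        public void save(Object entity) {
            // One EntityManager per invocation; never shared between threads.
            EntityManager em = emf.createEntityManager();
            try {
                em.joinTransaction();   // enlist in the container's JTA transaction (no-op if already joined)
                em.persist(entity);
            } finally {
                em.close();
            }
        }
    }

    Either approach works; the container-managed @PersistenceContext variant in the question is simpler, and is safe for exactly the reason Ken gives.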

  • Thread-safe design pattern for encapsulating Collections?

    Hi,
    This must be a common problem, but I just can't get my head around it.
    Basically I'm reading in data from a process, and creating/updating data structures from this data. I've got a whole bunch of APTProperty objects (each with a name/value) stored in a synchronized HashMap encapsulated by a class called APTProperties. It has methods such as:
    addProperty(APTProperty) // puts to hashmap
    getProperty(String name) // retrieves from hashmap
    I also want clients to be able to iterate through all the APTProperties in a thread-safe manner, i.e. they aren't responsible for synchronizing on the Map before getting the iterator. (See the Collections.synchronizedMap() API docs.)
    Is there any way of doing this?
    Or should I just have a method called
    Iterator propertyIterator() which returns the corresponding iterator of the HashMap (but then I can't really make it thread safe)?
    I'd rather not make the internal HashMap visible to calling clients, if possible, 'cause I don't want people tinkering with it, if you know what I mean.
    Hope this makes sense.
    Keith

    In that case, I think you need to provide your own locking mechanism.
    public class APTProperties {
        // the add, get and remove methods call lock(), execute their own logic, then call unlock()

        // your locking mechanism
        private Thread owner; // doesn't need to be volatile due to synchronized access

        public synchronized void lock() throws InterruptedException {
            while (owner != null) {
                wait();
            }
            owner = Thread.currentThread();
        }

        public synchronized void unlock() {
            if (owner == Thread.currentThread()) {
                owner = null;
                notifyAll();
            } else {
                throw new IllegalMonitorStateException("Thread " + Thread.currentThread() + " does not own the lock");
            }
        }
    }
    Now you don't gain a lot from this code, since a client has to use it as
    objAPTProperties.lock();
    Iterator ite = objAPTProperties.propertyIterator();
    ... do whatever needed
    objAPTProperties.unlock();
    But if you know how the iterator will be used, you can make the client unaware of the locking mechanism. Let's say your clients will always go up to the end through the iterator. In that case, you can use a decorator iterator to handle the locking:
    public class APTProperties {

        public Iterator getThreadSafePropertyIterator() {
            // 'properties' is the encapsulated synchronized map
            return new ThreadSafeIterator(properties.values().iterator());
        }

        private class ThreadSafeIterator implements Iterator {
            private Iterator itr;
            private boolean locked;
            private boolean doneIterating;

            public ThreadSafeIterator(Iterator itr) {
                this.itr = itr;
            }

            public boolean hasNext() {
                return doneIterating ? false : itr.hasNext();
            }

            public Object next() {
                lockAsNecessary();
                Object obj = itr.next();
                unlockAsNecessary();
                return obj;
            }

            public void remove() {
                lockAsNecessary();
                itr.remove();
                unlockAsNecessary();
            }

            private void lockAsNecessary() {
                if (!locked) {
                    try {
                        lock();
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        throw new IllegalStateException("Interrupted while acquiring lock");
                    }
                    locked = true;
                }
            }

            private void unlockAsNecessary() {
                if (!hasNext()) {
                    unlock();
                    doneIterating = true;
                }
            }
        }
    } // APTProperties ends
    The code is right out of my head, so it may have some logic problem, but the basic idea should work.
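    A simpler alternative, if clients only need a read-only pass over the data, is to hand out an iterator over a snapshot copy taken under the map's lock; a small sketch (the 'properties' field name is assumed):

    import java.util.ArrayList;
    import java.util.Iterator;
    import java.util.List;

    // Inside APTProperties: copy the values while holding the map's lock, then let
    // clients iterate the snapshot without any further locking. The snapshot may go
    // stale, but it can never throw ConcurrentModificationException or corrupt the map.
    public Iterator propertyIterator() {
        synchronized (properties) {
            List snapshot = new ArrayList(properties.values());
            return snapshot.iterator();
        }
    }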

  • Thread safe ?

    Hi
    This code is currently running on my companys server in a java class implementing the singleton pattern:
    public static ShoppingAssistant getInstance() {
        if (instance == null) {
            synchronized (ShoppingAssistant.class) {
                instance = new ShoppingAssistant();
            }
        }
        return instance;
    }
    1. Is there really a need for synchronizing the method?
    2. If you choose to synchronize it, why not just synchronize the whole method?
    3. Is this method really thread safe? As I see it, a thread A could be preempted after checking instance and seeing that it is null. Then another thread B could get to run, also see that instance is null, and go on and create the instance. Thereafter B modifies some of the instance's instance variables. After that B is preempted and A gets to run. A, thinking instance is null, goes on and creates a new instance of the class and assigns it to instance, thus overwriting the old instance and creating a faulty state.
    By altering the code like this I think that problem is solved although the code will probably run slower. Is that correct?
    public static ShoppingAssistant getInstance() {
        synchronized (ShoppingAssistant.class) {
            if (instance == null) {
                instance = new ShoppingAssistant();
            }
            return instance;
        }
    }
    Regards, Mattias

    public class Singleton {
        private static final Singleton instance = new Singleton();
        public static Singleton getInstance() {
            return instance;
        }
    }
    This seems like it's so obviously the standard solution, why do people keep trying to give lazy solutions? Do they expect it to be more efficient than this?
    I can see one possible performance gain with a lazy solution:
    If you use something else in that class (besides getInstance()), but don't use getInstance(), then you don't take the performance hit of instantiating the thing.
    Of course, this is dubious at best because how often do you have other static methods in your singleton class? And not use the instance? And have construction be so expensive that a single unnecessary one has a noticeably detrimental impact on your app's performance?
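    For reference, a thread-safe lazy variant that avoids both the broken pattern and the cost of synchronizing every call is the initialization-on-demand holder idiom (a sketch, not from the thread; on Java 5+ you could instead use double-checked locking with a volatile field):

    public class ShoppingAssistant {
        private ShoppingAssistant() { }

        // The nested class is not initialized until getInstance() first touches it,
        // and class initialization is guaranteed to be thread-safe by the JVM.
        private static class Holder {
            static final ShoppingAssistant INSTANCE = new ShoppingAssistant();
        }

        public static ShoppingAssistant getInstance() {
            return Holder.INSTANCE;
        }
    }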

  • Thread safe doubt

    Hi every1
    My question:
    Multiple threads enter the method att(). I want to count how much time the attacher.attach() method takes for every thread that comes in. I think the code below works, but someone told me that if I declared timeTakenForAttach as a global (instance) variable it would not be thread safe. Can someone please explain what all this means, or can I declare timeTakenForAttach as global without it being a problem?
    class A {
        long totalAttachSetupMillis = 0L;

        public void att() {
            long start = System.currentTimeMillis();
            attacher.attach();
            long timeTakenForAttach = System.currentTimeMillis() - start;
            recordAttachSuccess(timeTakenForAttach);
        }

        public synchronized void recordAttachSuccess(long timeForAttach) {
            totalAttachSetupMillis += timeForAttach;
        }
    }
    Thank you.

    javanewbie83 wrote:
    I have to synchronize recordAttachSuccess, as different threads would enter att() and then recordAttachSuccess and simultaneously update, so I would end up getting a wrong value.
    Let me give you a more detailed explanation of why NOT having it synchronized would not work.
    First of all, let's take a look at the statement of concern:
    totalAttachSetupMillis += timeForAttach;
    For the purposes of synchronization, however, we need to look at it like this:
    long temp = totalAttachSetupMillis + timeForAttach; // call this LINE A
    totalAttachSetupMillis = temp; // call this LINE B
    Now, let's say we had threads named 1 and 2. If you didn't synchronize the method, you could possibly have this execution flow:
    // initial values: totalAttachSetupMillis = 0, timeForAttach(1) = 20, timeForAttach(2) = 30. Total should be 50
    1A // temp(1) = 0 + 20 = 20
    2A // temp(2) = 0 + 30 = 30 (remember totalAttachSetupMillis isn't set until B)
    2B // totalAttachSetupMillis = temp(2) = 30
    1B // totalAttachSetupMillis = temp(1) = 20 (instead of 50)
    In other words, total disaster could possibly ensue. Which is why you wanted to synchronize it. Ed's explanation of your other question was right on the money, though.
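    As an aside, on Java 5+ the same accumulator can be written with java.util.concurrent.atomic and no synchronized method at all; a small sketch where the Runnable stands in for the attacher object from the question:

    import java.util.concurrent.atomic.AtomicLong;

    class AttachTimer {
        private final AtomicLong totalAttachSetupMillis = new AtomicLong();

        // 'attach' stands in for attacher.attach() from the original post.
        public void att(Runnable attach) {
            long start = System.currentTimeMillis();
            attach.run();
            // addAndGet performs the read-add-write as a single atomic operation,
            // so concurrent callers cannot lose updates.
            totalAttachSetupMillis.addAndGet(System.currentTimeMillis() - start);
        }

        public long total() {
            return totalAttachSetupMillis.get();
        }
    }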

  • Thread Safe Issue with Servlet

    I saw the following statement in one of the J2EE compliant server documentations:
    "By default, servlets are not thread-safe. The methods in a single servlet instance are usually executed numerous times simultaneously (up to the available memory limit)."
    I'm quite concerned with this statement, primarily because (I'm trying to reason by reading it out loud) servlets are not going to be thread-safe, especially when available memory gets really, really low! So, while the application still has sufficient memory, we will not likely run into concurrency problems, i.e. the happy scenario for a short period after the server is started. BUT good things don't last long... Anyway, I hope someone can explain this to me with more insight. Thanks.

    Don't worry, memory occupation and thread safety are not related at all.
    In my opinion, the following is the meaning of the statement you quote.
    Since the servlet specification doesn't force any implementation to spawn a new servlet object upon each request (and this should be a real memory hit!), nor to synchronize calls to servlet methods, you should always code your servlet in a "stateless" fashion: you should be aware the same method on the same object could (and probably will) be called concurrently if multiple concurrent client requests are submitted.
    Hope I've been clear enough...
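    A small illustration of that "stateless" advice (a hypothetical servlet, not from the thread): keep per-request data in local variables, never in instance fields.

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class GreetingServlet extends HttpServlet {

        // BAD: an instance field would be shared by every request thread
        // hitting this one servlet instance.
        // private String userName;

        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            // GOOD: a local variable lives on the current thread's stack,
            // so concurrent requests cannot interfere with each other.
            String userName = req.getParameter("name");
            resp.getWriter().println("Hello, " + userName);
        }
    }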

  • What does it mean to be "thread safe"?

    What does it mean to be "thread safe"?
    I am working with a team on a project here at work. Someone here suggested that we build all of our screens during the initialization of the application to save time later. During the use of the application, the screens would then be made visible or invisible.
    One of the objections to that idea was that the swing components (many of which we use) are not thread safe. Can anyone tell me what the relevance of that is?
    Thanks

    To understand why Swing is not thread safe you have to understand a little bit of history, and a little bit of how Swing actually works. The history (short version) is that it is nearly impossible to make a GUI toolkit thread safe. X is thread safe (well, sorta) and it's a really big mess. So to keep things simple (and fast) Swing was developed with an event model. The Swing components themselves are not thread safe, but if you always change them with an event on the event queue, you will never have a problem. Basically, there is a thread always running with any GUI program. It's called the AWT event handler. Whenever an event happens, it goes on an event queue and the event handler picks events off one by one and tells the correct components about them. If you have code that manipulates Swing components, and you are not ON the AWT thread (inside a listener trail), you must use this code:
    SwingUtilities.invokeLater(new Runnable() {
        public void run() {
            // code to manipulate the Swing component here
        }
    });
    This method puts the given Runnable object on the event queue for the AWT thread to deal with. This way changes to the components happen in order, in a thread-safe way.

  • Java.util.Locale not thread-safe !

    In multithreaded programming, we know that the double-checked locking idiom is broken. But lots of code, even in the Sun Java core libraries, is written using this idiom, like the class "java.util.Locale".
    I have submitted this bug report just now,
    but I wanted to have your opinion about this.
    Don't you think a complete review of the source code of the core libraries is necessary ?
    java.util.Locale seems not to be thread safe, as I look at the source code.
    The static method getDefault() is not synchronized.
    The code is as follows:
    public static Locale getDefault() {
        // do not synchronize this method - see 4071298
        // it's OK if more than one default locale happens to be created
        if (defaultLocale == null) {
            // ... do something ...
            defaultLocale = new Locale(language, country, variant);
        }
        return defaultLocale;
    }
    This method seems to have been synchronized in the past, but the bug report 4071298 removed the "synchronized" modifier.
    The problem is that for multiprocessor machines, each processor having its own cache, the data in these caches are never synchronized with the main memory.
    The lack of a memory barrier, which is normally provided by the "synchronized" modifier, can make a thread read an incompletely initialized Locale instance referenced by the private static variable "defaultLocale".
    This problem is well explained in http://www.javaworld.com/javaworld/jw-02-2001/jw-0209-double.html and other documents about multithreading.
    I think this method must just be synchronized again.
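    For reference, a sketch of the synchronized variant being proposed (illustrative only, not the actual JDK source):

    public static synchronized Locale getDefault() {
        // The monitor provides both mutual exclusion and the memory barrier,
        // so no thread can observe a partially initialized defaultLocale.
        if (defaultLocale == null) {
            defaultLocale = new Locale(language, country, variant);
        }
        return defaultLocale;
    }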

    Shankar, I understand that this is something books and articles about multithreading don't talk much about, because for marketing reasons, multithreading is supposed to be very simple.
    It is absolutely not the case.
    Multithreading IS a most difficult topic.
    First, you must be aware that each processor has its own high-speed cache memory, much faster than the main memory.
    This cache is made of a mixture of registers and L1/L2/L3 caches.
    Suppose we have a program with a shared variable "public static int a = 0;".
    On a multiprocessor system, suppose that a thread TA running on processor P1 assign a value to this variable "a=33;".
    The write is done to the cache of P1, but not in the main memory.
    Now, a second thread TB running on processor P2 reads this variable with "System.out.println(a);".
    The value of "a" is retrieved from main memory, and is 0!
    The value 33 is in the cache of P1, not in main memory where its value is still 0, because the cache of P1 has not been flushed.
    When you are using a BufferedOutputStream, you use the "flush()" method to flush the buffer, and the "sync()" method of FileDescriptor to commit data to disk.
    With memory, it is the same thing.
    The java "synchronized" keyword is not only a streetlight to regulate traffic, it is also a "memory barrier".
    The opening brace "{" of a synchronized block writes the data of the processor cache into the main memory.
    Then, the cache is emptied, so that stale values of other data don't remain here.
    Inside the "synchronized" block, the thread must thus retrieve fresh values from main memory.
    At the closing brace "}", data in the processor cache is written to main memory.
    The word "synchronized" has the same meaning as the "sync()" method of FileDescriptor class, which writes data physically to disk.
    You see, it is really a cache communication problem, and the synchronized blocks allows us to devise a kind of data transfer protocol between main memory and the multiple processor local caches.
    The hardware does not do this memory reconciliation for you. You must do it yourself using "synchronized" block.
    Besides, inside a synchronized block, the processor ( or compiler ) feels free to write data in any order it feels most appropriate.
    It can thus reorder assignments and instruction.
    It is like the elevator algorithm used when you store data into a hard disk.
    Writes are reordered so that they can be retrieved more efficiently by one sweep of the magnetic head.
    This reordering, as well as the arbitrary moment the processor decides to reconciliate parts of its cache to main memory ( if you don't use synchronized ) are the source of the problem.
    A thread TB on processor P2 can retrieve a non-null pointer, and retrieve this object from main memory, where it is not yet initialized.
    It has been initialized in the cache of P1 by TA, though, but TB doesn't see it.
    To summarize, use "synchronized" every time you access to shared variables.
    There is no other way to be safe.
    You get the problem, now ?
    ( Note that this problem has strictly nothing to do with the atomicity issue, but most people tend to mix the two topics...
    Besides, as each access to a shared variable must be done inside a synchronized block, the issue of atomicity is not important at all.
    Why would you care about atomicity if you can get a stale value ?
    The only case where atomicity is important is when multiple threads access a single shared variable not in synchronized block. In this case, the variable must be declared volatile, which in theory synchronizes main and cache memory, and make even long and double atomic, but as it is broken in lots of implementation, ... )

  • ODBC SQLGetData not thread-safe when retrieving lob

    Hi MaxDB developers,
    we are in the process of migrating our solution from SapDb 7.4.03.32 to MaxDb 7.6.03.7. We use the ODBC driver on Windows, from multi-threaded applications.
    We encountered bugs in the ODBC driver 7.4.03.32 and made our own fixes since we had the sources. I checked if these problems were fixed in 7.6.03.7 and they are almost all addressed, except one:
    when two threads use two different stmt from the same dbc and call simultaneously SQLGetData to retrieve a LONG column, a global variable not protected by a critical section is changed and the application crashes. The variable in cause is dbc_block_ptr->esqblk.sqlca.sqlrap->rasqlldp which is set by pa30bpcruntime and reset by pa30apcruntime during the call to apegetl. Calls to apegetl are protected by PA09ENTERASYNCFUNCTION except in SQLGetData, when it calls pa60MoveLongPos or pa60MoveLong.
    Since MaxDB is a critical feature of our application, we would like to know when this bug can be fixed by SAP. Or maybe could we get access to the sources of sqlod32w.dll 7.6.03.7 to fix it ourselves ?
    Thanks,
    Guillaume

    Hello Guillaume
    Regarding the threaded access to SQLGetData: of course, it is possible to manage the synchronization as you proposed. However, I'm still not sure whether this really solves the general problem.
    The point is that the MaxDB ODBC driver is thread safe for concurrent connections, but not for concurrent statement processing within a single connection. Therefore I would like to ask how your application accesses data via SQLGetData (in terms of connections and threads), and what amount of data is usually transferred.
    Regards  Thomas

  • Hashtable really thread safe or is there a bug in it ?

    Hashtable is said to be thread-safe, so I think it should be true.
    So I just want someone to tell me what is wrong with my following observation:
    I downloaded the source code of jdk 1.3.1 and read the file Hashtable.java, that contains the implementation code of Hashtables.
    Lots of methods (like get(), put(), contains(), ...) are declared public and synchronized. So far, all is good.
    But the function size() is defined as follows:
    public int size() {
        return count;
    }
    where count is an instance variable defined as follows:
    /**
     * The total number of entries in the hash table.
     */
    private transient int count;
    As the function size() is not synchronized, it can retrieve the value of "count" when another method is in the process of updating it.
    For instance, the method put() (that is synchronized) increments the variable "count" with this instruction:
    count++;
    But we know that operations on variables of type "int" are atomic, so this issue doesn't harm the thread safety of the code.
    But there is another issue with multithreading, that deals with caching of data in the working memory of each thread.
    Suppose the following scenario:
    1) Thread A calls method size(). This method reads the value of "count" and put it in the working memory of thread A.
    2) Thread B calls method put(). This method increments "count" using instruction "count++".
    3) Thread A calls method size() again. This method reads the value of "count" from the working memory of the thread, and this cached value may not be the updated value by thread B.
    As size() is not synchronized, and "count" is not declared "volatile", multiple threads may retrieve a stale value, because the content of the thread working memory is not in sync with main memory.
    So, here I see a problem.
    Is there something wrong in this observation ?

    Is there something wrong in this observation?
    Unfortunately not, as far as I can tell.
    Recent analysis of java's synchronized support has revealed the danger in such unsynchronized code blocks, so it may have been a simple case that it wasn't realised that size is not thread-safe.
    If this is an issue, use Collections.synchronizedMap - in Sun's JDK 1.3, at least, all of the methods are synchronized, except for iterators.
    Regards,
    -Troy
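    A minimal sketch of that suggestion: wrap the map once with Collections.synchronizedMap and hold the map's lock yourself only when iterating, as its javadoc requires (the map contents here are illustrative):

    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Iterator;
    import java.util.Map;

    public class SyncMapExample {
        public static void main(String[] args) {
            Map counts = Collections.synchronizedMap(new HashMap());
            counts.put("requests", new Integer(1));   // individual calls are synchronized by the wrapper

            // Iteration is the one case the wrapper cannot protect on its own:
            // hold the map's lock for the whole traversal.
            synchronized (counts) {
                for (Iterator it = counts.keySet().iterator(); it.hasNext();) {
                    Object key = it.next();
                    System.out.println(key + " = " + counts.get(key));
                }
            }
        }
    }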
