Mapping threads to Processors

Hello,
I have a few questions, and if you can help and can send a link to some simple code to learn from that would be great.
I read that Java threads do not have a process id at all. They have a thread id.
I tried the following: Thread.currentThread().getName(), and it works.
I tried the following: Thread.currentThread().hashCode(), and it works.
Question 1: what is a hashcode, and why is it needed?
Question 2: I tried the following, Thread.currentThread().getId(); it does not work, the compiler gives an error.
Question 3: I read that currently we cannot map threads to processors. Do you know of another way of mapping threads, or mapping different Java programs to "different" processors, and then allowing the threads to talk to each other from different processors? The most important thing for now is how to map the thread or program to different processors on the same system.
Thank you very much

Question 1: what is a hashcode, and why is it needed?
Every Object has a hashcode. The explanation is in the documentation of Object.
It has nothing to do with your issue.
Question 2: I tried the following, Thread.currentThread().getId(); it does not work, the compiler gives an error.
It should work. Post the actual code and the actual error (copy and paste).
Still, there is no correlation between this number and the processor that runs the thread.
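For reference, a minimal check; Thread.getId() exists since Java 5, so compiling against an older JDK (or an old -source/-target setting) would explain a compiler error:

```java
public class ThreadIdDemo {
    public static void main(String[] args) {
        Thread t = Thread.currentThread();
        // getId() was added in Java 5; on an older compiler this line fails.
        System.out.println("name = " + t.getName());
        System.out.println("id   = " + t.getId());
        System.out.println("hash = " + t.hashCode());
    }
}
```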
Question 3: I read that currently we cannot map threads to processors. Do you know of another way of mapping threads, or mapping different Java programs to "different" processors, and then allowing the threads to talk to each other from different processors? The most important thing for now is how to map the thread or program to different processors on the same system.
There is no way to achieve what you want. Being platform independent, Java code has to run the same way on multi-processor and single-processor systems.
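Although pure Java offers no processor pinning, you can at least size your work to the processor count and let the OS scheduler spread the threads across cores; a sketch (the class name is made up for illustration):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PerCoreWork {
    public static void main(String[] args) throws InterruptedException {
        // Query the processor count; the OS decides the actual thread placement.
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);
        for (int i = 0; i < cores; i++) {
            final int n = i;
            pool.submit(() ->
                System.out.println("task " + n + " on " + Thread.currentThread().getName()));
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }
}
```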

Similar Messages

  • Where did the famous & long E71 & Maps thread go?

    There was a thread until a few hours ago which discussed the cheating of Nokia by not providing maps to E71 users. Where has it disappeared?
    Solved!
    Go to Solution.

    Sorry about that - there were a few posts that went over the top (foul language) and instead of just removing them, a moderator had removed the whole thread by mistake. I put the thread back this morning when I noticed that.
    I wrote all my posts from 2005-2011 as an "Admin" for this community. I still work for Nokia as an external consultant, so my rank in all posts is now "Employee".

  • JDK 1.6 on Solaris. Multiple java processes and thread freezes

    Hi, we've come across a really weird behavior on the Solaris JVM, reported by a customer of ours.
    Our server application consists of multiple threads. Normally we see them all running within a single Java process, and all is fine.
    At some point in time, and only on Solaris 10, it seems that the main Java process starts a second Java process. This is not our code trying to execute some other application/command. It's the JVM itself forking a new copy of itself. I assumed this was because of some JVM behaviour on Solaris that uses multiple processes if the number of threads is > 128. However at the time of spawn there are less than 90 threads running.
    In any case, once this second process starts, some of the threads of the application (incidentally, they're the first threads created by the application at startup, in the first threadgroup) stop working. Our application dumps a list of all threads in the system every ten minutes, and even when they're not working, the threads are still there. Our logs also show that when the second process starts, these threads were not in the running state. They had just completed their operations and were sleeping in their thread pool, in a wait() call. Once the second process starts, jobs for these threads just queue up, and the wait() does not return, even after another thread has done a notify() to inform them of the new jobs.
    Even more interesting, when the customer manually kills -9 the second process, without doing anything in our application, all threads that were 'frozen' start working again, immediately. This (and the fact that this never happens on other OSes) makes us think that this is some sort of problem (or misconfiguration) specific to the Solaris JVM, and not our application.
    The customer initially reported this with JDK 1.5.0_12 , we told them to upgrade to the latest JDK 1.6 update 6, but the problem remains. There are no special JVM switches (apart from -Xms32m -Xmx256m) used. We're really at a dead end here in diagnosing this problem, as it clearly seems to be outside our app. Any suggestion?

    Actually, we've discovered that that's not really what was going on. I still believe there's a bug in the JVM, but the fork was happening because our Java code tries to exec a command line tool once a minute. After hours of this, we get a rogue child process with this stack (which is where we are forking this command line tool once a minute):
    JVM version is 1.5.0_08-b03
    Thread t@38: (state = IN_NATIVE)
    - java.lang.UNIXProcess.forkAndExec(byte[], byte[], int, byte[], int, byte[], boolean, java.io.FileDescriptor, java.io.FileDescriptor, java.io.FileDescriptor) @bci=168980456 (Interpreted frame)
    - java.lang.UNIXProcess.forkAndExec(byte[], byte[], int, byte[], int, byte[], boolean, java.io.FileDescriptor, java.io.FileDescriptor, java.io.FileDescriptor) @bci=0 (Interpreted frame)
    - java.lang.UNIXProcess.<init>(byte[], byte[], int, byte[], int, byte[], boolean) @bci=62, line=53 (Interpreted frame)
    - java.lang.ProcessImpl.start(java.lang.String[], java.util.Map, java.lang.String, boolean) @bci=182, line=65 (Interpreted frame)
    - java.lang.ProcessBuilder.start() @bci=112, line=451 (Interpreted frame)
    - java.lang.Runtime.exec(java.lang.String[], java.lang.String[], java.io.File) @bci=16, line=591 (Interpreted frame)
    - java.lang.Runtime.exec(java.lang.String, java.lang.String[], java.io.File) @bci=69, line=429 (Interpreted frame)
    - java.lang.Runtime.exec(java.lang.String) @bci=4, line=326 (Interpreted frame)
    - java.lang.Thread.run() @bci=11, line=595 (Interpreted frame)
    There are also several dozen other threads, all with the same stack:
    Thread t@32: (state = BLOCKED)
    Error occurred during stack walking:
    sun.jvm.hotspot.debugger.DebuggerException: can't map thread id to thread handle!
         at sun.jvm.hotspot.debugger.proc.ProcDebuggerLocal.getThreadIntegerRegisterSet0(Native Method)
         at sun.jvm.hotspot.debugger.proc.ProcDebuggerLocal.getThreadIntegerRegisterSet(ProcDebuggerLocal.java:364)
         at sun.jvm.hotspot.debugger.proc.sparc.ProcSPARCThread.getContext(ProcSPARCThread.java:35)
         at sun.jvm.hotspot.runtime.solaris_sparc.SolarisSPARCJavaThreadPDAccess.getCurrentFrameGuess(SolarisSPARCJavaThreadPDAccess.java:108)
         at sun.jvm.hotspot.runtime.JavaThread.getCurrentFrameGuess(JavaThread.java:252)
         at sun.jvm.hotspot.runtime.JavaThread.getLastJavaVFrameDbg(JavaThread.java:211)
         at sun.jvm.hotspot.tools.StackTrace.run(StackTrace.java:50)
         at sun.jvm.hotspot.tools.JStack.run(JStack.java:41)
         at sun.jvm.hotspot.tools.Tool.start(Tool.java:204)
         at sun.jvm.hotspot.tools.JStack.main(JStack.java:58)
    I'm pretty sure this is because the fork part of UNIXProcess.forkAndExec is using the Solaris fork1 system call, and thus all the Java context thinks all those threads exist, whereas the actual threads don't exist in that process.
    It seems to me that something is broken in UnixProcess.forkAndExec in native code; it did the fork, but not the exec, and this exec thread just sits there forever. And of course, it's still holding all the file descriptors of the original process, which means that if we decide to restart our process, we can't reopen our sockets for listening or whatever else we want to do.
    There is another possibility, which I can't completely rule out: this child process just happened to be the one that was fork'd when the parent process called Runtime.halt(), which is how the Java process exits. We decided to exit halfway through a Runtime.exec(), and got this child process stuck. But I don't think that's what happens... from what I understand of the data we collected, we see this same child process created at some point in time, and it doesn't go away.
    Yes, I realize that my JVM is very old, but I cannot find any bug fixes in the release notes that claim to fix something like this. And since this only happens once every day or two, I'm reluctant to just throw a new JVM at this--although I'm sure I will shortly.
    Has anyone else seen anything like this?

  • MAP Toolkit - How to use this MAP tool kit for all SQL Server inventory in new work enviornment

    Hi everyone,
    I just joined a new job and am planning to do an inventory of the whole environment so I can get a list of all SQL Server installations. I have just downloaded the MAP toolkit, so I'm looking for step-by-step information on using it for a SQL inventory. If anyone has documentation or screenshots to share, that would be great.
    Also, is it fine to run this tool at any time, or should it be run at night when there is less activity?
    How long does it generally take for a medium-size environment where the server count is about 30 (Dev/Staging/Prod)?
    Any scripts that give detailed information would be great too.
    Thank you 
    Please Mark As Answer if it is helpful. \\Aim To Inspire Rather to Teach A.Shah

    Hi Logicinisde,
    According to your description, the issue regards the Microsoft Assessment and Planning Solution Accelerator. I suggest you post the question in the Solution Accelerators forums at
    http://social.technet.microsoft.com/Forums/en-US/map/threads/ . It is appropriate, and more experts will assist you there.
    The Microsoft Assessment and Planning (MAP) Toolkit is an agentless inventory, assessment, and reporting tool that can securely assess IT environments for various platform migrations. You can use MAP as part of a comprehensive process for planning and migrating
    legacy database to SQL Server instances.
    There is more information about how to use MAP Tool–Microsoft Assessment and Planning toolkit, you can review the following articles.
    http://blogs.technet.com/b/meamcs/archive/2012/09/24/how-to-use-map-tool-microsoft-assessment-and-planning-toolkit.aspx
    Microsoft Assessment and Planning Toolkit - Technical FAQ:
    http://ochoco.blogspot.in/2009/02/microsoft-assessment-and-planning.html
    Regards,
    Sofiya Li
    TechNet Community Support

  • Concurrency in Swing,  Multi-processor system

    I have two questions:
    1. This is a classic situation where I am looking for a definitive answer: I've read about the single-thread rule/EDT, SwingWorker, and the use of invokeLater()/invokeAndWait(). The system I am designing will have multiple Swing windows (JInternalFrames) that do fairly complex GUI work. No direct interaction is needed between the windows, which greatly simplifies things. Some windows are horrendously complex, and I simply want to ensure that one slow window doesn't bog down the rest of the UI. I'm not entirely clear on what exactly I should be threading: should the entire JInternalFrame itself be a Runnable? The expensive operation within the JInternalFrame? A good example of this is a complex paint() method: in this case I've heard of spawning a thread to render to a back-buffer of sorts, then blitting the whole thing when ready. In short, what's the cleanest approach here to ensure that one rogue window doesn't block the others? I apologize if this is something addressed over and over, but most examples seem to point to the classic case of "the expensive DB operation" within a Swing app.
    2. Short and sweet: any way to have Swing take advantage of multi-processor systems, say, a system with 6 processors available to it? If you have one Swing process that spawns 10 threads, that's still just one process and the OS probably wouldn't be smart enough to distribute the threads across processors, I'm guessing. Any input on this would be helpful. Thank you!

    (1) You need to use a profiler. This is the first step in any sort of optimization. The profiler does two important things: first, it tells you where the real bottlenecks are (which is usually not what you expect), and eliminates any doubt as to a certain section of code being 'slow' or 'fast'. Second, the profiler lets you compare results before and after. That way, you can check that your code changes actually increased performance, and by exactly how much.
    (2) Generally speaking, if there are 10 threads and 10 CPU's, then each thread runs concurrently on a different CPU.
    As per (1), the suggestion to use double buffering is likely the best way to go. When you think about what it takes to draw an image, 90% of it can be done in a worker thread. The geometry, creating Shapes, drawing them onto a graphics object, transformations and filters, all of that can be done offline. Only copying the buffered image onscreen is the 10% that needs to happen on the EDT. But again, use a profiler first.
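A minimal sketch of that back-buffer idea (class and method names are made up for illustration): the expensive rendering runs in a SwingWorker, and only the cheap blit touches the EDT.

```java
import java.awt.Graphics;
import java.awt.image.BufferedImage;
import javax.swing.JPanel;
import javax.swing.SwingWorker;

class BackBufferPanel extends JPanel {
    private volatile BufferedImage buffer;

    void renderAsync(final int w, final int h) {
        new SwingWorker<BufferedImage, Void>() {
            @Override protected BufferedImage doInBackground() {
                // Stand-in for the expensive drawing: runs off the EDT.
                BufferedImage img = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
                Graphics g = img.getGraphics();
                g.fillRect(0, 0, w, h);
                g.dispose();
                return img;
            }
            @Override protected void done() {
                try { buffer = get(); } catch (Exception ignored) {}
                repaint();  // only the cheap blit request happens on the EDT
            }
        }.execute();
    }

    @Override protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        if (buffer != null) g.drawImage(buffer, 0, 0, null);
    }
}
```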

  • Coordinate threads

    I am writing a Message Passing Interface program which has 7 threads running simultaneously. Each thread sends/receives messages to/from the other threads. The problem is that the program runs on a PC where the threads compete for the processor. Sometimes a thread is stopped in the middle of message processing and another thread continues to work, which makes the program unstable. How can I coordinate the threads to make them stop only at a defined point?
    Thanks.

    Any of those programs can still be suspended at any time.
    I think you need to define your goal more clearly. Just saying "I want to coordinate threads" doesn't really say anything.
    When you run multiple threads or multiple programs, you are implicitly stating that they're independent of each other and you don't care which one runs or stops when.
    You can force threads to stop and wait for each other or for external events at certain points, but you can't force them to get CPU time--that can be taken away at any time, for any length of time.
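If "defined points" means phase boundaries, java.util.concurrent.CyclicBarrier is the standard tool; a sketch (the barrier guarantees that no thread enters the next phase early, but the OS can still preempt a thread mid-step):

```java
import java.util.concurrent.CyclicBarrier;

public class BarrierDemo {
    public static void main(String[] args) {
        final int workers = 7;
        CyclicBarrier barrier = new CyclicBarrier(workers,
                () -> System.out.println("all workers reached the sync point"));
        for (int i = 0; i < workers; i++) {
            new Thread(() -> {
                try {
                    // ... process one message ...
                    barrier.await();   // no thread proceeds past here alone
                    // ... next phase ...
                } catch (Exception e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}
```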

  • Is Aperture 3 enabled for hyper-threading?

    That would make the difference in whether to get a Mac with a Core i7 processor.

    There is a big difference between multicore and hyper-threading.
    Basically you have three technologies, multiprocessor, multicore and hyper-threading. The first two give you multiple physical hardware. In the case of the first, multiprocessor, you get multiple independent CPU's. In the second, multicore, you get multiple processing units on a single chip.
    In the third case, hyper-threading, each processor gets a logical 'twin'. That is, each physical processor pretends to be two processors.
    So that's the hardware. From a software perspective you have multi-tasking, multi-processing, and multi-threading.
    Multi-tasking is the illusion that many things are happening at once on a single-core system. It's been around since the late 60's, usually called time sharing. The OS switches between tasks, loading one in as it unloads another. It happens fast enough that things appear to be happening at the same time, but they are not.
    Multi-processing is multi-tasking but on a system with multiple cores (or CPUs). Each core gets a program and can run simultaneously. OS X was designed for this from the beginning using SMP (symmetric multiprocessing), so each application you are running gets its own core.
    Multi-threading is multi-tasking within a single application. Again OS X is designed for this. Applications create threads and then the OS can run those threads on an available core. Of course it doesn't always make sense to do this. If the application needs a calculation to finish before it can proceed nothing is gained by running the calculation on a separate core.
    Aperture makes use of a lot of the technology in the system. Which piece will give you the biggest performance boost is hard to say. I still haven't seen any real-world benchmarks on Aperture. Many users here have their own stories and will swear by the solutions they have decided on.
    The GPU is important. Aperture uses the Core Image technology, and the best GPU you can get for your system is a plus. However, Core Image is also set up to use the fastest execution path available, so if you're doing something that the CPU can do faster than the GPU, it will go that route. And in that case having extra cores lying around would be a plus.
    So to answer your intended question: does Aperture make use of multi-threading (not hyper-threading)? The answer is yes. Does it make a tremendous difference? That depends on your definition of tremendous. You'll never get twice the performance when running on two cores, that much I can say for sure.

  • RMI thread

    Hi,
    I am pretty new in RMI stuff, so I have a basic question:
    When a client connects to a server using RMI, is a new thread created and run on the server side? If so, how can I control this thread (changing its name, etc.)?
    Thanks,
    Jhon

    You may run into some thread ID limitations, depending on the details of what you are trying to do.
    Specifically, there is no guarantee that the same thread is used for every RMI remote method call involving the same RMI connection.
    But if you can live with this limitation, then why not (for logging purposes) just do a lookup table mapping thread ID to name?
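A possible shape for such a lookup table (the class and method names here are hypothetical, just for illustration):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Since RMI makes no guarantee about which thread serves a given call,
// tag each call with your own name, keyed by the current thread id.
public class CallNames {
    private static final Map<Long, String> names = new ConcurrentHashMap<>();

    public static void register(String name) {
        names.put(Thread.currentThread().getId(), name);
    }

    public static String current() {
        return names.getOrDefault(Thread.currentThread().getId(), "unknown");
    }
}
```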

  • Single Threaded Program

    What do you mean by a single-threaded program? For example, I have a single-threaded word processor which can do only one task at a time. Does this mean that there is one and only one thread which is carrying out all tasks, one at a time? Say, formatting the document and then printing? I think I'm very near to it but not getting it properly. Please help. Thanks a lot.

    A single threaded program is a normal program that has only one thread. That doesn't mean that it can do only one task at a time, but that one thread is performing all the tasks.
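In other words, something like this, where the single main thread performs the tasks strictly in sequence:

```java
public class SingleThreadedDemo {
    public static void main(String[] args) {
        format();
        print();   // runs only after format() has returned
    }
    static void format() { System.out.println("formatting document"); }
    static void print()  { System.out.println("printing document"); }
}
```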

  • Spawning new entry processors from within an existing entry processor

    Is it possible / legal to spawn a new entry processor (to operate on a different cache) from within an existing entry processor?
    E.g. we have a parent and a child cache. We will receive an update of the parent and start an entry processor to handle it. Off the back of the parent update we will also need to update some child entries in the other cache, and need to start a new entry processor for the child entries. Is it legal to do this?

    Hi Ghanshyam,
    yes, in case of (a), you would be mixing different types in the same cache. There is nothing wrong with that from Coherence's point of view, as long as all code which is supposed to access such objects in their deserialized form is able to handle this situation.
    This means that you need to use special extractors for creating indexes, and you need to write your filters, entry processors and aggregators appropriately to take this into account. But that's all it means.
    The EntryProcessor on the child could be invoked, so long as there are more service
    threads configured. This allows retaining partition affinity. I don't think this is technically
    illegal.
    It is problematic, as invoking an entry-processor from another entry-processor in the same cache service can lead to deadlock/livelock situations. You won't find this out in a simple test, as you don't necessarily get an exception.
    But even if it is not technically guarded against, firing a second entry-processor consumes an additional thread from the thread-pool. Now if you get to a situation where all (or at least more than half of the thread-pool size) of your entry-processors try to fire an additional entry-processor and there are no more threads in the thread-pool, then some or all would be waiting for a thread to become available, and of course none would become available, because there are not enough free threads left to give one to everyone.
    However, none of them can back off as all are waiting for the fired entry-processor to complete. Poof, no processing is possible on your cache service.
    Another problematic situation which can arise if entry processors are fired from entry processors is that your entry-processors may deadlock on entries (entry processors executing on some entries and trying to execute on another entry on which another entry processor executes and also tries to execute on the first entry). In this case the entry-processors would wait on each other to execute.
    No code running in the cache server invoked by Coherence is supposed to access a cache service from code running in the threads of the same cache service, except for a couple of specifically named operations which only release resources but not consume additional new ones.
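The thread-pool exhaustion described above can be reproduced with plain java.util.concurrent, outside Coherence; a sketch: a task submits a subtask to its own single-thread pool and blocks on the result, so the subtask can never run.

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class PoolSelfDeadlock {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(1);
        Future<String> outer = pool.submit(() -> {
            // The inner task is queued, but the only thread is busy running us.
            Future<String> inner = pool.submit(() -> "inner");
            return inner.get(2, TimeUnit.SECONDS);  // would wait forever without the timeout
        });
        try {
            outer.get();
        } catch (ExecutionException e) {
            // The cause is a TimeoutException: the pool deadlocked on itself.
            System.out.println("stuck: " + e.getCause());
        }
        pool.shutdownNow();
    }
}
```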
    Best regards,
    Robert

  • How many thread can I create?

    I wrote a multithreaded program. I use the Executors class and the newCachedThreadPool method to create a thread pool.
    I want to create as many threads as I can. Somewhere I read that the maximum number of threads allowed depends on the OS, the available memory, and the JVM.
    I use SUSE 11.1 as the operating system, have 3.75 GB RAM and 7.5 GB swap, and use JDK 1.6.
    I want to create the maximum number of threads that I can, but I don't know how many that is.
    please help me.

    Hi _Security, multi-threading isn't always a good thing, because of the constant switching the processor has to do, causing it to be slower than it could be if you properly queued your tasks instead of threading each one. Each thread you create is another thing your processor has to execute each cycle, so if you have tons of threads, your processor will have to execute and switch through each one every cycle, bogging down performance by a long shot.
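A common alternative to probing the thread ceiling is a bounded pool sized from the processor count, with the remaining tasks queued; a sketch (the sizing rule is a rough heuristic, not a hard requirement):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class BoundedPoolDemo {
    static void doJob(int job) {
        // stand-in for the real per-task work
    }

    public static void main(String[] args) throws InterruptedException {
        // CPU-bound work: ~one thread per core; I/O-bound work: often a few times more.
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);
        for (int i = 0; i < 1000; i++) {
            final int job = i;
            pool.submit(() -> doJob(job));   // excess tasks wait in the queue
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}
```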

  • How can I tell the VMware GdbServer stub to set a processor core during kernel debugging session?

    I'm using the VMware DbgServer stub for kernel mode debugging a Windows X86 VM and I want to set a different thread (or processor) in order to get/set the processor context, but the DbgServer replies that the target only has one thread running.
    (gdb) info threads
    Sending packet:
    $T1#85...Ack
    Packet received: OK
    Sending packet: $qfThreadInfo#bb...Ack
    Packet received: m1
    Sending packet: $qsThreadInfo#c8...Ack
    Packet received: l
    Thread 1 ( Name: VCPU 0 ) 0x81626a08 in ?? ()
    Questions:
    1.  How can I tell the VMware DbgServer stub to set a processor core or even another thread during kernel debugging session?
    2.  How can I set/get context or stepping from a different thread or processor core?
    Any help will be highly appreciated.
    Thanks,
    Alex.

    Nick-
    In my experience, the only tell is in the result with Lightroom unless your processor is dog slow. You can select PS as the active app. on my Mac and see it working.
    You could also modify the action to not save and close, and then check History in PS.

  • Thread.currentThread() unique for each server request?

    Hello
    I have a hosted application in glassfish. When there is a http request I write a file to disk in server machine.
    I am just wondering in writer.class can I use Thread.currentThread() as a unique identification for each and every request?
    Thank you

    I am sorry. I made the question weird by adding unnecessary things.
    Let me ask it simply: can I replace the singleton class IdPool with a class with a static variable id, like the following?
    IdPool class (singleton and Map<Thread, Long>):
    public class IdPool {
        protected static IdPool pool = null;
        protected static Map<Thread, Long> idmap = new HashMap<Thread, Long>();
        public static Map<Thread, Long> getIdMap() {
            return idmap;
        }
        public static IdPool getSingleInstance() {
            if (IdPool.pool == null)
                IdPool.pool = new IdPool();
            return IdPool.pool;
        }
        public synchronized Long makeNewId(Thread thread) {
            Long id = Calendar.getInstance().getTimeInMillis();
            IdPool.idmap.put(thread, id);
            return id;
        }
        public synchronized Long getId(Thread thread) {
            return IdPool.idmap.get(thread);
        }
    }
    Replace the above with the following:
    class IDClass {
        public static long ID;
    }
    In the request handler class:
    // make new id
    IDClass.ID = 123;
    In FileWriter.class:
    // get ID
    long id = IDClass.ID;
    Because for each and every request it starts as a new application. "Application" means calling the backend request handler and file writer program I have made; those are in a separate jar file. The same instance of the application is never used by two requests, and once the request is finished the application initialized for that particular request closes down.
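One caveat worth noting: a single static field is shared by every request thread, so if two requests ever overlap, they would overwrite each other's id. If that can happen in your container, a ThreadLocal is the usual per-thread variant; a sketch with hypothetical names:

```java
// Each thread sees its own value, unlike a single static field,
// which all request threads share.
public class RequestId {
    private static final ThreadLocal<Long> ID = new ThreadLocal<>();

    public static void set(long id) { ID.set(id); }
    public static Long get()        { return ID.get(); }
    public static void clear()      { ID.remove(); }  // call when the request ends
}
```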
    EJP, thank you very much for your kind attention for this matter.

  • Risk, if any, of closing ServletOutputStream using a user thread?

    Hello All,
    What is the risk of writing/flushing/closing ServletOutputStream using a user thread other than the HTTP thread associated with the request?
    I want to do that to implement client request timeouts. Currently in our application, for every request received we create a new thread to make the HTTP thread time out after a configurable amount of time has elapsed. Though creating a new thread does the job just fine, that approach creates another problem: the servers are not scaling to the load we receive. We frequently get "unable to create threads due to unavailable memory" errors. At any point in time, any of our servers receives ~200 requests, which means that the server has to create/handle 400+ threads. If the servers are slow due to external systems, then a lot more threads accumulate in the server. When the thread count reaches 500+ it fails to accept any further requests. So as a possible alternative to the current approach, I am thinking of running a few threads (maybe a couple) in the background and timing out the requests by closing the associated ServletOutputStream with a TimeoutException.
    Note: I know that as another possible solution, I can go for few more servers in the cluster but before that I want to check if the above approach is a possible solution because it is less complex and saves money on the hardware.
    Thanks,
    Srinivas

    Hello Kaj,
    Sorry for the cross-post. Actually, I couldn't decide if I should post it as a concurrency, Java Servlet or as a design question.
    Maxideon,
    Sure, it can be done with single thread also. I said a couple of threads because a single thread may not get sufficient CPU time to handle all timeouts. But otherwise yes, we can try it with single thread also.
    ejp,
    We do not own the client application. We are a web services application and there are many clients who consume our services.
    The standard solution to handle timeouts in the server is by creating an additional thread, which is what we have followed. Now, that design is posing a new problem to us.
    No. A custom TimeoutException.
    I want to use 2 or 3 user threads to timeout the request if needed. That way I do not have to create 1 thread for each request. In the current approach server creates/handles 400 threads for 200 requests. With the alternative solution I can make it to create/handle 202 or 203 threads for 200 requests.
    dubwai,
    Here I am answering to your reply to my same question in the other forum.
    The current design is working just fine for timeouts. But sometimes it is failing to scale to the load we receive. In the current design we have created 1 thread for each request to timeout HTTP thread, which means that for 200 requests we will have 400 threads (200 HTTP threads and 200 user threads) in the server. For some reason, if the load increases to 250 requests or if external systems are slow then more threads accumulate in the server and it fails to accept incoming requests when thread count reaches ~500. That is the problem we are having.
    In the alternative design I proposed that we will create just a couple (maybe 2 or 3) of threads running in the background, instead of 1 thread for each request. These background threads monitor HTTP threads and time out the request if needed by writing and closing the ServletOutputStream. This way the server will have to create/handle only ~200 HTTP threads plus a couple of user threads. For this approach to work I have to close the ServletOutputStream using background threads instead of the HTTP thread, and I want to know what the risk of doing that is.
    To All,
    Here is a rough implementation of user thread. The HTTP thread registers itself in threadMap variable with its starttime and in threadResponse variable with its associated HttpServletResponse. This user thread uses these values to find request timeout (10 seconds) and close response if needed.
    public class TimeoutMonitor implements Runnable {
        // ConcurrentHashMap instead of HashMap: these maps are touched by many threads
        public static Map<Thread, Long> threadMap = new ConcurrentHashMap<Thread, Long>();
        public static Map<Thread, HttpServletResponse> threadResponse =
            new ConcurrentHashMap<Thread, HttpServletResponse>();

        @Override
        public void run() {
            while (true) {
                try {
                    Thread.sleep(1000); // wait() without owning the monitor would throw; sleep() is what was meant
                    Iterator<Thread> threads = threadMap.keySet().iterator();
                    while (threads.hasNext()) {
                        Thread thread = threads.next();
                        Long startTime = threadMap.get(thread);
                        if ((System.currentTimeMillis() - startTime) > 10000) {
                            HttpServletResponse response = threadResponse.get(thread);
                            try {
                                PrintWriter writer = response.getWriter();
                                writer.write("Timeout Exception");
                                writer.flush();
                                writer.close();
                                threads.remove(); // remove via the iterator, not the map, while iterating
                                threadResponse.remove(thread);
                            } catch (IOException e) {
                                e.printStackTrace();
                            }
                        }
                    }
                } catch (InterruptedException e1) {
                    e1.printStackTrace();
                }
            }
        }
    }
    I have test-run this program in Tomcat and it seems to be working fine. Please note that here the response/writer are closed by the user thread and not the HTTP thread. My question is, do you see any problem in doing that?
    Thanks,
    Srinivas

  • Best way to keep a 'history' of cache changes?

    Simple question, really, which I guess many people have run into before, so I'm looking for a bit of 'best practice' as regards Coherence.
    We have a distributed cache which is holding financial data (Portfolio Positions), and we plan to update these using Entry Processors for scalability (as a single incoming Trade could affect multiple Positions, so we want them processed in parallel). So far, so good. (I hope! If you have a better approach, please feel free to add it. :))
    Now, each Position that is modified needs to be 'audited', so we have a 'before' and 'after' image. This needs to be persisted. I have currently created a separate cache - 'PositionHistoryCache' - and set it up so it's flushed to Oracle in a "write behind" manner. This seems to work OK - i.e. updating this 'other' distributed cache from within the Entry Processor works fine. Does this seem sensible as an approach as regards keeping 'history' - i.e. using a separate cache and 'put'ing to it?
    Also, I'm keen not to run into any 'reentrancy' problems in our application. So what's the general rule here, when we are using Entry Processors elsewhere? Is it simply the 'service name' that determines whether the distributed caches are served by different service threads? In other words, as long as the 'history' cache we are trying to talk to is declared with a different 'service-name' to the cache that has the calling Entry Processor we can freely 'put' to it without issue?
    Many thanks if you can help clear up the above design issues.

    Hi Steve,
    yes, the (possibly inherited) service name for the cache scheme determines which cache service a cache belongs to.
    As for best practice, you would probably want to use key affinity and the same cache service for the audit cache, and put the data into the backing map directly. Since we are speaking about inserts of child records here (access to the audit record is demarcated by access to the to-be-audited record, and if you do it from an entry processor then the audit entry is always local, because it is affine to the to-be-audited entry), it should be safe, provided that you only ever insert/update the audit entry in the backing map from entry processors manipulating the parent entry.
    You would still have the same failure cases as with the cache.put approach against a different cache service: if the node crashes after the audit record has been inserted but before the process finishes, you may end up with lost-but-audited updates or duplicate audit records for a single change.
    Note that this is an advanced functionality and you would probably want to consult with your Oracle support representative to ensure that you know the implications this approach brings with itself with each Coherence version you try to use it with.
    An alternative approach would be to move the audit records into the same cache as the audited records and use key affinity to ensure that each audit record resides on the same node as the record it audits, then send entry processors to both the changed key and the audit key together so that both records are updated atomically. This is a much safer approach: it is guaranteed to be atomic as far as the cache is concerned. On the other hand, you need to know the audit entry key in advance and use the key-based invokeAll method (you cannot use the filter-based invokeAll, as that cannot add new entries), and you have additional work if you use filter-based read operations, to filter the audit records out of query results.
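    The key-affinity idea can be illustrated without Coherence itself: if the audit key exposes its parent Position key as its "associated key" (mirroring the role of Coherence's KeyAssociation.getAssociatedKey()), then any partitioning scheme that hashes the associated key places parent and audit entries in the same partition. The classes and names below are an illustrative sketch, not Coherence API:

    ```java
    import java.util.Objects;

    public class AffinityDemo {
        // Audit key: wraps the parent Position id plus a sequence number.
        static final class AuditKey {
            final String positionId;
            final long seq;
            AuditKey(String positionId, long seq) {
                this.positionId = positionId;
                this.seq = seq;
            }
            // The key used for partitioning decisions (stand-in for
            // KeyAssociation.getAssociatedKey() in Coherence).
            String associatedKey() { return positionId; }
            @Override public boolean equals(Object o) {
                if (!(o instanceof AuditKey)) return false;
                AuditKey k = (AuditKey) o;
                return seq == k.seq && positionId.equals(k.positionId);
            }
            @Override public int hashCode() {
                return Objects.hash(positionId, seq);
            }
        }

        static final int PARTITIONS = 17; // arbitrary partition count

        // Partition by the associated key, not the full key, so audit
        // entries land in the same partition as their parent entry.
        static int partitionOf(Object key) {
            Object assoc = (key instanceof AuditKey)
                    ? ((AuditKey) key).associatedKey() : key;
            return Math.floorMod(assoc.hashCode(), PARTITIONS);
        }

        public static void main(String[] args) {
            String position = "POS-42";
            AuditKey audit = new AuditKey(position, 1L);
            // Same partition, so a key-based invokeAll against both keys
            // can update the pair atomically on one node.
            System.out.println(partitionOf(position) == partitionOf(audit));
            // expected: true
        }
    }
    ```

    Co-location holds by construction here, since both keys hash the same String; in Coherence the same guarantee is what makes the two-key invokeAll atomic per partition.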
    Best regards,
    Robert
