Accessing tcp_t data structure

I need to access the tcp_t data structure as implemented in Solaris 10.
How can I access the tcp_t data structure from a C program?

Look at /usr/include/inet/tcp.h - it defines the structure tcp_t.
Add the following line to your source file:
#include <inet/tcp.h>
I'm not sure this is exactly the information you are asking for.
You can probably find good examples of source code via Google.
Thanks,
Nik

Similar Messages

  • Concurrent access to data structure by threads

    Hi,
    I have a list of objects which is read by a single thread, but several writer threads can write into it.
    The reader reads the objects off the list and does some processing.
    The reader thread must wait for a notification from the writer threads that the list has been written into.
    So each writer thread writes into the list and then notifies the reader. Each time the reader receives a notification it starts its processing and then checks whether all the writers are done; if they aren't, it waits again, and if all the writers are done, it does a final processing pass and exits.
    I was able to synchronise the writers, but I am not able to synchronise the reader thread with the writer threads.
    My code is as follows:
    class Data {
        private List<String> resultList = Collections.synchronizedList(new ArrayList<String>());
        private boolean modified = false;

        public List<String> getResultList() {
            return this.resultList;
        }
        public void setResultList(List<String> resultList) {
            this.resultList = resultList;
        }
        public void addElementToResultList(String str) {
            this.resultList.add(str);
        }
        public synchronized boolean IsModified() {
            return this.modified;
        }
        public synchronized void setModified(boolean modified) {
            this.modified = modified;
        }
    }

    class CallerTest {
        Data myData = new Data();
        List<String> resultList = null;
        int counter = 0;

        public synchronized void incrementCounter() {
            this.counter++;
        }
        public synchronized int getCounter() {
            return this.counter;
        }
        public void setFinalResult(List<String> resultList) {
            this.resultList = resultList;
        }
        public void TestMeth() {
            int numWriters = 5;
            ReaderThread rt = new ReaderThread(this, numWriters, myData);
            rt.start();
            for (int i = 0; i < numWriters; i++) {
                WriterThread wt = new WriterThread(this, myData);
                wt.start();
            }
            try {
                rt.join();
            } catch (Exception x) {
            }
        }
    }

    class ReaderThread extends Thread {
        CallerTest caller = null;
        int numWriters = 0;
        Data data = null;

        public ReaderThread() {
        }
        public ReaderThread(CallerTest caller, int numWriters, Data data) {
            this.caller = caller;
            this.data = data;
            this.numWriters = numWriters;
        }
        public void run() {
            try {
                while (!data.IsModified())
                    data.getResultList().wait();
                mergeResults(); // this is some processing
                if (caller.getCounter() < this.numWriters) {
                    data.setModified(false);
                } else {
                    mergeResults();
                    caller.setFinalResult(finalResult);
                }
            } catch (Exception ent) {
                log.error("The exception is in remote result reader ", ent);
            }
        }
    }

    class WriterThread extends Thread {
        private CallerTest caller = null;
        private Data data = null;

        WriterThread() {
        }
        WriterThread(CallerTest caller, Data data) {
            this.caller = caller;
            this.data = data;
        }
        public void run() {
            String str = getStringName();
            writeResult(str);
        }
        public String getStringName() {
            // some processing here
        }
        public synchronized void writeResult(String result) {
            try {
                if (result != null) {
                    data.addElementToResultList(result);
                    caller.incrementCounter();
                    data.setModified(true);
                    data.getResultList().notifyAll();
                }
            } catch (Exception ent) {
                log.error("exception in WriterThread write remote results", ent);
            }
        }
    }
    However, on running the program, I get an exception:
    [ERROR] [Mon Jan 25 12:13:49 2010] [Thread-59| WriterThread] exception in WriterThread write remote results
    java.lang.IllegalMonitorStateException
    at java.lang.Object.notifyAll(Native Method)
    WriterThread.writeResults(WriterThread.java:328)
    WriterThread.run(WriterThread.java:83)
    Please can you tell me how to synchronise the reader and the writers?
    I greatly appreciate your help.
    Thanks,
    Helen
    Edited by: hreese on Jan 25, 2010 11:08 AM

    As far as I can see (not too far, because you didn't use the CODE tags to make your code readable), your code is a bit of a mess.
    You can start by defining which kinds of errors can be thrown and handling the exceptions properly.
    So use at least:
        try {
            // block
        } catch (Exception e) {
            e.printStackTrace();
        }
    Try to find out where the exception is thrown and debug the code.
    Don't place a bunch of code on the forum, because most people won't read it all.
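    For what it's worth, here is a minimal sketch of one way the reader and the writers could be synchronised on a single shared monitor, so that wait() and notifyAll() are always called while holding that monitor. The class ResultBuffer and its method names are made up for illustration; they are not from the original post.
        import java.util.ArrayList;
        import java.util.List;

        // All waiting and notifying happens on the same monitor (the ResultBuffer
        // instance), which is what avoids IllegalMonitorStateException.
        class ResultBuffer {
            private final List<String> results = new ArrayList<String>();
            private int finishedWriters = 0;

            // Called by each writer when it has produced its result.
            public synchronized void addResult(String result) {
                if (result != null) {
                    results.add(result);
                }
                finishedWriters++;
                notifyAll(); // legal: we hold this object's monitor
            }

            // Called by the reader; blocks until every writer has reported in.
            public synchronized List<String> awaitAllWriters(int numWriters) throws InterruptedException {
                while (finishedWriters < numWriters) {
                    wait(); // releases the monitor while waiting
                }
                return new ArrayList<String>(results);
            }
        }
    The reader would call awaitAllWriters(numWriters) and then do its final merge; any per-notification intermediate processing could be done inside the while loop. Because addResult() and awaitAllWriters() are synchronized on the same ResultBuffer instance, the wait/notify calls always own the right monitor.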

  • Using a single data structure in a desktop application

    Hello,
    I am programming an application that needs to constantly access a data structure, for instance to add / edit / update / search data. Several graphical user interfaces need to modify this data structure. I was wondering about easy ways to use the data structure throughout the whole application. One solution I found was to use the singleton pattern for my data structure, though a lot of people have recommended that I avoid that pattern. What are better ways of accessing that single data structure from all of those GUIs?
    Thank you,
    Alfredo

    Not that there is anything wrong with the Singleton pattern, but I don't see how it would help you in this case.
    Just create the data structure once and let every GUI that needs to use it have a reference to it.
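    A minimal sketch of that suggestion, assuming a Swing application; the class names DataStore and AccountPanel are placeholders, not from the original post.
        import java.util.HashMap;
        import java.util.Map;
        import javax.swing.JPanel;

        // The shared model is created once at startup and handed to each GUI component.
        class DataStore {
            private final Map<String, String> records = new HashMap<String, String>();

            public synchronized void put(String key, String value) { records.put(key, value); }
            public synchronized String get(String key)             { return records.get(key); }
        }

        class AccountPanel extends JPanel {
            private final DataStore store;

            AccountPanel(DataStore store) { // the dependency is passed in; no global singleton needed
                this.store = store;
            }
        }

        // At application startup (illustrative):
        //     DataStore store = new DataStore();
        //     AccountPanel accounts = new AccountPanel(store);
        //     // ...pass the same 'store' reference to every other frame or panel that needs it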

  • Storing a lot of data in an indexed data structure for quick access.

    I'm designing an app which will need to store a large amount of data in memory. Records will be flowing into the app via a socket. The app will receive about 30 records/second, which is about 108,000 records/hour and about 600,000 records/day. I need to store the records in an indexed data structure so that I can access them quickly. For example, at 9:00am I will need to access records received at 8:30am, 8:35am, 8:40am, etc. This program will be multithreaded, and as I understand it, Vector is the only data structure that is thread safe. Is Vector my only choice? How do I access objects in a Vector using an index? Is there something better that I can use?

    Is Vector my only choice?
    If you want to access the objects by key then you should use something like a HashMap. But if you want to access them by an array index then an ArrayList would be more appropriate.
    as I understand Vector is the only data structure that is thread safe
    You can get a thread-safe view of any Collection by using the Collections.synchronizedCollection method (or synchronizedList, synchronizedMap, etc.).
    How do I access objects in a Vector using an index?
    I'd suggest you read the API documentation, and probably the Sun tutorial on Collections at http://java.sun.com/docs/books/tutorial/collections/index.html
    600,000 records/day.
    Unless you plan to dump old data after a short period of time, you may want to consider using a database to avoid running out of memory.
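    If the records really do need to be looked up by arrival time, one hedged sketch is to key a sorted concurrent map by a time bucket; this assumes a later JDK with java.util.concurrent (which post-dates this thread), and the class and method names are illustrative only.
        import java.util.List;
        import java.util.concurrent.ConcurrentSkipListMap;
        import java.util.concurrent.CopyOnWriteArrayList;

        // Records are bucketed by the second they arrived, so "everything received
        // between 8:30 and 8:40" becomes a cheap, thread-safe range query.
        class TimeIndexedStore {
            private final ConcurrentSkipListMap<Long, List<String>> byTime =
                    new ConcurrentSkipListMap<Long, List<String>>();

            public void add(long receivedMillis, String record) {
                Long bucket = Long.valueOf(receivedMillis / 1000L);
                List<String> list = byTime.get(bucket);
                if (list == null) {
                    List<String> fresh = new CopyOnWriteArrayList<String>();
                    list = byTime.putIfAbsent(bucket, fresh);
                    if (list == null) {
                        list = fresh;
                    }
                }
                list.add(record);
            }

            // all buckets of records received in the half-open interval [fromMillis, toMillis)
            public Iterable<List<String>> range(long fromMillis, long toMillis) {
                return byTime.subMap(Long.valueOf(fromMillis / 1000L),
                                     Long.valueOf(toMillis / 1000L)).values();
            }
        }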

  • Concurrent access to changing data structure

    I would like to design a data structure that would allow multiple read only threads to access it but periodically allow the structure to be modified or completely refreshed by one other thread. I would prefer to not have to synchronize access to the structure by the read only threads just to allow one thread to modify it once in a while. Can the object be locked and the other threads blocked just when the one thread is modifying the structure until it completes?
    Is there a common or recommended design for this situation?

    So you want it to block while the writer is writing, but not otherwise, correct?
    Check out Doug Lea's concurrency package. I've never used it, but it seems quite popular and I gather it's got a lot of handy tools. It might have what you're looking for. Google for it. It (or something based on it) might even be included in 1.5/5.0, IIRC.
    Barring that, I think that if your reads are quick (i.e., just retrieving a single value) and you don't have a ton of threads all doing a lot of very, very frequent reads, then you can just synchronize all access to the structure's state. Uncontended locks are quick to obtain and release, so unless your reads take a relatively long time or they're very densely packed in time, you won't get many collisions.
    If you do need to implement something like this, then what I'm thinking of is to have a flag that indicates whether there's a read or write going on. If there's a read going on, other reads can go on, but no writes. If there's a write going on, no other writes or reads can occur. Access to the flag would have to be synchronized, readers/writers would have to wait/notify each other, so you might not end up saving that much performance-wise anyway. I'm not even sure if there's a good way to make it work--it might just end up being an overcomplicated, non-working double-checked lock.
    Start with just syncing all access to the structure's state--both read and write--and then run some tests to see if your performance requirements are met.
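    For reference, the reader/writer behaviour described above is what java.util.concurrent.locks.ReentrantReadWriteLock (standard since Java 5) provides out of the box. A minimal sketch; SharedTable and its methods are placeholder names.
        import java.util.HashMap;
        import java.util.Map;
        import java.util.concurrent.locks.ReadWriteLock;
        import java.util.concurrent.locks.ReentrantReadWriteLock;

        // Many readers may hold the read lock at once; the writer takes the write
        // lock exclusively, blocking readers only while the refresh is in progress.
        class SharedTable {
            private final ReadWriteLock lock = new ReentrantReadWriteLock();
            private Map<String, String> table = new HashMap<String, String>();

            public String lookup(String key) {
                lock.readLock().lock();
                try {
                    return table.get(key);
                } finally {
                    lock.readLock().unlock();
                }
            }

            public void refresh(Map<String, String> replacement) {
                lock.writeLock().lock();
                try {
                    table = new HashMap<String, String>(replacement);
                } finally {
                    lock.writeLock().unlock();
                }
            }
        }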

  • How to access data structures in C dll from java thru JNI?

    We have been given an API (a collection of C functions) from a vendor.
    The SDK from the vendor consists of:
    Libpga.DLL, Libpga.h and Libpga.lib, along with a sample program Receiver.h (I don't know whether it is written in C or C++; I guess .c stands for C files?).
    Considering that I don't know C or C++ (except that I can understand what that program is doing) and I have experience in VB6 and Java, in order to build an interface based on this API I have two options left: use these DLLs either from VB or from Java.
    As far as I know, calling this DLL from VB requires all the data structures and methods to be declared in VB; I guess that is not the case with Java (? I'm not sure).
    I experimented with calling these functions from Java through JNI, and I succeeded by writing a wrapper DLL. My question is whether I have to declare all the constants and data structures defined in the libpga.h file in Java in order to use them in my Java program?
    Any suggestion would be greatly appreciated,
    Vini

    1. There are generators around that claim to generate suitable wrappers, given a DLL as input. I suggest you search Google. Try JACE, jni, wrapper, generator, ... Also, search back through this forum, where suggestions have been made.
    2. In general, you will need to supply wrappers, and if you want to use data from the "C side" in Java, then you will need Java objects that hold that data.
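    The usual JNI pattern, sketched below, is to declare in Java only the constants and struct fields you actually use, and let the wrapper DLL copy them across. PgaStatus, its fields, and queryStatus are hypothetical names for illustration; the real ones would come from Libpga.h.
        // Hypothetical Java-side mirror of one C struct from Libpga.h; only the
        // fields needed on the Java side have to be redeclared here.
        class PgaStatus {
            int channel;   // assumed field
            double level;  // assumed field
        }

        class Libpga {
            static {
                System.loadLibrary("libpga_wrapper"); // your own JNI wrapper DLL, not the vendor DLL
            }

            // Implemented in the wrapper DLL: it calls the vendor C function and then
            // fills a PgaStatus object via the JNI Set<Type>Field functions.
            static native PgaStatus queryStatus(int channel);
        }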

  • Can we access report date wise?

    Hi all,
    can we access a report date-wise? If yes, then please guide me.
    I made two text items in a form, one for a start date and a second one for an end date. I enter dates in these two text items like this:
    start date: 01/08/2009, end date: 08/08/2009, and after pressing the button (WHEN-BUTTON-PRESSED) it should give me the report for the given dates. Is that possible?
    Please help me and guide me more.
    Thanks in advance,
    Sarah

    Hi sir,
    I got an error in the following code:
    SET_REPORT_OBJECT_PROPERTY(rep, REPORT_OTHER, 'P_START="' || TO_CHAR(:BLOCK.START_DATE, 'DD.MM.YYYY') || '" P_END="' || TO_CHAR(:BLOCK.END_DATE, 'DD.MM.YYYY') || '"');
    Here is the error: "too many declarations of 'TO_CHAR' match this call"
    My table name is date1 and the table structure is:
    name
    address
    start_date
    end_date
    I used this WHERE clause in the form's WHERE clause: WHERE COLUMN BETWEEN TO_DATE(:P_START, 'DD.MM.YYYY') AND TO_DATE(:P_END, 'DD.MM.YYYY')
    I am using the code for displaying the report that you posted yesterday.
    Sarah

  • What's the easiest way to move app data and data structures to a server?

    Hi guys,
    I've been developing my app locally with Apex 4.2 and Oracle 11g XE on Windows 7. It's getting close to the time to move the app to an Oracle Apex server. I imagine Export/Import is the way to move the app. But what about the app tables and data (those tables/data like "customer" and "account" created specifically for the app)? I've been using a data modeling tool, so I can run a DDL script to create the data structures on the server. What is the easiest way to move the app data to the server? Is there a way to move both structures and data in one process?
    Thanks,
    Kim

    There's probably another way to get here, but in SQL Developer, on the tree navigation, expand the objects down to your table, right click, then click EXPORT. There you will see all the options. This is a tedious process and it sucks IMO, but yeah, it works. It sucks mostly because 1) it's one table at a time, and 2) if your data model is robust and has constraints, sequences and triggers, then you'll have to disable them all for the insert, and hope that you can re-enable the constraints, etc. without a glitch (good luck, unless you have only a handful of tables).
    I prefer using the Oracle command-line EXP to export an entire schema, then on the target server I use IMP to import the schema. That way, it's a near-exact copy. This makes life messy if you develop more than one application in a single schema, and I've felt that pain -- however -- it's a whole lot easier to drop tables and other objects than it is to create them! (Thus, even if EXP/IMP moved more than you wanted to "move", just blow away what you don't want on the target after the fact.)
    You could use Oracle's Data Pump method too.
    Alternatively, IF you have access to both servers from your SQL Developer instance (or if you can already tnsping them both from the command line, you can use SQL*Plus), you can run a script that identifies your Apex apps' objects (usually by a prefix on the object names, like EBA_PROJ_%, etc.) and does all the manual work for you. I've created a script that does exactly this so that I can move data from dev to prod servers over a dblink. It's tricky because of the order in which things must be executed to disable constraints and then re-enable them, and of course trickier if you don't consistently prefix ALL of your "application objects" (tables, views, triggers, sequences, functions, procs, indexes, etc.).

  • Is timesten still using T-tree as data structure?

    I just came across this paper - http://www.memdb.com/paper.pdf . The researcher runs some experiments showing that a concurrent B-tree algorithm is actually faster than T-tree operations. What do you think about this paper? Do you think he is using an inefficient algorithm to access the T-tree? Or does TimesTen already know the limitations of T-trees and has it changed the internal data structure?

    Yes, we are aware of the comparisons between T-Trees, concurrent B-trees etc. At the moment TimesTen still uses T-trees but this may change in the future :-)
    Chris

  • Clusters as data structures

    I am looking for the best and simplest way to create and manage data structures in Labview.
    Most often I use clusters as data structures, however the thing I don't like about this approach is that when I pass the cluster to a subroutine, the subroutine needs a local copy of the cluster.
    If I change the cluster later (say I add a new data member), then I need to go through all the subroutines with local copies of the cluster and make the edit of the new member (delete/save/relink to sub-vi, etc).
    On a few occasions in the past, I've tried NI GOOP, but I find the extra overhead associated with this approach cumbersome, I don't want to have to write 'get' and 'set' methods for every integer and string, I like being able to access the cluster/object data via the "unbundle by name" feature.
    Is there a simple or clever way of having a single global reference to a data object (say a cluster) that is shared by a group of subroutines and which can then be used as a template as an input or output parameter? I might guess the answer is no because Labview is interpreted and so the data object has to be passed as a handle, which I guess is how GOOP works, and I have the choice of putting in the extra energy up front (using GOOP) or later (using clusters if I have to edit the data structure). Would it be advisable to just use a data cluster as a global variable?
    I'm curious how other programmers handle this. Is GOOP pretty widely used? Is it the best approach for creating maintainable LV software ?
    Alex

    Alex,
    Encapsulation of data is critical to maintaining a large program. You need global, but restricted, access to your data structures. You need a method that guarantees serial, atomic access so that your exposure to race conditions is minimized. Since LabVIEW is inherently multi-threaded, it is very easy to shoot yourself in the foot. I can feel your pain when you mention writing all those get and set VIs. However, I can tell you that it is far less painful than trying to debug a race condition. Making a LabVIEW object also forces you to think through your program structure ahead of time - not something we LabVIEW programmers are accustomed to doing, but very necessary for large-program success. I have used three methods of data encapsulation.
    NI GOOP - You can get NI GOOP from the tutorial Graphical Object Oriented Programming (GOOP). It uses a code interface node to store the strict typedef data cluster. The wizard eases maintenance. Unfortunately, the code interface node forces you through the UI thread any time you access data, which dramatically slows performance (about an order of magnitude worse than the next couple of methods).
    Functional Globals - These are also called LV2 style globals or shift register globals. The zip file attached includes an NI-Week presentation on the basics of how to use this approach with an amusing example. The commercial Endevo GOOP toolkit now uses this method instead of the code interface node method.
    Single-Element Queues - The data is stored in a single element queue. You create the database by creating the queue and stuffing it with your data. A get function is implemented by popping the data from the queue, doing an unbundle by name, then pushing the data back into the queue. A set is done by popping the data from the queue, doing a bundle by name, then pushing the data back into the queue. You destroy the data by destroying the queue with a force destroy. By always pulling the element from the queue before doing any operation, you force any other caller to wait for the queue to have an element before executing. This serializes access to your database. I have just started using this approach and do not have a good example or lots of experience with it, but can post more info if you need it. Let me know.
    This account is no longer active. Contact ShadesOfGray for current posts and information.
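    For readers who don't use LabVIEW, the single-element-queue idea above maps quite directly onto a bounded blocking queue of capacity one; here is a rough Java sketch of the same pattern (SingleElementStore is an illustrative name).
        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.BlockingQueue;

        // The one queue element *is* the shared data: taking it before every get or
        // set serialises access, and putting it back releases the implicit "lock".
        class SingleElementStore<T> {
            private final BlockingQueue<T> queue = new ArrayBlockingQueue<T>(1);

            SingleElementStore(T initial) throws InterruptedException {
                queue.put(initial);
            }

            T get() throws InterruptedException {
                T value = queue.take();   // blocks every other caller
                queue.put(value);         // push the data back
                return value;
            }

            void set(T newValue) throws InterruptedException {
                queue.take();             // remove the old value, blocking other callers
                queue.put(newValue);
            }
        }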

  • Need help on efficient searches using hashes on large #s of data structures

    I have a massive amount of data and I need a way to access it using generated indexes instead of traditional searching such as binary chop as I require a fast response time.
    I have 100,000 Array data structures each containing roughly 10 to 60 elements. The elements of the Arrays do not store any raw data they just hold references to objects. I have about 10,000 unique objects which are referenced to by the arrays.
    What I need to do is supply a number of objects as the query and the system should return all the arrays which contain two or more different objects from the query.
    So for example>
    Given query objects> "DER", "ERE", "YPS"
    and arrays> [DER, SFR, PPR]
         [PER, ERE, SWE, YPS]
         [ERE, PPD, DER, YPS, SWE]     
         [PRD, LDF, WSA, MMD]
    It should return arrays 2 and 3.
    I had a look at HashMaps but I'm not entirely sure if that's what I need.
    Would I need to hash each array and then hash the query objects and look for overlaps in the two hashes, or something similar?
    Thanks In Advance

    For a single query, it's more efficient to do less work, so don't create intermediate objects, don't create an index of objects to sets of possibles, and stop counting matches when you find enough:
        final HashSet<String> querySet = new HashSet<String>(Arrays.asList(query));
        final Set<String> results = new LinkedHashSet<String>();
        // find matches using a scan of all arrays
        for (String[] array : data) {
            int matchCount = 0;
            for (String elt : array) {
                if (querySet.contains(elt)) {
                    ++matchCount;
                    if (matchCount >= 2) {
                        results.add(Arrays.asList(array).toString());
                        break;
                    }
                }
            }
        }
    Using random data, 100,000 arrays of selections from a list of 10,000 trigrams, scanning all the arrays takes about as long as the set code, but doesn't require generation of the map, which can take up a lot of memory if you pre-create the sets (I didn't complete a run doing that; although it should be faster to query, it used too much memory and my computer started swapping, so I gave up after a few minutes).
    Running queries of size 2 to 25, times in ms:
    linear scan of all arrays in data set:
    elapsed: 2196
    creating index of trigram to arrays containing trigram:
    elapsed: 4635
    query using set union and index lookup:
    elapsed: 2195
    query using direct scan and index lookup:
    elapsed: 164
    So if I was doing one query per dataset, just use a simple scan as above. If you're doing lots on the same dataset, then create a map<String, List<String[]>> to index the arrays, and the lookup will save quite a bit of time.
    Creating the index:
        dataSets = new HashMap<String, List<String[]>>();
        for (String trigram : trigrams)
            dataSets.put(trigram, new ArrayList<String[]>());
        for (String[] array : data)
            for (String trigram : array)
                dataSets.get(trigram).add(array);
    Finding matches using a scan of only those arrays that contain at least one element of the query:
        final HashSet<String> querySet = new HashSet<String>(Arrays.asList(query));
        final Set<String> results = new LinkedHashSet<String>();
        for (String trigram : query) {
            for (String[] array : dataSets.get(trigram)) {
                for (String elt : array) {
                    if (elt != trigram && querySet.contains(elt)) {
                        results.add(Arrays.asList(array).toString());
                        break;
                    }
                }
            }
        }

  • Data Structures and Algorithms in java book

    Hi guys,
    I want to know a good book for Data Structures and Algorithms in Java. I am good at core Java but a beginner at data structures in Java; I am a little weak on data structures concepts.
    Following are the books I have found on the net. Could you help me choose the best out of them?
    1. Data Structures and Algorithms in Java - Mitchell Waite
    2. Data Structures in Java - Sandra Anderson
    3. Fundamentals of OOP and Data Structures in Java - Richard Weiner & Lewis J. Pinson
    4. Object Oriented Data Structures Using Java - Nell Dale, Daniel T. Joyce, Chip Weems

    lieni wrote:
    A good data structures book doesn't have to be language-specific. Thx DrLazlo, that's what I was saying.
    Yes.
    The OP wrote:
    I have access to these books and don't know which one to start with.
    What I meant is that you shouldn't narrow your search by insisting that the book you choose have "Java" in the title.

  • Implementing a Graph Data Structure

    First time posting here, I was introduced to LabVIEW while participating in FIRST.
    I was reading about graph data structures and wanted to see if I could use them in LabVIEW, as they looked like a good way to relate information. Unfortunately, it appears that the straightforward way I attempted will not work. I tried creating a typedef which consisted of a cluster of two elements, an array of linked nodes and an array of edge weights. Alas, I found you can't have recursive data types, as the array of linked nodes in the typedef needed to be of the same type as the typedef itself. After a bit of searching I know why this is, but I was wondering if there is a way to get around it. From my research, it seems like using a root class and a child class is one possible, but advanced, way of doing it.
    I am currently thinking of just representing the linked nodes of a node as an array of index numbers which you can use to look up the referenced nodes in the graph array. This means that the graph array cannot be sorted or otherwise modified in a way that would invalidate the index numbers; you can only add objects onto the end. Is this thinking right, or is there a different way to go about this?
    Solved!
    Go to Solution.

    Not an easy problem, as recursion is not native to LabVIEW programming (much less so than in any other language I have used except assembler).
    But the solution to your index-number problem is to use 'keys'. Each node should have a unique key (a number), so you can address it (search for the key) even if the array is sorted in some way.
    Again, this is not native to LabVIEW. It means that you need to program the logic other languages use for this on your own (pseudo-pointers, handles, keys, hashes).
    From a practical side: I would not implement the graph in LabVIEW itself but access a graph from some other source (a database?) via an interface (ActiveX, .NET). To learn a bit more about how to do it, try playing around with the tree control (it is a limited graph).
    Felix
    www.aescusoft.de
    My latest community nugget on producer/consumer design
    My current blog: A journey through uml
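    For what it's worth, the key-based approach Felix describes looks roughly like the following when written in a text language (Java here, since LabVIEW diagrams can't be posted as code); the Graph and Node names are illustrative only.
        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        // Nodes are addressed by a stable key rather than by array index, so
        // reordering or deleting entries never invalidates the references.
        class Graph {
            static class Node {
                final int key;
                final List<Integer> neighbourKeys = new ArrayList<Integer>();
                final List<Double> edgeWeights = new ArrayList<Double>();
                Node(int key) { this.key = key; }
            }

            private final Map<Integer, Node> nodes = new HashMap<Integer, Node>();
            private int nextKey = 0;

            int addNode() {
                int key = nextKey++;
                nodes.put(key, new Node(key));
                return key;
            }

            void addEdge(int fromKey, int toKey, double weight) {
                Node from = nodes.get(fromKey);
                from.neighbourKeys.add(toKey);
                from.edgeWeights.add(weight);
            }

            Node lookup(int key) {
                return nodes.get(key);
            }
        }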

  • Which data structure?

    Hi there...
    Could you let me know what is the most efficient data structure to use (in terms of storage and access) if I have to store around 1 million <key,value> pairs?
    Thanx :)
    Tariq

    Yes, concurrency is when two or more threads act upon the same data or data structure - concurrent access, etc. Your question has nothing to do with concurrency, but rather with Collections. But yes: use a HashMap, or a Hashtable if more than one thread will be accessing the same data. A database might come in handy as well, such as MySQL or JavaDB (Apache Derby).
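    A minimal sketch for the HashMap route: with roughly a million pairs, pre-sizing the map avoids repeated rehashing while it is loaded (the capacity figure assumes the default 0.75 load factor).
        import java.util.HashMap;
        import java.util.Map;

        public class PairStore {
            public static void main(String[] args) {
                // ~1,000,000 entries at the default 0.75 load factor: an initial
                // capacity of about 1,400,000 means the map never has to resize.
                Map<String, String> pairs = new HashMap<String, String>(1400000);
                pairs.put("key-1", "value-1");
                System.out.println(pairs.get("key-1"));
            }
        }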

  • Updating a hierarchical data structure from an entry processor

    I have a tree-like data structure that I am attempting to update from an AbstractProcessor.
    Imagine that one value is a collection of child value keys, and I want to add a new child node in the tree. This requires updating the parent node (which contains the list of child nodes), and adding the child value which is a separate entry.
    I would rather not combine all bits of data into one value (which could make for a large serialized object), as sometimes I prefer to access (read-only) the child values directly. The child and the parent values live in the same partition in the partitioned cache, though, so get access should be local.
    However, I am attempting to call put() on the same cache to add a child value which is apparently disallowed. It makes sense that a blocking call is involved in this operation, as it needs to push out this data to the cluster member that has the backup value for the same operation, but is there a general problem with performing any kind of re-entrant work on Coherence caches from an entry processor for any value that is not the value you are processing? I get the assertion below.
    I am fine with the context blocking (preventing reads or writes on the parent node value) until the child completes, presuming that I handle deadlock prevention myself due to the order in which values are accessed.
    Is there any way to do this, either with entry processors or not? My code previously used lock, get and put to operate on the tree (which worked), but I am trying to convert this code to use entry processors to be more efficient.
    2008-12-05 16:05:34.450 (ERROR)[Coherence/Logger@9219882 3.4/405]: Assertion failed: poll() is a blocking call and cannot be called on the Service thread
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.poll(Grid.CDB:4)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.poll(Grid.CDB:11)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.put(DistributedCache.CDB:30)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.put(DistributedCache.CDB:1)
         at com.tangosol.util.ConverterCollections$ConverterCacheMap.put(ConverterCollections.java:2433)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$ViewMap.put(DistributedCache.CDB:1)
         at com.tangosol.coherence.component.util.SafeNamedCache.put(SafeNamedCache.CDB:1)
         at com.tangosol.net.cache.CachingMap.put(CachingMap.java:928)
         at com.tangosol.net.cache.CachingMap.put(CachingMap.java:887)
         at com.tangosol.net.cache.NearCache.put(NearCache.java:286)
         at com.conduit.server.properties.CLDistributedPropertiesManager$UpdatePropertiesProcessor.process(CLDistributedPropertiesManager.java:249)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$Storage.invoke(DistributedCache.CDB:20)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onInvokeRequest(DistributedCache.CDB:50)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$InvokeRequest.run(DistributedCache.CDB:1)
         at com.tangosol.coherence.component.net.message.requestMessage.DistributedCacheKeyRequest.onReceived(DistributedCacheKeyRequest.CDB:12)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:9)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:130)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onNotify(DistributedCache.CDB:3)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:37)
         at java.lang.Thread.run(Thread.java:637)

    Hi,
    reentrant calls to the same Coherence service are very much recommended against.
    For more about it, please look at the following Wiki page:
    http://wiki.tangosol.com/display/COH34UG/Constraints+on+Re-entrant+Calls
    Best regards,
    Robert
