Queue - 1 Producer and 2 Consumers

I'm looking at the Producer-Consumer structure, but I'd like the acquisition data posted to the queue to be dequeued by 2 consumers (like copies of the data).
In the first consumer, the data would be analyzed and shown in graphs. In the second consumer, the data would be accumulated until 10 positions (queue status = 10), and after that the data would go to a file on disk.
Would this be the best solution?
Could I do it with 2 queues, or is there another way?
Thanks for any clue...
Leonardo de S. Cavadas
Maintenance Engineer and Inspection - Bureau Veritas do Brasil
Engineer Metallurgist with emphasis in Advanced Materials
Technologist in Computer Science
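For readers outside LabVIEW, the pattern being asked about (one producer feeding two independent consumers, each with its own copy of every sample) can be sketched in Python. The batch size of 10 and the two consumer roles come from the question; the "analysis" and the "file on disk" are stand-ins:

```python
import queue
import threading

BATCH = 10          # "queue status = 10" from the question
SENTINEL = None     # marks end of acquisition

display_q = queue.Queue()   # consumer 1: analysis / graphing
disk_q = queue.Queue()      # consumer 2: batch-to-disk

analyzed = []       # stands in for the graph display
written = []        # stands in for the file on disk

def producer(samples):
    for s in samples:
        display_q.put(s)    # each consumer gets its own copy of the sample
        disk_q.put(s)
    display_q.put(SENTINEL)
    disk_q.put(SENTINEL)

def display_consumer():
    while True:
        s = display_q.get()
        if s is SENTINEL:
            break
        analyzed.append(s * 2)   # placeholder "analysis"

def disk_consumer():
    batch = []
    while True:
        s = disk_q.get()
        if s is SENTINEL:
            break
        batch.append(s)
        if len(batch) == BATCH:  # flush every 10 elements
            written.extend(batch)
            batch = []
    written.extend(batch)        # flush the remainder on shutdown

threads = [threading.Thread(target=display_consumer),
           threading.Thread(target=disk_consumer)]
for t in threads:
    t.start()
producer(range(25))
for t in threads:
    t.join()
```

Two queues are the natural answer here: a single dequeue removes the element for everyone, so "copies for 2 consumers" means the producer enqueues twice.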

Hello Jaime Rodriguez and Laemmermann,
Thanks for the 2 proposals, but I want all the data to be stored on disk, and my display/analysis consumer can't limit the process. I'm thinking of using a discard routine to keep the display loop very fast so the queue doesn't fill up (example: IF the queue is full, discard the next display analysis of the data (which includes several routines) until queue status = 50%; it is a simple analysis for display only. I will have another application for the complete analysis of all the data). The queue's limit is the computer's memory.
My proposal is in the attached VI (LabVIEW 7.1).
Waiting for your good clues. Thanks to all.
Leonardo de S. Cavadas
Maintenance Engineer and Inspection - Bureau Veritas do Brasil
Engineer Metallurgist with emphasis in Advanced Materials
Technologist in Computer Science
Attachments:
queue_exemplo.vi (68 KB)
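The discard routine described above (skip the display-only analysis while the queue is backed up, until it drains to 50%) can be sketched like this in Python; the capacity of 100 and the backlog of 80 samples are hypothetical numbers for illustration:

```python
import queue

CAPACITY = 100
half = CAPACITY // 2
q = queue.Queue(maxsize=CAPACITY)

# Simulate a backlog of 80 pending samples.
for s in range(80):
    q.put(s)

processed_fully = 0   # samples that got the full display analysis
skipped = 0           # samples whose display analysis was discarded

while not q.empty():
    sample = q.get()
    if q.qsize() > half:
        skipped += 1          # queue too full: discard the display analysis
    else:
        processed_fully += 1  # backlog drained below 50%: analyze normally
```

Note that every sample is still dequeued, so a disk path fed from a second queue loses nothing; only the optional display work is dropped.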

Similar Messages

  • Queue: one producer vs multiple consumers

    Hi,
    This may be a simple question. In my application, I have a master/multiple-slaves queue architecture, where the master enqueues messages. Each message is addressed to only one slave. Since all the slaves are waiting on queue elements, I expected them all to receive the message and all dismiss it except one, the addressee. Apparently it does not work like that: once the message is dequeued by one slave, it is no longer available in the queue, and the others are not notified. Is that right?
    I could use a separate queue for each slave, but when it comes to 100 slaves, I find it a bit heavy to manage a hundred queues. I tried instead to build a router-like architecture: everyone listens, and for each message they all verify who the addressee is. Only the addressee processes the message, and the others simply dismiss it. Is there a way for me to implement that?
    Joe
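One way to keep the "hundred queues" option manageable is to hold the per-slave queues in a dictionary keyed by slave ID, so routing (and broadcast) is a single lookup. A hypothetical Python sketch, with the broadcast address as an assumption:

```python
import queue

N_SLAVES = 100
BROADCAST = "*"   # hypothetical broadcast address

# One queue per slave; the dictionary is the only thing to manage.
slaves = {sid: queue.Queue() for sid in range(N_SLAVES)}

def send(dest, message):
    """Route a message to one slave, or to everyone via BROADCAST."""
    targets = slaves.values() if dest == BROADCAST else (slaves[dest],)
    for q in targets:
        q.put(message)

send(42, "calibrate")
send(BROADCAST, "shutdown")
```

Each slave then blocks only on its own queue, so no slave ever dequeues a message meant for another, and the race the question describes disappears.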

    It actually sounds like a fairly dicey problem, depending on your need for response times and total messaging bandwidth.  Many of the issues seem to revolve around multiple producers and consumers wanting simultaneous read and write access to a continuously changing entity.  At least that's how it sounded to me, that all the consumers are independent and should be allowed to consume their own messages just as soon as they're able to notice them.
    For example:
    Consumer #3 performs a preview and discovers that there's a message waiting at position #9.  In order to pull it out of the queue, it must first dequeue the first 8 messages in line.  Then after dequeueing its own message, it must re-enqueue those 1st 8 messages, this time using "Enqueue at Opposite End" to restore the original sequence.
    However, before it can finish re-enqueueing those 8, Consumer #11 performs a preview and discovers that there's a message at (apparent) position 4.  This really *should* be position 12, but how can Consumer #11 know what #3 is in the middle of?
    Meanwhile, one of Consumer #3's 8 messages is addressed to Consumer #11, and it's actually quite important that Consumer #11 gets this one first, instead of the one it just found for itself.
    And so on...
    Now, some (most?  all?)  of this kind of conflict can be resolved through the use of semaphores, but the penalty is that whenever any Consumer is accessing the shared queue, 100's of other Consumers are stuck waiting their turn.  And so all the independence and parallelism you were after in the first place with the producer / consumer pattern starts slipping away...
    I haven't personally built an app around a massively parallel messaging architecture.  I hope someone who has gets involved here because I'd like to learn how it's done.  I'd wager there are a few different elegant solutions, but all I can manage to think of are the pitfalls.
    -Kevin P.
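The conflict Kevin describes comes from "preview, dequeue ahead, re-enqueue" not being atomic. A sketch of the semaphore-based fix he mentions, in Python: a lock plays the semaphore's role, and the shared queue is a deque so a message can be removed from the middle in one locked step (names and message format are hypothetical):

```python
import threading
from collections import deque

shared = deque()              # the one queue every consumer watches
lock = threading.Lock()       # plays the role of the LabVIEW semaphore

def take_for(consumer_id):
    """Atomically remove and return the first message addressed to consumer_id."""
    with lock:   # no other consumer can preview or reshuffle meanwhile
        for i, (dest, payload) in enumerate(shared):
            if dest == consumer_id:
                del shared[i]           # remove from the middle in one step
                return payload
    return None  # nothing addressed to us yet

shared.extend([(3, "msg-a"), (11, "msg-b"), (3, "msg-c")])
```

The cost Kevin points out remains, though: while one consumer holds the lock, the other 99 wait their turn.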

  • Will the queue leak memory when used in producer/consumer mode in DAQ to transfer different-sized arrays?

    In the data acquisition, I use one loop to poll data from the hardware and another loop to receive the data sent from the polling loop via a queue.
    But the size of the transferred data array may not be the same every time, so the system may allocate a different array size and recycle it very frequently.
    Will it cause a memory leak? Or will it slow down the performance, since the array size is not fixed, so a new-sized array needs to be created every time?
    Any suggestion or better method?

    As I understand your description, your DAQ loop acquires data with the setting '-1' for samples to read at the DAQmx Read function. This results in the different array sizes.
    Passing those arrays directly to a queue is valid; it does not have a significant drawback in performance (at least as far as I know), and it definitely does not leak memory.
    So the question is more or less:
    Is it valid that your consumer receives different array sizes for analysis? How does your consumer handle those arrays? 
    hope this helps,
    Norbert 
    CEO: What exactly is stopping us from doing this?
    Expert: Geometry
    Marketing Manager: Just ignore it.
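The "different array size every iteration" situation is harmless in the same way in most queue implementations, because the queue stores a reference or handle per element rather than a fixed-size slot. A hypothetical Python sketch with four deliberately different chunk sizes:

```python
import queue

q = queue.Queue()
chunk_sizes = [3, 7, 1, 12]   # stands in for '-1 samples to read' variability

# Producer: each enqueue hands over one whole, differently-sized chunk.
for n in chunk_sizes:
    q.put([0.0] * n)

# Consumer: chunk size is irrelevant; just concatenate what arrives.
acquired = []
while not q.empty():
    acquired.extend(q.get())
```

No resizing of queue storage happens as chunk sizes change; each element is simply a different object.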

  • Producer to N Consumers notification

    Hi,
    I have the following situation (see also attached vi, made with LV2009SP1):
    1) one producer that is used as a "commands generator" for the consumer threads
    2) three consumer threads, each of which performs a particular task (let's say each checks something, with a different execution time)
    How can the producer have the replies (in term of error cluster) from the three parallel threads?
    Once I enqueue the command in the queues, each thread can work in parallel, but how can it report back?
    Is the producer-consumer a pattern with intrinsic "mono-directional" communication from producer to consumer(s)?
    Maybe one solution would be to avoid "producer-consumer" and simply put three subVIs in parallel, then collect their error outputs (with "Find First Error"). But this solution cannot update the LED controls that I want in the "main" VI.
    Any advice would be appreciated.
    thanks
    Attachments:
    Producer-MultiConsumer-Test.vi ‏27 KB

    Here is a rough idea.  Use one queue for the producer-to-consumers direction and one queue for the consumers-to-producer direction.  The queue element can be a cluster with one element being the consumer number (1, 2, or 3) and another element being the command.  The producer queues up the command with the proper consumer number (use a different number, like 255, to broadcast to all consumers).  Each consumer previews the queue; when it sees its ID, it dequeues that element.  Extra coding is needed to handle the broadcast case: some type of notifier to let the last consumer know to dequeue the broadcast command.
    In the other direction, same principle.  Each consumer puts its ID and response in a cluster to be queued.  The producer dequeues and acts appropriately.
    Upon stop, the loops will have to be written so that they can be interrupted at any time, so that you don't wait on the slowest loop to complete before shutting down: While Loops with a check for the shutdown command, or For Loops with the break condition enabled.  For this, maybe a separate notifier could be used.
    A wild idea with much coding, but I believe it could be made to work for your purpose.  Is it better than having three separate queues (or 6 queues for both directions)?  I don't know.  It may be worth a try.
    - tbob
    Inventor of the WORM Global
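tbob's ID-tagged-cluster scheme can be sketched as follows. Python's queue.Queue has no "preview", so this hypothetical sketch routes each command up front instead of having consumers preview one shared queue, but it keeps the single reply queue carrying (ID, response) clusters and the 255 broadcast ID from his description:

```python
import queue
import threading

CONSUMERS = (1, 2, 3)
BROADCAST = 255                 # tbob's 'send to all consumers' number

cmd_qs = {cid: queue.Queue() for cid in CONSUMERS}
reply_q = queue.Queue()         # single consumers-to-producer queue

def send(dest, command):
    targets = CONSUMERS if dest == BROADCAST else (dest,)
    for cid in targets:
        cmd_qs[cid].put(command)

def consumer(cid):
    while True:
        cmd = cmd_qs[cid].get()
        if cmd == "stop":
            break
        # the (ID, response) cluster travels back on the shared reply queue
        reply_q.put((cid, "ok: " + cmd))

threads = [threading.Thread(target=consumer, args=(cid,)) for cid in CONSUMERS]
for t in threads:
    t.start()

send(2, "check")                # command for consumer 2 only
send(BROADCAST, "selftest")     # broadcast to all three
send(BROADCAST, "stop")         # shutdown, also broadcast
for t in threads:
    t.join()

replies = []
while not reply_q.empty():
    replies.append(reply_q.get())
```

Because "stop" is broadcast on every consumer's own queue, no loop waits on the slowest one to shut down, which addresses the interruption point above.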

  • Could Buffer replace the Queue in Producer/Consumer Design Pattern

    Hello,
    I have a question: the task of a buffer is to store data, and a queue does the same, so could we use a buffer in place of the queue in a Producer/Consumer design pattern?

    No, those buffer examples are not nearly equal to a queue and will never "replace" queues in producer/consumer.
    The most important advantage of queues for producer/consumer (which none of the other buffer mechanisms share) is that they work event-based to notify the reader that data is available. So if you simply replaced the queue with overly elaborate buffer mechanics like those you attached to your last post, you would lose a great deal of the purpose of using producer/consumer.
    So, to compare both mechanisms:
    - The queue works event-based, whereas the buffer example does not.
    - The queue has to allocate memory during runtime if more elements are written to it than dequeued. This is also true for the buffer (it has to be resized).
    - Since the buffer is effectively just an array with overhead, memory management gets slow and messy with increasing memory fragmentation. Queues perform much better here (but have their limits too).
    - The overhead for the buffer (array handling) has to be implemented manually. The queue functions encapsulate all the functionality you will ever need, so queues have a simple API, whereas the buffer does not.
    - Since the buffer is simply an array, you will have a hard time sharing its content between two parallel running loops. You will need to either implement additional overhead using data value references to manage the buffer, or waste lots of memory by using mechanisms like variables. In addition to wasting memory, you will presumably run into race conditions, so do not even think about this.
    So this leads to four '+' for the queue and only one point where the "buffer" equals the queue.
    I hope this clears things up a bit.
    Norbert
    CEO: What exactly is stopping us from doing this?
    Expert: Geometry
    Marketing Manager: Just ignore it.
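The event-based point is the crucial one. A minimal Python illustration of the difference: a queue consumer sleeps inside the dequeue call and wakes the instant data arrives, whereas a plain array buffer forces the reader to poll:

```python
import queue
import threading
import time

q = queue.Queue()
buffer = []   # the 'plain array' alternative

def producer():
    time.sleep(0.1)
    q.put("data")         # wakes the blocked queue reader immediately
    buffer.append("data")

threading.Thread(target=producer).start()

# Queue reader: blocks with zero CPU use until the element arrives.
item = q.get()

# Buffer reader: must poll, burning cycles and adding up to one
# poll interval of latency to every element.
while not buffer:
    time.sleep(0.01)
item2 = buffer[0]
```

Both readers end up with the data, but only the polling one pays CPU and latency for the privilege.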

  • JMS queue getting slower and slower

    We are facing a rather strange behaviour on our WL 8.1.5 cluster (2 nodes).
              One of our JMS queues receives about 400 messages every 5 minutes (the distribution is unbalanced, with a peak every 5 minutes). These are forwarded to one MDB instance on every cluster node. Everything works perfectly for some time, but there is a creeping performance degradation, although the cluster nodes' resources are far from being exhausted.
              The first message peaks can be processed quite well. After some time, fewer and fewer messages can be processed within a one-minute slot, finally resulting in a queue that gets fuller and fuller. After about 5 hours and some thousand unprocessed messages (meaning: not processed by the MDB and thus still in the queue), the queue becomes a 'storage pipe', forwarding an old message only when the sender inserts a new one.
              Any suggestions ?

    I don't think there is any switch to resolve this issue. You need to conduct JMS tuning to improve performance.
              o Start downstream: check whether your message processing itself is very slow. Monitor the message-driven bean for any rolled-back or timed-out transactions. Incremental thread dumps during peak processing time may also help.
              o Monitor memory usage and GC frequency of the server during peak conditions. If you see high memory usage and high GC frequency, you might want to:
              - Tune the JVM's garbage collection and heap size.
              - Turn on message paging to free up memory resources.
              o I am not sure how well it works, but you might also want to try flow control between producer and consumer.
              http://edocs.bea.com/wls/docs81/ConsoleHelp/jms_tuning.html#1121837
              Good luck !!!

  • Queue Block (SMQ2) and no GATP invoking for Sales Order

    Hi,
    I am encountering two issues:
    - Every time I activate any integration model for customers, materials, etc., it queues up in SMQ2 in ECC, and on processing there it goes to APO.
    - When I create a sales order it does not invoke the GATP screen. However, if I click the 'Item availability' button it takes me to the APO screen with a different check mode (different from the one maintained in the product master).
    - On saving the sales order and then clicking the 'Item availability' button again, I get the message 'Order do not exist'. Next, if I check SMQ1 in ECC I find the queue. If I try to re-process, I get the message "LUWs in status NOSEND must be picked up by the application."
    Well, we have maintained X0 for the MRP materials, did the CIF manually, and have updated the product master ATP view with the new check mode. We have also maintained the distribution, RFC destination, change transfer settings, etc.
    Can someone please tell me as to what is it that controls the queue communication between ECC & APO or if anything else that is controlling this.
    Regards,
    Avijit

    Hi Santosh,
    We maintained these settings in /SAPAPO/C4 in APO.
    Also, we maintained Debug = 'R', Logging = 'D' (Detailed) and RFC Mode = 'Q' (Queued RFC) in CFC2 of ECC.
    Even on changing "Debugging off" in the APO system to "Debugging on" / "Debugging on, Recording of t/qRFCs (NOSENDS)", I get the queue in SMQ2 of APO with status 'NOEXEC', i.e. no automatic execution.
    More info: the queues are CF*, and for them we maintained the following.
    In ECC (T-code: SMQS): the host ID of the ECC system appears in the 'Scheduler Information' section, and against the destination of the SCM system we maintained the host ID of the ECC system once again.
    In ECC (T-code: SMQR): the host ID of the ECC system appears in the 'Scheduler Information' section, and against the CF* queue the destination with logon data is blank.
    In SCM (T-code: SMQS): the host ID of the SCM system appears in the 'Scheduler Information' section, and against the destination of the ECC system the host ID is kept blank.
    In SCM (T-code: SMQR): the host ID of the SCM system appears in the 'Scheduler Information' section, and against the CF* queue the destination with logon data is blank.
    In the light of this, is there something that I am missing? Please let me know.
    Regards,
    Avijit

  • Podcast Producer and Internet Explorer

    I've configured Podcast Producer on 10.6.8, and it is working fine - items are processed from the Capture software through to PcP2 and Xgrid, and are then uploaded to a Wiki.
    The issue is with clients using Internet Explorer (any version) and their ability to play audio files uploaded from Capture or through the web interface of the wiki. Videos work fine, but whatever code is used to upload just audio files to the wiki cannot be interpreted by IE. I know that 10.6.8 was meant to address some of these issues, but I am just not sure how to proceed.
    Has anyone had a similar experience?
    Cheers
    Kieran

    Not sure if this will fix your problem, but I was having the same problem until I looked at the specs for XGrid, it has to run on an Intel Mac. This is where you can get the specs:
    http://docs.info.apple.com/article.html?artnum=306737
    Here is what is in the doc:
    Podcast Capture requirements
    Any Mac running Mac OS X 10.5 Leopard
    iSight camera (built-in or external) or FireWire DV camcorder
    Podcast Producer Server requirements
    Any Mac running Mac OS X Server version 10.5 Leopard
    Xsan for optional cluster file services
    Podcast Producer Xgrid rendering requirements
    Any Intel-based Macintosh Server or Intel-based desktop Mac (a Mac Pro, for example)
    Mac OS X 10.5 Leopard or Mac OS X Server version 10.5 Leopard
    At least 1 GB of memory (RAM) plus 512 MB of additional RAM per processor core
    At least 50 GB of available disk space
    Xsan for optional cluster file services
    Quartz Extreme-enabled video chipset
    Note: Quartz Extreme support can be verified in the Graphics section of Apple System Profiler.
    Note: If a system is providing multiple Podcast services (for example, Podcast Producer and Podcast Producer Xgrid rendering), the system needs to meet both requirements.

  • HT1222 Should I update to iOS 7? I read in the paper yesterday that you should wait, because you need to be in a queue for the download and there could be bugs that need to be resolved

    Should I update to iOS 7? I read in the paper yesterday that you should wait, because you need to be in a queue for the download and there could be bugs, so that they can be resolved first.

    do whatever you'd like to do.  No one's forcing you to update.

  • Do I get confirmation that my iPhoto book is being produced and sent out?

    I put together a photo book in iPhoto and submitted it. It went through all the rendering and processing. My credit card bill shows it was in the process of being charged, but there was no confirmation from Apple that my book will be produced and mailed out. Is this how it is supposed to happen?
    Thank you,
    Danny

    Visit the online Apple Store of your country:
    http://store.apple.com/
    You can log in to your account using the same Apple ID and Password you used to place your initial order
    or
    you can just give them a call.

  • I'm using iTunes Producer and am getting a delivery error:

    I'm using iTunes Producer and am getting a delivery error:
    ERROR ITMS-3000: "Line 85 column 29: element "data_file" incomplete; missing required element "checksum" at XPath /package/book/assets/asset[2]/data_file"
    can anyone please help?
    This is a fixed-layout ePub produced in InDesign, if that makes any difference.

    On navigating to /Library/Application Support/Adobe, none of these folders are present on your system.
    Could you please provide a snapshot of this location?
    These folders should get created on install.
    Regards,
    Ashutosh

  • IB Queue: Can a Queue be Unordered and Partitioned at the same same time?

    Hi,
    My question is related to Unordered Queue:
    Can a Queue be Unordered and Partitioned at the same same time?
    From PeopleBooks: Managing Service Operation Queues
    "Unordered:
    Select to enable field partitioning and to process service operations unordered.
    By default, the check box is cleared and inbound service operations that are assigned to a queue are processed one at a time sequentially in the order that they are sent.
    Select to force the channel to handle all of its service operations in parallel (unordered), which does not guarantee a particular processing sequence. This disables the channel’s partitioning fields.
    Clear this check box if you want all of the queue's service operations processed sequentially or if you want to use the partitioning fields."
    This seems to indicate that Unordered queues don't use partitioned fields. Yet, there are a few delivered Queues that are Unordered and have one or more Partition fields selected.
    EOEN_MSG_CHNL
    PSXP_MSG_CHNL
    SERVICE_ORDERS
    How does partitioning work in this case? Or is partitioning ignored in such cases?
    Thanks!

    I guess you could use reflection and do something like the following:
    import java.lang.reflect.Constructor ;
    import java.lang.reflect.Method ;
    public class MyClass<T> implements Cloneable {
      T a ;
      public MyClass ( final T a ) {
        // super ( ) ; // Superfluous
        this.a = a ;
      }
      public MyClass<T> clone ( ) {
        MyClass<T> o = null ;
        try { o = (MyClass<T>) super.clone ( ) ; }
        catch ( Exception e ) { e.printStackTrace ( ) ; System.exit ( 1 ) ; }
        o.a = null ;
        //  See if there is an accessible clone method and if there is use it.
        Class<T> c = (Class<T>) a.getClass ( ) ;
        Method m = null ;
        try {
          m = c.getMethod ( "clone" ) ;
          o.a = (T) m.invoke ( a ) ;
        } catch ( NoSuchMethodException nsme ) {
          System.err.println ( "NoSuchMethodException on clone." ) ;
          //  See if there is a copy constructor and if so use it.
          Constructor<T> constructor = null ;
          try {
            System.err.println ( c.getName ( ) ) ;
            constructor = c.getConstructor ( c ) ;
            o.a = constructor.newInstance ( a ) ;
          } catch ( Exception e ) { e.printStackTrace ( ) ; System.exit ( 1 ) ; }
        } catch ( Exception e ) { e.printStackTrace ( ) ; System.exit ( 1 ) ; }
        return o ;
      }
      public String toString ( ) { return "[ " + ( ( a == null ) ? "" : a.toString ( ) ) + " ]" ; }
      public static void main ( final String[] args ) {
        MyClass<String> foo = new MyClass<String> ( "zero" ) ;
        MyClass<String> fooClone = foo.clone ( ) ;
        System.out.println ( "foo = " + foo ) ;
        System.out.println ( "fooClone = " + fooClone ) ;
      }
    }

  • SMQ2 - Found a queue with name "*" and all other queues stuck in STOP state

    Hi all,
    We have a problem on the ECC side, on outbound asynchronous proxies. Sometimes a new queue appears in SMQ2 there with the name "*" and status "STOP". At that point, all other queues get stuck in STOP status... the only way to reset all those queues to RUNNING state again is to delete the queue named "*"...
    Does anyone have an idea how this queue "*" appears there and stops all the other queues?
    thanks
    roberti

    Hi Ravi,
    It is not only about one queue being stopped... the problem is this queue named "*"... with its status STOP, all other queues stop too. The only way to make things work again is to delete the queue named "*"... the problem is to discover what or who is creating this stupid "*" queue. I think this is a procedural error in the datacenter, but they say that is not the case... I'm investigating, but if someone else has any idea I would appreciate it.
    thanks!
    roberti

  • How large can a LabVIEW Queue be, in elements and bytes?

    How large can a LabVIEW Queue be, in elements and bytes?

    rocke72 wrote:
    How large can a LabVIEW Queue in elements and bytes?
    In elements it is likely something like 2^31. In bytes it is most probably around the same number or better, depending on how exactly the different queue elements are stored. I think they are stored as independent pointers, so it would theoretically be better than those 2^31. In practice, however, allocating that much memory in LabVIEW will always cause problems on today's computers. Without a 64-bit CPU and OS, going above 2 GB of memory per application is very hard and, as far as I know, not supported by LabVIEW.
    Also, allocating many chunks of memory (like a few million queue elements holding strings) will definitely slow down your system tremendously; it will work, but the OS memory manager will be really stress-tested.
    Rolf Kalbermatter
    Rolf Kalbermatter
    Message Edited by rolfk on 06-09-2006 12:24 AM
    Rolf Kalbermatter
    CIT Engineering Netherlands
    a division of Test & Measurement Solutions
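A back-of-envelope check of Rolf's "few million string elements" warning; the CPython object sizes used here are illustrative only (LabVIEW's handles differ), but the order of magnitude is the point:

```python
import sys

n_elements = 2_000_000                         # 'a few million queue elements'
sample = "a 32-character measurement line!"    # one small string payload
bytes_per_element = sys.getsizeof(sample) + 8  # object plus one queue pointer

total_mb = n_elements * bytes_per_element / 2**20
# Hundreds of MB of small allocations: the real cost is not the total
# size but the millions of separate blocks the memory manager must track.
```

So well before any 2^31 element limit matters, allocator overhead and fragmentation dominate, which matches Rolf's "stress-tested memory manager" remark.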

  • Queue is registered and cannot be activated

    Hi all
    We have an R3AD queue problem: "The queue is registered and therefore cannot be activated".
    The status of the queue is "RUNNING".
    When I try to activate it, it gives the message "The queue is registered and therefore cannot be activated".
    Can anyone help me out in this case?

    Hi Harry,
    Check if this is useful.
    Maintain the following parameter using transaction SM30 (table maintenance) for table SMOFPARSFA:
    Key : R3A_COMMON
    Parameter Name: CRM_DO_NOT_REGISTER_INBOUND_QUEUE
    Parameter Value: X
    Caution:
    This parameter has the effect that the queues R3A*, CSA*, CRI* or CDB* are no longer registered automatically. Ensure that all required objects are registered.
    Award points if useful.
    Thanks,
    Ravi
