Preview queue at opposite end

Is it possible to preview a LabVIEW queue at the opposite end?
I'm using LabVIEW 8.6
Many thanks for your help
Conway

The Get Queue Status function has an optional input to Return Elements. If this is true, the function will return a preview of all the elements in the queue. You can then index out the last element in the array to preview the most recent thing put into the queue.
Jarrod S.
National Instruments
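Since LabVIEW code is graphical and can't be pasted as text, here's a rough Python analogy (using `collections.deque` as a stand-in for the LabVIEW queue) of what Get Queue Status with Return Elements = TRUE lets you do. The variable names are illustrative only:

```python
from collections import deque

# Hypothetical stand-in for a LabVIEW queue: in LabVIEW you would call
# Get Queue Status with "return elements?" = TRUE and index the array.
q = deque()
for msg in ("first", "second", "third"):
    q.append(msg)          # analogous to Enqueue Element

# "Preview" the whole queue without dequeuing anything,
# then look at the most recently enqueued element (the back).
elements = list(q)         # analogous to the Elements output array
newest = elements[-1]      # last element = most recent enqueue
oldest = elements[0]       # front of the queue

print(newest)   # third
print(len(q))   # 3 -- nothing was removed
```

The key point, in both languages, is that previewing copies the elements out without dequeuing them.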

Similar Messages

  • Photos go to opposite ends of the timeline when flagging, causing me to lose my place

    Hey everyone,
    I'm using 5.4, and whenever I flag a photo (as 'Pick' or 'Reject') it moves to either end of my timeline and instantly jumps me there. This causes me to lose my place, and I'm editing about 1,800 photos from a concert, so you can imagine how annoying this is.
    I've looked online for a solution and gone through the menu trees, but I can't find anything.
    Thanks y'all

    Change your sort order to something other than what you currently have it set to.

  • Why do RAM previews get slower towards end of comp?

    I was just wondering: why do RAM previews, and even updates in the comp monitor when playing with effects, get slower as you get deeper into the comp? When I first begin a comp everything is extremely fast, but once 2 to 3 minutes of the comp are completed everything slows down and each frame takes 4 or 5 seconds to load. Do you think it's a matter of RAM?

    No. Whatever issues you have are specific to your setup, like calculations on effects accumulating or multiplying. This can easily happen with particle effects or temporal effects like Timewarp - the later the time in the comp, the more particles or frames have to be processed. You have not provided any relevant info, anyway, so there is not much point in making generic claims based on such vague and incomplete info. We know nothing about your system or even what version of AE you use, so pardon me, but this is pointless.
    Mylenium

  • ORA-03113: end-of-file on communication channel:(The queuing is not enabled

    The queuing is not enabled.
    SQL> select * from user_queues;
    NAME               QUEUE_TABLE  QID    QUEUE_TYPE       MAX_RETRIES  RETRY_DELAY  ENQUEUE  DEQUEUE  RETENTION  USER_COMMENT
    ACK_QUEUE          ACK_QUEUE_T  36560  NORMAL_QUEUE     5            6            NO       NO       0
    AQ$_ACK_QUEUE_T_E  ACK_QUEUE_T  36561  EXCEPTION_QUEUE  0            0            NO       NO       0          exception queue
    AQ$_IFW_SYNC_E     IFW_SYNC     36562  EXCEPTION_QUEUE  0            0            NO       NO       0          exception queue
    IFW_SYNC_QUEUE     IFW_SYNC     36563  NORMAL_QUEUE     5            6            NO       NO       0
    On the blrhpqe2 system, when we try to start the queue it gives an end-of-communication error. It seems there is some problem with dbms_aqadm. Can you please check and let us know your feedback?
    Below the output of the screen contents:
    1* select * from user_queues
    SQL> begin
    2 dbms_aqadm.start_queue('ACK_QUEUE');
    3 commit;
    4 end;
    5 /
    begin
    ERROR at line 1:
    ORA-03113: end-of-file on communication channel
    please help me on this

    >>>ERROR at line 1:
    ORA-03113: end-of-file on communication channel
    Connect to the database again.
    ORA-03113: end-of-file on communication channel
    Cause: The connection between Client and Server process was broken.
    Action: There was a communication error that requires further investigation. First, check for network problems and review the SQL*Net setup. Also, look in the alert.log file for any errors. Finally, test to see whether the server process is dead and whether a trace file was generated at failure time.

  • Queue: one producer vs multiple consumers

    Hi,
    This may be a simple question. In my application, I have a master/multiple-slaves queue architecture, where the master enqueues messages. Each message is addressed to only one slave. Since all the slaves are waiting on queue elements, I expected them all to receive each message and all but one (the intended recipient) to dismiss it. Apparently it does not work like that: once a message is dequeued by one slave, it is no longer in the queue and the others are not notified. Is that right?
    I could use a separate queue for each slave, but when it comes to 100 slaves, I find it a bit heavy to manage a hundred queues. I tried instead to do a router-like architecture: everyone listens, and for each message they all check who the recipient is. Only the recipient processes the message and the others simply dismiss it. Is there a way for me to implement that?
    Joe

    It actually sounds like a fairly dicey problem, depending on your need for response times and total messaging bandwidth.  Many of the issues seem to revolve around multiple producers and consumers wanting simultaneous read and write access to a continuously changing entity.  At least that's how it sounded to me, that all the consumers are independent and should be allowed to consume their own messages just as soon as they're able to notice them.
    For example:
    Consumer #3 performs a preview and discovers that there's a message waiting at position #9.  In order to pull it out of the queue, it must first dequeue the first 8 messages in line.  Then after dequeueing its own message, it must re-enqueue those 1st 8 messages, this time using "Enqueue at Opposite End" to restore the original sequence.
    However, before it can finish re-enqueueing those 8, Consumer #11 performs a preview and discovers that there's a message at (apparent) position 4. This really *should* be position 12, but how can Consumer #11 know what #3 is in the middle of?
    Meanwhile, one of Consumer #3's 8 messages is addressed to Consumer #11, and it's actually quite important that Consumer #11 gets this one first instead of the one that it just found for itself.
    And so on...
    Now, some (most?  all?)  of this kind of conflict can be resolved through the use of semaphores, but the penalty is that whenever any Consumer is accessing the shared queue, 100's of other Consumers are stuck waiting their turn.  And so all the independence and parallelism you were after in the first place with the producer / consumer pattern starts slipping away...
    I haven't personally built an app around a massively parallel messaging architecture.  I hope someone who has gets involved here because I'd like to learn how it's done.  I'd wager there are a few different elegant solutions, but all I can manage to think of are the pitfalls.
    -Kevin P.
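For what it's worth, the router idea Joe describes can at least be sketched in text form. This is a Python analogy, not LabVIEW: a hypothetical dispatcher thread owns the master queue and forwards each message into a small per-consumer queue, so no consumer ever has to dequeue (or preview past) someone else's message, and the semaphore contention Kevin describes never arises:

```python
import queue
import threading

# Sketch of a router/dispatcher: one thread drains the master queue and
# fans messages out to per-consumer queues. All names here (dispatch,
# consumer_queues) are illustrative, not any real LabVIEW or broker API.

master = queue.Queue()
consumer_queues = {i: queue.Queue() for i in range(100)}  # 100 slaves

def dispatch():
    while True:
        dest, payload = master.get()   # each message is addressed to one slave
        if dest is None:
            break                      # shutdown sentinel
        consumer_queues[dest].put(payload)

router = threading.Thread(target=dispatch, daemon=True)
router.start()

master.put((3, "calibrate"))
master.put((11, "measure"))
master.put((None, None))               # stop the router
router.join()

msg3 = consumer_queues[3].get()        # each slave blocks only on its own queue
msg11 = consumer_queues[11].get()
print(msg3)    # calibrate
print(msg11)   # measure
```

The hundred queues still exist, but only the dispatcher ever touches the shared one, so each slave's dequeue is trivially safe.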

  • Queues - Differences between LV 5 and LV6

    The implementation of queues has changed from LabVIEW 5 to LabVIEW 6.
    The old queue VIs still exist for compatibility. Some of the differences
    are quite obvious, but there's one difference I don't understand:
    The (old) 'Create Queue' VI is now built around the new 'Obtain Queue'
    VI. It stores all created named queues in an array and searches this
    array before calling 'Obtain Queue'. In a textbox it says that this is
    necessary due to a different behaviour of the new named queues. I don't
    see/understand this difference. To make the question clearer:
    where is the difference between old and new queues when creating
    or obtaining a named queue (that already exists)?
    TIA,
    Thomas
    Thomas Ludwig
    Institute of Mineralogy
    University of Heidelberg, Germany

    Hi,
    Here is some great information on the new queues thanks to Stephen Mercer here at NI.
    Queues underwent an overhaul between LabVIEW 6i and LabVIEW 6.1 (not before that). The most obvious change is the conversion from VIs to primitive functions.
    The refnum wires themselves are different colors -- they still have "refnum green" on the outside of the wire, but the inside shows the type of data they carry.
    LV6.1 exhibits true reference counting. In LV6i, each time you executed "Create Queue" with the same name, you received the same refnum value (assuming you hadn't somewhere called "Destroy Queue"). So you had a single reference running around on all your diagrams. This caused problems for some applications, since they had one set of VIs creating the queues and another set using them; the two sets were operationally independent, and there was never an easy way to know when everyone was done using the queues and it was safe to destroy them.
    In LV6.1, the queues and notifiers are properly reference counted. This has zero impact on unnamed queues. But for named queues, each time you "Obtain Queue" for the same name, you get a different refnum value. IT STILL POINTS TO THE SAME QUEUE. But now many different VIs can acquire refnums to the same queue and release each of their refnums when they are done with them. When the last refnum is released, we (LabVIEW) know it is time to actually destroy the queue. That's why the names "create" and "destroy" were changed.
    The way I've explained it to others: think of the queue as a tire. The refnum is a rope attaching the tire to a low-hanging tree branch. Each time an "Obtain Queue" is called for the same name, a new rope is added tying the tire to the branch. When a "Release Queue" is called, one of the ropes is removed. When the last rope gets removed, the tire falls to the ground -- the queue is removed from memory and its data is thrown away. The special terminal on "Release Queue" called "release all?" can be wired with TRUE to invalidate all refnums for this queue and immediately cause its destruction.
    What does this mean to a user? It means that you want to include a "Release Queue" for every "Obtain Queue" you use, or risk continually allocating 4-byte refnums that aren't cleaned up until the end of your VI's execution. Most programs don't notice the continuous allocation, but those that are designed to run for long periods of time (days) have had a bit of trouble with this concept. It also means you are better off actually running the refnum wire to the various points of your diagram that need it rather than -- as was frequently done in LV6i -- taking the shortcut of doing another "Create Queue" in each physical location where you need that refnum. All refnums obtained by a given VI hierarchy will be cleaned up automatically when that VI hierarchy stops execution, but if you want them cleaned up sooner, use "Release Queue".
    The 6.1 queues have a new capability not existing in 6i. The "Preview Queue Element" primitive allows you to inspect what the front element of the queue is without actually taking it out of the queue. This is useful for doing look-ahead work.
    There are new example programs that ship with 6.1 to help you understand the queue behaviors.
    Bryan Dickey
    Applications Engineer
    National Instruments
    http://www.ni.com/support

  • Please Help me place the arrows of my ScrollPane scroll bar together on one end of the scroll track!!

    I have a scrollpane component with a movie clip of some
    thumbnail images.
    I just want to have its scrollbar arrows together on one end
    of the track (or together ANYwhere) instead of having them at
    opposite ends of the scroll track.
    I have been able to customize the appearance of the
    scrollpane and its scrollbar using the HaloTheme library, but that
    approach has so far been of no use in getting the arrows together.
    I cannot tell you how deeply any help will be appreciated. I
    will probably sob and mewl with gratitude, the way I imagine
    someone lost in a vast rain forest for many weeks mewls when
    rescued. I am desperate. I am going insane. Please, please help me.


  • Multiple queues vs single queue - performance

    Hi All,
    Could you please throw some light on the following design?
    Currently, we have more than 100 queues, and for each there is a different MDB to receive the message. Now we have made a single instance of the same MDB listen to all these queues by looking at the input message. The requirement now is: is it possible to combine all the queues into a single queue, since it's asynchronous messaging? The receiver may also try to write into the same queue at the end of message processing. If so, how would performance be affected? Would this perform better than having multiple queues?
    We expect about 25k messages/day in the queue, and we are using WebLogic Server.
    Please let me know if you need more details.
    Thanks
    Srinath

    You could try to use selectors to assign messages to particular MDBs. This way you offload some of the work onto the broker, but selectors by their nature usually have an impact on the broker's performance. It all depends on your application, your expectations, the size and rate of your messages, and how long it takes to process them on the client side. There is no simple answer in this case. You would have to test some scenarios to come up with the best solution that fits your requirements. Having said that, I personally believe that some degree of partitioning of your queues could be beneficial. On a heavily loaded system it may increase the concurrency of the system and its resilience, particularly if you place the queues in different brokers. Try to imagine your broker as if it were a DB server with just one table (I'm not sure if this is the best analogy, but I can't think of anything else at the moment).
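The selector trade-off can be modelled very roughly in a toy single-process sketch (Python; `consume_with_selector` is a made-up name, and a real broker evaluates selectors far more efficiently than this rescan -- the point is only that per-message filtering work lands on the shared queue):

```python
import queue

def consume_with_selector(q, selector):
    """Scan the shared queue and keep only matching messages.
    Non-matching messages are put back -- this rescanning stands in
    for the per-message overhead selectors impose on the broker."""
    matched, passed_over = [], []
    while not q.empty():
        msg = q.get_nowait()
        (matched if selector(msg) else passed_over).append(msg)
    for msg in passed_over:
        q.put(msg)                    # re-offer to other consumers
    return matched

shared = queue.Queue()
for msg in [{"type": "order", "id": 1},
            {"type": "audit", "id": 2},
            {"type": "order", "id": 3}]:
    shared.put(msg)

orders = consume_with_selector(shared, lambda m: m["type"] == "order")
print([m["id"] for m in orders])   # [1, 3]
print(shared.qsize())              # 1 -- the audit message remains
```

Partitioning into separate queues up front removes that filtering cost entirely, at the price of managing more destinations -- which is the trade-off the reply describes.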

  • Previews not loading

    I have pictures on an external hard drive that are imported into my catalogue. One folder (out of many), when called up, does not show previews when in Library. If I choose Develop, the pictures appear and so does the preview. When I try to back up, I get an error message that my preview cache is having a problem. All other folders have good previews.

    WW's solution likely won't work. The Camera Raw Cache only affects what happens in the Develop Module.
    Have you tried selecting all images in that folder (grid mode, Ctrl-A) and choosing the "Library->Previews->Render Standard-Sized Previews" menu item? I'd try that first.
    If that doesn't work, you'll need to delete your previews.
    In the same directory as your Lightroom catalogue, there is a folder whose name ends in ".lrdata". Rename this folder, go into Lightroom and select the "Library->Previews->Render Standard-Sized Previews" menu item. Tell it to render them for all photos. If you have a large library, this may take quite a while, but you should have your previews back at the end.
    Hal

  • "Warm transfers" to a queue using ad-hoc conferencing

    Hi all,
    Just wondering if someone can help. We have an issue with a UCCX 7 deployment where an agent in a queue initiates what he refers to as a warm transfer to another queue. Essentially, he is transferring a caller to another queue by initiating a conference with the queue and then ending his side of the call once in there, leaving just the caller hearing the "your call is important" type messages. Think of it as holding the caller's hand until they have been queued. The issue is that once the call is handed off to the next available agent in the second queue, the caller, the new agent AND the queue are now conferenced together, i.e. the caller and the new agent continue to hear the in-queue messages.
    Is this the expected behaviour in this scenario? We have asked the agents to simply transfer the callers to the second queue if possible, but as they aren't very happy with what they see as a loss of functionality, I'd like to rule out a possible solution before saying it can't be done. The part I'm not wrapping my head around is how, once the agent leaves the conference, the caller, the new agent and the queue are being conferenced together. Surely the ad-hoc conference ends once there are only two participants, and if so, what is initiating the new conference?
    Any ideas will obviously be very much appreciated. Cheers in advance
    Jason

    That's not the correct name, in my opinion.
    The industry standard definition of a "warm transfer" or "consultative transfer" is that the caller is placed on hold while the first agent finds the second agent. Whether the first agent does this directly to the second agent, or does it through the queue, wherein he must wait for the second agent to become available and answer the consult call in the queue, is immaterial.
    This is "warm" because the first and second agent can exchange information about the caller, producing a smoother handoff of the call. Once the second agent has been given the information and indicates they will accept the call, the transfer can be completed.
    There is no need to do this as a conference - and it is detrimental to do so, as it uses up conferencing resources and bandwidth (a problem in a branch office environment). When done as a consult, and once connected to the target agent, one can even use alternate/reconnect to toggle back and forth between the caller and the second agent to facilitate introductions. With practice, this can be done correctly without doing a conference.
    Should no second agent answer the queued call in a REASONABLE time, the first agent gets the caller back (kills the consult leg) and explains the situation. This is what makes it "warm" - the caller is never left stranded.
    Now what your blokes are doing is a bit useless. What is the point of the two parties being in conference to hear the queue message? If the agent wants to dump the caller in the queue, which is what they are doing (nothing "warm" about it), then they can start a consult to the queue point, hear the start of the messaging, and complete the transfer - dumping the caller into the queue.
    Basically, they are messed up. They COULD do it with a conference as long as they WAIT for the 2nd agent to answer - then all three parties are in conference. But in my opinion it's not needed. If they want to dump the caller off into the queue, just use a consult + transfer in the normal way.
    Regards,
    Geoff

  • Queue overflow errors in tag engine.

    I was testing the 310 tags in my tag database using the interactive server tester when I encountered a queue overflow error. I am storing and retrieving values in an Allen-Bradley SLC 5/04 PLC module, using RSLinx as an OPC server. I tried increasing the queue size, but still ended up with the error. How will a queue overflow affect the performance of the tag engine, and are there ways to better pinpoint the problem?
    Thanks,
    Mike Thomson
    [email protected]

    Mike,
    Check out these links...
    http://digital.ni.com/public.nsf/3efedde4322fef19862567740067f3cc/862567530005f09e862567c700746a65?OpenDocument
    http://zone.ni.com/devzone/conceptd.nsf/2d17d611efb58b22862567a9006ffe76/120e7a0c342df3fa86256812005c056c?OpenDocument
    http://zone.ni.com/devzone/conceptd.nsf/2d17d611efb58b22862567a9006ffe76/bb7a08241bb0797c86256812005d1f3c?OpenDocument
    and if they don't help, then write back with info like: Version of LabVIEW in the Help>> about LabVIEW
    Have you run the update to LV DSC 6.0.2?
    Find out by going to Start>>Settings>>Control Panel>>Add/Remove Programs and looking for the LV Datalogging and Supervisory Control version.
    Version of Logos?... find the lkopc.exe Properties>>Version
    Let me know if further problems exist
    Thanks,
    Bryce

  • Is it possible to get the element data type of a Queue from itself?

    Hi everyone,
    I have a queue whose element data type is a cluster. When I want to enqueue, I'll use Bundle by Name, and for that I'd have to have my data type present (a long wire from wherever, possibly from where I obtained the queue). My question is whether there is a method/property node/something that lets me wire the Queue Refnum into it and get back the element data type, so I can then feed it into the top of Bundle by Name. (I really don't want that wire running all over the place.) The reason I ask here is that the help for the outgoing Queue Refnum from the Obtain Queue method shows the element data type, so I hope there might be a solution.
    Thanks for your time, and cheers
    j

    If I understand your question correctly, the answer is "Yes, it's very easy ..."
    The answer is "Preview Queue Element".  Here I create a queue of some mysterious type (it's a cluster having a Thing and a Center, but you don't know that yet).  I take the queue reference from wherever I can find it and pass it into Preview Queue Element.  I take the output and use it to define my cluster in Bundle by Name.
    Two caveats.  This copies the first element of the Queue into the cluster, so you probably need to be sure to define all of the elements of your cluster.  But what if the Queue is empty (as mine is, above, as I just Obtained it) -- well, that's why 0 is wired into the TimeOut input, since I do not want to wait "forever" (-1) for the empty Queue to have an element!  Turns out that even in this case, you still get the correct Cluster elements!
    Neat, huh?
    Bob Schor
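Bob's timeout-0 preview trick has a rough text analogy in Python (illustrative only -- `preview`, `template`, and the cluster layout are all made up; in LabVIEW the empty-queue case still yields the correct cluster *type*, which is all Bundle by Name needs):

```python
from collections import deque

def preview(q, default=None):
    """Return the front element without dequeuing it, or return
    `default` immediately if the queue is empty (timeout-0 behaviour:
    never block waiting for an element)."""
    return q[0] if q else default

q = deque()
template = {"Thing": "", "Center": (0.0, 0.0)}   # hypothetical cluster layout

empty_case = preview(q, template)    # empty queue: fall back to the template
q.append({"Thing": "circle", "Center": (1.0, 2.0)})
front = preview(q, template)         # front element, still in the queue

print(empty_case)   # the template -- nothing to preview yet
print(front)        # the enqueued cluster
print(len(q))       # 1 -- preview did not dequeue
```

As in Bob's solution, the caveat is the same: a preview hands you a copy of the front element (or the fallback), never a removed one.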

  • Preview for Effects

    I am in the Effects menu for Drop Shadow and want to preview, but there is no Preview check box (I am on Windows - CS4).  I am following a tutorial that is for both platforms, but the examples are on a Mac.  Does Windows give me a preview option for any of the options in the Effects window?  It has in everything else.
    Thanks

    Similar, but without the preview box.  I ended up closing everything and restarting, and then the preview box came up.  I have no idea why it did not display the first time.
    thanks

  • After Effects RAM Preview not showing first 50 frames or so - Mac OS

    So I'm on a 2012 iMac, and when I RAM preview, it plays audio but video does not show until about 50 or 60 frames in. It's like a glitch or lag that started recently, and it makes RAM previewing from a certain time a real pain, as I then miss the exact part I want, and audio matches up with video a second or two later. I can't find any solutions, please help!

    As Andrew says, Adobe and Apple are working on figuring this bug out. See the known issues section of this page: After Effects good to go with Mac OSX v10.10 (Yosemite)
    If you let the RAM preview loop at the end of its playback and play through again, it should be smooth and issue-free.

  • Redirecting failed messages to other consumers of distributed queue

    Hi,
    We have a simple cluster with two servers A and B, each hosting one MDB whose task is to consume message from a distributed queue and forward them to an EIS via a JCA resource adapter. Server B acts as a failover server. If the resource adapter is unable to deliver the message, the MDB will throw a runtime exception and the message will therefore be redelivered. In order to achieve high-availability, we have configured the distributed queue to send the redelivered message to an error destination (dead-letter-queue DLQ). The DLQ again has another instance of the MDB as a consumer with a different resource adapter.
    Using DLQs is working fine as a high-availability mechanism when the MDB is unable to deliver the message to the EIS, but is this a common or valid approach?
    An alternative to using DLQs would be to configure WebLogic to send redelivered messages to other consumers on the distributed queue. The problem we have is that the failed message always gets redelivered to the same MDB (i.e. the MDB which consumes the message but throws an exception because its resource adapter fails). Is it somehow possible to configure the MDB, or even change the MDB code, to tell WebLogic to send the failed message to another consumer of the distributed queue? Could the MDB disconnect the JMS connection when throwing the exception, and if so, would the disconnection cause the application server to deliver the message to another consumer?
    Many thanks

    Thanks Tom,
    Setting the distributed-destination-connection property to EveryMember seems to be exactly what we need to allow other distributed queue members to consume a message which has been put back on the queue after a rollback. In order to ensure that it won't be the same MDB consuming the failed message again, we would have to temporarily suspend the MDB. From what I read, one approach is to sleep the MDB after throwing the runtime exception (but is this possible, i.e. is it not going to interrupt onMessage before going to sleep?). I also read that there is a new property from WLS 9.0 onwards to automatically stop the MDB for a certain time after an exception has occurred; how can I configure this? Are we also going to have to set the MDB pool size to 1 in order to ensure that the message gets consumed by an MDB on a different server?
    We have also tried a non-collocated approach instead of a collocated approach with distributed queues, but we end up with the same problem: a message does not get redelivered to MDBs on other servers after a runtime exception has been thrown.
    Thanks a lot
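The redeliver-then-dead-letter flow being discussed can be sketched generically (plain Python, not the WebLogic API; `MAX_REDELIVERIES`, `process`, and `drain` are made-up stand-ins for the container's redelivery limit, the MDB's onMessage, and the broker loop):

```python
import queue

MAX_REDELIVERIES = 3   # stand-in for the broker's redelivery-limit setting

def process(msg):
    """Stand-in for the MDB's onMessage; raises to signal failure,
    just as the MDB throws a runtime exception when the resource
    adapter cannot deliver to the EIS."""
    if msg.get("poison"):
        raise RuntimeError("resource adapter could not deliver")

def drain(work, dlq):
    """Deliver messages, redelivering failures up to MAX_REDELIVERIES,
    then moving them to the error destination (DLQ) instead of looping
    forever on the same consumer."""
    while not work.empty():
        msg = work.get_nowait()
        try:
            process(msg)
        except RuntimeError:
            msg["deliveries"] = msg.get("deliveries", 0) + 1
            if msg["deliveries"] >= MAX_REDELIVERIES:
                dlq.put(msg)          # give up: error destination
            else:
                work.put(msg)         # put back on the queue: redeliver

work, dlq = queue.Queue(), queue.Queue()
work.put({"id": 1})
work.put({"id": 2, "poison": True})
drain(work, dlq)

n_dead = dlq.qsize()
dead = dlq.get()
print(n_dead)       # 1 -- only the poison message reached the DLQ
print(dead["id"])   # 2
```

The single-consumer version above shows exactly the failure mode in the question: without EveryMember (or suspending the failing consumer), the same loop keeps retrying the same message on the same MDB until the limit is hit.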
