Dequeue messages one at a time

We have a scenario where we have to send messages to a server (on a TCP/IP port).
However, since we are expecting huge volumes and the server cannot handle them, we are dumping all the messages into an AQ.
We want to dequeue these messages one at a time and send them to this server.
The AQ Adapter, however, seems to dequeue all the messages in the queue in one shot. It then creates a separate instance for each message and bombards the server with them.
This defeats the whole purpose of using the queue.
Is there any way we can throttle this dequeuing?
The AQ Adapter gives two options:
1) Polling: each time it polls, however, it picks up all the messages in the queue that are ready to be dequeued.
2) Notification: I haven't seen much documentation around this, and I am not sure whether it will solve our problem.
Any possible alternatives/solutions are welcome.
Thanks,
Vinod.

Hi Vinod,
To follow up on Lili's reply, you can refer to the solution I posted in the following thread:
Re: About AQ ADT example functionality
Scroll down to the section "How to make AQ process synchronous" and follow the outlined steps.
Thanks.
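Beyond the adapter configuration, the throttling asked about here can also be enforced on the consumer side. The sketch below is a minimal, generic Java illustration of the pattern (it is not the AQ Adapter API; all names are made up): a queue absorbs the burst, and a single dispatcher delivers one message and waits for that delivery to finish before taking the next, so the downstream server never sees more than one message in flight.

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.LinkedBlockingQueue;

public class SerialDispatcher {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    final List<String> sent = new CopyOnWriteArrayList<>();

    // Producers can enqueue as fast as they like; the queue absorbs the burst.
    void enqueue(String msg) { queue.add(msg); }

    // One consumer thread: take a message, deliver it, and only then take the
    // next one, so the server never has more than one message in flight.
    void drain(int count) throws InterruptedException {
        for (int i = 0; i < count; i++) {
            String msg = queue.take();   // blocks until a message is ready
            send(msg);                   // synchronous: returns after delivery
        }
    }

    private void send(String msg) { sent.add(msg); } // stand-in for the TCP send

    public static void main(String[] args) throws InterruptedException {
        SerialDispatcher d = new SerialDispatcher();
        for (int i = 1; i <= 3; i++) d.enqueue("msg-" + i);
        d.drain(3);
        System.out.println(d.sent); // [msg-1, msg-2, msg-3]
    }
}
```

The key design point is that the send is synchronous inside the consumer loop; the dequeue rate is then bounded by the server's own processing rate.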

Similar Messages

  • Processing JMS messages one at a time

    Hi, I'm on a project where we're trying to sync Siebel and EBS Customer information. However, certain events can trigger Customer data being sent twice simultaneously; this produces problematic race conditions when the Customer data is new to the receiving system.
    We're trying to throttle that by sending the messages to a JMS queue and processing them one at a time, i.e., we only want one instance of the JMS queue listener activated at any one time (we don't expect continuous load, mainly small bursts). Unfortunately, I'm not sure which properties have to be changed to do this. I have created a JMS listener that consumes messages, pretty much as created by the adapter wizard. Currently the listener is spawning new instances just as fast as the messages come in, causing EBS AppAdapter errors (can't create a new Customer as it already exists).
    Here's the JMS operation definition:
    <pc:inbound_binding />
    <operation name="Consume_Message">
    <jca:operation
    ActivationSpec="oracle.tip.adapter.jms.inbound.JmsConsumeActivationSpec"
    DestinationName="jms/CustomerInterface"
    UseMessageListener="false"
    PayloadType="TextMessage"
    OpaqueSchema="false" >
    </jca:operation>
    <input>
    <jca:header message="hdr:InboundHeader_msg" part="inboundHeader"/>
    </input>
    </operation>
    </binding>
    This is the activation agent:
    <activationAgents>
    <activationAgent className="oracle.tip.adapter.fw.agent.jca.JCAActivationAgent" partnerLink="ReceiveAccountJmsAdapter">
    <property name="portType">Consume_Message_ptt</property>
    </activationAgent>
    </activationAgents>
    This is the connector-factory:
         <connector-factory location="eis/CustomerInterface" connector-name="Jms Adapter">
              <config-property name="connectionFactoryLocation" value="jms/Queue/CustomerInterface"/>
              <config-property name="factoryProperties" value=""/>
              <config-property name="acknowledgeMode" value="AUTO_ACKNOWLEDGE"/>
              <config-property name="isTopic" value="false"/>
              <config-property name="isTransacted" value="false"/>
              <config-property name="username" value="oc4jadmin"/>
              <config-property name="password" value="********"/>
              <connection-pooling use="none">
              </connection-pooling>
              <security-config use="none">
              </security-config>
         </connector-factory>
    What else will I have to do to get this to work? Or will I be forced to deploy it to its own BPEL domain with dspMaxThreads and dspMinThreads set to 1?
    -- Just tried putting it on its own domain and changing dspMaxThreads/dspMinThreads, and that didn't work.
    - Dale
    Message was edited by:
    Dale Earnest

    Hi Al,
    Yes, it is possible.
    Please read the section Describing Message Ordering of the Adapter Life-Cycle Management of the Application Server Adapter Concepts Guide found here: http://download.oracle.com/docs/cd/B31017_01/integrate.1013/b31005/life_cycle.htm#BABJIFJI
    Best regards
    Christian Damsgaard

  • Processing msgs one at a time

    Hi,
    Is there any way in JMS to process messages received by onMessage() one at a time?
    What I mean is, I should not process the next message until the first one is processed.
    I am assuming that onMessage() executes as soon as a message is put in the queue.
    Thanks,
    -raj

    I am new to JMS; can you please explain in more detail? I also want to process messages one at a time. I appreciate your time.
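One minimal way to get the "don't process the next message until the first is done" behavior is to serialize the handler itself: if a single listener instance marks its handler `synchronized`, a second message cannot enter until the first call returns. This is a generic Java sketch (not the JMS `MessageListener` API itself, and the names are invented); note that containers which pool several listener instances also need the pool limited to one instance for this to serialize everything.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class SerialHandler {
    private final AtomicInteger active = new AtomicInteger();
    final AtomicInteger maxActive = new AtomicInteger();

    // synchronized means a second message cannot be processed until the first
    // call has returned, even if the container delivers from several threads.
    public synchronized void onMessage(String msg) {
        int now = active.incrementAndGet();
        maxActive.accumulateAndGet(now, Math::max);   // record peak concurrency
        try { Thread.sleep(5); }                      // simulate processing work
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        active.decrementAndGet();
    }

    public static void main(String[] args) throws InterruptedException {
        SerialHandler h = new SerialHandler();
        Thread[] ts = new Thread[8];
        for (int i = 0; i < ts.length; i++) {
            final int n = i;
            ts[i] = new Thread(() -> h.onMessage("m" + n));
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        System.out.println("max concurrent handlers: " + h.maxActive.get()); // 1
    }
}
```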

  • How to display all validator messages at the same time?

    Hi Guys,
    I have a form with validators attached to a couple of my input boxes. I tried to write validators which are reusable in other parts of my app, e.g. a social security number check. I also customized the messages, and it all works fine.
    But when I submit the form, it displays the messages one at a time; in other words, a validator is performed and, if there was an error, the form is re-rendered. I believe this is how it should behave, and that's fine.
    But what if I want all validators to be performed when I submit the form? Then all messages are displayed at the top of the page, and the user can make all his changes and try again. This makes more sense to me.
    So my question is whether there is a way to force the page to run all validators when it is submitted and then display all error messages in an h:messages tag?
    Cheers and thanks a lot
    P.S. I know about and use the hidden-input-field validator hack, which runs all validations, but if I do it that way I duplicate the code which does the social security check for all applicable forms.

    Strange, I was under the impression that all validators were run by default. In my JSF apps, all the validators run and output their errors without any special configuration to make this happen.
    I wonder why yours are running just one at a time. Could you show some of your JSP code, and the hidden field validator code?
    CowKing

  • Stream Apply Problems (Status=DEQUEUE MESSAGES,ABORTED,ENABLE)

    Hello,
    I have configured database-level Streams on 11.1.0.7 on 32-bit Microsoft Windows.
    Sometimes the target site is not reachable (server restart or network outage), and changes are not applied after the target site becomes available again.
    Streams -> Apply -> status shows DEQUEUE MESSAGES or ABORTED.
    Every time, I have to stop and start the Apply process through Enterprise Manager.
    How can I configure the apply process to resume automatically after a server restart, after a network outage is resolved, or after any error occurs?
    Thanks

    The apply process will restart automatically after network outages. However, it will be aborted when there are errors, unless you set the DISABLE_ON_ERROR apply parameter to N.

  • I uninstalled Firefox once, reinstalled it, and it ran. I had a problem, so I did it again. Now it will not run, and I get an error message saying that Firefox is running and you can only run one at a time. I can't figure out what is running.

    Because of a problem, I uninstalled Firefox once, reinstalled it, and it ran. I had another problem, so I uninstalled/reinstalled it again. Now it will not run: I get an error message saying that Firefox is already running and you can only run one at a time. I have uninstalled multiple times and can't figure out what is running. There is only one Firefox installed, and it is not open. What does this mean, and how do I fix it?

    If you use ZoneAlarm Extreme Security then try to disable Virtualization.
    *http://kb.mozillazine.org/Browser_will_not_start_up#XULRunner_error_after_an_update
    See also:
    *[[/questions/880050]]

  • Oracle AQ, messages dequeued after one hour

    Hi,
    We are facing a weird issue in one of our databases.
    In ATESTMED:
    SQL> select count (*) from emx_alarm_qtab;
    COUNT(*)
    26
    SQL> set serveroutput on
    SQL> DECLARE
    2 CURSOR c IS select * from emx_alarm_qtab;
    3 r c%ROWTYPE;
    4 i number;
    5 BEGIN
    6 i:=0;
    7 FOR r in c LOOP
    8 emx_queue.purge_alarm (r.msgid);
    9 i := i + 1;
    10 END LOOP;
    11 dbms_output.put_line ('Count ' || i);
    12 END;
    13 /
    Already purged MsgID=9E18AB1B7F1D3E9EE04083583A6E0BFA
    Already purged MsgID=9E18AB1B7F1E3E9EE04083583A6E0BFA
    Already purged MsgID=9E18AB1B7F1F3E9EE04083583A6E0BFA
    Already purged MsgID=9E18AB1B7F203E9EE04083583A6E0BFA
    Already purged MsgID=9E18AB1B7F213E9EE04083583A6E0BFA
    Already purged MsgID=9E18AB1B7F223E9EE04083583A6E0BFA
    Purged MsgID=9E1B6F30AE954564E04083583B6E1BB0
    Purged MsgID=9E1B6F30AE964564E04083583B6E1BB0
    Purged MsgID=9E1B6F30AE974564E04083583B6E1BB0
    Already purged MsgID=9E1B6F30AE904564E04083583B6E1BB0
    Already purged MsgID=9E1B6F30AE914564E04083583B6E1BB0
    Already purged MsgID=9E1B6F30AE924564E04083583B6E1BB0
    Purged MsgID=9E1B6F30AE984564E04083583B6E1BB0
    Purged MsgID=9E1B6F30AE934564E04083583B6E1BB0
    Purged MsgID=9E1B6F30AE944564E04083583B6E1BB0
    Already purged MsgID=9E1B6F30AE9D4564E04083583B6E1BB0
    Already purged MsgID=9E1B6F6FCDB86BA8E04083583B6E1BBD
    Already purged MsgID=9E1B6F6FCDC06BA8E04083583B6E1BBD
    Already purged MsgID=9E1B6F6FCDC16BA8E04083583B6E1BBD
    Already purged MsgID=9E1B6F6FCDC26BA8E04083583B6E1BBD
    Purged MsgID=9E1B6F6FCDC36BA8E04083583B6E1BBD
    Purged MsgID=9E1B6F6FCDC46BA8E04083583B6E1BBD
    Purged MsgID=9E1B6F6FCDC56BA8E04083583B6E1BBD
    Purged MsgID=9E1B6F6FCDC66BA8E04083583B6E1BBD
    Purged MsgID=9E1B6F6FCDC76BA8E04083583B6E1BBD
    Purged MsgID=9E1B6F6FCDC86BA8E04083583B6E1BBD
    Count 26
    PL/SQL procedure successfully completed.
    SQL> commit;
    Commit complete.
    SQL> select count (*) from emx_alarm_qtab;
    COUNT(*)
    26
    In SOATEST:
    SQL> select count (*) from emx_alarm_qtab;
    COUNT(*)
    12
    SQL> set serveroutput on
    SQL> DECLARE
    2 CURSOR c IS select * from emx_alarm_qtab;
    3 r c%ROWTYPE;
    4 i number;
    5 BEGIN
    6 i:=0;
    7 FOR r in c LOOP
    8 emx_queue.purge_alarm (r.msgid);
    9 i := i + 1;
    10 END LOOP;
    11 dbms_output.put_line ('Count ' || i);
    12 END;
    13 /
    Purged MsgID=9E1A8C9F2CA8ABE1E040B30A2902673D
    Purged MsgID=9E1A8C9F2C90ABE1E040B30A2902673D
    Purged MsgID=9E1A8C9F2C91ABE1E040B30A2902673D
    Purged MsgID=9E1A8C9F2C92ABE1E040B30A2902673D
    Purged MsgID=9E1A8C9F2C93ABE1E040B30A2902673D
    Purged MsgID=9E1A8C9F2C94ABE1E040B30A2902673D
    Purged MsgID=9E1A8C9F2C95ABE1E040B30A2902673D
    Purged MsgID=9E1A8C9F2CA9ABE1E040B30A2902673D
    Purged MsgID=9E1A8C9F2CAAABE1E040B30A2902673D
    Purged MsgID=9E1A8C9F2CABABE1E040B30A2902673D
    Purged MsgID=9E1A8C9F2CACABE1E040B30A2902673D
    Purged MsgID=9E1A8C9F2CADABE1E040B30A2902673D
    Count 12
    PL/SQL procedure successfully completed.
    SQL> commit;
    Commit complete.
    SQL> select count (*) from emx_alarm_qtab;
    COUNT(*)
    0
    In ATESTMED, the rows are not getting dequeued; they get dequeued after one hour.
    aq_tm_processes is already 0 in both databases.
    This started all of a sudden.
    Please help.
    Thanks,
    Rosh

    Is it exactly one hour? Has the timezone changed recently? There is a 10g R2 bug around timezones where the queue table is created in one timezone but isn't changed when the DB moves to a different timezone, so they become out of sync. What does the Time Manager Info value tell you?

  • Hi. I have a problem with my iTunes. Every time I open the program I get the message: an unknown problem is appearing (-42032). Can anybody please tell me what to do with this error? Thanks


    Gary,
    discussions may sometimes be slow for an hour or so (at which point the opening page of discussions will eventually apologize for the inconvenience, back on line soon...) but your description looks like a cache problem.
    Try OnyX freeware to do some cleanup and see if that helps.
    http://www.titanium.free.fr/index.html

  • Exception while dequeuing message

    Hi,
    I am getting an error saying:
    "Exception while dequeuing message : Dequeue error in AQ object, ORA-25215: user_data type and queue type do not match"
    What could the problem be? Please help me with a solution.
    Thanks in advance

    This is the link I am following for enqueuing the message into a queue table; the enqueue happens successfully:
    http://www.oratechinfo.co.uk/aq.html
    I can see the message I enqueued in the queue table with the following query at the scheduled time:
    select user_data from queue_table;
    Below is the C++ code to dequeue the message. In the DequeueObject() function, on the line "msgid = oaq.Dequeue();", control returns to the console and does not proceed further. I am wondering what went wrong.
    // This is a simple program showing how to call the OO4O API from a multithreaded application.
    // Note that every thread has its own OStartup() and OShutdown() routines.
    // PROJECT SETTINGS: under the C/C++ options, make sure the project uses /MT for release
    // or /MTd for debug (NOT /ML or /MLd).
    #include "windows.h"
    #include "stdio.h"
    #include <iostream>
    #include <process.h>
    #include <oracl.h>
    using namespace std;
    OSession osess;
    int DequeueRaw();
    int DequeueObject();
    int main(int argc, char **argv)
    {
        int retVal = 0;
        OStartup(OSTARTUP_MULTITHREADED);
        // Create a session object for each thread. This gives maximum
        // concurrency to the thread execution. This is also useful because an OO4O
        // error reported on the session object for one thread cannot be seen by
        // another thread.
        try
        {
            osess.Open();
            if (!osess.IsOpen())
            {
                cout << "Session not opened: Error: " << osess.GetErrorText() << endl;
                osess.Close();
                OShutdown();
                return -1;
            }
            // retVal = DequeueRaw();
            retVal = DequeueObject();
        }
        catch (OException oerr)
        {
            cout << "Exception while dequeuing message : " << oerr.GetErrorText() << endl;
            retVal = -1;
        }
        return retVal;
    }
    // This function dequeues a message of the default type (string of characters)
    // from the raw_msg_queue.
    // It gets the message priority after dequeuing and checks whether any message
    // with correlation like 'AQ' is available on the queue.
    int DequeueRaw()
    {
        ODatabase odb;
        OAQ oaq;
        OAQMsg oaqmsg;
        OValue msg;
        const char *msgid = 0;
        odb.Open(osess, "MICROSOFT", "OMNIPOS", "OMNIPOS");
        if (!odb.IsOpen())
        {
            cout << "Database not opened: " << odb.GetErrorText() << endl;
            odb.Close();
            return -1;
        }
        // Open the queue
        oaq.Open(odb, "example_queue");
        if (!oaq.IsOpen())
        {
            cout << "AQ not opened: " << oaq.GetErrorText() << endl;
            return -1;
        }
        // Get an instance of the default message (of RAW type)
        oaqmsg.Open(oaq);
        if (!oaqmsg.IsOpen())
        {
            cout << "AQMsg not opened: " << oaqmsg.GetErrorText() << endl;
            return -1;
        }
        // Dequeue a message
        //msgid = oaq.Dequeue();
        //if (msgid)
        //{
        //    // Retrieve the message attributes
        //    oaqmsg.GetValue(&msg);
        //    const char *msgval = msg;
        //    cout << "Message '" << msgval <<
        //        "' dequeued at priority : " << oaqmsg.GetPriority() << endl;
        //}
        // Dequeue a message with correlation like "AQ"
        oaq.SetCorrelate("%AQ%");
        oaq.SetDequeueMode(3);
        msgid = oaq.Dequeue();
        if (msgid)
        {
            // Retrieve the message attributes
            char msgval[101];
            long len = oaqmsg.GetValue(msgval, 100);
            msgval[len] = '\0';
            cout << "Message '" << msgval <<
                "' dequeued at priority : " << oaqmsg.GetPriority() << endl;
        }
        // Close all of the objects
        oaqmsg.Close();
        oaq.Close();
        odb.Close();
        return 0;
    }
    // This function dequeues a message of the user-defined type MESSAGE_TYPE
    // from the msg_queue.
    // It gets the message priority after dequeuing and checks whether any message
    // with correlation like 'OMNIPOS' is available on the queue.
    int DequeueObject()
    {
        ODatabase odb;
        OAQ oaq;
        OAQMsg oaqmsg;
        const char *msgid = 0;
        OValue msg;
        char subject[255];
        char text[255];
        odb.Open(osess, "MICROSOFT", "OMNIPOS", "OMNIPOS");
        if (!odb.IsOpen())
        {
            cout << "Database not opened: " << odb.GetErrorText() << endl;
            odb.Close();
            return -1;
        }
        // Open the queue
        oaq.Open(odb, "example_queue");
        if (!oaq.IsOpen())
        {
            cout << "AQ not opened: " << oaq.GetErrorText() << endl;
            return -1;
        }
        // Get an instance of the UDT MESSAGE_TYPE (check the schema for details)
        oaqmsg.Open(oaq, 1, "MESSAGE_TYPE");
        if (!oaqmsg.IsOpen())
        {
            cout << "AQMsg not opened: " << oaqmsg.GetErrorText() << endl;
            return -1;
        }
        // Dequeue a message with correlation like "OMNIPOS"
        oaq.SetCorrelate("%OMNIPOS%");
        oaq.SetDequeueMode(3);
        msgid = oaq.Dequeue();
        if (msgid)
        {
            // Get the subject and text attributes of the message
            OObject msgval;
            oaqmsg.GetValue(&msgval);
            msgval.GetAttrValue("subject", subject, 255);
            msgval.GetAttrValue("text", text, 255);
            cout << "Message '" << (subject ? subject : "") << "' & Body : '" << text <<
                "' dequeued at priority : " << oaqmsg.GetPriority() << endl;
            msgval.Close();
        }
        msgid = 0;
        oaq.SetNavigation(1);
        oaq.SetCorrelate("");
        // Dequeue a message
        msgid = oaq.Dequeue();
        if (msgid)
        {
            // Get the subject and text attributes of the message
            OObject msgval;
            oaqmsg.GetValue(&msg);
            msgval = msg;
            msgval.GetAttrValue("subject", subject, 255);
            msgval.GetAttrValue("text", text, 255);
            cout << "Message '" << (subject ? subject : "") << "' & Body : '" << text <<
                "' dequeued at priority : " << oaqmsg.GetPriority() << endl;
            msgval.Close();
        }
        // Close all of the objects
        msgid = NULL;
        msg.Clear();
        oaqmsg.Close();
        oaq.Close();
        odb.Close();
        return 0;
    }

  • Dequeue messages queued by capture process

    RDBMS Version: 10.1.0.4
    Operating System and Version: WINDOWS 2003 SERVER
    Error Number (if applicable):
    Product (i.e. SQL*Loader, Import, etc.): Oracle Streams
    Hi,
    Can I do the following.
    I have set up a couple of databases for replication using Oracle Streams in a hub configuration (one central and many local). My problem is that sometimes the connection between them does not work for a significant period of time. That's why I need some tool to move the data.
    So I decided to dequeue the messages which are captured by the capture process, process them with a PL/SQL program, save their payload in a file or external table, and then somehow bring this data to the master server, and the other way round.
    But I can't dequeue the messages, because the agent created for the propagation process has no name, and I can't dequeue messages if I don't have the consumer.
    All the examples I saw on Metalink use messages already propagated by a propagation process.
    So I need some way to propagate my messages other than the propagation process. The task is even more complicated because at the central DB some of the messages have multiple consumers...
    Any suggestions?

    Well, in 10g it is as simple as: create an apply process by adding a table/schema rule that matches the enqueued messages. If you have a capture process for several tables of a schema, then a schema rule for the schema owning those tables will do. Then, with the set_enqueue_destination API, you can add a queue to this rule, which means that the apply process won't apply those messages but will reroute them to your user queue. You can create this queue yourself, but it must be of type 'sys.anydata'.
    The messages enqueued on that queue by the apply process can be dequeued by a user process.
    So just put in an apply process that dequeues the captured LCRs for you and enqueues them on your user queue. Then you can dequeue them.
    Regards,
    Martien

  • Airport Express extends 5 Ghz and 2.4 Ghz or just one at a time?

    I recently switched my dual-band Time Capsule to use different names for the 5 GHz and 2.4 GHz networks.
    Will the AirPort Express extend both, or just one at a time? If just one, which one will it extend?
    I have two AirPort Express units, bought a long time apart. Is there any difference in functionality, and how do I tell them apart if this is the case?
    All are running the most recent firmware version.

    Welcome to the discussion area, Mike!
    Will the Airport Express extend both or just one at a time, if so, which one will it extend?
    The AirPort Express is a single band device, so it can extend either the 2.4 GHz or 5 GHz band, +but not both at the same time+. Since you have different names for the 5 GHz and 2.4 GHz bands, the Express will extend the 2.4 GHz band by default. Using AirPort Utility, the setup application for the AirPorts, it is possible to configure the AirPort Express to extend either band.
    If you are perhaps thinking of extending the 5 GHz band, this can be a bit tricky because 5 GHZ signals do not travel effectively over distance or penetrate obstructions as well as 2.4 GHz signals. You almost have to have a line-of-sight relationship between the Time Capsule and the AirPort Express to be able to extend the 5 GHz band.
    I have two Airport Express modules, bought a long time apart, is there any difference in functionality and how do I tell them apart if this is the case?
    Look on the side of the AirPort Express for the model number in the small print. You'll need Model No A 1264 to be able to "extend". If you have the older Model No A 1084, that version will not be able to "extend" a wireless network, but you could use it to "join" the wireless network and stream AirTunes to the device.
    All are running the most recent firmware version.
    That would be 7.5.2. If you have the older version of the AirPort Express, the latest firmware version for that device would be 6.3.
    Message was edited by: Bob Timmons

  • How do I delete 2000 emails on an iPad 2 without selecting one at a time?

    How can I delete over 2000 emails from my iPad 2 without selecting them one at a time?

    You can't on the iPad itself, unfortunately. Your best bet is to use your computer to mass delete the emails. You can also change how far back your various email accounts sync messages to your iPad. That will significantly reduce the amount of daily email on your device.

  • HT201365 I recently updated my iPad 2 with iOS 7 and I suddenly have 794 emails in my inbox which were previously read and deleted. How do I get rid of them without doing it one at a time?

    I recently updated my iPad 2 to iOS 7 and suddenly have 794 emails in my inbox which were previously reviewed and deleted. How can I get rid of them without doing it one at a time?

    You should be syncing your contacts with an app on your computer or a cloud service (iCloud, Gmail, Yahoo, etc.), and not relying on a backup. If you haven't been doing this, start now and then restore your old backup. You will then be able to sync the new contacts back onto the phone. However, you will lose all messages, etc. newer than the backup.

  • Fetching many records all at once is no faster than fetching one at a time

    Hello,
    I am having a problem getting NI-Scope to perform adequately for my application.  I am sorry for the long post, but I have been going around and around with an NI engineer through email and I need some other input.
    I have the following software and equipment:
    LabView 8.5
    NI-Scope 3.4
    PXI-1033 chassis
    PXI-5105 digitizer card
    DELL Latitude D830 notebook computer with 4 GB RAM.
    I tested the transfer speed of my connection to the PXI-1033 chassis using the niScope Stream to Memory Maximum Transfer Rate.vi found here:
    http://zone.ni.com/devzone/cda/epd/p/id/5273.  The result was 101 MB/s.
    I am trying to set up a system whereby I can press the start button and acquire short, individually triggered waveforms. I wish to acquire these individually triggered waveforms indefinitely, and to maximize the rate at which the triggers occur. In the limiting case where I acquire records of one sample, the record size in memory is 512 bytes (using the formula to calculate 'Allocated Onboard Memory per Record' found in the NI PXI/PCI-5105 Specifications under the heading 'Waveform Specifications', pg. 16). The PXI-5105 trigger re-arms in about 2 microseconds (500 kHz), so to trigger at that rate indefinitely I would need a transfer speed of at least 256 MB/s. So clearly, in this case, the limiting factor for increasing the trigger rate while still acquiring indefinitely is the rate at which I transfer records from memory to my PC.
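The bandwidth arithmetic above can be checked with a few lines (illustrative only; the 512-byte record size and the 101 MB/s measured link throughput are taken from the post):

```java
public class RecordRate {
    // Sustained MB/s needed to stream records of the given size at the given rate.
    static double requiredMBps(long recordBytes, long triggerHz) {
        return recordBytes * (double) triggerHz / 1e6;
    }

    // Highest sustained record rate a link of linkMBps can carry.
    static double maxRecordsPerSec(double linkMBps, long recordBytes) {
        return linkMBps * 1e6 / recordBytes;
    }

    public static void main(String[] args) {
        // 512 bytes allocated per record; ~2 us re-arm -> 500 kHz trigger rate
        System.out.printf("need %.0f MB/s%n", requiredMBps(512, 500_000));
        // 101 MB/s measured with the PXI streaming benchmark
        System.out.printf("link sustains %.0f records/s%n", maxRecordsPerSec(101.0, 512));
    }
}
```

The second figure (just under 200 k records/s) matches the ~200 kHz burst trigger rate mentioned later in the post.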
    To maximize my record transfer rate, I should transfer many records at once using the Multi Fetch VI, as opposed to the theoretically slower method of transferring one at a time. To compare these two methods, I modified the niScope EX Timestamps.vi to let me choose between them by changing the constant wired to the Fetch Number of Records property node to -1 or 1, respectively. I also added a loop that ensures all records are acquired before I begin the transfer, so that acquisition and trigger rates do not interfere with measuring the record transfer rate. This modified VI is attached to this post.
    I have the following results for acquiring 10k records.  My measurements are done using the Profile Performance and Memory Tool.
    I am using a 250kHz analog pulse source.
    Fetching 10000 records one record at a time, the niScope Multi Fetch Cluster takes a total time of 1546.9 milliseconds, or 155 microseconds per record.
    Fetching 10000 records at once, the niScope Multi Fetch Cluster takes a total time of 1703.1 milliseconds, or 170 microseconds per record.
    I have tried this for larger and smaller total numbers of records, and the transfer time is always around 170 microseconds per record, regardless of whether I transfer one at a time or all at once. But with a 100 MB/s link and a 512-byte record size, the fetch speed should approach 5 microseconds per record as you increase the number of records fetched at once.
    With this, my application will be limited to a trigger rate of 5 kHz for running indefinitely, when it should be capable of closer to a 200 kHz trigger rate for extended periods of time. I have a feeling that I am missing something simple or am just confused about how the Fetch functions should work. Please enlighten me.
    Attachments:
    Timestamps.vi ‏73 KB

    Hi ESD,
    Your numbers for testing the PXI bandwidth look good. A value of approximately 100 MB/s is reasonable when pulling data across the PXI bus continuously in larger chunks. This may decrease a little when working with MXI in comparison to using an embedded PXI controller. I expect you were using the streaming example "niScope Stream to Memory Maximum Transfer Rate.vi" found here: http://zone.ni.com/devzone/cda/epd/p/id/5273.
    Acquiring multiple triggered records is a little different. There are a few techniques that will help to make sure that you are able to fetch your data fast enough to keep up with the acquired data or the desired reference trigger rate. You are certainly correct that it is more efficient to transfer larger amounts of data at once, instead of small amounts of data more frequently, as the overhead due to DMA transfers becomes significant.
    The trend you saw, that fetching fewer records was more efficient, sounded odd. So I ran your example and tracked down what was causing it. I believe it is actually the for loop that you had in your acquisition loop. I made a few modifications to the application to display the total fetch time to acquire 10000 records. The best fetch time is when all records are pulled in at once. I left your code in the application but temporarily disabled the for loop to show the fetch performance. I also added a loop to ramp the fetch number up and graph the fetch times. I will attach the modified application, as well as the fetch results I saw on my system, for reference. When the for loop is enabled, the performance was worst at 1-record fetches; the fetch time dipped around 500 records/fetch and began to ramp up again as records/fetch increased to 10000.
    Note that I am using the 2D I16 fetch, as it is more efficient to keep the data unscaled. I have also added an option to use immediate triggering; this is just because I was not near my hardware to physically connect a signal, so I used the trigger holdoff property to simulate a given trigger rate.
    Hope this helps. I was working in LabVIEW 8.5; if you are working with an earlier version, let me know.
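As a rough model of why batching helps (the numbers below are illustrative only, chosen to resemble the ~155 µs single-record figure reported in this thread): if each fetch carries a fixed overhead that is amortized over the batch, the per-record cost falls toward the pure transfer time as the batch grows.

```java
public class FetchModel {
    // Per-record cost when a fixed per-fetch overhead is amortized over a batch.
    static double perRecordMicros(double overheadUs, double perRecordUs, int batch) {
        return overheadUs / batch + perRecordUs;
    }

    public static void main(String[] args) {
        // Assumed figures: ~150 us per-fetch overhead (DMA setup, driver calls),
        // ~5 us payload transfer per 512-byte record over a ~100 MB/s link.
        System.out.printf("batch=1:     %.1f us/record%n", perRecordMicros(150, 5, 1));
        System.out.printf("batch=10000: %.3f us/record%n", perRecordMicros(150, 5, 10000));
    }
}
```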
    Message Edited by Jennifer O on 04-12-2008 09:30 PM
    Attachments:
    RecordFetchingTest.vi ‏143 KB
    FetchTrend.JPG ‏37 KB

  • Spiked CPU when viewing multiple streams one at a time then publishing

    Hi all,
    We're finding that the Flash Player's CPU usage 'ratchets' slightly higher each time a user views a new webcam stream, one at a time, in a WebcamSubscriber.
    For example, I'm watching the stream for user A; then that stream is deleted and I watch the stream for user B; then that stream is deleted and I watch the stream for A, B, or C, and so on. It seems to ratchet higher by the same amount regardless of whether I've seen that particular user's stream before.
    It's only a few percentage points of CPU usage each time, but in the aggregate it can reach 100% usage and crash the Flash Player quickly.
    Some details of our tests:
    - We've managed to contain it so this only happens when the user is also publishing.
    - If a user has been watching for a long time without publishing, then starts publishing, the CPU usage will suddenly spike as soon as they start publishing.  In our tests it spikes as much and more as if they had been publishing the entire time. 
    - If the user starts publishing before they start watching, they're not affected by this spike. 
    - If a user starts publishing but hasn't been watching the streams, the publishing CPU usage is normal. 
    - Refreshing the browser page and publishing again, CPU usage is normal. 
    - Calling System.gc(); while running in the flash player debugger seems to have no effect on the CPU spikes.  Whatever streams are being kept around must still have something pointing to them so they won't be garbage collected. 
    To subscribe to each successive stream, we set
    webcamSubscriber.publisherIDs = [ newStreamID ];
    We've been experimenting for a long time with different settings that might reduce the CPU spikes and ratcheting, but haven't been able to resolve the issue.
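    One pattern worth trying (this is only a sketch, not a confirmed fix: it assumes the LCCS WebcamSubscriber API as used elsewhere in this thread, and `switchToStream` / `_subscriber` are hypothetical names) is to tear the old subscriber down completely, remove its listeners, call close(), and detach it from the display list, before creating a fresh instance for the next stream, instead of reassigning publisherIDs on the live instance:

    ```actionscript
    // Hypothetical helper: fully dispose of the previous subscriber before
    // subscribing to the next stream, so nothing keeps a reference alive
    // and the old NetStream can actually be released.
    protected function switchToStream(newStreamID:String):void
    {
         if (_subscriber != null) {
              _subscriber.removeEventListener(UserEvent.STREAM_CHANGE, onCameraPause);
              _subscriber.removeEventListener(UserEvent.USER_BOOTED, onUserBooted);
              if (_subscriber.parent) {
                   _subscriber.parent.removeChild(_subscriber);
              }
              _subscriber.close();   // release the underlying stream
              _subscriber = null;
         }
         _subscriber = new WebcamSubscriber();
         _subscriber.connectSession = cSession;
         _subscriber.webcamPublisher = webCamPub;
         _subscriber.subscribe();
         _subscriber.publisherIDs = [ newStreamID ];
         addChild(_subscriber);
    }
    ```

    This mirrors the close-then-recreate pattern used in the onChange handler of the sample app later in this thread; if the ratcheting persists even with a full teardown, the leak is likely inside the player or the service rather than in application code.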
    How can we prevent this CPU ratcheting and spiking?
    Thanks very much,
    -Trace

    Is it possible that you have other components that are interfering or spiking your CPU? Also, if possible, can you share your code?
    Would you be able to check this link and see if it hogs your CPU for more subscribers? - http://blogs.adobe.com/arunpon/files/2011/05/WebCameraFinal31.swf
    Code for the app in the link
    <?xml version="1.0" encoding="utf-8"?>
    <mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" layout="absolute" xmlns:rtc="http://ns.adobe.com/rtc">
         <mx:Script>
               <![CDATA[
                    import com.adobe.coreUI.controls.CameraUserBar;
                    import com.adobe.rtc.collaboration.WebcamSubscriber;
                    import com.adobe.rtc.events.CollectionNodeEvent;
                    import com.adobe.rtc.events.SessionEvent;
                    import com.adobe.rtc.events.SharedPropertyEvent;
                    import com.adobe.rtc.events.StreamEvent;
                    import com.adobe.rtc.events.UserEvent;
                    import com.adobe.rtc.messaging.UserRoles;
                    import com.adobe.rtc.sharedManagers.StreamManager;
                    import com.adobe.rtc.sharedManagers.descriptors.StreamDescriptor;
                    import com.adobe.rtc.sharedModel.SharedProperty;

                    protected var _camSubscribers:Object;
                    protected var _currentSubscriber:WebcamSubscriber;
                    protected var _sharedProperty:SharedProperty;

                    /**
                     *  Handler for the stop and start buttons.
                     */
                    protected function startBtn_clickHandler(event:MouseEvent):void
                    {
                         if ( startBtn.label == "Start" ) {
                              webCamPub.publish();
                              startBtn.label = "Stop";
                              if (_camSubscribers && _camSubscribers[cSession.userManager.myUserID]) {
                                   var webcamSubscriber:WebcamSubscriber = _camSubscribers[cSession.userManager.myUserID];
                                   smallSubscriberContainer.addChild(webcamSubscriber);
                              }
                         } else if (startBtn.label == "Stop" ) {
                              webCamPub.stop();
                              startBtn.label = "Start";
                         }
                    }

                    /**
                     * SynchronizationChange event handler. Initialize the Shared property used to sync the Subscriber info
                     * who would be the centre of the app.
                     */
                    protected function cSession_synchronizationChangeHandler(event:Event):void
                    {
                         if (cSession.isSynchronized) {
                              _sharedProperty = new SharedProperty();
                              _sharedProperty.isSessionDependent = true;
                              _sharedProperty.sharedID = "webcamShare2";
                              _sharedProperty.connectSession = cSession;
                              _sharedProperty.subscribe();
                              _sharedProperty.addEventListener(SharedPropertyEvent.CHANGE, onChange);
                              _camSubscribers = new Object();
                              cSession.streamManager.addEventListener(StreamEvent.STREAM_RECEIVE, onStreamRecieved);
                              cSession.streamManager.addEventListener(StreamEvent.STREAM_DELETE, onStreamDelete);
                              addExistingStreamers();
                         }
                    }

                    /**
                     *  Set up a thumbnail subscriber for every new camera stream.
                     */
                    protected function onStreamRecieved(p_evt:StreamEvent):void
                    {
                         if (p_evt.streamDescriptor.type == StreamManager.CAMERA_STREAM) {
                              setUpfromDescriptor(p_evt.streamDescriptor);
                         }
                    }

                    /**
                     * Clicking a subscriber updates the shared value, which in turn enlarges the thumbnail after getting updated.
                     */
                    protected function onClick(p_evt:MouseEvent):void
                    {
                         if ( (p_evt.currentTarget is WebcamSubscriber) && !(p_evt.target.parent is CameraUserBar)) {
                              _sharedProperty.value = (p_evt.currentTarget as WebcamSubscriber).publisherIDs;
                         }
                    }

                    /**
                     * Clean up when a user stops publishing his camera or exits his app.
                     */
                    protected function onStreamDelete(p_evt:StreamEvent):void
                    {
                         if (p_evt.streamDescriptor.type == StreamManager.CAMERA_STREAM) {
                              if ( _camSubscribers[p_evt.streamDescriptor.streamPublisherID]) {
                                   var webcamSubscriber:WebcamSubscriber = _camSubscribers[p_evt.streamDescriptor.streamPublisherID];
                                   if (webcamSubscriber) {
                                        smallSubscriberContainer.removeChild(webcamSubscriber);
                                   }
                                   if (p_evt.streamDescriptor.streamPublisherID != cSession.userManager.myUserID) {
                                        webcamSubscriber.removeEventListener(UserEvent.STREAM_CHANGE, onCameraPause);
                                        webcamSubscriber.removeEventListener(UserEvent.USER_BOOTED, onUserBooted);
                                        delete _camSubscribers[p_evt.streamDescriptor.streamPublisherID];
                                        webcamSubscriber.close();
                                        webcamSubscriber = null;
                                   } else {
                                        if (_currentSubscriber && _currentSubscriber.publisherIDs[0] == cSession.userManager.myUserID) {
                                             _sharedProperty.value = null;
                                        }
                                   }
                              }
                         }
                    }

                    /**
                     * Logic for handling the Pause event on CameraUserBar on every Subscriber.
                     */
                    protected function onCameraPause(p_evt:UserEvent):void
                    {
                         var userStreams:Array = cSession.streamManager.getStreamsForPublisher(p_evt.userDescriptor.userID, StreamManager.CAMERA_STREAM);
                         if (userStreams.length == 0) {
                              trace("onCameraPause: no userStreams");
                              return;
                         }
                         for (var i:int = 0; i < userStreams.length; i++ ) {
                              if (userStreams[i].type == StreamManager.CAMERA_STREAM ) {
                                   break;
                              }
                         }
                         var streamDescriptor:StreamDescriptor = userStreams[i];
                         if ( streamDescriptor.streamPublisherID == cSession.userManager.myUserID ) {
                              cSession.streamManager.pauseStream(StreamManager.CAMERA_STREAM, !streamDescriptor.pause, streamDescriptor.streamPublisherID);
                         }
                    }

                    /**
                     * Initial set up of all users who are streaming when this app launches.
                     */
                    protected function addExistingStreamers():void
                    {
                         var streamDescritpors:Object = cSession.streamManager.getStreamsOfType(StreamManager.CAMERA_STREAM);
                         for (var i:String in streamDescritpors) {
                              setUpfromDescriptor(streamDescritpors[i]);
                         }
                    }

                    /**
                     * Helper method to create a thumbnail subscriber.
                     */
                    protected function setUpfromDescriptor(p_descriptor:StreamDescriptor):void
                    {
                         if (! _camSubscribers[p_descriptor.streamPublisherID]) {
                              var webCamSubscriber:WebcamSubscriber = new WebcamSubscriber();
                              webCamSubscriber.connectSession = cSession;
                              webCamSubscriber.addEventListener(UserEvent.STREAM_CHANGE, onCameraPause);
                              webCamSubscriber.addEventListener(UserEvent.USER_BOOTED, onUserBooted);
                              webCamSubscriber.webcamPublisher = webCamPub;
                              webCamSubscriber.subscribe();
                              webCamSubscriber.sharedID = p_descriptor.streamPublisherID;
                              webCamSubscriber.publisherIDs = [p_descriptor.streamPublisherID];
                              webCamSubscriber.height = webCamSubscriber.width = 180;
                              webCamSubscriber.addEventListener(MouseEvent.CLICK, onClick);
                              smallSubscriberContainer.addChild(webCamSubscriber);
                              _camSubscribers[p_descriptor.streamPublisherID] = webCamSubscriber;
                         }
                    }

                    /**
                     * This method is the listener to the SharedPropertyEvent.CHANGE event. It updates the centred subscriber as the
                     * shared value changes.
                     */
                    protected function onChange(p_evt:SharedPropertyEvent):void
                    {
                         if ( _currentSubscriber != null ) {
                              _currentSubscriber.removeEventListener(UserEvent.USER_BOOTED, onUserBooted);
                              _currentSubscriber.removeEventListener(UserEvent.STREAM_CHANGE, onCameraPause);
                              centeredSubscriber.removeChild(_currentSubscriber);
                              _currentSubscriber.close();
                              _currentSubscriber = null;
                         }
                         if ( _sharedProperty.value == null || _sharedProperty.value.length == 0 ) {
                              return;
                         }
                         _currentSubscriber = new WebcamSubscriber();
                         _currentSubscriber.connectSession = cSession;
                         _currentSubscriber.subscribe();
                         _currentSubscriber.webcamPublisher = webCamPub;
                         _currentSubscriber.publisherIDs = _sharedProperty.value;
                         _currentSubscriber.addEventListener(UserEvent.USER_BOOTED, onUserBooted);
                         _currentSubscriber.addEventListener(UserEvent.STREAM_CHANGE, onCameraPause);
                         _currentSubscriber.width = _currentSubscriber.height = 500;
                         centeredSubscriber.addChild(_currentSubscriber);
                    }

                    /**
                     * Logic for handling the Close event on CameraUserBar on every Subscriber.
                     */
                    protected function onUserBooted(p_evt:UserEvent=null):void
                    {
                         var tmpFlag:Boolean = false;
                         if (_currentSubscriber && _currentSubscriber.publisherIDs[0] == p_evt.userDescriptor.userID) {
                              if (_currentSubscriber.parent) {
                                   _currentSubscriber.removeEventListener(UserEvent.USER_BOOTED, onUserBooted);
                                   _currentSubscriber.removeEventListener(UserEvent.STREAM_CHANGE, onCameraPause);
                                   _currentSubscriber.close();
                                   _currentSubscriber.parent.removeChild(_currentSubscriber);
                                   _currentSubscriber = null;
                                   _sharedProperty.value = null;
                              }
                              tmpFlag = true;
                         }
                         if ( _camSubscribers[p_evt.userDescriptor.userID]) {
                              var webcamSubscriber:WebcamSubscriber = _camSubscribers[p_evt.userDescriptor.userID];
                              tmpFlag = true;
                         }
                         if (tmpFlag) {
                              webCamPub.stop();
                              startBtn.label = "Start";
                         }
                    }
               ]]>
         </mx:Script>
         <!--
         You would likely use external authentication here for a deployed application;
         you would certainly not hard code Adobe IDs here.
         -->
         <rtc:AdobeHSAuthenticator
              id="auth"
              userName="Your Username"
              password="Your password" />
         <rtc:ConnectSessionContainer id="cSession" authenticator="{auth}" width="100%" height="100%" roomURL="Your RoomUrl">
              <mx:VBox id="rootContainer" width="100%" height="800" horizontalAlign="center">
                   <rtc:WebcamPublisher width="1" height="1" id="webCamPub"/>
                   <mx:VBox width="500" height="500" id="centeredSubscriber" horizontalAlign="center" verticalAlign="middle"/>
                   <mx:Label text="Click on a Subscriber thumbnail to make it bigger." />
                   <mx:HBox width="100%" height="200" horizontalAlign="center" verticalAlign="top" id="smallSubscriberContainer" creationComplete="cSession_synchronizationChangeHandler(event)"/>
                   <mx:Button  id="startBtn" label="Start"  click="startBtn_clickHandler(event)" height="20"/>
              </mx:VBox>
         </rtc:ConnectSessionContainer>
    </mx:Application>
    Thanks
    Arun
