Simple single consumer dequeue

I want to ensure that a single process is the dequeuing client. Can I just submit a job with a dequeue in an infinite loop? By default the dequeue will wait for a message to be enqueued, at which point it will dequeue, process, and loop back looking for another message. Is this a standard kind of approach? If I need another dequeue process, I just submit a second one of these infinite-loop dequeue jobs to be a second processor. Are similar approaches used to ensure only "X" messages are being worked on at a given time? Any thoughts would be appreciated.

Have you considered using a single consumer queue?
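    Yes, a blocking dequeue in an infinite loop per worker job is the standard shape, and "X" concurrent processors is just X such jobs. Oracle aside, the pattern can be sketched in Python, with an in-process queue standing in for AQ (the names and the doubling "work" are illustrative):

```python
import queue
import threading

task_q = queue.Queue()

def worker(results):
    # Blocks until a message arrives, processes it, then loops back --
    # the same shape as a DBMS_AQ.DEQUEUE call with an infinite wait.
    while True:
        msg = task_q.get()          # blocking dequeue
        if msg is None:             # poison pill shuts the worker down
            break
        results.append(msg * 2)     # stand-in for "process the message"
        task_q.task_done()

# "X" concurrent processors == X worker jobs, each running its own loop.
results = []
workers = [threading.Thread(target=worker, args=(results,)) for _ in range(2)]
for w in workers:
    w.start()
for msg in [1, 2, 3]:
    task_q.put(msg)
task_q.join()                       # wait until every message is processed
for _ in workers:
    task_q.put(None)
for w in workers:
    w.join()
print(sorted(results))              # -> [2, 4, 6]
```

    In AQ terms, each worker thread corresponds to one submitted job whose DBMS_AQ.DEQUEUE waits (the default wait is DBMS_AQ.FOREVER) until a message arrives.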

Similar Messages

  • Creating job to listen to single consumer queue

    I have code that will allow a job to listen on a multiple consumer queue, but I was wondering how to change it for a single consumer queue. The code I currently have for the multiple consumer case looks like this:
         DBMS_SCHEDULER.DEFINE_METADATA_ARGUMENT(
              program_name => 'P_ToDBlocking_Implement'
              ,argument_position => 1
              ,metadata_attribute => 'EVENT_MESSAGE');
         dbms_scheduler.enable('P_ToDBlocking_Implement');
         DBMS_SCHEDULER.CREATE_EVENT_SCHEDULE (
              schedule_name => 'S_ToDBlocking_Implement'
              ,start_date => systimestamp
              ,event_condition => 'corrid=''NOTIFY'''
              ,queue_spec => 'event_msg_q');
    Maybe I am thinking about this wrong. How could I get a consumer on a single consumer queue to listen to the queue and process all messages that come in?

    Why don't you step back, post your full version number, and explain what the business process is. In other words what it is you are trying to achieve.
    When discussing business processes you don't write code and you don't name methodologies. You say things like:
    An end user adds a new row into a table and when that happens I want a job to run .... or something like that.
    Without a context any answer is pure guesswork.

  • Simple Single-Source Replication (error occurred when looking up remote obj)

    Dears
    I am following the "Simple Single-Source Replication Example"
    (http://download.oracle.com/docs/cd/B19306_01/server.102/b14228/repsimpdemo.htm#g1033597)
    In Configure Capture, Propagation, and Apply for Changes to One Table; when I set the instantiation SCN for the hr.jobs table at str2.net with the following procedure:
    SQL> DECLARE
           iscn NUMBER; -- Variable to hold instantiation SCN value
         BEGIN
           iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
           DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN@STR2.NET(
             source_object_name   => 'scott.emp',
             source_database_name => 'str1.net',
             instantiation_scn    => iscn);
         END;
         /
    DECLARE
    ERROR at line 1:
    ORA-04052: error occurred when looking up remote object
    DBMS_APPLY_ADM@STR2.NET
    ORA-00604: error occurred at recursive SQL level 1
    ORA-02085: database link STR2.NET connects to
    ORCL.REGRESS.RDBMS.DEV.US.ORACLE.COM
    Please also advise where I should execute the above (STR1.NET or STR2.NET).
    regards;

    Good morning Lenin,
    Have you checked the implementation of the connectors and are your locations well registered?
    Has a similar setup ever worked well?
    Can you access the source table using SQL (e.g. with SQL*Plus or TOAD)?
    Regards, Patrick

  • Simple single point analog output with NI-DAQmx in VC++ 6.0

    Specs: NI-DAQmx 7, Visual Studio C++ 6.0, PCI-6722 (8-channel AO).
    We have a very simple application: set a voltage (on 6 different channels) and hold it until we want it changed again, performing the change very quickly in response to an image-capturing algorithm. So I don't need any waveforms or buffering.
    In this forum post http://forums.ni.com/ni/board/message?board.id=231&message.id=3283&query.id=18094 you talk about an AOOnePoint example, but I get an error that the NI-DAQ driver does not support my device.
    I may need to use NI-DAQmx, but how? I have a working sample, but it seems a bit slow and certainly overkill for this simple application:
    // Link with \DAQmx ANSI C Dev\lib\msvc\NIDAQmx.lib
    #include "NIDAQmx.h"
    void SetVoltage( double voltage )
    {
        DAQmxErrChk (DAQmxCreateTask("",&taskHandleAnalog));
        DAQmxErrChk (DAQmxCreateAOVoltageChan(taskHandleAnalog,sChannel,"",m_MinVoltage,m_MaxVoltage,DAQmx_Val_Volts,NULL));
        DAQmxErrChk (DAQmxCfgSampClkTiming(taskHandleAnalog,"",Freq,DAQmx_Val_Rising,DAQmx_Val_ContSamps,NUMBER_OF_AO_SAMPLES));
        DAQmxErrChk (DAQmxWriteAnalogF64(taskHandleAnalog,NUMBER_OF_AO_SAMPLES,0,1.0,DAQmx_Val_GroupByChannel,data,&written,NULL));
        DAQmxErrChk (DAQmxStartTask(taskHandleAnalog));
    }

    Sorry about this multi posting, I don't know how to delete it
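    For what it's worth, the usual NI-DAQmx answer for static levels is software-timed ("on demand") output: create the task and AO channel once at startup, skip DAQmxCfgSampClkTiming entirely, and call DAQmxWriteAnalogF64 with one sample per channel whenever the level should change. A minimal sketch of that configure-once/write-on-demand structure (in Python, with the hardware calls stubbed out; the class and channel names are illustrative, not DAQmx API):

```python
class AnalogOutput:
    """Configure once, write on demand -- no sample clock, no buffer.

    Stand-in for a DAQmx task created at startup (CreateTask +
    CreateAOVoltageChan, *without* CfgSampClkTiming) and reused for
    every software-timed single-point write.
    """
    def __init__(self, channels):
        # The "task" is created once and kept open for the program's life.
        self.levels = {ch: 0.0 for ch in channels}

    def set_voltage(self, channel, volts):
        # On-demand single-point write: just update the output level.
        self.levels[channel] = volts
        return self.levels[channel]

ao = AnalogOutput(["ao0", "ao1"])     # startup: configure once
ao.set_voltage("ao0", 2.5)            # fast path: one write per change
print(ao.levels["ao0"])               # -> 2.5
```

    Keeping the task open between writes avoids the per-call task creation overhead that makes the sample above slow.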

  • When printing a simple single photo, my picture looks fine until I press the print button; then the print preview looks zoomed in and cuts off part of my original picture. This happens with every picture I try to print.

    When I press print, the print preview looks like it crops or zooms my original picture and cuts it off, so it only prints a zoomed-in partial picture. Anyone else have this problem?

    This usually occurs because you're printing a photo in an Aspect Ratio different from the actual shot. The Aspect Ratio is the shape of the photo, expressed as the length x breadth.
    So, if you have - for instance - a 4:3 shot and try to print that at 4:6 you will have issues.
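    The crop can be quantified: to fill paper of a different aspect ratio, one dimension of the image has to be trimmed. A quick check (assuming the printer scales the photo to fill the paper):

```python
def cropped_fraction(image_ratio, print_ratio):
    """Fraction of the image lost when filling a print of a different
    aspect ratio (both ratios given as width / height)."""
    wide = max(image_ratio, print_ratio)
    narrow = min(image_ratio, print_ratio)
    return 1 - narrow / wide

# A 4:3 shot printed to fill 6x4 (3:2) paper loses about 11% of the image.
loss = cropped_fraction(4 / 3, 6 / 4)
print(round(loss, 3))   # -> 0.111
```

    Matching the crop ratio to the paper before printing avoids the surprise zoom.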

  • Powershell can't find AD-user with simple single commandline

    I have multiple users called Anna, but when I use the following PowerShell command line only two of the three show up in the PS prompt.
    Get-ADUser -filter {givenName -eq "anna"} I also tried
    Get-ADUser -filter {givenName -like "anna"} and it still only gives me two of the three users called Anna.
    I don't get any errors, and it gives user info for the other two called Anna. I have checked that the last one also exists and isn't disabled.
    What is wrong here?
    I run PowerShell from my own Win 7 32-bit workstation in an elevated prompt. I have enabled PS remoting and WinRM and also ran the WSMan command so I can remote to a server or PC via PowerShell. I have also created a local PS profile
    that always imports the ActiveDirectory module.

    You're right, she was created with her first name and middle name in the AD first-name field.
    But when I used your suggestion and added a * after her firstname she popped up. :)
    This is what I wrote: 
    Get-ADUser -filter {samaccountname -like "anna *"} | FL SamAccountName, SurName, GivenName
    The * after her firstname gave me the results I wanted. Thanks for the help.
    Simon

  • Best way to handle pessimistic locking after dequeueing

    Dear all,
    we have a simple single-consumer scenario where multiple worker
    processes listen to one task queue. Every task is specific to exactly one business
    object that is pessimistically locked (SELECT FOR UPDATE) as soon as
    the AQ message has been fully parsed by a worker.
    Obviously, this may lead to unnecessary waiting. For example, when
    there are 5 successive messages belonging to the same business object,
    having them dequeued by 5 worker processes in parallel will result
    in 1 worker processing its task and 4 workers waiting.
    To me, this seems to be a common use case. How is it "usually"
    handled?
    Best regards,
    Michael

    Hi Paul!
    Thanks a lot for your answers that would completely solve
    my problem if I had another setting :-) . I try to clarify
    things by answering your questions (slightly reordered).
    I have to apologize in advance that I take my description to the
    next level of detail and hope not to deter you too much.
    3. Why is this a single consumer Q when you have multiple business
    areas hanging off it? It would seem an ideal candidate for having
    a subscriber per business area and then subsequently a worker per subscriber.
    Instead of a multi-subscriber setting, we employ multiple
    queues for different business areas and do some "queue routing"
    in our middle-ware. We wanted the different areas to become as
    detangled as possible (maybe that was unnecessary).
    2. Why lock the database object before processing the message?
    Can't you just rely on Oracle's internal row locking mechanisms?
    Do you lock the whole table? If so, why?
    First of all, we only lock one row that designates the "coarse-grained lock".
    I think you will understand this with the description below.
    We have one component that orchestrates our workflows and updates the
    states of our business objects. Let's assume, our basic "business object container"
    was called 'stream'. Different streams are guaranteed to be independent (and hence may be
    concurrently processed). So, a SELECT FOR UPDATE on a stream is enough
    to pessimistically lock the whole business object.
    Such a stream contains a complex graph of sub-objects. When the response
    of, e.g. a 'youtube syncer' component, comes in, the state of many
    objects in this graph may change (which we obviously want to do within a transaction).
    E.g. when all children of a node become READY,
    this node becomes READY which, in turn, may yield its
    parent READY.
    The problem with ordinary row-locking is now: responses for one stream tend to cluster,
    i.e. sometimes there are 100 responses in the queue for one stream, and as state
    updates often have large consequences for the whole state graph, blocking
    (or NOWAIT exceptions) are the rule not the exception, resulting in performance that
    is even worse than coarse-grained locking.
    1. Why have multiple processes DQ'ing from this Q?
    Is the message volume too great for one worker?
    Some tasks are long-running, like transferring large files to remote hosts.
    With workers running on different hosts and networks we have nice
    load balancing.
    Even the short-running jobs often take >1s of CPU time, and batch
    jobs may generate several million AQ messages that we process
    in a cluster.
    Ok, if you reach this line, you certainly deserve my thankfulness :-)
    Best regards,
    Michael
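    One common way out of this is to stop competing for messages of the same stream at all: a dispatcher dequeues and routes each message by its stream id to a fixed worker, so a burst of 100 responses for one stream is handled serially by one worker while unrelated streams proceed in parallel, and the SELECT FOR UPDATE rarely blocks. A toy sketch of that routing step (Python, in-process queues standing in for AQ; the hash-to-bucket scheme is one illustrative choice):

```python
import queue
import zlib

def route(messages, n_workers):
    """Hash each message's stream id to a worker queue, so all messages
    of one stream land on the same worker (serial per stream, in order)
    while different streams can still be processed in parallel."""
    buckets = [queue.Queue() for _ in range(n_workers)]
    for stream_id, payload in messages:
        idx = zlib.crc32(stream_id.encode()) % n_workers  # stable hash
        buckets[idx].put((stream_id, payload))
    return buckets

# A burst of responses for stream "s1" plus one message for "s2":
msgs = [("s1", 1), ("s2", 2), ("s1", 3), ("s1", 4)]
buckets = route(msgs, 4)
per_bucket = [list(b.queue) for b in buckets]
# All "s1" messages sit in a single bucket, in arrival order.
s1 = [p for b in per_bucket for sid, p in b if sid == "s1"]
print(s1)   # -> [1, 3, 4]
```

    The trade-off is that a single hot stream is limited to one worker's throughput, which is exactly what the coarse-grained lock enforces anyway.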

  • One producer two consumer loops, dequeue same element in both consumers

    Hi!
    What is the best way for the following:
    I enqueue data in my producer loop. I need this data dequeued in both of my consumer loops, but I want to dequeue the same element in both loops.
    Of course, if I put dequeues in both loops, then the second consumer loop will lose the odd elements and the first consumer loop will lose the even elements.
    thanks!

    Blokk wrote:
    Hi!
    What is the best way for the following:
    I enqueue data in my producer loop. I need this data dequeued in both of my consumer loops, but I want to dequeue the same element in both loops.
    Of course, if I put dequeues in both loops, then the second consumer loop will lose the odd elements and the first consumer loop will lose the even elements.
    thanks!
    Makes very little sense; that is, the problem is stated in such a way as to preclude an informative response.
    GerdW wrote:
    either create two queues or use notifiers...
    Best regards,
    GerdW
    Usually, Gerd gives good advice, but on this one I'm going to pick on him just a bit. I bet he rushed in a tad without thinking about the premise of the OP's question. Gerd, I don't mean to sound mean; my apologies.
    The question presupposes that to use a queue element twice in parallel it must be read twice. This is false and led to less-than-optimal advice. What about a template like the one shown in this snippet?
    We can certainly dequeue once and spawn as many independent actions as we need within a single consumer loop! Much more scalable than creating a queue for each action.
    Jeff
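    Jeff's point generalizes: dequeue each element exactly once, then either hand it to parallel actions inside that one consumer, or (GerdW's variant) have the producer copy it into a dedicated queue per consumer so neither loop steals elements from the other. The two-queue variant looks like this (Python sketch; in LabVIEW the equivalent is one Enqueue Element per consumer queue in the producer loop):

```python
import queue

producer_data = [10, 20, 30]
consumer_a, consumer_b = queue.Queue(), queue.Queue()

# Fan out: the producer enqueues every element into BOTH queues, so each
# consumer sees the full sequence instead of alternating elements.
for item in producer_data:
    consumer_a.put(item)
    consumer_b.put(item)

seen_a = [consumer_a.get() for _ in producer_data]
seen_b = [consumer_b.get() for _ in producer_data]
print(seen_a == seen_b == [10, 20, 30])   # -> True
```

    With a single shared queue, each get() removes the element, which is exactly why the two consumers were losing alternate elements.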

  • How to dequeue messages that were enqueued when app was offline

    I have 2 questions. The following is the scenario:
    I have 2 different processes, Process A and Process B.
    Process A enqueues messages in the message queue.
    Process B dequeues messages from the message queue.
    1) Process B shuts down for some time but Process A continues to enqueue messages. When Process B comes back up, how do I dequeue the messages that Process A posted while Process B was offline?
    2) The queue I am using is a multiple consumer queue, as there needs to be more than one Process B to dequeue messages. The logic behind the design is that if one Process B dies, the others can continue to process. At the same time, if one instance of Process B has picked up a message, it should notify the other Process Bs not to process that message.
    I couldn't find any samples. Any help is greatly appreciated.

    Hello,
    The messages that process A enqueues and are not consumed while process B is down will remain in the queue until process B restarts.
    It sounds as though you don't need to use a multi-consumer queue as all you appear to want to do is consume the messages as quickly as possible. Is that correct? If it is you could have multiple process Bs consuming messages from the same single consumer queue. Messages that one process is consuming will not be available to another and this is handled internally.
    You can also have multiple processes consuming messages associated with the same consumer on a multi-consumer queue, and the same applies.
    Or you could have multiple processes on a multi-consumer queue associated with different subscribers which are all interested in the same message.
    What you use depends on what your design requires but each message will be consumed only once in the case of a single consumer queue and only once per subscriber/recipient in a multi-consumer queue.
    Thanks
    Peter
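    The distinction Peter draws can be made concrete: competing consumers share one queue and each message is delivered once overall, while subscribers each get their own copy. A compact sketch (Python, in-process queues standing in for AQ; the subscriber names are illustrative):

```python
import queue

messages = ["m1", "m2", "m3", "m4"]

# Competing consumers: one single-consumer queue, many worker processes.
# Each message is handed to exactly one worker -- handled internally,
# just as with multiple Process Bs on a single consumer queue.
shared = queue.Queue()
for m in messages:
    shared.put(m)
worker_1 = [shared.get(), shared.get()]
worker_2 = [shared.get(), shared.get()]
print(sorted(worker_1 + worker_2))   # -> ['m1', 'm2', 'm3', 'm4']

# Subscribers: one queue per subscriber; every subscriber sees every
# message -- once per subscriber, like a multi-consumer queue.
subs = {"billing": queue.Queue(), "audit": queue.Queue()}
for m in messages:
    for q in subs.values():
        q.put(m)
print(all(list(q.queue) == messages for q in subs.values()))   # -> True
```

    The first shape answers question 2 directly: no extra notification is needed, because a message one worker has taken is simply not available to the others.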

  • Best Practice when deploying a single mdb into a cluster

    At a high level, we are converting all of our components to Weblogic processes that use Stateless Session Beans and Message Driven Beans. All of the weblogic processes will be clustered, and all of the topic queues will be distributed (Uniform Distributed Topics / Queues).
    We have one component that is a single MDB reading from a single queue on one machine. It is a requirement that the JMS messages on that queue be processed in order, and the processing of messages frequently requires that the same row in the DB be updated. Does anyone have any thoughts on the best design for that in our clustered environment?
    One possible solution we have come up with (not working):
    Possible Solution 1: Use a distributed topic and enforce a single client via client-id on the connection factory, causing a single consumer.
    1. Deploy a uniform-distributed Topic to the cluster.
    2. Create a connection factory with a client-id.
    3. Deploy a single FooMDB to the cluster.
    Problem with Solution 1: WL allows multiple consumers on the Topic with the same client-id.
    1. Start 2 servers in the cluster.
    2. FooMDB running on Server_A connects to the Topic.
    3. FooMDB running on Server_B fails with a unique-id exception (yeah).
    4. Send messages - messages are processed only once, by FooMDB on Server_A (yeah).
    5. Stop Server_A.
    6. FooMDB running on Server_B connects automatically to the Topic.
    7. Send messages - messages are processed by FooMDB on Server_B (yeah).
    8. Start Server_A.
    9. FooMDB successfully connects to the Topic, even though FooMDB on Server_B is already connected (bad). Is this a WL bug or our config bug?
    10. Send messages - messages are processed by both FooMDB on Server_A and Server_B (bad). Is this a WL bug or our config bug?
    Conclusion: Does anyone have any thoughts on the best design for that in our clustered environment? And if the above solution is doable, what mistake might we have made?
    Thank you in advance for your help!
    kb

    Thanks for the helpful info Tom.
    Kevin - it seems that for both the MDB and the JMS provider there are (manual or scripted) actions to be taken during any failure event, plus failure probes possibly required to launch these actions...?
    In the case of the JMS provider, the JMS destination needs to be migrated in the event of managed-server or host failure; if this host is the one that also runs the Admin server, then the Admin server also needs to be restarted on a new host, so that it can become available to receive the migration instructions and thus update the config of the managed server which is to be newly targeted to serve the JMS destination.
    In the case of the MDB, a deployment action of some sort would need to take place on another managed server in the event of a failure of the managed server or host where the original MDB had been initially deployed.
    The JMS destination migration actions can be totally avoided by using another JMS implementation which has a design philosophy of "failover" built into it (for example, Tibco EMS has totally automatic JMS failover features) and could be accessed gracefully by using WebLogic foreign JMS. The single MDB deployed on one of the WebLogic managed servers in the cluster would still need some kind of (possibly scripted) redeployment action, and on top of this there would need to be some kind of health-check process to establish whether this redeployment action actually needed to be launched. It is possible that the logic and actions required just to establish the true functional health of this MDB could themselves be as difficult as the original design requirement :-)
    All of this suggests that the BEA environment is not well suited to the given requirement, and if no other environment or JMS provider is available at your site, then the process itself may need reworking so it can be handled in a highly available way that can be gracefully administered in a WebLogic cluster.
    We have not discussed the message payload design and the reasons that message order must be respected - by changing the message payload design and possibly adding additional data, this requirement "can", "in certain circumstances", be avoided.
    If you can't do that, I suggest you buy a 2-node Sun Cluster with shared HA storage and use it to monitor a simple JMS client Java program that periodically checks for items on the queue. The Tibco EMS servers could also be configured on this platform and give totally automatic failover protection for both process and host failure scenarios. With the spare money we can go to the pub.
    P.S. I don't work for Tibco or Sun and am a BIG WebLogic fan :-)
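    Container tricks with client-id aside, the generic way to get "exactly one active consumer, with warm standbys" is an exclusive lock that every candidate tries to take: only the holder dequeues, and a standby takes over when the holder releases the lock (or dies and the lock expires). A minimal in-process sketch of that election (Python; in a real cluster the lock would be an external resource such as a database row taken with SELECT FOR UPDATE, so it is released automatically on failure):

```python
import threading

leader_lock = threading.Lock()

def try_become_consumer(name, active):
    # Non-blocking acquire: exactly one candidate wins and may consume.
    if leader_lock.acquire(blocking=False):
        active.append(name)
        return True
    return False    # standby: retry later, ready for failover

active = []
print(try_become_consumer("Server_A", active))   # -> True
print(try_become_consumer("Server_B", active))   # -> False (standby)

leader_lock.release()                            # Server_A "fails"
print(try_become_consumer("Server_B", active))   # -> True (failover)
print(active)                                    # -> ['Server_A', 'Server_B']
```

    This preserves message ordering because only the lock holder ever reads the queue, while the standbys give the availability the cluster is meant to provide.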

  • Multiple consumers dequeue

    I use a rule-based subscription to enqueue messages in a queue, because I need to specify which consumers can dequeue the message, but I also need the message to be dequeued by a single consumer.
    In other words, while enqueuing I specify the consumers which can dequeue the message, but only one of them may actually dequeue it - whichever fetches it first.
    Does anyone know a solution to this problem?
    Thanks a lot
    Antonio

    So it seems the solution is to explicitly assign the message to a specific consumer during enqueuing, using the recipient_list parameter of the message properties.
    Anyway, thanks for the answer.
    Bye
    Antonio

  • Running subVI in parallel with itself using producer-consumer pattern

    I'm using the producer-consumer pattern to create what I call threads in response to various events, much like in Producer Consumer with Event Structure and Queues.
    I have several physical instruments and would like to run the exact same test in parallel on each of them. I have a subVI that takes as inputs a VISA resource and a few control references, among other things, and then performs the desired experiment. I have a separate consumer loop for each physical instrument, with this subVI inside each consumer loop.
    My test VI worked great; each consumer loop was a simple while loop incrementing a numeric indicator (not using my real subVI above). However, my real program can only run one consumer loop at a time, much to my dismay. Reworking my simple test VI to use a subVI to increment the indicator, rather than explicitly coding everything, resulted in the same problem: only a single consumer loop ran at a time (and as I stopped the running loop, another would get a chance to begin). The subVI in this case was extremely simple, taking only a ref to the indicator and a ref to a boolean to stop, and incrementing the indicator in a while loop.
    Is there a way around this? Now that I've spent the time making a nice subVI to do the entire experiment on one physical instrument, I thought it would be fairly trivial to extend this to control multiple instruments performing the same experiment in parallel. It seems only one instance of a given subVI may run at a time. Is this true? If it is indeed true, what other options do I have? I have little desire to recode my subVI to manually handle multiple physical instruments; this would also result in a loss of functionality, as all parallel experiments would run more or less in lock step without much more complexity.
    Thank you.

    You need to make your subVI reentrant. Then it can run several instances at any time, with each instance occupying its own unique memory space. Click File - VI Properties - Execution, check the reentrant execution checkbox, and save the VI.
    - tbob
    Inventor of the WORM Global

  • Dequeue problem using deq_condition

    Hello All,
    I have a payload that is a named object type:
    type x_data_fragments AS OBJECT (
    df_value_1 VARCHAR2(60),
    df_prime_value_1 VARCHAR2(60),
    df_prime_value_2 VARCHAR2(60));
    I desire to conditionally dequeue when df_value_1 is equal to contents of a variable p_condition.
    Database version is 10.2.0.3.0
    This is a Single consumer queue.
    This works, the "EXIT WHEN" line below in the code SNIPPET01:
    EXIT WHEN (l_data_fragments.df_value_1 = p_condition);
    Ideally I'd like to use the deq_condition Dequeue Option.
    That is my problem and what I'm asking for help with.
    I cannot get deq_condition to work. see SNIPPET02 below.
    I've tried many flavors of syntax.
    I even tried a function that returned the df_value_1 fragment from the object, with this syntax:
    l_dequeue_options.deq_condition :=
    'MyPackage.F_GET_DF_VALUE_1_VALUE(tab.user_data.x_data_fragments) = '||''''||p_condition||'''';
    Can't get it to work with deq_condition.
    Gives ORA-00904: "TAB"."USER_DATA"."X_DATA_FRAGMENTS": invalid identifier
    I can't tell if this is an AQ bug or whether I have bad syntax.
    Any help is most appreciated.
    Thanks,
    Tom
    SNIPPET01:
    PROCEDURE p_dequeue_conditional(p_qname IN VARCHAR2,
                                    p_condition IN VARCHAR2,
                                    p_data_fragments OUT x_data_fragments)
    IS
      DEQUEUE_CONDIT_FAIL  EXCEPTION;
      LOOP_CNT             INTEGER := 0;
      MAX_WAIT             INTEGER := 0;
      l_dequeue_options    DBMS_AQ.dequeue_options_t;
      l_message_properties DBMS_AQ.message_properties_t;
      l_message_handle     RAW(16);
      l_data_fragments     x_data_fragments;
    BEGIN
      l_dequeue_options.dequeue_mode := DBMS_AQ.BROWSE;
      LOOP
        DBMS_AQ.DEQUEUE(
          queue_name         => p_qname,
          dequeue_options    => l_dequeue_options,
          message_properties => l_message_properties,
          payload            => l_data_fragments,
          msgid              => l_message_handle);
        EXIT WHEN (l_data_fragments.df_value_1 = p_condition);
      END LOOP;
    SNIPPET02:
    PROCEDURE p_dequeue_conditional(p_qname IN VARCHAR2,
                                    p_condition IN VARCHAR2,
                                    p_wait_seconds IN NUMBER,
                                    p_data_fragments OUT x_data_fragments)
    IS
      l_dequeue_options    DBMS_AQ.dequeue_options_t;
      l_message_properties DBMS_AQ.message_properties_t;
      l_message_handle     RAW(16);
      l_data_fragments     x_data_fragments;
    BEGIN
      l_dequeue_options.navigation := DBMS_AQ.FIRST_MESSAGE;
      l_dequeue_options.wait := p_wait_seconds;
      l_dequeue_options.deq_condition :=
        'tab.user_data.x_data_fragments.df_value_1 = '''||p_condition||'''';
      DBMS_AQ.DEQUEUE(
        queue_name         => p_qname,
        dequeue_options    => l_dequeue_options,
        message_properties => l_message_properties,
        payload            => l_data_fragments,
        msgid              => l_message_handle);
      p_data_fragments := l_data_fragments;
      COMMIT;
    END;

    Afternoon,
    Problem solved.
    A wonderful co-worker put a different pair of eyes on my problem.
    My user_data is of my object type.
    Changed this:
    'MyPackage.F_GET_DF_VALUE_1_VALUE(tab.user_data.x_data_fragments) = '||''''||p_condition||'''';
    To this:
    'MyPackage.F_GET_DF_VALUE_1_VALUE(tab.user_data) = '||''''||p_condition||'''';
    All working fine now.
    Thanks for your time.
    Tom

  • Retry counts on multi consumer queues in Java

    Is there a way to get the retry count of a message that has been dequeued from a multi-consumer queue? I believe we were able to access the retry count of a message on a single consumer queue, but when dequeuing from a multi-consumer queue, the retry count field is not populated.
    Can this be done?


  • Oracle AQ - dequeue multiple threads

    Hi,
    I have a single consumer AQ queue containing 1000 records. I have a BPEL process that dequeues from the queue and performs some action.
    I need 10 processes polling the queue at a time, each performing the post-xyz task. When any one of them is done, the next record can be dequeued.
    Thanks,
    Rosh

    I have the same problem with my proxy service on the OSB, which is polling an AQ.
    When 100 messages are in the queue and it starts polling, it will dequeue them all in one thread and hammer the backend service.
    I couldn't find any setting on the AQ JCA itself to control this. My current solution is OSB-based: we use throttling on the business service to control it a bit.
    In the old ESB you could use a setting like minimumDelayBetweenMessages to control this, but I already decompiled the AQ adapter from the OSB and can't find any interesting setting anymore.
    See https://blogs.oracle.com/kavinmehta/entry/aqapps-adapter-endpoint-properties for the old settings, which worked for us.
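    The minimumDelayBetweenMessages idea is easy to emulate on the consumer side: pause between dequeues so a full queue cannot hammer the backend. A toy sketch (Python; the 10 ms delay is an illustrative value, not a recommended setting):

```python
import time

def drain(messages, delay_s, handler):
    """Dequeue-and-handle with a fixed pause between messages,
    emulating a minimum-delay-between-messages throttle."""
    for m in messages:
        handler(m)
        time.sleep(delay_s)   # pacing: cap the rate hitting the backend

handled = []
start = time.monotonic()
drain([1, 2, 3], 0.01, handled.append)
elapsed = time.monotonic() - start
print(handled)   # -> [1, 2, 3]
```

    The OSB business-service throttle achieves the same effect one hop later, by limiting concurrent requests rather than spacing dequeues.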
