Performance of AQ when using callback procedure (using PL/SQL notification)

I am enqueuing XMLType messages in AQ, and the XML files are large. Now I want to dequeue using PL/SQL notification (a callback procedure that calls DBMS_AQ.DEQUEUE to dequeue the message and process the XML; after extracting the data from the XML, I want to store it in another table's columns).
Please suggest which of these two approaches would be better for dequeuing from the queue table:
1. PL/SQL notification (using a callback procedure)
OR
2. Scheduling the call to DBMS_AQ.DEQUEUE to dequeue the queue table
What would be the performance impact of each approach?

Hi,
The question of performance does not come down to the technique one employs to enqueue/dequeue; at a basic level, it is relative to the number of messages!
This is what the Oracle documentation states:
>
When persistent messages are enqueued, they are stored in database tables. The performance characteristics of queue operations on persistent messages are similar to underlying database operations. The code path of an enqueue operation is comparable to SELECT and INSERT into a multicolumn queue table with three index-organized tables. The code path of a dequeue operation is comparable to SELECT, DELETE, and UPDATE operations on similar tables.
>
So, if anything, the dequeue is a fraction more time- and resource-consuming. However, it is only a "fraction". You could always add multiple subscribers to dequeue if you want!
You could of course choose non-persistent messages if your application does not need the kind of fault tolerance that persistent messaging offers, which speeds things up, but only a bit, so don't get too excited about it!
There are other things to consider however, which you may wish to read up on before getting bogged down on whether you should use notification or the scheduler:
- Is your system clustered?
- Concurrency on a single queue (i.e. multiple enqueues/dequeues, but single queue)
- Propagation latency issues
Read about these and much more at:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14257/perform.htm
Finally, whether you use PL/SQL notification or not, and whether you use DBMS_SCHEDULER, comes down to your application requirements!
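For reference, a PL/SQL notification callback must have the fixed parameter list shown below. This is only a minimal sketch (the procedure name and XML handling are made up); it shows the DBMS_AQ.DEQUEUE call driven by the notification descriptor:

```sql
-- Hypothetical callback for an XMLType payload queue; the parameter
-- list is the fixed signature AQ expects for PL/SQL notification.
CREATE OR REPLACE PROCEDURE xml_queue_callback (
  context  RAW,
  reginfo  SYS.AQ$_REG_INFO,
  descr    SYS.AQ$_DESCRIPTOR,
  payload  RAW,
  payloadl NUMBER
) AS
  deq_opts  DBMS_AQ.DEQUEUE_OPTIONS_T;
  msg_props DBMS_AQ.MESSAGE_PROPERTIES_T;
  msg_id    RAW(16);
  msg       XMLTYPE;   -- assumed queue payload type
BEGIN
  -- Dequeue exactly the message this notification refers to
  deq_opts.msgid         := descr.msg_id;
  deq_opts.consumer_name := descr.consumer_name;
  DBMS_AQ.DEQUEUE(queue_name         => descr.queue_name,
                  dequeue_options    => deq_opts,
                  message_properties => msg_props,
                  payload            => msg,
                  msgid              => msg_id);
  -- Shred the XML (e.g. with XMLTABLE over msg) and insert the
  -- extracted values into your target table here, then commit.
  COMMIT;
END xml_queue_callback;
```

The scheduler alternative is just a job that loops on DBMS_AQ.DEQUEUE with a wait timeout; both approaches end up in the same dequeue code path the documentation describes above.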
P;

Similar Messages

  • Dreamweaver CS6 performs very slowly when using Bootstrap

    I'm working with a Bootstrap template that includes a lot of JavaScript and CSS files, and Dreamweaver CS6 performs EXTREMELY SLOWLY when working with this site.
    It takes a few seconds before I can scroll, especially when working in "Split" view, and every time I want to preview the page in a browser I must save the file and then wait 2-5 seconds for the save to complete before I hit Preview, or else Dreamweaver will crash.
    I've checked for updates to Dreamweaver CS6 but my copy seems to be up-to-date.
    Any suggestions?

    Dreamweaver performs very slowly.
    This depends on a number of factors.
    The operating system - apparently the latest version of Mac OS causes all sorts of problems
    Processor
    RAM
    Disk space
    External links to scripts and style sheets
    In my case, I do not have a problem other than the normal ones
    Windows 8.1
    i7
    16GB
    2T
    local files

  • Getting core dump when using EXEC SQL CLOSE

    In my Pro*C program I have used a cursor to fetch a set of accounts. Once the cursor is opened, the code performs a set of operations using the fetched data and then the cursor is closed. Between opening and closing the cursor I have used 23 EXEC SQL CLOSE statements. For example, I copy the value of a to b using strlcpy between the FETCH and CLOSE cursor statements. If the value returned by strlcpy is greater than the size of the destination variable, the flow should not proceed; in that case I close the cursor using EXEC SQL CLOSE and return to the calling program. Similarly, I have closed the cursor at another 22 locations.
    When I compile the code and run the binary, a core dump occurs. Analyzing the core shows:
    t@null (l@8) terminated by signal SEGV (no mapping at the fault address)
    0xffffffffffffffff: <bad address 0xffffffffffffffff>
    dbx: core file read error: address 0xfc4ffe48 not in data space
    Current function is dbMtBaseClass::Pswd_Change
    7860 sqlcxt(&_dbMtCtx, &sqlctx, &sqlstm, &sqlfpn);
    If I remove any of the three EXEC SQL CLOSE commands, the core dump does not occur.
    It looks strange. Please help me to resolve the issue.


  • 4.1EA2 User-Defined-Report strips CRLF when using PL/SQL DBMS Output

    Hi,
    I have a report with a child report.
    That child report uses PL/SQL to print to the screen (DBMS_OUTPUT).
    When I use the procedure manually, it prints fine.
    When I copy this output to a text editor I can see the CRLFs in there.
    If I call the same procedure through the reports section of SQL Developer, the output is stripped of all CRLF characters.
    This makes it useless for me: I am using it to generate code, and it all appears as one big line on the screen.
    Copying this to a text editor indeed shows all CRLFs gone.
    Is this intentional or a bug?

    Jeff,
    Thanks for the answer.
    I don't have a problem writing HTML in my code, being a former web developer with Apex experience as well.
    The problem is that the option presents itself as standard PL/SQL DBMS_OUTPUT, which suggests output identical to the DBMS_OUTPUT pane in SQL Developer.
    From your answer I gather it really is supposed to do HTML formatting.
    Your "solution" of inserting br tags would be fine if it weren't for the fact that when I copy the output from this report pane, SQL Developer strips away the br tags but doesn't replace them with CRLFs.
    And since the CRLFs that were in the output are also stripped in this result pane, I get a single line of text when I copy a page of text from the result pane into a normal text editor.
    As such it behaves differently than, say, a browser: when I copy a page of text from a browser and paste it into a text editor, I see the CRLFs that were present in the original source.
    Is there a listing somewhere that explains which tags are supported?
    Or CSS?
    Does it support things like white-space: pre?
    I understand the usefulness of an HTML-aware PL/SQL DBMS_OUTPUT option in the reports list of a user-defined report.
    However, I would like to use this opportunity to advocate including, as an extra option or tick box, an option to display the output exactly as the normal DBMS_OUTPUT pane displays it.
    That is, without extra formatting.
    I like the idea of being able to click in the table section above and have the bottom pane generate the appropriate code as output.
    That use case cannot happen right now, since it either formats as HTML (and not even the correct way) or, going along with this route, strips everything to a single line of text when copying the data from the result pane.
    rgrds mike

  • How to measure performance of supplier when using scheduling agreement ?

    Hello all,
    My client has an absolute need to measure the performance of its suppliers based on delivery dates and delivered quantities. That is to say, he needs to be able to compare the dates and quantities that were requested with what has actually been delivered.
    Most of the procurement processes used are based on scheduling agreements : schedule lines are generated by MRP and forecast is sent to supplier while firm requirements are sent through JIT calls.
    It seems that when doing GR in MIGO, it is done against the outline agreement number, and not against the call. Therefore, we have no way to compare dates and quantity with what was expected (in the JIT call).
    Do you know if SAP proposes a standard solution to this, and what could be a solution to this issue ?
    Thanks for your help
    E. Vallez

    Hi,
    My client faced the same problem and we ended up developing our own analysis in LIS. Since the GR is not linked to a specific schedule line (SAP does some kind of apportioning, but it doesn't have to correlate to the correct match), one needs to make assumptions. Our assumption was the closest schedule line, i.e. each GR is related to the schedule line with the closest date. Then all GRs on the same day are totaled together before the quantity reliability is calculated, since the very same shipment can be reported through several GR transactions in SAP (one per pallet).
    If anybody has info about what SAP has to offer in this question (or is developing), please tell us!
    BR
    Raf

  • "There is no valid action to be performed" error message when using free-form approvals

    Hi all,
    I'm using a free-form approval template with no owners and reviewers assigned; just security filters are applied on the Entity dim.
    I have started all entities and they are in "First Pass" status. Now I'm not able to change the status of a planning unit, and I'm getting the following error message:
    There is no valid action that can be performed on the selected Planning Unit.
    Why am I not able to change the status? Could anyone help me out with this?
    BR
    Vlado

    The user ID created in SolMan was P0041, and hence the BP was created assigned to this ID.
    At some point a request came to delete P0041 and recreate it as ALMA41 to maintain the standards,
    but changing P0041 in the BP to reflect ALMA41 was missed.
    Daher Abdeen.

  • JDev 10.1.3 Prod Bug in Structure Window when using PL/SQL and Stored Types

    Using JDev 10.1.3 build 3673
    When loading/editing a PL/SQL package with code like the following:
    rec := NEW Record001Row;
    where Record001Row is a stored type in the database, the JDev Structure window shows an "Error Expecting ;". Once the above line is commented out, the PL/SQL program structure appears in the Structure window. I can then uncomment the line and the structure of the PL/SQL program stays, but the "Error Expecting ;" still appears.

    Logged the bug 5010466.
    Thank you for reporting this.

  • Item null when using PL/SQL function body returning SQL query

    I have an item P5_CUSTOMER_ID (text). Whatever value I enter in this text box, the PL/SQL seems to execute the ELSE branch. What am I missing? I am using APEX 2.0 and Oracle 10g.
    Declare
      qry VARCHAR2(1000);
    Begin
      IF :P5_CUSTOMER_ID IS NOT NULL THEN
        qry := 'select :P5_CUSTOMER_ID as NOT_NULL from dual';
      ELSE
        qry := 'select :P5_CUSTOMER_ID as NULL_NULL from dual';
      END IF;
      return qry;
    End;

    I forgot to mention that the P5_CUSTOMER_ID value is printed in the report (see my SELECT statement). When I enter '10' in the text box, the SQL output prints it under the column heading 'NULL_NULL'. Since it recognizes the value, it should execute the IS NOT NULL branch, but it does not.

  • How to get the addition DML Error information when using PL/SQL?

    Given the following statements:
    drop table bob;
    create table bob (
      col1 number(6,2),
      col2 number(6,2) not null
    );
    insert into bob(col1, col2) values (123.12, null);
    insert into bob(col1, col2) values (123.12, 12345.12);
    commit;
    I get different behavior from Oracle error messages.
    Here is the sample output:
    Table dropped.
    Table created.
    insert into bob(col1, col2) values (123.12, null)
    ERROR at line 1:
    ORA-01400: cannot insert NULL into ("BOB"."COL2")
    insert into bob(col1, col2) values (123.12, 12345.12)
    ERROR at line 1:
    ORA-01438: value larger than specified precision allows for this column
    Commit complete.
    Note that in the first error (1400) the generated error message contains the name of the problematic column ("BOB"."COL2"). Also note that in the regurgitated statement the error pointer (*) is at the far left. In this case, the regurgitated statement is of no use to me because the error message identifies the culprit column.
    However, in the second error (1438) the generated error message is very generic and does NOT indicate the problematic column. In addition, in this case the error pointer (*) of the regurgitated statement has moved to point to the troublesome column. In this case, I must use BOTH the error message and the regurgitated statement to completely understand the problem.
    This difference in how error messages are constructed causes difficulty within PL/SQL. If PL/SQL happens to generate a 1400 error message, then simply capturing the error message and giving it to the user provides enough information for him/her to solve the problem. However, if a 1438 error is generated then the resulting error message has insufficient information for the user to pursue the problem.
    So my question is, within PL/SQL do you know how to get your hands on Oracle's regurgitated statements so that they can be presented to the user to assisted him/her in problem resolution?
    Thanks.

    I don't think it's that easy. Notice that if you put the insert into an actual pl/sql block, you don't get the correct column pointer anymore.
    BEGIN insert into bob(col1, col2) values (123.12, 12345.12); END;
    ERROR at line 1:
    ORA-01438: value larger than specified precision allows for this column
    ORA-06512: at line 1
    Richard
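    An aside worth adding here (assuming 10gR2 or later): DML error logging captures the rejected row itself, so the offending value is visible even when the ORA-01438 text is generic. A minimal sketch against the BOB table above:

```sql
-- Create the default error log table ERR$_BOB for table BOB
BEGIN
  DBMS_ERRLOG.CREATE_ERROR_LOG(dml_table_name => 'BOB');
END;
/
-- The failing row is diverted to ERR$_BOB instead of raising ORA-01438
INSERT INTO bob (col1, col2) VALUES (123.12, 12345.12)
  LOG ERRORS INTO err$_bob ('demo') REJECT LIMIT UNLIMITED;

-- ORA_ERR_MESG$ plus the captured COL1/COL2 values identify the culprit
SELECT ora_err_mesg$, col1, col2 FROM err$_bob;
```

    This does not recover Oracle's "regurgitated" statement, but it gives the user both the error message and the actual rejected values.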

  • Notification Error when using PL/SQL document

    I have a notification that calls a procedure that returns the HTML body of the email. It worked once, but now I get the error below. In Oracle Apps, on the notification tab, the notification shows correctly; it is the actual sending of the email that is failing.
    Thank you for any help you may be able to provide,
    Blake
    [WF_ERROR] ERROR_MESSAGE=3835: Error '-20002 - ORA-20002: 2018: Unable to generate the notification XML. Caused by: 2020: Error when getting notification content. Caused by: sqlerrm wf_notification.GetAttrDoc2(123387, NOTIF_BASIC_INFO_DOC, text/html) Wf_Notification.GetAttrDoc(123387, NOTIF_BASIC_INFO_DOC, text/html) Wf_Notification.GetText(123387, text/html) Wf_Notification.GetBody(123387, text/html) WF_NOTIFICATION.GetFullBody(nid => 123387, disptype => text/html) WF_MAIL.GetLOBMessage3(nid => 123387, r_ntf_pref => MAILHTML)' encountered during execution of Generate function 'WF_XML.Generate' for event 'oracle.apps.wf.notification.send'. ERROR_STACK= WF_MAIL.GetLOBMessage3(123387, WFMAIL, 2020: Error when getting notification content. Caused by: sqlerrm wf_notification.GetAttrDoc2(123387, NOTIF_BASIC_INFO_DOC, text/html) Wf_Notification.GetAttrDoc(123387, NOTIF_BASIC_INFO_DOC, text/html) Wf_Notification.GetText(123387, text/html) Wf_Notification.GetBody(123387, text/html) WF_NOTIFICATION.GetFullBody(nid => 123387, disptype => text/html) WF_MAIL.GetLOBMessage3(nid => 123387, r_ntf_pref => MAILHTML), Step -> Getting text/html body) WF_XML.GenerateDoc(oracle.apps.wf.notification.send, 123387) WF_XML.Generate(oracle.apps.wf.notification.send, 123387) WF_XML.Generate(oracle.apps.wf.notification.send, 123387) Wf_Event.setMessage(oracle.apps.wf.notification.send, 123387, WF_XML.Generate) Wf_Event.dispatch_internal()

    I am getting the following error:
    WF_ERROR] ERROR_MESSAGE=3835: Error '-20002 - ORA-20002: 2018: Unable to generate the notification XML. Caused by: 2020: Error when getting notification content. Caused by: ORA-06502: PL/SQL: numeric or value error: character string buffer too small ORA-06512: at "SYS.HTP", line 1557 ORA-06512: at "SYS.HTP", line 1735 ORA-06512: at "APPS.HR_UTIL_DISP_WEB", line 3519 ORA-06512: at "APPS.XX_HR_MDA_AUDIT_VIEW_WEB", line 2077 ORA-06502: PL/SQL: numeric or value error: character string buffer too small Wf_Notification.GetAttrClob(2648602, XX_HR_REVIEW_DO' encountered during execution of Generate function 'WF_XML.Generate' for event 'oracle.apps.wf.notification.send'. ERROR_STACK= WF_MAIL.GetLOBMessage3(2648602, WFMAIL, 2020: Error when getting notification content. Caused by: ORA-06502: PL/SQL: numeric or value error: character string buffer too small ORA-06512: at "SYS.HTP", line 1557 ORA-06512: at "SYS.HTP", line 1735 ORA-06512: at "APPS.HR_UTIL_DISP_WEB", line 3519 ORA-06512: at "APPS.XX_HR_MDA_AUDIT_VIEW_WEB", line 2077 ORA-06502: PL/SQL: numeric or value error: character string buffer too small Wf_Notification.GetAttrClob(2648602, XX_HR_REVIEW_DOC_ATTR, text/html) Wf_Notification.oldGetAttrClob(2648602, XX_HR_REVIEW_DOC_ATTR, text/html) WF_NOTIFICATION.GetFullBody(nid => 2648602, disptype => text/html) WF_MAIL.GetLOBMessage3(nid => 2648602, r_ntf_pref => MAILHTML), Step -> Getting text/html body) WF_XML.GenerateDoc(oracle.apps.wf.notification.send, 2648602) WF_XML.Generate(oracle.apps.wf.notification.send, 2648602) WF_XML.Generate(oracle.apps.wf.notification.send, 2648602) Wf_Event.setMessage(oracle.apps.wf.notification.send, 2648602, WF_XML.Generate) Wf_Event.dispatch_internal()
    At line 2077 in "APPS.XX_HR_MDA_AUDIT_VIEW_WEB" the code is :
    WHEN others THEN
    hr_util_disp_web.display_fatal_errors
    (p_message => g_package_name||'.display_detail - '||sqlerrm||' '||sqlcode);
    At line 3519 in "APPS.HR_UTIL_DISP_WEB" the code is :
    l_session_id := icx_sec.getid(icx_sec.pv_session_id);
    htp.p('<HTML>');
    htp.p('<HEAD>');
    htp.p('</HEAD>');
    htp.p('<BODY>');
    htp.p('<SCRIPT language="JavaScript">');
    htp.p('window.location="hr_util_disp_web.display_fatal_error_form?'
    ||'p_message='||icx_call.encrypt2(p_message, l_session_id)
    ||'"');
    htp.p('</SCRIPT>');
    htp.p('</BODY>');
    htp.p('</HTML>');
    Any inputs on how to debug it.

  • Queue Concurrent Processing using PL/SQL notification

    I have a queue whose messages I need processed concurrently, no matter how many messages are in the queue. Currently only one message is processed at a time, but I need at least 10 processed at once. Each message can take up to 3 minutes to process, so doing one at a time takes too long once the number of messages grows to more than 5 or 10. How can this be done?
    begin
      DBMS_AQADM.CREATE_QUEUE_TABLE
      ( queue_table => 'z3.health_check_qtab',
        queue_payload_type =>  'z3.sil_queue_msg_typ',
        multiple_consumers => TRUE );
      DBMS_AQADM.CREATE_QUEUE
      ( queue_name => 'HEALTH_CHECK_Q',
        queue_table => 'z3.health_check_qtab');
      DBMS_AQADM.START_QUEUE
      (  queue_name => 'HEALTH_CHECK_Q');
    end;
    begin
      dbms_aqadm.add_subscriber
      ( queue_name => 'Z3.HEALTH_CHECK_Q',
        subscriber => sys.aq$_agent( 'HC_recipient1', null, null ) );
    end;
    BEGIN
      dbms_aq.register
      ( sys.aq$_reg_info_list(
        sys.aq$_reg_info('Z3.HEALTH_CHECK_Q:HC_RECIPIENT1',
                         DBMS_AQ.NAMESPACE_AQ,
                        'plsql://Z3.Z_IMPACT_LIST_PKG.DEQUEUE_HEALTH_CHECK',
                        HEXTORAW('FF')) ) ,
      1 );
    end;
    Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production

    I'm also looking into how to do this (on 10gR2 RAC). I would like to enqueue say 1,000,000 requests and have 10-30 (configurable) worker 'threads' dequeueing and executing a PL/SQL procedure during certain parts of the day.
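    One way to sketch the multiple-worker idea without adding subscribers (which would deliver every message N times on a multi-consumer queue) is N scheduler jobs all dequeuing the same subscription; DBMS_AQ.DEQUEUE locks each message, so each is still processed exactly once. All names here are illustrative — z3.hc_dequeue_loop is assumed to be a procedure that loops on a blocking DBMS_AQ.DEQUEUE for consumer 'HC_recipient1' with a wait timeout:

```sql
-- Hypothetical: start ten worker jobs that drain the queue in parallel
BEGIN
  FOR i IN 1 .. 10 LOOP
    DBMS_SCHEDULER.CREATE_JOB(
      job_name   => 'HC_DEQUEUE_JOB_' || i,
      job_type   => 'PLSQL_BLOCK',
      job_action => 'BEGIN z3.hc_dequeue_loop; END;',
      enabled    => TRUE);
  END LOOP;
END;
/
```

    Each blocking DEQUEUE removes a message under its own lock, so the exactly-once requirement is preserved while up to ten messages are processed concurrently.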

  • Crystal 2011 getting Logon failed IM004 error when using ODBC (RDO)

    I just installed Crystal Reports 2011 on Windows 7 Ultimate SP1.  Sporadically, but way too often (I would estimate about 60% of the time), when I go to set up an ODBC (RDO) connection or refresh an existing report and enter the password, I get the following error:
    Logon failed.
    Details: IM004:[Microsoft][ODBC Driver Manager] Driver's SQLAllocHandle on SQL_Handle_ENV failed
    Other times it just works, with no changes made. It seems to work slightly more often when using the SQL Server driver (the one for SQL 2000) as opposed to the SQL Native Client or SQL Server Native Client 10.0 drivers, but not often enough for that to be the solution.
    I have tested this with a locally installed SQL Server (on the same machine as Crystal Reports) and remote SQL Servers, with SQL 2000, 2005 and 2008.  Same results.
    I have Crystal Reports 2008 installed on the same machine and have not once had this problem.  I have tried using the same ODBC connections I already had from Crystal 2008 with Crystal 2011 and also to create new ones, same results either way.
    Is this a known issue, or is there a switch/setting I need to change to make this work consistently?  I am especially puzzled by the fact that it seems random, I have not been able to find any pattern at all.

    Hi Victoria,
    MS SQL Server 2008 has a client install now, the MDAC/WDAC doesn't install it as MS used to do with all of their OS's. CR also had to update the use of SQL Server 2008 to include the SQLNCLI10 client engine.
    I suggest, if you have not already, install the Native 10 client from either the SQL Server CD or you can download the MSI from MS's site and only use the Native 10 driver.
    MS kind of dropped ODBC and moved to OLE DB. The native client actually uses the OLE DB driver so using ODBC just puts another layer on top of OLE DB anyways. It's more convenient to use ODBC for deploying applications but it can affect performance.
    Anyways, as noted, CR 2011 is still in Ramp up mode so this issue may have been tracked. If not though you can purchase a support case, if it's a bug then you'll get a refund, if it's a limitation in MS then no refund.
    There is [SP 1 for CR 2011|https://smpdl.sap-ag.de/~sapidp/012002523100007683092011E/cr2011sp1.exe] - http://service.sap.com/sap/bc/bsp/spn/bobj_download/main.htm
    Seems to be a problem with that page right now so try again later... and apply SP 1 or wait until it goes GA and test again. If it's still an issue then you can log a case.
    Firefox just got updated and it's failing, IE works though...
    Thanks again
    Don

  • Buffering video downloads via QuickTime when using a HSDPA USB modem

    Hi,
    Does anyone else experience very poor streaming performance via QuickTime when using an HSDPA USB modem? Every time I use the QuickTime player and receive data via an HSDPA USB modem, there seems to be minimal buffering of that data, so the video quality is extremely pixelated and virtually unwatchable. Are there settings that can be altered to correct this problem? This issue seems so petty, yet Apple is unable to offer a solution? That doesn't seem right.
    Regards,
    mike_biz

    Is this a file that Quicktime PRO is not able to open?
    Yes. Your camera's native video format is MPEG-4 and appears to possibly be "muxed" with MP3 audio.
    If not, is there anyway that I can convert this file to one that I can use on my Mac?
    Try installing the free Flip4Mac component. Then try playing the .asf file in the QT Player.
    Is it possible to upload from my camcorder via the included USB cable and convert directly through Quicktime PRO?
    If your file plays using the Flip4Mac player component, you may then wish to consider purchasing full conversion codec which will allow you to export your DXG-305V files to any frame-to-frame synchronized QT compatible format in QTPro.
    ADDED: Sorry about the duplication, too slow with my response.

  • Future support for using PL/SQL core business logic with ADF BC

    We want to migrate our large Forms client/server (6i) application to ADF, possibly using a migration tool like Ciphersoft Exodus.
    One scenario could be to use ADF BC and ADF Faces, or a different JSF implementation, for the presentation and business layer, but keep our heavy PL/SQL business logic inside the Oracle database in packages, triggers, functions and procedures.
    This scenario could be chosen due to the huge amount of interconnected logic inside the database (10 years of development; no technical components; any package may access any table, and more dependencies of this kind). The business logic currently held in the Forms client will be moved mainly into the database as a prerequisite for this scenario.
    Choosing this "keep-logic-in-DB" scenario, we need good support from ADF BC. We know and have prototyped that it is possible to call some PL/SQL via JDBC from ADF BC, and that it is possible to use stored procedure calls for standard business entity data access (insert, delete, update, ...). But this does not solve our problems. We want to reuse core business logic coded in PL/SQL, which is much more than changing the standard ADF behavior for an update with our own PL/SQL call.
    Now my question:
    Will there be a kind of sophisticated support to use ADF BC in combination with database-kept logic?
    If so, when will this happen and how will the common problems of transactional state inside the database and inside the ADF BC be solved? Any plans or ideas yet?
    Many other clients do have similar applications built in Forms and PL/SQL and would be glad to hear about a path of direction.
    I've read the technical article 'understanding the ADF BC state management feature' which you have contributed to. One current limitation is pointed out there: Using PL/SQL with ADF BC limits ADF AM pooling to 'restricted level' which reduces scalability.
    Are you aware of additional main problems/tasks to solve when using PL/SQL heavily with ADF BC, which we have to think about?
    Thank you for any response.
    Ingmar

    My main problem is two 'concurrent' areas holding state in an application system based on DB-stored PL/SQL logic in combination with ADF BC.
    For a new system, everything can be made OK:
    Sure, it is possible to build a new system with the business logic included in ADF BC only. All long-living state will be handled in the BC layer (including support for units of work longer than the short web HTTP requests and HTTP sessions, and longer than the database transactions).
    For an old system these problems arise:
    1. DB data changes not reflected in BC layer:
    Our PL/SQL logic changes data in tables without notifying the ADF BC layer (and its cache). To keep the data in ADF BC entity objects identical to the changed database content, a synchronization is needed. BC does not know which part of the application data has been changed, because it has not initiated the changes through its entity objects; therefore a full refresh is needed. In a Forms/4GL environment the behavior is similar: we frequently requery all relevant (base) tables after calling database-stored logic, to be sure to get the changed data to display and operate on.
    -> Reengineering the PL/SQL logic to make the ADF BC layer aware of the changes is a big effort (notifying BC about any change)
    2. longer living database transactions
    Our PL/SQL logic in some areas makes use of lengthy database transactions. The technical DB transaction is similar to the unit of work. If we call this existing logic from ADF BC, database state is produced which will not be DB-committed in the same cycle.
    This reduces the scalability of ADF BC AM pooling.
    Example:
    a) Call a DB-stored logic to check if some business data is consistent and prepare some data for versioning. This starts a DB-transaction but does not commit it.
    b) Control is handed back to the user interface. Successful result of step a) is displayed
    c) User now executes the versioning operation
    d) Call another DB-stored logic to execute the versioning. DB-transaction is still open
    e) Business layer commits the transaction automatically after successful finishing step d). Otherwise everything from a) to e) is rolled back.
    -> Redesigning this behavior (i.e. cutting the 1-to-1 relation between the logical unit of work and the technical database transaction) is a big effort due to the large amount of code.

  • Performance issues when using AQ notification with one consumer

    We have developed a system to load data from a reservation database into a reporting database.
    At a certain point in the process, a message with the identifier of the reservation is enqueued to a (multi-consumer) queue on the same DB and then propagated to a similar queue on the REP database.
    This queue (multi-consumer) has AQ notification enabled (with one consumer) which calls the queue_callback procedure which
    - dequeues the message
    - calls a procedure to load the Resv data into the Reporting schema (through DB link)
    We need each message to be processed ONLY ONCE, hence the use of one single subscriber (consumer).
    But when load-testing our application with multiple threads, the number of records created in the reservation database becomes quite large, meaning a large number of messages go through the first queue and propagate to the second queue (very quickly).
    But messages are not processed fast enough by the 2nd queue (notification), which falls behind.
    I would like to keep using notification, as processing is automatic (no need to set up DBMS_JOBs to dequeue, etc.), or something similar.
    So having read articles, I feel I need to use:
    - multiple subscribers on the 2nd queue, where each message is processed by only one subscriber (using a rule: say 10 subscribers S0 to S9, with Si processing messages whose identifier ends in the digit i)
    The problem with this is that there is an attempt to process the message for each subscriber, isn't there?
    - a different dequeuing method where many processes are used in parallel, with each message processed by only one subscriber
    Does anyone have experience and recommendations to make on how to improve throughput of messages?
    Rgds
    Philippe

    Hi, thanks for your interest
    I am working with 10.2.0.4.
    My objective is to load a subset of the reservation data from the tables in the first DB (Reservation - OLTP - 150 tables)
    into the tables in the second DB (Reporting - about 15 tables at the moment), without affecting performance on the Reservation DB.
    Thus the choice of Advanced Queuing (asynchronous).
    - I have 2 similar queues in 2 separate databases ( AND Reporting)
    The message payload is the same on both (the identifier of the reservation)
    When a certain event happens on the RESERVATION database, I enqueue a message on the first database
    Propagation moves the same message data to the second queue.
    And there I have notification sending the message to a single consumer, which:
    - calls dequeue
    - and the data-load procedure, which loads this reservation
    My performance difficulties start at the notification, but I will post all the relevant code before notification, in case it has an impact.
    - The 2nd queue was created with a script containing the following (a similar script exists for the first queue):
    dbms_aqadm.create_queue_table(
        queue_table        => '&&CQT_QUEUE_TABLE_NAME',
        queue_payload_type => 'RESV_DETAIL',
        comment            => 'Report queue table',
        multiple_consumers => TRUE,
        message_grouping   => DBMS_AQADM.NONE,
        compatible         => '10.0.0',
        sort_list          => 'ENQ_TIME',
        primary_instance   => '0',
        secondary_instance => '0');
    dbms_aqadm.create_queue(
        queue_name  => '&&CRQ_QUEUE_NAME',
        queue_table => '&&CRQ_QUEUE_TABLE_NAME',
        max_retries => 5);
    - ENQUEUING on the first queue (snippet of code)
    o_resv_detail DLEX_AQ_ADMIN.RESV_DETAIL;
    o_resv_detail:= DLEX_AQ_ADMIN.RESV_DETAIL(resvcode, resvhistorysequence);
    DLEX_RESVEVENT_AQ.enqueue_one_message (o_resv_detail);
    where DLEX_RESVEVENT_AQ.enqueue_one_message is :
    PROCEDURE enqueue_one_message (msg IN RESV_DETAIL)
    IS
        enqopt    DBMS_AQ.enqueue_options_t;
        mprop     DBMS_AQ.message_properties_t;
        enq_msgid dlex_resvevent_aq_admin.msgid_t;
    BEGIN
        DBMS_AQ.enqueue (
            queue_name         => dlex_resvevent_aq_admin.c_resvevent_queue,
            enqueue_options    => enqopt,
            message_properties => mprop,
            payload            => msg,
            msgid              => enq_msgid);
    END;
    - PROPAGATION: The message is dequeued from 1st queue and enqueued automatically by AQ propagation into this 2nd queue.
    (using a call to the following 'wrapper' procedure)
    PROCEDURE schedule_propagate (
        src_queue_name IN VARCHAR2,
        destination    IN VARCHAR2 DEFAULT NULL)
    IS
        sprocname dlex_types.procname_t := 'dlex_resvevent_aq_admin.schedule_propagate';
    BEGIN
        DBMS_AQADM.SCHEDULE_PROPAGATION(
            queue_name  => src_queue_name,
            destination => destination,
            latency     => 10);
    EXCEPTION
        WHEN OTHERS
        THEN
            DBMS_OUTPUT.put_line (SQLERRM || ' occurred in ' || sprocname);
    END schedule_propagate;
    - For 'NOTIFICATION': ONE subscriber was created using:
    EXECUTE DLEX_REPORT_AQ_ADMIN.add_subscriber('&&STQ_QUEUE_NAME','&&STQ_SUBSCRIBER',NULL,NULL, NULL);
    this is a wrapper procedure that uses:
    DBMS_AQADM.add_subscriber (queue_name => p_queue_name, subscriber => subscriber_agent );
    Then notification is registered with:
    EXECUTE dlex_report_aq_admin.register_notification_action ('&&AQ_SCHEMA','&&REPORT_QUEUE_NAME','&&REPORT_QUEUE_SUBSCRIBER');
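    Since the wrapper hides the registration call, here is roughly what a PL/SQL notification registration looks like underneath (the schema, queue, subscriber, and callback names below are placeholders standing in for the substitution variables):

    ```sql
    -- Sketch of DBMS_AQ.REGISTER for a PL/SQL callback. The 'plsql://' URL
    -- names the callback procedure; 'queue:consumer' identifies the
    -- subscription on a multi-consumer queue.
    DECLARE
      r_info SYS.AQ$_REG_INFO;
    BEGIN
      r_info := SYS.AQ$_REG_INFO(
        'AQ_SCHEMA.REPORT_QUEUE:REPORT_SUBSCRIBER',  -- queue:consumer
        DBMS_AQ.NAMESPACE_AQ,
        'plsql://DLEX_AQ_ADMIN.QUEUE_CALLBACK',      -- callback procedure
        HEXTORAW('FF'));                             -- user context
      DBMS_AQ.REGISTER(SYS.AQ$_REG_INFO_LIST(r_info), 1);
      COMMIT;
    END;
    /
    ```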
    - job_queue_processes is set to 10
    - The callback procedure is as follows
    CREATE OR REPLACE PROCEDURE DLEX_AQ_ADMIN.queue_callback (
        context  RAW,
        reginfo  SYS.AQ$_REG_INFO,
        descr    SYS.AQ$_DESCRIPTOR,
        payload  RAW,
        payloadl NUMBER)
    IS
        s_procname CONSTANT VARCHAR2 (40) := UPPER ('queue_callback');
        r_dequeue_options    DBMS_AQ.DEQUEUE_OPTIONS_T;
        r_message_properties DBMS_AQ.MESSAGE_PROPERTIES_T;
        v_message_handle     RAW(16);
        o_payload            RESV_DETAIL;
    BEGIN
        r_dequeue_options.msgid         := descr.msg_id;
        r_dequeue_options.consumer_name := descr.consumer_name;
        DBMS_AQ.DEQUEUE(
            queue_name         => descr.queue_name,
            dequeue_options    => r_dequeue_options,
            message_properties => r_message_properties,
            payload            => o_payload,
            msgid              => v_message_handle);
        -- Call procedure to load data from reservation database to Reporting DB through the DB link
        dlex_report.dlex_data_load.load_reservation (
            in_resvcode            => o_payload.resv_code,
            in_resvHistorySequence => o_payload.resv_history_sequence);
        COMMIT;
    END queue_callback;
    - I noticed that messages are not taken out of the 2nd queue.
    I guess I would need to use the REMOVE option to delete messages from the queue?
    Would this be a large source of performance degradation after just a few thousand messages?
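    For what it's worth, REMOVE is already the default dequeue_mode in DBMS_AQ, so the callback above performs a destructive dequeue without setting anything; making it explicit only documents the intent. A minimal fragment, reusing the variable names from the callback:

    ```sql
    -- Explicitly request destructive dequeue (DBMS_AQ.REMOVE is the default).
    -- Note: on a multi-consumer queue, the row leaves the underlying queue
    -- table only after every subscriber has dequeued the message and any
    -- retention time has expired.
    r_dequeue_options.dequeue_mode  := DBMS_AQ.REMOVE;
    r_dequeue_options.msgid         := descr.msg_id;
    r_dequeue_options.consumer_name := descr.consumer_name;
    ```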
    - The data load through the DB link may be a little intensive, but I feel that doing things in parallel would help.
    I would like to understand whether Oracle has a way of dequeuing in parallel (with or without the use of notification).
    In the case of multiple subscribers with notification, does the 'job_queue_processes' value have an impact on the degree of parallelism? If not, what setting does?
    And is there a way supplied by Oracle to make the queue notify only one subscriber per message?
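    One pattern sometimes used for parallel dequeue, independent of notification, is a set of scheduler jobs, each running a dequeue loop for its own subscriber. A sketch under assumptions: `dlex_report.dequeue_loop` is a hypothetical procedure that would block on DBMS_AQ.DEQUEUE for the named consumer, process the payload, and commit:

    ```sql
    -- Hypothetical sketch: start 10 scheduler jobs, one per subscriber
    -- S0..S9, each executing its own (assumed) blocking dequeue loop.
    BEGIN
      FOR i IN 0 .. 9 LOOP
        DBMS_SCHEDULER.CREATE_JOB(
          job_name   => 'DEQ_JOB_' || i,
          job_type   => 'PLSQL_BLOCK',
          job_action => 'BEGIN dlex_report.dequeue_loop(''S' || i || '''); END;',
          enabled    => TRUE);
      END LOOP;
    END;
    /
    ```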
    Your advice would be very much appreciated
    Philippe
    Edited by: user528100 on Feb 23, 2009 8:14 AM