Long running database call

Hi
I have a scenario where a call to a PL/SQL procedure takes 15+ minutes to complete.
Calls to the database through the database adapter are synchronous in BPEL, so yes, the timeout is marking the JTA transaction inactive.
I understand that increasing the transaction timeout is not an option, as the time to complete might vary based on the amount of data to process.
What are the best practices for designing this scenario?
Every little helps,
Kalidass Mookkaiah

This is not a very good pattern, as you have no visibility of the second BPEL process; if it never gets invoked, the first process will be none the wiser.
Plus, how does your second BPEL process kick off the long running PL/SQL? Once it gets invoked you still need to call the PL/SQL. How is this achieved using this pattern?
If you want to call a long running PL/SQL procedure, it can be achieved with AQ and correlations (a sketch of the database side follows the steps below).
1.) Create an async BPEL process.
2.) Invoke an AQ adapter that places a message on a request queue with a correlation id, so you need to use correlation sets.
3.) Set up your PL/SQL to subscribe to this queue, so when a message is posted the PL/SQL is invoked.
4.) Once complete, it places the result on a response queue using the same correlation id.
5.) Your BPEL process is listening for that correlation id on the response queue. (I like to respond with the status of the PL/SQL.)
6.) It is always best to have a pick activity after the receive so that, if the response does not arrive within a given timeframe, you can implement some error handling.
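To make steps 3 and 4 concrete, here is a minimal PL/SQL sketch of the database side of the pattern. It assumes the request and response queues (REQUEST_Q and RESPONSE_Q here, both hypothetical names on RAW-payload queue tables created with DBMS_AQADM) already exist, and that LONG_RUNNING_PROC is a stand-in for the actual 15-minute procedure. The dequeue is shown as a simple blocking call, though a notification callback registered with DBMS_AQ.REGISTER would work just as well; the important detail is that the correlation id from the request is copied onto the response.
CREATE OR REPLACE PROCEDURE process_requests IS
  l_deq_opts  DBMS_AQ.DEQUEUE_OPTIONS_T;
  l_enq_opts  DBMS_AQ.ENQUEUE_OPTIONS_T;
  l_req_props DBMS_AQ.MESSAGE_PROPERTIES_T;
  l_res_props DBMS_AQ.MESSAGE_PROPERTIES_T;
  l_msg_id    RAW(16);
  l_payload   RAW(2000);
BEGIN
  -- Block until the BPEL process places a request on the queue
  l_deq_opts.wait := DBMS_AQ.FOREVER;
  DBMS_AQ.DEQUEUE(queue_name         => 'REQUEST_Q',
                  dequeue_options    => l_deq_opts,
                  message_properties => l_req_props,
                  payload            => l_payload,
                  msgid              => l_msg_id);

  -- Do the long running work here; no JTA timeout applies on this side
  long_running_proc(l_payload);                 -- hypothetical procedure

  -- Reply on the response queue, reusing the correlation id from the request
  l_res_props.correlation := l_req_props.correlation;
  DBMS_AQ.ENQUEUE(queue_name         => 'RESPONSE_Q',
                  enqueue_options    => l_enq_opts,
                  message_properties => l_res_props,
                  payload            => UTL_RAW.CAST_TO_RAW('DONE'),
                  msgid              => l_msg_id);
  COMMIT;
END process_requests;
/
On the BPEL side the same value initialises the correlation set on the invoke that enqueues the request, so the receive (or pick) on the response queue only matches the reply for that particular instance.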
cheers
James

Similar Messages

  • Find long running database objects over week (without using V$ views)

    Find long running database objects over the week (without using V$ views), as V$ views contain information only for objects residing in main memory. I want to know which objects take the highest time within one week.

    Hello,
    welcome to the forum.
    This is the forum for the tool {forum:id=260}. Your question about V$ views should be posted in {forum:id=61} or {forum:id=75}. There is a FAQ {message:id=9360002}; note especially the part about providing essential information like the 4-digit version number :-)
    Regards
    Marcus

  • Tracking status of long running synchronous calls.

    Flows 1 --- async call ----> Flows 2 ------ async call --------> Flows 3 ----- sync call ----> Axis WS
    The last sync call can take a long time.
    In the previous situation, I can't see the evolution of Flows 1, 2 and 3...
    What I am trying to do:
    I use the scriptlet code setTitle() to show the flow status (RUNNING, USERTASK, SUCCESS) and thus the flow activity.
    My example :
    * Flows1: setTitle('USERTASK')
    * Flows1 is waiting for a user task.
    * The user responds...
    * Flows1: setTitle('RUNNING')
    * Flows1 calls Flows2 (Flows2 calls Flows3...)
    * I can't see the Flows1, 2, 3 status until Flows3 ends,
    so the user has to wait...
    The question is: how can I force a DB commit while the sync flow is being called?

    David,
    I created a project that mimics your use case and the title setting works as expected (= the title changes and is visible through the console as the process gets executed).
    Here is a link to a zip file that contains Flow1, Flow2 and Flow3 as described in your use case.
    http://www.collaxa.org/TitleProject.zip
    (deploy in the 3, 2, 1 order).
    Does the attached project accurately describe your use case?
    Thanks,
    Edwin

  • Error calling long running sync web service

    Hello
    I have defined a simple async process:
    assign (from process input to ws invoke input)
    sync ws invoke
    assign (from ws invoke output to process output)
    The sync web service is long running (a few minutes). In order to avoid a timeout I have set the sync-max-wait-time property accordingly.
    I have observed that after starting a process instance I can't see it in the BPEL Console (however I know it is working). Furthermore, when the invoke activity ends (so the instance state must be saved in the database) I can see the following error on the system console:
    <BaseCubeSessionBean::logError> Error while invoking bean "delivery": Cannot insert audit trail.
    The process domain was unable to insert the current log entries for the instance "2314" to the audit trail table. The exception reported is: RollbackException: The transaction has been marked for rollback (timed out)
    Please check that the machine hosting the datasource is physically connected to the network. Otherwise, check that the datasource connection parameters (user/password) is currently valid.
    sql statement: INSERT INTO audit_trail ( cikey, domain_ref, count_id, log ) VALUES ( ?, ?, ?, ? )
    <2005-01-06 13:14:10,268> <ERROR> <default.collaxa.cube.engine.dispatch> <BaseScheduledWorker::process> Failed to handle dispatch message invoke instance message 0c92305db480f36d:149b290:10147dc1b2e:-7fd6 ... exception Message handle error.
    An exception occurred while attempting to process the message "com.collaxa.cube.engine.dispatch.message.invoke.InvokeInstanceMessage"; the exception is: Transaction was rolled back: timed out; nested exception is: java.rmi.RemoteException: No Exception - originate from:java.lang.Exception: No Exception - originate from:; nested exception is:
    java.lang.Exception: No Exception - originate from:
    Best Regards

    Hi,
    There are 2 ways to work around the error that you are seeing.
    Option #1: increase the JTA timeout of the underlying application server.
    Option #2: in the bpel.xml of your process, add a "nonBlockingInvoke" property to the partnerLink representing the interaction with the long running service.
    Option #2 should also address the fact that the instance is not visible in the bpel console.
    Edwin

  • LONG running queries in Oracle database

    HI,
    How do I check long running queries in an Oracle database? And also, can you please tell me how to check the currently running queries in the DB?
    Thanks,
    Shyamu.A.

    you mean long ops...
    SELECT sid, to_char(start_time,'hh24:mi:ss') stime,
    message, ( sofar/totalwork )* 100 percent
    FROM v$session_longops
    WHERE sofar/totalwork < 1
    If you want locked objects:
    select
      object_name,
      object_type,
      session_id,
      type,           -- Type or system/user lock
      lmode,         -- lock mode in which session holds lock
      request,
      block,
      ctime           -- Time since current mode was granted
    from
      v$locked_object, all_objects, v$lock
    where
      v$locked_object.object_id = all_objects.object_id AND
      v$lock.id1 = all_objects.object_id AND
      v$lock.sid = v$locked_object.session_id
    order by
      session_id, ctime desc, object_name
    /
    Query for currently running queries more than 60 secs...
    select s.username,s.sid,s.serial#,s.last_call_et/60 mins_running,q.sql_text from v$session s
    join v$sqltext_with_newlines q
    on s.sql_address = q.address
    where status='ACTIVE'
    and type <>'BACKGROUND'
    and last_call_et> 60
    order by sid,serial#,q.piece

  • Long Running SQL and ORDS Spawns Multiple Database Sessions

    Hi all.
    We have a strange situation when accessing a long running SQL Report (a single APEX Page).
    The SQL takes about 15 mins to run but when I monitor what database sessions are spawned by the APEX Listener, I see multiple sessions all executing the same SQL. It appears that after 6 minutes, the APEX Listener spawns a new database session to execute the same SQL.
    Has anyone seen this before and if so, is there a key setting I am missing as I don't want this to happen. I am new to the APEX Listener and WebLogic so apologies if this is the way it's meant to work but it seems odd that after a certain amount of time (6 minutes in my case) a new database session is spawned to do the same work.
    We are running:
    WebLogic: 10.3.0.6
    APEX_LISTENER_VERSION 2.0.0.354.17.06
    Database: 11.2.0.3.0 Production
    APEX: 4.2.1.00.08
    Cheers for any help.
    Duncs

    Hi Duncan,
    With all respect, you should please rethink your interface.  I would never consider writing a Web application with a request that knowingly takes 15 minutes to return the results.  You can consider doing this asynchronously via DBMS_SCHEDULER and then alerting the user (via email, perhaps) that their results are ready.  Or if you can precompute this in advance, consider using materialized views so that the user's response time is sub-second.
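    As a rough illustration of the asynchronous route Joel describes, here is a minimal sketch (the procedure and package names are hypothetical) that submits the heavy work as a one-off DBMS_SCHEDULER job and returns immediately; DBMS_SCHEDULER.GENERATE_JOB_NAME keeps the job names unique across requests.
    CREATE OR REPLACE PROCEDURE run_report_async(p_user IN VARCHAR2) IS
      l_job_name VARCHAR2(128);
    BEGIN
      -- One job per request; GENERATE_JOB_NAME avoids name collisions
      l_job_name := DBMS_SCHEDULER.GENERATE_JOB_NAME('REPORT_JOB_');
      DBMS_SCHEDULER.CREATE_JOB(
        job_name   => l_job_name,
        job_type   => 'PLSQL_BLOCK',
        job_action => 'BEGIN report_pkg.build_report(''' || p_user || '''); END;',
        enabled    => TRUE,      -- start right away, run once
        auto_drop  => TRUE);     -- drop the job definition when it completes
    END run_report_async;
    /
    The job (or report_pkg.build_report itself) can then mail the user or set a flag when the results are ready, while the page that submitted it returns immediately.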
    In an era where the patience of the average end-user is measured in single-digit seconds, it is impractical to ever expect an end-user to wait 15 minutes for their resultant Web page.
    Joel

  • Call long running process and return immediately

    Hi everyone,
    Here's my problem: I am calling an onDemand application process when clicking on a button. It's a very long running process, and while waiting for the response my browser stops responding (white page).
    So, is there a way to call a process and return immediately to the JavaScript? The process will set a field to a given value when it finishes. On the JavaScript side, I can then check the value of this field in a while loop to know if the process has ended. But I don't know how to return just after the application process call...
    Thanks and best regards,
    Othman

    I presume that you can achieve that by means of a batch job.
    You can use an automatic page reload every X seconds until a certain "flag" (or similar mechanism) changes state, and check it in a before-header process. In the meantime you can display an animated GIF, for instance.
    Once the processing is completed you remove the reload timer from the header or you branch to a different page using a programmatic technique like procedure owa_util.redirect_url.
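    A minimal sketch of that flag check, assuming a hypothetical JOB_STATUS table that the batch job updates when it finishes and a hypothetical page item P10_JOB_ID holding the submitted job id; placed in a before-header PL/SQL process on the "please wait" page, it redirects once the work is done and otherwise lets the page (with its reload timer and animated GIF) render again.
    DECLARE
      l_done NUMBER;
    BEGIN
      SELECT COUNT(*)
        INTO l_done
        FROM job_status                 -- hypothetical status table
       WHERE job_id = :P10_JOB_ID
         AND status = 'COMPLETED';

      IF l_done > 0 THEN
        -- the flag has flipped: leave the waiting page for the results page
        owa_util.redirect_url('f?p=' || :APP_ID || ':11:' || :APP_SESSION);
      END IF;
    END;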
    Bye,
    Flavio
    http://www.oraclequirks.blogspot.com/search/label/Apex

  • Long running query--- included steps given by Randolf

    Hi,
    I have done my best to follow Randolf's instructions word by word and hope to get a solution for my
    problem soon. Some time back I posted a thread on this problem, then got busy with other
    stuff and was not able to follow it up. Here I am again with the same issue.
    Here is the link for my previous post:
    long running query in database 10g
    Here is the background of my requirement.
    I am working on Oracle Forms 10g, which is using the package given below. We want to display client information
    with order counts based on different statuses like Pending, Error, Back Order, Expedited and Std Shipping.
    The output will look something like:
    client name   pending    error   backorder   expedited   std shipping
    ABC            24         0       674          6789         78900
    XYZ            35         673    5700           0           798274
    There are 40 clients in total. The long running queries are Expedited and Std Shipping.
    When I run the package from Oracle Forms Developer it takes 3 minutes to run, but when I run the same query in our application using Forms
    (which uses Oracle Application Server) it takes around 1 hour, which is completely unacceptable.
    The user wants it to be done in less than 1 minute.
    I have tried combining the Pending, Error and Back Order queries together, but as far as I know that will not
    work in Oracle Forms as we need a placeholder for each status.
    Please don't think this is a Forms related question; it is a performance problem.
    PACKAGE BODY ORDER_COUNT_PKG IS
    PROCEDURE post_query IS
      BEGIN
           BEGIN
                SELECT count(*)
                  INTO :ORDER_STATUS.PENDING
                  FROM orders o
              WHERE o.status = 'P'
                   AND (parent_order_id is null
                    OR  (order_type='G'
                         AND parent_order_id=original_order_number))
                      AND  o.client = :ORDER_STATUS.CLIENT_NUMBER;
         EXCEPTION
           WHEN OTHERS THEN
           NULL;
           END;
             BEGIN
                 SELECT  count(*)
                   INTO  :ORDER_STATUS.ERROR
                   FROM  orders o
                  WHERE  o.status = 'E'
                    AND  (parent_order_id is null
                     OR  (order_type='G'
                           AND parent_order_id=original_order_number))
                       AND  o.client = :ORDER_STATUS.CLIENT_NUMBER;
                EXCEPTION
           WHEN OTHERS THEN
           NULL;     
            END;
           BEGIN
                SELECT count(*)
                  INTO :ORDER_STATUS.BACK_ORDER
                  FROM orders o
              WHERE o.status = 'B'
                   AND (parent_order_id is null
                    OR (order_type='G'
                         AND parent_order_id=original_order_number))
                      AND o.client = :ORDER_STATUS.CLIENT_NUMBER;
          EXCEPTION
           WHEN OTHERS THEN
           NULL;   
         END;
           BEGIN
                SELECT count(*)
                  INTO :ORDER_STATUS.EXPEDITE
                  FROM orders o,shipment_type_methods stm
                 WHERE o.status in ('A','U')
             AND (o.parent_order_id is null
              OR (o.order_type = 'G'
             AND o.parent_order_id = o.original_order_number))
             AND o.client = stm.client
             AND o.shipment_class_code = stm.shipment_class_code
             AND (nvl(o.priority,'1') = '2'
              OR  stm.surcharge_amount <> 0)
                      AND  o.client = :ORDER_STATUS.CLIENT_NUMBER
              GROUP BY  o.client;
              EXCEPTION
           WHEN OTHERS THEN
           NULL;          
         END;           
           BEGIN
                SELECT count(*)
                  INTO :ORDER_STATUS.STD_SHIP
                  FROM  orders o,shipment_type_methods stm
                 WHERE o.status in ('A','U')
             AND  (o.parent_order_id is null
              OR (o.order_type = 'G'
             AND o.parent_order_id = o.original_order_number))
             AND nvl(o.priority,'1') <> '2'
             AND o.client = stm.client
             AND o.shipment_class_code = stm.shipment_class_code
             AND stm.surcharge_amount = 0
                      AND o.client = :ORDER_STATUS.CLIENT_NUMBER
              GROUP BY o.client;
          EXCEPTION
           WHEN OTHERS THEN
           NULL;
           END;
      END post_query;
      END ORDER_COUNT_PKG;
    One of the queries which is taking a long time is:
    SELECT count(*)
                   FROM  orders o,shipment_type_methods stm
                 WHERE o.status in ('A','U')
             AND  (o.parent_order_id is null
              OR (o.order_type = 'G'
             AND o.parent_order_id = o.original_order_number))
             AND nvl(o.priority,'1') <> '2'
             AND o.client = stm.client
             AND o.shipment_class_code = stm.shipment_class_code
             AND stm.surcharge_amount = 0
               AND o.client = :CLIENT_NUMBER
          GROUP BY o.client
    The version of the database is 10.2.1.0.2
    SQL> alter session force parallel dml;
    These are the parameters relevant to the optimizer:
    SQL> show parameter user_dump_dest
    NAME                                 TYPE        VALUE
    user_dump_dest                       string      /u01/app/oracle/admin/mcgemqa/
                                                     udump
    SQL> show parameter optimizer
    NAME                                 TYPE        VALUE
    optimizer_dynamic_sampling           integer     2
    optimizer_features_enable            string      10.2.0.4
    optimizer_index_caching              integer     0
    optimizer_index_cost_adj             integer     100
    optimizer_mode                       string      ALL_ROWS
    optimizer_secure_view_merging        boolean     TRUE
    SQL> show parameter db_file_multi
    NAME                                 TYPE        VALUE
    db_file_multiblock_read_count        integer     16
    SQL> show parameter db_block_size
    NAME                                 TYPE        VALUE
    db_block_size                        integer     8192
    SQL> show parameter cursor_sharing
    NAME                                 TYPE        VALUE
    cursor_sharing                       string      EXACT
    Here is the output of EXPLAIN PLAN:
    SQL> explain plan for
      2  SELECT count(*)
      3         FROM  orders o,shipment_type_methods stm
      4       WHERE o.status in ('A','U')
      5           AND  (o.parent_order_id is null
      6            OR (o.order_type = 'G'
      7           AND o.parent_order_id = o.original_order_number))
      8           AND nvl(o.priority,'1') <> '2'
      9           AND o.client = stm.client
    10           AND o.shipment_class_code = stm.shipment_class_code
    11           AND stm.surcharge_amount = 0
    12     AND o.client = :CLIENT_NUMBER
    13    GROUP BY o.client
    14  /
    Explained.
    Elapsed: 00:00:00.12
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 559278019
    | Id  | Operation                      | Name                    | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT               |                         |     1 |    35 | 46764   (3)| 00:09:22 |
    |   1 |  SORT GROUP BY NOSORT          |                         |     1 |    35 | 46764   (3)| 00:09:22 |
    |*  2 |   TABLE ACCESS BY INDEX ROWID  | ORDERS                  |   175K|  3431K| 25979   (3)| 00:05:12 |
    |   3 |    NESTED LOOPS                |                         | 25300 |   864K| 46764   (3)| 00:09:22 |
    |*  4 |     TABLE ACCESS BY INDEX ROWID| SHIPMENT_TYPE_METHODS   |     1 |    15 |     2   (0)| 00:00
    |*  5 |      INDEX RANGE SCAN          | U_SHIPMENT_TYPE_METHODS |     2 |       |     1   (0)| 00:00:01 |
    |*  6 |     INDEX RANGE SCAN           | ORDERS_ORDER_DATE       |   176K|       |  2371   (8)| 00:00:29 |
    Predicate Information (identified by operation id):
       2 - filter(("O"."PARENT_ORDER_ID" IS NULL OR "O"."ORDER_TYPE"='G' AND
                  "O"."PARENT_ORDER_ID"=TO_NUMBER("O"."ORIGINAL_ORDER_NUMBER")) AND NVL("O"."PRIORITY",'1')<>'2
                  AND "O"."SHIPMENT_CLASS_CODE"="STM"."SHIPMENT_CLASS_CODE")
       4 - filter("STM"."SURCHARGE_AMOUNT"=0)
       5 - access("STM"."CLIENT"=:CLIENT_NUMBER)
       6 - access("O"."CLIENT"=:CLIENT_NUMBER)
           filter("O"."STATUS"='A' OR "O"."STATUS"='U')
    24 rows selected.
    Elapsed: 00:00:00.86
    SQL> rollback;
    Rollback complete.
    Elapsed: 00:00:00.07
    Here is the output of SQL*Plus AUTOTRACE including the TIMING information:
    SQL> SELECT count(*)
      2         FROM  orders o,shipment_type_methods stm
      3       WHERE o.status in ('A','U')
      4           AND  (o.parent_order_id is null
      5            OR (o.order_type = 'G'
      6           AND o.parent_order_id = o.original_order_number))
      7           AND nvl(o.priority,'1') <> '2'
      8           AND o.client = stm.client
      9           AND o.shipment_class_code = stm.shipment_class_code
    10           AND stm.surcharge_amount = 0
    11     AND o.client = :CLIENT_NUMBER
    12    GROUP BY o.client
    13  /
    Elapsed: 00:00:03.09
    Execution Plan
    Plan hash value: 559278019
    | Id  | Operation                      | Name                    | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT               |                         |     1 |    35 | 46764   (3)| 00:09:22 |
    |   1 |  SORT GROUP BY NOSORT          |                         |     1 |    35 | 46764   (3)| 00:09:22 |
    |*  2 |   TABLE ACCESS BY INDEX ROWID  | ORDERS                  |   175K|  3431K| 25979   (3)| 00:05:12 |
    |   3 |    NESTED LOOPS                |                         | 25300 |   864K| 46764   (3)| 00:09:22 |
    |*  4 |     TABLE ACCESS BY INDEX ROWID| SHIPMENT_TYPE_METHODS   |     1 |    15 |     2   (0)| 00:00
    |*  5 |      INDEX RANGE SCAN          | U_SHIPMENT_TYPE_METHODS |     2 |       |     1   (0)| 00:00:01 |
    |*  6 |     INDEX RANGE SCAN           | ORDERS_ORDER_DATE       |   176K|       |  2371   (8)| 00:00:29 |
    Predicate Information (identified by operation id):
       2 - filter(("O"."PARENT_ORDER_ID" IS NULL OR "O"."ORDER_TYPE"='G' AND
                  "O"."PARENT_ORDER_ID"=TO_NUMBER("O"."ORIGINAL_ORDER_NUMBER")) AND NVL("O"."PRIORITY",'1')<>'2
                  AND "O"."SHIPMENT_CLASS_CODE"="STM"."SHIPMENT_CLASS_CODE")
       4 - filter("STM"."SURCHARGE_AMOUNT"=0)
       5 - access("STM"."CLIENT"=:CLIENT_NUMBER)
       6 - access("O"."CLIENT"=:CLIENT_NUMBER)
           filter("O"."STATUS"='A' OR "O"."STATUS"='U')
    Statistics
             55  recursive calls
              0  db block gets
           7045  consistent gets
              0  physical reads
              0  redo size
            206  bytes sent via SQL*Net to client
            238  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL> disconnect
    Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    SQL>
    The TKPROF output for this statement looks like the following:
    SELECT count(*)
           FROM  orders o,shipment_type_methods stm
         WHERE o.status in ('A','U')
             AND  (o.parent_order_id is null
              OR (o.order_type = 'G'
             AND o.parent_order_id = o.original_order_number))
             AND nvl(o.priority,'1') <> '2'
             AND o.client = stm.client
             AND o.shipment_class_code = stm.shipment_class_code
             AND stm.surcharge_amount = 0
       AND o.client = :CLIENT_NUMBER
      GROUP BY o.client
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.01       0.00          0          0          0           0
    Execute      1      0.04       0.04          0          0          0           0
    Fetch        2      2.96       2.91          0       7039          0           1
    total        4      3.01       2.95          0       7039          0           1
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 95 
    Rows     Row Source Operation
          1  SORT GROUP BY NOSORT (cr=7039 pr=0 pw=0 time=2913701 us)
         91   TABLE ACCESS BY INDEX ROWID ORDERS (cr=7039 pr=0 pw=0 time=261997906 us)
         93    NESTED LOOPS  (cr=6976 pr=0 pw=0 time=20740 us)
          1     TABLE ACCESS BY INDEX ROWID SHIPMENT_TYPE_METHODS (cr=2 pr=0 pw=0 time=208 us)
          3      INDEX RANGE SCAN U_SHIPMENT_TYPE_METHODS (cr=1 pr=0 pw=0 time=88 us)(object id 81957)
         91     INDEX RANGE SCAN ORDERS_ORDER_DATE (cr=6974 pr=0 pw=0 time=70 us)(object id 81547)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                       2        0.00          0.00
      SQL*Net message from client                     2        0.02          0.02
    ********************************************************************************
    The DBMS_XPLAN.DISPLAY_CURSOR output:
    SQL> variable CLIENT_NUMBER varchar2(20)
    SQL> exec :CLIENT_NUMBER := '14'
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:00.06
    SQL> SELECT /*+ gather_plan_statistics */ count(*)
      2         FROM  orders o,shipment_type_methods stm
      3       WHERE o.status in ('A','U')
      4           AND  (o.parent_order_id is null
      5            OR (o.order_type = 'G'
      6           AND o.parent_order_id = o.original_order_number))
      7           AND nvl(o.priority,'1') <> '2'
      8           AND o.client = stm.client
      9           AND o.shipment_class_code = stm.shipment_class_code
    10           AND stm.surcharge_amount = 0
    11     AND o.client = :CLIENT_NUMBER
    12    GROUP BY o.client
    13  /
      COUNT(*)
            91
    Elapsed: 00:00:02.85
    SQL> set termout on
    SQL> select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));
    PLAN_TABLE_OUTPUT
    SQL_ID  4nfj368y8w6a3, child number 0
    SELECT /*+ gather_plan_statistics */ count(*)        FROM  orders o,shipment_type_methods stm      WHERE
    o.status in ('A','U')          AND  (o.parent_order_id is null           OR (o.order_type = 'G'
    AND o.parent_order_id = o.original_order_number))          AND nvl(o.priority,'1') <> '2'          AND
    o.client = stm.client          AND o.shipment_class_code = stm.shipment_class_code          AND
    stm.surcharge_amount = 0    AND o.client = :CLIENT_NUMBER   GROUP BY o.client
    Plan hash value: 559278019
    | Id  | Operation                      | Name                    | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
    |   1 |  SORT GROUP BY NOSORT          |                         |      1 |      1 |      1 |00:00:02.63 |    7039 |
    |*  2 |   TABLE ACCESS BY INDEX ROWID  | ORDERS                  |      1 |    175K|     91 |00:03:56.87 |    7039 |
    |   3 |    NESTED LOOPS                |                         |      1 |  25300 |     93 |00:00:00.02 |    6976 |
    |*  4 |     TABLE ACCESS BY INDEX ROWID| SHIPMENT_TYPE_METHODS   |      1 |      1 |      1 |00:00:00.01 |       2 |
    |*  5 |      INDEX RANGE SCAN          | U_SHIPMENT_TYPE_METHODS |      1 |      2 |      3 |00:00:00.01 |       1 |
    |*  6 |     INDEX RANGE SCAN           | ORDERS_ORDER_DATE       |      1 |    176K|     91 |00:00:00.01 |    6974 |
    Predicate Information (identified by operation id):
       2 - filter((("O"."PARENT_ORDER_ID" IS NULL OR ("O"."ORDER_TYPE"='G' AND
                  "O"."PARENT_ORDER_ID"=TO_NUMBER("O"."ORIGINAL_ORDER_NUMBER"))) AND NVL("O"."PRIORITY",'1')<>'
                  "O"."SHIPMENT_CLASS_CODE"="STM"."SHIPMENT_CLASS_CODE"))
       4 - filter("STM"."SURCHARGE_AMOUNT"=0)
       5 - access("STM"."CLIENT"=:CLIENT_NUMBER)
       6 - access("O"."CLIENT"=:CLIENT_NUMBER)
           filter(("O"."STATUS"='A' OR "O"."STATUS"='U'))
    32 rows selected.
    Elapsed: 00:00:01.30
    SQL>
    I'm looking forward to suggestions on how to improve the performance of this statement.
    Thanks
    Sandy

    Please find explain plan for No hint
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 559278019
    | Id  | Operation                      | Name                    | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT               |                         |     1 |    35 | 46764   (3)| 00:09:22 |
    |   1 |  SORT GROUP BY NOSORT          |                         |     1 |    35 | 46764   (3)| 00:09:22 |
    |*  2 |   TABLE ACCESS BY INDEX ROWID  | ORDERS                  |   175K|  3431K| 25979   (3)| 00:05:12 |
    |   3 |    NESTED LOOPS                |                         | 25300 |   864K| 46764   (3)| 00:09:22 |
    |*  4 |     TABLE ACCESS BY INDEX ROWID| SHIPMENT_TYPE_METHODS   |     1 |    15 |     2   (0)| 00:00
    |*  5 |      INDEX RANGE SCAN          | U_SHIPMENT_TYPE_METHODS |     2 |       |     1   (0)| 00:00:01 |
    |*  6 |     INDEX RANGE SCAN           | ORDERS_ORDER_DATE       |   176K|       |  2371   (8)| 00:00:29 |
    Predicate Information (identified by operation id):
       2 - filter(("O"."PARENT_ORDER_ID" IS NULL OR "O"."ORDER_TYPE"='G' AND
                  "O"."PARENT_ORDER_ID"=TO_NUMBER("O"."ORIGINAL_ORDER_NUMBER")) AND NVL("O"."PRIORITY",'1')<>'2
                  AND "O"."SHIPMENT_CLASS_CODE"="STM"."SHIPMENT_CLASS_CODE")
       4 - filter("STM"."SURCHARGE_AMOUNT"=0)
       5 - access("STM"."CLIENT"=:CLIENT_NUMBER)
       6 - access("O"."CLIENT"=:CLIENT_NUMBER)
           filter("O"."STATUS"='A' OR "O"."STATUS"='U')
    24 rows selected.
    Elapsed: 00:00:00.86
    Explain Plan for Parallel Hint
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 559278019
    | Id  | Operation                      | Name                    | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT               |                         |     1 |    35 | 46764   (3)| 00:09:22 |
    |   1 |  SORT GROUP BY NOSORT          |                         |     1 |    35 | 46764   (3)| 00:09:22 |
    |*  2 |   TABLE ACCESS BY INDEX ROWID  | ORDERS                  |   175K|  3431K| 25979   (3)| 00:05:12 |
    |   3 |    NESTED LOOPS                |                         | 25300 |   864K| 46764   (3)| 00:09:22 |
    |*  4 |     TABLE ACCESS BY INDEX ROWID| SHIPMENT_TYPE_METHODS   |     1 |    15 |     2   (0)| 00:00
    |*  5 |      INDEX RANGE SCAN          | U_SHIPMENT_TYPE_METHODS |     2 |       |     1   (0)| 00:00:01 |
    |*  6 |     INDEX RANGE SCAN           | ORDERS_ORDER_DATE       |   176K|       |  2371   (8)| 00:00:29 |
    Predicate Information (identified by operation id):
       2 - filter(("O"."PARENT_ORDER_ID" IS NULL OR "O"."ORDER_TYPE"='G' AND
                  "O"."PARENT_ORDER_ID"=TO_NUMBER("O"."ORIGINAL_ORDER_NUMBER")) AND NVL("O"."PRIORITY",'1')<>'2
                  AND "O"."SHIPMENT_CLASS_CODE"="STM"."SHIPMENT_CLASS_CODE")
       4 - filter("STM"."SURCHARGE_AMOUNT"=0)
       5 - access("STM"."CLIENT"='14')
       6 - access("O"."CLIENT"='14')
           filter("O"."STATUS"='A' OR "O"."STATUS"='U')
    24 rows selected.
    Elapsed: 00:00:08.92
    Explain Plan for USE_HASH hint
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 1465232248
    | Id  | Operation                     | Name                    | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT              |                         |     1 |    35 | 46786   (3)| 00:09:22 |
    |   1 |  SORT GROUP BY NOSORT         |                         |     1 |    35 | 46786   (3)| 00:09:22 |
    |*  2 |   HASH JOIN                   |                         | 25300 |   864K| 46786   (3)| 00:09:22 |
    |*  3 |    TABLE ACCESS BY INDEX ROWID| SHIPMENT_TYPE_METHODS   |     1 |    15 |     2   (0)| 00:00:0
    |*  4 |     INDEX RANGE SCAN          | U_SHIPMENT_TYPE_METHODS |     2 |       |     1   (0)| 00:00:01 |
    |*  5 |    TABLE ACCESS BY INDEX ROWID| ORDERS                  |   175K|  3431K| 46763   (3)| 00:09:22 |
    |*  6 |     INDEX RANGE SCAN          | ORDERS_ORDER_DATE       |   176K|       |  4268   (8)| 00:00:52 |
    Predicate Information (identified by operation id):
       2 - access("O"."CLIENT"="STM"."CLIENT" AND "O"."SHIPMENT_CLASS_CODE"="STM"."SHIPMENT_CLASS_COD
                  E")
       3 - filter("STM"."SURCHARGE_AMOUNT"=0)
       4 - access("STM"."CLIENT"='14')
       5 - filter(("O"."PARENT_ORDER_ID" IS NULL OR "O"."ORDER_TYPE"='G' AND
                  "O"."PARENT_ORDER_ID"=TO_NUMBER("O"."ORIGINAL_ORDER_NUMBER")) AND NVL("O"."PRIORITY",'1')<>'2
       6 - access("O"."CLIENT"='14')
           filter("O"."STATUS"='A' OR "O"."STATUS"='U')
    25 rows selected.
    Elapsed: 00:00:01.09
    SQL>
    Thanks
    Sandy

  • How to delay a long running task's start until the display is updated?

    I am having a problem in that a progress bar I want to use to show progress of a long running background process is not showing up for a long time (up to 10 seconds) after the long running process is started. This is in an AIR application and the background process is an external native process, so once it is launched the UI thread is free to run, but the launch of the process can take time.
    Below is the current state of the relevant code.
    In addition to the current format I have also tried using the CREATION_COMPLETE, EXIT_FRAME and RENDER events with the same results.
    If I up the value in setTimeout to 500ms the progress bar displays quickly, but I would prefer to not delay the launch of the background process for no reason.
    If I comment out the loadProject call the progress bar is displayed instantly.
    Any help is appreciated.
    private function continueLoad(evt:Event):void
    {
         // We are about to start some potentially long running process
         CursorManager.setBusyCursor();
         curPopup = new SyncProgress();
         curPopup.addEventListener(Event.ENTER_FRAME, popupLoadedHandler);
         PopUpManager.addPopUp(curPopup, parentView, true);
         PopUpManager.centerPopUp(curPopup);
         curPopup.stage.invalidate();
    }
    private function popupLoadedHandler(event:Event):void
    {
         curPopup.removeEventListener(Event.ENTER_FRAME, popupLoadedHandler);
         setTimeout(function():void{syncManager.loadProject(mainViewModel.selectedUserItem.id, projectFile.nativePath, overwrite);}, 0);
    }

    DBMS_SCHEDULER is very powerful and can be a bit unwieldy. I tend to use DBMS_SCHEDULER for jobs which are purely 'in the database' i.e. not specifically APEX-related - in addition, I find it's better for stuff that needs to be run regularly without human intervention (some sort of refresh process, daily cleanup etc).
    If you are intending to run this process from APEX as a pseudo "on demand" process (i.e. generated by a user request) and have quite simple requirements (e.g. there are no dependencies on other jobs), it might be worth checking out the APEX scheduling API - namely the package APEX_PLSQL_JOB:
    http://download.oracle.com/docs/cd/E14373_01/apirefs.32/e13369/apex_plsql_job.htm#BGBCJIJI
    It generates a unique job number which you can use to reference its progress - plus it's much simpler to use.
    p.s. using DBMS_SCHEDULER, yes the job name has to be unique but you can generate one by either using a sequence or data not likely to be repeated, like the current timestamp.
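    For illustration, a minimal sketch of submitting work through APEX_PLSQL_JOB from a page process (LONG_RUNNING_PROC and the page item are hypothetical); SUBMIT_PROCESS returns the job number, which can be kept to look the job up later.
    DECLARE
      l_job NUMBER;
    BEGIN
      l_job := APEX_PLSQL_JOB.SUBMIT_PROCESS(
                 p_sql    => 'BEGIN long_running_proc; END;',
                 p_status => 'Submitted from the parameter page');
      :P10_JOB_ID := l_job;   -- remember the job number to check progress later
    END;
    The stored job number can then be used to check on the job, for example by querying the APEX_PLSQL_JOBS view.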

  • Are multiple database calls really significant with a network call for a web API?

    At one of my employers, we worked on a REST (but it also applies to SOAP) API. The client, which is the application UI, would make calls over the web (LAN in typical production deployments) to the API. The API would make calls to the database.
    One theme that recurs in our discussions is performance: some people on the team believe that you should not have multiple database calls (usually reads) from a single API call because of performance; you should optimize them so that each API call has only
    (exactly) one database call.
    But is that really important? Consider that the UI has to make a network call to the API; that's pretty big (order of magnitude of milliseconds). Databases are optimized to keep things in memory and execute reads very, very quickly (eg. SQL Server loads and
    keeps everything in RAM and consumes almost all your free RAM if it can).
    TLDR: Is it really significant to worry about multiple database calls when we are already making a network call over the LAN? If so, why?
    To be clear, I'm talking about order of magnitude -- I know that it depends on specifics (machine hardware, choice of API and DB, etc.) If I have a call that takes O(milliseconds), does optimizing for DB calls that take an order of magnitude less, actually
    matter? Or is there more to the problem than this?
    Edit: for posterity, I think it's quite ridiculous to make claims that we need to improve performance by combining database calls under these circumstances -- especially
    with a lack of profiling. However, it's not my decision whether we do this or not; I want to know what the rationale is behind thinking this is a correct way of optimizing web API calls.

    But is that really important? Consider that the UI has to make a network call to the API; that's pretty big (order of magnitude of milliseconds). Databases are optimized to keep things in memory
    and execute reads very, very quickly (eg. SQL Server loads and keeps everything in RAM and consumes almost all your free RAM if it can).
    The Logic
    In theory, you are correct. However, there are a few flaws with this rationale:
    From what you stated, it's unclear if you actually tested / profiled your app. In other words, do you actually know that
    the network transfers from the app to the API are the slowest component? Because that is intuitive, it is easy to assume that it is. However, when discussing performance, you should never assume. At my employer, I am the performance lead. When I first joined,
    people kept talking about CDN's, replication, etc. based on intuition about what the bottlenecks must be. Turns out, our biggest performance problems were poorly performing database queries.
    You are saying that because databases are good at retrieving data, that the database is necessarily running at peak performance, is being used optimally, and there is nothing that can be done
    to improve it. In other words, databases are designed to be fast, so I should never have to worry about it. Another dangerous line of thinking. That's like saying a car is meant to move quickly, so I don't need to change the oil.
    This way of thinking assumes a single process at a time, or put another way, no concurrency. It assumes that one request cannot influence another request's performance. Resources are shared,
    such as disk I/O, network bandwidth, connection pools, memory, CPU cycles, etc. Therefore, reducing one database call's use of a shared resource can prevent it from causing other requests to slow down. When I first joined my current employer, management believed
    that tuning a 3 second database query was a waste of time. 3 seconds is so little, why waste time on it? Wouldn't we be better off with a CDN or compression or something else? But if I can make a 3 second query run in 1 second, say by adding an index, that
    is 2/3 less blocking, 2/3 less time spent occupying a thread, and more importantly, less data read from disk, which means less data flushed out of the in-RAM cache.
    The Theory
    There is a common conception that software performance is simply about speed.
    From a purely speed perspective, you are right. A system is only as fast as its slowest component. If you have profiled your code and found that the Internet is the slowest component, then everything else is obviously not the slowest part.
    However, given the above, I hope you can see how resource contention, lack of indexing, poorly written code, etc. can create surprising differences in performance.
    The Assumptions
    One last thing. You mentioned that a database call should be cheap compared to a network call from the app to the API. But you also mentioned that the app and the API servers are in the same LAN. Therefore, aren't both of them comparable as network calls? In
    other words, why are you assuming that the API transfer is orders of magnitude slower than the database transfer when they both have the same available bandwidth? Of course the protocols and data structures are different, I get that, but I dispute the assumption
    that they are orders of magnitude different.
    Where it gets murky
    This whole question is about "multiple" versus "single" database calls. But it's unclear how many are multiple. Because of what I said above, as a general rule of thumb, I recommend making as few database calls as necessary. But that is
    only a rule of thumb.
    Here is why:
    Databases are great at reading data. They are storage engines. However, your business logic lives in your application. If you make a rule that every API call results in exactly one database call, then your business logic may end up in the database. Maybe that
    is ok. A lot of systems do that. But some don't. It's about flexibility.
    Sometimes to achieve good decoupling, you want to have 2 database calls separated. For example, perhaps every HTTP request is routed through a generic security filter which validates from the DB that the user has the right access rights. If they do, proceed
    to execute the appropriate function for that URL. That function may interact with the database.
    Calling the database in a loop. This is why I asked how many is multiple. In the example above, you would have 2 database calls. 2 is fine. 3 may be fine. N is not fine. If you call the database in a loop, you have now made performance linear, which means it
    will take longer the more that is in the loop's input. So categorically saying that the API network time is the slowest completely overlooks anomalies like 1% of your traffic taking a long time due to a not-yet-discovered loop that calls the database 10,000
    times.
    Sometimes there are things your app is better at, like some complex calculations. You may need to read some data from the database, do some calculations, then based on the results, pass a parameter to a second database call (maybe to write some results). If
    you combine those into a single call (like a stored procedure) just for the sake of only calling the database once, you have forced yourself to use the database for something which the app server might be better at.
    Load balancing: You have 1 database (presumably) and multiple load balanced application servers. Therefore, the more work the app does and the less the database does, the easier it is to scale because it's generally easier to add an app server than setup database
    replication. Based on the previous bullet point, it may make sense to run a SQL query, then do all the calculations in the application, which is distributed across multiple servers, and then write the results when finished. This could give better throughput
    (even if the overall transaction time is the same).
    TL;DR
    TLDR: Is it really significant to worry about multiple database calls when we are already making a network call over the LAN? If so, why?
    Yes, but only to a certain extent. You should try to minimize the number of database calls when practical, but don't combine calls which have nothing to do with each other just for the sake of combining them. Also, avoid calling the database in a loop at all
    costs.

  • Cancel long running statement in Oracle Lite (OLITE_10.3.0.3.0 olite40.jar)

    On a JDBC Statement there is the method 'cancel' to instruct the database to cancel an executing statement. This works fine on an Oracle server database, but not on an Oracle Lite database. The method call 'cancel' just blocks and the running statement is never interrupted.
    The example I tried is very simple. A thread is started which executes a long running statement. I noticed that when moving the cursor forward by calling rs.next(), it just blocks. That would be OK if the statement could be canceled from within the main thread by stmt.cancel. But this call blocks as well, with no impact on the running statement. Why is that? Am I missing something, or is it not possible to cancel a long running statement in Oracle Lite?
    My code snippet follows:
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class CancelStatement {
         private static final String PATH_DB = "XX";
         private static final String PATH_LIB = "XX";
         private static final String CON_STRING = "jdbc:polite:whatever;DataDirectory=" + PATH_DB + ";Database=XX;IsolationLevel=Read Committed;Autocommit=Off;CursorType=Forward Only";
         private static final String USER = "XX";
         private static final String PASSWORD = "XX";

         public static void main(String args[]) throws Exception {
              System.setProperty("java.library.path", PATH_LIB);
              Class.forName("oracle.lite.poljdbc.POLJDBCDriver");
              Connection con = DriverManager.getConnection(CON_STRING, USER, PASSWORD);
              Statement stmt = con.createStatement();
              Thread thread = new Thread(new LongStatementRunnable(con, stmt));
              thread.start();
              Thread.sleep(3000);
              // stop long running statement
              System.out.println("cancel long running statement");
              stmt.cancel(); // XXX does not work, as call is blocked until out of memory
              System.out.println("statement canceled");
         }

         private static class LongStatementRunnable implements Runnable {
              private Connection con;
              private Statement stmt;

              public LongStatementRunnable(Connection con, Statement stmt) {
                   this.con = con;
                   this.stmt = stmt;
              }

              @Override
              public void run() {
                   try {
                        System.out.println("start long running statement...");
                        // execute long running statement
                        ResultSet rs = stmt.executeQuery("SELECT * FROM PERSON P1, PERSON P2");
                        while (rs.next()) { // here the execution gets blocked
                             System.out.println("row"); // is never entered
                        }
                        rs.close();
                        stmt.close();
                        con.close();
                        System.out.println("long running statement finished...");
                   } catch (Exception e) {
                        e.printStackTrace();
                   }
              }
         }
    }
    I would be very glad if you could help me.
    Thanks a lot
    Daniel
    Edited by: 861793 on 26.05.2011 14:29

    Unfortunately Oracle Lite doesn't have this option. You can call your statement from a second thread as you have done, but you won't be able to kill or cancel this operation. The only way to fix this is by rebooting. You can use Process Explorer to find the dll process that is executing the SQL, but the tables will be locked, and sync will be locked as well, until the process has finished running in shared memory.

  • Handling long running processes in APEX

    Hello Experts,
    I need your help!
    I would like to call a long running PL/SQL procedure from an APEX app. So I built a page for the parameters with the wizard - all fine.
    Pressing the button starts the procedure - that's also fine.
    Problem: while the procedure is running, there is no activity signaled in the browser. There is no way to see that something is running in the background.
    After the procedure finishes, the branch to another page takes place - so the "after process" action is working as desired.
    I tried "javascript:html_Submit_Progress(this);" - but this is not a solution because there is only a very short time to reload the page itself. The procedure needs much more time.
    Is there a way to show the user, that background action is running?
    Any Ideas?
    Thanks for your help!
    Kind regards,
    Frank

    A few things to keep in mind. The animated gif solutions don't show a true progress bar indicator, it's just an animation on a loop. If the process takes too long, you will hit the session timeout value defined for Oracle HTTP Server.
    Another thought is this. In the code that is the long running process, make calls to [dbms_application_info.set_session_longops|http://download.oracle.com/docs/cd/B28359_01/appdev.111/b28419/d_appinf.htm#i996999] that indicate the real progress in the code. Then have a page that queries v$session_longops and refreshes every x (say 5 seconds). That page could even have an animated gif on it to indicate there is still activity.
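    A minimal sketch of that idea, with a hypothetical loop and operation name standing in for the real work: the long running code reports its progress through DBMS_APPLICATION_INFO.SET_SESSION_LONGOPS, and the refreshing page simply queries V$SESSION_LONGOPS for that operation.
    DECLARE
      l_rindex    BINARY_INTEGER := dbms_application_info.set_session_longops_nohint;
      l_slno      BINARY_INTEGER;
      l_totalwork NUMBER := 1000;       -- total units of work, known up front
    BEGIN
      FOR i IN 1 .. l_totalwork LOOP
        -- ... one unit of the real work goes here ...
        dbms_application_info.set_session_longops(
          rindex    => l_rindex,
          slno      => l_slno,
          op_name   => 'Nightly order load',   -- appears in v$session_longops
          sofar     => i,
          totalwork => l_totalwork,
          units     => 'rows');
      END LOOP;
    END;
    /
    -- The refreshing progress page (or region) then runs something like:
    SELECT opname, sofar, totalwork, round(100 * sofar / totalwork) pct_done
      FROM v$session_longops
     WHERE opname = 'Nightly order load'
       AND sofar <> totalwork;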
    Tyler Muth
    http://tylermuth.wordpress.com
    [Applied Oracle Security: Developing Secure Database and Middleware Environments|http://www.amazon.com/gp/product/0071613706?ie=UTF8&tag=tylsblo-20&linkCode=as2&camp=1789&creative=9325&creativeASIN=0071613706]

  • Explicit commit during a long-running transaction in EclipseLink

    Hi,
    I am currently upgrading a J2EE application from OAS with Toplink Essentials to WL 10.3.3 with Eclipselink and have the following issue with transactions.
    The application was developed to have long-running transactions for business reasons in specific scenarios. However, some other queries must be created and committed along the way to make sure that we have this specific data in the database before the final commit. This call (and subsequent code) is in an EJB method that has the "@TransactionAttribute(TransactionAttributeType.REQUIRED)" defined on it. Taking this out gives me the same behaviour.
    The application has the following implementation of the process, which fails:
    Code
    EntityManager em = PersistenceUtil.createEntityManager();
    em.getTransaction().begin();
    PersistenceUtil.saveOrUpdate(em,folder);
    em.getTransaction().commit(); --->>>>FAILS HERE
    Error
    javax.ejb.EJBTransactionRolledbackException: EJB Exception: : javax.persistence.RollbackException: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.0.2.v20100323-r6872): org.eclipse.persistence.exceptions.DatabaseException Internal Exception: java.sql.SQLException: Cannot call Connection.rollback in distributed transaction. Transaction Manager will commit the resource manager when the distributed transaction is committed.
    So I tried the following to see if it would work, but I believe that the transaction will end and anything after that will fail, since it requires a transaction to continue:
    PersistenceUtil.getUnitOfWork().writeChanges();
    PersistenceUtil.getUnitOfWork().commit();
    Error
    javax.persistence.TransactionRequiredException: joinTransaction has been called on a resource-local EntityManager which is unable to register for a JTA transaction.
    Can anyone help me as to how to commit a transaction within the long running transaction in this environment? I also want to be sure that the long-running transaction does not fail or is not stopped along the way.
    Thanking you in advance

    You seem to be using JTA, so you cannot use JPA transactions; you must define your transaction in JTA, such as in your SessionBean.
    When using JTA you should never use:
    em.getTransaction().begin();
    If you do not want to use JTA, then you need to set your persistence unit to be RESOURCE_LOCAL in your persistence.xml.
    Also you need to ensure you use a non-jta enabled DataSource.
    James : http://www.eclipselink.org

  • Long running ACT_UPG during EHP4 (SP-Stack 08)

    During the EHP4 installation (after an installation of ERP 6.0 EHP4 Ready), ACT_UPG is running, but very slowly.
    In a different system (Linux on an Intel CPU) it took only 6 hours. In my current system it has already been running for nearly 13 hours - since 08.11.2010 04:35:58!
    I compared the logfile currently being written
    /sapmnt/D01/EHPI/abap/tmp$ wc -l SAPAA70104.D01
      588286 SAPAA70104.D01
    with the one from a finished installation on Linux (with a lower SP-stack than 08 and an older SAPehpi).
    There the file has 1,451,798 rows, which means that my ACT_UPG has at least the same amount of work to do...
    I have checked the shadow instance: 4 batch processes - only one is running with RDDMASGL, since 08.11.2010 04:35:58.
    Add. information:
    - Database and CI are on separate hosts (Solaris SPARC). Both are included in /etc/hosts.
    - I used SAPehpi version SAPehpi_38-10005811.SAR - SAPehpi 700 from 21.10.2010.
    - I have used the following settings:
    SAPehpiConsole.log:
    Enter the maximum number of batch processes during the installation:
    BATCH PROCESSES: 3
    Enter the maximum number of parallel processes during uptime:
    MAXIMUM UPTIME PROCESSES: 3
    Enter the number of parallel import processes during downtime:
    R3TRANS PROCESSES: 3
    To optimize memory usage during activation, you can use the so-called
    "memory-optimized activator". Do you want to use this possibility?
    01)  -  Yes
    02)  *  No
    : No
    Any ideas ?
    Uta
    Edited by: Uta Hedemann on Nov 9, 2010 5:34 PM

    The problem solved itself. ACT_UPG is finished.
    The corresponding logfile seems to be invalid for analyzing the runtime. It is much smaller than on the other system on Linux...
    /sapmnt/D01/EHPI/abap/log$ wc -l SAPAA70104.D01
      617706 SAPAA70104.D01
    Good luck to everybody who also has long run times!
    Edited by: Uta Hedemann on Nov 9, 2010 6:26 PM

  • Question : Scheduling a Long Running Process in BPEL

    Hi All,
    I have a question regarding the deployment and scheduling of a long running process in the BPEL console, similar to the way we schedule a concurrent program in Oracle Apps.
    Business Scenario:
    We have developed a BPEL process to pull data from the Database, do the required transformation and then enqueue the transformed data in a queue present in another database.
    Then a Java program picks the data from the queue, generates an XML file and sends it to the FTP server of the Trading Partners (TPs).
    We have two Trading Partners, and hence the generated XML will be put on the appropriate FTP server of each TP.
    The BPEL processing for both the Trading Partners is identical. Only the generated XML is put in different FTP servers.
    My question here is:
    1. Since the processing is the same for both TPs, how can we use the same process for both Trading Partners? Can we handle it in the BPEL process itself?
    2. Is there any way to schedule the same process in the BPEL console to first run for Trading Partner TP1, wait for say 15 minutes, and then run for Trading Partner TP2, with this cycle continuing, since we will be deploying it in a production environment and transactions will be continuously updated in the database?
    As far as my knowledge of BPEL goes, I could not find any way to schedule a process in the BPEL console. If anyone has come across a similar scenario, please post your updates/approach, which will help us in defining the architecture for our client.
    Thanks In Advance,
    Cheers
    Dibya

    Hi Marc,
    Thanks a lot for your inputs.
    I need some more clarification on question no. 1 in my previous post.
    From our BPEL process we are calling a package which takes the Trading Partner name as the input and some other parameters. Our BPEL process will be run for both the Trading Partners since the processing is identical.
    If we develop two different processes for both Trading Partners, then there will be locking issues in the database since both the processes will refer to the same table.
    Hence we were looking for an option where we can use the same process for both Trading Partners. Is there any way to handle this in the BPEL process?
    Your inputs will be really helpful.
    Thanks In Advance,
    Dibya
