Two DML operations in one FORALL?

Hi all,
In 11g, is it possible to do two DML operations in one FORALL loop?
For example:
SQL> create table xx_test (col1 number, col2 number, col3 number);
Table created
SQL> create table xx_test1 (col1 number, col2 number, col3 number);
Table created
SQL> insert into xx_test values(1,2,3);
1 row inserted
SQL>  insert into xx_test values(1,2,3);
1 row inserted
SQL>  insert into xx_test values(4,5,6);
1 row inserted
SQL>  insert into xx_test1 values(6,7,8);
1 row inserted
SQL> declare
  2  cursor c is select col1, col2, col3 from xx_test;
  3  type t is table of c%rowtype;
  4  v t;
  5  begin
  6   open c;
  7  loop
  8  fetch c bulk collect into v limit 1000;
  9   forall i in 1..v.count
10   update xx_test1
11  set col1 = v(i).col2;
12 
13  insert into xx_test1(col1,col2,col3) values(v(i).col1,v(i).col2,v(i).col3);
14 
15  exit when c%notfound;
16  end loop;
17 
18  end;
ORA-06550: line 14, column 50:
PLS-00201: identifier 'I' must be declared
ORA-06550: line 14, column 50:
PLS-00201: identifier 'I' must be declared
ORA-06550: line 14, column 48:
PL/SQL: ORA-00904: : invalid identifier
ORA-06550: line 14, column 4:
PL/SQL: SQL Statement ignored

Any ideas? I know that this can be achieved by processing row by row, but in my case the cursor retrieves a lot of rows...
Thanks in advance,
Alexander.
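A FORALL statement can drive exactly one DML statement, which is why the block above fails to compile. The usual workaround is two consecutive FORALLs over the same staged collection; a minimal sketch using the xx_test tables above:

```sql
declare
  cursor c is select col1, col2, col3 from xx_test;
  type t is table of c%rowtype;
  v t;
begin
  open c;
  loop
    fetch c bulk collect into v limit 1000;
    exit when v.count = 0;  -- safe exit when fetching with LIMIT

    -- first bulk DML: a FORALL may contain only this one statement
    forall i in 1..v.count
      update xx_test1 set col1 = v(i).col2;

    -- second bulk DML: a separate FORALL over the same collection
    forall i in 1..v.count
      insert into xx_test1(col1, col2, col3)
      values (v(i).col1, v(i).col2, v(i).col3);
  end loop;
  close c;
end;
/
```

Both FORALLs still bulk-bind, so the row-by-row cost is avoided; the only change in semantics is that all updates for a batch happen before that batch's inserts.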

Stew, a bulk bind from an OCI client looks as follows:
OCIStmtPrepare()
OCIBindByName()/OCIBindByPos() (binding a host array)
while some-condition loop
  fill host array
  OCIStmtExecute()  (specify the number of elements in array via the iters param)
end loop
The statement is executed once for each array element, according to the OCI guide. So when sending 100 elements, the statement will be executed (iterated) by Oracle 100 times. It is, however, not exactly clear how the server side deals with this execution.
The issue you raised with cursor execution counts seems to be whether the statement is a single statement, or a single statement with nested statements.
I have written an OCI client doing the exact same tests as were done in PL/SQL using FORALL in this thread.
If the statement executed is plain SQL, the cursor that is created is executed only "once" (not exactly true, as multiple rows are inserted using single-row DML).
If the statement is PL/SQL, the cursor that is created is likewise executed "once", so pretty much the same behaviour. However, as this statement contains "nested" SQLs (the actual DML statements), these also need to be parsed and executed as cursors, in which case you see them being executed 100 times (once per element in the bind array).
The issue is whether the FORALL DML statement is actually executed only once, as it would appear from the executions column.
It would seem that there is something funky happening (some kind of call optimisation, perhaps?) when Oracle deals with an array bind, as the cursor seems to be executed once. But that in fact cannot be the whole story, as that cursor only inserts a single row, and multiple rows are inserted.
E.g. a simplistic example to see how many times the FORALL DML statement is executed:
SQL> create sequence emp_id_seq
  2          start with 1
  3          increment by 1
  4          nocycle
  5          nomaxvalue;
Sequence created.
SQL> --// add a PL/SQL user function wrapper for the sequence
SQL> create or replace function GetNextEmpID return number is
  2          id      number;
  3  begin
  4          select emp_id_seq.NextVal into id from dual;  --// explicit SQL statement
  5          return( id );
  6  end;
  7  /
Function created.
SQL>
SQL> declare
  2          cursor c is
  3          select empno, ename, job from emp;
  4 
  5          type TBuffer is table of c%RowType;
  6          buffer  TBuffer;
  7  begin
  8          open c;
  9          loop
10                  fetch c bulk collect into buffer limit 100;
11 
12                  forall i in 1..buffer.Count
13                          insert into tab1 values( GetNextEmpID(), buffer(i).ename );
14                  exit when c%NotFound;
15          end loop;
16          close c;
17  end;
18  /
PL/SQL procedure successfully completed.
SQL>
SQL> select
  2          executions,
  3          sql_text
  4  from       v$sql
  5  where      sql_text like 'INSERT INTO TAB1%'
  6  or sql_text like 'SELECT EMP_ID_SEQ%';
EXECUTIONS SQL_TEXT
        14 SELECT EMP_ID_SEQ.NEXTVAL FROM DUAL
         1 INSERT INTO TAB1 VALUES( GETNEXTEMPID(), :B1
So the insert seems to have been executed once. However, the wrapper was called 14 times and its SQL statement was executed 14 times - once per bind array value.
So there does seem to be some kind of optimisation on the Oracle side. However, it does not mean that the FORALL statement is not using bulk/array binding. It is, and that is what the FORALL statement is designed to do.

Similar Messages

  • How can I run two DML in one FORALL statement?

    How can I run 1) select 2) update in one FORALL for each item as below?
    OPEN FXCUR;
      LOOP
            FETCH FXCUR BULK COLLECT INTO v_ims_trde_oids LIMIT 1000;
            EXIT  WHEN v_ims_trde_oids.COUNT() = 0;
         FORALL i IN v_ims_trde_oids.FIRST .. v_ims_trde_oids.LAST     
              SELECT EXTRACTVALUE(XMLTYPE(CNTNT),'/InboundGTMXML/ProcessingIndicators/ClientCLSEligibleIndicator')        INTO v_cls_ind
              FROM IMS_TOMS_MSGE  WHERE ims_trde_oid = v_ims_trde_oids(i);
             IF v_cls_ind      IS NOT NULL THEN
                      v_cls_ind       := '~2136|S|'||v_cls_ind||'|';
             UPDATE ims_alctn_hstry  SET CHNGE_DATA_1   =concat(CHNGE_DATA_1,v_cls_ind)
             WHERE ims_trde_hstry_id = (select max(ims_trde_hstry_id) from ims_alctn_hstry where ims_trde_oid=v_ims_trde_oids(i));
             DBMS_OUTPUT.PUT_LINE('Trade oid: '||v_ims_trde_oids(i)||'   CLS Eligible Indicator: '||v_cls_ind);
          END IF;
      END LOOP;
      CLOSE FXCUR;
    Your help will be appreciated.
    Thanks
    Edited by: PhoenixBai on Aug 6, 2010 6:05 PM
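    A SELECT cannot appear inside a FORALL: a FORALL drives exactly one INSERT/UPDATE/DELETE/MERGE. One hedged alternative, assuming v_ims_trde_oids (and a companion v_oids/v_cls_inds pair) are SQL-level collection types so that TABLE() is usable, is to fetch the indicators in bulk first and then drive the UPDATE with a single FORALL:

    ```sql
    -- 1) pull the indicator per trade oid in one bulk query
    SELECT t.ims_trde_oid,
           '~2136|S|' || EXTRACTVALUE(XMLTYPE(t.cntnt),
             '/InboundGTMXML/ProcessingIndicators/ClientCLSEligibleIndicator') || '|'
    BULK COLLECT INTO v_oids, v_cls_inds
    FROM   ims_toms_msge t
    WHERE  t.ims_trde_oid IN (SELECT column_value FROM TABLE(v_ims_trde_oids));

    -- 2) one FORALL, one DML statement
    FORALL i IN 1 .. v_oids.COUNT
      UPDATE ims_alctn_hstry
      SET    chnge_data_1 = CONCAT(chnge_data_1, v_cls_inds(i))
      WHERE  ims_trde_hstry_id = (SELECT MAX(ims_trde_hstry_id)
                                  FROM   ims_alctn_hstry
                                  WHERE  ims_trde_oid = v_oids(i));
    ```

    The original IS NOT NULL check would become an extra predicate on the EXTRACTVALUE expression in step 1.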

    I came across this thread while googling the issue of using two DMLs in one FORALL statement.
    Thanks for all the useful information guys.
    I need to extend this functionality a bit.
    My present scenario is as follows:
    FOR I in 1..collection1.count Loop
         BEGIN
              insert into tab1(col1)
              values collection1(I) ;
         EXCEPTION
              WHEN OTHERS THEN
              RAISE_APPLICATION_ERROR('ERROR AT'||collection1(I));
         END;
         BEGIN
              UPDATE tab2
              SET col1 = collection1(I);
         EXCEPTION
              WHEN OTHERS THEN
              RAISE_APPLICATION_ERROR('ERROR AT'||collection1(I));
         END;
    commit;
    END LOOP;
    I need to use the FORALL functionality in this scenario, but without using the SAVE EXCEPTIONS clause, keeping in mind that I also need the value in the
    collection that led to the error. Also, each INSERT statement has to be followed by an UPDATE, and then the cycle goes on (hence I cannot use 2 FORALL statements for the INSERT and UPDATE, because then all the INSERTs would be performed at once, and similarly the UPDATEs). So I created something like this:
    DECLARE
    l_stmt varchar2(1000);
    BEGIN
    l_stmt := 'BEGIN '||
              'insert into tab1(col1) '||
              'values collection1(I) ; '||
         'EXCEPTION '||
              'WHEN OTHERS THEN '||
              'RAISE_APPLICATION_ERROR(''ERROR AT''|| :1); '||
         'END; '||
         'BEGIN '||
              'UPDATE tab2 '||
              'SET col1 = :1; '||
         'EXCEPTION '||
              'WHEN OTHERS THEN '||
              'RAISE_APPLICATION_ERROR(''ERROR AT''|| :1); '||
     'END;';
    FORALL I in 1..collection1.count
    EXECUTE IMMEDIATE l_stmt USING Collection1(SQL%BULK_EXCEPTIONS(1).ERROR_INDEX);
    END;
    Will this approach work? Or is there a better approach? I am trying to avoid the traditional FOR..LOOP to achieve better query performance.
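    For what it's worth, I doubt the dynamic-block version will compile: the dynamic string in a FORALL must itself be a single INSERT, UPDATE, DELETE, or MERGE (not an anonymous PL/SQL block), collection subscripts can only be passed in through the USING clause, and SQL%BULK_EXCEPTIONS is only populated after a SAVE EXCEPTIONS run. A sketch of what FORALL does accept, using the same hypothetical tab1/tab2/collection1 names:

    ```sql
    -- each FORALL drives exactly one (here dynamic) DML statement
    FORALL i IN 1 .. collection1.COUNT
      EXECUTE IMMEDIATE 'INSERT INTO tab1(col1) VALUES (:1)'
        USING collection1(i);

    FORALL i IN 1 .. collection1.COUNT
      EXECUTE IMMEDIATE 'UPDATE tab2 SET col1 = :1'
        USING collection1(i);
    ```

    Interleaving each INSERT with its UPDATE while still knowing which element failed pretty much forces either the plain FOR loop or SAVE EXCEPTIONS (where SQL%BULK_EXCEPTIONS(n).ERROR_INDEX yields the offending subscript).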

  • Retention guarantee causing multiple DML operations to fail ?

    WARNING: Enabling retention guarantee can cause multiple DML operations to fail. Use with caution.
    ^^
    From the Ora Docs (10.2) Section - Introduction to Automatic Undo Management (Undo Retention) states the above.
    This would mean that other DML operations if requiring space in undo, would therefore fail with a ORA-30036 error. Is this correct understanding ?
    If so then one way to avoid this ORA error is to have autoextend defined. ??

    From the Ora Docs (10.2) Section - Introduction to Automatic Undo Management (Undo Retention) states the above.
    Is it from
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/undo.htm#sthref1482
    >
    This would mean that other DML operations, if requiring space in undo, would therefore fail with an ORA-30036 error. Is this correct understanding?
    Not all DML operations requiring undo space would fail; only those transactions for which there is no space left in the undo tablespace would fail. And yes, this is one of the possible error messages one can get.
    >
    If so, then one way to avoid this ORA error is to have autoextend defined. ??
    Yes, but that is just pushing the brick wall two feet away. Besides, with auto extension turned on and inappropriate undo_retention parameter settings, you will have issues with disk space (if any).

  • Date Field Displaying and DML Operations

    Hi all,
    I have an issue with displaying and updating date columns that I'm hoping someone can assist me with.
    I'm using APEX 3.0.1.
    I have a Form page with a number of fields sourced from one database table that are being populated by an Automatic Row Fetch On Load - After Header.
    The Item P6_MONTHFOR is stored as a Date datatype in the table and displayed on the form using the Date Picker (use Item Format Mask). I have a Format Mask set as 'MON-RR'. I want to ensure that the last day of the month is saved back to the database table so have been trying various calculation techniques to try and achieve this but am experiencing a variety of SQL errors!
    I have tried using LAST_DAY(:P6_MONTHFOR) in the Post Calculation Computation, or as a separate Computation After Submit.
    I have also tried having P6_MONTHFOR as a hidden column and using display Items and then trying Item calculations to then update the value of P6_MONTHFOR column prior to DML operations but to no avail.
    The only DML operations allowed on these rows are DELETE and UPDATE and I'm using an Automatic Row Processing (DML) On Submit - After Computations and Validations process to control these operations.
    Any help or suggestions greatly appreciated :-)
    Kind Regards,
    Gary.

    the function LAST_DAY is a date function, expecting a date as input. Since this is all web, the values of items are strings (varchar2). In order to use a date function, you first have to make the value a date with to_date() and the format mask (DD-MON-RR).
    In my opinion dates are still tricky; it would be great if ApEx had a DV() function next to the V() and NV() functions. There is one in ApExLib (by Patrick Wolf).
    Simon
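    A minimal sketch of the conversion Simon describes, as an After Submit computation (type PL/SQL Expression) on P6_MONTHFOR - item and mask names taken from the post, untested:

    ```sql
    -- string -> DATE with the item's mask, snap to month end, back to a string
    TO_CHAR(LAST_DAY(TO_DATE(:P6_MONTHFOR, 'MON-RR')), 'DD-MON-RR')
    ```

    The Automatic Row Processing then saves the month-end date, provided the format mask used on the DML side accepts 'DD-MON-RR'.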

  • Query performance on same table with many DML operations

    Hi all,
    I have a table with 100 rows of data. After that, I inserted, deleted and modified data many times.
    A select statement after these DML operations takes much more time compared with before the DML operations (there is not much difference in the data).
    If I create the same table again with the same data and run the same select statement, it takes less time.
    My question is: is there any command, like compress or re-indexing, to improve the performance without creating the table again?
    Thanks in advance,
    Pal

    Try searching "rebuilding indexes" on http://asktom.oracle.com. You will get lots of hits and many lively discussions. Certainly Tom's opinion is that rebuilds are very rarely required.
    As far as I know, Oracle has always re-used deleted rows in indexes as long as the new row belongs in that place in the index. The only situation I am aware of where deleted rows do not get re-used is where you have a monotonically increasing key (e.g one generated by a seqence), and most, but not all, of the older rows are deleted over time.
    For example if you had a table like this where seq_no is populated by a sequence and indexed
    seq_no         NUMBER
    processed_flag VARCHAR2(1)
    trans_date     DATE
    and then did deletes like:
    DELETE FROM t
    WHERE processed_flag = 'Y' and
          trans_date <= ADD_MONTHS(sysdate, -24);
    that deleted 99% of the rows in the time period that were processed, leaving only a few. Then the index leaf blocks would be very sparsely populated (i.e. lots of deleted rows in them), but since the current seq_no values are much larger than those old ones remaining, the space could not be re-used. Any leaf block that had all of its rows deleted would be reused in another part of the index.
    HTH
    John
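    If testing shows the sparse-index pattern John describes really is the cause, the usual maintenance options look like this (hypothetical table/index names; each has side effects, so measure first):

    ```sql
    -- cheaper than a rebuild: merge adjacent, sparsely filled leaf blocks
    ALTER INDEX t_seq_no_idx COALESCE;

    -- full rebuild; ONLINE avoids blocking DML but needs extra space
    ALTER INDEX t_seq_no_idx REBUILD ONLINE;

    -- reclaim table-level space (needs an ASSM tablespace and row movement)
    ALTER TABLE t ENABLE ROW MOVEMENT;
    ALTER TABLE t SHRINK SPACE;
    ```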

  • Oracle 8i array DML operations with LOB objects

    Hi all,
    I have a question about Oracle 8i array DML operations with LOB objects, both CLOB and BLOB. With the following statement in mind:
    INSERT INTO TABLEX (COL1, COL2) VALUES (:1, :2)
    where COL1 is a NUMBER and COL2 is a BLOB, I want to use OCIs array DML functionality to insert multiple records with a single statement execution. I have allocated an array of LOB locators, initialized them with OCIDescriptorAlloc(), and bound them to COL2 where mode is set to OCI_DATA_AT_EXEC and dty (IN) is set to SQLT_BLOB. It is after this where I am getting confused.
    To send the LOB data, I have tried using the user-defined callback method, registering the callback function via OCIBindDynamic(). I initialize icbfps arguments as I would if I were dealing with RAW/LONG RAW data. When execution passes from the callback function, I encounter a memory exception within an Oracle dll. Where dvoid **indpp equals 0 and the object is of type RAW/LONG RAW, the function works fine. Is this not a valid methodology for CLOB/BLOB objects?
    Next, I tried performing piecewise INSERTs using OCIStmtGetPieceInfo() and OCIStmtSetPieceInfo(). When using this method, I use OCILobWrite() along with a user-defined callback designed for LOBs to send LOB data to the database. Here everything works fine until I exit the user-defined LOB write callback function where an OCI_INVALID_HANDLE error is encountered. I understand that both OCILobWrite() and OCIStmtExecute() return OCI_NEED_DATA. And it does seem to me that the two statements work separately rather than in conjunction with each other. So I rather doubt this is the proper methodology.
    As you can see, the correct method has evaded me. I have looked through the OCI LOB samples, but have not found any code that helps answer my question. Oracles OCI documentation has not been of much help either. So if anyone could offer some insight I would greatly appreciate it.
    Chris Simms
    [email protected]

    Before 9i, you will have to first insert empty locators using EMPTY_CLOB() inlined in the SQL and using RETURNING clause to return the locator. Then use OCILobWrite to write to the locators in a streamed fashion.
    From 9i, you can actually bind a long buffer to each lob position without first inserting an empty locator, retrieving it and then writing to it.

  • Using two premap operator in a map.

    Hi,
    Can I use two premap operator in a map in OWB 10g Release 1?
    If no then Is it possible to do in OWB 10g release 2?
    Please reply.
    Thanks in advance.

    From Oracle® Warehouse Builder User's Guide 10g Release 2 (10.2.0.2)
    A mapping can only contain one Pre-Mapping Process operator. Only constants, mapping input parameters, and output from a Pre-Mapping Process can be mapped into a Post-Mapping Process operator.
    Thanks,
    Sutirtha

  • How to write DML operation in a function

    Hi
    It's very urgent for me.
    I am writing a DML operation directly in a function which is called from a select statement, and it fails with the error "DML Operations cannot be performed inside a query".
    How do I write a DML operation inside a function?
    My objective is to call that function from a select statement.
    Please help me out.
    Thankd

    No no no. You're committing after each row! So any other session running the same query will see the changes you're making. Your session will equally see changes caused by running this query in those sessions.
    Other session, yes, but the current session will only see the changes once it has completed the current statement. Otherwise my "rn" column would not have gone up sequentially in the above example. It would have gone:
    1st row rn = 1 (all rows get updated by 1: 2,3,4,5,6,7,8,9,10,11)
    2nd row rn = 3 (all rows get updated by 1: 3,4,5,6,7,8,9,10,11,12)
    3rd row rn = 5 (all rows get updated by 1: 4,5,6,7,8,9,10,11,12,13)
    4th row rn = 7 (all rows get updated by 1: 5,6,7,8,9,10,11,12,13,14)
    5th row rn = 9 (all rows get updated by 1: 6,7,8,9,10,11,12,13,14,15)
    6th row rn = 11 (all rows get updated by 1: 7,8,9,10,11,12,13,14,15,16)
    7th row rn = 13 (all rows get updated by 1: 8,9,10,11,12,13,14,15,16,17)
    8th row rn = 15 (all rows get updated by 1: 9,10,11,12,13,14,15,16,17,18)
    9th row rn = 17 (all rows get updated by 1: 10,11,12,13,14,15,16,17,18,19)
    10th row rn = 19 (all rows get updated by 1: 11,12,13,14,15,16,17,18,19,20)
    So the fact that the commit happens each time the rows get updated isn't affecting the currently running select statement.
    No, actually you DO see the other session's changes. This is because it is an AUTONOMOUS transaction, and this is a function.
    Test by adding:
    create or replace function incvals return number as
    pragma autonomous_transaction;
    v_val number;
    begin
    update t set rn = rn + 1;
    select max(rn) into v_val from t;
    dbms_lock.sleep(1); --add this line
    commit;
    return v_val;
    end;
    And test in two sessions.
    You will NOT get sequential ascending.
    >
    Think about the effect of two parallel sessions both running this query at the same time, and ask is this sensible?
    Gawd, no, of course not. Like I said, I'd never use this sort of thing myself. I'm just wondering what on earth the OP is trying to achieve.
    :)
    Glad to hear it.

  • DML Operations in Stored Function

    Hi,
    I have used Update Statement in a function. Which is giving an errror (ORA-14551: cannot perform a DML operation inside a query).
    Can I use DML operations in a stored function ?
    (I need help on locking master/transaction tables, i.e if a one user locks a master another user should not get modify access to the master/transactions).
    Thanks
    Ramesh Ganji

    Someone who obviously didn't read my previous post in this thread. You should pay attention; programming is all about the details.
    ORA-14551: cannot perform a DML operation inside a query
    Then it obviously does more than just "return a Table Type object". Why are you doing DML in a function?
    PLS-00653: aggregate/table functions are not allowed in PL/SQL scope
    We can only call pipelined functions from SQL queries. So you'll have to ditch the DML or make it an autonomous transaction. Be very careful if you adopt the latter approach.
    Cheers, APC
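    For completeness, the autonomous-transaction escape hatch APC mentions looks roughly like this (hypothetical names) - use it with great care, because the function's work commits even if the calling transaction rolls back:

    ```sql
    CREATE OR REPLACE FUNCTION log_call(p_msg IN VARCHAR2) RETURN NUMBER IS
      PRAGMA AUTONOMOUS_TRANSACTION;  -- DML runs in its own transaction
    BEGIN
      INSERT INTO call_log(msg, logged_at) VALUES (p_msg, SYSDATE);
      COMMIT;  -- required before leaving an autonomous block
      RETURN 1;
    END;
    /
    -- now legal inside a query:
    -- SELECT log_call(ename) FROM emp;
    ```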

  • DML operations improves perfomance on a Partioned Table?

    Hi
    We have a simple (non-partitioned) table and we do normal DML operations on it. If we convert that table into 5 partitions, does DML performance improve by 5 times? To be very specific, will READs and WRITEs on the table improve, and if YES, to what extent? *(considering Table size in TB)*. DB is 11g R2.
    Regards
    Edited by: 905133 on Dec 29, 2011 10:08 PM
    Edited by: 905133 on Dec 29, 2011 10:33 PM

    CKLP,
    I populated a table called test in my test environment with 10 columns and 7 million records.
    Its structure is:
    col1..col5 are of data type number and are locally indexed
    col6..col9 are of data type number (not indexed)
    col10 is of data type date
    The table is partitioned on col10 by range (7 partitions for a week, 1 partition/day, 1 million records per partition).
    When I inserted 7 million records (i.e. one million records/partition), the average insertion time per partition was 7.5 min. (I inserted records individually into a partition.)
    Then I created another table test2 with the same number of columns indexed and an equal amount of records, but increased the number of partitions: 4 partitions/day, 28 partitions for the week.
    When I inserted 7 million records (one million records per 4 partitions), the average insertion time was 7.1 min. (I inserted records individually into these 4 partitions at a single time.)
    The insert into test values(...) format was used, and now I know parallelism can't be used with this format.
    So the point to discuss here is: why did I not achieve a better insertion time when I divided a daily partition into 4 partitions per day? Or: how can I improve insertion time in such scenarios?
    Oracle DB 11g R2, OS Linux 5, Sun Server x4200 with shared storage.

  • Why we cannot perform DML operations against complex views directly.

    hi
    can anyone tell me why we cannot perform DML operations against complex views directly?

    Hi,
    It is not easy to perform DML operations on complex views which involve more than one table, as vissu said. The reason is that Oracle may not know which columns of which base tables should be updated/inserted/deleted. If it is a simple view containing a single table, it is as simple as performing the actions on the table.
    For further details visit this
    http://www.orafaq.com/wiki/View
    cheers
    VT
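    The standard way to allow DML against a multi-table view is an INSTEAD OF trigger, which routes each operation to the proper base tables. A minimal sketch over a hypothetical emp/dept-style join view:

    ```sql
    CREATE OR REPLACE TRIGGER emp_dept_v_ioi
    INSTEAD OF INSERT ON emp_dept_v
    FOR EACH ROW
    BEGIN
      -- create the parent row only if it does not exist yet
      INSERT INTO dept (deptno, dname)
      SELECT :NEW.deptno, :NEW.dname FROM dual
      WHERE  NOT EXISTS (SELECT 1 FROM dept WHERE deptno = :NEW.deptno);

      -- then the child row
      INSERT INTO emp (empno, ename, deptno)
      VALUES (:NEW.empno, :NEW.ename, :NEW.deptno);
    END;
    /
    ```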

  • Ref:Logging the DML operations on a schema

    Hello,
    I want to log the DML operations on our Master Schema (this schema is accessed by 4-6 applications apart from their own application schemas). I want to log the DML operations done by x.application. Is there any easy way to do it? One solution is writing triggers on each table of the Master schema and inserting the transactions into an audit_schema (a replica of the master schema). Is there any other way to do it? Any solution for this issue is appreciated. Thanks in advance.

    Is there any other way to do it
    Enable auditing, see the security guide:
    http://download.oracle.com/docs/cd/B19306_01/network.102/b14266/toc.htm
    http://download.oracle.com/docs/cd/B19306_01/network.102/b14266/auditing.htm#i1008289
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14220/security.htm#sthref2929
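    As a sketch, with standard auditing enabled (AUDIT_TRAIL=DB set in the spfile plus an instance restart), the per-table setup is just (hypothetical schema/table names):

    ```sql
    -- one audit row per statement, successful or not
    AUDIT INSERT, UPDATE, DELETE ON master_schema.some_table BY ACCESS;

    -- review what was captured
    SELECT username, action_name, timestamp, obj_name
    FROM   dba_audit_trail
    WHERE  owner = 'MASTER_SCHEMA';
    ```

    Standard auditing records who did what and when, but not the row values; for before/after values you are back to triggers or fine-grained auditing.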

  • Two warehouse numbers in one plant

    Is it possible to have two warehouse numbers in one plant, and how do I get this to work? Can the same users work in the first and the second warehouse? What are the things that I need to define separately in the second warehouse number? What is the difference if I keep one warehouse number, versus splitting the two warehouses into 2 different warehouse numbers?

    Is it possible to have two warehouse numbers in one plant and how do i get this to work?
    Sure, you just need to create another WhNum and assign it to the SLoc(s).
    Can the same users work in first and second warehouse?
    Yes, but you will be required to change their WM profile in LRFMD to make it work.
    What are the things that i need to define searately in second warehouse number?
    Not sure I understand the question correctly. You need to have all the WM settings for this warehouse number separate, although some values might be the same (print program, storage types, movement type config, etc.) but they need to be specified for the key of your new WhNum.
    What is the difference if i still have one warehouse number and split two warehouses in 2 different warehouse numbers?
    There are many differences... You will be able to make the movement types config different which is important if your process requirements are different.
    You will have two warehouse RF monitors (LRF1/LRF2 t-codes).
    You will be able to specify different internal SU number if you are using storage units.
    You will be able to have the same storage types in different WhNUms, e.g. you can have 001 in WH1 and 001 in WH2 too without conflict.
    Have different print programs, settings in mobile data entry etc.
    Well, there are many other differences. It would be good if you could specify which operations / processes are of interest so we could elaborate about the specific differences you are interested in.

  • Sir, how to find the last DML operations

    Hi,
    Please tell me how to find the last DML operations - at least the last 30 queries.
    Thanks in advance,

    The Shared Pool is a memory area in the SGA that contains SQL statements submitted to Oracle for execution. This area is common to the entire database; it is not specific to a user.
    So whatever unique SQL statement is submitted to the SQL engine will be available here. The Shared Pool has a size limit, defined by the parameters SHARED_POOL_SIZE and SHARED_POOL_RESERVED_SIZE. So when the Shared Pool becomes full, data needs to be aged out of it, which is done on a least-recently-used (LRU) basis.
    Oracle provides visibility into this area through the dictionary view V$SQLAREA. This view will contain not only the SQL executed by you but by everyone - even statements executed by Oracle itself.
    So in my opinion, what you are asking is not reliably possible to get. You must have some logging mechanism in your application to get this information.
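    For what it is worth, a rough way to pull the most recent DML still cached in the shared pool (with the caveat above - there is no guarantee the last 30 statements are still cached):

    ```sql
    -- command_type: 2 = INSERT, 6 = UPDATE, 7 = DELETE
    SELECT *
    FROM  (SELECT sql_text, parsing_schema_name, last_active_time
           FROM   v$sqlarea
           WHERE  command_type IN (2, 6, 7)
           ORDER BY last_active_time DESC)
    WHERE rownum <= 30;
    ```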

  • Error on Two DML process in a single page by condition

    Hi all,
    I have a page with two form regions on a table and two DML processes.
    I want to execute these two DML processes one by one, conditionally: on the initial page load the second DML process must not execute; after page submit, the first DML process executes, and only once the table data has been successfully inserted should the second DML process run.
    I have tried executing them one after the other conditionally by button pressed, but failed: when I submit the page, both DML processes execute, even though I made the second one conditional.
    Any help is highly appreciate.
    adnan

    Hi,
    Can you please replicate the same in apex.oracle.com for more clarity?
    Regards,
    Sakthi.
