Simultaneous DML operations

Hi there,
We are doing simultaneous DML operations using a trigger, so that changes to one table are reflected in different tables. The insert and delete operations are fine and working, but the updates are not happening.
CREATE OR REPLACE TRIGGER TRG_REPLICATION
AFTER INSERT OR DELETE OR UPDATE
ON emp
FOR EACH ROW
BEGIN
  IF (inserting) THEN
    insert into dept(employee_id, department_name) values (:new.employee_id, :new.department_name);
    insert into promo_details(employee_id, employee_name) values (:new.employee_id, :new.employee_name);
    insert into emp_details(employeeid, address) values (:new.employee_id, :new.address);
    insert into emp_job(job_id, jobtitle, employee_id) values (:new.job_id, :new.job_title, :new.employee_id);
    insert into project(proj_name, employee_id, employee_name) values (:new.proj_name, :new.employee_id, :new.employee_name);
  ELSIF (deleting) THEN
    delete from dept where employee_id = :old.employee_id or department_name = :old.department_name;
    delete from promo_details where employee_id = :old.employee_id or employee_name = :old.employee_name;
    delete from emp_details where employeeid = :old.employee_id or address = :old.address;
    delete from emp_job where job_id = :old.job_id or jobtitle = :old.job_title or employee_id = :old.employee_id;
    delete from project where proj_name = :old.proj_name or employee_id = :old.employee_id or employee_name = :old.employee_name;
  ELSIF (updating) THEN
    update dept set employee_id = :new.employee_id, department_name = :new.department_name where employee_id = :new.employee_id;
    update promo_details set employee_id = :new.employee_id, employee_name = :new.employee_name where employee_id = :new.employee_id;
    update emp_details set employeeid = :new.employee_id, address = :new.address where employeeid = :new.employee_id;
    update emp_job set job_id = :new.job_id, jobtitle = :new.job_title where employee_id = :new.employee_id;
    update project set proj_name = :new.proj_name, employee_id = :new.employee_id, employee_name = :new.employee_name where employee_id = :new.employee_id;
  END IF;
END;
The update happens on the one table the statement is written against, but it is not reflected on the other tables.
I want the update operation to be propagated as well.
Any good solution will be appreciated.

A bit of error handling is needed: if you expect the update to modify some rows and it does not, you want to raise an error.
After each update, check SQL%ROWCOUNT to see how many rows the statement modified, and raise an error accordingly.
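A minimal sketch of what that could look like for one of the updates (it keys on :old.employee_id so the row is still found even if the key itself changes, and raises a user-defined error when no row was touched; both are assumptions about the intended behaviour):
  update dept
  set    employee_id     = :new.employee_id,
         department_name = :new.department_name
  where  employee_id     = :old.employee_id;   -- :old, not :new
  if sql%rowcount = 0 then
    raise_application_error(-20001, 'No dept row found for employee ' || :old.employee_id);
  end if;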

Similar Messages

  • Date Field Displaying and DML Operations

    Hi all,
    I have an issue with displaying and updating date columns that I'm hoping someone can assist me with.
    I'm using APEX 3.0.1.
    I have a Form page with a number of fields sourced from one database table that are being populated by an Automatic Row Fetch On Load - After Header.
    The Item P6_MONTHFOR is stored as a Date datatype in the table and displayed on the form using the Date Picker (use Item Format Mask). I have a Format Mask set as 'MON-RR'. I want to ensure that the last day of the month is saved back to the database table so have been trying various calculation techniques to try and achieve this but am experiencing a variety of SQL errors!
    I have tried using LAST_DAY(:P6_MONTHFOR) in the Post Calculation Computation, or as a separate Computation After Submit.
    I have also tried having P6_MONTHFOR as a hidden column and using display Items and then trying Item calculations to then update the value of P6_MONTHFOR column prior to DML operations but to no avail.
    The only DML operations allowed on these rows are DELETE and UPDATE and I'm using an Automatic Row Processing (DML) On Submit - After Computations and Validations process to control these operations.
    Any help or suggestions greatly appreciated :-)
    Kind Regards,
    Gary.

    The function LAST_DAY is a date function, expecting a date as input. Since it is all web, item values arrive as strings (VARCHAR2). In order to use a date function, you first have to turn the value into a date with TO_DATE() and the format mask (MON-RR).
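    For example, a computation on P6_MONTHFOR could look like this (a sketch; the MON-RR input mask comes from the post, the output mask is an assumption about what the row processing expects):
    TO_CHAR(LAST_DAY(TO_DATE(:P6_MONTHFOR, 'MON-RR')), 'DD-MON-RR')
    LAST_DAY returns the month-end date, and TO_CHAR turns it back into the string APEX stores in the item.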
    In my opinion dates are still tricky; it would be great if ApEx had a DV() function next to the V() and NV() functions. There is one in ApExLib (by Patrick Wolf).
    Simon

  • "cannot perform a DML operation inside a query" error when using table func

    Hello, please help me.
    I created the following table function. When I use it with the command "select * from table(customerRequest_list);"
    I receive this error: "cannot perform a DML operation inside a query".
    Can you solve this problem?
    CREATE OR REPLACE FUNCTION customerRequest_list(
    p_sendingDate varchar2:=NULL,
    p_requestNumber varchar2:=NULL,
    p_branchCode varchar2:=NULL,
    p_bankCode varchar2:=NULL,
    p_numberOfchekbook varchar2:=NULL,
    p_customerAccountNumber varchar2:=NULL,
    p_customerName varchar2:=NULL,
    p_checkbookCode varchar2:=NULL,
    p_sendingBranchCode varchar2:=NULL,
    p_branchRequestNumber varchar2:=NULL)
    RETURN customerRequest_nt
    PIPELINED
    IS
    ob customerRequest_object:=customerRequest_object(
    NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL);
    condition varchar2(2000 char):=' WHERE 1=1 ';
    TYPE rectype IS RECORD(
    requestNumber VARCHAR2(32 char),
    branchRequestNumber VARCHAR2(32 char),
    branchCode VARCHAR2(50 char),
    bankCode VARCHAR2(50 char),
    sendingDate VARCHAR2(32 char),
    customerAccountNumber VARCHAR2(50 char),
    customerName VARCHAR2(200 char),
    checkbookCode VARCHAR2(50 char),
    numberOfchekbook NUMBER(2),
    sendingBranchCode VARCHAR2(50 char),
    numberOfIssued NUMBER(2)
    );
    rec rectype;
    dDate date;
    sDate varchar2(25 char);
    TYPE curtype IS REF CURSOR; --RETURN customerRequest%rowtype;
    cur curtype;
    my_branchRequestNumber VARCHAR2(32 char);
    my_branchCode VARCHAR2(50 char);
    my_bankCode VARCHAR2(50 char);
    my_sendingDate date;
    my_customerAccountNumber VARCHAR2(50 char);
    my_checkbookCode VARCHAR2(50 char);
    my_sendingBranchCode VARCHAR2(50 char);
    BEGIN
    IF NOT (regexp_like(p_sendingDate,'^[[:digit:]]{4}/[[:digit:]]{2}/[[:digit:]]{2}$')
    OR regexp_like(p_sendingDate,'^[[:digit:]]{4}/[[:digit:]]{2}/[[:digit:]]{2}[[:space:]]{1}[[:digit:]]{2}:[[:digit:]]{2}:[[:digit:]]{2}$')) THEN
    RAISE_APPLICATION_ERROR(-20000,cbdpkg.get_e_m(-1,5));
    ELSIF (p_sendingDate IS NOT NULL) THEN
    dDate:=TO_DATE(p_sendingDate,'YYYY/MM/DD hh24:mi:ss','nls_calendar=persian');
    dDate:=trunc(dDate);
    sDate:=TO_CHAR(dDate,'YYYY/MM/DD hh24:mi:ss');
    condition:=condition|| ' AND ' || 'sendingDate='||'TO_DATE('''||sDate||''',''YYYY/MM/DD hh24:mi:ss'''||')';
    END IF;
    IF (p_requestNumber IS NOT NULL) AND (cbdpkg.isspace(p_requestNumber)=0) THEN
    condition:=condition|| ' AND ' || ' requestNumber='||p_requestNumber;
    END IF;
    IF (p_bankCode IS NOT NULL) AND (cbdpkg.isspace(p_bankCode)=0) THEN
    condition:=condition|| ' AND ' || ' bankCode='''||p_bankCode||'''';
    END IF;
    IF (p_branchCode IS NOT NULL) AND (cbdpkg.isspace(p_branchCode)=0) THEN
    condition:=condition|| ' AND ' || ' branchCode='''||p_branchCode||'''';
    END IF;
    IF (p_numberOfchekbook IS NOT NULL) AND (cbdpkg.isspace(p_numberOfchekbook)=0) THEN
    condition:=condition|| ' AND ' || ' numberOfchekbook='''||p_numberOfchekbook||'''';
    END IF;
    IF (p_customerAccountNumber IS NOT NULL) AND (cbdpkg.isspace(p_customerAccountNumber)=0) THEN
    condition:=condition|| ' AND ' || ' customerAccountNumber='''||p_customerAccountNumber||'''';
    END IF;
    IF (p_customerName IS NOT NULL) AND (cbdpkg.isspace(p_customerName)=0) THEN
    condition:=condition|| ' AND ' || ' customerName like '''||'%'||p_customerName||'%'||'''';
    END IF;
    IF (p_checkbookCode IS NOT NULL) AND (cbdpkg.isspace(p_checkbookCode)=0) THEN
    condition:=condition|| ' AND ' || ' checkbookCode='''||p_checkbookCode||'''';
    END IF;
    IF (p_sendingBranchCode IS NOT NULL) AND (cbdpkg.isspace(p_sendingBranchCode)=0) THEN
    condition:=condition|| ' AND ' || ' sendingBranchCode='''||p_sendingBranchCode||'''';
    END IF;
    IF (p_branchRequestNumber IS NOT NULL) AND (cbdpkg.isspace(p_branchRequestNumber)=0) THEN
    condition:=condition|| ' AND ' || ' branchRequestNumber='''||p_branchRequestNumber||'''';
    END IF;
    dbms_output.put_line(condition);
    OPEN cur FOR 'SELECT branchRequestNumber,
    branchCode,
    bankCode,
    sendingDate,
    customerAccountNumber ,
    checkbookCode ,
    sendingBranchCode
    FROM customerRequest '|| condition ;
    LOOP
    FETCH cur INTO my_branchRequestNumber,
    my_branchCode,
    my_bankCode,
    my_sendingDate,
    my_customerAccountNumber ,
    my_checkbookCode ,
    my_sendingBranchCode;
    EXIT WHEN (cur%NOTFOUND) OR (cur%NOTFOUND IS NULL);
    BEGIN
    SELECT requestNumber,
    branchRequestNumber,
    branchCode,
    bankCode,
    TO_CHAR(sendingDate,'yyyy/mm/dd','nls_calendar=persian'),
    customerAccountNumber ,
    customerName,
    checkbookCode ,
    numberOfchekbook ,
    sendingBranchCode ,
    numberOfIssued INTO rec FROM customerRequest FOR UPDATE NOWAIT;
    --problem point is this
    EXCEPTION
    when no_data_found then
    null;
    END ;
    ob.requestNumber:=rec.requestNumber ;
    ob.branchRequestNumber:=rec.branchRequestNumber ;
    ob.branchCode:=rec.branchCode ;
    ob.bankCode:=rec.bankCode ;
    ob.sendingDate :=rec.sendingDate;
    ob.customerAccountNumber:=rec.customerAccountNumber ;
    ob.customerName :=rec.customerName;
    ob.checkbookCode :=rec.checkbookCode;
    ob.numberOfchekbook:=rec.numberOfchekbook ;
    ob.sendingBranchCode:=rec.sendingBranchCode ;
    ob.numberOfIssued:=rec.numberOfIssued ;
    PIPE ROW(ob);
    IF (cur%ROWCOUNT>500) THEN
    CLOSE cur;
    RAISE_APPLICATION_ERROR(-20000,cbdpkg.get_e_m(-1,4));
    EXIT;
    END IF;
    END LOOP;
    CLOSE cur;
    RETURN;
    END;

    Now what exactly would be the point of putting a SELECT FOR UPDATE in an autonomous transaction?
    I think OP should start by considering why he has a function with an undesirable side effect in the first place.
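    For what it's worth, the ORA-14551 comes from the locking itself: a SELECT ... FOR UPDATE counts as DML when the function is invoked from a query. Dropping the lock makes the lookup legal; a sketch of that inner SELECT (the WHERE clause is an assumption about how the fetched row should be matched):
    SELECT requestNumber, branchRequestNumber, branchCode, bankCode,
           TO_CHAR(sendingDate, 'yyyy/mm/dd', 'nls_calendar=persian'),
           customerAccountNumber, customerName, checkbookCode,
           numberOfchekbook, sendingBranchCode, numberOfIssued
    INTO   rec
    FROM   customerRequest
    WHERE  branchRequestNumber = my_branchRequestNumber;  -- no FOR UPDATE NOWAIT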

  • Query performance on same table with many DML operations

    Hi all,
    I have one table with 100 rows of data. After that, I inserted, deleted and modified data many times.
    A select statement after these DML operations takes much more time than before them (there is not much difference in the data).
    If I recreate the same table with the same data and run the same select statement, it takes less time.
    My question is: is there any command, like compress or re-indexing or something like that, to improve the performance without creating the table again?
    Thanks in advance,
    Pal

    Try searching "rebuilding indexes" on http://asktom.oracle.com. You will get lots of hits and many lively discussions. Certainly Tom's opinion is that rebuilds are very rarely required.
    As far as I know, Oracle has always re-used deleted rows in indexes as long as the new row belongs in that place in the index. The only situation I am aware of where deleted rows do not get re-used is where you have a monotonically increasing key (e.g. one generated by a sequence), and most, but not all, of the older rows are deleted over time.
    For example, if you had a table like this, where seq_no is populated by a sequence and indexed:
    seq_no         NUMBER
    processed_flag VARCHAR2(1)
    trans_date     DATE
    and then did deletes like:
    DELETE FROM t
    WHERE processed_flag = 'Y' and
          trans_date <= ADD_MONTHS(sysdate, -24);
    that deleted 99% of the rows in the time period that were processed, leaving only a few. Then the index leaf blocks would be very sparsely populated (i.e. lots of deleted rows in them), but since the current seq_no values are much larger than those old ones remaining, the space could not be re-used. Any leaf block that had all of its rows deleted would be reused in another part of the index.
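    That said, if the segment really has ended up sparse, there are commands that reclaim the space in place rather than recreating the table; a sketch (assuming a table T in an ASSM tablespace and an index on seq_no):
    -- 10g+: shrink the table and its indexes in place (needs row movement)
    alter table t enable row movement;
    alter table t shrink space cascade;
    -- or just tidy a single index without a full rebuild
    alter index t_seq_no_idx coalesce;
    -- then refresh the optimizer statistics
    exec dbms_stats.gather_table_stats(user, 'T', cascade => true)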
    HTH
    John

  • Oracle 8i array DML operations with LOB objects

    Hi all,
    I have a question about Oracle 8i array DML operations with LOB objects, both CLOB and BLOB. With the following statement in mind:
    INSERT INTO TABLEX (COL1, COL2) VALUES (:1, :2)
    where COL1 is a NUMBER and COL2 is a BLOB, I want to use OCI's array DML functionality to insert multiple records with a single statement execution. I have allocated an array of LOB locators, initialized them with OCIDescriptorAlloc(), and bound them to COL2 where mode is set to OCI_DATA_AT_EXEC and dty (IN) is set to SQLT_BLOB. It is after this point that I get confused.
    To send the LOB data, I have tried using the user-defined callback method, registering the callback function via OCIBindDynamic(). I initialize icbfps arguments as I would if I were dealing with RAW/LONG RAW data. When execution passes from the callback function, I encounter a memory exception within an Oracle dll. Where dvoid **indpp equals 0 and the object is of type RAW/LONG RAW, the function works fine. Is this not a valid methodology for CLOB/BLOB objects?
    Next, I tried performing piecewise INSERTs using OCIStmtGetPieceInfo() and OCIStmtSetPieceInfo(). When using this method, I use OCILobWrite() along with a user-defined callback designed for LOBs to send LOB data to the database. Here everything works fine until I exit the user-defined LOB write callback function where an OCI_INVALID_HANDLE error is encountered. I understand that both OCILobWrite() and OCIStmtExecute() return OCI_NEED_DATA. And it does seem to me that the two statements work separately rather than in conjunction with each other. So I rather doubt this is the proper methodology.
    As you can see, the correct method has evaded me. I have looked through the OCI LOB samples, but have not found any code that helps answer my question. Oracle's OCI documentation has not been of much help either. So if anyone could offer some insight I would greatly appreciate it.
    Chris Simms
    [email protected]

    Before 9i, you will have to first insert empty locators using EMPTY_CLOB() inlined in the SQL and using RETURNING clause to return the locator. Then use OCILobWrite to write to the locators in a streamed fashion.
    From 9i, you can actually bind a long buffer to each lob position without first inserting an empty locator, retrieving it and then writing to it.
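    The SQL side of that pre-9i approach looks roughly like this (a sketch against the table from the post; the second bind receives the locator that OCILobWrite() then streams into):
    INSERT INTO TABLEX (COL1, COL2)
    VALUES (:1, EMPTY_BLOB())
    RETURNING COL2 INTO :2;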

  • How to find last DML operation in oracle ADF

    How to find the last DML operation in Oracle ADF?
    Please help me
    Thanks
    Damby

    In the base EntityImpl class, just override the doDML() method as I said.
    (see http://docs.oracle.com/cd/E16162_01/web.1112/e16182/appendix_mostcommon.htm
    "Methods for Creating Your Own Layer of Framework Base Classes")
    So, put a flag in the session.
    You should not call the doDML() method in the backing bean; it will be called by the framework.
    In the backing bean, you only have to get that information from the session, as follows:
    String last_dml_op = (String) ADFContext.getCurrent().getSessionScope().get("last_dml_op");
    And voila...

  • How to know which DML operation is taking place on a table within a procedu

    Hi all,
    My DB Version
    SQL> select *
      2  from v$version;
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
    PL/SQL Release 10.2.0.4.0 - Production
    CORE    10.2.0.4.0      Production
    TNS for Solaris: Version 10.2.0.4.0 - Production
    NLSRTL Version 10.2.0.4.0 - Production
    How to find what DML operation is taking place on a particular table within a procedure?
    For suppose I've the below procedure
    create table r_dummy
    (
      name   varchar2(4000),
      emp_id number
    );
    Create or replace procedure r_dummy_proc
    (
      p_name   in varchar2,
      p_emp_id in number
    )
    is
    Begin
              Update r_dummy
              set name = p_name
              where emp_id = p_emp_id;
              if sql%rowcount > 1 then
                   dbms_output.put_line('Successfully updated employee name');
              end if;
    End;
    Here from the code I can identify that an update operation is taking place on table 'R_DUMMY'.
    But how to find that without actually viewing the code?? I've hundreds of procedures in my DB and would like to find what DML is taking place on which table and in which procedure.
    Please help with some suggestions.

    And here is the solution
    with t as
    (
      select distinct name, type, text, line
      from user_source s
      where regexp_like(s.text, 'cp_ca_dtls', 'i')
    ),
    x as
    (
      select name, line, text
      from
      (
        select name,
               case when (regexp_instr(text, '(update)|(insert)|(delete)', 1, 1, 1, 'i') > 0)
                     and regexp_instr(ld, 'CP_CA_DTLS', 1, 1, 1, 'i') > 0
                    then line
                    else null
               end as line,
               text
        from
        (
          select name, text, line, lead(text) over (partition by name order by line) ld
          from user_source
          where name in
          (
            select distinct name
            from user_source
            where upper(text) like '%CP_CA_DTLS%'
          )
          order by 1 nulls last
        )
      )
      where line is not null
    )
    select name, line, text
    from t t1
    where regexp_instr(text, '(update)|(insert)|(delete)', 1, 1, 1, 'i') > 0
    and exists
    (
      select 1
      from t t2
      where t1.name = t2.name
        and t1.type = t2.type
        and t1.line = t2.line
    )
    union
    select name, line, text
    from x

  • How to write DML operation in a function

    Hi
    It's very urgent for me.
    I am writing a DML operation directly in a function that is being called from a select statement, and it is getting the error "DML operations cannot be performed inside a query".
    How can I write a DML operation inside a function?
    My objective is to call that function from a select statement.
    Please help me out.
    Thanks

    No no no. You're committing after each row! So any other session running the same query will see the changes you're making. Your session will equally see changes caused by running this query in those sessions.
    Other sessions, yes, but the current session will only see the changes once it has completed the current statement. Otherwise my "rn" column would not have gone up sequentially in the above example. It would have gone:
    1st row rn = 1 (all rows get updated by 1: 2,3,4,5,6,7,8,9,10,11)
    2nd row rn = 3 (all rows get updated by 1: 3,4,5,6,7,8,9,10,11,12)
    3rd row rn = 5 (all rows get updated by 1: 4,5,6,7,8,9,10,11,12,13)
    4th row rn = 7 (all rows get updated by 1: 5,6,7,8,9,10,11,12,13,14)
    5th row rn = 9 (all rows get updated by 1: 6,7,8,9,10,11,12,13,14,15)
    6th row rn = 11 (all rows get updated by 1: 7,8,9,10,11,12,13,14,15,16)
    7th row rn = 13 (all rows get updated by 1: 8,9,10,11,12,13,14,15,16,17)
    8th row rn = 15 (all rows get updated by 1: 9,10,11,12,13,14,15,16,17,18)
    9th row rn = 17 (all rows get updated by 1: 10,11,12,13,14,15,16,17,18,19)
    10th row rn = 19 (all rows get updated by 1: 11,12,13,14,15,16,17,18,19,20)
    So the fact that the commit happens each time the rows get updated isn't affecting the currently running select statement.
    No, actually you DO see the other session's changes. This is because it is an AUTONOMOUS transaction, and this is a function.
    Test by adding:
    create or replace function incvals return number as
    pragma autonomous_transaction;
    v_val number;
    begin
    update t set rn = rn + 1;
    select max(rn) into v_val from t;
    dbms_lock.sleep(1); --add this line
    commit;
    return v_val;
    end;
    And test in two sessions.
    You will NOT get sequential ascending.
    > Think about the effect of two parallel sessions both running this query at the same time, and ask is this sensible?
    Gawd, no, of course not. Like I said, I'd never use this sort of thing myself. I'm just wondering what on earth the OP is trying to achieve. :)
    Glad to hear it.

  • ORA-14551: cannot perform a DML operation inside a query

    I have a Java method which is deployed as a Oracle function.
    This Java method parses a huge XML & populates this data
    into a set of database tables.
    I have to call this Oracle function in a unix shell script using sqlplus.
    Value returned by this function will be used by the shell script to decide
    what to do next.
    I am calling the Oracle Java function as follows in the shell script:
    echo "SELECT XML_TABLES.RUN_XML_LOADER('$P1','$P2','$P3','$P4') FROM DUAL;\n" | sqlplus $DB_USER > $LOG
    This gives error - "ORA-14551: cannot perform a DML operation inside a query".
    If I have to add a AUTONOMOUS_TRANSACTION pragma to this Java function,
    where to I add it considering, that the definition of the function is in a Java class.
    Can we do it in call spec?
    create or replace package XML_TABLES is
    function RUN_XML_LOADER(xmlFile IN VARCHAR2,
    xmlType IN VARCHAR2,
    outputDir IN VARCHAR2,
    logFileDir IN VARCHAR2) RETURN VARCHAR2 AS
    LANGUAGE JAVA NAME 'XmlLoader.run
    (java.lang.String, java.lang.String, java.lang.String, java.lang.String)
    return java.lang.String';
    end XML_TABLES;
    If not, is there any other way to achieve this?
    Thanks in advance.
    Sunitha.

    If I have to add a AUTONOMOUS_TRANSACTION pragma to this Java function...
    You'd have to write a PL/SQL function that calls the JSP. But I would caution you about using that pragma. It does introduce tremendous complexity into processing.
    As I see it you only need a function to return the result code so why not use a procedure with an OUT parameter?
    Cheers, APC
    Of course Yoann's suggestion of using an anonymous block would work too.
    Message was edited by:
    APC
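    For completeness, the anonymous-block route looks like this in SQL*Plus (a sketch; the :rc bind name is made up and '&1'..'&4' stand in for the shell parameters):
    variable rc varchar2(4000)
    begin
      -- called from PL/SQL rather than from inside a query, so ORA-14551 does not apply
      :rc := XML_TABLES.RUN_XML_LOADER('&1', '&2', '&3', '&4');
    end;
    /
    print rc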

  • DML operations not getting logged

    Hi,
    I want to collect the stats on the tables in my DB, for which I want the DML activity to be populated in the USER_TAB_MODIFICATIONS table. Monitoring on all the tables in the schema has been set to ON, but I still see no records in the USER_TAB_MODIFICATIONS or DBA_TAB_MODIFICATIONS table.
    I can always try executing dbms_stats.flush_database_monitoring_info to get this table populated. But I first wanted to know why this table is not getting populated through the DB processes. And in case any DB-level setting is preventing rows from getting inserted into USER_TAB_MODIFICATIONS, I am not sure if executing dbms_stats.flush_database_monitoring_info would help me.
    Kindly advice.
    Thanks in advance.

    CrazyAnie wrote:
    Hi,
    Yeah, I know there is a time lapse (approx 3 hrs) between the DML operations on the table and the time when they are logged in the DB. We waited for around a day to get these values populated in DBA_TAB_MODIFICATIONS. But we are still not able to see any values in this table.
    I am not able to guess the reason behind this. Kindly help.
    Shall I try executing flush_database_monitoring_info to get these values?
    Refer to the thread: user_tab_modifications only updated by gather_stats_job?
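    A quick way to check is to flush the in-memory counters yourself and then query the view (a sketch; DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO and the columns below are standard 10g dictionary objects):
    exec dbms_stats.flush_database_monitoring_info
    select table_name, inserts, updates, deletes, timestamp
    from   user_tab_modifications
    order  by timestamp desc;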

  • DML Operations in Stored Function

    Hi,
    I have used an UPDATE statement in a function, which is giving an error (ORA-14551: cannot perform a DML operation inside a query).
    Can I use DML operations in a stored function?
    (I need help on locking master/transaction tables, i.e. if one user locks a master, another user should not get modify access to the master/transactions.)
    Thanks
    Ramesh Ganji

    Someone who obviously didn't read my previous post in this thread. You should pay attention; programming is all about the details.
    ORA-14551: cannot perform a DML operation inside a query
    Then it obviously does more than just "return a Table Type object". Why are you doing DML in a function?
    PLS-00653: aggregate/table functions are not allowed in PL/SQL scope
    We can only call pipelined functions from SQL queries. So you'll have to ditch the DML or make it an autonomous transaction. Be very careful if you adopt the latter approach.
    Cheers, APC

  • DML operations improves perfomance on a Partioned Table?

    Hi
    We have a simple table (non-partitioned) and we do normal DML operations on it. If we convert that table into 5 partitions, does DML performance improve on it by 5 times? To be very specific, will READs and WRITEs on the table improve? If YES, then to what extent (considering a table size in TB)? The DB is 11g R2.
    Regards
    Edited by: 905133 on Dec 29, 2011 10:08 PM
    Edited by: 905133 on Dec 29, 2011 10:33 PM

    CKLP,
    I populated a table in my test environment called test with 10 columns and 7 million records.
    Its structure is:
    col1..col5 are of data type NUMBER and are locally indexed
    col6..col9 are of data type NUMBER (not indexed)
    col10 is of data type DATE
    The table is range-partitioned on col10 (7 partitions for a week, i.e. 1 partition/day, 1 million records per partition).
    When I inserted 7 million records (one million records per partition), the average insertion time per partition was 7.5 min (I inserted records individually into a partition).
    Then I created another table, test2, with the same number of columns indexed and an equal amount of records, but increased the number of partitions: 4 partitions/day, 28 partitions for the week.
    When I inserted 7 million records (one million records per 4 partitions), the average insertion time was 7.1 min (I inserted records individually into these 4 partitions at a single time).
    The INSERT INTO test VALUES (...) format was used, and I now know parallelism can't be used with this format.
    So the point to discuss here is "why did I not achieve a better insertion time when I divided a daily partition into 4 partitions per day", or "how can I improve insertion time in such scenarios".
    Oracle DB 11g R2, OS Linux 5, Sun Server x4200 with shared storage.
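    One thing worth testing in that scenario is a set-based, direct-path load instead of row-by-row INSERT ... VALUES; a sketch (the staging table stage_rows is made up):
    alter session enable parallel dml;
    insert /*+ append parallel(test, 4) */ into test
    select * from stage_rows;
    commit;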

  • Why we cannot perform DML operations against complex views directly.

    hi
    Can anyone tell me why we cannot perform DML operations against complex views directly?

    Hi,
    It is not easy to perform DML operations on complex views which involve more than one table, as vissu said. The reason is that it may not be clear which columns of the base tables should be updated/inserted/deleted. If it is a simple view containing a single table, it is as simple as performing the action on the table itself.
    For further details visit this
    http://www.orafaq.com/wiki/View
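    A tiny illustration of the key-preservation rule (table and view names are made up):
    create view emp_dept_v as
      select e.empno, e.ename, d.dname
      from   emp e join dept d on d.deptno = e.deptno;
    -- EMP is key-preserved, so this works:
    update emp_dept_v set ename = 'SMITH' where empno = 7369;
    -- DEPT is not key-preserved, so this fails with ORA-01779:
    update emp_dept_v set dname = 'SALES' where empno = 7369;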
    cheers
    VT

  • Ref:Logging the DML operations on a schema

    Hello,
    I want to log the DML operations on our master schema (this schema is accessed by 4-6 applications apart from their own application schemas). I want to log the DML operations done by x.application. Is there any easy way to do it? One solution is writing triggers on each table of the master schema and inserting the transactions into an audit schema (a replica of the master schema). Is there any other way to do it? Any solution for this issue is appreciated. Thanks in advance.

    Is there any other way to do it?
    Enable auditing; see the security guide:
    http://download.oracle.com/docs/cd/B19306_01/network.102/b14266/toc.htm
    http://download.oracle.com/docs/cd/B19306_01/network.102/b14266/auditing.htm#i1008289
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14220/security.htm#sthref2929
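    For example, standard auditing of DML on one master-schema table would look like this (a sketch; the schema and table names are placeholders, and the AUDIT_TRAIL parameter must be enabled first):
    -- init parameter: audit_trail = db  (restart required), then:
    audit insert, update, delete on master_schema.some_table by access;
    -- audited statements then show up in DBA_AUDIT_TRAIL / DBA_AUDIT_OBJECT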

  • DML operations on Materialized view

    Hi,
    I want to know whether we can perform DML operations like insert/update on a materialized view.
    Thanks
    Deepak

    Thanks Michaels. I'm able to update/insert into the materialized view.
    But I'm having another problem.
    My materialized view selects rows with a GROUP BY condition, but to create an MV as updatable, it has to be simple.
    SQL> create materialized view mv_utr_Link
    2 build immediate
    3 refresh force on demand
    4 for update
    5 enable query rewrite
    6 as
    7 select link_id,booking_date
    8 from t_utr
    9 where link_id=246229
    10 group by link_id,booking_date
    11 /
    from t_utr
    ERROR at line 8:
    ORA-12013: updatable materialized views must be simple enough to do fast
    refresh
    If I remove the GROUP BY clause, it allows me to create the MV, but that won't solve my problem.
    any workaround on that?
    Thanks
    Deepak
