Handling update operation in Post Insert Trigger

Hi All
We are recording all the data that is inserted into or updated in a table (say Table A) by inserting a row into a custom table whenever an insert or update fires the trigger. Now suppose a user, while updating data in Table A, just fetches the record on the front end and clicks Apply without making any changes. This still causes an update on the base table and hence an insert into our custom table, and that data is of no use to us.
Is there any way to stop this data from being inserted into our table?
I have come across two solutions so far. Please suggest which would be better in terms of performance:
1) When the user updates any column in the base table and the trigger fires, we compare the new value of each column (around 50) with its old value. If anything has changed we insert the data into the custom table, otherwise we do not.
2) When the user updates any column, we fetch all the new values into a record type variable and, in the same way, the latest values from the custom table, then perform a MINUS between the two records via a query from dual.
Please let me know if there is any other way around this.
Please note: my base table could have around 200 columns.
Thanks
AJ

Ajay Sharma wrote: (original question quoted above)
Option 3.
Fix the front end to disallow the operation, that operation being the submission of an UPDATE statement which doesn't actually change any data.
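For option 1 from the original post, a rough sketch of a NULL-safe comparison inside the trigger (TABLE_A, CUSTOM_A, COL1..COL3 and the helper function are placeholder names, not the actual schema):

-- Hypothetical NULL-safe "has this value changed?" helper.
CREATE OR REPLACE FUNCTION value_changed (p_old IN VARCHAR2, p_new IN VARCHAR2)
  RETURN BOOLEAN
IS
BEGIN
  RETURN (p_old IS NULL AND p_new IS NOT NULL)
      OR (p_old IS NOT NULL AND p_new IS NULL)
      OR p_old <> p_new;
END;
/

CREATE OR REPLACE TRIGGER trg_table_a_log
AFTER INSERT OR UPDATE ON table_a
FOR EACH ROW
BEGIN
  -- Log only genuine inserts, or updates where at least one column really changed.
  -- (Non-character columns are implicitly converted here; overloads per datatype would be cleaner.)
  IF INSERTING
     OR value_changed(:OLD.col1, :NEW.col1)
     OR value_changed(:OLD.col2, :NEW.col2)
     OR value_changed(:OLD.col3, :NEW.col3)
  THEN
    INSERT INTO custom_a (col1, col2, col3, changed_on)
    VALUES (:NEW.col1, :NEW.col2, :NEW.col3, SYSDATE);
  END IF;
END;
/

With 200 columns the IF condition becomes long, so it could be generated once from USER_TAB_COLUMNS rather than typed by hand.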

Similar Messages

  • Errors with post-insert trigger

    I have a POST-INSERT trigger here, but I encountered some problems. Can someone help me with the errors?
    begin
         insert into user_acct
         (userid_n, user_m, coy_c, contact_n, emp_n,
         work_loc_c, curr_passwd_t, prt_f, indv_pc_f, pc_deploy_c,      
         ext_email_addr_t, other_na_sys_t, status_c, status_rmk_t, upd_d)
         values
         (:user_acct.userid_n, :user_acct.user_m, :user_acct.coy_c, :user_acct.contact_n, :user_acct.emp_n,
         :user_acct.work_loc_c, :user_acct.curr_passwd_t, :user_acct.prt_f, :user_acct.indv_pc_f, :user_acct.pc_deploy_c,                
         :user_acct.ext_email_addr_t, :user_acct.other_na_sys_t, :user_acct.status_c, :user_acct.status_rmk_t,                
         :user_acct.upd_d);
         exception
              when others then
                   clear_message;
                   Message('Insertion of Applicant Particulars failed');
                   SYNCHRONIZE;
                   RAISE Form_Trigger_failure;
    end;
    begin
    insert into user_acct_na_detl
         (na_sys_c)
         values
         (:na_sys_cd.na_sys_c);
         exception
              when others then
                   clear_message;
                   Message('Insertion of Non-Application failed');
                   SYNCHRONIZE;
                   RAISE Form_Trigger_failure;
    end;
              when others then
                   null;      
    end;     
    the error:
    Error 103 at line 31, column 3
    Encountered the symbol 'WHEN' when expecting one of the following:
    begin declare end exception exit for goto if loop mod null
    pragma raise return select update while <an identifier>
    <a double-quoted delimited-identifier><a bind variable><<
    close current delete fetch lock insert open rollback
    savepoint set sql commit<a single-quoted SQL string>
    The symbol "exception" was substituted for "WHEN" to continue
    Error 103 at line 2, column 1
    Encountered the symbol "END"

    I deleted the "END" already, but when I compile again I encounter errors again.
    Error 370 at line 27, column
    OTHERS handler must be last among the exception handlers of a block
    Error 0 at line 1, column 1
    Statement ignored
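    A hedged sketch of how the trigger body could be restructured so each INSERT has its own exception handler inside one outer block, which removes the stray WHEN OTHERS that causes error 370 (column lists abbreviated with comments):
    BEGIN
      BEGIN
        insert into user_acct (userid_n, user_m /* ... remaining columns ... */)
        values (:user_acct.userid_n, :user_acct.user_m /* ... */);
      EXCEPTION
        WHEN OTHERS THEN
          clear_message;
          Message('Insertion of Applicant Particulars failed');
          SYNCHRONIZE;
          RAISE Form_Trigger_Failure;
      END;
      BEGIN
        insert into user_acct_na_detl (na_sys_c)
        values (:na_sys_cd.na_sys_c);
      EXCEPTION
        WHEN OTHERS THEN
          clear_message;
          Message('Insertion of Non-Application failed');
          SYNCHRONIZE;
          RAISE Form_Trigger_Failure;
      END;
    END;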

  • Execution of ddl statement  in post-insert trigger

    hi,
    I'm working on Headstart 6.5. I want to execute a DDL statement in the POST-INSERT trigger. The problem is that this trigger executes between PRE-COMMIT and POST-FORMS-COMMIT. In PRE-COMMIT the transaction is opened with an insert statement that inserts "open" into the status field, which has a deferred check constraint QMS_NEED_TO_CLOSE_TRANSACTION. In POST-FORMS-COMMIT it checks whether the transaction is open; if so, it checks the business rule, deletes the data that was inserted into qms_transaction in the PRE-COMMIT trigger, and closes the transaction.
         If I execute a DDL statement in between, an implicit commit is performed on the insert written in the PRE-COMMIT trigger. This violates the check constraint and we get the error that QMS_NEED_TO_CLOSE_TRANSACTION is violated.
         My business logic requires this statement to be executed in the POST-INSERT trigger on the block. Is this possible?
         Has anyone faced the same problem? What is the workaround for it?

    Hello,
    You could call Execute_Query after Commit_Form, for example in a KEY-COMMIT trigger:
    KEY-COMMIT trigger
      Commit_Form ;
      Go_block( 'master_block' ) ;
      Execute_Query ;
    Francois

  • Post insert trigger

    Let's say I define a POST-INSERT trigger for a data block. When will it execute? After the insert is performed? In other words, will the trigger wait until a confirmation is sent back from the database? And if the insertion is not completed because of a network failure, will the trigger still execute?

    Hi,
    In Metalink document 61675.1 you can find the trigger execution sequence for Forms 4.5. I can't find the trigger execution sequence for Forms 6i, but I think this should be the same for most events.
    For the event "create record", the triggers fire in the following sequence (trigger name, then the level it is defined at):
    1. Post-Change (Block)
    2. When-Validate-Item (Block)
    3. Post-Text-Item (Block)
    4. When-Validate-Record (Block)
    5. Post-Record (Block)
    6. Post-Block (Block)
    7. On-Savepoint (Form)
    8. Pre-Commit (Form)
    9. Pre-Insert (Block)
    10. On-Insert (Form)
    11. Post-Insert (Block)
    12. Post-Forms-Commit (Form)
    13. On-Commit (Form)
    14. Post-Database-Commit (Form)
    15. Pre-Block (Block)
    16. Pre-Record (Block)
    17. Pre-Text-Item (Block)
    18. When-New-Item-Instance (Form)
    So it seems that the Post-Insert trigger fires AFTER the insert is performed. The insert itself is performed in the On-Insert trigger.
    HTH
    Kind regards,
    Lennart de Vos

  • Error in post insert trigger

    Hi to all,
    In a table I have an id field that must be populated from a sequence.
    Is it possible to set this value after the insert?
    I have tried to create a trigger like this:
    CREATE OR REPLACE TRIGGER MYSCHEMA.POST_USERACCOUNTS_INSERT
    AFTER INSERT
    ON MYSCHEMA.USERACCOUNTS
    REFERENCING NEW AS New OLD AS Old
    FOR EACH ROW
    DECLARE
    tmpVar NUMBER;
    BEGIN
       tmpVar := 0;
       SELECT SEQ_USERACCOUNTS_ID.NEXTVAL INTO tmpVar FROM dual;
       :NEW.USERS_ID := tmpVar;
       :NEW.DATECREATION := SYSDATE;
       EXCEPTION
         WHEN OTHERS THEN
           -- Consider logging the error and then re-raise
           RAISE;
    END POST_USERACCOUNTS_INSERT;
    But the creation aborts with this error:
    ORA-04084: cannot change NEW values for this trigger type
    How can I resolve this problem?
    Thank You for help.
    Greetings...
    Gaetano

    Cause
    New trigger variables can only be changed in before row insert or update triggers.
    Action
    Change the trigger type or remove the variable reference.
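    A minimal sketch of the BEFORE INSERT variant, reusing the names from the post (only the trigger name and timing change):
    CREATE OR REPLACE TRIGGER MYSCHEMA.PRE_USERACCOUNTS_INSERT
    BEFORE INSERT
    ON MYSCHEMA.USERACCOUNTS
    REFERENCING NEW AS New OLD AS Old
    FOR EACH ROW
    BEGIN
       -- :NEW can only be assigned in BEFORE row triggers, hence ORA-04084 on the AFTER version.
       SELECT SEQ_USERACCOUNTS_ID.NEXTVAL INTO :NEW.USERS_ID FROM dual;
       :NEW.DATECREATION := SYSDATE;
    END PRE_USERACCOUNTS_INSERT;
    /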

  • Post insert error

    hi all
    I have a form for sales entry. I want to insert into another table
    when I save its contents.
    I used a POST-INSERT trigger for this,
    but the form has a detail block, and so this inserts as many records as that block has rows.
    the form has the following blocks.
    sal_master(master block)
         sal_id
         sal_date
         cust_id
    sal_detail(detail block)
         sal_id
         qty
         price
    post insert trigger
    declare     
         a number; -- ledger no
         b number; -- customer id
         c date; -- entry date
         d number; -- credit
    begin
         -- insert into customer ledger
         select nvl(max(l_id)+1,1)
         into a
         from cust_led;
         b:= :sal_master.cust_id;
         c:= :sal_master.sal_date;
         d:= :ser_detail.total;
         insert into cust_led (l_id, cust_id, e_date, credit)
         values ( a, b, c, d);
    exception
         when no_data_found then
         raise FORM_TRIGGER_FAILURE;
    END;
    At block level and at form level the result is the same.
    I want to insert only one entry in the table for the multiple records
    in this form.
    MUhammad Nadeem
    marda (pakistan)

    Move this trigger to your sal_master block (master block level). This will insert only 1 record.

  • Post-insert vs pre-insert

    Dear All,
    can anybody explain the difference between the PRE-INSERT and POST-INSERT triggers?
    I need an example of something that is possible in one trigger and not possible in the other.
    Regards
    Kashif Butt

    Here is an example. Please remove any syntax errors before running :-)
    create sequence my_seq;
    create table table_a
    (id number(20) constraint pk_table_a primary key,
    text varchar2(100)
    );
    create table table_b
    (id number(20) constraint pk_table_b primary key,
    a_id number(20) constraint fk1_table_b references table_a(id),
    text varchar2(100)
    );
    Create a form with a block based on table_a with items "id" and "text"
    "id" is on null canvas (not visible), put "text" on a canvas.
    pre-insert:
    select my_seq.nextval into :table_a.id from dual;
    This can't be done in POST-INSERT: the insert would fail because id is the primary key and must be not null.
    post-insert:
    insert into table_b(id, a_id, text)
    values(my_seq.nextval, :table_a.id, 'Inserted automatically by post-insert');
    This can't be done in the pre-insert trigger, because the foreign key constraint will require the record in table_a
    to be inserted first.

  • Oracle 11g: Oracle insert/update operation is taking more time.

    Hello All,
    In Oracle 11g (Windows 2008 32 bit environment) we are facing following issue.
    1) We are inserting/updating data in some tables (4-5 tables) and firing queries at a very high rate.
    2) After some time (say 15 days under the same load) we feel that the Oracle operations (insert/update) are taking more time.
    Query 1: How do we find out whether Oracle really is taking more time for insert/update operations?
    Query 2: How do we rectify the problem?
    We have a multithreaded environment.
    Thanks
    With Regards
    Hemant.

    Liron Amitzi wrote:
    Hi Nicolas,
    Just a short explanation:
    Suppose you have a table with 1 column (let's say a number), the table is empty, and you have an index on the column.
    When you insert a row, the value of the column will be inserted to the index. To insert 1 value to an index with 10 values in it will be fast. It will take longer to insert 1 value to an index with 1 million values in it.
    My second example was if I take the same table and let's say I insert 10 rows and delete the previous 10 from the table. I always have 10 rows in the table so the index should be small. But this is not correct. If I insert values 1-10 and then delete 1-10 and insert 11-20, then delete 11-20 and insert 21-30 and so on, because the index is sorted, where 1-10 were stored I'll now have empty spots. Oracle will not fill them up. So the index will become larger and larger as I insert more rows (even though I delete the old ones).
    The solution here is simply to rebuild the index once in a while.
    Hope it is clear.
    Liron Amitzi
    Senior DBA consultant
    [www.dbsnaps.com]
    [www.orbiumsoftware.com]
    Hmmm, index space not reused? Index rebuilt once in a while? That was what I understood from your previous post, but nothing is less sure.
    This is a misconception of how indexes are working.
    I would suggest reading the following interesting doc; there are a lot of nice examples (including index space reuse) to understand, and in conclusion:
    http://richardfoote.files.wordpress.com/2007/12/index-internals-rebuilding-the-truth.pdf
    "Index Rebuild Summary
    • The vast majority of indexes do not require rebuilding
    • 'Oracle B-tree indexes can become "unbalanced" and need to be rebuilt' is a myth
    • 'Deleted space in an index is "deadwood" and over time requires the index to be rebuilt' is a myth
    • 'If an index reaches "x" number of levels, it becomes inefficient and requires the index to be rebuilt' is a myth
    • 'If an index has a poor clustering factor, the index needs to be rebuilt' is a myth
    • 'To improve performance, indexes need to be regularly rebuilt' is a myth"
    Good reading,
    Nicolas.
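    If you want to see how much deleted space a particular index actually carries before drawing any conclusion, one sketch (MY_INDEX is a placeholder, and VALIDATE STRUCTURE locks the index while it runs):
    ANALYZE INDEX my_index VALIDATE STRUCTURE;
    -- INDEX_STATS holds one row describing the last index analyzed in this session.
    SELECT name,
           height,
           lf_rows,      -- current leaf entries
           del_lf_rows,  -- leaf entries flagged as deleted
           ROUND(100 * del_lf_rows / NULLIF(lf_rows, 0), 1) AS pct_deleted
      FROM index_stats;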

  • Do SQL pros write seperate stored procedures to handle Save and update operations?

    hi friends,
    Currently I'm a bit confused about the industry standard for writing stored procedures for save and update operations. I have seen people write separate stored procedures to handle save and update operations, even when they have to write complex queries. I have also seen people write one stored procedure to handle both save and update operations, even when the queries are complicated. When I asked them why they do it, they said: why waste time writing another query for the update when one save stored procedure can handle it all?
    To the SQL pros and gurus here: what would you say about this?
    What is the actual industry standard for stored procedures that perform save and update operations? Do you write separate ones, or do both in one?
    thanks
    I use Visual Studio 2012 Ultimate and SQL Server 2008 Developer edition!

    >what is the actual industry standard with stored procedure to perform save and update operations
    There is no industry standard. As a guideline, though, you want to be happy with the sprocs, and the same goes for your peers.
    As noted above the MERGE command is like a Swiss Army knife; it can perform INSERT, UPDATE & DELETE
    in one (huge) statement.
    Make sure you pick good names for the sp-s, document the parameters and comment the logic if any.
    Format the stored procedure for readability:
    http://www.sqlusa.com/sqlformat/
    Kalman Toth Database & OLAP Architect
    SELECT Query Video Tutorial 4 Hours
    New Book / Kindle: Exam 70-461 Bootcamp: Querying Microsoft SQL Server 2012
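    As a rough illustration of the single-procedure approach built around MERGE (the dbo.Customer table, columns and parameters below are invented for the example):
    -- Hypothetical "save" procedure: updates the row if it exists, otherwise inserts it.
    CREATE PROCEDURE dbo.SaveCustomer
        @CustomerId INT,
        @Name       NVARCHAR(100)
    AS
    BEGIN
        SET NOCOUNT ON;
        MERGE dbo.Customer AS tgt
        USING (SELECT @CustomerId AS CustomerId, @Name AS Name) AS src
            ON tgt.CustomerId = src.CustomerId
        WHEN MATCHED THEN
            UPDATE SET Name = src.Name
        WHEN NOT MATCHED THEN
            INSERT (CustomerId, Name) VALUES (src.CustomerId, src.Name);
    END;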

  • Pre-insert trigger is not firing after post built-in

    Hi,
    I have a 10g form in which the POST built-in is used in a WHEN-BUTTON-PRESSED trigger. After the POST command I am checking some condition, using the same record that I have posted, but it is not working.
    I have also put a message in the PRE-INSERT trigger, but the message is not displayed.
    The same form is working fine in Forms 6i; I have migrated the forms from 6i to 10g.

    Yes, in that block there are other items also. I have set the Required property to No for all the items.
    What exactly we have is:
    (if x = y then) -- on some condition check
    POST;
    After that, the form has a select statement which selects the same row that was posted above.
    If the select statement gives a row count of zero,
    raise form_trigger_failure is fired.
    And in the pre-insert trigger the form assigns a value to a block item.

  • ORA-01456 : may not perform insert/delete/update operation

    When I use the following stored procedure with Crystal Reports, the following error occurs:
    ORA-01456 : may not perform insert/delete/update operation inside a READ ONLY transaction
    Kindly help me on this, please.
    My stored procedure is as under:
    create or replace
    PROCEDURE PROC_FIFO
    (CV IN OUT TB_DATA.CV_TYPE,FDATE1 DATE, FDATE2 DATE,
    MSHOLD_CODE IN NUMBER,SHARE_ACCNO IN VARCHAR)
    IS
    --DECLARE VARIABLES
    V_QTY NUMBER(10):=0;
    V_RATE NUMBER(10,2):=0;
    V_AMOUNT NUMBER(12,2):=0;
    V_DATE DATE:=NULL;
    --DECLARE CURSORS
    CURSOR P1 IS
    SELECT *
    FROM FIFO
    WHERE SHARE_TYPE IN ('P','B','R')
    ORDER BY VOUCHER_DATE,
    CASE WHEN SHARE_TYPE='P' THEN 1
    ELSE
    CASE WHEN SHARE_TYPE='R' THEN 2
    ELSE
    CASE WHEN SHARE_TYPE='B' THEN 3
    END
    END
    END,
    TRANS_NO;
    RECP P1%ROWTYPE;
    CURSOR S1 IS
    SELECT * FROM FIFO
    WHERE SHARE_TYPE='S'
    AND TRANS_COST IS NULL
    ORDER BY VOUCHER_DATE,TRANS_NO;
    RECS S1%ROWTYPE;
    --BEGIN QUERIES
    BEGIN
    DELETE FROM FIFO;
    --OPENING BALANCES
    INSERT INTO FIFO (
    VOUCHER_NO,VOUCHER_TYPE,VOUCHER_DATE,TRANS_QTY,TRANS_AMT,TRANS_RATE,
    SHOLD_CODE,SHARE_TYPE,ACC_NO,SHARE_CODE,TRANS_NO)
    SELECT TO_CHAR(FDATE1,'YYYYMM')||'001' VOUCHER_NO,'OP' VOUCHER_TYPE,
    FDATE1-1 VOUCHER_DATE,
    SUM(
    CASE WHEN
    --((SHARE_TYPE ='S' AND DTAG='Y')
    SHARE_TYPE IN ('P','B','R','S') THEN
    TRANS_QTY
    ELSE
    0
    END
    ) TRANS_QTY,
    SUM(TRANS_AMT),
    NVL(CASE WHEN SUM(TRANS_AMT)<>0
    AND
    SUM(
    CASE WHEN SHARE_TYPE IN ('P','B','R','S') THEN
    TRANS_QTY
    ELSE
    0
    END
    )<>0 THEN
    SUM(TRANS_AMT)/
    SUM(
    CASE WHEN SHARE_TYPE IN ('P','B','R','S') THEN
    TRANS_QTY
    ELSE
    0
    END
    ) END,0) TRANS_RATE,
    MSHOLD_CODE SHOLD_CODE,'P' SHARE_TYPE,SHARE_ACCNO ACC_NO,
    SHARE_CODE,0 TRANS_NO
    FROM TRANS
    WHERE ACC_NO=SHARE_ACCNO
    AND SHOLD_CODE= MSHOLD_CODE
    AND VOUCHER_DATE<FDATE1
    --AND
    --(SHARE_TYPE='S' AND DTAG='Y')
    --OR SHARE_TYPE IN ('P','R','B'))
    group by TO_CHAR(FDATE1,'YYYYMM')||'001', MSHOLD_CODE,SHARE_ACCNO, SHARE_CODE;
    COMMIT;
    --TRANSACTIONS BETWEEND DATES
    INSERT INTO FIFO (
    TRANS_NO,VOUCHER_NO,VOUCHER_TYPE,
    VOUCHER_DATE,TRANS_QTY,
    TRANS_RATE,TRANS_AMT,SHOLD_CODE,SHARE_CODE,ACC_NO,
    DTAG,TRANS_COST,SHARE_TYPE)
    SELECT TRANS_NO,VOUCHER_NO,VOUCHER_TYPE,
    VOUCHER_DATE,TRANS_QTY,
    CASE WHEN SHARE_TYPE='S' THEN
    NVL(TRANS_RATE-COMM_PER_SHARE,0)
    ELSE
    NVL(TRANS_RATE+COMM_PER_SHARE,0)
    END
    ,TRANS_AMT,SHOLD_CODE,SHARE_CODE,ACC_NO,
    DTAG,NULL TRANS_COST,SHARE_TYPE
    FROM TRANS
    WHERE ACC_NO=SHARE_ACCNO
    AND SHOLD_CODE= MSHOLD_CODE
    AND VOUCHER_DATE BETWEEN FDATE1 AND FDATE2
    AND
    ((SHARE_TYPE='S' AND DTAG='Y')
    OR SHARE_TYPE IN ('P','R','B'));
    COMMIT;
    --PURCHASE CURSOR
    IF P1%ISOPEN THEN
    CLOSE P1;
    END IF;
    OPEN P1;
    LOOP
    FETCH P1 INTO RECP;
    V_QTY:=RECP.TRANS_QTY;
    V_RATE:=RECP.TRANS_RATE;
    V_DATE:=RECP.VOUCHER_DATE;
    dbms_output.put_line('V_RATE OPENING:'||V_RATE);
    dbms_output.put_line('OP.QTY2:'||V_QTY);
    EXIT WHEN P1%NOTFOUND;
    --SALES CURSOR
    IF S1%ISOPEN THEN
    CLOSE S1;
    END IF;
    OPEN S1;
    LOOP
    FETCH S1 INTO RECS;
    EXIT WHEN S1%NOTFOUND;
    dbms_output.put_line('OP.QTY:'||V_QTY);
    dbms_output.put_line('SOLD:'||RECS.TRANS_QTY);
    dbms_output.put_line('TRANS_NO:'||RECS.TRANS_NO);
    dbms_output.put_line('TRANS_NO:'||RECS.TRANS_NO);
    IF ABS(RECS.TRANS_QTY)<=V_QTY
    AND V_QTY<>0
    AND RECS.TRANS_COST IS NULL THEN
    --IF RECS.TRANS_COST IS NULL THEN
    --dbms_output.put_line('SOLD:'||RECS.TRANS_QTY);
    --dbms_output.put_line('BAL1:'||V_QTY);
    UPDATE FIFO
    SET TRANS_COST=V_RATE,
    PUR_DATE=V_DATE
    WHERE TRANS_NO=RECS.TRANS_NO
    AND TRANS_COST IS NULL;
    COMMIT;
    dbms_output.put_line('UPDATE TRANS_NO:'||RECS.TRANS_NO);
    dbms_output.put_line('OP.QTY:'||V_QTY);
    dbms_output.put_line('SOLD:'||RECS.TRANS_QTY);
    dbms_output.put_line('TRANS_NO:'||RECS.TRANS_NO);
    dbms_output.put_line('BAL2:'||TO_CHAR(RECS.TRANS_QTY+V_QTY));
    END IF;
    IF ABS(RECS.TRANS_QTY)>ABS(V_QTY)
    AND V_QTY<>0 AND RECS.TRANS_COST IS NULL THEN
    UPDATE FIFO
    SET
    TRANS_QTY=-V_QTY,
    TRANS_COST=V_RATE,
    TRANS_AMT=ROUND(V_QTY*V_RATE,2),
    PUR_DATE=V_DATE
    WHERE TRANS_NO=RECS.TRANS_NO;
    --AND TRANS_COST IS NULL;
    COMMIT;
    dbms_output.put_line('UPDATING 100000:'||TO_CHAR(V_QTY));
    dbms_output.put_line('UPDATING 100000 TRANS_NO:'||TO_CHAR(RECS.TRANS_NO));
    INSERT INTO FIFO (
    TRANS_NO,VOUCHER_NO,VOUCHER_TYPE,
    VOUCHER_DATE,TRANS_QTY,
    TRANS_RATE,TRANS_AMT,SHOLD_CODE,SHARE_CODE,ACC_NO,
    DTAG,TRANS_COST,SHARE_TYPE,PUR_DATE)
    VALUES (
    MCL.NEXTVAL,RECS.VOUCHER_NO,RECS.VOUCHER_TYPE,
    RECS.VOUCHER_DATE,RECS.TRANS_QTY+V_QTY,
    RECS.TRANS_RATE,(RECS.TRANS_QTY+V_QTY)*RECS.TRANS_RATE,RECS.SHOLD_CODE,
    RECS.SHARE_CODE,RECS.ACC_NO,
    RECS.DTAG,NULL,'S',V_DATE);
    dbms_output.put_line('INSERTED RECS.QTY:'||TO_CHAR(RECS.TRANS_QTY));
    dbms_output.put_line('INSERTED QTY:'||TO_CHAR(RECS.TRANS_QTY+V_QTY));
    dbms_output.put_line('INSERTED V_QTY:'||TO_CHAR(V_QTY));
    dbms_output.put_line('INSERTED RATE:'||TO_CHAR(V_RATE));
    COMMIT;
    V_QTY:=0;
    V_RATE:=0;
    EXIT;
    END IF;
    IF V_QTY>0 THEN
    V_QTY:=V_QTY+RECS.TRANS_QTY;
    ELSE
    V_QTY:=0;
    V_RATE:=0;
    EXIT;
    END IF;
    --dbms_output.put_line('BAL3:'||V_QTY);
    END LOOP;
    V_QTY:=0;
    V_RATE:=0;
    END LOOP;
    CLOSE S1;
    CLOSE P1;
    OPEN CV FOR
    SELECT TRANS_NO,VOUCHER_NO,VOUCHER_TYPE,
    VOUCHER_DATE,TRANS_QTY,
    TRANS_RATE,TRANS_AMT,SHOLD_CODE,B.SHARE_CODE,B.ACC_NO,
    DTAG,TRANS_COST,SHARE_TYPE, B.SHARE_NAME,A.PUR_DATE
    FROM FIFO A, SHARES B
    WHERE A.SHARE_CODE=B.SHARE_CODE
    --AND A.SHARE_TYPE IS NOT NULL
    ORDER BY VOUCHER_DATE,SHARE_TYPE,TRANS_NO;
    END PROC_FIFO;
    Thanks and Regards,
    Luqman

    Copy from TODOEXPERTOS.COM
    Problem Description
    When running a RAM build you get the following error as seen in the RAM build
    log file:
    14:52:50 2> Updating warehouse tables with build information...
    Process Terminated In Error:
    [Oracle][ODBC][Ora]ORA-01456: may not perform insert/delete/update operation inside a READ ONLY transaction
    (SIGENG02) ([Oracle][ODBC][Ora]ORA-01456: may not perform insert/delete/update operation inside a READ ONLY transaction
    ) Please contact the administrator of your Oracle Express Server application.
    Solution Description
    Here are the following suggestions to try out:
    1. You may want to use oci instead of odbc for your RAM build, provided you
    are running an Oracle database. This is setup through the RAA (relational access
    administrator) maintenance procedure.
    Also make sure your tnsnames.ora file is setup correctly in either net80/admin
    or network/admin directory, to point to the correct instance of Oracle.
    2. Commit or rollback the current transaction, then retry running your
    RAM build. Seems like one or more of your lookup or fact tables have a
    read-only lock on them. This occurs if you modify or add some records to your
    lookup or fact tables but forget to issue a commit or rollback. You need to do
    this through SQL+.
    3. You may need to check what permissions has been given to the relational user.
    The error could be a permissions issue.
    You must give the 'connect' permission or role to the RAM/relational user. You may
    also try giving the 'dba' and 'resource' privileges/roles to this user as a test. In order to
    keep it simple, make sure all your lookup, fact and wh_ tables are created on
    a single new tablespace. Create a new user with the above privileges as the
    owner of the tablespace, as well as the owner of the lookup, fact and wh_
    tables, in order to see if this is a permissions issue.
    In this particular case, the problem was resolved by using oci instead of odbc,
    as explained in suggestion #1.

  • Update by after insert trigger

    I need to update each newly inserted row of data by checking whether the same values already exist in the table. So I tried an AFTER INSERT trigger with the code below:
    CREATE OR REPLACE TRIGGER approve_checker
    after insert on TBL_BILL
    for each row
    declare
    counter integer;
    begin
    SELECT COUNT(*) INTO counter FROM TBL_BILL WHERE issue_date=:NEW.issue_date AND wagon_no=:NEW.wagon_no;
    IF( counter > 1)
    THEN
    UPDATE TBL_BILL SET approved = 2 WHERE id = :NEW.id;
    END IF;
    end approve_checker;
    But errors occurred:
    ORA-04091: table RAIL.BILL is mutating, trigger/function may not see it
    ORA-06512: at "RAIL.APPROVE_CHECKER", line 4
    ORA-04088: error during execution of trigger 'RAIL.APPROVE_CHECKER'
    Any help appreciated.

    You cannot SELECT from the table on which the row-level trigger is defined, nor can you UPDATE a table on which the row-level trigger is defined. If you are inserting multiple rows in a single INSERT statement, Oracle can't be sure what rows to update and/or select.
    Is there a reason that you wouldn't just create a unique constraint to enforce this condition?
    Justin
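    For reference, the declarative alternative Justin mentions would be roughly (the constraint name is made up):
    ALTER TABLE tbl_bill
      ADD CONSTRAINT uq_tbl_bill_issue_wagon UNIQUE (issue_date, wagon_no);
    Note that this rejects the duplicate outright instead of flagging it with approved = 2, so it only fits if rejecting is acceptable.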

  • How to handle EndDialog message when using Asynchronous Trigger Pattern?

    I have Service A which sends a message to Service B when an update/delete/insert trigger fires. Service B's dequeuing logic looks like:
    declare @messagebody xml
    declare @messagetype nvarchar(256)
    declare @cg uniqueidentifier
    declare @ch uniqueidentifier
    DECLARE @messages TABLE(messagetype nvarchar(256),messagebody xml)
    begin try
    begin transaction;
    waitfor (
    receive top (1)
    @cg = conversation_group_id,
    @ch = conversation_handle,
    @messagetype = message_type_name,
    @messagebody = cast(message_body as xml)
    from MyPolicyQueue
    ),TIMEOUT @receiveTimeoutMs
    if @messagebody is not null
    begin
    insert into @messages values (@messagetype, @messagebody)
    end
    select * from @messages
    end conversation @ch
    commit
    end try
    Service B will end the conversation, but afterwards the queue now has an EndDialog message that never gets processed.
    The intent is asynchronous handling of update messages, with the trigger used as the event source. Service B closes the conversation but the trigger does not (it would probably be a very bad thing if the trigger blocked waiting for the EndDialog response).
    How do you handle the EndDialog message in this scenario?
    The code is based on this example.
    scott

    A common practice is to create an activated proc on the initiator queue to close the conversation loop after the target service does its job. This activated proc simply ends the conversation upon receipt of an EndDialog message type. If
    an unexpected message type is received (including Error), the proc can end the conversation with error and perhaps log the details to a table that can be monitored for exceptions.
    See Rusanu's article for a description of this pattern:
    http://rusanu.com/2006/04/06/fire-and-forget-good-for-the-military-but-not-for-service-broker-conversations/
    Dan Guzman, SQL Server MVP, http://www.dbdelta.com
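    A rough sketch of such an activated procedure on the initiator queue (the queue and procedure names are assumptions, not taken from the post):
    CREATE PROCEDURE dbo.InitiatorQueueActivation
    AS
    BEGIN
        DECLARE @ch UNIQUEIDENTIFIER, @mt NVARCHAR(256);
        WHILE 1 = 1
        BEGIN
            BEGIN TRANSACTION;
            WAITFOR (
                RECEIVE TOP (1)
                    @ch = conversation_handle,
                    @mt = message_type_name
                FROM dbo.MyPolicyInitiatorQueue      -- assumed initiator queue name
            ), TIMEOUT 1000;
            IF @@ROWCOUNT = 0
            BEGIN
                ROLLBACK TRANSACTION;
                BREAK;
            END
            IF @mt = N'http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog'
                END CONVERSATION @ch;                -- closes the initiator side of the dialog
            ELSE
                END CONVERSATION @ch WITH ERROR = 1
                    DESCRIPTION = 'Unexpected message type';  -- consider logging @mt first
            COMMIT TRANSACTION;
        END
    END;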

  • Update operation transaction timing out

    Hello,
    I have a large data structure that needs to be updated atomically (i.e. within a transaction). I am calling the logical service via a web service map, so I presume the transaction starts when the logical service is invoked. It is receiving a transaction timeout on the server, which I suspect is set to 30 seconds at the WebLogic level by our middleware team. (See stack trace below).
    The update operation SDO XML is at least 2MB.
    1. Is there any way to override this transaction timeout at the service or data space level?
    2. Is there any more efficient way to do this update? Sending a big wad of xml is inefficient, and I am not aware of another format supported by ODSI web services. I cannot use mediator API straight from my client to ODSI.
    3. If I have to break up my save operation into chunks, is there any way to effectively achieve transactional update via multiple web service calls?
    Any advice would be much appreciated.
    Thanks,
    Jeff
    weblogic.xml.query.exceptions.XQuerySystemException: {bea-err}SYS003: Unexpected exception
         at weblogic.xml.query.transaction.TransactionHelper.commit(TransactionHelper.java:96)
         at weblogic.xml.query.transaction.TransactionManager.teardownOnSuccess(TransactionManager.java:178)
         at com.bea.ld.EJBRequestHandler.handleProcessingComplete(EJBRequestHandler.java:1020)
         at com.bea.ld.EJBRequestHandler.handlePusher(EJBRequestHandler.java:979)
         at com.bea.ld.EJBRequestHandler.invokeOperation(EJBRequestHandler.java:323)
         at com.bea.ld.ServerWrapperBean.invoke(ServerWrapperBean.java:153)
         at com.bea.ld.ServerWrapperBean.invokeOperation(ServerWrapperBean.java:80)
         at com.bea.ld.ServerWrapper_s9smk0_ELOImpl.invokeOperation(ServerWrapper_s9smk0_ELOImpl.java:141)
         at com.bea.dsp.ws.RoutingHandler$PriviledgedRunner.run(RoutingHandler.java:96)
         at com.bea.dsp.ws.RoutingHandler.handleResponse(RoutingHandler.java:217)
         at weblogic.wsee.handler.HandlerIterator.handleResponse(HandlerIterator.java:287)
         at weblogic.wsee.handler.HandlerIterator.handleResponse(HandlerIterator.java:271)
         at weblogic.wsee.ws.dispatch.server.ServerDispatcher.dispatch(ServerDispatcher.java:176)
         at weblogic.wsee.ws.WsSkel.invoke(WsSkel.java:80)
         at weblogic.wsee.server.servlet.SoapProcessor.handlePost(SoapProcessor.java:66)
         at weblogic.wsee.server.servlet.SoapProcessor.process(SoapProcessor.java:44)
         at weblogic.wsee.server.servlet.BaseWSServlet$AuthorizedInvoke.run(BaseWSServlet.java:285)
         at weblogic.wsee.server.servlet.BaseWSServlet.service(BaseWSServlet.java:169)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
         at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
         at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
         at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
         at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:175)
         at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3498)
         at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
         at weblogic.security.service.SecurityManager.runAs(Unknown Source)
         at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2180)
         at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2086)
         at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1406)
         at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
         at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
    Caused by: weblogic.transaction.RollbackException: Transaction timed out after 33 seconds
    BEA1-460E808DE23C779E385F
         at weblogic.transaction.internal.TransactionImpl.throwRollbackException(TransactionImpl.java:1818)
         at weblogic.transaction.internal.TransactionImpl.checkIfCommitPossible(TransactionImpl.java:1708)
         at weblogic.transaction.internal.ServerTransactionImpl.internalCommit(ServerTransactionImpl.java:255)
         at weblogic.transaction.internal.ServerTransactionImpl.commit(ServerTransactionImpl.java:230)
         at weblogic.xml.query.transaction.TransactionHelper.commit(TransactionHelper.java:94)
         ... 30 more
    Caused by: weblogic.transaction.internal.TimedOutException: Transaction timed out after 33 seconds
    BEA1-460E808DE23C779E385F
         at weblogic.transaction.internal.ServerTransactionImpl.wakeUp(ServerTransactionImpl.java:1734)
         at weblogic.transaction.internal.ServerTransactionManagerImpl.processTimedOutTransactions(ServerTransactionManagerImpl.java:1607)
         at weblogic.transaction.internal.TransactionManagerImpl.wakeUp(TransactionManagerImpl.java:1879)
         at weblogic.transaction.internal.ServerTransactionManagerImpl.wakeUp(ServerTransactionManagerImpl.java:1517)
         at weblogic.transaction.internal.WLSTimer.timerExpired(WLSTimer.java:35)
         at weblogic.timers.internal.TimerImpl.run(TimerImpl.java:273)
         at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:516)
         ... 2 more

    1. Is there any way to override this transaction timeout at the service or data space level?
    You can set trans-timeout-seconds in the weblogic-ejb-jar.xml for your dataspace. You'll find it in <dataspaceName>dspejb.jar in <domain>/dsp/dataSpaces/<dataspaceName>request_handlers.ear. I've included a complete weblogic-ejb-jar.xml (at the end of the post) so you can see where to add it. Just copy-paste the <transaction-descriptor>...</>
    The request handlers ear is created when the dataspace is deployed the first time and during a full deploy (i.e. you don't have to go back and edit it after incremental deploys).
    2a. Is there any more efficient way to do this update?
    If you described it in painful detail, we might be able to think of something. You might find mutators helpful - http://download.oracle.com/docs/cd/E13167_01/aldsp/docs32/dsp32wiki/Mutators%20for%20Updates.html
    3. If I have to break up my save operation into chunks, is there any way to effectively achieve transactional update via multiple web service calls?
    You could write/host your own webservice on the ODSI server which would call ODSI via the java api. Your ws would expose operations to the following calls from your client
    contextId = myWsStartUserTransaction()
    while( moreChunks)
    myWsUpdate(contextId, chunk)
    myWSEndUserTransaction(contextId)
    Just a heads-up, WLS has an anti-denial-of-service attack setting that rejects input messages over a certain size. I believe the default is 10MB. You can set it in the WLS console.
    - Mike
    <?xml version="1.0"?>
    <!DOCTYPE weblogic-ejb-jar PUBLIC
    '-//BEA Systems, Inc.//DTD WebLogic 8.1.0 EJB//EN'
    'http://www.bea.com/servers/wls810/dtd/weblogic-ejb-jar.dtd'>
    <weblogic-ejb-jar>
    <weblogic-enterprise-bean>
    <ejb-name>Server</ejb-name>
    <stateless-session-descriptor>
    <stateless-clustering>
    <stateless-bean-is-clusterable>true</stateless-bean-is-clusterable>
    <stateless-bean-load-algorithm>round-robin</stateless-bean-load-algorithm>
    <!-- random | round-robin | weight-based -->
    <!--
    <stateless-bean-call-router-class-name>beanRouter</stateless-bean-call-router-
    class-name> -->
    <stateless-bean-methods-are-idempotent>true</stateless-bean-methods-are-idempotent>
    </stateless-clustering>
    </stateless-session-descriptor>
    <enable-call-by-reference>true</enable-call-by-reference>
    <jndi-name>com.bea.ld.apps.dataservicesapp.server</jndi-name>
    <local-jndi-name>com.bea.ld.apps.dataservicesapp.localserver</local-jndi-name>
    <transaction-descriptor>
    <trans-timeout-seconds>300</trans-timeout-seconds>
    </transaction-descriptor>
    </weblogic-enterprise-bean>
    <weblogic-enterprise-bean>
    <ejb-name>ServerTxRequired</ejb-name>
    <stateless-session-descriptor>
    <stateless-clustering>
    <stateless-bean-is-clusterable>true</stateless-bean-is-clusterable>
    <stateless-bean-load-algorithm>round-robin</stateless-bean-load-algorithm>
    <!-- random | round-robin | weight-based -->
    <!--
    <stateless-bean-call-router-class-name>beanRouter</stateless-bean-call-router-
    class-name> -->
    <stateless-bean-methods-are-idempotent>true</stateless-bean-methods-are-idempotent>
    </stateless-clustering>
    </stateless-session-descriptor>
    <enable-call-by-reference>true</enable-call-by-reference>
    <jndi-name>com.bea.ld.apps.dataservicesapp.serverTxRequired</jndi-name>
    <local-jndi-name>com.bea.ld.apps.dataservicesapp.localserverTxRequired</local-jndi-name>
    <transaction-descriptor>
    <trans-timeout-seconds>300</trans-timeout-seconds>
    </transaction-descriptor>
    </weblogic-enterprise-bean>
    <weblogic-enterprise-bean>
    <ejb-name>Metadata</ejb-name>
    <stateless-session-descriptor>
    <stateless-clustering>
    <stateless-bean-is-clusterable>true</stateless-bean-is-clusterable>
    <stateless-bean-load-algorithm>round-robin</stateless-bean-load-algorithm>
    <!-- random | round-robin | weight-based -->
    <!--
    <stateless-bean-call-router-class-name>beanRouter</stateless-bean-call-router-
    class-name> -->
    <stateless-bean-methods-are-idempotent>true</stateless-bean-methods-are-idempotent>
    </stateless-clustering>
    </stateless-session-descriptor>
    <enable-call-by-reference>true</enable-call-by-reference>
    <jndi-name>com.bea.ld.apps.dataservicesapp.metadata</jndi-name>
    <transaction-descriptor>
    <trans-timeout-seconds>300</trans-timeout-seconds>
    </transaction-descriptor>
    </weblogic-enterprise-bean>
    </weblogic-ejb-jar>

  • Pre-insert-trigger ignoring assignments

    Hi,
    I have this code in a PRE-INSERT-TRIGGER of a database table block:
    Select emp_seq.Nextval Into :EMP.EMPNO From dual;
    Select Sysdate Into :EMP.LASTCHANGED From dual;
    And i have this code in a PRE-UPDATE-TRIGGER of the same block:
    Select Sysdate Into :EMP.LASTCHANGED From dual;
    given scenario:
    1) query records from the table into a block:
    empno ename lastchanged
    1 Smith 01.12.2008
    2 Johnson 01.12.2008
    2) change empname in any record except no. 1:
    empno ename lastchanged
    1 Smith 01.12.2008
    2 Johannson 01.12.2008
    3) create a new record somewhere above the changed record
    empno ename lastchanged
    1 Smith 01.12.2008
    <null> <null> <null>
    2 Johannson 01.12.2008
    4) insert ename in new record
    empno empname lastchanged
    1 Smith 01.12.2008
    <null> Obama <null>
    2 Johannson 01.12.2008
    5) do_key('commit_form')
    with Forms 6.0.8.23.2 -> working fine
    with Forms 6.0.8.27.0 -> ORA-01400: cannot insert NULL into ("EMP"."EMPNO")
    Any assignment in the pre-insert-trigger is ignored! Can anyone help me with this bug? Thanks in advance

    message(:EMP.EMPNO) is always Null ...
    Everything works if there's no update of a record with a higher record number in the transaction. But the scenario in post 1 doesn't work with Forms 6.0.8.27.0.
    Ugh -- that's ugly!
    Unfortunately, opening a Service Request with Oracle will get you nowhere, since Oracle no longer supports Forms 6i.
    Yesterday, I experienced something very similar with the same version of Forms, specifically this part: "Everything works if there's no update of a record with higher record-number in the transaction."
    If I updated a higher record number in the block, I could NOT get Forms to subsequently store a value in a column in a prior record. I would set the value in the column, and immediately display the value, and it was null! Fortunately in my case, the stored value was only to enable skipping a database lookup in a subsequent pass, so I just skipped working on a solution.
    However, in your case, the problem is a show-stopper.
    What I found was that if I navigated back to the first record in the block, the problem went away. So maybe try this in your commit process:
        Go_block('ABC');
        First_Record;
        Synchronize;
        Commit_Form;
    Let us know if that works for you.
