Trigger to Update in the same table.

Hi,
I would like to create a trigger so that when we insert into table ALL_CAPACITY_SNS_CELL1 it updates the ACTION column on the same table, and if there is an update, it likewise updates the ACTION column on the same table.
This is what I have attempted, but it seems to be technically wrong.
<pre>
create table ALL_CAPACITY_SNS_CELL1
( A             varchar2,
  action        char(2),
  date_executed date);

create or replace
TRIGGER ALL_CAPACITY_HISTORY_TRACKING
AFTER INSERT OR DELETE OR UPDATE ON ALL_CAPACITY_SNS_CELL1 FOR EACH ROW
DECLARE
  v_operation VARCHAR2(10) := NULL;
BEGIN
  IF INSERTING THEN
    v_operation := 'I';
  ELSIF UPDATING THEN
    v_operation := 'U';
  ELSE
    v_operation := 'D';
  END IF;
  IF INSERTING
    UPDATE ALL_CAPACITY_SNS_CELL1
       SET ACTION = v_operation,
           DATE_EXECUTED = sysdate;
  ELSIF UPDATING THEN
    UPDATE ALL_CAPACITY_SNS_CELL1
       SET ACTION = v_operation,
           DATE_EXECUTED = sysdate;
  END IF;
END;
</pre>

CrackerJack wrote:
But the above query made all ACTION values null... so we should use an AFTER INSERT OR UPDATE trigger, right?
Obviously it is not working. You modified my code in a way that does not make sense.
IF INSERTING
THEN
:NEW.ACTION := 'I';
IF UPDATING
THEN
:NEW.ACTION := 'U';
ELSE
:NEW.ACTION := 'X';
END IF;
The code above checks if the trigger fired on INSERT (IF INSERTING). In the THEN branch of that IF you check if the trigger fired on UPDATE. It makes no sense. Now, if you use my trigger:
SQL> create table ALL_CAPACITY_SNS_CELL1
  2  ( A varchar2(1),
  3  action char(2),
  4  date_executed date);
Table created.
SQL> create or replace
  2    TRIGGER ALL_CAPACITY_HISTORY_TRACKING
  3      BEFORE INSERT
  4         OR UPDATE
  5      ON ALL_CAPACITY_SNS_CELL1
  6      FOR EACH ROW
  7      BEGIN
  8          IF INSERTING
  9            THEN
10              :NEW.ACTION := 'I';
11            ELSE
12              :NEW.ACTION := 'U';
13          END IF;
14          :NEW.DATE_EXECUTED := sysdate ;
15  END;
16  /
Trigger created.
SQL> insert into ALL_CAPACITY_SNS_CELL1(a) values('A')
  2  /
1 row created.
SQL> insert into ALL_CAPACITY_SNS_CELL1(a) values('B')
  2  /
1 row created.
SQL> alter session set nls_date_format='mm/dd/yyyy hh24:mi:ss'
  2  /
Session altered.
SQL> select * from ALL_CAPACITY_SNS_CELL1
  2  /
A AC DATE_EXECUTED
A I  05/13/2009 08:56:02
B I  05/13/2009 08:56:09
SQL> update ALL_CAPACITY_SNS_CELL1
  2  set a = 'X'
  3  where a = 'B'
  4  /
1 row updated.
SQL> select * from ALL_CAPACITY_SNS_CELL1
  2  /
A AC DATE_EXECUTED
A I  05/13/2009 08:56:02
X U  05/13/2009 08:57:05
SQL> update ALL_CAPACITY_SNS_CELL1
  2  set a = 'Y'
  3  /
2 rows updated.
SQL> select * from ALL_CAPACITY_SNS_CELL1
  2  /
A AC DATE_EXECUTED
Y U  05/13/2009 08:57:21
Y U  05/13/2009 08:57:21
SQL> insert into ALL_CAPACITY_SNS_CELL1(a) values('C')
  2  /
1 row created.
SQL> select * from ALL_CAPACITY_SNS_CELL1
  2  /
A AC DATE_EXECUTED
Y U  05/13/2009 08:57:21
Y U  05/13/2009 08:57:21
C I  05/13/2009 08:57:53
SY.

Similar Messages

  • UPDATE involving the same table in sub query

    DB version: 11.2
    We have a partitioned table called SHP_GC_TRACK with around 8 million records. In the UPDATE below, a column is updated based on a SELECT from the same table in a subquery.
    UPDATE shp_gc_track a
       SET f_tran_proc  = 'Y'
    WHERE last_update_date <
              (SELECT MAX (last_update_date)
                 FROM shp_gc_track b
                WHERE a.shp_trx_rowid = b.shp_trx_rowid
                  AND a.c_shp_inst = b.c_shp_inst
                  AND a.f_tran_proc  = b.f_tran_proc
                  AND b.f_ltr_received = 'D'
                  AND f_rec_code IN ('G', 'W')
                  AND b.f_rec_status = 'B'
                 AND b.c_shp_inst = :b1)
       AND a.c_shp_inst = :b1
       AND a.f_ltr_received = 'D'             -----------------> part of composite index
       AND a.f_tran_proc  = 'N'              -----------------> part of composite index
       AND a.f_rec_code IN ('G', 'W')      --------------> part of composite index 
       AND a.f_rec_status = 'B';              -----------------> part of composite index
    This UPDATE takes a long time to execute and sometimes hangs.
    We have a composite index on the four columns f_ltr_received, f_rec_code, f_rec_status, f_tran_proc. Explain plan shows this composite index is being used.
    Is there any way to rewrite this query, or any other suggestions?

    Steve_74 wrote:
    DB version: 11.2
    We have a table called SHP_GC_TRACK which has around 8 million records with partitions . In the below UPDATE, it is updating a column based on the SELECT on the same table in a subquery.
    UPDATE shp_gc_track a
    SET f_tran_proc  = 'Y'
    WHERE last_update_date <
    (SELECT MAX (last_update_date)
    FROM shp_gc_track b
    WHERE a.shp_trx_rowid = b.shp_trx_rowid
    AND a.c_shp_inst = b.c_shp_inst
    AND a.f_tran_proc  = b.f_tran_proc
    AND b.f_ltr_received = 'D'
    AND f_rec_code IN ('G', 'W')
    AND b.f_rec_status = 'B'
    AND b.c_shp_inst = :b1)
    AND a.c_shp_inst = :b1
    AND a.f_ltr_received = 'D'             -----------------> part of composite index
    AND a.f_tran_proc  = 'N'              -----------------> part of composite index
    AND a.f_rec_code IN ('G', 'W')      --------------> part of composite index 
    AND a.f_rec_status = 'B';              -----------------> part of composite index
    This UPDATE takes a long time to execute and sometimes hangs.
    We have a composite index on the four columns f_ltr_received, f_rec_code, f_rec_status, f_tran_proc. Explain plan shows this composite index is being used.
    Is there any way to rewrite this query, or any other suggestions?
    Tuning updates with subqueries can be hard :(. Sadly my suggestions below are of the try-it-and-see-what-happens variety - nothing certain.
    First, check the index. Is it bitmap or b-tree? If b-tree, see whether the most restrictive columns are listed first - this can help with b-tree index efficiency. Also, if it is b-tree, a composite bitmap index on the columns with lots of repeating values might help instead.
    It's a correlated subquery, so you can't just run the subquery first, put the result into a scalar variable, and use the variable in the SQL instead. You can try putting the results of the subquery with its join keys into a GTT first, then using the GTT in the SQL, to see whether overall I/O is reduced across both operations.
    Do you have the licence for the parallel query option? Using parallel DML (this must be turned on manually - a sketch follows at the end of this reply) might help. Check the docs for the ALTER SESSION command to do this. Also, the PARALLEL_INDEX() hint might help.
    Post the execution plan of the SQL.
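    One way the parallel DML suggestion could look (a sketch only - it assumes the parallel execution option is licensed, and the degree of 8 is an arbitrary choice to adjust):
    ALTER SESSION ENABLE PARALLEL DML;

    UPDATE /*+ PARALLEL(a, 8) */ shp_gc_track a
       SET f_tran_proc = 'Y'
     WHERE last_update_date < (SELECT MAX (b.last_update_date)
                                 FROM shp_gc_track b
                                WHERE b.shp_trx_rowid  = a.shp_trx_rowid
                                  AND b.c_shp_inst     = a.c_shp_inst
                                  AND b.f_tran_proc    = a.f_tran_proc
                                  AND b.f_ltr_received = 'D'
                                  AND b.f_rec_code IN ('G', 'W')
                                  AND b.f_rec_status   = 'B'
                                  AND b.c_shp_inst     = :b1)
       AND a.c_shp_inst     = :b1
       AND a.f_ltr_received = 'D'
       AND a.f_tran_proc    = 'N'
       AND a.f_rec_code IN ('G', 'W')
       AND a.f_rec_status   = 'B';

    COMMIT;  -- after parallel DML the table cannot be queried in the same session until commit/rollback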

  • Trigger in mutation - Update other rows in the same table with a trigger

    Hi ,
    I am trying to create a before-update trigger on a table, but the trigger is mutating. I understand why that happens, but my question is:
    How can I update other rows in the same table when an UPDATE is made on my table?
    Here is my trigger :
    CREATE OR REPLACE TRIGGER GDE_COMPS_BRU_5 BEFORE
    UPDATE OF DEPARTEMENT--, DISCIPLINE, DEG_DEMANDE, CE_ETAB
    ON GDEM.COMPOSITION_SUBV
    FOR EACH ROW
    /*
      Organization: FQRNT-FQRSC
      Creation date: 14-07-2011
      Modification date:
      Modified by:
      Author: Johanne Plamondon
      Description: This trigger executes when the responsable (person in charge)
                   is modified in the COMPOSITION_SUBV table
    */
    DECLARE
      V_OSUSER     V$SESSION.OSUSER%TYPE;
      V_PROGRAM    V$SESSION.PROGRAM%TYPE;
      V_TERMINAL   V$SESSION.TERMINAL%TYPE;
      V_MACHINE    V$SESSION.MACHINE%TYPE;
      V_MODULE     V$SESSION.MODULE%TYPE;
      V_LOGON_TIME V$SESSION.LOGON_TIME%TYPE;
      V_AUDIT_ID   NUMBER;
      vSEQ         NUMBER;
      i            NUMBER;
      vID          DEMANDE.ID%TYPE;
    BEGIN
      begin
        SELECT OSUSER, PROGRAM, TERMINAL, MACHINE, MODULE, LOGON_TIME
          INTO V_OSUSER, V_PROGRAM, V_TERMINAL, V_MACHINE,
               V_MODULE, V_LOGON_TIME
          FROM V$SESSION
         WHERE TYPE = 'USER'
           AND USERNAME = USER
           AND LAST_CALL_ET IN (0,1)
           AND ROWNUM < 2;
      exception when others then null;
      end;

      IF NVL(:NEW.SC_PART,' ') = 'CHC' THEN
        SELECT COUNT(*)
          INTO i
          FROM DEMANDE
         WHERE DEM_REF = :NEW.DEM_ID
           AND PER_NIP = :NEW.PER_NIP;

        IF i = 1 THEN
          SELECT ID
            INTO vID
            FROM DEMANDE
           WHERE DEM_REF = :NEW.DEM_ID
             AND PER_NIP = :NEW.PER_NIP;

          UPDATE COMPOSITION_SUBV
             SET --CE_ETAB     = :NEW.CE_ETAB,
                 --DISCIPLINE  = :NEW.DISCIPLINE,
                 DEPARTEMENT = :NEW.DEPARTEMENT,
                 --DEG_DEMANDE = :NEW.DEG_DEMANDE,
                 DATE_MODIF  = SYSDATE,
                 USER_MODIF  = V_OSUSER
           WHERE DEM_ID = vID
             AND PER_NIP = :NEW.PER_NIP
             AND ANNEE = :NEW.ANNEE;
        END IF;
      END IF;
    /*EXCEPTION
      WHEN OTHERS THEN
      NULL;*/
    END;

    A standard disclaimer, the mutating trigger error is telling you that you really, really, really don't want to be doing this. It generally indicates a major data model problem when you find yourself in a situation where the data in one row of a table depends on the data in another row of that same table. In the vast majority of cases, you're far better off fixing the data model than in working around the problem.
    If you are absolutely sure that you cannot fix the data model and must work around the problem, you'll need
    - A package with a collection (or global temporary table) to store the keys that are modified
    - A before statement trigger that initializes the collection
    - A row-level trigger that adds the keys that were updated to the collection
    - An after statement trigger that iterates over the data in the collection and updates whatever rows need to be updated.
    If you're on 11g, this can be simplified somewhat by using a compound trigger with separate before statement, row-level, and after statement sections.
    Obviously, though, this is a substantial increase in complexity over the single trigger you have here. That's one of the reasons that it's generally a bad idea to work around mutating table exceptions.
    Justin
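    If you are on 11g and go the compound-trigger route described above, a minimal sketch could look like the following. The table and column names are borrowed from the post, but the business condition is simplified, so treat it as a pattern illustration rather than a drop-in fix:
    CREATE OR REPLACE TRIGGER GDE_COMPS_BRU_5_CPND
      FOR UPDATE OF DEPARTEMENT ON GDEM.COMPOSITION_SUBV
      COMPOUND TRIGGER

      -- compound-trigger state is re-initialized for every triggering statement,
      -- so no explicit BEFORE STATEMENT reset is needed
      TYPE t_chg IS RECORD (dem_id      COMPOSITION_SUBV.DEM_ID%TYPE,
                            departement COMPOSITION_SUBV.DEPARTEMENT%TYPE);
      TYPE t_chg_tab IS TABLE OF t_chg;
      g_changes t_chg_tab := t_chg_tab();

      AFTER EACH ROW IS
      BEGIN
        -- only remember what changed; no DML here, so no mutating-table error
        g_changes.EXTEND;
        g_changes(g_changes.LAST).dem_id      := :NEW.DEM_ID;
        g_changes(g_changes.LAST).departement := :NEW.DEPARTEMENT;
      END AFTER EACH ROW;

      AFTER STATEMENT IS
      BEGIN
        -- the table is no longer mutating here, so sibling rows may be updated
        FOR i IN 1 .. g_changes.COUNT LOOP
          UPDATE COMPOSITION_SUBV
             SET DEPARTEMENT = g_changes(i).departement,
                 DATE_MODIF  = SYSDATE
           WHERE DEM_ID = g_changes(i).dem_id
             AND (DEPARTEMENT IS NULL
                  OR DEPARTEMENT <> g_changes(i).departement);  -- avoids endless re-firing
        END LOOP;
      END AFTER STATEMENT;
    END GDE_COMPS_BRU_5_CPND;
    /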

  • How to update a table (CUSTOMER) on a Report Server with the data from the same table (CUSTOMER) from another server Transaction server?

    I had an interview question that is:
    How to update a table (Customer) on a server, e.g. Report Server, with the data from the same table (Customer) from another server, e.g. Transaction server?
    Set up the steps so that insert, update, or delete operations take place across the servers.
    It would be great if someone please enlighten me in details about this process in MS SQL Server 2008 R2.
    Also please describe would it be different for SQL Server 2012?
    If so, then what are the steps?

    I had an interview question that is:
    How to update a table (Customer) on a server ex: Report Server with the data from the same table (Customer) from another server ex: Transaction server?
    Set up the steps so that insert, update, or delete operations get done correctly across the servers.
    I was not sure about the answer, it would be great if someone please put some light on this and explain in details about this process in MS SQL Server 2008 R2.
    Also it would be very helpful if you please describe would it be different for SQL Server 2012? If so, then what are the steps?

  • MULTIPLE UPDATES/INSERTIONS TO THE SAME TABLE

    How can I update/insert multiple rows into the same table from one form?

    Hi,
    Using the portal form on a table you can insert only a single row. You can use a master-detail form to insert multiple rows.
    Thanks,
    Sharmila

  • Updating values with in the same table for other records matching conditions

    Hi Experts,
    Sorry for not providing the table structure (it is a simple structure).
    I have a requirement where I need to update the columns of a table based on values in the same table when the emp_id and date match. If the date and emp_id match, then I have to take these values from the other record and update the one that is missing the office details. I need the update query.
    Before the update my table values are as below:
    Sort_num     Emp_id   office      start_date
    1                      101     AUS    01/01/2013
    2                      101                01/01/2013
    3                      101               15/01/2013
    4                      103     USA    05/01/2013
    5                      103               01/01/2013
    6                      103                05/01/2013
    7                      104     FRA    10/01/2013
    8                      104               10/01/2013
    9                      104                01/01/2013
    After update my table should be like below
    Sort_num     Emp_id   office      start_date
    1                      101     AUS    01/01/2013
    2                      101     AUS    01/01/2013
    3                      101               15/01/2013
    4                      103     USA    05/01/2013
    5                      103               01/01/2013
    6                      103    USA     05/01/2013
    7                      104     FRA    10/01/2013
    8                      104     FRA     10/01/2013
    9                      104                01/01/2013
    Thanks in advance

    I do not have time to create the table with data but basically you should be able to code the following
    update table a
       set office = ( select office
                        from table b
                       where b.emp_id     = a.emp_id
                         and b.start_date = a.start_date
                         and b.office is not null )
     where exists ( [same query as in set] )
       and a.office is null;
    HTH -- Mark D Powell --
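    For a one-pass alternative, here is a sketch using an analytic MAX and a MERGE keyed on ROWID. The table name emp_office is made up, since the post does not name the table; MAX ignores NULLs, so it picks up the non-blank office for each emp_id/start_date pair:
    MERGE INTO emp_office t
    USING (SELECT ROWID AS rid,
                  MAX(office) OVER (PARTITION BY emp_id, start_date) AS office_fill
             FROM emp_office) s
    ON (t.ROWID = s.rid)
    WHEN MATCHED THEN
      UPDATE SET t.office = s.office_fill
      WHERE t.office IS NULL
        AND s.office_fill IS NOT NULL;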

  • Insert/Update and Add Column on the same Table and at the "same" Time

    Hello,
    I want to insert/update and add a column on the same table at the "same" time, but in different sessions.
    Example:
    At first the "insert/update" statement:
    Insert into TestTable (Testid,Value) values (1,5105);
    After that the "add" statement:
    Alter table TestTable add TestColumn number;
    - sadly now I get the message: ORA-00054: resource busy and acquire with NOWAIT specified
    "insert/update" statement:
    Insert into TestTable (Testid,Value) values (2,1135);
    After that the execute commit.
    I don't know when the first session will issue its commit, so I want the DB to execute the "Alter Table..." statement as soon as it is possible.
    If possible, I want to avoid a retry loop around the "Alter Table..." statement.
    Thanks for ideas

    Well, I want to walk in the rain without an umbrella and still stay dry, but it ain't gonna happen.
    You can't run a DDL statement against a table with transactions pending. Session 2 has to wait until session 1 commits or rolls back (or until the session is killed). That's just the way it is.
    This makes sense if you think about it. The data dictionary has to be consistent across all sessions. If session 2 were allowed to change the table structure whilst session 1 has a pending transaction then the database would be in an inconsistent state. This is easier to see if you consider the reverse situation - the ALTER TABLE statement run by session 2 does a DROP COLUMN TESTID rather than adding a column: now what should happen to session 1's INSERT statement? You have retrospectively invalidated a statement that was perfectly legal when it was executed.
    If possible, I want to avoid a retry loop around the "Alter Table..." statement.
    Fnord.
    Cheers, APC
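    If the database is 11g or later (the post does not say which version), the session that issues the DDL can also be told to wait for the blocking transaction instead of failing immediately, which removes the need for a manual retry loop; a small sketch, with an arbitrary 60-second limit:
    ALTER SESSION SET ddl_lock_timeout = 60;  -- wait up to 60 seconds for the lock instead of raising ORA-00054

    ALTER TABLE TestTable ADD TestColumn NUMBER;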

  • Two triggers in the same table and event

    If I created two triggers on the same table and event (before insert), which of them will be triggered first ?...
    The problem from the beginning is that I created the second one as an after-insert trigger and in its body I wrote (:new.xxx := value)... then I got an error (that it should be a before insert or update trigger), so I created it as before update... but I already have a before-update trigger (for the primary key)... and the problem - I think - is which of them fires first...
    Can I make the second one an after-insert trigger and use an update statement instead of assigning (:new.xxx := value)?
    Regards

    If I created two triggers on the same table and event (before insert), which of them will be triggered first?...
    As already mentioned, prior to 11g the order of firing is undetermined.
    The problem from the beginning is that I created the second one as an after-insert trigger and in its body I wrote (:new.xxx := value)... then I got an error (that it should be a before insert or update trigger), so I created it as before update... but I already have a before-update trigger (for the primary key)... and the problem - I think - is which of them fires first...
    If there is a conflict of interest inside the code then you will have to alter your design principle around this to cater for it. Also consider combining the code into a single before-insert trigger to prevent any confusion.
    Can I make the second one an after-insert trigger and use an update statement instead of assigning (:new.xxx := value)?
    Attempting to update or query the same table that is causing the trigger to fire will result in a mutating table error. You can't do this.
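    On 11g and later the firing order can also be made explicit with the FOLLOWS clause; a small sketch (trigger and table names are made up for illustration):
    CREATE OR REPLACE TRIGGER trg_set_xxx
      BEFORE INSERT ON my_table
      FOR EACH ROW
      FOLLOWS trg_set_pk        -- guaranteed to fire after the primary-key trigger
    BEGIN
      :NEW.xxx := 'value';      -- safe to rely on anything trg_set_pk has already set
    END;
    /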

  • Several FKs to the same table

    Hello,
    I have a master table with several Foreign Keys – all of them to the same table. In order to display a column from the lookup table I'm using a select like:
    select  m.code,
            m.lcode1, f1.label,
            m.lcode2, f2.label,
            m.lcode6, f6.label
    from    MTABLE m,
            FTABLE  f1,
            FTABLE  f2,
            FTABLE  f6
    where   m.lcode1 = f1.code and
            m.lcode2 = f2.code and
            m.lcode6 = f6.code
    Is this the correct and optimal way of doing it?
    Thanks for the help,
    Arie.

    Arie,
    It sounds like perhaps the best way might be to create your forms/reports based upon a view instead of the underlying tables (especially if you are running 10gR1 or 10gR2).
    I say this because:
    1. With 10g (maybe even back in 9i?), you can create INSTEAD_OF triggers, so instead of manipulating (insert, update or delete) the view, the trigger has the code to perform the action to the underlying base tables instead.
    2. This makes development easier, since all your joins are pre-defined in one place instead of re-creating them in numerous places.
    3. Grants can be issued to the views, instead of the various tables, with just select privs to the lookup tables.
    4. Change your join conditions to use the JOIN syntax, LEFT OUTER JOIN, etc., whatever is appropriate for your query. It's much easier to (eventually) figure out and maintain when you have multiple join conditions (especially against multiple tables), when any or all of them could be null.
    For details on the JOIN syntax or the INSTEAD_OF triggers, try the on-line Oracle documentation (I'm at home today and don't have all my reference material handy).
    I'm using the above with great success (so far). I'm swamped with different work projects and different bosses and different priorities, so anything that makes my life easier is worth a couple hours of research, even if I have to do it from home in my off-time (if I can fit it into my schedule). I've been working on an HTMLDB (AppEx) application for over a year now, but I've probably only spent about 40-60 hours actually working on it, so I'm always forgetting most of what I've previously learned and done.
    Bill Ferguson
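    The JOIN-syntax rewrite mentioned in point 4 could look like this, using the poster's table and column names (LEFT OUTER JOIN keeps the master rows whose lookup codes are null):
    select m.code,
           m.lcode1, f1.label,
           m.lcode2, f2.label,
           m.lcode6, f6.label
    from   MTABLE m
           left outer join FTABLE f1 on f1.code = m.lcode1
           left outer join FTABLE f2 on f2.code = m.lcode2
           left outer join FTABLE f6 on f6.code = m.lcode6;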

  • After triggers on the same table

    Hello,
    I'm looking for some samples of AFTER triggers which run on the same table.
    A stupid example could be to calculate the number of rows of my table after a delete event and store it in each of my rows.
    I would like an example of how to do that with a row-level trigger and also with a statement-level trigger.
    Thanks in advance

    You can only do what you have asked for in a statement trigger.
    create or replace trigger t_as
      after delete on t
    declare
      v_count pls_integer;
    begin
      select count(*)
        into v_count
        from t;

      update t
         set row_count = v_count; -- update each row
    end;
    /

  • SQL error msg - The DELETE statement conflicted with the SAME TABLE REFERENCE constraint

    Executed as user: ****. The DELETE statement
    conflicted with the SAME TABLE REFERENCE constraint "FK_PARENT_TASK_REF".
    The conflict occurred in database "****", table "****", column
    'PARENT_TASK_ID'. [SQLSTATE 23000] (Error 547) The statement has been
    terminated. [SQLSTATE 01000] (Error 3621). The step failed.
    Does this error msg indicate that the whole script failed to execute, or was it just a single step/task that failed?
    What does the error msg mean?
    Is there any way to prevent this error msg and ensure the script runs successfully?

    Hi mdavidh,
    This error occurs because rows being deleted are still referenced through the 'PARENT_TASK_ID' column by the self-referencing constraint 'FK_PARENT_TASK_REF'.
    Please refer to the code below:
    CREATE TABLE MyTable (
        ID        INT PRIMARY KEY,                        -- primary key
        ID_Parent INT FOREIGN KEY REFERENCES MyTable(ID)  -- foreign key referencing the same table
    );

    INSERT INTO MyTable (ID, ID_Parent) VALUES (0, 0);
    INSERT INTO MyTable (ID, ID_Parent) VALUES (1, 0);
    INSERT INTO MyTable (ID, ID_Parent) VALUES (2, 0);
    INSERT INTO MyTable (ID, ID_Parent) VALUES (3, 1);
    INSERT INTO MyTable (ID, ID_Parent) VALUES (4, 3);
    INSERT INTO MyTable (ID, ID_Parent) VALUES (5, 4);
    GO

    CREATE TRIGGER MyTrigger
    ON MyTable
    INSTEAD OF DELETE
    AS
        SET NOCOUNT ON;
        -- first detach any children of the rows being deleted, then delete the rows themselves
        UPDATE MyTable SET ID_Parent = NULL WHERE ID_Parent IN (SELECT ID FROM deleted);
        DELETE FROM MyTable WHERE ID IN (SELECT ID FROM deleted);
    GO
    Now we can delete records:
    DELETE FROM MyTable WHERE ID_Parent = 0;
    Thanks,
    Candy Zhou

  • Data of column datatype CLOB is moved to other columns of the same table

    Hi all,
    I have an issue with the tables having a CLOB datatype field.
    When executing a simple query on a table with a column of type CLOB it returns error [POL-2403] value too large for column.
    SQL> desc od_stock_nbcst_notes;
    Name Null? Type
    OD_STOCKID N NUMBER
    NBC_SERVICETYPE N VARCHAR(40)
    LANGUAGECODE N VARCHAR(8)
    AU_USERIDINS Y NUMBER
    INSERTDATE Y DATE
    AU_USERIDUPD Y NUMBER
    MODIFYDATE Y DATE
    VERSION Y SMALLINT(4)
    DBUSERINS Y VARCHAR(120)
    DBUSERUPD Y VARCHAR(120)
    TEXT Y CLOB(2000000000)
    NBC_PROVIDERCODE N VARCHAR(40)
    SQL> select * from od_stock_nbcst_notes;
    [POL-2403] value too large for column
    Checking more deeply, some of the rows have the data of the CLOB column moved into another column of the table.
    When doing select length(nbc_providercode) the length is bigger than the datatype of the field (varchar(40)).
    When doing substr(nbc_providercode,1,40) to see the content of the field, a portion of the Clob data is retrieved.
    SQL> select max(length(nbc_providercode)) from od_stock_nbcst_notes;
    MAX(LENGTH(NBC_PROVIDERCODE))
    162
    Choosing one random record, this is the stored information.
    SQL> select length(nbc_providerCode), text from od_stock_nbcst_notes where length(nbc_providerCode)=52;
    LENGTH(NBC_PROVIDERCODE) | TEXT
    -------------------------+-----------
    52 | poucos me
    SQL> select nbc_providerCode from od_stock_nbcst_notes where length(nbc_providerCode)=52;
    [POL-2403] value too large for column
    SQL> select substr(nbc_providercode,1,40) from od_stock_nbcst_notes where length(nbc_providercode)=52 ;
    SUBSTR(NBC_PROVIDERCODE
    Aproveite e deixe o seu carro no parque
    The content of the field is part of the content of the field TEXT (datatype CLOB, contains an XML)!!!
    The right content of the field must be 'MTS' (retrieved from Central DB).
    The CLOB is being inserted into the Central DB, not into the Client ODB. Data is synchronized from CDB to ODB and the data is reaching the client in a wrong way.
    The issue can be recreated all the time in the same DB, but between different users the "corrupted" records are different.
    Any idea?

    939569 wrote:
    Hello,
    I am using Oracle 11.2, I would like to use SQL to update one column based on values of other rows at the same table. Here are the details:
    create table TB_test (myId number(4), crtTs date, updTs date);
    insert into tb_test values (1, to_date('20110101', 'yyyymmdd'), null);
    insert into tb_test values (1, to_date('20110201', 'yyyymmdd'), null);
    insert into tb_test values (1, to_date('20110301', 'yyyymmdd'), null);
    insert into tb_test values (2, to_date('20110901', 'yyyymmdd'), null);
    insert into tb_test values (2, to_date('20110902', 'yyyymmdd'), null);
    After running the SQL, I would like have the following result:
    1, 20110101, 20110201
    1, 20110201, 20110301
    1, 20110301, null
    2, 20110901, 20110902
    2, 20110902, null
    Thanks for your suggestion.
    How do I ask a question on the forums?
    SQL and PL/SQL FAQ
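    For the quoted question itself, one possible approach (a sketch, assuming the tb_test definition above): use the LEAD analytic function to pick up the next crtTs for the same myId and merge it back by ROWID.
    MERGE INTO tb_test t
    USING (SELECT ROWID AS rid,
                  LEAD(crtTs) OVER (PARTITION BY myId ORDER BY crtTs) AS next_crt
             FROM tb_test) s
    ON (t.ROWID = s.rid)
    WHEN MATCHED THEN
      UPDATE SET t.updTs = s.next_crt;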

  • Can we use both INSERT and UPDATE at the same time in JDBC Receiver

    Hi All,
    I would like to know whether it is possible to use both INSERT and UPDATE at the same time in one interface, because I have a requirement in which I have to perform both tasks.
    The user sends a file which contains both new and old records, and I need to save those in an MS SQL database.
    If the record exists then use UPDATE, otherwise use INSERT.
    I looked on SDN but didn't find any blog which performs both things at the same time.
    Interface Requirement:
    FILE --> PI --> JDBC (INSERT & UPDATE)
    I am thinking of using a JDBC Lookup but am not sure whether it is good to use for bulk records.
    Can somebody please suggest me something or send me the link of any blog or anything to solve this problem.
    Thanks,

    Hi,
    If I have understood the scenario properly, you are not performing insert and update together. As you posted:
    "If the record exists then use UPDATE, otherwise use INSERT."
    Thus you perform either an insert or an update, depending on the outcome of a search for whether the records already exist in the database or not. Obviously, to search the tables you need a "select * from ...  where ...." query. If your query returns some results you proceed with an update, since this means some old records are already in the database. If your query returns no rows you proceed with an "insert into tablename....." since there are no old records present in the database.
    Now perhaps the best way to do the searching, take the decision to insert or update, and finally perform the insert or update operation is a stored procedure in the MS SQL database (see the sketch after this reply). A stored procedure is a subroutine available to applications accessing a relational database system; here the application is the PI server. If you need further help on how to write and call a stored procedure in MS SQL you can look into these links:
    http://www.daniweb.com/web-development/databases/ms-sql/threads/146829
    http://www.sqlteam.com/article/stored-procedures-parameters-inserts-and-updates
    [ This part you can ignore, since it's not certain that you will face this situation.
    Still, you might face some problems while your scenario runs. Let's consider this scenario: after the stored procedure searches the database it finds no rows, so you proceed with an insert operation. If your database table is being accessed by multiple applications (or users) other than yours, then it is very well possible that after the search operation completed with a null result, an insert/update operation has been performed by some other application with the same primary key. Now when you try to insert another row with the same primary key you get an error message like "duplicate entry not possible for same primary key value". Thus you need to be careful in this respect. MS SQL has a feature called "exclusive locks". Look into these links for more details on the subject:
    http://msdn.microsoft.com/en-us/library/aa213039(v=sql.80).aspx
    http://www.mssqlcity.com/Articles/Adm/SQL70Locks.htm
    http://www.faqs.org/docs/ppbook/r27479.htm
    http://msdn.microsoft.com/en-US/library/ms187373.aspx
    http://msdn.microsoft.com/en-US/library/ms173763.aspx
    http://msdn.microsoft.com/en-us/library/e7z8d5hf(v=vs.80).aspx
    http://mssqlserver.wordpress.com/2006/11/08/locks-in-sql/
    http://www.mollerus.net/tom/blog/2008/03/using_mssqls_nolock_for_faster_queries.html
        There must be other methods to avoid this problem. But the point is you need to be sure that all access to database for insert/update operations are isolated.
    regards
    Anupam
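    A rough sketch of the stored-procedure idea above (MS SQL Server; the table and column names here are invented for illustration, so adjust them to the real target table). The JDBC receiver would then call this procedure once per record instead of issuing plain INSERT or UPDATE statements:
    CREATE PROCEDURE dbo.UpsertCustomer
        @CustomerID INT,
        @Name       VARCHAR(100)
    AS
    BEGIN
        SET NOCOUNT ON;
        IF EXISTS (SELECT 1 FROM dbo.Customer WHERE CustomerID = @CustomerID)
            UPDATE dbo.Customer
               SET Name = @Name
             WHERE CustomerID = @CustomerID;   -- old record: update it
        ELSE
            INSERT INTO dbo.Customer (CustomerID, Name)
            VALUES (@CustomerID, @Name);       -- new record: insert it
    END;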

  • Two LOVs in same UIX page based on the same table

    Hi group,
    Recently I ran into a problem with a UIX page that has two LOVs based on the same database table. In Emp and Job terminology, here's what I tried to do.
    Suppose we use the Employees and Jobs tables from the HR schema with one small modification. Add a column called PREFERRED_JOB_ID of type VARCHAR2(10) which is exactly the same as JOB_ID. Hypothetically speaking, this column will be used to allow Employees to select their preferred job in case they want to change jobs.
    Next, create business components in JDeveloper based on the Employees and Jobs table. In order to be able to generate the LOVs for Job and PreferredJob create two ViewObjects, one called JobsLookupView and one called PreferredJobsLookupView. These both need to be based upon the same Jobs EntityObject. Also include the Jobs EntityObject twice in the EmployeesView ViewObject and give one the alias PreferredJobs. Include the JobTitle attributes of Jobs and PreferredJobs in EmployeesView so they can contain the JobTitles selected in our future LOVs.
    Next create a ViewController project if it's not already there and enable JHeadstart on it. Create an Application Structure File and create the lookups for the two LOVs. When I run the application I see this happening:
    When I click the flashlight icon next to the PreferredJob LOV and select a job, the PreferredJob field is selected in the UIX page but no job title appears. When I next first select the Job LOV and select a new job and then select the PreferredJob LOV again and select a different job, the job that was selected in the Job LOV is now also entered in the PreferredJob field. When I make sure the PreferredJob field isn't required (remove the required="yes" property on the messageLovInput entry for PreferredJobTitle and remove the PreferredJobTitle field from the addRequiredRowItems list in the UIX page) I can save the changes I made in the Employees.uix page. The same JobId is stored in the database for Job and PreferredJob.
    This behaviour is probably due to caching issues because both LOVs and the EmployeesView ViewObject use one EntityObject for four values in three ViewObjects.
    So I then modified my model a bit. I created a new EntityObject called PreferredJobs based on the JOBS table. I modified the EmployeesView ViewObject to use this EntityObject for the PreferredJobTitle attribute and modified the PreferredJobsLookupView to use this EntityObject.
    I needed to modify a few things as well in the Application Structure File (which were prompts and whether or not the attribute should be visible in a table) and after regenerating I made sure the PreferredJob attribute isn't required in the UIX page. When I then run the application again, I never see the JobTitle I select in any LOV allthough the PreferredJob field is selected when I select a Job in the PreferredJob LOV. The correct JobId now is stored in the database though.
    Has anyone ever encoutered this behaviour? Would anyone know how to get two LOVs based on the same table in one UIX page?
    Thanks in advance,
    Wouter van Reeven
    AMIS

    OK I figured it out. When I added the second Lookup ViewObject (PreferredJobs) no additional Association was created. Therefore, ADF BC wasn't able to figure out which field to update. When I added the Association between the PreferredJobs Lookup ViewObject and the Employees ViewObject everything started working ok.
    I then recreated my initial situation: one EntityObject for Jobs and one for Employees, now with two Associations between them. After modifying the Employees ViewObject and making sure the Jobs EntityObject was referenced twice and via the corresponding Association, everything started working ok.
    Greets, Wouter
    AMIS

  • How to find out the volume of the data updated in the custom table

    Hi,
    I need to find out the volume of data inserted or updated in the custom table (Y tables). I have tried SM37; the job runs and updates the table, but I did not get the amount of data. Also, if I can get the volume of data being updated in the custom table, is there any option to control the amount being updated?
    Thanks in advance ..... waiting for the response.

    Hi Sreenivas.
    How did you find the solution to this? Trying to do the same thing!
    Cheers,
    Tom
