INSERTALLRECORDS

Hi,
To resolve the timestamp conflict I have the entries below in my Replicat parameter file:
MAP swilkes.date_test, TARGET swilkes.date_test, &
REPERROR (21000, EXCEPTION), &
SQLEXEC (ID lookup, ON UPDATE, &
QUERY "select count(*) conflict from date_test where t_id = ? and &
t_timestamp > ?", &
PARAMS (p1 = t_id, p2 = t_timestamp), BEFOREFILTER, ERROR REPORT, &
TRACE ALL),&
FILTER (lookup.conflict = 0, ON UPDATE, RAISEERROR 21000);
MAP swilkes.date_test, TARGET swilkes.date_test_exc, EXCEPTIONSONLY, &
INSERTALLRECORDS, &
COLMAP (USEDEFAULTS, errtype = "UPDATE FILTER FAILED.");
I also want to use INSERTALLRECORDS to handle any duplicate records arriving at the target database table, in the same Replicat file.
Please let me know the steps.
Thanks

To clarify: when a duplicate record arrives at the target database, the error ORA-00001 "unique constraint violated" is raised. For such errors I want the insert statement to be converted to an update, or otherwise still be replicated. Also, can we configure another exception table for this, or can we reuse the same exception table already used for the timestamp-conflict exceptions?
Thanks
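A hedged sketch of one way to do this (unverified; it assumes your GoldenGate release accepts multiple per-MAP REPERROR clauses): also trap ORA-00001 with REPERROR (-1, EXCEPTION), and let the existing exceptions MAP serve both cases by recording the actual error text instead of a hard-coded string:
<pre>
MAP swilkes.date_test, TARGET swilkes.date_test, &
REPERROR (21000, EXCEPTION), &
REPERROR (-1, EXCEPTION), &
SQLEXEC (ID lookup, ON UPDATE, &
QUERY "select count(*) conflict from date_test where t_id = ? and &
t_timestamp > ?", &
PARAMS (p1 = t_id, p2 = t_timestamp), BEFOREFILTER, ERROR REPORT, &
TRACE ALL),&
FILTER (lookup.conflict = 0, ON UPDATE, RAISEERROR 21000);
MAP swilkes.date_test, TARGET swilkes.date_test_exc, EXCEPTIONSONLY, &
INSERTALLRECORDS, &
COLMAP (USEDEFAULTS, errtype = @GETENV ("LASTERR", "DBERRMSG"));
</pre>
If the failed inserts should instead be applied as updates, HANDLECOLLISIONS on the base MAP is the usual coarse-grained alternative, at the cost of silently resolving every collision rather than only ORA-00001.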

Similar Messages

  • A question about transaction consistency between multiple target tables

    Dear community,
    My replication is ORACLE 11.2.3.7->ORACLE 11.2.3.7 both running on linux x64 and GG version is 11.2.3.0.
    I'm recovering from an error when trail file was moved away while dpump was writing to it.
    After moving the file back dpump abended with an error
    2013-12-17 11:45:06  ERROR   OGG-01031  There is a problem in network communication, a remote file problem, encryption keys for target and source do
    not match (if using ENCRYPT) or an unknown error. (Reply received is Expected 4 bytes, but got 0 bytes, in trail /u01/app/ggate/dirdat/RI002496, seqno 2496,
    reading record trailer token at RBA 12999993).
    I googled it and found no suitable solution except to try "alter extract <dpump>, etrollover".
    After rolling over trail file replicat as expected ended with
    REPLICAT START 1
    2013-12-17 17:56:03  WARNING OGG-01519  Waiting at EOF on input trail file /u01/app/ggate/dirdat/RI002496, which is not marked as complete;
    but succeeding trail file /u01/app/ggate/dirdat/RI002497 exists. If ALTER ETROLLOVER has been performed on source extract,
    ALTER EXTSEQNO must be performed on each corresponding downstream reader.
    So I've issued "alter replicat <repname>, extseqno 2497, extrba 0" but got the following error:
    REPLICAT START 2
    2013-12-17 18:02:48 WARNING OGG-00869 Aborting BATCHSQL transaction. Detected inconsistent result:
    executed 50 operations in batch, resulting in 47 affected rows.
    2013-12-17 18:02:48  WARNING OGG-01137  BATCHSQL suspended, continuing in normal mode.
    2013-12-17 18:02:48  WARNING OGG-01003  Repositioning to rba 1149 in seqno 2497.
    2013-12-17 18:02:48 WARNING OGG-01004 Aborted grouped transaction on 'M.CLIENT_REG', Database error
    1403 (OCI Error ORA-01403: no data found, SQL <UPDATE "M"."CLIENT_REG" SET "CLIENT_CODE" =
    :a1,"CORE_CODE" = :a2,"CP_CODE" = :a3,"IS_LOCKED" = :a4,"BUY_SUMMA" = :a5,"BUY_CHECK_CNT" =
    :a6,"BUY_CHECK_LIST_CNT" = :a7,"BUY_LAST_DATE" = :a8 WHERE "CODE" = :b0>).
    2013-12-17 18:02:48  WARNING OGG-01003  Repositioning to rba 1149 in seqno 2497.
    2013-12-17 18:02:48 WARNING OGG-01154 SQL error 1 mapping LS.CHECK to M.CHECK OCI Error ORA-00001:
    unique constraint (M.CHECK_PK) violated (status = 1). INSERT INTO "M"."CHECK"
    ("CODE","STATE","IDENT_TYPE","IDENT","CLIENT_REG_CODE","SHOP","BOX","NUM","KIND","KIND_ORDER","DAT","SUMMA","LIST_COUNT","RETURN_SELL_CHECK_CODE","RETURN_SELL_SHOP","RETURN_SELL_BOX","RETURN_SELL_NUM","RETURN_SELL_KIND","INSERTED","UPDATED","REMARKS")
    VALUES
    (:a0,:a1,:a2,:a3,:a4,:a5,:a6,:a7,:a8,:a9,:a10,:a11,:a12,:a13,:a14,:a15,:a16,:a17,:a18,:a19,:a20).
    2013-12-17 18:02:48  WARNING OGG-01003  Repositioning to rba 1149 in seqno 2497.
    The report stated the following:
    Reading /u01/app/ggate/dirdat/RI002497, current RBA 1149, 0 records
    Report at 2013-12-17 18:02:48 (activity since 2013-12-17 18:02:46)
    From Table LS.MK_CHECK to LSGG.MK_CHECK:
           #                   inserts:         0
           #                   updates:         0
           #                   deletes:         0
           #                  discards:         1
    From Table LS.MK_CHECK to LSGG.TL_MK_CHECK:
           #                   inserts:         0
           #                   updates:         0
           #                   deletes:         0
           #                  discards:         1
    At that time I came to the conclusion that using etrollover was not a good idea. Nevertheless, I had to upload my data to perform a consistency check.
    My mapping templates are set up as the following:
    LS.CHECK->M.CHECK
    LS.CHECK->M.TL_CHECK
    (such mapping is set up for every table that is replicated).
    TL_CHECK is a transaction log, as I name it,
    and this peculiar mapping is as the following:
    ignoreupdatebefores
    map LS.CHECK, target M.CHECK, nohandlecollisions;
    ignoreupdatebefores
    map LS.CHECK, target M.TL_CHECK ,colmap(USEDEFAULTS,
    FILESEQNO = @GETENV ("RECORD", "FILESEQNO"),
    FILERBA = @GETENV ("RECORD", "FILERBA"),
    COMMIT_TS = @GETENV( "GGHEADER", "COMMITTIMESTAMP" ),
    FILEOP = @GETENV ("GGHEADER","OPTYPE"), CSCN = @TOKEN("TKN-CSN"),
    RSID = @TOKEN("TKN-RSN"),
    OLD_CODE = before.CODE
    , OLD_STATE = before.STATE
    , OLD_IDENT_TYPE = before.IDENT_TYPE
    , OLD_IDENT = before.IDENT
    , OLD_CLIENT_REG_CODE = before.CLIENT_REG_CODE
    , OLD_SHOP = before.SHOP
    , OLD_BOX = before.BOX
    , OLD_NUM = before.NUM
    , OLD_NUM_VIRT = before.NUM_VIRT
    , OLD_KIND = before.KIND
    , OLD_KIND_ORDER = before.KIND_ORDER
    , OLD_DAT = before.DAT
    , OLD_SUMMA = before.SUMMA
    , OLD_LIST_COUNT = before.LIST_COUNT
    , OLD_RETURN_SELL_CHECK_CODE = before.RETURN_SELL_CHECK_CODE
    , OLD_RETURN_SELL_SHOP = before.RETURN_SELL_SHOP
    , OLD_RETURN_SELL_BOX = before.RETURN_SELL_BOX
    , OLD_RETURN_SELL_NUM = before.RETURN_SELL_NUM
    , OLD_RETURN_SELL_KIND = before.RETURN_SELL_KIND
    , OLD_INSERTED = before.INSERTED
    , OLD_UPDATED = before.UPDATED
    , OLD_REMARKS = before.REMARKS), nohandlecollisions, insertallrecords;
    As the PK violation fired for CHECK, I changed nohandlecollisions to handlecollisions for the LS.CHECK->M.CHECK mapping and restarted the replicat.
    To my surprise it ended with the following error:
    REPLICAT START 3
    2013-12-17 18:05:55 WARNING OGG-00869 Aborting BATCHSQL transaction. Database error 1 (ORA-00001:
    unique constraint (M.CHECK_PK) violated).
    2013-12-17 18:05:55 WARNING OGG-01137 BATCHSQL suspended, continuing in normal mode.
    2013-12-17 18:05:55 WARNING OGG-01003 Repositioning to rba 1149 in seqno 2497.
    2013-12-17 18:05:55 WARNING OGG-00869 OCI Error ORA-00001: unique constraint (M.PK_TL_CHECK)
    violated (status = 1). INSERT INTO "M"."TL_CHECK"
    ("FILESEQNO","FILERBA","FILEOP","COMMIT_TS","CSCN","RSID","CODE","STATE","IDENT_TYPE","IDENT","CLIENT_REG_CODE","SHOP","BOX","NUM","KIND","KIND_ORDER","DAT","SUMMA","LIST_COUNT","RETURN_SELL_CHECK_CODE","RETURN_SELL_SHOP","RETURN_SELL_BOX","RETURN_SELL_NUM","RETURN_SELL_KIND","INSERTED","UPDATED","REMARKS")
    VALUES
    (:a0,:a1,:a2,:a3,:a4,:a5,:a6,:a7,:a8,:a9,:a10,:a11,:a12,:a13,:a14,:a15,:a16,:a17,:a18,:a19,:a20,:a21,:a22,:a23,:a24,:a25,:a26).
    2013-12-17 18:05:55 WARNING OGG-01004 Aborted grouped transaction on 'M.TL_CHECK', Database error 1
    (OCI Error ORA-00001: unique constraint (M.PK_TL_CHECK) violated (status = 1). INSERT INTO
    "M"."TL_CHECK"
    ("FILESEQNO","FILERBA","FILEOP","COMMIT_TS","CSCN","RSID","CODE","STATE","IDENT_TYPE","IDENT","CLIENT_REG_CODE","SHOP","BOX","NUM","KIND","KIND_ORDER","DAT","SUMMA","LIST_COUNT","RETURN_SELL_CHECK_CODE","RETURN_SELL_SHOP","RETURN_SELL_BOX","RETURN_SELL_NUM","RETURN_SELL_KIND","INSERTED","UPDATED","REMARKS")
    VALUES
    (:a0,:a1,:a2,:a3,:a4,:a5,:a6,:a7,:a8,:a9,:a10,:a11,:a12,:a13,:a14,:a15,:a16,:a17,:a18,:a19,:a20,:a21,:a22,:a23,:a24,:a25,:a26)).
    2013-12-17 18:05:55  WARNING OGG-01003  Repositioning to rba 1149 in seqno 2497.
    2013-12-17 18:05:55 WARNING OGG-01154 SQL error 1 mapping LS.CHECK to M.TL_CHECK OCI Error
    ORA-00001: unique constraint (M.PK_TL_CHECK) violated (status = 1). INSERT INTO "M"."TL_CHECK"
    ("FILESEQNO","FILERBA","FILEOP","COMMIT_TS","CSCN","RSID","CODE","STATE","IDENT_TYPE","IDENT","CLIENT_REG_CODE","SHOP","BOX","NUM","KIND","KIND_ORDER","DAT","SUMMA","LIST_COUNT","RETURN_SELL_CHECK_CODE","RETURN_SELL_SHOP","RETURN_SELL_BOX","RETURN_SELL_NUM","RETURN_SELL_KIND","INSERTED","UPDATED","REMARKS")
    VALUES
    (:a0,:a1,:a2,:a3,:a4,:a5,:a6,:a7,:a8,:a9,:a10,:a11,:a12,:a13,:a14,:a15,:a16,:a17,:a18,:a19,:a20,:a21,:a22,:a23,:a24,:a25,:a26).
    2013-12-17 18:05:55  WARNING OGG-01003  Repositioning to rba 1149 in seqno 2497.
    I expected BATCHSQL to fail, since it does not support handlecollisions, but I really don't understand why any record was inserted into TL_CHECK and caused a PK violation: I thought GG guarantees transactional consistency, i.e. that any transaction causing an error in _ANY_ of the target tables is rolled back for _EVERY_ target table.
    TL_CHECK has PK set to (FILESEQNO, FILERBA), plus I have a special column that captures replication run number and it clearly states that a record causing PK violation was inserted during previous run (REPLICAT START 2).
    BTW the report for the last run shows
    Reading /u01/app/ggate/dirdat/RI002497, current RBA 1149, 1 records
    Report at 2013-12-17 18:05:55 (activity since 2013-12-17 18:05:54)
    From Table LS.MK_CHECK to LSGG.MK_CHECK:
           #                   inserts:         0
           #                   updates:         0
           #                   deletes:         0
           #                  discards:         1
    From Table LS.MK_CHECK to LSGG.TL_MK_CHECK:
           #                   inserts:         0
           #                   updates:         0
           #                   deletes:         0
           #                  discards:         1
    So somebody explain, how could that happen?

    Write the query of the existing table in the form of a function with PRAGMA AUTONOMOUS_TRANSACTION.
    examples here:
    http://www.morganslilbrary.org/library.html
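    A sketch of that suggestion (hypothetical function name; the quoted "CHECK" table from this thread is used only as an example):
    <pre>
    CREATE OR REPLACE FUNCTION check_code_exists (p_code IN NUMBER)
      RETURN NUMBER
    IS
      PRAGMA AUTONOMOUS_TRANSACTION;  -- runs independently of the caller's transaction
      v_cnt NUMBER;
    BEGIN
      SELECT COUNT(*) INTO v_cnt FROM m."CHECK" WHERE code = p_code;
      COMMIT;  -- an autonomous transaction must be ended before returning
      RETURN v_cnt;
    END check_code_exists;
    /
    </pre>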

  • Issue with exception Handling in GG

    Hi,
    I have a bi-directional DML replication setup, and I have written exception-handling code in the Replicat parameter file. The exception handling is working fine and my Replicat process is not getting ABENDED, but the issue is that I am not getting any rows in the EXCEPTIONS table. Going through the Replicat report, I saw that GG is trying to insert duplicate records into the EXCEPTIONS table and failing because of that.
    **Command to create the exception table:**
    create table ggs_admin.exceptions (
    rep_name varchar2(8),
    table_name varchar2(61),
    errno number,
    dberrmsg varchar2(4000),
    optype varchar2(20),
    errtype varchar2(20),
    logrba number,
    logposition number,
    committimestamp timestamp,
    CONSTRAINT pk_exceptions PRIMARY KEY (logrba, logposition, committimestamp)
    USING INDEX TABLESPACE INDX1
    ) TABLESPACE dbdat1;
    My replication parameter is-
    GGSCI (db) 1> view params rep2
    -- Replicator parameter file to apply changes
    REPLICAT rep2
    ASSUMETARGETDEFS
    USERID ggs_admin, PASSWORD ggs_admin
    DISCARDFILE /u01/app/oracle/product/gg/dirdat/rep2_discard.dsc, PURGE
    -- Start of the macro
    MACRO #exception_handler
    BEGIN
    , TARGET ggs_admin.exceptions
    , COLMAP ( rep_name = "REP2"
    , table_name = @GETENV ("GGHEADER", "TABLENAME")
    , errno = @GETENV ("LASTERR", "DBERRNUM")
    , dberrmsg = @GETENV ("LASTERR", "DBERRMSG")
    , optype = @GETENV ("LASTERR", "OPTYPE")
    , errtype = @GETENV ("LASTERR", "ERRTYPE")
    , logrba = @GETENV ("GGHEADER", "LOGRBA")
    , logposition = @GETENV ("GGHEADER", "LOGPOSITION")
    , committimestamp = @GETENV ("GGHEADER", "COMMITTIMESTAMP"))
    , INSERTALLRECORDS
    , EXCEPTIONSONLY;
    END;
    -- End of the macro
    REPERROR (DEFAULT, EXCEPTION)
    --REPERROR (-1, EXCEPTION)
    --REPERROR (-1403, EXCEPTION)
    MAP scr.order_items, TARGET scr.order_items;
    MAP scr.order_items #exception_handler();
    GGSCI (db) 2>view params rep2
    MAP resolved (entry scr.order_items):
    MAP "scr"."order_items" TARGET ggs_admin.exceptions , COLMAP ( rep_name = "REP2" , table_name = @GETENV ("GGHEADER", "TABLENAME") , errno = @GETENV ("LASTERR", "DB
    ERRNUM") , dberrmsg = @GETENV ("LASTERR", "DBERRMSG") , optype = @GETENV ("LASTERR", "OPTYPE") , errtype = @GETENV ("LASTERR", "ERRTYPE") , logrba = @GETENV ("GGHEADER"
    , "LOGRBA") , logposition = @GETENV ("GGHEADER", "LOGPOSITION") , committimestamp = @GETENV ("GGHEADER", "COMMITTIMESTAMP")) , INSERTALLRECORDS , EXCEPTIONSONLY;;
    Using the following key columns for target table GGS_ADMIN.EXCEPTIONS: LOGRBA, LOGPOSITION, COMMITTIMESTAMP.
    2012-08-30 09:09:00 WARNING OGG-01154 SQL error 1403 mapping scr.order_items to scr.order_items OCI Error ORA-01403: no data found, SQL <DELETE FROM "scr"."order_items" WHERE "SUBSCRIBER_ID" = :b0>.
    2012-08-30 09:09:00 WARNING OGG-00869 OCI Error ORA-00001: unique constraint (GGS_ADMIN.PK_EXCEPTIONS) violated (status = 1). INSERT INTO "GGS_ADMIN"."EXCEPTIONS" ("R
    EP_NAME","TABLE_NAME","ERRNO","DBERRMSG","OPTYPE","ERRTYPE","LOGRBA","LOGPOSITION","COMMITTIMESTAMP") VALUES (:a0,:a1,:a2,:a3,:a4,:a5,:a6,:a7,:a8).
    2012-08-30 09:09:00 WARNING OGG-01004 Aborted grouped transaction on 'GGS_ADMIN.EXCEPTIONS', Database error 1 (OCI Error ORA-00001: unique constraint (GGS_ADMIN.PK_EX
    CEPTIONS) violated (status = 1). INSERT INTO "GGS_ADMIN"."EXCEPTIONS" ("REP_NAME","TABLE_NAME","ERRNO","DBERRMSG","OPTYPE","ERRTYPE","LOGRBA","LOGPOSITION","COMMITTIMES
    TAMP") VALUES (:a0,:a1,:a2,:a3,:a4,:a5,:a6,:a7,:a8)).
    2012-08-30 09:09:00 WARNING OGG-01003 Repositioning to rba 92383 in seqno 8.
    2012-08-30 09:09:00 WARNING OGG-01154 SQL error 1403 mapping scr.order_items to scr.order_items OCI Error ORA-01403: no data found, SQL <DELETE FROM "scr"."order_items" WHERE "SUBSCRIBER_ID" = :b0>.
    2012-08-30 09:09:00 WARNING OGG-01154 SQL error 1 mapping scr.order_items to GGS_ADMIN.EXCEPTIONS OCI Error ORA-00001: unique constraint (GGS_ADMIN.PK_EXCEPTIONS)
    violated (status = 1). INSERT INTO "GGS_ADMIN"."EXCEPTIONS" ("REP_NAME","TABLE_NAME","ERRNO","DBERRMSG","OPTYPE","ERRTYPE","LOGRBA","LOGPOSITION","COMMITTIMESTAMP") VAL
    UES (:a0,:a1,:a2,:a3,:a4,:a5,:a6,:a7,:a8).
    2012-08-30 09:09:00 WARNING OGG-01003 Repositioning to rba 92383 in seqno 8.
    When I am running command
    select * from exceptions;
    no row selected.
    Please help. Why are duplicate rows being inserted into the exceptions table?

    Remove (disable) the constraint on the exceptions table and see if inserts will take place. Do you really need that primary key?
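    For example (hedged; dropping the PK trades duplicate protection for complete error capture):
    <pre>
    -- allow duplicate error rows to be inserted
    ALTER TABLE ggs_admin.exceptions DROP CONSTRAINT pk_exceptions;
    </pre>
    Alternatively, keep the table unique by adding a discriminator column (e.g. a sequence value) and including it in the key.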

  • Replicat abended

    Hi,
    In my active-active replication setup, today I'm getting the error message below; can you please help me resolve it?
    2011-06-15 13:30:09 GGS ERROR 101 Oracle GoldenGate Delivery for Oracle,
    REP_TAR2.prm: Must be IGNORE, DISCARD, ABEND, EXCEPTION, TRANSABORT or RETRYOP
    Where i need to set the above parameters, please find below the map statements which is configured in both replicat prm files
    MAP replica.test, TARGET replica.item_descr, &
    REPERROR ( DEFAULT , EXCEPTION ) , &
    SQLEXEC ( ID detect_conflict, &
    ON UPDATE, &
    QUERY "SELECT 0 conflict FROM replica.test WHERE batch_id=:p_batch_id AND total=:p_total_before", &
    PARAMS(p_batch_id=batch_id,p_total_before=BEFORE.total), &
    ALLPARAMS REQUIRED , &
    BEFOREFILTER, &
    EXEC MAP, &
    TRACE ALL, &
    ERROR RAISE );
    also i have written a procedure on both side for mismatch,
    create or replace PROCEDURE ggadmin.h (
    p_batch_id IN NUMBER
    , p_total_after IN NUMBER
    , p_total_before IN NUMBER)
    IS
    BEGIN
    UPDATE replica.test
    SET total = total + (p_total_after - p_total_before)
    WHERE batch_id = p_batch_id;
    END h;
    Please help to resolve the 2011-06-15 13:30:09 GGS ERROR 101 Oracle GoldenGate Delivery for Oracle,
    REP_TAR2.prm: Must be IGNORE, DISCARD, ABEND, EXCEPTION, TRANSABORT or RETRYOP
    when i start the replicat process both side it goes ABENDED status... below are the full replicat process entries please advice...
    -- Replicat process
    REPLICAT REP_SRC1
    -- Environment Settings
    USERID ggadmin, PASSWORD ggadmin
    -- Discard file path
    DISCARDFILE E:\ggs\dirrpt\rep1_dsc.rpt, append
    -- Grneral Parameters
    ASSUMETARGETDEFS
    -- Truncate parameter
    --GETTRUNCATES
    DDL INCLUDE ALL
    DDLERROR DEFAULT IGNORE RETRYOP
    --HANDLECOLLISIONS
    APPLYNOOPUPDATES
    -- Mapping parameters
    GETAPPLOPS
    IGNOREREPLICATES
    -- This starts the macro
    --MAP replica.*, target replica.*;
    MAP replica.test , TARGET replica.test, &
    REPERROR ( DEFAULT, EXCEPTION ), &
    SQLEXEC ( ID detect_conflict, &
    ON UPDATE, &
    QUERY "SELECT 0 conflict FROM replica.test WHERE batch_id=:p_batch_id AND total=:p_total_before", &
    PARAMS(p_batch_id=batch_id,p_total_before=BEFORE.total), &
    ALLPARAMS REQUIRED, &
    BEFOREFILTER, &
    EXEC MAP, &
    TRACE ALL, &
    ERROR RAISE );
    MACRO #exception_handler
    BEGIN
    , TARGET ggadmin.exceptions
    , EXCEPTIONSONLY
    , SQLEXEC ( SPNAME ggadmin.h, &
    PARAMS(p_batch_id = batch_id, &
    p_total_after = total, &
    p_total_before = BEFORE.total)
    , EXEC MAP
    , TRACE ALL
    , COLMAP ( rep_name = "rep_src1"
    , table_name = @GETENV ("GGHEADER", "TABLENAME")
    , errno = @GETENV ("LASTERR", "DBERRNUM")
    , dberrmsg = @GETENV ("LASTERR", "DBERRMSG")
    , optype = @GETENV ("LASTERR", "OPTYPE")
    , errtype = @GETENV ("LASTERR", "ERRTYPE")
    , logrba = @GETENV ("GGHEADER", "LOGRBA")
    , logposition = @GETENV ("GGHEADER", "LOGPOSITION")
    , committimestamp = @GETENV ("GGHEADER", "COMMITTIMESTAMP")
    , batch_id=batch_id
    , batch_id_before = BEFORE.batch_id
    --, total_after = total
    , total_before = BEFORE.total
    , record_image = @GETENV ("GGHEADER", "BEFOREAFTERINDICATOR"))
    , INSERTALLRECORDS
    END;
    -- This ends the macro
    Edited by: Atp on Jun 15, 2011 5:49 PM

    Hi Steven,
    Thanks for your reply,
    the problem was the space between "(" and DEFAULT, and between EXCEPTION and ")". After removing the spaces it's working fine:
    REPERROR (DEFAULT, EXCEPTION), &

  • Exception table getting dropped

    Hi,
    when I dropped a table at the source, the exception table at the target also got dropped; likewise, a table created at the source will create/override the exception table with the new table's structure. So I am wondering if anyone has any idea or
    has gone through a similar situation.
    Here is my replicat setup:
    -- Identify the Replicat group:
    REPLICAT CEPREP1
    -- Use HANDLECOLLISIONS while Source is Active
    --HANDLECOLLISIONS
    -- State that source and target definitions are identical:
    ASSUMETARGETDEFS
    -- Specify database login information as needed for the database:
    USERID goldengate_owner, PASSWORD *******
    -- This starts the macro for exception handler
    MACRO #exception_handler
    BEGIN
    , TARGET goldengate_owner.exceptions
    , COLMAP ( rep_name = "CEPREP1"
    , table_name = @GETENV ("GGHEADER", "TABLENAME")
    , errno = @GETENV ("LASTERR", "DBERRNUM")
    , dberrmsg = @GETENV ("LASTERR", "DBERRMSG")
    , optype = @GETENV ("LASTERR", "OPTYPE")
    , errtype = @GETENV ("LASTERR", "ERRTYPE")
    , logrba = @GETENV ("GGHEADER", "LOGRBA")
    , logposition = @GETENV ("GGHEADER", "LOGPOSITION")
    , committimestamp = @GETENV ("GGHEADER", "COMMITTIMESTAMP"))
    , INSERTALLRECORDS
    , EXCEPTIONSONLY;
    END;
    -- This ends the macro for exception handler
    -- Specify error handling rules:
    REPERROR (DEFAULT, EXCEPTION)
    REPERROR (DEFAULT2, ABEND)
    REPERROR (-1, EXCEPTION)
    REPERROR (-1403, EXCEPTION)
    DDL INCLUDE ALL
    DDLERROR DEFAULT IGNORE RETRYOP
    MAP GGATE_T1.*, TARGET GGATE_T1.*;
    MAP GGATE_T1.* #exception_handler()
    MAP GGATE_T2.*, TARGET GGATE_T2.*;
    MAP GGATE_T2.* #exception_handler()
    MAP OPS$ORACLE.*, TARGET OPS$ORACLE.*;
    MAP OPS$ORACLE.* #exception_handler()
    Edited by: pbhand on Jun 29, 2011 4:22 PM

    You have "DDL INCLUDE ALL", which means any "drop table" on the source DB will be replicated to the target DB.
    If you don't want to replicate DDL, then just omit this statement altogether (DDL replication is not enabled by default).
    If you want to replicate DDL but not for specific objects, then use "DDL INCLUDE {what_you_want}" or "DDL INCLUDE ALL, EXCLUDE {what_you_dont_want}". See the GG Admin guide (ch 14) for how DDL synchronization can be configured[1]...
    For example,
    <pre>
    DDL INCLUDE ALL, EXCLUDE OBJNAME "goldengate_owner.exceptions"
    </pre>
    or,
    <pre>
    DDL INCLUDE OBJNAME "goldengate_owner.*", EXCLUDE OBJNAME "goldengate_owner.exceptions"
    </pre>
    Note: it could be the formatting of the message, but "MAP GGATE_T1., TARGET GGATE_T1.;" doesn't look right. You should always have "schema.tablename", where tablename could be a wildcard... perhaps it was supposed to be => MAP GGATE_T1.*, TARGET GGATE_T1.*;" ?
    Cheers,
    -Michael
    [1] http://download.oracle.com/docs/cd/E18101_01/index.htm

  • SQLEXEC on each new trail file or just on replicat start to capture current FILESEQNO?

    Dear community,
    I need to capture fileseqno of the trail file that replicat is processing. But I only need to capture it once without overhead of adding fileseqno column to target tables.
    I clearly understand that using colmap ( usedefaults, FILESEQNO = @GETENV ("RECORD", "FILESEQNO") ) inserts fileseqno value for every record captured from logs, but what I'm trying to accomplish is to react on the event of changing fileseqno number while replicat is running.
    Following application logic I actually only need to capture fileseqno of the first trail, that is processed by replicat.
    What I've tried so far is using following parameters for replicat (given two tables to be replicated):
    table m.EventHead,
    sqlexec( id set_first_seq_no1, spname pk_replication_accounting.set_first_seq_no, params( a_fileseqno = @GETENV( "GGFILEHEADER", "FILESEQNO" ) ), afterfilter ),
    filter (@coltest( set_first_seq_no1.a_fileseqno, NULL ) or @coltest( set_first_seq_no1.a_fileseqno, MISSING ) or @coltest( set_first_seq_no1.a_fileseqno, INVALID ) ),
    eventactions( ignore record );
    table m.EventTail,
    sqlexec( id set_first_seq_no2, spname pk_replication_accounting.set_first_seq_no, params( a_fileseqno = @GETENV( "GGFILEHEADER", "FILESEQNO" ) ), afterfilter ),
    filter (@coltest( set_first_seq_no2.a_fileseqno, NULL ) or @coltest( set_first_seq_no2.a_fileseqno, MISSING ) or @coltest( set_first_seq_no2.a_fileseqno, INVALID ) ),
    eventactions( ignore record );
    pk_replication_accounting.set_first_seq_no is defined within package as
    procedure set_first_seq_no( a_fileseqno in out pls_integer );
    With the filter clause I tried to instruct GG to perform SQLEXEC only for the first record captured for each table, but with no success: the stored procedure fires multiple times, once for every record in the trail file.
    As far as I understand standalone SQLEXEC is not capable of obtaining value of @GETENV( "GGFILEHEADER", "FILESEQNO" ), though I have not tried it yet.
    Another way, that I see, is to instruct extract to add one fake record for every new trail file and then to process it within map clause with SQLEXEC. For example if SOURCEISTABLE had per table effect, then we could get our single record for every trail using dual table:
    sourceistable
    table dual;
    Still I don't know how to achieve required behavior.
    Please help, if you know some workarounds.

    Managed to capture current fileseqno for every replicat start with following parameters:
    ignoreupdatebefores
    map m.EventHead, target gg.tmp_gg_dummy, handlecollisions, colmap ( id = @GETENV ("RECORD", "FILERBA") ),
    sqlexec( id set_first_seq_no1, spname pk_replication_accounting.set_first_seq_no, params(a_fileseqno = @GETENV( "GGFILEHEADER", "FILESEQNO" ) ), afterfilter ),
    filter (@getenv( "STATS", "TABLE", "GG.TMP_GG_DUMMY", "DML" ) = 0), insertallrecords;
    map m.EventTail, target gg.tmp_gg_dummy, handlecollisions, colmap ( id = @GETENV ("RECORD", "FILERBA") ),
    sqlexec( id set_first_seq_no2, spname pk_replication_accounting.set_first_seq_no, params(a_fileseqno = @GETENV( "GGFILEHEADER", "FILESEQNO" ) ), afterfilter ),
    filter (@getenv( "STATS", "TABLE", "GG.TMP_GG_DUMMY", "DML" ) = 0), insertallrecords;
    tmp_gg_dummy is defined as the following:
    create global temporary table gg.tmp_gg_dummy ( id number( 14, 0 ) ) on commit delete rows;
    alter table gg.tmp_gg_dummy add constraint tmp_gg_dummy_pk primary key ( id );
    Procedure is fired only once per every replicat start and report file shows the following:
    From Table m.EventHead to GG.TMP_GG_DUMMY:
      Stored procedure set_first_seq_no1:
             attempts:         0
           successful:         0
    From Table m.EventTail to GG.TMP_GG_DUMMY:
           #                   inserts:         0
           #                   updates:         1
           #                   deletes:         0
           #                  discards:         0
      Stored procedure set_first_seq_no2:
             attempts:         1
           successful:         1
    though the original mapping from m.EventTail has
           #                   inserts:        69
           #                   updates:        21
           #                   befores:        21
           #                   deletes:         0
           #                  discards:         0

  • Replicate Tables as Staging Tables in a DWH

    Hi
    We are in process of building a Data Mart, for the staging area we are considering to use GG as the Change Data Capture tool.
    The idea is: in the source (OLTP database) we have a table called T1. This table has no PK, is partitioned by day, and generates around 20 million rows per day plus a few hundred thousand changes such as updates and deletes. In the staging area we will set up a table called STG_T1 with the same structure as T1 plus a few columns such as
    @GETENV("TRANSACTION" , "CSN")
    @GETENV("GGHEADER", "COMMITTIMESTAMP")
    @GETENV("GGHEADER", "LOGRBA")
    @GETENV("GGHEADER", "LOGPOSITION")
    @GETENV("GGHEADER", "OPTYPE")
    @GETENV("GGHEADER", "BEFOREAFTERINDICATOR")
    All the changes will be converted to INSERTs using INSERTALLRECORDS in the Replicat. This has a problem: since we don't have a PK in the source, we don't know how to identify a row's change history in STG_T1.
    Has anyone got experience replicating OLTP to a staging area using OGG, and with the ETL basics for propagating the changes from the staging area to the fact tables?
    Thanks
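    A minimal sketch of the mapping described above (the schema names oltp/stg and the STG_T1 audit column names are assumptions):
    <pre>
    MAP oltp.t1, TARGET stg.stg_t1, INSERTALLRECORDS, &
    COLMAP (USEDEFAULTS, &
    csn = @GETENV ("TRANSACTION", "CSN"), &
    commit_ts = @GETENV ("GGHEADER", "COMMITTIMESTAMP"), &
    logrba = @GETENV ("GGHEADER", "LOGRBA"), &
    logposition = @GETENV ("GGHEADER", "LOGPOSITION"), &
    optype = @GETENV ("GGHEADER", "OPTYPE"), &
    beforeafter = @GETENV ("GGHEADER", "BEFOREAFTERINDICATOR"));
    </pre>
    The CSN plus LOGRBA/LOGPOSITION columns also give each staged row a usable uniqueness key even though T1 has no PK.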

    If there is no primary key on the source, when you do the ADD TRANDATA, all columns will be supplementally logged. This is probably what you want so that you will have all columns when you apply the operation as an insert on the target.
    Even if you don't have a primary key on the target table, you can give Replicat a KEYCOLS on the MAP of one of the target columns - it won't really make any difference what column you pick since you are only going to only be applying inserts so Replicat does not have to format a WHERE clause. However, with no primary key on the target side, you do want to make sure you have enough information on the record to make each row unique.
    I would suggest you take a look at the following MOS articles to help guide you:
    What Tokens need to included in the transaction to make it unique for Insertallrecords to be used in the replicat [ID 1340823.1]
    Oracle GoldenGate - Best Practice: Creating History Tables [ID 1314698.1]
    Oracle GoldenGate Best Practice - Oracle GoldenGate for ETL Tools [ID 1371706.1]
    Let us know if you still have further questions.
    Best regards,
    Marie

  • Error Handling in Oracle GoldenGate

    Hi:
    I connected two databases with Oracle GoldenGate in an active-active bidirectional replication setup.
    I require that when any Replicat operation fails, the record is stored in a table with its values: i.e., if I make a change in the source database and replication of it fails, that record is stored in a table.
    I want to use one table for the whole schema; there is no common column across all the tables.
    Could you please help me out?
    Regards,
    Abhishek

    You can map errors into an exceptions table. You can use one table for many errors, but you have to code whatever to make the insert work. You can do a one-to-one mapping too, just depends on how granular you want the exceptions table to be. The hard part is making sure you account for expected errors.
    -- Specify mapping of exceptions to exceptions table:
    MAP <owner>.*, TARGET <owner>.<exceptions>, EXCEPTIONSONLY;
    An example shown on pages 96-97 of 11.1 Admin guide:
    MAP swilkes.date_test, TARGET swilkes.date_test, &
    REPERROR (21000, EXCEPTION), &
    SQLEXEC (ID lookup, ON UPDATE, &
    QUERY "select count(*) conflict from date_test where t_id = ? and &
    t_timestamp > ?", &
    PARAMS (p1 = t_id, p2 = t_timestamp), BEFOREFILTER, ERROR REPORT, &
    TRACE ALL),&
    FILTER (lookup.conflict = 0, ON UPDATE, RAISEERROR 21000);
    MAP swilkes.date_test, TARGET swilkes.date_test_exc, EXCEPTIONSONLY, &
    INSERTALLRECORDS, &
    COLMAP (USEDEFAULTS, errtype = "UPDATE FILTER FAILED.");
    An exceptions table could look like:
    --8 characters max for the replicat name
    --varchar2(30) via object naming rules in Oracle for the table name
    --error returned is numeric, and so on
    create table ggs_admin.exceptions
    ( rep_name varchar2(8)
    , table_name varchar2(30)
    , errno number
    , dberrmsg varchar2(4000)
    , optype varchar2(20)
    , errtype varchar2(20)
    , committimestamp timestamp);

  • Lookup on source side

    Newbie to goldengate, so please excuse
    I need to replicate some columns from table emp on database A to table emp on database B
    Assume Table emp structure:
    empid,ename,deptid
    On the same database I have dept table
    deptid, deptname
    On target emp structure
    ename,deptname
    Now I need to define the extract process for table emp; it has to look up deptname from the dept table, and the replicat process should be able to populate the emp structure on the target.

    This is what I have tried. EMP1.SQL is the name of the file generated using the defgen utility, since the structure differs between source and target. Does the naming convention of the sql file matter?
    The replicat process is abending with "table or view does not exist".
    extract extemp
    userid ggsowner,password ggsowner
    TRANLOGOPTIONS ASMUSER sys@ASM, ASMPASSWORD test EXCLUDEUSER ggsowner
    targetdefs ./dirsql/emp1.sql
    rmthost test,mgrport 7809
    rmttrail /oracle/gg/dirdat/er
    table scott.emp;
    table scott.dept;
    replicat erp1
    userid ggsowner, password ggsowner
    sourcedefs ./dirsql/emp1.sql
    map scott.emp, scott. webpos.emp_tar, &
    sqlexec (ID d1, &
    QUERY " select deptname from dept where deptid = :v_dept_id",&
    PARAMS (v_dept_id = deptid)),&
    COLMAP (USEDEFAULTS,ename=ename,deptid=deptid,deptname = @GETVAL(d1.deptname));
    reperror (DEFAULT, EXCEPTION)
    INSERTALLRECORDS
    map scott.emp , target ggsowner.exceptions,
    exceptionsonly,
    colmap (rep_name = "RTARGET1"
    ,table_name = @GETENV ("GGHEADER", "TABLENAME")
    ,errno = @GETENV ("LASTERR", "DBERRNUM")
    , dberrmsg = @GETENV ("LASTERR", "DBERRMSG")
    , optype = @GETENV ("LASTERR", "OPTYPE")
    , errtype = @GETENV ("LASTERR", "ERRTYPE")
    , logrba = @GETENV ("GGHEADER", "LOGRBA")
    , logposition = @GETENV ("GGHEADER", "LOGPOSITION")
    , committimestamp = @GETENV ("GGHEADER", "COMMITTIMESTAMP")
    , beforeafter = @GETENV ("GGHEADER", "BEFOREAFTERINDICATOR"));
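    A possible cleanup of the replicat above, under the assumption that the intended target is webpos.emp_tar (the stray `scott.` before the target name in the MAP is one likely cause of the abend). Note also that the SQLEXEC lookup runs on the target database under the replicat's login, so the dept table must be visible there (for example as a local table, synonym, or via a database link) — a sketch only:

```
replicat erp1
userid ggsowner, password ggsowner
sourcedefs ./dirsql/emp1.sql
-- target structure is (ename, deptname), so deptid is not mapped
map scott.emp, target webpos.emp_tar, &
sqlexec (ID d1, &
QUERY "select deptname from dept where deptid = :v_dept_id", &
PARAMS (v_dept_id = deptid)), &
COLMAP (ename = ename, deptname = @GETVAL (d1.deptname));
```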

  • Invalid option for MAP: MAP.

    Hi,
    My replication file runs fine with the contents below:
    MAP PROD.f0911, TARGET PROD.f0911, &
    REPERROR (21000, EXCEPTION), &
    SQLEXEC (ID lookup, ON UPDATE, &
    QUERY "select count(*) conflict from PROD.f0911 where ID_NO = :p1 and &
    UPDT_DT > :p2", &
    PARAMS ( p1 = ID_NO, p2 = UPDT_DT ), BEFOREFILTER, ERROR REPORT, &
    TRACE ALL),&
    FILTER (lookup.conflict = 0, ON UPDATE, RAISEERROR 21000);
    MAP PROD.f0911, TARGET PROD.f0911_EI, EXCEPTIONSONLY, &
    INSERTALLRECORDS, &
    COLMAP (USEDEFAULTS, &
    dberr = @GETENV ("lasterr", "dberrnum"));
    REPERROR (-1403, EXCEPTION)
    MAP PROD.f0911, TARGET PROD.f0911_EI, &
    EXCEPTIONSONLY, &
    INSERTALLRECORDS, &
    COLMAP (USEDEFAULTS, &
    dberr = @GETENV ("lasterr", "dberrnum"));
    But when I am adding
    MACRO #exception_handler
    BEGIN
    ,TARGET PROD.f0911
    ,UPDATEINSERTS
    ,EXCEPTIONSONLY
    END;
    REPERROR (-1, EXCEPTION)
    MAP PROD.f0911, TARGET PROD.f0911;
    MAP PROD.f0911 #exception_handler()
    It throws the error: ERROR OGG-00212 Invalid option for MAP: MAP.
    Can somebody please help me proceed with all of these conditions?
    Thanks

    As per our requirement, if replication hits an ORA-00001 "unique constraint violated" error, the insert record should be applied as an update on the target table. So I used
    UPDATEINSERTS in the macro, and it maps to the same main table; other errors are mapped to the exception table PROD.f0911_EI, so those records go to the exception table. But we need the record to go to the main table, with the insert converted to an update, only in the case of the ORA-00001 unique key constraint error.
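    One way to turn duplicate-key inserts into updates on the main table, while still routing other errors to the same exceptions table, is HANDLECOLLISIONS on the main MAP — a sketch only, using the table names from the thread. Be aware that HANDLECOLLISIONS also silently ignores updates and deletes that find no matching target row, so verify that behaviour is acceptable before using it outside an initial-load window:

```
REPERROR (DEFAULT, EXCEPTION)
-- duplicate-key inserts are converted to updates on the target
MAP PROD.f0911, TARGET PROD.f0911, HANDLECOLLISIONS;
-- everything else still lands in the exceptions table
MAP PROD.f0911, TARGET PROD.f0911_EI, EXCEPTIONSONLY, &
INSERTALLRECORDS, &
COLMAP (USEDEFAULTS, dberr = @GETENV ("LASTERR", "DBERRNUM"));
```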

  • GG Filter Records based on User Tokens

    In the replicat param file, I want to map only those rows where a given user token is not null. I tried @STRLEN, and this leads to the error "incorrect filter" in ggserr.log:
    MAP source.table1, TARGET target.table1,
    INSERTALLRECORDS,
    COLMAP (USEDEFAULTS,
    ID = 1
    FILTER ( @STRLEN (@TOKEN ("MYTOKEN") > 0 AND @RANGE (1,2) );
    I replaced FILTER with a WHERE clause, but got the same error:
    WHERE ( @TOKEN ("MYTOKEN") = @PRESENT AND @TOKEN ("MYTOKEN") <> @NULL);
    Has anyone used a user token as a filtering parameter in a replicat MAP statement? If yes, how? I simply want to check whether my token is present and not null, and only then insert the record.

    you wrote: FILTER ( @STRLEN (@TOKEN ("MYTOKEN") > 0 AND @RANGE (1,2) )
    should be: FILTER ( @STRLEN (@TOKEN ("MYTOKEN")) > 0 AND @RANGE (1,2) )
    You didn't close the bracket of @STRLEN function.
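    Putting the correction together — and noting that the COLMAP clause in the original also lacks its own closing parenthesis before FILTER — the MAP might read (a sketch, with the table names from the question):

```
MAP source.table1, TARGET target.table1,
INSERTALLRECORDS,
COLMAP (USEDEFAULTS, ID = 1),
FILTER ( @STRLEN (@TOKEN ("MYTOKEN")) > 0 AND @RANGE (1,2) );
```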

  • Colmap syntax errors

    Below is what I have; the COLMAP syntax is incorrect. Can someone give me a hint?
    REPLICAT RPABUJDE
    SETENV (ORACLE_HOME = /oracle/blah)
    SETENV (ORACLE_SID=tst)
    USERID ogg, PASSWORD ogg
    INSERTALLRECORDS
    DISCARDFILE ./dirrpt/ogg.rpt, PURGE
    MAP tst.blah, TARGET tst.blah, &
    COLMAP (USEDEFAULTS, modified_date = @GETENV ("GGHEADER", "COMMITTIMESTAMP"), &
    creation_time = (@GETENV ("GGHEADER", "OPTYPE")= "INSERT"), &
    modify_time = (@GETENV ("GGHEADER", "OPTYPE")= "UPDATE"));
    thanks

    If you want to maintain a creation time for inserts and a modified time for updates, you can use the approach in KM article 1450299.1 (How to maintain created date and updated date for a record using Oracle GoldenGate):
    MAP <source_schema>.source_table, target <source_schema>.<target_table>,
    COLMAP (usedefaults,
    ogg_create_date = @IF (@STREQ (@GETENV ("GGHEADER", "OPTYPE"), "INSERT"), @DATENOW (), @COLSTAT (MISSING)),
    ogg_update_date = @IF (@VALONEOF (@GETENV ("GGHEADER", "OPTYPE"), "UPDATE", "SQL COMPUPDATE", "PK UPDATE" ), @DATENOW (), @COLSTAT (MISSING)));

  • Replicat filter to identify inconsistencies in active-active configurations

    Hi,
    I am looking for the best way for Goldengate to handle consistency in an active-active configuration.
    I would like every transaction to be checked at the target by the replicat process: does the before value of every column match the current value in the target row?
    My databases have 400 tables, and I am looking for an easy way to identify transactions where the before values don't match the actual values in the target table.
    One solution is to trap the filter exception for each table, but I hope you can suggest a better one:
    replicat r_gtmgts
    userid goldengate, password goldengate
    assumetargetdefs
    discardfile ./dirrpt/r_gtmgts.dsc, append
    GETUPDATEBEFORES
    REPERROR (21000, EXCEPTION)
    MAP user1.table1, TARGET user1.table1,
    SQLEXEC (ID tab394, ON UPDATE,
    QUERY "select count(*) c from user1.table1 where 1=1 and DBID_=:p1 and STATE_=:p4 and TIMESTAMP_=:p3",
    PARAMS (p1=BEFORE.DBID_,p4=BEFORE.STATE_,p3=BEFORE.TIMESTAMP_), BEFOREFILTER, ERROR REPORT, TRACE ALL),
    FILTER (ON UPDATE, 0 <> tab394.c, RAISEERROR 21000);
    MAP user1.table1, TARGET user1.konflikt_gg, &
    EXCEPTIONSONLY, INSERTALLRECORDS, EVENTACTIONS (STOP), &
    COLMAP (konflikt_type = "UPDATE KONFLIKT OPPDAGET", eier = "USER1", tabell = "TABLE1", &
    dato = @DATENOW(), pk = @STRCAT(DBID_), &
    beskrivelse = @STRCAT("DBID_=", BEFORE.DBID_, ">", DBID_, ", ", "STATE_=", BEFORE.STATE_, ">", STATE_, ", ", "TIMESTAMP_=", BEFORE.TIMESTAMP_, ">", TIMESTAMP_));
    ...repeated for each of the 400 tables.
    Thanks for Your help!
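    For 400 structurally similar tables, a parameterized macro can cut the repetition to one invocation per table. This is a sketch only: the macro, parameter, and SQLEXEC ID names are invented, and you should verify in your GoldenGate release that macro parameters are substituted inside the quoted QUERY string before relying on this:

```
MACRO #detect_conflict
PARAMS (#owner, #tab, #pk, #ts)
BEGIN
MAP #owner.#tab, TARGET #owner.#tab, &
SQLEXEC (ID chk#tab, ON UPDATE, &
QUERY "select count(*) c from #owner.#tab where #pk = :p1 and #ts = :p2", &
PARAMS (p1 = BEFORE.#pk, p2 = BEFORE.#ts), BEFOREFILTER, ERROR REPORT), &
FILTER (ON UPDATE, chk#tab.c <> 0, RAISEERROR 21000);
END;

-- one line per table instead of a full MAP block
#detect_conflict (user1, table1, DBID_, TIMESTAMP_)
```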

    Hi,
    I followed your suggestion, but I still have the same problem. I think I made a mistake in the configuration parameters.
    I will now post the ext1, dpump, and rep1 parameters I used on the source and destination.
    Please correct me where I went wrong; this is very important to me.
    Source Hostname : node11
    Destination Hostname : node12
    Source side:
    mgr configuration file:
    PORT 7809
    USERID ggs_owner, PASSWORD ggs_owner
    PURGEOLDEXTRACTS /u01/app/oracle/product/gg/dirdat/ex, USECHECKPOINTS
    ext1 configuration file (I originally had a DDL INCLUDE MAPPED line before the last line):
    EXTRACT ext1
    USERID ggs_owner, PASSWORD ggs_owner
    EXTTRAIL /u01/app/oracle/product/gg/dirdat/lt
    TABLE scott.emp;
    I removed that DDL line and tried again, but I don't know why it's not working.
    dpump configuration file:
    EXTRACT dpump
    PASSTHRU
    RMTHOST node12 MGRPORT 7809
    RMTTRAIL /u01/app/oracle/product/gg/dirdat/rt
    TABLE scott.emp;
    In this file I also tried putting a comma between node12 and MGRPORT; when I added the comma, the dpump extract went straight to the ABENDED state.
    I don't know why it goes to that state; if I made any mistake in the configuration, please correct me.
    Destination side:
    mgr configuration file:
    PORT 7809
    USERID ggs_owner, PASSWORD ggs_owner
    PURGEOLDEXTRACTS /u01/app/oracle/product/dirdat/ex, USECHECKPOINTS
    replicat rep1 configuration file:
    REPLICAT rep1
    ASSUMETARGETDEFS
    USERID ggs_owner, PASSWORD ggs_owner
    DDL INCLUDE MAPPED
    MAP scott.emp, TARGET scott.emp;
    Source side:
    SQL> conn scott/tiger
    Connected.
    SQL> alter table emp add new_col varchar2(2);
    Table altered.
    SQL> desc emp;
    Name Null? Type
    EMPNO NOT NULL NUMBER(4)
    ENAME VARCHAR2(10)
    JOB VARCHAR2(9)
    MGR NUMBER(4)
    HIREDATE DATE
    SAL NUMBER(7,2)
    COMM NUMBER(7,2)
    DEPTNO NUMBER(2)
    NEW_COL VARCHAR2(2)
    Destination side:
    SQL> conn scott/tiger
    Connected.
    SQL> desc emp;
    Name Null? Type
    ----------------------------------------- -------- ----------------------------
    EMPNO NOT NULL NUMBER(4)
    ENAME VARCHAR2(10)
    JOB VARCHAR2(9)
    MGR NUMBER(4)
    HIREDATE DATE
    SAL NUMBER(7,2)
    COMM NUMBER(7,2)
    DEPTNO NUMBER(2)
    It is still not working. Please tell me where I made a mistake in the above configuration files, and send a solution as soon as possible.
    Regards,
    Regards,
    Ram
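    For the new column's DDL to replicate at all, DDL capture has to stay enabled on the source extract, not just on the replicat. A sketch, assuming the GoldenGate DDL support objects (ddl_setup.sql, role_setup.sql, ddl_enable.sql) have already been installed on the source database; note also that the documented RMTHOST syntax does use a comma before MGRPORT, so if the pump abends with the comma in place, check its report file for the actual error:

```
-- ext1.prm: keep DDL capture enabled on the source
EXTRACT ext1
USERID ggs_owner, PASSWORD ggs_owner
DDL INCLUDE MAPPED
EXTTRAIL /u01/app/oracle/product/gg/dirdat/lt
TABLE scott.emp;

-- dpump.prm: comma before MGRPORT per the documented syntax
EXTRACT dpump
PASSTHRU
RMTHOST node12, MGRPORT 7809
RMTTRAIL /u01/app/oracle/product/gg/dirdat/rt
TABLE scott.emp;
```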

  • Error loading: target table doesn't exist in the target schema

    I am getting the following error when loading data to the
    target table: Connection failed to HR_TGT.orders, ORA-00942
    'table or view does not exist'.
    The table exists, and the ODI super user has all privileges
    over the tables in HR_TGT.
    What could be wrong with my configuration?

    I am using the original Oracle Integration KM.
    My replicat no longer starts.
    Here are my configuration files:
    Source Files
    ODISC.prm
    extract ODISC
    userid OGATE, password OGATE
    exttrail C:\GGSRC/dirdat/ODISoc/oc
    TABLE HR_SRC.ORDER_LINES;
    ODISD.prm
    defsfile C:\GGSRC/dirdef/ODISC.def, purge
    userid OGATE, password OGATE
    TABLE HR_SRC.ORDER_LINES;
    Target Files
    C:\ggstg\diroby\ODIT1T.oby
    dblogin userid OGATE, password OGATE
    add checkpointtable ODIW.ODIOGGCKPT
    add replicat ODIT1A1, exttrail C:\GGSTG/dirdat/ODIT1op/op, checkpointtable ODIW.ODIOGGCKPT
    stop replicat ODIT1A1
    start replicat ODIT1A1
    C:\ggstg\dirprm\ODIT1A1.prm
    replicat ODIT1A1
    userid OGATE, password OGATE
    discardfile C:\GGSTG/dirrpt/ODIT1.dsc, purge
    source defs C:\GGSTG/dirdef/ODISC.def
    map HR_SRC.ORDER_LINES, TARGET HR_STG.ORDER_LINES, KEYCOLS (ORDER_ID, PRODUCT_ID);
    map HR_SRC.ORDER_LINES, target ODIW.J$ORDER_LINES, KEYCOLS (ORDER_ID, PRODUCT_ID, WINDOW_ID), INSERTALLRECORDS, OVERRIDEDUPS,
    COLMAP (
    ORDER_ID = ORDER_ID,
    PRODUCT_ID = PRODUCT_ID,
    WINDOW_ID = @STRCAT(@GETENV("RECORD", "FILESEQNO"), @STRNUM(@GETENV("RECORD", "FILERBA"), RIGHTZERO, 10))
    I have deleted the extract ODISC and ODIT1P as well as the checkpoint table ODIW.ODIOGGCKPT,
    and recreated them.
    I did the same with the replicat ODIT1A1.
    The extracts have always started with little difficulty, but the replicat, once started, did not deliver the integrated data to
    the target.
    Now it is not starting at all.
    I have deleted and recreated it, but get the same results.
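    Two things worth checking in the replicat file above: the parameter is spelled `sourcedefs` (one word — the file shows `source defs`), and the second MAP's COLMAP is missing its closing parentheses and terminating semicolon. A corrected sketch, keeping the original object names, with continuation markers added for the multi-line MAP:

```
replicat ODIT1A1
userid OGATE, password OGATE
discardfile C:\GGSTG/dirrpt/ODIT1.dsc, purge
sourcedefs C:\GGSTG/dirdef/ODISC.def
map HR_SRC.ORDER_LINES, TARGET HR_STG.ORDER_LINES, KEYCOLS (ORDER_ID, PRODUCT_ID);
map HR_SRC.ORDER_LINES, target ODIW.J$ORDER_LINES, &
KEYCOLS (ORDER_ID, PRODUCT_ID, WINDOW_ID), INSERTALLRECORDS, OVERRIDEDUPS, &
COLMAP ( &
ORDER_ID = ORDER_ID, &
PRODUCT_ID = PRODUCT_ID, &
WINDOW_ID = @STRCAT (@GETENV ("RECORD", "FILESEQNO"), @STRNUM (@GETENV ("RECORD", "FILERBA"), RIGHTZERO, 10)));
```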

  • Log table in replicat

    Hi,
    I would like to log transactions that fail a replicat filter to a log table and then abend.
    When starting the replicat process again, I would like it to resume with the transaction that triggered the abend.
    I have tried the following, but the process restarts at the next transaction.
    How can I handle this?
    My code:
    replicat r_gtmgts
    userid goldengate, password goldengate
    assumetargetdefs
    discardfile ./dirrpt/r_gtmgts.dsc, append
    GETUPDATEBEFORES
    REPERROR (21000, EXCEPTION)
    MAP user1.table1, TARGET user1.table1,
    SQLEXEC (ID tab394, ON UPDATE,
    QUERY "select count(*) c from user1.table1 where 1=1 and DBID_=:p1 and STATE_=:p4 and TIMESTAMP_=:p3",
    PARAMS (p1=BEFORE.DBID_,p4=BEFORE.STATE_,p3=BEFORE.TIMESTAMP_), BEFOREFILTER, ERROR REPORT, TRACE ALL),
    FILTER (ON UPDATE, 0 <> tab394.c, RAISEERROR 21000);
    MAP user1.table1, TARGET user1.konflikt_gg, &
    EXCEPTIONSONLY, INSERTALLRECORDS, EVENTACTIONS (STOP), &
    COLMAP (konflikt_type = "UPDATE KONFLIKT OPPDAGET", eier = "USER1", tabell = "TABLE1", &
    dato = @DATENOW(), pk = @STRCAT(DBID_), &
    beskrivelse = @STRCAT("DBID_=", BEFORE.DBID_, ">", DBID_, ", ", "STATE_=", BEFORE.STATE_, ">", STATE_, ", ", "TIMESTAMP_=", BEFORE.TIMESTAMP_, ">", TIMESTAMP_));
    Thanks for Your help!

    Duplicate post.
    See:
    How do I implement a log table for transaction initiating an abend?
