Triggers in Target Table

Hi,
How can I avoid a complete failure when executing the scenario of an interface whose destination table has a trigger? The trigger has a rule that validates the input on each insertion.
Thanks,

Well, the idea is to use an Oracle cursor. I actually used the IKM Oracle Incremental Update (PL SQL) because it already has the cursor-based insert implemented. All we need to do is add exception handling to it:
/* Use of a PL/SQL block to perform the insert (management of LONGs and LOBs, etc.) */
declare
     cursor myCursor is
     select    <%=odiRef.getColList("", "[COL_NAME]", ",\n\t", "", "((INS AND (NOT TRG)) AND REW)")%>
          <%=odiRef.getColList(",", "[EXPRESSION]\t[COL_NAME]", ",\n\t", "", "((INS AND TRG) AND REW)")%>
     from      <%=odiRef.getTable("L","INT_NAME","W")%>
     where     IND_UPDATE = 'I';
begin
     /* Loop over the cursor and execute one insert statement per record */
     for aRecord in myCursor loop
          begin
               insert into <%=odiRef.getTable("L","TARG_NAME","A")%>
               (
                    <%=odiRef.getColList("", "[COL_NAME]", ",\n\t\t\t", "", "((INS AND (NOT TRG)) AND REW)")%>
                    <%=odiRef.getColList(",", "[COL_NAME]", ",\n\t\t\t", "", "((INS AND TRG) AND REW)")%>
               )
               values
               (
                    <%=odiRef.getColList("", "aRecord.[COL_NAME]", ", \n\t\t\t", "", "((INS AND (NOT TRG)) AND REW)")%>
                    <%=odiRef.getColList(",", "aRecord.[COL_NAME]", ", \n\t\t\t", "", "((INS AND TRG) AND REW)")%>
               );
          exception
               when others then
                    /* the row is skipped; store its key for later analysis */
                    insert into <error table name with schema> values (aRecord.<primary key of the rejected record>);
          end;
     end loop;
end;
Please note, I used this only for error analysis; as soon as I figured out what was happening, I designed a set-based error isolation procedure. So my exception handling just stored the primary keys of the rejected rows. You are welcome to process your exceptions according to your own needs.
By the way, if your errors are raised by a trigger, you will usually find the message the trigger raises inside the ODI error message for the failed step.
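
For illustration, here is a minimal standalone sketch of the same pattern outside ODI. The table names (stg_rows, trg_rows, err_rejected_rows) and the single numeric key are stand-ins, not part of the KM above:

create table trg_rows (id number primary key, val varchar2(100));
create table stg_rows (id number, val varchar2(100));
create table err_rejected_rows (
     pk_value   number,
     err_code   number,
     err_msg    varchar2(4000),
     err_time   date default sysdate
);

begin
     for r in (select id, val from stg_rows) loop
          begin
               insert into trg_rows (id, val) values (r.id, r.val);
          exception
               when others then
                    /* SQLCODE/SQLERRM must be captured into variables
                       before being used inside a SQL statement */
                    declare
                         l_code number         := sqlcode;
                         l_msg  varchar2(4000) := substr(sqlerrm, 1, 4000);
                    begin
                         insert into err_rejected_rows (pk_value, err_code, err_msg)
                         values (r.id, l_code, l_msg);
                    end;
          end;
     end loop;
     commit;
end;

Storing the error message alongside the key saves a round of debugging when the trigger raises several different errors.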

Similar Messages

  • Loading multiple files into a target table in parallel

    Hi
    We are building an interface to load flat files into an Oracle table. The input can be more than one file, all of the same format. I have created a variable for the flat file datastore, and I have created two packages.
    In package 1
    I have put OdiFileWait to trigger whenever files come into the directory. The list of files is written to a metadata table. A procedure is triggered which picks a record from the metadata table and calls the scenario for package 2, passing the selected record as a parameter.
    In package 2
    The variable value is set to the parameter passed from the package 1 procedure.
    The interface runs and loads the flat file named by the variable value into the target table.
    Now if the directory has more than one flat file, the procedure in package 1 calls the package 2 scenario more than once. This causes a sequential load from each flat file into the target table. Is there a way to load multiple flat files in parallel into the same target table? At this point I am not sure if this is possible, as there is only one interface and one variable for the source file.

    As per your requirement, your process seems to run continuously, like a loop (reason being, as you mentioned: "I don't want to wait for all files to come in. As soon as a file comes in, it has to be loaded.").
    You can solve the file-capture part with OdiFileWait, which lets you know when the files have arrived.
    Tell me this: what do you plan to do when the file loading is complete? Will the files stay in the same folder, and when new files come, will they arrive with the same name or a different one? Because, whether the file is the same or different, if files remain in the folder after loading into the target, we might load them repeatedly. Please tell me how you plan to handle this.
    Since you plan to use LKM File to SQL, what is the average number of source records in each file?
    Just to add to the above: what is the average number of parallel loads you are expecting, and what time interval do you expect between them?

  • Is it possible to run an update via target tables post load command

    Is it possible to run a SQL update statement via the post-load command of a target table?
    What exactly does "post-load command" mean? Does it get triggered only after a successful execution of a dataflow (into the target table)?
    Or can it also get triggered even if the dataflow fails?
    This is the SQL I would like to run via the post-load command, on dataflow success only:
    update tbl_job_status set end_time = nvl(NULL, sysdate()) where job_id = (select max(job_id) from tbl_job_status where j_dataflowname = 'Chk_Order_Delta')

    I do not believe it will necessarily run if the df fails. If you want your job-status table updated no matter what (apart from a complete DS meltdown), you'll want to wrap things in a try-catch to make sure you're handling the update. You can do it in a script, or with a dataflow. A nice way to handle all this sort of operational-metadata tracking is to write yourself a library of custom functions and then consistently call them fore & aft of the jobs, workflows, or whatever components you want to track. If you've got access to a Rapid Mart, you might look at what they've done there -- nice stuff (although you can't just "borrow" it).
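
    The thread is about a Data Services script, but the guarantee described above can be sketched in database-side PL/SQL. Note that nvl(NULL, sysdate()) always evaluates to sysdate(). Here tbl_job_status is the questioner's table, and do_the_load is a hypothetical stand-in for the dataflow's work:
    declare
         procedure close_job(p_dataflow in varchar2) is
         begin
              update tbl_job_status
                 set end_time = sysdate
               where job_id = (select max(job_id)
                                 from tbl_job_status
                                where j_dataflowname = p_dataflow);
              commit;
         end close_job;
         procedure do_the_load is
         begin
              null; /* hypothetical stand-in for the real work */
         end do_the_load;
    begin
         do_the_load;
         close_job('Chk_Order_Delta');      /* normal path */
    exception
         when others then
              close_job('Chk_Order_Delta'); /* end time is still recorded on failure */
              raise;
    end;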

  • Cannot load CLOB data over 32k into target table

    SQL> DESC testmon1;
    Name                 Null?     Type
    FILENAME                       VARCHAR2(200)
    SCANSTARTTIME                  VARCHAR2(50)
    SCANENDTIME                    VARCHAR2(50)
    JOBID                          VARCHAR2(50)
    SCANNAME                       VARCHAR2(200)
    SCANTYPE                       VARCHAR2(200)
    FAULTLINEID                    VARCHAR2(50)
    RISK                           VARCHAR2(5)
    VULNNAME                       VARCHAR2(2000)
    CVE                            VARCHAR2(200)
    DESCRIPTION                    CLOB
    OBSERVATION                    CLOB
    RECOMMENDATION                 CLOB
    SQL> DESC test_target;
    Name                 Null?     Type
    LOCALID              NOT NULL  NUMBER
    DESCRIPTION          NOT NULL  CLOB
    SCANTYPE             NOT NULL  VARCHAR2(12)
    RISK                 NOT NULL  VARCHAR2(6)
    TIMESTAMP            NOT NULL  DATE
    VULNERABILITY_NAME   NOT NULL  VARCHAR2(2000)
    CVE_ID                         VARCHAR2(200)
    BUGTRAQ_ID                     VARCHAR2(200)
    ORIGINAL                       VARCHAR2(50)
    RECOMMEND                      CLOB
    VERSION                        VARCHAR2(15)
    FAMILY                         VARCHAR2(15)
    XREF                           VARCHAR2(15)
    create or replace PROCEDURE proc1 AS
         CURSOR C1 IS
              SELECT FAULTLINEID, VULNNAME, scanstarttime, risk,
                     dbms_lob.substr(DESCRIPTION, dbms_lob.getlength(DESCRIPTION), 1) "DESCR",
                     dbms_lob.substr(OBSERVATION, dbms_lob.getlength(OBSERVATION), 1) "OBS",
                     dbms_lob.substr(RECOMMENDATION) "REC",
                     CVE
              FROM testmon1;
         c_rec C1%ROWTYPE;
         descobs clob;
         FSCAN_VULN_TRANS_REC VULN_TRANSFORM_STG%ROWTYPE;
         TIMESTAMP varchar2(50);
         riskval varchar2(10);
         pCTX PLOG.LOG_CTX := PLOG.init(pSECTION => 'foundscanVuln Procedure',
                                        pLEVEL => PLOG.LDEBUG,
                                        pLOG4J => TRUE,
                                        pLOGTABLE => TRUE,
                                        pOUT_TRANS => TRUE,
                                        pALERT => TRUE,
                                        pTRACE => TRUE,
                                        pDBMS_OUTPUT => TRUE);
         amount number;
         buffer varchar2(32000);
    BEGIN
         /* initialize the locator for the CLOB data type */
         select observation into descobs from testmon1 where rownum = 1;
         OPEN C1;
         loop
              fetch C1 INTO c_rec;
              exit when C1%NOTFOUND;
              /* load the DESCRIPTION field from the cursor and write it to the CLOB locator descobs */
              dbms_lob.write(descobs, dbms_lob.getlength(c_rec.DESCR), 1, c_rec.DESCR);
              /* append the OBSERVATION field from the cursor to the CLOB locator descobs */
              dbms_lob.writeappend(descobs, dbms_lob.getlength(c_rec.OBS), c_rec.OBS);
              -- dbms_output.put_line('the timestamp is :' || c_rec.scanstarttime);
              -- dbms_lob.write(descobs, amount, 1, buffer);
              descobs := c_rec.OBS;
              -- dbms_lob.read(descobs, amount, 1, buffer);
              -- dbms_lob.append(c_rec.DESCR, c_rec.OBS);
              -- descobs := c_rec.OBS;
              -- dbms_output.put_line('the ADDED DESCROBS is :' || dbms_lob.substr(c_rec.DESCR, dbms_lob.getlength(c_rec.DESCR), 1));
              dbms_output.put_line('the ADDED DESCRIPTION AND OBSERVATION is :' || descobs);
              -- dbms_output.put_line('the DESCROBS buffer is :' || buffer);
              SELECT DESCRIPTION INTO FSCAN_VULN_TRANS_REC.DESCRIPTION
              FROM TESTMON1 WHERE ROWNUM = 1;
              /* load the DESCRIPTION + OBSERVATION value into the target table description */
              DBMS_LOB.WRITE(FSCAN_VULN_TRANS_REC.DESCRIPTION, dbms_lob.getlength(descobs), 1, descobs);
              TIMESTAMP := substr(c_rec.scanstarttime, 1, 10) || ' ' || substr(c_rec.scanstarttime, 12, 8);
              IF c_rec.risk < 3 THEN
                   riskval := 'Low';
              ELSIF c_rec.risk < 6 THEN
                   riskval := 'Medium';
              ELSIF c_rec.risk < 10 THEN
                   riskval := 'High';
              END IF;
              FSCAN_VULN_TRANS_REC.TIMESTAMP := TO_DATE(TIMESTAMP, 'YYYY/MM/DD HH24:MI:SS');
              FSCAN_VULN_TRANS_REC.risk := riskval;
              -- dbms_lob.append(c_rec.DESCR, c_rec.OBS);
              FSCAN_VULN_TRANS_REC.DESCRIPTION := c_rec.DESCR;
              FSCAN_VULN_TRANS_REC.RECOMMEND := c_rec.REC;
              FSCAN_VULN_TRANS_REC.LocalID := to_number(c_rec.FAULTLINEID);
              FSCAN_VULN_TRANS_REC.SCANTYPE := 'FOUNDSCAN';
              FSCAN_VULN_TRANS_REC.CVE_ID := c_rec.CVE;
              FSCAN_VULN_TRANS_REC.VULNERABILITY_NAME := c_rec.VULNNAME;
              -- dbms_output.put_line('the plog timestamp is :' || timestamp);
              -- dbms_output.put_line('the timestamp is :' || riskval);
              -- dbms_output.put_line('the recommend is :' || FSCAN_VULN_TRANS_REC.RECOMMEND);
              -- dbms_output.put_line('the app desc is :' || FSCAN_VULN_TRANS_REC.DESCRIPTION);
              insert into test_target values FSCAN_VULN_TRANS_REC;
         end loop;
         close C1;
         commit;
    EXCEPTION
         WHEN OTHERS THEN
              null; -- dbms_output.put_line(sqlcode || ':' || sqlerrm);
    end proc1;
    Using the dbms_lob package is not helping: either the DB stops responding, or the OBSERVATION field (which has a maximum length > 300000) cannot be loaded into a CLOB variable.
    Please help, or give me sample code that works.

    select
         BANKING_INSTITUTION.BANK_REF_CODE     C1_BANK_ID,
         BANKING_INSTITUTION.NAME_BANK     C2_BANK_NAME,
         BANKING_INSTITUTION.BANK_NUMBER     C3_BANK_NUMBER,
         BANKING_INSTITUTION.ISO_CODE     C4_GBA_CODE,
         BANKING_INSTITUTION.STATUS     C5_STATUS,
         BANKING_INSTITUTION.SOURCE     C6_SOURCE,
         BANKING_INSTITUTION.START_DATE_BANK     C7_START_DATE,
         BANKING_INSTITUTION.ADDRESS_BANK     C8_BANK_ADDRESS1
    from     REF_DATA_DB.BANKING_INSTITUTION BANKING_INSTITUTION
    where     (1=1)

    insert /*+ append */ into XXSVB.C$_0XXSVB_BANKS_STAGING
    (
         C1_BANK_ID,
         C2_BANK_NAME,
         C3_BANK_NUMBER,
         C4_GBA_CODE,
         C5_STATUS,
         C6_SOURCE,
         C7_START_DATE,
         C8_BANK_ADDRESS1
    )
    values
    (
         :C1_BANK_ID,
         :C2_BANK_NAME,
         :C3_BANK_NUMBER,
         :C4_GBA_CODE,
         :C5_STATUS,
         :C6_SOURCE,
         :C7_START_DATE,
         :C8_BANK_ADDRESS1
    )
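
    The core limitation in the question's approach: dbms_lob.substr returns a varchar2, which in PL/SQL cannot exceed 32,767 bytes, so a 300,000-character OBSERVATION can never pass through it. A minimal sketch of the alternative, assuming testmon1 as described above: build the combined value in a temporary CLOB with dbms_lob.append, which has no such cap.
    declare
         l_descobs clob;
    begin
         dbms_lob.createtemporary(l_descobs, true);
         for r in (select faultlineid, description, observation from testmon1) loop
              dbms_lob.trim(l_descobs, 0); /* reset the work LOB for this row */
              if r.description is not null then
                   dbms_lob.append(l_descobs, r.description); /* CLOB-to-CLOB, no 32k cap */
              end if;
              if r.observation is not null then
                   dbms_lob.append(l_descobs, r.observation);
              end if;
              /* l_descobs now holds DESCRIPTION followed by OBSERVATION and can be
                 used directly as the DESCRIPTION value when inserting into test_target */
              dbms_output.put_line('row ' || r.faultlineid || ' combined length: ' || dbms_lob.getlength(l_descobs));
         end loop;
         dbms_lob.freetemporary(l_descobs);
    end;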

  • Null value in target table

    Hi All
    I am using ODI 10.1.13
    I have declared the target datastore
    and mapped the target datastore to the source in the interface.
    But when ODI creates the target table, all the columns are created accepting NULLs,
    whereas I have declared some as NULL and some as NOT NULL.
    When I checked the "Create target table" step in the IKM Oracle Incremental Update (MERGE),
    the code is:
    create table <%=odiRef.getTable("L", "TARG_NAME", "A")%>
    (
         <%=odiRef.getTargetColList("", "[COL_NAME]\t[DEST_CRE_DT] NULL", ",\n\t", "")%>
    )
    due to which the target table is created with every column having the NULL property.
    But I want the columns to have the properties declared in the datastore, e.g. some columns NULL and some NOT NULL.
    Please help.
    Gourisankar

    In the IKM Merge, change the hard-coded NULL to + odiRef.getInfo("DEST_DDL_NULL"), so you finally end up with something like this: <%=odiRef.getTargetColList("", "[COL_NAME]\t[DEST_CRE_DT] " + odiRef.getInfo("DEST_DDL_NULL"), ",\n\t", "")%>. For bulk inserts, try IKM SQL Control Append.

  • Table compare: deleting rows from the target table which no longer exist in the source

    Hi Gurus,
    I am struggling with an issue in Data Services.
    I have a job which uses the Table Comparison, History Preserving, and Key Generation transforms.
    There is every possibility that data would get deleted from the source table.
    Now, I want to delete them from the target table also.
    I tried Detect deleted rows but it is not working.
    Could some one please help me on this issue.
    Thanks,
    Raviteja.

    Doesn't History Preserving really only operate on "Update" rows? Wouldn't it only process the deletes if you turned "Preserve delete row(s) as update row(s)" on?
    I would think that if you turned on "Detect deleted row(s)" in the Table Comparison and did not turn this on in History Preserving, it would retain those rows as delete rows and effectively remove them from the target.
    Preserve delete row(s) as update row(s):
    Converts DELETE rows to UPDATE rows in the target warehouse and, if you previously set effective date values (Valid from and Valid to), sets the Valid To value to the execution date. Use this option to maintain slowly changing dimensions by feeding a complete data set first through the Table Comparison transform with its "Detect deleted row(s) from comparison table" option selected.

  • How to load data into 3 different target tables using BODS?

    Hello Friends,
    I have 5 different source tables with the same field definitions. Now I want to load all the records into three or four different target tables (flat file, SQL Server, XML, and Oracle). Could anyone please tell me how to do this?
    Thanks in Advance,
    Bheem.

    Hello Bheem,
    You can create a separate dataflow for each target, as suggested by Bala; this is a good choice for your scenario.
    If you put all targets in the same dataflow you may experience problems when one of them is down (as you have different servers as targets).
    BODS will send the data simultaneously to all targets, and when one fails the load will break, so you may end up with the tables out of sync anyway (if you plan to use a single dataflow to avoid this situation, it won't do it).
    So the best option is to create a separate dataflow per target and put them inside another one, so you can run a single dataflow that calls each target; if one fails, the others will still complete accordingly.
    Then, if you put them inside error trapping, you can even make your dataflow retry the load before abending the job.
    Think about cascading your dataflows, and keep the leaf level as simple as it can be, so it will be easier to schedule and debug your process.
    Pay attention to datatypes and other conversions you might need when working with more than one source/target.
    Regards,

  • How to delete some data in the target table in a mapping?

    How do I delete some data in the target table in a mapping?
    I extract data from the source table into the target table,
    but before extracting I want to delete some data from the target.
    How do I do that?

    Just to change a bit of terminology in the reply: within the mapping, click on the operator properties and choose TRUNCATE/INSERT.
    Note that truncate is dependent on constraints, so you will probably have to disable those before doing this. You can of course do DELETE/INSERT...
    Jean-Pierre
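
    To illustrate the constraint dependency Jean-Pierre mentions (table and constraint names here are hypothetical): truncating a parent table fails with ORA-02266 while an enabled foreign key references it, so the referencing constraint has to be disabled first.
    alter table orders disable constraint fk_orders_customers; -- child FK pointing at the target
    truncate table customers;                                  -- would raise ORA-02266 otherwise
    alter table orders enable constraint fk_orders_customers;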

  • How do I create a target table with the same PK as the source table?

    I am trying to create a target table in a mapping that will end up with the same primary key as the source table.
    It is a simple map that uses a subset of the columns of the source table in the target table. I wanted to create and bind a new table by dragging the columns I want from the source to the initially blank target table operator, changing the column names, and creating a primary key to match the source table.
    I can't seem to create a constraint on the table in the mapping. I can create the constraint after the table is created and bound to the database object, but the PK doesn't carry back into the mapping.
    I need it in the mapping so I can use the UPDATE/INSERT operation with the 'All Constraints' implementation. The mapping won't let me validate the object without the PK on it in the map.
    Believe it or not folks, I am getting better at this.
    Thanks very much for the guidance.
    Gary

    Hi Gary
    You are close, you are really close... :-))
    You need to do exactly as you propose, plus one extra step. Build the map as you describe, binding the new table to the target. Then edit the table definition to add the primary key and any other constraints you need. After this comes the step that you are missing.
    You need to do the following:
    1. Go back and re-edit the map
    2. Right click on the table
    3. From the pop up menu, select Reconcile Inbound
    4. Set any operators that you need for the UPDATE/INSERT
    5. Save the map
    6. Commit your changes
    The first three steps above make the map read in the indexes and constraints that you set on the table. Finally, you need to deploy the table and then deploy the map.
    Hope this helps
    Regards
    Michael
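
    For reference, the constraint added in the table editor corresponds to plain DDL on the bound table (names here are hypothetical); Reconcile Inbound is what pulls it back into the mapping:
    alter table target_tab
         add constraint target_tab_pk primary key (id);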

  • How to get source table name according to target table

    Hi all,
    another question:
    once a map is created and deployed, the corresponding information is stored in the design repository and the runtime repository. My question is how to find the source table name for a given target table, and in which tables these records are recorded.
    Somebody help me please!
    Thanks a lot!

    This is a query that will get you the operators in a mapping. To get source and targets you will need some additional information but this should get you started:
    set pages 999
    col PROJECT format a20
    col MODULE format a20
    col MAPPING format a25
    col OPERATOR format a20
    col OP_TYPE format a15
    select mod.project_name PROJECT
    , map.information_system_name MODULE
    , map.map_name MAPPING
    , cmp.map_component_name OPERATOR
    , cmp.operator_type OP_TYPE
    from all_iv_xform_maps map
    , all_iv_modules mod
    , all_iv_xform_map_components cmp
    where mod.information_system_id = map.information_system_id
    and map.map_id = cmp.map_id
    and mod.project_name = '&Project'
    order by 1, 2, 3;
    Jean-Pierre

  • How to assign a default value and increment it in my target table?

    Hi,
    In my target table I want to increment a value and store it in the table.
    The increments should start from 5001, 5002, 5003, etc.
    How can I achieve this?
    I created a numeric variable, assigned the default as 5000, and set the action to Historize.
    In the target datastore I assigned VAR+1,
    but in the target column I got 5001 for every row.
    What is the best way to deal with variables here?
    Thanks
    AK

    Hi Gurusank and Micropole,
    Thanks for the reply and the info.
    I am trying out one of the solutions first.
    Gurusank,
    I have created a sequence; do I need to mention the schema, table name, and column as well? (I gave all the information.) I set the increment to 1 and saved it.
    Then I followed the procedure as you mentioned.
    In the target field I entered the same, and for that column I chose "target" for execution.
    Then I ran the interface directly.
    I got the following error:
    287 : 42000 : java.sql.SQLException: ORA-02287: sequence number not allowed here
    java.sql.SQLException: ORA-02287: sequence number not allowed here
         at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:125)
         at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:316)
         at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:282)
         at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:639)
         at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:185)
         at oracle.jdbc.driver.T4CPreparedStatement.execute_for_rows(T4CPreparedStatement.java:633)
         at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1086)
         at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:2984)
         at oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:3057)
         at com.sunopsis.sql.SnpsQuery.executeUpdate(SnpsQuery.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execStdOrders(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTaskTrt(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSqlI.treatTaskTrt(SnpSessTaskSqlI.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java)
         at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandSession.treatCommand(DwgCommandSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandBase.execute(DwgCommandBase.java)
         at com.sunopsis.dwg.cmd.e.i(e.java)
         at com.sunopsis.dwg.cmd.g.y(g.java)
         at com.sunopsis.dwg.cmd.e.run(e.java)
         at java.lang.Thread.run(Thread.java:534)
    Could you please let me know where I am making a mistake?
    Thanks
    AK
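
    For reference, ORA-02287 means NEXTVAL is being referenced in a context where Oracle does not allow sequences, e.g. in a WHERE clause or in a query using DISTINCT, GROUP BY, or ORDER BY (in ODI this often comes from the generated insert step when such clauses are present). It is legal in the VALUES clause or the top-level select list of a plain INSERT ... SELECT. A minimal sketch with hypothetical table names:
    create sequence seq_ak start with 5001 increment by 1;
    -- legal: NEXTVAL in the top-level select list of a plain INSERT ... SELECT
    insert into target_tab (id, col1)
    select seq_ak.nextval, col1 from source_tab;
    -- raises ORA-02287: the ORDER BY makes the sequence illegal here
    -- insert into target_tab (id, col1)
    -- select seq_ak.nextval, col1 from source_tab order by col1;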

  • Insert/select one million rows at a time from source to target table

    Hi,
    Oracle 10.2.0.4.0
    I am trying to insert around 10 million rows into table target from source as follows:
    INSERT /*+ APPEND NOLOGGING */ INTO target
    SELECT *
    FROM source f
    WHERE
            NOT EXISTS(SELECT 1 from target m WHERE f.col1 = m.col2 and f.col2 = m.col2);
    There is a unique index on the target table on (col1, col2).
    I was having issues with undo, and now I am getting the following error with temp space:
    ORA-01652: unable to extend temp segment by 64 in tablespace TEMP
    I believe it would be easier if I did a bulk insert of one million rows at a time and committed after each batch.
    I would appreciate any advice on this.
    Thanks,
    Ashok

    902986 wrote:
    NOT EXISTS(SELECT 1 from target m WHERE f.col1 = m.col2 and f.col2 = m.col2);
    I don't know if it has any bearing on the case, but is that WHERE clause on purpose or a typo? Should it be:
            NOT EXISTS(SELECT 1 from target m WHERE f.col1 = m.COL1 and f.col2 = m.col2);
    Anyway - how much of your data already exists in target compared to source?
    Do you have 10 million in source and very few in target, so most of source will be inserted into target?
    Or do you have 9 million already in target, so most of source will be filtered away and only few records inserted?
    And what is the explain plan for your statement?
    INSERT /*+ APPEND NOLOGGING */ INTO target
    SELECT *
    FROM source f
    WHERE
            NOT EXISTS(SELECT 1 from target m WHERE f.col1 = m.col2 and f.col2 = m.col2);
    As your error has to do with TEMP, your statement might possibly try to do a lot of work in temp to materialize the resultset or parts of it, maybe for use in a hash join, before inserting.
    So perhaps you can work towards an explain plan that allows the database to do the inserts "along the way" rather than calculate the whole thing in temp first.
    That probably will go much slower (for example using nested loops for each row to check the exists), but that's a tradeoff - if you can't have sufficient TEMP then you may have to optimize for less usage of that resource at the expense of another resource ;-)
    Alternatively ask your DBA to allocate more room in TEMP tablespace. Or have the DBA check if there are other sessions using a lot of TEMP in which case maybe you just have to make sure your session is the only one using lots of TEMP at the time you execute.
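
    If batched commits are still preferred after the above, one common shape is to cap each pass with ROWNUM and loop until nothing is left. A sketch only: it assumes the corrected join (f.col1 = m.col1), assumes source has no duplicate (col1, col2) pairs, and re-scans target on every pass, trading elapsed time for TEMP/undo:
    begin
         loop
              insert into target
              select *
                from source f
               where not exists (select 1
                                   from target m
                                  where f.col1 = m.col1
                                    and f.col2 = m.col2)
                 and rownum <= 1000000; -- cap each pass at one million rows
              exit when sql%rowcount = 0; -- nothing left to insert
              commit; -- release undo before the next pass
         end loop;
         commit;
    end;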

  • No primary key in target table; getting an error while creating an index

    Hi All,
    I don't have a primary key column in the target table, and while executing the mapping I got an error while creating an INDEX.
    Could you please help me solve this?

    Hi,
    That is a process definition issue.
    If you don't have a PK, then either:
    1) you don't execute updates,
    2) or you have an alternate key to update with.
    Case 1) just change the KM to IKM Control Append.
    Case 2) in the interface, go to each column of the alternate key and check it as a key (click the column and check the Key box at the bottom of the properties window).
    Does that work for you?

  • A question about transaction consistency between multible target tables

    Dear community,
    My replication is Oracle 11.2.3.7 -> Oracle 11.2.3.7, both running on Linux x64, and the GG version is 11.2.3.0.
    I'm recovering from an error that occurred when a trail file was moved away while the data pump was writing to it.
    After moving the file back, the data pump abended with an error:
    2013-12-17 11:45:06  ERROR   OGG-01031  There is a problem in network communication, a remote file problem, encryption keys for target and source do
    not match (if using ENCRYPT) or an unknown error. (Reply received is Expected 4 bytes, but got 0 bytes, in trail /u01/app/ggate/dirdat/RI002496, seqno 2496,
    reading record trailer token at RBA 12999993).
    I googled it and found no suitable solution except to try "alter extract <dpump>, etrollover".
    After rolling over the trail file, the replicat ended as expected with:
    REPLICAT START 1
    2013-12-17 17:56:03  WARNING OGG-01519  Waiting at EOF on input trail file /u01/app/ggate/dirdat/RI002496, which is not marked as complete;
    but succeeding trail file /u01/app/ggate/dirdat/RI002497 exists. If ALTER ETROLLOVER has been performed on source extract,
    ALTER EXTSEQNO must be performed on each corresponding downstream reader.
    So I've issued "alter replicat <repname>, extseqno 2497, extrba 0" but got the following error:
    REPLICAT START 2
    2013-12-17 18:02:48 WARNING OGG-00869 Aborting BATCHSQL transaction. Detected inconsistent result:
    executed 50 operations in batch, resulting in 47 affected rows.
    2013-12-17 18:02:48  WARNING OGG-01137  BATCHSQL suspended, continuing in normal mode.
    2013-12-17 18:02:48  WARNING OGG-01003  Repositioning to rba 1149 in seqno 2497.
    2013-12-17 18:02:48 WARNING OGG-01004 Aborted grouped transaction on 'M.CLIENT_REG', Database error
    1403 (OCI Error ORA-01403: no data found, SQL <UPDATE "M"."CLIENT_REG" SET "CLIENT_CODE" =
    :a1,"CORE_CODE" = :a2,"CP_CODE" = :a3,"IS_LOCKED" = :a4,"BUY_SUMMA" = :a5,"BUY_CHECK_CNT" =
    :a6,"BUY_CHECK_LIST_CNT" = :a7,"BUY_LAST_DATE" = :a8 WHERE "CODE" = :b0>).
    2013-12-17 18:02:48  WARNING OGG-01003  Repositioning to rba 1149 in seqno 2497.
    2013-12-17 18:02:48 WARNING OGG-01154 SQL error 1 mapping LS.CHECK to M.CHECK OCI Error ORA-00001:
    unique constraint (M.CHECK_PK) violated (status = 1). INSERT INTO "M"."CHECK"
    ("CODE","STATE","IDENT_TYPE","IDENT","CLIENT_REG_CODE","SHOP","BOX","NUM","KIND","KIND_ORDER","DAT","SUMMA","LIST_COUNT","RETURN_SELL_CHECK_CODE","RETURN_SELL_SHOP","RETURN_SELL_BOX","RETURN_SELL_NUM","RETURN_SELL_KIND","INSERTED","UPDATED","REMARKS")
    VALUES
    (:a0,:a1,:a2,:a3,:a4,:a5,:a6,:a7,:a8,:a9,:a10,:a11,:a12,:a13,:a14,:a15,:a16,:a17,:a18,:a19,:a20).
    2013-12-17 18:02:48  WARNING OGG-01003  Repositioning to rba 1149 in seqno 2497.
    The report stated the following:
    Reading /u01/app/ggate/dirdat/RI002497, current RBA 1149, 0 records
    Report at 2013-12-17 18:02:48 (activity since 2013-12-17 18:02:46)
    From Table LS.MK_CHECK to LSGG.MK_CHECK:
           #                   inserts:         0
           #                   updates:         0
           #                   deletes:         0
           #                  discards:         1
    From Table LS.MK_CHECK to LSGG.TL_MK_CHECK:
           #                   inserts:         0
           #                   updates:         0
           #                   deletes:         0
           #                  discards:         1
    At that point I came to the conclusion that using etrollover was not a good idea. Nevertheless, I had to load my data to perform a consistency check.
    My mapping templates are set up as follows:
    LS.CHECK->M.CHECK
    LS.CHECK->M.TL_CHECK
    (such mapping is set up for every table that is replicated).
    TL_CHECK is a transaction log, as I call it,
    and this peculiar mapping is as follows:
    ignoreupdatebefores
    map LS.CHECK, target M.CHECK, nohandlecollisions;
    ignoreupdatebefores
    map LS.CHECK, target M.TL_CHECK ,colmap(USEDEFAULTS,
    FILESEQNO = @GETENV ("RECORD", "FILESEQNO"),
    FILERBA = @GETENV ("RECORD", "FILERBA"),
    COMMIT_TS = @GETENV( "GGHEADER", "COMMITTIMESTAMP" ),
    FILEOP = @GETENV ("GGHEADER","OPTYPE"), CSCN = @TOKEN("TKN-CSN"),
    RSID = @TOKEN("TKN-RSN"),
    OLD_CODE = before.CODE
    , OLD_STATE = before.STATE
    , OLD_IDENT_TYPE = before.IDENT_TYPE
    , OLD_IDENT = before.IDENT
    , OLD_CLIENT_REG_CODE = before.CLIENT_REG_CODE
    , OLD_SHOP = before.SHOP
    , OLD_BOX = before.BOX
    , OLD_NUM = before.NUM
    , OLD_NUM_VIRT = before.NUM_VIRT
    , OLD_KIND = before.KIND
    , OLD_KIND_ORDER = before.KIND_ORDER
    , OLD_DAT = before.DAT
    , OLD_SUMMA = before.SUMMA
    , OLD_LIST_COUNT = before.LIST_COUNT
    , OLD_RETURN_SELL_CHECK_CODE = before.RETURN_SELL_CHECK_CODE
    , OLD_RETURN_SELL_SHOP = before.RETURN_SELL_SHOP
    , OLD_RETURN_SELL_BOX = before.RETURN_SELL_BOX
    , OLD_RETURN_SELL_NUM = before.RETURN_SELL_NUM
    , OLD_RETURN_SELL_KIND = before.RETURN_SELL_KIND
    , OLD_INSERTED = before.INSERTED
    , OLD_UPDATED = before.UPDATED
    , OLD_REMARKS = before.REMARKS), nohandlecollisions, insertallrecords;
    As the PK violation fired for CHECK, I changed nohandlecollisions to handlecollisions for the LS.CHECK->M.CHECK mapping and restarted the replicat.
    To my surprise it ended with the following error:
    REPLICAT START 3
    2013-12-17 18:05:55 WARNING OGG-00869 Aborting BATCHSQL transaction. Database error 1 (ORA-00001:
    unique constraint (M.CHECK_PK) violated).
    2013-12-17 18:05:55 WARNING OGG-01137 BATCHSQL suspended, continuing in normal mode.
    2013-12-17 18:05:55 WARNING OGG-01003 Repositioning to rba 1149 in seqno 2497.
    2013-12-17 18:05:55 WARNING OGG-00869 OCI Error ORA-00001: unique constraint (M.PK_TL_CHECK)
    violated (status = 1). INSERT INTO "M"."TL_CHECK"
    ("FILESEQNO","FILERBA","FILEOP","COMMIT_TS","CSCN","RSID","CODE","STATE","IDENT_TYPE","IDENT","CLIENT_REG_CODE","SHOP","BOX","NUM","KIND","KIND_ORDER","DAT","SUMMA","LIST_COUNT","RETURN_SELL_CHECK_CODE","RETURN_SELL_SHOP","RETURN_SELL_BOX","RETURN_SELL_NUM","RETURN_SELL_KIND","INSERTED","UPDATED","REMARKS")
    VALUES
    (:a0,:a1,:a2,:a3,:a4,:a5,:a6,:a7,:a8,:a9,:a10,:a11,:a12,:a13,:a14,:a15,:a16,:a17,:a18,:a19,:a20,:a21,:a22,:a23,:a24,:a25,:a26).
    2013-12-17 18:05:55 WARNING OGG-01004 Aborted grouped transaction on 'M.TL_CHECK', Database error 1
    (OCI Error ORA-00001: unique constraint (M.PK_TL_CHECK) violated (status = 1). INSERT INTO
    "M"."TL_CHECK"
    ("FILESEQNO","FILERBA","FILEOP","COMMIT_TS","CSCN","RSID","CODE","STATE","IDENT_TYPE","IDENT","CLIENT_REG_CODE","SHOP","BOX","NUM","KIND","KIND_ORDER","DAT","SUMMA","LIST_COUNT","RETURN_SELL_CHECK_CODE","RETURN_SELL_SHOP","RETURN_SELL_BOX","RETURN_SELL_NUM","RETURN_SELL_KIND","INSERTED","UPDATED","REMARKS")
    VALUES
    (:a0,:a1,:a2,:a3,:a4,:a5,:a6,:a7,:a8,:a9,:a10,:a11,:a12,:a13,:a14,:a15,:a16,:a17,:a18,:a19,:a20,:a21,:a22,:a23,:a24,:a25,:a26)).
    2013-12-17 18:05:55  WARNING OGG-01003  Repositioning to rba 1149 in seqno 2497.
    2013-12-17 18:05:55 WARNING OGG-01154 SQL error 1 mapping LS.CHECK to M.TL_CHECK OCI Error
    ORA-00001: unique constraint (M.PK_TL_CHECK) violated (status = 1). INSERT INTO "M"."TL_CHECK"
    ("FILESEQNO","FILERBA","FILEOP","COMMIT_TS","CSCN","RSID","CODE","STATE","IDENT_TYPE","IDENT","CLIENT_REG_CODE","SHOP","BOX","NUM","KIND","KIND_ORDER","DAT","SUMMA","LIST_COUNT","RETURN_SELL_CHECK_CODE","RETURN_SELL_SHOP","RETURN_SELL_BOX","RETURN_SELL_NUM","RETURN_SELL_KIND","INSERTED","UPDATED","REMARKS")
    VALUES
    (:a0,:a1,:a2,:a3,:a4,:a5,:a6,:a7,:a8,:a9,:a10,:a11,:a12,:a13,:a14,:a15,:a16,:a17,:a18,:a19,:a20,:a21,:a22,:a23,:a24,:a25,:a26).
    2013-12-17 18:05:55  WARNING OGG-01003  Repositioning to rba 1149 in seqno 2497.
    I expected that BATCHSQL would fail, since it does not support HANDLECOLLISIONS, but I really don't understand why any record was inserted into TL_CHECK and caused a PK violation: I thought that GG guarantees transactional consistency, and that any transaction causing an error in ANY of the target tables would be rolled back for EVERY target table.
    TL_CHECK has its PK set to (FILESEQNO, FILERBA), plus I have a special column that captures the replication run number, and it clearly shows that the record causing the PK violation was inserted during the previous run (REPLICAT START 2).
    By the way, the report for the last run shows:
    Reading /u01/app/ggate/dirdat/RI002497, current RBA 1149, 1 records
    Report at 2013-12-17 18:05:55 (activity since 2013-12-17 18:05:54)
    From Table LS.MK_CHECK to LSGG.MK_CHECK:
           #                   inserts:         0
           #                   updates:         0
           #                   deletes:         0
           #                  discards:         1
    From Table LS.MK_CHECK to LSGG.TL_MK_CHECK:
           #                   inserts:         0
           #                   updates:         0
           #                   deletes:         0
           #                  discards:         1
    So can somebody explain how that could happen?

    Write the query of the existing table in the form of a function with PRAGMA AUTONOMOUS_TRANSACTION.
    Examples here:
    http://www.morganslilbrary.org/library.html

  • Can someone please help me: how to create a fake (dummy) target table in DAC

    Hi There,
    Can someone please help me create a fake (dummy) target table in DAC?
    Thanks,
    Raghu

    Raghu
    You need to create a task so that you can hook sources and targets together.
    First: Design --> Tables tab --> New --> below 'Edit' --> set Table Type (Source) for your source tables.
    Then: Design --> Tables tab --> New --> below 'Edit' --> check Warehouse and set Table Type (Fact or Fact Temporary) for your target tables.
    Points if helpful
    Kris
