Updating target tables....

Hello,
I am in quite a predicament. I need to update a table from a view. The table has 3 columns (col1, col2, col3) initialized as NULLs in all the records, while the other columns already contain data. So what I basically do is the following:
I map the view columns to the table, then click on the table to view the table operator properties and set the loading type to UPDATE. Then, under conditional loading, in the Target filter for update I insert the following condition:
( INOUTGRP1.col4 IS NOT NULL ), which validates successfully.
Note that Match by Constraints is chosen to be NO_CONSTRAINTS.
After that I click on col1, col2 and col3 respectively in order to set "Match column when updating row" to Yes.
Unfortunately, these warnings appear:
VLD-2753: All mapped attributes are used in matching criteria in 15_REC.
It will be a meaningless update action if all the attributes that are mapped are used for matching. The update statement will select the rows which satisfy the match condition and update it with the same values. Specify at least one mapped attribute of MY_TABLE to be used for updating by setting Update Use for Matching to No : ( COL1 COL2 COL3).
VLD-2761: Unable to generate Merge statement.
Merge statement cannot be generated because column COL4 is used for both matching and update. A matching column cannot be updated in a merge statement.
So if anyone has any kind of idea, please help as soon as possible, since this is actually urgent.
Many thanks already grateful for reading through,
Hossam

Hey David,
Well, the columns that I want to update are col1, col2 and col3. I don't want to match any column between the view and the table; I just want to carry out the update under the condition that table1.col1, table1.col2 and table1.col3 are NULL.
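In plain SQL, what I am after is essentially an UPDATE rather than a MERGE (which is, I gather, why VLD-2761 fires: a MERGE cannot update its own matching columns). A minimal sketch, assuming hypothetical names my_table and my_view and some key column linking the two (without such a key the subquery would have to return exactly one row):
-- Sketch only; object and column names are placeholders.
UPDATE my_table t
   SET (col1, col2, col3) = (SELECT v.col1, v.col2, v.col3
                               FROM my_view v
                              WHERE v.key_col = t.key_col)
 WHERE t.col1 IS NULL
   AND t.col2 IS NULL
   AND t.col3 IS NULL
   AND t.col4 IS NOT NULL;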

Similar Messages

  • Updating target table with date in owb using mapping

I want to update the effective begin date (month begin date) when we load the target table. I have a field in the target table, Eff_updated_dt, defined as DATE.
If I use the Data Generator operator and then an Expression operator (trunc(sysdate,'mm')), it gives an error saying the Data Generator operator should be connected to a flat file.
What other operators can I use to update the column with the month begin date?
Also, which operator do I have to use to get a sequence key in the table?
    Thanks

    you can always use a constant for the date field.
just create a constant of type DATE, give it the value you want, trunc(sysdate,'mm'), and connect it to your target column Eff_updated_dt.
    you have a sequence operator that you can use to link to your "sequence key" as well.
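In generated-SQL terms, the constant and sequence operators end up contributing expressions along these lines (a sketch only; target_table, id and my_seq are placeholder names, Eff_updated_dt comes from the post):
-- Sketch of what the DATE constant and Sequence operators translate to.
INSERT INTO target_table (id, eff_updated_dt)
VALUES (my_seq.NEXTVAL,          -- from the Sequence operator
        TRUNC(SYSDATE, 'MM'));   -- from the DATE constant (month begin date)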
    Borkur

  • Updating target table in OWB

I have a target table with columns structure_id, case_num, user_id, user_name, eff_beg_dt, eff_end_dt. I have to update the eff_end_dt in the target table and insert a new row with a new eff_beg_dt if any of the info in the columns changes, like user_id or user_name, based on case_num.
In the source table I have only case_num, user_id, user_name, which I got from a previous process (delta change rows only). Now for all these case_nums in the source table I want to update the eff_end_dt to the current month's end date and insert a new row with the source table's case_num, user_id, user_name with eff_beg_dt set to the next month's start date.
How do I achieve this in OWB? I am very new to OWB, and when I set the loading type to UPDATE, what should the Match by Constraint be?
I have to match case_num in the source to case_num in the target and update the eff_end_dt. But case_num is not a unique key.
Can you please walk me through the process?
    thanks

    Hi,
As I understand it, the problem looks straightforward.
1. First make a mapping where you drag all the fields from the source table to the target table. The target table loading type should be set to UPDATE. The update matching condition should be case_num = case_num from the source table. The field that needs to be updated is your eff_end_dt; update it with the appropriate value. Set Match by Constraints to "No Constraints" and set the matching conditions at the target table field level.
2. Then make a second mapping which does just the inserts. All the rows appearing in the source (which are your delta rows, I think) should be mapped to the respective fields in the target table. The target table load type should be set to INSERT. Set the eff_beg_dt as per your logic.
The above two steps can be done within one mapping, I believe, by setting the target load order; the equivalent SQL is sketched below.
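A minimal SQL sketch of what the two mappings amount to (target_table and source_table are placeholders; the LAST_DAY/ADD_MONTHS expressions are one reading of "current month's end date" and "next month start date"):
-- Mapping 1: close out existing rows for the changed case_nums (UPDATE).
UPDATE target_table t
   SET t.eff_end_dt = LAST_DAY(TRUNC(SYSDATE))
 WHERE EXISTS (SELECT 1
                 FROM source_table s
                WHERE s.case_num = t.case_num);
-- Mapping 2: insert the new versions of those rows (INSERT).
INSERT INTO target_table (case_num, user_id, user_name, eff_beg_dt)
SELECT s.case_num, s.user_id, s.user_name,
       ADD_MONTHS(TRUNC(SYSDATE, 'MM'), 1)
  FROM source_table s;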
    Hope this helps
    -AP

  • Updating Target Table using DBMS_Scheduler for every 10 seconds

    Hi All,
I am new to the DBMS_SCHEDULER package.
Based on the examples, I have written an anonymous block to create a job; it needs a correction in the repeat_interval value.
I have two tables, emp (source) and sk_emp (target).
I want the sk_emp table to be updated every 10 seconds.
    CREATE PROCEDURE sk_insert_records
    IS
    BEGIN
       MERGE INTO sk_emp se
          USING emp e
          ON (e.empno = se.empno)
          WHEN MATCHED THEN
             UPDATE
                SET se.ename = e.ename, se.job = e.job, se.mgr = e.mgr,
                    se.hiredate = e.hiredate, se.sal = e.sal, se.comm = e.comm,
                    se.deptno = e.deptno
                WHERE se.empno = e.empno
          WHEN NOT MATCHED THEN
             INSERT (empno, ename, job, mgr, hiredate, sal, comm, deptno)
             VALUES (e.empno, e.ename, e.job, e.mgr, e.hiredate, e.sal, e.comm,
                     e.deptno);
END;
I have written the above procedure; below is my anonymous block for DBMS_SCHEDULER.
    BEGIN
       DBMS_SCHEDULER.create_job (job_name              => 'empdetails',
                                  job_type                 => 'STORED_PROCEDURE',
                                  job_action               => 'ngshare.sk_insert_records',
                                  number_of_arguments      => 0,
                                  start_date               => TRUNC (SYSDATE),
                                  repeat_interval          => 'FREQ=DAILY;BYSECOND=10',
                                  end_date                 => NULL,
                                  job_class                => 'DEFAULT_JOB_CLASS',
                                  enabled                  => TRUE,
                                  auto_drop                => FALSE,
                               comments                 => 'DONE BY SUNIL');
END;
Can you please let me know the solution? If my repeat_interval is correct, can you please let me know what else needs to be done?
    This is my Version
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    PL/SQL Release 11.2.0.1.0 - Production
    "CORE     11.2.0.1.0     Production"
    TNS for Linux: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production
Thanks for your help
    Edited by: NSK2KSN on Jul 26, 2010 11:06 AM

Hi, it's working now. The problem was in the repeat_interval.
    Thanks,
    I changed
  repeat_interval          => 'FREQ=DAILY;BYSECOND=10',
to
  repeat_interval          => 'FREQ=SECONDLY;BYSECOND=10',
Thanks,
    Edited by: NSK2KSN on Jul 26, 2010 11:14 AM
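For reference, a strict every-10-seconds cadence is usually expressed with INTERVAL rather than BYSECOND in the calendaring syntax; a minimal sketch:
  repeat_interval          => 'FREQ=SECONDLY;INTERVAL=10',   -- fire every 10 seconds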

  • How do I update/insert into a target table, rows after date X

    Hi all
    I have a mapping from source table A to target table B. Identical table structure.
Table A has rows updated and inserted daily. Every week I want to synchronize this with table B.
    I have CREATION_DATE and LAST_UPDATE_DATE on both tables. I want to pass in a parameter to this mapping of date X which tells the mapping:
    "if CREATION_DATE is past X then do an insert of this row in B, if LAST_UPDATE_DATE is past X then do an update of this row in B"
    Please can you help me work out how to map this correctly as I am new to OWB.
    Many thanks
    Adi

    Hi,
    You can achieve this by -
1. Create a control table, say Control_Table, with the structure
Map_Name, Last_Load_Date. Populate this table with the mappings that synchronize your Table B.
2. Alter the mapping that loads Table B to use the above control table: to get the changed records from Table A, join Table A and Control_Table with the condition (see the sketch after this list) -
Control_Table.Map_Name = <mapping name>
AND ( TableA.Creation_Date > Control_Table.Last_Load_Date
OR TableA.Last_Update_Date > Control_Table.Last_Load_Date )
3. Then use UPDATE/INSERT on Table B based on the keys. This takes care of INSERT (if not present) / UPDATE (if the row already exists).
4. Schedule the mapping to run on a weekly basis.
5. You have to maintain the Control_Table, advancing Last_Load_Date each run, so that only the data since the last time Table B was synchronized is picked up.
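A minimal SQL sketch of steps 2 and 5, using the Control_Table columns above (Table_A, Table_B and the mapping name are placeholders):
-- Step 2: pick up only the rows created or updated since the last load.
SELECT a.*
  FROM Table_A a, Control_Table c
 WHERE c.Map_Name = 'MAP_LOAD_TABLE_B'              -- placeholder mapping name
   AND ( a.Creation_Date    > c.Last_Load_Date
      OR a.Last_Update_Date > c.Last_Load_Date );
-- Step 5: move the watermark forward after a successful run.
UPDATE Control_Table c
   SET c.Last_Load_Date = SYSDATE
 WHERE c.Map_Name = 'MAP_LOAD_TABLE_B';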
    HTH
    Mahesh

  • In OWB I need to update the target table with same field for match/update

In OWB I am trying to update the target table matching and updating on the same field; can this be done? I am getting a match/merge error saying you cannot update and match on the same field. But in SQL my statement is:
    Update table
    set irf = 0
    where irf = 1
    and process_id = 'TEST'
How do I do this in OWB?

    table name is temp
    fields in the table
    field1 number
    field2 varchar2(10)
    field3 date
example values in the table are:
    0,'TEST',05/29/2009
    9,'TEST',05/29/2009
    0,'TEST1',03/01/2009
    1,'TEST1',03/01/2009
    In the above example I need to update the first row field1 to 1.
    Update temp
    set field1 = 1
    where field1 = 0
    and field2 = 'TEST'
    when I run this I just need one row to be updated and it should look like this below
    1,'TEST',05/29/2009
    9,'TEST',05/29/2009
    0,'TEST1',03/01/2009
    1,'TEST1',03/01/2009
But when I run my mapping I am getting the rows like below; the second row, with 9, also gets updated to 1.
    1,'TEST',05/29/2009
    1,'TEST',05/29/2009
    0,'TEST1',03/01/2009
    1,'TEST1',03/01/2009
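For what it's worth, the behaviour described is what a MERGE that matches on field2 alone would do: both 'TEST' rows satisfy the match, so both get field1 set to 1. A sketch of the two statement shapes (temp and the column names come from the post; the single-row driver is a placeholder):
-- Roughly what the UPDATE/INSERT (MERGE) mapping does: match on field2 only,
-- so every 'TEST' row is updated.
MERGE INTO temp t
USING (SELECT 'TEST' AS field2 FROM dual) s
   ON (t.field2 = s.field2)
 WHEN MATCHED THEN UPDATE SET t.field1 = 1;
-- What is actually wanted: a plain UPDATE, which can reference field1
-- both in the filter and in the SET (a MERGE cannot update its match column).
UPDATE temp
   SET field1 = 1
 WHERE field1 = 0
   AND field2 = 'TEST';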

  • Merge update source table and delete from target table problem

    Hello Friends, 
I am a newbie in the SQL Server world and I am in a situation where I need to delete a bunch of records from the TARGET table using the values from the SOURCE table.
The TARGET table has close to 400 million records, so I need to delete the records in small batches of about ~10,000 rows.
I figured out a way to delete in batches by referring to the following 2 posts:
http://sqlperformance.com/2013/03/io-subsystem/chunk-deletes
http://dba.stackexchange.com/questions/1750/methods-of-speeding-up-a-huge-delete-from-table-with-no-clauses
I think my best option to delete and update in one pass would be to use a MERGE statement, so I constructed the following SQL.
    MERGE dbo.table1 AS TARGET
    USING 
    SELECT File_name FROM dbo.table2
    WHERE  FILE_DESC = 'EDI'
    AND [Processed_date] < DATEADD (WEEK, -10, Getdate ()) AS SOURCE
    ON (TARGET.File_name = SOURCE.File_name)
    WHEN MATCHED THEN DELETE (FROM THE TARGET)
    WHEN MATCHED 
        THEN UPDATE SET SOURCE.PROCESS_delete_date = GETDATE()
But, when executed, it throws the following errors and I am struggling to figure out what is wrong with the above syntax.
    Msg 156, Level 15, State 1, Line 3
    Incorrect syntax near the keyword 'SELECT'.
    Msg 156, Level 15, State 1, Line 5
    Incorrect syntax near the keyword 'AS'.
    Can any expert please help a newbie as I learn the new way.
    Thanks a lot.

Visakh, we can have more than one MATCHED clause in MERGE as per the Microsoft SQL documentation, but we need to add a condition along with the match. Thanks for your prompt response on this query. Your query is logically fine, but when executed it throws the following errors: Msg 156, Level 15, State 1, Line 2: Incorrect syntax near the keyword 'DELETE'. Msg 102, Level 15, State 1, Line 8: Incorrect syntax near ')'. Remember, my server machine is the 2005 version and my work machine is 2012; I don't know why your query is not working.
    MERGE is available only from 2008 onwards
Yes, you're correct, but again, no more than two MATCHED clauses even if you specify a condition.
    see MSDN documentation below
    WHEN MATCHED THEN <merge_matched>
    Specifies that all rows of target_table that match the rows returned by <table_source> ON <merge_search_condition>, and satisfy any additional search condition, are either updated or deleted according to the <merge_matched> clause.
    The MERGE statement can have at most two WHEN MATCHED clauses. If two clauses are specified, then the first clause must be accompanied by an AND <search_condition> clause
    from
    http://msdn.microsoft.com/en-us/library/bb510625.aspx
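A minimal sketch of that shape on SQL Server 2008+, with hypothetical tables and columns (the first WHEN MATCHED clause must carry the extra AND condition; the second must not):
-- Two WHEN MATCHED clauses: allowed only when the first one is qualified.
MERGE dbo.t AS tgt
USING dbo.s AS src
   ON tgt.id = src.id
WHEN MATCHED AND src.is_obsolete = 1 THEN
    DELETE
WHEN MATCHED THEN
    UPDATE SET tgt.val = src.val
WHEN NOT MATCHED THEN
    INSERT (id, val) VALUES (src.id, src.val);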
Also, I guess Composable DML, which was used, is also not present in 2005.
    So in your case you can try this instead
DECLARE @DELETED_FILES table
(File_Name varchar(100))
    DELETE t
    OUTPUT DELETED.File_Name INTO @DELETED_FILES
    FROM dbo.table1 t
    INNER JOIN dbo.table2 s
    ON t.File_name = s.File_name
    WHERE s.FILE_DESC = 'EDI'
    AND s.[Processed_date] < DATEADD (WEEK, -10, Getdate ())
    UPDATE r
    SET r.Process_Delete_Date = GETDATE()
    FROM dbo.table2 r
    INNER JOIN @DELETED_FILES AS p
    ON p.File_Name = r.File_Name
Visakh

  • Is it possible to run an update via target tables post load command

Is it possible to run a SQL update statement via the post-load command of a target table?
What does 'post load command' mean? Does it get triggered only after a successful execution of a data flow (to the target table)?
Or can it also get triggered even if a dataflow fails?
This is the SQL which I would like to run via the post-load command only if the dataflow succeeds:
Update tbl_job_status set end_time=nvl(NULL,sysdate()) where job_id = (select max(job_id) from tbl_job_status where j_dataflowname='Chk_Order_Delta')

I do not believe it will necessarily run if the DF fails. If you want your job status table updated no matter what (apart from a complete DS meltdown), you'll want to wrap things in a try-catch to make sure you're handling the update. You can do it in a script, or with a dataflow. A nice way to handle all this sort of operational metadata tracking is to write yourself a library of custom functions and then consistently call them fore and aft of jobs or workflows or whatever components you want to track. If you've got access to a Rapid Mart, you might look at what they've done there -- nice stuff (although you can't just "borrow" it).

  • Interface performs insert rather then update on target table

I cannot figure out how to get my interface to update target records rather than insert new records when a source value changes.
    I have primary keys defined.

    Turns out one of my primary key columns was being set inside the interface, the value did not exist in the source, so it was marked for update.
    Thanks!

  • Table compare deleting rows which does not exist in target table

    Hi Gurus,
    I am struggling with an issue in Data Services.
I have a job which uses Table Compare, then History Preserving, and then Key Generation transforms.
    There is every possibility that data would get deleted from the source table.
    Now, I want to delete them from the target table also.
    I tried Detect deleted rows but it is not working.
    Could some one please help me on this issue.
    Thanks,
    Raviteja.

Doesn't History Preserving really only operate on "Update" rows? Wouldn't it only process the deletes if you turned "Preserve delete row(s) as update row(s)" on?
I would think that if you turned on Detect deleted rows in the Table Comparison and did not turn this on in History Preserving, it would retain those rows as delete rows and effectively remove them from the target.
    Preserve delete row(s) as update row(s)
    Converts DELETE rows to UPDATE rows in the target warehouse and, if you previously set effective date values (Valid from and Valid to), sets the Valid To value to the execution date. Use this option to maintain slowly changing dimensions by feeding a complete data set first through the Table Comparison transform with its Detect deleted row(s) from comparison table
    option selected.

  • How do I create a target table with the same PK as the source table?

    I am trying to create a target table in a mapping that will end up with the same primary key as the source table.
    It is a simple map that simply uses a subset of the columns of the source table in the target table. I was wanting to create and bind a new table by dragging the columns I want from the source to the initially blank target table operator, change the column names and create a primary key to match the source table.
I can't seem to create a constraint on the table in the mapping. I can create the constraint after the table is created and bound to the database object, but the PK doesn't carry back into the mapping.
    I need it in the mapping so I can use the UPDATE/INSERT operation and use the 'All Constraints' implementation. The mapping won't let me validate the object without the PK on it in the map.
    Believe it or not folks, I am getting better at this.
    Thanks very much for the guidance.
    Gary

    Hi Gary
    You are close, you are really close... :-))
You need to do exactly as you propose, plus one extra step. Build the map as you describe, binding the new table to the target. Then edit the table definition to add the primary key and any other constraints you need. After this comes the step that you are missing.
    You need to do the following:
    1. Go back and re-edit the map
    2. Right click on the table
    3. From the pop up menu, select Reconcile Inbound
    4. Set any operators that you need for the UPDATE/INSERT
    5. Save the map
    6. Commit your changes
    The first three steps above make the map read in the indexes and constraints that you set on the table. Finally, you need to deploy the table and then deploy the map.
    Hope this helps
    Regards
    Michael

Don't have primary key in target table, getting error while creating index

    Hi All,
I don't have a primary key column in the target table; while executing the mapping I got an error while creating an INDEX.
Could you please help me figure out how to solve this?

    Hi,
    That is a process definition issue.
    If you don't have a PK then:
1) either you don't execute updates, or
2) you have an alternate key to update with.
Case 1): just change the KM to IKM Control Append.
Case 2): at the interface, go to each column that is part of the alternate key and check it as a key (click on the column and check the Key box at the bottom of the properties window).
Does that work for you?

  • A question about transaction consistency between multible target tables

    Dear community,
    My replication is ORACLE 11.2.3.7->ORACLE 11.2.3.7 both running on linux x64 and GG version is 11.2.3.0.
I'm recovering from an error where the trail file was moved away while the data pump was writing to it.
After moving the file back, the data pump abended with the error:
    2013-12-17 11:45:06  ERROR   OGG-01031  There is a problem in network communication, a remote file problem, encryption keys for target and source do
    not match (if using ENCRYPT) or an unknown error. (Reply received is Expected 4 bytes, but got 0 bytes, in trail /u01/app/ggate/dirdat/RI002496, seqno 2496,
    reading record trailer token at RBA 12999993).
I googled for it and found no suitable solution except to try "alter extract <dpump>, etrollover".
After rolling over the trail file, the replicat, as expected, ended with:
    REPLICAT START 1
    2013-12-17 17:56:03  WARNING OGG-01519  Waiting at EOF on input trail file /u01/app/ggate/dirdat/RI002496, which is not marked as complete;
    but succeeding trail file /u01/app/ggate/dirdat/RI002497 exists. If ALTER ETROLLOVER has been performed on source extract,
    ALTER EXTSEQNO must be performed on each corresponding downstream reader.
    So I've issued "alter replicat <repname>, extseqno 2497, extrba 0" but got the following error:
    REPLICAT START 2
    2013-12-17 18:02:48 WARNING OGG-00869 Aborting BATCHSQL transaction. Detected inconsistent result:
    executed 50 operations in batch, resulting in 47 affected rows.
    2013-12-17 18:02:48  WARNING OGG-01137  BATCHSQL suspended, continuing in normal mode.
    2013-12-17 18:02:48  WARNING OGG-01003  Repositioning to rba 1149 in seqno 2497.
    2013-12-17 18:02:48 WARNING OGG-01004 Aborted grouped transaction on 'M.CLIENT_REG', Database error
    1403 (OCI Error ORA-01403: no data found, SQL <UPDATE "M"."CLIENT_REG" SET "CLIENT_CODE" =
    :a1,"CORE_CODE" = :a2,"CP_CODE" = :a3,"IS_LOCKED" = :a4,"BUY_SUMMA" = :a5,"BUY_CHECK_CNT" =
    :a6,"BUY_CHECK_LIST_CNT" = :a7,"BUY_LAST_DATE" = :a8 WHERE "CODE" = :b0>).
    2013-12-17 18:02:48  WARNING OGG-01003  Repositioning to rba 1149 in seqno 2497.
    2013-12-17 18:02:48 WARNING OGG-01154 SQL error 1 mapping LS.CHECK to M.CHECK OCI Error ORA-00001:
    unique constraint (M.CHECK_PK) violated (status = 1). INSERT INTO "M"."CHECK"
    ("CODE","STATE","IDENT_TYPE","IDENT","CLIENT_REG_CODE","SHOP","BOX","NUM","KIND","KIND_ORDER","DAT","SUMMA","LIST_COUNT","RETURN_SELL_CHECK_CODE","RETURN_SELL_SHOP","RETURN_SELL_BOX","RETURN_SELL_NUM","RETURN_SELL_KIND","INSERTED","UPDATED","REMARKS")
    VALUES
    (:a0,:a1,:a2,:a3,:a4,:a5,:a6,:a7,:a8,:a9,:a10,:a11,:a12,:a13,:a14,:a15,:a16,:a17,:a18,:a19,:a20).
    2013-12-17 18:02:48  WARNING OGG-01003  Repositioning to rba 1149 in seqno 2497.
    The report stated the following:
    Reading /u01/app/ggate/dirdat/RI002497, current RBA 1149, 0 records
    Report at 2013-12-17 18:02:48 (activity since 2013-12-17 18:02:46)
    From Table LS.MK_CHECK to LSGG.MK_CHECK:
           #                   inserts:         0
           #                   updates:         0
           #                   deletes:         0
           #                  discards:         1
    From Table LS.MK_CHECK to LSGG.TL_MK_CHECK:
           #                   inserts:         0
           #                   updates:         0
           #                   deletes:         0
           #                  discards:         1
At that time I came to the conclusion that using etrollover was not a good idea. Nevertheless, I had to upload my data to perform a consistency check.
My mapping templates are set up as follows:
    LS.CHECK->M.CHECK
    LS.CHECK->M.TL_CHECK
    (such mapping is set up for every table that is replicated).
TL_CHECK is a transaction log, as I call it,
and this peculiar mapping is as follows:
    ignoreupdatebefores
    map LS.CHECK, target M.CHECK, nohandlecollisions;
    ignoreupdatebefores
    map LS.CHECK, target M.TL_CHECK ,colmap(USEDEFAULTS,
    FILESEQNO = @GETENV ("RECORD", "FILESEQNO"),
    FILERBA = @GETENV ("RECORD", "FILERBA"),
    COMMIT_TS = @GETENV( "GGHEADER", "COMMITTIMESTAMP" ),
    FILEOP = @GETENV ("GGHEADER","OPTYPE"), CSCN = @TOKEN("TKN-CSN"),
    RSID = @TOKEN("TKN-RSN"),
    OLD_CODE = before.CODE
    , OLD_STATE = before.STATE
    , OLD_IDENT_TYPE = before.IDENT_TYPE
    , OLD_IDENT = before.IDENT
    , OLD_CLIENT_REG_CODE = before.CLIENT_REG_CODE
    , OLD_SHOP = before.SHOP
    , OLD_BOX = before.BOX
    , OLD_NUM = before.NUM
    , OLD_NUM_VIRT = before.NUM_VIRT
    , OLD_KIND = before.KIND
    , OLD_KIND_ORDER = before.KIND_ORDER
    , OLD_DAT = before.DAT
    , OLD_SUMMA = before.SUMMA
    , OLD_LIST_COUNT = before.LIST_COUNT
    , OLD_RETURN_SELL_CHECK_CODE = before.RETURN_SELL_CHECK_CODE
    , OLD_RETURN_SELL_SHOP = before.RETURN_SELL_SHOP
    , OLD_RETURN_SELL_BOX = before.RETURN_SELL_BOX
    , OLD_RETURN_SELL_NUM = before.RETURN_SELL_NUM
    , OLD_RETURN_SELL_KIND = before.RETURN_SELL_KIND
    , OLD_INSERTED = before.INSERTED
    , OLD_UPDATED = before.UPDATED
    , OLD_REMARKS = before.REMARKS), nohandlecollisions, insertallrecords;
As the PK violation fired for CHECK, I changed nohandlecollisions to handlecollisions for the LS.CHECK->M.CHECK mapping and restarted the replicat.
    To my surprise it ended with the following error:
    REPLICAT START 3
    2013-12-17 18:05:55 WARNING OGG-00869 Aborting BATCHSQL transaction. Database error 1 (ORA-00001:
    unique constraint (M.CHECK_PK) violated).
    2013-12-17 18:05:55 WARNING OGG-01137 BATCHSQL suspended, continuing in normal mode.
    2013-12-17 18:05:55 WARNING OGG-01003 Repositioning to rba 1149 in seqno 2497.
    2013-12-17 18:05:55 WARNING OGG-00869 OCI Error ORA-00001: unique constraint (M.PK_TL_CHECK)
    violated (status = 1). INSERT INTO "M"."TL_CHECK"
    ("FILESEQNO","FILERBA","FILEOP","COMMIT_TS","CSCN","RSID","CODE","STATE","IDENT_TYPE","IDENT","CLIENT_REG_CODE","SHOP","BOX","NUM","KIND","KIND_ORDER","DAT","SUMMA","LIST_COUNT","RETURN_SELL_CHECK_CODE","RETURN_SELL_SHOP","RETURN_SELL_BOX","RETURN_SELL_NUM","RETURN_SELL_KIND","INSERTED","UPDATED","REMARKS")
    VALUES
    (:a0,:a1,:a2,:a3,:a4,:a5,:a6,:a7,:a8,:a9,:a10,:a11,:a12,:a13,:a14,:a15,:a16,:a17,:a18,:a19,:a20,:a21,:a22,:a23,:a24,:a25,:a26).
    2013-12-17 18:05:55 WARNING OGG-01004 Aborted grouped transaction on 'M.TL_CHECK', Database error 1
    (OCI Error ORA-00001: unique constraint (M.PK_TL_CHECK) violated (status = 1). INSERT INTO
    "M"."TL_CHECK"
    ("FILESEQNO","FILERBA","FILEOP","COMMIT_TS","CSCN","RSID","CODE","STATE","IDENT_TYPE","IDENT","CLIENT_REG_CODE","SHOP","BOX","NUM","KIND","KIND_ORDER","DAT","SUMMA","LIST_COUNT","RETURN_SELL_CHECK_CODE","RETURN_SELL_SHOP","RETURN_SELL_BOX","RETURN_SELL_NUM","RETURN_SELL_KIND","INSERTED","UPDATED","REMARKS")
    VALUES
    (:a0,:a1,:a2,:a3,:a4,:a5,:a6,:a7,:a8,:a9,:a10,:a11,:a12,:a13,:a14,:a15,:a16,:a17,:a18,:a19,:a20,:a21,:a22,:a23,:a24,:a25,:a26)).
    2013-12-17 18:05:55  WARNING OGG-01003  Repositioning to rba 1149 in seqno 2497.
    2013-12-17 18:05:55 WARNING OGG-01154 SQL error 1 mapping LS.CHECK to M.TL_CHECK OCI Error
    ORA-00001: unique constraint (M.PK_TL_CHECK) violated (status = 1). INSERT INTO "M"."TL_CHECK"
    ("FILESEQNO","FILERBA","FILEOP","COMMIT_TS","CSCN","RSID","CODE","STATE","IDENT_TYPE","IDENT","CLIENT_REG_CODE","SHOP","BOX","NUM","KIND","KIND_ORDER","DAT","SUMMA","LIST_COUNT","RETURN_SELL_CHECK_CODE","RETURN_SELL_SHOP","RETURN_SELL_BOX","RETURN_SELL_NUM","RETURN_SELL_KIND","INSERTED","UPDATED","REMARKS")
    VALUES
    (:a0,:a1,:a2,:a3,:a4,:a5,:a6,:a7,:a8,:a9,:a10,:a11,:a12,:a13,:a14,:a15,:a16,:a17,:a18,:a19,:a20,:a21,:a22,:a23,:a24,:a25,:a26).
    2013-12-17 18:05:55  WARNING OGG-01003  Repositioning to rba 1149 in seqno 2497.
I expected that BATCHSQL would fail because it does not support handlecollisions, but I really don't understand why any record was inserted into TL_CHECK and caused a PK violation, because I thought that GG guarantees transactional consistency and that any transaction that causes an error in _ANY_ of the target tables will be rolled back for _EVERY_ target table.
TL_CHECK has its PK set to (FILESEQNO, FILERBA), plus I have a special column that captures the replication run number, and it clearly shows that the record causing the PK violation was inserted during the previous run (REPLICAT START 2).
BTW, the report for the last run shows:
    Reading /u01/app/ggate/dirdat/RI002497, current RBA 1149, 1 records
    Report at 2013-12-17 18:05:55 (activity since 2013-12-17 18:05:54)
    From Table LS.MK_CHECK to LSGG.MK_CHECK:
           #                   inserts:         0
           #                   updates:         0
           #                   deletes:         0
           #                  discards:         1
    From Table LS.MK_CHECK to LSGG.TL_MK_CHECK:
           #                   inserts:         0
           #                   updates:         0
           #                   deletes:         0
           #                  discards:         1
So, can somebody explain how that could happen?

    Write the query of the existing table in the form of a function with PRAGMA AUTONOMOUS_TRANSACTION.
Examples here:
    http://www.morganslilbrary.org/library.html

  • How to sort columns in the target table

I have a simple mapping which I am trying to design. There's only one table in the source and one in the target. There are no filter conditions; the only thing is that I want the target table to be sorted.
    Literally, say
    Src is source table has 3 columns x,y,z
    Trg is dest table and has 3 columns a,b,c
    x--->a
    y---->b
    z---->c
    The SQL should be
    select x,y,z from src order by x,y.
I could do the mapping, but I could not get the ORDER BY.
    IKM used: IKM BIAPPS Oracle Incremental Update

Why can't you use a simple UPDATE command in an Execute SQL Task, as below?
    DROP TABLE SSN
    DROP TABLE STAGING
    DROP TABLE STUDENT
    CREATE TABLE SSN(pn_id VARCHAR(100),ssn BIGINT)
    INSERT INTO SSN VALUES('000616850',288258466)
    INSERT INTO SSN VALUES('002160790',176268917)
    CREATE TABLE Staging (ssn BIGINT, id INT, pn_id BIGINT, name VARCHAR(100), subject VARCHAR(100),grade VARCHAR(10), [academic year] INT, comments VARCHAR(100))
    INSERT INTO Staging VALUES(288258466, 1001, '770616858','Sally Johnson', 'English','A', 2005,'great student')
    INSERT INTO Staging VALUES(176268917, 1002, '192160792','Will Smith', 'Math','C', 2014,'no comments')
    INSERT INTO Staging VALUES(444718562, 1003, '260518681','Mike Lira', 'Math','B', 2013,'no comments')
    CREATE TABLE Student(id INT,pn_id BIGINT,subject VARCHAR(100), [academic year] INT, grade VARCHAR(10), comments VARCHAR(100) )
    INSERT INTO Student VALUES(1001, '000616850', NULL,NULL,NULL ,NULL)
    INSERT INTO Student VALUES(1002, '002160790', NULL,NULL,NULL ,NULL)
    UPDATE Student SET Subject = C.Subject, [academic year]=C.[academic year], grade=C.grade,comments=C.comments
    FROM SSN A INNER JOIN Student B
    ON A.pn_id=B.pn_id INNER JOIN Staging C
    ON A.ssn = C.ssn
    SELECT * FROM Student
    Regards, RSingh

  • How to load multiple (2 or more) target tables simultaneously?

    Hi,
I have a requirement where I have to load 2 target tables simultaneously. The reason is that the same primary key value has to be loaded into both targets.
    For eg:
    Source table:
    col1, col2, col3, col4, col5.... col20
    Target 1:
    T1C1=unique value from seq1
    T1C2=col1
    T1C3=col2
    T1C9=col8
    Target 2:
    T2C1=same value as inserted above from seq1
    T2C2=col9
    T2C3=col10
    T2C4=col11
    T2C13=col20
    Any response is highly appreciated.
    Thanks for your time in advance.
    -- CHikk

    Hi, please, call me Cezar! :D
OK, here is an overview guide; are you confident going through KM customizing? If not, let me know.
1) To achieve that, I figured out how to use the UD (UD1..UD5) flags from the interface.
2) Create 2 new options at the IKM, like TABLE_UD1 and TABLE_UD2.
3) Change the KM code (C$ and I$) to generate code for each column group (UD1 and UD2 update and insert code). The I$ must have all columns.
4) At interface level, drag and drop both target tables into the source area, then right-click on each one and add it to the target as a temp table.
5) Drop the targets from the source area.
6) For each group of columns from Target1 and Target2, check the columns as UD1 and UD2 respectively; the sequence column must be checked as both UD1 and UD2 because it is common to both target tables.
7) Add the real sources and create all the mappings.
8) Now just go to the Flow tab and put the corresponding table name in each option. You should use an ODI API to get the right schema when changing the context to production.
I have just described the general architecture, since the technical implementation needs KM customization knowledge; a rough SQL sketch is below.
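To make the idea concrete, here is a rough sketch, not the actual KM code, of the kind of SQL the customized IKM would have to produce so that both targets share the same sequence value (SEQ1 and the column names come from the post; TARGET1, TARGET2 and the I$ staging table name are placeholders). In an Oracle multi-table insert the sequence is incremented once per source row, so both references see the same value:
-- Sketch only: one pass over the staging data feeding both targets.
INSERT ALL
  INTO target1 (t1c1, t1c2, t1c3, t1c9)
    VALUES (seq1.NEXTVAL, col1, col2, col8)
  INTO target2 (t2c1, t2c2, t2c3, t2c4, t2c13)
    VALUES (seq1.NEXTVAL, col9, col10, col11, col20)
  SELECT col1, col2, col8, col9, col10, col11, col20
    FROM i$_staging;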
Does that make sense and/or help you?
    Be free to contact me at my email (see profile).
