Referring to the target table's records in the transferring query

Hi all,
I am trying to load some records into the target table with my job in Data Integrator (DI). The query I need is a bit tricky: while loading records into the target table, I have to check whether a value has already been transferred, because one of the columns must stay unique across the whole table. DISTINCT only removes duplicate rows within the incoming set; I need one column to hold a unique value across the entire target table.
I noticed it is not possible to refer to a target column in the Query object to check whether a value has already been used there. How can I address this requirement? Do you have any experience with this?
Here is the SQL code that expresses what I need the Query object to do: in the target table, every city should appear in one and only one record.
INSERT INTO target (
     Effective_From_Date,
     Effective_To_Date,
     Business_Unit_ID,
     Provider_ID
)
SELECT DISTINCT
     a.Effective_From_Date,
     b.Effective_To_Date,
     d.city_ID,
     d.provider_ID
FROM
     table1 a
     INNER JOIN table2 b
          ON (a.typeID = b.typeID)
     INNER JOIN table3 c
          ON (a.professionID = c.professionID)
     INNER JOIN table4 d
          ON (c.city_ID = d.city_ID)
WHERE NOT EXISTS
     (SELECT 1 FROM target e
       WHERE d.city_ID = e.Business_Unit_ID)
Thanks.

You can use the target table as a source table as well: just drag it into your dataflow again and select Source instead of Target this time. Then you can outer join that second instance of the target table to your query (I might do this in a second Query instead of trying to add it to the existing one).
You could also use a lookup function to check the target table. In that case you'd also need a second Query to check the result of your lookup.
Worst case, you can throw the whole SQL query you've already written into a SQL transform and then use that as your source.
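As a rough sketch of what the outer-join approach computes (table and column names are the ones from the question; this is the logic, not exact Data Integrator syntax), the second Query would left outer join the target back in and keep only the rows where no match was found:

-- q stands for the output of your existing joins
SELECT q.*
FROM   (SELECT DISTINCT a.Effective_From_Date, b.Effective_To_Date,
               d.city_ID, d.provider_ID
        FROM   table1 a
               INNER JOIN table2 b ON a.typeID = b.typeID
               INNER JOIN table3 c ON a.professionID = c.professionID
               INNER JOIN table4 d ON c.city_ID = d.city_ID) q
       LEFT OUTER JOIN target e
              ON e.Business_Unit_ID = q.city_ID
WHERE  e.Business_Unit_ID IS NULL   -- the city is not in the target yet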

Similar Messages

  • How to soft delete a row from the target table?

    Could someone help me on this requirement?
    How can I implement the logic below using only ODI? I am able to implement it with the "DELETE_FLAG" set to "N".
    I want the latest record to have the flag "N" and all the previous records to have the flag "D".
    Thanks a lot in advance.
    I have a source table "EMP".
    EMP
    EMPID FIRST_NAME
    1 A
    2 B
    The first name is changed from A to C, and then from C to D, etc. For each data change, I would add a target row and mark the latest row as "N" and the rest as "D". The target table would contain the following data:
    Target_EMP
    EMPID FIRST_NAME DELETE_FLAG
    1 A D
    1 C D
    1 D N

    The problem is that I can't delete the row because it first requires me to fill in a mandatory field. This happens when the key field is ROWID; in other cases the delete is successful.
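    As a plain-SQL sketch of that flagging rule (only an illustration of the logic described above, not an ODI knowledge module; it assumes a hypothetical LOAD_DATE column that identifies the most recently loaded row per EMPID):

    -- any row that has a newer row for the same EMPID is soft-deleted
    UPDATE Target_EMP t
       SET DELETE_FLAG = 'D'
     WHERE EXISTS (SELECT 1 FROM Target_EMP n
                    WHERE n.EMPID = t.EMPID
                      AND n.LOAD_DATE > t.LOAD_DATE);   -- LOAD_DATE is an assumed column

    -- the latest row per EMPID keeps the flag 'N'
    UPDATE Target_EMP t
       SET DELETE_FLAG = 'N'
     WHERE NOT EXISTS (SELECT 1 FROM Target_EMP n
                        WHERE n.EMPID = t.EMPID
                          AND n.LOAD_DATE > t.LOAD_DATE);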

  • How to execute query to store the result in the target table column ?

    Hi
    Source: Oracle
    Target: Oracle
    ODI: 11g
    I have an interface which loads data from a source table into a target table. Some of the columns in the target table are mapped automatically from the source table, while others remain unmapped. For the unmapped columns I want to load the values by executing a query. Can anybody tell me where I should put that query so that its result becomes the value of the specific column?
    -Thanks,
    Shrinivas

    Actually, I selected the column in the target table and then, in the Property Inspector --> Mapping properties --> Implementation
    tab, I wrote the query that retrieves the value for that column. Is that the right place to write the query? How can I do this?
    -Shrinivas
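    For reference, what usually goes into the Implementation tab of a target column mapping is a scalar expression rather than a full statement; a hedged sketch (LOOKUP_TBL, DESCRIPTION, CODE and SRC are hypothetical names used only for illustration), executed on the source or staging area, could look like this:

    (SELECT l.DESCRIPTION
       FROM LOOKUP_TBL l
      WHERE l.CODE = SRC.CODE)   -- scalar subquery: must return exactly one value per target row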

  • Using the Table Comparison transform, how do I delete unmatched records from the target table (source table has 10 records, target table has 15)?

    I have a source table with 10 records and a target table with 15 records. My question: using the Table Comparison transform, I want to delete the unmatched records from the target table. How is that done?

    Hi Kishore,
    First, identify the deleted records by selecting the "Detect deleted rows from comparison table" option in the Table Comparison transform.
    Then use a Map Operation with input row type "delete" and output row type "delete" to delete the records from the target table.

  • How to delete rows in the target table using interface

    hi guys,
    I have an interface with source src and target tgt, and both have a company_code column. In the interface, if a record with the same company_code already exists, we need to delete it and insert the new one from src; if it is not available, we just need to insert it.
    Please tell me how to achieve this.
    Regards,
    sai.

    gatha wrote:
    For this do we need to apply CDC?
    I am not clear on how to delete rows in the target. Can you please share the steps to be followed?
    If you are able to track the deletes in your source data then you don't need CDC. If you can't, then it might be an option.
    I'll give you an example from what I'm working on currently.
    We have an ODS, some 400+ tables. Some are needed in near real time, so we are using CDC for those. Some are OK to be batch loaded overnight.
    CDC captures the deletes no problem, so the standard knowledge modules, with a little tweaking for performance, are doing the job fine; they handle deletes.
    The overnight batch process, however, cannot track a delete because the row is physically gone by the time we run the scenarios. So we load all the inserts/updates using a last-modified date, then we pull all the PKs from the source and delete from the target using a NOT EXISTS looking back at the collection (staging) table; see the sketch below. We had to write our own KM for that.
    All I'm saying to the OP is that while you have Insert/Update flags to set on the target datastore to influence the API code, there is nothing stopping you from extending this logic with the UD flags and writing your own routine for the deletes. It all depends on how efficiently you can identify rows that have been deleted.
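    As a rough illustration of that batch delete step (a sketch only; TGT_TABLE, C$_STAGING and PK_ID are hypothetical names standing in for the target, the collection table and the primary key column):

    DELETE FROM TGT_TABLE t
     WHERE NOT EXISTS
           (SELECT 1
              FROM C$_STAGING s          -- collection/staging table holding all current source PKs
             WHERE s.PK_ID = t.PK_ID);   -- the row is gone from the source, so remove it from the target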

  • A row should get added in the target as soon as the data in the source table changes

    I have done the following:
    * The source table is part of the CDC process.
    * I have started the journal on the source table.
    Whenever I change the data in the source, I expect the target to get a new row added with a new sequence number as the surrogate key. I find that even though the source data changes, the new row does not get added.
    Could someone point out to me why the new row is not getting added?

    Step 1 - Sequence Number
    Create a sequence in your RDBMS, for example:
    CREATE SEQUENCE SEQUENCE_NAME
    MINVALUE 1
    MAXVALUE 99999
    START WITH 1
    INCREMENT BY 1;
    You can use the above sequence in your mapping as
    schema_name.sequence_name.NEXTVAL, executed on the Target.
    Next, select only the Insert option for the sequence column.
    Step 2 - Journalized Data Only
    Click on the source datastore, and in the Properties panel you will find an option called "Journalized Data Only". Whenever the interface runs with this option, only the journalized data gets transferred.
    Another way to see the journalized data on the source side is to right-click the source datastore under the journalized model and go to "Changed Data Capture" and then "Journal Data...".
    There you can see only the journalized data.
    CDC creates a trigger at the source, so whenever there is a change in the source it is captured and then loaded into the target when you run the interface with the "Journalized Data Only" option.
    I hope this is clear now.
    Thanks

  • How to sort columns in the target table

    I have a simple mapping which I am trying to design. There is only one table on the source side and one on the target side. There are no filter conditions; the only thing is that I want the rows loaded into the target table to be sorted.
    Literally, say
    Src is the source table and has 3 columns: x, y, z
    Trg is the destination table and has 3 columns: a, b, c
    x ---> a
    y ---> b
    z ---> c
    The generated SQL should be
    select x, y, z from src order by x, y
    I could build the mapping, but I could not get the ORDER BY in.
    IKM used: IKM BIAPPS Oracle Incremental Update


  • How to update fields in the target table in correspondence with the source file values

    Environment: win7, SQL server 2008 R2
    Application: Microsoft Management SQL Studio 2008 R2, Business Intelligence 2008 - SSIS
    SSIS competency level: Novice
    Problem: I have been trying to update some of the fields in the destination table (the Student table) based on the data in the Staging table and the SSN table. I was able to insert/load new data into the destination using a Lookup transformation with SSN as the driver (data mapping), but I don't know how to update some of the fields in the Student table while keeping the original pn_id of both tables (SSN and Student), because pn_id already exists in the SSN table and the Student table. There are other records associated with the pn_id, so I am not allowed to update the pn_id in the destination tables. For example,
    SSN Table (pn_id,ssn)
    ('000616850',288258466)
    ('002160790',176268917)
    Staging Table (ssn, id, pn_id, name, subject, grade, academic year, comments)
    (288258466, 1001, '770616858',Sally Johnson, English,A, 2005,'great student')
    (176268917, 1002, '192160792',Will Smith, Math,38000,C, 2014,'no comments')
    (444718562, 1003, '260518681',Mike Lira, Math,38000,B, 2013,'no comments')
    Student Table (destination table)(id,pn_id,subject,academic year, grade, comments):
    (1001, '000616850', ' ',' ', ,'')
    (1002, '002160790', ' ',' ', ,'')
    Expected Results:
    My goal is to have student table updated as the following:
    Student Table
    (1001, '000616850', 'English','A' ,2005 ,'great student')
    (1002, '002160790', 'Math ',' C',2014 ,'no comments')
    please advise

    Why can't you use a simple UPDATE command in an Execute SQL Task, as below?
    -- drop and recreate the sample tables (pn_id kept as VARCHAR so the leading zeros are preserved)
    DROP TABLE SSN
    DROP TABLE Staging
    DROP TABLE Student
    CREATE TABLE SSN (pn_id VARCHAR(100), ssn BIGINT)
    INSERT INTO SSN VALUES('000616850', 288258466)
    INSERT INTO SSN VALUES('002160790', 176268917)
    CREATE TABLE Staging (ssn BIGINT, id INT, pn_id VARCHAR(100), name VARCHAR(100), subject VARCHAR(100), grade VARCHAR(10), [academic year] INT, comments VARCHAR(100))
    INSERT INTO Staging VALUES(288258466, 1001, '770616858', 'Sally Johnson', 'English', 'A', 2005, 'great student')
    INSERT INTO Staging VALUES(176268917, 1002, '192160792', 'Will Smith', 'Math', 'C', 2014, 'no comments')
    INSERT INTO Staging VALUES(444718562, 1003, '260518681', 'Mike Lira', 'Math', 'B', 2013, 'no comments')
    CREATE TABLE Student (id INT, pn_id VARCHAR(100), subject VARCHAR(100), [academic year] INT, grade VARCHAR(10), comments VARCHAR(100))
    INSERT INTO Student VALUES(1001, '000616850', NULL, NULL, NULL, NULL)
    INSERT INTO Student VALUES(1002, '002160790', NULL, NULL, NULL, NULL)
    -- join Student to Staging through SSN and copy the staged values across
    UPDATE B SET subject = C.subject, [academic year] = C.[academic year], grade = C.grade, comments = C.comments
    FROM SSN A INNER JOIN Student B
    ON A.pn_id = B.pn_id INNER JOIN Staging C
    ON A.ssn = C.ssn
    SELECT * FROM Student
    Regards, RSingh

  • The size of the target table grows abnormally

    hi all,
    I am currently using OWB (version 9.2.0.4) to load some tables.
    We have created a new 9.2.0.5 database for a new data warehouse.
    I have an issue that I really cannot explain about the growing size of the target tables.
    Take the example of a parameter table that contains 4 columns and only 12 rows.
    CREATE TABLE SSD_DIM_ACT_INS (
      ID_ACT_INS INTEGER,
      COD_ACT_INS VARCHAR2(10 BYTE),
      LIB_ACT_INS VARCHAR2(80 BYTE),
      CT_ACT_INS VARCHAR2(10 BYTE)
    )
    TABLESPACE IOW_OIN_DAT
    PCTUSED 0
    PCTFREE 10
    INITRANS 1
    MAXTRANS 255
    STORAGE (
      INITIAL 1M
      MINEXTENTS 1
      MAXEXTENTS 2147483645
      PCTINCREASE 0
      BUFFER_POOL DEFAULT
    )
    LOGGING
    NOCACHE
    NOPARALLEL;
    This table is fed by a mapping and I use the update/insert option, which generates a MERGE.
    At first the table is empty; I run the mapping and it adds 14 rows.
    The size of the table is now 5 MB!
    Then I delete 2 rows by SQL with TOAD.
    I run the mapping again. It updates 12 rows and adds 2 rows.
    At this point the size of the table has increased by 2 MB (1 MB per row!).
    The size of the table is now 7 MB.
    I do the same again and I get a 9 MB table.
    When I delete 2 rows with a SQL statement and recreate them manually, the size of the table does not change.
    When I create a copy of the table with an INSERT ... SELECT statement, the size comes out at 1 MB, which is normal.
    Could someone explain to me how this is possible?
    Is it a problem with the database? With the configuration of OWB?
    What should I check?
    Thank you for your help.

    Hi all
    We have found the reason for the increase.
    Each mapping has a HINT that defaults to PARALLEL APPEND. As I understand it, OWB uses it to determine whether an insert allocates new space for the table every time it runs.
    We have changed each one to PARALLEL NOAPPEND and now it is correct.
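    For context, this is the usual difference between a direct-path and a conventional insert in Oracle (a small sketch; ssd_dim_act_ins and stg_act_ins are just placeholder names). The APPEND hint writes new blocks above the table's high-water mark instead of reusing free space, which is why every run of the generated MERGE/INSERT can grow the segment:

    -- direct-path insert: allocates new blocks above the high-water mark
    INSERT /*+ APPEND */ INTO ssd_dim_act_ins
    SELECT * FROM stg_act_ins;

    -- conventional insert: reuses free space inside existing blocks
    INSERT INTO ssd_dim_act_ins
    SELECT * FROM stg_act_ins;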

  • In OWB I need to update the target table using the same field for match and update

    In OWB I am trying to update the target table with the match and the update on the same field; can this be done? I am getting a match/merge error saying you cannot update and match on the same field. But in SQL my statement is:
    Update table
    set irf = 0
    where irf = 1
    and process_id = 'TEST'
    How do I do this in OWB?

    table name is temp
    fields in the table
    field1 number
    field2 varchar2(10)
    field3 date
    Example values in the table:
    0,'TEST',05/29/2009
    9,'TEST',05/29/2009
    0,'TEST1',03/01/2009
    1,'TEST1',03/01/2009
    In the above example I need to update the first row field1 to 1.
    Update temp
    set field1 = 1
    where field1 = 0
    and field2 = 'TEST'
    When I run this, only one row should be updated, and it should look like this:
    1,'TEST',05/29/2009
    9,'TEST',05/29/2009
    0,'TEST1',03/01/2009
    1,'TEST1',03/01/2009
    But when I run my mapping I am getting the rows below; the second row with 9 is also getting updated to 1.
    1,'TEST',05/29/2009
    1,'TEST',05/29/2009
    0,'TEST1',03/01/2009
    1,'TEST1',03/01/2009
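    Since a merge cannot both match and update on field1, one common workaround in plain SQL (a sketch only, Oracle 10g+ syntax, using the temp example above) is to match on field2 alone and restrict the update with a WHERE clause on the WHEN MATCHED branch, so only the 0,'TEST' row is touched:

    MERGE INTO temp t
    USING (SELECT 'TEST' AS field2 FROM dual) s
       ON (t.field2 = s.field2)
     WHEN MATCHED THEN
          UPDATE SET t.field1 = 1
          WHERE t.field1 = 0;   -- restrict the update here instead of matching on field1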

  • Issue with INSERT INTO, throws primary key violation error even if the target table is empty

    Hi,
    I am running a simple
    INSERT INTO Table 1 (column 1, column 2, ....., column n)
    SELECT column 1, column 2, ....., column n FROM Table 2
    Table 1 and Table 2 have same definition(schema).
    Table 1 is empty and Table 2 has all the data. Column 1 is primary key and there is NO identity column.
    This statement still throws a primary key violation error. I am clueless about this.
    How can this happen when the target table is totally empty?
    Chintu

    Nope, that's not true.
    Either you're not inserting into the right table, or in the background some other trigger code is getting fired which inserts into a table that causes the PK violation.
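    It is also worth checking whether Table 2 itself has duplicate values in column 1; duplicates in the source violate the primary key even though the target is empty. A quick check (a sketch, using the placeholder names from the question):

    SELECT column1, COUNT(*) AS copies
    FROM   Table2
    GROUP  BY column1
    HAVING COUNT(*) > 1;   -- any rows returned mean the source has duplicate key values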
    Visakh

  • How to gather stats on the target table

    Hi
    I am using OWB 10gR2.
    I have created a mapping with a single target table.
    I have checked the mapping configuration 'Analyze Table Statements'.
    I have set target table property 'Statistics Collection' to 'MONITORING'.
    My requirement is to gather stats on the target table, after the target table is loaded/updated.
    According to Oracle's OWB 10gR2 User Document (B28223-03, Page#. 24-5)
    Analyze Table Statements
    If you select this option, Warehouse Builder generates code for analyzing the target
    table after the target is loaded, if the resulting target table is double or half its original
    size.
    My issue is that when my target table does not double or halve in size, the target table does NOT get analyzed.
    I am looking for a way or setting in OWB 10gR2 to gather stats on my target table after it is loaded/updated, no matter its size.
    Thanks for your help in advance...
    ~Salil

    Hi
    Unfortunately we have to disable automatic stats gathering on the 10g database.
    My requirement is to extract data from one database, load it into my TEMP tables, process it, and finally load it into my data warehouse tables.
    So I need to make sure to analyze my TEMP tables after they are truncated, loaded and subsequently updated, before I can process the data and load it into my data warehouse tables.
    I also need to truncate all TEMP tables after the load is completed to save space on my target database.
    If we keep automatic stats gathering ON on my target 10g database, it might gather stats for those TEMP tables while they are empty.
    Any ideas to overcome this issue are appreciated.
    Thanks
    Salil
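    One way to force stats gathering regardless of the table size (a hedged sketch, assuming you can attach a post-mapping process or a procedure to the mapping; the schema and table names are placeholders) is to call DBMS_STATS directly after the load:

    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname          => 'MY_SCHEMA',      -- placeholder schema
        tabname          => 'MY_TEMP_TABLE',  -- placeholder table just loaded by the mapping
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
        cascade          => TRUE);            -- also gather index stats
    END;
    /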

  • Duplicates in the target table.

    Hi, I am working on ODI 10.
    In one of my interfaces, whenever I execute it, some duplicate rows end up in the target table.
    Say the row count in the source table is around 5,000; in the target it comes out around 120,000. Even after selecting distinct rows in the flow, the duplicates still appear.
    Can you please help me solve this?
    Note: in the source table one column contains a surrogate key.
    IKM Oracle Control Append is the KM I am using.

    Using the Control Append IKM will always add the data that is in the Source to the Target, unless you truncate or delete from the Target first. If you have data in the Source that has already been loaded to the Target, and you do not truncate the Target prior to the next load, you will have duplicates.
    Are you truncating the Target or is the Source data always "new" each time the Interface is run?
    Regards,
    Michael Rainey
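    A quick way to confirm this is the cause (a sketch; target_table and surrogate_key are placeholders for your actual names) is to count how many times each key appears in the target after a couple of runs:

    SELECT surrogate_key, COUNT(*) AS copies
    FROM   target_table
    GROUP  BY surrogate_key
    HAVING COUNT(*) > 1;   -- each extra Control Append run adds one more copy per key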

  • How do I make Merge operation into the target table case insensitive?

    Hi All,
    We have a target table that has a VARCHAR2 column called nat_key, and a map that copies data from a source table into the target table.
    Based on whether the values in the nat_key column match between the source and the target, an update or an insert has to be done into the target table.
    Let us say target table T has the following row
    nat_key
    EQUIPMENT
    Now, my source table has the same in a different case
    nat_key
    equipment
    I want these rows to be merged .
    In the OWB map, I have given the property of nat_key column in the target table as 'Match while updating' = 'Yes'. Is there a built in feature in OWB, using which I can make this match as case insensitive?
    Basically, I want to make OWB generate my mapping code as
    if UPPER(target. nat_key)=upper(source.nat_key) then update...else insert.
    Note: There is a workaround with 'Alter Session set nls_sort=binary_ci and nls_comp=linguistic', but this involves calling a pre-mapping operator to set these session parameters.
    Could anyone tell me if there is a simpler way?

    Hi,
    use an expression operator to get nat_key in upper case. Then use this value for the MERGE. Then nat_key will only be stored in upper case in your target table.
    If you have historic data in the target table you have to update nat_key to upper case. This has to be done only once and is not necessary if you start with an empty target table.
    Regards,
    Carsten.
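    As a plain-SQL illustration of what the map would then effectively generate (a sketch only; src and other_col are placeholder names): the expression operator feeds UPPER(nat_key) into the merge, so the comparison behaves case-insensitively and the stored value is always upper case.

    MERGE INTO t
    USING (SELECT UPPER(nat_key) AS nat_key, other_col FROM src) s   -- upper-cased by the expression operator
       ON (t.nat_key = s.nat_key)
     WHEN MATCHED THEN
          UPDATE SET t.other_col = s.other_col
     WHEN NOT MATCHED THEN
          INSERT (nat_key, other_col) VALUES (s.nat_key, s.other_col);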

  • Is it possible to get the updated table records based on date & time?

    Is it possible to get the updated table records based on date & time in Oracle?
    Thanks in advance.

    No, actually I am asking about records updated using UPDATE or DELETE statements, not INSERT statements. Is it possible?
    I think we can do it using a trigger on each table, but the problem is that if I have 20 tables I would have to write 20 triggers, and I don't want that.
    Of course it's still possible. Typically you'll find applications with a LAST_UPDATE column, probably a LAST_UPDATED_BY column, and so on. You don't say what your business need is: if you just want a one-off query of updates to particular records and have a recent version of Oracle, then flashback query may well help; if you want to record update timestamps you either have to modify the tables, or write some code to store your updates in an audit table somewhere.
    Niall Litchfield
    http://www.orawin.info/
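    For the flashback route, here is a hedged sketch (EMP is just an example table, and the database needs enough undo retention to cover the window you query) of a flashback versions query that shows recent UPDATE and DELETE operations with their timestamps:

    SELECT versions_starttime,
           versions_operation,   -- 'U' = update, 'D' = delete, 'I' = insert
           e.*
    FROM   emp VERSIONS BETWEEN TIMESTAMP
               SYSTIMESTAMP - INTERVAL '1' HOUR AND SYSTIMESTAMP e
    WHERE  versions_operation IN ('U', 'D');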
