Duplicates in the target table.
Hi, I am working on ODI 10.
In one of my interfaces, whenever I execute it, duplicate rows appear in the target table.
Say the source table has around 5,000 rows; the target ends up with around 120,000. Even after enabling distinct rows in the flow control, duplicates still appear.
Can you please help me solve this?
Note: one column in the source table contains a surrogate key.
The KM I am using is IKM Oracle Control Append.
Using the Control Append IKM will always add the data that is in the Source to the Target, unless you truncate or delete from the Target first. If you have data in the Source that has already been loaded to the Target, and you do not truncate the Target prior to the next load, you will have duplicates.
Are you truncating the Target or is the Source data always "new" each time the Interface is run?
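The behaviour is easy to reproduce outside ODI. A small SQLite sketch (table and column names made up for illustration) shows both runs — appending without truncating duplicates the rows, truncating first does not:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE src (id INTEGER, name TEXT)")
cur.execute("CREATE TABLE tgt (id INTEGER, name TEXT)")
cur.executemany("INSERT INTO src VALUES (?, ?)", [(1, "A"), (2, "B")])

def control_append(truncate_first):
    # Mimics IKM Oracle Control Append: optionally clear the target, then append.
    if truncate_first:
        cur.execute("DELETE FROM tgt")
    cur.execute("INSERT INTO tgt SELECT * FROM src")

control_append(truncate_first=False)
control_append(truncate_first=False)   # second run without truncate -> duplicates
print(cur.execute("SELECT COUNT(*) FROM tgt").fetchone()[0])  # 4

control_append(truncate_first=True)    # truncate first -> exactly the source rows
print(cur.execute("SELECT COUNT(*) FROM tgt").fetchone()[0])  # 2
```

If the source keeps its history, set the truncate option on the IKM; if only new rows should arrive each run, an incremental-update IKM keyed on the surrogate key is the usual alternative.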
Regards,
Michael Rainey
Similar Messages
-
How to sort columns in the target table
I have a simple mapping which I am trying to design. There is only one table in the source and one in the target. There are no filter conditions; I just want the target table to be sorted.
Literally, say
Src is source table has 3 columns x,y,z
Trg is dest table and has 3 columns a,b,c
x--->a
y---->b
z---->c
The SQL should be
select x,y,z from src order by x,y.
I could do the mapping, but I could not get the ORDER BY to work.
IKM used: IKM BIAPPS Oracle Incremental Update
Why can't you use a simple UPDATE command in an Execute SQL Task, as below?
DROP TABLE SSN
DROP TABLE STAGING
DROP TABLE STUDENT
CREATE TABLE SSN(pn_id VARCHAR(100),ssn BIGINT)
INSERT INTO SSN VALUES('000616850',288258466)
INSERT INTO SSN VALUES('002160790',176268917)
CREATE TABLE Staging (ssn BIGINT, id INT, pn_id BIGINT, name VARCHAR(100), subject VARCHAR(100),grade VARCHAR(10), [academic year] INT, comments VARCHAR(100))
INSERT INTO Staging VALUES(288258466, 1001, '770616858','Sally Johnson', 'English','A', 2005,'great student')
INSERT INTO Staging VALUES(176268917, 1002, '192160792','Will Smith', 'Math','C', 2014,'no comments')
INSERT INTO Staging VALUES(444718562, 1003, '260518681','Mike Lira', 'Math','B', 2013,'no comments')
CREATE TABLE Student(id INT,pn_id BIGINT,subject VARCHAR(100), [academic year] INT, grade VARCHAR(10), comments VARCHAR(100) )
INSERT INTO Student VALUES(1001, '000616850', NULL,NULL,NULL ,NULL)
INSERT INTO Student VALUES(1002, '002160790', NULL,NULL,NULL ,NULL)
UPDATE B SET Subject = C.Subject, [academic year] = C.[academic year], grade = C.grade, comments = C.comments
FROM SSN A INNER JOIN Student B
ON A.pn_id=B.pn_id INNER JOIN Staging C
ON A.ssn = C.ssn
SELECT * FROM Student
Regards, RSingh -
The size of the target table grows abnormally
Hi all,
I am currently using OWB (version 9.2.0.4) to feed some tables.
We have created a new 9.2.0.5 database for a new data warehouse.
I have an issue that I really cannot explain: the size of the target tables keeps increasing.
Take the example of a parameter table that contains 4 fields and only 12 rows.
CREATE TABLE SSD_DIM_ACT_INS (
  ID_ACT_INS INTEGER,
  COD_ACT_INS VARCHAR2(10 BYTE),
  LIB_ACT_INS VARCHAR2(80 BYTE),
  CT_ACT_INS VARCHAR2(10 BYTE)
)
TABLESPACE IOW_OIN_DAT
PCTUSED 0
PCTFREE 10
INITRANS 1
MAXTRANS 255
STORAGE (
  INITIAL 1M
  MINEXTENTS 1
  MAXEXTENTS 2147483645
  PCTINCREASE 0
  BUFFER_POOL DEFAULT
)
LOGGING
NOCACHE
NOPARALLEL;
This table is fed by a mapping, and I use the update/insert option, which generates a MERGE.
First the table is empty; I run the mapping and it adds 14 rows.
The size of the table is now 5 MB!!
Then I delete 2 rows by SQL with TOAD.
I run the mapping again. It updates 12 rows and adds 2 rows.
At this point, the size of the table has increased by 2 MB (1 MB per row!!).
The size of the table is now 7 MB!!
I do the same again and I get a 9 MB table.
When I delete 2 rows with a SQL statement and recreate them manually, the size of the table does not change.
When I create a copy of the table with an INSERT ... SELECT statement, the size is 1 MB, which is normal.
Could someone explain to me how this is possible?
Is it a problem with the database? With the configuration of OWB?
What should I check?
Thank you for your help.
Hi all,
We have found the reason for the increase.
Each mapping has a HINT which defaults to PARALLEL APPEND. As I understand it, this makes OWB generate a direct-path insert, which always allocates new space above the table's high-water mark instead of reusing space freed by deletes.
We have changed each one to PARALLEL NOAPPEND and now it's correct. -
In OWB I need to update the target table matching and updating on the same field
In OWB I am trying to update the target table where the match and the update are on the same field; can this be done? I am getting a Match Merge error saying you cannot update and match on the same field. But in SQL my statement is:
Update table
set irf = 0
where irf = 1
and process_id = 'TEST'
How do I do this in OWB? The table name is temp.
fields in the table
field1 number
field2 varchar2(10)
field3 date
values in the table are example
0,'TEST',05/29/2009
9,'TEST',05/29/2009
0,'TEST1',03/01/2009
1,'TEST1',03/01/2009
In the above example I need to update the first row field1 to 1.
Update temp
set field1 = 1
where field1 = 0
and field2 = 'TEST'
when I run this I just need one row to be updated and it should look like this below
1,'TEST',05/29/2009
9,'TEST',05/29/2009
0,'TEST1',03/01/2009
1,'TEST1',03/01/2009
But when I run my mapping I get the rows below; the second row, with 9, is also getting updated to 1.
1,'TEST',05/29/2009
1,'TEST',05/29/2009
0,'TEST1',03/01/2009
1,'TEST1',03/01/2009 -
Issue with INSERT INTO, throws a primary key violation error even if the target table is empty
Hi,
I am running a simple
INSERT INTO Table 1 (column 1, column 2, ....., column n)
SELECT column 1, column 2, ....., column n FROM Table 2
Table 1 and Table 2 have same definition(schema).
Table 1 is empty and Table 2 has all the data. Column 1 is primary key and there is NO identity column.
This statement still throws a primary key violation error. I am clueless about this.
How can this happen when the target table is totally empty?
Chintu
Nope, that's not true.
Either you're not inserting into the right table, or in the background some trigger code is firing and inserting into a table that causes the PK violation.
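A third thing worth ruling out: duplicate key values inside the source itself, which also fail the INSERT ... SELECT even when the target is empty. A SQLite sketch with illustrative names:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE t2 (c1 INTEGER, c2 TEXT)")
cur.execute("CREATE TABLE t1 (c1 INTEGER PRIMARY KEY, c2 TEXT)")
# The source carries the same key twice; the target is empty.
cur.executemany("INSERT INTO t2 VALUES (?, ?)", [(1, "a"), (1, "b")])

try:
    cur.execute("INSERT INTO t1 SELECT c1, c2 FROM t2")
except sqlite3.IntegrityError as e:
    print("violation:", e)

# Find the offending keys in the source:
dups = cur.execute(
    "SELECT c1, COUNT(*) FROM t2 GROUP BY c1 HAVING COUNT(*) > 1"
).fetchall()
print(dups)  # [(1, 2)]
```

The same GROUP BY ... HAVING COUNT(*) > 1 check against Table 2 will confirm or eliminate this cause quickly.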
Visakh -
How to gather stats on the target table
Hi
I am using OWB 10gR2.
I have created a mapping with a single target table.
I have checked the mapping configuration 'Analyze Table Statements'.
I have set target table property 'Statistics Collection' to 'MONITORING'.
My requirement is to gather stats on the target table, after the target table is loaded/updated.
According to Oracle's OWB 10gR2 User Document (B28223-03, Page#. 24-5)
Analyze Table Statements
If you select this option, Warehouse Builder generates code for analyzing the target
table after the target is loaded, if the resulting target table is double or half its original
size.
My issue is that when my target table size has not doubled or halved, the target table DOES NOT get analyzed.
I am looking for a way or settings in OWB 10gR2, to gather stats on my target table no matter its size after the target table is loaded/updated.
Thanks for your help in advance...
~Salil
Hi,
Unfortunately, we have had to disable automatic stats gathering on the 10g database.
My requirement needs to extract data from one database and then load into my TEMP tables and then process it and finally load into my datawarehouse tables.
So I need to make sure to analyze my TEMP tables after they are truncated and loaded and subsequently updated, before I can process the data and load it into my datawarehouse tables.
Also I need to truncate All TEMP tables after the load is completed to save space on my target database.
If we keep the automatic stats ON my target 10g database then it might gather stats for those TEMP tables which are empty at the time of gather stat.
Any ideas to overcome this issue are appreciated.
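For what it's worth, one way around the double-or-half heuristic is to gather stats explicitly after each TEMP table load instead of relying on OWB's generated ANALYZE. This is only a sketch: it assumes a python-oracledb connection driving the load, and `gather_stats` is a hypothetical helper, not anything OWB generates.

```python
# Hypothetical post-load step: call DBMS_STATS directly so TEMP tables are
# analyzed after every truncate-and-reload, regardless of how much they grew.
STATS_CALL = """
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => :owner,
    tabname => :table_name,
    cascade => TRUE);
END;
"""

def gather_stats(conn, owner, table_name):
    # `conn` is assumed to be an open python-oracledb connection.
    with conn.cursor() as cur:
        cur.execute(STATS_CALL, owner=owner, table_name=table_name)
```

Inside OWB itself, the equivalent is a post-mapping process calling the same DBMS_STATS procedure, which sidesteps the size check entirely.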
Thanks
Salil -
Removing duplicates in the Internal Table
Dear friends,
Could any one of you kindly help me with code to delete the duplicates in an internal table? Each duplicate should be counted, and the report should display each message along with the number of times it appeared.
Thank you,
Best Regards,
subramanyeshwer
You can try something like this.
report zrich_0001.
data: begin of itab1 occurs 0,
fld1 type c,
fld2 type c,
fld3 type c,
end of itab1.
data: begin of itab2 occurs 0,
fld1 type c,
fld2 type c,
fld3 type c,
end of itab2.
data: counter type i.
itab1 = 'ABC'. append itab1.
itab1 = 'DEF'. append itab1.
itab1 = 'GHI'. append itab1.
itab1 = 'DEF'. append itab1.
itab1 = 'GHI'. append itab1.
itab1 = 'DEF'. append itab1.
itab2[] = itab1[].
sort itab1 ascending.
delete adjacent duplicates from itab1.
loop at itab1.
clear counter.
loop at itab2 where fld1 = itab1-fld1
and fld2 = itab1-fld2
and fld3 = itab1-fld3.
counter = counter + 1.
endloop.
write:/ itab1-fld1, itab1-fld2, itab1-fld3,
'Number of occurrences:', counter.
endloop.
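For comparison, the same count-and-report logic can be sketched in Python with collections.Counter (the data is hard-coded to match the ABAP example above):

```python
from collections import Counter

# Same rows as the ABAP itab1: DEF appears 3 times, GHI twice, ABC once.
messages = ["ABC", "DEF", "GHI", "DEF", "GHI", "DEF"]

# Counter deduplicates and counts in one pass.
counts = Counter(messages)
for msg, n in counts.items():
    print(f"{msg}  Number of occurrences: {n}")
```

The ABAP version does the same thing with a copy of the table: dedupe one copy, then count matches in the untouched copy.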
Regards,
Rich Heilman -
How to soft delete a row from the target table?
Could someone help me on this requirement?
How do I implement the below logic using only ODI? I am able to implement it with DELETE_FLAG set to "N".
I want to mark the latest record with the flag "N" and all the previous records with the flag "D".
Thanks a lot in advance.
I have a source table "EMP".
EMP
EMPID FIRST_NAME
1 A
2 B
First name is changed from A to C and then, C to D etc. For each data change, I would add a target row and mark the latest row as "N" and the rest as "D". The target table would contain the following data:
Target_EMP
EMPID FIRST_NAME DELETE_FLAG
1 A D
1 C D
1 D N
The problem is that I can't delete the row because it demands that I fill the mandatory field first. This happens when the key field is ROWID; in other cases the delete is successful.
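The flag rule itself (latest row per EMPID gets "N", all older rows "D") can be sketched like this; in ODI it typically becomes an update step in a customised IKM or an analytic function in the mapping (an assumption — other designs work too). `load_seq` is a made-up ordering column:

```python
# Target rows as (empid, first_name, load_seq); higher load_seq = newer.
rows = [
    (1, "A", 1),
    (1, "C", 2),
    (1, "D", 3),
]

# Find the latest load_seq per empid.
latest = {}
for empid, _, seq in rows:
    latest[empid] = max(latest.get(empid, 0), seq)

# Flag: "N" for the newest row of each empid, "D" for everything older.
flagged = [
    (empid, name, "N" if seq == latest[empid] else "D")
    for (empid, name, seq) in rows
]
print(flagged)
# [(1, 'A', 'D'), (1, 'C', 'D'), (1, 'D', 'N')]
```

In SQL terms this is the classic "flag all rows D, then re-flag the max-sequence row per key to N" two-step.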
-
How do I make Merge operation into the target table case insensitive?
Hi All,
We have a target table with a VARCHAR2 column called nat_key, and a map that copies data from a source table into the target table.
Based on whether the values in the nat_key column match between the source and the target, an update or an insert has to be done into the target table.
Let us say target table T has the following row
nat_key
EQUIPMENT
Now, my source table has the same in a different case
nat_key
equipment
I want these rows to be merged.
In the OWB map, I have given the property of nat_key column in the target table as 'Match while updating' = 'Yes'. Is there a built in feature in OWB, using which I can make this match as case insensitive?
Basically, I want OWB to generate my mapping code as:
if UPPER(target.nat_key) = UPPER(source.nat_key) then update ... else insert.
Note: There is a workaround with 'Alter Session set nls_sort=binary_ci and nls_comp=linguistic', but this involves calling a pre-mapping operator to set these session parameters.
Could anyone tell me if there is a simpler way?
Hi,
use an expression operator to get nat_key in upper case. Then use this value for the MERGE. Then nat_key will only be stored in upper case in your target table.
If you have historic data in the target table you have to update nat_key to upper case. This has to be done only once and is not necessary if you start with an empty target table.
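A SQLite sketch of the suggested approach (names illustrative): upper-case the key on the way in, and the merge matches regardless of the source's case. `INSERT OR REPLACE` stands in for the MERGE here.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE tgt (nat_key TEXT PRIMARY KEY, val TEXT)")
cur.execute("INSERT INTO tgt VALUES ('EQUIPMENT', 'old')")

src = [("equipment", "new")]  # same key, different case

# Normalise the key with UPPER() before merging, as the expression
# operator in the map would do.
for key, val in src:
    cur.execute(
        "INSERT OR REPLACE INTO tgt (nat_key, val) VALUES (UPPER(?), ?)",
        (key, val),
    )

print(cur.execute("SELECT nat_key, val FROM tgt").fetchall())
# [('EQUIPMENT', 'new')]
```

The one-time cleanup of historic rows is then just `UPDATE tgt SET nat_key = UPPER(nat_key)`.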
Regards,
Carsten. -
I have a source table with 10 records and a target table with 15 records. My question: using the Table Comparison transform, how do I delete the unmatched records from the target table?
Hi Kishore,
First identify deleted records by selecting "Detect deleted rows from comparison table" feature in Table Comparison
Then Use Map Operation with Input row type as "delete" and output row type as "delete" to delete records from target table. -
How to execute a query to store the result in the target table column?
Hi
Source: Oracle
Target: Oracle
ODI: 11g
I have an interface which loads data from a source table to a target. Some of the columns in the target table are automatically mapped to the source table, and some remain unmapped. For those that remain unmapped, I want to load the column values by executing a query. Can anybody tell me where I should put that query so that its result becomes the value of the specific column?
-Thanks,
Shrinivas
Actually, I select the column in the target table, then in the Property Inspector --> Mapping Properties --> Implementation tab I have written the query which retrieves the value for that column. Is that the right place to write the query? How can I do this?
-Shrinivas -
Referring to target table records in the transfer query
Hi all
I am trying to load some records into the target table using my job in DI. The query I should use is a bit tricky: while loading records into the target table, I need to check whether one of the column values has already been used in a transferred record, because that column must be unique. DISTINCT gives me unique records, but I need a unique value in one column across the whole table.
I noticed it's not possible to refer to a target column in the Query object to see whether a value has already been used there. How can I address this requirement? Do you have any experience with this?
I write the SQL Code here which I should use in Query object in Data Integrator:
In the target table, every city should just come in one and only record.
INSERT INTO target (
Effective_From_Date,
Effective_To_Date,
Business_Unit_ID,
Provider_ID
)
SELECT DISTINCT
a.Effective_From_Date,
b.Effective_To_Date,
d.city_ID,
d.provider_ID
FROM
table1 a
INNER JOIN table2 b
ON (a.typeID = b.typeID)
INNER JOIN table3 c
ON (a.professionID = c.professionID)
INNER JOIN table4 d
ON (c.city_ID = d.city_ID)
WHERE NOT EXISTS
(SELECT * FROM target e
WHERE d.city_ID = e.Business_Unit_ID)
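In plain terms the requirement is: insert a candidate row only if its city is not already in the target, including rows inserted earlier in the same load (which is exactly what DISTINCT cannot do). A SQLite sketch of that behaviour, with an illustrative schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE target (Business_Unit_ID INTEGER, Provider_ID INTEGER)")
cur.execute("INSERT INTO target VALUES (100, 1)")   # city 100 already loaded

candidates = [(100, 2), (200, 3), (200, 4)]          # (city_ID, provider_ID)

# Insert each candidate only if its city is absent from the target.
# Because rows go in one at a time, later candidates also see the rows
# inserted earlier in this same run.
for city_id, provider_id in candidates:
    cur.execute(
        "INSERT INTO target (Business_Unit_ID, Provider_ID) "
        "SELECT ?, ? WHERE NOT EXISTS "
        "(SELECT 1 FROM target WHERE Business_Unit_ID = ?)",
        (city_id, provider_id, city_id),
    )

print(cur.execute("SELECT * FROM target ORDER BY Business_Unit_ID").fetchall())
# [(100, 1), (200, 3)]
```

In DI, the lookup or target-as-source techniques achieve the same effect without hand-written SQL.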
Thanks.
You can use the target table as a source table as well: just drag it into your dataflow again and select Source instead of Target this time. Then you can outer join the new source target table to your query (I might do this in a second query instead of trying to add it to the existing one).
You could also use a lookup function to check the target table. In this case you'd also have to add a second query to check the result of your lookup.
Worst case, you can just throw that whole SQL query you've already created into a SQL transform and then use that as your source. -
How to delete rows in the target table using interface
hi guys,
I have an interface with source src and target tgt; both have a company_code column. In the interface, if a record with the same company_code already exists, we need to delete it and insert the new one from the source; if it is not available, we just insert it.
Please tell me how to achieve this.
Regards,
sai.gatha wrote:
For this do we need to apply CDC?
I am not clear on how to delete rows in the target. Can you please share the steps to be followed?
If you are able to track the deletes in your source data then you don't need CDC. If you can't, however, it might be an option.
I'll give you an example from what I'm working on currently.
We have an ODS, some 400+ tables. Some are needed 'Real-Time' so we are using CDC. Some are OK to be batch loaded overnight.
CDC captures the Deletes no problem so the standard knowledge modules with a little tweaking for performance are doing the job fine, it handles deletes.
The overnight batch process, however, cannot track a delete, as the row is physically gone by the time we run the scenarios. So we load all the inserts/updates using a last-modified date, then pull all the PKs from the source and delete target rows using a NOT EXISTS looking back at the collection (staging) table. We had to write our own KM for that.
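That batch pattern — delete target rows whose PK no longer exists in the collection/staging table — looks roughly like this. A SQLite sketch with illustrative names:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE target (pk INTEGER PRIMARY KEY)")
cur.execute("CREATE TABLE staging (pk INTEGER PRIMARY KEY)")
cur.executemany("INSERT INTO target VALUES (?)", [(1,), (2,), (3,)])
cur.executemany("INSERT INTO staging VALUES (?)", [(1,), (3,)])  # 2 was deleted at source

# Delete target rows with no matching PK in the staging (collection) table.
cur.execute(
    "DELETE FROM target WHERE NOT EXISTS "
    "(SELECT 1 FROM staging s WHERE s.pk = target.pk)"
)

remaining = cur.execute("SELECT pk FROM target ORDER BY pk").fetchall()
print(remaining)  # [(1,), (3,)]
```

Note this only works if the staging table holds the full key set from the source; with an incremental (last-modified) extract, the staging PK pull has to be a separate full-key query, as described above.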
All I'm saying to the OP is that while you have insert/update flags to set on the target datastore to influence the API code, there is nothing stopping you extending this logic with the UD flags and writing your own routines for handling the deletes. It all depends on how efficiently you can identify rows that have been deleted. -
How can I insert an aggregated column name as a string in the target table?
I have a large source table, with almost 70 million records. I need to pull the sums of four of the columns into another target table, but instead of having the same four target columns I just want to have two.
So, let's take SUM(col1), SUM(col2), SUM(col3), and SUM(col4) from the source DB & insert them into the target like this:
SOURCE_COLUMN | AMOUNT
col1          | SUM_AMOUNT
col2          | SUM_AMOUNT
col3          | SUM_AMOUNT
col4          | SUM_AMOUNT
I know how to do this in four separate Data Flows using the source, an Aggregation transformation, a Derived Column (to hard-code the SOURCE_COLUMN label), and a destination... but with this many records it takes over 3 hours to run, because it has to loop through the records four separate times instead of once. Isn't there a way to do this with one Data Flow? I'm thinking maybe Conditional Split?
Any help is appreciated, thanks!
Hi,
This could be achieved using the UNPIVOT transformation. The sample below uses this source query:
SELECT 1 AS COL1, 2 AS COL2, 3 AS COL3, 4 AS COL4
Set up the UNPIVOT transformation with the four columns as inputs; its output will have one (SOURCE_COLUMN, AMOUNT) row per input column.
Hope this helps.
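To see why this is a single pass, here is the same aggregate-then-unpivot shape sketched in Python (toy data, not SSIS): aggregate once, then fan the four sums out into (SOURCE_COLUMN, SUM_AMOUNT) rows.

```python
rows = [
    {"col1": 10, "col2": 1, "col3": 5, "col4": 2},
    {"col1": 20, "col2": 3, "col3": 5, "col4": 4},
]

# One aggregation pass over the data...
sums = {c: sum(r[c] for r in rows) for c in ("col1", "col2", "col3", "col4")}

# ...then unpivot the four sums into (SOURCE_COLUMN, SUM_AMOUNT) pairs.
unpivoted = [(col, total) for col, total in sums.items()]
print(unpivoted)
# [('col1', 30), ('col2', 4), ('col3', 10), ('col4', 6)]
```

In the SSIS layout this is Aggregate followed by Unpivot in one Data Flow, so the 70M rows are read once instead of four times.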
Best Regards Sorna -
Loading the different result sets in the same sequence for the target table
Dear all,
I have 5 tables as my source, and I made 3 joins P, Q, R. The result sets of these 3 joins are loaded into target table X, through 3 different target operators bound to the same table.
I created one sequence, say Y, since my target table has a primary key, and mapped it to the three different target operators for the same target table.
After deploying and executing successfully, I am able to load the data from the three join result sets, but each result set gets a different range of sequence numbers.
I am looking to load data like this:
If the first result set P has 10 records, the second result set Q has 20, and the third result set R has 30, then while loading the first target the sequence gives the 10 records values 1..10, while loading the second result set it creates values from 11...20, and while loading the third target it creates values from 21...30.
But I want each result set numbered starting from 1 (1 to 10 for the first, and so on), not a continuation of the sequence for each result set.
how can we achieve this in owb?
any solution for this will be appreciated.
thank you
Kumar
My design is like the following:
SRC1
----> Join1 ----> Target1 (Table X) <---- Seq1
SRC2
SRC3
----> Join2 ----> Target2 (Table X) <---- Seq1
SRC4
----> Join3 ----> Target3 (Table X) <---- Seq1
SRC5
Here the three targets are for the same table X, and the sequence is the same, i.e. Seq1.
If the first join has 10 rows, Seq1 generates values 1 to 10 while loading Target1.
But while loading the second target, the same Seq1 continues from 11; I am looking to load Target2 and Target3 starting from sequence value 1, not from 11.
As per your comments :
you want to load 3 sources to one target with same sequence numbers?
yes
Are you matching the other two sources against the first source on the id provided by the sequence (since this is the primary key of the table)?
No
can you please tell me how to approach for this?
Thank You
Kumar