How to gather stats on the target table

Hi
I am using OWB 10gR2.
I have created a mapping with a single target table.
I have checked the mapping configuration 'Analyze Table Statements'.
I have set target table property 'Statistics Collection' to 'MONITORING'.
My requirement is to gather stats on the target table, after the target table is loaded/updated.
According to Oracle's OWB 10gR2 User Guide (B28223-03, page 24-5):
Analyze Table Statements
If you select this option, Warehouse Builder generates code for analyzing the target
table after the target is loaded, if the resulting target table is double or half its original
size.
My issue is that when my target table's size has not doubled or halved, the target table DOES NOT get analyzed.
I am looking for a way or a setting in OWB 10gR2 to gather stats on my target table after it is loaded/updated, regardless of its size.
Thanks for your help in advance...
~Salil
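
One way around the size heuristic is to gather the stats yourself after the load, for example from a post-mapping process that calls DBMS_STATS. A minimal sketch, assuming a target table TGT_TABLE in schema WH_OWNER (both names hypothetical):

-- Hypothetical names: WH_OWNER (target schema), TGT_TABLE (target table).
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'WH_OWNER',
    tabname          => 'TGT_TABLE',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,  -- let Oracle choose the sample size
    cascade          => TRUE);                        -- also gather index stats
END;
/

Wrapped in a stored procedure, this can be attached as a post-mapping process so it runs on every execution, no matter how much the table grew.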

Hi
Unfortunately we have to disable automatic stat gather on the 10g database.
My requirement is to extract data from one database, load it into my TEMP tables, process it, and finally load it into my data warehouse tables.
So I need to make sure my TEMP tables are analyzed after they are truncated, loaded, and subsequently updated, before I process the data and load it into my data warehouse tables.
Also, I need to truncate all TEMP tables after the load is completed, to save space on my target database.
If we keep automatic stats ON for my target 10g database, it might gather stats on those TEMP tables while they are empty.
Any ideas to overcome this issue are appreciated.
Thanks
Salil
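
One hedged workaround, instead of disabling automatic stats for the whole database: lock the statistics on the TEMP tables so the 10g automatic job skips them, then gather explicitly right after each load. A sketch (WH_OWNER and TMP_STAGE are hypothetical names):

-- Lock once, so the nightly automatic job leaves this table alone:
EXEC DBMS_STATS.LOCK_TABLE_STATS('WH_OWNER', 'TMP_STAGE');

-- After each truncate/load/update cycle, gather explicitly:
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'WH_OWNER',
    tabname => 'TMP_STAGE',
    force   => TRUE,    -- override the lock for this explicit call
    cascade => TRUE);
END;
/

This way the automatic job can stay enabled for the rest of the database, and the TEMP tables never carry stats taken while they were empty.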

Similar Messages

  • How to sort columns in the target table

    I have a simple mapping which I am trying to design. There is only one table on the source and one in the target. There are no filter conditions; the only thing is that I want the target table to be sorted.
    Literally, say:
    Src is source table has 3 columns x,y,z
    Trg is dest table and has 3 columns a,b,c
    x--->a
    y---->b
    z---->c
    The SQL should be
    select x,y,z from src order by x,y.
    I could build the mapping, but I could not get the ORDER BY into it.
    IKM used: IKM BIAPPS Oracle Incremental Update
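
    A note on this one: rows in a relational table have no guaranteed order, so an ORDER BY on the load itself buys nothing durable; the sort belongs on whatever query reads the target. Using the names from the post, the reading side would simply be:

    SELECT a, b, c FROM trg ORDER BY a, b;

    If the requirement is really about the generated load statement, the ORDER BY would have to be injected into the IKM's insert step, which means customizing the KM.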


  • How to delete rows in the target table using interface

    hi guys,
    I have an interface with source src and target tgt; both have a company_code column. In the interface, if a record with the company_code already exists, we need to delete it and insert the new one from the source; if it is not available, we just need to insert it.
    Please tell me how to achieve this.
    Regards,
    sai.

    gatha wrote:
    For this do we need to apply CDC? I am not clear on how to delete rows under the target. Can you please share the steps to be followed?

    If you are able to track the deletes in your source data then you don't need CDC. If, however, you can't, then it might be an option.
    I'll give you an example from what I'm working on currently.
    We have an ODS, some 400+ tables. Some are needed 'Real-Time' so we are using CDC. Some are OK to be batch loaded overnight.
    CDC captures the Deletes no problem so the standard knowledge modules with a little tweaking for performance are doing the job fine, it handles deletes.
    The overnight batch process, however, cannot track a delete as the row is physically gone by the time we run the scenarios. So we load all the inserts/updates using a last-modified date, then pull all the PKs from the source and delete target rows using a NOT EXISTS looking back at the collection (staging) table, as sketched below. We had to write our own KM for that.
    All I'm saying to the OP is that whilst you have Insert/Update flags to set on the target datastore to influence the API code, there is nothing stopping you from extending this logic with the UD flags and writing your own routines for what to do with the deletes. It all depends on how efficiently you can identify rows that have been deleted.
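
    As a rough sketch of that batch-delete step (all names hypothetical): after the collection table has been loaded with the full set of source primary keys, target rows that no longer exist at the source are removed with:

    -- TGT = target table, C$_SRC = collection/staging copy of the source PKs.
    DELETE FROM tgt t
     WHERE NOT EXISTS (SELECT 1
                         FROM c$_src s
                        WHERE s.pk_col = t.pk_col);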

  • How to update fields in the target table in correspondence with the source file values

    Environment: Windows 7, SQL Server 2008 R2
    Application: Microsoft SQL Server Management Studio 2008 R2, Business Intelligence Development Studio 2008 - SSIS
    SSIS competency level: Novice
    Problem: I have been trying to update some of the fields in the destination table (the Student table) with reference to the data in the Staging and SSN tables. I was able to insert/load new data into the destination using a Lookup transformation with SSN as the driver (data mapping), but I could not work out how to update some of the fields in the Student table while keeping the original pn_id of both tables (SSN and Student), because the pn_id already exists in the SSN table and the Student table. There are other records associated with the pn_id, so I am not allowed to update the pn_id in the destination tables. For example,
    SSN Table (pn_id,ssn)
    ('000616850',288258466)
    ('002160790',176268917)
    Staging Table (ssn, id, pn_id, name, subject, grade, academic year, comments)
    (288258466, 1001, '770616858', Sally Johnson, English, A, 2005, 'great student')
    (176268917, 1002, '192160792', Will Smith, Math, C, 2014, 'no comments')
    (444718562, 1003, '260518681', Mike Lira, Math, B, 2013, 'no comments')
    Student Table (destination table) (id, pn_id, subject, academic year, grade, comments):
    (1001, '000616850', NULL, NULL, NULL, NULL)
    (1002, '002160790', NULL, NULL, NULL, NULL)
    Expected Results:
    My goal is to have student table updated as the following:
    Student Table
    (1001, '000616850', 'English', 'A', 2005, 'great student')
    (1002, '002160790', 'Math', 'C', 2014, 'no comments')
    please advise

    Why can't you use a simple UPDATE command in an Execute SQL Task, as below?
    DROP TABLE SSN
    DROP TABLE Staging
    DROP TABLE Student
    CREATE TABLE SSN(pn_id VARCHAR(100), ssn BIGINT)
    INSERT INTO SSN VALUES('000616850', 288258466)
    INSERT INTO SSN VALUES('002160790', 176268917)
    CREATE TABLE Staging (ssn BIGINT, id INT, pn_id VARCHAR(100), name VARCHAR(100), subject VARCHAR(100), grade VARCHAR(10), [academic year] INT, comments VARCHAR(100))
    INSERT INTO Staging VALUES(288258466, 1001, '770616858', 'Sally Johnson', 'English', 'A', 2005, 'great student')
    INSERT INTO Staging VALUES(176268917, 1002, '192160792', 'Will Smith', 'Math', 'C', 2014, 'no comments')
    INSERT INTO Staging VALUES(444718562, 1003, '260518681', 'Mike Lira', 'Math', 'B', 2013, 'no comments')
    -- pn_id stays VARCHAR so leading zeros like '000616850' survive
    CREATE TABLE Student(id INT, pn_id VARCHAR(100), subject VARCHAR(100), [academic year] INT, grade VARCHAR(10), comments VARCHAR(100))
    INSERT INTO Student VALUES(1001, '000616850', NULL, NULL, NULL, NULL)
    INSERT INTO Student VALUES(1002, '002160790', NULL, NULL, NULL, NULL)
    -- update through the alias B so the joined rows drive the update
    UPDATE B SET subject = C.subject, [academic year] = C.[academic year], grade = C.grade, comments = C.comments
    FROM SSN A INNER JOIN Student B
    ON A.pn_id = B.pn_id INNER JOIN Staging C
    ON A.ssn = C.ssn
    SELECT * FROM Student
    Regards, RSingh

  • Issue with INSERT INTO: throws a primary key violation error even when the target table is empty

    Hi,
    I am running a simple
    INSERT INTO Table 1 (column 1, column 2, ....., column n)
    SELECT column 1, column 2, ....., column n FROM Table 2
    Table 1 and Table 2 have the same definition (schema).
    Table 1 is empty and Table 2 has all the data. Column 1 is the primary key and there is NO identity column.
    This statement still throws a primary key violation error. I am clueless about this.
    How can this happen when the target table is totally empty?
    Chintu

    Nope, that's not true.
    Either you're not inserting into the right table, or in the background some other trigger code is getting fired which inserts into a table that causes a PK violation.
    Visakh
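
    One more thing worth checking before hunting for triggers: even with an empty target, a single INSERT ... SELECT fails if the source itself contains duplicate key values, because the constraint is enforced across all rows of the one statement. A quick diagnostic (Table2 and column1 stand in for the names in the post):

    -- Any rows returned here would explain the PK violation on the insert.
    SELECT column1, COUNT(*) AS cnt
      FROM Table2
     GROUP BY column1
    HAVING COUNT(*) > 1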

  • How to soft delete a row from the target table?

    Could someone help me with this requirement?
    How do I implement the logic below using only ODI? I am able to implement it with the DELETE_FLAG set to "N".
    I want to mark the latest record with the flag "N" and all the previous records with the flag "D".
    Thanks a lot in advance.
    I have a source table "EMP".
    EMP
    EMPID FIRST_NAME
    1 A
    2 B
    The first name is changed from A to C, and then from C to D, etc. For each change, I add a target row and mark the latest row as "N" and the rest as "D". The target table would contain the following data:
    Target_EMP
    EMPID FIRST_NAME DELETE_FLAG
    1 A D
    1 C D
    1 D N

    The problem is that I can't delete the row, because it demands that I fill the mandatory field first. This happens when the key field is ROWID. In other cases the delete is successful.
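
    For the flag logic in the original question, a minimal SQL sketch, assuming the target has some column that orders the versions, e.g. a load timestamp LOAD_TS (the post shows no such column, so this is an assumption): flag every row that has a newer row for the same EMPID as 'D', leaving only the latest as 'N'.

    -- Assumed shape: TARGET_EMP(empid, first_name, delete_flag, load_ts).
    UPDATE target_emp t
       SET t.delete_flag = 'D'
     WHERE EXISTS (SELECT 1
                     FROM target_emp x
                    WHERE x.empid   = t.empid
                      AND x.load_ts > t.load_ts);

    New rows are inserted with delete_flag = 'N', so this single update after each load keeps the flags consistent.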

  • How to execute query to store the result in the target table column ?

    Hi
    Source: Oracle
    Target: Oracle
    ODI: 11g
    I have an interface which loads data from a source table into a target. Some of the columns in the target table are automatically mapped to the source table, while others remain unmapped. For those unmapped columns, I want to load the values by executing a query. Can anybody tell me where I should put that query so its result becomes the value of the specific column?
    -Thanks,
    Shrinivas

    Actually, I select the column in the target table, then in the Property Inspector --> Mapping properties --> Implementation
    tab I have written the query which retrieves the value for that column. Is this the right place to write the query? How can I do this?
    -Shrinivas
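
    If the extra value can be computed by a scalar subquery, that Implementation field is indeed where it goes; whatever is written there is spliced into the generated SELECT, so it must return exactly one value per source row. A hedged sketch (DEPARTMENTS and the SRC alias are made-up names):

    (SELECT d.dept_name
       FROM departments d
      WHERE d.dept_id = SRC.dept_id)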

  • How do I make the MERGE operation into the target table case insensitive?

    Hi All,
    We have a target table with a VARCHAR2 column called nat_key, and a map that copies data from a source table into the target table.
    Based on whether the values in the nat_key column match between the source and the target, an update or an insert has to be done into the target table.
    Let us say target table T has the following row
    nat_key
    EQUIPMENT
    Now, my source table has the same in a different case
    nat_key
    equipment
    I want these rows to be merged.
    In the OWB map, I have set the property of the nat_key column in the target table 'Match while updating' = 'Yes'. Is there a built-in feature in OWB with which I can make this match case insensitive?
    Basically, I want OWB to generate my mapping code as:
    if UPPER(target.nat_key) = UPPER(source.nat_key) then update, else insert.
    Note: There is a workaround with 'Alter Session set nls_sort=binary_ci and nls_comp=linguistic', but this involves calling a pre-mapping operator to set these session parameters.
    Could anyone tell me if there is a simpler way?

    Hi,
    use an expression operator to convert nat_key to upper case, then use this value for the MERGE. nat_key will then only be stored in upper case in your target table.
    If you have historic data in the target table, you have to update nat_key to upper case once; this is not necessary if you start with an empty target table.
    Regards,
    Carsten.
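
    For reference, the hand-written equivalent of the code the original poster wants OWB to generate is a MERGE that upper-cases both sides of the match condition (SOME_COL is a made-up stand-in for the remaining columns):

    MERGE INTO target_table t
    USING source_table s
       ON (UPPER(t.nat_key) = UPPER(s.nat_key))
     WHEN MATCHED THEN
       UPDATE SET t.some_col = s.some_col
     WHEN NOT MATCHED THEN
       INSERT (nat_key, some_col)
       VALUES (s.nat_key, s.some_col);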

  • The size of the target table grows abnormally

    hi all,
    I am currently using OWB version 9.2.0.4 to feed some tables.
    We have created a new 9.2.0.5 database for a new data warehouse.
    I have an issue that I really cannot explain about the growing size of the target tables.
    Take the example of a parameter table that contains 4 fields and only 12 rows:
    CREATE TABLE SSD_DIM_ACT_INS (
      ID_ACT_INS INTEGER,
      COD_ACT_INS VARCHAR2(10 BYTE),
      LIB_ACT_INS VARCHAR2(80 BYTE),
      CT_ACT_INS VARCHAR2(10 BYTE)
    )
    TABLESPACE IOW_OIN_DAT
    PCTUSED 0
    PCTFREE 10
    INITRANS 1
    MAXTRANS 255
    STORAGE (
      INITIAL 1M
      MINEXTENTS 1
      MAXEXTENTS 2147483645
      PCTINCREASE 0
      BUFFER_POOL DEFAULT
    )
    LOGGING
    NOCACHE
    NOPARALLEL;
    This table is fed by a mapping where I use the update/insert option, which generates a MERGE.
    First the table is empty; I run the mapping and add 14 rows.
    The size of the table is now 5 MB!
    Then I delete 2 rows via SQL with TOAD.
    I run the mapping again. It updates 12 rows and adds 2 rows.
    At this point, the size of the table has increased by 2 MB (1 MB per row!).
    The size of the table is now 7 MB!
    I do the same again and I get a 9 MB table.
    When I delete 2 rows with a SQL statement and re-create them manually, the size of the table does not change.
    When I create a copy of the table with an INSERT ... SELECT statement, its size is 1 MB, which is normal.
    Could someone explain to me how this is possible?
    Is it a problem with the database? With the configuration of OWB?
    What should I check?
    Thank you for your help.

    Hi all,
    We have found the reason for the increase.
    Each mapping has a hint which defaults to PARALLEL APPEND. As I understand it, OWB uses this to determine whether an insert allocates new space for the table each time it runs.
    We have changed each one to PARALLEL NOAPPEND and now it's correct.
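
    That behavior matches direct-path inserts: with the APPEND hint, each run writes above the table's high-water mark into freshly allocated extents and never reuses free space left by deletes and updates, so a table with INITIAL 1M grows by at least one extent per run. A conventional insert reuses free space. Roughly (STG_ACT_INS is a made-up staging source):

    -- Direct-path: always allocates new space above the high-water mark.
    INSERT /*+ APPEND */ INTO ssd_dim_act_ins
    SELECT id_act_ins, cod_act_ins, lib_act_ins, ct_act_ins FROM stg_act_ins;

    -- Conventional path: free space in existing blocks is reused.
    INSERT /*+ NOAPPEND */ INTO ssd_dim_act_ins
    SELECT id_act_ins, cod_act_ins, lib_act_ins, ct_act_ins FROM stg_act_ins;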

  • How to delete some data in the target table in a mapping?

    How do I delete some data in the target table in a mapping?
    I extract data from a source table into a target table,
    but before the extract I want to delete some data from the target.
    How do I do that?

    Just to change a bit of terminology in the reply: within the mapping, click on the operator properties and choose TRUNCATE/INSERT.
    Note that TRUNCATE is dependent on constraints, so you will probably have to disable those before doing this. You can of course do DELETE/INSERT...
    Jean-Pierre
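
    On the constraint point: TRUNCATE fails with ORA-02266 while an enabled foreign key references the table, so a typical pre-/post-mapping pair looks like this (table and constraint names hypothetical):

    ALTER TABLE child_table DISABLE CONSTRAINT fk_child_parent;
    TRUNCATE TABLE parent_table;
    -- ... run the mapping load here ...
    ALTER TABLE child_table ENABLE CONSTRAINT fk_child_parent;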

  • How do I write into multiple target tables in DIFFERENT schemas?

    It is easy to have a mapping write its results into 2 or more tables. I now need all these tables to be in different schemas!
    When I create a 2nd warehouse target with a 2nd location and configure this location to be a different schema on the database, validation tells me that everything is okay.
    When I generate it, there are several warnings; when I execute it, it doesn't work :( It complains that it cannot find <something>.
    I'm sorry, I don't have the error message at hand :(
    If you have an idea how I could have different schemas for my tables, please let me know!

    Art,
    Could it be that the target schema into which you installed the runtime components does not have privileges on the tables in the other schemas? You need at least the right privileges (INSERT, UPDATE, DELETE) on the target tables in the other schemas for this to work. Beyond that there should be no problem, assuming your tables are in different modules related to different locations.
    Thanks,
    Mark.
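
    Concretely, the owner of each table in another schema has to grant DML to the user the runtime connects as; something along these lines (all names made up):

    -- Run as OTHER_SCHEMA, the owner of the target table:
    GRANT SELECT, INSERT, UPDATE, DELETE ON target_table TO owb_runtime_user;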

  • Duplicates in the target table.

    Hi, I am working on ODI 10.
    In one of my interfaces, whenever I execute it, duplicates appear in the target table.
    The row count is around 5,000 in the source table, but the target ends up with around 120,000. Even after enabling distinct rows in the flow control, the problem persists.
    Can you please help me solve this?
    Note: in the source table one column contains a surrogate key.
    IKM Oracle Control Append is the KM I am using.

    Using the Control Append IKM will always add the data that is in the Source to the Target, unless you truncate or delete from the Target first. If you have data in the Source that has already been loaded to the Target, and you do not truncate the Target prior to the next load, you will have duplicates.
    Are you truncating the Target or is the Source data always "new" each time the Interface is run?
    Regards,
    Michael Rainey
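
    If the target really should be rebuilt on each run, the usual fix is the IKM's truncate option. Duplicates that have already accumulated can be cleaned up with the classic ROWID dedup (a sketch; TARGET_TABLE and SURROGATE_KEY stand in for the post's names):

    -- Keeps one arbitrary row per key value and deletes the rest.
    DELETE FROM target_table t
     WHERE t.ROWID > (SELECT MIN(x.ROWID)
                        FROM target_table x
                       WHERE x.surrogate_key = t.surrogate_key);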

  • Using the Table Comparison transform, how do I delete unmatched records from the target table?

    I have a source table with 10 records and a target table with 15 records. My question is: using the Table Comparison transform, how do I delete the unmatched records from the target table?

    Hi Kishore,
    First identify the deleted records by selecting the "Detect deleted rows from comparison table" feature in the Table Comparison transform.
    Then use a Map Operation with input row type "delete" and output row type "delete" to delete the records from the target table.

  • Gather Stats on Newly Partitioned Table

    I partitioned an existing table containing 92 million rows. The method was dbms_redefinition, whereby I started the redefinition and then added the indexes and constraints last. After partitioning, I did not gather stats on any of the partitions that were created, and I did not analyze any of the indexes. Then I loaded an additional 4 million records into one of the partitions of the newly partitioned table. I ran DBMS_STATS on this particular partition and it took over 15 hours; normally it takes only 4 hours per partition, so I stopped it. While monitoring the run, it looked like most of the time was spent gathering stats on the indexes. Is this normal for a newly partitioned table? Is there something I can do to prevent it from taking so long when I run gather stats? Oracle version 10.2.0.4.

    -- Fragment from the enclosing procedure; the v_* variables are declared there.
    -- Gather PARTITION statistics
    SYS.DBMS_STATS.gather_table_stats(ownname => upper(v_table_owner), tabname => upper(v_table_name),
        partname => v_table_partition_name, estimate_percent => 20, cascade => FALSE, granularity => 'PARTITION');
    -- Gather GLOBAL INDEX statistics
    FOR i IN (SELECT index_name FROM sys.dba_indexes
               WHERE table_owner = upper(v_table_owner)
                 AND table_name = upper(v_table_name)
                 AND partitioned = 'NO'
               ORDER BY index_name)
    LOOP
        SYS.DBMS_STATS.gather_index_stats(ownname => upper(v_table_owner), indname => i.index_name,
            estimate_percent => 20, degree => NULL);
    END LOOP;
    -- Gather SUB-PARTITION statistics
    SYS.DBMS_STATS.gather_table_stats(ownname => upper(v_table_owner), tabname => upper(v_table_name),
        partname => v_table_subpartition_name, estimate_percent => 20, cascade => TRUE, granularity => 'ALL');

  • Referring to target table records in the transfer query

    Hi all
    I am trying to load some records into the target table with my DI job. The query I should use is a bit tricky: while loading records into the target table, I have to check whether one of the column values has already been used by a transferred record, because I need that column to be unique. DISTINCT gives unique records, but I need a unique value in one column across the whole table.
    I noticed it's not possible to refer to a target column in the Query object to see whether a value has already been used there. How can I address this requirement? Do you have any experience with it?
    Here is the SQL code I would use in the Query object in Data Integrator.
    In the target table, every city should appear in one and only one record:
    INSERT INTO target (
        Effective_From_Date,
        Effective_To_Date,
        Business_Unit_ID,
        Provider_ID
    )
    SELECT DISTINCT
        a.Effective_From_Date,
        b.Effective_To_Date,
        d.city_ID,
        d.provider_ID
    FROM
        table1 a
        INNER JOIN table2 b
            ON (a.typeID = b.typeID)
        INNER JOIN table3 c
            ON (a.professionID = c.professionID)
        INNER JOIN table4 d
            ON (c.city_ID = d.city_ID)
    WHERE NOT EXISTS
        (SELECT * FROM target e
          WHERE d.city_ID = e.Business_Unit_ID)
    Thanks.

    You can use the target table as a source table as well; just drag it into your dataflow again and select Source instead of Target this time. Then you can outer join the new source target table to your query (I might do this in a second query instead of trying to add it to the existing one).
    You could also use a lookup function to check the target table. In this case you'd also have to add a second query to check the result of your lookup.
    Worst case, you can just throw the whole SQL query you've already created into a SQL transform and then use that as your source.
