Truncate target table

Hi All,
I need to truncate the target table before inserting records into it. Can you please tell me where to set this option in OWB mappings, so that it can be done from within OWB?
Rgds
Arnab

Arnab,
Just set the 'Loading Type' to TRUNCATE/INSERT on the Target table in the Mapping.
- Jojo
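
For context, with that loading type the code OWB generates behaves roughly like this simplified sketch (table and column names are placeholders; the real generated package is far more elaborate):

    BEGIN
      -- TRUNCATE is DDL, so it is issued dynamically before the load
      EXECUTE IMMEDIATE 'TRUNCATE TABLE target_table';
      INSERT INTO target_table (col1, col2)
        SELECT col1, col2
        FROM   source_table;
      COMMIT;
    END;
    /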

Similar Messages

  • Initial load of small target tables via flashback query?

    A simple question.
    Scenario: I’m currently building a near-real-time warehouse, streaming some basic fact and dimension tables between two databases. I’m considering building a simple package to "reset" or reinitialize the dimensions as an all-round fix for a variety of problem scenarios (since they are really small, around 15,000 rows each). The first time I loaded the target tables I used Data Pump with good success; however, since Streams transforms the data on the way, a complete reload is somewhat more complex.
    Considered solution: I’ll just write a flashback query over a db link, fetching data as of a specific (recent) SCN, and then reinstantiate the table at that SCN in Streams.
    Is this a good idea? Or is there something obvious that I overlooked, like a green and yellow elephant in the gift shop? The reason I’m worried at all is that the manuals do not mention this solution among the supported ways to do the initial load of a target table, and I suspect there is a reason for that.

    I have a series of streams with some transformations feeding rather small dimension tables. I want to make this solution easy to manage even when operations encounter difficult replication issues, so I’m developing a PL/SQL package that will:
    1) Stop all streams
    2) Clear all errors
    3) Truncate target tables
    4) Reload them, including the transformations (using a SELECT ... AS OF "ANY RECENT SCN" from target to source over a db link)
    5) Use that same recent SCN to re-instantiate the tables
    6) Start all streams
    As you can see, Data Pump, even if it works, is rather difficult to use when you transform data from A to B. Using AS OF I not only get a consistent snapshot from the source, I also get the exact SCN for it.
    What do you think? Can I safely use SELECT ... AS OF SCN instead of Data Pump with an SCN and still get a consistent solution? A sketch of the core reload step follows below.
    For the bigger fact tables I’m thinking about using the same SELECT ... AS OF SCN, but there with only particular recent partitions as targets, and thus not having to reload the whole table.
    In any case, this package would assure operations that they can recover from any kind of disaster or incomplete recovery on both source and target databases, and just re-instantiate the warehouse within minutes.
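
    A hedged sketch of steps 3-5 for one small dimension (all object, owner, and db-link names are hypothetical, and it assumes your Oracle version supports flashback queries across a database link; the re-instantiation call is the documented DBMS_APPLY_ADM one):

        DECLARE
          v_scn NUMBER;
        BEGIN
          -- capture a recent SCN on the source database over the db link
          SELECT current_scn INTO v_scn FROM v$database@src_db;
          -- step 3: truncate the small dimension
          EXECUTE IMMEDIATE 'TRUNCATE TABLE dim_customer';
          -- step 4: reload from a consistent snapshot at that SCN,
          -- applying the transformation on the way
          INSERT INTO dim_customer (cust_id, cust_name)
            SELECT cust_id, UPPER(cust_name)  -- stand-in for the real transformation
            FROM   customers@src_db AS OF SCN v_scn;
          COMMIT;
          -- step 5: tell the apply process to skip source changes up to that SCN
          DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
            source_object_name   => 'SRC.CUSTOMERS',
            source_database_name => 'SRCDB.EXAMPLE.COM',
            instantiation_scn    => v_scn);
        END;
        /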

  • How to delete some data in the target table in a mapping?

    How to delete some data in the target table in a mapping?
    I extract data from a source table into a target table,
    but before extracting the data I want to delete some data from the target.
    How do I do that?

    Just to change a bit of terminology in the reply above: within the mapping, click on the operator properties of the target table and choose TRUNCATE/INSERT.
    Note that TRUNCATE is dependent on constraints, so you will probably have to disable any foreign keys referencing the table before doing this, as sketched below. You can of course use DELETE/INSERT instead.
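    As a minimal sketch of the constraint handling (table and constraint names are hypothetical):

        -- TRUNCATE fails with ORA-02266 while an enabled foreign key references the table
        ALTER TABLE fact_sales DISABLE CONSTRAINT fk_sales_customer;
        TRUNCATE TABLE dim_customer;
        -- ... reload dim_customer here ...
        ALTER TABLE fact_sales ENABLE CONSTRAINT fk_sales_customer;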
    Jean-Pierre

  • Lookup Table and Target Table are the same

    Hi All,
    I have a requirement in which I have to look up the target table and, based on the records in it, load a new record into the target table.
    Being very specific,
    Suppose I have a key column which when changes I want to generate a new id and then insert this new value.
    The target table record structure looks like this:
    list_id  list_key  list_name
    1        'A'       'NAME1'
    1        'A'       'NAME2'
    1        'A'       'NAME3'
    2        'B'       'NAME4'
    2        'B'       'NAME5'
    As shown, the target table's list_id changes only when the list_key changes. I need to generate the list_id value from within the OWB mapping.
    Can anyone shed some light on how this can be done in OWB?
    regards
    -AP

    Hello, AP
    You underestimate the power of a single mapping :) If you can tolerate using an additional stage table (which is definitely recommended in case the table from your example accumulates a lot of rows), you can accomplish all you need within one mapping and without using a PL/SQL function. This is possible because you can have several targets within one mapping.
    Source ------------------------------------------> |Join 2| ----> Target 2
      |                                                    ^
      +-----> |Join 1| ----> Lookup table ----------------+
                 ^
    Target ---> Dedup
    Here “Target” is your target table. The “Join 1” operator covers the operations needed to get the existing key mappings (from the dedup) and find the new ones. The results are stored in the Lookup Table target (loading type TRUNCATE/INSERT).
    “Join 2” is used to perform the final lookup and load into “Target 2”, which is the same table as “Target”.
    The approach with a lookup table is fast and reliable and can run in set-based mode. You can also revisit the lookup table to find out which key mappings were loaded during the last load operation.
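    If the whole source can be processed in one pass, a hedged set-based alternative (using the question's own columns; source_table is a placeholder) is to derive list_id with an analytic function instead of a lookup:

        -- list_id increments exactly when list_key changes
        SELECT DENSE_RANK() OVER (ORDER BY list_key) AS list_id,
               list_key,
               list_name
        FROM   source_table;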
    Serhit

  • How to gather stats on the target table

    Hi
    I am using OWB 10gR2.
    I have created a mapping with a single target table.
    I have checked the mapping configuration 'Analyze Table Statements'.
    I have set target table property 'Statistics Collection' to 'MONITORING'.
    My requirement is to gather stats on the target table, after the target table is loaded/updated.
    According to Oracle's OWB 10gR2 User's Guide (B28223-03, page 24-5):
    Analyze Table Statements
    If you select this option, Warehouse Builder generates code for analyzing the target
    table after the target is loaded, if the resulting target table is double or half its original
    size.
    My issue is that when my target table does not double or halve in size, the target table DOES NOT get analyzed.
    I am looking for a way or a setting in OWB 10gR2 to gather stats on my target table after it is loaded/updated, regardless of its size.
    Thanks for your help in advance...
    ~Salil

    Hi
    Unfortunately, we have to disable automatic stats gathering on the 10g database.
    My requirement is to extract data from one database, load it into my TEMP tables, process it, and finally load it into my data warehouse tables.
    So I need to make sure my TEMP tables are analyzed after they are truncated, loaded, and subsequently updated, before I can process the data and load it into the data warehouse tables.
    I also need to truncate all TEMP tables after the load is completed, to save space on the target database.
    If we keep automatic stats gathering ON on my target 10g database, it might gather stats for those TEMP tables at a moment when they are empty.
    Any ideas to overcome this issue are appreciated.
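    One hedged option (schema and table names are placeholders) is to gather stats explicitly, e.g. from a post-mapping process, once each TEMP table is loaded, and lock them so the 10g automatic job cannot replace them after the truncate:

        BEGIN
          DBMS_STATS.GATHER_TABLE_STATS(
            ownname          => 'STAGE',
            tabname          => 'TEMP_SALES',
            estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
            cascade          => TRUE);
          -- keep the automatic job away from the later-emptied table;
          -- the next explicit gather then needs force => TRUE or an unlock
          DBMS_STATS.LOCK_TABLE_STATS(ownname => 'STAGE', tabname => 'TEMP_SALES');
        END;
        /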
    Thanks
    Salil

  • Duplicates in the target table.

    Hi, I am working on ODI 10.
    In one of my interfaces, whenever I execute it, duplicates appear in the target table.
    Say the row count is around 5,000 in the source table; in the target it ends up around 120,000. Even after using Distinct Rows in the flow control, the duplicates keep coming.
    Can you please help me solve this?
    Note: in the source table, one column contains a surrogate key.
    The KM I am using is IKM Oracle Control Append.

    Using the Control Append IKM will always add the data that is in the Source to the Target, unless you truncate or delete from the Target first. If you have data in the Source that has already been loaded to the Target, and you do not truncate the Target prior to the next load, you will have duplicates.
    Are you truncating the Target or is the Source data always "new" each time the Interface is run?
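    Schematically (all names hypothetical), a Control Append load reduces to a plain append from the staging data, which is why re-running it without a truncate duplicates rows:

        -- run this twice without truncating target_table and every
        -- source row appears twice
        INSERT INTO target_table
          SELECT * FROM staging_table;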
    Regards,
    Michael Rainey

  • How to use outer join in Target table

    Hi all,
    How do I use an outer join, and truncate, with a target table?

    Good morning,
    Can you be more specific?
    What do you want to join, what needs to be truncated, what's the Loading Type of your target table-object, etc.
    In general the following applies.
    An outer join is specified on the JOIN operator; under Operator Properties you can specify the Join Condition, for example:
    tab1.uk_col1 = tab2.uk_col1(+)
    and
    tab1.uk_col2 = tab2.uk_col2(+)
    Truncating a table in a mapping can only be done in combination with inserting the data set, by setting the loading type of the table to TRUNCATE/INSERT.
    Apart from that, there is a predefined OWB transformation called WB_TRUNCATE_TABLE; this can also be used to truncate a table.
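    For reference, the join the mapping generates then amounts to something like this (tab1/tab2 and the key columns come from the condition above; the selected columns are hypothetical):

        SELECT t1.uk_col1, t1.uk_col2, t2.some_col
        FROM   tab1 t1, tab2 t2
        WHERE  t1.uk_col1 = t2.uk_col1(+)
        AND    t1.uk_col2 = t2.uk_col2(+);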
    Cheers, Patrick

  • How to truncate fact tables using wb_truncate_table

    Hi,
    I've got a sequence of mappings that load various staging tables, dimensions and fact tables.
    At present, I truncate the fact tables manually before loading the dimensions (to avoid foreign key errors), but I would like to write mappings to do the truncations.
    I tried creating mappings using WB_TRUNCATE_TABLE in a Pre-Mapping Process operator (using a constant for the table name), but can't figure out how to get this to actually truncate the fact table.
    The manual says to "connect the output attribute of the Pre-Mapping Process operator to the input group of a target operator." However, the PMP operator in my mapping doesn't have an output attribute (and the add button is greyed out).
    Sorry if I'm missing something obvious, but can anybody help, or advise on a better way to truncate my fact tables before reloading the dimensions?
    Thanks in advance.
    Chris

    > I got that to work using WB_TRUNCATE_TABLE in a PMP in the first mapping to populate one of the fact table's dimensions. The problem I was having was in trying to create a mapping just to do the truncate, but I see now that's not the way to go.
    If you want to create a mapping that just truncates some table(s), it's very easy.
    1. Just create your own function that returns, let's say, CHAR. Inside this function, call WB_TRUNCATE_TABLE and return a dummy CHAR (e.g. '1'). Then create some dummy table with only one column of CHAR type.
    2. Create a mapping: place a constant operator, your function as a transformation operator, and the dummy table. Link them CONSTANT - TRANSFORMATION - TABLE.
    That's all. If you want to truncate more tables within one mapping, just add more functions and add more attributes (table names) to the constant operator. A sketch of such a wrapper function is below.
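    A minimal sketch of the wrapper described above (function and table names are hypothetical; calling WB_TRUNCATE_TABLE inside instead of plain dynamic SQL should work equally well):

        CREATE OR REPLACE FUNCTION truncate_and_flag(p_table IN VARCHAR2)
          RETURN CHAR
        IS
          PRAGMA AUTONOMOUS_TRANSACTION;  -- TRUNCATE is DDL and commits
        BEGIN
          EXECUTE IMMEDIATE 'TRUNCATE TABLE ' || p_table;
          RETURN '1';  -- dummy value to feed the one-column dummy target
        END;
        /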

  • How to delete the data loaded into MySQL target table using Scripts

    Hi Experts
    I created a job with a validation transform. The data that passes validation is loaded into a Pass table, and the data that fails is loaded into a Failed table.
    My requirement is: if data was loaded into the Failed table, then I have to delete the data loaded into the Pass table, using a script.
    In the script I have written the code as
    sql('database','delete from <tablename>');
    but the query execution raises an exception.
    How can I delete the data loaded into the MySQL target table using scripts?
    Please guide me on this error.
    Thanks in Advance
    PrasannaKumar

    Hi Dirk Venken
    I got the solution; the mistake I made was that the query was not correct for MySQL.
    Working query:
    sql('MySQL', 'truncate world.customer_salesfact_details')
    Query that raised the error:
    sql('MySQL', 'delete table world.customer_salesfact_details')
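    For reference, MySQL's DELETE syntax requires FROM, so this would also have worked:

        sql('MySQL', 'delete from world.customer_salesfact_details')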
    Thanks for your concern
    PrasannaKumar

  • Whats the difference between Data Synchronization and Data Replication with a truncated Target?

    I would like to replicate data using an ODBC connection as the source, but the connection does not show up in Data Replication (only Data Sync). I want to truncate and load everything from the source. Can I just use Data Sync for this process? Thanks!!

    Hi Brian: Sorry, the Data Replication task wizard does not support ODBC sources. I don't think there's an easy way to truncate the target tables using a Data Synchronization task, but you should be able to read data from an ODBC source and write it to your target. Cheers, Josh

  • Can not Load CLOB data 32k to target table

    SQL> DESC testmon1;
     Name                 Null?    Type
     -------------------- -------- --------------
     FILENAME                      VARCHAR2(200)
     SCANSTARTTIME                 VARCHAR2(50)
     SCANENDTIME                   VARCHAR2(50)
     JOBID                         VARCHAR2(50)
     SCANNAME                      VARCHAR2(200)
     SCANTYPE                      VARCHAR2(200)
     FAULTLINEID                   VARCHAR2(50)
     RISK                          VARCHAR2(5)
     VULNNAME                      VARCHAR2(2000)
     CVE                           VARCHAR2(200)
     DESCRIPTION                   CLOB
     OBSERVATION                   CLOB
     RECOMMENDATION                CLOB

    SQL> DESC test_target;
     Name                 Null?    Type
     -------------------- -------- --------------
     LOCALID              NOT NULL NUMBER
     DESCRIPTION          NOT NULL CLOB
     SCANTYPE             NOT NULL VARCHAR2(12)
     RISK                 NOT NULL VARCHAR2(6)
     TIMESTAMP            NOT NULL DATE
     VULNERABILITY_NAME   NOT NULL VARCHAR2(2000)
     CVE_ID                        VARCHAR2(200)
     BUGTRAQ_ID                    VARCHAR2(200)
     ORIGINAL                      VARCHAR2(50)
     RECOMMEND                     CLOB
     VERSION                       VARCHAR2(15)
     FAMILY                        VARCHAR2(15)
     XREF                          VARCHAR2(15)
    create or replace PROCEDURE proc1 AS
      CURSOR C1 IS
        SELECT FAULTLINEID, VULNNAME, scanstarttime, risk,
               dbms_lob.substr(DESCRIPTION, dbms_lob.getlength(DESCRIPTION), 1) "DESCR",
               dbms_lob.substr(OBSERVATION, dbms_lob.getlength(OBSERVATION), 1) "OBS",
               dbms_lob.substr(RECOMMENDATION) "REC",
               CVE
        FROM testmon1;
      c_rec C1%ROWTYPE;
      descobs CLOB;
      FSCAN_VULN_TRANS_REC VULN_TRANSFORM_STG%ROWTYPE;
      TIMESTAMP varchar2(50);
      riskval varchar2(10);
      pCTX PLOG.LOG_CTX := PLOG.init(pSECTION     => 'foundscanVuln Procedure',
                                     pLEVEL       => PLOG.LDEBUG,
                                     pLOG4J       => TRUE,
                                     pLOGTABLE    => TRUE,
                                     pOUT_TRANS   => TRUE,
                                     pALERT       => TRUE,
                                     pTRACE       => TRUE,
                                     pDBMS_OUTPUT => TRUE);
      amount number;
      buffer varchar2(32000);
    BEGIN
      -- initialize the locator for the CLOB variable
      SELECT observation INTO descobs FROM testmon1 WHERE rownum = 1;
      OPEN C1;
      LOOP
        FETCH C1 INTO c_rec;
        EXIT WHEN C1%NOTFOUND;
        -- write the DESCR buffer into the CLOB locator descobs;
        -- the amount argument is the buffer length
        dbms_lob.write(descobs, length(c_rec.DESCR), 1, c_rec.DESCR);
        -- append the OBS buffer to the CLOB locator descobs
        dbms_lob.writeappend(descobs, length(c_rec.OBS), c_rec.OBS);
        -- dbms_output.put_line('the timestamp is :' || c_rec.scanstarttime);
        -- dbms_lob.write(descobs, amount, 1, buffer);
        descobs := c_rec.OBS;
        -- dbms_lob.read(descobs, amount, 1, buffer);
        -- dbms_lob.append(c_rec.DESCR, c_rec.OBS);
        -- descobs := c_rec.OBS;
        -- dbms_output.put_line('the ADDED DESCROBS is :' || dbms_lob.substr(c_rec.DESCR, dbms_lob.getlength(c_rec.DESCR), 1));
        dbms_output.put_line('the ADDED DESCRIPTION AND OBSERVATION is :' || descobs);
        -- dbms_output.put_line('the DESCROBS buffer is :' || buffer);
        SELECT DESCRIPTION INTO FSCAN_VULN_TRANS_REC.DESCRIPTION
        FROM TESTMON1 WHERE ROWNUM = 1;
        -- load the DESCRIPTION + OBSERVATION value into the target description
        DBMS_LOB.WRITE(FSCAN_VULN_TRANS_REC.DESCRIPTION, dbms_lob.getlength(descobs), 1, descobs);
        TIMESTAMP := substr(c_rec.scanstarttime, 1, 10) || ' ' || substr(c_rec.scanstarttime, 12, 8);
        IF c_rec.risk < 3 THEN
          riskval := 'Low';
        ELSIF c_rec.risk < 6 THEN
          riskval := 'Medium';
        ELSIF c_rec.risk < 10 THEN
          riskval := 'High';
        END IF;
        FSCAN_VULN_TRANS_REC.TIMESTAMP := TO_DATE(TIMESTAMP, 'YYYY/MM/DD HH24:MI:SS');
        FSCAN_VULN_TRANS_REC.risk := riskval;
        -- dbms_lob.append(c_rec.DESCR, c_rec.OBS);
        FSCAN_VULN_TRANS_REC.DESCRIPTION := c_rec.DESCR;
        FSCAN_VULN_TRANS_REC.RECOMMEND := c_rec.REC;
        FSCAN_VULN_TRANS_REC.LocalID := to_number(c_rec.FAULTLINEID);
        FSCAN_VULN_TRANS_REC.SCANTYPE := 'FOUNDSCAN';
        FSCAN_VULN_TRANS_REC.CVE_ID := c_rec.CVE;
        FSCAN_VULN_TRANS_REC.VULNERABILITY_NAME := c_rec.VULNNAME;
        -- dbms_output.put_line('the plog timestamp is :' || timestamp);
        -- dbms_output.put_line('the timestamp is :' || riskval);
        -- dbms_output.put_line('the recommend is :' || FSCAN_VULN_TRANS_REC.RECOMMEND);
        -- dbms_output.put_line('the app desc is :' || FSCAN_VULN_TRANS_REC.DESCRIPTION);
        INSERT INTO test_target VALUES FSCAN_VULN_TRANS_REC;
      END LOOP;
      CLOSE C1;
      COMMIT;
    EXCEPTION
      WHEN OTHERS THEN
        NULL;  -- no-op: errors are ignored here
        -- dbms_output.put_line('Data not found');
        -- dbms_output.put_line(sqlcode || ':' || sqlerrm);
    end proc1;
    Using the DBMS_LOB package is not helping: either the DB stops responding, or the OBSERVATION field (whose maximum length is over 300,000) cannot be loaded into a CLOB variable.
    Please help, or give me sample code that works.
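    In case it helps, here is a minimal hedged sketch of the pattern being attempted, using the two tables described above: in 10g and later, CLOBs can be concatenated directly with ||, which avoids the WRITE/WRITEAPPEND offset bookkeeping entirely (the risk bucketing is collapsed to a constant here for brevity, and nullable columns are omitted):

        DECLARE
          v_descobs CLOB;
        BEGIN
          FOR r IN (SELECT faultlineid, vulnname, scanstarttime, cve,
                           description, observation, recommendation
                    FROM   testmon1) LOOP
            -- concatenating two CLOBs yields a CLOB; no manual LOB writes needed
            v_descobs := r.description || r.observation;
            INSERT INTO test_target
              (localid, description, scantype, risk, timestamp,
               vulnerability_name, cve_id, recommend)
            VALUES
              (TO_NUMBER(r.faultlineid), v_descobs, 'FOUNDSCAN', 'Low',
               TO_DATE(SUBSTR(r.scanstarttime, 1, 10) || ' ' ||
                       SUBSTR(r.scanstarttime, 12, 8), 'YYYY/MM/DD HH24:MI:SS'),
               r.vulnname, r.cve, r.recommendation);
          END LOOP;
          COMMIT;
        END;
        /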

    select
        BANKING_INSTITUTION.BANK_REF_CODE       C1_BANK_ID,
        BANKING_INSTITUTION.NAME_BANK           C2_BANK_NAME,
        BANKING_INSTITUTION.BANK_NUMBER         C3_BANK_NUMBER,
        BANKING_INSTITUTION.ISO_CODE            C4_GBA_CODE,
        BANKING_INSTITUTION.STATUS              C5_STATUS,
        BANKING_INSTITUTION.SOURCE              C6_SOURCE,
        BANKING_INSTITUTION.START_DATE_BANK     C7_START_DATE,
        BANKING_INSTITUTION.ADDRESS_BANK        C8_BANK_ADDRESS1
    from REF_DATA_DB.BANKING_INSTITUTION BANKING_INSTITUTION
    where (1=1)

    insert /*+ append */ into XXSVB.C$_0XXSVB_BANKS_STAGING
    (
        C1_BANK_ID,
        C2_BANK_NAME,
        C3_BANK_NUMBER,
        C4_GBA_CODE,
        C5_STATUS,
        C6_SOURCE,
        C7_START_DATE,
        C8_BANK_ADDRESS1
    )
    values
    (
        :C1_BANK_ID,
        :C2_BANK_NAME,
        :C3_BANK_NUMBER,
        :C4_GBA_CODE,
        :C5_STATUS,
        :C6_SOURCE,
        :C7_START_DATE,
        :C8_BANK_ADDRESS1
    )

  • Null value in target table

    Hi All
    I am using ODI 10.1.3.
    I have declared the target datastore
    and mapped the target datastore to the source in the interface.
    But when ODI creates the target table, all the columns are created accepting NULLs,
    whereas I have declared some as null and some as not null.
    When I checked the 'create target table' step in IKM Oracle Incremental Update (MERGE),
    the code is
    create table <%=odiRef.getTable("L", "TARG_NAME", "A")%>
         <%=odiRef.getTargetColList("", "[COL_NAME]\t[DEST_CRE_DT] NULL", ",\n\t", "")%>
    due to which the target table is created with all fields having the NULL property.
    But I want the columns to have the properties declared in the datastore, e.g. some columns with NULL and some with NOT NULL.
    Please help..
    Gourisankar

    In the IKM Merge, change the hard-coded NULL to odiRef.getInfo("DEST_DDL_NULL"), so the line becomes:
    <%=odiRef.getTargetColList("", "[COL_NAME]\t[DEST_CRE_DT] " + odiRef.getInfo("DEST_DDL_NULL"), ",\n\t", "")%>
    For bulk inserts, try IKM SQL Control Append.

  • Truncating a table from inside an application

    Hi all,
    Got a new problem to tackle. I am creating an application to calculate a very complex production bonus based on a lot of different variables.
    I am working through retrieving all the data and created an APEX temp table called site_bonus. I dump all the retrieved values I get from Oracle for attendance, SIT data on discipline, etc. into the temp table as a count of the number of occurrences, to start the calculation of the bonus factors. Here's the gotcha:
    Prior to dumping the new run's data into the temp table, I want to truncate the table to remove any previous run's data, so that I don't get duplicate records caused by a previous run. I created an on-demand application process that basically does the following:
    truncate table "SITE_BONUS" /
    (I basically copied it right from SQL Workshop inside APEX, by retrieving the SQL from the table truncate process, where it works.)
    When I run it anywhere inside my page as an on-demand process, it fails with the following error:
    ORA-06550: line 1, column 16: PLS-00103: Encountered the symbol "TABLE" when expecting one of the following: := . ( @ % ; The symbol ":=" was inserted before "TABLE" to continue.
    I have tried it as a PL/SQL process and as an on-demand process, and I get the same error from any page inside my app. I can't figure out why I can do this from SQL Workshop but can't call it from a page within my app.
    Any ideas?
    Edited by: DSULLIVAN on Oct 28, 2009 3:13 PM - I corrected the typo

    You are trying:
    trucate table "SITE_BONUS" /
    It should be:
    truncate table SITE_BONUS
    Hope this helps,
    Sam
    Please reward good answers by marking them correct or helpful!
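    For reference, the PLS-00103 arises because TRUNCATE is DDL, and DDL cannot appear directly inside a PL/SQL block, which is what a page process executes. Wrapping it in dynamic SQL is the usual workaround, e.g.:

        BEGIN
          EXECUTE IMMEDIATE 'TRUNCATE TABLE site_bonus';
        END;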

  • Table compare deleting rows which does not exist in target table

    Hi Gurus,
    I am struggling with an issue in Data Services.
    I have a job which uses a Table Comparison, then a History Preserving, and then a Key Generation transform.
    There is every possibility that data gets deleted from the source table.
    Now, I want to delete those rows from the target table as well.
    I tried 'Detect deleted rows' but it is not working.
    Could someone please help me with this issue?
    Thanks,
    Raviteja.

    Doesn't History Preserving really only operate on UPDATE rows? Wouldn't it only process the deletes if you turned on "Preserve delete row(s) as update row(s)"?
    I would think that if you turned on 'Detect deleted rows' in the Table Comparison and did not turn that option on in the History Preserving, it would retain those rows as DELETE rows and effectively remove them from the target.
    From the documentation on "Preserve delete row(s) as update row(s)":
    Converts DELETE rows to UPDATE rows in the target warehouse and, if you previously set effective date values (Valid from and Valid to), sets the Valid To value to the execution date. Use this option to maintain slowly changing dimensions by feeding a complete data set first through the Table Comparison transform with its 'Detect deleted row(s) from comparison table' option selected.

  • How to load data into 3 different target tables usin BODS ?

    Hello Friends,
    I have 5 different source tables with the same field definitions. Now I want to load all the records into three or four different target tables (flat file, SQL Server, XML, and Oracle). Could anyone please tell me how to do this task?
    Thanks in Advance,
    Bheem.

    Hello Bheem,
    You can create a separate dataflow for each target, as suggested by Bala; this is a good choice for your scenario.
    If you put all targets in the same dataflow, you may experience problems if one of them is down (as you have different servers as targets).
    BODS will send the data simultaneously to all targets, and when one fails, the load will break and you may end up with the tables out of sync anyway (so if you planned to use a single dataflow to avoid that situation, it won't help).
    So the best option is to create separate dataflows and put them inside a containing flow, so you can run a single job that calls each target's dataflow; if one fails, the others will still complete.
    Then, if you put them inside error trapping, you can even make your dataflow retry the load before abending the job.
    Think about cascading your dataflows and keeping the leaf level as simple as it can be, so it will be easier to schedule and debug your process.
    Pay attention to data types and other conversions you might need when working with more than one source/target.
    Regards,
