Truncate/Insert

LS,
Oracle 10g
OWB 10g
When using Truncate/Insert as the Loading Type, which would be the more efficient statement: "truncate table A drop storage" or "truncate table A reuse storage"?
I'm replacing 6 million rows with 6 million plus 100,000.
If the table is shrunk back to its initial extent by the code OWB generates, I lose a lot of time to overhead.
Can I configure the truncate statement anywhere in OWB? If not, in which upcoming release will I be able to do so?
Regards,
André Klück
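For reference, a minimal sketch of the two statements being compared (A is just the placeholder table name from the question):
-- returns the freed extents to the tablespace; the next load has to allocate them again
TRUNCATE TABLE a DROP STORAGE;
-- keeps the already-allocated extents, so reloading a similar volume skips the extent-allocation overhead
TRUNCATE TABLE a REUSE STORAGE;
For a reload of roughly the same size, REUSE STORAGE avoids the overhead described above.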

There is no exact error
When I run the map from the Deployment Manager I get the message "finished with errors".
If I run it in a debug session, everything works perfectly.
I can select the inserted rows.
If I run it from the Deployment Manager with INSERT only, it also works perfectly.
Only when I change from INSERT to TRUNCATE/INSERT do I get the problem, and only with the Deployment Manager, not in a debug session.
Regards,
Andreas

Similar Messages

  • What is the difference between truncate/insert and delete/insert?

    Hi all:
    What is the difference between truncate/insert and delete/insert?

    Hi,
    Truncate empties the table with a DDL operation. DDL operations cannot be rolled back, but for that same reason truncate is much faster than delete. If you do not require recovery of the deleted records in case of a failure, then truncate/insert is more efficient than delete/insert. Note that for truncate to work, you cannot have enabled foreign keys pointing to the table, and truncate never performs cascaded deletes.
    Hope this explains.
    Mark.
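    A minimal sketch of the two approaches in plain SQL (table names are placeholders):
    -- delete/insert: DML; generates undo and redo per row and can be rolled back until commit
    DELETE FROM target_tab;
    INSERT INTO target_tab SELECT * FROM source_tab;
    COMMIT;
    -- truncate/insert: DDL; cannot be rolled back and fails if enabled foreign keys reference target_tab
    TRUNCATE TABLE target_tab;
    INSERT INTO target_tab SELECT * FROM source_tab;
    COMMIT;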

  • Design thoughs CTAS vs TRUNCATE / INSERT

    Would like to hear your thoughts on the speed / other pros and cons of
    Create Table As Select * from table
    and
    truncate table
    insert /*+ append */ into table select * from table;
    Consider that at present we have force logging on.

    FourEyes wrote:
    Would like to hear your thoughts on the speed / other pros and cons of
    Create Table As Select * from table
    This requires that the table doesn't exist already. If it's something being done over and over, then it's generally bad practice to drop and recreate database objects in production code.
    truncate table
    insert /*+ append */ into table select * from table;
    This is typically what people would do to "clear down and refresh" data in a table.
    The other alternative would be to have a materialized view that you just refresh on demand.
    Speed wise, there's little difference between your two methods.
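    For reference, a rough sketch of both variants (object names are placeholders):
    -- CTAS: the target must not exist yet, so repeated runs would have to drop and recreate it
    CREATE TABLE target_tab AS SELECT * FROM source_tab;
    -- truncate and direct-path reload of an existing table
    TRUNCATE TABLE target_tab;
    INSERT /*+ APPEND */ INTO target_tab SELECT * FROM source_tab;
    COMMIT;
    Note that with force logging enabled, the direct-path insert still generates full redo, which narrows any speed advantage it might otherwise have.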

  • VLD-1119: Unable to generate Multi-table Insert statement for some or all targets

    Hi All -
    I have a map in OWB 10.2.0.4 which is failing with the following error:
    VLD-1119: Unable to generate Multi-table Insert statement for some or all targets.
    Multi-table insert statement cannot be generated for some or all of the targets due to upstream graphs of those targets are not identical on "active operators" such as "join".
    The map is created with the following logic in mind. Let me know if you need more info. Any directions are highly appreciated, and many thanks for your input in advance:
    I have two source tables, say T1 and T2. They are full outer joined in a joiner, and the output of this join is passed to an expression to evaluate column values based on
    business logic, i.e. if T1 is available then take T1.C1, else take T2.C1, and so on.
    A flag is also evaluated in the expression because these intermediate results need to be joined to a third source table, say T3, with different conditions.
    Based on the flag value evaluated in the expression, a splitter routes the results into three intermediate tables.
    These three intermediate tables are all truncate/insert, and they are unioned to fill the final target table.
    Visually it is something like this:
    T1 --+
         +--> Join1 (FULL OUTER) --> Expression --> SPLITTER --> JOINER1 / JOINER2 / JOINER3 (each with T3) --> UNION --> Target Table
    T2 --+
    Please suggest.

    I verified that there is a limitation with the splitter operator which will not let you generate a multi-split having more than 999 columns in all.
    I had to use two separate splitters to achieve what I was trying to do.
    So the situation is now:
    Source -> Split -> Split 1 -> Insert into table -> Union1 --------- Final table A
    Source -> Split -> Split 2 -> Insert into table -> Union1
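    For reference, the multi-table insert OWB tries to generate corresponds roughly to Oracle's conditional INSERT ALL syntax; a sketch with placeholder names:
    INSERT ALL
      WHEN flag = 'A' THEN INTO stage_a (c1, c2) VALUES (c1, c2)
      WHEN flag = 'B' THEN INTO stage_b (c1, c2) VALUES (c1, c2)
      WHEN flag = 'C' THEN INTO stage_c (c1, c2) VALUES (c1, c2)
    SELECT c1, c2, flag FROM source_view;
    As the VLD-1119 text says, OWB can only produce a single statement like this when the upstream graph feeding each target is identical on active operators such as joins.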

  • JDBC insert with XMLTYPE data type

    Hi,
    SOAP to JDBC scenario. Oracle 11G as a receiver.
    The requirement is to insert the whole XML payload message into one of the Oracle table columns as an XML string. The target Oracle DB table column is defined with the XMLTYPE data type, which can hold XML data of more than 4 GB. I am using graphical mapping with a direct INSERT statement.
    When I try to insert a smaller XML payload, the transaction goes through. However, when the payload size increases, it gives the following error in the JDBC receiver communication channel monitoring.
    Could not execute statement for table/stored proc. "TABLE_NAME" (structure "StructName") due to java.sql.SQLException: ORA-01704: string literal too long
    Here is the insert statement as seen in communication channel monitoring. (Note: the XML payload is truncated)
    INSERT INTO  TABLE_NAME (REQ_ID, OUTAGE_OBJ, TIMESTAMP, PROCESSED_FLAG) VALUES (VAL1, <?xml version="1.0" encoding="UTF-8"?>............</>, TO_DATE(2010-11-15 10:21:52,YYYY-MM-DD HH24:MI:SS), N)
    Any suggestions to handle this requirement?
    Thank you in advance.
    Anand More.

    Hi Anand,
    The problem here is definitely the length of the SQL query. i.e "INSERT INTO ......... VALUES......."
    This is what i got when i searched for this ORACLE error code:
    ORA-01704: string literal too long
    Cause: The string literal is longer than 4000 characters.
    Action: Use a string literal of at most 4000 characters. Longer values may only be entered using bind variables.
    Please ask an Oracle DB expert how to handle this. Also, I am not sure how we can handle bind variables in SAP PI.
    I hope this helps.
    Regards, Gaurav.
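    As an illustration of the bind-variable route in plain PL/SQL (whether the SAP PI JDBC adapter can do the equivalent, e.g. through a stored procedure call, is a separate question; the column names are taken from the statement above, everything else is a placeholder):
    DECLARE
      l_payload CLOB := '<?xml version="1.0" encoding="UTF-8"?>...';  -- may exceed 4000 characters
    BEGIN
      -- l_payload is passed as a bind variable, not a string literal, so ORA-01704 does not apply
      INSERT INTO table_name (req_id, outage_obj, timestamp, processed_flag)
      VALUES ('VAL1',
              XMLTYPE(l_payload),
              TO_DATE('2010-11-15 10:21:52', 'YYYY-MM-DD HH24:MI:SS'),
              'N');
      COMMIT;
    END;
    /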

  • How to skip a row to be inserted in staging table

    Hi Everyone,
    Actually I am transforming data from a source table to a staging table and then from staging to the final table. I have generated a primary key using a sequence. I set the insert method of the staging table to truncate/insert, so every time the mapping is loaded, the staging table is truncated and new data is inserted. But because I am using a sequence in the staging table, it gives new numbers to the old data from the source table, and that data gets duplicated in the final target table. For this reason I am using a key lookup on some of the input attributes and then using an expression to try to avoid the duplication. For each output attribute in the expression, I am putting the following CASE statement:
    CASE WHEN INGRP1.ROW_ID IS NULL
    THEN INGRP1.ID
    END
    Due to this condition I am getting the following warning:
    ORA-01400: cannot insert NULL into ("SCOTT"."STG_TARGET_TABLE"."ROW_ID")
    But I am stuck on what condition or statement I should write to skip inserting a row when the CASE returns NULL. I want to insert data only when ROW_ID IS NULL.
    Kindly help me out.
    Thanks
    Regards
    Suhail Dayer

    You don't need the tables to be identical to use MINUS, only the "Select List" must match. Assuming you have the same Business key (one or more columns that uniquely identifies a row from your source data) in both the source and final table you can do the following:
    - Use a Set Operation where the result is the Business Key of the Staging table MINUS the Business Key of the final table
    - The output of the Set Operation is then joined to the Staging table to get the rest of the attributes for these rows
    - The output of the Join is inserted into the final table
    This will make sure only rows with new Business Keys are loaded.
    Hope this helps,
    Roald
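    In plain SQL, the suggested flow corresponds roughly to this (business_key and the table names are placeholders):
    INSERT INTO final_tab
    SELECT s.*
      FROM stg_tab s
      JOIN (SELECT business_key FROM stg_tab
            MINUS
            SELECT business_key FROM final_tab) new_keys
        ON s.business_key = new_keys.business_key;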

  • Update/Insert mode can't be applied to a SQL Server source via Transparent Gateway

    Hi all,
    OWB version is 9.0.2.56, database version is 9i Release 2.
    When the load type is set to "Insert", the mapping works fine from SQL Server to an Oracle table via Transparent Gateway 901.
    But "Update/Insert" always fails with "Fatal error or maximum number of errors exceeded", while if I define both source and destination tables as Oracle tables in the mapping, "Update/Insert" works fine.
    Any suggestions?

    Hi Ignor,
    Thanks for your reply!
    The SQL Server version is SQL Server 2000.
    I just want to ETL data from a SQL Server table via Transparent gateway into oracle table,the source table has a increment seq field,so I want to define mapping using update/insert loading method in OWB,according to this seq field,if there is the same seq in the target table,do update,if not,insert the row.I just find out that if I use a middle table in oracle to do truncate/insert mode from SQL Server source table,then do update/insert mode from the middle one to target table,it works.But can't do directly from source to target using update/insert mode.Does this means update/insert mode only apply to mapping from local table to local table?
    Regards,
    Robbin
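    For reference, the Update/Insert loading type corresponds roughly to a MERGE keyed on that seq field; a minimal sketch with placeholder names (tg_link stands in for the Transparent Gateway database link):
    MERGE INTO target_tab t
    USING (SELECT seq_id, col1 FROM source_tab@tg_link) s
       ON (t.seq_id = s.seq_id)
    WHEN MATCHED THEN UPDATE SET t.col1 = s.col1
    WHEN NOT MATCHED THEN INSERT (seq_id, col1) VALUES (s.seq_id, s.col1);
    Whether the gateway supports this directly against the remote SQL Server table is exactly what seems to fail here; staging into a local Oracle table first sidesteps it.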

  • Slow Insert APPEND into Temporary Table

    Hello,
    We did the following test on Oracle11g 11.1.0.7 database:
    create global temporary table test_tab
    as select * from tab1 where rownum <= 1;
    insert into test_tab select * from tab1 where rownum <= 500000;
    commit;
    Elapsed time: 00:00:04.56
    Statistic
    80 recursive calls
    26360 db block gets
    10606 consistent gets
    4729 physical reads
    2543400 redo size
    399 bytes sent via SQL*Net to client
    340 bytes received via SQL*Net from client
    6 SQL*Net roundtrips to/from client
    1 sorts (memory)
    0 sorts (disk)
    500000 rows processed
    truncate table test_tab;
    insert /*+ append */ into test_tab select * from tab1 where rownum <= 500000;
    commit;
    Elapsed time: 00:00:09.35
    Statistic
    84 recursive calls
    4900 db block gets
    4738 consistent gets
    4698 physical reads
    1128 redo size
    376 bytes sent via SQL*Net to client
    354 bytes received via SQL*Net from client
    6 SQL*Net roundtrips to/from client
    1 sorts (memory)
    0 sorts (disk)
    500000 rows processed
    Note that insert APPEND generates much less redo: 1,128 bytes vs 2,543,400. Now the question: why is insert APPEND into the temporary table two times slower than the ordinary insert? Potentially, it should run faster because of the lower redo volume... Any ideas?

    If you run the truncate / insert / truncate / insert append again, are the timings the same?
    Wondering whether the temp tablespace needed to extend for the first append.

  • Truncate target table

    Hi All,
    I need to truncate the target table before inserting records into it. Can you please tell me where I should set this option in OWB mappings, so that it can be done from within OWB?
    Rgds
    Arnab

    Arnab,
    Just set the 'Loading Type' to TRUNCATE/INSERT on the Target table in the Mapping.
    - Jojo

  • Truncate table load order broken

    Hi,
    I have a mapping where two tables are truncate/insert. I need one to be truncate/insert at the start, and the other later in the mapping. Using the Target Load Order feature, I order my targets so that this happens.
    However, when the code is generated (for both set-based and row-based), I get this:
    Initialize("P_JOB_RUN_ID", "P_PROCESS_AUDIT_ID",
    p_env);
    -- Initialize all batch status variables
    "FACT_SIO_PROC_RJCT_RESET_T_St" := FALSE;
    "FACT_SIO_St" := FALSE;
    "FACT_SIO_RJCT_St" := FALSE;
    IF get_trigger_success THEN
    Truncate_Targets;
    ...and Truncate_Targets contains statements that will truncate both tables.
    If Initialize succeeds both tables are truncated ignoring the Target Load Order.
    Is it possible to use Target Load Order with truncate/insert?
    Cheers
    Steve

    Hi,
    As an alternative you can use DELETE/INSERT instead of TRUNCATE/INSERT. This empties the corresponding table immediately before the insert rather than up front. Of course, this is only advisable if the expected number of rows to delete doesn't exceed some 10,000, since DELETE is hard work for the RDBMS compared to TRUNCATE.
    regards
    Thomas

  • DELETE/INSERT

    Hi,
    I am working on a mapping that requires delete/insert functionality. That is, only the rows in the target matching a specific column value should be deleted.
    So I modified the loading type to delete/insert, set 'Match by constraints' to none, and then set 'Match column when deleting' to yes for the specific column.
    The mapping is, however, doing a blanket delete. Can somebody suggest a workaround or tell me where the problem is?
    OWB 9.2.0.2
    Regards
    Jojo

    OK. I have partitioned the fact table based on year and changed the loading type to TRUNCATE/INSERT. Under "Conditional Loading" I have set the 'Target Filter for Delete' to INOUTGRP1.YEAR='2004'.
    When I execute the mapping to replace data for year 2004, all the data is deleted (including the data for year 2003) and the new 2004 data is inserted. How can I just replace the data for year 2004 and leave the data for year 2003 intact?
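    In plain SQL, replacing only one year's slice corresponds roughly to this (names are placeholders):
    -- DELETE/INSERT with a restricted delete
    DELETE FROM fact_tab WHERE year = '2004';
    INSERT INTO fact_tab SELECT * FROM source_tab WHERE year = '2004';
    -- or, since the table is partitioned by year, truncate just that partition outside the mapping
    ALTER TABLE fact_tab TRUNCATE PARTITION p_2004;
    A TRUNCATE always empties the whole table, which is presumably why the 'Target Filter for Delete' had no effect here; it would need the DELETE/INSERT loading type to be honoured.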

  • Source tables getting truncated

    While using OWB 10gR2, sometimes after executing mappings I discover that the source tables just got truncated, which is not something I planned to do. I do want the target tables to be truncated before they get populated. So while creating the mappings, in the Table Operator Properties, I set the loading type to 'none' for all the source tables and 'truncate/insert' for all the target tables. Is this correct, or is something else likely causing the problem?
    Thanks a lot

    You need not change the table load properties for the source tables. Leave them as they are (the default is Insert, I think). Change the target table load properties to Truncate/Insert. This should be okay. I think the source tables were set to Truncate unknowingly, which caused the problem.
    Sometimes OWB does not deploy the current version of the package and we see an old version of the code in the database. I faced this problem quite often; the best way out is to drop the package from the back end and re-create the package from OWB.
    Regards
    -AP

  • Truncate tables in pre-mapping process

    I am using OWB 9.0.4 and I have 10 staging area tables that I will be loading in one mapping. I want to truncate all of these tables before I load them. I cannot use the TRUNCATE/INSERT option on the target tables because each table will be loaded from two different source tables, and since I can't specify order within the mapping, I can't have one TRUNCATE/INSERT and the other just INSERT. I would prefer not to have two mappings. I have also tried doing a union on the two source tables, but this was not working out well. If I use a pre-mapping operator and select the WB_TRUNCATE_TABLE function, how do I specify multiple tables?
    Is there a better way to do this?

    The WB_TRUNCATE_TABLE function takes only one parameter, so you would have to have 10 pre-mapping processes with that function, or a single one with a custom function (a rough sketch of such a function follows below).
    I guess you figured the other two choices yourself:
    - If your sources have the same number of attributes and matching datatypes - map them through a Set Operator in a single map with TRUNCATE/INSERT loading type
    - If your sources are different, put them in different maps, with TRUNCATE/INSERT on one and INSERT on the other. Then you can control the order of map execution.
    Nikolai
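    A rough sketch of such a custom function, parsing a comma-separated list of table names (the function name and parameter are made up for illustration):
    CREATE OR REPLACE FUNCTION truncate_tables (p_table_list IN VARCHAR2)
      RETURN NUMBER
    IS
      l_list VARCHAR2(4000) := p_table_list || ',';
      l_pos  PLS_INTEGER;
      l_tab  VARCHAR2(128);
    BEGIN
      WHILE l_list IS NOT NULL LOOP
        l_pos  := INSTR(l_list, ',');
        l_tab  := TRIM(SUBSTR(l_list, 1, l_pos - 1));
        l_list := SUBSTR(l_list, l_pos + 1);
        IF l_tab IS NOT NULL THEN
          EXECUTE IMMEDIATE 'TRUNCATE TABLE ' || l_tab;
        END IF;
      END LOOP;
      RETURN 1;
    END truncate_tables;
    /
    A single pre-mapping process could then call something like truncate_tables('STG_TAB1,STG_TAB2,STG_TAB3') instead of ten WB_TRUNCATE_TABLE calls.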

  • Delete the matched, then insert

    Hi,
    I want to implement a loading situation like this: when there is a match, delete all matched records, then insert the new records into the table.
    I used the delete/insert operator property, specifying one attribute for the delete match. But it ended up deleting all records in the table and inserting the new records, instead of just deleting the matched ones.
    Is there a loading type or operator property that allows me to handle this in the mapping without going through a transformation? Thanks
    Tracy

    Hello Tracy,
    As you have noticed, there is little difference between DELETE/INSERT and TRUNCATE/INSERT (the main difference is that the former allows rollback in case of failure, while the latter is more efficient).
    There is no way to achieve what you want in OWB 10gR1 in a single mapping (unless cheating and doing everything in PL/SQL!).
    You can split your problem into two separate mappings:
    1) delete all records from your target that match your source
    2) insert your source records into the target
    Alternatively, you can
    1) delete all records from your target that match your source except one (using e.g. ROWID)
    2) update your target with source rows
    Regards, Hans Henrik
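    In plain SQL, the two-mapping approach boils down to something like this (match_col and the table names are placeholders):
    -- mapping 1: delete target rows that have a match in the source
    DELETE FROM target_tab t
     WHERE EXISTS (SELECT 1 FROM source_tab s WHERE s.match_col = t.match_col);
    -- mapping 2: plain INSERT of the source rows
    INSERT INTO target_tab SELECT * FROM source_tab;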

  • Create a table in SQL with datatype equivalent to LongBlob

    I have a MySQL or phpMyAdmin table (not sure which), with longblob fields, that I want to convert to SQL Server.
    Here is a link to a RAR with two files: 'ORIGINAL CODE.sql' is the original code sample and 'NEW_SQL_CODE.sql' is the code I am writing in SQL to create a database.
    Click to download the two files.
    I fail to make the insert in 'NEW_SQL_CODE.sql' work; it says (translated from Spanish) something like "The binary data will be truncated".
    INSERT INTO inmuebles_fotos (ci_inm, pos, foto, mini, comentario, inet, impr_cartel, impr_visita) VALUES
    (6, 0, 0xffd8ffe000104a46494600010100000100010...etc...
    I don't know if I have defined the wrong data type (image) as the equivalent of the MySQL longblob. All I want to do is make that insert work in SQL Server and save that image as a .jpg if possible. I don't know whether it's simply not possible in SQL Server and can only
    be done in MySQL.
    Thanks for any help.

    The original table is not mine; I am just trying to save the images as .jpg files to the hard drive.
    Here is the original table, which holds 500 MB of pictures; in the sample there is only one picture:
    CREATE TABLE IF NOT EXISTS `inmuebles_fotos` (
    `ci_inm` int(10) unsigned DEFAULT NULL,
    `pos` smallint(6) DEFAULT NULL,
    `foto` longblob,
    `mini` longblob,
    `comentario` varchar(100) DEFAULT NULL,
    `inet` tinyint(3) unsigned DEFAULT '0',
    `impr_cartel` smallint(6) DEFAULT '0',
    `impr_visita` smallint(6) DEFAULT '0'
    ) ENGINE=MyISAM DEFAULT CHARSET=latin1;
    And here is the equivalent table in SQL Server that I am trying to create, so I can import all the records and save the pictures from SQL Server, which is what we use here.
    CREATE TABLE [dbo].[inmuebles_fotos2](
    [ci_inm] [int] NULL,
    [pos] [int] NULL,
    [foto] [image] NULL,
    [mini] [image] NULL,
    [comentario] [varchar](1) NULL,
    [inet] [int] NULL,
    [impr_cartel] [int] NULL,
    [impr_visita] [int] NULL
    )
    Sorry for the trouble; I am trying everything I can get my hands on until I manage to save those images stored in "0x1234567890ABCDE……." format.
    I'll try anything you suggest, but I have only used SQL Server, so that's why I'm trying this road first.
    Thanks for your help.
