CTAS, INSERT millions of records

Hi All,
I have a table with approximately 20 million records, and I need to delete about 10 million of them. After some research, I decided to perform CTAS + TRUNCATE rather than DELETE.
I will be executing the following statements; TEMP_TABLE will retain the records I wish to keep.
1. CREATE TABLE TEMP_TABLE AS ( SELECT * FROM ACTUAL_TABLE WHERE DATE BETWEEN X AND Y );
2. TRUNCATE TABLE ACTUAL_TABLE;
3. INSERT /*+ append */ INTO ACTUAL_TABLE (SELECT * FROM TEMP_TABLE);
4. DROP TABLE TEMP_TABLE;
My production database is running on Oracle 10g. I am worried the INSERT operation might fail due to the large number of records to be transferred (estimated approx. 10 million records).
I have read that adding the APPEND hint might help during the insert, and that the UNDO tablespace might not be sufficient for such a large insert.
How can I improve the process and avoid possible failures?
Please kindly advise.
Thank you.

What do you mean by "your INSERT will fail"? Is your UNDO tablespace not auto-extendable?
Another option: instead of doing the insert, you can (see the sketch below)
1. Drop ACTUAL_TABLE.
2. Rename TEMP_TABLE to ACTUAL_TABLE.
3. Re-create the indexes, constraints, and triggers on the table.
Edited by: Karthick_Arp on Apr 29, 2009 10:15 PM
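A minimal sketch of that drop-and-rename approach; the column names follow the placeholders in the question, and the index/constraint names are hypothetical (re-create whatever actually exists on ACTUAL_TABLE):

CREATE TABLE TEMP_TABLE AS
  SELECT * FROM ACTUAL_TABLE
   WHERE DATE BETWEEN X AND Y;

DROP TABLE ACTUAL_TABLE;

ALTER TABLE TEMP_TABLE RENAME TO ACTUAL_TABLE;

-- Re-create whatever existed on the original table (hypothetical names):
ALTER TABLE ACTUAL_TABLE ADD CONSTRAINT ACTUAL_TABLE_PK PRIMARY KEY (ID);
CREATE INDEX ACTUAL_TABLE_IX1 ON ACTUAL_TABLE (DATE);

Note that grants on the dropped table are lost as well, so any object privileges have to be re-granted.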

Similar Messages

  • Best way to insert millions of records into the table

    Hi,
    From a performance point of view, I am looking for suggestions on the best way to insert millions of records into a table.
    Please also guide me on the easiest way to implement this with good performance.
    Thanks,
    Orahar.

    Orahar wrote:
    It's distributed data: N clients fetch transaction data from the database based on different conditions and insert it into another transaction table, like a batch process.
    Sounds contradictory.
    If the source data is already in the database, it is centralised.
    In that case you ideally do not want the overhead of shipping that data to a client, the client processing it, and the client shipping the results back to the database to be stored (inserted).
    It is much faster and more scalable for the client to instruct the database (via a stored proc or package) what to do, and for that code (running on the database) to process the data.
    For a stored proc, the same principle applies. It is faster for it to instruct the SQL engine what to do (via an INSERT..SELECT statement) than to pull the data from the SQL engine using a cursor fetch loop and then push that data back to the SQL engine using an insert statement.
    An INSERT..SELECT can also be done as a direct path insert. This introduces some limitations, but is faster than a normal insert.
    If the data processing is too complex for an INSERT..SELECT, then pulling the data into PL/SQL, processing it there, and pushing it back into the database is the next best option. This should be done using bulk processing, though, in order to optimise the data transfer between the PL/SQL and SQL engines (see the sketch after this reply).
    Other performance considerations are the constraints on the insert table, the triggers, the indexes, and so on. Make sure that data integrity is guaranteed (e.g. via PKs and FKs) and optimal (e.g. FK columns should be indexed). As for triggers, they may not be the best approach (for example, using a trigger to assign a sequence value when it can be done faster in the insert SQL itself). Personally, I avoid triggers; I would rather have that code residing in a PL/SQL API for manipulating data in that table.
    The type of table also plays a role. Make sure that the decision about the table structure, hashed, indexed, partitioned, etc, is the optimal one for the data structure that is to reside in that table.
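    To illustrate the bulk processing mentioned above, a minimal PL/SQL sketch; src_tab and tgt_tab are hypothetical tables with the same structure:

    DECLARE
      TYPE t_rows IS TABLE OF src_tab%ROWTYPE;
      l_rows t_rows;
      CURSOR c IS SELECT * FROM src_tab;
    BEGIN
      OPEN c;
      LOOP
        -- fetch in batches to limit PGA memory use
        FETCH c BULK COLLECT INTO l_rows LIMIT 1000;
        EXIT WHEN l_rows.COUNT = 0;
        -- one context switch to the SQL engine per batch
        FORALL i IN 1 .. l_rows.COUNT
          INSERT INTO tgt_tab VALUES l_rows(i);
      END LOOP;
      CLOSE c;
      COMMIT;
    END;
    /

    Still, as stated above, a plain INSERT..SELECT beats this whenever the transformation can be expressed in SQL.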

  • What is the best approach to insert millions of records?

    Hi,
    What is the best approach to inserting millions of records into a table?
    If an error occurs while inserting, how can I know which records failed?
    Thanks & Regards,
    Sunita

    Hello 942793
    There isn't a best approach if you do not provide us with the requirements and the environment...
    It depends on what "best" means for you.
    Questions:
    1.) Can you disable the Constraints / unique Indexes on the table?
    2.) Is there a possibility to run parallel queries?
    3.) Do you need to know which rows could not be inserted when the constraints are enabled, or is that not necessary?
    4.) Do you need it to be fast, or do you have time to do it?
    What does "best approach" mean for you?
    Regards,
    David
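    Regarding question 2, a minimal sketch of a parallel direct-path insert; the table names and the degree of parallelism are hypothetical:

    ALTER SESSION ENABLE PARALLEL DML;

    INSERT /*+ APPEND PARALLEL(t 4) */ INTO target_table t
    SELECT /*+ PARALLEL(s 4) */ *
      FROM source_table s;

    COMMIT;  -- a direct-path insert must be committed before the
             -- session can query target_table again

    Note that parallel DML has to be enabled explicitly per session, and that direct-path inserts load above the high-water mark, bypassing the buffer cache.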

  • Inserting millions of records - please help!

    Hi All,
    I have a scenario where I have to query MARA and filter out some articles, then query the WLK1 table (article/site combination) and insert the records into a custom (Z) table. The result may be millions of records.
    Can anyone tell me an efficient way to insert a large number of records? This is urgent. Please help.
    Warm Regards,
    Sandeep Shenoy

    This is sample code I am using in one of my programs. You can try a similar approach and insert into the custom table with every loop pass.
    I am considering 2000 records at a time; you can decide the number and code accordingly.
      if not tb_bkpf[] is initial.
        " Fetch the data from BSEG in blocks of 2000 BKPF entries to
        " reduce the overhead of database extraction.
        clear l_lines.
        describe table tb_bkpf lines l_lines.
        if l_lines >= 1.
          clear: l_start, l_end.
          do.
            l_start = l_end + 1.
            l_end = l_end + 2000.
            if l_end > l_lines.
              l_end = l_lines.
            endif.
            append lines of tb_bkpf from l_start to l_end to tb_bkpf_temp.
            " Populate tb_bseg_tmp in the field order of the database table.
            select bukrs
                   belnr
                   gjahr
                   buzei
                   shkzg
                   dmbtr
                   hkont
                   matnr
                   werks
              from bseg
              appending table tb_bseg_tmp
              for all entries in tb_bkpf_temp
              where bukrs = tb_bkpf_temp-bukrs and
                    belnr = tb_bkpf_temp-belnr and
                    gjahr = tb_bkpf_temp-gjahr and
                    hkont in s_hkont.
            refresh tb_bkpf_temp.
            if l_end >= l_lines.
              exit.
            endif.
          enddo.
        endif.
      endif.

  • How to insert millions of records

    Dear All,
    I need your expert opinion.
    To insert 1 million records (this will be called by 15 parallel jobs), can we use
    Insert into table1 (col1,col2,col3) Select col1,col2,col3 from table2
    Concern: buffer memory, or any error like rollback segment exhaustion.
    We have 10-15 columns of almost 15 bytes each, and it is a production environment.
    And what if the records are near 15 million?
    Edited by: ma**** on 06-Jun-2012 05:34

    845712 wrote:
    Hi,
    We can use bulk collect and FORALL; I think it could be easier.
    Thanks,
    Brij
    I'm sorry, but this is not correct.
    Check what Tom Kyte recommends here: http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:760210800346068768
    the mantra:
    o You should do it in a single SQL statement if at all possible.
    o If you cannot do it in a single SQL Statement, then do it in PL/SQL.
    o If you cannot do it in PL/SQL, try a Java Stored Procedure.
    o If you cannot do it in Java, do it in a C external procedure.
    o If you cannot do it in a C external routine, you might want to seriously
    think about why it is you need to do it…
    So go with a single SQL statement if at all possible... with /*+ APPEND */ if you understand what it does and whether it applies (see the sketch below).
    Regards.
    Al
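    Applied to the statement from the question, the single-statement approach might look like this sketch (direct-path caveats apply):

    INSERT /*+ APPEND */ INTO table1 (col1, col2, col3)
    SELECT col1, col2, col3 FROM table2;

    COMMIT;  -- mandatory: after a direct-path insert the session
             -- cannot read table1 until the transaction commits

    A direct-path insert loads above the high-water mark and generates minimal undo for the table data, which addresses the rollback segment concern; any indexes on table1 still generate undo and redo.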

  • Inserting millions of records into new table based on condition

    Hi All,
    We have a range-partitioned table that contains 950,000,000 records (dating from 2004), which is list sub-partitioned on status. Possible values of status are 0, 1, 2, 3, and 4.
    The requirement is to get all the rows with status 1 and date earlier than 24-Aug-2011 (Oracle 11g R2).
    I am trying the code below:
    CREATE TABLE RECONCILIATION_TAB PARALLEL 3 NOLOGGING
    AS SELECT /*+ INDEX(CARDS_TAB STATUS_IDX) */ ID, STATUS, DATE_D
    FROM CARDS_TAB
    WHERE DATE_D < TO_DATE('24-AUG-2011','DD-MON-YYYY')
    AND STATUS = 1;
    CARDS_TAB has two global indexes: one on STATUS and another on DATE_D.
    The above query has been running for the last 28 hours! Is this the right approach?
    With Regards,
    Farooq Abdulla

    You said the table was range partitioned but you didn't say by what. I'm guessing the table is range partitioned by DATE_D. Is that a valid assumption?
    You said that the table was subpartitioned by status. If the table is subpartitioned by status, what do you mean that the data is randomly distributed? Surely it's confined to particular subpartitions, right?
    What is the query plan without the hint?
    What is the query plan with the hint?
    Why do you believe that adding the hint will be beneficial?
    Justin
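    A quick way to answer the query plan questions, using the statement from the post:

    EXPLAIN PLAN FOR
    SELECT /*+ INDEX(CARDS_TAB STATUS_IDX) */ ID, STATUS, DATE_D
      FROM CARDS_TAB
     WHERE DATE_D < TO_DATE('24-AUG-2011','DD-MON-YYYY')
       AND STATUS = 1;

    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

    Run it once with the hint and once without, and compare the two plans before letting either version run for another 28 hours.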

  • Best way to insert millions of records in SQL Azure on a daily basis?

    I maintain millions of records in SQL Server 2008 R2, and I now intend to migrate these to SQL Azure.
    In the existing SQL Server 2008 R2 system, a few SSIS packages and stored procedures first truncate the existing records and then perform the insert operation on the table, which holds approx. 26 million records, in 30 minutes on a daily basis (as the system demands).
    When I migrate these to SQL Azure, I am unable to perform these operations as quickly as I did in SQL Server 2008; sometimes I get a request timeout error.
    While searching for a faster way, many people suggest batch processing or BCP. But batch processing is not suitable in my case because it takes too long to insert the records. I need a faster, more efficient way on SQL Azure.
    Hoping for some good suggestions.
    Thanks in advance :)
    Ashish Narnoli

    +1 to Frank's advice.
    Also, please upgrade your Azure SQL Database server to V12, as you will receive higher performance on the premium tiers. As you scale up your database for the bulk insert, remember that SQL Database charges by the hour; to minimize costs, scale back down when the inserts have completed.
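    A sketch of that scale-up/scale-down step in T-SQL; the database name and the chosen tiers are hypothetical, pick whatever your load testing justifies:

    -- scale up before the daily bulk load
    ALTER DATABASE [MyDb] MODIFY (SERVICE_OBJECTIVE = 'P2');

    -- ... run the load ...

    -- scale back down afterwards to limit the hourly charge
    ALTER DATABASE [MyDb] MODIFY (SERVICE_OBJECTIVE = 'S1');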

  • Insert millions of records

    Hi Guys,
    I have a web application that has an upload feature. Records from the uploaded file will be inserted into the database (I'm using TimesTen as of now). Assuming the file contains millions of records, it will certainly make the transaction quite slow. My question is: how can I make this transaction faster? I know TimesTen has the ttbulkcp command that inserts records from a file, but I'm not sure if this can be done from Java. Suggestions please! Thanks

    Rexcel wrote:
    can you guide me how?
    http://www.javaworld.com/javaworld/jw-12-2000/jw-1229-traps.html

  • What's the best way to delete 2.4 million records from a table?

    We have two tables: a production table and a temp table whose data we want to insert into the production table. The temp table has 2.5 million records, while the production table has billions of records. What we want to do is simple: discard the records that already exist in the production table and then insert the remaining records from the temp table into the production table.
    Can anyone advise on the best way to do this?
    Thanks,
    Waheed.

    Waheed Azhar wrote:
    The production table is live and data is appended to it on a random basis. If I insert the data from temp into the prod table, a PK violation exception occurs, because a record we are about to insert from temp already exists in prod.
    If you really just want to insert the records and don't want to update the matching ones and you're already on 10g you could use the "DML error logging" facility of the INSERT command, which would log all failed records but succeeds for the remaining ones.
    You can create a suitable exception table using the DBMS_ERRLOG.CREATE_ERROR_LOG procedure and then use the "LOG ERRORS INTO" clause of the INSERT command. Note that you can't use the "direct-path" insert mode (APPEND hint) if you expect to encounter UNIQUE CONSTRAINT violations, because these can't be logged and cause the direct-path insert to fail. Since this is a "live" table, you probably don't want to use the direct-path insert anyway.
    See the manuals for more information: http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_9014.htm#BGBEIACB
    Sample taken from 10g manuals:
    CREATE TABLE raises (emp_id NUMBER, sal NUMBER
       CONSTRAINT check_sal CHECK(sal > 8000));
    EXECUTE DBMS_ERRLOG.CREATE_ERROR_LOG('raises', 'errlog');
    INSERT INTO raises
       SELECT employee_id, salary*1.1 FROM employees
       WHERE commission_pct > .2
       LOG ERRORS INTO errlog ('my_bad') REJECT LIMIT 10;
    SELECT ORA_ERR_MESG$, ORA_ERR_TAG$, emp_id, sal FROM errlog;
    ORA_ERR_MESG$               ORA_ERR_TAG$         EMP_ID SAL
    ORA-02290: check constraint my_bad               161    7700
    (HR.SYS_C004266) violated
    If the number of rows in the temp table is not too large and you have a suitable index on the large table for the lookup, you could also try to use a NOT EXISTS clause in the insert command:
    INSERT INTO <large_table>
    SELECT ...
      FROM TEMP A
     WHERE NOT EXISTS (
           SELECT NULL
             FROM <large_table> B
            WHERE B.<lookup> = A.<key>
           );
    But you need to check the execution plan, because a hash join using a full table scan on the <large_table> is probably something you want to avoid.
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • How to make this faster?? Reading millions of records from a txt file

    Hi there,
    I have an issue. There is a txt file containing 2 million records, and I have another file containing over 10,000 numbers. I need to compare those 10,000 numbers with the 2 million records: if a record contains a number that belongs to the 10,000-number set, I retrieve and keep that record. When I finish the comparison, I will write all the resulting records to a txt file.
    What kind of data structure should I use to hold the records and numbers? How can I make the comparison quicker? Any idea will do!
    Thanks!

    If I were to do it, I would insert both files' records into the database, then run a SQL statement against the two tables to get the results, and finally write the result set out to another text file (see the sketch below).
    Just my opinion; not sure if this is faster.
    Message was edited by:
    clarenceloh
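    Following that suggestion, a sketch with two hypothetical staging tables loaded from the files:

    -- big_records(num, payload): the 2 million parsed lines
    -- small_numbers(num):        the 10,000 lookup numbers
    SELECT r.num, r.payload
      FROM big_records r
     WHERE EXISTS (SELECT NULL FROM small_numbers s WHERE s.num = r.num);

    The optimizer can satisfy this with a single pass over each table, which is hard to beat with hand-written file comparisons.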

  • Database table with potentially millions of records

    Hello,
    We want to keep track of users' transaction history from the performance database. The workload statistics contain the user transaction history information; however, since the workload performance statistics are intended to be temporary and data from these tables is deleted every few months, we lose all the users' historical records.
    We want to keep track of the following in a table that we can query later:
    User ID      - Length 12
    Transaction  - Length 20
    Date         - Length 8
    With over 20,000 end users in production this can translate into thousands of records to be inserted into this table daily.
    What is the best way to store this type of information? Is there a specific table type designed for storing massive quantities of data? Also, over time (a few years) this table can grow into millions or hundreds of millions of records. How can we manage that in terms of performance and storage space?
    If anyone has worked with database tables with very large amounts of records, and would like to share your experiences, please let us know how we could/should structure this function in our environment.
    Best Regards.

    Hi SS
    Alternatively, you can use a cluster table. For more help, refer to the F1 help on the "IMPORT TO / EXPORT FROM DATABASE" statements.
    Or you can store the data as a file on the application server using the "OPEN DATASET, TRANSFER, CLOSE DATASET" statements.
    You can also choose to archive data older than some definite date.
    You can also mix these alternatives for recent and archived data.
    --Serdar

  • Copying millions of records

    Hi!
    I need to copy millions of records (call detail records) from an old table to a new table in a new database.
    I created the procedure below, but it has serious performance problems: I only copy about 200,000 records/hour. Could you suggest another approach? The table is partitioned by day and has an index on column TID.
    PROCEDURE sendCdrs(
      inOperador  IN  VARCHAR2,
      inOperacao  IN  VARCHAR2,
      outError    OUT VARCHAR2,
      outErrorNbr OUT VARCHAR2,
      outNbrCDRs  OUT NUMBER)
    IS
      CURSOR myCdrs (inDay NUMBER) IS
        SELECT *
          FROM CDRV4_SCP_DSCP_CAMEL
         WHERE day = inDay;
      rowCdr          CDRV4_SCP_DSCP_CAMEL%ROWTYPE;
      vToday          PLS_INTEGER := TO_NUMBER(TO_CHAR(SYSDATE, 'ddd'));
      vNbrMinimumDays PLS_INTEGER;
      vDiffDays       PLS_INTEGER;
      vCommit         PLS_INTEGER;
    BEGIN
      outErrorNbr := '501';
      outNbrCDRs := 0;
      SELECT nbrcommit, nbrminimumdays, diffdays
        INTO vCommit, vNbrMinimumDays, vDiffDays
        FROM SMP_CENTRAL_XDRS
       WHERE rownum = 1;
      outErrorNbr := '505';
      IF vCommit IS NULL OR vNbrMinimumDays IS NULL OR vDiffDays IS NULL THEN
        outError := '910 Error in conf table';
        RETURN;
      END IF;
      outErrorNbr := '510';
      IF ((vToday - vDiffDays) < (vToday - vNbrMinimumDays)) THEN
        OPEN myCdrs (vToday);
        LOOP
          FETCH myCdrs INTO rowCdr;
          EXIT WHEN myCdrs%NOTFOUND;
          -- Insert the CDRs at the destination
          INSERT INTO CDRV4_SCP_DSCP_CAMEL@DBL_SMP_STRESS_CORE
          VALUES rowCdr;
          DELETE FROM CDRV4_SCP_DSCP_CAMEL
           WHERE tid = rowCdr.tid;
          outNbrCDRs := outNbrCDRs + 1;
          -- Commit every vCommit rows
          IF MOD(outNbrCDRs, vCommit) = 0 THEN
            COMMIT;
          END IF;
        END LOOP;
        CLOSE myCdrs;
        -- Commit the remaining changes when outNbrCDRs is not a multiple of vCommit
        COMMIT;
      END IF;
      outErrorNbr := RET_OK;
      outError := RET_OK;
    EXCEPTION
      WHEN OTHERS THEN
        outError := '910 Erro BD';
        ROLLBACK;
        IF myCdrs%ISOPEN THEN
          CLOSE myCdrs;
        END IF;
    END sendCdrs;
    Thanks a lot
    André

    Hi,
    Why not just set up a materialized view and have the database do the copying for you? Just set it up to refresh at the end of every day.
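    A minimal sketch of that idea, assuming a database link OLD_DB defined on the new database pointing back to the old one (the reverse of the link used in the procedure):

    CREATE MATERIALIZED VIEW cdr_copy_mv
      REFRESH COMPLETE
      START WITH TRUNC(SYSDATE) + 1
      NEXT  TRUNC(SYSDATE) + 1
    AS
    SELECT * FROM CDRV4_SCP_DSCP_CAMEL@OLD_DB;

    This refreshes once a day at midnight and replaces the row-by-row loop with one set-based copy; deleting the source rows would still be a separate step.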

  • How can I read millions of records and write them as a *.csv file?

    I have to return some set of column values (based on the current date) from the database; it could be millions of records. DBMS_OUTPUT can accommodate only 20,000 records. (I am retrieving them through a procedure using a cursor.)
    I need to write these values to a file with the .csv extension (comma-separated file). I thought of using UTL_FILE, but I heard there is some restriction on the number of records even in UTL_FILE.
    If so, what is the restriction? Is there any other way I can achieve this (BLOB or CLOB)?
    Please help me solve this problem.
    I have to write to the .csv file the values from the cursor, which I have concatenated with ","; right now it returns the values to the screen (using DBMS_OUTPUT, temporarily), and I have to redirect the output to the .csv file.
    The .csv file should be in some physical directory, and I have to upload (FTP) the file from that directory to the website.
    Please help me out.

    Jimmy,
    Make sure that UTL_FILE is properly installed. Make sure that the utl_file_dir parameter is set in the init.ora file and that the database has been restarted so that it takes effect. Make sure that you have sufficient privileges granted directly, not through roles, including privileges on the file and directory that you are trying to write to. Then add the exception block below to your procedure to narrow down the source of the exception and test again. If you still get an error, please post a cut-and-paste of the exact code that you ran and any messages that you received.
    exception
        when utl_file.invalid_path then
            raise_application_error(-20001,
                'INVALID_PATH: File location or filename was invalid.');
        when utl_file.invalid_mode then
            raise_application_error(-20002,
                'INVALID_MODE: The open_mode parameter in FOPEN was invalid.');
        when utl_file.invalid_filehandle then
            raise_application_error(-20003,
                'INVALID_FILEHANDLE: The file handle was invalid.');
        when utl_file.invalid_operation then
            raise_application_error(-20004,
                'INVALID_OPERATION: The file could not be opened or operated on as requested.');
        when utl_file.read_error then
            raise_application_error(-20005,
                'READ_ERROR: An operating system error occurred during the read operation.');
        when utl_file.write_error then
            raise_application_error(-20006,
                'WRITE_ERROR: An operating system error occurred during the write operation.');
        when utl_file.internal_error then
            raise_application_error(-20007,
                'INTERNAL_ERROR: An unspecified error in PL/SQL.');
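    Once UTL_FILE works, a minimal sketch of the CSV export itself; the directory, file name, and query are hypothetical, and CREATE DIRECTORY needs the appropriate privilege:

    CREATE OR REPLACE DIRECTORY csv_dir AS '/u01/app/oracle/csv';

    DECLARE
      f UTL_FILE.FILE_TYPE;
    BEGIN
      f := UTL_FILE.FOPEN('CSV_DIR', 'report.csv', 'w', 32767);
      FOR r IN (SELECT col1, col2, col3
                  FROM some_table
                 WHERE run_date = TRUNC(SYSDATE)) LOOP
        UTL_FILE.PUT_LINE(f, r.col1 || ',' || r.col2 || ',' || r.col3);
      END LOOP;
      UTL_FILE.FCLOSE(f);
    END;
    /

    From there the file can be picked up via FTP from the directory on the database server.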

  • Inserting ordered records into a view with an INSTEAD OF trigger

    Hi all,
    I have this DML query:
    INSERT INTO table_view t (a,
                              b,
                              c,
                              d,
                              e)
          SELECT   a,
                   b,
                   c,
                   d,
                   e
            FROM   table_name
        ORDER BY   d
    table_view is a view with an INSTEAD OF trigger, and table_name is a table holding the records to be inserted.
    I need the ORDER BY clause because my trigger calls a procedure that processes each record and inserts it into a table used by the view. I need to guarantee this order.
    If I put another SELECT statement outside, like this:
    INSERT INTO table_view t (a,
                              b,
                              c,
                              d,
                              e)
          SELECT   a,
                   b,
                   c,
                   d,
                   e
            FROM   table_name
        ORDER BY   d)
    It works. But I can't add this new SELECT, because the query is generated automatically by Oracle Data Integrator.
    What I'm asking is whether there is any solution to this problem without changing anything in Oracle Data Integrator; in other words, whether there is any simple solution other than adding a new SELECT statement.
    Thanks in advance,
    Regards.

    Sorry... copy+paste error :)
    INSERT INTO table_view t (a,
                              b,
                              c,
                              d,
                              e)
        SELECT   *
          FROM   (  SELECT   a,
                             b,
                             c,
                             d,
                             e
                      FROM   table_name
                  ORDER BY   d)
    I need to insert them ordered by column D, because my trigger needs to validate each record and insert it. I have some restrictions. For example, my records are:
    2   1   2006   M
    1   2   2007   M
    1   3   2007   S 2007
    1   2   2007   S 2007
    2   1   2009   S
    2   1   2009   S
    I want to insert the 'M' records first and then the 'S' records, because the 'S' records only make sense in the target table if 'M' records exist.
    Regards,
    Filipe Almeida

  • How to insert a specific record into an ALV

    Hi everyone,
      here is my problem:
    I put an ALV Grid control on my screen, and it displays some records.
    I created a new button on the toolbar, and in the handler class for the user command I need to select a record from a DB table into a work area.
    After I select the record I need, here is the key question:
    *I want to insert the record into the ALV at a specific row position that the user decides; what should I do?*
      METHOD HANDLE_ON_USER_COMMAND.
        CASE E_UCOMM.
          WHEN CL_GUI_ALV_GRID=>MC_FC_LOC_INSERT_ROW.
          WHEN 'FC_ASSIGN'.
    ***  I do some selection here
            select %%%$$%%%^&&** into GS_HOLIDAYS.
    ***  After the selection, what should I do then?
    ***  I want to insert the work area GS_HOLIDAYS into the ALV at a specific position,
    ***  e.g. into the 3rd row.
    ***  How can I achieve that? Call a method or something? I don't know.
          WHEN 'FC_DELETE'.
          WHEN OTHERS.
        ENDCASE.
      ENDMETHOD.
    Please don't point me to the programs in package "SLIS"; I have already been through them and haven't solved my problem yet.
    Thanks for your help.

    All you need to do is: on the user command for inserting new records, insert a blank record into the internal table that you are displaying, with the required style information to make it editable, and then refresh the ALV display with method REFRESH_TABLE_DISPLAY.
    Thanks & Regards,
    Vivek Gaur

    Hey, are there any plan to include the ability to plug in components to catalyst and be able to use them, for example to plug in the papervision3D swc and to be able to visually configure these features ... not sure if this would be possile, but it's