Copy 500 million records between 2 tables

Hi,
I have two tables, T1 and T2, in the same database and schema.
T1 holds around 500 million rows. What is the best way to copy the data from T1 to T2?
How fast is the SQL*Plus COPY command?
Will BULK COLLECT into a collection combined with FORALL statements help do the job faster?
Are there any other options? (I only want to use SQL*Plus or PL/SQL.)
Please let me know if there is a good way to do this.
Thanks,
Vijay

Hi,
Thank you very much for the response.
This is a good option too, but I am afraid about the logging it does at the back end.
[Sven: What kind of logging? Maybe you can switch it off (triggers?).]
And moreover it works row by row.
[Sven: No, it does not. It is not the same as
INSERT INTO T2 SELECT * FROM T1 WHERE ID = 1;
INSERT INTO T2 SELECT * FROM T1 WHERE ID = 2;
INSERT INTO T2 SELECT * FROM T1 WHERE ID = 3;
INSERT INTO T2 SELECT * FROM T1 WHERE ID = 4;
INSERT INTO T2 SELECT * FROM T1 WHERE ID = 5;
INSERT INTO T2 SELECT * FROM T1 WHERE ID = 6;
It does it all at once:
INSERT INTO T2 SELECT * FROM T1;]
Also, the time available for my job to finish copying the data is very limited, so I am skeptical about using this option.
[Sven: Everything else will be slow compared to that.]
**Also, the two tables have the same data definition.
In case there is a better way to do this, please share.
Thank you again for the help.
Vijay
[Sven: Good luck.]
Message was edited by:
Sven Weller
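
For reference, here is a minimal sketch of the set-based copy described above, assuming T2 already exists with the same definition as T1, that a direct-path (APPEND) insert is acceptable in your environment, and that the parallel degree of 4 is only an illustrative value:

ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ APPEND PARALLEL(t2, 4) */ INTO t2
SELECT /*+ PARALLEL(t1, 4) */ *
FROM   t1;

-- A direct-path insert must be committed before this session can query T2 again.
COMMIT;

If the target table is set to NOLOGGING, a direct-path insert generates minimal redo, at the cost of the new data not being recoverable from the redo stream until the next backup.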

Similar Messages

  • Delete 50 Million records from a table with 60 Million records

    Hi,
    I'm using oracle9.2.0.7 on win2k3 32bit.
    I need to delete 50 million rows from a table that contains 60 million records. This DB was just passed on to me. I tried a straight DELETE statement, but it takes too long.
    After reading articles and forums, the best way to delete that many records seems to be to create a temp table, transfer the data I need to keep into it, drop the big table, and then rename the temp table to the big table's name. The key here is creating an exact replica of the big table.
    I have the create table, index and constraint scripts from the export file of my production DB, but I noticed that I do not have the grant scripts. Is there a view I could use to get them? Can dbms_metadata get this?
    When I need to create an exact replica of my big table, I only need the create table, index, constraint, and grant scripts, right? Did I miss anything?
    I just want to make sure that I haven't left anything out. Kindly help.
    Thanks and Best Regards

    Can dbms_metadata get this?
    Yes, dbms_metadata can get the grants:
    YAS@10GR2 > select dbms_metadata.GET_DEPENDENT_DDL('OBJECT_GRANT','TEST') from dual;
    DBMS_METADATA.GET_DEPENDENT_DDL('OBJECT_GRANT','TEST')
      GRANT SELECT ON "YAS"."TEST" TO "SYS"
    When I need to create an exact replica of my big table, I only need:
    create table, indexes, constraints, and grants script right? Did I miss anything?
    There are triggers, foreign keys referencing this table (which will not permit you to drop the table if you do not take care of them), snapshot logs on the table, snapshots based on the table, etc...
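    A minimal sketch of the copy-and-swap approach described above, using hypothetical names (BIG_TABLE, BIG_TABLE_NEW, and the filter KEEP_FLAG = 'Y' stand in for the real table and for the predicate that identifies the 10 million rows to keep):
    CREATE TABLE big_table_new NOLOGGING AS
    SELECT * FROM big_table WHERE keep_flag = 'Y';
    -- Recreate indexes, constraints, triggers and grants here, e.g. from scripts
    -- generated with dbms_metadata.get_ddl / get_dependent_ddl, then:
    DROP TABLE big_table;
    RENAME big_table_new TO big_table;
    Remember the dependent objects listed above (foreign keys referencing the table, snapshot logs, snapshots) before dropping the original.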

  • How to Copy the specific record of one table to other table(same structure)

    Hello,
    I have developed a form with some buttons on it. Suppose the form is based on TABLE_1, which has the following structure:
    ( Bill_No      NUMBER(5),
      Bill_Date    DATE,
      Bill_amount  NUMBER(6),
      Description  VARCHAR2(60) )
    My requirement is: when I save any record, I then want to copy the contents of that record to Table_2.
    On the form there is a button named, let's say, "Copy current record to Table_2". When I press the button, the record should be copied from Table_1 to Table_2.
    (Note: - Table_2 has the same structure as Table_1.)
    Please help me to solve this problem.
    Thanks in advance.

    I would prefer a pure database solution: put a database trigger BEFORE INSERT ... FOR EACH ROW on your Table_1 and do the insert there.
    If you do it in a PRE-INSERT trigger in Forms, it will work for data entered via the Forms application, but what happens if data is inserted via a different front end?
    CREATE OR REPLACE TRIGGER TRG_TABLENAME_BRI
    BEFORE INSERT
    ON TABLENAME
    FOR EACH ROW
    BEGIN
      -- Do your insert here
      INSERT INTO TABLE2 (
        COL1,
        COL2
      ) VALUES (
        :NEW.COL1,
        :NEW.COL2
      );
    END;
    /
    Edited by: aweiden on 22.09.2008 08:08

  • Finding unmatched records between two tables

    Hi,
    Suppose there are two tables, A and B, that have a couple of common key fields. I want to select those records from A which do not have a matching record in B. What would be the most efficient way to do that?
    Thank you.

    Hey,
    have a look at this link; I think it may help you achieve your requirement:
    http://help.sap.com/saphelp_470/helpdata/en/fc/eb35f8358411d1829f0000e829fbfe/content.htm
    Comparing internal tables:
    Internal tables can be compared with the operators that are used to compare other data objects. The most important criterion when comparing internal tables is the number of lines they contain: the table with more lines is the greater one. If both internal tables have the same number of lines, they are compared line by line. The operators used for comparisons are LE, LT, GE, GT, EQ, NE.
    Except for EQ, the comparison stops at the first pair of components that makes the condition false.
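    If the same requirement comes up on the database side rather than in ABAP, a minimal Oracle SQL sketch is an anti-join (hypothetical tables A and B with assumed key columns K1 and K2):
    SELECT a.*
    FROM   a
    WHERE  NOT EXISTS (SELECT 1
                       FROM   b
                       WHERE  b.k1 = a.k1
                       AND    b.k2 = a.k2);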

  • Not able to copy all the records from the table?

    Hi All,
    I have a table, Table_1, with 5 crore (50 million) rows. I have created Table_2 with the same structure as Table_1 and am trying to insert all the data from Table_1 into Table_2 using the code below:
    CREATE OR REPLACE PROCEDURE insert_prc (limit_in IN PLS_INTEGER)
    IS
        cursor cur_insert
        IS
        SELECT  *
        FROM    Table_1;
        type tabtype_insert IS TABLE OF cur_insert%ROWTYPE INDEX BY PLS_INTEGER;
        v_tabtype_insert   tabtype_insert;
        v_limit_rows    NUMBER := 1000;
        v_start    PLS_INTEGER;
        v_end      PLS_INTEGER;
        v_update_count  NUMBER;
        v_bulk_errors   NUMBER;   
    begin
        DBMS_SESSION.free_unused_user_memory;
        show_pga_memory (limit_in || ' - BEFORE');
        v_start := DBMS_UTILITY.get_cpu_time;
        BEGIN
            open cur_insert;
            LOOP
                FETCH cur_insert BULK COLLECT INTO v_tabtype_insert LIMIT v_limit_rows;
                FORALL i IN 1..v_tabtype_insert.COUNT SAVE EXCEPTIONS
                INSERT INTO  Table_2
                VALUES v_tabtype_insert(i);
                EXIT WHEN v_tabtype_insert.COUNT < v_limit_rows;
                COMMIT;
            END LOOP;
            CLOSE cur_insert;
        EXCEPTION
        WHEN OTHERS
        THEN
            v_update_count := 0;
            v_bulk_errors := SQL%BULK_EXCEPTIONS.COUNT;
            dbms_output.put_line('Number of INSERT statements that failed : ' ||v_bulk_errors);
            dbms_output.put_line('*******************************************************************************************************************');
            /*FOR i IN 1..v_bulk_errors
            LOOP
                dbms_output.put_line('An Error ' || i || ' was occured '|| SQL%BULK_EXCEPTIONS(i).ERROR_INDEX ||
                                    ' during update of Actuator Model: '|| v_tabtype_mtl_items(SQL%BULK_EXCEPTIONS(i).ERROR_INDEX) ||
                                    ' . Oracle error : '|| SQLERRM(-SQL%BULK_EXCEPTIONS(i).ERROR_CODE));
            END LOOP;   */
            dbms_output.put_line('*******************************************************************************************************************');
        END; 
          v_end := DBMS_UTILITY.get_cpu_time;
      DBMS_OUTPUT.put_line (   'Elapsed CPU time for limit of '
                            || limit_in
                            || ' = '
                            || TO_CHAR ((v_end - v_start) / 100));
          show_pga_memory (limit_in || ' - AFTER');
    end insert_prc;
    /
    CREATE OR REPLACE PROCEDURE APPS.show_pga_memory (
       context_in   IN   VARCHAR2 DEFAULT NULL
    )
    /*
       SELECT privileges required on:
          SYS.v_$session
          SYS.v_$sesstat
          SYS.v_$statname
       Here are the statements you should run:
       GRANT SELECT ON SYS.v_$session TO schema;
       GRANT SELECT ON SYS.v_$sesstat TO schema;
       GRANT SELECT ON SYS.v_$statname TO schema;
    */
    IS
       l_memory   NUMBER;
    BEGIN
       SELECT st.VALUE
         INTO l_memory
         FROM SYS.v_$session se, SYS.v_$sesstat st, SYS.v_$statname nm
        WHERE se.audsid = USERENV ('SESSIONID')
          AND st.statistic# = nm.statistic#
          AND se.SID = st.SID
          AND nm.NAME = 'session pga memory';
       dbms_output.put_line (CASE
                                WHEN context_in IS NULL
                                   THEN NULL
                                ELSE context_in || ' - '
                             END
                             || 'PGA memory used in session = '
                             || TO_CHAR (l_memory));
    END show_pga_memory;
    /
    From the above procedure I am able to insert only about 5,000,000 rows. The remaining 4 crore (40 million) rows are not inserted, but the program says it completed successfully.
    Note: Table_2 is a partitioned table and Table_1 is a non-partitioned table.
    Can anyone please tell me what the problem is in the above code?
    Thanks

    user212310 wrote:
    -- Using BULK COLLECTs and FORALLs will consume more resources.
    Yes, I agree with that. That is why I am using the LIMIT clause with a value of 1000: PL/SQL will reuse the same 1000 elements in the collection each time the data is fetched, and thus also reuse the same memory. Even if my table grows in size, the PGA consumption will remain stable.
    Limit or not, your process will consume more resources (and take longer) than the one I showed you. AND it is many, many more lines of code (harder to maintain, etc.).
    user212310 wrote:
    -- If you don't have a reason (aside from a misguided understanding of which is more performant) to use BULK COLLECTs and FORALLs, then you should go with the direct INSERT INTO ... SELECT * method.
    The reason I am using BULK COLLECT is to reduce the execution time of the procedure.
    Please let me know if I misunderstood something.
    Thanks
    Yes, you have.
    Please read this:
    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:760210800346068768
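    For comparison, a minimal sketch of the plain set-based insert recommended above, assuming no row-by-row error handling is needed (the APPEND hint is optional and shown only as a direct-path variant):
    INSERT /*+ APPEND */ INTO table_2
    SELECT * FROM table_1;
    COMMIT;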

  • Records need to be passed between Internal tables

    Hi Experts,
    I'm working on a BADI and have three internal tables, say IT1, IT2 and IT3, where BELNR is the key field for all of them. IT1 also has a field RECEIPTS.
    My requirement is: I want to fetch only those records where IT1-RECEIPTS = 'X' and then pass only those records to IT2 and IT3.
    Is there a way to pass records between internal tables? Please clarify how I can achieve this.
    BR,
    RAM.

    Hello,
    DATA: wa_tab LIKE LINE OF it1.
    LOOP AT it1 INTO wa_tab WHERE RECEIPTS EQ 'X'.
          APPEND wa_tab to it2.
          APPEND wa_tab to it3.
    ENDLOOP.

  • Update performance on a 38-million-record table

    Hi all,
    I'm trying to create a script to update a table that has around 38 million records. The table isn't partitioned, and I just have to update one CHAR(1 byte) field and set it to 'N'.
    The database is 10g R2 running on Tru64 UNIX.
    The script I created loops over a cursor, bulk collecting 200,000 ROWIDs per pass and running a FORALL to update the table by ROWID.
    The problem is that, in performance tests, this method took about 20 minutes to update 1 million rows, so it should take about 13 hours to update the whole table.
    My question is: is there any way to improve the performance?
    The Script:
    DECLARE
      CURSOR c1 IS
        SELECT ROWID
        FROM   RTG.TCLIENTE_RTG;
      TYPE rowidtab IS TABLE OF ROWID;
      d_rowid  rowidtab;
      v_char   CHAR(1) := 'N';
    BEGIN
      OPEN c1;
      LOOP
        FETCH c1 BULK COLLECT INTO d_rowid LIMIT 200000;
        FORALL i IN d_rowid.FIRST .. d_rowid.LAST
          UPDATE RTG.TCLIENTE_RTG
          SET    CLI_VALID_IND = v_char
          WHERE  ROWID = d_rowid(i);
        COMMIT;
        EXIT WHEN c1%NOTFOUND;
      END LOOP;
      CLOSE c1;
    END;
    Kind Regards,
    Fabio

    I'm just curious... Is this a new VARCHAR2(1) column that has been added to the table? If so, will the value of this column remain 'N' for the majority of the rows in the future?
    Has this column been introduced specifically to support one of the business functions in your application, i.e. it will not be used everywhere the table is currently in use?
    If your answers to the above questions are mostly "yes", then why did you choose to add a column that needs to be initialized to 'N' for all existing rows?
    Why not add a new single-column table for this requirement, the single column being the PK column(s) of the existing table? The meaning being: if a PK is present in this new table, then the "CLI_VALID_IND" for this client is 'yes'; if it is not present, then the "CLI_VALID_IND" is 'no'.
    That way you only have to add the new table and do nothing more. Of course, the SQL statements supporting the business logic of this new function will have to use, and maybe join, this new table. But is that really a huge disadvantage?
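    A minimal sketch of that design, assuming CLIENT_ID is the primary key of the existing table (a placeholder name; adjust to the real key columns):
    -- Presence of a client_id in this table means CLI_VALID_IND = 'Y';
    -- absence means 'N' (the value the mass update would have set).
    CREATE TABLE RTG.TCLIENTE_RTG_VALID (
      client_id NUMBER PRIMARY KEY
    );
    -- Example of how queries would derive the indicator:
    SELECT c.*,
           CASE WHEN v.client_id IS NULL THEN 'N' ELSE 'Y' END AS cli_valid_ind
    FROM   RTG.TCLIENTE_RTG c
    LEFT JOIN RTG.TCLIENTE_RTG_VALID v ON v.client_id = c.client_id;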

  • How to get a large number of records from an ADF table

    Hi All,
    I'm working on ADF, using JDeveloper 11.1.1.5. In my ADF application I have 1000 records in an ADF table, and I want to get 500 selected records from that table at a time, but I'm not able to retrieve that many records. How can I get the records from the ADF table? Please give me your suggestions.
    Thanks!

    Hi,
    for large record sets check the official doc:
    http://docs.oracle.com/cd/E24382_01/web.1112/e16182/bcadvvo.htm#CEGDBJEJ
    PAGE_RANGING is the best option for large tables.
    These may also help you:
    Re: Performance scrolling large ADF tables
    Re: Expert opinion needed: Best practices to handle huge rowsets on UI

  • Sending email after deleting the records in a table

    Hi
    I am deleting records in a table. After deleting the records, I want to send an email to another person. I am planning to follow these steps:
    1. Create a trigger on the table (AFTER DELETE ON table FOR EACH ROW) to copy the deleted records to a temporary table.
    2. Read the temporary table and send an email.
    Is there another way to do this without creating a temporary table?
    Govind

    I don't know what you plan to use to send the mail but here's a solution that would work.
    -- Create a send mail procedure
    create or replace procedure send_mail (
    sender      IN VARCHAR2,
    recipient   IN VARCHAR2,
    message     IN VARCHAR2)
    IS
      mailhost VARCHAR2(30) := 'localhost';
      mail_conn utl_smtp.connection;
    BEGIN
    mail_conn :=  utl_smtp.open_connection(mailhost, 25);
      utl_smtp.helo(mail_conn, mailhost);
      utl_smtp.mail(mail_conn, sender);
      utl_smtp.rcpt(mail_conn, recipient);
      utl_smtp.data(mail_conn, message);
      utl_smtp.quit(mail_conn);
    END;
    /
    -- Create the trigger to email deleted rows
    create or replace trigger email_del_rows
    after delete on <table>
    for each row
    declare
    msg varchar2(2000);
    begin
    msg := 'COL1  COL2  COMPANY NAME  DATE'||chr(10);
    msg := msg||:old.col1||'    '||:old.col2||'    '||:old.company_name||'       '||:old.date_col||chr(10);
    msg := msg||'END OF FILE';
    send_mail('SENDER','[email protected]',msg);
    end;
    /
    You can make it look pretty, but you get the basic idea.

  • Best way to update 8 out of 10 million records

    Hi friends,
    I want to update 8 million records in a table which has 10 million records. What would be the best strategy, given that the table has a BLOB column and holds about 600 GB of data, of which the BLOB data itself is 550 GB? I am not updating the BLOB column.
    Usually, with non-BLOB data, I have used the "CREATE TABLE new_table AS SELECT <do the update 'here'> FROM old_table;" method.
    How should I approach this one?

    @Mark D Powell
    To give you some background: my client faced this problem a week ago, as part of a daily cleanup activity.
    Right now I don't have access to the system due to security restrictions; I could only take a few AWR reports and stats while the access window was open. So the next time I get access, I want to close the issue once and for all.
    Coming to your questions:
    So what is wrong with just issuing an update to update all 8 million rows?
    In a previous run of a single UPDATE (full table scan in the plan, no parallel degree), the session started reading from UNDO (current_obj# = -1 on the "db file sequential read" wait event) and errored out after 24 hours with a tablespace-full error on the tablespace which contains the BLOB data (a separate tablespace).
    To add to the problem, the redo log files were sized too small, only about 50 MB.
    The wait events (from DBA_HIST_ACTIVE_SESS_HISTORY) for the problematic SQL ID show:
    - log file switch (checkpoint incomplete) and log file switch completion comprising 62% of the waits
    - CPU 29%
    - db file sequential read 6%
    - direct path read 2%, and others contributing a little
    30% of the "db file sequential read" samples had current_obj# = -1, with p1 showing an undo file id.
    Is there any concurrent DML against this table? If not, parallel DML would be an option, though it may not really be needed.
    I think there was in the previous run, and I have asked for it to be avoided in the next run.
    How large are the base table rows?
    AVG_ROW_LEN is 227.
    How many indexes are affected by the update, if any?
    The last column of the primary key is the only column to be updated (I mean, used in the "SET" clause of the update).
    Do you expect the update will cause any row migration?
    Yes, I think so, because the only column which is going to be updated is the same column on which the table is partitioned.
    Now if there is a lot of concurrent DML on the table, you probably want to use PL/SQL so you can loop through the data issuing a commit every N rows, so as to not lock other concurrent sessions out of the table for too long a period of time. This may well depend on whether you can write a driving cursor that can be restarted in the event of interruption and would skip over rows that have already been updated. If not, you might want to use a driving table to control the processing.
    Right now, to avoid the UNDO issue, I have suggested using the PL/SQL approach and have asked for the redo logs to be sized at least 10 times larger.
    My big question after seeing the wait event profile for the session is:
    Which was the main issue here, the redo log size or the reading from UNDO that hit the update statement? The buffer gets shot up to 600 million, yet there are only 220k blocks in the table.
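    For reference, a minimal sketch of the restartable, batched PL/SQL approach suggested above. The table name, the flag column and the batch size are placeholders; the driving query must have a predicate that skips rows already updated so the job can simply be re-run after a failure:
    DECLARE
      CURSOR c_todo IS
        SELECT ROWID
        FROM   big_table           -- placeholder table name
        WHERE  flag_col IS NULL;   -- placeholder predicate: rows not yet updated
      TYPE t_rids IS TABLE OF ROWID INDEX BY PLS_INTEGER;
      l_rids  t_rids;
    BEGIN
      OPEN c_todo;
      LOOP
        FETCH c_todo BULK COLLECT INTO l_rids LIMIT 50000;
        EXIT WHEN l_rids.COUNT = 0;
        FORALL i IN 1 .. l_rids.COUNT
          UPDATE big_table
          SET    flag_col = 'N'
          WHERE  ROWID = l_rids(i);
        COMMIT;   -- committing per batch keeps undo and redo per transaction small
      END LOOP;
      CLOSE c_todo;
    END;
    /
    Note that fetching across commits for many hours can still run into ORA-01555, so batches should be large enough that the job finishes in a reasonable number of passes.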

  • How to Load 100 Million Rows in a Partitioned Table

    Dear all,
    I am working on a VLDB application.
    I have a table with 5 columns, for example A, B, C, D, DATE_TIME.
    I created a range (daily) partitioned table on column DATE_TIME, as well as a number of indexes, for example:
    - an index on A
    - a composite index on DATE_TIME, B, C
    Requirement:
    I need to load approximately 100 million records into this table every day (it will be loaded via SQL*Loader or from a temp table: INSERT INTO orig SELECT * FROM temp).
    Question:
    The table is indexed, so I am not able to use the SQL*Loader DIRECT=TRUE feature.
    So please let me know the best available way to load the data into this table.
    Note: please remember I can't drop and recreate the indexes daily due to the huge data volume.

    Actually a simpler issue than what you seem to think it is.
    Q. What is the most expensive and slowest operation on a database server?
    A. I/O. The more I/O, the more latency there is, the longer the wait times are, the bigger the response times are, etc.
    So how do you deal with VLTs? By minimizing I/O. For example, using direct loads/inserts (see the SQL APPEND hint) means less I/O, as we are only using empty data blocks. Doing one pass through the data (e.g. applying transformations as part of the INSERT and not afterwards via UPDATEs) means less I/O. Applying proper filter criteria. Etc.
    Okay, what do you do when you cannot minimize I/O any more? In that case, you need to look at processing that I/O volume in parallel. Instead of serially reading and writing 100 million rows, you (for example) use 10 processes that each read and write 10 million rows. I/O bandwidth is there to be used. It is extremely unlikely that a single process can fully utilise the available I/O bandwidth. So use more processes, each processing a chunk of data, to use more of that available I/O bandwidth.
    Lastly, think DDL before DML when dealing with VLTs. For example, a CTAS to create a new data set and then a partition exchange to make that new data set part of the destination table is a lot faster than deleting that partition's data directly and then running an INSERT to refresh that partition's data.
    That in a nutshell is about it: think I/O and think of ways to use it as effectively as possible. With VLTs and VLDBs one cannot afford to waste I/O.
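    A minimal sketch of the CTAS plus partition-exchange pattern described above, with placeholder names (ORIG, its daily partition P_20240101, staging table STAGE_20240101, source TEMP_LOAD) and assuming the table's indexes are local:
    -- Build the new day's data set in one direct-path pass.
    CREATE TABLE stage_20240101 NOLOGGING AS
    SELECT a, b, c, d, date_time
    FROM   temp_load;
    -- Build the same indexes on the staging table as exist on each partition.
    CREATE INDEX stage_20240101_ix1 ON stage_20240101 (a) NOLOGGING;
    CREATE INDEX stage_20240101_ix2 ON stage_20240101 (date_time, b, c) NOLOGGING;
    -- Swap the staging table in as the day's partition; this is a data dictionary
    -- operation, not a row-by-row copy.
    ALTER TABLE orig
      EXCHANGE PARTITION p_20240101
      WITH TABLE stage_20240101
      INCLUDING INDEXES
      WITHOUT VALIDATION;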

  • Help: Loading of 500 million rows

    Hi all,
    I have to load 500 million rows into 5 tables based on some criteria. In case of failure, I have to restart from the row which caused the error; I should not roll back the rows already inserted. Could you please help me with the best logic?
    thanks
    thiru

    Hi,
    Depending on your DB version (which you did not mention), you could use DML error logging here:
    http://tkyte.blogspot.com/2005/07/how-cool-is-this.html
    http://www.oracle.com/technology/oramag/oracle/06-mar/o26performance.html
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/tables.htm#ADMIN10261
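    A minimal sketch of DML error logging (available from 10g Release 2), with placeholder table names; rows that fail land in the error table instead of aborting the whole insert:
    -- One-off setup: create an error table for the target.
    BEGIN
      DBMS_ERRLOG.CREATE_ERROR_LOG(dml_table_name => 'TARGET_TAB');
    END;
    /
    INSERT INTO target_tab
    SELECT *
    FROM   source_tab
    LOG ERRORS INTO err$_target_tab ('load_run_1')
    REJECT LIMIT UNLIMITED;
    COMMIT;
    The rejected rows, with their Oracle error codes, can then be queried from ERR$_TARGET_TAB and reprocessed from the point of failure.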

  • SQL EXCEPT query to display mismatched records along with table names

    Hi
    I am using the query below to display mismatched records between two tables:
    SELECT * FROM table1
    EXCEPT
    SELECT * FROM table2
    UNION
    SELECT * FROM table2
    EXCEPT
    SELECT * FROM table1
    This displays the mismatched records like below:
    Sunil  1000  india
    Sunil  1500  india
    I would like to also display the table names in the result, for example:
    Sunil  1000  india  Table1
    Sunil  1500  india  Table2
    Can you please help us in this regard.

    cnk_gr's query should work for you.
    One change that I would make is to use UNION ALL, not UNION. UNION eliminates duplicate rows, which means SQL has to do additional work (sort the result and then check for duplicates).
    So if you can have duplicates and don't want them in your result, you would use UNION. If you can have duplicates and want them in the result, you would use UNION ALL. But in cases like this, where you know you cannot have duplicates (because column 1 contains 'TABLE1' for every row in the first half and 'TABLE2' for every row returned from the second half of the query), you should always use UNION ALL. It will be more efficient.
    Tom
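    A minimal sketch of that shape, keeping the EXCEPT comparisons on the original columns and adding the table-name literal outside them (column lists abbreviated with *; expand to your real columns):
    SELECT 'TABLE1' AS source_table, d1.*
    FROM  (SELECT * FROM table1
           EXCEPT
           SELECT * FROM table2) AS d1
    UNION ALL
    SELECT 'TABLE2' AS source_table, d2.*
    FROM  (SELECT * FROM table2
           EXCEPT
           SELECT * FROM table1) AS d2;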

  • Insert Matching Records from Lookup Table to Main Table

    First off, I want to say many thanks for all the help that I've been provided on here with my other posts. I really feel as though my SQL knowledge is much better than it was even a few short weeks ago, largely in part to this forum.
    I ran into a snag, which I'm hoping someone can provide me some guidance on. I have 2 tables: an import table and a lookup table. What I need to happen is that any time there are matches between the "Types" in the 2 tables, a single instance of the "Type" and all corresponding fields from the lookup table is appended to the import table. There will only be a single instance of each Type in the "Lookup" table. Below is an example of how the data might look and the results that I would need appended.
    tblLookup
    Type Name Address City
    A Dummy1 DummyAddress No City
    B Dummy2 DummyAddress No City
    C Dummy3 DummyAddress No City
    tblImport
    Type Name Address City
    A John Maple Miami
    A Mary Main Chicago
    A Ben Pacific Eugene
    B Frank Dove Boston
    Data that would be appended to tblImport
    Type Name Address City
    A Dummy1 DummyAddress No City
    B Dummy2 DummyAddress No City
    As you can see, only a single instance should be inserted even though there may be multiple instances in the import table. This is the part that I'm struggling with. Any assistance would be appreciated.

    I'm not really sure how else to explain it. With my example, the join would be on "Type". As you can see, there are 2 matching records between the tables (A and B). I would need a single instance of A and B to be inserted into the import table.
    Below is a SQL statement, which I guess is what you're asking for, but it will not do what I need it to do. With the example that I have below, it would insert multiple instances of type "A" into the import table.
    INSERT INTO tblImport (Type, Name, Address, City)
    SELECT tblLookup.Type, tblLookup.Name,
           tblLookup.Address, tblLookup.City
    FROM tblLookup
    JOIN tblImport ON tblLookup.Type = tblImport.Type
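    One way to get a single row per matching Type is to drive from the lookup table and only test for existence in the import table; a sketch under the assumption (stated above) that Type is unique in tblLookup:
    INSERT INTO tblImport (Type, Name, Address, City)
    SELECT l.Type, l.Name, l.Address, l.City
    FROM   tblLookup AS l
    WHERE  EXISTS (SELECT 1
                   FROM   tblImport AS i
                   WHERE  i.Type = l.Type);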

  • Performance issue - insert records from db2 tables

    I have a table, say emp, in an Oracle database, and the same emp table exists in a DB2 database. My job is to pull all the records (millions of records) from the DB2 table into the Oracle emp table. My insert statement is like the one below; I am connecting to the DB2 database using a database link.
    insert into emp
    select * from emp_db2, dept_db2
    where emp_db2.dno = dept_db2.dno
    and dept_db2.dno = 10;
    The statement is still running. How can I improve the performance?
    please suggest.
    thanks,
    Vinodh

    Vinodh2 wrote:
    1. How much time is your select query taking? One day is over; it is still running.
    2. What is the row count from the query? 85632978 records.
    3. What is your expected completion time? 30 minutes.
    I am not getting the explain plan because the query is still running.
    Do as below:
    SQL> set autotrace traceonly explain
    SQL> select sysdate from dual;
    Execution Plan
    ----------------------------------------------------------
    Plan hash value: 1388734953
    -----------------------------------------------------------------
    | Id  | Operation        | Name | Rows | Cost (%CPU)| Time     |
    -----------------------------------------------------------------
    |   0 | SELECT STATEMENT |      |    1 |     2   (0)| 00:00:01 |
    |   1 |  FAST DUAL       |      |    1 |     2   (0)| 00:00:01 |
    -----------------------------------------------------------------
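    If the long-running insert itself needs a plan without being re-executed, EXPLAIN PLAN works too (same statement as in the question; database link access is assumed to be via whatever synonyms emp_db2 and dept_db2 already resolve to):
    EXPLAIN PLAN FOR
    INSERT INTO emp
    SELECT * FROM emp_db2, dept_db2
    WHERE emp_db2.dno = dept_db2.dno
    AND dept_db2.dno = 10;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);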
