Fastest way to copy/duplicate a table?

I want to copy an Oracle 8.0.5 DB table to another table, but with a different name. What is the fastest way to do this? I currently use INSERT INTO table_name AS (SELECT xxxxxxxxx).

A CTAS (create table as select) with no logging and in parallel (if you have the additional processors) would typically be the fastest.
Dom
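A minimal sketch of that CTAS (old_table, new_table and the degree of parallelism are placeholders, not names from this thread):

    -- NOLOGGING skips most redo generation for the bulk load;
    -- PARALLEL spreads the scan and the insert across query slaves
    CREATE TABLE new_table
      NOLOGGING
      PARALLEL (DEGREE 4)
    AS
    SELECT *
      FROM old_table;

Because a NOLOGGING load cannot be re-created from the redo log, take a backup of the new table once it is built.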

Similar Messages

  • Is this the fastest way to copy files?

    I'm looking for a way to take 1 file and make many copies of it. Is this the fastest way you know of?
    import java.io.*;
    import java.nio.channels.*;
    public static void applyFiles( File origFile, File[] files ) throws IOException {
        FileInputStream f1 = new FileInputStream( origFile );
        FileChannel source = f1.getChannel();
        FileOutputStream f2[] = new FileOutputStream[ files.length ];
        FileChannel target;
        for ( int x = 0; x < files.length; x++ ) {
            // compare paths with equals(), not ==
            if ( !origFile.getAbsolutePath().equals( files[x].getAbsolutePath() ) ) {
                f2[x] = new FileOutputStream( files[x] );
                target = f2[x].getChannel();
                // transferTo does not move the channel position, so the same
                // source channel can feed every target
                source.transferTo( 0, source.size(), target );
            }
        }
    }

    2 questions from your code...
    1) I assume the last line should read
    out.write(buffer,0,numRead);
    2) Doesn't this just read in a piece at a time and write that piece to each file? Isn't that degrading performance, to have so many files open and waiting at once? Would it make more sense to read into a StringBuffer, then write all the data to each file at a time?
    Thanks
    I'd have to say that your question is loaded. :)
    Without knowing anything about your target system, I'd have to say no, this is not the fastest way to copy a file to many files. This will end up reading the file once for every time you want to copy it. Which may work fine if the file is small, but not when the file that is being copied is larger than free RAM. Or if the file channel implementation sucks.
    For the general case, where you don't know how big the file will be beforehand, I'd say that this is a better algorithm.
    public static void oneToManyCopy( File source, File[] dest ) throws IOException {
        FileInputStream in = new FileInputStream( source );
        FileOutputStream out[] = new FileOutputStream[ dest.length ];
        for ( int i = 0; i < dest.length; ++i )
            out[i] = new FileOutputStream( dest[i] );
        byte buffer[] = new byte[1024]; // or whatever size you like
        int numRead;
        // read the source once, fanning each chunk out to every target
        while ( ( numRead = in.read( buffer, 0, buffer.length ) ) > -1 ) {
            for ( int i = 0; i < out.length; ++i )
                out[i].write( buffer, 0, numRead );
        }
        in.close();
        for ( int i = 0; i < out.length; ++i )
            out[i].close();
    }

  • Fastest way to write out an internal table to a database table?

    Hi friends,
    my question is: what is the fastest way to write about 1.5 million rows from an internal table to a database table?
    points will be awarded immediately,
    thanks for your help,
    clemens

    Hi Clemens,
    If you just want to write (INSERT) 1.5 million rows of an internal table into a database table, use:
    INSERT <table name> FROM TABLE <itab>.
    Transaction log size could be a problem, therefore writing in packages could help, but this depends on your row size, your database configuration and the current changes to your database. Maybe it runs in one package; if the rows are small (a few bytes), one package will be the fastest, but you will not be much faster than with reasonable packages (3-20 MB). On Oracle with rollback segments you will probably have no problems with 1.5 million rows.
    Best regards
    Ralph

  • What is the fastest way to delete duplicate songs?

    I will spend hours deleting songs from the duplicates view unless someone has a faster way to delete duplicates.

    Import the photos with your computer as with any other digital camera. Most computer photo-importing apps include an option to delete the photos from the camera after the import process is complete.
    Or select Edit followed by selecting multiple photos that you want to delete followed by selecting Delete.

  • Fastest way to find duplicates

    10.2.0.5
    I would be interested if anything was added in 11g.
    I have a 148 GB table that is not partitioned and does not have a unique index. I am not allowed to add an index to see if there are duplicates (I know how to do this to get the row ids).
    GROUP BY generates too much temp even if I increase the hash and sort area sizes.
    I can try parallel, but this does not seem to help much if the table is not partitioned.
    Is there anything better than SELECT COUNT(DISTINCT fields)?
    Is an analytic better?
    Anything new in 11g that is better?

    What I used to do when I had more data than Oracle could handle was something like this in Korn shell:
    echo '
    username/password
    set linesize 200
    set pages 0
    set heading off
    set termout off
    set verify off
    set echo off
    set feedback off
    -- any other sets needed
    select * from bigtable
    /'|sqlplus -s|grep -v ^$|sort > j$$1
    echo '
    username/password
    set linesize 200
    set pages 0
    set heading off
    set termout off
    set verify off
    set echo off
    set feedback off
    -- any other sets needed
    select * from bigtable
    /'|sqlplus -s|grep -v ^$|sort -u > j$$2
    diff j$$1 j$$2
    rm j$$*
    I think I may have fed these into pipes and diff'd the pipes to avoid temp files, but I don't remember; it was 10-20 years ago. There might be some way to tee the sqlplus output to two pipes which then do the differential sorts and feed pipes to diff, but I never tried that.
    Edit: I meant select unique identifier, not select *, of course.
    Edited by: jgarry on Jun 30, 2011 8:47 AM
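    For reference, the analytic approach the poster asks about might look like this (a sketch only; col1 and col2 are hypothetical stand-ins for whatever columns define a duplicate):
        -- rows with rn > 1 are duplicates of an earlier row
        SELECT rid
          FROM (SELECT rowid AS rid,
                       ROW_NUMBER() OVER (PARTITION BY col1, col2
                                          ORDER BY rowid) AS rn
                  FROM bigtable)
         WHERE rn > 1;
    Whether this beats a GROUP BY is not a given; the window sort can spill to temp on a 148 GB table just as badly.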

  • Fastest way to copy a DVD burned with eMac????

    I need to make multiple copies (more than 30) of a DVD I made for a family reunion. The DVD is about 2 hours long. I have made a disc image on my hard drive, but the burning is still taking 30 mins per disc. Is there a faster way? Are there external DVD burners or copiers that I could purchase that are Mac compatible? Thanks

    There is a workaround which allows you to "burn" a DVD as a disc image and burn the disc afterwards. It is known as an Easter egg, or "Hurz and Pfurz", on
    http://homepage.mac.com/geerlingguy/macsupport/mac_help/pages/15-burn_idvdother.html
    Once you get this installed you can make DVD images which you can play using the DVD Player in the Applications folder and burn using Disk Utility; full instructions are at
    http://docs.info.apple.com/article.html?artnum=42724
    but you just use the second half to burn the DVD. This means you can readily produce your multiple copies over a period of time without re-running iDVD. iDVD 5 has this built in.
    As a simpler alternative, once you have your first copy you can produce the others using Disk Copy. You can leave the image on your hard disc between burn sessions.

  • Fastest way to fill an InDesign table with data

    Hello,
    I have to fill several InDesign tables with the content of my database.
    I have the database in memory and fill the cells in two loops (For Each row..., For Each col...).
    But it is so slow! Is there a faster way?
    Here a code snippet of the solution today:
                For Each row In tableRecord
                    Dim inDRow = table.Rows.AsEnumerable().ElementAt(intRow)
                    For Each content In row
                        Dim cell = inDRow.Cells.AsEnumerable().ElementAt(content.Index)
                        cell.Contents = content.Value
                    Next
                    intRow+=1
                Next
    Thank you for help!
    Best regards
    Harald

    Hi, Harald!
    "This should be faster: table.Contents=Array. Or not?"
    Surprisingly, it was not. It was slower. A lot slower.
    The array was gathered like this (ExtendScript (JavaScript) dummy code):
    myArray = myTable.contents;
    Then I did operate on the array. Not on the table object or its cell objects. No direct access to InDesign's DOM objects. Just the built array.
    My text file was written by populating it with a string of the array:
    myString = myArray.join("separatorString");
    separatorString was something that was never used as contents in the table.
    Something like "§§§"…
    After importing the text file I used the convertToTable() method, providing the separatorString as the separator for the first and second arguments, with the number of columns as the third argument. The number of columns was known from my original table.
    var myNewTable = myText.convertToTable("separatorString", "separatorString", myNumberOfColumns);
    Alternatively, you could also remove the table after building the array and assign myString as the contents of the insertion point of the removed table in the story. I think I tested that as well, but do not know if there is a difference in speed compared to placing a text file with the same contents (I think there was, but I'm not sure anymore). So I ended up with:
    1. Contents of table to Array
    2. Array manipulation
    3. Array to String
    4. Write String as file
    5. Remove table
    6. Place file at InsertionPoint of (now removed) table
    Also to note: This was in InDesign CS5 with a very large table.
    Things could have changed in InDesign versions with 64-Bit support.
    But I did not test that yet. The customer I wrote this script for is still on CS5.
    Uwe

  • Fastest way to copy/import photos from cd

    Hi, I have about 300 CDs with an average of 200 MB of photos on them. I would like to get these photos onto my hard drive. Is there software available to help speed up the process? Are there CD drives which are significantly faster than the MacBook Pro's standard CD/DVD drive (MATSHITA DVD-R UJ-868)?
    Thanks for any suggestions -
    MacBook Pro
    OS X 10.6.8

    There are many out there; get one with the fastest connection and transfer rate (not USB 2).
    http://www.google.com/search?q=fasest+external+hard+drive+for+mac+photo+storage&hl=en&gbv=2&gs_l=heirloom-hp.1.1.0i13l10.2755.14253.0.20935.16.16.0.0.0.0.163.2259.0j16.16.0...0.0...1c.1.SOlQNbEHFDA&oq=fasest+external+hard+drive+for+mac+photo+storage

  • What's the fastest way to copy 151GB, 375000 files from win 2003 server to win 2008 server

    Non techie here.
    I have a project where I need to get 151 GB of data spread over 375,000+ files moved from a Win 2003 FAP to a 2008 server. Copy, XCopy, Robocopy all take in excess of 50 hours to move it to an external HDD. It has to be an external move for security reasons.
    I have 40 hours max to get it off and onto the new server.
    Cheers 
    Ian

    I copied over 12 TB in 24 hours using the method below. A lot of this depends on your infrastructure. The scripts I used are unmodified for your case; I suggest you give them a look, understand the process, and change it to fit your needs.
    There are 2 parts. The first is a main script that schedules PowerShell jobs that actually do the work. The main script reads a file called jobcount on every loop to see how many jobs it can run at one time; the reason I did this was to change the number of jobs between day (production) times and night times. The main loop also reads a nomig file that tells the script not to move certain folders, because those were our test cases; you can even do test cases during the migration, since you can modify the file while the script is running. The example was used to move thousands of home folders. With Robocopy, if you tell a single command to do everything, it will take hours to start, just looking around; if you do one root folder at a time, it runs much faster, which is the reason I created this. If you have a small number of root folders, you may want to point it at folders where you do have a lot of subfolders. Remember you can have more than one main process running in different runspaces.
    Main Script 
    VVVVVVVVVV
    $homeOld = (Get-ChildItem -Path \\server\share | Select-Object Name)
    $JobRun = 10
    $i = 0
    $Count = $homeOld.Count
    foreach ($homeDir in $homeOld) {
        $i = $i + 1
        $Sdir = $homeDir.Name
        Write-Progress -Activity "Migrating Homes" -Status "Processing $i of $Count" -PercentComplete ($i/$homeOld.Count*100) -CurrentOperation "Next to Process $Sdir"
        # re-read the exclusion list and the job limit on every pass, so both
        # can be changed while the migration is running
        $not = Get-Content \\server\share\script\nomig.txt -ErrorAction "Stop"
        $JobRun = Get-Content \\server\share\script\jobcount.txt -ErrorAction "Stop"
        if ($not -notcontains ($homeDir.Name).ToLower()) {
            # wait for a free job slot
            while ((Get-Job -State "Running").Count -gt ($JobRun - 1)) {
                Start-Sleep -Seconds 3
            }
            # harvest the output of finished jobs before starting the next one
            if ((Get-Job -State "Completed").Count -gt 0) {
                $Comp = Get-Job -State "Completed"
                foreach ($Job in $Comp) {
                    $outfile = $Job.Name + ".txt"
                    Receive-Job -Job $Job | Out-File -FilePath "\\server\share\verify\$outfile"
                    Remove-Job -Job $Job
                }
            }
            Start-Job -Name $Sdir -ArgumentList "\\server\share\$Sdir", "\\newserver\share\$Sdir", "/COPYALL", "/MIR", "/W:1", "/R:1", "/MT:5" -FilePath \\server\share\script\robothread.ps1 > $null
        }
        else {
            Write-Host $homeDir.Name " Excluded" -ForegroundColor Green
        }
    }
    =====
    Thread Script - where Robocopy does the work.
    VVVVVVVVV
    & robocopy $args[0] $args[1] $args[2] $args[3] $args[4] $args[5] $args[6]
    ============
    This comes with no warranty; it is just an idea I used to do a very fast copy with permissions and all attributes, where no other method was usable.
    Thanks,
    Allan

  • Fastest way to move from one itab to another?

    What is the fastest way to move one internal table to another internal table (assuming two tables of similar structure)?
    a) append lines of table1 to table2.
    b) loop at table1.
    Move: table1-field1 to table2-field1,
    table1-field2 to table2-field2.
    Append table2.
    Endloop.
    c) table2[] = table1[].
    d) loop at table1.
    Move-corresponding table1 to table2.
    Endloop.
    e) move table1 to table2.
    I think it is option a). Is it correct?

    Hi,
    Yes, option a) is the fastest: APPEND LINES OF table1 TO table2.
    In particular, the quickest way to fill a table line by line is to append lines to a standard table, since a standard table cannot have a unique key and therefore appends the lines without having to check the existing lines in the table.
    APPEND LINES OF itab1 TO itab2.
    This statement appends the whole of ITAB1 to ITAB2. ITAB1 can be any type of table, but its line type must be convertible into the line type of ITAB2.
    This method of appending the lines of one table to another is about 3 to 4 times faster than appending them line by line in a loop. After the APPEND statement, the system field SY-TABIX contains the index of the last line appended. When you append several lines to a sorted table, you must respect the unique key (if defined) and not violate the sort order; otherwise a runtime error will occur.
    Reference: SDN ABAP Book.
    thanx.

  • Copy / Duplicate a CD works for the MacOS but not for Windows

    Hi all,
    I burnt some JPGs on a CD with my Mac (drag & drop within the Finder). This went fine: I can read the CD on my Mac and on my Windows PC.
    As I need a couple of copies of this CD, I thought that the drag & drop thing within the Finder is too much effort, and I created an image of the CD with Disk Utility. After that I burnt the image to new, empty CDs. All of them worked fine on my Mac, but none of them were readable on any Windows PC.
    So I started a workaround with the burn folder. That works, but it is a little bit uncomfortable compared to working with images.
    Sadly, I found no way to create and burn an image which works on Windows PCs.
    Is there no way to copy/duplicate CDs that are readable on Windows PCs?
    Thanks a lot, Stefan

    Hm, I tested something and I do not know why two different things come out:
    1) I began to create an image from the CD which I created at first, the one which works on my Mac and on my Windows system. There I could choose the following formats: read-only, compressed, read/write, DVD/CD master.
    2) If I create an image from an existing folder on my hard disk, I have the same options PLUS "hybrid image". If I choose this format and burn it, the CD works on my Windows system, too.
    So the question is: why is the hybrid image only available for images created from hard-disk folders and not from a CD?!

  • How to make a table copy - the fastest way????

    Hi,
    Well, I have the following task to do in the minimal possible time:
    Task: duplicate a table in my instance database. The source table name is SPC.
    The destination table name is SPC_REP. Both are partitioned.
    The trouble is my source table has 40 million rows, but I need to copy only 27 million of them, selected by a condition on a column in the WHERE clause.
    Ok, so I have some questions:
    Question 1)
    What's the fastest way to perform this copy?
    Export/import, INSERT INTO, SQL*Loader (this one I've never used)?
    Question 2)
    I am planning to do this with the export/import utility. But I don't want to waste time doing the export process first and only then the import process. I would like to do export and import simultaneously to save time, as the table has so many rows.
    Question 3)
    If I use export/import, I will not be able to create the copy table in the same schema as my source table, correct? Correct me if I'm wrong...
    Thank you all,
    And any hint will be appreciated.

    Export & import probably isn't the best solution here. It's not particularly fast, reuires that you create an additional copy of the data in your dump file, involves the cost of pulling all the data out of Oracle and puts it back in, etc.
    Transportable tablespaces would be an option, assuming your partitioned table isn't in the same tablespace as a bunch of other opjects.
    Personally, I would probably do a
    CREATE TABLE <<copy name>>
    AS
    SELECT *
      FROM <<original name>>
    WHERE 1=2
    INSERT /*+ APPEND */
      INTO <<copy name>>
      NOLOGGING
    SELECT *
      FROM <<original name>>
    WHERE <<something>>Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC
    Message was edited by:
    Justin Cave
    APPEND & NOLOGGING may be faster so I added that.
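    In the spirit of the CTAS answer at the top of this page, the copy could also be done in a single statement (a sketch; the parallel degree is a placeholder, and if SPC_REP must keep the partitioning, the partition definition has to be spelled out in the CTAS as well):
        CREATE TABLE SPC_REP
          NOLOGGING
          PARALLEL (DEGREE 4)
        AS
        SELECT *
          FROM SPC
         WHERE <<something>>;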

  • How to print table values in the fastest way?

    Dear Friends,
    I'm having a table in my application and I need to print the table values. For that I used the print() method:
    boolean complete = tableObj.print(mode, header, footer, showPrintDialog, null, interactive, null);
    But the print dialog box takes a long time to display, and the printing operation is done very slowly.
    Could anyone please tell me if there is a better and faster way to print the table values?
    Thanks in advance

    Hi,
    In the module pool you will have fields, and for those fields you have created names as well. Assign those names to a work area and from there to an internal table.
    And for your requirement, you need to do it the other way around.
    With Regards,
    Sumodh.P

  • There are over 4000 duplicates in my iTunes. What is the fastest way to delete them?

    There are over 4000 duplicates in my iTunes. What is the fastest way to delete them?

    Hello there, yandere69keita.
    The following Knowledge Base article clarifies your concern about My Photo Stream counting towards your iCloud storage:
    iCloud: My Photo Stream FAQ
    http://support.apple.com/kb/ht4486
    Does My Photo Stream use my iCloud storage?
    No. Photos uploaded to My Photo Stream do not count against your iCloud storage.
    Thanks for reaching out to Apple Support Communities.
    Cheers,
    Pedro.

  • Best way to copy table from one database to any other database

    I need to write an application to copy a database table from one kind of database to another kind of database.
    I just wrote a simple JDBC program which essentially does something like this:
    PreparedStatement statement = null;
    ResultSet rs = null;
    try {
        System.out.println("insertSQL:" + insertSQL.toString());
        statement = target.prepareStatement(insertSQL.toString());
        System.out.println("selectSQL:" + selectSQL.toString());
        rs = source.executeQuery(selectSQL.toString());
        int rows = 0;
        while (rs.next()) {
            rows++;
            for (int i = 1; i <= columns.size(); i++) {
                statement.setString(i, rs.getString(i));
            }
            // one network round trip per row; addBatch()/executeBatch()
            // every few thousand rows cuts this down dramatically
            statement.execute();
        }
        System.out.println("Copied " + rows + " rows.");
    } // catch/finally omitted in the original snippet
    But the problem with this one is that it takes a lot of time (more than 60 minutes) to transfer 100k records. Would there be any faster way to do this?
    TIA...

    Thanks...
    I am now using the batch update mechanism and have set the fetch size of the ResultSet cursor.
    Now I need to copy a table with almost 10 million records to the target database. My program works fine, but it takes more than 3 hours.
    I am copying from a Postgres table to an MS SQL Server table.
    Is there any other way or a better approach to make this faster?
    TIA..
