Populate a large volume of data in the quickest way.

In Oracle 9i, what is the best tool to populate a large amount of data in the minimum amount of time?
I heard that SQLLDR direct-path loading could work. Are there any other tools available? What I need to do is populate a large set of dummy data to stress-test the performance of the database system.
Any suggestion will help!
Thank you very much.

Hi,
There are various options provided by Oracle, such as external tables, SQL*Loader etc.
SQL*Loader with direct-path loading is generally the fastest of the lot. You may refer to the docs for full details on SQL*Loader.
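For example, to generate dummy rows for a stress test without an input file at all, a direct-path INSERT with a row generator works well. This is only a sketch: the table, column sizes and row count below are made up, and the CONNECT BY LEVEL row-generator trick should be verified on your 9i patch level before relying on it.

      -- Hypothetical stress-test table; adjust columns and sizes to your schema.
      CREATE TABLE stress_test (
        id         NUMBER,
        payload    VARCHAR2(50),
        created_at DATE
      ) NOLOGGING;

      -- The APPEND hint requests a direct-path insert (writes above the
      -- high-water mark, bypassing the buffer cache), which is the same
      -- mechanism SQL*Loader uses in direct-path mode.
      INSERT /*+ APPEND */ INTO stress_test
      SELECT ROWNUM,
             DBMS_RANDOM.STRING('A', 50),
             SYSDATE - DBMS_RANDOM.VALUE(0, 365)
      FROM   dual
      CONNECT BY LEVEL <= 1000000;

      COMMIT;  -- a direct-path insert must be committed before the table is queried again

Repeat the INSERT to scale the volume up.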
Regards.

Similar Messages

  • Dealing with large volumes of data

    Background:
    I recently "inherited" support for our company's "data mining" group, which amounts to a number of semi-technical people who have received introductory level training in writing SQL queries and been turned loose with SQL Server Management
    Studio to develop and run queries to "mine" several databases that have been created for their use.  The database design (if you can call it that) is absolutely horrible.  All of the data, which we receive at defined intervals from our
    clients, is typically dumped into a single table consisting of 200+ varchar(x) fields.  There are no indexes or primary keys on the tables in these databases, and the tables in each database contain several hundred million rows (for example one table
    contains 650 million rows of data and takes up a little over 1 TB of disk space, and we receive weekly feeds from our client which add another 300,000 rows of data).
    Needless to say, query performance is terrible, since every query ends up being a table scan of 650 million rows of data.  I have been asked to "fix" the problems.
    My experience is primarily in applications development.  I know enough about SQL Server to perform some basic performance tuning and write reasonably efficient queries; however, I'm not accustomed to having to completely overhaul such a poor design
    with such a large volume of data.  We have already tried to add an identity column and set it up as a primary key, but the server ran out of disk space while trying to implement the change.
    I'm looking for any recommendations on how best to implement changes to the table(s) housing such a large volume of data.  In the short term, I'm going to need to be able to perform a certain amount of data analysis so I can determine the proper data
    types for fields (and whether any existing data would cause a problem when trying to convert the data to the new data type), so I'll need to know what can be done to make it possible to perform such analysis without the process consuming entire days to analyze
    the data in one or two fields.
    I'm looking for reference materials / information on how to deal with the issues, particularly when a large volume of data is involved.  I'm also looking for information on how to load large volumes of data to the database (current processing of a typical
    data file takes 10-12 hours to load 300,000 records).  Any guidance that can be provided is appreciated.  If more specific information is needed, I'll be happy to try to answer any questions you might have about my situation.

    I don't think you will find a single magic bullet to solve all the issues.  The main point is that there will be no shortcut for major schema and index changes.  You will need at least 120% free space to create a clustered index and facilitate
    major schema changes.
    I suggest an incremental approach to address your biggest pain points.  You mention it takes 10-12 hours to load 300,000 rows, which suggests there may be queries involved in the process which require full scans of the 650 million row table.  Perhaps
    some indexes targeted at improving that process would be a good first step.
    What SQL Server version and edition are you using?  You'll have more options with Enterprise (partitioning, row/page compression). 
    Regarding the data types, I would take a best guess at the proper types and run a query with TRY_CONVERT (assuming SQL 2012) to determine counts of rows that conform or not for each column.  Then create a new table (using SELECT INTO) that has strongly
    typed columns for those columns that are not problematic, plus the others that cannot easily be converted, and then drop the old table and rename the new one.  You can follow up later to address column data corrections and/or transformations. 
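    For example, a rough sketch of that approach (the table and column names below are hypothetical stand-ins for your 200+ varchar fields):

       -- Count rows per column that will NOT convert cleanly to the proposed type.
       SELECT COUNT(*) AS total_rows,
              SUM(CASE WHEN TRY_CONVERT(int, Col001) IS NULL
                        AND Col001 IS NOT NULL THEN 1 ELSE 0 END) AS bad_col001_int,
              SUM(CASE WHEN TRY_CONVERT(date, Col002) IS NULL
                        AND Col002 IS NOT NULL THEN 1 ELSE 0 END) AS bad_col002_date
       FROM dbo.RawClientFeed;

       -- Then build the strongly typed copy, leaving problem columns as varchar.
       SELECT TRY_CONVERT(int, Col001)  AS Col001,
              TRY_CONVERT(date, Col002) AS Col002,
              Col003                    -- not cleanly convertible; keep as varchar for now
       INTO dbo.ClientFeedTyped
       FROM dbo.RawClientFeed;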
    Dan Guzman, SQL Server MVP, http://www.dbdelta.com

  • Processing large volumes of data in PL/SQL

    I'm working on a project which requires us to process large volumes of data on a weekly/monthly/quarterly basis, and I'm not sure we are doing it right, so any tips would be greatly appreciated.
    Requirement
    Source data is in a flat file in "short-fat" format i.e. each data record (a "case") has a key and up to 2000 variable values.
    A typical weekly file would have maybe 10,000 such cases i.e. around 20 million variable values.
    But we don't know which variables are used each week until we get the file, or where they are in the file records (this is determined via a set of meta-data definitions that the user selects at runtime). This makes identifying and validating each variable value a little more interesting.
    Target is a "long-thin" table i.e. one record for each variable value (with numeric IDs as FKs to identify the parent variable and case).
    We only want to load variable values for cases which are entirely valid. This may be a merge i.e. variable values may already exist in the target table.
    There are various rules for validating the data against pre-existing data etc. These rules are specific to each variable, and have to be applied before we put the data in the target table. The users want to see the validation results - and may choose to bail out - before the data is written to the target table.
    Restrictions
    We have very limited permission to perform DDL e.g. to create new tables/indexes etc.
    We have no permission to use e.g. Oracle external tables, Oracle directories etc.
    We are working with standard Oracle tools i.e. PL/SQL and no DWH tools.
    DBAs are extremely resistant to giving us more disk space.
    We are on Oracle 9iR2, with no immediate prospect of moving to 10g.
    Current approach
    Source data is uploaded via SQL*Loader into static "short fat" tables.
    Some initial key validation is performed on these records.
    Dynamic SQL (plus BULK COLLECT etc) is used to pivot the short-fat data into an intermediate long-thin table, performing the validation on the fly via a combination of including reference values in the dynamic SQL and calling PL/SQL functions inside the dynamic SQL. This means we can pivot+validate the data in one step, and don't have to update the data with its validation status after we've pivoted it.
    This upload+pivot+validate step takes about 1 hour 15 minutes for around 15 million variable values.
    The subsequent "load to target table" step also has to apply substitution rules for certain "special values" or NULLs.
    We do this by BULK collecting the variable values from the intermediate long-thin table, for each valid case in turn, applying the substitution rules within the SQL, and inserting into/updating the target table as appropriate.
    Initially we did this via a SQL MERGE, but this was actually slower than doing an explicit check for existence and switching between INSERT and UPDATE accordingly (yes, that sounds fishy to me too).
    This "load" process takes around 90 minutes for the same 15 million variable values.
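    Roughly, the original MERGE looked like this (table names simplified; SUBSTITUTE_VALUE is a stand-in for the special-value/NULL substitution rules):

       MERGE INTO target_values t
       USING (SELECT i.case_num_id,
                     i.variable_id,
                     substitute_value(i.variable_id, i.variable_value) AS variable_value
              FROM   intermediate_values i
              WHERE  i.status = 'VALID') s
       ON (t.case_num_id = s.case_num_id AND t.variable_id = s.variable_id)
       WHEN MATCHED THEN UPDATE SET t.variable_value = s.variable_value
       WHEN NOT MATCHED THEN INSERT (case_num_id, variable_id, variable_value)
                             VALUES (s.case_num_id, s.variable_id, s.variable_value);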
    Questions
    Why is it so slow? Our DBAs assure us we have lots of tablespace etc, and that the server is plenty powerful enough.
    Any suggestions as to a better approach, given the restrictions we are working under?
    We've looked at Tom Kyte's stuff about creating temporary tables via CTAS, but we have had serious problems with dynamic SQL on this project, so we are very reluctant to introduce more of it unless it's absolutely necessary. In any case, we have serious problems getting permissions to create DB objects - tables, indexes etc - dynamically.
    So any advice would be gratefully received!
    Thanks,
    Chris

    We have 8 "short-fat" tables to hold the source data uploaded from the source file via SQL*Loader (the SQL*Loader step is fast). The data consists simply of strings of characters, which we treat as VARCHAR2 for the most part.
    These tables consist essentially of a case key (composite key initially) plus up to 250 data columns. 8*250 = 2000, so we can handle up to 2000 of these variable values. The source data may have any number of variable values in each record (up to the 2000 limit), but each record in a given file has the same structure. Each file-load event may have a different set of variables in different locations, so we have to map the short-fat columns COL001 etc to the corresponding variable definition (for validation etc) at runtime.
    CASE_ID VARCHAR2(13)
    COL001 VARCHAR2(10)
    ...
    COL250 VARCHAR2(10)
    We do a bit of initial validation in the short-fat tables, setting a surrogate key for each case etc (this is fast), then we pivot+validate this short-fat data column-by-column into a "long-thin" intermediate table, as this is the target format and we need to store the validation results anyway.
    The intermediate table looks similar to this:
    CASE_NUM_ID NUMBER(10) -- surrogate key to identify the parent case more easily
    VARIABLE_ID NUMBER(10) -- PK of variable definition used for validation and in target table
    VARIABLE_VALUE VARCHAR2(10) -- from COL001 etc
    STATUS VARCHAR2(10) -- set during the pivot+validate process above
    The target table looks very similar, but holds cumulative data for many weeks etc:
    CASE_NUM_ID NUMBER(10) -- surrogate key to identify the parent case more easily
    VARIABLE_ID NUMBER(10) -- PK of variable definition used for validation and in target table
    VARIABLE_VALUE VARCHAR2(10)
    We only ever load valid data into the target table.
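    For one column, the generated pivot+validate statement looks roughly like this (VALIDATE_VALUE stands in for the per-variable validation functions; in the real dynamic SQL the column name and variable ID are substituted at runtime):

       INSERT INTO intermediate_values
             (case_num_id, variable_id, variable_value, status)
       SELECT sf.case_num_id,
              :variable_id,
              sf.col001,
              validate_value(:variable_id, sf.col001)
       FROM   short_fat_1 sf
       WHERE  sf.col001 IS NOT NULL;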
    Chris

  • Retrive SQL from Webi report and process it for large volume of data

    We have a scenario where we need to extract large volumes of data into flat files and distribute them from the 'Teradata' warehouse; we usually call these 'Extracts'. But the requirement is such that business users want to build their own 'Adhoc Extracts'. The only way I can think of to achieve this is to build a universe, create the query, save the report and not run it, then write a RAS SDK program to retrieve the SQL code from the report, save it into a .txt file, and process it directly in Teradata.
    Is there any predefined solution available with SAP BO, or any other tool, for this kind of scenario?

    Hi Shawn,
    Do we have a VB macro to retrieve the SQL queries of the data providers of all the WebI reports in the CMS?
    Any information or even direction where I can get information will be helpful.
    Thanks in advance.
    Ashesh

  • What is the 'quickest' way to read char data from a txt file

    Hello,
    What is the 'quickest' way to read character data from a txt file stored on the phone to be displayed into the screen?
    Regards

    To be even a bit more constructive...
    Since J2ME does not have a BufferedInputStream, it helps to implement one yourself. It's much faster, since you read large blocks at once instead of separate chars.
    Something like this lets you read lines very fast:
      // Assumes configfile is an open InputStream and filesize is the total
      // number of bytes to read.
      byte[] buff = new byte[1024];               // block-read buffer
      StringBuffer tmp = new StringBuffer();
      String line;
      int bytesread = 0;
      int length;
      while ( bytesread < filesize ) {
         length = configfile.read( buff, 0, buff.length );
         if ( length < 0 ) break;                 // end of stream
         // append the bytes actually read to the temp buffer
         if ( length < buff.length ) {
            byte[]  buf  = new byte[length];
            System.arraycopy( buff, 0, buf, 0, length );
            tmp.append( new String( buf ) );
         } else {
            tmp.append( new String( buff ) );
         }
         // look in tmp for complete lines ending in \r\n
         int idx1 = tmp.toString().indexOf( "\r\n" );
         while ( idx1 >= 0 ) {
            // if found, split into line and rest of tmp
            line = tmp.toString().substring( 0, idx1 );
            // ... do with the line whatever you want ...
            tmp = new StringBuffer( tmp.toString().substring( idx1 + 2 ) );
            idx1 = tmp.toString().indexOf( "\r\n" );
         }
         bytesread += length;
      }

  • I have a 4 month long text message thread conversation. What's the quickest way to get to the very beginning of the conversation?

    I have a 4 month long text message thread conversation. What's the quickest way to get to the very beginning of the conversation?

    The startup.exe file is located in the \NI-RT\STARTUP directory. 'cd NI-RT' then 'cd STARTUP' should get you to the right place after making the connection with ftp. I don't recognize PH_EXEC.EXE. Logging in as anonymous is fine. I don't recall seeing documentation on the file structure and locations on the module, but I'm sure there is some. I just wandered around w/ftp.
    The program to update the node should be below. Enter the IP address, the green light will confirm the node is found.
    The version function scans the existing startup.exe for a string beginning and ending with " RU Version " and prints everything in between as the version.
    The autostart function will copy the existing ni-rt.ini file to your computer, modify the autostart as requested, then put the file back. Leaves the copy on the PC, which can be handy.
    Send file will request a file of the format startup*.dat which will be renamed startup.exe and ftp'd to the module. Lets you keep version numbers in the filename on the PC side. Your directory structure probably won't match mine so you'll have to browse for your file.
    Reboot module reboots the module.
    Hope this is useful, let me know if anything doesn't make sense. Thanks.
    Matt
    Attachments:
    RNconfig.exe 855 KB

  • HT1417 What's the quickest way to delete duplicates without going song by song?

    What is the quickest way to delete duplicates from my iTunes library without going song by song?  I just transferred my music from my iPod classic to my computer and now I have a lot of duplicate music.  Most is from CDs I reburned or music I was able to get back on my new computer through iTunes.  It is from my personal iPod (one of 4).  I finally figured out how to get it off the iPod and back onto my newer computer. I made sure to go to advanced settings and check "keep iTunes folder organized" but it still saved all duplicates.

    Apple's official advice is here... HT2905 - How to find and remove duplicate items in your iTunes library. It is a manual process and the article fails to explain some of the potential pitfalls.
    Use Shift > View > Show Exact Duplicate Items to display duplicates as this is normally a more useful selection. You need to manually select all but one of each group to remove. Sorting the list by Date Added may make it easier to select the appropriate tracks, however this works best when performed immediately after the dupes have been created.  If you have multiple entries in iTunes connected to the same file on the hard drive then don't send to the recycle bin.
    Use my DeDuper script if you're not sure, don't want to do it by hand, or want to preserve ratings, play counts and playlist membership. See this thread for background and please take note of the warning to backup your library before deduping.
    (If you don't see the menu bar press ALT to show it temporarily or CTRL+B to keep it displayed)
    tt2

  • What is the quickest way to access the Settings on an iPad?

    What is the quickest way to access the Settings on an iPad? (I mean, the usual cogwheel "app" called Settings)
    In particular, I'm looking for a quicker way (or a more convenient one) to access it from an open application, than hitting the "home" button, possibly swiping the home pages, and tapping on it.
    (I know, I know, I already placed it in the bottom bar, so I have it available from all pages...)
    It would be great if I could cut another step or two. I mean, there are situations where I need to use the Settings app repeatedly. I wonder why it was not included in the iOS 7 swipe-from-bottom menu, while the "Camera" or "Countdown Timer / Clock" made the cut.
    I know everybody will have a different opinion on what should be included in that menu, and choices have to be made, but still...
    for example, on my Android smartphone the newer OS updates placed a shortcut to the settings app in the upper-right corner of the swipe-down menu, which is like the most prominent place you could place it in.
    Anyway, do you know of one or more alternative ways to access the Settings app?
    (Oh, and yeah, it just occurred to me that I can four-finger swipe it, if I have accessed it recently, but again if I access another app or two in between it will become at least as cumbersome as going through the home page).

    Anyway, do you know of one or more alternative ways to access the Settings app?
    Short answer, no. If you have a suggestion for future iOS functionality, use the feedback functionality found elsewhere on this site.
    Barry

  • HT1349 what is the quickest way to move all my iTunes music, video, etc. to a new computer? any help would be appreciated.  thanks.  p.s. i purchased a belkin transfer cable - will that work?

    hi - i'm a newbie - what is the quickest way to move all my iTunes music, video, etc. to a new computer? any help would be appreciated.  thanks.
    p.s. i purchased a belkin transfer cable - will that work?  i have an iPod Touch.

    You copy it from one computer to the other.
    Type "move itunes library" into the google search bar.
    You have posted in the iphone forum.

  • What is the quickest way of moving an iTunes music library from a Windows 7 machine to an Apple MacBook Pro?

    What is the quickest way of moving an itunes music library from a windows 7 machine to an apple Macbook pro?

    If you want everything from your PC iTunes on your MBP... Then... See here...
    Move iTunes Library from PC to MAC
    http://www.macworld.com/article/146958/2010/03/move_itunes_windows_mac.html

  • What's the quickest way to export packages, tables etc. from one environment

    Hi
    what's the quickest way of moving loads of packages, tables, indexes etc. from one environment to another?
    I did some things in an apex.oracle.com workspace to test APEX; now I want to move it across to my XE installation.

    Hello,
    2 'fast' options really -
    1) Export of application + Export/DataPump of schema
    This works if you want a complete 'mirror' from one environment to another of the schema objects
    2) Supporting Objects
    Bundle all your requirements together with the application export.
    The Supporting Objects feature absolutely rocks and yet very very (very!) few people seem to use it.
    John.
    http://jes.blogs.shellprompt.net
    http://apex-evangelists.com

  • What's the quickest way to reset my security questions because I don't remember my answers

    What's the quickest way to reset my security questions because I don't remember my answers

    http://lmgtfy.com/?q=How+to+reset+apple+security+questions

  • What's the quickest way to open Access Connections?

    What's the quickest way to open Access Connections, apart from its Taskbar level meter?
    I prefer to use the Power Manager battery meter instead, as it looks pretty ugly when both are used, and they take up too much space.
    Fn+F5 no longer gives any access to Access Connections either.  And it is not integrated into the standard Windows network system tray icon's right-click menu, like Power Manager is with the standard power system tray icon.
    You cannot even type Access Connections in the Start Menu search box, as the shortcut is called something else, which I can never remember.
    Ideally, I would at least like to be able to access it through the Fn+F5 box again (I'm pretty sure it was possible in previous versions).
    jason404 - X220 (my sixth ThinkPad)

    If you are still interested in using a translator, I recommend the gTranslator add-on (https://addons.mozilla.org/en-US/firefox/addon/gtranslator/?src=userprofile).
    This add-on is similar to the Google Chrome translation bar.

  • What is the quickest way to incorporate a new template's styles into an existing template?

    Recently my company has developed a new template. I am creating a document using much content which already exists in an older template with different paragraph styles, table formats and page formats. What is the quickest way to incorporate the new styles so I don't have to individually update the table contents? The paragraph catalog and tables have different titles in the two documents.
    Thanks,
    Niall.

    Do you know where I can find a copy of Template Mapper? I went to http://ig5authoringtools.com/plugin-directory/single-sourcing/templatemapper/ and it is not there.
    Thanks in advance

  • What's the quickest way to slice up this image...

    I found this image and would like to slice it up into separate images. What's the quickest way to do this without having to manually select each one or manually draw slice lines?
    My idea was to get them all surrounded by a marquee or path, and then somehow export that marquee or path area into a file.
    The first part of creating the marquee and path was easy...
    I first surround the entire image with a black border so that it intersects the horizontal black divider lines
    I then used the magic wand tool to select the black border, which also selected the black divider lines
    I next inverted the selection... so now all the sections are selected perfectly
    Finally I right-clicked on the selection and picked "Make Work Path"
    But now how do I export all the work path sections into individual files?
    I would appreciate any help or advice... thanks.
    -Pete

    I was able to do it with a saved selection for a single frame. I recorded an action that moved the selection, then replayed it repeatedly until all the frames were created. I then deleted the background layer, aligned all layers to the top, and created a frame animation.
    The Export Layers to Files script will create a file for each frame.
