Loading large text files in JTextArea

JTextArea can't seem to load an 8MB text file; doing so throws an OutOfMemoryError. The problem is, 8MB really isn't that much data, and there are files in our system which are much bigger than that. I'm at a loss as to why a mere 8MB (around 16MB once loaded as two-byte Java chars) is causing Java to run out of its 64MB of memory.
So I'm starting to try another approach.
Here's the idea:
    public void doStuff() {
        Reader reader = dataStore.retrieve(...);
        RandomAccessFile tempFile = new RandomAccessFile(...);
        FileChannel tempChannel = tempFile.getChannel();
        // CODE OMITTED: copying reader to tempChannel as Java chars.
        // Now, map the file into memory.
        CharBuffer charBuffer = tempChannel.map(FileChannel.MapMode.READ_WRITE, 0, fileLength).asCharBuffer();
        Document doc = new CharBufferDocument(charBuffer);
        textArea.setDocument(doc);
    }

This should work in theory. The problem I'm having is figuring out how to make this custom document.
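For the omitted copy step, something along these lines could work (a sketch, not the original code; it assumes the chars are written as UTF-16BE, which matches the big-endian char view that map(...).asCharBuffer() produces by default):

    // Sketch: copy the Reader into the temp file as raw big-endian Java chars.
    Writer writer = Channels.newWriter(tempChannel, "UTF-16BE");
    char[] buf = new char[8192];
    int n;
    while ((n = reader.read(buf)) != -1) {
        writer.write(buf, 0, n);
    }
    writer.flush(); // flush, but keep the channel open for the mapping
    long fileLength = tempChannel.size(); // note: a length in BYTES, i.e. 2 bytes per char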
Here's what I have so far:
    public class CharBufferDocument extends PlainDocument {
        private CharBufferDocument(CharBuffer charBuffer) {
            super(new CharBufferContent(charBuffer));
        }
    }

    class CharBufferContent implements AbstractDocument.Content {
        private CharBuffer charBuffer;

        private CharBufferContent(CharBuffer charBuffer) {
            this.charBuffer = charBuffer;
        }

        public Position createPosition(int offset) throws BadLocationException {
            return new FixedPosition(offset);
        }

        public int length() {
            return charBuffer.length();
        }

        public UndoableEdit insertString(int where, String str) throws BadLocationException {
            throw new UnsupportedOperationException("Editing not supported");
        }

        public UndoableEdit remove(int where, int nitems) throws BadLocationException {
            throw new UnsupportedOperationException("Editing not supported");
        }

        public String getString(int where, int len) throws BadLocationException {
            Segment segment = new Segment();
            getChars(where, len, segment);
            return segment.toString();
        }

        public void getChars(int where, int len, Segment txt) throws BadLocationException {
            char[] buf = new char[len];
            // Sync this, as the get method moves the cursor.
            synchronized (this) {
                charBuffer.get(buf, where, len);
                charBuffer.rewind();
            }
            txt.array = buf;
            txt.offset = where;
            txt.count = len;
        }
    }

    class FixedPosition implements Position {
        private int offset;

        private FixedPosition(int offset) {
            this.offset = offset;
        }

        public int getOffset() {
            return offset;
        }
    }

When I run this, I get a text area which only shows one character. What's happening is that my getChars(int,int,Segment) method is being called from Swing's classes, and only being asked for one character!
Does anyone have any idea how this is supposed to work? It seems that if Swing only ever asks for the first character, I'm never going to be able to display 8,000,000 characters. :-)
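For what it's worth, here is how getChars might look if the Segment contract is honored (my reading of the API docs, not something from this thread): CharBuffer.get(char[], int, int) treats its int arguments as an offset and length into the destination array, and Segment.offset is likewise an index into Segment.array, not into the document. A hedged sketch:

    public void getChars(int where, int len, Segment txt) throws BadLocationException {
        char[] buf = new char[len];
        synchronized (this) {
            charBuffer.position(where);   // move the buffer's cursor to the requested document offset
            charBuffer.get(buf, 0, len);  // fill buf from index 0; these args index into buf
            charBuffer.rewind();
        }
        txt.array = buf;
        txt.offset = 0;                   // the segment starts at index 0 of buf
        txt.count = len;
    }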

Not too sure, though, how to go about reading, say, the last 5 lines. One solution would be to read in an increasingly large block of the file (estimate the typical line size * 5 plus some bonus), starting at position (file size - block size). As long as the block doesn't contain 5 complete lines (count newline chars), increase it by a given amount and try again. This should still be faster than scanning the whole file from start to end.
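A rough sketch of that idea (illustrative only; it assumes a single-byte encoding, so byte offsets line up with characters):

    // Sketch: grow a tail block until it contains at least `wanted` newlines.
    static String readTail(File file, int wanted) throws IOException {
        RandomAccessFile raf = new RandomAccessFile(file, "r");
        try {
            long size = raf.length();
            int block = 80 * wanted + 256;        // estimated line size * wanted + some bonus
            while (true) {
                long start = Math.max(0, size - block);
                byte[] buf = new byte[(int) (size - start)];
                raf.seek(start);
                raf.readFully(buf);
                int newlines = 0;
                for (int i = 0; i < buf.length; i++) {
                    if (buf[i] == '\n') newlines++;
                }
                if (newlines >= wanted || start == 0) {
                    return new String(buf);       // caller trims this down to the last lines
                }
                block *= 2;                       // not enough complete lines yet; try a bigger block
            }
        } finally {
            raf.close();
        }
    }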

Similar Messages

  • Loading large text files into Java vectors and out of memory

    Hi there,
    I need your help with the following:
    I'm trying to load large amounts of data into a Vector in order to concatenate several text files and process them, but I'm getting an OutOfMemoryError. I even tried using an XML structure and saving to a database, but the error is still the same. Can you help?
    Thanks.
    Here's the code:
    public void Concatenate() {
        try {
            vEntries = new Vector();
            for (int i = 0; i < BopFiles.length; i++) {
                MainPanel.WriteLog("reading file " + BopFiles[i] + "...");
                FileInputStream fis = new FileInputStream(BopFiles[i]); // was missing the [i]
                BufferedInputStream bis = new BufferedInputStream(fis);
                DataInputStream in = new DataInputStream(bis);
                String line = in.readLine();
                Database db = new Database();
                Connection conn = db.open();
                while (line != null) {
                    DivideLine(BopFiles[i], line);
                    line = in.readLine();
                }
                FreeMemory(db, conn);
            }
            MainPanel.WriteLog("Num of elements: " + root.getChildNodes().getLength());
            MainPanel.WriteLog("Done!");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public void DivideLine(String file, String line) {
        if (line.toLowerCase().startsWith("00694")) {
            Header hd = new Header();
            hd.headerFile = file;
            hd.headerLine = line;
            vHeaders.add(hd);
        } else if (line.toLowerCase().startsWith("10694")) {
            Line entry = new Line();
            Vector vString = new Vector();
            Vector vType = new Vector();
            Vector vValue = new Vector();
            entry.name = line.substring(45, 150).trim();
            entry.number = line.substring(30, 45).trim();
            entry.nif = line.substring(213, 222).trim();
            entry.index = BopIndex;
            entry.message = line;
            entry.file = file;
            String series = line.substring(252);
            StringTokenizer st = new StringTokenizer(series, "A");
            while (st.hasMoreTokens()) {
                String token = st.nextToken();
                if (!token.startsWith(" ")) {
                    vString.add(token);
                    vType.add(token.substring(2, 4));
                    vValue.add(token.substring(4));
                }
                token = null;
            }
            entry.strings = new String[vString.size()];
            vString.copyInto(entry.strings);
            entry.types = new String[vType.size()];
            vType.copyInto(entry.types);
            entry.values = new String[vValue.size()]; // was vType.size(); same size, but clearer
            vValue.copyInto(entry.values);
            vEntries.add(entry);
            entry = null;
            vString = null;
            vType = null;
            vValue = null;
            st = null;
            series = null;
            line = null;
            file = null;
            MainPanel.SetCount(BopIndex);
            BopIndex++;
        }
    }

    public void FreeMemory(Database db, Connection conn) {
        try {
            //db.update("CREATE TABLE entries (message VARCHAR(1000))");
            db.update("DELETE FROM entries;");
            PreparedStatement ps = null;
            for (int i = 0; i < vEntries.size(); i++) {
                Line entry = (Line) vEntries.get(i);
                String value = "" + entry.message;
                if (!value.equals("")) {
                    try {
                        ps = conn.prepareStatement("INSERT INTO entries (message) VALUES('" + Tools.RemoveSingleQuote(value) + "');");
                        ps.execute();
                    } catch (Exception e) {
                        e.printStackTrace();
                        System.out.println("error in number->" + i);
                    }
                }
            }
            MainPanel.WriteLog("Releasing memory...");
            vEntries = null;
            vEntries = new Vector();
            System.gc();
        } catch (Exception e1) {
            e1.printStackTrace();
        }
    }

    Well, I need to process those contents and calculate values within those files, so writing files using FileInputStream won't do. For instance, I need to get line 5 from file 1, split it, grab a value according to its class (the value is also taken from the line), and compare it with another line from another file, adding those values to a single file.
    That's why I need Vector capabilities, but since these files are more than 5 MB each, an out-of-memory error is thrown when loading those values into a Vector.
    A better explanation:
    Each file has lines like
    CLIENTNUM CLASS VALUE
    so if the client is the same within 2 files, I need to sum the lines into a single file.
    If the class is the same, then sum the values; if not, add it to the front.
    We could end up with a final line like
    CLIENTNUM CLASS1 VALUE1 CLASS2 VALUE2
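    That merge can be done without holding every raw line in memory: stream the files line by line and keep only one running total per client and class. A rough sketch (the method name and the whitespace-delimited format are assumptions):

        // Sketch: aggregate CLIENTNUM CLASS VALUE lines across files.
        static Map sumByClient(String[] files) throws IOException {
            Map clientTotals = new HashMap(); // CLIENTNUM -> (CLASS -> summed VALUE)
            for (int i = 0; i < files.length; i++) {
                BufferedReader r = new BufferedReader(new FileReader(files[i]));
                String line;
                while ((line = r.readLine()) != null) {
                    StringTokenizer st = new StringTokenizer(line);
                    String client = st.nextToken();
                    String clazz = st.nextToken();
                    long value = Long.parseLong(st.nextToken());
                    Map perClass = (Map) clientTotals.get(client);
                    if (perClass == null) {
                        perClass = new TreeMap();
                        clientTotals.put(client, perClass);
                    }
                    Long old = (Long) perClass.get(clazz);
                    perClass.put(clazz, new Long(old == null ? value : old.longValue() + value));
                }
                r.close();
            }
            // Each map entry can now be written out as: CLIENTNUM CLASS1 VALUE1 CLASS2 VALUE2 ...
            return clientTotals;
        }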

  • Loading large text files to find duplicates

    Hi there,
    I have several files whose lines are each 500 chars long, and for each line I need to get a number (at index 213 to 222) and make sure only one of each exists.
    I also need to take the last 250 lines to manipulate them according to the number given.
    The problem is that I've tried to store those results either in a Vector or in a table (using hsqldb), but in both cases I get an out-of-memory error when processing more than 74000 results.
    So what should I do?
    Here's the code:
    public void Concatenate() {
        try {
            vClients = new Vector();
            GroupIdentical = Main.getProp("javabop.group.identical", "N");
            if (GroupIdentical.equalsIgnoreCase("s")) vNifs = new Vector();
            for (int i = 0; i < BopFiles.length; i++) {
                BoPPanel.WriteLogPane("A ler ficheiro " + BopFiles[i] + "..."); // "Reading file ..."
                FileInputStream fis = new FileInputStream(BopFiles[i]); // was missing the [i]
                BufferedInputStream bis = new BufferedInputStream(fis);
                DataInputStream in = new DataInputStream(bis);
                String line = in.readLine();
                //BoPPanel.SearchPane.append("\n Ficheiro " + BopFiles[i] + "\n\n");
                while (line != null) {
                    if (line.toLowerCase().startsWith("10694")) {
                        GetEntry(BopFiles[i], line);
                        //BoPPanel.SearchPane.append(line + "\n");
                    } else if (line.toLowerCase().startsWith("00694")) {
                        Header hd = GetHeader(BopFiles[i], line); // GetHeader already adds hd to vHeaders
                    }
                    line = in.readLine();
                }
                fis.close();
                bis.close();
                in.close();
                System.gc();
            }
            BoPPanel.WriteLogPane("Numero de elementos obtidos nos ficheiros: " + vClients.size()); // "Number of elements obtained from the files"
            BoPPanel.WriteLogPane("Concatenação concluída!"); // "Concatenation finished!"
            //if(GroupIdentical.equalsIgnoreCase("s")) FindDuplicated();
        } catch (Exception e) {
            e.printStackTrace();
            Main.WriteLogFile(e.getMessage());
        }
    }

    public Header GetHeader(String file, String line) {
        Header hd = new Header();
        hd.headerFile = file;
        hd.headerLine = line;
        vHeaders.add(hd);
        return hd;
    }

    public void Saveintable(int num, int nif, String file, int index, String series, String line) {
        try {
            Database db = new Database();
            Connection conn = db.open();
            //db.update("DROP TABLE Save");
            //db.update("CREATE TABLE Save ( num INTEGER, nif INTEGER, file VARCHAR(100), index INTEGER, series VARCHAR(150), line VARCHAR(500))");
            //db.update("DELETE FROM Save;");
            String sqlInsert = "INSERT INTO Save (num, nif, file, index, series, line) "
                + " VALUES (?,?,?,?,?,?)";
            PreparedStatement prep = conn.prepareStatement(sqlInsert);
            prep.setInt(1, num);
            prep.setInt(2, nif);
            prep.setString(3, file);
            prep.setInt(4, index);
            prep.setString(5, series);
            prep.setString(6, line);
            prep.executeUpdate();
            //prep.close();
            //conn.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public void GetEntry(String file, String line) {
        String series = line.substring(252).trim();
        String numberstr = line.substring(30, 45).trim();
        String nifstr = line.substring(213, 222).trim();
        int num = 0;
        if (!numberstr.equals("")) num = Integer.parseInt(numberstr);
        int nif = 0;
        if (!nifstr.equals("")) nif = Integer.parseInt(nifstr);
        if (GroupIdentical.equalsIgnoreCase("s") && !nifstr.equals("")) vNifs.add(nifstr);
        Saveintable(num, nif, file, BopIndex, series, line);
        BoPPanel.SetCount(BopIndex);
        BopIndex++;
    }

    Here's an example of 2 lines:
    10694000000000200000000000000H000000001000504AAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAA                                                                                                                     195501231504PRT50YYYYYYYYY                     3 04000000000029000A                                                           
    10694000000000300000000000000H000000001000153BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB                                                                                                                       195110151105PRT501XXXXXXXXX                     2 04000000000079680A
    I need to take out the numbers YYYYYYYYY and XXXXXXXXX and see if they match (there are 4 files with a total of 840000 lines, and I need to concatenate all the info from the files into only one); if they are the same, I need to sum the values ending with A (29000A and 79680A).
    Now imagine this for those 840000 lines....
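    Since only the 9-character number at index 213 to 222 matters for the uniqueness check, it is enough to stream the files and keep just those keys in a HashSet; even 840000 nine-character strings fit comfortably in memory. A rough sketch (method and variable names are illustrative):

        // Sketch: find numbers that occur more than once across all files.
        static Set findDuplicates(String[] files) throws IOException {
            Set seen = new HashSet();
            Set duplicates = new HashSet();
            for (int i = 0; i < files.length; i++) {
                BufferedReader r = new BufferedReader(new FileReader(files[i]));
                String line;
                while ((line = r.readLine()) != null) {
                    if (!line.toLowerCase().startsWith("10694") || line.length() < 222) continue;
                    String key = line.substring(213, 222).trim();
                    if (!seen.add(key)) {    // add() returns false if the key was already there
                        duplicates.add(key); // same number seen before: its values get summed later
                    }
                }
                r.close();
            }
            return duplicates;
        }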

  • Arbitrary waveform generation from large text file

    Hello,
    I'm trying to use a PXI 6733 card hooked up to a BNC 2110 in a PXI 1031-DC chassis to output arbitrary waveforms at a sample rate of 100 kS/s. The types of waveforms I want to generate are generally going to be sine waves of frequencies less than 10 kHz, but they need to be very high quality signals, hence the high sample rate. Eventually, we would like to go up to as high as 200 kS/s, but for right now we just want to get it to work at the lower rate.
    Someone in the department has already created for me large text files (> 1 GB) with 9 columns of numbers representing the output voltages for the channels (there will be 6 channels outputting sine waves, and 3 other channels with a periodic DC voltage). The reason for the large file is that we want a continuous signal for around 30 minutes to allow for equipment testing and configuration while the signals are being generated.
    I'm supposed to use this file to generate the output voltages on the 6733 card, but I keep getting numerous errors and I've been unable to get something that works. The code, as written, currently generates error -200290 immediately after the buffered data is output from the card. Nothing ever seems to get enqueued or dequeued, and although I've read the LabVIEW help on buffers, I'm still very confused about their operation, so I'm not even sure if the buffer is working properly. I was hoping some of you could look at my code and give me some suggestions (or sample code, too!) for the best way to achieve this goal.
    Thanks a lot,
    Chris (new LabVIEW user)

    Chris:
    For context, I've pasted in the "explain error" output from LabVIEW to refer to while we work on this. More after the code...
    Error -200290 occurred at an unidentified location
    Possible reason(s):
    The generation has stopped to prevent the regeneration of old samples. Your application was unable to write samples to the background buffer fast enough to prevent old samples from being regenerated.
    To avoid this error, you can do any of the following:
    1. Increase the size of the background buffer by configuring the buffer.
    2. Increase the number of samples you write each time you invoke a write operation.
    3. Write samples more often.
    4. Reduce the sample rate.
    5. Change the data transfer mechanism from interrupts to DMA if your device supports DMA.
    6. Reduce the number of applications your computer is executing concurrently.
    In addition, if you do not need to write every sample that is generated, you can configure the regeneration mode to allow regeneration, and then use the Position and Offset attributes to write the desired samples.
    By default, the analog output on the device does what is called regeneration. Basically, if we're outputting a repeating waveform, we can simply fill the buffer once and the DAQ device will reuse the samples, reducing load on the system. What appears to be happening is that the VI can't read samples out from the file fast enough to keep up with the DAQ card. The DAQ card is set to NOT allow regeneration, so once it empties the buffer, it stops the task since there aren't any new samples available yet.
    If we go through the options, we have a few things we can try:
    1. Increase background buffer size.
    I don't think this is the best option. Our issue is with filling the buffer, and this requires more advanced configuration.
    2. Increase the number of samples written.
    This may be a better option. If we increase how many samples we commit to the buffer, we can increase the minimum time between writes in the consumer loop.
    3. Write samples more often.
    This probably isn't as feasible. If anything, you should probably have a short "Wait" function in the consumer loop where the DAQmx write is occurring, just to regulate loop timing and give the CPU some breathing space.
    4. Reduce the sample rate.
    Definitely not a feasible option for your application, so we'll just skip that one.
    5. Use DMA instead of interrupts.
    I'm 99.99999999% sure you're already using DMA, so we'll skip this one also.
    6. Reduce the number of concurrent apps on the PC.
    This is to make sure that the CPU time required to maintain good loop rates isn't being taken by, say, an antivirus scanner or something. Generally, if you don't have anything major running other than LabVIEW, you should be fine.
    I think our best bet is to increase the "Samples to Write" quantity (to increase the minimum loop period), and possibly to delay the DAQmx Start Task and consumer loop until the producer loop has had a chance to build the queue up a little. That should reduce the chance that the DAQmx task will empty the system buffer and ensure that we can prime the queue with a large quantity of samples. The consumer loop will wait for elements to become available in the queue, so I have a feeling that the file read may be what is slowing the program down. Once the queue empties, we'll see the DAQmx error surface again. The only real solution is to load the file to memory farther ahead of time.
    Hope that helps!
    Caleb Harris
    National Instruments | Mechanical Engineer | http://www.ni.com/support

  • iDVD won't load large project file... Help!

    iDVD won't load a large project file once I close the program for the first time since finishing the project. Basically, I made a compilation DVD of numerous individual files spread across quite a few folders/sub-menus, totaling about 236 minutes on a dual-layer DVD+R. I closed the project prior to burning, but now iDVD gets the rainbow wheel every time you try to load that particular project. What's the deal?
    Notes:
    -iDVD does still load if I want to load another (smaller/simple project) or start a new one.
    -I've tried to start over from scratch and make the exact same project. When I exited out and went to load it, it did the same thing.
    -I'm running Mac OSX 10.6.8 Snow Leopard and iDVD 7.1.2.
    Any suggestions?

    Premiere Elements is not part of the Cloud... and it has a different forum
    http://forums.adobe.com/community/premiere_elements?view=discussions
    When you go to the correct forum, you need to provide a LOT more information
    From the Premiere Elements Information FAQ http://forums.adobe.com/thread/1042180
    •What operating system? This should include specific minor version numbers, like "Mac OSX v10.6.8"---not just "Mac".
    •Have you installed any recent program or OS updates? (If not, you should. They fix a lot of problems.)
    •What kind(s) of image file(s)? When talking about camera raw files, include the model of camera.
    •If you are getting error message(s), what is the full text of the error message(s)?
    •What were you doing when the problem occurred?
    •What other software are you running?
    •Tell us about your computer hardware. How much RAM is installed?  How much free space is on your system (C:) drive?
    •Has this ever worked before?  If so, do you recall any changes you made to Premiere Elements, such as adding Plug-ins, brushes, etc.?  Did you make any changes to your system, such as updating hardware, printers or drivers; or installing/uninstalling any programs?
    And some other questions...
    •What are you editing, and does your video have a red line over it BEFORE you do any work?
    •Have you viewed the tutorial links at http://forums.adobe.com/thread/1275830 page?
    •Which version of Quicktime do you have installed?

  • How can I use SQL*Loader to load a text file into a table

    Hi, I need to load a text file that has tab-delimited records, one per line, into a table. How would I be able to use SQL*Loader to do this? I am using the Korn shell. I am very new at this, so any kind of helpful examples or documentation would be much appreciated. I would love to see some examples to help me understand, if possible. I need help! Thanks a lot!

    You should check out the documentation on SQL*Loader in the online Oracle document titled Utilities. Here's a link to the 9iR2 version of it: http://otn.oracle.com/docs/products/oracle9i/doc_library/release2/server.920/a96652/part2.htm#436160
    Hope this helps.
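    For tab-delimited records, a minimal control file could look like this (table, column, and file names are placeholders; check the Utilities manual linked above for the full syntax):

        -- load.ctl: SQL*Loader control file for tab-delimited data
        LOAD DATA
        INFILE 'mydata.txt'
        APPEND
        INTO TABLE my_table
        FIELDS TERMINATED BY X'9'  -- the tab character
        TRAILING NULLCOLS
        (col1, col2, col3)

    Then, from the Korn shell, something like: sqlldr userid=scott/tiger control=load.ctl log=load.log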

  • How to load a text file?

    I'm trying to load a text file and I've run into a wall. So far, I've loaded SWF files, image files and XML files without problems. But the text files just don't seem to load.
    It's not a tough situation. The files are in the same directory as the application. When I run it, I see no sign that the file has loaded, but I haven't found the error trapping to tell me what is (not) going on.
    The code below was culled from examples on the web. They seem fairly consistent, they are copied pretty much exactly, and I put them both into a single action to test them.
    Any clues as to what is not happening? I need to read an old txt data file that was used with AS2, with pairs of names and variables. The first example just reads a couple of variables. The second just reads some random text. I'm just trying to see what works here, so I can incorporate working code into the real program.
    Thanks for your help.

    Thanks Joergen,
    I put in the changes you suggested, and there's still no sign that the files are even being opened, but it's very good to have the feedback. Just the fact that someone is getting them to run gives me confidence in the basic code. I'll hack on it more and let you know what the answer turns out to be for me.
    Thanks.

  • How to load a text file into a JEditorPane and highlight some words (Urgent!)

    I want to load a text file into a JEditorPane and then highlight some keywords such as while, if and else. I am using an EditorKit to style the JEditorPane. I have no difficulty when giving the input through the keyboard, but lots of exceptions are thrown if I try to load the string from a file.

    Hi,
    I think setCharacterAttributes(int offset, int length, AttributeSet s, boolean replace) will solve the problem.
    You can create your own StyledDocument and set it on the editor pane.
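    A minimal sketch of that suggestion (the method name, file name, keyword list, and styling are illustrative):

        // Sketch: load a file into a JEditorPane and color keywords via setCharacterAttributes.
        static JEditorPane showHighlighted(String path) throws Exception {
            JEditorPane pane = new JEditorPane();
            pane.setEditorKit(new StyledEditorKit());   // the kit creates a DefaultStyledDocument
            pane.read(new FileReader(path), null);
            DefaultStyledDocument doc = (DefaultStyledDocument) pane.getDocument();

            SimpleAttributeSet keywordStyle = new SimpleAttributeSet();
            StyleConstants.setForeground(keywordStyle, Color.BLUE);
            StyleConstants.setBold(keywordStyle, true);

            String text = doc.getText(0, doc.getLength());
            String[] keywords = { "while", "if", "else" };
            for (int k = 0; k < keywords.length; k++) {
                int idx = text.indexOf(keywords[k]);
                while (idx >= 0) {
                    doc.setCharacterAttributes(idx, keywords[k].length(), keywordStyle, false);
                    idx = text.indexOf(keywords[k], idx + keywords[k].length());
                }
            }
            return pane;
        }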

  • Loading a text file on startup into a hashtable

    I am trying to load two separate text files into the same hashtable.
    The text files contain something like an employee name and employee number; everything goes in at the same time. How do I load these text files into the hashtable? Also, any good tutorials or code samples on hashtables would be greatly appreciated.
    Thanks In Advance

    You read the text files, one line at a time, and break up each line in whatever way you need to. Then you choose the bits to put in the hashtable (key and value) for each line and put those bits into the hashtable.
    http://java.sun.com/docs/books/tutorial/essential/index.html
    http://java.sun.com/docs/books/tutorial/collections/index.html
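    A rough sketch of that recipe, assuming one whitespace-separated name/number pair per line (adjust the split to the real format):

        // Sketch: load several text files of "name number" pairs into one Hashtable.
        static Hashtable loadAll(String[] files) throws IOException {
            Hashtable table = new Hashtable();
            for (int i = 0; i < files.length; i++) {
                BufferedReader r = new BufferedReader(new FileReader(files[i]));
                String line;
                while ((line = r.readLine()) != null) {
                    String[] parts = line.trim().split("\\s+"); // break up the line
                    if (parts.length >= 2) {
                        table.put(parts[0], parts[1]);          // key: name, value: number
                    }
                }
                r.close();
            }
            return table;
        }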

  • Loading a text file to a buffer

    Hi,
    I have a very simple question,
    I have a text file which I want to read and load into a string buffer, but I don't want to use BufferedReader and append line by line.
    Is it possible to load the text file into a StringBuffer without any loops / appends?
    Thanks,
    S.

    File file = new File("fileName.fileExtension");
    // Note: file.length() counts bytes, so for a multi-byte encoding the array may not fill completely.
    char[] c = new char[(int) file.length()];  // was byte[]; FileReader.read(...) needs a char array
    FileReader reader = new FileReader(file);
    try {
        reader.read(c);
        reader.close();
    } catch (IOException e) {
        System.out.println("An error has occurred");
    }
    String makeMeString = new String(c);
    StringBuffer buff = new StringBuffer(makeMeString);

  • Loading a text file from a relative location in shockwave

    Hi all,
    I'm REALLY sorry I have to post about such an inane problem. I've googled this and looked in forums and found several suggestions, but none of them are working for me. I want to load a text file from the same directory as my movie. I can get this working in authoring mode, in a projector, and in a Shockwave movie on the local machine (using getNetText("/text.txt")).
    But when I put the Shockwave movie on a server, it does NOT work. The Macromedia documentation says that getNetText works with relative URLs. But when I put in the full URL, it does work, so obviously the Shockwave movie is having a problem finding the text file, which IS in the same directory as it.
    Any suggestions / ideas? All I want to do is access a text file :)
    Sorry again for such a lame problem,
    Mike

    Thanks - this worked. I also had to not use a repeat loop to wait for netDone() to return true.
    Thanks again,
    Mike

  • Error loading a text file during creation of data load rule

    Hi,
    I am trying to load this text file, but whenever I go to File -> Open Data File and then click on the file, it says "Invalid Blank Character in name". I tried changing the name of the file and everything, but I do not understand what it really means. Can anyone help me out, please? This seems like a simple error, but I am unable to figure it out. Thanks.
    -- Adi

    As Glenn said, there should not be any space in the path; that is what causes the error. For instance, if you have your file on the desktop (C:\Documents and Settings\Desktop), there would be a space between "Documents" and "Settings". To avoid this, you could save directly to the root of a local disk drive (C:\, D:\, E:\).
    Regards
    Cnee

  • Loading a text file in a gzip or zip archive using an applet to a String

    How do I load a text file in a gzip or zip archive, using an applet, into a String (not a byte array)? Examples for both gzip and zip would be appreciated.

    This doesn't work:
    try {
        java.net.URL url = new java.net.URL(getCodeBase() + filename);
        inputStream = new java.io.BufferedInputStream(url.openStream());
        if (filename.toLowerCase().endsWith(".txt.gz")) {
            inputStream = new java.util.zip.GZIPInputStream(inputStream);
        } else if (filename.toLowerCase().endsWith(".zip")) {
            java.util.zip.ZipInputStream zipInputStream = new java.util.zip.ZipInputStream(inputStream);
            java.util.zip.ZipEntry zipEntry = zipInputStream.getNextEntry();
            while (zipEntry != null && (zipEntry.isDirectory() || !zipEntry.getName().toLowerCase().endsWith(".txt"))) {
                zipEntry = zipInputStream.getNextEntry();
            }
            if (zipEntry == null) {
                zipInputStream.close();
                inputStream.close();
                return "";
            }
            inputStream = zipInputStream;
        } else {
            // plain text: read the stream as-is
        }
        byte bytes[] = new byte[10000000];
        int s;                                       // read() returns an int, not a byte
        int i = 0;
        while ((s = inputStream.read()) != -1) {     // end of stream is signaled by -1, not null
            bytes[i++] = (byte) s;
        }
        inputStream.close();
    } catch (Exception e) {
    }
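    Since the goal is a String rather than a byte array, it may be simpler to wrap whichever InputStream was selected above in a Reader and append into a StringBuffer (a sketch; it assumes the platform default charset is acceptable):

        // Sketch: decode the (possibly gzip- or zip-wrapped) stream into a String.
        static String readToString(java.io.InputStream in) throws java.io.IOException {
            java.io.BufferedReader reader =
                new java.io.BufferedReader(new java.io.InputStreamReader(in));
            StringBuffer sb = new StringBuffer();
            char[] buf = new char[4096];
            int n;
            while ((n = reader.read(buf)) != -1) {
                sb.append(buf, 0, n);
            }
            reader.close();
            return sb.toString();
        }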

  • "how to load a text file to oracle table"

    Hi to all,
    Can anybody help me with "how to load a text file to an Oracle table"? This is the first time I am doing this; please give me the steps.
    Regards
    MKhaleel

    Usage: SQLLOAD keyword=value [,keyword=value,...]
    Valid Keywords:
    userid -- ORACLE username/password
    control -- Control file name
    log -- Log file name
    bad -- Bad file name
    data -- Data file name
    discard -- Discard file name
    discardmax -- Number of discards to allow (Default all)
    skip -- Number of logical records to skip (Default 0)
    load -- Number of logical records to load (Default all)
    errors -- Number of errors to allow (Default 50)
    rows -- Number of rows in conventional path bind array or between direct path data saves (Default: Conventional path 64, Direct path all)
    bindsize -- Size of conventional path bind array in bytes (Default 256000)
    silent -- Suppress messages during run (header, feedback, errors, discards, partitions)
    direct -- use direct path (Default FALSE)
    parfile -- parameter file: name of file that contains parameter specifications
    parallel -- do parallel load (Default FALSE)
    file -- File to allocate extents from
    skip_unusable_indexes -- disallow/allow unusable indexes or index partitions (Default FALSE)
    skip_index_maintenance -- do not maintain indexes, mark affected indexes as unusable (Default FALSE)
    commit_discontinued -- commit loaded rows when load is discontinued (Default FALSE)
    readsize -- Size of Read buffer (Default 1048576)
    external_table -- use external table for load; NOT_USED, GENERATE_ONLY, EXECUTE
    (Default NOT_USED)
    columnarrayrows -- Number of rows for direct path column array (Default 5000)
    streamsize -- Size of direct path stream buffer in bytes (Default 256000)
    multithreading -- use multithreading in direct path
    resumable -- enable or disable resumable for current session (Default FALSE)
    resumable_name -- text string to help identify resumable statement
    resumable_timeout -- wait time (in seconds) for RESUMABLE (Default 7200)
    PLEASE NOTE: Command-line parameters may be specified either by position or by keywords. An example of the former case is 'sqlldr scott/tiger foo'; an example of the latter is 'sqlldr control=foo userid=scott/tiger'. One may specify parameters by position before but not after parameters specified by keywords. For example, 'sqlldr scott/tiger control=foo logfile=log' is allowed, but 'sqlldr scott/tiger control=foo log' is not, even though the position of the parameter 'log' is correct.
    SQLLDR USERID=GROWSTAR/[email protected] CONTROL=D:\PFS2004.CTL LOG=D:\PFS2004.LOG BAD=D:\PFS2004.BAD DATA=D:\PFS2004.CSV
    SQLLDR USERID=GROWSTAR/[email protected] CONTROL=D:\CLAB2004.CTL LOG=D:\CLAB2004.LOG BAD=D:\CLAB2004.BAD DATA=D:\CLAB2004.CSV
    SQLLDR USERID=GROWSTAR/[email protected] CONTROL=D:\GROW\DEACTIVATESTAFF\DEACTIVATESTAFF.CTL LOG=D:\GROW\DEACTIVATESTAFF\DEACTIVATESTAFF.LOG BAD=D:\GROW\DEACTIVATESTAFF\DEACTIVATESTAFF.BAD DATA=D:\GROW\DEACTIVATESTAFF\DEACTIVATESTAFF.CSV

  • Editing and changing large text file

    hi,
    new to this, so bear with me.
    I've got a large text file (44 MB) and I need to change some values in it.
    Example:
    TSX ;20030102;40302216;40300579;1980;1900;3762000
    I need to change the lines so that they read:
    TSX ;20030102;302216;300579;1980;1900;3762000
    thus removing the leading 40 in the middle columns.
    Thanks in advance,
    John

    Crap, small mistake. Here are the corrected steps (sketched in code below):
    1) use BufferedReader to read in the file line by line (BufferedReader.readLine())
    2a) for each line, split it on the semicolons (String.split())
    2b) change the middle values using String.substring()
    2c) construct a new line by appending all strings in the array returned by 2a) to each other
    2d) write this new line to a file using PrintStream (PrintStream.println())
    3) when done, close both the reader and the PrintStream.
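    A sketch of those steps for the format above (the column positions come from the example lines; verify them against the real data):

        // Sketch: strip the leading "40" from the middle columns of each line.
        static void fixFile(String inPath, String outPath) throws IOException {
            BufferedReader reader = new BufferedReader(new FileReader(inPath));   // step 1
            PrintStream out = new PrintStream(new FileOutputStream(outPath));
            String line;
            while ((line = reader.readLine()) != null) {
                String[] cols = line.split(";");                                  // step 2a
                if (cols.length > 3 && cols[2].startsWith("40") && cols[3].startsWith("40")) {
                    cols[2] = cols[2].substring(2);                               // step 2b
                    cols[3] = cols[3].substring(2);
                }
                StringBuffer sb = new StringBuffer(cols[0]);                      // step 2c
                for (int i = 1; i < cols.length; i++) {
                    sb.append(';').append(cols[i]);
                }
                out.println(sb.toString());                                       // step 2d
            }
            reader.close();                                                       // step 3
            out.close();
        }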
