Question about reading a very big file into a buffer.

Hi, everyone!
I want to randomly pick several characters from
the GB2312 charset to form a string.
I have two questions:
1. Where can I find the charset table file? I have searched
Google for hours but failed to find a GB2312 charset file.
2. I think the charset table file is very big, and I doubt
whether I can load it into a String or StringBuffer. Does anyone
have a solution? How do I load a very big file and randomly
select several characters from it?
Have I made myself understood?
Thanks in advance,
George

The following prints the correspondence between GB2312-encoded byte arrays and characters (as hexadecimal integers).
import java.nio.charset.*;
import java.io.*;

public class GBs {
    static String convert() throws UnsupportedEncodingException {
        StringBuffer buffer = new StringBuffer();
        String l_separator = System.getProperty("line.separator");
        Charset chset = Charset.forName("EUC_CN"); // GB2312 is an alias of this encoding
        CharsetEncoder encoder = chset.newEncoder();
        // Mark every char value the encoder can map into GB2312
        // (Java zero-initializes the array, so no separate init loop is needed)
        int[] indices = new int[Character.MAX_VALUE + 1];
        for (int j = 0; j <= Character.MAX_VALUE; j++) {
            if (encoder.canEncode((char) j)) indices[j] = 1;
        }
        byte[] encoded;
        for (int j = 0; j < indices.length; j++) {
            if (indices[j] == 1) {
                encoded = Character.toString((char) j).getBytes("EUC_CN");
                for (int q = 0; q < encoded.length; q++) {
                    buffer.append(Byte.toString(encoded[q]));
                    buffer.append(" ");
                }
                buffer.append(": 0x");
                buffer.append(Integer.toHexString(j));
                buffer.append(l_separator);
            }
        }
        return buffer.toString();
    }

    // the following is for testing
    public static void main(String[] args) throws Exception {
        String str = GBs.convert();
        System.out.println(str);
    }
}
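To answer the original question: you don't need a charset table file at all. The encoder itself can tell you which characters belong to GB2312, so you can collect them into a list and sample from it. A minimal sketch along those lines (the class name and the string length of 8 are illustrative; note the pool also contains plain ASCII, so filter for j >= 0x80 if you only want Chinese characters):

import java.nio.charset.*;
import java.util.*;

public class RandomGB2312 {
    public static void main(String[] args) {
        CharsetEncoder encoder = Charset.forName("EUC_CN").newEncoder();
        // Collect every char the GB2312 (EUC_CN) encoder can represent
        List<Character> pool = new ArrayList<Character>();
        for (int j = 0; j <= Character.MAX_VALUE; j++) {
            if (encoder.canEncode((char) j)) pool.add(Character.valueOf((char) j));
        }
        // Pick several characters at random to form a string
        Random rnd = new Random();
        StringBuffer sb = new StringBuffer();
        for (int i = 0; i < 8; i++) {
            sb.append(pool.get(rnd.nextInt(pool.size())).charValue());
        }
        System.out.println(sb.toString());
    }
}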

Similar Messages

  • Split a big file into 3 files??

    Hello,
    Is there any software I can use to split a very big file into
    3 or 4 small files, please?
    Thanks.

    Use one of these applications to split the file.

  • How do i open a VERY big file?

    I hope someone can help.
    I did some testing using a LeCroy LT342 in segment mode. Using the
    LabVIEW driver I downloaded the data over GPIB and saved it to a
    spreadsheet file. Unfortunately this created very big files (ranging from
    200MB to 600MB). I now need to process them, but LabVIEW doesn't like
    them. I would be happy to split the files into an individual file
    for each row (I can do this quite easily), but LabVIEW just sits there
    when I try to open the file.
    I don't know enough about computers and memory (my spec is a 1.8GHz
    Pentium 4 with 384MB RAM) to figure out whether it will eventually do
    the job if I just leave it long enough.
    Has anyone any experience or help they could offer?
    Thanks,
    Phil

    When you open (and read) a file, you usually move it from your hard disk (permanent storage) to RAM. This lets you manipulate it at high speed using fast RAM. If you don't have enough memory (RAM) to hold the whole file, you will be forced to use virtual memory (swap space on the HD acting as "virtual" RAM), which is very slow.
    Since you only have 384MB of RAM and want to process huge files (200MB-600MB), you could easily and inexpensively upgrade to 1GB of RAM and see large speed increases. A better option is to load the file in chunks, looking at some number of lines at a time, processing that amount of data, and repeating until the file is complete. This means more programming, but it lets you use much less RAM at any instant.
    Paul
    Paul Falkenstein
    Coleman Technologies Inc.
    CLA, CPI, AIA-Vision
    Labview 4.0- 2013, RT, Vision, FPGA
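    In Java (the language of the original question at the top of this page), Paul's chunk-at-a-time idea might look like the following sketch. The file name, chunk size, and process() placeholder are illustrative:
    import java.io.*;
    public class ChunkedReader {
        public static void main(String[] args) throws IOException {
            BufferedReader in = new BufferedReader(new FileReader("big_data.txt")); // illustrative name
            StringBuffer chunk = new StringBuffer();
            String line;
            int linesInChunk = 0;
            while ((line = in.readLine()) != null) {
                chunk.append(line).append('\n');
                if (++linesInChunk == 10000) {   // chunk size is arbitrary
                    process(chunk.toString());   // handle this chunk, then discard it
                    chunk.setLength(0);
                    linesInChunk = 0;
                }
            }
            if (chunk.length() > 0) process(chunk.toString()); // last partial chunk
            in.close();
        }
        static void process(String chunk) {
            // placeholder: do the real per-chunk work here
            System.out.println("processed " + chunk.length() + " chars");
        }
    }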

  • VERY big files (!!) created by QuarkXPress 7

    Hi there!
    I have a "problem" with QuarkXPress 7.3 and I don't know if this is the right forum to ask...
    Anyway, I have created a document, about 750 pages, with 1000 pictures placed in it. I have divided it into 3 layouts.
    When I save the file, the file created is 1.20GB!!!
    Isn't that a very big file for QuarkXPress??
    There are 3 layouts in that project. I tried making a copy of the file and deleting 2 of the 3 layouts, and the project's file size is still the same!!
    (Last year, I created (almost) the same document, and as I checked that document now, its size is about 280MB!!)
    The problem is that I have "autosave" on (every 5 or 10 minutes) and it takes some time to save!
    Can anyone help me with that??
    Why does Quark make such a big file???
    Thank you all for your time!

    This is really a Quark issue and better asked in their forum areas. However, have you tried to do a Save As and see how big the resultant document is?

  • How to read a whole text file into a pl/sql variable?

    Hi, I need to read an entire text file (which actually contains an email message extracted from a content management system) into a variable in a PL/SQL package, so I can insert some information from the database and then send the email. I want to read the whole text file in one shot, not just one line at a time. Should I use Utl_File.Get_Raw, or is there another more appropriate way to do this?

    how to read a whole text file into a pl/sql variable?
    your_clob_variable := dbms_xslprocessor.read2clob('YOUR_DIRECTORY','YOUR_FILE');
    ....

  • To read data from an Excel file into SAP

    hi all,
    How do I read data from an Excel file into an internal table in ABAP?
    Regards,
    sugeet.

    Hi Sugeet,
    Use the following code.
    DATA : BEGIN OF tbl_asset occurs 0,
             anlkl LIKE anla-anlkl,          " Asset Class
             bukrs LIKE anla-bukrs,          " Company Code
             ranl1 LIKE ra02s-ranl1,         " Asset #
             txt50 LIKE anla-txt50,          " Description 1
             txa50 LIKE anla-txa50,          " Description 2
             sernr LIKE anla-sernr,          " Serial #
             invnr LIKE anla-invnr,          " Inventory #
             menge LIKE anla-menge,          " Quantity
             meins LIKE anla-meins,          " Base UOM
             inken LIKE anla-inken,          " Inventory
    END OF tbl_asset.
    DATA : w_filename TYPE IBIPPARMS-path,
           w_file     TYPE string.
    start-of-selection.
    *popup for file path from user
    CALL FUNCTION 'F4_FILENAME'
    EXPORTING
       PROGRAM_NAME        = SYST-CPROG
       DYNPRO_NUMBER       = SYST-DYNNR
    IMPORTING
       FILE_NAME           = w_filename          .
    MOVE w_filename TO w_file .
    * upload data
    CALL FUNCTION 'GUI_UPLOAD'
      EXPORTING
        FILENAME                      =  w_file
        FILETYPE                      = 'ASC'
        HAS_FIELD_SEPARATOR           = 'X'
      TABLES
        DATA_TAB                      = tbl_asset
      EXCEPTIONS
       FILE_OPEN_ERROR               = 1
       FILE_READ_ERROR               = 2
       NO_BATCH                      = 3
       GUI_REFUSE_FILETRANSFER       = 4
       INVALID_TYPE                  = 5
       NO_AUTHORITY                  = 6
       UNKNOWN_ERROR                 = 7
       BAD_DATA_FORMAT               = 8
       HEADER_NOT_ALLOWED            = 9
       SEPARATOR_NOT_ALLOWED         = 10
       HEADER_TOO_LONG               = 11
       UNKNOWN_DP_ERROR              = 12
       ACCESS_DENIED                 = 13
       DP_OUT_OF_MEMORY              = 14
       DISK_FULL                     = 15
       DP_TIMEOUT                    = 16
       OTHERS                        = 17.
    IF SY-SUBRC <> 0.
    * MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
    *         WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
    ENDIF.
    For HAS_FIELD_SEPARATOR use:
    'X': Fields are separated by tabs.
    SPACE: Fields are not separated by tabs. In this case, the table must contain only one column or all columns must be contained in the file in their entire length.
    Hope it helps...
    Lokesh
    Pls. reward appropriate points

  • Upload of very big files (300MB+)

    Hello all,
    I am trying to create an application in HTML DB to store files in the DB (BLOB) via web browser. I created all the needed components and am now stuck with the problem of uploading big files. Basically, when a file is over 100MB the upload becomes unreliable, and for very big files it does not work at all. Can somebody help me figure out how to upload big files into an HTML DB application? Any hints and suggestions are welcome. Examples will be even more appreciated.
    Sincerely,
    Ian

    Ian,
    When you say "big files does not work at all", what do you see in the browser? Is no page returned at all?
    When a file is uploaded, it takes some amount of time to simply transfer the file from the client to modplsql. If you're on a local Gbit network, this is probably fast. If you're doing this over a WAN or over the Internet, this is probably fairly slow. As modplsql gets this uploaded file, it writes it to a temporary BLOB. Once fully received, modplsql will then insert this into the HTML DB upload table.
    I suspect that the TimeOut directive in Apache/Oracle HTTP Server is kicking in here. The default setting for this is 300 (5 minutes).
    I believe the timeout is reset by modplsql during file transfer to avoid a timeout operation while data is still being sent. Hence, I believe the insertion of your large file into the file upload table is taking longer than the TimeOut directive.
    The easy answer is to consider increasing your TimeOut directive for Apache/Oracle HTTP Server.
    The not so easy answer is to investigate why it takes so long for this insert, and tune the database accordingly.
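    For example, in httpd.conf (the value below is only an illustration; size it to your slowest expected upload):
    # Allow up to 20 minutes for large uploads instead of the default 300 seconds
    Timeout 1200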
    Hope this helps.
    Joel

  • Question about reading csv file into internal table

    Someone in this forum (thanks, those nice guys!) has suggested I use FM KCD_CSV_FILE_TO_INTERN_CONVERT to read a CSV file into an internal table. However, it can only be used to read a local file.
    I would like to ask how I can read a CSV file into an internal table from files on the application server?
    I can't simply use SPLIT, as there may be commas in the content, e.g.
    "abc","aaa,ab",10,"bbc"
    My expected output:
    abc
    aaa,ab
    10
    bbc
    Thanks again for your help.

    Hi Gundam,
    Try this code. I have made a custom parser to read the details in the record and split them accordingly. I have also tested it with your provided test cases and it works fine.
    OPEN DATASET dsn FOR INPUT IN TEXT MODE ENCODING DEFAULT.
    DO.
      READ DATASET dsn INTO record.
      IF sy-subrc <> 0. EXIT. ENDIF. "leave the loop at end of file
      PERFORM parser USING record.
    ENDDO.
    *DATA str(32) VALUE '"abc",10,"aaa,ab","bbc"'.
    *DATA str(32) VALUE '"abc","aaa,ab",10,"bbc"'.
    *DATA str(32) VALUE '"a,bc","aaaab",10,"bbc"'.
    *DATA str(32) VALUE '"abc","aaa,ab",10,"b,bc"'.
    *DATA str(32) VALUE '"abc","aaaab",10,"bbc"'.
    FORM parser USING str.
    DATA field(12).
    DATA field1(12).
    DATA field2(12).
    DATA field3(12).
    DATA field4(12).
    DATA cnt TYPE i.
    DATA len TYPE i.
    DATA temp TYPE i.
    DATA start TYPE i.
    DATA quote TYPE i.
    DATA rec_cnt TYPE i.
    len = strlen( str ).
    cnt = 0.
    temp = 0.
    rec_cnt = 0.
    DO.
    *  Start at the beginning
      IF start EQ 0.
        "string just ENDED start new one.
        start = 1.
        quote = 0.
        CLEAR field.
      ENDIF.
      IF str+cnt(1) EQ '"'.  "Check for quotes
        "Check if the quote flag is already set
        IF quote = 1.
          "Already quotes set
          "Start new field
          start = 0.
          quote = 0.
          CONCATENATE field '"' INTO field.
          IF field IS NOT INITIAL.
            rec_cnt = rec_cnt + 1.
            CONDENSE field.
            IF rec_cnt EQ 1.
              field1 = field.
            ELSEIF rec_cnt EQ 2.
              field2 = field.
            ELSEIF rec_cnt EQ 3.
              field3 = field.
            ELSEIF rec_cnt EQ 4.
              field4 = field.
            ENDIF.
          ENDIF.
    *      WRITE field.
        ELSE.
          "This is the start of quotes
          quote = 1.
        ENDIF.
      ENDIF.
      IF str+cnt(1) EQ ','. "Check end of field
        IF quote EQ 0. "This is not inside quote end of field
          start = 0.
          quote = 0.
          CONDENSE field.
    *      WRITE field.
          IF field IS NOT INITIAL.
            rec_cnt = rec_cnt + 1.
            IF rec_cnt EQ 1.
              field1 = field.
            ELSEIF rec_cnt EQ 2.
              field2 = field.
            ELSEIF rec_cnt EQ 3.
              field3 = field.
            ELSEIF rec_cnt EQ 4.
              field4 = field.
            ENDIF.
          ENDIF.
        ENDIF.
      ENDIF.
      CONCATENATE field str+cnt(1) INTO field.
      cnt = cnt + 1.
      IF cnt GE len.
        EXIT.
      ENDIF.
    ENDDO.
    WRITE: field1, field2, field3, field4.
    ENDFORM.
    Regards,
    Wenceslaus.
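    For comparison, the same quote-aware splitting idea sketched in Java (the digest's main language); the test record is the one from the question:
    import java.util.*;
    public class CsvSplit {
        // Split one CSV record, honoring commas inside double-quoted fields
        static List<String> split(String record) {
            List<String> fields = new ArrayList<String>();
            StringBuffer field = new StringBuffer();
            boolean inQuotes = false;
            for (int i = 0; i < record.length(); i++) {
                char c = record.charAt(i);
                if (c == '"') {
                    inQuotes = !inQuotes;         // toggle quote state, drop the quote itself
                } else if (c == ',' && !inQuotes) {
                    fields.add(field.toString()); // unquoted comma ends the field
                    field.setLength(0);
                } else {
                    field.append(c);
                }
            }
            fields.add(field.toString());         // last field
            return fields;
        }
        public static void main(String[] args) {
            System.out.println(split("\"abc\",\"aaa,ab\",10,\"bbc\""));
            // prints [abc, aaa,ab, 10, bbc]
        }
    }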

  • How to read a very BIG XML-File

    Hello together,
    how can I read an XML file of e.g. 2 GByte in ABAP (Release 4.6C)? The file should be read from the application server (not from the front end).
    Problem: Too much MEMORY is needed to upload the complete file into an internal table. In order to produce a stream, the complete file must be uploaded. The parser works with parse_event (event-triggered) and does not create a DOM. The parsing itself is no problem.
    Possible solution: Feed the stream with parts of the file. But how!?
    Here is some coding of the program:
      data: l_xml_filename    type localfile,
            l_event_sub       type i,
            l_boolean         type c,
            l_subrc           type i.
    * Read the file and its size into an XML table:
    l_xml_filename = 'I:\TEMP\GeboIsp-PSGK37.xml'.
      perform read_file_in_xml_table using    p_xml_filename
                                     changing g_xml_table
                                              g_xml_size.
    * Build the XML document from the table:  =========================
    * Create the iXML factory:
      go_ixml = cl_ixml=>create( ).
    * Create the stream factory:
      p_streamfactory = go_ixml->create_stream_factory( ).
    * Create the XML document:
      p_xml_document = go_ixml->create_document( ).
    * Create the input stream:
      p_istream = p_streamfactory->create_istream_itable(
                          table = g_xml_table
                          size  = g_xml_size ).
      p_parser = go_ixml->create_parser( stream_factory = p_streamfactory
                                         istream        = p_istream
                                         document       = p_xml_document ).
      l_event_sub = if_ixml_event=>co_event_element_pre +
                    if_ixml_event=>co_event_element_post.
      call method p_parser->set_event_subscription( l_event_sub ).
      l_boolean = p_parser->set_dom_generating( ' ' ).
    The program works fine but needs too much memory because of the big internal table.
    I would be very happy if somebody could help me!
    Thanks in advance!
    Thomas
    P.s.: German answers are welcome!  

    Hello myself,
    nobody has answered my question, so now I answer myself!!  
    The wrong approach is to read the file with "open dataset" and to create the input stream with
    p_istream = p_streamfactory->create_istream_itable(
    table = g_xml_table
    size = g_xml_size ).
    Better is to create the input stream with
    p_istream = p_streamfactory->create_istream_uri(
                    public_id = ''
                    system_id = '\\applserver\I$\TEMP\Datei.XML' ).
    In this way no space is needed for the file.
    Best regards,
    Thomas
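    The same principle applies in Java (the digest's main language): let the parser pull from a stream instead of an in-memory buffer. A minimal sketch with SAX (the file name is illustrative):
    import javax.xml.parsers.*;
    import org.xml.sax.*;
    import org.xml.sax.helpers.*;
    public class StreamParse {
        public static void main(String[] args) throws Exception {
            SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
            // The parser reads from the file as it goes; the document is never held whole in memory
            parser.parse(new java.io.File("huge.xml"), new DefaultHandler() {
                public void startElement(String uri, String local, String qName, Attributes atts) {
                    // handle each element event here
                }
            });
        }
    }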

  • Unable to read big files into string object  and java.lang.OutOfMemory Prob

    Hi All,
    I have an application that uses applet and servlet communication. On the client side I am reading a large XML file of 12MB (chosen via JFileChooser) and converting the file to a String object using the code below. But I am getting java.lang.OutOfMemory on the client side. The same code works fine for small XML files of less than 4MB:
    BufferedReader in = new BufferedReader(new InputStreamReader(new FileInputStream(file),"UTF8"), 1024*12);
    String s, s2 = new String();
    while((s = in.readLine())!= null)
         s2 += s + "\n";
    I even tried the code below, but java.lang.OutOfMemory still occurs:
    while (true)
         int i = in.read();
         if (i == -1)
              break;
         sb.append(i);
    Please let me know what am I doing wrong here ...
    Thanks & Regards,
    Sony.

    Using a String is bad for the following reason:
    When you initially create the String, it has a certain memory size (allocated length, if you will). As you keep appending to this String, memory reallocation occurs over and over, slowing your program down dramatically (I've seen a 16k x 8 char file take 30 seconds to read into memory using Strings this way).
    A better way, if you knew the number of characters in the XML file (using some file-size method, for example), would be to use a StringBuffer, which will preallocate enough space (or try to; it may be that you cannot create a string as large as you need). You can use the toString() method to get the result as a String object (the extra allocated space at the end of the buffer is removed):
    StringBuffer strBuf = new StringBuffer(xxxx);
    where xxxx is the length (an int). Assuming you can only pass an int to the constructor, an int is at most 2^31 (platform dependent, or whatever), which allows about 2.14e9 characters; an XML file filling that completely would allow ~2048 MB to be read in.
    Try it and see.
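    A minimal sketch of that approach (the file name is illustrative; it assumes the whole file fits in memory):
    import java.io.*;
    public class ReadWholeFile {
        public static void main(String[] args) throws IOException {
            File file = new File("big.xml"); // illustrative
            // Preallocate from the byte length: for UTF-8 the char count is never
            // larger than the byte count, so no reallocation will occur
            StringBuffer buf = new StringBuffer((int) file.length());
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(new FileInputStream(file), "UTF8"));
            char[] chunk = new char[8192];
            int n;
            while ((n = in.read(chunk)) != -1) {
                buf.append(chunk, 0, n); // bulk append instead of per-line String concatenation
            }
            in.close();
            System.out.println("read " + buf.length() + " chars");
        }
    }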

  • Unable to read big files into string object & java.lang.OutOfMemory Problem


    Hi,
    I could avoid the java.lang.OutOfMemory error using the code below, but with it I can only read small files of less than 4MB;
    with large files of 12MB the code simply hangs, and I am unable to print the String object 's'.
    My purpose is to construct a String or StringBuffer object from the user-uploaded XML file on the client side and pass that object to the server for processing. So how can I construct such an object while avoiding the memory problem and improving the performance of the operation?
    BufferedInputStream in = new BufferedInputStream(new FileInputStream(file));
    byte[] b = new byte[in.available()];
    in.read(b, 0, b.length);
    String s = new String(b, 0, b.length);
    in.close();
    Thanks & Regards,
    Sony.
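    A likely culprit in the snippet above: InputStream.available() does not report the file size (it is only an estimate of what can be read without blocking), and a single read(b, 0, b.length) call is not guaranteed to fill the buffer. A sketch of a version that sizes the buffer from the File and reads until full (the method name is illustrative; assumes the file is under 2GB so its length fits an int):
    import java.io.*;
    public class ReadFully {
        public static String readFile(File file) throws IOException {
            byte[] b = new byte[(int) file.length()]; // size from the file, not available()
            DataInputStream in = new DataInputStream(new FileInputStream(file));
            in.readFully(b); // loops internally until the whole buffer is filled
            in.close();
            return new String(b, "UTF8");
        }
    }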

  • I want to read a Matlab MAT file into labview.

    I do not have Matlab.  I have a series of *.mat files created by another program that I want to read into LabVIEW.  The *.mat files should contain a very long 2-dimensional array of complex numbers, with only 2 columns.  I am hoping that the *.mat format is straightforward.  I need to read this into a LabVIEW array to manipulate it.
    Has this been done?  I've found several routines that allow you to save data from LabVIEW into a *.mat file so that Matlab can read it, but I have not seen anything that goes in my direction.  Any help is appreciated.
    Mike

    Oh, yeah, that certainly makes a big difference.
    You're even luckier that I was still bored, so I whipped something together.
    See attached.
    Couple of things:
    (1) I couldn't test it for all data types so the "Parse Data" VI may need to be tweaked for some of the data types.
    (2) Your file contained multiple variables and the one matrix of complex values was a 1D array so you may need to insert a "Transpose 2D Array" function where the bundle function is for your 2D array.
    (3) A couple of the variables were character arrays but Matlab stores the individual characters as floating point numbers between 0 and 255 representing ASCII-encoded characters. These are the "XUnit" and "YUnit" arrays.
    For your reference, the MAT file format is at http://www.mathworks.com/access/helpdesk/help/pdf_doc/matlab/matfile_format.pdf
    Oh, I think I may have left breakpoints in when I saved the VI. Sorry about that. You should remove them when you run the VI so they don't become annoying.
    Attachments:
    Read Level 4 MAT File.zip (69 KB)

  • Reading a text data file into memory

    hi,
    I have a text file which contains data. The text file is parsed and objects are created. The problem is that the text file is quite huge, measuring up to 1.8~2 MB. The format of the text file is as follows:
    Units: METRIC (atm, m3)
    * Step: 1 Time: 0.00
    * Average Field Pressure : 204.14
    * Region 1 Pressure : 204.14
    Well GROUP Layer Blk_Pressure BHP ResRate OilRate WaterRate GasRate KhProd Windex PindeWELLTYPE
    1 FIELD 1 204.14 49.33 6601.22 6568.10 37.14 538.07 99999.00 260.35 99999.00 P
    1 FIELD 2 204.14 50.34 6558.13 6525.23 36.90 534.56 99999.00 260.35 99999.00 P
    1 FIELD 3 204.14 51.35 6515.04 6482.36 36.65 531.04 99999.00 260.35 99999.00 P
    1 FIELD Tot 204.14 50.34 19674.40 19575.69 110.69 1603.67 99999.00 99999.00 99999.00 P
    2 FIELD 1 204.14 377.66 7573.96 0.00 7403.68 0.00 99999.00 260.35 99999.00 I
    2 FIELD 2 204.14 378.40 7606.33 0.00 7435.32 0.00 99999.00 260.35 99999.00 I
    2 FIELD 3 204.14 379.14 7638.70 0.00 7466.96 0.00 99999.00 260.35 99999.00 I
    2 FIELD Tot 204.14 378.40 22818.99 0.00 22305.95 0.00 99999.00 99999.00 99999.00 I
    * Step: 2 Time: 20.23
    * Average Field Pressure : 300.11
    * Region 1 Pressure : 300.11
    Well GROUP Layer Blk_Pressure BHP ResRate OilRate WaterRate GasRate KhProd Windex PindeWELLTYPE
    1 FIELD 1 194.20 49.33 858.83 853.40 5.36 68.22 99999.00 260.35 99999.00 P
    1 FIELD 2 194.48 50.34 871.71 866.22 5.42 69.35 99999.00 260.35 99999.00 P
    1 FIELD 3 194.76 51.35 884.86 879.29 5.48 70.49 99999.00 260.35 99999.00 P
    1 FIELD Tot 194.48 50.34 2615.40 2598.91 16.26 208.06 99999.00 99999.00 99999.00 P
    2 FIELD 1 370.40 377.66 912.25 0.00 891.74 0.00 99999.00 260.35 99999.00 I
    2 FIELD 2 371.26 378.40 895.75 0.00 875.61 0.00 99999.00 260.35 99999.00 I
    2 FIELD 3 372.12 379.14 879.29 0.00 859.52 0.00 99999.00 260.35 99999.00 I
    2 FIELD Tot 371.26 378.40 2687.28 0.00 2626.86 0.00 99999.00 99999.00 99999.00 I
    The Step count goes on till around 3000, and I am creating an object for each step, which in turn has a nested object for each well, and each well in turn one for each layer. In the above case of step 2 the object would be:
    class Step2 {
        inner class Well {          // for well 1
            inner class Layer { }   // for layer 1
            inner class Layer { }   // for layer 2
            inner class Layer { }   // for layer 3
        }
        inner class Well {          // for well 2
            inner class Layer { }   // for layer 1
            inner class Layer { }   // for layer 2
            inner class Layer { }   // for layer 3
        }
    }
    This architecture of mine is proving to be heavy, as I end up with around 9000 Java objects in memory, though my classes only have int, float, and String data items. I am using this data to plot graphs, so I guess it wouldn't be optimal to read data from the text file for each plot.
    So, in short: can anyone suggest a better way to read the data into memory, given that there could be 3000 steps in the format given above?
    Thanks
    AM

    I have implemented this, and it takes around 30-45 seconds to parse; the GUI has also become very slow. I query the objects for multiple combinations of graphs.
    The data from the objects is used to feed the graphs in my GUI. I have a number of options in my GUI for different kinds of graphs; for each combination chosen, the objects are queried for the data. The GUI is written using Swing.
    So is there any way I can fine-tune the application? Any tips about the object architecture or how to improve the speed? I am also explicitly running the garbage collector a few times in my program. Also, how can I make the JVM occupy less memory so that my program can have more memory?
    Thanks
    am
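    One common cure for this kind of object-per-row design (a sketch, not the poster's code; the field names just follow the column headers above): give each step parallel primitive arrays instead of nested Well/Layer objects, so 3000 steps cost 3000 objects plus a few arrays rather than ~9000 small objects:
    // One Step holds all well/layer rows as parallel primitive arrays
    public class Step {
        final double time;           // from the "* Step: n Time: t" line
        final int[] well;            // well number per row
        final String[] layer;        // "1", "2", "3" or "Tot"
        final float[] blkPressure;   // one array per numeric column
        final float[] bhp;
        final float[] resRate;
        // ... add the remaining columns the same way
        Step(double time, int rows) {
            this.time = time;
            well = new int[rows];
            layer = new String[rows];
            blkPressure = new float[rows];
            bhp = new float[rows];
            resRate = new float[rows];
        }
    }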

  • Question about Reader DRM activation

    Hi All
    I got a paper "FAQ ACS Discontinuation of Sales, November 30, 2004"
    I have a question about ACS.
    As I know, ACS was discontinued effective November 30, 2004,
    and I read this section in the paper:
    Q. Will Adobe Reader and Acrobat users get support from Adobe for eBook issues?
    A. Support is not available for the free Adobe Reader; however, assistance for user activation is
    available until December 31, 2006.
    Does this mean the Reader activation service will stop on December 31, 2006?
    That's a big problem for our ebook service!!

    Adobe communicated in January to ACS customers that the DRM Activator service would remain operational until December 31, 2007, one year longer than previously announced.
    If you didn't receive this announcement, send your contact info (email and surface mail) to "[email protected]".
    You'll receive an automated reply that your email bounced, but it will then be manually confirmed.

  • A very big file

    I have a very big text file, almost 800 MB. In some lines a special pattern, "from the end", is present. I want to pick out those lines.
    What are the possible efficient solutions?
    As the file is large, I hesitate to use BufferedReader + readLine + indexOf("from the end"), because it is not efficient: it reads all the lines.
    Would RandomAccessFile be a good solution? But RAF doesn't have readLine + indexOf("from the end"). How do I solve it then?
    I have Java 1.4. What is the best solution for this kind of problem in Java 1.4?

    "Under all circumstances you'll have to read all bytes of the file once."
    Not true! If the file consists of bytes with a character representation that uses only one byte (e.g. ASCII or ISO-8859-x), then the following class is very efficient.
    import java.io.*;
    import java.util.*;
    public class GetLinesFromEndOfFile
    {
        static public class BackwardsFileInputStream extends InputStream
        {
            public BackwardsFileInputStream(File file) throws IOException
            {
                assert (file != null) && file.exists() && file.isFile() && file.canRead();
                raf = new RandomAccessFile(file, "r");
                currentPositionInFile = raf.length();
                currentPositionInBuffer = 0;
            }

            public int read() throws IOException
            {
                if (currentPositionInFile <= 0)
                    return -1;
                if (--currentPositionInBuffer < 0)
                {
                    currentPositionInBuffer = buffer.length;
                    long startOfBlock = currentPositionInFile - buffer.length;
                    if (startOfBlock < 0)
                    {
                        currentPositionInBuffer = buffer.length + (int)startOfBlock;
                        startOfBlock = 0;
                    }
                    raf.seek(startOfBlock);
                    raf.readFully(buffer, 0, currentPositionInBuffer);
                    return read();
                }
                currentPositionInFile--;
                return buffer[currentPositionInBuffer] & 0xFF; // mask so bytes >= 0x80 aren't mistaken for EOF
            }

            public void close() throws IOException
            {
                raf.close();
            }

            private final byte[] buffer = new byte[4096];
            private final RandomAccessFile raf;
            private long currentPositionInFile;
            private int currentPositionInBuffer;
        }

        public static List<String> head(File file, int numberOfLinesToRead) throws IOException
        {
            return head(file, "ISO-8859-1", numberOfLinesToRead);
        }

        public static List<String> head(File file, String encoding, int numberOfLinesToRead) throws IOException
        {
            assert (file != null) && file.exists() && file.isFile() && file.canRead();
            assert numberOfLinesToRead > 0;
            assert encoding != null;
            LinkedList<String> lines = new LinkedList<String>();
            BufferedReader reader = new BufferedReader(new InputStreamReader(new FileInputStream(file), encoding));
            for (String line = null; (numberOfLinesToRead-- > 0) && (line = reader.readLine()) != null;)
                lines.addLast(line);
            reader.close();
            return lines;
        }

        public static List<String> tail(File file, int numberOfLinesToRead) throws IOException
        {
            return tail(file, "ISO-8859-1", numberOfLinesToRead);
        }

        public static List<String> tail(File file, String encoding, int numberOfLinesToRead) throws IOException
        {
            assert (file != null) && file.exists() && file.isFile() && file.canRead();
            assert numberOfLinesToRead > 0;
            assert (encoding != null) && encoding.matches("(?i)(iso-8859|ascii|us-ascii).*");
            LinkedList<String> lines = new LinkedList<String>();
            BufferedReader reader = new BufferedReader(new InputStreamReader(new BackwardsFileInputStream(file), encoding));
            for (String line = null; (numberOfLinesToRead-- > 0) && (line = reader.readLine()) != null;)
            {
                // Reverse the order of the characters in the string
                char[] chars = line.toCharArray();
                for (int j = 0, k = chars.length - 1; j < k; j++, k--)
                {
                    char temp = chars[j];
                    chars[j] = chars[k];
                    chars[k] = temp;
                }
                lines.addFirst(new String(chars));
            }
            reader.close();
            return lines;
        }

        public static void main(String[] args)
        {
            try
            {
                File file = new File("/usr/share/dict/words");
                int n = 10;
                {
                    System.out.println("Head of " + file);
                    int index = 0;
                    for (String line : head(file, n))
                        System.out.println(++index + "\t[" + line + "]");
                }
                {
                    System.out.println("Tail of " + file);
                    int index = 0;
                    for (String line : tail(file, "us-ascii", n))
                        System.out.println(++index + "\t[" + line + "]");
                }
            }
            catch (Exception e)
            {
                e.printStackTrace();
            }
        }
    }
