JTextPane editor large file & performance

Hi,
I'm using a JTextPane to edit XML files. A JFlex parser splits the XML file into tokens, and in a custom document (extends DefaultStyledDocument) I apply syntax coloring:
    doc.setCharacterAttributes(token.getCharBegin() + change,
            token.getCharEnd() - token.getCharBegin(),
            Token_Styles_Define.getStyle(token.getDescription()), true);
My problem is loading and editing large XML files: for example, a 400 KB XML file takes 30 seconds, and for 700 KB to 1 MB files I get a java heap space error.
I googled it and found this:
" Define a limit that JTextPane/JEditorPane can handle well (like 500KB or 1MB). You'll only need to load a chunk of the file into the control with this size.
Start by loading the 1st partition of the file.
Then you need to interact with the scroll container and see if it has reached the end/beginning of the current chunk of the file. If so, show a nice waiting cursor and load the previous/next chunk to memory and into the text control.
The loading chunk is calculated from your current cursor position in the file (offset).
loading chunk = offset - limit/2 to offset + limit/2
The text on the JTextPane/JEditorPane must not change when loading chunks or else the user feels like is in another position of the file.
This is not a trivial solution but if you don't find any other 3rd party control to do this I would go this way. " (bruno conde)
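To make the chunk arithmetic in that suggestion concrete, here is a rough sketch of computing one window of the file. The constant CHUNK_LIMIT and the method names are my own illustration, not something from the quote:

    public class ChunkWindow {
        private static final int CHUNK_LIMIT = 512 * 1024; // roughly 500 KB, my own choice

        // chunk = [offset - limit/2, offset + limit/2], clamped to the file boundaries
        public static long[] bounds(long caretOffset, long fileLength) {
            long start = Math.max(0, caretOffset - CHUNK_LIMIT / 2);
            long end = Math.min(fileLength, caretOffset + CHUNK_LIMIT / 2);
            return new long[] { start, end };
        }
    }

The bytes in that range would then be read (for example with RandomAccessFile) and shown in the pane, and the window recomputed whenever the caret or scroll position approaches either edge.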
Is this a good solution, and can anybody give me an example (link, tutorial, project)? Or is there another solution?
How can we improve JTextPane performance?
Thx

Yes, I have already done that (using SwingUtilities.invokeLater, and performing updates only on document objects that are not attached to text components at the time the updates are made).
The slow speed of insertion in a JTextPane has at least two culprits. The first is simply an effect of the Model-View-Controller architecture: If the document into which strings are being inserted is serving as the model for a visible text pane, each insertion will trigger a re-rendering of the display, which may in turn trigger other user interface updates. For interactive editing, this is exactly the behavior that one would want, since the results of user input should be immediately visible when editing text pane content. For initializing a text pane with a large amount of content, or, in general, making large “batch” edits, the overhead of repeated UI updates is significant.
The second source of sluggishness is an implementation detail of DefaultStyledDocument. Unlike most Swing classes, which should only be modified in the AWT thread, DefaultStyledDocument is designed to be thread-safe. This requires DefaultStyledDocument to implement locking, so that multiple threads do not try to update the document contents at the same time. This locking imposes a small amount of overhead on each insertion or other document modification, which can add up to a long delay for large and complex text insertions.
[Faster JTextPane Text Insertion|http://javatechniques.com/public/java/docs/gui/jtextpane-speed-part1.html]
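For reference, a minimal sketch of that technique as I understand it: build the styled content in a DefaultStyledDocument that is not attached to any component, then swap it into the pane on the EDT. Token and Token_Styles_Define are the classes from my first snippet, so treat this as an outline under those assumptions rather than a drop-in implementation:

    import javax.swing.JTextPane;
    import javax.swing.SwingUtilities;
    import javax.swing.text.BadLocationException;
    import javax.swing.text.DefaultStyledDocument;

    public class BatchStyler {
        // Builds a fully styled document off-screen, then attaches it in one step.
        public static void loadStyled(JTextPane pane, String xmlText, Iterable<Token> tokens)
                throws BadLocationException {
            DefaultStyledDocument doc = new DefaultStyledDocument();
            doc.insertString(0, xmlText, null); // single bulk insert, no repaints yet
            for (Token token : tokens) {        // Token comes from the JFlex lexer (assumed)
                doc.setCharacterAttributes(token.getCharBegin(),
                        token.getCharEnd() - token.getCharBegin(),
                        Token_Styles_Define.getStyle(token.getDescription()), true);
            }
            // Only now touch the visible component, and only on the EDT.
            SwingUtilities.invokeLater(() -> pane.setDocument(doc));
        }
    }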
but it is still slow (reading the file), and the other problem is memory usage.
I found this subject on many other forums, but never with a satisfactory answer... there must be a solution.
Thx for your answers guys!

Similar Messages

  • Photoshop Cs5, opengl, large files, performance issues.

    Hello
    I have a fresh Win7 64-bit install.
    Also I have a fully upgraded system, new WHQL drivers and my old file from CS4.
    Specs:
    Intel D5400xs board
    Dual quad xeons
    8Gig ram
    Nvidia 9800GT 512 card (maybe it's not a Quadro or other fancy sh*, but as I saw earlier on forums, users have the same problems with high-end cards as I do with a cheap chip)
    Dedicated disk and channel for the scratch disk.
    Wacom intuos 3
    Clean system with no other software.
    Fully supported OpenGL, PS says.
    No artifacts.
    And the problem is:
    I have a file created in CS4 [~300 small simple layers (website), ~50 smart objects, and 9000x3000 px at 72 ppi].
    Time to open the file: 30-40 sec. Interaction with this file is very, very slow.
    Zooming hangs up PS for 5-10 sec.
    Which settings should I use to speed up CS5?
    CS4 works much faster. I made that file in CS4 with no problems.
    Grtz.

    Ok, maybe I'm not a specialist, but I found the answer in my case.
    Memory usage
    Let Photoshop use: 70% (~5000 MB)
    History & Cache
    History states:  69
    Cache levels:  8
    Cache tile size: 132K instead of 1024K (here is the turbo option ; )
    GPU Settings
    Advanced, with vertical sync and anti-alias guides and paths
    Now -> opening the file takes 10 s, no hang-ups, fast zooming and moving; really fast.
    So if you have a file like mine, use the settings described above.
    If anybody has more info about performance, please write about it.
    Grtz

  • [BUG] Performance Issue When Editing Large FIles

    Are other people experiencing this on the latest JDevelopers (11.1.1.4.0 and the version before it) ?
    People in the office have been experiencing extremely slow performance happening seemingly at random while editing Java files. We have been using JDev for almost 10 years and this has only become an issue for us recently. Typically we only use the Java editing and the database functionality in JDev.
    I have always felt that the issue was related to network traffic created by Clearcase and have never really paid attention, but a few days ago, after upgrading to the latest version of JDev, for the first time I started having slowdowns that are affecting my speed of work, and I decided to look into it.
    The main symptom is the editor hanging for an unknown reason in the middle of typing a line (even in a comment) or immediately after hitting the carriage return. All PCs in the office have 2Gig or more RAM and are well within recommended spec.
    I've been experimenting for a few days to try and determine what exactly is at the root of the slowness. Among the things I have tried:
    o cutting Clearcase out of the equation; not using it for the project filesystem; not connecting to it in JDev
    o Not using any features other than Java editing in JDev (no database connections)
    o never using split panes for side by side editing
    o downloading the light version of JDev
    o Increasing speed of all pop-ups/dynamic helpers to maximum in the options
    o disabling as many helpers and automations as possible in the options
    None of these have helped. Momentary freezes of 3-5 seconds while editing are common. My basic test case is simply to move the cursor from one line to another and to type a simple one-line comment that takes up most of the line. I get the freeze most usually right after typing the "//" to open the comment - it happens almost 100% of the time.
    I have however noticed a link to the file size/complexity.
    If I perform my tests on a small/medium-sized file of about 1000 lines (20-30 methods), performance is always excellent.
    If I perform my test on one of our larger files of 10,000 lines (more than 100 methods), the freezes while editing almost always occur.
    It looks like there is some processor-intensive work going on (which cannot be turned off via the options panel) which is taking control of the code editor and not completing in a reasonable amount of time on large Java files. I have a suspicion that it's somehow related to the gutter on the right-hand side which shows little red and yellow marks for run-time reports of compile errors and warnings, and I haven't found any way to disable it so I can check.

    Just a small follow-up....
    It looks like the problem is happening on only a single Java file in our product! Unfortunately it happens to be the largest and most often updated file in the project.
    I'm still poking around to figure out why JDev is choking consistently on this one particular file and not on any of the others. The size/complexity is not much bigger than the next largest which can be edited without problems. The problem file is a little unusual in that it contains a large number of static functions and members.
    Nice little mystery to solve.

  • Loading large files in Java Swing GUI

    Hello Everyone!
    I am trying to load large files (more than 70 MB of XML text) in a Java Swing GUI. I tried several approaches:
    1) Byte-based loading with a loop similar to the following:
    pane.setText("");
    InputStream file_reader = new BufferedInputStream(new FileInputStream(file));
    int BUFFER_SIZE = 4096;
    byte[] buffer = new byte[BUFFER_SIZE];
    int bytesRead;
    String line;
    while ((bytesRead = file_reader.read(buffer, 0, BUFFER_SIZE)) != -1) {
        line = new String(buffer, 0, bytesRead);
        pane.append(line);
    }
    But this gives me unacceptable response times for large files and runs out of Java heap memory.
    2) I read in several places that I could load only small chunks of the file at a time, and when the user scrolls up or down the next/previous chunk is loaded. To achieve this I am guessing that extensive manipulation of the scroll bar in the JScrollPane will be needed, or perhaps adding an external JScrollBar? Can anyone provide sample code for that approach? (Bear in mind that I am writing code for an editor, so I will need to interact via clicks, mouse wheel rotation, keyboard buttons and so on...)
    If anyone can help me, post sample code or point me to useful links that deal with this issue or with writing code for editors in general, I would be very grateful.
    Thank you in advance.

    Hi,
    I'm replying to your question from another thread.
    To handle large files I used the new I/O library. I'm trying to remember off the top of my head, but the classes involved were RandomAccessFile, FileChannel and MappedByteBuffer. The MappedByteBuffer was the best way for me to read and write to the file.
    When opening the file I had to scan through the contents of the file using a Swing worker thread and a progress monitor. Whilst doing this I indexed the file into manageable chunks. I also created a cache to further optimise file access.
    In all it worked really well and I was surprised by the performance of the new I/O libraries. I remember loading 1 GB files, and whilst having to wait a few seconds for the indexing, you wouldn't know that the data for the JList was being retrieved from a file while the application was running.
    Good Luck,
    Martin.
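    A rough sketch of the memory-mapping approach Martin describes; the path handling is illustrative only, and a real editor would still need the indexing pass and cache he mentions:

        import java.io.IOException;
        import java.io.RandomAccessFile;
        import java.nio.MappedByteBuffer;
        import java.nio.channels.FileChannel;
        import java.nio.charset.StandardCharsets;

        public class MappedFileReader {
            // Reads 'length' bytes starting at 'position' by mapping only that region of the file.
            public static String readRegion(String path, long position, int length) throws IOException {
                try (RandomAccessFile raf = new RandomAccessFile(path, "r");
                     FileChannel channel = raf.getChannel()) {
                    MappedByteBuffer buffer = channel.map(FileChannel.MapMode.READ_ONLY, position, length);
                    byte[] bytes = new byte[length];
                    buffer.get(bytes);
                    return new String(bytes, StandardCharsets.UTF_8);
                }
            }
        }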

  • OOM happens inside Weblogic 10.3.6 when application uploads large files

    Our Oracle Fusion BI Apps application uploads large files (100+ MB) to Oracle Cloud Storage. The application works properly when run outside WebLogic Server. When deployed on Fusion Middleware WebLogic 10.3.6, we get this OOM error during the upload of large files:
    java.lang.OutOfMemoryError: allocLargeObjectOrArray: [B, size 268435472
        at jrockit/vm/Allocator.allocLargeObjectOrArray(JIZ)Ljava/lang/Object;(Native Method)
        at jrockit/vm/Allocator.allocObjectOrArray(Allocator.java:349)[optimized]
        at weblogic/utils/io/UnsyncByteArrayOutputStream.resizeBuffer(UnsyncByteArrayOutputStream.java:59)[inlined]
        at weblogic/utils/io/UnsyncByteArrayOutputStream.write(UnsyncByteArrayOutputStream.java:89)[optimized]
        at com/sun/jersey/api/client/CommittingOutputStream.write(CommittingOutputStream.java:90)
        at com/sun/jersey/core/util/ReaderWriter.writeTo(ReaderWriter.java:115)
        at com/sun/jersey/core/provider/AbstractMessageReaderWriterProvider.writeTo(AbstractMessageReaderWriterProvider.java:76)
        at com/sun/jersey/core/impl/provider/entity/InputStreamProvider.writeTo(InputStreamProvider.java:98)
        at com/sun/jersey/core/impl/provider/entity/InputStreamProvider.writeTo(InputStreamProvider.java:59)
        at com/sun/jersey/api/client/RequestWriter.writeRequestEntity(RequestWriter.java:300)
        at com/sun/jersey/client/urlconnection/URLConnectionClientHandler._invoke(URLConnectionClientHandler.java:213)
        at com/sun/jersey/client/urlconnection/URLConnectionClientHandler.handle(URLConnectionClientHandler.java:149)
    It looks like WebLogic is using its default HTTP handler; switching to the Sun HTTP handler via the startup JVM/Java option "-DUseSunHttpHandler=true" solves the OOM issue.
    It seems that instead of streaming the file content with a fixed-size byte array, the whole file is being buffered in memory during the upload (see the sketch of that streaming pattern at the end of this post).
    Is it possible to solve this OOM by changing a setting of the WebLogic HTTP handler, without switching to the Sun HTTP handler, since many other applications are deployed on this WebLogic instance?
    We are concerned about whether there will be any impact on performance or any other issues.
    Please advise; your response is highly appreciated.
    Thanks!
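    For illustration only, this is the fixed-size-buffer streaming pattern referred to above, in plain Java; it is not a WebLogic or Jersey setting, just the behaviour the handler would need so that memory use stays bounded regardless of file size:

        import java.io.IOException;
        import java.io.InputStream;
        import java.io.OutputStream;

        public final class StreamCopy {
            // Copies the upload in fixed-size chunks instead of buffering the whole file.
            public static void copy(InputStream in, OutputStream out) throws IOException {
                byte[] buffer = new byte[8192]; // size is arbitrary; the point is that it is bounded
                int read;
                while ((read = in.read(buffer)) != -1) {
                    out.write(buffer, 0, read);
                }
                out.flush();
            }
        }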

    Hi,
    If you have a backup, restore the file below and then try to start WebLogic:
    \Oracle\Middleware\user_projects\domains\<domain_name>\config\config.lok
    Thanks,
    Sharmela

  • FTP Sender Adapter with EOIO for Large files

    Hi Experts,
    I am Using XI 3.0 version,
    My scenario is File->XI->File. I need to pick up the files from an FTP server; there are around 50 files, each about 10 MB in size.
    I have to pick the files from the FTP folder in the same order as they were put into the folder, i.e. FIFO.
    So in the sender FTP communication channel I am specifying:
    QoS = EOIO
    queue name = ACCOUNT
    Will the files be picked up in FIFO order into the queue XBTO_ACCOUNT?
    So my questions are:
    What is the procedure for specifying the parameters so that the files are processed in FIFO order?
    What would be the best practice to achieve this from a performance point of view?
    Thanks
    Sai.

    Hi spantaleoni ,
    I want to process the files using the FTP protocol in FIFO order,
    i.e. the file placed first in the folder should be picked up first, the next one after that, and so on.
    So if I use FTP,
    then to process the files in sequence, i.e. FIFO,
    will the processing parameters
    QoS = EOIO
    Queue name = ACCOUNT
    process the files in FIFO order?
    And to process large files (10 MB in size), what would be the best polling interval in seconds?
    Thanks,
    Sai

  • CS6 very slow saving large files

    I have recently moved from PS CS5 to CS6 and have noticed what seems to be an increase in the amount of time it takes to save large files. At the moment I am working with a roughly 8GB .psb and to do a save takes about 20 minutes. For this reason I have had to disable the new autosave features, otherwise it just takes far too long.
    CS5 managed to save larger files more quickly and with less memory available to it. Looking at system resources while Photoshop is trying to save, it is not using its full allocation of RAM specified in the performance preferences and there is still space free on the primary scratch disc. The processor is barely being used and disc activity is minimal (Photoshop might be writing at 4 MB/s max, though often not at all, according to Windows).
    I am finding the new layer filtering system invaluable so would rather not go back to CS5. Is this a known issue, or is there something I can do to speed up the saving process?
    Thanks.

    Thanks for the quick replies.
    Noel: I did actually experiment with turning off 'maximize compatibility' and compression, and it had both good and bad effects. On the plus side it did reduce the save time somewhat, to somewhere a little over 10 minutes. However, it also had the effect of gobbling up ever more RAM while saving, leaving me with only a few hundred MB free during the save process. This is odd in itself, as it actually used more RAM than I had allocated in the preferences. The resulting file was also huge, almost 35 GB. Although total HD space isn't a problem, for backing up externally and sharing with others this would make things a real headache.
    Curt: I have the latest video driver and keep it as up to date as possible.
    Trevor: I am not saving to the same drive as the scratch discs, although my primary scratch disc does hold the OS as well (it's my only SSD). The secondary scratch disc is a normal mechanical drive, entirely separate from where the actual file is held. If my primary scratch disc were entirely filled during the save process I would be concerned that that was an issue, but it's not.
    Noel: I have 48 GB, with Photoshop allowed access to about 44 GB of that. FYI my CPUs are dual Xeon X5660s, and during the save process Photoshop isn't even using 1 full core, i.e. less than 4% CPU time.

  • Remote Desktop Connection Drops when opening a large file or Transferring a large file

    I am running a Dell R720 Windows 2008 R2 server. When I open a large PDF or transfer a large file to the server, the server drops the Remote Desktop connection. I do not see any errors and no events are reported. I can access the server via iDRAC 7 Enterprise and the server is still up and functioning properly; however, the Remote Desktop connection can only be restored after the server is rebooted. I have read the article below and do not see any conflicts.
    http://support.microsoft.com/kb/2477133/en-us
     That said, the issue happens when:
    1. opening a large PDF
    2. Using a UNC path to transfer a large file
    3. Using Hyper-V to import a .VHD (another large file)
    Any help is appreciated - Thanks in advance

    Hi,
    Thank you for posting in Windows Server Forum.
    Does this issue occur for a single user or for multiple users?
    Have you tried from another system? Does it face the same issue?
    From the description it seems like a network issue. Please check whether the network connection drops or whether it is running on low bandwidth; you need to make sure there is no loss from a bandwidth perspective. There are various other reasons which can drop the connection or hurt performance, as Remote Desktop depends on many different points.
    As a test you can disable TCP auto-tuning and check with the following command:
    netsh interface tcp set global autotuninglevel=disabled
    To re-enable it, use the command below:
    netsh interface tcp set global autotuninglevel=normal
    When you connect via Remote Desktop to the remote server, please set the connection speed appropriately to optimize performance; this might resolve your case of dropped connections.
    More information.
    Announcing the Remote Desktop Protocol Performance Improvements in Windows Server 2008 R2 and Windows 7 white paper
    http://blogs.msdn.com/b/rds/archive/2010/02/05/announcing-the-remote-desktop-protocol-performance-improvements-in-windows-server-2008-r2-and-windows-7-white-paper.aspx
    Hope it helps!
    Thanks.
    Dharmesh Solanki
    TechNet Community Support

  • Why does the Reduced Size PDF command create larger files?

    I was a happy user of Adobe Acrobat version 7 until Adobe ceased support for that version and I was unable to print and do other activities after my last Mac OS 10.6.8 update, so I was forced to upgrade to Adobe Acrobat X, version 10.0.0. I had hoped to be able to produce Acrobat 3D files, but found out later that Adobe does not do that anymore and I would have to buy it as a plugin from Tetra 4D for another $399.00, oh well. I then wanted to reduce the file size of some larger PDFs I had edited. It took a while to locate the command, now in File > Save > Reduced Size PDF, and once I ran the command on my file it actually increased its size!! Why did this happen? Should I continue to use my disabled, Adobe-unsupported version 7 to get the superior performance I once had? This issue was mentioned in an earlier thread that started out asking where to locate this command, but the resulting larger-file issue remained unanswered. Currently unhappy with my purchase. Will Adobe offer a free upgrade to fix this?

    I agree with Peter. Use Pages '09, but keep a document migration strategy present. When you create a Pages document in Pages '09, export it to Word .doc too. Export any current content in Pages v5.2.2, and then stop using it. That means no double-clicked documents.
    Pages v5 is not an evolution of Pages '09: it is an incomplete rewrite, and uses a different internal document architecture than Pages '09 did. This internal architecture is to blame for the export translation bloat into Word's different document architecture. It does not matter whether you type a single character or multiple pages; you will get about a 500 KB .docx file.
    Opening, and resaving this monster .docx file in MS Word in Office for Mac 2011, LibreOffice, or TextEdit will result in a very small .docx footprint.

  • File Adapter Not Reading Large Files

    Dear Experts,
    Environment:-
    OS:-Linux
    Jdeveloper:- 11.1.1.6
    SOA:-11.1.1.6
    Weblogic:-10.3.6
    JDK:-SUN
    Allocated RAM:-16GB
    We are currently in the UAT phase and we are facing an issue reading large files. Below are the design details of the service:
    FileAdapter(Read)-->Bpel(Business login,Using FlowN)-->FileAdapter(Write CSV),JMS Adapter(AQ JMS topics)
    In this case we face the issue at read time itself. The file adapter reads the XML file, but the receive activity receives the input data as
    xmlDocKey:1C135990067411E3BFA6B5087B629F9DI
    I really couldn't understand the error. I even tried reading in Opaque format and still ended up with the same error.
    To make sure, I created a mediator and tried reading the file; in that case I was able to read files up to 15 MB without any error. I also tried "read as Attachment" in the BPEL component and was able to read attachments up to a 7 MB file, but this hurts performance.
    I request someone to please let me know why the file adapter is giving an xmlDocKey rather than the XML content to the input variable.
    Regards,
    Tarak

    Can you check your BPEL Properties in EM?
    Go to Soa-infra > right Click > SOA Administration > BPEL properties
    increase the Dispatcher Engine Thread = 10, Dispatcher Invoke Threads = 60 and Dispatcher Engine Threads to 90
    Click on "More BPEL Configuration Properties"
    Increase the  DispatcherMaxRequestDepth from 600 to 1000.
    Bounce the server and see if it works.
    If this fails, try to find the threshold by increasing the file size until it fails again.

  • Process large file using BPEL

    My project has a requirement to process a large file (10 MB) all at once. In the project, the file adapter reads the file, then calls 5 other BPEL processes to do 10 different validations before delivering to the Oracle database. I can't use the debatching feature of the adapter because of the header and detail record validation requirement. I did some performance tuning (e.g. audit level to minimum, logging level to error, JVM size to 2 GB etc.) as per the performance tuning specified in the Oracle BPEL user guide. We are using a 4-CPU, 4 GB RAM IBM AIX 5L server. I observed that the Receive activity at the beginning of each process is taking a lot of time, while the other transient processing is as expected.
    Following are the statistics for the receive activity per BPEL process:
    500KB: 40 Sec
    3MB: 1 Hour
    Because we have 5 BPEL processes, a lot of time is wasted in the receive activity.
    I didn't try 10 MB so far, because of the poor performance figures for the 3 MB file.
    Does anyone have any idea how to improve the performance of the initial receive activity of a BPEL process?
    Thanks
    -Simanchal

    I believe the limit in SOA Suite is 7 MB if you want to use the full payload and perform some kind of orchestration. Otherwise you need to do some kind of debatching, which you stated will not work.
    SOA Suite is not really designed for your kind of use case, as it needs to process this file in memory; when any transformation occurs it can increase the message size by 3 to 10 times. If you are writing to a database, why can't you read the rows one by one?
    If you want to perform this kind of action, have a look at ODI (Oracle Data Integrator). I also believe that OSB (AquaLogic) can handle files up to 200 MB, so this can be an option as well, but it may require debatching.
    cheers
    James

  • Large file doesn't work in a message-splitting scenario

    Hello,
    I'm trying to measure XI performance in a message-splitting scenario by having the file adapter pull the XML file below and send it to XI to perform a simple "message split without BPM" and generate an XML file for each record in another folder. I tried 100, 500 and 1000 records and they all turned out fine. Once I tried the 2000-record case, the status of that message was WAITING in RWB; when I checked sxmb_moni, the message was in "recorded for outbound messaging" and I couldn't find any error.
    Is there some kind of threshold that can be adjusted for large files? Thank you.
    <?xml version="1.0" encoding="UTF-8"?>
    <ns0:MultiXML xmlns:ns0="urn:himax:b2bi:poc:multiMapTest">
       <Record>
          <ID>1</ID>
          <FirstName>1</FirstName>
          <LastName>1</LastName>
       </Record>
    </ns0:MultiXML>

    Hello,
    The queue ID is "XBTO1__0000". I double-clicked it, which took me to the "qRFC Monitor (Inbound Queue)" screen and showed an XBTO1__0000 queue with 3 entries.
    I double-clicked on that, and it took me to another screen that showed a queue with the same name and "Running" status.
    I double-clicked on that, and I saw three entries. For the latest entry, its status text says "Transaction recorded".
    I double-clicked on that, and it took me to Function Builder: Display SXMS_ASYNC_EXEC.
    Where do I go from here?

  • Publishing large files.

    Ok, my team has run into a significant problem with CS3 and we need some help fast. We have a rather large file (a 28 MB SWF) that we have created in CS3, and whatever I try I cannot find a solution for this to work on our LMS or even standalone.
    To be specific: I have tried using the 100% preloader used for Adobe Connect (instead of the standard 60% preloader), with no apparent change to performance.
    I have tried exporting to Flash 8, which causes an error, tries to change the frame rate to 5 fps, and creates a mass of external files, which for this project will not work.
    I even tried importing the SWF into a Flash wrapper for a custom preloader, but that throws an import error as well.
    Daisy chaining is not an option for this course.
    I am consistently getting slower and slower performance with each slide, and the audio skips more and more of the clip as the course goes on.
    Any suggestions would be greatly appreciated.
    M.

    If the project is getting slower and slower on your machine, is it going to run at all on your end users' PCs? Bottom line, you seem to have found the answer to "how big is too big?", and the only solution I know of is to redesign the project so it stays within the capabilities of your users' PCs.
    What is causing the size issue? Is the project "audio heavy"? Or is it a matter of too many slides? Or perhaps the size of the project slides (in pixels) is creating too large an image issue with the backgrounds? It seems your only option might be to take a cold hard look at the design, identify the cause of the problem, and then take corrective action at that level. Not what you wanted to hear, I'm sure, but I'm trying to be honest with you; let's face it, a 28 MB SWF is a very large file.
    As an aside (only)... I noted there seems to be confusion with semantics in your post. The product is Captivate 3 but you make reference to CS3; is this just a slip of the tongue (so to speak), or are you confusing the latest version of Flash (CS3) with the latest version of Captivate (3)? I mention it because you specified Flash 8 as your Flash version... and I'm confused by what seems to be your confusion. In any case, frame rates are changed once the project is within Flash, but you state you can't get it into Flash, so again, I'm confused by the description offered.

  • Ok to store large files on desktop?

    Are there consequences (such as running slowly) to storing large files or numerous files on the desktop?
    Sometimes I use the desktop to store photos until I complete a project. There are other files on the desktop that I could remove if they cause the computer to slow down. If not, I'll keep them there because it's easy and convenient.
    I have 4 GB of memory.

    Hi Junk,
    I can't think of any consequences to storing your large files on your desktop. After all, from your system's point of view, your desktop is just another folder. However, there is some system consequence to storing lots of files there, as it will decrease performance. But with 4GB of memory, I'm not sure how much of a difference you might see. And there's a big gap between "I'm storing 15 photos on my desktop" and "I can no longer see my hard disk icon in this mess." If the idea of even potential system slowdowns bothers you, create a work folder and leave it on your desktop to drop all of those photos in. If you're not getting too insane with the amount, though, I'm sure you'll be fine.
    Hope that helps!
    —Hazy

  • How can I get to read a large file and not break it up into bits

    Hi,
    How do I read a large file without getting the file cut into bits, where each bit has its own beginning and ending?
    Like:
    1.aaa
    2.aaa
    3.aaa
    4....
    10.bbb
    11.bbb
    12.bbb
    13.bbb
    Say the file was read at line 11 and I wanted to read at line 3 and then read again at line 10.
    How do I specify the byte position in the large file, since the read function has the signature read(byte[] b, index, bytes to read)
    and that index only refers to the array of bytes itself?
    Thanks
    San Htat

    tjacobs01 wrote:
    Peter__Lawrey wrote: Try RandomAccessFile.
    Not only do I hate RandomAccessFiles because of their inefficiency and limited use in today's computing world, I would also like to hate on the name 'RandomAccessFile': almost always, there's nothing 'random' about the access. I tend to think of the tens of thousands of databases users were found to have created on local drives in one previous employer's audit. Where's the company's mission-critical software? It's in some random Access file. Couldn't someone have come up with a better name, like NonlinearAccessFile? I guess the same goes for RAM too...
    In reply: limited use in which computing world, the one dominated by small devices with SSDs, or the one dominated by large database servers and b-trees? And "non-linear" would imply access times other than O(N) but typically not constant, whereas RAM is nominally O(1), except that it is highly optimised for consecutive access, as are spinning-disk files, except RAM is fast in either direction.
    See [one of these things is not like the other|http://www.tbray.org/ongoing/When/200x/2008/11/20/2008-Disk-Performance#p-11] (silicon disks are much better at random access than rust disks) and [Machine architecture|http://video.google.com/videoplay?docid=-4714369049736584770] at about 1:40 (RAM is much worse at random access than sequential).
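    To answer the original question concretely, here is a minimal RandomAccessFile sketch for reading from an arbitrary byte offset; the path, offset and length are whatever your own index of the file gives you:

        import java.io.IOException;
        import java.io.RandomAccessFile;

        public class OffsetReader {
            // Reads 'length' bytes starting at absolute byte 'offset' of the file.
            public static byte[] readAt(String path, long offset, int length) throws IOException {
                try (RandomAccessFile raf = new RandomAccessFile(path, "r")) {
                    byte[] data = new byte[length];
                    raf.seek(offset);    // jump to the absolute position in the file
                    raf.readFully(data); // fill the array from that position
                    return data;
                }
            }
        }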
