Most efficient way to write 4 bytes at the start of any file.

Quick question: I want to write 4 bytes at the start of a file without overwriting the bytes currently in the file, i.e. shift the existing bytes along by 4... Is my only option writing the 4 bytes into a new file and then appending the rest of the original file after them? RandomAccessFile is so close, but it overwrites :(.
Thanks Mel
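
There is no way to insert bytes at the front of an existing file in place; file systems only support overwriting or appending, so some form of rewrite is unavoidable. The usual approach is exactly what you describe: write the 4 bytes to a new file, copy the old contents after them, then swap the files over. A minimal sketch (class and method names are mine, purely illustrative):

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.channels.FileChannel;

public class Prepender {
  // Prepends header to file by writing a temp file and renaming it over the original.
  static void prepend(File file, byte[] header) throws IOException {
    File tmp = File.createTempFile("prepend", ".tmp", file.getParentFile());
    FileInputStream in = new FileInputStream(file);
    FileOutputStream out = new FileOutputStream(tmp);
    try {
      out.write(header); // the new leading bytes
      FileChannel src = in.getChannel();
      long pos = 0;
      long size = src.size();
      while (pos < size) { // then the original contents, chunk by chunk
        pos += src.transferTo(pos, size - pos, out.getChannel());
      }
    } finally {
      in.close();
      out.close();
    }
    // swap: remove the original, then move the temp file into its place
    if (!file.delete() || !tmp.renameTo(file)) {
      throw new IOException("could not replace " + file);
    }
  }
}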

I revised the code to use a maximum of 8 MB buffers for both the NIO and stdio copies...
Looks like NIO is a pretty clear winner... but your mileage may vary, a lot... you'd need to run this hundreds of times, and normalize, to get any "real" metrics... and I for one couldn't be bothered... it's one of those things that's "fast enough"... 7 seconds to copy a 250 MB file to/from the same physical disk is pretty-effin-awesome really, isn't it? ... looks like Vista must be one of those OSes (mentioned in the API doco) which can channel from a-to-b without pulling the bytes up through the JVM.
... and BTW, it took the program which produced this file 11,416 millis to write it (from an int array, i.e. all in memory).
revised code
package forums;

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.channels.FileChannel;

class NioBenchmark1
{
  private static final double NANOS = Math.pow(10, 9);
  private static final int BUFF_SIZE = 8 * 1024 * 1024; // 8 MB

  interface Copier {
    public void copy(File source, File dest) throws IOException;
  }

  static class NioCopier implements Copier {
    public void copy(File source, File dest) throws IOException {
      FileChannel in = null;
      FileChannel out = null;
      try {
        in = (new FileInputStream(source)).getChannel();
        out = (new FileOutputStream(dest)).getChannel();
        final int buff_size = (int) Math.min(source.length(), BUFF_SIZE);
        long pos = 0;
        long n;
        // transferTo may transfer fewer bytes than requested, so track our own
        // position and loop until it reports nothing left to transfer.
        while ( (n = in.transferTo(pos, buff_size, out)) > 0 ) {
          pos += n;
        }
      } finally {
        if (in != null) in.close();
        if (out != null) out.close();
      }
    }
  }

  static class NioCopier2 implements Copier {
    public void copy(File source, File dest) throws IOException {
      if ( !dest.exists() ) {
        dest.createNewFile();
      }
      FileChannel in = null;
      FileChannel out = null;
      try {
        in = new FileInputStream(source).getChannel();
        out = new FileOutputStream(dest).getChannel();
        final int buff_size = (int) Math.min(in.size(), BUFF_SIZE);
        long pos = 0;
        long n;
        // note: transferFrom's second argument is the write position in the
        // *destination* channel; passing 0 every time would just keep
        // overwriting the first chunk.
        while ( (n = out.transferFrom(in, pos, buff_size)) > 0 ) {
          pos += n;
        }
      } finally {
        if (in != null) in.close();
        if (out != null) out.close();
      }
    }
  }

  static class IoCopier implements Copier {
    private final byte[] buffer = new byte[BUFF_SIZE];
    public void copy(File source, File dest) throws IOException {
      InputStream in = null;
      FileOutputStream out = null;
      try {
        in = new FileInputStream(source);
        out = new FileOutputStream(dest);
        int count;
        while ( (count = in.read(buffer)) != -1 ) {
          out.write(buffer, 0, count);
        }
      } finally {
        if (in != null) in.close();
        if (out != null) out.close();
      }
    }
  }

  public static void main(String[] args) {
    final String filename = "SieveOfEratosthenesTest.txt";
    //final String filename = "PrimeTester_SieveOfPrometheuzz.txt";
    final File src = new File(filename);
    System.out.println("copying " + filename + " " + src.length() + " bytes");
    final File dest = new File(filename + ".bak");
    try {
      time(new IoCopier(), src, dest);
      time(new NioCopier(), src, dest);
      time(new NioCopier2(), src, dest);
    } catch (Exception e) {
      e.printStackTrace();
    }
  }

  private static void time(Copier copier, File src, File dest) throws IOException {
    System.gc();
    try { Thread.sleep(1); } catch (InterruptedException e) {}
    dest.delete();
    long start = System.nanoTime();
    copier.copy(src, dest);
    long stop = System.nanoTime();
    System.out.println(copier.getClass().getName() + " took " + ((stop - start) / NANOS) + " seconds");
  }
}
output
C:\Java\home\src\forums>"C:\Program Files\Java\jdk1.6.0_12\bin\java.exe" -Xms512m -Xmx1536m -enableassertions -cp C:\Java\home\classes forums.NioBenchmark1
copying SieveOfEratosthenesTest.txt 259678795 bytes
forums.NioBenchmark1$IoCopier took 14.333866455 seconds
forums.NioBenchmark1$NioCopier took 7.712665715 seconds
forums.NioBenchmark1$NioCopier2 took 6.206867074 seconds
Press any key to continue . . .
Having said that... NIO has lost a fair bit of its charm... testing transferTo's return value and maintaining your own position in the file is "cumbersome" (IMHO)... I'm not even certain that mine is completely correct (should it be pos += n or pos += n + 1?)... hmmm..
Cheers. Keiths.

Similar Messages

  • What's the most efficient way to serve a file from a servlet?

    I have a servlet that does different things depending on the request. Sometimes it dynamically generates content, and sometimes all it does is send a file out, with no alteration.
    What is the most efficient way to just send a file?
    One option:
    OutputStream os = response.getOutputStream();
    InputStream is = new FileInputStream(...)
    (send all the bytes from is to os, the regular way using a buffer)
    Another option is to say:
    RequestDispatcher rd = request.getRequestDispatcher(fileName);
    rd.forward(request, response);
    Any other options? What's the preferred way of doing this?
    I know the rule of "don't optimize too early" but this is a situation where we need to get the maximum amount of files served with the hardware we have, and it's going to be a lot of static files, so efficiency is important.
    Thanks

    Ok, that's what I thought. It would be nice if there were a "response.sendStream(InputStream input)" method in the ServletResponse class. Even nicer would be a sendFile or sendChannel or something. This is probably a common usage, and it's a place where the container has many opportunities for optimization. For example, it could call the operating system's sendfile kernel call so the entire transfer would be done directly from the disk controller to the Ethernet card (on systems that support that).
    For now I'll just do my own buffered copy.
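
    For reference, that buffered copy is only a few lines; a sketch (content type and length headers are deliberately omitted, and the class name is mine):
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    public class FileSender {
      // Streams a file to the servlet's OutputStream in 8 KB chunks.
      static void sendFile(String path, OutputStream os) throws IOException {
        InputStream is = new FileInputStream(path);
        try {
          byte[] buffer = new byte[8 * 1024];
          int count;
          while ((count = is.read(buffer)) != -1) {
            os.write(buffer, 0, count);
          }
        } finally {
          is.close();
        }
      }
    }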

  • What is the most efficient way of passing large amounts of data through several subVIs?

    I am acquiring data at a rate of once every 30 ms. This data is sorted into clusters, with relevant information being grouped together. These clusters are then added to a queue. I have a cluster of queue references to keep track of all the queues. I pass this cluster around to the various subVIs where I dequeue the data. Is this the most efficient way of moving the data around? I could also use "Obtain Queue" and the queue name to create the reference whenever I need it.
    Or would it be more efficient to create one large cluster which I pass around? Then I can use unbundle by index to pick off the values I need. This large cluster can have all the values individually or it could be composed of the previously mentioned clusters (i.e. a large cluster of clusters).

    > I am acquiring data at a rate of once every 30mS. This data is sorted
    > into clusters with relevant information being grouped together. These
    > clusters are then added to a queue. I have a cluster of queue
    > references to keep track of all the queues. I pass this cluster
    > around to the various sub VIs where I dequeue the data. Is this the
    > most efficient way of moving the data around? I could also use
    > "Obtain Queue" and the queue name to create the reference whenever I
    > need it.
    > Or would it be more efficient to create one large cluster which I pass
    > around? Then I can use unbundle by index to pick off the values I
    > need. This large cluster can have all the values individually or it
    > could be composed of the previously mentioned clusters (i.e. a large
    > cluster of clusters).
    It sounds pretty good the way you have it. In general, you want to sort
    these into groups that make sense to you. Then if there is a
    performance problem, you can arrange them so that it is a bit better for
    the computer, but let's face it, our performance counts too. Anyway,
    this generally means a smallish number of groups with a reasonable
    number of references or objects in them. If you need to group them into
    one to pass somewhere, bundle the clusters together and unbundle them on
    the other side to minimize the connectors needed. Since the references
    are four bytes, you don't need to worry about the performance of moving
    these around anyway.
    Greg McKaskle

  • What is the most efficient way to turn an array of 16 bit unsigned integers into an ASCII string such that...?

    What is the most efficient way to turn a one dimensional array of 16 bit unsigned integers into an ASCII string such that the low byte of the integer is first, then the high byte, then two bytes of hex "00" (that is to say, two null characters in a row)?
    My method seems somewhat ad hoc. I take the number, split it, then interleave it with 2 arrays of 4095 bytes. Easy enough, but it depends on all of these files being exactly 16380 bytes, which theoretically they should be.
    The size of the array is known. However, if it were not, what would be the best method?
    (And yes, I am trying to read in a file format from another program)

    My method:
    Attachments:
    word_array_to_weird_string.vi 18 KB
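
    The byte layout itself is easy to pin down outside LabVIEW; for illustration only, here is the same encoding as a Java sketch, which works for any input length rather than assuming a fixed 16380-byte file:
    import java.io.ByteArrayOutputStream;
    public class WordsToBytes {
      // Each 16-bit value becomes four bytes: low byte, high byte, 0x00, 0x00.
      static byte[] encode(int[] words) {
        ByteArrayOutputStream out = new ByteArrayOutputStream(words.length * 4);
        for (int w : words) {
          out.write(w & 0xFF);         // low byte first
          out.write((w >>> 8) & 0xFF); // then the high byte
          out.write(0);                // then two null characters in a row
          out.write(0);
        }
        return out.toByteArray();
      }
    }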

  • What is the best, most efficient way to read a .xls File and create a pipe-delimited .csv File?

    What is the best and most efficient way to read a .xls File and create a pipe-delimited .csv File?
    Thanks in advance for your review and am hopeful for a reply.
    ITBobbyP85

    You should have no trouble doing this in SSIS. Simply add a data flow with connection managers to an existing .xls file (Excel connection manager) and a new .csv file (flat file). Add a source for the xls and a destination for the csv, and set the destination csv parameter "delay validation" to true. Use an expression to define the name of the new .csv file.
    In the flat file connection manager, set the column delimiter to the pipe character.

  • The most efficient way to search a large String

    Hi All,
    2 Quick Questions
    QUESTION 1:
    I have about 50 String keywords that I would like to use to search a big String object (300-3000 characters).
    Is the most efficient way to search it for my keywords something like this?
    if (myBigString.indexOf("string1") != -1 || myBigString.indexOf("string2") != -1 || ... and so on for 50 strings)
    System.out.println("it was found");
    QUESTION 2:
    Can someone help me out with a regular expression search for phone numbers in the format NNN-NNN-NNNN?
    I would like it to return all instances of that pattern found on the page.
    I have done regular expressions in JavaScript and VBScript, but I have never done regular expressions in Java.
    Thanks

    Answer 2:
    If you have the option of using Java 1.4, have a look at the new regular expressions library in the java.util.regex package. There have been articles published on it, both at JavaWorld and IBM's developerWorks.
    If you can't use Java 1.4, have a look at the jakarta regular expression projects, of which I think there are two (ORO and Perl-like, off the top of my head)
    http://jakarta.apache.org/
    Answer 1:
    If you have n search terms, and are searching through a string of length l (the haystack, as in looking for a needle in a haystack), then searching for each term in turn will take time O(n*l). In particular, it will take longer the more terms you add (in a linear fashion, assuming the haystack stays the same length)
    If this is sufficient, then do it! The simplest solution is (almost) always the easiest to maintain.
    An alternative is to create a finite state machine that defines the search terms (Or multiple parallel finite state machines would probably be easier). You can then loop over the haystack string a single time to find every search term at once. Such an algorithm will take O(n*k) time to construct the finite state information (given an average search term length of k), and then O(l) for the search. For a large number of search terms, or a very large search string, this method will be faster than the naive method.
    One example of a state-search for strings is the Boyer-Moore algorithm.
    http://www-igm.univ-mlv.fr/~lecroq/string/tunedbm.html
    Regards, and have fun,
    -Troy
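
    To make Answer 2 concrete, here is a minimal java.util.regex sketch (Java 1.4+) that finds every NNN-NNN-NNNN occurrence in a page:
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;
    public class PhoneFinder {
      public static void main(String[] args) {
        String page = "Call 555-123-4567 or 800-555-0199 for details.";
        // \b word boundaries stop us from matching inside longer digit runs
        Pattern phone = Pattern.compile("\\b\\d{3}-\\d{3}-\\d{4}\\b");
        Matcher m = phone.matcher(page);
        while (m.find()) {
          System.out.println(m.group()); // prints each match in turn
        }
      }
    }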

  • I am giving my old MacBook Air to my granddaughter.  What is the most efficient way to erase all the data on it?

    I am giving my old MacBook Air to my granddaughter.  What is the most efficient way to erase the data?

    You have two options.....
    One is to do a clean reinstall of your OS - if you still have the USB installer that came with your Macbook Air...
    The second option is to create a new user (your granddaughter's name).....Deauthorize your Macbook Air from your iTunes and the App Store.....
    Restart your Macbook after you've created your granddaughter's user name, login under your granddaughter's username and delete your username.
    Search your Macbook for your old files and delete them.....
    Good luck...

  • What is the most efficient way to compare two Lists?

    List A{itemId,itemName} [1,xyz] [9,iyk] [4,iuo] .......
    List B{itemId,item price} [2,999] [9,888] [1, 444].......
    Assume A will be a much larger list than B
    I am trying to find all the items with the same itemId. What would be the most efficient way to do that?
    Thanks!

    Tinkerbell. wrote:
    BigDaddyLoveHandles wrote:
    You wrote: "Can we assume that an itemId only occurs once in each list?" You're the one making claims and assumptions, not me.
    No, in #4 I asked the OP to verify an assumption.
    An assumption that couldn't possibly be true. Why are you wasting our time?
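
    For what it's worth, assuming itemIds are unique within each list, the standard approach is to index the smaller list in a HashSet and scan the larger one, which is O(|A| + |B|) on average instead of O(|A| * |B|) for a nested-loop compare. A sketch with a hypothetical Item type standing in for the OP's list elements:
    import java.util.ArrayList;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;
    public class ListJoin {
      static class Item {
        final int itemId;
        Item(int itemId) { this.itemId = itemId; }
      }
      // Returns the items of the larger list A whose itemId also appears in B.
      static List<Item> itemsWithSameId(List<Item> a, List<Item> b) {
        Set<Integer> idsInB = new HashSet<Integer>();
        for (Item item : b) idsInB.add(Integer.valueOf(item.itemId));
        List<Item> matches = new ArrayList<Item>();
        for (Item item : a) {
          if (idsInB.contains(Integer.valueOf(item.itemId))) matches.add(item);
        }
        return matches;
      }
    }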

  • What is the most efficient way to have full access to the front panel on RT Labview?

    I have an RT machine that needs to do its job and also serve its front panel to an external machine over the network. What is the most efficient way to do that, using as little of the RT machine's time as possible while providing full functionality of the RT front panel?
    So far I have been doing it directly from LabVIEW - running the VI on the remote (RT) target and displaying the front panel in local LabVIEW (Windows). I know I can also do it through the web (not very happy with that, though).
    LV2009 SP1.
    Thanks

    Running a compiled executable on the RT target, rather than running it within the development environment, is probably slightly more efficient but limits you to the web interface.  If you're running within the LabVIEW environment, I doubt there's a noticeable difference in efficiency from the RT perspective between the web server and the LabVIEW front panel, although that's mostly a guess (I would expect the RT system to send identical data in each case, once the front panel is loaded).  Those are your only options in modern LabVIEW versions.  In LabVIEW 7.1 you could build an executable that acted as the front panel for an RT system, but that feature does not exist in recent versions.  However, a quick search turned up this document with code to approximately duplicate that behavior, perhaps it will work for you?

  • What is the most efficient way to convert a static site to a responsive site using Dreamweaver?

    I need to convert an old site made in Dreamweaver to be responsive to any monitor size. What is the most efficient way to do this?

    Depending on what you have to work with and how it was coded, it may or may not be doable. Suffice it to say, there are no magic buttons that will do this for you. Also consider that mobile and tablet users interact differently with their web devices, so your navigation and forms must be finger-friendly, and your images and content must make mobile users happy without killing their data plans. There's a lot of planning that goes into making a good responsive website.
    Nancy O.

  • What is the most efficient way to post several stories to multiple html pages

    What is the most efficient way to post five stories to multiple html pages?
    Currently they are all html files saved in DW with some divs sharing content and other divs are unique.
    I've experimented with saving stories as library items and dropping into other html but wondered if there is a more efficient or dynamic process.

    Server-Side Includes.
    http://forums.adobe.com/message/2112460#2112460
    Nancy O.
    Alt-Web Design & Publishing
    Web | Graphics | Print | Media  Specialists
    http://alt-web.com/
    http://twitter.com/altweb

  • Moving content from iMovie to iMovie - what's the most efficient way?

    If I edit and create movies in iMovie08 on one iMac and I want to transfer the content to iMovie08 another iMac 200 miles away , what's the most efficient way? I prefer not to burn DVDs, because I want to do further work on the movies on the second iMac's iMovie (so I can share them with iTunes and sync them with Apple TV).

    >PDPageAddCosContents(destPage, PDPageGetCosContents(srcPage));
    Does this method (PDPageGetCosContent) exist? It would be easy enough
    to create, but I don't see it in the document.
    More seriously, I have a vague memory that it is a Really Bad Thing to
    share the same Contents objects between multiple pages. Maybe
    something to do with page deletion, can't remember.
    >PDPageAddCosResource(destPage, PDPageGetCosResources(srcPage));
    These two methods are not symmetric, and PDPageAddCosResource doesn't
    work that way, sadly...
    Aandi Inston

  • Just got girlfriend a new iPad2. Her iMac is a PowerPC G5 (Tiger version 10.4.11) with 512 mb RAM. What's the simplest, most efficient way to get her iPad2 up and running and synced to her Mac?

    Just got girlfriend a new iPad2. Her iMac is a PowerPC G5 (Tiger version 10.4.11) with 512 mb RAM. What's the simplest, most efficient way to get her iPad2 up and running and synced to her Mac?

    Most of the Apple Store sales people and some of the Genius Bar people are only knowledgeable about Apple's more recent offerings. They are not very knowledgeable, I found, about older PowerPC-based Apple computers, I'm afraid.
    Here's the real scoop.
    Your girlfriend's G5 can only install up to OS X 10.5 Leopard. This is the last compatible OS X version for PowerPC users.
    OS X 10.6 Snow Leopard and OS X 10.7 Lion are for newer Intel CPU Apple computers.
    Early iMac G5's can only have up to 2 GBs of RAM.
    Later iMac G5's (2005-2006) could take up to 2.5 GBs of RAM
    2 GBs of RAM will run OS X 10.5 Leopard just fine.
    The very latest iTunes (10.5.2) can be installed and runs on both PowerPC and Intel CPU Macs.
    However, there are certain new iTunes features that won't work without an Intel Mac.
    One of the new iOS and iTunes features is syncing wirelessly over Wi-Fi.
    This will not work unless you have an iDevice running iOS 5 and an Intel Mac running 10.6 Snow Leopard or better.
    Although I was disappointed that I won't be able to do this with my G4 Mac, it's not a big problem for me.
    So, your girlfriend's computer should be fine for what she intends to use it for.
    The Apple people either just plain didn't know, or were trying to get you to think about buying a new Mac.
    At least as of now, that's not truly necessary.
    If Apple, at some later point, drops support for PowerPC users running 10.5, then would be the time to consider a new or "newer" Intel CPU Mac.
    My own planned upgrade is to seek out a "newer" last-version G5 as my "new" Mac.
    I can't afford, right now, to replace all of my core PowerPC software with Intel versions, so I need to stick with the older PowerPC Macs for the time being. The last of the G5's is what I seek.

  • Most Efficient Way to Populate My Column?

    I have several very large tables, some of them are partitioned tables.
    I want to populate every row of one column in each of these tables with the same value.
    1.] What's the most efficient way to do this given that I cannot make the table unavailable to the users during this operation?
    I mean, if I were to simply do:
    update <table> set <column>=<value>;
    then I think I'll lock every row for writing by others until the commit. I figured there might be another way that makes better sense in my case.
    2.] Are there any optimizer hints I might be able to take advantage of here? I don't use hints much but with such a long running operation I'll take any help I can get.
    Thank you

    1. Maybe a better solution exists.
    Since you do not want to lock the table...
    Save the ROWID's of all the rows in that table in a temporary table.
    Write a routine which will loop through this temporary table and use the ROWID to update the main table and issue commit at regular intervals.
    However, this does not take into account the rows that would be added to main table after the temporary table has been created.
    2. Not that I am aware of.
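
    A sketch of suggestion 1 via JDBC, committing every batchSize rows so readers aren't blocked for the whole run; the temporary table, target table, and column names here are all hypothetical:
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;
    public class ChunkedUpdate {
      static void populate(Connection conn, String value, int batchSize) throws SQLException {
        conn.setAutoCommit(false);
        Statement select = conn.createStatement();
        ResultSet rowIds = select.executeQuery("SELECT row_id FROM tmp_rowids");
        PreparedStatement update = conn.prepareStatement(
            "UPDATE big_table SET my_column = ? WHERE ROWID = ?");
        int pending = 0;
        while (rowIds.next()) {
          update.setString(1, value);
          update.setString(2, rowIds.getString(1)); // Oracle converts the string bind to a ROWID
          update.executeUpdate();
          if (++pending % batchSize == 0) {
            conn.commit(); // release row locks at regular intervals
          }
        }
        conn.commit();
        rowIds.close();
        select.close();
        update.close();
      }
    }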

  • Most efficient way to delete "removed" photos from hard disk?

    Hello everyone! Glad to have this great community to come to for help. I searched for this question but came up with no hits. If it's already been discussed, I apologize and would love to be directed to the link.
    My wife and I have been using LR for a long time. We're currently on version 4. Unfortunately, she's not as tech-savvy or meticulous as I am, and she has been unknowingly "Removing" photos from the LR catalogues when she really meant to delete them from the hard disk. That means we have hundreds of unwanted raw photo files floating around on our computer and no way to pick them out from the ones we want! As a very organized and space-conscious person, I can't stand the thought. So my question is: what is the most efficient way to permanently delete these unwanted photos from the hard disk?
    I did find one suggestion that said to synchronize the parent folder with their respective catalogues, select all the photos in "Previous Import," and delete those, since they will be all of the photos that were previously removed from the catalogue.
    This is a great suggestion, but it probably wouldn't work for all of my catalogues since my file structure is organized by date (the default setting for LR). So, two catalogues will share the same "parent folder" in the sense that they both have photos from May 2013, but if I synchronize May 2013 with one, then it will get all the duds PLUS the photos that belong in the other catalogue.
    Does anyone have any suggestions? I know there's probably not an easy fix, and I'm willing to put in some time. I just want to know if there is a solution and make sure I'm working as efficiently as possible.
    Thank you!
    Kenneth

    I have to agree with the comment about multiple catalogs referring to images that are mixed in together... and the added difficulty that may have brought here.
    My suggestions (assuming you are prepared to combine the current catalogs into one)
    in each catalog, put a distinctive keyword onto all the images so that you can later discriminate these images as to which particular catalog they were formerly in (just in case this is useful information later)
    as John suggests, use File / "Import from Catalog" to bring all LR images together into one catalog.
    then in order to separate out the image files that ARE imported to LR from those which either never were imported or have since been removed, I would duplicate just the imported ones to an entirely separate and dedicated disk location. This may require the temporary use of an external drive, with enough space for everything.
    to do this, highlight all the images in the whole catalog, then use File / "Export as Catalog" selecting the option "include negatives". Provide a filename and location for the catalog inside your chosen new saving location. All the image files that are imported to the catalog will be selectively copied into this same location alongside the new catalog. The same relative arrangement of subfolders will be created there, for them all to live inside, as is seen currently. But image files that do not feature in LR currently, will be left behind by this operation.
    your new catalog is now functional, referring to the copied image files. Making sure you have a full backup first, you can start deleting image files from the original location, that you believe to be unwanted. You can do this safe in the knowledge that anything LR is actively relying on, has already been duplicated elsewhere. So you can be quite aggressive at this, only watching out for image files that are required for other purposes (than as master data for Lightroom) - e.g., the exported JPG files you may have made.
    IMO it is a good idea to practice a full separation of image files used in your LR image library, from all other image files. This separation means you know where it is safe to manage images freely using the OS, vs where (what I think of as the LR-managed storage area) you need to bear LR's requirements constantly in mind. Better for discrete backup, too.
    In due course, as required, the copied image files plus catalog can be moved bodily to another drive (for example, if they have been temporarily put on an external drive, and you want to store them on your main internal one again). This then just requires a single re-browsing of their parent folder's location, in order to correct LR's records inside this catalog, as to the image files' changed addresses.
    If you don't want to combine the catalogs into one, a similar set of operations as above, can be carried out for each separate catalog you have now. This will create a separate folder structure in each case, containing just those duplicated image files. Once this has been done for all catalogs, you can start to clean up the present image files location. IMO this is very much the laborious and inflexible option, so far as future management of the total body of images is concerned... though there may still be some overriding reason for working that way.
    RP
