Read large images fast?

Hello all,
I'm not sure if this is the appropriate place to post this, but here goes. I am working on an image viewer application, and so far it's going great. But I've noticed one problem when reading large images, typically over 1 megabyte in size: reading these images with ImageIO.read() takes anywhere from one to two seconds to finish. I've searched all over the web for faster methods of reading large images, but I haven't had any luck. Here is an example of the code I'm using to read an image.
try {
    BufferedImage b = ImageIO.read(someLargeImageFile); // this is slow for images over 1MB in size
} catch (Exception err) {
    err.printStackTrace(); // at minimum, don't swallow the exception silently
}

So my question is: does anyone have any tips or tricks I could use to read large images faster? I see programs like iPhoto reading images up to 5MB in size in under one second. I'm just wondering what I could do to read those kinds of images just as fast.
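To see where the time actually goes, a small timing harness can help separate raw disk I/O from the decode itself; for big-dimension files the decode usually dominates. This is only a sketch, not from the original post (the file path comes from the command line):

import java.awt.image.BufferedImage;
import java.io.File;
import java.io.FileInputStream;
import javax.imageio.ImageIO;

public class ReadTiming {
    public static void main(String[] args) throws Exception {
        File f = new File(args[0]);

        // Time raw disk I/O: just stream the bytes and throw them away.
        long t0 = System.nanoTime();
        FileInputStream in = new FileInputStream(f);
        byte[] buf = new byte[64 * 1024];
        while (in.read(buf) != -1) { /* discard */ }
        in.close();
        long ioMs = (System.nanoTime() - t0) / 1000000;

        // Time the full decode.
        long t1 = System.nanoTime();
        BufferedImage img = ImageIO.read(f);
        long decodeMs = (System.nanoTime() - t1) / 1000000;

        System.out.println("I/O: " + ioMs + " ms, decode to " + img.getWidth()
                + "x" + img.getHeight() + ": " + decodeMs + " ms");
    }
}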
Thank you for your help :)
Nathan D.

user5287726 wrote:
What version of the JVM are you using? How much RAM is available? What OS are you running? What kind of storage are the images on? How many other users and how many other processes are fighting over that storage?
And if the JVM is recent and has a lot of RAM available, and you have eliminated any storage bottlenecks, have you tried Java Advanced Imaging?
http://java.sun.com/javase/technologies/desktop/media/jai/
I've used it to process image files in the hundreds of MB size range, and while I don't have any benchmarks handy I'd be surprised if those files were processed much if any slower than the processing speed you're getting with your much smaller files.

I'm using the most current JVM, and lots of RAM is available; the application I'm developing can use up to 1.5 GB of RAM on the current machine. I'm using Mac OS 10.6, but the problem occurs on every operating system I've tested it on, including Windows XP, Windows Vista, Windows 7, Mac OS 10.4, 10.5, 10.6, and Linux. I develop the software on an iMac which I just recently reformatted to factory settings, so there are little to no other processes fighting over that storage.
I've heard of JAI but never really gave it a second thought because I never needed any more features than ImageIO gave me. However, does Java Advanced Imaging read images in a different way than ImageIO would? I'm beginning to think that it isn't so much the actual size of the file as the pixel dimensions. Even small files in the 100-kilobyte range load slowly when their dimensions are large, for example 3000x3000. Though I'm assuming those 100-megabyte images you processed with speed also had large dimensions?
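If the cost really does track pixel dimensions, one ImageIO-only option (a sketch with a hypothetical helper, not something from this thread) is to ask the reader for a subsampled version when only a screen-sized preview is needed; decoding every nth pixel in x and y can cut both time and memory substantially, at the cost of detail:

import java.awt.image.BufferedImage;
import java.io.File;
import java.util.Iterator;
import javax.imageio.ImageIO;
import javax.imageio.ImageReadParam;
import javax.imageio.ImageReader;
import javax.imageio.stream.ImageInputStream;

public class SubsampledRead {
    // Reads roughly every nth pixel, so a 3000x3000 file comes back
    // as ~1000x1000 when n = 3 and decodes correspondingly faster.
    public static BufferedImage read(File file, int n) throws Exception {
        ImageInputStream in = ImageIO.createImageInputStream(file);
        try {
            Iterator<ImageReader> readers = ImageIO.getImageReaders(in);
            if (!readers.hasNext()) {
                throw new IllegalArgumentException("No ImageReader for " + file);
            }
            ImageReader reader = readers.next();
            try {
                reader.setInput(in, true, true);
                ImageReadParam param = reader.getDefaultReadParam();
                param.setSourceSubsampling(n, n, 0, 0);
                return reader.read(0, param);
            } finally {
                reader.dispose();
            }
        } finally {
            in.close();
        }
    }
}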
For a moment I was thinking it was something in the BufferedImage class that was taking a long time, not the ImageIO class. However, the following proves that wrong.
try {
    ImageIO.read(someLargeImageFile);
} catch (Exception err) {
    err.printStackTrace();
}

The above code doesn't even keep the BufferedImage, so I guess it's down to something in the ImageIO class itself. I'll see if I have better luck with JAI, but can someone explain why that might be faster than using ImageIO.read(), or whether there are any other tips or tricks for reading large images such as these faster?
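As for why JAI might behave differently, here is a rough sketch of a JAI load (assuming the JAI jars are on the classpath; this is not from the original thread). JAI's "fileload" operator is deferred and tile-based, so opening the file is cheap and pixels are only decoded as they are requested, which can feel much faster in a viewer, at least for formats that support tiling:

import java.awt.image.BufferedImage;
import javax.media.jai.JAI;
import javax.media.jai.RenderedOp;

public class JaiLoad {
    public static void main(String[] args) {
        // Deferred load: the header is parsed here, pixel data is decoded lazily.
        RenderedOp image = JAI.create("fileload", args[0]);
        System.out.println(image.getWidth() + "x" + image.getHeight());

        // Forcing a full decode, roughly what ImageIO.read() does up front.
        BufferedImage all = image.getAsBufferedImage();
        System.out.println("decoded " + all.getWidth() + "x" + all.getHeight());
    }
}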
Thank you for your help :)
Edited by: neptune692 on Jan 6, 2011 8:35 AM

Similar Messages

  • Need to read/paint large images fast. Help!

    Hello Java community,
    I am having trouble writing a program which can read large image files from disk (~ 1-2 MB) and paint them to the screen while maintaining a frame rate of 15 or 20 fps. Right now I am doing this through a Producer/Consumer scheme. I have one thread which constantly creates ImageIcons (using the constructor which accepts a String file path), and places them on a blocking queue (production) with a maximum capacity of 5 or 10. I have a second thread which removes an image from the queue (consumption) and paints it to the JFrame every so many milliseconds to honor the frame rate.
    My problem is that this approach is not fast enough or smooth enough. With smaller images it works fine, but with larger images it cannot maintain a high frame rate. It also seems to consume more memory than it should. I am assuming that once I paint a new ImageIcon, the old one goes out of scope and is freed from memory by the Garbage Collector. However, even with a queue capacity of 5 or 10, I am getting out-of-memory exceptions. I plan on trying the flush() method of the Image class to see if that helps to recover resources. After searching around a bit, I see that there are many different ways to read/load an image, but giving a local path to an ImageIcon and letting it load the image seems to be a safe way to go, especially because in the documentation for ImageIcon, it says that it pre-loads the image using Media Tracker.
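    A bare-bones version of the producer side described above might look like the sketch below (class and method names are made up for illustration, not the poster's actual code); the bounded queue is what keeps only a handful of decoded frames in memory at once.
    import java.awt.image.BufferedImage;
    import java.io.File;
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;
    import javax.imageio.ImageIO;

    public class FramePrefetcher implements Runnable {
        private final File[] frames;                       // files to play, in order
        private final BlockingQueue<BufferedImage> queue;  // bounded: producer blocks when full

        public FramePrefetcher(File[] frames, int capacity) {
            this.frames = frames;
            this.queue = new ArrayBlockingQueue<BufferedImage>(capacity);
        }

        public BlockingQueue<BufferedImage> getQueue() {
            return queue;
        }

        public void run() {
            try {
                for (File f : frames) {
                    queue.put(ImageIO.read(f));  // blocks once 'capacity' frames are buffered
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }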
    Any help, ideas, or links would be appreciated!
    Thanks,
    Elliot

    Thanks for another insightful response.
    I've played a bit more, so let me update you. I am preloading images, and the blocking queue (FIFO) is full when the animation begins. Right now the queue capacity is 10, but for smaller images it can be closer to 40 or 50. The image reader thread reads images and places them in this queue. Oddly, the problem does not seem to be that the reader thread can't keep up. I added a print statement which displays the size of the queue before each removal, and it remains very close to the capacity, at least for my test images, which are only 400 x 400 and approximately 100 KB each.
    I've tried animating the images in two distinct ways. My first approach, as I mentioned in the original question, was to animate the images using purely Swing components and no manual painting of any kind. This is always nice because it frames the image perfectly and the code is minimal. To accomplish this I simply had the image reader thread pass the image path to the constructor of ImageIcon and put the resulting ImageIcon in the queue. The animator thread had a simple Swing timer which fired an ActionEvent every so many milliseconds (on the event dispatch thread). In the actionPerformed method I wrote a few lines to remove the head of the queue and used JLabel's setIcon(ImageIcon) to update the display. The code for this was very clean; however, the animation was choppy (not flickering, just choppy).
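    The consumer side of that first approach, roughly, reusing the FramePrefetcher sketch above (again hypothetical names, not the actual code): a javax.swing.Timer delivers its callback on the event dispatch thread, so calling setIcon there is safe, and polling rather than blocking keeps the EDT responsive.
    import java.awt.event.ActionEvent;
    import java.awt.event.ActionListener;
    import java.awt.image.BufferedImage;
    import java.io.File;
    import java.util.concurrent.BlockingQueue;
    import javax.swing.ImageIcon;
    import javax.swing.JFrame;
    import javax.swing.JLabel;
    import javax.swing.Timer;

    public class TimerAnimator {
        public static void main(String[] args) {
            File[] frames = new File(args[0]).listFiles();  // assumes a directory of image files
            FramePrefetcher prefetcher = new FramePrefetcher(frames, 10);
            new Thread(prefetcher, "image-reader").start();

            final BlockingQueue<BufferedImage> queue = prefetcher.getQueue();
            final JLabel label = new JLabel();
            JFrame window = new JFrame("Animation");
            window.add(label);
            window.setSize(800, 600);
            window.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            window.setVisible(true);

            // ~15 fps; the Timer callback runs on the EDT, so setIcon is safe here.
            new Timer(66, new ActionListener() {
                public void actionPerformed(ActionEvent e) {
                    BufferedImage img = queue.poll();  // poll, never block the EDT
                    if (img != null) {
                        label.setIcon(new ImageIcon(img));
                    }
                }
            }).start();
        }
    }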
    In my second approach, which was a bit longer, I created a class called AnimationPanel which extended JPanel and implemented Runnable. I simply overrode paintComponent and inside I painted a member Image which was set in the thread loop. Rather than storing ImageIcons, the queue stored Images. The reader thread used ImageIO.read(...) to generate an Image. I used simple logic in the thread loop to monitor the fps and to fire repaints. This approach suffered from the same problem as the one above. The animation was choppy.
    I found the following things to be true:
    - the reader can keep up with the animator
    - if I run the image reader and perform a basic polygon animation, both of my approaches animate smoothly at high frame rates. In other words, it's not the work (disk reads) being done by the reader that is causing the lag
    - I believe the slowness can be attributed to the following calls: label.setIcon(imageIcon) or g.drawImage(image, 0, 0, this). This leads me to believe that the images are not fully processed as they are being placed on the queue; perhaps Java waits until they are actually used to fully read/process them.
    - I need to find a way to process the images fully before trying to paint them, though I felt my approaches above were already doing that (see the sketch after this list). I considered having the reader thread actually generate a JLabel or some other object that can be displayed without additional processing.
    - your idea about AWT components is a good one and I plan on trying that.
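    One way to move that remaining work onto the reader thread (a sketch, assuming that pixel-format conversion at paint time is the culprit) is to copy each decoded frame into a screen-compatible image before it goes on the queue; drawing a compatible image is usually a plain memory copy.
    import java.awt.Graphics2D;
    import java.awt.GraphicsConfiguration;
    import java.awt.GraphicsEnvironment;
    import java.awt.Transparency;
    import java.awt.image.BufferedImage;

    public class CompatibleImages {
        // Copies 'src' into an image whose layout matches the screen, so later
        // drawImage calls don't have to convert pixel formats on the fly.
        public static BufferedImage toCompatible(BufferedImage src) {
            GraphicsConfiguration gc = GraphicsEnvironment.getLocalGraphicsEnvironment()
                    .getDefaultScreenDevice().getDefaultConfiguration();
            if (src.getColorModel().equals(gc.getColorModel())) {
                return src;  // already in the screen's format
            }
            BufferedImage dst = gc.createCompatibleImage(
                    src.getWidth(), src.getHeight(), Transparency.OPAQUE);
            Graphics2D g = dst.createGraphics();
            g.drawImage(src, 0, 0, null);
            g.dispose();
            return dst;
        }
    }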
    Elliot

  • How to increase the performance of Photoshop CS6 v13.0.6 with transformations in LARGE image files (25,000 x 50,000 pixels) on an iMac 3.4 GHz Intel Core i7, 16 GB memory, Mac OS 10.7.5? Should I purchase a Mac Pro?

    I have hundreds of these large image files to process.  I frequently use the SKEW, WARP, and IMAGE ROTATION features in Photoshop.  I process the files ONE AT A TIME and have only Photoshop running on my iMac.  Most of the time I am watching SLOW progress bars to complete the transformations.
    I have allocated as much memory as I have (about 14GB) exclusively to Photoshop using the performance preferences panel.  Does anyone have a trick for speeding up the processing of these files--or is my only solution to buy a faster Mac--like the new Mac Pro?

  • While opening a specific document: "Not enough memory" while rendering a document with a large image

    When opening a PDF file with a large image, Adobe Reader and Acrobat crash while rendering, with an out-of-memory message.
    After some investigation, this error is reproducible on Windows 2008 R2 and 2012 R2 machines running on HP ProLiant BL460c Gen8.
    The servers are updated with the latest firmware from HP.
    This error does not occur on a virtual machine running Windows 2008 R2, so it seems like a hardware-dependent bug.
    When the "Show large images" option is disabled, the error does not show up. But this is neither a solution nor a workaround, as we need to display all information in the PDF.
    Thanks for any help

    Hello SumitV
    Thanks for your answer.
    Indeed, the output file can be opened without any errors. It's a good workaround, but it's not a solution.
    Why does it work (and much faster) on a virtual machine, but not on a very powerful HP machine?
    I hope Adobe will investigate this case and solve it completely.
    We have a test machine where we could install some debug tools; are there any that would give a deeper analysis of this problem?
    Thanks a lot.

  • Displaying large images

    On blog or news item pages etc. it would be common to display an image as no more than a certain width, regardless of the actual image size, simply for fast loading and consistent visual presentation. However, I would like users to be able to see the image full size if required.
    The obvious approaches are, when clicking on the image, to either show it full size in the current browser window or to create a new window to show it, probably using JavaScript to open a window the same size as the image.
    Both have their pros and cons, and the other issue is that it is always annoying to click on an image only to discover that the original was no bigger anyway!
    Does anyone have any thoughts on how best to handle images?
    Thanks.

    > On blog or news item pages etc. it would be common to display an image as no more than a certain width, regardless of the actual image size
    This is mistaken. A large image will be fetched at its native size regardless of how you display it on the page.
    Murray --- ICQ 71997575
    Adobe Community Expert

  • Reading large graphics files as manageable blocks?

    I am looking either for source code or a library that would allow me to easily read a graphics file in manageable blocks. For example given an image that is 20000x200000 pixels, I would like to be able to read only a block of 400x400, from a given offset x/y in the image.
    I need this because I am writing a program that needs to handle very large images (I can't assume the user has more memory than the files), but it does not need to display the whole image in one go; it would let the user navigate within the image. The formats I am looking to support are JPEG, PNG and TIFF.
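    With plain ImageIO this is what ImageReadParam.setSourceRegion is for; below is a sketch (hypothetical class name) that decodes only a window of the file. How much time and memory this actually saves depends on the format and the reader: tiled TIFFs handle it well, while some JPEG and PNG decoders still have to walk much of the stream.
    import java.awt.Rectangle;
    import java.awt.image.BufferedImage;
    import java.io.File;
    import java.util.Iterator;
    import javax.imageio.ImageIO;
    import javax.imageio.ImageReadParam;
    import javax.imageio.ImageReader;
    import javax.imageio.stream.ImageInputStream;

    public class TileReader {
        // Decodes only a w x h block starting at (x, y) in the image.
        public static BufferedImage readRegion(File file, int x, int y, int w, int h)
                throws Exception {
            ImageInputStream in = ImageIO.createImageInputStream(file);
            try {
                Iterator<ImageReader> readers = ImageIO.getImageReaders(in);
                if (!readers.hasNext()) {
                    throw new IllegalArgumentException("No ImageReader for " + file);
                }
                ImageReader reader = readers.next();
                try {
                    reader.setInput(in, true, true);
                    ImageReadParam param = reader.getDefaultReadParam();
                    param.setSourceRegion(new Rectangle(x, y, w, h));
                    return reader.read(0, param);
                } finally {
                    reader.dispose();
                }
            } finally {
                in.close();
            }
        }
    }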

    I agree with Gerd.  If you are working with binaries, why not use U8 instead of doubles.
    If the file is indeed 50MB, then the array should be expecting 52,428,800 elements, not 50,000,000.  So if you read the file in a loop and populate one element at a time, you could run out of memory fast, because each insertion beyond the preallocated size may force a new, larger allocation (potentially on every iteration).  This is just speculation, since I don't see the portion of your code that populates the array.
    Question:  Why do you need an array?  What do you do with the data after you read it?  I agree with Altenbach, 50MB is not that big, so working with a file of such a size should not be a problem.

  • Large Images + MPE Hardware = Decreased Performance?

    I have been enjoying the performance gain of using the GPU on my graphics card (GTS 250) even though it is not officially supported. I have found that some timelines that take the CPU to 95% in software mode play back using only 25% using the GPU.
    I have found only one anomaly so far and I am not sure if it is specific to the mercury playback engine using hardware, to using an unsupported card, or just to my card/system in particular.  If I place large (over 4,320 pixels along the long edge) JPEG pictures on the timeline (DSLR 1080p @29.97), animate the position and scale using the standard motion effect, and apply a cross dissolve, the MPE using hardware will crash during export or slow it to a crawl.  It is the only case I have found where exporting in software mode is much faster.
    However, if I reduce all the images so that the longest edge is below 4,000 pixels, the speed of the MPE using hardware returns and exports work fine.  I am curious to hear if others have noticed any performance lag/problems from using large images?
    PPRO CS5 5.0
    Nvidia GTS 250
    Nvidia Driver Version 197.?

    Found it...it was on Karle Soule's website.  Here is what he says about CS5 and maximum resolutions.
    " In the Adobe Mercury Engine, the maximum timeline resolution is 10240 pixels by 8192 pixels, more than enough for any mastering resolution we'll see in the near future. The maximum resolution for clips dropped in any timeline will be limited to 256 megapixels in any direction. So, for example, footage from a 32000 by 8000 pixel sensor could be imported and dropped onto a timeline."
    From the bottom of this page: http://blogs.adobe.com/VideoRoad/mercury/
    I am sure that Photoshop does a better job scaling images, but for projects with a lot of images, it just does not seem very practical to crop/scale each image based on how much panning will be done.  My project is not slowing down due to larger images and playback on the timeline is great; I was just wondering why MPE using hardware bogs down.
    By today's standards, an image of 4,300 pixels along the long edge is not that big....
    I have found that the problem is related to the cross dissolve.  If I remove the cross dissolve the sequence will export with all the GPU goodness speed.  Makes me wonder if both images have to be loaded by the GPU to calculate the dissolve.  If that is the case, then two images over 4096 would put my card over the 8192 maximum image texture...however, the GTX 285 has the same maximum image texture, so I wonder whether the problem occurs only on my card/system or with all GPU rendering?

  • Problem rotating a large image

    Application description:
    A JFrame with a control panel at the bottom, which contains a couple of JSliders that control the scaling, translation and rotation of a BufferedImage using an AffineTransform.
    JDK used:
    1.5.0_04
    Problem description:
    The application works fine when using a "normal" sized image. But when I use a large image (1600x1200) the application seems to freeze. The GUI isn't drawn, just the infamous grey rectangle. I've narrowed it down to the rotation: if I comment out the AffineTransform.rotate call, the application works fine and the very large image is displayed in all its glory, with translate and scale still applied.
    Is this due to some limitation of AffineTransform?
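    For comparison, rotating a 1600x1200 BufferedImage on its own is normally quick; below is a sketch of doing the rotation about the image centre with Graphics2D (hypothetical names, not the application's code), which can help rule the transform itself in or out.
    import java.awt.Graphics2D;
    import java.awt.RenderingHints;
    import java.awt.image.BufferedImage;

    public class Rotate {
        // Returns a copy of 'src' rotated by 'radians' about its centre.
        public static BufferedImage rotate(BufferedImage src, double radians) {
            int w = src.getWidth();
            int h = src.getHeight();
            BufferedImage dst = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
            Graphics2D g = dst.createGraphics();
            g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                    RenderingHints.VALUE_INTERPOLATION_BILINEAR);
            g.rotate(radians, w / 2.0, h / 2.0);
            g.drawImage(src, 0, 0, null);
            g.dispose();
            return dst;
        }
    }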

    > Scale and translate would be less intensive than rotate.
    Possibly, but my guess is probably not. If you have a world matrix that everything gets multiplied through, then rotate, translate and scale should all be pretty fast.
    > To rotate such a large image would be very cpu intensive and probably require a substantial amount of memory. You may want to check your computer performance during the operation to determine if your computer is being bogged down.
    Again, maybe, but IMO very unlikely. If Java uses a world matrix (and I've got no idea how else it could do things like scale or rotate) then there should be no or almost no penalty for transformations. And you certainly wouldn't need any more memory. It would be difficult to test for individual graphics operations, but you could check whether the CPU gets pinned while painting.

  • When a pop up window comes up it is - search bookmarks and history window! I cannot log into my bank as login button should open new window to log in but I get the search page. I cannot see larger images as again I get the search bookmarks and history pa

    When a pop-up window comes up it is the Search Bookmarks and History window! I cannot log into my bank, as the login button should open a new window to log in but I get the search page instead. I cannot see larger images, as again I get the Search Bookmarks and History page, etc. This happens on all options that should open a new page. I am so frustrated; this has been happening since Firefox updated itself 2 days ago to Mozilla/5.0 (Windows; U; Windows NT 6.0; en-GB; rv:1.9.2.8) Gecko/20100722 Firefox/3.6.8 ( .NET CLR 3.5.30729; .NET4.0C) - it was fine before that. I am using Windows Vista. Can you please advise what I should do? Also, can I go back to the previous version? Error console, e.g.:
    Warning: Error in parsing value for 'cursor'. Declaration dropped.
    Source File: https://ib.nab.com.au/nabib/styles/menu_nab.css?id=009
    Line: 116
    ib.nab.com.au : server does not support RFC 5746, see CVE-2009-3555 and Warning: Selector expected. Ruleset ignored due to bad selector.
    Source File: https://ib.nab.com.au/nabib/styles/nabstyle.css?id=014
    Line: 837
    This happened: every time Firefox opened, starting 2 days ago after the update.

    Do you have that problem when running in the Firefox SafeMode?
    [http://support.mozilla.com/en-US/kb/Safe+Mode]
    ''Don't select anything right now, just use "Continue in SafeMode."''
    If not, see this:
    [http://support.mozilla.com/en-US/kb/troubleshooting+extensions+and+themes]

  • Unable to create large image file in iphoto

    I make a lot of panoramic images by stitching together overlapping images with Hugin in TIFF format, then edit them in iPhoto and use it to export them in JPEG format for greater versatility of use. So far I haven't had any trouble, even with large images as big as ~25,000x2000 pixels and ~90MB file size.
    However, I have recently come across a problem trying to export an edited version of a particularly large one - 38,062x1799, 136MB file size. I can export an unedited version to JPEG without trouble, but when I tried to export an edited version it gave me an error saying it couldn't create the file. I looked around here for solutions and found the suggestion to check the file size in Finder; it was zero. After experimenting I've found that the file size goes to zero as soon as any changes are made to the file; the edited image can still be viewed in iPhoto, but does not display if you try to zoom in. I have tried making the TIFF file again with Hugin to see if it is a problem with the file, but I experience the same problem with the newly made file.
    I am working on a MacBook Air (1.7 GHz Intel Core i5, 4 GB 1333 MHz DDR3 memory, Intel HD Graphics 3000 384 MB, OS X 10.7.5) in iPhoto '11 version 9.2.2 (629.40). Any suggestions for getting round this issue?

    I thought of something that might help with working out what's going on.
    After I make any edits to the image, if I move on to the info tab the image initially goes a bit blurry as it often does with large images, but instead of coming up as a clear image after a few seconds like normal, I get the image I've attached here and the file size changes to zero. To be able to see the thumbnail of the edited image I need to come back out to the library again, but the file size is still zero. If I revert to original the file size is restored.

  • Is it possible to downsize a large image on the server side (before serving it to the user)?

    One of the banes of my existence is resizing the same image several times because of the various contexts it appears in. For example, the main image in an article will appear bigger than it will on the index page, where many article teasers are placed. Then there's the image that goes in the Facebook "share" og. Then there's... you get the idea.
    Same image, different contexts, lots of Photoshop resizing, lots of files to keep track of... but I save on bandwidth.
    On the flip side, I can target the same (large) image and simply downsize via traditional HTML (width/height) on the browser end, but that will mean downloading a file that is 50-75% larger than what is actually needed. If the front page teaser displays a 1280px image at 500px, that's a huge (and potentially costly) waste of bandwidth.
    HOWEVER...
    If I could do the same thing on the SERVER side... in other words, tell the server to take only the pixels it needs from the (large) image and serve only THOSE to the end user... then the same image could be used in each and every context, and only the necessary bandwidth is used.
    Is this do-able? If so, what is the process formally called, and where would I begin to learn how to do it?
    Thanks!
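    What is being described is essentially a server-side resizing step: decode the original once, scale it to the requested width, and stream the smaller encoding. Below is a rough sketch of that core step in Java (request handling and caching are omitted, and the names are hypothetical).
    import java.awt.Graphics2D;
    import java.awt.RenderingHints;
    import java.awt.image.BufferedImage;
    import java.io.File;
    import java.io.OutputStream;
    import javax.imageio.ImageIO;

    public class DownsizeOnTheFly {
        // Scales 'original' to 'targetWidth' (keeping the aspect ratio) and writes it as JPEG.
        public static void writeScaledJpeg(File original, int targetWidth, OutputStream out)
                throws Exception {
            BufferedImage src = ImageIO.read(original);
            int targetHeight = Math.max(1, src.getHeight() * targetWidth / src.getWidth());

            BufferedImage scaled = new BufferedImage(targetWidth, targetHeight,
                    BufferedImage.TYPE_INT_RGB);
            Graphics2D g = scaled.createGraphics();
            g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                    RenderingHints.VALUE_INTERPOLATION_BILINEAR);
            g.drawImage(src, 0, 0, targetWidth, targetHeight, null);
            g.dispose();

            ImageIO.write(scaled, "jpeg", out);
        }
    }
    In practice the scaled results are cached (on disk or in a CDN) so each size is only computed once per image.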

    That's amazing. I didn't think it was possible without saving files on the server first.
    This will suit my needs just fine, but allow me to push my luck even further with the following "hail mary" question : would it be possible for the server to serve the PNG with a width of 100% of the container? In other words, the behavior would mimic width=100% but the server would still only serve as many bytes as are needed to fill that space.
    Not only would I not have to create separate files for every resize (we fixed that problem with your suggestion) but I wouldn't even have to customize every link to a specific size.
    I'm not expecting an affirmative on this one, but it was worth asking. =)

  • Large images not displaying properly in Acrobat Pro 9

    When creating PDF documents from any software (Illustrator, Corel, InDesign, etc.), color objects over 50% of the page size have a white fill. If an object has an outline, the outline will appear normally, but the fill will be white. I've searched forums everywhere and have found no answer. Most forum topics on this issue focus on settings in the program that exported the PDF. I am 99% sure this is not an export problem, but something in Acrobat itself (a missed setting? a corrupt program file?)
    To provide the best information, I've done a little scientific research. Here is what I know:
    Currently running Acrobat Pro 9.5.5
    This is only happening on ONE install of Acrobat 9. Documents display correctly in Reader X and in Acrobat 9 on other machines.
    Started happening a few weeks ago, I have no idea what changed.
    "Show Large Images" in preferences dialog is checked. Unchecking it and rechecking it does nothing.
    Have installed all available updates to the software
    I experimented with different object sizes. At 50% of the page size, objects appeared normal; at 51%, the filled object disappears.
    Does not affect photos, only affects vector objects.
    I welcome all ideas to fix this.
    ALSO - Am running this on Windows Vista 64 Bit

    Hello? Is there nobody in the Adobe universe that has an actual solution to my problem?
    Other things I've tried to solve this:
    repair the install.
    uninstall and reinstall
    cuss at the computer
    throw a tantrum.
    I will pay someone a whole dollar for a solution that works.

  • Illustrator Image Trace doesn't work with large images

    Illustrator's Image Trace doesn't seem to work at all with large images.  The attached image shows the following:
    A) The original raster cropped tightly.
    B) Converting the raster with default "Black and White Logo" settings.  Results in 242 paths and 4792 anchors.
    C) Adding a massive amount of white space and then converting the raster with default "Black and White Logo" settings.  Results in 407 paths and 1620 anchors.
    For whatever reason Illustrator can't seem to handle large images.  This test shows it's not an issue with the image being too complex since all I've done is add white space and the file was saved without compression so there should be no noise.
    This type of glitch started with CS6 with the new Image Trace tool.  Is there a fix for this?  Maybe setting Image Trace into legacy mode?

    Moving to Illustrator forum.

  • Trying to link a thumbnail in one frame to a larger image in another frame GOLIVE 9...

    Hi.
    I am using Adobe Golive 9.
    I found a tutorial on the internet that allows you to create an image gallery with clickable thumbnails
    http://www.tutorialhero.com/click-48179-create_an_image_gallery_with_clickable_thumbnails.php
    However, the tutorial is designed for a page without frames.
    I want to be able to do this using frames.
    I want to link a thumbnail on a page in one frame to a larger image on another page, so that when your mouse is over the thumbnail, the larger version appears in the other frame.
    This is what my frames look like so you get an idea of what I'm trying to do
    And here is a picture of the thumbnail in the lower frame and the larger image in the frame above it.
    Could someone please give me STEP BY STEP instructions on how to do this?  I'm new to Golive, using version GOLIVE 9, and I know this can be accomplished using "Set Image URL"  but have no idea how to do it.
    Thank you for your help all.
    Chris.

    A link in one frame is designed to call another page into some other frame, not an image. You can't use SetImageUrl across different pages. You'd probably have to put each of your large images on a page, and call each page each time. I'm not sure why you're so set on using frames, they have many disadvantages.

  • Lightroom 4. I have a large image (scanned), 10,000 x 14,000, which does not render when I try to print or export.  Regular sized images work fine.

    In Lightroom 4 on Mac OS 10.6.8, I have an especially large image (scanned) which is 10,000 x 14,000.  I have edited it, but when I try to print or export the image, it starts "Rendering Page" but gets stuck and is unable to complete the task before it times out.  When I use normal sized photo images (raw), it works fine.  I suspect the problem is related to the size of the file.  Is there some way I can reduce the size of the image while in Lightroom?  Or is there any other solution?

    You originally posted on a thread that really wasn't related to your problem, so I branched the discussion.
    Check the Windows System Event Viewer to see the details of the crash, most importantly the name of the faulting module.
    Most crashes on Open or Save are due to OS bugs (especially on MacOS) or bugs in third party extensions to the OS open and save dialogs.
