Editing very large images

I have several very large images that require minor editing. I only need basic rotation, cropping, and maybe some minor color adjustments.
The problem is that the images range from 20780 x 15000 px down to 11150 x 2600 px.
With images this size, when I try to perform any operation on the entire image, such as a 3° rotation, Fireworks simply times out. Maybe it can't generate the preview image at that size, or maybe it simply can't calculate the image at that size at all. If it is just the preview image, is there a way to lower the quality of the preview only (say, to 5% JPEG quality) while still exporting the image at 100%?
Thank you,
-Adam
(Yes, I did search, but the forum search seemed mildly useless and search engine results consistently returned nothing.)

Fireworks is not designed to work with images of this size. It's a screen graphics application. Photoshop would be a better option here, or other software designed for working with high resolution files.
Jim Babbage

Similar Messages

  • Please help!! "Can't open the illustration. This artwork contains a very large image that can not...

    Hi all, I subscribe to Illustrator CS6 16.0.0 and use it on Windows 7.
    A few days ago I was working on a file and saved it successfully. Now when I try to open it, an error message occurs: "Can't open the illustration. This artwork contains a very large image that can not be read on this version of AI. Please try with 64-bit version." and ALMOST ALL of the (vector) objects are missing, as if they were deleted!
    It's kind of strange, since I created the file with the same program and everything was working properly before.
    Please advise on further steps for recovering my file.

    Thank you so much! The file is recovered (as well as my emotional state).
    The finding of the day: apparently I have two versions of AI on my PC!

  • Creating a custom DataBuffer for very LARGE images

    I would like to work with very large images (up to 100 MB) in an application.
    Only for image manipulation, not rendering to the screen.
    My idea is to write my own DataBuffer which uses the hard drive, maybe with some "intelligent" swapping.
    At first, performance doesn't matter.
    I tried this:
        ColorModel cm = ColorModel.getRGBdefault();
        SampleModel sm = cm.createCompatibleSampleModel(w, h);
        DataBuffer db = new myDataBufferInt(size);
        WritableRaster raster = Raster.createWritableRaster(sm, db, null);
    But something doesn't like myDataBuffer, even though it is of type DataBuffer.TYPE_INT. It throws:
        java.awt.image.RasterFormatException: IntegerComponentRasters must have integer DataBuffers
        at sun.awt.image.IntegerComponentRaster.<init>(IntegerComponentRaster.java:155)
        at sun.awt.image.IntegerInterleavedRaster.<init>(IntegerInterleavedRaster.java:111)
        at sun.awt.image.IntegerInterleavedRaster.<init>(IntegerInterleavedRaster.java:78)
        at java.awt.image.Raster.createWritableRaster(Raster.java:980)
    My class looks like this:
        public class myDataBufferInt extends DataBuffer {
            public myDataBufferInt(int size) {
                super(TYPE_INT, size);
            . . . }
    The question is: how do I manage large images?
    What kind of subclasses do I have to write?
    Thank you all
    P.S.: I don't want to use: java -Xmsn -Xmxn

    I have the same problem. Please let me know if you have an answer.
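    A note on that exception: the stack trace suggests that Raster.createWritableRaster accepts an integer sample model only with a buffer that is an instance of java.awt.image.DataBufferInt, so extending the abstract DataBuffer class is not enough even with TYPE_INT. Below is a minimal sketch of a subclass that satisfies the type check; the paging comments are hypothetical, and be aware that the built-in raster implementations typically read the backing int[] array directly, bypassing getElem/setElem, so real disk swapping usually means processing the image in tiles instead.
        import java.awt.image.DataBufferInt;

        // Sketch only: extends DataBufferInt so that
        // Raster.createWritableRaster accepts it.
        public class MyDataBufferInt extends DataBufferInt {
            public MyDataBufferInt(int size) {
                super(size); // still allocates an in-memory int[] bank
            }

            @Override
            public int getElem(int bank, int i) {
                // A real implementation could page data in from disk here,
                // but see the caveat above about direct array access.
                return super.getElem(bank, i);
            }

            @Override
            public void setElem(int bank, int i, int val) {
                // ...and mark pages dirty for write-back here.
                super.setElem(bank, i, val);
            }
        }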

  • What is the best way to handle very large images in Captivate?

    I am just not sure of the best way to handle very large electrical drawings.
    Any suggestions?
    Thanks
    Tricia

    Is converting the colorspace asking for trouble?  Very possibly!  If you were to do that to a PDF that was going to be used in the print industry, they'd shoot you!  On the other hand, if the PDF was going online or on a mobile device – they might not care.   And if the PDF complies with one of the ISO subset standards, such as PDF/X or PDF/A, then you have other rules in play.  In general, such things are a user preference/setting/choice.
    On the larger question – there are MANY MANY ways to approach PDF optimization.  Compression of image data is just one of them.  And then within that single category, as you can see, there are various approaches to the problem.  If you extend your investigation to other tools such as PDF Enhancer, you'd see still other ways to do this as well.
    As with the first comment, there is no "always right" answer.  It's entirely dependent on the user's use case for the PDF, the requirements of any additional standards, and the user's needs.

  • Editing very large MP3 Files

    Hello,
    Sorry for my bad English! I am French!
    I recently bought a Sony digital recorder (ICD-PX312) that records MP3 files.
    I recorded my noisy neighbour in continuous mode for three days. (I was not at home.)
    The PX312 recorder produced several files whose maximum size is 511,845 kB, corresponding to a length of 24 h 15 min.
    Every file, checked with MediaInfo, has the same properties (48 kbps, 44.1 kHz).
    I can play them with VLC media player without any problem, but I need to edit these files to keep only the interesting parts.
    If I open these files (drag and drop, or open file) with Audition 1.0 (I came from Cool Edit), I find very strange behavior:
    The 24 h 15 min files are opened with a length of an hour and a half or so!
    I gave it a try with Audition 3.0 and the result is strange too, and more "funny":
    The length of one 24 h 15 min file is an hour and a half.
    The length of another 24 h 15 min file is 21 h or so.
    In the Audition TMP directory, Audition has created several 4 GB temporary files that correspond to the limit of WAV files.
    I made a test with a 128 kbps, 44.1 kHz file. This 511,845 kB file is 9 h 5 min long.
    Audition reads it as 4 h 40 min.
    The TMP file is 2,897,600 kB, far below the 4 GB limit.
    It seems Audition 1 and 3 (I read that CS5 has the same problem too) do not handle the WAV conversion very well.
    Is it due to Audition itself or to the Fraunhofer MP3 codec?
    Is there any workaround from Adobe?
    As I am an AVS4YOU customer, I tried AVS Audio Editor. It opens every file with the correct length, but does not have the editing functions I need. That demonstrates that it is possible to handle very large files.
    Many thanks in advance for any help or ideas, because Audition is the editor I need!
    Best Regards

    SteveG wrote:
    It's not their 'bug' at all. MP3 files work fine, and Adobe didn't invent the WAV file format, Microsoft did. It's caused by the 32-bit unsigned integer used to record the file size in the header incorporated into WAV files, which limits their size to one that can be described within that limit - 4 GB.
    Maybe I was not clear enough in my explanation.
    I agree with you partly.
    I agree with you that the 4 GB limit is inherent to the Microsoft WAV format.
    When reading 24 h MP3 files, Audition 3.0 creates, in the TMP folder, two chunks of 4 GB and another smaller chunk, to try to overcome the limit. This cutting process is exactly what I expect from such software.
    The problem - and the bug - is that the duration that is "extracted" in Audition is smaller than that of the original MP3 (e.g. 21 h 30 min instead of 24 h 15 min). Some part of the original MP3 file has been skipped in the cutting/conversion process.
    I don't think Microsoft is involved in this "cutting" process, which belongs to Adobe and/or Fraunhofer. This is why I am surprised.
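    For scale, a quick worked check of that 4 GB ceiling (this assumes CD-quality 16-bit stereo at 44.1 kHz, i.e. 44,100 samples/s x 2 channels x 2 bytes = 176,400 bytes/s; Audition's actual temporary format may differ):
        4,294,967,296 B / 176,400 B/s ≈ 24,349 s ≈ 6 h 46 min of audio per 4 GB WAV chunk
    So a 24 h 15 min recording must span several temporary files, and any error in splitting or reassembling those chunks would show up as exactly this kind of shortened duration.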

  • Keynote unable to handle very large images

    I need to use a very long image (1341 x 21221px), and dragging it into my presentation causes it to be scaled down and blurry. I have tried in Keynote 6.0 and 6.1. (More context: I use an animated pan from the top of the image to the bottom, so cropping/scaling is not an option.)
    I have unchecked "Scale placed images to fit on slide" in the preferences. Keynote 5.x was able to handle this OK; the new Keynote will not.
    If anyone has a workaround, I'd love to hear it.


  • Bridge CS4 with very large image archives

    I have just catalogued 420,000 images from 3TB of my photographic collection. For those interested, the master cache files became quite large and took about 9 days of continuous processing:
    cache size: 140 gb
    file count: 991,000
    folder count: 3000
    All cache files were also exported to the disk directories.
    My primary intent was to use the exported cache files as a "quick" browsing mechanism with Bridge. Of course, "quick" is a rather optimistic word; however, it is very significantly faster than having Bridge rebuild caches as needed.
    I am now trying to decide if it is worth keeping the master Bridge cache, given the limitations of the Bridge implementation, which is not very flexible as to where and when the master cache adds new temporary cache entries.
    Any suggestions as to the value of keeping the master cache would be appreciated. I don't really need keyword or other rating systems, since I presently use a simple external database for this type of image location.
    I am also interested in knowing whether the "500,000" entry cache limitation is real - or whether more than 500,000 images can be stored in the master cache, since I will be exceeding this image count next year.

    I have a Bridge 5 cache system with 600,000 images over 8 TB of networked disk.  I too use this to "speed up" the browsing process, and rely primarily on keyword processing to group images.  The metadata indexing is, for practical purposes, totally useless (it never ceases to amaze me why Adobe thinks it useful for me to know how many images were taken with a lens focal length of 73mm - or some other equally useless statistic).  The one seriously missing keyword-indexing feature I can think of is the ability to have a keyword associated with a directory.  For example, I have shot many dozens of dance, theatre, music and sports productions - it would be much more useful to catalogue a directory with the keywords "Theatre" and "Romeo and Juliette" than to attempt to keyword each individual image.   It is, of course, possible to work around the restriction, but that is very unclean and certainly less than desirable.   Keywording a project (i.e. a directory) is a totally different kettle of fish from keywording an image.  I also find the concept of the "collection" very useful and well implemented.
    I do maintain a complete cache build of my system.  It is spread over two master caches, one for the first 400,000 images and a second for the next 400,000.  (I want to stay within the 500,000 cache size limit - it is probably associated with the MySQL component of Bridge, and I think it may have problems if you exceed the limit by a substantial amount.  With Bridge on CS3, when the limit was exceeded, the cache system self-destructed and I had to rebuild.)
    The other issue I can think of (and it seems to be part of Adobe's design) is that Bridge will rebuild the master cache for a working directory "for no apparent reason", such as when certain (unknown) changes are made to ACR.   Other automatic rebuilds have been reported by others, but Adobe does not comment on when or what causes a rebuild.  Of course, this has a serious impact on getting work done - it is a bloody pain to have Bridge suddenly process 1500 thumbs and preview extracts simply to keep the master cache completely and perfectly synchronized (in terms of image quality) with what might be displayed if you want to load a raw image into Photoshop.  This strategy is IMHO completely out of step with how (at least I) use the browsing features of Bridge.
    It may be of concern that Adobe may, for design reasons, change the format of the directory cache files, and you will have to completely rebuild all master and directory caches yet again - which is a real problem if you have many hundreds of thousands of images.  This happened when the cache system changed from CS3 to CS4, and Adobe did not provide a conversion programme to migrate the old format to the new.  This significantly adds to the rebuild time, since each raw image must be completely reprocessed.  My current rebuild of the master cache has taken over two elapsed weeks of continuous running.
    It would be nice if Adobe would allow some control over what is recorded in the master cache - for example, "do you wish metadata to be indexed?".
    (( As an aside, Adobe does not comment on why using Bridge to import images from a CF card results in the building of a .xmp file containing nothing but metadata for each raw file.  I am at a loss to speculate what really useful thing results, other than maybe speeding up the processing of the (IMHO useless) aspects of metadata. ))
    To answer your question, I do think the master cache is worth keeping - and we can pray that Adobe puts more thought into why the master cache exists and who uses the present type of information indexed within the cache.

  • Very large images

    How do I assemble an extremely large format image to be used in a convention display?

    The latest two major versions allow you to disable compression right from the preferences.
    But please describe the actual problem you're trying to solve.  Are you actually trying to edit big files and having problems, or just thinking ahead?
    Does your computer have the resources to edit huge documents?
    Are you trying to meet certain goals for size and ppi?
    -Noel

  • Editing very large panorama

    Hi all,
    I am currently working on a huge project, a gigapixel panorama. The picture will go over the Photoshop limitation of around 300,000 px and will be around 394,110 px wide.
    I will save the picture in PSB format and try to do as many corrections as possible in the source pictures, but I may need to do editing after the stitching process.
    Do you know if it is possible to edit this picture by cutting it into more usefully sized pictures?
    I will not do contrast or light corrections that could affect the final image when I merge the tile pictures, only small corrections like blurring people's faces, car registration plates, etc...
    Thanks in advance
    Regards

    Sure. Cutting the picture up into smaller pieces will make it easier to work with. Bert Monroy (do a Google search) works this way for some of his really large-scale paintings. Of course, running in 64-bit and having tons of RAM in your Mac will help your cause in a substantial way.

  • Resizing a large image(86400 x 43200) into smaller tiles

    Hello,
    I am working with a VERY large image (86400 x 43200). I need to resize this into smaller images (7200 x 7200). The fixed-size crop tool is painstakingly slow; is there a plugin or automation script that will do this for me?
    My system should be able to handle this:
    Phenom 9600
    XFX 5770
    4gb DDR 1066
    Windows 7-x86
    Photoshop CS4
    Any help would be greatly appreciated!
    -James

    You may be able to change that behavior by adjusting the amount of RAM you allow Photoshop to use in the Edit - Preferences - Performance dialog.  Keep in mind you'll need to shut down and restart Photoshop after having made a change to the setting.
    -Noel
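    Outside of Photoshop, this kind of fixed-grid slicing is also easy to script. Here is a minimal Java sketch (file names are hypothetical, and it assumes an ImageIO plugin that can decode the source format) that reads one 7200 x 7200 region at a time via ImageReadParam's source region, so the full 86400 x 43200 image is never decoded into memory at once:
        import java.awt.Rectangle;
        import java.awt.image.BufferedImage;
        import java.io.File;
        import javax.imageio.ImageIO;
        import javax.imageio.ImageReadParam;
        import javax.imageio.ImageReader;
        import javax.imageio.stream.ImageInputStream;

        public class TileSlicer {
            public static void main(String[] args) throws Exception {
                File source = new File("huge.tif"); // hypothetical input file
                int tile = 7200;
                try (ImageInputStream in = ImageIO.createImageInputStream(source)) {
                    // Assumes a reader is registered for this format.
                    ImageReader reader = ImageIO.getImageReaders(in).next();
                    reader.setInput(in);
                    int w = reader.getWidth(0), h = reader.getHeight(0);
                    for (int y = 0; y < h; y += tile) {
                        for (int x = 0; x < w; x += tile) {
                            ImageReadParam p = reader.getDefaultReadParam();
                            // Decode only this tile's region of the source.
                            p.setSourceRegion(new Rectangle(x, y,
                                    Math.min(tile, w - x), Math.min(tile, h - y)));
                            BufferedImage region = reader.read(0, p);
                            ImageIO.write(region, "png",
                                    new File("tile_" + x + "_" + y + ".png"));
                        }
                    }
                    reader.dispose();
                }
            }
        }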

  • Can Photoshop Elements handle very large files?

    I have some very large image files (up to 20,000 x 30,000 pixels).  Microsoft Paint cannot open them because of resource limitations.
    I don't want to do anything really fancy.  Maybe extend a square matrix with white pixels, or overwrite sections with white pixels.
    Can Photoshop Elements deal with such large images, say just a chunk at a time?  Or does it need to have the entire image in memory at once?
    Thanks.

    This note from Adobe indicates that PSE5 and PSE6 could open such a file; the limits for PSE7 and PSE8 are probably the same, or may allow even larger images to be opened.

  • Safari 7.0.1 does not load large images

    When I use Safari and try to load an image with a resolution above, say, 2000 x 2000, I get a black box, sometimes with a grey area in the corner. The correct image displays for a brief moment, about half a second, but not long enough for my viewing pleasure. This usually happens when I click imgur links on reddit, such as this one. I just searched for a very large image on Google and was able to load it with no problem. I've tried turning off all extensions, but even with no extensions the images do not load. Any advice is much appreciated. Thank you!

    Turning automatic graphics switching off solved the issue for me: http://www.youtube.com/watch?v=QxSlQ-GNbik
    It seems Safari won't render large images when you are using integrated graphics.

  • Working with large images

    I need to work with very large images (3000 x 3000 pixels) and they take a lot of memory (50 MB). The thing is, they don't use a lot of colors, but they are loaded into memory with a 24-bit color model.
    It is a JPEG image, and it is decoded to a BufferedImage with the standard JPEG decoder. I want to use BufferedImage because it is very simple to work with; however, I need to find a way to reduce memory consumption.
    Is it possible to somehow use a different color model to reduce the color count to 256, which is enough for me? I am clueless about imaging and I simply haven't got the time to get into it, so I need some hints. I just want to know whether it is possible (and how) to reduce the size of a BufferedImage in memory by decreasing its number of colors.
    I was thinking about using a GIF encoder, but it works with Images, not BufferedImages, and the conversion takes a lot of time, so it is not a solution.
    Thanks!
    dario

    You can create a byte-indexed BufferedImage. This uses just 1 byte per pixel.
    Use
        public BufferedImage(int width,
                             int height,
                             int imageType,
                             IndexColorModel cm)
    with
        imageType == BufferedImage.TYPE_BYTE_INDEXED
    (TYPE_BYTE_BINARY only allows palettes of up to 16 colors) and create a 256-entry color palette with one of the IndexColorModel constructors.
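    A runnable sketch of that suggestion (the greyscale ramp is just an example palette; substitute your own 256 colors):
        import java.awt.image.BufferedImage;
        import java.awt.image.IndexColorModel;

        public class IndexedImageDemo {
            public static void main(String[] args) {
                byte[] r = new byte[256], g = new byte[256], b = new byte[256];
                for (int i = 0; i < 256; i++) {
                    r[i] = g[i] = b[i] = (byte) i; // greyscale ramp
                }
                IndexColorModel cm = new IndexColorModel(8, 256, r, g, b);
                BufferedImage img = new BufferedImage(3000, 3000,
                        BufferedImage.TYPE_BYTE_INDEXED, cm);
                // One byte per pixel: roughly 9 MB for 3000 x 3000, versus
                // ~36 MB for the default 4-byte-per-pixel INT_ARGB layout.
                System.out.println(img.getColorModel().getPixelSize() + " bits/pixel");
            }
        }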

  • Handling very large diagrams in Pages?

    I am writing a book that sometimes requires the use of large diagrams. These are vector-based diagrams (PDF). Originally, I planned to use iBooks Author and widgets to let the user zoom/pan/scroll and use other nice interactive stuff, but after having tried everything, I have decided to give up on iBooks Author and iBooks for now because of their dismal handling of images (pixels only, low resolution, limited size, etc.).
    I am planning to move my project over to Pages. Not having the 'interactive widget' approach means I need some way to handle large images. I have been thinking about placing very large images multiple times on different pages with different masks. Any other possible tricks? Can I have documents with multiple page sizes? Do I need a trick like the one above, or can an ePub book be zoomed/panned/scrolled, maybe using something other than iBooks to read it?

    Peter, that was indeed what I expected. But it turns out that while iBooks Author can take PDF, iBooks cannot, and iBooks Author renders them to low-resolution images (probably PNG) when compiling the .ibook from the .iba.
    Even if you use PNG in the first place, the export function of iBooks Author (either to PDF or to iBook) creates low-resolution renders.
    The iBooks format is more of a web-based format. The problem lies not in what iBooks Author can handle, but in how it compiles it to the iBooks format. It uses the same export function for PDF, making PDF export ugly and low-res as well.
    iBooks Author has more drawbacks; for instance, if you want to change the image inside a picture, you can't. You have to replace the entire picture, and that process breaks all the links to it.
    iBooks Author / iBooks is far from mature.

  • Creating Very Big images from Flex

    Hi,
    I am trying to create a small image manipulator, in which I have some image in the background and then add other images, text, etc. to it, capture the bitmap data of the entire canvas, and save the image on the server side. Now my issue is that I want to convert those images (created from Flex BitmapData via a ByteArray) to very large images, i.e. with dimensions measured in feet. Can anybody suggest how to achieve this? When I try to scale the BitmapData, the quality degrades rapidly. And there are also size limitations on BitmapData, so I am far from my actual requirement.
    Thanks in advance,
    Shardul Singh

    This code works:
        public class ImagePanel extends JPanel {
            Image bufferedImage;
            Image grass;
            // grass image is loaded up here;

            public void paint(Graphics g) {
                super.paintComponent(g);
                bufferedImage = createImage(15 * increm, 14 * increm);
                Graphics bufferedGraphics = bufferedImage.getGraphics();
                bufferedGraphics.drawImage(grass, 0, 0, this);
                bufferedGraphics.drawImage(grass, 32, 32, this);
                bufferedGraphics.drawImage(grass, 64, 64, this);
                g.drawImage(bufferedImage, 0, 0, this);
            }
        }
    However, this code does not:
        public class ImagePanel extends JPanel {
            Image bufferedImage;
            Image grass;
            // grass image is loaded up here;

            public void setUpImage() {
                bufferedImage = createImage(15 * increm, 14 * increm);
                Graphics bufferedGraphics = bufferedImage.getGraphics();
                bufferedGraphics.drawImage(grass, 0, 0, this);
                bufferedGraphics.drawImage(grass, 32, 32, this);
                bufferedGraphics.drawImage(grass, 64, 64, this);
            }

            public void paint(Graphics g) {
                setUpImage();
                super.paintComponent(g);
                g.drawImage(bufferedImage, 0, 0, this);
            }
        }
    I want to have a method that sets up the image so I can move it around, add to it, take away from it, etc.; that is what I want setUpImage() to do. I can't figure out why it works the first way and not the second.
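    A hedged note on that reply (it answers a Swing question rather than the original Flex one): java.awt.Component.createImage returns null while the component is not yet displayable, so building the buffer outside the paint path can fail depending on timing. A sketch of the usual Swing pattern, using a BufferedImage (which can be created at any time) and overriding paintComponent:
        import java.awt.Graphics;
        import java.awt.Graphics2D;
        import java.awt.image.BufferedImage;
        import javax.swing.JPanel;

        public class ImagePanel extends JPanel {
            private BufferedImage buffer;

            // Can be called at any time, even before the panel is shown,
            // because BufferedImage does not depend on the component's peer.
            public void setUpImage(int w, int h) {
                buffer = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
                Graphics2D g2 = buffer.createGraphics();
                // ... draw tiles into g2 here ...
                g2.dispose();
            }

            @Override
            protected void paintComponent(Graphics g) {
                super.paintComponent(g);
                if (buffer != null) {
                    g.drawImage(buffer, 0, 0, this);
                }
            }
        }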
