OutOfMemoryError when creating a large BufferedImage

I need to create a very large TIFF file which is made up of a bunch of smaller images (note: not a multi-page TIFF). If I try to do it all in memory, I get an OutOfMemoryError. The image is extremely large, about 2500x123000 pixels.
I believe that the correct way to do this is to create an ImageWriter whose output stream points to a file. I'd then write an initial "background" image using writer.write(BufferedImage), then load a bunch of smaller images and write them into various places using writer.replacePixels.
My problem is the initial background image. How do I tell my ImageWriter to write a background image of 2500x123000 without creating a BufferedImage of that size in memory? Is it something to do with tiling? Is there some other way to tell my ImageWriter the dimensions of the image I'm trying to write?

This is a hack for writing a big TIFF using a BufferedImage with height 1.
    // Requires the JAI Image I/O TIFF plugin (TIFFImageWriter lives in
    // com.sun.media.imageioimpl.plugins.tiff).
    Iterator<ImageWriter> iter = ImageIO.getImageWritersByFormatName("TIF");
    TIFFImageWriter writer = (TIFFImageWriter) iter.next();
    ImageOutputStream stream = ImageIO.createImageOutputStream(new FileOutputStream("d:/test.tif"));
    writer.setOutput(stream);
    // Claims to be 10000x10000, but only one row of pixels is ever allocated.
    BufferedImage image = new DummyBufferedImage(10000, 10000, BufferedImage.TYPE_INT_RGB);
    writer.write(image);
    writer.dispose();
    stream.close();
And DummyBufferedImage:
class DummyBufferedImage extends BufferedImage {
    int dummyHeight;

    public DummyBufferedImage(int width, int height, int imageType) {
        // Only a single row of pixels is actually allocated.
        super(width, 1, imageType);
        dummyHeight = height;
        Graphics g = createGraphics();
        g.setColor(Color.WHITE);
        g.fillRect(0, 0, getWidth(), 1);
        g.dispose();
    }

    @Override
    public int getHeight() {
        // Report the full logical height to the ImageWriter.
        return dummyHeight;
    }

    @Override
    public Raster getData(Rectangle rect) {
        SampleModel sm = getRaster().getSampleModel();
        SampleModel nsm = sm.createCompatibleSampleModel(rect.width, rect.height);
        WritableRaster wr = Raster.createWritableRaster(nsm, rect.getLocation());
        int width = rect.width;
        int height = rect.height;
        int startX = rect.x;
        int startY = rect.y;
        Object tdata = null;
        for (int i = startY; i < startY + height; i++) {
            // The backing raster is only one row tall, so always copy row 0.
            tdata = getRaster().getDataElements(startX, 0, width, 1, tdata);
            wr.setDataElements(startX, i, width, 1, tdata);
        }
        return wr;
    }
}
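For the second half of the plan in the question (patching the smaller images into the file), a minimal sketch of the standard ImageWriter replacePixels flow, to be run before writer.dispose(). This assumes the TIFF plugin reports canReplacePixels as true; small.png and the x/y position are placeholders:

    // Sketch only: overwrite a region of image 0 with a small tile.
    int x = 0, y = 0; // hypothetical tile position
    BufferedImage tile = ImageIO.read(new File("small.png")); // hypothetical tile
    if (writer.canReplacePixels(0)) {
        writer.prepareReplacePixels(0, new Rectangle(x, y, tile.getWidth(), tile.getHeight()));
        ImageWriteParam param = writer.getDefaultWriteParam();
        param.setDestinationOffset(new Point(x, y));
        writer.replacePixels(tile, param);
        writer.endReplacePixels();
    }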

Similar Messages

  • OutOfMemoryError when retrieving large resultset

    In my application I have one type of object (call it O for now) that has
    a lot of data in the database (50000+ rows). Now I need to create an
    object of class T that has an m-n binding to 40000 instances of O. In the
    database this is mapped to a link table between the tables for O and T.
    Now I get an OutOfMemoryError when I run the following code to add
    one T to the aforementioned O's:
    PersistenceManager pm = VRFUtils.getPM(getServlet());
    T t = new T();
    //Fill arbitrary fields
    t.setToelichting(mtForm.getToelichting());
    //Add T to a set of O's
    Set os = new HashSet();
    Query q = pm.newQuery(O.class,"aangemaakt==parmaanmaak");
    q.declareParameters("java.util.Date parmaanmaak");
    os.addAll((Collection) q.execute(field));
    t.setOs(os);
    //Make T persistent
    pm.currentTransaction().begin();
    pm.makePersistent(t);
    pm.currentTransaction().commit();
    pm.close();
    After debugging I've found that the OutOfMemoryError occurs even when I
    don't make anything persistent, but simply run the query that retrieves
    40000 records and do a c.size() on the result.
    I'm running Kodo against MySQL using Optimistic transactions.
    What I do appreciate is that the OutOfMemoryError does not upset the
    rest of the Kodo system.
    Please advise,
    Martin van Dijken
    PS: The c.size() issue I've been able to resolve using KodoQuery's
    setResult("count(this)"), but I can't think of anything like that which
    would apply to this situation.

    As you may know, Kodo has several settings that allow you to use large
    result sets without running out of memory (assuming your driver supports
    advanced features):
    http://www.solarmetric.com/Software/Documentation/latest/docs/ref_guide_dbsetup_lrs.html
    Kodo also allows you to apply these settings to persistent fields, so
    that the entire collection or map field does not reside in memory:
    http://www.solarmetric.com/Software/Documentation/latest/docs/ref_guide_pc_scos.html#ref_guide_pc_scos_proxy_lrs
    Unfortunately, you are apparently trying to create a relation containing
    more objects than your system can handle, and when creating a relation
    you currently have to load all the related objects into memory at once.
    In the next beta release of 3.1, there is a solution to this problem.
    Our next release no longer holds hard references to objects that have
    been flushed. So by using the proper large result set settings as noted
    above, and by making your relation a large result set relation also as
    noted above, and by adding only a few hundred objects at a time from the
    query result to the relation and then manually flushing, you should be
    able to perform a transaction in which all the queried objects are
    transferred from the query result to the large result set relation
    without exhausting memory.
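    A minimal sketch of that batch-and-flush pattern, assuming Kodo's
    KodoPersistenceManager exposes flush() as described for the 3.1 betas;
    t.getOs() and the batch size of 500 are placeholders:

        // Sketch: add queried objects to the relation a few hundred at a
        // time, flushing so the runtime can drop its hard references.
        pm.currentTransaction().begin();
        T t = new T();
        pm.makePersistent(t);
        int count = 0;
        Collection result = (Collection) q.execute(field);
        for (Iterator it = result.iterator(); it.hasNext();) {
            t.getOs().add(it.next()); // the large result set relation
            if (++count % 500 == 0) {
                ((KodoPersistenceManager) pm).flush(); // release flushed objects
            }
        }
        pm.currentTransaction().commit();
        pm.close();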

  • OutOfMemoryError When Sending Large Image

    Hi All,
    We are developing a midlet that is essentially a photo blogging app. We are using an HTTP POST to send the image to the web server. The code is working properly, as we are able to send images up to about 80KB.
    However, when sending images larger than ~80KB, the midlet gets an OutOfMemoryError.
    The image is being sent in chunked data packets, so shouldn't this mean we could send any size file since it would just keep sending more data chunks until it has reached the end of the file?...
    Has anyone else out there encountered this or perhaps know of a work around?
    Any help would be greatly appreciated.
    Thanks!
    Jim

    We are currently loading the entire image into memory, which is
    probably causing the OutOfMemory errors. Would you know how native
    phone applications send images over HTTP that are many times the
    size of available memory?
    The only way I could think of is to somehow connect the input stream
    which gets the image from the phone's memory and the output stream
    which writes out the HTTP data. By doing this no new byte array will get
    declared (explicitly anyway) to temporarily hold the entire image in the phone's
    memory. I'm wondering could I accomplish something like that by somehow
    collapsing these two chunks of code into one that declares no arrays to hold
    the entire image in memory?
    Here's our current code for reference:
    Getting the image from the phone (theFile is a FileConnection object which references the image file we want):
        InputStream fileInputStream = theFile.openInputStream();
        fileContent = new byte[(int) filesize];
        // Note: a single read() is not guaranteed to fill the whole array.
        fileInputStream.read(fileContent);
        fileInputStream.close();
    Writing the image to the HTTP output stream (data is a byte[] holding the entire image in memory, httpOut is a DataOutputStream):
        SuperViewerMidlet.httpOut.write(data);
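    A minimal sketch of collapsing those two chunks into one streaming copy,
    assuming httpOut is already open on a chunked HTTP connection; only a
    small buffer is ever allocated:

        // Sketch: pipe the file straight to the HTTP stream in 4KB chunks
        // instead of buffering the whole image in a byte[].
        InputStream in = theFile.openInputStream();
        try {
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) {
                SuperViewerMidlet.httpOut.write(buf, 0, n);
            }
            SuperViewerMidlet.httpOut.flush();
        } finally {
            in.close();
        }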

  • Error when creating index with parallel option on very large table

    I am getting a
    "7:15:52 AM ORA-00600: internal error code, arguments: [kxfqupp_bad_cvl], [7940], [6], [0], [], [], [], []"
    error when creating an index with parallel option. Which is strange because this has not been a problem until now. We just hit 60 million rows in a 45 column table, and I wonder if we've hit a bug.
    Version 10.2.0.4
    O/S Linux
    As a test I removed the parallel option and several of the indexes were created with no problem, but many still threw the same error... Strange. Do I need a patch update of some kind?

    This is most certainly a bug.
    From metalink it looks like bug 4695511 - fixed in 10.2.0.4.1

  • Where are files downloaded to on the Mac when creating a PDF from web pages?

    Where on the Mac HD are files downloaded to when creating a PDF from web pages?
    I'm creating a PDF from the whole site, so creating a large document, and wondered where these files are stored on the Mac so I can delete them when finished, as I don't want to clog up the hard drive.

    Look at the LiveCycle server products.

  • When creating a comment summary, I want to end up with a pdf of comments only. In addition, I do not want author, date, time, or page number to show in my comments summary. How can I do this?

    When creating a comment summary, I want to end up with a pdf of comments only. In addition, I do not want author, date, time, or page number to show in my comments summary. How can I do this? I do a lot of reading, and when I summarize my comments, I like to save these comments in a larger document I compile over time. I typically create a comment summary, export it to a text editor and find myself having to delete author, date, time, page number, and anything else that is not the raw and net comment. It would save me some good time if someone showed me how to do this. The reason I do not want any additional information is because it simply adds stuff I do not need in my compilation of summaries for large amounts of papers over time.

    What you're describing can't be achieved using the built-in comment summary function of Acrobat, but it can certainly be done if a custom-made script is used to generate the summary, because then you're in full control of what's included in it and what's not.

  • CS5 "save as" format menu changed when creating new document

    Hi,
    When creating a new document in Photoshop CS5 I cannot save as a JPEG or PNG or the other large list of file types I used to have in the Save As format menu.
    Below is what I used to be able to save as, but I can only do this when actually opening a JPEG file, not when creating a new document.
    These are the options I have now.
    I haven't been using Photoshop that long and am no genius with PCs, so I'm a little lost here. Does anyone know why this has happened and if there is a solution? Your help will be greatly appreciated. My specs are Windows 7 64-bit, Core i7, Nvidia 260GTX.

    It looks as if you are creating documents in a format incompatible with certain file types.
    For example, check in the New dialog that it is set for RGB Color – 8-bit.

  • How to set SAXParser at command-line interface to create a large XML file

    Hi,
    I am trying to create a large XML file (more than 50 MB) by selecting from an Oracle database, but failed because of an "out of memory" error. According to the "Oracle XML Developer Guide", we should use SAXParser to parse a large XML file. But there is no example showing how to set SAXParser at the command line.
    Following is what I use to get xml files. It works only when the file is small.
    java OracleXML getXML -DateFormat -withDTD -rowsetTag PO_HDR -conn
    "jdbc:oracle:oci8:@server_name" -user "ID/password" "select * from table_name"
    When I set SAXParser at the way below,
    java oracle.xml.parser.v2.SAXParser OracleXML getXML -DateFormat -withDTD -rowsetTag PO_HDR -conn
    "jdbc:oracle:oci8:@server_name" -user "ID/password" "select * from table_name"
    it failed with the error message: "In class oracle.xml.parser.v2.SAXParser: void main(String argv[]) is not defined"
    Does anyone know how to solve this problem? I'd very much appreciate your help.
    Yi

    Here are my ideas:
    Register the XML schema.
    Using xmldom, generate the desired XML output and return it as an XMLType.
    Then you can use something like this to check:
    declare
        xmldoc xmltype;
    begin
        -- populate xmldoc from your xmldom function
        -- validate against the registered XML schema
        if xmldoc.isSchemaValid(schema_url, root_element) = 1 then
            null; -- valid schema
        else
            null; -- invalid
        end if;
    end;
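    If the goal is just to get a large result set onto disk as XML without
    building the document in memory, a plain JDBC loop that streams each row
    to a file is another route, sidestepping the OracleXML utility entirely.
    A minimal sketch; connection details, table and column names are
    placeholders, and real code would XML-escape the values:

        // Sketch: write each row as it is fetched; only one row at a
        // time is held in memory.
        Connection conn = DriverManager.getConnection("jdbc:oracle:oci8:@server_name", "ID", "password");
        Statement stmt = conn.createStatement();
        stmt.setFetchSize(500); // fetch in batches rather than all at once
        ResultSet rs = stmt.executeQuery("select * from table_name");
        ResultSetMetaData md = rs.getMetaData();
        Writer out = new BufferedWriter(new FileWriter("po_hdr.xml"));
        out.write("<PO_HDR>\n");
        while (rs.next()) {
            out.write("<ROW>");
            for (int i = 1; i <= md.getColumnCount(); i++) {
                String col = md.getColumnName(i);
                out.write("<" + col + ">" + rs.getString(i) + "</" + col + ">");
            }
            out.write("</ROW>\n");
        }
        out.write("</PO_HDR>\n");
        out.close();
        rs.close();
        stmt.close();
        conn.close();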

  • When creating a PDF from the scanner, it only scans one side despite the settings.

    When creating a PDF from the scanner, it only scans one side. I am using a fi-7160 that supports scanning both sides at once. The Scanner software is configured to scan both sides, and so are the presets in Adobe Acrobat Pro. I've only gotten the setting to work with the Custom Scan setting. Has anyone else had this problem, and what is the solution?
    Additionally, the software bundled with the scanner will automatically omit the back side of a page if it is blank. I am scanning large files with occasional double-sided pages. Is there a way to avoid having to manually delete blank pages when creating a new PDF from the scanner? It takes twice as long because I'm using the OCR stabilization.

    Hi Anne,
    Could you please let me know which Acrobat and OS versions you are using.
    Does this happen with one particular PDF or with all PDF files?
    Try repairing Acrobat and then check again.
    Hope to get your response.
    Regards,
    Anubha

  • Selecting an object fails when creating a new dashboard using the 'New Dashboard and Widget Wizard'

    Hi,
    I am using SCOM 2012 SP1
    I have recently been experiencing an issue when trying to create a new dashboard using the wizard. I get to the step where you select a group or object to add to the Scope and Counters section. When I try to search for a particular object such as a
    port or interface, it times out and returns an error. The error is shown below:
    Please provide the following information to the support engineer if you have to contact Microsoft Help and Support :
    Microsoft.EnterpriseManagement.Presentation.DataAccess.DataAccessDataNotFoundException: Exception reading objects ---> Microsoft.EnterpriseManagement.Common.UnknownDatabaseException: The query processor ran out of internal resources and could not
    produce a query plan. This is a rare event and only expected for extremely complex queries or queries that reference a very large number of tables or partitions. Please simplify the query. If you believe you have received this message in error, contact Customer
    Support Services for more information.
       at Microsoft.EnterpriseManagement.Common.Internal.ServiceProxy.HandleFault(String methodName, Message message)
       at Microsoft.EnterpriseManagement.Common.Internal.EntityObjectsServiceProxy.GetManagedEntitiesByManagedEntityTypesAndCriteriaWithInstanceQueryOptions(IList`1 managedEntityTypeIds, IList`1 managedEntityBaseTypeIds, IList`1 criterias, String
    languageCode, InstanceQueryOptions instanceQueryOptions)
       at Microsoft.EnterpriseManagement.InstancesManagement.GetObjectsReaderInternal[T](ICollection`1 criteriaCollection, ObjectQueryOptions queryOptions)
       at Microsoft.EnterpriseManagement.Management.DataProviders.ManagedEntityProvider.GetManagedEntitiesByClass(ICollection`1 baseTypeNames, String criteriaString, List`1 valueDefinitions, List`1 sortValueDefinitions, Int32 maxEntities, String typePropertyName,
    String typeWithIconPropertyName, Boolean propertyCollectionRequested)
       --- End of inner exception stack trace ---
       at Microsoft.EnterpriseManagement.Presentation.DataAccess.DataProviderCommandMethodInvoker.Invoke()
       at Microsoft.EnterpriseManagement.Monitoring.DataProviders.RetryCommandExecutionStrategy.Invoke(IDataProviderCommandMethodInvoker invoker)
       at Microsoft.EnterpriseManagement.Presentation.DataAccess.DataProviderCommandMethod.Invoke(CoreDataGateway gateWay, DataCommand command)
       at Microsoft.EnterpriseManagement.Presentation.DataAccess.CoreDataGateway.ExecuteInternal[TResult](DataCommand command)
       at Microsoft.EnterpriseManagement.Presentation.DataAccess.CoreDataGateway.<ExecuteAsync>b__0[TResult](<>f__AnonymousType0`1 data)
    It appears to be complaining about internal resources, but we have not experienced this issue before and both the management and SQL servers have plenty of resources. The databases are held on a separate dedicated SQL server, and I have noticed that when I
    try to search for the object, the sqlservr.exe process immediately consumes 50% CPU while the query is running. As soon as it ends (errors) the sqlservr process drops back to a reasonable number.
    No changes have been made to either the management server or the SQL server. We were able to create dashboards one day, but then received this error the next.
    Has anyone else seen this issue? or can anyone explain what is happening?
    Many thanks

    Hi,
    As posted earlier, I had the same problem when creating a dashboard widget of type State.
    One solution I found was as follows: when creating the State widget, I left the option to specify the criterion blank and finished creating the dashboard. That worked around the problem, though of course it is not the real solution or the root cause.
    I will examine this issue further, and when I have the ultimate solution and the reason for this behavior, I will post it here.
    Thank you,
    Wilsterman Fernandes

  • Best way to group/create a large mosaic of film clips?

    I created this in Apple Motion and am trying to recreate in AE. It's so super simple really but very beautiful.
    http://www.youtube.com/watch?v=NI5FKQS3vtw
    Short rant: as an actual professional graphic designer (who does occasional video and 3D work), I've been realizing more and more that Apple doesn't care about working professionals, and their software shows it. I made this clip, and the UI slowly began to degrade; then it wouldn't render certain clips, and now it takes 15 minutes just to load the file.
    I've tried creating a pre-comp, but I'm concerned that it isn't the best way to accomplish what I need in the end. I just need to create one large mosaic, then have it as one layer so I can zoom in and out of it, create masks with it, etc. I wish there was a way to create one HUGE canvas (108000p LOL) so I can lay everything out exactly how I need it, then plop that into my 1080p project.
    Can anyone provide a few ideas / features / directions to research and move forward with?
    Thanks so much!

    No, unfortunately there is no easy way to do this kind of thing. Creating pre-comps within pre-comps is pretty much the way to go.
    Mylenium

  • FRM-92101 when creating excel file through webutil

    I am getting forms error FRM-92101 when loading a large number of records into an Excel file using WebUtil:
    FRM-92101
    A Network Error has occurred
    The forms client has attempted to reconnect to the server 1 time(s) without success
    I can successfully create the Excel file without the FRM-92101 when loading only 100 of the 2,000 records from the data block.
    But if I try to load 1,000 of the 2,000 records, then in the middle of the loading process I get the FRM-92101 error.
    What is causing this error? A timeout issue with forms? How much the excel file can actually handle? Could it be an security issue on the network?
    What can be done to remedy the problem? Just save data little bits at a time?
    Thanks,
    Michelle

    I was running the form from my DS, and when I put the form on the application server there were no FRM-92101 errors.
    I'm not sure why running the form from my DS caused this error while running it from the application server didn't.
    Thanks,
    Michelle

  • Creating a large catalogue with InDesign CC

    I am attempting to create a large catalogue with Adobe InDesign CC, and presume that the data merge function is the way to do this? If I use the data merge function, will the template I create adjust to varying amounts of data within the data file (e.g. some items will take up a quarter of a page, other items may take up a full page)? Also, if I use the data merge function, can I manipulate the data manually afterwards, e.g. for mistakes, alignments, etc.? Thanks

    How large is large?
    You can use data merge. I would be tempted to run it one record per page and run one of the scripts that will connect each frame to flow into each other. Then make the needed frame adjustments on each page. Not as onerous as it sounds. A script can be had on this site:
    http://www.loicaigon.com/en/solutions-en/downloads/
    Like Gert writes, edits in the merge file will require a new merge unless you purchase one of the commercial plug-ins.
    Set up paragraph and character styles in a sample of the data so that when laying out for the merge you can handle most (or all) of the styles you need.
    If you want a healthy learning curve, and you can obtain the data as an XML file, you can also go that route. Same procedure, make a smaller XML file, create the needed styles, map them and import the XML. Your frames will be whole page already and the data will flow from frame to frame without "stitching" the frames back together.
    Mike

  • PS Touch needs a warning message when importing files larger than 2048x2048 max resolution

    I opened 6500x5000 px files in PS Touch on iPad for some minor retouching. PS Touch, without notification, reduced the images to fit within 2048 x 2048 px. It happily let me open these files, work on them, and save, and never let me know it was reducing the file size, rendering all the work I did utterly useless, since 2048x2048 is far too small for print resolution for these files.
    PS Touch needs a notification or warning when importing files larger than the app's max resolution. Resizing files without notification is just asinine.

    Hi Jeff,
    For improvements or feature requests - please create an Idea for others to vote for:
    Thanks,
    Ignacio

  • ESO application creates a large number of temporary file

    Hello,
    To summarize, when users run large queries, use attachments, or generate PDFs, etc., temporary files are created in the /sourcing/tmp folder. Files older than 24 hours are cleared from this folder when logs roll over.
    Our /sourcing filesystem is not sized to handle this 'feature' and I want to look for options. I am not keen on having the application filesystem /sourcing have a large quantity of temporary files going through it. The option I can think of to get around it is to create a new filesystem, or reuse an existing one, and softlink /sourcing/tmp to it. We'll have to do this for each application server.
    Does anybody have any suggestions as to how to avoid this problem?
    Does anybody have any sizing inputs for sizing /sourcing/tmp.
    Thanks,
    Dnyandev

    All,
    The number of .blob and .bin files that get generated is large when you run volume testing. SAP has identified it as a bug in the product.
    Does anybody have a solution?
    We are currently using SAP ESourcing CLM 5.1
    Thanks,
    Dnyandev Kondekar
