RandomAccessFile and BufferedImage

Is it possible to put lots of BufferedImages inside a single file using RandomAccessFile?
If so, how can I do it? What will the file look like, and how do I read it back?
Thanks... this is urgent!

Some formats, like JPEG and especially TIFF, allow for a series of images in a single file,
but I don't think that's the case with PNG. Must your file format be PNG? If your source images
are PNGs, that doesn't necessarily mean you can't write them out in a different format,
although there are always potential issues there (transparency, etc...).
For example, here is a demo that reads four separate images, writes them to a single file,
and reads them back from that file to display the result.
import java.awt.*;
import java.awt.image.*;
import java.io.*;
import java.net.*;
import java.util.*;
import javax.imageio.*;
import javax.imageio.stream.*;
import javax.swing.*;

public class IOExample {
    public static void main(String[] args) throws IOException {
        String urlPrefix = "http://www3.us.porsche.com/english/usa/carreragt/modelinformation/experience/desktop/bilder/icon";
        String urlSuffix = "_800x600.jpg";
        int SIZE = 4;
        BufferedImage[] images = new BufferedImage[SIZE];
        for (int i = 1; i <= SIZE; ++i)
            images[i - 1] = ImageIO.read(new URL(urlPrefix + i + urlSuffix));
        File file = new File("test.jpeg");
        file.delete();
        int count = writeImages(images, file);
        if (count < SIZE)
            throw new IOException("Only " + count + " images written");
        images = null;
        images = readImages(file);
        if (images.length < SIZE)
            throw new IOException("Only " + images.length + " images read");
        display(images);
    }

    public static void display(BufferedImage[] images) {
        JFrame f = new JFrame("IOExample");
        f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        JPanel p = new JPanel(new GridLayout(0, 1));
        for (int j = 0; j < images.length; ++j) {
            JLabel label = new JLabel(new ImageIcon(images[j]));
            label.setBorder(BorderFactory.createEtchedBorder());
            p.add(label);
        }
        f.getContentPane().add(new JScrollPane(p));
        f.setSize(400, 300);
        f.setLocationRelativeTo(null);
        f.setVisible(true);
    }

    // suffix is "jpeg", "gif", "png", etc... according to your service providers
    public static ImageWriter getWriter(String suffix) throws IOException {
        Iterator<ImageWriter> writers = ImageIO.getImageWritersBySuffix(suffix);
        if (!writers.hasNext())
            throw new IOException("no writers for suffix " + suffix);
        return writers.next();
    }

    public static ImageReader getReader(String suffix) throws IOException {
        Iterator<ImageReader> readers = ImageIO.getImageReadersBySuffix(suffix);
        if (!readers.hasNext())
            throw new IOException("no reader for suffix " + suffix);
        return readers.next();
    }

    public static int writeImages(BufferedImage[] sources, File destination) throws IOException {
        if (sources.length == 0) {
            System.out.println("Sources is empty!");
            return 0;
        }
        ImageWriter writer = getWriter(getSuffix(destination));
        ImageOutputStream out = ImageIO.createImageOutputStream(destination);
        try {
            writer.setOutput(out);
            // not every writer supports sequences (the standard PNG writer, for one, does not)
            System.out.println("can write sequence = " + writer.canWriteSequence());
            writer.prepareWriteSequence(null);
            for (int i = 0; i < sources.length; ++i)
                writer.writeToSequence(new IIOImage(sources[i], null, null), null);
            writer.endWriteSequence();
        } finally {
            out.close();
        }
        return sources.length;
    }

    public static BufferedImage[] readImages(File source) throws IOException {
        ImageReader reader = getReader(getSuffix(source));
        ImageInputStream in = ImageIO.createImageInputStream(source);
        reader.setInput(in);
        ArrayList<BufferedImage> images = new ArrayList<BufferedImage>();
        GraphicsConfiguration gc = getDefaultConfiguration();
        try {
            for (int j = 0; true; ++j)
                images.add(toCompatibleImage(reader.read(j), gc));
        } catch (IndexOutOfBoundsException e) {
            // reader.read(j) throws once j is past the last image in the file
        } finally {
            in.close();
        }
        return images.toArray(new BufferedImage[images.size()]);
    }

    public static String getSuffix(File file) throws IOException {
        String filename = file.getName();
        int index = filename.lastIndexOf('.');
        if (index == -1)
            throw new IOException("No suffix given for file " + file);
        return filename.substring(1 + index);
    }

    // make compatible with gc for faster rendering
    public static BufferedImage toCompatibleImage(BufferedImage image, GraphicsConfiguration gc) {
        int w = image.getWidth(), h = image.getHeight();
        int transparency = image.getColorModel().getTransparency();
        BufferedImage result = gc.createCompatibleImage(w, h, transparency);
        Graphics2D g = result.createGraphics();
        g.drawRenderedImage(image, null);
        g.dispose();
        return result;
    }

    public static GraphicsConfiguration getDefaultConfiguration() {
        GraphicsEnvironment ge = GraphicsEnvironment.getLocalGraphicsEnvironment();
        GraphicsDevice gd = ge.getDefaultScreenDevice();
        return gd.getDefaultConfiguration();
    }
}
Another option that always works is to use zip format on your file, and put whatever you want
into it. The classes in package java.util.zip make this straightforward.
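For example, a minimal sketch of the zip approach (class and entry names are just illustrative): each image becomes one PNG entry in the archive.

import java.awt.image.BufferedImage;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;
import javax.imageio.ImageIO;

public class ZipImages {
    public static void writeImages(BufferedImage[] images, File destination) throws IOException {
        try (ZipOutputStream zip = new ZipOutputStream(new FileOutputStream(destination))) {
            for (int i = 0; i < images.length; i++) {
                zip.putNextEntry(new ZipEntry("image" + i + ".png")); // one entry per image
                ImageIO.write(images[i], "png", zip);                 // entry body is the PNG bytes
                zip.closeEntry();
            }
        }
    }
}

Reading back is symmetric: open a ZipInputStream, call getNextEntry() for each image, and hand the stream to ImageIO.read().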

Similar Messages

  • Difference between using RandomAccessFile and OutputStream

    Hi,
    I'm implementing a download manager. I've read some source code that uses RandomAccessFile and other code that uses OutputStream and BufferedOutputStream,
    so can anyone give me a quick summary of the difference between them, and which is better?

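    For what it's worth, the practical difference for a download manager is seekability: a RandomAccessFile can seek() to any offset, so several threads can each write their own segment of the same file, while an OutputStream (buffered or not) only writes sequentially. A minimal sketch, with hypothetical names:

    import java.io.IOException;
    import java.io.RandomAccessFile;

    public class SegmentWriter {
        // write one downloaded chunk at its absolute offset in the target file
        public static void writeSegment(String path, long offset, byte[] chunk, int len)
                throws IOException {
            try (RandomAccessFile raf = new RandomAccessFile(path, "rw")) {
                raf.seek(offset);          // jump to this segment's position
                raf.write(chunk, 0, len);
            }
        }
    }

    For a plain single-threaded download, a BufferedOutputStream is simpler and at least as fast; RandomAccessFile earns its keep only when chunks arrive out of order.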

  • Canvas and BufferedImage display differently.

    I use the same method to paint the Canvas and the BufferedImage, but the Graphics2D drawString output
    shows up correctly on the Canvas and not on the BufferedImage (JPEG). Why is there a difference?
    I use Graphics2D to rotate the text 90 degrees.
    I can't think of any reason why the display comes out differently.
    Any suggestions???
    Thanks.

    Here is the code for putting the Canvas graphics onto the BufferedImage (JPEG).
    public void save(HCanvas comp) {
        int w = comp.getWidth();
        int h = comp.getHeight();
        Frame frame = new Frame();
        BufferedImage im = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        Graphics2D g2 = im.createGraphics();
        g2.setPaint(Color.white);
        g2.fillRect(0, 0, w, h);
        g2.setPaint(Color.black);           // back to default color
        comp.paint(g2);                     // paint the canvas onto the image
        g2.dispose();
        FileDialog fd = new FileDialog(frame, "Save As JPEG", FileDialog.SAVE);
        fd.setVisible(true);                // show() is deprecated
        try {
            String dum = fd.getDirectory() + fd.getFile();
            OutputStream out = new FileOutputStream(dum);
            JPEGImageEncoder encoder = JPEGCodec.createJPEGEncoder(out);
            encoder.encode(im);
            out.flush();
            out.close();
            im.flush();
        } catch (Exception e) {             // was: catch (Exception FileNotFoundException)
            JOptionPane.showMessageDialog(frame, "File Not Assigned");
        }
    }

  • Apple OS X and BufferedImage

    I have written an application using 12 bit grayscale images with a Custom BufferedImage type (using a BandedSampleModel and ComponentColorModel).
    One of my clients works with Apple computers running OS X version 10.0.4.
    When loading an image, the following error is logged:
    bad image type 11, kCGSErrorIllegalArgument : CGSImageCreateWithPixels: Unknown pixel encoding
    kCGSErrorCannotComplete : CGSImageCreateWithDescriptionUnknown image description
    kCGSErrorFailure : RIPContext(0x26970a0): CGSImage: illegal image description
    kCGSErrorIllegalArgument : CGSImageCreateWithPixels: Unknown pixel encoding
    kCGSErrorCannotComplete : CGSImageCreateWithDescriptionUnknown image description
    kCGSErrorFailure : RIPContext(0x26970a0): CGSImage: illegal image description
    To me it seems that the image model of java.awt.image is not fully supported.
    Is this a known problem?
    (Even when registering to their developer program I could not get access to their bug database)
    Will it be solved when upgrading to OS X 10.1 ?

    For those who have a similar problem. I sent a bug report to Apple and it is filed with the id #2852642.

  • Socket and BufferedImage

    Hi all.
    I have a server that creates a BufferedImage from a webcam, and I need to send it to the client's JFrame to be displayed. I don't know how to send/receive the BufferedImages.
    I'd appreciate any help on this. Thanks

    Forgot something - I'm using sockets
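    One approach (a sketch with assumed names, not from the thread): encode each image to a byte array with ImageIO, then length-prefix it so the receiver knows exactly where one frame ends.

    import java.awt.image.BufferedImage;
    import java.io.*;
    import java.net.Socket;
    import javax.imageio.ImageIO;

    public class ImageSocket {
        // server side: encode to memory, announce the size, then send the bytes
        public static void send(Socket socket, BufferedImage image) throws IOException {
            ByteArrayOutputStream baos = new ByteArrayOutputStream();
            ImageIO.write(image, "png", baos);
            DataOutputStream out = new DataOutputStream(socket.getOutputStream());
            out.writeInt(baos.size());   // frame length prefix
            baos.writeTo(out);
            out.flush();
        }

        // client side: read exactly the announced number of bytes, then decode
        public static BufferedImage receive(Socket socket) throws IOException {
            DataInputStream in = new DataInputStream(socket.getInputStream());
            byte[] buf = new byte[in.readInt()];
            in.readFully(buf);
            return ImageIO.read(new ByteArrayInputStream(buf));
        }
    }

    Because each frame carries its own length, the same connection can carry any number of images.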

  • Parsing Problems? ImageIcon and BufferedImage

    Hi to everyone
    Could I ask: if I send an ImageIcon that was constructed with a BufferedImage as its argument, what should I receive on the server side: just an ImageIcon, or an ImageIcon with the BufferedImage still inside?
    Any help will be appreciated. Thank You.

    I am sending the ImageIcon over to the server, so the server receives an ImageIcon.
    Getting the Image back out of the ImageIcon works fine, but when I try to cast that Image to a BufferedImage I run into a casting problem, so could you advise me further?
    The error message is "Receiving failed: java.lang.ClassCastException"
    Any help will be appreciated. Thank You.
    public void getImage() {
        Image images = null;
        try {
            imgIcon = (ImageIcon) oin.readObject();
            System.out.println("Icon Width: " + imgIcon.getIconWidth());
            while ((images = imgIcon.getImage()) != null) {
                System.out.println("Images Width: " + images.getWidth(null));
                BufferedImage bimage = (BufferedImage) images; // PROBLEM LIES HERE
                System.out.println("Bimage Width: " + bimage.getWidth());
                ImageServerConnection.writeImage(bimage, "happy.jpg");
            }
        } catch (Exception e) {
            System.err.println("Receiving failed: " + e);
        }
    }
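    The cast fails because ImageIcon.getImage() normally hands back a Toolkit image, not a BufferedImage. The usual fix (a sketch, not the poster's code) is to draw the Image into a fresh BufferedImage:

    import java.awt.Graphics2D;
    import java.awt.Image;
    import java.awt.image.BufferedImage;

    public class ImageToBuffered {
        public static BufferedImage toBufferedImage(Image image) {
            if (image instanceof BufferedImage)
                return (BufferedImage) image;       // already the right type
            BufferedImage result = new BufferedImage(
                    image.getWidth(null), image.getHeight(null), BufferedImage.TYPE_INT_ARGB);
            Graphics2D g = result.createGraphics();
            g.drawImage(image, 0, 0, null);         // render the Image into the buffer
            g.dispose();
            return result;
        }
    }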

  • Help with RandomAccessFile and arabic writing with it

    Hello everyone. I'm trying to write Arabic text to a file using RandomAccessFile. I know how to do it with a Writer by just setting the encoding to UTF-8, but the problem is how to do it with a RandomAccessFile, because I need a RandomAccessFile so I can write at different positions in the file. If anyone has an example, even in another language, please share it; I really need this.

    Well, there's the writeUTF() method. But I don't think you are going to be happy with RandomAccessFile. Unless you can tell in advance how many bytes your String will require when converted to UTF-8, your idea of writing to different positions in the file is likely to cause problems when things start to overlap.
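    One way to sidestep the overlap problem (a sketch, assuming a fixed maximum record size) is to encode the String to UTF-8 yourself and pad every record to a fixed-size slot, so seek positions never collide:

    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.nio.charset.StandardCharsets;
    import java.util.Arrays;

    public class FixedSlotWriter {
        static final int SLOT_SIZE = 256; // assumed maximum record size in bytes

        public static void writeSlot(RandomAccessFile raf, int slot, String text) throws IOException {
            byte[] utf8 = text.getBytes(StandardCharsets.UTF_8);
            if (utf8.length > SLOT_SIZE)
                throw new IOException("record too long: " + utf8.length + " bytes");
            raf.seek((long) slot * SLOT_SIZE);
            raf.write(Arrays.copyOf(utf8, SLOT_SIZE)); // zero-padded to the slot size
        }
    }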

  • Reading and Writing from a text file at the same time

    I know how to use the Scanner and PrintWriter to read from and write to a .txt file, but these are limited. How can I read and write at the same time, such as opening a file and changing every third character, or changing every second word to something else and then writing it back? I found this [http://java.sun.com/docs/books/tutorial/essential/io/|http://java.sun.com/docs/books/tutorial/essential/io/] but it's a little over my head. Is this the only way to do it?

    wrote:
    You are using buffered reads and writes I would assume, right? Also, how do you think most programs handle this sort of thing?
    I don't believe I'm using buffering. My code looks something like this:
    //...necessary imports
    //then
    Scanner inFile = new Scanner(new File("filename1.txt"));
    PrintWriter outFile = new PrintWriter("filename2.txt");
    //then stuff like
    int x = inFile.nextInt(); // hasNextInt() returns a boolean; nextInt() reads the value
    outFile.println(x);
    camickr wrote:
    If you are changing the data "in place", that is, none of the data in the file is shifted, then you can use a RandomAccessFile.
    Otherwise, you've been given the answer above.
    What is RandomAccessFile? Is it what I have a link to? Basically what I do is write a bunch of numbers to a txt file and then change the numbers I don't need anymore to 0. So say I had 0 1 2 3 4 5 6 7 etc. I would like to open the txt file and change every second one to 0.
    I looked at the documentation for RandomAccessFile and it seems like it might be what I need.
    Thank you both for your help so far. I took a Java course in high school and they only taught me one way to get data from text files, which is what I just showed you. So maybe these questions are really stupid. lol
    Edited by: qw3n on Jun 13, 2009 7:46 PM
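    For that exact example (single-digit numbers separated by single spaces), an in-place rewrite works because the replacement is the same byte length as the original; a minimal sketch, with that layout assumption made explicit:

    import java.io.IOException;
    import java.io.RandomAccessFile;

    public class ZeroEverySecond {
        public static void main(String[] args) throws IOException {
            try (RandomAccessFile raf = new RandomAccessFile("numbers.txt", "rw")) {
                // layout assumption: "0 1 2 3 ..." - every second digit sits at offset 2, 6, 10, ...
                for (long pos = 2; pos < raf.length(); pos += 4) {
                    raf.seek(pos);
                    raf.write('0'); // overwrite one byte in place; file length is unchanged
                }
            }
        }
    }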

  • How to read a file and save the line number of the last line read?

    Hi,
    I am using RandomAccessFile and File as my classes. I am trying to read a log file as it gets updated and print it to a Java window. So far I have the framework set up, but I don't know how to save the number of the last line read.
    I need this variable so I don't reprint the same lines to the Java window. Please advise.
    Thanks!

    Hi,
    I now have the line number of the last line read, but when I reopen the file, how can I skip that number of lines?
    Thanks,
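    Rather than counting lines, it is usually easier to remember the byte position: save getFilePointer() after reading and seek() back to it when the file is reopened. A minimal sketch (names are illustrative):

    import java.io.IOException;
    import java.io.RandomAccessFile;

    public class LogTail {
        private long lastPosition = 0; // persist this value between runs if needed

        public void printNewLines(String path) throws IOException {
            try (RandomAccessFile raf = new RandomAccessFile(path, "r")) {
                raf.seek(lastPosition);              // skip everything already printed
                String line;
                while ((line = raf.readLine()) != null)
                    System.out.println(line);        // or append to the Java window
                lastPosition = raf.getFilePointer(); // remember where we stopped
            }
        }
    }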

  • 1.4.2 and 1.5 serialization portability

    Hello,
    I came across a problem reading a class previously serialized with JDK 1.4.2_02 using JDK 1.5.
    I have an embedded object database and everything works fine with JDK 1.4. The problem happens when I try to run the program with JRE 1.5 (the program is compiled with JDK 1.4.2).
    OS is Windows2000.
    Any suggestions? Did anybody encounter this type of the problem?
    Thank for your help.
    Leo
    The exception is happening in the following method:
    // fields
    public BTreePageHeader m_header;
    public long[] m_key;    // array of key values
    public long[] m_recPtr; // record pointers
    public long[] m_link;   // links to other pages

    public void readExternal(ObjectInput in) throws IOException, ClassNotFoundException {
        // read back key information
        m_header = (BTreePageHeader) in.readObject();
        // read link arrays
        m_key = (long[]) in.readObject(); // ***** here is the exception with JDK 1.5 *****
        m_recPtr = (long[]) in.readObject();
        m_link = (long[]) in.readObject();
    }
    Following is the exception:
    java.io.StreamCorruptedException
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1323)
    at java.io.ObjectInputStream.readArray(ObjectInputStream.java:1600)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1290)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:339)
    at tfcomp.core.Store.Db.BTreePage.readExternal(BTreePage.java:102)
    at java.io.ObjectInputStream.readExternalData(ObjectInputStream.java:1753)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1711)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1296)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:339)
    at tfcomp.core.Store.Db.ObjectDatabaseFile.readObject(ObjectDatabaseFile.java:390)

    Hi mlk,
    It is not a Bean; I am using an internal object database that is based on RandomAccessFile and serialization of the objects.
    I am a bit confused by the statement that serialization is not portable between versions - is there a more detailed analysis of that somewhere? This is a major problem for me. It was my understanding that as long as one uses Externalizable and/or declares a serialVersionUID, there should not be any problem and backwards compatibility is supported. If this is not true, all object databases have a problem.
    If anybody has experience with this, or can direct me to information related to this problem, I would much appreciate your help.
    Best regards,
    Leo
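    For reference, the usual way to pin stream compatibility is to declare the version explicitly; a minimal sketch, borrowing the class name from the stack trace (the field writes are placeholders):

    import java.io.Externalizable;
    import java.io.IOException;
    import java.io.ObjectInput;
    import java.io.ObjectOutput;

    public class BTreePage implements Externalizable {
        // a fixed serialVersionUID keeps the class descriptor stable across JDK versions
        private static final long serialVersionUID = 1L;

        public void writeExternal(ObjectOutput out) throws IOException {
            // write fields here, in exactly the order readExternal reads them back
        }

        public void readExternal(ObjectInput in) throws IOException, ClassNotFoundException {
            // read fields here, in the same order they were written
        }
    }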

  • ASCII character/string processing and performance - char[] versus String?

    Hello everyone
    I am a relative novice to Java; I have a procedural C programming background.
    I am reading many very large (many GB) comma/double-quote separated ASCII CSV text files and performing various kinds of pre-processing on them, prior to loading into the database.
    I am using Java7 (the latest) and using NIO.2.
    The IO performance is fine.
    My question is regarding the performance of char[] arrays versus the String and StringBuilder classes with their charAt() methods.
    I read a file one line/record at a time and then process it. Regex is not an option (too slow, and it cannot handle all the cases I need to cover).
    I noticed that accessing a single character of a given String (or StringBuilder) via charAt(i) is several times (5 times+?) slower than indexing into a char array.
    My question: is this a correct observation of the charAt() versus char[i] performance difference, or am I doing something wrong in the String case?
    What is the best way (performance) to process character strings inside Java if I need to process them one character at a time ?
    Is there another approach that I should consider?
    Many thanks in advance

    >
    Once I took that String.length() method out of the 'for loop' and used integer length local variable, as you have in your code, the performance is very close between array of char and String charAt() approaches.
    >
    You are still worrying about something that is irrelevant in the greater scheme of things.
    It doesn't matter how fast the CPU processing of the data is if it is faster than you can write the data to the sink. The process is:
    1. read data into memory
    2. manipulate that data
    3. write data to a sink (database, file, network)
    The reading and writing of the data are going to be tens of thousands of times slower than any CPU you will be using. That read/write part of the process is the limiting factor of your throughput; not the CPU manipulation of step #2.
    Step #2 can only go as fast as steps #1 and #3 permit.
    Like I said above:
    >
    The best 'file to database' performance you could hope to achieve would be loading simple, 'known to be clean', record of a file into ONE table column defined, perhaps, as VARCHAR2(1000); that is, with NO processing of the record at all to determine column boundaries.
    That performance would be the standard you would measure all others against and would typically be in the hundreds of thousands or millions of records per minute.
    What you would find is that you can perform one heck of a lot of processing on each record without slowing that 'read and load' process down at all.
    >
    Regardless of the sink (DB, file, network) when you are designing data transport services you need to identify the 'slowest' parts. Those are the 'weak links' in the data chain. Once you have identified and tuned those parts the performance of any other step merely needs to be 'slightly' better to avoid becoming a bottleneck.
    That CPU part for step #2 is only rarely, if ever, the problem. Don't even consider it for specialized tuning until you demonstrate that it is needed.
    Besides, if your code is properly designed and modularized you should be able to 'plug n play' different parse and transform components after the framework is complete and in the performance test stage.
    >
    The only thing that is fixed is that all input files are ASCII (not Unicode) characters in range of 'space' to '~' (decimal 32-126) or common control characters like CR,LF,etc.
    >
    Then you could use byte arrays and byte processing to determine the record boundaries even if you then use String processing for the rest of the manipulation.
    That is what my framework does. You define the character set of the file and a 'set' of allowable record delimiters as Strings in that character set. There can be multiple possible record delimiters and each one can be multi-character (e.g. you can use 'XyZ' if you want).
    The delimiter set is converted to byte arrays and the file is read using RandomAccessFile and double-buffering and a multiple mark/reset functionality. The buffers are then searched for one of the delimiter byte arrays and the location of the delimiter is saved. The resulting byte array is then saved as a 'physical record'.
    Those 'physical records' are then processed to create 'logical records'. The distinction is due to possible embedded record delimiters as you mentioned. One logical record might appear as two physical records if a field has an embedded record delimiter. That is resolved easily since each logical record in the file MUST have the same number of fields.
    So a record with an embedded delimiter will have fewer fields than required, meaning it needs to be combined with one or more of the following records.
    >
    My files have no metadata, some are comma delimited and some comma and double quote delimited together, to protect the embedded commas inside columns.
    >
    I didn't mean the files themselves needed to contain metadata. I just meant that YOU need to know what metadata to use. For example you need to know that there should ultimately be 10 fields for each record. The file itself may have fewer physical fields due to TRAILING NULLCOLS, whereby consecutive NULL fields at the end of a record do not need to be present.
    >
    The number of columns in a file is variable and each line in any one file can have a different number of columns. Ragged columns.
    There may be repeated null columns in any line, like ,,, or "","","" or any combination of the above.
    There may also be spaces between delimiters.
    The files may be UNIX/Linux terminated or Windows Server terminated (CR/LF or CR or LF).
    >
    All of those are basic requirements and none of them present any real issue or problem.
    >
    To make it even harder, there may be embedded LF characters inside the double quoted columns too, which need to be caught and weeded out.
    >
    That only makes it 'harder' in the sense that virtually NONE of the standard software available for processing delimited files take that into account. There have been some attempts (you can find them on the net) for using various 'escaping' techniques to escape those characters where they occur but none of them ever caught on and I have never found any in widespread use.
    The main reason for that is that the software used to create the files to begin with isn't written to ADD the escape characters but is written on the assumption that they won't be needed.
    That read/write for 'escaped' files has to be done in pairs. You need a writer that can write escapes and a matching reader to read them.
    Even the latest version of Informatica and DataStage cannot export a simple one-column table that contains an embedded record delimiter and read it back properly. Those tools simply have NO functionality to let you even TRY to detect that embedded delimiters exist, let alone do anything about it by escaping those characters. I gave up back in the '90s trying to convince the Informatica folk to add that functionality to their tool. It would be simple to do.
    >
    Some numeric columns will also need processing to handle currency signs and numeric formats that are not valid for the database input.
    It does not feel like a job for RegEx (I want to be able to maintain the code, and complex RegEx is often 'write-only' code that a 9200bpm modem would be proud of!) and I don't think PL/SQL will be any faster or easier than Java for this sort of character-based work.
    >
    Actually for 'validating' that a string of characters conforms (or not) to a particular format is an excellent application of regular expressions. Though, as you suggest, the actual parsing of a valid string to extract the data is not well-suited for RegEx. That is more appropriate for a custom format class that implements the proper business rules.
    You are correct that PL/SQL is NOT the language to use for such string parsing. However, Oracle does support Java stored procedures, so that could be done in the database. I would only recommend pursuing that approach if you already needed to perform some substantial data validation or processing in the DB to begin with.
    >
    I have no control over format of the incoming files, they are coming from all sorts of legacy systems, many from IBM mainframes or AS/400 series, for example. Others from Solaris and Windows.
    >
    Not a problem. You just need to know what the format is so you can parse it properly.
    >
    Some files will be small, some many GB in size.
    >
    Not really relevant except as it relates to the need to SINK the data at some point. The larger the amount of SOURCE data the sooner you need to SINK it to make room for the rest.
    Unfortunately, the very nature of delimited data with varying record lengths and possible embedded delimiters means that you can't really chunk the file to support parallel read operations effectively.
    You need to focus on designing the proper architecture to create a modular framework of readers, writers, parsers, formatters, etc. Your concern with details like String versus array is way premature at best.
    My framework has been doing what you are proposing and has been in use for over 20 years by three different major international clients. I have never had any issues with the level of detail you have asked about in this thread.
    Throughput is limited by the performance of the SOURCE and the SINK. The processing in-between has NEVER been an issue.
    A modular framework allows you to fine-tune or even replace a component at any time with just 'plug n play'. That is what Interfaces are all about. Any code you write for a parser should be based on an interface contract. That allows you to write the initial code using the simplest possible method and then later if, and ONLY if, that particular module becomes a bottleneck, replace that module with one that is more performant.
    Your initial code should ONLY use standard well-established constructs until there is a demonstrated need for something else. For your use case that means String processing, not byte arrays (except for detecting record boundaries).
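    Not the poster's framework, but a minimal sketch of that byte-level record scan, with the delimiter set and all names being assumptions:

    import java.nio.charset.StandardCharsets;
    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    public class RecordScanner {
        // split a buffer into physical records; list longer delimiters first (CRLF before CR or LF)
        public static List<byte[]> split(byte[] buf, int len, byte[][] delims) {
            List<byte[]> records = new ArrayList<byte[]>();
            int start = 0;
            for (int i = 0; i < len; ) {
                byte[] hit = matchAt(buf, i, len, delims);
                if (hit != null) {
                    records.add(Arrays.copyOfRange(buf, start, i)); // record body, delimiter excluded
                    i += hit.length;
                    start = i;
                } else {
                    i++;
                }
            }
            if (start < len)
                records.add(Arrays.copyOfRange(buf, start, len));   // trailing, undelimited record
            return records;
        }

        private static byte[] matchAt(byte[] buf, int at, int len, byte[][] delims) {
            for (byte[] d : delims) {
                if (at + d.length > len) continue;
                boolean ok = true;
                for (int j = 0; j < d.length && ok; j++)
                    ok = buf[at + j] == d[j];
                if (ok) return d;
            }
            return null;
        }
    }

    For example, passing a delimiter set of "\r\n", "\r", and "\n" (encoded with StandardCharsets.US_ASCII) handles all three line-ending conventions mentioned above; the resulting physical records would then be combined into logical records by counting fields, as described.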

  • RandomAccessFile date position changed  while increasing font size of text

    I have created a RandomAccessFile and I write data at specific locations in it using seek(). The problem is that when I change the font size of that file, the data positions change. Can anybody guide me on how to solve this?

    Fonts have nothing to do with RandomAccessFiles. Did you "open" the file in some rich text editor, save it again, and then try to read it with your program? If so, don't do that: a random access file is not generally meant to be a human-readable format, and it is definitely not meant to be a human-writable format.

  • Problems sending bufferedimages through a socket

    Hi everyone,
    I've got a server program that takes a screen capture as a BufferedImage and sends it through a socket to be saved by the client program. The programs work fine; however, whenever I'm transferring the image with the code at the bottom of this post, I notice that the collision LED on my network hub flashes, and I can't figure out why (obviously there's a collision, but I don't know what causes it or how to stop it from happening). I've googled a bit for collisions and BufferedImages, but have come up with El Zilcho. I've also gone through the IO tutorial and the Socket and BufferedImage APIs, but haven't learned anything to solve my problem.
    Also, as it is now, if I want multiple images I need to disconnect and reconnect repeatedly. I tried sending multiple BufferedImages through the socket, but get the following exception:
    java.lang.IllegalArgumentException: im == null!
    Does anyone know of a way I can send multiple BufferedImages through the socket without reconnecting every single time? Thanks
    Server code:
    Toolkit toolkit = Toolkit.getDefaultToolkit();
    Dimension screenSize = toolkit.getScreenSize();
    Rectangle screenRect = new Rectangle(screenSize);
    Robot robot = new Robot();
    BufferedImage image = robot.createScreenCapture(screenRect);
    ImageIO.write(image, "jpeg", fromClient.getOutputStream());
    fromClient.shutdownOutput();
    fromClient.close();
    Client code:
    BufferedImage receivedImage = ImageIO.read(toServer.getInputStream());
    toServer.close();
    Any help would be greatly appreciated. TIA
    Jay

    Have you tried:
    ImageIO.write(image1, "jpeg", fromClient.getOutputStream());
    ImageIO.write(image2, "jpeg", fromClient.getOutputStream());
    ImageIO.write(image3, "jpeg", fromClient.getOutputStream());
    ImageIO.write(image4, "jpeg", fromClient.getOutputStream());
    ImageIO.write(image5, "jpeg", fromClient.getOutputStream());
    ImageIO.write(image6, "jpeg", fromClient.getOutputStream());
    ImageIO.write(image7, "jpeg", fromClient.getOutputStream());
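    Even with repeated writes like that, the client side can trip up: ImageIO.read() may buffer past the end of one JPEG in the stream. A more robust pattern (a sketch, not from the thread) is to length-prefix each frame and loop on the client:

    import java.awt.image.BufferedImage;
    import java.io.*;
    import javax.imageio.ImageIO;

    public class FrameLoop {
        // assumes the server writes, for each frame: a 4-byte length, then the encoded image bytes
        public static void readFrames(InputStream raw) throws IOException {
            DataInputStream in = new DataInputStream(raw);
            while (true) {
                int size;
                try {
                    size = in.readInt();       // length prefix of the next frame
                } catch (EOFException done) {
                    break;                     // server closed the stream; no more frames
                }
                byte[] buf = new byte[size];
                in.readFully(buf);
                BufferedImage frame = ImageIO.read(new ByteArrayInputStream(buf));
                if (frame != null) {
                    // display or save the frame here
                }
            }
        }
    }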

  • Flex and concurrent access

    I am going to work on a new project: a real-time scan-processing monitor. The application will launch from an HTML wrapper. A few more processes will also start from there using JavaScript code:
    oReadfromScanner1 = new ActiveXObject("comportreader.classname");
    oReadfromScanner2 = new ActiveXObject("comportreader.classname");
    I am going to have up to 10 scanner readers which will update the Flex client screen.
    There will be many times when readers try to update the client at exactly the same time. What is a design pattern for managing simultaneous access to Flex? Or will there be no problem at all?
    Thanks

    Thanks for the feedback. This is still bothering me,
    yes I could have a static RandomAccessFile and
    synchronise on this, but I really want concurrent
    access.
    I've implemented a locking mechanism to prevent
    different RandomAccessFile instances updating the
    same record - is this not a waste if only one
    RandomAccessFile can write to the file anyway?
    Or is there another Java class I can use to access
    the file in this way?
    Thanks for the help.

    Hi,
    if the intention of using multiple instances of RandomAccessFile is concurrent access, then I feel your locking mechanism doesn't achieve the purpose.
    Also, in any case, you should not plan for full concurrency in updating a file;
    it is more prone to malfunctions.
    Probably, to enhance performance, you can lock only the part of your code that actually writes to the file, like io.write(); that way you can perform all the business logic related to writing concurrently and serialize only the actual file writing.
    Even in this case, you must be sure that writing to one part of the file doesn't impact other parts of the file which might be manipulated by other threads.
    I have one more thought on this:
    if updating different parts of the file doesn't affect the content of other parts of the file,
    then can you think of having different files instead?
    If using different files is not a good idea, then
    probably think of using some buffering mechanism, like collecting all data concurrently and periodically updating the actual file from the buffer. Just a raw idea; it all depends on your system needs & requirements.
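    If the records are fixed-size, java.nio region locks can serialize access per record across processes; a minimal sketch (the record size is an assumption, and threads within one JVM must still coordinate among themselves, since overlapping locks from the same JVM throw OverlappingFileLockException):

    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.channels.FileLock;

    public class RecordLocker {
        static final int RECORD_SIZE = 128; // assumed fixed record length

        public static void updateRecord(RandomAccessFile raf, int index, byte[] data)
                throws IOException {
            FileChannel ch = raf.getChannel();
            long pos = (long) index * RECORD_SIZE;
            // lock only the record being written; other records stay accessible
            try (FileLock lock = ch.lock(pos, RECORD_SIZE, false)) {
                ch.write(ByteBuffer.wrap(data, 0, Math.min(data.length, RECORD_SIZE)), pos);
            }
        }
    }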
