Fastest way to load a BufferedImage on a JPanel

I am looking for the fastest way to display a BufferedImage on a JPanel.
I am using JAI to take in photo files (JPG, BMP, GIF, TIFF, PNG) and create thumbnails (BufferedImage).
I was reading through the forums and saw you can either
1) override the panel's painting method (paintComponent) and draw the image yourself, or
2) wrap it in an ImageIcon -> JLabel -> JPanel.
Currently I am doing number 2, but I was wondering what the best way truly is.

Since you aren't doing any animation or anything of that kind, using Swing components will work just fine.
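For option 1, here is a minimal sketch of what overriding the painting method looks like (the class and field names are illustrative, not from the original post):

import java.awt.Graphics;
import java.awt.image.BufferedImage;
import javax.swing.JPanel;

public class ThumbnailPanel extends JPanel {
    private final BufferedImage image;

    public ThumbnailPanel(BufferedImage image) {
        this.image = image;
    }

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);        // paint the panel background first
        g.drawImage(image, 0, 0, null); // then draw the thumbnail at the top-left
    }
}

Either approach is fine for static thumbnails; drawing the image yourself just avoids the extra ImageIcon/JLabel layer.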

Similar Messages

  • Fastest way to load csv into oracle BE table

I have a csv file which has 10 million records in it. What is the fastest approach to load this data into an Oracle BE table?
I am using Oracle9i Enterprise Edition Release 9.2.0.7.0 - 64bit Production With the Partitioning, OLAP and Oracle Data Mining options.
csv format:
first_name,last_name,occupation,address
All of the fields above are of varchar data type.
I have tried to use an external table, but the insert takes too much time.
    Thanks

hi,
You can use SQL*Loader. Create a file like the following as a .ctl (control file) in the location where your csv file is:

options (skip=1)
LOAD DATA
INFILE 'csv file path'
TRUNCATE (or APPEND)
INTO TABLE table_name
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(
column_name,
column_name,
column_name
)

Execute it by going to the path where your csv file resides:

cmd
cd path where your csv file resides
sqlldr userid=username/password@database_name control=your_control_file_name
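If you would rather script that last step than type it at the cmd prompt, here is a rough sketch of launching sqlldr from Java; the credentials, paths, and file names are all placeholders:

import java.io.File;
import java.io.IOException;

public class RunSqlLoader {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Placeholder values throughout - substitute your own.
        ProcessBuilder pb = new ProcessBuilder(
                "sqlldr",
                "userid=username/password@database_name",
                "control=your_control_file_name.ctl");
        pb.directory(new File("path/where/your/csv/file/resides"));
        pb.inheritIO(); // show SQL*Loader's console output here
        Process p = pb.start();
        System.out.println("sqlldr exited with code " + p.waitFor());
    }
}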

  • Fastest way to load an image into memory

hi, I've posted before but I was kinda vague and didn't get much of a response, so I'm going into detail this time. I'm making a Java program that is going to control 2 robots in a soccer competition. I've decided that I want to go all out and use a webcam instead of the usual array of sensors, so the first step is to load an image into memory. (I'll work on the webcam once I've got something substantial.) Since these robots have to be rather small (21 cm in diameter) I can only use some pretty weak processors. The robots are going to be running Gentoo Linux on a 600 MHz processor, so it is absolutely vital that I have a fast method of loading and analyzing images. I need to be able to both load the image quickly and, more importantly, analyze it quickly. I've looked at PixelGrabber, which looks good, but I've read several things about Java's image handling being poor. Those articles are a few years old, so I'm wondering if there is anything in JAI that will do this for me. Thanks in advance.

well, I found out why I was having so much trouble installing JAI. I'm now back on Windows and I can't stand it, so hopefully the bug will be fixed soon.
It's not so bad. I mean, that's why we love Java! Once your Linux problem is fixed you can just transfer your code there as is.
well, I like the looks of JAI.create() but I'm not so sure how to use it. At this stage I'm trying to load an image from a file. I know that to do this with PixelGrabber you use getCodeBase(), but I don't know how to do it with JAI.create. Any advice is appreciated. Thanks.
Here are some example statements for handling a JAI image. There are other ways, I'm sure, but I've no idea which are faster, sorry. But this is quite fast on my machine.
PlanarImage pi = JAI.create("fileload", imgFilename); // load the image file
int width = pi.getWidth();  // in case you need to loop through the image
int height = pi.getHeight();
WritableRaster wr = pi.copyData(); // copy data from the planar image to a writable raster
int w = 0, h = 0;         // coordinates of the pixel to touch
int[] pixel = new int[3]; // one element per band (R, G, B)
wr.getPixel(w, h, pixel); // read the pixel at location (w,h)
int red = pixel[0];       // the red component
int[] otherPixel = {0, 0, 0};
wr.setPixel(w, h, otherPixel); // make the pixel at location (w,h) black
And here's a link with sample code using JAI. I've tried several of the programs there and they work.
    https://jaistuff.dev.java.net/
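Since the original question here is about BufferedImage: once you have the PlanarImage, JAI can hand you a BufferedImage directly. A minimal sketch (the file name is a placeholder):

import java.awt.image.BufferedImage;
import javax.media.jai.JAI;
import javax.media.jai.PlanarImage;

public class LoadAsBufferedImage {
    public static void main(String[] args) {
        // Placeholder path - any JPG/BMP/GIF/TIFF/PNG should work.
        PlanarImage pi = JAI.create("fileload", "photo.jpg");
        BufferedImage bi = pi.getAsBufferedImage(); // materialize as a plain BufferedImage
        System.out.println("Loaded " + bi.getWidth() + "x" + bi.getHeight() + " image");
    }
}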

  • Fastest way to load data From MS Access to Oracle DB

    Hi All,
We get an Access DB every week which we need to load into an Oracle DB. Currently we have been using SQL*Loader, but the process is slightly painful and horribly boring, so I'm trying to do some automation. I was thinking of doing the whole thing in a Java application and then scheduling it to run at some pre-decided time. The approach I took was to read the Access file and then load it using PreparedStatements into the Oracle DB. But going through various threads in the forum, I found that this is going to be a bit too slow (and the record count in my tables is around 600,000). Can there be a better way to do this? Has anyone done something similar before?

Well, the only reason I want to use Java (I may go for C#) is that I don't want to spend time manually creating those CSV files from Access. Can that be done using something else?
So use Java to create the CSV files. And have you actually tried going straight to Oracle? What exactly is your time constraint?
And another issue is that I sometimes have to make some adjustments (rounding off) to the data, which is usually done through a query after the data has been loaded in the DB.
Which would make it irrelevant to actually moving the data, then. If you are cleaning the data with SQL already then it is simple to wrap that in a proc and do it that way. Presumably you are loading to temp (non-production in-line) tables first, then cleaning and moving.
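As a sketch of the read-Access-then-batch-insert idea discussed above; the UCanAccess driver, connection strings, and table/column names are all assumptions for illustration:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class AccessToOracle {
    public static void main(String[] args) throws SQLException {
        // Placeholder URLs, credentials, and table names throughout.
        try (Connection src = DriverManager.getConnection("jdbc:ucanaccess://c:/data/weekly.accdb");
             Connection dst = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "pass")) {
            dst.setAutoCommit(false); // commit once at the end, not per row
            try (Statement read = src.createStatement();
                 ResultSet rs = read.executeQuery("SELECT col1, col2 FROM src_table");
                 PreparedStatement ins = dst.prepareStatement(
                         "INSERT INTO dst_table (col1, col2) VALUES (?, ?)")) {
                int n = 0;
                while (rs.next()) {
                    ins.setString(1, rs.getString(1));
                    ins.setString(2, rs.getString(2));
                    ins.addBatch();
                    if (++n % 1000 == 0) ins.executeBatch(); // flush every 1000 rows
                }
                ins.executeBatch();
                dst.commit();
            }
        }
    }
}

Batched PreparedStatement inserts are much faster than row-by-row executes, though still slower than SQL*Loader for very large volumes.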

  • Fastest Way to Load Text Files Containing Thousands of Words

Hi, can anyone give me suggestions for fast loading of files containing many words, one word per line? The average file size is 45 KB. I'm currently splitting the words into separate files, each containing 4000 words apiece. I use a different thread to read each file and then add all of the words into a single vector, formed by joining the vectors of each reader thread. Any suggestions for speeding up the process? (Currently, it takes about 30 seconds to load 52,000 words.) My OS is Windows 2000.

Something seems wrong. There is no way that what you describe should take anywhere close to that amount of time, unless the computer is the constraint. I created a file of 56,000 lines; each line consisted of the byte string "abcdefgh" plus \r\n,
for a total of 10 bytes per line and a total file size of 560,000 bytes (547 KB).
The following program read the file, created an ArrayList, and added the 56,000 lines to the list in less than 1 second, about what I expected. You might want to run it and see what you get.
    import java.io.*;
    import java.util.*;
class Zxin {
    public static void main(String[] args) throws IOException {
        FileInputStream fis = new FileInputStream(
            "C:/Documents and Settings/Chuck/My Documents/Java/junk/word.txt");
        InputStreamReader isr = new InputStreamReader(fis);
        BufferedReader br = new BufferedReader(isr);
        List<String> list = new ArrayList<String>();
        String data;
        while ((data = br.readLine()) != null) {
            list.add(data);
        }
        System.out.println("The list contains " + list.size() + " entries");
        System.out.println("The first entry in the list is: " + list.get(0));
        br.close();
    }
}
It printed these 2 lines:
    The list contains 56000 entries
    The first entry in the list is: abcdefgh

  • Fastest way to load data

    Option 1:
    Insert statement with:
    table mode: NOLOGGING    
    insert mode: APPEND        
    archivelog mode: noarchive log mode  
    Option 2:
    CTAS with NOLOGGING mode
Both options above would generate no redo. Which one is better for performance? I'm loading a large volume of rows (a few million) on a daily basis, and this is a staging table, so there is no problem reprocessing in case of failure.

    Jonathan,
    > Insert /*+ append */ can optimise for indexes by capturing the necessary column values as the data is loaded and then creating the indexes without needing to re-read the table
How did you come to this conclusion?
I did a simple test (t2 has a single-column index) and got the following trace files:
    1- Direct path load
    insert /*+ append */ into t2
    select * from t1
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.01          0          1          0           0
    Execute      1      0.05       0.08          3        140         87        1000
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.05       0.09          3        141         87        1000
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 39
    Rows     Row Source Operation
          1  LOAD AS SELECT  (cr=140 pr=3 pw=3 time=84813 us)
       1000   TABLE ACCESS FULL T1 (cr=5 pr=0 pw=0 time=92 us)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      control file sequential read                    8        0.00          0.00
      db file sequential read                         2        0.00          0.00
      direct path write                               1        0.02          0.02
      SQL*Net message to client                       1        0.00          0.00
      SQL*Net message from client                     1        3.49          3.49
    2- Conventional load
    insert
    into t2
    select * from t1
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.01       0.00          0          1          0           0
    Execute      1      0.02       0.00          1         22        275        1000
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.03       0.01          1         23        275        1000
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 39
    Rows     Row Source Operation
       1000  TABLE ACCESS FULL T1 (cr=5 pr=0 pw=0 time=31 us)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file sequential read                         1        0.00          0.00
      SQL*Net message to client                       1        0.00          0.00
      SQL*Net message from client                     1        1.21          1.21
The trace file of the append hint contains a direct path write and a control file sequential read wait event, confirming the direct path insert. But I still have not found in the trace file any indication of how the index is maintained separately from the table during a direct path load. Additionally, I see that both trace files contain a TABLE ACCESS FULL T1.
What I used to know is that during a direct path insert, indexes are maintained differently from their table: mini indexes are built on the incoming data and are finally merged with the physical index. But I don't see this in the trace file either.
However, in the append trace file there is the following select (occurring before the insert statement) that does not exist in the normal insert:
    select pos#,intcol#,col#,spare1,bo#,spare2
    from
    icol$ where obj#=:1
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        2      0.00       0.00          0          0          0           0
    Execute      2      0.00       0.01          0          0          0           0
    Fetch        4      0.00       0.00          0          7          0           2
    total        8      0.00       0.01          0          7          0           2
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: CHOOSE
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          1  TABLE ACCESS BY INDEX ROWID ICOL$ (cr=3 pr=0 pw=0 time=25 us)
          1   INDEX RANGE SCAN I_ICOL1 (cr=2 pr=0 pw=0 time=21 us)(object id 40)
I am not sure if this is related to the mini-index pre-built data to be merged with the physical index.
That is my question: where can I see this in the trace file?
    Thanks
    Mohamed Houri

What is the best way (fastest way) to load sound files

    Hi,
I am using Adobe Edge version 3 to create a website. I have added sounds to the website using Adobe Edge itself. I use many sounds (256 files, 6 MB total). When someone visits the website it takes a long time to load. How can I prevent this?
    Thank you
    Nithin

    You can check some of the tips mentioned in this link - http://www.adobe.com/inspire/2014/02/edge-animate-audio.html for handling audio.
    Thanks and Regards,
    Sudeshna Sarkar

  • Fastest way to read a 5MB textfile (200,000 lines)?

    My current code:
    String file = "";
    BufferedReader in = new BufferedReader(new InputStreamReader(new FileInputStream(filename)));
while (in.ready()) file = file.concat(in.readLine());
Unfortunately it is awfully slow (starting at 2,000 lines per second but dropping to around 100 lines per second over time). What is the fastest way to load the content of that file into a String, and, out of curiosity, why are my lines per second dropping?
    Thank you for your help!

tjacobs01 wrote:
> This is exactly what I've done - it's the fastest thing out there. One thing to comment on though: using a BufferedInputStream is not necessary; BufferedInputStream only improves performance if you're doing inefficient reading (such as reading blocks by newlines). If you doubt what I'm saying, set up a quick performance test bed and test it... I did this a while back, which is why I know the truth :)
I believe you. Thanks for the tip. In my case I needed the BIS because I only wanted to load the 40 MB audio file in 1 MB chunks at a time, to process it and store it in my own representation.
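To answer the original question directly: String.concat copies the entire accumulated string on every call, so the work grows quadratically as the string gets longer, which is exactly why the lines per second keep dropping. A minimal sketch that accumulates into a single StringBuilder instead (the method name is illustrative):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class FastRead {
    public static String readWholeFile(String filename) throws IOException {
        StringBuilder sb = new StringBuilder();
        try (BufferedReader in = new BufferedReader(new FileReader(filename))) {
            char[] buf = new char[64 * 1024]; // read in large chunks
            int n;
            while ((n = in.read(buf)) != -1) {
                sb.append(buf, 0, n); // amortized O(1) per char, no full-string copies
            }
        }
        return sb.toString();
    }
}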

  • Best Way to Load Data in Hash Partition

    Hi,
I have hash partitioning on a large table of 5 TB. We have to load more than 500 GB daily into that table from ETL.
What is the best way to load data into that big table with hash partitions?
    Regards
    Sahil Soni

Do you have any specific requirements to match records to lookup tables, or is it just a straight load - that is, an insert?
Do you have any specific performance requirements?
The easiest and fastest way to load data into Oracle is via an external table and parallel query/parallel insert. Remember that parallel DML is not enabled by default and you have to enable it via an alter session command. You can leverage multiple CPU cores and direct path operation to perform the load.
Assuming your database is on a Linux/Unix server, you could load the file over NFS if it is on a remote system, but then you will most likely be limited by network transfer speed.
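For illustration, here is roughly how the alter session plus a direct-path, parallel insert from an external table might look when driven from JDBC; the connection details and table names are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class ParallelLoad {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "pass");
             Statement stmt = conn.createStatement()) {
            conn.setAutoCommit(false);
            // Parallel DML is off by default and must be enabled per session.
            stmt.execute("ALTER SESSION ENABLE PARALLEL DML");
            // Direct-path (APPEND) parallel insert from the external table.
            stmt.execute("INSERT /*+ APPEND PARALLEL(t, 8) */ INTO big_table t "
                       + "SELECT * FROM ext_staging_table");
            conn.commit(); // a direct-path insert must commit before the table is queried again
        }
    }
}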

  • Fastest way to view clips

    I have a hard drive camera and it creates hundreds of clips. I want to look at each clip and then create subclips. Is there a way to speed up the process of opening up each clip individually and then setting my in and out points? Can I put them all in viewer stacked up and then just exit the one I'm done with and go to the next? Any keyboard shortcuts anyone can recommend? I would rather not throw them all into the timeline, wait to export all the clips into one .mov file and then edit them that way, but I know that is an option. Any advice would help.

    use the keyboard. It's the fastest way. cmd-4 takes you to the browser, use the arrow keys to select the clip you want to view. Up and down arrows bring you from bin to bin, when a bin is selected. To open and enter a bin, use the left and right arrow. Once in the bin use the up and down arrows again to go up and down within the bin. Press the return key to load the clip in the viewer and the spacebar or "L" key to play. cmd-4 to go back to the browser and select another clip. If you want to rename a clip after you have watched it, press the Enter key (not the return key) and you will open the dialog box for the name. You can rip through the browser items without ever going to the mouse, which is what really slows you down.
    In addition, if you have Leopard you can use the Quicklook feature in the Finder window and just use the up and down arrows to cycle through the clips. You don't even need to press the spacebar because every time you arrow to a new clip it automatically plays. You don't even need to launch an application to view them.
    Just a couple of time saving features. If you have Adobe, you can use Bridge in the same way, but it's easier to name and add metadata to the clips before you import them into FCP.

  • Back to English Lang at reboot, Fastest way to start WM, Yaourt error

Hi guys, I need some help.
I installed Arch yesterday. I set everything to Turkish and had no problems till this morning. I started my computer and the language was back to English.
    # /etc/rc.conf - Main Configuration for Arch Linux
    # LOCALIZATION
    # LOCALE: available languages can be listed with the 'locale -a' command
    # HARDWARECLOCK: set to "UTC" or "localtime", any other value will result
    # in the hardware clock being left untouched (useful for virtualization)
    # TIMEZONE: timezones are found in /usr/share/zoneinfo
    # KEYMAP: keymaps are found in /usr/share/kbd/keymaps
    # CONSOLEFONT: found in /usr/share/kbd/consolefonts (only needed for non-US)
    # CONSOLEMAP: found in /usr/share/kbd/consoletrans
    # USECOLOR: use ANSI color sequences in startup messages
    LOCALE="tr_TR.UTF-8"
    HARDWARECLOCK="UTC"
    TIMEZONE="Europe/Istanbul"
    KEYMAP="trq"
    CONSOLEFONT=
    CONSOLEMAP=
    USECOLOR="yes"
In my locale.gen only Turkish is uncommented.
What is the fastest way to get the WM loaded? I'm not using a login manager, but I don't want to type "xinit /usr/bin/openbox-session" every time. I want to autologin and autoload the WM.
xinitrc will open a security hole because it won't lock the virtual terminal, and I'm not sure if a bash script is the right way. What would you suggest?
The last time I installed yaourt I had the problem that it couldn't find any packages from the AUR. Do I have to do something to make it work after installing it? I tried it out of the box.
And if somebody is using Ubuntu right now, could they give me screenshots of the font settings in Firefox and the GNOME font settings (right-click menu settings)?
Thanks to everyone who wants to help or just looks at this thread.
    Last edited by Pliskin (2011-01-01 13:23:32)

Happy new year everybody!
I just solved the locale problem. Anyone who (like me) uses the autostart.sh from the official Openbox site will get this error. Look at this:
    # Run the system-wide support stuff
    . $GLOBALAUTOSTART
    # Programs to launch at startup
    hsetroot ~/wallpaper.png &
    xcompmgr -c -t-5 -l-5 -r4.2 -o.55 &
    # SCIM support (for typing non-english characters)
    export LC_CTYPE=ja_JP.utf8
    export XMODIFIERS=@im=SCIM
    export GTK_IM_MODULE=scim
    export QT_IM_MODULE=scim
    scim -d &
    # Programs that will run after Openbox has started
    (sleep 2 && fbpanel) &
I removed the ja_JP.utf8 line and the problem was solved. Thanks to @Cedeel and @karol for their help.
Now I still don't know which way I should load my Openbox. Details in the first post.

  • Fastest way to removeAll

I need to remove quite a few entries from a cache that match certain criteria. There is cache.putAll(Map) for en-masse cache loading, but there is no equivalent cache.removeAll(Set keys).
What is the fastest way to achieve this? I am thinking this:
        Filter f = new EqualsFilter(...);
        cache.invokeAll(f, new InvocableMap.EntryProcessor() {
            public Object process(Entry entry) {
                entry.remove(false);
                return null;
            }
            public Map processAll(Set setEntries) {
                Iterator iter = setEntries.iterator();
                while (iter.hasNext()) {
                    Entry entry = (Entry) iter.next();
                    entry.remove(false);
                }
                return null;
            }
        });
Is there a faster way?
    Thanks
    Ghanshyam

    Hi Ghanshyam,
for non-transactional replicated caches, you can simply use
cache.keySet().removeAll(keys)
For non-transactional partitioned caches you can use the same (provided you use a recent version; in older versions it had some performance impact).
Since 3.1 you can also use the following:
cache.invokeAll(keys, new ConditionalRemove(AlwaysFilter.INSTANCE));
For transactional caches you should not call keySet().
For transactional caches, depending on the isolation level, you can iterate the entries and remove them individually (for GET_COMMITTED or EXTERNAL).
For optimistic transactional caches with a transaction validator registered and a REPEATABLE_GET or SERIALIZABLE isolation level, before the iteration and removal you should also get all entries for the keys with transactionMap.getAll(keys), so they are enlisted with the transaction validator via a bulk read and not with individual reads upon remove.
This advice might not be the most optimal, but I believe it is more or less so.
    Best regards,
    Robert
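Pulling the two non-transactional options together, a minimal sketch against the Coherence API (the cache name is a placeholder):

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.util.filter.AlwaysFilter;
import com.tangosol.util.processor.ConditionalRemove;
import java.util.Set;

public class BulkRemove {
    // Option 1: bulk remove through the keySet view.
    public static void removeAll(Set keys) {
        NamedCache cache = CacheFactory.getCache("example-cache");
        cache.keySet().removeAll(keys);
    }

    // Option 2 (Coherence 3.1+): remove server-side with an entry processor.
    public static void removeAllViaProcessor(Set keys) {
        NamedCache cache = CacheFactory.getCache("example-cache");
        cache.invokeAll(keys, new ConditionalRemove(AlwaysFilter.INSTANCE));
    }
}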

  • What's the fastest way to look up a User's Work Items?

Hi, my SCSM environment has a few different service desks and plenty of customer drive-bys. Sometimes when a customer comes up and asks for a status, the SD personnel can't find their ticket quickly. The fastest way right now is to 1) click on CONFIGURATION ITEMS; 2) search for the Affected User; 3) click on the Affected User; 4) click on the Affected User's Related Items; 5) search the related items for the proper Work Item; 6) load the Work Item.
It's not bad, but it's a lot of steps and I'm wondering if anybody has a faster method in place.

Just to poke holes in the salesmanship, consider that you might have an easier time doing any of the following:
Create a view that lists all open incidents, and make the first column Affected User. Analysts can filter or sort by this column to get quick info.
The search box in the top right corner can be set to users; right-click the down arrow to see all options.
The search box in the top right can also be used to search work item IDs or titles.

  • Is this the fastest way to copy files?

I'm looking for a way to take 1 file and make many copies of it. Is this the fastest way you know of?
    import java.io.*;
    import java.nio.channels.*;
public static void applyFiles( File origFile, File[] files ) throws IOException {
    FileInputStream f1 = new FileInputStream( origFile.getAbsolutePath() );
    FileChannel source = f1.getChannel();
    FileOutputStream[] f2 = new FileOutputStream[ files.length ];
    FileChannel target;
    for( int x = 0; x < files.length; x++ ) {
        // compare paths with equals(), not ==, to avoid copying the file onto itself
        if( !origFile.getAbsolutePath().equals( files[x].getAbsolutePath() ) ) {
            f2[x] = new FileOutputStream( files[x].getAbsolutePath() );
            target = f2[x].getChannel();
            source.transferTo( 0, source.size(), target ); // channel-to-channel copy
        }
    }
}

2 questions from your code...
1) I assume the last line should read
out[i].write(buffer, 0, numRead);
2) Doesn't this just read in a piece at a time and write that piece to each file? Isn't that degrading performance, to have so many files open and waiting at once? Would it make more sense to read into a StringBuffer, then write all the data to each file at a time?
Thanks
I'd have to say that your question is loaded. :)
Without knowing anything about your target system, I'd have to say no, this is not the fastest way to copy a file to many files. This will end up reading the file once for every time you want to copy it, which may work fine if the file is small, but not when the file being copied is larger than free RAM. Or if the file channel implementation sucks.
For the general case, where you don't know how big the file will be beforehand, I'd say that this is a better algorithm:
public static void oneToManyCopy( File source, File[] dest ) throws IOException {
    FileInputStream in = new FileInputStream( source );
    FileOutputStream[] out = new FileOutputStream[ dest.length ];
    for ( int i = 0; i < dest.length; ++i )
        out[i] = new FileOutputStream( dest[i] );
    byte[] buffer = new byte[1024]; // or whatever size you like
    int numRead;
    while ( ( numRead = in.read( buffer, 0, buffer.length ) ) > -1 ) {
        for ( int i = 0; i < out.length; ++i )
            out[i].write( buffer, 0, numRead ); // write this chunk to every target file
    }
}

  • Is there a way to load just iTunes with no add ons?

Is there a way to load iTunes only, without add-ons?

    Hi topelovely,
    Were you finding that Muse was overwriting the pages with custom code even when none of the following applied?
    - The page was changed
    - Something used by the page was changed (e.g., styles, swatches, etc.)
    - You were using a new version of Muse
    Thanks,
    Abhishek

Maybe you are looking for

  • Optimum servo pid loop rate

Hi, I want some confirmation on the optimum PID loop rate and the fastest servo motor operation that we can achieve with the PXI-7352 running on Windows. The manual of the 7352 says "62 μs PID loop update rate for up to 2 axes". The requirement is to achieve p

  • *.txt Files in *.Jar

    Hi! In my project I have both Images and txt files that needs to follow the jar. I have no problem getting the images to work using this example: http://www.devx.com/tips/Tip/5697 URL myurl = this.getClass().getResource("png/1.png"); Toolkit tk = thi

  • AC_FL_RunContent and php variable from recordset

Hi, can anybody point me to a resource on how to grab the name of the .swf file from my db and pass it to the active content javascript? If I just replace the movie name with a php echo that grabs the file name from the db there is no playback. T

  • Error 718: did not respond in a timely manner

    Hi We have had a VPN solution in place for many years. We have a Windows Server 2012 Standard edition member server on our domain with NPS/RRAS setup on it. Routing and remote access has always worked fine. There is just one NPS policy which is setup

  • Policy Based Routing with VPN Client configuration

    Hi to all, We have a Cisco 2800 router in our company that also serves as a VPN server. We use the VPN Client to connect to our corporate network (pls don't laugh, I know that it is very obsolete but I haven't had the time lately to switch to SSL VPN