Most efficient way to transform a ResultSet with many rows into a Vector

What is the most efficient way to transform a ResultSet with many rows into a 2-dimensional ArrayList or Vector in a minute?
If you can help me solve this problem, I'll give you my duke.

Why don't you use info objects? For example, if your table has col1 (number), col2 (text/varchar) and col3 (text/varchar), create an info object like this:
class MytableInfo implements java.io.Serializable {
    private int col1;
    private String col2;
    private String col3;
    public MytableInfo(int col1, String col2, String col3) {
        this.col1 = col1;
        this.col2 = col2;
        this.col3 = col3;
    }
    public int getCol1() {
        return col1;
    }
    // Getters for the other two properties too.
}
and in your ResultSet code:
Vector v = new Vector();
while (rs.next()) {
    v.add(new MytableInfo(rs.getInt(1), rs.getString(2), rs.getString(3)));
}
return v;
So it will be easier to retrieve the values later, and it is a clean way of doing it.
Sudha
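The same info-object pattern reads a little more safely with generics (Java 5+), since callers get typed rows back without casting Objects out of a raw Vector. A sketch only — the class and column values here are made up, not the poster's actual table:

```java
import java.util.ArrayList;
import java.util.List;

public class InfoObjectDemo {
    // Example info object; the column names are hypothetical.
    static class MytableInfo implements java.io.Serializable {
        private final int col1;
        private final String col2;
        MytableInfo(int col1, String col2) { this.col1 = col1; this.col2 = col2; }
        int getCol1() { return col1; }
        String getCol2() { return col2; }
    }

    // Stand-in for the ResultSet loop: a typed List means no casts later.
    public static List<MytableInfo> sampleRows() {
        List<MytableInfo> rows = new ArrayList<MytableInfo>();
        rows.add(new MytableInfo(1, "first"));
        rows.add(new MytableInfo(2, "second"));
        return rows;
    }
}
```

In the real JDBC loop you would `rows.add(new MytableInfo(rs.getInt(1), rs.getString(2)))` exactly as in Sudha's answer, just into the typed list.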

Similar Messages

  • The most efficient way to search a large String

    Hi All,
    2 Quick Questions
    QUESTION 1:
    I have about 50 String keywords that I would like to use to search a big String object (between 300 and 3000 characters).
    Is the most efficient way to search it for my keywords like this?
    if (myBigString.indexOf("string1") != -1 || myBigString.indexOf("string2") != -1 || myBigString.indexOf("string3") != -1 ... and so on for 50 strings)
    System.out.println("it was found");
    QUESTION 2:
    Can someone help me out with a regular expression search of phone number in the format NNN-NNN-NNNN
    I would like it to return all instances of that pattern found on the page.
    I have done regular expressions in JavaScript and VBScript, but I have never done regular expressions in Java.
    Thanks

    Answer 2:
    If you have the option of using Java 1.4, have a look at the new regular expressions library... whose package name I forget :-/ There have been articles published on it, both at JavaWorld and IBM's developerWorks.
    If you can't use Java 1.4, have a look at the jakarta regular expression projects, of which I think there are two (ORO and Perl-like, off the top of my head)
    http://jakarta.apache.org/
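    For the record, the Java 1.4 package is java.util.regex. A minimal sketch of the NNN-NNN-NNNN search with it (the class and method names here are made up for illustration):

    ```java
    import java.util.ArrayList;
    import java.util.List;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class PhoneFinder {
        // \d{3}-\d{3}-\d{4} matches the NNN-NNN-NNNN format; the \b word
        // boundaries keep it from matching inside longer digit runs.
        private static final Pattern PHONE =
                Pattern.compile("\\b\\d{3}-\\d{3}-\\d{4}\\b");

        // Returns every occurrence of the pattern in the page text.
        public static List<String> findPhones(String page) {
            List<String> hits = new ArrayList<String>();
            Matcher m = PHONE.matcher(page);
            while (m.find()) {
                hits.add(m.group());
            }
            return hits;
        }
    }
    ```

    Matcher.find walks the input left to right, so each call picks up where the previous match ended.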
    Answer 1:
    If you have n search terms, and are searching through a string of length l (the haystack, as in looking for a needle in a haystack), then searching for each term in turn will take time O(n*l). In particular, it will take longer the more terms you add (in a linear fashion, assuming the haystack stays the same length)
    If this is sufficient, then do it! The simplest solution is (almost) always the easiest to maintain.
    An alternative is to create a finite state machine that defines the search terms (Or multiple parallel finite state machines would probably be easier). You can then loop over the haystack string a single time to find every search term at once. Such an algorithm will take O(n*k) time to construct the finite state information (given an average search term length of k), and then O(l) for the search. For a large number of search terms, or a very large search string, this method will be faster than the naive method.
    One example of a state-search for strings is the Boyer-Moore algorithm.
    http://www-igm.univ-mlv.fr/~lecroq/string/tunedbm.html
    Regards, and have fun,
    -Troy
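    The naive O(n*l) approach from Answer 1 can be written as a simple loop, with the not-found check spelled `!= -1` (indexOf returns -1 on a miss, not 1). Names here are illustrative:

    ```java
    import java.util.List;

    public class KeywordSearch {
        // Returns true if any keyword occurs in the haystack.
        // One indexOf scan per keyword: O(n*l) as Troy describes.
        public static boolean containsAny(String haystack, List<String> keywords) {
            for (String kw : keywords) {
                if (haystack.indexOf(kw) != -1) { // -1 means "not found"
                    return true;
                }
            }
            return false;
        }
    }
    ```

    For 50 keywords over a 3000-character string this is typically fast enough; the finite-state approach only pays off at much larger scales.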

  • Most efficient way to work with large projects

    I have just purchased Premier elements and need to produce a two hour edited project.
    Would the best way be to break the task into a few smaller projects and join them up after editing, or just to create one long project?
    Any advice please

    I use Documents to Go. And I don't have the premium version but it does have online syncing in the premium version. Both the regular and premium version have a desktop program that allows you to sync documents without going through iTunes. Add the ability to do online syncing such as sugarsync, dropbox, etc, might help.
    I don't think they automatically update. You have to do it manually. But the 'price' of manually updating the documents is that the program works offline. So type when you're away from the internet, update it when you get online again and move on.
    I also like how it interfaces with regular word documents
    Something you can look into and see if it looks like it may work.

  • What is the most efficient way of converting relatively large program to pervious version?

    I am trying to convert a LabVIEW 8.5 program to version 8.2.  When I do that, it creates many more subfolders with VIs that are not converted.
    What is the best way to do this?  I tried a mass compile, but converting the high-level program creates many folders that link to other areas.
    Am I missing anything?
    Thanks for any help.
    Chetna 

    Hi Chetna,
    How are you confirming that the VIs saved in the sub-folders after conversion have not been converted to the previous version? 
    If you open the VI at the top of the hierarchy and 'Save for Previous Version...', it saves the open VI and all its dependencies for the chosen version. The reason you see all the sub-folders is that the relative path between VIs is maintained.
    This KnowledgeBase article: Can I Save VIs in My Current LabVIEW for Use in a Previous Version? describes this process.
    There is also a Community Example: Programmatically Save VIs for Previous Version that could also save time if you have a lot of VIs to convert.
    I hope this helps!
    Tanya V
    National Instruments
    LabVIEW Platform Product Support Engineer

  • What is the efficient way to migrate the large DB (over 1TB) from 9i to 11g

    Can any body give a suggestion for migrate the large DB (Over 1TB) from 9i to 11g?

    Hi;
    Can anybody give a suggestion for migrating a large DB (over 1 TB) from 9i to 11g? Please check below:
    Minimizing Downtime During Production Upgrade [ID 478308.1]
    Master Note For Oracle Database Upgrades and Migrations [ID 1152016.1]
    Different Upgrade Methods For Upgrading Your Database [ID 419550.1]
    I also suggest checking my blog:
    http://heliosguneserol.wordpress.com/2010/06/17/move-to-oracle-database-11g-release-2-wiht-mike-dietrich/
    In this PDF you can see the path to upgrade a database from version x to n, with many scenarios and all the related MetaLink notes, created by Oracle's Mike Dietrich.
    Regards
    Helios

  • What is the best, most efficient way to read a .xls File and create a pipe-delimited .csv File?

    What is the best and most efficient way to read a .xls File and create a pipe-delimited .csv File?
    Thanks in advance for your review and am hopeful for a reply.
    ITBobbyP85

    You should have no trouble doing this in SSIS. Simply add a data flow with connection managers to an existing .xls file (Excel connection manager) and a new .csv file (flat file). Add a source for the xls and a destination for the csv, and set the destination csv's "delay validation" property to true. Use an expression to define the name of the new .csv file.
    In the flat file connection manager, set the column delimiter to the pipe character.
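    Whatever tool does the reading, the pipe-delimited output side is straightforward. A rough Java sketch of one way to emit a row, quoting fields that contain the delimiter (the helper name is made up; SSIS handles this for you via the connection manager settings above):

    ```java
    import java.util.List;

    public class PipeCsv {
        // Joins one row's fields with '|', CSV-style quoting any field
        // that contains the delimiter, a quote, or a newline.
        public static String toPipeLine(List<String> fields) {
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < fields.size(); i++) {
                if (i > 0) sb.append('|');
                String f = fields.get(i);
                if (f.indexOf('|') >= 0 || f.indexOf('"') >= 0 || f.indexOf('\n') >= 0) {
                    // wrap in quotes and double any embedded quotes
                    sb.append('"').append(f.replace("\"", "\"\"")).append('"');
                } else {
                    sb.append(f);
                }
            }
            return sb.toString();
        }
    }
    ```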

  • What is the best way of compressing a large 3 hour final cut file

    What is the best way of compressing a large 3-hour Final Cut file? I shot the play and it is in Final Cut; I rendered it, so now I have a 22 GB file that I need to put on a DVD. Any suggestions?
    Thanks
    Macbook Pro
    2.3 GHz Intel Core i7
    with Final Cut Pro 7.0.3
    using Lion as operating system

    Presuming your menus aren't complicated (include audio or video), the total size for the MPEG-2 and AC3 files should probably be under 8 GB for a dual-layer disc. The Inspector will tell you the estimated size. 2-pass variable bit rate would be recommended.
    Trying to fit 3 hours on a DVD-5 will only bring very noticeable quality hits. Compressor will let you change the average bit rate so that you can fit 174 minutes, but the trade-off isn't worth it in my opinion.
    Be aware that dual-layer -R and +R media may not play well for everyone everywhere.
    I presume you are not making 1000 or more copies? If you were, replication could solve this.
    One other alternative would be to break-up the show into two parts and spread it across two DVD-5s.
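    The trade-off above is easy to put in numbers: capacity in bits divided by runtime in seconds gives the average total bitrate the disc can hold. A small sketch (disc capacities are nominal figures, not exact):

    ```java
    public class DvdBitrate {
        // Average total bitrate (bits/s) that fits a capacity in a runtime.
        public static double avgBitsPerSecond(double capacityBytes, int runtimeSeconds) {
            return capacityBytes * 8 / runtimeSeconds;
        }

        public static void main(String[] args) {
            int threeHours = 3 * 60 * 60;      // 10800 s
            double dvd9 = 8.5e9, dvd5 = 4.7e9; // nominal disc capacities
            // DVD-9 allows roughly 6.3 Mbps total for 3 hours; DVD-5 only
            // about 3.5 Mbps. Subtract the audio (e.g. AC3 at 448 kbps)
            // to get the video budget, which is why DVD-5 looks so rough.
            System.out.printf("DVD-9: %.2f Mbps, DVD-5: %.2f Mbps%n",
                    avgBitsPerSecond(dvd9, threeHours) / 1e6,
                    avgBitsPerSecond(dvd5, threeHours) / 1e6);
        }
    }
    ```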

  • Best way to organize a large photo library on a 10.6.8 Macbook?

    I have an older Macbook pro- version 10.6.8. 
    I am looking for the best way to organize my large library of photos (approx. 15,000).  I have both personal as well as professional photos that I would like to organize and keep separately. I currently use iPhoto'09 version 8.1.2 but it has become very slow and I would like to find a solution that uses better organization.  I am considering keeping my personal photos in iPhoto and my professional photos in another organization app.  I was thinking about purchasing Aperture, however it is only available for Mac OS X version 10.7.5 or later and the same goes for the latest version of iPhoto. 
    I am a little confused about what my options are.  Can anyone recommend anything?  I am looking simply for a good organization tool, I am not interested in photo editing or anything beyond that.

    Well now we know that the speed problem is in your old library. Repair it.
    Option 1
    Back Up and try rebuild the library: hold down the command and option (or alt) keys while launching iPhoto. Use the resulting dialogue to rebuild. Choose to Repair Database. If that doesn't help, then try again, this time using Rebuild Database.
    If that fails:
    Option 2
    Download iPhoto Library Manager and use its rebuild function. (In Library Manager it's the File -> Rebuild command.)
    This will create an entirely new library. It will then copy (or try to) your photos and all the associated metadata and versions to this new Library, and arrange it as close as it can to what you had in the damaged Library. It does this based on information it finds in the iPhoto sharing mechanism - but that means that things not shared won't be there, so no slideshows, books or calendars, for instance - but it should get all your events, albums and keywords, faces and places back.
    Because this process creates an entirely new library and leaves your old one untouched, it is non-destructive, and if you're not happy with the results you can simply return to your old one. 
    Backing Up:
    Time machine will back up, yes. Just be sure it's set up correctly.
    Most Simple Back Up
    Drag the iPhoto Library from your Pictures Folder to another Disk. This will make a copy on that disk.
    Slightly more complex:
    Use an app that will do incremental back ups. This is a very good way to work. The first time you run the back up, the app will make a complete copy of the Library. Thereafter it will update the back up with the changes you have made. That makes subsequent back ups much faster. Many of these apps also have scheduling capabilities: set it up and it will do the back up automatically. Examples of such apps: Chronosync or DejaVu, but there are many others. Search on MacUpdate.
    Regards
    TD 

  • What is the best way to handle very large images in Captivate?

    I am just not sure the best way to handle very large electrical drawings.
    Any suggestions?
    Thanks
    Tricia

    Is converting the colorspace asking for trouble?  Very possibly!  If you were to do that to a PDF that was going to be used in the print industry, they'd shoot you!  On the other hand, if the PDF was going online or on a mobile device – they might not care.   And if the PDF complies with one of the ISO subset standards, such as PDF/X or PDF/A, then you have other rules in play.  In general, such things are a user preference/setting/choice.
    On the larger question – there are MANY MANY ways to approach PDF optimization.  Compression of image data is just one of them.   And then within that single category, as you can see, there are various approaches to the problem.  If you extend your investigation to other tools such as PDF Enhancer, you'd see even other ways to do this as well.
    As with the first comment, there is no "always right" answer.  It's entirely dependent on the user's use case for the PDF, requirements of additional standard, and the user's needs.

  • Best way to upload a large 25 minute video and where to?

    best way to upload a large 25 minute video and where to?

    Just a couple minutes surfing - YouTube only allows 15-minute videos from general users; to upload longer ones, apparently you have to be a trusted long-time source. There is also a restriction on the file size.
    YouTube's help section should give you all the details.
    I don't watch online videos - so I just used a query in ASK.Com to get some quick tips.

  • What is the most efficient way of passing large amounts of data through several subVIs?

    I am acquiring data at a rate of once every 30mS. This data is sorted into clusters with relevant information being grouped together. These clusters are then added to a queue. I have a cluster of queue references to keep track of all the queues. I pass this cluster around to the various sub VIs where I dequeue the data. Is this the most efficient way of moving the data around? I could also use "Obtain Queue" and the queue name to create the reference whenever I need it.
    Or would it be more efficient to create one large cluster which I pass around? Then I can use unbundle by index to pick off the values I need. This large cluster can have all the values individually or it could be composed of the previously mentioned clusters (i.e. a large cluster of clusters).

    > I am acquiring data at a rate of once every 30mS. This data is sorted
    > into clusters with relevant information being grouped together. These
    > clusters are then added to a queue. I have a cluster of queue
    > references to keep track of all the queues. I pass this cluster
    > around to the various sub VIs where I dequeue the data. Is this the
    > most efficient way of moving the data around? I could also use
    > "Obtain Queue" and the queue name to create the reference whenever I
    > need it.
    > Or would it be more efficient to create one large cluster which I pass
    > around? Then I can use unbundle by index to pick off the values I
    > need. This large cluster can have all the values individually or it
    > could be composed of the previously mentioned clusters (i.e. a large
    > cluster of clusters).
    It sounds pretty good the way you have it. In general, you want to sort
    these into groups that make sense to you. Then if there is a
    performance problem, you can arrange them so that it is a bit better for
    the computer, but let's face it, our performance counts too. Anyway,
    this generally means a smallish number of groups with a reasonable
    number of references or objects in them. If you need to group them into
    one to pass somewhere, bundle the clusters together and unbundle them on
    the other side to minimize the connectors needed. Since the references
    are four bytes, you don't need to worry about the performance of moving
    these around anyway.
    Greg McKaskle
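    Greg's point that references are cheap to pass applies outside LabVIEW too. A rough Java analogy of a "cluster of queue references", with hypothetical queue names — the holder only carries references, so handing it to a subVI-equivalent never copies the queued data:

    ```java
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class QueueHolder {
        // Analogue of a cluster of queue refnums: a small bundle of
        // queue references that is cheap to pass between methods.
        final BlockingQueue<double[]> acquired = new ArrayBlockingQueue<double[]>(64);
        final BlockingQueue<String> log = new ArrayBlockingQueue<String>(64);

        // A consumer receives the whole holder and dequeues only from
        // the queue it cares about.
        static double firstSample(QueueHolder q) {
            double[] frame = q.acquired.poll();
            return frame == null ? Double.NaN : frame[0];
        }
    }
    ```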

  • Best way to transform a jigsaw puzzle to landscape

    Hello,
    I am trying to puppet warp a big square that looks like a complete jigsaw puzzle and warp it to a rolling hill landscape. What befuddles me is how to rotate the plane around its center. In other words, if I put a skewer into the middle of the plane or puzzle, take the corner, and rotate it around that axis (in 3D talk, the y axis). Every place I put the axis point and rotate the plane, it rotates on the z axis, around like a clock. So, to clarify: if I could put my palm down onto the center of this plane and, with the other hand, take the corner of this plane and spin it around a bit, how could I do this move in Photoshop?
    Do I just continue to warp it so it has that appearance, or can I rotate it as shown in my image? I just want the perspective to be more like the puzzle is coming out of the lower right corner and extending out to the top left corner, instead of coming from the bottom and out to the background as it is here. Thank you.
    Laurie


  • Transforming large data arrays

    Hi,
    I believe this is quite a simple question, but I am trying to find the most efficient way of doing this. Currently I have acquired multi-channel binary data in files that can be up to and above 1 GB. The data is stored in one file in the order (say, when acquiring 3 channels) channel 1, channel 2, channel 3, channel 1... and so on. I need to then convert this data into a spreadsheet file (.txt is fine), transform it into voltages, and reorient it so the new file would be a 2D array in the form:
    channel 1 channel 2 channel 3
    channel 1 channel 2 channel 3.......... and so on
    Currently I do this very simply by reading the I16 binary data, changing it to voltages by multiplying by 10/32768 (I work with the range of 10 V to -10 V and the binary is 16-bit), decimating the 1D array, building it into a 2D array and saving this.
    The problem is that when doing this with large files the system runs out of memory. I was wondering if there is a way to read just part of the file at a time instead of all of it at once, and just append it to the saved file?
    Thanks
    Charlie

    Hi raj,
    here's a picture (worth a thousand words ):
    Functions look different in LV7.1 (from left to right): Open file, read text file, write text file, close file.
    And you should note your LabView version when you ask for example code!
    Message Edited by GerdW on 01-24-2008 01:30 PM
    Best regards,
    GerdW
    CLAD, using 2009SP1 + LV2011SP1 + LV2014SP1 on WinXP+Win7+cRIO
    Kudos are welcome
    Attachments:
    readwrite_ex.png ‏4 KB
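    The chunked approach Charlie asks about looks roughly the same in any language: read one frame of I16 samples, scale to volts, append a text row, repeat, so memory use stays constant regardless of file size. A Java sketch under the assumptions of big-endian samples (LabVIEW's default byte order) and a ±10 V range; the class name is made up:

    ```java
    import java.io.DataInputStream;
    import java.io.EOFException;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.PrintWriter;

    public class ChunkedConvert {
        // Streams interleaved I16 samples (ch1, ch2, ch3, ch1, ...) to a
        // tab-separated text output one frame at a time. Returns the
        // number of complete frames written.
        public static int convert(InputStream in, PrintWriter out, int channels)
                throws IOException {
            DataInputStream data = new DataInputStream(in); // big-endian reads
            short[] frame = new short[channels];
            int frames = 0;
            while (true) {
                try {
                    for (int c = 0; c < channels; c++) frame[c] = data.readShort();
                } catch (EOFException e) {
                    break; // end of file; any trailing partial frame is dropped
                }
                StringBuilder row = new StringBuilder();
                for (int c = 0; c < channels; c++) {
                    if (c > 0) row.append('\t');
                    row.append(frame[c] * 10.0 / 32768); // counts -> volts, +/-10 V
                }
                out.println(row);
                frames++;
            }
            out.flush();
            return frames;
        }
    }
    ```

    Wrapping the InputStream in a BufferedInputStream keeps the per-sample reads cheap on real files.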

  • What is the efficient way of working with tree information in a table?

    hi all,
    i have to design a database to store, access, and add or delete tree nodes (the tree information). what is the efficient way of accomplishing it?
    let's assume the information to be stored in the table is parent, child and type (optional). The queries should be very generic (should be able to work with any database).
    anybody have any suggestions? I have to work with large data.
    quick response is highly appreciated.
    thanks in advance,
    rahul

    Did you check out this link?
    http://www.intelligententerprise.com/001020/celko1_1.shtml
    Joe Celko has given some really interesting ways to implement trees in an RDBMS.
    Best wishes
    Anubrata

  • Most efficient way to insert into a story with many floating images?

    I have a document with many floating images. They must all be floating because otherwise I do not get the Caption numbering right and it is impossible (because some images take an entire page) to use only anchored images.
    Now, I have to insert a large part into the middle / at the beginning of a section. If I do that the text will flow, but the images remain in place. This is extremely slow working because of all the movements I have to do on all the images. What I would like to do is to let the pages after the page where I am entering remain the same, and when I insert text and images these should just move up one page at a time. Then, at the end, I can do the fine tuning of attaching both parts again. Is there a way to do that in an efficient way?
    What I now often do is move all the floating images out of the pages, then insert the new stuff, move the images back, then make sure all the references are fine (e.g. if they refer to an image on another page, the style becomes paragraph number + page number). For some sections this is a hideous amount of work (many, many images).
    Is there a smarter way. I have been thinking about splitting and later rejoining a section. If I add pages to one section, the following sections are not damaged, after all.
    What is the best way to do this?
    Thanks in advance.

