Most efficient way to get a connection from a defined connection pool [whole message]

Having recently load-tested the application we are developing I noticed that
one of the most expensive (time-wise) calls was my fetch of a db-connection
from the defined db-pool. At present I fetch my connections using:

    private Connection getConnection() throws SQLException {
        try {
            Context jndiCntx = new InitialContext();
            DataSource ds = (DataSource) jndiCntx.lookup("java:comp/env/jdbc/txDatasource");
            return ds.getConnection();
        } catch (NamingException ne) {
            myLog.error(this.makeSQLInsertable("getConnection - could not find connection"));
            throw new EJBException(ne);
        }
    }
In other parts of the code, not developed by the same team, I've seen the same task accomplished by:

    private Connection getConnection() throws SQLException {
        return DriverManager.getConnection("jdbc:weblogic:jts:FTPool");
    }
From the performance measurements I made, the latter seems to be much more efficient (time-wise). To give you some metrics:
The first version took a total of 75724 ms for a total of 7224 calls, which gives ~11 ms/call.
The second version took a total of 8127 ms for 11662 calls, which gives ~0.7 ms/call.
I'm no JDBC guru, so I'm probably missing something vital here. One suspicion I have is that the second call first finds the jdbc pool and after that makes the very same (DataSource) jndiCntx.lookup("java:comp/env/jdbc/txDatasource") call in order to fetch the actual connection anyway. If that is true, then my comparison is plain wrong, since one call is part of the other. If not, then the second version sure seems a lot faster.
Apart from the obvious performance differences between the two approaches above, is there any other difference one should be aware of (transaction context, for instance)? Basically, I'm working in an EJB environment on WebLogic 7.0 and looking for the most efficient way to get hold of a db connection in code. Comments, anyone?
//Linus Nikander - [email protected]

Linus Nikander wrote:
Thank you for both your replies. As per your suggestions I've improved my connection handling (I ended up implementing the Service Locator pattern, as a matter of fact).
One thing still puzzles me, though. Which is the "proper" way to fetch the actual DataSource, and why? As I stated before, I've seen two approaches within the code I've got:

    1. myDs = myServiceLocator.getDataSource("jdbc:weblogic:jts:FTPool");
    2. myDs = myServiceLocator.getDataSource("java:comp/env/jdbc/tgsDB");

where getDataSource does a dataSource = (DataSource) initialContext.lookup(dataSourceName); dataSourceName being the input string, obviously.
tgsDB is defined as
<reference-descriptor>
<resource-description>
<res-ref-name>jdbc/tgsDB</res-ref-name>
<jndi-name>tgs-dataSource</jndi-name>
</resource-description>
</reference-descriptor>
in weblogic-ejb-jar.xml
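
For reference, here is a minimal sketch of the kind of caching Service Locator described above. The class and method names follow the usage in this thread; the implementation details (one shared InitialContext, a synchronized cache keyed by JNDI name) are assumptions, not code from the original posts:

    import java.util.HashMap;
    import java.util.Map;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;
    import javax.sql.DataSource;

    public class ServiceLocator {
        private final InitialContext ctx;
        private final Map cache = new HashMap(); // dataSourceName -> DataSource

        public ServiceLocator() throws NamingException {
            ctx = new InitialContext();
        }

        // Pay the JNDI lookup cost only once per name; later calls hit the cache.
        public synchronized DataSource getDataSource(String dataSourceName)
                throws NamingException {
            DataSource ds = (DataSource) cache.get(dataSourceName);
            if (ds == null) {
                ds = (DataSource) ctx.lookup(dataSourceName);
                cache.put(dataSourceName, ds);
            }
            return ds;
        }
    }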
From what I can understand from your answer, you don't recommend using the JNDI lookup way of getting the connection at all?

Correct.

The service locator that I implemented will still perform a JNDI lookup, but only once. Will the fact that I'm talking to an RMI object anyway significantly impact performance (when compared to your non-JNDI method)?

In some cases, for earlier 7.0 service packs, maybe yes. For the very latest, it shouldn't hurt.
In my two examples above, if I use version 1, how will the server know whether to give me a TX-bound connection or not? In version 1, FTPool maps to a pool with both TX and non-TX datasources. In version 2, tgsDB maps directly to a TX datasource.
I might be asking a lot of strange questions, probably because I'm just getting the hang of all the resource-reference issues that EJBs are associated with. Bear with me ;)
//Linus
"Joseph Weinstein" <[email protected]> wrote in message
news:[email protected]...
Hi. As Jon said, the lookups are redundant. Because you showed that other way, I will infer that this code is always being run in server-side code. Good. I will give you a third way which is much better than either of the ones you showed. The first method you showed has a problem for all but the latest SPs: your jdbc objects will all be going through an unnecessary level of indirection, because you are getting an RMI jdbc object which talks to the JTS driver object.
The second, faster method you showed also has a serious problem! One should never call DriverManager methods in multithreaded JDBC programs, because all DriverManager calls are class-synchronized, including some small internal ones like DriverManager.println(), which all JDBC drivers and even the constructor for SQLException call, so one slow getConnection() call can inadvertently halt all other JDBC being done in the whole JVM! Also, for JVMs that have lots of jdbc drivers registered, DriverManager is inefficient because it simply sends your URL and properties to every driver it has registered until it finds one that doesn't throw an exception and returns a connection.
Here's the fastest way:

    // Do this once and reuse the driver object everywhere. It can be used by multiple threads.
    Driver d = (Driver) Class.forName("weblogic.jdbc.jts.Driver").newInstance();

Then, whenever you want a connection:

    public void myJDBCMethod() {
        Connection c = null; // always a method-level object
        try {
            c = d.connect("jdbc:weblogic:jts:FTPool", null);
            // ... do all the jdbc for the method ...
            c.close();
            c = null;
        } catch (Exception e) {
            // ... do whatever, if needed ...
        } finally {
            // close the connection regardless of failure or exit path
            if (c != null) try { c.close(); } catch (Exception ignore) {}
        }
    }
Joe

Similar Messages

  • Most efficient way to get document names?

    I was wondering, what is the most efficient way to get the document names in a container? Use the built-in 'name' index somehow, or is there an 'efficient' XPath/XQuery?
    We've been using the XPath /* which is fine with small instances, but causes a Java heap out-of-memory error on large XML instances, i.e. /* gets everything, which is not ideal when all we want are document names.
    Thx in advance,
    Ant

    Hi Antony,
    Here is an example of retrieving the document names in C++:

        void doQuery(XmlContainer &container,
                     XmlQueryContext &context,
                     const std::string &XPath)
        {
            XmlResults results(container.queryWithXPath(0, XPath, &context));
            // Iterate through the result set as is normal
            XmlDocument theDocument;
            while (results.next(theDocument)) {
                std::cout << "Found document named: "
                          << theDocument.getName() << std::endl;
            }
        }

    Regards,
    Bogdan Coman

  • Just got girlfriend a new iPad2. Her iMac is a PowerPC G5 (Tiger version 10.4.11) with 512 mb RAM. What's the simplest, most efficient way to get her iPad2 up and running and synced to her Mac?

    Most of the Apple store sales people and some of the genius bar people are only knowledgeable about Apple's more recent offerings. They are not very knowledgeable, I found, on older PowerPC-based Apple computers, I'm afraid.
    Here's the real scoop.
    Your girlfriend's G5 can only install up to OS X 10.5 Leopard. This is the last compatible OS X version for PowerPC users.
    OS X 10.6 Snow Leopard and OS X10.7 Lion are for newer Intel CPU Apple computers.
    Early iMac G5's can only have up to 2 GBs of RAM.
    Later iMac G5's (2005-2006) could take up to 2.5 GBs of RAM
    2 GBs of RAM will run OS X 10.5 Leopard just fine.
    The very latest iTunes (10.5.2) can be installed and runs on both PowerPC and Intel CPU Macs.
    However, there are certain new iTunes features that won't work without an Intel Mac.
    One such iOS/iTunes feature is syncing wirelessly over WiFi.
    This will not work unless you have an iDevice running iOS 5 and an Intel Mac running 10.6 Snow Leopard or better.
    Although I was disappointed that I would not be able to do this with my G4 Mac, it's not a big problem for me.
    So, your girlfriend's computer should be fine for what she intends to use it for.
    The Apple people either just plain didn't know, or were trying to get you to think about buying a new Mac.
    At least as of now, that's not truly necessary.
    If Apple, at some later point, drops support for PowerPC users running 10.5, then that would be the time to consider a new or "newer" Intel CPU Mac.
    My planned Mac upgrade is to seek out a "newer" last-version G5 as my "new" Mac.
    I can't afford, right now, to replace all of my core PowerPC software with Intel versions, so I need to stick with the older PowerPC Macs for the time being. The last of the G5's is what I seek.

  • Most efficient way to delete "removed" photos from hard disk?

    Hello everyone! Glad to have this great community to come to for help. I searched for this question but came up with no hits. If it's already been discussed, I apologize and would love to be directed to the link.
    My wife and I have been using LR for a long time. We're currently on version 4. Unfortunately, she's not as tech-savvy or meticulous as I am, and she has been unknowingly "Removing" photos from the LR catalogues when she really meant to delete them from the hard disk. That means we have hundreds of unwanted raw photo files floating around on our computer and no way to pick them out from the ones we want! As a very organized and space-conscious person, I can't stand the thought. So my question is: what is the most efficient way to permanently delete these unwanted photos from the hard disk?
    I did find one suggestion that said to synchronize the parent folder with its respective catalogue, select all the photos in "Previous Import," and delete those, since they will be all of the photos that were previously removed from the catalogue.
    This is a great suggestion, but it probably wouldn't work for all of my catalogues since my file structure is organized by date (the default setting for LR). So, two catalogues will share the same "parent folder" in the sense that they both have photos from May 2013, but if I synchronize May 2013 with one, then it will get all the duds PLUS the photos that belong in the other catalogue.
    Does anyone have any suggestions? I know there's probably not an easy fix, and I'm willing to put in some time. I just want to know if there is a solution and make sure I'm working as efficiently as possible.
    Thank you!
    Kenneth

    I have to agree with the comment about multiple catalogs referring to images that are mixed in together... and the added difficulty that may have brought here.
    My suggestions (assuming you are prepared to combine the current catalogs into one):
    - In each catalog, put a distinctive keyword onto all the images, so that you can later discriminate these images as to which particular catalog they were formerly in (just in case this is useful information later).
    - As John suggests, use File / "Import from Catalog" to bring all LR images together into one catalog.
    - Then, in order to separate out the image files that ARE imported to LR from those which either never were, or have since been removed, I would duplicate just the imported ones to an entirely separate and dedicated disk location. This may require the temporary use of an external drive, with enough space for everything.
    - To do this, highlight all the images in the whole catalog, then use File / "Export as Catalog", selecting the option "include negatives". Provide a filename and location for the catalog inside your chosen new saving location. All the image files that are imported to the catalog will be selectively copied into this same location alongside the new catalog. The same relative arrangement of subfolders will be created there for them all to live inside, as is seen currently. But image files that do not feature in LR currently will be left behind by this operation.
    - Your new catalog is now functional, referring to the copied image files. Making sure you have a full backup first, you can start deleting image files from the original location that you believe to be unwanted. You can do this safe in the knowledge that anything LR is actively relying on has already been duplicated elsewhere. So you can be quite aggressive at this, only watching out for image files that are required for other purposes (than as master data for Lightroom) - e.g., the exported JPG files you may have made.
    IMO it is a good idea to practice a full separation of image files used in your LR image library, from all other image files. This separation means you know where it is safe to manage images freely using the OS, vs where (what I think of as the LR-managed storage area) you need to bear LR's requirements constantly in mind. Better for discrete backup, too.
    In due course, as required, the copied image files plus catalog can be moved bodily to another drive (for example, if they have been temporarily put on an external drive, and you want to store them on your main internal one again). This then just requires a single re-browsing of their parent folder's location, in order to correct LR's records inside this catalog, as to the image files' changed addresses.
    If you don't want to combine the catalogs into one, a similar set of operations as above, can be carried out for each separate catalog you have now. This will create a separate folder structure in each case, containing just those duplicated image files. Once this has been done for all catalogs, you can start to clean up the present image files location. IMO this is very much the laborious and inflexible option, so far as future management of the total body of images is concerned... though there may still be some overriding reason for working that way.
    RP

  • Iterators - most efficient way to get last object?

    I have a Collection of objects, such as from an EJB, and I want to get only the last object. What is the most efficient way of doing this?
    TIA!

    addendum to previous post.
    Although, that test does call to mind the question: what are you going to do if it is 0? Throw an exception? As the code stands, it will throw an index-out-of-bounds exception to the calling process, which could be handled there, and in truth probably should be.
    Modified and expanded example:

        public class ListThingie {
            java.util.ArrayList listOfThingies = null;

            public ListThingie() {
                super();
                listOfThingies = new java.util.ArrayList();
                buildListOfThingies();
            }

            public void buildListOfThingies() {
                // ... build the ArrayList here
            }

            public Object getLastThingie() throws IndexOutOfBoundsException {
                return listOfThingies.get(listOfThingies.size() - 1);
            }
        }

        public class ThingieProcessor {
            public static void main(String[] args) {
                try {
                    ListThingie listThingie = new ListThingie();
                    System.out.println(listThingie.getLastThingie());
                } catch (IndexOutOfBoundsException e) {
                    System.out.println(e.getMessage());
                }
            }
        }

    There, that's better.
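
    Since the thread title asks about Iterators specifically, it is worth noting that a plain Iterator offers no random access, so the only way to reach the last element is to walk the whole collection. A minimal sketch (the class and method names are mine, not from the thread):

        import java.util.Iterator;

        public class LastElement {
            // Walks the iterator to the end; O(n). Returns null for an empty iterator.
            public static Object getLast(Iterator it) {
                Object last = null;
                while (it.hasNext()) {
                    last = it.next();
                }
                return last;
            }
        }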

  • What's the most efficient way to serve a file from a servlet?

    I have a servlet that does various different things depending on the needs. Sometimes it dynamically generates content, and sometimes all it does is send a file out, with no alteration.
    What is the most efficient way to just send a file?
    One option:

        OutputStream os = response.getOutputStream();
        InputStream is = new FileInputStream(...);
        // (send all the bytes from is to os, the regular way, using a buffer)

    Another option is to say:

        RequestDispatcher rd = request.getRequestDispatcher(fileName);
        rd.forward(request, response);

    Any other options? What's the preferred way of doing this?
    I know the rule of "don't optimize too early" but this is a situation where we need to get the maximum amount of files served with the hardware we have, and it's going to be a lot of static files, so efficiency is important.
    Thanks

    Ok, that's what I thought. It would be nice if there were a "response.sendStream(InputStream input)" method in the ServletResponse class. Even nicer would be a sendFile or sendChannel or something. This is probably a common usage, and it's a place where the container has many opportunities for optimization. For example, it could call the operating system's send_file kernel call, so the entire transfer would be done directly from the disk controller to the ethernet card (on systems that support that).
    For now I'll just do my own buffered copy.
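
    For what it's worth, here is a minimal sketch of that buffered copy, assuming the standard servlet API; the class name, method name, and the 8 KB buffer size are illustrative choices, not from the thread:

        import java.io.*;
        import javax.servlet.http.HttpServletResponse;

        public class FileSender {
            // Streams a file to the response through a fixed-size buffer.
            public static void sendFile(File file, HttpServletResponse response)
                    throws IOException {
                response.setContentLength((int) file.length());
                InputStream is = new FileInputStream(file);
                try {
                    OutputStream os = response.getOutputStream();
                    byte[] buffer = new byte[8192];
                    int n;
                    while ((n = is.read(buffer)) != -1) {
                        os.write(buffer, 0, n);
                    }
                } finally {
                    is.close();
                }
            }
        }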

  • Most efficient way to get row count with a where clause

    Have not found a definitive answer on how to do this.  Is there a better way to get a row count from a large table that needs to satisfy a where clause, like so:

        SELECT COUNT(*) FROM BigTable WHERE TypeName = 'ABC'

    I have seen several posts suggesting something like

        SELECT COUNT(*) FROM BigTable(NOLOCK);

    and

        SELECT OBJECT_NAME(object_id), row_count
        FROM sys.dm_db_partition_stats
        WHERE OBJECT_NAME(object_id) = 'BigTable'

    but I need the row count that satisfies my where clause (i.e.: WHERE TypeName = 'ABC').

    It needs an index to improve the performance.
    - create an index on the TypeName column
    - create an indexed view to do the count in advance
    - partition on TypeName (although it's unlikely), then get the count from the system tables
    All those 3 solutions are about indexing.
    Regards
    John Huang, MVP-SQL, MCM-SQL, http://www.sqlnotes.info

  • What is the most efficient way to use Remove Grain from After Effects on large Premiere Pro project?

    I've tried to use Dynamic Linking both ways, but the effects don't appear no matter how dramatic I make them. Why couldn't there just be Remove Grain in Premiere Pro, so I wouldn't have to go through the headache of figuring this out? I've tried importing the Premiere Pro project into After Effects, where everything comes up, but since there are so many files it doesn't seem feasible or convenient at all. Trying Dynamic Link vice versa didn't yield results either. What can I do??? Projects range from about half an hour to an hour and a half. Thank you.

    The situations are well lit; it might be lack of knowledge of the camera settings itself.
    Sort that, and your workflow issue disappears instantly.
    There's nothing more efficient?
    Yep... read above... but any plugin, e.g. Neat Video, will take long processing time.
    Fixing grain... actually "noise"... is always an image-degrading process as well. It blurs!

  • Which is the more efficient way to get a result set from the database server

    Hi,
    I am working on a project where I need to query the database to fetch a result set and then iterate through it. Now, what I want is to create one single piece of Java code that can call many different SQLs and create a list out of the result set. There are two approaches open to me:
    1.) Create a txt file where I can store my queries. My Java program can read this file and get the appropriate query to be used.
    2.) Create a stored procedure containing the queries and call the stored procedure from my Java program. Also, note that some of the queries need to be created dynamically depending upon the parameters supplied.
    Of these two approaches, which is optimal, and why?
    Also, the following things should be noted:
    1. At times I want to create the where clause of the query dynamically, depending upon the parameters passed.
    2. I want one single Java file that will handle all database calls.
    3. Parameters to the stored procedure can be passed using an array descriptor.
    4. The connection I am making using JNDI.
    Please do provide me the optimal of these two. You may also suggest some other approaches, if any.
    Thanks,
    Rajan
    Edited by: RP on Jun 26, 2012 11:11 PM

    RP wrote:
    In case of queries stored in text files, I will need to replace some predefined placeholders with actual parameters and then pass that modified query to the db engine. Even so, I liked the second approach, as it is more easily maintainable.

    There are a couple of issues. Shared SQL is one. Irrespective of the method used, the SQL cursor that is created needs to have bind variables. This ensures re-usability of the cursor, reduces the risk of Shared Pool fragmentation, lowers hard parsing, and reduces CPU utilisation.
    Another issue is flexibility. If the SQL cursors are created by stored procedures, this code resides on the server and abstracts the Java client from the complexities of SQL and SQL performance. The code can easily be updated and fine-tuned to deliver faster/better SQL cursors, or modified to take new Oracle features, changes in the data model, and so on, into consideration. This stored proc can be updated without having to touch or recompile a single byte of Java client code.
    There's also the security issue. What is more secure? SQL encapsulated in stored procs in a secure database and server environment? Or SQL "encapsulated" in text files on the client?
    The less code you have running on the client, the less code you have running in the wild that can be compromised without having to first compromise the server.

    Only I was worried about any performance issue that might happen using this approach.

    Performance is not a factor of who creates the SQL cursor. Whether a Java client creates a SQL cursor, or a PL/SQL stored proc creates a SQL cursor, or a .Net client creates a SQL cursor - that SQL cursor does not know what the client is. It does not care what the client is. The SQL cursor performs as well as it is capable of, given the execution plan, data volumes, server resources and speed/performance of the server.
    The client language and the SQL cursor interface used by the client (there are several in PL/SQL) determine the performance of the client's interaction with the cursor (e.g. round trips to the database when interfacing with the cursor). The client language (and its client interface to the cursor) does not dictate the actual performance of that SQL cursor on the database (it does not make joins faster, or I/O faster).

    One more question: will my Java program close the cursor that I opened in the procedure?

    That you need to ask your Java code. Java code leaking ref cursors is unfortunately all too common. You need to make sure that your Java client interface to SQL cursors closes the cursor handle when done.
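
    To illustrate that last point, here is a hedged sketch of calling a stored procedure from JDBC and closing the ref cursor handle explicitly. The procedure name get_results and its signature are hypothetical; OracleTypes.CURSOR is the Oracle JDBC constant for ref cursor out-parameters, so this assumes the Oracle driver is on the classpath:

        import java.sql.CallableStatement;
        import java.sql.Connection;
        import java.sql.ResultSet;
        import java.sql.SQLException;

        public class ProcCaller {
            public static void callProc(Connection con, String typeName) throws SQLException {
                // A bind variable keeps the cursor shareable on the server side.
                CallableStatement cs = con.prepareCall("{call get_results(?, ?)}");
                try {
                    cs.setString(1, typeName);
                    cs.registerOutParameter(2, oracle.jdbc.OracleTypes.CURSOR);
                    cs.execute();
                    ResultSet rs = (ResultSet) cs.getObject(2);
                    try {
                        while (rs.next()) {
                            // ... consume the row ...
                        }
                    } finally {
                        rs.close(); // close the ref cursor handle so it is not leaked
                    }
                } finally {
                    cs.close();
                }
            }
        }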

  • [11g] most efficient way to calculate the size of an xmltype column

    I need to check the current size of some xmltype column in a BIU trigger.
    I don't think it's good to use
      length(:new.xml_data.GetStringVal());
    or
      dbms_lob.GetLength(:new.xml_data.GetClobVal());
    What's the most efficient way to get the storage size?
    I don't need the string serialized size.
    It could also be the internal storage size (incl. administration data overhead).
    - thanks!
    regards,
    Frank

    > May I ask for what reason you need to know it?
    I need to handle very large XML document output, which currently hits the internal xmltype limitation of 4GByte, when aggregating XML document fragments for this.
    > You'll get a relevant answer if you give us relevant information :
    > - exact db version
    SELECT * FROM PRODUCT_COMPONENT_VERSION;

      #  PRODUCT                                 VERSION     STATUS
      1  NLSRTL                                  11.2.0.3.0  Production
      2  Oracle Database 11g Enterprise Edition  11.2.0.3.0  64bit Production
      3  PL/SQL                                  11.2.0.3.0  Production
      4  TNS for Linux:                          11.2.0.3.0  Production
    > - DDL of your table
    > XML stored as XMLType datatype can use different storage models, depending on the version.
    I don't use a dedicated storage clause.
    But I am hitting the problem already when aggregating into an xmltype variable in PL/SQL
    - BEFORE writing back to a result table.
    Can I avoid such problems when writing to a table DIRECTLY, without an intermediate xmltype PL/SQL variable
    - depending on the storage clause?
    The reason for asking how to get the size of some xmltype (in a table column and/or in a PL/SQL variable) is that I am thinking of threshold detection.
    In case the threshold is reached: outsource the XML fragment accumulated so far to some separate CLOB storage, and insert a smaller meta-information reference representing it in the output document.
    Finally, leave it up to the client system to use <xs:include> (or alike) to construct the complete document.
    rgds,
    Frank

  • Most efficient way to import data from Excel to InDesign?

    Hi all,
    I'm designing a college prospectus which includes 400+ course listings. At the moment, these listings exist as a vast Excel sheet with fields like course type, course code, description etc.
    I'm familiar with importing Excel data into InDesign and formatting tables/creating table styles and such, but the problem I'm having is that the data is in multiple columns per course in the Excel sheet, but will be arranged in one column per course with multiple rows in the InDesign document. I can't seem to find a way to easily convert these columns into rows.
    Can anyone help me with an efficient way to get the data into the layout without laborious copying and pasting or formatting?
    Many thanks in advance!

    Hi,
    In Excel: copy the cells, then use Paste Special with the Transpose option.

  • Moving content from iMovie to iMovie - what's the most efficient way?

    If I edit and create movies in iMovie08 on one iMac and I want to transfer the content to iMovie08 on another iMac 200 miles away, what's the most efficient way? I prefer not to burn DVDs, because I want to do further work on the movies in the second iMac's iMovie (so I can share them with iTunes and sync them with Apple TV).

    >PDPageAddCosContents(destPage, PDPageGetCosContents(srcPage));

    Does this method (PDPageGetCosContents) exist? It would be easy enough to create, but I don't see it in the documentation.
    More seriously, I have a vague memory that it is a Really Bad Thing to share the same Contents objects between multiple pages. Maybe something to do with page deletion, can't remember.

    >PDPageAddCosResource(destPage, PDPageGetCosResources(srcPage));

    These two methods are not symmetric, and PDPageAddCosResource doesn't work that way, sadly...
    Aandi Inston

  • Most efficient way to make an ordered list?

    Hi all,
    I have a simulation where I have agents running around in an environment. These agents need to get the closest agent near them, and they have to do this a whole bunch of times each cycle -- closest predator to them, closest prey, closest mate, next-closest mate if that one isn't interested, etc. etc.
    I realized that it would be faster if I just got all the agents in the vicinity ONCE, kept them in a sorted list, and then every time I want the closest agent of a certain type, just search from the bottom of the list up until I find the first one of that type.
    So here's the question: to make that list, I ask the environment for all the agents in the vicinity, and then go through them one-by-one. I find out the distance between myself and that agent. I then.... what?
    I could put them all into an ArrayList, and then apply some sorting algorithm to that ArrayList. Or I can try to insert them in the right order WHILE I'm making the ArrayList. Or maybe there's some better Collection object that would be even more efficient -- somehow pushing them in and out of stacks, or whatever else smart programmers think up.
    Can anyone suggest the most efficient way to do this? This is something that every agent has to do every step, so efficiency is key.
    Thanks!
    Edit: As a note, calculating the distance between two agents isn't free, and if I either sort or insert as I'm making it, the naive implementation (i.e. the way I would do it...) would require re-checking this distance for every agent in the list every single time a new agent was added. So maybe I could make some use of a HashMap, so that I can store these distances?
    Edited by: TimQuinn on Oct 15, 2009 9:35 AM

    TimQuinn wrote:
    Ok, thanks for all the great suggestions. I think that caching the distance in a wrapper object is a big plus, and then I can run some tests and see if using a built-in collections sorting algorithm or using a treeset is faster in my specific case. Thanks!
    Any thoughts as to the idea of creating one giant map of the distance from each agent to every other agent just once, rather than having each agent work out its own distances to each other agent? I feel like this would be faster (at the expense of memory), but don't know how I'd start approaching it.

    Well, your idea of the Map would probably work. You would have to make some object that pairs up two agents, something like this:

        public class AgentPair {
            private final Agent a1, a2;
            public AgentPair(Agent a1, Agent a2) { this.a1 = a1; this.a2 = a2; }
            // distance is symmetric (a.distanceTo(b) == b.distanceTo(a)),
            // so the reverse pair should also be equal
            public boolean equals(Object other) {
                if (!(other instanceof AgentPair)) return false;
                AgentPair p = (AgentPair) other;
                return (a1.equals(p.a1) && a2.equals(p.a2)) || (a1.equals(p.a2) && a2.equals(p.a1));
            }
            // order-independent, to match the symmetric equals() above
            public int hashCode() { return a1.hashCode() + a2.hashCode(); }
        }

    This assumes either that you don't override equals or hashCode in Agent, or that you properly override both.
    Then you would have a map that you can populate, given an array of all agents:

        Map<AgentPair, Double> distMap = new HashMap<AgentPair, Double>();
        Agent[] agents; // ... the array of all agents
        for (int i = 0; i < agents.length - 1; i++) {
            for (int j = i; j < agents.length; j++) {
                distMap.put(new AgentPair(agents[i], agents[j]), agents[i].distanceTo(agents[j]));
            }
        }

    Alternatively, if you're not sure you'll definitely use all of the computations, you could test to see if the computation result is already in the map, and if not, perform it; otherwise, use the cached result.
    Any thoughts? Should I make a new post? Abandon the idea?

    As always, try it and see. Essentially, it will be faster assuming these two conditions:
    1) You would have to do more than n^2 (well, actually n choose 2) distance calculations otherwise, and
    2) Computing the distance costs more than retrieving it from distMap (this isn't a given!). If each Agent had a sequential ID, you could use a two-dimensional array of doubles, which would speed up lookups.
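
    A minimal sketch of that sequential-ID idea; getId() and distanceTo() are hypothetical method names, and ids are assumed to run from 0 to agents.length - 1:

        // Precompute all pairwise distances once; lookups become plain array accesses.
        double[][] dist = new double[agents.length][agents.length];
        for (int i = 0; i < agents.length; i++) {
            for (int j = i + 1; j < agents.length; j++) {
                double d = agents[i].distanceTo(agents[j]);
                dist[i][j] = d;
                dist[j][i] = d; // distance is symmetric
            }
        }
        // Later, for any two agents a and b:
        double dAB = dist[a.getId()][b.getId()];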
    Edited by: endasil on 15-Oct-2009 3:39 PM

  • Most efficient way to consume log files

    Hello everyone,
    I've been absent from the forums for a while, but I'm back at it now...
    I have a question about the most efficient way to consume log files. I read in PowerShell in Action, by Bruce Payette, that using a switch statement with a regex works pretty well; that being said, I haven't tried it yet. Select-String is working pretty well for me, but I have about 10 different entry types that I need to search the logs for every 5 minutes, and I'm scanning about 15 GB of logs at every interval. Anyway, if anyone has information about how to do something like that as quickly as possible, I'd appreciate it.
    1. Piping log files that meet my criteria to Select-String
       - This seems to work well, but I don't like searching the same files over and over again.
    2. Running logs through Get-Content and then building a filter statement
       - This is OK, but it seems to use up a fair bit of memory.
    3. Some other approach that I haven't thought of yet.
    Anyway, I know this is a relatively nebulous question, sorry about that. I'm hoping that someone on here knows a really good way to find strings in log files quickly.
    Hope that helps! Jason

    You can sometimes squeeze out more speed at the expense of memory usage, but filters are pretty fast. I don't see a benefit to holding the whole file in memory, in this case.
    As I mentioned earlier, though, C# code will usually blow PowerShell away in terms of execution time.  Here's a rewrite of what I just did (just for the INI Section pattern, to keep the post size down):
    $string = @'
    #Comment Line
    [Ini-Style Section Line]
    Key = Value Line
    192.168.0.1 localhost
    Some line that doesn't match anything.
    '@

    Set-Content -Path .\test.txt -Value $string

    Add-Type -TypeDefinition @'
    using System;
    using System.Text.RegularExpressions;
    using System.Collections;
    using System.IO;

    public interface ILineParser {
        object ParseLine(string line);
    }

    public class IniSection {
        public string Section;
    }

    public class IniSectionParser : ILineParser {
        public object ParseLine(string line) {
            object o = null;
            Match match = Regex.Match(line, @"^\s*\[([^\]]+)\]\s*$");
            if (match.Success) {
                o = new IniSection() { Section = match.Groups[1].Value };
            }
            return o;
        }
    }

    public class LogParser {
        public static IEnumerable ParseFile(string fileName, ILineParser[] lineParsers) {
            using (StreamReader sr = File.OpenText(fileName)) {
                string line;
                while ((line = sr.ReadLine()) != null) {
                    foreach (ILineParser parser in lineParsers) {
                        object result = parser.ParseLine(line);
                        if (result != null) {
                            yield return result;
                        }
                    }
                }
            }
        }
    }
    '@

    $parsers = @(
        New-Object IniSectionParser
    )

    $results = [LogParser]::ParseFile("$pwd\test.txt", $parsers)
    $results
    Instead of defining separate classes for each type of line and output object, you could probably do something more generic with delegates (similar to how I used ScriptBlock.Invoke() in the PowerShell example), but it might sacrifice some speed to do so.

  • I still have my iPod Nano 3rd generation, but my old computer crashed and I did not have itunes backed up, so I lost my library. Is there a way to get the songs from my old ipod to new computer library and then on to the new ipod?

    I still have my iPod nano 3rd generation, but my old computer crashed and I did not have iTunes backed up. Is there a way to get the music from the iPod to the new computer's library and then onto the new iPod?

    Save all the photos from your Droid into a folder on your computer.
    Connect your device to your computer. In the iTunes left pane, select iPhone under 'Devices'; then in the right pane, select the PHOTO tab. Make sure the Sync Photos box is checked, and in the "from" box select the folder that has the Droid's photos. Then click the SYNC or APPLY button in the lower right of the window.
