Zoom in/out on panel -- efficient way?

I need a panel to display an image. I want to be able to zoom in/out on that image, and to have the panel resized (which would need to adjust the image being shown so that it fills the panel). Below is roughly what I was thinking, but is there a more efficient way?
public void paint(Graphics g) {
     super.paint(g);
     AffineTransform at = new AffineTransform();
     int w = getParent().getWidth();
     int h = getParent().getHeight();
     int picW = myImage.getWidth();
     int picH = myImage.getHeight();
     double scaleX = (double) w / (double) picW;
     double scaleY = (double) h / (double) picH;
     at.scale(scaleX, scaleY);
     ((Graphics2D) g).drawImage(myImage, at, null);
}
Thanks

1) Don't override paint(); override paintComponent().
2) Keep a local copy of the image in your class. Use Image.getScaledInstance()
whenever the user changes the zoom level to rebuild your local copy. Have
paintComponent() reference the local copy.
This way, you're only performing an expensive computation when the zoom
actually changes, which is relatively infrequent, rather than every time you
have to repaint, which is relatively often.
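For example, here is a minimal sketch of that approach (class and field names are illustrative, not from the original post):
import java.awt.*;
import java.awt.image.BufferedImage;
import javax.swing.*;

// Cache the scaled image; rebuild it only when the zoom changes,
// so paintComponent() just blits the cached copy.
class ZoomPanel extends JPanel {
    private final BufferedImage original;
    private Image scaled;       // local copy referenced by paintComponent()
    private double zoom = 1.0;

    ZoomPanel(BufferedImage original) {
        this.original = original;
        rebuildScaledImage();
    }

    void setZoom(double zoom) {
        this.zoom = zoom;
        rebuildScaledImage();   // expensive work happens here, relatively infrequently
        repaint();
    }

    private void rebuildScaledImage() {
        int w = Math.max(1, (int) (original.getWidth() * zoom));
        int h = Math.max(1, (int) (original.getHeight() * zoom));
        scaled = original.getScaledInstance(w, h, Image.SCALE_SMOOTH);
    }

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        g.drawImage(scaled, 0, 0, null);  // cheap on every repaint
    }
}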

Similar Messages

  • Efficient way of saving documents uploaded by users

    Hello Experts,
    We are looking out for an efficient way of storing of documents uploaded by the users. Below is the explanation of our scenario in detail.
    We are working on the SAP E-Recruiting module and have designed custom Interactive Adobe Forms & Web Dynpro Components for the client. An end user (initiating manager) would fill in the form & upload file(s) in support of his statement & click on the "submit" button. As of now we are achieving the upload functionality through WDA's FileUpload ui element & dumping the file contents into a field of type RAWSTRING.
    We now want to change our approach of using the RAW_STRING field & go for a more efficient one. I have been googling around and read a bit about approaches like:
    Class CL_BDS_DOCUMENT_SET
    Class CL_FITV_GOS
    Function Module BDS_BUSINESSDOCUMENT_CREATEF
But all these 3 approaches seem to be targeted towards some particular business object. They expect some sort of class name or other to be passed to them as input. However, ours is a completely custom requirement wherein the entire form's data goes into a Z-table, so none of these approaches would hold good. (Please correct me if I am wrong when I say that, because I haven't personally worked on any of them to date!) Awaiting your expert opinions on this matter.
    Regards,
    Uday
    Any ideas please?
    Edited by: Uday Gubbala on Mar 4, 2010 10:56 AM

See if this sample code helps... we have DMS (Document Management System):
    MOVE: 'DRW' TO DOCUMENTDATA-DOCUMENTTYPE,
            'C:\Users\sci30\Desktop\test.doc' TO
             DOCUMENTDATA-DOCFILE1,
            'TEST DESCRIPTION' TO  DOCUMENTDATA-DESCRIPTION.
    *          move 'WR' to documentdata-STATUSINTERN.
      MOVE 'WRD' TO DOCUMENTDATA-WSAPPLICATION2.
      CLEAR : WA_OBJ_LINK.
      MOVE 'MARA' TO WA_OBJ_LINK-OBJECTTYPE.
      MOVE HEADDATA-MATERIAL TO WA_OBJ_LINK-OBJECTKEY.
      APPEND WA_OBJ_LINK TO GT_OBJ_LINK.
      CALL FUNCTION 'BAPI_DOCUMENT_CREATE2'
        EXPORTING
          DOCUMENTDATA               = DOCUMENTDATA
    *         HOSTNAME                   =
    *         DOCBOMCHANGENUMBER         =
    *         DOCBOMVALIDFROM            =
    *         DOCBOMREVISIONLEVEL        =
    *         CAD_MODE                   = ' '
    *         PF_FTP_DEST                = ' '
    *         PF_HTTP_DEST               = ' '
    *         DEFAULTCLASS               = 'X'
        IMPORTING
    *         DOCUMENTTYPE               =
    *         DOCUMENTNUMBER             =
    *         DOCUMENTPART               =
    *         DOCUMENTVERSION            =
         RETURN                     =    RETURN_DOCUBAPI
       TABLES
    *         CHARACTERISTICVALUES       =
    *         CLASSALLOCATIONS           =
    *         DOCUMENTDESCRIPTIONS       =
      OBJECTLINKS                = GT_OBJ_LINK
*         DOCUMENTSTRUCTURE          =
*         DOCUMENTFILES              =
*         LONGTEXTS                  =
*         COMPONENTS                 =
          .
  COMMIT WORK.

  • What is the most efficient way to have full access to the front panel on RT Labview?

    I have an RT machine that needs to do its job and also serve its front panel to an external machine over the network. What is the most efficient way to do this, using as little of the RT machine's time as possible while providing full functionality of the RT front panel?
    So far I have been using it directly from LabVIEW - running the VI on the remote (RT) target and viewing the front panel in local LabVIEW (Windows). I know I can also do it through the web (not very happy with that, though).
    LV2009 SP1.
    Thanks

    Running a compiled executable on the RT target, rather than running it within the development environment, is probably slightly more efficient but limits you to the web interface.  If you're running within the LabVIEW environment, I doubt there's a noticeable difference in efficiency from the RT perspective between the web server and the LabVIEW front panel, although that's mostly a guess (I would expect the RT system to send identical data in each case, once the front panel is loaded).  Those are your only options in modern LabVIEW versions.  In LabVIEW 7.1 you could build an executable that acted as the front panel for an RT system, but that feature does not exist in recent versions.  However, a quick search turned up this document with code to approximately duplicate that behavior, perhaps it will work for you?

  • Most efficient way to delete "removed" photos from hard disk?

    Hello everyone! Glad to have this great community to come to for help. I searched for this question but came up with no hits. If it's already been discussed, I apologize and would love to be directed to the link.
    My wife and I have been using LR for a long time. We're currently on version 4. Unfortunately, she's not as tech-savvy or meticulous as I am, and she has been unknowingly "Removing" photos from the LR catalogues when she really meant to delete them from the hard disk. That means we have hundreds of unwanted raw photo files floating around on our computer and no way to pick them out from the ones we want! As a very organized and space-conscious person, I can't stand the thought. So my question is: what is the most efficient way to permanently delete these unwanted photos from the hard disk?
    I did find one suggestion that said to synchronize the parent folder with the respective catalogues, select all the photos in "Previous Import," and delete those, since they will be all of the photos that were previously removed from the catalogue.
    This is a great suggestion, but it probably wouldn't work for all of my catalogues since my file structure is organized by date (the default setting for LR). So, two catalogues will share the same "parent folder" in the sense that they both have photos from May 2013, but if I synchronize May 2013 with one, then it will get all the duds PLUS the photos that belong in the other catalogue.
    Does anyone have any suggestions? I know there's probably not an easy fix, and I'm willing to put in some time. I just want to know if there is a solution and make sure I'm working as efficiently as possible.
    Thank you!
    Kenneth

    I have to agree with the comment about multiple catalogs referring to images that are mixed in together... and the added difficulty that may have brought here.
    My suggestions (assuming you are prepared to combine the current catalogs into one):
    in each catalog, put a distinctive keyword onto all the images so that you can later discriminate these images as to which particular catalog they were formerly in (just in case this is useful information later)
    as John suggests, use File / "Import from Catalog" to bring all LR images together into one catalog.
    then in order to separate out the image files that ARE imported to LR, from those which either never were / have been removed, I would duplicate just the imported ones, to an entirely separate and dedicated disk location. This may require the temporary use of an external drive, with enough space for everything.
    to do this, highlight all the images in the whole catalog, then use File / "Export as Catalog" selecting the option "include negatives". Provide a filename and location for the catalog inside your chosen new saving location. All the image files that are imported to the catalog will be selectively copied into this same location alongside the new catalog. The same relative arrangement of subfolders will be created there, for them all to live inside, as is seen currently. But image files that do not feature in LR currently, will be left behind by this operation.
    your new catalog is now functional, referring to the copied image files. Making sure you have a full backup first, you can start deleting image files from the original location, that you believe to be unwanted. You can do this safe in the knowledge that anything LR is actively relying on, has already been duplicated elsewhere. So you can be quite aggressive at this, only watching out for image files that are required for other purposes (than as master data for Lightroom) - e.g., the exported JPG files you may have made.
    IMO it is a good idea to practice a full separation of image files used in your LR image library, from all other image files. This separation means you know where it is safe to manage images freely using the OS, vs where (what I think of as the LR-managed storage area) you need to bear LR's requirements constantly in mind. Better for discrete backup, too.
    In due course, as required, the copied image files plus catalog can be moved bodily to another drive (for example, if they have been temporarily put on an external drive, and you want to store them on your main internal one again). This then just requires a single re-browsing of their parent folder's location, in order to correct LR's records inside this catalog, as to the image files' changed addresses.
    If you don't want to combine the catalogs into one, a similar set of operations as above, can be carried out for each separate catalog you have now. This will create a separate folder structure in each case, containing just those duplicated image files. Once this has been done for all catalogs, you can start to clean up the present image files location. IMO this is very much the laborious and inflexible option, so far as future management of the total body of images is concerned... though there may still be some overriding reason for working that way.
    RP

  • SQL query with multiple tables - what is the most efficient way?

    Hello I am learning PL/SQL. I have a simple procedure where I need to find number of employees and departments per location as per user input of location_id.
    I have 3 Tables:
    LOCATIONS
    location_id (pk)
    location_name
    DEPARTMENTS
    department_id (pk)
    location_id (fk)
    department_name
    EMPLOYEES
    employee_id (pk)
    department_id (fk)
    employee_name
    1 Location can have 0-MANY Departments
    1 Employee has 1 Department
    Here is the query I came up with for PL/SQL procedure:
    /*Ecount, Dcount are NUMBER variables */
    SELECT SUM (EmployeeCount), COUNT(DepartmentNumber)
         INTO Ecount, Dcount
         FROM     
         (SELECT COUNT(employee_id) EmployeeCount, department_id DepartmentNumber
              FROM employees
              GROUP BY department_id
              HAVING department_id IN
                        (SELECT department_id
                        FROM departments
                        WHERE location_id = userInput));
    I do get the correct result, but I am just wondering if my query is on the right track and if there is a more "efficient" way of doing this.
    Thanks in advance for helping a newbie out.

    Hi,
    Welcome to the forum!
    Something like this will be more efficient:
    SELECT    COUNT (employee_id)               AS ECount
    ,         COUNT (DISTINCT department_id)    AS DCount
    FROM      employees
    WHERE     department_id IN (     SELECT     department_id
                                     FROM       departments
                                     WHERE      location_id = :userInput
                                  );
    You should also try a join instead of the IN subquery.
    For efficiency, do only the things you need to do.
    For example, you don't need a count of employees in each department, so don't compute one. That means you won't need the in-line view, so don't have one.
    You don't need PL/SQL for this job, so don't use PL/SQL if you don't have to. (I realize this question was out of context, so you may have good reasons for doing this in PL/SQL.)
    Do all filtering as early as possible. Don't waste effort computing things that won't be used.
    A particular example of this is: Never use a HAVING clause when you can use a WHERE clause. What's the difference between a WHERE clause and a HAVING clause? The WHERE clause is applied before aggregate functions are computed, and the HAVING clause is applied after; there's no other difference. Therefore, if the HAVING clause isn't referencing an aggregate function, it could be done in a WHERE clause instead.

  • Advice needed: Efficient way to scan users

    Hi all,
    I wish to know an efficient way to scan users in Lighthouse. I need to write a workflow that checks out all the users and performs some updates. This workflow should run every day at midnight.
    I have created a scanner myself. Basically what it does is:
    1. Call the FormUtils.getUsers method to return all users' names into a variable.
    2. Loop through this list and call a subprocess workflow to process every user. This subprocess checks out a user view, performs updates, and then checks in the view.
    This solution is not efficient at all, since it causes my JVM to run out of memory (1 GB RAM assigned to the JVM, with about 78,000 users).
    Any advice is highly appreciated. Thank you.
    Steve

    Ok... I now understand what you are doing and why you need this.
    A long, long, long time ago (back in 3.x days) the deferred task scanner was really bad. Its nightly scan would scan ALL users each time. This is fine when your client has 4k users... but not when it has 140k users.
    Additionally, the "set deferred task" function had problems with two tasks of the same name (e.g. "disable resource"), since it used the name as the XML object name, which cannot be duplicated.
    soooo, to beat this I rewrote the deferred task handler to allow me to do all of this. Part of this was to add a searchable field called 'nextTaskDate' on the user object. After each workflow this 'date' is updated, so it is always correctly populated with the user's next deferred task date.
    each night the scanner runs and queries all users with a nextTaskDate of today. This then gives us a result set that we can iterate over, instead of having to list each user and search for tasks. It's a billion times faster.
    Your best bet is to store the task date in milliseconds and make your query "all users with next task date BEFORE now"... this way if the server is hosed you can execute tasks you may have missed.
    We have an entire re-usable implementation framework that we have patented (of which this code is a part) that answers most of these types of issues you are bringing up. It makes these implementations much simpler, faster, more scalable and maintainable.
    this make sense?
    Dana Reed
    AegisUSA
    Denver, CO 80211
    [email protected]
    773.412.3782
    "Now hiring best-in-class IdM architects. Inquire via emai"

  • A more efficient way to assure that a string value contains only numbers?

    Hi ,
    I'm using Oracle 9.2.0.6.
    I was curious to know if there was any way I could write a more efficient query to determine if a string value contains only numbers.
    Here's my current query. This SQL is from a sub query in a Join clause.
    select distinct cta.CUSTOMER_TRX_ID, to_number(cta.SALES_ORDER) SALES_ORDER
                from ra_customer_trx_lines_all cta
                where length(cta.SALES_ORDER) = 6
                and cta.SALES_ORDER is not null
                and substr(cta.SALES_ORDER,1,1) in('1','2','3','4','5','6','7','8','9','0')
                and substr(cta.SALES_ORDER,2,1) in('1','2','3','4','5','6','7','8','9','0')
                and substr(cta.SALES_ORDER,3,1) in('1','2','3','4','5','6','7','8','9','0')
                and substr(cta.SALES_ORDER,4,1) in('1','2','3','4','5','6','7','8','9','0')
                and substr(cta.SALES_ORDER,5,1) in('1','2','3','4','5','6','7','8','9','0')
                and substr(cta.SALES_ORDER,6,1) in('1','2','3','4','5','6','7','8','9','0')
    This is a string in which I'm finding A-Z a-z characters and '/' and '-' characters in all 6 positions, plus there are values that are longer than 6 characters. That's what the length(cta.SALES_ORDER) = 6 is for. Also, of course, some cells are NULL.
    So the question is: is there a more efficient way to screen out only the values in this field that are 6-character numbers, or is what I have the best I can do?
    Thanks,

    I appreciate all of your very helpful workarounds. The cost is a little better in all cases than with my original WHERE clause.
    To address the discussion about design that's popped up from this question, I can say a few things that should clear up, at least, my situation.
    First of all, this custom quoting, purchase order, and sales order entry system WAS written by a bunch of 'bad' coders who didn't document their work and then left. We don't even have an ER diagram.
    The whole project that I'm only a small part of is literally trying to put Humpty Dumpty together again and then move it from a bad custom solution into Oracle Applications.
    We're rebuilding, documenting, and doing ETL. This is one of your prototypical projects from hell.
    It's a huge database project so we're taking small bites as a time. Hopefully, somewhere right before Armageddon hits, this thing will be complete.
    But until then,..., well,..., you know the drill.
    Thanks Again.

  • What is the efficient way of working with tree information in a table?

    hi all,
    I have to design a database to store, access, add, or delete tree nodes. What is an efficient way of accomplishing this?
    Let's assume the information to be stored in the table is parent, child, and type (optional). The queries should be very generic (they should work with any database).
    Does anybody have any suggestions? I have to work with large amounts of data.
    A quick response is highly appreciated.
    thanks in advance,
    rahul

    Did you check out this link?
    http://www.intelligententerprise.com/001020/celko1_1.shtml
    Joe Celko gives some really interesting ways to implement trees in an RDBMS.
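    As a simpler starting point, the parent/child/type table you describe is the classic adjacency-list model. Here is a minimal JDBC sketch of it (the driver URL, table, and column names are purely illustrative assumptions, not from the article):
    import java.sql.*;

    // Adjacency list: each row stores one parent->child edge plus an optional type.
    public class TreeDao {
       public static void main(String[] args) throws SQLException {
          // any JDBC URL works; an H2 in-memory database is used here for illustration
          Connection con = DriverManager.getConnection("jdbc:h2:mem:tree");
          try (Statement st = con.createStatement()) {
             st.execute("CREATE TABLE tree_edge (parent VARCHAR(64), child VARCHAR(64), type VARCHAR(16))");
             st.execute("INSERT INTO tree_edge VALUES ('root', 'a', 'branch')");
             st.execute("INSERT INTO tree_edge VALUES ('a', 'b', 'leaf')");
          }
          // fetch the direct children of one node -- plain SQL, portable across databases
          try (PreparedStatement ps = con.prepareStatement(
                "SELECT child, type FROM tree_edge WHERE parent = ?")) {
             ps.setString(1, "root");
             try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                   System.out.println(rs.getString("child") + " (" + rs.getString("type") + ")");
                }
             }
          }
       }
    }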
    Best wishes
    Anubrata

  • Most efficient way to loop though many similarlly named fields?

    Hi,
    I have a 5 page document, with each page containing approximately 50 similarly named fields, e.g. Viol1Num, Viol2Num, Viol3Num ... Viol50Num.
    I am looking for an efficient way of programming a loop to look at each field in Javascript so I can do some manipulations in those fields on what the user entered.
    In FormCalc I've previously used the 'foreach' function similar to:
    foreach (Field1, Field2, Field3.....Field50) do
         'BLAH'
    endfor
    however, that gets really lengthy, especially when dealing with subsequent pages where I have to start adding 'topmostSubform.Page2.' in front of each field name so that I can access from the first page all of the fields on subsequent pages.  Also, I need to do this in Javascript, not FormCalc.
    For example, in JS I am using this loop to mark all fields as read only:
    for (var nPageCount = 0; nPageCount < xfa.host.numPages; nPageCount++) {
        var oFields = xfa.layout.pageContent(nPageCount, "field");
        var nNodesLength = oFields.length;
        for (var nNodeCount = 0; nNodeCount < nNodesLength; nNodeCount++) {
            oFields.item(nNodeCount).access = "readOnly";
        }
    }
    How could I do something similar to that so I could look through each field and perform actions on it without having to list out every single field name?
    I tried altering that to look at fields instead of field properties, but I couldn't get it to run.
    Thanks.

    If this is an LCD form then you're better off asking over at the LCD forum.

  • Most efficient way to loop through similarly named fields?

    Hi,
    I have a 5 page document, with each page containing approximately 50 similarly named fields, e.g. Viol1Num, Viol2Num, Viol3Num ... Viol50Num.
    I am looking for an efficient way of programming a loop to look at each field in Javascript so I can do some manipulations in those fields on what the user entered.
    In FormCalc I've previously used the 'foreach' function similar to:
    foreach (Field1, Field2, Field3.....Field50) do
         'BLAH'
    endfor
    however, that gets really lengthy, especially when dealing with subsequent pages where I have to start adding 'topmostSubform.Page2.' in front of each field name so that I can access from the first page all of the fields on subsequent pages.  Also, I need to do this in Javascript, not FormCalc.
    For example, in JS I am using this loop to mark all fields as read only:
    for (var nPageCount = 0; nPageCount < xfa.host.numPages; nPageCount++) {
        var oFields = xfa.layout.pageContent(nPageCount, "field");
        var nNodesLength = oFields.length;
        for (var nNodeCount = 0; nNodeCount < nNodesLength; nNodeCount++) {
            oFields.item(nNodeCount).access = "readOnly";
        }
    }
    How could I do something similar to that so I could look through each field and perform actions on it without having to list out every single field name?
    I tried altering that to look at fields instead of field properties, but I couldn't get it to run.
    Thanks.

    I have solved my issue.   It took some battling in javascript using xfa.resolveNode.
    I have 5 pages, each consisting of a series of 60 fields named Viol1Num, Viol2Num, Viol3Num .... Viol60Num.
    When this javascript runs, if it detects a blank field it inserts a '3' into it.
    The below is the javascript which runs for the second page of this document.
    var LoopCounter = 1;
    while (LoopCounter < 61) {
        if ((LoopCounter != 21) && (LoopCounter != 22)) {
            if ((xfa.resolveNode("topmostSubform.Page2.Viol" + LoopCounter + "Num").rawValue == null) ||
                (xfa.resolveNode("topmostSubform.Page2.Viol" + LoopCounter + "Num").rawValue == "")) {
                xfa.resolveNode("topmostSubform.Page2.Viol" + LoopCounter + "Num").rawValue = 3;
            }
        }
        LoopCounter = LoopCounter + 1;
    }

  • Most efficient way to insert into a story with many floating images?

    I have a document with many floating images. They must all be floating because otherwise I do not get the Caption numbering right and it is impossible (because some images take an entire page) to use only anchored images.
    Now, I have to insert a large part into the middle / at the beginning of a section. If I do that, the text will flow, but the images remain in place. This is extremely slow work because of all the movements I have to make on all the images. What I would like is for the pages after the page where I am inserting to remain the same, and when I insert text and images everything should just move up one page at a time. Then, at the end, I can do the fine tuning of attaching both parts again. Is there a way to do that efficiently?
    What I now often do is move all the floating images out of the pages, then insert the new stuff, move the images back, and then make sure all the references are fine (e.g. if they refer to an image on another page, the style becomes paragraph number + page number). For some sections this is a hideous amount of work (many, many images).
    Is there a smarter way. I have been thinking about splitting and later rejoining a section. If I add pages to one section, the following sections are not damaged, after all.
    What is the best way to do this?
    Thanks in advance.


  • Most efficient way to make an ordered list?

    Hi all,
    I have a simulation where I have agents running around in an environment. These agents need to get the closest agent near them, and they have to do this a whole bunch of times each cycle -- closest predator to them, closest prey, closest mate, next-closest mate if that one isn't interested, etc. etc.
    I realized that it would be faster if I just got all the agents in the vicinity ONCE, kept them in a sorted list, and then every time you want the closest agent of a certain type, just search from the bottom of the list up until you find the first one of that type.
    So here's the question: to make that list, I ask the environment for all the agents in the vicinity, and then go through them one-by-one. I find out the distance between myself and that agent. I then.... what?
    I could put them all into an ArrayList, and then apply some sorting algorithm to that ArrayList. Or I can try to insert them in the right order WHILE I'm making the ArrayList. Or maybe there's some better Collection object that would be even more efficient -- somehow pushing them in and out of stacks, or whatever else smart programmers think up.
    Can anyone suggest the most efficient way to do this? This is something that every agent has to do every step, so efficiency is key.
    Thanks!
    Edit: As a note, calculating the distance between two agents isn't free, and if I either sort or insert as I'm making it, the naive implementation (i.e. the way I would do it...) would require re-checking this distance for every agent in the list every single time a new agent was added. So maybe I could make some use of a HashMap, so that I can store these distances?
    Edited by: TimQuinn on Oct 15, 2009 9:35 AM

    TimQuinn wrote:
    Ok, thanks for all the great suggestions. I think that caching the distance in a wrapper object is a big plus, and then I can run some tests and see if using a built-in collections sorting algorithm or using a treeset is faster in my specific case. Thanks!
    Any thoughts as to the idea of creating one giant map of the distance from each agent to every other agent just once, rather than having each agent work out their own distances to each other agent? I feel like this would be faster (at the expense of memory), but don't know how I'd start approaching it.
    Well, your idea of the Map would probably work. You would have to make some object that pairs up two agents, something like this:
    public class AgentPair {
       private final Agent a1, a2;
       public AgentPair(Agent a1, Agent a2) { this.a1 = a1; this.a2 = a2; }
       public boolean equals(Object other) {
          if (!(other instanceof AgentPair)) return false;
          AgentPair p = (AgentPair) other;
          // if distance is a symmetric operation (a.distanceTo(b) == b.distanceTo(a))
          // then the reverse pair should also be equal
          return (a1.equals(p.a1) && a2.equals(p.a2)) || (a1.equals(p.a2) && a2.equals(p.a1));
       }
       public int hashCode() {
          // order-independent, so that reversed pairs hash alike
          return a1.hashCode() + a2.hashCode();
       }
    }
    This assumes either that you don't override equals or hashCode in Agent, or that you properly override both.
    Then you would have a map that you can populate, given an array of all agents:
    Map<AgentPair, Double> distMap = new HashMap<AgentPair, Double>();
    Agent[] agents;
    for ( int i = 0; i < agents.length - 1; i++ ) {
       for ( int j = i + 1; j < agents.length; j++ ) {
          distMap.put(new AgentPair(agents[i], agents[j]), agents[i].distanceTo(agents[j]));
       }
    }
    Alternatively, if you're not sure you'll definitely use all of the computations, you could instead test whether the computation result is already in the map, and if not, perform it; otherwise, use the cached result.
    Any thoughts? Should I make a new post? Abandon the idea?
    As always, try it and see. Essentially, it will be faster assuming these two conditions:
    1) You would have to do more than n^2 (well, actually n choose 2) distance calculations otherwise, and
    2) Computing the distance costs more than retrieving from distMap (this isn't a given!). If each Agent had a sequential ID, you could use a two-dimensional array of doubles, which would speed up lookups.
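    For illustration, here is a minimal sketch of the wrapper-object idea mentioned above (names are hypothetical; it assumes the Agent class from this thread with a distanceTo() method):
    import java.util.*;

    // Sketch: wrap each nearby agent with its distance, computed exactly once,
    // then sort the list and scan it for the closest agent of a given type.
    class Neighbor implements Comparable<Neighbor> {
       final Agent agent;
       final double distance; // cached at construction, never recomputed

       Neighbor(Agent self, Agent other) {
          this.agent = other;
          this.distance = self.distanceTo(other);
       }

       public int compareTo(Neighbor o) {
          return Double.compare(distance, o.distance);
       }
    }
    // Usage (once per cycle):
    //   List<Neighbor> neighbors = new ArrayList<Neighbor>();
    //   for (Agent a : nearbyAgents) neighbors.add(new Neighbor(self, a));
    //   Collections.sort(neighbors); // O(n log n), no distance recomputation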
    Edited by: endasil on 15-Oct-2009 3:39 PM

  • Most efficient way to consume log files

    Hello everyone,
    I've been absent from the forums for a while but I'm back at it now...
    I have a question about the most efficient way to consume log files. I read in PowerShell in Action, by Bruce Payette, that using a switch statement with a regex works pretty well; that being said, I haven't tried it yet. Select-String is working pretty well for me, but I have about 10 different entry types that I need to search the logs for every 5 minutes, and I'm scanning about 15 GB of logs at every interval. Anyway, if anyone has information about how to do something like that as quickly as possible, I'd appreciate it.
    1.  piping log files that meet my criteria to select-string
       - This seems to work well but I don't like searching the same files over and over again
    2. running logs through get-content and then building a filter statement
      - This is ok but it seems to use up a fair bit of memory
    3. Some other approach that I haven't thought of yet.
    Anyway, I know this is a relatively nebulous question, sorry about that. I'm hoping that someone on here knows a really good way to find strings in log files quickly.
    Hope that helps! Jason

    You can sometimes squeeze out more speed at the expense of memory usage, but filters are pretty fast. I don't see a benefit to holding the whole file in memory, in this case.
    As I mentioned earlier, though, C# code will usually blow PowerShell away in terms of execution time.  Here's a rewrite of what I just did (just for the INI Section pattern, to keep the post size down):
    $string = @'
    #Comment Line
    [Ini-Style Section Line]
    Key = Value Line
    192.168.0.1 localhost
    Some line that doesn't match anything.
    '@
    Set-Content -Path .\test.txt -Value $string

    Add-Type -TypeDefinition @'
    using System;
    using System.Text.RegularExpressions;
    using System.Collections;
    using System.IO;

    public interface ILineParser
    {
        object ParseLine(string line);
    }

    public class IniSection
    {
        public string Section;
    }

    public class IniSectionParser : ILineParser
    {
        public object ParseLine(string line)
        {
            object o = null;
            Match match = Regex.Match(line, @"^\s*\[([^\]]+)\]\s*$");
            if (match.Success)
            {
                o = new IniSection() { Section = match.Groups[1].Value };
            }
            return o;
        }
    }

    public class LogParser
    {
        public static IEnumerable ParseFile(string fileName, ILineParser[] lineParsers)
        {
            using (StreamReader sr = File.OpenText(fileName))
            {
                string line;
                while ((line = sr.ReadLine()) != null)
                {
                    foreach (ILineParser parser in lineParsers)
                    {
                        object result = parser.ParseLine(line);
                        if (result != null)
                        {
                            yield return result;
                        }
                    }
                }
            }
        }
    }
    '@

    $parsers = @(
        New-Object IniSectionParser
    )
    $results = [LogParser]::ParseFile("$pwd\test.txt", $parsers)
    $results
    Instead of defining separate classes for each type of line and output object, you could probably do something more generic with delegates (similar to how I used ScriptBlock.Invoke() in the PowerShell example), but it might sacrifice some speed to do so.

  • Most efficient way to get document names?

    I was wondering what is the most efficient way to get the document names in a container? Use the built in 'name' index somehow, or is there an 'efficient' XPath/XQuery?
    We've been using the XPath /* which is fine with small instances, but causes a Java heap out-of-memory error on large XML instances, i.e. /* gets everything, which is not ideal when all we want are document names.
    Thx in advance,
    Ant

    Hi Antony,
    Here is an example of retrieving the document names in C++:
    void doQuery(XmlContainer &container,
                 XmlQueryContext &context,
                 const std::string &XPath)
    {
        XmlResults results(container.queryWithXPath(0, XPath, &context));
        // Iterate through the result set as is normal
        XmlDocument theDocument;
        while (results.next(theDocument)) {
            std::cout << "Found document named: "
                      << theDocument.getName() << std::endl;
        }
    }
    Regards,
    Bogdan Coman

  • Most efficient way to do multiple crops on many images?

    I have a large number of images shot in the default 4:3 aspect ratio. I need to print almost 200 as 4x6, and an undetermined but certainly large number as 8x10, so I have a lot of cropping to do. What would seasoned Aperture users suggest as the most efficient way to do this? I've thought of two possibilities:
    1. Duplicate every image I need to print in both sizes and crop one for each size print. This is the best option I've thought of, but it would certainly eat a lot of drive space.
    2. Do all the 8x10 crops, revert to original, and do the 4x6 crops. Saves disk space, but leaves me with only the 4x6 crop in Aperture. (Sounds like I want to have my cake and eat it too, I suppose.)
    Anyway, there are a lot of you out there who have logged a lot more Aperture hours than I have. Is there a better workflow I have not considered?
    Thanks,
    Ben

    Hello Ben,
    the beauty of Aperture is that you can have many versions of an image without needing much extra disk space.
    To have three versions of an image cropped to different aspect ratios, don't create duplicate master images but use (from the Aperture main menu) "Photos -> Duplicate Version" or "New Version from Master". Then crop this new version to a different aspect ratio. Aperture will not really render a new image file but just store the cropping rectangle, to be able to create the cropped image when you export or print it.
    So you can have an original version, an 8x10 version, and a 4x6 version in your library without needing much extra space - that is one of the rare occasions when you can have your cake and eat it too.
    Regards
    Léonie

Maybe you are looking for

  • How to add new fields to picklist of search criteria for opportunities

    Hi Friends, Could you please tell me how to add new fields to the picklist of search criteria of Opportunities in WEBCLIENT(CRM 2007). Regards, Vijay

  • My First CSS Site

    Hi everyone, I have created my first CSS based page and was wondering if people could take a look at it and let me know if there are any "Best Practices" I should be aware of that will help streamline my code. The link is at http://jjcstudios.com/woo

  • Condition Records in COPA

    Hello, I want to know the Tcode where we load the condition records into condition types in COPA. Nettem.

  • Invoking JDialog on every exception

    Hello, in my application I want a feature that would show a dialog when an exception (checked or unchecked) is thrown. The simplest solution - putting all of main() in a try/catch block, and adding dialog code in every catch - is not very good I

  • How to close/delete open reservations

    Hi experts, kindly provide me a solution to close all pending/open reservations up to some selected period. Is there any setting in customizing by which we can set a time frame for reservations of 1 week or 2-3 days, so that if a reservation is not used then