Most efficient way to constantly read, queue, and parse multi-sized RS232 data (multi-threaded)

I've tried tackling this problem a few different ways, and figured it was time to get some others' advice.  My system essentially works, although it looks like a hack job and I'm not entirely confident in it.
My RS-232 connection has the following properties/constraints:
-Will be getting unsolicited data at a high data rate.  (Yes, that's subjective, but assume a near-constant stream at whatever baud rate it's set to, up to 115200.)
-Received data segments vary in length.
-Two stop bytes (0x10 0x03), while the start byte is 0x10.  (Byte stuffing/packing is implemented; see the example below.)
-There is a size byte within each packet (3rd byte in); however, I'm currently relying on the stop bytes only.
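To make the framing concrete, here is what a packet whose payload contains a 0x10 looks like on the wire (command and payload bytes made up for illustration; what the size byte counts depends on the protocol):

0x10 0xAA              start byte plus command byte
0x04                   size byte (3rd byte in)
0x01 0x10 0x10 0x02    payload; the payload 0x10 is doubled (stuffed)
0x10 0x03              stop bytes

So the parser has to treat a doubled 0x10 as data, not as the start of the stop sequence.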
I have tried the ComCallback within CVI, only to find that it is VERY slow at processing events compared to implementing the handling manually in its own thread.  In addition, it can only trigger on one stop byte, not two.  Triggering on queue size is sometimes okay, but I found it could fire on only part of the data: by the time the callback ran, the queue length had grown, and sometimes I would read only part of a data packet while half of my segment was still in the queue.  And sometimes I would get semaphore locks and lots of waiting; it was a mess (hence you will see lots of CMT locks commented out).
I tried implementing a FIFO-type queue (copied below), but I have very little experience doing this; the way I implemented it may not be very efficient, and I could definitely use some advice in this area, as well as on how thread safe everything is.
I thought about a circular buffer, but since the data segments can be different lengths, it's difficult to cleanly wrap around and read.  I think it's still possible, it just may require additional checks that I haven't seen done anywhere when searching Google (which made me think a FIFO queue was better).  Something like the rough sketch below is what I had in mind.
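Rough, untested sketch of the kind of circular buffer I was picturing; the modulo indexing is what would let a packet straddle the wrap point, but it's also where I expect the extra checks to get fiddly:

#define RB_SIZE 4096

typedef struct {
    unsigned char data[RB_SIZE];
    int head;    //next write position
    int tail;    //next read position
} RingBuf;

//number of unread bytes in the buffer
static int rb_count (const RingBuf *rb)
{
    return (rb->head - rb->tail + RB_SIZE) % RB_SIZE;
}

//append one byte (caller must check rb_count() < RB_SIZE - 1 first)
static void rb_put (RingBuf *rb, unsigned char b)
{
    rb->data[rb->head] = b;
    rb->head = (rb->head + 1) % RB_SIZE;
}

//peek at the byte i positions past the tail without consuming it,
//so a variable-length packet can be scanned for 0x10 0x03 in place
static unsigned char rb_peek (const RingBuf *rb, int i)
{
    return rb->data[(rb->tail + i) % RB_SIZE];
}

//consume n bytes once a complete packet has been found and copied out
static void rb_consume (RingBuf *rb, int n)
{
    rb->tail = (rb->tail + n) % RB_SIZE;
}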
So if anyone has any good suggestions or examples, that would be great.  I'm using LabWindows/CVI 2010.
//In Main, before the GUI is loaded
programRunning = 1;
CmtScheduleThreadPoolFunction (DEFAULT_THREAD_POOL_HANDLE, ComCallback, NULL, &funcID);

/* Function used to parse COM data (readBuf, com_open, comPort, etc. are globals defined elsewhere) */
static int ComCallback (void *callbackData)
{
    static int bufLen = 0;
    int strLen, qLen = 0;
    unsigned char tempBuf[1024];
    int packet_length = 0;
    unsigned char *start_ptr;
    int i;
    int start_offset;

    while (programRunning) {
        //First, copy all available data from the COM port into the working buffer
        qLen = 0;
        if (com_open)
            qLen = GetInQLen (comPort);
        if (qLen <= 0) {
            ProcessSystemEvents();
            continue;
        }
        if (qLen > 1024 - bufLen)
            qLen = 1024 - bufLen;                    //clamp the read so readBuf cannot overflow
        strLen = ComRd (comPort, (char *)tempBuf, qLen);
        //CmtGetLock(lock);
        memcpy (readBuf + bufLen, tempBuf, strLen);
        bufLen += strLen;
        //CmtReleaseLock(lock);

        //Wait until the data ends in the stop bytes, or until we think we
        //have at least 1 or 2 packets worth of data to process
        if ((strLen < 2 || tempBuf[strLen-2] != 0x10 || tempBuf[strLen-1] != 0x03) && bufLen < 100)
            continue;

        //Ensure the start pointer is at the beginning of a command (0x10 0xAA),
        //in case we started reading in the middle of a packet
        i = 0;
        while (i < bufLen - 1 && (readBuf[i] != 0x10 || readBuf[i+1] != 0xAA))
            i++;
        start_offset = i;

parse_some_more:
        start_ptr = readBuf + start_offset;
        //Process one packet at a time: scan for the 0x10 0x03 stop sequence,
        //unstuffing doubled 0x10 bytes along the way
        for (i = start_offset; i < bufLen - 1; i++) {
            if (readBuf[i] == 0x10) {
                if (readBuf[i+1] == 0x03) {
                    i += 2;                          //include the stop bytes in the packet
                    break;
                }
                else if (readBuf[i+1] == 0x10) {     //two 0x10s in a row: remove the stuffed byte
                    memmove (&readBuf[i], &readBuf[i+1], bufLen - i - 1);
                    bufLen--;
                }
            }
        }

        //At this point, we should have a full packet. What if we don't....???
        packet_length = i - start_offset;
        //CmtGetLock(lock);
        ParseResponse (start_ptr, packet_length);
        PostPacketToOutputBuffer (start_ptr, packet_length);
        if (start_ptr[0] == 0x10 && start_ptr[1] == 0xAA)
            com_Send_Acknowledge (comPort);
        start_offset += packet_length;
        if (start_offset < bufLen - 1)
            goto parse_some_more;

        //For now, assume we don't care about anything else left in the buffer.
        //Probably shouldn't: should this be bufLen -= start_offset? If so,
        //partial packets need to be handled better (timeout??)
        bufLen = 0;
        //CmtReleaseLock(lock);
    }
    return 0;
}
Thanks!

Hi ngay528,
I think there is a great example for you to use that comes with CVI, which can be found by clicking Find Examples on the splash screen or Help >> Find Examples in your project. From there, click into Optimizing Applications >> Multithreading. In that folder there is a project called BuffNoDataLoss that shows how to create a thread safe queue and set up a producer/consumer type program. In this example the data is a random sine wave, but it could be adapted to your RS232 data. If you have any questions concerning this example please let me know, but this should be a great starting point.
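For reference, the producer/consumer shape of that example looks roughly like the sketch below. This is an outline from memory rather than the example itself, so please verify the exact TSQ function signatures, options, and constants against the shipped project and the function panels:

#include <utility.h>

#define QUEUE_SIZE 4096

static CmtTSQHandle tsq;
static volatile int programRunning = 1;

//Producer: do nothing but move bytes from the COM port into the
//thread safe queue, so nothing is lost while parsing is busy
static int CVICALLBACK ReaderThread (void *callbackData)
{
    unsigned char buf[256];
    int n;
    while (programRunning) {
        n = GetInQLen (comPort);
        if (n > 0) {
            if (n > (int)sizeof buf) n = sizeof buf;
            n = ComRd (comPort, (char *)buf, n);
            CmtWriteTSQData (tsq, buf, n, TSQ_INFINITE_TIMEOUT, NULL);
        }
        else
            Delay (0.001);
    }
    return 0;
}

//Consumer: pull bytes out of the queue, accumulate until a full
//0x10 0x03-terminated packet is present, then unstuff and parse it
static int CVICALLBACK ParserThread (void *callbackData)
{
    unsigned char packetBuf[1024];
    int have = 0, got;
    while (programRunning) {
        got = CmtReadTSQData (tsq, packetBuf + have,
                              sizeof packetBuf - have, 100, 0);
        if (got > 0) {
            have += got;
            //scan packetBuf for the stop bytes, unstuff doubled 0x10s,
            //call ParseResponse(), then shift any leftover bytes down
        }
    }
    return 0;
}

//In main, before the GUI loads:
//CmtNewTSQ (QUEUE_SIZE, sizeof(unsigned char), 0, &tsq);
//CmtScheduleThreadPoolFunction (DEFAULT_THREAD_POOL_HANDLE, ReaderThread, NULL, NULL);
//CmtScheduleThreadPoolFunction (DEFAULT_THREAD_POOL_HANDLE, ParserThread, NULL, NULL);

The key point is that the reader thread only empties the port into the queue, while the parser can take as long as it needs on a packet without risking a receive-buffer overrun.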
Patrick H | National Instruments | Software Engineer

Similar Messages

  • What is the best, most efficient way to read a .xls File and create a pipe-delimited .csv File?

    What is the best and most efficient way to read a .xls File and create a pipe-delimited .csv File?
    Thanks in advance for your review and am hopeful for a reply.
    ITBobbyP85

You should have no trouble doing this in SSIS. Simply add a data flow with connection managers for an existing .xls file (Excel connection manager) and a new .csv file (flat file). Add a source for the .xls and a destination for the .csv, and set the destination's "delay validation" property to true. Use an expression to define the name of the new .csv file.
In the flat file connection manager, set the column delimiter to the pipe character.

  • Just got girlfriend a new iPad2. Her iMac is a PowerPC G5 (Tiger version 10.4.11) with 512 mb RAM. What's the simplest, most efficient way to get her iPad2 up and running and synced to her Mac?

    Just got girlfriend a new iPad2. Her iMac is a PowerPC G5 (Tiger version 10.4.11) with 512 mb RAM. What's the simplest, most efficient way to get her iPad2 up and running and synced to her Mac?

Most of the Apple Store sales people and some of the Genius Bar people are only knowledgeable about Apple's more recent offerings. They are not very knowledgeable, I found, about older PowerPC-based Apple computers, I'm afraid.
    Here's the real scoop.
    Your girlfriend's G5 can only install up to OS X 10.5 Leopard. This is the last compatible OS X version for PowerPC users.
    OS X 10.6 Snow Leopard and OS X10.7 Lion are for newer Intel CPU Apple computers.
    Early iMac G5's can only have up to 2 GBs of RAM.
    Later iMac G5's (2005-2006) could take up to 2.5 GBs of RAM
    2 GBs of RAM will run OS X 10.5 Leopard just fine.
    The very latest iTunes (10.5.2) can be installed and runs on both PowerPC and Intel CPU Macs.
However, there are certain new iTunes features that won't work without an Intel Mac.
One such iOS/iTunes feature is syncing wirelessly over WiFi.
This will not work unless you have an iDevice running iOS 5 and an Intel Mac running 10.6 Snow Leopard or better.
Although I was disappointed that I would not be able to do this with my G4 Mac, it's not a big problem for me.
    So, your girlfriend's computer should be fine for what she intends to use it for.
The Apple people either just plain didn't know or were trying to get you to think about buying a new Mac.
    At least, as of now, not truly necessary.
    If Apple, at some later point, drops support for PowerPC users running 10.5, then would be the time to consider a new or "newer" Intel CPU Mac.
My planned Mac computer upgrade is to seek out a "newer" last-version G5 as my "new" Mac.
    I can't afford, right now, to replace all of my core PowerPC software with Intel versions, so I need to stick with the older PowerPC Macs for the time being. The last of the G5's is what I seek.

  • Most efficient way to delete "removed" photos from hard disk?

    Hello everyone! Glad to have this great community to come to for help. I searched for this question but came up with no hits. If it's already been discussed, I apologize and would love to be directed to the link.
My wife and I have been using LR for a long time. We're currently on version 4. Unfortunately, she's not as tech-savvy or meticulous as I am, and she has been unknowingly "Removing" photos from the LR catalogues when she really meant to delete them from the hard disk. That means we have hundreds of unwanted raw photo files floating around on our computer and no way to pick them out from the ones we want! As a very organized and space-conscious person, I can't stand the thought. So my question is: what is the most efficient way to permanently delete these unwanted photos from the hard disk?
I did find one suggestion that said to synchronize the parent folder with their respective catalogues, select all the photos in "Previous Import," and delete those, since they will be all of the photos that were previously removed from the catalogue.
    This is a great suggestion, but it probably wouldn't work for all of my catalogues since my file structure is organized by date (the default setting for LR). So, two catalogues will share the same "parent folder" in the sense that they both have photos from May 2013, but if I synchronize May 2013 with one, then it will get all the duds PLUS the photos that belong in the other catalogue.
    Does anyone have any suggestions? I know there's probably not an easy fix, and I'm willing to put in some time. I just want to know if there is a solution and make sure I'm working as efficiently as possible.
    Thank you!
    Kenneth

    I have to agree with the comment about multiple catalogs referring to images that are mixed in together... and the added difficulty that may have brought here.
    My suggestions (assuming you are prepared to combine the current catalogs into one)
    in each catalog, put a distinctive keyword onto all the images so that you can later discriminate these images as to which particular catalog they were formerly in (just in case this is useful information later)
    as John suggests, use File / "Import from Catalog" to bring all LR images together into one catalog.
then, in order to separate out the image files that ARE imported to LR from those which either never were or have since been removed, I would duplicate just the imported ones to an entirely separate and dedicated disk location. This may require the temporary use of an external drive, with enough space for everything.
    to do this, highlight all the images in the whole catalog, then use File / "Export as Catalog" selecting the option "include negatives". Provide a filename and location for the catalog inside your chosen new saving location. All the image files that are imported to the catalog will be selectively copied into this same location alongside the new catalog. The same relative arrangement of subfolders will be created there, for them all to live inside, as is seen currently. But image files that do not feature in LR currently, will be left behind by this operation.
    your new catalog is now functional, referring to the copied image files. Making sure you have a full backup first, you can start deleting image files from the original location, that you believe to be unwanted. You can do this safe in the knowledge that anything LR is actively relying on, has already been duplicated elsewhere. So you can be quite aggressive at this, only watching out for image files that are required for other purposes (than as master data for Lightroom) - e.g., the exported JPG files you may have made.
    IMO it is a good idea to practice a full separation of image files used in your LR image library, from all other image files. This separation means you know where it is safe to manage images freely using the OS, vs where (what I think of as the LR-managed storage area) you need to bear LR's requirements constantly in mind. Better for discrete backup, too.
    In due course, as required, the copied image files plus catalog can be moved bodily to another drive (for example, if they have been temporarily put on an external drive, and you want to store them on your main internal one again). This then just requires a single re-browsing of their parent folder's location, in order to correct LR's records inside this catalog, as to the image files' changed addresses.
    If you don't want to combine the catalogs into one, a similar set of operations as above, can be carried out for each separate catalog you have now. This will create a separate folder structure in each case, containing just those duplicated image files. Once this has been done for all catalogs, you can start to clean up the present image files location. IMO this is very much the laborious and inflexible option, so far as future management of the total body of images is concerned... though there may still be some overriding reason for working that way.
    RP

  • Most efficient way to open a new TextEdit doc at a specific place

    Suppose I've navigated in the Finder to some deep dark location in the folder hierarchy...   I want a new text file titled "notes" in this folder, i.e. to this path.
    Assuming TextEdit is already open, what is the most efficient way of creating and begin adding text to a new TextEdit file in that location?
    Tried this:  Maybe the usual Save dialog is aware of the current folder, so I can choose "New Document" in TextEdit's Dock icon pulldown, and then choose that path when I do File --> Save, choose the Save dialog's expanded view, and pull down the Where: selection.  Nope.  The current path is not there.
Tried this:  Keeping an empty TextEdit document named untitled on my desktop. Drag-copying that to the current folder. Renaming the file to "notes", opening it, and starting to edit. That works, except it is clumsy.
    Is there a better way?
    Please forgive me if I'm missing something incredibly obvious.

The problem with that is a new file has nothing in it.
If you save a dummy file on the desktop someplace, or use a real file in each of your locations, you could right-click and duplicate it, then double-click to open it in the program of choice.
Another option would be to create an AppleScript that takes the current open window's pathname and creates and saves a text file there.
You save the app in the Dock and only have to click on it once; it automatically quits when its mission is accomplished.

  • Most efficient way to load XML file data into tables

I have a complex XML file running into MBs. I want to load its data into 7-8 tables.
    Which way will be better:
    1) Use SQL Loader to actually load directly into the 7-8 tables directly by modifying the control card.
    Is this really possible and feasible? I am not even sure about it
    2) Load data as XML Type in a table and register it. Then extract from there to load into various tables.
    Please help. I have to find the most efficient way of doing it.
    Regards,
    Sudhir

Yes, it is possible to use SQL*Loader to parse and load XML, but that is not what it was designed for, and so it is not recommended. You also don't need to register a schema just to load/store/parse XML in the DB.
    So where does that leave you?
    Some options
    {thread:id=410714} (see page 2)
    {thread:id=1090681}
    {thread:id=1070213}
    Those talk some about storage options and reading in XML from disk and parsing XML. They should also give you options to consider. Without knowing more about your requirements for the effort, it is difficult to give specific advice. Maybe your 7-8 tables don't exist and so using Object Relational Storage for the XML would be the best solution as you can query/update tables that Oracle creates based off the schema associated to the XML. Maybe an External Table definition works better for reading the XML into the system because this process will happen just once. Maybe using WebDAV makes more sense for loading XML to be parsed (I don't have much experience with this, just know it is possible from what I've read on the forums). Also, your version makes a difference as you have different options available depending upon the version of Oracle.
    Hope all that helps as a starter.
    Edited by: A_Non on Jul 8, 2010 4:31 PM
    A great example, see the answers by mdrake in {thread:id=1096784}

  • Most efficient way to do some string manipulation

    Greetings,
    I need to cleanse some data in a string by replacing unsafe characters with encoded equivalents. (FYI, this is for the purpose of transforming "unsafe" characters into encoded values as data inside an XML document).
    The following code accomplishes the task:
    Note that a string "currentValue" contains the data to be cleansed.
    A string, "encodedValue" contains the result.
  for (counter = 0; counter < currentValue.length(); counter++) {
    addChar = currentValue.substring(counter, counter+1);
    if (addChar.equals("<"))
      addChar = "#60;";
    if (addChar.equals(">"))
      addChar = "#62;";
    if (addChar.equals("="))
      addChar = "#61;";
    if (addChar.equals("\""))
      addChar = "#34;";
    if (addChar.equals("'"))
      addChar = "#39;";
    if (addChar.equals("/"))
      addChar = "#47;";
    if (addChar.equals("\\"))
      addChar = "#92;";
    encodedValue += addChar;
  } // for
I'm sure there is a way to make this more efficient. I'm not exactly "new" to Java, but I am learning on my own with no formal training and often take a "brute force" approach with my initial effort.
    What would be the most efficient way to re-do the above?
    TIA,
    --Paul Galvin
    Integrated Systems & Services Group

I'm a C++ programmer, so I'm not totally up on these Java classes either, but... from a C++ standpoint you might want to consider using an if/else chain.
By using if/else, you only test the character until you find the actual "violating" character, and skip the rest of the tests.
Also, you might try using something to check for alphanumeric cases first and use the continue keyword when you find one. Since more of your characters are probably safe than unsafe, you can skip the whole if/else chain and do only one test on the good characters. (I just looked for a way to test that and I didn't find one. C++ has a function that does it by checking the ASCII number range; I don't think that works in Java, but maybe you can find one. It would probably reduce the number of tests.)
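To show what I mean, here's the idea in C (a rough sketch using isalnum() from <ctype.h>; Java may well have an equivalent test, but you'd have to look it up):

#include <ctype.h>
#include <stdio.h>
#include <string.h>

//Encode unsafe characters, testing the common (safe) case first so most
//characters are handled with a single test
static void encode (const char *in, char *out, size_t outSize)
{
    size_t used = 0;
    size_t i;
    for (i = 0; in[i] != '\0'; i++) {
        unsigned char c = (unsigned char) in[i];
        char single[2] = { (char) c, '\0' };
        const char *rep;

        if (isalnum (c))        rep = single;   //safe: one test, done
        else if (c == '<')      rep = "#60;";
        else if (c == '>')      rep = "#62;";
        else if (c == '=')      rep = "#61;";
        else if (c == '"')      rep = "#34;";
        else if (c == '\'')     rep = "#39;";
        else if (c == '/')      rep = "#47;";
        else if (c == '\\')     rep = "#92;";
        else                    rep = single;   //anything else passes through

        if (used + strlen (rep) < outSize) {
            memcpy (out + used, rep, strlen (rep));
            used += strlen (rep);
        }
    }
    out[used] = '\0';
}

int main (void)
{
    char buf[256];
    encode ("a < b / c", buf, sizeof buf);
    printf ("%s\n", buf);    //prints: a #60; b #47; c
    return 0;
}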
    happy hunting,
    txjump :)

  • Most efficient way to place images

I am composing a catalog with a lot of images along with the text.  The source images are often not square (perfectly vertical, portrait).  I also want to add a thin line frame around each one in InDesign to spruce up the look.  I'm spending a lot of time in Photoshop straightening images, because rotating in InDesign to get the image straight results in a non-straight frame.
    Should I create a small library of frames that I place, then place non-straight images in them (and how do I do that) and rotate in InDesign?  Etc?
    What would be the most efficient way to do this?
    Thanks

To tag onto what Peter said, when you click on the image with the Direct Selection tool you can also use the up and down arrows in the rotation dialog (where you enter the angle, at the top) to easily change the rotation.
    Also, when you place images in InDesign you can select a number of images at once and continually click the document (or image frame) and place all the images you selected to import. To clarify, you can have a whole bunch of empty image frames on the page then go to file > place and select all your images, then continually click and place them inside each empty frame.

  • Most efficient way to make an ordered list?

    Hi all,
    I have a simulation where I have agents running around in an environment. These agents need to get the closest agent near them, and they have to do this a whole bunch of times each cycle -- closest predator to them, closest prey, closest mate, next-closest mate if that one isn't interested, etc. etc.
I realized that it would be faster if I just got all the agents in the vicinity ONCE, kept them in a sorted list, and then every time you want the closest agent of a certain type, just search from the bottom of the list up until you find the first one of that type.
    So here's the question: to make that list, I ask the environment for all the agents in the vicinity, and then go through them one-by-one. I find out the distance between myself and that agent. I then.... what?
    I could put them all into an ArrayList, and then apply some sorting algorithm to that ArrayList. Or I can try to insert them in the right order WHILE I'm making the ArrayList. Or maybe there's some better Collection object that would be even more efficient -- somehow pushing them in and out of stacks, or whatever else smart programmers think up.
    Can anyone suggest the most efficient way to do this? This is something that every agent has to do every step, so efficiency is key.
    Thanks!
    Edit: As a note, calculating the distance between two agents isn't free, and if I either sort or insert as I'm making it, the naive implementation (i.e. the way I would do it...) would require re-checking this distance for every agent in the list every single time a new agent was added. So maybe I could make some use of a HashMap, so that I can store these distances?
    Edited by: TimQuinn on Oct 15, 2009 9:35 AM

    TimQuinn wrote:
    Ok, thanks for all the great suggestions. I think that caching the distance in a wrapper object is a big plus, and then I can run some tests and see if using a built-in collections sorting algorithm or using a treeset is faster in my specific case. Thanks!
Any thoughts as to the idea of creating one giant map of the distance from each agent to every other agent just once, rather than having each agent work out their own distances to each other agent? I feel like this would be faster (at the expense of memory), but don't know how I'd start approaching it.
Well, your idea of the Map would probably work. You would have to make some object that pairs up two agents, something like this:
public class AgentPair {
   private final Agent a1, a2;
   public AgentPair(Agent a1, Agent a2) { this.a1 = a1; this.a2 = a2; }
   public boolean equals(Object other) {
      //if distance is a symmetric operation (a.distanceTo(b) == b.distanceTo(a)) then the reverse pair should also be equal
      if (!(other instanceof AgentPair)) return false;
      AgentPair p = (AgentPair) other;
      return (a1.equals(p.a1) && a2.equals(p.a2)) || (a1.equals(p.a2) && a2.equals(p.a1));
   }
   public int hashCode() {
      //order-independent, so reversed pairs hash the same
      return a1.hashCode() + a2.hashCode();
   }
}
This assumes either that you don't override equals or hashCode in Agent, or that you properly override both.
Then you would have a map that you can populate, given an array of all agents:
Map<AgentPair, Double> distMap = new HashMap<AgentPair, Double>();
Agent[] agents;
for ( int i = 0; i < agents.length-1; i++ ) {
   for ( int j = i+1; j < agents.length; j++ ) {
      distMap.put(new AgentPair(agents[i], agents[j]), agents[i].distanceTo(agents[j]));
   }
}
Alternatively, if you're not sure you'll definitely use all of the computations, you could instead test whether the computation result is already in the map, and if not, perform it; otherwise, use the cached result.
Any thoughts? Should I make a new post? Abandon the idea?
As always, try it and see. Essentially, it will be faster assuming these two conditions:
1) You would have to do more than n^2 (well, actually n choose 2) distance calculations otherwise, and
2) Computing the distance costs more than retrieving from distMap (this isn't a given!). If each Agent had a sequential ID, you could do a two-dimensional array of doubles, which would speed up lookups.
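For illustration, the sequential-ID idea could look something like this (a rough sketch in C rather than Java, with a made-up Agent type and a fixed agent count, just to show the table shape):

#define N_AGENTS 100

//Hypothetical agent with a position; stands in for whatever Agent really is
typedef struct { double x, y; } Agent;

static double distance (const Agent *a, const Agent *b)
{
    double dx = a->x - b->x, dy = a->y - b->y;
    return dx * dx + dy * dy;    //squared distance is enough for comparisons
}

//One table, built once per step; a lookup is then a plain array index
static double dist[N_AGENTS][N_AGENTS];

static void buildDistanceTable (const Agent agents[])
{
    int i, j;
    for (i = 0; i < N_AGENTS - 1; i++) {
        for (j = i + 1; j < N_AGENTS; j++) {
            double d = distance (&agents[i], &agents[j]);
            dist[i][j] = d;
            dist[j][i] = d;      //symmetric, so fill both halves
        }
    }
}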
Edited by: endasil on 15-Oct-2009 3:39 PM

  • Most Efficient Way to Populate My Column?

    I have several very large tables, some of them are partitioned tables.
    I want to populate every row of one column in each of these tables with the same value.
    1.] What's the most efficient way to do this given that I cannot make the table unavailable to the users during this operation?
    I mean, if I were to simply do:
    update <table> set <column>=<value>;
    then I think I'll lock every row for writing by others until the commit. I figured there might be another way that makes better sense in my case.
    2.] Are there any optimizer hints I might be able to take advantage of here? I don't use hints much but with such a long running operation I'll take any help I can get.
    Thank you

1. Maybe a better solution exists.
Since you do not want to lock the table...
Save the ROWIDs of all the rows in that table in a temporary table.
Write a routine which loops through this temporary table, uses each ROWID to update the main table, and issues a commit at regular intervals.
However, this does not take into account rows added to the main table after the temporary table has been created.
    2. Not that I am aware of.

  • Most efficient way to consume log files

    Hello everyone,
I've been absent from the forums for a while, but I'm back at it now...
I have a question about the most efficient way to consume log files. I read in PowerShell in Action, by Bruce Payette, that using a switch statement with a regex works pretty well; that being said, I haven't tried it yet. Select-String is working pretty well for me, but I have about 10 different entry types that I need to search logs for every 5 minutes, and I'm scanning about 15 GB of logs at every interval. Anyway, if anyone has information about how to do something like that as quickly as possible, I'd appreciate it.
    1.  piping log files that meet my criteria to select-string
       - This seems to work well but I don't like searching the same files over and over again
    2. running logs through get-content and then building a filter statement
      - This is ok but it seems to use up a fair bit of memory
    3. Some other approach that I haven't thought of yet.
Anyway, I know this is a relatively nebulous question, sorry about that. I'm hoping that someone on here knows a really good way to find strings in log files quickly.
    Hope that helps! Jason

    You can sometimes squeeze out more speed at the expense of memory usage, but filters are pretty fast. I don't see a benefit to holding the whole file in memory, in this case.
    As I mentioned earlier, though, C# code will usually blow PowerShell away in terms of execution time.  Here's a rewrite of what I just did (just for the INI Section pattern, to keep the post size down):
$string = @'
#Comment Line
[Ini-Style Section Line]
Key = Value Line
192.168.0.1 localhost
Some line that doesn't match anything.
'@

Set-Content -Path .\test.txt -Value $string

Add-Type -TypeDefinition @'
using System;
using System.Text.RegularExpressions;
using System.Collections;
using System.IO;

public interface ILineParser
{
    object ParseLine(string line);
}

public class IniSection
{
    public string Section;
}

public class IniSectionParser : ILineParser
{
    public object ParseLine(string line)
    {
        object o = null;
        Match match = Regex.Match(line, @"^\s*\[([^\]]+)\]\s*$");
        if (match.Success)
        {
            o = new IniSection() { Section = match.Groups[1].Value };
        }
        return o;
    }
}

public class LogParser
{
    public static IEnumerable ParseFile(string fileName, ILineParser[] lineParsers)
    {
        using (StreamReader sr = File.OpenText(fileName))
        {
            string line;
            while ((line = sr.ReadLine()) != null)
            {
                foreach (ILineParser parser in lineParsers)
                {
                    object result = parser.ParseLine(line);
                    if (result != null)
                    {
                        yield return result;
                    }
                }
            }
        }
    }
}
'@

$parsers = @(
    New-Object IniSectionParser
)

$results = [LogParser]::ParseFile("$pwd\test.txt", $parsers)
$results
    Instead of defining separate classes for each type of line and output object, you could probably do something more generic with delegates (similar to how I used ScriptBlock.Invoke() in the PowerShell example), but it might sacrifice some speed to do so.

  • Iterators - most efficient way to get last object?

I have a Collection of objects, such as from an EJB, and I want to get only the last object. What is the most efficient way of doing this?
    TIA!

Addendum to the previous post.
Although, that test calls to mind the question: what are you going to do if the size is 0? Throw an exception? As the code stands, it will throw an index-out-of-bounds exception to the calling process, which could be handled there, and in truth probably should be.
    Modified and expanded example
public class ListThingie{
     java.util.ArrayList listOfThingies = null ;
     public ListThingie(){
          super() ;
          listOfThingies = new java.util.ArrayList() ;
          buildListOfThingies() ;
     }
     public void buildListOfThingies(){
          //... build the ArrayList here
     }
     public Object getLastThingie() throws IndexOutOfBoundsException{
          return listOfThingies.get((listOfThingies.size()-1)) ;
     }
}
public class ThingieProcessor{
     public static void main( String[] args){
          try{
               ListThingie listThingie = new ListThingie() ;
               System.out.println(listThingie.getLastThingie()) ;
          }
          catch(IndexOutOfBoundsException e){
               System.out.println(e.getMessage()) ;
          }
     }
}
There, that's better.

  • What's the most efficient way to serve a file from a servlet?

I have a servlet that does various things depending on the need. Sometimes it dynamically generates content, and sometimes all it does is send a file out, with no alteration.
    What is the most efficient way to just send a file?
One option:
OutputStream os = response.getOutputStream();
InputStream is = new FileInputStream(...);
(send all the bytes from is to os, the regular way using a buffer)
Another option is to say:
RequestDispatcher rd = request.getRequestDispatcher(fileName);
rd.forward(request, response);
Any other options? What's the preferred way of doing this?
I know the rule of "don't optimize too early", but this is a situation where we need to serve the maximum number of files with the hardware we have, and it's going to be a lot of static files, so efficiency is important.
    Thanks

Ok, that's what I thought. It would be nice if there were a "response.sendStream(InputStream input)" method in the ServletResponse class. Even nicer would be a sendFile or sendChannel or something. This is probably a common usage, and it's a place where the container has many opportunities for optimization. For example, it could call the operating system's send_file kernel call so the entire transfer would be done directly from the disk controller to the ether card (on systems that support that).
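On Linux, for example, that kernel call is sendfile(2); here is a rough C sketch of what a container could do under the hood (not servlet code, and a production version would loop on partial writes):

#include <sys/sendfile.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

//Send an entire file to an already-connected socket without copying
//the data through user space; returns 0 on success, -1 on error
int send_whole_file (int sock_fd, const char *path)
{
    struct stat st;
    off_t offset = 0;
    ssize_t sent;

    int fd = open (path, O_RDONLY);
    if (fd < 0)
        return -1;
    if (fstat (fd, &st) < 0) {
        close (fd);
        return -1;
    }
    sent = sendfile (sock_fd, fd, &offset, st.st_size);
    close (fd);
    return (sent == st.st_size) ? 0 : -1;
}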
    For now I'll just do my own buffered copy.

  • What is the most efficient way of passing large amounts of data through several subVIs?

    I am acquiring data at a rate of once every 30mS. This data is sorted into clusters with relevant information being grouped together. These clusters are then added to a queue. I have a cluster of queue references to keep track of all the queues. I pass this cluster around to the various sub VIs where I dequeue the data. Is this the most efficient way of moving the data around? I could also use "Obtain Queue" and the queue name to create the reference whenever I need it.
Or would it be more efficient to create one large cluster which I pass around? Then I can use unbundle by index to pick off the values I need. This large cluster can have all the values individually or it could be composed of the previously mentioned clusters (i.e. a large cluster of clusters).

    > I am acquiring data at a rate of once every 30mS. This data is sorted
    > into clusters with relevant information being grouped together. These
    > clusters are then added to a queue. I have a cluster of queue
    > references to keep track of all the queues. I pass this cluster
    > around to the various sub VIs where I dequeue the data. Is this the
    > most efficient way of moving the data around? I could also use
    > "Obtain Queue" and the queue name to create the reference whenever I
    > need it.
    > Or would it be more efficient to create one large cluster which I pass
    > around? Then I can use unbundle by index to pick off the values I
    > need. This large cluster can have all the values individually or it
> could be composed of the previously mentioned clusters (i.e. a large
> cluster of clusters).
    It sounds pretty good the way you have it. In general, you want to sort
    these into groups that make sense to you. Then if there is a
    performance problem, you can arrange them so that it is a bit better for
the computer, but let's face it, our performance counts too. Anyway,
    this generally means a smallish number of groups with a reasonable
    number of references or objects in them. If you need to group them into
    one to pass somewhere, bundle the clusters together and unbundle them on
    the other side to minimize the connectors needed. Since the references
    are four bytes, you don't need to worry about the performance of moving
    these around anyway.
    Greg McKaskle

  • I am giving my old MacBook Air to my granddaughter.  What is the most efficient way to erase all the data on it?

    I am giving my old MacBook Air to my granddaughter.  What is the most efficient way to erase the data?

    You have two options.....
    One is to do a clean reinstall of your OS - if you still have the USB installer that came with your Macbook Air...
The second option is to create a new user (your granddaughter's name).....Deauthorize your Macbook Air from your iTunes and App Store.....
Restart your Macbook after you've created your granddaughter's user name, log in under your granddaughter's username, and delete your username.
    Search your Macbook for your old files and delete them.....
    Good luck...
