Efficient methods?

I have written a piece of code that reads the contents of an array of text fields. The content of each text field represents one of several instructions. Each instruction is read, broken into two parts, and looked up in a switch statement; if it is valid, the appropriate action is executed. I also want to be able to add more commands. At the moment I have to extend the switch statement. This works, but I was wondering if there is a more efficient way to design it, e.g. to create a class for each instruction and let the program determine at runtime which of these classes to execute, so that adding a new command only requires writing a new class.
Two ways have been suggested. I would appreciate it if someone can take a look and tell me how and why they are more efficient.
Thank you
P.S.: the commands are in the form of integers, e.g. 0000. This is split into two and passed to Command as in:
int do11 = 00, do22 = 00;
Command(do11, do22);
My method
private class Do implements ActionListener   // "do" is a reserved word in Java, so the class is renamed
{
    public void actionPerformed(ActionEvent event)
    {
        for (km = start2; km < Size && Stop == false; km++)
        {
            String do1 = read.substring(0, 2);
            int do11 = Integer.parseInt(do1);
            String do2 = read.substring(2);   // the rest of the string
            int do22 = Integer.parseInt(do2);
            Command(do11, do22);
        }
    }
}
public void Command(int code1, int code2)
{
    switch (code1)
    {
        case 1:
            String copy11 = MemoryContent[operand].getText();
            calculator2.setText("   " + copy11);
            break;
        case 2:
            int copy31 = Integer.parseInt(MemoryContent[operand].getText());
            int copy32 = Integer.parseInt(calculator2.getText());
            int sum = copy31 + copy32;
            calculator2.setText(sum + "");   // calculator2 is a label
            break;
        case 3:
            // do something
            break;
        // further cases: any other integer that may be added in future
        default:
            error.setText("Invalid Command");
    }
}
Suggested method 1
http://forum.java.sun.com/thread.jsp?forum=4&thread=228069
partial to reply #4
Suggested method 2
class Do ... {
    public interface CommandDo {
        public void doIt(int arg);
    }

    final CommandDo[] cmds = {
        new CommandDo() {
            public void doIt(int arg) { /* ... do command 0 ... */ }
        },
        new CommandDo() {
            public void doIt(int arg) { /* ... do command 1 ... */ }
        },
        // ... more commands ...
    };

    public void command(int cmd, int arg) {
        cmds[cmd].doIt(arg);
    }
}

I'm not sure exactly what your code is doing, but it sounds like you need to use the command pattern.
What you need is a Command interface with an execute() method. When you want to add new commands to your system, you just implement the execute method a different way. The next step is knowing which command to execute. Since you have digits representing commands, you need a HashMap with the key being the digits and the value being an instance of the command to execute when you see those digits. Your huge switch statement becomes a simple hash lookup which returns a command; then you call the command's execute method. I have not used this approach myself, but here is an article that discusses it:
http://www.javaworld.com/javatips/jw-javatip68_p.html
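To make the reply concrete, here is a minimal sketch of the HashMap-based dispatch it describes. The opcodes, the Command interface, and the behaviours are illustrative assumptions, not the original poster's actual commands:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the command pattern with a HashMap lookup.
// The opcodes and behaviours below are invented for illustration.
interface Command {
    String execute(int operand);
}

public class CommandDispatcher {
    private final Map<Integer, Command> commands = new HashMap<>();

    public CommandDispatcher() {
        // Adding a new command is one map entry, not a new switch case.
        commands.put(1, operand -> "copy cell " + operand);
        commands.put(2, operand -> "add cell " + operand);
    }

    public String dispatch(int opcode, int operand) {
        Command c = commands.get(opcode);
        // Unknown opcodes take the place of the switch statement's default branch.
        return (c == null) ? "Invalid Command" : c.execute(operand);
    }

    public static void main(String[] args) {
        CommandDispatcher d = new CommandDispatcher();
        System.out.println(d.dispatch(1, 7));   // copy cell 7
        System.out.println(d.dispatch(99, 0));  // Invalid Command
    }
}
```

The real commands would call setText() on the labels, as in the original switch; returning strings here just keeps the sketch self-contained and testable.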

Similar Messages

  • I need a more efficient method of transferring data from RT in a FP2010 to the host.

    I am currently using LV6.1.
    My host program is currently using DataSocket to read and write data to and from a FieldPoint 2010 system. My controls and indicators are defined as DataSockets. In FP I have an RT loop talking to a communication loop using RT FIFOs. The communication loop uses Publish to send and receive via the DataSocket indicators and controls in the host program. I am running out of bandwidth in getting data to and from the host, and there is not very much data. The RT program includes 2 PIDs and 2 filters. There are 10 floats going to the host and 10 floats coming back from the host. The desired time-critical loop time is 20 ms; the actual loop time is about 14 ms. Data is moving back and forth between host and FP several times a second without regularity (not a problem). If I add a couple more floats in each direction, the communication rate drops to once every several seconds (too slow).
    Is there a more efficient method of transferring data back and forth between the Host and the FP system?
    Will LV8 provide faster communications between the host and the FP system? I may have the option of moving up.
    Thanks,
    Chris

    Chris, 
    Sounds like you might be maxing out the CPU on the Fieldpoint.
    DataSocket is considered a pretty slow method of moving data between hosts and targets, as it has quite a bit of overhead associated with it.  There are several things you could do. First, instead of using a DataSocket item for each float you want to transfer (which I assume you are doing), try using an array of floats and a single DataSocket transfer for the whole array.  This is often quite a bit faster than calling a Publish VI for many different variables.
    Also, as Xu mentioned, using a raw TCP connection would be the fastest way to move data.  I would recommend taking a look at the TCP examples that ship with LabVIEW to see how to effectively use these. 
    LabVIEW 8 introduced the shared variable, which, when network-enabled, makes data transfer very simple and is quite a bit faster than a comparable DataSocket transfer.  While faster than DataSocket, shared variables are still slower than flat-out using a raw TCP connection, but they are much more flexible.  Also, shared variables can function in the RT FIFO capacity and clean up your diagram quite a bit (while maintaining the RT FIFO functionality).
    Hope this helps.
    --Paul Mandeltort
    Automotive and Industrial Communications Product Marketing

  • Efficient method to insert large number of data into table

    Hi,
    I have a procedure that accepts an input parameter containing comma-separated values.
    Something like G12-UHG,THA-90HJ,NS-98039,........ There can be more than 90,000 values in that comma-separated input parameter.
    What is the most efficient way to do an insert in this case?
    3 methods I have in mind are:
    1) Get individual tokens from the CSV and use a plain old loop to do each insert.
    2) Use BULK COLLECT and FORALL. However, I don't know how to do this, since the input is not from a cursor but a parameter.
    3) Use table collections. Again this involves plain old looping through the collection, same as the 1st method.
    Please do suggest the most efficient method.
    Thanks

    90,000 values?? What's the data type of the input parameter?
    You can use the string-to-rows conversion trick if you want and do a single insert:
    SQL> with t as (select 'ABC,DEF GHI,JKL' str from dual)
      2  select regexp_substr(str,'[^,]+', 1, level) list
      3  from t connect by level <= NVL( LENGTH( REGEXP_REPLACE( str, '[^,]+', NULL ) ), 0 ) + 1
      4  /
    LIST
    ABC
    DEF GHI
    JKL
    Edited by: Karthick_Arp on Feb 13, 2009 2:18 AM

  • IFRAME into iMOVIE - most efficient method for importing?

    What would be the most efficient method for importing iFrame movies from a camera into iMovie?
    iFrame is supposed to save time and let you work more efficiently in lieu of quality, but I don't seem to find a way to import the movies any faster than in other formats.
    On a second note, importing into iMovie from DV (tape) cameras dramatically reduced the image quality. Do we still have the same issue when importing an iFrame movie?
    Thank you for your help!

    I'm completely new to importing iFrame into iMovie 11 myself, as I only got my new Panasonic X920 camcorder 2 days ago. Can you please tell me: is there a big drop in quality from 1080 60p to iFrame quality?

  • Most efficient method to process 2 million plus records from & to a Ztable

    Hi All,
    My requirement is as follows:
    There is a table which has 20 and odd columns, and close to 2 million records.
    Initially only 5 or 6 columns will have data. Now the requirement is to fetch them and populate the remaining columns of the table by looking into other tables.
    Looking for the most efficient method to handle this as the data count is huge.
    There should be an optimum balance between memory usage and time consumption.
    Kindly share your expertise in this regard.
    Thanks
    Mani

    Write a program to download the data for the table columns to be filled into a local file in .XLS format.
    Then write a report for uploading the data from the local .XLS file into the database table through an internal table itab:
    LOOP AT itab.
      UPDATE the database table WHERE the condition on the primary fields matches.
    ENDLOOP.
    First try this on the development and testing server, then go for production.
    But take a backup of the full existing production data into a local file, and also get the necessary approvals for doing this task.
    Reward points if it is useful.
    Girish

  • Efficient method?

    I wanted to know which of the following is the more efficient way of declaring Strings:
    a) String a = new String();
    b) String a = "";
    Which of the two is more efficient when declaring a larger number of String objects (say 10)?

    "" is more efficient, because the literal gets constructed once and that same interned instance is reused at every literal occurrence of the same string.
    new String() is less efficient because, as of this writing, no optimizing compiler is allowed to optimize out the object creation (since in the limit case it still has to load the class). This may or may not be true for Strings specifically, so all I can do is guess and refer you to the source code of your local JVM.
    It doesn't really matter much, however. "" is much clearer in its intentions than new String(). Use literals like "" and "HI!" for writing constants, and the constructor for anything else, like making a String from a char array.
    ~Cheers
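The interning behaviour the reply describes can be checked directly; this small sketch is just an illustration of the difference, not part of the original thread:

```java
public class StringLiterals {
    public static void main(String[] args) {
        String a = "";
        String b = "";
        String c = new String("");

        // Every "" literal refers to the same interned pool object...
        System.out.println(a == b);      // true
        // ...while new String() always allocates a fresh object...
        System.out.println(a == c);      // false
        // ...even though the contents are equal.
        System.out.println(a.equals(c)); // true
    }
}
```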

  • RMAN Backup successful check - efficient method

    I am looking for an efficient single method/command to check that weekly RMAN backups are OK, e.g. July 03 to July 10, all backups OK. If someone can post a single command to check a week's backups (or from this date to this date), it would help.
    Right now I just go to the backup log dir and do
    grep -i 'ERROR' *07-08-2009.log
    grep -i 'ERROR' *07-09-2009.log
    and so on for each date, which is not very efficient and is manual.

    Generate a script to check V$RMAN_STATUS and V$RMAN_OUTPUT

  • Efficient method to read a Setup file ? Config VIs ?

    Hello All:
    I am developing a project with a large setup file which is linked to a global variable in my code. Each section in the setup file maps to a cluster in my global variable. 
    Typically I use the Config VIs to read and modify the setup file from my code. Of late I've been using the OpenG Config VIs, since they are easy to use with clusters. Since speed is a major concern, is there an efficient way to do this, OR an alternative to the Config VIs?
    Kudos always welcome for helpful posts

    The real answer to this problem is to use reference objects - objects you can get to anywhere that contain arbitrary data that can be easily modified.  I can recommend two - LV2 globals and single element queues.  Both are discussed, with code samples, in this thread.  I have also attached a short and amusing tutorial on large program development which addresses many of the issues you are seeing (LV 7.1 and 7.0 formats).
    For complex configuration files, you can't beat one of the free generic hierarchical file systems.  I usually use HDF5, but there are others.  You can find a LabVIEW API for an older version of HDF5 here.  Note that the learning curve is fairly steep and the VIs are not multi-thread safe, so don't try to use them in two places at once.  If you do, you will get errors at best and corrupt your file at worst.
    Let us know if you have any more problems.
    This account is no longer active. Contact ShadesOfGray for current posts and information.
    Attachments:
    LargeGUIApplicationsInLabVIEW.zip ‏711 KB
    LargeGUIApplicationsInLabVIEW_70.zip ‏735 KB

  • Efficient method to save data to disk in RT?

    What options are available to read data from a buffer and save it to the hard disk drive in RT? What method requires the least processor overhead...or perhaps can be set to run in the background without any processor intervention at all?

    In Traditional NI-DAQ 7.0, buffered analog input (a la AI Read.vi) uses DMA to transfer points from an E-Series board to a software buffer on the PC. The uP doesn't have to intervene for this to occur thanks to DMA. However, the uP does have to read from the software buffer and write the points to disk.
    In a real-time scenario, we generally recommend that customers acquire the data in their time-critical VI and then pass the data to a normal priority VI using RT FIFOs (shipping in 7.0, otherwise available on the web). The RT FIFOs are non-blocking queues that are expressly designed for moving data from a time-critical VI to a lower priority VI safely.
    Acquired data would thus follow a path like this:
    E-Series -> PC software buffer ->
    time-critical VI in LabVIEW -> normal priority VI in LabVIEW -> disk
    The first transfer from board to software buffer uses DMA and happens transparently to the user. The second transfer occurs within AI Read or AI Singlescan. Third transfer you must program using RT FIFOs. Last transfer you must program with File I/O VIs.

  • What is an efficient method to create presets?

    Hello all,
    I am trying to create presets for my VI and I would like some input from others who've created preset mechanisms.  I saw this post and saw that it was possible to create a predefined preset fairly easily.  That's good and all, but I need to update my presets on the fly.  Basically, I want to move four servos to certain positions via pulsewidth.  However, there will be many different positions (~20) I need them to be at, and stopping the code to put in constants seems like it would be extremely time-consuming for each preset.  I would like a preset method which would take the servo locations and store them somewhere so I can load them again later.
    One idea I had was to create a txt file and store the pulsewidth of each servo.  This way I could overwrite the file and read the file at any time.  
    I have been looking into event structures, but those seem tied to the idea of making pre-made presets.
    Any help would be gladly appreciated.
    Thanks,
    Matt
    Solved!
    Go to Solution.

    If you merely have 20 positions which will never change, you can use an Enum control wired to a case structure (all of this in a while loop).  When the Enum control value changes, the case structure selects the case which contains the correct pulsewidth values.
    If your positions could change over time, an external file would be a good way to go.  Look at Configuration Files.  You can write one section for each position; in each section, you store the 4 pulsewidth values you need.

  • Most efficient method of storing configuration data for huge volume of data

    The scenario I'm bogged down in is as follows:
    I have a huge volume of raw data (as CSV files).
    This data needs to be rated based on the configuration tables.
    The output is again CSV data with some new fields appended to the original records.
    These new fields are derived from original data based on the configuration tables.
    There are around 15 configuration tables.
    Out of these 15 tables 4 tables have huge configurations.
    1 table has 15 million configuration rows of 10 columns.
    The other three tables have around 1-1.5 million configuration rows of 10-20 columns.
    Now in order to carry forward my rating process, i'm left with the following methods:
    1) Leave the configurations in database table. Query the table for each configuration required.
    Disadvantage: Even if the indexes are created on the table, it takes a lot of time to query 15 configuration tables for each record in the file.
    2) Load the configurations as key value pairs in RAM using a suitable collection (Eg HashMap)
    Advantage: Processing is fast
    Disadvantage: Takes around 2 GB of RAM per instance.
    Also, when the CPU context switches (I'm using an 8-CPU server), the process hangs for 10 secs.
    This happens very frequently, so the net speed I get is again less.
    3) Store the configurations as CSV sorted files and then perform a binary search on it.
    Advantages: No RAM usage, Same configuration shared by multiple instances
    Disadvantages: Only 1 configuration table has an integer key, so I can't use this concept for the other tables
    (if I'm wrong about that, please correct me).
    4) Store the configurations as an XML file
    I don't know the advantages/disadvantages of it.
    Please suggest the methodology that should be carried out.
    Edited by: Vishal_Vinayak on Jul 6, 2009 11:56 PM

    Vishal_Vinayak wrote:
    2) Load the configurations as key value pairs in RAM using a suitable collection (Eg HashMap)
    Advantage: Processing is fast
    Disadvantage: Takes around 2 GB of RAM per instance.
    Also, when the CPU context switches (I'm using an 8-CPU server), the process hangs for 10 secs.
    This happens very frequently, so the net speed I get is again less.
    Sounds like you don't have enough physical memory. Your application shouldn't be hanging at all.
    How much memory is attached to each CPU? e.g. numactl --show                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       
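For method 2 from the question, a minimal sketch of loading configuration rows into a HashMap keyed by an integer id might look like the following; the two-column id,value layout is an assumption made up for illustration:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of method 2: cache configuration rows in a HashMap so each
// rating lookup is O(1) instead of a database round trip.
// The id,value column layout is hypothetical.
public class ConfigCache {
    public static Map<Integer, String> load(String[] csvRows) {
        Map<Integer, String> cache = new HashMap<>();
        for (String row : csvRows) {
            String[] cols = row.split(",", 2);   // id, rest of the row
            cache.put(Integer.parseInt(cols[0].trim()), cols[1].trim());
        }
        return cache;
    }

    public static void main(String[] args) {
        String[] rows = { "101,rate-A", "102,rate-B" };
        Map<Integer, String> cache = load(rows);
        System.out.println(cache.get(101)); // rate-A
        System.out.println(cache.get(999)); // null (no such configuration)
    }
}
```

The memory cost the poster observed comes from holding every row as objects; sizing the heap for the full 15 million entries, or sharing one cache across instances, is the real trade-off being discussed.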

  • Pointers: more efficient method(s), styles for making unconventional UI's

    I currently use images on my custom panels to give the customized look I want for my apps. But I just can't shake the feeling that there are more efficient ways to do it. I just need pointers to some materials (books, articles, documentation, etc.) on some technology I can use.
    Thanks!


  • Most efficient method of online replication!

    Hello Guys,
    I want to know the most efficient way of synchronous (real-time) replication between 2 Oracle databases that are in 2 different geographical locations, with connectivity between them over the internet.
    Both systems are linux based and oracle 11gR1 is installed.
    The constraint is performance.
    Kindly help.
    Regards, Imran

    1) Do you really, really need synchronous replication? Or just "near real time" replication? Synchronous replication requires that neither database commit until both databases have the change, which implies that you're using two-phase commit which implies that every single transaction requires multiple messages to be exchanged between the servers. Two servers that are widely separated are likely going to add a substantial overhead to the transaction. There are a small handful of cases where that might be reasonable, but for 99.9% of the applications out there, synchronous replication is something to be avoided at all costs.
    2) What is the business problem you are trying to solve? Are you trying to create a failover database? Are you trying to replicate a small subset of data from one database to another? Something else?
    3) What edition of Oracle are you using? Enterprise or standard?
    Justin

  • What is an efficient method for increasing speed without deleting photos?

    I have a late 2009 model, 27-inch iMac:
    Model Name:                        iMac
      Model Identifier:                iMac10,1
      Processor Name:               Intel Core 2 Duo
      Processor Speed:              3.06 GHz
      Number of Processors:    1
      Total Number of Cores:   2
      L2 Cache:                            3 MB
      Memory:                              4 GB
      Bus Speed:                          1.07 GHz
      Boot ROM Version:            IM101.00CC.B00
      SMC Version (system):     1.53f13
      Serial Number (system):   QP******5PE
      Hardware UUID:      ****
    It has become frustratingly slow of late.
    Full Disclosure:  I have 18,660 photos and 81 videos in my iPhoto app.  I understand that deleting some of those would improve speed but these photos represent a good portion of my long life and having turned 75 on the last day of March, 2014, I doubt that I have enough years left to even begin such a task. 
    I would likely spend days and days agonizing over two nearly identical photos trying to decide which one to discard, then as soon as I made that crucial decision, would probably rethink my choice and pull the rejected photo out of the trash …
    So my question is:
    Other than deleting photos (a task I will continue to eschew forever, thank you very much), what are the options available to increase the speed without a substantial commitment of funds from my meager life savings?
    My frustration is such that soon, I will be forced to make an appointment with the Genius Bar at a local Mac Store but the nearest one to my home is some 30 miles away and I’d like to avoid the hassle, if possible.
    I understand that there is software available online that purports to safely increase the speed of a Mac, but not being a techie, and being of a naturally suspicious bent, I hesitate to download apps unless I'm absolutely assured of their safety.
    Thanks for any help anyone can offer.
    <Edited By Host>

    Hardware Information:
              iMac (27-inch, Late 2009)
              iMac - model: iMac10,1
              1 3.06 GHz Intel Core 2 Duo CPU: 2 cores
              4 GB RAM
    Video Information:
              ATI Radeon HD 4670 - VRAM: 256 MB
    System Software:
              OS X 10.9.2 (13C64) - Uptime: 0 days 1:39:48
    Disk Information:
              ST31000528AS disk0 : (1 TB)
                        EFI (disk0s1) <not mounted>: 209.7 MB
                        Macintosh HD (disk0s2) / [Startup]: 999.35 GB (872.68 GB free)
                        Recovery HD (disk0s3) <not mounted>: 650 MB
              HL-DT-ST DVDRW  GA11N 
    USB Information:
              Toshiba External USB 3.0 1 TB
                        EFI (disk1s1) <not mounted>: 209.7 MB
                        TOSHIBA EXT (disk1s2) /Volumes/TOSHIBA EXT: 999.86 GB (595.98 GB free)
              Apple Inc. Built-in iSight
              Apple, Inc. Keyboard Hub
                        Microsoft Microsoft® Comfort Mouse 4500
                        Apple Inc. Apple Keyboard
              Apple Internal Memory Card Reader
              Apple Computer, Inc. IR Receiver
              Apple Inc. BRCM2046 Hub
                        Apple Inc. Bluetooth USB Host Controller
    FireWire Information:
    Thunderbolt Information:
    Launch Daemons:
              [System] com.adobe.fpsaud.plist 3rd-Party support link
              [System] com.carbonite.launchd.carbonitedaemon.plist 3rd-Party support link
              [System] com.microsoft.office.licensing.helper.plist 3rd-Party support link
    Launch Agents:
              [System] com.carbonite.launchd.carbonitealerts.plist 3rd-Party support link
              [System] com.carbonite.launchd.carbonitestatus.plist 3rd-Party support link
    User Launch Agents:
              [not loaded] com.adobe.ARM.[...].plist 3rd-Party support link
              [not loaded] com.spotify.webhelper.plist 3rd-Party support link
    User Login Items:
              iTunesHelper
              Spotify
              AdobeResourceSynchronizer
              CFAgent
              SacReminder
    Internet Plug-ins:
              FlashPlayer-10.6: Version: 12.0.0.77 - SDK 10.6 3rd-Party support link
              Default Browser: Version: 537 - SDK 10.9
              AdobePDFViewerNPAPI: Version: 11.0.06 - SDK 10.6 3rd-Party support link
              CouponPrinter-FireFox_v2: Version: 1.1.10 - SDK 10.5 3rd-Party support link
              AdobePDFViewer: Version: 11.0.06 - SDK 10.6 3rd-Party support link
              Flash Player: Version: 12.0.0.77 - SDK 10.6 3rd-Party support link
              QuickTime Plugin: Version: 7.7.3
              SharePointBrowserPlugin: Version: 14.3.9 - SDK 10.6 3rd-Party support link
              iPhotoPhotocast: Version: 7.0 - SDK 10.8
    Safari Extensions:
              clea.nr Videos: Version: 5.0
              Social Fixer: Version: 9.0
              AdBlock: Version: 2.6.18
              Facebook Cleaner: Version: 3.3
    Audio Plug-ins:
              BluetoothAudioPlugIn: Version: 1.0 - SDK 10.9
              AirPlay: Version: 2.0 - SDK 10.9
              AppleAVBAudio: Version: 203.2 - SDK 10.9
              iSightAudio: Version: 7.7.3 - SDK 10.9
    iTunes Plug-ins:
              Quartz Composer Visualizer: Version: 1.4 - SDK 10.9
    User Internet Plug-ins:
              fbplugin_1_0_0: Version: Unknown
    3rd Party Preference Panes:
              Carbonite  3rd-Party support link
              Flash Player  3rd-Party support link
    Old Applications:
              /Library/Application Support/Microsoft/MERP2.0
                        Microsoft Error Reporting:          Version: 2.2.9 - SDK 10.4 3rd-Party support link
                        Microsoft Ship Asserts:          Version: 1.1.4 - SDK 10.4 3rd-Party support link
              Solver:          Version: 1.0 - SDK 10.5 3rd-Party support link
                        /Applications/Microsoft Office 2011/Office/Add-Ins/Solver.app
              /Library/Application Support/Carbonite
                        CarboniteDaemon:          Version: 1.1.14 build 604 - SDK 10.5 3rd-Party support link
                        CarboniteStatus:          Version: 1.1.14 build 604 - SDK 10.5 3rd-Party support link
                        CarboniteAlerts:          Version: 1.1.14 build 604 - SDK 10.5 3rd-Party support link
              /Applications/Microsoft Office 2011/Office
                        Microsoft Graph:          Version: 14.3.9 - SDK 10.5 3rd-Party support link
                        Microsoft Database Utility:          Version: 14.3.9 - SDK 10.5 3rd-Party support link
                        Microsoft Office Reminders:          Version: 14.3.9 - SDK 10.5 3rd-Party support link
                        Microsoft Upload Center:          Version: 14.3.9 - SDK 10.5 3rd-Party support link
                        My Day:          Version: 14.3.9 - SDK 10.5 3rd-Party support link
                        SyncServicesAgent:          Version: 14.3.9 - SDK 10.5 3rd-Party support link
                        Open XML for Excel:          Version: 14.3.9 - SDK 10.5 3rd-Party support link
                        Microsoft Alerts Daemon:          Version: 14.3.9 - SDK 10.5 3rd-Party support link
                        Microsoft Database Daemon:          Version: 14.3.9 - SDK 10.5 3rd-Party support link
                        Microsoft Chart Converter:          Version: 14.3.9 - SDK 10.5 3rd-Party support link
                        Microsoft Clip Gallery:          Version: 14.3.9 - SDK 10.5 3rd-Party support link
              /Applications/Microsoft Office 2011
                        Microsoft PowerPoint:          Version: 14.3.9 - SDK 10.5 3rd-Party support link
                        Microsoft Excel:          Version: 14.3.9 - SDK 10.5 3rd-Party support link
                        Microsoft Outlook:          Version: 14.3.9 - SDK 10.5 3rd-Party support link
                        Microsoft Word:          Version: 14.3.9 - SDK 10.5 3rd-Party support link
                        Microsoft Document Connection:          Version: 14.3.9 - SDK 10.5 3rd-Party support link
              Microsoft Language Register:          Version: 14.3.9 - SDK 10.5 3rd-Party support link
                        /Applications/Microsoft Office 2011/Additional Tools/Microsoft Language Register/Microsoft Language Register.app
              Microsoft AutoUpdate:          Version: 2.3.6 - SDK 10.4 3rd-Party support link
                        /Library/Application Support/Microsoft/MAU2.0/Microsoft AutoUpdate.app
    Time Machine:
              Skip System Files: NO
              Auto backup: YES
              Volumes being backed up:
                        Macintosh HD: Disk size: 930.71 GB Disk used: 117.97 GB
              Destinations:
                        TOSHIBA EXT [Local] (Last used)
                        Total size: 931.19 GB
                        Total number of backups: 99
                        Oldest backup: 2012-12-01 22:50:20 +0000
                        Last backup: 2014-04-05 17:27:14 +0000
                        Size of backup disk: Adequate
                                  Backup size 931.19 GB > (Disk used 117.97 GB X 3)
              Time Machine details may not be accurate.
              All volumes being backed up may not be listed.
    Top Processes by CPU:
                  72%          CarboniteDaemon
                   2%          WindowServer
                   1%          EtreCheck
                   0%          iPhoto
                   0%          SystemUIServer
    Top Processes by Memory:
              340 MB          Safari
              287 MB          iPhoto
              274 MB          CarboniteDaemon
              90 MB          WindowServer
              86 MB          Microsoft Word
    Virtual Memory Information:
              49 MB          Free RAM
              1.63 GB          Active RAM
              1.59 GB          Inactive RAM
              536 MB          Wired RAM
              818 MB          Page-ins
              23 MB          Page-outs

  • Most efficient coding method to get from WFM Digital to Timestamp Array and Channel Number Array - Earn Kudos Here !

    I'm fetching data from a digitizer card using the niHSDIO Fetch Waveform VI.  After fetching the data I want to keep the Timestamp and Digital Pattern, which I'll stream to a TDMS file.
    What is the most efficient method of stripping out the arrays I'm interested in keeping?
    The attached VI shows the input format and desired output.  The Record Length is always 1.  I'll be streaming 100,000+ records to file using a producer-consumer architecture with the consumer performing a TDMS write.
    I'm assuming only the WDT Fetch gives you the time from t0.
    Attachments:
    Digital Waveform to Array Coding.vi ‏11 KB

    Hi bmann2000,
    I'm not sure about efficiency but this method definitely works. I've just used a 'Get Digital Waveform Component' function and the 'Digital to Boolean Array' function.
    Hope this helps.
    Chris
    National Instruments - Tech Support
    Attachments:
    Digital Waveform to Array Coding 2.vi ‏15 KB
