Most efficient coding method to get from WFM Digital to Timestamp Array and Channel Number Array - Earn Kudos Here!

I'm fetching data from a digitizer card using the niHSDIO Fetch Waveform VI. After fetching the data I want to keep the Timestamp and Digital Pattern arrays, which I'll stream to a TDMS file.
What is the most efficient method of stripping out the arrays I'm interested in keeping?
The attached VI shows the input format and desired output. The record length is always 1. I'll be streaming 100,000+ records to file using a producer-consumer architecture, with the consumer performing the TDMS write.
I'm assuming only the WDT Fetch gives you the time from t0.
Attachments:
Digital Waveform to Array Coding.vi (11 KB)

Hi bmann2000,
I'm not sure about efficiency but this method definitely works. I've just used a 'Get Digital Waveform Component' function and the 'Digital to Boolean Array' function.
Hope this helps.
Chris
National Instruments - Tech Support
Attachments:
Digital Waveform to Array Coding 2.vi (15 KB)

Similar Messages

  • Most efficient way to strip nulls from a Double[] array?

    I'm trying to optimize the performance of some code that needs to accept a Double[] array (or a List<Double>) and fetch the median of the non-null values. I'm using org.apache.commons.math.stat.descriptive.rank.Median to compute the median after converting to double[]. My question is: how can I most efficiently make this conversion?
    import org.apache.commons.math.stat.descriptive.rank.Median;

    public class MathStatics {
         private static Median median = new Median();
         public static Double getMedian(Double[] doubles) {
              // first pass: count the non-null entries
              int numNonNull = 0;
              for (int i = 0; i < doubles.length; i++) {
                   if (doubles[i] != null) numNonNull++;
              }
              // second pass: copy the non-null values into a primitive array
              double[] ds = new double[numNonNull];
              int j = 0;
              for (int i = 0; i < doubles.length; i++) {
                   if (doubles[i] != null) {
                        ds[j] = doubles[i];
                        System.out.println(ds[j]);
                        j++;
                   }
              }
              return median.evaluate(ds);
         }
         public static void main(String[] args) {
              Double[] test = new Double[] {null, null, -1.1, 2.2, 5.8, null};
              System.out.println(MathStatics.getMedian(test));
         }
    }
    I'm sure that the code I wrote above is clunky and amateurish, so I'd really appreciate some insight into how to make improvements. FWIW, the arrays will typically range in size from ~1-15,000 doubles.
    Thanks!

    There's no need to loop over the array twice:
              int numNonNull = 0;
              double[] ds = new double[numNonNull];
              for (int i = 0; i < doubles.length; i++) {
                   if (doubles[i] != null) {
                        numNonNull++;
                        ds[i] = doubles[i];
                        System.out.println(ds[i]);
                   }
              }
    Except that ds will have length zero, so you'll get an ArrayIndexOutOfBoundsException every time you use it.
    As you're using Doubles rather than doubles, you can add the non-null values to an ArrayList and then convert it to an array at the end.
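    A minimal sketch of that ArrayList approach, assuming the same Commons Math Median class used above (the class layout below just mirrors the original example):
    import java.util.ArrayList;
    import java.util.List;
    import org.apache.commons.math.stat.descriptive.rank.Median;
    public class MathStatics {
         private static final Median median = new Median();
         public static Double getMedian(Double[] doubles) {
              // single pass: collect only the non-null values
              List<Double> nonNull = new ArrayList<Double>();
              for (Double d : doubles) {
                   if (d != null) nonNull.add(d);
              }
              // unbox into the primitive array that Median.evaluate() expects
              double[] ds = new double[nonNull.size()];
              for (int i = 0; i < ds.length; i++) {
                   ds[i] = nonNull.get(i);
              }
              return median.evaluate(ds);
         }
    }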

  • Most efficient way to import data from Excel to InDesign?

    Hi all,
    I'm designing a college prospectus which includes 400+ course listings. At the moment, these listings exist as a vast Excel sheet with fields like course type, course code, description etc.
    I'm familiar with importing Excel data into InDesign and formatting tables/creating table styles and such, but the problem I'm having is that the data is in multiple columns per course in the Excel sheet, but will be arranged in one column per course with multiple rows in the InDesign document. I can't seem to find a way to easily convert these columns into rows.
    Can anyone help me with an efficient way to get the data into the layout without laborious copying and pasting or formatting?
    Many thanks in advance!

    Hi,
    In Excel: copy the data, then use Paste Special with the Transpose option.

  • How to go from Firewire 800 or Thunderbolt to HDMI? Which one is the most efficient?

    I have no HDMI port on my macbook, but only Thunderbolt and FireWire 800, and I need to connect to my digital TV that requires an HDMI port.

    Hello, thanks for your rapid answer.
    I have a MacBook Pro (15"), with a 2.3 GHz Intel Core i7, 4 GB of 1700 MHz DDR3, an NVIDIA GeForce GT 650M with 512 MB, and OS X 10.8.4.
    I have the following ports: FireWire 800, Thunderbolt, USB 3 (x2), SDXC card slot, audio In and audio out (headphones).
    I bought a Belkin Mini DisplayPort to HDTV Cable (https://discussions.apple.com/message/22742775?ac_cid=op123456#22742775)
    I have an LG 32LH3000 HD TV with 3 HDMI inputs. I regularly use one of these HDMI inputs to watch movies stored on an HDD.
    Thanks again for your help.

  • What is the most efficient way to record an educational workshop training on my iPad and then upload it to either Schoology or a TeacherTube channel? We would need to record clips anywhere from 10 to 45 minutes and organize them in some way.

    Any help will be appreciated. We have a training coming up next week, and I am seeking to find out if there are any new technologies that I could utilize and model for this workshop.
    Thank you.

    What many people do is use one of the solutions that allow you to mirror from an iOS device to a computer:
    Reflector-  http://www.airsquirrels.com/reflector/
    AirServer-  http://www.airserverapp.com
    X-Mirage: http://www.x-mirage.com/x-mirage/
    and then use screen capture software on the computer to capture the iPad's display. The other option is to use a document camera or video camera to record the iPad. Questions about uploading to the referenced sites would be best addressed with those sites or in their forums.
    Regards.

  • HT201304 What kind of free programs can I get from the Apple online store and iTunes?

    Dear gents,
    Kindly, I would like some advice: how can I get free programs from either the Apple online store or iTunes?
    Best regards,

    There are many free apps. Do a search for them on the App Store.

  • Images imported from a digital camera were viewable and editable at first but later disappeared! They cannot be re-imported from either the camera or the iCloud as iPhoto indicates that they already exist, but cannot locate them. Please help!

    On holiday I have been uploading my photos to iPhoto on my MacBook Pro every few days; the latest batch appeared and I was able to edit them, flag, merge events, etc. However, later that day when I went back to the images they could not be found in my library. They had been removed from 'latest import', yet they were still viewable in the iCloud photo stream. I have attempted to re-import the images from both the digital camera and my Photo Stream, but iPhoto indicates that the images have already been imported and will not import them again. I was able to locate the master images using Finder, but iPhoto tells me that these files are unreadable (they are .jpg) when I try to re-add them this way. I have also tried to 'unhide' all images in case I had accidentally hidden them, but this didn't work either.
    Any other suggestions?

    A best practice is to never have any computer program (including iPhoto) delete the photos from the card. Instead, import the photos and keep them on the card, and only reformat the card after at least one successful backup cycle has completed. I use three very large (32 GB) cards in rotation, so I typically do not reformat for a year or more, which gives me one more long-term backup of my photos.
    Back up your iPhoto library, then hold down the Option (Alt) and Command keys while launching iPhoto, and rebuild your iPhoto library database.
    LN

  • I bought an iPhone 4S that was purchased from an Apple Store in Canada, and the iPhone won't work here in South Africa. What can I do?

    please help

    Do you mean that it is carrier locked?
    You would have to contact the carrier to which it is locked and see if they offer unlocking and if you qualify.
    Or does "won't work" mean something else?

  • HT203167 I transferred a movie from a digital copy disc to iTunes and when I saw it in Downloads, I accidentally tapped Hide. How do I unhide it from the iPhone or computer? It said to look in account info. Where would that be on the phone or computer?

    I transferred a movie from a digital copy to iTunes and tapped Hide by mistake in the Downloads. How do I unhide the movie? It said to do this in account info, how do I do this on the iPhone or computer?

    The movie is from iTunes. You would think that a download from iTunes to a MacBook could be watched on an HDTV using Apple TV without any headaches, but this does not appear to be the case.

  • I have a 5th gen iPod and I can't get it to be recognised by iTunes, and the actual iPod does not load further than the Apple logo before turning itself back off. Is there anything I can try?

    It is refurbished and looks to be in perfect condition and was bought from the apple store, including box and serial number. Any help is greatly appreciated!

    chicx wrote:
    This is the third time of writing this on your Apple Support Communities!
    Not with your current user id.
    Far too much unnecessary information in your post, which only confuses things, a vast amount!
    Let's start with iTunes.
    Have you updated iTunes to 11.1.5, because the previous version did appear to have an issue about seeing iPods?
    With iTunes 11.1.5 installed, look in Edit/Preferences/Devices, (or use the ALT key, followed by the E key and then the F key) and make sure that the box named Prevent iPods, iPhones and iPads from syncing automatically does not have a tick in the box.
    Once you have done those two things, check to see if the iPod is seen by iTunes.
    chicx wrote:
    By the way, what does IOS mean? (I thought IO stood for operating system, but am flummoxed by the S on the end.)
    Really?
    OS stands for Operating System. (In computer speak, IO means Input/Output.)
    iOS originally stood for iPhone Operating System, but it now refers to the iPod Touch and iPhone. The iPod Classic, which you have listed in your profile as your iPod, does not use iOS.
    I assume that you have been listening to the Podcast in your iTunes on the computer as you cannot transfer it to your iPod. It's what I'd do.

  • Most efficient way to get a connection from a defined connection pool [whole message]

    Having recently load-tested the application we are developing I noticed that
    one of the most expensive (time-wise) calls was my fetch of a db-connection
    from the defined db-pool. At present I fetch my connections using:
    private Connection getConnection() throws SQLException {
        try {
            Context jndiCntx = new InitialContext();
            DataSource ds = (DataSource) jndiCntx.lookup("java:comp/env/jdbc/txDatasource");
            return ds.getConnection();
        } catch (NamingException ne) {
            myLog.error(this.makeSQLInsertable("getConnection - could not find connection"));
            throw new EJBException(ne);
        }
    }
    In other parts of the code, not developed by the same team, I've seen the
    same task accomplished by:
    private Connection getConnection() throws SQLException {
        return DriverManager.getConnection("jdbc:weblogic:jts:FTPool");
    }
    From the performance-measurements I made the latter seems to be much more
    efficient (time-wise). To give you some metrics:
    The first version took a total of 75724 ms for a total of 7224 calls, which
    gives ~11 ms/call.
    The second version took a total of 8127 ms for 11662 calls, which gives
    ~0.7 ms/call.
    I'm no JDBC guru, so I'm probably missing something vital here. One
    suspicion I have is that the second call first finds the JDBC pool and after
    that makes the very same (DataSource)
    jndiCntx.lookup("java:comp/env/jdbc/txDatasource") call in order to fetch the
    actual connection anyway. If that is true then my comparison is plain wrong,
    since one call is part of the second. If not, then the second version sure
    seems a lot faster.
    Apart from the obvious performance differences in the two approaches above,
    is there any other difference one should be aware of (transaction context,
    for instance) between the two? Basically I'm working in an EJB environment
    on WebLogic 7.0 and looking for the most efficient way to get hold of a
    db-connection in code. Comments, anyone?
    //Linus Nikander - [email protected]

    Linus Nikander wrote:
    Thank you for both your replies. As per your suggestions I've improved my
    connection handling (I ended up implementing the Service Locator pattern, as
    a matter of fact).
    One thing still puzzles me though: which is the "proper" way (and why) to
    fetch the actual DataSource? As I stated before, I've seen two approaches
    within the code I've got:
    1. myDs = myServiceLocator.getDataSource("jdbc:weblogic:jts:FTPool");
    2. myDs = myServiceLocator.getDataSource("java:comp/env/jdbc/tgsDB");
    where getDataSource does a dataSource = (DataSource)
    initialContext.lookup(dataSourceName); with dataSourceName obviously being
    the input string.
    tgsDB is defined as
    <reference-descriptor>
      <resource-description>
        <res-ref-name>jdbc/tgsDB</res-ref-name>
        <jndi-name>tgs-dataSource</jndi-name>
      </resource-description>
    </reference-descriptor>
    in weblogic-ejb-jar.xml
    From what I can understand by your answer, you don't recommend using the
    JNDI-lookup way of getting the connection at all?
    Correct.
    The service locator that I implemented will still perform a JNDI lookup, but
    only once. Will the fact that I'm talking to an RMI object anyway
    significantly impact performance (when compared to your non-JNDI method)?
    In some cases, for earlier 7.0s, maybe yes. For the very latest, it shouldn't
    hurt.
    In my two examples above, if I use version 1, how will the server know
    whether to give me a TX-bound connection and when not to? In version 1,
    FTPool maps to a pool with both TX and non-TX datasources. In version 2,
    tgsDB maps directly to a TX DataSource.
    I might be asking a lot of strange questions, probably because I'm just
    getting the hang of all the resource-reference issues that EJBs are
    associated with. Bear with me ;)
    //Linus
    "Joseph Weinstein" <[email protected]> wrote in message
    news:[email protected]...
    Hi. As Jon said, the lookups are redundant. Because you showed that other way,
    I will infer that this code is always being run in server-side code. Good. I will give you
    a third way which is much better than either of the ones you showed. The first method
    you showed has a problem for all but the latest SPs: your JDBC objects will all be
    going through an unnecessary level of indirection, because you are getting an RMI JDBC
    object which talks to the JTS driver object.
    The second, faster method you showed also has a serious problem! One should
    never call DriverManager methods in multithreaded JDBC programs, because all
    DriverManager calls are class-synchronized, including some small internal ones like
    DriverManager.println(), which all JDBC drivers and even the constructor for
    SQLException call, so one slow getConnection() call can inadvertently halt all other
    JDBC being done in the whole JVM! Also, for JVMs that have lots of JDBC drivers
    registered, DriverManager is inefficient because it simply sends your URL and
    properties to every driver it has registered until it finds one that doesn't throw an
    exception and returns a connection.
    Here's the fastest way:
    // Do this once and reuse the driver object everywhere. It can be used by multiple threads.
    Driver d = (Driver) Class.forName("weblogic.jdbc.jts.Driver").newInstance();
    Then, whenever you want a connection:
    public void myJDBCMethod() {
        Connection c = null; // always a method-level object
        try {
            c = d.connect("jdbc:weblogic:jts:FTPool", null);
            // ... do all the JDBC for the method ...
            c.close();
            c = null;
        } catch (Exception e) {
            // ... do whatever, if needed ...
        } finally {
            // close the connection regardless of failure or exit path
            if (c != null) try { c.close(); } catch (Exception ignore) {}
        }
    }
    Joe

  • Most efficient method to process 2 million plus records from & to a Ztable

    Hi All,
    My requirement is as follows:
    There is a table which has 20-odd columns and close to 2 million records.
    Initially only 5 or 6 columns will have data. The requirement is to fetch the records and populate the remaining columns of the table by looking into other tables.
    I am looking for the most efficient method to handle this, as the data count is huge.
    There should be an optimum balance between memory usage and time consumption.
    Kindly share your expertise in this regard.
    Thanks
    Mani

    Write a program to download the data for the table columns to be filled into a local file in .XLS format.
    Then write a report for uploading the data from the local .XLS file into the database table through an internal table itab, for example:
    LOOP AT itab INTO wa.
      UPDATE ztable FROM wa.  " the row is matched on the primary key fields of the work area
    ENDLOOP.
    First try this on the development and testing servers, then go for production.
    But take a backup of the full existing production data into a local file, and also take the necessary approvals for doing this task.
    Reward points if it is useful.
    Girish

  • What's the most efficient way to serve a file from a servlet?

    I have a servlet that does various different things depending on the needs. Sometimes it dynamically generates content, and sometimes all it does is send a file out, with no alteration.
    What is the most efficient way to just send a file?
    One option:
    OutputStream os = response.getOutputStream();
    InputStream is = new FileInputStream(...);
    // send all the bytes from is to os, the regular way using a buffer
    Another option is to say:
    RequestDispatcher rd = request.getRequestDispatcher(fileName);
    rd.forward(request, response);
    Any other options? What's the preferred way of doing this?
    I know the rule of "don't optimize too early", but this is a situation where we need to get the maximum number of files served with the hardware we have, and it's going to be a lot of static files, so efficiency is important.
    Thanks

    Ok, that's what I thought. It would be nice if there were a "response.sendStream(InputStream input)" method in the ServletResponse class. Even nicer would be a sendFile or sendChannel or something. This is probably a common usage, and it's a place where the container has many opportunities for optimization. For example, it could call the operating system's send_file kernel call so the entire transfer would be done directly from the disk controller to the Ethernet card (on systems that support that).
    For now I'll just do my own buffered copy.
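    For reference, a minimal sketch of that buffered copy inside a servlet (the servlet name, path handling, and buffer size are illustrative, not from the thread):
    import java.io.*;
    import javax.servlet.ServletException;
    import javax.servlet.http.*;
    public class StaticFileServlet extends HttpServlet {
         protected void doGet(HttpServletRequest request, HttpServletResponse response)
                   throws ServletException, IOException {
              // Resolve the requested path to a file on disk (no validation shown here).
              File file = new File(getServletContext().getRealPath(request.getServletPath()));
              response.setContentLength((int) file.length());
              InputStream in = new BufferedInputStream(new FileInputStream(file));
              OutputStream out = response.getOutputStream();
              try {
                   byte[] buffer = new byte[8192]; // buffer size is an arbitrary, common choice
                   int read;
                   while ((read = in.read(buffer)) != -1) {
                        out.write(buffer, 0, read);
                   }
              } finally {
                   in.close();
              }
         }
    }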

  • Most efficient way to delete "removed" photos from hard disk?

    Hello everyone! Glad to have this great community to come to for help. I searched for this question but came up with no hits. If it's already been discussed, I apologize and would love to be directed to the link.
    My wife and I have been using LR for a long time. We're currently on version 4. Unfortunately, she's not as tech-savvy or meticulous as I am, and she has been unknowingly "Removing" photos from the LR catalogues when she really meant to delete them from the hard disk. That means we have hundreds of unwanted raw photo files floating around in our computer and no way to pick them out from the ones we want! As a very organized and space-conscious person, I can't stand the thought. So my question is, what is the most efficient way to permanently delete these unwanted photos from the hard disk
    I did find one suggestion that said to synchronize the parent folder with their respective catalogues, select all the photos in "Previous Import," and delete those, since they will be all of the photos that were previously removed from the catalogue.
    This is a great suggestion, but it probably wouldn't work for all of my catalogues since my file structure is organized by date (the default setting for LR). So, two catalogues will share the same "parent folder" in the sense that they both have photos from May 2013, but if I synchronize May 2013 with one, then it will get all the duds PLUS the photos that belong in the other catalogue.
    Does anyone have any suggestions? I know there's probably not an easy fix, and I'm willing to put in some time. I just want to know if there is a solution and make sure I'm working as efficiently as possible.
    Thank you!
    Kenneth

    I have to agree with the comment about multiple catalogs referring to images that are mixed in together... and the added difficulty that may have brought here.
    My suggestions (assuming you are prepared to combine the current catalogs into one)
    in each catalog, put a distinctive keyword onto all the images so that you can later discriminate these images as to which particular catalog they were formerly in (just in case this is useful information later)
    as John suggests, use File / "Import from Catalog" to bring all LR images together into one catalog.
    then in order to separate out the image files that ARE imported to LR, from those which either never were / have been removed, I would duplicate just the imported ones, to an entirely separate and dedicated disk location. This may require the temporary use of an external drive, with enough space for everything.
    to do this, highlight all the images in the whole catalog, then use File / "Export as Catalog" selecting the option "include negatives". Provide a filename and location for the catalog inside your chosen new saving location. All the image files that are imported to the catalog will be selectively copied into this same location alongside the new catalog. The same relative arrangement of subfolders will be created there, for them all to live inside, as is seen currently. But image files that do not feature in LR currently, will be left behind by this operation.
    your new catalog is now functional, referring to the copied image files. Making sure you have a full backup first, you can start deleting image files from the original location, that you believe to be unwanted. You can do this safe in the knowledge that anything LR is actively relying on, has already been duplicated elsewhere. So you can be quite aggressive at this, only watching out for image files that are required for other purposes (than as master data for Lightroom) - e.g., the exported JPG files you may have made.
    IMO it is a good idea to practice a full separation of image files used in your LR image library, from all other image files. This separation means you know where it is safe to manage images freely using the OS, vs where (what I think of as the LR-managed storage area) you need to bear LR's requirements constantly in mind. Better for discrete backup, too.
    In due course, as required, the copied image files plus catalog can be moved bodily to another drive (for example, if they have been temporarily put on an external drive, and you want to store them on your main internal one again). This then just requires a single re-browsing of their parent folder's location, in order to correct LR's records inside this catalog, as to the image files' changed addresses.
    If you don't want to combine the catalogs into one, a similar set of operations as above, can be carried out for each separate catalog you have now. This will create a separate folder structure in each case, containing just those duplicated image files. Once this has been done for all catalogs, you can start to clean up the present image files location. IMO this is very much the laborious and inflexible option, so far as future management of the total body of images is concerned... though there may still be some overriding reason for working that way.
    RP

  • IFRAME into iMOVIE - most efficient method for importing?

    What would be the most efficient method for importing iFrame movies from a camera into iMovie?
    iFrame is supposed to save time and let you work more efficiently in lieu of quality, but I don't seem to find a way to import the movies faster than in other formats.
    On a second note, importing into iMovie from DV (tape) cameras dramatically reduced the image quality. Do we still have the same issue when importing an iFrame movie?
    Thank you for your help!

    I'm completely new myself to importing iFrame into iMovie 11, as I only got my new Panasonic X920 camcorder 2 days ago. Can you please tell me, is there a big drop in quality from 1080/60p to iFrame quality?
