Parallel TDMS Logging

Good day all,
Just for a brief summary, I have a PCI-6280 M-series board using LV2010.
Currently, I have 4 differential AI inputs created on one task, and a linear encoder input using a counter on another task.  As you can probably guess, I would like to know the different AI measurements at every position reported by the linear encoder.  Since this is such a big task, I have decided to use the "DAQmx Configure Logging.vi" for logging into a TDMS file. 
However, I noticed that both tasks could not be written or "streamed" to the same file at the same time.  I read in this link that one way to do this is to concatenate the files together afterwards.  However, I was wondering whether this would affect how "synchronized" the position and measurements would be.  Is there a better way to do this?
Thanks
Lester
Solved!
Go to Solution.

Hey Lester,
Since all the concatenation is done as post-processing, it should not affect the synchronization of your data. Another forum thread you can check for information is here: http://forums.ni.com/t5/Multifunction-DAQ/Multiple-TDMS-DAQmx-streaming-to-the-same-TDMS-file/td-p/1....  Please let me know if you have any additional questions.
Regards,
Kevin
Product Support Engineer
National Instruments
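[Editor's note] If the two tasks share a sample clock (a common way to synchronize them), sample i of the counter file and sample i of the AI file were taken on the same tick, so the post-processing merge Kevin describes is just a column-wise join by sample index. A minimal sketch in Python, with hypothetical channel data standing in for what you would read back from the two TDMS files (e.g. with the npTDMS library):

```python
# Merge two equally-clocked streams (encoder positions and AI
# measurements) by sample index. Each row of the result pairs a
# position with the AI readings taken on the same clock tick.

def merge_by_sample(positions, ai_channels):
    """positions: list of floats; ai_channels: list of lists, one per AI channel.
    Returns a list of (position, (ai0, ai1, ...)) tuples, truncated to the
    shortest stream so a partially filled buffer cannot misalign the data."""
    n = min(len(positions), *(len(ch) for ch in ai_channels))
    return [(positions[i], tuple(ch[i] for ch in ai_channels))
            for i in range(n)]

# Hypothetical data: 4 encoder positions and 2 AI channels.
pos = [0.0, 0.1, 0.2, 0.3]
ai = [[1.0, 1.1, 1.2, 1.3],   # AI channel 0
      [5.0, 5.1, 5.2, 5.3]]   # AI channel 1
merged = merge_by_sample(pos, ai)
print(merged[1])   # (0.1, (1.1, 5.1))
```

Truncating to the shortest stream is a deliberate design choice: if one file got a few extra samples before its task stopped, silently pairing them with nothing would be worse than dropping them.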

Similar Messages

  • Position, Velocity, Acceleration From Single Encoder, With TDMS Logging

    I'm still a fairly new user when it comes to LabVIEW.  I started to feel comfortable enough to check out TDMS data logging and I'm kicking myself for not using it earlier.
    That said, I am currently reading angular position from an encoder, and estimating its velocity and acceleration from the sampling rate & encoder resolution.  I'm using an FGV to do all the calculations and I've fine-tuned it to be as accurate as I would like.
    What I would like to do is implement TDMS logging that records position, calculated velocity and acceleration with the Log and Read option of TDMS.  I'd like these to be synchronized with the encoder reads, and I would like them to be handled by a single DAQmx Read VI.  As far as I can see, though, you can only read the position information from the encoder.  I tried to drill down into the DAQmx Read.vi to create a modified version so that I could exploit the TDMS logging, but the subVI uses a Call Library Function node that is currently beyond my abilities. 
    I've been in MAX and I can't create any additional outputs for velocity and acceleration.  It seems like MAX takes enough information to create these approximations from the sample clock and the resolution of the encoder.  Why doesn't it (or if it does, how do I implement it)?

    What hardware are you using to read the encoder? I recently completed a similar project where the encoder was connected to a counter that was sampled whenever the count changed, so that it would only read on new positions. This worked reasonably well and seems like it should work for your case too.
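[Editor's note] Since MAX only exposes the raw position for a counter task, the velocity and acceleration estimates described above have to be computed in software, as the poster's FGV does. The core is just finite differences over the known sample interval; a minimal sketch, assuming a fixed dt from the sample clock and hypothetical position values:

```python
# Estimate velocity and acceleration from sampled positions by
# finite differences, given a fixed sample interval dt (seconds).

def finite_diff(samples, dt):
    # Forward difference: (x[i+1] - x[i]) / dt, one element shorter
    # than the input.
    return [(b - a) / dt for a, b in zip(samples, samples[1:])]

positions = [0.0, 1.0, 4.0, 9.0]          # hypothetical encoder positions
dt = 1.0                                   # hypothetical sample interval
velocity = finite_diff(positions, dt)      # [1.0, 3.0, 5.0]
acceleration = finite_diff(velocity, dt)   # [2.0, 2.0]
```

In a real rig dt = 1 / sample_rate, and the raw counts would be scaled by the encoder resolution first; smoothing (e.g. a moving average) is usually needed at high rates because differentiation amplifies quantization noise.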

  • Analog pulse, external trigger and TDMS logging

    I am working on a VI that will output an analog pulse, then log data when it receives an external trigger.  I have both parts working separately: I can output the pulse and read data when an external trigger is received (Pulse&Acq.jpg), and I can log data when an external trigger is received (TDMS Logging.jpg).  However, when I try to combine them I cannot import the resulting TDMS file (Pulse&Log.jpg).  I get the following error: USI encountered an exception: (326): Bulk:: GetValues failed for attribute.  Additionally, the importer opens two workbooks, which appear to be identical, and both have the USI error.
    Any ideas?  Thanks!
    Attachments:
    Pulse&Acq.jpg ‏138 KB
    TDMS Logging.jpg ‏39 KB
    Pulse&Log.jpg ‏130 KB

    Hello,
    For the bottom part (analog input triggered by a digital pulse and saved to TDMS), you start the task AFTER the pulse is sent (by the top part); if you place the start for the analog task before the while loop, I think it will work fine.
    EDIT : The thing that surprises me a bit is that the Pulse&Acq example works... maybe I'm just all wrong...
    Hope this helps
    When my feet touch the ground each morning the devil thinks "bloody hell... He's up again!"

  • Wait Events "log file parallel write" / "log file sync" during CREATE INDEX

    Hello guys,
    On my current project I am performing some performance tests for Oracle Data Guard. The question is: "How does a LGWR SYNC transfer influence the system performance?"
    To get some performance values that I can compare, I first built up a normal Oracle database.
    Now I am performing different tests, like creating "large" indexes, massive parallel inserts/commits, etc., to get the benchmark.
    My database is Oracle 10.2.0.4 with multiplexed redo log files on AIX.
    I am creating an index on a "normal" table. I execute "dbms_workload_repository.create_snapshot()" before and after the CREATE INDEX to get an equivalent timeframe for the AWR report.
    After the index is built (roughly 9 GB), I run awrrpt.sql to get the AWR report.
    And now take a look at these values from the AWR
    Event                      Waits  %Time-outs  Total Wait Time (s)  Avg wait (ms)  Waits/txn
    log file parallel write   10,019          .0                  132             13       33.5
    log file sync                293          .7                    4             15        1.0
    How can this be possible?
    According to the documentation:
    -> log file sync: http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/waitevents003.htm#sthref3120
       Wait Time: The wait time includes the writing of the log buffer and the post.
    -> log file parallel write: http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/waitevents003.htm#sthref3104
       Wait Time: Time it takes for the I/Os to complete. Even though redo records are written in parallel, the parallel write is not complete until the last I/O is on disk.
    This was also my understanding: the "log file sync" wait time should be higher than the "log file parallel write" wait time, because it includes the I/O plus the response time to the user session.
    I could accept it if the values were close to each other (maybe about 1 second in total), but the difference between 132 seconds and 4 seconds is too noticeable.
    Is the behavior of log file sync/write different when performing a DDL like CREATE INDEX (maybe async, like you can influence with the initialization parameter COMMIT_WRITE)?
    Do you have any idea how these values come about?
    Any thoughts/ideas are welcome.
    Thanks and Regards

    Surachart Opun (HunterX) wrote:
    > Thank you for the nice idea.
    > In this case, how can we reduce the "log file parallel write" and "log file sync" wait time?
    > CREATE INDEX with NOLOGGING can help, can't it?
    Yes - if you create the index NOLOGGING then you wouldn't be generating that 10GB of redo log, so the waits would disappear.
    Two points on nologging, though:
    - It's "only" an index, so you could always rebuild it in the event of media corruption; but if you had lots of indexes created nologging this might cause an unreasonable delay before the system was usable again - so you should decide on a fallback option, such as taking a new backup of the tablespace as soon as all the nologging operations have completed.
    - If the database, or that tablespace, is in "force logging" mode, the nologging will not work.
    Don't get too alarmed by the waits, though. My guess is that the "log file sync" waits are mostly from other sessions, and since there aren't many of them the other sessions are probably not seeing a performance issue. The "log file parallel write" waits are caused by your create index, but they are happening to LGWR in the background, which is running concurrently with your session - so your session is not (directly) affected by them, and may not be seeing a performance issue.
    The other sessions are seeing relatively high sync times because their log file syncs have to wait for one of the large writes that you have triggered to complete, and then the log writer includes their (little) writes with your next (large) write.
    There may be a performance impact, though, from the pure volume of I/O. Apart from the I/O to write the index, you have LGWR writing (N copies of) the redo for the index, and ARCH is reading and writing the completed log files caused by the index build. So the 9 GB of index could easily be responsible for vastly more I/O than the initial 9 GB.
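    [Editor's note] To put rough numbers on the last point: under illustrative assumptions (the 9 GB index generates roughly 9 GB of redo, there are two multiplexed redo members, and ARCH reads each completed log once and writes one archived copy), the redo-related I/O alone already dwarfs the index write:

```python
# Back-of-the-envelope I/O volume for a 9 GB logged index build.
# All figures are illustrative assumptions, not measurements.
index_gb = 9       # datafile writes for the index itself
redo_gb = 9        # redo generated (assumed ~ index size)
redo_members = 2   # multiplexed redo log copies (LGWR writes each)

lgwr_io = redo_gb * redo_members   # 18 GB written by LGWR
arch_io = redo_gb + redo_gb        # ARCH reads 9 GB, writes 9 GB
total_io = index_gb + lgwr_io + arch_io
print(total_io)    # 45 (GB) - five times the size of the index itself
```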
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
    "Science is more than a body of knowledge; it is a way of thinking"
    Carl Sagan

  • DaqMX & TDMS Logging - No Datestamp and dt until Close

    In our application, we are continuously logging data to a TDMS file using the "DAQmx Configure Logging (TDMS).vi". However, there are times when we need to look at the TDMS data while the application is still running (i.e. when a failure is found). What I've noticed is that the initial timestamp and dt are missing from the TDMS file until the "Stop and Clear Tasks.vi" is called, which in our application only occurs when the application is being terminated.
    In other words, when I go to look at the TDMS data while the app is running, the TDMS File Viewer shows the time starting in year 1903 and the wrong sample width until after the application exits, and then it shows the correct time and sample time. How can this be fixed so that the initial timestamp appears while the app is running?

    Duplicate:
    http://forums.ni.com/t5/LabVIEW/DaqMX-Access-TDMS-dt-and-timestamp-Before-quot-Stop-Task-quot-is/m-p...
    Unofficial Forum Rules and Guidelines - Hooovahh - LabVIEW Overlord
    If 10 out of 10 experts in any field say something is bad, you should probably take their opinion seriously.

  • Setting the digits of precision for TDMS logging

    I can't seem to figure out a way to set the digits of precision when logging data to a TDMS file. I'm hoping to be able to reduce the file size by doing this. Any suggestions?
    Thanks,
    Cosimo
    Solved!
    Go to Solution.

    The data in a TDMS file is binary, so you can't set the digits of precision.  If you want to make smaller files, cast your data to singles instead of doubles.  You will have half the precision, but use half the bytes on disk.
    There are only two ways to tell somebody thanks: Kudos and Marked Solutions
    Unofficial Forum Rules and Guidelines
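[Editor's note] The size argument is easy to verify: an IEEE-754 double occupies 8 bytes per sample and a single occupies 4, so the cast halves the raw data size regardless of how many "digits" the values carry. A quick check with Python's struct module (using standard-size format codes):

```python
import struct

# A double (float64) is 8 bytes on disk, a single (float32) is 4,
# so casting before logging halves the raw data size.
bytes_double = struct.calcsize('<d')   # 8
bytes_single = struct.calcsize('<f')   # 4

n_samples = 1_000_000
print(n_samples * bytes_double)   # 8000000 bytes as doubles
print(n_samples * bytes_single)   # 4000000 bytes as singles
```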

  • TDMS log last N hours

    Hi, I'm working on a project where we are acquiring data over a very long time (up to 60 days). I need to log the last 2 hours to the hard drive.
    Under normal conditions I need to save 5 minutes of data every 2 hours. In case of failure I need to save the last 2 hours. (The acquisition frequency is low: 20 Hz.)
    Is it possible to use TDMS recording to achieve this? At first I was thinking of limiting the samples per file, then extracting data from the TDMS files to create a new one.
    Regards

    Hello, it seems to me that what you need is some sort of circular buffer 144,000 elements long (7200 sec * 20 Hz). TDMS is not the proper instrument to achieve this, since you cannot delete elements from the file once written; I suggest you keep all the data in memory instead, and dump the whole set of data to disk only in case of failure, in a format of your choice.
    A good and simple way of storing data in memory for this purpose is to have a 144,000-element array, properly initialized at program start; when you need to record a measurement, you shift all elements one place to the right and store the actual measurement in element 0. That way your data will always be stored in the array sorted from the most recent backwards with increasing index.
    Proud to use LW/CVI from 3.1 on.
    My contributions to the Developer Zone Community
    If I have helped you, why not giving me a kudos?
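[Editor's note] Rather than shifting 144,000 elements on every sample, a circular (ring) buffer overwrites the oldest element in place. In Python the same idea is a one-liner with collections.deque and maxlen; a minimal sketch of the last-2-hours buffer (length assumes 2 h at 20 Hz):

```python
from collections import deque

BUFFER_LEN = 2 * 3600 * 20   # 2 hours at 20 Hz = 144000 samples

last_two_hours = deque(maxlen=BUFFER_LEN)

def record(sample):
    # Appending to a full deque silently drops the oldest element,
    # so the deque always holds the most recent BUFFER_LEN samples.
    last_two_hours.append(sample)

# Simulate 150000 samples; only the newest 144000 survive.
for i in range(150_000):
    record(i)
print(len(last_two_hours))   # 144000
print(last_two_hours[0])     # 6000 (oldest retained sample)
```

On failure you would dump the deque's contents to disk in whatever format you like; until then, nothing is written and each append is O(1).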

  • Tdms logging stop time

    Hi, 
    I'm using the DAQmxConfigureLogging function and it works very well for my application. But I'd like to know if it is possible to automatically log wf_stop_time alongside wf_start_time in the TDMS file. Should I use the TDMS library and manage this information in software? Thanks
    Solved!
    Go to Solution.

    CVI equivalent of the function mentioned by A.P. is TDMS_SetFileProperty, which implies opening the file after acquisition is done and updating the property set.
    Nevertheless, in my opinion this is not needed, as you can calculate the end time as wf_start_time + Length * wf_increment (in seconds).
    Proud to use LW/CVI from 3.1 on.
    My contributions to the Developer Zone Community
    If I have helped you, why not giving me a kudos?
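[Editor's note] The calculation suggested above is straightforward; note that the timestamp of the last sample is start + (N - 1) increments, while start + N increments marks the end of the record. A small sketch with Python's datetime, using illustrative values:

```python
from datetime import datetime, timedelta

def stop_time(wf_start_time, n_samples, wf_increment):
    # Timestamp of the last sample: start plus (N - 1) sample intervals.
    return wf_start_time + timedelta(seconds=(n_samples - 1) * wf_increment)

start = datetime(2012, 5, 1, 12, 0, 0)   # hypothetical wf_start_time
last = stop_time(start, n_samples=1000, wf_increment=0.001)
print(last)   # 2012-05-01 12:00:00.999000
```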

  • Producer-Consumer, TDMS Log on Button

    The program I've enclosed does what I need it to do quite well for the most part. I'm asking this question because I'm getting more data than I want and it's inconvenient to work with. 
    In short, I have a system set up to record pressure data from an explosion in a very short time period; the entire event is over in less than 100 ms, and it typically occurs between 1 and 2 seconds after the ignition signal goes out. Therefore I want to record about 2 seconds of data total. The system has to run in a monitor mode for several minutes before any ignition while I fiddle with various elements associated with the experiment. It DOES NOT record during the monitoring phase, but it must be running so I can fiddle with the experiment. 
    I think I have two small issues that are increasing the size of my TDMS file.
    When I hit the big red "STOP" button the system continues to record data for several seconds. I think it has something to do with the queue not being empty. It appears to be stopping at the correct relative time, but the data hasn't been streamed to disk yet. I can live with this but I suspect it's indicative of a bigger problem.
    When I hit "Record" I have a 5s delay built in to engage a FLIR camera which has a variable startup delay (red circle in picture, record button not shown). This shouldn't get recorded because a "true" signal doesn't go to the record case (blue circle) until after the 5s has elapsed.
    Example dataset below. To generate this data I started the VI and let it run for 30 seconds (I didn't have to let it run but that is more like a "real" test); after roughly 30 seconds I hit record and it started the 5 second countdown to ignition. I observed the ignition signal and waited to hit "stop" at roughly the 1 second mark on the count up. The VI continued to run until the timer read 8.32 seconds. The signals I recorded are just the TTL pulses that would normally go to my cameras; everything else is unplugged. Anyway, it looks like the system stopped acquiring data when I hit stop, but the fact that it kept running tells me my queue didn't empty. 
    In the sample data the white line is the FLIR TTL signal. It lasts about 1 second (recording at 100 kHz), it should be a 1 second square pulse (or 100000 points), looks like the front got cut off. Sometimes it gets the whole pulse, sometimes it completely misses it. I think it has something to do with when I hit record relative to what is happening with the buffer. 
    The blue square wave that starts at 5 seconds is the 1/2 second TTL pulse that goes out to the high speed cameras and ignites the explosion. This is actually where the recording should start, where 0 seconds should be. I'm not certain why I'm getting the precursor data. In other words, all the data to the left of 500000 (5s) should not be recorded. The data ends just shy of 1/2 second (~50k points) after the blue pulse ends. This is right around the 1 second mark I tried to hit when I generated this data.
    What is the result? Well, I'm recording 10 channels at 100 kHz, so I end up with a 50 MB file instead of a 10 MB file, plus an extra 500k points that I have to strip off. This is tedious and time consuming, and it really adds up considering I'll be doing this testing hundreds or thousands of times.
    My questions: How do I delay recording until after that first delay is up? What am I missing? Is there some sort of a "TDMS clear queue" command?
    Thanks for your time!
    Attachments:
    HUCTA Controls 10 Channels Queue - Copy (3).vi ‏194 KB
    HUCTA_Controls_10_Channels_Queued.png ‏122 KB
    HUCTA_Controls_10_Channels_Queuep.png ‏28 KB

    Skinnert,
    Your program has many deficiencies that will make it very difficult to get it to execute correctly.  The architecture is linear and its flow is hard to follow.  Adding to the issues is the lack of wire labels and comments.
    The program is flawed from the first step.  You open 10 analog channels of DAQ, but you are only saving a reference to the last channel opened.  It may be working by some magic of LabVIEW, but it is not recommended.
    I would like to help out, but there is just too much to take in to give a good answer.  The goal of the forums is to help people out.  In your case you need to simplify the code, and boil the question you post on the forums down to a paragraph or at most two.  There is just too much for me to provide an answer.  Ask a better question and you will get a better answer.
    Good luck!
    Matthew Fitzsimons
    Certified LabVIEW Architect
    LabVIEW 6.1 ... 2013, LVOOP, GOOP, TestStand, DAQ, and Vision
    Attachments:
    DAQopen10channels.JPG ‏72 KB
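[Editor's note] One way to attack both problems in the question above (pre-trigger data being logged, and the queue continuing to drain into the file after stop) is to timestamp each queued block in the producer, and have the consumer write only blocks inside the record window, discarding everything else. This is a generic producer/consumer sketch in Python, not the poster's VI; all names and values are illustrative:

```python
from queue import Queue, Empty

q = Queue()

def producer(blocks):
    # The producer stamps each data block with its acquisition
    # time in seconds before queueing it.
    for t, block in blocks:
        q.put((t, block))

def consume(record_start, stop_at):
    """Write only blocks acquired in [record_start, stop_at);
    anything else still queued is drained and discarded, so no
    pre-trigger or post-stop data reaches the file."""
    written = []
    while True:
        try:
            t, block = q.get_nowait()
        except Empty:
            break
        if record_start <= t < stop_at:
            written.append(block)   # would be a TDMS write in the VI
    return written

# Hypothetical blocks: pre-trigger data (t < 5) must not reach the file.
producer([(4.0, 'pre'), (5.0, 'ignition'), (5.5, 'event'), (8.0, 'late')])
out = consume(record_start=5.0, stop_at=6.0)
print(out)   # ['ignition', 'event']
```

The same filtering idea answers the "TDMS clear queue" question: on stop, drain the queue without writing instead of flushing it to disk.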

  • Free memory after TDMS logging

    Hi, how can I free memory after reading TDMS files? I have a very big TDMS file that contains 17,000,000 digital samples; after reading the data and graphing it, I would like to delete it from memory because it is no longer needed. I have noticed that I have to close LabVIEW to free that space. Is there any way to free that memory after reading and plotting?
    Thanks

    LabVIEW will take care of the memory management, so there is no explicit way for you to free memory in LabVIEW. In fact, I don't think you should do that yourself. LabVIEW has a quite good memory optimization strategy: it will track all the memory usage in your VI and reuse/free memory when there is no reference to it. As for TDMS, once the data is read out from the TDMS VIs, there will be no additional data copy inside.

  • WARNING TDMS memory leak in LV 2010

    Hopefully this will save someone the headache that I've been through the last couple of days.  I have a very large application that runs a final verification test on a production line.  In my testing I noticed a memory leak in the application, and after 2 days of debugging discovered that the TDMS logging is the culprit.  This is very disappointing since I am (I mean was) a huge fan of the TDMS file format and the LabVIEW functions.  The attached VI reproduces the leak by checking and unchecking the Memory Leak checkbox.  It's a bit ugly, but I was just copying and pasting the sections from my application and trying to reproduce the issue.  Luckily for me, in this particular application I was only writing to the TDMS log files, so I was able to eliminate the problem by switching to the gTDMS versions of the write functions.  I found these referenced in another post about a TDMS memory leak, but in that case the leak was caused by the indexing and the fact that the SAME file was continuously written to over a very long period.  As you can see, in my case a log file is opened and closed for each "Test".
    gTDMS link
    Thanks,
    Brian
    Brian Gangloff
    DataAct Incorporated
    Attachments:
    TDMS Memory testing.vi ‏31 KB

    YongqingYe wrote:
    Hi Brian,
    I'm one of the developers of TDMS in NI R&D. Well, this is a problem with TDMS that some customers have complained about. The reason you see the "memory leak", or the memory usage increase, is that TDMS needs to bookkeep some information in memory, and as you write more and more data values, the information we keep in memory keeps increasing.
    There are some workarounds; gTDMS is probably one of them, although the original purpose of creating gTDMS was to support writing TDMS files on Linux, Mac and other platforms:
    Using the "NI_MinimumBufferSize" property on channels - you can find the details in the help documentation of TDMS Set Property. It cannot eliminate this problem, but it reduces the memory usage significantly. Normally we would set it to 1,000 to 10,000.
    From LV 2009, if you always write to the file with the same layout - the same channels and the same number of data values - you will not see the memory increase.
    If you are using LV 2010 or later, you can try the TDMS Advanced API; this API does not have any memory growth problem at all.
    Thank you!
    Yongqing Ye
    NI R&D
    Hello Yongqing,
    Apparently you did not bother to look at the examples that I provided or read any of the description either.  As Hooovahh has already pointed out, INDEXING is NOT the issue.  The example writes an array to multiple channels ONE time and then the reference is CLOSED.  In the case that does NOT leak, there are multiple waveform arrays written to the file which would require some indexing but the memory does NOT increase.  The problem is when an array of strings is written to multiple channels and the reference is CLOSED.  Unfortunately this type of quick assumption about the problem is why the real issue was overlooked back in 2009.
    Thanks,
    Brian
    Brian Gangloff
    DataAct Incorporated

  • TDMS in same system

    Hello TDMS Group Members,
    We had setup TDMS environment,
    Our BI 7.0 sandbox system is central system.
    Production copy sandbox client 100 is sender
    Production copy sandbox client 120 is receiver
    Both sender and receiver are on same ECC 6.0 system.
    Is there any issue foreseen, or any risk to the sender client's data?
    Any information would be highly appreciated.
    Thank you,
    AKL

    Hi Pankaj,
    I want to ask you about TDMS logging activities, if you could help me.
    I have written my own program for creating conversion scrambling rules, and when I run it in the TDMS interface I want to log this activity (e.g. which user, time, date) just as the SAP default logging protocols do. I have been going through the program cnv_mbt_def_046 again and again, but so far I have not been able to work out how the mechanism works. Could you please help me log all the activities the way the SAP default logging procedure does?
    Thanks in advance,
    Thanks in advance,
    regards,
    Umer Malik

  • How do you export LabVIEW SignalExpress Log files into MatLab?

    I am using SignalExpress to capture vibration and acoustic data.  I need to show the results to a customer that doesn't have access to LabVIEW.  They have MATLAB, so I need to find a way to export the SignalExpress log file into a usable format that I can import into MATLAB.

    SignalExpress log files are in TDMS format.  You also have the option to convert your log to text when you finish it.  So you have two options:
    Send the original TDMS log and have your colleague use the MATLAB plugin which allows reading of TDMS files.
    Convert the log to text and send the text files.
    The first option is probably your best bet, since it will result in smaller files.  Let us know if you run into issues.
    This account is no longer active. Contact ShadesOfGray for current posts and information.

  • Linear Encoder and Configure Logging

    Hi All,
    I am currently using a PCI-6280 M-series board with LV2010.
    I have a Linear Encoder in X4 mode connected to Counter 0 and I was wondering if it is possible to stream the values on the counter as it keeps track of the distance into a TDMS file using "DAQmx Configure Logging" .  I tried this using MAX to configure this, but it does not seem to work, so I'm just wondering if this is even possible as I couldn't seem to find anything online about doing this.
    Thanks,
    Lester
    Solved!
    Go to Solution.

    Hi Lester,
    My apologies, I tried it out on a different board. It can work with yours, too, though. There is an example of a buffered counter task that should allow you to log the data by adding in the Configure Logging VI like we did before.
    Open LabVIEW and go to Help >> Find Examples... it will open the Example Finder. In the Example Finder, expand the folders to get to Hardware Input and Output >> DAQmx >> Counter Measurements >> Count Digital Events >> Count Digital Events-Buffered-Continuous-Ext Clk.vi
    Put the correct counter channel and PFI channel in the controls, and run the VI to see what it does. Then you can modify the block diagram to add the Configure Logging VI (the same way as before) and run the VI again. You will end up with a TDMS log saved at the path you wire into the Configure Logging VI.
    I hope that helps.
    Regards,
    Daniel H.
    Customer Education Product Support Engineer
    National Instruments
    Certified LabVIEW Developer

  • Multi-task data logging with DAQmx

    I was wondering: is it possible to use the 'DAQmx Configure Logging' VI and 'DAQmx Start New File' VI for multiple tasks?  I'm doing synchronized high speed DAQ with NI PXI-6133 cards.  Each card (there are 16) must have its own task.  Although the DAQ is continuous, the user (via a software trigger) determines when data is saved to disk and for how long.
    In my scenario the test length could be up to an hour, with various test events scattered throughout.  The users want to display the data during the entire test length.  However, they only want to write the data to disk during an event.  The event could last from 10 seconds to 1 minute.  That is why the users want to control when data is written to disk.
    DAQmx Logging seems to work for a single task only, but I need multiple tasks.

    I've attempted to implement your suggestion, but I still do not acquire data for all channels for all tasks.  I've enclosed my VI.
    Attachments:
    TDMS Logging with Pause LoggingFSPR.vi ‏55 KB
