Continuous Data Logging with NI 9236 and cRIO-9076 with FPGA
Hey all,
I'm a beginner in LabVIEW/FPGA. My goal is to
continuously acquire and log data. I have an NI 9236 with CH0
connected to a strain gauge, and a cRIO-9076.
I've written some code, and I can see the incoming data on the FPGA VI.
On the host VI, however, no data comes out of the FIFO.
There are no error messages, and the compilation finishes without errors.
Do I have a timing problem? Where is the big mistake?
Thank you!
Attachments:
1.jpg 98 KB
2.jpg 71 KB
4.jpg 337 KB
In your first image, one problem is that you are starting the module on each iteration of the loop. I can't tell how your FIFO is configured, but take a look at the example "Hardware Input and Output >> CompactRIO >> Module Specific IO >> Analog Input >> NI 923x Continuous DMA.lvproj". I don't know which LabVIEW version you are using, but I found this example in LabVIEW 2012.
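For intuition, the DMA-FIFO pattern that example demonstrates is essentially a producer-consumer hand-off: the FPGA loop is started once and pushes samples, while the host loop must keep draining the FIFO or it overflows and the host sees no usable stream. A rough plain-Python analogy (not LabVIEW; the FIFO depth and sample count are made up):

```python
# Producer-consumer sketch of the DMA-FIFO pattern (plain Python, not LabVIEW).
import queue
import threading

fifo = queue.Queue(maxsize=1023)  # stand-in for a host-side DMA FIFO depth

def fpga_loop(n_samples):
    # Producer: started ONCE, then free-runs (like starting the module
    # once before the loop, not on every iteration).
    for i in range(n_samples):
        fifo.put(float(i))        # blocks if the host falls behind

def host_loop(n_samples):
    # Consumer: the host VI must read continuously, as FIFO.Read does.
    received = []
    while len(received) < n_samples:
        received.append(fifo.get())
    return received

t = threading.Thread(target=fpga_loop, args=(5000,))
t.start()
data = host_loop(5000)
t.join()
print(len(data))  # 5000: every sample crossed the FIFO
```

If the consumer stops reading (or the producer is restarted every iteration), the hand-off stalls, which is the symptom described above.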
Similar Messages
-
Multi-task data logging with DAQmx
I was wondering: is it possible to use the 'DAQmx Configure Logging' VI and the 'DAQmx Start New File' VI for multiple tasks? I'm doing synchronized high-speed DAQ with NI PXI-6133 cards. Each card (there are 16) must have its own task. Although the DAQ is continuous, the user (via a software trigger) determines when data is saved to disk and for how long.
In my scenario the test length could be up to an hour, with various test events scattered throughout. The users want to display the data during the entire test, but they only want to write data to disk during an event. An event could last from 10 s to 1 min. That is why the users want to control when data is written to disk.
DAQmx logging seems to work for a single task only, but I need multiple tasks. I've attempted to implement your suggestion, but I still do not acquire data for all channels of all tasks. I've enclosed my VI.
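One workaround is to keep acquiring continuously and gate only the disk writes on the software trigger, one gate per task. A plain-Python sketch of that structure (read_chunk is a hypothetical stand-in for a per-task hardware read, not a real DAQmx API):

```python
# Event-gated logging sketch: always acquire (display path), but append
# to the log only while the software-trigger "event" flag is active.
def read_chunk(task_id, n):
    # Hypothetical placeholder for a per-task hardware read.
    return [0.0] * n

def run_session(task_ids, n_iterations, event_active, log):
    for _ in range(n_iterations):
        for tid in task_ids:
            chunk = read_chunk(tid, 100)   # always acquire for display
            if event_active():             # software-trigger gate
                log.setdefault(tid, []).extend(chunk)  # disk path

log = {}
run_session([0, 1], 10, lambda: True, log)
print(len(log[0]))  # 1000 samples logged for task 0
```

Because each task has its own gate, this avoids the single-task limitation, at the cost of doing the file writes yourself instead of through the built-in logging VIs.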
Attachments:
TDMS Logging with Pause LoggingFSPR.vi 55 KB -
Pulse-width data logging with a counter channel
Hello everyone,
I have a question about logging pulse durations, measured with a counter input, to a .lvm file.
The pulse signal has a frequency of 100 Hz, with the pulse duration varying from 2 to 8 ms.
I want to log the pulse duration to a .lvm file.
When running the VI (screenshot below), I find the logged data contains duplicate time-stamp entries.
I want to log the pulse-duration data on every rising or falling edge of the pulse,
so if I run the VI for 20 s I should have 2000 pulse-duration readings in my .lvm file, with the appropriate times in the time column.
Attachments:
pulsewidth using counter.jpg 659 KB
Hello Karthik,
I can think of two ways I would approach this. One would use software timing, and the other would use hardware timing.
1. Software timing: This might be the easiest approach. The idea is to use your example much as it is written, then use a loop with a delay as the "Time between samples" to call your counting code over and over. In this case, I have a few suggestions for your code. First, instead of the loop, you could start the counter, use a delay in a sequence structure (use "Wait (ms)", not "Wait Until Next ms Multiple") for your "Sampling time", then read the counter and stop. After the stop you'll need another "Wait (ms)" in a sequence structure for your "Time between samples". Finally, wrap a loop around all of this to repeat it. One advantage of this approach is that the user can change the time between samples on the fly.
2. Hardware timing: The idea behind this would be to use two counters. The first would count; the output of the second would be wired to the gate of the first to control when the first counts. The second counter would be programmed to output a pulse train where the positive portion of the pulse would be your "Sampling time" and the negative portion would be your "Time between samples". The creative part of this approach is figuring out when to read the count. One way to do this would be to also connect the gate of the second counter to an analog input, then read the count whenever the input goes low (say, below 1 volt). This approach might be the more accurate of the two. However, you would always be getting a total count, so you would have to subtract the previous count each time. Also, you would not be able to change your sampling time or between time once you start.
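Approach 1 can be sketched as a simple loop. In plain Python (start_counter, read_counter, and stop_counter below are hypothetical placeholders for the actual counter calls, not real driver functions):

```python
# Software-timed counter sampling: start, wait the sampling time, read
# and stop, wait the time between samples, repeat.
import time

def sample_loop(start_counter, read_counter, stop_counter,
                sampling_time_s, between_time_s, n_samples):
    readings = []
    for _ in range(n_samples):
        start_counter()
        time.sleep(sampling_time_s)   # "Wait (ms)" for the sampling time
        readings.append(read_counter())
        stop_counter()
        time.sleep(between_time_s)    # "Wait (ms)" for time between samples
    return readings

# Demo with dummy counter functions:
count = [0]
r = sample_loop(lambda: None, lambda: count[0], lambda: None,
                0.001, 0.001, 3)
print(len(r))  # 3
```

Because both waits are in software, the timing jitter is at the mercy of the operating system, which is why approach 2 is likely more accurate.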
Finally, I'd suggest looking at some examples; you can usually find code to help you.
Best Regards,
Doug Norman -
I need timed data logging with continuous graphing
I am graphing continuous temperature measurements, but I only want to log measurements to a spreadsheet once every hour. How do I do that?
Hello,
In order to log the measurements every hour, you will want to implement a "Wait (ms)" function. This function can be added to the block diagram by [right-clicking] and selecting the following:
[All Functions] >> [Time & Dialog] >> [Wait (ms)]
You will want to place this wait function inside your code and wire a constant to the wait function with a value of 3,600,000 (60min/hr x 60sec/min x 1000ms/s). To wire a constant to the wait function, [right-click] on the input terminal of the "wait (ms)" icon and select [Create]>>[Constant].
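A non-blocking variant of the same idea, useful when the graph must keep updating while you wait, is to check elapsed time on every iteration and log only when the period has passed. A plain-Python sketch (the measurement and the clock are stand-ins; a fake clock is injected so the demo runs instantly):

```python
# Log once per period while the display updates every iteration.
import time

def run(n_iterations, log_period_s, now=time.monotonic):
    logged = []
    last_log = -float("inf")
    for i in range(n_iterations):
        measurement = i * 0.1          # stand-in for a temperature read
        # ... update the graph every iteration here ...
        t = now()
        if t - last_log >= log_period_s:
            logged.append(measurement)  # write one row to the spreadsheet
            last_log = t
    return logged

class FakeClock:
    """Pretend each loop iteration takes one second."""
    def __init__(self):
        self.t = 0.0
    def __call__(self):
        self.t += 1.0
        return self.t

# Two simulated hours with a 3600 s log period:
rows = run(n_iterations=7200, log_period_s=3600.0, now=FakeClock())
print(len(rows))  # 2
```

With a real clock and a 3,600,000 ms period this logs hourly without ever freezing the loop for an hour at a time.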
I hope this helps. Please let me know if I can further assist you.
Kind Regards,
Joe Des Rosier
National Instruments -
Continuous data generation with VI simulation.
Is there any method that can generate data continuously? I'm already seeing the data for the 18,000 samples that I have in a waveform graph.
I am attaching my simulation of reading a spreadsheet (a text file with tabs) and the data that I'm feeding it.
All I want to do is generate the data continuously by making changes to the simulation.
Is it possible with looping?
Please share your ideas with me.
Bharath.
Attachments:
bharath_22.vi 13 KB
data_excel2.txt 190 KB
Hi bharath_reddy,
I was reading your question and I am not exactly sure that I understand what you are asking. Are you trying to bring in data from a spreadsheet file and load it onto a waveform graph? That is what your example VI looks like it is trying to do. Or are you trying to continuously acquire data coming from a spreadsheet file, and you want it to keep acquiring the new data? If so, you will probably have to finish writing the data to the spreadsheet before it can be loaded into LabVIEW. There are some great examples of file input and output in the NI Example Finder. Have you taken a look at those?
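If the goal is simply to keep feeding the graph forever from a finite file, one option is to read the rows once and then cycle through them inside the loop, handing one chunk to the graph per iteration. A plain-Python sketch of that idea (the row values and chunk size are made up):

```python
# "Continuous generation" from a finite data set: read once, cycle forever.
from itertools import cycle, islice

rows = [float(i) for i in range(18000)]   # stand-in for the spreadsheet data
stream = cycle(rows)                      # wraps around past the last row

# Each loop iteration would push one chunk to the waveform graph:
chunks = [list(islice(stream, 1000)) for _ in range(20)]
print(len(chunks), len(chunks[0]))  # 20 1000
```

Note that 20 chunks of 1000 samples is 20,000 values, so the stream has already wrapped past the 18,000-sample file, which is exactly the looping behavior being asked about.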
National Instruments
Applications Engineer -
Composite continuously filling logs with endless retrying
Hi All,
I have a composite that enqueues messages into MQ using the native MQ adapter. Everything was working fine until one day network connectivity to MQ was lost due to firewall changes. As no connection could be established by the composite, I started getting Connection Timeout exceptions in the logs, and it has been filling the logs continuously ever since. How do I stop it?
I deleted the JNDI from WebLogic, tried shutting down the composite, and changed the JNDI hosts to blank, but all in vain.
Could you please tell what's happening behind the scenes?
- SOA Suite 11.1.1.5
- Simple BPEL composite
- Synchronous process with just enqueue operation
Regards,
Neeraj Sehgal
Hi,
Sorry for the delay. I tried undeploying as well. Later I went ahead and removed the host from the WebLogic JNDI to prevent it from attempting the connection, but after that the error changed to the one below.
[2012-07-20T03:54:26.950-05:00] [soa_server1] [WARNING] [] [oracle.soa.adapter] [tid: weblogic.work.j2ee.J2EEWorkManager$WorkWithListener@19c2817f] [userId: ohsadmin] [ecid: 79c1912e960c7e6d:-1ffd1268:1375b34bd87:-8000-0000000000593a78,0] [APP: soa-infra] MQSeriesAdapter InsertMessageIntoMQ:EnqueueMessageInMQ [ Enqueue_ptt::Enqueue(opaque) ] [QueueProcessor] Exception caught in while loop
[2012-07-20T03:54:26.950-05:00] [soa_server1] [WARNING] [] [oracle.soa.adapter] [tid: weblogic.work.j2ee.J2EEWorkManager$WorkWithListener@19c2817f] [userId: ohsadmin] [ecid: 79c1912e960c7e6d:-1ffd1268:1375b34bd87:-8000-0000000000593a78,0] [APP: soa-infra] MQSeriesAdapter InsertMessageIntoMQ:EnqueueMessageInMQ [ Enqueue_ptt::Enqueue(opaque) ] [[
javax.resource.spi.IllegalStateException: [Connector:199176]Unable to execute allocateConnection(...) on ConnectionManager. A stale Connection Factory or Connection Handle may be used. The connection pool associated with it has already been destroyed. Try to re-lookup Connection Factory eis/MQ/BBW_MAXIMO_CORE from JNDI and get a new Connection Handle.
at weblogic.connector.outbound.ConnectionManagerImpl.checkIfPoolIsValid(ConnectionManagerImpl.java:442)
at weblogic.connector.outbound.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:109)
at oracle.tip.adapter.mq.ConnectionFactoryImpl.getConnection(ConnectionFactoryImpl.java:120)
at oracle.tip.adapter.mq.inbound.QueueProcessor.checkForNewConnection(QueueProcessor.java:218)
at oracle.tip.adapter.mq.inbound.NonManagedQueueProcessor.startTransaction(NonManagedQueueProcessor.java:51)
at oracle.tip.adapter.mq.inbound.QueueProcessor.run(QueueProcessor.java:245)
at oracle.integration.platform.blocks.executor.WorkManagerExecutor$1.run(WorkManagerExecutor.java:120)
at weblogic.work.j2ee.J2EEWorkManager$WorkWithListener.run(J2EEWorkManager.java:183)
at weblogic.work.DaemonWorkThread.run(DaemonWorkThread.java:30)
I further deleted the JNDI too, but the above error is still filling the logs. I think some sort of retry mechanism is continuously trying to connect. There is no process and no JNDI any more, but it is still retrying.
Please help me stop this.
Regards,
Neeraj Sehgal -
I'm acquiring 64 channels at a sampling rate of about 1450 kHz continuously and displaying the data second by second in a CWGraph. Now I would like to add on-line averaging for all 64 channels and show it in another CWGraph. The averaging should occur with respect to an external trigger, and each sample should consist of about a 100 ms prestimulus and a 500 ms poststimulus period. Has anybody handled this sort of thing? How is it solved with ComponentWorks and Visual Basic?
I'm using dual Pentium 700 MHz processors and an NI-DAQ PCI-6033E card.
1. To get started with external triggering, check out the examples in the Triggering folder that install with CW under MeasurementStudio\VB\Samples\DAQ\Triggering.
2. Create a prestimulus buffer that always contains the latest 100 ms of data. Just modify the CWAI1_AcquiredData event to transfer ScaledData (the data acquired) to your prestimulus buffer. See how there will always be a prestimulus buffer of data regardless of trigger state?
3. On the trigger, you can use the same CWAI1_AcquiredData event to route the data into the poststimulus buffer and average the two.
I see something like this:

    if no trigger:
        append 100 ms of data to the prestimulus buffer
    if trigger:
        collect 500 ms of data into the poststimulus buffer
        run the averaging and display function
        reset the trigger and buffers
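That outline can be sketched in plain Python (buffer sizes in samples and the trigger positions are illustrative only):

```python
# Rolling prestimulus buffer + triggered poststimulus epochs + averaging.
from collections import deque

PRE, POST = 100, 500          # samples of pre/post-stimulus data

def process(samples, triggers):
    pre = deque(maxlen=PRE)   # always holds the latest PRE samples
    epochs = []
    i = 0
    while i < len(samples):
        if i in triggers and len(pre) == PRE:
            post = samples[i:i + POST]
            epochs.append(list(pre) + post)   # one complete epoch
            i += POST
            pre.clear()                       # reset buffers after averaging
        else:
            pre.append(samples[i])            # keep rolling regardless of trigger
            i += 1
    # running average across epochs, point by point
    n = len(epochs)
    avg = [sum(e[j] for e in epochs) / n for j in range(len(epochs[0]))]
    return avg

avg = process([1.0] * 2000, {800})
print(len(avg))  # 600 points: 100 pre + 500 post
```

The deque with a fixed maxlen is the "prestimulus buffer of data regardless of trigger state" from step 2: old samples fall off the front automatically.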
good luck
ben schulte
application engineer
national instruments
www.ni.com/ask -
I have a lock-in amp controlled by LabVIEW via GPIB. My goal is to use the
"advanced data logger" supplied with LabVIEW to log data from a DAQ card
together with the output from the lock-in amp. I have the lock-in VI working correctly
and have the output on the screen. I don't know how to get the pre-packaged
"advanced data logger" to read in data from the GPIB device; I can only
read in data from my DAQ card.
Any assistance in this matter would be appreciated.
1. An OR function should work just fine for you. I don't understand what you mean that the functionality changed. If the Reset button is true OR the Trip button is true, then a True gets put in the notifier and the Trip file functions are executed.
I don't understand your "Scale". Right now you have a zero if the button is true and 100 if it is false; you don't have anything else going on with it. I don't understand the stay-at-zero part. How would it ever get to the point where it isn't zero?
2. For the main log file, you could detect when 43200 iterations of the loop have passed and at that time, just reset the writing to the beginning of the file.
See attached.
Attachments:
Save_Previous_10_SecondsMOD3.vi 27 KB -
Data Logging with Fluke 189 Multimeter
I am trying to get LabVIEW to read a signal coming off a Fluke 189 multimeter, and I keep encountering error code -1073807341. I installed the fl18x driver, and LabVIEW still will not talk to the multimeter. I have been able to use the Fluke 189 multimeter before, but on a different computer with a newer version of LabVIEW. I was able to get the Fluke to work with LabVIEW 7.1.0, but not with LabVIEW 7.0.0 Express. Should I be able to use the Fluke with LabVIEW 7.0.0 Express?
Hi,
Yes, you should be able to use LabVIEW 7.0 with the Fluke 189. The fl18x instrument drivers are available on the Instrument Driver Network, and there is a version for LabVIEW 7.0. You may need to repair or reinstall the NI-VISA and NI-VISA Run-time Engine. Make sure that you select support for LabVIEW 7.0 when installing the NI-VISA driver. Also make sure that you installed LabVIEW 7.0 before installing LabVIEW 7.1, and that you installed the drivers after installing LabVIEW.
Regards,
Rima
Rima H.
Web Product Manager -
Oracle8i Data Guard with log shipping
Is it true that :
in Oracle8i, with Data Guard, there will be zero data loss if the online redo logs have been mirrored at the DR site, and in the event of a disaster the last unfinished redo log can be used to recover the database.
What product is used to apply the redo log?
I know Oracle9i claims this is possible, but when will Oracle9i be available for the Sun platform?
Thomas Schulz wrote:
> Here are my questions:
>
> 1. Is it correct, that I have to restore the last successful restored log (if not the first) from the previous session with "recover_start", before I can restore the next log with "recover_replace" in a new session?
Yes, that's correct. As soon as you leave the restore session, you have to provide an 'overlap' of log information so that the recovery can continue.
> 2. Can't I mix the restoring of incremental and log backups in this way: log001, incremental data, log002, ...? In my tests I was able to restore the incremental data direct after the complete data, but not between the log backups.
No, that's not possible. After you've recovered some log information, the incremental backup cannot be applied as a "delta" to the data area anymore, as the data area has already changed.
> 3. Can I avoid the state change to OFFLINE after a log restore?
Of course: don't use recover_cancel.
As soon as you stop the recovery, the database is stopped; there is no way around this.
There are some 3rd party tools available for this, like LIBELLE.
KR Lars -
Unable to take online backup including logs with HP Data Protector - DB2 9.7 fixpack 8
Dear All,
Infrastructure : -
Database - IBM DB2 LUW 9.7 fixpack 8
OS - HP-UX 11.31
Backup solution - Data protector v7
The OEM of the tape library (HP) asked me to change DB parameters to the following values:
--> LOGRETAIN to RECOVERY
--> USEREXIT to ON
--> LOGARCHMETH1 to USEREXIT
These parameters are used to enable online backup of the database (DB2 LUW 9.7 fixpack 8).
When a backup is initiated from DP (Data Protector) with the option excluding archive logs, the backup succeeds, whereas when we execute a backup from DP with the option including archive logs, the backup terminates with an error:
SQL2428N The BACKUP did not complete because one or more of the requested log files could not be retrieved
Dear all, I need your expert help.
Thanks in advance.
Neeraj
Hi Deepak,
Thanks for your quick reply. I have checked the directories log_dir and log_archive, and they have enough space available.
Please find the database configuration output below.
Database Configuration for Database
Database configuration release level = 0x0d00
Database release level = 0x0d00
Database territory = en_US
Database code page = 1208
Database code set = UTF-8
Database country/region code = 1
Database collating sequence = IDENTITY_16BIT
Alternate collating sequence (ALT_COLLATE) =
Number compatibility = OFF
Varchar2 compatibility = OFF
Date compatibility = OFF
Database page size = 16384
Dynamic SQL Query management (DYN_QUERY_MGMT) = DISABLE
Statement concentrator (STMT_CONC) = OFF
Discovery support for this database (DISCOVER_DB) = ENABLE
Restrict access = NO
Default query optimization class (DFT_QUERYOPT) = 5
Degree of parallelism (DFT_DEGREE) = 1
Continue upon arithmetic exceptions (DFT_SQLMATHWARN) = NO
Default refresh age (DFT_REFRESH_AGE) = 0
Default maintained table types for opt (DFT_MTTB_TYPES) = SYSTEM
Number of frequent values retained (NUM_FREQVALUES) = 10
Number of quantiles retained (NUM_QUANTILES) = 20
Decimal floating point rounding mode (DECFLT_ROUNDING) = ROUND_HALF_EVEN
Backup pending = NO
All committed transactions have been written to disk = NO
Rollforward pending = NO
Restore pending = NO
Multi-page file allocation enabled = YES
Log retain for recovery status = RECOVERY
User exit for logging status = YES
Self tuning memory (SELF_TUNING_MEM) = ON
Size of database shared memory (4KB) (DATABASE_MEMORY) = AUTOMATIC(3455670)
Database memory threshold (DB_MEM_THRESH) = 10
Max storage for lock list (4KB) (LOCKLIST) = AUTOMATIC(20000)
Percent. of lock lists per application (MAXLOCKS) = AUTOMATIC(90)
Package cache size (4KB) (PCKCACHESZ) = AUTOMATIC(161633)
Sort heap thres for shared sorts (4KB) (SHEAPTHRES_SHR) = AUTOMATIC(480362)
Sort list heap (4KB) (SORTHEAP) = AUTOMATIC(50000)
Database heap (4KB) (DBHEAP) = AUTOMATIC(3102)
Catalog cache size (4KB) (CATALOGCACHE_SZ) = 2560
Log buffer size (4KB) (LOGBUFSZ) = 1024
Utilities heap size (4KB) (UTIL_HEAP_SZ) = 10000
Buffer pool size (pages) (BUFFPAGE) = 10000
SQL statement heap (4KB) (STMTHEAP) = AUTOMATIC(8192)
Default application heap (4KB) (APPLHEAPSZ) = AUTOMATIC(256)
Application Memory Size (4KB) (APPL_MEMORY) = AUTOMATIC(40000)
Statistics heap size (4KB) (STAT_HEAP_SZ) = AUTOMATIC(4384)
Interval for checking deadlock (ms) (DLCHKTIME) = 10000
Lock timeout (sec) (LOCKTIMEOUT) = 3600
Changed pages threshold (CHNGPGS_THRESH) = 20
Number of asynchronous page cleaners (NUM_IOCLEANERS) = AUTOMATIC(2)
Number of I/O servers (NUM_IOSERVERS) = AUTOMATIC(5)
Index sort flag (INDEXSORT) = YES
Sequential detect flag (SEQDETECT) = YES
Default prefetch size (pages) (DFT_PREFETCH_SZ) = AUTOMATIC
Track modified pages (TRACKMOD) = YES
Default number of containers = 1
Default tablespace extentsize (pages) (DFT_EXTENT_SZ) = 2
Max number of active applications (MAXAPPLS) = AUTOMATIC(125)
Average number of active applications (AVG_APPLS) = AUTOMATIC(3)
Max DB files open per application (MAXFILOP) = 61440
Log file size (4KB) (LOGFILSIZ) = 16380
Number of primary log files (LOGPRIMARY) = 60
Number of secondary log files (LOGSECOND) = 0
Changed path to log files (NEWLOGPATH) =
Path to log files = /db2/ECP/log_dir/NODE0000/
Overflow log path (OVERFLOWLOGPATH) =
Mirror log path (MIRRORLOGPATH) =
First active log file = S0000187.LOG
Block log on disk full (BLK_LOG_DSK_FUL) = YES
Block non logged operations (BLOCKNONLOGGED) = NO
Percent max primary log space by transaction (MAX_LOG) = 0
Num. of active log files for 1 active UOW(NUM_LOG_SPAN) = 0
Group commit count (MINCOMMIT) = 1
Percent log file reclaimed before soft chckpt (SOFTMAX) = 300
Log retain for recovery enabled (LOGRETAIN) = RECOVERY
User exit for logging enabled (USEREXIT) = ON
HADR database role = STANDARD
HADR local host name (HADR_LOCAL_HOST) =
HADR local service name (HADR_LOCAL_SVC) =
HADR remote host name (HADR_REMOTE_HOST) =
HADR remote service name (HADR_REMOTE_SVC) =
HADR instance name of remote server (HADR_REMOTE_INST) =
HADR timeout value (HADR_TIMEOUT) = 120
HADR log write synchronization mode (HADR_SYNCMODE) = NEARSYNC
HADR peer window duration (seconds) (HADR_PEER_WINDOW) = 0
First log archive method (LOGARCHMETH1) = USEREXIT
Options for logarchmeth1 (LOGARCHOPT1) =
Second log archive method (LOGARCHMETH2) = OFF
Options for logarchmeth2 (LOGARCHOPT2) =
Failover log archive path (FAILARCHPATH) =
Number of log archive retries on error (NUMARCHRETRY) = 5
Log archive retry Delay (secs) (ARCHRETRYDELAY) = 20
Vendor options (VENDOROPT) =
Auto restart enabled (AUTORESTART) = ON
Index re-creation time and redo index build (INDEXREC) = SYSTEM (RESTART)
Log pages during index build (LOGINDEXBUILD) = OFF
Default number of loadrec sessions (DFT_LOADREC_SES) = 1
Number of database backups to retain (NUM_DB_BACKUPS) = 12
Recovery history retention (days) (REC_HIS_RETENTN) = 60
Auto deletion of recovery objects (AUTO_DEL_REC_OBJ) = OFF
TSM management class (TSM_MGMTCLASS) =
TSM node name (TSM_NODENAME) =
TSM owner (TSM_OWNER) =
TSM password (TSM_PASSWORD) =
Automatic maintenance (AUTO_MAINT) = ON
Automatic database backup (AUTO_DB_BACKUP) = OFF
Automatic table maintenance (AUTO_TBL_MAINT) = ON
Automatic runstats (AUTO_RUNSTATS) = ON
Automatic statement statistics (AUTO_STMT_STATS) = ON
Automatic statistics profiling (AUTO_STATS_PROF) = ON
Automatic profile updates (AUTO_PROF_UPD) = ON
Automatic reorganization (AUTO_REORG) = ON
Auto-Revalidation (AUTO_REVAL) = DEFERRED
Currently Committed (CUR_COMMIT) = DISABLED
CHAR output with DECIMAL input (DEC_TO_CHAR_FMT) = NEW
Enable XML Character operations (ENABLE_XMLCHAR) = YES
WLM Collection Interval (minutes) (WLM_COLLECT_INT) = 0
Monitor Collect Settings
Request metrics (MON_REQ_METRICS) = BASE
Activity metrics (MON_ACT_METRICS) = BASE
Object metrics (MON_OBJ_METRICS) = BASE
Unit of work events (MON_UOW_DATA) = NONE
Lock timeout events (MON_LOCKTIMEOUT) = WITHOUT_HIST
Deadlock events (MON_DEADLOCK) = WITHOUT_HIST
Lock wait events (MON_LOCKWAIT) = NONE
Lock wait event threshold (MON_LW_THRESH) = 5000000
Number of package list entries (MON_PKGLIST_SZ) = 32
Lock event notification level (MON_LCK_MSG_LVL) = 1
SMTP Server (SMTP_SERVER) =
SQL conditional compilation flags (SQL_CCFLAGS) =
Section actuals setting (SECTION_ACTUALS) = NONE
Connect procedure (CONNECT_PROC) =
Regards
Neeraj -
Rapid disk use with DSC data logging
I recently installed LabVIEW 8.2.1 with the DSC module. I have tried a few shared-variable projects with data logging, and now I find my hard drive space is being consumed very rapidly. The Variable Manager indicates that only a few shared variables are left from my projects. They would produce very little data in the Citadel database, certainly not enough to consume 100 MB+ of disk per day. I am obviously missing something that would turn off some hidden logging process. Any suggestions?
Thanks for the input. I did have another look at historical data, and there are no large traces. I used the Variable Manager to stop all processes, and I stopped the Shared Variable Engine. The leak continues. The only way I can stop it is to use the Windows Task Manager to shut down the Citadel5 process. I have searched the computer and cannot find any large database files that would account for the shrinking disk space. I have also uninstalled and reinstalled the DSC module. No success.
If I knew where the data was located I might be able to determine the source. The National Instruments directory is not growing, even though all the processes had their databases pointed at a directory inside the NI directory!
Len -
How do I control a data log session with period and sample time?
I need a data logging system where the operator can select 2 logging parameters: Log Period and Sample Time. I also need a START and STOP button to control the logging session. For example, set the log period for 1 hour and the sampling time for 1 second. (I may be using the wrong jargon here.) In this case when the START button is clicked, the system starts logging for 1 second. An hour later, it logs data for another second, and so on until the operator clicks the STOP button. (I will also include a time limit so the logging session will automatically stop after a certain amount of time has elapsed.)
It’s important that when the STOP button is clicked, that the system promptly stops logging. I cannot have the operator wait for up to an hour.
Note that a logging session could last for several days. The application here involves a ship towing a barge at sea where they want to monitor and data log tow line tension. While the system is logging, I need the graph X-axis (autoscaled) to show the date and time. (I’m having trouble getting the graph to show the correct date and time.) For this application, I also need the system to promptly start data logging at a continuous high rate during alarm conditions.
Of course I need to archive the data and retrieve it later for analysis. I think this part I can handle.
Please make a recommendation for program control and provide sample code if you can. It’s the program control concepts that I think I mostly need help here. I also wish to use the Strip Chart Update Mode so the operator can easily view the entire logging session.
DAQ Hardware: Not Selected Yet
LabVIEW Version: 6.1 (Feel free to recommend a v7 solution because I need to soon get it anyway.)
Operating System: Win 2000
In summary:
How do I control a graphing (data log) session for both period and sample time?
How do I stop the session without having to wait for the period to end?
How do I automatically interrupt and control a session during alarm conditions?
Does it make a difference if there is more than one graph (or chart) involved where there are variable sample rates?
Thanks,
Dave
Hello Dave,
Sounds like you have quite the system to set up here. It doesn't look like you are doing anything terribly complicated. You should be able to modify different examples for the different parts of your application; examples are always the best place to start.
For analog input, the "Cont Acq&Chart (buffered).vi" example is a great place to start. You can set the scan rate (scans/second) and how many different input channels you want to acquire. This example has its own stop button; it should be a simple matter to add a manual start button. To manually set how long the application runs, you could add a 100 ms delay to each iteration of the while loop (recommended anyway, to allow the processor to multi-task) and add a control that sets the number of iterations of the while loop.
For logging data, a great example is the "Cont Acq to File (binary).vi" example.
For different sample rates on different input lines, you could use two parallel loops, both running the first example mentioned above. The data would not be able to be displayed on the same graph, however.
If you have more specific questions about any of the different parts of your application, let me know and I'll be happy to look further into it.
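The period/sample-time/prompt-STOP control logic from the question can also be sketched independently of any DAQ calls. The plain-Python functions below illustrate only the timing decisions (all names and numbers are made up): one fast loop checks the STOP flag every tick, so stopping never waits for the hour-long period to elapse, and an alarm flag forces continuous logging.

```python
# One fast loop; "inside the sample window?" is derived from elapsed time
# rather than sleeping through the whole log period.
def should_log(elapsed_s, period_s, sample_s, alarm):
    return alarm or (elapsed_s % period_s) < sample_s

def run(ticks, tick_s, period_s, sample_s, stop_at, alarm=lambda t: False):
    logged = 0
    for k in range(ticks):
        t = k * tick_s
        if t >= stop_at:          # STOP is honored within one tick
            break
        if should_log(t, period_s, sample_s, alarm(t)):
            logged += 1           # write one sample to the archive
    return logged

# 1 s ticks, 3600 s period, 1 s sample window, STOP pressed after 2 hours:
n = run(ticks=10000, tick_s=1.0, period_s=3600.0, sample_s=1.0,
        stop_at=7200.0)
print(n)  # 2: one sample at t=0 and one at t=3600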
Have a nice day!
Robert M
Applications Engineer
National Instruments
Robert Mortensen
Software Engineer
National Instruments -
With journaling, I have found that my computer is saving a large amount of data, logs of all the changes I make to files; how can I clean up these logs?
For example, in Notes, I have written three notes; however if I click on 'All On My Mac' on the side bar, I see about 10 different versions of each note I make, it saves a version every time I add or delete a sentence.
I also noticed, that when I write an email, Mail saves about 10 or more draft versions before the final is sent.
I understand that all this journaling provides a level of security and prevents data loss, but I was wondering: is there a function to clean up journal logs once in a while?
Thanks
Roz
Are you using Microsoft Word? Microsoft thinks users are idiots. They put up a lot of pointless messages that annoy and worry users. I have seen this message from Microsoft Word; it's annoying.
As BDaqua points out...
When you copy information via edit > copy, command + c, edit > cut, or command +x, you place the information on the clipboard. When you paste information, edit > paste or command + v, you copy information from the clipboard to your data file.
If you edit > cut or command + x and you do not paste the information and you quit Word, you could be losing information. Microsoft is very worried about this: when you quit Word, it checks whether there is information on the clipboard and, if so, puts out this message.
You should be saving your work more than once a day. I'd save every 5 minutes. command + s does a save.
Robert -
Hello everyone,
We are trying to display on screen the pulses acquired from our PCI-6602 with a BNC-2121 connector board, from several devices. We are able to read and save the data without problems, but we cannot look at it while we are measuring. Does anybody have an idea how to program this in C++? Any suggestion is welcome!
Thanks for the help
Hi,
try to look for some example programs and Tutorials:
Examples Results
http://search.ni.com/nisearch/app/main/p/bot/no/ap/tech/lang/en/pg/1/sn/catnav:ex/q/DAQmx%20C%2B%2B/...
Tutorials Results
http://search.ni.com/nisearch/app/main/p/bot/no/ap/tech/lang/en/pg/1/sn/catnav:tu/q/DAQmx%20C%2B%2B/...
You should also have a look at the "C Reference Help" which is installed with the NI DAQmx driver.
Acquire N Scans (Visual C++ 6.0, CW++, NI-DAQ)
http://zone.ni.com/devzone/cda/epd/p/id/207
Continuous Analog Acquisition with Producer Consumer Architecture in C#
https://decibel.ni.com/content/docs/DOC-4253
Good Luck!
Matteo