Real-time processing in a formula node
hello all,
I'm using LabVIEW for data acquisition of an analog signal. My sampling frequency is in the kHz range. Now, if I have to process these data in a formula node, what type of variable do I need to define in the formula node?
For example, if I want to square each sample and add it to a running sum, storing the sum after each sample is squared, how do I write the formula node?
I'm a beginner in LabVIEW, so please keep the explanation simple.
Thanks in advance.
pramodantony wrote:
Now, if I have to process these data in a formula node, what type of variable do I need to define in the formula node?
Who forces you to use the formula node? You simply need to process the data, the means should be irrelevant.
Acquired data in LabVIEW can be of many forms (arrays, waveforms, dynamic data, etc.). Can you show us what you have?
LabVIEW Champion . Do more with less code and in less time .
Similar Messages
-
Hi, this is Siva.
I want to learn about SAP Script real-time processing. -
Acrobat Plugin Can be involved in Real-time processing
Hi,
I am implementing an acrobat plug-in solution for color separation and generation of PRN file for the given PDF file.
Is the plug-in architecture suitable for real-time processing and RIPping? My concern is that, since the Acrobat process is involved in executing each and every plug-in HFT function, will my plug-in be able to respond
in real time?
Please advise.
Thanks & Regards,
Abdul Rasheed.
First and foremost, Acrobat can NOT be used on a server, right? So this is going to be a user-invoked process, correct?
Beyond that, what do you think would be a problem?
created by skrasheed in Acrobat SDK - View the full discussion: http://forums.adobe.com/message/3929688#3929688 -
Real-time process slowly filling up Postgres DB space
I have a real-time job which takes data from a JMS queue, processes the data, and then applies inserts/updates to an Oracle DB. In the task definition for the process Results drill down is set = "None" yet the Postgres DB grows substantially as the job runs. Also the following SQL seems to find a row per JMS message processed...
select count(*) from pg_stat_user_tables where schemaname = 'results'
Anybody any clues as to what may be causing the DB to grow?
Cheers
Jon
It looks like we're going to have to - Postgres has just died...
WARNING: 22-May-2014 04:24:27: Unable to index table : 'DNM_819_776_1_ingr'
WARNING: 22-May-2014 04:24:27: A database error has occurred : ERROR: could not create relation base/25589/3549571: File too large. (Code: 200,302)
com.datanomic.director.results.database.exception.sql.ResultsSQLException: A database error has occurred : ERROR: could not create relation base/25589/3549571: File too large. (Code: 200,302)
at com.datanomic.director.results.database.translator.MapErrorCodes.mapException(MapErrorCodes.java:70)
at com.datanomic.director.results.database.AbstractTableDAO.executeSQL(AbstractTableDAO.java:66)
at com.datanomic.director.results.database.AbstractTableDAO.executeSQL(AbstractTableDAO.java:39)
at com.datanomic.director.results.database.TableInsertDao.addIndexes(TableInsertDao.java:291)
at com.datanomic.director.results.database.TableInsert.close(TableInsert.java:423)
at com.datanomic.director.results.database.TableInsert.close(TableInsert.java:301)
at com.datanomic.director.match.runtime.data.writers.AbstractDBWriter.close(AbstractDBWriter.java:173)
at com.datanomic.director.match.runtime.data.realtime.ResultsBucket.finishRealtimeBuckets(ResultsBucket.java:63)
at com.datanomic.director.match.runtime.RealtimeHandler.finalizeDBStore(RealtimeHandler.java:623)
at com.datanomic.director.match.munger.MatchRealtimeExecutor.doTheStuff(MatchRealtimeExecutor.java:303)
at com.datanomic.director.runtime.engine.RuntimeProcessMunger$MungerExecutable.execute(RuntimeProcessMunger.java:872)
at com.datanomic.utils.execution.Parallelizer$Worker.run(Parallelizer.java:210)
at com.datanomic.utils.execution.Parallelizer$Worker.runHere(Parallelizer.java:156)
at com.datanomic.utils.execution.Parallelizer.run(Parallelizer.java:85)
at com.datanomic.director.runtime.engine.RuntimeProcessMunger.execute(RuntimeProcessMunger.java:459)
at com.datanomic.utils.execution.Parallelizer$Worker.run(Parallelizer.java:210)
at java.lang.Thread.run(Thread.java:722)
Do you recognise this as being the ultimate failure of too much data written to the Postgres DB? -
HELP: Run-time array dimension in LabVIEW formula node
I need to dimension an array at run time within a formula node as follows:
int32 i, N;
N = sizeOfDim(inputArray, 0);
float64 outputArray[N];
for (i = 0; i < N; i++)
    outputArray[i] = myfunction(inputArray[i]);
However, LabVIEW complains "Formula Node: index list expected". On the
other hand, if I say
float64 outputArray[1000];
LabVIEW is perfectly happy. But that's not what I need to do! Is there
an alternative
way of accomplishing my goal?
BTW, I've tried calculating N outside the formula node and then
presenting it as
an input with the same results. I've got a bad feeling that run time
array dimensioning
just isn't allowed.
TIA,
Hugh
Can't you just use the Initialize Array function outside the formula node and pass that in instead?
-
Continuous data acquisition and analysis in real time
Hi all,
This is a VI for the continuous acquisition of an ECG signal. As far as I understand, the DAQmx Analog Read VI needs to be placed inside a while loop so it can acquire the data continuously, and I need to perform filtering and analysis of the waveform in real time. The way I set up the block diagram means that the data stays in the while loop, and as far as I know the data will only be transferred out through the data tunnels once the loop finishes executing; clearly this is not real-time data processing.
The only way I can think of to fix this problem is to place another while loop around the filtering-stage VIs and use some sort of shift register to pass the data to the second loop. My questions are whether this would introduce some sort of delay, and whether this would be considered real-time processing. Would it be better to place all the VIs (acquisition and filtering) inside one while loop, or is that bad programming practice? Another function I need is to save the data to a file, but only when the user wants to do so.
Any advice would be appreciated.
Solved!
Go to Solution.
You have two options:
A. As you mentioned, you can place code inside your current while loop to do the processing. If you are clever, you won't need to place another while loop inside your existing one (nested loops). But that totally depends on the type of processing you are doing.
B. Create a second parallel loop to do the processing. This would decouple the processes to ensure that the processing does not hinder your acquisition. See here for more info.
Your choice really depends on the processing that you plan to perform. If it is processor-intensive, it might introduce delays as you mentioned.
I would recommend you first try to place everything in the first loop and see if your DAQ buffer overflows (you can monitor the buffer backlog while it's running). If so, then you should decouple the processes into separate loops.
Whether "this would be considered to be real-time processing" is a loaded question. Most people on these forums will say that your system will NEVER be real-time, because you are using a desktop PC to perform the processing (note: I am assuming this code is running on a laptop or desktop?). It is not a deterministic system, and your data is already "old" by the time it exits your DAQ buffer. But the answer to your question really depends on how you define "real-time processing". Many lay people will define it as the processing of "live data" ... but what is "live data"? -
TCP/IP Connecting with Real Time Controller
I have a host running LabVIEW on Windows XP and a real-time embedded controller on a PXI chassis that acts as the server. When the real-time target is started, it automatically goes into listen mode and listens for a connection from the host. The host opens a connection. After a valid connection is open, the real-time side goes into a TCP Read, and the host can then send commands that the real-time target processes and sends to the FPGA on the PXI chassis.
Now the problem I'm having is how to handle the case when a TCP connection is lost. I can have the TCP Read on the real-time side error out on a timeout and then go into listen mode, but this isn't very logical, because then the host would have to reconnect each time a timeout occurs. If instead I make the TCP Read timeout infinite and the connection is lost (let's say I unplug the Ethernet cable and plug it back in), then I cannot recover from this and the real-time target needs to be rebooted.
I've tried sending the real-time target into listen mode if the error code is anything other than a timeout error (code 56), and having it go back to TCP Read mode if it is a timeout error. But if the connection is lost physically (such as by me pulling the Ethernet wire and plugging it back in), then the real-time target never sees that the connection is invalid. The host, on the other hand, can detect it, because it will get an error when it tries to write.
So my question is:
Is there any way to prevent an infinite loop that needs a reboot, and at the same time prevent the host from reconnecting every time there is a timeout?
Hi SJeane,
I apologize for taking so long to respond, but I wanted to test this on my end. In doing so, I realized that using the RT Reboot Controller.vi after the connection is lost does not work because the message to reboot cannot be relayed to the target without communication! Thus, to solve this problem, we have to approach it a different way. You mentioned that you tried programmatically clearing errors, but did you try to reestablish connection after clearing the errors? I tested this on my end with a FieldPoint controller, and the attached VIs resumed operation even after unplugging/replugging the Ethernet cable (no reboot). Will this solution work for you?
Peter K.
National Instruments
Attachments:
Reestablish.zip 39 KB -
CRIO-9067 Real-time unexpected error restart (Hex 0x661)
Hi all,
I recently moved to LV2014 SP1 and NI-RIO 14.5 to support development of a cRIO-9067 Linux RT controller. The past two days of development have seen LabVIEW crash numerous times (eight at last count), along with numerous VIs being in an undeployable state due to some unseen error (there are no broken run arrows anywhere, and a LabVIEW restart seems to cure what was broken).
The above issues are tolerable, if somewhat annoying. What's concerning is the most recent error I received this morning. I had just run the top level VI from source (and verified it was running), went to make myself a coffee and came back to this error.
LabVIEW: (Hex 0x661) The LabVIEW Real-Time process encountered an unexpected error and restarted automatically.
The VI wouldn't have been running for more than 10 minutes. I've since tried to reproduce the error without luck.
This is somewhat concerning as the application for this cRIO will be 24/7 process control. Is this a known issue with the newer Linux RT controllers? Is there anything that can be done to detect the error in LabVIEW?
Below is a screenshot of the software installed on the controller.
You should be able to access the error logs on your controller through MAX.
Is There an Error Log File for My Real-Time Controller? - http://digital.ni.com/public.nsf/allkb/E734886E027D0B6586256F2400761E30?OpenDocument
How Do I Locate the Error Logs on My cRIO if I Don't Have MAX? - http://digital.ni.com/public.nsf/allkb/9D2F9D4F8C834D678625766D00633837
Could you post the error log files for the Compact RIO targets that reproduce the error?
It’s possible that a memory leak is occurring someplace in your program that causes the crash.
Also, are you using the System Config Set Time VI anywhere in your application? There has been a reported issue with this VI in relation to this error, but the hex error code is not common. We can try to cross-compare the issue internally to see if there are similarities to the reported case.
Additionally, is this crash repeatable with other code, say a simple shipping example? I would imagine that the crash is related to a routine called in your program, but it’s possible there is a corruption in the software installation on your target.
Also, what is the ProcessCommandMessage.vi responsible for/doing in your application?
Will M.
Applications Engineer
National Instruments -
Hi All,
Some students at our university would like to implement a real-time JVM on Linux.
We have a solid grounding in Linux as well as in Java, but this is a very new and interesting topic for us.
Can anyone advise how to start the assignment, i.e. the implementation of a real-time JVM on Linux?
Thanks and Regards
tapas
Just guessing...
There are two main obstacles to real-time processing in Java.
First, the garbage collector: it runs at unpredictable times and ties up the rest of the application for unpredictable amounts of time, so a real-time system needs a different solution.
Second, threads: because threads preempt each other, they can also produce unpredictable behavior. Typical solutions involve using no threads at all, or providing a way to preclude interruption.
Finally, I suspect that if you do a literature search you will be able to find some papers on this subject. -
Real time solution using Documaker
Hi all,
We are using Documaker 10.3. Currently we have a flat file as input to Documaker, which creates the forms in our daily batch. Now we are planning to generate a few forms in real time; for example, at the click of a button in a source system, a document should be generated.
Right now I can think of creating a flat file out of the source system and generating a form (a miniature batch), though I'm not satisfied with this approach. Is there a better way of dealing with this kind of situation? Is there any facility (which I am obviously not aware of) in Documaker that can take care of this?
Thanks in advance
Venkata,
You can certainly continue to use the flat-file approach if it satisfies the requirements for your real-time processing. If the source system is not able to generate a "batch of one" extract file, then you will need to explore other methods of generating your input for Documaker, which are going to be specific to your source system -- can you elaborate on this?
When running Documaker in real-time mode, it's a fairly simple process using components in the Oracle Documaker suite -- specifically Docupresentment. However, with pre-11.4 versions of Documaker, the licensing for Docupresentment was handled separately, so you will need to ensure you are licensed for Docupresentment to use it. -
Process Chain for Real Time Demon
Please help, I am stuck. I followed the steps on SDN, but one step is missing: how do I create the process chain?
I created the below
DSO connected to DataSource via transformation,
Real-Time InfoPackage,
Real-Time DTP,
assigned to the DataSource, and assigned the DS, IP, and DTP to the daemon in RSRDA. I have also started it manually via Start All IPs. But how do I set up the process chains now?
Please help me step by step with the process chain, since I am new to this daemon in process chains.
Thanks
Soniya
Hi,
Refer to this:
CREATION OF PROCESS CHAINS
Process chains are used to automate the loading process.
They are used in all applications, as you cannot schedule hundreds of InfoPackages manually and daily.
Metachain
Steps for a Metachain:
1. Start (in this variant, set your schedule times for this metachain)
2. Local Process Chain 1 (say it's a master data process chain - go into the start variant of this chain (a subchain, like any other chain) and check the second radio button, "Start using metachain or API")
3. Local Process Chain 2 (say it's a transaction data process chain - do the same as in step 2)
Steps for Process Chains in BI 7.0 for a Cube:
1. Start
2. Execute Infopackage
3. Delete Indexes for Cube
4. Execute DTP
5. Create Indexes for Cube
For a DSO:
1. Start
2. Execute Infopackage
3. Execute DTP
4. Activate DSO
For an InfoObject:
1. Start
2. Execute Infopackage
3. Execute DTP
4. Attribute Change Run
Data to Cube through a DSO:
1. Start
2. Execute Infopackage (loads up to the PSA)
3. Execute DTP (to load the DSO from the PSA)
4. Activate DSO
5. Further Processing
6. Delete Indexes for Cube
7. Execute DTP (to load the Cube from the DSO)
8. Create Indexes for Cube
3.x:
Master data loading (Attributes, Texts, Hierarchies)
Steps:
1. Start
2. Execute Infopackage (say you are loading two InfoObjects - just run them all in parallel)
3. You might want to load in sequence: Attributes - Texts - Hierarchies
4. And process (connecting all Infopackages)
5. Attribute Change Run (add all relevant InfoObjects).
Start
-> Infopackge1A (Attr) | Infopackge2A (Attr)
-> Infopackge1B (Txts) | Infopackge2B (Txts)
-> Infopackge1C (Txts)
-> And Processor (connect Infopackge1C & Infopackge2B)
-> Attribute Change Run (add InfoObject 1 & InfoObject 2 to this variant)
For a Cube:
1. Start
2. Delete Indexes for Cube
3. Execute Infopackage
4. Create Indexes for Cube
For a DSO:
1. Start
2. Execute Infopackage
3. Activate DSO
For an InfoObject:
1. Start
2. Execute Infopackage
3. Attribute Change Run
Data to Cube through a DSO:
1. Start
2. Execute Infopackage
3. Activate DSO
4. Further Processing
5. Delete Indexes for Cube
6. Execute Infopackage
7. Create Indexes for Cube -
Design requirement for processing IDocs in real time
hi Gurus,
We have a requirement to design a solution for processing IDocs in real time, i.e. all the IDocs must be stored and processed within a given timeline, without using the BPM tool. We have suggested using cache storage or storing the files in the file adapter.
Please suggest a good approach that does not compromise the performance of PI.
Regards,
shankar
Hi Raj,
Please find the example below for your reference
Let's say:
System A --> PI --> System B
System A sends multiple files, both time-based (daily or weekly) and real-time (as and when a record is created), to System B (CRM) in the form of IDocs via PI.
We need to club the real-time files together and send them to System B. How best can we do this without using BPM?
regards
shankar -
Do I have to use LabVIEW Real Time with a reflective memory node?
For reference with an external data system that will be temporarily installed at a customer's site, they have asked that I tie into their data network to record data from their control system. They apparently use a reflective memory network for data sharing. I have no prior experience with reflective memory, but all references to it involve real time systems. I do not need absolute determinism to acquire this data, I can be late by several milliseconds with no problem. Do I still need to use LabVIEW Real Time to interface with the PXI reflective memory node?
Hi AEI,
I have worked with that card briefly before. It has a VISA-based driver, and RT isn't required. However, I haven't worked with the card on a non-RT system and am not sure if there are any issues to be aware of.
A lot of work has gone into integrating support for the card into VeriStand; it may save you enough development time to make an RT VeriStand system worth the extra cost.
Jesse Dennis
Design Engineer
Erdos Miller -
Bug: Built Real-Time app won't run if it accesses a typedef'ed shared variable node
Hello,
I finished developing a Real-Time program that uses typedef'ed (clusters and enums) networked shared variables (SVs). It works fine when I run the program in Development mode. However, when I built and deployed it as a start-up app, it refuses to start. The VI monitor in the Distributed System Manager says that the VIs that use those SV nodes (and the top-level VI that references them) have "Bad" status, and the rest are "Idle".
To get my built program to run, I had to disconnect all the variables from the typedefs. Deleting the SV nodes made the program run too, but that's not an option.
Has anyone else encountered this?
Platform: LabVIEW 2012 (32-bit), NI cRIO-9076
Yes. See the following thread.
Paolo
LV 7.0, 7.1, 8.0.1, 2011 -
This is what happens when I leave the computer idle for a while and the Windows automatic maintenance starts:
Driver file: ntoskrnl.exe (NT Kernel & System) - ISR count: 0, DPC count: 50764, highest execution: 0.235854 ms, total execution: 332.950426 ms
A DPC spike is generated by ntoskrnl.exe, causing dropouts in real-time streams.
JTS
Hi Fjtorsol,
We hope your issue has been resolved; if you've found a solution by yourself, you could share it with us and we will mark it as the answer.
High Deferred Procedure Call (DPC) latencies are usually caused by certain drivers. If the spike is caused by automatic maintenance, please re-check which of your scheduled tasks are marked as "when computer is idle". As suggested by MVP ZigZag, please
use the Microsoft Windows Performance Analyzer from the Windows Assessment and Deployment Kit (ADK) to identify the cause of any DPC latency spikes.
https://www.microsoft.com/en-gb/download/details.aspx?id=39982
The DPC CPU Usage Summary Table will open, containing a list of drivers/programs. This list is already correctly sorted (by the Actual Duration column), so the process at the very top of the list is the most likely cause of your problem.
Regards
D. Wu
Please remember to mark the replies as answers if they help, and unmark the answers if they provide no help. If you have feedback for TechNet Support, contact [email protected]