Thread swapped
Hello,
My question is about the thread below, which concerns a standby database:
Create a standby controlfile using cold backup
The OP first posted it in Database General; this morning the moderators moved it to Data Guard, and now it has been moved back to Database General.
Why so many moves when it was already in the right forum?
Thanks.
-- No answers from Admins/moderators.. closing thread. :)
Edited by: CKPT on Feb 21, 2012 9:26 AM
109 Fahrenheit is 42.78 Celsius, which is a tad warm; the bottom side of this one shows 35 Celsius (with ambient air of 26).
Try using SMCFanControl to increase the fan speeds and lower the temperatures?
Similar Messages
-
How to get details on swapped out processes?
I am trying to get details on some swapped-out processes. Currently vmstat shows 71 processes as swapped out:
vmstat 3 3
kthr memory page disk faults cpu
r b w swap free re mf pi po fr de sr vc vc vc -- in sy cs us sy id
*0 0 6 5598448 512968 155 352 428 40 42 0 22 3 28 1 0 1982 3904 1905 7 14 79*
*0 0 71 5091488 436568 0 4 0 0 0 0 0 0 1 0 0 976 1215 945 0 3 97*
*0 0 71 5099088 440896 2 5 0 0 0 0 0 0 2 0 0 1004 1221 991 1 2 97*
A search of the MOS knowledge base turned up Support Recommended doc 1009494.1 How to use DTrace and mdb to Interpret vmstat Statistics.
The section titled "kthr: Swapped out Threads (w)" includes the following statements and commands:
To see the kernel threads swapped out during the sample period:
$ dtrace -q -n 'fbt::swapout_lwp:entry{ proc = (proc *)arg[0]->t_procp; printf("Lwp: %d of \t Proc: %s being swaped out\n",arg[0]->t_id, proc->p_user.u_comm);}'
The vmstat output doesn't tell what processes are swapped out. Mdb(1) can be used to print swapped out processes:
# echo "::walk thread myvar|::print kthread_t t_schedflag|::grep .==0x8|::eval <myvar=K|::print kthread_t t_procp|::print proc_t p_user.u_comm"|mdb -k
The dtrace command fails with a syntax error:
*dtrace -q -n 'fbt::swapout_lwp:entry{ proc = (proc *)arg[0]->t_procp; printf("Lwp: %d of \t Proc: %s being swaped out\n",arg[0]->t_id, proc->p_user.u_comm);}'*
*dtrace: invalid probe specifier fbt::swapout_lwp:entry{ proc = (proc *)arg[0]->t_procp; printf("Lwp: %d of \t Proc: %s being swaped out\n",arg[0]->t_id, proc->p_user.u_comm);}: syntax error near ")"*
The mdb command does not return anything:
*#echo "::walk thread myvar|::print kthread_t t_schedflag|::grep .==0x8|::eval <myvar=K|::print kthread_t t_procp|::print proc_t p_user.u_comm"|mdb -k*
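For what it's worth, the parse error in the quoted dtrace one-liner likely comes from the `(proc *)` cast (`proc` is not a type name in D; `proc_t` is) and from `arg[0]`, which should be either the typed `args[0]` or a cast of the untyped `arg0`. A variant along the following lines may get past the parser on Solaris 10; this is an untested sketch, not the command from the support document:

```d
# dtrace -q -n 'fbt::swapout_lwp:entry
{
    /* cast the untyped arg0 to the kernel thread being swapped out */
    this->pp = ((kthread_t *)arg0)->t_procp;
    printf("Lwp: %d of proc %s being swapped out\n",
        ((kthread_t *)arg0)->t_id, stringof(this->pp->p_user.u_comm));
}'
```

If swapout_lwp's first argument is not a kthread_t pointer on this kernel build, the cast would need adjusting accordingly.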
The server is running:
uname -a
SunOS rizzotest 5.10 Generic_142900-11 sun4v sparc SUNW,SPARC-Enterprise-T5220
cat /etc/release
Solaris 10 10/09 s10s_u8wos_08a SPARC
Copyright 2009 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembled 16 September 2009
Thanks,
GlenG
Satellite M505-S4975
There are some utilities that read SMART data from the log.
Comparison of S.M.A.R.T. tools
But, as Peter says, there is no way to tell when such a drive will fail. Best to replace it now.
-Jerry -
I am trying to get an understanding of what is a LabVIEW thread when using
LV6 on Win32.
1. If a block diagram has two parallel while loops, is each loop a thread.
2. Is each sub-VI a separate thread? (I don't think so, but I thought I
should ask)
3. If a sub-VI is a loop, and does not have a return terminal, is it a
thread?
4. If a call to a sub-VI starts a thread, how do you kill it?
Thanks. Any answers or references to material on this topic (other than NI
App Note 114) would be appreciated...Ed
Before I get to the direct questions, a bit of overview. The LV
execution is controlled by dataflow and the loop/case/sequence
structures. LV can execute the same diagram in many ways. The same VI,
without being recompiled, can run on a single threaded OS such as the
classic Mac OS, on a single CPU multithreaded machine, or on a multi-CPU
multithreaded OS. It does this much like the OS itself does, by
scheduling tasks using a priority based round-robin scheduler. There
can be any number of these scheduling objects, which we call an
execution system. Each execution system has a queue used to execute
items marked to run there.
On a single threaded OS, every execution system is run by the one and
only thread. That thread takes turns cooperatively multitasking between
the different execution systems and the window handling UI tasks. It
picks an execution system, dequeues a task, and when the task releases
control, the thread repeats -- determines the next execution system and
task to execute.
On a single-CPU multithreaded OS, each execution system defaults to having its
own thread. The OS schedules threads preemptively based upon priorities
and heuristics. I said this is the default because it is possible to
customize this to have more than one thread per execution system. Each
thread sleeps until a task is placed on its queue. At that point the OS
may choose to swap threads based upon priorities. Additionally, the OS
can swap out threads, thus swapping out execution systems at any point.
The default for a multi-CPU multithreaded OS is to allocate M threads
per execution system where M is the number of CPUs. Each of the
execution queues now has multiple threads sleeping waiting for a task.
The OS chooses a thread based upon priorities for each of the CPUs and
determines how long to execute and how often to swap. Again, this is
the default and can be modified, though this is rarely necessary.
> 1. If a block diagram has two parallel while loops, is each loop a thread.
Normally, no. Each thread is one or more tasks and since they are in
the same VI, they will be executing in the same execution system. The
code for both while loops is generated to have cooperative scheduling
opportunities in it. The while loops will multitask based upon delays
and other asynchronous nodes within it. Provided the loops do not have
synchronization functions in them, they will execute relatively
independent of one another sharing the CPU resources. If one of the
loops is executing more frequently than it needs to, the best way to
control this is to place a Wait node or some other synchronization to
help limit the execution speed. You can place each of the loops in
a separate thread by placing the loops in different VIs
and setting those VIs to execute in different execution systems. You can
also customize the execution system they run in to have more than one
thread. If you could be more specific as to why you want to run them in
different threads I can help determine if this would help and the best
way to do this.
> 2. Is each sub-VI a separate thread? (I don't think so, but I thought I
> should ask)
No. Each VI specifies which execution system it wishes to run in. By
default, it is set to Same as Caller so that it adapts to its caller and
avoids thread swaps. You can set the execution system on the Execution
page of the VI Properties dialog. By setting this, you can essentially
place VIs in their own thread, but this isn't the default since it would
affect performance.
> 3. If a sub-VI is a loop, and does not have a return terminal, is it a
> thread?
Whether or not a subVI has a loop doesn't affect how it is scheduled.
> 4. If a call to a sub-VI starts a thread, how do you kill it?
You use one of the synchronization primitives or some other
communication, such as a global variable, to tell the loop to stop executing.
When the subVI stops executing, the execution system either picks
another task to execute or goes to sleep. It is possible to use the VI
Server to abort VIs that have been run using the Run method, but this
isn't necessarily a good thing to do. It is far better to have loops
and subVIs terminate normally.
>
> Thanks. Any answers or references to material on this topic (other than NI
> App Note 114) would be appreciated...Ed
The last three years there have been presentations on the execution
system of LV. You can find most of these on devzone by going to
zone.ni.com and searching for Inside LabVIEW.
Also feel free to ask more questions or give more background.
Greg McKaskle -
Threading communication advice please
Hello everyone, I have a question.
I am building a program which has two classes: one is a telnet client and the other a GUI. The GUI class is started in the original thread, whereas the telnet class spawns a new thread so the UI remains responsive.
The UI class can instigate telnet commands to be sent, and telnet messages received result in some part of the gui being updated.
What I am asking is: what would be the best way for these two threads to pass messages to each other? I haven't done much threaded programming before; would some sort of message queue work, or maybe some shared static classes (checked every time the active thread swaps)?
I'm a little confused; any help would be great!
Thanks
I don't get JLNayak's point...
If the GUI is based on Swing, you should create it and set it visible (there's no start notion) in the Event-Dispatch Thread (EDT). See [this section|http://java.sun.com/docs/books/tutorial/uiswing/concurrency/index.html] of the Java Tutorial.
From the link I gave, you'll also find a Swing-specific means (namely, SwingWorker) to have something running in a thread and appropriately post updates to the EDT (which is also the thread in which all Swing painting occurs). -
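Independent of the Swing specifics, the two-queue message-passing pattern the OP asks about can be sketched in plain Python; the queue names, the sentinel convention, and the "telnet" stand-in here are invented for illustration:

```python
import queue
import threading

commands = queue.Queue()     # GUI -> telnet thread
ui_updates = queue.Queue()   # telnet thread -> GUI

def telnet_worker():
    # Stand-in for the telnet client loop: block on commands, post results back.
    while True:
        cmd = commands.get()
        if cmd is None:                      # sentinel tells the worker to stop
            ui_updates.put("disconnected")
            break
        ui_updates.put("reply to " + cmd)

worker = threading.Thread(target=telnet_worker)
worker.start()
commands.put("status")
commands.put(None)
worker.join()

# The GUI side drains its queue; a real GUI would poll from a timer, or in
# Swing use SwingWorker's publish/process to land updates on the EDT.
received = []
while not ui_updates.empty():
    received.append(ui_updates.get())
print(received)  # ['reply to status', 'disconnected']
```

The key property is that neither thread ever touches the other's data directly; all sharing goes through the two thread-safe queues.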
Swap space for multiple instaces on the same box
OS: Oracle Enterprise Linux 5 Update 2 (64-bit)
DB: 10.2.0.4
I'm fairly new to Linux and have a general question about configuring swap space on a Linux box running 10g. From the 10g: Managing Oracle on Linux for DBAs class, Oracle gives the following recommendations about swap size:
RAM size (recommended swap)
<= 2 GB: 150% of RAM
2 GB - 8 GB: equal to RAM
> 8 GB: 75% of RAM
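Stated as code, the sizing rule above looks like the following sketch (the function name is invented; as the reply in this thread notes, the rule applies per host, not per instance):

```python
def recommended_swap_gb(ram_gb):
    """Oracle's Linux swap-sizing rule of thumb, applied per host."""
    if ram_gb <= 2:
        return ram_gb * 1.5    # 150% of RAM
    elif ram_gb <= 8:
        return float(ram_gb)   # equal to RAM
    else:
        return ram_gb * 0.75   # 75% of RAM

# The 16 GB box in the question: one 12 GB swap area covers all three instances.
print(recommended_swap_gb(16))  # 12.0
```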
It's not clear to me from reading the book if this recommendation is good for all Oracle processes running on the machine, or if it's a per instance value? The box has 16Gb of RAM but there are going to be three instances running on it and I need to verify that 12Gb of swap will support all three without issue or if I'll need to configure 12Gb swap per instance.
Appreciate any input.
As noted earlier in this thread, swap space is an operating system property and has nothing to do with the software you run on top of it, such as an Oracle database.
The function of swap space is to cope with memory allocations beyond the amount of physical memory. In most cases you do not actually want to use swap, because it has a severe performance impact (memory pages have to be exchanged between the swap device and physical memory).
(Please note that on some Unixes (AIX and HP-UX, if I am not mistaken), swap is pre-allocated for processes, which means the swap size depends not only on the operating system and memory size but also on the number of processes. Linux does not do that; it starts to allocate swap pages under memory pressure. There are certain issues with some memory settings in Linux (Linux kernel memory management is largely automatic) that can make your system page; memory overcommitment is one of them.)
In my experience, the amount of memory in use depends mostly on the amount of memory in the machine. While this sounds simple, think about it: with 2 GB installed, most DBAs would allocate approximately 1.5 GB to the databases on that machine; if your machine has 16 GB, you probably want approximately 14 GB used by the databases.
This means (for me) that I size swap equal to the amount of memory. Those few extra GBs allocated to swap won't cost you or your company an arm and a leg, so have them there to avoid running out of memory. -
Data acquisition loop with queue
What I would like to do is have a data acquisition loop that samples a load cell at 500 Hz and another loop that runs much slower to run a state machine and display some data in real time. The reason I want to sample the load cell so fast is to filter out some noise. Making producer/consumer loops with a queue kind of makes sense, but I don't really care about all of the samples; I just want to be able to read a real-time filtered signal at certain times. I looked at having just two parallel loops, one to acquire the data and the other to run a test and retrieve a real-time signal when I want, but I'm not sure how to pass data between the loops without using a queue. You can do it with local variables, but you are at risk of a race condition. I hope this makes sense. I am sure this is a simple problem; I just don't know what direction to go. Thanks
Good Evening secr1973,
It sounds like you are on the right track. You already know about the producer/consumer architecture; this is almost always the first step to the separation that I think you are after.
The step that I think you are missing is a Case Structure around the enqueue element VI. You likely have some event or specific pattern that you are looking for in the input signal. You can have the output from this algorithm (likely a boolean) determine which case of the Case Structure to execute (Case 1: enqueue the element or Case 2: Do not enqueue the element).
This, of course, leads to processing being done in the producer loop, which is quite the opposite of what you are trying to accomplish with the producer/consumer architecture. You will have to decide if your processing is very simple or more complicated.
If it is easy/fast, you can likely get away with doing this processing in the producer loop. My guess is that your program falls under the category of do-it-all-in-the-producer loop because you are only acquiring at 500 Hz.
If the application requires faster acquisition rates or if the logic is going to require some processing, you may want to implement a double layer producer/consumer architecture. In this setup, you would pass all of the data from the DAQ producer to a second loop (using queue #1) that determines what to do with the data (to enqueue or not to enqueue...) and, if appropriate, write to a queue (queue #2) that the third loop can read. The third loop would be where your state machine executes.
If you have a quad core machine, each of these steps will execute on its own core. If not, you will have a little more thread swapping; not a huge concern in most cases. Here, we get into the art of programming more than the science.
In any event, I think you will be OK with a little processing for the enqueue or not algorithm in the producer loop.
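As a text-language analogue of the conditional-enqueue idea Charlie describes, here is a minimal Python sketch; the threshold test, names, and sentinel convention are invented stand-ins for the real "event of interest" logic:

```python
import queue
import random
import threading

data_q = queue.Queue()
THRESHOLD = 0.5   # stand-in for the real "event of interest" test

def producer(n_samples):
    # Fast acquisition loop: sample, but enqueue only samples that pass the
    # test (the Case Structure around the Enqueue Element).
    for _ in range(n_samples):
        sample = random.random()
        if sample > THRESHOLD:       # Case 1: enqueue; Case 2: drop
            data_q.put(sample)
    data_q.put(None)                 # sentinel so the consumer knows to stop

def consumer(out):
    # Slower loop: the state machine / display work would live here.
    while True:
        item = data_q.get()
        if item is None:
            break
        out.append(item)

passed = []
p = threading.Thread(target=producer, args=(1000,))
c = threading.Thread(target=consumer, args=(passed,))
p.start(); c.start(); p.join(); c.join()
print(len(passed), "samples passed the filter")
```

The producer stays lightweight (one comparison per sample), which is why doing this much filtering in the producer loop is usually fine at 500 Hz.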
Regards,
Charlie Piazza
Staff Product Support Engineer, RF
National Instruments -
when using the Tick Count millisecond timer with a .dll I've written in C, I'm getting some odd timing issues.
When I code the function I want (I'll explain it below in case it helps) in LV and run it as a subVI, feeding it the Tick count as an argument, the function runs quickly, but not quite as quickly as I would like. When I feed this same subVI just an integer constant rather than the Tick Count, it takes about the same amount of time, maybe a tiny bit more on average.
When I bring in my function from a .dll, however, I start to run into problems. When I feed my function an integer constant, it is much faster than my subVI written in LV. When I feed my .dll the Tick Count, however, it slows down tremendously. I'm including a table with the times below:
|        | Clock   | Constant |
| SubVI: | 450 ms  | 465 ms   |
| .dll:  | 4900 ms | 75 ms    |
This is running the function 100,000 times. The function basically shifts the contents of a 2-dimensional array one place. For this function, it probably won't be a huge deal for me, but I plan on moving some of my other code out of LV and into C to speed it up, so I'd really like to figure this out.
Thanks,
Aaron
Hi Aaron,
Thanks for posting the code -- that made things a lot clearer for me. I believe I know what's going on here, and the good news is that it's easy to correct! (You shouldn't apologize for this though, as even an experienced LabVIEW programmer could run into a similar situation.) Let me explain...
When you set your Call Library Function Node to run in the UI Thread you're telling LabVIEW that your DLL is not Thread-safe -- this means that under no circumstances should the DLL be called from more than one place at a time. Since LabVIEW itself is inherently multithreaded the way to work with a "thread-unsafe" DLL is to run it in a dedicated thread -- in this case, the UI thread. This safety comes at a price, however, as your program will have to constantly thread-swap to call the DLL and then execute block diagram code. This thread-swapping can come with a performance hit, which is what you're seeing in your application.
The reason your "MSTick fine behavior.vi" works is that it isn't swapping threads with each iteration of the for loop -- same with the "MSTick bad behavior.vi" without the Tick Count function. When you introduce the Tick Count function in the for loop, LabVIEW now has to swap threads every single iteration -- this is where your performance issues originate. In fact, you could reproduce the same behavior with any function (not just Tick Count) or any DLL. You could even make your "MSTick fine behavior.vi" misbehave by placing a control property node in the for loop. (Property nodes are also executed in the UI thread.)
So what's the solution? If your DLL is thread-safe, configure the call library function node to be "reentrant." You should see a pretty drastic reduction in the amount of time it takes your code to execute. In general, you can tell if your DLL is thread-safe when:
- It does not store any global data (global variables, files on disk, and so on).
- It does not access any hardware; in other words, it contains no register-level programming.
- It does not call any functions, shared libraries, or drivers that are not thread-safe.
- It uses semaphores or mutexes to protect access to global resources.
- It is called by only one non-reentrant VI.
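The "protect globals with a mutex" criterion is the same idea in any language. A small Python sketch of what the lock is buying you (the names here are invented for illustration):

```python
import threading

counter = 0                       # shared global state
counter_lock = threading.Lock()

def add_many(n):
    # 'counter += 1' is a read-modify-write: without the lock, two threads
    # can read the same old value and one of the updates gets lost.
    global counter
    for _ in range(n):
        with counter_lock:
            counter += 1

threads = [threading.Thread(target=add_many, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000: with the lock, no updates are lost
```

A DLL whose exported functions all guard their shared state this way meets the mutex criterion; one that touches `counter` without the lock does not, and must be confined to a single thread (LabVIEW's UI-thread setting).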
There are also a few documents on the website that you may want to take a look at, if you want some more details on this:
Configuring the Call Library Function Node
An Overview of Accessing DLLs or Shared Libraries from LabVIEW
VI Execution Speed
I hope this helps clear-up some confusion -- best of luck with your application!
Charlie S.
Visit ni.com/gettingstarted for step-by-step help in setting up your system -
I have created a project that consists of several VIs. Only the main VI has a front panel and the others perform functions. The function VIs are dependent on controls on the main VI's front panel. I have several ways of passing the value of the controls. One is to use a global variable and just place it on the dependent VIs. Another option is to strictly connect the terminal from the control to a VI connector block and pass the value directly. My last option is to create a reference of the control and reference it inside the dependent VIs, but this would also require connections to be made to the VI block.
What are the advantages/disadvantages of these options?
-Stephen
5thGen wrote:
I have created a project that consists of several VIs. Only the main VI has a front panel and the others perform functions. The function VIs are dependent on controls on the main VI's front panel. I have several ways of passing the value of the controls.
1) One is to use a global variable and just place it on the dependent VIs.
2) Another option is to strictly connect the terminal from the control to a VI connector block and pass the value directly.
3) My last option is to create a reference of the control and reference it inside the dependent VIs, but this would also require connections to be made to the VI block.
What are the advantages/disadvantages of these options?
-Stephen
1) Globals are evil and introduce race conditions.
2) The sub-VI only gets the value when it is called; updates that occur while the sub-VI is running are not seen by the sub-VI.
3) This uses the property node "value" or "value signaling", both of which run in the user interface thread, which is single-threaded, so you incur a thread-swap hit to performance. You also have a potential for race conditions.
There are various methods for sharing data to/from sub-VIs, including Queues and Action Engines.
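By way of illustration, the Action Engine idea (state plus the operations on it behind one serialized entry point) maps to something like the following Python class; this is a sketch with invented names, with a lock playing the role of the non-reentrant VI's implicit serialization:

```python
import threading

class ActionEngine:
    """State and the operations on it live together; every call is
    serialized, so read-modify-write sequences cannot interleave."""
    def __init__(self):
        self._lock = threading.Lock()
        self._total = 0

    def act(self, action, value=0):
        with self._lock:
            if action == "add":
                self._total += value
            elif action == "reset":
                self._total = 0
            return self._total   # any action also reads the current state

engine = ActionEngine()
threads = [threading.Thread(target=lambda: [engine.act("add", 1) for _ in range(1000)])
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(engine.act("read"))  # 4000
```

Because callers can only go through `act`, there is no way to read the state, compute, and write it back non-atomically, which is exactly the race a bare global permits.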
I hope that helps,
Ben
Ben Rayner
I am currently active on.. MainStream Preppers
Rayner's Ridge is under construction -
Best way to update an indicator
I've attached a very simple vi to demonstrate my question.
I'm making a test using the state-machine architecture. In the test, there is an indicator on the front panel which is to be updated at various places in the test.
My question is ... what is the best way to update the value of the indicator? In the vi, I've wired it directly, used a property node and used a local variable, all of which achieve the same result.
The first way - directly wiring - is obviously the best way if I have access to the input terminal of the indicator (as in state '0').
But what if I need to update the same indicator from the second or third states? What are my options here?
This is only a simple demonstration vi, so please don't say 'move the indicator outside the case structure and wire it through a tunnel'; I know I can do that here. My 'real' vi updates the indicator several times within the state, and I currently do so using property nodes. I read somewhere that this isn't very efficient, which is why I'm asking.
Regards,
Sebster
LabVIEW 8.6, WinXP.
Attachments:
Update an indicator.vi 9 KB
They look the same but they are implemented very differently. See this thread for some performance numbers.
The control terminal is the most efficient technique. If you read the docs on creating XControls there is an explicit warning to only use the terminal and in cases where the indicator gets updated in some conditions and not others, we need to move the terminal into a following case and use a boolean to decide if we are writing to the indicator.
I thought I had this list tagged already, but I could not find it, so here it goes again.
In order of speed, fastest to slowest:
1) Terminal: LV has optimized code that lets the update slip in through a back door.
2) Local variable: requires additional copies, since the data has to be copied to each instance of the local.
3) Property node: has to use the user interface thread to update, which means waiting for the OS to reschedule the work after the thread swap.
Both Locals and Property nodes can result in a Race condition if you use the indicator for data storage. See my signature for a link to avoid Race Conditions using an Action Engine.
Ben
Ben Rayner
I am currently active on.. MainStream Preppers
Rayner's Ridge is under construction -
How to create this code in labview
hello...
please helpe me for this Question
i have create this code "c" in " labview "...
ex c :
if (portc.f1 == 1) {
    portc.f0 = ~portc.f0;
}
ex labview :
if push button ==1 {
round led =~ round led
thanks...
Solved!
Go to Solution.
pjr1121 wrote:
See attached image.
Why do people insist on using property nodes to get the value? It has the same issues with race conditions as the local variable but is extremely slow (it forces a thread swap to the UI thread). Besides, you should be keeping the value of the LED in a shift register.
There are only two ways to tell somebody thanks: Kudos and Marked Solutions
Unofficial Forum Rules and Guidelines
Attachments:
Toggle LED.png 16 KB -
Hi all,
There is something I don't understand. I'm using LV 6i and an NI 6070E DAQ card, with AI and DIO in my program: DIO writes to my LEDs to turn them on and off alternately. I've noticed that if I maximize my front panel, the frequency of my LEDs slows down; if I minimize the front panel, I get back my original frequency. Can anyone explain why that is, and how I can resolve this problem?
Hello,
All AEsulzer was saying is that if you have any hardware dependent code which returns data to you, you should instead replace those functions with something very simple such as just the uniform(0,1) random number generating function. Otherwise, since we don't have your precise hardware setup, we will not be able to execute your code and reproduce the problem. The attached screenshot shows how to navigate the LabVIEW palette to get the random number generator.
In any event, from looking at your code, it appears as though you have some traditional DAQ VIs there. In particular, you noted a problem with your LED's update frequency. The problem could be due to screen updates. Basically, you have a large front panel with many charts to update, and there will be thread swaps to the UI thread to handle corresponding updates. Both the thread swap overhead, and the time that it takes to update the front panel will cause delays for the execution thread handling your DAQ functionality. Check out the following example program I wrote to illustrate the time it takes to perform screen updates:
LabVIEW Execution Time and Screen Updates
I hope this helps explain the behavior!
Best Regards,
JLS
Message Edited by JLS on 03-27-2006 04:12 PM
Best,
JLS
Sixclear
Attachments:
Numeric Palette - Random Number (0-1).JPG 39 KB -
Occurrence refnum - access from a different vi.
Hi,
Labview help enticingly has this to say about Occurrence refnums:
"Occurrence Refnum—Used with the Occurrence functions. Use this type of refnum when you generate an occurrence in one VI but want to set or wait for the occurrence in another VI."
This is exactly what I want to do. I can create an occurrence refnum in vi #1 but I just cannot see how to 'get at' the refnum in vi #2 without using something like a global variable to pass it.
The vi's are all dynamically loaded so (unless I'm missing something else as well, very likely!) it is not possible to wire the occurrence or references.
I'm a 'C' programmer just starting out with Labview so please don't assume I know *anything* for sure.
Ben wrote:
Yes!
And to extend the idea: I have used AEs to share data not only between threads and between exes, but also between machines across a network, by exposing the AE and invoking it using a VI Server call by reference. In that case the call is made on one PC, and the inputs passed to the AE are transported across the network to the target, where they are used to call the AE on the target. The results of the AE method are then returned to the VI Server layer, which in turn passes them back to the calling VI as if the AE were just a normal sub-VI.
The uninitialized SR appears to be implemented as a "static local" (forgive my C jargon, it is very rusty), where the data space for the SR is not recreated on each call but persists from call to call.
Ok, I've read a lot more now and I see that the USR technique seems to be officially sanctioned by NI. I was worried this might be exploiting undocumented behaviour which could change between compiler revisions. "Static local" is fine by me!
The real power of the AE comes into play when you promote it from what is often called an FGV (Functional Global Variable, which functions like a global with a write and a read) to an AE, by encapsulating the operations that act on a data structure (in the SRs) and returning the results to the same SRs, thereby preventing the possibility of a race condition. Race conditions are part of any multithreading environment, and since LV is multithreaded by default, we can't go off willy-nilly and ignore race conditions.
It boils down to the same effect as wrapping every bit of code that accesses the data in some form of lock, whether that be a mutex, semaphore or whatever, but the encapsulation of the functionality with the data is quite elegant. I wasn't planning to ignore race conditions.
In the case you outlined above, you were using controls and indicators to share data and planned to use semaphores to protect them. In theory that could work, but only if you used property nodes exclusively to read and write the objects. Using a local variable of the control would present a potential race condition because of the way locals are implemented (search for more on that topic): they have a "special back door" that uses a buffer to apply the update without getting the UI thread involved.
I have tended to do all reads and writes using property nodes because it isn't possible to wire data into a control. Plus, I read all sorts of warnings about local variables being inefficient and consuming lots of storage space. I didn't see any warnings about property nodes causing a performance hit in the help pages - in fact, I got the impression that the 'value(signal) property' would force the control to update but the 'value' property would not. Having re-read that section I now see that it is referring to the generation of UI events rather than updating the control. The two things just merged in my mind. I also see that Google has *lots* of info on the property node/UI thread switch problem.
I found a lot of material explaining the traditional multi-thread interleaved read-modify-write problem - I assume this is the cause of the race conditions you're referring to?
And regarding the UI thread...
If you did implement your scheme using property node >>> value you would be forcing all of your data acces to operate in the UI thread (which is single threaded) incurring the over-head of thread swapping added to dealing with the bottle-neck of doing most of your work in a single thread. Running that code on an eight core machine would swamp one of the cores and leeve the other seven to twiddle their thumbs.
Yeah, that's bad news. Do you know whether the UI's of all top level windows in one .exe run in the same thread?
"No Sir, Don't like it." (Mr Horse from Ren and Stimpy ?)
Stepping back and doing the big picture thing for you...
LV is often looked at as just another programming language. This is true on the surface, but "the devil is in the details" (Ross Perot). Since LV uses a dataflow paradigm, development in LabVIEW is well served by adding an additional phase to the traditional design work, very similar to the "data normalization" applied to databases ("The key, the whole key, and nothing but the key, so help me Codd."). Although I have no formal training in that area, I am married to a DB guru, so I have an informal awareness of the ideas.
I thought Labview was supposed to be 'programming made easy for electronic engineers'. It's funny, really, because I've only been tinkering with it for a few weeks and I'm already up to my neck in multithreaded programming issues while just trying to put together a fairly trivial program with a few windows and a bit of data. I'm quite enjoying it, but I hadn't expected to be so concerned with what's going on under the hood quite so early. NI should do a course entitled "Advanced Labview for Complete Beginners".
The data analysis work amounts to looking at the data structures I plan to use in an application, doing sanity checks, and drawing up data paths to help me streamline my designs.
Some of the things I look at BEFORE CODING are;
1) Are the values I am grouping together used together? If not I seperate them.
2) What type of operations are perfomed on the data? The processing steps may benefit by me structuring the data as an array of clusters rather a flat cluster etc.
3) How big is the data set? Large data sets get special attention because they can sneak up and bite you. Benchmarking may be involved in this step so I know about perfomance issue ahead of time.
4) How busy is the data path? Busy data paths (DAQ to logging in high speed apps) is of particular concern. I have had to implement duplicate DAQ systems to provide a second data path for information coming from a RT machine or used SCRAMnet to provide the perfomance I need.
5) What all needs to touch the data? This combined wiht the above help to suggest where I am going to put the data. Do I put it in an AE so it can be shared and mashed from multiple locations, Do I put it in a queue to get the raw dat there fast...
I can probably go on a bit more but my point is it is only after the data anaylsis phase that I have a good idea what types of mechiansms will be used for each data structure.
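Point 2 above has a close analogue outside LabVIEW. A minimal Python sketch (an editor's illustration, not LabVIEW code; the channel names are invented) of an "array of clusters" (a list of per-sample records) versus a "flat cluster" (parallel arrays), showing why per-sample operations favour the first layout:

```python
from dataclasses import dataclass

# "Flat cluster": one structure holding parallel arrays.
# Per-sample operations must index every field in lockstep.
flat = {"time": [0.0, 0.1, 0.2], "volts": [1.2, 1.3, 1.1]}

# "Array of clusters": a list of per-sample records.
@dataclass
class Sample:
    time: float
    volts: float

samples = [Sample(t, v) for t, v in zip(flat["time"], flat["volts"])]

# Filtering whole samples is now one expression instead of
# coordinated indexing across several parallel arrays.
spikes = [s for s in samples if s.volts > 1.25]
print([s.time for s in spikes])  # -> [0.1]
```

The reverse also holds: if the dominant operation is a whole-channel computation (an FFT over all the volts), the flat layout with one contiguous array per channel is the better fit, which is exactly why this analysis is worth doing before coding.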
That's all good advice and makes a lot of sense.
Still trying to help,
And I very much appreciate it!
Ben -
How can I optimize this CAN program?
I currently have the program called "GCS message.vi". It reads CAN messages and has the ability to alter them. It then resends the CAN messages out. "GCS message stim and monitor.vi" runs with this, but does the opposite: it sends out messages and then monitors the output of "GCS message.vi". This program runs at only a couple of hertz, but when all the CAN programming is eliminated, the program runs extremely fast. And if all the excess programming is eliminated, leaving only the sending and receiving of CAN messages, then the program runs extremely fast again. I'm wondering what aspect of the program is slowing everything down and how I could program around it.
Notes:
The stim and monitor should be run first, and the start button on the VI should be pressed once the message vi is running.
To check the execution speed, put a value in the "Inject Selected Errors" box and click the button next to it. It will countdown to zero.
Attachments:
GCS.zip 400 KB

Hello,
As you have noted, your problem seems to be purely LabVIEW. When you run with just your CAN commands, things are fast. One thing to note in your program (looking in GCS message.vi) is the large number of property node accesses; each access to a property node causes a thread swap to the user interface thread (in the event that multiple threads have been spawned, which appears likely since you define multiple parallel tasks). Given that you have a relatively complicated GUI there, this may indeed affect performance significantly. In general, you should (if at all possible) use wires to dictate value assignment, and if necessary you may try changing some of your property nodes (the ones that simply change the value of a control or indicator) to local variables to prevent the thread swapping. Now, this may not be the only performance enhancement to make; I would strongly recommend reading the following document to help get a better idea of how to find and correct memory and time performance issues in LabVIEW:
http://zone.ni.com/devzone/conceptd.nsf/webmain/732CEC772AA4FBE586256A37005541D3
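The "property node access forces a swap to the UI thread" cost has a familiar analogue in conventional GUI toolkits, where widgets may only be touched from one thread. A hedged Python sketch (no real GUI here, just a stand-in loop) of the usual remedy: workers post values to a queue, and a single "UI thread" drains it, so the hot path never pays a per-update cross-thread handoff:

```python
import queue
import threading

updates = queue.Queue()

def worker():
    # Hot acquisition/processing loop: never touches the "UI"
    # directly, it only enqueues values (cheap, non-blocking).
    for i in range(5):
        updates.put(("indicator", i))
    updates.put(None)  # sentinel: worker is done

threading.Thread(target=worker).start()

# The single "UI thread": the only place widget state is mutated,
# analogous to keeping property-node traffic off the hot loop.
widget_value = None
while True:
    msg = updates.get()
    if msg is None:
        break
    _, widget_value = msg

print(widget_value)  # -> 4
```

In LabVIEW terms the queue plays the role of a wire or queue feeding a dedicated display loop, instead of scattering Value property writes through the acquisition code.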
Best Regards,
JLS
Sixclear -
hello
this is probably something that has been treated a million times; however, I could not find direct answers in previous threads, so I ask directly.
I am a bit confused by the use of event structures:
First, one would expect that if an event structure is used, say in a while loop, outgoing data would be kept through the sequence without having to wire it through the other events, unlike what is depicted in the pic below. To my understanding, that should be the meaning of "use default values" on the outgoing data tunnels.
However, it seems not to work like that. Why? Is there a way to force LV to do what I want (instead of "use default values", make it "use last input value")?
Wiring through all the tunnels makes event structures very inelegant.
Second (this is probably a very naive one, but I don't like the way I do it): in some events I have the same operations going on as at startup of the VI. In other words, when I initialise my system, I pass through several operations which also exist in the event structure. To make the diagram more elegant, it would be useful to call all those events programmatically a first time. Up to now I do it by programmatically signalling the values of some controls, but there must be a more elegant way, where I could just queue the events needed. Any suggestions?
Message Edited by Gabi1 on 05-17-2007 06:11 PM
Message Edited by Gabi1 on 05-17-2007 06:13 PM
... And here's where I keep assorted lengths of wires...
Attachments:
event structure.PNG 10 KB

Jarrod S. wrote:
Triggering events forces a thread swap to the user interface thread, which can slow down execution. It can also make a redundant copy of the data that has to get stored in the control whose value change was triggered, which you might not need. Enqueuing commands onto a queue does not have these limitations.
To clarify Jarrod's comment, it's important to note that neither the event structure itself nor dynamic events cause a switch to the User Interface Thread. Functions inside the event structure (e.g. property nodes, invoke nodes) can cause a switch to the UI Thread when they operate on UI components.
In that discussion, Jason King points out that:
"There is nothing specific about the event
structure that requires the event-handling case to run in the UI
thread. In fact, there is not any event that will force it to run in
the UI thread"
"Reading from or writing to a front panel terminal or local variable does not cause the diagram to run in the UI thread."
"Any actual processing of user interaction,
however - either looking at user interaction to determine which events
to generate or finishing processing a filter event after the event
structure has had a chance to discard it or modify any of the event
details - requires the UI thread."
"Pretty much anything you do with the reference to
a control or indicator will cause a switch to the UI thread (property
nodes, invoke nodes, etc)"
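The "enqueue commands instead of signalling control values" recommendation also answers the second half of the original question. A hypothetical Python dispatcher (handler names are invented for illustration) that pre-loads initialisation commands onto the same queue the event handlers use, so startup runs through exactly the same dispatch path as later user actions:

```python
from queue import Queue

log = []

def do_init():
    log.append("init")

def do_start():
    log.append("start")

# Command name -> handler, playing the role of event cases.
handlers = {"init": do_init, "start": do_start}

commands = Queue()
# Pre-seed the queue so startup work flows through the same
# dispatch path as user events -- no signalling of control
# values, no redundant copy into a control, no UI-thread swap.
commands.put("init")
commands.put("start")
commands.put("quit")

while True:
    cmd = commands.get()
    if cmd == "quit":
        break
    handlers[cmd]()

print(log)  # -> ['init', 'start']
```

In LabVIEW the equivalent pattern is a queued message handler (or user events carried on one queue), with the init messages enqueued before the handling loop starts.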
Certified LabVIEW Architect
Wait for Flag / Set Flag
Separate Views from Implementation for Strict Type Defs -
Multitasking not working while calling DLL
This is my problem:
I have LV DLL based driver, every function in this driver is
reentrant, every DLL call is reentrant.
I want to communicate with a number of instruments via this driver.
So I programmatically clone the VI which communicates with an instrument.
When I run all cloned VIs in parallel to communicate with all
instruments, the DLL calls are executed sequentially.
Why is LabVIEW not able to dynamically generate new threads?

Greg McKaskle wrote in message news:<[email protected]>...
> > I have LV DLL based driver, every function in this driver is
> > reentrant, every DLL call is reentrant.
> > I want to communicate with a number of instruments via this driver.
> > So I programmatically clone the VI which communicates with an instrument.
> >
> > When I run all cloned VIs in parallel to communicate with all
> > instruments, the DLL calls are executed sequentialy.
> >
> > Why is LabVIEW not able to dynamically generate new threads?
>
> It sounds like you have most of the correct settings, but let me review
> them anyway. First, make sure that threading is on in the
> Tools>>Options on the Execution page. It defaults to on, but sometimes
> gets turned off for compatibility with ActiveX or other nonthread
> friendly programs.
>
> Second, you say that your DLL calls are reentrant. If that means that
> the Call Library Function nodes have their checkbox set to show that the
> call can be made reentrantly, and making their color yellow instead of
> orange, then step two is taken care of. Otherwise make this change.
> You also mention that you cloned VIs. If you are using the same VI in
> more than one place on the diagram, they default to making a critical
> section -- allowing only one caller at a time. If you want to allow
> reentrant execution, this is set in VI Properties.
>
> Third, which I think is what is going wrong in your case, you need to
> have enough threads or execution systems to run your VI. You ask why LV
> doesn't generate dynamic threads. The answer is that LV preallocates
> threads that the VI needs to run and schedules nodes on those threads
> according to dataflow. Creating and destroying threads is actually
> quite expensive on most OSes, so that is not the mechanism LV uses.
>
> Here are several solutions for your system. Each VI and subVI can be
> set to run in a particular execution system and at a particular priority
> using the VI Properties>>Execution page. On a single processor computer
> each execution system priority cell defaults to having one thread. On a
> dual processor it defaults to having two, a quad defaults to four. In
> most cases this ends up being sufficient to keep the processors busy
> without causing excessive thread swaps.
>
> In the case where your threads are being consumed by a DLL call and
> therefore cannot multitask with the other dataflow nodes, you can either
> set your VIs to run in the different execution systems, or you can make
> the Standard execution system have more threads.
>
> To set the execution system of a VI, use the VI Properties. To change
> the threads per execution system, open the
> vi.lib/utilities/sysinfo.llb/threadcfg.vi or something very close to
> that name. This VI shows the threads allocated per cell, and hitting
> the config button at the bottom lets you change it. Note that as of LV6
> these numbers are the maximum limit allocated by LV and it doesn't
> allocate the threads in an execution cell until a VI is loaded that
> needs that system/priority.
>
> Now that I've told you how to do it, I'd recommend doing a very quick
> experiment with the execution system settings to see if there is any
> advantage to having multiple threads active. If the various threads are
> sleeping/waiting for hardware, this may indeed allow other threads to
> make progress, but if they are doing a spin-lock or heavy computation,
> there really isn't any benefit.
>
> Greg McKaskle
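Greg's central point, that LabVIEW preallocates a fixed pool of threads per execution system so blocking DLL calls can serialise otherwise-parallel callers, mirrors the behaviour of any fixed thread pool. A Python sketch (an editor's analogy, not LabVIEW) with an assumed two-thread pool: four blocking "DLL calls" need two rounds, because only two can hold a worker at once:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def blocking_dll_call(i):
    # Stands in for a DLL call that occupies its thread while it runs.
    time.sleep(0.2)
    return i

start = time.monotonic()
# A pool of 2 preallocated workers, like an execution system
# configured with 2 threads: no threads are created on demand.
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(blocking_dll_call, range(4)))
elapsed = time.monotonic() - start

print(results)          # -> [0, 1, 2, 3]
print(elapsed >= 0.35)  # two rounds of ~0.2 s, not one -> True
```

Raising `max_workers` here is the analogue of raising the threads-per-execution-system count with threadconfig.vi; as Greg notes, it only helps when the calls actually sleep or wait rather than compute.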
Dear Greg,
Thank you for your answer. I checked all the settings and I am sure
that I have them correct, as you wrote. I tried the VI for
changing the allocation of threads, but the number of threads is too
small for me.
If I am right, LabVIEW's maximum thread count is the number of
execution systems multiplied by the available priorities (App. Note
114). What threadconfig.vi actually does, and how, is clear only to
you, since this VI is locked (by the way, the number of locked VIs
increases with each LabVIEW version :-( - why is this?).
I have from 50 to 200 points, with the same instrument at every point. So I
need up to 200 tasks running in parallel to communicate with these
instruments via the DLL-based instrument driver; each parallel task has a
unique instrument session, but calls different functions from the
same DLL. Right now I am not able to do this in LabVIEW; the only way is to
use a LabVIEW-native driver, but in that case I lose the advantages of the
IVI driver. Do you have any idea how to use an IVI driver in 200 parallel
tasks?
Best regards
Jiri