LabVIEW Embedded - Performance Testing - Different Platforms
Hi all,
I've done some performance testing of LabVIEW on various microcontroller development boards (LabVIEW Embedded for ARM) as well as on a cRIO 9122 Real-time Controller (LabVIEW Real-time) and a Dell Optiplex 790 (LabVIEW desktop). You may find the results interesting. The full report is attached and the final page of the report is reproduced below.
Test Summary

Platform        µC MIPS   Single Loop        Single Loop   Dual Loop          Dual Loop
                          Effective MIPS     Efficiency    Effective MIPS     Efficiency
MCB2300              65             31.8            49%               4.1            6%
LM3S8962             60             50.0            83%               9.5           16%
LPC1788             120             80.9            56%              12.0            8%
cRIO 9122           760            152.4            20%             223.0           29%
Optiplex 790       6114           5533.7            91%            5655.0           92%
Analysis
For microcontrollers, single-loop programming can retain much of the processing power (49 to 83% in these tests). Such programming requires that all I/O be non-blocking and that interrupts be used. Multiple-loop programming is not recommended, except for simple applications running at loop rates below 200 Hz, since the vast majority of the processing power is consumed by LabVIEW/OS overhead.
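To illustrate the single-loop idea, here is a minimal sketch (in Python, not LabVIEW, and not NI's implementation): every activity is written as a non-blocking state machine that is stepped from one tight loop, so no OS context switches are needed between "loops". The task structure and names are illustrative assumptions.

```python
def make_counter_task(name, limit, results):
    """Each task does one small non-blocking unit of work per call."""
    state = {"count": 0}
    def step():
        if state["count"] < limit:
            state["count"] += 1      # one bounded, non-blocking work unit
            return True              # still pending
        results[name] = state["count"]
        return False                 # finished
    return step

def run_single_loop(tasks):
    """One loop services all tasks round-robin until none remain."""
    pending = list(tasks)
    while pending:
        pending = [task for task in pending if task()]

results = {}
run_single_loop([make_counter_task("fast_io", 3, results),
                 make_counter_task("slow_io", 5, results)])
```

The same pattern applies to real I/O: each step polls a device register or checks an interrupt-set flag and returns immediately rather than blocking.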
For cRIO, there is much more processing power available; however, approximately 70 to 80% of it is lost to LabVIEW/OS overhead. The end result is that what can be achieved is limited.
For the Desktop, we get the best of both worlds: extraordinary processing power and high efficiency.
Speculation on why LabVIEW Embedded for ARM and LabVIEW Real-time performance is so poor puts the blame on excessive context switching. Each context switch typically takes 150 to 200 machine cycles, and these appear to be inserted on every loop iteration. This means that tight loops (fast, with little computation) consume enormous amounts of processing power on overhead. If this is the case, an option to force a context switch only every Nth loop iteration would be useful.
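A back-of-envelope model of this speculation: if the runtime inserts a context switch on every iteration, a tight loop pays the switch cost every time, while switching only every Nth iteration amortises it. The cycle counts below are assumptions derived from the 150 to 200 cycle estimate above.

```python
SWITCH_CYCLES = 175   # assumed cost of one context switch (per the estimate above)
WORK_CYCLES = 10      # assumed useful work per tight-loop iteration

def efficiency(yield_every):
    """Fraction of cycles doing useful work when switching every Nth iteration."""
    cycles_per_n = yield_every * WORK_CYCLES + SWITCH_CYCLES
    return yield_every * WORK_CYCLES / cycles_per_n

per_iteration = efficiency(1)     # switch every iteration: about 5% useful work
per_hundred = efficiency(100)     # switch every 100th: about 85% useful work
```

The roughly 5% figure for a tight loop is in the same range as the 6 to 16% dual-loop efficiencies measured above, which is what makes the speculation plausible.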
Conclusion

                               LabVIEW Embedded      LabVIEW Real-time     LabVIEW Desktop
                               for ARM               for cRIO/sbRIO        for Windows
Development Environment Cost   High                  Reasonable            Reasonable
Execution Platform Cost        Very low              Very High / High      Low
Processing Power               Low (current Tier 1)  Medium                Enormous
LabVIEW/OS efficiency          Low                   Low                   High
OEM friendly                   Yes+                  No                    Yes
LabVIEW Desktop has many attractive features. This explains why LabVIEW Desktop is so successful and accounts for the vast majority of National Instruments’ software sales (and consequently drives the vast majority of hardware sales). It is National Instruments’ flagship product and the precursor to the other LabVIEW offerings. The execution platform is powerful, available in various form factors from various sources, and competitively priced.
LabVIEW Real-time on a cRIO/sbRIO is a lot less attractive. To make this platform attractive, the execution platform cost needs to be vastly decreased while the raw processing power is increased. It would also be beneficial to examine why the LabVIEW/OS overhead is so high. A single plug-in board no larger than 75 x 50 mm (3” x 2”) with a single-unit price under $180 would certainly make the sbRIO a viable execution platform. The peripheral connectors would not be part of the board and would be accessible via a connector. A developer motherboard could house the various connectors, but these are not needed when incorporated into the final product. The recently released Xilinx Zynq would be a great chip to use ($15 in volume, 2 x ARM Cortex-A9 at 800 MHz (4,000 MIPS), FPGA fabric and lots more).
LabVIEW Embedded for ARM is very OEM friendly, with development boards that are open source with circuit diagrams available. To make this platform attractive, new, more capable Tier 1 boards will need to be introduced, mainly to counter the large LabVIEW/OS overhead. As before, these target boards would come from microcontroller manufacturers, making them inexpensive and open source. It would also be beneficial to examine why the LabVIEW/OS overhead is so high. What is required now is another Tier 1 board (e.g. the DK-LM3S9D96 (ARM Cortex-M3, 80 MHz/96 MIPS)). Further Tier 1 boards should be targeted every two years (e.g. the BeagleBoard-xM (ARM Cortex-A8, 1000 MHz/2000 MIPS)) to keep LabVIEW Embedded for ARM relevant.
Attachments:
LabVIEW Embedded - Performance Testing - Different Platforms.pdf 307 KB
I've got to say though, it would really be good if NI could further develop the ARM embedded toolkit.
In the industry I'm in, and probably many others, control algorithm development and testing occurs in LabVIEW. If you have a good LabVIEW developer or team, you'll end up with fairly solid, stable and tested code. But what happens now, once the concept is validated, is that all this is thrown away and the C programmers create the embedded code that will go into the real product.
The development cycle starts from scratch.
It would be amazing if you could strip down that code and deploy it onto ARM and expect it not to be too inefficient. Development costs and time to market would go way down. BUT, especially in the industry I presently work in, the final product's COST is extremely important. (These being consumer products: cheaper micro, cheaper product.)
These concerns weigh HEAVILY. I didn't get a warm fuzzy feeling about the ARM toolkit for my application. I'm sure it's got its niches, but just imagine what could happen if some more work went into it to make it truly appealing to a wider market...
Similar Messages
-
Using Test Setting file to run web performance tests in different environments
Hello,
I have a set of web performance tests that I want to be able to run in different environments.
Currently I have csv file containing the url of the load balancer of the particular environment I want to run the load test containing the web performance tests in, and to run it in a different environment I just edit this csv file.
Is it possible to use the test settings file to point the web performance tests at a particular environment?
I am using VSTS 2012 Ultimate.
Thanks.

Instead of using the testsettings, I suggest using the "Parameterize web servers" command (found via the context menu on the web test, or via one of the icons). The left-hand column then suggests context parameter names for the parameterised web server URLs. It should be possible to use data source entries instead. You may need to wrap the data source accesses in doubled curly braces if editing via the "Parameterize web servers" window.
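As a hedged sketch of the idea (plain Python rather than Visual Studio, purely to make it concrete): keep each environment's load-balancer base URL in a data source (a CSV here) and resolve the parameterised web server at run time, instead of hand-editing the file per environment. The file layout, column names and the {{WebServer1}} parameter name are assumptions for illustration.

```python
import csv
import io

CSV_TEXT = """environment,base_url
staging,https://staging.example.com
production,https://prod.example.com
"""

def base_url_for(environment, csv_text=CSV_TEXT):
    """Look up the load-balancer URL for the requested environment."""
    for row in csv.DictReader(io.StringIO(csv_text)):
        if row["environment"] == environment:
            return row["base_url"]
    raise KeyError(environment)

def resolve(request_template, environment):
    """Substitute the doubled-curly-brace context parameter in a request URL."""
    return request_template.replace("{{WebServer1}}", base_url_for(environment))

url = resolve("{{WebServer1}}/api/login", "staging")
```

Switching environments then means changing one input value, not editing the data file.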
Regards
Adrian -
Is there any tool uses Selenium tests for performance test
Hello!
I am looking for a tool to use for performance testing.
I have Selenium test scenarios and I want to use them for performance tests.
Which tools use Selenium tests for performance tests or
Which are the best tools to test a JSF application? What experiences have you had?

Hi,
If you have the test kit installed, i.e. the CTK, then you will find it under the corresponding test folder. For example, I have it under C:\Program Files (x86)\WindowsEmbeddedCompact7TestKit\tests\target\
The test harness files, tux and kato, can be found under
C:\Program Files (x86)\WindowsEmbeddedCompact7TestKit\harnesses\target\
These two files, tux and kato, are required for running any tests on Windows Embedded Compact platforms.
Depending on your platform, you may choose to use the corresponding binaries in the subdirectory.
Regards,
Balaji. -
LabVIEW Embedded/ARM with password = Crash to Desktop
Hi,
I just installed the LabVIEW Embedded for ARM evaluation kit with the LM3S8962.
A basic project will compile and download perfectly. (Amazing how well this works!)
However, as I'm primarily a SW dev, I am a firm believer in GOOP and other OO technologies. I can create the class and call it in the main VI, but when I go to compile the project, the compiler asks me for the password to a deeply embedded VI (which I do not have) and, after failing to validate or canceling, LabVIEW will just disappear.
If I create and use a native LVOOP class, it'll compile and run, however pass-by-value is simply not an option.
Test Cases:
1) Naked Main VI: compiles, runs OK
2) Main VI calling protected VI: will not compile if VI remains locked ('cancel' button or invalid password entered) and will sometimes CTD
3) Main VI calling LVOOP class: compiles, runs OK
4) Main VI calling GOOP class: CTD after OK/Cancel in password dialog
Versions:
Windows XP 32bit Professional SP3
LabVIEW 2009 9.0f1 Evaluation
Endevo GDS 3.51
This really looks like an issue with password-protected VIs.
Has anyone seen this sort of problem before?
Thanks,
Tony
P.S. Will try to attach it to another post since the forum keeps giving me an error when posting with an attachment...

Claire,
I understand why the builder asks for the password.
I also understand that the LabVIEW Application Builder does not ask for the password, it instead compiles the VI into the Runtime bytecode whether password protected or not.
If this is indeed the case, then the LabVIEW Application Builder generated bytecode could then be used to "expose the functionality of the VI."
However, that's just not the case.
If you've ever looked at the C code generated from a single VI, then you might understand that the C code is in no way understandable or recognizable as to what is really happening in the VI.
I guess if you personally worked on or made great contributions to the LabVIEW runtime engine you might possibly - with no small amount of time and effort - be able to gain some small understanding of what's going on in the generated C code. However, for the average (or even advanced) C programmer, it's obfuscated to the point of incomprehensibility.
I've attached the C code generated from the Naked VI for reference. It's 45Kb of structures, lookup tables, and functions that - while they do perform what the VI does - are in no way recognizable as a while loop, a couple shift registers, addition nodes, split/join number nodes, and VI calls.
While, on the surface, that answer seems plausible, I'm afraid it is indeed nonsensical. Perhaps I could have a chat with the person making this decision?
Thanks for your time,
Tony
Attachments:
Main_VI___Naked.c 45 KB -
MSI Z97 GAMING 3 Review--Performance Testing
After the previous hardware and software introduction, I believe the Z97 GAMING 3 will meet gamers’ expectations.
The Z97 GAMING 3 is integrated with Killer E2200 LAN, Audio Boost 2, an M.2 interface and the normal array of connections;
it is truly a good gaming motherboard. Could all these features offer great performance and a good experience?
Today I will test the performance of the Z97 GAMING 3 and see how good it is.
MSI Z97 GAMING 3 Testing
My test platform is MSI Z97 GAMING 3, Intel ® Core i7-4770K and MSI GeForce GTX 750 graphics card. The test
consists of two parts:
CPU Performance: Super PI, PC Mark Vantage and Cinebench R11.5.
GAMING Performance: 3DMARK 11, Resident Evil 6 Benchmark and FFXIV Benchmark.
Test Part 1
CPU : Intel Core i7-4770K @ 3.5 GHz
CPU Cooler : Thermaltake TT-8085A
Motherboard : MSI Z97 GAMING 3
RAM : Corsair DDR 3-1600 4GB X 2
PSU : Cooler Master 350W
OS : Windows 7 64 bit
Basic performance testing (CPU setting by default)
CPU Mark Score : 679.
Super PI 32M Result – 8m53.897s.
Graphics Performance Testing:3DMark 11
3DMark 11 is designed to measure PC’s performance. It makes extensive use of all the new features in DirectX 11
including Tessellation, Compute Shader and Multi-threading.
Testing the Intel ® HD4600 iGPU in 3DMark 11 Extreme mode, the result is a score of X385.
The Performance mode test score is P1511.
System Performance:PCMark Vantage
PCMark Vantage is a PC analysis and benchmarking tool consisting of a mix of application-based and
synthetic tests that measure system performance.
From the test results, the score of Z97 GAMING 3 with Intel ® HD4600 iGPU is 11,946.
MSI GeForce GTX 750 Testing
Test Part 2
CPU : Intel Core i7-4770K @ 3.5 GHz
CPU Cooler : Thermaltake TT-8085A
Motherboard : MSI Z97 GAMING 3
Graphics Card:MSI GeForce GTX 750
RAM : Corsair DDR 3-1600 4GB X 2
PSU : Cooler Master 350W
OS : Windows 7 64 bit
Graphics Performance Testing:3DMark 11
The Z97 GAMING 3 with GeForce GTX 750 scores X1653 in 3DMark 11 Extreme test mode. The Performance
mode test score is P5078.
System Performance:PC Mark Vantage
From the test results, Z97 GAMING 3 with GeForce GTX 750 scores 11,518.
System Performance:Cinebench R11.5
Cinebench is software developed by MAXON, the maker of Cinema 4D. Cinebench can test CPU and GPU performance with
different processes at the same time. For the CPU part, Cinebench tests CPU performance by rendering a photorealistic 3D
scene. For the GPU part, Cinebench tests GPU performance based on OpenGL capability.
Main Processor Performance (CPU): the test scenario uses all of your system's processing power to render a photorealistic
3D scene. Graphics Card Performance (OpenGL): this procedure uses a complex 3D scene depicting a car chase, which
measures the performance of your graphics card in OpenGL mode.
In Cinebench R11.5 test, MSI Z97 GAMING 3 with GeForce GTX 750 multi-core test is 6.87pts; OpenGL score is 73.48 fps.
Z97 GAMING 3 with HD 4600 and GeForce GTX 750 in the GAME Benchmark Test
For game performance testing, I will use Resident Evil 6 and FFXI Benchmark with the same platform.
Resident Evil 6 Benchmark
CPU: Core i7-4770K
Game resolution setting: 1920X1080
Other setting: Default
In the Z97 GAMING 3 with Intel® HD4600 iGPU platform, score:1175 (Rank D)
In the Z97 GAMING 3 with GeForce GTX 750 platform, score: 5874 (Rank A)
I use Fraps tool to record FPS status during benchmark testing.The Z97 GAMING 3 with GeForce GTX 750 average
FPS is 202. The Z97 GAMING 3 with Intel® HD4600 iGPU average FPS is 32.
FFXIV Benchmark
CPU: Core i7-4770K
Game resolution setting: 1920X1080
Other setting: Default
At 1920X1080 resolution, the Intel® HD4600 iGPU score is only 910.
However, the GeForce GTX 750 testing score is 4167. According to the official classification system, a score
between 3000 and 4499 means high performance.
I used the Fraps tool to record FPS status during benchmark testing.
The GeForce GTX 750 average FPS is 111; the Intel® HD4600 iGPU average FPS is 19.
Test Summary
The MSI Z97 GAMING 3 is not very expensive. It has many features specially designed for the gaming experience,
and it performs well in benchmarks. Even at 1920x1200 resolution and high-quality display settings, the Z97 GAMING 3
with an Intel Core i7-4770K and MSI GeForce GTX 750 can easily handle any kind of game. The FPS of this system is
higher than 60, and users will enjoy lag-free gaming. It is really a good and affordable choice for gamers.

Thanks for sharing; there are not many reviews of the Z97 GAMING 3.
-
Log file sync top event during performance test -av 36ms
Hi,
During the performance test for our product before deployment into production, I see "log file sync" on top, with an Avg wait (ms) of 36, which I feel is too high.
Avg
wait % DB
Event Waits Time(s) (ms) time Wait Class
log file sync 208,327 7,406 36 46.6 Commit
direct path write 646,833 3,604 6 22.7 User I/O
DB CPU 1,599 10.1
direct path read temp 1,321,596 619 0 3.9 User I/O
log buffer space 4,161 558 134 3.5 Configuration

Although testers are not complaining about the performance of the application, we DBAs are expected to be proactive about any bad signals from the DB.
I am not able to figure out why "log file sync" is having such slow response.
Below is the snapshot from the load profile.
Snap Id Snap Time Sessions Curs/Sess
Begin Snap: 108127 16-May-13 20:15:22 105 6.5
End Snap: 108140 16-May-13 23:30:29 156 8.9
Elapsed: 195.11 (mins)
DB Time: 265.09 (mins)
Cache Sizes Begin End
~~~~~~~~~~~ ---------- ----------
Buffer Cache: 1,168M 1,136M Std Block Size: 8K
Shared Pool Size: 1,120M 1,168M Log Buffer: 16,640K
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~ --------------- --------------- ---------- ----------
DB Time(s): 1.4 0.1 0.02 0.01
DB CPU(s): 0.1 0.0 0.00 0.00
Redo size: 607,512.1 33,092.1
Logical reads: 3,900.4 212.5
Block changes: 1,381.4 75.3
Physical reads: 134.5 7.3
Physical writes: 134.0 7.3
User calls: 145.5 7.9
Parses: 24.6 1.3
Hard parses: 7.9 0.4
W/A MB processed: 915,418.7 49,864.2
Logons: 0.1 0.0
Executes: 85.2 4.6
Rollbacks: 0.0 0.0
Transactions: 18.4

Some of the top background wait events:
^LBackground Wait Events DB/Inst: Snaps: 108127-108140
-> ordered by wait time desc, waits desc (idle events last)
-> Only events with Total Wait Time (s) >= .001 are shown
-> %Timeouts: value of 0 indicates value was < .5%. Value of null is truly 0
Avg
%Time Total Wait wait Waits % bg
Event Waits -outs Time (s) (ms) /txn time
log file parallel write 208,563 0 2,528 12 1.0 66.4
db file parallel write 4,264 0 785 184 0.0 20.6
Backup: sbtbackup 1 0 516 516177 0.0 13.6
control file parallel writ 4,436 0 97 22 0.0 2.6
log file sequential read 6,922 0 95 14 0.0 2.5
Log archive I/O 6,820 0 48 7 0.0 1.3
os thread startup 432 0 26 60 0.0 .7
Backup: sbtclose2 1 0 10 10094 0.0 .3
db file sequential read 2,585 0 8 3 0.0 .2
db file single write 560 0 3 6 0.0 .1
log file sync 28 0 1 53 0.0 .0
control file sequential re 36,326 0 1 0 0.2 .0
log file switch completion 4 0 1 207 0.0 .0
buffer busy waits 5 0 1 116 0.0 .0
LGWR wait for redo copy 924 0 1 1 0.0 .0
log file single write 56 0 1 9 0.0 .0
Backup: sbtinfo2 1 0 1 500 0.0 .0

During a previous perf test, things didn't look this bad for "log file sync". A few sections from the comparison report (awrddprt.sql):
{code}
Workload Comparison
~~~~~~~~~~~~~~~~~~~ 1st Per Sec 2nd Per Sec %Diff 1st Per Txn 2nd Per Txn %Diff
DB time: 0.78 1.36 74.36 0.02 0.07 250.00
CPU time: 0.18 0.14 -22.22 0.00 0.01 100.00
Redo size: 573,678.11 607,512.05 5.90 15,101.84 33,092.08 119.13
Logical reads: 4,374.04 3,900.38 -10.83 115.14 212.46 84.52
Block changes: 1,593.38 1,381.41 -13.30 41.95 75.25 79.38
Physical reads: 76.44 134.54 76.01 2.01 7.33 264.68
Physical writes: 110.43 134.00 21.34 2.91 7.30 150.86
User calls: 197.62 145.46 -26.39 5.20 7.92 52.31
Parses: 7.28 24.55 237.23 0.19 1.34 605.26
Hard parses: 0.00 7.88 100.00 0.00 0.43 100.00
Sorts: 3.88 4.90 26.29 0.10 0.27 170.00
Logons: 0.09 0.08 -11.11 0.00 0.00 0.00
Executes: 126.69 85.19 -32.76 3.34 4.64 38.92
Transactions: 37.99 18.36 -51.67
First Second Diff
1st 2nd
Event Wait Class Waits Time(s) Avg Time(ms) %DB time Event Wait Class Waits Time(s) Avg Time
(ms) %DB time
SQL*Net more data from client Network 2,133,486 1,270.7 0.6 61.24 log file sync Commit 208,355 7,407.6
35.6 46.57
CPU time N/A 487.1 N/A 23.48 direct path write User I/O 646,849 3,604.7
5.6 22.66
log file sync Commit 99,459 129.5 1.3 6.24 log file parallel write System I/O 208,564 2,528.4
12.1 15.90
log file parallel write System I/O 100,732 126.6 1.3 6.10 CPU time N/A 1,599.3
N/A 10.06
SQL*Net more data to client Network 451,810 103.1 0.2 4.97 db file parallel write System I/O 4,264 784.7 1
84.0 4.93
-direct path write User I/O 121,044 52.5 0.4 2.53 -SQL*Net more data from client Network 7,407,435 279.7
0.0 1.76
-db file parallel write System I/O 986 22.8 23.1 1.10 -SQL*Net more data to client Network 2,714,916 64.6
0.0 0.41
{code}
To sum it up:
1. Why is the IO response taking such a hit during the new perf test? Please suggest.
2. Does the number of DB writers impact the "log file sync" wait event? We have only one DB writer, as the number of CPUs on the host is only 4.
{code}
select * from v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
PL/SQL Release 11.1.0.7.0 - Production
CORE 11.1.0.7.0 Production
TNS for HPUX: Version 11.1.0.7.0 - Production
NLSRTL Version 11.1.0.7.0 - Production
{code}
Please let me know if you would like to see any other stats.
Edited by: Kunwar on May 18, 2013 2:20 PM

1. A snapshot interval of 3 hours always generates meaningless results.
Below are some details from the 1 hour interval AWR report.
Platform CPUs Cores Sockets Memory(GB)
HP-UX IA (64-bit) 4 4 3 31.95
Snap Id Snap Time Sessions Curs/Sess
Begin Snap: 108129 16-May-13 20:45:32 140 8.0
End Snap: 108133 16-May-13 21:45:53 150 8.8
Elapsed: 60.35 (mins)
DB Time: 140.49 (mins)
Cache Sizes Begin End
~~~~~~~~~~~ ---------- ----------
Buffer Cache: 1,168M 1,168M Std Block Size: 8K
Shared Pool Size: 1,120M 1,120M Log Buffer: 16,640K
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~ --------------- --------------- ---------- ----------
DB Time(s): 2.3 0.1 0.03 0.01
DB CPU(s): 0.1 0.0 0.00 0.00
Redo size: 719,553.5 34,374.6
Logical reads: 4,017.4 191.9
Block changes: 1,521.1 72.7
Physical reads: 136.9 6.5
Physical writes: 158.3 7.6
User calls: 167.0 8.0
Parses: 25.8 1.2
Hard parses: 8.9 0.4
W/A MB processed: 406,220.0 19,406.0
Logons: 0.1 0.0
Executes: 88.4 4.2
Rollbacks: 0.0 0.0
Transactions: 20.9
Top 5 Timed Foreground Events
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Avg
wait % DB
Event Waits Time(s) (ms) time Wait Class
log file sync 73,761 6,740 91 80.0 Commit
log buffer space 3,581 541 151 6.4 Configurat
DB CPU 348 4.1
direct path write 238,962 241 1 2.9 User I/O
direct path read temp 487,874 174 0 2.1 User I/O
Background Wait Events DB/Inst: Snaps: 108129-108133
-> ordered by wait time desc, waits desc (idle events last)
-> Only events with Total Wait Time (s) >= .001 are shown
-> %Timeouts: value of 0 indicates value was < .5%. Value of null is truly 0
Avg
%Time Total Wait wait Waits % bg
Event Waits -outs Time (s) (ms) /txn time
log file parallel write 61,049 0 1,891 31 0.8 87.8
db file parallel write 1,590 0 251 158 0.0 11.6
control file parallel writ 1,372 0 56 41 0.0 2.6
log file sequential read 2,473 0 50 20 0.0 2.3
Log archive I/O 2,436 0 20 8 0.0 .9
os thread startup 135 0 8 60 0.0 .4
db file sequential read 668 0 4 6 0.0 .2
db file single write 200 0 2 9 0.0 .1
log file sync 8 0 1 152 0.0 .1
log file single write 20 0 0 21 0.0 .0
control file sequential re 11,218 0 0 0 0.1 .0
buffer busy waits 2 0 0 161 0.0 .0
direct path write 6 0 0 37 0.0 .0
LGWR wait for redo copy 380 0 0 0 0.0 .0
log buffer space 1 0 0 89 0.0 .0
latch: cache buffers lru c 3 0 0 1 0.0 .0

2. The log file sync is a result of commit --> you are committing too often, maybe even every individual record.
Thanks for the explanation. Actually my question is WHY it is so slow (avg wait of 91 ms).
3. Your IO subsystem hosting the online redo log files can be a limiting factor.
We don't know anything about your online redo log configuration
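A runnable illustration of point 2 above: every COMMIT forces a synchronous redo-log flush, so committing each row multiplies "log file sync" waits, while batching commits cuts the number of flushes. sqlite3 merely stands in for Oracle to keep the sketch self-contained; the principle is the same.

```python
import sqlite3

def insert_rows(conn, values, batch_size):
    """Insert values, committing once per batch; return the commit count."""
    commits = 0
    cur = conn.cursor()
    for i, value in enumerate(values, start=1):
        cur.execute("INSERT INTO t(v) VALUES (?)", (value,))
        if i % batch_size == 0:
            conn.commit()            # one log flush per batch
            commits += 1
    conn.commit()                    # flush any trailing partial batch
    return commits + 1

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(v INTEGER)")
row_by_row = insert_rows(conn, range(1000), batch_size=1)    # 1001 commits/flushes
batched = insert_rows(conn, range(1000), batch_size=100)     # 11 commits/flushes
```

If the application can tolerate it, fewer and larger transactions directly reduce the number of waits charged to log file sync.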
Below is my redo log configuration.
GROUP# STATUS TYPE MEMBER IS_
1 ONLINE /oradata/fs01/PERFDB1/redo_1a.log NO
1 ONLINE /oradata/fs02/PERFDB1/redo_1b.log NO
2 ONLINE /oradata/fs01/PERFDB1/redo_2a.log NO
2 ONLINE /oradata/fs02/PERFDB1/redo_2b.log NO
3 ONLINE /oradata/fs01/PERFDB1/redo_3a.log NO
3 ONLINE /oradata/fs02/PERFDB1/redo_3b.log NO
6 rows selected.
04:13:14 perf_monitor@PERFDB1> col FIRST_CHANGE# for 999999999999999999
04:13:26 perf_monitor@PERFDB1> select * from v$log;
GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIME
1 1 40689 524288000 2 YES INACTIVE 13026185905545 18-MAY-13 01:00
2 1 40690 524288000 2 YES INACTIVE 13026185931010 18-MAY-13 03:32
3 1 40691 524288000 2 NO CURRENT 13026185933550 18-MAY-13 04:00

Edited by: Kunwar on May 18, 2013 2:46 PM -
Case study: "Large?" labview programs flooded with different VIT's
Type of application: computer with loads of individual hardware connected or other software (either onsite (different buses) or offsite (satellite/GSM/GPRS/radio etc.)).
Hardware description: little data "RPM" but communications to all devices are intact. More "RPM" when many VITs are involved.
Size: 1000+ VITs in memory (goal). Total software has been tested and simulated with 400.
I'm posting this after reading this thread (and actually I can't sleep and am bored as hell).
Note: I do not use LVOOP (but sure, post OOP examples; I am starting to learn more and more by the day.)
Things I will discuss are:
Case 1: Memory usage using a plugin architecture
CASE 2: Memory usage using VITs (!)
CASE 3: Updating datastructures
CASE 4: Shutdown of the whole system
CASE 5: Stability & health monitoring
CASE 6: Inifiles
CASE 7: When the hardware is getting crappy
Total application overview:
We have a main application. This main application is mainly empty as hell, and only holds a plugin functionality (to register and administer plugins) and an architecture with the following items:
Queue state machine for main application error handling
Queue state machine for status messages
Queue state machine for updating virtual variables
Event state machine for GUI
Some other stuff
Other global functionality is:
User logins, user configurations and unique access levels
Different nice tools like the good old BootP and other juicy stuff
Supervision of variables (like the NI tag engine, but here we have our own datastructures)
Generation of virtual variables (so that the user can configure easy mathematical functions and combine existing tags)
Licensing of plugins (hell, we free-lance programmers need some money too, don't we?)
Handles all communication between plugins themselves, or directly to a plugin or vice versa.
And now we won't talk about that (or marketing) the main application.
Message Edited by Corny on 01-20-2010 08:52 AM

CASE 3: updating datastructures:
As we do NOT use clusters here (that would just be consuming), we only use a 1D array of data that needs to be updated in different functional globals. If the number of VITs grows to the point where updating these datastructures becomes the bottleneck, this would cause delays. And since in this example we use 250 serial interfaces (lol), we do not want to disrupt that with any delays. When this happens, does anyone know a good solution to transfer data?
A thought: perhaps sending it down to the plugin and letting the plugin handle it; this should save some time, but then again, if more VITs are added this would again become a bottleneck and the queue would fill up after a while, unable to process fast enough. Any opinions?
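One hedged way to picture the batching option (a Python sketch, not the poster's actual LabVIEW architecture; queue usage and names are assumptions): coalesce many small tag updates into a single queued batch, so the shared 1D data array is touched once per batch rather than once per update.

```python
from queue import Queue

def enqueue_updates(q, updates, batch_size):
    """Producer side: group (index, value) updates into batches."""
    batch = []
    for update in updates:
        batch.append(update)
        if len(batch) >= batch_size:
            q.put(batch)             # one enqueue per batch, not per update
            batch = []
    if batch:
        q.put(batch)                 # flush the trailing partial batch
    q.put(None)                      # sentinel: no more updates

def apply_updates(q, data):
    """Consumer side: apply each whole batch to the shared array in one pass."""
    while True:
        batch = q.get()
        if batch is None:
            return
        for index, value in batch:
            data[index] = value

data = [0] * 8
q = Queue()
enqueue_updates(q, [(0, 10), (1, 11), (2, 12), (3, 13)], batch_size=2)
apply_updates(q, data)
```

Batching trades a little update latency for far fewer queue operations and functional-global accesses, which is usually the right trade when the update rate, not the per-update latency, is the bottleneck.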
CASE 4: shutdown of the whole system
Let's say we want to close it all down, but the VITs perhaps need to perform some shutdown procedure towards the hardware, which can be heavy.
If we ask them all to shut down together, we can use a notifier or user event to do the job. Well, what happens next is that the CPU will jump to the roof, and that can only cause data loss and trouble. The solution here was to let the plugin shut them all down one by one: when one has been shut down, begin with the next. Pros: the CPU will not jump to the moon. Cons: shutdown is going to take a while. Be ready with a cup of coffee.
Also, we do not want the main application to exit before the VITs do. The solution above solved this, as the plugin knows when all VITs have been shut down and can then shut itself down. When all plugins are shut down, the application ends.
Another solution is to use rendezvous and only shut the system down when all rendezvous have met.
CASE 5: stability & health monitoring
This IS using a lot of memory. How do I get it down? And has anyone experienced difficulties with LabVIEW using A LOT of memory? I want to know if something gets corrupted. The VITs send out error information in that case, but what if something weird happens? How can I monitor all the VITs in memory, in an effective way, to know when one is malfunctioning (as a backup solution, so the application knows something is wrong)?
CASE 6: Inifiles
Well, we all like them, even if XML is perhaps more fashionable. Now I've run some tests on large inifiles, and the LabVIEW inifile functions take ages to parse all this information. Perhaps a custom file structure in binary format or something would be better (and rather create a configuration program)?
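A sketch of the binary-format idea (Python standing in for LabVIEW; file contents and names are illustrative): parse the INI once, then cache the parsed structure as a compact binary snapshot that loads without any text parsing on later startups.

```python
import configparser
import pickle

INI_TEXT = """[channel1]
rate = 1000
units = V

[channel2]
rate = 500
units = A
"""

def load_ini(text):
    """The slow path: full text parse of the INI."""
    cp = configparser.ConfigParser()
    cp.read_string(text)
    return {section: dict(cp.items(section)) for section in cp.sections()}

def to_binary(config):
    return pickle.dumps(config)      # write this blob next to the INI as a cache

def from_binary(blob):
    return pickle.loads(blob)        # fast path: no parsing at startup

config = load_ini(INI_TEXT)
restored = from_binary(to_binary(config))
```

A separate configuration editor can regenerate the binary cache whenever the human-readable file changes, giving both editability and fast startup.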
CASE 7: When the hardware is getting crappy:
Now what if the system hits the limit and gradually exceeds the hardware requirements of the software? What to do then (thinking mostly of memory usage)? Install it on more servers and split configurations? Is that the best way to solve this? Any opinions?
Wow. Time for a coffee cup. Impressive if someone actually read all of this. My goal is to reach the 1000 VIT mark... someday... so any opinions? Just ask if something is unclear or anything else; I'm open to all of it, since I can see the software will hit a memory barrier someday if I want to reach that 1000 mark, hehe -
Are there LabVIEW Embedded application examples or training/introduction videos? How can I access them?
Turgay

Hello Turgay,
As general information I recommend this page: http://www.nxtbook.com/nxtbooks/ni/embeddeddesignplatforms/
The main portal for embedded is here: http://www.ni.com/embedded/
A few videos:
http://zone.ni.com/wv/app/doc/p/id/wv-1360
http://zone.ni.com/wv/app/doc/p/id/wv-820
http://zone.ni.com/wv/app/doc/p/id/wv-1686
http://zone.ni.com/wv/app/doc/p/id/wv-707
http://zone.ni.com/wv/app/doc/p/id/wv-1359
Which embedded topic would you like information about? I can help.
NI's products in the embedded area can be divided into these categories:
Microprocessors/Microcontrollers
Custom Circuit Design
FPGA
Industrial/Real-Time PCs
Embedded Computers
As you can see, there is information in many areas, but if there is a particular embedded platform you have in mind, I can give more detailed information.
As for example programs, if you download the NI-RIO drivers, the LabVIEW and LabWindows/CVI example programs are automatically installed into LabVIEW.
-Tolga -
Is web performance testing using VS2012 Ultimate possible?
Hello,
I worked on automation using VS2012 for a different client who used .NET. Now I am on a different account that uses Java, and my goal is the same: to convert day-to-day functionality checks into automation. Any help? How do I start, and where do I start?
Thanks in advance!
Hema.
HS

Hi SCRana,
Thank you for posting in the MSDN forum.
According to your description, you want to set a breakpoint and debug the web performance test while using the recorder to record it, am I right?
As far as I know, there is no breakpoint/debugger feature in web performance tests.
Generally, we set breakpoints and debug a coded web performance test after we have finished recording it.
So if you still want this feature, I suggest you submit a feature request:
http://visualstudio.uservoice.com/forums/121579-visual-studio.
The Visual Studio product team is listening to user voice there. You can post your idea there and people can vote. If you submit this suggestion, I hope you will post the link here, and I will help you vote for it.
Thanks for your understanding.
Best Regards,
Performance test from unit tests
Hi;
I am trying to apply TDD principles to performance testing of a J2EE application, starting from unit tests.
I used the JUnitPerf tool, but regrettably with this tool I must modify my unit test to update the test sequence, and it offers no management of waiting time between calls of a unit test.
Q: Does anyone know a more complete, stable and maintained tool that allows writing performance tests from unit tests?
Regards;

Hi John,
The testing will depend on the scenarios configured in SAP, which are to be tested. As scenarios vary from customer to customer, it is not possible to send a document with screenshots, as these are confidential documents. However, the write-up below should give some clues on the same.
<b>Different types of testing are as under</b>:
Unit testing is done in bits and pieces. For example, in the SD standard order cycle we have 1-create order, then 2-delivery, then 3-transfer order, then 4-PGI and then 5-invoice. So we test 1, 2, 3, 4 and 5 separately, one by one, using test cases and test data. We will not be checking/testing any integration between order and delivery; delivery and TO; TO and PGI; and then invoice.
Whereas in system testing you test the full cycle with its integration, using test cases that give a full cyclic test from order to invoice.
In security testing you test different roles and functionalities, then check and sign off.
Performance testing refers to how much time (in seconds) it takes to perform some action, e.g. PGI. If the BPP definition says 5 seconds for PGI, then it should be 5 and not 6 seconds. Usually it is done using software.
Regression testing refers to a test which verifies that some new configuration does not adversely impact existing functionality. This is done in each phase of testing.
User Acceptance Testing refers to customer testing. The UAT is performed through the execution of predefined business scenarios, which combine various business processes. The user test model comprises a subset of the system integration test cases.
Regards,
Rajesh Banka
-
Hi,
I would like to write some code in 'LabVIEW embedded' 8.5 for the NXP LPC2146 microcontroller (ARM7).
http://www.standardics.nxp.com/products/lpc2000/lpc214x/
The 2146 device is used within one of our main 'volume' products and I would like to write some special test code for the product in LV Embedded. I have the full NI development suite at 8.5 level.
The question is, does LV Embedded fully support this microcontroller?
I have found this info but still not sure: http://zone.ni.com/devzone/cda/tut/p/id/6207
Many thanks in anticipation of a reply.
Andrew V
Hi Andrew,
Using the LabVIEW Microprocessor SDK, you can "port" LabVIEW to build applications for any 32-bit microprocessor. The LabVIEW Microprocessor SDK Porting Guide describes the steps involved in the porting process.
The amount of effort involved depends on these factors:
How similar your target is to one of the example targets that are included in the LabVIEW Microprocessor SDK. As you can see in the article you linked, the SDK contains an example target with a Philips ARM and an eCos BSP. If your target is similar to this one (especially if the OS is the same), the porting process might take less than a week.
Familiarity with LabVIEW and embedded domain expertise. The porting process involves writing "plug-in" VIs in LabVIEW and building C run-time libraries for your target. However, once the porting process is complete, your target can be programmed solely in LabVIEW by someone with no embedded expertise whatsoever.
Target selection. We recommend a target have the following characteristics: 32-bit processor, OS/microkernel (not "bare metal"), and 256 KB RAM. Also, if you plan to make use of the LabVIEW Advanced Analysis libraries, a floating point unit is recommended.
Michael P
National Instruments -
Labview – embedded - part time (Homeworking)
£20ph - 3 month contract - 2 days a week - Home working
Labview – embedded - part time
A part-time LabVIEW engineer is required immediately for a home-working role, 2 days a week on a 3 month contract. The candidate would be required to test hardware using LabVIEW, so would need a good appreciation of testing environments and hardware.
The company is an innovator in its field, creating and producing cutting edge products to be used in the science field.
Key Skills:
Labview
Testing environments
Preferably hardware background
It might help if you mentioned where this project is. Monthly "face to face" meeting is a little vague on an international forum.
I might ask what is to be accomplished in a "face to face" (presumably physically face to face) that can't, in this day and age, be accomplished with GoToMeeting, WebEx or Skype video meetings? I'm currently working on projects in Asia, Europe and Latin America without having to leave my central location in North America, except when I have to be present for actual hardware installation/commissioning. And even that has been reduced by Remote Desktop, DameWare or some equivalent.
Putnam
Certified LabVIEW Developer
Senior Test Engineer
Currently using LV 6.1-LabVIEW 2012, RT8.5
LabVIEW Champion -
RMS performance testing using HP Loadrunner
Hi,
We are currently planning how to do our performance testing of Oracle Retail. We plan to use HP LoadRunner with different virtual users for Java, GUI, web services and database requests. Has anyone here done performance testing of RMS using HP LoadRunner, and what kind of setup did you use?
Any tips would be greatly appreciated.
Best regards,
Gustav
Hi Gustav
How is your performance testing of Oracle Retail going? Did you get good results?
I need to start an RMS/RPM performance testing project and I would like to know how to implement an appropriate structure. Any information about servers, protocols, and tools used to simulate a real production environment would be much appreciated.
Thanks & Regards,
Roberto -
Hi, I have a customer who wants to performance test OID.
Their actual installed data will be 600,000 users; however, they want to query using only a sample of 10-20 different usernames. My question is: will caching within the database and/or LDAP make the results erroneous?
Regards
Kevin
Kevin,
what do you mean by '.. make the results erroneous'? If you're talking about a performance test, you want to achieve the best possible result, right? So why don't you want to use either the DB cache or the OID server cache to achieve maximum performance?
What is the use case scenario in which you want only a very small subset of entries to be used?
Please take a look at Tuning Considerations for the Directory in http://download-west.oracle.com/docs/cd/B14099_14/idmanage.1012/b14082/tuning.htm#i1004959 for some details.
You might want to take a look at the http://www.mindcraft.com benchmark to get some other information.
regards,
--Olaf -
Performance testing doubt with XI.
Hi All,
How is performance testing done inside XI?
We need to propose to the testing team the different methods of testing the interfaces that run through XI.
Is there some document available explaining the same?
Also, are there any other methods of testing for XI?
Thanks.
Hi
How is performance testing done inside XI?
Performance testing can be done by passing a certain number of messages; you can then see how fast those messages are processed in the Integration Engine or BPE.
In RWB, you can get the readings from "Performance Monitoring".
We need to propose to the testing team the different methods of testing the interfaces that run through XI.
Unit test: done by the developer.
End-to-end test: a functionality test of your interface; a basic message is triggered from your sending system, and the final verification point should be on your receiving system.
Load test: a test to stress the application server and identify bottlenecks, e.g.:
can my system process a single message with a size of 50 MB?
if I send 10,000 messages within a very short time period, is my system able to process them without error?
UAT (user acceptance test): this test is conducted by business users, who will execute all the business scenarios to see if your interface can handle them without error.
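The load test described above (fire many messages in a short period and count errors) can be sketched generically. This is an illustrative sketch only, with hypothetical names (`LoadSketch`, `fire`); the sender is a placeholder, and in a real XI test it would post messages through an adapter (e.g. the plain HTTP adapter).

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

/** Generic load-test sketch: fire N messages concurrently through a
 *  caller-supplied sender and count failures. The sender here is a
 *  placeholder; a real XI test would post to an adapter endpoint. */
public class LoadSketch {

    public static int fire(int messages, int threads, Runnable sender)
            throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        AtomicInteger failures = new AtomicInteger();
        for (int i = 0; i < messages; i++) {
            pool.submit(() -> {
                try { sender.run(); }                       // send one message
                catch (RuntimeException e) { failures.incrementAndGet(); }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);         // wait for all sends
        return failures.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // Dummy sender that always succeeds; a real one would do an HTTP POST.
        int failed = fire(10_000, 8, () -> { /* send one message */ });
        System.out.println("failed: " + failed);  // prints "failed: 0"
    }
}
```

The failure count (and the wall-clock time of the run) then answers the "10,000 messages without error" question for whatever message size and thread count you configure.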
Hope it helps
Liang