Execution time continues to increase while the VI is running

I started with a VI that reads data from a Wii Remote (wiimote).  The code works.  I want to convert the acceleration data to velocity and displacement by numerical integration, and an accurate conversion requires a measure of the execution time.  The VI correctly returns 9-11 ms when it first starts, but the measured intervals keep getting longer as the VI runs: the interval grows from 10 ms to about 80 ms after roughly an hour.  I used a tick counter to measure the loop execution time.  Any suggestions?
Attachments:
Simple Event Callback Roll Pitch. timed eventvi.vi 19 KB
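For reference, a minimal sketch of the integration step in Python rather than LabVIEW (the acceleration source is a hypothetical stand-in for the wiimote read): each pass measures the real interval with a high-resolution timer and feeds that dt into a trapezoidal update, so drift in the loop time shows up in dt instead of silently corrupting the velocity and displacement.

    import time

    def read_accel_stream():
        # Hypothetical placeholder for the Wii Remote acceleration source (m/s^2).
        for _ in range(1000):
            yield 0.0            # replace with real wiimote reads

    velocity = 0.0
    displacement = 0.0
    prev_accel = 0.0
    prev_time = time.perf_counter()

    for accel in read_accel_stream():
        now = time.perf_counter()
        dt = now - prev_time     # use the measured interval, not a nominal 10 ms
        prev_time = now
        new_velocity = velocity + 0.5 * (prev_accel + accel) * dt   # trapezoidal rule
        displacement += 0.5 * (velocity + new_velocity) * dt
        velocity, prev_accel = new_velocity, accel

Measuring dt per iteration keeps the integration honest, but it does not fix the underlying slowdown, which is what the replies below address.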

Can you do some profiling to see which subVI could be the problem?
If you look at the Task Manager, how does the memory use change over time?
Your timing system seems flawed, because the execution of the first tick count is poorly defined with respect to the rest of the code.
Some typical things to look out for:
growing arrays in shift registers
constantly opening references without ever closing them.
LabVIEW Champion. Do more with less code and in less time.
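As a language-neutral illustration of the growing-array point (a minimal Python sketch, not your VI): doing work proportional to an ever-growing buffer inside the loop makes each iteration cost more, which is exactly the creeping 10 ms to 80 ms pattern, while a bounded buffer keeps the cost flat.

    import time
    from collections import deque

    def run(buffer, iterations=5000):
        per_iteration = []
        for i in range(iterations):
            t0 = time.perf_counter()
            buffer.append(i)
            _ = sum(buffer)              # work grows with the buffer size
            per_iteration.append(time.perf_counter() - t0)
        return per_iteration

    unbounded = run([])                  # like an array growing in a shift register
    bounded = run(deque(maxlen=100))     # fixed-size history instead

    for name, t in (("unbounded", unbounded), ("bounded", bounded)):
        print(f"{name}: first 500 iters {sum(t[:500]):.4f} s, "
              f"last 500 iters {sum(t[-500:]):.4f} s")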

Similar Messages

  • CVI XML Functions Execution Times Increase When Looped

    I have written multiple functions using CVI that read XML files. I have confirmed in the Resource Tracking utility that I have cleaned up all of my lists, elements, documents, etc. I have found that when I loop any of the functions I have created, the execution times increase. The increase is small but it is noticeable and does affect my execution.
    Are there any other sources of memory that I need to deallocate? It seems that there is a memory leak somewhere but I am unable to see where this increase is located.
    I am currently running LabWindows/CVI 2009 on Windows 2008 Server. I have looped my functions using TestStand 4.2.1. Any help would be appreciated!
    Thanks in advance,
    Kyle
    Solved!
    Go to Solution.

    Hi Daniel,
    Thanks for the quick response.
    It is indeed a slowdown in execution speed when we loop. When looped, the XML reader is overwriting variables, not adding to an array. Our application is structured differently from my test case. We run a CVI function from TestStand that contains a series of commands, which includes the XML reading. The XML looping is really done in CVI; I used TestStand in my test case just to get execution times. Our pseudocode for the CVI function is as follows:
    For loop (looping over values, like amplitude or frequency)
    Reading the XML
    Applying the data from the XML to set up some instrument(s)
    Do something...
    End loop
    I can confirm that the instrument set up is not the cause of the slow down. We have written the same XML reading in C# and applied the values to the instrument setup and do not experience the slow down.
    I tested with On-The-Fly Reporting enabled and the execution time continued to slow down.
    I hope that answers all of your questions!
    Thanks,
    Kyle
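    A sketch of the kind of harness described in this thread, in Python rather than CVI (the XML reader and file name are stand-ins, not the original functions): time the same call many times and compare the early and late batches; a steady upward drift usually means something allocated on every call is never released.

        import time
        import xml.etree.ElementTree as ET

        def read_settings_from_xml(path):
            # Stand-in for the XML-reading function under test.
            root = ET.parse(path).getroot()
            return {child.tag: child.text for child in root}

        def profile(func, *args, repeats=1000):
            timings = []
            for _ in range(repeats):
                t0 = time.perf_counter()
                func(*args)
                timings.append(time.perf_counter() - t0)
            return timings

        timings = profile(read_settings_from_xml, "settings.xml")   # hypothetical file
        print(f"first 100 calls: {sum(timings[:100]):.4f} s, "
              f"last 100 calls: {sum(timings[-100:]):.4f} s")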

  • Collect IDocs - Work Item Execution Time

    I have created an integration process that includes a collect process which follows the BpmPatternCollectTime pattern. Everything works great. I have one question though. When monitoring the process in SXMB_MONI and looking at the technical details for the collect block work item the execution time continues to grow, despite the work item having a status of completed with a creation date and latest end date. Why does this work item's execution time continue to grow, despite being in a completed status? Will this cause a problem? Does the SWWCLEAR job clean this up? I don't want to have workflow items out there with execution times climbing into days. I fear that will cause a performance issue, even though they show as completed status. By the way, this is in a PI 7.1 environment.

    Hi Chris,
    >>>Your blog does provide some interesting points when collecting to generate a single file.
    A file is only one of the 5 possibilities that my blog mentions.
    The point is that BPM is the worst approach out of those 5, and any of the other 4
    can easily replace your BPM IDoc collect pattern.
    This way not only the vendor's system but also yours will work more efficiently.
    But you will do as you like/prefer, and you can come back
    to the forum every time you get an issue with a BPM - no problem.
    Regards,
    Michal Krawczyk

  • LabVIEW 2012 has a longer execution time to write strings than LabVIEW 2011 on XP SP3

    In a For Loop, each iteration outputs a string and displays it on the front panel through a Value property.
    But after upgrading to LabVIEW 2012, the execution time doubled on a Chinese XP OS. I really want to understand this difference between LabVIEW 2011 and LabVIEW 2012; please advise.
    "I think therefore I am"

    This is no way to measure the execution time. Your parallel while loop spins as fast as the computer allows, consuming all CPU it can get in the process, and starving everything else. Your code is highly flawed!
    Eli: Actually, I added a 10 millisecond delay inside both the For and While Loops.
    In addition, Get Date/Time in Seconds is also a relatively expensive function, not to mention all these local variables! Your benchmark is completely meaningless! All it does is slow down your regular code, nothing else. How do you know in what order things start? Where is the "start time" initialized (where is the terminal?!)
    Eli: I am using a state machine; I set the "start time" during program initialization, then calculate the "test time" in the next case (the running block).
    A proper benchmark uses a three-frame flat sequence with two high-resolution relative seconds timers, one in each outer frame, and the code to be benchmarked in the inner frame (here is an example). The difference between the two timers is the execution time of the inner code in seconds. Make sure to have only wires in the benchmarking code. All controls and indicators belong outside the sequence. Also make sure that no other code can run in parallel to the sequence.
    Use this and you'll be surprised how fast your code actually runs.
    Then you should also disable debugging.
    Please attach your actual code (including the subVI) so we can see what else you are doing wrong.
    Eli: If possible, give me your mail address and I can send more actual code to you separately.
    "I think therefore I am"

  • How to measure fpga execution time

    Howdy--
    I'm hacking through my first FPGA project without yet having the hardware on hand, and I find I could answer a lot of my own questions if I could predict the execution time (ticks, msec, whatever) of bits of my code.  Running FPGA VIs on the development computer with built-in tick counters does not seem to be the proper way to go.  Is it possible to know the execution time of FPGA code before compiling and running it on the target?
    If it matters to anyone, my context for the question is a situation where a 10 µs loop is imposed by the sample time of my hardware (cRIO-9076, with a couple of 100 kS/s I/O cards), and I'm trying to figure out how much signal processing I can afford between samples.
    Thanks everyone, and have a great day.
    Solved!
    Go to Solution.

    bcro,
    You can look into cycle accurate simulation, which would give you a better understanding of how your code will work.  More information can be found here: http://zone.ni.com/devzone/cda/tut/p/id/12917
    As a rough measure, you can estimate that simple functions will take one tick to execute.  However, there is no list of what is and is not a simple function.
    You could also try placing code inside a single cycle timed loop (SCTL), which would then guarantee that all of the code in the loop will execute in 1 tick.  However, if you are doing a lot of operations or trying to acquire an analog input, this will fail compilation.
    Drew T.
    NIC AE Specialist
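    As a rough worked example of the budget involved (assuming the common 40 MHz top-level FPGA clock; check your target's actual rate), the arithmetic below shows how many ticks fit between samples at the imposed 10 µs loop period.

        clock_hz = 40_000_000        # assumed default 40 MHz FPGA clock
        loop_period_s = 10e-6        # imposed by the 100 kS/s I/O modules
        ticks_per_loop = clock_hz * loop_period_s
        print(f"{ticks_per_loop:.0f} ticks available per 10 us loop")   # 400 ticks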

  • Changing Business rule execution time

    Hi,
    I'm working on EPMA 11.1.1.3. When I launch a business rule on a data form, after a certain time the data form opens in a new window. When I look in the logs, it shows that
    the BR launch execution time has been exceeded and the BR is now running in the background.
    Can you gurus tell me where I can change the default BR execution time?
    thanks in advance
    Edited by: kailash on Sep 14, 2011 4:15 PM

    Have a read of - http://download.oracle.com/docs/cd/E12825_01/epm.111/hp_admin/ch02s07s06.html
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Why does the execution time increase with a while loop, but not with "Run Continuously"?

    Hi all,
    I have a serious time problem that I don't know how to solve because I don't know exactly where it comes from.
    I command two RF switches via a DAQ card (NI USB-6008). Only one position at a time can be selected on each switch. Basically, the VI created for this functionality (by a co-worker) resets all the DAQ outputs and then activates the desired ones. It has three inputs: two simple string controls and an array of clusters, which contains the list of all the outputs and some information about what is connected (specific to my application).
    I use this VI in a complex application, and I get some problems with the execution time, which increased each time I called the VI, so I made a test VI (TimeTesting.vi) to figure out where the problem came from. In this test VI I record the execution time in a CSV file to analyze later with Excel.
    After several tests, I found that if I run this test VI with the while loop, the execution time increases at each cycle, but if I remove the while loop and use the "Run Continuously" functionality, the execution time remains the same. In my top-level application I have while loops and events, and so the execution time increases there too.
    Could someone explain why the execution time increases, and how I can avoid that? I attached my test VI and the necessary subVIs, as well as a picture of a graph which shows the execution time with a while loop and with "Run Continuously".
    Thanks a lot for your help!
    Solved!
    Go to Solution.
    Attachments:
    TimeTesting.zip 70 KB
    Graph.PNG 20 KB

    jul7290 wrote:
    Thank you very much for your help! I added the "Clear task" vi and now it works properly.
    If you are still using Run Continuously you should stop. That is meant strictly for debugging. In fact, I can't even tell you the last time I ever used it. If you want your code to repeat you should use loops and control the behavior of the code.
    Mark Yedinak
    "Does anyone know where the love of God goes when the waves turn the minutes to hours?"
    Wreck of the Edmund Fitzgerald - Gordon Lightfoot
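    For readers hitting the same pattern outside LabVIEW, a sketch with the nidaqmx Python package (the device name "Dev1" and the output pattern are assumptions): create the task once, reuse it inside the loop, and let the context manager clear it, since creating a new task on every call without clearing it is the classic cause of a steadily growing execution time.

        import nidaqmx

        with nidaqmx.Task() as task:                  # task is cleared automatically on exit
            task.do_channels.add_do_chan("Dev1/port0/line0")
            for state in (True, False, True, False):  # stand-in for the real switch pattern
                task.write(state)                     # reuse the same task every iteration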

  • Oracle NoSQL YCSB - continuously increasing execution time

    Greetings,
    currently I am testing several NoSQL databases using YCSB. I am new to that type of database, but I have already tested a few of them. I am using a VM with 2 GB RAM hosted on Win 7. Even though it is not recommended, since I am working in a low-capacity environment, I am using KVLite. But my problem is confusing and I cannot find the reason. So, I have successfully loaded data and tested Oracle NoSQL using different workloads. However, with each execution I get a higher execution time. For example, if at the 1st execution I get 20 seconds, then if I shut the database down and execute the same workload again the next day, I get a 35 second execution time, and so on.
    Do you have any idea of what may be causing that? Like I said, I have been researching some NoSQL databases but I have never had such strange results.
    Regards.

    To add to Robert's comment, the NoSQL DB documentation on KVLite states the following:
         KVLite is a simplified version of Oracle NoSQL Database. It provides a single-node store
         that is not replicated. It runs in a single process without requiring any administrative interface.
         You configure, start, and stop KVLite using a command line interface.
         KVLite is intended for use by application developers who need to unit test their Oracle NoSQL
         Database application. It is not intended for production deployment, or for performance measurements.
    Per the documentation, you can use KVLite to test out the API calls in your performance benchmarking application, but you should not use it to perform the actual performance testing. For performance testing, please install and use the Oracle NoSQL Database server.

  • Execution time increase

    Hi guys,
    I'm currently developing code that runs 24 hours a day, 7 days a week. My problem is that the execution time increases as the days pass. Tracing the execution time, on the 1st day I start running the program the execution time is 0.04 seconds, but after 3 days it increases to 4.5 seconds.
    I then individually tracked down the subVI causing the problem and found that the problem is in writing to an INI file.
    The subVI's function is to REPLACE the data (20 values) in a file every 10 seconds - not accumulating data, just replacing the old data. I open the file, replace the data, and close the file every iteration, and as the days pass the execution time for this subVI increases.
    Can anyone help me find a solution to the problem? Time is a very important factor in my programming; the execution time is limited to less than 10 seconds.
    thanks
    regards
    kokula 

    As David said, post the code.
    Is there any reason why you are logging to an .ini file?  Does that mean you are using the LabVIEW config file functions?
    I don't know if deallocating memory would help, or is even possible, without seeing your code.  You either have a problem with resources continuously being created but not being closed out, or you have an ever-growing array.  This takes more memory over time and slows down the code, because LabVIEW has to move memory around to account for the ever larger array.
    I bet if you let your program run long enough, it would eventually crash due to running out of memory.
    By the way, 47 MB to 350 MB of memory consumption means the same thing as RAM usage increasing.
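    A sketch of the replace-and-close pattern in Python (the file name and keys are made up, and configparser stands in for the LabVIEW config file VIs): the file is rewritten in place each cycle, nothing accumulates, and every handle is closed before the next iteration, so the per-write cost should stay flat for days.

        import configparser
        import time

        def write_snapshot(path, values):
            config = configparser.ConfigParser()
            config["data"] = {f"ch{i}": f"{v:.3f}" for i, v in enumerate(values)}
            with open(path, "w") as fh:              # opened and closed every cycle
                config.write(fh)

        while True:                                  # mirrors the 24/7 operation
            t0 = time.perf_counter()
            write_snapshot("snapshot.ini", [0.0] * 20)   # replace the 20 values in place
            print(f"write took {time.perf_counter() - t0:.4f} s")
            time.sleep(10)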

  • Just downloaded Firefox 6.0 and have restarted 4 plus times, but it continues to hang while launching. I cannot open new windows, change any settings, or do anything but Force Quit. I'm on Mac OSX 10.5.8. How do I get past this hang?


    Long story short: Simply get CC for teams. At 500 bucks a year it's a steal and those 5 licenses in the base package (or was it 10 even?) cover all your computers, at least the 3 ones you mentioned. For system requirements refer to the individual product pages.
    Mylenium

  • Hi, like on a laptop, when we press and hold a key, that character keeps repeating.

    Hi,
    Like on a laptop, when you press and hold a key, that character keeps repeating.
    For example, if you hold "j" in text it shows up as "jjjjjjjjjjjjj" on screen.
    That function is disabled on my MacBook Pro.
    Please help me with this because it is very irritating.
    Thank You
    <Edited by Host>

    I think it's a keyboard malfunction. Take it to the Genius Bar or an authorised service centre to get it fixed.

  • Database Performance: Large execution time.

    Hi,
    I have a TPC-H database of size 1 GB. I am running a nested query with multiple joins between 5 tables and a group by and order by on three attributes. It took around 1 hour for this query to execute (it was also fired for the point which can be considered the center of the selectivity range).
    Following is the query:
    select
         supp_nation,
         cust_nation,
         l_year,
         sum(volume)
     from
          (
               select
                   n1.n_name as supp_nation,
                   n2.n_name as cust_nation,
                   YEAR (l_shipdate) as l_year,
                   l_extendedprice * (1 - l_discount) as volume
              from
                   supplier,
                   lineitem,
                   orders,
                   customer,
                   nation n1,
                   nation n2
              where
                   s_suppkey = l_suppkey
                   and o_orderkey = l_orderkey
                   and c_custkey = o_custkey
                   and s_nationkey = n1.n_nationkey
                   and c_nationkey = n2.n_nationkey
                    and (
                         (n1.n_name = 'FRANCE' and n2.n_name = 'GERMANY')
                         or (n1.n_name = 'GERMANY' and n2.n_name = 'FRANCE')
                    )
                    and l_shipdate between '1995-01-01' and '1996-12-31'
                   and o_totalprice <= 246835
                   and c_acctbal <= -422.16
          ) as shipping
    group by
         supp_nation,
         cust_nation,
         l_year
    order by
         supp_nation,
         cust_nation,
         l_year
    Moreover, it has been observed that such types of queries (nested queries, subqueries, aggregation) take a very long time to execute compared to other databases. The above-mentioned query took only 18 seconds to execute on an Oracle server.
    The machine configuration and the database configuration are as follows:
    Machine:
    64-bit Windows Vista operating System.
    RAM: 8GB.
    CPU: 3.0 GHZ
    Database:
    Data Area: No. of Volumes: 1, Size of Volume: 4GB (as mentioned on wiki, for 10 GB database 4 volumes must be assigned.)
    Log Area: Volume: 1, Size: 1GB
    Data and Log are on same disk.
    Caches:
    I/O Buffer Cache: 1 GB
    Data Cache: 1 GB
    Catalog Cache: 30 MB
    Parameters:
    CacheMemorySize - 131072
    ReadAheadLobThreshold- 3000
    Also, we have set other optimizer parameters as required and recommended by SAPDB. Even then I am not able get better performance.
    How to increase or better the performance? Is there any other parameter that remains to be set?

    > I have TPC-h database of size 1GB. I am running a nested query having multiple joins between 5 tables and a group by and order by on three attributes. It took around 1 hour for this query to get executed (also it was fired for the point which can be considered as the center of selectivity range.).
    > Moreover it has been observed that such types of queries viz., nested, sub queries, aggregation are taking very high amount of time for execution as compared to other databases. The above mentioned query took only 18 seconds to execute in ORACLE server.
    Such general statements are usually total crap.
    MaxDB is running for many SAP customer and SAP internally in many installations - even for BI systems.
    We don't know your Oracle server, we don't know the execution plans - so there's nothing to tell why it may be the case here.
    > Data Area: No. of Volumes: 1, Size of Volume: 4GB (as mentioned on wiki, for 10 GB database 4 volumes must be assigned.)
    It's a rule of thumb - having just one volume is a rather bad idea since you don't get parallel I/O with that.
    > Log Area: Volume: 1, Size: 1GB
    > Data and Log are on same disk.
    Although this is irrelevant for the query performance it's nonsense in productive environments and a performance killer as well.
    > I/O Buffer Cache: 1 GB
    > Data Cache: 1 GB
    Why don't you allow more Cache ?
    > Catalog Cache: 30 MB
    What for? Do you understand the catalog cache in MaxDB?
    It's a per-session setting...
    > Also, we have set other optimizer parameters as required and recommended by SAPDB. Even then I am not able get better performance.
    Can you be more specific here?
    What MaxDB version are you using? What parameter settings do you use?
    > How to increase or better the performance? Is there any other parameter that remains to be set?
    How about showing us the execution plan for the statement and the index structure?
    How should we know what MaxDB does here that takes so much time?
    Did you have the DBanalyzer running while the query ran?
    TPC-H is a benchmark for ad-hoc, decision making support: did you enable any of the BI feature pack features of MaxDB? What about prefetching? What about table clustering, column compression, star join optimization ...?
    All in all - you left us here with "MaxDB is slower than Oracle" and nothing to work on.
    That's not useful in any way.
    Want some answers - provide some information!
    regards,
    Lars

  • Execution Time Issue

    Help Please!!!
    I've been searching for an execution time issue in our application for a while now. Here is some background on the application:
    Collects analog data from a cDAQ chassis with a 9205 at 5kHz
    Data is collected in 100ms chunks
    Some of the data is saved directly to a TDMS file while the rest is averaged for a single data point. That single data point is saved to disk in a text file every 200ms.
    Problem: During operation, the VI that writes the data to the text file will periodically take many hundreds of milliseconds to execute. Normal operation execution times are on the order of 1ms or less. This issue will happen randomly during operation. It's usually many seconds between times that this occurs and it doesn't seem to have any pattern to when the event happens.
    Attached is a screenshot of the VI in question. The timing check labeled "A" is the one that will show the troubling execution time. All the other timing checks show 0 ms every time this issue occurs. I simply can't see what else is holding this thing up. The only unchecked subVI is the "append error call chain" call. I've gone through the hierarchy of that VI and ensured that everything is set for reentrant execution. I will check that too soon, but I really don't expect to find anything.
    Where else can I look for where the time went? It doesn't seem to make sense.
    Thanks for reading!
    Tim
    Attachments:
    Screen Shot 2013-09-06 at 9.32.46 AM.png 87 KB

    You should probably increase how much data you write with a single Write to Text File.  Move the Write to Text File out of the FOR loop.  Just have the data to be written autoindex to create an array of strings.  The Write to Text File will accept the array of strings directly, writing a single line for each element in the array.
    Another idea is to use another loop (yes, another queue as well) for writing the file.  But you put the Dequeue Element inside another WHILE loop.  On the first iteration of this inner loop, set the timeout to something normal or -1 for wait forever.  Any further iteration should have a timeout of 0.  You do this with a shift register.  Autoindex the read strings out of the loop.  This array goes straight into the Write to Text File.  This way you can quickly catch up when your file write takes a long time.
    NOTE:  This is just a very quick example I put together. It is far from a complete idea, but it shows the general idea I was having with reading the queue.
    There are only two ways to tell somebody thanks: Kudos and Marked Solutions
    Unofficial Forum Rules and Guidelines
    Attachments:
    Write all data on queue.png 16 KB
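    A minimal Python sketch of the queue-draining idea from the reply above (names are illustrative, not the original VI): block on the first item, pull whatever else is already waiting with a zero timeout, and write the whole batch with one file operation so the consumer catches up after a slow write.

        import queue

        def writer_loop(q, path):
            with open(path, "a") as fh:
                while True:
                    lines = [q.get()]                 # wait for at least one line
                    while True:
                        try:
                            lines.append(q.get_nowait())  # drain anything already queued
                        except queue.Empty:
                            break
                    fh.write("\n".join(lines) + "\n")     # one write call per batch

    The acquisition side only ever calls q.put(line); this loop is the only place that touches the file.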

  • Execution time of a flat-sequence

    Hello there -
    Is there any way to get a measurement of how long each part of
    the flat sequence takes to execute?  Anything like MATLAB's "tic" and "toc"
    commands in LabVIEW?  I have been playing with it for a while now and
    have yet to discover whether LabVIEW has this functionality.  Anyone know of
    anything like this?
    I currently have a VI that controls the real-time acquisition of a CCD camera via FireWire and a USB spectrometer.  The VI collects data from each of these devices (triggered by an external source at 10 Hz) and dumps them into a MATLAB script which does analysis on the CCD image and spectrum.  The bulk of the VI sits inside a while loop, which continues to run until the user presses the stop button.  Inside this main loop is a flat sequence.  The sequence goes:    ACQUIRE DATA --->  PROCESSING DATA ---->  MATLAB SCRIPT ----> PLOTTING GRAPHS -----> OUTPUT DATA TO FILE.   
    The problem here is that the VI runs at 5 Hz, while we are triggering it at 10 Hz.  Originally it was my thought that the MATLAB algorithm was to blame, but I used the MATLAB commands "tic" and "toc" to determine that the MATLAB algorithm runs in 15-20 ms.  I did this by putting a "tic" command at the top of the MATLAB algorithm and a "toc" command at the bottom.  The problem, as I have now discovered, is that the rest of the LabVIEW code takes ~180 ms to execute.  (This was discovered by putting the "tic" at the bottom of the program and the "toc" at the top, thereby measuring the execution time of everything except the MATLAB algorithm.)  Each time a trigger signal from the external source comes in, it starts the flat-sequence structure (which takes ~190 ms) and then waits for another trigger signal, always missing every second signal.  My eventual goal is to reduce the bloat and get the algorithm down to less than 100 ms, so that I can run the VI and acquire data at 10 Hz rather than 5 Hz.  If anyone can offer some help with this, it would be much appreciated!
    Eric
    P.S. - I have attached a copy of the VI that I am working on, but unfortunately it most likely will not run on your computer. The VI will not run unless it is connected to a triggered spectrometer and CCD camera, but I have attached it anyway in case anyone who can help wants to take a look.
    Attachments:
    RTSpider.vi 376 KB
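    A Python analog of "tic"/"toc" for a pipeline like the one described (the stage functions are empty placeholders, not the real code): timestamp the boundary between stages so you can see which of acquire, process, analyze, plot, or save is eating the 100 ms budget.

        import time

        def acquire(): pass
        def process(): pass
        def analyze(): pass
        def plot():    pass
        def save():    pass

        stages = [("acquire", acquire), ("process", process),
                  ("analyze", analyze), ("plot", plot), ("save", save)]

        marks = [time.perf_counter()]
        for _, stage in stages:
            stage()
            marks.append(time.perf_counter())         # boundary timestamp after each stage

        for (name, _), t0, t1 in zip(stages, marks, marks[1:]):
            print(f"{name:8s} {1000 * (t1 - t0):7.2f} ms")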

    Can we divide the program into 2 parts and use a background process for acquisition and a front-end process for analysis?
    I mean, create 2 VIs from the present VI, launch the acquisition program dynamically as a background process, and fire events in the main VI from the acquisition VI and process them there.  Not sure how much it is going to reduce the time; let's give it a try.
    Anil Punnam
    CLD
    LV 2012, TestStand 4.2..........

  • Query Execution Time for a Query causing ORA-1555

    dear Gurus
    I have an ORA-01555 error. Earlier I used the query duration mentioned in the alert log and increased the undo retention, as I did not find the UNDOBLKS column of v$undostat high at the time of occurrence of ORA-01555.
    But a new ORA-01555 is coming whose query duration exceeds the undo retention time.
    My question -
    1. Is it possible to accurately find the query duration anywhere besides the alert log file?

    abhishek, as you are using an undo tablespace and have already increased the time that undo data is retained via undo_retention, you might want to consider the following ideas, which were useful with the 1555 error under manual rbs segment management.
    1- Tune the query. The faster a query runs the less likely a 1555 will occur.
    2- Look at the processing. If a process was reading and updating the same table while committing frequently, then under manual rbs management the process would basically create its own 1555 error rather than just being the victim of another process changing data and the rbs data being overlaid while the long-running query was still running. With undo management, the process could be generating more data than can be held for the undo_retention period, but because it is committed, Oracle has been told it doesn't really have to keep the data for use in rolling back a current transaction, so it gets discarded to make room for new changes.
    If you find item 2 is true then separating the select from the update will likely eliminate the 1555. You do this by building a driving table that has the keys of the rows to be updated or deleted. Then you use the driver to control accessing the target table.
    3- If the cause of the 1555 is or may be delayed block cleanout then select * from the target prior to running the long running query.
    Realistically you might need to increase the size of the undo tablespace to hold all the change data, and the value of the undo_retention parameter to be longer than the job run time. Which brings us back to option 1: tune every query in the process so that the job run time is reduced to optimal.
    HTH -- Mark D Powell --
    dear mark
    Thanks for the excellent advice. I found that the error is coming because of frequent commits, which is item 2 as you rightly mentioned.
    I think I need to keep a watch on the running queries. I was just trying to find the execution time for the queries, if there is any way to find the query duration without running a trace.
    regards
    abhishek
