Execution speed improvements of FGV over locals

This is my first post. I am new to LabVIEW and am currently writing my first significant application. Actually, I am modifying an existing application, adding new functionality.
I have read the VI Execution Speed article but still have questions: http://zone.ni.com/reference/en-XX/help/371361H-01/lvconcepts/vi_execution_speed/
As an example, I am populating an XY plot with several sets of data, such as saved data points and a curve fit. I am currently using a case structure to decide when to write the various elements. For example, I write the curve fit only once and it remains static as other pieces of data are added to the graph; in an attempt to make things faster I did not want to redo and redraw the fit each time. I am using locals within the case to populate the graph. In the default case nothing is written to these locals (and I assume the graph is not redrawn?).
I now realize that instead of using a case in which nothing is written, I could use a feedback node (to rewrite the previous value). In addition, in place of using a local, I could use a Functional Global Variable (FGV).
Would it be better to have several cases which write locals to populate the plot and one case which writes nothing, OR to have the several cases write to an FGV and then a default case which writes the last data to the FGV via a feedback node?
The first seems like less load, as nothing is written in the default case, but the other cases do write to a local, which has a front panel object that I don't need or want.
Is there any benefit to hiding unused front panel objects?
Is there a way to create a local without a front panel object?
Finally, with a tabbed UI, are objects on the non-active tabs being redrawn and slowing the process?

Good questions!
Starting from the bottom:
Is there any benefit to hiding unused front panel objects?
Is there a way to create a local without a front panel object?
No and no. Local variables are generally a poor way to store and move data, for the reasons you cited (a front panel object you do not want or need, hidden FP objects) and because they force a copy of the data and may cause a thread switch to the UI thread.
Finally, with a tabbed UI, are objects on the non-active tabs being redrawn and slowing the process?
It depends. Newer versions of LV are generally smart enough that non-visible indicators are not redrawn. This may not have been true in older versions, although I do not know when the changes were made.
You certainly do not need to redo the fit if the data has not changed. I think the entire graph gets redrawn when any new data is written.
Search for Ben's extremely informative Nugget on Action Engines (AE). An Action Engine is an FGV with added capabilities, and it may be a very good option for what you are trying to do. It could store all the data sets, update the graphs, and make the data available wherever it is needed without extra copies. With the AE you could easily acquire and store data at one rate and update the graphs at a (slower) rate appropriate to the user's eyes and brain. Updating graphs more than ~10 times per second is a waste of resources because the user cannot respond any faster than that. Also, you could write a subset or reduced data set to the graph if the amount of data is larger than the number of pixels in the plot.
Lynn
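For readers coming from text-based languages, an FGV or Action Engine has no direct textual form (it is a non-reentrant subVI holding its data in uninitialized shift registers), but the pattern is roughly analogous to a C function with static storage selected by an action input. A minimal sketch of the idea; all names and the fixed buffer size are hypothetical:

#include <stddef.h>

/* Rough C analogy of a LabVIEW Action Engine: one entry point, private
   static storage, and an action input that selects the operation. */
typedef enum { AE_INIT, AE_ADD_POINT, AE_GET_ALL } AeAction;

typedef struct { double x, y; } Point;

#define AE_MAX_POINTS 1024

/* Returns the number of stored points; fills 'out' on AE_GET_ALL. */
size_t graph_data_engine(AeAction action, Point in, Point *out)
{
    /* statics persist across calls, like uninitialized shift registers */
    static Point points[AE_MAX_POINTS];
    static size_t count = 0;

    switch (action) {
    case AE_INIT:                      /* clear the stored data sets */
        count = 0;
        break;
    case AE_ADD_POINT:                 /* store one new data point */
        if (count < AE_MAX_POINTS)
            points[count++] = in;
        break;
    case AE_GET_ALL:                   /* copy the data out for plotting */
        for (size_t i = 0; i < count; i++)
            out[i] = points[i];
        break;
    }
    return count;
}

Because all access goes through the one entry point, the data is never duplicated into front panel objects the way locals require, which is exactly the benefit described above.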

Similar Messages

  • On an iMac will a 4GB graphics card give a noticeable speed improvement over a 2GB card?

    On an iMac will a 4GB graphics card give a noticeable speed improvement over a 2GB card?

    In terms of Lightroom I'm pretty sure the answer would be "no".

  • Massive Disk Speed Improvement Plan

    I am moving forward with a disk storage speed improvement plan using my Dell Precision T5400 workstation as the test bed.
    Specifically, my goal is to create a super fast 2 TB drive C: from four OCZ Vertex 3 480GB SATA3 SSD drives in RAID 0 configuration.  This will replace an already fast RAID 0 array made from two Western Digital 1TB RE4 drives.
    So far I have ordered two of these fast SSD drives, along with what is touted to be a very good value in high performance SATA3 RAID controllers, a Highpoint 2420SGL.  I'll get started with this combination and get to know it first as a data drive before trying to make it bootable.
    Getting any kind of hard information online about putting SSDs into RAID is a bit like pulling teeth, so I'm not 100% confident that these parts will work perfectly together, but I think the choice of SSD drives is the right one. I had briefly considered a PCIe RevoDrive SSD card made by OCZ, but it was just too esoteric... I'm actually getting double the storage this way for the same price, I can swap to a different RAID controller if need be, and these drives can easily be ported to any new workstation I may get in the future.
    Notably, some early concerns with using SSD in RAID configurations (and things like TRIM commands) have already been alleviated, as the drives are now quite intelligent in their internal "garbage collection" processes.  I've verified this with the engineers at OCZ.  They have said that with these modern SSD drives you really don't have to worry about them being special - just use them as you would a normal drive.
    Once I get the first two SSDs set up in RAID 0 I'll specifically do some comparisons with saving large files and also using the array as the Photoshop scratch drive, vs. the spinning 1 TB drive I have in that role now.
    Assuming all goes well, I'll then add the additional two SSDs to complete the four drive array.  After a quick test of that, I'll see if I can restore a Windows System Image backup made from my 2 TB C: (spinning drive) array, which (if it works) will let me hit the ground running using the same exact Windows setup, just faster.
    My current C: drive, made from two Western Digital 1 TB RE4 drives, delivers about 210 MB/sec throughput with very large files, with 400 MB/sec bursts with small files (these drives have big caches). Where they fall down dismally (by comparison to SSD) is operations involving seeking... The PassMark advanced "Workstation" benchmark generates random small accesses such as you might see during real work (and I can hear the drives seeking like crazy), and it yields a meager 4 MB/sec result.
    My current D: drive, a single Hitachi 1 TB spinning drive, clocks in at about 100 MB/sec for large reads/writes.
    The SSD array should push the throughput up at least 5x as compared to my current drive C: array, to over 1 GB/sec, but the biggest gain should be with random small accesses (no seek time in an SSD), where I'm hoping to see at least a 25x improvement to over 100 MB/second. That last part is what's going to speed things up from an everyday usage perspective.
    I imagine that when the dust settles on this build-up, I'll end up pointing virtually everything at drive C:, including the Photoshop scratch file, since it will have such a massively fast access capability.  It will be interesting to experiment.  I suppose I'll have to come up with some gargantuan panoramas to stitch in order to force Photoshop to go heavily to the scratch drive for testing.
    I'll let you all know how it works out, and I'll be sure to do before/after comparisons of real use scenarios (big files in Photoshop, and various other things). Perhaps my "real world" results can help others looking to get more Photoshop performance out of their systems understand what SSD can and can't do for them.
    I welcome your thoughts and experiences.
    -Noel

    Not sure who might be following this thread, but I have executed the final phase of this plan, restoring a system backup from my spinning drive array onto the new 4 drive SSD array.
    All went off without a hitch, I have my same system configuration including all apps and everything just as it was, except everything is now MUCH faster.
    The 4 drive array achieves a staggering 1.74 gigabytes/second sustained throughput rate.
    Windows 7 WEI score is 7.9 for the Primary hard disk category.
    Windows boots up quickly, everything starts immediately, nothing bogs the system down, and just overall everything feels very fluid and snappy.  And there is no seeking noise from the drives.
    Regarding what this has done for Photoshop...  I've only tested on Photoshop CS6 beta so far today, but everything is incrementally improved.  Startup time is faster, things seem more smooth and fluid while editing overall, and a benchmark I created using an action to run a lot of image adjustment operations on a big, multi-layer image ran this long to completion:
    When the file is opened from (and the Photoshop scratch file is on) a single spinning disk: 
    4 minutes 26 seconds (266 seconds)
    When the file is opened from (and the scratch file is on) a fast array of spinning drives:
    3 minutes 45 seconds (225 seconds)
    When the entire system is run from the SSD array: 
    2 minutes 31 seconds (151 seconds)
    During the action, because so many steps are performed on the big file, Photoshop writes a 30+ gigabyte scratch file on the scratch drive.
    Summary
    Clearly the very fast disk access markedly improves Photoshop's speed when it uses scratch space. 
    Plus copying big image files around is virtually instantaneous. 
    I don't use Bridge myself, but I have noticed that all the image thumbnails (via FastPictureViewer Codec Pack) just show up immediately in Explorer windows and Photoshop File Open/Save dialogs.  We can only assume this kind of drive speed would really make Bridge blaze through its operations as well.
    Following my footsteps would be expensive, but it can really work.
    -Noel

  • Why is the difference in execution speed of the function "SetCtrlVal" between constant and changing values so small?

    In my large application (1 MB exe-file) I am continuously updating a lot of numeric controls with new values. Most of them do not really change their value. In my search for ways to improve the performance of my application I noticed that there is only a small difference in execution speed between a call of "SetCtrlVal" with constant values and calls with changing values. It runs much faster (25 times on my PC) if I get the actual control value with "GetCtrlVal", compare it with my new value, and do a call to "SetCtrlVal" only if the current value and the new value are different.
    My question to the CVI developers is:
    Isn't it possible to do this compare within the function "SetCtrlVal"?
    My question to all CVI-users is:
    Does anyone have similar tips to improve the performance of CVI applications?
    I developed a small test application for this problem, which I can mail to interested users.

    What takes the extra time is the redraw of the control. When you call SetCtrlVal we ALWAYS redraw the control. We wouldn't want to build in functionality to check whether the value is the same, because that would add additional time to SetCtrlVal in every case. If you want to do the check outside of the function you can, as you have done above. You have a few options. First, keep a previous-value variable for the controls that you can use to determine whether to set the control value, e.g.:
    int oldVal = 0;
    int newVal = 0;
    if (newVal != oldVal) {
        SetCtrlVal(..., newVal);
        oldVal = newVal;
    }
    Also, if you set the value of a control through SetCtrlAttribute instead, there is no built-in redraw of the control (which is what takes all the time). Using SetCtrlAttribute to set the value is very fast, but remember there isn't a built-in redraw on the screen to display the new number.
    Best Regards,
    Chris Matthews
    Measurement Studio Support Manager
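    To make the two approaches above concrete, here is a minimal CVI-style sketch; panelHandle and controlID are hypothetical placeholders for identifiers from your own .uir file:

    #include <userint.h>

    /* Update a numeric control only when its value actually changes. */
    void UpdateNumericIfChanged (int panelHandle, int controlID, int newVal)
    {
        int curVal;
        GetCtrlVal (panelHandle, controlID, &curVal);
        if (curVal != newVal)
            SetCtrlVal (panelHandle, controlID, newVal);   /* redraws the control */
    }

    /* Set the value without the implicit redraw; the control will not
       repaint until something else causes it to be redrawn. */
    void SetNumericNoRedraw (int panelHandle, int controlID, int newVal)
    {
        SetCtrlAttribute (panelHandle, controlID, ATTR_CTRL_VAL, newVal);
    }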

  • DRI (declarative referential integrity) and speed improvements.

    EDITED: See my second post. In my testing, the relevant consideration is whether the parent table has a compound primary key or a single primary key. If the parent has a simple primary key, there is a trusted (checked) DRI relation with the child, and a query requests only records from the child on an inner join with the parent, then SQL Server (correctly) skips performing the join (shown in the execution plan). However, if the parent has a compound primary key, then SQL Server performs a useless join between parent and child. Tested on SQL 2008 R2 and Denali. If anyone can get SQL Server NOT to perform the join with compound primary keys on the parent, let me know.
    ORIGINAL POST: I'm not seeing the join behavior in the execution plan given in the link provided (namely, that the optimizer does not bother performing a join to the parent table when a query needs information from the child side only AND trusted DRI exists between the tables AND the columns are defined as NOT NULL). The foreign key relation "is trusted" by SQL Server ("is not trusted" is false), but the plan always picks both tables for the join although only one is needed.
    If anyone has comments on whether declarative referential integrity does produce speed improvements on certain joins, please post. Thanks.
    http://dinesql.blogspot.com/2011/04/does-referential-integrity-improve.html

    I'm running SQL Denali CTP3 x64 and SQL 2008 R2 x64, on Windows 7 SP1. I've tested it on dozens of tables, and I defy anyone to provide a counter-example (you can create ANY parent table with two ints as a composite primary key, then a child table using that compound as a foreign key, create a trusted DRI link between them, and use the queries I posted above). Any table with a compound foreign key relation as the basis for the DRI apparently does not benefit from referential integrity between those tables (in terms of performance). Or to be more precise, the execution plan reveals that SQL Server performs a costly and unnecessary join in these cases, but not when the trusted DRI relation between them is a single primary key. If anyone has seen a different result, please let me know, since it does influence my design decisions.
    fwiw, a similar behavior is true of SQL Server's date correlation optimization: it doesn't work if the tables are joined by a composite key, only if they are joined by a single column: "There must be a single-column foreign key relationship between the tables."
    So I speculate, knowing absolutely nothing, that there must be something deep in the bowels of the engine that doesn't optimize compound key relations as well as single column ones.
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    CREATE TABLE [dbo].[parent](
        [pId1] [int] NOT NULL,
        [pId2] [int] NOT NULL,
        CONSTRAINT [PK_parent] PRIMARY KEY CLUSTERED
        (
            [pId1] ASC,
            [pId2] ASC
        ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    CREATE TABLE [dbo].[Children](
        [cId] [int] IDENTITY(1,1) NOT NULL,
        [pid1] [int] NOT NULL,
        [pid2] [int] NOT NULL,
        CONSTRAINT [PK_Children] PRIMARY KEY CLUSTERED
        (
            [cId] ASC
        ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    ALTER TABLE [dbo].[Children] WITH CHECK ADD CONSTRAINT [FK_Children_TO_parent] FOREIGN KEY([pid1], [pid2])
    REFERENCES [dbo].[parent] ([pId1], [pId2])
    ON UPDATE CASCADE
    ON DELETE CASCADE
    GO
    /* The DRI MUST be trusted for join elimination to work, but it doesn't work here anyway */
    ALTER TABLE [dbo].[Children] CHECK CONSTRAINT [FK_Children_TO_parent]
    GO
    /* Enter data in parent and children */
    SELECT c.cId FROM dbo.Children c INNER JOIN dbo.parent p
        ON p.pId1 = c.pId1 AND p.pId2 = c.pId2;
    /* Execution plan will be blind to the trusted DRI -- it performs the join! */

  • Have speed improvements been made?

    I just want to say that my posts on the forums today have appeared in the topic list nearly instantaneously — which wasn't the case just a few days ago. And if the forum administrators have made speed improvements recently, but are disappointed no-one has noticed, then perhaps this will bring a modicum of satisfaction.
    ...I wonder if someone "in the know" (Eric?) can indicate whether this is a permanent improvement. If it's merely due to draining the "internet tubes", it may not be...

    Thanks for your reply, but I'm not sure it explains the particular slow-down I'm seeing...
    Displaying forums is as fast as ever, as is the system's acceptance of new posts/replies. The problem is, as I said, that "posts take a long time to appear". ...I list the main forum page and expect to see my just-accepted post at the top of the list, but it doesn't appear for between one and a few minutes after my post has been accepted. Yes, there's a warning to expect a delay, but I'm pointing out that these forums go through periods of near-instantaneous appearance of new posts (as was the case a week ago) to a now sluggish appearance of new posts.
    The idea that there is a geographic aspect to the problem is perhaps nullified by the comment by MGW (in New Hampshire?) above:
    "Actually, the speedup has been noticeable for the past couple of days, having complained bitterly about the clog..."
    Also during "clogged" periods, duplicate posts from all over the world tend to appear as members mistakenly think their post, although accepted by the system, didn't "take" and re-enter it — because it doesn't show up in the forum's main list for a couple of minutes or so.
    ...We seem to regularly go through these alternating multi-week periods of members reporting speed and sluggishness but, so far, with no acknowledgement or explanation from the hosts.
    By the way, this post itself took a minute to appear in the main forum list — a week ago, it would have appeared almost instantaneously.

  • LabVIEW MathScript computation speed improvement

    I am using a MathScript node to make calculations on an sbRIO FPGA module and the speed of these computations is critical.  What are some ways to improve the speed of calculations and is there a faster way to do matrix calculations than MathScript?  If I make the MathScript portion into a subVI will it improve the speed of calculations?
    Thanks for any ideas

    Please look at the attached VI. It has your original .m code, my modifications to your .m code, and the G code equivalent to the modified .m code. First, let me describe to you the numbers I saw on a cRIO 9012 for each of the three approaches.
    I ran each of the three approaches for a hundred iterations, ignored the first 30 iterations to allow for memory allocations (which caused a huge spike in run-time performance on RT), and then took the average run-time per loop iteration for the remaining iterations.
    Original M: 485 msec/iteration
    Modified M: 276 msec/iteration
    G: 166 msec/iteration
    The modifications I made to your .m code are the following:
    (1) Added ; to the end of each line to suppress output (used for debugging)
    (2) Moved the random code generation out - used whitenoise (seems like that's what you were doing)
    (3) Switched on the data type highlighting feature and noticed that the majority of the data was cast to complex, although it didn't seem like you needed the complex domain. The source was the sqrt function. Modified it to use real(sqrt(...))
    This improved performance by over 40%. I believe more can be squeezed if you follow the documentation - Writing MathScript for Real-Time Applications. 
    Then, I took the MathScript you had and wrote equivalent G, leaving the algorithm as is. This gave us a performance improvement of another 40% over the modified M code. It is a known issue that on slow controllers MathScript adds a 2x penalty over equivalent G. We are currently investigating this issue and may be able to fix it in a future release.
    If you profile the G code, you will notice that most of the time is spent in matrix multiplication. Unless you rethink your algorithm, I doubt this can improve further.
    Let me know if you have questions.
    Regards,
    Rishi Gosalia
    Attachments:
    Mathcript_efficiencyProblem Modified.vi ‏255 KB
    MathScript_efficiencyProblem_G.vi ‏62 KB

  • Execution Speed of my page

    Hi everybody,
    I converted an existing report into an FM and used that FM in my BSP page to display the output in the browser.
    But the execution speed of the BSP page is almost half that of the report, and I am not sure why this has happened.
    Is the layout responsible for this low speed?
    It is exactly the same layout that the report displayed.
    Please tell me where I should make changes to increase the execution speed.
    Thanks

    I suggest you run the SE30 transaction (performance measurement), including your BSP. You can also do this from SICF; I think there is a Runtime Analysis option or similar in the menu (sorry, right now I cannot access my SAP BW). Then check the report and discover where most of the time is spent.
    I had a similar problem: I had a BSP which executed an FM many times to generate a data table. I added AJAX to my BSP to improve performance and concurrency.

  • VI execution speed in subpanel is slow

    Hello,
    I have the following problem with subpanels. I have a VI that should run inside of a subpanel. This VI contains an Image Display control (Vision). The Image Display is updated with a new image inside a while loop every 40 milliseconds; this is the one and only action inside the loop. When I run this VI in a subpanel, the execution slows down to 200 ms for the Image Display update (for the while loop). I measured this with a Timer Value (ms) component. If my application runs the VI outside of the subpanel in its own window, then the timing of 40 milliseconds works perfectly. The only difference between the two situations is that the VI runs inside or outside of a subpanel.
    This is a great problem for me because my camera captures images at a framerate of 7.5 fps, so an Image Display update time of 200 ms is not acceptable for my application. Is this a known issue? Is there a workaround or some settings that I should tweak to improve the execution speed of VIs in a subpanel? Or should I simply not use subpanels if execution speed matters - then it would be good to write this into the subpanel documentation. I carefully read the documentation and searched the discussion forum, but I could not find an answer.
    Thank you very much for any help.

    Hey Incredible,
    Could you tell me your LabVIEW version, and please post your LabVIEW code?
    Kind regards,
    Elmar

  • How do I associate timing with multiple AI loops that is independent of loop execution speed

    I am using LabVIEW 7.1 and I am performing AI of voltage on different channels of the same multifunction DAQ. I am using while loops to acquire the data, and I am writing the data to a spreadsheet file with an associated time as a 2D array. Currently I am using the 'Elapsed Time' interactive subVI to get the time in seconds (attempting to use 'Get Date/Time in Seconds' and converting it to DBL results in time values that do not change; my thought is that maybe the number of seconds is too large to display single-second precision with a DBL floating point number). When I run AI in two loops they execute at different rates, and thus the time values from 'Elapsed Time' accumulate at different rates. I have tried using timed loops to control timing, but if one loop executes at a rate slower than the timing of the loop, the 'Elapsed Time' still accumulates at different rates. I need to be able to associate both analog inputs with the same time value in the spreadsheet file, and I would like to find a way to associate a time with each data point that is independent of loop execution speed (although I would still like to control the execution speed of the loops). I am pretty new to LabVIEW and programming in general; any help would be greatly appreciated.

    If you want to read more than one AI at the same time, you should use a trigger.  You would need to set up the AI Trigger to an external source.  Then you would have to use a function generator or digital out to create a clock.  The clock would be wired to the AI external trigger.  When the clock goes high (or low depending on configuration), both AIs would read.  That is the method I use when needing to sync multiple AI inputs.
    - tbob
    Inventor of the WORM Global
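    A related note on the timestamps themselves: once both inputs are paced by a shared clock, the time for each data point can be computed from the start time and the sample index instead of being measured inside the loop, which makes it independent of loop execution speed. A minimal sketch of that idea in C; read_sample is a hypothetical stand-in for the actual AI read:

    #include <stdio.h>
    #include <time.h>

    /* Hypothetical stand-in for one hardware-clocked AI read. */
    static double read_sample(void) { return 0.0; }

    int main(void)
    {
        const double sample_rate_hz = 100.0;   /* rate of the shared clock */
        const int n_samples = 10;
        double t0 = (double)time(NULL);        /* acquisition start time */

        for (int n = 0; n < n_samples; n++) {
            double v = read_sample();
            /* timestamp derived from the sample index, not from how
               fast the loop happens to run */
            double t = t0 + n / sample_rate_hz;
            printf("%.3f\t%g\n", t, v);
        }
        return 0;
    }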

  • Speed improvements in JDeveloper 3.0

    Will the overall response time/speed improve in JDeveloper 3.0?
    Thanks
    Mike

    Hi
    JDeveloper 3.0 is in beta right now, and it has improvements in
    response time/speed.
    For example the deployment wizard is way faster.
    regards
    raghu
    Michael Maculsay (guest) wrote:
    : Will the overall response time/speed improve in JDeveloper 3.0?
    : Thanks
    : Mike

  • Execution Speed of Aurora JRE

    We are experiencing some problems with the execution speed of the Aurora JRE. The documentation states that the Aurora JRE is compiled to native code and should run 2-10 times faster than a normal JRE. We are attempting to do some memory sorts using the Array class, which is taking 10 times longer in the Oracle instance than on a client.
    Is this a known performance bottleneck, are there any "tweaks" available?
    Thanks
    Julian

    Hello Ravi,
    Does your statement about core java classes being natively compiled also apply to the JVM in OAS / iAS?
    I remember reading in a white paper that, in iAS, java code would be translated into C code for native execution. Does this apply to user code or only core java classes ?
    (By the way, it seems impossible to download OAS 4.0.8.2 from OTN; maybe it would help if it was divided into several smaller files...)
    Thanks, Remi DEH
    Originally posted by Oracle Support Analyst (Ravi):
    "In the current JServer release, Java code you load to the server is interpreted. The underlying core classes upon which your code relies (java.lang.*) are natively compiled. Until the native compiler is available for user programs, the net speed benefit of native compilation to your executing program is dependent upon how much native code is traversed, as opposed to interpreted code. The more Java code from core classes and Oracle-provided class libraries you use, the more benefit you will see from native compilation. In 8.1.7, i.e. 8i Release 3, we support natively compiled code for user programs."

  • Execution Speed help

    Hi there!
    I hope someone can help me with my (probably noobish) question:
    I have two Oracle environments, both not maintained by me, and if I execute the following query:
    select * from DATA_TBL where rownum < 11;
    it takes about 8 seconds in the one and less than 1 second in the other environment.
    As everything is selected via '*' and the WHERE clause does not include data columns, it can't be about indexes or analyzed information, can it?
    On both environments I also get the same explain_plan (only thing I have the rights to do):
    PLAN_TABLE_OUTPUT
    | Id | Operation | Name | Rows | Bytes | Cost |
    | 0 | SELECT STATEMENT | | | | |
    |* 1 | COUNT STOPKEY | | | | |
    | 2 | TABLE ACCESS FULL | DATA_TBL | | | |
    Predicate Information (identified by operation id):
    1 - filter(ROWNUM<11)
    Note: rule based optimization
    15 rows selected.
    I would be happy if someone could tell me about possible reasons for the difference in execution speed, or what to tell the DBAs.
    Thanks in advance!
    Jan

    >
    select * from DATA_TBL where rownum < 11;
    It takes about 8 seconds on the one and less than 1 second on the other environment.
    >
    SQL>  create table t1 as select 'AAAAAAAAAAAAA' as word from dual connect by level<=1e7;
    Table created.
    SQL> create table t2 as select 'AAAAAAAAAAAAA' as word from dual connect by level<=1e7;
    Table created.
    SQL> delete from t1 where rownum<5000000;
    SQL> commit;
    Commit complete.
    SQL> set timing on
    SQL> select * from t1 where rownum<10;
    WORD
    AAAAAAAAAAAAA
    AAAAAAAAAAAAA
    AAAAAAAAAAAAA
    AAAAAAAAAAAAA
    AAAAAAAAAAAAA
    AAAAAAAAAAAAA
    AAAAAAAAAAAAA
    AAAAAAAAAAAAA
    AAAAAAAAAAAAA
    9 rows selected.
    Elapsed: 00:00:02.44
    SQL> select * from t2 where rownum<10;
    WORD
    AAAAAAAAAAAAA
    AAAAAAAAAAAAA
    AAAAAAAAAAAAA
    AAAAAAAAAAAAA
    AAAAAAAAAAAAA
    AAAAAAAAAAAAA
    AAAAAAAAAAAAA
    AAAAAAAAAAAAA
    AAAAAAAAAAAAA
    9 rows selected.
    Elapsed: 00:00:00.48
    The DELETE leaves the blocks below t1's high-water mark empty but still allocated, so the full table scan on t1 must read through all of them before it finds its ten rows, while the scan on t2 can stop almost immediately. Check whether the table in your slow environment has seen mass deletes.
    Kind regards
    Uwe
    http://uhesse.wordpress.com

  • Execution speed onboard programs

    Hi all,
    I am trying to store the values of the analog input channels of a PCI-7352 in a buffer, from an onboard program.
    The program runs fine, but I don't know how to measure the total acquisition time in the program.
    An example in LabVIEW (adc-gpbuffer.vi) uses the 'flex_load_delay' function to establish an interval in the acquisition, but the real interval between readings is not calculated.
    Another question: how can I know the execution speed of an onboard program? The documentation does not show this information.
    Thanks,
    Javier
    Attachments:
    adc-gpbuffer.vi ‏165 KB

    Javier,
    there is no easy way to measure the execution timing of onboard programs. One thing that you could do is to toggle a digital line in your onboard program and measure the timing with an oscilloscope.
    Please be aware that the timing of onboard programs doesn't work deterministically, as onboard programs don't run with time-critical priority on the board's CPU, so you will probably see a fair amount of jitter.
    The main purpose of the analog inputs on the 7352 is analog feedback. You can use them for single-point measurements too, but if you need to acquire data with accurate timing you should use an additional M-Series board like the PCI-6220, which provides much better measurement and timing accuracy and, as a true measurement device, a whole set of additional useful features.
    Best regards,
    Jochen Klier
    National Instruments Germany

  • Why am I only getting 0.24 Mbps download speed on my iPad over my wifi when my Microsoft laptop is working massively faster?

    Why am I only getting 0.24 Mbps download speed & 0.47 Mbps upload speed on my iPad over my wifi when my Microsoft laptop is working massively faster?
    I have an EE BrightBox wifi router, and other devices in the house are working perfectly; the only problem I have is with my Apple products.

    Hi,
    I don't have a Mac; I'm using a new iPad and iPhone 5 & 4S, both with iOS 7, but it's so slow. My Microsoft laptop is working fine over wifi, so it has to be the Apple devices that have the problem. I have tried all of the suggested fixes on the forums but still no joy.
