Blinking LED without property node

Hey!
I am familiar with property node blinking, but I want to make an LED blink using a timer together with a FOR loop. I have almost made a VI that compares the timer value to a y value: when the timer value is larger than y, my LED should shine; when the timer value is smaller than y, it should turn off. Something is wrong; maybe someone could help me.
Solved!
Go to Solution.
Attachments:
blinking.vi ‏9 KB
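In text form, the approach described above is roughly the following (a sketch only, not the attached VI; PERIOD and Y are placeholder values):

    import time

    PERIOD = 1.0   # s, length of one blink cycle (placeholder)
    Y = 0.5        # s, threshold the timer is compared against (placeholder)

    start = time.monotonic()
    for i in range(50):                                   # stands in for the FOR loop
        elapsed = (time.monotonic() - start) % PERIOD     # free-running timer, wrapped to one cycle
        led_on = elapsed > Y                              # LED shines while timer value is larger than y
        print(f"iteration {i}: LED {'ON' if led_on else 'off'} (t = {elapsed:.2f} s)")
        time.sleep(0.1)

With the timer wrapped to one cycle, the comparison alone toggles the LED: off for the first Y seconds of each cycle, on for the rest.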

Here are two simpler versions that don't involve any "math" and operate "in place". Pick one!
LabVIEW Champion . Do more with less code and in less time .
Attachments:
blinkers.png ‏4 KB

Similar Messages

  • How can I copy property nodes without them unlinking?

    I'm in the process of designing a VI that allows the user to choose between two methods of data analysis, each with a unique display structure. So I set up the two display methods, overlapped them, and wrote a quick sequence to make the appropriate set appear/disappear using the visible property node. I wanted to be able to copy the frame where I made all the elements of one type invisible. However, when I copied the property nodes, they all unlinked from their front panel items. I found a more elegant way around the issue, but the problem remains: is there any way to copy property nodes without having to relink them manually afterwards?
    Drew

    If you copy by selecting the items and then, while holding down the Ctrl key (on a PC), dragging a copy instead of copying and pasting, the property nodes won't unlink.
    Brian

  • Use blinking property node with a pipe indicator from Automation Symbols toolkit

    Can I use a blinking property node with a pipe indicator/pump from the Automation Symbols toolkit? How?

    The blink works just fine for me.
    Ben
    See attached
    Ben Rayner
    I am currently active on.. MainStream Preppers
    Rayner's Ridge is under construction
    Attachments:
    blink.vi ‏19 KB

  • Just checking on another odd feature: Typedef enum Property Node "Value" not reflecting changes in the Typedef?

    Just encountered this odd behavior in LV 2011 which I reduced to the following example:
    - create a new VI and drop an enum control on the FP.
    - make this control a typedef and open the corresponding typedef
    - create a "case 1" item and a "case 2" item
    - save the typedef (I used the name Typedef Control 1) and CLOSE it (this is to allow updating of the original control).
    - drop a case structure on the diagram and connect the enum to it:
    So far so good. Now create a "Value" Property node for the enum and use it instead of the terminal:
    If I now go back to the typedef and add a "case 3" item, save the typedef and close it, the control is updated, but the Property node is not.
    How do I know that? For one, the Case Structure contextual menu does not offer to create a case for every value. Also, if I add a "case 3" case manually, it turns red.
    Luckily, the magic Ctrl-R keystroke actually does solve this problem.
    Tested in LV 2011.

    By the Ctrl-R trick, do you simply mean running the VI?  That is one way to force a recompile, but if you hold down Ctrl while pressing the run button you can recompile without running.  This should not be "dangerous" in any situation.
    As you have drawn your example, I see no reason not to use a local in that situation (ok, maybe for vanity).  Still, I view the behavior you describe as a bug, and it should certainly be fixed for the benefit of local haters out there.  You have to be a little careful where you draw the line between what gets handled in real time and what gets handled only at compile time.

  • Disabled property node hangs loop

    I've got a parser loop, operating on streamed data from a CRIO via UDP.  The loop operates at about 90Hz.  In response to the user opening a file, the code will use a property node to enable (or disable and gray) a couple of boolean front-panel objects.  When these property nodes execute, I see the CPU usage (in Windows task manager) go to close to 100% and coincidentally, the parser loop hangs.  Other loops within the same VI continue to run.  CPU usage stays at 100% until I force a VI abort.
    With a small test-VI, I've noted that these property nodes require roughly 10ms to execute.  This seems quite sluggish, but nonetheless, my thoughts are that my code, running at 90Hz, would be able to tolerate a single slip of the loop execution time, particularly because the UDP data is queued.
    Any thoughts regarding this property node execution time or suggestions on how to improve the code?  Thanks in advance.

    Mark, thank you for offering to look at a stripped-down version of the code.  I just couldn't spare the time required to simplify our complex code.  However, I've been working on this problem in the past couple of weeks since I originally posted the question, and have made some progress toward a solution.  Although I still do not have a conclusive explanation for Labview's behavior, I thought I'd follow up with a list of what appears to have made improvements to the code.  I'll concede that these suggestions are not definitive, but the problems are not repeatable and without any transparency into Labview's internal behavior, my analysis of the problems and my attempts to find a fix are admittedly speculative in nature.  Software development shouldn't be magic, but damn it seems like Labview requires we dance around a black candle.  Frustrating.  OK, exiting rant mode, here is a list of what NOT to do if you want Labview to be more stable:
    --  Do not use frames around your front panel objects.  Our main panel has approximately 100 front-panel indicators.  In an attempt to make the interface more intuitive, in a recent code revision we grouped the objects using frames.  The effect was a sluggish UI and a processor loading that went to close to 85%.  I'm aware from posts on this forum that overlapping indicators forces Labview to update all when any is updated.  This is an understandable coding constraint.  OK, fine, we weren't overlapping any indicators.  But for pete's sake, why should the same constraint apply to a purely decorative object like a frame?  This strikes me as a fundamental philosophical flaw in LV's coding.  Group N objects with a nice frame -- update the objects N-squared times.  If this is the result of using a frame, I would have preferred that NI not even offer the option.  Bad choice on the part of the Labview coders and bad choice on our part for assuming zero frame impact on performance.
    -- Do not use property nodes.  We occasionally gray-out front panel objects when appropriate for the state of the software.  This appeared to be contributing to Labview's instability.  I built a diagnostic routine that measures execution time for the "gray-out and disable" property node.  Generally around 8ms, but occasionally as high as 16ms.  Good grief.  I've got a code loop running at 90Hz.  A 16ms hit isn't easy to tolerate or frankly to understand.  Particularly when slow execution is the BEST of the consequences -- the worst is that the property node seemed on occasion to precipitate a hung loop.
    -- Do not use Labview's built-in queue structure.  Our code was originally using queues to hand off packets of data from a UDP-listener loop to a packet-parser.  The UDP-listener blocks on UDP reception, shoves the packets into the queue.  The packet parser blocks on data available in the queue and subsequently writes the data to file.  NI would have you believe, and I did believe for a while, that this is an elegant producer/consumer approach to this problem.  When our problem would occur, the UDP-listener continued to put data into the queue, but the packet parser would never retrieve it.  Just went off into nowhere, consumed and forgotten by the Great Labview Scheduler in the Sky.  The loop would hang, wouldn't respond to the stop button, would require a forced-abort.  Subsequently, if we simply restarted the code, we couldn't be assured that the packets retrieved from the queue would be in chronological order.  Seemed to be just randomly retrieved.  Clearly the failures had corrupted some of the Labview internal data structures that govern the queue operation.  We couldn't assure proper behavior unless we shut down and restarted Labview each time the error occurred.  The solution was to abandon code elegance in favor of sequential operation -- get rid of the queue, listen for the UDP packet then parse it immediately.  No queue handoff.  No further parser lock-ups.
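    For reference, the handoff described above is the classic producer/consumer pattern; a minimal sketch of that structure (illustration only, in Python rather than LabVIEW, with made-up packet handling) looks like this:

        import queue
        import threading

        packet_queue = queue.Queue()          # hands packets from the listener to the parser

        def udp_listener():
            # stands in for the loop that blocks on UDP reception
            for i in range(100):
                packet = f"packet {i}".encode()
                packet_queue.put(packet)      # producer: enqueue each received packet
            packet_queue.put(None)            # sentinel: tell the parser to stop

        def packet_parser():
            # stands in for the loop that blocks on the queue and writes to file
            while True:
                packet = packet_queue.get()   # consumer: blocks until data is available
                if packet is None:
                    break
                print(packet.decode())        # parse / write the packet here

        threading.Thread(target=udp_listener).start()
        packet_parser()

    The sketch only illustrates the handoff the original code was doing; as described above, it was eventually replaced by listening for the UDP packet and parsing it immediately in one sequential loop.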
    I'm not sure what other bombs might be lurking in our code.  Our listener and parser code hasn't lately hung, but the problem is starting to move on to other loops.  They'll run for hours and then just stop.  Dead.  Frozen.  In the most recent cases, even the abort button won't shut them down.  We have to use Windows Task Manager to kill them.  I'll admit to harboring some deepening skepticism for any of the more clever and powerful "features" that NI has added to LV.  From my perspective, these more powerful features cannot come free of cost -- they must impose some unavoidable computational burden on LV itself, a burden that LV seems unable to handle, with unpleasant consequences.  Must we impose a moratorium on Timed Loops?  Event structures?  To what level of simplicity must I drive our code to ensure stability?
    Thanks, everyone, for tolerating my frustration, and for your comments, if you've got any guidance you can offer.
    -dave sprinkle

  • Can I use the 'Export Signal Property Node' on a quadrature encoder?

    Hi,
    So I don't know which counter board I'd be using yet for this (it's used in conjunction with a PCI-6280--the PCI-6280's counter inputs are all taken and so I need another board), but assuming this is possible at all in DAQmx I wouldn't mind knowing whether, say, the PCI-6601 (or any other timer board for that matter) could do this. I'm programming this in LabVIEW 2010 by the way. 
    I want to have a counter which counts the number of pulses on one channel (I'll call this the 'clock' channel) between when another channel goes from low to high (which I'll call the trigger). It's basically a pulse width measurement, but I only care if there are more than n clock pulses between triggers. I need to have a hardware-timed digital signal which goes from low to high if there are ever more than n pulses between trigger changing state from low to high. 
    What I am planning to do is this: 
    Wire 'trigger' to the z-input of the quadrature encoder, and set the z-input value to some arbitrary large value such that, at the quadrature encoder counter task's settings, the counter reaches terminal count in n pulses.
    Configure the quadrature encoder counter using DAQmx Export Signal Property Node (tutorial I was looking at is here: http://zone.ni.com/devzone/cda/tut/p/id/5387 ) to toggle a digital channel ('counter event output') from low to high if the counter reaches terminal count (ie, if the encoder reads n pulses).
    If the encoder ever reads n pulses on 'clock' between two rising pulses on 'trigger', it sets counter event output high.
    Is this possible? Reading through the manual of M series PCI-62xx devices, the index pulse loads the counter with a particular value so it seems like you could conceivably set the counter to the terminal count if you wanted. My only real problem is whether DAQmx Export Signal Property Node works on all counter tasks or just on edge counting tasks. 
    Thanks in advance for your help. If this isn't possible, I can reply with more details on the problem this is supposed to solve so that you can help me figure out an alternate method.
    Solved!
    Go to Solution.

    There is probably a way to do it, but it may be easier to use an X-series board for the job.   They support a new counter capability for count reset on a digital edge without needing to be configured in encoder position mode.  I am not sure exactly how that feature's been implemented however, so maybe it won't make things easier after all.
    The plan based on the hoped-for behavior: 
    1. Configure an X-series counter for pulse generation based on "ticks" of your clock channel.
    2. Set both initial delay and low time to the critical # of ticks.
    3. Configure for count reset on a digital edge (if possible in pulse generation mode)
    4. Configure the count reset value to be the critical # (or possibly 1 less, if possible in pulse generation mode)
    5. If you want the output to remain high indefinitely, configure the counter task to use its own output as a pause trigger, and pause while high.
    The way pulse generation works is to preload a # of "low time" ticks into the count register.  Then every source edge will decrement the count.  When the count reaches terminal count (0), the counter's output is toggled (or can be configured to pulse).  The register is then loaded with the # of "high time" ticks and the process continues.
    You would be perpetually interrupting the count-down process as long as you got your triggers in time.  The count would keep getting reset to the # of low counts, keep decrementing toward 0 without reaching it, and so on.  If ever you did reach 0, the output state would toggle high, then the high state would prevent subsequent clock signals from decrementing the count.
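    To make the countdown/reset behaviour concrete, here is a small software simulation of that scheme (illustration only; in the real setup the counter hardware does this, and the event stream below is made up):

        def pulse_output(events, n):
            """events: time-ordered 'clock' / 'trigger' edges; n: critical # of ticks."""
            count = n                 # preloaded "low time" ticks
            output_high = False
            for edge in events:
                if output_high:
                    break             # pause-trigger trick: once high, stop counting, stay high
                if edge == "trigger":
                    count = n         # count reset on the trigger edge
                elif edge == "clock":
                    count -= 1        # each clock edge decrements toward terminal count
                    if count == 0:
                        output_high = True   # terminal count reached: output toggles high
            return output_high

        # triggers keep arriving within n = 5 clocks -> output stays low
        print(pulse_output(["clock"] * 3 + ["trigger"] + ["clock"] * 4 + ["trigger"], n=5))
        # a gap of 5 clocks with no trigger -> output goes high and latches
        print(pulse_output(["trigger"] + ["clock"] * 5, n=5))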
    You can conceivably do a similar thing with a 6601, but I'm pretty sure you'd need 2 counters working together to get it working.
    -Kevin P

  • Property nodes are severely affecting performance

    LabVIEW Gurus,
    I am continually running into some serious performance hits using property nodes to update attributes of FP objects. Attached is a classic example.
    I have 8 XY plots that are being fed 600 SGL points every 200 msec - a very modest data rate. Each plot is a dynamically instantiated .vit placed into one of 8 subpanels in a container VI. The container VI also acts as the data server for the charts, sending each one its data in its own single-element queue. The entire architecture runs great (~4% CPU load, see attached picture) until I begin updating a property node to display the value of the cursor y-value. When I enable the "Caption.Text" property node of the XY Graph to display the cursor value, the CPU usage soars to over 30%.
    As an aside, I am developing on a dual core 2.1GHz platform with 4G memory with LV8.5.1, and the target machine is not nearly as beefy. That's why 30% CPU on my powerhouse is an issue - it basically brings the embedded target to its knees.
    I have included an example VI for you to run on your machine. Consider it "representative" of my bigger issues. The VI runs about 10% CPU without the caption update, and 20% with the caption updates.
    Finally, I have tried putting the VIs into the UI execution system. I have also tried Defer Panel Updates, but this actually slows down performance.
    Best regards,
    Jack Dunaway
    With Captions / Without Captions: see the attached screenshots.
    Message Edited by mechelecengr on 10-10-2008 11:23 AM
    Message Edited by mechelecengr on 10-10-2008 11:27 AM
    Solved!
    Go to Solution.
    Attachments:
    WithCaptions.png ‏102 KB
    WithoutCaptions.png ‏96 KB
    GraphPerformanceProblems.vi ‏23 KB

    Yes, property nodes force synchronous execution and if you do that too often, other things suffer.
    The above solution is good. You can simplify things even more by removing all that unneeded extra code that just complicates things.
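    Independent of the attached draft, a general way to tame this is to write the caption only when the displayed text actually changes, and no more often than a few times per second. A rough sketch of that idea (names and numbers are placeholders, and the property write is just a stand-in):

        import time

        MIN_PERIOD = 0.5          # s, update the caption at most twice per second (placeholder)
        _last_text = None
        _last_write = 0.0

        def write_caption(text):
            print(text)           # stand-in for the synchronous Caption.Text property write

        def maybe_update_caption(cursor_y):
            global _last_text, _last_write
            text = f"Cursor: {cursor_y:.3f}"
            now = time.monotonic()
            # skip the expensive UI write unless the text changed and enough time has passed
            if text != _last_text and now - _last_write >= MIN_PERIOD:
                write_caption(text)
                _last_text = text
                _last_write = now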
    Here's a quick draft. Let me know if you have questions.
    Message Edited by altenbach on 10-10-2008 09:58 AM
    LabVIEW Champion . Do more with less code and in less time .
    Attachments:
    samewithlesscode.png ‏9 KB
    GraphPerformanceProblemsMOD.vi ‏19 KB

  • How to hide a tab in a Tab Control by Property Node?

    Hi,
    I have a Tab Control in my Front Panel. So, using an array of LEDs, I would like to hide some tabs, probably using Property Node functions, but I can't find which option does it. Is it possible?

    Thanks Macro!  I came to the forum to find a solution for this exact problem, it's nice to find it already laid out for me so clearly. 
    -Joe

  • Property Node in VI throws Error 7 in LV 7.1 but runs OK in LV80 and LV86

    Hi everybody
    I built a custom IVI instrument driver and, using the LV tool <Generate VI Interface from Instrument CVI Driver>, I was able to get an LV wrapper for each driver method. From LV86 I saved first in LV80, and from LV80 I saved as LV71. I have all these LV versions installed on my PC.
    I have no trouble in using these LV wrappers in any of these LV versions as they work OK.
    Now my IVI driver also has Properties that the Import Driver Tool does not convert as a wrapper, and for that reason I had to create a Property Node Wrapper myself and save it in the same LLB under the LV/instr.lib folder.
    Once I had all these method wrappers and the property node wrappers, I made a small VI:
    1. Initialize IVI With Options on a TCPIP instrument
    2. Set-Get an IVI Timeout property
    3. Close IVI driver on TCPIP instrument.
    The good part is that in LV86 and in LV80 the VI runs fine when I use that LV version's wrappers from the corresponding instr.lib folder.
    As soon as I use the LV71 wrappers in LV71, I can create the same small VI to Set/Get Timeout and the VI looks OK (nothing is broken), but when I run it the initialization is OK and then, as soon as it reaches the Timeout Set Property Node, it gets out an Error Code 7.
    To run this VI the user needs to install the jdCMR IVI driver, then add the LV71 jdCMR wrappers inside LV71/instr.lib, and then he may build any VI using these LV wrappers.
    The only problem is that the Property Node gets out an Error Code 7, and I get the same error code if I pick the Property Node from the VISA Advanced palette, connect this node to the jdCMR driver, set the Timeout property inside the node, and connect the input and output references plus the error in and out to the Initialize and Close LV wrappers; I still get this error at run time.
    The same LV71 VI, if I open it in LV80 or in LV86, runs without any problem.
    Does anybody know why LV71 is not working the same way as LV80 and LV86 with respect to the Property Node?
    Btw, when I save my VI from LV80 to LV71 I get this warning...
    IVI Error Message Builder.vi
        Cannot save VI from VI.LIB to previous version.
    Merge Errors.vi
        Cannot save VI from VI.LIB to previous version.
    Thanks
    Sorin

    Hi
    For further testing of the IVI driver Property Node bug with LabVIEW 7.1 IVI drivers, I downloaded and installed two different IVI drivers from two well-known instrument companies, Rohde & Schwarz and Agilent Technologies.
    I have to mention that both these companies released their IVI NI LabVIEW 7.1 drivers under the NI Instrument Driver Network, ready to be installed and used by any customer of these two instruments.
    The bug is present only in the LabVIEW 7.1 version: if I take the same VI that breaks in LabVIEW 7.1, I can run it without any problem in LabVIEW 8.0 or LabVIEW 8.6.
    Anybody can test this bug for these two NI-released IVI drivers in simulation in LabVIEW 7.1 by following the links below.
    Agilent ag81150ni IVI Driver for LabVIEW 7.1: install from here. Run in simulation only by setting Simulate=1.
    http://sine.ni.com/apps/utf8/niid_web_display.download_page?p_id_guid=55798957B1A633BDE0440003BA7CCD...
    Rohde & Schwarz rsngpt IVI Driver for LabVIEW 7.1: install from here. Run in simulation only by setting Simulate=1.
    http://sine.ni.com/apps/utf8/niid_web_display.download_page?p_id_guid=E3B19B3E91D6659CE034080020E748...
    After the installation completes, close LabVIEW 7.1 if it was open, then restart LabVIEW 7.1; you should now see, under the LabVIEW Instrument Driver palette, two new IVI drivers ready to be used as LabVIEW 7.1 wrappers.
    Open a new blank VI and, from the Instrument Driver palette, use the two well-known VIs Initialize With Options.vi and Close.vi: add them to your blank VI's block diagram and connect them together. Accept all default parameters except Simulate, which must be set to Simulate=1.
    Both VIs run OK in simulation mode without errors. Now pick a Property Node from the VISA Advanced palette, squeeze it between the Initialize With Options VI and the Close VI, and make the instrument reference in/out and error in/out connections.
    Now run these two simple VIs in simulation:
    When I run the Rohde & Schwarz IVI driver, the Property Node passes OK until the end.
    When I run the Agilent IVI driver, the Property Node gets out Error Code 7, which is the same as my own driver's error.
    Both these IVI drivers are on the NI Instrument Driver Network and have been built and integrated as native NI IVI instrument drivers.
    The question is why they behave so differently with respect to the Property Node from the VISA Advanced palette.
    I attached the screenshots as PNG files that clearly show the difference in VISA Property Node behaviour when used under the same circumstances.
     Thanks  Sorin  
    Attachments:
    ScreenTestShots.zip ‏152 KB

  • What are the alternatives to updating indicators using property nodes?

    Hello,
    I'm building a VI which needs to update several controls/indicators at multiple points throughout its execution. It also needs to be able to accept new values from the controls at any given time.
    The problem with this is that all of these controls and indicators are on the front panel of another VI which is calling my VI. The current version of my program updates all of these controls and indicators using references and property nodes (each indicator/control that needs to be used has its own reference control on my VI, and these references are then sent into property nodes), which naturally makes it slow.
    At the moment I'm considering rebuilding my VI in such a way that the main one is able to retrieve the data without using references, but this will be not only very time-consuming but also difficult and possibly even impossible without my code turning into a massive disorganized pile (especially since the lab computer is slow enough, and the main VI large enough, that pressing the 'clean/reorganize block diagram' button causes a crash).
    Any alternatives to this? Queues?
    Solved!
    Go to Solution.

    This (my Event nugget) is the best general solution I have come up with so far.
    Felix
    www.aescusoft.de
    My latest community nugget on producer/consumer design
    My current blog: A journey through uml

  • Error -70038 occurred at Property Node in nimc.fb.power.power.coordinate.vi

    HI,
    I created a Real Time application with debugging enabled that only makes a straight line move and then finds center (using SoftMotion function blocks). When I run the VI from the project everything works fine. When I create the executable and run the application as a startup, sometimes it moves beyond the limit switches and I have to abort the application; it never really works as smoothly as it does when run directly from the project.
    I simplified the application. When connecting to the executable ( Operate>>Debug Application or Shared Library...)
    I saw that sometimes I was getting an error that said the find center function could not be executed because the driver was not enabled. This was never needed when running from the project. I added the Power SoftMotion block and I enabled the driver before getting into the while loop that finds center. Again the VI ran without errors when run directly from the project, but when I connected to the executable I found the following error coming out of the Power SoftMotion block (the name of my VI is: "Testing SEND to ZERO by itself.vi"):
    "Error -70038 occurred at Property Node in nimc.fb.power.power.coordinate.vi --> Testing SEND to ZERO by itself.vi
    Possible reason(s):
    Motion: An unexpected error has occurred internal to the driver. Please contact National Instruments with the name of the function or VI that returned this error" 
    I am using cRIO 9022 as a controller and two 9512s.
    LabVIEW 2009
    NI-RIO 3.2.1 
    SoftMotion
    Any idea why the VI works fine when run directly from the project but does not work as expected when run as a stand-alone application directly on the cRIO?
    Thanks,
    Fab 
    Certified LabVIEW Architect * Certified Professional Instructor * LabVIEW Champion

    Marc,
    When I simplified my program, I forgot to put the Power Function block inside a loop and make sure I was sending a pulse for the execute input. I am guessing that could have caused the weird error.
    If I see it again, I will let you know.
    Thanks,
    Fab 
    Certified LabVIEW Architect * Certified Professional Instructor * LabVIEW Champion

  • How to change color of blinking LED

    Hello,
    in the attached file is a LED with property blinking = true.
    The problem is that it blinks yellow. How can I change the yellow to another color?
    TFTC
    Attachments:
    blink.vi ‏9 KB

    If you go into the LabVIEW options and select Colors, there is a color for the blink foreground and one for the blink background. Changing these from the defaults will add these entries to your labview.ini file:
    blinkFG=
    blinkBG=
    If you make an exe from your VI you will need to add these two entries to your exe's INI file.  You can also change the speed.  On the upside, you can do this programmatically, but it will require a restart of the VI in order to re-read the INI file... I think; I have not tested it, but I assume that it is only read on launch.  You can experiment with it.
    You can also change the blinking speed in the properties which adds the line
    blinkSpeed=
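    So an exe's INI file would end up with entries of this shape (the section name and values below are placeholders; copy the actual values LabVIEW writes into labview.ini after you change the options):

        [MyApplication]
        blinkFG=00FF00
        blinkBG=000000
        blinkSpeed=1000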
    I hope this clarifies things
    Paul <--Always Learning!!!
    sense and simplicity.
    Browse my sample VIs?

  • Y scale flip in waveform graph when using property node.

    I'm having a problem with a property node. I'm trying to build a stackable scope with Waveform Graphs and dynamically change the number of plots. It is working, but the Y scale is flipping: the minimum goes to the upper part and the maximum goes to the bottom. Also, when I change its appearance it's not very fast; it takes about 1 or 2 seconds on my computer.
    The way it is actually built is:
    Cluster_1 contains 8 clusters, and each of the 8 clusters contains a Waveform Graph. By reference, I change the property of each Waveform Graph to make each one visible or not.
    First I would like to solve the scale problem, and second, accelerate the redrawing of my scope.
    Maybe there is another way to do what I need.
    Thanks.
    Nitrof
    Attachments:
    MultiScopeExample.llb ‏192 KB

    Wow, you have a neat program! The problem with your y-axis flipping is that the last four charts in the cluster were programmed with a flipped axis. I relabeled the charts and it worked correctly.
    From what I could tell with your code, you do not need the sequence structures. The one inside of the event structure actually runs the same code twice. Remove this duplication and you should see improvement. Also, clusters are slow, so you should not expect it to blink back with additional coding. Use Defer Panel Updates to get that behavior. I incorporated these changes and attached the program.
    Jeremy Braden
    National Instruments
    Attachments:
    MultiScopeExample.vi ‏163 KB

  • Property Node activated even though it's been removed from Block Diagram

    Hi,
    I messed around with the property node of one of my controls and when I was finished and removed the property node from the block diagram it still makes my control blink.
    How do I make it stop??
    Is it not possible to set the Enabled state via the property node? I'd like to go from disabled and grayed to enabled.
    Thanks.
    Solved!
    Go to Solution.

    Hi
    Your blinking property might still be set. Add the property again to set it to False.
    To set the enabled/disabled/grayed state, use the "Disabled" property (0 = Enabled, 1 = Disabled, 2 = Disabled and Grayed Out).
    Hope this helps.

  • How to read the configuration of a FXP number via property nodes or other methods.

    Hello all,
    I am attempting to store in plain text the value and configuration specifics of the LV FXP datatype (please do not suggest I cast it to an integer).
    The INI config format does not support FXP, so we'd like to use property nodes to interrogate the specifics of an FXP numeric control on the FP.
    It appears this is not exposed by control refs and property nodes.
    Oddly enough, via property nodes you cannot even interrogate the detailed type of an INT, i.e. whether it is a U8, I8 or I16 etc.; the deepest you can go is 'Digital'.
    Specifically, you need to know the following to 'reconstruct' a FXP:
    1. Value
    2. Signed\unsigned
    3. Word length
    4. Integer word length
    Without this, there is a risk of corrupting the value during conversion, i.e. we want to load the initial FXP from disk at application startup. This is a critical application and we cannot tolerate rounding\conversion errors accumulating with numerous reads\writes of FXP settings.
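    As a sketch of how those four items let you round-trip safely (illustration only, not LabVIEW code, and without casting the control to an integer): the smallest representable step of an FXP is 2^(integer word length - word length), so a value loaded from text can be snapped back onto that grid before it is written to the control:

        def fxp_snap(value, word_length, int_word_length, signed):
            """Quantize a loaded value onto the FXP grid described by the stored config."""
            delta = 2.0 ** (int_word_length - word_length)     # weight of one LSB
            steps = round(value / delta)                       # nearest representable step
            # clamp to the representable range of the word
            if signed:
                lo, hi = -(2 ** (word_length - 1)), 2 ** (word_length - 1) - 1
            else:
                lo, hi = 0, 2 ** word_length - 1
            return max(lo, min(hi, steps)) * delta

        # e.g. a signed FXP with a 16-bit word and 4 integer bits: step = 2**-12, range about +/-8
        print(fxp_snap(1.23456789, word_length=16, int_word_length=4, signed=True))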
    It is odd that you cannot interrogate via property nodes the details of a numeric type, i.e. you cannot 'discover' via the property node of an INT whether it is a U8, I8 or any other type.
    Regards
    Jack Hamilton
    PS: Don't suggest XML either....

    Aristos Queue wrote:
    Alias name here wrote:
    ..second post telling me the 'propertys' of a control have nothing to do with the value is bizzare - via 'properties' for a LV control is the ONLY way to configure the specific type of a numeric...so via the numeric 'property nodes' should\would be able to query it's configuration.
    I do not see any way to set these things through the properties...
    I think he means by right clicking the control on the front panel and configuring with the properties dialog. The properties are exposed there, but not within the property nodes.
    Edit: You beat me.
    CLA, LabVIEW Versions 2010-2013
