Digits of precision on a gauge digital display

Hello all,
I'm experiencing a simple problem with the digital indicator on a gauge. I want to set the display to 'Floating Point' with 0 digits of precision, but for some reason it keeps defaulting to 1 digit of precision. I'm using LV8.2, but this is something I could do in LV7.1.
Any tips would be great.
Thanks,
David

I just came across another workaround....
Select "Properties" by right-clicking the DBL indicator, go to "Format and Precision", then activate the "Advanced editing mode".
There you can enter %d as the format string for the indicator.  It's still a DBL internally (useful for saving if you use local variables), but it's displayed as an integer.
Thought this might help.
Shane.
PS: this may be what was meant in the previous post, but just to be sure, and because I found it by accident, I thought I'd post it anyway.
Message Edited by shoneill on 12-07-2006 01:47 PM
Using LV 6.1 and 8.2.1 on W2k (SP4) and WXP (SP2)
Attachments:
Float precision LV 8.20.PNG ‏14 KB
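For anyone comparing this to simply setting 0 digits of precision: in C-style format strings (the syntax the advanced editing mode uses), %d converts the value to an integer, while %.0f keeps it a float and just hides the decimals. A quick Python sketch of the difference (Python shares this printf-style syntax; note Python's %d truncates, and I haven't verified whether LabVIEW's %d rounds instead):

```python
value = 3.7

# %d converts the value to an integer (Python truncates toward zero)
print("%d" % value)    # → 3

# %.0f keeps the value a float and rounds to zero decimal places
print("%.0f" % value)  # → 4
```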

Similar Messages

  • Too many digits of precision in imported data

I store some data in a TDMS file with four digits of precision. When I recall the data in DIAdem or import it into Excel, there are many more digits than I want. Do I just have to live with that, or is there a way to limit the number of digits to just what I want?
    George

    Hi George,
    Are you saving the TDMS file with LabVIEW or DIAdem?  In LabVIEW, you wire a data array of a particular data type to be written to the TDMS file, so the precision used on disk will match that of the data type.  So if you're currently wiring a DBL type, converting that array to a SGL type prior to writing the array to TDMS will reduce the precision (and the disk footprint), though it may not reduce it enough for your taste.  If you are wiring waveforms to the TDMS file, I believe you are stuck with DBL precision.
    If you are saving the TDMS file in DIAdem, and if you have DIAdem 10.1, then you have the option of using the new ChnQuantize() function to limit the resolution of the data both inside of DIAdem and when you save it to disk.  Unfortunately, DIAdem 10.1 has a bug for TDMS output when using custom quantization, though this will be fixed in the upcoming DIAdem 10.1 SP1.  Still, this doesn't necessarily reduce the number of displayed digits, it just reduces the accuracy so that similar values can be saved in a smaller binary footprint.  So this may not be what you're after.
    Regardless of the resolution of the saved binary data, DIAdem loads all channel values as DBLs in the Data Portal, and it displays all the digits of precision corresponding to the DBL value it has in memory.  However, you can always change the displayed format of a channel value in a VIEW or REPORT table by changing that column's format string.  Try "d.ddd" or "d.ddde" for instance.
    Ask if you have further questions,
    Brad Turpin
    DIAdem Product Support Engineer
    National Instruments
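    To make the DBL-to-SGL idea above concrete, here is a small Python sketch (standard struct module only, not the TDMS API) of what the cast does to a value: the 64-bit double is rounded to the nearest 32-bit single, which keeps roughly 7 significant decimal digits in half the bytes:

```python
import struct

x = 0.123456789012345  # a DBL (64-bit) value

# Round-trip through a 32-bit float, as casting DBL -> SGL would do
as_sgl = struct.unpack('f', struct.pack('f', x))[0]

print(struct.calcsize('d'), struct.calcsize('f'))  # 8 bytes vs 4 bytes
print(x)
print(as_sgl)  # only about 7 significant digits survive
```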

  • TDMS digits of precision

    I am developing an instrument to record voltages in the microvolt range from a Keithley 2420, using TDMS to record them in real time.  I just discovered that the TDMS file only has 6 digits of precision, which is not enough; I need at least 7.  I looked but did not see a way to increase this, so here is my latest quandary.
    I am using LV2012 on a Mac, so I know TDMS was only just implemented in this version, but I do not recall any setting of this type when I used TDMS in Windows LabVIEW.
    Thoughts?
    Thank you,
      Vince
    Solved!
    Go to Solution.

    Cancel this: I found that only the display in the 'TDMS File Viewer' is limited to 6 digits of precision. When the file is opened using 'TDMS Read', it contains all the original precision.
    Whew.
      Vince

  • Message sent using invalid number of digits. Please resend using 10 digit number or valid short code. Msg 2114

    Message sent using invalid number of digits. Please resend using 10 digit number or valid short code. Msg 2114
    How can this be fixed? Most of my contacts' numbers are saved as 7-digit numbers.

    So I THINK I've figured this out. Hopefully. I was getting the same problem when I was trying to text my boss and my dad. I went back to try to delete the contact and try again and I noticed that I had the little phone icon next to their numbers and not the messaging text bubble. But I'd also left their type of phone as "home" in the drop down menu. I edited that to mobile and boom, got the text messaging bubble. Tried texting them both, I could now send the message.
    I think if the number is set to home the phone assumes it's a landline and doesn't even try to send. I think the people who have had success deleting and re-inserting the contact edited that piece of information without realizing it.
    I really hope this solves your problem!

  • Setting the digits of precision for TDMS logging

    I can't seem to figure out a way to set the digits of precision when logging data to a TDMS file. I'm hoping to reduce the file size by doing this. Any suggestions?
    Thanks,
    Cosimo
    Solved!
    Go to Solution.

    The data in a TDMS file is binary, so you can't set the digits of precision.  If you want smaller files, cast your data to singles instead of doubles.  You will have roughly half the significant digits, but use half the bytes on disk.
    There are only two ways to tell somebody thanks: Kudos and Marked Solutions
    Unofficial Forum Rules and Guidelines
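    The halved disk footprint is easy to verify outside of TDMS. A Python sketch using the standard array module (just to illustrate the size arithmetic, not the TDMS API):

```python
from array import array

data = [float(i) for i in range(1000)]

dbl = array('d', data)  # 64-bit doubles
sgl = array('f', data)  # 32-bit singles

print(len(dbl.tobytes()))  # → 8000 bytes
print(len(sgl.tobytes()))  # → 4000 bytes, half the disk footprint
```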

  • PowerMac G4 Digital Audio & Wide Screen (16/9) LCD Display - Compatible?

    I want to buy a 23" LCD wide screen monitor to use with my G4 466 Digital Audio (1/2/2001). The display Chipset Model is ATY,Rage128Pro with 16 MB of VRAM. Both the G4 and the wide screen monitor have VGA connectors.
    When I look at the display resolutions in Sys Prefs, I don't see a 16/9 ratio resolution listed.
    Can this computer be compatible with a 16/9 monitor?
    Thanks!
    -Joe

    You never do see an exact 16:9 resolution in the list. On my 1 GHz G4, for example, the resolution is set to 1680 x 1050, which is 16:10.
    I do not know the answer to your question since I have never used a 23" monitor. I do have an Acer 22", and I just verified that it gives the correct aspect ratio connected to the video card on a 400 MHz G4. My guess is that you will be able to set the resolution so there is no distortion, but you may not be able to drive the monitor's pixels to their limit, i.e. you may have to settle for 1360 x 768; but I am just guessing.

  • Significant Digits to Digits of Precision by default?

    Howdy,
    Is it possible to set the default precision for front panel numerics to, say, 4 digits of precision instead of 6 significant digits? I prefer digits of precision, and I always have to change each and every control and indicator. Is this selectable somewhere?
    Thanks!
    B-)

    I agree it would be useful to set global defaults for formats and other cosmetic properties.
    I typically want a fixed decimal resolution and the numbers right-aligned. I will never understand why all numerics are left-aligned.
    If I change an integer to hexadecimal or binary format, I want it padded with zeroes on the left and the default number of digits corresponding to the data type (e.g. U8: 8 digits for binary and 2 digits for hex).
    There is always the product suggestion center.
    LabVIEW Champion . Do more with less code and in less time .

  • Dvt:gauge StatusMeter displaying negative value strangely

    Hi,
    I'm creating a number of dvt:gauge components to display some retrieved numeric values. My aesthetic preference is to use the STATUSMETER gauge type. The values to be displayed vary greatly in scale and may be positive or negative. As a result I'm letting Oracle determine the best Min and Max values for the gauge(s).
    The problem is that when a negative value is displayed, the status bar is still shown starting from left to right. This looks quite odd to our users as we would expect the bar to start at 0 (zero) and move from right to left. Currently it looks as if the value starts at an arbitrary negative value and is moving towards zero...?!
    I've tried configuring the Min and Max values myself with no joy. Are there some additional attributes that I'm missing that enable the Status Meter to be shown in a more sensible way? Thanks.
    FYI. I'm using JDeveloper Studio Edition Version 11.1.2.1.0.

    Hi,
    Any help on this would be really appreciated.
    Thanks.

  • Single precision integer to 8 digit hex string

    I need to convert a single-precision floating-point value to an 8-digit MSB-LSB hex string to send to a test machine via RS-232, so 15 needs to turn into 41700000.  The best I have been able to do is get 15 to turn into 0000000F, which is the integer value in hex rather than the floating-point bit pattern I need.  Any ideas?

    Just typecast the SGL to U32 and format it with %08x.
    Message Edited by altenbach on 05-30-2007 08:49 AM
    LabVIEW Champion . Do more with less code and in less time .
    Attachments:
    SGL-to-HEXstring.vi ‏8 KB
    SGL-to-HEXstring.png ‏6 KB
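    The typecast-then-%08x recipe can be sanity-checked in Python with the standard struct module: reinterpret the IEEE-754 bits of the single-precision value as a big-endian (MSB-first) unsigned 32-bit integer, then format it as hex:

```python
import struct

# Reinterpret the bits of a 32-bit float as a big-endian U32
bits = struct.unpack('>I', struct.pack('>f', 15.0))[0]

print("%08X" % bits)  # → 41700000, as expected
```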

  • Digits of precision limit to 3 significant digits

    Hello!
    I need 3 significant digits. If I choose 13 digits, it gives me 2 places. If I choose 14, it gives me all places. Can you please see the attached VI and help me? I need 3 decimal places. Is this possible?
    Thanks
    Solved!
    Go to Solution.
    Attachments:
    digits.vi ‏6 KB

    Hi Susanne,
    you need to choose a display format of "%.3f". Change the properties of the numeric control accordingly!
    If you really insist on "3 significant digits", you need to choose "%_3f"; all of this is available in the control's properties dialog!
    Best regards,
    GerdW
    CLAD, using 2009SP1 + LV2011SP1 + LV2014SP1 on WinXP+Win7+cRIO
    Kudos are welcome
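    For reference, the same distinction exists in printf-style format strings generally: a fixed precision gives digits after the decimal point, while %g-style formatting gives significant digits (LabVIEW spells the latter %_3f). The Python equivalents:

```python
x = 3.14159265

print("%.3f" % x)  # → 3.142  (3 decimal places)
print("%.3g" % x)  # → 3.14   (3 significant digits)
```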

  • Double digit display of one digit number

    Hi all,
    I was wondering if there is a quick way of showing one-digit numbers between 0-9 as 00-09.
    Best,
    Kutal.
    Solved!
    Go to Solution.

    Just as RavensFan suggested. Check the image below if it is still not clear.
    Thanks
    uday,
    Please mark the solution as accepted if your problem is solved, and help the author by clicking on kudos
    Certified LabVIEW Associate Developer (CLAD) Using LV13
    Attachments:
    Example_VI.png ‏3 KB
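    In case the attached image does not come through: the underlying idea is just a zero-padded, width-2 format string. The Python equivalent:

```python
# Pad one-digit numbers with a leading zero
for n in (0, 5, 9, 10):
    print("%02d" % n)  # → 00, 05, 09, 10
```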

  • DVT Gauge error display tabular data

    Hi,
    I have a dvt:gauge bound to a backing bean that provides the tabular data:
        public List getGaugeData() {
            ArrayList list = new ArrayList();
            String[] rowLabels  = new String[] {"London", "Paris", "New York"};
            String[] colLabels  = new String[] {"Quota", "Sales", "Margin", "Costs", "Units"};
            double[][] values = new double[][] {
                {60, 90, 135},
                {50, -100, -150},
                {130, 140, 150},
                {70, 80, -130},
                {110, 120, 130}
            };
            // One list entry per (column, row) pair
            for (int c = 0; c < colLabels.length; c++) {
                for (int r = 0; r < rowLabels.length; r++) {
                    list.add(new Object[] {colLabels[c], rowLabels[r], new Double(values[c][r])});
                }
            }
            return list;
        }
    The result is wrong: rather than getting 3 gauges, I am getting a lot more.
    A sample would be much appreciated. How can I solve this problem?
    Regards
    Regards

    Hi,
    Data Specification - A Data Specification can be used to specify which columns or rows of data are used as the metric value, and as the threshold, minimum and maximum values. The following example shows a Data Specification for a set of tabular data with 5 columns and 2 rows. If no Data Specification is used, this table generates 10 gauges, one for each value. With each column assigned its desired role as shown (Sales as the metric value, Quota as the first threshold, etc.), it generates 2 gauges, one for Boston and one for Chicago, each with metric, minimum, maximum and threshold values specified.
    Desired Spec.   Metric   Minimum   Maximum     Threshold 1   Threshold 2
    Name            Sales    Min       Inventory   Quota         Target
    Boston          40       0         100         30            50
    Chicago         60       0         80          35            70
    [Oracle DVT Gauge|http://download.oracle.com/docs/cd/E15523_01/apirefs.1111/e12418/tagdoc/dvt_gauge.html]
    Too bad there is no sample.
    Any hints?

  • Do numerical indicators display extended precision floats correctly?

    I'm using windows XP sp2 on a new computer with a new intel processor, nothing weird. I'm displaying an extended precision floating point number using a numeric indicator that is set to display an extended data type with thirty digits of precision. I expect to see at least 19 or 20 significant digits out of my extended precision float, but the numeric indicator only ever displays 17 significant digits before going to a trail of zeros. Does the display routine that converts the float to a display string use double precision or what?
    global variables make robots angry

    Yes, I understand what you are saying and you are completely correct. The problem I have is not that I expect a mathematically perfect representation of a number, but rather that LabVIEW calculates and produces an 80-bit extended precision number on my computer and then appears to convert it to a 64-bit representation of that number before displaying it!
    If you convert the extended precision value into an unflattened string in order to attempt to access the binary representation of the data, you’ll find that it is represented by 80-bits. This is a 64-bit fraction plus a 15-bit exponent plus one bit for the sign. Delightfully, the flatten to string function appears to scramble the bits into “noncontiguous” pieces, so about all I can tell for certain is that we have, as expected, an 80-bit extended precision number in memory. The documentation for the other number-to-Boolean array and bit manipulation functions I looked at (even the exponent-mantissa function) all claim to only be able to handle a maximum input of a 64-bit number (double precision float max) -correct me if I’m wrong on this one, because I’d really like to be able to see the contiguous binary representation of 80-bit extended floats.
    It turns out though that what you said about not being able to tell whether we have twenty digits of precision without bit fiddling is not true at all. If you look at the program I wrote, you can prove with simple addition and subtraction that beyond the shadow of a doubt the extended numbers are being stored and calculated with twenty digits of precision on my computer yet being displayed with less precision.
    As you can plainly see in the previous example I sent:
    A =          0.1111111111
    B =         0.00000000001111111111
    A+B=C= 0.11111111111111111111
    We know that
    C-A=B
    The actual answer we get is
    C-A=0.00000000001111111110887672
    Instead of the unattainable ideal of
    C-A=0.00000000001111111111
    The first nineteen digits of the calculated answer are exactly correct. The remainder of the actual answer is equal to 88.7672% of the remainder of the perfect answer, so we effectively have 19.887672 digits of accuracy.
    That all sounds well and good until you realize that no individual number displayed on the front panel seems to be displayed with more than 16-17 significant digits of accuracy.
    As you see below, the number displayed for the value of A+B was definitely not as close to being the right answer as the number LabVIEW stores internally in memory.
    A+B=0.11111111111111111111 (the mathematically ideal result)
    A+B=0.111111111111111105     (what LabVIEW displays as its result)
    We know darned well that if the final answer of A+B-A was accurate to twenty digits, then the intermediate step of A+B did not have a huge error in the seventeenth or eighteenth digit! The value being displayed by LabVIEW is not close to the value in the LabVIEW variable, because if it were, the result of the subtract operation would be drastically different!
    0.11111111111111110500       (this is what LabVIEW shows as A+B)  
    0.11111111110000000000       (this is what we entered and what LabVIEW shows for A)
    0.00000000001111110500    (this is the best we can expect for A+B-A)
    0.00000000001111111110887672 this is what LabVIEW manages to calculate.
    The final number LabVIEW calculates magically has extra accuracy conjured back into it somehow! It’s more than 1000 times more accurate than a perfect calculation using the corrupted value of A+B that the display shows us – the three extra digits give us three orders of magnitude better resolution than should be possible unless LabVIEW is displaying a less accurate version of A+B than is actually being used!
    This would be like making a huge mistake at the beginning of a math problem, and then making a huge mistake at the end and having them cancel each other out. Except imagine getting that lucky on every answer on every question. No matter what numbers I plug into my LabVIEW program, the intermediate step of A+B has only about 16-17 digits of accuracy, but miraculously the final step of A+B-A will have 19-20 digits of accuracy. The final box at the bottom of the program shows why.
    If you convert the numbers to double and use doubles to calculate the final answer, you only get 16-17 digits of accuracy. That’s no surprise because 16 digits of accuracy is about as good as you’re gonna do with a 64-bit floating point representation. So it’s no wonder all the extended numbers I display appear to only have the same accuracy as a 64-bit representation because the display routine is using double precision numbers, not extended precision.
    This is not cool at all. The indicator is labeled as being able to accept an extended precision number and it allows the user to crank out a ridiculous number of significant digits. There is no little red dot on the input wire telling me, ‘hey, I’m converting to a less accurate representation here, ok!’ Instead, the icon shows me ‘EXT’ for ‘Hey, I’m set to extended precision!’
    The irony is that the documentation for the addition function indicates that it converts input to double. It obviously can handle extended.
    I’ve included a modified version of the vi for you to tinker with. Enter some different numbers on the front panel and see what I mean.
    Regardless of all this jazz, if someone knows the real scoop on the original question, please end our suffering: Can LabVIEW display extended floating point numbers properly, or is it converting to double precision somewhere before numerals get written to the front panel indicator?
    Message Edited by Root Canal on 06-09-2008 07:16 PM
    global variables make robots angry
    Attachments:
    numerical display maxes out at double precision 21.vi ‏17 KB
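    As a point of comparison for the 17-digit ceiling discussed above: a 64-bit double carries a 53-bit mantissa, which guarantees 15 decimal digits and needs at most 17 to round-trip. Python floats are 64-bit doubles, so this is easy to inspect (this sketch illustrates only the double-precision limit; Python has no native 80-bit extended type):

```python
import sys

print(sys.float_info.mant_dig)  # → 53 bits of mantissa
print(sys.float_info.dig)       # → 15 decimal digits always preserved

x = 0.1111111111 + 0.00000000001111111111
print(repr(x))  # repr shows just enough digits (at most 17) to round-trip
```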

  • Formula Variable decimal digits precision

    Hi All,
    I am using a formula variable with 'cumulated' checked and digits set to 0.00, but the report output displays up to 7 decimal places.
    When I uncheck 'cumulated', it displays properly with two decimal digits.
    Can you please shed some light on how to get 2-decimal-digit precision with 'cumulated' checked?
    Thanks,
    Sri Arun Prian

    Hi Arun,
    As suggested by Akhan, click on the KF, go to the display settings of the KF, and set the decimals to 0.00.
    Rgds
    SVU

  • Cannot display more than 16 significant digit numbers from Oracle

    A WebI report cannot display more than 16 significant digits from an Oracle Number field.
    This occurs for both ODBC and native driver connections to Oracle.
    The data is defined as the Number data type in Oracle and is viewed correctly in the Universe, but displays incorrectly in a WebI report created in InfoView.
    I know BOE XIR2 has this behavior, but it seems XI3.0 and XI3.1 also have this problem.
    Is this behavior in XI3.0 and XI3.1 by design, or is it a bug? And why?
    Thanks!

    Hi Sarah,
    Precision limitation of a Web Intelligence filter constant:
    In an Oracle database, a Number field can store up to 38 digits. When a Web Intelligence report is created in the Java Panel, a condition is specified on this field (represented in the universe as a number), and a constant number is entered for its value, only up to 15 digits of precision are retained.
    For example, if 1234567890123456789 is entered, the value becomes 1234567890123460000.
    Or, if you have a column in the database with a number like 123.123456789012345678, it will display only 123.123456789012.
    Cause
    The reason for this limitation is that Java Panel represents a number internally as a Double field type with only 15 digits precision. This is by design.
    Resolution
    The only workaround is to change the field type to a Character field; however, doing so means losing the ability to perform calculations and/or sorting on the field.
    This issue has been raised as an Enhancement Request and given Track ID # ADAPT00908702.
    Regards,
    Deepti Bajpai
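    The rounding described above is inherent to storing an integer that large in a 64-bit double, the Java Panel's internal representation. A Python sketch, since Python floats are also 64-bit doubles (the exact rounded value may differ from the 15-digit figure WebI displays, as WebI applies its own formatting on top):

```python
n = 1234567890123456789  # 19 significant digits

as_double = float(n)        # what a Double-based front end stores
print(int(as_double))       # low digits have been rounded away
print(int(as_double) == n)  # → False
```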
