Significant Figures

I can easily find several options relating to the number of decimal places shown for a number in Numbers, but I can't find any way of setting a specific number of significant figures instead - something which is often more useful if your values are over a wide range. Is there any way of doing this?

septimus ii wrote:
While I appreciate (and am rather impressed by) the ingenuity of the solutions posted here, it's not quite what I originally meant.
I learnt that there are two ways of rounding numbers - to a specific number of decimal places or a specific number of significant figures. While the former is easy to adjust in the inspector and other places, I can't find any easy ways of displaying the latter. I'm not sure if this is a small gap in the program, or merely an option which I have missed.
It's a "small gap in the program," for which the responders have attempted (successfully, I think) to provide some chinking.
It also looks like an opportunity to Provide Numbers Feedback by making a feature enhancement request for future editions of Numbers. Use the item of that name in the Numbers menu, or the link in the previous sentence.
Regards,
Barry
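
For reference, the workarounds the responders describe boil down to rounding at a position that follows the value's magnitude rather than at a fixed decimal place (in a spreadsheet, typically a ROUND/LOG10 formula). A minimal sketch of the same idea in Java, purely illustrative and not a Numbers feature:

import java.math.BigDecimal;
import java.math.MathContext;
import java.math.RoundingMode;

public class SigFigs {
    // Round a value to n significant figures ("5 and above rounds up").
    static BigDecimal roundToSigFigs(String value, int n) {
        return new BigDecimal(value).round(new MathContext(n, RoundingMode.HALF_UP));
    }

    public static void main(String[] args) {
        System.out.println(roundToSigFigs("0.0255813953", 2).toPlainString()); // 0.026
        System.out.println(roundToSigFigs("19235.6578", 3).toPlainString());   // 19200
        System.out.println(roundToSigFigs("1.95723", 3).toPlainString());      // 1.96
    }
}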

Similar Messages

  • Rounding to a Desired Number of Significant Figures

    I have a set of calculated results that I need to round to two significant figures.  The rounding rules that I need to use are 5 and above rounds up and below 5 rounds down (I'm not taking into consideration whether or not the number before the 5 is odd or even).  For example, I would need to round 0.04255 to 0.043 and 0.0255813953 to 0.026 etc.  What is the best way to do this?

    I see it more like: (Significant figures - Wikipedia, the free encyclopedia)
    with
    nums as
    (select dbms_random.value(1,10) * power(10,dbms_random.value(-3,3)) n
       from dual
    connect by level <= 20
    )
    select n,to_number(to_char(n,'9.9EEEE')) two_sf
      from nums
    N                                             TWO_SF
    --------------------------------------------  ------
    .1077173774258338170235954132298430158        .11
    8.53962771187634184076369984400795578324      8.5
    .022194820889696427358288675717763445656      .022
    4.76262569327700302708295638036845816105      4.8
    3.43343151110840952925833441847618384521      3.4
    .011050298181252026432486073787207501344      .011
    1.05884415230803488802221342198646306532      1.1
    3.08619868068015260196327218960746037759      3.1
    127.338206691206921082742082774157923779      130
    62.7437377016055722022948632262851053876      63
    4467.85370308854835976491726818452213797      4500
    .093808998630023941227511219104505127839      .094
    27.4073431755099062757919616516549440289      27
    167.067765582500874637656618415509558873      170
    55.828010718142781712579586239160087148       56
    672.679541910114142076738291610977895052      670
    .841422486489167075107155761553287521158      .84
    445.568345138976965886465702688656719617      450
    1.93038983554040004349873070460225178164      1.9
    .018499393781581719222937784325059743601      .018
    Regards
    Etbin

  • Ni scope read vi significant figures

    Hello everyone,
    Let me start by giving you some background.
    I'm trying to measure an RMS voltage (around 1.8 V) using the NI-Scope Read Measurement VI, which works fine. My problem is that I'm also trying to calculate the RMS current within the same VI, using a known resistance. The problem is that with the voltage I only get two significant figures from the NI-Scope Read, which gives me too much rounding error when it comes to the current.
    So my question is this: is there another stock VI I could use to increase the number of significant figures I get, or is there a way with this VI to increase the number of significant figures? Any other ideas are welcome, and thanks so much for your help.
    Solved!
    Go to Solution.

    (Found out the conversation continued while I was typing, just back from lunch)
    The "niScope Read Measurement" VI is actually giving you a double-precision answer, which is a lot more than 2 sig. figs (15, I think, could be off a couple).
    Look at your front panel, do not start your program.
    Move the cursor over one of your RMS reading numbers.
    Click your right mouse button.
    Go down the menu to "Display Format" (or "Properties"; it makes no difference, you then just click the Display Format tab).
    Read: "Digits [2] Precision Type [Digits of Precision]".
    Fix.
    Cameron
    One bit of unsolicited advice.
    Since it looks like you may not have tried much LabVIEW programming since you were last here asking basic questions like this one a bit over a year ago, please take some time to go through the online LabVIEW tutorials
    LabVIEW Introduction Course - Three Hours
    LabVIEW Introduction Course - Six Hours

  • Decimal align data with variable significant figures

    iWork Numbers: Is it possible to align number data in a column of cells by the decimal point? My data has varying places before and after the decimal point. (I don't want to display any trailing zeros.)

    As far as I know, there are no tab stops in Numbers tables.
    There is no decimal alignment.
    I wish to add that the decimal tab available in text areas doesn't behave correctly (at least with the decimal comma): it behaves exactly like the tab-to-right one.
    Yvan KOENIG (from FRANCE jeudi 19 juin 2008 22:21:53)

  • Set the number of figures to display

    Hi,
    I have a numeric indicator to display values in the order of 10^6 to 10^9, in SI notation. I have it set to a precision of one digit. So, my values look like:
    1.2M, 10.5M, 325.7M, 1.3G and so on.
    But instead of having a fixed number of digits after the decimal point, it would be more useful to have a fixed number of figures, such as:
    1.18M, 10.5M, 325M, and 1.32G (3 significant figures)
    or
    1.182M, 10.45M 325.1M and 1.320G (4 significant figures)
    How can I do this?
    Thank you.

    Porio,
    Here is a VI that Bill VanArsdale submitted to OpenG ( http://openg.org ). It is called "number to string". This VI converts a "number" to a "string" showing one or more significant "digits" (default: 3). It'll do exactly what you are after.
    Good luck,
    -Jim
    Attachments:
    number_to_string.vi ‏35 KB
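
    For readers outside LabVIEW, the same formatting idea can be sketched in a few lines of Java; the helper below (its name and SI-prefix handling are hypothetical, it is not the OpenG VI) rounds to a requested number of significant figures and appends an SI prefix:

    import java.math.BigDecimal;
    import java.math.MathContext;

    public class SiFormat {
        private static final String[] PREFIXES = {"", "k", "M", "G", "T"};

        // Scale the value into the 1..999 range, then round to the requested significant figures.
        static String toSi(double value, int sigFigs) {
            int group = 0;
            double scaled = value;
            while (Math.abs(scaled) >= 1000.0 && group < PREFIXES.length - 1) {
                scaled /= 1000.0;
                group++;
            }
            BigDecimal rounded = new BigDecimal(scaled, new MathContext(sigFigs));
            return rounded.toPlainString() + PREFIXES[group];
        }

        public static void main(String[] args) {
            System.out.println(toSi(1.182e6, 3)); // 1.18M
            System.out.println(toSi(325.7e6, 3)); // 326M
            System.out.println(toSi(1.32e9, 4));  // 1.320G
        }
    }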

  • Error in the Harmonic Distortion Analyzer VI?

    Hello
    Today I tried to use the harmonics distortion analyzer VI and it seems that there is an error in the VI (or I misunderstood something...).
    The phenomenon I observed can very well be shown in the attached example.
    Description:
    - THD - calc: output for the THD calculated by hand with the amplitudes (in this example: THD = sqrt(2^2+3^2)/4 = 0.901)
    - THD - VI: output calculated by the Harmonic Distortion Analyzer VI
    Both results are identical if the number of analyzed periods is >= 3 (# periods = # samples*fundamental frequency / Fs). If only 1 or 2 periods are analyzed, the Harmonic Distortion Analyzer VI gives a wrong result.
    What is the mistake? Did I do any wrong settings or is it a problem with the VI?
    Thanks in advance for the help.
    Best regards,
    Stefan
    Solved!
    Go to Solution.
    Attachments:
    HarmonicsAnalizer_Example.vi ‏32 KB

    If you open the block diagram of the Harmonic Distortion Analyzer VI, you will see that it uses Fourier transform techniques to calculate THD.  FFTs do not work well with small numbers of cycles and can have large errors when fractional cycles are involved. 
    In the VI you posted increasing the number of samples to 300 produces identical results to 5 significant figures. At 200 samples the error is about 10%.
    When working with FFT based analysis tools, use lots of cycles.
    Lynn
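
    For reference, the "THD - calc" value quoted above is just the root-sum-square of the harmonic amplitudes divided by the fundamental amplitude. A minimal Java sketch of that arithmetic (illustrative only; it says nothing about the VI's FFT internals):

    public class ThdCalc {
        // THD = sqrt(sum of squared harmonic amplitudes) / fundamental amplitude.
        static double thd(double fundamental, double... harmonics) {
            double sumSq = 0.0;
            for (double h : harmonics) {
                sumSq += h * h;
            }
            return Math.sqrt(sumSq) / fundamental;
        }

        public static void main(String[] args) {
            // Example from the post: harmonics 2 and 3, fundamental 4 -> about 0.901
            System.out.println(thd(4.0, 2.0, 3.0));
        }
    }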

  • Double subtract

    Hello,
    Can you explain me what's wrong with the following code and the result I get?
    Thank you for your help
    Dom
    // the code
    import java.math.*;

    public class SubtractMaxDouble {
        static double r = 0D;

        public static final void main(String[] args) {
            System.out.println("Double.MAX_VALUE=" + Double.MAX_VALUE);
            r = Double.MAX_VALUE - 1D;
            // Subtracting 1 is far below half an ulp at this magnitude, so nothing changes.
            if (r == Double.MAX_VALUE) { System.out.println("BAD " + Double.MAX_VALUE + "-1=" + r); } else { System.out.println("OK r=" + r); }
            System.out.println("MaxDouble= " + new BigDecimal(Double.MAX_VALUE).toString());
            System.out.println("Diff = " + new BigDecimal(Double.MAX_VALUE - 1D).toString());
        }
    }
    // the output I get
    Double.MAX_VALUE=1.7976931348623157E308
    BAD 1.7976931348623157E308-1=1.7976931348623157E308
    MaxDouble= 179769313486231570814527423731704356798070567525844996598917476803157260780028538760589558632766878171540458953514382464234321326889464182768467546703537516986049910576551282076245490090389328944075868508455133942304583236903222948165808559332123348274797826204144723168738177180919299881250404026184124858368
    Diff = 179769313486231570814527423731704356798070567525844996598917476803157260780028538760589558632766878171540458953514382464234321326889464182768467546703537516986049910576551282076245490090389328944075868508455133942304583236903222948165808559332123348274797826204144723168738177180919299881250404026184124858368

    Thanks for this answer, Danperkins.
    I'm clearly a newbie with computer mathematics but also with the American language...
    "Sig figs, dude. Sig figs." (Google tells me "sig figs" means significant figures.)
    I'm here to learn a maximum (Double.MAX_VALUE ?). Some "jargon" doesn't mean anything to me (I'm a French reader, I apologize). Can you explain to me the meaning of "dude"? A cool nickname, I hope!
    About your explanation, I agree there is a big difference between the digits used to represent a "real" number (my human reading)
    and the way they are represented inside the computer (using IEEE 754). I perfectly understand that the continuum of real numbers cannot be represented.
    So now the question is "what is the double value X that is enough to obtain 1.7976931348623157E308-X=1.7976931348623156E308?"
    X=1.7976931348623157E308-1.7976931348623156E308
    I obtain X=1.9958403095347198E292
    I'm really sorry, but I have some difficulty "agreeing" with such a huge X. It means that
    all the "real" numbers from "1.7976931348623157E308-X+1" to "Double.MAX_VALUE" are represented by the internal IEEE 754 double 1.7976931348623157E308,
    which also means that for each Y in [Double.MIN_VALUE .. X-Double.MIN_VALUE] the answer to Double.MAX_VALUE-Y will be Double.MAX_VALUE.
    Where am I wrong? And if I'm right, "You know what, Georges?" I'm not happy!
    Dom
    To satisfy your curiosity about adding, I tried the following class:
    import java.math.*;

    public class AddMinDouble {
        static double r = 0D;

        public static final void main(String[] args) {
            System.out.println("Double.MIN_VALUE=" + Double.MIN_VALUE);
            r = Double.MIN_VALUE + 1D;
            // Double.MIN_VALUE is far below half an ulp of 1.0, so the sum collapses to 1.0.
            if (r == Double.MIN_VALUE) { System.out.println("BAD " + Double.MIN_VALUE + "+1=" + r); } else { System.out.println("OK r=" + r); }
            System.out.println("MinDouble= " + new BigDecimal(Double.MIN_VALUE).toString());
            System.out.println("Add = " + new BigDecimal(Double.MIN_VALUE + 1D).toString());
            r = 1.7976931348623157E-300 + 1D; // your value+1
            if (r == 1.7976931348623157E-300) { System.out.println("BAD r=1.7976931348623157E-300"); } else { System.out.println("OK r=" + r); }
        }
    }
    output
    Double.MIN_VALUE=4.9E-324
    OK r=1.0
    MinDouble= 0.0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
    31903114045278458171678489821036887186360569987307230500063874091535649843873124733972731696151400317153853980741262385655911710
    389733598993664809941164205702637090279242767544565229087538682506419718265533447265625
    Add = 1
    OK r=1.0
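
    For what it's worth, the X that Dom computed is exactly the spacing (ulp) of doubles near Double.MAX_VALUE, which Java can report directly. A quick sketch to confirm:

    public class UlpCheck {
        public static void main(String[] args) {
            // Math.ulp gives the gap between Double.MAX_VALUE and the neighbouring double.
            double gap = Math.ulp(Double.MAX_VALUE);
            System.out.println("ulp(MAX_VALUE) = " + gap);                   // 1.9958403095347198E292
            System.out.println(Double.MAX_VALUE - 1D == Double.MAX_VALUE);   // true: 1 vanishes inside the gap
            System.out.println(Double.MAX_VALUE - gap == Double.MAX_VALUE);  // false: a full step down is visible
        }
    }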

  • OSC and foreign currency / euro regulations

    I have the following problem in OSC;
    If transactions are in a currency other than the internal functional currency, TRANSACTION_CURRENCY_CODE and EXCHANGE_RATE are required attributes in the COMM_LINES_API table (11.0 and 11i versions).
    I'm puzzled by the EXCHANGE_RATE.
    It seems
    a) technically superfluous: the exchange rates are already defined in the currency domain.
    b) functionally undesirable: how should an external transaction source know what the functional currency (and thus the right exchange_rate) in OSC is at the moment the transactions are going to be loaded?
    c) in violation of the euro regulations, even though Oracle claims that the applications support all euro requirements from Release 11 and up.
    Some of these rules are (don't blame me, I did not invent them):
    • Conversion rates may be expressed only as one euro in terms of each national currency.
    • Inverse rates derived from the conversion rates cannot be used.
    • Monetary amounts to be converted from one national denomination into another shall first be converted to euros and then converted to the other currency.
    The first two rules are violated when I load a euro transaction into my Dutch guilders database: I have to put an inverse exchange rate on the interface.
    The last rule is violated when I load a D-Mark transaction into my Dutch guilders database: my exchange rate on the interface is the direct conversion rate between DEM and NLG.
    My basic questions:
    - can anyone explain why OSC needs the EXCHANGE-RATE field in a transaction?
    - Are there other OSC-users in euro-country with the same problem(s)
    - Who within Oracle is the author of the following (copied from the Oracle euro-site) "Oracle Release 11 supports Regulation 235 requirements. Triangulation and rounding rules are fully implemented. Oracle is uniquely qualified to support conversions with precision exceeding EC requirements because Oracle Financials native mode use of the Oracle database engine permits calculations with 38 significant figures. . . ." bla bla etc.
    Maybe I can make him or her the owner of this problem. . .
    Regards,
    Remmelt Veenstra

    Exchange_rate is a value that we collect from AR or OM. This is the rate that we use to convert to the functional currency.
    The exception is a MANUAL trx. In this case, we do the lookup to the GL APIs using the currency code to establish the appropriate daily rate.
    I'm not completely up to speed on the euro adjustments that occur in OM, AR, or legacy sources. In general we are dependent on the values that are available to collect. It makes sense that the conversion to euro should occur in the source system and then we would convert to the functional (GL) currency from there. This should comply with the triangulation rules.
    If the source system does not convert to euro, or if the mapping has not been set up to collect the euro value, there might be an issue. You really have to trace the transaction flows and conversions in the system as a whole.
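
    For what it's worth, the triangulation rule described above looks roughly like this in code. A hedged Java sketch using the official fixed conversion rates; the amounts, the 3-decimal intermediate rounding and the rounding mode are illustrative only:

    import java.math.BigDecimal;
    import java.math.RoundingMode;

    public class Triangulation {
        // Official fixed rates: 1 EUR = 2.20371 NLG, 1 EUR = 1.95583 DEM.
        static final BigDecimal NLG_PER_EUR = new BigDecimal("2.20371");
        static final BigDecimal DEM_PER_EUR = new BigDecimal("1.95583");

        public static void main(String[] args) {
            BigDecimal amountNlg = new BigDecimal("1000.00");
            // Divide by the "1 euro = x national units" rate; never multiply by an inverse rate.
            BigDecimal euros = amountNlg.divide(NLG_PER_EUR, 3, RoundingMode.HALF_UP);
            BigDecimal amountDem = euros.multiply(DEM_PER_EUR).setScale(2, RoundingMode.HALF_UP);
            System.out.println(amountNlg + " NLG -> " + euros + " EUR -> " + amountDem + " DEM");
        }
    }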

  • TO_CHAR not Scientific Notation

    I want to convert a number to a string but guarantee that the result will NOT be in scientific notation.
    TO_CHAR(num, '9999999999.99999999999999') is not very convenient since the data type of the number is BINARY_DOUBLE and I will not know how many significant figures the number will have.

    user3975338 wrote:
    So is it safe to say that there is no built-in feature that would ever output the numeric string
    '0.00000000000000000000000000000000000000000000000000000000000000000000000000000009'?
    Scientific notation is only a matter of how the particular interface displays the number and how your NLS settings are configured for converting automatically between numbers and varchar.
    e.g. in SQL*Plus you can change the format of a column easily...
    SQL> ed
    Wrote file afiedt.buf
      1  with t as (select cast(1.0000000000000001E-001 as binary_double) as dbl from dual union all
      2             select 4.0144896E+008 from dual union all
      3             select 3.0976E+005 from dual union all
      4             select 2.78784E+006 from dual union all
      5             select 6.4E+001 from dual union all
      6             select 2.7878400000000001E+003 from dual union all
      7             select 2.4909766860524436E-011 from dual union all
      8             select 1.0000000000000001E-001 from dual union all
      9             select 7.7160493827160492E-005 from dual union all
    10             select 6.9444444444444436E-004 from dual)
    11  --
    12  select dbl, dbl2, length(trim(dbl2)) as ldbl
    13  from (
    14    select dbl, rtrim(rtrim(to_char(dbl,'99999999999999990.999999999999999999999999999'),'0'),'.') as dbl2
    15    from t
    16*   )
    SQL> /
           DBL DBL2                                                 LDBL
      1.0E-001                  0.10000000000000001                   19
    4.014E+008          401448960                                      9
    3.098E+005             309760                                      6
    2.788E+006            2787840                                      7
      6.4E+001                 64                                      2
    2.788E+003               2787.8400000000001                       18
    2.491E-011                  0.000000000024909766860524436         29
      1.0E-001                  0.10000000000000001                   19
    7.716E-005                  0.000077160493827160492               23
    6.944E-004                  0.00069444444444444436                22
    10 rows selected.
    SQL> col dbl format 999999999990.9999999999999999999999999999999
    SQL> /
                                              DBL DBL2                                                 LDBL
                0.1000000000000000100000000000000                  0.10000000000000001                   19
        401448960.0000000000000000000000000000000          401448960                                      9
           309760.0000000000000000000000000000000             309760                                      6
          2787840.0000000000000000000000000000000            2787840                                      7
               64.0000000000000000000000000000000                 64                                      2
             2787.8400000000001000000000000000000               2787.8400000000001                       18
                0.0000000000249097668605244360000                  0.000000000024909766860524436         29
                0.1000000000000000100000000000000                  0.10000000000000001                   19
                0.0000771604938271604920000000000                  0.000077160493827160492               23
                0.0006944444444444443600000000000                  0.00069444444444444436                22
    10 rows selected.
    SQL>... see ... no scientific notation any more.
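
    The same distinction exists outside SQL. In Java, for instance, BigDecimal.valueOf(double).toPlainString() expands a value without scientific notation, while Double.toString keeps it; a small sketch:

    import java.math.BigDecimal;

    public class NoSciNotation {
        public static void main(String[] args) {
            double d = 2.4909766860524436E-11;
            System.out.println(Double.toString(d));                    // 2.4909766860524436E-11
            System.out.println(BigDecimal.valueOf(d).toPlainString()); // 0.000000000024909766860524436
        }
    }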

  • Using Word Easy Table Under Report Generation takes long time to add data points to table and generate report

    Hi All,
    We used the Report Generation Toolkit to generate the report in Word and, with the other APIs under it, we get good reports.
    But when there are more data points (> 100 on all channels) it takes a long time to write all the data, create a table in Word, and generate the report.
    Any suggestions on how to make this happen in a few seconds?
    Please assist.

    Well, I just tried my suggestion.  I simulated a 24-channel data producer (I actually generated 25 numbers -- the first number was the row number, followed by 24 random numbers) and generated 100 of these for a total of 2500 double-precision values.  I then saved this table to Excel and closed the file.  I then opened Word (all using RGT), wrote a single text line "Text with Excel", inserted the previously-created "Excel Object", and saved and closed Word.
    First, it worked (sort of).  The Table in Word started on a new page, and was in a very tiny font (possibly trying to fit 25 columns on a page?  I didn't inspect it very carefully).  This is probably "too much data" to really try to write the whole table, unless you format it for, say, 3 significant figures.
    Now, timing.  I ran this four times, two duplicate sets, one with Excel and Word in "normal" mode, one in "minimized".  To my surprise, this didn't make a lot of difference (minimized was less than 10% faster).  Here are the approximate times:
         Generate the data -- about 1 millisecond.
         Write the Excel Report -- about 1.5 seconds
         Write the Word Report -- about 10.5 seconds
    Seems to me this is way faster than trying to do this directly in Word.
    Bob Schor

  • Hidden Special Characters in Variable

    I am having a weird issue and for the life of me cannot figure out the root cause.  I have a query that is pulling back some data, including a summed money data type from SQL Server called totalAmount.  We are looping over these rows and adding up some totals of the rows such as:
    <cfset myAA = 0 />
    <cfset myAB = 0 />
    <cfloop ... >
    <cfif ... >
    <cfset myAA = myAA + getPayments.totalAmount />
    <cfelse>
    <cfset myAB = myAB+ getPayments.totalAmount />
    </cfif>
    </cfloop>
    At this point we add the 2 together...
    <cfset mySum = myAA + myAB />
    We are expecting this to be 0, but in one instance it is not so, even though it should be.
    Outputting the two variables gives them as -75.03 and 75.03.  When these are added together the result is:
    -1.05160324892E-012
    If I do a trim like:
    <cfset mySum = trim(myAA) + trim(myAB) />
    It returns 0.  So it seems there is an additional character on one or both of those variables, but what is it and where is it coming from?  If I output:
    (#myAA#) + (#myAB#)
    I get (-75.03) + (75.03).  So no whitespace... If I do a len() on each I get 6 and 5.
    If I do a compare such as #myAA# => #(myAA eq "-75.03")#
    I get -75.03 => NO, same with the other.
    So I am dumbfounded, it seems there is a control character there that is hidden and throwing this off.  Anyone have any suggestions on what to check or any ideas what the problem may be?  I would rather fix the problem at the root rather than throwing trim() around variables that should be numeric to begin with.
    -Shawn

    "BKBK's explanation is slightly wide of the mark, in mentioning that floats are all stored with an intrinsically high number of decimal places."
    You misquote me. I didn't say intrinsic. I said arbitrary, twice even.
    I am aware of binary representation and storage of numbers in memory. However, I intentionally kept the technicalities to a minimum, without losing the essence, so that it would make sense to Smholstein.
    Whilst they can have a high number of decimal places, they don't automatically (1 will be stored with no decimal places; 1.5 will be stored with one DP, etc), and the "number of decimal places" is not really the correct way of looking at it.  For one thing, all numbers are stored in binary, not decimal, so there is no such thing as a decimal place.  A Double is 64-bit, which - if I'm reading the spec right - gives around 15 digits of decimal precision, and an exponent (of 10) range of 308 orders.  So this means one could express 123456789012345 or 0.123456789012345E-308.  But one could not represent 1234567890.123456: it's too high a level of precision (16 digits, whereas a Double can only do about 15).
    Secondly, decimal fractions are not very easy to express in binary.  The only way binary has to express a number is in varying powers of 2, eg: 42 is 101010 in binary (32+8+2, or 1*2^5 + 0*2^4 + 1*2^3 + 0*2^2 + 1*2^1 + 0*2^0).  When expressing fractions, it has to do the same thing.  2.5 is 1*2^1 + 0*2^0 + 1*2^-1. Easy.  But what about 2.3?  It's easy in decimal, but actually not possible to represent exactly in binary (try expressing 0.3 only using sums of 2 raised to negative powers...).  So intrinsically, many many floating point numbers are not possible to represent completely accurately in binary; the computer can only make a reasonably good approximation by calculating it to a reasonable number of significant figures.  This means that under the hood, 2.3 might be represented as 2.3000000000000001 or something.  This is innate to floating point representation.
    Brave attempt! However, there are far too many technicalities and intricacies about floating points than we have time or space for in this forum. It is sufficient to say that ColdFusion's underlying machine, Java, uses the IEEE 754 standard for representing floats and doubles. 
    According to this standard, a float is a single-precision, 32-bit floating-point number, whereas a double is a double-precision, 64-bit floating-point number. A float requires 4 bytes of storage in memory; a double, 8 bytes. Floats have a stored mantissa of 23 binary digits, and doubles, 52 (24 and 53 counting the implicit leading bit).
    Those are the limitations that make it impossible for the computer to represent, for example, 2.3 with exact precision. However, that doesn't mean that it is impossible to represent 2.3 in binary! It is in fact possible to do so, but it requires an arbitrarily large number of terms. Here it goes:
    2.3 = 2 + 1/4 + 1/32 + 1/64 + 1/512 + 1/1024 + 1/8192 + 1/16384 + 1/131072 + 1/262144 + ...
    Now, there's a vagary in CF that when one converts a floating point number to a string it internally "fixes" this approximation, so it'll convert 2.3000000000000001 to "2.3" as a string.  So if you output a float, it'll get "fixed", and if you use a float as an argument value for a string function (like trim), it'll also get "fixed".  That's why trimming your value seems to fix the problem.  It's really just a side-effect of the way CF converts floats to strings.
    I think it's a bit more complicated than that. It can actually happen the other way round. That is, that ColdFusion may cast from the float 2.3 to the double 2.3000000000000001. That is what happened to Smholstein.
    On the other hand, there are times when ColdFusion may round off numbers. This can happen, for example, if the number of digits after the decimal point exceeds, say, 12.
    Run the following test. You will find that the 'fixing' you describe fails to work for y. The results are, respectively, 2.3 and 2.30000000001.
    <cfset x = trim(2.300000000001)>
    <cfset y = trim(2.30000000001)>
    <!--- Add 0 to make ColdFusion convert to numbers --->
    trim(2.300000000001) + 0 = <cfoutput>#x+0#</cfoutput><br>
    trim(2.30000000001) + 0 = <cfoutput>#y+0#</cfoutput>
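
    The accumulation effect described above is easy to reproduce outside ColdFusion as well; a minimal Java sketch with an illustrative value:

    public class FloatDrift {
        public static void main(String[] args) {
            double sum = 0.0;
            for (int i = 0; i < 10; i++) {
                sum += 0.1;                // 0.1 is not exactly representable in binary
            }
            System.out.println(sum);       // 0.9999999999999999, not 1.0
            System.out.println(sum - 1.0); // a tiny leftover of about -1.1e-16
        }
    }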

  • More than 4 places after the decimal for the slope of line equation?

    Hello
    I'm doing some graphing for chemistry class and I'm trying to find where in Numbers I can change the preference for the trendline equation (y=mx+b). Currently m is only showing up to 0.xxxx. I need it to show at least 7 or 8 places. Or even better, is there a preference to set it to show x amount of significant figures?
    Much thanks,
    Ryan

    No preference that I am aware of. You can calculate the intercept and slope yourself using INTERCEPT and SLOPE functions, create a one-celled table with a formula that makes a string for y=mx+b, and place that one-cell table on top of the chart.
    If Table 1 cell B2 has the slope and C2 has the intercept, the one-cell new table formula would be
    ="="&Table 1::B2&"x+"&Table 1::C2
    The string will have the same number of decimal places as what it finds in B2 and C2.
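
    For anyone curious what SLOPE and INTERCEPT compute, here is a minimal least-squares sketch in Java (the data values are made up), which yields the full double-precision slope and intercept for display:

    public class LeastSquares {
        public static void main(String[] args) {
            double[] x = {1, 2, 3, 4, 5};           // hypothetical data
            double[] y = {2.1, 4.2, 5.9, 8.1, 9.8};
            int n = x.length;
            double sx = 0, sy = 0, sxx = 0, sxy = 0;
            for (int i = 0; i < n; i++) {
                sx += x[i]; sy += y[i]; sxx += x[i] * x[i]; sxy += x[i] * y[i];
            }
            // Standard least-squares formulas for slope and intercept.
            double slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
            double intercept = (sy - slope * sx) / n;
            System.out.println("y = " + slope + "x + " + intercept);
        }
    }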

  • Rounding After a Certain Number of Characters

    Hi,
    I'm looking for a way to round a number off after a certain number of characters. I'm not looking to round a number off after, say, the third or fourth decimal place, but after the third or fourth digit. For example, if I round off after 3 characters, then:
    19235.6578 goes to 19200
    1.95723 goes to 1.96
    How would I go about doing this? Any help is much appreciated.
    Thanks,
    JOD8FY

    Thanks for your reply. What I'm really trying to do is round off an answer to the correct number of significant figures. According to the rules for multiplying significant figures, the answer will have the same number of sig figs as the number in the operation with the least number of sig figs. To do this, what I thought I would do is find the length() of the least precise number in the operation and then round off the answer after that many characters. So, if the least precise number has four sig figs, the answer will get rounded off after the fourth number. Hope this makes what I want to do a little clearer; sorry for being vague.
    JOD8FY

  • Calculating the true output frequency of a PXI-5402

    I have a PXI-5402 card sat in a PXIe chassis. I am only interested in sine wave output at frequencies up to approx 10 kHz. I know that it is possible to request an output frequency and then query the actual output frequency, but I would rather be able to calculate it beforehand. All I can find in the literature is a figure of 0.355 uHz for the frequency resolution.
    Is there a better description of the frequency resolution? If not, is the resolution exactly 0.355uHz or is this an approximation (to 3 significant figures)?
    Solved!
    Go to Solution.

    This webcast is an excellent way to learn the process by which the NI 5402 and NI 5406 operate to generate their periodic functions: http://www.ni.com/webcast/75/en/
    The 0.355uHz value is a theoretical value of achievable frequency rates based on the Clock Rate and Phase Accumulator size. This is the closest thing I can find on ni.com for you to use to calculate the value: http://zone.ni.com/reference/en-XX/help/370524R-01​/siggenhelp/ni_5401_11_31_frequency_resolution_and​...
    I believe Fc for the NI 5402/5406 should be 100 MHz and the accumulator size is 48 bits. Therefore frequency resolution = Fc / 2^N = (100 × 10^6) / 2^48 = 3.55271368e-7 Hz
    Keep in mind that the device has a VCXO frequency accuracy spec of +/- 25ppm, if you do not PLL lock it to a better source.
    Product Support Engineer
    National Instruments
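
    A quick check of that arithmetic (sketch only):

    public class FreqResolution {
        public static void main(String[] args) {
            // 100 MHz clock divided by a 48-bit phase accumulator.
            double resolution = 100e6 / Math.pow(2, 48);
            System.out.println(resolution + " Hz"); // about 3.5527136788e-7 Hz, i.e. ~0.355 uHz
        }
    }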

  • HT201280 How can I rotate a table in "Numbers" ?

    I have copied a chart to a second sheet, then rotated and expanded it for higher resolution and ease of reading.
    I then copied a small 3 column x 2 row table, which is linked to the main chart table on sheet 1, onto the chart in sheet 2, showing (and updating) significant figures from the chart when figures are added or changed in the main table.  Unfortunately, while I am able to position the small table strategically on the chart I cannot rotate it like I have the chart, to make it read sensibly.
    Can anyone help as to how I may rotate the small table on sheet 2, please ?

    Thanks Jeff.   I had investigated doing just as you said but, being new to Numbers, I could not find the means, so I looked for another method and obviously ran up against another stopper. As is the way, 10 minutes after posting the question I spotted the little icons at the bottom left of the screen and, behold, they work. Thanks again.
    There are a number of things in Numbers that seem not to work for no apparent reason.  In particular, the 'Chart colour' drop-down menu has no effect on anything as far as I can see. I have to use the menu bar.
    Thanks also to Wayne Contello. I did try that but, as you say, all interactivity is lost. I also tried it in Excel but gave up. Too much trouble.
