Weird Calculator anomaly

I have a weird Apple Calculator anomaly that I'm hoping someone can clarify. I have seen it in both "Basic" and "Scientific" mode using Reverse Polish Notation (RPN) which can be found under the View menu. Precision is set to the maximum 16 decimal places.
The apparent problem is that calculations are rounded unnecessarily. For example, typing the following results in a value of 3:
.75 enter
2 +
My expectation is that the answer is 2.75 given the entries and that the precision is set to 16. Is there something about Calculator or its RPN implementation that I don't understand?
Any insights would be greatly appreciated.

My expectation is that the answer is 2.75 given the entries and that the precision is set to 16. Is there something about Calculator or its RPN implementation that I don't understand?
You're correct: under RPN the sum value seems to be rounded up (or down if you enter, for example, .3 enter 2 +). I'll make a technical post in the level 4/5 area of the forum and hope it gets some notice.
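
For what it's worth, this does not look like ordinary binary floating-point error: 0.75 (1/2 + 1/4) is exactly representable in binary, so 0.75 + 2 is exactly 2.75 in plain IEEE 754 arithmetic. A quick C check (an illustration, not Calculator's actual code):

#include <stdio.h>

int main(void)
{
    /* 0.75 is exactly representable in binary floating point,
     * so this sum involves no rounding at all. */
    double result = 0.75 + 2.0;
    printf("%.17g\n", result);   /* prints 2.75 */
    return 0;
}

So whatever RPN mode is doing, the rounding apparently happens in Calculator's display/precision handling rather than in the underlying arithmetic.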

Similar Messages

  • Time calculation anomaly

    On 11/04/2007, right after midnight, calculating the time at midnight (00:00:00) since the epoch gave me a value of 1194148800. The same calculation in the morning of 11/04/07 (I suspect it might have started after the DST switch) showed 1194152400 -- 3600 higher. I am totally baffled by this. Why would the elapsed seconds since the epoch at midnight be affected by DST? The following code sample shows what I was doing:
    #include <stdio.h>
    #include <time.h>
    int main(void)
    {
        time_t now = time(0);
        struct tm nyNow;
        if (localtime_r(&now, &nyNow) != 0) {
            nyNow.tm_sec = 0;
            nyNow.tm_min = 0;
            nyNow.tm_hour = 0;
            time_t midnight = mktime(&nyNow);
            /* time_t is not necessarily int, so cast for printing */
            fprintf(stderr, "Midnight: %ld\n", (long)midnight);
        }
        return 0;
    }
    I'd like an explanation. Thanks. -- Rajiv

    Sure it can. You have asked for the epoch conversion of two different times.
    Just as if I asked for midnight Eastern and midnight Central, they'd be off by an hour. You asked for midnight EST and midnight EDT. They are also off by an hour.
    The first time you asked for was 2007-11-04 0:0:0 EDT, the second time was 2007-11-04 0:0:0 EST. Now your particular location would not normally be using Eastern Standard Time for that particular midnight, but it is still a valid identifier, and it occurred one hour later. So the epoch value you converted was 3600 higher.
    Darren
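
    A minimal sketch of one common fix (assuming the goal is the epoch time of the current day's local midnight): set tm_isdst to -1 before calling mktime, so the C library determines for itself whether DST applies to the requested time instead of reusing the flag that localtime_r filled in for the current moment:

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        time_t now = time(0);
        struct tm nyNow;
        if (localtime_r(&now, &nyNow) != 0) {
            nyNow.tm_sec = 0;
            nyNow.tm_min = 0;
            nyNow.tm_hour = 0;
            nyNow.tm_isdst = -1;  /* let mktime decide whether DST applies */
            time_t midnight = mktime(&nyNow);
            fprintf(stderr, "Midnight: %ld\n", (long)midnight);
        }
        return 0;
    }

    With tm_isdst forced to -1, the program prints the same value whether it runs just after midnight or later in the morning.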

  • Simple calculator

    Hello,
    I'm new to programming on the Mac and I've been following a tutorial in the "Cocoa Programming for Mac OS X For Dummies" book. I'm getting a weird calculation error and was wondering if anyone had time to have a look at it for me?
    // Fusebox.h
    // My First Project
    #import <Cocoa/Cocoa.h>
    @interface Fusebox : NSObject {
        IBOutlet id answerField;
        IBOutlet id numberField1;
        IBOutlet id numberField2;
    }
    - (IBAction)calculateAnswer:(id)sender;
    @end
    // Fusebox.m
    // My First Project
    #import "Fusebox.h"
    @implementation Fusebox
    - (IBAction)calculateAnswer:(id)sender
    {
        float num1, num2, answer;
        num1 = [numberField1 floatValue];
        num2 = [numberField2 floatValue];
        answer = num1 + num2;
        [answerField setFloatValue:answer];
    }
    @end
    I've re-read the tutorial and even copied and pasted the code, and I can't find any problems. But when I run the application, the simple addition doesn't return the proper results.
    For example, I just ran the application and tried entering 1 into the first text field and 1.2 into the second. But when I click "Calculate" it returns "2.200000047683716".
    (Screenshot: http://img23.imageshack.us/img23/9361/picture1fc.png)
    Am I doing something wrong?
    Thanks!

    Just to add my two cents.
    Floating point numbers have always had a representation problem on whatever computer hardware is in use, and as noted in many engineering texts, any number is only as accurate as the precision used in the problem.
    Thus, 1.2 is not the same as 1.20 or 1.200, because there is a question of precision. So, when you entered 1 + 1.2, the 1.0 had no fractional component (though a purist engineer would beg to differ). The 1.2, however, does have a fractional component and could represent any number between 1.15000 and 1.24999 under traditional mathematical rounding rules. So, you need to either set the display to round to the nearest precision used, or work in a specified precision.
    Thus, to expect 8 digits of precision, 1 + 1.2 == 2.20000000, you need to enter 1.00000000 + 1.20000000, and unless that exceeds the precision of the computer, it should be accurate to the proper number of digits.
    There was a problem with some early Pentium chips where adding 2.0 + 2.0 did not equal 4.0 (it was said to have been closer to 5.0), but I never checked the actual details since Macs didn't use Pentium processors; somewhere online there may be a reference to the actual calculation problem.
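
    A quick C sketch of where those extra digits come from, assuming only standard IEEE 754 single-precision arithmetic (which floatValue uses): 1.2 has no finite binary representation, so the nearest float is slightly above 1.2, and the error becomes visible when the sum is printed at double precision:

    #include <stdio.h>

    int main(void)
    {
        /* The nearest float to 1.2 is about 1.20000004768,
         * and the difference survives the addition. */
        float sum = 1.0f + 1.2f;
        printf("%.15f\n", (double)sum);   /* 2.200000047683716 */
        return 0;
    }

    Doing the arithmetic in double (doubleValue / setDoubleValue:) or formatting the text field to a fixed number of decimal places would likely make the displayed value match expectations.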

  • Per Diem Calculations for Single Day and Multiple Days

    Hi Gurus
    I have just now run into a weird per diem calculation.
    I have configured a per diem of $50 per day, irrespective of the number of hours.
    When I create a trip for a single day, the system calculates a $50 per diem for one day.
    Whereas when I create a trip for 3 days, the system calculates only two days, i.e. $50 x 2 = $100 and not $150 as expected, meaning the first and second days get calculated but the last day does not.
    Please let me know what little piece of configuration I am missing so that the system calculates multiple days correctly.
    This is an urgent requirement, please..... Need help.
    Thank you
    RRoy

    Hi,
    As I mentioned earlier, there are several options that need to be checked again.
    For example:
    a)  V_T702N_D
    b) Your per diem table
    I think if you have checked 24 hours in V_T702N_D, then please verify the trip duration in PR05.
    If it was working earlier, then the below might help you...
    For Example:
    Start date/time 23-July-2011 Time 00:00
    End date/time 25-July-2011 Time 00:00
    Here the trip spans three calendar days (23, 24, 25), but the meals calculation is done for only two days, because a three-day per diem calculation requires a duration of at least 48:01 hours, and the above is exactly 48 hours.
    Please try in PR05 and check the results.
    Start date/time 23-July-2011 Time 00:00
    End date/time 25-July-2011 Time 00:01
    So please check again and let me know in case of problems.
    Regards,
    Muhammad Umer
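
    A tiny C sketch of one reading of the rule described above (a hypothetical illustration of the arithmetic, not actual SAP logic): the number of per diem days is the trip duration rounded up to whole 24-hour blocks, so exactly 48:00 hours yields 2 days and 48:01 yields 3:

    #include <stdio.h>

    int main(void)
    {
        long durations_min[] = { 2880, 2881 };   /* 48:00 and 48:01 */
        for (int i = 0; i < 2; i++) {
            long d = durations_min[i];
            long days = (d + 1439) / 1440;       /* ceiling division by 24h */
            printf("%ld:%02ld hours -> %ld per diem day(s)\n",
                   d / 60, d % 60, days);
        }
        return 0;
    }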

  • Problems with Time Capsule after upgrade to 10.7.5 & firmware on Time Capsule

    I got a software update to 10.7.5. There was also a firmware update to the Time Capsule. It won't work now; I get the error message (-1). They tell me it's a Western Digital software driver. I never had a Western Digital drive, but I must have migrated the driver from an older computer. I did have an Iomega drive on that computer, but no Western Digital.
    Support has not been able to help me. Any suggestions?

    I've worked on this over the weekend with Apple Support. I'm now on my fourth advisor, who is a tier 3 resource. Over the weekend I've downloaded the patch, erased the Time Capsule, reset it to factory settings, plugged it into ethernet, and renamed the Time Capsule, the AirPort network, and my computer, and I still don't have a functioning Time Capsule after a very 'Microsoft' weekend of waiting for several backups to bomb.
    The only reason I'm on Lion is that I was a MobileMe customer and wanted to keep my email address. I'm now thinking of getting a personal domain email, i.e. [email protected], dumping iCloud, which really doesn't have any value to me and in fact has less of the functionality I need than MobileMe had, and reloading Snow Leopard.
    That, or I can stop trying to fix Time Capsule and use third-party local drives until Apple gets around to fixing Lion.
    Another 'feature' I've discovered is that I can no longer reliably take a Pages document and send a Mail message converting the document to PDF. It now works only some of the time.
    I've been one of 'the faithful' who's had Apple products since 1987 and loved them. I stuck with Apple through the 'Soda Jerk' years and that European dude. I'm hoping that Lion is some weird product anomaly and that Apple will improve its product quality to where it used to be. Also, that it will stop forcing people into hardware upgrades by restricting features of new releases like Mountain Lion to machines less than 3 years old. There really will not be any place else to go unless Google gets into hardware.

  • Aliasing when resizing 1080p to 720p

    I can't wrap my head around this, maybe you can.
    Original source video:
    Prores 422
    1920x1080 square pixels
    29.97
    Destination encoding:
    H.264
    1280x720 square pixels
    29.97
    3000kbps
    When I do this, there is severe aliasing, especially on text. However, if I use the exact same settings but encode to 50% of the original size (960x540), there aren't any problems at all.
    My best guess is that because 1280x720 is 2/3 of 1920x1080, there is some weird calculation going on that causes this problem. To be honest, it doesn't even look like aliasing; it just looks crummy. Diagonal edges are jagged and uneven.
    In fact, the same thing happens if I downsize using ProRes 422 HQ (same codec as original source).
    Any ideas?
    Message was edited by: revrevrev

    Use the resize image command, set the resampling mode to "nearest neighbor".

  • Class based WRED on an ATM vc in a 7600? I'm stuck

    Good afternoon everyone,
    I've been working on creating some new qos policies that we apply to ATM PVCs that are built up on ATM SPAs in a SIP-200 on the 7600 series routing platform.  I'm running into a bit of a brick wall when looking at what some of the engineers before me created and what I need to do to create the new ones.
    The quick and dirty version looks something like this. For ATM PVCs that are built on a SPA in the 7600, they built different policy maps for different speed PVCs. We need to build a new policy for a PVC that is going to exceed what was already built. The only thing in the policy-map that could be tied to the speed specified in the policy-map name is the min/max thresholds used for random detect. I cannot figure out how these values were determined and have been unsuccessful in locating any documentation that might help. I'm not ruling out that this is the wrong way to do it, but without knowing why it was done this way I'm reluctant to toss it aside and build it another way. Can anyone shed any light on why someone would build it this way and maybe offer some alternatives? I've included an example of what I'm talking about.
    policy-map atmspa-qos-9216k
      class Voip
       police cir percent 50 conform-action transmit exceed-action drop violate-action drop
        priority
      class Voip-Signaling
        bandwidth percent 5
      class Network-Control
        bandwidth percent 5
      class Critical-Data
        bandwidth percent 20
        random-detect dscp-based aggregate
        random-detect dscp values 18 minimum-thresh 231 maximum-thresh 356 mark-prob 10
        random-detect dscp values 20 minimum-thresh 89 maximum-thresh 356 mark-prob 10
        random-detect dscp values 22 minimum-thresh 35 maximum-thresh 356 mark-prob 10
      class Bulk-Data
        bandwidth percent 4
        random-detect dscp-based aggregate
        random-detect dscp values 10 minimum-thresh 46 maximum-thresh 71 mark-prob 10
        random-detect dscp values 12 minimum-thresh 18 maximum-thresh 71 mark-prob 10
        random-detect dscp values 14 minimum-thresh 7 maximum-thresh 71 mark-prob 10
      class Scavenger
        bandwidth percent 1
      class class-default
        bandwidth percent 15

    Hi Dave,
    Are you still stuck with this one? I know you tend to look for definitive answers, and I can't provide one on this issue. Anyway, I decided to post at least a couple of thoughts. At first glance the WRED numbers seem completely random, but they are not. Look at this config, for example:
    class Critical-Data
        bandwidth percent 20
        random-detect dscp-based aggregate
        random-detect dscp values 18 minimum-thresh 231 maximum-thresh 356 mark-prob 10
        random-detect dscp values 20 minimum-thresh 89 maximum-thresh 356 mark-prob 10
        random-detect dscp values 22 minimum-thresh 35 maximum-thresh 356 mark-prob 10
    With some reverse engineering I think I can see a pattern. Because 231/89 = 2.6 and 89/35 = 2.54, I am thinking of the reverse procedure: somehow 231 (or 35) is chosen, and then you move down (or up), dividing (or multiplying) by a factor of 2.6 and rounding to the nearest integer.
    Same thing happens below:
      class Bulk-Data
        bandwidth percent 4
        random-detect dscp-based aggregate
        random-detect dscp values 10 minimum-thresh 46 maximum-thresh 71 mark-prob 10
        random-detect dscp values 12 minimum-thresh 18 maximum-thresh 71 mark-prob 10
        random-detect dscp values 14 minimum-thresh 7 maximum-thresh 71 mark-prob 10
    That is: 46/2.6 = 18, 18/2.6 = 7
    Now, if you take a look at both of these configs, you can see that Critical-Data uses percent 20 and Bulk-Data percent 4. That is, Critical-Data is supposed to use 5 times more bandwidth than Bulk-Data. Is it random that 231/46 and 356/71 are both close to 5? I don't think so.
    [Actually, the slogan "Is it random? I don't think so!" comes from a very successful telecom directory service ad campaign in Greece, with a guy who uses weird calculations to arrive at the number 11888 and then delivers that slogan.]
    I am not sure if all this can be "scientifically" explained, or if there is some rule of thumb, or some tuning was done, or a combination. Certainly the 5-times-more situation makes sense. For the rest I suspect some tuning and some approximation, perhaps with some assistance from Cisco (or the device itself, e.g. some defaults), or manual tuning while traffic was being monitored. The documentation says that when you do not specify min-thresh and max-thresh in the "random-detect dscp-based aggregate" command, those parameters will be set based on the interface (VC) bandwidth:
    http://www.cisco.com/en/US/docs/interfaces_modules/shared_port_adapters/configuration/7600series/76cfgatm.html#wp1431753
    Can you check if that makes sense and what those values are?
    To sum up: so far I can't figure out how the initial values (e.g. 231 and 356, or 46 and 71) are chosen. The rest can be calculated. Are those rules of thumb "correct"? I guess traffic can answer that question better than me. And it depends on what you are trying to do. Generally, the configuration doesn't seem "wrong" to me because:
    1) You have better treatment for Critical-Data than Bulk-Data (higher maximum-threshold for critical data)
    2) A low minimum-thresh means drops start earlier, which means worse treatment for the least important dscp values
    3) A high maximum-thresh means full drop is delayed for the important data
    Another thing I was thinking about is some guidelines I had come across in the context of the GSR:
    http://www.cisco.com/en/US/docs/ios/11_2/feature/guide/wred_gs.html#wp6484
    Calculating B and the min/max thresholds doesn't exactly coincide with anything in your config if I use a value of 9216k for the speed (taken from the name of your policy-map), but the results are not far off either.
    B = (9216 * 1000)/8/1500 = 768
    min-thresh = 0.03B = 23
    max-thresh = 0.1B = 77
    The closest match is : random-detect dscp values 12 minimum-thresh 18 maximum-thresh 71 mark-prob 10
    Again, those guidelines for the GSR are starting points and estimations. If your traffic behaves OK, then there is no need to change anything. Actually, the more I look at this issue, the more it seems like some rules of thumb. Configuration would become even more cumbersome (than it already is) if perfect values were the goal. If you have another policy-map for a VC of different bandwidth, we could compare the two to find the relationship between starting values for VCs of different bandwidth (i.e. use one VC bandwidth as a reference point and compute the rest using a factor x).
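
    For reference, the GSR rule-of-thumb arithmetic above can be sketched directly; a small C example, assuming a 1500-byte MTU and the 0.03B / 0.1B factors from the linked guideline (whether the original policy's author used these factors is unknown):

    #include <stdio.h>

    int main(void)
    {
        double speed_kbps = 9216.0;   /* taken from the policy-map name */
        double mtu_bytes  = 1500.0;   /* assumed MTU */
        /* B = link speed expressed in MTU-sized packets per second */
        double B = speed_kbps * 1000.0 / 8.0 / mtu_bytes;

        printf("B          = %.0f packets\n", B);   /* 768 */
        printf("min-thresh = %.0f\n", 0.03 * B);    /* 23 */
        printf("max-thresh = %.0f\n", 0.1 * B);     /* 77 */
        return 0;
    }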
    And of course you could ask Cisco, if you haven't already.
    Kind Regards,
    Maria
    Message was edited: missed a B (0.1 -> 0.1B)

  • Weird "Trend" metric calculation in Tabular KPI

    Hi Experts,
    We have a tabular model in which we started designing a KPI to show the Actual metric value vs. its Target, with a Status indicator showing how well the metric has performed. In the KPI calculation window of the tabular model, there are only placeholders to calculate Value, Status, and Target, but not "Trend". Even though we didn't code anything specific for the "Trend" calculation, under the newly created KPI we see "Trend" along with Value, Goal, and Status. But the "Trend" is behaving weirdly in the Tabular KPI: the trend indicator is shown for every dimension attribute that is sliced with the KPI, irrespective of whether it has a metric value or not. I searched many websites to understand how this "Trend" is calculated in a KPI, but none of them could throw any light on the "Trend" calculation. In this scenario, please suggest a way to circumvent this issue:
    How to hide the "Trend" indicator from the newly created KPI, as I think we cannot define a "Trend" calculation in tabular as in Multidimensional cubes
    Understand the reason why "Trend" is displayed in tabular models
    Below is a snapshot of our KPI when interfaced through Excel.
    Can you guys please help on how to hide the "Trend" expression in tabular models, so that our users won't be confused by an unneeded metric in the KPI?
    Rajesh Nedunuri.

    Hi NedunuriRajesh,
    According to your description, since you haven't specified any expression for the Trend calculation, you want to hide the Trend option. Right?
    In Analysis Services Tabular, the Value, Goal, Status, and Trend in a KPI are based on the Base Value, Target Value, and Status Threshold. Whether or not you specify a Trend Expression, the Trend box is always displayed in the KPI pane, and it will do the calculation automatically. This is by design; there's no way to edit or modify it, so your requirement can't currently be achieved.
    I recommend you submit a feature request at https://connect.microsoft.com/SQLServer so that we can try to modify and expand the product features based on your needs.
    Best Regards,
    Simon Hou
    TechNet Community Support

  • Unusual FIX calculation - Weird Issue - What's Wrong?

    Hi All,
    I'm facing a weird issue today; not sure if I'm missing something:
    substitution variables defined:
    &RangeYears= "FY13":"FY16"
    &F1stYr= "FY13"
    Sample script:
    FIX("E1","P1", &RangeYears)
    <sample scripts here>
    FIX(&F1stYr)
    <calculate account A1 here>
    ENDFIX
    ENDFIX
    However, when I refresh, I see account "A1" calculated for all years "FY13" to "FY16", even though the inner FIX statement is for "FY13" only.
    Can someone explain this abnormality?

    Let's try something similar on Sample Basic
    FIX(@RELATIVE("Qtr1",0),"Sales","New York")
    "100-10"=100;
      FIX(Apr)
      Actual
      "100-10"=200;
      ENDFIX
    ENDFIX
    For the first part of the FIX:
    Calculating [ Product(100-10)] with fixed members [Year(Jan, Feb, Mar); Measures(Sales); Market(New York)]
    For the second part, since Apr is not part of the outer FIX, Essbase will include Apr together with Jan, Feb, Mar (the first FIX acts as a universal FIX):
    Calculating [ Scenario(Actual)] with fixed members [Year(Jan, Feb, Mar, Apr); Measures(Sales); Market(New York)]
    That is what is happening in your case.
    Regards
    Celvin
    http://www.orahyplabs.com

  • Weird Issue: External Hard Drive disc space calculation error

    Okay, I just started having this weird error in calculating disc space. I added everything up and it reached 200GB out of 465GB, but the bottom reads as no space left at all!
    And when I go into the widget it will read 0, but the bar will show space left.
    Any help getting this hard drive to read correctly would be appreciated.

    Check this user tip I wrote:
    http://discussions.apple.com/thread.jspa?threadID=122973&tstart=20
    Also, what file system is the hard drive formatted in, according to Get Info's Format field? Get Info can be reached by selecting the drive icon on the desktop and going to File menu -> Get Info, or by using Command-I (the letter pronounced "eye"; Command is the Apple logo key on your keyboard).
    Also, is Boot Camp installed?

  • [ASO Calculation] Weird behavior of calculation in ASO

    Hi,
    I have a strange scenario in one of our ASO cubes (in 11.1.2.3) where a calculation script (shown below) written and run with the POV set for all time periods produces expected values for only one period; for the rest, no results are seen (we see only #MISSING). We have similar logic in another calculation and it seems to have worked fine. I am puzzled as to why the calculation (with similar logic) works for all periods in one script and doesn't in the other. What is preventing results from being seen across all periods? We have tried
    (a) slice merge - NO CHANGE, (b) clear partial data > re-run calc > slice merge - NO CHANGE. We have run out of options and have no clue what is triggering this strange behavior in ASO. Is this a bug?
    =========================================================================================================================
    Here, even though we have the POV set for all periods, the result is only seen for Oct-15 and not for other periods. When we do the manual calculation in Excel, we see a result, but from the cube we get only #MISSING.
    CALCULATION RESULTING IN PARTIAL RESULT:
         [L0_S/S COS per Day] := (([AC_ZZ5030],[DP_Z3100]) + ([AC_ZZ5020],[DP_Z3100]) - ([AC_500510],[DP_Z3100]) - ([AC_500520],[DP_Z3100])) / ([AC_991023],[DP_Z3100]);
    MAXL RESULTING IN PARTIAL RESULT:
    execute calculation on database "WAG_FBI"."WAG_FBI" with
    local script_file "/u01/app/test/fmw/instances/instance1/Essbase/essbaseserver1/app/WAG_FBI/WAG_FBI/SS_DOS.csc"
    POV "Crossjoin({[Loaded]},
      Crossjoin({[Actual]},
      Crossjoin ([Time].Levels(0).Members,
      Crossjoin({[DP_15125]},
      Crossjoin([LOB].Levels(0).Members,
      Crossjoin([LE].Levels(0).Members,
      [Location].Levels(0).Members))))))"
    SourceRegion "CrossJoin({[AC_ZZ5030],[AC_ZZ5020],[AC_500510],[AC_500520],[AC_991023]},{[DP_Z3100]})" ;
    =========================================================================================================================
    Here we have the same POV and similar calculation logic (the only changes being the denominator and the account where the result is stored), and the result is seen across all periods.
    CALCULATION WHERE COMPLETE RESULT IS SEEN:
    [L0_Rx COS per Day] := (([AC_ZZ5030],[DP_10010]) + ([AC_ZZ5020],[DP_10010]) - ([AC_500510],[DP_10010]) - ([AC_500520],[DP_10010])) / ([AC_991012],[DP_10010]);
    MAXL WHERE COMPLETE RESULT IS SEEN:
    execute calculation on database "WAG_FBI"."WAG_FBI" with
    local script_file "/u01/app/test/fmw/instances/instance1/Essbase/essbaseserver1/app/WAG_FBI/WAG_FBI/RX_DOS.csc"
    POV "Crossjoin({[Loaded]},
      Crossjoin({[Actual]},
      Crossjoin ([Time].Levels(0).Members,
      Crossjoin({[DP_10010]},
      Crossjoin([LOB].Levels(0).Members,
      Crossjoin([LE].Levels(0).Members,
      [Location].Levels(0).Members))))))"
    SourceRegion "CrossJoin({[AC_ZZ5030],[AC_ZZ5020],[AC_500510],[AC_500520],[AC_991012]},{[DP_10010]})" ;
    =========================================================================================================================
    Regards,
    Sathish

    I have no idea what the cause of your problem is, but a suggestion for a test...
    Create a member (e.g. 'Calc L0_S/S COS per Day') in the same dimension as [L0_S/S COS per Day] (assuming this is a dynamic dimension), assign the formula to it (instead of placing it in the script), then make your script copy from that member to the target.
    I felt sure I had seen a defect that related to partial ASO allocation results but I can't find it now, so perhaps I was confused.

  • Calculations - weird results with add blend

    When using the Calculations command to blend two channels with the ADD blend mode, I am getting strange results when using a negative offset with a 16-bit image. The preview behaves as expected, with the image darkening, but when I press OK the light areas become dark and vice versa. Looking at Levels, all pixels are sent to either 0 or 255.
    8-bit images perform as I would expect.
    Has anyone else noticed this behaviour?

    I am not sure why you have used the addNode method on the same parentNode two times (line by line), because you have already added the node in the first line and now you have to save it. Maybe this is causing the issue.
    parentNode = parentNode.addNode("orignal", "nt:file"); 
    parentNode.addNode("orignal", "nt:file");
    Let me know if it does not help you.

  • Calculation - weird behaviour on 'Total By'

    Hi,
    I have a repository fact folder that has all SUM aggregated folders.
    In the same folder I have some calculation folders that work out projections based on Actual Year To Date + Current Period on a weighted-average basis.
    This all works perfectly in Answers on a row-by-row basis, but when I attempt to use the 'Total By' functionality to create sub-totals at each change in cost centre parent code, the non-calculation columns work fine, but the calculation columns show spurious sub-totals, even on single-row aggregation (sic).
    My calculation's aggregation is greyed out; I presumed it was based on the underlying columns, and in the calculation formula there are CASE statements, but no aggregation.
    Any ideas (not involving pivot tables if possible) on what I need to do to resolve this behaviour?
    I am on OBIEE 10.1.3.4
    thanks for your input,
    Robert.

    Hi,
    yes - as I said, I realised it was inheriting its parent columns' aggregation behaviour....
    And this is my code:
    CASE WHEN "Budget Report"."Gl Periods"."Period Num" >= 12 THEN "Budget Report"."Budget Report Facts"."Actual Year To Date" ELSE "Budget Report"."Budget Report Facts"."Actual Year To Date" + (12 - "Budget Report"."Gl Periods"."Period Num") * "Budget Report"."Budget Report Facts"."Actual Year To Date" / "Budget Report"."Gl Periods"."Period Num" * 0.4 + "Budget Report"."Budget Report Facts"."Expenditure In Month" * (12 - "Budget Report"."Gl Periods"."Period Num") * 0.6 END
    As I said, all of my non-calculation columns have the same SUM aggregation; is it the SUMs below that are causing the issue?
    n.b. The repository generates the following from my code (data type: derives from physical sources):
    case when GL PERIODS FOR BUDGET REPORT.PERIOD_NUM >= 12 then sum(GL_BUDGET_REPORT.ACTUAL_YEAR_TO_DATE) else (12 - GL PERIODS FOR BUDGET REPORT.PERIOD_NUM) * sum(GL_BUDGET_REPORT.ACTUAL_YEAR_TO_DATE) / nullif( GL PERIODS FOR BUDGET REPORT.PERIOD_NUM , 0) * 0.4 + sum(GL_BUDGET_REPORT.ACTUAL_YEAR_TO_DATE) + (12 - GL PERIODS FOR BUDGET REPORT.PERIOD_NUM) * sum(GL_BUDGET_REPORT.EXPENDITURE_IN_MONTH) * 0.6 end
    thanks,
    Robert.
    Edited by: Robert Angel on 18-Aug-2011 08:21 - who later discovered the difference between what the tool had generated and what he had typed in....

  • "Weird" result in MOSFET Capacitance calculation.

    Hi everyone,
    I have a problem with the simulation results for calculating N-CH MOSFET capacitance (Cgs, Cgd, and Cdb).
    In this simulation I tried to verify my manual calculation against the SPICE result using Multisim 11.0. But the result of the Multisim simulation is quite different from both the manual calculation and the PSpice/HSpice simulation.
    (The original post attached screenshots of the manual calculation, the Multisim simulation, the parameter settings in Multisim, and the .DC sweep results from Multisim and PSPICE.)
    From those results, it seems that PSpice gave a value close to the theoretical MOSFET calculation, and I got messy results in Multisim.
    Can somebody help me solve this problem? Is Multisim not as powerful as other SPICE software, or did I just mess up the parameter settings in Multisim and end up with the wrong answer?

    You need to post this on the Multisim forum.
    Alan
