readyplayernull • 8 days ago

> You see, when formulas create a circular reference, Excel will run that computation up to a number of times. If, in those computations, the magnitude of the difference between the most recent and previous computed values for the cell falls below some pre-defined epsilon value (usually a very small number, like 0.00001), Excel will stop recomputing the cell and pretend like it finished successfully.
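The convergence loop described in the quote can be sketched in a few lines of Python (the function name and defaults are illustrative, not Excel's actual implementation):

```
import math

# Illustrative sketch of "recompute until the change falls below epsilon".
def iterate_until_converged(f, x0, eps=1e-5, max_iter=100):
    x = x0
    for _ in range(max_iter):
        nxt = f(x)
        if abs(nxt - x) < eps:  # change is below epsilon: call it converged
            return nxt
        x = nxt
    return x  # hit the iteration cap; return the last value anyway

# Example: the circular definition x = cos(x) settles near 0.7391
print(iterate_until_converged(math.cos, 1.0))
```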

llimllib • 9 days ago

Here's how you can use sympy (`pip install sympy`) or the decimal module to get better numerical answers out of python:

```
# as shown in the text, fp math yields an inexact result.
# should be 3.310115e-5
$ python -c "from math import *; print(8721*sqrt(3)-10_681*sqrt(2))"
3.31011488015065e-05
# https://docs.sympy.org/latest/modules/evalf.html
$ python -c "from sympy import *; print(N(8721*sqrt(3)-10_681*sqrt(2)))"
3.31011506606020e-5
# the second argument to N is the desired precision
$ python -c "from sympy import *; print(N(8721*sqrt(3)-10_681*sqrt(2), 50))"
0.000033101150660602022280988927734905539317579147087675
# or using the somewhat more awkward decimal module:
$ python -c "from decimal import *; print(Decimal(8721)*Decimal(3).sqrt() - Decimal(10_681)*Decimal(2).sqrt())"
0.00003310115066060202229
```

I'm not at all an expert at this, just wanted to play around with some tools that have helped me with similar problems in the past. I also don't know how accurate the digits following `3310115` are! All I know is that these are ways to make the machine spit out the correct number up to seven digits; I'd love it if somebody would explain more of what I've done here, tbh

Majromax • 9 days ago

What you've done here is tell SymPy to use extra precision for the intermediate (and final) output. This doesn't truly *fix* the problem of cancellation and loss of precision, but for many practical purposes it can postpone the problem long enough to give you a useful result.

Internally, SymPy uses mpmath (https://mpmath.org/) for representation of numbers to arbitrary precision. You could install and use the latter library directly, gaining extra precision without going through symbolic manipulation.
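For instance (a minimal sketch, assuming mpmath is installed via `pip install mpmath`):

```
# assumes mpmath is installed: pip install mpmath
from mpmath import mp, sqrt

mp.dps = 50  # work with 50 significant decimal digits
print(8721*sqrt(3) - 10681*sqrt(2))  # no catastrophic cancellation here
```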

All that being said, it's still good practice to avoid loss of precision at the outset. Arbitrary-precision calculations are slow compared to hardware-native floating point operations. Using the example from mpmath's homepage in IPython:

```
In [1]: import mpmath as mp; import scipy as sp; import numpy as np
In [2]: mp.mp.dps=50 # set extended precision
In [3]: %%timeit
...: mp.quad(lambda x: mp.exp(-x**2), [-mp.inf, mp.inf]) ** 2
40.5 ms ± 4.74 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [4]: mp.quad(lambda x: mp.exp(-x**2), [-mp.inf, mp.inf]) ** 2
Out[4]: mpf('3.1415926535897932384626433832795028841971693993751015')
In [5]: %%timeit
...: sp.integrate.quad(lambda x: np.exp(-x**2), -np.inf, np.inf)[0]**2
570 µs ± 78.9 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [6]: sp.integrate.quad(lambda x: np.exp(-x**2), -np.inf, np.inf)[0]**2
Out[6]: 3.1415926535897927
```

aidenn0 • 8 days ago

SymPy will numerically approximate the value of the expression inside "N" to the specified number of digits, so the first 50 significant digits will be correct[1] when you pass "50" as the second argument to N.

1: I'm not sure how SymPy handles rounding, but it is typical to round the last digit so e.g. printing just 5 digits of 0.123456 would correctly be 0.12346
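Python's string formatting does the same rounding of the last printed digit, for example:

```
# .5f keeps five decimal places and rounds the last one up
print(f"{0.123456:.5f}")  # prints 0.12346, not 0.12345
```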

svilen_dobrev • 9 days ago

further, python has fractions (with exact rational arithmetic over them), so

```
>>> from fractions import Fraction
>>> Fraction("2/3") * Fraction("66/13")
Fraction(44, 13)
```

nickcw • 9 days ago

I tried (2 / 3) * 3 - 2 on my Casio FX-180P [1] (now almost 40 years old!) and on Android Realcalc [2] (which mimics a very similar calculator) and I got 0 on both.

Same with sqrt(6)^2 - 6 = 0 on both. (-1)^5 works fine too.

So full marks for the ancient Casio and its emulator on the initial quality tests!

[1] https://www.calculator.org/calculators/Casio_fx-180P.html

[2] https://play.google.com/store/apps/details?id=uk.co.nickfine...

Note that [1] claims the calculator came out in 1985 but I'm sure I got mine in 1983 or 1984

jeroenhd • 9 days ago

Testing these problems in https://www.cemetech.net/projects/jstified/ seems to indicate that "modern" (as in, late 90s) calculators have these problems well under control.

Android's built-in calculator on both my phone and tablet (take note, Apple) seems to get the answer right as well.

Fun problems for people trying to build their own calculators, but not really a problem if you're actually trying to solve equations using existing ones.

kccqzy • 8 days ago

The Android calculator is indeed super amazing. There's a super interesting paper behind the tech (constructive reals): https://cacm.acm.org/magazines/2017/8/219594-small-data-comp... written by Boehm (the same person behind the eponymous garbage collector).

Pinus • 8 days ago

Sometimes they "control" the problem with trickery that occasionally blows up. For example, there was talk some time ago about how some Casio models sometimes invent a factor pi in something that should obviously be a rational number: https://twitter.com/standupmaths/status/1284117779025727494 . (Ok, admittedly this is a much nastier trick than just using a few extra hidden digits for floating point numbers.)

TheTon • 8 days ago

Apple’s calculator seems to get the first set of problems from the pdf right as well. Did you test with it and get something different?

kccqzy • 8 days ago

Consider this example from the article where it asks you to calculate the limit of ln(1+h)/h as h tends to zero. Set h to be 10^(-17). The Android calculator reliably succeeds and gives the same answer as Mathematica. The iOS calculator reliably fails; at least it knows its limits and refuses to give even an approximate answer.
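The failure mode is easy to reproduce in double precision, where 1 + 10^-17 rounds to exactly 1; `math.log1p` is the standard workaround:

```
import math

h = 1e-17
print(1 + h == 1.0)          # True: h is below double-precision resolution
print(math.log(1 + h) / h)   # 0.0, the catastrophically wrong answer
print(math.log1p(h) / h)     # 1.0, the correct limit
```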

graywh • 9 days ago

and yet today I saw a website calculate 3.3 x 4 to be 13.200000000000003

NikkiA • 8 days ago

Calculators don't use floating point for a reason.

masswerk • 8 days ago

There was recently an interesting thread on the Retro Computing Forum [1] related to this and curious behavior in 6502 flavors of MS BASIC. In brief, the condition in

```
IF SQR(3 * 2) = SQR(3) * SQR(2) THEN…
```

evaluates to false, while

```
IF SQR(3 * 2) - SQR(3) * SQR(2) = 0 THEN…
```

evaluates to true. Similarly,

```
A = SQR(3 * 2) - SQR(3) * SQR(2)
```

assigns zero (0.0) to variable A. So

```
SQR(3 * 2) <> SQR(3) * SQR(2)
```

but

```
SQR(3 * 2) - SQR(3) * SQR(2) = 0
```

Which is kind of a nice paradox. (The same is true for a variety of other value combinations.) We learn that the value of an expression in a comparison is not the same as its result (probably due to result cleanup, especially when the exponent happens to be zero).

[1] https://retrocomputingforum.com/t/investigating-basic-floati...

lmm • 8 days ago

> We learn that the value of an expression in a comparison is not the same as its result (probably due to result cleanup, especially when the exponent happens to be zero).

That happens all the time on traditional x86 - (non-SSE) floating point registers are 80 bits and then (usually) get rounded to 64 bits when you store them in memory, so you will very often have an expression that's nonzero if you use it immediately, but zero if you put it in a variable and use it later.
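You can reproduce the effect of a narrowing store in pure Python by round-tripping a double through a 32-bit slot with `struct` (standing in here for the 80-bit to 64-bit spill):

```
import struct

def store32(x):
    """Round a 64-bit float through a 32-bit memory slot."""
    return struct.unpack('f', struct.pack('f', x))[0]

x = 1.0 + 2**-40         # representable in 64-bit, not in 32-bit
print(x - 1.0)           # ~9.09e-13: nonzero when used at full precision
print(store32(x) - 1.0)  # 0.0: the difference vanished in the store
```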

masswerk • 8 days ago

In this case, there are two mechanisms at work: a conventional rounding bit (for each of the two floating point accumulators available), and a special routine, which zeros out all bytes of the floating point accumulator, when the result is transferred and the exponent is zero. I personally suspect that it's the latter, which is not triggered appropriately and which seems to be the culprit here. Notably, this routine only applies to the first of these accumulators. (It seems, though, that the routine should be called, when a value is copied from one FAC to the other, which we would expect here to happen. So I'm not sure.)

(The precision, BTW, is 4 bytes mantissa including a sign bit, and one byte exponent with a bias of 127. The floating point accumulators extract the sign to an extra byte, but there is no change in precision with regard to the storage format, other than said rounding bit. At least, this is how it is in Commodore BASIC, the flavor I'm most familiar with. Apple ][ BASIC should be pretty much the same, as it's closely related. And, notably, the 6502 processor only provides addition and subtraction for single-byte values, everything else has to be done in software. So there's no extra precision in any processor register, either.)

cratermoon • 8 days ago

My favorite site for FP math fun: https://0.30000000000000004.com/

Also fun, Muller's recurrence: https://scipython.com/blog/mullers-recurrence/

quickthrower2 • 9 days ago

Floating point values are not numbers. They are ranges. They are approximately a field-like entity-ish.

Majromax • 9 days ago

Floating point values are binary scientific notation with a fixed precision. All of the unintuitive behaviour of floats is repeated if you imagine conducting decimal calculations yourself in said scientific notation, with intermediate rounding at every step.

Floats are only ranges in that each real number (within the overall range limit) has a unique floating point representation to which it is closest, up to rounding at the precise interval boundary.

Floats can be *extended* to interval arithmetic, but doing so requires the extra work of keeping track of interval limits. When calculations lose precision (through cancellation of digits in the floating point representation), the interval of uncertainty is larger than the floating point precision.
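As a sketch of that extra bookkeeping, here is a minimal interval add in Python, using `math.nextafter` (3.9+) to widen endpoints outward; this is a conservative stand-in for proper directed rounding:

```
import math

def ia_add(a, b):
    """Add two intervals (lo, hi), widening outward to stay conservative."""
    return (math.nextafter(a[0] + b[0], -math.inf),
            math.nextafter(a[1] + b[1], math.inf))

x = (0.1, 0.1)  # the double nearest 0.1, as a degenerate interval
y = (0.2, 0.2)
lo, hi = ia_add(x, y)
print(lo <= 0.1 + 0.2 <= hi)  # True: the computed sum is bracketed
```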

orlp • 9 days ago

IEEE 754 floating point *are* numbers, they are not ranges. Every operation rounds to a representable number in the set.

repiret • 9 days ago

In what way do they behave like ranges?

I generally prefer to think of them as they are: able to represent a subset of the rationals, plus a couple other weird values, but not a field, and importantly, not obeying any of the usual algebraic laws with arithmetic.

lmm • 8 days ago

Yep. Floating point addition is not even associative, so they're not only not a field, but not a ring, not a group, nor even a monoid.
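A minimal demonstration:

```
a, b, c = 1e16, -1e16, 1.0
print((a + b) + c)  # 1.0
print(a + (b + c))  # 0.0, because -1e16 + 1.0 rounds back to -1e16
```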

voluntaryeuth • 9 days ago

Can you point me to a reference?

Every floating point article (including IEEE 754) I've seen treats normal floating point numbers as dyadic rationals.

BeetleB • 9 days ago

Everyone is correcting you, but no one is pointing out the real culprit. Floating point operations are not the same as math operations (i.e. +, -, *, / behave differently).

ska • 8 days ago

I think what you mean here is that floating point numbers and the usual operators (+,*) and inverses (-,/) do not work the same as Reals (i.e. they don't form a field), but people often behave as if they do.

This is a common confusion.

gilcot • 8 days ago

IEEE 754 is well defined, so $a+b=b+a$. But it can't guarantee that $(a+b)+c=a+(b+c)$ (the well-known error propagation issue).

BeetleB • 8 days ago

IEEE guarantees commutativity, but does not guarantee the *correctness* of a+b.

My point is that floating point numbers are real numbers, but the usual operations on them will not give you the same results as the operations on real numbers.


nayuki • 8 days ago

IEEE does guarantee the correctness of a+b in the sense that it will return the floating-point number that is closest to the real number that would've been computed with infinite precision. There's no randomness or fuzziness in floating-point rounding.
BeetleB • 7 days ago

> IEEE does guarantee the correctness of a+b in the sense that it will return the floating-point number that is closest to the real number that would've been computed with infinite precision.

Until you deal with overflow.

And to be frank, this is just redefining "correctness". If you want to view FPs as real numbers (which they are), there's nothing wrong with pointing out that real number operations will give you incorrect results (even ignoring overflow).

> There's no randomness or fuzziness in floating-point rounding.

I did not imply there was.


gilcot • 8 days ago

BeetleB • 7 days ago

Nothing I said implied they weren't.

tgv • 9 days ago

I don't think you can see them as a range. Sure, you can think of the float representation of 0.1 and 0.2 as a small range around those values, but what does that make 0.1 + 0.2, or 0.1 * 0.2, or 0.1 ** 0.2? Now the resulting float no longer represents the range of potential conversion error.

And at the same time, integer values (up until some decently high numbers) are represented correctly, so they would need a different range.

dang • 8 days ago

Looks like *Calculus: Early Transcendentals* was first published in 1987, so I've put that year above until proven otherwise.

https://www.google.com/search?tbo=p&tbm=bks&q=%22To+have+a+f...

tiahura • 8 days ago

See also, Intel® Decimal Floating-Point Math Library https://www.intel.com/content/www/us/en/developer/articles/t....

nayuki • 8 days ago

IEEE 754 in base-10 still doesn't help you for (1.0 / 3.0) * 3.0.
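A quick check with Python's `decimal` module (base-10 floating point with 28 significant digits by default):

```
from decimal import Decimal

# 1/3 is stored as 28 threes, so multiplying back by 3 gives 28 nines
print(Decimal(1) / Decimal(3) * 3)  # 0.9999999999999999999999999999
```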

pif • 9 days ago

Not reading "https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.h..." considered harmful.

basemi • 9 days ago

On my linux box:

```
$ echo "(2/3)*3-2" | bc -l
-.00000000000000000002
```

teddyh • 9 days ago

In the terminal I always use “dc”, where, besides being RPN, you have to explicitly set the precision. I.e. the result of 2/3 (or “2 3 /”) in dc is always “0” unless you set a precision beforehand. This makes it clear that the result is arbitrary.

irishsultan • 9 days ago

bc and dc are arbitrary precision. By using -l you are specifying that it should keep track of 20 decimal digits (plus you are importing some extra stuff).

You can try higher precision by setting the scale.

```
$ echo "scale = 100; (2/3)*3 - 2" | bc
-.000000000000000000000000000000000000000000000000000000000000000000\
0000000000000000000000000000000002
```

jepler • 9 days ago

a neat calculator is spigot. It has no trouble getting exactly 0 for this calculation:

```
$ spigot '(2/3)*3-2'
0
```

furthermore, when you specify a precision number, this sets the precision of the final result, NOT the precision of intermediaries (which is what bc & dc, and many other arbitrary precision calculators, do); every printed digit is correctly rounded. [edited to add: truncated towards zero by default; other rounding modes available as commandline flag]

```
$ spigot -d72 'sin(1)-cos(1)'
0.301168678939756789251565714187322395890252640180448838002654454610810009
```

There are still conditions where spigot can't give an answer, such as

```
$ spigot 'sin(asin(0.12345))'
```

.. spigot can never determine the next correctly rounded digit after "0.1234" with certainty, for reasons elucidated in the documentation.

spigot is in debian and here's the author's page with links to a lot more background: https://www.chiark.greenend.org.uk/~sgtatham/spigot/

zozbot234 • 8 days ago

> spigot can never determine the next correctly rounded digit after "0.1234" with certainty, for reasons elucidated in the documentation.

Because spigot rounds towards zero in that mode, and it can only bound the result as arbitrarily close to 0.12345 - i.e., it can never decide between 0.12344999..99something and 0.12345000..00something because the "something" part that might break the tie never appears. This is a general issue with exact real computation; in more complicated cases, it can even be undecidable whether some expression is exactly zero.

jeroenhd • 9 days ago

Stuff like this is why I use Python for basic math

On my phone:

```
~ $ python -c "print( (2 / 3) * 3 - 2 )"
0.0
~ $ python --version
Python 3.11.5
```

knodi123 • 9 days ago

why not pipe the input through a few more commands? :-P

` bc -l -e "(2/3) * 3 - 2"`

5e92cb50239222b • 9 days ago

I don't know which system you're on, but GNU bc does not support this flag.

On the other hand, bash and zsh support this:

```
bc -l <<<'(2/3) * 3 - 2'
```

edit: neither does busybox.

knodi123 • 7 days ago

Huh. I assumed I was on standard versions of these kind of really old utilities. But you're on to something:

```
bc -v
bc 4.0.2
Copyright (c) 2018-2021 Gavin D. Howard and contributors
Report bugs at: https://git.yzena.com/gavin/bc
```

gibspaulding • 9 days ago

Like this?

$ echo "(2/3)*3-2" | bc -l > /dev/null && echo "0"

seanhunter • 9 days ago

First prize would be if you could write a pipe which meaningfully combines bc, cc, dc, fc, nc and wc. No other commands allowed.

Obscurity4340 • 9 days ago

[flagged]

Don’t you need to go into the settings to turn on circular references (ie iterative calculation) and then you can choose the limits/tolerance?

My thought reading this quote was: if you squint and turn it sideways, that's a solver function.

Sure enough, slightly higher in the article:

> “We use a circular reference in Excel to do linear regression.”

Excel has a solver function. It’s called “Goal Seek” for one unknown or “Solver” for multiple.

https://support.microsoft.com/en-us/office/use-goal-seek-to-...

https://support.microsoft.com/en-us/office/define-and-solve-...