Precision Tail-off?

Oscar Benjamin oscar.j.benjamin at gmail.com
Fri Feb 17 22:52:51 EST 2023


On Sat, 18 Feb 2023 at 01:47, Chris Angelico <rosuav at gmail.com> wrote:
>
> On Sat, 18 Feb 2023 at 12:41, Greg Ewing via Python-list
> <python-list at python.org> wrote:
> >
> > On 18/02/23 7:42 am, Richard Damon wrote:
> > > On 2/17/23 5:27 AM, Stephen Tucker wrote:
> > >> None of the digits in RootNZZZ's string should be different from the
> > >> corresponding digits in RootN.
> > >
> > > Only if the storage format was DECIMAL.
> >
> > Note that using decimal wouldn't eliminate this particular problem,
> > since 1/3 isn't exactly representable in decimal either.
> >
> > To avoid it you would need to use an algorithm that computes nth
> > roots directly rather than raising to the power 1/n.
> >
>
> It's somewhat curious that we don't really have that. We have many
> other inverse operations - addition and subtraction (not just "negate
> and add"), multiplication and division, log and exp - but we have
> exponentiation without an arbitrary-root operation. For square roots,
> that's not a problem, since we can precisely express the concept
> "raise to the 0.5th power", but for anything else, we have to raise to
> a fractional power that might be imprecise.

Various libraries can do this. Both SymPy and NumPy have cbrt for cube roots:

  >>> np.cbrt(123456789000000000000000000000000000000000000000000000000000000.)
  4.979338592181745e+20

SymPy can also evaluate any rational power either exactly or to any
desired accuracy. Under the hood SymPy uses mpmath for the
approximate numerical evaluation part of this, and mpmath can also be
used directly, via its cbrt and nthroot functions, to work at any
desired precision.
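
For example, something like this (a small sketch; the printed digits
assume reasonably recent SymPy and mpmath versions):

  >>> from sympy import Integer, cbrt
  >>> cbrt(Integer(2))               # stays exact as a symbolic power
  2**(1/3)
  >>> cbrt(Integer(2)).evalf(30)     # 30 significant digits
  1.25992104989487316476721060728
  >>> from mpmath import mp, nthroot
  >>> mp.dps = 30                    # 30-digit working precision
  >>> print(nthroot(2, 3))           # same value via mpmath directly
  1.25992104989487316476721060728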

> But maybe, in practice, this isn't even a problem?

I'd say it's a small problem. Few people would use such a feature,
but it would have some usefulness for those people if it existed.
Libraries like mpmath and SymPy already provide this and offer a big
step up for anyone who is really concerned about exactness or
accuracy, so there are options for those who care. They are a lot
slower than working with plain old floats, but on the other hand they
offer vastly more than a math.cbrt function could to someone who
needs something more accurate than x**(1/3).
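
(math.cbrt does now exist, by the way: it was added in CPython 3.11
as a thin wrapper around the C library's cbrt.) Roughly how the two
approaches compare, as a sketch; the last digit or two of each result
can differ across platforms and versions, which is rather the point:

  import math

  x = 123456789e54

  # Powering with a float exponent goes through a log/exp-style
  # computation internally, so the result can be off by a few ulps
  # from the true cube root.
  print(x ** (1.0 / 3.0))

  # math.cbrt (3.11+) uses the C library's cbrt, which is typically
  # correctly rounded or within an ulp of it.
  print(math.cbrt(x))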

For those who are working with floats, the compromise is clear:
errors can accumulate in calculations. Taking the OP's example to the
extreme, the largest result that does not overflow is:

  >>> (123456789. * 10**300) ** (1.0 / 3.0)
  4.979338592181679e+102

Only the last 3 digits are incorrect, so the error is still small. It
is not hard, though, to find other calculations where *all* the
digits are wrong:

  >>> math.cos(3)**2 + math.sin(3)**2 - 1
  -1.1102230246251565e-16
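
To see how much of that is float rounding error you can redo the
calculation at higher precision with mpmath and compare (a sketch):

  import math
  from mpmath import mp, mpf, cos, sin

  mp.dps = 50   # 50-digit reference computation

  # Each library call is individually accurate to ~1 ulp; it is the
  # combination that loses all significant digits relative to the
  # exact answer 0.
  print(mpf(math.cos(3)) - cos(3))   # rounding error of the double,
                                     # somewhere below ~1e-16
  print(cos(3)**2 + sin(3)**2 - 1)   # ~1e-50: zero at this precision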

So if you want to use floats then you need to learn to deal with this
as appropriate for your use case. The IEEE standards do their best to
make results reproducible across machines, as well as limiting
avoidable local errors so that the global errors in larger
computations are *less likely* to dominate the result. Their
guarantees are only local though, so as soon as you have a more
complicated calculation you need your own error analysis somehow. The
IEEE guarantees are in that case also useful for those who actually
want to do a formal error analysis.
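
mpmath's interval arithmetic can even mechanise a crude version of
such an analysis, propagating guaranteed bounds instead of point
values (a sketch):

  from mpmath import iv

  iv.dps = 15

  # Interval operations round outward, so the result is a guaranteed
  # enclosure of the exact value. Here it is a tiny interval around 0,
  # and its width bounds the accumulated rounding error.
  print(iv.cos(3)**2 + iv.sin(3)**2 - 1)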

--
Oscar

