Precision Tail-off?

Oscar Benjamin oscar.j.benjamin at gmail.com
Tue Feb 14 06:17:20 EST 2023


On Tue, 14 Feb 2023 at 07:12, Stephen Tucker <stephen_tucker at sil.org> wrote:
>
> Hi,
>
> I have just produced the following log in IDLE (admittedly, in Python
> 2.7.10 and, yes, I know that it has been superseded).
>
> It appears to show a precision tail-off as the supplied float gets bigger.
>
> I have two questions:
> 1. Is there a straightforward explanation for this or is it a bug?
> 2. Is the same behaviour exhibited in Python 3.x?
>
> For your information, the first 20 significant figures of the cube root in
> question are:
>    49793385921817447440
>
> Stephen Tucker.
> ----------------------------------------------
> >>> 123.456789 ** (1.0 / 3.0)
> 4.979338592181744
> >>> 123456789000000000000000000000000000000000. ** (1.0 / 3.0)
> 49793385921817.36

You need to be aware that 1.0/3.0 is a float that is not exactly
equal to 1/3, and likewise the large float literal cannot have as
many accurate digits as the number of zeros shown suggests. You
should therefore compare against what the expression exactly means
for the numbers you really have, rather than against the exact cube
root of the number that you intended. Here I will do this with
SymPy, calculating many more digits than are needed. First, here is
the exact cube root:

In [29]: from sympy import *

In [30]: n = 123456789000000000000000000000000000000000

In [31]: cbrt(n).evalf(50)
Out[31]: 49793385921817.447440261250171604380899353243631762

So that's 50 digits of the exact cube root of the exact number, and
the first 20 match what you showed. However, your calculation uses
floats, so the exact expression that you evaluate is:

In [32]: e = Pow(Rational(float(n)), Rational(1.0/3.0), evaluate=False)

In [33]: print(e)
123456788999999998830049821836693930508288**(6004799503160661/18014398509481984)

Neither the base nor the exponent is really the number that you
intended it to be. The first 50 decimal digits of this value are:

In [34]: e.evalf(50)
Out[34]: 49793385921817.360106660998131166304296436896582873

All of the digits in the result you showed match the first digits
given here. The output of the float calculation is correct given
what the inputs actually are and the available precision of 64-bit
floats (53 bits, or about 16 decimal digits).
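
You can also check the two exact input values without SymPy, using
only the standard library. This is just a quick sketch at a plain
Python prompt; the fractions module is used only to display the
exact rational value of the float exponent:

>>> from fractions import Fraction
>>> int(123456789000000000000000000000000000000000.)
123456788999999998830049821836693930508288
>>> Fraction(1.0/3.0)
Fraction(6004799503160661, 18014398509481984)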

The reason that the results get further from your expectations as
the base gets larger is that the exponent is always slightly less
than 1/3 and the relative effect of that difference is magnified for
larger bases. You can see this in a series expansion of a**x around
x = 1/3. Using SymPy again:

In [37]: a, x = symbols('a, x')

In [38]: print(series(a**x, x, Rational(1, 3), 2))
a**(1/3) + a**(1/3)*(x - 1/3)*log(a) + O((x - 1/3)**2, (x, 1/3))

You can see that the leading relative error term from x not being
quite equal to 1/3 is proportional to the log of the base. You
should therefore expect the difference to grow approximately
linearly as you keep adding zeros to the base, as the sketch below
illustrates.
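
Here is one rough way to see that growth numerically. It is only a
sketch that uses the standard decimal module for a high-precision
reference value; the choice of 40 digits and of these particular
bases is arbitrary:

from decimal import Decimal, getcontext

getcontext().prec = 40

for zeros in (0, 9, 18, 27, 36):
    n = 123456789 * 10**zeros
    # The float calculation, converted exactly to Decimal.
    approx = Decimal(float(n) ** (1.0 / 3.0))
    # Cube root of the intended integer to 40 significant digits.
    exact = Decimal(n) ** (Decimal(1) / 3)
    print(zeros, abs(approx - exact) / exact)

You should see the relative error grow by roughly an order of
magnitude across these bases, in line with the log(a) factor above.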

--
Oscar

