Precision Tail-off?

Peter Pearson pkpearson at nowhere.invalid
Fri Feb 17 10:57:29 EST 2023


On Fri, 17 Feb 2023 10:27:08, Stephen Tucker wrote: [Head-posting undone.]
> On Thu, Feb 16, 2023 at 6:49 PM Peter Pearson <pkpearson at nowhere.invalid>
> wrote:
>> On Tue, 14 Feb 2023 11:17:20 +0000, Oscar Benjamin wrote:
>> > On Tue, 14 Feb 2023 at 07:12, Stephen Tucker <stephen_tucker at sil.org>
>> > wrote:
>> [snip]
>> >> I have just produced the following log in IDLE (admittedly, in Python
>> >> 2.7.10 and, yes I know that it has been superseded).
>> >>
>> >> It appears to show a precision tail-off as the supplied float gets
>> >> bigger.
>> [snip]
>> >>
>> >> For your information, the first 20 significant figures of the cube root
>> >> in question are:
>> >>    49793385921817447440
>> >>
>> >> Stephen Tucker.
>> >> ----------------------------------------------
>> >> >>> 123.456789 ** (1.0 / 3.0)
>> >> 4.979338592181744
>> >> >>> 123456789000000000000000000000000000000000. ** (1.0 / 3.0)
>> >> 49793385921817.36
>> >
>> > You need to be aware that 1.0/3.0 is a float that is not exactly equal
>> > to 1/3 ...
>> [snip]
>> > SymPy again:
>> >
>> > In [37]: a, x = symbols('a, x')
>> >
>> > In [38]: print(series(a**x, x, Rational(1, 3), 2))
>> > a**(1/3) + a**(1/3)*(x - 1/3)*log(a) + O((x - 1/3)**2, (x, 1/3))
>> >
>> > You can see that the leading relative error term from x being not
>> > quite equal to 1/3 is proportional to the log of the base. You should
>> > expect this difference to grow approximately linearly as you keep
>> > adding more zeros in the base.
>>
>> Marvelous.  Thank you.
[snip]
> Now consider appending three zeroes to the right-hand end of N (let's call
> it NZZZ) and NZZZ's infinitely-precise cube root (RootNZZZ).
>
> The *only* difference between RootN and RootNZZZ is that the decimal point
> in RootNZZZ is one place further to the right than the decimal point in
> RootN.
>
> None of the digits in RootNZZZ's string should be different from the
> corresponding digits in RootN.
>
> I rest my case.
[snip]


I believe the pivotal point of Oscar Benjamin's explanation is
that within the constraints of limited-precision binary floating-point
numbers, the exponent 1/3 cannot be represented precisely, and
is in practice represented by something slightly smaller than 1/3;
and accordingly, when you multiply your argument by 1000, its
not-quite-cube-root gets multiplied by something slightly smaller
than 10, which is why the number of figures matching the "right"
answer gets steadily smaller.
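
A quick way to see both halves of that, as a rough sketch (the last
digit or two may vary with your platform's pow):

    from fractions import Fraction

    third = 1.0 / 3.0
    print(Fraction(third))                   # exact value of the float
    print(Fraction(1, 3) - Fraction(third))  # positive: the float falls short of 1/3
    print(1000 ** third)                     # typically prints something just below 10.0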

Put slightly differently, the crux of the problem lies not in the
complicated process of exponentiation, but simply in the failure
to represent 1/3 exactly.  The fact that the exponent is slightly
less than 1/3 means that you would observe the steady loss of
agreement that you report, even if the exponentiation process
were perfect.
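
To make that concrete, here is a rough sketch in which the decimal
module stands in for "perfect" exponentiation at 50 digits (the exact
trailing digits may differ on your system):

    from decimal import Decimal, getcontext
    from fractions import Fraction

    getcontext().prec = 50
    third = Fraction(1.0 / 3.0)   # exact rational value of the float exponent
    exponent = Decimal(third.numerator) / Decimal(third.denominator)

    for zeros in (0, 9, 18, 27, 36):
        n = 123456789 * 10 ** zeros
        true_root = Decimal(n) ** (Decimal(1) / Decimal(3))  # essentially exact
        drifted = Decimal(n) ** exponent   # same power, float's value as exponent
        print(zeros, (drifted - true_root) / true_root)

The relative error grows roughly in proportion to log(n), i.e. roughly
linearly in the number of appended zeros, just as the series term in
Oscar's message predicts, even though no float exponentiation is
involved at all.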

-- 
To email me, substitute nowhere->runbox, invalid->com.
