Strange behaviour with numbers in exponential notation

Christian Gollwitzer auriocus at gmx.de
Fri Sep 2 15:12:18 EDT 2016


On 02.09.16 at 19:24, Marco Sulla wrote:
> Excuse me, I forgot to include the Python list mail address. I am reposting the mail.
>
>
>
>  On Fri, Sep 2, 2016 at 6:18 PM, Christian Gollwitzer <auriocus at gmx.de> wrote:
>> 1e26 denotes a *floating point number*. Floating point has finite precision;
>> in CPython it is a 64-bit IEEE number. The largest exact integer there is
>> 2**53 (~10^16); everything beyond cannot be accurately represented.
>
> I see. So the Python float type is the IEEE 754 double. Why is it not
> automatically converted to long double?

Because Python has no long double type? And "long double" (assuming 
80-bit or 128-bit) does not solve the problem; it just shifts the bound 
to some higher arbitrary limit. In fact, it is impossible to make a 
bigfloat class which acts like a float but always carries enough bits. 
A simple counterexample is 0.3, a periodic fraction in binary. Or, if 
you include arbitrary-precision fractions, then e.g. sqrt(2).
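
For illustration, a quick interpreter sketch (assuming CPython's 64-bit 
IEEE double) of both the 2**53 boundary and the 0.3 counterexample:

    from fractions import Fraction

    # Beyond 2**53 a 64-bit double can no longer distinguish
    # consecutive integers: the +1 is lost to rounding.
    print(float(2**53) == float(2**53 + 1))   # True

    # 1e26 is therefore only the nearest double to 10**26,
    # not the exact integer.
    print(int(1e26) == 10**26)                # False

    # 0.3 is a periodic fraction in binary, so the stored double
    # is not the mathematical 3/10.
    print(Fraction(0.3) == Fraction(3, 10))   # False
    print(Fraction(0.3))  # 5404319552844595/18014398509481984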

> And I want to add to my original question: indeed, I read in the docs:
> https://docs.python.org/3/reference/lexical_analysis.html#floating-point-literals
> that float has an 'exponentfloat' syntax. Why do integers not have
> an equivalent syntax?

Tradition? All languages I know of treat a number with an exponent as 
floating point.
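
If the goal is an exact integer of that magnitude, the usual Python 
spelling is with ** instead of an exponent; a minimal sketch:

    # There is no exponent form for integer literals, so a large round
    # number is written out or built with **:
    big = 10**26
    print(big)               # exact int: 1 followed by 26 zeros

    # Going through the float literal 1e26 loses that exactness,
    # because the value is rounded to a 64-bit double first:
    print(int(1e26) == big)  # False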

	Christian


