Against PEP 240

Bengt Richter bokr at accessone.com
Thu May 31 18:40:54 EDT 2001


On Wed, 30 May 2001 08:31:59 -0700, Paul Prescod
<paulp at ActiveState.com> wrote:

>Mikael Olofsson wrote:
>> 
>> On 29-May-2001 Paul Prescod wrote:
>>  >  As far as I know. A float is an inexact (and high performance) rational
>>  >  and a  rational is a complex number without an imaginary part.
>> 
>> No, a real is a complex number without an imaginary part. There are
>> real numbers that are not rational (called irrational). Pi is an
>> example, e is another, and sqrt(2) is a third. Actually, most reals
>> are irrational.
>
>We're talking about Python. I don't think it will accurately represent
>irrational numbers in my lifetime! In context of proposed Python
Philosophical point: see a few lines above ;-)

I.e., "pi", "e", "sqrt(2)" are all exact representations of irrational
numbers. Of course, when you evaluate math.pi or math.e or
math.sqrt(2) you get approximate values, but that is only because it's
programmed to return a conventional decimal floating point
representation. "sqrt(2)" could return "sqrt(2)" and be exact. And
"sqrt(4)" could return "2" and be exact.  And "atan(1)" could return
"pi/4". Etc. Theoretically.
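To make the "theoretically" concrete, here's a minimal sketch of the
idea (the Sqrt class is hypothetical, not any real library API): an
exact square-root object that only collapses to a plain number when
the result is itself exact, so sqrt(4) prints as 2 while sqrt(2)
stays symbolic, and approximation is an explicit, separate step.

```python
import math

class Sqrt:
    """Exact representation of sqrt(n) for a non-negative integer n."""
    def __init__(self, n):
        self.n = n
        root = math.isqrt(n)          # integer square root (Python 3.8+)
        self.exact = root if root * root == n else None

    def __repr__(self):
        # sqrt(4) prints as "2"; sqrt(2) stays symbolic, hence exact.
        if self.exact is not None:
            return str(self.exact)
        return "sqrt(%d)" % self.n

    def approx(self):
        # Only here do we trade exactness for a float approximation.
        return math.sqrt(self.n)
```

So repr(Sqrt(4)) gives "2" and repr(Sqrt(2)) gives "sqrt(2)";
Sqrt(2).approx() is where the conventional inexact float appears.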

My feeling is that ideally a programming language and all the elements
it deals with ought to exist as pure abstractions, and the important
things ought to be considered and decided in that realm. (I like
John Allen's book "Anatomy of LISP".)

Being useful, of course, requires the possibility of a transformation
from the abstract structures to representations in computer and
storage media states and back (which happens in our minds, really:
we transform the typically visible representations on screen or paper
into our mental representations of pure abstractions).

I'd say the primary transformation from the Python abstractions to
conventional representation occurred in the choices made by Python's
BDFL, deciding how to represent his ideas.

Representation of numbers is a special sub-area. Abstractly, one could
argue that there is only one type of number we are representing,
namely complex. The usual discussion about "types" is really not about
types of numbers, but about types of representation -- hardware-wise,
and humanly-perceptible-wise -- and the constraints the various types
of representations impose on the set of exactly representable numbers.

It just occurred to me that if you have a restricted set of decimal
numbers being represented by binary floating point, you could view
all the extra bits in a double as a kind of error-correcting code,
to be used when transforming back to an exact representation.
In that way, an "inexact" representation of 7.35 could be viewed as
being exact, in the sense that it was correctable back to the
original. Something like ECC memory holding a temporary value with
a single bit error, but which gets corrected when retrieved.
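The "error correction" can be sketched in a few lines, assuming we
know how many decimal places the value legitimately has (the recover
helper below is hypothetical, for illustration): the extra bits of
the double pin down which short decimal it came from, and quantizing
to that many places recovers it exactly.

```python
from decimal import Decimal

def recover(x, places):
    """'Correct' a binary float back to the exact decimal it came from,
    given that it legitimately has `places` decimal places."""
    # Decimal(x) exposes the exact binary value; quantize rounds it
    # to the known precision, undoing the representation "error".
    return Decimal(x).quantize(Decimal(1).scaleb(-places))
```

For example, Decimal(7.35) shows the exact binary value
7.3499999999999996..., but recover(7.35, 2) gives back exactly 7.35.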

I bring this up not as a practical solution to anything, but to point
out that conventional uses of, and ideas about, representation states
are just that. Ceci n'est pas une pipe.

Note the following "recovery" of exactness, assuming the result has
two legitimate decimals ;-)

 >>> 0.98
 0.97999999999999998
 >>> "%4.2f" % (.98,)
 '0.98'
 >>>
