[Python-ideas] Python Numbers as Human Concept Decimal System

Andrew Barnert abarnert at yahoo.com
Fri Mar 7 03:53:35 CET 2014


From: Mark H. Harris <harrismh777 at gmail.com>
Sent: Thursday, March 6, 2014 6:19 PM


[snipping out of order]

> >>> from decimal import Decimal
> >>> a=Decimal(1)
> >>> b=Decimal(.1)
> >>> a+b
> Decimal('1.100000000000000055511151231')      <==== does this not bother you at all ?
>
>    ... even though we both know why it's doing this, doesn't it bother you a little?


That's not a problem with Python's number system; it's that Decimal(.1) is not the right way to write what you want.

The only solution without changing Python is to train end-users to write something correct, like Decimal('.1').
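
To spell out the difference in a quick session (everything here is stock decimal module behavior, nothing hypothetical): Decimal(.1) converts an already-rounded binary float, exactly, noise and all, while Decimal('.1') converts the exact string:

    >>> from decimal import Decimal
    >>> Decimal(.1)      # the binary float is converted exactly, rounding noise included
    Decimal('0.1000000000000000055511151231257827021181583404541015625')
    >>> Decimal('.1')    # the string is exact
    Decimal('0.1')
    >>> Decimal(1) + Decimal('.1')
    Decimal('1.1')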

The obvious solution for changing Python is to make it easier to create Decimal numbers correctly and/or harder to create them incorrectly. For example, a decimal suffix, as already proposed before this thread, would completely solve the problem:

    >>> a = 1d
    >>> b = .1d
    >>> a+b
    1.1d

Of course the exact suffix (or other syntax) is up for bikeshedding, as is the possibility of one day changing the default from binary floats to decimal floats, but other than those trivial details, tada. But there's no need for anything more radical, like some amorphous idea to "unify Python numbers".
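
For comparison, the closest spelling you can get today without any new syntax is a short alias for the constructor, which is only a little noisier than a suffix and has exactly the semantics the suffix would need:

    >>> from decimal import Decimal as D
    >>> a = D('1')
    >>> b = D('.1')
    >>> a + b
    Decimal('1.1')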

>On Thursday, March 6, 2014 8:09:02 PM UTC-6, Guido van Rossum wrote:
>
>>Mark, it feels like you do not understand Python well enough to be able to make sweeping proposals about its reform. 
>
>hi Guido,  ouch.


You proposed that Python should handle numbers in an OO way, with numbers being real objects, instances of classes, with a hierarchy including abstract base classes; all of this is already there in Python.
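
That hierarchy is the numbers module from PEP 3141, and you can poke at it directly:

    >>> import numbers
    >>> isinstance(2, numbers.Integral)      # int sits at the bottom of the tower
    True
    >>> isinstance(2.0, numbers.Real)
    True
    >>> from decimal import Decimal
    >>> isinstance(Decimal('2'), numbers.Number)   # Decimal is registered as a Number
    True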

You went off on a long digression about how you could implement this using the details of C++-style inheritance, when Python has a completely different (and more powerful) solution to inheritance that has already been used to solve this problem.
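
Concretely, I mean ABC registration: any class can be declared a virtual subclass of an abstract base class without inheriting from it at all, which C++-style inheritance has no way to express. A minimal sketch, with a made-up Quaternion class standing in for a third-party type:

    >>> import numbers
    >>> class Quaternion:                     # hypothetical third-party type
    ...     pass
    ...
    >>> numbers.Number.register(Quaternion)   # no inheritance required
    <class '__main__.Quaternion'>
    >>> isinstance(Quaternion(), numbers.Number)
    True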

You proposed some complicated AI-based solution to solve the problem of using separate number classes in a single expression, even though Python (almost exactly like C++, in this case) has already solved that problem with operator overloading.
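
The stdlib's own Fraction is a good demonstration: it mixes with int purely through __add__/__radd__ and the NotImplemented protocol, no AI required:

    >>> from fractions import Fraction
    >>> Fraction(1, 3) + 1     # Fraction.__add__ handles the int directly
    Fraction(4, 3)
    >>> 1 + Fraction(1, 3)     # int.__add__ returns NotImplemented, so Fraction.__radd__ takes over
    Fraction(4, 3)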

(And note that Python is flexible enough that third-party libraries can easily insert new types like quaternions, matrices, symbolic expressions, etc. into the hierarchy in a way that's transparent to end users. I can multiply a NumPy matrix of float64 values by the builtin int 2 just by writing "m * 2", and it works exactly the way you'd want it to. It's hard to imagine that would even be feasible with an AI-based solution, but with the current design, that's the easiest part of NumPy.)
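
If you want to see it for yourself, here's roughly what that looks like (a plain ndarray rather than np.matrix, with .tolist() just to keep the printed output stable across NumPy versions):

    >>> import numpy as np
    >>> m = np.array([[1.0, 2.0], [3.0, 4.0]])   # a 2x2 float64 array
    >>> (m * 2).tolist()       # ndarray.__mul__ broadcasts the builtin int over every element
    [[2.0, 4.0], [6.0, 8.0]]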

There are some ideas in your posts that are worth responding to, but I think it's perfectly fair for Guido to decide it's not worth digging through the mass of ignorance about Python's basic design to find the nuggets that can be rejected for more specific reasons instead of just dismissed.


