[Python-ideas] Python Numbers as Human Concept Decimal System

Ron Adam ron3200 at gmail.com
Sat Mar 8 18:35:34 CET 2014



On 03/08/2014 01:08 AM, Steven D'Aprano wrote:
> On Fri, Mar 07, 2014 at 06:02:02PM -0800, Guido van Rossum wrote:
>> >On Fri, Mar 7, 2014 at 5:05 PM, Steven D'Aprano<steve at pearwood.info>  wrote:


>> >The issue (as I see it) is that there are many different decimals d that
>> >all convert to the same float f (because of rounding). The d that is
>> >constructed by taking the exact value of f is gross overkill.

> Decimal -> float is many-to-one: more than one Decimal will round to a
> single float. But float -> Decimal is always one-to-one, I think,
> regardless of whether you use the current exact conversion or repr
> first. The crux of the matter is whether or not it is overkill for
> Decimal to use the exact value.

> Correct me if I'm wrong, but I think we agree that there ought to be a
> way to convert floats exactly to Decimal, we just disagree on whether
> that ought to be spelled Decimal(x) or Decimal.from_float(x).


Decimal(x), just like you use float(x) when x is an int.


I think from_float on the Decimal type was an attempt to solve a 
problem that should have been solved by more explicit docs on the Decimal 
class.  A constructor should call the other type's conversion method when 
the argument is the higher-precision type:

       float(Decimal('2.1'))    # float calls Decimal's __float__ method.

       int(Decimal('2.1'))      # int calls Decimal's __int__ method.
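This is already how current Python behaves; a quick sketch using the stdlib decimal module:

```python
from decimal import Decimal

d = Decimal('2.1')
# float() asks the Decimal for its own conversion via __float__.
assert float(d) == 2.1
# int() likewise uses __int__, which truncates toward zero.
assert int(d) == 2
```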


Going in the other direction is different.  They aren't symmetric 
operations, and to think they are is a mistake.


       float(int('2'))          # float knows how to convert ints.

       Decimal(float('2.1'))    # Decimal knows how to convert floats.



What this says is that an object can supply its own converter for less 
accurate types, but should ask the other type how to convert for more 
accurate types.  (Or ask if it doesn't know how to convert.)

So the Decimal.from_float method is redundant, as its behaviour was folded 
into the constructor.  (Which is the more consistent-with-Python interface. 
I'm not sure if that was the main reason, but it makes sense to me.)
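And indeed, since Python 3.2 the constructor accepts floats directly and produces the same exact value as from_float; a quick check:

```python
from decimal import Decimal

# Both spellings yield the exact binary value of the float 0.1.
assert Decimal(0.1) == Decimal.from_float(0.1)
```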


> Likewise I think we agree that there ought to be some way to convert
> floats to the nearest "simple" Decimal, with the question being whether
> that ought to be spelled Decimal(repr(x)) or Decimal(x).

It should be this...  I'm absolutely certain! :-)

      Decimal(str(n))

No problems with that as long as it's an explicit choice.
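For what it's worth, str() of a float in Python 3 gives the shortest round-tripping form, so this spelling does land on the "nice" decimal:

```python
from decimal import Decimal

n = 2.1
# str(2.1) == '2.1' in Python 3, so the result is the human-friendly value.
assert Decimal(str(n)) == Decimal('2.1')
```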


And by adding a decimal literal, we avoid the float-to-decimal conversion 
completely when setting decimal constants and entering raw data by hand.

     2.1d

The use of str(n) above is more about getting a human readable form.


When converting the internal value of a float to a decimal, it should equal 
the float's exact value.  A repr should give the exact value of its object 
if it's supposed to be a machine-readable version of it.  (As numbers' 
__repr__ should do.)

The __str__ method should be the human readable version.  Possibly the 
console can have a setting to use str() in place of repr().  So when you 
just use the console as a calculator, it will work more like one.

In other words, don't fix decimal because float's repr isn't showing its 
exact value.  That's really an issue with float, not decimal.  And don't 
use repr() if you want a nice human-readable value; use str().
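To illustrate the two forms side by side (using the stdlib decimal module):

```python
from decimal import Decimal

# Exact binary value of the float 0.1 (machine-oriented):
exact = Decimal(0.1)
assert str(exact) == '0.1000000000000000055511151231257827021181583404541015625'

# Human-readable form, via str():
nice = Decimal(str(0.1))
assert str(nice) == '0.1'
```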



> If we're all in agreement that this is the only serious area of
> disagreement for a change which has any hope of appearing in 3.5, then
> we can put aside the more speculative proposals

The following seem like important relationships to me.


For all numbers (n) with an exact representation in int:

       '1' -> int -> float -> decimal    (exact)
       decimal -> float -> int -> '1'    True

       >>> from decimal import Decimal as D
       >>> n = D(float(int(1)))
       >>> int(float(n))
       1


For all numbers (n) with an exact representation in float:

       n -> float -> decimal      (exact)
       decimal -> float -> n      True

       >>> n = D(float('2.5'))
       >>> float(n)
       2.5

Is this valid in python 3.4?
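It should be: Decimal(float) has been exact since Python 3.2, so rounding back to float recovers the original bit for bit.  A quick check:

```python
from decimal import Decimal

for x in [1.0, 2.5, 0.1, 1e300, -3.3]:
    # Decimal(x) is the exact value of x, so float() rounds it
    # straight back to the same double.
    assert float(Decimal(x)) == x
```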


When going from higher precision to lower precision values (as stored 
internally), the best answer is to use the value with the lowest possible 
error.  (Not necessarily the one that looks the nicest.)

I say "lowest possible", because if you don't know what the error may be, 
you shouldn't guess.  It's possible someone somewhere is studying exact 
float representations as a set.  ;-)

Data input inaccuracies are for the user to determine and handle,  but 
python should supply the tools to help them do that.  It just shouldn't do 
it prematurely.
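Decimal already offers such tools; for instance, quantize() lets the user state the precision they actually trust, rather than Python guessing for them:

```python
from decimal import Decimal, ROUND_HALF_EVEN

measured = Decimal(2.1)   # exact (long) binary value of the float 2.1
# The user, not Python, decides the meaningful precision:
cleaned = measured.quantize(Decimal('0.01'), rounding=ROUND_HALF_EVEN)
assert cleaned == Decimal('2.10')
```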

Cheers,
    Ron


More information about the Python-ideas mailing list