Decimal vs float

Kay Schluehr kay.schluehr at gmx.net
Thu Jan 19 08:18:09 EST 2006


Steve Holden wrote:
> > If Mr. interpreter is as slick as he is, why doesn't he convert the
> > float himself? This is at most a warning caused by the possible
> > rounding errors of floats.
> >
> Indeed, as the documentation says: """This serves as an explicit
> reminder of the details of the conversion (including representation
> error)""". Otherwise you would get numpties using constructions like
> Decimal(0.1) and then asking why the result was the same as
> Decimal("0.10000000000000001") (or something similar). Who needs it?
> Certainly not Mr. interpreter, or his c.l.py friends.
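
For reference, under Python 2.4 (current at the time of this thread) the
constructor refuses floats outright rather than guessing; the exact
wording of the message may differ between releases:

>>> from decimal import Decimal
>>> Decimal(0.1)
Traceback (most recent call last):
  ...
TypeError: Cannot convert float to Decimal.  First convert the float to a string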

The stringification of floats seems to work accurately, just as the
error message suggests:

>>> Decimal(str(0.1))
Decimal("0.1")

This is interesting. If we define

def f():
    print str(1.1)

and disassemble the function, we get:

>>> import dis
>>> dis.dis(f)
  2           0 LOAD_GLOBAL              0 (str)
              3 LOAD_CONST               1 (1.1000000000000001)      # huh?
              6 CALL_FUNCTION            1
              9 PRINT_ITEM
             10 PRINT_NEWLINE
             11 LOAD_CONST               0 (None)
             14 RETURN_VALUE

But when we call f, we get:

>>> f()
1.1

Mr. Interpreter seems to have a higher level of awareness :)
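
The apparent discrepancy is just str versus repr. dis renders constants
with repr, which in Python 2.4 prints 17 significant digits so the float
round-trips exactly, while print uses str, which rounds to 12 digits. A
quick check (output as a 2.4-era interpreter produces it):

>>> repr(1.1)     # 17 significant digits: shows the stored binary value
'1.1000000000000001'
>>> str(1.1)      # 12 significant digits: rounds the noise away
'1.1'
>>> f.func_code.co_consts    # the stored constant really is the "long" float
(None, 1.1000000000000001)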

> > Instead of dealing with awkward wrappers, I wonder if literals
> > currently interpreted as floats could not be interpreted as Decimal
> > objects in future?
> >
> That would be a very large change in the behaviour of the interpreter,
> and unfortunately it doesn't take account of the need in decimal to
> specify the context in which a calculation takes place.

I don't see this as a big obstacle. With the current implementation, the
compiler would have to generate a Decimal object from a NUMBER token
instead of a float object. The context of a calculation would still be
the decimal module object and its attributes. Why should that change?
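
To illustrate the point: the context lives in the decimal module and is
consulted at operation time, independently of how the operands were
constructed, so Decimal literals would pick it up for free (a minimal
sketch, with Python 2.4-style output):

>>> from decimal import Decimal, getcontext
>>> getcontext().prec = 6          # module-level context, not per-object
>>> Decimal("1") / Decimal("7")
Decimal("0.142857")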

Kay



