[Python-Dev] Decimal type question [Prothon]

Mark Hahn mark at prothon.org
Sun Aug 8 00:55:53 CEST 2004


When I first announced Prothon, Tim Peters told me to feel free to post on
Python lists because he considered Prothon a possible sandbox to learn
from.  So far I have interpreted this to mean c.l.p but I have been
disappointed in the quality of technical responses.  Several people have
suggested I post here instead.  Let me know if you think I should go back
to c.l.p.

Prothon is considering using a floating Decimal type to fix the confusion
around binary floating point mentioned in some of the Python pitfalls,
warts, & gotchas.  We are not considering total replacement of the binary
Float because it would be a waste of the floating point hardware
performance, and because the decimal implementation we are considering
would be inferior in range.

Our proposal is to add a new Decimal type "between" Int and Float.  It would
have less precision than the infinite precision of Int (Prothon uses Longs
for Int), but more than Float.  Decimal would have less range than either
Int or Float, although 56 orders of magnitude is more range than Ints
typically need.

The Decimal type in .Net, which will be the VM for the first Prothon
release, has a precision of 28 decimal digits and a range of 7e-28 to 7e28.
Decimal calculations that fell outside that range
(like 0.0000_0000_0000_0000_0000_0000_0000_7 / 2) would throw an exception
instead of silently converting to Float.
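This behavior can be sketched with Python's stdlib decimal module (the
Emin/Emax values here are illustrative assumptions chosen to mimic the
stated 7e-28 .. 7e28 window; this is not the actual .Net or Prothon
implementation):

```python
from decimal import Decimal, Context, Overflow, Subnormal

# A 28-digit context whose exponent window roughly mimics the
# 7e-28 .. 7e28 range above (Emin/Emax are assumptions).
ctx = Context(prec=28, Emin=-28, Emax=28, traps=[Overflow, Subnormal])

print(ctx.multiply(Decimal("7e27"), Decimal(10)))  # within range: fine

try:
    # 7e-29 / 2 drops below the representable range
    ctx.divide(Decimal("7e-29"), Decimal(2))
except Subnormal:
    print("exception raised instead of silently converting to Float")
```

Trapping Overflow and Subnormal in the context is what turns out-of-range
results into exceptions rather than silent rounding toward Float-like
behavior.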

Numeric constants containing only a decimal point would become a Decimal
type: 1.354, 3452.7862.  Any constant with the letter e would be a normal
binary Float type:  1e-3, 1.23e62.  Int divide "i/j" would always create a
Decimal.  Decimal would only support the basic ops: +, -, *, /, and simple
funcs like min, max, and abs.  The math module would automatically coerce
Decimal arguments to Float for things like sin and log, just as it does
with Int arguments.
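A rough sketch of these literal and division rules in Python, using the
stdlib decimal module as a stand-in for Prothon's Decimal (the helper
names classify_literal and int_divide are hypothetical, for illustration
only):

```python
from decimal import Decimal

def classify_literal(text):
    # Hypothetical sketch of the literal rules described above.
    if "e" in text.lower():
        return float(text)       # any 'e' exponent -> binary Float
    if "." in text:
        return Decimal(text)     # bare decimal point -> Decimal
    return int(text)             # plain digits -> Int

def int_divide(i, j):
    # Under the proposal, Int divide "i/j" always yields a Decimal.
    return Decimal(i) / Decimal(j)

print(type(classify_literal("1.354")).__name__)    # Decimal
print(type(classify_literal("1.23e62")).__name__)  # float
print(int_divide(1, 4))                            # 0.25
```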

This would still solve the gotcha:  0.1 + 0.1 would equal 0.2 exactly.  For
that matter, any constant or expression (not including a divide) that
stayed within the 28-digit precision and the exponent range would have an
exact representation.
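For comparison, Python's stdlib decimal module shows the exactness
difference today (a sketch of the behavior being argued for, not Prothon
code):

```python
from decimal import Decimal

# Binary floats: repeated 0.1s drift away from the exact value.
print(0.1 + 0.1 + 0.1 == 0.3)                             # False
# Decimals: 0.1 has an exact decimal representation.
print(Decimal("0.1") + Decimal("0.1") == Decimal("0.2"))  # True
print(Decimal("0.1") * 3 == Decimal("0.3"))               # True
```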

Can anyone see any problem with this scheme?



