[Python-ideas] Decimal literal?

Adam Olsen rhamph at gmail.com
Thu Dec 4 10:37:11 CET 2008


On Thu, Dec 4, 2008 at 12:51 AM, Chris Rebert <clp at rebertia.com> wrote:
> With Python 3.0 being released, and going over its many changes, I was
> reminded that decimal numbers (decimal.Decimal) are still relegated to
> a library and aren't built-in.
>
> Has there been any thought to adding decimal literals and making
> decimal a built-in type? I googled but was unable to locate any
> discussion of the exact issue. The closest I could find was a
> suggestion about making decimal the default instead of float:
> http://mail.python.org/pipermail/python-ideas/2008-May/001565.html
> It seems that decimal arithmetic is more intuitively correct than
> plain floating point, and floating point's main (only?) advantage is
> speed, but it seems like premature optimization to favor speed over
> correctness by default at the language level.
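
The intuition presumably comes from examples like this classic one (a
minimal interpreter sketch; the exact digits printed depend on the
version's float repr):

    >>> 0.1 + 0.2
    0.30000000000000004
    >>> from decimal import Decimal
    >>> Decimal('0.1') + Decimal('0.2')
    Decimal('0.3')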

Intuitively, you'd think it's more correct, but for non-trivial usage
I see no reason for it to be.  The strongest arguments on [1] seem to
be controllable precision and stricter standards.  Controllable
precision works just as well in a library.  Stricter standards (i.e.
very portable semantics) could be achieved with base-2 floats via
software emulation on all platforms (and throwing performance out the
window).
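
For example, precision control needs nothing from the language itself;
a minimal sketch with the existing decimal module:

    >>> from decimal import Decimal, getcontext
    >>> getcontext().prec = 6
    >>> Decimal(1) / Decimal(7)
    Decimal('0.142857')
    >>> getcontext().prec = 28
    >>> Decimal(1) / Decimal(7)
    Decimal('0.1428571428571428571428571429')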

Do you have some use cases that are (completely!) correct in decimal,
and not in base-2 floating point?  Something non-trivial (i.e. beyond
emulating schoolbook arithmetic, writing a calculator, etc.).
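
For illustration, decimal loses exactness as soon as a non-terminating
quotient shows up, even at the default context precision:

    >>> from decimal import Decimal
    >>> Decimal(1) / Decimal(3) * 3
    Decimal('0.9999999999999999999999999999')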

I see Decimal as a modest investment for a mild return.  Not worth the
effort to switch.


-- 
Adam Olsen, aka Rhamphoryncus


