Turn off ZeroDivisionError?

Mark Dickinson dickinsm at gmail.com
Sun Feb 10 20:12:56 EST 2008


On Feb 10, 5:50 pm, Ben Finney <bignose+hates-s... at benfinney.id.au> wrote:
> Most people would not want this behaviour either::
>
>     >>> 0.1
>     0.10000000000000001

Sure. And if it weren't for backwards-compatibility and speed issues, one
could reasonably propose making Decimal the default floating-point type in
Python (whilst still giving access to the hardware binary floating point).
I dare say that the backwards-compatibility isn't really a problem: I can
imagine a migration strategy resulting in Decimal default floats in
Python 4.0 ;-). But there are orders-of-magnitude differences in speed
that aren't going to be solved by merely rewriting decimal.py in C.

I guess it's all about tradeoffs.
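If you want a feel for the size of that gap on your own machine, here's a
rough sketch using the standard timeit module (I won't quote numbers, since
they vary a lot by machine and Python version; the shape of the result is
what matters):

    from timeit import Timer

    # A million additions on hardware binary floats...
    float_timer = Timer("x + y", "x = 0.1; y = 0.2")

    # ...and the same million additions on decimal.Decimal values.
    decimal_timer = Timer(
        "x + y",
        "from decimal import Decimal; x = Decimal('0.1'); y = Decimal('0.2')",
    )

    print("float:  ", min(float_timer.repeat(3)))
    print("Decimal:", min(decimal_timer.repeat(3)))

With the pure-Python decimal module you should see Decimal coming out
orders of magnitude slower, which is the gap I mean.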

> But the justification for this violation of surprise is "Python just
> does whatever the underlying hardware does with floating-point
> numbers". If that's the rule, it shouldn't be broken in the special
> case of division by zero.
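
(To make the contrast concrete: for *overflow*, Python really does hand back
whatever the hardware produced, but for division by zero it steps in and
raises instead. A quick interactive illustration, as it looks on a recent
CPython:

    >>> 1e308 * 10     # overflow: the hardware's IEEE 754 answer passes through
    inf
    >>> 1.0 / 0.0      # division by zero: Python intervenes and raises
    Traceback (most recent call last):
      ...
    ZeroDivisionError: float division by zero

So the "do whatever the hardware does" rule is already applied selectively.)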

I'm not convinced that this is really the justification, but I'm not quite
sure what we're talking about here. The justification for *printing*
0.1000...1 instead of 0.1 has to do with not hiding binary floating-point
strangeness from users, since they're eventually going to have to deal with
it anyway, and hiding it arguably causes worse difficulties in
understanding. The justification for having the literal 0.1 not *be*
exactly the number 0.1: well, what are the alternatives? Decimal and
Rational are very slow in comparison with float, and Decimal wasn't even
available until relatively recently.
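
(As a footnote, a sketch of what that literal actually denotes, on a Python
recent enough to have the fractions module and Decimal construction from a
float; everything here is stdlib, nothing hypothetical:

    >>> from decimal import Decimal
    >>> Decimal(0.1)    # the exact value of the double nearest to 0.1
    Decimal('0.1000000000000000055511151231257827021181583404541015625')
    >>> from fractions import Fraction
    >>> Fraction(0.1)   # the same value, as an exact ratio of integers
    Fraction(3602879701896397, 36028797018963968)
    >>> Decimal('0.1')  # what a decimal literal could give instead: exactly 1/10
    Decimal('0.1')

The first two show the price of binary floats: the literal 0.1 quietly
becomes the nearest representable double. The last shows the alternative,
which is exact but, as above, much slower.)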

Mark


