Decimal arithmetic, was Re: Python GUI app to impress the boss?

Magnus Lyckå magnus at thinkware.se
Fri Sep 20 08:31:51 EDT 2002


ChrisBarker wrote:
> The only reason I can see
> not to write your class in Python is performance, and unless you are 
> doing a lot of calculations, that is probably a non-issue.

Performance issues might well matter in large financial
systems, but performance isn't the only reason. If we want a
statement like

x = 5 * 2.43

to yield an exact decimal number rather than a float, we
can't get away with just a Python class.

Ok, you might say that it's not a big thing to write

x = 5 * Decimal(2.43)

instead, but it's not quite the same thing. And if someone
were to write

x = Decimal(9.999999999999999) - Decimal(9.9999)

they would not get what they expected (unless they
expected 0.000099999999998). Ok, they could write

x = Decimal('9.999999999999999') - Decimal('9.9999')

but now it's starting to look pretty ugly compared to

x = 9.999999999999999 - 9.9999
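
To make the difference concrete, here is a sketch. I'm
assuming a Decimal class along the lines of the ones being
discussed, living in a module called decimal and accepting
both floats and strings; treat those details as assumptions.
The point is that a float literal has already lost its
decimal digits before Decimal ever sees it:

from decimal import Decimal

f  = 9.999999999999999 - 9.9999                         # floats: roughly 0.000099999999998
d1 = Decimal(9.999999999999999) - Decimal(9.9999)       # same error, just wrapped in Decimal
d2 = Decimal('9.999999999999999') - Decimal('9.9999')   # exact: Decimal('0.000099999999999')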

Python is said to follow the principle of least surprise:
that's why 1 / 2 is being changed from meaning 0 to meaning
0.5. Quite a few people have expressed their surprise over
things like

 >>> 0.9
0.90000000000000002

and reported it as a bug. Floating point arithmetic is
surprising to a lot of people. It's annoying when they
report this behaviour as a bug, and it's worse if it leads
them to write buggy programs, particularly ones that end up
in use at your bank!
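
The surprise is easy to demonstrate on a machine with
IEEE 754 doubles (which is practically everything these
days): the literal 0.9 is stored as the nearest binary
double, which isn't exactly 0.9, and the long repr just
shows enough digits to pin down that stored value:

 >>> '%.20f' % 0.9
'0.90000000000000002220'
 >>> 0.9 == 0.90000000000000002
True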

I've seen ambitious and well-regarded programmers with a
non-technical background write things like
"if (a == (b / 5))" in tools that generate C++, with a and
b being doubles.
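
That kind of comparison can fail for perfectly innocent
looking values. A small sketch (the particular numbers are
just ones I picked where the rounding happens to bite, and
the tolerance test at the end is the usual workaround):

 >>> a = 0.14
 >>> b = 0.7
 >>> a == b / 5
False
 >>> abs(a - b / 5) < 1e-12
True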

Python is in many ways an excellent language for people
who are beginners at programming, or who aren't full-time
programmers. It would be great if we could remove some
more of the remaining stumbling blocks.

What you see as a natural, completely logical restriction
is an evil, subtle and utterly absurd bug to some people.

I don't want to break a lot of existing code though.

It would be tedious to have to change a lot of 3.14 literals
into 3.14e0, and that's hardly acceptable before Python 3.0.

But perhaps a command line argument, or something like
"import exact_arithmetic", could switch the behaviour from
floating point to exact arithmetic. I guess we'd need a
module anyway, to specify things. Like:

 >>> import exact_arithmetic as exact
 >>> exact.default_rounding(exact.UP)
 >>> exact.default_fractions(2)
 >>> exact.max_precision = 20
 >>> 7.5/20
0.38
 >>>
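
For comparison, here is roughly how those settings could be
expressed with a Decimal class driven by a context object.
I'm assuming the interface of a decimal module with prec,
rounding and quantize(), so treat the exact names as
assumptions; exact_arithmetic itself is of course still
hypothetical:

 >>> from decimal import Decimal, getcontext, ROUND_UP
 >>> getcontext().prec = 20            # like exact.max_precision = 20
 >>> getcontext().rounding = ROUND_UP  # like exact.default_rounding(exact.UP)
 >>> x = Decimal('7.5') / Decimal('20')
 >>> x
Decimal('0.375')
 >>> x.quantize(Decimal('0.01'))       # like exact.default_fractions(2)
Decimal('0.38')

It's not the same thing as changing what a plain 7.5/20
means, of course, but it shows that the pieces are there.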
