[Python-Dev] RE: Adding decimal (aka FixedPoint) numbers to Python

Tim Peters tim.one@comcast.net
Thu, 19 Dec 2002 00:58:37 -0500


[Michael McLay]
> ...
> be a reasonable start. I am concerned about one thing. Will the
> fixed point implementation you have implemented be backwards
> compatible with an implementation of Cowlishaw's decimal numbers?

Nope.  Any scheme that can lose information is, in general, incompatible
with any other scheme that can lose information, unless they lose
information in identical ways in all cases.  Since Cowlishaw's is a floating
scheme, and FixedPoint is a fixed scheme, they'll get different results in
some information-losing cases.

Examples aren't hard to find.  Here's one:  what's 13% of 100006.73?

The true result is exactly 10000673 * 13 / 10000 = 13000.8749.

Do that via fixed-points with 2 digits after the decimal point, and it's
rounded down to 13000.87 (which is the best possible result if you get to
keep no more than two digits after the decimal point).
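The fixed-point computation above can be sketched with plain integer arithmetic, carrying the amount in hundredths (cents) throughout so there is exactly one rounding step at the end (round-half-even, as FixedPoint uses):

```python
# Single rounding: 13% of 100006.73, kept in integer cents throughout.
cents = 10000673          # 100006.73 expressed in cents
num = cents * 13          # numerator of cents * 13/100
q, r = divmod(num, 100)   # exact quotient and remainder
# one round-half-even step on the remainder
if r > 50 or (r == 50 and q % 2 == 1):
    q += 1
print(q / 100)            # -> 13000.87
```

Since the remainder here is 49, the single rounding step truncates, and the best-possible penny result pops out directly.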

Use Cowlishaw's scheme with 8 digits of precision (which counts all
digits -- the decimal point floats) instead, and the trailing 9 has to be
tossed, and so rounds the last retained digit from 4 up to 5:  13000.875.
You've got 8 digits now, but a worse result if pennies are what you need:
if you round *that* to the nearest penny, to-nearest/even ("banker's
rounding") says "OK, I have to toss the 5, and 7 is odd, so round it up to
the even 8", leaving 13000.88.
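The same double rounding can be reproduced with the `decimal` module that later landed in the standard library (it implements Cowlishaw's spec, so it makes a fair stand-in here):

```python
from decimal import Decimal, getcontext, ROUND_HALF_EVEN

getcontext().prec = 8                      # 8 significant digits, as above
product = Decimal("100006.73") * Decimal("0.13")
print(product)    # 13000.875  (first rounding: the trailing 9 is tossed)
pennies = product.quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN)
print(pennies)    # 13000.88   (second rounding: the 5 ties, 7 rounds to even 8)
```

Two roundings, and the second one pushes the result a penny away from the best-possible answer.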

The end results aren't the same, and I recall now that this is why
FixedPoint was designed to lose info on multiply:  when you're computing
percentages to the nearest penny (be it sales tax or rebates), you can't get
the best possible result in all cases if you use a scheme that rounds twice.
A fixed-point scheme can get away with rounding just once (& very
naturally), but a floating scheme can have "too many digits", and therefore
force the user to do a second rounding step to cut the result back to the
number of fractional digits they wanted.  Double-rounding can't be as good
as single-rounding in all cases, unless the floating scheme uses much more
precision than the fixed scheme (there are tricky theorems that kick in
then -- if you have "enough" excess precision, it's provable that
double-rounding is always the same as single-rounding -- but not one user in
a million can be expected to know this, and even if they did the application
is too difficult since it effectively requires proving that you've always
got enough extra precision relative to your app's specific numbers).
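The excess-precision escape hatch mentioned above can be illustrated with the same `decimal` stand-in: give the floating scheme far more digits than the operands need, and the intermediate product is exact, so the final quantize is the only rounding that actually loses information:

```python
from decimal import Decimal, getcontext, ROUND_HALF_EVEN

getcontext().prec = 20                     # far more precision than the product needs
product = Decimal("100006.73") * Decimal("0.13")
print(product)    # 13000.8749 (exact -- nothing lost at this precision)
pennies = product.quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN)
print(pennies)    # 13000.87   (matches the single-rounding fixed-point answer)
```

But, as the text says, knowing how much excess precision is "enough" for *your* app's numbers is the hard part, and that burden falls on the user.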

BTW, this is why FixedPoint's "good to the last penny" has more to do with
the fixed-point business than with the decimal business.  Decimal arithmetic
is just as prone to double-rounding errors as binary arithmetic, and
floats -- whether decimal or binary -- force the user into the double-rounding
business when they're working with fixed-point quantities.