Decimal arithmetic, was Re: Python GUI app to impress the boss?

Christopher Browne cbbrowne at acm.org
Fri Sep 20 19:54:55 EDT 2002


In an attempt to throw the authorities off his trail, danb_83 at yahoo.com (Dan Bishop) transmitted:
> Magnus Lyckå <magnus at thinkware.se> wrote in message news:<3D8B1537.4010803 at thinkware.se>...
>> ChrisBarker wrote:
>> > The only reason I can see
>> > to not writing your class in Python is performance, and unless you are 
>> > doing a lot of calculations, that is probably a non-issue.
>> 
>> Performance issues might well matter in large financial
>> systems, but that's not the only reason.
>
> Please name one other real-world situation in which decimal arithmetic
> is actually *needed*.
>
>> [snip]
>> And if someone would write
>> 
>> x = Decimal(9.999999999999999) - Decimal(9.9999)
>> 
>> they would not get what they expected (unless they expected
>> 0.000099999999998). Ok, they could write

> But how often does an error of 10 parts per trillion really matter?

> Afaict, not nearly enough to justify having a default number
> representation that works against the hardware.
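[The discrepancy in the quoted example is easy to reproduce with plain binary doubles; a minimal sketch (the exact trailing digits depend on IEEE 754 double rounding, not on Python itself):

```python
# Neither 9.999999999999999 nor 9.9999 is exactly representable
# as a base-2 fraction, so each literal is silently rounded to the
# nearest double, and the subtraction exposes the difference.
a = 9.999999999999999
b = 9.9999
diff = a - b

print(diff == 0.0001)      # False: a parts-per-trillion artifact remains
print(abs(diff - 0.0001))  # the size of the artifact (around 2e-15)
```

]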

This isn't thinking like an ex-COBOL programmer.

An ex-COBOL programmer will look at this and say something like:

  "You stupid buffoons!  You're arguing over how to deal with some
  abstruse fifteenth decimal place.  In COBOL, we never had to worry
  about such idiocy concerning the numeric /error/ that you're
  describing.

  If we're working with dollars and cents, we define the variable as:

  X PIC 999999999V99

  And we will have none of these idiotic rounding errors."

Add X to a variable with the same format, and there's NO issue of bad
rounding.  The behaviour of BCD arithmetic is perfectly well defined,
and _doesn't_ depend on hoping and praying that the IEEE FP operations
will do approximately the right thing.

The COBOL or PL/I programmer, or anyone else accustomed to BCD data
types, will justifiably look at the bizarro FP rounding as being,
well, bizarre.

There's _no_ good reason for subtracting two perfectly good decimal
values from one another to produce weird artifacts in some 17th
decimal place, short of a willingness to accept fairly deranged
arithmetic.

People use FP arithmetic on the ludicrous assumption that FP values
are "Real numbers."  They aren't, despite the languages that call
them REALs.  Nor are they generally decimal values, but rather binary
fractions, whence the bizarre artifacts of FP.
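
The classic demonstration that FP literals are binary fractions rather than decimals (sketched here in Python; the same holds in any language using IEEE 754 doubles):

```python
# 0.1, 0.2 and 0.3 are all binary approximations; the nearest doubles
# to 0.1 and 0.2 sum to something other than the nearest double to 0.3.
print(0.1 + 0.2 == 0.3)   # False
print(repr(0.1 + 0.2))    # 0.30000000000000004

# By contrast, tenths are exact when represented as actual rationals:
from fractions import Fraction
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True
```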

I think there's a good argument that people should generally be
flogged mercilessly if they use FP values without a /good/ education
in numerical analysis, which is /necessary/ to use them competently.
-- 
(concatenate 'string "cbbrowne" "@cbbrowne.com")
http://cbbrowne.com/info/unix.html
"It don't mean a thing, if it ain't got that swing..." 
