Maths error

Nick Maclaren nmm1 at cus.cam.ac.uk
Fri Jan 12 04:29:19 EST 2007


In article <mailman.2632.1168583141.32031.python-list at python.org>,
"Hendrik van Rooyen" <mail at microcorp.co.za> writes:
|> 
|> I would have thought that this sort of thing was a natural consequence
|> of rounding errors - if I round (or, worse, truncate) a binary number,
|> I can be off by at most one unit in the last place, with an expectation
|> of half a least significant digit, while if I use hex digits, my
|> expectation is around eight, and for decimal around five...
|> 
|> So it would seem natural that errors would propagate
|> faster on big-base systems, AOTBE (all other things being
|> equal), but this may be a naive view...

Yes, indeed, and that is precisely why the "we must use binary" camp won
out.  The problem was that computers of the early 1970s were not quite
powerful enough to run real applications with simulated floating-point
arithmetic.  I am one of the half-dozen people who did ANY actual tests
on real numerical code, but there may have been some work since!
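
It is easy enough to make that intuition concrete today.  Here is a
throwaway Python sketch (mine, not anything from those old tests): it
estimates the mean relative error of rounding random values to a fixed
number of base-b digits, with the digit counts chosen, as an illustrative
assumption, so that each base gets roughly the same 24 bits.

import random

def round_to_digits(x, base, digits):
    """Round positive x to 'digits' significant base-'base' digits."""
    e = 0
    while x >= base ** digits:       # scale down until the integer part fits
        x /= base
        e += 1
    while x < base ** (digits - 1):  # scale up until all digits are in use
        x *= base
        e -= 1
    return round(x) * base ** e      # round-to-nearest, then undo the scaling

def mean_rel_error(base, digits, trials=100_000):
    total = 0.0
    for _ in range(trials):
        x = random.uniform(1.0, 1000.0)
        total += abs(round_to_digits(x, base, digits) - x) / x
    return total / trials

# Roughly 24 bits of precision in each base (an illustrative choice):
for base, digits in ((2, 24), (16, 6), (10, 7)):
    print(f"base {base:2d}: mean relative error {mean_rel_error(base, digits):.3e}")

With equal bit budgets, the larger bases should show the larger mean
error; that is the classical "wobble" argument in executable form.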

Nowadays, such testing would be easy, and it would make quite a good PhD.
The points to look at would be the base and the rounding rules (including
IEEE rounding versus probabilistic versus last-bit-forced[*]).  We know
that the use (or not) of denormalised numbers and the exact details of
true rounding make essentially no difference.
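
To sketch what such a test might look like (my construction, with an
illustrative 20-bit precision and random data, not anyone's published
methodology): quantise every intermediate result under each rounding rule
and watch a long sum drift away from an effectively exact reference.

import math
import random

def quantise(x, bits, mode):
    """Reduce x to 'bits' significant bits under the given rounding rule."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)              # x == m * 2**e with 0.5 <= |m| < 1
    scaled = m * (1 << bits)          # mantissa scaled to integer range
    if mode == "nearest":             # IEEE-style round to nearest, ties to even
        q = round(scaled)
    elif mode == "chop":              # truncate towards zero
        q = math.trunc(scaled)
    elif mode == "stochastic":        # probabilistic: round up with prob = fraction
        low = math.floor(scaled)
        q = low + (random.random() < scaled - low)
    elif mode == "forced":            # footnote [*]: truncate, then jam last bit
        q = math.trunc(scaled)
        if q != scaled:               # inexact, so force the magnitude's low bit
            q = (abs(q) | 1) * (1 if q > 0 else -1)
    return math.ldexp(q, e - bits)    # q * 2**(e - bits)

random.seed(1)
data = [random.uniform(0.0, 1.0) for _ in range(5_000)]
exact = math.fsum(data)               # correctly-rounded reference sum
for mode in ("nearest", "chop", "stochastic", "forced"):
    s = 0.0
    for v in data:
        s = quantise(s + v, 20, mode)
    print(f"{mode:10s} relative drift {abs(s - exact) / exact:.2e}")

Truncation drifts systematically downwards; the unbiased rules mostly
cancel.  That, in miniature, is the comparison the PhD would run across
bases as well as rounding rules.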

In a world ruled by reason rather than spin, this investigation
would have been done before claiming that decimal floating-point is an
adequate replacement for binary for numerical work, but we don't live
in such a world.  No matter.  Almost everyone in the area agrees that
decimal floating-point isn't MUCH worse than binary, from a numerical
point of view :-)
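
Anyone who wants to poke at that claim from Python can do so directly:
the standard decimal module provides decimal floating-point at any chosen
precision.  A toy comparison, with precision 16 as my rough stand-in for
a 53-bit binary double:

import math
from decimal import Decimal, getcontext

getcontext().prec = 16               # roughly the ~16 decimal digits of a double

def harmonic_binary(n):
    s = 0.0
    for k in range(1, n + 1):
        s += 1.0 / k
    return s

def harmonic_decimal(n):
    s = Decimal(0)
    for k in range(1, n + 1):
        s += Decimal(1) / k          # each division rounded to 16 digits
    return s

n = 100_000
b = harmonic_binary(n)
d = harmonic_decimal(n)
print(f"binary : {b!r}")
print(f"decimal: {d}")
print(f"difference: {abs(Decimal(repr(b)) - d)}")

Runs of this kind tend to show accumulated errors of the same order of
magnitude on both sides, which is about all the claim above amounts to.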


[*] Assuming signed magnitude, calculate the answer truncated towards
zero but keep track of whether it is exact.  If not, force the last
bit to 1.  An old, cheap approximation to rounding.
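
Rendered as code, the rule is a one-liner on the magnitude (under signed
magnitude the sign lives separately); force_last_bit is my illustrative
name, not established terminology:

def force_last_bit(magnitude, inexact):
    """Footnote [*]: the magnitude has already been truncated towards
    zero; if anything was lost in the truncation, jam its low bit to 1."""
    return (magnitude | 1) if inexact else magnitude

# e.g. an inexact 0b1011010 becomes 0b1011011; an exact one is untouched:
print(bin(force_last_bit(0b1011010, True)))    # 0b1011011
print(bin(force_last_bit(0b1011010, False)))   # 0b1011010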


Regards,
Nick Maclaren.


