Maths error

Nick Maclaren nmm1 at cus.cam.ac.uk
Thu Jan 11 05:24:53 EST 2007


In article <Xns98B4E769834AAtim111one at 216.196.97.136>,
Tim Peters <tim.one at comcast.net> writes:
|> 
|> Sure.  Possibly even most.  Short of writing a long & gentle tutorial, 
|> can that be improved?  Alas, most people wouldn't read that either <0.5 
|> wink>.

Yes.  Improved wording would be only slightly longer, and it is never
appropriate to omit all negative aspects.  The truth, the whole truth
and nothing but the truth :-)

|> Worse, I expect most people have no real idea that there's a possible
|> difference between internal and external representations.  This is often 
|> given as a selling point for decimal arithmetic:  it's WYSIWYG in ways 
|> binary fp can't be (short of inventing power-of-2 fp representations for 
|> I/O, which few people would use).

Right.  Another case where none of the problems shows up in dinky little
examples, but they do in real code :-(
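
The gap itself is easy to show even at the prompt, though its
consequences mostly bite in real code.  A quick illustration with the
decimal module standing in for "decimal arithmetic":

    >>> 0.1 + 0.2 == 0.3        # binary fp: the stored values aren't what you typed
    False
    >>> from decimal import Decimal
    >>> Decimal('0.1') + Decimal('0.2') == Decimal('0.3')   # decimal fp: WYSIWYG
    True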

|> > A lot of very well-respected numerical analysts said that larger bases
|> > led to a faster build-up of error (independent of the precision).  My
|> > limited investigations indicated that there was SOME truth in that,
|> > but it wasn't a major matter; I never saw the matter settled
|> > definitively.
|> 
|> My point was that 28 decimal digits of precision is far greater than 
|> supplied even by 64-bit binary floats today (let alone the smaller sizes 
|> in most-common use back in the 60s and 70s).  "Pollution" of low-order 
|> bits is far less of a real concern when there are some number of low-
|> order bits you don't care about at all.

Yes, but that wasn't their point.  It was that in (say) iterative
algorithms, the error builds up by a factor of the base at every step.
If it wasn't for the fact that errors build up, almost all programs
could ignore numerical analysis and still get reliable answers!

Actually, my (limited) investigations indicated that such an error
build-up was extremely rare - I could achieve it only in VERY artificial
programs.  But I did find that the errors built up faster for higher
bases, so that a reasonable rule of thumb is that 28 digits with a decimal
base are comparable to (say) 80 bits with a binary base.
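
For what it is worth, one ingredient in that sort of argument is easy to
see: the "wobble" of a base-b representation, where one unit in the last
place spans a relative error range of almost a factor of b.  A toy
sketch with the decimal module at 4 digits (the numbers are mine, just
picked to make it visible):

    from decimal import Decimal, getcontext

    getcontext().prec = 4            # 4 significant decimal digits

    ulp = Decimal('0.001')           # one unit in the last place anywhere in [1, 10)
    print(ulp / Decimal('1.000'))    # 0.001     - worst relative error, just above 1
    print(ulp / Decimal('9.999'))    # 0.0001000 - ten times smaller, just below 10

In base 2 the corresponding ratio is only 2, so each rounding leaves
less slop to accumulate.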

And, IN GENERAL, programs won't be using 128-bit IEEE representations.
Given Python's overheads, there is no reason not to use them, unless the
hardware is catastrophically slower (which is plausible).
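
That is one of the attractions of the decimal module: the working
precision is just a context attribute, so exceeding any hardware format
is a one-liner (the 34 digits below are merely an example):

    >>> import decimal
    >>> decimal.getcontext().prec      # the module's default working precision
    28
    >>> decimal.getcontext().prec = 34
    >>> print(decimal.Decimal(2).sqrt())
    1.414213562373095048801688724209698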

|> If you know a < b, doing
|> 
|>     c = a + (b-a)/2
|> 
|> instead of
|> 
|>     c = (a+b)/2
|> 
|> at least guarantees (ignoring possible overflow) a <= c <= b.  As shown 
|> last time, it's not even always the case that (x+x)/2 == x in decimal fp 
|> (or in any fp base > 2, for that matter).

Yes.  Back in the days before binary floating-point started to dominate,
we taught that as a matter of routine, but it has not been taught to
all users of floating-point for a couple of decades.  Indeed, a lot of
modern programmers regard having to distort simple expressions in that
way as anathema.

It isn't a major issue, because our experience from then is that it is
both teachable and practical, but it IS a way in which any base above
2 is significantly worse than base 2.
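
For anyone who would rather see it than take it on trust, here is a
small sketch with the decimal module at a deliberately tiny precision
(3 digits; the particular values are just picked to make the rounding
visible):

    from decimal import Decimal, getcontext

    getcontext().prec = 3          # tiny precision so the rounding shows

    x = Decimal('5.01')
    print((x + x) / 2)             # 5.0  -- 10.02 rounds to 10.0, so (x+x)/2 != x

    a, b = Decimal('5.01'), Decimal('5.03')
    print((a + b) / 2)             # 5.0  -- the "obvious" midpoint falls below a
    print(a + (b - a) / 2)         # 5.02 -- the rearranged form stays within [a, b]

In base 2 the first failure cannot happen at all: x+x and the halving
are both exact (barring overflow and underflow), so (x+x)/2 == x.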


Regards,
Nick Maclaren.


