Math errors in python

Alex Martelli aleaxit at yahoo.com
Mon Sep 20 07:57:42 EDT 2004


Heiko Wundram <heikowu at ceosg.de> wrote:

> On Sunday, 19 September 2004 at 19:41, Alex Martelli wrote:
> > gmpy (or to be more precise the underlying GMP library) runs optimally
> > on AMD Athlon 32-bit processors, which happen to be dirt cheap these
> > days, so a cleverly-purchased 300-dollars desktop Linux PC using such an
> > Athlon chip would no doubt let you use way more than these humble couple
> > thousand bits for such interactive computations while maintaining a
> > perfectly acceptable interactive response time.
> 
> But still, no algorithm implemented in software will ever beat the 
> FADD/FMUL/FDIV/FPOW/FSIN/FCOS etc. instructions in runtime, that was my

Yep, the hardware would have to be designed in a very lousy way for its
instructions to run slower than software running on the same CPU;-).

If you're not using some "vectorized" package such as Numeric or
numarray, though, it's unlikely that you care about speed -- and if you
_are_ using Numeric or numarray, it doesn't matter to you what type
Python itself uses for some literal such as 3.17292 -- it only matters
(speedwise) what your computational package is using (single precision,
double precision, whatever).
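For instance (a stdlib-only sketch -- `struct` round-trips a value through 32-bit storage, the way a single-precision computational package would hold it, while the Python literal itself is always a double):

```python
import struct

x = 3.17292  # a Python float literal: always an IEEE-754 double

# Round-trip through 32-bit storage, as a single-precision package would:
x32 = struct.unpack('f', struct.pack('f', x))[0]

print(x)    # the double keeps all the digits you wrote
print(x32)  # slightly off: single precision keeps only ~7 decimal digits
```

So the precision that matters speedwise and accuracy-wise is the package's storage format, not the type Python parses the literal into.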

> point... And error calculation is always possible, so that you can give
> bounds to your result, even when using normal floating point arithmetic. And,

Sure!  Your problems come when the bounds you compute are not good
enough for your purposes (given how deucedly loose error-interval
computations tend to be, that's going to happen more often than actual
accuracy loss in your computations... try an interval-arithmetic package
some day, to see what I mean...).

> even when using GMPy, you have to know about the underlying limitations of
> binary floating point so that you can reorganize your code if need be to add
> precision (because one calculation might be much less precise if done in some
> way than in another).

Sure.  Throwing more precision at a badly analyzed and structured
algorithm is like putting a band-aid on a wound.  I _have_ taught numeric
analysis to undergrads, and nobody could have passed my course without
learning to quote that "party line" back at me, obviously.
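The classic illustration of "reorganize rather than add precision" is catastrophic cancellation: two nearby numbers subtracted, wiping out the significant digits. A standard textbook example (1 - cos(x) for small x, rewritten via the half-angle identity):

```python
import math

x = 1e-8

# Naive form: cos(x) is so close to 1.0 that the subtraction cancels
# essentially every significant digit.
naive = 1.0 - math.cos(x)            # ~0.0 in double precision

# Algebraically identical form with no cancellation:
# 1 - cos(x) == 2 * sin(x/2)**2
better = 2.0 * math.sin(x / 2) ** 2  # ~5e-17, accurate to full precision

print(naive)
print(better)
```

Same mathematics, same double precision -- only the order of operations changed, and one version keeps the accuracy the other throws away.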

In the real world, the band-aid stops the blood loss often enough that
few practising engineers and scientists are seriously motivated to
remember and apply all they've learned in their numeric analysis courses
(assuming they HAVE taken some: believe it or not, it IS quite possible
to get a degree in engineering, physics, etc., in most places, without
taking even ONE course in numeric analysis! the university where I
taught was an exception only for _some_ of the degrees it granted --
you couldn't graduate in _materials_ engineering without that course,
for example, but you COULD graduate in _buildings_ engineering while
bypassing it...).

Yes, this IS a problem.  But I don't know what to do about it -- after
all, I _am_ quite prone to taking such shortcuts myself... if some
computation is giving me results that smell wrong, I just do it over
with 10 or 100 times more bits... yeah, I _do_ know that will only work
99.99% of the time, leaving a serious problem, possibly hidden and
unsuspected, more often than one can be comfortable with.  In my case, I
have excuses -- I'm more likely to have fallen into some subtle trap of
_statistics_, making my precise computations pretty meaningless anyway,
than to be doing perfectly correct statistics in numerically smelly ways
(hey, I _have_ been brought up, as an example of falling into traps, in
"American Statistician", but not yet, AFAIK, in any journal dealing with
numerical analysis...:-).
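The "do it over with more bits" band-aid itself, sketched with the stdlib `decimal` module standing in for gmpy (the hypothetical `smelly_sum` here is my own toy example, not anything from this thread):

```python
from decimal import Decimal, getcontext

def smelly_sum(prec):
    # A sum whose small term drowns in the rounding at low precision.
    getcontext().prec = prec
    big = Decimal(10) ** 20
    return (big + Decimal(1)) - big  # exact answer is 1

# At 10 digits the +1 vanishes entirely; redone with 50 digits the
# result "smells right" -- with no guarantee that yet more digits
# wouldn't change it again.
print(smelly_sum(10) == 0)  # True: the 1 was rounded away
print(smelly_sum(50) == 1)  # True: enough digits to carry it through
```

Cranking `prec` up until the answer stops changing is exactly the 99.99%-reliable shortcut described above -- it works until it silently doesn't.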


Alex
