[SciPy-dev] Numeric precision measurements?

Eric Jonas jonas at cortical.mit.edu
Wed Jun 8 14:20:48 EDT 2005


So, some friends and I are hacking on scipy this summer (yes, this is
our idea of a fun summer), and as we try out different algorithms,
we're running into floating point precision effects. For example, we
get slightly different answers when we do a convolution via FFT vs.
via the simple direct algorithm. I'm curious how the scipy developers
measure/quantify this sort of error when choosing which algorithms to
implement or use in the actual scipy codebase. Is something like GMP
used to compute a much-closer-to-"real" reference value, with the
output of that high-precision implementation then used to measure the
error of the other methods? Or does someone just say, "hey, I think I
like -this- algorithm for matmul/conv/whatever, I'll use it and assume
users are smart enough to deal with FP issues"?
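
To make the question concrete, here's a rough sketch of the kind of
measurement I have in mind, with Python's fractions.Fraction standing
in for GMP (every float64 is an exact rational, so a convolution done
in rational arithmetic is the true answer for the inputs as stored):

    from fractions import Fraction

    import numpy as np
    from scipy.signal import fftconvolve

    x = np.random.randn(256)
    h = np.random.randn(32)

    # Exact reference: convolve in rational arithmetic, no rounding.
    xf = [Fraction(float(v)) for v in x]
    hf = [Fraction(float(v)) for v in h]
    n = len(xf) + len(hf) - 1
    exact = [sum(xf[k] * hf[i - k]
                 for k in range(max(0, i - len(hf) + 1),
                                min(i + 1, len(xf))))
             for i in range(n)]

    scale = max(abs(e) for e in exact)
    for name, y in [("direct", np.convolve(x, h)),
                    ("fft", fftconvolve(x, h))]:
        # Compare each float result against the exact value, doing the
        # subtraction itself in exact arithmetic.
        err = max(abs(Fraction(float(v)) - e)
                  for v, e in zip(y, exact)) / scale
        print("%6s: max relative error = %.3e" % (name, float(err)))

The idea would be to score every candidate routine against the same
exact reference, instead of only comparing the routines to each other.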

Thanks!
			...Eric



