Bug in floating-point addition: is anyone else seeing this?

Dan Upton upton at virginia.edu
Wed May 21 17:19:47 EDT 2008


On Wed, May 21, 2008 at 4:56 PM, Dave Parker
<daveparker at flamingthunder.com> wrote:
> On May 21, 2:44 pm, "Jerry Hill" <malaclyp... at gmail.com> wrote:
>
>> My understanding is no, not if you're using IEEE floating point.
>
> Yes, that would explain it.  I assumed that Python automatically
> switched from hardware floating point to multi-precision floating
> point so that the user is guaranteed to always get correctly rounded
> results for +, -, *, and /, like Flaming Thunder gives.  Correct
> rounding and accurate results are fairly crucial to mathematical and
> scientific programming, in my opinion.

However, this is not an issue of language correctness; it's an issue
of specification and/or hardware.  If you look at the link given
earlier, the oddity comes from the x87 being peculiar and performing
80-bit extended-precision arithmetic internally, even though that's
wider than the 64-bit double spec, so intermediate results can round
differently.  I assume this means FT largely performs floating-point
arithmetic in software rather than using the FP hardware (unless, of
course, you do something crazy like compiling to software on some
machines and hardware on others, depending on whether you trust their
functional units).
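
For what it's worth, here's the usual demonstration of what hardware
doubles do with a decimal literal like 0.1 (a minimal sketch; the
exact digits assume ordinary IEEE 754 64-bit doubles, which is what
CPython floats are on typical hardware):

    total = 0.0
    for _ in range(10):
        total += 0.1           # 0.1 has no exact binary representation
    print(total == 1.0)        # False: the rounding error accumulates
    print("%.20f" % 0.1)       # 0.10000000000000000555... (nearest double to 0.1)
    print("%.20f" % total)     # just under 1.0

Any language that uses the hardware FP unit will show the same thing;
it's a property of binary floating point, not of Python.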

The fact is, sometimes it's better to be fast and good enough, and to
use whatever methods you like to manage the accumulation of rounding
error.  When accuracy matters more than the speed of the number
crunching (and don't try to argue that your software implementation
is faster than, or probably even as fast as, gates in silicon), you
use packages like Python's decimal module.
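
For example, the decimal module (in the standard library since Python
2.4) trades hardware speed for exact decimal representation and
user-controlled precision; a minimal sketch:

    from decimal import Decimal, getcontext

    getcontext().prec = 28                  # precision is under your control
    total = sum(Decimal("0.1") for _ in range(10))
    print(total)                            # 1.0 -- exact, unlike the float loop above
    print(Decimal("0.1") + Decimal("0.2"))  # 0.3

You pay for that in speed, which is exactly the trade-off described above.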

Really, you're just trying to advertise your language again.


