Precision Tail-off?

Mats Wichmann mats at wichmann.us
Fri Feb 17 15:57:06 EST 2023


On 2/17/23 11:42, Richard Damon wrote:
> On 2/17/23 5:27 AM, Stephen Tucker wrote:

> The key factor here is that IEEE floating point stores numbers in 
> BINARY, not DECIMAL, so multiplying by 1000 will change the 
> representation of the number, and thus the possible resolution errors.
> 
> Store your numbers in IEEE DECIMAL floating point, and the variations 
> from multiplying by powers of 10 go away.
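
To make that concrete (a quick CPython session on ordinary IEEE 754
binary64 hardware, using the stdlib decimal module that comes up below):

>>> 0.1 + 0.2                        # binary: 0.1 is stored inexactly
0.30000000000000004
>>> sum([0.1] * 10)                  # the error drifts as values accumulate
0.9999999999999999
>>> from decimal import Decimal
>>> Decimal("0.1") + Decimal("0.2")  # decimal: 0.1 is stored exactly
Decimal('0.3')
>>> Decimal("0.1") * 1000            # scaling by a power of 10 is exact
Decimal('100.0')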

The development of the original IEEE standard eventually led to 
consistent implementations in hardware (when chips implement floating 
point at all, which embedded/IoT-class parts in particular often don't), 
aligned with how languages and compilers treat floating point. That's 
been a really successful standard, whatever one might feel about the 
tradeoffs. Standards are all about finding a mutually acceptable way 
forward, once people admit there is no One Perfect Answer.

Newer editions of 754 (since 2008) have added this decimal floating-point 
representation, which is supported in software by, for example, the IBM 
and Intel decimal floating-point libraries.  Hardware support has been 
slower to arrive: the only implementations I've heard of are in the IBM 
z series (mainframes), and somebody else mentioned POWER, though I'd 
never seen that myself. It's possible some of the GPU lines may be going 
in this direction.

As far as Python goes... the decimal module has this comment:

 > It is a complete implementation of Mike Cowlishaw/IBM's General 
 > Decimal Arithmetic Specification.

Cowlishaw was the editor of the 2008 and 2019 editions of IEEE 754, fwiw.
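
Since the module follows that spec, working precision is a property of 
the arithmetic context rather than of the storage format, so you can 
dial it up or down as needed. A small sketch:

>>> from decimal import Decimal, getcontext
>>> getcontext().prec = 50    # significant digits of working precision
>>> Decimal(1) / Decimal(3)
Decimal('0.33333333333333333333333333333333333333333333333333')
>>> getcontext().prec = 6
>>> Decimal(1) / Decimal(3)
Decimal('0.333333')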

And... this topic as a whole comes up over and over again, just about 
everywhere.  See Stack Overflow for some amusement.

