Precision Tail-off?

Peter J. Holzer hjp-python at hjp.at
Fri Feb 17 14:38:44 EST 2023


On 2023-02-17 08:38:58 -0700, Michael Torrie wrote:
> On 2/17/23 03:27, Stephen Tucker wrote:
> > Thanks, one and all, for your responses.
> > 
> > This is a hugely controversial claim, I know, but I would consider this
> > behaviour to be a serious deficiency in the IEEE standard.
> 
> No matter how you do it, there are always tradeoffs and inaccuracies
> moving from real numbers in base 10 to base 2.

This is phrased ambiguously. So just to clarify:

Real numbers are not in base 10. Or base 2 or base 37 or base e. A
positional system (which uses a base) is just a convenient way to write
a small subset of the real numbers. By writing a number with finitely
many digits in any base you limit yourself to rational numbers (no e or
π or √2), and in fact only to those rational numbers whose denominator
(in lowest terms) divides a power of the base.
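
To make that concrete (Python's fractions module; the specific numerals
are just examples):

    from fractions import Fraction

    # A finite positional numeral is always digits / base**k, so it can
    # only denote a rational number of that restricted form:
    print(Fraction("0.625"))   # 5/8  -- really 625/10**3, then reduced
    print(Fraction("0.1"))     # 1/10

    # e, pi and sqrt(2) are irrational, so no finite string of digits
    # in any base writes them down exactly.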

Converting numbers from one base to another with any finite precision
will generally involve rounding - so do that as little as possible.
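
The usual illustration in CPython (each decimal literal gets rounded
once to the nearest binary64 value before any arithmetic happens):

    # The literals are already rounded:
    print(format(0.1, ".20f"))        # 0.10000000000000000555
    print(format(0.3, ".20f"))        # 0.29999999999999998890

    # Arithmetic then combines the rounding errors:
    print(0.1 + 0.2 == 0.3)           # False
    print(format(0.1 + 0.2, ".20f"))  # 0.30000000000000004441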


> That's just the nature of the math.  Any binary floating point
> representation is going to have problems.

Any decimal floating point representation is also going to have
problems.

There is nothing magical about base 10. It's just what we are used to
(which also means that we are used to the rounding errors and aren't
surprised by them as much).
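
You can get exactly the same kind of surprise from the stdlib decimal
module, which computes in base 10 (default context, 28 significant
digits):

    from decimal import Decimal

    third = Decimal(1) / Decimal(3)
    print(third)           # 0.3333333333333333333333333333
    print(third * 3)       # 0.9999999999999999999999999999
    print(third * 3 == 1)  # False -- 1/3 has to be rounded in base 10, too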

> Also we weren't clear on this, but the IEEE standard is not just
> implemented in software. It's the way your CPU represents floating point
> numbers in silicon.  And in your GPUs (where speed is preferred to
> precision).  So it's not like Python could just arbitrarily do something
> different unless you were willing to pay a huge penalty for speed.

I'm pretty sure that, compared to the interpreter overhead of CPython,
the overhead of a software FP implementation (whether binary or
decimal) would be rather small, maybe negligible.
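
That's a measurable claim, of course. A rough sketch for checking it on
your own machine (decimal standing in for "software FP"; the ratio you
get will depend on CPU and Python version):

    import timeit

    # Hardware binary64 multiplication, driven by the interpreter:
    print(timeit.timeit("a * b", setup="a = 1.1; b = 2.2"))

    # Software decimal multiplication, driven by the same interpreter:
    print(timeit.timeit(
        "a * b",
        setup="from decimal import Decimal; a = Decimal('1.1'); b = Decimal('2.2')",
    ))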


> > Perhaps this observation should be brought to the attention of the IEEE. I
> > would like to know their response to it.
> Rest assured the IEEE committee that formalized the format decades ago
> knew all about the limitations and trade-offs.  Over the years CPUs have
> increased in capacity and now we can use 128-bit floating point numbers

The very first IEEE-compliant processor (the Intel 8087) had an 80-bit
extended type (in fact it did all computations in 80 bits and only
rounded down to 64 or 32 bits when storing the result). By the 1990s,
96- and 128-bit formats were quite common.

> which mitigate some of the accuracy problems by simply having more
> binary digits. But the fact remains that some rational numbers in
> decimal are irrational in binary,

Be careful: "Rational" and "irrational" have a standard meaning in
mathematics and it's independent of base.
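
What the quoted sentence presumably means is that a number like 1/10
has a non-terminating expansion in base 2; the number itself is
rational in every base. In Python terms (fractions again):

    from fractions import Fraction

    exact  = Fraction(1, 10)   # the mathematical value 1/10 -- rational
    stored = Fraction(0.1)     # the binary64 value Python actually stores
    print(stored)              # 3602879701896397/36028797018963968
    print(stored == exact)     # False -- a nearby rational whose
                               # denominator is a power of two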

        hp

-- 
   _  | Peter J. Holzer    | Story must make more sense than reality.
|_|_) |                    |
| |   | hjp at hjp.at         |    -- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |       challenge!"