Decimals not equalling themselves (e.g. 0.2 = 0.2000000001)

Dan Bishop danb_83 at yahoo.com
Sun Aug 3 17:29:22 EDT 2008


On Aug 3, 9:02 am, CNiall <cni... at icedcerulean.com> wrote:
> I am very new to Python (I started learning it just yesterday), but I
> have encountered a problem.
>
> I want to make a simple script that calculates the n-th root of a given
> number (e.g. 4th root of 625--obviously five, but it's just an example
> :P), and because there is no nth-root function in Python I will do this
> with something like x**(1/n).
>
> However, with some, but not all, decimals, they do not seem to 'equal
> themselves'. This is probably a bad way of expressing what I mean, so
> I'll give an example:
>  >>> 0.5
> 0.5
>  >>> 0.25
> 0.25
>  >>> 0.125
> 0.125
>  >>> 0.2
> 0.20000000000000001
>  >>> 0.33
> 0.33000000000000002
>
> As you can see, the last two decimals are very slightly inaccurate.
> However, it appears that when n in 1/n is a power of two, the decimal
> does not get 'thrown off'.

This is because your computer stores numbers in binary, and those
particular numbers are negative powers of two, so they have exact
binary representations.

decimal 0.5 = binary 0.1
decimal 0.25 = binary 0.01
decimal 0.125 = binary 0.001

However, numbers that have "nice" representations in decimal are often
non-terminating in binary.

decimal 0.2 = binary 0.0011 0011 0011 0011 0011 0011...
decimal 0.33 = binary 0.01 01010001111010111000 01010001111010111000...

It's just like in decimal where 1/3 = 0.333333..., so if you're
limited to a finite number of digits, you get rounding error.
(Interesting that you picked 0.33 as one of your examples.)
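
You can watch the terminating-vs-repeating behaviour yourself with a
few lines of Python.  This is only an illustrative sketch
(binary_digits is a made-up name, not anything built in); it does the
same multiply-by-two long division your computer does:

def binary_digits(numerator, denominator, max_digits=30):
    # Binary long division: each pass peels off the next bit of
    # numerator/denominator (assumed to lie between 0 and 1).
    digits = []
    while numerator and len(digits) < max_digits:
        numerator *= 2
        digits.append(str(numerator // denominator))
        numerator %= denominator
    return "0." + "".join(digits)

>>> binary_digits(1, 2)       # 0.5 terminates
'0.1'
>>> binary_digits(1, 8)       # 0.125 terminates
'0.001'
>>> binary_digits(1, 5)       # 0.2 repeats "0011" forever
'0.001100110011001100110011001100'
>>> binary_digits(33, 100)    # 0.33 repeats as well
'0.010101000111101011100001010001'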

> How might I make Python recognise 0.2 as 0.2
> and not 0.20000000000000001?

With floats, you can't.  As far as your computer is concerned, there's
no such number as 0.2.  The closest you can get with IEEE 754 double
precision is

binary 0.0011001100110011001100110011001100110011001100110011010
(53 significant bits)
= decimal 0.200000000000000011102230246251565404236316680908203125
(exactly)
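
If you want to see that exact value for yourself, newer Pythons will
show it to you directly (as_integer_ratio() and float.hex() need
Python 2.6+, and building a Decimal straight from a float needs
2.7/3.2+):

>>> (0.2).as_integer_ratio()   # the exact fraction the double stores
(3602879701896397, 18014398509481984)
>>> (0.2).hex()                # same value, in hexadecimal floating point
'0x1.999999999999ap-3'
>>> from decimal import Decimal
>>> Decimal(0.2)               # same value, expanded exactly in decimal
Decimal('0.200000000000000011102230246251565404236316680908203125')

Note that the denominator is 2**54, not a power of ten.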

If absolutely necessary, you can use decimal.Decimal("0.2") instead.
However, I do *NOT* recommend always using decimal arithmetic, for the
following reasons:

(1) You still have to deal with round-off errors resulting from finite
precision.  For example:

>>> from decimal import Decimal
>>> Decimal(1) / 3 * 3
Decimal("0.9999999999999999999999999999")
>>> Decimal.sqrt(Decimal(2)) ** 2
Decimal("1.999999999999999999999999999")

(2) There's nothing special about the number ten.  We chose it as our
number base not because of any of its mathematical properties, but
simply because of the number of fingers we have.  The designers of our
computers recognized this fact and chose to optimize the hardware for
base-two instead, therefore...

(3) Decimal arithmetic is at least an order of magnitude slower than
binary arithmetic (a rough timing sketch follows below).
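
If you want to put a number on (3) for your own machine, a timeit
comparison along these lines gives the idea (the exact ratio depends
on your Python version; the C implementation of decimal in 3.3+
narrows the gap considerably):

import timeit

# One million float multiplications vs. one million Decimal ones.
float_time = timeit.Timer("x * y", "x = 0.2; y = 3.7").timeit()
decimal_time = timeit.Timer(
    "x * y",
    "from decimal import Decimal; x = Decimal('0.2'); y = Decimal('3.7')"
).timeit()

print("float:   %.3f seconds" % float_time)
print("Decimal: %.3f seconds" % decimal_time)
print("Decimal is roughly %.0fx slower here" % (decimal_time / float_time))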

If your complaint is merely about all the "noise" digits displayed,
simply use a smaller number of significant digits.

>>> repr(0.2)
'0.20000000000000001'
>>> '%.17g' % 0.2
'0.20000000000000001'
>>> '%.16g' % 0.2
'0.2'
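
Note also that in Python 2.x, str() (and therefore the print
statement) already rounds to 12 significant digits, which hides the
noise in cases like these:

>>> str(0.2)
'0.2'
>>> print 0.2
0.2
>>> str(0.33)
'0.33'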

> This discrepancy is very minor, but it makes the whole n-th root
> calculator inaccurate. :\

I think you may have unrealistic expectations for accuracy.

Unlike ints, which are used for counted (and therefore exact)
quantities, floats are typically used for measured (and therefore
inexact) quantities.

The error in the double-precision approximation of 0.2 is less than
56 parts per quintillion.  There are very few physical
measurements that require that level of accuracy.  Suppose you went to
the butcher's and ordered a pound of beef, and got only
0.9999999999999999444888487687 pound.  Would you even notice?  And if
you did, how exactly would you expect them to compensate you for the
less than a trillionth of a cent that you got shortchanged?
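
As for the n-th root calculator itself, a minimal sketch like the
following (nth_root is just a name I picked; the 1.0/n is there
because in Python 2.x plain 1/4 is integer division and gives 0),
plus a sensible output format, is usually all you need.  The results
shown are what a typical IEEE 754 machine produces:

def nth_root(x, n):
    # Use 1.0/n so the exponent is a float; in Python 2.x, 1/4 == 0.
    return x ** (1.0 / n)

>>> nth_root(625, 4)
5.0
>>> '%.12g' % nth_root(10, 3)   # trim the noise digits for display
'2.15443469003'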


