1-0.95

Chris Angelico rosuav at gmail.com
Wed Jul 2 20:16:09 EDT 2014


On Thu, Jul 3, 2014 at 6:00 AM, Marko Rauhamaa <marko at pacujo.net> wrote:
> Steven D'Aprano <steve+comp.lang.python at pearwood.info>:
>
>>>    >>> Rational(2).sqrt() * Rational(2).sqrt() == Rational(2)
>>>    False
>>
>> Square root of 2 is not a rational number.
>
> Nobody said it was. It's just that even "arbitrary-precision" rational
> numbers wouldn't free you from the issues of floating-point numbers. The
> Decimal number class won't do it, either, of course.

They do free you from the issues of floating point. In exchange, they
give you the problems of rationals. (Most notably, addition becomes
very slow. Remember grade school and learning to add/subtract vulgar
fractions?)
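
A quick sketch with the stdlib's fractions.Fraction (my example here,
standing in for the Rational class quoted above) shows both halves of
that trade:

from fractions import Fraction

# The thread's starting point: with binary floats, 1 - 0.95 != 0.05.
print(1 - 0.95)                              # 0.050000000000000044
print(Fraction(1) - Fraction(95, 100) == Fraction(5, 100))  # True: exact

# The price: denominators grow as unrelated fractions are combined, so
# every addition means a GCD and ever-larger integer arithmetic.
acc = Fraction(0)
for d in range(2, 50):
    acc += Fraction(1, d)    # just like adding vulgar fractions by hand
print(acc.denominator)       # the denominator has ballooned already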

> On the other hand, floating-point numbers are perfect whenever you deal
> with science and measurement. And when you deal with business (= money),
> integers are the obvious choice.

Why are floats perfect for science, but not for other situations?

Integers are great if you can guarantee your values will fit within
them - which, when you're talking about money, presumably means
you're working in a fixed-point system (eg the popular
something-and-cents notation, GBP with its pounds and pence included:
you store the currency in cents, which is fixed point with two places
after the decimal). What about when you have to work with fractions
of a cent? Ah! I know! Let's have two integers - one for the number
of dollars/euros/pounds/etc, and another that says how many parts out
of 2**32 of the next whole unit we have!

book = (29, 2147483648) # $29.50
airfare = (2468, 2920577761) # $2468.68
interest = (1, 616212701) # $1.1434732

See, integers are the obvious choice for money!
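
To be concrete about how clumsy that gets, here's what merely adding
two of those pairs would look like - a throwaway sketch of the joke
above, not a serious proposal:

def add_money(a, b):
    # Each amount is (whole_units, numerator_out_of_2**32).
    whole = a[0] + b[0]
    frac = a[1] + b[1]
    if frac >= 2**32:        # manual carry out of the fractional field
        whole += 1
        frac -= 2**32
    return (whole, frac)

print(add_money(book, airfare))  # (2498, 773094113), roughly $2498.18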

> I would venture to say that the real applications for Decimal are very
> rare. In practice, I'm afraid, people with rather a weak understanding
> of numbers and computation might gravitate toward Decimal unnecessarily.

Your second part? Possibly. There was some discussion about an import
hook that would turn all unmarked non-integer literals into Decimals
rather than floats, and it was decided that it wouldn't be worth it.
But there definitely are real uses for Decimal, and quite a lot of
them - just as there were very solid reasons for REXX's numeric
implementation being fairly similar. (Although REXX's is -
unsurprisingly, given that Python has had another couple of decades
of development - not as sophisticated. For instance, REXX doesn't
have numeric contexts, so all changes to precision etc. are global.)
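
For the curious, those contexts look like this - a minimal sketch
using the stdlib decimal module, with numbers picked by me purely to
show the effect:

from decimal import Decimal, getcontext, localcontext

print(getcontext().prec)            # 28 by default - the "global" setting

with localcontext() as ctx:         # unlike REXX, changes can be scoped
    ctx.prec = 6
    print(Decimal(1) / Decimal(7))  # 0.142857

print(Decimal(1) / Decimal(7))      # back to 28 significant digits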

Numbers can't be represented in a computer in any way that doesn't
potentially demand infinite storage. There are two basic techniques
for storing numbers: ratios, possibly where the denominator is
selected from a very restricted set (IEEE floating point is (usually)
this - the denominator must be a power of two), and algebraic
symbols, where you represent sqrt(2) as "\u221a2" and evaluate to an
actual number only at the very end, if ever (which gets around the
problems of intermediate rounding, and allows perfect cancelling out -
pow("\u221a2",8) == "16").

No matter what system you use, you're eventually going to get down to
a choice: retain all the precision you possibly can, and maybe use
infinite or near-infinite storage; or throw away the bits that aren't
going to affect the calculation significantly, and keep the object
size down to something reasonable.

I do seem to recall, back in maths class, being allowed to use either
22/7 or 3.14 for π, because the difference between either of those
and the true value was not significant :) It's the same in computing,
except that it's common to go as far as 3.141592653589793 (a number I
memorized out of the GW-BASIC manual, back when I first started
programming with floating point). Short of actually running on a
Turing machine, your program is always going to be bound by these
restrictions.
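
By way of illustration (my examples, not anything from earlier in the
thread): Fraction will show you the power-of-two ratio hiding inside
a binary float, and math.pi shows how far the maths-class
approximations drift:

from fractions import Fraction
import math

# A binary float really is a ratio with a power-of-two denominator:
print(Fraction(0.95).denominator)        # 4503599627370496 == 2**52

# Throwing away the bits that don't matter, maths-class style:
print(abs(22/7 - math.pi))               # ~0.00126
print(abs(3.141592653589793 - math.pi))  # 0.0 - good enough for GW-BASIC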

ChrisA


