Math errors in python

Alex Martelli aleaxit at yahoo.com
Sun Sep 19 12:41:49 EDT 2004


Chris S. <chrisks at NOSPAM.udel.edu> wrote:
   ...
> Sqrt is a fair criticism, but Pi equals 22/7, exactly the form this 

Of course it doesn't.  What a silly assertion.

> arithmetic is meant for. Any decimal can be represented by a fraction,

And pi can't be represented by either (if you mean _finite_ decimals and
fractions).
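A quick check at any Python prompt shows how early the two part ways (a
sketch; the formatting call just keeps the output identical across
versions):

    >>> import math
    >>> "%.6f" % (22.0 / 7)      # the 22/7 approximation
    '3.142857'
    >>> "%.6f" % math.pi         # pi itself
    '3.141593'

They already disagree in the third decimal place.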

> yet not all fractions can be represented by decimals. My point is that
> such simple accuracy should be supported out of the box.

In Python 2.4, decimal computations are indeed "supported out of the
box", although you do explicitly have to request them (the default
remains floating-point).  In 2.3, you have to download and use any of
several add-on packages (decimal computations and rational ones have
very different characteristics, so you do have to choose) -- big deal.
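For instance, with 2.4's decimal module (a sketch at the interactive
prompt; the explicit import is the "request" in question):

    >>> 1.1 + 2.2 == 3.3                    # the default: binary floating point
    False
    >>> from decimal import Decimal
    >>> Decimal("1.1") + Decimal("2.2") == Decimal("3.3")
    True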


> > While I'd love to compute with all those numbers in infinite
> > precision, we're all stuck with FINITE sized computers, and hence with
> > the inaccuracies of finite representations of numbers.
> 
> So are our brains, yet we somehow manage to compute 12.10 + 8.30 
> correctly using nothing more than simple skills developed in 

Using base 10, sure.  Or, using fractions, even something that decimals
would not let you compute finitely, such as 1/7+1/6.
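Today that means reaching for an add-on such as gmpy; the sketch below
uses the fractions module that only made it into the standard library
later (2.6), purely to show the exact result:

    >>> from fractions import Fraction
    >>> Fraction(1, 7) + Fraction(1, 6)     # exact: no finite decimal exists
    Fraction(13, 42)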

> grade-school. You could theoretically compute an infinitely long 
> equation by simply operating on single digits,

Not in finite time, you couldn't (excepting a few silly cases where the
equation is "infinitely long" only because of some rule that _can_ be
finitely expressed, so you don't even have to LOOK at all of the
equation to solve it [which is what I guess you mean by "compute"...?]
-- if you have to LOOK at all of the equation, and it's infinite, you
can't get done in finite time).

> yet Python, with all of 
> its resources, can't overcome this hurtle?

The hurdle of decimal arithmetic, you mean?  Download Python 2.4 and
play with decimal to your heart's content.  Or do you mean fractions?
Then download gmpy and ditto.  There are also packages for symbolic
computation and even more exotic kinds of arithmetic.

In practice, with the sole exception of monetary computations (which may
often be constrained by law, or at the very least by customary
practice), there is no real-life use in which the _accuracy_ of floating
point isn't ample.  There are nevertheless lots of traps in arithmetic,
but switching to forms of arithmetic different from float doesn't really
make all the traps magically disappear, of course.
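For instance, with decimal's default 28-digit precision, dividing by
three and multiplying back still doesn't round-trip -- a sketch:

    >>> from decimal import Decimal
    >>> Decimal(1) / Decimal(3) * Decimal(3) == Decimal(1)   # 0.999...9, 28 nines
    False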


> However, I understand Python's limitation in this regard. This 
> inaccuracy stems from the traditional C mindset, which typically 
> dismisses any approach not directly supported in hardware. As the FAQ

Ah, I see, a case of "those who can't be bothered to learn a LITTLE
history before spouting off" etc etc.  Python's direct precursor, the
ABC language, used unbounded-precision rationals.  As a result (obvious
to anybody who bothers to learn a little about the inner workings of
arithmetic), the simplest-looking string of computations could easily
consume all the memory at your computer's disposal, and then some, and
apparently unbounded amounts of time.  It turned out that users object,
most of the time, to having some apparently trivial computation take
hours, rather than seconds, in order to be unboundedly precise rather
than, say, precise to "just" a couple hundred digits (far more digits
than you need to count the number of atoms in the Galaxy).  So,
unbounded rationals as a default are out -- people may sometimes SAY
they want them, but in fact, in an overwhelming majority of the cases,
they actually do not (oh yes, people DO lie, first of all to
themselves:-).
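If you want to see the effect concretely, here's a rough sketch (using
the fractions module from later Pythons, and an arbitrary made-up
recurrence) of how fast exact rationals balloon under an
innocent-looking loop:

    >>> from fractions import Fraction
    >>> x = Fraction(1, 3)
    >>> for _ in range(15):
    ...     x = x * x + Fraction(1, 3)    # fifteen harmless-looking updates
    ...
    >>> len(str(x.denominator))           # digits in the exact denominator
    15635

The denominator's size roughly doubles at every step; a few dozen more
iterations and you're out of memory.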

As for decimals, that's what a very-high level language aiming for a
niche very close to Python used from the word go.  It got started WAY
before Python -- I was productively using it over 20 years ago -- and
had the _IBM_ brand on it, which at the time pretty much meant the
thousand-pound gorilla of computers.  So where is it now, having had
all of these advantages (started years before, had IBM behind it, AND
was totally free of "the traditional C mindset", which was very far from
traditional at the time, particularly within IBM...!)...?

Googlefight is a good site for this kind of comparison... try:

<http://www.googlefight.com/cgi-bin/compare.pl?q1=python&q2=rexx&B1=Make+a+fight%21&compare=1&langue=us>

and you'll see...:
"""
Number of results on Google for the keywords python and rexx:

python
(10 300 000 results)
versus
rexx
(419 000 results)

The winner is: python
"""

Not just "the winner", an AMAZING winner -- over TWENTY times more
popular, despite all of Rexx's advantages!  And while there are no doubt
many fascinating components to this story, a key one is among the pearls
of wisdom you can read by doing, at any Python interactive prompt:

    >>> import this

and it is: "practicality beats purity".  Rexx has always been rather
puristic in its adherence to its principles; Python is more pragmatic.
It turns out that this is worth a lot in the real world.  Much the same
way, say, C ground PL/I into the dust.  Come to think of it, Python's
spirit is VERY close to C (4 and 1/2 of the 5 principles listed as
"the spirit of C" in the ANSI C Standard's introduction are more closely
followed by Python than by other languages which borrowed C's syntax,
such as C++ or Java), while Rexx does show some PL/I influence (not
surprising for an IBM-developed language, I guess).

Richard Gabriel's famous essay on "Worse is Better", e.g. at
<http://www.jwz.org/doc/worse-is-better.html>, has more, somewhat bitter
reflections in the same vein.  

Python never had any qualms in getting outside the "directly supported
in hardware" boundaries, mind you.  Dictionaries and unbounded precision
integers are (and have long been) Python mainstays, although neither the
hardware nor the underlying C platform has any direct support for
either.  For non-integer computations, though, Python has long been well
served by relying on C, and nowadays typically the HW too, to handle
them, which implied the use of floating-point; and leaving the messy
business of implementing the many other possibly useful kinds of
non-integer arithmetic to third-party extensions (many in fact written
in Python itself -- if you're not in a hurry, they're fine, too).
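The unbounded-integer side of that is easy to see at any prompt (a
quick sketch, contrasting exact integers with floats):

    >>> len(str(2 ** 1000))                   # a 302-digit integer, exact
    302
    >>> (2 ** 1000 + 1) - 2 ** 1000 == 1      # exact, whatever the magnitude
    True
    >>> (2.0 ** 1000 + 1) - 2.0 ** 1000 == 1  # float silently drops the 1
    False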

With Python 2.4, somebody finally felt enough of an itch regarding the
issue of getting support for decimal arithmetic in the Python standard
library to go to the trouble of scratching it -- as opposed to just
spouting off on a mailing list, or even just implementing what they
personally needed as just a third-party extension (there are _very_
high hurdles to jump to get your code into the Python standard library,
so it takes strong motivation to do that rather than just release your
own extension to the public).

> states, this problem is due to the "underlying C platform". I just find
> it funny how a $20 calculator can be more accurate than Python running
> on a $1000 Intel machine.

You can get a calculator much cheaper than that these days (and "Intel
machines" not too far out of the mainstream sell for well under half
your stated price, as well as for several times it).  It's pretty
obvious that the
price of the hardware has nothing to do with that "_CAN_ be more
accurate" issue (my emphasis) -- which, incidentally, remains perfectly
true even in Python 2.4: it can be less, more, or just as accurate as
whatever calculator you're targeting, since the precision of decimal
computation is one of the aspects you can customize specifically...
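For instance (a sketch; the context precision is a single setting you
can dial to match, or beat, whatever the calculator does):

    >>> from decimal import Decimal, getcontext
    >>> getcontext().prec = 8                 # emulate an 8-digit calculator
    >>> str(Decimal(1) / Decimal(7))
    '0.14285714'
    >>> getcontext().prec = 50                # or leave any calculator behind
    >>> str(Decimal(1) / Decimal(7))
    '0.14285714285714285714285714285714285714285714285714'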


Alex


