[Python-Dev] Changes to decimal.py

Fredrik Johansson fredrik.johansson at gmail.com
Sat Apr 14 02:48:13 CEST 2007


On 4/13/07, Tim Peters <tim.peters at gmail.com> wrote:
> One low-effort approach is to use a general root-finding algorithm and
> build ln(x) on top of exp() via (numerically) solving the equation
> exp(ln(x)) == x for ln(x).  That appears to be what Don Peterson did
> in his implementation of transcendental functions for Decimal:
>
>     http://cheeseshop.python.org/pypi/decimalfuncs/1.4
>
> In a bit of disguised form, that appears to be what Brian Beck and
> Christopher Hesse also did:
>
>     http://cheeseshop.python.org/pypi/dmath/0.9
>

Whatever they do, they are very slow :-) I get the following timings
with decimalfuncs (dmath is even slower):

exp(Decimal(1)): 0.02 seconds at 28 digits, 0.2 seconds at 100 digits
ln(Decimal(2)): 0.1 seconds at 28 digits, 0.6 seconds at 100 digits

A while back I implemented exp and ln using fixed-point integer
arithmetic, which unsurprisingly turned out to be much faster than
working with Decimals. If I convert a decimal input to fixed-point,
calculate exp or log, and convert the output back to decimal, I get
the following timings:

exp(1): 0.0008 seconds at 28 digits, 0.001 seconds at 100 digits
ln(2): 0.001 seconds at 28 digits, 0.007 seconds at 100 digits

[Note: I didn't use the exact numbers 1 and 2 for these timings; I
added noise digits to make sure the decimal conversion code wasn't
taking any shortcuts.]
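
The conversion layer is nothing fancy. A minimal sketch of the kind of
round trip I mean (binary fixed point, with names of my own choosing;
this is not the exact code I timed, and it assumes a finite Decimal):

from decimal import Decimal

def to_fixed(x, bits):
    # Return the floor of x * 2**bits as a plain integer, computed
    # exactly so no digits are lost to the current context precision.
    sign, digits, exponent = x.as_tuple()
    mantissa = int(''.join(map(str, digits)))
    if sign:
        mantissa = -mantissa
    scaled = mantissa << bits           # x * 2**bits == scaled * 10**exponent
    if exponent >= 0:
        return scaled * 10 ** exponent
    return scaled // 10 ** (-exponent)  # floor division; fine for a sketch

def from_fixed(n, bits):
    # The division rounds to whatever the caller's Decimal context says.
    return Decimal(n) / Decimal(1 << bits)

All the heavy lifting then happens on plain Python longs.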

I use the Taylor series for exp(x) but first divide x by 2^32 to
improve the convergence rate and finally square the sum 32 times. (32
could be replaced by any number; 32 just happens to be fast for a
large range of precisions on my system.) Looking more closely at the
28-digit exp calculation:

0.0005 s is spent converting the Decimal to fixed-point
0.0001 s is spent summing the Taylor series (integer arithmetic)
0.0002 s is spent converting the fixed-point number back to Decimal

For comparison, multiplying two 28-digit decimals takes 0.0003
seconds. So, computing exp this way is only about twice as expensive
as multiplication at 28-digit precision; at 100-digit precision, I
find that they are equally expensive.
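
In rough sketch form, the series-plus-squaring part looks something
like this (plain integer arithmetic, names of my own choosing, not the
exact code I timed; it assumes a nonnegative argument of moderate size,
and real code would carry a few dozen extra guard bits, since each
squaring costs roughly one bit of accuracy):

def exp_fixed(x, bits, k=32):
    # x is a nonnegative fixed-point integer meaning x / 2**bits;
    # the result is returned in the same format.
    one = 1 << bits
    x >>= k                      # exp(x) = exp(x / 2**k) ** (2**k)
    # Taylor series for the reduced argument; it converges very
    # quickly because x / 2**k is tiny.
    total = term = one
    n = 1
    while term:
        term = ((term * x) >> bits) // n   # term *= x / n
        total += term
        n += 1
    # Undo the reduction by squaring k times.
    for _ in range(k):
        total = (total * total) >> bits
    return total

Combined with the conversion sketch above, exp of a Decimal x becomes
roughly from_fixed(exp_fixed(to_fixed(x, bits), bits), bits), with
bits padded somewhat beyond the target precision.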

This approach assumes that the input has magnitude close to 1. For
very large or very small numbers, you first have to use the functional
properties of exp to reduce the argument to something with magnitude
close to 1, and then undo the reduction afterwards; that bookkeeping
is probably best (at least most easily) done entirely in Decimal
arithmetic.
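
One decimal-friendly way to do that reduction is to pull out a power
of 10 (just a sketch of what I have in mind; dec_exp is assumed to be
a Decimal-in/Decimal-out exp such as the one sketched above, and ln10
a precomputed ln(10) constant carrying enough guard digits):

from decimal import Decimal

def exp_any(x, dec_exp, ln10):
    # Write x = n*ln(10) + r with |r| < ln(10), so exp(r) has magnitude
    # close to 1; the 10**n factor is then a pure exponent shift.
    n = int(x / ln10)            # truncates toward zero
    r = x - n * ln10             # ln10 (and the context) need guard
                                 # digits when n is large
    return dec_exp(r) * Decimal('1E%d' % n)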

For ln, I use Newton's method to invert exp, with an initial estimate
from math.log, doubling the working precision at each iteration so
that only one full-precision evaluation of exp is necessary. Using
fixed-point internally during the Newton iteration, ln should be at
most twice as slow as exp.
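
In very rough sketch form, with dec_exp again being a Decimal-level
exp that respects the current context precision (and assuming x is
positive and within float range, so math.log can provide the seed):

import math
from decimal import Decimal, getcontext

def dec_ln(x, dec_exp):
    # Newton's method for f(y) = exp(y) - x, i.e. y <- y + x/exp(y) - 1,
    # with the working precision roughly doubled at each step so that
    # only the last step evaluates exp at full precision.
    prec = getcontext().prec
    ladder = [prec + 5]              # target precision plus guard digits
    while ladder[-1] > 30:
        ladder.append(ladder[-1] // 2)
    # Starting estimate, good to ~15 digits, courtesy of the C library.
    y = Decimal(repr(math.log(float(x))))
    for p in reversed(ladder):
        getcontext().prec = p
        y = y + x / dec_exp(y) - 1
    getcontext().prec = prec
    return +y                        # unary plus rounds to the caller's precision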

Of course, this could all be irrelevant if the decimal module ever
gets replaced by a C implementation...

Fredrik

