extend methods of decimal module

Oscar Benjamin oscar.j.benjamin at gmail.com
Wed Feb 19 17:29:27 EST 2014


On 19 February 2014 15:30, Mark H. Harris <harrismh777 at gmail.com> wrote:
> Would it be possible to extend the methods of the decimal module just a bit to include atan(), sin(), cos(), and exp() ?
>
> The module has methods for ln() and sqrt(); and that's great!
>
> I have done some rudimentary searching of the pep history and I'm not finding any pep related to extending the decimal module with other scientific functions.

As Terry has pointed out, the decimal module follows IEEE 854, which
doesn't include those functions.
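
For reference, Decimal instances do already provide exp(), ln(),
log10() and sqrt() as methods; it's really the trigonometric functions
that are missing:

from decimal import Decimal, getcontext

getcontext().prec = 28
x = Decimal(2)
print(x.sqrt())   # 1.414213562373095048801688724
print(x.ln())     # 0.6931471805599453094172321215
print(x.exp())    # 7.389056098930650227230427461
# x.sin(), x.cos(), x.atan() -- no such methods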

I think the module sort of has two big use-cases. On the one hand you
have people doing financial calculations etc. who want basic
arithmetic with decimal rounding. On the other hand you have people
like me who see it as a convenient multi-precision library for when
double precision just isn't enough. The module doesn't fully live up
to my use-case because, as you say, it lacks support for the
transcendental functions.
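
A quick sketch of both use-cases (the numbers are made up for
illustration):

from decimal import Decimal, getcontext, ROUND_HALF_UP

# Use-case 1: financial arithmetic with decimal rounding
price = Decimal('19.99')
vat = (price * Decimal('0.175')).quantize(Decimal('0.01'),
                                          rounding=ROUND_HALF_UP)
print(vat)  # 3.50 -- no binary floating point surprises

# Use-case 2: a multi-precision library
getcontext().prec = 50
print(Decimal(2).sqrt())  # 50 significant digits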

I think, though, that there's a case for having, say, a "dmath" module
that would export a similar interface to the math and cmath modules
but with functions that work with Decimals in full precision. Someone
has already created a pure Python version of this idea here:
https://code.google.com/p/dmath/
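
For example, a dmath-style sin() might look like the following (a
sketch based on the recipe in the decimal module's documentation; like
any naive Maclaurin series it would need argument reduction to stay
accurate for large inputs):

from decimal import Decimal, localcontext

def sin(x):
    """Sine of a Decimal computed by Maclaurin series."""
    with localcontext() as ctx:
        ctx.prec += 2  # guard digits for the intermediate terms
        i, lasts, s, fact, num, sign = 1, 0, x, 1, x, 1
        while s != lasts:  # stop when the sum no longer changes
            lasts = s
            i += 2
            fact *= i * (i - 1)
            num *= x * x
            sign *= -1
            s += num / fact * sign
    return +s  # unary plus rounds back to the caller's precision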

> It is easy to write them in pure python, of course, but I am interested in having the same performance boost with atan(), sin(), cos(), and exp() as I see with the rest of the decimal module on 3.3. Is it possible anytime sooner than later?

Actually, the performance difference isn't as big as you might think.
Given the following exponential function:

from decimal import Decimal, localcontext

# Note: this constant has about 32 significant digits, which caps the
# precision attainable below
e = Decimal('2.7182818284590452353602874713527')

def exp(x):
    '''
        >>> print(exp(Decimal(1)))
        2.718281828459045235360287471
        >>> print(exp(Decimal(2)))
        7.389056098930650227230427461
    '''
    # Work in slightly higher precision
    with localcontext() as ctx:
        ctx.prec += 2
        xi, xf = divmod(x, 1)
        # Use integer exponentiation
        yi = e ** xi
        # Now use the Maclaurin series for the fractional part
        lastyf = -1
        yf = 1
        n = 0
        fact = 1
        xfn = 1
        while yf != lastyf:
            lastyf = yf
            n += 1
            fact *= n
            xfn *= xf
            yf += xfn / fact
    # The multiply below happens outside the context manager, so the
    # result is rounded back down to the caller's precision
    return yi * yf

if __name__ == '__main__':
    # Only run the doctests when executed directly, not when the
    # module is imported by timeit below
    import doctest
    doctest.testmod()
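
Assuming the module above is saved as tmp.py, a quick sanity check
against the built-in method:

from decimal import Decimal
from tmp import exp

x = Decimal('0.123')
print(exp(x))   # pure Python Maclaurin series
print(x.exp())  # built-in C version; should agree to the last digit or so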


I get the following timings:

$ python3.3 -m timeit -s 'from decimal import Decimal as D; d=D("0.123"); from tmp import exp' 'exp(d)'
10000 loops, best of 3: 32.3 usec per loop
$ python3.3 -m timeit -s 'from decimal import Decimal as D; d=D("0.123"); from tmp import exp' 'd.exp()'
10000 loops, best of 3: 26.5 usec per loop

So the pure Python exponential function (using the C-accelerated
decimal module) weighs in at 32 usec and the pure C version at 26
usec. The cost of software decimal arithmetic, even in C, still
dominates the performance here. The difference becomes more noticeable
as you approach an integer value from below, e.g. something like 24.9,
where the fractional part is close to 1 and the Maclaurin series needs
more terms, but it's not more than a factor of 2.
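
For example, the near-integer case can be timed with (output omitted;
the numbers will vary by machine):

$ python3.3 -m timeit -s 'from decimal import Decimal as D; d=D("24.9"); from tmp import exp' 'exp(d)'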

For comparison, here's how it looks on the FPU (from Python's perspective):

$ python3.3 -m timeit -s 'd=0.123; from math import exp' 'exp(d)'
10000000 loops, best of 3: 0.149 usec per loop

So that's 2 orders of magnitude faster. It makes sense if you think
about it in terms of CPU instructions: on x87 it takes about 5
instructions to compute exp in (extended) double precision, at around
20-30 cycles (with the time dominated by the F2XM1 instruction). The
number of integer instructions required to compute the above with the
decimal module is massive in comparison.


Oscar


