[SciPy-dev] Numerical stability of special functions (here: chebyu)

Charles R Harris charlesr.harris at gmail.com
Mon Apr 24 12:26:31 EDT 2006


Hi,

On 4/24/06, Norbert Nemec <Norbert.Nemec.list at gmx.de> wrote:
>
> Hi there,
>
> I just had my first encounter with Chebyshev polynomials of the second
> kind. A lengthy calculation could be solved very elegantly using these
> special functions. In the end, I wanted to plot the solution using
> scipy & numpy & matplotlib. To my disappointment, the results showed strong
> numerical errors, so I had to dig deeper.
>
> I quickly found out that the call chebyu(N)(x) itself returned bad
> results for N>30 and complex x ~= +/- 1.0
>
> I checked the implementation of chebyu and found that it basically first
> calculates the coefficients from the generating function and then adds
> up the polynomial. In the numerical range in question, this means adding
> up huge numbers with alternating sign, producing bad numerical errors.
>
> After a long odyssey of optimizations (I even considered using gmpy to
> simply increase the internal precision) I found a ridiculously simple
> solution:
>
> ----------------
> def chebyu(N, x):
>     # forward recurrence U_{n+1}(x) = 2*x*U_n(x) - U_{n-1}(x),
>     # starting from U_{-1} = 0 and U_0 = 1
>     previous = 0.0
>     current = 1.0
>     for n in range(N):
>         new = 2*x*current - previous
>         previous = current
>         current = new
>     return current
> ----------------
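
(For real x in [-1, 1] the recursion above can be sanity-checked against the
closed form U_N(cos t) = sin((N+1) t) / sin t; the short sketch below
redefines the recurrence only so it runs on its own, and the names are
illustrative.)

----------------
import math

def chebyu_rec(N, x):
    # same forward recurrence as in the snippet above
    previous, current = 0.0, 1.0
    for _ in range(N):
        previous, current = current, 2 * x * current - previous
    return current

# closed form check: U_N(cos t) = sin((N + 1) * t) / sin(t)
N, t = 40, 0.3
print(chebyu_rec(N, math.cos(t)), math.sin((N + 1) * t) / math.sin(t))
----------------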



Chebyshev polynomials are notorious for roundoff error when evaluated as
power series; power series tend to be ill conditioned and often lose
precision once the degree gets up around 5-10, which is one reason to use
Chebyshev series instead. Chebyshev series can be evaluated using inverse
recursion (a.k.a. Clenshaw's recurrence), which is a generalized Horner's
method. You will probably want to do that at some point; it is covered in
several texts, Numerical Recipes for instance.
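
A minimal sketch of Clenshaw's recurrence for a Chebyshev series
sum_k c[k]*T_k(x), in plain Python (the function name is illustrative; this
is not the scipy implementation):

----------------
def clenshaw_cheb(c, x):
    # Evaluate sum_{k=0}^{N} c[k] * T_k(x) by inverse recursion,
    # working down from the highest-order coefficient.
    b1, b2 = 0.0, 0.0          # b_{k+1}, b_{k+2}
    for ck in reversed(c[1:]):
        # b_k = 2*x*b_{k+1} - b_{k+2} + c_k
        b1, b2 = 2 * x * b1 - b2 + ck, b1
    return x * b1 - b2 + c[0]
----------------

For example, clenshaw_cheb([0.0, 0.0, 1.0], x) evaluates T_2(x) = 2*x**2 - 1.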

The normal Chebyshev recursion may not be suitable for very large values of
N due to roundoff error. There is a trick used to generate high-precision
sin/cos values for FFTs by recursion that could probably be adapted to those
cases. I've normally constrained myself to N < 100, since the Chebyshev
coefficients tend to decrease *very* rapidly; in those cases there is a loss
of float64 precision, with the error going from roughly 1e-17 to 1e-14. The
Chebyshev functions of the second kind are the normal Chebyshev functions
over the interval [-2,2] scaled to [-1,1], so the same precision observation
should hold over at least part of the interval, [-.5,.5]. Outside of that
interval the solution you want to compute from the recursion is the
increasing one, so you should be fine.
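
For what it's worth, that sin/cos trick in the form used for FFT twiddle
factors looks roughly like this (a sketch; the function name is mine):

----------------
import math

def recur_sin_cos(theta, n):
    # Generate (cos(k*theta), sin(k*theta)) for k = 0..n by recursion.
    # Writing cos(theta) - 1 as wpr = -2*sin(theta/2)**2 avoids the
    # cancellation that would otherwise eat the precision for small theta.
    wpr = -2.0 * math.sin(0.5 * theta) ** 2
    wpi = math.sin(theta)
    wr, wi = 1.0, 0.0
    out = []
    for _ in range(n + 1):
        out.append((wr, wi))
        # angle-addition step; the tuple assignment uses the old wr, wi
        wr, wi = wr + wr * wpr - wi * wpi, wi + wi * wpr + wr * wpi
    return out
----------------

Since U_n(cos t) = sin((n+1) t) / sin t, the same idea could presumably be
carried over to the Chebyshev recursion on [-1, 1].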

Chuck