[SciPy-user] Numerical stability and Legendre polynomials

Yaroslav Bulatov yaroslavvb at gmail.com
Fri Sep 9 14:29:18 EDT 2005


Thanks for the insight.

The 3-term recursion indeed does much better.
I did some experiments to measure exactly how much better.
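
(For reference, here is a minimal sketch of what an eval_poly_3term helper
could look like; it just applies the Bonnet recursion
(j+1)*P_{j+1}(x) = (2j+1)*x*P_j(x) - j*P_{j-1}(x), and the array handling
is only an assumption on my part.)

def eval_poly_3term(n, x):
    # Evaluate the Legendre polynomial P_n at x (scalar or array)
    # via the three-term (Bonnet) recursion.
    p_prev = 1.0 + 0*x          # P_0(x) = 1, broadcast to the shape of x
    if n == 0:
        return p_prev
    p_curr = 1.0*x              # P_1(x) = x
    for j in range(1, n):
        p_prev, p_curr = p_curr, ((2*j + 1)*x*p_curr - j*p_prev)/(j + 1)
    return p_curr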

Here's a graph of the number of bits needed to represent the maximum
error when you project f(x)=0.5 onto the Legendre polynomial basis:

http://web.engr.oregonstate.edu/~bulatov/research/reports/iet/legendre/numerical-error.png
The green curve is the old method using scipy's functions, and the blue
curve uses the 3-term recursion.


And here's the graph for the 3-term recursion alone, up to the 250th order:
http://web.engr.oregonstate.edu/~bulatov/research/reports/iet/legendre/numerical-error2.png

For some reason the error explodes around the 210th order.

Here's the code used to generate the error values:

# Assumes the usual "from scipy import *" (or equivalent) so that zeros,
# Float, arange, log and integrate are in scope; eval_poly_3term(n, x)
# evaluates P_n(x) with the 3-term recursion.
k = 30
vals = zeros(200, Float)      # running partial sum on the grid arange(-1, 1, 0.01)
errors = zeros(k, Float)      # errors[j] = bits of error after adding term j
for j in range(k):
    ff = lambda x: eval_poly_3term(j, x)
    coef = (2.*j + 1)/2 * integrate.quad(ff, -1, 1)[0]
    vals += coef*eval_poly_3term(j, arange(-1, 1, 0.01))
    # log2 of the max absolute deviation, floored at 2**-53 and shifted by 53 bits
    errors[j] = log(max(max(abs(vals - 1)), 2**-53))/log(2) + 53
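
(For context, the coefficient computed in the loop is the standard Legendre
projection formula

    c_j = (2j+1)/2 * \int_{-1}^{1} f(x) P_j(x) dx,

which comes from the orthogonality relation
\int_{-1}^{1} P_i(x) P_j(x) dx = 2/(2i+1) * delta_{ij}.)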

> In [33]: for n in range(30):
>     ....:     pn = scipy.special.legendre(n)
>     ....:     print pn(-1.0)-((-1)**n)*pn(1.0)
>     ....:
> 0.0
> 0.0
> 0.0
> 0.0
> -1.11022302463e-15

For some reason, when I execute the same code I get:
0.0
0.0
0.0
8.881784197e-016
-3.10862446895e-015

I'm using 64-bit arithmetic (Python floats) and scipy version 0.3.2; why do
you get higher accuracy?
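
A side-by-side version of that check might make the comparison easier; this
is only a sketch (it reuses the eval_poly_3term sketch above, so the
recursion column is illustrative rather than code either of us actually ran):

import scipy.special

for n in range(30):
    pn = scipy.special.legendre(n)              # poly1d in the monomial basis
    err_poly = pn(-1.0) - ((-1)**n)*pn(1.0)     # parity defect of the expanded polynomial
    err_rec  = eval_poly_3term(n, -1.0) - ((-1)**n)*eval_poly_3term(n, 1.0)
    print n, err_poly, err_rec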

-- 
Yaroslav Bulatov
bulatov at cs.oregonstate.edu
Dearborn 102
Oregon State University
Corvallis, OR



