[Numpy-discussion] numpy and roundoff(?)

Charles R Harris charlesr.harris at gmail.com
Sat Mar 1 16:49:22 EST 2008


2008/3/1 Lisandro Dalcin <dalcinl at gmail.com>:

> Dear Charles,
>
> As I said, I have no time to code the pure Python+numpy nonlinear and
> linear loops, and the matrix-free stuff to mimic the PETSc
> implementation. However, I post the F90 code and the numpy code, and a
> small script for testing with random input. When I have some spare
> time, I'll try to do the complete application in pure python.
>
> Regards,
>
> On 3/1/08, Charles R Harris <charlesr.harris at gmail.com> wrote:
> >
> >
> >
> > On Sat, Mar 1, 2008 at 12:43 PM, Lisandro Dalcin <dalcinl at gmail.com>
> wrote:
> > > Dear all,
> > >
> > > I want to comment on some strange stuff I'm experiencing with numpy.
> > > Please, let me know if this is expected and known.
> > >
> > > I'm trying to solve a model nonlinear PDE, the 2D Bratu problem
> > > (-Laplacian u - alpha * exp(u) = 0, homogeneous boundary conditions),
> > > using simple finite differences with a 5-point stencil.
> > >
> > > I implemented the finite difference scheme in pure numpy, and also in
> > > an F90 subroutine, then wrapped it with f2py.
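
[Editor's note: the poster's code was not attached, so as a minimal sketch, a pure-numpy 5-point-stencil residual for the 2D Bratu problem might look like the following. The function name, array layout, and parameters are assumptions for illustration, not the poster's actual implementation.]

```python
import numpy as np

def bratu_residual(u, alpha, h):
    """Residual F(u) of the 2D Bratu problem -Laplacian(u) - alpha*exp(u) = 0
    on a uniform grid with spacing h and homogeneous Dirichlet boundaries.

    u is an (n, n) array whose boundary entries are held at zero.
    """
    F = np.zeros_like(u)
    # 5-point stencil applied to the interior points only
    F[1:-1, 1:-1] = (
        (4.0 * u[1:-1, 1:-1]
         - u[:-2, 1:-1] - u[2:, 1:-1]
         - u[1:-1, :-2] - u[1:-1, 2:]) / h**2
        - alpha * np.exp(u[1:-1, 1:-1])
    )
    return F
```

For u = 0 the Laplacian term vanishes and the interior residual is simply -alpha, which gives a quick sanity check.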
> > >
> > > Next, I use PETSc (through petsc4py) to solve the problem with a
> > > Newton method, a Krylov solver, and a matrix-free technique for the
> > > Jacobian (that is, the matrix is never explicitly assembled; its
> > > action on a vector is approximated with a first-order finite
> > > difference formula).
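
[Editor's note: the matrix-free Jacobian action described above can be sketched in a few lines; `eps` and the helper name are illustrative assumptions, not PETSc's actual internals, which scale the differencing parameter more carefully.]

```python
import numpy as np

def jacobian_action(F, u, v, eps=1e-8):
    """Approximate the Jacobian-vector product J(u) @ v of a residual
    function F with a first-order finite difference:

        J(u) v  ~=  (F(u + eps*v) - F(u)) / eps

    so the Jacobian matrix is never assembled explicitly.
    """
    return (F(u + eps * v) - F(u)) / eps
```

For an elementwise residual F(u) = u**2 the exact product is 2*u*v, which the approximation reproduces to about sqrt(machine epsilon) accuracy.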
> > >
> > > And then, surprise! The pure-numpy implementation accumulates many
> > > more inner linear iterations (about 25%) over the complete nonlinear
> > > solution loop than the one using the F90 code wrapped with f2py.
> > >
> > > Additionally, PETSc has in its source distribution a similar example,
> > > but implemented in C and using some PETSc utilities for managing
> > > structured grids. In short, this code is in C and completely
> > > unrelated to the previously mentioned code. After running this
> > > example, I get almost the same results as with my petsc4py +
> > > F90 code.
> > >
> > > All this surprised me. It seems that for some reason numpy is
> > > accumulating some roundoff, and this is affecting the accuracy of the
> > > approximated Jacobian, so the linear solvers need more iterations
> > > to converge.
> > >
> > > Unfortunately, I cannot offer a self-contained example, as this code
> > > depends on having PETSc and petsc4py. Of course, I could write the
> > > nonlinear loop and a CG solver myself, but I am really busy.
> > >
> > > Can someone comment on this? Is all this expected? Have any of you
> > > experienced something similar?
> > >
> > >
> > Could you attach the pure numpy solution along with a test case
> > (alpha=?).
> >
>
Here are the differences as well as the values of F1 and F2 at the same
point:

D  = 4.4408920985e-16
F1 = 2.29233319997
F2 = 2.29233319997

So they differ in the least significant bit. Not surprising: I expect the
Fortran compiler may well perform operations in a different order,
accumulate in different places, etc. It might also accumulate in higher
precision registers or round differently depending on hardware and various
flags. The exp functions in Fortran and C might also return slightly
different results. I don't think the differences are significant, but if you
really want to compare results you will need a higher precision solution to
compare against.
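
[Editor's note: the "least significant bit" claim can be checked directly. The reported D is exactly one ulp (unit in the last place) at the magnitude of F1 and F2, and a small illustrative sketch, not the poster's code, shows that merely reordering a summation is enough to perturb the last bits of a result.]

```python
import numpy as np

# The reported difference D is exactly one ulp at the magnitude of F1/F2:
# doubles in [2, 4) are spaced 2**-51 = 4.440892098500626e-16 apart.
D = np.spacing(2.29233319997)

# Reordering a sum perturbs the last bits: numpy's sum uses pairwise
# summation, while a Python loop accumulates strictly left to right.
rng = np.random.default_rng(0)
x = rng.random(1000)
s_pairwise = np.sum(x)
s_naive = 0.0
for xi in x:
    s_naive += xi
rel_diff = abs(s_pairwise - s_naive) / s_pairwise
```

Both sums are "correct" to within a few ulps, yet they are not bit-identical, which is the same kind of harmless discrepancy seen between the numpy and F90 residuals above.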

Chuck
