[SciPy-Dev] Powell failure on MingW windows build - any insights?

Matthew Brett matthew.brett at gmail.com
Tue Apr 22 15:51:37 EDT 2014


Hi,

On Mon, Apr 21, 2014 at 6:05 PM, Matthew Brett <matthew.brett at gmail.com> wrote:
> Hi,
>
> I'm experimenting with Carl Kleffner's MingW-w64 builds of numpy and scipy.
>
> Numpy now passes all tests for me (building with Carl's toolchain and
> ATLAS 64-bit).
>
> Scipy fails 2 tests only, both using the Powell routine.
>
> Errors look like this:
>
> ======================================================================
> FAIL: Powell (direction set) optimization routine
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "D:\devel\py27\lib\site-packages\nose\case.py", line 197, in runTest
>     self.test(*self.arg)
>   File "D:\devel\py27\lib\site-packages\scipy\optimize\tests\test_optimize.py",
> line 209, in test_powell
>     atol=1e-14, rtol=1e-7)
>   File "D:\devel\py27\lib\site-packages\numpy\testing\utils.py", line
> 1181, in assert_allclose
>     verbose=verbose, header=header)
>   File "D:\devel\py27\lib\site-packages\numpy\testing\utils.py", line
> 644, in assert_array_compare
>     raise AssertionError(msg)
> AssertionError:
> Not equal to tolerance rtol=1e-07, atol=1e-14
>
> (mismatch 100.0%)
>  x: array([[ 0.75077639, -0.44156936,  0.47100962],
>        [ 0.75077639, -0.44156936,  0.48052496],
>        [ 1.50155279, -0.88313872,  0.95153458],...
>  y: array([[ 0.72949016, -0.44156936,  0.47100962],
>        [ 0.72949016, -0.44156936,  0.48052496],
>        [ 1.45898031, -0.88313872,  0.95153458],...
>
> Does anyone have any insight as to what might be going on here?  Same
> failure on Windows 32 and 64 bit, with ATLAS or OpenBLAS...

To follow up:

The test_powell test checks that the parameter estimates for an
example function nearly match those recorded from a previous
reference run, for iterations 34 through 38.  In effect it asserts
that the optimization path is identical up to iteration 38.
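For reference, here's a rough sketch of how that kind of
path-recording check works.  This is not the actual test code; I'm
using the Rosenbrock function from scipy.optimize as a stand-in for
the example function:

    import numpy as np
    from scipy import optimize

    trace = []

    def rosen_logged(x):
        # record every point the optimizer evaluates
        trace.append(np.asarray(x).copy())
        return optimize.rosen(x)

    optimize.minimize(rosen_logged, [-1.2, 1.0], method='Powell')

    # The fragile part: comparing an arbitrary window of the trace
    # (evaluations 34-38 here) to hard-coded values from an earlier
    # run requires the whole path up to that point to be bit-for-bit
    # reproducible.
    reference = np.array(trace[34:39])  # stand-in for stored values
    np.testing.assert_allclose(np.array(trace[34:39]), reference,
                               atol=1e-14, rtol=1e-7)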

There is a separate test above that one, which checks that the
solution is nearly the same and that the number of iterations is
similar.

In the test code, there is this comment:

        # However, some leeway must be added: the exact evaluation
        # count is sensitive to numerical error, and floating-point
        # computations are not bit-for-bit reproducible across
        # machines, and when using e.g. MKL, data alignment
        # etc. affect the rounding error.

In fact, what seems to have happened in this test in my case is
that tiny O(eps) differences in the result of the exp function have
led to a divergent path earlier than step 34.  The end solution is
the same, and the number of iterations is similar (those tests do
pass).
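Here's a hedged illustration of the effect (again using the
Rosenbrock function as a stand-in, so not the failing test itself):
nudge the objective by a single O(eps) factor, as a slightly
different exp implementation would, and the iterate paths may
diverge even though both runs reach the same minimum:

    import numpy as np
    from scipy import optimize

    eps = np.finfo(float).eps

    def f(x):
        return optimize.rosen(x)

    def f_nudged(x):
        # an O(eps) rounding difference, like a slightly different exp()
        return optimize.rosen(x) * (1.0 + eps)

    x0 = [-1.2, 1.0]
    res_a = optimize.minimize(f, x0, method='Powell')
    res_b = optimize.minimize(f_nudged, x0, method='Powell')

    # The final solutions agree to within the optimizer's tolerance...
    np.testing.assert_allclose(res_a.x, res_b.x, atol=1e-4)
    # ...even if the intermediate evaluation paths differ, which is
    # why pinning steps 34-38 to exact reference values is fragile.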

It seems to me the iteration-path test is not a good test.  It's
likely that these kinds of tiny differences will lead to divergent
paths, and testing that steps 34 through 38 are identical seems
arbitrary and difficult to justify.

I think that test should be removed, leaving the check for the same
solution and roughly the same number of iterations.  Is that
reasonable?
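
Concretely, the retained checks might look something like this
(illustrative function, values, and tolerances, not the actual
test's):

    import numpy as np
    from scipy import optimize

    res = optimize.minimize(optimize.rosen, [-1.2, 1.0],
                            method='Powell')

    # Same solution, to a loose tolerance rather than bit-for-bit:
    np.testing.assert_allclose(res.x, [1.0, 1.0], atol=1e-4)
    # Roughly the expected amount of work, with generous leeway:
    assert res.nfev < 2000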

Cheers,

Matthew


