[SciPy-User] peer review of scientific software

Matthew Brett matthew.brett at gmail.com
Wed Jun 5 17:47:47 EDT 2013


Hi,

On Wed, Jun 5, 2013 at 2:36 PM, Matt Newville
<newville at cars.uchicago.edu> wrote:

> I'm sorry to admit that I read only the abstract, but I would not be
> surprised if Matthew Brett's example also fell into this category.
> That is, were the nearly-cancelling mistakes discovered because of
> unit testing or because of tests of the whole?  Obviously, if two
> functions were always (always!) used together, and had canceling
> errors (say, one function "incorrectly" scaled by a factor of 2 and
> the other incorrectly scaled by a factor of 1/2), unit testing might
> show flaws that never, ever changed the end results.

I believe what happened was that the first author of the paper read
the previous paper and saw the errors in the math.

As with your example, if the previous paper's algorithms had only ever
been run on similar data, the errors would never have surfaced as a problem.

If you have two functions that are each off by a factor of two, you have
to hope that no one ever calls just one of them on its own.
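
To make that concrete, here is a small, purely hypothetical sketch (the
function names are made up, not from any real package) of two functions
whose factor-of-two errors cancel, so a test of the combined result
passes while a test of either function on its own fails:

import math

def to_radius(diameter):
    # hypothetical bug: should be diameter / 2.0, so this is off by a
    # factor of 2
    return diameter

def area_from_radius(radius):
    # hypothetical bug: should be math.pi * radius ** 2; the spurious
    # 1/4 exactly cancels the factor-of-2 error in to_radius above
    return math.pi * radius ** 2 / 4.0

# test of the whole pipeline: passes, because the two errors cancel
assert abs(area_from_radius(to_radius(2.0)) - math.pi) < 1e-12

# unit tests of each public function: both fail and expose the bugs
assert abs(to_radius(2.0) - 1.0) < 1e-12
assert abs(area_from_radius(1.0) - math.pi) < 1e-12

Anyone who calls area_from_radius directly, with a radius they computed
themselves, gets an answer that is off by a factor of four.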

If we want to provide a library that our users can trust, we must test the
whole public API of our code. Even then, of course, the strongest claim we
can make is 'I don't know of any bugs for the ranges of parameters I've
tested'.

Cheers,

Matthew
