[AstroPy] Testing Guidelines

Derek Homeier derek at astro.physik.uni-goettingen.de
Mon Jul 11 14:54:37 EDT 2011


On 10.07.2011, at 8:08AM, Erik Tollerud wrote:

> As I mentioned in my message a moment ago, I have added a page on the
> astropy wikispaces page dealing with testing for the astropy package:
> 
> http://astropy.wikispaces.com/Astropy+Testing+Guidelines
> 
> The page itself has basically no content - we will fill it in once
> we've had a good discussion here on the list as to what the community
> is comfortable with.  There has already been some discussion on this
> topic on the list, so this message is to serve as a place to house the
> "official" discussion of what the actual package guidelines document
> should say.
> 
> One item I would like to put out for discussion: unless the community
> strongly disagrees, we (the coordinating committee) would prefer not
> to *require* complete coverage of unit or compliance testing, as we
> are concerned that this might be a major burden on some
> packages/submodules that would overly slow development.
> 
What is "complete coverage" anyway - testing all public methods with all allowed input types (and possibly some that are not allowed to test the correct error handling)? This is probably not even realised for numpy right now, so I generally second that idea. In particular, it would probably needlessly delay acceptance of some largely finished packages that are otherwise ready for inclusion. 
I would probably put a somewhat stronger emphasis than "Unit tests are encouraged..." into the guidelines for any newly  developed or revised code, also to encourage test-driven development practices. 
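A minimal sketch of the kind of unit test I have in mind (the file and function names here are hypothetical stand-ins, not anything decided; nose collects any test_* function automatically):

    # tests/test_angles.py -- minimal nose-style unit tests (hypothetical names)
    import math

    def deg2rad(deg):
        # stand-in for the real function under test
        return deg * math.pi / 180.0

    def test_deg2rad_known_values():
        # plain asserts are enough; nose picks up any test_* function
        assert abs(deg2rad(180.0) - math.pi) < 1e-12
        assert deg2rad(0.0) == 0.0

    def test_deg2rad_rejects_bad_input():
        # the error handling deserves a test too, not just the happy path
        try:
            deg2rad("not a number")
        except TypeError:
            pass
        else:
            raise AssertionError("expected TypeError for string input")

Such a file runs with a plain "nosetests tests/" and needs no boilerplate beyond the naming convention.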

> Regardless, probably the most important starting point is to determine
> the framework (Mark already began this discussion, so I've pasted his
> message below - sorry to hijack the thread, but I'm hoping to use the
> guideline documents to organize the discussion).  Once that's
> decided, we can discuss the best arrangement and policies for how to
> organize tests.
> 
> 
> 
> On that topic, prompted by Mark's comments below, I'll clarify that
> the "using nose" that I had in mind in my earlier comments was to use
> nose as it is, using only built-in plugins that seem to be fairly
> stable as it stands right now (although I've never personally used it
> with python 3.x - has anyone?).  My experiences with nose have been
> pretty painless, and it's nice that it allows us to combine most of
> the different testing options into one framework (in particular, the
> combination of doctests and tests in a separate "tests" directory is
> attractive).  It might be reasonable to use nose as a unit test
> framework, and then have a separate "compliance" test suite that might
> depend on external tools (as suggested by both Paul and Vicki).  I
> think having 4 different test suites as (at least, I think) suggested
> by Paul might be a bit burdensome, though.
> 
I can confirm that nose-1.0.0 supports Python 3 - just about as well as Python 2.x, afaict (Python 3 support admittedly seemed to have stalled until a few months ago, which allowed the impression that nose development was not very much alive...). 
I found Mark's comments about numpy and nose a bit confusing - numpy.test() definitely depends on nose, and in the current numpy master I cannot find vastly different behaviour between it and calling nosetests directly. For example, calling nosetests2.7 [3.2] in site-packages/numpy runs 3615 [3614] tests with 10 [13] errors, while numpy.test('full') runs 3609 [3608] tests with 4 [5] known failures under the respective Python versions. So the main difference seems to be the way nosetests handles (or fails to handle) tests decorated as known failures or otherwise to be skipped. Beyond that, since you can call nosetests in the "tests" subdir of every subpackage, it of course allows for quite some more flexibility.
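To make the known-failure point concrete, here is a sketch (not numpy's actual code, and the names and path are made up) of the two pieces involved - the numpy.testing decorators, and a test() entry point built on nose as a library:

    # sketch of numpy-style test decorators and a nose-based test() wrapper
    import sys
    import nose
    from numpy.testing import dec

    @dec.skipif(sys.version_info[0] == 3, "not ported to Python 3 yet")
    def test_py2_only_feature():
        pass

    @dec.knownfailureif(True, "documented upstream bug")
    def test_known_bug():
        # numpy.test() counts this as a known failure, because its plugin
        # catches the special exception the decorator raises; a bare
        # nosetests run lacks that plugin and reports an error instead
        assert False

    def run_tests(verbose=1):
        # minimal analogue of numpy.test(), running nose programmatically;
        # 'astropy/tests' is a hypothetical path for wherever the tests live
        argv = ['nosetests', '--verbosity=%d' % verbose, 'astropy/tests']
        return nose.run(argv=argv)

In other words, the tests that numpy.test() reports as skips or known failures surface as plain errors under bare nosetests, which would match the counts above.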

> Has anyone actually used py.test for a project that can chime in and
> give a sense of how it compares to nose?  The last time I looked it
> wasn't nearly as well-documented as it seems to be now, so it may be a
> better alternative...

I have no experience with it, so personally I am biased towards nose, not least because there are plenty of examples in the numpy/scipy code etc. to build on; but of course that is no reason to exclude a possibly superior alternative, especially not at this point.
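
As an aside, on Erik's point above about combining doctests with a separate "tests" directory: nose's built-in doctest plugin makes that straightforward. A hypothetical example (function and values made up for illustration):

    import math

    def flux_to_mag(flux):
        """Convert a flux ratio to a magnitude difference (hypothetical).

        >>> flux_to_mag(1.0)
        -0.0
        >>> flux_to_mag(100.0)
        -5.0
        """
        return -2.5 * math.log10(flux)

Running "nosetests --with-doctest" then collects these docstring examples alongside the test_* files, so both kinds of tests run under the one framework.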

Cheers,
							Derek

> On Fri, Jul 8, 2011 at 12:57 PM, Mark Sienkiewicz <sienkiew at stsci.edu> wrote:
>> 
>>>> As for the testing framework, we were thinking nose
>>>> (http://somethingaboutorange.com/mrl/projects/nose/1.0.0/).   Nose is
>>>> very nice for a project like this because it's super easy to run and
>>>> also easy to write tests with, but is also compatible with the stdlib
>>>> unittests.
>> 
>> 
>> "use nose" is a little ambiguous.  numpy "uses nose" but if you look
>> inside numpy.test() it looks sort of like they implemented their own
>> testing system using nose as a library.  I've never succeeded in using
>> nosetests to run the numpy tests, for example.
>> 
>> If you choose nose, my preference is to say:  Tests come in a separate
>> directory from the source.  You can run all the tests with the command
>> "nosetests tests/".
>> 
>> This is not to exclude the possibility of installing some of the tests
>> or of providing a test() function, but I also want to run the tests
>> directly with nosetests.
>> 
>> I actually have an ulterior motive here:  I use a nose plugin that feeds
>> a report generator.  If you only provide a foo.test() function, I can't
>> necessarily get it to use the plugin even if you base the whole thing on
>> nose.  You might not think that was a big deal, but it gets important
>> when you have the test volume that I regularly deal with: > 100 000
>> tests per night, sometimes > 10 000 fail, depending on what the developers
>> are up to.
>> 
>> b.t.w.  This doesn't mean I have any attachment to nose; it just happens
>> to be one of a small number of test runners that I use right now.
>> 
> 


