[SciPy-Dev] GSoC'17 candidate - Interested in Nose to Pytest Migration.

Robert Kern robert.kern at gmail.com
Mon Feb 27 18:08:34 EST 2017


On Mon, Feb 27, 2017 at 2:32 PM, Charles R Harris <charlesr.harris at gmail.com>
wrote:
>
>
>
> On Mon, Feb 27, 2017 at 2:35 PM, Robert Kern <robert.kern at gmail.com>
> wrote:
>>
>> On Mon, Feb 27, 2017 at 1:03 PM, Pauli Virtanen <pav at iki.fi> wrote:
>> >
>> > Mon, 27 Feb 2017 19:13:48 +0300, Evgeni Burovski wrote:
>> > [clip]
>> > > A few scattered thoughts about the project:
>> > >
>> > > - This is not exactly a transition to pytest as such. The goal is to
>> > > allow an alternative test runner, with minimal changes to the test
>> > > suite.
>> >
>> > I'd suggest we should consider actually dropping nose, if we make the
>> > suite pytest compatible.
>> >
>> > I think retaining nose compatibility in the numpy/scipy test suites
>> > does not bring advantages --- if we are planning to retain it, I would
>> > like to understand why. Supporting two systems is more complicated,
>> > makes it harder to add new tests, and if the other one is not regularly
>> > used, it'll probably break often.
>> >
>> > I think we should avoid temptation to build compatibility layers, or
>> > custom test runners --- there's already complication in numpy.testing
>> > that IIRC originates from working around issues in nose, and I think
>> > not many remember how it works in depth.
>> >
>> > I think the needs for Scipy and Numpy are not demanding, so sticking
>> > with "vanilla" pytest features and using only its native test runner
>> > would sound best. The main requirement is numerical assert functions,
>> > but AFAIK the ones in numpy.testing are pytest compatible (and
>> > independent of the nose machinery).
>>
>> If we're migrating test runners, I'd rather drop all dependencies and
>> target `python -m unittest discover` as the lowest common denominator
>> rather than target a fuzzy "vanilla" subset of pytest in particular.
>
> I'd certainly expect to make full use of the pytest features, why use it
> otherwise? There is a reason that both nose and pytest extended unittest.
> The only advantage I see to unittest is that it is less likely to become
> abandonware.

Well, that's definitely not what Pauli was advocating, and my response was
intended to clarify his position.

>> This may well be the technical effect of what you are describing, but I
>> think it's worth explicitly stating that focus.
>
> I don't think that is where we were headed. Can you make the case for
> unittest?

There are two places where nose and pytest provide features: each is both a
*test runner* and a *test framework*. As a test runner, they provide
features for discovering tests (essentially, they let you easily point to a
subset of tests to run) and for reporting the test results in a variety of
user-friendly ways. Test discovery works with plain-old-TestCases, and
pretty much any test runner can run plain-old-TestCases. These features
face the user who is *running* the tests.
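
To make that concrete, here's a minimal sketch (a hypothetical test, not
one from numpy or scipy) of a plain-old-TestCase. Any of the runners under
discussion -- `python -m unittest`, nose, or pytest -- can discover and run
it unchanged; the numerical asserts in numpy.testing would slot in the same
way.

```python
# Hypothetical plain-old-TestCase: runner-agnostic by construction.
import unittest


class TestArithmetic(unittest.TestCase):
    def setUp(self):
        # setUp/tearDown are stock unittest fixtures, not framework extras.
        self.values = [1, 2, 3]

    def test_sum(self):
        self.assertEqual(sum(self.values), 6)
```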

The other place they provide features is as test frameworks. This is where
things like generator tests and additional fixtures come in (i.e.
setup_class, setup_module, etc.). These require you to write your tests in
framework-specific ways: tests written for nose don't necessarily run under
pytest and vice versa, and any other plain test runner is right out. These
features face the developer who is *writing* the tests.
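
For instance, a nose-style generator test is tied to nose's collector.
Below is a hypothetical sketch, with a few extra lines that only emulate
what nose's collector does with the generator (plain unittest never
collected these, and pytest has dropped yield-test support):

```python
# Hypothetical nose-style generator test. nose iterates the generator
# and runs each yielded (function, *args) tuple as a separate test case.
def check_close(a, b, tol=1e-8):
    assert abs(a - b) < tol


def test_pairs():
    # Under nose, each yield is reported as its own test.
    for a, b in [(1.0, 1.0), (2.5, 2.5), (0.1 + 0.2, 0.3)]:
        yield check_close, a, b


def run_generator_test(gen_test):
    # Rough emulation of nose's collection step, for illustration only:
    # consume the generator and call each yielded case.
    count = 0
    for func, *args in gen_test():
        func(*args)
        count += 1
    return count
```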

Test discovery was the main reason we started using nose over plain
unittest. At that time (and incidentally, when nose and pytest began),
`python -m unittest discover` didn't exist. To run unit tests, you had to
manually collect the TestCases into a TestSuite and write your own logic
for configuring any subsets from user input. It was a pain in the ass for
large hierarchical packages like numpy and scipy. The test framework
features of nose like generator tests were nice bonuses, but were not the
main motivating factor. [Source: me; it was Jarrod and I who made the
decision to use nose at a numpy sprint in Berkeley.]
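
For those who never had to do it, the pre-discovery workflow looked roughly
like this sketch (hypothetical TestCases standing in for a package's real
ones):

```python
# Sketch of pre-`unittest discover` usage: every TestCase had to be
# collected into a TestSuite by hand before it could be run.
import unittest


class TestAdd(unittest.TestCase):
    def test_add(self):
        self.assertEqual(1 + 1, 2)


class TestCompare(unittest.TestCase):
    def test_greater(self):
        self.assertTrue(2 > 1)


def build_suite():
    # With a large hierarchical package, this manual collection (or custom
    # code walking the package) had to cover every test module.
    loader = unittest.TestLoader()
    suite = unittest.TestSuite()
    suite.addTests(loader.loadTestsFromTestCase(TestAdd))
    suite.addTests(loader.loadTestsFromTestCase(TestCompare))
    return suite


if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(build_suite())
```

Today `python -m unittest discover` does all of that walking for you.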

If we're going to make a switch, I'd rather give up those framework
features (we don't use all that many) in order to get agnosticism on the
test runner front. Being agnostic to the test runner lets us optimize for
multiple use cases simultaneously. That is, the best test runner for "run
the whole suite and record every detail with coverage under time
constraints under Travis CI" is often different from what a developer wants
for "run just the one test module for the thing I'm working on and report
to me only what breaks as fast as possible with minimal clutter".

--
Robert Kern