[pytest-dev] My gripes with pytest and tox

Florian Bruhin me at the-compiler.org
Fri Jun 10 04:00:53 EDT 2016


* Ronny Pfannschmidt <opensource at ronnypfannschmidt.de> [2016-06-10 09:38:42 +0200]:
> Hi Florian,
> 
> below i threw in some thoughts and strong opinions,
> take with a grain of salt and feel free to tear them down :)

Thanks!

> Am 10.06.2016 um 09:12 schrieb Florian Bruhin:
> > pytest
> >     - doesn't have an easy way to do printf-style debugging
> >       (without -s)
> that's per design - you are supposed to leave prints in and use the
> output capture to get context
> I would prefer to extend on that instead of adding a new, more limited
> way just because it's convenient in some cases

Let's continue discussing this in the issue[1] or at the sprint :)

[1] https://github.com/pytest-dev/pytest/issues/1602

> >     - prints the exception before all other report output instead of
> >       after
> perhaps it makes sense to order things based on the number of lines, or
> have a way to take control of that
> different kinds of testsuites have different kinds of output and debug help
> 
> for some things the exceptions are a better help, for others the output
> traces are

True - I guess a config option would make sense.

I often have hundreds of lines of (valuable!) logging in the report
output, so I need to scroll up a lot to find out *what* went wrong in
the first place.

> >     - segfaults in my testsuite when using pdb (haven't investigated
> >       yet, pdb++ works)
> I suspect a potential readline issue (pdb++ uses pyrepl instead of
> readline), please report

I'll do that, after getting my qutebrowser release out today :D

> >     - blows up in weird ways with ImportErrors in plugins
> that's indeed a pluggy gripe, IMHO we should put pluggy under pytest-dev
> and extend it (I'd also like to cythonize it and publish
> manylinux/windows wheels)

+1

> >     - too much boilerplate with parametrized fixtures
> >     - too much boilerplate with parametrized tests (what about keyword
> >       arguments?)
> examples for those 2 items please, I saw them as convenient and
> boilerplate-removing and I'd love to see your new uses and pain points

For test parametrizing I imagine something like:

    @pytest.mark.parametrize(inp=['one', 'two'],
                             expected=[(1, True), (2, False)])
    def test_numbers(inp, expected):
        assert number(inp) == expected

instead of

    @pytest.mark.parametrize('inp, expected', [
        ('one', (1, True)),
        ('two', (2, False)),
    ])
    def test_numbers(inp, expected):
        assert number(inp) == expected

Though with the current syntax, the parameters for one invocation are
grouped together nicely. I need to think about it some more.
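
Just to make that concrete, here's a rough sketch - kwparametrize is a
throwaway helper I'm making up on the spot, not anything pytest offers -
of how the keyword style could be emulated on top of the current
parametrize signature:

    import pytest

    def kwparametrize(**kwargs):
        # Made-up helper: turn keyword lists (all assumed to have the
        # same length) into the 'argnames, argvalues' form that
        # pytest.mark.parametrize expects.
        names = sorted(kwargs)
        argvalues = list(zip(*(kwargs[name] for name in names)))
        return pytest.mark.parametrize(', '.join(names), argvalues)

    @kwparametrize(inp=['one', 'two'], expected=[(1, True), (2, False)])
    def test_numbers(inp, expected):
        assert number(inp) == expected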

For the fixtures, I often need to do something like:

    @pytest.fixture(params=['one', 'two'])
    def fixt(request):
        # ...
        return request.param

I just wonder if there's a slightly cleaner way of doing this kind of
thing. Maybe there isn't. Haven't given it much thought yet.
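
One thing I could imagine (just a sketch; param_fixture is a name I'm
making up, not an existing pytest API) is a tiny factory that builds
such pass-through fixtures:

    import pytest

    def param_fixture(params):
        # Made-up factory: build a fixture that just hands each param
        # through, so the pass-through boilerplate lives in one place.
        @pytest.fixture(params=params)
        def fixt(request):
            return request.param
        return fixt

    # Bound to the same name as the inner function so it's still
    # discovered as the 'fixt' fixture.
    fixt = param_fixture(['one', 'two'])

    def test_fixt(fixt):
        assert fixt in ('one', 'two')

Not sure that's actually nicer than writing the fixture out by hand,
but it shows the kind of repetition I mean.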

> >     - doesn't work on Windows with colors (for me), prints black on
> >       black
> AFAIR that needs colorama. What can we do to improve?

Yeah, it's with colorama. I just get black text on a black background
in my shell when running pytest, and can only see anything with
--color=no.

It's a bug somewhere, but I haven't had time to investigate yet.

> >     - hides python warnings
> I consider that a bad bug and gripe
> personally I'd like to integrate Python warnings and pytest warnings
> under a consistent, trackable and Python-warnings-compatible API

Sounds like a plan! Definitely a sprint item, I presume.

> > pytest-xdist
> >     - breaks my testsuite in ways I don't understand
> >     - doesn't seem to have an easy way for per-job fixtures
> xdist has no "jobs" - as of now it simply runs multiple sessions and
> schedules items between them based on strategies,
> my basic opinion is that the codebase completely outgrew the initial
> design and needs major internal changes
> and those are pretty expensive in terms of time and it's also not clear
> what model to use

I mean jobs as in xdist's -n flag, i.e. the number of worker processes ;)

Basically I run my code under test as a subprocess, and then send
commands to it via IPC and check the log output.

This means if I parallelize my tests (which I'd really want to!) I
have four test-processes talking to one process under test, which gets
confused. So I need an easy way to start four test processes and four
subprocesses.
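
Roughly what I have in mind is something like the sketch below - the
command line and IPC details are placeholders, and I'm assuming xdist's
slaveinput on the config is the right way to tell the workers apart:

    import subprocess

    import pytest

    @pytest.fixture(scope='session')
    def proc_under_test(request):
        # Under xdist, config.slaveinput carries the worker id (e.g.
        # 'gw0'); without xdist, fall back to a single instance.
        slaveinput = getattr(request.config, 'slaveinput', None)
        worker_id = slaveinput['slaveid'] if slaveinput else 'main'
        # Placeholder command - the real thing would start the
        # application with a per-worker IPC channel.
        proc = subprocess.Popen(['my-app', '--instance', worker_id])

        def finalize():
            proc.terminate()
            proc.wait()

        request.addfinalizer(finalize)
        return proc

That way each worker would get its own process under test for the
whole session, instead of all of them talking to the same one.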

Florian

-- 
http://www.the-compiler.org | me at the-compiler.org (Mail/XMPP)
   GPG: 916E B0C8 FD55 A072 | http://the-compiler.org/pubkey.asc
         I love long mails! | http://email.is-not-s.ms/