[py-dev] utest thoughts
Ian Bicking
ianb at colorstudy.com
Tue Sep 28 01:58:44 CEST 2004
Here are some things I'd like to do with utest. Maybe some of them are
possible now. This is kind of a brainstorming list of features, I guess.
* Specify tests to run within a module. The only way to select a module
that I see now is by filename. Package name would also be nice.
Wildcards could also be useful, e.g., utest modulename.'*transaction*'.
I think regular expressions are unnecessarily complex. Maybe a
wildcard character other than * would be nice, to keep it from
conflicting with shell expansion. A setting, or an optional alternative
character? Maybe % (like in SQL).
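For what it's worth, the translation is nearly a one-liner with the standard fnmatch module. Here's a hypothetical helper (select_tests is invented, not a utest API) that treats % as the wildcard:

```python
import fnmatch

def select_tests(names, pattern, wildcard="%"):
    # Hypothetical helper (not a utest feature): treat an alternative
    # character -- '%' by default, as in SQL -- as the wildcard so the
    # pattern survives the shell unquoted, then match with fnmatch.
    translated = pattern.replace(wildcard, "*")
    return [name for name in names if fnmatch.fnmatch(name, translated)]

names = ["test_commit_transaction", "test_rollback_transaction", "test_login"]
print(select_tests(names, "%transaction%"))
# ['test_commit_transaction', 'test_rollback_transaction']
```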
* Data-driven tests, where the same code is tested with many different
sets of data. Naturally this is often done in a for loop, but it's
better if the data turns into multiple tests, each of which is
addressable. There's something called a "unit" in there, I think, that
relates to this...? But not the same thing as unittest; I think I saw
unittest compatibility code as well.
Anyway, with unittest I could provide values to the __init__, creating
multiple tests that differed only according to data, but then the runner
became fairly useless. I'm hoping that will be easier with utest.
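To make the unittest version of this concrete, the pattern I mean looks roughly like this (the class and method names are invented for illustration):

```python
import unittest

class SquareTest(unittest.TestCase):
    # Each instance carries its own data, so every (value, expected)
    # pair becomes a separately runnable test.
    def __init__(self, value, expected):
        unittest.TestCase.__init__(self, "check_square")
        self.value, self.expected = value, expected

    def check_square(self):
        self.assertEqual(self.value ** 2, self.expected)

def make_suite():
    data = [(2, 4), (3, 9), (-4, 16)]
    return unittest.TestSuite(SquareTest(v, e) for v, e in data)

result = unittest.TextTestRunner(verbosity=0).run(make_suite())
print(result.testsRun)  # 3
```

The data-as-constructor-arguments trick works, but as noted above the stock runner can't address the individual tests by name.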
* Specifying an option to the runner that gets passed through to the
tests. It seems like the options are fixed now. I'd like to do
something like -Ddatabase=mysql. I can do this with environment
variables now, but that's a little crude. It's easiest if it's just
generic, like -D for compilers, but of course it would be nicer if there
were specific options. Maybe this could be best achieved by
specializing utest and distributing my own runner with the project.
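A generic pass-through could be as simple as peeling -D options off the command line before the runner parses it. A sketch, with parse_defines invented for illustration:

```python
def parse_defines(argv):
    # Hypothetical: split -Dname=value options (as with compilers) off
    # the command line into a dict that the tests could consult.
    defines, rest = {}, []
    for arg in argv:
        if arg.startswith("-D") and "=" in arg:
            name, value = arg[2:].split("=", 1)
            defines[name] = value
        else:
            rest.append(arg)
    return defines, rest

defines, rest = parse_defines(["-Ddatabase=mysql", "test_db.py"])
print(defines, rest)
# {'database': 'mysql'} ['test_db.py']
```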
* I'm not clear how doctest would fit in. I guess I could turn the
doctest into a unit test TestCase, then test that. Of course, it would
be nice if this was streamlined. I also have fiddled with doctest to
use my own comparison functions when testing if we get the expected
output. That's not really in the scope of utest -- that should really
go in doctest. Anyway, I thought I'd note its existence.
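Actually the standard library already half-streamlines this: doctest.DocTestSuite turns a module's doctests into unittest test cases that any unittest-compatible runner can pick up. A small self-contained demonstration (the throwaway module is just so the snippet runs on its own):

```python
import doctest
import types
import unittest

# Build a throwaway module so DocTestSuite has something to scan;
# normally you would hand it your real module instead.
mod = types.ModuleType("example")
exec('''
def double(x):
    """Return twice x.

    >>> double(21)
    42
    """
    return 2 * x
''', mod.__dict__)

# DocTestSuite wraps the module's doctests as unittest test cases.
suite = doctest.DocTestSuite(mod)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.testsRun, result.wasSuccessful())  # 1 True
```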
* Code coverage tracking. This should be fairly straightforward to add.
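The core bookkeeping really is small: sys.settrace can record every executed line, which is roughly what a coverage option would be built on (an illustrative sketch, not utest code):

```python
import sys

executed = set()

def trace_lines(frame, event, arg):
    # Record every (filename, line) that executes; this is the heart
    # of line-coverage bookkeeping, minus the reporting.
    if event == "line":
        executed.add((frame.f_code.co_filename, frame.f_lineno))
    return trace_lines

def sample(x):
    if x > 0:
        return "positive"
    return "non-positive"

sys.settrace(trace_lines)
result = sample(5)
sys.settrace(None)
print(result, len(executed) > 0)  # positive True
```

Comparing the recorded lines against the lines in the source would then show what the tests never touched.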
The last time I looked around at test runners, Zope3's seemed the best.
Well, it would have been better if I could have gotten it to do
something. But it *seemed* best. Mining it for features:
* Different levels of tests (-a --at-level or --all; default level is 1,
which doesn't run all tests). They have lots of tests, so I'm guessing
they like to avoid running tests which are unlikely to fail.
* A distinction between unit and functional tests (as in acceptance or
system tests). This doesn't seem very generic -- these definitions are
very loose and not well agreed upon. There's not even any common
language for them. I'm not sure how this fits in with level, but some
sort of internal categorization of tests seems useful.
* A whole build process. I think they run out of the build/ directory
that distutils generates. It is a little unclear how paths work out
with utest, depending on where you run it from. Setting PYTHONPATH to
include your development code seems the easiest way to resolve these
issues with utest. I don't have anything with complicated builds, so
maybe there are issues I'm unaware of.
* A pychecker option. (-c --pychecker)
* A pdb option (-D --debug). I was able to add this to utest with
fairly small modifications (at least, if I did it correctly).
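The modification can be small because pdb.post_mortem accepts a traceback directly. A sketch of the hook (run_test is an invented runner stand-in):

```python
import pdb
import sys
import traceback

def run_test(test, debug=False):
    # Sketch of a -D/--debug runner option: on failure, drop straight
    # into pdb's post-mortem instead of just printing the traceback.
    try:
        test()
        return True
    except Exception:
        if debug:
            pdb.post_mortem(sys.exc_info()[2])
        else:
            traceback.print_exc()
        return False

def failing_test():
    assert 1 + 1 == 3, "arithmetic is broken"

print(run_test(failing_test))  # False (traceback goes to stderr)
```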
* An option to control garbage collection (-g --gc-threshold). I guess
they encounter GC bugs sometimes.
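For reference, the knob itself is just gc.set_threshold; a sketch of what such an option might flip (purely illustrative):

```python
import gc

# Sketch of a -g/--gc-threshold style option: a threshold of 1 makes
# the cyclic collector run almost constantly, which tends to surface
# GC-sensitive bugs much sooner.
old_thresholds = gc.get_threshold()
gc.set_threshold(1)
try:
    junk = [[i] for i in range(100)]   # allocations now trigger collections
    collected = gc.collect()
finally:
    gc.set_threshold(*old_thresholds)  # always restore the defaults
print(collected >= 0, gc.get_threshold() == old_thresholds)  # True True
```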
* Run tests in a loop (-L --loop). Also for checking memory leaks.
I've thought that running loops of tests in separate threads could also
be a useful test, for code that actually was supposed to be used with
threads. That might be another place for specializing the runner.
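A specialized runner along those lines might look something like this sketch (stress and the toy test are invented):

```python
import threading

def stress(test, threads=4, iterations=100):
    # Sketch of a looping runner: hammer one test from several threads
    # at once and collect any exceptions, for code that is supposed to
    # be thread-safe.
    errors = []
    def worker():
        for _ in range(iterations):
            try:
                test()
            except Exception as exc:
                errors.append(exc)
    pool = [threading.Thread(target=worker) for _ in range(threads)]
    for t in pool:
        t.start()
    for t in pool:
        t.join()
    return errors

log = []
def test_append():
    log.append(1)
    assert log, "append was lost"  # stands in for a real threaded test

print(len(stress(test_append)))  # 0
```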
* Keep bytecode (-k --keepbytecode). Not interesting in itself, but it
implies that they don't normally keep bytecode. I expect this is to
deal with code where the .py file has been deleted, but the .pyc file is
still around. I've wasted time because of that before, so I can imagine
its usefulness.
* Profiling (-P --profile). Displays top 50 items, by time and # of calls.
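With the standard profiler that report is a few lines of pstats; a sketch (fib stands in for an actual test):

```python
import cProfile
import io
import pstats

def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

# Profile a "test" and print the top entries twice: once sorted by
# cumulative time, once by call count -- roughly what a --profile
# runner option would report.
profiler = cProfile.Profile()
profiler.enable()
fib(12)
profiler.disable()

out = io.StringIO()
stats = pstats.Stats(profiler, stream=out)
stats.sort_stats("cumulative").print_stats(50)
stats.sort_stats("ncalls").print_stats(50)
print("fib" in out.getvalue())  # True
```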
* Report only first doctest failure (-1
--report-only-first-doctest-failure).
* Time the tests and show the slowest 50 tests (-t --top-fifty). I
first thought this was just a bad way of doing profiling, but now that I
think about it this is to diagnose problems with the tests running slowly.
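Diagnosing a slow run needs nothing fancier than wall-clock timing per test; a sketch (time_tests is invented):

```python
import time

def time_tests(tests, top=50):
    # Sketch of a -t/--top-fifty style option: wall-clock each test
    # and report the slowest ones, to find what drags the run down.
    timings = []
    for name, test in tests:
        start = time.time()
        test()
        timings.append((time.time() - start, name))
    timings.sort(reverse=True)
    return timings[:top]

tests = [("fast", lambda: None),
         ("slow", lambda: time.sleep(0.05))]
slowest = time_tests(tests)
print(slowest[0][1])  # slow
```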
That's all the interesting options, I think. There's also options to
select which tests you display, but these seem too complex, while still
not all that powerful.