Unit testing data; What to test

John J. Lee phrxy at csv.warwick.ac.uk
Sun Feb 18 02:09:59 EST 2001


Having decided testing was a Good Thing and that I ought to do it, I've
started to write tests, using PyUnit.

The first question is straightforward: do people have a standard, simple
way of handling data for tests, or do you just tend to stick most of it in
the test classes?  KISS, I suppose.  But if the software is going to change
a lot, isn't it a good idea to separate the tests from their input and
expected-result data?  Of course with big data -- I have some mathematical
functions that need to be checked, for example -- you're obviously not
going to dump it directly into the test code: I'm wondering more about
data of the order of tens of objects (objects in the Python sense).
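
To make the question concrete, here is roughly the kind of thing I mean --
a minimal sketch using PyUnit's unittest module, with a made-up table of
expected values for math.sin kept apart from the test class (the names and
data are invented purely for illustration):

import math
import unittest

# Input / expected-output pairs kept separate from the test class, so
# they could just as well live in their own module or data file.
SIN_CASES = [
    (0.0,          0.0),
    (math.pi / 2,  1.0),
    (math.pi,      0.0),
]

class SinTest(unittest.TestCase):
    def test_known_values(self):
        for x, expected in SIN_CASES:
            self.assertAlmostEqual(math.sin(x), expected, places=7)

if __name__ == '__main__':
    unittest.main()

With the data in a table like that, adding or moving cases doesn't mean
touching the test method at all -- which is what makes me wonder whether
separating the two is worth doing routinely.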

In fact, do unit tests often end up having to be rewritten as code is
refactored?  Presumably yes, given the (low) level that unit tests are
written at.

The second (third) one is vague: What / How should one test?  Discuss.

I realise these questions are not strictly Python-specific, but (let's
see, how do I justify this) it seems most of what is out there on the web
& USENET is either inappropriately formal, large-scale and
business-orientated for me (comp.software.testing and its FAQ, for
example), or merely describes a testing framework.  A few scattered XP
articles are the only exceptions I've found.

I'm sure there must be good and bad ways to test -- for example, I read
somewhere (can't find it now) that you should aim to end up with a suite
in which each bug generates, on (mode) average, one test failure, or at least a small
number.  The justification for this was that lots of tests failing as a
result of a single bug are difficult to deal with.  It seems to me that
this goal is a) impossible to achieve and b) pointless, since if multiple
test failures really are due to a single bug, they will all go away when
you fix it, just as compile-time errors often do.  No?
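
To illustrate with a sketch (the helper and its bug are invented): all
three tests below fail because of the single off-by-one in mean(), and all
three pass again the moment the divisor is fixed -- exactly the cascade I
mean:

import unittest

def mean(xs):
    # Deliberate bug: the divisor should be len(xs).
    return float(sum(xs)) / (len(xs) - 1)

class MeanTest(unittest.TestCase):
    # All three failures below trace back to the one bug above; fix the
    # divisor and the whole lot pass together.
    def test_single_value(self):
        self.assertEqual(mean([4]), 4)

    def test_pair(self):
        self.assertEqual(mean([2, 4]), 3)

    def test_triple(self):
        self.assertEqual(mean([1, 2, 3]), 2)

if __name__ == '__main__':
    unittest.main()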


John
