Evaluating error strings for 'unittest' assert methods.

Steven D'Aprano steve at pearwood.info
Wed Apr 6 22:14:52 EDT 2016


On Thu, 7 Apr 2016 08:58 am, John Pote wrote:

[...]
> I like each assert...() to output helpful information when things go
> wrong. So I have put in quite complicated code to generate the error
> string the assert() method uses only when things go wrong. In the normal
> case, when everything is working, all these error strings are
> constructed only to be discarded immediately when the assert() detects
> that the test result is correct and no exception is thrown.
> 
> To my mind this seems a waste and adds unnecessary delay to the
> running of the whole test script.

This sounds like premature optimization. I would be very surprised if this
actually makes much difference to the run time of the test suite, unless
the tests are *really* basic and the error strings *impressively* complex.
Your example below:


> So I was wondering if there was some convenient, Pythonic way of calling
> an assert() method so the error string is only evaluated/constructed if
> the assert() fails and throws an exception. For example,
> 
> self.assertEqual(
>     (nBytes, expectedValues), (nBytesRd, valuesRead),
>     """Unexpected reg value.
> Expected values nBytes:%02x (%s)
> """ % (nBytes, ' '.join(["%04x" % v for v in expectedValues]))
>     + "Read values     nBytes:%02x (%s)"
>       % (nBytesRd, ' '.join(["%04x" % v for v in valuesRead]))
> )

doesn't look too complicated to me. So my *guess* is that you are worrying
about a tiny proportion of your actual runtime. Obviously I haven't seen
your code, but thinking about my own test suites, I would be shocked if it
was as high as 1% of the total. But I might be wrong.

You might try running the profiler over your test suite and see if it gives
you any useful information, but I suspect not.
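If you do want to try that, here is a minimal sketch of profiling a test
module programmatically with cProfile. The module name "test_mymodule" is
just a placeholder for whatever your test script is actually called:

    import cProfile
    import pstats
    import unittest

    # Load the tests from your test module (placeholder name).
    suite = unittest.defaultTestLoader.loadTestsFromName("test_mymodule")

    profiler = cProfile.Profile()
    profiler.enable()
    unittest.TextTestRunner(verbosity=0).run(suite)
    profiler.disable()

    # Sort by cumulative time. If message building really were a hot spot,
    # you would expect to see str.join and %-formatting near the top.
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(20)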

Otherwise -- and I realise that this is a lot of work -- I'd consider making
a copy of your test script, then go through the copy and replace every
single one of the error messages with the same short string, say, "x". Now
run the two versions, repeatedly, and time how long they take.

On Linux, I would do something like this (untested):


time python -m unittest test_mymodule > /dev/null 2>&1


the intent being to ignore the overhead of actually printing any error
messages to the screen and just measure the execution time. Run that (say)
ten times, and pick the *smallest* runtime. Now do it again with the
modified tests:

time python -m unittest test_mymodule_without_messages > /dev/null 2>&1


My expectation is that if your unit tests do anything like a significant
amount of processing, the difference caused by calculating a few extra
error messages will be insignificant.
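For a sense of scale, you could also time just the message construction on
its own with timeit. This is only a rough sketch, with made-up values
standing in for your real data:

    import timeit

    nBytes, nBytesRd = 8, 8
    expectedValues = list(range(8))
    valuesRead = list(range(8))

    def build_message():
        # Build an error string of roughly the shape quoted above.
        expected = ' '.join(["%04x" % v for v in expectedValues])
        read = ' '.join(["%04x" % v for v in valuesRead])
        return ("Unexpected reg value.\n"
                "Expected values nBytes:%02x (%s)\n" % (nBytes, expected)
                + "Read values     nBytes:%02x (%s)" % (nBytesRd, read))

    # Time 100,000 constructions; on typical hardware each call costs a few
    # microseconds, i.e. negligible next to any real work the test does.
    print(timeit.timeit(build_message, number=100000))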



But, having said that, what version of Python are you using? Because the
unittest module in Python 3 is *significantly* enhanced and prints much
more detailed error messages without any effort on your part at all.

https://docs.python.org/3/library/unittest.html

For example, starting in Python 3.1, assertEqual() on two strings will
display a multi-line diff of the two strings if the test fails. Likewise,
there are type-specific equality checks for lists, dicts, etc.
(assertListEqual, assertDictEqual, and so on) which assertEqual calls
automatically, and the default error message contains a lot more
information than the Python 2 version does.
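For example, a made-up test like this (the class and values are just for
illustration) will, on failure, print an element-by-element diff of the two
lists with no custom message at all:

    import unittest

    class DemoMessages(unittest.TestCase):
        def test_register_values(self):
            # In Python 3, assertEqual hands lists off to assertListEqual,
            # so a failure shows exactly which elements differ.
            expected = [0x0001, 0x0002, 0x0003]
            read = [0x0001, 0x00ff, 0x0003]
            self.assertEqual(expected, read)

    if __name__ == "__main__":
        unittest.main()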


If you're stuck with Python 2, I *think* that the new improved unittest
module is backported as unittest2:


https://pypi.python.org/pypi/unittest2
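If I remember right, the usual pattern is to install it and alias the
import, so the rest of the test code stays unchanged:

    pip install unittest2

    # in your test modules:
    try:
        import unittest2 as unittest   # Python 2, with backported features
    except ImportError:
        import unittest                # Python 3 already has them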



-- 
Steven



