Using unittest for benchmarking / speed improvements

Daniel Abel abli at freemail.hu
Sat Mar 29 03:42:32 EDT 2008


Hi!

As everyone knows, using unittests to monitor and improve the
correctness of code is really handy. Is there a good system for using
unittests to monitor / improve the _speed_ of code?

That is, I would like a system which, every time a test is run, also
measures the time the test took and stores it in a file somewhere
(for example, recording the date & time, the svn revision, and the
test's duration). The programmer could then plot that in gnuplot / etc.
to see whether the speed of the code is improving or not, and also
compare different tests to each other (when comparing the speed of
different implementations).
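
For concreteness, here is a rough sketch of the sort of record I mean.
The timings.csv filename and the working_copy_revision() helper are
placeholders I made up; the helper just shells out to svnversion, and a
real system would more likely hook into the test runner than into
setUp/tearDown:

    import csv
    import subprocess
    import time
    import unittest


    def working_copy_revision():
        # Placeholder helper: ask svnversion for the working-copy
        # revision, falling back to 'unknown' if svn is not installed.
        try:
            proc = subprocess.Popen(['svnversion', '.'],
                                    stdout=subprocess.PIPE)
            return proc.communicate()[0].strip()
        except OSError:
            return 'unknown'


    class TimedTestCase(unittest.TestCase):
        """Times everything between setUp and tearDown and appends one
        row of (date & time, svn revision, test id, seconds) to
        timings.csv.  Subclasses that override setUp/tearDown must
        remember to call these versions too."""

        def setUp(self):
            self._started = time.time()

        def tearDown(self):
            elapsed = time.time() - self._started
            log = open('timings.csv', 'a')
            try:
                csv.writer(log).writerow(
                    [time.strftime('%Y-%m-%d %H:%M:%S'),
                     working_copy_revision(), self.id(),
                     '%.6f' % elapsed])
            finally:
                log.close()

Plotting a single test's column of that file over the revision column
would then show the trend I'm after.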

Also, svn revisions could be labelled with svn checkin comments, so
that one can easily associate any improvement / regression with the
summary of the change.
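
Something along these lines could attach the log message to a revision
number; the 'svn log' invocation and the crude parsing of its output
are from memory, so treat this as a sketch (using --xml output and a
real parser would be more robust):

    import subprocess


    def checkin_comment(revision):
        # Sketch: fetch the log message for a given revision so plots
        # can be annotated with it.
        proc = subprocess.Popen(
            ['svn', 'log', '-r', str(revision), '--limit', '1'],
            stdout=subprocess.PIPE)
        output = proc.communicate()[0].decode('utf-8', 'replace')
        lines = output.splitlines()
        # 'svn log' prints a dashed separator, a header line, a blank
        # line, then the message, then a closing separator.
        return '\n'.join(lines[3:-1]).strip()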

I was thinking about something built on top of nose, and hoping
someone has already created such a thing. (After all, 'benchmark
before optimize' and similar are often repeated; such a system seems
really necessary for doing that systematically.)

I found
http://renesd.blogspot.com/2007/08/timing-and-unittests-graphing-speed.html
which talks about this idea, and the stopwatch plugin at
http://darcs.idyll.org/~t/projects/pinocchio/doc/#stopwatch-selecting-tests-based-on-execution-time
which can measure the time (and would be the easiest to extend to store
the information I would need). The profile plugin included in the nose
package can also measure how time is spent _inside_ a test, which is
handy too, but I only need unittest-level timing, i.e. not as granular
as what profile produces.
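
If I end up extending one of these, I imagine the core would be
something like the following nose plugin, written from memory against
the 0.10 plugin API, so the details may need adjusting (the plugin
name and the timings.csv filename are again just placeholders):

    import csv
    import time

    from nose.plugins import Plugin


    class TimingRecorder(Plugin):
        """Record each test's duration to timings.csv; enabled on the
        command line with --with-timing-recorder."""
        name = 'timing-recorder'

        def startTest(self, test):
            self._started = time.time()

        def stopTest(self, test):
            elapsed = time.time() - self._started
            log = open('timings.csv', 'a')
            try:
                csv.writer(log).writerow(
                    [time.strftime('%Y-%m-%d %H:%M:%S'), str(test),
                     '%.6f' % elapsed])
            finally:
                log.close()

As far as I know such a plugin can be registered through a setuptools
entry point (the 'nose.plugins.0.10' group) or passed in via
nose.run(addplugins=[TimingRecorder()]), but check the nose docs on
that.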

So, is there such a complete framework (i.e. one which can compare
speed across revisions), or do I have to hack one together myself?
(And would others be interested in it?)

Thanks in advance,
Daniel Abel


