unit-profiling, similar to unit-testing

Roy Smith roy at panix.com
Wed Nov 16 09:36:40 EST 2011


In article <95bcp8-bft.ln1 at satorlaser.homedns.org>,
 Ulrich Eckhardt <ulrich.eckhardt at dominolaser.com> wrote:

> Hi!
> 
> I'm currently trying to establish a few tests here that evaluate certain 
> performance characteristics of our systems. As part of this, I found 
> that these tests are rather similar to unit-tests, only that they are 
> much more fuzzy and obviously dependent on the systems involved, CPU 
> load, network load, day of the week (Tuesday is virus scan day) etc.
> 
> What I'd just like to ask is how you do such things. Are there tools 
> available that help? I was considering using the unit testing framework, 
> but the problem with that is that the results are too hard to interpret 
> programmatically and too easy to misinterpret manually. Any suggestions?

It's really, really, really hard to either control for, or accurately 
measure, things like CPU or network load.  There's so much stuff you 
can't even begin to see.  The state of your main memory cache.  Disk 
fragmentation.  What I/O is happening directly out of kernel buffers vs. 
having to do a physical disk read.  How slow your DNS server is today.

What I suggest is instrumenting your unit test suite to record not just 
the pass/fail status of every test, but also the test duration.  Stick 
these into a database as the tests run.  Over time, you will accumulate 
a whole lot of performance data, which you can then start to mine.
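Something like this works as a starting point (a minimal sketch for 
modern Python; the database file name and schema are made up for 
illustration, and skipped/expected-failure tests are ignored):

import sqlite3
import time
import unittest


class TimingTestResult(unittest.TextTestResult):
    """Record (test, outcome, duration) rows as the suite runs."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.db = sqlite3.connect("perf_history.db")  # hypothetical file
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS test_runs "
            "(run_at REAL, test TEXT, outcome TEXT, duration REAL)")

    def startTest(self, test):
        self._t0 = time.time()
        super().startTest(test)

    def stopTest(self, test):
        # stopTest() fires after the outcome has been recorded, so the
        # failures/errors lists already know about this test.
        duration = time.time() - self._t0
        if any(t is test for t, _ in self.failures):
            outcome = "fail"
        elif any(t is test for t, _ in self.errors):
            outcome = "error"
        else:
            outcome = "pass"
        self.db.execute("INSERT INTO test_runs VALUES (?, ?, ?, ?)",
                        (time.time(), test.id(), outcome, duration))
        self.db.commit()
        super().stopTest(test)


if __name__ == "__main__":
    unittest.main(
        testRunner=unittest.TextTestRunner(resultclass=TimingTestResult))

Once a few weeks of runs are in there, the mining is a SELECT away, 
e.g. comparing each test's latest duration against its historical 
average.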

While you're running the tests, gather as much system performance data 
as you can (output of top, vmstat, etc.) and stick that into your 
database too.  You never know when you'll want to refer to the data, so 
just collect it all and save it forever.
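For that part, even something crude like the following sketch is enough 
to get started (the tool list and table name are placeholders; on Linux 
you could also read /proc/loadavg and friends directly):

import sqlite3
import subprocess
import time

# Hypothetical list of snapshot commands; use whatever your OS offers.
TOOLS = [["uptime"], ["vmstat"], ["df", "-k"]]


def snapshot_system_stats(db_path="perf_history.db"):
    """Capture each tool's raw output and file it away with a timestamp."""
    db = sqlite3.connect(db_path)
    db.execute("CREATE TABLE IF NOT EXISTS system_stats "
               "(taken_at REAL, tool TEXT, output TEXT)")
    for argv in TOOLS:
        try:
            output = subprocess.check_output(argv, text=True)
        except (OSError, subprocess.CalledProcessError) as exc:
            output = "unavailable: %s" % exc
        db.execute("INSERT INTO system_stats VALUES (?, ?, ?)",
                   (time.time(), argv[0], output))
    db.commit()
    db.close()

Call it before and after each suite run (or from a background thread on 
a timer), and join against the test timings on the timestamps whenever 
you need context for an outlier.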


