Benchmarking some modules - strange result

Dan Sommers dan at tombstonezero.net
Sun Jan 25 10:46:31 EST 2015


On Sun, 25 Jan 2015 13:24:40 +0100, Peter Otten wrote:

> Dan Stromberg wrote:
> 
>> I've been benchmarking some python modules that are mostly variations
>> on the same theme.
>> 
>> For simplicity, let's say I've been running the suite of performance
>> tests within a single interpreter - so I test one module thoroughly,
>> then move on to the next without exiting the interpreter.
>> 
>> I'm finding that if I prune the list of modules down to just the best
>> performers, I get pretty different results - what was best no longer
>> is.  This strikes me as strange.
> 
>> I'm about ready to rewrite things to run each individual test in a
>> fresh interpreter. But is there a better way?
> 
> You could run combinations of two modules in the same interpreter to
> see if there are specific modules that slow down the following module.
> If you can identify such modules, you could then look into their code
> to try to find the cause of the slowdown.
> 
> Requires some work, but the results might be interesting to the
> greater public -- many real-world applications make do with a single
> interpreter ;)
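
Peter's pairwise suggestion is probably the quickest way to pin down
which module is poisoning whatever runs after it, and you can combine it
with your fresh-interpreter plan so the pairs don't contaminate each
other.  Here's a rough sketch of that idea, assuming each module under
test exposes a no-argument run() callable; the module names (mod_a,
mod_b, mod_c) are just placeholders for whatever you're benchmarking:

import itertools
import subprocess
import sys

MODULES = ["mod_a", "mod_b", "mod_c"]   # placeholder module names

# Code run inside each child interpreter: time `first`, then `second`,
# in that order, and print only the time for `second`.
CHILD = """\
import timeit
for name in ({first!r}, {second!r}):
    t = min(timeit.repeat("{{0}}.run()".format(name),
                          setup="import {{0}}".format(name),
                          repeat=5, number=1000))
print(t)
"""

def time_second_of_pair(first, second):
    out = subprocess.check_output(
        [sys.executable, "-c", CHILD.format(first=first, second=second)])
    return float(out.decode())

if __name__ == "__main__":
    for first, second in itertools.permutations(MODULES, 2):
        print("{0} run after {1}: {2:.6f}s".format(
            second, first, time_second_of_pair(first, second)))

Because every (first, second) permutation gets its own child
interpreter, only the pair under test shares any state, so a "module X
slows down whatever follows it" effect should show up directly in the
second module's times.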

I would add that your present results probably reflect the real world
better than a contrived, if more scientific and arguably more "accurate",
test would.  That tells me the variations you're seeing are down in the
noise, and that any optimizations you make on the strength of them could
be lost in any given application.

Unless, of course, you're on an academic pursuit, in which case carry
on!

Dan


