if not DEBUG: log = null_log

Steven D'Aprano steve at REMOVETHIScyber.com.au
Tue Jul 12 22:16:03 EDT 2005


On Wed, 13 Jul 2005 11:00:14 +1000, Simon Burton wrote:

> Hi,
> 
> I'm after a no-op command, so that i can redirect
> logging commands in performance critical code.
> 
> Something like this:
> 
> def log(*args): print args
> def null_log(*args): pass
> if not DEBUG: log = null_log
> 
> is unacceptable because of the overhead of calling
> functions in python.

Excuse my skepticism, but I'd like to see the profiling that you did that
shows this is a problem. Everybody thinks their code is performance
critical, even when it isn't.

So, let's do a test. First, I set up a fake object that requires lots of
name look-ups:

>>> class Placeholder:
...     pass
...
>>> obj = Placeholder()
>>> obj.method = Placeholder()
>>> obj.method.attribute = Placeholder()
>>> obj.method.attribute.record = None

Now I run a test to time those name lookups:

>>> import time
>>> def test_func_overhead(n):
...     loop = range(n)
...     t = time.time()
...     for i in loop:
...             f = obj.method.attribute.record
...     t = time.time() - t
...     return t
...
>>> test_func_overhead(100000)
0.10761499404907227
>>> test_func_overhead(100000)
0.13230800628662109
>>> test_func_overhead(100000)
0.11942911148071289

Now set up a test bypassing the name lookups:

>>> def test_func_no_overhead(n):
...     loop = range(n)
...     y = obj.method.attribute.record
...     t = time.time()
...     for i in loop:
...             f = y
...     t = time.time() - t
...     return t
...
>>> test_func_no_overhead(100000)
0.052425861358642578
>>> test_func_no_overhead(100000)
0.042248010635375977
>>> test_func_no_overhead(100000)
0.055256843566894531

So a chain of four lookups (one global name plus three attribute
lookups), performed 100,000 times, takes about 0.1 second, while a
single local lookup performed 100,000 times takes about 0.05 second.
Not a big difference. But extrapolating from four lookups down to one
(0.1/4, or roughly 0.025 second) would suggest that a global lookup is
faster than a local one, which can't be right.

Redoing test_func_overhead to make only a single lookup (changing the
reference to obj.method.attribute.record to just obj) gives very
surprising results.
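For reference, the modified function is identical apart from the name
being looked up:

>>> def test_func_overhead2(n):
...     loop = range(n)
...     t = time.time()
...     for i in loop:
...             f = obj
...     t = time.time() - t
...     return t
...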

>>> test_func_overhead2(100000)
0.041122913360595703
>>> test_func_overhead2(100000)
0.037561893463134766
>>> test_func_overhead2(100000)
0.020340204238891602

According to my testing, looking up a global variable is *faster* than
looking up a local. Hmmmm. Caching effects perhaps? 

Try the local version again:

>>> test_func_no_overhead(100000)
0.025532007217407227
>>> test_func_no_overhead(100000)
0.016258001327514648
>>> test_func_no_overhead(100000)
0.016184806823730469

Ah, that's better! At least now the local lookup is slightly faster
than the global one, but the timings are also wildly different from the
first run. Name lookup is so fast to start with, and affected by so
many external factors, that the cost-benefit equation of trying to
optimize it is not very good. There will almost certainly be more
useful things for you to spend your time on.
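
As an aside, the standard library's timeit module (new in Python 2.3)
gives more trustworthy numbers than hand-rolled loops around
time.time(), since it handles the looping and timer selection for you.
A minimal sketch, timing the same single local lookup as above:

import timeit
# y is defined in the setup string, so "f = y" times a local lookup
t = timeit.Timer("f = y", setup="y = None")
print min(t.repeat(3, 100000))  # best of 3 runs of 100,000 loops each

Taking the minimum of several repeats helps filter out exactly the sort
of system noise seen above.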

More comments below:

> log ("about to slip into python feature request mode.")
> 
> Maybe this is what the PEP 336 guy was thinking of (Make None Callable).
> Obviously we don't want None to be callable, but
> what about a "Null" [1] that's callable, with any args ?
> 
> But I guess what I am really asking for is something on the bytecode
> level that tells the VM to "do nothing". 
> 
> Here's an idea: make "pass" into an expression (a value) that is callable, 
> with any args, and returns None.
> 
> log ("finished with python feature request mode.")

And how do you use this without the overhead of function calls?

Do you sprinkle your code with:

if DEBUG:
    log("spam")
else:
    pass("spam")

?

Why not just do this?

if DEBUG: log("spam")

Which is much simpler. Or even simpler still, put the "if DEBUG" test
inside log itself, since the extra time taken in calling the function
is probably lost in the noise of the rest of your code.
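
Something like this, reusing the log and DEBUG names from your own
example:

>>> def log(*args):
...     if DEBUG:
...             print args
...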




Not related to the logging issue:

> I recently discovered "pyc" [2], but i don't quite see how i can
> use it while maintaining python source compatibility.
[snip]
> [2]: http://students.ceid.upatras.gr/~sxanth/pyc/

How fascinating. From the pyc page:

"In fact you can use pyc to re-compile your standard library and make it
about 100kB smaller."

Hmmm. Let's see now, Python 2.3.3 on Fedora Core 2:

$ du -hs /usr/lib/python2.3/
88M     /usr/lib/python2.3/

Oh joy! So by using pyc, I can save about 100 kB out of 88 MB, a
whopping 0.11% of the Python library's storage requirements!

For all I know, pyc is a very useful package, and kudos to the author for
taking the time and effort to develop it. But if this space saving is
"one good reason to use pyc" according to the author, I'm not impressed.


-- 
Steven.



