Python too slow?

Steven D'Aprano steven at REMOVE.THIS.cybersource.com.au
Wed Jan 9 21:10:56 EST 2008


On Wed, 09 Jan 2008 21:26:05 +0100, Bruno Desthuilliers wrote:

> hint: how can a compiler safely optimize anything in a language so
> dynamic that even the class of an object can be changed at runtime ?

Is that a trick question?

By finding things to optimize that *can't* change at runtime.

E.g. a peephole optimizer that detects when you write something like this:

def foo():
    x = 1 + 2
    return x

and optimizes it to this:

>>> import dis
>>> dis.dis(foo)
  2           0 LOAD_CONST               3 (3)
              3 STORE_FAST               0 (x)

  3           6 LOAD_FAST                0 (x)
              9 RETURN_VALUE

(Only in CPython version 2.5 or better.)
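You can check the folding on your own interpreter: the folded result lands in the code object's constants (on Python 3 the attribute is `__code__`; in the 2.x series it was `func_code`). A minimal sketch:

```python
import dis

def foo():
    x = 1 + 2
    return x

# If the peephole optimizer folded 1 + 2, the literal 3 appears
# directly in foo's constants and no addition happens at runtime.
assert 3 in foo.__code__.co_consts

dis.dis(foo)  # the disassembly shows LOAD_CONST 3, as in the listing above
```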

Okay, okay, so constant folding isn't going to save you a lot of time 
(unless your constant is being re-calculated millions of times), but it's 
still an optimization. There are others: some have been done (e.g. 
optimizing away the quadratic behaviour of string concatenation in some 
limited cases), some are possible but rejected by the BDFL (e.g. tail-
call optimization), and presumably some are possible but not yet 
implemented or even discovered (which is the point of the PyPy project).
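To illustrate the string-concatenation point (an illustrative sketch, not the interpreter's actual implementation): repeated `+=` on strings can copy everything built so far on each step, which is why the idiomatic fix is `''.join()`:

```python
# Quadratic in the worst case: each += may copy the whole string so far.
# (CPython sometimes resizes in place in limited cases -- that's the
# optimization mentioned above -- but the language doesn't guarantee it.)
def concat_naive(parts):
    s = ""
    for p in parts:
        s += p
    return s

# Linear: collect the pieces, copy once at the end.
def concat_join(parts):
    return "".join(parts)

parts = ["spam"] * 1000
assert concat_naive(parts) == concat_join(parts)
```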

You are correct that optimizing Python is hard. In many cases, though, 
the problem is not that Python is too slow, but that the specific 
algorithm chosen by the programmer is slow: no compiler optimization is 
going to turn an O(n**2) algorithm into an O(1) algorithm.
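A concrete (hypothetical) example of the algorithm mattering more than the compiler: membership testing in a list is O(n) per lookup, while a set lookup is O(1) on average, so swapping the data structure alone changes the complexity class:

```python
# O(len(a) * len(b)): each `in` scans the whole list b.
def common_items_quadratic(a, b):
    return [x for x in a if x in b]

# O(len(a) + len(b)): build the set once, then O(1) average lookups.
def common_items_linear(a, b):
    bset = set(b)
    return [x for x in a if x in bset]

a = list(range(0, 2000, 2))
b = list(range(0, 2000, 3))
assert common_items_quadratic(a, b) == common_items_linear(a, b)
```

No optimizer rewrites the first function into the second; that is a decision only the programmer can make.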

The Original Poster says it takes one or two seconds to process an 
800x600 GIF. That sounds believable: on my PC, it takes about five 
seconds to loop over range(800*600) and do a tiny bit of processing. 
Something like Psyco might speed that up a lot, possibly by an order of 
magnitude or two.
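For the sake of illustration, here is the kind of per-pixel loop being discussed, assuming 8-bit grayscale data (none of this is the OP's code). Psyco's trick was to compile the Python loop itself; the sketch below instead shows the related, still-available trick of pushing the per-pixel work into C with `bytearray.translate`, which gives a similar order-of-magnitude win:

```python
import time

WIDTH, HEIGHT = 800, 600
pixels = (bytearray(range(256)) * ((WIDTH * HEIGHT) // 256 + 1))[:WIDTH * HEIGHT]

# The slow way: a pure-Python loop touching all 480,000 pixels.
def invert_loop(data):
    out = bytearray(len(data))
    for i, v in enumerate(data):
        out[i] = 255 - v
    return out

# The fast way: one translate() call does the same work in C.
TABLE = bytes(255 - i for i in range(256))
def invert_translate(data):
    return data.translate(TABLE)

t0 = time.perf_counter()
a = invert_loop(pixels)
t1 = time.perf_counter()
b = invert_translate(pixels)
t2 = time.perf_counter()

assert a == b
print("loop: %.3fs  translate: %.3fs" % (t1 - t0, t2 - t1))
```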



-- 
Steven


