Why is this loop heavy code so slow in Python? Possible Project Euler spoilers
Alex Martelli
aleax at mac.com
Sun Sep 2 12:55:29 EDT 2007
Mark Dickinson <dickinsm at gmail.com> wrote:
> On Sep 2, 9:45 am, jwrweather... at gmail.com wrote:
> > [snip code]
> >
> > Thanks for that. I realise that improving the algorithm will speed
> > things up. I wanted to know why my less than perfect algorithm was so
> > much slower in python than exactly the same algorithm in C. Even when
> > turning off gcc's optimiser with the -O0 flag, the C version is
> > still >100 times quicker.
>
> Well, for one thing, you're creating half a million xrange objects in
> the course of the search. All the C code has
> to do is increment a few integers.
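[That claim is easy to measure in isolation. A quick sketch of the cost of creating half a million throwaway objects; in Python 3, `range` plays `xrange`'s role:]

```python
import time

N = 500_000  # roughly how many xrange objects the nested loops create

start = time.perf_counter()
for _ in range(N):
    range(1, 1000)  # create and immediately discard a range object
elapsed = time.perf_counter() - start

print(f"{N} range objects created in {elapsed:.3f}s "
      f"({elapsed / N * 1e9:.0f} ns each)")
```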
I don't think the creation of xrange objects is a meaningful part of
Python's execution time here. Consider:
M = 1000
solutions = [0] * M
def f2():
    "a*a + b*b precalculated"
    for a in xrange(1, M):
        a2 = a*a
        for b in xrange(1, M - a):
            s = a2 + b*b
            for c in xrange(1, M - a - b):
                if s == c*c:
                    solutions[a+b+c] += 1

def f3(M=M, solutions=solutions):
    "pull out all the stops"
    xrs = [xrange(1, k) for k in xrange(0, M+1)]
    for a in xrs[M]:
        a2 = a*a
        for b in xrs[M-a]:
            s = a2 + b*b
            for c in xrs[M-a-b]:
                if s == c*c:
                    solutions[a+b+c] += 1
import time
t = time.time()
f2()
e = time.time()
print e-t, max(xrange(M), key=solutions.__getitem__)
solutions = [0]*M
t = time.time()
f3(M, solutions)
e = time.time()
print e-t, max(xrange(M), key=solutions.__getitem__)
f2 is Arnaud's optimization of the OP's algorithm by simple hoisting; f3
further hoists the xrange creation -- it creates only 1000 such objects
rather than half a million. And yet...:
brain:~/py25 alex$ python puz.py
34.6613101959 840
36.2000119686 840
brain:~/py25 alex$
...which suggests that creating an xrange object is _cheaper_ than
indexing a list...
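[That conjecture can be checked directly with timeit. A sketch; note that on a modern Python 3 interpreter the relative costs may well differ from the 2007-era CPython timed above, so run it rather than trust it:]

```python
import timeit

M = 1000
xrs = [range(1, k) for k in range(0, M + 1)]  # precomputed, as in f3

# Cost of creating a fresh range object each time, as f2's inner loops do
t_create = timeit.timeit("range(1, 500)", number=1_000_000)

# Cost of fetching a precomputed range object from a list, as f3 does
t_index = timeit.timeit("xrs[500]", globals={"xrs": xrs}, number=1_000_000)

print(f"create: {t_create:.3f}s   index: {t_index:.3f}s")
```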
Alex
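[As the original poster acknowledged, the real win is algorithmic: the innermost loop over c only asks whether a2 + b*b is a perfect square, so it can be replaced by a single integer square root, cutting the search from O(M^3) to O(M^2). A sketch of that variant, using math.isqrt (Python 3.8+); it fills the same solutions table as f2/f3:]

```python
from math import isqrt

M = 1000
solutions = [0] * M

def f4():
    "O(M^2): replace the loop over c with one integer square root"
    for a in range(1, M):
        a2 = a * a
        for b in range(1, M - a):
            s = a2 + b * b
            c = isqrt(s)  # only candidate hypotenuse for this (a, b)
            if c * c == s and a + b + c < M:
                solutions[a + b + c] += 1

f4()
print(max(range(M), key=solutions.__getitem__))  # same answer: 840
```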
More information about the Python-list mailing list