I'm missing something here with range vs. xrange

Joe Goldthwaite joe at goldthwaites.com
Fri Dec 7 22:33:55 EST 2007


>90+ seconds?? What hardware, OS, and Python version? What else was
>running in the background?

>With this kit:
>OS Name:	Microsoft Windows XP Professional
>Version:	5.1.2600 Service Pack 2 Build 2600
>Processor:	x86 Family 15 Model 36 Stepping 2 AuthenticAMD ~1995 Mhz
>Python: Python 2.5.1 (r251:54863, Apr 18 2007, 08:51:08) [MSC v.1310
>32 bit (Intel)] on win32

>and your exact code, I get:
>xRange 4.0
>Range 5.0

>After changing time.time to time.clock, I get:
>xRange 4.00560127055
>Range 4.8927366467

>After making the "benchmark" somewhat more realistic by executing it
>inside a function, I get:
>xRange 1.86865816745
>Range 3.31902658019

>By "inside a function", I mean changing
><script>
>to

>def foo():
>    <script>
>foo()
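
For anyone joining the thread here, the benchmark under discussion was essentially of this shape (a Python 2.x sketch, not the exact code from the thread; the loop count is a stand-in, not the real figure):

import time

N = 10000000          # stand-in loop count, not the original figure

# Module-level version (the "<script>" form above).
start = time.clock()  # on Windows, time.clock() is the high-resolution
for i in xrange(N):   # timer; time.time() only ticks every ~15 ms there
    pass
print 'xRange', time.clock() - start

start = time.clock()
for i in range(N):    # range() builds the entire list in memory first
    pass
print 'Range', time.clock() - start

# The "inside a function" variant from the quote above.
def foo():
    start = time.clock()
    for i in xrange(N):
        pass
    print 'xRange Foo', time.clock() - start

foo()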

You bring up an excellent point.  It might not seem like it from the numbers,
but I'm actually running on a MacBook Pro with an Intel Core 2 Duo at 2.33
GHz and 2 GB of RAM.  The problem is, I'm running my benchmarks under Windows
inside a VMware Fusion virtual machine.

I'm also running the Folding@home Mac SMP client on the other side.  The fah
client is supposed to yield to all other processes, but it doesn't seem to be
very well behaved on the Mac.

For giggles, I re-ran the benchmarks to see what the impact of the various
pieces was.

The original run, under a VMware Fusion VM with fah running:
	xRange 92.5529999733
	Range 95.2669999599

With the fah client disabled:
	Range 54.8080000877
	xRange 54.5390000343

Running on the Mac OS X side:
	Range 4.10751199722
	xRange 2.72287797928

Running on the Mac with the benchmark in a function, like you did:
	Range Foo 2.57200694084
	xRange Foo 1.33577299118

There are a number of surprises for me here.  I knew that the fah client
would affect performance, but I didn't know it would be by that much.  It's
been running on this machine since the machine was new, and I've never really
run any benchmarks without it.  I've benchmarked how the fah client affects
other virtual machines on my Windows computers, and there's virtually no
difference. (Pardon the pun.)

I'm also surprised that running the code as a function was so much quicker.
It must have something to do with how the source is compiled into bytecode,
but I wouldn't have thought it would make anywhere near that big a
difference.
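
From what I understand, the usual explanation is that a loop index inside a
function lives in a fast local-variable slot, while at module level every
assignment to it is a dictionary operation on the module's namespace.  The
dis module shows the difference (my own illustration, not from the thread):

import dis

# At module level the index is stored with STORE_NAME,
# a dictionary write, on every iteration.
dis.dis(compile("for i in xrange(10): pass", '<string>', 'exec'))

def foo():
    for i in xrange(10):
        pass

# Inside a function the index uses STORE_FAST,
# a plain array-slot write.
dis.dis(foo)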

What really surprises me is how big the performance penalty is when running a
Windows virtual machine on a Mac.  I had suspected as much.  I had a process
that took over five minutes to run on my machine.  I installed it on an
end-user's PC, telling her that her old, slow machine would probably take
fifteen minutes to run it.  It finished before I'd finished explaining how
long it was going to take.  She found the stupid look on my face very
entertaining and had a good laugh over it.

The strange thing is, I have lots of Python processes running under VMware on
Windows hosts, and in all my benchmarking I've only seen a 5% to 10%
performance penalty.  It's disappointing that the Mac-based VMs seem to do so
badly.  (I was running this VM under Parallels before; I converted it to
VMware last week.  The performance under both was about the same.)

If I get a chance, I'll move this virtual machine over to a Windows box to
see how it will work over there.



