[python-uk] urllib latency

Matt Hamilton matth at netsight.co.uk
Mon Dec 20 14:54:13 CET 2010


OK, here's a non-job-related post...

Does anyone know why urllib.urlopen() can be so much slower than using ab (ApacheBench) to do the same thing? I seem to be getting an extra ~100ms of latency on a simple HTTP GET request for a small, static image.

e.g.

>>> from time import time
>>> from urllib import urlopen
>>> for x in range(10):
...   t1 = time(); data = urlopen('http://example.com/kb-brain.png').read(); t2 = time(); print t2 - t1
... 
0.12966299057
0.131743907928
0.303734064102
0.136001110077
0.136011838913
0.13859796524
0.13979101181
0.145252943039
0.145727872849
0.150994062424

versus, on the same machine:

dhcp90:funkload netsight$ ab -n10 -c1 http://example.com/kb-brain.png

...
Concurrency Level:      1
Time taken for tests:   0.309 seconds
Complete requests:      10
Failed requests:        0
Write errors:           0
Total transferred:      31870 bytes
HTML transferred:       28990 bytes
Requests per second:    32.32 [#/sec] (mean)
Time per request:       30.942 [ms] (mean)
Time per request:       30.942 [ms] (mean, across all concurrent requests)
Transfer rate:          100.59 [Kbytes/sec] received
...

I've tried this repeatedly and get consistent results. The server under test is a cluster of Plone instances behind haproxy. The client and server are connected via a fairly lightly loaded 100Mbit network.
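For comparison, a plain-socket version of the same GET would take urllib out of the picture entirely. Rough, untested sketch (same host and path as above):

import socket
from time import time

HOST, PATH = 'example.com', '/kb-brain.png'
REQUEST = 'GET %s HTTP/1.0\r\nHost: %s\r\n\r\n' % (PATH, HOST)

for x in range(10):
    t1 = time()
    s = socket.create_connection((HOST, 80))
    s.sendall(REQUEST)
    chunks = []
    while True:
        chunk = s.recv(8192)   # HTTP/1.0, so the server closes when done
        if not chunk:
            break
        chunks.append(chunk)
    s.close()
    print time() - t1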

I've tried taking the read() call out, and it's still the same... I've tried urllib2, and it's still pretty much the same.
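If anyone wants to dig further, something like this (untested sketch) splits a single request into connect / send / wait / read phases with httplib, which should show where the extra ~100ms is going:

from time import time
import httplib

def timed_get(host, path):
    t0 = time()
    conn = httplib.HTTPConnection(host)
    conn.connect()                  # TCP handshake
    t1 = time()
    conn.request('GET', path)      # send request line + headers
    t2 = time()
    resp = conn.getresponse()      # wait for status line + headers
    t3 = time()
    body = resp.read()             # read the body
    t4 = time()
    conn.close()
    print 'connect %.3f send %.3f wait %.3f read %.3f total %.3f' % (
        t1 - t0, t2 - t1, t3 - t2, t4 - t3, t4 - t0)

for x in range(10):
    timed_get('example.com', '/kb-brain.png')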

-Matt


-- 
Matt Hamilton                                         matth at netsight.co.uk
Netsight Internet Solutions, Ltd.          Business Vision on the Internet
http://www.netsight.co.uk                               +44 (0)117 9090901
Web Design | Zope/Plone Development and Consulting | Co-location | Hosting


