urllib's performance
Mikhail Sobolev
mss at transas.com
Sun May 13 14:38:47 EDT 2001
In article <9dbmuo$978$1 at harvester.transas.com>,
Michael Sobolev <mss at transas.com> wrote:
>Two days ago I tried to create a small script to help me estimate the
>performance of a web system we created. But I found that the Python script
>could not issue as many requests as a plain shell loop around wget. That was
>a little surprising, as I thought Python's performance was rather good.
>Maybe I am just missing something. I am using Python 1.5.2 and the urllib
>that comes with it.
Just to give a little more background. I have a server program that can
create a number of worker threads. What I wanted to look at is how the number
of worker threads and the number of concurrent clients correlate. So my
script looked like:
import thread

for thread_no in range (1, 21):     # 20 seems to be a reasonable limit, as
                                    # each worker thread on the server uses a
                                    # significant amount of memory
    for client_no in range (1, 21): # maybe 20 is not sufficient, but let's
                                    # have a look at that many concurrent
                                    # clients
        for client in range (1, client_no+1):
            thread.start_new (client_proc, args)
        wait_for_threads_to_finish ()
And the problem is that the request rate achieved by each client_proc is not
sufficient.
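For what it's worth, the per-client loop above can be sketched with the
higher-level threading module, where joining the thread objects replaces the
wait_for_threads_to_finish() step. This is only a minimal sketch in modern
Python (urllib.request instead of the 1.5.2 urllib); the target URL and the
per-client request count are placeholders, and the rate reported is simply
requests divided by elapsed wall-clock time per client.

```python
import threading
import time
import urllib.request

def client_proc(url, n_requests, results, idx):
    # Each client fetches the URL n_requests times and records its own
    # request rate (requests per second) in the shared results list.
    start = time.time()
    for _ in range(n_requests):
        with urllib.request.urlopen(url) as resp:
            resp.read()
    elapsed = time.time() - start
    results[idx] = n_requests / elapsed

def run_clients(url, client_no, n_requests=10):
    # Start client_no concurrent clients, then wait for all of them;
    # the join() loop plays the role of wait_for_threads_to_finish().
    results = [0.0] * client_no
    threads = [threading.Thread(target=client_proc,
                                args=(url, n_requests, results, i))
               for i in range(client_no)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

Comparing the rates this reports against a shell loop around wget on the same
URL should show whether the slowdown is in urllib itself or elsewhere.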
Thank you in advance,
--
Misha