python thread scheduler?

project2501 project2501 at project2501.cor
Thu Apr 8 04:21:44 EDT 2004


At the bottom is a reply I got on the comp.programming.threads group... it
suggests there is no programmatic way to improve the client request rate on
a uni-processor machine... do you agree? I think I agree with the person
suggesting the only solution is more horsepower.

Will Twisted help even after this consideration? I had a look at Twisted a
while back (re: asyncore) and it seemed overly complex...
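
For context, the trick Twisted (and asyncore) is built on can be sketched
in a few lines: non-blocking sockets multiplexed with select(), so a single
thread keeps many requests in flight with no per-thread overhead. This is
only a rough illustration on a newer Python; the host, port, request, and
counts are made up and there is no error handling:

import select
import socket
import time

HOST, PORT = "127.0.0.1", 8080    # hypothetical server under test
CONCURRENCY = 50                  # requests kept in flight at once
TOTAL = 1000                      # total requests to issue
REQUEST = b"GET / HTTP/1.0\r\nHost: %s\r\n\r\n" % HOST.encode()

def new_conn():
    # Start a non-blocking connect; select() reports the socket writable
    # once the connection is established.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setblocking(False)
    s.connect_ex((HOST, PORT))
    return s

pending_write = {new_conn(): REQUEST for _ in range(CONCURRENCY)}
pending_read = {}
done = 0
start = time.time()

while done < TOTAL:
    readable, writable, _ = select.select(
        list(pending_read), list(pending_write), [], 1.0)
    for s in writable:
        s.sendall(pending_write.pop(s))   # small request; assume one send
        pending_read[s] = b""
    for s in readable:
        chunk = s.recv(4096)
        if chunk:
            pending_read[s] += chunk
        else:                             # server closed: response complete
            del pending_read[s]
            s.close()
            done += 1
            pending_write[new_conn()] = REQUEST   # keep the pipeline full

print("%d requests in %.2fs" % (done, time.time() - start))

Whether that actually raises the offered request rate still comes down to
the point in the reply below: the client CPU has to keep up.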


On Wed, 07 Apr 2004 16:20:19 +0000, exarkun wrote:

> [The Python interpreter] utilizes a global lock which prevents more than
> one Python thread from running at any particular time.  You will not see
> performance scale well with large numbers of threads (indeed, performance
> will probably be worse).
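
To see what that global lock means in practice, here is a small
demonstration of my own (standard CPython build assumed): a CPU-bound loop
split across two threads takes about as long as, or longer than, running
it once in a single thread.

import threading
import time

def spin(n):
    while n:                 # pure-Python busy loop; holds the GIL
        n -= 1

N = 10_000_000

start = time.time()
spin(N)
print("single thread: %.2fs" % (time.time() - start))

start = time.time()
threads = [threading.Thread(target=spin, args=(N // 2,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("two threads:   %.2fs" % (time.time() - start))

The full comp.programming.threads reply follows.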

Subject:      Re: threads not switching fast enough on a 1-cpu system?
From:         steve at nospam.Watt.COM (Steve Watt)
Newsgroups:   comp.programming.threads
Date:         Wed, 7 Apr 2004 19:31:31 GMT

In article <pan.2004.04.07.15.10.17.891000 at project2501.cor>,
  project2501  <project2501 at project2501.cor> wrote:
>i'm trying to benchmark some server software. simply throwing requests at it
>sequentially, however small the interval, is not stressing the server.

That means your server is able to serve whatever you're using as a test
client.  Probably good news.

>fork()ing children to throw requests quickly fills up memory and swap
>until the client machine breaks.

Why do you think making more processes will make more requests per
unit time?

>threading (using python and also stackless python for now) lets me have
>plenty of threads (i've tried up to 100). however my response time graphs
>are flat (although more threads means a higher flat graph)...

Why do you think making more threads will make more requests per
unit time?

>... which indicates that the bottleneck being measured is the thread
>switching... and that not enough threads are actually running "parallel"
>enough.

No, it indicates that it takes no longer to service a request than it
does to generate it.  If you've got one client machine and one server
machine, your server is probably faster than your client, or your
requests are easy for the server.

You need more client horsepower.  It is not uncommon, when doing big
load testing, to use a dozen or more machines against a single server to
really exercise the overload behavior.

Creating threads or processes does not create computing power.  In fact,
it ALWAYS* reduces the amount of CPU available to user code, because
of the increased state maintenance.

So you need to procure more CPU cycles from somewhere, or make requests
that take the server longer to complete.

* OK, SMP machines are allowed a thread/process per CPU.
-- 
Steve Watt KD6GGD  PP-ASEL-IA          ICBM: 121W 56' 57.8" / 37N 20' 14.9"
 Internet: steve @ Watt.COM                         Whois: SW32
   Free time?  There's no such thing.  It just comes in varying prices...
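
For what it's worth, Steve's footnote (one thread or process per CPU) maps
to something like the following on a newer Python. The multiprocessing
module postdates this thread, and the host, port, and request counts below
are placeholders:

import multiprocessing
import socket
import time

HOST, PORT = "127.0.0.1", 8080    # hypothetical server under test

def worker(n_requests):
    """Issue n_requests simple HTTP requests, one at a time, and return
    the wall-clock time this process spent."""
    start = time.time()
    for _ in range(n_requests):
        s = socket.create_connection((HOST, PORT))
        s.sendall(b"GET / HTTP/1.0\r\nHost: 127.0.0.1\r\n\r\n")
        while s.recv(4096):       # drain the response until the server closes
            pass
        s.close()
    return time.time() - start

if __name__ == "__main__":
    ncpu = multiprocessing.cpu_count()
    per_worker = 500
    # One worker process per CPU: separate interpreters, so no shared GIL.
    with multiprocessing.Pool(ncpu) as pool:
        times = pool.map(worker, [per_worker] * ncpu)
    print("%d requests from %d processes, slowest worker: %.2fs"
          % (ncpu * per_worker, ncpu, max(times)))

Past one worker per CPU you are back to his main point: only more client
machines, or requests that cost the server more, will stress it harder.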



