xmlrpclib/server not reusing connections

Dave Brueck dave at pythonapocrypha.com
Tue Feb 24 12:36:19 EST 2004


Roger wrote:
> > You might have better luck with Twisted http://twistedmatrix.com/.
>
> I did look at twisted but really don't like it.  In particular I really
> don't like the event driven code, that has to add callbacks and
> deal with partial state when called.
>
> That is certainly the right way to do things if you want to avoid
> using threads.  It is however complicated and convoluted.  See this
> example:
>
> http://twistedmatrix.com/documents/howto/tutorial#auto20
>
> The model I far prefer is to use multiple threads (available on all
> major Python platforms) with worker threads pulling from work queues.
> They turn out to be simpler since they don't have to queue callbacks
> or effectively become a glorified state machine scattered across
> several functions.
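The worker-threads-with-work-queues model Roger describes can be sketched like this (modern module names; the squaring is just a stand-in for real work):

```python
import queue
import threading

def worker(tasks, results):
    """Pull work items off the queue until a None sentinel arrives."""
    while True:
        item = tasks.get()
        if item is None:              # sentinel: shut this worker down
            tasks.task_done()
            break
        results.put(item * item)      # stand-in for real work
        tasks.task_done()

tasks = queue.Queue()
results = queue.Queue()

# Start a small pool of worker threads.
pool = [threading.Thread(target=worker, args=(tasks, results))
        for _ in range(4)]
for t in pool:
    t.start()

for n in range(10):
    tasks.put(n)
for _ in pool:                        # one sentinel per worker
    tasks.put(None)

tasks.join()                          # wait until every item is processed
for t in pool:
    t.join()

squares = sorted(results.get() for _ in range(10))
print(squares)
```

Each worker's logic stays in one straight-line function, which is exactly the readability win over callback-style code.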

Yes, but the main drawback is of course scalability. You can have the best of
both worlds, though, with Stackless Python's tasklets. A few years ago I wrote
an event-based I/O framework for my company using normal Python, and while it's
nice and fast, programs built on it are ugly (logic split across many callback
functions, state maintained in an awkward way as a result, etc.). Over the past
week or so I've started writing a replacement framework using Stackless and
it's been *so* nice... the applications that use it _look_ like they are
threaded, but you get the performance and scalability of "traditional" async
I/O frameworks (the core of the framework still uses a loop to call
select.poll).
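The core loop Dave mentions would presumably look something like the following sketch (a toy, not his actual framework: one poll object multiplexes every socket, and readiness events are dispatched to per-fd handlers; the echo handler and the `handlers` dict are made up for illustration):

```python
import select
import socket

# A connected pair of sockets stands in for a real client connection.
a, b = socket.socketpair()
b.sendall(b'ping')

poller = select.poll()
poller.register(a.fileno(), select.POLLIN)

handlers = {a.fileno(): a}       # fd -> socket (real code maps fd -> callback)

# One iteration of the event loop: wait up to 1s for readiness,
# then dispatch each ready fd to its handler (here, a simple echo).
events = poller.poll(1000)
for fd, flag in events:
    if flag & select.POLLIN:
        data = handlers[fd].recv(4096)
        handlers[fd].sendall(data)   # echo the data back

echoed = b.recv(4096)
print(echoed)
a.close(); b.close()
```

With Stackless, instead of calling a registered callback, this loop would wake the tasklet blocked on that socket, so the application code reads as a plain sequence of `Read`/`Write` calls.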

Here's one of my test cases, a server that sends back a dummy HTTP response:

def ConnHandler(sock):
    header = sock.Read(4096)
    resp = ('HTTP/1.0 200 Ok\r\n'
            'Content-length: 5\r\n'
            'Content-type: text/html\r\n'
            '\r\n'
            '12345')
    sock.Write(resp)
    sock.Close()

engine = Engine()
s = ListenSocket('127.0.0.1', 7777, ConnHandler)
s.AttachToEngine(engine)
engine.Run()

On a 900MHz P3 Linux box the above server easily handles 2000 requests per
second, which is nice. :)
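For comparison, here is the same dummy responder in plain stdlib form, with one thread serving a single connection (`Engine` and `ListenSocket` above are from my framework, not the standard library; port 0 lets the OS pick a free port):

```python
import socket
import threading

RESPONSE = (b'HTTP/1.0 200 Ok\r\n'
            b'Content-length: 5\r\n'
            b'Content-type: text/html\r\n'
            b'\r\n'
            b'12345')

def serve_one(listener):
    """Accept a single connection and send the canned response."""
    conn, _ = listener.accept()
    conn.recv(4096)               # read (and ignore) the request
    conn.sendall(RESPONSE)
    conn.close()

listener = socket.socket()
listener.bind(('127.0.0.1', 0))   # port 0: let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]

t = threading.Thread(target=serve_one, args=(listener,))
t.start()

client = socket.create_connection(('127.0.0.1', port))
client.sendall(b'GET / HTTP/1.0\r\n\r\n')
reply = b''
while True:                       # read until the server closes the connection
    chunk = client.recv(4096)
    if not chunk:
        break
    reply += chunk
client.close()
t.join()
listener.close()
print(reply)
```

The thread-per-connection version is just as readable, but each connection costs an OS thread, which is where the scalability argument above comes in.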

Although obviously a very trivial example, it's a good enough proof of concept
to encourage me to continue. I haven't tested high levels of concurrent
connections yet, but a similar test that sent back larger responses had no
trouble serving well over 1000 simultaneous connections, so in the worst case
using Stackless won't have any performance drawbacks compared to other
approaches.

-Dave





More information about the Python-list mailing list