[Chicago] Facebook open sources FriendFeed's real-time Python web framework, Tornado

Massimo Di Pierro mdipierro at cs.depaul.edu
Sat Sep 19 10:25:37 CEST 2009


Here is a better version:

     http://web2py.com/examples/static/sneaky.py

and a Python 3.0 version:

     http://web2py.com/examples/static/sneaky3.py

They may still need some work, but they seem to run fine. I have not  
yet tested SSL or chunked uploads.
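(For anyone who wants to try these servers out: they host ordinary WSGI
applications. The sketch below is generic and uses the stdlib's wsgiref
server as a stand-in rather than sneaky.py's own API, which I haven't
reproduced here; the same `app` callable should work with any of the
servers discussed in this thread.)

```python
# A minimal WSGI app of the kind these servers host. wsgiref is only a
# stdlib stand-in so the example runs anywhere; swap in sneaky.py (or
# CherryPy's wsgiserver) to serve the same app.
import threading
import urllib.request
from wsgiref.simple_server import make_server

def app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hello from a WSGI app']

server = make_server('127.0.0.1', 0, app)   # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

body = urllib.request.urlopen(
    'http://127.0.0.1:%d/' % server.server_port).read()
print(body.decode())  # hello from a WSGI app
server.shutdown()
```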

Massimo


On Sep 18, 2009, at 8:45 PM, Massimo Di Pierro wrote:

> Thank you Garrett for the tests.
>
> Can you tell us what hardware/OS you used for the test?
>
> Massimo
>
> On Sep 18, 2009, at 6:41 PM, Garrett Smith wrote:
>
>> On Fri, Sep 18, 2009 at 3:03 PM, Massimo Di Pierro
>> <mdipierro at cs.depaul.edu> wrote:
>>> I can contribute one more:
>>>
>>>    http://web2py.com/examples/static/web2pyserver.py
>>>
>>> - api compatible with cherrypy and very much inspired by it.
>>> - works with cherrypy ssl_handler (to be tested) but will soon have
>>> its own.
>>> - multithreaded
>>> - can handle requests and responses via chunking (like cherrypy)
>>> (but not
>>> tested yet!)
>>> - should work with python 3 (but not tried yet!)
>>> - 30-50% faster than cherrypy in my tests.
>>>
>>> I could use some independent tests and benchmarks.
>>
>> I've confirmed these results. The new web2py WSGI server is a playah!
>>
>> web2pyserver is on par with CherryPy at lowish levels of concurrency
>> (< 1000) but is far better at handling very high levels of concurrent
>> requests (> 10,000). I was skeptical of Massimo's 50% faster claims,
>> but the difference shows up at these very high levels.
>>
>> It's *very* comparable, per my benchmarks, to Tornado. And it's
>> threaded.
>>
>> I hope word gets out about this.
>>
>> As folks have been saying here for a while now -- it's really not a
>> good idea to plunge into async/event concurrency models for web apps.
>> Hell, even crazy highly concurrent apps like instant messaging or
>> simulators can probably get by perfectly well using threads, provided
>> the stack size is kept reasonable.
>>
>> If you need a hundred thousand concurrent, long running processes,
>> fine. But you'll probably want something like Stackless anyway. Or
>> Erlang :)
>>
>> Nice work Massimo!
>>
>> P.S. The LGPL is kind of a pain for people that want to throw this
>> module into their source tree and not worry about the particulars of
>> that license. Massimo, if you plan to keep this as a single module
>> (hope so), would you consider an alternative or dual license?
>> _______________________________________________
>> Chicago mailing list
>> Chicago at python.org
>> http://mail.python.org/mailman/listinfo/chicago
>
