best way to serve wsgi with multiple processes

alex goretoy aleksandr.goretoy at gmail.com
Wed Feb 11 15:35:52 EST 2009


GAE (Google App Engine) uses WSGI for webapps. You also avoid the
overhead of managing a server and all of its services this way; you just
manage DNS entries. There are limitations, though, depending on which
libraries your project needs.

appengine.google.com

-Alex Goretoy
http://www.goretoy.com



On Wed, Feb 11, 2009 at 1:59 PM, Graham Dumpleton <
Graham.Dumpleton at gmail.com> wrote:

> On Feb 11, 8:50 pm, Robin <robi... at gmail.com> wrote:
> > Hi,
> >
> > I am building some computational web services using soaplib. This
> > creates a WSGI application.
> >
> > However, since some of these services are computationally intensive,
> > and may be long running, I was looking for a way to use multiple
> > processes. I thought about using multiprocessing.Process manually in
> > the service, but I was a bit worried about how that might interact
> > with a threaded server (I was hoping the thread serving that request
> > could just wait until the child is finished). Also it would be good to
> > keep the services as simple as possible so it's easier for people to
> > write them.
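> >
> > A rough sketch of what I mean (assuming each request is served on its
> > own thread; compute() here just stands in for the real service code):
> >
> >     from multiprocessing import Process, Queue
> >
> >     def compute(args, q):
> >         # runs in the child process; put the answer on the queue
> >         q.put(sum(args))  # placeholder for the real computation
> >
> >     def handle_request(args):
> >         q = Queue()
> >         p = Process(target=compute, args=(args, q))
> >         p.start()
> >         result = q.get()  # the serving thread just blocks here
> >         p.join()          # reap the child after draining the queue
> >         return result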
> >
> > I have at the moment the following WSGI structure:
> > TransLogger(URLMap(URLParser(soaplib objects)))
> > although presumably, due to the beauty of WSGI, this shouldn't matter.
> >
> > As I've found with all web-related Python stuff, I'm overwhelmed by
> > the choice and number of alternatives. I've so far been using cherrypy
> > and ajp-wsgi for my testing, but am aware of Spawning, twisted etc.
> > What would be the simplest [quickest to set up and fewest details of
> > the server required - ideally with a simple example] and most reliable
> > [this will eventually be 'in production' as part of a large scientific
> > project] way to host this sort of WSGI with a process-per-request
> > style?
>
> In this sort of situation one wouldn't normally do the work in the
> main web server, but have a separate long running daemon process
> embedding a mini web server that understands XML-RPC. The main web
> server would then make XML-RPC requests against the backend daemon
> process, which would use threading and/or queueing to handle the
> requests.
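>
> A minimal sketch of such a backend daemon, using only the standard
> library; the submit/progress method names and the port number are
> made up for illustration:
>
>     import threading, uuid, Queue
>     from SimpleXMLRPCServer import SimpleXMLRPCServer
>
>     work_queue = Queue.Queue()
>     status = {}  # job id -> "queued" / "running" / "done"
>
>     def do_computation(args):
>         import time
>         time.sleep(60)  # placeholder for the real long-running work
>
>     def worker():
>         # single worker thread pulling jobs off the queue
>         while True:
>             job_id, args = work_queue.get()
>             status[job_id] = "running"
>             do_computation(args)
>             status[job_id] = "done"
>
>     def submit(args):
>         # acknowledge immediately; the worker does the work later
>         job_id = str(uuid.uuid4())
>         status[job_id] = "queued"
>         work_queue.put((job_id, args))
>         return job_id
>
>     def progress(job_id):
>         return status.get(job_id, "unknown job")
>
>     t = threading.Thread(target=worker)
>     t.setDaemon(True)
>     t.start()
>
>     server = SimpleXMLRPCServer(("localhost", 8000))
>     server.register_function(submit)
>     server.register_function(progress)
>     server.serve_forever()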
>
> If the work is indeed long running, the backend process would normally
> just acknowledge the request and not wait. The web page would return
> and it would be up to the user to then somehow occasionally poll the
> web server, manually or by AJAX, to see how progress is going. That is,
> further XML-RPC requests from the main server to the backend daemon
> process asking about progress.
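>
> From the main web server's side that might look something like this
> (again just a sketch; the URL and method names match the backend
> sketch above):
>
>     import xmlrpclib
>
>     backend = xmlrpclib.ServerProxy("http://localhost:8000")
>
>     # in the request handler: kick the job off and return at once
>     job_id = backend.submit(["some", "parameters"])
>
>     # in a later request (manual reload or AJAX): check on it
>     state = backend.progress(job_id)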
>
> I don't believe the suggestions about fastcgi/scgi/ajp/flup or mod_wsgi
> are really appropriate, as you don't want this done in web server
> processes: you would be at the mercy of web server processes being
> killed or dying partway through something. Some of these systems
> will do this if requests take too long. Thus it is better to offload
> the real work to another process.
>
> Graham
> --
> http://mail.python.org/mailman/listinfo/python-list
>

