[melbourne-pug] Spawn new process to handle inbound websocket connection

Tim Richardson tim at growthpath.com.au
Mon Jul 23 18:32:08 EDT 2018


I'm sure lots of people will give this sort of answer:
a) You normally don't want a web server keeping connections open for
minutes while it processes a request, although if you were sending progress
messages through the websocket I guess it's OK.
b) To handle large connection numbers, you want to use asyncio and aiohttp,
which builds on top of it. This gives you a pure Python websocket server
which can handle lots of connections (a minimal sketch follows below).
Process- or thread-spawning servers don't scale.
I made a toy implementation here to learn about it:
https://github.com/timrichardson/asyncio_bus_timetable
In production, most people would run it behind nginx, which takes your
solution to serious volume.
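
For what it's worth, the core of an aiohttp websocket server is just a
handler coroutine, roughly like the sketch below (the /ws path, the port
and the send_str reply are my own placeholders, not anything from your
setup):

from aiohttp import web, WSMsgType

async def ws_handler(request):
    ws = web.WebSocketResponse()
    await ws.prepare(request)
    async for msg in ws:
        if msg.type == WSMsgType.BINARY:
            # A PNG frame would arrive here; hand it off to a worker
            # rather than doing CPU-bound work in the event loop.
            await ws.send_str("received %d bytes" % len(msg.data))
        elif msg.type == WSMsgType.ERROR:
            break
    return ws

app = web.Application()
app.add_routes([web.get("/ws", ws_handler)])

if __name__ == "__main__":
    web.run_app(app, port=8080)

One process like that can multiplex a great many open sockets, because it
never blocks on any single connection.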

The solution to (a) is to use a background task processor. The most common
Python choice is probably Celery (to do the work; it can spawn multiple
worker processes) plus Redis (as the broker they communicate through).
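
The Celery side is only a few lines. A rough sketch (the module name,
broker URL and the process_png task are illustrative assumptions, not
anything from your setup):

from celery import Celery

app = Celery("image_tasks", broker="redis://localhost:6379/0")

@app.task
def process_png(png_path):
    # The CPU-bound work runs here, in a Celery worker process, not in
    # the websocket server's event loop. A file path is passed rather
    # than raw bytes because Celery's default JSON serializer won't
    # accept bytes.
    with open(png_path, "rb") as f:
        data = f.read()
    # ... run the sequence of processing commands over `data` ...
    return len(data)

The websocket handler then just calls process_png.delay(path) and returns
immediately instead of doing the work itself.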

Although arguably, if you are going to be CPU bound anyway, maybe this
conventional advice isn't going to help much.
Because you are CPU bound and each request takes two to three minutes, I
can't see how you are going to reach thousands of connections on one
machine. The approach I sketched can scale across multiple machines: nginx
can proxy to multiple websocket server processes, and you can distribute
Celery/Redis.
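
Distributing the Celery side is mostly configuration: every machine runs
the same app pointed at one shared Redis broker (the hostname below is
made up for illustration):

from celery import Celery

# Same app definition on every worker machine; only the broker URL matters.
app = Celery("image_tasks", broker="redis://broker-host.example:6379/0")

# Each box then starts workers with something like:
#   celery -A image_tasks worker --concurrency=8
# The websocket servers only enqueue tasks; whichever worker is free, on
# whichever machine, picks them up from Redis.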

You can go to Upwork, find an expert, specify that you want a Python
solution, and you'll learn a lot from them for $1000 or so.





On Tue, 24 Jul 2018 at 07:27, Andrew Stuart <
andrew.stuart at supercoders.com.au> wrote:

> I have servers that send a sequence of PNG images which need to be
> processed via a sequence of commands.
>
> My idea is this:
>
> A Python websockets server is listening for connections from the servers.
> When a websocket connection is received, a new Python process is spawned -
> presumably using https://docs.python.org/3/library/multiprocessing.html
> and somehow the websocket is given to that process (how?).  The spawned
> process listens to the websocket, receives the PNG images and then
> processes them.  After completion, the spawned process dies and the
> websocket is closed. So the spawned Python processes might run for up to a
> few minutes each.
>
> Is this a workable idea in Python?
>
> One open question is how would I hand the websocket to the spawned process?
>
> Also, could such an approach support tens, hundreds, or thousands of
> connections on, for example, an 8-core machine with 16 GB of RAM? (I know
> this is "how long is a piece of string", but it's just a finger-in-the-air
> guess to make sure there's no huge gotcha in there that would impose a huge
> constraint and invalidate the idea.)
>
> I was looking at this gist as inspiration for a start point
> https://gist.github.com/gdamjan/d7333a4d9069af96fa4d
>
> Any ideas/feedback valued.
>
> thanks
>
> Andrew


-- 


*Tim Richardson CPA, Director*
GrowthPath. Finance transformation for SMEs via Cloud ERP, advanced
reporting, CRM

Mobile: +61 423 091 732 Office: +61 3 8678 1850.
Chat via https://hangouts.google.com or Skype: te.richardson

GrowthPath Pty Ltd
ABN 76 133 733 963




<http://www.growthpath.com.au/>