asyncio question

Ian Kelly ian.g.kelly at gmail.com
Tue Dec 13 10:37:34 EST 2016


On Tue, Dec 13, 2016 at 6:15 AM, Frank Millman <frank at chagford.com> wrote:
> The client uses AJAX to send messages to the server. It sends the message
> and continues processing, while a background task waits for the response and
> handles it appropriately. As a result, the client can send a second message
> before receiving a response to the first one. The server can detect this,
> but it cannot wait for the first message to complete, otherwise it will
> block other clients. I have not noticed any problems with processing 2
> requests from the same client concurrently, but I don't like it, so I want
> to process them sequentially.

Is there a particular reason why you're worried about this? The
browser is perfectly capable of keeping the requests straight. Also,
note that since the requests use separate HTTP connections, even if
the server sends its responses in a particular order, there's no
guarantee that the client will read them in that order, so this
doesn't free you from the need to allow the client to handle the
requests coming back in any order.

> Here is my solution. As I create each Session instance, I set up a
> background task, using asyncio.ensure_future, which sets up an
> asyncio.Queue. The request handler identifies the session that the request
> belongs to, and 'puts' the request onto that session's Queue. The background
> task runs a 'while True' loop waiting for requests. As they come in it
> 'gets' them and processes them. It seems to work.
>
> This means that I have a background task running for each concurrent user.
> Each one will be idle most of the time. My gut-feel says that this will not
> cause a problem, even if there are hundreds of them, but any comments will
> be welcome.
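
The pattern you describe could be sketched roughly like this (the class and
method names here are hypothetical, just to make the shape concrete):

```python
import asyncio

class Session:
    """One per connected client; requests are processed one at a time."""

    def __init__(self):
        self.queue = asyncio.Queue()
        # Background task per session, as described above.
        # Must be created while the event loop is running.
        self.task = asyncio.ensure_future(self._worker())
        self.handled = []

    async def _worker(self):
        while True:
            request = await self.queue.get()
            # Placeholder for the real request handler; because there is
            # only one worker per session, requests run sequentially.
            self.handled.append(request)
            self.queue.task_done()

async def main():
    session = Session()
    for req in ("first", "second", "third"):
        await session.queue.put(req)   # the request handler would do this
    await session.queue.join()         # wait until the worker drains the queue
    session.task.cancel()
    return session.handled

print(asyncio.run(main()))
```

Since a single worker consumes the queue, requests from one session can never
overlap, while requests from different sessions still run concurrently.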

In a 64-bit Linux build of CPython, the combined size of a generator
and a stack frame is around half a kilobyte (not including whatever
space is needed for local variables), so hundreds of yielding asyncio
tasks should consume no more than hundreds of kilobytes of memory.
Remember that the half-kilobyte figure is per generator, not per task,
so if your while loop is four coroutines deep, that will inflate the
cost to around two kilobytes per task. This is different from the
threading model where each thread would need its own separate stack
space, not just a frame on the heap; you probably wouldn't want to do
this with threads, but with coroutines it should be fine.
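
As a rough illustration of that figure (the exact numbers vary by CPython
version and platform, so treat this as a sketch rather than a benchmark):

```python
import sys

def coro():
    yield  # simplest possible generator

g = coro()
g.send(None)  # advance to the yield so the frame is live

# Approximate per-generator overhead: the generator object plus its frame.
# This excludes space for local variables, as noted above.
overhead = sys.getsizeof(g) + sys.getsizeof(g.gi_frame)
print(f"~{overhead} bytes per suspended generator")
```

Multiplying that by the nesting depth and the number of concurrent sessions
gives a ballpark for the memory cost of the idle background tasks.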
