python - handling HTTP requests asynchronously

justin walters walters.justin01 at gmail.com
Fri May 6 11:23:01 EDT 2016


On Thu, May 5, 2016 at 11:56 PM, <lordluke80 at gmail.com> wrote:

> Hi everyone,
> I need to generate a PDF report for each entry of a django queryset.
> There'll be between 30k and 40k entries.
>
> The PDF is generated through an external API. Since it's currently generated
> on demand, this is handled synchronously via an HTTP request/response. That
> will be different for this task, since I think I'll use a django management
> command to loop through the queryset and perform the PDF generation.
>
> Which approach should I follow for this task? I thought about 3 possible
> solutions, although they are technologies I've never used:
>
> 1) Celery: assign a task (an HTTP request with a different payload) to a
> worker, then retrieve it once it's done.
>
> 2) requests-futures: using requests in a non-blocking way.
>
> 3) multiprocessing module, with e.g. a limit of 10 workers.
>
>
> The goal is to use the API concurrently (e.g. send 10 or 100 HTTP requests
> simultaneously, depending on how many concurrent requests the API can
> handle).
>
> Anybody here who has handled a similar task and can give advice on how to
> proceed with this?
> --
> https://mail.python.org/mailman/listinfo/python-list
>



Have you tried channels: https://github.com/andrewgodwin/channels ? If it's
an asynchronous request/response cycle you're looking for, it should work
well. Essentially, you have worker processes that receive a message over a
websocket connection and then send the new message back to the "group".
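Roughly, the consumer/routing side looks something like this (untested
sketch against the current 0.x-style API; the channel and group names are
made up, and generate_pdf() stands in for whatever call you make to the
external PDF service):

# consumers.py -- rough sketch only; generate_pdf() is a placeholder for
# the call out to the external PDF API.
import json

from channels import Group


def generate_pdf_report(message):
    # Runs in a worker process started with manage.py runworker.
    entry_id = message.content["entry_id"]
    pdf_url = generate_pdf(entry_id)  # placeholder: call the external API
    # Broadcast the result to every websocket client in the group.
    Group("pdf-reports").send({
        "text": json.dumps({"entry_id": entry_id, "pdf": pdf_url}),
    })


def ws_connect(message):
    # New websocket connections join the group so they get the broadcasts.
    Group("pdf-reports").add(message.reply_channel)


def ws_disconnect(message):
    Group("pdf-reports").discard(message.reply_channel)


# routing.py
from channels.routing import route

channel_routing = [
    route("generate-pdf", generate_pdf_report),
    route("websocket.connect", ws_connect),
    route("websocket.disconnect", ws_disconnect),
]

Your management command would then just push one message per entry onto the
channel, e.g. Channel("generate-pdf").send({"entry_id": entry.pk}), and
however many workers you start with manage.py runworker will work through
the queue.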
I would recommend using the Redis in-memory channel layer if you go this
route, as it is the fastest and most efficient. The best part about channels
is that you can still run a normal django app alongside it. You can have
just one app use websockets while the rest use standard HTTP.
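
That said, if all you really need is to fan the requests out from a
management command, option 2 or 3 from your list may be enough on its own.
Rough, untested sketch using a bounded thread pool around requests (the
endpoint and payload shape are placeholders):

# Rough sketch: a concurrency-limited batch of calls to the PDF API.
import concurrent.futures

import requests

API_URL = "https://pdf-service.example.com/render"  # placeholder endpoint


def generate_pdf(entry_id):
    # One blocking request per queryset entry.
    resp = requests.post(API_URL, json={"entry_id": entry_id}, timeout=60)
    resp.raise_for_status()
    return entry_id, resp.content


def generate_all(entry_ids, max_workers=10):
    # max_workers caps how many requests are in flight at once.
    with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(generate_pdf, pk) for pk in entry_ids]
        for future in concurrent.futures.as_completed(futures):
            entry_id, pdf_bytes = future.result()
            # save pdf_bytes wherever the reports should live

Bump max_workers up or down depending on how many simultaneous requests the
API will tolerate.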


