python - handling HTTP requests asynchronously

Michael Selik michael.selik at gmail.com
Sat May 7 15:04:21 EDT 2016


On Fri, May 6, 2016 at 3:01 AM <lordluke80 at gmail.com> wrote:

> The PDF is generated through an external API. Since it is currently
> generated on demand, this is handled synchronously via an HTTP
> request/response.


Are you sending the request or are you receiving the request?
If you are sending, you can just use threads as you are only doing IO.
If you are receiving the requests and generating PDFs, you may want to use
subprocesses if the PDF-generation is compute-intensive.


> 3) multiprocessing module, with e.g. 10 as workers limit.
>

multiprocessing.Pool is an easy way to use subprocesses;
multiprocessing.pool.ThreadPool is an easy way to use threads. It's not
well documented, but it has the exact same interface as Pool.
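A minimal sketch of what that looks like -- the worker function here is a
stand-in for the real HTTP call to the PDF API:

```python
from multiprocessing.pool import ThreadPool

# Hypothetical worker: in the real code this would perform the
# HTTP request to the external PDF API and return the response.
def fetch(url):
    return len(url)  # stand-in for the actual IO-bound work

urls = ["https://example.com/doc/%d" % i for i in range(20)]

# ThreadPool has the same interface as multiprocessing.Pool but runs
# the work in threads, which is fine for IO-bound HTTP requests.
# Swapping ThreadPool for Pool here would give you subprocesses instead.
with ThreadPool(10) as pool:
    results = pool.map(fetch, urls)
```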

> the goal is to use the API concurrently (e.g. send 10 or 100 http requests
> simultaneously, depending on how many concurrent requests the API can
> handle).
>

Sounds like you want to use threads. How does the API let you know you're
hitting it too frequently? Perhaps you want to code an exponential backoff
and retry wrapper for your API requests.
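Such a wrapper might look like the sketch below. The exception type,
attempt limit, and base delay are all assumptions -- adjust them to match
whatever error the real API raises when it throttles you:

```python
import random
import time

def retry_with_backoff(func, max_attempts=5, base_delay=1.0):
    """Call func(), retrying with exponential backoff on failure."""
    for attempt in range(max_attempts):
        try:
            return func()
        except OSError:  # stand-in for the API's rate-limit error
            if attempt == max_attempts - 1:
                raise  # out of attempts; let the caller see the error
            # wait base_delay * 2**attempt seconds, plus jitter so
            # concurrent workers don't all retry at the same instant
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.5)
            time.sleep(delay)
```

You would wrap each API call in this before handing it to the thread
pool, so a temporarily overloaded API slows you down instead of
failing the request outright.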


