asyncio and blocking - an update

Chris Angelico rosuav at gmail.com
Thu Feb 11 02:07:55 EST 2016


On Thu, Feb 11, 2016 at 5:36 PM, Frank Millman <frank at chagford.com> wrote:
> "Chris Angelico"  wrote in message
> news:CAPTjJmrVCkKAEevc9TW8FYYTnZgRUMPHectz+bD=DQRphXYTpw at mail.gmail.com...
>>
>>
>> Something worth checking would be real-world database performance metrics
>
>
> [snip lots of valid questions]
>
> My approach is guided by something I read a long time ago, and I don't know
> how true it is, but it feels plausible. This is a rough paraphrase.
>
> Modern databases are highly optimised to execute a query and return the
> result as quickly as possible. A properly written database adaptor will work
> in conjunction with the database to optimise the retrieval of the result.
> Therefore the quickest way to get the result is to let the adaptor iterate
> over the cursor and let it figure out how best to achieve it.
>
> Obviously you still have to tune your query to make sure it is
> efficient, using indexes etc. But there is no point in trying to
> second-guess the database adaptor in figuring out the quickest way to get
> the result.

As far as that goes, it's sound. (It's pretty obvious that collecting
all the rows into a list is going to take (at least) as long to give
the first row as iteration would take to give the last row, simply
because you could always implement one on top of the other, and
iteration has flexibility that fetchall doesn't.) The only question
is, what price are you paying for that?
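
A short demonstration of that "implement one on top of the other"
point, with sqlite3 standing in for any DB-API cursor:

    import sqlite3

    conn = sqlite3.connect(':memory:')
    conn.execute('CREATE TABLE t (n INTEGER)')
    conn.executemany('INSERT INTO t VALUES (?)', [(i,) for i in range(5)])

    def fetchall_via_iteration(cursor):
        # Draining the iterator *is* fetchall: the caller sees no rows
        # until the last one has been produced.
        return list(cursor)

    print(fetchall_via_iteration(conn.execute('SELECT n FROM t')))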

> 1.
>    future = loop.run_in_executor(None, run_query, 'SELECT ...')
>    # (run_query being whatever blocking helper does the fetch)
>    rows = await future
>    for row in rows:
>        process row
>
>    The SELECT will not block, because it is run in a separate thread. But it
> will return all the rows in a single list, and the calling function will
> block while it processes the rows, unless it takes the extra step of turning
> the list into an Asynchronous Iterator.

This is beautifully simple.
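
For concreteness, here's a minimal runnable sketch of that approach.
run_query is a made-up blocking helper, and 'sales.db' and the query
are invented names:

    import asyncio
    import sqlite3

    def run_query(sql):
        # Blocking helper: runs in a worker thread, returns all rows
        # at once.
        conn = sqlite3.connect('sales.db')
        try:
            return conn.execute(sql).fetchall()
        finally:
            conn.close()

    async def main(loop):
        # None selects the default thread pool; the event loop stays
        # free to run other tasks while the query executes.
        rows = await loop.run_in_executor(None, run_query,
                                          'SELECT * FROM sales')
        for row in rows:
            print(row)    # stand-in for "process row"

    loop = asyncio.get_event_loop()
    loop.run_until_complete(main(loop))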

> 2.
>        rows = AsyncCursor('SELECT ...')
>        async for row in rows:
>            process row

Also beautifully simple. But this one carries a much higher
complexity cost, in your second thread and in your AsyncCursor.
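
To show where that complexity lives, here's one plausible shape for
the AsyncCursor - a sketch only, not Frank's actual code. A worker
thread runs the blocking query and feeds rows to the event loop
through a bounded queue:

    import asyncio
    import sqlite3
    import threading

    _DONE = object()   # sentinel marking end of results

    class AsyncCursor:
        def __init__(self, sql, loop=None):
            self._loop = loop or asyncio.get_event_loop()
            self._queue = asyncio.Queue(maxsize=50)  # bounded queue
            threading.Thread(target=self._worker, args=(sql,),
                             daemon=True).start()

        def _worker(self, sql):
            conn = sqlite3.connect('sales.db')   # made-up database name
            try:
                for row in conn.execute(sql):
                    # Queue methods aren't thread-safe, so hand each
                    # row over via the loop; .result() blocks this
                    # worker whenever the consumer falls behind.
                    asyncio.run_coroutine_threadsafe(
                        self._queue.put(row), self._loop).result()
                asyncio.run_coroutine_threadsafe(
                    self._queue.put(_DONE), self._loop).result()
            finally:
                conn.close()

        def __aiter__(self):
            return self

        async def __anext__(self):
            row = await self._queue.get()
            if row is _DONE:
                raise StopAsyncIteration
            return row

Every row now pays for a thread-safe handoff, and errors in the
worker thread need their own plumbing (not shown).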

So really, the question is: Is this complexity buying you enough
performance that it's worthwhile? My questions about real-world stats
are based on the flip side of your assumption - to quote it again:

> Modern databases are highly optimised to execute a query and return the
> result as quickly as possible. A properly written database adaptor will work
> in conjunction with the database to optimise the retrieval of the result.
> Therefore the quickest way to get the result is to let the adaptor iterate
> over the cursor and let it figure out how best to achieve it.

A properly built database will optimize for two things: time to first
row, and time to query completion. (And other things, like memory
usage, which don't directly affect this discussion.) In some cases,
they'll be very different figures, and then you'll get a lot of
benefit from iteration. In other cases, they'll be virtually the same
- imagine a query that involves a number of tables and lots of
aggregate functions, governed by a big GROUP BY that gathers them all
up into, say, three rows, sorted by one of the aggregate functions (eg
"show me these categories, sorted by the total value of sales per
category"). How long does it take for the database to get the first
row? It has to execute the entire query. How long to get the other
two? Just return 'em from memory. So for this query, iteration offers
basically no benefit over fetchall. Most queries will be
somewhere in between, hence the question about real-world
significance. If it costs you little to iterate, great! But if you're
paying a high price, it's something to consider.
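
If you want numbers for your own workload, a quick-and-dirty
measurement (sqlite3 and the sales table here are stand-ins for
whatever you actually run) looks like this:

    import sqlite3
    import time

    conn = sqlite3.connect(':memory:')
    conn.execute('CREATE TABLE sales (category TEXT, amount REAL)')
    conn.executemany('INSERT INTO sales VALUES (?, ?)',
                     [('cat%d' % (i % 3), float(i))
                      for i in range(100000)])

    # An aggregate query like the one above: the whole table has to
    # be read before the first of the three result rows can exist.
    sql = ('SELECT category, SUM(amount) AS total FROM sales '
           'GROUP BY category ORDER BY total DESC')

    start = time.perf_counter()
    cur = conn.execute(sql)
    first = cur.fetchone()
    t_first = time.perf_counter() - start
    rest = cur.fetchall()
    t_all = time.perf_counter() - start
    print('first row: %.4fs, all rows: %.4fs' % (t_first, t_all))

For this query the two times come out nearly identical; swap in a
plain "SELECT * FROM sales" and they diverge, which is exactly when
iteration starts paying off.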

ChrisA


