Asynchronous programming

Chris Angelico rosuav at gmail.com
Thu Aug 11 05:33:09 EDT 2016


On Thu, Aug 11, 2016 at 5:55 PM, Marko Rauhamaa <marko at pacujo.net> wrote:
> Chris Angelico <rosuav at gmail.com>:
>
>> Hmm. I'm not sure about that last bit. In order to properly support
>> asynchronous I/O and the concurrency and reentrancy that that implies,
>> you basically need all the same disciplines that you would for
>> threaded programming
>
> Correct. Python's asyncio closely mirrors threads in its API and thought
> model. An async is like a thread except that you know the control will
> only be relinquished at an "await" statement. Thus, you probably need
> far less locking.

I've never needed ANY explicit locking in a threaded Python or Pike
program, and only rarely in C. But then, I'm one of those people who
finds threaded programming relatively easy.
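
(Rough sketch of what I mean - a worker fed through a queue.Queue,
which does all the synchronisation internally, so there's no explicit
Lock anywhere; names are made up:)

    import threading, queue

    tasks = queue.Queue()

    def worker():
        # queue.Queue does its own locking; the worker just blocks
        # until something shows up.
        while True:
            item = tasks.get()
            if item is None:        # sentinel: time to shut down
                break
            print("processed", item * 2)

    t = threading.Thread(target=worker)
    t.start()
    for n in range(5):
        tasks.put(n)
    tasks.put(None)
    t.join()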

> Asyncs have a couple of distinct advantages over threads:
>
>  * You can await multiple events in a single statement allowing for
>    multiplexing. Threads can only wait for a single event.

Err, that's kinda the point of select() and friends. There are some
types of event that you can't select() on, but those tend not to work
perfectly with async programming either - because, guess what, event
loops for async programs are usually built on top of select().
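
(Something like this is all it takes to wait on several sockets at
once with the stdlib selectors module - just a sketch, assuming the
sockets are already connected:)

    import selectors

    sel = selectors.DefaultSelector()

    def watch(sock):
        # Register an already-connected socket for read events.
        sock.setblocking(False)
        sel.register(sock, selectors.EVENT_READ)

    def poll_once(timeout=1.0):
        # One select() call multiplexes over everything registered.
        for key, events in sel.select(timeout):
            data = key.fileobj.recv(4096)
            print("got", len(data), "bytes from", key.fileobj)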

>  * Asyncs can be canceled; in general, threads can't.

Sadly, it's not that simple. Some async actions can be cancelled,
others cannot. At best, what you're saying is that *more* things can
be cancelled if they were started asynchronously.
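
(asyncio's cancellation, roughly - a sketch only; the CancelledError
lands at the next await, so anything stuck in a blocking C call still
won't notice:)

    import asyncio

    async def slow_job():
        try:
            await asyncio.sleep(60)      # cancellation is delivered here
        except asyncio.CancelledError:
            print("cleaning up, then cancelled")
            raise

    async def main(loop):
        task = loop.create_task(slow_job())
        await asyncio.sleep(0.1)
        task.cancel()                    # request cancellation
        try:
            await task
        except asyncio.CancelledError:
            print("job was cancelled")

    loop = asyncio.get_event_loop()
    loop.run_until_complete(main(loop))
    loop.close()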

>> But maybe I'm too comfortable with threads. It's entirely possible; my
>> first taste of reentrancy was interrupt handling in real mode 80x86
>> assembly language, and in comparison to that, threads are pretty
>> benign!
>
> I steer clear of threads if I can because:
>
>  * No multiplexing.
>
>  * No cancellation.

See above.

>  * Deadlocks and/or race conditions are virtually guaranteed.

Definitely not. There are EASY ways to avoid deadlocks, and race
conditions are only a problem when you have some other bug. Yes,
threading bugs can be harder to debug - but only when you've already
violated those other disciplines.
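
(One of the easy ways, for the record: if you genuinely need two
locks, always take them in the same order everywhere, so a lock cycle
can't form - trivial sketch:)

    import threading

    lock_a = threading.Lock()
    lock_b = threading.Lock()

    # Rule: every thread acquires lock_a before lock_b, never the
    # reverse, so two threads can't end up waiting on each other.
    def update_both():
        with lock_a:
            with lock_b:
                pass    # ... touch both shared structures here ...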

>  * Error symptoms are highly nonlocal making debugging very tricky.

But at least you have consistent and simple tracebacks. Callback model
code generally lacks that. (That's one of the advantages of Python's
generator-based async model - although it's not perfect.)
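
(Raise an exception a few awaits deep and you still get one ordinary
traceback through every coroutine - a throwaway sketch:)

    import asyncio

    async def inner():
        raise ValueError("boom")

    async def middle():
        await inner()

    async def outer():
        await middle()

    loop = asyncio.get_event_loop()
    # The traceback runs outer -> middle -> inner, like a normal call stack.
    loop.run_until_complete(outer())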

> Instead, I strongly prefer asynchronous programming and multiprocessing.

Multiprocessing is *exactly* as complicated as multithreading, with
the added overhead of having to pass state around explicitly. And if
you have no state to pass around, then you'd have had no shared state
to worry about under threading either, so the two models come out
identical anyway.
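
(And "passing state around explicitly" looks like this - every
argument and result gets pickled across the process boundary; sketch
with a made-up job function:)

    from multiprocessing import Pool

    def crunch(n):
        # Runs in a child process; n was pickled and sent over a pipe,
        # and the return value comes back the same way.
        return n * n

    if __name__ == '__main__':
        with Pool(4) as pool:
            print(pool.map(crunch, range(10)))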

> My favorite asynchronous development model is the "callback hell," where
> each cell of the state/event matrix is represented by a method (or at
> least a switch case in C) and each state has an explicit name in the
> source code. The resulting code is still hard to read and hard to get
> right, but that complexity is unavoidable because reality just is that
> complex.

Wow. Are you for real, or are you trolling us? You actually *enjoy*
callback hell?

Give me yield-based asyncio any day.
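
(Where every await is a cell of that state/event matrix, except the
"state" is just wherever the coroutine happens to be paused - a sketch
of a trivial echo handler, names made up:)

    import asyncio

    async def handle(reader, writer):
        # Each await here would have been a separate callback method
        # in the explicit state-machine version.
        while True:
            line = await reader.readline()
            if not line:
                break
            writer.write(b"echo: " + line)
            await writer.drain()
        writer.close()

    loop = asyncio.get_event_loop()
    server = loop.run_until_complete(
        asyncio.start_server(handle, '127.0.0.1', 8888))
    try:
        loop.run_forever()               # serve until Ctrl-C
    except KeyboardInterrupt:
        pass
    server.close()
    loop.run_until_complete(server.wait_closed())
    loop.close()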

ChrisA


