[Python-Dev] [PEP 3148] futures - execute computations asynchronously

P.J. Eby pje at telecommunity.com
Sun Mar 7 20:56:13 CET 2010


At 10:59 AM 3/7/2010 -0800, Jeffrey Yasskin wrote:
>So is it that you just don't like the idea of blocking, and want to 
>stop anything that relies on it from getting into the standard library?

Um, no.  As I said before, call it a "parallel task queue" or 
"parallel task manager" or something to that general effect and I'm on board.

It may not be in the Zen of Python, but ISTM that names should 
generally follow use cases.  It is something of a corollary to "one 
obvious way to do it", in that if you see something whose name 
matches what you want to do, then it should be obvious that that's 
the way in question.  ;-)

The use cases for "parallel task queues", however, are a subset of 
those for "futures" in the general case.  Since the proposed module 
addresses most of the former but very little of the latter, calling 
it futures is inappropriate.

Specifically, it's:

1. Confusing to people who don't know what futures are (see, e.g., R.D. 
Murray's post), and

2. Underpowered for people who expect/want a more fully-featured 
futures system along the lines of E or Deferreds.

It seems that the only people for whom it's an intuitively correct 
description are people who've only had experience with more limited 
futures models (like Java's).  However, these people should not have 
a problem understanding the notion of parallel task queueing or task 
management, so changing the name isn't really a loss for them, and 
it's a gain for everybody else.


>  Given the set_result and set_exception methods, it's pretty 
> straightforward to fill in the value of a future from something 
> that isn't purely computational.

Those are described as "internal" methods in the PEP; by contrast, 
the Deferred equivalents are part of the public API.
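
(To make that concrete: if set_result() and set_exception() were public, 
filling in a future from an I/O callback rather than a worker function 
might look roughly like the sketch below.  The "transport" object and its 
callback signature are invented for the example, and the 
concurrent.futures import path is an assumption on my part:)

    # Rough sketch: filling in a future "by hand" from an event-driven
    # callback instead of an executor-run function.  Assumes set_result()
    # and set_exception() are public; 'transport' is hypothetical.
    from concurrent.futures import Future

    def fetch_async(url, transport):
        f = Future()

        def on_response(data, error=None):
            # Invoked later by the (hypothetical) I/O layer, not by an executor.
            if error is not None:
                f.set_exception(error)
            else:
                f.set_result(data)

        transport.get(url, callback=on_response)   # hypothetical API
        return f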


>  Given a way to register "on-done" callbacks with the future, it 
> would be straightforward to wait for a future without blocking, too.

Yes, and with a few more additions besides that one, you might be on 
the way to an actual competitor for Deferreds.  For example: retry 
support, chaining, logging, an API for transparent result processing, 
coroutine support, co-ordination tools like locks, semaphores and queues, etc.

These are all things you would very likely want or need if you 
actually wanted to write a program using futures as *your main 
computational model*, vs. just needing to toss out some parallel 
tasks in a primarily synchronous program.
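
To put the contrast in code: below is a rough sketch of the non-blocking 
"on-done" style next to plain Twisted Deferred chaining.  The 
add_done_callback() name is my guess at what such a registration hook 
would be called, and the concurrent.futures import path is again an 
assumption; the Deferred half is just Twisted as it exists today.

    # Non-blocking completion: register a callback and move on.
    from concurrent.futures import ThreadPoolExecutor

    def on_done(f):
        # Runs once the future completes, so result() does not block here.
        print("got:", f.result())

    with ThreadPoolExecutor(max_workers=1) as pool:
        f = pool.submit(pow, 2, 10)
        f.add_done_callback(on_done)

    # The Deferred equivalent chains further processing onto the result:
    from twisted.internet.defer import Deferred

    d = Deferred()
    d.addCallback(lambda data: data.decode("utf-8"))   # transparent result processing
    d.addCallback(lambda text: text.splitlines())      # chaining
    d.addErrback(lambda failure: [])                   # error-handling hook
    d.callback(b"one\ntwo")                            # fire the chain with a result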

Of course, Deferreds are indeed overkill if all you're ever going to 
want is a few parallel tasks, unless you're already skilled in using 
Twisted or some wrapper for it.

So, I totally support having a simple task queue in the stdlib, as 
there are definitely times I would've used such a thing for a quick 
script, if it were available.
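
(The sort of thing I mean is nothing fancier than this: a synchronous 
script farming out a few downloads.  A rough sketch against the executor 
API in the PEP, with the import paths again assumed:)

    # Quick-script case: a handful of parallel downloads in an otherwise
    # synchronous program.
    from concurrent.futures import ThreadPoolExecutor, as_completed
    from urllib.request import urlopen

    URLS = ["http://python.org", "http://docs.python.org", "http://bugs.python.org"]

    def fetch(url):
        with urlopen(url, timeout=10) as conn:
            return url, len(conn.read())

    with ThreadPoolExecutor(max_workers=3) as pool:
        tasks = [pool.submit(fetch, url) for url in URLS]
        for task in as_completed(tasks):
            url, size = task.result()
            print("%s: %d bytes" % (url, size))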

However, I've *also* had use cases for using futures as a 
computational model, and so that's what I originally thought this PEP 
was about.  After the use cases were clarified, though, it seems to 
me that *calling* it futures is a bad idea, because it's really just 
a nice task queuing system.

I'm +1 on adding a nice task queuing system, and -1 on calling it by 
any other name (such as "futures").  ;-)


