PEP 255: Simple Generators

Glyph Lefkowitz <glyph at twistedmatrix.com>
Thu Jun 21 14:59:08 EDT 2001


On Thu, 21 Jun 2001, Neil Schemenauer wrote:

> Glyph Lefkowitz wrote:
> > Disadvantages to the PEP 255 approach:
> > 
> >  * we'll probably want microthreads eventually; this feature will bloat
> >    the language unnecessarily then.
> 
> How so?  It would still be nice to have some syntax on the
> language to support generators.  If the implementation supports
> continuations then implementing PEP 255 will be easy.

Syntax in the language is in fact *exactly* why I am arguing against
this.  If a "generator" is different from a regular function somehow, then
this idiom cannot possibly work:

    # (methods of some protocol-ish class; format(), process(), bufsiz,
    # and data are placeholders)
    def writeComplexMessage(self, msg, data):
        fmt = format(msg)
        data.put(fmt)

    def dataHandlingLoop(self, data):
        while 1:
            x = data.get(bufsiz)
            msg = process(x)
            self.writeComplexMessage(msg, data)

since the "yield" is one level deep in the function call stack:
writeComplexMessage will be flagged as a "generator" and dataHandlingLoop as
a regular function.

So the way to do this is to have a "yield" in dataHandlingLoop that is a
tuple (bytes to write, number of bytes to read) or something insane like
that.  Generators don't make this easier, but it is the same sort of
problem that generators are designed to solve.
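
Roughly, that contortion looks something like this (a sketch only, with
hypothetical names; under PEP 255, yield is a statement and cannot receive a
value, so the bytes that were read have to come back through some shared
object):

    def dataHandlingLoop(chan, bufsiz):
        while 1:
            yield (None, bufsiz)           # "write nothing, read bufsiz bytes"
            msg = process(chan.last_read)  # the driver stashed the data here
            fmt = format(msg)              # writeComplexMessage(), inlined
            yield (fmt, 0)                 # "write fmt, read nothing"

    def drive(gen, chan, transport):
        # external loop interpreting the (to_write, to_read) tuples
        for to_write, to_read in gen:
            if to_write:
                transport.write(to_write)
            if to_read:
                chan.last_read = transport.read(to_read)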

(I guess this indicates that I *could* use generators to implement the
example in my earlier email, but I think that this "solution" is a worse
indicator than the original problem.)

> >  * less pythonic: once we've implemented generators this way, what about
> >    simulation loops?  Do we need syntax support for those too?
> 
> I don't know what you mean by simulation loops.

If I have 10,000 "agents" in a simulation (something like Core Wars,
maybe), and they all have to be time-shared equally, real threads are not
an option (try spinning up 10k threads in Python on a default Red Hat Linux
install, haw haw), and neither are generators.  The simulation may want to
yield some processor time several levels deep in the stack; if I have to
return to the top-level function, it's pretty much the same as if I have to
handle time slices in an event-based style rather than synchronously.
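
A minimal sketch (hypothetical names, Python 2.x generator protocol) of the
round-robin scheduling that simple generators do allow, which also shows
exactly where they stop helping: only the top frame can yield, so an agent
that wants to give up its slice inside sense() or act() can't.

    def run(agent):
        while 1:
            agent.sense()    # plain calls: nothing below this frame can pause
            agent.act()
            yield None       # the only place the time slice can be given up

    def simulate(agents):
        tasks = [run(a) for a in agents]
        while tasks:
            for t in tasks[:]:
                try:
                    t.next()             # advance to the agent's next yield
                except StopIteration:
                    tasks.remove(t)      # this agent finished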

> > The one advantage to the PEP 255 way of doing things is that it
> > looks like it might be slightly easier to implement.
> 
> Make that "much easier".  Stackless is wizard code.  Christian
> did an amazing piece of work there.  Hopefully those ideas can
> be integrated into mainstream Python.  Its still going to take a
> lot more work though.  So far no one has stepped forward to do
> it.

Well, I'm not advocating the immediate integration of Stackless.
Speaking against some common Python zen: now is *not* better than later.
Generators solve *some* problems for which synchronous execution is
desirable, but not others.  They change the semantics of function
definition (despite the BDFL's pronouncement, my favorite piece of python
zen is still "explicit is better than implicit" ^_^).  They add a keyword.  
They do not fill my needs, or the needs of anyone working with simulation
code, network dispatching, or massively parallel computing; whereas the
simpler solutions which *do not* add a keyword and *do not* change the
semantics of function definition can fill the needs of a far greater
group, as well as the same set that would require generators.

Stackless is problematic, but I think that an implementation similar to
that of generators could give us microthreads.  When I get back from
vacation I'll write up a PEP; if you'd like to give me a hand, feel free
to drop me an email :)

                      ______      __   __  _____  _     _
                     |  ____ |      \_/   |_____] |_____|
                     |_____| |_____  |    |       |     |
                     @ t w i s t e d m a t r i x  . c o m
                     http://twistedmatrix.com/users/glyph





