[Python-Dev] Free threading

Tim Peters tim.one@home.com
Thu, 9 Aug 2001 00:31:30 -0400


[Tim]
>> So what is it that multiple interpreters within a single process
>> *are* for?

[Guido]
> It was always feature number one requested by folks coming from Tcl,
> where creating multiple interpreters (threaded or not) is an
> acceptable pastime for people of good manners.  But it seems to be
> wasted on the average Pythoneer.

Well, I'm not a Tcl guy, but Tcl has historically had no threads at all, yet
did have an event model where the *illusion* of multiple threads could still
be put to good use.  I wouldn't fall over dead with surprise if that's all
there were to it:  Extreme Fear of Threads <wink>.

I believe I've seen three more common requests from Tcl land, listed
least to most common:

1. The Tcl event model indeed.

2. Tcl's more robust (than Python's) x-platform exec implementation
   (similar to Python's os.system(), but with a platform-independent
   notation for pipes, redirection and envar substitution, and
   returning the exit status correctly even under command.com-based
   Windows flavors, where os.system() always returns 0 -- see the
   C sketch after this list).  I looked at this once -- there are
   several thousand lines of exec support code in Tcl (which
   shouldn't be surprising, since they're basically supplying one
   of the hardest parts of an x-platform shell!).

3. Variants of Tcl's uplevel, i.e. gimmicks for fiddling locals
   "up the stack".  Maybe it's not called exactly "uplevel" in Tcl,
   but it's plain sick however it's spelled <wink>.

> In Python, there's one good use I can think of: an application may
> embed Python and occasionally want to run a good chunk of Python code.
> If it needs each chunk of Python code to run completely independently
> of each other chunk, creating a new interpreter for each chunk is a
> good way to keep changes to e.g. sys.path or tabnanny's globals
> made by one chunk from affecting the next chunk.  The hosting
> application might want the execution to share memory with *it*, so
> these interpreters have to live in the same process.
>
> This works well if the next chunk is only started after the previous
> chunk is done; but then using Py_Initialize() and Py_Finalize() is
> enough (creating one interpreter at a time).  If the chunks need to
> overlap in time, creating multiple interpreters is the right solution.

That's cool.  There isn't much code supporting multiple interpreters, so
there's little to gain by getting rid of it now.  I just wish it were
*used* -- and that it had a more compelling use case <wink>.
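
In case a concrete picture helps, here's a bare-bones C sketch of
running one chunk in its own interpreter, isolated from the host's
main interpreter, via Py_NewInterpreter() and Py_EndInterpreter() --
error checking omitted, no real threads, and the path it inserts is
made up.  Overlapping chunks would keep several PyThreadState
pointers alive at once and swap among them the same way:

    #include <Python.h>

    int
    main(void)
    {
        PyThreadState *main_tstate, *sub_tstate;

        Py_Initialize();                    /* the main interpreter */
        main_tstate = PyThreadState_Get();

        /* A second interpreter with its own sys.modules, sys.path,
           builtins, etc.; Py_NewInterpreter() also makes it the
           current thread state. */
        sub_tstate = Py_NewInterpreter();
        PyRun_SimpleString("import sys\nsys.path.insert(0, '/some/private/dir')\n");

        /* Tear it down; that leaves no thread state current, so swap
           the main interpreter's state back in. */
        Py_EndInterpreter(sub_tstate);
        PyThreadState_Swap(main_tstate);

        /* The main interpreter's sys.path was never touched. */
        PyRun_SimpleString("import sys\nprint sys.path[:3]\n");

        Py_Finalize();
        return 0;
    }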