[Python-ideas] the future of the GIL
Aahz
aahz at pythoncraft.com
Sun May 13 01:30:57 CEST 2007
[excessive quoting ahead to move to python-ideas from python-3000; please
trim when you follow up]
On Wed, May 09, 2007, Talin wrote:
> Greg Ewing wrote:
>> Giovanni Bajo wrote:
>>>
>>> using multiple processes causes some
>>> headaches with frozen distributions (PyInstaller, py2exe, etc.),
>>> like those usually found on Windows, specifically because Windows
>>> does not have fork().
>>
>> Isn't that just a problem with Windows generally? I don't
>> see what the method of packaging has to do with it.
>>
>> Also, I've seen it suggested that there may actually be
>> a way of doing something equivalent to a fork in Windows,
>> even though it doesn't have a fork() system call as such.
>> Does anyone know more about this?
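As far as I know, Windows has no fork() system call; new processes come
from CreateProcess, which starts a fresh program rather than duplicating
the caller. The usual workaround is a "spawn" model: launch a new
interpreter and re-import the main module. A minimal sketch, assuming a
multiprocessing-style API with a selectable start method (the API here is
an assumption, not something the stdlib necessarily offers today):

```python
# Sketch of the fork() workaround on Windows: a "spawn" start method
# launches a fresh interpreter and re-imports __main__, instead of
# duplicating the parent process the way fork() does.
import multiprocessing as mp

def worker(n):
    # Must be defined at module top level so the child process can import it.
    return n * n

if __name__ == "__main__":  # required under spawn: the child re-imports this module
    ctx = mp.get_context("spawn")
    with ctx.Pool(2) as pool:
        print(pool.map(worker, [1, 2, 3]))  # [1, 4, 9]
```

The `if __name__ == "__main__"` guard is what makes spawn viable: without
it, the re-import in the child would recursively start more children.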
>
> I also wonder about embedded systems and game consoles. I don't know how
> many embedded microprocessors support fork(), but I know that modern
> consoles such as PS/3 and Xbox do not, since they have no support for
> virtual memory at all.
>
> Also remember that the PS/3 is supposed to be one of the poster children
> for multiprocessing -- the whole 'cell processor' thing. You can't write
> an efficient game on the PS/3 unless it uses multiple processors.
>
> Admittedly, not many current console-based games use Python, but that
> need not always be the case in the future, and a number of PC-based
> games are using it already.
>
> This much I agree: There's no point in talking about supporting multiple
> processors using threads as long as we're living in a refcounting world.
>
> Thought experiment: Suppose you were writing a brand-new dynamic
> language today, designed to work efficiently on multi-processor systems.
> Forget all of Python's legacy implementation details such as GILs and
> refcounts and such. What would it look like, and how well would it
> perform? (And I don't mean purely functional languages a la Erlang.)
>
> For example, in a language that is based on continuations at a very deep
> level, there need not be any "global interpreter" at all. Each separate
> flow of execution is merely a pointer to a call frame, the evaluation of
> which produces a pointer to another call frame (or perhaps the same
> one). Yes, there would still be some shared state that would have to be
> managed, but I wouldn't think that the performance penalty of managing
> that would be horrible.
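The frame-pointer model described above can be sketched as a trampoline:
each flow of execution is just a current frame, and stepping a frame
yields the next frame (or nothing when the flow finishes). This is a toy
illustration only; all names here are made up, not from any real
implementation:

```python
# Toy sketch of "each flow of execution is a pointer to a call frame,
# the evaluation of which produces a pointer to another call frame".

class Frame:
    def __init__(self, step, state):
        self.step = step    # function: state -> (next Frame or None, new state)
        self.state = state

def countdown(n):
    # Build a flow that counts n down to zero, one frame per step.
    def step(state):
        if state == 0:
            return None, state          # flow finished
        return Frame(step, state - 1), state - 1
    return Frame(step, n)

def run_interleaved(frames):
    # Advance several independent flows round-robin -- note there is no
    # "global interpreter" object here, only a work list of frames.
    productive_steps = 0
    while frames:
        frame = frames.pop(0)
        nxt, _ = frame.step(frame.state)
        if nxt is not None:
            productive_steps += 1
            frames.append(nxt)
    return productive_steps

print(run_interleaved([countdown(3), countdown(2)]))  # 5
```

The shared state Talin mentions would live wherever the frames point;
the scheduler itself needs no lock because it only ever follows frame
pointers.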
One of the primary goals of Python was/is to be an easy glue language
for C libraries. How do you propose to handle the issue that many C
libraries use global state?
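To make the hazard concrete, here is a minimal sketch with a Python
module-level variable standing in for a C library's static state (think
errno-style status globals). Threads all share that one copy, so a
second call can clobber the status before the first caller reads it;
separate processes each get their own copy, which is why process-based
designs sidestep the problem. The names here are illustrative:

```python
# A module-level dict standing in for a C library's static/global state.
import threading

LIBRARY_STATE = {"last_error": None}

def c_style_call(arg):
    # A typical C wrapper: stash status in a global, then return a result.
    LIBRARY_STATE["last_error"] = None if arg >= 0 else "negative input"
    return abs(arg)

def worker(arg, results, idx):
    c_style_call(arg)
    # Race: another thread may overwrite last_error between the call
    # above and the read below.
    results[idx] = LIBRARY_STATE["last_error"]

results = [None, None]
t1 = threading.Thread(target=worker, args=(-1, results, 0))
t2 = threading.Thread(target=worker, args=(5, results, 1))
t1.start(); t2.start(); t1.join(); t2.join()
# results[0] may wrongly be None if t2's call ran between t1's call
# and t1's read -- exactly the global-state problem with shared threads.
```

A continuation-based interpreter would face the same question: however
the language schedules its frames, two flows touching the same C global
still need either a lock around the library or a process per user of it.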
--
Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/
"Look, it's your affair if you want to play with five people, but don't
go calling it doubles." --John Cleese anticipates Usenet