2.6, 3.0, and truly independent interpreters

Patrick Stinson patrickstinson.lists at gmail.com
Fri Oct 24 11:26:09 EDT 2008


I'm not finished reading the whole thread yet, but I have a few
responses to this post below.

On Thu, Oct 23, 2008 at 9:30 AM, Glenn Linderman <v+python at g.nevcal.com> wrote:
> On approximately 10/23/2008 12:24 AM, came the following characters from the
> keyboard of Christian Heimes:
>>
>> Andy wrote:
>>>
>>> 2) Barriers to "free threading".  As Jesse describes, this is simply
>>> just the GIL being in place, but of course it's there for a reason.
>>> It's there because (1) doesn't hold and there was never any specs/
>>> guidance put forward about what should and shouldn't be done in multi-
>>> threaded apps (see my QuickTime API example).  Perhaps if we could go
>>> back in time, we would not put the GIL in place, strict guidelines
>>> regarding multithreaded use would have been established, and PEP 3121
>>> would have been mandatory for C modules.  Then again--screw that, if I
>>> could go back in time, I'd just go for the lottery tickets!! :^)
>
>
> I've been following this discussion with interest, as it certainly seems
> that multi-core/multi-CPU machines are the coming thing, and many
> applications will need to figure out how to use them effectively.
>
>> I'm very - not absolute, but very - sure that Guido and the initial
>> designers of Python would have added the GIL anyway. The GIL makes Python
>> faster on single core machines and more stable on multi core machines. Other
>> language designers think the same way. Ruby recently got a GIL. The article
>> http://www.infoq.com/news/2007/05/ruby-threading-futures explains the
>> rationales for a GIL in Ruby. The article also contains a quote from Guido
>> about threading in general.
>>
>> Several people inside and outside the Python community think that threads
>> are dangerous and don't scale. The paper
>> http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-1.pdf sums it up
>> nicely. It explains why modern processors are going to cause more and more
>> trouble with the Java approach to threads, too.
>
> Reading this PDF paper is extremely interesting (albeit somewhat dependent
> on understanding abstract theories of computation; I have enough math
> background to follow it, sort of, and most of the text can be read even
> without fully understanding the theoretical abstractions).
>
> I have already heard people talking about "Java applications are buggy".  I
> don't believe that general sequential programs written in Java are any
> buggier than programs written in other languages... so I had interpreted
> that to mean (based on some inquiry) that complex, multi-threaded Java
> applications are buggy.  And while I also don't believe that complex,
> multi-threaded programs written in Java are any buggier than complex,
> multi-threaded programs written in other languages, it does seem to be true
> that Java is one of the currently popular languages in which to write
> complex, multi-threaded programs, because of its language support for
> threads and concurrency primitives.  These reports came from people who are
> not programmers but field IT staff, who have bought and/or support software
> and/or hardware with drivers written in Java, and who see non-ideal behavior
> that is (apparently) curable only by stopping and restarting the application
> or driver, or sometimes by a reboot.
>
> The paper explains many traps that lead to complex, multi-threaded programs
> being buggy, and being hard to test.  I have worked with parallel machines,
> applications, and databases for 25 years, and can appreciate the succinct
> expression of the problems explained within the paper, and can, from
> experience, agree with its premises and conclusions.  Parallel applications
> have only been commercial successes when the parallelism is tightly
> constrained to well-controlled patterns that can be easily understood.
>  Threads, especially in "cooperation" with languages that use memory
> pointers, have the potential to get out of control, in inexplicable ways.
>
>
>> Python *must* gain means of concurrent execution of CPU bound code
>> eventually to survive on the market. But it must get the right means or we
>> are going to suffer the consequences.
>
> This statement, after reading the paper, seems somewhat in line with the
> author's premise that language acceptability requires that a language be
> self-contained/monolithic, and potentially sufficient to implement itself.
>  That seems to also be one of the reasons that Java is used today for
> threaded applications.  It does seem to be true, given current hardware
> trends, that _some mechanism_ must be provided to obtain the benefit of
> multiple cores/CPUs to a single application, and that Python must either
> implement or interface to that mechanism to continue to be a viable language
> for large scale application development.
>
> Andy seems to want an implementation of independent Python processes
> implemented as threads within a single address space, that can be
> coordinated by an outer application.  This actually corresponds to the model
> promulgated in the paper as being most likely to succeed.  Of course, it
> maps nicely into a model using separate processes, coordinated by an outer
> process, also.  The differences seem to be:
>
> 1) Most applications are historically perceived as corresponding to single
> processes.  Language features for multi-processing are rare, and such
> languages are not in common use.
>
> 2) A single address space can be convenient for the coordinating outer
> application.  It does seem simpler and more efficient to simply "copy" data
> from one memory location to another, rather than send it in a message,
> especially if the data are large.  On the other hand, coordination of memory
> access between multiple cores/CPUs effectively causes memory copies from one
> cache to the other, and if memory is accessed from multiple cores/CPUs
> regularly, the underlying hardware implements additional synchronization and
> copying of data, potentially each time the memory is accessed.  Being forced
> to do message passing of data between processes can actually be more
> efficient than access to shared memory at times.  I should note that in my
> 25 years of parallel development, all the systems created used a message
> passing paradigm, partly because the multiple CPUs often didn't share the
> same memory chips, much less the same address space, and that a key feature
> of all the successful systems of that nature was an efficient inter-CPU
> message passing mechanism.  I should also note that Herb Sutter has a recent
> series of columns in Dr Dobbs regarding multi-core/multi-CPU parallelism and
> a variety of implementation pitfalls, that I found to be very interesting
> reading.
>
> I have noted the multiprocessing module that is new to Python 2.6/3.0 being
> feverishly backported to Python 2.5, 2.4, etc... indicating that people
> truly find the model/module useful... seems that this is one way, in Python
> rather than outside of it, to implement the model Andy is looking for,
> although I haven't delved into the details of that module yet, myself.  I
> suspect that a non-Python application could load one embedded Python
> interpreter, and then indirectly use the multiprocessing module to control
> other Python interpreters in other processes.  I don't know that
> multithreading primitives such as described in the paper are available in
> the multiprocessing module, but perhaps they can be implemented in some
> manner using the tools that are provided; in any case, some interprocess
> communication primitives are provided via this new Python module.
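
For what it's worth, the message-passing style that module provides looks
roughly like the toy sketch below (made-up worker function, nothing from a
real codebase): an outer process hands work to worker processes, each with
its own interpreter, GIL, and address space, over Queues.

    from multiprocessing import Process, Queue

    def worker(inbox, outbox):
        # Each worker is a separate process with its own interpreter and GIL.
        for item in iter(inbox.get, None):   # None acts as a shutdown sentinel
            outbox.put(item * item)          # stand-in for real CPU-bound work

    if __name__ == "__main__":
        inbox, outbox = Queue(), Queue()
        procs = [Process(target=worker, args=(inbox, outbox)) for _ in range(4)]
        for p in procs:
            p.start()
        jobs = list(range(20))
        for n in jobs:
            inbox.put(n)
        results = [outbox.get() for _ in jobs]
        for p in procs:
            inbox.put(None)                  # one sentinel per worker
        for p in procs:
            p.join()
        print(sorted(results))

Whether that is enough for the kind of in-process coordination Andy is after
is a separate question, since every piece of shared state has to cross a
process boundary.
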
>
> There could be opportunity to enhance Python with process creation and
> process coordination operations, rather than have it depend on
> easy-to-implement-incorrectly coordination patterns or
> easy-to-use-improperly libraries/modules of multiprocessing primitives (this
> is not a slam of the new multiprocessing module, which appears to be filling
> a present need in rather conventional ways, but just to point out that ideas
> promulgated by the paper, which I suspect 2 years later are still research
> topics, may be a better abstraction than the conventional mechanisms).
>
> One thing Andy hasn't yet explained (or I missed) is why any of his
> application is coded in a language other than Python.  I can think of a
> number of possibilities:
>
> A) (Historical) It existed, then the desire for extensions was seen, and
> Python was seen as a good extension language.
>
> B) Python is inappropriate (performance?) for some of the algorithms (but
> should they be coded instead as Python extensions, with the core application
> being in Python?)
>
> C) Unavailability of Python wrappers for particularly useful 3rd-party
> libraries
>
> D) Other?

We develop virtual instrument plugins for music production using
AudioUnit, VST, and RTAS on Windows and OS X. While our DSP engine's
code has to be written in C/C++ for performance reasons, the GUI could
have been written in Python. But we didn't do that, because:

1) Our project lead didn't know Python, and the project began with
little time for him to learn it.
2) All of our third-party libs (for DSP, plugin wrappers, etc.) are
written in C++, so it would be far easier to write and debug our app if
it were written in the same language. Could I do it now? Yes. Could we
do it then? No.

** Additionally **, we would have run into this problem, which is very
appropriate to this thread:

3) Adding Python as an audio scripting language in the audio thread
would have caused concurrency issues if our GUI had been written in
Python, since audio threads are not allowed to make blocking calls
(e.g. acquiring the GIL).
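
To make (3) concrete, here is a toy, pure-Python sketch (made-up thread
names and timings, not anything from our engine) of what happens when a
thread with a tight deadline has to re-acquire the GIL while another
thread is running CPU-bound Python code:

    import threading
    import time

    def cpu_bound(stop):
        # Holds the GIL for long stretches of pure-Python arithmetic.
        x = 0
        while not stop.is_set():
            for _ in range(100000):
                x += 1

    def deadline_thread(stop, worst):
        # Stand-in for a real-time callback: sleep ~3 ms, then re-acquire
        # the GIL before the overrun can even be measured.
        while not stop.is_set():
            t0 = time.time()
            time.sleep(0.003)
            overrun = time.time() - t0 - 0.003
            worst[0] = max(worst[0], overrun)

    stop = threading.Event()
    worst = [0.0]
    threads = [threading.Thread(target=cpu_bound, args=(stop,)),
               threading.Thread(target=deadline_thread, args=(stop, worst))]
    for t in threads:
        t.start()
    time.sleep(2.0)
    stop.set()
    for t in threads:
        t.join()
    print("worst overrun: %.1f ms beyond the 3 ms budget" % (worst[0] * 1000.0))

How bad the overrun gets depends on the interpreter's thread-switching
settings, but from the deadline thread's point of view the wait is
unbounded, which is exactly what a real-time audio callback cannot
tolerate.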

OK, I'll continue reading the thread now :)

>
> --
> Glenn -- http://nevcal.com/
> ===========================
> A protocol is complete when there is nothing left to remove.
> -- Stuart Cheshire, Apple Computer, regarding Zero Configuration Networking
>
> --
> http://mail.python.org/mailman/listinfo/python-list
>


