CPython optimization

Olof Bjarnason olof.bjarnason at gmail.com
Thu Oct 22 02:21:55 EDT 2009


2009/10/22 John Yeung <gallium.arsenide at gmail.com>

> On Oct 22, 12:28 am, John Nagle <na... at animats.com> wrote:
>
> >    The Shed Skin people would welcome some help.
> >
> >        http://shed-skin.blogspot.com/
>
> People?  It's one guy.  It apparently started out as a Master's thesis
> as well. ;)
>
> I am a great admirer of the Shed Skin project, and I would be as happy
> as anyone to see it progress.  However, it seems to me that Shed Skin
> is at a stage where what it really needs is plain old "more work".  (I
> don't want to call it grunt work, but it's things like more testing,
> implementing more library support, maintaining the Windows build,
> etc.  Very worthy and worthwhile work, but tough to pass off as
> academic graduate work.)
>
> To the original poster:  I am not sure if this is too ambitious for
> your time frame, but one thing many real-world Python users would LOVE
> is for some way to get rid of the GIL (while still retaining thread
> safety, single-core performance, etc.).  If you could patch CPython to
> make it GIL-less, I think you would have academically publishable
> material as well as being a hero in the Python community. :D
>

A short question after having read through most of this thread, on the same
subject (time-optimizing CPython):

http://mail.python.org/pipermail/python-list/2007-September/098964.html

We are experiencing multi-core processor kernels more and more these days.
But they are all still connected to the main memory, right?

To me that means that even though an algorithm can be split up into several
threads running on different cores of the processor, it will still be
memory-speed limited. And memory access is a quite common operation in most
algorithms.
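To make the "memory-speed limited" argument concrete, here is a back-of-envelope sketch (my own illustration, with made-up numbers, not anything from the thread): cores scale linearly only until they saturate the shared memory bus, so the achievable speedup is capped by the ratio of bus bandwidth to per-thread bandwidth demand.

```python
def max_speedup(bus_bandwidth_gbs, per_thread_demand_gbs, n_threads):
    """Roofline-style upper bound on parallel speedup.

    Cores add speed linearly until the shared memory bus saturates;
    after that, extra threads just wait on memory.
    (All numbers here are hypothetical.)
    """
    return min(n_threads, bus_bandwidth_gbs / per_thread_demand_gbs)

# A memory-hungry loop: each thread wants 4 GB/s of a 10 GB/s bus,
# so four threads top out around 2.5x, not 4x.
print(max_speedup(10.0, 4.0, 4))

# A compute-bound loop (1 GB/s per thread) still gets the full 4x.
print(max_speedup(10.0, 1.0, 4))
```

Of course real hardware has caches that soften this, which is part of why multiple cores still help in practice.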

Then one could ask oneself: what is the point of multiple cores, if memory
bandwidth is the bottleneck? Specifically, what makes one expect any speed
gain from parallelizing a sequential algorithm into four threads, say, when
the memory shuffling is the same speed in both scenarios? (Assuming memory
access is much slower than ADDs, JMPs and such instructions - a quite safe
assumption I presume)
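One could actually try to measure this. The sketch below (my own, not from the thread) splits a summation across worker processes and checks it against the sequential result; I use multiprocessing rather than threads because CPython threads would additionally contend on the GIL, which would muddy the memory-bandwidth question. Whether the parallel version wins, and by how much, depends on exactly the bus-vs-cores trade-off discussed above.

```python
import multiprocessing
import time

def partial_sum(chunk):
    # Each worker streams through its own chunk of memory,
    # but all workers share the same memory bus.
    total = 0
    for x in chunk:
        total += x
    return total

def main():
    data = list(range(2000000))
    n_workers = 4
    size = len(data) // n_workers
    chunks = [data[i * size:(i + 1) * size] for i in range(n_workers)]

    t0 = time.perf_counter()
    sequential = partial_sum(data)
    t_seq = time.perf_counter() - t0

    t0 = time.perf_counter()
    with multiprocessing.Pool(n_workers) as pool:
        parallel = sum(pool.map(partial_sum, chunks))
    t_par = time.perf_counter() - t0

    assert sequential == parallel
    print("sequential: %.3fs  parallel (%d procs): %.3fs"
          % (t_seq, n_workers, t_par))

if __name__ == "__main__":
    main()
```

(The speedup observed will also reflect process start-up and the cost of shipping the chunks to the workers, so it understates what a shared-memory implementation could do.)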

[ If every core had its own primary memory, the situation would be
different. It would be more like the situation in a distributed/internet-based
system, spread over several computers. One could actually view each core as a
separate computer ]


> John
> --
> http://mail.python.org/mailman/listinfo/python-list
>



-- 
twitter.com/olofb
olofb.wordpress.com
olofb.wordpress.com/tag/english

