dual processor

Jeremy Jones zanesdad at bellsouth.net
Mon Sep 5 23:42:38 EDT 2005


Steve Jorgensen wrote:

>On Mon, 05 Sep 2005 21:43:07 +0100, Michael Sparks <ms at cerenity.org> wrote:
>
>
>>Steve Jorgensen wrote:
>>
>>
>>>On 05 Sep 2005 10:29:48 GMT, Nick Craig-Wood <nick at craig-wood.com> wrote:
>>>
>>>
>>>>Jeremy Jones <zanesdad at bellsouth.net> wrote:
>>>>
>>>>> One Python process will only saturate one CPU (at a time) because
>>>>> of the GIL (global interpreter lock).
>>>>>
>>>>I'm hoping python won't always be like this.
>>>>
>>>I don't get that.  Python was never designed to be a high-performance
>>>language, so why add complexity to its implementation by giving it
>>>high-performance capabilities like SMP?
>>>
>>It depends on personal perspective. If in a few years' time we all have
>>machines with multiple cores (e.g. the Cell, with effectively 9 CPUs on a
>>chip, albeit 8 of them more specialised), would you prefer that your code
>>*could* utilise your hardware sensibly, rather than not?
>>
>>Or put another way - would you prefer to write your code mainly in a
>>language like Python, or mainly in a language like C or Java? If Python,
>>it's worth worrying about!
>>
>>If it were Python (or similar) you might "only" have to worry about
>>concurrency issues. If it's a language like C you might have to worry
>>about memory management, typing AND concurrency (oh my!).
>>(Let alone C++'s TMP :-)
>>
>>Regards,
>>
>>
>>Michael
>>
>
>That argument makes some sense, but I'm still not sure I agree.  Rather than
>make Python programmers deal with concurrency issues in every app just to
>get it to make good use of the hardware it's running on, why not have many
>of the common libraries that Python uses for heavy processing take advantage
>of SMP internally when you use them?  A database server is a good example of
>a way we can already do some of that today.  Also, what if things like hash
>table updates were made lazy (if they aren't already) and processed as
>background operations, so the table is more likely to be ready when the next
>lookup occurs?
>
Now, *this* is a really interesting line of thought.  I've got a feeling
that it'd be pretty tough to implement something like this at the language
level, though.  An application like an RDBMS is one thing, an application
framework another, and a programming language is yet a different species
altogether.  It'd have to be insanely intelligent code.  If you had a
bunch of Python processes, would they all start digging into each list or
generator or hash to try to predict what the code is potentially going to
need next?  Would that predictive behavior chew up more CPU time than it
saves?  What about memory?  You've got to store the predictive results
somewhere.  It sounds great, and it has some awesomely beneficial
implications, but it sounds hard as anything to implement well.
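
To make the GIL point from up-thread concrete, here's a rough, untested
sketch (the burn() workload and the numbers are purely illustrative): two
CPU-bound threads in one CPython process take about as long as doing the
same work serially, because only one thread can hold the interpreter lock
at a time.

    import threading
    import time

    def burn(n=5000000):
        # Pure-Python, CPU-bound loop; it never releases the GIL for long.
        total = 0
        for i in range(n):
            total += i * i
        return total

    def timed(label, fn):
        start = time.time()
        fn()
        print("%s: %.2f seconds" % (label, time.time() - start))

    def serial():
        burn()
        burn()

    def threaded():
        # Two threads, but only one can execute Python bytecode at a time.
        threads = [threading.Thread(target=burn) for _ in range(2)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

    if __name__ == "__main__":
        timed("two burns, one thread", serial)
        timed("two burns, two threads", threaded)

On a dual-CPU box the second timing won't come out anywhere near half of
the first; run the two burns in two separate processes instead and it will.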

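And here's an equally rough, untested sketch of the narrower
"background/predictive" idea - a wrapper that uses a helper thread to pull
the *next* item out of an iterator while the caller is still chewing on the
current one.  The prefetch() name and the buffer size are made up for
illustration, and under the GIL this only buys anything when producing an
item blocks on something (disk, a socket, a database) rather than burning
CPU:

    import threading

    try:
        import queue             # Python 3
    except ImportError:
        import Queue as queue    # Python 2

    _DONE = object()

    def prefetch(iterator, size=2):
        # Keep a couple of items buffered ahead of the consumer.
        buf = queue.Queue(maxsize=size)

        def worker():
            for item in iterator:
                buf.put(item)
            buf.put(_DONE)

        t = threading.Thread(target=worker)
        t.daemon = True
        t.start()

        while True:
            item = buf.get()
            if item is _DONE:
                return
            yield item

    # Hypothetical usage, e.g. iterating over slow-to-produce records:
    #     for record in prefetch(rows_from_somewhere()):
    #         crunch(record)

Whether anything like that could be done automatically and invisibly by the
language itself is, as above, the hard part.
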
JMJ


