dual processor

Bengt Richter bokr at oz.net
Tue Sep 6 19:26:17 EDT 2005


On Tue, 6 Sep 2005 19:35:49 +0000 (UTC), Thomas Bellman <bellman at lysator.liu.se> wrote:

>Michael Sparks <ms at cerenity.org> writes:
>
>> Similarly, from
>> a unix command line perspective, the following will automatically take
>> advantage of all the CPU's I have available:
>
>>    (find |while read i; do md5sum $i; done|cut -b-32) 2>/dev/null |sort
>
>No, it won't.  At most, it will use four CPUs for user code.
>
>Even so, the vast majority of CPU time in the above pipeline will
>be spent in 'md5sum', but those calls will be run in series, not
>in parallel.  The very small CPU bursts used by 'find' and 'cut'
>are negligible in comparison, and would likely fit within the
>slots when 'md5sum' is waiting for I/O even on a single-CPU
>system.
>
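Agreed that the md5sum calls are where an explicit fan-out would help.
Just to illustrate the point (only a sketch, assuming a Python that has
the multiprocessing module; the file walk and chunk size are arbitrary,
and there is no error handling like the 2>/dev/null above), the same
work could be spread over all CPUs roughly like this:

    import hashlib
    import os
    from multiprocessing import Pool

    def md5_of(path):
        # hash one file in 64 KB chunks
        h = hashlib.md5()
        with open(path, 'rb') as f:
            for chunk in iter(lambda: f.read(65536), b''):
                h.update(chunk)
        return h.hexdigest()

    def all_files(top='.'):
        # walk the tree, yielding every regular file
        for dirpath, dirnames, filenames in os.walk(top):
            for name in filenames:
                yield os.path.join(dirpath, name)

    if __name__ == '__main__':
        with Pool() as pool:          # one worker per CPU by default
            digests = pool.map(md5_of, list(all_files()))
        for d in sorted(digests):
            print(d)
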
>And I'm fairly certain that 'sort' won't start spending CPU time
>until it has collected all its input, so you won't gain much
>there either.
>
Why couldn't the sort of a large sequence be broken down internally into
sub-sequence sorts and merges that separate processors can work on in
parallel?
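Something along these lines is what I have in mind (just a toy sketch in
Python, assuming the multiprocessing module is available, and not a claim
about how GNU sort is actually implemented): split the input into chunks,
sort the chunks in separate processes, then merge the sorted runs.

    import heapq
    from multiprocessing import Pool

    def parallel_sort(data, nchunks=4):
        # split into roughly equal chunks
        size = max(1, (len(data) + nchunks - 1) // nchunks)
        chunks = [data[i:i + size] for i in range(0, len(data), size)]
        with Pool(nchunks) as pool:
            # each chunk is sorted in its own process
            runs = pool.map(sorted, chunks)
        # single k-way merge of the sorted runs
        return list(heapq.merge(*runs))

    if __name__ == '__main__':
        import random
        data = [random.random() for _ in range(100000)]
        assert parallel_sort(data) == sorted(data)

The final merge is still a single-CPU step, but for large inputs the
per-chunk sorts dominate, and those run in parallel.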

Regards,
Bengt Richter


