dual processor
Jorgen Grahn
jgrahn-nntq at algonet.se
Tue Sep 6 13:45:53 EDT 2005
On Tue, 06 Sep 2005 08:57:14 +0100, Michael Sparks <ms at cerenity.org> wrote:
...
> Are you so sure? I suspect this is due to you being used to writing code
> that is designed for a single CPU system. What if your basic model of
> system creation changed to include system composition as well as
> function calls? Then each part of the system you compose can potentially
> run on a different CPU. Take the following for example:
...
> It probably looks strange, but it's really just a logical extension of the
> Unix command line's pipelines to allow multiple pipelines. Similarly, from
> a unix command line perspective, the following will automatically take
> advantage of all the CPUs I have available:
>
> (find |while read i; do md5sum $i; done|cut -b-32) 2>/dev/null |sort
>
> And a) most unix sys admins I know find that easy (probably the above
> laughable) b) given a multiprocessor system will probably try to maximise
> pipelining c) I see no reason why sys admins should be the only people
> writing programs who use concurrency without thinking about it :-)
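To make the "composition" idea above concrete, here's a rough Python sketch: each stage is a separate OS process joined to the next by a queue, so the scheduler is free to put the stages on different CPUs, just like the stages of the quoted shell pipeline. The stage functions (produce/square/consume) are made up for illustration, and the multiprocessing plumbing stands in for whatever framework you'd actually compose with.

```python
# Each stage is a process; queues play the role of Unix pipes.
# The OS may run the stages on different CPUs concurrently.
from multiprocessing import Process, Queue

SENTINEL = None  # end-of-stream marker passed down the pipeline

def produce(out_q):
    for n in range(10):
        out_q.put(n)
    out_q.put(SENTINEL)

def square(in_q, out_q):
    while True:
        n = in_q.get()
        if n is SENTINEL:
            out_q.put(SENTINEL)
            break
        out_q.put(n * n)

def consume(in_q, result_q):
    total = 0
    while True:
        n = in_q.get()
        if n is SENTINEL:
            break
        total += n
    result_q.put(total)

def run_pipeline():
    a, b, r = Queue(), Queue(), Queue()
    stages = [Process(target=produce, args=(a,)),
              Process(target=square, args=(a, b)),
              Process(target=consume, args=(b, r))]
    for p in stages:
        p.start()
    result = r.get()
    for p in stages:
        p.join()
    return result

if __name__ == "__main__":
    print(run_pipeline())  # sum of squares 0..9 = 285
```

No stage knows anything about the others beyond its queues, which is exactly the property that makes shell pipelines composable.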
Nitpick: not all Unix users are sysadmins ;-) Some Unix sysadmins actually
have real users, and the clued users use the same tools. I used the 'make
-j3' example elsewhere in the thread (I hadn't read this posting when I
responded there).
It seems to me that there must be a flaw in your arguments, but I can't seem
to find it ;-)
Maybe it's hard in real life to find two independent tasks A and B that can
be parallelized with just a unidirectional pipe between them? Because as
soon as you have to do the whole threading/locking/communication circus, it
gets tricky and the bugs (and performance problems) show up fast.
But it's interesting that the Unix pipeline Just Works (TM) with so little
effort.
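For the simple A-to-B case, the Unix primitives really are all you need: the pipe is both the communication channel and the flow control, so no explicit locks appear anywhere. A rough sketch in Python (Unix-only, since it uses fork; the producer/consumer tasks are made up):

```python
# Two independent tasks joined by one unidirectional pipe, using the
# same primitives the shell uses.  Writer and reader can run on
# different CPUs; the pipe buffer provides backpressure for free.
import os

def pipeline_sum():
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:                       # child: task A, the producer
        os.close(r)
        with os.fdopen(w, "w") as out:
            for n in range(100):
                out.write("%d\n" % n)
        os._exit(0)
    os.close(w)                        # parent: task B, the consumer
    with os.fdopen(r) as inp:
        total = sum(int(line) for line in inp)
    os.waitpid(pid, 0)
    return total

if __name__ == "__main__":
    print(pipeline_sum())  # 0 + 1 + ... + 99 = 4950
```

Contrast that with shared-state threading, where the same data flow would already need a lock or condition variable to be correct.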
/Jorgen
--
// Jorgen Grahn <jgrahn@ Ph'nglui mglw'nafh Cthulhu
\X/ algonet.se> R'lyeh wgah'nagl fhtagn!