More CPUs don't equal more speed

Marek Mosiewicz marek.mosiewicz at jotel.com.pl
Tue May 28 14:05:48 EDT 2019


Do you run the work as separate processes or as threads?

There is the Global Interpreter Lock to consider:
https://wiki.python.org/moin/GlobalInterpreterLock

It allows only one thread to execute Python bytecode at a time, so to
use multiple cores you need to spawn separate processes.
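
For example, here is a minimal sketch using multiprocessing.Pool; the
doit() body and the file names are just stand-ins for whatever your
script actually does:

    import os
    from multiprocessing import Pool

    def doit(path):
        # Stand-in for the real per-file work.
        print("processing %s in pid %d" % (path, os.getpid()))

    if __name__ == "__main__":
        filelist = ["a.txt", "b.txt", "c.txt", "d.txt"]
        # Each worker is a separate OS process with its own interpreter
        # and its own GIL, so CPU-bound work can run in parallel.
        with Pool(processes=4) as pool:
            pool.map(doit, filelist)

That said, if the per-file work is dominated by disk I/O, as the
original post suspects, extra processes will not help much either.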

Best regards,

         Marek Mosiewicz

         http://marekmosiewicz.pl


On 23.05.2019 at 20:39, Bob van der Poel wrote:
> I've got a short script that loops through a number of files and processes
> them one at a time. I had a bit of time today and figured I'd rewrite the
> script to process the files 4 at a time by using 4 different instances of
> python. My basic loop is:
>
> for i in range(0, len(filelist), CPU_COUNT):
>      for z in range(i, i+CPU_COUNT):
>          doit(filelist[z])
>
> With the function doit() calling up the program to do the heavy lifting. Setting
> CPU_COUNT to 1 or 5 (I have 6 cores) makes no difference in total speed.
> I'm processing about 1200 files and my total duration is around 2 minutes.
> No matter how many cores I use the total is within a 5 second range.
>
> This is not a big deal ... but I really thought that throwing more
> processors at a problem was a wonderful thing :) I figure that loading the
> python libraries and my source file, and writing the output, are pretty
> much i/o bound, but that is just a guess.
>
> Maybe I need to set my sights on bigger, slower programs to see a
> difference :)
>


