doubling the number of tests, but not taking twice as long

Stephan Houben stephanh42 at gmail.com.invalid
Mon Jul 16 14:33:07 EDT 2018


On 2018-07-16, Larry Martell <larry.martell at gmail.com> wrote:
> I had some code that did this:
>
> meas_regex = r'_M\d+_'
> meas_re = re.compile(meas_regex)
>
> if meas_re.search(filename):
>     stuff1()
> else:
>     stuff2()
>
> I then had to change it to this:
>
> if meas_re.search(filename):
>     if 'MeasDisplay' in filename:
>         stuff1a()
>     else:
>         stuff1()
> else:
>     if 'PatternFov' in filename:
>         stuff2a()
>     else:
>         stuff2()
>
> This code needs to process many tens of thousands of files, and it
> runs often, so it needs to run very fast. Needless to say, my change
> has made it take twice as long.

That's not at all obvious to me.  Did you actually measure it?
It seems to depend strongly on what stuff1a() and stuff2a() are doing.
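
Something like the following would give real numbers rather than a
guess.  This is only a rough sketch: the filenames and the stand-in
bodies for stuff1()/stuff2()/stuff1a()/stuff2a() are made up, not your
actual code.

    import re
    import timeit

    meas_re = re.compile(r'_M\d+_')

    def old_version(filenames):
        for filename in filenames:
            if meas_re.search(filename):
                pass  # stuff1() in the real code
            else:
                pass  # stuff2()

    def new_version(filenames):
        for filename in filenames:
            if meas_re.search(filename):
                if 'MeasDisplay' in filename:
                    pass  # stuff1a()
                else:
                    pass  # stuff1()
            else:
                if 'PatternFov' in filename:
                    pass  # stuff2a()
                else:
                    pass  # stuff2()

    # Made-up filenames, just enough to exercise every branch.
    filenames = ['run_M12_MeasDisplay.dat', 'run_PatternFov.dat'] * 10000

    print('old:', timeit.timeit(lambda: old_version(filenames), number=10))
    print('new:', timeit.timeit(lambda: new_version(filenames), number=10))

On my guesses about the workload, the extra substring checks should be
cheap next to whatever the stuff*() functions do, which is why measuring
first matters.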

> Can anyone see a way to improve that?

Use multiprocessing.Pool to exploit multiple CPUs?
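
Roughly along these lines (again only a sketch: classify() just returns
a label here, where the real code would call your stuff*() functions,
and the filenames are invented):

    import re
    from multiprocessing import Pool

    meas_re = re.compile(r'_M\d+_')

    def classify(filename):
        # Same dispatch logic as the serial version.
        if meas_re.search(filename):
            return 'stuff1a' if 'MeasDisplay' in filename else 'stuff1'
        return 'stuff2a' if 'PatternFov' in filename else 'stuff2'

    if __name__ == '__main__':
        # Stand-in for the real list of tens of thousands of files.
        filenames = ['run_M12_MeasDisplay.dat', 'run_PatternFov.dat'] * 10000
        with Pool() as pool:
            # A largish chunksize keeps inter-process overhead small
            # when there are many short tasks.
            results = pool.map(classify, filenames, chunksize=1000)
        print(len(results))

Whether this helps depends on whether the per-file work is heavy enough
to pay for the process start-up and pickling overhead.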

Stephan


