Confusing performance results for prime

Bengt Richter bokr at oz.net
Sat Oct 18 00:32:41 EDT 2003


On Fri, 17 Oct 2003 15:33:24 -0500, "Greg Brunet" <gregbrunet at NOSPAMsempersoft.com> wrote:

>In doing some testing of different but simple algorithms for getting a
>list of prime numbers, I ended up getting some results that seem a bit
>contradictory.  Given the following test program (testPrimes.py) with
>two algorithms that both check for primes by testing only odd numbers
>using factors up to the square root of the value, where Primes1 is based
>on all of the existing primes so far, and Primes2 is based on all odd
>numbers, I would expect that, as the number of primes we check gets
>larger, Primes1 should get more & more efficient (since it should
>be performing fewer tests for each number).  However the ratio seems to
>be relatively consistent.  Another thing that is interesting is that
>once I tested up to n=200000, I got an exception in Primes2.  I added
>the long typecast in the 'limit='  statement for both versions and the
>resulting improvements were strange: Primes1 runs 10-20% SLOWER, and
>Primes2 runs about 50% FASTER!
>
>I am not looking to come up with the most optimized solution to prime
>numbers here.  Primes1 is faster than Primes2 as I would expect and it
>is clear, relatively elegant, and sufficient for me.  However, I can't
>figure out why the ratio doesn't improve with size, or why the long
>typecast produces such strange results, and I would like to understand
>what is causing this behavior.  Any ideas?  Thanks,

Without looking at the implementation code, I can only guess, but presumably you
are using range instead of xrange (though maybe range is now optimized not to build
an actual list in an iterator context?). Also, passing limit as a float or long to
range must cause a little hiccup; perhaps it interferes with an iter optimization.
If range is not optimized, then you must be taking a hit from building throwaway
lists with range instead of using xrange, so your inner loop in Primes2 would carry
a bunch of overhead. Also, you don't need floating point or long for quite some time:
x*x > y will be faster than x > math.sqrt(y), and it avoids the float altogether
(and long to boot, unless you are generating primes larger than maxint).
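
To make that concrete, the two stopping tests look something like this
(helper names invented for illustration, not from Greg's testPrimes.py):

import math

# Float-based cutoff: computes a sqrt for every candidate.
def is_prime_sqrt(candidate, primes):
    limit = math.sqrt(candidate)
    for p in primes:
        if p > limit:
            return True          # no factor found up to sqrt(candidate)
        if candidate % p == 0:
            return False
    return True

# All-integer cutoff: p*p > candidate stays in plain ints, so no
# float (or long, below maxint) is ever created.
def is_prime_square(candidate, primes):
    for p in primes:
        if p*p > candidate:
            return True
        if candidate % p == 0:
            return False
    return True

Both do the same divisions; the only difference is how the cutoff is computed.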

If you want to conserve space for an all-int primes list, you can use
primes = array.array('l',[2]) instead of primes = [2] to start. Just doing that
apparently gives about the same speed, but I imagine pre-allocating array space
could make it faster, though you'd have to change the append and iteration logic.
Hard to say without testing.
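
For illustration, the drop-in change would be just:

import array

primes = array.array('l', [2])   # packed C longs instead of a list of int objects
primes.append(3)                 # append and iteration work just like a list's
print primes[-1]                 # -> 3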

>
>-- 
>Greg
>
Sorry, I can't resist suggesting a slight speed improvement ;-)

def Primes3(prime_range_top):
    primes = [2]
    # test only odd candidates, dividing by the primes found so far
    for prime_candidate in xrange(3, prime_range_top, 2):
        for p in primes:
            if prime_candidate % p == 0: break    # found a factor: composite
            if p*p > prime_candidate:
                primes.append(prime_candidate)    # no factor up to sqrt: prime
                break
        else:
            # loop exhausted without finding a factor: prime
            primes.append(prime_candidate)
    return primes
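
A quick sanity check:

print Primes3(30)    # -> [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]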

And maybe run the bunch with:

import time

# (Primes1 and Primes2 are the two versions from Greg's testPrimes.py)
if __name__ == "__main__":
    n=100000
    times = []
    for pfun in (Primes1, Primes2, Primes3):
        tb = time.time()
        l1 = pfun(n)
        dt = time.time() - tb
        times.append(dt)
        print '%s: %s primes up to %s in %5.3f secs' % (
            pfun.__name__, len(l1), n, dt)
    fastest = min(times)
    print '\nTimes relative to the fastest:'
    for i, pfun in enumerate((Primes1, Primes2, Primes3)):
        print '%s: %5.3f'%(pfun.__name__, times[i]/fastest)
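
(If you're on Windows, time.clock() has much finer resolution than
time.time(), which would matter if you shrink n and the timings get short.)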

Which gave me (on an older machine ;-)

Primes1: 9592 primes up to 100000 in 2.984 secs
Primes2: 9592 primes up to 100000 in 5.608 secs
Primes3: 9592 primes up to 100000 in 2.444 secs

Times relative to the fastest:
Primes1: 1.221
Primes2: 2.295
Primes3: 1.000

Regards,
Bengt Richter



