why memoizing is faster

Andrea Crotti andrea.crotti.0 at gmail.com
Fri Mar 25 04:49:58 EDT 2011


Terry Reedy <tjreedy at udel.edu> writes:
>
> For the reason Stefan explained and hinted above. Try the following instead:
>
> def fib_iter(n, _cache=[1, 1]):
>     k = len(_cache)
>     if n >= k:
>         for i in range(k, n+1):
>             _cache.append(_cache[i-2] + _cache[i-1])
>     return _cache[n]
>
> This should be slightly faster than the crazy-key cache for the
> memoized recursive version.
>

You're right, this is much faster (faster than the memoized recursive
version too), so it was just my iterative version that was bad...

But I don't see the difference between that and:

def fib_iter(n):
    ls = [0, 1]
    for i in range(2, n+1):
        ls.append(ls[i-2] + ls[i-1])

    return ls[n]


Your version runs in 250 ns, while this version runs in 25.1 us.
How can passing a default "_cache" argument make such a difference?
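
(A minimal sketch of what I think is going on, with names of my own
invention: the default list is evaluated only once, at definition time,
so it survives and keeps growing across calls, while my version rebuilds
its list from scratch on every call.)

import timeit

def fib_cached(n, _cache=[1, 1]):
    # _cache already holds every value computed by earlier calls
    for i in range(len(_cache), n + 1):
        _cache.append(_cache[i-2] + _cache[i-1])
    return _cache[n]

def fib_fresh(n):
    # this list is rebuilt from scratch on every single call
    ls = [1, 1]
    for i in range(2, n + 1):
        ls.append(ls[i-2] + ls[i-1])
    return ls[n]

# after the first call, fib_cached(100) is just a list lookup,
# while fib_fresh(100) redoes ~100 additions every time
print(timeit.timeit("fib_cached(100)",
                    setup="from __main__ import fib_cached", number=10000))
print(timeit.timeit("fib_fresh(100)",
                    setup="from __main__ import fib_fresh", number=10000))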

What I find interesting in all this is that the memoize function works
because Python has the mutable default argument "problem", the one
every beginner stumbles upon at some point.

If Python had the more "expected" behaviour of instantiating a new cache
dictionary on every call, this trick would not be possible.
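
A minimal sketch of such a memoize, with names of my own choosing; the
empty dict in the wrapper's default argument is created only once, when
the wrapper is defined, and is then shared by every later call:

import functools

def memoize(f):
    @functools.wraps(f)
    def wrapper(n, _cache={}):
        # _cache persists across calls because the default dict
        # is evaluated only once, not on every invocation
        if n not in _cache:
            _cache[n] = f(n)
        return _cache[n]
    return wrapper

@memoize
def fib(n):
    return 1 if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))  # 1346269; repeated calls up to 30 are plain dict lookups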


