why memoizing is faster

Stefan Behnel stefan_ml at behnel.de
Fri Mar 25 05:16:17 EDT 2011


Andrea Crotti, 25.03.2011 09:49:
> Terry Reedy writes:
>>
>> For the reason Stefan explained and hinted above. Try the following instead:
>>
>> def fib_iter(n, _cache=[1, 1]):
>>     k = len(_cache)
>>     if n >= k:
>>         for i in range(k, n + 1):
>>             _cache.append(_cache[i - 2] + _cache[i - 1])
>>     return _cache[n]
>>
>> This should be slightly faster than the crazy-key cache for the
>> memoized recursive version.
>>
>
> You're right, this is much faster, even faster than the memoized recursive
> version, so it was just my iterative version that was bad...
>
> But I don't see the difference between that and:
>
> def fib_iter(n):
>      ls = [0, 1]
>      for i in range(2, n+1):
>          ls.append(ls[i-2] + ls[i-1])
>
>      return ls[n]
>
>
> Your version runs in
> 250 ns, while this version runs in 25.1 us.
> How can passing a default "_cache" argument make such a difference?

Terry's version exploits the fact that default argument values are evaluated 
only once, at function definition time, i.e. (unless overridden by passing an 
explicit argument) the "_cache" list is shared across all calls to the 
function. Your version rebuilds the list from scratch on every call. This is 
similar to what your memoization decorator does, but without the additional 
dict. And it also "suffers" from the same timing problem that I mentioned 
before.
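Concretely, the sharing can be observed like this (a minimal sketch restating 
Terry's function; note that his version starts the sequence at 1, 1 rather 
than 0, 1):

```python
def fib_iter(n, _cache=[1, 1]):
    # _cache is created once, when the "def" statement executes,
    # and the same list object is reused on every subsequent call.
    k = len(_cache)
    if n >= k:
        for i in range(k, n + 1):
            _cache.append(_cache[i - 2] + _cache[i - 1])
    return _cache[n]

print(fib_iter(10))  # 89 -- first call extends the shared list up to index 10
print(fib_iter(5))   # 8  -- served by a single list indexing, no recomputation

# The shared list is visible as the function's default value:
print(len(fib_iter.__defaults__[0]))  # 11 -- entries 0..10 persist between calls
```

This persistence is also why timing such a function with repeated runs is 
misleading: after the first call, every run is measuring a warm cache.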

Stefan
