why memoizing is faster

Andrea Crotti andrea.crotti.0 at gmail.com
Thu Mar 24 09:48:31 EDT 2011


I was showing a friend a nice memoize decorator, using the classic
Fibonacci function as the example.

--8<---------------cut here---------------start------------->8---
  def memoize(f, cache={}):
      def g(*args, **kwargs):
          # build a hashable key from the function and its arguments;
          # args is already a tuple, and frozenset makes kwargs hashable
          key = (f, args, frozenset(kwargs.items()))
          # compute and store the result only on the first call
          if key not in cache:
              cache[key] = f(*args, **kwargs)
  
          return cache[key]
  
      return g
  
  # without the cache, even fib(100) would be infeasible
  @memoize
  def fib(n):
      if n <= 1:
          return 1
      else:
          return fib(n-1) + fib(n-2)
  
--8<---------------cut here---------------end--------------->8---
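For example, something like this shows the cache at work (a made-up
session, just to illustrate):

--8<---------------cut here---------------start------------->8---
  # the first call fills the cache for every n up to 100;
  # the second call is then a single dictionary lookup
  print(fib(100))
  print(fib(100))
--8<---------------cut here---------------end--------------->8---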


It works very well, but he claimed that an iterative version would be
faster anyway.

So I tried this iterative version:
  
--8<---------------cut here---------------start------------->8---
  def fib_iter(n):
      # fill the table bottom-up instead of recursing
      ls = {0: 1, 1: 1}
      for i in range(2, n + 1):
          ls[i] = ls[i-1] + ls[i-2]
  
      return ls[n]
--8<---------------cut here---------------end--------------->8---

and the result is surprising: this version is 20 times slower than the
recursive memoized one.
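
Roughly, the timing was along these lines (a sketch, assuming a simple
timeit harness; the exact numbers depend on n and on the repeat count):

--8<---------------cut here---------------start------------->8---
  import timeit
  
  # a sketch of the comparison (assumed harness, not the exact one);
  # note that the memoized fib keeps its cache across the repetitions,
  # while fib_iter rebuilds its whole table on every single call
  print(timeit.timeit('fib(100)', 'from __main__ import fib',
                      number=1000))
  print(timeit.timeit('fib_iter(100)', 'from __main__ import fib_iter',
                      number=1000))
--8<---------------cut here---------------end--------------->8---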

I first kept only the last two values in a list and suspected the swaps
were the expensive part; but now, with a dictionary, it should be very
fast.
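
That first attempt looked roughly like this (reconstructed from memory,
so treat it as a sketch; the name fib_iter2 is made up):

--8<---------------cut here---------------start------------->8---
  def fib_iter2(n):
      # keep only the last two values in a list; the "swap" I
      # suspected is the simultaneous assignment below
      last = [1, 1]
      for _ in range(n - 1):
          last[0], last[1] = last[1], last[0] + last[1]
      return last[1]
--8<---------------cut here---------------end--------------->8---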

Am I missing something?
Thanks, 
Andrea


