Speed: bytecode vs C API calls

Jacek Generowicz jacek.generowicz at cern.ch
Mon Dec 8 09:21:29 EST 2003


I have a program in which I make very good use of a memoizer:

  def memoize(callable):
      cache = {}
      def proxy(*args):
          try: return cache[args]
          except KeyError: return cache.setdefault(args, callable(*args))
      return proxy
  
which is functionally equivalent to
  
  class memoize:
  
      def __init__ (self, callable):
          self.cache = {}
          self.callable = callable
  
      def __call__ (self, *args):
          try: return self.cache[args]
          except KeyError:
              return self.cache.setdefault(args, self.callable(*args))

though the latter is about twice as slow.
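
(The "twice as slow" figure comes from rough timings along the lines of
the sketch below; the wrapped function is deliberately trivial, and the
module name "memo" and the two memoizer names are just placeholders for
wherever the two versions above happen to be defined.)

  import timeit

  # Rough timing sketch: time only the cache-hit path of each memoizer.
  setup = ("from memo import memoize_closure, memoize_class\n"
           "def f(x): return x\n"
           "g = memoize_closure(f)\n"   # closure-based version
           "h = memoize_class(f)\n"     # class-based version
           "g(1); h(1)\n")              # warm both caches first

  print timeit.Timer("g(1)", setup).timeit()
  print timeit.Timer("h(1)", setup).timeit()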

I've got to the stage where my program is still not fast enough, and
calls to the memoizer proxy are topping the profiler's output table. So I
thought I'd try to see whether I can speed it up by recoding it in C.

The closure-based version seems impossible to recode in C (anyone know
a way?), so I decided to write an extension type equivalent to "class
memoize" above. This seems to run about 10% faster than the pure
Python version ... which is still a lot slower than the pure Python
closure-based version.

I wasn't expecting C extensions that make lots of calls to Python C API
functions to be spectacularly fast, but I'm still a little
disappointed with what I've got. Does this sort of speedup (or rather,
lack of it) seem right to those of you experienced with this sort of
thing, or does it look like I'm doing it wrong?

Could anyone suggest how I could squeeze more speed out of the
memoizer? (I include the core of my memoize extension type below.)

[What's the current state of the art wrt profiling Python and its
extension modules? I've tried using hotshot (though not extensively
yet), but at first blush it seems to show even less information than
profile.]
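
For what it's worth, the way I've been driving hotshot is roughly the
following, where the log-file name is arbitrary and main() stands in
for whatever actually drives my program:

  import hotshot, hotshot.stats

  def main():
      pass          # placeholder for the real entry point

  prof = hotshot.Profile("memo.prof")
  prof.runcall(main)
  prof.close()

  stats = hotshot.stats.load("memo.prof")
  stats.sort_stats("time", "calls")
  stats.print_stats(20)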


The memoize extension type is based around the following:

typedef struct {
  PyObject_HEAD
  PyObject* x_attr;
  PyObject* cache;
  PyObject* fn;
} memoizeObject;


static int
memoize_init(memoizeObject* self, PyObject* args, PyObject* kwds) {
  /* Unpack the callable to be wrapped. */
  if (!PyArg_ParseTuple(args, "O", &(self->fn)))
    return -1;
  Py_INCREF(self->fn);
  self->cache = PyDict_New();
  if (!self->cache)
    return -1;
  return 0;
}

static PyObject*
memoize_call(memoizeObject* self, PyObject* args) {
  /* Look the argument tuple up in the cache. */
  PyObject* value = PyDict_GetItem(self->cache, args);
  if (value) {
    /* PyDict_GetItem returns a borrowed reference; tp_call must
       return a new one. */
    Py_INCREF(value);
    return value;
  }
  /* Cache miss: call the wrapped function, store and return the result. */
  value = PyObject_CallObject(self->fn, args);
  if (!value)
    return NULL;                 /* propagate the exception */
  if (PyDict_SetItem(self->cache, args, value) < 0) {
    Py_DECREF(value);
    return NULL;
  }
  return value;                  /* already a new reference */
}
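
From Python the extension type is used just like the class version
above; assuming the module builds under the (made-up) name cmemoize,
something like:

  import cmemoize

  def f(x):
      return x * x

  g = cmemoize.memoize(f)
  print g(3)    # first call: computes f(3) and stores it in the cache
  print g(3)    # second call: should come straight out of the cache dict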



Thanks,




