[Numpy-discussion] python numpy code many times slower than c++

Neal Becker ndbecker2 at gmail.com
Tue Jan 20 21:57:13 EST 2009


Robert Kern wrote:

> 2009/1/20 Neal Becker <ndbecker2 at gmail.com>:
>> I tried a little experiment, implementing some code in numpy (usually I
>> build modules in c++ to interface to python).  Since these operations all
>> work on large vectors, I hoped it would be reasonably efficient.
>>
>> The code in question is simple.  It models an amplifier by its AM/AM and
>> AM/PM characteristics.
>>
>> The function in question is the __call__ operator.  The test program
>> plots a spectrum, calling this operator 1024 times, each time with a
>> vector of length 4096.
> 
> If you want to find out what lines in that function are taking the
> most time, you can try my line_profiler module:
> 
> http://www.enthought.com/~rkern/cgi-bin/hgwebdir.cgi/line_profiler/
> 
> That might give us a better idea in the absence of a self-contained
> example.
> 
I see the problem.  Thanks for the great profiler!  You ought to make this 
more widely known.
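
(For anyone following along, a minimal way to drive line_profiler
programmatically looks roughly like this.  It is only a sketch: amplifier_call
below is a made-up stand-in for the real __call__ operator, and it assumes the
LineProfiler class shipped in that repository.)

    from line_profiler import LineProfiler

    import numpy as np

    # Made-up stand-in for the amplifier's __call__ operator.
    def amplifier_call(block):
        return block * 2.0

    profiler = LineProfiler()
    profiler.add_function(amplifier_call)

    # Wrap the function, exercise it, then dump per-line timings.
    wrapped = profiler(amplifier_call)
    for _ in range(1024):
        wrapped(np.zeros(4096))
    profiler.print_stats()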

It seems the big chunks of time are spent on data conversion between numpy 
arrays and my own vector classes, which are wrappers around boost::ublas.  
Since there is no special code to handle numpy arrays, the conversion must be 
falling back on a very inefficient generic path.
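
(To illustrate the difference in pure Python terms: when numpy cannot see a
buffer, it treats the object as a generic Python sequence and walks it one
element at a time; when the object exposes raw memory, numpy can copy, or even
just view, that memory in one shot.  A rough sketch, using array.array as a
stand-in for the wrapped ublas vector:)

    import array
    import timeit
    import numpy as np

    n = 4096
    buf = array.array('d', range(n))   # stand-in for a contiguous C++ vector

    # Generic path: force numpy to walk a Python sequence element by element.
    slow = lambda: np.array(list(buf))

    # Buffer path: array.array exposes the buffer protocol, so numpy can wrap
    # the raw memory directly (frombuffer makes no copy at all).
    fast = lambda: np.frombuffer(buf, dtype=np.float64)

    print(timeit.timeit(slow, number=1000))
    print(timeit.timeit(fast, number=1000))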

Not sure what the best solution is.  It would be _great_ if I could make 
boost::python objects that export a buffer interface, but I have absolutely 
no idea how to do this (and so far no one else has volunteered any info on 
it).
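
(One route that seems to sidestep the C-level buffer slots entirely is numpy's
__array_interface__ protocol: if the wrapped object exposes a small dict
describing its memory, np.asarray can wrap that memory with no copy at all.
Below is a toy pure-Python sketch of the idea; FakeUblasVector is of course
made up, and whether the same dict can be exposed cleanly as a boost::python
property is an untested guess.)

    import ctypes
    import numpy as np

    class FakeUblasVector(object):
        """Toy stand-in for a wrapped ublas::vector<double>."""
        def __init__(self, n):
            self._buf = (ctypes.c_double * n)()   # contiguous C storage
            self._n = n

        @property
        def __array_interface__(self):
            # The minimum numpy needs: shape, element type, and the
            # (address, read-only flag) of the underlying memory.
            return {
                'shape': (self._n,),
                'typestr': np.dtype(np.float64).str,
                'data': (ctypes.addressof(self._buf), False),
                'version': 3,
            }

    v = FakeUblasVector(4096)
    a = np.asarray(v)   # zero-copy view onto the ctypes storage
    a[:] = 1.5          # writes land directly in the C buffer
    print(v._buf[0])    # -> 1.5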




