[Numpy-discussion] Improvement of performance
Davide Lasagna
lasagnadavide at gmail.com
Tue May 4 17:36:19 EDT 2010
If your x data are equispaced, I would do something like this:
import numpy as np

def derive(func, x):
    """
    Approximate the first derivative of function func at points x.
    """
    # compute the values of y = func(x)
    y = func(x)
    # compute the step (x is assumed equispaced)
    dx = x[1] - x[0]
    # kernel array for second order accuracy centered first derivative
    kc = np.array([-1.0, +0.0, +1.0]) / 2 / dx
    # kernel arrays for second order accuracy left and right first derivatives
    kl = np.array([-3.0, +4.0, -1.0]) / 2 / dx
    kr = np.array([+1.0, -4.0, +3.0]) / 2 / dx
    # correlate the kernels with the original array;
    # note that only the valid computations are performed
    derivs_c = np.correlate(y, kc, mode='valid')
    derivs_r = np.correlate(y[-3:], kr, mode='valid')
    derivs_l = np.correlate(y[:3], kl, mode='valid')
    return np.r_[derivs_l, derivs_c, derivs_r]
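
As a quick sanity check (a minimal sketch, assuming func = np.sin so the
exact derivative is np.cos; the maximum error should scale like dx**2):

x = np.linspace(0, 2*np.pi, 1000)
print(np.abs(derive(np.sin, x) - np.cos(x)).max())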
This is actually quite fast: on my machine (1.7 GHz) I get this:
In [1]: x = np.linspace(0, 2*np.pi, 1000000)
In [2]: func = lambda x: np.sin(x)
In [3]: %timeit derive(func, x)
10 loops, best of 3: 177 ms per loop
I'm curious if someone comes up with something faster.
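
For comparison, numpy's built-in np.gradient computes essentially the same
centered difference in the interior, with one-sided differences at the two
edge points, so a sketch like this should give nearly the same answer and
could serve as a baseline:

def derive_gradient(func, x):
    # centered differences inside, one-sided at the boundaries
    y = func(x)
    dx = x[1] - x[0]
    return np.gradient(y, dx)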
Regards,
Davide
On 4 May 2010 22:17, <josef.pktd at gmail.com> wrote:
> On Tue, May 4, 2010 at 4:06 PM, gerardob <gberbeglia at gmail.com> wrote:
> >
> > Hello, I have written a very simple piece of code that computes the
> > gradient by finite differences of any general function. Keeping the
> > same idea, I would like to modify the code using numpy to make it
> > faster.
> > Any ideas?
> > Thanks.
> >
> > def grad_finite_dif(self, x, user_data=None):
> >     assert len(x) == self.number_variables
> >     points = []
> >     for j in range(self.number_variables):
> >         points.append(x.copy())
> >         points[len(points)-1][j] = points[len(points)-1][j] + 0.0000001
> >     delta_f = []
> >     counter = 0
> >     for j in range(self.number_variables):
> >         delta_f.append((self.eval(points[counter]) - self.eval(x)) / 0.0000001)
>
> It looks like you are evaluating the same point, self.eval(x), several
> times inside the loop.
>
> >         counter = counter + 1
> >     return array(delta_f)
>
> That's what I used as a pattern for a gradient function:
>
> # from scipy.optimize
> def approx_fprime(xk, f, epsilon, *args):
>     f0 = f(*((xk,) + args))
>     grad = np.zeros((len(xk),), float)
>     ei = np.zeros((len(xk),), float)
>     for k in range(len(xk)):
>         # perturb only coordinate k by epsilon
>         ei[k] = epsilon
>         # forward difference for the k-th partial derivative
>         grad[k] = (f(*((xk + ei,) + args)) - f0) / epsilon
>         ei[k] = 0.0
>     return grad
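>
> If f can evaluate a whole batch of points in one call, the Python loop
> can be replaced by broadcasting; a minimal sketch, assuming f accepts a
> 2-d array with one point per row (approx_fprime_batch is a hypothetical
> name, not in scipy):
>
> def approx_fprime_batch(xk, f, epsilon):
>     # row k of pts is xk with epsilon added to coordinate k
>     pts = xk + epsilon * np.eye(len(xk))
>     # one batched call for all perturbed points, one for the base point
>     return (f(pts) - f(xk)) / epsilon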
>
> Josef
>