[Numpy-discussion] reduce array by computing min/max every n samples
Bruce Southey
bsouthey at gmail.com
Mon Jun 21 12:39:05 EDT 2010
On Thu, Jun 17, 2010 at 4:50 PM, Brad Buran <bburan at cns.nyu.edu> wrote:
> I have a 1D array with >100k samples that I would like to reduce by
> computing the min/max of each "chunk" of n samples. Right now, my
> code is as follows:
>
> n = 100
> offset = array.size % n
> array_min = array[offset:].reshape((-1, n)).min(-1)
> array_max = array[offset:].reshape((-1, n)).max(-1)
>
> However, this appears to be running pretty slowly. The array is data
> streamed in real-time from external hardware devices and I need to
> downsample this and compute the min/max for plotting. I'd like to
> speed this up so that I can plot updates to the data as quickly as new
> data comes in.
>
> Are there recommendations for faster ways to perform the downsampling?
>
> Thanks,
> Brad
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
I am curious why you don't reshape the original 1-d array into a 2-d array
once and reduce along one axis:
>>> a = np.arange(100)
>>> b = a.reshape((a.shape[0] // 5, 5))
>>> b.min(axis=1)
array([ 0, 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 75, 80,
85, 90, 95])
>>> b.max(axis=1)
array([ 4, 9, 14, 19, 24, 29, 34, 39, 44, 49, 54, 59, 64, 69, 74, 79, 84,
89, 94, 99])
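For reference, here is a sketch that combines this with Brad's offset
handling: trim the leading remainder so the length divides n, reshape once,
and reuse the 2-d view for both reductions. The function name is illustrative,
not from either post:

```python
import numpy as np

def chunked_min_max(arr, n):
    # Drop the leading partial chunk so arr's remaining length divides n,
    # mirroring the offset handling in Brad's original snippet.
    offset = arr.size % n
    chunks = arr[offset:].reshape(-1, n)  # one reshape, reused for both reductions
    return chunks.min(axis=1), chunks.max(axis=1)

a = np.arange(100)
lo, hi = chunked_min_max(a, 5)
```

The reshape produces a view (no copy) when the input is contiguous, so the
only real work is the two reductions.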
Bruce