[Numpy-discussion] Managing Rolling Data

Mike Ressler mike.ressler at alum.mit.edu
Wed Feb 21 14:13:26 EST 2007


On 2/21/07, Alexander Michael <lxander.m at gmail.com> wrote:
> ... T is too large to fit in memory, so I need to
> load up H, perform my calculations, pop the oldest N x P slice and
> push the newest N x P slice into the data cube. What's the best way to
> do this that will maintain fast computations along the one-dimensional
> slices over N and H? Is there a commonly accepted idiom?

Would loading your data via numpy.memmap and then slicing it do the
job? I work on 12 GB files with 4 GB of memory, and it is transparent
to me since the OS takes care of moving data in and out of memory. It
may not be the fastest solution possible, but for me it is a case
where development time matters more than run time.
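For what it's worth, here is a minimal sketch of the idea. The file
name, shape, and dtype are made up for illustration; np.memmap and
flush() are standard NumPy, but the circular-buffer bookkeeping for
the pop/push step is just one common idiom, not something memmap
gives you for free.

import numpy as np

# Hypothetical sizes for illustration only -- substitute your own.
H, N, P = 1000, 500, 20

# mode='w+' creates the backing file; use mode='r+' to reopen an
# existing cube. The OS pages data in and out as slices are touched,
# so the full H x N x P cube never has to fit in RAM.
cube = np.memmap('cube.dat', dtype=np.float64, mode='w+', shape=(H, N, P))

# 1-D computations along the history axis work as on a normal array,
# e.g. the mean over H for every (n, p) cell:
means = cube.mean(axis=0)

# Pop-oldest/push-newest without shifting H-1 slices: treat the first
# axis as a circular buffer and overwrite the stale slice in place.
oldest = 0                          # index of the oldest slice
new_slice = np.random.rand(N, P)    # stand-in for the incoming data
cube[oldest] = new_slice
cube.flush()                        # push the change back to disk
oldest = (oldest + 1) % H           # time order now wraps here

One caveat: with the default C order the H axis above is the
slowest-varying one, so slices along H are strided on disk. If your
1-D computations mostly run along H, laying the cube out as (N, P, H)
instead would keep those slices contiguous.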

Mike


-- 
mike.ressler at alum.mit.edu
