[SciPy-User] Porting code from IDL to Python - 'Common block' equivalent?

Charles R Harris charlesr.harris at gmail.com
Fri Jul 23 10:28:59 EDT 2010


On Fri, Jul 23, 2010 at 1:01 AM, Sebastian Haase <seb.haase at gmail.com> wrote:

> On Fri, Jul 23, 2010 at 2:26 AM, Charles R Harris
> <charlesr.harris at gmail.com> wrote:
> >
> >
> > On Wed, Jul 21, 2010 at 2:18 AM, David Andrews <irbdavid at gmail.com> wrote:
> >>
> >> Hi All,
> >>
> >> I suppose this might not strictly be a SciPy question, but I'll ask
> >> here as I expect some of you might understand what I'm getting at!
> >>
> >> I'm in the process of porting some code from IDL (Interactive Data
> >> Language, popular in some fields of science but largely unused
> >> elsewhere) to Python.  Essentially it's just plotting and analyzing
> >> time series data, so most of the porting is relatively simple.  The
> >> one stumbling block: is there an equivalent or useful replacement for
> >> the "common block" concept in IDL available in Python?
> >>
> >> Common blocks are areas of shared memory held by IDL that can be
> >> accessed easily from within sub-routines.  For example, in our IDL
> >> code we load data into these common blocks at the start of a session
> >> and then perform whatever analysis we need on it.  In this manner, we
> >> do not have to re-load the data every time we re-run a piece of
> >> analysis: common blocks keep their contents for the duration of the
> >> IDL session.  It's all for academic research purposes, so it's very
> >> much 'try this / see what happens / alter it, try again' kind of
> >> work.  The loading and initial processing of the data is fairly
> >> time-intensive, so having to reload at each step is frustrating and
> >> not very productive.
> >>
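
For context, the closest everyday Python stand-in for an IDL common block
is a module used as a shared namespace: anything assigned in it persists
for the life of the interpreter session and is visible to every module
that imports it.  A minimal sketch, with hypothetical names (datastore,
load):

    # datastore.py -- module-level names act like a "common block":
    # they persist for the lifetime of the Python session and are
    # visible to everything that does `import datastore`.
    import numpy as np

    data = None

    def load(path):
        """Do the slow load once; later calls reuse the cached copy."""
        global data
        if data is None:
            data = np.loadtxt(path)  # stand-in for the slow load step
        return data

From an interactive session, `import datastore` followed by
`datastore.load('series.txt')` loads once; every later call returns the
already-loaded array.
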
> >> So, does anyone have suggestions as to the best way to port this sort
> >> of behavior?  Pickle seems to be one option, but that would still
> >> involve read/write-to-disk operations on each reload.  Any others?
> >>
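
On the pickle point: it does touch the disk, but only once per change to
the raw data, which can still be a big win over re-doing the initial
processing every session.  A rough sketch, again with hypothetical names:

    import os
    import pickle

    import numpy as np

    CACHE = "session_cache.pkl"

    def expensive_load(path):
        # stand-in for the slow load / initial-processing step
        return np.loadtxt(path)

    def load_cached(path, cache=CACHE):
        # Reuse the pickled result if it exists; otherwise do the slow
        # step once and pickle it for the next session.
        if os.path.exists(cache):
            with open(cache, "rb") as fh:
                return pickle.load(fh)
        data = expensive_load(path)
        with open(cache, "wb") as fh:
            pickle.dump(data, fh, protocol=pickle.HIGHEST_PROTOCOL)
        return data
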
> >
> > Depending on the sort of data you have, PyTables might be an option.
> > I'm currently using it to store a 42 GB image data cube on disk, and
> > it works well for that.  I can browse through an image and shift-click
> > on a pixel to get a plot of the data associated with that pixel.  It
> > is quite fast.  The data cube needs to be passed as an argument to the
> > various functions that need the data, but that isn't much of a
> > problem.
> >
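
To make that concrete, a minimal sketch of what such a PyTables setup
might look like (file name, shape, and compression settings are
illustrative only; the calls are the current lower-case PyTables API):

    import numpy as np
    import tables

    # Create a chunked, compressed on-disk array for an (nx, ny, nt) cube.
    with tables.open_file("cube.h5", mode="w") as f:
        atom = tables.Float32Atom()
        filters = tables.Filters(complevel=5, complib="blosc")
        cube = f.create_carray(f.root, "cube", atom,
                               shape=(512, 512, 2048), filters=filters)
        # Fill in slabs so only one slab is in memory at a time.
        for i in range(512):
            cube[i, :, :] = np.random.random((512, 2048)).astype(np.float32)
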
> Chuck, just out of curiosity: what are the specs of your hardware, and
> which OS are you on?
>
>
It's 64-bit Ubuntu running on a quad-core Intel machine with 8 GB of
memory.  Memory usage is pretty modest in practice, since PyTables chunks
the data.
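
Because the file is chunked, pulling out, say, a single pixel's time
series reads only the chunks lying along that slice rather than the whole
cube.  A sketch against the hypothetical file above:

    import tables

    with tables.open_file("cube.h5", mode="r") as f:
        cube = f.root.cube          # on-disk handle; nothing loaded yet
        series = cube[100, 200, :]  # reads only the chunks on this slice
    # `series` is now an ordinary in-memory NumPy array.
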

Chuck