[SciPy-user] Debugging memory exhaustion in Python?

Anne Archibald peridot.faceted at gmail.com
Wed Jun 20 14:21:56 EDT 2007


On 19/06/07, Greg Novak <novak at ucolick.org> wrote:

> It seems like this is what's happening, even though I'm using Python
> 2.5.  I have a function that, when called, pretty reliably makes the
> memory usage and resident size go up by ~40 megabytes every time it's
> called.  I'm looking at the VmSize and VmRSS lines in
> /proc/<pid>/status on an Ubuntu machine to determine memory usage.  I
> expected to find zillions of objects added to the list returned by
> gc.get_objects().  However, there were only 27 objects added, and they
> all seemed small: strings, small dicts, one Frame object, and that's
> about it.
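
(I assume you're doing something like the following before-and-after
comparison to get that count of 27 -- this is just a sketch, with a
made-up helper name, and the measuring code itself adds a little
noise to the counts:

    import gc

    def report_new_objects(func):
        """Call func and print, by type, the objects that survive it."""
        gc.collect()
        before = set(id(o) for o in gc.get_objects())
        func()
        gc.collect()
        counts = {}
        for o in gc.get_objects():
            if id(o) not in before:
                name = type(o).__name__
                counts[name] = counts.get(name, 0) + 1
        for name, n in sorted(counts.items(), key=lambda item: -item[1]):
            print name, n

If that's roughly what you did, the questions below apply.)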

What does the Frame object contain? Doesn't it hold the complete set
of the function's local variables? I assume you're also listing
everything it points to.
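
One way to see what a lingering frame pins down is to walk
gc.get_objects() and dump each frame's locals; a rough sketch (in
Python 2, a saved traceback, e.g. from sys.exc_info(), is a common
way for a frame and everything it references to stay alive):

    import gc

    for obj in gc.get_objects():
        if type(obj).__name__ == 'frame':
            code = obj.f_code
            print code.co_filename, code.co_name
            for name, value in obj.f_locals.items():
                print '    %s: %s' % (name, type(value).__name__)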

Keep in mind that numpy objects sometimes keep large blocks of memory
alive. For example, if you allocate a huge array and then pick out a
small piece of it as a view, the original huge block of memory is kept
(and since it is not allocated with Python's malloc, it may not show
up in your tools). There's also the problem that a view holds a
reference to the array it's a view of, so taking views of views of
views can lead to arbitrarily long chains of objects.
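
Here's a tiny sketch of both effects (the sizes are arbitrary); note
that copy() is the usual way to cut a view loose from its base:

    import numpy as np

    big = np.zeros(5 * 1024 * 1024)  # ~40 MB of float64
    small = big[:10]                 # a view: keeps the whole block alive
    print small.base is big          # True -- the view references its base

    del big                          # the ~40 MB is NOT freed; small holds it
    small = small.copy()             # the copy owns just 10 floats
    print small.base                 # None -- the big block can now be freed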

Anne


