[SciPy-user] Debugging memory exhaustion in Python?

Greg Novak novak at ucolick.org
Tue Jun 19 22:57:30 EDT 2007


On 6/15/07, Brian Granger <ellisonbg.net at gmail.com> wrote:
> 1)  What version of python are you using?  Python 2.4 and below has
> some issues with memory not  being released back to the OS.

It seems like this is what's happening, even though I'm using Python
2.5.  I have a function that, when called, pretty reliably makes the
memory usage and resident size go up by ~40 megabytes every time it's
called.  I'm looking at the VmSize and VmRSS lines in
/proc/&lt;pid&gt;/status on an Ubuntu machine to determine memory usage.  I
expected to find zillions of objects added to the list returned by
gc.get_objects.  However, there were only 27 objects added, and they
all seemed small -- strings, small dicts, one Frame object, and that's
about it.
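For reference, that /proc check is easy to script; here is a minimal
sketch (the vm_stats helper and the sample text are mine, not from the
thread):

```python
def vm_stats(status_text):
    """Pull the VmSize and VmRSS values (in kB) out of the text of
    /proc/<pid>/status."""
    stats = {}
    for line in status_text.splitlines():
        if line.startswith("VmSize:") or line.startswith("VmRSS:"):
            key, value = line.split(":", 1)
            stats[key] = int(value.split()[0])  # value looks like "  43024 kB"
    return stats

# On a live Linux box you'd read the real file:
#   stats = vm_stats(open("/proc/self/status").read())
sample = "Name:\tpython\nVmSize:\t  43024 kB\nVmRSS:\t  40960 kB\n"
print(vm_stats(sample))  # {'VmSize': 43024, 'VmRSS': 40960}
```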

I mentioned a Python module called Heapy:
http://guppy-pe.sourceforge.net/

It lets you set a reference point and then look at the sizes of all
objects allocated after that point.  This confirms what I found above
manually: only a few objects were created, and they're small.
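The same reference-point-and-diff idea can be roughed out with nothing
but the stdlib gc module (a crude stand-in for what Heapy automates;
the snapshot helper is my own name):

```python
import gc
from collections import Counter

def snapshot():
    """Count the live, gc-tracked objects by type name."""
    gc.collect()
    return Counter(type(o).__name__ for o in gc.get_objects())

before = snapshot()
held = [[i] for i in range(1000)]   # allocate a batch of small lists
diff = snapshot() - before          # keeps only types whose count grew
print(diff["list"] >= 1000)  # True
```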

So it does seem as though the Python garbage collector has freed the
objects, but it hasn't returned the memory to the operating system.
This continues until I have several GB allocated and the program
crashes.
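That split between the two layers (Python-level objects versus
process-level pages) can be sketched like this (helper name is mine):

```python
import gc

def tracked_objects():
    gc.collect()
    return len(gc.get_objects())

base = tracked_objects()
blob = [[i] for i in range(100000)]   # ~100k gc-tracked lists
grown = tracked_objects()
del blob                              # refcounts drop, objects are freed
freed = tracked_objects()

print(grown - base > 90000)   # True: the allocation is visible to gc
print(freed - base < 50)      # True: and it is gone again after del
# ...but VmRSS in /proc/self/status often does NOT fall back here:
# the freed memory is kept by the allocator for reuse rather than
# returned to the OS, which is exactly the symptom described above.
```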

I'm not using any of my own C extensions for this (where I could screw
up the reference counting), and it doesn't look like the problem is
leaked objects anyway.  So... does anyone have any thoughts about
what could cause this?

Thanks,
Greg
