Program eating memory, but only on one machine?

Wolfgang Grafen wolfgang.grafen at marconi.com
Mon Jan 22 13:19:05 EST 2007


I had a similar problem with an extension module on Solaris years ago.
My problem at that time: I repeatedly requested memory, released it, then
requested more memory in the next step, and so on.

The reason the memory was eaten up: an answer from this group explained that
the operating system doesn't release the freed memory back, because it assumes
you will need it again soon. The memory is only released when the process ends.

The solution was to request memory up front for the largest array the process
would ever demand; that worked for me.
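A minimal sketch in Python of that preallocation pattern (the names and the
size bound are illustrative, not from the original code): allocate one buffer
sized for the largest step and reuse it, instead of allocating a fresh buffer
on every step, which can keep the process's peak memory growing even though
each buffer is freed.

```python
MAX_ELEMENTS = 1_000_000  # assumed upper bound on any single step's size

def process(step_sizes):
    # One allocation for the lifetime of the computation.
    buf = bytearray(MAX_ELEMENTS)
    totals = []
    for n in step_sizes:
        view = memoryview(buf)[:n]   # reuse the same memory each step
        for i in range(n):           # fill this step's working data
            view[i] = i % 256
        totals.append(sum(view))
    return totals
```

Because `buf` is allocated once and only sliced afterwards, the allocator never
has to grow the heap mid-run, so the process's footprint stays flat.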

Regards

Wolfgang

Per B.Sederberg wrote:
> Wolfgang Draxinger <wdraxinger <at> darkstargames.de> writes:
>>> So, does anyone have any suggestions for how I can debug this
>>> problem?
>> Have a look at the version numbers of the GCC used. Probably
>> something in your C code fails when it interacts with GCC 3.x.x.
>> It's hardly Python eating the memory; it's probably your C
>> module. Python's GC won't help here, since the fix would have
>> to go into your C module itself.
>>
>>>  If my program ate up memory on all machines, then I would know
>>> where to start and would blame some horrible programming on my
>>> end. This just seems like a less straightforward problem.
>> GCC 3.x.x ships with different runtime libs than GCC 4.x.x; I
>> would look in that direction.
>>
> 
> Thank you for the suggestions.  Since my C module is such a small part of the
> simulations, I can just comment out the call to that module completely (though I
> am still loading it) and fill in what the results would have been with random
> values.  Sadly, the program still eats up memory on our cluster.
> 
> Still, it could be something related to compiling Python with the older GCC.
> 
> I'll see if I can make a really small example program that eats up memory on our
> cluster.  That way we'll have something easy to work with.
> 
> Thanks,
> Per
> 


