[Chicago] Out of Memory: Killed Process: on CentOS

Brian Ray bray at sent.com
Mon Apr 27 17:16:25 CEST 2009


Hello:

I am trying to tackle an issue I am having with httpd running an app
under mod_python: it is not able to kill processes (to free memory)
quickly enough to remain stable.  The server ends up in a state where
it stops responding.

The server has several gigabytes of memory, but the requests it tries
to process are really huge.  It runs fine if the requests are small.
Basically, the requests contain XML that is parsed with ElementTree
(the C version) and converted to an internal database format.  The
parsing of that XML seems to be where most of the memory goes, but
writing to the database uses some as well.
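The flow is roughly like this; the element name and the db.insert call
are simplified placeholders, since the real schema isn't important for
the question:

    import xml.etree.cElementTree as ET  # the C version mentioned above

    def handle_request(xml_data, db):
        # fromstring() builds the whole tree in memory at once, so peak
        # memory grows with the size of the request body.
        root = ET.fromstring(xml_data)
        for record in root.findall('record'):  # 'record' is a placeholder tag
            db.insert(record.attrib)  # stand-in for the internal DB conversion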

What is the best way to manage this scenario?  Specifically, what
should I try?  One thing I thought about doing is repeatedly calling
"free -m" or "ps .." and parsing the results to see if something has
to be done.  I am OK with the idea of telling the caller they need to
try again later or try smaller chunks, and I would prefer that to
sending no response at all.
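Something like this is what I had in mind; the 200 MB threshold is
just a guess that would need tuning:

    import subprocess

    def free_megabytes():
        # Parse the "Mem:" line of `free -m`; its columns are
        # total, used, free, shared, buffers, cached.
        output = subprocess.Popen(['free', '-m'],
                                  stdout=subprocess.PIPE).communicate()[0]
        for line in output.splitlines():
            if line.startswith('Mem:'):
                return int(line.split()[3])
        return 0

    def accept_request(threshold_mb=200):
        # Tell the caller to retry later (or send smaller chunks) rather
        # than start a parse that may not fit in memory.
        return free_megabytes() >= threshold_mb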

Should I be looking at my Apache configuration?  I tried setting
StartServers, MinSpareServers, MaxSpareServers, and MaxClients to low
numbers, like 3.  Overall it seems to help, but I am not sure it will
if I get a really huge request.

Should I run the parser outside of Apache?  I was curious whether
Python by itself (not embedded in Apache) would run more efficiently
in this situation.
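What I have in mind is something along these lines, where
parse_upload.py would be a separate script (not written yet) that
reads the XML from a file and does the parse and database write
itself:

    import subprocess

    def parse_out_of_process(xml_path):
        # The child process gets its own address space, so whatever
        # memory the parse needs goes back to the OS as soon as it exits.
        proc = subprocess.Popen(['python', 'parse_upload.py', xml_path])
        return proc.wait() == 0  # non-zero exit -> tell the caller to retry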

Is there some other way to clear memory when running large requests?


If you help me out with a good solution, I will be open to presenting
(or talking you into presenting) what was done to tackle this issue at
a future ChiPy meeting.


Regards,
Brian Ray

