"High water" Memory fragmentation still a thing?

Chris Angelico rosuav at gmail.com
Fri Oct 3 19:26:40 EDT 2014


On Sat, Oct 4, 2014 at 4:36 AM, Croepha <croepha at gmail.com> wrote:
> What this means is that processes that do need to use a lot of memory will
> exhibit a "high water" behavior, where they remain forever at the level of
> memory usage that they required at their peak.

This is almost never true. What you will see, though, is something
like Christian described: pages get allocated, then only partially
used, and you don't always get all of that memory back. In theory, a
high-level language like Python would be allowed to move objects
around to compact memory, but CPython doesn't do this, and there's no
evidence it would really help anyway. (Look at Christian's comments
about "Should you return it or not?" and the cost of system calls...
now consider the orders-of-magnitude worse cost of actually moving
memory around.)
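
To make that concrete, here's a rough sketch of the high-water effect
(Linux-only, since it reads /proc/self/status for the current RSS;
exact numbers will vary by platform and allocator): allocate a pile of
small objects, free most of them, and notice how little of the peak
comes back, because the survivors pin partially-used pages.

import gc

def rss_kib():
    # Current resident set size in KiB, from the Linux proc filesystem.
    with open('/proc/self/status') as f:
        for line in f:
            if line.startswith('VmRSS:'):
                return int(line.split()[1])

print('baseline RSS:', rss_kib(), 'KiB')

# Allocate a couple of million small objects.
hog = [str(i) for i in range(2000000)]
print('peak RSS:    ', rss_kib(), 'KiB')

# Free 99% of them. The survivors are scattered across the allocator's
# arenas, so most pages stay partially used and can't go back to the OS.
hog = hog[::100]
gc.collect()
print('after free:  ', rss_kib(), 'KiB')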

This is why a lot of long-duration processes are built to be restarted
periodically. It's not strictly necessary, but it can be the most
effective way of solving the problem. I tend to ignore that, though,
and just let my processes keep running; one of them has been up for
88 wk 4d 23:56:27 so far. It's consuming less than half a gig of
virtual memory, a quarter gig resident, and it's been doing a fair bit
(it keeps all sorts of things in memory to avoid re-decoding from
disk). So don't worry too much about memory usage until you see that
there's actually a problem; with most Python processes, you'll restart
them to deploy new code sooner than you'll restart them to fix memory
problems. (The above example isn't a Python process, and its code
updates happen live.) In fact, I'd advise that as a general policy:
don't panic about any Python limitation until you've proven that it's
actually a problem. :)
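
If you want evidence before acting, here's one rough way to check,
using tracemalloc (in the stdlib since Python 3.4) to see which lines
actually account for growth; the middle section is just a placeholder
for whatever workload you suspect:

import tracemalloc

tracemalloc.start()
before = tracemalloc.take_snapshot()

# Stand-in for the workload you suspect of leaking.
suspect = [object() for _ in range(100000)]

after = tracemalloc.take_snapshot()

# Top five lines by net allocation growth between the snapshots.
for stat in after.compare_to(before, 'lineno')[:5]:
    print(stat)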

ChrisA


