Multiprocessing and memory management

Peter J. Holzer hjp-python at hjp.at
Wed Jul 3 15:48:09 EDT 2019


On 2019-07-03 08:37:50 -0800, Israel Brewster wrote:
> 1) Determine the total amount of RAM in the machine (how?), assume an
> average of 10GB per process, and only launch as many processes as
> calculated to fit. Easy, but would run the risk of under-utilizing the
> processing capabilities and taking longer to run if most of the
> processes were using significantly less than 10GB
> 
> 2) Somehow monitor the memory usage of the various processes, and if
> one process needs a lot, pause the others until that one is complete.
> Of course, I’m not sure if this is even possible.

If you use Linux or another unixoid OS, you can pause and resume
processes with the STOP and CONT signals. Just keep in mind that the
paused processes still occupy virtual memory and cannot finish while
stopped - so you need enough swap space.
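Untested sketch of what that could look like with the multiprocessing
module (the worker is just a placeholder, and os.kill with
SIGSTOP/SIGCONT only works on a unixoid OS):

import os
import signal
import time
import multiprocessing

def worker(seconds):
    # stand-in for the real, memory-hungry computation
    time.sleep(seconds)

if __name__ == "__main__":
    procs = [multiprocessing.Process(target=worker, args=(2,))
             for _ in range(4)]
    for p in procs:
        p.start()

    # Pause everything except the first worker.  The stopped processes
    # keep their (virtual) memory; they just stop being scheduled.
    for p in procs[1:]:
        os.kill(p.pid, signal.SIGSTOP)

    procs[0].join()                     # wait for the big one to finish

    # Resume the rest and wait for them.
    for p in procs[1:]:
        os.kill(p.pid, signal.SIGCONT)
    for p in procs[1:]:
        p.join()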


> 3) Other approaches?

Is the memory usage at all predictable? I.e. can you estimate the usage
from the size of the input data, or after the process has been running
for a short time? In that case you could monitor the free memory and use
your estimates to decide whether you can start another process now or
need to wait (until a process terminates, or maybe only until you get a
better estimate).
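Untested sketch of that idea. It uses the third-party psutil package;
estimate_usage() is a made-up placeholder for whatever estimate you can
derive from the input data, and the 20 % safety margin is arbitrary.
psutil.virtual_memory().total would also answer the "total amount of
RAM" question from point 1:

import time
import multiprocessing

import psutil          # third-party: pip install psutil

def estimate_usage(job):
    # Hypothetical: derive a guess from the input size; here we just
    # fall back to the 10 GB average from point 1.
    return 10 * 2**30

def run_job(job):
    ...                # the real work goes here

def launch_all(jobs):
    running = []
    for job in jobs:
        needed = estimate_usage(job)
        # Wait until enough memory is free, with a 20 % margin.
        while psutil.virtual_memory().available < needed * 1.2:
            running = [p for p in running if p.is_alive()]
            time.sleep(5)
        p = multiprocessing.Process(target=run_job, args=(job,))
        p.start()
        running.append(p)
    for p in running:
        p.join()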

        hp

-- 
   _  | Peter J. Holzer    | we build much bigger, better disasters now
|_|_) |                    | because we have much more sophisticated
| |   | hjp at hjp.at         | management tools.
__/   | http://www.hjp.at/ | -- Ross Anderson <https://www.edge.org/>