[issue17560] problem using multiprocessing with really big objects?
Richard Oudkerk
report at bugs.python.org
Wed Mar 27 21:03:57 CET 2013
Richard Oudkerk added the comment:
On 27/03/2013 7:27pm, Charles-François Natali wrote:
>
> Charles-François Natali added the comment:
>
>> Through fork, yes, but "shared" rather than "copy-on-write".
>
> There's a subtlety: because of refcounting, just treating a COW object
> as read-only (e.g. iterating over the array) will trigger a copy
> anyway...
I mean "write-through" (as opposed to "read-only" or "copy-on-write").
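To make the distinction concrete, here is a minimal sketch of write-through
sharing (assuming the fork start method on a POSIX system).  A RawArray lives
in an anonymous shared mmap, so a write made in the child is immediately
visible to the parent; ordinary Python objects inherited over fork are only
copy-on-write, and, as noted above, refcount updates dirty their pages anyway:

    import multiprocessing
    from multiprocessing.sharedctypes import RawArray

    arr = RawArray('d', 10)   # ten C doubles in anonymous shared memory

    def child(a):
        a[0] = 42.0           # write-through: no pickling, no page copy

    p = multiprocessing.Process(target=child, args=(arr,))
    p.start()
    p.join()
    print(arr[0])             # 42.0 -- the parent sees the child's write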
>> I don't think shm_open() really has any advantages over
>> using mmaps backed by "proper" files (since POSIX shared memory uses up
>> space in /dev/shm which is limited).
>
> File-backed mmap() will incur disk I/O (although some of the data will
> probably sit in the page cache), which would be much slower than
> shared memory. Also, you need corresponding disk space.
> As for the /dev/shm limit, it's normally dimensioned according to the
> amount of RAM, which is in turn dimensioned according to the working
> set.
Apart from creating, unlinking and resizing the file, I don't think there
should be any disk I/O.
On Linux, disk I/O only occurs when fsync() or close() is called.
FreeBSD has a MAP_NOSYNC flag which gives the Linux behaviour (otherwise
dirty pages are flushed to disk every 30-60 seconds).
Once the file has been unlink()ed, any sensible operating system
should realize that it does not need to sync the file.
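For illustration, a rough sketch of that scheme (POSIX assumed): create a
file, size it, map it MAP_SHARED, and unlink the name before forking; the
mapping remains valid, and once the name is gone there is nothing for the
kernel to usefully sync:

    import mmap
    import os
    import tempfile

    SIZE = 100 * 1024 * 1024          # 100 MB of shared memory

    fd, path = tempfile.mkstemp()     # a "proper" file, not /dev/shm
    os.ftruncate(fd, SIZE)            # size it without writing any data
    buf = mmap.mmap(fd, SIZE)         # MAP_SHARED read/write by default
    os.unlink(path)                   # the mapping survives the unlink

    if os.fork() == 0:                # child inherits fd and mapping
        buf[:5] = b"hello"            # write through the shared pages
        os._exit(0)

    os.wait()
    print(buf[:5])                    # b'hello': shared, not copy-on-write
    os.close(fd)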
----------
_______________________________________
Python tracker <report at bugs.python.org>
<http://bugs.python.org/issue17560>
_______________________________________