[issue17560] problem using multiprocessing with really big objects?

Serhiy Storchaka report at bugs.python.org
Mon Mar 13 15:04:58 EDT 2017


Serhiy Storchaka added the comment:

Pickle currently handles byte strings and unicode strings larger than 4GB only with protocol 4, but multiprocessing currently uses the default protocol, which currently equals 3. There have been suggestions to change the default pickle protocol (issue23403), to change the pickle protocol used by multiprocessing (issue26507), or to make the serialization method for multiprocessing customizable (issue28053). There is also a patch that implements support for byte strings and unicode strings larger than 4GB with all protocols (issue25370).
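A minimal sketch of the protocol difference (the 100-byte payload below is a stand-in; the limit only bites on real objects larger than 4 GiB, which protocols below 4 cannot represent because they store the length in a 4-byte field):

```python
import pickle

# Stand-in payload; imagine len(data) > 2**32 in the real scenario.
data = b"x" * 100

# Protocol 4 uses 8-byte length prefixes and framing, so it can
# serialize bytes/str objects of any size.  Passing it explicitly
# works around the default used by multiprocessing at the time.
blob = pickle.dumps(data, protocol=4)
assert pickle.loads(blob) == data

# The default protocol discussed in this report:
print(pickle.DEFAULT_PROTOCOL)
```

Note that simply calling `pickle.dumps` with a higher protocol yourself does not help when the pickling happens inside multiprocessing's internals; that is exactly what issue26507 and issue28053 are about.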

Besides this, I think that using some kind of shared memory is a better way of transferring large data between subprocesses.
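One possible sketch of the shared-memory approach, using `multiprocessing.sharedctypes.RawArray` (the buffer size and the `fill` helper are illustrative, not from the report): the buffer is allocated once in shared memory, the child writes into it in place, and only the tiny handle is pickled rather than the data itself.

```python
from multiprocessing import Process
from multiprocessing.sharedctypes import RawArray

def fill(buf, n):
    # Runs in the child process: write into the shared buffer in place.
    for i in range(n):
        buf[i] = i % 256

def demo():
    n = 1024  # stand-in; could be gigabytes in practice
    buf = RawArray("B", n)  # unsynchronized shared byte array
    p = Process(target=fill, args=(buf, n))
    p.start()
    p.join()
    return bytes(buf[:4])

if __name__ == "__main__":
    print(demo())  # the child's writes are visible to the parent
```

Because the child mutates the shared buffer directly, no multi-gigabyte pickle stream is ever produced, sidestepping the protocol limit entirely.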

----------
nosy: +serhiy.storchaka
versions: +Python 3.7 -Python 3.4

_______________________________________
Python tracker <report at bugs.python.org>
<http://bugs.python.org/issue17560>
_______________________________________
