array.array()'s memory shared with multiprocessing.Process()

gerlando.falauto at gmail.com gerlando.falauto at gmail.com
Tue Sep 12 04:21:10 EDT 2017


On Monday, 11 September 2017 at 12:19:27 UTC+2, Thomas Jollans wrote:
> On 2017-09-10 23:05, iurly wrote:
> > As far as I'm concerned, I'm probably better off using double buffers to avoid this kind of issue.
> > Thanks a lot for your help!
> > 
> 
> 
> That should work. Some other things to consider, if both processes are
> on the same machine, are a series of memory-mapped files to pass the
> data without pickling, or, if your code is only going to run on Unix,
> something involving shared memory through multiprocessing.Array.

Oh, I see, multiprocessing.Array() sounds like a pretty good idea, thanks!
It's simple enough and actually already compatible with what I'm doing.
That would involve double buffers, which I would have to use as the first optimization step anyway.

Note, however, that I'd have to create those Arrays dynamically in the producer thread. Would I then be able to pass them to the consumer by putting a reference in a queue? Of course, I wouldn't want them to be pickled at all in that case.
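For what it's worth, a common pattern is to allocate the shared Arrays up front and hand them to the child at Process creation time, so their shared memory is inherited rather than pickled; a plain Queue then carries only a small index saying which buffer is ready. A minimal sketch of that double-buffering scheme (names like `buffers` and `ready` are illustrative, not from this thread):

```python
import multiprocessing as mp

N = 4  # samples per buffer (illustrative size)

def producer(buffers, ready):
    for i in range(2):            # fill each of the two buffers once
        buf = buffers[i]
        with buf.get_lock():      # a synchronized Array carries its own lock
            for j in range(N):
                buf[j] = float(i * N + j)
        ready.put(i)              # tell the consumer which buffer is full
    ready.put(None)               # sentinel: no more data

def consumer(buffers, ready, out):
    while True:
        i = ready.get()
        if i is None:
            break
        buf = buffers[i]
        with buf.get_lock():
            out.put(sum(buf[:]))  # read the shared data, report a result

if __name__ == "__main__":
    # Two shared double-precision buffers: the double-buffer scheme above.
    # They are passed as Process args, so no payload goes through a queue.
    buffers = [mp.Array('d', N) for _ in range(2)]
    ready = mp.Queue()
    out = mp.Queue()
    c = mp.Process(target=consumer, args=(buffers, ready, out))
    c.start()
    producer(buffers, ready)
    c.join()
    print(out.get(), out.get())   # sums of buffer 0 and buffer 1
```

Trying instead to `put()` a freshly created `multiprocessing.Array` on a Queue after the child has started raises a RuntimeError, since synchronized objects are meant to be shared through inheritance only.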

Also, why do you say it only works on Unix? I couldn't find any reference to such a limitation in the documentation.

Thank you so much again!
