cPickle.load vs. file.read+cPickle.loads on large binary files
Peter Otten
__peter__ at web.de
Tue Nov 17 12:20:44 EST 2015
andrea.gavana at gmail.com wrote:
>> > I am puzzled with no end... Might there be something funny with my C
>> > libraries that use fread? I'm just shooting in the dark. I have a
>> > standard Python installation on Windows, nothing fancy :-(
>>
>> Perhaps there is a size threshold? You could experiment with different
>> block sizes in the following f.read() replacement:
>>
>> def read_chunked(f, size=2**20):
>>     read = functools.partial(f.read, size)
>>     return "".join(iter(read, ""))
>
>
> Thank you for the suggestion. I have used the read_chunked function in my
> experiments now and I can report a nice improvement. I have tried
> various chunk sizes, from 2**10 to 2**31-1, and in general the optimum
> lies around size=2**22, although it is essentially flat from 2**20 up to
> 2**30, with some interesting spikes of around 45 seconds for 2**14 and
> 2**15 (see table below).
>
> Using your suggestion, I got it down to 3.4 seconds (on average). Still at
> least twice as slow as cPickle.load, but better.
>
> What I find most puzzling is that a pure file.read() (or your read_chunked
> variation) should normally be much faster than a cPickle.load (which does
> so many more things than just reading a file), shouldn't it?
That would have been my expectation, too.
I had a quick look into the fileobject.c source and didn't see anything that
struck me as suspicious.
I think you should file a bug report so that an expert can check if there is
an underlying problem in Python or if it is a matter of the OS.
> Timing table:
>
> Size (power of 2) Read Time (seconds)
> 10 9.14
> 11 8.59
> 12 7.67
> 13 5.70
> 14 46.06
> 15 45.00
> 16 24.80
> 17 14.23
> 18 8.95
> 19 5.58
> 20 3.41
> 21 3.39
> 22 3.34
> 23 3.39
> 24 3.39
> 25 3.42
> 26 3.43
> 27 3.44
> 28 3.48
> 29 3.59
> 30 3.72
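[A rough sketch (my own, not from the thread) of how a table like the one above can be produced: time one full chunked read of the same file per chunk size. The file path and the range of powers are placeholders; real measurements should also average several runs and account for OS file caching.]

```python
import functools
import time

def read_chunked(f, size):
    # Read the file in chunks of `size` bytes until EOF (empty bytes).
    read = functools.partial(f.read, size)
    return b"".join(iter(read, b""))

def time_chunk_sizes(path, powers=range(10, 31)):
    # Map each power of two to the wall-clock seconds taken to read
    # the whole file with that chunk size.
    results = {}
    for p in powers:
        with open(path, "rb") as f:
            start = time.perf_counter()
            read_chunked(f, 2 ** p)
            results[p] = time.perf_counter() - start
    return results
```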