pickle: huge memory consumption *during* pickling

Jean Brouwers mrjean1ATcomcastDOTnet at no.spam.net
Thu Nov 11 12:40:19 EST 2004


FWIW, we pickle data extracted from large log files.  The number of
pickled objects is about 1500, the pickle file is 55+ MB, and it
takes about 3 minutes to generate that file**.

This uses cPickle with protocol 2 (!), and all strings to be pickled
are interned when they are first created.
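
Roughly, the setup looks like this; the names and the record layout
are simplified for illustration, not our actual code:

    import cPickle

    def parse_line(line):
        # Hypothetical parser: intern the repeated field values so
        # identical strings share one object and get memoized (and
        # hence pickled) only once.
        return [intern(field) for field in line.split()]

    def save_records(records, path):
        f = open(path, 'wb')
        try:
            cPickle.dump(records, f, 2)  # protocol 2: compact binary
        finally:
            f.close()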

/Jean Brouwers
  ProphICy Semiconductor, Inc.

**) On a dual 2.4 GHz Xeon machine with 2 GB of memory running RedHat
Linux 8.0.
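
One more note: a Pickler's memo table keeps a reference to every
object it has pickled, which is the usual source of the memory growth
named in the subject line.  If a single Pickler is reused for
repeated dumps into one file, cPickle's Pickler.clear_memo() resets
that table between dumps.  A sketch (append_snapshots is just an
illustrative name):

    import cPickle

    def append_snapshots(snapshots, path):
        f = open(path, 'wb')
        try:
            p = cPickle.Pickler(f, 2)
            for snap in snapshots:
                p.dump(snap)
                p.clear_memo()  # drop references to objects seen so far
        finally:
            f.close()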



In article <cn00o5$r0j$1 at fuerst.cs.uni-magdeburg.de>, Hans Georg
Krauthaeuser <hgk at et.uni-magdeburg.de> wrote:

> Dear all,
> 
> I have a long-running application (electromagnetic compatibility 
> measurements in mode-stirred chambers over GPIB) that uses pickle 
> (cPickle) to autosave a class instance with all the measured data 
> from time to time.
> 
> At the beginning, pickling is quite fast, but as the data grows, 
> pickling slows down rapidly.
> 
> This morning we reached the point where it took 6 hours to pickle 
> the class instance. The pickle file was then approx. 92 MB (this is 
> ok). During pickling, the memory consumption of the Python process 
> grew to 450 MB (512 MB RAM -> the machine was swapping all the time).
> 
> My class uses data types taken from a C++ class via SWIG. I don't 
> know if that is important...
> 
> My feeling is that I'm doing something wrong, but my Python 
> knowledge is not deep enough to see what that is.
> 
> Is there another way to perform an autosave of a class instance? Shelve?
> 
> System: Python 2.3.4, Win XP, 1.X GHz class PC, 512 MB RAM
> 
> Best regards
> Hans Georg
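
As for the shelve question: shelve stores each top-level key as a
separate pickle, so a save can rewrite only the keys that changed
instead of re-pickling one huge instance, and each dump's memo table
is bounded by a single entry rather than the whole data set.  A rough
sketch (the autosave name and the per-measurement key scheme are
assumptions about your data, not taken from your post):

    import shelve

    def autosave(measurements, path):
        # 'measurements' is assumed to be a dict of independent
        # entries; shelve requires the keys to be strings.
        db = shelve.open(path, protocol=2)
        try:
            for key, value in measurements.items():
                db[key] = value  # each assignment pickles one value
        finally:
            db.close()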


