[New-bugs-announce] [issue5557] Byte-code compilation uses excessive memory

Tom Goddard report at bugs.python.org
Tue Mar 24 20:48:44 CET 2009


New submission from Tom Goddard <goddard at cgl.ucsf.edu>:

Bytecode compiling large Python files uses an unexpectedly large amount
of memory.  For example, compiling a file containing a list of 5 million
integers uses about 2 Gbytes of memory while the Python file size is
about 40 Mbytes.  The memory used is 50 times the file size.  The
resulting list in Python consumes about 400 Mbytes of memory, so
compiling the byte codes uses about 5 times the memory of the list
object.  Can the byte-code compilation be made more memory efficient?

The application that creates similarly large Python files is a
molecular graphics program called UCSF Chimera that my lab develops.  It
writes session files which are Python code.  Sessions of reasonable size
for Chimera for a given amount of physical memory cannot be
byte-compiled without thrashing, crippling the interactivity of all
software running on the machine.

Here is Python code to produce the test file test.py containing a list
of 5 million integers:

print >>open('test.py','w'), 'x = ', repr(range(5000000))
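
Since the report also covers Python 3.0.1, an equivalent generator in Python 3 might look like the following (a sketch; the `write_test_file` helper and the small demonstration count are my own, not from the report, which used 5,000,000 integers):

```python
def write_test_file(path, n):
    # Write a Python source file defining a list of n integers,
    # mirroring the Python 2 one-liner above.
    with open(path, 'w') as f:
        f.write('x = ' + repr(list(range(n))) + '\n')

# The original report used n = 5_000_000, producing a ~40 MB file.
write_test_file('test.py', 1000)
```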

I tried importing the test.py file with Python 2.5, 2.6.1 and 3.0.1 on
Mac OS 10.5.6.  In each case when the test.pyc file is not present the
python process as monitored by the unix "top" command took about 1.7 GB
RSS and 2.2 GB VSZ on a MacBook Pro which has 2 GB of memory.
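
The "top" figures can also be cross-checked from within the process itself using the standard `resource` module (a sketch, Unix-only; note that `ru_maxrss` is reported in kilobytes on Linux but in bytes on macOS, per getrusage(2)):

```python
import resource

# Peak resident set size of the current process so far.
# Units: kilobytes on Linux, bytes on macOS.
peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
print(peak)
```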

----------
components: Interpreter Core
messages: 84108
nosy: goddard
severity: normal
status: open
title: Byte-code compilation uses excessive memory
type: performance
versions: Python 2.6

_______________________________________
Python tracker <report at bugs.python.org>
<http://bugs.python.org/issue5557>
_______________________________________

