This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in Python's Developer Guide.

classification
Title: cgi.py uses too much memory on large file uploads
Type: enhancement Stage:
Components: Library (Lib) Versions:
process
Status: closed Resolution: fixed
Dependencies: Superseder:
Assigned To: barry Nosy List: barry, barryp
Priority: normal Keywords:

Created on 2000-10-30 21:35 by barryp, last changed 2022-04-10 16:02 by admin. This issue is now closed.

Messages (2)
msg2233 - (view) Author: Barry Pederson (barryp) Date: 2000-10-30 21:35
When uploading files through a web form and parsing the form using cgi.py, the entire file(s) are read into memory, which is intolerable with very large files (I'm thinking multiple tens of megabytes).

The culprit seems to be in the FieldStorage class, basically the three calls to:

   self.lines.append(line)

in the read_lines_to_eof, read_lines_to_outerboundary, and skip_lines methods.  

Commenting those calls out seems to fix the problem, as long as you don't care to access the 'lines' member of the FieldStorage instance.  A real fix might be to add a "keep_lines=1" keyword parameter to FieldStorage.__init__(), save that value in the instance, and check it before accumulating form lines.
msg2234 - (view) Author: Barry A. Warsaw (barry) * (Python committer) Date: 2000-11-06 18:36
This is identical to bug 110674, which was closed prior to Python 2.0 after it was added to PEP 42.

For Python 2.1, we will remove the self.lines attribute and see if anybody complains.  Nobody can remember why those were added in the first place.
History
Date User Action Args
2022-04-10 16:02:33 admin set github: 33418
2000-10-30 21:35:20 barryp create