[Image-SIG] PIL, Python and memory issues.... HELP!

Kevin Cazabon KCAZA@cymbolic.com
Tue, 20 Apr 1999 16:35:46 -0700


I've written a handy program for splitting up large images into separate panels for printing (including overlap and crop marks, etc.), but it's been rendered basically useless by the memory problems I'm running into.
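
In outline, the splitting loop is along these lines (a simplified sketch; the panel size, overlap, and file names here are made up):

    import Image                        # PIL's Image module

    PANEL_W, PANEL_H = 12200, 12200     # hypothetical panel size, in pixels
    OVERLAP = 150                       # hypothetical overlap, in pixels

    im = Image.open("big_image.tif")
    width, height = im.size

    panel = 0
    for top in range(0, height, PANEL_H):
        for left in range(0, width, PANEL_W):
            box = (max(left - OVERLAP, 0),
                   max(top - OVERLAP, 0),
                   min(left + PANEL_W + OVERLAP, width),
                   min(top + PANEL_H + OVERLAP, height))
            region = im.crop(box)
            region.save("panel_%03d.tif" % panel)
            del region                  # drop the panel from memory before the next crop
            panel = panel + 1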

The images that need to be broken into panels for my application are typically in the 300MB-1.5GB range (which is huge, but I'm driving a continuous-tone photographic imagesetter at 305dpi that prints up to 50"x100").
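
(For reference, 50"x100" at 305dpi is roughly 15,250 x 30,500 pixels, about 465 million in all; as 24-bit RGB that works out to around 1.4GB uncompressed.)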

Where are the memory bottlenecks likely to be: in Python itself, or in PIL? I'm finding that you basically need to be able to hold the entire image in RAM, plus room for the OS, etc. Python/PIL doesn't seem to make effective use of the virtual memory on my Win98 or NT test machines, or else there's an arbitrary memory limit somewhere in the code.

I'm dumping everything I can from memory as I go (dropping references to each panel as soon as it's written, as in the sketch above), but it still has problems. Even processing a 175MB file on an NT machine with 128MB of RAM and 512MB of VM didn't work. (Typically I'd recommend a 512MB+ machine for real images, but this should be sufficient for testing; and yes, I've tested the program successfully with 60MB images.)

Anyone have suggestions for using Python/PIL with such large images, short of parsing the files pixel for pixel (or line by line)?  I'd hate to have to start over again in C, because Python is so convenient for this type of thing (if not amazingly fast).
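
Just so it's clear what I mean by line-by-line: the fallback would be to skip PIL for the read and walk the raw pixel data in strips myself, something like the following (assuming an uncompressed file whose header size and row length I know in advance):

    HEADER_SIZE = 0                   # hypothetical: byte offset of the pixel data
    WIDTH, HEIGHT = 15250, 30500      # hypothetical image dimensions
    ROW_BYTES = WIDTH * 3             # one row of 24-bit RGB

    f = open("big_image.raw", "rb")
    f.seek(HEADER_SIZE)
    for y in range(HEIGHT):
        row = f.read(ROW_BYTES)
        # ...slice the row up and append each piece to the appropriate panel file...
    f.close()

That keeps only one row in memory at a time, but it means reimplementing the format handling (and the overlap/crop-mark logic) that PIL already gives me.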

Thanks,

Kevin Cazabon.