segfault using shutil.make_archive

Tim jtim.arnold at gmail.com
Thu Oct 6 14:43:18 EDT 2016


On Thursday, October 6, 2016 at 2:04:20 PM UTC-4, Random832 wrote:
> On Thu, Oct 6, 2016, at 13:45, Random832 wrote:
> > On Thu, Oct 6, 2016, at 12:46, Tim wrote:
> > > I need to zip up a directory that's about 400mb.
> > > I'm using shutil.make_archive and I'm getting this response:
> > > 
> > >     Segmentation fault: 11 (core dumped)
> > > 
> > > The code is straightforward (and works on other, smaller dirs):
> > 
> > Are you able to make a test case that reproduces it reliably without any
> > specifics about what data you are using (e.g. can you generate a
> > directory full of blank [or /dev/urandom if the compressed size is a
> > factor] files that causes this)? Is it a large number of files or a
> > large total size of files?
> 
> Also consider passing a logging object to make_archive and see if it
> prints out anything interesting before the segfault.
> 
> import shutil
> import logging
> import sys
> logger = logging.getLogger()
> logger.setLevel(logging.DEBUG)
> logger.addHandler(logging.StreamHandler(sys.stderr))
> shutil.make_archive(..., logger=logger)

Interesting. I tried the above in an interactive Python session and never saw the problem at all. So then I added the logger to my program code and watched it go. Running the program, I still get the segfault. It happens on the same file each time; if I remove that file, it happens on the next file in the list.

It looks like the Python process has run out of space, but the machine has plenty of disk space and available memory.

I didn't think about the number of files, I'll look into that.
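
To check that, I'm thinking of generating a directory full of dummy files and seeing whether a large file count by itself triggers it. Something roughly like this (the counts and sizes are just guesses, not my real data):

    import os
    import shutil

    # build a throwaway directory of many small random files
    src = 'testdir'
    if not os.path.isdir(src):
        os.makedirs(src)
    for i in range(50000):                       # large file count, small total size
        with open(os.path.join(src, 'f%05d.bin' % i), 'wb') as f:
            f.write(os.urandom(1024))            # 1 KB of random data per file

    # then try to reproduce the segfault on the generated directory
    shutil.make_archive('testdir_archive', 'zip', src)
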
For now, I'm just using subprocess and the zip command, which is working.
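
That part is just a straight call out to the system zip command, roughly like this (the paths are placeholders for the real ones):

    import subprocess

    # fall back to the external zip command; -r recurses into the directory
    subprocess.check_call(['zip', '-r', 'mydir.zip', 'mydir'])
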

thanks,
--Tim


