fast copying of large files in python

Catherine Moroney Catherine.M.Moroney at jpl.nasa.gov
Wed Nov 2 15:31:48 EDT 2011


Hello,

I'm working on an application that, as part of its processing, needs to
copy 50 MB binary files from one NFS-mounted disk to another.

The simple-minded approach of shutil.copyfile is very slow, and I'm
guessing that this is due to the default 16 KB buffer size.  Using
shutil.copyfileobj is faster when I set a larger buffer size, but it's
still slow compared to an os.system("cp %s %s" % (f1, f2)) call.
Is this because of the overhead entailed in having to open the files
in Python before handing them to copyfileobj?
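
For reference, the copyfileobj version I'm timing is essentially the
sketch below (the function name and the 16 MB buffer are arbitrary
choices on my part):

    import shutil

    def copy_with_big_buffer(src, dst, buffer_size=16 * 1024 * 1024):
        # Stream between the two files with a larger-than-default
        # buffer; copyfileobj otherwise falls back to 16 KB chunks.
        with open(src, 'rb') as fsrc, open(dst, 'wb') as fdst:
            shutil.copyfileobj(fsrc, fdst, buffer_size)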

I'm not worried about portability, so should I just do the
os.system call as described above and be done with it?  Is that the
fastest method in this case?  Are there any "pure Python" ways of
getting the same speed as os.system?
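
(If it makes any difference, the shell-out could also be written with
subprocess instead of os.system, which avoids building a shell command
string; I assume the copy speed is the same either way:)

    import subprocess

    def copy_via_cp(src, dst):
        # Run cp directly; passing an argument list avoids the shell
        # and any quoting problems with unusual filenames.
        subprocess.check_call(['cp', src, dst])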

Thanks,

Catherine


