corrupt download with urllib2

Ulli Horlacher framstag at rus.uni-stuttgart.de
Tue Nov 10 12:21:24 EST 2015


Peter Otten <__peter__ at web.de> wrote:

> > I have a problem with it: There is no feedback for the user about the
> > progress of the transfer, which can last several hours.
> > 
> > For small files shutil.copyfileobj() is a good idea, but not for huge
> > ones.
> 
> Indeed. Have a look at the source code:
> 
> def copyfileobj(fsrc, fdst, length=16*1024):
>     """copy data from file-like object fsrc to file-like object fdst"""
>     while 1:
>         buf = fsrc.read(length)
>         if not buf:
>             break
>         fdst.write(buf)
> 
> As simple as can be

Oooops - that's all?!


> I suggested the function as an alternative to writing 
> the loop yourself when your example code basically showed

Good idea :-)
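
For reference, a minimal sketch of using shutil.copyfileobj() with a urllib2
response on Python 2; the URL and filename are placeholders, not from this
thread:

import shutil
import urllib2

url = "https://example.com/big.iso"      # placeholder URL
response = urllib2.urlopen(url)          # file-like object
with open("big.iso", "wb") as out:
    # copies in 16 KiB chunks by default; no progress output at all
    shutil.copyfileobj(response, out)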


> For the huge downloads that you intend to cater to, you probably want your 
> script to do more than print a dot on every iteration: you need the expected 
> remaining time, checksums, the ability to stop and resume a download, and 
> whatnot.
> 
> Does the Perl code offer that?

Of course, yes. For download AND upload.
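
As a rough illustration (not the Perl tool's actual logic), a chunked download
loop with percentage and ETA output could look like this in Python 2 with
urllib2; the function name and chunk size are arbitrary choices:

import sys
import time
import urllib2

def download(url, filename, chunk_size=64 * 1024):
    response = urllib2.urlopen(url)
    # Content-Length may be missing; fall back to 0 (then no percentage/ETA)
    total = int(response.info().getheader("Content-Length", "0"))
    done = 0
    start = time.time()
    with open(filename, "wb") as out:
        while True:
            chunk = response.read(chunk_size)
            if not chunk:
                break
            out.write(chunk)
            done += len(chunk)
            if total:
                elapsed = time.time() - start
                rate = done / elapsed if elapsed else 0
                eta = (total - done) / rate if rate else 0
                sys.stderr.write("\r%3d%%  ETA %5ds" % (100 * done // total, eta))
    sys.stderr.write("\n")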



> Then why rewrite?

There is no longer a Perl compiler for Windows that supports https.



> Or are there Python libraries that do that out of the box?

No.
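
So resuming has to be written by hand. One possible sketch uses an HTTP Range
header in Python 2; it only helps if the server answers a Range request with
206 Partial Content, and the names here are arbitrary:

import os
import urllib2

def resume_download(url, filename, chunk_size=64 * 1024):
    # start where a previous, interrupted download left off
    offset = os.path.getsize(filename) if os.path.exists(filename) else 0
    request = urllib2.Request(url)
    if offset:
        request.add_header("Range", "bytes=%d-" % offset)
    response = urllib2.urlopen(request)
    # append only if the server really served a partial response
    mode = "ab" if offset and response.getcode() == 206 else "wb"
    with open(filename, mode) as out:
        while True:
            chunk = response.read(chunk_size)
            if not chunk:
                break
            out.write(chunk)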


-- 
Ullrich Horlacher              Server und Virtualisierung
Rechenzentrum IZUS/TIK         E-Mail: horlacher at tik.uni-stuttgart.de
Universitaet Stuttgart         Tel:    ++49-711-68565868
Allmandring 30a                Fax:    ++49-711-682357
70550 Stuttgart (Germany)      WWW:    http://www.tik.uni-stuttgart.de/


