Downloading Large Files -- Feedback?
Fuzzyman
fuzzyman at gmail.com
Mon Feb 13 05:34:49 EST 2006
mwt wrote:
> This code works fine to download files from the web and write them to
> the local drive:
>
> import urllib
> f = urllib.urlopen("http://www.python.org/blah/blah.zip")
> g = f.read()
> file = open("blah.zip", "wb")
> file.write(g)
> file.close()
>
> The process is pretty opaque, however. This downloads and writes the
> file with no feedback whatsoever. You don't see how many bytes you've
> downloaded already, etc. Especially the "g = f.read()" step just sits
> there while downloading a large file, presenting a pregnant, blinking
> cursor.
>
> So my question is, what is a good way to go about coding this kind of
> basic feedback? Also, since my testing has only *worked* with this
> code, I'm curious if it will throw a visible error if something goes
> wrong with the download.
>
By the way, you can achieve what you want with urllib2. You may also
want to check out the pycurl library, which is a Python interface to a
very good C library called curl.
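On your second question: no, a failed download is not silent. urlopen
raises urllib2.HTTPError when the server answers with an error status,
and urllib2.URLError for network-level problems. A minimal sketch of
catching both (the fetch() helper and the Python 3 import fallback are
my own additions for illustration, not part of urllib2 itself):

```python
try:
    import urllib2                              # Python 2
    from urllib2 import URLError, HTTPError
except ImportError:                             # Python 3: same code lives in urllib.request
    import urllib.request as urllib2
    from urllib.error import URLError, HTTPError

def fetch(url):
    """Return the open response object, or None after printing the problem."""
    try:
        return urllib2.urlopen(url)
    except HTTPError as e:
        # Server replied, but with an error status (404, 500, ...).
        # HTTPError is a subclass of URLError, so catch it first.
        print("HTTP error: %s" % e.code)
    except URLError as e:
        # Couldn't reach the server at all: bad host, refused
        # connection, unknown URL scheme, etc.
        print("Failed to reach server: %s" % e.reason)
    return None

# Usage (network call, so not run here):
# f = fetch("http://www.python.org/blah/blah.zip")
```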
With urllib2 you don't *have* to read the whole thing in one go -
import urllib2
f = urllib2.urlopen("http://www.python.org/blah/blah.zip")
g = ''
while True:
    a = f.read(1024*10)
    if not a:
        break
    print 'Read another 10k'
    g += a
out = open("blah.zip", "wb")
out.write(g)
out.close()
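If you want a running percentage as well, and want to avoid holding the
whole file in memory, you can write each chunk to disk as it arrives and
report progress after every read. The total size can usually be taken
from the response's Content-Length header (available via f.info()). A
sketch, with the copy_with_progress helper being my own name, not
anything from urllib2:

```python
def copy_with_progress(src, dst, total_size=None, chunk_size=10 * 1024):
    """Copy src to dst in chunks, printing progress after each chunk.

    src and dst are any file-like objects (e.g. a urlopen response and
    an open local file). Returns the total number of bytes copied.
    """
    bytes_read = 0
    while True:
        chunk = src.read(chunk_size)
        if not chunk:
            break
        dst.write(chunk)                # written immediately, not buffered in memory
        bytes_read += len(chunk)
        if total_size:
            print("Read %d of %d bytes (%.0f%%)" % (
                bytes_read, total_size, 100.0 * bytes_read / total_size))
        else:
            print("Read %d bytes" % bytes_read)
    return bytes_read

# Usage (network call, so not run here):
# f = urllib2.urlopen("http://www.python.org/blah/blah.zip")
# out = open("blah.zip", "wb")
# copy_with_progress(f, out, total_size=int(f.info()['Content-Length']))
# out.close()
```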
All the best,
Fuzzyman
http://www.voidspace.org.uk/python/index.shtml
> Thanks for any pointers. I'm busily Googling away.