download x bytes at a time over network
Steven D'Aprano
steven at REMOVE.THIS.cybersource.com.au
Tue Mar 17 03:37:00 EDT 2009
Please excuse my replying to a reply instead of the original, but the
original doesn't show up on my news feed.
On 2009/3/16 Saurabh <phonethics at gmail.com>:
> I want to download content from the net - in chunks of x bytes or
> characters at a time - so that it doesn't pull the entire content in one
> shot.
>
> import urllib2
> url = "http://python.org/"
> handler = urllib2.urlopen(url)
>
> data = handler.read(100)
> print """Content :\n%s \n%s \n%s""" % ('=' * 100, data, '=' * 100)
>
> # Disconnect the internet
>
> data = handler.read(100) # I want it to throw an exception because it
> # cant fetch from the net
> print """Content :\n%s \n%s \n%s""" % ('=' * 100, data, '=' * 100)
> # But still works !
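(As an aside, the usual idiom is to keep calling read() in a loop until it
returns an empty string, which signals end-of-stream. A minimal sketch of that
pattern, using an io.BytesIO object as a stand-in for the network handler --
any file-like object behaves the same way:)

```python
import io

# Stand-in for the urlopen() handler: 250 bytes of dummy data.
stream = io.BytesIO(b"x" * 250)

chunks = []
while True:
    data = stream.read(100)   # read at most 100 bytes per call
    if not data:              # read() returns b"" at end of stream
        break
    chunks.append(data)

print([len(c) for c in chunks])  # → [100, 100, 50]
```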
Perhaps you have a local caching web proxy server, which has downloaded
the entire document, and even though you have disconnected the Internet,
you haven't disconnected the proxy which is feeding you the rest of the
file from the server's cache.
>> Apparently, handler = urllib2.urlopen(url) takes the entire data in
>> buffer and handler.read(100) is just reading from the buffer ?
I could be wrong, but I don't think so -- I doubt urlopen() is reading the
whole document into a buffer up front.
I think your chunked-read approach is the right one. But note that by
default urllib2 installs a ProxyHandler which will try to auto-detect any
proxies (from environment variables, registry entries or similar), so the
data may well be coming from a proxy's cache rather than the origin server.
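(One way to rule the proxy out is to build an opener with an empty
ProxyHandler, which disables proxy auto-detection entirely. A hedged sketch --
written against the modern urllib.request module, which is what urllib2
became in Python 3:)

```python
import urllib.request  # this module was called urllib2 in Python 2

# Passing an empty dict means "use no proxies", overriding auto-detection.
no_proxy = urllib.request.ProxyHandler({})
opener = urllib.request.build_opener(no_proxy)

# opener.open(url) would now fetch directly from the origin server,
# bypassing any caching proxy that might otherwise serve stale content.
print(type(opener).__name__)  # → OpenerDirector
```

If the second read(100) then raises an exception once the network is down,
the earlier behaviour was indeed the proxy's cache at work.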
--
Steven