urllib2 question

Laszlo Nagy gandalf at designaproduct.biz
Tue Jan 23 09:38:47 EST 2007


  Hi All,

I would like to save the contents of a URL to a file. I need to manage
cookies and use my own HTTP headers, so I am using urllib2 with a custom
OpenerDirector. Here is a code fragment:

    while True:
        data = openerdirector.read(1024)
        if not data:
            break
        fd.write(data)

The main reason for doing this is that the URL can point to a large 
amount of data, and I do not want to hold it all in memory. The other 
way would be:

fd.write(openerdirector.read())
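
To make it concrete, the whole download would look roughly like this. 
The header value, URL and file name are just placeholders, and I am 
assuming the reads go against the response object returned by open():

    import cookielib
    import urllib2

    # Cookie handling plus my own headers, via a custom opener.
    cookiejar = cookielib.CookieJar()
    openerdirector = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookiejar))
    openerdirector.addheaders = [('User-Agent', 'MyDownloader/1.0')]  # placeholder header

    url = 'http://www.example.com/bigfile.dat'  # placeholder URL
    response = openerdirector.open(url)         # file-like response object
    fd = open('bigfile.dat', 'wb')              # placeholder file name
    try:
        while True:
            data = response.read(1024)
            if not data:
                break
            fd.write(data)
    finally:
        fd.close()
        response.close()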

My question is: am I doing this the right way? I used the 
openerdirector as a file here, but I am not sure it works like one. 
A file object blocks when read() is called and returns at most 1024 
bytes of data when it becomes available; it returns an empty string 
only after EOF is reached. But is the same true for the 
openerdirector instance? I could not find anything useful about this 
in the Python docs. The documentation says:

class OpenerDirector()

    The OpenerDirector class opens URLs via BaseHandlers chained
    together. It manages the chaining of handlers, and recovery from
    errors. 

But I'm not sure if it can be used as a file or not.
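
For comparison, this is the behaviour I am relying on with ordinary 
local files (file names are just placeholders):

    # Chunked copy between two ordinary local files: read(1024) returns
    # at most 1024 bytes and an empty string once EOF is reached.
    src = open('local_copy.dat', 'rb')      # placeholder file name
    dst = open('copy_of_copy.dat', 'wb')    # placeholder file name
    while True:
        chunk = src.read(1024)
        if not chunk:
            break
        dst.write(chunk)
    src.close()
    dst.close()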

Thanks,

   Laszlo




