[Tutor] open a webpage which may be unavailable
pileux systeme
nogentstanford at yahoo.fr
Thu Oct 18 04:34:13 CEST 2007
Hello,
I am trying to retrieve data from several webpages. My problem is the following: after a random number of requests, the page I'm trying to open is unavailable (and I get an IOError). Note that the page may become available again if I retry after some time. Since I have thousands of pages to explore, I'd like the program to continue in spite of this error.
I've thought of catching the exception, along these lines:
try:
    usock = urllib.urlopen('http:// <etc>')
except IOError:
    < something >
else:
    usock = urllib.urlopen('http:// <etc>')
However, this doesn't work because the page can become unavailable between the time the 'try' runs and the time the 'else' runs. [For instance, assume that my internet connection drops for a couple of seconds at random intervals.]
Would anyone know how to solve this problem?
[Another way to look at it is as follows: I'd like to be able to check whether the page is available AND copy it if it is AT THE SAME TIME]
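One way to get the "check AND copy at the same time" behaviour is to let the single urlopen call itself be the check: wrap it in a retry loop that catches IOError and waits before trying again. A minimal sketch follows; the name fetch_with_retries and the retry/delay parameters are my own, not from the thread, and the fetch function is passed in so any opener can be used.

```python
import time

def fetch_with_retries(fetch, url, retries=3, delay=2.0):
    """Call fetch(url) up to `retries` times, sleeping `delay` seconds
    between attempts; re-raise the last IOError if every attempt fails."""
    for attempt in range(retries):
        try:
            return fetch(url)  # opening and reading happen in one step
        except IOError:
            if attempt == retries - 1:
                raise          # give up: let the caller log/skip this page
            time.sleep(delay)  # the page may come back after a pause

# With urllib this might be used as, e.g.:
#   data = fetch_with_retries(lambda u: urllib.urlopen(u).read(), url)
```

Because the open happens inside the try, there is no window between "checking" and "copying": a failure at any point simply triggers another attempt, and after the last attempt the IOError propagates so the surrounding loop over the thousands of pages can record the failure and move on.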
Thanks a lot for your answer,
N.