urllib2 timeout??
John J. Lee
jjl at pobox.com
Thu Apr 29 17:06:14 EDT 2004
"Doug Gray" <dgray at coldstorage.com> writes:
[...]
> request = urllib2.Request(
>     "http://fantasygames.sportingnews.com/crs/home_check_reg.html",
>     data='username=penngray1&password=testpwd')
>
> response = ClientCookie.urlopen(request)
>
> #return response.info()
>
> request2 = urllib2.Request(
>     "http://fantasygames.sportingnews.com/baseball/fullseason/ultimate/game/frozen_roster.html?user_id=6208")
>
> resp2 = ClientCookie.urlopen(request2)
>
> return resp2.read()
Looks fine. No need to have an explicit Request object every time,
though: just pass the URL to urlopen.
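A minimal sketch of what I mean, assuming ClientCookie.urlopen takes the same (url, data) arguments as urllib2.urlopen (it's meant as a drop-in replacement):

```python
import ClientCookie

# Log in: a POST with the form data -- no explicit Request object,
# since urlopen() accepts the URL string (and optional data) directly.
response = ClientCookie.urlopen(
    "http://fantasygames.sportingnews.com/crs/home_check_reg.html",
    "username=penngray1&password=testpwd")
response.close()

# Cookies set by the login are sent automatically on the second fetch.
resp2 = ClientCookie.urlopen(
    "http://fantasygames.sportingnews.com/baseball/fullseason/"
    "ultimate/game/frozen_roster.html?user_id=6208")
print resp2.read()
```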
> ClientCookie is 3rd-party software that handles the Set-Cookie stuff on
> the client for me. This allows me to log in to a site and then access
> other webpages from that site that need cookies set. The problem is that
> 50% of the time it works and 50% of the time it fails. I assume this is
> a timeout problem.
As they used to say in Parliament: I refer the Honourable Gentleman to
the answer I gave some moments ago (in a thread with subject "urllib2
request blocks"). I'm not sure it's a timeout problem, though.
> Is there a timeout problem here, and why is the connection so slow in
> the first place? Is it my unix server?
Looks like some relatively low-level problem (some socket error that
httplib's not catching). Not sure precisely what, though.
[...]
> Traceback (most recent call last):
[...]
>   File "/usr/lib/python2.3/site-packages/ClientCookie/_urllib2_support.py", line 612, in do_open
>     raise URLError(err)
>
> URLError:
I'm not sure why you're not getting a more informative error message
here. URLError here is wrapping a socket.error. Try catching it and
printing the socket error directly:
try:
    response = urllib2.OpenerDirector.open(self, req, data)
except urllib2.URLError, e:
    print "e.reason", e.reason
John