urllib timeout issues

Gabriel Genellina gagsl-py2 at yahoo.com.ar
Tue Mar 27 17:50:21 EDT 2007


On Tue, 27 Mar 2007 17:41:44 -0300, supercooper <supercooper at gmail.com> wrote:

> On Mar 27, 3:13 pm, "Gabriel Genellina" <gagsl-... at yahoo.com.ar>
> wrote:
>> On Tue, 27 Mar 2007 16:21:55 -0300, supercooper <supercoo... at gmail.com>
>> wrote:
>>
>> > I am downloading images using the script below. Sometimes it will go
>> > for 10 mins, sometimes 2 hours before timing out with the following
>> > error:
>>
>> >     urllib.urlretrieve(fullurl, localfile)
>> > IOError: [Errno socket error] (10060, 'Operation timed out')
>>
>> > I have searched this forum extensively and tried to avoid timing out,
>> > but to no avail. Anyone have any ideas as to why I keep getting a
>> > timeout? I thought setting the socket timeout did it, but it didn't.
>>
>> You should do the opposite: time out *early* (not waiting 2 hours) and
>> handle the error (maybe using a queue to hold pending requests).
>>
>> --
>> Gabriel Genellina
>
> Gabriel, thanks for the input. So are you saying there is no way to
> realistically *prevent* the timeout from occurring in the first

Exactly. The error is out of your control: maybe the server is down,
unresponsive or overloaded, maybe a proxy is having trouble, maybe there is
some other network problem, etc.

> place?  And by timing out early, do you mean to set the timeout for x
> seconds and if and when the timeout occurs, handle the error and start
> the process again somehow on the pending requests?  Thanks.

Exactly!
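
Something along these lines might do it (a rough, untested sketch; the
MAX_RETRIES value, the fetch_all name and the example URL are mine, not
from your script): set a short global socket timeout, catch the IOError
that urlretrieve raises, and push the failed request back onto a queue so
it gets retried a limited number of times:

import socket
import urllib
from collections import deque

# Give up on a stalled connection after 30 seconds instead of hanging.
socket.setdefaulttimeout(30)

MAX_RETRIES = 3

def fetch_all(jobs):
    # jobs: iterable of (url, localfile) pairs; returns those that never succeeded
    pending = deque((url, localfile, 0) for url, localfile in jobs)  # (url, file, attempts)
    failed = []
    while pending:
        url, localfile, attempts = pending.popleft()
        try:
            urllib.urlretrieve(url, localfile)
        except IOError:             # the 10060 socket error arrives as an IOError
            if attempts + 1 < MAX_RETRIES:
                pending.append((url, localfile, attempts + 1))  # retry later
            else:
                failed.append((url, localfile))
    return failed

# e.g.: leftovers = fetch_all([('http://example.com/img1.jpg', 'img1.jpg')])
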
Another option: Python is cool, but there is no need to reinvent the  
wheel. Use wget instead :)
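
If you go the wget route, this is one possible way to drive it from Python
(untested; the flags, URL and file name are just placeholders, check your
wget documentation):

import subprocess

# wget handles retries, timeouts and resuming partial downloads on its own.
subprocess.call(['wget', '--tries=5', '--timeout=30', '--continue',
                 '-O', 'img1.jpg', 'http://example.com/img1.jpg'])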

-- 
Gabriel Genellina



