How to prevent the script from stopping before it should

python at hope.cz
Mon Jan 17 14:12:09 EST 2005


Steve Holden wrote:
> wittempj at hotmail.com wrote:
>
> > import urllib, sys
> > pages = ['http://www.python.org', 'http://xxx']
> > for i in pages:
> >     try:
> >         u = urllib.urlopen(i)
> >         print u.geturl()
> >     except Exception, e:
> >         print >> sys.stderr, '%s: %s' % (e.__class__.__name__, e)
> > This will print an error if a page fails to open; the rest open fine.
> >
> More generally you may wish to use the timeout features of TCP
> sockets.
> These were introduced in Python 2.3, though Tim O'Malley's module
> "timeoutsocket" (which was the inspiration for the 2.3 upgrade) was
> available for earlier versions.
>
> You will need to import the socket module and then call
> socket.setdefaulttimeout() to ensure that communication with
> non-responsive servers results in a socket exception that you can
> trap.
>
> regards
>   Steve
> --
> Steve Holden               http://www.holdenweb.com/
> Python Web Programming  http://pydish.holdenweb.com/
> Holden Web LLC      +1 703 861 4237  +1 800 494 3119

Thank you, wittempj at hotmail.com and Steve, for the ideas. Detecting
that the script has hung is not the big problem. Putting both
suggestions together gives something like the sketch below.
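A minimal sketch, assuming Python 2.3 or later for
socket.setdefaulttimeout(); the 15-second timeout is just an example
value, and the URLs are placeholders:

import socket, sys, urllib

# Every socket opened after this call gives up after 15 seconds
# instead of blocking forever on a non-responsive server.
socket.setdefaulttimeout(15.0)

pages = ['http://www.python.org', 'http://xxx']   # placeholder URLs
for page in pages:
    try:
        u = urllib.urlopen(page)
        print u.geturl()
    except Exception, e:
        print >> sys.stderr, '%s: %s' % (e.__class__.__name__, e)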
However, I need a solution where I do not have to restart the script
by hand; the script should restart itself. I am thinking about two
threads: the main (master) thread supervises a slave thread. The slave
thread downloads the pages, and whenever there is a timeout the master
thread restarts the slave thread.
Is this a good solution? Or is there a better one?
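A rough sketch of what I have in mind, using Python 2's threading and
Queue modules; the 30-second timeout and the URLs are placeholders,
and the restart loop is only illustrative:

import threading, socket, urllib, Queue

socket.setdefaulttimeout(30.0)   # example value: sockets give up after 30s

pages = Queue.Queue()
for url in ['http://www.python.org', 'http://xxx']:   # placeholder URLs
    pages.put(url)

def slave():
    # Download pages until the queue is empty. If this thread dies
    # for any reason, the master notices and starts a fresh one.
    while True:
        try:
            url = pages.get_nowait()
        except Queue.Empty:
            return
        try:
            data = urllib.urlopen(url).read()
            print '%s: %d bytes' % (url, len(data))
        except Exception, e:
            print '%s failed: %s' % (url, e)

# Master: keep (re)starting the slave until every page was attempted.
while not pages.empty():
    worker = threading.Thread(target=slave)
    worker.start()
    # With the default timeout set, the slave should not block forever
    # on a dead server, so this join eventually returns.
    worker.join()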
Thanks for your help
Lad



