Handling signals and performance issues.

Ian Parker parker at gol.com
Sat Dec 9 20:43:26 EST 2000


In article <meRU5.211$Mx.12164 at sjc-read.news.verio.net>, Ray Van Dolson
<rayvd at nospam.firetail.org> writes
>I've written a small Python script that watches a logfile (kinda like tail -
>f) and splits it into certain other files in real time.  However, I'm 
>running into two issues that I haven't yet resolved.  Firstly, the 
>program uses up a lot of CPU time.  
>This is almost certainly because of the way I'm watching the logfiles for 
>changes.  Basically it's like this:
>
>while 1:
>       ln=inputLog.readline()
>       if ln:
>               processLine(ln)
>       else:
>               pass
>
>Much of the time of course there have been no changes to the logfile and so 
>it keeps looping, passing and waiting for a change.  While the system 
>actually remains fast, watching 'top' shows that python is using up all the 
>cpu when it can.  Is there a better way to watch a file for changes that 
>doesn't use such a CPU-consuming while loop?
>
>My second problem: while the program handles a CTRL-C well, cleaning up 
>and the like, if it is sent a kill -9 or kill -HUP by root or by the 
>user who spawned the process, it dies a horrible death: it doesn't clean 
>up and flush its buffers, or run the 'cleanup' functions I've set up in 
>a try/except block.  How can I watch for a kill -9 or kill -HUP and have 
>my script respond accordingly before shutting down?
>
>Thanks much for any help provided.
>Ray Van Dolson

How about putting a "time.sleep(n)" in place of the "pass" statement?
That way you'd pause for a while when there was no new data in the log
file, but continue to loop quickly if multiple lines had been appended
during the previous n seconds.
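
Something like this should do it.  A minimal sketch, where the log path
and the processLine() body are placeholders for whatever your script
really does:

import sys
import time

def processLine(ln):
    # placeholder: replace with your real splitting logic
    sys.stdout.write(ln)

inputLog = open("/var/log/mylog")   # placeholder path
inputLog.seek(0, 2)                 # start at the end, like tail -f

while 1:
    where = inputLog.tell()
    ln = inputLog.readline()
    if ln:
        processLine(ln)       # new data: process it and loop right back
    else:
        inputLog.seek(where)  # clear the end-of-file state
        time.sleep(1)         # nothing new: give up the CPU for a second

The seek() back to the same offset clears the end-of-file condition, so
the next readline() picks up anything appended in the meantime; on some
stdio implementations readline() would otherwise keep returning an empty
string even after the file had grown.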

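On the second question: kill -9 sends SIGKILL, which no process can
catch, block, or ignore, so there is no way to run cleanup code for it.
kill -HUP (and a plain kill, which sends SIGTERM) can be trapped with
the standard signal module, though.  A minimal sketch, with a
placeholder cleanup body:

import signal
import sys

def cleanup(signum, frame):
    # placeholder: flush buffers, close output files, and so on
    sys.stdout.flush()
    sys.exit(0)

signal.signal(signal.SIGHUP, cleanup)    # kill -HUP
signal.signal(signal.SIGTERM, cleanup)   # plain kill

With those handlers installed, kill and kill -HUP behave like your
CTRL-C handling: the handler runs, and sys.exit() raises SystemExit so
any try/finally cleanup fires on the way out.  kill -9 remains
uncatchable by design.
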
-- 
Ian Parker


