avoid script running twice

Nick Craig-Wood nick at craig-wood.com
Mon Jun 18 18:30:04 EDT 2007


Jeff McNeil <jeff at jmcneil.net> wrote:
>  I've got a rather large log processing job here that has the same
>  requirement.  I process Apache logs from an 8-way cluster, sorting
>  them and calculating statistics in 15-minute batch jobs.  Only one
>  copy should run at once.
> 
>  I open a file and lock it via something like this:
> 
>  import errno
>  import fcntl
>  import sys
> 
>  fhandle = open("ourlockfile.txt", "w")
> 
>  try:
>      fcntl.lockf(fhandle.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)
>  except IOError, e:
>      # some systems report a held lock as EACCES rather than EAGAIN
>      if e.errno in (errno.EAGAIN, errno.EACCES):
>          print >>sys.stderr, "exiting, another copy currently running"
>          sys.exit(1)
>      else:
>          raise
> 
>  I've got it wrapped in a 'FileBasedLock' class that quacks like Lock
>  objects in the threading module.

That is the traditional unix locking method.  Note that it may not
work if the lock file lives on an NFS mount - fcntl locks are not
reliably honoured over NFS.
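A wrapper that quacks like threading.Lock might look roughly like
this (my sketch, not Jeff's actual class):

import errno
import fcntl

class FileBasedLock(object):
    """Quacks like threading.Lock, backed by an fcntl file lock."""

    def __init__(self, path):
        self.fhandle = open(path, "w")

    def acquire(self, blocking=True):
        flags = fcntl.LOCK_EX
        if not blocking:
            flags |= fcntl.LOCK_NB
        try:
            fcntl.lockf(self.fhandle.fileno(), flags)
        except IOError, e:
            # some systems report a held lock as EACCES, others EAGAIN
            if e.errno in (errno.EAGAIN, errno.EACCES):
                return False
            raise
        return True

    def release(self):
        fcntl.lockf(self.fhandle.fileno(), fcntl.LOCK_UN)

A non-blocking acquire() returns False instead of raising, which is
what threading.Lock does too.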

Traditionally you also write os.getpid() to the lock file.  You can
then send a signal to the running copy, detect stale lock files, etc.
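Something along these lines (a rough sketch - the file name is made
up, and the "a+" mode matters so a losing copy doesn't truncate the
pid the winner wrote):

import errno
import fcntl
import os
import sys

fhandle = open("ourlockfile.txt", "a+")
try:
    fcntl.lockf(fhandle.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)
except IOError, e:
    if e.errno in (errno.EAGAIN, errno.EACCES):
        # lock is held - report the pid the running copy recorded
        fhandle.seek(0)
        pid = fhandle.read().strip()
        print >>sys.stderr, "exiting, copy already running (pid %s)" % pid
        sys.exit(1)
    raise
# we won the lock; record our pid so others can find and signal us
fhandle.seek(0)
fhandle.truncate()
fhandle.write("%d\n" % os.getpid())
fhandle.flush()

From there another process can read the pid back and os.kill() it,
or use signal 0 just to check whether it is still alive.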

-- 
Nick Craig-Wood <nick at craig-wood.com> -- http://www.craig-wood.com/nick


