File locking

CyberGuest rickymz at yahoo.com
Tue Apr 24 11:52:38 EDT 2001


Have you tried fcntl?

import fcntl

logfile = open(LOGFILE, 'a')  # 'a', so the shared log isn't truncated

# fileno() returns the file descriptor of the opened file.
# LOCK_EX requests an exclusive lock: if the file is already locked,
# the other process blocks until it is unlocked. You don't have to
# lock LOGFILE itself -- you can lock a separate file instead; as
# long as every writer takes that lock first, no two processes will
# write to LOGFILE at the same time.
fcntl.flock(logfile.fileno(), fcntl.LOCK_EX)

# code for writing to the file... blah blah...

# unlock the file
fcntl.flock(logfile.fileno(), fcntl.LOCK_UN)
logfile.close()
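Here is a minimal self-contained sketch of the separate-lock-file variant described above. The paths and the helper name `append_log_line` are hypothetical; the only requirement is that every writer agrees on the same lock file.

```python
import fcntl
import os
import tempfile

# Hypothetical paths for illustration.
LOGFILE = os.path.join(tempfile.gettempdir(), "task.log")
LOCKFILE = LOGFILE + ".lock"

def append_log_line(line):
    """Append one line to LOGFILE, serialized through an exclusive
    flock on a separate lock file."""
    lock = open(LOCKFILE, "w")
    try:
        # Blocks until any other holder releases the lock.
        fcntl.flock(lock.fileno(), fcntl.LOCK_EX)
        logfile = open(LOGFILE, "a")
        try:
            logfile.write(line + "\n")
            logfile.flush()
            os.fsync(logfile.fileno())  # push the write to disk
        finally:
            logfile.close()
    finally:
        fcntl.flock(lock.fileno(), fcntl.LOCK_UN)
        lock.close()

append_log_line("task-42 done")
```

The try/finally blocks make sure the lock is released even if the write fails, so a crashed writer can't leave everyone else blocked forever (the kernel also drops flock locks when the descriptor is closed).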

Martin Kaufmann wrote:

> Could somebody help me? I have a script that writes its output in a
> logfile. This logfile is needed to check whether a task has already been
> done. Several workstations are now using the same script (via nfs). So I
> implemented the following file locking mechanism:
> 
>         logfile = posixfile.open(LOGFILE, 'a')
>         logfile.lock('|w')
> 	[code snipped...]
>         log_string = '%s  %s%s  %d  %s  %s\n' % (mytime, host, url,
> 					error_code, message, hostname)
>         logfile.write(log_string)
>         logfile.lock('u')
>         logfile.close()
> 
> But I still get problems with two processes trying to write at the same
> time. What is wrong with my implementation?
> 
> Thanks for your help.
> 
> Regards,
> 
> Martin
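One caveat worth adding, since the workstations share the log via NFS: flock() locks are often not propagated across NFS, whereas POSIX record locks (fcntl.lockf, a wrapper around fcntl(2) F_SETLKW) generally are, given a working lock daemon on the server. A sketch of the same pattern using lockf, with LOGFILE again a hypothetical path:

```python
import fcntl
import os
import tempfile

# Hypothetical path for illustration.
LOGFILE = os.path.join(tempfile.gettempdir(), "shared.log")

logfile = open(LOGFILE, "a")
# lockf() places a POSIX record lock; the default length of 0 means
# "lock to end of file". These locks are the kind NFS lock daemons
# understand, unlike flock() on many systems.
fcntl.lockf(logfile.fileno(), fcntl.LOCK_EX)
try:
    logfile.write("hello from %d\n" % os.getpid())
    logfile.flush()
finally:
    fcntl.lockf(logfile.fileno(), fcntl.LOCK_UN)
    logfile.close()
```

Note that POSIX locks are per-process, not per-descriptor: closing any descriptor on the file releases the process's locks on it, so keep the locked file open until you are done writing.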
