locking files on Linux
Oscar Benjamin
oscar.j.benjamin at gmail.com
Thu Oct 18 11:04:43 EDT 2012
On 18 October 2012 15:49, andrea crotti <andrea.crotti.0 at gmail.com> wrote:
> 2012/10/18 Grant Edwards <invalid at invalid.invalid>:
>>
>> If what you're guarding against is multiple instances of your
>> application modifying the file, then either of the advisory file
>> locking schemes or the separate lock file should work fine.
>
> Ok, so I tried a small example to see if I can make it fail, but the
> code below just works fine.
>
> Maybe it's too fast and releases the file in time, but I would
> expect it to take some time and fail instead..
Why not come up with a test that actually shows whether it works? Here
are two suggestions:
1) Use time.sleep() so that you know how long the lock is held for.
2) Write different data into the file from each process and see what
you end up with.
>
> import fcntl
>
> from multiprocessing import Process
>
> FILENAME = 'file.txt'
>
>
> def long_text():
>     return 'some text' * (100 * 100)
>
>
> class Locked:
>     def __init__(self, fileobj):
>         self.fileobj = fileobj
>
>     def __enter__(self):
>         # any problems here?
>         fcntl.lockf(self.fileobj, fcntl.LOCK_EX)
>         return self.fileobj
>
>     def __exit__(self, type, value, traceback):
>         fcntl.lockf(self.fileobj, fcntl.LOCK_UN)
>
>
> def write_to_file():
>     with open(FILENAME, 'w') as to_lock:
I don't think it will work if you truncate the file like this. This
will empty the file *before* checking for the lock. Try opening the
file for reading and writing (without truncating).
>         with Locked(to_lock):
>             to_lock.write(long_text())
>
>
> if __name__ == '__main__':
>     Process(target=write_to_file).start()
>     Process(target=write_to_file).start()
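One way to follow that advice (a sketch of my own, not code from the thread)
is to open the file with os.open and O_CREAT | O_RDWR, which creates it if
necessary but never truncates it, and only truncate once the lock is
actually held:

```python
import fcntl
import os

FILENAME = 'file.txt'


def write_to_file(text):
    # O_CREAT | O_RDWR creates the file if needed but does NOT truncate it,
    # so the existing contents survive until the lock is actually held.
    fd = os.open(FILENAME, os.O_CREAT | os.O_RDWR)
    with os.fdopen(fd, 'r+') as f:
        fcntl.lockf(f, fcntl.LOCK_EX)
        try:
            f.truncate(0)  # safe now: we hold the exclusive lock
            f.write(text)
            f.flush()
        finally:
            fcntl.lockf(f, fcntl.LOCK_UN)


write_to_file('hello\n')
assert open(FILENAME).read() == 'hello\n'
```

With open(FILENAME, 'w') the truncation happens at open() time, before
lockf() runs, so another process can see (or clobber) an empty file even
though the locking itself is correct.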
Oscar
More information about the Python-list mailing list