locking files on Linux

andrea crotti andrea.crotti.0 at gmail.com
Thu Oct 18 10:49:02 EDT 2012


2012/10/18 Grant Edwards <invalid at invalid.invalid>:
>
> If what you're guarding against is multiple instances of your
> application modifying the file, then either of the advisory file
> locking schemes or the separate lock file should work fine.
>
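
For the separate lock file idea I guess the rough shape would be
something like this (just a sketch, the names are made up), using
os.open with O_CREAT | O_EXCL so the creation fails atomically if the
lock file already exists:

import errno
import os

LOCKFILE = 'file.txt.lock'  # hypothetical name


def acquire_lockfile():
    # O_CREAT | O_EXCL fails with EEXIST if another process already
    # created the lock file
    try:
        fd = os.open(LOCKFILE, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except OSError as exc:
        if exc.errno == errno.EEXIST:
            return None
        raise
    return fd


def release_lockfile(fd):
    os.close(fd)
    os.remove(LOCKFILE)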

Anyway, for the fcntl approach I tried a small example to see if I
could make it fail, but the code below just works fine.

Maybe it's simply fast enough to release the file in time, but I would
have expected it to take a while and fail instead.

import fcntl

from multiprocessing import Process

FILENAME = 'file.txt'


def long_text():
    return ('some text' * (100 * 100))


class Locked:
    def __init__(self, fileobj):
        self.fileobj = fileobj

    def __enter__(self):
        # any problems here?
        fcntl.lockf(self.fileobj, fcntl.LOCK_EX)
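        # without LOCK_NB this call blocks until the lock is available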
        return self.fileobj

    def __exit__(self, exc_type, exc_value, traceback):
        fcntl.lockf(self.fileobj, fcntl.LOCK_UN)


def write_to_file():
    with open(FILENAME, 'w') as to_lock:
        with Locked(to_lock):
            to_lock.write(long_text())


if __name__ == '__main__':
    Process(target=write_to_file).start()
    Process(target=write_to_file).start()
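
One thing I realise while posting this: lockf without LOCK_NB simply
blocks, so the second process presumably just waits for the first one
to unlock the file rather than failing. A non-blocking variant (only a
sketch, untested) should make the failure visible:

import errno
import fcntl


def try_lock(fileobj):
    # LOCK_NB makes lockf raise IOError (EACCES or EAGAIN) instead of
    # blocking when the lock is already held
    try:
        fcntl.lockf(fileobj, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except IOError as exc:
        if exc.errno in (errno.EACCES, errno.EAGAIN):
            return False
        raise
    return True

With that the second process would at least get False back instead of
silently queueing behind the first writer.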


