flock experience

Todd Warner tawNOSPAM at redhat.com
Tue Jan 14 07:37:25 EST 2003


File locking *can* be tricky. A word of caution: if the log file resides
on an NFS file system, things can be... unreliable. We've had issues
with that in the past. NFS, from what I understand, cannot guarantee
when something is actually written to disk, and flock()-style locks
often don't propagate across NFS at all (note: not an NFS expert).

If you want to do file locking, standard producer-consumer code should
do the trick (block while someone is writing, and block anyone who
*wants* to write while the file is "busy"), using flock() (which is
easier) or fcntl()-style locking (a bit more esoteric, but it gives you
more control); both live in Python's fcntl module.
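
Off the top of my head, a flock()-based version would look something
like this (untested sketch; append_line and the path are just
placeholders):

    import fcntl

    def append_line(path, line):
        f = open(path, 'a')
        try:
            # LOCK_EX blocks until no other process holds the lock.
            fcntl.flock(f, fcntl.LOCK_EX)
            f.write(line)
            f.flush()
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)
            f.close()

If you need byte-range locks, fcntl.lockf() is the more esoteric
alternative; either way, test against your actual file system, given
the NFS caveat above.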

For a log file though, my recommendation is... skip the locking: open
the file in append mode, line-buffered, e.g. fo = file('logfile.log',
'a+', 1). In append mode the OS positions each write at the current end
of the file, so short writes from separate processes don't clobber each
other. This works well enough for log files, even with many processes.
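
That is (sketch; the pid tag and message are made up for illustration):

    import os

    fo = open('logfile.log', 'a', 1)   # 1 => line-buffered
    fo.write('[%d] handled request\n' % os.getpid())

The one rule of thumb: emit each record as a single line/write, so
concurrent writers interleave on record boundaries rather than
mid-line.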

Of course, there is always the option of a logfile per process, but
that's... messy (depends on the application, of course).
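
E.g. keying the filename off the pid (naming scheme made up here):

    import os

    fo = open('logfile.%d.log' % os.getpid(), 'a', 1)

...and then something has to collate and rotate N files later, which is
where the mess comes in.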

I hope this was somewhat helpful.

On Tue, 14 Jan 2003, Robin Becker wrote:
> Has anybody had good experience using file locking in Python? I have
> the typical problem of opening a log file for a short time (CGI
> environment) in a multiprocess world. So far I think we have been
> lucky, but our 8-machine front end is contracting to one machine, just
> for us, so the collision probabilities are rising. I see there's a
> recipe in the pcb, but what happens to locked-out processes, etc.?
> 

-- 
____________
 /odd Warner                                    <taw@{redhat,pobox}.com>
          Bit Twiddler - Operation Cheetah Flip - Red Hat Inc.
---------------------gpg info in the message headers--------------------
"Sometimes you need to build a fire to keep warm, but you can't,
 and you freeze to death."
                  -Jack London, "To Build a Fire", book-a-minute version