File locking and logging

Vinay Sajip vinay_sajip at yahoo.co.uk
Fri Dec 3 04:47:40 EST 2004


Kamus of Kadizhar <yan at NsOeSiPnAeMr.com> wrote in message news:<pan.2004.12.03.00.35.11.626712 at NsOeSiPnAeMr.com>...
> Thanks to Robert Brewer, I got enough insight into logging to make it work....
> 
> Now I have another issue:  file locking.  Sorry if this is a very basic
> question, but I can't find a handy reference anywhere that mentions this.
> 
> When a logger opens a log file for append, is it automatically locked so
> other processes cannot write to it?  And what happens if two or more
> processes attempt to log an event at the same time?
> 
> Here's my situation.  I have two or three workstations that will log an
> event (the playing of a movie).  The log file is NFS mounted and all
> workstations will use the same log file.  How is file locking implemented?
> Or is it?
> 

No file locking is attempted by the current logging handlers with respect
to other processes - an ordinary open() call is used. Within a given
Python process, concurrency support is provided through threading
locks. If you need bullet-proof operation in the scenario where
multiple workstations are logging to the same file, you can achieve it
by having all workstations log via a SocketHandler to a designated
node, where you run a server process which writes the events received
over the network to a local log file. There is a working example of
this in the Python 2.4 docs.
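
As a rough illustration, the client-side setup on each workstation
might look something like the sketch below (the host name 'loghost'
and the logger name 'movies' are placeholders, not anything specific
to your setup or to the docs):

    import logging
    import logging.handlers

    # 'loghost' is a placeholder for the designated logging node.
    sock_handler = logging.handlers.SocketHandler(
        'loghost', logging.handlers.DEFAULT_TCP_LOGGING_PORT)

    logger = logging.getLogger('movies')  # placeholder logger name
    logger.setLevel(logging.INFO)
    logger.addHandler(sock_handler)

    # Each workstation logs normally; the record is pickled and sent
    # over TCP to the server process running on 'loghost'.
    logger.info('Started playing movie: %s', 'example.avi')

On the designated node, the receiving server unpickles each record and
hands it to a local handler (e.g. a FileHandler); the Python 2.4
logging documentation contains a complete, working receiver which you
can adapt.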

Best regards,


Vinay Sajip
