file locking...

Nigel Rantor wiggly at wiggly.org
Sun Mar 1 04:28:11 EST 2009


bruce wrote:
> Hi.
> 
> Got a bit of a question/issue that I'm trying to resolve. I'm asking
> this of a few groups so bear with me.
> 
> I'm considering a situation where I have multiple processes running,
> and each process is going to access a number of files in a dir. Each
> process accesses a unique group of files, and then writes the group
> of files to another dir. I can easily handle this by using a form of
> locking, where I have the processes lock/read a file and only access
> the group of files in the dir based on the open/free status of the
> lockfile.
> 
> However, the issue with the approach is that it's somewhat
> synchronous. I'm looking for something that might be more
> asynchronous/parallel, in that I'd like to have multiple processes
> each access a unique group of files from the given dir as fast as
> possible.

I don't see how this is synchronous if you have a lock per file. Perhaps 
you've left something out of your description of the problem.
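
For what it's worth, on a POSIX system a per-file lock can just be an
flock() taken on the data file itself, non-blocking, so a worker simply
skips anything another process has already claimed. A rough sketch (the
try_claim() name is only for illustration):

import fcntl

def try_claim(path):
    """Try to take an exclusive, non-blocking lock on a file.

    Returns the open file object if the lock was obtained (keep it
    open while you work; the lock is released when it is closed), or
    None if another process already holds it.
    """
    f = open(path, 'rb')
    try:
        fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except IOError:
        f.close()
        return None
    return f

Each worker can then just walk the directory and process whatever it
manages to claim, so nothing serialises on a single lockfile.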

> So.. Any thoughts/pointers/comments would be greatly appreciated. Any
>  pointers to academic research, etc.. would be useful.

I'm not sure you need academic papers here.

One trivial solution to this problem is to have a single process 
determine the complete set of files that require processing, then fork 
off children, each with a different set of files to process.

The parent then just waits for them to finish and does any 
post-processing required.
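
Something along these lines, say - untested, and process_group() plus
the input glob are just placeholders for whatever your real work and
layout look like (os.fork() is POSIX-only):

import glob
import os

def process_group(group):
    # Placeholder for the real work: read each file in the group and
    # write the result to the output directory.
    for path in group:
        pass

def main():
    files = sorted(glob.glob('indir/*'))
    nworkers = 4
    # Split the complete set of files into one group per child.
    groups = [files[i::nworkers] for i in range(nworkers)]

    pids = []
    for group in groups:
        pid = os.fork()
        if pid == 0:
            # Child: handle this group only, then exit immediately.
            process_group(group)
            os._exit(0)
        pids.append(pid)

    # Parent: wait for every child before doing any post-processing.
    for pid in pids:
        os.waitpid(pid, 0)

if __name__ == '__main__':
    main()

The multiprocessing module would get you much the same effect a bit
more portably, if forking by hand feels too low-level.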

A more concrete problem statement may of course change the solution...

   n



