multiple processes, private working directories

Michael Palmer m_palmer45 at yahoo.ca
Wed Sep 24 22:07:38 EDT 2008


On Sep 24, 9:27 pm, Tim Arnold <a_j... at bellsouth.net> wrote:
> I have a bunch of processes to run and each one needs its own working
> directory. I'd also like to know when all of the processes are
> finished.
>
> (1) First thought was threads, until I saw that os.chdir was process-
> global.
> (2) Next thought was fork, but I don't know how to signal when each
> child is finished.
> (3) Current thought is to break the process from a method into an
> external script; call the script in separate threads.  This is the
> only way I
> can see
> to give each process a separate dir (external process fixes that), and
> I can
> find out when each process is finished (thread fixes that).
>
> Am I missing something? Is there a better way? I hate to rewrite this
> method
> as a script since I've got a lot of object metadata that I'll have to
> regenerate with each call of the script.
>
> thanks for any suggestions,
> --Tim Arnold

1. Does the work in the different directories really have to be done
concurrently? You say you'd like to know when all of the threads/
processes are finished, suggesting that they are not server processes
but rather accomplish some limited task.

2. If the answer to 1 is yes: all that os.chdir gives you is an
implicit global variable. Is that convenience really worth a
multi-process architecture? Would it not be easier to just work with
explicit path names instead? You could store the path of each
thread's working directory in an instance of threading.local - for
example:

    import threading

    t = threading.local()

    class Worker(threading.Thread):
        def __init__(self, path):
            threading.Thread.__init__(self)
            self.path = path

        def run(self):
            # Assign here, not in __init__: attributes of a
            # threading.local are visible only to the thread that set
            # them, and __init__ still runs in the *creating* thread.
            t.path = self.path
            ...

The thread-specific value of t.path is then available to all classes
and functions called within that thread.
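Joining the threads also answers the original question of knowing when
all the work is finished. A minimal sketch along these lines, assuming
each worker only needs to write files under its own directory (the
names worker, run_all, and result.txt are illustrative, not from the
original post):

```python
import os
import threading

def worker(path, results):
    # Build explicit paths under `path` instead of calling os.chdir,
    # which would change the working directory for every thread.
    out = os.path.join(path, "result.txt")
    with open(out, "w") as f:
        f.write("done in %s\n" % path)
    results.append(out)

def run_all(paths):
    results = []
    threads = [threading.Thread(target=worker, args=(p, results))
               for p in paths]
    for t in threads:
        t.start()
    # join() returns once a thread has finished, so after this loop
    # we know all of the work is done.
    for t in threads:
        t.join()
    return results
```

Appending to the shared list from several threads needs no lock here,
since list.append is atomic under CPython's global interpreter lock.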


