Processes not exiting

MRAB python at mrabarnett.plus.com
Sun Aug 2 14:25:36 EDT 2009


ma3mju wrote:
> On 2 Aug, 15:48, ma3mju <matt.u... at googlemail.com> wrote:
>> On 31 July, 11:34, MRAB <pyt... at mrabarnett.plus.com> wrote:
>>
>>> ma3mju wrote:
>>>> Hi all,
>>>> I'm having trouble with multiprocessing. I'm using it to speed up
>>>> some simulations, and I find that for large queues a process does
>>>> not exit when it reaches the poison pill, whereas for smaller
>>>> queues it works without any problems. Has anyone else had this
>>>> trouble? Can anyone tell me a way around it? The code is in two
>>>> files below.
>>> [snip]
>>>> # Get the number of cores and set the number of concurrent
>>>> # processes.
>>>> num_hard_workers = 2
>>>> num_workers = int(multiprocessing.cpu_count() * 1.5)
>>>> easy_workers = []
>>>> hard_workers = []
>>>> # Add a poison pill for each worker and create the workers.
>>>> for i in range(num_workers - num_hard_workers):
>>>>     easy_work_queue.put(None)
>>>>     easy_workers.append(multiprocessing.Process(
>>>>         target=GP.RandomWalkGeneralizationErrorParallel,
>>>>         args=(easy_work_queue, result_queue)))
>>>> for i in range(num_hard_workers):
>>>>     hard_work_queue.put(None)
>>>>     hard_workers.append(multiprocessing.Process(
>>>>         target=GP.RandomWalkGeneralizationErrorParallel,
>>>>         args=(hard_work_queue, result_queue)))
>>> You have 2 hard workers and int(cpu_count() * 1.5) - 2 easy workers.
>>> What if the number of CPUs were 1? That would give 2 hard and 0 easy!
>>> Also, I recommend that you put only 1 'poison pill' in each queue and
>>> have the workers put it back when they see it.
>> I'll give that a go in a sec and see if it helps. The processes do
>> quit for smaller queues, though, so in theory it should be all right.
>> I'm not too fussed about the CPU count; it's only there because I
>> switch between a uni PC and a home one with different numbers of
>> cores, both greater than one.
> 
> Just tried changing the poison pill part, sadly to no avail.

I wonder whether one of the workers is raising an exception, perhaps due
to lack of memory, when there is a large number of jobs to process.
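
You could test that by wrapping the body of the worker in a try/except
and sending any traceback back on the result queue. Here's a rough,
untested sketch; I don't have your code, so 'do_job' below just stands
in for whatever GP.RandomWalkGeneralizationErrorParallel does with each
job:

import traceback

def do_job(job):
    # Stand-in for the real computation.
    return job

def worker(work_queue, result_queue):
    try:
        while True:
            job = work_queue.get()
            if job is None:
                break
            result_queue.put(('result', do_job(job)))
    except Exception:
        # A worker that dies silently looks much like one that never
        # exits, so report the traceback to the parent instead.
        result_queue.put(('error', traceback.format_exc()))

If a worker is failing, you'll then see the traceback among your
results instead of just a process that won't exit.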

Another question: why are you distinguishing between easy and hard jobs?
Do you actually get a measurable improvement in performance from doing
it this way instead of having just a single queue of jobs and a single
pool of workers?
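
If the split isn't buying you anything, the whole thing could look
something like this, combined with the single put-back poison pill I
suggested above. Again a rough, untested sketch, with 'do_job' standing
in for the real computation:

import multiprocessing

def do_job(job):
    # Stand-in for the real computation.
    return job * job

def worker(work_queue, result_queue):
    while True:
        job = work_queue.get()
        if job is None:
            # One shared poison pill: put it back so that the other
            # workers also see it, then exit.
            work_queue.put(None)
            break
        result_queue.put(do_job(job))

if __name__ == '__main__':
    work_queue = multiprocessing.Queue()
    result_queue = multiprocessing.Queue()
    jobs = range(20)
    for job in jobs:
        work_queue.put(job)
    work_queue.put(None)  # a single poison pill for the whole pool
    workers = [multiprocessing.Process(target=worker,
                                       args=(work_queue, result_queue))
               for i in range(multiprocessing.cpu_count())]
    for w in workers:
        w.start()
    # Collect all the results before joining, so that no worker is
    # left blocked on a full result queue.
    results = [result_queue.get() for job in jobs]
    for w in workers:
        w.join()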


