From jnoller at gmail.com  Tue Aug 23 22:28:27 2011
From: jnoller at gmail.com (Jesse Noller)
Date: Tue, 23 Aug 2011 16:28:27 -0400
Subject: [Multiprocessing-sig] Need new lead maintainer
Message-ID:

I can no longer in good conscience act as "lead maintainer" for
multiprocessing; the module needs care and feeding beyond what I can
give to it.

jesse

From brian.curtin at gmail.com  Tue Aug 23 22:34:54 2011
From: brian.curtin at gmail.com (Brian Curtin)
Date: Tue, 23 Aug 2011 15:34:54 -0500
Subject: [Multiprocessing-sig] Need new lead maintainer
In-Reply-To:
References:
Message-ID:

On Tue, Aug 23, 2011 at 15:28, Jesse Noller wrote:
> I can no longer in good conscience act as "lead maintainer" for
> multiprocessing; the module needs care and feeding beyond what I can
> give to it.
>
> jesse

I can put out a call for help on blog.python.org if you want.

From jnoller at gmail.com  Tue Aug 23 22:39:19 2011
From: jnoller at gmail.com (Jesse Noller)
Date: Tue, 23 Aug 2011 16:39:19 -0400
Subject: [Multiprocessing-sig] Need new lead maintainer
In-Reply-To:
References:
Message-ID:

On Tue, Aug 23, 2011 at 4:34 PM, Brian Curtin wrote:
> On Tue, Aug 23, 2011 at 15:28, Jesse Noller wrote:
>>
>> I can no longer in good conscience act as "lead maintainer" for
>> multiprocessing; the module needs care and feeding beyond what I can
>> give to it.
>>
>> jesse
>
> I can put out a call for help on blog.python.org if you want.

I'm going to blog about it within the next 24 hours. No worries.

From SGONG at mdacorporation.com  Mon Aug 29 21:39:59 2011
From: SGONG at mdacorporation.com (Shawn Gong)
Date: Mon, 29 Aug 2011 12:39:59 -0700
Subject: [Multiprocessing-sig] stop and restart processes
Message-ID: <33584A1DEF4341428D15C1273466D3230176E59BF2@EVSYVR1.ds.mda.ca>

Hi list,

My Linux server has 24 CPUs. When I multiprocess a large job with more
than 800 runs (para_list has >800 entries), I run out of memory.
There could be a memory leak somewhere.

Is there a way to stop the worker processes after every 10 tasks and
restart them, so that the leaked memory is reclaimed?

thanks,
Shawn

Code:

    args = [(arg1, arg2, ...) for arg1, arg2 in para_list]

    # run My_calculation asynchronously
    pool = multiprocessing.Pool(processes=cpu_used)
    results = [pool.apply_async(My_calculation, a) for a in args]

    # get the results: wait for each to finish and log any errors
    for idx, r in enumerate(results):
        try:
            r.get()
        except Exception:
            log.debug(traceback.format_exc())

From SGONG at mdacorporation.com  Mon Aug 29 22:34:16 2011
From: SGONG at mdacorporation.com (Shawn Gong)
Date: Mon, 29 Aug 2011 13:34:16 -0700
Subject: [Multiprocessing-sig] stop and restart processes
In-Reply-To: <33584A1DEF4341428D15C1273466D3230176E59BF2@EVSYVR1.ds.mda.ca>
References: <33584A1DEF4341428D15C1273466D3230176E59BF2@EVSYVR1.ds.mda.ca>
Message-ID: <33584A1DEF4341428D15C1273466D3230176E59BF3@EVSYVR1.ds.mda.ca>

To clarify: I am using Python 2.6.6, not 2.7, so 'maxtasksperchild' is
not available. Is there something similar?

thanks,
Shawn

-----Original Message-----
From: multiprocessing-sig-bounces+sgong=mdacorporation.com at python.org [mailto:multiprocessing-sig-bounces+sgong=mdacorporation.com at python.org] On Behalf Of Shawn Gong
Sent: Monday, August 29, 2011 3:40 PM
To: 'multiprocessing-sig at python.org'
Subject: [Multiprocessing-sig] stop and restart processes

Hi list,

My Linux server has 24 CPUs. When I multiprocess a large job with more
than 800 runs (para_list has >800 entries), I run out of memory. There
could be a memory leak somewhere.

Is there a way to stop the worker processes after every 10 tasks and
restart them, so that the leaked memory is reclaimed?

thanks,
Shawn

Code:

    args = [(arg1, arg2, ...)
        for arg1, arg2 in para_list]

    # run My_calculation asynchronously
    pool = multiprocessing.Pool(processes=cpu_used)
    results = [pool.apply_async(My_calculation, a) for a in args]

    # get the results: wait for each to finish and log any errors
    for idx, r in enumerate(results):
        try:
            r.get()
        except Exception:
            log.debug(traceback.format_exc())

_______________________________________________
Multiprocessing-sig mailing list
Multiprocessing-sig at python.org
http://mail.python.org/mailman/listinfo/multiprocessing-sig

From jnoller at gmail.com  Tue Aug 30 01:15:07 2011
From: jnoller at gmail.com (Jesse Noller)
Date: Mon, 29 Aug 2011 19:15:07 -0400
Subject: [Multiprocessing-sig] stop and restart processes
In-Reply-To: <33584A1DEF4341428D15C1273466D3230176E59BF3@EVSYVR1.ds.mda.ca>
References: <33584A1DEF4341428D15C1273466D3230176E59BF2@EVSYVR1.ds.mda.ca>
 <33584A1DEF4341428D15C1273466D3230176E59BF3@EVSYVR1.ds.mda.ca>
Message-ID:

There is no similar control in 2.6, and we cannot backport that
functionality to 2.6, which is in security-fix-only mode. If I remember
correctly, Celery (http://celeryproject.org/) implements something like
what you want internally.

Jesse

On Mon, Aug 29, 2011 at 4:34 PM, Shawn Gong wrote:
> To clarify: I am using Python 2.6.6, not 2.7, so 'maxtasksperchild' is
> not available. Is there something similar?
>
> thanks,
> Shawn
>
>
> -----Original Message-----
> From: multiprocessing-sig-bounces+sgong=mdacorporation.com at python.org [mailto:multiprocessing-sig-bounces+sgong=mdacorporation.com at python.org] On Behalf Of Shawn Gong
> Sent: Monday, August 29, 2011 3:40 PM
> To: 'multiprocessing-sig at python.org'
> Subject: [Multiprocessing-sig] stop and restart processes
>
> Hi list,
>
> My Linux server has 24 CPUs. When I multiprocess a large job with more
> than 800 runs (para_list has >800 entries), I run out of memory. There
> could be a memory leak somewhere.
>
> Is there a way to stop the worker processes after every 10 tasks and
> restart them, so that the leaked memory is reclaimed?
>
> thanks,
> Shawn
>
> Code:
>
>     args = [(arg1, arg2, ...) for arg1, arg2 in para_list]
>
>     # run My_calculation asynchronously
>     pool = multiprocessing.Pool(processes=cpu_used)
>     results = [pool.apply_async(My_calculation, a) for a in args]
>
>     # get the results: wait for each to finish and log any errors
>     for idx, r in enumerate(results):
>         try:
>             r.get()
>         except Exception:
>             log.debug(traceback.format_exc())
>
> _______________________________________________
> Multiprocessing-sig mailing list
> Multiprocessing-sig at python.org
> http://mail.python.org/mailman/listinfo/multiprocessing-sig

From ask at celeryproject.org  Tue Aug 30 13:46:26 2011
From: ask at celeryproject.org (Ask Solem)
Date: Tue, 30 Aug 2011 12:46:26 +0100
Subject: [Multiprocessing-sig] stop and restart processes
In-Reply-To:
References: <33584A1DEF4341428D15C1273466D3230176E59BF2@EVSYVR1.ds.mda.ca>
 <33584A1DEF4341428D15C1273466D3230176E59BF3@EVSYVR1.ds.mda.ca>
Message-ID:

On 30 Aug 2011, at 00:15, Jesse Noller wrote:

> There is no similar control in 2.6, and we cannot backport that
> functionality to 2.6, which is in security-fix-only mode. If I remember
> correctly, Celery (http://celeryproject.org/) implements something like
> what you want internally.
>

https://github.com/ask/billiard contains the maxtasksperchild feature.
Billiard is the celery.concurrency.processes.pool module as a separate
package, but it has not been updated for a while. If people are
interested, we should update the package to include the latest fixes
from Celery.
--
Ask Solem
twitter.com/asksol | +44 (0)7713357179

From SGONG at mdacorporation.com  Tue Aug 30 15:00:37 2011
From: SGONG at mdacorporation.com (Shawn Gong)
Date: Tue, 30 Aug 2011 06:00:37 -0700
Subject: [Multiprocessing-sig] stop and restart processes
In-Reply-To:
References: <33584A1DEF4341428D15C1273466D3230176E59BF2@EVSYVR1.ds.mda.ca>
 <33584A1DEF4341428D15C1273466D3230176E59BF3@EVSYVR1.ds.mda.ca>
Message-ID: <33584A1DEF4341428D15C1273466D3230176E59BF5@EVSYVR1.ds.mda.ca>

Thank you, Jesse and Ask Solem.

Shawn

-----Original Message-----
From: Ask Solem [mailto:ask at celeryproject.org]
Sent: Tuesday, August 30, 2011 7:46 AM
To: Jesse Noller
Cc: Shawn Gong; multiprocessing-sig at python.org
Subject: Re: [Multiprocessing-sig] stop and restart processes

On 30 Aug 2011, at 00:15, Jesse Noller wrote:

> There is no similar control in 2.6, and we cannot backport that
> functionality to 2.6, which is in security-fix-only mode. If I remember
> correctly, Celery (http://celeryproject.org/) implements something like
> what you want internally.
>

https://github.com/ask/billiard contains the maxtasksperchild feature.
Billiard is the celery.concurrency.processes.pool module as a separate
package, but it has not been updated for a while. If people are
interested, we should update the package to include the latest fixes
from Celery.

--
Ask Solem
twitter.com/asksol | +44 (0)7713357179
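For readers on Python 2.7 or later, the `maxtasksperchild` keyword
discussed in this thread is built into `multiprocessing.Pool`. A minimal
sketch of how the original poster's pattern would use it (`my_calculation`
is an illustrative stand-in for the poster's `My_calculation`, and the
numbers are arbitrary):

```python
import multiprocessing


def my_calculation(arg):
    # stand-in for the real work; just squares its input
    return arg * arg


if __name__ == "__main__":
    # each worker process exits and is replaced after 10 tasks,
    # so a slow per-process leak cannot accumulate over a long job
    pool = multiprocessing.Pool(processes=4, maxtasksperchild=10)
    results = [pool.apply_async(my_calculation, (a,)) for a in range(100)]
    pool.close()
    pool.join()
    print([r.get() for r in results][:5])  # -> [0, 1, 4, 9, 16]
```

Workers are recycled transparently; results still come back in submission
order, exactly as with a plain `Pool`.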
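On Python 2.6, where `maxtasksperchild` does not exist, the effect can be
approximated by tearing the pool down and rebuilding it every N tasks, which
is what the poster asked about. A rough sketch under that assumption
(`my_calculation`, `run_in_chunks`, and `chunk_size` are hypothetical names,
not from the thread; note this recycles the entire pool every `chunk_size`
tasks, which is coarser than the per-worker accounting `maxtasksperchild`
performs):

```python
import multiprocessing
import traceback


def my_calculation(arg):
    # stand-in for the real work; just squares its input
    return arg * arg


def run_in_chunks(para_list, chunk_size=10, processes=None):
    """Run my_calculation over para_list, discarding and rebuilding the
    worker pool after every `chunk_size` tasks so that any memory leaked
    inside the workers is returned to the OS when they exit."""
    results = []
    items = list(para_list)
    for start in range(0, len(items), chunk_size):
        chunk = items[start:start + chunk_size]
        pool = multiprocessing.Pool(processes=processes)
        async_results = [pool.apply_async(my_calculation, (a,)) for a in chunk]
        pool.close()   # no more tasks for this pool generation
        pool.join()    # wait for the workers to exit, freeing their memory
        for r in async_results:
            try:
                results.append(r.get())
            except Exception:
                traceback.print_exc()
    return results


if __name__ == "__main__":
    print(run_in_chunks(range(8), chunk_size=4, processes=2))
```

The repeated fork/teardown adds overhead, so `chunk_size` should be large
enough that pool startup cost stays small relative to the work done per
generation.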