[SciPy-user] running scipy code simultaneously on several machines

Jaonary Rabarisoa jaonary at gmail.com
Sat Oct 13 02:47:03 EDT 2007


Thank you guys for these hints. I think I'll try IPython1 first. I'll let
you know soon about my experiment on this.
Cheers,

Jaonary

On 10/12/07, Brian Granger <ellisonbg.net at gmail.com> wrote:
>
> As Jarrod mentioned below, IPython1 is probably the best solution for
> this.  Here is the simplest parallel implementation in IPython1:
>
> In [1]: import ipython1.kernel.api as kernel
>
> In [2]: rc = kernel.RemoteController(('127.0.0.1',10105))
>
> In [3]: rc.getIDs()
> Out[3]: [0, 1, 2, 3]
>
> In [4]: def my_func(A): return 'result'
>    ...:
>
> In [5]: rc.mapAll(my_func, range(16))
> Out[5]:
> ['result',
> 'result',
> 'result',
> 'result',
> 'result',
> 'result',
> 'result',
> 'result',
> 'result',
> 'result',
> 'result',
> 'result',
> 'result',
> 'result',
> 'result',
> 'result']
>
> This partitions the input array (range(16)) amongst 4 processors,
> calls my_func on each element, and then gathers the results back.
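The same scatter/apply/gather pattern can also be sketched on a single multi-core machine with Python's standard-library multiprocessing module (shown here only for comparison; it is not part of IPython1, and my_func's body is a placeholder):

```python
# Local equivalent of the scatter/apply/gather pattern above, using the
# standard library's multiprocessing module rather than IPython1.
from multiprocessing import Pool

def my_func(A):
    # stand-in for the expensive computation
    return A * A

if __name__ == '__main__':
    with Pool(processes=4) as pool:
        # partitions range(16) among 4 worker processes, applies my_func
        # to each element, and gathers the results back in input order
        all_res = pool.map(my_func, range(16))
    print(all_res)
```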
>
> This is the simplest approach, but IPython1 supports many other styles
> and approaches, including a dynamically load balanced task farming
> system.  I don't know if you need it, but IPython1 also has full
> integration with mpi.
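Dynamic load balancing can likewise be sketched with the standard library: imap_unordered with chunksize=1 hands each pending task to the next free worker, which helps when individual calls take very different amounts of time (again an illustrative sketch, not IPython1's own task-farming API):

```python
# Sketch of dynamically load-balanced task farming with the standard
# library's multiprocessing module (illustrative; IPython1 exposes its
# own task-farming interface).
from multiprocessing import Pool

def my_func(A):
    # stand-in for a long-running computation of variable duration
    return A * A

if __name__ == '__main__':
    with Pool(processes=4) as pool:
        # chunksize=1 dispatches one task at a time, so whichever worker
        # finishes first picks up the next pending task; results arrive
        # in completion order, hence the sort
        results = sorted(pool.imap_unordered(my_func, range(16), chunksize=1))
    print(results)
```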
>
> Please let us know if you have questions.
>
> Cheers,
>
> Brian
>
> On 10/12/07, Jaonary Rabarisoa <jaonary at gmail.com> wrote:
> > Hi all,
> >
> > I need to call a very time-consuming Python function several times. To
> > keep things simple, suppose it takes a single argument and returns a
> > single value, so its prototype is as follows:
> >
> > def my_func(A):
> >     ...
> >     return res
> >
> > I need to call this function for different values of A. A naive
> > approach is the following:
> >
> > all_res = []
> > for A in my_array_of_A:
> >     res = my_func(A)
> >     all_res.append(res)
> >
> > My problem is that a single call to my_func takes several hours, so I
> > wonder whether it is possible to distribute this "for" loop across
> > several machines (or processors) in order to speed up the process.
> >
> > I've heard about the cow module in scipy and the pympi package, but I
> > do not know how to tackle this problem with either of them. Could one
> > of you give me some hints on how to do this?
> >
> > Best regards,
> >
> > Jaonary
> >
> > _______________________________________________
> > SciPy-user mailing list
> > SciPy-user at scipy.org
> > http://projects.scipy.org/mailman/listinfo/scipy-user
> >
> >
>