[SciPy-Dev] scipy improve performance by parallelizing

Sai Rajeshwar rajsai24 at gmail.com
Thu Jul 24 12:47:46 EDT 2014


OK.. so what about 2-D or 3-D convolution? Does ndimage perform better there as well?

thanks
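
A minimal timing sketch one could run to answer this question (array and kernel sizes are arbitrary placeholders, not numbers from this thread), comparing scipy.signal.convolve and scipy.ndimage.convolve on a 2-D input:

    import timeit
    import numpy as np
    from scipy import signal, ndimage

    image = np.random.rand(512, 512)    # placeholder 2-D data
    kernel = np.random.rand(7, 7)       # placeholder 2-D kernel

    # Note: the two routines use different boundary handling, so edge
    # values differ; this only compares speed.
    t_signal = timeit.timeit(
        lambda: signal.convolve(image, kernel, mode='same'), number=10)
    t_ndimage = timeit.timeit(
        lambda: ndimage.convolve(image, kernel, mode='constant'), number=10)

    print('scipy.signal.convolve :', t_signal)
    print('scipy.ndimage.convolve:', t_ndimage)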

*with regards..*

*M. Sai Rajeswar*
*M-tech  Computer Technology*


*IIT Delhi----------------------------------Cogito Ergo Sum---------*


On Thu, Jul 24, 2014 at 10:16 PM, Eric Moore <ewm at redtetrahedron.org> wrote:

>
>
> On Thursday, July 24, 2014, Sai Rajeshwar <rajsai24 at gmail.com> wrote:
>
>> Hi Julian, thanks..
>>
>> But when I use numpy.convolve I get this error: ValueError: object too
>> deep for desired array.
>>
>> Does numpy.convolve work for 2-D or 3-D convolution?
>> Thanks
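
numpy.convolve handles only one-dimensional sequences; the "object too deep for desired array" error is what it raises for higher-dimensional input. A minimal sketch of the distinction, falling back to the n-dimensional routines in scipy (sizes are arbitrary placeholders):

    import numpy as np
    from scipy import signal, ndimage

    a = np.random.rand(64, 64)   # placeholder 2-D data
    k = np.random.rand(5, 5)     # placeholder 2-D kernel

    np.convolve(a.ravel(), k.ravel())   # 1-D input: works
    try:
        np.convolve(a, k)               # 2-D input: fails
    except ValueError as err:
        print('numpy.convolve:', err)   # "object too deep for desired array"

    # scipy handles n-dimensional convolution directly:
    out1 = signal.convolve(a, k, mode='same')
    out2 = ndimage.convolve(a, k)
    print(out1.shape, out2.shape)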
>>
>> *with regards..*
>>
>> *M. Sai Rajeswar*
>> *M-tech  Computer Technology*
>>
>>
>> *IIT Delhi----------------------------------Cogito Ergo Sum---------*
>>
>>
>> On Fri, Jul 11, 2014 at 11:13 PM, Julian Taylor <
>> jtaylor.debian at googlemail.com> wrote:
>>
>>> For simple convolutions there is also np.convolve.
>>>
>>> Compared to scipy it releases the GIL, so you can use normal Python
>>> threads for parallelization if you need to compute many independent
>>> convolutions rather than just one.
>>>
>>> That said, scipy should probably release the GIL too; it's probably a
>>> bug that it doesn't.
>>>
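
A minimal sketch of the thread-based approach described above, assuming many independent 1-D convolutions (data sizes and thread count are arbitrary placeholders):

    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    signals = [np.random.rand(100000) for _ in range(32)]   # placeholder inputs
    kernel = np.random.rand(501)

    def conv(x):
        # np.convolve releases the GIL, so these calls can run in
        # parallel across plain Python threads.
        return np.convolve(x, kernel, mode='same')

    with ThreadPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(conv, signals))

    print(len(results), results[0].shape)
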
>>> On 10.07.2014 17:19, Ashwin Srinath wrote:
>>> > Hey, Sai
>>> >
>>> > I'm no expert, so I'll just share a few links to start this discussion.
>>> > You definitely want to look at Cython <http://cython.org/> if you're
>>> > computing with NumPy arrays. If you're familiar with the MPI
>>> programming
>>> > model, you want to check out mpi4py <http://mpi4py.scipy.org/>. If you
>>> > have NVIDIA GPUs that you'd like to take advantage of, check out PyCUDA
>>> > <http://mathema.tician.de/software/pycuda/>.
>>> >
>>> > Thanks,
>>> > Ashwin
>>> >
>>> >
>>> > On Thu, Jul 10, 2014 at 6:08 AM, Sai Rajeshwar <rajsai24 at gmail.com> wrote:
>>> >
>>> >     hi all,
>>> >
>>> >     I'm trying to optimise a Python code that spends a huge amount of
>>> >     time in SciPy functions such as the scipy.signal convolution
>>> >     routines. Following are some of my queries regarding the same.. It
>>> >     would be great to hear from you.. thanks..
>>> >     ----------------------------------------------------
>>> >     1) Can SciPy take advantage of multiple cores? If so, how?
>>> >     2) What are ways we can improve the performance of scipy/numpy
>>> >     functions, e.g. using OpenMP, MPI etc.?
>>> >     3) If scipy internally uses BLAS/MKL libraries, can we enable
>>> >     parallelism through these? (see the sketch after this list)
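
On question 3, a short sketch of how to check which BLAS/LAPACK the installation was built against and how a threaded BLAS is usually controlled. Note that the convolution routines in scipy.signal/scipy.ndimage do not go through BLAS, so a parallel BLAS mainly helps linear-algebra calls; the exact environment variable depends on the BLAS in use:

    import numpy as np
    import scipy

    # Show the BLAS/LAPACK libraries NumPy and SciPy were built against.
    np.show_config()
    scipy.show_config()

    # If a threaded BLAS is present, its thread count is normally set
    # through an environment variable before the process starts, e.g.
    #   MKL_NUM_THREADS=4        for MKL
    #   OPENBLAS_NUM_THREADS=4   for OpenBLAS
    #   OMP_NUM_THREADS=4        for generic OpenMP builds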
>>> >
>>> >
>>> >     Looks like I have to work on the internals of scipy.. thanks a lot..
>>> >
>>> >
>>> >     *with regards..*
>>> >     *M. Sai Rajeswar*
>>> >     *M-tech  Computer Technology*
>>> >     *IIT Delhi
>>> >     ----------------------------------Cogito Ergo Sum---------*
>>> >
>>>
>>>
>>
>>
>
> There are also convolution functions in scipy.ndimage. For simple, smallish
> 1-D convolutions ndimage is much, much faster than scipy.signal and somewhat
> faster than numpy.convolve.
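>
A minimal sketch that times the three routines mentioned here on a smallish 1-D problem (sizes are arbitrary placeholders; relative timings depend on the NumPy/SciPy versions and build):

    import timeit
    import numpy as np
    from scipy import signal, ndimage

    x = np.random.rand(10000)    # placeholder signal
    w = np.random.rand(15)       # placeholder small kernel

    cases = [
        ('scipy.signal.convolve  ', lambda: signal.convolve(x, w, mode='same')),
        ('numpy.convolve          ', lambda: np.convolve(x, w, mode='same')),
        ('scipy.ndimage.convolve1d', lambda: ndimage.convolve1d(x, w, mode='constant')),
    ]
    for name, fn in cases:
        print(name, timeit.timeit(fn, number=200))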
>
>
>