Parallelization of Python on GPU?

John Ladasky john_ladasky at sbcglobal.net
Thu Feb 26 12:34:16 EST 2015


On Thursday, February 26, 2015 at 8:41:26 AM UTC-8, Sturla Molden wrote:
> If you are doing SVM regression with scikit-learn you are using libSVM.
> There is a CUDA accelerated version of this C library here:
> http://mklab.iti.gr/project/GPU-LIBSVM
> 
> You can presumably reuse the wrapping code from scikit-learn.
> 
> Sturla

Hi Sturla,  I recognize your name from the scikit-learn mailing list.  

As I mentioned a few posts above yours in this thread, I am aware of gpu-libsvm.  I don't know whether I'm up to the task of reusing the scikit-learn wrapping code, but I am giving that option some serious thought.  It isn't clear to me that gpu-libsvm can handle both SVM classification and SVR regression, and I need both algorithms. 
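For context, a minimal sketch of what I need (assuming scikit-learn is installed, and using synthetic toy data): both the classifier (SVC) and the regressor (SVR) in scikit-learn wrap the same libSVM library, so any GPU-accelerated replacement would need to cover both code paths.

```python
# Sketch only: illustrates that SVC (classification) and SVR (regression)
# are separate estimators in scikit-learn, both backed by libSVM.
import numpy as np
from sklearn.svm import SVC, SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))                   # 100 training vectors, 4 features
y_class = (X[:, 0] > 0).astype(int)             # toy labels for classification
y_reg = X[:, 0] + 0.1 * rng.normal(size=100)    # toy targets for regression

clf = SVC(kernel="rbf").fit(X, y_class)         # libSVM C-SVC under the hood
reg = SVR(kernel="rbf").fit(X, y_reg)           # libSVM epsilon-SVR under the hood

print(clf.score(X, y_class))                    # training accuracy
print(reg.score(X, y_reg))                      # training R^2
```

A GPU drop-in that only accelerates the classification path would leave half of my workload on the CPU.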

My training data sets contain around 5000 vectors.  If that graph on the gpu-libsvm web page is any indication of what I can expect from my own data (I note that they didn't specify which GPU card they were using), I might realize a 20x increase in speed.
