Parallelization of Python on GPU?

Sturla Molden sturla.molden at gmail.com
Thu Feb 26 16:10:42 EST 2015


On 26/02/15 18:48, Jason Swails wrote:
> On Thu, 2015-02-26 at 16:53 +0000, Sturla Molden wrote:
>> GPU computing is great if you have the following:
>>
>> 1. Your data structures are arrays of floating point numbers.
>
> It actually works equally great, if not better, for integers.

Right, but not complicated data structures with a lot of references or 
pointers. It requires that data be laid out in regular arrays, and then 
it acts on those arrays in a data-parallel manner. It was designed to 
process vertices in parallel for computer graphics, and that limitation 
is always there. It is not a CPU with 1024 cores. It is a "floating 
point monster" which can process 1024 vectors in parallel. You write a 
tiny kernel in a C-like language (CUDA, OpenCL) to process one vector, 
and the GPU applies that kernel to every vector in an array of vectors. 
It is very comparable to how GLSL and Direct3D vertex and fragment 
shaders work. (The reason for this should be obvious.) The GPU is 
actually great for a lot of things in science, but it is not a CPU. The 
biggest mistake in the GPGPU hype is the idea that the GPU will behave 
like a CPU with many cores.
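To make the "tiny kernel applied to every element" model concrete, here 
is a minimal sketch in plain Python/NumPy that emulates it sequentially 
on the CPU (no GPU or CUDA required; `saxpy_kernel` and `launch` are 
illustrative names, not a real GPU API):

```python
import numpy as np

# A CUDA/OpenCL kernel is written for ONE element; the GPU then runs it
# over every index of the array in parallel. This sketch emulates that
# model with a sequential loop.
def saxpy_kernel(i, a, x, y, out):
    # The "tiny kernel": one scalar operation per thread index i.
    out[i] = a * x[i] + y[i]

def launch(kernel, n, *args):
    # The "kernel launch": apply the kernel to all n indices.
    # On a GPU these iterations would run as parallel threads.
    for i in range(n):
        kernel(i, *args)

n = 1024
a = 2.0
x = np.arange(n, dtype=np.float32)
y = np.ones(n, dtype=np.float32)
out = np.empty(n, dtype=np.float32)

launch(saxpy_kernel, n, a, x, y, out)

# The data-parallel result is exactly what vectorized NumPy expresses:
assert np.allclose(out, a * x + y)
```

Note that the kernel never follows pointers or walks a linked 
structure: it only indexes into flat arrays, which is exactly the 
layout restriction described above.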

Sturla

More information about the Python-list mailing list