[Baypiggies] PyCuda - using video card hardware for parallel computation

Andy Wiggin andywiggin at gmail.com
Wed Jul 14 18:34:16 CEST 2010


On Tue, Jul 13, 2010 at 6:14 PM, Seth Friedman <sfseth at gmail.com> wrote:
> This is really neat.   I've got a CUDA-capable nvidia card I'm thinking I
> want to write a python program to answer the following question: what/where
> is the effect of the GIL when programming 240/480/720 1.3ghz cores?   It
> seems like this question must have been conquered to some degree, at
> least.   Is this simply a non issue because there isn't any python
> "interpreting" happening in parallel, that it's translated in serial
> (assuming single-threaded/single-process python doing the farming to the
> GPUs) ..?
>
> seth
>

Seth,
It doesn't look to me like any Python code actually executes on
the GPU. The "kernels" that run on the GPU still appear to be
written in C (notice the C-format string literals passed to
SourceModule in the examples; that C source gets compiled and is
what runs on the GPU). The GIL only constrains the Python
interpreter running on the host CPU, so it's a non-issue for the
GPU cores themselves, but it also means you unfortunately can't
do the experiment you're describing.
Regards,
Andy

