[Baypiggies] PyCuda - using video card hardware for parallel computation

Andrew Akira Toulouse andrew at atoulou.se
Wed Jul 14 22:38:51 CEST 2010


Is expressivity really warranted at that abstraction level? If you're
doing a massively parallel computation, tuned to nVidia graphics cards,
that requires no I/O, chances are that performance is important enough
that the program running on the CPU should distill the problem into
its most essential calculations.

At least, any language runtime not designed for that kind of hardware
probably made compromises that are grossly inappropriate for the
platform, and trading away performance for expressivity could
sabotage it.

On Wed, Jul 14, 2010 at 12:09 PM, Seth Friedman <sfseth at gmail.com> wrote:
> Right, I didn't think the Python was actually running on the GPU. So for
> the case of a million-element array where a minute of computation needs
> to be independently performed on each element, it seems relatively
> straightforward to send a CUDA swarm of worker bees at the problem. What
> I'm thinking about is where there are relationships between elements
> and/or computations - essentially, can the elements or worker bees have
> more logic, such as control flow or intercommunication? I'll spend some
> more time on the PyCuda docs, but what I think I'm wondering is
> essentially how rich the language is - is it really just compiling down
> certain data structures and math, or is there more of the expressive
> capability present in Python?

