[Numpy-discussion] Fwd: GPU Numpy

Rohit Garg rpg.314 at gmail.com
Thu Sep 10 03:45:29 EDT 2009


> You do realize that the throughput from onboard (video) RAM is going
> to be much higher, right? It's not just the parallelization but the
> memory bandwidth. And as James pointed out, if you can keep most of
> your intermediate computation on-card, you stand to benefit immensely,
> even if doing some operations where the GPU provides no tangible
> benefit (i.e. the benefit is in aggregate and avoiding copies).

Good point made here. GPUs support memory bandwidth on the order of
100 GB/s (bytes, not bits), and upcoming GPUs will likely break the
250 GB/s mark. Even if your expressions involve low
operation-to-memory ratios, GPUs are a big win, since their memory
bandwidth is higher than that of a CPU's L2 and even L1 caches.
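
To make that concrete, here is a minimal sketch of keeping
intermediates on-card so that host memory is touched only at the
ends. I am using CuPy purely as an example GPU array library; the
array size and the expressions are illustrative, not from any
benchmark:

    import numpy as np
    import cupy as cp  # stand-in GPU array library, for illustration

    x_host = np.random.rand(10_000_000).astype(np.float32)

    x = cp.asarray(x_host)          # one host -> device copy
    y = cp.sin(x) + cp.cos(x) ** 2  # intermediates stay in video RAM
    z = y * x - 1.0                 # still no host traffic

    result = cp.asnumpy(z)          # one device -> host copy at the end

For a bandwidth-bound elementwise expression like this, runtime is
roughly (bytes moved) / (bandwidth), so keeping the traffic on-card
at ~100 GB/s instead of shuttling each step over host memory at
~10 GB/s is close to an order-of-magnitude win before counting any
parallel speedup.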

Regards,

-- 
Rohit Garg

http://rpg-314.blogspot.com/

Senior Undergraduate
Department of Physics
Indian Institute of Technology
Bombay
