[Numpy-discussion] Fwd: GPU Numpy

Francesc Alted faltet at pytables.org
Thu Sep 10 04:36:27 EDT 2009


On Thursday 10 September 2009 09:45:29, Rohit Garg wrote:
> > You do realize that the throughput from onboard (video) RAM is going
> > to be much higher, right? It's not just the parallelization but the
> > memory bandwidth. And as James pointed out, if you can keep most of
> > your intermediate computation on-card, you stand to benefit immensely,
> > even if doing some operations where the GPU provides no tangible
> > benefit (i.e. the benefit is in aggregate and avoiding copies).
>
> Good point made here. GPUs support bandwidth on the order of 100 GB/s
> (bytes, not bits). Upcoming GPUs will likely break the 250 GB/s mark.
> Even if your expressions involve low operation/memory ratios, GPUs are
> a big win, as their memory bandwidth is higher than that of a CPU's L2
> and even L1 caches.

Where are you getting this info from?  IMO the memory technology in 
graphics boards cannot be that different from what is on commodity 
motherboards.  It could be a *bit* faster (at the expense of packing less 
of it), but I'd say not as much as 4x faster (100 GB/s vs the 25 GB/s of an 
Intel i7 in sequential access), as you are suggesting.  Maybe this is GPU 
cache bandwidth?
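One way to sanity-check the host-side figure is a copy microbenchmark: time an out-of-place copy of an array much larger than the CPU caches and count each pass as one read plus one write. This is only a rough sketch; the array size and the 2x traffic accounting are my assumptions, and real sustained bandwidth depends on NUMA placement, prefetching, and the allocator.

```python
import time
import numpy as np

# Rough host-memory bandwidth estimate: copy a buffer far larger than
# any CPU cache so the traffic has to go through main memory.
n = 16 * 1024 * 1024                 # 16M float64 = 128 MB per array
src = np.ones(n, dtype=np.float64)
dst = np.empty_like(src)

np.copyto(dst, src)                  # warm-up pass (page faults, caches)

t0 = time.perf_counter()
np.copyto(dst, src)
elapsed = time.perf_counter() - t0

gb_moved = 2 * src.nbytes / 1e9      # one read of src + one write of dst
print("approx. host bandwidth: %.1f GB/s" % (gb_moved / elapsed))
```

If this prints a number in the tens of GB/s on a desktop machine, that is consistent with the ~25 GB/s sequential figure quoted above rather than the 100+ GB/s claimed for GPU onboard RAM.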

-- 
Francesc Alted
