numpy performance and random numbers

Carl Johan Rehn care02 at gmail.com
Sat Dec 19 10:20:10 EST 2009


On Dec 19, 3:16 pm, sturlamolden <sturlamol... at yahoo.no> wrote:
> On 19 Des, 14:06, Carl Johan Rehn <car... at gmail.com> wrote:
>
> > Matlab and numpy have (by chance?) exactly the same names for the same
> > functionality,
>
> Common ancestry: NumPy and Matlab borrowed the name from IDL.
>
> LabView, Octave and SciLab use the name randn as well.
>
> > So the basic question is: how can I speed up random number
> > generation?
>
> The obvious thing would be to compile ziggurat yourself, and turn on
> optimization flags for your hardware: http://www.jstatsoft.org/v05/i08/
>
> P.S. Be careful if you consider using more than one processor.
> Multithreading is a very difficult issue with PRNGs, because it is
> difficult to guarantee they are truly independent. But you can use a
> producer-consumer pattern: one thread constantly producing
> random numbers (writing into a buffer or pipe) and other thread(s)
> consuming them.
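A minimal sketch of the producer-consumer pattern described above, using the modern numpy.random.Generator API (NumPy >= 1.17, which postdates this thread); the seed, block size, and block count are arbitrary choices for illustration:

```python
import queue
import threading
import numpy as np

def producer(buf, block_size, n_blocks):
    # One thread fills the bounded buffer with blocks of normal variates.
    rng = np.random.default_rng(12345)   # arbitrary seed for the sketch
    for _ in range(n_blocks):
        buf.put(rng.standard_normal(block_size))
    buf.put(None)                        # sentinel: production finished

buf = queue.Queue(maxsize=8)             # bounded buffer between the threads
t = threading.Thread(target=producer, args=(buf, 1000, 10))
t.start()

# Consumer: pull blocks as they become available and process them.
total = 0
while True:
    block = buf.get()                    # blocks until the producer has data
    if block is None:
        break
    total += block.size                  # stand-in for real work on the draws
t.join()
print(total)  # 10 blocks of 1000 draws -> 10000
```

Note that only one thread ever touches the generator, which sidesteps the independence problem mentioned in the quoted text.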

How about multi-core or (perhaps more exciting) GPU and CUDA? I must
admit that I am extremely interested in trying the CUDA alternative.

Obviously, cuBLAS is not an option here, so what is the safest route
for a novice parallel-programmer?
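For the multi-core case specifically, one way to get guaranteed-independent streams without the producer-consumer buffer is NumPy's SeedSequence spawning (again NumPy >= 1.17, newer than this thread); a sketch with arbitrary seed and sizes:

```python
import numpy as np

# Spawn statistically independent child streams from one root seed.
ss = np.random.SeedSequence(20091219)    # arbitrary root seed for the example
child_seqs = ss.spawn(4)                 # one child sequence per worker/core
rngs = [np.random.default_rng(s) for s in child_seqs]

# Each generator could be handed to its own process or thread;
# the spawned streams are constructed not to overlap.
blocks = [rng.standard_normal(1000) for rng in rngs]
samples = np.concatenate(blocks)
print(samples.shape)  # (4000,)
```

Each spawned generator can then be passed to a multiprocessing worker, so every core draws from its own stream.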

Carl





More information about the Python-list mailing list