speed up a numpy code with huge array

Alexzive zasaconsulting at gmail.com
Wed May 26 07:48:00 EDT 2010


sorry, what I wrote about the second bottleneck was nonsense:
it only seemed to hang, but I had simply forgotten to double-enter
after the "for" loop while debugging.

On May 26, 1:43 pm, Alexzive <zasaconsult... at gmail.com> wrote:
> thank you all for the tips.
> I 'll try them soon.
>
> I also noticed another bottleneck, where Python accesses some
> array data stored in the odb files (---> in the text below), even
> before the algorithm starts:
>
> ###
> EPS_nodes = range(len(frames))
> for f in frames:
>     sum = 0
> --->UN = F[f].fieldOutputs['U'].getSubset(region=TOP).values<---
>     EPS_nodes[f] = UN[10].data[Scomp-1]/L3
>
> ###
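Two small, cheap cleanups suggest themselves in that loop: `sum = 0` shadows the built-in and is never used, and overwriting a `range()` list works (on Python 2) but a preallocated NumPy array is clearer and cheaper. A minimal sketch of the pattern with stand-in objects (the real `F[f].fieldOutputs['U'].getSubset(...)` chain needs Abaqus, so a dummy `Frame` class with hypothetical data is used here):

```python
import numpy as np

# Stand-in for the odb objects: each frame exposes a .values array,
# mimicking F[f].fieldOutputs['U'].getSubset(region=TOP).values.
class Frame:
    def __init__(self, values):
        self.values = values

frames = range(5)
F = [Frame(np.arange(12.0).reshape(4, 3) + f) for f in frames]
L3 = 2.0      # hypothetical reference length, as in the post
Scomp = 2     # hypothetical stress/strain component index

# Preallocate a float array instead of overwriting a range() list;
# bind the per-frame lookup to a local once per iteration.
EPS_nodes = np.empty(len(frames))
for f in frames:
    UN = F[f].values
    EPS_nodes[f] = UN[3, Scomp - 1] / L3
```

This won't fix the odb access cost itself (that lookup is inside Abaqus's API), but it removes the avoidable Python-side overhead around it.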
>
> unfortunately I don't have time to learn Cython. Using dictionaries
> sounds promising.
> Thanks!
> Alex
>
> On May 26, 8:14 am, Stefan Behnel <stefan... at behnel.de> wrote:
>
> > Alexzive, 25.05.2010 21:05:
>
> > > is there a way to improve the performance of the attached code ? it
> > > takes about 5 h on a dual-core (using only one core) when len(V)
> > > ~1MIL. V is an array which is supposed to store all the volumes of
> > > tetrahedral elements of a grid whose coord. are stored in NN (accessed
> > > through the list of tetra elements --> EL)
>
> > Consider using Cython for your algorithm. It has direct support for NumPy
> > arrays and translates to fast C code.
>
> > Stefan
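For the volume loop itself, a pure-NumPy vectorization may already be enough without Cython: the volume of a tetrahedron is |det(M)|/6, where M stacks the three edge vectors from one corner, and that determinant can be computed for all ~1 million elements at once. A sketch, assuming NN is an (n_nodes, 3) coordinate array and EL an (n_elements, 4) array of node indices (the exact layouts aren't shown in the thread), using a modern NumPy where np.linalg.det broadcasts over stacked matrices:

```python
import numpy as np

# Hypothetical tiny mesh: NN holds node coordinates, EL holds the
# four node indices of each tetrahedral element, as in the post.
NN = np.array([[0., 0., 0.],
               [1., 0., 0.],
               [0., 1., 0.],
               [0., 0., 1.],
               [1., 1., 1.]])
EL = np.array([[0, 1, 2, 3],
               [1, 2, 3, 4]])

# Gather all four corner points per element in one fancy-indexing
# step: P has shape (n_elements, 4, 3).
P = NN[EL]

# Edge vectors from the first corner of each element: (n_elements, 3, 3).
edges = P[:, 1:, :] - P[:, :1, :]

# |det| / 6 for every element at once -- no Python-level loop.
V = np.abs(np.linalg.det(edges)) / 6.0
```

This replaces a million-iteration Python loop with a handful of array operations, which is typically orders of magnitude faster before reaching for Cython.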




More information about the Python-list mailing list