[Numpy-discussion] merge_arrays is very slow; alternatives?

Gerrit Holl gerrit.holl at gmail.com
Fri Nov 26 14:57:30 EST 2010


On 26 November 2010 20:16, Gerrit Holl <gerrit.holl at gmail.com> wrote:
> Hi,
>
> upon profiling my code, I found that
> numpy.lib.recfunctions.merge_arrays is extremely slow; it does some
> 7000 rows/second. This is not acceptable for me.
...
> How can I do this in a faster way?

Replying to my own message here. Either I have written a much faster
implementation of this, or I am missing something. I consider it
unlikely that I could write a much faster implementation of an
established numpy function with so little effort, so I suspect I am
missing something here.

I wrote this implementation of the flattened version of merge_arrays:

import numpy

def merge_arrays(arr1, arr2):
    """Merge two structured arrays of equal shape into a single one."""
    t1 = arr1.dtype
    t2 = arr2.dtype
    # Combined dtype: the field descriptions of both inputs, concatenated
    newdtype = numpy.dtype(t1.descr + t2.descr)
    newarray = numpy.empty(shape=arr1.shape, dtype=newdtype)
    # Copy each input column into the corresponding field of the output
    for field in t1.names:
        newarray[field] = arr1[field]
    for field in t2.names:
        newarray[field] = arr2[field]
    return newarray
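
For reference, a minimal self-contained example of how the function above
is meant to be used; the two small structured arrays are made up for
illustration and are not my actual data:

import numpy

a = numpy.array([(1, 2.0), (3, 4.0)], dtype=[("x", "i8"), ("y", "f8")])
b = numpy.array([(5.0,), (6.0,)], dtype=[("z", "f8")])

m = merge_arrays(a, b)
print(m.dtype.names)  # ('x', 'y', 'z')
print(m["z"])         # the "z" column copied from b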

Benchmarks show it is almost 100 times faster for a medium-sized array (10,000 rows):

In [211]: %timeit merged1 = numpy.lib.recfunctions.merge_arrays([metarows[:10000], targetrows2[:10000]], flatten=True)
1 loops, best of 3: 1.01 s per loop

In [212]: %timeit merged2 = pyatmlab.tools.merge_arrays(metarows[:10000], targetrows2[:10000])
100 loops, best of 3: 10.8 ms per loop

In [214]: (merged1.view(dtype=uint64).reshape(-1, 100) == merged2.view(dtype=uint64).reshape(-1, 100)).all()
Out[214]: True
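
The In [214] check views both merged arrays as raw uint64 words, which
only works because the record size here happens to be a whole number of
64-bit words (100 per row). A more general sketch, assuming merged1 and
merged2 from above, is to compare field by field:

same = all((merged1[name] == merged2[name]).all()
           for name in merged1.dtype.names)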

# and still 4 times faster for a small array:

In [215]: %timeit merged1 = numpy.lib.recfunctions.merge_arrays([metarows[:10], targetrows2[:10]], flatten=True)
1000 loops, best of 3: 1.31 ms per loop

In [216]: %timeit merged2 = pyatmlab.tools.merge_arrays(metarows[:10], targetrows2[:10])
1000 loops, best of 3: 344 us per loop

# and 15 times faster for a large array (1.5 million elements):

In [218]: %timeit merged1 = numpy.lib.recfunctions.merge_arrays([metarows, targetrows2], flatten=True)
1 loops, best of 3: 110 s per loop

In [217]: %timeit merged2 = pyatmlab.tools.merge_arrays(metarows, targetrows2)
1 loops, best of 3: 7.26 s per loop
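
For anyone who wants to reproduce the comparison without my data, a rough
stand-alone benchmark along the following lines should do. metarows and
targetrows2 are replaced here by synthetic structured arrays (field layout
made up for illustration), and merge_arrays refers to the function shown
above, so the absolute numbers will of course differ:

import timeit
import numpy
import numpy.lib.recfunctions

n = 10000  # synthetic stand-ins for metarows / targetrows2
metarows = numpy.zeros(n, dtype=[("a", "f8"), ("b", "i8")])
targetrows2 = numpy.zeros(n, dtype=[("c", "f8"), ("d", "i8")])

t_ref = timeit.timeit(lambda: numpy.lib.recfunctions.merge_arrays(
    [metarows, targetrows2], flatten=True), number=10)
t_new = timeit.timeit(lambda: merge_arrays(metarows, targetrows2), number=10)
print("recfunctions: %.3f s   plain copy: %.3f s" % (t_ref, t_new))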

I wonder: am I missing something, or have I really written a
significant improvement in less than 10 LOC? Should I file a patch for
this?

Gerrit.

-- 
Exploring space at http://gerrit-explores.blogspot.com/
Personal homepage at http://www.topjaklont.org/
Asperger Syndroom: http://www.topjaklont.org/nl/asperger.html


