[Numpy-discussion] ENH: compute many inner products quickly

josef.pktd at gmail.com
Sun Jun 5 20:50:44 EDT 2016


On Sun, Jun 5, 2016 at 8:41 PM, Stephan Hoyer <shoyer at gmail.com> wrote:

> If possible, I'd love to add new functions for "generalized ufunc" linear
> algebra, and then deprecate (or at least discourage) using the older
> versions with inferior broadcasting rules. Adding a new keyword arg means
> we'll be stuck with an awkward API for a long time to come.
>
> There are three types of matrix/vector products for which ufuncs would be
> nice:
> 1. matrix-matrix product (covered by matmul)
> 2. matrix-vector product
> 3. vector-vector (inner) product
>
> It's straightforward to implement either of the latter two by inserting
> dummy dimensions and then calling matmul, but that's a pretty awkward API,
> especially for inner products. Unfortunately, we already use the two most
> obvious one-word names for vector inner products (inner and dot). But on
> the other hand, one-word names are not very descriptive, and the short name
> "dot" probably mostly exists because of the lack of an infix operator.
>
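[Editor's note: the dummy-dimension workaround mentioned above reads roughly like this. A sketch, not from the original message; the array shapes are illustrative.]

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 4))   # a stack of 5 vectors, one per row
Y = rng.standard_normal((5, 4))

# Row-wise inner products via matmul, by inserting dummy dimensions:
# (5, 1, 4) @ (5, 4, 1) -> (5, 1, 1), then index away the dummy axes.
inner = (X[:, None, :] @ Y[:, :, None])[:, 0, 0]

# Matrix-vector products for a stack of vectors, same idea:
# (3, 4) broadcasts against (5, 4, 1) -> (5, 3, 1).
A = rng.standard_normal((3, 4))
mv = (A @ X[:, :, None])[:, :, 0]   # shape (5, 3): A @ x_i for each row x_i
```

The dummy dimensions make matmul's broadcasting do the work, but the indexing gymnastics are exactly the awkwardness the proposal is about.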
> So I'll start by throwing out some potential new names:
>
> For matrix-vector products:
> matvecmul (if it's worth making a new operator)
>
> For inner products:
> vecmul (similar to matmul, but probably too ambiguous)
> dot_product
> inner_prod
> inner_product
>
>
How about plural names, as in the PR? I thought the `s` in inner_prods
would better signal the broadcasting behavior.

dot_products
...

"dots" ?  (I guess not)

Josef


>
>
>
>
> On Sat, May 28, 2016 at 8:53 PM, Scott Sievert <sievert.scott at gmail.com>
> wrote:
>
>> I recently ran into an application where I had to compute many inner
>> products quickly (roughly 50k inner products in less than a second). I
>> wanted a vector of inner products over the 50k vectors, or `[x1.T @ A @ x1,
>> …, xn.T @ A @ xn]` with A.shape = (1k, 1k).
>>
>> My first instinct was to look for a NumPy function to quickly compute
>> this, such as np.inner. However, it looks like np.inner has some other
>> behavior and I couldn’t get tensordot/einsum to work for me.
>>
>> Then a labmate pointed out that I can just do some slick matrix
>> multiplication to compute the same quantity, `(X.T * (A @ X.T)).sum(axis=0)`.
>> I opened [a PR] with this, and proposed that we define a new function
>> called `inner_prods` for this.
>>
>> However, in the PR, @shoyer pointed out
>>
>> > The main challenge is to figure out how to transition the behavior of
>> all these operations, while preserving backwards compatibility. Quite
>> likely, we need to pick new names for these functions, though we should try
>> to pick something that doesn't suggest that they are second class
>> alternatives.
>>
>> Do we choose new function names? Do we add a keyword arg that changes
>> what np.inner returns?
>>
>> [a PR]:https://github.com/numpy/numpy/pull/7690
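[Editor's note: for concreteness, the trick from the message above can be checked on a small example. Sizes are scaled down from the 50k × 1k case; note that `*` and `@` share precedence in Python, so the elementwise product needs explicit parentheses.]

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 10                     # small stand-ins for 50k vectors of length 1k
X = rng.standard_normal((n, d))   # one vector x_i per row
A = rng.standard_normal((d, d))

# Reference: the quadratic form x_i.T @ A @ x_i, computed one row at a time.
expected = np.array([x @ A @ x for x in X])

# The matrix-multiplication trick: one matmul plus an elementwise product,
# summed down the columns.
fast = (X.T * (A @ X.T)).sum(axis=0)

# An einsum spelling of the same computation, for comparison.
via_einsum = np.einsum('ij,jk,ik->i', X, A, X)
```

Both vectorized forms avoid the Python-level loop over the 50k vectors, which is where the speedup comes from.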
>>
>>
>>
>>
>> _______________________________________________
>> NumPy-Discussion mailing list
>> NumPy-Discussion at scipy.org
>> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>>
>>
>
>
>