[Python-ideas] [RFC] draft PEP: Dedicated infix operators for matrix multiplication and matrix power

Robert Kern robert.kern at gmail.com
Fri Mar 14 12:25:27 CET 2014


On 2014-03-14 10:16, M.-A. Lemburg wrote:

> I have some questions:
>
> 1. Since in math, the operator is usually spelt "·" (the center dot,
>     or "." but that's already reserved for methods and attributes in
>     Python), why not try to use that instead of "@" (which in Python
>     already identifies decorators) ?

I think the current feeling of the Python core team is against including 
non-ASCII characters in the language's keywords or operators. Even if that were 
not so, I would still recommend against it because it would be quite difficult 
to type. I don't know off-hand the key combination to do it on my native system, 
and it would change from system to system.

> 2. The PEP should include a section on how other programming languages
>     solve this, i.e. what syntax they use for matrix multiplications.
>
> 3. Since matrix multiplication is only one type of product you find
>     in math, albeit a very frequently used one, how would those other
>     products fit into the picture ? Would then have to use methods
>     again ? E.g. think of cross product, inner product, outer/tensor
>     product.

Our experience is that these have come up much less regularly than matrix 
multiplication. The two products in common use in our code are the Hadamard 
product (elementwise multiplication, currently assigned to * in numpy) and 
matrix multiplication (currently done with the function numpy.dot()).
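To make the distinction concrete, here is how the two products look in numpy today (a minimal sketch with small example matrices):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[5.0, 6.0],
              [7.0, 8.0]])

hadamard = A * B        # elementwise (Hadamard) product
matmul = np.dot(A, B)   # matrix multiplication
```

The proposal is only about giving the second operation an infix spelling; `*` would keep its elementwise meaning.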

> 4. Another very common operation needed in vector/matrix calculation
>     is transposition. This is usually written as superscript "T" or "t"
>     ("ᵀ" in Unicode). Wouldn't this operator be needed as well, to
>     make the picture complete ? OTOH, we currently don't have postfix
>     operators in Python, so I guess writing this as A.transpose()
>     comes close enough ;-)

Indeed. Numpy already uses a .T property for this.
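For readers unfamiliar with it, `.T` is just an attribute-style spelling of transpose:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])

print(A.shape)    # (2, 3)
print(A.T.shape)  # (3, 2)
```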

> Now since this is all about syntactic sugar, we also need to look at
> some code examples:
>
>
> I == A @@ -1 @ A
> vs.
> I == A ·· -1 · A
> vs.
> I == A.inverse().dot(A)
>
>
> (A @ B).transpose() == A.transpose() @ B.transpose()
> vs.
> (A · B).transpose() == A.transpose() · B.transpose()
> vs.
> A.dot(B).transpose() == A.transpose().dot(B.transpose())

(A @ B).T == B.T @ A.T
(A · B).T == B.T · A.T
A.dot(B).T == B.T.dot(A.T)

(FWIW, I didn't notice the math error until I wrote out the @ version.)
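The corrected identity, transpose of a product reverses the factors, is easy to check numerically with the `.dot` spelling (a sketch using random matrices; `default_rng` assumes a modern numpy):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 2))

# (A B)^T == B^T A^T, checked numerically
lhs = A.dot(B).T
rhs = B.T.dot(A.T)
assert np.allclose(lhs, rhs)
```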

> c = A @ v
> vs.
> c = A · v
> vs.
> c = A.dot(v)
>
>
> Hmm, even though I'd love to see matrix operators in Python,
> I don't think they really add clarity to the syntax of matrix
> calculations -  a bit disappointing, I must say :-(

Some more from real code:

RSR = R.dot(var_beta.dot(R.T))
RSR = R @ var_beta @ R.T

xx_inv.dot(xeps.dot(xx_inv))
xx_inv @ xeps @ xx_inv

dF2lower_dper.dot(F2lower.T) + F2lower.dot(dF2lower_dper.T) - 4/period*F2lower.dot(F2lower.T)
dF2lower_dper @ F2lower.T + F2lower @ dF2lower_dper.T - 4/period*(F2lower @ F2lower.T)

dFX_dper.dot(Gi.dot(FX2.T)) - FX.dot(Gi.dot(dG_dper.dot(Gi.dot(FX2.T)))) + FX.dot(Gi.dot(dFX2_dper.T))
(dFX_dper @ Gi @ FX2.T) - (FX @ Gi @ dG_dper @ Gi @ FX2.T) + (FX @ Gi @ dFX2_dper.T)

torient_inv.dot(tdof).dot(torient).dot(self.vertices[parent].meta['key'])
torient_inv @ tdof @ torient @ self.vertices[parent].meta['key']
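Since `@` was ultimately adopted (Python 3.5+), the equivalence of the two spellings in the first example above can be verified directly; the shapes here are illustrative stand-ins for the real data:

```python
import numpy as np

rng = np.random.default_rng(1)
R = rng.standard_normal((2, 3))
var_beta = rng.standard_normal((3, 3))

# dotted spelling vs. infix spelling; equal up to floating-point rounding
RSR_dot = R.dot(var_beta.dot(R.T))
RSR_at = R @ var_beta @ R.T
assert np.allclose(RSR_dot, RSR_at)
```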

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
  that is made terrible by our own mad attempt to interpret it as though it had
  an underlying truth."
   -- Umberto Eco
