python - Element-wise dot product of matrices and vectors


There are similar questions here, here, and here, but I don't understand how to apply them to my case precisely.

I have an array of matrices and an array of vectors, and I need their element-wise dot product. Illustration:

In [1]: matrix1 = np.eye(5)

In [2]: matrix2 = np.eye(5) * 5

In [3]: matrices = np.array((matrix1, matrix2))

In [4]: matrices
Out[4]:
array([[[ 1.,  0.,  0.,  0.,  0.],
        [ 0.,  1.,  0.,  0.,  0.],
        [ 0.,  0.,  1.,  0.,  0.],
        [ 0.,  0.,  0.,  1.,  0.],
        [ 0.,  0.,  0.,  0.,  1.]],

       [[ 5.,  0.,  0.,  0.,  0.],
        [ 0.,  5.,  0.,  0.,  0.],
        [ 0.,  0.,  5.,  0.,  0.],
        [ 0.,  0.,  0.,  5.,  0.],
        [ 0.,  0.,  0.,  0.,  5.]]])

In [5]: vectors = np.ones((5,2))

In [6]: vectors
Out[6]:
array([[ 1.,  1.],
       [ 1.,  1.],
       [ 1.,  1.],
       [ 1.,  1.],
       [ 1.,  1.]])

In [9]: np.array([m @ v for m, v in zip(matrices, vectors.T)]).T
Out[9]:
array([[ 1.,  5.],
       [ 1.,  5.],
       [ 1.,  5.],
       [ 1.,  5.],
       [ 1.,  5.]])

This last line is the desired output. Unfortunately it is inefficient: for instance, doing matrices @ vectors computes unwanted dot products due to broadcasting (if I understand well, it returns the first matrix dot the 2 vectors and the second matrix dot the 2 vectors), yet it is faster than the list comprehension.
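A quick sketch of what that broadcasting does, using the arrays from above (the indexing trick for pulling out the wanted pairs is my own addition, not part of the original question):

import numpy as np

matrices = np.array((np.eye(5), np.eye(5) * 5))  # shape (2, 5, 5)
vectors = np.ones((5, 2))                        # shape (5, 2)

# matmul broadcasts the single (5, 2) array against each stacked matrix,
# so every matrix is multiplied with every vector: shape (2, 5, 2).
full = matrices @ vectors

# Only the pairs (matrix i, vector i) are wanted; advanced indexing on
# the first and last axes picks out that "diagonal" and transposing
# gives the desired (5, 2) result.
paired = full[np.arange(2), :, np.arange(2)].T
# array([[ 1.,  5.],
#        [ 1.,  5.],
#        [ 1.,  5.],
#        [ 1.,  5.],
#        [ 1.,  5.]])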

I guess np.einsum or np.tensordot might be helpful here, but my attempts have failed:

In [30]: np.einsum("i,j", matrices, vectors)
ValueError: operand has more dimensions than subscripts given in
einstein sum, but no '...' ellipsis provided to broadcast the extra
dimensions.

In [34]: np.tensordot(matrices, vectors, axes=(0,1))
Out[34]:
array([[[ 6.,  6.,  6.,  6.,  6.],
        [ 0.,  0.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  0.,  0.]],

       [[ 0.,  0.,  0.,  0.,  0.],
        [ 6.,  6.,  6.,  6.,  6.],
        [ 0.,  0.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  0.,  0.]],

       [[ 0.,  0.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  0.,  0.],
        [ 6.,  6.,  6.,  6.,  6.],
        [ 0.,  0.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  0.,  0.]],

       [[ 0.,  0.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  0.,  0.],
        [ 6.,  6.,  6.,  6.,  6.],
        [ 0.,  0.,  0.,  0.,  0.]],

       [[ 0.,  0.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  0.,  0.],
        [ 6.,  6.,  6.,  6.,  6.]]])
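If I understand the error, np.einsum wants one subscript letter per axis of each operand: matrices is 3-D and vectors is 2-D, so "i,j" is too short. A spec with the right number of letters runs, but it contracts every matrix with every vector, the same over-computation as matrices @ vectors above (the "ijk,kl->ijl" spelling is just for illustration):

# One letter per axis: matrices is (i, j, k), vectors is (k, l).
# Sums over k only and keeps both stack axes: shape (2, 5, 2), i.e.
# all four matrix-vector products instead of the two wanted ones.
np.einsum("ijk,kl->ijl", matrices, vectors)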

NB: in my real-case scenario I use more complicated matrices than matrix1 and matrix2.

With np.einsum, you might use:

np.einsum("ijk,ki->ji", matrices, vectors)  #array([[ 1.,  5.], #       [ 1.,  5.], #       [ 1.,  5.], #       [ 1.,  5.], #       [ 1.,  5.]]) 
