In mathematics, the Khatri–Rao product or block Kronecker product of two partitioned matrices {\displaystyle \mathbf {A} } and {\displaystyle \mathbf {B} } is defined as[1][2][3]
{\displaystyle \mathbf {A} \ast \mathbf {B} =\left(\mathbf {A} _{ij}\otimes \mathbf {B} _{ij}\right)_{ij}}
in which the ij-th block is the {\displaystyle m_{i}p_{i}\times n_{j}q_{j}} sized Kronecker product of the corresponding blocks of A and B, assuming the number of row and column partitions of both matrices is equal. The size of the product is then {\displaystyle \left(\textstyle \sum _{i}m_{i}p_{i}\right)\times \left(\textstyle \sum _{j}n_{j}q_{j}\right)} .
For example, if A and B are both 2 × 2 partitioned matrices, e.g.:
{\displaystyle \mathbf {A} =\left[{\begin{array}{c | c}\mathbf {A} _{11}&\mathbf {A} _{12}\\\hline \mathbf {A} _{21}&\mathbf {A} _{22}\end{array}}\right]=\left[{\begin{array}{c c | c}1&2&3\\4&5&6\\\hline 7&8&9\end{array}}\right],\quad \mathbf {B} =\left[{\begin{array}{c | c}\mathbf {B} _{11}&\mathbf {B} _{12}\\\hline \mathbf {B} _{21}&\mathbf {B} _{22}\end{array}}\right]=\left[{\begin{array}{c | c c}1&4&7\\\hline 2&5&8\\3&6&9\end{array}}\right],}
we obtain:
{\displaystyle \mathbf {A} \ast \mathbf {B} =\left[{\begin{array}{c | c}\mathbf {A} _{11}\otimes \mathbf {B} _{11}&\mathbf {A} _{12}\otimes \mathbf {B} _{12}\\\hline \mathbf {A} _{21}\otimes \mathbf {B} _{21}&\mathbf {A} _{22}\otimes \mathbf {B} _{22}\end{array}}\right]=\left[{\begin{array}{c c | c c}1&2&12&21\\4&5&24&42\\\hline 14&16&45&72\\21&24&54&81\end{array}}\right].}
This is a submatrix of the Tracy–Singh product[4] of the two matrices (each partition in this example is a partition in a corner of the Tracy–Singh product).
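The worked example above can be reproduced numerically. Below is a minimal NumPy sketch (the explicit block variables are ours), assembling the blockwise Kronecker products with np.block:

```python
import numpy as np

# the blocks of A and B as partitioned above
A11, A12 = np.array([[1, 2], [4, 5]]), np.array([[3], [6]])
A21, A22 = np.array([[7, 8]]), np.array([[9]])
B11, B12 = np.array([[1]]), np.array([[4, 7]])
B21, B22 = np.array([[2], [3]]), np.array([[5, 8], [6, 9]])

# Khatri–Rao product: Kronecker product of corresponding blocks,
# reassembled into one block matrix
AB = np.block([[np.kron(A11, B11), np.kron(A12, B12)],
               [np.kron(A21, B21), np.kron(A22, B22)]])
print(AB)  # [[ 1  2 12 21] [ 4  5 24 42] [14 16 45 72] [21 24 54 81]]
```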
Column-wise Kronecker product
The column-wise Kronecker product of two matrices is a special case of the Khatri–Rao product as defined above, and may also be called the Khatri–Rao product. This product assumes that the partitions of the matrices are their columns. In this case {\displaystyle m_{1}=m} , {\displaystyle p_{1}=p} , {\displaystyle n=q} and, for each j, {\displaystyle n_{j}=q_{j}=1} . The resulting product is an {\displaystyle mp\times n} matrix in which each column is the Kronecker product of the corresponding columns of A and B. Using the matrices from the previous examples with the columns partitioned:
{\displaystyle \mathbf {C} =\left[{\begin{array}{c | c | c}\mathbf {C} _{1}&\mathbf {C} _{2}&\mathbf {C} _{3}\end{array}}\right]=\left[{\begin{array}{c | c | c}1&2&3\\4&5&6\\7&8&9\end{array}}\right],\quad \mathbf {D} =\left[{\begin{array}{c | c | c }\mathbf {D} _{1}&\mathbf {D} _{2}&\mathbf {D} _{3}\end{array}}\right]=\left[{\begin{array}{c | c | c }1&4&7\\2&5&8\\3&6&9\end{array}}\right],}
so that:
{\displaystyle \mathbf {C} \ast \mathbf {D} =\left[{\begin{array}{c | c | c }\mathbf {C} _{1}\otimes \mathbf {D} _{1}&\mathbf {C} _{2}\otimes \mathbf {D} _{2}&\mathbf {C} _{3}\otimes \mathbf {D} _{3}\end{array}}\right]=\left[{\begin{array}{c | c | c }1&8&21\\2&10&24\\3&12&27\\4&20&42\\8&25&48\\12&30&54\\7&32&63\\14&40&72\\21&48&81\end{array}}\right].}
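In code, the column-wise product is a Kronecker product of matching columns. A minimal NumPy sketch (the helper name khatri_rao is ours; SciPy provides an equivalent scipy.linalg.khatri_rao):

```python
import numpy as np

def khatri_rao(C, D):
    """Column-wise Khatri–Rao product: column k is C[:, k] ⊗ D[:, k]."""
    if C.shape[1] != D.shape[1]:
        raise ValueError("C and D need the same number of columns")
    # einsum forms all products C[i, k] * D[j, k]; the reshape stacks
    # the (i, j) index pairs into rows, giving an (mp) x n matrix
    return np.einsum("ik,jk->ijk", C, D).reshape(-1, C.shape[1])

C = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
D = np.array([[1, 4, 7], [2, 5, 8], [3, 6, 9]])
print(khatri_rao(C, D))  # reproduces the 9 x 3 matrix above
```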
This column-wise version of the Khatri–Rao product is useful in linear-algebra approaches to data analytical processing[5] and in optimizing the solution of inverse problems dealing with a diagonal matrix.[6][7]
In 1996 the column-wise Khatri–Rao product was proposed to estimate the angles of arrival (AOAs) and delays of multipath signals[8] and four coordinates of signal sources[9] at a digital antenna array.
Face-splitting product
[Figure: face-splitting product of matrices]
An alternative concept of the matrix product, based on row-wise splitting of matrices, was proposed by V. Slyusar[10] in 1996.[9][11][12][13][14]
This matrix operation was named the "face-splitting product" of matrices[11][13] or the "transposed Khatri–Rao product". It is based on row-by-row Kronecker products of two matrices. Using the matrices from the previous examples with the rows partitioned:
{\displaystyle \mathbf {C} ={\begin{bmatrix}\mathbf {C} _{1}\\\hline \mathbf {C} _{2}\\\hline \mathbf {C} _{3}\\\end{bmatrix}}={\begin{bmatrix}1&2&3\\\hline 4&5&6\\\hline 7&8&9\end{bmatrix}},\quad \mathbf {D} ={\begin{bmatrix}\mathbf {D} _{1}\\\hline \mathbf {D} _{2}\\\hline \mathbf {D} _{3}\\\end{bmatrix}}={\begin{bmatrix}1&4&7\\\hline 2&5&8\\\hline 3&6&9\end{bmatrix}},}
the result is:[9][11][13]
{\displaystyle \mathbf {C} \bullet \mathbf {D} ={\begin{bmatrix}\mathbf {C} _{1}\otimes \mathbf {D} _{1}\\\hline \mathbf {C} _{2}\otimes \mathbf {D} _{2}\\\hline \mathbf {C} _{3}\otimes \mathbf {D} _{3}\\\end{bmatrix}}={\begin{bmatrix}1&4&7&2&8&14&3&12&21\\\hline 8&20&32&10&25&40&12&30&48\\\hline 21&42&63&24&48&72&27&54&81\end{bmatrix}}.}
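A matching NumPy sketch for the row-wise case (face_splitting is our name for the helper):

```python
import numpy as np

def face_splitting(C, D):
    """Face-splitting product: row i is C[i, :] ⊗ D[i, :]."""
    if C.shape[0] != D.shape[0]:
        raise ValueError("C and D need the same number of rows")
    # einsum forms all products C[i, j] * D[i, k]; the reshape stacks
    # the (j, k) index pairs into columns, one Kronecker row per input row
    return np.einsum("ij,ik->ijk", C, D).reshape(C.shape[0], -1)

C = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
D = np.array([[1, 4, 7], [2, 5, 8], [3, 6, 9]])
print(face_splitting(C, D))  # reproduces the 3 x 9 matrix above
```

Note that face_splitting(C, D).T equals khatri_rao(C.T, D.T), which is exactly the transpose identity stated next.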
Transpose (V. Slyusar, 1996[9][11][12]):
{\displaystyle \left(\mathbf {A} \bullet \mathbf {B} \right)^{\textsf {T}}=\mathbf {A} ^{\textsf {T}}\ast \mathbf {B} ^{\textsf {T}}} ,
Bilinearity and associativity:[9][11][12]
{\displaystyle {\begin{aligned}\mathbf {A} \bullet (\mathbf {B} +\mathbf {C} )&=\mathbf {A} \bullet \mathbf {B} +\mathbf {A} \bullet \mathbf {C} ,\\(\mathbf {B} +\mathbf {C} )\bullet \mathbf {A} &=\mathbf {B} \bullet \mathbf {A} +\mathbf {C} \bullet \mathbf {A} ,\\(k\mathbf {A} )\bullet \mathbf {B} &=\mathbf {A} \bullet (k\mathbf {B} )=k(\mathbf {A} \bullet \mathbf {B} ),\\(\mathbf {A} \bullet \mathbf {B} )\bullet \mathbf {C} &=\mathbf {A} \bullet (\mathbf {B} \bullet \mathbf {C} ),\\\end{aligned}}}
where A, B and C are matrices, and k is a scalar,
{\displaystyle a\bullet \mathbf {B} =\mathbf {B} \bullet a} ,[12]
where {\displaystyle a} is a vector,
The mixed-product property (V. Slyusar, 1997[12]):
{\displaystyle (\mathbf {A} \bullet \mathbf {B} )\left(\mathbf {A} ^{\textsf {T}}\ast \mathbf {B} ^{\textsf {T}}\right)=\left(\mathbf {A} \mathbf {A} ^{\textsf {T}}\right)\circ \left(\mathbf {B} \mathbf {B} ^{\textsf {T}}\right)} ,
{\displaystyle (\mathbf {A} \bullet \mathbf {B} )(\mathbf {C} \ast \mathbf {D} )=(\mathbf {A} \mathbf {C} )\circ (\mathbf {B} \mathbf {D} )} ,[13]
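Identities like these are straightforward to check numerically. A small sketch of the second one, reusing the khatri_rao and face_splitting helpers defined above (the shapes are arbitrary choices):

```python
import numpy as np  # khatri_rao and face_splitting as defined above

rng = np.random.default_rng(0)
A, B = rng.random((4, 3)), rng.random((4, 5))
C, D = rng.random((3, 6)), rng.random((5, 6))

lhs = face_splitting(A, B) @ khatri_rao(C, D)  # (A ∙ B)(C ∗ D)
rhs = (A @ C) * (B @ D)                        # (AC) ∘ (BD), Hadamard product
assert np.allclose(lhs, rhs)
```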
{\displaystyle (\mathbf {A} \bullet \mathbf {B} \bullet \mathbf {C} \bullet \mathbf {D} )(\mathbf {L} \ast \mathbf {M} \ast \mathbf {N} \ast \mathbf {P} )=(\mathbf {A} \mathbf {L} )\circ (\mathbf {B} \mathbf {M} )\circ (\mathbf {C} \mathbf {N} )\circ (\mathbf {D} \mathbf {P} )} [15]
{\displaystyle (\mathbf {A} \ast \mathbf {B} )^{\textsf {T}}(\mathbf {A} \ast \mathbf {B} )=\left(\mathbf {A} ^{\textsf {T}}\mathbf {A} \right)\circ \left(\mathbf {B} ^{\textsf {T}}\mathbf {B} \right)} ,[16]
where {\displaystyle \circ } denotes the Hadamard product,
{\displaystyle (\mathbf {A} \circ \mathbf {B} )\bullet (\mathbf {C} \circ \mathbf {D} )=(\mathbf {A} \bullet \mathbf {C} )\circ (\mathbf {B} \bullet \mathbf {D} )} ,[12]
{\displaystyle a\otimes (\mathbf {B} \bullet \mathbf {C} )=(a\otimes \mathbf {B} )\bullet \mathbf {C} } ,[9]
where {\displaystyle a} is a row vector,
{\displaystyle (\mathbf {A} \otimes \mathbf {B} )(\mathbf {C} \ast \mathbf {D} )=(\mathbf {A} \mathbf {C} )\ast (\mathbf {B} \mathbf {D} )} ,[16]
{\displaystyle (\mathbf {A} \otimes \mathbf {B} )\ast (\mathbf {C} \otimes \mathbf {D} )=\mathbf {P} [(\mathbf {A} \ast \mathbf {C} )\otimes (\mathbf {B} \ast \mathbf {D} )]} ,
where {\displaystyle \mathbf {P} } is a permutation matrix,[7]
{\displaystyle (\mathbf {A} \bullet \mathbf {B} )(\mathbf {C} \otimes \mathbf {D} )=(\mathbf {A} \mathbf {C} )\bullet (\mathbf {B} \mathbf {D} )} ,[13][15]
Similarly:
{\displaystyle (\mathbf {A} \bullet \mathbf {L} )(\mathbf {B} \otimes \mathbf {M} )\cdots (\mathbf {C} \otimes \mathbf {S} )=(\mathbf {A} \mathbf {B} \cdots \mathbf {C} )\bullet (\mathbf {L} \mathbf {M} \cdots \mathbf {S} )} ,
{\displaystyle c^{\textsf {T}}\bullet d^{\textsf {T}}=c^{\textsf {T}}\otimes d^{\textsf {T}}} ,[12]
{\displaystyle c\ast d=c\otimes d} ,
where {\displaystyle c} and {\displaystyle d} are vectors,
{\displaystyle \left(\mathbf {A} \ast c^{\textsf {T}}\right)d=\left(\mathbf {A} \ast d^{\textsf {T}}\right)c} ,[17]
{\displaystyle d^{\textsf {T}}\left(c\bullet \mathbf {A} ^{\textsf {T}}\right)=c^{\textsf {T}}\left(d\bullet \mathbf {A} ^{\textsf {T}}\right)} ,
{\displaystyle (\mathbf {A} \bullet \mathbf {B} )(c\otimes d)=(\mathbf {A} c)\circ (\mathbf {B} d)} ,[18]
where {\displaystyle c} and {\displaystyle d} are vectors (this combines properties 3 and 8),
Similarly:
{\displaystyle (\mathbf {A} \bullet \mathbf {B} )(\mathbf {M} \mathbf {N} c\otimes \mathbf {Q} \mathbf {P} d)=(\mathbf {A} \mathbf {M} \mathbf {N} c)\circ (\mathbf {B} \mathbf {Q} \mathbf {P} d),}
{\displaystyle {\mathcal {F}}\left((C^{(1)}x)\star (C^{(2)}y)\right)=\left(({\mathcal {F}}C^{(1)})\bullet ({\mathcal {F}}C^{(2)})\right)(x\otimes y)=({\mathcal {F}}C^{(1)}x)\circ ({\mathcal {F}}C^{(2)}y)} ,
where {\displaystyle \star } is vector convolution; {\displaystyle C^{(1)},C^{(2)}} are "count sketch" matrices; and {\displaystyle {\mathcal {F}}} is the Fourier transform matrix (this result follows from the properties of the count sketch[19]).
This can be generalized for appropriate matrices {\displaystyle \mathbf {A} ,\mathbf {B} } :
{\displaystyle {\mathcal {F}}\left((\mathbf {A} x)\star (\mathbf {B} y)\right)=\left(({\mathcal {F}}\mathbf {A} )\bullet ({\mathcal {F}}\mathbf {B} )\right)(x\otimes y)=({\mathcal {F}}\mathbf {A} x)\circ ({\mathcal {F}}\mathbf {B} y)}
because property 11 above gives us
{\displaystyle \left(({\mathcal {F}}\mathbf {A} )\bullet ({\mathcal {F}}\mathbf {B} )\right)(x\otimes y)=({\mathcal {F}}\mathbf {A} x)\circ ({\mathcal {F}}\mathbf {B} y)}
And the convolution theorem gives us
{\displaystyle {\mathcal {F}}\left((\mathbf {A} x)\star (\mathbf {B} y)\right)=({\mathcal {F}}\mathbf {A} x)\circ ({\mathcal {F}}\mathbf {B} y)} .
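This chain can be checked numerically as well; a sketch (the DFT-matrix construction and all sizes are our choices, with face_splitting from above):

```python
import numpy as np  # face_splitting as defined above

n = 4
rng = np.random.default_rng(1)
A, B = rng.random((n, 3)), rng.random((n, 2))
x, y = rng.random(3), rng.random(2)

F = np.fft.fft(np.eye(n))  # DFT matrix, so F @ v == np.fft.fft(v)
lhs = face_splitting(F @ A, F @ B) @ np.kron(x, y)  # ((FA) ∙ (FB))(x ⊗ y)
rhs = (F @ A @ x) * (F @ B @ y)                     # (FAx) ∘ (FBy)
assert np.allclose(lhs, rhs)

# convolution theorem: rhs is the DFT of the circular convolution (Ax) ⋆ (By)
ax, by = A @ x, B @ y
conv = np.array([sum(ax[p] * by[(i - p) % n] for p in range(n)) for i in range(n)])
assert np.allclose(np.fft.fft(conv), rhs)
```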
{\displaystyle \mathbf {A} \bullet \mathbf {B} =\left(\mathbf {A} \otimes \mathbf {1_{k}} ^{\textsf {T}}\right)\circ \left(\mathbf {1_{c}} ^{\textsf {T}}\otimes \mathbf {B} \right)} ,[20]
where {\displaystyle \mathbf {A} } is an {\displaystyle r\times c} matrix, {\displaystyle \mathbf {B} } is an {\displaystyle r\times k} matrix, {\displaystyle \mathbf {1_{c}} } is a vector of 1's of length {\displaystyle c} , and {\displaystyle \mathbf {1_{k}} } is a vector of 1's of length {\displaystyle k} , or
{\displaystyle \mathbf {M} \bullet \mathbf {M} =\left(\mathbf {M} \otimes \mathbf {1} ^{\textsf {T}}\right)\circ \left(\mathbf {1} ^{\textsf {T}}\otimes \mathbf {M} \right)} ,[21]
where {\displaystyle \mathbf {M} } is an {\displaystyle r\times c} matrix, {\displaystyle \circ } denotes element-by-element multiplication, and {\displaystyle \mathbf {1} } is a vector of 1's of length {\displaystyle c} .
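Numerically, with the face_splitting helper from above (shapes are arbitrary choices):

```python
import numpy as np  # face_splitting as defined above

rng = np.random.default_rng(2)
A, B = rng.random((4, 3)), rng.random((4, 5))  # r=4, c=3, k=5

lhs = face_splitting(A, B)                                       # A ∙ B
rhs = np.kron(A, np.ones((1, 5))) * np.kron(np.ones((1, 3)), B)  # (A ⊗ 1_k^T) ∘ (1_c^T ⊗ B)
assert np.allclose(lhs, rhs)
```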
{\displaystyle \mathbf {M} \bullet \mathbf {M} =\mathbf {M} [\circ ]\left(\mathbf {M} \otimes \mathbf {1} ^{\textsf {T}}\right)} ,
where {\displaystyle [\circ ]} denotes the penetrating face product of matrices.[13]
Similarly:
{\displaystyle \mathbf {P} \ast \mathbf {N} =(\mathbf {P} \otimes \mathbf {1_{k}} )\circ (\mathbf {1_{c}} \otimes \mathbf {N} )} , where {\displaystyle \mathbf {P} } is a {\displaystyle c\times r} matrix and {\displaystyle \mathbf {N} } is a {\displaystyle k\times r} matrix.
{\displaystyle \mathbf {W_{d}} \mathbf {A} =\mathbf {w} \bullet \mathbf {A} } ,[12]
{\displaystyle \operatorname {vec} ((\mathbf {w} ^{\textsf {T}}\ast \mathbf {A} )\mathbf {B} )=(\mathbf {B} ^{\textsf {T}}\ast \mathbf {A} )\mathbf {w} =\operatorname {vec} (\mathbf {A} (\mathbf {w} \bullet \mathbf {B} ))=\operatorname {vec} (\mathbf {A} \mathbf {W_{d}} \mathbf {B} )} ,[13]
{\displaystyle \operatorname {vec} \left(\mathbf {A} ^{\textsf {T}}\mathbf {W_{d}} \mathbf {A} \right)=\left(\mathbf {A} \bullet \mathbf {A} \right)^{\textsf {T}}\mathbf {w} } ,[21]
where {\displaystyle \mathbf {w} } is the vector consisting of the diagonal elements of {\displaystyle \mathbf {W_{d}} } , and {\displaystyle \operatorname {vec} (\mathbf {A} )} means stacking the columns of a matrix {\displaystyle \mathbf {A} } on top of each other to give a vector.
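A sketch verifying both vec identities with the helpers above (vec is realized as a column-major reshape, since it stacks columns):

```python
import numpy as np  # khatri_rao and face_splitting as defined above

rng = np.random.default_rng(3)
A, B, w = rng.random((3, 4)), rng.random((4, 5)), rng.random(4)
Wd = np.diag(w)

# vec((w^T ∗ A)B) = (B^T ∗ A)w = vec(A Wd B)
lhs = (A @ Wd @ B).reshape(-1, order="F")  # vec(): stack columns
rhs = khatri_rao(B.T, A) @ w
assert np.allclose(lhs, rhs)

# vec(A^T Wd A) = (A ∙ A)^T w, here with a 4 x 3 matrix M in place of A
M = rng.random((4, 3))
assert np.allclose((M.T @ Wd @ M).reshape(-1, order="F"), face_splitting(M, M).T @ w)
```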
{\displaystyle (\mathbf {A} \bullet \mathbf {L} )(\mathbf {B} \otimes \mathbf {M} )\cdots (\mathbf {C} \otimes \mathbf {S} )(\mathbf {K} \ast \mathbf {T} )=(\mathbf {A} \mathbf {B} \cdots \mathbf {C} \mathbf {K} )\circ (\mathbf {L} \mathbf {M} \cdots \mathbf {S} \mathbf {T} )} .[13][15]
Similarly:
{\displaystyle {\begin{aligned}(\mathbf {A} \bullet \mathbf {L} )(\mathbf {B} \otimes \mathbf {M} )\cdots (\mathbf {C} \otimes \mathbf {S} )(c\otimes d)&=(\mathbf {A} \mathbf {B} \cdots \mathbf {C} c)\circ (\mathbf {L} \mathbf {M} \cdots \mathbf {S} d),\\(\mathbf {A} \bullet \mathbf {L} )(\mathbf {B} \otimes \mathbf {M} )\cdots (\mathbf {C} \otimes \mathbf {S} )(\mathbf {P} c\otimes \mathbf {Q} d)&=(\mathbf {A} \mathbf {B} \cdots \mathbf {C} \mathbf {P} c)\circ (\mathbf {L} \mathbf {M} \cdots \mathbf {S} \mathbf {Q} d)\end{aligned}}} ,
where {\displaystyle c} and {\displaystyle d} are vectors.
Source:[18]
{\displaystyle {\begin{aligned}&\left({\begin{bmatrix}1&0\\0&1\\1&0\end{bmatrix}}\bullet {\begin{bmatrix}1&0\\1&0\\0&1\end{bmatrix}}\right)\left({\begin{bmatrix}1&1\\1&-1\end{bmatrix}}\otimes {\begin{bmatrix}1&1\\1&-1\end{bmatrix}}\right)\left({\begin{bmatrix}\sigma _{1}&0\\0&\sigma _{2}\\\end{bmatrix}}\otimes {\begin{bmatrix}\rho _{1}&0\\0&\rho _{2}\\\end{bmatrix}}\right)\left({\begin{bmatrix}x_{1}\\x_{2}\end{bmatrix}}\ast {\begin{bmatrix}y_{1}\\y_{2}\end{bmatrix}}\right)\\[5pt]{}={}&\left({\begin{bmatrix}1&0\\0&1\\1&0\end{bmatrix}}\bullet {\begin{bmatrix}1&0\\1&0\\0&1\end{bmatrix}}\right)\left({\begin{bmatrix}1&1\\1&-1\end{bmatrix}}{\begin{bmatrix}\sigma _{1}&0\\0&\sigma _{2}\\\end{bmatrix}}{\begin{bmatrix}x_{1}\\x_{2}\end{bmatrix}}\,\otimes \,{\begin{bmatrix}1&1\\1&-1\end{bmatrix}}{\begin{bmatrix}\rho _{1}&0\\0&\rho _{2}\\\end{bmatrix}}{\begin{bmatrix}y_{1}\\y_{2}\end{bmatrix}}\right)\\[5pt]{}={}&{\begin{bmatrix}1&0\\0&1\\1&0\end{bmatrix}}{\begin{bmatrix}1&1\\1&-1\end{bmatrix}}{\begin{bmatrix}\sigma _{1}&0\\0&\sigma _{2}\\\end{bmatrix}}{\begin{bmatrix}x_{1}\\x_{2}\end{bmatrix}}\,\circ \,{\begin{bmatrix}1&0\\1&0\\0&1\end{bmatrix}}{\begin{bmatrix}1&1\\1&-1\end{bmatrix}}{\begin{bmatrix}\rho _{1}&0\\0&\rho _{2}\\\end{bmatrix}}{\begin{bmatrix}y_{1}\\y_{2}\end{bmatrix}}.\end{aligned}}}
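With concrete numbers substituted for σ, ρ, x and y, the chain checks out numerically (a sketch using the helpers above):

```python
import numpy as np  # face_splitting as defined above

rng = np.random.default_rng(4)
S1 = np.array([[1., 0.], [0., 1.], [1., 0.]])  # left factor of the ∙-product
S2 = np.array([[1., 0.], [1., 0.], [0., 1.]])  # right factor
H = np.array([[1., 1.], [1., -1.]])
s, r = rng.random(2), rng.random(2)            # stand-ins for (σ1, σ2), (ρ1, ρ2)
x, y = rng.random(2), rng.random(2)

# for vectors, x ∗ y = x ⊗ y, so np.kron covers the last factor
lhs = face_splitting(S1, S2) @ np.kron(H, H) @ np.kron(np.diag(s), np.diag(r)) @ np.kron(x, y)
rhs = (S1 @ H @ np.diag(s) @ x) * (S2 @ H @ np.diag(r) @ y)
assert np.allclose(lhs, rhs)
```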
Source:[18]
If {\displaystyle M=T^{(1)}\bullet \dots \bullet T^{(c)}} , where {\displaystyle T^{(1)},\dots ,T^{(c)}} are independent components of a random matrix {\displaystyle T} with independent identically distributed rows {\displaystyle T_{1},\dots ,T_{m}\in \mathbb {R} ^{d}} , such that
{\displaystyle E\left[(T_{1}x)^{2}\right]=\left\|x\right\|_{2}^{2}} and {\displaystyle E\left[(T_{1}x)^{p}\right]^{\frac {1}{p}}\leq {\sqrt {ap}}\|x\|_{2}} ,
then for any vector {\displaystyle x}
{\displaystyle \left|\left\|Mx\right\|_{2}-\left\|x\right\|_{2}\right|<\varepsilon \left\|x\right\|_{2}}
with probability {\displaystyle 1-\delta } if the quantity of rows is
{\displaystyle m=(4a)^{2c}\varepsilon ^{-2}\log 1/\delta +(2ae)\varepsilon ^{-1}(\log 1/\delta )^{c}.}
In particular, if the entries of {\displaystyle T} are {\displaystyle \pm 1} , one can get
{\displaystyle m=O\left(\varepsilon ^{-2}\log 1/\delta +\varepsilon ^{-1}\left({\frac {1}{c}}\log 1/\delta \right)^{c}\right)}
which matches the Johnson–Lindenstrauss lemma of {\displaystyle m=O\left(\varepsilon ^{-2}\log 1/\delta \right)} when {\displaystyle \varepsilon } is small.
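An empirical illustration of the ±1 case (a sketch only; the 1/√m normalization and all sizes are our choices, with face_splitting from above):

```python
import numpy as np  # face_splitting as defined above

c, d, m = 2, 8, 4096
rng = np.random.default_rng(5)
T1 = rng.choice([-1.0, 1.0], size=(m, d))
T2 = rng.choice([-1.0, 1.0], size=(m, d))
M = face_splitting(T1, T2) / np.sqrt(m)  # rows are T1_i ⊗ T2_i; shape m x d^c

x = rng.random(d ** c)
rel_err = abs(np.linalg.norm(M @ x) - np.linalg.norm(x)) / np.linalg.norm(x)
print(rel_err)  # typically a few percent at this m
```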
Block face-splitting product
The face-splitting product and the block face-splitting product are used in the tensor-matrix theory of digital antenna arrays, as well as in a number of other applications.
^ Khatri, C. G.; Rao, C. R. (1968). "Solutions to some functional equations and their applications to characterization of probability distributions". Sankhya. 30: 167–180. Archived from the original (PDF) on 2010-10-23. Retrieved 2008-08-21.
^ Liu, Shuangzhe (1999). "Matrix Results on the Khatri–Rao and Tracy–Singh Products". Linear Algebra and Its Applications. 289 (1–3): 267–277. doi:10.1016/S0024-3795(98)10209-4.
^ Zhang, X.; Yang, Z.; Cao, C. (2002). "Inequalities involving Khatri–Rao products of positive semi-definite matrices". Applied Mathematics E-notes. 2: 117–124.
^ Liu, Shuangzhe; Trenkler, Götz (2008). "Hadamard, Khatri–Rao, Kronecker and other matrix products". International Journal of Information and Systems Sciences. 4 (1): 160–177.
^ See e.g. Macedo, H. D.; Oliveira, J. N. (2015). "A linear algebra approach to OLAP". Formal Aspects of Computing. 27 (2): 283–307.
^ Lev-Ari, Hanoch (2005). "Efficient Solution of Linear Matrix Equations with Application to Multistatic Antenna Array Processing" (PDF). Communications in Information & Systems. 5 (1): 123–130. doi:10.4310/CIS.2005.v5.n1.a5. ISSN 1526-7555.
^ a b Masiero, B.; Nascimento, V. H. (2017). "Revisiting the Kronecker Array Transform". IEEE Signal Processing Letters. 24 (5): 525–529. Bibcode:2017ISPL...24..525M. doi:10.1109/LSP.2017.2674969. ISSN 1070-9908. S2CID 14166014.
^ Vanderveen, M. C.; Ng, B. C.; Papadias, C. B.; Paulraj, A. (1996). "Joint angle and delay estimation (JADE) for signals in multipath environments". Conference Record of the Thirtieth Asilomar Conference on Signals, Systems and Computers. doi:10.1109/acssc.1996.599145.
^ a b c d e f g h Slyusar, V. I. (December 27, 1996). "End matrix products in radar applications" (PDF). Izvestiya VUZ: Radioelektronika. 41 (3): 71–75.
^ Esteve, Anna; Boj, Eva; Fortiana, Josep (2009). "Interaction Terms in Distance-Based Regression". Communications in Statistics – Theory and Methods. 38 (19): 3501. [1]
^ a b c d e Slyusar, V. I. (1997-05-20). "Analytical model of the digital antenna array on a basis of face-splitting matrix products" (PDF). Proc. ICATT-97, Kyiv: 108–109.
^ a b c d e f g h Slyusar, V. I. (1997-09-15). "New operations of matrices product for applications of radars" (PDF). Proc. Direct and Inverse Problems of Electromagnetic and Acoustic Wave Theory (DIPED-97), Lviv: 73–74.
^ a b c d e f g h i j Slyusar, V. I. (March 13, 1998). "A Family of Face Products of Matrices and its Properties" (PDF). Cybernetics and Systems Analysis. 35 (3): 379–384. doi:10.1007/BF02733426. S2CID 119661450.
^ Slyusar, V. I. (2003). "Generalized face-products of matrices in models of digital antenna arrays with nonidentical channels" (PDF). Radioelectronics and Communications Systems. 46 (10): 9–17.
^ a b c d e Slyusar, Vadym (April 1999). New Matrix Operations for DSP (Lecture). doi:10.13140/RG.2.2.31620.76164/1.
^ a b Rao, C. Radhakrishna (1970). "Estimation of Heteroscedastic Variances in Linear Models". Journal of the American Statistical Association. 65 (329): 161–172.
^ Kasiviswanathan, Shiva Prasad; et al. (2010). "The price of privately releasing contingency tables and the spectra of random matrices with correlated rows". Proceedings of the Forty-Second ACM Symposium on Theory of Computing.
^ a b c d Ahle, Thomas D.; Knudsen, Jakob Bæk Tejs (2019). "Almost Optimal Tensor Sketch". arXiv.
^ Pham, Ninh; Pagh, Rasmus (2013). "Fast and scalable polynomial kernels via explicit feature maps". SIGKDD International Conference on Knowledge Discovery and Data Mining. Association for Computing Machinery. doi:10.1145/2487575.2487591.
^ a b Eilers, Paul H. C.; Marx, Brian D. (2003). "Multivariate calibration with temperature interaction using two-dimensional penalized signal regression". Chemometrics and Intelligent Laboratory Systems. 66 (2): 159–174. doi:10.1016/S0169-7439(03)00029-7.
^ a b c Currie, I. D.; Durban, M.; Eilers, P. H. C. (2006). "Generalized linear array models with applications to multidimensional smoothing". Journal of the Royal Statistical Society. 68 (2): 259–280. doi:10.1111/j.1467-9868.2006.00543.x. S2CID 10261944.
^ Bischof, Bryan (2020). "Higher order co-occurrence tensors for hypergraphs via face-splitting". arXiv.
^ Martini, Johannes W. R.; Crossa, Jose; Toledo, Fernando H.; Cuevas, Jaime (2020). "On Hadamard and Kronecker products in covariance structures for genotype × environment interaction". Plant Genome. 13: e20033, p. 5. [2]
Rao, C. R.; Rao, M. Bhaskara (1998). Matrix Algebra and Its Applications to Statistics and Econometrics. World Scientific. p. 216.