In linear algebra, a multilinear map is a function of several variables that is linear separately in each variable. More precisely, a multilinear map is a function

$$f\colon V_1 \times \cdots \times V_n \to W,$$

where $V_1, \ldots, V_n$ (with $n \in \mathbb{Z}_{\geq 0}$) and $W$ are vector spaces (or modules over a commutative ring), with the following property: for each $i$, if all of the variables but $v_i$ are held constant, then $f(v_1, \ldots, v_i, \ldots, v_n)$ is a linear function of $v_i$.[1] One way to visualize this is to imagine two orthogonal vectors: if one of them is scaled by a factor of 2 while the other is left unchanged, their cross product likewise scales by a factor of 2; if both are scaled by 2, the cross product scales by a factor of $2^2 = 4$.
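This scaling behavior is easy to check numerically. The sketch below uses a small hand-rolled cross product on $\mathbb{R}^3$; the particular vectors are arbitrary choices for illustration.

```python
# The cross product on R^3 is bilinear; a tiny hand-rolled implementation
# keeps this self-contained.
def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def scale(c, u):
    return tuple(c * x for x in u)

u, v = (1.0, 0.0, 0.0), (0.0, 2.0, 1.0)

# Scaling one argument by 2 scales the result by 2 ...
assert cross(scale(2, u), v) == scale(2, cross(u, v))
# ... while scaling both arguments by 2 scales the result by 2^2 = 4.
assert cross(scale(2, u), scale(2, v)) == scale(4, cross(u, v))
```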
A multilinear map of one variable is a linear map, and of two variables is a bilinear map. More generally, for any nonnegative integer $k$, a multilinear map of $k$ variables is called a $k$-linear map. If the codomain of a multilinear map is the field of scalars, it is called a multilinear form. Multilinear maps and multilinear forms are fundamental objects of study in multilinear algebra.

If all variables belong to the same space, one can consider symmetric, antisymmetric and alternating $k$-linear maps. The latter two coincide if the underlying ring (or field) has characteristic different from two; otherwise the former two coincide.

Any bilinear map is a multilinear map. For example, any inner product on an $\mathbb{R}$-vector space is a multilinear map, as is the cross product of vectors in $\mathbb{R}^3$.

The determinant of a matrix is an alternating multilinear function of the columns (or rows) of a square matrix.
If $F\colon \mathbb{R}^m \to \mathbb{R}^n$ is a $C^k$ function, then the $k$th derivative of $F$ at each point $p$ in its domain can be viewed as a symmetric $k$-linear function $D^k F\colon \mathbb{R}^m \times \cdots \times \mathbb{R}^m \to \mathbb{R}^n$.
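As a concrete, illustrative instance with $k = 2$: for $F(x, y) = x^2 y$, the second derivative at a point $p$ acts as the symmetric bilinear map $(u, v) \mapsto u^{\mathsf{T}} H(p) \, v$, where $H(p)$ is the Hessian. The function and test points below are arbitrary choices, not from the article.

```python
# For F(x, y) = x^2 * y, the Hessian is H = [[2y, 2x], [2x, 0]], and the
# second derivative acts as the symmetric bilinear map u^T H(p) v.
def hessian(p):
    x, y = p
    return [[2 * y, 2 * x],
            [2 * x, 0.0]]   # d2F/dx2 = 2y, d2F/dxdy = 2x, d2F/dy2 = 0

def d2F(p, u, v):
    H = hessian(p)
    return sum(u[i] * H[i][j] * v[j] for i in range(2) for j in range(2))

p, u, v = (1.0, 3.0), (2.0, -1.0), (0.5, 4.0)
# Symmetry: the order of the two vector arguments does not matter.
assert d2F(p, u, v) == d2F(p, v, u)
# Linearity in the first argument (with the second held fixed).
w = (1.0, 1.0)
uw = tuple(2 * a + b for a, b in zip(u, w))
assert abs(d2F(p, uw, v) - (2 * d2F(p, u, v) + d2F(p, w, v))) < 1e-12
```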
Coordinate representation
Let

$$f\colon V_1 \times \cdots \times V_n \to W$$

be a multilinear map between finite-dimensional vector spaces, where $V_i$ has dimension $d_i$ and $W$ has dimension $d$. If we choose a basis $\{\mathbf{e}_{i1}, \ldots, \mathbf{e}_{id_i}\}$ for each $V_i$ and a basis $\{\mathbf{b}_1, \ldots, \mathbf{b}_d\}$ for $W$ (using bold for vectors), then we can define a collection of scalars $A_{j_1 \cdots j_n}^k$ by

$$f(\mathbf{e}_{1j_1}, \ldots, \mathbf{e}_{nj_n}) = A_{j_1 \cdots j_n}^1 \, \mathbf{b}_1 + \cdots + A_{j_1 \cdots j_n}^d \, \mathbf{b}_d.$$

Then the scalars $\{A_{j_1 \cdots j_n}^k \mid 1 \le j_i \le d_i, \, 1 \le k \le d\}$ completely determine the multilinear function $f$. In particular, if

$$\mathbf{v}_i = \sum_{j=1}^{d_i} v_{ij} \mathbf{e}_{ij}$$

for $1 \le i \le n$, then

$$f(\mathbf{v}_1, \ldots, \mathbf{v}_n) = \sum_{j_1=1}^{d_1} \cdots \sum_{j_n=1}^{d_n} \sum_{k=1}^{d} A_{j_1 \cdots j_n}^k \, v_{1j_1} \cdots v_{nj_n} \, \mathbf{b}_k.$$
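This coordinate formula translates directly into a short evaluator: store the scalars $A_{j_1 \cdots j_n}^k$ keyed by $(j_1, \ldots, j_n, k)$ (zero-based here) and sum over all index combinations. The dictionary representation and the names below are illustrative choices, not a standard API.

```python
from itertools import product

def evaluate(A, vectors, d):
    """Evaluate a multilinear map from its coefficients.

    A maps (j_1, ..., j_n, k) -> scalar (zero-based indices); vectors is a
    list of n coordinate tuples; d is the dimension of the codomain W.
    """
    dims = [len(v) for v in vectors]
    w = [0.0] * d                      # coordinates of the result in basis b_k
    for js in product(*(range(di) for di in dims)):
        coeff = 1.0                    # v_{1 j_1} * ... * v_{n j_n}
        for i, j in enumerate(js):
            coeff *= vectors[i][j]
        for k in range(d):
            w[k] += A[js + (k,)] * coeff
    return w

# Example: the standard dot product on R^2, a bilinear form (n = 2, d = 1).
A = {(j1, j2, 0): 1.0 if j1 == j2 else 0.0
     for j1 in range(2) for j2 in range(2)}
assert evaluate(A, [(1.0, 2.0), (3.0, 4.0)], 1) == [11.0]  # 1*3 + 2*4
```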
As an example, take a trilinear function

$$g\colon \mathbb{R}^2 \times \mathbb{R}^2 \times \mathbb{R}^2 \to \mathbb{R},$$

where $V_i = \mathbb{R}^2$ and $d_i = 2$ for $i = 1, 2, 3$, and $W = \mathbb{R}$, $d = 1$.

A basis for each $V_i$ is $\{\mathbf{e}_{i1}, \ldots, \mathbf{e}_{id_i}\} = \{\mathbf{e}_1, \mathbf{e}_2\} = \{(1,0), (0,1)\}$. Let

$$g(\mathbf{e}_{1i}, \mathbf{e}_{2j}, \mathbf{e}_{3k}) = g(\mathbf{e}_i, \mathbf{e}_j, \mathbf{e}_k) = A_{ijk},$$

where $i, j, k \in \{1, 2\}$. In other words, the constant $A_{ijk}$ is the function value at one of the eight possible triples of basis vectors (since there are two choices for each of the three $V_i$), namely:

$$\{\mathbf{e}_1, \mathbf{e}_1, \mathbf{e}_1\}, \{\mathbf{e}_1, \mathbf{e}_1, \mathbf{e}_2\}, \{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_1\}, \{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_2\}, \{\mathbf{e}_2, \mathbf{e}_1, \mathbf{e}_1\}, \{\mathbf{e}_2, \mathbf{e}_1, \mathbf{e}_2\}, \{\mathbf{e}_2, \mathbf{e}_2, \mathbf{e}_1\}, \{\mathbf{e}_2, \mathbf{e}_2, \mathbf{e}_2\}.$$
Each vector $\mathbf{v}_i \in V_i = \mathbb{R}^2$ can be expressed as a linear combination of the basis vectors

$$\mathbf{v}_i = \sum_{j=1}^{2} v_{ij} \mathbf{e}_{ij} = v_{i1} \, \mathbf{e}_1 + v_{i2} \, \mathbf{e}_2 = v_{i1} \, (1,0) + v_{i2} \, (0,1).$$
The function value at an arbitrary collection of three vectors $\mathbf{v}_i \in \mathbb{R}^2$ can be expressed as

$$g(\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3) = \sum_{i=1}^{2} \sum_{j=1}^{2} \sum_{k=1}^{2} A_{ijk} \, v_{1i} v_{2j} v_{3k},$$

or in expanded form as

$$\begin{aligned} g((a,b), (c,d), (e,f)) = {}& ace \, g(\mathbf{e}_1, \mathbf{e}_1, \mathbf{e}_1) + acf \, g(\mathbf{e}_1, \mathbf{e}_1, \mathbf{e}_2) \\ &+ ade \, g(\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_1) + adf \, g(\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_2) + bce \, g(\mathbf{e}_2, \mathbf{e}_1, \mathbf{e}_1) + bcf \, g(\mathbf{e}_2, \mathbf{e}_1, \mathbf{e}_2) \\ &+ bde \, g(\mathbf{e}_2, \mathbf{e}_2, \mathbf{e}_1) + bdf \, g(\mathbf{e}_2, \mathbf{e}_2, \mathbf{e}_2). \end{aligned}$$
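A quick numerical sanity check of this expansion; the coefficient values $A_{ijk}$ below are an arbitrary choice.

```python
# Arbitrary coefficients A[i][j][k] = 4i + 2j + k + 1 (zero-based indices).
A = [[[float(4 * i + 2 * j + k + 1) for k in range(2)] for j in range(2)]
     for i in range(2)]

def g(v1, v2, v3):
    # Evaluate g by the triple sum over basis indices.
    return sum(A[i][j][k] * v1[i] * v2[j] * v3[k]
               for i in range(2) for j in range(2) for k in range(2))

(a, b), (c, d), (e, f) = (1.0, 2.0), (3.0, 4.0), (5.0, 6.0)
expanded = (a*c*e * A[0][0][0] + a*c*f * A[0][0][1]
            + a*d*e * A[0][1][0] + a*d*f * A[0][1][1]
            + b*c*e * A[1][0][0] + b*c*f * A[1][0][1]
            + b*d*e * A[1][1][0] + b*d*f * A[1][1][1])
assert g((a, b), (c, d), (e, f)) == expanded
```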
Relation to tensor products
There is a natural one-to-one correspondence between multilinear maps

$$f\colon V_1 \times \cdots \times V_n \to W$$

and linear maps

$$F\colon V_1 \otimes \cdots \otimes V_n \to W,$$

where $V_1 \otimes \cdots \otimes V_n$ denotes the tensor product of $V_1, \ldots, V_n$. The relation between the functions $f$ and $F$ is given by the formula

$$f(v_1, \ldots, v_n) = F(v_1 \otimes \cdots \otimes v_n).$$
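A minimal sketch of this correspondence for $n = 2$, with $V_1 = V_2 = \mathbb{R}^2$ and $W = \mathbb{R}$, representing $u \otimes v$ as the outer-product array; the names and coefficient matrix are illustrative assumptions.

```python
M = [[1.0, 2.0], [3.0, 4.0]]       # shared coefficients of f and F

def f(u, v):                       # bilinear map R^2 x R^2 -> R
    return sum(M[i][j] * u[i] * v[j] for i in range(2) for j in range(2))

def outer(u, v):                   # u ⊗ v, represented as a 2x2 array
    return [[ui * vj for vj in v] for ui in u]

def F(T):                          # linear map on R^2 ⊗ R^2 ≅ R^{2x2}
    return sum(M[i][j] * T[i][j] for i in range(2) for j in range(2))

u, v = (1.0, 2.0), (3.0, 5.0)
assert f(u, v) == F(outer(u, v))   # f(v1, v2) = F(v1 ⊗ v2)
```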
Multilinear functions on n×n matrices
One can consider multilinear functions on an $n \times n$ matrix over a commutative ring $K$ with identity as a function of the rows (or equivalently the columns) of the matrix. Let $A$ be such a matrix and let $a_i$, $1 \le i \le n$, be the rows of $A$. Then the multilinear function $D$ can be written as

$$D(A) = D(a_1, \ldots, a_n),$$

satisfying

$$D(a_1, \ldots, c a_i + a_i', \ldots, a_n) = c \, D(a_1, \ldots, a_i, \ldots, a_n) + D(a_1, \ldots, a_i', \ldots, a_n).$$
If we let $\hat{e}_j$ represent the $j$th row of the identity matrix, we can express each row $a_i$ as the sum

$$a_i = \sum_{j=1}^{n} A(i,j) \, \hat{e}_j.$$
Using the multilinearity of $D$ we rewrite $D(A)$ as

$$D(A) = D\left(\sum_{j=1}^{n} A(1,j) \, \hat{e}_j, a_2, \ldots, a_n\right) = \sum_{j=1}^{n} A(1,j) \, D(\hat{e}_j, a_2, \ldots, a_n).$$
Continuing this substitution for each $a_i$ ($1 \le i \le n$) we get

$$D(A) = \sum_{1 \le k_1 \le n} \cdots \sum_{1 \le k_n \le n} A(1, k_1) A(2, k_2) \cdots A(n, k_n) \, D(\hat{e}_{k_1}, \ldots, \hat{e}_{k_n}).$$

Therefore, $D(A)$ is uniquely determined by how $D$ operates on $\hat{e}_{k_1}, \ldots, \hat{e}_{k_n}$.
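If, in addition, $D$ is alternating with $D(I) = 1$, then $D(\hat{e}_{k_1}, \ldots, \hat{e}_{k_n})$ equals the sign of the permutation $(k_1, \ldots, k_n)$ (and $0$ when any index repeats), so the expansion above reduces to the Leibniz formula for the determinant. A sketch for general $n$, with an arbitrarily chosen test matrix:

```python
from itertools import permutations

def perm_sign(ks):
    # Sign of a permutation via its inversion count.
    inv = sum(1 for a in range(len(ks)) for b in range(a + 1, len(ks))
              if ks[a] > ks[b])
    return -1.0 if inv % 2 else 1.0

def D(A):
    # Sum over permutations only: index tuples with repeats contribute 0
    # because D is alternating.
    n = len(A)
    total = 0.0
    for ks in permutations(range(n)):
        term = perm_sign(ks)
        for i, k in enumerate(ks):
            term *= A[i][k]
        total += term
    return total

A = [[1.0, 2.0, 3.0],
     [0.0, 4.0, 5.0],
     [1.0, 0.0, 6.0]]
# det(A) = 1*(24 - 0) - 2*(0 - 5) + 3*(0 - 4) = 24 + 10 - 12 = 22
assert D(A) == 22.0
```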
In the case of $2 \times 2$ matrices, we get

$$D(A) = A_{1,1} A_{2,1} \, D(\hat{e}_1, \hat{e}_1) + A_{1,1} A_{2,2} \, D(\hat{e}_1, \hat{e}_2) + A_{1,2} A_{2,1} \, D(\hat{e}_2, \hat{e}_1) + A_{1,2} A_{2,2} \, D(\hat{e}_2, \hat{e}_2),$$

where $\hat{e}_1 = [1, 0]$ and $\hat{e}_2 = [0, 1]$. If we restrict $D$ to be an alternating function, then $D(\hat{e}_1, \hat{e}_1) = D(\hat{e}_2, \hat{e}_2) = 0$ and $D(\hat{e}_2, \hat{e}_1) = -D(\hat{e}_1, \hat{e}_2) = -D(I)$. Letting $D(I) = 1$, we get the determinant function on $2 \times 2$ matrices:

$$D(A) = A_{1,1} A_{2,2} - A_{1,2} A_{2,1}.$$
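The $2 \times 2$ case can be checked directly by tabulating $D$ on pairs of basis rows (an alternating $D$ with $D(I) = 1$) and expanding as above; the test matrix is an arbitrary example.

```python
# D on pairs of standard basis rows: the sign of the permutation, 0 on repeats.
D_basis = {(0, 0): 0.0, (0, 1): 1.0, (1, 0): -1.0, (1, 1): 0.0}

def D(A):
    # Expand D(A) over rows of the identity using multilinearity.
    return sum(A[0][k1] * A[1][k2] * D_basis[(k1, k2)]
               for k1 in range(2) for k2 in range(2))

A = [[2.0, 3.0], [4.0, 7.0]]
assert D(A) == A[0][0] * A[1][1] - A[0][1] * A[1][0]  # 2*7 - 3*4 = 2
```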
Properties

A multilinear map has a value of zero whenever one of its arguments is zero.