Gram matrix

In linear algebra, the Gram matrix (or Gramian matrix, Gramian) of a set of vectors $v_1, \dots, v_n$ in an inner product space is the Hermitian matrix of inner products, whose entries are given by the inner product $G_{ij} = \langle v_i, v_j \rangle$.[1] If the vectors $v_1, \dots, v_n$ are the columns of a matrix $X$, then the Gram matrix is $X^\dagger X$ in the general case that the vector coordinates are complex numbers, which simplifies to $X^\top X$ for the case that the vector coordinates are real numbers.

An important application is to compute linear independence: a set of vectors is linearly independent if and only if the Gram determinant (the determinant of the Gram matrix) is non-zero.

It is named after Jørgen Pedersen Gram.

Examples

For finite-dimensional real vectors in $\mathbb{R}^n$ with the usual Euclidean dot product, the Gram matrix is $G = V^\top V$, where $V$ is a matrix whose columns are the vectors $v_k$ and $V^\top$ is its transpose whose rows are the vectors $v_k^\top$. For complex vectors in $\mathbb{C}^n$, $G = V^\dagger V$, where $V^\dagger$ is the conjugate transpose of $V$.

Given square-integrable functions $\{\ell_i(\cdot),\ i = 1, \dots, n\}$ on the interval $[t_0, t_f]$, the Gram matrix $G = [G_{ij}]$ is:

$$G_{ij} = \int_{t_0}^{t_f} \ell_i^*(\tau)\,\ell_j(\tau)\, d\tau,$$

where $\ell_i^*(\tau)$ is the complex conjugate of $\ell_i(\tau)$.

For any bilinear form $B$ on a finite-dimensional vector space over any field we can define a Gram matrix $G$ attached to a set of vectors $v_1, \dots, v_n$ by $G_{ij} = B(v_i, v_j)$. The matrix will be symmetric if the bilinear form $B$ is symmetric.
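
As a concrete illustration of the definitions above, here is a minimal NumPy sketch (the vectors are illustrative) computing the Gram matrix in both the real and complex cases:

```python
import numpy as np

# Columns of X are the vectors v_1, ..., v_n.
X_real = np.array([[1.0, 2.0],
                   [0.0, 1.0],
                   [1.0, 0.0]])              # two real vectors in R^3
G_real = X_real.T @ X_real                   # G_ij = v_i . v_j
print(G_real)                                # a symmetric 2x2 matrix

X_cplx = np.array([[1.0 + 1.0j, 2.0],
                   [0.0,        1.0j]])      # two complex vectors in C^2
G_cplx = X_cplx.conj().T @ X_cplx            # G = X^dagger X
print(np.allclose(G_cplx, G_cplx.conj().T))  # True: G is Hermitian
```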

Applications

  • In Riemannian geometry, given an embedded $k$-dimensional Riemannian manifold $M \subset \mathbb{R}^n$ and a parametrization $\phi: U \to M$ for $(x_1, \dots, x_k) \in U \subseteq \mathbb{R}^k$, the volume form $\omega$ on $M$ induced by the embedding may be computed using the Gramian of the coordinate tangent vectors: $\omega = \sqrt{\det G}\ dx_1 \cdots dx_k$, where $G = \left[\left\langle \tfrac{\partial\phi}{\partial x_i}, \tfrac{\partial\phi}{\partial x_j} \right\rangle\right]$. This generalizes the classical surface integral of a parametrized surface $\phi: U \to S \subset \mathbb{R}^3$ for $(x, y) \in U \subseteq \mathbb{R}^2$: $\int_S f\, dA = \iint_U f(\phi(x, y)) \left|\tfrac{\partial\phi}{\partial x} \times \tfrac{\partial\phi}{\partial y}\right| dx\, dy.$
  • If the vectors are centered random variables, the Gramian is approximately proportional to the covariance matrix, with the scaling determined by the number of elements in the vector.
  • In quantum chemistry, the Gram matrix of a set of basis vectors is the overlap matrix.
  • In control theory (or more generally systems theory), the controllability Gramian and observability Gramian determine properties of a linear system.
  • Gramian matrices arise in covariance structure model fitting (see e.g., Jamshidian and Bentler, 1993, Applied Psychological Measurement, Volume 18, pp. 79–94).
  • In the finite element method, the Gram matrix arises from approximating a function from a finite dimensional space; the Gram matrix entries are then the inner products of the basis functions of the finite dimensional subspace.
  • In machine learning, kernel functions are often represented as Gram matrices, as in the sketch following this list.[2] (See also kernel PCA.)
  • Since the Gram matrix over the reals is a symmetric matrix, it is diagonalizable and its eigenvalues are non-negative. The eigendecomposition of the Gram matrix $X^\top X$ is closely tied to the singular value decomposition of $X$: its eigenvectors are the right singular vectors of $X$, and its eigenvalues are the squares of the singular values of $X$.
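
A minimal sketch of a kernel Gram matrix, assuming a Gaussian (RBF) kernel $k(x, x') = \exp(-\gamma \lVert x - x' \rVert^2)$; the helper name rbf_gram, the data, and the bandwidth are illustrative, not from the source:

```python
import numpy as np

def rbf_gram(X, gamma=1.0):
    """Gram matrix K with K_ij = exp(-gamma * ||x_i - x_j||^2).

    Rows of X are the data points x_1, ..., x_n.
    """
    sq_norms = np.sum(X**2, axis=1)
    sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2 * X @ X.T
    return np.exp(-gamma * sq_dists)

X = np.random.default_rng(0).normal(size=(5, 3))  # five points in R^3
K = rbf_gram(X, gamma=0.5)
print(np.all(np.linalg.eigvalsh(K) >= -1e-12))    # True: K is positive semidefinite (up to round-off)
```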

Properties

Positive-semidefiniteness

The Gram matrix is symmetric when the inner product is real-valued; it is Hermitian in the general, complex case by definition of an inner product.

The Gram matrix is positive semidefinite, and every positive semidefinite matrix is the Gramian matrix for some set of vectors. The fact that the Gramian matrix is positive-semidefinite can be seen from the following simple derivation:

$$x^\dagger G x = \sum_{i,j} x_i^* x_j \left\langle v_i, v_j \right\rangle = \left\langle \sum_i x_i v_i, \sum_j x_j v_j \right\rangle = \left\| \sum_i x_i v_i \right\|^2 \geq 0.$$

The first equality follows from the definition of matrix multiplication, the second and third from the bilinearity of the inner product, and the last from the positive definiteness of the inner product. Note that this also shows that the Gramian matrix is positive definite if and only if the vectors $v_i$ are linearly independent (that is, $\sum_i x_i v_i \neq 0$ for all $x \neq 0$).[1]
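
The equivalence between positive definiteness and linear independence can be spot-checked numerically; a minimal sketch with illustrative vectors:

```python
import numpy as np

# Columns are the vectors; the third is the sum of the first two (dependent).
V_dep = np.array([[1.0, 0.0, 1.0],
                  [0.0, 1.0, 1.0]])
G_dep = V_dep.T @ V_dep
print(np.isclose(np.linalg.det(G_dep), 0.0))    # True: Gram determinant vanishes

# Linearly independent columns give a positive definite Gram matrix.
V_indep = np.array([[1.0, 1.0],
                    [0.0, 1.0],
                    [0.0, 1.0]])
print(np.linalg.eigvalsh(V_indep.T @ V_indep))  # both eigenvalues strictly positive
```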

Finding a vector realization

Given any positive semidefinite matrix $M$, one can decompose it as:

$$M = B^\dagger B,$$

where $B^\dagger$ is the conjugate transpose of $B$ (or $M = B^\top B$ in the real case).

Here $B$ is a $k \times n$ matrix, where $k$ is the rank of $M$. Various ways to obtain such a decomposition include computing the Cholesky decomposition or taking the non-negative square root of $M$.

The columns $b^{(1)}, \dots, b^{(n)}$ of $B$ can be seen as $n$ vectors in $\mathbb{C}^k$ (or $k$-dimensional Euclidean space $\mathbb{R}^k$, in the real case). Then

$$M_{ij} = \bigl\langle b^{(i)}, b^{(j)} \bigr\rangle,$$

where $\langle a, b \rangle = \sum_{\ell=1}^k a_\ell^*\, b_\ell$ is the usual inner product on $\mathbb{C}^k$.

Thus a Hermitian matrix $M$ is positive semidefinite if and only if it is the Gram matrix of some vectors $b^{(1)}, \dots, b^{(n)}$. Such vectors are called a vector realization of $M$. The infinite-dimensional analog of this statement is Mercer's theorem.
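
Below is a minimal sketch of one way to recover a vector realization, via the eigendecomposition (for a positive definite $M$, a Cholesky factorization would serve equally well); the matrix $M$ is illustrative:

```python
import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])                 # positive semidefinite (here, in fact, definite)

# Eigendecomposition M = Q diag(w) Q^T with w >= 0, so take B = diag(sqrt(w)) Q^T.
w, Q = np.linalg.eigh(M)
w = np.clip(w, 0.0, None)                  # guard against tiny negative round-off
B = np.sqrt(w)[:, None] * Q.T              # columns of B are a vector realization of M

print(np.allclose(B.T @ B, M))             # True: M_ij = b_i . b_j
```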

Uniqueness of vector realizations

If $M$ is the Gram matrix of vectors $v_1, \dots, v_n$ in $\mathbb{R}^k$ then applying any rotation or reflection of $\mathbb{R}^k$ (any orthogonal transformation, that is, any Euclidean isometry preserving 0) to the sequence of vectors results in the same Gram matrix. That is, for any $k \times k$ orthogonal matrix $Q$, the Gram matrix of $Q v_1, \dots, Q v_n$ is also $M$.

This is the only way in which two real vector realizations of $M$ can differ: the vectors $v_1, \dots, v_n$ are unique up to orthogonal transformations. In other words, the dot products $v_i \cdot v_j$ and $w_i \cdot w_j$ are equal if and only if some rigid transformation of $\mathbb{R}^k$ transforms the vectors $v_1, \dots, v_n$ to $w_1, \dots, w_n$ and 0 to 0.

The same holds in the complex case, with unitary transformations in place of orthogonal ones. That is, if the Gram matrix of vectors $v_1, \dots, v_n$ is equal to the Gram matrix of vectors $w_1, \dots, w_n$ in $\mathbb{C}^k$ then there is a unitary $k \times k$ matrix $U$ (meaning $U^\dagger U = I$) such that $v_i = U w_i$ for $i = 1, \dots, n$.[3]
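
The invariance under orthogonal transformations can be spot-checked numerically; in this minimal sketch the vectors and the orthogonal matrix are randomly generated for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
V = rng.normal(size=(3, 4))                # columns: four vectors in R^3

# Build a random 3x3 orthogonal matrix from a QR factorization.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))

G_before = V.T @ V
G_after = (Q @ V).T @ (Q @ V)
print(np.allclose(G_before, G_after))      # True: the Gram matrix is unchanged
```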

Other properties

  • Because $G = G^\dagger$, it is necessarily the case that $G$ and $G^\dagger$ commute. That is, a real or complex Gram matrix $G$ is also a normal matrix.
  • The Gram matrix of any orthonormal basis is the identity matrix. Equivalently, the Gram matrix of the rows or the columns of a real rotation matrix is the identity matrix. Likewise, the Gram matrix of the rows or columns of a unitary matrix is the identity matrix.
  • The rank of the Gram matrix of vectors in $\mathbb{R}^k$ or $\mathbb{C}^k$ equals the dimension of the space spanned by these vectors.[1]

Gram determinant

The Gram determinant or Gramian is the determinant of the Gram matrix:

$$|G(v_1, \dots, v_n)| = \begin{vmatrix} \langle v_1, v_1 \rangle & \langle v_1, v_2 \rangle & \cdots & \langle v_1, v_n \rangle \\ \langle v_2, v_1 \rangle & \langle v_2, v_2 \rangle & \cdots & \langle v_2, v_n \rangle \\ \vdots & \vdots & \ddots & \vdots \\ \langle v_n, v_1 \rangle & \langle v_n, v_2 \rangle & \cdots & \langle v_n, v_n \rangle \end{vmatrix}.$$

If $v_1, \dots, v_n$ are vectors in $\mathbb{R}^m$ then it is the square of the $n$-dimensional volume of the parallelotope formed by the vectors. In particular, the vectors are linearly independent if and only if the parallelotope has nonzero $n$-dimensional volume, if and only if the Gram determinant is nonzero, if and only if the Gram matrix is nonsingular. When $n > m$ the determinant and volume are zero. When $n = m$, this reduces to the standard theorem that the absolute value of the determinant of $n$ $n$-dimensional vectors is the $n$-dimensional volume. The Gram determinant is also useful for computing the volume of the simplex formed by the vectors; its volume is Volume(parallelotope) / $n!$.
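
As a concrete instance of the volume interpretation, a minimal sketch with illustrative vectors:

```python
import math
import numpy as np

V = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [0.0, 0.0]])                 # columns v_1, v_2 span a parallelogram in R^3

G = V.T @ V
area = math.sqrt(np.linalg.det(G))         # square root of the Gram determinant
print(area)                                # 2.0: area of the parallelogram

print(area / math.factorial(2))            # 1.0: area of the triangle (simplex) on v_1, v_2
```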

The Gram determinant can also be expressed in terms of the exterior product of vectors by

$$|G(v_1, \dots, v_n)| = \| v_1 \wedge \cdots \wedge v_n \|^2.$$

When the vectors $v_1, \ldots, v_n \in \mathbb{R}^m$ are defined from the positions of points $p_1, \ldots, p_n$ relative to some reference point $p_{n+1}$,

$$(v_1, v_2, \ldots, v_n) = (p_1 - p_{n+1},\ p_2 - p_{n+1},\ \ldots,\ p_n - p_{n+1}),$$

then the Gram determinant can be written as the difference of two Gram determinants,

$$|G(v_1, \dots, v_n)| = |G((p_1, 1), \dots, (p_{n+1}, 1))| - |G(p_1, \dots, p_{n+1})|,$$

where each $(p_j, 1)$ is the corresponding point $p_j$ supplemented with the coordinate value of 1 for an $(m+1)$-st dimension.[citation needed] Note that in the common case that $n = m$, the second term on the right-hand side will be zero.
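
A numerical spot-check of this identity; in this minimal sketch the points are randomly generated, with $n = 2$ vectors built from three points in $\mathbb{R}^3$, so that $n < m$ and the second term need not vanish:

```python
import numpy as np

def gram_det(V):
    """Gram determinant of the columns of V."""
    return np.linalg.det(V.T @ V)

rng = np.random.default_rng(2)
p = rng.normal(size=(3, 3))                # columns: points p_1, p_2, p_3 in R^3
V = p[:, :2] - p[:, [2]]                   # v_i = p_i - p_3

p_lifted = np.vstack([p, np.ones(3)])      # append a coordinate equal to 1 to each point
lhs = gram_det(V)
rhs = gram_det(p_lifted) - gram_det(p)
print(np.allclose(lhs, rhs))               # True
```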

Constructing an orthonormal basis

Given a set of linearly independent vectors $\{v_i\}$ with Gram matrix $G$ defined by $G_{ij} := \langle v_i, v_j \rangle$, one can construct an orthonormal basis

$$u_i := \sum_j \bigl(G^{-1/2}\bigr)_{ji}\, v_j.$$

In matrix notation, $U = V G^{-1/2}$, where $U$ has orthonormal basis vectors $\{u_i\}$ and the matrix $V$ is composed of the given column vectors $\{v_i\}$.

The matrix $G^{-1/2}$ is guaranteed to exist. Indeed, $G$ is Hermitian, and so can be decomposed as $G = W D W^\dagger$ with $W$ a unitary matrix and $D$ a real diagonal matrix. Additionally, the $v_i$ are linearly independent if and only if $G$ is positive definite, which implies that the diagonal entries of $D$ are positive. $G^{-1/2}$ is therefore uniquely defined by $G^{-1/2} := W D^{-1/2} W^\dagger$. One can check that these new vectors are orthonormal:

$$\langle u_i, u_j \rangle = \sum_k \sum_\ell \bigl(G^{-1/2}\bigr)^*_{ki} \bigl(G^{-1/2}\bigr)_{\ell j} \langle v_k, v_\ell \rangle = \bigl(G^{-1/2}\, G\, G^{-1/2}\bigr)_{ij} = \delta_{ij},$$

where we used $\bigl(G^{-1/2}\bigr)^\dagger = G^{-1/2}$.
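
A minimal NumPy sketch of this construction, sometimes known as symmetric (Löwdin) orthogonalization; the input vectors are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
V = rng.normal(size=(4, 3))                # columns: three independent vectors in R^4

G = V.T @ V                                # Gram matrix
w, W = np.linalg.eigh(G)                   # G = W diag(w) W^T, with w > 0 for independent columns
G_inv_sqrt = W @ np.diag(w**-0.5) @ W.T    # inverse of the unique positive square root of G

U = V @ G_inv_sqrt                         # columns are the orthonormal u_i
print(np.allclose(U.T @ U, np.eye(3)))     # True: the new vectors are orthonormal
```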

References

  1. ^ a b c Horn & Johnson (2013), p. 441, Theorem 7.2.10
  2. ^ Lanckriet, G. R. G.; Cristianini, N.; Bartlett, P.; Ghaoui, L. E.; Jordan, M. I. (2004). "Learning the kernel matrix with semidefinite programming". Journal of Machine Learning Research. 5: 27–72 [p. 29].
  3. ^ Horn & Johnson (2013), p. 452, Theorem 7.3.11