In multilinear algebra, a reshaping of tensors is any bijection between the set of indices of an order-$M$ tensor and the set of indices of an order-$\ell$ tensor, where $\ell \le M$. The use of indices presupposes tensors in coordinate representation with respect to a basis. The coordinate representation of a tensor can be regarded as a multi-dimensional array, and a bijection from one set of indices to another therefore amounts to a rearrangement of the array elements into an array of a different shape. Such a rearrangement constitutes a particular kind of linear map between the vector space of order-$M$ tensors and the vector space of order-$\ell$ tensors.

Definition

Given a positive integer $M$, the notation $[M]$ refers to the set $\{1, 2, \ldots, M\}$ of the first $M$ positive integers.

For each integer $m$ with $1 \le m \le M$, where $M$ is a positive integer, let $V_m$ denote an $I_m$-dimensional vector space over a field $F$. Then there are vector space isomorphisms (linear maps)

$$V_1 \otimes V_2 \otimes \cdots \otimes V_M \cong F^{I_1} \otimes F^{I_2} \otimes \cdots \otimes F^{I_M} \cong F^{I_{\sigma(1)}} \otimes \cdots \otimes F^{I_{\sigma(M)}} \cong F^{I_{\sigma(1)} I_{\sigma(2)}} \otimes F^{I_{\sigma(3)}} \otimes \cdots \otimes F^{I_{\sigma(M)}} \cong \cdots \cong F^{I_1 I_2 \cdots I_M},$$

where $\sigma \in \mathfrak{S}_M$ is any permutation and $\mathfrak{S}_M$ is the symmetric group on $M$ elements. Via these (and other) vector space isomorphisms, a tensor can be interpreted in several ways as an order-$\ell$ tensor, where $\ell \le M$.
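For example, the following sketch in Python with NumPy (an illustration, not part of the article's formalism; the dimensions $I_1, I_2, I_3 = 2, 3, 4$ and the coefficient values are made up) interprets the same 24 coefficients as arrays of order 3, 2 and 1. The column-major option order="F" matches the Matlab/GNU Octave convention referred to below.

```python
import numpy as np

# The same 24 coefficients interpreted as tensors of different orders
# (example dimensions I1, I2, I3 = 2, 3, 4).
A3 = np.arange(24.0).reshape(2, 3, 4, order="F")  # order-3 array, F^{I1} (x) F^{I2} (x) F^{I3}
A2 = A3.reshape(6, 4, order="F")                  # order-2 array, F^{I1 I2} (x) F^{I3}
A1 = A3.reshape(24, order="F")                    # order-1 array (a vector), F^{I1 I2 I3}

# Each reshaping is a bijection on index sets, so no information is lost.
assert np.array_equal(A1.reshape(2, 3, 4, order="F"), A3)
```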

Coordinate representation

The first vector space isomorphism on the list above, $V_1 \otimes \cdots \otimes V_M \cong F^{I_1} \otimes \cdots \otimes F^{I_M}$, gives the coordinate representation of an abstract tensor. Assume that each of the $M$ vector spaces $V_m$ has a basis $\{v_1^{(m)}, v_2^{(m)}, \ldots, v_{I_m}^{(m)}\}$. The expression of a tensor with respect to this basis has the form

$$\mathcal{A} = \sum_{i_1=1}^{I_1} \cdots \sum_{i_M=1}^{I_M} a_{i_1 i_2 \cdots i_M} \, v_{i_1}^{(1)} \otimes v_{i_2}^{(2)} \otimes \cdots \otimes v_{i_M}^{(M)},$$

where the coefficients $a_{i_1 i_2 \cdots i_M}$ are elements of $F$. The coordinate representation of $\mathcal{A}$ is

$$\sum_{i_1=1}^{I_1} \cdots \sum_{i_M=1}^{I_M} a_{i_1 i_2 \cdots i_M} \, \mathbf{e}_{i_1}^{(1)} \otimes \mathbf{e}_{i_2}^{(2)} \otimes \cdots \otimes \mathbf{e}_{i_M}^{(M)},$$

where $\mathbf{e}_{i_m}^{(m)}$ is the $i_m$th standard basis vector of $F^{I_m}$. This can be regarded as an $M$-way array whose elements are the coefficients $a_{i_1 i_2 \cdots i_M}$.
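As a small illustration (a sketch in Python with NumPy; the vectors and dimensions are made-up example values), the coordinate representation of a simple tensor $v^{(1)} \otimes v^{(2)} \otimes v^{(3)}$ is the 3-way array whose entries are the products of the factors' coordinates:

```python
import numpy as np

# Coordinate vectors of three vectors with respect to chosen bases
# (hypothetical example values, with I1, I2, I3 = 2, 3, 2).
v1 = np.array([1.0, 2.0])
v2 = np.array([0.0, 1.0, -1.0])
v3 = np.array([3.0, 5.0])

# Coordinate representation of the simple tensor v1 (x) v2 (x) v3:
# the (i1, i2, i3) entry is v1[i1] * v2[i2] * v3[i3].
A = np.einsum("i,j,k->ijk", v1, v2, v3)  # 3-way array of shape (2, 3, 2)
```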

General flattenings

For any permutation $\sigma \in \mathfrak{S}_M$ there is a canonical isomorphism between the two tensor products of vector spaces $V_1 \otimes V_2 \otimes \cdots \otimes V_M$ and $V_{\sigma(1)} \otimes V_{\sigma(2)} \otimes \cdots \otimes V_{\sigma(M)}$. Parentheses are usually omitted from such products due to the natural isomorphism between $(V_1 \otimes V_2) \otimes V_3$ and $V_1 \otimes (V_2 \otimes V_3)$, but may, of course, be reintroduced to emphasize a particular grouping of factors. In the grouping

$$\bigl(V_{\sigma(1)} \otimes \cdots \otimes V_{\sigma(n_1)}\bigr) \otimes \bigl(V_{\sigma(n_1+1)} \otimes \cdots \otimes V_{\sigma(n_2)}\bigr) \otimes \cdots \otimes \bigl(V_{\sigma(n_{L-1}+1)} \otimes \cdots \otimes V_{\sigma(n_L)}\bigr),$$

there are $L$ groups with $n_\ell - n_{\ell-1}$ factors in the $\ell$th group (where $n_0 = 0$ and $n_L = M$).

Letting $\sigma_\ell = \bigl(\sigma(n_{\ell-1}+1), \sigma(n_{\ell-1}+2), \ldots, \sigma(n_\ell)\bigr)$ for each $\ell$ satisfying $1 \le \ell \le L$, a $(\sigma_1, \sigma_2, \ldots, \sigma_L)$-flattening of a tensor $\mathcal{A}$, denoted $\mathcal{A}_{(\sigma_1, \sigma_2, \ldots, \sigma_L)}$, is obtained by applying the two processes above within each of the $L$ groups of factors. That is, the coordinate representation of the $\ell$th group of factors is obtained using the isomorphism $V_{\sigma(n_{\ell-1}+1)} \otimes \cdots \otimes V_{\sigma(n_\ell)} \cong F^{I_{\sigma(n_{\ell-1}+1)}} \otimes \cdots \otimes F^{I_{\sigma(n_\ell)}}$, which requires specifying bases for all of the vector spaces $V_m$. The result is then vectorized using a bijection $\mu_\ell$ to obtain an element of $F^{J_\ell}$, where $J_\ell = \prod_{m=n_{\ell-1}+1}^{n_\ell} I_{\sigma(m)}$ is the product of the dimensions of the vector spaces in the $\ell$th group of factors. The result of applying these isomorphisms within each group of factors is an element of $F^{J_1} \otimes F^{J_2} \otimes \cdots \otimes F^{J_L}$, which is a tensor of order $L$.
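The following is a minimal sketch of such a flattening in Python with NumPy (the helper name general_flattening and the example grouping are illustrative, not from the article). Each group of modes is brought together by a permutation of the axes and then merged by a column-major vectorization, one standard choice for the bijections $\mu_\ell$:

```python
import numpy as np

def general_flattening(A, groups):
    """Sketch of a (sigma_1, ..., sigma_L)-flattening of an M-way array A.

    `groups` lists L tuples of 0-based mode indices; concatenated, they must
    form a permutation of range(A.ndim).  Each group is merged into a single
    mode using a column-major (Fortran-order) vectorization as its bijection.
    """
    perm = [m for group in groups for m in group]            # the permutation sigma
    A_perm = np.transpose(A, perm)                           # reorder the factors
    new_shape = [int(np.prod([A.shape[m] for m in group]))   # J_l: product of the
                 for group in groups]                        # dimensions in group l
    return A_perm.reshape(new_shape, order="F")

# Example: an order-4 tensor flattened to an order-2 tensor by grouping
# modes (0, 2) and (1, 3); the result has shape (2*4, 3*5) = (8, 15).
A = np.random.default_rng(0).random((2, 3, 4, 5))
B = general_flattening(A, [(0, 2), (1, 3)])
```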

Vectorization

By means of a bijective map $\mu : [I_1] \times \cdots \times [I_M] \to [I_1 I_2 \cdots I_M]$, a vector space isomorphism between $F^{I_1} \otimes \cdots \otimes F^{I_M}$ and $F^{I_1 I_2 \cdots I_M}$ is constructed via the mapping $\mathbf{e}_{i_1}^{(1)} \otimes \mathbf{e}_{i_2}^{(2)} \otimes \cdots \otimes \mathbf{e}_{i_M}^{(M)} \mapsto \mathbf{e}_{\mu(i_1, i_2, \ldots, i_M)}$, where for every natural number $i$ such that $1 \le i \le I_1 I_2 \cdots I_M$, the vector $\mathbf{e}_i$ denotes the $i$th standard basis vector of $F^{I_1 I_2 \cdots I_M}$. In such a reshaping, the tensor is simply interpreted as a vector in $F^{I_1 I_2 \cdots I_M}$. This is known as vectorization, and is analogous to vectorization of matrices. A standard choice of bijection $\mu$ is such that

$$\mu(i_1, i_2, \ldots, i_M) = 1 + \sum_{m=1}^{M} (i_m - 1) \prod_{k=1}^{m-1} I_k,$$

which is consistent with the way in which the colon operator in Matlab and GNU Octave reshapes a higher-order tensor into a vector. In general, the vectorization of $\mathcal{A} = \sum_{i_1=1}^{I_1} \cdots \sum_{i_M=1}^{I_M} a_{i_1 i_2 \cdots i_M} \, \mathbf{e}_{i_1}^{(1)} \otimes \cdots \otimes \mathbf{e}_{i_M}^{(M)}$ is the vector $\sum_{i_1=1}^{I_1} \cdots \sum_{i_M=1}^{I_M} a_{i_1 i_2 \cdots i_M} \, \mathbf{e}_{\mu(i_1, \ldots, i_M)}$.
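For instance, the following sketch (Python with NumPy; the array values and dimensions are arbitrary) realizes this choice of $\mu$: a column-major (Fortran-order) reshape reproduces the Matlab/Octave colon-operator ordering, and the loop checks it against the index formula above, rewritten with 0-based indices so that the leading $1$ and the $-1$ shifts disappear.

```python
import numpy as np

# An arbitrary order-3 coefficient array with dimensions I1, I2, I3 = 2, 3, 4.
A = np.arange(24.0).reshape(2, 3, 4)

# Column-major (Fortran-order) vectorization: the ordering produced by the
# Matlab / GNU Octave colon operator A(:), in which i_1 varies fastest.
vec_A = A.reshape(-1, order="F")

# Check against mu(i_1, ..., i_M) = 1 + sum_m (i_m - 1) * prod_{k<m} I_k,
# written here with 0-based indices.
dims = A.shape
for idx in np.ndindex(*dims):
    mu = sum(i * int(np.prod(dims[:m])) for m, i in enumerate(idx))
    assert vec_A[mu] == A[idx]
```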

The vectorization of $\mathcal{A}$, denoted $\operatorname{vec}(\mathcal{A})$ or $\mathbf{a}$, is a $(\sigma_1)$-reshaping in which $L = 1$ and $\sigma_1 = (1, 2, \ldots, M)$.

Mode-m Flattening / Mode-m Matrixization

Let $\mathcal{A} \in F^{I_1} \otimes F^{I_2} \otimes \cdots \otimes F^{I_M}$ be the coordinate representation of an abstract tensor with respect to a basis. The mode-$m$ matrixizing (a.k.a. flattening) of $\mathcal{A}$ is a $(\sigma_1, \sigma_2)$-reshaping in which $\sigma_1 = (m)$ and $\sigma_2 = (1, 2, \ldots, m-1, m+1, \ldots, M)$. Usually, a standard matrixizing is denoted by

$$\mathbf{A}_{[m]}.$$

This reshaping is sometimes called matrixizing, matricizing, flattening or unfolding in the literature. A standard choice for the bijections $\mu_1$ and $\mu_2$ is the one that is consistent with the reshape function in Matlab and GNU Octave, namely

$$\mu_2(i_1, \ldots, i_{m-1}, i_{m+1}, \ldots, i_M) = 1 + \sum_{\substack{k=1 \\ k \neq m}}^{M} (i_k - 1) \prod_{\substack{l=1 \\ l \neq m}}^{k-1} I_l.$$

Definition Mode-m Matrixizing:[1] The mode-$m$ matrixizing of a tensor $\mathcal{A} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_M}$ is defined as the matrix $\mathbf{A}_{[m]} \in \mathbb{R}^{I_m \times (I_1 \cdots I_{m-1} I_{m+1} \cdots I_M)}$. As the parenthetical ordering indicates, the mode-$m$ column vectors are arranged by sweeping all the other mode indices through their ranges, with smaller mode indices varying more rapidly than larger ones; thus

$$\bigl[\mathbf{A}_{[m]}\bigr]_{jk} = a_{i_1 \cdots i_M}, \quad \text{where } j = i_m \text{ and } k = 1 + \sum_{\substack{n=1 \\ n \neq m}}^{M} (i_n - 1) \prod_{\substack{l=1 \\ l \neq m}}^{n-1} I_l.$$
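A minimal sketch of this construction in Python with NumPy (the helper name mode_m_matrixize is illustrative; NumPy provides no such function directly): mode $m$ is moved to the front and the remaining modes are merged by a column-major reshape, so that smaller mode indices vary fastest along the columns, as in the definition above.

```python
import numpy as np

def mode_m_matrixize(A, m):
    """Sketch of the mode-m matrixizing (unfolding) of an M-way array A.

    Rows are indexed by mode m (0-based here); columns sweep the remaining
    modes in their natural order, with smaller mode indices varying fastest,
    matching the column-major convention described above.
    """
    A_front = np.moveaxis(A, m, 0)                     # bring mode m to the front
    return A_front.reshape(A.shape[m], -1, order="F")  # merge the other modes

# Example: a 2 x 3 x 4 array and its three mode unfoldings.
A = np.arange(24.0).reshape(2, 3, 4)
A0 = mode_m_matrixize(A, 0)  # shape (2, 12)
A1 = mode_m_matrixize(A, 1)  # shape (3, 8)
A2 = mode_m_matrixize(A, 2)  # shape (4, 6)
```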

References

  1. Vasilescu, M. Alex O. (2009), "Multilinear (Tensor) Algebraic Framework for Computer Graphics, Computer Vision and Machine Learning" (PDF), University of Toronto, p. 21.