Tensor rank decomposition

In multilinear algebra, the tensor rank decomposition[1] or rank-$R$ decomposition is the decomposition of a tensor as a sum of $R$ rank-1 tensors, where $R$ is minimal. Computing this decomposition, and even the rank $R$ itself, is NP-hard in general.[6]

Canonical polyadic decomposition (CPD) is a variant of the tensor rank decomposition in which the tensor is approximated as a sum of $K$ rank-1 tensors for a user-specified $K$. The CP decomposition has found applications in, among other fields, linguistics and chemometrics. It was introduced by Frank Lauren Hitchcock in 1927[2] and later rediscovered several times, notably in psychometrics.[3][4] The CP decomposition is also referred to as CANDECOMP,[3] PARAFAC,[4] or CANDECOMP/PARAFAC (CP). Note that the PARAFAC2 decomposition is a variation of the CP decomposition.[5]

Another popular generalization of the matrix SVD, known as the higher-order singular value decomposition, computes orthonormal mode matrices and has found applications in econometrics, signal processing, computer vision, computer graphics, and psychometrics.

Notation

A scalar variable is denoted by a lower case italic letter, $a$, and an upper bound scalar is denoted by an upper case italic letter, $A$.

Indices are denoted by a combination of lowercase and upper case italic letters, $1 \le i \le I$. Multiple indices that one might encounter when referring to the multiple modes of a tensor are conveniently denoted by $1 \le i_m \le I_m$, where $1 \le m \le M$.

A vector is denoted by a lower case bold letter, $\mathbf{a}$, and a matrix is denoted by a bold upper case letter, $\mathbf{A}$.

A higher order tensor is denoted by a calligraphic letter, $\mathcal{A}$. An element of an order-$M$ tensor $\mathcal{A} \in \mathbb{F}^{I_1 \times I_2 \times \cdots \times I_M}$ is denoted by $a_{i_1 i_2 \ldots i_M}$ or $\mathcal{A}_{i_1 i_2 \ldots i_M}$.

Definition

A data tensor $\mathcal{A} \in \mathbb{F}^{I_1 \times I_2 \times \cdots \times I_M}$ is a collection of multivariate observations organized into an $M$-way array, where $M = C + 1$. Every tensor may be represented with a suitably large $R$ as a linear combination of $R$ rank-1 tensors:

$\mathcal{A} = \sum_{r=1}^{R} \lambda_r \, \mathbf{a}^{(1)}_r \otimes \mathbf{a}^{(2)}_r \otimes \cdots \otimes \mathbf{a}^{(M)}_r,$

where $\lambda_r \in \mathbb{F}$ and $\mathbf{a}^{(m)}_r \in \mathbb{F}^{I_m}$ for $1 \le m \le M$. When the number of terms $R$ is minimal in the above expression, then $R$ is called the rank of the tensor, and the decomposition is often referred to as a (tensor) rank decomposition, minimal CP decomposition, or Canonical Polyadic Decomposition (CPD). If the number of terms is not minimal, then the above decomposition is often referred to as CANDECOMP/PARAFAC or simply a polyadic decomposition.
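
As a concrete illustration of this definition, the following Python/NumPy sketch assembles an order-3 tensor from $R = 2$ rank-1 terms; the dimensions, array names, and weights are illustrative choices, not part of the formal definition.

```python
import numpy as np

rng = np.random.default_rng(0)
I1, I2, I3, R = 4, 3, 5, 2

# Factor vectors a_r^(m), stored as the columns of one matrix per mode,
# and weights lambda_r.
A1 = rng.standard_normal((I1, R))
A2 = rng.standard_normal((I2, R))
A3 = rng.standard_normal((I3, R))
lam = rng.standard_normal(R)

# A = sum_r lambda_r * a_r^(1) (x) a_r^(2) (x) a_r^(3).
A = np.einsum('r,ir,jr,kr->ijk', lam, A1, A2, A3)

# Each summand is an outer product of vectors, i.e., a rank-1 tensor.
term0 = lam[0] * np.einsum('i,j,k->ijk', A1[:, 0], A2[:, 0], A3[:, 0])
assert A.shape == (I1, I2, I3)
```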

Tensor rank

Contrary to the case of matrices, computing the rank of a tensor is NP-hard.[6] The only notable well-understood case consists of tensors in $\mathbb{F}^{I_1} \otimes \mathbb{F}^{I_2} \otimes \mathbb{F}^{2}$, whose rank can be obtained from the Kronecker–Weierstrass normal form of the linear matrix pencil that the tensor represents.[7] A simple polynomial-time algorithm exists for certifying that a tensor is of rank 1, namely the higher-order singular value decomposition.

The rank of the zero tensor is zero by convention. The rank of a pure tensor $\mathbf{a}^{(1)} \otimes \mathbf{a}^{(2)} \otimes \cdots \otimes \mathbf{a}^{(M)}$ is one, provided that $\mathbf{a}^{(m)} \ne \mathbf{0}$ for all $m = 1, \ldots, M$.
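
The rank-1 certificate mentioned above admits a short computational sketch: a tensor has rank at most 1 exactly when every mode-$m$ flattening has matrix rank at most 1, and the flattening ranks can be read off from singular values, which is also the computation underlying the higher-order singular value decomposition. The tolerance and helper below are illustrative, not a hardened implementation.

```python
import numpy as np

def is_rank_at_most_one(T, tol=1e-10):
    # A tensor has rank <= 1 exactly when every mode-m flattening has
    # matrix rank <= 1; the flattening rank is read off from the SVD.
    for m in range(T.ndim):
        s = np.linalg.svd(np.moveaxis(T, m, 0).reshape(T.shape[m], -1),
                          compute_uv=False)
        if (s[1:] > tol * max(s[0], 1.0)).any():
            return False
    return True

rng = np.random.default_rng(0)
a, b, c = rng.standard_normal(3), rng.standard_normal(4), rng.standard_normal(5)
assert is_rank_at_most_one(np.einsum('i,j,k->ijk', a, b, c))
assert not is_rank_at_most_one(rng.standard_normal((3, 4, 5)))
```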

Field dependence

The rank of a tensor depends on the field over which the tensor is decomposed. It is known that some real tensors may admit a complex decomposition whose rank is strictly less than the rank of any real decomposition of the same tensor. As an example,[8] consider the following real tensor

$\mathcal{A} = \mathbf{x}\otimes\mathbf{x}\otimes\mathbf{x} - \mathbf{x}\otimes\mathbf{y}\otimes\mathbf{y} - \mathbf{y}\otimes\mathbf{x}\otimes\mathbf{y} - \mathbf{y}\otimes\mathbf{y}\otimes\mathbf{x},$

where $\mathbf{x}, \mathbf{y} \in \mathbb{R}^2$ are linearly independent. The rank of this tensor over the reals is known to be 3, while its complex rank is only 2, because it is the sum of a complex rank-1 tensor with its complex conjugate, namely

$\mathcal{A} = \frac{1}{2}\left(\mathbf{z}\otimes\mathbf{z}\otimes\mathbf{z} + \bar{\mathbf{z}}\otimes\bar{\mathbf{z}}\otimes\bar{\mathbf{z}}\right),$

where $\mathbf{z} = \mathbf{x} + i\mathbf{y}$.

In contrast, the rank of real matrices will never decrease under a field extension to $\mathbb{C}$: real matrix rank and complex matrix rank coincide for real matrices.
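
The example above can be checked numerically. The following sketch fixes $\mathbf{x} = (1, 0)$ and $\mathbf{y} = (0, 1)$ for concreteness and verifies that the sum of the two conjugate complex rank-1 terms reproduces the real tensor.

```python
import numpy as np

def outer3(a, b, c):
    return np.einsum('i,j,k->ijk', a, b, c)

x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])

# The real tensor of real rank 3 from the example.
A = outer3(x, x, x) - outer3(x, y, y) - outer3(y, x, y) - outer3(y, y, x)

# Complex rank-2 decomposition: A = (z (x) z (x) z + conjugate term) / 2.
z = x + 1j * y
B = 0.5 * (outer3(z, z, z) + outer3(z.conj(), z.conj(), z.conj()))

assert np.allclose(B.imag, 0) and np.allclose(A, B.real)
```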

Generic rank

The generic rank $r(I_1, \ldots, I_M)$ is defined as the least rank $r$ such that the closure in the Zariski topology of the set of tensors of rank at most $r$ is the entire space $\mathbb{F}^{I_1}\otimes\cdots\otimes\mathbb{F}^{I_M}$. In the case of complex tensors, tensors of rank at most $r(I_1,\ldots,I_M)$ form a dense set $S$: every tensor in the aforementioned space is either of rank less than the generic rank, or it is the limit in the Euclidean topology of a sequence of tensors from $S$. In the case of real tensors, the set of tensors of rank at most $r(I_1,\ldots,I_M)$ only forms an open set of positive measure in the Euclidean topology. There may exist Euclidean-open sets of tensors of rank strictly higher than the generic rank. All ranks appearing on open sets in the Euclidean topology are called typical ranks. The smallest typical rank is called the generic rank; this definition applies to both complex and real tensors. The generic rank of tensor spaces was initially studied in 1983 by Volker Strassen.[9]

As an illustration of the above concepts, it is known that both 2 and 3 are typical ranks of $\mathbb{R}^2\otimes\mathbb{R}^2\otimes\mathbb{R}^2$ while the generic rank of $\mathbb{C}^2\otimes\mathbb{C}^2\otimes\mathbb{C}^2$ is 2. Practically, this means that a randomly sampled real tensor (from a continuous probability measure on the space of tensors) of size $2 \times 2 \times 2$ will be a rank-1 tensor with probability zero, a rank-2 tensor with positive probability, and a rank-3 tensor with positive probability. On the other hand, a randomly sampled complex tensor of the same size will be a rank-1 tensor with probability zero, a rank-2 tensor with probability one, and a rank-3 tensor with probability zero. It is even known that the generic rank-3 real tensor in $\mathbb{R}^2\otimes\mathbb{R}^2\otimes\mathbb{R}^2$ will be of complex rank equal to 2.
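
The typical ranks of real $2 \times 2 \times 2$ tensors can be observed experimentally. For frontal slices $T_0, T_1$, the discriminant of the quadratic $\det(T_0 + t\,T_1)$ (which agrees in sign with Cayley's hyperdeterminant of the tensor) is positive on the rank-2 region and negative on the rank-3 region; a vanishing discriminant occurs with probability zero. A minimal Monte Carlo sketch, with all names illustrative:

```python
import numpy as np

def typical_real_rank_2x2x2(T):
    # det(T0 + t*T1) = det(T1)*t**2 + c*t + det(T0); recover c by evaluation
    # at t = 1, then classify by the sign of the discriminant.
    T0, T1 = T[:, :, 0], T[:, :, 1]
    d0, d1 = np.linalg.det(T0), np.linalg.det(T1)
    c = np.linalg.det(T0 + T1) - d0 - d1
    disc = c * c - 4.0 * d0 * d1
    return 2 if disc > 0 else 3  # disc == 0 happens with probability zero

rng = np.random.default_rng(1)
counts = {2: 0, 3: 0}
for _ in range(10_000):
    counts[typical_real_rank_2x2x2(rng.standard_normal((2, 2, 2)))] += 1
print(counts)  # both ranks occur on sets of positive measure
```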

The generic rank of tensor spaces depends on the distinction between balanced and unbalanced tensor spaces. A tensor space $\mathbb{F}^{I_1}\otimes\mathbb{F}^{I_2}\otimes\cdots\otimes\mathbb{F}^{I_M}$, where $I_1 \ge I_2 \ge \cdots \ge I_M \ge 2$, is called unbalanced whenever

$I_1 > 1 + \prod_{m=2}^{M} I_m - \sum_{m=2}^{M}\left(I_m - 1\right),$

and it is called balanced otherwise.

Unbalanced tensor spaces

When the first factor is very large with respect to the other factors in the tensor product, the tensor space essentially behaves as a matrix space. The generic rank of tensors living in an unbalanced tensor space is known to equal

$r(I_1, \ldots, I_M) = \min\left\{ I_1,\; \prod_{m=2}^{M} I_m \right\}$

almost everywhere. More precisely, the rank of every tensor in $(\mathbb{F}^{I_1}\otimes\cdots\otimes\mathbb{F}^{I_M}) \setminus Z$, where $Z$ is some closed set in the Zariski topology, equals the above value.[10]
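
The unbalancedness criterion and the generic rank formula stated above translate directly into a short sketch; the helper names are illustrative and the input dimensions are assumed sorted in non-increasing order.

```python
from math import prod

def is_unbalanced(dims):
    # dims = (I1, ..., IM) with I1 >= I2 >= ... >= IM >= 2.
    I1, rest = dims[0], dims[1:]
    return I1 > 1 + prod(rest) - sum(d - 1 for d in rest)

def generic_rank_unbalanced(dims):
    # Generic rank min{I1, prod of the remaining dimensions}, as stated above.
    I1, rest = dims[0], dims[1:]
    assert is_unbalanced(dims)
    return min(I1, prod(rest))

assert is_unbalanced((7, 2, 2)) and generic_rank_unbalanced((7, 2, 2)) == 4
assert not is_unbalanced((3, 2, 2))
```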

Balanced tensor spaces

The expected generic rank of tensors living in a balanced tensor space is equal to

$r_E(I_1,\ldots,I_M) = \left\lceil \frac{\Pi}{\Sigma} \right\rceil$

almost everywhere for complex tensors and on a Euclidean-open set for real tensors, where

$\Pi = \prod_{m=1}^{M} I_m \quad \text{and} \quad \Sigma = 1 + \sum_{m=1}^{M} \left(I_m - 1\right).$

More precisely, the rank of every tensor in $(\mathbb{F}^{I_1}\otimes\cdots\otimes\mathbb{F}^{I_M}) \setminus Z$, where $Z$ is some closed set in the Zariski topology, is expected to equal the above value.[11] For real tensors, $r_E(I_1,\ldots,I_M)$ is the least rank that is expected to occur on a set of positive Euclidean measure. The value $r_E(I_1,\ldots,I_M)$ is often referred to as the expected generic rank of the tensor space $\mathbb{F}^{I_1}\otimes\cdots\otimes\mathbb{F}^{I_M}$ because it is only conjecturally equal to the true generic rank. It is known that the true generic rank always satisfies

$r_E(I_1,\ldots,I_M) \le r(I_1,\ldots,I_M).$

The Abo–Ottaviani–Peterson (AOP) conjecture[11] states that equality is expected, i.e., $r(I_1,\ldots,I_M) = r_E(I_1,\ldots,I_M)$, with the following exceptional cases:

  •  
  •  

In each of these exceptional cases, the generic rank is known to be $r_E(I_1,\ldots,I_M) + 1$. Note that while the set of tensors of rank 3 in $\mathbb{F}^2\otimes\mathbb{F}^2\otimes\mathbb{F}^2\otimes\mathbb{F}^2$ is defective (of dimension 13 rather than the expected 14), the generic rank in that space is still the expected one, 4. Similarly, the set of tensors of rank 5 in $\mathbb{F}^4\otimes\mathbb{F}^4\otimes\mathbb{F}^3$ is defective (of dimension 44 rather than the expected 45), but the generic rank in that space is still the expected 6.

The AOP conjecture has been proved completely in a number of special cases. Lickteig showed already in 1985 that $r(n,n,n) = r_E(n,n,n)$, provided that $n \ne 3$.[12] In 2011, a major breakthrough was established by Catalisano, Geramita, and Gimigliano, who proved that the dimension of the set of rank-$s$ tensors of format $2 \times 2 \times \cdots \times 2$ is the expected one except for rank-3 tensors in the 4-factor case, yet the generic rank in that case is still the expected one, 4. As a consequence, $r(2,\ldots,2) = r_E(2,\ldots,2)$ for all binary tensors.[13]
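
The expected generic rank formula from the previous subsection is easy to evaluate; the following sketch (function name illustrative) reproduces the values appearing in the examples above, including the $3 \times 3 \times 3$ case, where the true generic rank exceeds the expected value.

```python
from math import ceil, prod

def expected_generic_rank(dims):
    # r_E = ceil(Pi / Sigma), Pi = prod(I_m), Sigma = 1 + sum(I_m - 1).
    Pi = prod(dims)
    Sigma = 1 + sum(d - 1 for d in dims)
    return ceil(Pi / Sigma)

assert expected_generic_rank((2, 2, 2)) == 2  # equals the generic rank
assert expected_generic_rank((4, 4, 3)) == 6  # generic rank is the expected 6
assert expected_generic_rank((3, 3, 3)) == 4  # the true generic rank is 5
```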

Maximum rank

The maximum rank that can be admitted by any of the tensors in a tensor space is unknown in general; even a conjecture about this maximum rank is missing. Presently, the best general upper bound states that the maximum rank $r_{\max}(I_1,\ldots,I_M)$ of $\mathbb{F}^{I_1}\otimes\cdots\otimes\mathbb{F}^{I_M}$, where $I_1 \ge I_2 \ge \cdots \ge I_M$, satisfies

$r_{\max}(I_1,\ldots,I_M) \le 2\, r(I_1,\ldots,I_M),$

where $r(I_1,\ldots,I_M)$ is the (least) generic rank of $\mathbb{F}^{I_1}\otimes\cdots\otimes\mathbb{F}^{I_M}$.[14] It is well known that the foregoing inequality may be strict. For instance, the generic rank of tensors in $\mathbb{F}^2\otimes\mathbb{F}^2\otimes\mathbb{F}^2$ is two, so that the above bound yields $r_{\max}(2,2,2) \le 4$, while it is known that the maximum rank equals 3.[8]

Border rank

A rank-$s$ tensor $\mathcal{A}$ is called a border tensor if there exists a sequence of tensors of rank at most $r < s$ whose limit is $\mathcal{A}$. If $r$ is the least value for which such a convergent sequence exists, then it is called the border rank of $\mathcal{A}$. For order-2 tensors, i.e., matrices, rank and border rank always coincide; however, for tensors of order $M \ge 3$ they may differ. Border tensors were first studied in the context of fast approximate matrix multiplication algorithms by Bini, Lotti, and Romani in 1980.[15]

A classic example of a border tensor is the rank-3 tensor

$\mathcal{A} = \mathbf{x}\otimes\mathbf{x}\otimes\mathbf{y} + \mathbf{x}\otimes\mathbf{y}\otimes\mathbf{x} + \mathbf{y}\otimes\mathbf{x}\otimes\mathbf{x},$

where $\mathbf{x}$ and $\mathbf{y}$ are linearly independent. It can be approximated arbitrarily well by the following sequence of rank-2 tensors

$\mathcal{A}_n = n\left(\mathbf{x} + \frac{1}{n}\mathbf{y}\right)\otimes\left(\mathbf{x} + \frac{1}{n}\mathbf{y}\right)\otimes\left(\mathbf{x} + \frac{1}{n}\mathbf{y}\right) - n\,\mathbf{x}\otimes\mathbf{x}\otimes\mathbf{x}$

as $n \to \infty$. Therefore, its border rank is 2, which is strictly less than its rank. When the two vectors are orthogonal, this example is also known as a W state.
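
This convergence can be verified numerically: the error of $\mathcal{A}_n$ decays like $1/n$ while the norms of its two rank-1 terms grow like $n$. A minimal sketch with orthonormal $\mathbf{x}$ and $\mathbf{y}$:

```python
import numpy as np

def outer3(a, b, c):
    return np.einsum('i,j,k->ijk', a, b, c)

x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])
A = outer3(x, x, y) + outer3(x, y, x) + outer3(y, x, x)  # rank 3

for n in [1, 10, 100, 1000]:
    u = x + y / n
    A_n = n * outer3(u, u, u) - n * outer3(x, x, x)      # rank <= 2
    err = np.linalg.norm(A_n - A)
    term = n * np.linalg.norm(outer3(u, u, u))           # diverging term norm
    print(n, err, term)  # err ~ 1/n while term ~ n
```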

Properties

Identifiability

It follows from the definition of a pure tensor that $\mathbf{a}^{(1)} \otimes \mathbf{a}^{(2)} \otimes \cdots \otimes \mathbf{a}^{(M)} = \mathbf{b}^{(1)} \otimes \mathbf{b}^{(2)} \otimes \cdots \otimes \mathbf{b}^{(M)} \ne 0$ if and only if there exist scalars $\lambda_m$ such that $\prod_{m=1}^{M} \lambda_m = 1$ and $\mathbf{b}^{(m)} = \lambda_m \mathbf{a}^{(m)}$ for all $m$. For this reason, the parameters $\{\mathbf{a}^{(m)}\}_{m=1}^{M}$ of a rank-1 tensor are called identifiable or essentially unique. A rank-$r$ tensor $\mathcal{A}$ is called identifiable if each of its tensor rank decompositions is the sum of the same set of $r$ distinct rank-1 tensors $\mathcal{A}_1, \ldots, \mathcal{A}_r$. An identifiable rank-$r$ tensor thus has only one essentially unique decomposition $\mathcal{A} = \sum_{i=1}^{r} \mathcal{A}_i$, and all $r!$ tensor rank decompositions of $\mathcal{A}$ can be obtained by permuting the order of the summands. Observe that in a tensor rank decomposition all the $\mathcal{A}_i$'s are distinct, for otherwise the rank of $\mathcal{A}$ would be at most $r - 1$.

Generic identifiability

Order-2 tensors in $\mathbb{F}^{I_1} \otimes \mathbb{F}^{I_2}$, i.e., matrices, are not identifiable for $r > 1$. This follows essentially from the observation $\mathcal{A} = \sum_{i=1}^{r} \mathbf{a}_i \otimes \mathbf{b}_i = A B^T = (A X)(B X^{-T})^T = \sum_{i=1}^{r} \hat{\mathbf{a}}_i \otimes \hat{\mathbf{b}}_i$, where $X$ is an invertible $r \times r$ matrix, $A = [\mathbf{a}_1 \ \cdots \ \mathbf{a}_r]$, $B = [\mathbf{b}_1 \ \cdots \ \mathbf{b}_r]$, $\hat{\mathbf{a}}_i$ is the $i$th column of $A X$, and $\hat{\mathbf{b}}_i$ is the $i$th column of $B X^{-T}$. It can be shown[16] that for every $X \notin Z$, where $Z$ is a closed set in the Zariski topology, the decomposition on the right-hand side is a sum of a different set of rank-1 tensors than the decomposition on the left-hand side, entailing that order-2 tensors of rank $r > 1$ are generically not identifiable.
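
This non-uniqueness is easily exhibited numerically: every invertible $X$ yields a different rank-1 decomposition of the same matrix. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
I1, I2, r = 5, 4, 3
A = rng.standard_normal((I1, r))
B = rng.standard_normal((I2, r))
M = A @ B.T                         # an order-2 tensor of rank 3

X = rng.standard_normal((r, r))     # a generic X is invertible
A_hat = A @ X
B_hat = B @ np.linalg.inv(X).T
assert np.allclose(M, A_hat @ B_hat.T)  # a different rank-1 decomposition
```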

The situation changes completely for higher-order tensors in $\mathbb{F}^{I_1} \otimes \mathbb{F}^{I_2} \otimes \cdots \otimes \mathbb{F}^{I_M}$ with $M \ge 3$ and all $I_m \ge 2$. For simplicity in notation, assume without loss of generality that the factors are ordered such that $I_1 \ge I_2 \ge \cdots \ge I_M \ge 2$. Let $S_r$ denote the set of tensors of rank bounded by $r$. Then, the following statement was proved to be correct using a computer-assisted proof for all spaces of dimension smaller than 15000,[17] and it is conjectured to be valid in general:[17][18][19]

There exists a closed set $Z_r$ in the Zariski topology such that every tensor $\mathcal{A} \in S_r \setminus Z_r$ is identifiable ($S_r$ is called generically identifiable in this case), unless one of the following exceptional cases holds:

  1. The rank is too large: $r > r_E(I_1,\ldots,I_M)$;
  2. The space is identifiability-unbalanced, i.e., $I_1 > \prod_{m=2}^{M} I_m - \sum_{m=2}^{M}(I_m - 1)$, and the rank is too large: $r \ge \prod_{m=2}^{M} I_m - \sum_{m=2}^{M}(I_m - 1)$;
  3. The space is the defective case $\mathbb{F}^4 \otimes \mathbb{F}^4 \otimes \mathbb{F}^3$ and the rank is $r = 5$;
  4. The space is the defective case $\mathbb{F}^n \otimes \mathbb{F}^n \otimes \mathbb{F}^2 \otimes \mathbb{F}^2$, where $n \ge 2$, and the rank is $r = 2n - 1$;
  5. The space is $\mathbb{F}^4 \otimes \mathbb{F}^4 \otimes \mathbb{F}^4$ and the rank is $r = 6$;
  6. The space is $\mathbb{F}^6 \otimes \mathbb{F}^6 \otimes \mathbb{F}^3$ and the rank is $r = 8$; or
  7. The space is $\mathbb{F}^2 \otimes \mathbb{F}^2 \otimes \mathbb{F}^2 \otimes \mathbb{F}^2 \otimes \mathbb{F}^2$ and the rank is $r = 5$.
  8. The space is perfect, i.e., $\frac{\Pi}{\Sigma}$ is an integer, and the rank is $r = r_E(I_1,\ldots,I_M)$.

In these exceptional cases, the generic (and also minimum) number of complex decompositions is

  • proved to be infinite in the first 4 cases;
  • proved to be two in case 5;[20]
  • expected[21] to be six in case 6;
  • proved to be two in case 7;[22] and
  • expected[21] to be at least two in case 8, with the exception of the two identifiable cases $\mathbb{F}^5 \otimes \mathbb{F}^4 \otimes \mathbb{F}^3$ and $\mathbb{F}^3 \otimes \mathbb{F}^2 \otimes \mathbb{F}^2 \otimes \mathbb{F}^2$.

In summary, the generic tensor of order $M \ge 3$ and rank $r < r_E(I_1,\ldots,I_M)$ that does not live in an identifiability-unbalanced space is expected to be identifiable (modulo the exceptional cases in small spaces).

Ill-posedness of the standard approximation problem

The rank approximation problem asks for the rank-$s$ decomposition closest (in the usual Euclidean topology) to some rank-$r$ tensor $\mathcal{A}$, where $s < r$. That is, one seeks to solve

$\min_{\hat{\mathcal{A}} \in S_s} \left\| \mathcal{A} - \hat{\mathcal{A}} \right\|_F,$

where $S_s$ denotes the set of tensors of rank at most $s$ and $\|\cdot\|_F$ is the Frobenius norm.

It was shown in a 2008 paper by de Silva and Lim[8] that the above standard approximation problem may be ill-posed. A solution to the aforementioned problem may sometimes not exist because the set over which one optimizes is not closed. As such, a minimizer may not exist, even though an infimum would exist. In particular, it is known that certain so-called border tensors may be approximated arbitrarily well by a sequence of tensors of rank at most $s$, even though the sequence converges to a tensor of rank strictly higher than $s$. The rank-3 tensor

$\mathcal{A} = \mathbf{x}\otimes\mathbf{x}\otimes\mathbf{y} + \mathbf{x}\otimes\mathbf{y}\otimes\mathbf{x} + \mathbf{y}\otimes\mathbf{x}\otimes\mathbf{x}$

can be approximated arbitrarily well by the following sequence of rank-2 tensors

$\mathcal{A}_n = n\left(\mathbf{x} + \frac{1}{n}\mathbf{y}\right)\otimes\left(\mathbf{x} + \frac{1}{n}\mathbf{y}\right)\otimes\left(\mathbf{x} + \frac{1}{n}\mathbf{y}\right) - n\,\mathbf{x}\otimes\mathbf{x}\otimes\mathbf{x}$

as $n \to \infty$. This example neatly illustrates the general principle that a sequence of rank-$s$ tensors that converges to a tensor of strictly higher rank needs to admit at least two individual rank-1 terms whose norms become unbounded. Stated formally, whenever a sequence

$\mathcal{A}_n = \sum_{i=1}^{s} \mathbf{a}^{(1)}_{i,n} \otimes \mathbf{a}^{(2)}_{i,n} \otimes \cdots \otimes \mathbf{a}^{(M)}_{i,n}$

has the property that $\mathcal{A}_n \to \mathcal{A}$ (in the Euclidean topology) as $n \to \infty$, where the rank of $\mathcal{A}$ is strictly greater than $s$, then there should exist at least two indices $i \ne j$ such that

$\left\| \mathbf{a}^{(1)}_{i,n} \otimes \cdots \otimes \mathbf{a}^{(M)}_{i,n} \right\|_F \to \infty \quad \text{and} \quad \left\| \mathbf{a}^{(1)}_{j,n} \otimes \cdots \otimes \mathbf{a}^{(M)}_{j,n} \right\|_F \to \infty$

as $n \to \infty$. This phenomenon is often encountered when attempting to approximate a tensor using numerical optimization algorithms. It is sometimes called the problem of diverging components. It was, in addition, shown that a random low-rank tensor over the reals fails to admit a best rank-2 approximation with positive probability, leading to the understanding that the ill-posedness problem is an important consideration when employing the tensor rank decomposition.
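
The problem of diverging components can be provoked deliberately. The sketch below runs a plain rank-2 alternating least squares iteration (a generic textbook scheme, not an algorithm from the cited works) on the border tensor from the earlier example; the fitting error typically keeps shrinking while the norms of the two rank-1 terms grow without bound.

```python
import numpy as np

def outer3(a, b, c):
    return np.einsum('i,j,k->ijk', a, b, c)

def kr(U, V):  # Khatri-Rao (column-wise Kronecker) product
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, U.shape[1])

def unfold(T, m):  # mode-m flattening, remaining modes in natural order
    return np.moveaxis(T, m, 0).reshape(T.shape[m], -1)

x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])
W = outer3(x, x, y) + outer3(x, y, x) + outer3(y, x, x)  # rank 3, border rank 2

rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((2, 2)) for _ in range(3))
for _ in range(5000):  # plain rank-2 ALS
    A = unfold(W, 0) @ np.linalg.pinv(kr(B, C)).T
    B = unfold(W, 1) @ np.linalg.pinv(kr(A, C)).T
    C = unfold(W, 2) @ np.linalg.pinv(kr(A, B)).T

fit = np.einsum('ir,jr,kr->ijk', A, B, C)
norms = [np.linalg.norm(A[:, i]) * np.linalg.norm(B[:, i]) * np.linalg.norm(C[:, i])
         for i in range(2)]
print(np.linalg.norm(fit - W), norms)  # small error, large (growing) term norms
```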

A common partial solution to the ill-posedness problem consists of imposing an additional inequality constraint that bounds the norm of the individual rank-1 terms by some constant. Other constraints that result in a closed set, and thus a well-posed optimization problem, include imposing positivity or a bounded inner product strictly less than unity between the rank-1 terms appearing in the sought decomposition.

Calculating the CPD

Alternating algorithms:

  • alternating least squares (ALS)[29]

Direct algorithms:

  • pencil-based algorithms[23][24][25][26][27][28]

General optimization algorithms:

  • general nonlinear optimization methods, such as quasi-Newton and nonlinear least squares solvers, applied to the factor matrices

General polynomial system solving algorithms:

  • methods based on moment matrices[30] and homotopy continuation[31]
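
As an illustration of the alternating class, the following is a minimal rank-$R$ alternating least squares sketch for order-3 tensors. It is a textbook scheme under the simple conventions chosen here (mode-$m$ flattenings with the remaining modes in natural order); production implementations add normalization, convergence tests, and careful initialization.

```python
import numpy as np

def kr(U, V):  # Khatri-Rao (column-wise Kronecker) product
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, U.shape[1])

def unfold(T, m):  # mode-m flattening, remaining modes in natural order
    return np.moveaxis(T, m, 0).reshape(T.shape[m], -1)

def cp_als(T, R, iters=500, seed=0):
    # Fit a rank-R model T[i,j,k] ~ sum_r A[i,r] * B[j,r] * C[k,r] by
    # cyclically solving the three linear least squares subproblems.
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((T.shape[m], R)) for m in range(3))
    for _ in range(iters):
        A = unfold(T, 0) @ np.linalg.pinv(kr(B, C)).T
        B = unfold(T, 1) @ np.linalg.pinv(kr(A, C)).T
        C = unfold(T, 2) @ np.linalg.pinv(kr(A, B)).T
    return A, B, C

# Usage: recover an exactly rank-2 tensor (up to scaling and permutation).
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((d, 2)) for d in (4, 5, 6))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(T, 2)
print(np.linalg.norm(np.einsum('ir,jr,kr->ijk', A, B, C) - T))  # ~ 0
```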

Applications

In machine learning, the CP decomposition is the central ingredient in learning probabilistic latent variable models via the technique of moment matching. For example, consider the multi-view model,[32] which is a probabilistic latent variable model. In this model, the generation of samples is posited as follows: there exists a hidden random variable that is not observed directly; given this hidden variable, there are several conditionally independent random variables known as the different "views" of the hidden variable. For example, assume there are three views $x_1, x_2, x_3$ of a $k$-state categorical hidden variable $h$. Then the third moment of this latent variable model, to which the empirical third moment converges, is an order-3 tensor that can be decomposed as

$T = \mathbb{E}[x_1 \otimes x_2 \otimes x_3] = \sum_{j=1}^{k} \Pr(h = j)\, \mathbb{E}[x_1 \mid h = j] \otimes \mathbb{E}[x_2 \mid h = j] \otimes \mathbb{E}[x_3 \mid h = j],$

a CP decomposition with at most $k$ terms.

In applications such as topic modeling, this can be interpreted as the co-occurrence of words in a document. The coefficients in the decomposition of this empirical moment tensor can then be interpreted as the probabilities of choosing a specific topic, and each column of the factor matrix $M_m = [\, \mathbb{E}[x_m \mid h = 1] \ \cdots \ \mathbb{E}[x_m \mid h = k] \,]$ corresponds to the probabilities of words in the vocabulary under the corresponding topic.
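
A small simulation illustrates the moment-matching identity above; the model parameters, sample size, and noise level below are hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(4)
k, d, N = 3, 6, 200_000

# Hypothetical parameters: topic distribution w and per-view conditional
# means E[x_m | h = j], stored as the columns of M_m.
w = np.array([0.5, 0.3, 0.2])
Ms = [rng.random((d, k)) for _ in range(3)]

# Sample the hidden topic, then three conditionally independent noisy views.
h = rng.choice(k, size=N, p=w)
views = [M[:, h].T + 0.01 * rng.standard_normal((N, d)) for M in Ms]

# Empirical third moment vs. its rank-k CP limit
# sum_j w_j * M1[:,j] (x) M2[:,j] (x) M3[:,j].
T_emp = np.einsum('ni,nj,nk->ijk', *views) / N
T_pop = np.einsum('j,aj,bj,cj->abc', w, *Ms)
print(np.linalg.norm(T_emp - T_pop))  # small, and shrinking as N grows
```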

References

  1. ^ Papalexakis, Evangelos. "Automatic Unsupervised Tensor Mining with Quality Assessment" (PDF).
  2. ^ F. L. Hitchcock (1927). "The expression of a tensor or a polyadic as a sum of products". Journal of Mathematics and Physics. 6 (1–4): 164–189. doi:10.1002/sapm192761164.
  3. ^ a b Carroll, J. D.; Chang, J. (1970). "Analysis of individual differences in multidimensional scaling via an n-way generalization of 'Eckart–Young' decomposition". Psychometrika. 35 (3): 283–319. doi:10.1007/BF02310791. S2CID 50364581.
  4. ^ a b Harshman, Richard A. (1970). "Foundations of the PARAFAC procedure: Models and conditions for an "explanatory" multi-modal factor analysis" (PDF). UCLA Working Papers in Phonetics. 16: 84. No. 10,085. Archived from the original (PDF) on October 10, 2004.
  5. ^ Gujral, Ekta. "Aptera: Automatic PARAFAC2 Tensor Analysis" (PDF). ASONAM 2022.
  6. ^ Hillar, C. J.; Lim, L. (2013). "Most tensor problems are NP-Hard". Journal of the ACM. 60 (6): 1–39. arXiv:0911.1393. doi:10.1145/2512329. S2CID 1460452.
  7. ^ Landsberg, J. M. (2012). Tensors: Geometry and Applications. AMS.
  8. ^ a b c de Silva, V.; Lim, L. (2008). "Tensor Rank and the Ill-Posedness of the Best Low-Rank Approximation Problem". SIAM Journal on Matrix Analysis and Applications. 30 (3): 1084–1127. arXiv:math/0607647. doi:10.1137/06066518x. S2CID 7159193.
  9. ^ Strassen, V. (1983). "Rank and optimal computation of generic tensors". Linear Algebra and Its Applications. 52/53: 645–685. doi:10.1016/0024-3795(83)80041-x.
  10. ^ Catalisano, M. V.; Geramita, A. V.; Gimigliano, A. (2002). "Ranks of tensors, secant varieties of Segre varieties and fat points". Linear Algebra and Its Applications. 355 (1–3): 263–285. doi:10.1016/s0024-3795(02)00352-x.
  11. ^ a b Abo, H.; Ottaviani, G.; Peterson, C. (2009). "Induction for secant varieties of Segre varieties". Transactions of the American Mathematical Society. 361 (2): 767–792. arXiv:math/0607191. doi:10.1090/s0002-9947-08-04725-9. S2CID 59069541.
  12. ^ Lickteig, Thomas (1985). "Typical tensorial rank". Linear Algebra and Its Applications. 69: 95–120. doi:10.1016/0024-3795(85)90070-9.
  13. ^ Catalisano, M. V.; Geramita, A. V.; Gimigliano, A. (2011). "Secant varieties of ℙ¹ × ··· × ℙ¹ (n-times) are not defective for n ≥ 5". Journal of Algebraic Geometry. 20 (2): 295–327. doi:10.1090/s1056-3911-10-00537-0.
  14. ^ Blekherman, G.; Teitler, Z. (2015). "On maximum, typical and generic ranks". Mathematische Annalen. 362 (3–4): 1–11. arXiv:1402.2371. doi:10.1007/s00208-014-1150-3. S2CID 14309435.
  15. ^ Bini, D.; Lotti, G.; Romani, F. (1980). "Approximate solutions for the bilinear form computational problem". SIAM Journal on Computing. 9 (4): 692–697. doi:10.1137/0209053.
  16. ^ Harris, Joe (1992). Algebraic Geometry: A First Course. Graduate Texts in Mathematics. Vol. 133. doi:10.1007/978-1-4757-2189-8. ISBN 978-1-4419-3099-6.
  17. ^ a b Chiantini, L.; Ottaviani, G.; Vannieuwenhoven, N. (2014-01-01). "An Algorithm For Generic and Low-Rank Specific Identifiability of Complex Tensors". SIAM Journal on Matrix Analysis and Applications. 35 (4): 1265–1287. arXiv:1403.4157. doi:10.1137/140961389. ISSN 0895-4798. S2CID 28478606.
  18. ^ Bocci, Cristiano; Chiantini, Luca; Ottaviani, Giorgio (2014-12-01). "Refined methods for the identifiability of tensors". Annali di Matematica Pura ed Applicata. 193 (6): 1691–1702. arXiv:1303.6915. doi:10.1007/s10231-013-0352-8. ISSN 0373-3114. S2CID 119721371.
  19. ^ Chiantini, L.; Ottaviani, G.; Vannieuwenhoven, N. (2017-01-01). "Effective Criteria for Specific Identifiability of Tensors and Forms". SIAM Journal on Matrix Analysis and Applications. 38 (2): 656–681. arXiv:1609.00123. doi:10.1137/16m1090132. ISSN 0895-4798. S2CID 23983015.
  20. ^ Chiantini, L.; Ottaviani, G. (2012-01-01). "On Generic Identifiability of 3-Tensors of Small Rank". SIAM Journal on Matrix Analysis and Applications. 33 (3): 1018–1037. arXiv:1103.2696. doi:10.1137/110829180. ISSN 0895-4798. S2CID 43781880.
  21. ^ a b Hauenstein, J. D.; Oeding, L.; Ottaviani, G.; Sommese, A. J. (2016). "Homotopy techniques for tensor decomposition and perfect identifiability". J. Reine Angew. Math. 2019 (753): 1–22. arXiv:1501.00090. doi:10.1515/crelle-2016-0067. S2CID 16324593.
  22. ^ Bocci, Cristiano; Chiantini, Luca (2013). "On the identifiability of binary Segre products". Journal of Algebraic Geometry. 22 (1): 1–11. arXiv:1105.3643. doi:10.1090/s1056-3911-2011-00592-4. ISSN 1056-3911. S2CID 119671913.
  23. ^ Domanov, Ignat; Lathauwer, Lieven De (January 2014). "Canonical Polyadic Decomposition of Third-Order Tensors: Reduction to Generalized Eigenvalue Decomposition". SIAM Journal on Matrix Analysis and Applications. 35 (2): 636–660. arXiv:1312.2848. doi:10.1137/130916084. ISSN 0895-4798. S2CID 14851072.
  24. ^ Domanov, Ignat; De Lathauwer, Lieven (January 2017). "Canonical polyadic decomposition of third-order tensors: Relaxed uniqueness conditions and algebraic algorithm". Linear Algebra and Its Applications. 513: 342–375. arXiv:1501.07251. doi:10.1016/j.laa.2016.10.019. ISSN 0024-3795. S2CID 119729978.
  25. ^ Faber, Nicolaas (Klaas) M.; Ferré, Joan; Boqué, Ricard (January 2001). "Iteratively reweighted generalized rank annihilation method". Chemometrics and Intelligent Laboratory Systems. 55 (1–2): 67–90. doi:10.1016/s0169-7439(00)00117-9. ISSN 0169-7439.
  26. ^ Leurgans, S. E.; Ross, R. T.; Abel, R. B. (October 1993). "A Decomposition for Three-Way Arrays". SIAM Journal on Matrix Analysis and Applications. 14 (4): 1064–1083. doi:10.1137/0614071. ISSN 0895-4798.
  27. ^ Lorber, Avraham. (October 1985). "Features of quantifying chemical composition from two-dimensional data array by the rank annihilation factor analysis method". Analytical Chemistry. 57 (12): 2395–2397. doi:10.1021/ac00289a052. ISSN 0003-2700.
  28. ^ Sanchez, Eugenio; Kowalski, Bruce R. (January 1990). "Tensorial resolution: A direct trilinear decomposition". Journal of Chemometrics. 4 (1): 29–45. doi:10.1002/cem.1180040105. ISSN 0886-9383. S2CID 120459386.
  29. ^ Sands, Richard; Young, Forrest W. (March 1980). "Component models for three-way data: An alternating least squares algorithm with optimal scaling features". Psychometrika. 45 (1): 39–67. doi:10.1007/bf02293598. ISSN 0033-3123. S2CID 121003817.
  30. ^ Bernardi, A.; Brachat, J.; Comon, P.; Mourrain, B. (May 2013). "General tensor decomposition, moment matrices and applications". Journal of Symbolic Computation. 52: 51–71. arXiv:1105.1229. doi:10.1016/j.jsc.2012.05.012. ISSN 0747-7171. S2CID 14181289.
  31. ^ Bernardi, Alessandra; Daleo, Noah S.; Hauenstein, Jonathan D.; Mourrain, Bernard (December 2017). "Tensor decomposition and homotopy continuation". Differential Geometry and Its Applications. 55: 78–105. arXiv:1512.04312. doi:10.1016/j.difgeo.2017.07.009. ISSN 0926-2245. S2CID 119147635.
  32. ^ Anandkumar, Animashree; Ge, Rong; Hsu, Daniel; Kakade, Sham M; Telgarsky, Matus (2014). "Tensor decompositions for learning latent variable models". The Journal of Machine Learning Research. 15 (1): 2773–2832.
