The Lyapunov equation, named after the Russian mathematician Aleksandr Lyapunov, is a matrix equation used in the stability analysis of linear dynamical systems.[1][2]

In particular, the discrete-time Lyapunov equation (also known as the Stein equation) for $X$ is

$A X A^{H} - X + Q = 0,$

where $Q$ is a Hermitian matrix and $A^{H}$ is the conjugate transpose of $A$, while the continuous-time Lyapunov equation is

$A X + X A^{H} + Q = 0.$

Application to stability

In the following theorems, $A, P, Q \in \mathbb{R}^{n \times n}$, and $P$ and $Q$ are symmetric. The notation $P > 0$ means that the matrix $P$ is positive definite.

Theorem (continuous time version). Given any $Q > 0$, there exists a unique $P > 0$ satisfying $A^{T} P + P A + Q = 0$ if and only if the linear system $\dot{x} = A x$ is globally asymptotically stable. The quadratic function $V(x) = x^{T} P x$ is a Lyapunov function that can be used to verify stability.

Theorem (discrete time version). Given any $Q > 0$, there exists a unique $P > 0$ satisfying $A^{T} P A - P + Q = 0$ if and only if the linear system $x_{t+1} = A x_{t}$ is globally asymptotically stable. As before, $x^{T} P x$ is a Lyapunov function.
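The continuous-time theorem can be checked numerically. The following sketch (assuming NumPy and SciPy are available; the matrix $A$ is an illustrative example, not from the source) solves $A^{T} P + P A = -Q$ for a stable $A$ and confirms that the resulting $P$ is positive definite:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])  # eigenvalues -1 and -2: asymptotically stable
Q = np.eye(2)                 # any positive definite choice works

# solve_continuous_lyapunov(a, q) solves a X + X a^H = q,
# so passing a = A^T and q = -Q yields A^T P + P A = -Q.
P = solve_continuous_lyapunov(A.T, -Q)

# P > 0 certifies stability: V(x) = x^T P x is a Lyapunov function.
print(np.all(np.linalg.eigvalsh(P) > 0))
```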

Computational aspects of solution

The Lyapunov equation is linear; therefore, if $X$ contains $n$ entries, the equation can be solved in $\mathcal{O}(n^{3})$ time using standard matrix factorization methods.

However, specialized algorithms are available which can yield solutions much more quickly owing to the specific structure of the Lyapunov equation. For the discrete case, the Schur method of Kitagawa is often used.[3] For the continuous Lyapunov equation the Bartels–Stewart algorithm can be used.[4]
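Both cases are exposed in SciPy through `scipy.linalg.solve_continuous_lyapunov` (documented as using a Bartels–Stewart-type approach) and `scipy.linalg.solve_discrete_lyapunov`. A minimal sketch with illustrative matrices, minding the sign conventions of each solver:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_discrete_lyapunov

Q = np.eye(2)

# Continuous case: SciPy solves A X + X A^H = Q, so pass -Q to obtain
# a solution of A X + X A^H + Q = 0 in the convention used here.
A = np.array([[-3.0, 1.0],
              [0.0, -2.0]])   # Hurwitz stable
Xc = solve_continuous_lyapunov(A, -Q)
print(np.allclose(A @ Xc + Xc @ A.conj().T + Q, 0))

# Discrete case: SciPy solves A X A^H - X + Q = 0 directly.
Ad = np.array([[0.5, 0.1],
               [0.0, 0.4]])   # Schur stable: eigenvalues inside the unit circle
Xd = solve_discrete_lyapunov(Ad, Q)
print(np.allclose(Ad @ Xd @ Ad.conj().T - Xd + Q, 0))
```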

Analytic solution

Defining the vectorization operator $\operatorname{vec}(A)$ as stacking the columns of a matrix $A$ and $A \otimes B$ as the Kronecker product of $A$ and $B$, the continuous time and discrete time Lyapunov equations can be expressed as solutions of a matrix equation. Furthermore, if the matrix $A$ is "stable", the solution can also be expressed as an integral (continuous time case) or as an infinite sum (discrete time case).

Discrete time

Using the result that $\operatorname{vec}(ABC) = (C^{T} \otimes A)\operatorname{vec}(B)$, one has

$(I_{n^{2}} - \bar{A} \otimes A)\operatorname{vec}(X) = \operatorname{vec}(Q),$

where $I_{n^{2}}$ is a conformable identity matrix and $\bar{A}$ is the element-wise complex conjugate of $A$.[5] One may then solve for $\operatorname{vec}(X)$ by inverting or solving the linear equations. To get $X$, one must just reshape $\operatorname{vec}(X)$ appropriately.
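This vectorized solve can be sketched directly in NumPy (the function name and test matrices are illustrative). Note that `vec` stacks columns, which corresponds to Fortran order in NumPy:

```python
import numpy as np

def solve_discrete_lyapunov_vec(A, Q):
    """Solve A X A^H - X + Q = 0 via the Kronecker identity
    (I - conj(A) ⊗ A) vec(X) = vec(Q), where vec stacks columns."""
    n = A.shape[0]
    K = np.eye(n * n) - np.kron(A.conj(), A)
    vecX = np.linalg.solve(K, Q.flatten(order="F"))  # column-stacking vec
    return vecX.reshape((n, n), order="F")           # reshape back to X

A = np.array([[0.5, 0.2],
              [0.0, 0.3]])  # Schur stable
Q = np.eye(2)
X = solve_discrete_lyapunov_vec(A, Q)
print(np.allclose(A @ X @ A.conj().T - X + Q, 0))
```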

Moreover, if $A$ is stable (in the sense of Schur stability, i.e., having eigenvalues with magnitude less than 1), the solution $X$ can also be written as

$X = \sum_{k=0}^{\infty} A^{k} Q (A^{H})^{k}.$

For comparison, consider the one-dimensional case, where this just says that the solution of $(1 - a^{2}) x = q$ is

$x = \frac{q}{1 - a^{2}} = \sum_{k=0}^{\infty} q a^{2k}.$
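For a Schur-stable $A$, the partial sums of this series converge geometrically, which can be verified against a direct solver (matrices here are illustrative):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A = np.array([[0.5, 0.2],
              [0.0, 0.3]])  # Schur stable: spectral radius 0.5
Q = np.eye(2)

# Accumulate partial sums of X = sum_k A^k Q (A^H)^k
X_sum = np.zeros_like(Q)
Ak = np.eye(2)
for _ in range(200):
    X_sum += Ak @ Q @ Ak.conj().T
    Ak = Ak @ A

print(np.allclose(X_sum, solve_discrete_lyapunov(A, Q)))
```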

Continuous time

Using again the Kronecker product notation and the vectorization operator, one has the matrix equation

$(I_{n} \otimes A + \bar{A} \otimes I_{n})\operatorname{vec}(X) = -\operatorname{vec}(Q),$

where $\bar{A}$ denotes the matrix obtained by complex conjugating the entries of $A$.
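The continuous-time vectorized solve mirrors the discrete one (function name and matrices are illustrative assumptions):

```python
import numpy as np

def solve_continuous_lyapunov_vec(A, Q):
    """Solve A X + X A^H + Q = 0 via
    (I ⊗ A + conj(A) ⊗ I) vec(X) = -vec(Q), with vec stacking columns."""
    n = A.shape[0]
    I = np.eye(n)
    K = np.kron(I, A) + np.kron(A.conj(), I)
    vecX = np.linalg.solve(K, -Q.flatten(order="F"))
    return vecX.reshape((n, n), order="F")

A = np.array([[-2.0, 1.0],
              [0.0, -1.0]])  # Hurwitz stable
Q = np.eye(2)
X = solve_continuous_lyapunov_vec(A, Q)
print(np.allclose(A @ X + X @ A.conj().T + Q, 0))
```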

Similar to the discrete-time case, if $A$ is stable (in the sense of Hurwitz stability, i.e., having eigenvalues with negative real parts), the solution $X$ can also be written as

$X = \int_{0}^{\infty} e^{A \tau} Q e^{A^{H} \tau} \, d\tau,$

which holds because

$A X + X A^{H} = \int_{0}^{\infty} \left( A e^{A \tau} Q e^{A^{H} \tau} + e^{A \tau} Q e^{A^{H} \tau} A^{H} \right) d\tau = \int_{0}^{\infty} \frac{d}{d\tau} \left( e^{A \tau} Q e^{A^{H} \tau} \right) d\tau = \left[ e^{A \tau} Q e^{A^{H} \tau} \right]_{0}^{\infty} = -Q.$

For comparison, consider the one-dimensional case, where this just says that the solution of $2 a x = -q$ is

$x = \frac{-q}{2a} = \int_{0}^{\infty} q e^{2 a \tau} \, d\tau.$
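The integral representation can be checked by simple quadrature against a direct solver; the sketch below (matrices and truncation point are illustrative choices) uses a midpoint rule on $[0, 20]$, since the integrand decays exponentially for a Hurwitz-stable $A$:

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

A = np.array([[-2.0, 1.0],
              [0.0, -1.0]])  # Hurwitz stable (eigenvalues -2, -1)
Q = np.eye(2)

# Midpoint-rule approximation of X = ∫_0^∞ e^{Aτ} Q e^{A^H τ} dτ;
# here the integrand decays like e^{-2τ}, so [0, 20] captures the tail.
dt = 0.01
X_int = np.zeros_like(Q)
for t in np.arange(0.0, 20.0, dt):
    E = expm(A * (t + dt / 2))
    X_int += E @ Q @ E.conj().T * dt

# Agrees with a direct solve of A X + X A^H = -Q
X_direct = solve_continuous_lyapunov(A, -Q)
print(np.max(np.abs(X_int - X_direct)))
```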

Relationship between discrete and continuous Lyapunov equations

We start with the continuous-time linear dynamics:

$\dot{x} = \mathbf{A} x.$

And then discretize it as follows:

$\dot{x} \approx \frac{x_{t+1} - x_{t}}{\delta}$

where $\delta$ indicates a small forward displacement in time. Substituting the bottom equation into the top and shuffling terms around, we get a discrete-time equation for $x_{t+1}$:

$x_{t+1} = x_{t} + \delta \mathbf{A} x_{t} = (I + \delta \mathbf{A}) x_{t} = \mathbf{B} x_{t}$

where we've defined $\mathbf{B} \equiv (I + \delta \mathbf{A})$. Now we can use the discrete time Lyapunov equation for $\mathbf{B}$:

$\mathbf{B}^{H} M \mathbf{B} - M = -\delta Q.$

Plugging in our definition for $\mathbf{B}$, we get:

$(I + \delta \mathbf{A})^{H} M (I + \delta \mathbf{A}) - M = -\delta Q.$

Expanding this expression out yields:

$(M + \delta \mathbf{A}^{H} M)(I + \delta \mathbf{A}) - M = \delta (\mathbf{A}^{H} M + M \mathbf{A}) + \delta^{2} \mathbf{A}^{H} M \mathbf{A} = -\delta Q.$

Recall that $\delta$ is a small displacement in time. Letting $\delta$ go to zero brings us closer and closer to having continuous dynamics, and in the limit we achieve them. It stands to reason that we should also recover the continuous-time Lyapunov equation in the limit. Dividing through by $\delta$ on both sides, and then letting $\delta \to 0$, we find that:

$\mathbf{A}^{H} M + M \mathbf{A} = -Q,$

which is the continuous-time Lyapunov equation, as desired.
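This limit can be observed numerically: solving the discrete equation for $\mathbf{B} = I + \delta \mathbf{A}$ with right-hand side $-\delta Q$ and shrinking $\delta$ drives the discrete solution toward the continuous one. A sketch with an illustrative stable matrix:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov, solve_continuous_lyapunov

A = np.array([[-2.0, 1.0],
              [0.0, -1.0]])  # Hurwitz stable
Q = np.eye(2)

# Continuous-time solution of A^H M + M A = -Q
M_cont = solve_continuous_lyapunov(A.conj().T, -Q)

# Discrete-time solutions of B^H M B - M = -δQ with B = I + δA;
# solve_discrete_lyapunov(a, q) solves a X a^H - X + q = 0, so pass a = B^H.
errs = []
for delta in (1e-1, 1e-2, 1e-3):
    B = np.eye(2) + delta * A
    M_disc = solve_discrete_lyapunov(B.conj().T, delta * Q)
    errs.append(np.max(np.abs(M_disc - M_cont)))
    print(f"delta={delta:g}  max|M_disc - M_cont|={errs[-1]:.2e}")
```

The error shrinks roughly linearly in $\delta$, consistent with the leftover $\delta \, \mathbf{A}^{H} M \mathbf{A}$ term in the expansion above.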

References

  1. ^ Parks, P. C. (1992-01-01). "A. M. Lyapunov's stability theory—100 years on *". IMA Journal of Mathematical Control and Information. 9 (4): 275–303. doi:10.1093/imamci/9.4.275. ISSN 0265-0754.
  2. ^ Simoncini, V. (2016-01-01). "Computational Methods for Linear Matrix Equations". SIAM Review. 58 (3): 377–441. doi:10.1137/130912839. hdl:11585/586011. ISSN 0036-1445.
  3. ^ Kitagawa, G. (1977). "An Algorithm for Solving the Matrix Equation X = F X F' + S". International Journal of Control. 25 (5): 745–753. doi:10.1080/00207177708922266.
  4. ^ Bartels, R. H.; Stewart, G. W. (1972). "Algorithm 432: Solution of the matrix equation AX + XB = C". Comm. ACM. 15 (9): 820–826. doi:10.1145/361573.361582.
  5. ^ Hamilton, J. (1994). Time Series Analysis. Princeton University Press. Equations 10.2.13 and 10.2.18. ISBN 0-691-04289-6.