Numerical linear algebra

Numerical linear algebra (NLA) is the study of algorithms for performing linear algebra computations, primarily vector and matrix operations, using numerical approximations. Today, numerical linear algebra techniques are usually implemented on computers, but the study of NLA predates their existence.[1]

NLA is a fundamental area of numerical analysis, necessary for solving problems arising in fields such as optimization, control, image and signal processing, materials science simulations, data mining, structural biology, bioinformatics, fluid dynamics, finance, and others.[2] Software for solving NLA problems relies on the development, analysis, and implementation of state-of-the-art algorithms, in large part because of the central role matrices play in finite difference and finite element methods.

Common problems

Common problems in numerical linear algebra include systems of linear equations, linear least squares, and eigenvalue problems. Additionally, many other problems require NLA techniques when solved numerically, including solving nonlinear systems of equations, optimization problems, polynomial interpolation, numerical integration, numerical differentiation, initial value problems, boundary value problems, and partial differential equations.[1][2]

Challenges of numerical linear algebra

NLA techniques differ from traditional linear algebra techniques primarily in the finite precision used in their operations, which yields approximate rather than exact solutions.[3] This trade-off lets computers solve NLA problems quickly, but only if the algorithms are implemented carefully. Finite-precision approximations are inherently inexact due to computational errors, which fall into two main categories: truncation errors and rounding errors.

Truncation errors

Truncation error is the difference between the exact solution and the numerical approximation that arises from truncating an infinite series, replacing derivatives with finite differences, or terminating an iterative sequence before convergence.[2] In the case of infinite series, truncation error is unavoidable because the series must be cut off to be computed in finite time. Derivatives too complicated to evaluate analytically incur truncation error through the discretization required by finite difference methods. Iterative sequences that would require infinitely many steps to converge exactly must, like infinite series, be terminated in order to compute in finite time, again producing truncation error.
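As an illustrative sketch (the function and step sizes are chosen for this example, not taken from the article), the forward-difference approximation f′(x) ≈ (f(x + h) − f(x)) / h carries a truncation error proportional to h, which can be observed directly:

```python
import math

def forward_difference(f, x, h):
    """Approximate f'(x) with a forward difference; truncation error is O(h)."""
    return (f(x + h) - f(x)) / h

# The exact derivative of sin at 1.0 is cos(1.0).
exact = math.cos(1.0)
for h in (1e-1, 1e-2, 1e-3):
    approx = forward_difference(math.sin, 1.0, h)
    # The error shrinks roughly in proportion to h.
    print(h, abs(approx - exact))
```

Halving h roughly halves the error, the signature of a first-order truncation error.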

Rounding errors

Rounding error is the difference between the exact solution and the approximate solution due to the finite-precision representation of values.[2] Rounding errors are inherent in NLA by definition, because numerical approximations stand in for exact values, although a particular rounding error can vanish when the value happens to be exactly representable in the finite-precision system.
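Both cases can be seen in standard double-precision floating point (a minimal sketch; the specific values are illustrative):

```python
# 0.1 and 0.2 have no exact binary representation, so rounding error
# appears in even a single addition:
print(0.1 + 0.2 == 0.3)        # False
print(abs((0.1 + 0.2) - 0.3))  # a tiny nonzero difference, about 5.6e-17

# Values exactly representable in binary (sums of powers of two)
# incur no rounding error at all:
print(0.25 + 0.5 == 0.75)      # True
```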

Numerical methods

There are many methods for solving NLA problems. The most common general strategy is to transform a system, in one or more steps, into an easier-to-solve system with the same solutions.[4] This can be done either with a direct method, in which the solution is obtained by a finite algorithm, or with an iterative method, which may require an infinite number of iterations to converge exactly and must therefore be truncated before convergence.
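The distinction can be made concrete with a small pure-Python sketch (the function names and the 2×2 example system are illustrative assumptions, not from the article): a direct method produces the answer in a fixed number of arithmetic steps, while Jacobi iteration only approaches it and must be stopped after a chosen number of sweeps.

```python
def solve_direct_2x2(a, b, c, d, e, f):
    """Direct method: solve [[a, b], [c, d]] @ [x, y] = [e, f] by Cramer's rule."""
    det = a * d - b * c
    return (e * d - b * f) / det, (a * f - e * c) / det

def jacobi_2x2(a, b, c, d, e, f, iterations):
    """Iterative method: Jacobi sweeps, which converge for diagonally
    dominant systems but must be truncated after finitely many steps."""
    x = y = 0.0
    for _ in range(iterations):
        # Simultaneous update from the previous iterate (Jacobi, not Gauss-Seidel).
        x, y = (e - b * y) / a, (f - c * x) / d
    return x, y

# Diagonally dominant system: 4x + y = 9, x + 3y = 8.
print(solve_direct_2x2(4.0, 1.0, 1.0, 3.0, 9.0, 8.0))   # exact: (19/11, 23/11)
print(jacobi_2x2(4.0, 1.0, 1.0, 3.0, 9.0, 8.0, 50))      # close to the same values
```

After 50 sweeps the Jacobi answer agrees with the direct one to many digits, but it is still an approximation that was truncated, not a finite exact procedure.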

Systems of linear equations

Common numerical methods include Gaussian elimination (LU factorization), Cholesky factorization for symmetric positive definite systems, and iterative methods such as Jacobi, Gauss-Seidel, and the conjugate gradient method.

Many specialized methods exist for solving systems with sparse matrices.[5]
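As a sketch of the direct approach for small dense systems, here is a minimal Gaussian elimination with partial pivoting in pure Python (the function name and test system are illustrative):

```python
def gaussian_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting.

    A is a list of row lists and b a list of right-hand-side values;
    both are copied into an augmented matrix, not modified in place.
    """
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix [A | b]
    for k in range(n):
        # Partial pivoting: bring the row with the largest pivot magnitude
        # to row k, which limits the growth of rounding errors.
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            factor = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= factor * M[k][j]
    # Back substitution on the resulting upper-triangular system.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# 2x + y = 3, x + 3y = 5 has the solution x = 0.8, y = 1.4.
print(gaussian_solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0]))  # approximately [0.8, 1.4]
```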

Linear least squares

Common numerical methods include the normal equations (solved via Cholesky factorization), QR factorization, and the singular value decomposition (SVD).
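For the simplest case, fitting a line y ≈ c0 + c1·t, the normal equations AᵀA c = Aᵀy reduce to a 2×2 system that can be solved directly. A hedged pure-Python sketch (the function name and data points are illustrative assumptions):

```python
def fit_line(ts, ys):
    """Least-squares line fit y ~ c0 + c1 * t via the normal equations."""
    n = len(ts)
    s_t = sum(ts)
    s_tt = sum(t * t for t in ts)
    s_y = sum(ys)
    s_ty = sum(t * y for t, y in zip(ts, ys))
    # The normal equations here are the 2x2 system:
    #   [n    s_t ] [c0]   [s_y ]
    #   [s_t  s_tt] [c1] = [s_ty]
    det = n * s_tt - s_t * s_t
    c0 = (s_y * s_tt - s_t * s_ty) / det
    c1 = (n * s_ty - s_t * s_y) / det
    return c0, c1

# Data lying exactly on y = 1 + 2t, so the fit recovers the line exactly.
print(fit_line([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0]))  # (1.0, 2.0)
```

In practice QR factorization or the SVD is preferred for larger or ill-conditioned problems, since forming AᵀA squares the condition number.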

Eigenvalue problems

Common numerical methods include power iteration, inverse iteration, the QR algorithm, and, for large sparse matrices, Lanczos and Arnoldi iteration.
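Power iteration is the simplest of these: repeatedly multiplying a vector by the matrix and normalizing converges (for a matrix with a unique dominant eigenvalue) to the dominant eigenvector. A minimal pure-Python sketch (the function name and example matrix are illustrative):

```python
def power_iteration(A, iterations=100):
    """Estimate the dominant eigenvalue magnitude and eigenvector of A
    by power iteration with infinity-norm normalization."""
    n = len(A)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iterations):
        # Multiply: w = A v.
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        # Normalize by the largest entry; its size estimates |lambda|.
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]
    return lam, v

# [[2, 1], [1, 2]] has eigenvalues 3 and 1 with dominant eigenvector (1, 1).
lam, v = power_iteration([[2.0, 1.0], [1.0, 2.0]])
print(lam)  # 3.0
```

This is itself an iterative method in the sense above: it converges only in the limit and must be truncated, contributing truncation error.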

History

Many of the ideas and methods used in NLA today predate the invention of the computer and were the result of various mathematicians and scientists, including Newton (1642-1727), Euler (1707-1783), Lagrange (1736-1813), Laplace (1749-1827), Legendre (1752-1833), Gauss (1777-1855), Cauchy (1789-1857), Jacobi (1804-1851), Adams (1819-1892), Chebyshev (1821-1894), Hermite (1822-1901), Laguerre (1834-1886), and others.[2][6]

Matrix theory was studied extensively during the 19th century by Cayley (1821-1895) and Sylvester (1814-1897), formalizing many of the ideas of previous mathematicians for NLA applications. It was not until the early 20th century that many of the modern methods came into practice, such as when Dwyer (1901-1982) applied Gauss's ideas to matrices to formulate modern Gaussian elimination.[6]

Research accelerated through the mid-20th century, with contributions from many mathematicians, including Turing (1912-1954) and von Neumann (1903-1957).[1] With the advent of the modern computer, NLA techniques have continued to develop rapidly, and ever larger systems are solved as technology improves.


References

  1. ^ a b c Trefethen, Lloyd N.; Bau III, David (1997). Numerical Linear Algebra. Philadelphia: Society for Industrial and Applied Mathematics. ISBN 978-0-89871-361-9.
  2. ^ a b c d e Heath, Michael T. (2002). Scientific Computing: An Introductory Survey. McGraw-Hill. ISBN 0-07-239910-4.
  3. ^ Wilkinson, J. H.; Reinsch, C. "Linear Algebra, Volume II of Handbook for Automatic Computation". SIAM Review 14, 658 (1972).
  4. ^ Leader, Jeffery J. (2004). Numerical Analysis and Scientific Computation. Addison Wesley. ISBN 0-201-73499-0.
  5. ^ Golub, Gene H.; van Loan, Charles F. (1996). Matrix Computations. Johns Hopkins University Press. ISBN 978-0-8018-5414-9.
  6. ^ a b Golub, Gene H. "A History of Modern Numerical Algebra" (2005). Retrieved 5 December 2011.