Draft:Newton's method for systems of nonlinear equations

When multiple nonlinear equations need to be solved for more than one variable, Newton's method for systems of equations may be used to solve the equations simultaneously for the solution vector.[1][2][3] The process closely parallels Newton's method for a single variable, except that the single nonlinear equation is replaced with a system of nonlinear equations, the derivative is replaced with a Jacobian matrix of partial derivatives, and the scalar subtraction is replaced with a vector subtraction. Newton's method for systems of nonlinear equations reduces to Newton's method for a single nonlinear equation when the system contains only one equation.

Procedure


For a single equation $f(x) = y$, Newton's method consists of repeating the following iteration until it no longer produces any change of significance in $x$,

$$x_{k+1} = x_k - \frac{f(x_k) - y}{f'(x_k)}$$

For systems of equations the same is true, but with the vector $X$ in place of the scalar $x$,

$$X_{k+1} = X_k - J(X_k)^{-1}\left(F(X_k) - Y\right)$$

where $X$ is the solution vector to $F(X) = Y$,

$J(X_k)$ is the Jacobian matrix of $F(X)$ evaluated at $X_k$, and

$Y$ is a vector of known values.

If the $Y$ vector is set to all zeros, the defining equations may be rewritten in the commonly found form below.

$$X_{k+1} = X_k - J(X_k)^{-1}F(X_k)$$

or, introducing the correction vector $C$ and avoiding the explicit matrix inverse,

$$J(X_k)\,C = -F(X_k), \qquad X_{k+1} = X_k + C$$
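
The procedure above translates directly into a few lines of code. The sketch below is a minimal illustration in Python with NumPy, not an established implementation; the names newton_system, F, J, and eps are all assumptions introduced here. It solves the linear system $J(X_k)\,C = -(F(X_k) - Y)$ rather than forming the matrix inverse, which is generally cheaper and numerically safer.

    import numpy as np

    def newton_system(F, J, x0, y, eps=1e-3, max_iter=50):
        """Iterate X_{k+1} = X_k + C, where J(X_k) C = -(F(X_k) - Y)."""
        x = np.asarray(x0, dtype=float)
        y = np.asarray(y, dtype=float)
        for k in range(max_iter):
            residual = F(x) - y
            if np.linalg.norm(residual) < eps:      # converged to tolerance eps
                return x, k
            c = np.linalg.solve(J(x), -residual)    # linear solve, not matrix inversion
            x = x + c
        raise RuntimeError("Newton iteration did not converge")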

Simple example


For example, suppose a system of two equations in the unknown vector $X = (x_1, x_2)$ is to be solved, given the vector of known values $Y = (2, 3)$.

The function vector $F(X_k)$, the Jacobian matrix $J(X_k)$ for iteration $k$, and the vector of known values $Y$ define the iteration.

Note that $F(X)$ could have been rewritten to absorb $Y$, and thus eliminate $Y$ from the equations. The equations to be solved at each iteration are

$$J(X_k)\,C = -\left(F(X_k) - Y\right)$$

and

$$X_{k+1} = X_k + C$$

The iterations should be repeated until $\lVert F(X_k) - Y \rVert < \epsilon$, where $\epsilon$ is a value small enough to meet the application's accuracy requirements.

If the vector $X$ is initially chosen to be $(1,1)$, that is, $X_0 = (1,1)$, and $\epsilon$ is chosen to be $10^{-3}$, then the example converges after four iterations.
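
The draft's specific pair of equations is not reproduced above, so the sketch below substitutes an assumed illustrative system of the same shape, $f_1 = x_1^2 + x_2$ and $f_2 = x_1 + x_2^2$ with $Y = (2, 3)$ and $X_0 = (1, 1)$; every particular here is a stand-in, not the draft's own example. It reuses the newton_system sketch from the procedure section.

    import numpy as np

    # Hypothetical stand-in system: f1 = x1^2 + x2, f2 = x1 + x2^2, solved for Y = (2, 3).
    F = lambda x: np.array([x[0]**2 + x[1], x[0] + x[1]**2])
    J = lambda x: np.array([[2.0 * x[0], 1.0],
                            [1.0, 2.0 * x[1]]])    # analytic Jacobian of F

    solution, iterations = newton_system(F, J, x0=[1.0, 1.0], y=[2.0, 3.0], eps=1e-3)
    print(solution, iterations)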

Iterations


The following iterations were made during the course of the solution.

Iteration convergence sequence: at iteration 0 the table records the starting vector $X$ and the function vector $F(X)$; at each of iterations 1 through 4 it records the Jacobian $J$, the correction vector $C$, the updated vector $X$, and the function vector $F(X)$.
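
A trace like the table above can be produced by printing $J$, $C$, $X$, and $F(X)$ at each step. This instrumented variant of the earlier sketch is again purely illustrative:

    import numpy as np

    def newton_trace(F, J, x0, y, eps=1e-3, max_iter=50):
        """Newton iteration that prints J, C, X, and F(X) at each step."""
        x = np.asarray(x0, dtype=float)
        y = np.asarray(y, dtype=float)
        print("0: X =", x, " F(X) =", F(x))
        for k in range(1, max_iter + 1):
            jk = J(x)
            c = np.linalg.solve(jk, -(F(x) - y))   # J(X_k) C = -(F(X_k) - Y)
            x = x + c
            print(f"{k}: J =", jk, " C =", c, " X =", x, " F(X) =", F(x))
            if np.linalg.norm(F(x) - y) < eps:     # converged
                return x, k
        raise RuntimeError("Newton iteration did not converge")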

Practical considerations


Newton's method for systems of equations, especially large systems of equations, can be more finicky than for single equations. Care should be taken to ensure that a solution is found within a reasonable number of iterations.

Singular matrices


The linear system of equations that must be solved at each iteration may have a singular Jacobian matrix, in which case no usable update can be computed. This can occur because the set of equations has no solution, or because of a poorly chosen starting vector $X_0$. For example, had the initial $X_0$ in the example above been chosen to be $(0,0)$, the first iteration would have produced a singular matrix and convergence would have failed. Care should be taken to choose a valid starting $X_0$ that is known to produce a non-singular Jacobian on the first iteration.
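
In code, this failure surfaces when the linear solve is attempted. A minimal guard, sketched under the same assumptions as the newton_system example above (the threshold 1e12 is an illustrative choice, not a standard):

    import numpy as np

    def guarded_step(j_k, residual):
        """Compute the correction C from J(X_k) C = -residual, rejecting singular J."""
        if np.linalg.cond(j_k) > 1e12:   # treat near-singular Jacobians as failures
            raise np.linalg.LinAlgError("Jacobian is singular or ill-conditioned")
        return np.linalg.solve(j_k, -residual)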

Asymptotic divergence to infinity


Even when a system of equations is known to have a solution, the iterations may asymptotically diverge to infinity. This is more likely to happen with large numbers of equations. The condition can often be mitigated or prevented by selecting a good starting point $X_0$. In addition, the following steps may further mitigate divergence (a minimal sketch follows the list).

  • Limit the range of the linear solution, the $C$ vector, to a small range that is still large enough to allow convergence in a reasonable number of iterations.
  • Limit the $X$ vector iterations to known bounds for each $X$ entry.
  • Scale down the $C$ vector entries slightly to slow the convergence; slow convergence is less likely to turn divergent.
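
A minimal sketch of the first and third suggestions above, assuming the hypothetical newton_system iteration: the correction $C$ is damped by a factor alpha and capped at a maximum norm, both of which are illustrative parameters rather than prescribed values.

    import numpy as np

    def limited_update(c, max_norm=1.0, alpha=0.8):
        """Damp the Newton correction and cap its length to discourage divergence."""
        c = alpha * c                    # scale down slightly; slower but safer
        norm = np.linalg.norm(c)
        if norm > max_norm:              # cap the step length
            c = c * (max_norm / norm)
        return c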

Slow convergence


In time-sensitive applications, convergence speed is important: slow convergence can have detrimental effects on the application being supported. Convergence time can be minimized through the following steps (a sketch of the entry limits follows the list).

  • Good selection of the initial starting point $X_0$ is very important in minimizing the number of iterations required to reach an acceptable convergence error. In the example above, had the initial $X_0$ been chosen to be $(2,2)$ instead of $(1,1)$, seven iterations would have been required instead of four.
  • Limit the $X$ vector iterations to known bounds for each $X$ entry. The $X$ vector does not have to diverge to slow things down; if no limit is placed on the vector, or the limit is too large, the iterations may spend too much time recomputing large values instead of converging.
  • Scale down the $C$ vector entries slightly to slow the convergence. This may help prevent the iterations from jumping around and taking too long to converge.
  • Select a convergence error threshold as large as possible while still meeting the application requirements. For example, had an error of $10^{-15}$ been chosen for the example above, six iterations would have been required, as opposed to the four needed to converge to an error of $10^{-3}$. The additional two iterations may be acceptable for high-precision applications, but are wasted effort for applications that need only modest precision.
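
The per-entry limits in the second suggestion can be applied by clipping each new iterate back into a known range; the bounds lo and hi below are hypothetical placeholders for whatever limits the application knows.

    import numpy as np

    def bounded_iterate(x, c, lo, hi):
        """Apply the Newton update, then clip each entry of X to its known bounds."""
        return np.clip(x + c, lo, hi)

Clipping keeps the iterate inside the region where the solution is known to lie, at the cost of departing from the pure Newton step whenever a bound is hit.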

Ensure a solution exists


It is much easier to determine whether a solution exists for a single equation than for a system. For example, $x^2 = 4$ has an obvious known solution $(2)$, while for $x^2 = -4$ it is obvious that no solution exists in the set of real numbers. With systems of equations, especially large systems, it is far more difficult to determine whether a solution exists. If a solution does not exist, the iterations will certainly fail, but if a solution does exist, the iterations may still fail. Up-front work may be required to determine whether a solution exists before drawing conclusions from a failed iteration.

Multiple solutions


It is easier to determine that multiple solutions exist with single equations. The equation $x^2 = 4$ from the preceding section, for example, also has the solution $(-2)$. For systems of equations, especially large systems, multiple solutions may not be so obvious, and even when they are, it may be difficult to ensure that convergence takes place at the desired solution. Care should be taken to start with an $X_0$ as near as possible to the desired solution, and to place limits on the individual $X$ entries that steer the iterations away from undesirable solutions and toward the desired solution(s). Many solutions exist in the example used above.

Digital versus continuous derivatives


Derivatives should be calculated from continuous functions whenever possible, to maximize accuracy and minimize convergence problems. If the function is unknown and continuous derivatives cannot be calculated, digital (numerical) derivatives may be used, but care should be taken to maximize accuracy. Use pairs of samples close together when possible. When that is not possible, such as with a fixed string of data, cubic interpolation is preferred, because a cubic interpolant retains a defined first derivative at the data points. If accuracy is not an issue, linear interpolation may be used, while keeping in mind that the first derivative is not defined at the data points, requiring that the next or prior linear segment be used to estimate the derivative.
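
Where only a black-box function is available, the Jacobian can be approximated column by column from pairs of close samples, as suggested above. A minimal central-difference sketch (the function name and the step size h are illustrative choices):

    import numpy as np

    def numerical_jacobian(F, x, h=1e-6):
        """Central-difference approximation of the Jacobian of F at x."""
        x = np.asarray(x, dtype=float)
        n = x.size
        m = np.asarray(F(x)).size
        jac = np.empty((m, n))
        for j in range(n):
            step = np.zeros(n)
            step[j] = h
            # two close samples on either side of x, per the advice above
            jac[:, j] = (np.asarray(F(x + step)) - np.asarray(F(x - step))) / (2.0 * h)
        return jac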

Applications


Applications include shaping the frequency response in filter design, such as constricting a Chebyshev pass-band ripple to a percentage of the pass band.[4]

References

  1. ^ Burden, Richard L.; Faires, J. Douglas; Reynolds, Albert C. (July 1981). Numerical Analysis (2nd ed.). Boston, MA: Prindle, Weber & Schmidt. pp. 448–452. ISBN 0-87150-314-X.
  2. ^ Evans, Gwynne A. (1995). Practical Numerical Analysis. Chichester, West Sussex, England: John Wiley & Sons. pp. 30–33. ISBN 0471955353.
  3. ^ Demidovich, Boris Pavlovich; Maron, Isaak Abramovich (1981). Computational Mathematics (3rd printing). Moscow: MIR Publishers. pp. 460–478. ISBN 9780828507042.
  4. ^ Pelz, Dieter (2005). "Microwave Lowpass Filters with a Constricted Equi-Ripple Passband" (PDF). Applied Microwave & Wireless. 13 (7): 28–34.