In the mathematical subfield of numerical analysis, a Newton polynomial, named after its inventor Isaac Newton, is the interpolation polynomial for a given set of data points.
The Newton polynomial is sometimes called the Newton interpolation polynomial. This is a bit misleading, as there is only one interpolation polynomial for a given set of knots. The more precise name is interpolation polynomial in the Newton form.
Solving an interpolation problem leads to a problem in linear algebra where we have to solve a system of linear equations. Using the standard monomial basis for our interpolation polynomial, we get the Vandermonde matrix, which is dense and expensive to solve. By choosing another basis, the Newton basis, we get a much simpler lower triangular matrix, which can be solved faster.
We construct the Newton basis as
{\displaystyle n_{n}(x):=\prod _{\nu =0}^{n-1}(x-x_{\nu })\qquad n=0,\ldots ,N}
Using this basis we have to solve
{\displaystyle \mathbf {A} \mathbf {a} ={\begin{pmatrix}1&&&&0\\1&x_{1}-x_{0}&&&\\1&x_{2}-x_{0}&\ddots &&\\\vdots &\vdots &&\ddots &\\1&x_{N}-x_{0}&\ldots &\ldots &\prod _{n=0}^{N-1}(x_{N}-x_{n})\end{pmatrix}}{\begin{pmatrix}a_{0}\\\vdots \\a_{N}\end{pmatrix}}={\begin{pmatrix}y_{0}\\\vdots \\y_{N}\end{pmatrix}}}
to solve the polynomial interpolation problem.
This lower triangular system can be solved recursively, by forward substitution, using the following equations
{\displaystyle \sum _{\nu =0}^{n}a_{\nu }n_{\nu }(x_{n})=y_{n}\qquad n=0,\ldots ,N}
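The forward substitution described above can be sketched in Python. This is a minimal illustration, not code from the article; the function name is my own. Each row of the triangular system determines one new coefficient from the previously computed ones:

```python
def newton_coefficients_forward(xs, ys):
    """Solve the lower triangular Newton system A a = y by forward substitution.

    Row n of the system reads sum_{v=0}^{n} a_v * n_v(x_n) = y_n, where
    n_v(x) = (x - x_0) * ... * (x - x_{v-1}) is the Newton basis (n_0 = 1).
    """
    N = len(xs) - 1
    a = []
    for n in range(N + 1):
        # Accumulate sum_{v=0}^{n-1} a_v * n_v(x_n), building the basis
        # values incrementally; after the loop, basis == n_n(x_n).
        acc = 0.0
        basis = 1.0  # n_0(x) = 1, the empty product
        for v in range(n):
            acc += a[v] * basis
            basis *= xs[n] - xs[v]  # now basis == n_{v+1}(x_n)
        a.append((ys[n] - acc) / basis)
    return a
```

For the points (0, 1), (1, 3), (2, 7), which lie on x² + x + 1, this yields the coefficients [1, 2, 1].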
Given a set of N + 1 data points
{\displaystyle (x_{0},y_{0}),\ldots ,(x_{N},y_{N})}
where no two x_n are the same, the interpolation polynomial is a polynomial N(x) of degree at most N with
{\displaystyle N(x_{n})=y_{n}\qquad n=0,\ldots ,N}
According to the unisolvence theorem, such a polynomial exists and is unique.
The Newton polynomial is the solution to the interpolation problem. It is given by a linear combination of Newton basis polynomials:
{\displaystyle N(x):=\sum _{n=0}^{N}a_{n}n_{n}(x)}
with the Newton basis polynomials defined as
{\displaystyle n_{n}(x):=\prod _{\nu =0}^{n-1}(x-x_{\nu })}
and the coefficients defined as
{\displaystyle a_{n}:=[y_{0},\ldots ,y_{n}]}
where
{\displaystyle [y_{0},\ldots ,y_{n}]}
is the notation for divided differences.
Thus the Newton polynomial can be written as
{\displaystyle N(x):=[y_{0}]+[y_{0},y_{1}](x-x_{0})+\ldots +[y_{0},\ldots ,y_{N}](x-x_{0})\ldots (x-x_{N-1})}
The notion of divided differences is a recursive division process.
We define
{\displaystyle [y_{\nu }]:=y_{\nu }\qquad \nu =0,\ldots ,n}
{\displaystyle [y_{\nu },\ldots ,y_{\nu +j}]:={\frac {[y_{\nu +1},\ldots ,y_{\nu +j}]-[y_{\nu },\ldots ,y_{\nu +j-1}]}{x_{\nu +j}-x_{\nu }}}\qquad \nu =0,\ldots ,n-j,\quad j=1,\ldots ,n}
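The recursion above translates directly into a short Python sketch (the function name and table layout are my own choices, not from the article). Row j of the returned table holds all divided differences of order j:

```python
def divided_differences(xs, ys):
    """Return the divided-difference table as a list of rows.

    table[j][v] holds [y_v, ..., y_{v+j}]; row 0 is simply the y values,
    since [y_v] := y_v.
    """
    n = len(xs)
    table = [list(ys)]  # order-0 differences: [y_v] = y_v
    for j in range(1, n):
        row = []
        for v in range(n - j):
            # [y_v,...,y_{v+j}] = ([y_{v+1},...,y_{v+j}] - [y_v,...,y_{v+j-1}])
            #                     / (x_{v+j} - x_v)
            row.append((table[j - 1][v + 1] - table[j - 1][v]) / (xs[v + j] - xs[v]))
        table.append(row)
    return table
```

The Newton coefficients are then the leading entries a_j = table[j][0].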
For the first few [y n ] this yields
{\displaystyle [y_{0}]=y_{0}}
{\displaystyle [y_{0},y_{1}]={\frac {y_{1}-y_{0}}{x_{1}-x_{0}}}}
{\displaystyle [y_{0},y_{1},y_{2}]={\frac {y_{2}-y_{0}-{\frac {x_{2}-x_{0}}{x_{1}-x_{0}}}(y_{1}-y_{0})}{(x_{2}-x_{0})(x_{2}-x_{1})}}}
To make the recursive process clearer, the divided differences can be put in tabular form:
{\displaystyle {\begin{matrix}x_{0}&y_{0}=[y_{0}]&&&\\&&[y_{0},y_{1}]&&\\x_{1}&y_{1}=[y_{1}]&&[y_{0},y_{1},y_{2}]&\\&&[y_{1},y_{2}]&&[y_{0},y_{1},y_{2},y_{3}]\\x_{2}&y_{2}=[y_{2}]&&[y_{1},y_{2},y_{3}]&\\&&[y_{2},y_{3}]&&\\x_{3}&y_{3}=[y_{3}]&&&\\\end{matrix}}}
The coefficients appear along the top diagonal of this table:
{\displaystyle a_{0}=y_{0}}
{\displaystyle a_{1}=[y_{0},y_{1}]}
{\displaystyle a_{2}=[y_{0},y_{1},y_{2}]}
{\displaystyle a_{3}=[y_{0},y_{1},y_{2},y_{3}]}
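Once the coefficients are known, the Newton form N(x) = a_0 + a_1(x - x_0) + … can be evaluated efficiently by nested multiplication, a Horner-like scheme. This is a minimal sketch; the function name and the nested-evaluation framing are my additions, not from the article:

```python
def newton_eval(xs, coeffs, x):
    """Evaluate the Newton form a_0 + a_1(x-x_0) + a_2(x-x_0)(x-x_1) + ...

    Uses nested multiplication: fold from the highest-order coefficient
    inward, multiplying by (x - x_n) at each step.
    """
    result = coeffs[-1]
    for n in range(len(coeffs) - 2, -1, -1):
        result = result * (x - xs[n]) + coeffs[n]
    return result
```

With knots [0, 1, 2] and coefficients [1, 2, 1] (the polynomial x² + x + 1 from above), evaluating at x = 3 gives 13.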