The Newton-Cotes formulas are a family of formulas for numerical integration based on equally spaced points.
Let $x_{i}$, $i=0,\ldots ,n$, be $n+1$ equally spaced points, and let $f_{i}$ be the corresponding function values. Let $h$ be the spacing $h=x_{i+1}-x_{i}$, and let $s$ be the interpolation variable $s={\frac {x-x_{0}}{h}}$.
Thus, to interpolate at $x$,
$$\begin{aligned}x-x_{0}&=sh\,,\\x-x_{1}&=x-(x_{0}+h)=(s-1)h\,,\\&\;\;\vdots \\x-x_{n}&=(s-n)h\,.\end{aligned}$$
A polynomial $P_{n}(x)$ of degree $n$ can be constructed to pass through these points and approximate the function $f(x)$. Using divided differences and the Newton polynomial, $P_{n}(x)$ can be obtained as
$$\begin{aligned}P_{n}(x)&=[f_{0}]+[f_{0},f_{1}](x-x_{0})+\cdots +[f_{0},\ldots ,f_{n}](x-x_{0})(x-x_{1})\cdots (x-x_{n-1})\\&=[f_{0}]+[f_{0},f_{1}]sh+\cdots +[f_{0},\ldots ,f_{n}]s(s-1)\cdots (s-n+1)h^{n}\,.\end{aligned}$$
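As a concrete illustration of the divided-difference construction above, the sketch below (the helper names are our own, not from the text) builds the coefficients $[f_{0}],[f_{0},f_{1}],\ldots$ and evaluates $P_{n}(x)$ by nested multiplication:

```python
def divided_differences(xs, fs):
    """Return the leading divided differences [f0], [f0,f1], ..., [f0,...,fn]."""
    coeffs = list(fs)
    n = len(xs)
    for j in range(1, n):
        # Update from the bottom up so earlier entries are still the previous column.
        for i in range(n - 1, j - 1, -1):
            coeffs[i] = (coeffs[i] - coeffs[i - 1]) / (xs[i] - xs[i - j])
    return coeffs

def newton_eval(xs, coeffs, x):
    """Evaluate P_n(x) = [f0] + [f0,f1](x-x0) + ... by Horner-like nesting."""
    result = coeffs[-1]
    for k in range(len(coeffs) - 2, -1, -1):
        result = result * (x - xs[k]) + coeffs[k]
    return result

# Interpolating f(x) = x^2 at three points; P_2 must reproduce a quadratic exactly.
xs = [0.0, 1.0, 2.0]
fs = [x * x for x in xs]
c = divided_differences(xs, fs)
print(newton_eval(xs, c, 1.5))  # 2.25, since P_2(x) = x^2 exactly
```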
From the general form of the polynomial interpolation error, the error of using $P_{n}(x)$ to interpolate $f(x)$ can be obtained as
$$\begin{aligned}E_{\text{interpolate}}(x)&=f(x)-P_{n}(x)\\&={\frac {1}{(n+1)!}}(x-x_{0})(x-x_{1})\cdots (x-x_{n})f^{(n+1)}(\xi )\\&={\frac {1}{(n+1)!}}s(s-1)(s-2)\cdots (s-n)h^{n+1}f^{(n+1)}(\xi )\end{aligned}$$
where $x_{0}\leqslant \xi \leqslant x_{n}$.
Since $dx=d(x_{0}+sh)=h\,ds$, the error term of the numerical integration is
$$E_{\text{integrate}}=\int _{x_{0}}^{x_{n}}E_{\text{interpolate}}(x)\,dx={\frac {h^{n+2}}{(n+1)!}}f^{(n+1)}(\xi )\int _{0}^{n}s(s-1)\cdots (s-n)\,ds\,.\qquad (1)$$
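The integral $\int_{0}^{n}s(s-1)\cdots (s-n)\,ds$ in (1) fixes the constant in each rule's error term. It can be evaluated exactly with rational arithmetic; the sketch below (pure standard library, `error_integral` is a name we introduce here) expands the product into polynomial coefficients and integrates term by term:

```python
from fractions import Fraction

def error_integral(n):
    """Exactly compute the integral of s(s-1)...(s-n) over [0, n]."""
    # Polynomial coefficients (lowest degree first) of prod_{k=0..n} (s - k).
    poly = [Fraction(1)]
    for k in range(n + 1):
        new = [Fraction(0)] * (len(poly) + 1)
        for i, coef in enumerate(poly):
            new[i + 1] += coef       # multiply term by s
            new[i] -= k * coef       # multiply term by -k
        poly = new
    # Integrate term by term; the lower limit 0 contributes nothing.
    return sum(c * Fraction(n) ** (i + 1) / (i + 1) for i, c in enumerate(poly))

print(error_integral(1))  # -1/6: trapezoid, h^3/2! * (-1/6) = -h^3/12
print(error_integral(2))  # 0: Simpson 1/3, the cubic term vanishes
print(error_integral(3))  # -9/10: Simpson 3/8, h^5/4! * (-9/10) = -(3/80) h^5
```

These three values reproduce the coefficients derived for the individual rules in the rest of this section.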
Error terms for different rules
Consider the trapezoid rule on a single interval. The integration uses the two end points of the interval, so $n+1=2$ and thus $n=1$. Applying (1), we get
$$E_{\text{integrate}}=h\int _{0}^{1}{\frac {s(s-1)}{2}}h^{2}f''(\xi )\,ds=-{\frac {1}{12}}h^{3}f''(\xi )=O(h^{3})$$
where $x_{0}\leqslant \xi \leqslant x_{1}$.
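This local error can be checked numerically. For $f(x)=x^{2}$, the second derivative is the constant $2$, so the single-interval trapezoid error should equal $-{\frac {1}{12}}h^{3}\cdot 2$ with no remainder (a small sketch we add for illustration):

```python
# For f(x) = x^2 on [0, h], f''(x) = 2 is constant, so the error
# E = exact - approximation matches -(1/12) h^3 * 2 exactly.
h = 0.5
trap = h / 2.0 * (0.0 + h ** 2)   # trapezoid rule using the two end points
exact = h ** 3 / 3.0              # integral of x^2 from 0 to h
err = exact - trap
predicted = -(1.0 / 12.0) * h ** 3 * 2.0
print(err, predicted)
```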
Thus the local error is $O(h^{3})$. Now consider the composite trapezoid rule. Given that $n={\frac {x_{n}-x_{0}}{h}}$, the global error is
$$\left|\sum _{i=0}^{n-1}-{\frac {1}{12}}h^{3}f''(\xi _{i})\right|=\left|n\left[-{\frac {1}{12}}h^{3}f''({\bar {\xi }})\right]\right|={\frac {1}{12}}(x_{n}-x_{0})h^{2}\left|f''({\bar {\xi }})\right|=O(h^{2})\,,\qquad (2)$$
where $x_{i}\leqslant \xi _{i}\leqslant x_{i+1}$ and $x_{0}\leqslant {\bar {\xi }}\leqslant x_{n}$.
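The $O(h^{2})$ global behavior can be observed by halving $h$ and watching the error drop by a factor of about four. A minimal sketch (the integrand $e^{x}$ on $[0,1]$ is our own choice of test case):

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoid rule with n equal increments."""
    h = (b - a) / n
    interior = sum(f(a + i * h) for i in range(1, n))
    return h * (0.5 * (f(a) + f(b)) + interior)

exact = math.e - 1.0  # integral of e^x over [0, 1]
e1 = abs(trapezoid(math.exp, 0.0, 1.0, 8) - exact)
e2 = abs(trapezoid(math.exp, 0.0, 1.0, 16) - exact)
print(e1 / e2)  # close to 4, as (2) predicts when h is halved
```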
To justify (2), we need the theorem below[2], found on page 345:
If $g(x)$ is continuous and the $c_{i}\geq 0$, then for some value $\theta$ in the interval spanned by all the arguments,
$$g(\theta )\sum c_{i}=\sum _{i=1}^{N}c_{i}g(\theta _{i})\,.$$
Simpson's 1/3 Rule
Consider Simpson's 1/3 rule. In this case, three equally spaced points are used for integration, so $n+1=3$. Applying (1), we get
$$E_{\text{integrate}}=h\int _{0}^{2}{\frac {s(s-1)(s-2)}{6}}h^{3}f'''(\xi )\,ds=0$$
where $x_{0}\leqslant \xi \leqslant x_{2}$.
This does not mean that the error is zero; it only means that the cubic error term integrates to zero. The actual error term can be obtained from the next term in the Newton polynomial, giving
$$E_{\text{integrate}}=h\int _{0}^{2}{\frac {s(s-1)(s-2)(s-3)}{24}}h^{4}f^{(4)}(\xi )\,ds=-{\frac {1}{90}}h^{5}f^{(4)}(\xi )=O(h^{5})\,.$$
Thus the local error is $O(h^{5})$ and the global error is $O(h^{4})$.
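The $-{\frac {1}{90}}h^{5}f^{(4)}(\xi )$ local error can be checked directly. For $f(x)=x^{4}$, the fourth derivative is the constant $24$, so a single Simpson panel should miss the exact integral by $-{\frac {1}{90}}h^{5}\cdot 24$ exactly (an illustrative sketch, not from the original text):

```python
def simpson_panel(f, x0, h):
    """Simpson's 1/3 rule on a single panel [x0, x0 + 2h]."""
    return h / 3.0 * (f(x0) + 4.0 * f(x0 + h) + f(x0 + 2.0 * h))

# f(x) = x^4 has constant f''''(x) = 24, so E = exact - approximation
# equals -(1/90) h^5 * 24 up to floating-point rounding.
h = 0.1
exact = (2.0 * h) ** 5 / 5.0          # integral of x^4 from 0 to 2h
err = exact - simpson_panel(lambda x: x ** 4, 0.0, h)
pred = -(1.0 / 90.0) * h ** 5 * 24.0
print(err, pred)
```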
Simpson's 3/8 Rule
Consider Simpson's 3/8 rule. In this case, $n+1=4$ since four equally spaced points are used. Applying (1), we get
$$E_{\text{integrate}}=h\int _{0}^{3}{\frac {s(s-1)(s-2)(s-3)}{24}}h^{4}f^{(4)}(\xi )\,ds=-{\frac {3}{80}}h^{5}f^{(4)}(\xi )=O(h^{5})$$
where $x_{0}\leqslant \xi \leqslant x_{3}$.
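The $-{\frac {3}{80}}$ coefficient can be verified the same way as for the 1/3 rule, again using $f(x)=x^{4}$ with its constant fourth derivative $24$ (an illustrative sketch):

```python
def simpson38(f, x0, h):
    """Simpson's 3/8 rule on a single panel [x0, x0 + 3h]."""
    return 3.0 * h / 8.0 * (f(x0) + 3.0 * f(x0 + h)
                            + 3.0 * f(x0 + 2.0 * h) + f(x0 + 3.0 * h))

# E = exact - approximation should equal -(3/80) h^5 * 24 exactly.
h = 0.1
exact = (3.0 * h) ** 5 / 5.0          # integral of x^4 from 0 to 3h
err = exact - simpson38(lambda x: x ** 4, 0.0, h)
pred = -(3.0 / 80.0) * h ** 5 * 24.0
print(err, pred)
```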
Both Simpson's 1/3 rule and the 3/8 rule have error terms of order $h^{5}$. With the smaller coefficient, the 1/3 rule appears more accurate. Why, then, do we need the 3/8 rule? The 3/8 rule is useful when the total number of increments $n$ is odd: three increments can be handled with the 3/8 rule, and the remaining even number of increments can be handled with the 1/3 rule.
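One way to combine the two rules for an odd increment count is sketched below (the function name and the choice of putting the 3/8 panel first are our own; either end works):

```python
def composite_simpson_mixed(ys, h):
    """Integrate tabulated values ys at spacing h, allowing an odd number of
    increments: a 3/8 panel absorbs the first three when needed, 1/3 elsewhere."""
    n = len(ys) - 1
    total = 0.0
    start = 0
    if n % 2 == 1:
        # The 3/8 rule consumes three increments, leaving an even count.
        total += 3.0 * h / 8.0 * (ys[0] + 3.0 * ys[1] + 3.0 * ys[2] + ys[3])
        start = 3
    for i in range(start, n, 2):
        total += h / 3.0 * (ys[i] + 4.0 * ys[i + 1] + ys[i + 2])
    return total

# Five increments (odd): integrate x^2 on [0, 5]; both rules are exact
# for quadratics, so the result is the exact value 125/3.
h = 1.0
ys = [x * x for x in range(6)]
print(composite_simpson_mixed(ys, h))
```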
Given the set of data points below, evaluate the numerical integral
$$I=\int _{3.1}^{3.9}f(x)\,dx\,.$$

$x$	$f(x)=-{\frac {1}{x}}$
3.1	-0.32258065
3.5	-0.28571429
3.9	-0.25641026
Use the trapezoid rule. First try $h=0.8$; that is, use only the two end points. We get
$$I(h=0.8)={\frac {0.8}{2}}(-0.32258065-0.25641026)=-0.23159636$$
Compared with the exact solution $I=-0.22957444$, we have
$$E_{\text{integrate}}=0.00202192\,.$$
Using all three points with $h=0.4$, we get
$$I(h=0.4)={\frac {0.4}{2}}(-0.32258065-2\times 0.28571429-0.25641026)=-0.23008389$$
and so
$$E_{\text{integrate}}=0.00050945\,.$$
Thus the error ratio is
$${\frac {E_{\text{integrate}}(h=0.8)}{E_{\text{integrate}}(h=0.4)}}=3.97\,.$$
This is close to the value predicted by
$${\frac {E_{\text{integrate}}(h)}{E_{\text{integrate}}(h/2)}}={\frac {O(h^{2})}{O((h/2)^{2})}}=2^{2}=4\,.$$
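The whole worked example can be reproduced in a few lines (the exact value comes from $\int -1/x\,dx=-\ln x$):

```python
import math

f = lambda x: -1.0 / x

# Trapezoid with only the two end points (h = 0.8).
I1 = 0.8 / 2.0 * (f(3.1) + f(3.9))
# Trapezoid with all three points (h = 0.4).
I2 = 0.4 / 2.0 * (f(3.1) + 2.0 * f(3.5) + f(3.9))

exact = -math.log(3.9 / 3.1)  # -0.22957444...
ratio = abs(I1 - exact) / abs(I2 - exact)
print(I1, I2, ratio)  # ratio is roughly 4, consistent with O(h^2)
```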
Using the data given below, find the maximum error incurred in using Newton's forward interpolation formula to approximate $e^{x}$ at $x=0.14$.

$x$	$e^{x}$
0.1	1.10517
0.2	1.22140
0.3	1.34986
0.4	1.49182
0.5	1.64872
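One way to approach this exercise is through the standard interpolation error bound $|E(x)|\leq {\frac {\max |f^{(5)}|}{5!}}\left|\prod _{i}(x-x_{i})\right|$; for $f(x)=e^{x}$ the fifth derivative is again $e^{x}$, maximized at the right end of the table. A sketch of the computation (this is our suggested route, not a unique answer):

```python
import math

xs = [0.1, 0.2, 0.3, 0.4, 0.5]
x = 0.14

# Product term of the interpolation error at x.
prod = 1.0
for xi in xs:
    prod *= (x - xi)

# f^(5)(x) = e^x is increasing, so its maximum on [0.1, 0.5] is e^0.5.
bound = math.exp(0.5) / math.factorial(5) * abs(prod)
print(bound)  # on the order of 5e-7
```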
When using Simpson's 1/3 rule, what is the error ratio supposed to be?
$${\frac {E_{\text{integrate}}(h)}{E_{\text{integrate}}(h/2)}}={\frac {O(h^{4})}{O((h/2)^{4})}}=2^{4}=16\,.$$
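This predicted ratio of $2^{4}=16$ can be confirmed numerically with a composite 1/3 rule (an illustrative sketch; the test integrand $e^{x}$ is our own choice):

```python
import math

def composite_simpson(f, a, b, n):
    """Composite Simpson's 1/3 rule; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4.0 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2.0 * sum(f(a + i * h) for i in range(2, n, 2))
    return h / 3.0 * s

exact = math.e - 1.0  # integral of e^x over [0, 1]
e1 = abs(composite_simpson(math.exp, 0.0, 1.0, 4) - exact)
e2 = abs(composite_simpson(math.exp, 0.0, 1.0, 8) - exact)
print(e1 / e2)  # close to 16, matching the O(h^4) global error
```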