Truncation errors are defined as the errors that result from using an approximation in place of an exact mathematical procedure.
There are two ways to measure these errors:

- Local Truncation Error (LTE): the error $\tau$ introduced by the approximation method at each step.
- Global Truncation Error (GTE): the error $e$, the absolute difference between the correct value and the approximate value.
Assume that our methods take the following form. Let $y_{n+1}$ and $y_n$ be approximation values. Then

$$y_{n+1} = y_n + h \cdot A(t_n, y_n, h, f),$$

where $h$ is the time step, equal to $t_{n+1} - t_n$, and $A$ is an increment function: some algorithm for approximating the average slope $\frac{y_{n+1} - y_n}{h}$.
Three important examples of $A$ are listed below (a code sketch of all three follows the list):
- Euler's method: $A(t_n, y_n, h, f) = f(t_n, y_n)$.
- Modified Euler's method: $A(t_n, y_n, h, f) = \frac{1}{2}(A_1 + A_2)$, where
  $$A_1 = f(t_n, y_n) \quad \text{and} \quad A_2 = f(t_n + h,\, y_n + h \cdot A_1).$$
- Runge-Kutta method (classical fourth order): $A(t_n, y_n, h, f) = \frac{1}{6}(A_1 + 2A_2 + 2A_3 + A_4)$, where
  $$\begin{aligned} A_1 &= f(t_n, y_n), \\ A_2 &= f(t_n + \tfrac{1}{2}h,\, y_n + \tfrac{1}{2}h \cdot A_1), \\ A_3 &= f(t_n + \tfrac{1}{2}h,\, y_n + \tfrac{1}{2}h \cdot A_2), \\ A_4 &= f(t_n + h,\, y_n + h \cdot A_3). \end{aligned}$$
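These increment functions translate directly into code. The following is a minimal Python sketch of all three; the function names and the test problem $y' = y$ are our own choices, not from the text.

```python
# Increment functions A(t_n, y_n, h, f) for the three one-step methods above.

def A_euler(t, y, h, f):
    # Euler's method: the slope at the left endpoint.
    return f(t, y)

def A_modified_euler(t, y, h, f):
    # Modified Euler: the average of the slopes at the two endpoints.
    A1 = f(t, y)
    A2 = f(t + h, y + h * A1)
    return 0.5 * (A1 + A2)

def A_rk4(t, y, h, f):
    # Classical fourth-order Runge-Kutta: a weighted average of four slopes.
    A1 = f(t, y)
    A2 = f(t + 0.5 * h, y + 0.5 * h * A1)
    A3 = f(t + 0.5 * h, y + 0.5 * h * A2)
    A4 = f(t + h, y + h * A3)
    return (A1 + 2 * A2 + 2 * A3 + A4) / 6.0

def step(t, y, h, f, A):
    # One step of y_{n+1} = y_n + h * A(t_n, y_n, h, f).
    return y + h * A(t, y, h, f)

# One step of each method for y' = y, y(0) = 1 (exact: e^0.1 ≈ 1.10517).
f = lambda t, y: y
for A in (A_euler, A_modified_euler, A_rk4):
    print(A.__name__, step(0.0, 1.0, 0.1, f, A))
```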
Why do we care about truncation errors?
In the case of one-step methods, the local truncation error measures the amount by which the exact solution of the differential equation fails to satisfy the difference equation that defines the method. The local truncation error for multistep methods is defined similarly.
Consider a one-step method with local truncation error $\tau_n(h)$ at the $n$th step.
This method is consistent with the differential equation it approximates if
$$\lim_{h \to 0} \max_{1 \le n \le N} |\tau_n(h)| = 0.$$
Note that here we assume that the approximation values are exactly equal to the true solution at every step.
The method is convergent with respect to the differential equation it approximates if
$$\lim_{h \to 0} \max_{1 \le n \le N} |y_n - y(t_n)| = 0,$$
where $y_n$ denotes the approximation obtained from the method at the $n$th step and $y(t_n)$ the exact value of the solution of the differential equation.
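Both properties can be checked numerically. Below is a minimal Python sketch that tracks $\max_n |\tau_n(h)|$ and $\max_n |y_n - y(t_n)|$ for Euler's method as $h$ shrinks; the test problem $y' = y$, $y(0) = 1$ on $[0, 1]$ and the step sizes are our own choices.

```python
import math

f = lambda t, y: y                 # right-hand side of y' = y
y_exact = lambda t: math.exp(t)    # exact solution with y(0) = 1

for N in (10, 100, 1000, 10000):
    h = 1.0 / N
    max_lte, max_gte, y = 0.0, 0.0, 1.0
    for n in range(N):
        t = n * h
        # Local truncation error: take one Euler step from the exact value.
        lte = y_exact(t + h) - (y_exact(t) + h * f(t, y_exact(t)))
        max_lte = max(max_lte, abs(lte))
        # Global error: keep iterating the method itself.
        y = y + h * f(t, y)
        max_gte = max(max_gte, abs(y - y_exact(t + h)))
    print(f"h = {h:.0e}   max |tau_n(h)| = {max_lte:.2e}   max |e_n(h)| = {max_gte:.2e}")
```

Both maxima tend to zero as $h \to 0$, so Euler's method is consistent and convergent in the sense of the definitions above.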
How do we avoid truncation errors?
The truncation error generally increases as the step size increases, while the roundoff error decreases as the step size increases, so choosing a step size means trading one error against the other.
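This trade-off can be made visible by working in single precision, where roundoff appears at modest step sizes. Below is a minimal Python sketch; the test problem $y' = -y$ and the use of float32 are our own choices.

```python
import math
import numpy as np

# Euler's method in single precision on y' = -y, y(0) = 1, integrated to T = 1.
exact = math.exp(-1.0)
for k in range(3, 17):
    N = 2 ** k
    h = np.float32(1.0 / N)
    y = np.float32(1.0)
    for _ in range(N):
        y = y + h * (-y)  # every operation stays in float32
    print(f"h = 2^-{k:2d}   |error| = {abs(float(y) - exact):.3e}")
```

The error first shrinks like $O(h)$ while truncation dominates, then stalls and grows again as the roundoff accumulated over the $\sim 1/h$ steps takes over.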
Relationship Between Local Truncation Error and Global Truncation Error
The global truncation error (GTE) is one order lower than the local truncation error (LTE).
That is, if $\tau_n(h) = O(h^{p+1})$, then $e_n(h) = O(h^p)$.
We assume perfect knowledge of the true solution at the initial time step.
Let $\tilde{y}(t)$ be the exact solution of
$$\tilde{y}' = f(t, \tilde{y}), \qquad \tilde{y}(t_n) = y_n.$$
The truncation error at step $n+1$ is defined as
$$\tau_{n+1}(h) = \tilde{y}(t_{n+1}) - y_{n+1}.$$
Also, the global errors are defined as
$$\begin{aligned} e_0(h) &= 0, \\ e_{n+1}(h) &= y(t_{n+1}) - y_{n+1} \\ &= [y(t_{n+1}) - \tilde{y}(t_{n+1})] + [\tilde{y}(t_{n+1}) - y_{n+1}]. \end{aligned}$$
By the triangle inequality,
$$|e_{n+1}(h)| \le |y(t_{n+1}) - \tilde{y}(t_{n+1})| + |\tilde{y}(t_{n+1}) - y_{n+1}|. \tag{1}$$
The second term on the right-hand side of (1) is the truncation error $\tau_{n+1}(h)$. Here we assume
$$\tau_{n+1}(h) = \tilde{y}(t_{n+1}) - y_{n+1} = O(h^{p+1}). \tag{2}$$
Thus,
$$|\tilde{y}(t_{n+1}) - y_{n+1}| \le C h^{p+1}. \tag{3}$$
The first term on the right-hand side of (1) is the difference between two exact solutions. Both $y(t)$ and $\tilde{y}(t)$ satisfy $y' = f(t, y)$, so
$$y'(t) = f(t, y) \quad \text{and} \quad \tilde{y}'(t) = f(t, \tilde{y}).$$
Subtracting one equation from the other gives
$$y'(t) - \tilde{y}'(t) = f(t, y) - f(t, \tilde{y}), \quad \text{so} \quad |y'(t) - \tilde{y}'(t)| = |f(t, y) - f(t, \tilde{y})|.$$
Since $f$ is Lipschitz continuous in $y$,
$$|y'(t) - \tilde{y}'(t)| \le L\, |y(t) - \tilde{y}(t)| \quad \text{for } t > t_n.$$
By Gronwall's inequality,
$$|y(t) - \tilde{y}(t)| \le |y(t_n) - \tilde{y}(t_n)| \exp\!\left( \int_{t_n}^{t} L \, ds \right) = e^{L(t - t_n)}\, |y(t_n) - \tilde{y}(t_n)|,$$
where $t \in [t_n, t_{n+1}]$.
Setting $t = t_{n+1}$, we have
$$\begin{aligned} |y(t_{n+1}) - \tilde{y}(t_{n+1})| &\le e^{L(t_{n+1} - t_n)}\, |y(t_n) - \tilde{y}(t_n)| \\ &= e^{Lh}\, |y(t_n) - \tilde{y}(t_n)| \\ &= e^{Lh}\, |y(t_n) - y_n| \\ &= e^{Lh}\, |e_n(h)|. \end{aligned} \tag{4}$$
Plugging equations (3) and (4) into (1), we get
$$|e_{n+1}(h)| \le e^{Lh}\, |e_n(h)| + C h^{p+1}. \tag{5}$$
Note that (5) is a recursive inequality valid for all values of $n$. Next, we use it to estimate $|e_N(h)|$, where we assume $Nh = T$.
Let $\alpha = e^{Lh}$. Dividing both sides of (5) by $\alpha^{n+1}$, we get
$$\frac{|e_{n+1}(h)|}{\alpha^{n+1}} \le \frac{|e_n(h)|}{\alpha^n} + C h^{p+1} \frac{1}{\alpha^{n+1}}.$$
Writing this out for $n = 0, 1, 2, \ldots, N-1$:
$$\frac{|e_1(h)|}{\alpha^1} \le \frac{|e_0(h)|}{\alpha^0} + C h^{p+1} \frac{1}{\alpha^1},$$
$$\frac{|e_2(h)|}{\alpha^2} \le \frac{|e_1(h)|}{\alpha^1} + C h^{p+1} \frac{1}{\alpha^2},$$
$$\vdots$$
$$\frac{|e_N(h)|}{\alpha^N} \le \frac{|e_{N-1}(h)|}{\alpha^{N-1}} + C h^{p+1} \frac{1}{\alpha^N}.$$
Summing these inequalities, the intermediate terms cancel and we obtain
$$\begin{aligned} \frac{|e_N(h)|}{\alpha^N} &\le \frac{|e_0(h)|}{\alpha^0} + C h^{p+1} \left( \frac{1}{\alpha^1} + \frac{1}{\alpha^2} + \cdots + \frac{1}{\alpha^N} \right) \\ &= \frac{|e_0(h)|}{\alpha^0} + C h^{p+1} \left[ \frac{1}{\alpha^N} \left( 1 + \alpha + \alpha^2 + \cdots + \alpha^{N-1} \right) \right] \\ &= \frac{|e_0(h)|}{\alpha^0} + C h^{p+1} \left[ \frac{1}{\alpha^N} \left( \frac{\alpha^N - 1}{\alpha - 1} \right) \right], \end{aligned}$$
where the last step sums the geometric series.
Since $e_0(h) = 0$, we have
$$\begin{aligned} |e_N(h)| &\le C h^{p+1} \left[ \frac{1}{\alpha^N} \left( \frac{\alpha^N - 1}{\alpha - 1} \right) \right] \\ &\le C h^{p+1} \left( \frac{\alpha^N - 1}{\alpha - 1} \right), \quad \text{since } \alpha^N > 1. \end{aligned}$$
Using the inequality $e^x - 1 \ge x$, we get
$$\alpha - 1 = e^{Lh} - 1 \ge Lh \quad \text{and} \quad \alpha^N - 1 = e^{LNh} - 1 = e^{LT} - 1.$$
Therefore,
$$|e_N(h)| \le C h^{p+1} \left( \frac{e^{LT} - 1}{Lh} \right) = C \left( \frac{e^{LT} - 1}{L} \right) h^p.$$
That is,
$$|e_N(h)| = O(h^p). \tag{6}$$
From equations (2) and (6), $\tau_{n+1}(h) = O(h^{p+1})$ and $|e_N(h)| = O(h^p)$, so we can conclude that the global truncation error is one order lower than the local truncation error.
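This relationship can be checked empirically. The sketch below estimates both errors for Euler's method, for which $p = 1$: halving $h$ should cut the local error by a factor of about 4 ($O(h^2)$) and the global error by about 2 ($O(h)$). The test problem $y' = y$, $y(0) = 1$, $T = 1$ is our own choice.

```python
import math

f = lambda t, y: y
T, y0 = 1.0, 1.0

def gte(h):
    # Global error at t = T after iterating Euler's method from y0.
    y, N = y0, round(T / h)
    for n in range(N):
        y = y + h * f(n * h, y)
    return abs(math.exp(T) - y)

def lte(h):
    # Local error of a single Euler step started on the exact solution.
    return abs(math.exp(h) - (y0 + h * f(0.0, y0)))

for h in (0.1, 0.05, 0.025, 0.0125):
    print(f"h = {h:<7} LTE = {lte(h):.3e}   GTE = {gte(h):.3e}")
# Successive LTE ratios approach 4 and GTE ratios approach 2, matching
# tau = O(h^2) and e = O(h) for Euler's method.
```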
In this graph, $c = a + \frac{b - a}{2}$. The red line is the true value, the green line is the first step, and the blue line is the second step. $\overline{AB}$ is the local truncation error at step 1, $\tau_1 = e_1$, equal to $\overline{CD}$. $\overline{DE}$ is the separation that arises because, after the first step, we are on the wrong solution of the ODE. $\overline{EF}$ is $\tau_2$. Thus, $\overline{CF}$ is the global truncation error at step 2, $e_2$.
We can see from this that
$$e_{n+1} = e_n + h \left[ A(t_n, y(t_n), h, f) - A(t_n, y_n, h, f) \right] + \tau_{n+1}.$$
Then,
$$e_2 = e_1 + h \left[ A(t_1, y(t_1), h, f) - A(t_1, y_1, h, f) \right] + \tau_2,$$
so
$$e_2 = \overline{AB} + \overline{DE} + \overline{EF} = \overline{CF}.$$
Exercise: Find the order of the two-step Adams–Bashforth method. You need to show the order of its truncation error.
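As a numerical sanity check to accompany the exercise (not a substitute for the derivation), the sketch below estimates the observed global order of the two-step Adams–Bashforth method, $y_{n+1} = y_n + h\left(\frac{3}{2} f(t_n, y_n) - \frac{1}{2} f(t_{n-1}, y_{n-1})\right)$, on a test problem of our own choosing; the starting value $y_1$ is seeded from the exact solution so that it does not pollute the order.

```python
import math

f = lambda t, y: y  # test problem y' = y, y(0) = 1 on [0, 1]

def ab2_error(h):
    # Two-step Adams-Bashforth with y_1 taken from the exact solution.
    N = round(1.0 / h)
    y_prev, y = 1.0, math.exp(h)
    for n in range(1, N):
        y_prev, y = y, y + h * (1.5 * f(n * h, y) - 0.5 * f((n - 1) * h, y_prev))
    return abs(math.exp(1.0) - y)

for h in (0.1, 0.05, 0.025):
    print(f"h = {h:<6} error = {ab2_error(h):.3e}")
# The error drops by roughly 4 each time h is halved: global order 2,
# consistent with a local truncation error of O(h^3).
```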