Find {\displaystyle \lim _{x\rightarrow 0}{\frac {e^{x}-1}{x}}} and plot the graph of the function for {\displaystyle x\in \left[0,1\right]}.
This limit cannot be evaluated directly since it yields the indeterminate form {\displaystyle {\frac {0}{0}}}, so L'Hôpital's rule must be used.
L'Hôpital's rule states:
{\displaystyle \lim _{x\rightarrow c}{\frac {f(x)}{g(x)}}=\lim _{x\rightarrow c}{\frac {f'(x)}{g'(x)}}}
as long as f and g are functions that are differentiable on an open interval (a,b) containing c, except possibly at c itself.
Applying this technique the following is found:
{\displaystyle f(x)=e^{x}-1\qquad f'(x)=e^{x}}
{\displaystyle g(x)=x\qquad g'(x)=1}
{\displaystyle \lim _{x\rightarrow 0}{\frac {e^{x}-1}{x}}=\lim _{x\rightarrow 0}{\frac {e^{x}}{1}}=e^{0}=1}
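As a quick cross-check (assuming the Symbolic Math Toolbox is available; this snippet is only an illustrative sketch, not part of the original solution), MATLAB's limit function confirms this result:
% Symbolic confirmation of the limit (requires the Symbolic Math Toolbox)
syms x
limit( ( exp ( x ) - 1 ) / x , x , 0 )    % returns 1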
Graph of {\displaystyle x} (on the X-axis) versus {\displaystyle {\frac {e^{x}-1}{x}}} (on the Y-axis)
MATLAB Code: To generate the graph of {\displaystyle x} vs {\displaystyle {\frac {e^{x}-1}{x}}}:
hold on ;
for i = 1 : 1000
    x = ( i - 1 ) / 1000 ;
    if x == 0
        y = 1 ;                          % limiting value at x = 0 (by L'Hopital's rule)
    else
        y = ( exp ( x ) - 1 ) / x ;
    end
    plot ( x , y , '.' );                % use a marker so each sample point is visible
end
hold off ;
Solution for problem 1: Guillermo Varela 19:41, 27 January 2010 (UTC) and Srikanth Madala 19:41, 27 January 2010 (UTC)
Proofread problem 1: Guillermo Varela 19:41, 27 January 2010 (UTC) and Srikanth Madala 19:41, 27 January 2010 (UTC)
Pg. 2-3, Find {\displaystyle P_{n}(x)} and {\displaystyle R_{n+1}(x)} of {\displaystyle e^{x}}
The Taylor series expansion of any function f(x) can be expressed as follows:
{\displaystyle f(x)=P_{n}(x)+R_{n+1}(x)}
where
{\displaystyle P_{n}(x)=f(x_{0})+{\frac {(x-x_{0})}{1!}}f'(x_{0})+\cdots +{\frac {(x-x_{0})^{n}}{n!}}f^{(n)}(x_{0})}
{\displaystyle R_{n+1}(x)={\frac {1}{n!}}\int _{x_{0}}^{x}(x-t)^{n}f^{(n+1)}(t)\,dt}
If f(x) is considered to be {\displaystyle e^{x}}, then by using the above expansion, {\displaystyle P_{n}(x)} becomes:
{\displaystyle P_{n}(x)=e^{x_{0}}+{\frac {(x-x_{0})}{1!}}e^{x_{0}}+{\frac {(x-x_{0})^{2}}{2!}}e^{x_{0}}+\cdots +{\frac {(x-x_{0})^{n}}{n!}}e^{x_{0}}}
Let {\displaystyle x_{0}=0}; then:
{\displaystyle {\begin{matrix}P_{n}(x)&=&e^{0}+{\frac {x}{1!}}e^{0}+{\frac {x^{2}}{2!}}e^{0}+\cdots +{\frac {x^{n}}{n!}}e^{0}\\\\&=&1+{\frac {x}{1!}}+{\frac {x^{2}}{2!}}+\cdots +{\frac {x^{n}}{n!}}\end{matrix}}}
Similarly, {\displaystyle R_{n+1}(x)} becomes:
{\displaystyle R_{n+1}(x)={\frac {1}{n!}}\int _{t=x_{0}=0}^{t=x}(x-t)^{n}e^{t}\,dt}
Using the integral mean value theorem (IMVT):
{\displaystyle {\begin{matrix}R_{n+1}(x)&=&{\frac {e^{\xi _{x}}}{n!}}\int _{0}^{x}(x-t)^{n}dt\\\\&=&{\frac {e^{\xi _{x}}}{n!}}\,{\frac {x^{n+1}}{n+1}}={\frac {x^{n+1}\,e^{\xi _{x}}}{(n+1)!}}\end{matrix}}}
where {\displaystyle \xi _{x}} lies in {\displaystyle \left[0,x\right]}.
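As a quick numerical sanity check (not part of the original solution), the MATLAB sketch below compares the truncated series against exp(x) at an arbitrarily chosen point and verifies that the observed error stays below the remainder bound {\displaystyle x^{n+1}e^{x}/(n+1)!} implied by {\displaystyle \xi _{x}\in [0,x]}:
% Sketch: compare P_n(x) with exp(x) and with the remainder bound (x0 = 0)
x = 0.8;                                        % sample evaluation point (assumed)
for n = 1 : 5
    j = 0 : n;
    Pn = sum( x.^j ./ factorial( j ) );         % P_n(x) = sum_{j=0}^{n} x^j / j!
    err = abs( exp( x ) - Pn );                 % actual truncation error
    bound = x^( n + 1 ) * exp( x ) / factorial( n + 1 );   % |R_{n+1}(x)| bound
    fprintf( 'n = %d: error = %.3e, bound = %.3e\n', n, err, bound );
end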
Solution for problem 2: Srikanth Madala 19:41, 27 January 2010 (UTC)
Proofread problem 2: Egm6341.s10.team2.patodon 04:21, 21 February 2010 (UTC)
Pg. 3-3. {\displaystyle f(x)=\sin(x)\qquad \qquad g(x)=\sin \left(x-{\frac {\pi }{2}}\right)=-\cos(x)}
Plot f and g for {\displaystyle x\in [0,\pi ]}.
Find the infinity norm of:
i) {\displaystyle \left\|f(x)\right\|_{\infty }}
ii) {\displaystyle \left\|g(x)\right\|_{\infty }}
iii) {\displaystyle \left\|f(x)-g(x)\right\|_{\infty }}
The infinity norm is defined as follows:
{\displaystyle \left\|f(.)\right\|_{\infty }=\max \left|f(x)\right|}
Using this definition, the following values are found over {\displaystyle [0,\pi ]}:
i) {\displaystyle \left\|\sin(x)\right\|_{\infty }=1}
ii) {\displaystyle \left\|-\cos(x)\right\|_{\infty }=1}
iii) {\displaystyle \left\|\sin(x)+\cos(x)\right\|_{\infty }={\sqrt {2}}\approx 1.4142}
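No plotting code accompanies the original solution; a minimal MATLAB sketch (assuming a uniform 1000-point grid on {\displaystyle [0,\pi ]}) that plots f and g and approximates the three infinity norms numerically could look like this:
% Sketch: plot f and g on [0, pi] and approximate the infinity norms on a grid
x = linspace( 0, pi, 1000 );                 % assumed sampling grid
f = sin( x );
g = sin( x - pi/2 );                         % equals -cos(x)
plot( x, f, 'b', x, g, 'r' );
legend( 'f(x) = sin(x)', 'g(x) = sin(x - \pi/2)' );
norm_f  = max( abs( f ) )                    % approximately 1
norm_g  = max( abs( g ) )                    % approximately 1
norm_fg = max( abs( f - g ) )                % approximately sqrt(2) = 1.4142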
Solution for problem 3: Guillermo Varela
Pg 5-1. Prove the Integral Mean Value Theorem (IMVT) of p. 2-3 for w(.) non-negative, i.e. {\displaystyle w\geq 0}.
We have the IMVT as
{\displaystyle \int _{a}^{b}f(x)w(x)dx=f(\xi )\int _{a}^{b}w(x)dx}
For a given function {\displaystyle f(x)}, continuous on {\displaystyle [a,b]}, let m be the minimum of the function and M be the maximum of the same function on that interval. Then we know that
{\displaystyle m\leq f(x)\leq M}
Multiplying the inequality throughout by {\displaystyle w(x)\geq 0} and integrating over {\displaystyle [a,b]}, we get:
{\displaystyle \int _{a}^{b}m\,w(x)dx\leq \int _{a}^{b}f(x)\,w(x)dx\leq \int _{a}^{b}M\,w(x)dx}
{\displaystyle m\int _{a}^{b}w(x)dx\leq \int _{a}^{b}f(x)\,w(x)dx\leq M\int _{a}^{b}w(x)dx}
Writing {\displaystyle \int _{a}^{b}w(x)dx=I}, we get
{\displaystyle mI\leq \int _{a}^{b}f(x)w(x)dx\leq MI}
When {\displaystyle I=\int _{a}^{b}w(x)dx=0} (which, since {\displaystyle w\geq 0}, forces the weighted integral of f to vanish as well), the result holds trivially. Consider the case {\displaystyle I>0}. Dividing throughout by {\displaystyle I}:
{\displaystyle m\leq {\frac {1}{I}}\int _{a}^{b}f(x)w(x)dx\leq M}
From the Intermediate Value Theorem, we know that there exists {\displaystyle \xi \in \left[a,b\right]} such that
{\displaystyle f(\xi )={\frac {1}{I}}\int _{a}^{b}f(x)w(x)dx}
i.e.
{\displaystyle f(\xi )\int _{a}^{b}w(x)dx=\int _{a}^{b}f(x)w(x)dx}
Hence Proved
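As an illustration (with f and w chosen arbitrarily here, not taken from the lecture notes), the following MATLAB sketch verifies numerically that a {\displaystyle \xi \in [a,b]} satisfying the IMVT exists for f(x) = exp(x) and w(x) = x^2 on [0,1]:
% Sketch: numerical illustration of the IMVT for assumed f(x) = exp(x), w(x) = x.^2 on [0,1]
f = @( x ) exp( x );
w = @( x ) x.^2;
a = 0;  b = 1;
lhs = integral( @( x ) f( x ).*w( x ), a, b );   % int f(x) w(x) dx
I   = integral( w, a, b );                       % int w(x) dx
fxi = lhs / I;                                   % the value f(xi) guaranteed by the IMVT
xi  = fzero( @( x ) f( x ) - fxi, [ a, b ] );    % locate xi; f is continuous and monotone here
fprintf( 'f(xi) = %.6f at xi = %.6f, which lies in [%g, %g]\n', fxi, xi, a, b );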
Solution for problem 4: --Egm6341.s10.team2.niki 23:22, 20 February 2010 (UTC)
Proofread problem 4:
Pg. 5-1. Use the integral mean value theorem (IMVT) to show eq. 5 P. 2-2 yields the equation 1 of pg. 2-3.
Solution for problem 5: Jiang Pengxiang
Proofread problem 5: Srikanth Madala 19:41, 27 January 2010 (UTC)
Pg. 5-3. Repeat integration by parts to reveal
{\displaystyle {\frac {(x-x_{0})^{2}}{2!}}f^{(2)}(x_{0})+{\frac {(x-x_{0})^{3}}{3!}}f^{(3)}(x_{0})}
plus the remainder.
Next, assuming eqs. 4 and 5 of Pg. 2-2 are true, perform integration by parts once more.
From Meeting 5 lecture notes:
{\displaystyle f(x)=f(x_{0})+\int _{x_{0}}^{x}f^{(1)}(t)\,dt}
Integration by parts yields:
{\displaystyle f(x)=f(x_{0})+{\frac {x-x_{0}}{1!}}f^{(1)}(x_{0})+\int _{x_{0}}^{x}(x-t)f^{(2)}(t)\,dt}
Performing integration by parts on the integral in the above result, with {\displaystyle u=f^{(2)}(t)} and {\displaystyle v'=(x-t)}, so that {\displaystyle u'=f^{(3)}(t)} and {\displaystyle v=-{\frac {(x-t)^{2}}{2}}}:
{\displaystyle {\begin{array}{lcl}\int _{x_{0}}^{x}(x-t)f^{(2)}(t)\,dt&=&\left[-{\frac {(x-t)^{2}}{2}}f^{(2)}(t)\right]_{x_{0}}^{x}+\int _{x_{0}}^{x}{\frac {(x-t)^{2}}{2}}f^{(3)}(t)\,dt\\&=&{\frac {(x-x_{0})^{2}}{2!}}f^{(2)}(x_{0})+\int _{x_{0}}^{x}{\frac {(x-t)^{2}}{2!}}f^{(3)}(t)\,dt\end{array}}}
Integrating by parts once more, with {\displaystyle u=f^{(3)}(t)} and {\displaystyle v'={\frac {(x-t)^{2}}{2!}}}, so that {\displaystyle v=-{\frac {(x-t)^{3}}{3!}}}:
{\displaystyle \int _{x_{0}}^{x}{\frac {(x-t)^{2}}{2!}}f^{(3)}(t)\,dt={\frac {(x-x_{0})^{3}}{3!}}f^{(3)}(x_{0})+\int _{x_{0}}^{x}{\frac {(x-t)^{3}}{3!}}f^{(4)}(t)\,dt}
Substituting back gives
{\displaystyle f(x)=f(x_{0})+{\frac {(x-x_{0})}{1!}}f^{(1)}(x_{0})+{\frac {(x-x_{0})^{2}}{2!}}f^{(2)}(x_{0})+{\frac {(x-x_{0})^{3}}{3!}}f^{(3)}(x_{0})+\int _{x_{0}}^{x}{\frac {(x-t)^{3}}{3!}}f^{(4)}(t)\,dt}
which reveals the two required terms plus the remainder {\displaystyle \int _{x_{0}}^{x}{\frac {(x-t)^{3}}{3!}}f^{(4)}(t)\,dt}.
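A symbolic cross-check of these two integrations by parts can be run in MATLAB (Symbolic Math Toolbox); the choice f(t) = exp(t) below is an arbitrary sample, not part of the original assignment:
% Sketch: symbolic check of the repeated integration by parts for an assumed f(t) = exp(t)
syms t x x0
f = exp( t );                                   % sample function (assumption)
lhs = int( ( x - t ) * diff( f, t, 2 ), t, x0, x );
rhs = ( x - x0 )^2 / factorial( 2 ) * subs( diff( f, t, 2 ), t, x0 ) ...
    + ( x - x0 )^3 / factorial( 3 ) * subs( diff( f, t, 3 ), t, x0 ) ...
    + int( ( x - t )^3 / factorial( 3 ) * diff( f, t, 4 ), t, x0, x );
simplify( lhs - rhs )                           % returns 0 if the identity holds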
Solution for problem 6: Egm6341.s10.team2.patodon 04:15, 21 February 2010 (UTC)
Proofread problem 6: Guillermo Varela
Pg. 6-1
{\displaystyle f(x)=\sin(x)\quad \forall \ x\in [0,\pi ]}
Construct the Taylor series of f(.) around {\displaystyle x_{0}\approx {\frac {\pi }{4}}} for n = 0, 1, ..., 10 and plot for each n.
The Taylor series expansion of any function {\displaystyle f(x)} can be expressed as follows:
{\displaystyle f(x)=P_{n}(x)+R_{n+1}(x)}
where
{\displaystyle P_{n}(x)=f(x_{0})+{\frac {(x-x_{0})}{1!}}f'(x_{0})+\cdots +{\frac {(x-x_{0})^{n}}{n!}}f^{(n)}(x_{0})}
{\displaystyle R_{n+1}(x)={\frac {1}{n!}}\int _{x_{0}}^{x}(x-t)^{n}f^{(n+1)}(t)\,dt}
If {\displaystyle f(x)} is taken to be {\displaystyle \sin x}, with {\displaystyle n=10} and {\displaystyle x_{0}={\frac {\pi }{4}}}, then by using the above expansion, {\displaystyle P_{n}(x)} and {\displaystyle R_{n+1}(x)} become:
{\displaystyle P_{10}(x)=\sin {\frac {\pi }{4}}+\left[{\frac {(x-{\frac {\pi }{4}})}{1!}}\cos {\frac {\pi }{4}}\right]-\left[{\frac {(x-{\frac {\pi }{4}})^{2}}{2!}}\sin {\frac {\pi }{4}}\right]-\left[{\frac {(x-{\frac {\pi }{4}})^{3}}{3!}}\cos {\frac {\pi }{4}}\right]+\left[{\frac {(x-{\frac {\pi }{4}})^{4}}{4!}}\sin {\frac {\pi }{4}}\right]+\cdots +\left[{\frac {(x-{\frac {\pi }{4}})^{10}}{10!}}f^{(10)}({\frac {\pi }{4}})\right]}
where {\displaystyle f^{(10)}({\frac {\pi }{4}})=-\sin \left({\frac {\pi }{4}}\right)} (see the MATLAB code below for the full expansion).
{\displaystyle R_{11}(x)={\frac {1}{10!}}\int _{t={\frac {\pi }{4}}}^{t=x}(x-t)^{10}f^{(11)}(t)\,dt}
where {\displaystyle f^{(11)}(t)=-\cos(t)} (see the MATLAB code below for the full expansion).
MATLAB Code: To generate the equation for {\displaystyle P_{n}(x)}:
syms x z                                         % symbolic variables
f = sin ( z );
p = 0;                                           % accumulates P_n(x)
for n = 0 : 10
    g = diff ( f , n );                          % nth derivative of sin(z)
    p = p + ((( x - ( pi / 4 )) ^n ) / factorial ( n )) * g ;   % nth Taylor term about x0 = pi/4
end
p                                                % series in x; z is substituted by pi/4 below
p =
sin ( z ) + ( x - 1 / 4 * pi ) * cos ( z ) - 1 / 2 * ( x - 1 / 4 * pi ) ^2 * sin ( z ) - 1 / 6 * ( x - 1 / 4 * pi ) ^3 * cos ( z ) + 1 / 24 * ( x - 1 / 4 * pi ) ^4 * sin ( z ) + 1 / 120 * ( x - 1 / 4 * pi ) ^5 * cos ( z ) - 1 / 720 * ( x - 1 / 4 * pi ) ^6 * sin ( z ) - 1 / 5040 * ( x - 1 / 4 * pi ) ^7 * cos ( z ) + 1 / 40320 * ( x - 1 / 4 * pi ) ^8 * sin ( z ) + 1 / 362880 * ( x - 1 / 4 * pi ) ^9 * cos ( z ) - 1 / 3628800 * ( x - 1 / 4 * pi ) ^10 * sin ( z );
To generate the equation for {\displaystyle R_{n+1}(x)}:
>> syms t x
>> f = sin ( x );
>> % note: diff(f,11) is the 11th derivative taken with respect to x, i.e. -cos(x),
>> % which is treated as a constant in the integration over t below
>> r = ( 1 / factorial ( 10 )) * int ((( x - t ) ^10 ) * diff ( f , 11 ), t ,( pi / 4 ), x );
>> r
r =
- 1301357606610903 / 51946031311566097350656 * cos ( x ) * ( x ^11 - 1 / 4194304 * pi ^11 ) + 1301357606610903 / 4722366482869645213696 * x * cos ( x ) * ( x ^10 - 1 / 1048576 * pi ^10 ) - 6506788033054515 / 4722366482869645213696 * x ^2 * cos ( x ) * ( x ^9 - 1 / 262144 * pi ^9 ) + 19520364099163545 / 4722366482869645213696 * x ^3 * cos ( x ) * ( x ^8 - 1 / 65536 * pi ^8 ) - 19520364099163545 / 2361183241434822606848 * x ^4 * cos ( x ) * ( x ^7 - 1 / 16384 * pi ^7 ) + 27328509738828963 / 2361183241434822606848 * x ^5 * cos ( x ) * ( x ^6 - 1 / 4096 * pi ^6 ) - 27328509738828963 / 2361183241434822606848 * x ^6 * cos ( x ) * ( x ^5 - 1 / 1024 * pi ^5 ) + 19520364099163545 / 2361183241434822606848 * x ^7 * cos ( x ) * ( x ^4 - 1 / 256 * pi ^4 ) - 19520364099163545 / 4722366482869645213696 * x ^8 * cos ( x ) * ( x ^3 - 1 / 64 * pi ^3 ) + 6506788033054515 / 4722366482869645213696 * x ^9 * cos ( x ) * ( x ^2 - 1 / 16 * pi ^2 ) - 1301357606610903 / 4722366482869645213696 * x ^10 * cos ( x ) * ( x - 1 / 4 * pi )
To generate the graph for {\displaystyle x} vs {\displaystyle y=f(x)=P_{n}(x)+R_{n+1}(x)}, where {\displaystyle x\in \left[0,\pi \right]}:
z = pi / 4 ;
for i = 1 : 1 : 1000
x = (( i - 1 ) / 1000 ) * pi ;
p = sin ( z ) + ( x - 1 / 4 * pi ) * cos ( z ) - 1 / 2 * ( x - 1 / 4 * pi ) ^2 * sin ( z ) - 1 / 6 * ( x - 1 / 4 * pi ) ^3 * cos ( z ) + 1 / 24 * ( x - 1 / 4 * pi ) ^4 * sin ( z ) + 1 / 120 * ( x - 1 / 4 * pi ) ^5 * cos ( z ) - 1 / 720 * ( x - 1 / 4 * pi ) ^6 * sin ( z ) - 1 / 5040 * ( x - 1 / 4 * pi ) ^7 * cos ( z ) + 1 / 40320 * ( x - 1 / 4 * pi ) ^8 * sin ( z ) + 1 / 362880 * ( x - 1 / 4 * pi ) ^9 * cos ( z ) - 1 / 3628800 * ( x - 1 / 4 * pi ) ^10 * sin ( z );
r = - 1301357606610903 / 51946031311566097350656 * cos ( x ) * ( x ^11 - 1 / 4194304 * pi ^11 ) + 1301357606610903 / 4722366482869645213696 * x * cos ( x ) * ( x ^10 - 1 / 1048576 * pi ^10 ) - 6506788033054515 / 4722366482869645213696 * x ^2 * cos ( x ) * ( x ^9 - 1 / 262144 * pi ^9 ) + 19520364099163545 / 4722366482869645213696 * x ^3 * cos ( x ) * ( x ^8 - 1 / 65536 * pi ^8 ) - 19520364099163545 / 2361183241434822606848 * x ^4 * cos ( x ) * ( x ^7 - 1 / 16384 * pi ^7 ) + 27328509738828963 / 2361183241434822606848 * x ^5 * cos ( x ) * ( x ^6 - 1 / 4096 * pi ^6 ) - 27328509738828963 / 2361183241434822606848 * x ^6 * cos ( x ) * ( x ^5 - 1 / 1024 * pi ^5 ) + 19520364099163545 / 2361183241434822606848 * x ^7 * cos ( x ) * ( x ^4 - 1 / 256 * pi ^4 ) - 19520364099163545 / 4722366482869645213696 * x ^8 * cos ( x ) * ( x ^3 - 1 / 64 * pi ^3 ) + 6506788033054515 / 4722366482869645213696 * x ^9 * cos ( x ) * ( x ^2 - 1 / 16 * pi ^2 ) - 1301357606610903 / 4722366482869645213696 * x ^10 * cos ( x ) * ( x - 1 / 4 * pi )
y = p + r ;
plot ( x , y , '.' );    % use a marker so each sample point is visible
hold on ;
end
Solution for problem 7: Srikanth Madala 19:41, 27 January 2010 (UTC)
Proofread problem 7: Egm6341.s10.team2.patodon 04:22, 21 February 2010 (UTC)
Pg. 6-5
{\displaystyle I=\int _{0}^{1}{\frac {e^{x}-1}{x}}\,dx}
Use three methods to find {\displaystyle I_{n}}:
1) Taylor series expansion, {\displaystyle F_{n}}
2) Composite trapezoidal rule
3) Composite Simpson's rule
for n = 2, 4, 8, ... until the error is of order {\displaystyle 10^{-6}}.
1) Taylor Series Expansion
The goal is to perform the following integration:
{\displaystyle I=\int _{0}^{1}{\frac {e^{x}-1}{x}}\,dx}
The difficulty is that the integrand has no elementary antiderivative, so it must be rewritten in another way before the integral can be evaluated. The method used here is the Taylor (Maclaurin) series expansion. The function can be rewritten as follows:
The Taylor series expansion for {\displaystyle e^{x}} is
{\displaystyle e^{x}=\sum _{j=0}^{\infty }{\frac {x^{j}}{j!}}=1+\sum _{j=1}^{\infty }{\frac {x^{j}}{j!}}}
{\displaystyle e^{x}-1=1+\sum _{j=1}^{\infty }{\frac {x^{j}}{j!}}-1=\sum _{j=1}^{\infty }{\frac {x^{j}}{j!}}}
{\displaystyle {\frac {e^{x}-1}{x}}=\sum _{j=1}^{\infty }{\frac {x^{j-1}}{j!}}=f(x)}
Using this new definition for the function one can then integrate it directly as follows:
{\displaystyle I=\int _{0}^{1}\sum _{j=1}^{n}{\frac {x^{j-1}}{j!}}\,dx}
Integrating this for a value of n=2 yields the following:
{\displaystyle I=\int _{0}^{1}\sum _{j=1}^{n=2}{\frac {x^{j-1}}{j!}}\,dx}
{\displaystyle I=\int _{0}^{1}\left({\frac {x^{0}}{1!}}+{\frac {x^{1}}{2!}}\right)dx=\left[x\right]_{0}^{1}+\left[{\frac {x^{2}}{4}}\right]_{0}^{1}=1+{\frac {1}{4}}=1.25}
For n=4:
{\displaystyle I=\int _{0}^{1}\left({\frac {x^{0}}{1!}}+{\frac {x^{1}}{2!}}+{\frac {x^{2}}{3!}}+{\frac {x^{3}}{4!}}\right)dx=\left[x\right]_{0}^{1}+\left[{\frac {x^{2}}{4}}\right]_{0}^{1}+\left[{\frac {x^{3}}{18}}\right]_{0}^{1}+\left[{\frac {x^{4}}{96}}\right]_{0}^{1}=1+{\frac {1}{4}}+{\frac {1}{18}}+{\frac {1}{96}}=1.3160}
The percent difference between the actual value of the integral (1.3179022) and the estimated value is found by:
{\displaystyle \left|{\frac {{\text{estimate}}-{\text{actual}}}{\text{actual}}}\right|\times 100\%=\left|{\frac {1.25-1.3179022}{1.3179022}}\right|\times 100\%=5.15\%}
The following are the results for other values of n until the error is reduced to the order of {\displaystyle 10^{-6}}:
Taylor Series
n       Estimated Value    Percent Difference
2       1.2500000          5.152294305
4       1.3159722222       0.146443171
8       1.3179018152       2.91949E-05
16      1.3179021515       3.68355E-06
32      1.3179021515       3.68355E-06
Matlab Code used to generate the values for the table:
function I = taylor(n)
% Approximate I = int_0^1 (exp(x)-1)/x dx by integrating the truncated
% Taylor series term by term: I_n = sum_{j=1}^{n} 1/(j! * j).
i = 1;
Itot = 0;
It = 0;
while i <= n
    if i == 1
        Itot = 1;                       % first term: int_0^1 x^0/1! dx = 1
    else
        It = 1/(factorial(i)*i);        % int_0^1 x^(i-1)/i! dx = 1/(i!*i)
    end
    Itot = Itot + It;
    i = i + 1;
end
I = Itot;
2) Composite Trapezoidal Rule
The formula used to analyze the integral for a function using the composite trapezoidal rule is as follows:
{\displaystyle \int _{a}^{b}F(x)dx=(b-a){\frac {f(x_{0})+2\sum _{i=1}^{n-1}f(x_{i})+f(x_{n})}{2n}}}
It is also necessary to state the following, using L'Hôpital's rule:
{\displaystyle \lim _{x\rightarrow 0}{\frac {e^{x}-1}{x}}=1}
For n=2 the integration is approximated as follows:
{\displaystyle \int _{0}^{1}{\frac {e^{x}-1}{x}}\,dx}
{\displaystyle x_{0}=0\qquad x_{1}=0.5\qquad x_{2}=1}
{\displaystyle I=(1-0){\frac {f(x_{0})+2f(x_{1})+f(x_{2})}{4}}=(1){\frac {1+(2\times 1.2974)+1.7183}{4}}=1.328}
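This value can be reproduced with a quick one-off MATLAB check (not part of the original code), taking f(x0) as its limiting value 1:
% Sketch: reproduce the n = 2 composite trapezoidal estimate
f = @( x ) ( exp( x ) - 1 ) ./ x;            % integrand; f(0) is handled by its limit, 1
I2 = ( 1 - 0 ) * ( 1 + 2*f( 0.5 ) + f( 1 ) ) / ( 2*2 )   % approximately 1.3283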
The error is calculated by comparing it to the results obtained using the Taylor Series expansion, as follows:
{\displaystyle {\text{Error}}={\frac {\left|{\text{Trapezoidal Value}}-{\text{Taylor Value}}\right|}{\left|{\text{Taylor Value}}\right|}}\times 100\%={\frac {\left|1.328-1.3179\right|}{1.3179}}\times 100\%=0.7664\%}
This table displays the results for similar values:
Composite Trapezoidal Rule
n        Estimated Value    Percent Difference
2        1.3282917278       0.788338301
4        1.3205046195       0.197466812
8        1.3185530869       0.049388101
16       1.3180649052       0.012345774
32       1.3179428411       0.003083775
64       1.3179123240       0.000768187
128      1.3179046946       0.000189284
256      1.3179027872       4.45585E-05
512      1.3179023104       8.37696E-06
1024     1.3179021912       6.68424E-07
Matlab Code used to generate the values for the estimates:
function I = ctrapz(n)
% Composite trapezoidal rule for I = int_0^1 (exp(x)-1)/x dx with n panels.
i = 0;
Itot = 0;
It = 0;
It2 = 0;
h = 1/n;                              % panel width on [0,1]
while i <= n
    if i == 0
        Itot1 = 1;                    % f(0) taken as its limiting value, 1
    elseif i < n
        It(i) = 2*valu(h*i);          % interior nodes carry weight 2
    else
        It2 = valu(1);                % endpoint f(1)
    end
    Itot = Itot1 + sum(It) + It2;
    i = i + 1;
end
I = Itot/(2*n);                       % (b-a)*[f(x0)+2*sum+f(xn)]/(2n) with b-a = 1

function F = valu(x)
% Integrand f(x) = (exp(x)-1)/x.
F = (exp(x)-1)/x;
3) Composite Simpson's Rule
The rule is defined as follows:
{\displaystyle I_{n}=(b-a){\frac {f(x_{0})+4\sum _{i=1,3,5,...}^{n-1}f(x_{i})+2\sum _{j=2,4,6,...}^{n-2}f(x_{j})+f(x_{n})}{3n}}}
Using this definition the following is found:
Composite Simpson's Rule
n       Estimated Value    Percent Error
2       1.318008666        0.00807842
4       1.317908917        0.00050965
8       1.317902576        2.85304E-05
16      1.317902178        1.66813E-06
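As a sanity check on the first table entry (not part of the original solution), the n = 2 Simpson estimate can be reproduced directly, again taking f(x0) as its limiting value 1:
% Sketch: reproduce the n = 2 composite Simpson estimate
f = @( x ) ( exp( x ) - 1 ) ./ x;            % integrand; f(0) is handled by its limit, 1
I2 = ( 1 - 0 ) * ( 1 + 4*f( 0.5 ) + f( 1 ) ) / ( 3*2 )   % approximately 1.3180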
The Following MATLAB code was used to generate the values:
function I = simpb(a,b,w)
% Composite Simpson's rule for int_a^b (exp(x)-1)/x dx,
% doubling the number of panels n = 2, 4, 8, ... until n >= w.
q = 1;
i = a;
n = 0;
Sum = 0;
c = 0;                               % node counter (even/odd weighting)
while n < w
    n = 2^q;                         % number of panels for this pass
    h = (b - a)/n;                   % panel width
    while i <= b
        fx = (exp(i) - 1)/i;         % integrand (not used at i = a)
        if i == a
            Sum = 1;                 % f(a) taken as the limiting value 1 at x = 0
        elseif i == b
            Sum = Sum + fx;          % endpoint weight 1
        elseif i == (b - h)
            Sum = Sum + (4*fx);      % last interior (odd) node, weight 4
        elseif rem(c,2) == 0
            Sum = Sum + (2*fx);      % even-index nodes, weight 2
        else
            Sum = Sum + (4*fx);      % odd-index nodes, weight 4
        end
        c = c + 1;
        i = i + h;
    end
    n                                % display the current n
    In = Sum*(h/3);                  % Simpson estimate for this n
    I = In;
    i = a;                           % reset for the next pass
    c = 0;
    q = q + 1;
    Sum = 0;
end
Comparing all of the methods, one can conclude that the composite Simpson's rule was the most efficient way to numerically integrate this function.
Solution for problem 8: Guillermo Varela
Pg. 7-1
{\displaystyle {\frac {e^{x}-1}{x}}={\frac {1}{x}}[e^{x}-1]=f(x)}
1) Expand {\displaystyle e^{x}} in a Taylor series with remainder:
{\displaystyle R(x)={\frac {(x-0)^{n+1}}{(n+1)!}}\exp \left[\zeta (x)\right]}
2) Find the Taylor series expansion and remainder of f(x), eq. 4 of p. 6-3.
Given:
{\displaystyle P_{n}(x)=f(x_{0})+{\frac {(x-x_{0})}{1!}}f^{(1)}(x_{0})+\cdots +{\frac {(x-x_{0})^{n}}{n!}}f^{(n)}(x_{0})}
[equation 4 p 2-2]
{\displaystyle R(x)={\frac {(x-0)^{n+1}}{(n+1)!}}\exp \left[\zeta (x)\right]}
[equation 1 p 2-3]
{\displaystyle P_{n}(x)=e^{x_{0}}+{\frac {(x-x_{0})}{1!}}e^{x_{0}}+\cdots +{\frac {(x-x_{0})^{n}}{n!}}e^{x_{0}}}
For the case that {\displaystyle x_{0}=0}, we get
{\displaystyle P_{n}(x)=1+{\frac {x}{1!}}+\cdots +{\frac {x^{n}}{n!}}}
{\displaystyle P_{n}(x)=\sum _{j=0}^{n}{\frac {x^{j}}{j!}}}
Using equation 1 p 2-3, we get the remainder as
{\displaystyle R_{n+1}(x)={\frac {(x-x_{0})^{n+1}}{(n+1)!}}f^{(n+1)}(\zeta (x))}
For {\displaystyle x_{0}=0}, we get
{\displaystyle R_{n+1}(x)={\frac {x^{n+1}}{(n+1)!}}f^{(n+1)}(\zeta (x))}
Finally,
{\displaystyle f(x)=e^{x}=\sum _{j=0}^{n}{\frac {x^{j}}{j!}}+{\frac {x^{n+1}}{(n+1)!}}f^{(n+1)}(\zeta (x))=\sum _{j=0}^{n}{\frac {x^{j}}{j!}}+{\frac {x^{n+1}}{(n+1)!}}e^{\zeta (x)}}
{\displaystyle f(x)={\frac {1}{x}}[e^{x}-1]}
{\displaystyle e^{x}=\sum _{j=0}^{\infty }{\frac {x^{j}}{j!}}=1+{\frac {x}{1!}}+{\frac {x^{2}}{2!}}+\cdots +{\frac {x^{n}}{n!}}+\cdots }
{\displaystyle \therefore \ [e^{x}-1]={\frac {x}{1!}}+{\frac {x^{2}}{2!}}+\cdots +{\frac {x^{n}}{n!}}+\cdots }
Dividing both sides by x, we get
{\displaystyle {\frac {[e^{x}-1]}{x}}=f(x)=\sum _{j=1}^{\infty }{\frac {x^{j-1}}{j!}}}
and, since {\displaystyle x_{0}=0}, the remainder divided by x becomes
{\displaystyle R_{n+1}(x)={\frac {x^{n}}{(n+1)!}}f^{(n+1)}(\zeta (x))}
where {\displaystyle \zeta (x)\in [0,x]}.
Finally,
{\displaystyle f(x)={\frac {[e^{x}-1]}{x}}=\sum _{j=1}^{n}{\frac {x^{j-1}}{j!}}+{\frac {x^{n}}{(n+1)!}}f^{(n+1)}(\zeta (x))=\sum _{j=1}^{n}{\frac {x^{j-1}}{j!}}+{\frac {x^{n}}{(n+1)!}}e^{\zeta (x)}}
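A short numerical check (an illustrative sketch with an arbitrarily chosen x and n, not part of the original solution) confirms that the truncated series differs from f(x) by no more than the bound {\displaystyle x^{n}e^{x}/(n+1)!} implied by {\displaystyle \zeta (x)\in [0,x]}:
% Sketch: numerical check of the truncated series for f(x) = (exp(x)-1)/x about x0 = 0
x = 0.7;  n = 4;                                 % sample point and order (assumptions)
j = 1 : n;
series = sum( x.^( j - 1 ) ./ factorial( j ) );  % sum_{j=1}^{n} x^(j-1)/j!
fx = ( exp( x ) - 1 ) / x;                       % exact value
rem_bound = x^n * exp( x ) / factorial( n + 1 ); % |R_{n+1}| <= x^n e^x/(n+1)!
fprintf( 'f(x) = %.6f, series = %.6f, |diff| = %.2e <= bound %.2e\n', ...
         fx, series, abs( fx - series ), rem_bound );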
Solution for problem 9: Egm6341.s10.team2.niki 23:23, 20 February 2010 (UTC)
Proofread problem 9:
P. 8-2.
Use eq. 2 of pg. 8-2 to obtain eq. 1 of Pg. 7-1.
Solution for problem 10: Jiang Pengxiang
Proofread problem 10: Srikanth Madala 19:41, 27 January 2010 (UTC)
P. 8-3
Show that eq. 4 is equal to eq. 2 by expanding eq. 4.
Equation 2:
{\displaystyle p_{2}(x_{j})=f(x_{j})}
Equation 4:
{\displaystyle p_{2}(x_{j})=\sum _{i=0}^{2}\vartheta _{i}(x_{j})f(x_{i})\qquad {\text{for }}j=0,1,2}
Performing the summation:
{\displaystyle p_{2}(x_{j})=\sum _{i=0}^{2}\vartheta _{i}(x_{j})f(x_{i})=\vartheta _{0}(x_{j})f(x_{0})+\vartheta _{1}(x_{j})f(x_{1})+\vartheta _{2}(x_{j})f(x_{2})}
Definition of the Kronecker delta (Meeting 8 notes):
{\displaystyle \vartheta _{i}(x_{j})=\delta _{ij}=\left\{{\begin{matrix}1&i=j\\0&i\neq j\end{matrix}}\right.}
Therefore,
{\displaystyle \vartheta _{i}(x_{j})f(x_{i})=0\qquad {\text{for }}i\neq j}
{\displaystyle \vartheta _{i}(x_{j})f(x_{i})=f(x_{j})\qquad {\text{for }}i=j}
For {\displaystyle j=0,1,{\text{ or }}2}, two of the three terms in the summation vanish: the two terms with {\displaystyle i\neq j} have {\displaystyle \vartheta _{i}(x_{j})=0}, while the single remaining term with {\displaystyle i=j} has {\displaystyle \vartheta _{i}(x_{j})=1}.
Thus,
{\displaystyle p_{2}(x_{j})=f(x_{j})}
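The Lagrange basis functions themselves are not written out in this solution; a small MATLAB sketch (with three arbitrarily chosen nodes) that builds the quadratic basis and verifies the Kronecker-delta property numerically could look like this:
% Sketch: verify theta_i(x_j) = delta_ij for quadratic Lagrange basis functions
xn = [ 0, 0.5, 1 ];                              % assumed interpolation nodes x_0, x_1, x_2
theta = @( i, x ) prod( ( x - xn( [ 1:i-1, i+1:3 ] ) ) ./ ( xn( i ) - xn( [ 1:i-1, i+1:3 ] ) ) );
D = zeros( 3 );
for i = 1 : 3
    for j = 1 : 3
        D( i, j ) = theta( i, xn( j ) );         % equals 1 when i == j, 0 otherwise
    end
end
disp( D )                                        % identity matrix confirms eq. 4 reduces to eq. 2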
Solution for problem 11: Egm6341.s10.team2.patodon 04:16, 21 February 2010 (UTC)
Proofread problem 11:
Contributing Authors
--Niki Nachappa Chenanda Ganapathy 16:20, 27 January 2010 (UTC)
--Guillermo Varela 19:23, 27 January 2010 (UTC)
--Srikanth Madala 19:41, 27 January 2010 (UTC)
--Patrick O'Donoughue 20:12, 27 January 2010 (UTC)
--Jiang Pengxiang 21:13, 27 January 2010 (UTC)
Problem Assignments
Problem       Solution    Proofread
Problem 1     SM          GV
Problem 2     SM          PO
Problem 3     GV          NN
Problem 4     NN          JP
Problem 5     JP          SM
Problem 6     PO          GV
Problem 7     SM          PO
Problem 8     GV          NN
Problem 9     NN          JP
Problem 10    JP          SM
Problem 11    PO          GV