Determine whether the following pairs of functions are linearly independent:
1. <math>f(x)=x^{2},\quad g(x)=x^{4}</math>
2. <math>f(x)=\cos(x),\quad g(x)=\sin(3x)</math>
First use the Wronskian method, then use the Gramian method.
The Wronskian is defined as:
<math>W(f,g):=\det\begin{bmatrix}f&g\\f'&g'\end{bmatrix}=fg'-gf'</math>
If <math>W(f,g)\neq 0</math>, then the functions f and g are linearly independent.
The Gramian is defined as:
<math>\Gamma(f,g):=\det\begin{bmatrix}\langle f,f\rangle & \langle f,g\rangle\\ \langle g,f\rangle & \langle g,g\rangle\end{bmatrix}</math>
where
<math>\langle f,g\rangle:=\int_{a}^{b}f(x)g(x)\,dx</math>
If <math>\Gamma(f,g)\neq 0</math>, then the functions f and g are linearly independent.
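As a quick numerical illustration of the Gramian criterion (an aside, not part of the assigned solutions), here is a minimal MATLAB/Octave sketch. The function handles, the interval [a,b] = [-1,1], and the tolerance are assumptions chosen to match Problem 1 below.
f = @(x) x.^2;                                   % first function (Problem 1 below)
g = @(x) x.^4;                                   % second function
a = -1;  b = 1;                                  % chosen integration interval
ip = @(u,v) integral(@(x) u(x).*v(x), a, b);     % scalar product <u,v>
Gamma = ip(f,f)*ip(g,g) - ip(f,g)*ip(g,f);       % Gramian determinant
if abs(Gamma) > 1e-12                            % treat values above a small tolerance as nonzero
    disp('f and g are linearly independent on [a,b]')
end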
1. Using Wronskian.
<math>W(f,g)=fg'-gf'=x^{2}(4x^{3})-x^{4}(2x)=4x^{5}-2x^{5}=2x^{5}\neq 0</math>
Therefore, f and g are linearly independent.
2. Using Wronskian.
<math>W(f,g)=fg'-gf'=\cos(x)(3\cos(3x))-\sin(3x)(-\sin(x))\neq 0</math>
(at <math>x=0</math>, for example, this equals 3, so the Wronskian is not identically zero)
Therefore, f and g are linearly independent.
1. Using Gramian with an interval of [-1,1]
<math>\langle f,f\rangle=\int_{-1}^{1}x^{2}(x^{2})\,dx=\frac{1}{5}x^{5}\Big|_{-1}^{1}=\frac{2}{5}</math>
<math>\langle f,g\rangle=\langle g,f\rangle=\int_{-1}^{1}x^{2}(x^{4})\,dx=\frac{1}{7}x^{7}\Big|_{-1}^{1}=\frac{2}{7}</math>
<math>\langle g,g\rangle=\int_{-1}^{1}x^{4}(x^{4})\,dx=\frac{1}{9}x^{9}\Big|_{-1}^{1}=\frac{2}{9}</math>
<math>\Gamma(f,g)=\langle f,f\rangle\langle g,g\rangle-\langle f,g\rangle\langle g,f\rangle=\frac{2}{5}\left(\frac{2}{9}\right)-\frac{2}{7}\left(\frac{2}{7}\right)=\frac{4}{45}-\frac{4}{49}\neq 0</math>
Therefore, f and g are linearly independent.
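For reference, the determinant works out to a small but strictly positive number:
<math>\frac{4}{45}-\frac{4}{49}=\frac{196-180}{2205}=\frac{16}{2205}\approx 0.00726>0</math>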
2. Using Gramian with an interval of [-1,1]
<math>\langle f,f\rangle=\int_{-1}^{1}\cos(x)\cos(x)\,dx=\int_{-1}^{1}\cos^{2}(x)\,dx=\frac{1}{2}\int_{-1}^{1}\left(1+\cos(2x)\right)dx=\frac{1}{2}\left(x+\frac{1}{2}\sin(2x)\right)\Big|_{-1}^{1}=1.4546</math>
<math>\langle f,g\rangle=\langle g,f\rangle=\int_{-1}^{1}\sin(3x)\cos(x)\,dx=0</math>
<math>\langle g,g\rangle=\int_{-1}^{1}\sin^{2}(3x)\,dx=\frac{1}{2}\int_{-1}^{1}\left(1-\cos(6x)\right)dx=\frac{1}{2}\left(x-\frac{1}{6}\sin(6x)\right)\Big|_{-1}^{1}=1.04657</math>
<math>\Gamma(f,g)=\langle f,f\rangle\langle g,g\rangle-\langle f,g\rangle\langle g,f\rangle=1.4546(1.04657)-0(0)\neq 0</math>
Therefore, f and g are linearly independent.
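As a check on the decimal values above, evaluating the antiderivatives gives the closed forms
<math>\langle f,f\rangle=1+\frac{\sin 2}{2}\approx 1.4546,\qquad \langle g,g\rangle=1-\frac{\sin 6}{6}\approx 1.04657</math>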
This problem was solved and uploaded by David Herrick.
Verify that <math>b_{1}</math> and <math>b_{2}</math> in (1)-(2) p.7-34 are linearly independent using the Gramian.
The given Gramian:
<math>\Gamma(b_{1},b_{2})=\det\begin{bmatrix}\langle b_{1},b_{1}\rangle & \langle b_{1},b_{2}\rangle\\ \langle b_{2},b_{1}\rangle & \langle b_{2},b_{2}\rangle\end{bmatrix}</math>
In reference to (3) on p.8-9, the scalar product of two vectors is their dot product:
<math>\langle b_{i},b_{j}\rangle=b_{i}\cdot b_{j}</math>
Calculating the dot products yields:
<math>\langle b_{1},b_{1}\rangle=4+49=53</math>
<math>\langle b_{1},b_{2}\rangle=3+21=24</math>
<math>\langle b_{2},b_{1}\rangle=3+21=24</math>
<math>\langle b_{2},b_{2}\rangle=2.25+9=11.25</math>
Plugging the dot products into the Gramian yields:
<math>\Gamma=596.25-576=20.25\neq 0</math>
Therefore, b1 and b2 are linearly independent.
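The arithmetic can be confirmed numerically; a minimal MATLAB/Octave sketch, assuming <math>b_{1}=2e_{1}+7e_{2}</math> and <math>b_{2}=1.5e_{1}+3e_{2}</math> as given later on this page (consistent with the dot products 4 + 49, 3 + 21, and 2.25 + 9 used above):
b1 = [2; 7];                                           % components of b1 relative to e1, e2
b2 = [1.5; 3];                                         % components of b2
G = [dot(b1,b1) dot(b1,b2); dot(b2,b1) dot(b2,b2)];    % Gram matrix
det(G)                                                 % returns 20.25, i.e. nonzero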
Solved and uploaded by Derik Bell
Show that:
<math>y_{p}(x)=\sum_{i=0}^{n}y_{p,i}(x)</math>
is the particular solution to:
<math>y''+p(x)y'+q(x)y=r(x)</math>
Discuss the choice of particular solutions in the table on p8-3. In other words, for r(x) = kcos(wx), why would you need to have both cos(wx) and sin(wx) in the particular solution?
For a single excitation <math>r_{i}(x)</math>, the corresponding particular solution <math>y_{p,i}</math> satisfies
<math>y_{p,i}''+p(x)y_{p,i}'+q(x)y_{p,i}=r_{i}(x)</math>
for example:
<math>y_{p,0}''+p(x)y_{p,0}'+q(x)y_{p,0}=r_{0}(x)</math>
<math>y_{p,1}''+p(x)y_{p,1}'+q(x)y_{p,1}=r_{1}(x)</math>
<math>y_{p,2}''+p(x)y_{p,2}'+q(x)y_{p,2}=r_{2}(x)</math>
and so on until...
<math>y_{p,n}''+p(x)y_{p,n}'+q(x)y_{p,n}=r_{n}(x)</math>
where by linearity:
<math>r(x)=\sum_{i=0}^{n}r_{i}(x)</math>
Since each <math>y_{p,i}</math> is the solution for the single excitation <math>r_{i}(x)</math>, and <math>r(x)=\sum_{i=0}^{n}r_{i}(x)</math>, then by linearity the particular solution for <math>r(x)</math> is:
<math>y_{p}(x)=\sum_{i=0}^{n}y_{p,i}(x)</math>
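This can also be seen directly by substituting the sum into the left-hand side and using the linearity of differentiation:
<math>\left(\sum_{i=0}^{n}y_{p,i}\right)''+p(x)\left(\sum_{i=0}^{n}y_{p,i}\right)'+q(x)\sum_{i=0}^{n}y_{p,i}=\sum_{i=0}^{n}\left[y_{p,i}''+p(x)y_{p,i}'+q(x)y_{p,i}\right]=\sum_{i=0}^{n}r_{i}(x)=r(x)</math>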
Part 2:
k cos(wx) is a periodic function. As shown by (3) on p8-2, any periodic function can be broken down into a Fourier trigonometric series:
<math>r(x)=a_{0}+\sum_{n=1}^{\infty}\left[a_{n}\cos(nwx)+b_{n}\sin(nwx)\right]</math>
r(x) can be further broken down as the sum of:
<math>r_{a}=a_{0}</math>
<math>r_{b}=\sum_{n=1}^{\infty}a_{n}\cos(nwx)</math>
<math>r_{c}=\sum_{n=1}^{\infty}b_{n}\sin(nwx)</math>
where
<math>r(x)=r_{a}+r_{b}+r_{c}</math>
Since r(x) is expressed in terms of cosines and sines, the particular solution, which is the sum of the individual particular solutions corresponding to <math>r_{a}</math>, <math>r_{b}</math>, and <math>r_{c}</math>, must be expressed in terms of sines and cosines as well. The same reasoning applies to any periodic excitation, as shown on p8-2, including sin(wx). This justifies why the particular solutions for <math>k\cos(wx)</math>, <math>k\sin(wx)</math>, <math>ke^{\alpha x}\cos(wx)</math>, and <math>ke^{\alpha x}\sin(wx)</math> must all include both <math>\cos(wx)</math> and <math>\sin(wx)</math> terms.
This problem was solved and uploaded by John North.
1. Show that cos(7x) and sin(7x) are linearly independent using the Wronskian and the Gramian (integrate over 1 period)
2. Find 2 equations for the two unknowns M,N and solve for M,N
3. Find the overall solution y(x) that corresponds to the initial condition y(0)=1, y'(0)=0. Plot the solution over 3 periods.
(1)
First, using Wronskian:
For two functions f and g, the Wronskian is defined as
<math>W(f,g):=\det\begin{bmatrix}f&g\\f'&g'\end{bmatrix}=fg'-gf'</math>
where f and g are linearly independent if
<math>W(f,g)\neq 0</math>
For
<math>f=\cos(7x),\quad g=\sin(7x),\quad f'=-7\sin(7x),\quad g'=7\cos(7x)</math>
Then,
<math>W(f,g):=\det\begin{bmatrix}\cos(7x)&\sin(7x)\\-7\sin(7x)&7\cos(7x)\end{bmatrix}=7\cos^{2}(7x)+7\sin^{2}(7x)=7\neq 0</math>
Therefore, f and g are linearly independent
Second, using Gramian:
Consider two functions, f and g, where the scalar product is defined as
<math>\langle f,g\rangle:=\int_{a}^{b}f(x)g(x)\,dx</math>
and the Gramian is defined as
<math>\Gamma(f,g):=\det\begin{bmatrix}\langle f,f\rangle&\langle f,g\rangle\\ \langle g,f\rangle&\langle g,g\rangle\end{bmatrix}</math>
Then f and g are linearly independent if
<math>\Gamma(f,g)\neq 0</math>
For <math>f=\cos(7x)</math> and <math>g=\sin(7x)</math>, integrating over one period <math>\left(\frac{2\pi}{7}\right)</math>:
<math>\langle f,f\rangle=\int_{0}^{\frac{2\pi}{7}}\cos^{2}(7x)\,dx</math>
Letting <math>u=7x,\ du=7\,dx</math> and changing the limits of integration accordingly,
<math>\langle f,f\rangle=\frac{1}{7}\int_{0}^{2\pi}\cos^{2}(u)\,du=\frac{1}{7}\left[\frac{u}{2}+\frac{1}{4}\sin 2u\right]_{0}^{2\pi}=\frac{\pi}{7}</math>
<math>\langle g,g\rangle=\int_{0}^{\frac{2\pi}{7}}\sin^{2}(7x)\,dx</math>
Letting <math>u=7x,\ du=7\,dx</math> and changing the limits of integration accordingly,
<math>\langle g,g\rangle=\frac{1}{7}\int_{0}^{2\pi}\sin^{2}(u)\,du=\frac{1}{7}\left[\frac{u}{2}-\frac{1}{4}\sin 2u\right]_{0}^{2\pi}=\frac{\pi}{7}</math>
<math>\langle f,g\rangle=\langle g,f\rangle=\int_{0}^{\frac{2\pi}{7}}\cos(7x)\sin(7x)\,dx</math>
From Kreyszig p.479, sine and cosine are orthogonal to each other over a full period, so the above integral equals zero:
<math>\langle f,g\rangle=\langle g,f\rangle=\int_{0}^{\frac{2\pi}{7}}\cos(7x)\sin(7x)\,dx=0</math>
Plugging in the results of each integral into the Gramian:
<math>\Gamma(f,g)=\det\begin{bmatrix}\langle f,f\rangle&\langle f,g\rangle\\ \langle g,f\rangle&\langle g,g\rangle\end{bmatrix}=\det\begin{bmatrix}\frac{\pi}{7}&0\\0&\frac{\pi}{7}\end{bmatrix}=\frac{\pi^{2}}{49}\neq 0</math>
Therefore, f and g are linearly independent
(2)
Given
<math>y''-3y'-10y=3\cos(7x)</math>
and a particular solution of the form
<math>y_{p}(x)=M\cos(7x)+N\sin(7x),\quad y'_{p}(x)=-7M\sin(7x)+7N\cos(7x),\quad y''_{p}(x)=-49M\cos(7x)-49N\sin(7x)</math>
Plug the particular solution back into the original ODE and collect like terms:
<math>-49M\cos(7x)-49N\sin(7x)+21M\sin(7x)-21N\cos(7x)-10M\cos(7x)-10N\sin(7x)=3\cos(7x)</math>
<math>-59M\cos(7x)-59N\sin(7x)+21M\sin(7x)-21N\cos(7x)=3\cos(7x)</math>
Equating coefficients
<math>-59M=3\rightarrow M=-\frac{3}{59}</math>
<math>-21N=3\rightarrow N=-\frac{1}{7}</math>
<math>M=-\frac{3}{59},\quad N=-\frac{1}{7}</math>
(3)
The overall solution
<math>y(x)=y_{p}(x)+y_{h}(x)</math>
consists of the particular solution and the homogeneous solution.
Homogeneous solution:
<math>y''-3y'-10y=0</math>
<math>a^{2}-4b=(-3)^{2}-4(-10)=49>0</math>
so we have distinct real roots
<math>\lambda_{1,2}=\frac{1}{2}\left[-a\pm\sqrt{a^{2}-4b}\right]=\frac{1}{2}[3\pm 7]=5,-2</math>
<math>y_{h}(x)=c_{1}e^{5x}+c_{2}e^{-2x}</math>
Using initial conditions
<math>y(0)=1,\quad y'(0)=0</math>
and
<math>y'_{h}(x)=5c_{1}e^{5x}-2c_{2}e^{-2x}</math>
<math>5c_{1}-2c_{2}=0</math>
<math>c_{1}+c_{2}=1</math>
Solving the two equations for the two unknowns yields
<math>c_{1}=\frac{2}{7},\quad c_{2}=\frac{5}{7}</math>
<math>y_{h}(x)=\frac{2}{7}e^{5x}+\frac{5}{7}e^{-2x}</math>
Particular solution:
<math>y_{p}(x)=M\cos(\omega x)+N\sin(\omega x)</math>
where
<math>\omega=7,\quad M=-\frac{3}{59},\quad N=-\frac{1}{7}</math>
<math>y_{p}(x)=-\frac{3}{59}\cos(7x)-\frac{1}{7}\sin(7x)</math>
Giving us an overall solution of
<math>y(x)=y_{h}(x)+y_{p}(x)=\frac{2}{7}e^{5x}+\frac{5}{7}e^{-2x}-\frac{3}{59}\cos(7x)-\frac{1}{7}\sin(7x)</math>
Plotting the solution over three periods
<math>P=\frac{2\pi}{7}\Rightarrow 3P=\frac{6\pi}{7}</math>
MATLAB code:
x = 0:0.001:(6*pi)/7;        % three periods of cos(7x): 3*(2*pi/7) = 6*pi/7
y = (2/7).*exp(5.*x) + (5/7).*exp(-2.*x) - (3/59).*cos(7.*x) - (1/7).*sin(7.*x);
plot(x,y)
Solved and uploaded by Joshua House
Find the solution to the following initial condition problem, and plot it over 3 periods.
<math>y''+4y'+13y=2e^{-2x}\cos(3x)</math>
<math>y_{h}(x)=e^{-2x}(A\cos(3x)+B\sin(3x))</math>
<math>y_{p}(x)=xe^{-2x}(M\cos(3x)+N\sin(3x))</math>
<math>y(0)=1,\quad y'(0)=0</math>
First, we take the first and second derivative of the particular solution:
<math>y'_{p}(x)=e^{-2x}(-3M\sin(3x)-2M\cos(3x)+3N\cos(3x)-2N\sin(3x))=e^{-2x}[(-3M-2N)\sin(3x)+(-2M+3N)\cos(3x)]</math>
<math>y''_{p}(x)=e^{-2x}(-9M\cos(3x)+6M\sin(3x)-9N\sin(3x)-6N\cos(3x))-2e^{-2x}(-3M\sin(3x)-2M\cos(3x)+3N\cos(3x)-2N\sin(3x))</math>
<math>=2e^{-2x}[(12M-5N)\sin(3x)+(13M-12N)\cos(3x)]</math>
Now, we plug the particular solution derivatives into the initial equation:
<math>y''+4y'+13y=2e^{-2x}\cos(3x)</math>
<math>2e^{-2x}[(12M-5N)\sin(3x)+(13M-12N)\cos(3x)]+4e^{-2x}[(-3M-2N)\sin(3x)+(-2M+3N)\cos(3x)]</math>
<math>+13e^{-2x}[M\cos(3x)+N\sin(3x)]=2e^{-2x}\cos(3x)</math>
<math>e^{-2x}[(24M-10N)\sin(3x)+(26M-24N)\cos(3x)+(-12M-8N)\sin(3x)+(-8M+12N)\cos(3x)+13M\cos(3x)+13N\sin(3x)]</math>
<math>=2e^{-2x}\cos(3x)</math>
<math>(12M-5N)\sin(3x)+(31M-12N)\cos(3x)=2\cos(3x)</math>
Now, we equate the coefficients of sin(3x) and cos(3x) to determine the unknown coefficients M and N:
<math>12M-5N=0</math>
<math>31M-12N=2</math>
<math>N=\frac{24}{11},\quad M=\frac{10}{11}</math>
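As a sketch (an aside, not part of the original solution), the 2-by-2 system for M and N above can also be solved numerically in MATLAB/Octave:
A = [12 -5; 31 -12];     % coefficients of M and N from equating the sin and cos terms
rhs = [0; 2];            % right-hand sides of the two equations
MN = A\rhs               % returns [10/11; 24/11], i.e. M = 10/11 and N = 24/11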
Therefore, the particular solution is:
<math>y_{p}(x)=e^{-2x}\left[\frac{10}{11}\cos(3x)+\frac{24}{11}\sin(3x)\right]</math>
Now, we focus on the homogeneous part of the solution. It is given to us as:
<math>y_{h}(x)=e^{-2x}[A\cos(3x)+B\sin(3x)]</math>
As you can see, this is identical in form to the particular solution, except that M is now A and N is now B. Therefore, the first derivative of the homogeneous solution will take the same form as the first derivative of the particular solution:
<math>y'_{h}(x)=e^{-2x}(-3A\sin(3x)-2A\cos(3x)+3B\cos(3x)-2B\sin(3x))=e^{-2x}[(-3A-2B)\sin(3x)+(-2A+3B)\cos(3x)]</math>
Remembering the two initial conditions, y(0) = 1 and y'(0) = 0, we apply these to the homogeneous solution:
<math>y_{h}(0)=e^{-2x}[A\cos(3x)+B\sin(3x)]\Big|_{x=0}=1</math>
<math>y'_{h}(0)=e^{-2x}[(-3A-2B)\sin(3x)+(-2A+3B)\cos(3x)]\Big|_{x=0}=0</math>
This yields two equations that we can use to solve for the coefficients A and B:
<math>A=1</math>
<math>-2A+3B=0\Rightarrow 3B=2\Rightarrow B=\frac{2}{3}</math>
Therefore, the homogeneous solution is:
<math>y_{h}(x)=e^{-2x}\left[\cos(3x)+\frac{2}{3}\sin(3x)\right]</math>
We find the final solution, y(x), by adding the homogeneous and particular solutions as seen below:
<math>y(x)=y_{h}(x)+y_{p}(x)=e^{-2x}\left[\cos(3x)+\frac{2}{3}\sin(3x)\right]+e^{-2x}\left[\frac{10}{11}\cos(3x)+\frac{24}{11}\sin(3x)\right]</math>
<math>y(x)=e^{-2x}\left[\frac{21}{11}\cos(3x)+\frac{94}{33}\sin(3x)\right]</math>
MATLAB code:
x = 0:0.001:2*pi;        % three periods of cos(3x): 3*(2*pi/3) = 2*pi
y = exp(-2.*x).*((21/11).*cos(3.*x) + (94/33).*sin(3.*x));
plot(x,y)
This problem was solved and uploaded by Will Knapper
<math>v=4e_{1}+2e_{2}=c_{1}b_{1}+c_{2}b_{2}</math>
The oblique basis vectors are:
<math>b_{1}=2e_{1}+7e_{2}</math>
<math>b_{2}=1.5e_{1}+3e_{2}</math>
1. Find the components <math>c_{1},c_{2}</math> using the Gram matrix as in (1) p.8-11.
2. Verify the results by using (1)-(2) p.7c-34 in (2) p.8-11, and rely on the non-zero determinant of the matrix of components of <math>b_{1},b_{2}</math> relative to the basis <math>e_{1},e_{2}</math>, as discussed on p.7c-34.
1. Using Gram matrix as in (1) p.8-10:
<math>\begin{bmatrix}\langle b_{1},b_{1}\rangle&\langle b_{1},b_{2}\rangle\\ \langle b_{2},b_{1}\rangle&\langle b_{2},b_{2}\rangle\end{bmatrix}\begin{Bmatrix}c_{1}\\c_{2}\end{Bmatrix}=\begin{Bmatrix}\langle b_{1},v\rangle\\ \langle b_{2},v\rangle\end{Bmatrix}</math>
From (3) p.8-9 we know that <math>\langle b_{i},b_{j}\rangle=b_{i}\cdot b_{j}</math>. Solving for the various components we get:
<math>\begin{bmatrix}53&24\\24&11.25\end{bmatrix}\begin{Bmatrix}c_{1}\\c_{2}\end{Bmatrix}=\begin{Bmatrix}22\\12\end{Bmatrix}</math>
In order to solve for <math>c_{1},c_{2}</math> we need to calculate the inverse of <math>\begin{bmatrix}53&24\\24&11.25\end{bmatrix}</math>.
This gives the Gram matrix equation as used in (1) p.8-11:
<math>c=\Gamma^{-1}d</math>
<math>\begin{bmatrix}0.55556&-1.18519\\-1.18519&2.61728\end{bmatrix}\begin{Bmatrix}22\\12\end{Bmatrix}=\begin{Bmatrix}c_{1}\\c_{2}\end{Bmatrix}</math>
Thus:
<math>c_{1}=-2;\quad c_{2}=5.333=\frac{16}{3}</math>
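A minimal MATLAB/Octave check of this step, assuming the Gram matrix and right-hand side computed above:
G = [53 24; 24 11.25];   % Gram matrix of b1, b2
d = [22; 12];            % <b1,v> and <b2,v>
inv(G)                   % matches the inverse shown above
c = G\d                  % returns c1 = -2, c2 = 5.3333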
2. Plugging <math>b_{1}</math> and <math>b_{2}</math> into the expression for <math>v</math>, we get:
<math>c_{1}(2e_{1}+7e_{2})+c_{2}(1.5e_{1}+3e_{2})=4e_{1}+2e_{2}</math>
<math>2c_{1}e_{1}+7c_{1}e_{2}+1.5c_{2}e_{1}+3c_{2}e_{2}=4e_{1}+2e_{2}</math>
<math>e_{1}(2c_{1}+1.5c_{2})+e_{2}(7c_{1}+3c_{2})=4e_{1}+2e_{2}</math>
Separating into components:
<math>2c_{1}+1.5c_{2}=4</math>
and
<math>7c_{1}+3c_{2}=2</math>
Solving the two linearly independent equations we get:
<math>c_{1}=-2;\quad c_{2}=\frac{16}{3}</math>
This problem was solved and uploaded by Radina Dikova
Find the integral (see R5.9)
<math>\int x^{n}\log(1+x)\,dx</math>
using integration by parts and then with the help of the general binomial theorem.
<math>(x+y)^{n}=\sum_{k=0}^{n}\binom{n}{k}x^{n-k}y^{k}</math>
<math>\binom{n}{k}=\frac{n!}{k!(n-k)!}=\frac{n(n-1)\cdots(n-k+1)}{k!}</math>
The indefinite integral of <math>x^{n}</math> is <math>\frac{x^{n+1}}{n+1}+C</math>
For n = 0, we get <math>\int x^{0}\,dx=x</math>
For n = 1, we get <math>\int x^{1}\,dx=\frac{x^{2}}{2}</math>
And the indefinite integral of <math>\log(1+x)</math> is <math>(x+1)\log(1+x)-x+C</math>
Integration by parts
- For n = 0
<math>\int x^{0}\log(1+x)\,dx=\log(1+x)[x+1]-x</math>
- For n = 1
<math>\int x^{1}\log(1+x)\,dx=\frac{1}{2}\left[(x-\log(1+x))+x^{2}\log(1+x)-\frac{x^{2}}{2}\right]</math>
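The two antiderivatives can be checked numerically; a minimal MATLAB/Octave sketch, where log denotes the natural logarithm and the test point x = 2 is an arbitrary choice:
F0 = @(x) (x+1).*log(1+x) - x;                              % n = 0 result above
F1 = @(x) 0.5*((x - log(1+x)) + x.^2.*log(1+x) - x.^2/2);   % n = 1 result above
x = 2;                                                      % arbitrary test point
integral(@(t) log(1+t), 0, x) - F0(x)                       % approximately zero
integral(@(t) t.*log(1+t), 0, x) - F1(x)                    % approximately zero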
Solved and uploaded by Mike Wallace
Consider the L2-ODE-CC(5)p.7b-7 with log(1+x) as excitation:
<math>y''-3y'+2y=r(x)</math>
<math>r(x)=\log(1+x)</math>
and the initial conditions
<math>y\left(-\frac{3}{4}\right)=1,\quad y'\left(-\frac{3}{4}\right)=0</math>
Project the excitation r(x) on the polynomial basis
<math>b_{j}(x)=x^{j},\quad j=0,1,\ldots,n</math>
i.e., find <math>d_{j}</math> such that
<math>r(x)\approx r_{n}(x)=\sum_{j=0}^{n}d_{j}x^{j}</math>
for x in <math>\left[-\frac{3}{4},3\right]</math>, and for n = 3, 6, 9.
Plot <math>r(x),\ r_{n}(x)</math> to show uniform approximation and convergence.
In a separate series of plots, compare the approximation of the function log(1+x) by 2 methods:
A. Projection on polynomial basis (1) p.8-17
B. Taylor series expansion about x = 0
Observe and discuss the pros and cons of each method
Find <math>y_{n}(x)</math> such that:
<math>y_{n}''+ay_{n}'+by_{n}=r_{n}(x)</math>
with the same initial conditions (2) p.7c-28.
Plot <math>y_{n}(x)</math> for n = 3, 6, 9, for x in <math>\left[-\frac{3}{4},3\right]</math>.
In a series of separate plots, compare the results obtained with the projected excitation on polynomial basis to those with truncated Taylor series of the excitation. Plot also the numerical solution as a baseline for comparison.
Using
<math>y''-3y'+2y=\log(1+x)</math>
For n = 0
<math>b_{0}=x^{0}</math>
<math>r=\log(1+x)\approx C_{0}x^{0}</math>
<math>d_{0}=\langle x^{0},\log(1+x)\rangle=\int_{-\frac{3}{4}}^{3}\log(1+x)\,dx=2.14175</math>
<math>\langle x^{0},x^{0}\rangle=\int_{-\frac{3}{4}}^{3}x^{0}x^{0}\,dx=x\Big|_{-\frac{3}{4}}^{3}=\frac{15}{4}</math>
<math>d_{0}=C_{0}\langle x^{0},x^{0}\rangle=C_{0}\left(\frac{15}{4}\right)=2.14175</math>
<math>C_{0}=0.57113</math>
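A quick numerical check of these two numbers (here log is taken as the natural logarithm, which matches the value 2.14175):
d0 = integral(@(x) log(1+x), -3/4, 3)     % approximately 2.14175
C0 = d0/(15/4)                            % approximately 0.57113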
<math>Y=Y_{h}+Y_{p}=C_{1}e^{2x}+C_{2}e^{x}+0.57113</math>
<math>Y'=2C_{1}e^{2x}+C_{2}e^{x}</math>
<math>y\left(-\frac{3}{4}\right)=C_{1}e^{-\frac{3}{2}}+C_{2}e^{-\frac{3}{4}}+0.57113=1</math>
<math>y'\left(-\frac{3}{4}\right)=2C_{1}e^{-\frac{3}{2}}+C_{2}e^{-\frac{3}{4}}=0</math>
Subtracting the second equation from the first in order to find the coefficients:
<math>C_{1}e^{-\frac{3}{2}}-0.57113=-1</math>
<math>C_{1}=-1.92205</math>
<math>C_{2}=1.81582</math>
For n = 0 the final solution will be
<math>Y=-1.92205e^{2x}+1.81582e^{x}+0.57113</math>
For n = 1
<math>C_{0}=-0.09399,\quad C_{1}=0.591217</math>
We need to find the homogeneous Y
<math>\lambda^{2}-3\lambda+2=0\Rightarrow(\lambda-2)(\lambda-1)=0</math>
<math>\lambda_{1,2}=2,1</math>
<math>Y_{h}=C_{1}e^{2x}+C_{2}e^{x}</math>
<math>Y=Y_{h}+Y_{p}=C_{1}e^{2x}+C_{2}e^{x}-0.09399+0.591217x</math>
Solving for the initial conditions
<math>y\left(-\frac{3}{4}\right)=C_{1}e^{-\frac{3}{2}}+C_{2}e^{-\frac{3}{4}}-0.09399+0.591217\left(-\frac{3}{4}\right)=1</math>
<math>Y'=2C_{1}e^{2x}+C_{2}e^{x}+0.591217</math>
<math>y'\left(-\frac{3}{4}\right)=2C_{1}e^{-\frac{3}{2}}+C_{2}e^{-\frac{3}{4}}+0.591217=0</math>
This gives coefficients of:
<math>C_{1}=-9.5398,\quad C_{2}=7.761</math>
For n = 1 the final solution will be:
<math>Y=-9.5398e^{2x}+7.761e^{x}+0.591217x-0.09399</math>
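As a sketch, both final solutions can be checked against the initial conditions at x = -3/4 in MATLAB/Octave; the central-difference step h is an arbitrary choice:
Y0 = @(x) -1.92205*exp(2*x) + 1.81582*exp(x) + 0.57113;              % n = 0 solution
Y1 = @(x) -9.5398*exp(2*x) + 7.761*exp(x) + 0.591217*x - 0.09399;    % n = 1 solution
x0 = -3/4;  h = 1e-6;
[Y0(x0), (Y0(x0+h)-Y0(x0-h))/(2*h)]       % approximately [1, 0]
[Y1(x0), (Y1(x0+h)-Y1(x0-h))/(2*h)]       % approximately [1, 0]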
This problem was solved and uploaded by Mike Wallace
Contribution Summary