The Bessel function is a canonical solution of Bessel's differential equation
$$x^{2}y'' + xy' + (x^{2} - \nu^{2})y = 0, \qquad \nu \in \mathbb{C}.$$
Solutions of this equation were first introduced by Daniel Bernoulli and later generalized by Friedrich Bessel. The parameter $\nu$ is called the order of the Bessel function; the most common and most important case is $\nu \in \mathbb{Z}$.
Bessel functions arise when the method of separation of variables is applied to the Laplace or Helmholtz equation in cylindrical or spherical coordinates. They appear in many problems involving physical phenomena such as wave or heat propagation.
Derivation of the Bessel Function Using Frobenius's Method
Consider the Bessel equation:
$$x^{2}y'' + xy' + (x^{2} - \nu^{2})y = 0$$
$$\Leftrightarrow \quad y'' + \underbrace{\left(\frac{1}{x}\right)}_{p(x)} y' + \underbrace{\left(1 - \frac{\nu^{2}}{x^{2}}\right)}_{q(x)} y = 0$$
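As a quick cross-check (not part of the original derivation; a minimal sketch assuming SymPy is available), the equivalence of the two forms can be verified symbolically:

```python
# Sketch: verify symbolically that the normalized form with p(x) = 1/x
# and q(x) = 1 - nu^2/x^2, multiplied by x^2, gives back the original
# Bessel equation.  (Assumes SymPy is installed.)
import sympy as sp

x, nu = sp.symbols("x nu", positive=True)
y = sp.Function("y")

bessel_form = x**2 * y(x).diff(x, 2) + x * y(x).diff(x) + (x**2 - nu**2) * y(x)
normalized  = y(x).diff(x, 2) + (1 / x) * y(x).diff(x) + (1 - nu**2 / x**2) * y(x)

# The difference must vanish identically.
assert sp.expand(bessel_form - x**2 * normalized) == 0
```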
We are seeking solutions near $x_{0} = 0$.
Since:
$$\begin{aligned}
x\,p(x) &= 1 \\
x^{2}q(x) &= x^{2} - \nu^{2}
\end{aligned}$$
are power series in $x$, $x_{0} = 0$ is a regular singular point of the Bessel equation. This allows Frobenius's method to be applied.
We are seeking solutions of the form:
$$y(x) = \sum_{n=0}^{\infty} C_{n} x^{n+r}, \qquad x > 0,\; C_{0} \neq 0$$
Differentiating yields:
$$\begin{aligned}
y'(x) &= \sum_{n=0}^{\infty} (n+r)\, C_{n} x^{n+r-1} \\
y''(x) &= \sum_{n=0}^{\infty} (n+r-1)(n+r)\, C_{n} x^{n+r-2}
\end{aligned}$$
Conditions for $C_{n}$ must be found. Substituting our expressions back into the Bessel equation:
$$\begin{aligned}
0 &= x^{2}y'' + xy' + (x^{2} - \nu^{2})y \\
&= \sum_{n=0}^{\infty} (n+r-1)(n+r)\, C_{n} x^{n+r} + \sum_{n=0}^{\infty} (n+r)\, C_{n} x^{n+r} + \sum_{n=0}^{\infty} C_{n} x^{n+r+2} - \sum_{n=0}^{\infty} \nu^{2} C_{n} x^{n+r}
\end{aligned}$$
In the third sum, the index is shifted by substituting $m = n + 2$.
This yields:
$$\begin{aligned}
0 &= \sum_{n=0}^{\infty} \left[(n+r-1)(n+r) + (n+r) - \nu^{2}\right] C_{n} x^{n+r} + \sum_{m=2}^{\infty} C_{m-2} x^{m+r} \\
&= \sum_{n=0}^{\infty} \left[(n+r)^{2} - \nu^{2}\right] C_{n} x^{n+r} + \sum_{n=2}^{\infty} C_{n-2} x^{n+r} \\
&= (r^{2} - \nu^{2}) C_{0} x^{r} + \left[(r+1)^{2} - \nu^{2}\right] C_{1} x^{r+1} + \sum_{n=2}^{\infty} \left\{ \left[(n+r)^{2} - \nu^{2}\right] C_{n} + C_{n-2} \right\} x^{n+r}
\end{aligned}$$
Dividing the equation above by $x^{r}$ ($x > 0$) yields:
$$0 = (r^{2} - \nu^{2}) C_{0} + \left[(r+1)^{2} - \nu^{2}\right] C_{1} x + \sum_{n=2}^{\infty} \left\{ \left[(n+r)^{2} - \nu^{2}\right] C_{n} + C_{n-2} \right\} x^{n}$$
By the "Identity Theorem" (which states that xn is linearly independent), it follows that:
$$\begin{aligned}
&(r^{2} - \nu^{2})\, C_{0} = 0 \\
&\left[(r+1)^{2} - \nu^{2}\right] C_{1} = 0 \\
&\left[(n+r)^{2} - \nu^{2}\right] C_{n} + C_{n-2} = 0, \qquad n = 2, 3, 4, \cdots
\end{aligned}$$
By assumption, $C_{0} \neq 0$, so the first condition reduces to
$$h(r) := r^{2} - \nu^{2} = 0 \qquad \text{(indicial equation)}$$
The possible values are $r = \pm\nu$.
Let $r_{1} := \nu$, $r_{2} := -\nu$, and for convenience let $\nu > 0$.
We obtain the following recurrence relations for $C_{n}$:
$$\begin{cases}
C_{0} \neq 0 & \text{(arbitrarily chosen)} \\
C_{1} = 0 & \text{(follows from } \left[(r+1)^{2} - \nu^{2}\right] C_{1} = 0\text{)} \\
\underbrace{\left[(n+r)^{2} - \nu^{2}\right]}_{h(n+r)} C_{n} = -C_{n-2}, & n = 2, 3, 4, \cdots
\end{cases}$$
To get a solution to the Bessel equation, choose $r_{1} = \nu$, $\nu \neq 0$.
Thus, $h(n+r) = h(n+\nu) \neq 0$ for $n = 2, 3, 4, \cdots$.
We can now solve for $C_{n}$:
$$C_{n} = -\frac{C_{n-2}}{(n+\nu)^{2} - \nu^{2}} = -\frac{C_{n-2}}{n^{2} + 2n\nu}$$
We end up with the recursion:
$$\begin{cases}
C_{0} \neq 0 \\
C_{1} = 0 \\
C_{n} = -\dfrac{C_{n-2}}{n(n + 2\nu)}, & n = 2, 3, 4, \cdots
\end{cases}$$
Since the recursion steps by two and $C_{1} = 0$, it follows that:
$$\begin{cases}
C_{0} \neq 0 \\
C_{2n+1} = 0, & n = 0, 1, 2, \cdots \\
C_{2n} = -\dfrac{C_{2n-2}}{2n(2n + 2\nu)} = -\dfrac{C_{2n-2}}{2^{2}\, n\, (n+\nu)}, & n = 1, 2, 3, \cdots
\end{cases}$$
Iterating the recursion yields the following coefficients:
$$\begin{aligned}
C_{0} &\neq 0 \\
C_{2} &= -\frac{C_{0}}{2^{2} \cdot 1 \cdot (1+\nu)} \\
C_{4} = C_{2 \cdot 2} &= -\frac{C_{2}}{2^{2} \cdot 2 \cdot (2+\nu)} = \frac{(-1)^{2} C_{0}}{2^{4} \cdot 1 \cdot 2 \cdot (1+\nu)(2+\nu)} = \frac{(-1)^{2} C_{0}}{2^{4} \cdot 2! \cdot (1+\nu)(2+\nu)} \\
C_{6} = C_{2 \cdot 3} &= -\frac{C_{4}}{2^{2} \cdot 3 \cdot (3+\nu)} = \frac{(-1)^{3} C_{0}}{2^{6} \cdot 3! \cdot (1+\nu)(2+\nu)(3+\nu)} \\
&\;\;\vdots \\
C_{2n} &= \frac{(-1)^{n} C_{0}}{2^{2n} \cdot n! \cdot (1+\nu)(2+\nu)\cdots(n+\nu)}, \qquad n = 1, 2, 3, \cdots
\end{aligned}$$
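As an illustrative numerical sketch (the order $\nu$ and leading coefficient $C_{0}$ below are arbitrary test values), iterating the recursion reproduces the closed form just stated:

```python
# Sketch: iterate C_{2n} = -C_{2n-2} / (2^2 n (n + nu)) and compare with
# the closed form (-1)^n C_0 / (2^{2n} n! (1+nu)(2+nu)...(n+nu)).
# nu and C0 are arbitrary test choices.
from math import factorial

nu, C0 = 0.5, 1.0

c = C0
for n in range(1, 9):
    c = -c / (2**2 * n * (n + nu))          # recursion step
    rising = 1.0
    for k in range(1, n + 1):               # (1+nu)(2+nu)...(n+nu)
        rising *= k + nu
    closed = (-1)**n * C0 / (2**(2 * n) * factorial(n) * rising)
    assert abs(c - closed) <= 1e-12 * abs(closed)
```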
In order to simplify the expansion of $y$, we normalize $C_{0}$ and choose:
$$C_{0} := \frac{1}{2^{\nu}\, \Gamma(1+\nu)}$$
This simplifies our general term to:
$$C_{2n} = \frac{(-1)^{n}}{2^{2n+\nu} \cdot n! \cdot \Gamma(n+1+\nu)}, \qquad n = 0, 1, 2, \cdots$$
The first solution of the Bessel equation can then be written as:
$$J_{\nu}(x) = \sum_{n=0}^{\infty} \frac{(-1)^{n}}{n! \cdot \Gamma(n+1+\nu)} \left(\frac{x}{2}\right)^{2n+\nu}$$
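A minimal numerical sketch of this series (the truncation level, orders, and evaluation points are arbitrary choices; scipy.special.jv is used only as an independent cross-check):

```python
# Sketch: evaluate the partial sum of the series for J_nu(x) and compare
# it against SciPy's implementation.  N, the orders and the points are
# arbitrary test choices.
from math import gamma
from scipy.special import jv

def j_series(nu, x, N=40):
    """Partial sum of sum_n (-1)^n / (n! Gamma(n+1+nu)) (x/2)^(2n+nu)."""
    return sum(
        (-1)**n / (gamma(n + 1) * gamma(n + 1 + nu)) * (x / 2)**(2 * n + nu)
        for n in range(N)
    )

for nu in (0.0, 0.5, 1.0, 2.5):
    for x in (0.5, 1.0, 5.0):
        assert abs(j_series(nu, x) - jv(nu, x)) < 1e-10
```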
Second Solution of the Bessel Equation
Bessel Functions of the Second Kind
Hankel Functions, Bessel Functions of the Third Kind
Complete Solution to the Bessel Equation
For all $\nu \in \mathbb{R}$, $\lambda \in \mathbb{R}$, the complete solution of the Bessel equation
$$x^{2}y'' + xy' + (\lambda^{2}x^{2} - \nu^{2})y = 0$$
can be written as:
$$y(x) = C_{1}\, J_{\nu}(\lambda x) + C_{2}\, Y_{\nu}(\lambda x)$$
or:
$$y(x) = C_{1}\, H_{\nu}^{(1)}(\lambda x) + C_{2}\, H_{\nu}^{(2)}(\lambda x).$$
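As a numerical illustration (the constants, order, $\lambda$, and grid below are arbitrary test values, and the derivatives are approximated by central differences), a combination of $J_{\nu}$ and $Y_{\nu}$ does satisfy the equation to within discretization error:

```python
# Sketch: check numerically that y = C1*J_nu(lam*x) + C2*Y_nu(lam*x)
# satisfies x^2 y'' + x y' + (lam^2 x^2 - nu^2) y = 0.  Derivatives are
# approximated by central differences; all constants are arbitrary.
import numpy as np
from scipy.special import jv, yv

nu, lam, C1, C2 = 1.5, 2.0, 0.7, -0.3
h = 1e-4
x = np.linspace(1.0, 5.0, 9)

def y(t):
    return C1 * jv(nu, lam * t) + C2 * yv(nu, lam * t)

yp  = (y(x + h) - y(x - h)) / (2 * h)              # y'
ypp = (y(x + h) - 2 * y(x) + y(x - h)) / h**2      # y''

residual = x**2 * ypp + x * yp + (lam**2 * x**2 - nu**2) * y(x)
assert np.max(np.abs(residual)) < 1e-5
```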
If $\nu \in \mathbb{R} \setminus \mathbb{Z}$, then:
$$y(x) = C_{1}\, J_{\nu}(x) + C_{2}\, J_{-\nu}(x).$$
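A small numerical sketch (the non-integer order, evaluation point, and step size are arbitrary test values; the derivative is a central-difference approximation) showing that $J_{\nu}$ and $J_{-\nu}$ are indeed linearly independent in this case, since their Wronskian is nonzero:

```python
# Sketch: for non-integer nu, J_nu and J_{-nu} should be linearly
# independent, i.e. their Wronskian W = J_nu*J'_{-nu} - J'_nu*J_{-nu}
# is nonzero.  nu, x and the step h are arbitrary test choices.
from scipy.special import jv

nu, x, h = 0.3, 2.0, 1e-6

def d(f, t):
    """Central-difference approximation of f'(t)."""
    return (f(t + h) - f(t - h)) / (2 * h)

W = jv(nu, x) * d(lambda t: jv(-nu, t), x) - d(lambda t: jv(nu, t), x) * jv(-nu, x)
assert abs(W) > 1e-3   # clearly nonzero for this non-integer order
```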
Moreover, $J_{\nu}$, $J_{-\nu}$, and $Y_{\nu}$ have countably many zeroes.
If $\nu \geq 0$, then $J_{\nu}(\lambda x)$ is finite for all $x \in \mathbb{R}$, whereas $J_{-\nu}(\lambda x)$ and $Y_{\nu}(\lambda x)$ are unbounded in the neighborhood of $0$.
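A short numerical illustration of this boundary behaviour ($\nu$ and $\lambda$ are arbitrary test values):

```python
# Sketch: for nu >= 0, J_nu(lam*x) stays bounded as x -> 0, while
# J_{-nu}(lam*x) and Y_nu(lam*x) diverge.  nu and lam are arbitrary.
import numpy as np
from scipy.special import jv, yv

nu, lam = 0.5, 3.0
xs = np.array([1e-1, 1e-3, 1e-6])

print("J_nu  :", jv(nu, lam * xs))    # tends to 0 (stays finite)
print("J_-nu :", jv(-nu, lam * xs))   # magnitude grows without bound
print("Y_nu  :", yv(nu, lam * xs))    # magnitude grows without bound
```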