Let $\mathcal{S}$ be a linear vector space.
Addition and scalar multiplication
Let us first define addition and scalar multiplication in this space. The addition operation acts entirely within $\mathcal{S}$, while scalar multiplication may involve multiplication either by a real number (in $\mathbb{R}$) or by a complex number (in $\mathbb{C}$). These operations must have the following closure properties:
If $\mathbf{x}, \mathbf{y} \in \mathcal{S}$ then $\mathbf{x} + \mathbf{y} \in \mathcal{S}$.
If $\alpha \in \mathbb{R}$ (or $\mathbb{C}$) and $\mathbf{x} \in \mathcal{S}$ then $\alpha\,\mathbf{x} \in \mathcal{S}$.
The following laws must hold for addition:
$$\mathbf{x} + \mathbf{y} = \mathbf{y} + \mathbf{x} \qquad \text{Commutative law.}$$
$$\mathbf{x} + (\mathbf{y} + \mathbf{z}) = (\mathbf{x} + \mathbf{y}) + \mathbf{z} \qquad \text{Associative law.}$$
$$\exists\,\mathbf{0} \in \mathcal{S} ~\text{such that}~ \mathbf{0} + \mathbf{x} = \mathbf{x} \quad \forall\,\mathbf{x} \in \mathcal{S} \qquad \text{Additive identity.}$$
$$\forall\,\mathbf{x} \in \mathcal{S} \quad \exists\,{-\mathbf{x}} \in \mathcal{S} ~\text{such that}~ -\mathbf{x} + \mathbf{x} = \mathbf{0} \qquad \text{Additive inverse.}$$
For scalar multiplication we have the properties
$$\alpha\,(\beta\,\mathbf{x}) = (\alpha\,\beta)\,\mathbf{x}\,.$$
$$(\alpha + \beta)\,\mathbf{x} = \alpha\,\mathbf{x} + \beta\,\mathbf{x}\,.$$
$$\alpha\,(\mathbf{x} + \mathbf{y}) = \alpha\,\mathbf{x} + \alpha\,\mathbf{y}\,.$$
$$1\,\mathbf{x} = \mathbf{x}\,.$$
$$0\,\mathbf{x} = \mathbf{0}\,.$$
The $n$-tuples $(x_1, x_2, \dots, x_n)$ with
$$\begin{aligned}
(x_1, x_2, \dots, x_n) + (y_1, y_2, \dots, y_n) &= (x_1 + y_1, x_2 + y_2, \dots, x_n + y_n) \\
\alpha\,(x_1, x_2, \dots, x_n) &= (\alpha\,x_1, \alpha\,x_2, \dots, \alpha\,x_n)
\end{aligned}$$
form a linear vector space.
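As a quick sanity check, the closure and distributive properties of $n$-tuples can be verified numerically. The sketch below is only an illustration, assuming NumPy is available and using arbitrary example values; it treats tuples as arrays, for which addition and scalar multiplication are exactly the componentwise operations defined above.

```python
import numpy as np

# Two arbitrary 4-tuples in R^4 and two arbitrary scalars
x = np.array([1.0, -2.0, 0.5, 3.0])
y = np.array([4.0, 0.0, -1.5, 2.0])
alpha, beta = 2.5, -1.0

# Closure: componentwise addition and scaling return another 4-tuple
print(x + y)       # (x1 + y1, ..., x4 + y4)
print(alpha * x)   # (alpha x1, ..., alpha x4)

# A few of the vector-space laws, checked numerically
assert np.allclose(x + y, y + x)                               # commutativity
assert np.allclose(alpha * (beta * x), (alpha * beta) * x)     # compatibility of scaling
assert np.allclose((alpha + beta) * x, alpha * x + beta * x)   # distributivity over scalars
assert np.allclose(alpha * (x + y), alpha * x + alpha * y)     # distributivity over vectors
```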
Another example of a linear vector space is the set of $2 \times 2$ matrices (or, more generally, $n \times m$ matrices) with the usual matrix addition and scalar multiplication, e.g.
$$\alpha \begin{bmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \end{bmatrix} = \begin{bmatrix} \alpha\,x_{11} & \alpha\,x_{12} \\ \alpha\,x_{21} & \alpha\,x_{22} \end{bmatrix}\,.$$
Example 3: Polynomials
The space of $n$-th order polynomials
$$p_n(x) = \sum_{j=0}^{n} \alpha_j\,x^{j}$$
forms a linear vector space.
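In practice a polynomial of degree $n$ is often stored as its coefficient vector $(\alpha_0, \alpha_1, \dots, \alpha_n)$, under which polynomial addition and scalar multiplication become exactly the componentwise tuple operations above. A brief sketch using NumPy's polynomial class (the particular coefficients are arbitrary illustrations):

```python
import numpy as np
from numpy.polynomial import Polynomial

# p(x) = 1 + 2x + 3x^2 and q(x) = 4 - x^2, coefficients in increasing powers of x
p = Polynomial([1.0, 2.0, 3.0])
q = Polynomial([4.0, 0.0, -1.0])

print((p + q).coef)     # -> [5. 2. 2.], i.e. 5 + 2x + 2x^2
print((2.5 * p).coef)   # -> [2.5 5.  7.5]
```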
Example 4: Continuous functions
The space of continuous functions, say on $[0, 1]$, also forms a linear vector space with addition and scalar multiplication defined as usual.
A set of vectors $\mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_n \in \mathcal{S}$ is said to be linearly dependent if there exist scalars $\alpha_1, \alpha_2, \dots, \alpha_n$, not all zero, such that
$$\alpha_1\,\mathbf{x}_1 + \alpha_2\,\mathbf{x}_2 + \dots + \alpha_n\,\mathbf{x}_n = \mathbf{0}\,.$$
If no such set of constants $\alpha_1, \alpha_2, \dots, \alpha_n$ exists, then the vectors are said to be linearly independent.
Consider the matrices
$$\boldsymbol{M}_1 = \begin{bmatrix} 1 & 0 \\ 0 & 2 \end{bmatrix}, \quad \boldsymbol{M}_2 = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}, \quad \boldsymbol{M}_3 = \begin{bmatrix} 0 & 0 \\ 0 & -1 \end{bmatrix}\,.$$
These are linearly dependent since $\boldsymbol{M}_1 - \boldsymbol{M}_2 + 2\,\boldsymbol{M}_3 = \mathbf{0}$.
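One way to check such a dependence numerically is to flatten each matrix into a vector and examine the rank of the resulting collection; a rank smaller than the number of matrices signals linear dependence. A minimal sketch, assuming NumPy, for the three matrices above:

```python
import numpy as np

M1 = np.array([[1.0, 0.0], [0.0, 2.0]])
M2 = np.array([[1.0, 0.0], [0.0, 0.0]])
M3 = np.array([[0.0, 0.0], [0.0, -1.0]])

# Stack the flattened matrices as rows of a 3 x 4 array
A = np.vstack([M1.ravel(), M2.ravel(), M3.ravel()])

# Rank 2 < 3 means the three matrices are linearly dependent
print(np.linalg.matrix_rank(A))   # -> 2

# Verify the specific dependence M1 - M2 + 2 M3 = 0
print(np.allclose(M1 - M2 + 2 * M3, 0))   # -> True
```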
The span of a set of vectors $\boldsymbol{T} = \{\mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_n\}$ is the set of all vectors that are linear combinations of the vectors $\mathbf{x}_i$. Thus
$$\text{span}(\boldsymbol{T}) = \left\{\alpha_1\,\mathbf{x}_1 + \alpha_2\,\mathbf{x}_2 + \dots + \alpha_n\,\mathbf{x}_n\right\}$$
as the scalars $\alpha_1, \alpha_2, \dots, \alpha_n$ vary over all possible values.
If $\text{span}(\boldsymbol{T}) = \mathcal{S}$, then $\boldsymbol{T}$ is said to be a spanning set.
If $\boldsymbol{T}$ is a spanning set and its elements are linearly independent, then we call it a basis for $\mathcal{S}$. A vector in $\mathcal{S}$ has a unique representation as a linear combination of the basis elements. (Why is it unique? If a vector had two different representations, subtracting them would give a nontrivial linear combination of the basis elements equal to $\mathbf{0}$, contradicting their linear independence.)
The dimension of a space $\mathcal{S}$ is the number of elements in a basis. This number is independent of the actual elements that form the basis and is a property of $\mathcal{S}$ itself.
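The uniqueness of the representation can also be seen computationally: finding the coefficients of a vector with respect to a basis amounts to solving a square linear system whose coefficient matrix is invertible, so the system has exactly one solution. A small sketch, assuming NumPy; the basis and target vector here are arbitrary illustrations:

```python
import numpy as np

# An arbitrary basis of R^3 (stored as columns) and an arbitrary target vector
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])   # columns are the basis vectors
v = np.array([2.0, 3.0, -1.0])

# The coordinates alpha solve B @ alpha = v; invertibility of B makes them unique
alpha = np.linalg.solve(B, v)
print(alpha)

# Reconstruct v from its (unique) coordinates in this basis
print(np.allclose(B @ alpha, v))   # -> True
```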
Example 1: Vectors in R2
Any two non-collinear vectors in $\mathbb{R}^2$ form a basis for $\mathbb{R}^2$, because any other vector in $\mathbb{R}^2$ can be expressed as a linear combination of the two.
A basis for the linear space of $2 \times 2$ matrices is
$$\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}, \quad \begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix}, \quad \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}, \quad \begin{bmatrix} 1 & 3 \\ 1 & 1 \end{bmatrix}\,.$$
Note that there is considerable freedom in the choice of basis. One important skill that you should develop is choosing the right basis to solve a particular problem.
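It is easy to confirm that the four matrices above really do form a basis: flattening each into a length-4 vector and checking that the resulting $4 \times 4$ array has full rank shows linear independence, and four independent elements automatically span the four-dimensional space of $2 \times 2$ matrices. A sketch assuming NumPy:

```python
import numpy as np

basis = [np.array([[1, 0], [0, 0]]),
         np.array([[1, 1], [0, 0]]),
         np.array([[1, 1], [0, 1]]),
         np.array([[1, 3], [1, 1]])]

# Rows are the flattened basis candidates; full rank (4) means linear independence
A = np.vstack([M.ravel() for M in basis]).astype(float)
print(np.linalg.matrix_rank(A))   # -> 4, so these matrices form a basis
```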
Example 3: Polynomials
The set $\{1, x, x^2, \dots, x^n\}$ is a basis for polynomials of degree $n$.
Example 4: The natural basis
A natural basis is the set $\{\mathbf{e}_1, \mathbf{e}_2, \dots, \mathbf{e}_n\}$ where the $j$-th entry of $\mathbf{e}_k$ is
$$\delta_{jk} = \begin{cases} 1 & \text{for}~j = k \\ 0 & \text{for}~j \neq k\,. \end{cases}$$
The quantity $\delta_{jk}$ is also called the Kronecker delta.
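In code, the natural basis of $\mathbb{R}^n$ is simply the set of columns of the identity matrix, whose entries are exactly the Kronecker delta. A minimal sketch assuming NumPy:

```python
import numpy as np

n = 4
I = np.eye(n)                       # I[j, k] = delta_jk
e = [I[:, k] for k in range(n)]     # e[k] is the natural basis vector e_{k+1}

print(e[0])   # -> [1. 0. 0. 0.]
print(e[2])   # -> [0. 0. 1. 0.]

# Any vector is trivially its own coordinate list in this basis
x = np.array([3.0, -1.0, 2.0, 0.5])
print(np.allclose(sum(x[k] * e[k] for k in range(n)), x))   # -> True
```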
Inner Product Spaces
To give more structure to the idea of a vector space we need concepts such as
magnitude and angle. The inner product provides that structure.
The inner product generalizes the concept of an angle and is defined as a function
$$\langle \bullet,~\bullet \rangle : \mathcal{S} \times \mathcal{S} \rightarrow \mathbb{R} \quad (\text{or}~\mathbb{C}~\text{for a complex vector space})$$
with the properties
$$\langle \mathbf{x},~\mathbf{y} \rangle = \overline{\langle \mathbf{y},~\mathbf{x} \rangle} \qquad \text{(the overbar indicates complex conjugation).}$$
$$\langle \alpha\,\mathbf{x},~\mathbf{y} \rangle = \alpha\,\langle \mathbf{x},~\mathbf{y} \rangle \qquad \text{Linearity with respect to scalar multiplication.}$$
$$\langle \mathbf{x} + \mathbf{y},~\mathbf{z} \rangle = \langle \mathbf{x},~\mathbf{z} \rangle + \langle \mathbf{y},~\mathbf{z} \rangle \qquad \text{Linearity with respect to addition.}$$
$$\langle \mathbf{x},~\mathbf{x} \rangle > 0 ~\text{if}~ \mathbf{x} \neq \mathbf{0}\,, \quad \text{and} \quad \langle \mathbf{x},~\mathbf{x} \rangle = 0 ~\text{if and only if}~ \mathbf{x} = \mathbf{0}\,.$$
A vector space with an inner product is called an inner product space.
Note that the first two properties imply that the inner product is conjugate-linear in its second argument:
$$\langle \mathbf{x},~\beta\,\mathbf{y} \rangle = \overline{\langle \beta\,\mathbf{y},~\mathbf{x} \rangle} = \overline{\beta}~\overline{\langle \mathbf{y},~\mathbf{x} \rangle} = \overline{\beta}\,\langle \mathbf{x},~\mathbf{y} \rangle\,.$$
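These properties can be checked numerically for the standard inner product on $\mathbb{C}^n$, which appears below in Example 2. A small sketch, assuming NumPy, with arbitrary test vectors and scalars:

```python
import numpy as np

def inner(x, y):
    """Standard inner product on C^n: <x, y> = sum_k x_k * conj(y_k)."""
    return np.sum(x * np.conj(y))

x = np.array([1 + 2j, -0.5j, 3.0])
y = np.array([2 - 1j, 4.0, 1 + 1j])
alpha, beta = 2 - 3j, 0.5 + 1j

assert np.isclose(inner(x, y), np.conj(inner(y, x)))                # conjugate symmetry
assert np.isclose(inner(alpha * x, y), alpha * inner(x, y))         # linear in 1st argument
assert np.isclose(inner(x, beta * y), np.conj(beta) * inner(x, y))  # conjugate-linear in 2nd
assert inner(x, x).real > 0 and abs(inner(x, x).imag) < 1e-12       # positivity
```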
Example 2: Discrete vectors
In $\mathbb{R}^n$, with $\mathbf{x} = \{x_1, x_2, \dots, x_n\}$ and $\mathbf{y} = \{y_1, y_2, \dots, y_n\}$, the Euclidean inner product is given by
$$\langle \mathbf{x},~\mathbf{y} \rangle = \sum_{k=1}^{n} x_k\,y_k\,.$$
With $\mathbf{x}, \mathbf{y} \in \mathbb{C}^n$ the standard inner product is
$$\langle \mathbf{x},~\mathbf{y} \rangle = \sum_{k=1}^{n} x_k\,\overline{y_k}\,.$$
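When computing this in NumPy, some care is needed about which argument gets conjugated: `np.vdot(a, b)` conjugates its first argument, so the inner product as written above corresponds to `np.vdot(y, x)` or to an explicit sum. A brief sketch with arbitrary test vectors:

```python
import numpy as np

x = np.array([1 + 2j, 3.0, -1j])
y = np.array([2.0, 1 - 1j, 4 + 2j])

ip_explicit = np.sum(x * np.conj(y))   # <x, y> = sum_k x_k conj(y_k)
ip_vdot = np.vdot(y, x)                # np.vdot conjugates its *first* argument

print(np.isclose(ip_explicit, ip_vdot))   # -> True
```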
Example 3: Continuous functions
For two complex-valued continuous functions $f(x)$ and $g(x)$ on $[0, 1]$, we could approximately represent them by their function values at equally spaced points.
Approximate $f(x)$ and $g(x)$ by
$$\begin{aligned}
F &= \{f(x_1), f(x_2), \dots, f(x_n)\} \qquad \text{with}~x_k = \frac{k}{n} \\
G &= \{g(x_1), g(x_2), \dots, g(x_n)\} \qquad \text{with}~x_k = \frac{k}{n}\,.
\end{aligned}$$
With that approximation, a natural inner product is
$$\langle F,~G \rangle = \frac{1}{n}\,\sum_{k=1}^{n} f(x_k)\,\overline{g(x_k)}\,.$$
Taking the limit as $n \rightarrow \infty$ (show this) gives
$$\langle f,~g \rangle = \int_0^1 f(x)\,\overline{g(x)}\,dx\,.$$
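The limiting behaviour claimed above is easy to observe numerically: as the number of sample points grows, the scaled sum approaches the integral. A small sketch, assuming NumPy; the test functions are arbitrary choices whose inner product is known in closed form:

```python
import numpy as np

# Arbitrary test functions on [0, 1] with a known exact inner product:
# <f, g> = (1 + 2j) * integral of x^3 dx = (1 + 2j) / 4
f = lambda x: (1 + 2j) * x**2
g = lambda x: x          # real-valued, so conjugation leaves it unchanged

exact = (1 + 2j) / 4

for n in (10, 100, 1000, 10000):
    xk = np.arange(1, n + 1) / n                       # x_k = k/n
    ip_discrete = np.sum(f(xk) * np.conj(g(xk))) / n   # (1/n) * sum f(x_k) conj(g(x_k))
    print(n, abs(ip_discrete - exact))                 # error shrinks roughly like 1/n
```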
If we took non-equally spaced yet smoothly distributed points we would get
$$\langle f,~g \rangle = \int_0^1 f(x)\,\overline{g(x)}\,w(x)\,dx$$
where $w(x) > 0$ is a smooth weighting function (show this).
There are many other inner products possible. For functions that are not only continuous but also differentiable, a useful inner product is
$$\langle f,~g \rangle = \int_0^1 \left[ f(x)\,\overline{g(x)} + f'(x)\,\overline{g'(x)} \right] dx\,.$$
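For smooth functions this derivative-weighted inner product can be evaluated symbolically. A minimal sketch assuming SymPy, with two arbitrary real test functions $f(x) = \sin(\pi x)$ and $g(x) = x^2$ (for real-valued functions the conjugation drops out):

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = sp.sin(sp.pi * x)   # arbitrary test function
g = x**2                # arbitrary test function

# <f, g> = integral of f*g + f'*g' over [0, 1] (real case, so no conjugation needed)
ip = sp.integrate(f * g + sp.diff(f, x) * sp.diff(g, x), (x, 0, 1))
print(sp.simplify(ip))
```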
We will continue further explorations into linear vector spaces in the next
lecture.