Vector spaces with an inner product
In $\mathbb{R}^n$, we can add vectors and multiply them with a scalar. Moreover, a vector has a certain length, and the relation between two vectors is expressed by the angle between them. Length and angle can be made precise with the concept of an inner product. In order to introduce this, a real vector space or a complex vector space must be given. We want to discuss both cases in parallel, and we use the symbol $\mathbb{K}$ to denote $\mathbb{R}$ or $\mathbb{C}$.
For $z \in \mathbb{C}$, $\overline{z}$ denotes the complex-conjugated number; for $z \in \mathbb{R}$, this is just the number itself.
Let $V$ be a $\mathbb{K}$-vector space. An inner product on $V$ is a mapping
$$V \times V \longrightarrow \mathbb{K}, \quad (v,w) \longmapsto \left\langle v,w \right\rangle,$$
satisfying the following properties:
1. We have
$$\left\langle \lambda_1 x_1 + \lambda_2 x_2, y \right\rangle = \lambda_1 \left\langle x_1, y \right\rangle + \lambda_2 \left\langle x_2, y \right\rangle$$
for all $\lambda_1, \lambda_2 \in \mathbb{K}$, $x_1, x_2, y \in V$, and
$$\left\langle x, \lambda_1 y_1 + \lambda_2 y_2 \right\rangle = \overline{\lambda_1} \left\langle x, y_1 \right\rangle + \overline{\lambda_2} \left\langle x, y_2 \right\rangle$$
for all $\lambda_1, \lambda_2 \in \mathbb{K}$, $x, y_1, y_2 \in V$.
2. We have
$$\left\langle v, w \right\rangle = \overline{\left\langle w, v \right\rangle}$$
for all $v, w \in V$.
3. We have $\left\langle v, v \right\rangle \geq 0$ for all $v \in V$, and $\left\langle v, v \right\rangle = 0$ if and only if $v = 0$ holds.
In the real case, these properties are called bilinearity (which is just another name for multilinearity, when we are dealing with the product of two vector spaces), symmetry, and positive-definiteness. In the complex case, the properties are called sesquilinearity and hermitian. This looks a bit complicated at first sight, but it is necessary to ensure that, also in the complex case, we get positive definiteness and a reasonable concept of distance.
For example, in $\mathbb{R}^3$, endowed with the standard inner product, we have
$$\left\langle \begin{pmatrix} 3 \\ -5 \\ 2 \end{pmatrix}, \begin{pmatrix} -1 \\ 4 \\ 6 \end{pmatrix} \right\rangle = 3 \cdot (-1) - 5 \cdot 4 + 2 \cdot 6 = -11.$$
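The computation can be checked numerically; a minimal sketch using NumPy, with the array values taken from the example above:

```python
import numpy as np

# The two vectors from the example above.
v = np.array([3, -5, 2])
w = np.array([-1, 4, 6])

# The standard inner product on R^n: multiply componentwise and sum.
inner = np.dot(v, w)
print(inner)  # -11
```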
For a vector space $V$, endowed with an inner product, every linear subspace $U \subseteq V$ again carries an inner product, obtained by restricting the inner product from $V$. In particular, for a Euclidean vector space, every linear subspace $U \subseteq V$ is again a Euclidean vector space. Therefore, every linear subspace $U \subseteq \mathbb{R}^n$ carries the restricted standard inner product. Because there is always an isomorphism $U \cong \mathbb{R}^m$, we can also transfer the standard inner product from $\mathbb{R}^m$ to $U$. However, the result depends on the chosen isomorphism, and there is, in general, no relationship with the restricted standard inner product.
On $\mathbb{C}^n$, we call the inner product given by
$$\left\langle u, v \right\rangle := \sum_{i=1}^n u_i \overline{v_i}$$
the (complex) standard inner product.
For example, we have
$$\begin{aligned}\left\langle \begin{pmatrix} 4-3\mathrm{i} \\ 2+7\mathrm{i} \end{pmatrix}, \begin{pmatrix} -2+5\mathrm{i} \\ 3-6\mathrm{i} \end{pmatrix} \right\rangle &= (4-3\mathrm{i}) \cdot \overline{-2+5\mathrm{i}} + (2+7\mathrm{i}) \cdot \overline{3-6\mathrm{i}} \\ &= (4-3\mathrm{i}) \cdot (-2-5\mathrm{i}) + (2+7\mathrm{i}) \cdot (3+6\mathrm{i}) \\ &= -23-14\mathrm{i} - 36 + 33\mathrm{i} \\ &= -59 + 19\mathrm{i}. \end{aligned}$$
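This can be reproduced in NumPy; note that the convention here conjugates the second argument, whereas NumPy's `np.vdot` conjugates the first, so the sum is written out explicitly:

```python
import numpy as np

u = np.array([4 - 3j, 2 + 7j])
v = np.array([-2 + 5j, 3 - 6j])

# Complex standard inner product with the text's convention:
# the entries of the second vector are conjugated.
inner = np.sum(u * np.conj(v))
print(inner)  # (-59+19j)
```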
Norm
If an inner product is given, then we can define the length of a vector, and then also the distance between two vectors.
Let $V$ denote a vector space over $\mathbb{K}$, endowed with an inner product $\left\langle -,- \right\rangle$. For a vector $v \in V$, we call the real number
$$\Vert v \Vert = \sqrt{\left\langle v, v \right\rangle}$$
the norm of $v$.
The inner product $\left\langle v, v \right\rangle$ is always real and nonnegative; therefore, its square root is a uniquely determined real number. For a complex vector space with an inner product, it does not make any difference whether we determine the norm directly or via the underlying real vector space, see exercise ***** .
Let $V$ denote a vector space over $\mathbb{K}$, endowed with an inner product $\left\langle -,- \right\rangle$, and let $\Vert - \Vert$ denote the associated norm. Then the Cauchy–Schwarz estimate holds, that is,
$$\vert \left\langle v, w \right\rangle \vert \leq \Vert v \Vert \cdot \Vert w \Vert$$
for all $v, w \in V$.
For $w = 0$, the statement holds. So suppose $w \neq 0$; hence, also $\Vert w \Vert \neq 0$ holds. Therefore, we have the estimates
$$\begin{aligned} 0 &\leq \left\langle v - \frac{\left\langle v,w \right\rangle}{\Vert w \Vert^2} w, \, v - \frac{\left\langle v,w \right\rangle}{\Vert w \Vert^2} w \right\rangle \\ &= \left\langle v,v \right\rangle - \frac{\left\langle v,w \right\rangle}{\Vert w \Vert^2} \left\langle w,v \right\rangle - \frac{\overline{\left\langle v,w \right\rangle}}{\Vert w \Vert^2} \left\langle v,w \right\rangle + \frac{\left\langle v,w \right\rangle \overline{\left\langle v,w \right\rangle}}{\Vert w \Vert^4} \left\langle w,w \right\rangle \\ &= \left\langle v,v \right\rangle - \frac{\left\langle v,w \right\rangle}{\Vert w \Vert^2} \overline{\left\langle v,w \right\rangle} - \frac{\overline{\left\langle v,w \right\rangle}}{\Vert w \Vert^2} \left\langle v,w \right\rangle + \frac{\left\langle v,w \right\rangle \overline{\left\langle v,w \right\rangle}}{\Vert w \Vert^2} \\ &= \left\langle v,v \right\rangle - \frac{\left\langle v,w \right\rangle \overline{\left\langle v,w \right\rangle}}{\Vert w \Vert^2} \\ &= \left\langle v,v \right\rangle - \frac{\vert \left\langle v,w \right\rangle \vert^2}{\Vert w \Vert^2}. \end{aligned}$$
Multiplication with $\Vert w \Vert^2$ and taking the square root yields the result.

$\Box$
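The Cauchy–Schwarz estimate can be spot-checked numerically on random complex vectors; a minimal sketch, where the helper names `inner` and `norm` are ad-hoc choices:

```python
import numpy as np

def inner(u, v):
    # Inner product with the second argument conjugated, as in the text.
    return np.sum(u * np.conj(v))

def norm(v):
    # <v, v> is real and nonnegative, so the square root is well defined.
    return np.sqrt(inner(v, v).real)

rng = np.random.default_rng(0)
for _ in range(100):
    v = rng.normal(size=5) + 1j * rng.normal(size=5)
    w = rng.normal(size=5) + 1j * rng.normal(size=5)
    # |<v, w>| <= ||v|| * ||w||  (small tolerance for rounding)
    assert abs(inner(v, w)) <= norm(v) * norm(w) + 1e-12
```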
The first two properties follow directly from the definition of an inner product. The compatibility with scalar multiplication follows from
$$\Vert \lambda v \Vert^2 = \left\langle \lambda v, \lambda v \right\rangle = \lambda \left\langle v, \lambda v \right\rangle = \lambda \overline{\lambda} \left\langle v,v \right\rangle = \vert \lambda \vert^2 \Vert v \Vert^2.$$
In order to prove the triangle estimate, we write
$$\begin{aligned} \Vert v+w \Vert^2 &= \left\langle v+w, v+w \right\rangle \\ &= \Vert v \Vert^2 + \Vert w \Vert^2 + \left\langle v,w \right\rangle + \overline{\left\langle v,w \right\rangle} \\ &= \Vert v \Vert^2 + \Vert w \Vert^2 + 2 \operatorname{Re} \left( \left\langle v,w \right\rangle \right) \\ &\leq \Vert v \Vert^2 + \Vert w \Vert^2 + 2 \vert \left\langle v,w \right\rangle \vert. \end{aligned}$$
Due to Fact ***** , this is $\leq \left( \Vert v \Vert + \Vert w \Vert \right)^2$. This estimate transfers to the square roots.

$\Box$
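Both norm properties proved above can be checked numerically for the Euclidean norm; a minimal sketch on random real vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(100):
    v = rng.normal(size=4)
    w = rng.normal(size=4)
    lam = rng.normal()
    # Compatibility with scalar multiplication: ||lam v|| = |lam| ||v||.
    assert np.isclose(np.linalg.norm(lam * v), abs(lam) * np.linalg.norm(v))
    # Triangle estimate: ||v + w|| <= ||v|| + ||w||  (tolerance for rounding).
    assert np.linalg.norm(v + w) <= np.linalg.norm(v) + np.linalg.norm(w) + 1e-12
```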
With the following statement, the polarization identity, we can reconstruct the inner product from its associated norm.
Let $V$ denote a vector space over $\mathbb{K}$, endowed with an inner product $\left\langle -,- \right\rangle$, and let $\Vert - \Vert$ denote the associated norm. Then, in case $\mathbb{K} = \mathbb{R}$, the relation
$$\left\langle v, w \right\rangle = \frac{1}{2} \left( \Vert v+w \Vert^2 - \Vert v \Vert^2 - \Vert w \Vert^2 \right)$$
holds, and, in case $\mathbb{K} = \mathbb{C}$, the relation
$$\left\langle v, w \right\rangle = \frac{1}{4} \left( \Vert v+w \Vert^2 - \Vert v-w \Vert^2 + \mathrm{i} \Vert v + \mathrm{i} w \Vert^2 - \mathrm{i} \Vert v - \mathrm{i} w \Vert^2 \right)$$
holds.
Proof
$\Box$
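The complex polarization identity can be verified numerically; a minimal sketch, with `inner` and `norm_sq` as ad-hoc helpers:

```python
import numpy as np

def inner(u, v):
    # Inner product with the second argument conjugated.
    return np.sum(u * np.conj(v))

def norm_sq(v):
    # Squared norm <v, v>, a real number.
    return inner(v, v).real

rng = np.random.default_rng(2)
v = rng.normal(size=3) + 1j * rng.normal(size=3)
w = rng.normal(size=3) + 1j * rng.normal(size=3)

# Reconstruct <v, w> from norms alone, as in the statement above.
recovered = (norm_sq(v + w) - norm_sq(v - w)
             + 1j * norm_sq(v + 1j * w) - 1j * norm_sq(v - 1j * w)) / 4
assert np.isclose(recovered, inner(v, w))
```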
Normed vector spaces
Due to Fact ***** , the norm associated with an inner product is a norm in the sense of the following definition. In particular, a vector space with an inner product is a normed vector space.
Let $V$ be a $\mathbb{K}$-vector space. A mapping
$$\Vert - \Vert \colon V \longrightarrow \mathbb{R}, \quad v \longmapsto \Vert v \Vert,$$
is called a norm if the following properties hold.
1. We have $\Vert v \Vert \geq 0$ for all $v \in V$.
2. We have $\Vert v \Vert = 0$ if and only if $v = 0$.
3. For $\lambda \in \mathbb{K}$ and $v \in V$, we have
$$\Vert \lambda v \Vert = \vert \lambda \vert \cdot \Vert v \Vert.$$
4. For $v, w \in V$, we have
$$\Vert v+w \Vert \leq \Vert v \Vert + \Vert w \Vert.$$
A $\mathbb{K}$-vector space is called a normed vector space if a norm $\Vert - \Vert$ is defined on it.
On a Euclidean vector space, the norm given via the inner product is also called the Euclidean norm. For $V = \mathbb{R}^n$, endowed with the standard inner product, we have
$$\Vert v \Vert = \sqrt{\sum_{i=1}^n v_i^2}.$$
(Figure: The sum metric is also called the taxicab metric. The green line represents the Euclidean distance; the other paths represent the sum distance.)
For a vector $v \in V$, $v \neq 0$, in a normed vector space $V$, the vector
$$\frac{v}{\Vert v \Vert}$$
is called the corresponding normalized vector. Such a normalized vector has norm $1$. Passing to the normalized vector is also called normalization.
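For instance, with the Euclidean norm on $\mathbb{R}^3$, normalization looks as follows; a minimal sketch:

```python
import numpy as np

v = np.array([3.0, -5.0, 2.0])

# Euclidean norm: sqrt(3^2 + (-5)^2 + 2^2) = sqrt(38).
norm_v = np.linalg.norm(v)
assert np.isclose(norm_v, np.sqrt(38))

# The normalized vector v / ||v|| has norm 1.
unit = v / norm_v
assert np.isclose(np.linalg.norm(unit), 1.0)
```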
Normed spaces as metric spaces
Let $M$ be a set. A mapping
$$d \colon M \times M \longrightarrow \mathbb{R}$$
is called a metric (or a distance function) if, for all $x, y, z \in M$, the following conditions hold:
1. $d(x,y) = 0$ if and only if $x = y$ (positivity),
2. $d(x,y) = d(y,x)$ (symmetry),
3. $d(x,y) \leq d(x,z) + d(z,y)$ (triangle inequality).
A metric space is a pair $(M,d)$, where $M$ is a set and $d \colon M \times M \rightarrow \mathbb{R}$ is a metric.
A normed vector space $V$, endowed with the corresponding metric $d(v,w) := \Vert v - w \Vert$, is a metric space; this $d$ is indeed a metric.
Proof
$\Box$
In particular, a Euclidean space is a metric space. Hence, every subset of an affine space over a Euclidean or normed vector space is a metric space.
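The metric induced by a norm, $d(v,w) = \Vert v - w \Vert$, can be checked against the three metric axioms numerically; a minimal sketch for the Euclidean norm:

```python
import numpy as np

def d(x, y):
    # Metric induced by the Euclidean norm: d(x, y) = ||x - y||.
    return np.linalg.norm(x - y)

rng = np.random.default_rng(3)
x = rng.normal(size=3)
y = rng.normal(size=3)
z = rng.normal(size=3)

assert d(x, x) == 0.0                          # d(x, y) = 0 iff x = y
assert np.isclose(d(x, y), d(y, x))            # symmetry
assert d(x, y) <= d(x, z) + d(z, y) + 1e-12    # triangle inequality
```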