# Mathematics for Applied Sciences (Osnabrück 2023-2024)/Part I/Lecture 24

Base change

We know, due to Theorem 23.15, that in a finite-dimensional vector space, any two bases have the same length, that is, the same number of vectors. Every vector has, with respect to every basis, unique coordinates (the coefficient tuple). How do these coordinates behave when we change the basis? This is answered by the following statement.

## Lemma

Let ${\displaystyle {}K}$ be a field, and let ${\displaystyle {}V}$ be a ${\displaystyle {}K}$-vector space of dimension ${\displaystyle {}n}$. Let ${\displaystyle {}{\mathfrak {v}}=v_{1},\ldots ,v_{n}}$ and ${\displaystyle {}{\mathfrak {w}}=w_{1},\ldots ,w_{n}}$ denote bases of ${\displaystyle {}V}$. Suppose that

${\displaystyle {}v_{j}=\sum _{i=1}^{n}c_{ij}w_{i}\,}$

with coefficients ${\displaystyle {}c_{ij}\in K}$, which we collect into the ${\displaystyle {}n\times n}$-matrix

${\displaystyle {}M_{\mathfrak {w}}^{\mathfrak {v}}={\left(c_{ij}\right)}_{ij}\,.}$
Then a vector ${\displaystyle {}u}$, which has the coordinates ${\displaystyle {}{\begin{pmatrix}s_{1}\\\vdots \\s_{n}\end{pmatrix}}}$ with respect to the basis ${\displaystyle {}{\mathfrak {v}}}$, has the coordinates
${\displaystyle {}{\begin{pmatrix}t_{1}\\\vdots \\t_{n}\end{pmatrix}}=M_{\mathfrak {w}}^{\mathfrak {v}}{\begin{pmatrix}s_{1}\\\vdots \\s_{n}\end{pmatrix}}={\begin{pmatrix}c_{11}&c_{12}&\ldots &c_{1n}\\c_{21}&c_{22}&\ldots &c_{2n}\\\vdots &\vdots &\ddots &\vdots \\c_{n1}&c_{n2}&\ldots &c_{nn}\end{pmatrix}}{\begin{pmatrix}s_{1}\\\vdots \\s_{n}\end{pmatrix}}\,}$

with respect to the basis ${\displaystyle {}{\mathfrak {w}}}$.

### Proof

This follows directly from

${\displaystyle {}u=\sum _{j=1}^{n}s_{j}v_{j}=\sum _{j=1}^{n}s_{j}{\left(\sum _{i=1}^{n}c_{ij}w_{i}\right)}=\sum _{i=1}^{n}{\left(\sum _{j=1}^{n}s_{j}c_{ij}\right)}w_{i}\,,}$

and the definition of matrix multiplication.

${\displaystyle \Box }$

The matrix ${\displaystyle {}M_{\mathfrak {w}}^{\mathfrak {v}}}$, which describes the base change from ${\displaystyle {}{\mathfrak {v}}}$ to ${\displaystyle {}{\mathfrak {w}}}$, is called the transformation matrix. The ${\displaystyle {}j}$-th column of the transformation matrix contains the coordinates of ${\displaystyle {}v_{j}}$ with respect to the basis ${\displaystyle {}{\mathfrak {w}}}$. When we denote, for a vector ${\displaystyle {}u\in V}$ and a basis ${\displaystyle {}{\mathfrak {v}}}$, the corresponding coordinate tuple by ${\displaystyle {}\Psi _{\mathfrak {v}}(u)}$, then the base change can be written concisely as

${\displaystyle {}\Psi _{\mathfrak {w}}(u)=M_{\mathfrak {w}}^{\mathfrak {v}}(\Psi _{\mathfrak {v}}(u))\,.}$
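As a numerical sketch of this lemma (assuming NumPy; the two bases are hypothetical examples, not taken from the text): column ${\displaystyle {}j}$ of the transformation matrix holds the ${\displaystyle {}{\mathfrak {w}}}$-coordinates of ${\displaystyle {}v_{j}}$, which can be obtained by a linear solve.

```python
import numpy as np

# Two hypothetical bases of R^2, stored as matrix columns.
V = np.array([[1.0, 1.0],
              [0.0, 1.0]])    # columns v1, v2
W = np.array([[2.0, 0.0],
              [0.0, 1.0]])    # columns w1, w2

# Column j of M holds the coordinates of v_j with respect to w:
# W @ M = V, so M = W^{-1} V (computed via a linear solve).
M = np.linalg.solve(W, V)

# A vector with v-coordinates s has w-coordinates t = M @ s;
# both coordinate tuples describe the same vector.
s = np.array([3.0, -1.0])
t = M @ s
assert np.allclose(V @ s, W @ t)
```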

## Example

We consider in ${\displaystyle {}\mathbb {R} ^{2}}$ the standard basis,

${\displaystyle {}{\mathfrak {u}}={\begin{pmatrix}1\\0\end{pmatrix}},\,{\begin{pmatrix}0\\1\end{pmatrix}}\,,}$

and the basis

${\displaystyle {}{\mathfrak {v}}={\begin{pmatrix}1\\2\end{pmatrix}},\,{\begin{pmatrix}-2\\3\end{pmatrix}}\,.}$

The basis vectors of ${\displaystyle {}{\mathfrak {v}}}$ can be expressed directly in terms of the standard basis, namely

${\displaystyle v_{1}={\begin{pmatrix}1\\2\end{pmatrix}}=1{\begin{pmatrix}1\\0\end{pmatrix}}+2{\begin{pmatrix}0\\1\end{pmatrix}}\,\,{\text{ and }}\,\,v_{2}={\begin{pmatrix}-2\\3\end{pmatrix}}=-2{\begin{pmatrix}1\\0\end{pmatrix}}+3{\begin{pmatrix}0\\1\end{pmatrix}}.}$

Therefore, we get immediately

${\displaystyle {}M_{\mathfrak {u}}^{\mathfrak {v}}={\begin{pmatrix}1&-2\\2&3\end{pmatrix}}\,.}$

For example, the vector that has the coordinates ${\displaystyle {}(4,-3)}$ with respect to ${\displaystyle {}{\mathfrak {v}}}$ has the coordinates

${\displaystyle {}M_{\mathfrak {u}}^{\mathfrak {v}}{\begin{pmatrix}4\\-3\end{pmatrix}}={\begin{pmatrix}1&-2\\2&3\end{pmatrix}}{\begin{pmatrix}4\\-3\end{pmatrix}}={\begin{pmatrix}10\\-1\end{pmatrix}}\,}$

with respect to the standard basis ${\displaystyle {}{\mathfrak {u}}}$. The transformation matrix ${\displaystyle {}M_{\mathfrak {v}}^{\mathfrak {u}}}$ is more difficult to compute: we have to write the standard vectors as linear combinations of ${\displaystyle {}v_{1}}$ and ${\displaystyle {}v_{2}}$. A direct computation (solving two linear systems) yields

${\displaystyle {}{\begin{pmatrix}1\\0\end{pmatrix}}={\frac {3}{7}}{\begin{pmatrix}1\\2\end{pmatrix}}-{\frac {2}{7}}{\begin{pmatrix}-2\\3\end{pmatrix}}\,}$

and

${\displaystyle {}{\begin{pmatrix}0\\1\end{pmatrix}}={\frac {2}{7}}{\begin{pmatrix}1\\2\end{pmatrix}}+{\frac {1}{7}}{\begin{pmatrix}-2\\3\end{pmatrix}}\,.}$

Hence,

${\displaystyle {}M_{\mathfrak {v}}^{\mathfrak {u}}={\begin{pmatrix}{\frac {3}{7}}&{\frac {2}{7}}\\-{\frac {2}{7}}&{\frac {1}{7}}\end{pmatrix}}\,.}$
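A quick numerical check of this example (assuming NumPy): the two transformation matrices must be inverse to each other, since changing coordinates from ${\displaystyle {}{\mathfrak {v}}}$ to ${\displaystyle {}{\mathfrak {u}}}$ and back again is the identity.

```python
import numpy as np

M_uv = np.array([[1.0, -2.0],
                 [2.0,  3.0]])            # M_u^v from the example
M_vu = np.array([[ 3/7, 2/7],
                 [-2/7, 1/7]])            # M_v^u from the example

# Changing coordinates v -> u -> v (and u -> v -> u) is the identity.
assert np.allclose(M_vu @ M_uv, np.eye(2))
assert np.allclose(M_uv @ M_vu, np.eye(2))

# The vector with v-coordinates (4, -3) has standard coordinates (10, -1).
assert np.allclose(M_uv @ np.array([4.0, -3.0]), [10.0, -1.0])
```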

Linear mappings

## Definition

Let ${\displaystyle {}K}$ be a field, and let ${\displaystyle {}V}$ and ${\displaystyle {}W}$ be ${\displaystyle {}K}$-vector spaces. A mapping

${\displaystyle \varphi \colon V\longrightarrow W}$

is called a linear mapping, if the following two properties are fulfilled.

1. ${\displaystyle {}\varphi (u+v)=\varphi (u)+\varphi (v)}$ for all ${\displaystyle {}u,v\in V}$.
2. ${\displaystyle {}\varphi (sv)=s\varphi (v)}$ for all ${\displaystyle {}s\in K}$ and ${\displaystyle {}v\in V}$.

Here, the first property is called additivity, and the second property is called compatibility with scaling. When we want to stress the base field, we speak of ${\displaystyle {}K}$-linearity. The identity ${\displaystyle {}\operatorname {Id} _{V}\colon V\rightarrow V}$, the null mapping ${\displaystyle {}V\rightarrow 0}$, and the inclusion ${\displaystyle {}U\subseteq V}$ of a linear subspace are the simplest examples of linear mappings.

## Example

Let ${\displaystyle {}K}$ denote a field, and let ${\displaystyle {}K^{n}}$ be the ${\displaystyle {}n}$-dimensional standard space. Then the ${\displaystyle {}i}$-th projection, that is, the mapping

${\displaystyle K^{n}\longrightarrow K,\left(x_{1},\,\ldots ,\,x_{i-1},\,x_{i},\,x_{i+1},\,\ldots ,\,x_{n}\right)\longmapsto x_{i},}$

is a ${\displaystyle {}K}$-linear mapping. This follows immediately from componentwise addition and scalar multiplication on the standard space. The ${\displaystyle {}i}$-th projection is also called the ${\displaystyle {}i}$-th coordinate function.
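The two defining properties can be checked numerically for the projection; the vectors and scalar below are arbitrary illustrative choices.

```python
import numpy as np

def proj(i, x):
    """i-th coordinate function on R^n, with i 1-indexed as in the text."""
    return x[i - 1]

x = np.array([2.0, 5.0, -1.0])
y = np.array([1.0, 0.0,  4.0])
s = 3.0

# Additivity and compatibility with scaling hold componentwise:
assert proj(2, x + y) == proj(2, x) + proj(2, y)
assert proj(2, s * x) == s * proj(2, x)
```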

## Lemma

Let ${\displaystyle {}K}$ denote a field, and let ${\displaystyle {}U,V,W}$ denote vector spaces over ${\displaystyle {}K}$. Suppose that

${\displaystyle \varphi :U\longrightarrow V\,\,{\text{ and }}\,\,\psi :V\longrightarrow W}$

are linear mappings. Then also the composition

${\displaystyle \psi \circ \varphi \colon U\longrightarrow W}$

is a linear mapping.

### Proof

For ${\displaystyle {}u_{1},u_{2}\in U}$, we have, due to the linearity of ${\displaystyle {}\varphi }$ and ${\displaystyle {}\psi }$,

${\displaystyle {}(\psi \circ \varphi )(u_{1}+u_{2})=\psi (\varphi (u_{1})+\varphi (u_{2}))=\psi (\varphi (u_{1}))+\psi (\varphi (u_{2}))\,,}$

and, for ${\displaystyle {}s\in K}$ and ${\displaystyle {}u\in U}$,

${\displaystyle {}(\psi \circ \varphi )(su)=\psi (s\varphi (u))=s\psi (\varphi (u))\,.}$

${\displaystyle \Box }$

## Lemma

Let ${\displaystyle {}K}$ be a field, and let ${\displaystyle {}V}$ and ${\displaystyle {}W}$ be ${\displaystyle {}K}$-vector spaces. Let

${\displaystyle \varphi \colon V\longrightarrow W}$

be a bijective linear map. Then also the inverse mapping

${\displaystyle \varphi ^{-1}\colon W\longrightarrow V}$

is linear.

### Proof

Let ${\displaystyle {}w_{1},w_{2}\in W}$, and set ${\displaystyle {}v_{i}:=\varphi ^{-1}(w_{i})}$. Since ${\displaystyle {}\varphi (v_{1}+v_{2})=w_{1}+w_{2}}$, we get ${\displaystyle {}\varphi ^{-1}(w_{1}+w_{2})=v_{1}+v_{2}=\varphi ^{-1}(w_{1})+\varphi ^{-1}(w_{2})}$. The compatibility with scaling follows in the same way.

${\displaystyle \Box }$

Determination on a basis

Behind the following statement (the determination theorem), there is the important principle that, in linear algebra (of finite-dimensional vector spaces), the objects are determined by finitely many pieces of data.

## Theorem

Let ${\displaystyle {}K}$ be a field, and let ${\displaystyle {}V}$ and ${\displaystyle {}W}$ be ${\displaystyle {}K}$-vector spaces. Let ${\displaystyle {}v_{i}}$, ${\displaystyle {}i\in I}$, denote a basis of ${\displaystyle {}V}$, and let ${\displaystyle {}w_{i}}$, ${\displaystyle {}i\in I}$, denote elements in ${\displaystyle {}W}$. Then there exists a unique linear mapping

${\displaystyle f\colon V\longrightarrow W}$

with

${\displaystyle f(v_{i})=w_{i}{\text{ for all }}i\in I.}$

### Proof

This proof was not presented in the lecture.
${\displaystyle \Box }$

## Example

The simplest linear mappings are (besides the null mapping) the linear maps from ${\displaystyle {}K}$ to ${\displaystyle {}K}$. Such a linear mapping

${\displaystyle \varphi \colon K\longrightarrow K,x\longmapsto \varphi (x),}$

is determined (by Theorem 24.7, but this is also directly clear) by ${\displaystyle {}\varphi (1)}$, or by the value ${\displaystyle {}\varphi (t)}$ for a single element ${\displaystyle {}t\in K}$, ${\displaystyle {}t\neq 0}$. In particular, ${\displaystyle {}\varphi (x)=ax}$, with a uniquely determined ${\displaystyle {}a\in K}$. In the context of physics, for ${\displaystyle {}K=\mathbb {R} }$, if there is a linear relation between two measurable quantities, we talk about proportionality, and ${\displaystyle {}a}$ is called the proportionality factor. In school, such a linear relation occurs as the "rule of three".
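The determination by a single value can be sketched in a few lines of Python; the measured pair below is a made-up example, not one from the text.

```python
# A linear map K -> K has the form phi(x) = a * x, so one nonzero value fixes it.
# Hypothetical measurement: phi(4) = 10 (made-up numbers for illustration).
t, phi_t = 4.0, 10.0
a = phi_t / t                  # proportionality factor a = phi(t) / t
phi = lambda x: a * x

assert a == 2.5
assert phi(8.0) == 20.0        # linearity: doubling the input doubles the output
```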

Linear mappings and matrices

Due to Theorem 24.7 , a linear mapping

${\displaystyle \varphi \colon K^{n}\longrightarrow K^{m}}$

is determined by the images ${\displaystyle {}\varphi (e_{j})}$, ${\displaystyle {}j=1,\ldots ,n}$, of the standard vectors. Every ${\displaystyle {}\varphi (e_{j})}$ is a linear combination

${\displaystyle {}\varphi (e_{j})=\sum _{i=1}^{m}a_{ij}e_{i}\,,}$

and therefore the linear mapping is determined by the ${\displaystyle {}mn}$ elements ${\displaystyle {}a_{ij}}$, ${\displaystyle {}1\leq i\leq m}$, ${\displaystyle {}1\leq j\leq n}$, from the field. We can write such a data set as a matrix. Because of the determination theorem, this holds for linear maps in general, as soon as bases are fixed in both vector spaces.
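This recipe — reading off the columns as the images of the standard vectors — can be sketched as follows; the map `phi` is a hypothetical example chosen for illustration.

```python
import numpy as np

# A hypothetical linear map R^3 -> R^2, treated as a black box.
def phi(x):
    return np.array([x[0] + 2 * x[1], 3 * x[2] - x[1]])

n = 3
# The j-th column of the describing matrix is phi(e_j).
M = np.column_stack([phi(e) for e in np.eye(n)])

# The matrix now reproduces the map on arbitrary vectors.
x = np.array([1.0, -2.0, 4.0])
assert np.allclose(M @ x, phi(x))
```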

## Definition

Let ${\displaystyle {}K}$ denote a field, and let ${\displaystyle {}V}$ be an ${\displaystyle {}n}$-dimensional vector space with a basis ${\displaystyle {}{\mathfrak {v}}=v_{1},\ldots ,v_{n}}$, and let ${\displaystyle {}W}$ be an ${\displaystyle {}m}$-dimensional vector space with a basis ${\displaystyle {}{\mathfrak {w}}=w_{1},\ldots ,w_{m}}$.

For a linear mapping

${\displaystyle \varphi \colon V\longrightarrow W,}$

the matrix

${\displaystyle {}M=M_{\mathfrak {w}}^{\mathfrak {v}}(\varphi )=(a_{ij})_{ij}\,,}$

where ${\displaystyle {}a_{ij}}$ is the ${\displaystyle {}i}$-th coordinate of ${\displaystyle {}\varphi (v_{j})}$ with respect to the basis ${\displaystyle {}{\mathfrak {w}}}$, is called the describing matrix for ${\displaystyle {}\varphi }$ with respect to the bases.

For a matrix ${\displaystyle {}M=(a_{ij})_{ij}\in \operatorname {Mat} _{m\times n}(K)}$, the linear mapping ${\displaystyle {}\varphi _{\mathfrak {w}}^{\mathfrak {v}}(M)}$ determined by

${\displaystyle v_{j}\longmapsto \sum _{i=1}^{m}a_{ij}w_{i}}$

in the sense of Theorem 24.7 is called the linear mapping determined by the matrix ${\displaystyle {}M}$.

For a linear mapping ${\displaystyle {}\varphi \colon K^{n}\rightarrow K^{m}}$, we always assume that everything refers to the standard bases, unless stated otherwise. For a linear mapping ${\displaystyle {}\varphi \colon V\rightarrow V}$ from a vector space to itself (such a mapping is called an endomorphism), one usually takes the same basis on both sides. The identity on a vector space of dimension ${\displaystyle {}n}$ is described by the identity matrix, with respect to every basis.

## Theorem

Let ${\displaystyle {}K}$ be a field, and let ${\displaystyle {}V}$ be an ${\displaystyle {}n}$-dimensional vector space with a basis ${\displaystyle {}{\mathfrak {v}}=v_{1},\ldots ,v_{n}}$, and let ${\displaystyle {}W}$ be an ${\displaystyle {}m}$-dimensional vector space with a basis ${\displaystyle {}{\mathfrak {w}}=w_{1},\ldots ,w_{m}}$. Then the mappings

${\displaystyle \varphi \longmapsto M_{\mathfrak {w}}^{\mathfrak {v}}(\varphi ){\text{ and }}M\longmapsto \varphi _{\mathfrak {w}}^{\mathfrak {v}}(M),}$

defined in the definition above, are inverse to each other.

### Proof

This proof was not presented in the lecture.
${\displaystyle \Box }$

## Example

A linear mapping

${\displaystyle \varphi \colon K^{n}\longrightarrow K^{m}}$

is usually described by the matrix ${\displaystyle {}M}$ with respect to the standard bases on the left and on the right. The result of the matrix multiplication

${\displaystyle {}{\begin{pmatrix}y_{1}\\\vdots \\y_{m}\end{pmatrix}}=M{\begin{pmatrix}x_{1}\\\vdots \\x_{n}\end{pmatrix}}\,}$

can be interpreted directly as a point in ${\displaystyle {}K^{m}}$. The ${\displaystyle {}j}$-th column of ${\displaystyle {}M}$ is the image of the ${\displaystyle {}j}$-th standard vector ${\displaystyle {}e_{j}}$.

Rotations

A rotation of the real plane ${\displaystyle {}\mathbb {R} ^{2}}$ around the origin, through the angle ${\displaystyle {}\alpha }$ counterclockwise, maps ${\displaystyle {}{\begin{pmatrix}1\\0\end{pmatrix}}}$ to ${\displaystyle {}{\begin{pmatrix}\cos \alpha \\\sin \alpha \end{pmatrix}}}$ and ${\displaystyle {}{\begin{pmatrix}0\\1\end{pmatrix}}}$ to ${\displaystyle {}{\begin{pmatrix}-\sin \alpha \\\cos \alpha \end{pmatrix}}}$. Therefore, plane rotations are described in the following way.

## Definition

A linear mapping

${\displaystyle D(\alpha )\colon \mathbb {R} ^{2}\longrightarrow \mathbb {R} ^{2},}$

which is given, with respect to the standard basis, by a rotation matrix ${\displaystyle {}{\begin{pmatrix}\cos \alpha &-\sin \alpha \\\sin \alpha &\cos \alpha \end{pmatrix}}}$ (for some ${\displaystyle {}\alpha \in \mathbb {R} }$), is called a rotation.

A space rotation is a linear mapping of the space ${\displaystyle {}\mathbb {R} ^{3}}$ to itself around a rotation axis (a line through the origin) through a certain angle ${\displaystyle {}\alpha }$. If the vector ${\displaystyle {}v_{1}\neq 0}$ defines the axis, and ${\displaystyle {}u_{2}}$ and ${\displaystyle {}u_{3}}$ are orthogonal to ${\displaystyle {}v_{1}}$ and to each other, and all have length ${\displaystyle {}1}$, then the rotation is described by the matrix

${\displaystyle {\begin{pmatrix}1&0&0\\0&\operatorname {cos} \,\alpha &-\operatorname {sin} \,\alpha \\0&\operatorname {sin} \,\alpha &\operatorname {cos} \,\alpha \end{pmatrix}}}$

with respect to the basis ${\displaystyle {}v_{1},u_{2},u_{3}}$.
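A short sketch of both kinds of rotation matrices (assuming NumPy; the angle is an arbitrary choice). For the space rotation, the axis is taken to be ${\displaystyle {}e_{1}}$, so the adapted basis is the standard basis itself.

```python
import numpy as np

def plane_rotation(alpha):
    """Rotation matrix D(alpha) with respect to the standard basis of R^2."""
    c, s = np.cos(alpha), np.sin(alpha)
    return np.array([[c, -s],
                     [s,  c]])

alpha = 0.7
D = plane_rotation(alpha)

# D maps e1 to (cos a, sin a) and e2 to (-sin a, cos a).
assert np.allclose(D @ [1, 0], [np.cos(alpha), np.sin(alpha)])
assert np.allclose(D @ [0, 1], [-np.sin(alpha), np.cos(alpha)])

# A space rotation about the axis v1 = e1, in the basis e1, e2, e3:
R = np.block([[np.ones((1, 1)),  np.zeros((1, 2))],
              [np.zeros((2, 1)), plane_rotation(alpha)]])
assert np.allclose(R @ [1, 0, 0], [1, 0, 0])       # the axis stays fixed
assert np.allclose(np.linalg.norm(R @ [0, 1, 1]),  # lengths are preserved
                   np.linalg.norm([0, 1, 1]))
```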

The kernel of a linear mapping

## Definition

Let ${\displaystyle {}K}$ denote a field, let ${\displaystyle {}V}$ and ${\displaystyle {}W}$ denote ${\displaystyle {}K}$-vector spaces, and let

${\displaystyle \varphi \colon V\longrightarrow W}$

denote a ${\displaystyle {}K}$-linear mapping. Then

${\displaystyle {}\operatorname {kern} \varphi :=\varphi ^{-1}(0)={\left\{v\in V\mid \varphi (v)=0\right\}}\,}$
is called the kernel of ${\displaystyle {}\varphi }$.

The kernel is a linear subspace of ${\displaystyle {}V}$.

The following criterion for injectivity is important.

## Lemma

Let ${\displaystyle {}K}$ denote a field, let ${\displaystyle {}V}$ and ${\displaystyle {}W}$ denote ${\displaystyle {}K}$-vector spaces, and let

${\displaystyle \varphi \colon V\longrightarrow W}$

denote a ${\displaystyle {}K}$-linear mapping. Then ${\displaystyle {}\varphi }$ is injective if and only if ${\displaystyle {}\operatorname {kern} \varphi =0}$ holds.

### Proof

If the mapping is injective, then, since ${\displaystyle {}\varphi (0)=0}$, there can be no vector ${\displaystyle {}v\neq 0}$ with ${\displaystyle {}\varphi (v)=0}$. Hence, ${\displaystyle {}\varphi ^{-1}(0)=\{0\}}$.
So suppose that ${\displaystyle {}\operatorname {kern} \varphi =0}$, and let ${\displaystyle {}v_{1},v_{2}\in V}$ be given with ${\displaystyle {}\varphi (v_{1})=\varphi (v_{2})}$. Then, due to linearity,

${\displaystyle {}\varphi (v_{1}-v_{2})=\varphi (v_{1})-\varphi (v_{2})=0\,.}$

Therefore, ${\displaystyle {}v_{1}-v_{2}\in \operatorname {kern} \varphi }$, and so ${\displaystyle {}v_{1}=v_{2}}$.

${\displaystyle \Box }$
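For maps between standard spaces given by matrices, this injectivity criterion can be tested via the rank: the kernel is zero exactly when the rank equals the number of columns. The matrices below are hypothetical examples.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [3.0, 0.0]])             # rank 2: an injective map R^2 -> R^3
B = np.array([[1.0, 2.0],
              [2.0, 4.0]])             # rank 1: the kernel contains (2, -1)

# Injective  <=>  kern = 0  <=>  rank equals the number of columns.
assert np.linalg.matrix_rank(A) == A.shape[1]        # kern A = 0
assert np.linalg.matrix_rank(B) < B.shape[1]         # kern B is nonzero
assert np.allclose(B @ np.array([2.0, -1.0]), 0)     # a nonzero kernel vector
```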