# Mathematics for Applied Sciences (Osnabrück 2023-2024)/Part I/Lecture 25

## The dimension formula

The following statement is called the dimension formula.

## Theorem

Let ${\displaystyle {}K}$ denote a field, let ${\displaystyle {}V}$ and ${\displaystyle {}W}$ denote ${\displaystyle {}K}$-vector spaces, and let

${\displaystyle \varphi \colon V\longrightarrow W}$

denote a ${\displaystyle {}K}$-linear mapping. Suppose that ${\displaystyle {}V}$ has finite dimension. Then

${\displaystyle {}\dim _{}{\left(V\right)}=\dim _{}{\left(\operatorname {kern} \varphi \right)}+\dim _{}{\left(\operatorname {Im} \varphi \right)}\,}$

holds.

### Proof

This proof was not presented in the lecture.
${\displaystyle \Box }$

## Definition

Let ${\displaystyle {}K}$ denote a field, let ${\displaystyle {}V}$ and ${\displaystyle {}W}$ denote ${\displaystyle {}K}$-vector spaces, and let

${\displaystyle \varphi \colon V\longrightarrow W}$

denote a ${\displaystyle {}K}$-linear mapping. Suppose that ${\displaystyle {}V}$ has finite dimension. Then we call

${\displaystyle {}\operatorname {rk} \,\varphi :=\dim _{}{\left(\operatorname {Im} \varphi \right)}\,}$
the rank of ${\displaystyle {}\varphi }$.

The dimension formula can also be expressed as

${\displaystyle {}\dim _{}{\left(V\right)}=\dim _{}{\left(\operatorname {kern} \varphi \right)}+\operatorname {rk} \,\varphi \,.}$

## Example

We consider the linear mapping

${\displaystyle \varphi \colon \mathbb {R} ^{3}\longrightarrow \mathbb {R} ^{4},{\begin{pmatrix}x\\y\\z\end{pmatrix}}\longmapsto M{\begin{pmatrix}x\\y\\z\end{pmatrix}}={\begin{pmatrix}y+z\\2y+2z\\x+3y+4z\\2x+4y+6z\end{pmatrix}},}$

given by the matrix

${\displaystyle {}M={\begin{pmatrix}0&1&1\\0&2&2\\1&3&4\\2&4&6\end{pmatrix}}\,.}$

To determine the kernel, we have to solve the homogeneous linear system

${\displaystyle {}{\begin{pmatrix}y+z\\2y+2z\\x+3y+4z\\2x+4y+6z\end{pmatrix}}={\begin{pmatrix}0\\0\\0\\0\end{pmatrix}}\,.}$

The solution space is

${\displaystyle {}L={\left\{s{\begin{pmatrix}1\\1\\-1\end{pmatrix}}\mid s\in \mathbb {R} \right\}}\,,}$

and this is the kernel of ${\displaystyle {}\varphi }$. The kernel has dimension one; therefore, by the dimension formula, the dimension of the image is ${\displaystyle {}2}$.
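This computation can be checked mechanically. The following sketch (plain Python with exact rational arithmetic; the helper `row_reduce` is our own, not from any library) verifies that the vector ${\displaystyle {}(1,1,-1)}$ is mapped to ${\displaystyle {}0}$ and that the rank of ${\displaystyle {}M}$, i.e. the dimension of the image, is ${\displaystyle {}2}$:

```python
from fractions import Fraction

def row_reduce(rows):
    """Bring the matrix (a list of rows of Fractions) into row echelon form
    and return the number of nonzero rows, i.e. the rank."""
    rows = [row[:] for row in rows]
    rank, col, m, n = 0, 0, len(rows), len(rows[0])
    while rank < m and col < n:
        pivot = next((i for i in range(rank, m) if rows[i][col] != 0), None)
        if pivot is None:          # no pivot in this column, move on
            col += 1
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(rank + 1, m):
            factor = rows[i][col] / rows[rank][col]
            rows[i] = [a - factor * b for a, b in zip(rows[i], rows[rank])]
        rank += 1
        col += 1
    return rank

M = [[Fraction(x) for x in row]
     for row in ([0, 1, 1], [0, 2, 2], [1, 3, 4], [2, 4, 6])]

# The kernel vector (1, 1, -1) is mapped to the zero vector:
image_of_kernel_vector = [sum(row[j] * v for j, v in enumerate([1, 1, -1]))
                          for row in M]
print(all(x == 0 for x in image_of_kernel_vector))  # True

# dim(V) = dim(kern phi) + rk(phi), here 3 = 1 + 2:
print(row_reduce(M))  # 2
```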

## Corollary

Let ${\displaystyle {}K}$ denote a field, let ${\displaystyle {}V}$ and ${\displaystyle {}W}$ denote ${\displaystyle {}K}$-vector spaces with the same dimension ${\displaystyle {}n}$. Let

${\displaystyle \varphi \colon V\longrightarrow W}$

denote a linear mapping. Then ${\displaystyle {}\varphi }$ is injective if and only if ${\displaystyle {}\varphi }$ is surjective.

### Proof

This follows from the dimension formula and Lemma 24.14.

${\displaystyle \Box }$

## Composition of linear mappings and matrices

## Lemma

In the correspondence between linear mappings and matrices, the composition of linear mappings corresponds to matrix multiplication. More precisely: let ${\displaystyle {}U,V,W}$ denote vector spaces over a field ${\displaystyle {}K}$ with bases

${\displaystyle {\mathfrak {u}}=u_{1},\ldots ,u_{p},\,{\mathfrak {v}}=v_{1},\ldots ,v_{n}{\text{ and }}{\mathfrak {w}}=w_{1},\ldots ,w_{m}.}$

Let

${\displaystyle \psi :U\longrightarrow V{\text{ and }}\varphi :V\longrightarrow W}$

denote linear mappings. Then, for the describing matrices of ${\displaystyle {}\psi }$, of ${\displaystyle {}\varphi }$, and of the composition ${\displaystyle {}\varphi \circ \psi }$, the relation

${\displaystyle {}M_{\mathfrak {w}}^{\mathfrak {u}}(\varphi \circ \psi )=(M_{\mathfrak {w}}^{\mathfrak {v}}(\varphi ))\circ (M_{\mathfrak {v}}^{\mathfrak {u}}(\psi ))\,}$

holds.

### Proof

We consider the chain of mappings

${\displaystyle U{\stackrel {\psi }{\longrightarrow }}V{\stackrel {\varphi }{\longrightarrow }}W.}$

Suppose that ${\displaystyle {}\psi }$ is described by the ${\displaystyle {}n\times p}$-matrix ${\displaystyle {}B=(b_{jk})_{jk}}$, and that ${\displaystyle {}\varphi }$ is described by the ${\displaystyle {}m\times n}$-matrix ${\displaystyle {}A={\left(a_{ij}\right)}_{ij}}$ (with respect to the bases). The composition ${\displaystyle {}\varphi \circ \psi }$ has the following effect on the basis vector ${\displaystyle {}u_{k}}$.

${\displaystyle {}{\begin{aligned}{\left(\varphi \circ \psi \right)}{\left(u_{k}\right)}&=\varphi {\left(\psi {\left(u_{k}\right)}\right)}\\&=\varphi {\left(\sum _{j=1}^{n}b_{jk}v_{j}\right)}\\&=\sum _{j=1}^{n}b_{jk}\varphi (v_{j})\\&=\sum _{j=1}^{n}b_{jk}{\left(\sum _{i=1}^{m}a_{ij}w_{i}\right)}\\&=\sum _{i=1}^{m}{\left(\sum _{j=1}^{n}a_{ij}b_{jk}\right)}w_{i}\\&=\sum _{i=1}^{m}c_{ik}w_{i}.\end{aligned}}}$

These coefficients ${\displaystyle {}c_{ik}=\sum _{j=1}^{n}a_{ij}b_{jk}}$ are precisely the entries of the product matrix ${\displaystyle {}A\circ B}$.

${\displaystyle \Box }$

From this, we can conclude that the product of matrices is associative, since the composition of mappings is associative.
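The lemma can be illustrated numerically. In the sketch below (the matrices `A`, `B`, `C` and the vector `x` are made-up test data, and the helper functions are our own), composing the coordinate maps gives the same result as applying the product matrix, and associativity of the matrix product is checked on an example:

```python
def matmul(A, B):
    """Matrix product: (A * B)[i][k] = sum_j A[i][j] * B[j][k]."""
    return [[sum(A[i][j] * B[j][k] for j in range(len(B)))
             for k in range(len(B[0]))] for i in range(len(A))]

def apply(A, x):
    """Apply the linear map described by the matrix A to the coordinate tuple x."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[1, 2], [3, 4], [5, 6]]   # describes phi: K^2 -> K^3 (made-up entries)
B = [[0, 1, 1], [2, 0, 1]]     # describes psi: K^3 -> K^2 (made-up entries)
x = [1, -1, 2]

# Composition of the mappings corresponds to the matrix product:
print(apply(A, apply(B, x)) == apply(matmul(A, B), x))  # True

# The matrix product is associative:
C = [[1, 0], [0, 1], [1, 1]]
print(matmul(matmul(A, B), C) == matmul(A, matmul(B, C)))  # True
```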

## Invertible matrices

## Definition

Let ${\displaystyle {}K}$ be a field, and let ${\displaystyle {}M}$ denote an ${\displaystyle {}n\times n}$-matrix over ${\displaystyle {}K}$. Then ${\displaystyle {}M}$ is called invertible if there exists a matrix ${\displaystyle {}A\in \operatorname {Mat} _{n}(K)}$ such that

${\displaystyle {}A\circ M=E_{n}=M\circ A\,}$
holds.

## Definition

Let ${\displaystyle {}K}$ denote a field. For an invertible matrix ${\displaystyle {}M\in \operatorname {Mat} _{n}(K)}$, the matrix ${\displaystyle {}A\in \operatorname {Mat} _{n}(K)}$ fulfilling

${\displaystyle {}A\circ M=E_{n}=M\circ A\,,}$

is called the inverse matrix of ${\displaystyle {}M}$. It is denoted by

${\displaystyle M^{-1}.}$

## Linear mappings and change of basis

## Lemma

Let ${\displaystyle {}K}$ denote a field, and let ${\displaystyle {}V}$ and ${\displaystyle {}W}$ denote finite-dimensional ${\displaystyle {}K}$-vector spaces. Let ${\displaystyle {}{\mathfrak {v}}}$ and ${\displaystyle {}{\mathfrak {u}}}$ denote bases of ${\displaystyle {}V}$, and let ${\displaystyle {}{\mathfrak {w}}}$ and ${\displaystyle {}{\mathfrak {z}}}$ denote bases of ${\displaystyle {}W}$. Let

${\displaystyle \varphi \colon V\longrightarrow W}$

denote a linear mapping, which is described by the matrix ${\displaystyle {}M_{\mathfrak {w}}^{\mathfrak {v}}(\varphi )}$ with respect to the bases ${\displaystyle {}{\mathfrak {v}}}$ and ${\displaystyle {}{\mathfrak {w}}}$. Then ${\displaystyle {}\varphi }$ is described with respect to the bases ${\displaystyle {}{\mathfrak {u}}}$ and ${\displaystyle {}{\mathfrak {z}}}$ by the matrix

${\displaystyle M_{\mathfrak {z}}^{\mathfrak {w}}\circ (M_{\mathfrak {w}}^{\mathfrak {v}}(\varphi ))\circ (M_{\mathfrak {u}}^{\mathfrak {v}})^{-1},}$

where ${\displaystyle {}M_{\mathfrak {u}}^{\mathfrak {v}}}$ and ${\displaystyle {}M_{\mathfrak {z}}^{\mathfrak {w}}}$ are the transformation matrices, which describe the change of basis from ${\displaystyle {}{\mathfrak {v}}}$ to ${\displaystyle {}{\mathfrak {u}}}$ and from ${\displaystyle {}{\mathfrak {w}}}$ to ${\displaystyle {}{\mathfrak {z}}}$.

### Proof

This proof was not presented in the lecture.
${\displaystyle \Box }$

## Corollary

Let ${\displaystyle {}K}$ denote a field, and let ${\displaystyle {}V}$ denote a ${\displaystyle {}K}$-vector space of finite dimension. Let

${\displaystyle \varphi \colon V\longrightarrow V}$

be a linear mapping. Let ${\displaystyle {}{\mathfrak {u}}}$ and ${\displaystyle {}{\mathfrak {v}}}$ denote bases of ${\displaystyle {}V}$. Then the matrices that describe the linear mapping with respect to ${\displaystyle {}{\mathfrak {u}}}$ and ${\displaystyle {}{\mathfrak {v}}}$, respectively (on both sides), fulfil the relation

${\displaystyle {}M_{\mathfrak {u}}^{\mathfrak {u}}(\varphi )=M_{\mathfrak {u}}^{\mathfrak {v}}\circ M_{\mathfrak {v}}^{\mathfrak {v}}(\varphi )\circ (M_{\mathfrak {u}}^{\mathfrak {v}})^{-1}\,.}$

### Proof

This follows directly from Lemma 25.8.

${\displaystyle \Box }$

## Definition

Two square matrices ${\displaystyle {}M,N\in \operatorname {Mat} _{n}(K)}$ are called similar if there exists an invertible matrix ${\displaystyle {}B}$ with

${\displaystyle {}M=BNB^{-1}}$.

Due to Corollary 25.9, for a linear mapping ${\displaystyle {}\varphi \colon V\rightarrow V}$, the describing matrices with respect to different bases are similar.
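As a hypothetical numerical illustration (the matrices `Mv` and `B` below are invented), similarity ${\displaystyle {}M_{\mathfrak {u}}^{\mathfrak {u}}(\varphi )=B\circ M_{\mathfrak {v}}^{\mathfrak {v}}(\varphi )\circ B^{-1}}$ can be checked in the equivalent form ${\displaystyle {}M_{\mathfrak {u}}^{\mathfrak {u}}(\varphi )\circ B=B\circ M_{\mathfrak {v}}^{\mathfrak {v}}(\varphi )}$, using exact rational arithmetic:

```python
from fractions import Fraction as F

def matmul(A, B):
    """Matrix product of two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def inv2(M):
    """Inverse of an invertible 2x2 matrix via the adjugate formula."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

Mv = [[F(2), F(1)], [F(0), F(3)]]   # describing matrix w.r.t. the basis v (invented)
B  = [[F(1), F(1)], [F(1), F(2)]]   # invertible change-of-basis matrix (invented)

Mu = matmul(matmul(B, Mv), inv2(B)) # the similar matrix B Mv B^{-1}

# Similarity in the division-free form Mu B = B Mv:
print(matmul(Mu, B) == matmul(B, Mv))  # True
```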

## Properties of linear mappings

## Lemma

Let ${\displaystyle {}K}$ be a field, and let ${\displaystyle {}V}$ and ${\displaystyle {}W}$ be vector spaces over ${\displaystyle {}K}$ of dimensions ${\displaystyle {}n}$ and ${\displaystyle {}m}$. Let

${\displaystyle \varphi \colon V\longrightarrow W}$

be a linear map, described by the matrix ${\displaystyle {}M\in \operatorname {Mat} _{m\times n}(K)}$

with respect to two bases. Then the following properties hold.
1. ${\displaystyle {}\varphi }$ is injective if and only if the columns of the matrix are linearly independent.
2. ${\displaystyle {}\varphi }$ is surjective if and only if the columns of the matrix form a generating system of ${\displaystyle {}K^{m}}$.
3. Let ${\displaystyle {}m=n}$. Then ${\displaystyle {}\varphi }$ is bijective if and only if the columns of the matrix form a basis of ${\displaystyle {}K^{m}}$, and this holds if and only if ${\displaystyle {}M}$ is invertible.

### Proof

Let ${\displaystyle {}{\mathfrak {v}}=v_{1},\ldots ,v_{n}}$ and ${\displaystyle {}{\mathfrak {w}}=w_{1},\ldots ,w_{m}}$ denote the bases of ${\displaystyle {}V}$ and ${\displaystyle {}W}$ respectively, and let ${\displaystyle {}s_{1},\ldots ,s_{n}}$ denote the column vectors of ${\displaystyle {}M}$. (1). The mapping ${\displaystyle {}\varphi }$ has the property

${\displaystyle {}\varphi (v_{j})=\sum _{i=1}^{m}s_{ij}w_{i}\,,}$

where ${\displaystyle {}s_{ij}}$ is the ${\displaystyle {}i}$-th entry of the ${\displaystyle {}j}$-th column vector. Therefore,

${\displaystyle {}\varphi {\left(\sum _{j=1}^{n}a_{j}v_{j}\right)}=\sum _{j=1}^{n}a_{j}{\left(\sum _{i=1}^{m}s_{ij}w_{i}\right)}=\sum _{i=1}^{m}{\left(\sum _{j=1}^{n}a_{j}s_{ij}\right)}w_{i}\,.}$

This is ${\displaystyle {}0}$ if and only if ${\displaystyle {}\sum _{j=1}^{n}a_{j}s_{ij}=0}$ for all ${\displaystyle {}i}$, and this is equivalent with

${\displaystyle {}\sum _{j=1}^{n}a_{j}s_{j}=0\,.}$

For this vector equation, there exists a nontrivial tuple ${\displaystyle {}{\left(a_{1},\ldots ,a_{n}\right)}}$ if and only if the columns are linearly dependent, and this holds if and only if ${\displaystyle {}\varphi }$ is not injective.
(2). See Exercise 25.3.
(3). Let ${\displaystyle {}n=m}$. The first equivalence follows from (1) and (2). If ${\displaystyle {}\varphi }$ is bijective, then there exists a (linear) inverse mapping ${\displaystyle {}\varphi ^{-1}}$ with

${\displaystyle \varphi \circ \varphi ^{-1}=\operatorname {Id} _{W}{\text{ and }}\varphi ^{-1}\circ \varphi =\operatorname {Id} _{V}.}$

Let ${\displaystyle {}M}$ denote the matrix for ${\displaystyle {}\varphi }$, and ${\displaystyle {}N}$ the matrix for ${\displaystyle {}\varphi ^{-1}}$. The matrix for the identity is the identity matrix. Because of Lemma 25.5, we have

${\displaystyle {}M\circ N=E_{n}=N\circ M\,}$

and therefore ${\displaystyle {}M}$ is invertible. The reverse implication is proved similarly.

${\displaystyle \Box }$

## Finding the inverse matrix

## Method

Let ${\displaystyle {}M}$ denote a square matrix. How can we decide whether the matrix is invertible, and how can we find the inverse matrix ${\displaystyle {}M^{-1}}$?

For this, we set up a table: on the left-hand side we write down the matrix ${\displaystyle {}M}$, and on the right-hand side the identity matrix (of the right size). Now we apply, on both sides, step by step the same elementary row manipulations. The goal is to transform the matrix on the left-hand side into the identity matrix; this is possible if and only if the matrix is invertible. We claim that, by this method, the right-hand side contains the matrix ${\displaystyle {}M^{-1}}$ in the end.

This rests on the following invariance principle. Every elementary row manipulation can be realized as multiplication from the left by some elementary matrix ${\displaystyle {}E}$. If at some point the table contains the pair

${\displaystyle (M_{1},M_{2}),}$

after the next step (in the next line) we have

${\displaystyle (EM_{1},EM_{2}).}$

If we multiply the inverse of the first matrix (which we do not know yet, but whose existence we know, in case the matrix is invertible) with the second matrix, then we get

${\displaystyle {}(EM_{1})^{-1}EM_{2}=M_{1}^{-1}E^{-1}EM_{2}=M_{1}^{-1}M_{2}\,.}$

This means that this expression does not change in any single step. In the beginning, this expression equals ${\displaystyle {}M^{-1}E_{n}}$; hence, in the end, the pair ${\displaystyle {}(E_{n},N)}$ must fulfil

${\displaystyle {}N=E_{n}^{-1}N=M^{-1}E_{n}=M^{-1}\,.}$
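The method can be sketched in code. The following is a minimal Gauss–Jordan implementation in plain Python; the function name `invert` and the use of exact rational arithmetic via `fractions` are our choices, not part of the lecture:

```python
from fractions import Fraction

def invert(M):
    """Run the same elementary row manipulations on the pair (M | E_n) until
    the left side is the identity matrix; the right side is then M^{-1}.
    Raises ValueError if M is not invertible."""
    n = len(M)
    left = [[Fraction(x) for x in row] for row in M]
    right = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for col in range(n):
        # find a row with a nonzero entry in this column and swap it up
        pivot = next((i for i in range(col, n) if left[i][col] != 0), None)
        if pivot is None:
            raise ValueError("matrix is not invertible")
        left[col], left[pivot] = left[pivot], left[col]
        right[col], right[pivot] = right[pivot], right[col]
        # normalize the pivot row
        p = left[col][col]
        left[col] = [x / p for x in left[col]]
        right[col] = [x / p for x in right[col]]
        # clear the column in all other rows
        for i in range(n):
            if i != col and left[i][col] != 0:
                f = left[i][col]
                left[i] = [a - f * b for a, b in zip(left[i], left[col])]
                right[i] = [a - f * b for a, b in zip(right[i], right[col])]
    return right

Minv = invert([[1, 3, 1], [4, 1, 2], [0, 1, 1]])
print(Minv[0])  # [Fraction(1, 9), Fraction(2, 9), Fraction(-5, 9)]
```

The pivot search makes the row swap explicit; normalizing and clearing each column in turn drives the left side to the identity, exactly as in the invariance argument.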

## Example

We want to find the inverse matrix ${\displaystyle {}M^{-1}}$ of the matrix ${\displaystyle {}M={\begin{pmatrix}1&3&1\\4&1&2\\0&1&1\end{pmatrix}}}$, following Method 25.12.

${\displaystyle {}{\begin{pmatrix}1&3&1\\4&1&2\\0&1&1\end{pmatrix}}}$ ${\displaystyle {}{\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix}}}$

Subtracting ${\displaystyle {}4}$ times the first row from the second row yields

${\displaystyle {}{\begin{pmatrix}1&3&1\\0&-11&-2\\0&1&1\end{pmatrix}}}$ ${\displaystyle {}{\begin{pmatrix}1&0&0\\-4&1&0\\0&0&1\end{pmatrix}}}$

Swapping the second and the third row yields

${\displaystyle {}{\begin{pmatrix}1&3&1\\0&1&1\\0&-11&-2\end{pmatrix}}}$ ${\displaystyle {}{\begin{pmatrix}1&0&0\\0&0&1\\-4&1&0\end{pmatrix}}}$

Adding ${\displaystyle {}11}$ times the second row to the third row yields

${\displaystyle {}{\begin{pmatrix}1&3&1\\0&1&1\\0&0&9\end{pmatrix}}}$ ${\displaystyle {}{\begin{pmatrix}1&0&0\\0&0&1\\-4&1&11\end{pmatrix}}}$

Dividing the third row by ${\displaystyle {}9}$ yields

${\displaystyle {}{\begin{pmatrix}1&3&1\\0&1&1\\0&0&1\end{pmatrix}}}$ ${\displaystyle {}{\begin{pmatrix}1&0&0\\0&0&1\\{\frac {-4}{9}}&{\frac {1}{9}}&{\frac {11}{9}}\end{pmatrix}}}$

Subtracting ${\displaystyle {}3}$ times the second row from the first row yields

${\displaystyle {}{\begin{pmatrix}1&0&-2\\0&1&1\\0&0&1\end{pmatrix}}}$ ${\displaystyle {}{\begin{pmatrix}1&0&-3\\0&0&1\\{\frac {-4}{9}}&{\frac {1}{9}}&{\frac {11}{9}}\end{pmatrix}}}$

Finally, adding ${\displaystyle {}2}$ times the third row to the first row and subtracting the third row from the second row yields

${\displaystyle {}{\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix}}}$ ${\displaystyle {}{\begin{pmatrix}{\frac {1}{9}}&{\frac {2}{9}}&{\frac {-5}{9}}\\{\frac {4}{9}}&{\frac {-1}{9}}&{\frac {-2}{9}}\\{\frac {-4}{9}}&{\frac {1}{9}}&{\frac {11}{9}}\end{pmatrix}}}$
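The result can be verified by multiplying back. The small check below (plain Python with exact rational arithmetic; the helper `matmul` is our own) confirms ${\displaystyle {}M\circ M^{-1}=E_{3}=M^{-1}\circ M}$:

```python
from fractions import Fraction as F

def matmul(A, B):
    """Matrix product of two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

M = [[F(1), F(3), F(1)], [F(4), F(1), F(2)], [F(0), F(1), F(1)]]
Minv = [[F(1, 9), F(2, 9), F(-5, 9)],
        [F(4, 9), F(-1, 9), F(-2, 9)],
        [F(-4, 9), F(1, 9), F(11, 9)]]
E3 = [[F(int(i == j)) for j in range(3)] for i in range(3)]

print(matmul(M, Minv) == E3 and matmul(Minv, M) == E3)  # True
```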