# Mathematics for Applied Sciences (Osnabrück 2023-2024)/Part I/Lecture 26

## Rank of matrices

## Definition

Let ${\displaystyle {}K}$ be a field, and let ${\displaystyle {}M}$ denote an ${\displaystyle {}m\times n}$-matrix over ${\displaystyle {}K}$. Then the dimension of the linear subspace of ${\displaystyle {}K^{m}}$ generated by the columns is called the column rank of the matrix, written

${\displaystyle \operatorname {rk} \,M.}$

## Lemma

Let ${\displaystyle {}K}$ denote a field, and let ${\displaystyle {}V}$ and ${\displaystyle {}W}$ denote ${\displaystyle {}K}$-vector spaces of dimensions ${\displaystyle {}n}$ and ${\displaystyle {}m}$. Let

${\displaystyle \varphi \colon V\longrightarrow W}$

be a linear mapping which is described by the matrix ${\displaystyle {}M\in \operatorname {Mat} _{m\times n}(K)}$, with respect to bases of the spaces. Then

${\displaystyle {}\operatorname {rk} \,\varphi =\operatorname {rk} \,M\,}$

holds.

### Proof

${\displaystyle \Box }$

To formulate the next statement, we introduce the row rank of an ${\displaystyle {}m\times n}$-matrix as the dimension of the linear subspace of ${\displaystyle {}K^{n}}$ generated by the rows.

## Lemma

Let ${\displaystyle {}K}$ be a field, and let ${\displaystyle {}M}$ denote an ${\displaystyle {}m\times n}$-matrix over ${\displaystyle {}K}$. Then the column rank coincides with the row rank. If ${\displaystyle {}M}$ is transformed with elementary row manipulations to a matrix ${\displaystyle {}M'}$ in the sense of Theorem 21.9, then the rank equals the number of relevant rows of ${\displaystyle {}M'}$.

### Proof

Let ${\displaystyle {}r}$ denote the number of relevant rows of the matrix ${\displaystyle {}M'}$ in echelon form, obtained by elementary row manipulations. We have to show that this number equals both the column rank and the row rank, of ${\displaystyle {}M'}$ and of ${\displaystyle {}M}$. An elementary row manipulation does not change the linear subspace generated by the rows, so the row rank is unchanged. Hence, the row rank of ${\displaystyle {}M}$ equals the row rank of ${\displaystyle {}M'}$. This matrix has row rank ${\displaystyle {}r}$, since the first ${\displaystyle {}r}$ rows are linearly independent and, besides these, there are only zero rows. But ${\displaystyle {}M'}$ also has column rank ${\displaystyle {}r}$, since the ${\displaystyle {}r}$ columns in which a new step begins are linearly independent, and the other columns are linear combinations of these ${\displaystyle {}r}$ columns. By Exercise 26.2, the column rank is preserved under elementary row manipulations, so ${\displaystyle {}M}$ has column rank ${\displaystyle {}r}$ as well.

${\displaystyle \Box }$

Since both ranks coincide, we simply speak of the rank of a matrix.
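To make this concrete, here is a small pure-Python sketch (an illustration, not part of the lecture; the function names are my own) that computes the rank by bringing the matrix into echelon form and counting the relevant rows, and checks that the row rank agrees with the column rank, i.e. with the rank of the transposed matrix:

```python
from fractions import Fraction

def rank(rows):
    """Rank of a matrix (list of rows) via Gaussian elimination over Q."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0  # number of relevant (pivot) rows found so far
    for col in range(len(m[0]) if m else 0):
        # find a row at or below position r with a nonzero entry in this column
        pivot = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        # eliminate the entries below the pivot
        for i in range(r + 1, len(m)):
            f = m[i][col] / m[r][col]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def transpose(rows):
    return [list(col) for col in zip(*rows)]

M = [[1, 2, 3],
     [2, 4, 6],   # twice the first row, so the rank drops to 2
     [0, 1, 1]]
print(rank(M), rank(transpose(M)))  # 2 2
```

The second row is a multiple of the first, so only two relevant rows survive the elimination, and the transposed matrix yields the same number.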

## Corollary

Let ${\displaystyle {}K}$ be a field, and let ${\displaystyle {}M}$ denote an ${\displaystyle {}n\times n}$-matrix over ${\displaystyle {}K}$. Then the following statements are equivalent.
1. ${\displaystyle {}M}$ is invertible.
2. The rank of ${\displaystyle {}M}$ is ${\displaystyle {}n}$.
3. The rows of ${\displaystyle {}M}$ are linearly independent.
4. The columns of ${\displaystyle {}M}$ are linearly independent.

### Proof

The equivalence of (2), (3), and (4) follows from the definition and from Lemma 26.3.
For the equivalence of (1) and (2), consider the linear mapping

${\displaystyle \varphi \colon K^{n}\longrightarrow K^{n}}$

defined by ${\displaystyle {}M}$. The property that the column rank equals ${\displaystyle {}n}$ is equivalent to the map being surjective, and this is, due to Corollary 25.4, equivalent to the map being bijective. Because of Lemma 25.11, bijectivity is equivalent to the matrix being invertible.

${\displaystyle \Box }$

## Determinants

## Definition

Let ${\displaystyle {}K}$ be a field, and let ${\displaystyle {}M}$ denote an ${\displaystyle {}n\times n}$-matrix over ${\displaystyle {}K}$ with entries ${\displaystyle {}a_{ij}}$. For ${\displaystyle {}i\in \{1,\ldots ,n\}}$, let ${\displaystyle {}M_{i}}$ denote the ${\displaystyle {}(n-1)\times (n-1)}$-matrix, which arises from ${\displaystyle {}M}$, when we remove the first column and the ${\displaystyle {}i}$-th row. Then one defines recursively the determinant of ${\displaystyle {}M}$ by

${\displaystyle {}\det M={\begin{cases}a_{11}\,,&{\text{ for }}n=1\,,\\\sum _{i=1}^{n}(-1)^{i+1}a_{i1}\det M_{i}&{\text{ for }}n\geq 2\,.\end{cases}}\,}$

The determinant is only defined for square matrices. For small ${\displaystyle {}n}$, the determinant can be computed easily.
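The recursive definition translates directly into a short program. The following pure-Python sketch (not part of the lecture; the name `det` is my own) expands along the first column exactly as in the definition, with ${\displaystyle {}(-1)^{i+1}}$ becoming `(-1) ** i` for 0-based indices:

```python
def det(m):
    """Determinant via the recursive definition: expansion along the first column."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for i in range(n):
        # M_i: remove the i-th row and the first column
        minor = [row[1:] for k, row in enumerate(m) if k != i]
        total += (-1) ** i * m[i][0] * det(minor)
    return total

print(det([[1, 2], [3, 4]]))                    # 1*4 - 3*2 = -2
print(det([[2, 0, 0], [5, 3, 0], [1, 7, 4]]))   # 2*3*4 = 24
```

This recursion is fine for small matrices; for large ones, elimination-based methods are far more efficient.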

## Example

For a ${\displaystyle {}2\times 2}$-matrix

${\displaystyle {}M={\begin{pmatrix}a&b\\c&d\end{pmatrix}}\,,}$

we have

${\displaystyle \det {\begin{pmatrix}a&b\\c&d\end{pmatrix}}=ad-cb.}$

## Example

For a ${\displaystyle {}3\times 3}$-matrix ${\displaystyle {}M={\begin{pmatrix}a_{11}&a_{12}&a_{13}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\end{pmatrix}}}$, we have

${\displaystyle \det {\begin{pmatrix}a_{11}&a_{12}&a_{13}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\end{pmatrix}}=a_{11}a_{22}a_{33}+a_{12}a_{23}a_{31}+a_{13}a_{21}a_{32}-a_{13}a_{22}a_{31}-a_{11}a_{23}a_{32}-a_{12}a_{21}a_{33}.}$

This is called the rule of Sarrus.
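The rule of Sarrus can be spelled out as a one-line function; this sketch (names are my own) evaluates the six products above for a sample matrix:

```python
def det3_sarrus(a):
    """Rule of Sarrus for a 3x3 matrix given as a list of rows."""
    return (a[0][0]*a[1][1]*a[2][2] + a[0][1]*a[1][2]*a[2][0] + a[0][2]*a[1][0]*a[2][1]
            - a[0][2]*a[1][1]*a[2][0] - a[0][0]*a[1][2]*a[2][1] - a[0][1]*a[1][0]*a[2][2])

M = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
print(det3_sarrus(M))  # -3
```

Note that the rule of Sarrus works only for ${\displaystyle {}3\times 3}$-matrices; it has no analogue for larger sizes.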

## Lemma

For an upper triangular matrix

${\displaystyle {}M={\begin{pmatrix}b_{1}&\ast &\cdots &\cdots &\ast \\0&b_{2}&\ast &\cdots &\ast \\\vdots &\ddots &\ddots &\ddots &\vdots \\0&\cdots &0&b_{n-1}&\ast \\0&\cdots &\cdots &0&b_{n}\end{pmatrix}}\,,}$
we have
${\displaystyle {}\det M=b_{1}b_{2}\cdots b_{n-1}b_{n}\,.}$
In particular, for the identity matrix, we get ${\displaystyle {}\det E_{n}=1}$.

### Proof

This follows with a simple induction directly from the recursive definition of the determinant.

${\displaystyle \Box }$
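The lemma can be checked numerically; this pure-Python sketch (names are my own) applies the recursive determinant to an upper triangular example:

```python
def det(m):
    # recursive determinant (expansion along the first column)
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** i * m[i][0] * det([r[1:] for k, r in enumerate(m) if k != i])
               for i in range(len(m)))

U = [[2, 5, 1],
     [0, 3, 7],
     [0, 0, 4]]
print(det(U))  # product of the diagonal entries: 2 * 3 * 4 = 24
```

In the expansion along the first column, only the first summand is nonzero, which is precisely the induction step of the proof.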

## Multilinearity

We want to show that the recursively defined determinant is a "multilinear" and "alternating" mapping, where we identify

${\displaystyle {}\operatorname {Mat} _{n}(K)\cong (K^{n})^{n}\,,}$

so a matrix is identified with the ${\displaystyle {}n}$-tuple of its rows. We write a matrix as a tuple of rows,

${\displaystyle {\begin{pmatrix}v_{1}\\\vdots \\v_{n}\end{pmatrix}}}$

where the entries ${\displaystyle {}v_{i}}$ are row vectors of length ${\displaystyle {}n}$.

## Theorem

Let ${\displaystyle {}K}$ be a field, and ${\displaystyle {}n\in \mathbb {N} _{+}}$. Then the determinant

${\displaystyle \operatorname {Mat} _{n}(K)=(K^{n})^{n}\longrightarrow K,M\longmapsto \det M,}$

is multilinear. This means that for every ${\displaystyle {}k\in \{1,\ldots ,n\}}$, and for every choice of ${\displaystyle {}n-1}$ vectors ${\displaystyle {}v_{1},\ldots ,v_{k-1},v_{k+1},\ldots ,v_{n}\in K^{n}}$, and for any ${\displaystyle {}u,w\in K^{n}}$, the identity

${\displaystyle {}\det {\begin{pmatrix}v_{1}\\\vdots \\v_{k-1}\\u+w\\v_{k+1}\\\vdots \\v_{n}\end{pmatrix}}=\det {\begin{pmatrix}v_{1}\\\vdots \\v_{k-1}\\u\\v_{k+1}\\\vdots \\v_{n}\end{pmatrix}}+\det {\begin{pmatrix}v_{1}\\\vdots \\v_{k-1}\\w\\v_{k+1}\\\vdots \\v_{n}\end{pmatrix}}\,}$

holds, and for ${\displaystyle {}s\in K}$, the identity

${\displaystyle {}\det {\begin{pmatrix}v_{1}\\\vdots \\v_{k-1}\\su\\v_{k+1}\\\vdots \\v_{n}\end{pmatrix}}=s\det {\begin{pmatrix}v_{1}\\\vdots \\v_{k-1}\\u\\v_{k+1}\\\vdots \\v_{n}\end{pmatrix}}\,}$

holds.

### Proof

This proof was not presented in the lecture.
${\displaystyle \Box }$
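Both identities of the theorem can be verified on concrete rows; this sketch (an illustration with my own variable names, not part of the lecture) checks additivity and homogeneity in the second row of a ${\displaystyle {}3\times 3}$-matrix:

```python
def det(m):
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** i * m[i][0] * det([r[1:] for k, r in enumerate(m) if k != i])
               for i in range(len(m)))

v1, v3 = [1, 2, 0], [3, 1, 4]   # the fixed rows
u, w, s = [2, 0, 1], [1, 5, 2], 7

# additivity in the second row
lhs = det([v1, [a + b for a, b in zip(u, w)], v3])
rhs = det([v1, u, v3]) + det([v1, w, v3])
print(lhs == rhs)  # True

# homogeneity in the second row
print(det([v1, [s * a for a in u], v3]) == s * det([v1, u, v3]))  # True
```

Note that multilinearity is linearity in each single row separately; the determinant is not linear as a map on the whole matrix space.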

## Theorem

Let ${\displaystyle {}K}$ be a field, and ${\displaystyle {}n\in \mathbb {N} _{+}}$. Then the determinant

${\displaystyle \operatorname {Mat} _{n}(K)=(K^{n})^{n}\longrightarrow K,M\longmapsto \det M,}$
has the following properties.
1. If in ${\displaystyle {}M}$ two rows are identical, then ${\displaystyle {}\det M=0}$. This means that the determinant is alternating.
2. If we exchange two rows in ${\displaystyle {}M}$, then the determinant changes by the factor ${\displaystyle {}-1}$.

### Proof

This proof was not presented in the lecture.
${\displaystyle \Box }$
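Both properties can be observed directly; this sketch (variable names are my own) repeats a row and swaps two rows of a small matrix:

```python
def det(m):
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** i * m[i][0] * det([r[1:] for k, r in enumerate(m) if k != i])
               for i in range(len(m)))

v1, v2, v3 = [1, 2, 3], [0, 1, 4], [5, 6, 0]

print(det([v1, v2, v1]))                        # two identical rows: 0
print(det([v2, v1, v3]) == -det([v1, v2, v3]))  # swapping rows flips the sign: True
```

Over a field of characteristic different from 2, property (1) actually follows from property (2) by swapping the two identical rows.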

## Theorem

Let ${\displaystyle {}K}$ be a field, and let ${\displaystyle {}M}$ denote an ${\displaystyle {}n\times n}$-matrix over ${\displaystyle {}K}$. Then the following statements are equivalent.
1. We have ${\displaystyle {}\det M\neq 0}$.
2. The rows of ${\displaystyle {}M}$ are linearly independent.
3. ${\displaystyle {}M}$ is invertible.
4. We have ${\displaystyle {}\operatorname {rk} \,M=n}$.

### Proof

The relation between rank, invertibility, and linear independence was proven in Corollary 26.4. Suppose now that the rows are linearly dependent. After exchanging rows, we may assume that ${\displaystyle {}v_{n}=\sum _{i=1}^{n-1}s_{i}v_{i}}$. Then, due to Theorem 26.9 and Theorem 26.10, we get

${\displaystyle {}\det M=\det {\begin{pmatrix}v_{1}\\\vdots \\v_{n-1}\\\sum _{i=1}^{n-1}s_{i}v_{i}\end{pmatrix}}=\sum _{i=1}^{n-1}s_{i}\det {\begin{pmatrix}v_{1}\\\vdots \\v_{n-1}\\v_{i}\end{pmatrix}}=0\,.}$

Now suppose that the rows are linearly independent. Then, by exchanging rows, scaling, and adding one row to another, we can transform the matrix successively into the identity matrix. Each of these manipulations multiplies the determinant by some factor ${\displaystyle {}\neq 0}$. Since the determinant of the identity matrix is ${\displaystyle {}1}$, the determinant of the initial matrix is ${\displaystyle {}\neq 0}$.

${\displaystyle \Box }$
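The first half of the argument can be traced numerically: if one row is a linear combination of the others, the determinant vanishes. A sketch (variable names are my own):

```python
def det(m):
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** i * m[i][0] * det([r[1:] for k, r in enumerate(m) if k != i])
               for i in range(len(m)))

v1, v2 = [1, 2, 0], [0, 1, 3]
dependent = [v1, v2, [a + 2 * b for a, b in zip(v1, v2)]]  # third row = v1 + 2*v2
print(det(dependent))  # 0

independent = [[1, 2, 0], [0, 1, 3], [0, 0, 1]]
print(det(independent))  # 1 (nonzero, so the matrix is invertible)
```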

## Remark

In the case ${\displaystyle {}K=\mathbb {R} }$, the determinant is closely related to volumes of geometric objects. If we consider vectors ${\displaystyle {}v_{1},\ldots ,v_{n}}$ in ${\displaystyle {}\mathbb {R} ^{n}}$, then they span a parallelotope. This is defined by

${\displaystyle {}P:={\left\{s_{1}v_{1}+\cdots +s_{n}v_{n}\mid s_{i}\in [0,1]\right\}}\,.}$

It consists of all linear combinations of these vectors, where all the scalars belong to the unit interval. If the vectors are linearly independent, then this is a "voluminous" body; otherwise, it is an object of smaller dimension. Now the relation

${\displaystyle {}\operatorname {vol} \,P=\vert {\det {\left(v_{1},\ldots ,v_{n}\right)}}\vert \,}$

holds, saying that the volume of the parallelotope equals the modulus of the determinant of the matrix whose columns are the spanning vectors.
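For ${\displaystyle {}n=2}$, the parallelotope is a parallelogram, and the formula reduces to elementary geometry. A sketch (names are my own), using the spanning vectors ${\displaystyle {}(3,0)}$ and ${\displaystyle {}(1,2)}$, which form a parallelogram with base 3 and height 2:

```python
def det2(a, b, c, d):
    # determinant of the 2x2 matrix with rows (a, b) and (c, d)
    return a * d - b * c

# columns (3, 0) and (1, 2): base 3, height 2, hence area 6
print(abs(det2(3, 1, 0, 2)))  # 6
```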

## The multiplication theorem for determinants

We discuss, without proofs, further important theorems about the determinant. The proofs rely on a systematic account of the properties that characterize the determinant, namely multilinearity and the alternating property. Together with the condition that the determinant of the identity matrix is ${\displaystyle {}1}$, these properties determine the determinant uniquely.

## Theorem

Let ${\displaystyle {}K}$ denote a field, and ${\displaystyle {}n\in \mathbb {N} _{+}}$. Then for matrices ${\displaystyle {}A,B\in \operatorname {Mat} _{n}(K)}$, the relation

${\displaystyle {}\det {\left(A\circ B\right)}=\det A\cdot \det B\,}$

holds.

### Proof

This proof was not presented in the lecture.
${\displaystyle \Box }$
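The multiplication theorem can be checked on a concrete pair of matrices; this sketch (function names are my own) multiplies two ${\displaystyle {}2\times 2}$-matrices and compares determinants:

```python
def det(m):
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** i * m[i][0] * det([r[1:] for k, r in enumerate(m) if k != i])
               for i in range(len(m)))

def matmul(a, b):
    # matrix product of two square matrices given as lists of rows
    return [[sum(a[i][k] * b[k][j] for k in range(len(b))) for j in range(len(b[0]))]
            for i in range(len(a))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [5, 2]]
print(det(matmul(A, B)) == det(A) * det(B))  # True
```

Here ${\displaystyle {}\det A=-2}$ and ${\displaystyle {}\det B=-5}$, so both sides equal ${\displaystyle {}10}$.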

## Definition

Let ${\displaystyle {}K}$ be a field, and let ${\displaystyle {}M=(a_{ij})_{ij}}$ be an ${\displaystyle {}m\times n}$-matrix over ${\displaystyle {}K}$. Then the ${\displaystyle {}n\times m}$-matrix

${\displaystyle {M^{\text{tr}}}={\left(b_{ij}\right)}_{ij}{\text{ with }}b_{ij}:=a_{ji}}$
is called the transposed matrix of ${\displaystyle {}M}$.

The transposed matrix arises by interchanging the roles of the rows and the columns. For example, we have

${\displaystyle {}{{\begin{pmatrix}t&n&o&d\\r&s&s&x\\a&p&e&y\end{pmatrix}}^{\text{tr}}}={\begin{pmatrix}t&r&a\\n&s&p\\o&s&e\\d&x&y\end{pmatrix}}\,.}$

## Theorem

Let ${\displaystyle {}K}$ denote a field, and let ${\displaystyle {}M}$ denote an ${\displaystyle {}n\times n}$-matrix over ${\displaystyle {}K}$. Then

${\displaystyle {}\det M=\det {M^{\text{tr}}}\,.}$

### Proof

This proof was not presented in the lecture.
${\displaystyle \Box }$
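This invariance under transposition is easy to check on an example; a sketch (names are my own):

```python
def det(m):
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** i * m[i][0] * det([r[1:] for k, r in enumerate(m) if k != i])
               for i in range(len(m)))

def transpose(m):
    # rows become columns and vice versa
    return [list(col) for col in zip(*m)]

M = [[1, 4, 2],
     [0, 3, 5],
     [7, 1, 6]]
print(det(M) == det(transpose(M)))  # True
```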

This implies that we can also compute the determinant by expanding along the rows, as the following statement shows.

## Corollary

Let ${\displaystyle {}K}$ be a field, and let ${\displaystyle {}M={\left(a_{ij}\right)}_{ij}}$ be an ${\displaystyle {}n\times n}$-matrix over ${\displaystyle {}K}$. For ${\displaystyle {}i,j\in \{1,\ldots ,n\}}$, let ${\displaystyle {}M_{ij}}$ be the matrix which arises from ${\displaystyle {}M}$ by leaving out the ${\displaystyle {}i}$-th row and the ${\displaystyle {}j}$-th column. Then (for ${\displaystyle {}n\geq 2}$; the first sum for every fixed ${\displaystyle {}j}$, the second for every fixed ${\displaystyle {}i}$)

${\displaystyle {}\det M=\sum _{i=1}^{n}(-1)^{i+j}a_{ij}\det M_{ij}=\sum _{j=1}^{n}(-1)^{i+j}a_{ij}\det M_{ij}\,.}$

### Proof

For ${\displaystyle {}j=1}$, the first equation is the recursive definition of the determinant. From that statement, the case ${\displaystyle {}i=1}$ follows, due to Theorem 26.15. By exchanging columns and rows, the statement follows in full generality; see Exercise 26.13.

${\displaystyle \Box }$
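The cofactor expansion along an arbitrary row can be implemented and compared against the first-column definition; a sketch (function names are my own, indices 0-based, so the sign ${\displaystyle {}(-1)^{i+j}}$ is unchanged):

```python
def det(m):
    # reference: expansion along the first column
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** i * m[i][0] * det([r[1:] for k, r in enumerate(m) if k != i])
               for i in range(len(m)))

def expand_row(m, i):
    """Cofactor expansion of det(m) along row i (0-based)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        # M_ij: leave out row i and column j
        minor = [[m[r][c] for c in range(n) if c != j] for r in range(n) if r != i]
        total += (-1) ** (i + j) * m[i][j] * det(minor)
    return total

M = [[1, 4, 2], [0, 3, 5], [7, 1, 6]]
print(all(expand_row(M, i) == det(M) for i in range(3)))  # True
```

In practice, one expands along the row or column with the most zeros, since those summands vanish.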

## The determinant of a linear mapping

Let

${\displaystyle \varphi \colon V\longrightarrow V}$

be a linear mapping from a vector space of dimension ${\displaystyle {}n}$ into itself. It is described by a matrix ${\displaystyle {}M\in \operatorname {Mat} _{n}(K)}$ with respect to a given basis. We would like to define the determinant of the linear mapping by the determinant of the matrix. However, the question arises whether this is well-defined, since a linear mapping is described by quite different matrices with respect to different bases. But, because of Corollary 25.9, when we have two describing matrices ${\displaystyle {}M}$ and ${\displaystyle {}N}$, and the matrix ${\displaystyle {}B}$ for the change of bases, the relation ${\displaystyle {}N=BMB^{-1}}$ holds. The multiplication theorem for determinants then yields

${\displaystyle {}\det N=\det {\left(BMB^{-1}\right)}=(\det B)(\det M){\left(\det B^{-1}\right)}=(\det B){\left(\det B^{-1}\right)}(\det M)=\det M\,,}$

so that the following definition is in fact independent of the basis chosen.
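This base-change computation can be traced on a concrete pair of matrices; a sketch (names are my own, with the inverse of the shear matrix ${\displaystyle {}B}$ written down by hand):

```python
def det(m):
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** i * m[i][0] * det([r[1:] for k, r in enumerate(m) if k != i])
               for i in range(len(m)))

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b))) for j in range(len(b[0]))]
            for i in range(len(a))]

M = [[2, 1], [0, 3]]
B = [[1, 1], [0, 1]]        # change-of-basis matrix (a shear)
B_inv = [[1, -1], [0, 1]]   # its inverse, computed by hand

N = matmul(matmul(B, M), B_inv)
print(det(N) == det(M))  # True: similar matrices have the same determinant
```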

## Definition

Let ${\displaystyle {}K}$ denote a field, and let ${\displaystyle {}V}$ denote a ${\displaystyle {}K}$-vector space of finite dimension. Let

${\displaystyle \varphi \colon V\longrightarrow V}$

be a linear mapping, which is described by the matrix ${\displaystyle {}M}$, with respect to a basis. Then

${\displaystyle {}\det \varphi :=\det M\,}$
is called the determinant of the linear mapping ${\displaystyle {}\varphi }$.