Linear algebra (Osnabrück 2024-2025)/Part I/Lecture 16



The determinant

Suppose an $n \times n$-matrix $M$ is given. Can we see "at a glance" whether it is invertible? Does there exist an expression in the entries of the matrix to decide this? This question has a positive answer in terms of the determinant.


Let $K$ be a field, and let $M = (a_{ij})_{ij}$ denote an $n \times n$-matrix over $K$ with entries $a_{ij}$. For $i \in \{1, \ldots, n\}$, let $M_i$ denote the $(n-1) \times (n-1)$-matrix that arises from $M$ when we remove the first column and the $i$-th row. Then one defines recursively the determinant of $M$ by
$$\det M = \begin{cases} a_{11} & \text{for } n = 1, \\ \sum_{i=1}^n (-1)^{i+1} a_{i1} \det M_i & \text{for } n \geq 2. \end{cases}$$

The determinant is only defined for square matrices. For small $n$, the determinant can be computed easily.
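As a sketch of how the recursion unfolds, the definition can be transcribed directly into code; the following Python function (the name `det` is my choice, not part of the lecture) expands along the first column:

```python
def det(M):
    """Determinant of a square matrix M (given as a list of rows),
    computed by the recursive expansion along the first column."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for i in range(n):
        # M_i arises from M by removing the first column and the i-th row;
        # with 0-based i, the sign (-1)^(i+1) of the definition becomes (-1)^i
        M_i = [row[1:] for k, row in enumerate(M) if k != i]
        total += (-1) ** i * M[i][0] * det(M_i)
    return total

print(det([[4, 7], [2, 6]]))  # 4*6 - 2*7 = 10
```

Note that this sketch recomputes many subdeterminants and is only practical for small $n$; efficient methods use row reduction instead.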



For a $2 \times 2$-matrix
$$M = \begin{pmatrix} a & b \\ c & d \end{pmatrix},$$
we have
$$\det M = ad - cb.$$

For a $3 \times 3$-matrix, we can use the rule of Sarrus to compute the determinant. We repeat the first column as the fourth column and the second column as the fifth column. The products of the diagonals running from upper left to lower right enter with a positive sign, and the products of the other diagonals enter with a negative sign.

For a $3 \times 3$-matrix
$$M = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix},$$
we have
$$\det M = a_{11}a_{22}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32} - a_{13}a_{22}a_{31} - a_{11}a_{23}a_{32} - a_{12}a_{21}a_{33}.$$
This is called the rule of Sarrus.
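As a small sketch, the six products of the rule of Sarrus can be written out as one expression (the function name `sarrus` is mine):

```python
def sarrus(M):
    """Rule of Sarrus for a 3x3 matrix: the three diagonals running from
    upper left to lower right enter positively, the other three negatively."""
    (a11, a12, a13), (a21, a22, a23), (a31, a32, a33) = M
    return (a11 * a22 * a33 + a12 * a23 * a31 + a13 * a21 * a32
            - a13 * a22 * a31 - a11 * a23 * a32 - a12 * a21 * a33)

print(sarrus([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))  # -3
```

Keep in mind that the rule of Sarrus works only for $3 \times 3$-matrices; it does not generalize to larger sizes.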


For an upper triangular matrix
$$M = \begin{pmatrix} b_1 & \ast & \cdots & \cdots & \ast \\ 0 & b_2 & \ast & \cdots & \ast \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & b_{n-1} & \ast \\ 0 & \cdots & \cdots & 0 & b_n \end{pmatrix},$$
we have
$$\det M = b_1 b_2 \cdots b_n.$$
In particular, for the identity matrix we get
$$\det E_n = 1.$$

This follows with a simple induction directly from the recursive definition of the determinant: in the expansion along the first column, only the entry $b_1$ is nonzero, and the corresponding matrix $M_1$ is again an upper triangular matrix, with diagonal entries $b_2, \ldots, b_n$.
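A quick numerical check of this fact, using a first-column expansion as in the recursive definition (the helper name is mine):

```python
def det(M):
    # determinant via recursive expansion along the first column
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** i * M[i][0]
               * det([row[1:] for k, row in enumerate(M) if k != i])
               for i in range(len(M)))

U = [[2, 7, 1],
     [0, 3, 5],
     [0, 0, 4]]
print(det(U) == 2 * 3 * 4)  # True: the product of the diagonal entries
```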



Multilinear and alternating mappings

We introduce two concepts that, at the moment, we mainly need for a further understanding of the determinant.


Let $K$ be a field, and let $V$ and $W$ be vector spaces over $K$. A mapping
$$\varphi \colon V^n \longrightarrow W$$
is called multilinear if, for every $i \in \{1, \ldots, n\}$ and every $(n-1)$-tuple $(v_1, \ldots, v_{i-1}, v_{i+1}, \ldots, v_n)$ with $v_j \in V$, the induced mapping
$$V \longrightarrow W, \, u \longmapsto \varphi(v_1, \ldots, v_{i-1}, u, v_{i+1}, \ldots, v_n),$$
is $K$-linear.

For $n = 2$, this property is called bilinear. For example, the multiplication in a field $K$, that is, the mapping
$$K \times K \longrightarrow K, \, (a, b) \longmapsto ab,$$
is bilinear. Also, for a $K$-vector space $V$ and its dual space $V^* = \operatorname{Hom}_K(V, K)$, the evaluation mapping
$$V \times V^* \longrightarrow K, \, (v, f) \longmapsto f(v),$$
is bilinear.


Let $K$ be a field, and let $V$ and $W$ be vector spaces over $K$. Let
$$\varphi \colon V^n \longrightarrow W$$
be a multilinear mapping, and let $u_1, \ldots, u_m \in V$ and $a_{ij} \in K$, and set $v_i = \sum_{j=1}^m a_{ij} u_j$ for $i = 1, \ldots, n$. Then
$$\varphi(v_1, \ldots, v_n) = \sum_{(j_1, \ldots, j_n) \in \{1, \ldots, m\}^n} a_{1 j_1} \cdots a_{n j_n} \, \varphi(u_{j_1}, \ldots, u_{j_n}).$$

Proof: This follows from the multilinearity by expanding one component at a time, using a straightforward induction over $n$.



Let $K$ be a field, let $V$ and $W$ denote $K$-vector spaces, and let $n \in \mathbb{N}$. A multilinear mapping
$$\varphi \colon V^n \longrightarrow W$$
is called alternating if the following holds: whenever two entries in $(v_1, \ldots, v_n)$ are identical, that is, $v_r = v_s$ for a pair $r \neq s$, then
$$\varphi(v_1, \ldots, v_n) = 0.$$

Note that for an alternating mapping, only one vector space $V$ occurs (several times) in the product $V^n$ on the left; otherwise, the condition $v_r = v_s$ would not make sense.


Let $K$ be a field, let $V$ and $W$ denote $K$-vector spaces, and let $n \in \mathbb{N}$. Suppose that
$$\varphi \colon V^n \longrightarrow W$$
is an alternating mapping. Then
$$\varphi(v_1, \ldots, v_r, \ldots, v_s, \ldots, v_n) = - \varphi(v_1, \ldots, v_s, \ldots, v_r, \ldots, v_n).$$
This means that if we swap two vectors, then the sign changes.

Due to the definition of alternating and Lemma 16.6, we have
$$\begin{aligned} 0 &= \varphi(v_1, \ldots, v_r + v_s, \ldots, v_r + v_s, \ldots, v_n) \\ &= \varphi(v_1, \ldots, v_r, \ldots, v_r, \ldots, v_n) + \varphi(v_1, \ldots, v_r, \ldots, v_s, \ldots, v_n) \\ &\quad + \varphi(v_1, \ldots, v_s, \ldots, v_r, \ldots, v_n) + \varphi(v_1, \ldots, v_s, \ldots, v_s, \ldots, v_n) \\ &= \varphi(v_1, \ldots, v_r, \ldots, v_s, \ldots, v_n) + \varphi(v_1, \ldots, v_s, \ldots, v_r, \ldots, v_n). \end{aligned}$$


The determinant is an alternating mapping

We want to show that the recursively defined determinant is a multilinear and alternating mapping. To make sense of this, we identify
$$\operatorname{Mat}_n(K) \cong (K^n)^n,$$
that is, we identify a matrix with the $n$-tuple of its rows. Thus, in the following, we consider a matrix as a column tuple
$$M = \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix},$$
where the entries $v_i$ are row vectors of length $n$.


Let $K$ be a field, and $n \in \mathbb{N}_+$. Then the determinant
$$\det \colon \operatorname{Mat}_n(K) = (K^n)^n \longrightarrow K$$
is multilinear. This means that for every $k \in \{1, \ldots, n\}$, for every choice of vectors $v_1, \ldots, v_{k-1}, v_{k+1}, \ldots, v_n \in K^n$, and for any $u, w \in K^n$, the identity
$$\det \begin{pmatrix} v_1 \\ \vdots \\ u + w \\ \vdots \\ v_n \end{pmatrix} = \det \begin{pmatrix} v_1 \\ \vdots \\ u \\ \vdots \\ v_n \end{pmatrix} + \det \begin{pmatrix} v_1 \\ \vdots \\ w \\ \vdots \\ v_n \end{pmatrix}$$
holds, and for $s \in K$, the identity
$$\det \begin{pmatrix} v_1 \\ \vdots \\ s u \\ \vdots \\ v_n \end{pmatrix} = s \det \begin{pmatrix} v_1 \\ \vdots \\ u \\ \vdots \\ v_n \end{pmatrix}$$
holds.

Let
$$M = \begin{pmatrix} v_1 \\ \vdots \\ u + w \\ \vdots \\ v_n \end{pmatrix}, \quad M' = \begin{pmatrix} v_1 \\ \vdots \\ u \\ \vdots \\ v_n \end{pmatrix}, \quad M'' = \begin{pmatrix} v_1 \\ \vdots \\ w \\ \vdots \\ v_n \end{pmatrix},$$
where we denote the entries by $a_{ij}$, $a'_{ij}$, $a''_{ij}$, and the matrices arising from deleting the first column and a row in an analogous way. In particular, $a_{i1} = a'_{i1} = a''_{i1}$ for $i \neq k$, and $a_{k1} = a'_{k1} + a''_{k1}$. We prove the statement by induction over $n$; the case $n = 1$ is clear. For $i \neq k$, we have $a_{i1} = a'_{i1} = a''_{i1}$ and
$$\det M_i = \det M'_i + \det M''_i$$
due to the induction hypothesis. For $i = k$, we have $M_k = M'_k = M''_k$ and $a_{k1} = a'_{k1} + a''_{k1}$. Altogether, we get
$$\det M = \sum_{i=1}^n (-1)^{i+1} a_{i1} \det M_i = \sum_{i \neq k} (-1)^{i+1} a_{i1} \left( \det M'_i + \det M''_i \right) + (-1)^{k+1} \left( a'_{k1} + a''_{k1} \right) \det M_k = \det M' + \det M''.$$
The compatibility with the scalar multiplication is proved in a similar way, see Exercise 16.22.



Let $K$ be a field and $n \in \mathbb{N}_+$. Then the determinant
$$\det \colon \operatorname{Mat}_n(K) = (K^n)^n \longrightarrow K$$
is alternating.

We prove the statement by induction over $n$; for $n = 1$, there is nothing to show. So suppose that $n \geq 2$, and let $r < s$ with $v_r = v_s$ be the relevant rows. By definition, we have
$$\det M = \sum_{i=1}^n (-1)^{i+1} a_{i1} \det M_i.$$
Due to the induction hypothesis, we have $\det M_i = 0$ for $i \neq r, s$, because two rows coincide in these cases. Therefore,
$$\det M = (-1)^{r+1} a_{r1} \det M_r + (-1)^{s+1} a_{s1} \det M_s,$$
where $a_{r1} = a_{s1}$. The matrices $M_r$ and $M_s$ consist of the same rows; however, the row $v_s = v_r$ is the $(s-1)$-th row of $M_r$ and the $r$-th row of $M_s$. All other rows occur in both matrices in the same order. By swapping altogether $s - 1 - r$ times adjacent rows, we can transform $M_r$ into $M_s$. Due to the induction hypothesis and Lemma 16.8, their determinants are related by the factor $(-1)^{s-1-r}$, thus $\det M_s = (-1)^{s-1-r} \det M_r$. Using this, we obtain
$$\det M = (-1)^{r+1} a_{r1} \det M_r + (-1)^{s+1} a_{r1} (-1)^{s-1-r} \det M_r = a_{r1} \det M_r \left( (-1)^{r+1} + (-1)^{r} \right) = 0.$$

The property of the determinant to be alternating simplifies its computation. In particular, it is clear how the determinant behaves under elementary row operations. If a row is multiplied with a number $s \in K$, the determinant is multiplied with $s$ as well. If two rows are swapped, then the sign of the determinant changes. If a row (or a multiple of a row) is added to another row, then the determinant does not change.
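These three rules can be observed numerically; the following sketch (the helper `det` implements the first-column expansion, all names are mine) compares the determinant before and after each row operation:

```python
def det(M):
    # determinant via recursive expansion along the first column
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** i * M[i][0]
               * det([row[1:] for k, row in enumerate(M) if k != i])
               for i in range(len(M)))

M = [[1, 2, 0],
     [3, 1, 4],
     [0, 5, 2]]
d = det(M)

swapped = [M[1], M[0], M[2]]                  # swap the first two rows
scaled = [[3 * x for x in M[0]], M[1], M[2]]  # multiply the first row by 3
added = [M[0],                                # add 5 times row 1 to row 2
         [x + 5 * y for x, y in zip(M[1], M[0])],
         M[2]]

print(det(swapped) == -d)    # True: a swap changes the sign
print(det(scaled) == 3 * d)  # True: scaling a row scales the determinant
print(det(added) == d)       # True: adding a multiple of a row changes nothing
```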


Let $K$ be a field, and let $M$ denote an $n \times n$-matrix
$$M = \begin{pmatrix} v_1 \\ \vdots \\ v_n \end{pmatrix}$$
over $K$. Then the following statements are equivalent.
  1. We have $\det M \neq 0$.
  2. The rows of $M$ are linearly independent.
  3. $M$ is invertible.
  4. We have $\operatorname{rk} M = n$.

The relation between rank, invertibility, and linear independence was proven in Corollary 12.16. Suppose now that the rows are linearly dependent. After exchanging rows, we may assume that $v_n = \sum_{i=1}^{n-1} s_i v_i$. Then, due to Theorem 16.9 and Theorem 16.10, we get
$$\det M = \det \begin{pmatrix} v_1 \\ \vdots \\ v_{n-1} \\ \sum_{i=1}^{n-1} s_i v_i \end{pmatrix} = \sum_{i=1}^{n-1} s_i \det \begin{pmatrix} v_1 \\ \vdots \\ v_{n-1} \\ v_i \end{pmatrix} = 0.$$

Now suppose that the rows are linearly independent. Then, by exchanging rows, by scaling, and by adding a row to another row, we can transform the matrix successively into the identity matrix. During these manipulations, the determinant is multiplied with some factor $\neq 0$. Since the determinant of the identity matrix is $1$, due to Lemma 16.4, the determinant of the initial matrix is $\neq 0$.
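The second half of this proof is, in effect, an algorithm: row-reduce the matrix while keeping track of the factor by which the determinant changes. A sketch over the rational numbers, with names of my choosing:

```python
from fractions import Fraction

def det_by_elimination(M):
    """Determinant via Gaussian elimination: swapping rows flips the sign,
    adding a multiple of one row to another leaves the determinant unchanged,
    and a triangular matrix has the product of its diagonal as determinant."""
    A = [[Fraction(x) for x in row] for row in M]
    n = len(A)
    sign = 1
    for j in range(n):
        # find a pivot row for column j
        p = next((i for i in range(j, n) if A[i][j] != 0), None)
        if p is None:
            return Fraction(0)  # the rows are linearly dependent
        if p != j:
            A[j], A[p] = A[p], A[j]
            sign = -sign        # a row swap flips the sign
        for i in range(j + 1, n):
            factor = A[i][j] / A[j][j]
            A[i] = [a - factor * b for a, b in zip(A[i], A[j])]
    d = Fraction(sign)
    for j in range(n):
        d *= A[j][j]            # product of the diagonal entries
    return d

print(det_by_elimination([[1, 2, 0], [3, 1, 4], [0, 5, 2]]))  # -30
```

Unlike the recursive expansion, this method needs only on the order of $n^3$ arithmetic operations.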



In case $K = \mathbb{R}$, the determinant is in tight relation to volumes of geometric objects. If we consider vectors $v_1, \ldots, v_n$ in $\mathbb{R}^n$, then they span a parallelotope. This is defined by
$$P = \{ s_1 v_1 + \cdots + s_n v_n \mid s_i \in [0, 1] \}.$$
It consists of all linear combinations of these vectors where all the scalars belong to the unit interval. If the vectors are linearly independent, then this is a "voluminous" body; otherwise, it is an object of smaller dimension. Now the relation
$$\operatorname{vol} P = \left| \det \left( v_1, \ldots, v_n \right) \right|$$
holds, saying that the volume of the parallelotope is the modulus of the determinant of the matrix consisting of the spanning vectors as columns (or rows).

For a proof of the relation between determinant and volume just mentioned, see Fact *****.
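For $n = 2$, the relation can be checked against elementary geometry: the vectors $(3, 0)$ and $(1, 2)$ span a parallelogram with base $3$ and height $2$, hence area $6$. A minimal sketch:

```python
def det2(v, w):
    # determinant of the 2x2 matrix with rows v and w
    return v[0] * w[1] - v[1] * w[0]

v, w = (3, 0), (1, 2)
print(abs(det2(v, w)))  # 6, the area of the spanned parallelogram
```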

