Linear algebra (Osnabrück 2024-2025)/Part I/Lecture 17



Universal property of the determinant

The determinant fulfills the characteristic properties of being multilinear and alternating. Together with the property that the determinant of the identity matrix equals $1$, this already determines the determinant uniquely.


Let $V$ be a vector space over a field $K$ of dimension $n$. A mapping

$$\triangle \colon \underbrace{V \times \cdots \times V}_{n\text{-times}} \longrightarrow K$$

is called a determinant function if the following two conditions are fulfilled.

  1. $\triangle$ is multilinear.
  2. $\triangle$ is alternating.

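For instance, for $n = 2$, the mapping

$$\triangle \colon K^2 \times K^2 \longrightarrow K , \, \triangle \left( \begin{pmatrix} a \\ b \end{pmatrix} , \begin{pmatrix} c \\ d \end{pmatrix} \right) = ad - bc ,$$

is linear in each of the two arguments and yields $0$ whenever the two arguments coincide; hence, it is a determinant function, and so is every scalar multiple of it.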
Lemma 17.2

Let $K$ be a field and $n \in \mathbb{N}_+$. Let

$$\triangle \colon \operatorname{Mat}_n(K) \longrightarrow K , \, M \longmapsto \triangle (M) ,$$

be a determinant function. Then $\triangle$ fulfills the following properties.

  1. If a row of $M$ is multiplied with $s \in K$, then $\triangle$ is multiplied by $s$.
  2. If $M$ contains a zero row, then $\triangle (M) = 0$.
  3. If in $M$ two rows are swapped, then $\triangle$ is multiplied with the factor $-1$.
  4. If a multiple of a row is added to another row, then $\triangle$ does not change.
  5. If $\triangle ( E_n ) = 1$, then, for an upper triangular matrix $M = ( a_{ij} )_{ij}$, we have $\triangle (M) = a_{11} a_{22} \cdots a_{nn}$.

(1) and (2) follow directly from multilinearity.
(3) follows from Lemma 16.8.
To prove (4), we consider the situation where we add to the $k$-th row $v_k$ the $s$-fold of the $i$-th row $v_i$, $i \neq k$. Due to the parts already proven, we have

$$\triangle ( v_1 , \ldots , v_k + s v_i , \ldots , v_n ) = \triangle ( v_1 , \ldots , v_k , \ldots , v_n ) + s \triangle ( v_1 , \ldots , v_i , \ldots , v_n ) = \triangle ( v_1 , \ldots , v_n ) ,$$

since the second summand contains the row $v_i$ twice and is therefore $0$ by the alternating property.

(5). If a diagonal element is $0$, then set $r := \max \{ i \mid a_{ii} = 0 \}$. We can add to the $r$-th row suitable multiples of the $i$-th rows, $i > r$, in order to achieve that the new $r$-th row is a zero row, without changing the value of the determinant function. Due to (2), this value is $0 = a_{11} a_{22} \cdots a_{nn}$.

In case no diagonal element is $0$, we may obtain, by several scalings, that all diagonal elements are $1$. By adding rows, we obtain furthermore the identity matrix. Therefore,

$$\triangle (M) = a_{11} a_{22} \cdots a_{nn} \cdot \triangle ( E_n ) = a_{11} a_{22} \cdots a_{nn} .$$
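As an illustration of property (5): for a determinant function with $\triangle ( E_3 ) = 1$, only the diagonal entries of an upper triangular matrix matter, for example

$$\triangle \begin{pmatrix} 2 & 7 & -1 \\ 0 & 3 & 5 \\ 0 & 0 & 4 \end{pmatrix} = 2 \cdot 3 \cdot 4 = 24 ,$$

whereas every upper triangular matrix with a $0$ on the diagonal is sent to $0$.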


Theorem 17.3

Let $K$ be a field and $n \in \mathbb{N}_+$. Then there exists exactly one determinant function

$$\triangle \colon \operatorname{Mat}_n(K) = ( K^n )^n \longrightarrow K$$

fulfilling

$$\triangle ( e_1 , e_2 , \ldots , e_n ) = 1 ,$$

where $e_i$ denote the standard vectors, namely the determinant.

The determinant fulfills, due to Theorem 16.9, Theorem 16.10, and Lemma 16.4, all the given properties.
Uniqueness. For every matrix $M$, there exists a sequence of elementary row operations such that, in the end, we get an upper triangular matrix. Hence, due to Lemma 17.2, the value of the determinant function is determined by its values on the upper triangular matrices. Therefore, after scaling and row addition, it is even determined by its value on the identity matrix.
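To see the uniqueness argument in a concrete case: any determinant function $\triangle$ on $\operatorname{Mat}_2(\mathbb{Q})$ with $\triangle ( E_2 ) = 1$ must satisfy

$$\triangle \begin{pmatrix} 2 & 6 \\ 1 & 5 \end{pmatrix} = \triangle \begin{pmatrix} 2 & 6 \\ 0 & 2 \end{pmatrix} = 2 \cdot 2 = 4 ,$$

where first the $\left( - \frac{1}{2} \right)$-fold of the first row was added to the second row (property (4) of Lemma 17.2), and then property (5) was applied. So the value on this matrix is forced and agrees with the determinant.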



The multiplication theorem for determinants

We discuss several important theorems about the determinant.


Theorem 17.4

Let $K$ denote a field, and $n \in \mathbb{N}_+$. Then for matrices $A , B \in \operatorname{Mat}_n(K)$, the relation

$$\det ( A \circ B ) = \det A \cdot \det B$$

holds.

We fix the matrix $B$.

Suppose first that $\det B = 0$. Then, due to Theorem 16.11, the matrix $B$ is not invertible and therefore, also $A \circ B$ is not invertible. Hence, $\det ( A \circ B ) = 0 = \det A \cdot \det B$.

Suppose now that $B$ is invertible. In this case, we consider the well-defined mapping

$$\delta \colon \operatorname{Mat}_n(K) \longrightarrow K , \, A \longmapsto ( \det B )^{-1} \cdot \det ( A \circ B ) .$$

We want to show that this mapping equals the mapping $A \mapsto \det A$, by showing that it fulfills all the properties which, according to Theorem 17.3, characterize the determinant. If $z_1 , \ldots , z_n$ denote the rows of $A$, then $\delta (A)$ is computed by applying the determinant to the rows $z_1 B , \ldots , z_n B$, and then by multiplying with $( \det B )^{-1}$. Hence the multilinearity and the alternating property follow from Exercise 16.29. If we start with $A = E_n$, then $A \circ B = B$ and thus

$$\delta ( E_n ) = ( \det B )^{-1} \cdot \det ( E_n \circ B ) = ( \det B )^{-1} \cdot \det B = 1 .$$
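As a quick check of the multiplication theorem for $2 \times 2$-matrices: with

$$A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} , \quad B = \begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix} , \quad A \circ B = \begin{pmatrix} 2 & 3 \\ 4 & 7 \end{pmatrix} ,$$

we have $\det A = -2$, $\det B = -1$, and indeed $\det ( A \circ B ) = 14 - 12 = 2 = \det A \cdot \det B$.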

Let $K$ denote a field, and let $M$ denote an $n \times n$-matrix over $K$. Then

$$\det M = \det M^{ \text{tr} } .$$

If $M$ is not invertible, then, due to Theorem 16.11, the determinant is $0$ and the rank is smaller than $n$. This does also hold for the transposed matrix, so that its determinant is again $0$. So suppose that $M$ is invertible. We reduce the statement in this case to the corresponding statement for elementary matrices, which can be verified directly, see Exercise 16.13. Because of Lemma 12.18, there exist elementary matrices $E_1 , \ldots , E_s$ such that

$$D = E_s \cdots E_1 M$$

is a diagonal matrix. Due to Exercise 4.20, we have

$$M = E_1^{-1} \cdots E_s^{-1} D$$

and

$$M^{ \text{tr} } = D^{ \text{tr} } ( E_s^{-1} )^{ \text{tr} } \cdots ( E_1^{-1} )^{ \text{tr} } .$$

The diagonal matrix is not changed under transposing it. Since the determinants of the elementary matrices are also not changed under transposition, we get, using Theorem 17.4,

$$\det M^{ \text{tr} } = \det D \cdot \det ( E_s^{-1} ) \cdots \det ( E_1^{-1} ) = \det ( E_1^{-1} ) \cdots \det ( E_s^{-1} ) \cdot \det D = \det M .$$
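For a $2 \times 2$-matrix, the equality can be read off directly:

$$\det \begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc = \det \begin{pmatrix} a & c \\ b & d \end{pmatrix} .$$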

Let $K$ be a field, and let $M = ( a_{ij} )_{ij}$ be an $n \times n$-matrix over $K$. For $i , j \in \{ 1 , \ldots , n \}$, let $M_{ij}$ be the matrix which arises from $M$ by leaving out the $i$-th row and the $j$-th column. Then (for $n \geq 2$ and for every fixed $i$ and every fixed $j$)

$$\det M = \sum_{ i = 1 }^{ n } ( -1 )^{ i + j } a_{ij} \det M_{ij} = \sum_{ j = 1 }^{ n } ( -1 )^{ i + j } a_{ij} \det M_{ij} .$$

For $j = 1$, the first equation is the recursive definition of the determinant. From that statement, the case $i = 1$ follows, due to the preceding theorem on the determinant of the transposed matrix. By exchanging columns and rows, the statement follows in full generality, see Exercise 17.11.
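For example, expansion with respect to the first row gives

$$\det \begin{pmatrix} 1 & 2 & 0 \\ 3 & 1 & 4 \\ 0 & 5 & 2 \end{pmatrix} = 1 \cdot \det \begin{pmatrix} 1 & 4 \\ 5 & 2 \end{pmatrix} - 2 \cdot \det \begin{pmatrix} 3 & 4 \\ 0 & 2 \end{pmatrix} + 0 \cdot \det \begin{pmatrix} 3 & 1 \\ 0 & 5 \end{pmatrix} = -18 - 12 + 0 = -30 ;$$

expansion with respect to the first column yields the same value, as the theorem asserts.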



The determinant of a linear mapping

Let

$$\varphi \colon V \longrightarrow V$$

be a linear mapping from a vector space $V$ of dimension $n$ into itself. It is described by a matrix $M \in \operatorname{Mat}_n(K)$ with respect to a given basis. We would like to define the determinant of the linear mapping by the determinant of the matrix. However, there is the problem of whether this is well-defined, since a linear mapping is described by quite different matrices with respect to different bases. But, because of Corollary 11.12, if $M$ and $N$ are two describing matrices and $B$ is the transformation matrix of the change of bases, then the relation $N = B M B^{-1}$ holds. The multiplication theorem for determinants then yields

$$\det N = \det ( B M B^{-1} ) = \det B \cdot \det M \cdot ( \det B )^{-1} = \det M ,$$

so that the following definition is in fact independent of the basis chosen.


Let $K$ denote a field, and let $V$ denote a $K$-vector space of finite dimension. Let

$$\varphi \colon V \longrightarrow V$$

be a linear mapping, which is described by the matrix $M$ with respect to a basis. Then

$$\det \varphi := \det M$$

is called the determinant of the linear mapping $\varphi$.
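For example, with

$$M = \begin{pmatrix} 1 & 1 \\ 0 & 2 \end{pmatrix} , \quad B = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} , \quad N = B M B^{-1} = \begin{pmatrix} 1 & 2 \\ 0 & 2 \end{pmatrix} ,$$

the matrices $M$ and $N$ describe the same linear mapping of $K^2$ with respect to two different bases, and indeed $\det N = 2 = \det M$.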



Cramer's rule

For a square matrix $M = ( a_{ij} )_{ij} \in \operatorname{Mat}_n(K)$, we call

$$\operatorname{Adj} M := ( b_{ij} )_{ij} \text{ with } b_{ij} := ( -1 )^{ i + j } \det M_{ji}$$

the adjugate matrix of $M$, where $M_{ji}$ arises from $M$ by deleting the $j$-th row and the $i$-th column.

Note that in this definition, for the entries of the adjugate, the rows and the columns are swapped.
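For a $2 \times 2$-matrix, the adjugate is obtained by swapping the two diagonal entries and negating the two other entries:

$$\operatorname{Adj} \begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{pmatrix} d & -b \\ -c & a \end{pmatrix} .$$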


Theorem 17.9

Let $K$ be a field, and let $M$ denote an $n \times n$-matrix over $K$. Then

$$( \operatorname{Adj} M ) \cdot M = M \cdot ( \operatorname{Adj} M ) = ( \det M ) E_n .$$

If $M$ is invertible, then

$$M^{-1} = ( \det M )^{-1} \cdot \operatorname{Adj} M .$$
Let $M = ( a_{ij} )_{ij}$. Let the coefficients of the adjugate matrix be denoted by

$$b_{ik} := ( -1 )^{ i + k } \det M_{ki} .$$

The coefficients of the product $( \operatorname{Adj} M ) \cdot M$ are

$$c_{ij} = \sum_{ k = 1 }^{ n } b_{ik} a_{kj} = \sum_{ k = 1 }^{ n } ( -1 )^{ i + k } a_{kj} \det M_{ki} .$$

In case $i = j$, this is $\det M$, as this sum is the expansion of the determinant with respect to the $i$-th column. So let $i \neq j$, and let $M'$ denote the matrix that arises from $M$ by replacing in $M$ the $i$-th column by the $j$-th column. If we expand $M'$ with respect to the $i$-th column, then we get

$$0 = \det M' = \sum_{ k = 1 }^{ n } ( -1 )^{ i + k } a_{kj} \det M'_{ki} = \sum_{ k = 1 }^{ n } ( -1 )^{ i + k } a_{kj} \det M_{ki} ,$$

since $M'$ has two identical columns and $M'_{ki} = M_{ki}$. Therefore, these coefficients are $0$, and the first equation holds.
The second equation is proved similarly, where we now use the expansion of the determinant with respect to the rows.

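For the matrix $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$ with $ad - bc \neq 0$, the theorem recovers the familiar inversion formula:

$$\begin{pmatrix} a & b \\ c & d \end{pmatrix}^{-1} = \frac{1}{ ad - bc } \begin{pmatrix} d & -b \\ -c & a \end{pmatrix} , \quad \text{since} \quad \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix} = \begin{pmatrix} ad - bc & 0 \\ 0 & ad - bc \end{pmatrix} .$$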

The following statement is called Cramer's rule.


Let $K$ be a field, and let

$$\begin{matrix} a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n & = & c_1 \\ a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n & = & c_2 \\ \vdots & & \vdots \\ a_{n1} x_1 + a_{n2} x_2 + \cdots + a_{nn} x_n & = & c_n \end{matrix}$$

be an inhomogeneous linear system over $K$. Suppose that the describing matrix $M = ( a_{ij} )_{ij}$ is invertible. Then the unique solution for $x_j$ is given by

$$x_j = \frac{ \det \begin{pmatrix} a_{11} & \cdots & a_{1, j-1} & c_1 & a_{1, j+1} & \cdots & a_{1n} \\ \vdots & & \vdots & \vdots & \vdots & & \vdots \\ a_{n1} & \cdots & a_{n, j-1} & c_n & a_{n, j+1} & \cdots & a_{nn} \end{pmatrix} }{ \det M } .$$

For an invertible matrix $M$, the solution of the linear system $M x = c$ can be found by applying $M^{-1}$, that is, $x = M^{-1} c$. Using Theorem 17.9, this means $x = ( \det M )^{-1} ( \operatorname{Adj} M ) c$. For the $j$-th component, this means

$$x_j = ( \det M )^{-1} \sum_{ k = 1 }^{ n } ( -1 )^{ j + k } c_k \det M_{kj} .$$

The right-hand factor is the expansion of the determinant of the matrix shown in the numerator with respect to the $j$-th column.



Finally, we solve a linear system using Cramer's rule.
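Consider, for instance, the system

$$\begin{matrix} x + 2y & = & 4 \\ 3x + 5y & = & 11 \, . \end{matrix}$$

The describing matrix has determinant $1 \cdot 5 - 2 \cdot 3 = -1 \neq 0$, and Cramer's rule yields

$$x = \frac{ \det \begin{pmatrix} 4 & 2 \\ 11 & 5 \end{pmatrix} }{ -1 } = \frac{ -2 }{ -1 } = 2 \quad \text{and} \quad y = \frac{ \det \begin{pmatrix} 1 & 4 \\ 3 & 11 \end{pmatrix} }{ -1 } = \frac{ -1 }{ -1 } = 1 ,$$

as one confirms by substituting back into the equations.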
