Mathematics for Applied Sciences (Osnabrück 2023-2024)/Part I/Lecture 28



The characteristic polynomial

We want to determine, for a given endomorphism $\varphi \colon V \to V$, the eigenvalues and the eigenspaces. For this, the characteristic polynomial is decisive.


For an $n \times n$-matrix $M$ with entries in a field $K$, the polynomial

$$\chi_M := \det\left( X E_n - M \right)$$

is called the characteristic polynomial[1]

of $M$.

For $M = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$, this means

$$\chi_M = \det \begin{pmatrix} X - a & -b \\ -c & X - d \end{pmatrix} = (X - a)(X - d) - bc = X^2 - (a + d)X + ad - bc.$$

In this definition, we use the determinant of a matrix, which we have only defined for matrices with entries in a field. The entries are now elements of the polynomial ring $K[X]$. But, since we can consider these elements also inside the field of rational functions $K(X)$,[2] this is a useful definition. By definition, the determinant is an element in $K(X)$, but, because all entries of the matrix are polynomials, and because in the recursive definition of the determinant, only addition and multiplication are used, the characteristic polynomial is indeed a polynomial. The degree of the characteristic polynomial is $n$, and its leading coefficient is $1$, so it has the form

$$\chi_M = X^n + c_{n-1} X^{n-1} + \cdots + c_1 X + c_0.$$

We have the important relation

$$\chi_M(\lambda) = \det\left( \lambda E_n - M \right)$$

for every $\lambda \in K$, see Exercise 28.4. Here, on the left-hand side, the number $\lambda$ is inserted into the polynomial, and on the right-hand side, we have the determinant of a matrix which depends on $\lambda$.
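This relation can also be checked numerically. The following sketch is not part of the lecture and uses NumPy with a sample matrix chosen only for illustration; `np.poly` returns the coefficients of $\det(X E_n - M)$.

```python
import numpy as np

# Numerical sketch (sample matrix chosen for illustration only):
# np.poly(M) yields the coefficients of chi_M = det(X*E_n - M),
# monic of degree n, and chi_M(lambda) = det(lambda*E_n - M).
M = np.array([[2.0, 1.0],
              [3.0, 4.0]])

coeffs = np.poly(M)          # X^2 - 6X + 5, i.e. [1, -6, 5]
lam = 1.7                    # an arbitrary test value

lhs = np.polyval(coeffs, lam)               # chi_M(lambda)
rhs = np.linalg.det(lam * np.eye(2) - M)    # det(lambda*E_2 - M)

assert abs(lhs - rhs) < 1e-9
```

Note that the leading coefficient of `coeffs` is $1$ and its length is $n + 1$, in accordance with the description of $\chi_M$ above.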

For a linear mapping

$$\varphi \colon V \longrightarrow V$$

on a finite-dimensional vector space $V$, the characteristic polynomial is defined by

$$\chi_\varphi := \chi_M,$$

where $M$ is a describing matrix with respect to some basis. The multiplication theorem for the determinant shows that this definition is independent of the choice of the basis, see Exercise 28.3.

The characteristic polynomial of the identity on an $n$-dimensional vector space is

$$\chi_{\operatorname{Id}} = \det\left( X E_n - E_n \right) = (X - 1)^n.$$

Let $K$ denote a field, and let $V$ denote an $n$-dimensional vector space. Let

$$\varphi \colon V \longrightarrow V$$

denote a linear mapping. Then $\lambda \in K$ is an eigenvalue of $\varphi$ if and only if $\lambda$ is a zero of the characteristic polynomial

$\chi_\varphi$.

Let $M$ denote a describing matrix for $\varphi$, and let $\lambda \in K$ be given. We have

$$\chi_M(\lambda) = \det\left( \lambda E_n - M \right) = 0$$

if and only if the linear mapping

$$\lambda \operatorname{Id}_V - \varphi$$

is not bijective (and not injective) (due to Theorem 26.11 and Lemma 25.11). This is, because of Lemma 27.11 and Lemma 24.14, equivalent to

$$\operatorname{Eig}_\lambda(\varphi) = \ker\left( \lambda \operatorname{Id}_V - \varphi \right) \neq 0,$$

and this means that the eigenspace for $\lambda$ is not the null space; thus, $\lambda$ is an eigenvalue for $\varphi$.



We consider the real matrix $M = \begin{pmatrix} 0 & 5 \\ 1 & 0 \end{pmatrix}$. The characteristic polynomial is

$$\chi_M = \det\left( X E_2 - M \right) = \det \begin{pmatrix} X & -5 \\ -1 & X \end{pmatrix} = X^2 - 5.$$

The eigenvalues are therefore $\sqrt{5}$ and $-\sqrt{5}$ (we have found these eigenvalues already in Example 27.9, without using the characteristic polynomial).
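Assuming the matrix of this example is $\begin{pmatrix} 0 & 5 \\ 1 & 0 \end{pmatrix}$, a quick numerical cross-check (NumPy sketch, not part of the lecture) confirms both the characteristic polynomial and the eigenvalues:

```python
import numpy as np

# The matrix (0 5; 1 0): characteristic polynomial X^2 - 5,
# eigenvalues sqrt(5) and -sqrt(5).
M = np.array([[0.0, 5.0],
              [1.0, 0.0]])

assert np.allclose(np.poly(M), [1.0, 0.0, -5.0])    # X^2 - 5
assert np.allclose(np.sort(np.linalg.eigvals(M)), [-np.sqrt(5), np.sqrt(5)])
```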


For the matrix

$$M = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix},$$

the characteristic polynomial is

$$\chi_M = \det \begin{pmatrix} X & 1 \\ -1 & X \end{pmatrix} = X^2 + 1.$$

Finding the zeroes of this polynomial leads to the condition

$$X^2 = -1,$$

which has no solution over $\mathbb{R}$, so that the matrix has no eigenvalues over $\mathbb{R}$. However, considered over the complex numbers $\mathbb{C}$, we have the two eigenvalues $i$ and $-i$. For the eigenspace for $i$, we have to determine

$$\operatorname{Eig}_i(M) = \ker\left( i E_2 - M \right) = \ker \begin{pmatrix} i & 1 \\ -1 & i \end{pmatrix};$$

a basis vector (hence an eigenvector) of this is $\begin{pmatrix} 1 \\ -i \end{pmatrix}$. Analogously, we get

$$\operatorname{Eig}_{-i}(M) = \ker \begin{pmatrix} -i & 1 \\ -1 & -i \end{pmatrix} = \mathbb{C} \begin{pmatrix} 1 \\ i \end{pmatrix}.$$
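The passage to the complex numbers can be illustrated numerically (NumPy sketch, not part of the lecture): for the $90$-degree rotation matrix, `np.linalg.eig` automatically switches to complex arithmetic.

```python
import numpy as np

# The rotation matrix (0 -1; 1 0): no real eigenvalues, but the
# eigenvalues i and -i over the complex numbers.
M = np.array([[0.0, -1.0],
              [1.0,  0.0]])

eigenvalues, eigenvectors = np.linalg.eig(M)
assert np.allclose(np.sort_complex(eigenvalues), [-1j, 1j])

# M v = i v for the eigenvector belonging to the eigenvalue i
v = eigenvectors[:, np.argmax(eigenvalues.imag)]
assert np.allclose(M @ v, 1j * v)
```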


For an upper triangular matrix

$$M = \begin{pmatrix} d_1 & \ast & \cdots & \ast \\ 0 & d_2 & \cdots & \ast \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & d_n \end{pmatrix},$$

the characteristic polynomial is

$$\chi_M = (X - d_1)(X - d_2) \cdots (X - d_n),$$

due to Lemma 26.8. In this case, we have directly a factorization of the characteristic polynomial into linear factors, so that we can immediately read off the zeroes and the eigenvalues of $M$, namely just the diagonal elements $d_1, d_2, \ldots, d_n$ (which might not be all different).
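As a numerical illustration (NumPy sketch with sample entries chosen arbitrarily), the characteristic polynomial of an upper triangular matrix agrees with the product of the linear factors over the diagonal:

```python
import numpy as np

# Upper triangular sample with diagonal entries 2, 2, 5, so
# chi_M = (X - 2)^2 (X - 5); np.poly([...]) builds that product from the roots.
M = np.array([[2.0, 7.0, -1.0],
              [0.0, 2.0,  4.0],
              [0.0, 0.0,  5.0]])

assert np.allclose(np.poly(M), np.poly([2.0, 2.0, 5.0]))
assert np.allclose(np.sort(np.linalg.eigvals(M)), [2.0, 2.0, 5.0])
```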



Multiplicities

For a more detailed investigation of eigenspaces, the following concepts are necessary. Let

$$\varphi \colon V \longrightarrow V$$

denote a linear mapping on a finite-dimensional vector space $V$, and let $\lambda \in K$. Then the exponent of the linear polynomial $X - \lambda$ inside the characteristic polynomial $\chi_\varphi$ is called the algebraic multiplicity of $\lambda$, symbolized as $\mu_\lambda := \mu_\lambda(\varphi)$. The dimension of the corresponding eigenspace, that is,

$$\dim\left( \operatorname{Eig}_\lambda(\varphi) \right),$$

is called the geometric multiplicity of $\lambda$. Because of Theorem 28.2, the algebraic multiplicity is positive if and only if the geometric multiplicity is positive. In general, these multiplicities might be different; however, one estimate always holds.


Let $K$ denote a field, and let $V$ denote a finite-dimensional vector space. Let

$$\varphi \colon V \longrightarrow V$$

denote a linear mapping, and let $\lambda \in K$. Then we have the estimate

$$\dim\left( \operatorname{Eig}_\lambda(\varphi) \right) \leq \mu_\lambda(\varphi)$$

between the geometric and the

algebraic multiplicity.

Let $m = \dim\left( \operatorname{Eig}_\lambda(\varphi) \right)$, and let $v_1, \ldots, v_m$ be a basis of this eigenspace. We complement this basis with $v_{m+1}, \ldots, v_n$ to get a basis of $V$, using Theorem 23.23. With respect to this basis, the describing matrix has the form

$$\begin{pmatrix} \lambda E_m & B \\ 0 & C \end{pmatrix}.$$

Therefore, the characteristic polynomial equals (using Exercise 26.9) $(X - \lambda)^m \chi_C$, so that the algebraic multiplicity is at least $m$.



We consider the shearing matrix

$$M = \begin{pmatrix} 1 & a \\ 0 & 1 \end{pmatrix},$$

with $a \in K$. The characteristic polynomial is

$$\chi_M = (X - 1)(X - 1) = (X - 1)^2,$$

so that $1$ is the only eigenvalue of $M$. The corresponding eigenspace is

$$\operatorname{Eig}_1(M) = \ker \begin{pmatrix} 0 & -a \\ 0 & 0 \end{pmatrix}.$$

From

$$\begin{pmatrix} 0 & -a \\ 0 & 0 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} -ay \\ 0 \end{pmatrix},$$

we get that $e_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$ is an eigenvector, and in case $a \neq 0$, the eigenspace is one-dimensional (in case $a = 0$, we have the identity and the eigenspace is two-dimensional). So in case $a \neq 0$, the algebraic multiplicity of the eigenvalue $1$ equals $2$, and the geometric multiplicity equals $1$.
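The gap between the two multiplicities in this example can be checked numerically (NumPy sketch, with a sample value for $a$); the geometric multiplicity is computed as $2 - \operatorname{rank}(M - E_2)$.

```python
import numpy as np

# Shearing matrix with a = 3: the algebraic multiplicity of the eigenvalue 1
# is 2, the geometric multiplicity is 1.
a = 3.0
M = np.array([[1.0, a],
              [0.0, 1.0]])

assert np.allclose(np.poly(M), [1.0, -2.0, 1.0])          # (X - 1)^2

geometric = 2 - np.linalg.matrix_rank(M - np.eye(2))      # dim ker(M - E_2)
assert geometric == 1

assert np.allclose(M @ np.array([1.0, 0.0]), [1.0, 0.0])  # e_1 is an eigenvector
```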



Diagonalizable mappings

The restriction of a linear mapping to an eigenspace is the homothety with the corresponding eigenvalue, so this is a quite simple linear mapping. If there are many eigenvalues with high-dimensional eigenspaces, then the linear mapping is usually simple in some sense. An extreme case is given by the so-called diagonalizable mappings.

For a diagonal matrix

$$D = \begin{pmatrix} d_1 & 0 & \cdots & 0 \\ 0 & d_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & d_n \end{pmatrix},$$

the characteristic polynomial is just

$$\chi_D = (X - d_1)(X - d_2) \cdots (X - d_n).$$

If the number $d$ occurs $k$ times as a diagonal entry, then also the linear factor $X - d$ occurs with exponent $k$ inside the factorization of the characteristic polynomial. This is also true when we just have an upper triangular matrix. But in the case of a diagonal matrix, we can also read off immediately the eigenspaces, see Example 27.7. The eigenspace for $d$ consists of all linear combinations of the standard vectors $e_i$, for which $d_i$ equals $d$. In particular, the dimension of the eigenspace equals the number of times $d$ occurs as a diagonal element. Thus, for a diagonal matrix, the algebraic and the geometric multiplicities coincide.
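For a concrete diagonal matrix, the coincidence of the multiplicities can be verified directly (NumPy sketch with sample entries):

```python
import numpy as np

# Diagonal entries 4, 4, 7: chi_D = (X - 4)^2 (X - 7), and the eigenspace of 4
# is two-dimensional, so both multiplicities of the eigenvalue 4 equal 2.
M = np.diag([4.0, 4.0, 7.0])

assert np.allclose(np.poly(M), np.poly([4.0, 4.0, 7.0]))      # algebraic: 2
assert 3 - np.linalg.matrix_rank(M - 4.0 * np.eye(3)) == 2    # geometric: 2
```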


Let $K$ denote a field, let $V$ denote a vector space, and let

$$\varphi \colon V \longrightarrow V$$

denote a linear mapping. Then $\varphi$ is called diagonalizable if $V$ has a basis consisting of eigenvectors

for $\varphi$.

Let $K$ denote a field, and let $V$ denote a finite-dimensional vector space. Let

$$\varphi \colon V \longrightarrow V$$

denote a

linear mapping. Then the following statements are equivalent.
  1. $\varphi$ is diagonalizable.
  2. There exists a basis $v$ of $V$ such that the describing matrix $M^v_v(\varphi)$ is a diagonal matrix.
  3. For every describing matrix $M = M^u_u(\varphi)$ with respect to a basis $u$, there exists an invertible matrix $B$ such that

    $$B M B^{-1}$$

    is a diagonal matrix.

The equivalence between (1) and (2) follows from the definition, from Example 27.7 , and the correspondence between linear mappings and matrices. The equivalence between (2) and (3) follows from Corollary 25.9 .



Let $K$ denote a field, and let $V$ denote a finite-dimensional vector space of dimension $n$. Let

$$\varphi \colon V \longrightarrow V$$

denote a linear mapping. Suppose that there exist $n$ different eigenvalues. Then $\varphi$ is

diagonalizable.

Because of Lemma 27.14, there exist $n$ linearly independent eigenvectors. These form, due to Corollary 23.21, a basis of $V$.



We continue with Example 27.9. There exist the two eigenvectors $\begin{pmatrix} \sqrt{5} \\ 1 \end{pmatrix}$ and $\begin{pmatrix} -\sqrt{5} \\ 1 \end{pmatrix}$ for the different eigenvalues $\sqrt{5}$ and $-\sqrt{5}$, so that the mapping is diagonalizable, due to Corollary 28.10. With respect to the basis $v$, consisting of these eigenvectors, the linear mapping is described by the diagonal matrix

$$\begin{pmatrix} \sqrt{5} & 0 \\ 0 & -\sqrt{5} \end{pmatrix}.$$

The transformation matrix, from the basis $v$ to the standard basis $u$, consisting of $\begin{pmatrix} \sqrt{5} \\ 1 \end{pmatrix}$ and $\begin{pmatrix} -\sqrt{5} \\ 1 \end{pmatrix}$, is simply

$$B = \begin{pmatrix} \sqrt{5} & -\sqrt{5} \\ 1 & 1 \end{pmatrix}.$$

The inverse matrix is

$$B^{-1} = \frac{1}{2\sqrt{5}} \begin{pmatrix} 1 & \sqrt{5} \\ -1 & \sqrt{5} \end{pmatrix}.$$

Because of Corollary 25.9, we have the relation

$$\begin{pmatrix} \sqrt{5} & 0 \\ 0 & -\sqrt{5} \end{pmatrix} = B^{-1} \begin{pmatrix} 0 & 5 \\ 1 & 0 \end{pmatrix} B.$$


Multiplicities and diagonalizable matrices

Let $K$ denote a field, and let $V$ denote a finite-dimensional vector space. Let

$$\varphi \colon V \longrightarrow V$$

denote a linear mapping. Then $\varphi$ is diagonalizable if and only if the characteristic polynomial $\chi_\varphi$ is a product of linear factors and if, for every zero $\lambda$ with algebraic multiplicity $\mu_\lambda$, the identity

$$\mu_\lambda = \dim\left( \operatorname{Eig}_\lambda(\varphi) \right)$$

holds.

Proof

This proof was not presented in the lecture.


The product of two diagonal matrices is again a diagonal matrix. The following example shows that the product of two diagonalizable matrices is in general not diagonalizable.


Let $G_1$ and $G_2$ denote two lines in $\mathbb{R}^2$ through the origin, and let $\delta_1$ and $\delta_2$ denote the reflections at these axes. A reflection at an axis is always diagonalizable; the axis and the line orthogonal to the axis are eigenlines (with eigenvalues $1$ and $-1$). The composition

$$\varphi = \delta_2 \circ \delta_1$$

of the reflections is a plane rotation, the angle of rotation being twice the angle between the two lines. However, a rotation is only diagonalizable if the angle of rotation is $0$ or $180$ degrees. If the angle between the axes is different from $0$ and $90$ degrees, then $\delta_2 \circ \delta_1$ does not have any eigenvector.
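A numerical sketch of this example (NumPy; the reflection formula and the concrete angles are illustrative choices): each reflection is diagonalizable with eigenvalues $1$ and $-1$, while their composition, a rotation by a non-trivial angle, has no real eigenvalues.

```python
import numpy as np

def reflection(theta):
    """Matrix of the reflection at the line through the origin with angle theta."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[c,  s],
                     [s, -c]])

d1 = reflection(0.0)           # reflection at the x-axis
d2 = reflection(np.pi / 6)     # axis at 30 degrees

for R in (d1, d2):             # each reflection has the eigenvalues 1 and -1
    assert np.allclose(np.sort(np.linalg.eigvals(R)), [-1.0, 1.0])

rot = d2 @ d1                  # rotation by 60 degrees (twice the angle)
eigenvalues = np.linalg.eigvals(rot)
assert np.all(np.abs(eigenvalues.imag) > 0.5)   # no real eigenvalues
```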



Trigonalizable mappings

Let $K$ denote a field, and let $V$ denote a finite-dimensional vector space. A linear mapping $\varphi \colon V \to V$ is called trigonalizable if there exists a basis such that the describing matrix of $\varphi$ with respect to this basis is an

upper triangular matrix.

Diagonalizable linear mappings are in particular trigonalizable. The reverse statement is not true, as Example 28.7 shows.


Let $K$ denote a field, and let $V$ denote a finite-dimensional vector space. Let

$$\varphi \colon V \longrightarrow V$$

denote a

linear mapping. Then the following statements are equivalent.
  1. $\varphi$ is trigonalizable.
  2. The characteristic polynomial $\chi_\varphi$ has a factorization into linear factors.
If $\varphi$ is trigonalizable and is described by the matrix $M$ with respect to some basis, then there exists an invertible matrix $B$

such that $B M B^{-1}$ is an

upper triangular matrix.

Proof

This proof was not presented in the lecture.



Let $M$ denote a square matrix with complex entries. Then $M$ is

trigonalizable. This follows from the preceding theorem, since, by the fundamental theorem of algebra, the characteristic polynomial $\chi_M$ factors into linear factors over $\mathbb{C}$.
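Numerically, the splitting of the characteristic polynomial over $\mathbb{C}$ can be illustrated as follows (NumPy sketch; the sample matrix is arbitrary): `np.roots` always returns $n$ complex zeroes, which reassemble the polynomial into its linear factors.

```python
import numpy as np

# Over C, chi_M always has n zeroes (fundamental theorem of algebra),
# so chi_M factors into linear factors and M is trigonalizable.
M = np.array([[0.0, -1.0, 2.0],
              [1.0,  0.0, 1.0],
              [0.0,  0.0, 3.0]])

coeffs = np.poly(M)          # degree-3 characteristic polynomial
roots = np.roots(coeffs)     # its three complex zeroes (here 3, i, -i)

assert len(roots) == 3
assert np.allclose(np.poly(roots), coeffs)
```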



Footnotes
  1. Some authors define the characteristic polynomial as the determinant of $M - X E_n$, instead of $X E_n - M$. This changes the characteristic polynomial only by the factor $(-1)^n$, hence at most its sign.
  2. $K(X)$ is called the field of rational polynomials; it consists of all fractions $P/Q$ for polynomials $P, Q \in K[X]$ with $Q \neq 0$. For $K = \mathbb{R}$ or $K = \mathbb{C}$, this field can be identified with the field of rational functions.

