Linear algebra (Osnabrück 2024-2025)/Part I/Lecture 5



Solving systems of linear equations

It is not clear a priori what solving a (linear) system of equations is supposed to mean. In any case, the goal is to find a good description of the solution set. If there is only one solution, then we want to find it. If there is no solution at all, we want to detect this in a reasonable way. In general, the solution set of a system of equations is large. In this case, solving the system means identifying "free" variables, for which arbitrary values are allowed, and describing explicitly how the other "dependent" variables can be expressed in terms of the free variables. This is called an explicit description of the solution set.

Linear systems of equations can be solved systematically with the elimination process. With this method, variables are eliminated step by step, until a very simple equivalent linear system in triangular form arises, which can be solved directly (or from which we can deduce that there is no solution). We consider a typical example in many variables.


We want to solve the inhomogeneous linear system

over (or over ). Firstly, we eliminate by keeping the first row , replacing the second row by , and replacing the third row by . This yields

Now, we could eliminate from the (new) third row, with the help of the second row. To avoid fractions, we rather eliminate (which also eliminates ). We leave the first and the second row as they are, and replace the third row by . This yields the system, in a new ordering of the variables,[1]

Now we can choose an arbitrary (free) value for . The third row then determines uniquely; we must have

In the second equation, we can choose arbitrarily; this determines via

The first row determines , namely

Hence, the solution set is

A particularly simple solution is obtained by setting the free variables and equal to . This yields the special solution

The general solution set can also be written as

Here,

is a description of the general solution of the corresponding homogeneous linear system.
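This structure of the solution set, one particular solution plus an arbitrary solution of the corresponding homogeneous system, can be checked numerically. The following sketch uses a small hypothetical system (the coefficients are illustrative, not those of the example above) and exact rational arithmetic:

```python
from fractions import Fraction as F

# Hypothetical inhomogeneous system A x = b in four variables
# (illustrative coefficients only).
A = [[F(1), F(2), F(0), F(1)],
     [F(0), F(1), F(1), F(-1)],
     [F(0), F(0), F(2), F(3)]]
b = [F(5), F(1), F(4)]

def apply_system(A, x):
    """Evaluate the left-hand sides of the system at the tuple x."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

# A particular solution (free variable x4 set to 0, found by back substitution):
particular = [F(7), F(-1), F(2), F(0)]
assert apply_system(A, particular) == b

# A solution of the corresponding homogeneous system (free variable x4 = 2):
homogeneous = [F(-12), F(5), F(-3), F(2)]
assert apply_system(A, homogeneous) == [F(0), F(0), F(0)]

# Their sum again solves the inhomogeneous system.
combined = [p + h for p, h in zip(particular, homogeneous)]
assert apply_system(A, combined) == b
```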


Let denote a field, and let two (inhomogeneous) systems of linear equations,

with respect to the same set of variables, be given. The systems are called equivalent if their solution sets are identical.

Let be a field, and let

be an inhomogeneous system of linear equations over . Then the following manipulations on this system yield an equivalent system.

  1. Swapping two equations.
  2. Multiplying an equation by a scalar .
  3. Omitting an equation if it occurs twice.
  4. Duplicating an equation (that is, writing the equation down again).
  5. Omitting or adding a zero row (zero equation).
  6. Replacing an equation by the equation that arises if we add to it another equation of the system.

Most statements are immediately clear. (2) follows from the fact that if

holds, then also

holds for every . If , then this implication can be reversed by multiplication with .

(6). Let be the equation

and be the equation

If a tuple satisfies both equations, then it also satisfies the equation . And if the tuple satisfies the equations and , then it also satisfies the equations and .
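In practice, manipulations (2) and (6) are combined: an equation is replaced by its sum with a scalar multiple of another equation. The following minimal sketch (with made-up coefficients) checks that a solution of the old system still solves the manipulated system, and that the step eliminates a variable:

```python
from fractions import Fraction as F

def satisfies(rows, rhs, x):
    """Check whether the tuple x solves every equation of the system."""
    return all(sum(c * xi for c, xi in zip(row, x)) == r
               for row, r in zip(rows, rhs))

rows = [[F(1), F(1)], [F(2), F(-1)]]   # x + y = 3, 2x - y = 0
rhs  = [F(3), F(0)]
sol  = [F(1), F(2)]
assert satisfies(rows, rhs, sol)

# Replace the second equation by (second) + a * (first) with a = -2,
# keeping the first equation (combined manipulations (2) and (6)).
a = F(-2)
new_rows = [rows[0], [c2 + a * c1 for c1, c2 in zip(rows[0], rows[1])]]
new_rhs  = [rhs[0], rhs[1] + a * rhs[0]]

assert satisfies(new_rows, new_rhs, sol)   # the solution carries over
assert new_rows[1][0] == 0                 # x has been eliminated
```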


For finding the solution set of a linear system, the manipulations (2) and (6) are the most important. In general, these two steps are combined, and an equation is replaced by an equation of the form (with ). Here, has to be chosen in such a way that the new equation contains one variable fewer than the old equation. This process is called elimination of a variable. This elimination is applied not only to one equation, but to all equations except one (suitably chosen) "working row" , with a fixed "working variable". The following elimination lemma describes this step.


Let denote a field, and let denote an (inhomogeneous) system of linear equations over in the variables . Suppose that is a variable that occurs in at least one equation with a coefficient . Then every equation , different from ,[2] can be replaced by an equation , in which does not occur any more, and such that the new system of equations, consisting of and the equations , is equivalent to the system .

Changing the numbering, we may assume . Let be the equation

(with ), and let be the equation

Then the equation

has the form

and does not occur in it. Because of , the systems are equivalent.


The method of this lemma, called the Gaussian elimination process, can be applied successively in order to obtain a linear system in triangular form.


Every (inhomogeneous) system of linear equations over a field can be transformed, by the manipulations described in Lemma 5.3, to an equivalent linear system of the form

where, in each row, the first coefficient is different from . Here, either , and the last row can be omitted, or , and then the system has no solution at all.

With the help of renaming the variables, we get an equivalent system of the form

with diagonal elements

.

This follows directly from the elimination lemma, by successively eliminating variables. Elimination is first applied to the first variable (in the given ordering), say , that occurs in at least one equation with a coefficient (if it occurs in only one equation, then this elimination step is already done). This elimination process is repeated as long as the new subsystem (without the working equation used in the preceding elimination step) contains at least one equation in which some variable occurs with a coefficient different from . After this, only equations without variables remain in the end, and they are either all zero equations, or there is no solution.

When we set , and denote the other variables by , then we obtain the described system in triangular form.
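The successive elimination from this proof can be written as a short routine. In the sketch below, the function name `echelon_form` and the coefficients are made up for illustration; it brings an augmented matrix over the rationals into echelon form using only row swaps and the replacement step of the elimination lemma:

```python
from fractions import Fraction as F

def echelon_form(aug):
    """Bring an augmented matrix [A | b] into row echelon form
    using row swaps and the elimination step of the lemma."""
    m = [row[:] for row in aug]           # work on a copy
    rows, cols = len(m), len(m[0]) - 1    # last column is the right-hand side
    pivot_row = 0
    for col in range(cols):
        # find a row (at or below pivot_row) in which the variable occurs
        pivot = next((r for r in range(pivot_row, rows) if m[r][col] != 0), None)
        if pivot is None:
            continue                      # variable occurs nowhere below: skip it
        m[pivot_row], m[pivot] = m[pivot], m[pivot_row]   # swap equations
        for r in range(pivot_row + 1, rows):
            a = -m[r][col] / m[pivot_row][col]
            m[r] = [x + a * p for x, p in zip(m[r], m[pivot_row])]
        pivot_row += 1
    return m

aug = [[F(0), F(1), F(1), F(2)],
       [F(1), F(1), F(0), F(3)],
       [F(1), F(2), F(1), F(5)]]
result = echelon_form(aug)
# the third equation becomes a zero row, so one variable remains free
```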


It might happen that the variable does not appear in the system with a coefficient , and that, in the elimination process, more than one variable is eliminated at the same time. Then one gets a linear system in echelon form, which can be transformed to a triangular form by a change of variables.


A linear system can be written briefly as

with an -matrix and an -tuple . The manipulations at the equations that we do in the elimination procedure, can be performed directly for the matrix, or for the extended matrix that arises when we extend by the column . Essentially, we replace a row by the sum of the row with a multiple of another row. This has the advantage that we do not have to write down the variables. However, one should then not swap the variables. At the end, the arising matrix in echelon form can be interpreted again as a linear system.


Sometimes, we want to solve a simultaneous system of linear equations of the form

The goal is to find the solutions of the corresponding inhomogeneous linear systems for the different vectors. In principle, we could consider the independent linear systems and solve them separately. However, it is smarter to perform the manipulations that we do on the left-hand side to achieve upper triangular form simultaneously on all the vectors on the right-hand side. An important special case, for , is when the vectors are the standard vectors; see Method 12.11.
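Such a simultaneous system can be handled by appending all right-hand-side vectors as extra columns and eliminating once. In the following sketch (illustrative coefficients), the two right-hand sides are the standard vectors, so the two solutions obtained are the columns of the inverse matrix:

```python
from fractions import Fraction as F

A  = [[F(1), F(2)], [F(3), F(4)]]
bs = [[F(1), F(0)], [F(0), F(1)]]         # rows of the right-hand-side block;
aug = [row + r for row, r in zip(A, bs)]  # its columns are the standard vectors

# forward elimination: clear the entry below the first pivot
a = -aug[1][0] / aug[0][0]
aug[1] = [x + a * p for x, p in zip(aug[1], aug[0])]

# back substitution, once for each right-hand-side column
solutions = []
for k in (2, 3):
    y = aug[1][k] / aug[1][1]
    x = (aug[0][k] - aug[0][1] * y) / aug[0][0]
    solutions.append([x, y])
```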

We discuss briefly some further methods to solve a linear system.


Another method to solve a linear system is the substitution method. Here, the variables are also eliminated successively, but in a different way. If we want to eliminate the variable , then we look at an equation, say , in which occurs with a coefficient different from . In this equation, we isolate and obtain a new equation of the form

where does not occur in . Then, in all the other equations , we replace the variable by , and obtain (after some simplifications) a linear system without the variable , which, together with , is equivalent to the original system.
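A worked sketch of the substitution method on a hypothetical system in two variables:

```python
from fractions import Fraction as F

# Substitution method on an illustrative system:
#   x + 2y = 5        (E1)
#   3x -  y = 1       (E2)
# Isolate x in E1:  x = 5 - 2y.  Substituting into E2 gives
#   3*(5 - 2y) - y = 1   ->   15 - 7y = 1   ->   y = 2
y = F(15 - 1, 7)
x = F(5) - 2 * y          # plug y back into the isolated form of E1
assert (x + 2 * y, 3 * x - y) == (F(5), F(1))   # both equations hold
```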


Another method to solve a linear system is the equating method. Here, too, the variables are eliminated successively, but in a different way. In this method, in every equation , , we isolate one fixed variable, say . Suppose that (after reordering) are the equations in which the variable occurs with a coefficient different from . These equations are brought into the form

where the variable does not occur in . The linear system consisting of

is equivalent to the original system. We continue with this system without .
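A worked sketch of the equating method on a hypothetical system in two variables:

```python
from fractions import Fraction as F

# Equating method on an illustrative system:
#   x +  y = 4   ->   x = 4 - y
#   x - 2y = 1   ->   x = 1 + 2y
# Equating the two expressions for x:  4 - y = 1 + 2y   ->   y = 1
y = F(4 - 1, 3)
x = F(4) - y
assert x == F(1) + 2 * y    # the two isolated expressions agree
```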


The methods described in Theorem 5.5, Remark 5.8, and Remark 5.9 for solving a linear system differ with respect to speed, strategic conception, complexity of the coefficients, and error-proneness. In the elimination method, the systematic reduction of the number of variables (reduction of dimension) is obvious, and it is unlikely that one makes mistakes (apart from miscalculations). It is always clear how to continue. However, these advantages only emerge starting with three variables. For two variables, it does not make a difference which method we choose.

The evaluation of the methods also depends on the features of the concrete system. Such features should be taken into account in order to find "short-cuts" to the solution. The adequate choice of the solution method appropriate for the given problem is called adaptivity (a concept which is used in the didactic context with different meanings). If, for example, one row of the system has the form , then one should recognize that a part of the solution can be read off immediately, and one should not add any other row to this row. Instead, one should replace by everywhere in the other rows, and then continue. Or: if four equations are given, where in two equations only the variables and appear, and in the two other equations only the variables and appear, then one should realize that, in principle, two unrelated linear systems are given, each in two variables, and these should be solved independently. Or: it might be that a small subsystem of the system already guarantees that there is no solution at all. Then only this has to be worked out; there is no need to consider the other equations. And: consider the exact question! If the question is whether a certain tuple is a solution, then we only have to plug this tuple into the equations; no manipulations are necessary.


A system of linear inequalities over the rational numbers or over the real numbers is a system of the form

where might be or . It is considerably more difficult to find the solution set of such a system than in the case of equations. In general, it is not possible to eliminate the variables.



Linear system in triangular form

Let an inhomogeneous system of linear equations in triangular form

with over a field be given, where the diagonal elements are all different from . Then the solutions are in bijection with the tuples .

The entries can be chosen arbitrarily; they determine a unique solution, and every solution is of this form.

This is clear: once the tuple is given, the rows successively determine the other variables from bottom to top.
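The back substitution described here can be sketched as follows (the function name `back_substitute` and the coefficients are made up for illustration); the free values are passed in, and the leading variables are computed from bottom to top:

```python
from fractions import Fraction as F

def back_substitute(tri, rhs, free_values):
    """Solve a triangular system: the leading variables are determined
    from bottom to top, the remaining ones take the given free values."""
    k = len(tri)                 # number of equations = leading variables
    n = len(tri[0])              # total number of variables
    x = [None] * k + list(free_values)
    for i in range(k - 1, -1, -1):
        s = sum(tri[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (rhs[i] - s) / tri[i][i]   # diagonal entries are nonzero
    return x

# triangular system in three variables with one free variable
tri = [[F(2), F(1), F(1)],      # 2x + y + z = 4
       [F(0), F(3), F(-1)]]     #     3y - z = 3
rhs = [F(4), F(3)]
solution = back_substitute(tri, rhs, [F(0)])   # free variable z set to 0
```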


In case , there are no free variables, , and the linear system has exactly one solution.



The superposition principle for linear systems

Let denote a matrix over a field . Let and denote two -tuples, and let be a solution of the linear system

and a solution of the system

Then

is a solution of the system

Proof
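The superposition principle can be verified directly on a small example; the coefficients below are illustrative:

```python
from fractions import Fraction as F

M = [[F(1), F(2)], [F(3), F(5)]]

def apply_matrix(M, x):
    """Compute the matrix-vector product M x."""
    return [sum(c * xi for c, xi in zip(row, x)) for row in M]

x = [F(1), F(1)]                  # solves M x = c1 for c1 := M x
y = [F(2), F(-1)]                 # solves M y = c2 for c2 := M y
c1, c2 = apply_matrix(M, x), apply_matrix(M, y)

# superposition: x + y solves the system with right-hand side c1 + c2
z = [a + b for a, b in zip(x, y)]
assert apply_matrix(M, z) == [u + v for u, v in zip(c1, c2)]
```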



Let be a field, and let

be an inhomogeneous linear system over , and let

be the corresponding homogeneous linear system. If is a solution of the inhomogeneous system and if is a solution of the homogeneous system, then is a solution of the inhomogeneous system.

This follows immediately from Theorem 5.13 .


In particular, this means that when is the solution space of a homogeneous linear system, and when is one (particular) solution of an inhomogeneous linear system, then the mapping

gives a bijection between and the solution set of the inhomogeneous linear system.



Footnotes
  1. Such a reordering is safe as long as we keep the names of the variables. But if we write down the system in matrix notation without the variables, then one has to be careful and remember the reordering of the columns.
  2. It is enough that these equations have a different index in the system.





