Linear algebra (Osnabrück 2024-2025)/Part I/Lecture 5/latex

\setcounter{section}{5}






\subtitle {Solving systems of linear equations}

It is not clear a priori what solving a \extrabracket {linear} {} {} system of equations is supposed to mean. In any case, the goal is to find a good description of the solution set. If there is only one solution, then we want to find it. If there is no solution at all, we want to detect this in a reliable way. In general, the solution set of a system of equations is large. In this case, solving the system means identifying \quotationshort{free}{} variables, for which arbitrary values are allowed, and describing explicitly how the remaining \quotationshort{dependent}{} variables can be expressed in terms of the free variables. This is called an \keyword {explicit description} {} of the solution set.

Linear systems of equations can be solved systematically with the \keyword {elimination process} {.} With this method, variables are eliminated step by step, until a very simple equivalent linear system in triangular form arises, which can be solved directly \extrabracket {or from which we can deduce that there is no solution} {} {.} We consider a typical example with many variables.


\inputexample{}
{

We want to solve the inhomogeneous linear system
\mathdisp {\begin{matrix} 2x & +5y & +2z & & -v & = & 3 \\ 3x & -4y & & +u & +2v & = & 1 \\ 4x & & -2z & +2u & & = & 7 \, \end{matrix}} { }
over $\R$ \extrabracket {or over $\Q$} {} {.} Firstly, we eliminate $x$ by keeping the first row $I$, replacing the second row $II$ by \mathl{II - { \frac{ 3 }{ 2 } }I}{,} and replacing the third row $III$ by \mathl{III-2I}{.} This yields
\mathdisp {\begin{matrix} 2x & +5y & +2z & & -v & = & 3 \\ & - { \frac{ 23 }{ 2 } } y & -3z & +u & + { \frac{ 7 }{ 2 } } v & = & { \frac{ -7 }{ 2 } } \\ & -10y & -6z & +2u & +2v & = & 1 \, . \end{matrix}} { }
Now, we can eliminate $y$ from the \extrabracket {new} {} {} third row, with the help of the second row. To avoid fractions, however, we eliminate $z$ instead \extrabracket {which also eliminates $u$} {} {.} We leave the first and the second row as they are, and we replace the third row $III$ by \mathl{III-2II}{.} This yields the system, in a new ordering of the variables\extrafootnote {Such a reordering is safe as long as we keep the names of the variables. But if we write down the system in matrix notation without the variables, then one has to be careful and remember the reordering of the columns} {.} {,}
\mathdisp {\begin{matrix} 2x & +2z & & +5y & -v & = & 3 \\ & -3z & +u & - { \frac{ 23 }{ 2 } } y & + { \frac{ 7 }{ 2 } } v & = & { \frac{ -7 }{ 2 } } \\ & & & 13y & -5v & = & 8 \, . \end{matrix}} { }
Now we can choose an arbitrary \extrabracket {free} {} {} value for $v$. The third row then determines $y$ uniquely: we must have
\mathrelationchaindisplay
{\relationchain
{ y }
{ =} { { \frac{ 8 }{ 13 } } + { \frac{ 5 }{ 13 } } v }
{ } { }
{ } { }
{ } { }
} {}{}{.} In the second equation, we can choose $u$ arbitrarily; this then determines $z$ via
\mathrelationchainalign
{\relationchainalign
{z }
{ =} { - { \frac{ 1 }{ 3 } } { \left(- { \frac{ 7 }{ 2 } } -u - { \frac{ 7 }{ 2 } } v + { \frac{ 23 }{ 2 } } { \left({ \frac{ 8 }{ 13 } } + { \frac{ 5 }{ 13 } } v\right) } \right) } }
{ =} { - { \frac{ 1 }{ 3 } } { \left(- { \frac{ 7 }{ 2 } } -u - { \frac{ 7 }{ 2 } } v + { \frac{ 92 }{ 13 } } + { \frac{ 115 }{ 26 } } v\right) } }
{ =} { - { \frac{ 1 }{ 3 } } { \left({ \frac{ 93 }{ 26 } } -u + { \frac{ 12 }{ 13 } } v\right) } }
{ =} { -{ \frac{ 31 }{ 26 } } + { \frac{ 1 }{ 3 } } u - { \frac{ 4 }{ 13 } } v }
} {} {}{.} The first row determines $x$, namely
\mathrelationchainalign
{\relationchainalign
{x }
{ =} { { \frac{ 1 }{ 2 } } { \left(3 -2z -5y +v\right) } }
{ =} { { \frac{ 1 }{ 2 } } { \left(3 -2 { \left(-{ \frac{ 31 }{ 26 } } + { \frac{ 1 }{ 3 } } u - { \frac{ 4 }{ 13 } } v\right) } - 5 { \left({ \frac{ 8 }{ 13 } } + { \frac{ 5 }{ 13 } } v\right) } + v\right) } }
{ =} { { \frac{ 1 }{ 2 } } { \left({ \frac{ 30 }{ 13 } } - { \frac{ 2 }{ 3 } } u - { \frac{ 4 }{ 13 } } v\right) } }
{ =} { { \frac{ 15 }{ 13 } } - { \frac{ 1 }{ 3 } } u - { \frac{ 2 }{ 13 } } v }
} {} {}{.} Hence, the solution set is
\mathdisp {{ \left\{ { \left({ \frac{ 15 }{ 13 } } - { \frac{ 1 }{ 3 } } u - { \frac{ 2 }{ 13 } } v, { \frac{ 8 }{ 13 } } + { \frac{ 5 }{ 13 } } v ,-{ \frac{ 31 }{ 26 } } + { \frac{ 1 }{ 3 } } u - { \frac{ 4 }{ 13 } } v ,u,v\right) } \mid u,v \in \R \right\} }} { . }
A particularly simple solution is obtained by setting the free variables \mathcor {} {u} {and} {v} {} to $0$. This yields the special solution
\mathrelationchaindisplay
{\relationchain
{ (x,y,z,u,v) }
{ =} { \left( { \frac{ 15 }{ 13 } } , \, { \frac{ 8 }{ 13 } } , \, - { \frac{ 31 }{ 26 } } , \, 0 , \, 0 \right) }
{ } { }
{ } { }
{ } { }
} {}{}{.} The general solution set can also be written as
\mathdisp {{ \left\{ { \left({ \frac{ 15 }{ 13 } } , { \frac{ 8 }{ 13 } } , - { \frac{ 31 }{ 26 } } ,0,0\right) } + u { \left(- { \frac{ 1 }{ 3 } }, 0 , { \frac{ 1 }{ 3 } } ,1,0\right) } + v { \left(- { \frac{ 2 }{ 13 } }, { \frac{ 5 }{ 13 } }, - { \frac{ 4 }{ 13 } },0,1\right) } \mid u, v \in \R \right\} }} { . }
Here,
\mathdisp {{ \left\{ u { \left(- { \frac{ 1 }{ 3 } }, 0 , { \frac{ 1 }{ 3 } } ,1,0\right) } +v { \left(- { \frac{ 2 }{ 13 } }, { \frac{ 5 }{ 13 } }, -{ \frac{ 4 }{ 13 } },0,1\right) } \mid u,v \in \R \right\} }} { }
is a description of the general solution of the corresponding homogeneous linear system.

}
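The computation above can be checked mechanically. The following Python sketch (not part of the lecture; the function names are our own) uses exact rational arithmetic from the standard library to plug the general solution into the three original equations for several choices of the free variables $u$ and $v$.

```python
from fractions import Fraction as F

def solution(u, v):
    """The general solution found above, parametrized by the free
    variables u and v."""
    u, v = F(u), F(v)
    x = F(15, 13) - F(1, 3) * u - F(2, 13) * v
    y = F(8, 13) + F(5, 13) * v
    z = -F(31, 26) + F(1, 3) * u - F(4, 13) * v
    return (x, y, z, u, v)

def check(x, y, z, u, v):
    """Plug a tuple into the three original equations."""
    return (2 * x + 5 * y + 2 * z - v == 3
            and 3 * x - 4 * y + u + 2 * v == 1
            and 4 * x - 2 * z + 2 * u == 7)

# The special solution (u = v = 0) and arbitrary other choices of the
# free variables all satisfy the original system.
assert check(*solution(0, 0))
assert check(*solution(1, 0))
assert check(*solution(-3, 7))
```

Using `Fraction` instead of floating-point numbers keeps the computation exact over $\Q$, so the checks are genuine equalities and not approximations.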




\inputdefinition
{ }
{

Let $K$ denote a field, and let two \extrabracket {inhomogeneous} {} {} systems of linear equations,

with respect to the same set of variables, be given. The systems are called \definitionword {equivalent}{,} if their solution sets are identical.

}




\inputfactproof
{System of linear equations/Set of variables/Equivalent system/Manipulations/Fact}
{Lemma}
{}
{

\factsituation {Let $K$ be a field, and let
\mathdisp {\begin{matrix} a _{ 1 1 } x _1 + a _{ 1 2 } x _2 + \cdots + a _{ 1 n } x _{ n } & = & c_1 \\ a _{ 2 1 } x _1 + a _{ 2 2 } x _2 + \cdots + a _{ 2 n } x _{ n } & = & c_2 \\ \vdots & \vdots & \vdots \\ a _{ m 1 } x _1 + a _{ m 2 } x _2 + \cdots + a _{ m n } x _{ n } & = & c_m \end{matrix}} { }
be an inhomogeneous system of linear equations over $K$.}
\factconclusion {Then the following manipulations on this system yield an equivalent system. \enumerationsix {Swapping two equations. } {The multiplication of an equation by a scalar
\mathrelationchain
{\relationchain
{ s }
{ \neq }{ 0 }
{ }{ }
{ }{ }
{ }{ }
} {}{}{.} } {The omitting of an equation, if it occurs twice. } {The duplication of an equation \extrabracket {in the sense to write down the equation again} {} {.} } {The omitting or adding of a zero row \extrabracket {zero equation} {} {.} } {The replacement of an equation $H$ by the equation that arises if we add to $H$ another equation $G$ of the system. }}
\factextra {}
}
{

Most statements are immediately clear. (2) follows from the fact that if
\mathrelationchaindisplay
{\relationchain
{ \sum_{i = 1}^n a_i \xi_i }
{ =} {c }
{ } { }
{ } { }
{ } { }
} {}{}{} holds, then also
\mathrelationchaindisplay
{\relationchain
{ \sum_{i = 1}^n (s a_i) \xi_i }
{ =} { s c }
{ } { }
{ } { }
{ } { }
} {}{}{} holds for every
\mathrelationchain
{\relationchain
{ s }
{ \in }{ K }
{ }{ }
{ }{ }
{ }{ }
} {}{}{.} If
\mathrelationchain
{\relationchain
{ s }
{ \neq }{ 0 }
{ }{ }
{ }{ }
{ }{ }
} {}{}{,} then this implication can be reversed by multiplication with $s^{-1}$.

(6). Let $G$ be the equation
\mathrelationchaindisplay
{\relationchain
{ \sum_{i = 1}^n a_ix_i }
{ =} { c }
{ } { }
{ } { }
{ } { }
} {}{}{,} and $H$ be the equation
\mathrelationchaindisplay
{\relationchain
{ \sum_{i = 1}^n b_ix_i }
{ =} { d }
{ } { }
{ } { }
{ } { }
} {}{}{.} If a tuple
\mathrelationchain
{\relationchain
{ (\xi_1 , \ldots , \xi_n) }
{ \in }{ K^n }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} satisfies both equations, then it also satisfies the equation
\mathrelationchain
{\relationchain
{H' }
{ = }{G+H }
{ }{ }
{ }{ }
{ }{ }
} {}{}{.} And if the tuple satisfies the equations \mathcor {} {G} {and} {H'} {,} then it also satisfies the equations \mathcor {} {G} {and} {H=H'-G} {.}

}
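The effect of manipulation (6) can be illustrated with a small hypothetical system, chosen here only for illustration: replacing $H$ by $H' = G + H$ does not change the solution set.

```python
from fractions import Fraction as F

# A hypothetical system with equations G: x + y = 3 and H: 2x - y = 0.
# Manipulation (6) replaces H by H' = G + H, i.e. by 3x = 3.
def solves_GH(x, y):
    return x + y == 3 and 2 * x - y == 0

def solves_GHprime(x, y):
    return x + y == 3 and 3 * x == 3      # H' = G + H

# A tuple satisfies (G, H) if and only if it satisfies (G, H'),
# so the two systems are equivalent.
for x, y in [(F(1), F(2)), (F(0), F(3)), (F(2), F(2))]:
    assert solves_GH(x, y) == solves_GHprime(x, y)
```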


For finding the solution set of a linear system, the manipulations (2) and (6) are most important. In general, these two steps are combined, and the equation $H$ is replaced by an equation of the form \mathl{H + \lambda G}{} \extrabracket {with
\mathrelationchain
{\relationchain
{ G }
{ \neq }{ H }
{ }{ }
{ }{ }
{ }{ }
} {}{}{}} {} {.} Here,
\mathrelationchain
{\relationchain
{ \lambda }
{ \in }{K }
{ }{ }
{ }{ }
{ }{}
} {}{}{} has to be chosen in such a way that the new equation contains one variable less than the old equation. This process is called \keyword {elimination of a variable} {.} This elimination is applied not only to one equation, but to all equations except one \extrabracket {suitably chosen} {} {} \quotationshort{working row}{} $G$, and with a fixed \quotationshort{working variable}{.} The following \keyword {elimination lemma} {} describes this step.




\inputfactproof
{Linear system/Elimination lemma/Fact}
{Lemma}
{}
{

\factsituation {Let $K$ denote a field, and let $S$ denote an \extrabracket {inhomogeneous} {} {} system of linear equations over $K$ in the variables \mathl{x_1 , \ldots , x_n}{.}}
\factcondition {Suppose that $x$ is a variable which occurs in at least one equation $G$ with a coefficient
\mathrelationchain
{\relationchain
{ a }
{ \neq }{ 0 }
{ }{ }
{ }{ }
{ }{ }
} {}{}{.}}
\factconclusion {Then every equation $H$, different from $G$\extrafootnote {It is enough that these equations have a different index in the system} {.} {,} can be replaced by an equation $H'$ in which $x$ does not occur any more, such that the new system of equations $S'$, consisting of $G$ and the equations $H'$, is equivalent to the system $S$.}
\factextra {}


}
{

Changing the numbering, we may assume
\mathrelationchain
{\relationchain
{x }
{ = }{x_1 }
{ }{ }
{ }{ }
{ }{ }
} {}{}{.} Let $G$ be the equation
\mathrelationchaindisplay
{\relationchain
{ ax_1 + \sum_{i = 2}^n a_ix_i }
{ =} {b }
{ } { }
{ } { }
{ } { }
} {}{}{} \extrabracket {with
\mathrelationchain
{\relationchain
{ a }
{ \neq }{ 0 }
{ }{ }
{ }{ }
{ }{ }
} {}{}{}} {} {,} and let $H$ be the equation
\mathrelationchaindisplay
{\relationchain
{ cx_1 + \sum_{i = 2}^n c_ix_i }
{ =} {d }
{ } { }
{ } { }
{ } { }
} {}{}{.} Then the equation
\mathrelationchaindisplay
{\relationchain
{H' }
{ =} {H - { \frac{ c }{ a } } G }
{ } { }
{ } { }
{ } { }
} {}{}{} has the form
\mathrelationchaindisplay
{\relationchain
{ \sum_{i = 2}^n { \left(c_i- { \frac{ c }{ a } } a_i\right) } x_i }
{ =} { d -{ \frac{ c }{ a } } b }
{ } { }
{ } { }
{ } { }
} {}{}{,} and $x_1$ does not occur in it. Because of
\mathrelationchain
{\relationchain
{H }
{ = }{H' + { \frac{ c }{ a } } G }
{ }{ }
{ }{ }
{ }{ }
} {}{}{,} the systems are equivalent.

}


The method of this lemma, called \keyword {Gauß elimination process} {,} can be applied successively in order to obtain a linear system in triangular form.




\inputfactproof
{Linear inhomogeneous system/Elimination/Echelon form and triangular form/Fact}
{Theorem}
{}
{

\factsituation {Every \extrabracket {inhomogeneous} {} {} system of linear equations over a field $K$}
\factconclusion {can be transformed, by the manipulations described in Lemma 5.3 , to an equivalent linear system of the form
\mathdisp {\begin{matrix} b_{1s_1} x_{s_1} & + b_{1 s_1 +1} x_{s_1+1} & \ldots & \ldots & \ldots & \ldots & \ldots & +b_{1 n} x_{n} & = & d_1 \\ 0 & \ldots & 0 & b_{2 s_2} x_{s_2} & \ldots & \ldots & \ldots & + b_{2 n} x_{n} & = & d_2 \\ \vdots & \ddots & \ddots & \vdots & \vdots & \vdots & \vdots & \vdots & = & \vdots \\ 0 & \ldots & \ldots & \ldots & 0 & b_{m {s_m} } x_{s_m} & \ldots & +b_{m n} x_n & = & d_m \\ ( 0 & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & 0 & = & d_{m+1} ) , \end{matrix}} { }
where the coefficients \mathl{b_{1s_1}, b_{2 s_2} , \ldots , b_{m s_m}}{} at the beginning of each row are different from $0$.}
\factextra {Here, either
\mathrelationchain
{\relationchain
{ d_{m+1} }
{ = }{ 0 }
{ }{ }
{ }{ }
{ }{ }
} {}{}{,} and the last row can be omitted, or
\mathrelationchain
{\relationchain
{ d_{m+1} }
{ \neq }{ 0 }
{ }{ }
{ }{ }
{ }{ }
} {}{}{,} and then the system has no solution at all.

With the help of renaming the variables, we get an equivalent system of the form
\mathdisp {\begin{matrix}

c_{11} y_1 & + c_{12} y_2 & \ldots & + c_{1m} y_m & +c_{1 m+1} y_{m+1} & \ldots & +c_{1 n} y_{n} & = & d_1 \\

0 & c_{22} y_2 & \ldots & \ldots & \ldots & \ldots & + c_{2 n} y_{n} & = & d_2 \\

\vdots & \ddots & \ddots & \vdots & \vdots & \vdots & \vdots & = & \vdots \\

0 & \ldots & 0 & c_{mm} y_m & + c_{m m+1} y_{m+1} & \ldots & +c_{m n} y_n & = & d_m \\

( 0 & \ldots & \ldots & 0 & 0 & \ldots & 0 & = & d_{m+1} ) \end{matrix}} { }
with diagonal elements
\mathrelationchain
{\relationchain
{c_{ii} }
{ \neq }{0 }
{ }{ }
{ }{ }
{ }{ }
} {}{}{.}}
}
{

This follows directly from the elimination lemma, by successively eliminating variables. Elimination is first applied to the first variable \extrabracket {in the given ordering} {} {,} say \mathl{x_{s_1}}{,} that occurs in at least one equation with a coefficient $\neq 0$ \extrabracket {if it occurs in only one equation, then this elimination step is already done} {} {.} This elimination process is repeated as long as the new subsystem \extrabracket {without the working equation used in the preceding elimination step} {} {} has at least one equation with a coefficient different from $0$ for some variable. In the end, only equations without variables remain; either they are all zero equations, or the system has no solution.

When we set \mathl{y_1=x_{s_1},y_2=x_{s_2} , \ldots , y_m =x_{s_m}}{,} and denote the other variables with \mathl{y_{m+1} , \ldots , y_n}{,} then we obtain the described system in triangular form.

}
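The elimination procedure of the proof can be sketched in code. The following Python function (a simplified sketch; the name and the representation of the system as a list of rows of fractions are our own choices) operates on the extended matrix and uses only the manipulations of Lemma 5.3: swapping rows and replacing a row $H$ by \mathl{H + \lambda G}{} for a working row $G$. It is applied to the system of Example 5.1; note that, unlike the example, it eliminates the variables strictly in the given order.

```python
from fractions import Fraction as F

def echelon_form(A):
    """Bring an extended matrix [A | c] into echelon form, using only
    row swaps and the replacement of a row H by H + lambda * G."""
    A = [row[:] for row in A]            # work on a copy
    m, n = len(A), len(A[0]) - 1         # n = number of variables
    row = 0
    for col in range(n):
        # find a working row with a nonzero coefficient in this column
        pivot = next((r for r in range(row, m) if A[r][col] != 0), None)
        if pivot is None:
            continue                     # variable does not occur below
        A[row], A[pivot] = A[pivot], A[row]
        for r in range(row + 1, m):      # eliminate the column below
            factor = A[r][col] / A[row][col]
            A[r] = [a - factor * g for a, g in zip(A[r], A[row])]
        row += 1
    return A

# The system of Example 5.1 as an extended matrix over Q
# (columns x, y, z, u, v, right-hand side):
system = [[F(v) for v in row] for row in
          [[2, 5, 2, 0, -1, 3],
           [3, -4, 0, 1, 2, 1],
           [4, 0, -2, 2, 0, 7]]]
E = echelon_form(system)
# Echelon form: zeros below the first pivot in the x-column, and below
# the second pivot in the y-column.
assert E[1][0] == E[2][0] == 0 and E[2][1] == 0
```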


It might happen that the variable $x_1$ does not appear in the system with a coefficient $\neq 0$, and that, in the elimination process, more than one variable is eliminated at the same time. Then one gets a linear system in echelon form, which can be transformed to a triangular form by a change of variables.




\inputremark {}
{

A linear system can be written briefly as
\mathrelationchaindisplay
{\relationchain
{ Ax }
{ =} { c }
{ } { }
{ } { }
{ } { }
} {}{}{} with an $m \times n$-matrix $A$ and an $m$-tuple $c$. The manipulations on the equations that we perform in the elimination procedure can be carried out directly on the matrix, or on the extended matrix that arises when we extend $A$ by the column $c$. Essentially, we replace a row by the sum of that row and a multiple of another row. This has the advantage that we do not have to write down the variables. However, one should then not swap the variables. At the end, the resulting matrix in echelon form can be interpreted again as a linear system.

}




\inputremark {}
{

Sometimes, we want to solve a \keyword {simultaneous system of linear equations} {} of the form
\mathdisp {\begin{matrix} a _{ 1 1 } x _1 + a _{ 1 2 } x _2 + \cdots + a _{ 1 n } x _{ n } & = & c_1 & ( = & d_1, & = & e_1, \ldots ) \\ a _{ 2 1 } x _1 + a _{ 2 2 } x _2 + \cdots + a _{ 2 n } x _{ n } & = & c_2 & ( = & d_2, & = & e_2, \ldots ) \\ \vdots & \vdots & \vdots \\ a _{ m 1 } x _1 + a _{ m 2 } x _2 + \cdots + a _{ m n } x _{ n } & = & c_m &( = & d_m, & = & e_m, \ldots ) \, . \end{matrix}} { }
The goal is to find the solutions of the corresponding inhomogeneous linear systems for the different vectors. In principle, we could consider them as independent linear systems and solve each one separately. However, it is smarter to perform the manipulations that bring the left-hand side into upper triangular form simultaneously on all the vectors of the right-hand side. An important special case, for
\mathrelationchain
{\relationchain
{ n }
{ = }{ m }
{ }{ }
{ }{ }
{ }{ }
} {}{}{,} is when the vectors are the standard vectors, see Method 12.11 .

}

We discuss briefly some further methods to solve a linear system.


\inputremark {}
{

Another method to solve a linear system is the \keyword {substitution method} {.} Here, the variables are also successively eliminated, but in another way. If we want to eliminate the variable $x_1$, then we look at an equation, say $G_1$, where $x_1$ occurs with a coefficient different from $0$. In this equation, we isolate $x_1$ and get a new equation of the form
\mathdisp {G_1': \, \, \, x_1 = F_1} { , }
where $x_1$ does not occur in $F_1$. Then, in all other equations \mathl{G_2 , \ldots , G_m}{,} we replace the variable $x_1$ by $F_1$, and obtain \extrabracket {after some simplifications} {} {} a linear system \mathl{G_2' , \ldots , G_m'}{} without the variable $x_1$, which is, together with $G_1'$, equivalent to the original system.

}
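For a small hypothetical system, made up here only for illustration, the substitution method can be checked in Python with exact fractions.

```python
from fractions import Fraction as F

# Hypothetical system  G_1: 2x + 3y = 7,  G_2: 4x - y = 5.
# Isolating x in G_1 gives  G_1': x = (7 - 3y)/2.  Substituting this
# into G_2 yields 4*(7 - 3y)/2 - y = 5, i.e. 14 - 7y = 5, so y = 9/7.
y = F(9, 7)
x = (7 - 3 * y) / 2        # back substitution via G_1'
assert 2 * x + 3 * y == 7 and 4 * x - y == 5
```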




\inputremark {}
{

Another method to solve a linear system is the \keyword {equating method} {.} Here, the variables are also successively eliminated, but in another way. In this method, in every equation
\mathcond {G_i} {}
{i=1 , \ldots , m} {}
{} {} {} {,} we isolate one fixed variable, say $x_1$. Suppose that \extrabracket {after reordering} {} {} \mathl{G_1 , \ldots , G_k}{} are the equations where the variable $x_1$ occurs with a coefficient different from $0$. These equations are brought into the form
\mathdisp {G_i': \, \, \, x_1= F_i} { , }
where in $F_i$, the variable $x_1$ does not occur. The linear system consisting of
\mathdisp {G_1', F_1=F_2, F_1=F_3 , \ldots , F_1=F_k, G_{k+1} , \ldots , G_m} { }
is equivalent to the original system. We continue with this system without $G_1'$.

}




\inputremark {}
{

The methods described in Theorem 5.5 , Remark 5.8 , and Remark 5.9 to solve a linear system differ with respect to speed, strategic conception, complexity of the coefficients, and error-proneness. In the elimination method, the systematic reduction of the number of variables \extrabracket {reduction of dimension} {} {} is obvious, and it is unlikely to make mistakes \extrabracket {except for miscalculations} {} {.} It is always clear how to continue. However, these advantages only emerge starting with three variables. For two variables, it makes no difference which method we choose.

The evaluation of the methods depends also on the features of the concrete system. Such features should be taken into account in order to find \quotationshort{short-cuts}{} to the solution. The adequate choice of the solution method appropriate for the given problem is called \keyword {adaptivity} {} \extrabracket {a concept which is used in the didactic context with different meanings} {} {.} If, for example, one row of the system has the form
\mathrelationchain
{\relationchain
{ x }
{ = }{ 3 }
{ }{ }
{ }{ }
{ }{ }
} {}{}{,} then one should recognize that a part of the solution can be read off immediately, and one should not add any other row to this row. Instead, one should replace $x$ by $3$ everywhere in the other rows, and then continue. Or: if four equations are given, where in two equations only the variables \mathcor {} {x} {and} {y} {} appear, and in the two other equations only the variables \mathcor {} {z} {and} {w} {} appear, then one should realize that, in principle, two unrelated linear systems are given, each in two variables, and these should be solved independently. Or: it might be that a small subsystem of the system already guarantees that there is no solution at all. Then only this has to be worked out; there is no need to consider the other equations. And: consider the exact question! If the question is whether a certain tuple is a solution, then we only have to plug this tuple into the equations; no manipulations are necessary.

}




\inputremark {}
{

A \keyword {system of linear inequalities} {} over the rational numbers or over the real numbers is a system of the form
\mathdisp {\begin{matrix} a _{ 1 1 } x _1 + a _{ 1 2 } x _2 + \cdots + a _{ 1 n } x _{ n } & \star & c_1 \\ a _{ 2 1 } x _1 + a _{ 2 2 } x _2 + \cdots + a _{ 2 n } x _{ n } & \star & c_2 \\ \vdots & \vdots & \vdots \\ a _{ m 1 } x _1 + a _{ m 2 } x _2 + \cdots + a _{ m n } x _{ n } & \star & c_m \, , \end{matrix}} { }
where \mathl{\star}{} might be \mathl{\leq}{} or \mathl{\geq}{.} It is considerably more difficult to find the solution set of such a system than in the case of equations. In general, it is not possible to eliminate the variables.

}






\subtitle {Linear system in triangular form}




\inputfactproof
{Linear inhomogeneous system/Strictly triangular/Solution/Fact}
{Theorem}
{}
{

\factsituation {Let an inhomogeneous system of linear equations in triangular form
\mathdisp {\begin{matrix} a_{11} x_1 & + a_{12} x_2 & \ldots & +a_{1m} x_m & \ldots & + a_{1 n} x_{n} & = & c_1 \\ 0 & a_{22} x_2 & \ldots & \ldots & \ldots & + a_{2 n} x_{n} & = & c_2 \\ \vdots & \ddots & \ddots & \vdots & \vdots & \vdots & = & \vdots \\ 0 & \ldots & 0 & a_{mm} x_m & \ldots & +a_{m n} x_n & = & c_m \\ \end{matrix}} { }
with
\mathrelationchain
{\relationchain
{m }
{ \leq }{n }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} over a field $K$ be given, where the diagonal elements are all not $0$.}
\factconclusion {Then the solutions \mathl{(x_1 , \ldots , x_m, x_{m+1} , \ldots , x_n)}{} are in bijection with the tuples
\mathrelationchain
{\relationchain
{ ( x_{m+1} , \ldots , x_n) }
{ \in }{ K^{n-m} }
{ }{ }
{ }{ }
{ }{ }
} {}{}{.} The \mathl{n-m}{} entries \mathl{x_{m+1} , \ldots , x_n}{} can be chosen arbitrarily, each choice determines a unique solution, and every solution arises in this way.}
\factextra {}
}
{

This is clear: once the tuple \mathl{(x_{m+1} , \ldots , x_n)}{} is given, the rows determine the remaining variables successively from bottom to top.

}
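The back substitution described in this proof can be sketched as follows (a Python sketch with exact fractions; the function name and the small $2 \times 3$ system are our own, chosen for illustration).

```python
from fractions import Fraction as F

def back_substitute(T, c, free):
    """Solve a triangular system T x = c, where T is an m x n matrix
    (m <= n) with nonzero diagonal entries, for a given choice of the
    free variables x_{m+1}, ..., x_n, working from bottom to top."""
    m, n = len(T), len(T[0])
    x = [None] * m + [F(v) for v in free]    # chosen free entries
    for i in range(m - 1, -1, -1):
        s = sum(T[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (c[i] - s) / T[i][i]
    return x

# A hypothetical 2 x 3 triangular system: x + y + z = 6, 2y + z = 5.
T = [[F(1), F(1), F(1)],
     [F(0), F(2), F(1)]]
c = [F(6), F(5)]
# Choosing the free variable z = 1 determines y and then x uniquely.
assert back_substitute(T, c, [1]) == [F(3), F(2), F(1)]
```

Each pass of the loop uses exactly one row, so the nonzero diagonal entries guarantee that every dependent variable is determined uniquely by the free ones, as the theorem states.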


In case
\mathrelationchain
{\relationchain
{ m }
{ = }{ n }
{ }{ }
{ }{ }
{ }{}
} {}{}{,} there are no free variables,
\mathrelationchain
{\relationchain
{ K^0 }
{ = }{ 0 }
{ }{ }
{ }{ }
{ }{ }
} {}{}{,} and the linear system has exactly one solution.






\subtitle {The superposition principle for linear systems}




\inputfactproof
{Linear system/Superposition principle/Fact}
{Theorem}
{}
{

\factsituation {Let
\mathrelationchain
{\relationchain
{ M }
{ = }{ { \left( a_{ij} \right) }_{1 \leq i \leq m, 1 \leq j \leq n} }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} denote a matrix over a field $K$. Let
\mathrelationchain
{\relationchain
{ c }
{ = }{ { \left( c_1 , \ldots , c_m \right) } }
{ }{ }
{ }{ }
{ }{}
} {}{}{} and
\mathrelationchain
{\relationchain
{ d }
{ = }{ { \left( d_1 , \ldots , d_m \right) } }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} denote two $m$-tuples, and}
\factcondition {let
\mathrelationchain
{\relationchain
{ y }
{ = }{ { \left( y_1 , \ldots , y_n \right) } }
{ \in }{ K^n }
{ }{ }
{ }{ }
} {}{}{} be a solution of the linear system
\mathrelationchaindisplay
{\relationchain
{Mx }
{ =} {c }
{ } { }
{ } { }
{ } { }
} {}{}{,} and
\mathrelationchain
{\relationchain
{ z }
{ = }{{ \left( z_1 , \ldots , z_n \right) } }
{ \in }{ K^n }
{ }{ }
{ }{ }
} {}{}{} a solution of the system
\mathrelationchaindisplay
{\relationchain
{Mx }
{ =} {d }
{ } { }
{ } { }
{ } { }
} {}{}{.}}
\factconclusion {Then
\mathrelationchain
{\relationchain
{ y+z }
{ = }{ { \left( y_1 +z_1 , \ldots , y_n +z_n \right) } }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} is a solution of the system
\mathrelationchaindisplay
{\relationchain
{ Mx }
{ =} { c+d }
{ } { }
{ } { }
{ } { }
} {}{}{.}}
\factextra {}

}
{See Exercise 5.19 .}





\inputfactproof
{Linear system/Superposition principle/Homogeneous and inhomogeneous/Fact}
{Corollary}
{}
{

\factsituation {Let $K$ be a field, and let
\mathdisp {\begin{matrix} a _{ 1 1 } x _1 + a _{ 1 2 } x _2 + \cdots + a _{ 1 n } x _{ n } & = & c_1 \\ a _{ 2 1 } x _1 + a _{ 2 2 } x _2 + \cdots + a _{ 2 n } x _{ n } & = & c_2 \\ \vdots & \vdots & \vdots \\ a _{ m 1 } x _1 + a _{ m 2 } x _2 + \cdots + a _{ m n } x _{ n } & = & c_m \end{matrix}} { }
be an inhomogeneous linear system over $K$, and let
\mathdisp {\begin{matrix} a _{ 1 1 } x _1 + a _{ 1 2 } x _2 + \cdots + a _{ 1 n } x _{ n } & = & 0 \\ a _{ 2 1 } x _1 + a _{ 2 2 } x _2 + \cdots + a _{ 2 n } x _{ n } & = & 0 \\ \vdots & \vdots & \vdots \\ a _{ m 1 } x _1 + a _{ m 2 } x _2 + \cdots + a _{ m n } x _{ n } & = & 0 \end{matrix}} { }
be the corresponding homogeneous linear system.}
\factcondition {If \mathl{{ \left( y_1 , \ldots , y_n \right) }}{} is a solution of the inhomogeneous system and if \mathl{{ \left( z_1 , \ldots , z_n \right) }}{} is a solution of the homogeneous system,}
\factconclusion {then \mathl{{ \left( y_1 +z_1 , \ldots , y_n + z_n \right) }}{} is a solution of the inhomogeneous system.}
\factextra {}
}
{

This follows immediately from Theorem 5.13 .

}


In particular, this means that when $L$ is the solution space of a homogeneous linear system, and when $y$ is one \extrabracket {particular} {} {} solution of an inhomogeneous linear system, then the mapping
\mathdisp {L \longrightarrow L' , z \longmapsto y+z} { , }
gives a bijection between $L$ and the solution set $L'$ of the inhomogeneous linear system.
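The superposition principle can be checked numerically on a small hypothetical matrix (the concrete matrix and tuples below are made up for illustration; the helper name is our own).

```python
from fractions import Fraction as F

def apply_matrix(M, x):
    """Compute the tuple M x."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in M]

# A hypothetical 2 x 3 matrix over Q and two solutions:
M = [[F(1), F(2), F(0)],
     [F(0), F(1), F(3)]]
y = [F(1), F(1), F(0)]                  # solves M x = c with c = (3, 1)
z = [F(2), F(0), F(1)]                  # solves M x = d with d = (2, 3)
c, d = apply_matrix(M, y), apply_matrix(M, z)
assert c == [3, 1] and d == [2, 3]

# By the superposition principle, y + z solves M x = c + d.
s = [yi + zi for yi, zi in zip(y, z)]
assert apply_matrix(M, s) == [ci + di for ci, di in zip(c, d)]
```

In particular, with $d = 0$ this is exactly the statement of the corollary: adding a solution of the homogeneous system to a particular solution of the inhomogeneous system yields another solution of the inhomogeneous system.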