Mathematics for Applied Sciences (Osnabrück 2023-2024)/Part I/Lecture 21
The lectures of the next weeks deal with linear algebra. We fix a field $K$; one might think of the field of real numbers $\mathbb{R}$. But since we are at first only concerned with the algebraic properties of $K$, one might also think of the rational numbers $\mathbb{Q}$. From the theory of eigenspaces onwards, analytic properties such as the existence of roots will also become important.
- Systems of linear equations
In the context of polynomial interpolation, we have already encountered systems of linear equations.
Firstly, we give three further introductory examples, one from everyday life, one from geometry, and one from physics. They all lead to systems of linear equations.
At a booth on the Christmas market, there are three different pots of mulled wine. All three contain the ingredients cinnamon, cloves, red wine, and sugar, but the compositions differ. The mixtures of the mulled wines are
Every mulled wine is represented by a four-tuple, where the entries represent the respective shares of the ingredients. The set of all (possible) mulled wines forms a vector space (we will introduce this concept in the next lecture) and the three concrete mulled wines are vectors in this space.
Now suppose that none of the three mulled wines meets exactly our taste; in fact, the wanted mulled wine has the mixture
Is there a possibility to get the wanted mulled wine by pouring together the given mulled wines in some way? Are there numbers[1] such that
holds? This vector equation can be expressed by four equations in the "variables", where the equations come from the rows. When does there exist a solution, when none, when many? These are typical questions of linear algebra.
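Since the concrete mixture tuples are not reproduced above, here is a sketch of the shape of this vector equation; the names $a, b, c$ for the amounts, $g_{ij}$ for the share of ingredient $i$ in mulled wine $j$, and $w_i$ for the wanted share of ingredient $i$ are chosen only for this illustration:
$$a \begin{pmatrix} g_{11}\\ g_{21}\\ g_{31}\\ g_{41} \end{pmatrix} + b \begin{pmatrix} g_{12}\\ g_{22}\\ g_{32}\\ g_{42} \end{pmatrix} + c \begin{pmatrix} g_{13}\\ g_{23}\\ g_{33}\\ g_{43} \end{pmatrix} = \begin{pmatrix} w_1\\ w_2\\ w_3\\ w_4 \end{pmatrix}, \quad \text{that is,} \quad g_{i1} a + g_{i2} b + g_{i3} c = w_i \ \text{ for } i = 1, 2, 3, 4.$$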
Suppose that two planes are given in $\mathbb{R}^3$,[2]
and
How can we describe the intersecting line? A point belongs to this line if and only if it satisfies both plane equations. Therefore, both equations,
must hold. We multiply the first equation by , subtract from it four times the second equation, and get
If we set , then and must hold. This means that the point belongs to . In the same way, setting , we find the point . Therefore, the intersecting line is the line connecting these points, so
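The plane equations of this example are not reproduced above; as a hypothetical illustration of the same procedure, take the planes
$$E_1 = \{ (x,y,z) \mid x + y + z = 1 \} \quad \text{and} \quad E_2 = \{ (x,y,z) \mid x - y + 2z = 0 \}.$$
Subtracting the second equation from the first eliminates $x$ and gives $2y - z = 1$. Setting $z = 0$ yields $y = \tfrac{1}{2}$ and then $x = \tfrac{1}{2}$, so $P = \left( \tfrac{1}{2}, \tfrac{1}{2}, 0 \right)$ lies on the intersection; setting $y = 0$ yields $z = -1$ and $x = 2$, so $Q = (2, 0, -1)$ does as well. Hence
$$E_1 \cap E_2 = \left\{ \left( \tfrac{1}{2}, \tfrac{1}{2}, 0 \right) + t \left( \tfrac{3}{2}, -\tfrac{1}{2}, -1 \right) \mid t \in \mathbb{R} \right\}.$$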
An electrical network consists of several connected wires, which in this context we call the edges of the network. In every edge, there is a certain resistance (depending on the material and the length of the edge). The points where the edges meet are called the vertices of the network. If we apply a certain electric voltage to some edges of the network, then there will be a certain current in every edge. The goal is to determine the currents from the data of the network and the voltages.
It is helpful to assign a fixed direction to each edge in order to distinguish the direction of the current in this edge (if the current flows in the opposite direction, it gets a minus sign). We call these directed edges. In every vertex of the network, the currents of the adjacent edges come together; therefore, their sum must be $0$. In an edge, there is a voltage drop, determined by Ohm's law to be $U = R \cdot I$ (voltage equals resistance times current).
We call a closed, directed path of edges in the network a mesh. For such a mesh, the sum of the voltages is $0$, unless a certain voltage is enforced from "outside".
We list Kirchhoff's laws again:
- In every vertex, the sum of the currents equals $0$.
- In every mesh, the sum of the voltages equals $0$.
- If a voltage is enforced in a mesh, then the sum of the voltages equals this enforced voltage.
Due to "physical reasons“, we expect that, given voltages in every edge, there should be a well-defined current in every edge. In fact, these currents can be computed if we translate the stated laws into a system of linear equations and solve this system.
In the example given by the picture, suppose that the edges (with the resistances ) are directed from left to right and that the connecting edge from to (where the voltage is applied) is directed upwards. The four vertices and the three meshes and yield the system of linear equations
Here, the resistances and the applied voltage are given numbers, and the currents are the unknowns we are looking for.
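To see on a tiny made-up circuit how these laws produce a linear system, consider a single mesh in which two resistors $R_1, R_2$ and an applied voltage $U$ are connected in series (all names chosen here only for illustration). The vertex rule at the node between the two resistors and the mesh rule give
$$I_1 - I_2 = 0, \qquad R_1 I_1 + R_2 I_2 = U,$$
a system of two linear equations in the unknown currents $I_1, I_2$, with the unique solution $I_1 = I_2 = \frac{U}{R_1 + R_2}$.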
We now give the definition of a homogeneous and of an inhomogeneous system of linear equations over a field $K$ for a given set of variables.
Let $K$ denote a field, and let $a_{ij} \in K$ for $1 \leq i \leq m$ and $1 \leq j \leq n$. We call
$$\begin{matrix} a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n & = & 0 \\ \vdots & & \vdots \\ a_{m1} x_1 + a_{m2} x_2 + \cdots + a_{mn} x_n & = & 0 \end{matrix}$$
a (homogeneous) system of linear equations in the variables $x_1, \ldots, x_n$. A tuple $(\xi_1, \ldots, \xi_n) \in K^n$ is called a solution of the linear system if $\sum_{j=1}^{n} a_{ij} \xi_j = 0$ holds for all $i = 1, \ldots, m$.
If, moreover, a tuple $(c_1, \ldots, c_m) \in K^m$ is given,[3] then
$$\begin{matrix} a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n & = & c_1 \\ \vdots & & \vdots \\ a_{m1} x_1 + a_{m2} x_2 + \cdots + a_{mn} x_n & = & c_m \end{matrix}$$
is called an inhomogeneous system of linear equations. A tuple $(\xi_1, \ldots, \xi_n) \in K^n$ is called a solution to the inhomogeneous linear system if $\sum_{j=1}^{n} a_{ij} \xi_j = c_i$
holds for all $i = 1, \ldots, m$.
The set of all solutions of the system is called the solution set. In the homogeneous case, this is also called the solution space, as it is indeed, by Lemma 22.14, a vector space.
A homogeneous system of linear equations always has the so-called trivial solution $(0, 0, \ldots, 0)$. An inhomogeneous system does not necessarily have a solution. For a given inhomogeneous linear system of equations, the homogeneous system that arises when we replace the tuple on the right-hand side by the null vector is called the corresponding homogeneous system.
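For instance (a made-up example for illustration), the inhomogeneous system
$$2x + 3y = 5, \qquad x - y = 0$$
has the solution $(1, 1)$, while its corresponding homogeneous system
$$2x + 3y = 0, \qquad x - y = 0$$
has only the trivial solution $(0, 0)$.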
The following situation describes a more abstract version of Example 21.1.
Let $K$ denote a field, and let $m, n \in \mathbb{N}_+$. Suppose that in $K^m$, there are $n$ vectors (or $m$-tuples)
given. Let
be another vector. We want to know whether this vector can be written as a linear combination of the given ones. Thus, we are dealing with the question of whether there are elements of the field such that
holds. This equality of vectors means identity in every component, so that this condition yields a system of linear equations
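Written out explicitly, denoting the components of the $j$-th given vector by $v_{1j}, \ldots, v_{mj}$, the components of the target vector by $w_1, \ldots, w_m$, and the sought scalars by $s_1, \ldots, s_n$ (this notation is chosen here, since the original symbols are not reproduced above), the condition reads
$$s_1 \begin{pmatrix} v_{11}\\ \vdots\\ v_{m1} \end{pmatrix} + \cdots + s_n \begin{pmatrix} v_{1n}\\ \vdots\\ v_{mn} \end{pmatrix} = \begin{pmatrix} w_1\\ \vdots\\ w_m \end{pmatrix}, \quad \text{that is,} \quad \sum_{j=1}^{n} v_{ij} s_j = w_i \ \text{ for } i = 1, \ldots, m,$$
an inhomogeneous system of $m$ linear equations in the $n$ unknowns $s_1, \ldots, s_n$.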
- Solving linear systems
Systems of linear equations are best solved by the elimination method, where successively a variable gets eliminated, and in the end we get an equivalent simple system which can be solved directly (or from which we can read off that there is no solution). For small systems, the substitution method or the equating method can also be useful.
Let $K$ denote a field, and let two (inhomogeneous) systems of linear equations,
with respect to the same set of variables, be given. The systems are called equivalent if their solution sets are identical.

Let $K$ be a field, and let
be an inhomogeneous system of linear equations over $K$. Then the following manipulations on this system yield an equivalent system.
- (1) Swapping two equations.
- (2) The multiplication of an equation by a scalar $s \neq 0$.
- (3) The omission of an equation if it occurs twice.
- (4) The duplication of an equation (in the sense of writing the equation down again).
- (5) The omission or addition of a zero row (zero equation).
- (6) The replacement of an equation $H$ by the equation that arises when we add another equation $G$ of the system to $H$.
Most statements are immediately clear. (2) follows from the fact that if
holds, then also
holds for every $s \in K$. If $s \neq 0$, then this implication can be reversed by multiplication with $s^{-1}$.
(6). Let $G$ be the equation
and let $H$ be the equation
If a tuple satisfies both equations, then it also satisfies the equation $G + H$. And if the tuple satisfies the equations $G$ and $G + H$, then, because of $H = (G + H) - G$, it also satisfies the equations $G$ and $H$.
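A small made-up illustration of manipulation (6): in the system $x + y = 3$, $x - y = 1$, replacing the second equation by the sum of both equations yields the system $x + y = 3$, $2x = 4$; both systems have exactly the solution $(x, y) = (2, 1)$.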
For finding the solutions of a linear system, the manipulations (2) and (6) are the most important; in general, these two steps are combined, and an equation $H$ is replaced by an equation of the form $H + \lambda G$ (with another equation $G$ of the system and a scalar $\lambda \in K$). Here, $\lambda$ has to be chosen in such a way that the new equation contains one variable less than the old equation. This process is called elimination of a variable. The elimination is applied not just to one equation, but to all equations except one (suitably chosen) "working row" $G$, and with respect to a fixed "working variable". The following elimination lemma describes this step.
Let $K$ denote a field, and let an (inhomogeneous) system of linear equations over $K$ in the variables $x_1, \ldots, x_n$ be given. Suppose that $x$ is a variable which occurs in at least one equation $G$ with a coefficient $a \neq 0$. Then every equation $H$ different from $G$[4] can be replaced by an equation $H'$ in which $x$ does not occur any more, and such that the new system of equations, consisting of $G$ and the equations $H'$, is equivalent with the given system.
Changing the numbering, we may assume $x = x_1$. Let $G$ be the equation
$$a x_1 + \sum_{i = 2}^{n} a_i x_i = b$$
(with $a \neq 0$), and let $H$ be the equation
$$c x_1 + \sum_{i = 2}^{n} c_i x_i = d.$$
Then the equation
$$H' = H - \frac{c}{a} G$$
has the form
$$\sum_{i = 2}^{n} \left( c_i - \frac{c}{a} a_i \right) x_i = d - \frac{c}{a} b,$$
and $x_1$ does not occur in it. Because of $H = H' + \frac{c}{a} G$, the systems are equivalent.
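A concrete made-up instance of this elimination step: with $G$ the equation $2x_1 + x_2 - x_3 = 4$ and $H$ the equation $3x_1 - x_2 + 2x_3 = 1$, we have $a = 2$ and $c = 3$, and
$$H' = H - \tfrac{3}{2} G: \quad -\tfrac{5}{2} x_2 + \tfrac{7}{2} x_3 = -5,$$
an equation in which $x_1$ no longer occurs.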
Every (inhomogeneous) system of linear equations over a field $K$ can be transformed, by the manipulations described in Lemma 21.7, into an equivalent linear system of the form
where in each row the first coefficient is different from $0$. Here, either the last row reads $0 = 0$, and it can be omitted, or it reads $0 = d$ with some $d \neq 0$, and then the system has no solution at all.

This follows directly from the elimination lemma, by successively eliminating variables. Elimination is applied first to the first variable (in the given ordering) which occurs in at least one equation with a coefficient different from $0$ (if it occurs in only one equation, then this elimination step is already done). This elimination process is continued as long as the new subsystem (without the working equation used in the elimination step before) contains at least one equation in which some variable occurs with a coefficient different from $0$. In the end, only equations without variables remain; they are either all zero equations, or there is no solution.
Let an inhomogeneous system of linear equations in triangular form
with $m \leq n$ over a field $K$ be given, where the diagonal elements are all not $0$. Then the solutions are in bijection with the tuples $(x_{m+1}, \ldots, x_n) \in K^{n-m}$.

The entries $x_{m+1}, \ldots, x_n$ can be chosen arbitrarily; they determine a unique solution, and every solution is of this form. This is clear: once this tuple is given, the rows determine the other variables successively from bottom to top.
For $m = n$, there are no free variables, and the linear system has exactly one solution.
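Since the triangular system of the corollary is not displayed above, here is a small made-up triangular system with $m = 2$ equations in $n = 3$ variables that illustrates the back substitution:
$$2x + y - z = 3, \qquad 3y + z = 6.$$
The variable $z$ can be chosen freely; the second equation then forces $y = 2 - \tfrac{z}{3}$, and the first equation forces $x = \tfrac{3 - y + z}{2} = \tfrac{1}{2} + \tfrac{2z}{3}$. The solutions are thus in bijection with the values of $z$.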
We want to solve the inhomogeneous linear system
over (or over ). Firstly, we eliminate by keeping the first row , replacing the second row by , and replacing the third row by . This yields
Now, we can eliminate from the (new) third row, with the help of the second row. Because of the fractions, we rather eliminate (which eliminates also ). We leave the first and the second row as they are, and we replace the third row by . This yields the system, in a new ordering of the variables,[5]
Now we can choose an arbitrary (free) value for . The third row then determines uniquely; we must have
In the second equation, we can choose arbitrarily; this determines via
The first row determines , namely
Hence, the solution set is
A particularly simple solution is obtained by setting the free variables and to $0$. This yields the special solution
The general solution set can also be written as
Here,
is a description of the general solution of the corresponding homogeneous linear system.
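The numbers of the example above are not reproduced here; the following Python sketch shows the same two phases, forward elimination and back substitution, for a square system with a unique solution, using exact rational arithmetic. The function name and the data are made up for this illustration; it is a sketch of Gaussian elimination, not the computation carried out in the lecture.

```python
from fractions import Fraction

def solve_unique(A, b):
    """Solve A x = b for a square system with a unique solution by
    Gaussian elimination (forward elimination, then back substitution),
    using exact rational arithmetic."""
    n = len(A)
    # Augmented matrix [A | b] with Fraction entries.
    M = [[Fraction(x) for x in row] + [Fraction(c)] for row, c in zip(A, b)]
    for col in range(n):
        # Manipulation (1): swap a row with a nonzero entry into the working position.
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            # Manipulations (2) and (6) combined: eliminate the working variable.
            factor = M[r][col] / M[col][col]
            M[r] = [m_r - factor * m_c for m_r, m_c in zip(M[r], M[col])]
    # Back substitution on the triangular system, from bottom to top.
    x = [Fraction(0)] * n
    for row in range(n - 1, -1, -1):
        s = sum(M[row][j] * x[j] for j in range(row + 1, n))
        x[row] = (M[row][n] - s) / M[row][row]
    return x

# A made-up example (not the system of the lecture):
A = [[2, 1, -1],
     [1, 3,  2],
     [1, 0,  1]]
b = [1, 13, 4]
print(solve_unique(A, b))  # [Fraction(1, 1), Fraction(2, 1), Fraction(3, 1)]
```

Exact fractions are used here instead of floating-point numbers so that the computed solution is not affected by rounding; this mirrors working over the field of rational numbers.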
A system of linear inequalities over the rational numbers or over the real numbers is a system of the form
where each of the relation symbols might be $\leq$ or $\geq$. It is considerably more difficult to find the solution set of such a system than in the case of equations. In general, it is not possible to eliminate the variables.
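For instance (an illustration, not taken from the lecture), the system of inequalities
$$x \geq 0, \quad y \geq 0, \quad x + y \leq 1$$
over $\mathbb{R}$ has as its solution set the triangle with vertices $(0,0)$, $(1,0)$ and $(0,1)$; here one cannot simply eliminate a variable by adding a multiple of one inequality to another, since multiplying an inequality by a negative scalar reverses its direction.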
- Footnotes
- ↑ In this example, only positive numbers have a practical interpretation. In linear algebra, everything is over a field, so we also allow negative numbers.
- ↑ At this point, we do not discuss that such equations define a plane. The solution sets are "shifted linear subspaces of dimension two".
- ↑ Such a vector is sometimes called a disturbance vector of the system.
- ↑ It is enough that these equations have a different index in the system.
- ↑ Such a reordering is safe as long as we keep the names of the variables. But if we write down the system in matrix notation without the variables, then one has to be careful and remember the reordering of the columns.