# Mathematics for Applied Sciences (Osnabrück 2023-2024)/Part I/Lecture 22

## Example

A healthy breakfast starts with a fruit salad. The following table shows how much vitamin C, calcium, and magnesium various fruits contain (in milligrams per 100 grams of the fruit).

| | apple | orange | grapes | banana |
| --- | --- | --- | --- | --- |
| vitamin C | 12 | 53 | 4 | 9 |
| calcium | 7 | 40 | 12 | 5 |
| magnesium | 6 | 10 | 8 | 27 |

My fruit salad today consists of the mentioned fruits with portions ${\displaystyle {}{\begin{pmatrix}3\\2\\7\\6\end{pmatrix}}}$ (meaning ${\displaystyle {}300}$ grams of apple and so on). From that, one can calculate the total vitamin C, calcium, and magnesium content of the fruit salad, by multiplying, for each fruit, its portion by its specific amount, and summing up. The vitamin C content of the complete fruit salad is thus

${\displaystyle {}12\cdot 3+53\cdot 2+4\cdot 7+9\cdot 6=224\,.}$

This operation is an example of how a matrix operates. The table immediately yields a ${\displaystyle {}3\times 4}$-matrix, namely ${\displaystyle {}{\begin{pmatrix}12&53&4&9\\7&40&12&5\\6&10&8&27\end{pmatrix}}}$, and the above calculation is realized by the matrix multiplication

${\displaystyle {}{\begin{pmatrix}12&53&4&9\\7&40&12&5\\6&10&8&27\end{pmatrix}}{\begin{pmatrix}3\\2\\7\\6\end{pmatrix}}={\begin{pmatrix}224\\215\\256\end{pmatrix}}\,.}$

One can also ask for a fruit salad that has certain amounts of vitamin C, calcium, and magnesium, say ${\displaystyle {}{\begin{pmatrix}180\\110\\140\end{pmatrix}}}$. This leads to a system of linear equations in matrix form,

${\displaystyle {}{\begin{pmatrix}12&53&4&9\\7&40&12&5\\6&10&8&27\end{pmatrix}}{\begin{pmatrix}x_{1}\\x_{2}\\x_{3}\\x_{4}\end{pmatrix}}={\begin{pmatrix}180\\110\\140\end{pmatrix}}\,.}$
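The forward calculation above can be checked numerically. A minimal sketch using NumPy (an assumption; the lecture itself uses no software):

```python
import numpy as np

# Nutrient table: rows = vitamin C, calcium, magnesium;
# columns = apple, orange, grapes, banana (mg per 100 g)
A = np.array([[12, 53, 4, 9],
              [7, 40, 12, 5],
              [6, 10, 8, 27]])

# Portions in units of 100 g: 300 g apple, 200 g orange, 700 g grapes, 600 g banana
portions = np.array([3, 2, 7, 6])

# The matrix-vector product gives the total nutrient amounts of the salad
totals = A @ portions
print(totals)  # [224 215 256]
```

Note that the reverse question (prescribing the nutrient amounts) is a system of three equations in four unknowns, so it cannot in general be solved by a single matrix-vector product.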

## Matrices

A system of linear equations can easily be written with a matrix. This allows us to perform the manipulations that lead to the solution of such a system without writing down the variables. Matrices are quite simple objects; however, they can represent quite different mathematical objects (e.g., a family of column vectors, a family of row vectors, a linear mapping, a table of physical interactions, a vector field, etc.), which one has to keep in mind in order to prevent wrong conclusions.

## Definition

Let ${\displaystyle {}K}$ denote a field, and let ${\displaystyle {}I}$ and ${\displaystyle {}J}$ denote index sets. An ${\displaystyle {}I\times J}$-matrix is a mapping

${\displaystyle I\times J\longrightarrow K,(i,j)\longmapsto a_{ij}.}$

If ${\displaystyle {}I=\{1,\ldots ,m\}}$ and ${\displaystyle {}J=\{1,\ldots ,n\}}$, then we talk about an ${\displaystyle {}m\times n}$-matrix. In this case, the matrix is usually written as

${\displaystyle {\begin{pmatrix}a_{11}&a_{12}&\ldots &a_{1n}\\a_{21}&a_{22}&\ldots &a_{2n}\\\vdots &\vdots &\ddots &\vdots \\a_{m1}&a_{m2}&\ldots &a_{mn}\end{pmatrix}}.}$

We will usually restrict to this situation. For every ${\displaystyle {}i\in I}$, the family ${\displaystyle {}a_{ij}}$, ${\displaystyle {}j\in J}$, is called the ${\displaystyle {}i}$-th row of the matrix, which is usually written as a row vector

${\displaystyle (a_{i1},a_{i2},\ldots ,a_{in}).}$

For every ${\displaystyle {}j\in J}$, the family ${\displaystyle {}a_{ij}}$, ${\displaystyle {}i\in I}$, is called the ${\displaystyle {}j}$-th column of the matrix, usually written as a column vector

${\displaystyle {\begin{pmatrix}a_{1j}\\a_{2j}\\\vdots \\a_{mj}\end{pmatrix}}.}$

The elements ${\displaystyle {}a_{ij}}$ are called the entries of the matrix. For ${\displaystyle {}a_{ij}}$, the number ${\displaystyle {}i}$ is called the row index, and ${\displaystyle {}j}$ is called the column index of the entry. The position of the entry ${\displaystyle {}a_{ij}}$ is where the ${\displaystyle {}i}$-th row meets the ${\displaystyle {}j}$-th column. A matrix with ${\displaystyle {}m=n}$ is called a square matrix. An ${\displaystyle {}m\times 1}$-matrix is simply a column tuple (or column vector) of length ${\displaystyle {}m}$, and a ${\displaystyle {}1\times n}$-matrix is simply a row tuple (or row vector) of length ${\displaystyle {}n}$. The set of all matrices with ${\displaystyle {}m}$ rows and ${\displaystyle {}n}$ columns (and with entries in ${\displaystyle {}K}$) is denoted by ${\displaystyle {}\operatorname {Mat} _{m\times n}(K)}$; in case ${\displaystyle {}m=n}$, we also write ${\displaystyle {}\operatorname {Mat} _{n}(K)}$.

Two matrices ${\displaystyle {}A,B\in \operatorname {Mat} _{m\times n}(K)}$ are added by adding corresponding entries. The multiplication of a matrix ${\displaystyle {}A}$ with an element ${\displaystyle {}r\in K}$ (a scalar) is also defined entrywise, so

${\displaystyle {}{\begin{pmatrix}a_{11}&a_{12}&\ldots &a_{1n}\\a_{21}&a_{22}&\ldots &a_{2n}\\\vdots &\vdots &\ddots &\vdots \\a_{m1}&a_{m2}&\ldots &a_{mn}\end{pmatrix}}+{\begin{pmatrix}b_{11}&b_{12}&\ldots &b_{1n}\\b_{21}&b_{22}&\ldots &b_{2n}\\\vdots &\vdots &\ddots &\vdots \\b_{m1}&b_{m2}&\ldots &b_{mn}\end{pmatrix}}={\begin{pmatrix}a_{11}+b_{11}&a_{12}+b_{12}&\ldots &a_{1n}+b_{1n}\\a_{21}+b_{21}&a_{22}+b_{22}&\ldots &a_{2n}+b_{2n}\\\vdots &\vdots &\ddots &\vdots \\a_{m1}+b_{m1}&a_{m2}+b_{m2}&\ldots &a_{mn}+b_{mn}\end{pmatrix}}\,}$

and

${\displaystyle {}r{\begin{pmatrix}a_{11}&a_{12}&\ldots &a_{1n}\\a_{21}&a_{22}&\ldots &a_{2n}\\\vdots &\vdots &\ddots &\vdots \\a_{m1}&a_{m2}&\ldots &a_{mn}\end{pmatrix}}={\begin{pmatrix}ra_{11}&ra_{12}&\ldots &ra_{1n}\\ra_{21}&ra_{22}&\ldots &ra_{2n}\\\vdots &\vdots &\ddots &\vdots \\ra_{m1}&ra_{m2}&\ldots &ra_{mn}\end{pmatrix}}\,.}$
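The two entrywise operations can be sketched directly from the definition; a minimal illustration with matrices as lists of rows (the function names are my own):

```python
def mat_add(A, B):
    # Entrywise sum; A and B must have the same shape
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_scale(r, A):
    # Multiply every entry by the scalar r
    return [[r * a for a in row] for row in A]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mat_add(A, B))    # [[6, 8], [10, 12]]
print(mat_scale(2, A))  # [[2, 4], [6, 8]]
```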

The multiplication of matrices is defined in the following way.

## Definition

Let ${\displaystyle {}K}$ denote a field, and let ${\displaystyle {}A}$ denote an ${\displaystyle {}m\times n}$-matrix and ${\displaystyle {}B}$ an ${\displaystyle {}n\times p}$-matrix over ${\displaystyle {}K}$. Then the matrix product

${\displaystyle AB}$

is the ${\displaystyle {}m\times p}$-matrix, whose entries are given by

${\displaystyle {}c_{ik}=\sum _{j=1}^{n}a_{ij}b_{jk}\,.}$

Such a matrix multiplication is only possible when the number of columns of the left-hand matrix equals the number of rows of the right-hand matrix. Just think of the scheme

${\displaystyle {}(ROWROW){\begin{pmatrix}C\\O\\L\\U\\M\\N\end{pmatrix}}=(RC+O^{2}+WL+RU+OM+WN)\,,}$

the result is a ${\displaystyle {}1\times 1}$-matrix. In particular, one can multiply an ${\displaystyle {}m\times n}$-matrix ${\displaystyle {}A}$ with a column vector of length ${\displaystyle {}n}$ (the vector on the right), and the result is a column vector of length ${\displaystyle {}m}$. The two matrices can also be multiplied with roles interchanged,

${\displaystyle {}{\begin{pmatrix}C\\O\\L\\U\\M\\N\end{pmatrix}}(ROWROW)={\begin{pmatrix}CR&CO&CW&CR&CO&CW\\OR&O^{2}&OW&OR&O^{2}&OW\\LR&LO&LW&LR&LO&LW\\UR&UO&UW&UR&UO&UW\\MR&MO&MW&MR&MO&MW\\NR&NO&NW&NR&NO&NW\end{pmatrix}}\,.}$
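The defining formula ${\displaystyle {}c_{ik}=\sum _{j=1}^{n}a_{ij}b_{jk}}$ translates directly into code. A sketch (function name my own) that also reproduces the ROW/COLUMN scheme above:

```python
def mat_mul(A, B):
    # c_ik = sum over j of a_ij * b_jk;
    # requires: number of columns of A == number of rows of B
    m, n, p = len(A), len(B), len(B[0])
    assert all(len(row) == n for row in A)
    return [[sum(A[i][j] * B[j][k] for j in range(n)) for k in range(p)]
            for i in range(m)]

row = [[1, 2, 3]]         # a 1x3 matrix (row vector)
col = [[4], [5], [6]]     # a 3x1 matrix (column vector)
print(mat_mul(row, col))  # [[32]] -- a 1x1 matrix
print(mat_mul(col, row))  # a 3x3 matrix, like COLUMN times ROW above
```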

## Definition

An ${\displaystyle {}n\times n}$-matrix of the form

${\displaystyle {\begin{pmatrix}d_{11}&0&\cdots &\cdots &0\\0&d_{22}&0&\cdots &0\\\vdots &\ddots &\ddots &\ddots &\vdots \\0&\cdots &0&d_{n-1\,n-1}&0\\0&\cdots &\cdots &0&d_{nn}\end{pmatrix}}}$
is called a diagonal matrix.

## Definition

The ${\displaystyle {}n\times n}$-matrix

${\displaystyle {}E_{n}:={\begin{pmatrix}1&0&\cdots &\cdots &0\\0&1&0&\cdots &0\\\vdots &\ddots &\ddots &\ddots &\vdots \\0&\cdots &0&1&0\\0&\cdots &\cdots &0&1\end{pmatrix}}\,}$
is called the identity matrix.

The identity matrix ${\displaystyle {}E_{n}}$ has the property ${\displaystyle {}E_{n}M=M=ME_{n}}$, for an arbitrary ${\displaystyle {}n\times n}$-matrix ${\displaystyle {}M}$.
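The neutrality of ${\displaystyle {}E_{n}}$ can be checked on a concrete matrix; a small sketch using NumPy (an assumption, not part of the lecture):

```python
import numpy as np

n = 3
E = np.eye(n, dtype=int)          # the identity matrix E_n
M = np.array([[2, 7, 1],
              [8, 2, 8],
              [1, 8, 2]])

# E_n is neutral for matrix multiplication from both sides
print(np.array_equal(E @ M, M))   # True
print(np.array_equal(M @ E, M))   # True
```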

## Remark

If we multiply an ${\displaystyle {}m\times n}$-matrix ${\displaystyle {}A=(a_{ij})_{ij}}$ with a column vector ${\displaystyle {}x={\begin{pmatrix}x_{1}\\x_{2}\\\vdots \\x_{n}\end{pmatrix}}}$, then we get

${\displaystyle {}Ax={\begin{pmatrix}a_{11}&a_{12}&\ldots &a_{1n}\\a_{21}&a_{22}&\ldots &a_{2n}\\\vdots &\vdots &\ddots &\vdots \\a_{m1}&a_{m2}&\ldots &a_{mn}\end{pmatrix}}{\begin{pmatrix}x_{1}\\x_{2}\\\vdots \\x_{n}\end{pmatrix}}={\begin{pmatrix}a_{11}x_{1}+a_{12}x_{2}+\cdots +a_{1n}x_{n}\\a_{21}x_{1}+a_{22}x_{2}+\cdots +a_{2n}x_{n}\\\vdots \\a_{m1}x_{1}+a_{m2}x_{2}+\cdots +a_{mn}x_{n}\end{pmatrix}}\,.}$

Hence, an inhomogeneous system of linear equations with disturbance vector (right-hand side) ${\displaystyle {}{\begin{pmatrix}c_{1}\\c_{2}\\\vdots \\c_{m}\end{pmatrix}}}$ can be written briefly as

${\displaystyle {}Ax=c\,.}$

Then, the manipulations on the equations, which do not change the solution set, can be replaced by corresponding manipulations on the rows of the matrix. It is not necessary to write down the variables.
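Such row manipulations can be sketched on the augmented matrix ${\displaystyle {}(A\mid c)}$; a minimal illustration (the small example system is my own) of the three allowed operations, none of which changes the solution set:

```python
import numpy as np

# Augmented matrix [A | c] for the system 2x + y = 5, 4x + 3y = 11
M = np.array([[2., 1., 5.],
              [4., 3., 11.]])

M[1] = M[1] - 2 * M[0]   # add a multiple of one row to another: row1 -> [0, 1, 1]
M[0] = M[0] - 1 * M[1]   # eliminate the second variable from row 0
M[0] = M[0] / 2          # scale a row by a nonzero scalar

print(M)                 # [[1. 0. 2.], [0. 1. 1.]], so x = 2 and y = 1
```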

## Vector spaces

The central concept of linear algebra is a vector space.

## Definition

Let ${\displaystyle {}K}$ denote a field, and ${\displaystyle {}V}$ a set with a distinguished element ${\displaystyle {}0\in V}$, and with two mappings

${\displaystyle +\colon V\times V\longrightarrow V,(u,v)\longmapsto u+v,}$

and

${\displaystyle \cdot \colon K\times V\longrightarrow V,(s,v)\longmapsto sv=s\cdot v.}$

Then ${\displaystyle {}V}$ is called a ${\displaystyle {}K}$-vector space (or a vector space over ${\displaystyle {}K}$), if the following axioms hold[1] (where ${\displaystyle {}r,s\in K}$ and ${\displaystyle {}u,v,w\in V}$ are arbitrary). [2]

1. ${\displaystyle {}u+v=v+u}$,
2. ${\displaystyle {}(u+v)+w=u+(v+w)}$,
3. ${\displaystyle {}v+0=v}$,
4. For every ${\displaystyle {}v}$, there exists a ${\displaystyle {}z}$ such that ${\displaystyle {}v+z=0}$,
5. ${\displaystyle {}1\cdot u=u}$,
6. ${\displaystyle {}r(su)=(rs)u}$,
7. ${\displaystyle {}r(u+v)=ru+rv}$,
8. ${\displaystyle {}(r+s)u=ru+su}$.

The binary operation in ${\displaystyle {}V}$ is called (vector) addition, and the operation ${\displaystyle {}K\times V\rightarrow V}$ is called scalar multiplication. The elements in a vector space are called vectors, and the elements ${\displaystyle {}r\in K}$ are called scalars. The null element ${\displaystyle {}0\in V}$ is called the null vector, and for ${\displaystyle {}v\in V}$, the inverse element is called the negative of ${\displaystyle {}v}$, denoted by ${\displaystyle {}-v}$. The field occurring in the definition of a vector space is called the base field. All concepts of linear algebra refer to such a base field. In case ${\displaystyle {}K=\mathbb {R} }$, we talk about a real vector space, and in case ${\displaystyle {}K=\mathbb {C} }$, we talk about a complex vector space. For real and complex vector spaces, there exist further structures such as length, angle, and inner product. But first we develop the algebraic theory of vector spaces over an arbitrary field.

## Example

Let ${\displaystyle {}K}$ denote a field, and let ${\displaystyle {}n\in \mathbb {N} _{+}}$. Then the product set

${\displaystyle {}K^{n}=\underbrace {K\times \cdots \times K} _{n{\text{-times}}}={\left\{(x_{1},\ldots ,x_{n})\mid x_{i}\in K\right\}}\,,}$

with componentwise addition and with scalar multiplication given by

${\displaystyle {}s(x_{1},\ldots ,x_{n})=(sx_{1},\ldots ,sx_{n})\,,}$

is a vector space. This space is called the ${\displaystyle {}n}$-dimensional standard space. In particular, ${\displaystyle {}K^{1}=K}$ is a vector space.
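The componentwise operations on ${\displaystyle {}K^{n}}$ can be sketched for ${\displaystyle {}K=\mathbb {Q} }$, modeled with exact rational arithmetic (the helper names are my own):

```python
from fractions import Fraction as F

def vec_add(u, v):
    # Componentwise addition in K^n
    return tuple(a + b for a, b in zip(u, v))

def vec_scale(s, u):
    # Scalar multiplication: multiply each component by s
    return tuple(s * a for a in u)

u = (F(1, 2), F(1, 3))
v = (F(1, 2), F(2, 3))
print(vec_add(u, v))       # (Fraction(1, 1), Fraction(1, 1))
print(vec_scale(F(2), u))  # (Fraction(1, 1), Fraction(2, 3))
```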

The null space ${\displaystyle {}0}$, consisting of just one element ${\displaystyle {}0}$, is a vector space. It might be considered as ${\displaystyle {}K^{0}=0}$.

The vectors in the standard space ${\displaystyle {}K^{n}}$ can be written as row vectors

${\displaystyle \left(a_{1},\,a_{2},\,\ldots ,\,a_{n}\right)}$

or as column vectors

${\displaystyle {\begin{pmatrix}a_{1}\\a_{2}\\\vdots \\a_{n}\end{pmatrix}}.}$

The vector

${\displaystyle {}e_{i}:={\begin{pmatrix}0\\\vdots \\0\\1\\0\\\vdots \\0\end{pmatrix}}\,,}$

where the ${\displaystyle {}1}$ is at the ${\displaystyle {}i}$-th position, is called ${\displaystyle {}i}$-th standard vector.

## Example

The complex numbers ${\displaystyle {}\mathbb {C} }$ form a field, and therefore they form also a vector space over the field itself. However, the set of complex numbers equals ${\displaystyle {}\mathbb {R} ^{2}}$ as an additive group. The multiplication of a complex number ${\displaystyle {}a+b{\mathrm {i} }}$ with a real number ${\displaystyle {}s=(s,0)}$ is componentwise, so this multiplication coincides with the scalar multiplication on ${\displaystyle {}\mathbb {R} ^{2}}$. Hence, the complex numbers are also a real vector space.

## Example

For a field ${\displaystyle {}K}$, and given natural numbers ${\displaystyle {}m,n}$, the set

${\displaystyle \operatorname {Mat} _{m\times n}(K)}$

of all ${\displaystyle {}m\times n}$-matrices with componentwise addition and componentwise scalar multiplication, is a ${\displaystyle {}K}$-vector space. The null element in this vector space is the null matrix

${\displaystyle {}0={\begin{pmatrix}0&\ldots &0\\\vdots &\ddots &\vdots \\0&\ldots &0\end{pmatrix}}\,.}$

## Example

Let ${\displaystyle {}R=K[X]}$ be the polynomial ring in one variable over the field ${\displaystyle {}K}$, consisting of all polynomials, that is, expressions of the form

${\displaystyle a_{n}X^{n}+a_{n-1}X^{n-1}+\cdots +a_{2}X^{2}+a_{1}X+a_{0},}$

with ${\displaystyle {}a_{i}\in K}$. Using componentwise addition and componentwise multiplication with a scalar ${\displaystyle {}s\in K}$ (this is also multiplication with the constant polynomial ${\displaystyle {}s}$), the polynomial ring is a ${\displaystyle {}K}$-vector space.
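Representing a polynomial by its coefficient tuple, the vector space operations are again componentwise; a small sketch (coefficient lists ordered from constant term upward, function names my own):

```python
from itertools import zip_longest

def poly_add(p, q):
    # Coefficient lists, lowest degree first; pad the shorter with zeros
    return [a + b for a, b in zip_longest(p, q, fillvalue=0)]

def poly_scale(s, p):
    # Multiplication by the scalar s (i.e., by the constant polynomial s)
    return [s * a for a in p]

p = [1, 0, 2]            # 1 + 2X^2
q = [3, 4]               # 3 + 4X
print(poly_add(p, q))    # [4, 4, 2], i.e. 4 + 4X + 2X^2
print(poly_scale(3, p))  # [3, 0, 6], i.e. 3 + 6X^2
```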

## Lemma

Let ${\displaystyle {}K}$ be a field, and let ${\displaystyle {}V}$ be a ${\displaystyle {}K}$-vector space. Then the following properties hold (for ${\displaystyle {}v\in V}$ and ${\displaystyle {}s\in K}$).

1. We have ${\displaystyle {}0v=0}$.
2. We have ${\displaystyle {}s0=0}$.
3. We have ${\displaystyle {}(-1)v=-v}$.
4. If ${\displaystyle {}s\neq 0}$ and ${\displaystyle {}v\neq 0}$, then ${\displaystyle {}sv\neq 0}$.

### Proof

${\displaystyle \Box }$
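The proof is left out above; as an illustration, here is a sketch of property (1), using only the axioms from the definition:

```latex
% Property (1): 0v = 0 for every v in V.
% By distributivity (axiom 8), applied with r = s = 0:
0v = (0+0)v = 0v + 0v .
% Adding the negative of 0v (which exists by axiom 4) to both sides gives
0 = 0v .
```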

## Linear subspaces

## Definition

Let ${\displaystyle {}K}$ be a field, and ${\displaystyle {}V}$ be a ${\displaystyle {}K}$-vector space. A subset ${\displaystyle {}U\subseteq V}$ is called a linear subspace, if the following properties hold.

1. ${\displaystyle {}0\in U}$.
2. If ${\displaystyle {}u,v\in U}$, then also ${\displaystyle {}u+v\in U}$.
3. If ${\displaystyle {}u\in U}$ and ${\displaystyle {}s\in K}$, then also ${\displaystyle {}su\in U}$ holds.

Addition and scalar multiplication can be restricted to such a linear subspace. Hence, the linear subspace is itself a vector space; see Exercise 22.20. The simplest linear subspaces in a vector space ${\displaystyle {}V}$ are the null space ${\displaystyle {}0}$ and the whole vector space ${\displaystyle {}V}$.

## Lemma

Let ${\displaystyle {}K}$ be a field, and let

${\displaystyle {\begin{matrix}a_{11}x_{1}+a_{12}x_{2}+\cdots +a_{1n}x_{n}&=&0\\a_{21}x_{1}+a_{22}x_{2}+\cdots +a_{2n}x_{n}&=&0\\\vdots &\vdots &\vdots \\a_{m1}x_{1}+a_{m2}x_{2}+\cdots +a_{mn}x_{n}&=&0\end{matrix}}}$

be a homogeneous system of linear equations over ${\displaystyle {}K}$. Then the set of all solutions to the system is a linear subspace of the standard space ${\displaystyle {}K^{n}}$.

### Proof

${\displaystyle \Box }$

Therefore, we talk about the solution space of the linear system. In particular, the sum of two solutions of a system of linear equations is again a solution. The solution set of an inhomogeneous linear system is not a vector space. However, adding a solution of the corresponding homogeneous system to a solution of the inhomogeneous system yields again a solution of the inhomogeneous system.

## Example

We have a look at the homogeneous version of Example 21.11, so we consider the homogeneous linear system

${\displaystyle {\begin{matrix}2x&+5y&+2z&&-v&=&0\\3x&-4y&&+u&+2v&=&0\\4x&&-2z&+2u&&=&0\,.\end{matrix}}}$

over ${\displaystyle {}\mathbb {R} }$. Due to Lemma 22.14, the solution set ${\displaystyle {}L}$ is a linear subspace of ${\displaystyle {}\mathbb {R} ^{5}}$. We have described it explicitly in Example 21.11 as

${\displaystyle {\left\{u{\left(-{\frac {1}{3}},0,{\frac {1}{3}},1,0\right)}+v{\left(-{\frac {2}{13}},{\frac {5}{13}},-{\frac {4}{13}},0,1\right)}\mid u,v\in \mathbb {R} \right\}},}$

which also shows that the solution set is a vector space. With this description, it is clear that ${\displaystyle {}L}$ is in bijection with ${\displaystyle {}\mathbb {R} ^{2}}$, and this bijection respects the addition and also the scalar multiplication (the solution set ${\displaystyle {}L'}$ of the inhomogeneous system is also in bijection with ${\displaystyle {}\mathbb {R} ^{2}}$, but there is no reasonable addition or scalar multiplication on ${\displaystyle {}L'}$). However, this bijection depends heavily on the chosen "basic solutions" ${\displaystyle {}{\left(-{\frac {1}{3}},0,{\frac {1}{3}},1,0\right)}}$ and ${\displaystyle {}{\left(-{\frac {2}{13}},{\frac {5}{13}},-{\frac {4}{13}},0,1\right)}}$, which depend on the order of elimination. There are several equally good basic solutions for ${\displaystyle {}L}$.
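The subspace properties can be verified for this concrete example; a sketch with exact rational arithmetic (the helper name `apply` is my own):

```python
from fractions import Fraction as F

# Coefficient matrix of the homogeneous system, variables ordered (x, y, z, u, v)
A = [[2, 5, 2, 0, -1],
     [3, -4, 0, 1, 2],
     [4, 0, -2, 2, 0]]

# The two basic solutions found by elimination
s1 = (F(-1, 3), F(0), F(1, 3), F(1), F(0))
s2 = (F(-2, 13), F(5, 13), F(-4, 13), F(0), F(1))

def apply(A, v):
    # Evaluate each equation's left-hand side at the vector v
    return [sum(a * x for a, x in zip(row, v)) for row in A]

print(apply(A, s1))  # [0, 0, 0]
print(apply(A, s2))  # [0, 0, 0]

# Any linear combination u*s1 + v*s2 is again a solution
combo = tuple(3 * a + 5 * b for a, b in zip(s1, s2))
print(apply(A, combo))  # [0, 0, 0]
```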

This example also shows the following: the solution space of a homogeneous linear system over ${\displaystyle {}K}$ is, in a natural way (that is, independent of any choices), a linear subspace of ${\displaystyle {}K^{n}}$ (where ${\displaystyle {}n}$ is the number of variables). For this solution space, there always exists a "linear bijection" (an "isomorphism") to some ${\displaystyle {}K^{d}}$ (${\displaystyle {}d\leq n}$), but there is no natural choice for such a bijection. This is one of the main reasons to work with abstract vector spaces, instead of just ${\displaystyle {}K^{n}}$.

## Footnotes
1. The first four axioms, which are independent of ${\displaystyle {}K}$, mean that ${\displaystyle {}(V,0,+)}$ is a commutative group.
2. Also for vector spaces, there is the convention that multiplication binds stronger than addition.