Mathematics for Applied Sciences (Osnabrück 2023-2024)/Part I/Lecture 22/latex

\setcounter{section}{22}




\inputexample{}
{






\image{ \begin{center}
\includegraphics[width=5.5cm]{\imageinclude {Fruit salad (1).jpg} }
\end{center}
\imagetext {} }

\imagelicense { Fruit salad (1).jpg } {} {Fæ} {Commons} {public domain} {}

A healthy breakfast starts with a fruit salad. The following table shows how much vitamin C, calcium, and magnesium various fruits contain \extrabracket {in milligrams per 100 grams of the fruit} {} {.}

%Data for following table


\renewcommand{\leadrowzero}{ }

\renewcommand{\leadrowone}{ apple }

\renewcommand{\leadrowtwo}{ orange }

\renewcommand{\leadrowthree}{ grapes }

\renewcommand{\leadrowfour}{ banana }

\renewcommand{\leadrowfive}{ }

\renewcommand{\leadrowsix}{ }

\renewcommand{\leadrowseven}{ }

\renewcommand{\leadroweight}{ }

\renewcommand{\leadrownine}{ }

\renewcommand{\leadrowten}{ }

\renewcommand{\leadroweleven}{ }

\renewcommand{\leadrowtwelve}{ }


\renewcommand{\leadcolumnzero}{ }

\renewcommand{\leadcolumnone}{ vitamin C }

\renewcommand{\leadcolumntwo}{ calcium }

\renewcommand{\leadcolumnthree}{ magnesium }

\renewcommand{\leadcolumnfour}{ }

\renewcommand{\leadcolumnfive}{ }

\renewcommand{\leadcolumnsix}{ }

\renewcommand{\leadcolumnseven}{ }

\renewcommand{\leadcolumneight}{ }

\renewcommand{\leadcolumnnine}{ }

\renewcommand{\leadcolumnten}{ }

\renewcommand{\leadcolumneleven}{ }

\renewcommand{\leadcolumntwelve}{ }

\renewcommand{\leadcolumnthirteen}{ }

\renewcommand{\leadcolumnfourteen}{ }

\renewcommand{\leadcolumnfifteen}{ }

\renewcommand{\leadcolumnsixteen}{ }

\renewcommand{\leadcolumnseventeen}{ }

\renewcommand{\leadcolumneightteen}{ }

\renewcommand{\leadcolumnnineteen}{ }

\renewcommand{\leadcolumntwenty}{ }



\renewcommand{\aonexone}{ 12 }

\renewcommand{\aonextwo}{ 53 }

\renewcommand{\aonexthree}{ 4 }

\renewcommand{\aonexfour}{ 9 }

\renewcommand{\aonexfive}{ }

\renewcommand{\aonexsix}{ }

\renewcommand{\aonexseven}{ }

\renewcommand{\aonexeight}{ }

\renewcommand{\aonexnine}{ }

\renewcommand{\aonexten}{ }

\renewcommand{\aonexeleven}{ }

\renewcommand{\aonextwelve}{ }



\renewcommand{\atwoxone}{ 7 }

\renewcommand{\atwoxtwo}{ 40 }

\renewcommand{\atwoxthree}{ 12 }

\renewcommand{\atwoxfour}{ 5 }

\renewcommand{\atwoxfive}{ }

\renewcommand{\atwoxsix}{ }

\renewcommand{\atwoxseven}{ }

\renewcommand{\atwoxeight}{ }

\renewcommand{\atwoxnine}{ }

\renewcommand{\atwoxten}{ }

\renewcommand{\atwoxeleven}{ }

\renewcommand{\atwoxtwelve}{ }



\renewcommand{\athreexone}{ 6 }

\renewcommand{\athreextwo}{ 10 }

\renewcommand{\athreexthree}{ 8 }

\renewcommand{\athreexfour}{ 27 }

\renewcommand{\athreexfive}{ }

\renewcommand{\athreexsix}{ }

\renewcommand{\athreexseven}{ }

\renewcommand{\athreexeight}{ }

\renewcommand{\athreexnine}{ }

\renewcommand{\athreexten}{ }

\renewcommand{\athreexeleven}{ }

\renewcommand{\athreextwelve}{ }



\renewcommand{\afourxone}{ }

\renewcommand{\afourxtwo}{ }

\renewcommand{\afourxthree}{ }

\renewcommand{\afourxfour}{ }

\renewcommand{\afourxfive}{ }

\renewcommand{\afourxsix}{ }

\renewcommand{\afourxseven}{ }

\renewcommand{\afourxeight}{ }

\renewcommand{\afourxnine}{ }

\renewcommand{\afourxten}{ }

\renewcommand{\afourxeleven}{ }

\renewcommand{\afourxtwelve}{ }


\renewcommand{\afivexone}{ }

\renewcommand{\afivextwo}{ }

\renewcommand{\afivexthree}{ }

\renewcommand{\afivexfour}{ }

\renewcommand{\afivexfive}{ }

\renewcommand{\afivexsix}{ }

\renewcommand{\afivexseven}{ }

\renewcommand{\afivexeight}{ }

\renewcommand{\afivexnine}{ }

\renewcommand{\afivexten}{ }

\renewcommand{\afivexeleven}{ }

\renewcommand{\afivextwelve}{ }


\renewcommand{\asixxone}{ }

\renewcommand{\asixxtwo}{ }

\renewcommand{\asixxthree}{ }

\renewcommand{\asixxfour}{ }

\renewcommand{\asixxfive}{ }

\renewcommand{\asixxsix}{ }

\renewcommand{\asixxseven}{ }

\renewcommand{\asixxeight}{ }

\renewcommand{\asixxnine}{ }

\renewcommand{\asixxten}{ }

\renewcommand{\asixxeleven}{ }

\renewcommand{\asixxtwelve}{ }


\renewcommand{\asevenxone}{ }

\renewcommand{\asevenxtwo}{ }

\renewcommand{\asevenxthree}{ }

\renewcommand{\asevenxfour}{ }

\renewcommand{\asevenxfive}{ }

\renewcommand{\asevenxsix}{ }

\renewcommand{\asevenxseven}{ }

\renewcommand{\asevenxeight}{ }

\renewcommand{\asevenxnine}{ }

\renewcommand{\asevenxten}{ }

\renewcommand{\asevenxeleven}{ }

\renewcommand{\asevenxtwelve}{ }


\renewcommand{\aeightxone}{ }

\renewcommand{\aeightxtwo}{ }

\renewcommand{\aeightxthree}{ }

\renewcommand{\aeightxfour}{ }

\renewcommand{\aeightxfive}{ }

\renewcommand{\aeightxsix}{ }

\renewcommand{\aeightxseven}{ }

\renewcommand{\aeightxeight}{ }

\renewcommand{\aeightxnine}{ }

\renewcommand{\aeightxten}{ }

\renewcommand{\aeightxeleven}{ }

\renewcommand{\aeightxtwelve}{ }


\renewcommand{\aninexone}{ }

\renewcommand{\aninextwo}{ }

\renewcommand{\aninexthree}{ }

\renewcommand{\aninexfour}{ }

\renewcommand{\aninexfive}{ }

\renewcommand{\aninexsix}{ }

\renewcommand{\aninexseven}{ }

\renewcommand{\aninexeight}{ }

\renewcommand{\aninexnine}{ }

\renewcommand{\aninexten}{ }

\renewcommand{\aninexeleven}{ }

\renewcommand{\aninextwelve}{ }


\renewcommand{\atenxone}{ }

\renewcommand{\atenxtwo}{ }

\renewcommand{\atenxthree}{ }

\renewcommand{\atenxfour}{ }

\renewcommand{\atenxfive}{ }

\renewcommand{\atenxsix}{ }

\renewcommand{\atenxseven}{ }

\renewcommand{\atenxeight}{ }

\renewcommand{\atenxnine}{ }

\renewcommand{\atenxten}{ }

\renewcommand{\atenxeleven}{ }

\renewcommand{\atenxtwelve}{ }



\renewcommand{\aelevenxone}{ }

\renewcommand{\aelevenxtwo}{ }

\renewcommand{\aelevenxthree}{ }

\renewcommand{\aelevenxfour}{ }

\renewcommand{\aelevenxfive}{ }

\renewcommand{\aelevenxsix}{ }

\renewcommand{\aelevenxseven}{ }

\renewcommand{\aelevenxeight}{ }

\renewcommand{\aelevenxnine}{ }

\renewcommand{\aelevenxten}{ }

\renewcommand{\aelevenxeleven}{ }

\renewcommand{\aelevenxtwelve}{ }



\renewcommand{\atwelvexone}{ }

\renewcommand{\atwelvextwo}{ }

\renewcommand{\atwelvexthree}{ }

\renewcommand{\atwelvexfour}{ }

\renewcommand{\atwelvexfive}{ }

\renewcommand{\atwelvexsix}{ }

\renewcommand{\atwelvexseven}{ }

\renewcommand{\atwelvexeight}{ }

\renewcommand{\atwelvexnine}{ }

\renewcommand{\atwelvexten}{ }

\renewcommand{\atwelvexeleven}{ }

\renewcommand{\atwelvextwelve}{ }



\renewcommand{\athirteenxone}{ }

\renewcommand{\athirteenxtwo}{ }

\renewcommand{\athirteenxthree}{ }

\renewcommand{\athirteenxfour}{ }

\renewcommand{\athirteenxfive}{ }

\renewcommand{\athirteenxsix}{ }

\renewcommand{\athirteenxseven}{ }

\renewcommand{\athirteenxeight}{ }

\renewcommand{\athirteenxnine}{ }

\renewcommand{\athirteenxten}{ }

\renewcommand{\athirteenxeleven}{ }

\renewcommand{\athirteenxtwelve}{ }



\renewcommand{\afourteenxone}{ }

\renewcommand{\afourteenxtwo}{ }

\renewcommand{\afourteenxthree}{ }

\renewcommand{\afourteenxfour}{ }

\renewcommand{\afourteenxfive}{ }

\renewcommand{\afourteenxsix}{ }

\renewcommand{\afourteenxseven}{ }

\renewcommand{\afourteenxeight}{ }

\renewcommand{\afourteenxnine}{ }

\renewcommand{\afourteenxten}{ }

\renewcommand{\afourteenxeleven}{ }

\renewcommand{\afourteenxtwelve}{ }


\renewcommand{\afifteenxone}{ }

\renewcommand{\afifteenxtwo}{ }

\renewcommand{\afifteenxthree}{ }

\renewcommand{\afifteenxfour}{ }

\renewcommand{\afifteenxfive}{ }

\renewcommand{\afifteenxsix}{ }

\renewcommand{\afifteenxseven}{ }

\renewcommand{\afifteenxeight}{ }

\renewcommand{\afifteenxnine}{ }

\renewcommand{\afifteenxten}{ }

\renewcommand{\afifteenxeleven}{ }

\renewcommand{\afifteenxtwelve}{ }


\renewcommand{\asixteenxone}{ }

\renewcommand{\asixteenxtwo}{ }

\renewcommand{\asixteenxthree}{ }

\renewcommand{\asixteenxfour}{ }

\renewcommand{\asixteenxfive}{ }

\renewcommand{\asixteenxsix}{ }

\renewcommand{\asixteenxseven}{ }

\renewcommand{\asixteenxeight}{ }

\renewcommand{\asixteenxnine}{ }

\renewcommand{\asixteenxten}{ }

\renewcommand{\asixteenxeleven}{ }

\renewcommand{\asixteenxtwelve}{ }



\renewcommand{\aseventeenxone}{ }

\renewcommand{\aseventeenxtwo}{ }

\renewcommand{\aseventeenxthree}{ }

\renewcommand{\aseventeenxfour}{ }

\renewcommand{\aseventeenxfive}{ }

\renewcommand{\aseventeenxsix}{ }

\renewcommand{\aseventeenxseven}{ }

\renewcommand{\aseventeenxeight}{ }

\renewcommand{\aseventeenxnine}{ }

\renewcommand{\aseventeenxten}{ }

\renewcommand{\aseventeenxeleven}{ }

\renewcommand{\aseventeenxtwelve}{ }





\renewcommand{\aeightteenxone}{ }

\renewcommand{\aeightteenxtwo}{ }

\renewcommand{\aeightteenxthree}{ }

\renewcommand{\aeightteenxfour}{ }

\renewcommand{\aeightteenxfive}{ }

\renewcommand{\aeightteenxsix}{ }

\renewcommand{\aeightteenxseven}{ }

\renewcommand{\aeightteenxeight}{ }

\renewcommand{\aeightteenxnine}{ }

\renewcommand{\aeightteenxten}{ }

\renewcommand{\aeightteenxeleven}{ }

\renewcommand{\aeightteenxtwelve}{ }



\tableleadthreexfour My fruit salad today consists of the fruits mentioned, with portions \mathl{\begin{pmatrix} 3 \\2\\ 7\\6 \end{pmatrix}}{} \extrabracket {meaning $300$ grams of apple, and so on} {} {.} From this, one can compute the total amount of vitamin C, of calcium, and of magnesium in the fruit salad by multiplying, for each fruit, its portion with its specific content, and summing up over all fruits. The total amount of vitamin C in the fruit salad is thus
\mathrelationchaindisplay
{\relationchain
{ 12 \cdot 3 + 53 \cdot 2 + 4 \cdot 7 + 9 \cdot 6 }
{ =} { 224 }
{ } { }
{ } { }
{ } { }
} {}{}{.} This operation is an example of how a matrix acts. The table immediately yields the $3 \times 4$-matrix \mathl{\begin{pmatrix} 12 & 53 & 4 & 9 \\ 7 & 40 & 12 & 5 \\ 6 & 10 & 8 & 27 \end{pmatrix}}{,} and the above calculation is realized by the matrix multiplication
\mathrelationchaindisplay
{\relationchain
{ \begin{pmatrix} 12 & 53 & 4 & 9 \\ 7 & 40 & 12 & 5 \\ 6 & 10 & 8 & 27 \end{pmatrix} \begin{pmatrix} 3 \\2\\ 7\\6 \end{pmatrix} }
{ =} { \begin{pmatrix} 224 \\215\\ 256 \end{pmatrix} }
{ } { }
{ } { }
{ } { }
} {}{}{.}

One can also ask for a fruit salad that has prescribed amounts of vitamin C, calcium, and magnesium, say \mathl{\begin{pmatrix} 180 \\110\\ 140 \end{pmatrix}}{.} This leads to the system of linear equations, in matrix form,
\mathrelationchaindisplay
{\relationchain
{ \begin{pmatrix} 12 & 53 & 4 & 9 \\ 7 & 40 & 12 & 5 \\ 6 & 10 & 8 & 27 \end{pmatrix} \begin{pmatrix} x_1 \\x_2\\ x_3\\x_4 \end{pmatrix} }
{ =} { \begin{pmatrix} 180 \\110\\ 140 \end{pmatrix} }
{ } { }
{ } { }
{ } { }
} {}{}{.}

}






\subtitle {Matrices}

A system of linear equations can easily be written down using a matrix. This allows us to perform the manipulations that lead to the solution of such a system without having to write down the variables. Matrices are quite simple objects; however, they can represent quite different mathematical objects \extrabracket {e.g., a family of column vectors, a family of row vectors, a linear mapping, a table of physical interactions, a relation, a linear vector field, etc.} {} {,} which one has to keep in mind in order to avoid wrong conclusions.




\inputdefinition
{ }
{

Let $K$ denote a field, and let \mathcor {} {I} {and} {J} {} denote index sets. An \mathl{I\times J}{-}\definitionword {matrix}{} is a mapping
\mathdisp {I \times J \longrightarrow K , (i,j) \longmapsto a_{ij}} { . }
If
\mathrelationchain
{\relationchain
{I }
{ = }{\{1 , \ldots , m\} }
{ }{}
{ }{}
{ }{}
} {}{}{} and
\mathrelationchain
{\relationchain
{ J }
{ = }{\{1 , \ldots , n\} }
{ }{}
{ }{}
{ }{}
} {}{}{,} then we talk about an \mathl{m \times n}{-}\definitionword {matrix}{.} In this case, the matrix is usually written as
\mathdisp {\begin{pmatrix} a_{11 } & a_{1 2} & \ldots & a_{1 n } \\ a_{21 } & a_{2 2} & \ldots & a_{2 n } \\

\vdots & \vdots & \ddots & \vdots \\ a_{ m 1 } & a_{ m 2 } & \ldots & a_{ m n } \end{pmatrix}} { . }

}

We will usually restrict ourselves to this latter situation.


For every
\mathrelationchain
{\relationchain
{ i }
{ \in }{ I }
{ }{ }
{ }{ }
{ }{ }
} {}{}{,} the family
\mathcond {a_{ij}} {,}
{j \in J} {}
{} {} {} {,} is called the $i$-th \keyword {row} {} of the matrix, which is usually written as a \keyword {row tuple} {} \extrabracket {or \keyword {row vector} {}} {} {}
\mathdisp {(a_{i1}, a_{i2} , \ldots , a_{in})} { . }
For every
\mathrelationchain
{\relationchain
{ j }
{ \in }{ J }
{ }{ }
{ }{ }
{ }{ }
} {}{}{,} the family
\mathcond {a_{ij}} {,}
{i \in I} {}
{} {} {} {,} is called the $j$-th \keyword {column} {} of the matrix, usually written as a column tuple \extrabracket {or column vector} {} {}
\mathdisp {\begin{pmatrix} a_{1j} \\a_{2j}\\ \vdots\\a_{mj} \end{pmatrix}} { . }
The elements \mathl{a_{ij}}{} are called the \keyword {entries} {} of the matrix. For \mathl{a_{ij}}{,} the number $i$ is called the \keyword {row index} {,} and $j$ is called the \keyword {column index} {} of the entry. The position of the entry \mathl{a_{ij}}{} is where the $i$-th row meets the $j$-th column. A matrix with
\mathrelationchain
{\relationchain
{m }
{ = }{n }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} is called a \keyword {square matrix} {.} An \mathl{m \times 1}{-}matrix is simply a column tuple \extrabracket {or column vector} {} {} of length $m$, and a \mathl{1 \times n}{-}matrix is simply a row tuple \extrabracket {or row vector} {} {} of length $n$. The set of all matrices with $m$ rows and $n$ columns \extrabracket {and with entries in $K$} {} {} is denoted by \mathl{\operatorname{Mat}_{ m \times n } (K)}{;} in case
\mathrelationchain
{\relationchain
{m }
{ = }{n }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} we also write \mathl{\operatorname{Mat}_{ n } (K)}{.}


Two matrices
\mathrelationchain
{\relationchain
{A,B }
{ \in }{ \operatorname{Mat}_{ m \times n } (K) }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} are added by adding corresponding entries. The multiplication of a matrix $A$ with an element
\mathrelationchain
{\relationchain
{ r }
{ \in }{ K }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} \extrabracket {a \keyword {scalar} {}} {} {} is also defined entrywise, so
\mathrelationchaindisplayhandleft
{\relationchaindisplayhandleft
{ \begin{pmatrix} a_{11 } & a_{1 2} & \ldots & a_{1 n } \\ a_{21 } & a_{2 2} & \ldots & a_{2 n } \\ \vdots & \vdots & \ddots & \vdots \\ a_{ m 1 } & a_{ m 2 } & \ldots & a_{ m n } \end{pmatrix} + \begin{pmatrix} b_{11 } & b_{1 2} & \ldots & b_{1 n } \\ b_{21 } & b_{2 2} & \ldots & b_{2 n } \\ \vdots & \vdots & \ddots & \vdots \\ b_{ m 1 } & b_{ m 2 } & \ldots & b_{ m n } \end{pmatrix} }
{ =} { \begin{pmatrix} a_{11 } +b_{11} & a_{1 2} +b_{12} & \ldots & a_{1 n } +b_{1n} \\ a_{21 } +b_{21} & a_{2 2} +b_{22} & \ldots & a_{2 n } +b_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{ m 1 } +b_{m1} & a_{ m 2 } +b_{m2} & \ldots & a_{ m n } +b_{mn} \end{pmatrix} }
{ } { }
{ } { }
{ } { }
} {}{}{} and
\mathrelationchaindisplay
{\relationchain
{ r \begin{pmatrix} a_{11 } & a_{1 2} & \ldots & a_{1 n } \\ a_{21 } & a_{2 2} & \ldots & a_{2 n } \\ \vdots & \vdots & \ddots & \vdots \\ a_{ m 1 } & a_{ m 2 } & \ldots & a_{ m n } \end{pmatrix} }
{ =} { \begin{pmatrix} ra_{11 } & ra_{1 2} & \ldots & ra_{1 n } \\ ra_{21 } & ra_{2 2} & \ldots & ra_{2 n } \\ \vdots & \vdots & \ddots & \vdots \\ ra_{ m 1 } & ra_{ m 2 } & \ldots & ra_{ m n } \end{pmatrix} }
{ } { }
{ } { }
{ } { }
} {}{}{.}
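For instance, over the rational numbers, these entrywise operations yield
\mathdisp {\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} + \begin{pmatrix} 5 & -1 \\ 0 & 2 \end{pmatrix} = \begin{pmatrix} 6 & 1 \\ 3 & 6 \end{pmatrix} \text{ and } 3 \cdot \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} = \begin{pmatrix} 3 & 6 \\ 9 & 12 \end{pmatrix}} { . }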

The multiplication of matrices is defined in the following way:


\inputdefinition
{ }
{

Let $K$ denote a field, and let $A$ denote an $m \times n$-matrix and $B$ an $n\times p$-matrix over $K$. Then the \definitionword {matrix product}{}
\mathdisp {AB} { }
is the \mathl{m\times p}{-}matrix whose entries are given by
\mathrelationchaindisplay
{\relationchain
{ c_{ik} }
{ =} { \sum_{j = 1}^n a_{ij} b_{jk} }
{ } { }
{ } { }
{ } { }
}

{}{}{.}

}


Matrix multiplication is possible only when the number of columns of the left-hand matrix equals the number of rows of the right-hand matrix. Just think of the scheme
\mathrelationchaindisplay
{\relationchain
{ (R O W R O W ) \begin{pmatrix} C \\O\\ L\\U\\ M\\ N \end{pmatrix} }
{ =} { (RC+O^2+WL+RU+OM+WN) }
{ } { }
{ } { }
{ } { }
} {}{}{,} the result is a $1 \times 1$-matrix. In particular, one can multiply an \mathl{m \times n}{-}matrix $A$ with a column vector of length $n$ \extrabracket {the vector on the right} {} {,} and the result is a column vector of length $m$. The two matrices can also be multiplied with the roles interchanged,
\mathrelationchaindisplay
{\relationchain
{ \begin{pmatrix} C \\O\\ L\\U\\ M\\ N \end{pmatrix} (R O W R O W) }
{ =} { \begin{pmatrix} CR & CO & CW & CR & CO & CW \\ OR & O^2 & OW & OR & O^2 & OW \\ LR & LO & LW & LR & LO & LW \\ UR & UO & UW & UR & UO & UW \\ MR & MO & MW & MR & MO & MW \\ NR & NO & NW & NR & NO & NW \end{pmatrix} }
{ } { }
{ } { }
{ } { }
} {}{}{.}
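For instance, for two $2 \times 2$-matrices, the definition gives
\mathdisp {\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} \begin{pmatrix} 5 & 6 \\ 7 & 8 \end{pmatrix} = \begin{pmatrix} 1 \cdot 5 + 2 \cdot 7 & 1 \cdot 6 + 2 \cdot 8 \\ 3 \cdot 5 + 4 \cdot 7 & 3 \cdot 6 + 4 \cdot 8 \end{pmatrix} = \begin{pmatrix} 19 & 22 \\ 43 & 50 \end{pmatrix}} { . }
Multiplying the same two matrices in the other order yields \mathl{\begin{pmatrix} 23 & 34 \\ 31 & 46 \end{pmatrix}}{,} so matrix multiplication is, in general, not commutative.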




\inputdefinition
{ }
{

An $n \times n$-matrix of the form
\mathdisp {\begin{pmatrix} d_{11} & 0 & \cdots & \cdots & 0 \\ 0 & d_{22} & 0 & \cdots & 0 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & d_{ n-1\, n-1} & 0 \\ 0 & \cdots & \cdots & 0 & d_{ n n} \end{pmatrix}} { }

is called a \definitionword {diagonal matrix}{.}

}




\inputdefinition
{ }
{

The $n \times n$-matrix
\mathrelationchaindisplay
{\relationchain
{ E_{ n } }
{ \defeq} { \begin{pmatrix} 1 & 0 & \cdots & \cdots & 0 \\ 0 & 1 & 0 & \cdots & 0 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & 1 & 0 \\ 0 & \cdots & \cdots & 0 & 1 \end{pmatrix} }
{ } { }
{ } { }
{ } { }
} {}{}{}

is called the \definitionword {identity matrix}{.}

}

The identity matrix $E_n$ has the property
\mathrelationchain
{\relationchain
{ E_n M }
{ = }{ M }
{ = }{ M E_n }
{ }{ }
{ }{ }
} {}{}{,} for an arbitrary \mathl{n\times n}{-}matrix $M$.
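For $n = 2$, this can be checked directly:
\mathdisp {\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{pmatrix} 1 \cdot a + 0 \cdot c & 1 \cdot b + 0 \cdot d \\ 0 \cdot a + 1 \cdot c & 0 \cdot b + 1 \cdot d \end{pmatrix} = \begin{pmatrix} a & b \\ c & d \end{pmatrix}} { , }
and similarly for the product in the other order.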




\inputremark {}
{

If we multiply an $m\times n$-matrix
\mathrelationchain
{\relationchain
{A }
{ = }{(a_{ij})_{ij} }
{ }{ }
{ }{ }
{ }{}
} {}{}{} with a column vector
\mathrelationchain
{\relationchain
{x }
{ = }{\begin{pmatrix} x_{1 } \\ x_{2 }\\ \vdots\\ x_{ n } \end{pmatrix} }
{ }{ }
{ }{ }
{ }{ }
} {}{}{,} then we get
\mathrelationchaindisplay
{\relationchain
{ A x }
{ =} { \begin{pmatrix} a_{11 } & a_{1 2} & \ldots & a_{1 n } \\ a_{21 } & a_{2 2} & \ldots & a_{2 n } \\ \vdots & \vdots & \ddots & \vdots \\ a_{ m 1 } & a_{ m 2 } & \ldots & a_{ m n } \end{pmatrix} \begin{pmatrix} x_{1 } \\ x_{2 }\\ \vdots\\ x_{ n } \end{pmatrix} }
{ =} { \begin{pmatrix} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n} x_n \\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n} x_n\\ \vdots\\ a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn} x_n \end{pmatrix} }
{ } { }
{ } {}
} {}{}{.} Hence, an inhomogeneous system of linear equations with \keyword {disturbance vector} {} $\begin{pmatrix} c_{1 } \\ c_{2 }\\ \vdots\\ c_{ m } \end{pmatrix}$ can be written briefly as
\mathrelationchaindisplay
{\relationchain
{Ax }
{ =} {c }
{ } { }
{ } { }
{ } { }
} {}{}{.} The manipulations on the equations that do not change the solution set can then be replaced by corresponding manipulations on the rows of the matrix; it is not necessary to write down the variables.
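For instance, the system
\mathdisp {\begin{matrix} 2x & +3y & = & 5 \\ \, x & -y & = & 1 \, \end{matrix}} { }
is written in this form as
\mathdisp {\begin{pmatrix} 2 & 3 \\ 1 & -1 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 5 \\ 1 \end{pmatrix}} { . }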

}






\subtitle {Vector spaces}






\image{ \begin{center}
\includegraphics[width=5.5cm]{\imageinclude {Vector Addition.svg} }
\end{center}
\imagetext {The addition of two arrows $a$ and $b$, a typical example of vectors.} }

\imagelicense { Vector Addition.svg } {} {Booyabazooka} {Commons} {PD} {}

The central concept of linear algebra is that of a vector space.


\inputdefinition
{ }
{

Let $K$ denote a field, and $V$ a set with a distinguished element
\mathrelationchain
{\relationchain
{0 }
{ \in }{V }
{ }{ }
{ }{ }
{ }{ }
} {}{}{,} and with two mappings
\mathdisp {+ \colon V \times V \longrightarrow V , (u,v) \longmapsto u+v} { , }
and
\mathdisp {\cdot \colon K \times V \longrightarrow V , (s,v) \longmapsto s v = s \cdot v} { . }
Then $V$ is called a \definitionwordpremath {K}{ vector space }{} \extrabracket {or a vector space over $K$} {} {,} if the following axioms hold \extrabracket {where \mathcor {} {r,s \in K} {and} {u,v,w \in V} {} are arbitrary} {} {.}

\enumerationeight {
\mathrelationchain
{\relationchain
{ u+v }
{ = }{v+u }
{ }{ }
{ }{ }
{ }{ }
} {}{}{,} } {
\mathrelationchain
{\relationchain
{(u+v)+w }
{ = }{ u +(v+w) }
{ }{ }
{ }{ }
{ }{ }
} {}{}{,} } {
\mathrelationchain
{\relationchain
{ v+0 }
{ = }{v }
{ }{ }
{ }{ }
{ }{ }
} {}{}{,} } {For every $v$, there exists a $z$ such that
\mathrelationchain
{\relationchain
{v+z }
{ = }{ 0 }
{ }{ }
{ }{ }
{ }{ }
} {}{}{,} } {
\mathrelationchain
{\relationchain
{1 \cdot u }
{ = }{ u }
{ }{ }
{ }{ }
{ }{ }
} {}{}{,} } {
\mathrelationchain
{\relationchain
{ r(su) }
{ = }{ (rs) u }
{ }{ }
{ }{ }
{ }{ }
} {}{}{,} } {
\mathrelationchain
{\relationchain
{ r(u+v) }
{ = }{ru + rv }
{ }{ }
{ }{ }
{ }{ }
} {}{}{,} } {
\mathrelationchain
{\relationchain
{ (r+s) u }
{ = }{ru + su }
{ }{ }
{ }{ }
{ }{ }
} {}{}{.}

}

}

The binary operation on $V$ is called \keyword {(vector) addition} {,} and the operation $K \times V \rightarrow V$ is called \keyword {scalar multiplication} {.} The elements of a vector space are called \keyword {vectors} {,} and the elements
\mathrelationchain
{\relationchain
{r }
{ \in }{K }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} are called \keyword {scalars} {.} The null element
\mathrelationchain
{\relationchain
{ 0 }
{ \in }{V }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} is called the \keyword {null vector} {,} and for
\mathrelationchain
{\relationchain
{v }
{ \in }{V }
{ }{ }
{ }{ }
{ }{ }
} {}{}{,} the inverse element is called the \keyword {negative} {} of $v$, denoted by $-v$. The field which occurs in the definition of a vector space is called the \keyword {base field} {.} All the concepts of linear algebra refer to such a base field. In case
\mathrelationchain
{\relationchain
{K }
{ = }{\R }
{ }{ }
{ }{ }
{ }{ }
} {}{}{,} we talk about a \keyword {real vector space} {,} and in case
\mathrelationchain
{\relationchain
{K }
{ = }{ \Complex }
{ }{ }
{ }{ }
{ }{ }
} {}{}{,} we talk about a \keyword {complex vector space} {.} For real and complex vector spaces, there exist further structures such as length, angle, and inner product. But first, we develop the algebraic theory of vector spaces over an arbitrary field.






\image{ \begin{center}
\includegraphics[width=5.5cm]{\imageinclude {Vector_space_illust.svg} }
\end{center}
\imagetext {} }

\imagelicense { Vector space illust.svg } {} {Oleg Alexandrov} {Commons} {PD} {}





\inputexample{}
{

Let $K$ denote a field, and let
\mathrelationchain
{\relationchain
{ n }
{ \in }{ \N_+ }
{ }{ }
{ }{ }
{ }{ }
} {}{}{.} Then the product set
\mathrelationchaindisplay
{\relationchain
{ K^n }
{ =} { \underbrace{K \times \cdots \times K }_{n\text{-times} } }
{ =} { { \left\{ (x_1 , \ldots , x_{ n }) \mid x_i \in K \right\} } }
{ } {}
{ } {}
} {}{}{,} with componentwise addition and with scalar multiplication given by
\mathrelationchaindisplay
{\relationchain
{ s (x_1 , \ldots , x_{ n }) }
{ =} { (s x_1 , \ldots , s x_{ n }) }
{ } { }
{ } { }
{ } { }
} {}{}{,} is a vector space. This space is called the $n$-dimensional \keyword {standard space} {.} In particular,
\mathrelationchain
{\relationchain
{K^1 }
{ = }{K }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} is a vector space.

}
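For instance, in \mathl{\R^3}{,} we have
\mathdisp {(1,2,3) + (4,0,-1) = (5,2,2) \text{ and } 2 \cdot (1,2,3) = (2,4,6)} { , }
illustrating the componentwise addition and the scalar multiplication.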

The null space $0$, consisting of just one element $0$, is a vector space. It might be considered as
\mathrelationchain
{\relationchain
{K^0 }
{ = }{0 }
{ }{ }
{ }{ }
{ }{ }
} {}{}{.}

The vectors in the standard space $K^n$ can be written as row vectors
\mathdisp {\left( a_1 , \, a_2 , \, \ldots , \, a_n \right)} { }
or as column vectors
\mathdisp {\begin{pmatrix} a_1 \\a_2\\ \vdots\\a_n \end{pmatrix}} { . }
The vector
\mathrelationchaindisplay
{\relationchain
{ e_i }
{ \defeq} { \begin{pmatrix} 0 \\ \vdots\\ 0\\1\\ 0\\ \vdots\\ 0 \end{pmatrix} }
{ } { }
{ } { }
{ } { }
} {}{}{,} where the $1$ is at the $i$-th position, is called the $i$-th \keyword {standard vector} {.}




\inputexample{ }
{

The complex numbers $\Complex$ form a field, and therefore they also form a vector space over the field $\Complex$ itself. However, the set of complex numbers equals $\R^2$ as an additive group. The multiplication of a complex number \mathl{a+b { \mathrm i}}{} with a real number
\mathrelationchain
{\relationchain
{ s }
{ = }{ (s,0) }
{ }{ }
{ }{ }
{ }{}
} {}{}{} is done componentwise; hence, this multiplication coincides with the scalar multiplication on $\R^2$. Therefore, the set of complex numbers is also a real vector space.

}




\inputexample{}
{

For a field $K$, and given natural numbers \mathl{m,n}{,} the set
\mathdisp {\operatorname{Mat}_{ m \times n } (K)} { }
of all \mathl{m \times n}{-}matrices, endowed with componentwise addition and componentwise scalar multiplication, is a $K$-vector space. The null element in this vector space is the \keyword {null matrix} {}
\mathrelationchaindisplay
{\relationchain
{0 }
{ =} { \begin{pmatrix} 0 & \ldots & 0 \\ \vdots & \ddots & \vdots \\0 & \ldots & 0 \end{pmatrix} }
{ } { }
{ } { }
{ } { }
} {}{}{.}

}




\inputexample{}
{

Let
\mathrelationchain
{\relationchain
{ R }
{ = }{ K[X] }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} be the polynomial ring in one variable over the field $K$, consisting of all polynomials, that is, expressions of the form
\mathdisp {a_nX^n+a_{n-1}X^{n-1} + \cdots + a_2X^2+a_1X+a_0} { , }
with
\mathrelationchain
{\relationchain
{ a_i }
{ \in }{ K }
{ }{ }
{ }{ }
{ }{ }
} {}{}{.} Using componentwise addition and componentwise multiplication with a scalar
\mathrelationchain
{\relationchain
{s }
{ \in }{ K }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} \extrabracket {this is also multiplication with the constant polynomial $s$} {} {,} the polynomial ring is a $K$-vector space.

}
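For instance, in \mathl{\R[X]}{,} we have
\mathdisp {{ \left( 2X^2+3X+1 \right) } + { \left( X^2-X \right) } = 3X^2+2X+1 \text{ and } 5 \cdot (X+2) = 5X+10} { , }
in accordance with the componentwise operations.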




\inputfactproof
{Vector space/Simple properties/Fact}
{Lemma}
{}
{

\factsituation {Let $K$ be a field, and let $V$ be a $K$-vector space.}
\factsegue {Then the following properties hold \extrabracket {for \mathcor {} {v \in V} {and} {s \in K} {}} {} {.}}
\factconclusion {\enumerationfour {We have
\mathrelationchain
{\relationchain
{ 0v }
{ = }{ 0 }
{ }{ }
{ }{ }
{ }{ }
} {}{}{.}

} {We have
\mathrelationchain
{\relationchain
{ s 0 }
{ = }{ 0 }
{ }{ }
{ }{ }
{ }{ }
} {}{}{.} } {We have
\mathrelationchain
{\relationchain
{ (-1) v }
{ = }{ -v }
{ }{ }
{ }{ }
{ }{ }
} {}{}{.} } {If
\mathrelationchain
{\relationchain
{ s }
{ \neq }{ 0 }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} and
\mathrelationchain
{\relationchain
{ v }
{ \neq }{ 0 }
{ }{ }
{ }{ }
{ }{ }
} {}{}{,} then
\mathrelationchain
{\relationchain
{ s v }
{ \neq }{ 0 }
{ }{ }
{ }{ }
{ }{ }
} {}{}{.} }}
\factextra {}

}
{See Exercise 22.34 .}
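As a hint for the first property: because of
\mathdisp {0 v = (0+0) v = 0v + 0v} { , }
adding the negative of \mathl{0v}{} on both sides yields \mathl{0v = 0}{.} The remaining properties can be verified in a similar way from the axioms.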






\subtitle {Linear subspaces}




\inputdefinition
{ }
{

Let $K$ be a field, and let $V$ be a $K$-vector space. A subset
\mathrelationchain
{\relationchain
{ U }
{ \subseteq }{ V }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} is called a \definitionword {linear subspace}{} if the following properties hold. \enumerationthree {
\mathrelationchain
{\relationchain
{ 0 }
{ \in }{ U }
{ }{ }
{ }{ }
{ }{ }
} {}{}{.} } {If
\mathrelationchain
{\relationchain
{ u,v }
{ \in }{U }
{ }{ }
{ }{ }
{ }{ }
} {}{}{,} then also
\mathrelationchain
{\relationchain
{ u+v }
{ \in }{U }
{ }{ }
{ }{ }
{ }{ }
} {}{}{.} } {If
\mathrelationchain
{\relationchain
{ u }
{ \in }{ U }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} and
\mathrelationchain
{\relationchain
{ s }
{ \in }{ K }
{ }{ }
{ }{ }
{ }{ }
} {}{}{,} then also
\mathrelationchain
{\relationchain
{ s u }
{ \in }{ U }
{ }{ }
{ }{ }
{ }{ }
} {}{}{} holds.

}

}

Addition and scalar multiplication can be restricted to such a linear subspace. Hence, the linear subspace is itself a vector space, see Exercise 22.20 . The simplest linear subspaces in a vector space $V$ are the null space $0$ and the whole vector space $V$.
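For instance, in \mathl{K^2}{,} the subset
\mathdisp {U = { \left\{ (x,0) \mid x \in K \right\} }} { }
is a linear subspace: it contains \mathl{(0,0)}{,} the sum \mathl{(x,0)+(x',0) = (x+x',0)}{} belongs again to $U$, and also \mathl{s (x,0) = (sx,0)}{} belongs to $U$ for every \mathl{s \in K}{.} In contrast, the subset \mathl{{ \left\{ (x,1) \mid x \in K \right\} }}{} is not a linear subspace, since it does not contain the null vector.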




\inputfactproof
{System of linear equations/Set of variables/Solution space is vector space/Fact}
{Lemma}
{}
{

\factsituation {Let $K$ be a field, and let
\mathdisp {\begin{matrix} a _{ 1 1 } x _1 + a _{ 1 2 } x _2 + \cdots + a _{ 1 n } x _{ n } & = & 0 \\ a _{ 2 1 } x _1 + a _{ 2 2 } x _2 + \cdots + a _{ 2 n } x _{ n } & = & 0 \\ \vdots & \vdots & \vdots \\ a _{ m 1 } x _1 + a _{ m 2 } x _2 + \cdots + a _{ m n } x _{ n } & = & 0 \end{matrix}} { }
be a homogeneous system of linear equations over $K$.}
\factconclusion {Then the set of all solutions to the system is a linear subspace of the standard space $K^n$.}
\factextra {}

}
{See Exercise 22.22 .}


Therefore, we talk about the \keyword {solution space} {} of the linear system. In particular, the sum of two solutions of a homogeneous system of linear equations is again a solution. The solution set of an inhomogeneous linear system is not a vector space. However, one can add a solution of the corresponding homogeneous system to a solution of the inhomogeneous system, and obtain again a solution of the inhomogeneous system.




\inputexample{}
{

We take a look at the homogeneous version of Example 21.11 , so we consider the homogeneous linear system
\mathdisp {\begin{matrix} 2x & +5y & +2z & & -v & = & 0 \\ \, 3x & -4y & & +u & +2v & = & 0 \\ \, 4x & & -2z & +2u & & = & 0 \, \end{matrix}} { }
over $\R$. Due to Lemma 22.14 , the solution set $L$ is a linear subspace of $\R^5$. We have described it explicitly in Example 21.11 as
\mathdisp {{ \left\{ u { \left(- { \frac{ 1 }{ 3 } }, 0 , { \frac{ 1 }{ 3 } } ,1,0\right) } + v { \left(- { \frac{ 2 }{ 13 } }, { \frac{ 5 }{ 13 } }, -{ \frac{ 4 }{ 13 } },0,1\right) } \mid u,v \in \R \right\} }} { . }
This description also shows that the solution set is a vector space. Moreover, with this description, it is clear that $L$ is in bijection with $\R^2$, and this bijection respects the addition and also the scalar multiplication \extrabracket {the solution set $L'$ of the inhomogeneous system is also in bijection with $\R^2$, but there is no reasonable addition or scalar multiplication on $L'$} {} {.} However, this bijection depends heavily on the chosen \quotationshort{basic solutions}{} \mathcor {} {{ \left(- { \frac{ 1 }{ 3 } }, 0 , { \frac{ 1 }{ 3 } } ,1,0\right) }} {and} {{ \left(- { \frac{ 2 }{ 13 } }, { \frac{ 5 }{ 13 } }, -{ \frac{ 4 }{ 13 } },0,1\right) }} {,} which depend on the order of elimination. There are several equally good basic solutions for $L$.

}

This example also shows the following: the solution space of a homogeneous linear system over $K$ is \quotationshort{in a natural way}{,} that is, independently of any choices, a linear subspace of $K^n$ \extrabracket {where $n$ is the number of variables} {} {.} For this solution space, there always exists a \quotationshort{linear bijection}{} (an \quotationshort{isomorphism}{}) to some \mathl{K^{d}}{} \extrabracket {\mathrelationchainb
{\relationchainb
{d }
{ \leq }{n }
{ }{ }
{ }{ }
{ }{ }
} {}{}{}} {} {,} but there is no natural choice for such a bijection. This is one of the main reasons to work with abstract vector spaces, instead of just $K^n$.