# Mathematics for Applied Sciences (Osnabrück 2023-2024)/Part I/Lecture 2

## Quantifiers

We consider again the propositions

Martians are green
and
I eat a hat,

and have a closer look at their structure (we replace my hat by a hat). In the first statement, a certain property is assigned to a certain kind of creature, as when we say that cheetahs are fast and sloths are slow. The statement might mean that Martians are "usually" or "almost always" green, or, strictly speaking, that really all Martians are green. In mathematics, one is interested in statements that are true without exceptions (or whose exceptions can be listed explicitly), so we want to understand the statement in the strict sense. It is a universal statement. We have two predicates (each representing a property, an attribute): to be a Martian and to be green. A predicate ${\displaystyle {}P}$ is something that an object, an item, an element may satisfy or not. A predicate alone is not a proposition; with the help of a predicate, there are two different ways to build a proposition. The first is to apply (insert) it to a concrete object ${\displaystyle {}a}$ to get the statement

${\displaystyle P(a),}$

which means that the object ${\displaystyle {}a}$ has the property ${\displaystyle {}P}$, which might be true or not. The second way is to use a quantifier. In this way one can construct the statement that all[1] objects (typically from a given basic set) have the property ${\displaystyle {}P}$, which again might be true or false. This is expressed formally as

${\displaystyle \forall xP(x).}$
The symbol
${\displaystyle \forall }$
is an abbreviation for for all[2] or for every, and does not have any deeper meaning. It is called the universal quantifier. The proposition about the Martians may be expressed by
${\displaystyle \forall x(M(x)\rightarrow G(x)).}$

This means that for all objects without any restriction, the following holds: if it is a Martian, then it is green. For every ${\displaystyle {}x}$ we have an implication inside the brackets.

The second statement can mean that I eat exactly one hat, or at least one hat; the meaning of the indefinite article is not unique. In mathematics, it usually means at least one. Hence, we can paraphrase by saying

There exists a hat which I eat.

This is an existential proposition.[3] A formal representation is

${\displaystyle \exists x(H(x)\wedge E(x)),}$

where ${\displaystyle {}H(x)}$ means that the object ${\displaystyle {}x}$ is a hat and where ${\displaystyle {}E(x)}$ means that ${\displaystyle {}x}$ is eaten by me. One could also write

${\displaystyle \exists x(E(x)\wedge H(x)).}$
The symbol
${\displaystyle \exists }$
is called the existence quantifier (or existential quantifier).

A universal proposition claims that a certain predicate holds for all objects (from a given set). Like all propositions, this might be true or false. A universal proposition is false if and only if there exists at least one object for which the predicate does not hold. Therefore the two quantifiers, the universal quantifier and the existence quantifier, can be expressed by one another with the help of the negation. We have the rules

${\displaystyle \neg (\forall xP(x)){\text{ is equivalent with }}\exists x(\neg P(x)),}$
${\displaystyle \neg (\exists xP(x)){\text{ is equivalent with }}\forall x(\neg P(x)),}$
${\displaystyle \forall xP(x){\text{ is equivalent with }}\neg (\exists x(\neg P(x)))}$

and

${\displaystyle \exists xP(x){\text{ is equivalent with }}\neg (\forall x(\neg P(x))).}$
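Over a finite basic set, these four rules can be checked directly, with `all` playing the role of the universal quantifier and `any` the role of the existence quantifier. A minimal sketch; the domain and the predicate are made-up examples:

```python
# Hypothetical finite domain and predicate, just to illustrate the rules.
domain = range(10)
P = lambda x: x % 2 == 0  # "x is even"

# not (forall x P(x))  is equivalent with  exists x (not P(x))
assert (not all(P(x) for x in domain)) == any(not P(x) for x in domain)

# not (exists x P(x))  is equivalent with  forall x (not P(x))
assert (not any(P(x) for x in domain)) == all(not P(x) for x in domain)

# forall x P(x)  is equivalent with  not (exists x (not P(x)))
assert all(P(x) for x in domain) == (not any(not P(x) for x in domain))

# exists x P(x)  is equivalent with  not (forall x (not P(x)))
assert any(P(x) for x in domain) == (not all(not P(x) for x in domain))
```

Of course, this only checks the rules for one particular domain and predicate; the rules themselves hold for arbitrary predicates.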

Apart from monadic predicates like ${\displaystyle {}P(x)}$ there are also binary and multinary predicates of the form

${\displaystyle P(x,y){\text{ or }}Q(x,y,z){\text{ etc. }}}$

which express a relation between several objects like "is related with“, "is larger than“, "are parents of“ etc. Here one can quantify with respect to several variables, one has expressions like

${\displaystyle \forall x(\exists yP(x,y)),\,\exists x(\forall yP(x,y)),\,\forall x(\exists y(\forall zQ(x,y,z))){\text{ etc. }}}$
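Over a finite set one can also see that the order of the quantifiers matters: ${\displaystyle \forall x(\exists yP(x,y))}$ and ${\displaystyle \exists x(\forall yP(x,y))}$ are in general different statements. A small illustration with an invented binary predicate:

```python
# Order of quantifiers matters. Sample domain and predicate, chosen so
# that the two nested statements get different truth values.
domain = range(5)
P = lambda x, y: x + y == 4  # invented binary predicate

# forall x exists y: for every x there is a partner y with x + y = 4
forall_exists = all(any(P(x, y) for y in domain) for x in domain)

# exists x forall y: some single x works for ALL y -- clearly false here
exists_forall = any(all(P(x, y) for y in domain) for x in domain)

print(forall_exists, exists_forall)  # True False
```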

The name of the variable in a quantified statement is not important; it does not make a difference whether we write ${\displaystyle {}\forall aP(a)}$ or ${\displaystyle {}\forall tP(t)}$. The only thing one has to take into account is to use only names (letters) for variables that are not already used in the given context.

The logic which deals with quantified statements is called predicate logic or quantificational logic. We will not deal with it systematically, as it occurs in mathematics in the form of set theory. Instead of ${\displaystyle {}P(x)}$, expressing that a predicate applies to an object, we usually write ${\displaystyle {}x\in P}$, where ${\displaystyle {}P}$ is the set of all objects which fulfil the property. Multinary predicates occur in mathematics as relations.

## Numbers

Without further justification, we may say that mathematics deals, among other things, with numbers. We work with the following sets, which we assume the students know.

${\displaystyle {}\mathbb {N} =\{0,1,2,\ldots \}\,,}$

the set of natural numbers (including ${\displaystyle {}0}$).

${\displaystyle {}\mathbb {Z} =\{\ldots ,-2,-1,0,1,2,\ldots \}\,,}$

the set of the integers,

${\displaystyle {}\mathbb {Q} ={\left\{a/b\mid a\in \mathbb {Z} ,\,b\in \mathbb {Z} \setminus \{0\}\right\}}\,,}$

the set of the rational numbers and the set of the real numbers ${\displaystyle {}\mathbb {R} }$.

These sets are endowed with their natural operations like addition and multiplication; we will recall their properties soon. We think of the real numbers as points on a line, on which all the described number sets lie. Also, one can think of ${\displaystyle {}\mathbb {R} }$ as the set of all sequences of digits in the decimal system (finitely many digits before the point, maybe infinitely many digits after the point). During the lecture we will encounter all the important properties of the real numbers, the so-called axioms of the real numbers, from which we can deduce all other properties in a logical way. Then we will be able to make our current viewpoint more precise.

## Induction

The natural numbers have the characteristic property that one can reach every natural number starting from ${\displaystyle {}0}$ by just counting step by step (by taking the successor). Therefore, mathematical statements which refer to the natural numbers can be proven with the proof principle of complete induction. The following example is supposed to explain this scheme of argumentation.

## Example

We consider in the plane ${\displaystyle {}E}$ a configuration of ${\displaystyle {}n}$ lines, and we ask ourselves what the maximal number of intersection points of such a configuration might be. It does not make a difference whether we think of the plane as ${\displaystyle {}\mathbb {R} ^{2}}$ (a Cartesian plane with real coordinates), or simply of a plane in the sense of elementary geometry. The only important thing is that two lines either intersect in exactly one point, or they are parallel. If ${\displaystyle {}n}$ is small, it is easy to find the answer.

| ${\displaystyle {}n}$ | ${\displaystyle {}0}$ | ${\displaystyle {}1}$ | ${\displaystyle {}2}$ | ${\displaystyle {}3}$ | ${\displaystyle {}4}$ | ${\displaystyle {}5}$ | ${\displaystyle {}n}$ |
|---|---|---|---|---|---|---|---|
| ${\displaystyle {}S(n)}$ | ${\displaystyle {}0}$ | ${\displaystyle {}0}$ | ${\displaystyle {}1}$ | ${\displaystyle {}3}$ | ${\displaystyle {}6}$ | ${\displaystyle {}?}$ | ${\displaystyle {}?}$ |

But as soon as ${\displaystyle {}n}$ gets a bit larger (${\displaystyle {}n=5,10,\ldots }$?), the answer is not so clear anymore, as it gets very difficult to imagine the situation in a precise way. The imagination becomes just a rough idea of many lines with many intersection points, and it is not possible to draw any precise conclusion from this. A useful approach to the problem is to understand what may happen when we add a new line to a given line configuration, that is, when instead of ${\displaystyle {}n}$ lines we consider ${\displaystyle {}n+1}$ lines. Suppose that, for some reason, we know what the maximal number of intersection points for ${\displaystyle {}n}$ lines is, maybe we even have a formula for it. If we can then understand how many new intersection points we may get by adding a new line, then we know the maximal number of intersection points for ${\displaystyle {}n+1}$ lines.

Now, this passage is indeed easy to understand. The new line can intersect every old line in at most one point, therefore at most ${\displaystyle {}n}$ new intersection points may occur. If we choose the new line in such a way that it is not parallel to any of the given lines (which is possible since there are infinitely many directions), and such that the intersection points of the new line do not coincide with old intersection points (which is possible by taking, if necessary, a parallel line in the direction found), we get exactly ${\displaystyle {}n}$ new intersection points. Hence, we deduce the (preliminary) formula

${\displaystyle {}S(n+1)=1+2+3+\cdots +(n-2)+(n-1)+n\,}$

or

${\displaystyle {}S(n)=1+2+3+\cdots +(n-3)+(n-2)+(n-1)\,,}$

so just the sum of the first ${\displaystyle {}n-1}$ natural numbers.
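The recursion behind this argument, ${\displaystyle {}S(n+1)=S(n)+n}$, can be turned into a short computation that reproduces the table above; the function name `S` is our own choice:

```python
def S(n):
    """Maximal number of intersection points of n lines, computed via the
    recursion from the text: a new line adds at most n new points."""
    if n == 0:
        return 0
    return S(n - 1) + (n - 1)  # n-th line meets the n-1 previous lines

print([S(n) for n in range(6)])  # [0, 0, 1, 3, 6, 10]
```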

In the preceding example we are dealing with a sum where the number of summands is variable. For such a situation, the sum sign (or sigma sign) is appropriate. For given real numbers ${\displaystyle {}a_{1},\ldots ,a_{n}}$ we define

${\displaystyle {}\sum _{k=1}^{n}a_{k}:=a_{1}+a_{2}+\cdots +a_{n-1}+a_{n}\,.}$

In general, the ${\displaystyle {}a_{k}}$ depend in some way (say, by a formula) on ${\displaystyle {}k}$, in the example we just have ${\displaystyle {}a_{k}=k}$, but it could also be something like ${\displaystyle {}a_{k}=2k+1}$ or ${\displaystyle {}a_{k}=k^{2}}$. The ${\displaystyle {}k}$-th summand in the sum is ${\displaystyle {}a_{k}}$, the number ${\displaystyle {}k}$ is called the index of the summand. Accordingly, the product sign is defined by

${\displaystyle {}\prod _{k=1}^{n}a_{k}:=a_{1}\cdot a_{2}\cdots a_{n-1}\cdot a_{n}\,.}$
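Both signs have direct counterparts in, for instance, Python (`sum` and `math.prod`); the choice ${\displaystyle {}a_{k}=2k+1}$ below is just one of the sample formulas mentioned in the text:

```python
import math

# a_k = 2k + 1 for k = 1, ..., 5, i.e. the odd numbers 3, 5, 7, 9, 11
a = [2 * k + 1 for k in range(1, 6)]

total = sum(a)          # corresponds to  sum_{k=1}^{5} a_k
product = math.prod(a)  # corresponds to  prod_{k=1}^{5} a_k
print(total, product)   # 35 10395
```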

## Example

We would like to find an easy formula for the sum of the first ${\displaystyle {}n}$ natural numbers, which equals the maximal number of intersection points of a configuration of ${\displaystyle {}n+1}$ lines. We claim that

${\displaystyle {}\sum _{k=1}^{n}k={\frac {n(n+1)}{2}}\,}$

holds. For small numbers ${\displaystyle {}n}$, this is easy to check just by computing the left-hand and the right-hand side. To prove the identity in general, we try to understand what happens on the left and on the right, if we increase ${\displaystyle {}n}$ to ${\displaystyle {}n+1}$, like we have added in Example 2.1 another line to a line configuration. On the left-hand side, we just have the additional summand ${\displaystyle {}n+1}$. On the right-hand side, we go from ${\displaystyle {}{\frac {n(n+1)}{2}}}$ to ${\displaystyle {}{\frac {(n+1)(n+1+1)}{2}}}$. If we can show that the difference between these fractions is ${\displaystyle {}n+1}$, then the right-hand side behaves like the left-hand side. Then we can conclude: the identity holds for small ${\displaystyle {}n}$, say for ${\displaystyle {}n=1}$. By comparing the differences, it also holds for the next ${\displaystyle {}n}$, so it holds for ${\displaystyle {}n=2}$, then again for the next ${\displaystyle {}n}$ and so on. Since this argument always works, and since one arrives at every natural number by taking the successor again and again, the formula holds for every natural number.
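The identity and the induction step (the difference of the right-hand sides for ${\displaystyle {}n+1}$ and ${\displaystyle {}n}$ is exactly ${\displaystyle {}n+1}$) can be checked numerically for many values of ${\displaystyle {}n}$ at once. This is only evidence, not a proof, but it illustrates the argument:

```python
# Check the claimed identity and the induction step for n = 1, ..., 99.
for n in range(1, 100):
    # left-hand side: sum of the first n natural numbers
    assert sum(range(1, n + 1)) == n * (n + 1) // 2
    # induction step: the right-hand side grows by exactly n + 1
    assert (n + 1) * (n + 2) // 2 - n * (n + 1) // 2 == n + 1
print("identity verified for n = 1, ..., 99")
```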

The following statement gives the foundation for the principle of complete induction.

## Theorem

Suppose that for every natural number ${\displaystyle {}n}$ a statement ${\displaystyle {}A(n)}$ is given. Suppose further that the following conditions are fulfilled.

1. ${\displaystyle {}A(0)}$ is true.
2. For all ${\displaystyle {}n}$ we have: if ${\displaystyle {}A(n)}$ holds, then also ${\displaystyle {}A(n+1)}$ holds.
Then ${\displaystyle {}A(n)}$ holds for all ${\displaystyle {}n}$.

### Proof

Due to the first condition, ${\displaystyle {}A(0)}$ holds. Due to the second condition, we get that also ${\displaystyle {}A(1)}$ holds. Therefore also ${\displaystyle {}A(2)}$ holds. Therefore also ${\displaystyle {}A(3)}$ holds. Because we can move on step by step and reach every natural number, we conclude that the statement ${\displaystyle {}A(n)}$ holds for every natural number ${\displaystyle {}n}$.

${\displaystyle \Box }$

The verification of ${\displaystyle {}A(0)}$ is called the base case, and the conclusion from ${\displaystyle {}A(n)}$ to ${\displaystyle {}A(n+1)}$ is called the induction step. Within the induction step, the validity of ${\displaystyle {}A(n)}$ is called the induction hypothesis. In some situations, the statement ${\displaystyle {}A(n)}$ is only valid (or defined) for ${\displaystyle {}n\geq n_{0}}$ for a certain ${\displaystyle {}n_{0}}$. Then the base case is the statement ${\displaystyle {}A(n_{0})}$ and the induction step has to be done for ${\displaystyle {}n\geq n_{0}}$.

We prove now the equality

${\displaystyle {}\sum _{k=1}^{n}k={\frac {n(n+1)}{2}}\,}$

by induction.

The base case is for ${\displaystyle {}n=1}$, here the sum on the left consists of just one summand, which is ${\displaystyle {}1}$, and so the sum equals ${\displaystyle {}1}$. The right-hand side is ${\displaystyle {}{\frac {1\cdot 2}{2}}=1}$, so the formula holds for ${\displaystyle {}n=1}$.

For the induction step we assume that the formula holds for some ${\displaystyle {}n\geq 1}$. We have then to show that the formula also holds for ${\displaystyle {}n+1}$. Here ${\displaystyle {}n}$ is arbitrary. We have

align}"): {\displaystyle {{}} \begin{align} \sum_{k [[Category:Wikiversity soft redirects|Mathematics for Applied Sciences (Osnabrück 2023-2024)/Part I/Lecture 2]] __NOINDEX__ 1}^{n+1} k & = { \left( \sum_{k [[Category:Wikiversity soft redirects|Mathematics for Applied Sciences (Osnabrück 2023-2024)/Part I/Lecture 2]] __NOINDEX__ 1}^{n} k \right) } + n+1 \\ & = \frac{ n(n+1) }{ 2 } + n+1 \\ & = \frac{ n(n+1) +2(n+1) }{ 2 } \\ & = \frac{ (n+2)(n+1) }{ 2 } . \end{align}

In the second equation, we have used the induction hypothesis. The last term is the right-hand side of the formula for ${\displaystyle {}n+1}$, so the formula is proven.

## Remark

Proofs by induction occur again and again. The condition for this method of proof to be applicable is that we have a scheme of statements which depend on the (variable) natural number ${\displaystyle {}n}$. This natural number ${\displaystyle {}n}$ is called the induction variable; we do induction over the induction variable ${\displaystyle {}n}$. In the statement itself, arbitrary mathematical objects may occur, and the natural number ${\displaystyle {}n}$ can have many different meanings. It can be the exponent of a real number (see Theorem 4.11 or Theorem 5.11), the degree of a polynomial (as in Theorem 6.3), the degree of differentiability (see Exercise 15.17) or the number of vectors (as in Lemma 27.14).

## Prime factorization

As a further example for the principle of induction, we prove the existence of prime factorization for natural numbers.

## Definition

A natural number ${\displaystyle {}n\geq 2}$ is called a prime number if it is only divisible by ${\displaystyle {}1}$ and by ${\displaystyle {}n}$.

## Theorem

Every natural number ${\displaystyle {}n\in \mathbb {N} }$, ${\displaystyle {}n\geq 2}$, has a factorization into prime numbers. That means there exists a representation

${\displaystyle {}n=p_{1}\cdot p_{2}\cdots p_{r}\,}$

with prime numbers ${\displaystyle {}p_{i}}$.

### Proof

We prove the existence by induction over ${\displaystyle {}n}$, and we consider the statement ${\displaystyle {}A(n)}$ saying that every natural number ${\displaystyle {}m}$ with ${\displaystyle {}2\leq m\leq n}$ has a prime factorization. For ${\displaystyle {}n=2}$ we have a prime number. So suppose that ${\displaystyle {}n\geq 2}$ and assume that, by the induction hypothesis, every number ${\displaystyle {}m\leq n}$ has a prime factorization. We have to show that every number ${\displaystyle {}m\leq n+1}$ has a prime factorization. The only new number to consider is ${\displaystyle {}n+1}$. In the proof of this induction step, another important proof scheme occurs, the proof by cases. Here one argues depending on whether an additional property holds or not, and in both cases one has to prove the result.

Here we consider the cases whether ${\displaystyle {}n+1}$ is a prime number or not. If ${\displaystyle {}n+1}$ is a prime number, then we have immediately the prime factorization, just take the number itself. In this case we do not even use the induction hypothesis.

So now we consider the case where ${\displaystyle {}n+1}$ is not prime. This means that there exists a non-trivial decomposition ${\displaystyle {}n+1=ab}$ with smaller numbers ${\displaystyle {}a,b<n+1}$. For these numbers ${\displaystyle {}a}$ and ${\displaystyle {}b}$, there exist, due to the induction hypothesis, factorizations into prime numbers, and we can put these together to gain a prime factorization of ${\displaystyle {}n+1}$.

${\displaystyle \Box }$

It is also true that the prime factorization is unique, but this we have not proved. This statement is called the fundamental theorem of arithmetic.
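The case distinction of the proof (either ${\displaystyle {}n}$ is prime, or ${\displaystyle {}n=ab}$ with smaller factors) translates directly into a recursive procedure. The function name is our own, and the trial division used to find a divisor is the naive method, not an efficient one:

```python
def prime_factorization(n):
    """Return a list of primes whose product is n (for n >= 2),
    following the case distinction of the existence proof."""
    assert n >= 2
    for a in range(2, n):
        if n % a == 0:  # non-trivial divisor found: n = a * (n // a)
            return prime_factorization(a) + prime_factorization(n // a)
    return [n]          # no non-trivial divisor: n itself is prime

print(prime_factorization(60))  # [2, 2, 3, 5]
print(prime_factorization(97))  # [97]
```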

## Remark

Closely related with induction is the principle of recursive definition. Here, one would like to define for every natural number ${\displaystyle {}n}$ a mathematical expression. This can be done by assigning to ${\displaystyle {}0}$ explicitly an expression and by describing how the expression for ${\displaystyle {}n+1}$ can be computed from the expression for ${\displaystyle {}n}$. This rule is called the recursive step. The inductive structure of the natural numbers ensures that for every natural number a unique expression is determined. For example, one can define an expression ${\displaystyle {}F(n)}$ by the initial step

${\displaystyle {}F(0):=7\,}$

and the recursive step

${\displaystyle {}F(n+1):=F(n)\cdot n-n^{2}+3\,.}$
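This recursive definition can be written down almost verbatim as a recursive function; computing a few values shows how the initial step and the recursive step determine everything:

```python
def F(n):
    """The recursively defined expression from the text:
    F(0) = 7 and F(n+1) = F(n)*n - n^2 + 3."""
    if n == 0:
        return 7
    m = n - 1                      # apply the recursive step with n = m + 1
    return F(m) * m - m**2 + 3

print([F(n) for n in range(5)])  # [7, 3, 5, 9, 21]
```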

## Footnotes
1. Other formulations are: every, an arbitrary, any object or element from a given basic set. If this set has a spatial character, then we talk about everywhere; if it is time-like, then we talk about always, ....
2. It is fair to say that the words for all and there exists are the most important words in mathematics.
3. Beside the formulation "there exists" we have the formulations "there is" and "one can find". If the existence of an object is known, then in a mathematical argumentation such an element is "just taken", denoted somehow, and worked with.
