# Mathematics for Applied Sciences (Osnabrück 2023-2024)/Part I/Lecture 7

## Approximation

A basic thought of mathematics is the idea of approximation, which occurs in many contexts and which is important both for mathematics as an auxiliary science for the empirical sciences and for the construction of mathematics as a pure science, in particular in analysis.

The first example for this is measuring, say the length of a line segment or the duration of time. Depending on the context and the aim, there are quite different ideas of what an exact measurement is, and the desired accuracy has an impact on the measuring device to take.

The result of a measurement is given in respect to a unit of measurement by a decimal fraction, that is as a number with finitely many digits after the point. The number of digits after the point indicates the claimed exactness of the measurement. To describe results of a measurement one neither needs irrational numbers nor rational numbers with a periodic decimal expansion.

Let's have a look at meteorology. From measurements at several different measurement stations, one tries to set up the weather forecast for the following days with mathematical models and computer simulation. In order to make better forecasts, one needs more measurement stations.

Let's have a look at approximations as they appear in mathematics. A certain line segment can (at least ideally) be divided into ${\displaystyle {}n}$ parts of the same length and one may be interested in the length of the parts, or in the length of the diagonal in the unit square. These lengths could also be measured, however, mathematics offers better descriptions of these lengths by providing rational numbers and irrational numbers (like ${\displaystyle {}{\sqrt {2}}}$). The determination of a good approximation is then pursued within mathematics. Let us consider the fraction ${\displaystyle {}q={\frac {3}{7}}}$. An approximation of this number with an exactness of nine digits is given by

${\displaystyle {}0.428571428<{\frac {3}{7}}<0.428571429\,.}$

The decimal fractions on the left and on the right are both approximations (estimates) of the true fraction ${\displaystyle {}{\frac {3}{7}}}$ with an error which is smaller than ${\displaystyle {}{\frac {1}{10^{9}}}}$. This is a typical accuracy of a calculator, but depending on the aim one sometimes wants a better accuracy (a smaller error). The computation in this example rests on the division algorithm, and one can go on to achieve any wanted error bound (here, it is an additional aspect that because of the periodicity we can just read off the digits and repeat them and do not have to compute further). The approximation of a given number by a decimal fraction is also called rounding.
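The division algorithm mentioned above can be sketched in a few lines; the function name `decimal_digits` is an illustrative choice, not part of the text.

```python
# Illustrative sketch of the division algorithm: compute the first
# `digits` decimal digits of numerator/denominator, assuming
# 0 <= numerator < denominator.
def decimal_digits(numerator, denominator, digits):
    result = []
    remainder = numerator
    for _ in range(digits):
        remainder *= 10                      # shift to the next decimal place
        result.append(remainder // denominator)
        remainder %= denominator             # carry the remainder forward
    return result

# the first nine digits of 3/7; the block 428571 repeats periodically
print(decimal_digits(3, 7, 9))  # [4, 2, 8, 5, 7, 1, 4, 2, 8]
```

Since the remainders repeat as soon as one recurs, the periodicity of the expansion can be read off directly from this computation.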

In the empirical and in the mathematical situation, we have the following principle of approximation.

Principle of approximation: There does not exist a universal accuracy for an approximation. A good approximation method is not a single approximation, but rather a method to produce for any given wanted accuracy (error, level of exactness, deviation) an approximation within the given accuracy. To increase the accuracy (make the error smaller) one has to increase the effort.

With this principle at the back of one's mind, many difficult concepts like convergent sequence and continuity become comprehensible.

Approximations appear also in the sense that empirical functions, for which a certain sampling is known, shall be described by a mathematically easy function. An example for this is the interpolation theorem. Later, we will also encounter the Taylor formula, which approximates a given function in a small neighborhood of a point by a polynomial. Here also, the mentioned principle of approximation occurs: in order to get a better approximation, one has to increase the degree of the polynomials. In integration theory, the graph of a function is bounded by upper and lower staircase functions, in order to approximate the area below the graph. With finer staircase functions (shorter steps) we get better approximations.

How good an approximation is becomes sometimes clear if we want to compute with the approximations. For example, given certain estimates for the side lengths of a rectangle, what estimate does hold for the area of the rectangle? If we want to allow a certain error for the area of a rectangle, what error can we allow for the side lengths?

We are going to have a closer look at square roots and how these might be approximated. More precisely, we will describe square roots as a limit of a sequence. We have seen in the fourth lecture that there is no easier description for the square root of a prime number since these are irrational numbers.

## Real sequences

We begin with a motivating example.

## Example

We would like to "compute" the square root of a natural number, say of ${\displaystyle {}5}$. Such a number ${\displaystyle {}x}$ with the property ${\displaystyle {}x^{2}=5}$ does not exist within the rational numbers (this follows from unique prime factorization). If ${\displaystyle {}x\in \mathbb {R} }$ is such an element, then also ${\displaystyle {}-x}$ has this property. Due to Corollary 6.6, there cannot be more than two solutions.

Though there is no solution within the rational numbers for the equation ${\displaystyle {}x^{2}=5}$, there exist arbitrarily good approximations for it with rational numbers. Arbitrarily good means that the error (the deviation) can be made so small that it is below any given positive bound. The classical method to approximate a square root is Heron's method. This is an iterative method, i.e., the next approximation is computed from the preceding approximation. Let us start with ${\displaystyle {}a:=x_{0}:=2}$ as a first approximation. Because of

${\displaystyle {}x_{0}^{2}=2^{2}=4<5\,}$

we see that ${\displaystyle {}x_{0}}$ is too small, ${\displaystyle {}x_{0}<{\sqrt {5}}}$. From ${\displaystyle {}a^{2}<5}$ (${\displaystyle {}a}$ being positive) we get ${\displaystyle {}5/a^{2}>1}$ and therefore ${\displaystyle {}(5/a)^{2}>5}$, so ${\displaystyle {}5/a>{\sqrt {5}}}$. Hence we have the estimates

${\displaystyle {}a<{\sqrt {5}}<5/a\,,}$

where we get a rational number on the right hand side if ${\displaystyle {}a}$ is rational. Such an estimate provides a certain idea where ${\displaystyle {}{\sqrt {5}}}$ lies. The difference ${\displaystyle {}5/a-a}$ is a measure for how good the approximation is.

In particular, when we start with ${\displaystyle {}2}$, we get that the square root ${\displaystyle {}{\sqrt {5}}}$ is between ${\displaystyle {}2}$ and ${\displaystyle {}5/2}$. Then we take the arithmetic mean of the interval bounds, so

${\displaystyle {}x_{1}:={\frac {2+{\frac {5}{2}}}{2}}={\frac {9}{4}}\,.}$

Due to ${\displaystyle {}{\left({\frac {9}{4}}\right)}^{2}={\frac {81}{16}}>5}$, this value is too large and therefore ${\displaystyle {}{\sqrt {5}}}$ is in the interval ${\displaystyle {}[5\cdot {\frac {4}{9}},{\frac {9}{4}}]}$. Then, we take again the arithmetic mean of these interval bounds and we set

${\displaystyle {}x_{2}:={\frac {5\cdot {\frac {4}{9}}+{\frac {9}{4}}}{2}}={\frac {161}{72}}\,}$

to be the next approximation. Continuing like that, we get better and better approximations for ${\displaystyle {}{\sqrt {5}}}$.

In this way, we always get a sequence of better and better approximations of the square root of a positive real number.
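The two steps of the example can be reproduced exactly with rational arithmetic; this is a sketch using `Fraction` from the Python standard library to keep the interval bounds rational.

```python
from fractions import Fraction

# Reproduce the iteration from the example: sqrt(5) lies between x and 5/x,
# and the next approximation is the arithmetic mean of these two bounds.
c = Fraction(5)
x = Fraction(2)              # x_0 = 2
for _ in range(2):
    x = (x + c / x) / 2      # mean of the interval bounds x and c/x
print(x)  # 161/72
```

Note that the mean of ${\displaystyle {}5\cdot {\frac {4}{9}}}$ and ${\displaystyle {}{\frac {9}{4}}}$ in the text is the same as the mean of ${\displaystyle {}x_{1}}$ and ${\displaystyle {}c/x_{1}}$ computed here.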

## Definition

Let ${\displaystyle {}c\in \mathbb {R} _{+}}$ denote a positive real number. The Heron sequence, with the positive initial value ${\displaystyle {}x_{0}}$, is defined recursively by

${\displaystyle {}x_{n+1}:={\frac {x_{n}+{\frac {c}{x_{n}}}}{2}}\,}$

Accordingly, this method is called Heron's method for the computation of square roots. In particular, this method produces for every natural number ${\displaystyle {}n}$ a real number which approximates a number defined by a certain algebraic property within an error which is arbitrarily small. In many technical applications, it is enough to know a certain number within a certain accuracy, but the accuracy aimed at might depend on the technical goal. In general, there is no accuracy which will work for all possible applications. Instead, it is important to know how to improve a good approximation by a better approximation, and to know how many (computational) steps one has to take in order to reach a certain desired approximation. This idea yields the concepts of sequence and convergence.
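A minimal sketch of the method follows; the function name `heron` and the stopping rule based on the length of the bracket between ${\displaystyle {}x}$ and ${\displaystyle {}c/x}$ are illustrative choices, not prescribed by the text.

```python
# Sketch of Heron's method: sqrt(c) always lies between x and c/x,
# so we iterate x -> (x + c/x)/2 until this bracket is shorter than
# the desired accuracy.
def heron(c, x0, tolerance):
    x = x0
    while abs(c / x - x) > tolerance:
        x = (x + c / x) / 2
    return x

approx = heron(5, 2, 1e-10)
print(abs(approx * approx - 5) < 1e-9)  # True
```

The difference ${\displaystyle {}c/x-x}$ used as the stopping rule is exactly the measure of accuracy discussed in the example above.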

## Definition

A real sequence is a mapping

${\displaystyle \mathbb {N} \longrightarrow \mathbb {R} ,n\longmapsto x_{n}.}$

We usually write a sequence as ${\displaystyle {}{\left(x_{n}\right)}_{n\in \mathbb {N} }}$, or simply as ${\displaystyle {}(x_{n})_{n}}$. For a given starting number ${\displaystyle {}x_{0}}$, the numbers defined recursively by Heron's method (for the computation of ${\displaystyle {}{\sqrt {c}}}$) form a sequence. Sometimes a sequence is not defined for all natural numbers, but just for all natural numbers ${\displaystyle {}\geq N}$. But all concepts and statements apply also in this situation.

## Definition

Let ${\displaystyle {}{\left(x_{n}\right)}_{n\in \mathbb {N} }}$ denote a real sequence, and let ${\displaystyle {}x\in \mathbb {R} }$. We say that the sequence converges to ${\displaystyle {}x}$, if the following property holds.

For every positive ${\displaystyle {}\epsilon >0}$, ${\displaystyle {}\epsilon \in \mathbb {R} }$, there exists some ${\displaystyle {}n_{0}\in \mathbb {N} }$, such that for all ${\displaystyle {}n\geq n_{0}}$, the estimate

${\displaystyle {}\vert {x_{n}-x}\vert \leq \epsilon \,}$

holds.

If this condition is fulfilled, then ${\displaystyle {}x}$ is called the limit of the sequence. For this we write

${\displaystyle {}\lim _{n\rightarrow \infty }x_{n}:=x\,.}$
If the sequence converges to a limit, we just say that the sequence converges, otherwise, that the sequence diverges.

One should think of the given ${\displaystyle {}\epsilon }$ as a small but positive real number which expresses the desired accuracy (or the allowed error). The natural number ${\displaystyle {}n_{0}}$ represents the effort: how far one has to go in order to achieve the desired accuracy, and in fact in such a way that beyond this effort number ${\displaystyle {}n_{0}}$, all the following members stay within the allowed error. Thus, convergence means that every possible accuracy can be achieved by some suitable effort. The smaller the error is supposed to be (the better the approximation shall be), the higher the effort will be. Instead of arbitrary positive real numbers ${\displaystyle {}\epsilon }$, one can also work with unit fractions (the rational numbers of the form ${\displaystyle {}{\frac {1}{k}}}$, ${\displaystyle {}k\in \mathbb {N} _{+}}$), see Exercise 7.7, or with the inverse powers of ten ${\displaystyle {}{\frac {1}{10^{\ell }}}}$, ${\displaystyle {}\ell \in \mathbb {N} }$.

For ${\displaystyle {}\epsilon >0}$ and a real number ${\displaystyle {}x}$, the interval ${\displaystyle {}]x-\epsilon ,x+\epsilon [}$ is also called the ${\displaystyle {}\epsilon }$-neighborhood of ${\displaystyle {}x}$. A sequence converging to ${\displaystyle {}0}$ is called null sequence.

## Example

A constant sequence ${\displaystyle {}x_{n}:=c}$ converges to the limit ${\displaystyle {}c}$. This follows immediately, since for every ${\displaystyle {}\epsilon >0}$, we can take ${\displaystyle {}n_{0}=0}$. Then we have

${\displaystyle {}\vert {x_{n}-c}\vert =\vert {c-c}\vert =\vert {0}\vert =0<\epsilon \,}$

for all ${\displaystyle {}n}$.

## Example

The sequence

${\displaystyle {}x_{n}={\frac {1}{n}}\,}$

converges to the limit ${\displaystyle {}0}$. To show this, let some positive ${\displaystyle {}\epsilon }$ be given. Due to the Archimedean axiom, there exists an ${\displaystyle {}n_{0}}$, such that ${\displaystyle {}{\frac {1}{n_{0}}}\leq \epsilon }$. Then for all ${\displaystyle {}n\geq n_{0}}$, the estimate

${\displaystyle {}\vert {x_{n}-0}\vert ={\frac {1}{n}}\leq {\frac {1}{n_{0}}}\leq \epsilon \,}$

holds.
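In this example, the effort number ${\displaystyle {}n_{0}}$ can be written down explicitly; the helper name `effort_for` is a hypothetical choice used only for illustration.

```python
import math

# For x_n = 1/n, the Archimedean axiom yields the explicit effort number
# n_0 = ceil(1/epsilon), which satisfies 1/n_0 <= epsilon.
def effort_for(epsilon):
    return math.ceil(1 / epsilon)

for epsilon in (0.1, 0.01, 1e-6):
    n0 = effort_for(epsilon)
    assert 1 / n0 <= epsilon     # the estimate from the example
print(effort_for(0.01))  # 100
```

This makes the principle of approximation concrete: halving the allowed error here doubles the required effort number.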

## Example

We consider the sequence

${\displaystyle {}x_{n}=0.33\ldots 33\,,}$

with exactly ${\displaystyle {}n}$ digits after the point. We claim that this sequence converges to ${\displaystyle {}1/3}$. For this, we have to determine ${\displaystyle {}\vert {0.33\ldots 33-{\frac {1}{3}}}\vert }$, and before we can do this, we have to recall the meaning of a decimal expansion. We have

${\displaystyle {}x_{n}=0.33\ldots 33={\frac {33\ldots 33}{10^{n}}}={\frac {\sum _{j=0}^{n-1}3\cdot 10^{j}}{10^{n}}}\,,}$

and therefore

{\displaystyle {}{\begin{aligned}\vert {0.33\ldots 33-{\frac {1}{3}}}\vert &=\vert {{\frac {\sum _{j=0}^{n-1}3\cdot 10^{j}}{10^{n}}}-{\frac {1}{3}}}\vert \\&=\vert {\frac {3\cdot {\left(\sum _{j=0}^{n-1}3\cdot 10^{j}\right)}-10^{n}}{3\cdot 10^{n}}}\vert \\&=\vert {\frac {{\left(\sum _{j=0}^{n-1}9\cdot 10^{j}\right)}-10^{n}}{3\cdot 10^{n}}}\vert \\&=\vert {\frac {-1}{3\cdot 10^{n}}}\vert \\&={\frac {1}{3\cdot 10^{n}}}.\end{aligned}}}

If now a positive ${\displaystyle {}\epsilon }$ is given, then for ${\displaystyle {}n}$ sufficiently large, this last term is ${\displaystyle {}\leq \epsilon }$.
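The computed error term can be checked with exact rational arithmetic; this sketch uses `Fraction` from the Python standard library.

```python
from fractions import Fraction

# x_n = 0.33...3 with n digits is the fraction 33...3 / 10^n; its distance
# to 1/3 is exactly 1/(3 * 10^n), as derived above.
for n in range(1, 8):
    x_n = Fraction(int("3" * n), 10**n)
    error = abs(x_n - Fraction(1, 3))
    assert error == Fraction(1, 3 * 10**n)
print(error)  # the error for n = 7 digits
```

Each additional digit shrinks the error by a factor of ${\displaystyle {}10}$, so any given ${\displaystyle {}\epsilon }$ is eventually undercut.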

## Lemma

A real sequence has at most one limit.

### Proof

We assume that the sequence has two distinct limits ${\displaystyle {}x,y}$, ${\displaystyle {}x\neq y}$. Then ${\displaystyle {}d:=\vert {x-y}\vert >0}$. We consider ${\displaystyle {}\epsilon :=d/3>0}$. Because of the convergence to ${\displaystyle {}x}$ there exists an ${\displaystyle {}n_{0}}$ such that

${\displaystyle \vert {x_{n}-x}\vert \leq \epsilon {\text{ for all }}n\geq n_{0}}$

and because of the convergence to ${\displaystyle {}y}$ there exists an ${\displaystyle {}n_{0}'}$ such that

${\displaystyle \vert {x_{n}-y}\vert \leq \epsilon {\text{ for all }}n\geq n_{0}'.}$

Hence, both conditions hold simultaneously for ${\displaystyle {}n\geq \max\{n_{0},n_{0}'\}}$. For such an ${\displaystyle {}n}$, the triangle inequality yields the contradiction

${\displaystyle {}d=\vert {x-y}\vert \leq \vert {x-x_{n}}\vert +\vert {x_{n}-y}\vert \leq \epsilon +\epsilon =2d/3\,.}$
${\displaystyle \Box }$

## Boundedness

## Definition

A subset ${\displaystyle {}M\subseteq \mathbb {R} }$ of the real numbers is called bounded, if there exist real numbers ${\displaystyle {}s\leq S}$ such that

${\displaystyle {}M\subseteq [s,S]}$.

In this situation, ${\displaystyle {}S}$ is also called an upper bound for ${\displaystyle {}M}$ and ${\displaystyle {}s}$ is called a lower bound for ${\displaystyle {}M}$. These concepts are also used for sequences, namely for the image set, the set of all members ${\displaystyle {}{\left\{x_{n}\mid n\in \mathbb {N} \right\}}}$. For the sequence ${\displaystyle {}1/n}$, ${\displaystyle {}n\in \mathbb {N} _{+}}$, ${\displaystyle {}1}$ is an upper bound and ${\displaystyle {}0}$ is a lower bound.

## Lemma

A convergent real sequence is bounded.

### Proof

Let ${\displaystyle {}{\left(x_{n}\right)}_{n\in \mathbb {N} }}$ be a convergent sequence with ${\displaystyle {}x\in \mathbb {R} }$ as its limit. Choose some ${\displaystyle {}\epsilon >0}$. Due to the convergence, there exists some ${\displaystyle {}n_{0}}$ such that

${\displaystyle \vert {x_{n}-x}\vert \leq \epsilon {\text{ for all }}n\geq n_{0}.}$

So in particular

${\displaystyle \vert {x_{n}}\vert \leq \vert {x}\vert +\vert {x-x_{n}}\vert \leq \vert {x}\vert +\epsilon {\text{ for all }}n\geq n_{0}.}$

Below ${\displaystyle {}n_{0}}$ there are only finitely many members, hence the maximum

${\displaystyle {}B:=\max {\left\{\vert {x_{0}}\vert ,\vert {x_{1}}\vert ,\ldots ,\vert {x_{n_{0}-1}}\vert ,\vert {x}\vert +\epsilon \right\}}\,}$

is well-defined. Therefore, ${\displaystyle {}B}$ is an upper bound and ${\displaystyle {}-B}$ is a lower bound for ${\displaystyle {}{\left\{x_{n}\mid n\in \mathbb {N} \right\}}}$.

${\displaystyle \Box }$

It is easy to give a bounded but not convergent sequence.

## Example

The alternating sequence

${\displaystyle {}x_{n}:=(-1)^{n}\,}$

is bounded, but not convergent. The boundedness follows directly from ${\displaystyle {}x_{n}\in [-1,1]}$ for all ${\displaystyle {}n}$. However, there is no convergence. For if ${\displaystyle {}x\geq 0}$ were the limit, then for positive ${\displaystyle {}\epsilon <1}$ and every odd ${\displaystyle {}n}$ the relation

${\displaystyle {}\vert {x_{n}-x}\vert =\vert {-1-x}\vert =1+x\geq 1>\epsilon \,}$

holds, so these members are outside of this ${\displaystyle {}\epsilon }$-neighborhood. In the same way, we can argue against a negative limit.
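The argument can be illustrated numerically: whatever limit candidate one tries, for ${\displaystyle {}\epsilon ={\frac {1}{2}}}$ infinitely many members escape the ${\displaystyle {}\epsilon }$-neighborhood. The helper name `outside_count` is an illustrative choice.

```python
# For x_n = (-1)^n, count how many of the first N members lie outside
# the epsilon-neighborhood of a candidate limit x.
def outside_count(x, epsilon, N):
    return sum(1 for n in range(N) if abs((-1) ** n - x) > epsilon)

# for any candidate limit, at least half of the first 1000 members escape,
# so the convergence condition fails for epsilon = 1/2
for candidate in (-1.0, 0.0, 1.0, 0.5):
    assert outside_count(candidate, 0.5, 1000) >= 500
```

This matches the proof: either all odd-indexed members or all even-indexed members stay at distance at least ${\displaystyle {}1}$ from the candidate.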

## The squeeze criterion

## Lemma

Suppose that ${\displaystyle {}{\left(x_{n}\right)}_{n\in \mathbb {N} }}$ and ${\displaystyle {}{\left(y_{n}\right)}_{n\in \mathbb {N} }}$ are convergent sequences such that ${\displaystyle {}x_{n}\geq y_{n}}$ for all ${\displaystyle {}n\in \mathbb {N} }$. Then ${\displaystyle {}\lim _{n\rightarrow \infty }x_{n}\geq \lim _{n\rightarrow \infty }y_{n}}$.

### Proof

${\displaystyle \Box }$

The following statement is called the squeeze criterion.

## Lemma

Let ${\displaystyle {}{\left(x_{n}\right)}_{n\in \mathbb {N} },\,{\left(y_{n}\right)}_{n\in \mathbb {N} }}$ and ${\displaystyle {}{\left(z_{n}\right)}_{n\in \mathbb {N} }}$ denote real sequences. Suppose that

${\displaystyle x_{n}\leq y_{n}\leq z_{n}{\text{ for all }}n\in \mathbb {N} }$

and that ${\displaystyle {}{\left(x_{n}\right)}_{n\in \mathbb {N} }}$ and ${\displaystyle {}{\left(z_{n}\right)}_{n\in \mathbb {N} }}$ converge to the same limit ${\displaystyle {}a}$. Then also ${\displaystyle {}{\left(y_{n}\right)}_{n\in \mathbb {N} }}$ converges to ${\displaystyle {}a}$.

### Proof

${\displaystyle \Box }$
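As a numerical illustration of the squeeze criterion (the example is not from the text), the sequence ${\displaystyle {}y_{n}={\frac {\sin n }{n}}}$ is squeezed between ${\displaystyle {}x_{n}=-{\frac {1}{n}}}$ and ${\displaystyle {}z_{n}={\frac {1}{n}}}$, which both converge to ${\displaystyle {}0}$.

```python
import math

# sin(n)/n is squeezed between -1/n and 1/n; since both bounds tend to 0,
# the squeeze criterion forces sin(n)/n -> 0 as well.
def y(n):
    return math.sin(n) / n

epsilon = 1e-3
n0 = 1001  # because 1/n <= epsilon for all n >= 1000
assert all(-1 / n <= y(n) <= 1 / n for n in range(1, 2000))
assert all(abs(y(n)) <= epsilon for n in range(n0, n0 + 1000))
```

The point of the criterion is that no direct analysis of ${\displaystyle {}\sin n }$ is needed: the bounds alone force the convergence.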
