# Mathematics for Applied Sciences (Osnabrück 2023-2024)/Part I/Lecture 17

## The Taylor formula

So far, we have only considered power series of the form ${\displaystyle {}\sum _{k=0}^{\infty }c_{k}x^{k}}$. Now we allow the variable ${\displaystyle {}x}$ to be replaced by a "shifted variable" ${\displaystyle {}x-a}$, in order to study the local behavior at the expansion point ${\displaystyle {}a}$. Convergence means, in this case, that there exists some ${\displaystyle {}\epsilon >0}$ such that for

${\displaystyle {}x\in ]a-\epsilon ,a+\epsilon [\,,}$

the series converges. In this situation, the function represented by the power series is again differentiable, and its derivative is given as in Theorem 16.1 . For a convergent power series

${\displaystyle {}f(x):=\sum _{k=0}^{\infty }c_{k}(x-a)^{k}\,,}$

the polynomials ${\displaystyle {}\sum _{k=0}^{n}c_{k}(x-a)^{k}}$ yield polynomial approximations of the function ${\displaystyle {}f}$ at the point ${\displaystyle {}a}$. Moreover, the function ${\displaystyle {}f}$ is arbitrarily often differentiable at ${\displaystyle {}a}$, and the higher derivatives at the point ${\displaystyle {}a}$ can be read off directly from the power series, namely

${\displaystyle {}f^{(n)}(a)=n!c_{n}\,.}$

We now consider the converse question: starting with a sufficiently often differentiable function, can we find approximating polynomials (or a power series)? This is the content of the Taylor expansion.

## Definition

Let ${\displaystyle {}I\subseteq \mathbb {R} }$ denote an interval,

${\displaystyle f\colon I\longrightarrow \mathbb {R} }$

an ${\displaystyle {}n}$-times differentiable function, and ${\displaystyle {}a\in I}$. Then

${\displaystyle {}T_{a,n}(f)(x):=\sum _{k=0}^{n}{\frac {f^{(k)}(a)}{k!}}(x-a)^{k}\,}$
is called the Taylor polynomial of degree ${\displaystyle {}n}$ for ${\displaystyle {}f}$ in the point ${\displaystyle {}a}$.

So

${\displaystyle {}T_{a,0}(f)(x):=f(a)\,}$

is the constant approximation,

${\displaystyle {}T_{a,1}(f)(x):=f(a)+f'(a)(x-a)\,}$

is the linear approximation,

${\displaystyle {}T_{a,2}(f)(x):=f(a)+f'(a)(x-a)+{\frac {f^{\prime \prime }(a)}{2}}(x-a)^{2}\,}$

is the quadratic approximation,

${\displaystyle {}T_{a,3}(f)(x):=f(a)+f'(a)(x-a)+{\frac {f^{\prime \prime }(a)}{2}}(x-a)^{2}+{\frac {f^{\prime \prime \prime }(a)}{6}}(x-a)^{3}\,}$

is the approximation of degree ${\displaystyle {}3}$, etc. The Taylor polynomial of degree ${\displaystyle {}n}$ is the (uniquely determined) polynomial of degree ${\displaystyle {}\leq n}$ with the property that its derivatives at ${\displaystyle {}a}$ coincide with the derivatives of ${\displaystyle {}f}$ at ${\displaystyle {}a}$ up to order ${\displaystyle {}n}$.
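
As a small numerical illustration (not part of the lecture), the Taylor polynomials of ${\displaystyle {}\sin }$ at ${\displaystyle {}a=0}$ can be evaluated directly, using the fact that the derivatives of sine cycle through ${\displaystyle {}\sin ,\cos ,-\sin ,-\cos }$; the helper name `taylor_sin` is our own.

```python
import math

def taylor_sin(a, n, x):
    """Evaluate the Taylor polynomial T_{a,n}(sin)(x).

    The k-th derivative of sin at a cycles through
    sin(a), cos(a), -sin(a), -cos(a).
    """
    derivs = [math.sin(a), math.cos(a), -math.sin(a), -math.cos(a)]
    return sum(derivs[k % 4] / math.factorial(k) * (x - a) ** k
               for k in range(n + 1))

# Increasing the degree improves the approximation near a = 0:
for n in (1, 3, 9):
    print(n, taylor_sin(0.0, n, 0.5))
```

Already the degree-9 polynomial matches ${\displaystyle {}\sin(0.5)}$ to about ten decimal places.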

## Theorem

Let ${\displaystyle {}I}$ denote a real interval,

${\displaystyle f\colon I\longrightarrow \mathbb {R} }$

an ${\displaystyle {}(n+1)}$-times differentiable function, and ${\displaystyle {}a\in I}$ an inner point of the interval. Then for every point ${\displaystyle {}x\in I}$, there exists some ${\displaystyle {}c\in I}$ such that

${\displaystyle f(x)=\sum _{k=0}^{n}{\frac {f^{(k)}(a)}{k!}}(x-a)^{k}+{\frac {f^{(n+1)}(c)}{(n+1)!}}(x-a)^{n+1}.}$
Here, ${\displaystyle {}c}$ may be chosen between ${\displaystyle {}a}$ and ${\displaystyle {}x}$.

### Proof

This proof was not presented in the lecture.
${\displaystyle \Box }$
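
To see the intermediate point ${\displaystyle {}c}$ concretely, one can, for ${\displaystyle {}f=\exp }$ and ${\displaystyle {}a=0}$, solve the Taylor formula for ${\displaystyle {}c}$, since the ${\displaystyle {}(n+1)}$-th derivative of ${\displaystyle {}\exp }$ is ${\displaystyle {}\exp }$ itself. This sketch (our own, not from the lecture; it assumes ${\displaystyle {}x>0}$) recovers such a ${\displaystyle {}c}$ numerically:

```python
import math

def lagrange_c_for_exp(x, n):
    """For f = exp and a = 0, solve
    exp(x) = T_{0,n}(x) + exp(c) * x**(n+1) / (n+1)!
    for the intermediate point c (assumes x > 0)."""
    Tn = sum(x ** k / math.factorial(k) for k in range(n + 1))
    remainder = math.exp(x) - Tn
    return math.log(remainder * math.factorial(n + 1) / x ** (n + 1))

print(lagrange_c_for_exp(1.0, 3))  # some point strictly between 0 and 1
```

In accordance with the theorem, the computed ${\displaystyle {}c}$ indeed lies between ${\displaystyle {}a=0}$ and ${\displaystyle {}x}$.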

## Corollary

Suppose that ${\displaystyle {}I}$ is a bounded closed interval,

${\displaystyle f\colon I\longrightarrow \mathbb {R} }$

is an ${\displaystyle {}(n+1)}$-times continuously differentiable function, ${\displaystyle {}a\in I}$ an inner point, and ${\displaystyle {}B:={\max {\left(\vert {f^{(n+1)}(c)}\vert ,c\in I\right)}}}$. Then, for the difference between ${\displaystyle {}f(x)}$ and the ${\displaystyle {}n}$-th Taylor polynomial, we have the estimate

${\displaystyle {}\vert {f(x)-\sum _{k=0}^{n}{\frac {f^{(k)}(a)}{k!}}(x-a)^{k}}\vert \leq {\frac {B}{(n+1)!}}\vert {x-a}\vert ^{n+1}\,.}$

### Proof

The number ${\displaystyle {}B}$ exists due to Theorem 11.13 , since the ${\displaystyle {}(n+1)}$-th derivative ${\displaystyle {}f^{(n+1)}}$ is continuous on the compact interval ${\displaystyle {}I}$. The statement follows, therefore, directly from Theorem 17.2 .

${\displaystyle \Box }$
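
For ${\displaystyle {}f=\sin }$, every derivative is again ${\displaystyle {}\pm \sin }$ or ${\displaystyle {}\pm \cos }$, so we may take ${\displaystyle {}B=1}$ on any interval. The following sketch (our own illustration) compares the actual approximation error with the bound from the corollary:

```python
import math

def sin_error_and_bound(x, n):
    """Compare |sin(x) - T_{0,n}(sin)(x)| with the estimate
    B / (n+1)! * |x|**(n+1), where B = 1 works for sin
    because all its derivatives are bounded by 1."""
    # Taylor polynomial of sin at a = 0 up to degree n (only odd terms occur)
    Tn = sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
             for k in range(n // 2 + 1) if 2 * k + 1 <= n)
    error = abs(math.sin(x) - Tn)
    bound = abs(x) ** (n + 1) / math.factorial(n + 1)
    return error, bound

print(sin_error_and_bound(1.0, 5))  # error is below the bound 1/6! = 1/720
```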

## Criteria for extrema

We have seen in the 15th lecture that a necessary condition for a differentiable function to have a local extremum at a point is that its derivative equals ${\displaystyle {}0}$ at this point. We now give an important sufficient criterion, which relies on higher derivatives.

## Theorem

Let ${\displaystyle {}I}$ denote a real interval,

${\displaystyle f\colon I\longrightarrow \mathbb {R} }$

an ${\displaystyle {}(n+1)}$-times continuously differentiable function, and ${\displaystyle {}a\in I}$ an inner point of the interval. Suppose that

${\displaystyle f'(a)=f^{\prime \prime }(a)=\ldots =f^{(n)}(a)=0{\text{ and }}f^{(n+1)}(a)\neq 0}$
is fulfilled. Then the following statements hold.
1. If ${\displaystyle {}n}$ is even, then ${\displaystyle {}f}$ does not have a local extremum in ${\displaystyle {}a}$.
2. Suppose that ${\displaystyle {}n}$ is odd. In case ${\displaystyle {}f^{(n+1)}(a)>0}$, the function ${\displaystyle {}f}$ has an isolated local minimum in ${\displaystyle {}a}$.
3. Suppose that ${\displaystyle {}n}$ is odd. In case ${\displaystyle {}f^{(n+1)}(a)<0}$, the function ${\displaystyle {}f}$ has an isolated local maximum in ${\displaystyle {}a}$.

### Proof

Under the given conditions, the Taylor formula becomes

${\displaystyle {}f(x)-f(a)={\frac {f^{(n+1)}(c)}{(n+1)!}}(x-a)^{n+1}\,,}$

with some ${\displaystyle {}c}$ (depending on ${\displaystyle {}x}$) between ${\displaystyle {}a}$ and ${\displaystyle {}x}$. Depending on whether ${\displaystyle {}f^{(n+1)}(a)>0}$ or ${\displaystyle {}f^{(n+1)}(a)<0}$ holds, we have (due to the continuity of the ${\displaystyle {}(n+1)}$-th derivative) ${\displaystyle {}f^{(n+1)}(x)>0}$ or ${\displaystyle {}f^{(n+1)}(x)<0}$ for ${\displaystyle {}x\in [a-\epsilon ,a+\epsilon ]}$, for a suitable ${\displaystyle {}\epsilon >0}$. For these ${\displaystyle {}x}$, we have ${\displaystyle {}c\in [a-\epsilon ,a+\epsilon ]}$, so that the sign of ${\displaystyle {}f^{(n+1)}(c)}$ coincides with the sign of ${\displaystyle {}f^{(n+1)}(a)}$.
For ${\displaystyle {}n}$ even, ${\displaystyle {}n+1}$ is odd, and therefore the sign of ${\displaystyle {}(x-a)^{n+1}}$ changes at ${\displaystyle {}x=a}$ (for ${\displaystyle {}x<a}$, the sign is negative, and for ${\displaystyle {}x>a}$, the sign is positive). Since the sign of ${\displaystyle {}f^{(n+1)}(c)}$ does not change, the sign of ${\displaystyle {}f(x)-f(a)}$ changes at ${\displaystyle {}a}$. This means that there cannot be an extremum.
Suppose now that ${\displaystyle {}n}$ is odd. Then ${\displaystyle {}n+1}$ is even, hence ${\displaystyle {}(x-a)^{n+1}>0}$ for all ${\displaystyle {}x\neq a}$ in the neighborhood. In this neighborhood, in case ${\displaystyle {}f^{(n+1)}(a)>0}$, we have ${\displaystyle {}f(x)>f(a)}$, and ${\displaystyle {}f}$ has an isolated minimum in ${\displaystyle {}a}$. If ${\displaystyle {}f^{(n+1)}(a)<0}$, then ${\displaystyle {}f(x)<f(a)}$ holds, and ${\displaystyle {}f}$ has an isolated maximum in ${\displaystyle {}a}$.

${\displaystyle \Box }$

An important special case: if ${\displaystyle {}f'(a)=0}$ and ${\displaystyle {}f^{\prime \prime }(a)>0}$, then ${\displaystyle {}f}$ has an isolated minimum in ${\displaystyle {}a}$; if ${\displaystyle {}f'(a)=0}$ and ${\displaystyle {}f^{\prime \prime }(a)<0}$, then ${\displaystyle {}f}$ has an isolated maximum in ${\displaystyle {}a}$.
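
A standard example for the criterion with ${\displaystyle {}n>1}$ is ${\displaystyle {}f(x)=x^{4}}$: the first three derivatives vanish at ${\displaystyle {}0}$, the fourth is ${\displaystyle {}24>0}$, and ${\displaystyle {}n=3}$ is odd, so the criterion predicts an isolated minimum. The following sketch (our own; a grid check, not a proof) confirms this numerically and contrasts it with ${\displaystyle {}x^{3}}$, where ${\displaystyle {}n=2}$ is even and no extremum occurs:

```python
def f(x):
    # f(x) = x**4: f'(0) = f''(0) = f'''(0) = 0 and f''''(0) = 24 > 0,
    # so n = 3 is odd and the criterion predicts an isolated minimum at 0.
    return x ** 4

def is_isolated_min_numerically(g, a, eps=0.5, steps=100):
    """Check g(a) < g(x) on a grid of nearby points (a sanity check only)."""
    pts = [a + eps * i / steps for i in range(-steps, steps + 1) if i != 0]
    return all(g(a) < g(x) for x in pts)

print(is_isolated_min_numerically(f, 0.0))              # minimum at 0
print(is_isolated_min_numerically(lambda x: x ** 3, 0.0))  # no extremum at 0
```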

## The Taylor series

## Definition

Let ${\displaystyle {}I\subseteq \mathbb {R} }$ denote an interval,

${\displaystyle f\colon I\longrightarrow \mathbb {R} }$

an infinitely often differentiable function, and ${\displaystyle {}a\in I}$. Then

${\displaystyle \sum _{k=0}^{\infty }{\frac {f^{(k)}(a)}{k!}}(x-a)^{k}}$
is called the Taylor series of ${\displaystyle {}f}$ in the point ${\displaystyle {}a}$.

## Theorem

Let ${\displaystyle {}\sum _{n=0}^{\infty }c_{n}x^{n}}$ denote a power series which converges on the interval ${\displaystyle {}]-r,r[}$, and let

${\displaystyle f\colon ]-r,r[\longrightarrow \mathbb {R} }$

denote the function defined via Theorem 12.2 . Then ${\displaystyle {}f}$ is infinitely often differentiable, and the Taylor series of ${\displaystyle {}f}$ in ${\displaystyle {}0}$ coincides with the given power series.

### Proof

That ${\displaystyle {}f}$ is infinitely often differentiable follows directly from Theorem 16.1 by induction. Therefore, the Taylor series exists, in particular, in the point ${\displaystyle {}0}$. Hence, we only have to show that the ${\displaystyle {}n}$-th derivative of ${\displaystyle {}f}$ at ${\displaystyle {}0}$ has ${\displaystyle {}c_{n}n!}$ as its value. But this also follows from Theorem 16.1 .

${\displaystyle \Box }$
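
The identity ${\displaystyle {}f^{(n)}(0)=c_{n}n!}$ can be checked numerically for the geometric series ${\displaystyle {}\sum _{n=0}^{\infty }x^{n}}$, which converges to ${\displaystyle {}1/(1-x)}$ on ${\displaystyle {}]-1,1[}$ with all coefficients ${\displaystyle {}c_{n}=1}$. This sketch (our own) approximates the first two derivatives at ${\displaystyle {}0}$ by central difference quotients:

```python
import math

def f(x):
    # On ]-1, 1[, the power series sum_{n>=0} x**n converges to 1/(1 - x),
    # so c_n = 1 for all n, and the theorem gives f^(n)(0) = n!.
    return 1.0 / (1.0 - x)

h = 1e-4
d1 = (f(h) - f(-h)) / (2 * h)            # central difference for f'(0)
d2 = (f(h) - 2 * f(0.0) + f(-h)) / h**2  # central difference for f''(0)
print(d1, d2)  # close to 1! = 1 and 2! = 2
```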

## Example

We consider the function

${\displaystyle f\colon \mathbb {R} \longrightarrow \mathbb {R} ,x\longmapsto f(x),}$

given by

${\displaystyle {}f(x):={\begin{cases}0,\,{\text{ if }}x\leq 0\,,\\e^{-{\frac {1}{x}}},\,{\text{ if }}x>0\,.\end{cases}}\,}$

We claim that this function is infinitely often differentiable; this is not immediately clear only at the point ${\displaystyle {}0}$. We first show, by induction, that all derivatives of ${\displaystyle {}e^{-{\frac {1}{x}}}}$ have the form ${\displaystyle {}p{\left({\frac {1}{x}}\right)}e^{-{\frac {1}{x}}}}$ with certain polynomials ${\displaystyle {}p\in \mathbb {R} [Z]}$, and that therefore the limit for ${\displaystyle {}x\rightarrow 0,\,x>0}$ equals ${\displaystyle {}0}$ (see Exercise 17.16 and Exercise 17.17 ). Therefore, the limit at ${\displaystyle {}0}$ exists for all derivatives and equals ${\displaystyle {}0}$. So all derivatives at ${\displaystyle {}0}$ have value ${\displaystyle {}0}$, and the Taylor series in ${\displaystyle {}0}$ is just the zero series. However, the function ${\displaystyle {}f}$ is the zero function in no neighborhood of ${\displaystyle {}0}$, since ${\displaystyle {}e^{-{\frac {1}{x}}}>0}$ for ${\displaystyle {}x>0}$.
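
The key point, that ${\displaystyle {}e^{-{\frac {1}{x}}}}$ tends to ${\displaystyle {}0}$ faster than any power of ${\displaystyle {}x}$ (so that every one-sided difference quotient at ${\displaystyle {}0}$ vanishes in the limit), can be observed numerically; this sketch is our own illustration:

```python
import math

def f(x):
    # The function from the example: 0 for x <= 0, exp(-1/x) for x > 0.
    return 0.0 if x <= 0 else math.exp(-1.0 / x)

# f(h) / h**n -> 0 as h -> 0+ for every n; already at h = 0.01
# the ratios are astronomically small, since f(0.01) = exp(-100).
ratios = [f(1e-2) / (1e-2) ** n for n in (1, 5, 10)]
print(ratios)
```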

## Power series ansatz
