# Waves in composites and metamaterials/Effective tensors using Hilbert space formalism

The content of these notes is based on the lectures by Prof. Graeme W. Milton (University of Utah) given in a course on metamaterials in Spring 2007.

## Recap

In the previous lecture we introduced the Hilbert space ${\displaystyle {\mathcal {H}}}$  of square-integrable, ${\displaystyle Q}$ -periodic, complex vector fields with the inner product

${\displaystyle (\mathbf {a} ,\mathbf {b} )=\int _{Q}{\overline {\mathbf {a} (\mathbf {x} )}}\cdot \mathbf {b} (\mathbf {x} )~{\text{d}}\mathbf {x} \equiv \int _{\hat {Q}}{\overline {{\widehat {\mathbf {a} }}(\mathbf {k} )}}\cdot {\widehat {\mathbf {b} }}(\mathbf {k} )~{\text{d}}\mathbf {k} ~.}$

We then decomposed ${\displaystyle {\mathcal {H}}}$  into three orthogonal subspaces.

1. The subspace ${\displaystyle {\mathcal {U}}}$  of uniform fields, i.e., ${\displaystyle \mathbf {a} (\mathbf {x} )}$  is independent of ${\displaystyle \mathbf {x} }$ , or in Fourier space, ${\displaystyle {\widehat {\mathbf {a} }}(\mathbf {k} )={\boldsymbol {0}}}$  unless ${\displaystyle \mathbf {k} ={\boldsymbol {0}}}$ .
2. The subspace ${\displaystyle {\mathcal {J}}}$  of zero divergence, zero average value fields, i.e., ${\displaystyle {\boldsymbol {\nabla }}\cdot \mathbf {a} (\mathbf {x} )=0}$  and ${\displaystyle \langle \mathbf {a} (\mathbf {x} )\rangle ={\boldsymbol {0}}}$ , or in Fourier space, ${\displaystyle {\widehat {\mathbf {a} }}({\boldsymbol {0}})={\boldsymbol {0}}}$  and ${\displaystyle \mathbf {k} \cdot {\widehat {\mathbf {a} }}(\mathbf {k} )=0}$ .
3. The subspace ${\displaystyle {\mathcal {E}}}$  of zero curl, zero average value fields, i.e., ${\displaystyle {\boldsymbol {\nabla }}\times \mathbf {a} (\mathbf {x} )={\boldsymbol {0}}}$  and ${\displaystyle \langle \mathbf {a} (\mathbf {x} )\rangle ={\boldsymbol {0}}}$ , or in Fourier space, ${\displaystyle {\widehat {\mathbf {a} }}({\boldsymbol {0}})={\boldsymbol {0}}}$  and ${\displaystyle {\widehat {\mathbf {a} }}(\mathbf {k} )=\alpha (\mathbf {k} )~\mathbf {k} }$ .

To determine the effective permittivity, we introduced the operator ${\displaystyle {\boldsymbol {\mathit {\Gamma }}}}$  with the properties

${\displaystyle {\text{(1)}}\qquad \mathbf {b} (\mathbf {x} )={\boldsymbol {\mathit {\Gamma }}}\cdot \mathbf {a} (\mathbf {x} )}$

if and only if

${\displaystyle \mathbf {b} \in {\mathcal {E}}\qquad {\text{and}}\qquad \mathbf {a} -{\boldsymbol {\epsilon }}_{0}\cdot \mathbf {b} \in {\mathcal {U}}\oplus {\mathcal {J}}~,~~{\text{i.e.,}}~~{\boldsymbol {\nabla }}\cdot (\mathbf {a} -{\boldsymbol {\epsilon }}_{0}\cdot \mathbf {b} )=0~.}$

We also found that in Fourier space,

${\displaystyle {\widehat {\mathbf {b} }}(\mathbf {k} )={\boldsymbol {\mathit {\widehat {\Gamma }}}}(\mathbf {k} )\cdot {\widehat {\mathbf {a} }}(\mathbf {k} )}$

where

${\displaystyle {\text{(2)}}\qquad {\boldsymbol {\mathit {\widehat {\Gamma }}}}(\mathbf {k} )={\begin{cases}{\cfrac {\mathbf {k} \otimes \mathbf {k} }{\mathbf {k} \cdot {\boldsymbol {\epsilon }}_{0}\cdot \mathbf {k} }}={\boldsymbol {\mathit {\Gamma }}}(\mathbf {n} )&\qquad {\text{with}}~~\mathbf {n} ={\cfrac {\mathbf {k} }{|\mathbf {k} |}}~~{\text{if}}~~\mathbf {k} \neq 0\\{\boldsymbol {\mathit {0}}}&\qquad {\text{if}}~~\mathbf {k} =0~.\end{cases}}}$
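These properties of ${\displaystyle {\boldsymbol {\mathit {\widehat {\Gamma }}}}(\mathbf {k} )}$  are easy to check numerically. The following sketch (in Python with NumPy; the particular tensor ${\displaystyle {\boldsymbol {\epsilon }}_{0}}$  and wavevector ${\displaystyle \mathbf {k} }$  are arbitrary choices) verifies two consequences of (2): that ${\displaystyle {\boldsymbol {\mathit {\widehat {\Gamma }}}}\cdot {\boldsymbol {\epsilon }}_{0}\cdot {\boldsymbol {\mathit {\widehat {\Gamma }}}}={\boldsymbol {\mathit {\widehat {\Gamma }}}}}$ , and that ${\displaystyle {\boldsymbol {\mathit {\widehat {\Gamma }}}}}$  maps any vector onto the direction of ${\displaystyle \mathbf {k} }$ , i.e., onto the curl-free modes.

```python
import numpy as np

def gamma_hat(k, eps0):
    """Fourier-space operator of eq. (2): (k ⊗ k) / (k . eps0 . k)."""
    k = np.asarray(k, dtype=float)
    return np.outer(k, k) / (k @ eps0 @ k)

# an arbitrary symmetric positive-definite reference permittivity
eps0 = np.array([[2.0, 0.3, 0.0],
                 [0.3, 1.5, 0.1],
                 [0.0, 0.1, 1.0]])
k = np.array([1.0, -2.0, 0.5])   # an arbitrary nonzero wavevector

G = gamma_hat(k, eps0)

# Gamma_hat . eps0 . Gamma_hat = Gamma_hat
assert np.allclose(G @ eps0 @ G, G)

# Gamma_hat maps any vector onto the direction of k (the gradient mode e^{ik.x} k)
a = np.array([0.4, 1.0, -0.7])
b = G @ a
assert np.allclose(np.cross(b, k), 0.0)
```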

## Deriving a formula for the effective permittivity

Let us now derive a formula for the effective tensor. Recall that the polarization is defined as

${\displaystyle {\text{(3)}}\qquad \mathbf {P} =({\boldsymbol {\epsilon }}-{\boldsymbol {\epsilon }}_{0})\cdot \mathbf {E} =\mathbf {D} -{\boldsymbol {\epsilon }}_{0}\cdot \mathbf {E} }$

where the permittivity tensor is being thought of as an operator that operates on ${\displaystyle \mathbf {E} }$ .

Also notice that

${\displaystyle {\text{(4)}}\qquad \langle \mathbf {E} \rangle -\mathbf {E} \in {\mathcal {E}}\qquad {\text{(curl free)}}.}$

From (3) we have

${\displaystyle {\text{(5)}}\qquad \mathbf {P} -{\boldsymbol {\epsilon }}_{0}\cdot [\langle \mathbf {E} \rangle -\mathbf {E} ]=\mathbf {D} -{\boldsymbol {\epsilon }}_{0}\cdot \langle \mathbf {E} \rangle \in {\mathcal {U}}\oplus {\mathcal {J}}\qquad {\text{(divergence free)}}.}$

From the definition of ${\displaystyle {\boldsymbol {\mathit {\Gamma }}}}$  (equations (1) and (2)) and using (4) and (5), we can show that

${\displaystyle {\text{(6)}}\qquad {\boldsymbol {\mathit {\Gamma }}}\cdot \mathbf {P} =\langle \mathbf {E} \rangle -\mathbf {E} ~.}$

Let ${\displaystyle {\boldsymbol {\epsilon }}-{\boldsymbol {\epsilon }}_{0}}$  act on both sides of (6). Then we get

${\displaystyle ({\boldsymbol {\epsilon }}-{\boldsymbol {\epsilon }}_{0})\cdot {\boldsymbol {\mathit {\Gamma }}}\cdot \mathbf {P} =({\boldsymbol {\epsilon }}-{\boldsymbol {\epsilon }}_{0})\cdot \langle \mathbf {E} \rangle -({\boldsymbol {\epsilon }}-{\boldsymbol {\epsilon }}_{0})\cdot \mathbf {E} =({\boldsymbol {\epsilon }}-{\boldsymbol {\epsilon }}_{0})\cdot \langle \mathbf {E} \rangle -\mathbf {P} ~.}$

Therefore,

${\displaystyle {\text{(7)}}\qquad [{\boldsymbol {\mathit {1}}}+({\boldsymbol {\epsilon }}-{\boldsymbol {\epsilon }}_{0})\cdot {\boldsymbol {\mathit {\Gamma }}}]\cdot \mathbf {P} =({\boldsymbol {\epsilon }}-{\boldsymbol {\epsilon }}_{0})\cdot \langle \mathbf {E} \rangle ~.}$

Inverting (7) gives

${\displaystyle {\text{(8)}}\qquad \mathbf {P} =[{\boldsymbol {\mathit {1}}}+({\boldsymbol {\epsilon }}-{\boldsymbol {\epsilon }}_{0})\cdot {\boldsymbol {\mathit {\Gamma }}}]^{-1}\cdot ({\boldsymbol {\epsilon }}-{\boldsymbol {\epsilon }}_{0})\cdot \langle \mathbf {E} \rangle ~.}$

Averaging (8) gives

${\displaystyle {\text{(9)}}\qquad \langle \mathbf {P} \rangle =\langle [{\boldsymbol {\mathit {1}}}+({\boldsymbol {\epsilon }}-{\boldsymbol {\epsilon }}_{0})\cdot {\boldsymbol {\mathit {\Gamma }}}]^{-1}\cdot ({\boldsymbol {\epsilon }}-{\boldsymbol {\epsilon }}_{0})\rangle \cdot \langle \mathbf {E} \rangle ~.}$

Averaging (3) leads to the relation

${\displaystyle {\text{(10)}}\qquad \langle \mathbf {P} \rangle =({\boldsymbol {\epsilon }}_{\text{eff}}-{\boldsymbol {\epsilon }}_{0})\cdot \langle \mathbf {E} \rangle ~.}$

Comparing (9) and (10) shows us that

${\displaystyle {\text{(11)}}\qquad {{\boldsymbol {\epsilon }}_{\text{eff}}={\boldsymbol {\epsilon }}_{0}+\langle [{\boldsymbol {\mathit {1}}}+({\boldsymbol {\epsilon }}-{\boldsymbol {\epsilon }}_{0})\cdot {\boldsymbol {\mathit {\Gamma }}}]^{-1}\cdot ({\boldsymbol {\epsilon }}-{\boldsymbol {\epsilon }}_{0})\rangle ~.}}$

Recall from the previous lecture that, in equation (11), the operator ${\displaystyle {\boldsymbol {\epsilon }}}$  is local in real space while the operator ${\displaystyle {\boldsymbol {\mathit {\Gamma }}}}$  is local in Fourier space. It is therefore not obvious how one can invert ${\displaystyle [{\boldsymbol {\mathit {1}}}+({\boldsymbol {\epsilon }}-{\boldsymbol {\epsilon }}_{0})\cdot {\boldsymbol {\mathit {\Gamma }}}]}$ .

Let us define

${\displaystyle {\boldsymbol {\delta }}:={\boldsymbol {\epsilon }}-{\boldsymbol {\epsilon }}_{0}~.}$

Then (8) can be written as

${\displaystyle \mathbf {P} =[{\boldsymbol {\mathit {1}}}+{\boldsymbol {\delta }}\cdot {\boldsymbol {\mathit {\Gamma }}}]^{-1}\cdot {\boldsymbol {\delta }}\cdot \langle \mathbf {E} \rangle ~.}$

Assuming that ${\displaystyle \lVert {\boldsymbol {\delta }}\cdot {\boldsymbol {\mathit {\Gamma }}}\rVert <1}$ , we can expand the inverse operator as an infinite (Neumann) series, i.e.,

${\displaystyle [{\boldsymbol {\mathit {1}}}+{\boldsymbol {\delta }}\cdot {\boldsymbol {\mathit {\Gamma }}}]^{-1}=\sum _{n=0}^{\infty }(-{\boldsymbol {\delta }}\cdot {\boldsymbol {\mathit {\Gamma }}})^{n}~.}$

Then we have

${\displaystyle \mathbf {P} =\left[\sum _{n=0}^{\infty }(-{\boldsymbol {\delta }}\cdot {\boldsymbol {\mathit {\Gamma }}})^{n}\right]\cdot {\boldsymbol {\delta }}\cdot \langle \mathbf {E} \rangle ~.}$

Also, from the definition of ${\displaystyle \mathbf {P} }$ , we have

${\displaystyle \mathbf {P} ={\boldsymbol {\delta }}\cdot \mathbf {E} ~.}$

Hence,

${\displaystyle {\boldsymbol {\delta }}\cdot \mathbf {E} =\left[\sum _{n=0}^{\infty }(-{\boldsymbol {\delta }}\cdot {\boldsymbol {\mathit {\Gamma }}})^{n}\right]\cdot {\boldsymbol {\delta }}\cdot \langle \mathbf {E} \rangle ~.}$

Now,

${\displaystyle (-{\boldsymbol {\delta }}\cdot {\boldsymbol {\mathit {\Gamma }}})^{n}\cdot {\boldsymbol {\delta }}=(-1)^{n}~{\boldsymbol {\delta }}\cdot {\boldsymbol {\mathit {\Gamma }}}\cdot {\boldsymbol {\delta }}\cdot {\boldsymbol {\mathit {\Gamma }}}\dots {\boldsymbol {\delta }}\cdot {\boldsymbol {\mathit {\Gamma }}}\cdot {\boldsymbol {\delta }}={\boldsymbol {\delta }}\cdot (-{\boldsymbol {\mathit {\Gamma }}}\cdot {\boldsymbol {\delta }})^{n}~.}$

Therefore,

${\displaystyle {\boldsymbol {\delta }}\cdot \mathbf {E} ={\boldsymbol {\delta }}\cdot \left[\sum _{n=0}^{\infty }(-{\boldsymbol {\mathit {\Gamma }}}\cdot {\boldsymbol {\delta }})^{n}\right]\cdot \langle \mathbf {E} \rangle }$

or,

${\displaystyle {\text{(12)}}\qquad \mathbf {E} =\left[\sum _{n=0}^{\infty }(-{\boldsymbol {\mathit {\Gamma }}}\cdot {\boldsymbol {\delta }})^{n}\right]\cdot \langle \mathbf {E} \rangle ~.}$
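The series inversion and the rearrangement used above can be checked in a finite-dimensional analogue, where the operators become matrices. The sketch below (Python with NumPy; the matrix sizes and scalings are arbitrary choices, made so that the series converges) verifies the Neumann series for ${\displaystyle [{\boldsymbol {\mathit {1}}}+{\boldsymbol {\delta }}\cdot {\boldsymbol {\mathit {\Gamma }}}]^{-1}}$  and the identity ${\displaystyle (-{\boldsymbol {\delta }}\cdot {\boldsymbol {\mathit {\Gamma }}})^{n}\cdot {\boldsymbol {\delta }}={\boldsymbol {\delta }}\cdot (-{\boldsymbol {\mathit {\Gamma }}}\cdot {\boldsymbol {\delta }})^{n}}$ .

```python
import numpy as np

rng = np.random.default_rng(0)
# finite-dimensional stand-ins for the operators delta and Gamma
delta = rng.standard_normal((4, 4))
delta *= 0.5 / np.linalg.norm(delta, 2)   # scale so that ||delta|| = 0.5
Gamma = rng.standard_normal((4, 4))
Gamma *= 0.9 / np.linalg.norm(Gamma, 2)   # scale so that ||Gamma|| = 0.9

I = np.eye(4)
# partial sum of the Neumann series; here ||delta.Gamma|| <= 0.45 < 1
S = sum(np.linalg.matrix_power(-delta @ Gamma, n) for n in range(80))
assert np.allclose(S, np.linalg.inv(I + delta @ Gamma))

# the rearrangement (-delta.Gamma)^n . delta = delta . (-Gamma.delta)^n
for n in range(5):
    assert np.allclose(np.linalg.matrix_power(-delta @ Gamma, n) @ delta,
                       delta @ np.linalg.matrix_power(-Gamma @ delta, n))
```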

Let us now define

${\displaystyle {\text{(13)}}\qquad {\begin{aligned}\mathbf {P} ^{(m)}&:=\left[\sum _{n=0}^{m}(-{\boldsymbol {\delta }}\cdot {\boldsymbol {\mathit {\Gamma }}})^{n}\right]\cdot {\boldsymbol {\delta }}\cdot \langle \mathbf {E} \rangle \\\mathbf {E} ^{(m)}&:=\left[\sum _{n=0}^{m}(-{\boldsymbol {\mathit {\Gamma }}}\cdot {\boldsymbol {\delta }})^{n}\right]\cdot \langle \mathbf {E} \rangle ~.\end{aligned}}}$

Then we can write

${\displaystyle {\text{(14)}}\qquad {\boldsymbol {\mathit {\Gamma }}}\cdot \mathbf {P} ^{(m)}=\langle \mathbf {E} \rangle -\mathbf {E} ^{(m+1)}\qquad {\text{and}}\qquad {\boldsymbol {\delta }}\cdot \mathbf {E} ^{(m)}=\mathbf {P} ^{(m)}~.}$
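Both relations in (14) are exact algebraic identities for the partial sums (13), which can be confirmed in a finite-dimensional analogue (Python with NumPy; the matrices and the vector standing in for ${\displaystyle \langle \mathbf {E} \rangle }$  are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
delta = rng.standard_normal((4, 4))
delta *= 0.5 / np.linalg.norm(delta, 2)
Gamma = rng.standard_normal((4, 4))
Gamma *= 0.9 / np.linalg.norm(Gamma, 2)
e = rng.standard_normal(4)               # stands in for <E>

def E_m(m):
    # partial sum defining E^(m) in (13)
    return sum(np.linalg.matrix_power(-Gamma @ delta, n) for n in range(m + 1)) @ e

def P_m(m):
    # partial sum defining P^(m) in (13)
    return sum(np.linalg.matrix_power(-delta @ Gamma, n) for n in range(m + 1)) @ delta @ e

for m in range(4):
    assert np.allclose(delta @ E_m(m), P_m(m))           # delta . E^(m) = P^(m)
    assert np.allclose(Gamma @ P_m(m), e - E_m(m + 1))   # Gamma . P^(m) = <E> - E^(m+1)
```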

These recurrence relations can be used to compute the fields inductively. One possible algorithm is outlined below:

• Set ${\displaystyle m=0}$  and initialize
${\displaystyle \mathbf {E} ^{(0)}=\langle \mathbf {E} \rangle ~.}$
• Compute ${\displaystyle \mathbf {P} ^{(m)}(\mathbf {x} )}$  in real space using the relation
${\displaystyle \mathbf {P} ^{(m)}={\boldsymbol {\delta }}\cdot \mathbf {E} ^{(m)}~.}$
• Take a fast Fourier transform to find ${\displaystyle {\widehat {\mathbf {P} }}^{(m)}(\mathbf {k} )}$ .
• Using (14), compute ${\displaystyle {\widehat {\mathbf {E} }}^{(m+1)}(\mathbf {k} )}$  in Fourier space from
${\displaystyle {\widehat {\mathbf {E} }}^{(m+1)}={\begin{cases}-{\boldsymbol {\mathit {\widehat {\Gamma }}}}(\mathbf {k} )\cdot {\widehat {\mathbf {P} }}^{(m)}(\mathbf {k} )&{\text{for}}~~\mathbf {k} \neq 0\\\langle \mathbf {E} \rangle &{\text{for}}~~\mathbf {k} =0~.\end{cases}}}$
• Take an inverse fast Fourier transform to find ${\displaystyle \mathbf {E} ^{(m+1)}(\mathbf {x} )}$  in real space.
• Increment ${\displaystyle m}$  by 1 and repeat from the second step.

This is the method of Moulinec and Suquet (Mouli94). The method also extends to nonlinear problems (Mouli98). However, there are other iterative methods that have faster convergence.
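A minimal one-dimensional sketch of this fixed-point scheme is given below (Python with NumPy; this is an illustration, not the authors' code). It assumes a two-phase laminate with permittivities 2 and 8 and volume fractions 1/4 and 3/4, for which the exact effective permittivity in the layering direction is the harmonic mean ${\displaystyle \langle \epsilon ^{-1}\rangle ^{-1}=32/7}$ ; in 1D the operator ${\displaystyle {\boldsymbol {\mathit {\widehat {\Gamma }}}}(\mathbf {k} )}$  of (2) reduces to the scalar ${\displaystyle 1/\epsilon _{0}}$  for ${\displaystyle \mathbf {k} \neq 0}$ .

```python
import numpy as np

# two-phase 1D laminate: eps = 2 on a quarter of the cell, 8 on the rest
N = 64
eps = np.where(np.arange(N) < N // 4, 2.0, 8.0)
eps0 = 5.0          # reference permittivity; gamma = 3/5 < 1, so the scheme converges
E_avg = 1.0         # prescribed <E>

E = np.full(N, E_avg)
for m in range(100):
    P = (eps - eps0) * E        # polarization in real space
    P_hat = np.fft.fft(P)
    E_hat = -P_hat / eps0       # apply -Gamma_hat(k) (1D: Gamma_hat = 1/eps0, k != 0)
    E_hat[0] = E_avg * N        # the k = 0 term carries the prescribed mean
    E = np.fft.ifft(E_hat).real

# effective permittivity from <D> = eps_eff <E>
eps_eff = np.mean(eps * E) / np.mean(E)
assert abs(eps_eff - 32.0 / 7.0) < 1e-9   # exact laminate answer: <1/eps>^(-1)
```

At the fixed point ${\displaystyle {\boldsymbol {\epsilon }}_{0}\cdot \mathbf {E} +\mathbf {P} =\mathbf {D} }$  is spatially constant, which is exactly the 1D divergence-free condition.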

### Convergence of expansions

For simplicity, let us assume that ${\displaystyle {\boldsymbol {\epsilon }}_{0}}$  is isotropic, i.e., ${\displaystyle {\boldsymbol {\epsilon }}_{0}=\epsilon _{0}~{\boldsymbol {\mathit {1}}}}$ . Then,

${\displaystyle {\boldsymbol {\mathit {\Gamma }}}={\cfrac {{\boldsymbol {\mathit {\Gamma }}}_{1}}{\epsilon _{0}}}}$

where ${\displaystyle {\boldsymbol {\mathit {\Gamma }}}_{1}}$  is the projection from ${\displaystyle {\mathcal {H}}}$  onto ${\displaystyle {\mathcal {E}}}$ .

Define the norm of a field ${\displaystyle \mathbf {P} }$  as

${\displaystyle |\mathbf {P} |:=(\mathbf {P} ,\mathbf {P} )^{1/2}=\langle {\overline {\mathbf {P} }}\cdot \mathbf {P} \rangle ^{1/2}~.}$

Also, define the norm of a linear operator ${\displaystyle {\boldsymbol {A}}}$  acting on ${\displaystyle \mathbf {P} }$  as

${\displaystyle \lVert {\boldsymbol {A}}\rVert :=\sup _{\mathbf {P} \in {\mathcal {H}}~,~|\mathbf {P} |=1}|{\boldsymbol {A}}\cdot \mathbf {P} |~.}$

Therefore,

${\displaystyle |{\boldsymbol {A}}\cdot \mathbf {P} |\leq \lVert {\boldsymbol {A}}\rVert ~|\mathbf {P} |~.}$

Hence,

${\displaystyle |{\boldsymbol {A}}\cdot {\boldsymbol {B}}\cdot \mathbf {P} |\leq \lVert {\boldsymbol {A}}\rVert ~|{\boldsymbol {B}}\cdot \mathbf {P} |\leq \lVert {\boldsymbol {A}}\rVert ~\lVert {\boldsymbol {B}}\rVert ~|\mathbf {P} |~.}$

So

${\displaystyle \lVert {\boldsymbol {A}}\cdot {\boldsymbol {B}}\rVert \leq \lVert {\boldsymbol {A}}\rVert ~\lVert {\boldsymbol {B}}\rVert ~.}$

In addition, we have the triangle inequality

${\displaystyle \lVert {\boldsymbol {A}}+{\boldsymbol {B}}\rVert \leq \lVert {\boldsymbol {A}}\rVert +\lVert {\boldsymbol {B}}\rVert ~.}$

So from (12) and (13), we have

${\displaystyle {\begin{aligned}|\mathbf {E} -\mathbf {E} ^{(m)}|&=\left|\sum _{n=m+1}^{\infty }(-{\boldsymbol {\delta }}\cdot {\boldsymbol {\mathit {\Gamma }}})^{n}\cdot \langle \mathbf {E} \rangle \right|\\&\leq \left\lVert \sum _{n=m+1}^{\infty }(-{\boldsymbol {\delta }}\cdot {\boldsymbol {\mathit {\Gamma }}})^{n}\right\rVert ~|\langle \mathbf {E} \rangle |\\&\leq \left(\sum _{n=m+1}^{\infty }\left\lVert {\cfrac {\boldsymbol {\delta }}{\epsilon _{0}}}\right\rVert ^{n}~\lVert {\boldsymbol {\mathit {\Gamma }}}_{1}\rVert ^{n}\right)|\langle \mathbf {E} \rangle |~.\end{aligned}}}$

But ${\displaystyle \lVert {\boldsymbol {\mathit {\Gamma }}}_{1}\rVert =1}$  since it is an orthogonal projection onto ${\displaystyle {\mathcal {E}}}$ . Hence,

${\displaystyle |\mathbf {E} -\mathbf {E} ^{(m)}|\leq \left(\sum _{n=m+1}^{\infty }\lVert {\cfrac {\boldsymbol {\delta }}{\epsilon _{0}}}\rVert ^{n}\right)\cdot |\langle \mathbf {E} \rangle |~.}$

Therefore the series converges provided

${\displaystyle \lVert {\cfrac {\boldsymbol {\delta }}{\epsilon _{0}}}\rVert <1~.}$

In that case

${\displaystyle |\mathbf {E} -\mathbf {E} ^{(m)}|\leq {\cfrac {\gamma ^{m+1}}{1-\gamma }}~|\langle \mathbf {E} \rangle |}$

where

${\displaystyle \gamma :=\lVert {\cfrac {\boldsymbol {\delta }}{\epsilon _{0}}}\rVert ~.}$
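This error bound can be observed numerically. The sketch below (Python with NumPy; the two-phase 1D laminate and reference permittivity are arbitrary choices) runs the fixed-point scheme of the previous section, whose iterates are exactly the partial sums ${\displaystyle \mathbf {E} ^{(m)}}$ , and checks that the error never exceeds ${\displaystyle \gamma ^{m+1}/(1-\gamma )}$  with ${\displaystyle |\langle \mathbf {E} \rangle |=1}$ .

```python
import numpy as np

# two-phase 1D laminate with volume fractions 1/4 and 3/4
N = 64
eps = np.where(np.arange(N) < N // 4, 2.0, 8.0)
eps0 = 5.0
gamma = np.max(np.abs(eps - eps0)) / abs(eps0)   # = 0.6 here

def norm(f):
    # |f| = <|f|^2>^(1/2), the volume-averaged L2 norm
    return np.sqrt(np.mean(np.abs(f) ** 2))

def step(E):
    # one fixed-point sweep: E -> <E> - Gamma.delta.E, with <E> = 1
    P_hat = np.fft.fft((eps - eps0) * E)
    E_hat = -P_hat / eps0       # -Gamma_hat for k != 0 (1D: Gamma_hat = 1/eps0)
    E_hat[0] = 1.0 * N          # the k = 0 term carries the mean <E> = 1
    return np.fft.ifft(E_hat).real

# reference field: iterate to machine precision
E_ref = np.ones(N)
for _ in range(200):
    E_ref = step(E_ref)

# the iterates are the partial sums E^(m); check |E - E^(m)| <= gamma^(m+1)/(1-gamma)
E = np.ones(N)
for m in range(11):
    assert norm(E_ref - E) <= gamma ** (m + 1) / (1 - gamma) + 1e-12
    E = step(E)
```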

To get a better understanding of the norm ${\displaystyle \gamma }$ , let us consider an ${\displaystyle n}$ -phase composite with isotropic phases, i.e.,

${\displaystyle {\boldsymbol {\epsilon }}=\sum _{i=1}^{n}\chi _{i}~\epsilon _{i}~{\boldsymbol {\mathit {1}}}}$

where

${\displaystyle \chi _{i}={\begin{cases}1&{\text{in phase}}~~i\\0&{\text{otherwise}}.\end{cases}}}$

In this case,

${\displaystyle \gamma =\lVert {\cfrac {\boldsymbol {\delta }}{\epsilon _{0}}}\rVert =\sup _{\mathbf {P} \in {\mathcal {H}}~,~|\mathbf {P} |=1}\left(\sum _{i=1}^{n}\left\langle {\overline {\left({\cfrac {\epsilon _{i}-\epsilon _{0}}{\epsilon _{0}}}~\mathbf {P} _{i}\right)}}\cdot \left({\cfrac {\epsilon _{i}-\epsilon _{0}}{\epsilon _{0}}}~\mathbf {P} _{i}\right)\right\rangle \right)^{1/2}}$

where

${\displaystyle \mathbf {P} _{i}=\chi _{i}~\mathbf {P} \quad {\text{and}}\quad |\mathbf {P} |=\left(\sum _{i}\langle {\overline {\mathbf {P} _{i}}}\cdot \mathbf {P} _{i}\rangle \right)^{1/2}=1~.}$

The cross terms vanish because the fields ${\displaystyle \mathbf {P} _{i}}$  have disjoint supports. Hence,

${\displaystyle \gamma =\sup _{\mathbf {P} \in {\mathcal {H}}~,~|\mathbf {P} |=1}\left(\sum _{i=1}^{n}\left|{\cfrac {\epsilon _{i}-\epsilon _{0}}{\epsilon _{0}}}\right|^{2}\langle {\overline {\mathbf {P} }}_{i}\cdot \mathbf {P} _{i}\rangle \right)^{1/2}~.}$

Since the quantities ${\displaystyle \langle {\overline {\mathbf {P} }}_{i}\cdot \mathbf {P} _{i}\rangle }$  are weights that sum to one, the supremum is attained by putting all the weight on the phase where ${\displaystyle |\epsilon _{i}-\epsilon _{0}|}$  is largest. Hence, we can write

${\displaystyle \gamma =\max _{i}\left|{\cfrac {\epsilon _{i}-\epsilon _{0}}{\epsilon _{0}}}\right|=\max _{i}{\cfrac {|\epsilon _{i}-\epsilon _{0}|}{|\epsilon _{0}|}}~.}$

For ${\displaystyle \gamma }$  to be less than 1, we therefore require that, for all ${\displaystyle i}$ ,

${\displaystyle |\epsilon _{i}-\epsilon _{0}|<|\epsilon _{0}|~.}$

A geometrical representation of this situation is shown in Figure 1.

Figure 1. Allowable values of ${\displaystyle \epsilon _{i}}$  for convergence of the series expansion.

If the value of ${\displaystyle \epsilon _{0}}$  is sufficiently large, then we get convergence if all the ${\displaystyle \epsilon _{i}}$ 's lie in one half of the complex plane (shown by the green line in the figure).

Similarly, we can expand

${\displaystyle {\boldsymbol {\epsilon }}_{\text{eff}}={\boldsymbol {\epsilon }}_{0}+\langle [{\boldsymbol {\mathit {1}}}+({\boldsymbol {\epsilon }}-{\boldsymbol {\epsilon }}_{0})\cdot {\boldsymbol {\mathit {\Gamma }}}]^{-1}\cdot ({\boldsymbol {\epsilon }}-{\boldsymbol {\epsilon }}_{0})\rangle }$

in the form

${\displaystyle {\text{(15)}}\qquad {\boldsymbol {\epsilon }}_{\text{eff}}={\boldsymbol {\epsilon }}_{0}+\sum _{j=0}^{\infty }{\boldsymbol {\mathit {\Gamma }}}_{0}\cdot {\boldsymbol {\delta }}\cdot (-{\boldsymbol {\mathit {\Gamma }}}\cdot {\boldsymbol {\delta }})^{j}\cdot {\boldsymbol {\mathit {\Gamma }}}_{0}}$

where ${\displaystyle {\boldsymbol {\mathit {\Gamma }}}_{0}}$  is the orthogonal projection onto ${\displaystyle {\mathcal {U}}}$ , i.e.,

${\displaystyle {\boldsymbol {\mathit {\Gamma }}}_{0}\cdot \mathbf {P} =\langle \mathbf {P} \rangle ~.}$

In this case, we find that the series converges provided

${\displaystyle |\epsilon _{i}-\epsilon _{0}|<|\epsilon _{0}|\qquad {\text{for all}}~~i~.}$
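A concrete check of expansion (15) is possible in one dimension, where every operator becomes a scalar multiplication or an averaging. The sketch below (Python with NumPy; the two-phase laminate is an arbitrary choice, with exact answer the harmonic mean ${\displaystyle \langle \epsilon ^{-1}\rangle ^{-1}=32/7}$ ) sums the scalar form of the series.

```python
import numpy as np

N = 64
eps = np.where(np.arange(N) < N // 4, 2.0, 8.0)   # 1D laminate, fractions 1/4 and 3/4
eps0 = 5.0
delta = eps - eps0

def Gamma(g):
    # in 1D, Gamma.g = (g - <g>)/eps0: project out the mean, divide by eps0
    return (g - np.mean(g)) / eps0

# eps_eff = eps0 + sum_j <delta.(-Gamma.delta)^j.1>, the scalar form of (15)
f = np.ones(N)          # (-Gamma.delta)^0 applied to the uniform field
eps_eff = eps0
for j in range(200):
    eps_eff += np.mean(delta * f)
    f = -Gamma(delta * f)

assert abs(eps_eff - 32.0 / 7.0) < 1e-9   # matches the exact laminate answer <1/eps>^(-1)
```

Each partial sum is a polynomial in the phase permittivities, which is the analyticity property discussed next.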

Note that each term in (15) is an analytic function of ${\displaystyle \epsilon _{1}}$  (in fact a polynomial). So, if we truncate the series, we have an analytic function of ${\displaystyle \epsilon _{1}}$ .

Since a sequence of analytic functions which is uniformly convergent in a domain converges to a function which is analytic in that domain (see, for example, Rudin76), we deduce that ${\displaystyle {\boldsymbol {\epsilon }}_{\text{eff}}}$  is an analytic function of ${\displaystyle \epsilon _{1}}$  in the disk ${\displaystyle |\epsilon _{1}-\epsilon _{0}|<|\epsilon _{0}|}$  (see Figure 1) provided ${\displaystyle |\epsilon _{i}-\epsilon _{0}|<|\epsilon _{0}|}$  for ${\displaystyle i=2,3,\dots ,n}$ .

Similarly, the effective tensor is an analytic function of ${\displaystyle \epsilon _{2}}$ , ${\displaystyle \epsilon _{3}}$ , etc.

Since ${\displaystyle {\boldsymbol {\epsilon }}_{\text{eff}}}$  is independent of ${\displaystyle \epsilon _{0}}$ , by taking the union of all such regions of analyticity, we conclude that ${\displaystyle {\boldsymbol {\epsilon }}_{\text{eff}}}$  is an analytic function of ${\displaystyle \epsilon _{1},\dots ,\epsilon _{n}}$  provided all the ${\displaystyle \epsilon _{i}}$ 's lie inside a half-plane (see Figure 1). This means that there exists a ${\displaystyle \theta }$  such that

${\displaystyle {\text{Re}}(e^{i\theta }~\epsilon _{j})>0\quad {\text{for all}}~j~.}$

#### Corollary:

A corollary of the above observations is the following. If each ${\displaystyle \epsilon _{i}(\omega )}$  is an analytic function of ${\displaystyle \omega }$  for ${\displaystyle {\text{Im}}(\omega )>0}$  (which is what one expects with ${\displaystyle \omega }$  as the frequency) and ${\displaystyle {\text{Im}}[\epsilon _{i}(\omega )]>0}$  for all ${\displaystyle \omega }$  with ${\displaystyle {\text{Re}}(\omega )>0}$ , then ${\displaystyle {\boldsymbol {\epsilon }}_{\text{eff}}(\omega )}$  will be analytic in ${\displaystyle \omega }$ .

#### Another interesting property:

Now, if ${\displaystyle \mathbf {D} ={\boldsymbol {\epsilon }}\cdot \mathbf {E} }$ , we have

${\displaystyle \lambda ~\mathbf {D} =\lambda ~{\boldsymbol {\epsilon }}\cdot \mathbf {E} \qquad {\text{for all constants}}~\lambda ~.}$

Therefore,

${\displaystyle \langle \mathbf {D} \rangle ={\boldsymbol {\epsilon }}_{\text{eff}}\cdot \langle \mathbf {E} \rangle \qquad \implies \qquad \langle \lambda ~\mathbf {D} \rangle =\lambda ~{\boldsymbol {\epsilon }}_{\text{eff}}\cdot \langle \mathbf {E} \rangle ~.}$

This means that

${\displaystyle (\lambda ~{\boldsymbol {\epsilon }})_{\text{eff}}=\lambda ~{\boldsymbol {\epsilon }}_{\text{eff}}~.}$

Therefore, ${\displaystyle {\boldsymbol {\epsilon }}_{\text{eff}}(\epsilon _{1},\epsilon _{2},\dots ,\epsilon _{n})}$  is homogeneous of degree one and

${\displaystyle {{\boldsymbol {\epsilon }}_{\text{eff}}(\lambda ~\epsilon _{1},\lambda ~\epsilon _{2},\dots ,\lambda ~\epsilon _{n})=\lambda ~{\boldsymbol {\epsilon }}_{\text{eff}}(\epsilon _{1},\epsilon _{2},\dots ,\epsilon _{n})~.}}$

For a two-phase composite, if we take

${\displaystyle \lambda ={\cfrac {1}{\epsilon _{2}}}}$

we get

${\displaystyle {\boldsymbol {\epsilon }}_{\text{eff}}(\epsilon _{1},\epsilon _{2})=\epsilon _{2}~{\boldsymbol {\epsilon }}_{\text{eff}}(\epsilon _{1}/\epsilon _{2},1)~.}$

Therefore, it suffices to study the analytic function ${\displaystyle {\boldsymbol {\epsilon }}_{\text{eff}}(\epsilon _{1},1)={\boldsymbol {\epsilon }}_{\text{eff}}(\epsilon _{1})}$ . For further details see Milton02.
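Both the homogeneity and the two-phase reduction can be illustrated on the one case where ${\displaystyle {\boldsymbol {\epsilon }}_{\text{eff}}}$  is known in closed form: a two-phase 1D laminate in the layering direction, where ${\displaystyle \epsilon _{\text{eff}}=\langle \epsilon ^{-1}\rangle ^{-1}}$  (a Python sketch; the volume fractions and the scaling factor are arbitrary choices).

```python
# exact effective permittivity of a two-phase 1D laminate in the layering
# direction: the harmonic mean with volume fractions f1 and 1 - f1
def eps_eff(e1, e2, f1=0.25):
    return 1.0 / (f1 / e1 + (1.0 - f1) / e2)

lam = 3.7
# homogeneity of degree one: eps_eff(lam e1, lam e2) = lam eps_eff(e1, e2)
assert abs(eps_eff(lam * 2.0, lam * 8.0) - lam * eps_eff(2.0, 8.0)) < 1e-12
# and the reduction eps_eff(e1, e2) = e2 eps_eff(e1/e2, 1)
assert abs(eps_eff(2.0, 8.0) - 8.0 * eps_eff(2.0 / 8.0, 1.0)) < 1e-12
```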

## References

• [Milton02]     G. W. Milton. Theory of Composites. Cambridge University Press, New York, 2002.
• [Mouli94]     H. Moulinec and P. Suquet. A fast numerical method for computing the linear and nonlinear mechanical properties of composites. Comptes rendus de l'Académie des sciences II, 318(11):1417--1423, 1994.
• [Mouli98]     H. Moulinec and P. Suquet. A numerical method for computing the overall response of nonlinear composites with complex microstructure. Comput. Methods Appl. Mech. Engrg., 157:69--94, 1998.
• [Rudin76]     W. Rudin. Principles of Mathematical Analysis. McGraw-Hill, New York, 1976.