The content of these notes is based on the lectures by Prof. Graeme W. Milton (University of Utah) given in a course on metamaterials in Spring 2007.
Hierarchical Laminates
In the previous lecture we found that, for rank-1 laminates, the effective permittivity can be calculated using the formula of Tartar-Murat-Lurie-Cherkaev. In this lecture we extend the ideas used to arrive at that formula to hierarchical laminates.[1]
An example of a hierarchical laminate is shown in Figure 1. The idea of such materials goes back to Maxwell. In the rank-2 laminate shown in the figure there are two length scales which are assumed to be sufficiently separated; this separation of length scales is what allows the finer laminate to be replaced by its effective tensor, so that the ideas of the previous lecture can be applied at each level.
Figure 1. A rank-2 hierarchical laminate.
Recall the Tartar-Murat-Lurie-Cherkaev formula for the effective permittivity of a rank-1 laminate:
$$ f_1\,[\epsilon_2\,\boldsymbol{1}-\boldsymbol{\epsilon}_{\text{eff}}]^{-1} = [\epsilon_2\,\boldsymbol{1}-\boldsymbol{\epsilon}_1]^{-1} - \frac{f_2}{\epsilon_2}\,\boldsymbol{\Gamma}_1(\mathbf{n})\,. $$
By iterating this formula one gets, for a rank-$m$ laminate,
$$ \text{(1)}\qquad f_1\,[\epsilon_2\,\boldsymbol{1}-\boldsymbol{\epsilon}_{\text{eff}}]^{-1} = [\epsilon_2\,\boldsymbol{1}-\boldsymbol{\epsilon}_1]^{-1} - \frac{f_2}{\epsilon_2}\,\boldsymbol{M} $$
where
$$ \boldsymbol{M} := \sum_{j=1}^{m} c_j\,\boldsymbol{\Gamma}_1(\mathbf{n}_j)\,; \qquad c_j = \frac{f^{(j-1)}-f^{(j)}}{1-f_1} \ge 0 $$
and $m$ is the number of laminates in the hierarchy, $f^{(j)}$ is the proportion of phase 1 in a rank-$j$ laminate, and $\mathbf{n}_j$ is the orientation of the $j$-th laminate.
In particular,
$$ f^{(m)} = f_1\,;\qquad f^{(0)} = 1 \qquad\implies\qquad f^{(j-1)} > f^{(j)}\,. $$
Then,
$$ \sum_{j=1}^{m} c_j = \frac{f^{(0)}-f^{(m)}}{1-f_1} = 1\,. $$
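The iterated formula can be sanity-checked numerically: building a rank-2 laminate by applying the rank-1 formula twice should agree with formula (1) applied directly. A minimal numpy sketch (the function name, phase values, normals, and volume fractions are illustrative, not from the lecture):

```python
import numpy as np

I = np.eye(3)

def gamma1(n):
    """Gamma_1(n) = n (x) n for a unit vector n."""
    return np.outer(n, n)

def laminate(eps_core, eps2, f_core, n):
    """Rank-1 formula: laminate a (possibly anisotropic) core eps_core (3x3)
    with an isotropic phase eps2 (scalar), layer normal n, core fraction f_core."""
    rhs = np.linalg.inv(eps2 * I - eps_core) - ((1 - f_core) / eps2) * gamma1(n)
    return eps2 * I - f_core * np.linalg.inv(rhs)

eps1, eps2 = 2.0, 10.0
n1, n2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
g, q = 0.5, 0.7                      # f(1) = g, f(2) = q * g

# Hierarchical construction: laminate once, then laminate the result again.
eps_rank1 = laminate(eps1 * I, eps2, g, n1)
eps_rank2 = laminate(eps_rank1, eps2, q, n2)

# Direct use of formula (1): M = sum_j c_j Gamma_1(n_j).
f1 = q * g
c1, c2 = (1 - g) / (1 - f1), (g - f1) / (1 - f1)
M = c1 * gamma1(n1) + c2 * gamma1(n2)
eps_direct = eps2 * I - f1 * np.linalg.inv(
    np.linalg.inv((eps2 - eps1) * I) - ((1 - f1) / eps2) * M)

assert np.allclose(eps_rank2, eps_direct)
```

The agreement confirms that the weights $c_j$ correctly track how much of phase 2 is added at each level of the hierarchy.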
For a rank-3 laminate, if the normals $\mathbf{n}_1$, $\mathbf{n}_2$, and $\mathbf{n}_3$ are three orthogonal vectors, then
$$ \boldsymbol{M} = c_1\,\mathbf{n}_1\otimes\mathbf{n}_1 + c_2\,\mathbf{n}_2\otimes\mathbf{n}_2 + c_3\,\mathbf{n}_3\otimes\mathbf{n}_3\,. $$
If we choose the $f^{(j)}$ so that $c_1 = c_2 = c_3 = \tfrac{1}{3}$, then
$$ \boldsymbol{M} = \frac{1}{3}\,(\mathbf{n}_1\otimes\mathbf{n}_1 + \mathbf{n}_2\otimes\mathbf{n}_2 + \mathbf{n}_3\otimes\mathbf{n}_3) = \frac{1}{3}\,\boldsymbol{1} $$
since the three orthonormal dyads sum to the identity.
In this case, equation (1) coincides with the solution for the Hashin sphere assemblage!
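This coincidence can be checked numerically: with equal weights over an orthonormal triad, formula (1) produces an isotropic tensor that matches the Hashin-Shtrikman value attained by the coated-sphere assemblage. A small numpy check (the phase values and volume fraction are illustrative):

```python
import numpy as np

I = np.eye(3)
eps1, eps2, f1 = 2.0, 10.0, 0.4
f2 = 1 - f1

# Equal weights c_j = 1/3 over an orthonormal triad give M = (1/3) I.
M = sum((1.0 / 3) * np.outer(n, n) for n in I)

# Formula (1) solved for the effective permittivity.
eps_eff = eps2 * I - f1 * np.linalg.inv(
    np.linalg.inv((eps2 - eps1) * I) - (f2 / eps2) * M)

# Hashin coated-sphere assemblage (Hashin-Shtrikman value): spheres of
# phase 1 embedded in a matrix of phase 2.
eps_hs = eps2 + f1 / (1 / (eps1 - eps2) + f2 / (3 * eps2))

assert np.allclose(eps_eff, eps_hs * I)
```

The match illustrates the point made below: very different microgeometries can share the same effective tensor.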
This implies that different geometries can have the same effective permittivity $\boldsymbol{\epsilon}_{\text{eff}}(\epsilon_2, \boldsymbol{\epsilon}_1)$.

A natural question is whether the same ideas can be applied when one of the phases is anisotropic. The answer is yes.
In this case we use an anisotropic reference material $\boldsymbol{\epsilon}_0$ and define the polarization as
$$ \text{(2)}\qquad \mathbf{P}(\mathbf{x}) := [\boldsymbol{\epsilon}(\mathbf{x})-\boldsymbol{\epsilon}_0]\cdot\mathbf{E}(\mathbf{x}) = \mathbf{D}(\mathbf{x}) - \boldsymbol{\epsilon}_0\cdot\mathbf{E}(\mathbf{x})\,. $$
The volume average of this field is given by
$$ \langle\mathbf{P}\rangle := \langle\boldsymbol{\epsilon}\cdot\mathbf{E}\rangle - \boldsymbol{\epsilon}_0\cdot\langle\mathbf{E}\rangle = \langle\mathbf{D}\rangle - \boldsymbol{\epsilon}_0\cdot\langle\mathbf{E}\rangle\,. $$
Therefore, the difference between the field and its volume average is
$$ \text{(3)}\qquad \mathbf{P}(\mathbf{x}) - \langle\mathbf{P}\rangle = [\mathbf{D}(\mathbf{x})-\langle\mathbf{D}\rangle] - \boldsymbol{\epsilon}_0\cdot[\mathbf{E}(\mathbf{x})-\langle\mathbf{E}\rangle]\,. $$
Let us introduce a new matrix $\boldsymbol{\Gamma}(\mathbf{n})$ defined through its action on a vector $\mathbf{a}$, i.e.,
$$ \text{(4)}\qquad \mathbf{b} = \boldsymbol{\Gamma}(\mathbf{n})\cdot\mathbf{a} \qquad\text{if and only if}\qquad \begin{cases} \boldsymbol{\Gamma}_1(\mathbf{n})\cdot\mathbf{b} = \mathbf{b} \\ \boldsymbol{\Gamma}_1(\mathbf{n})\cdot(\mathbf{a}-\boldsymbol{\epsilon}_0\cdot\mathbf{b}) = \boldsymbol{0} \end{cases} $$
where $\boldsymbol{\Gamma}_1(\mathbf{n}) = \mathbf{n}\otimes\mathbf{n}$ projects parallel to $\mathbf{n}$. Therefore,
$$ \boldsymbol{\Gamma}_1(\mathbf{n})\cdot\mathbf{b} = \mathbf{b} \qquad\implies\qquad \alpha\,\mathbf{n} = \mathbf{b} $$
where $\alpha = \mathbf{b}\cdot\mathbf{n}$. Also,
$$ \boldsymbol{\Gamma}_1(\mathbf{n})\cdot(\mathbf{a}-\boldsymbol{\epsilon}_0\cdot\mathbf{b}) = \boldsymbol{0} \qquad\implies\qquad \mathbf{a}\cdot\mathbf{n} - \alpha\,(\mathbf{n}\cdot\boldsymbol{\epsilon}_0\cdot\mathbf{n}) = 0\,. $$
Therefore,
$$ \alpha = \frac{\mathbf{a}\cdot\mathbf{n}}{\mathbf{n}\cdot\boldsymbol{\epsilon}_0\cdot\mathbf{n}} \qquad\text{and}\qquad \mathbf{b} = \left(\frac{\mathbf{a}\cdot\mathbf{n}}{\mathbf{n}\cdot\boldsymbol{\epsilon}_0\cdot\mathbf{n}}\right)\mathbf{n} = \frac{(\mathbf{n}\otimes\mathbf{n})\cdot\mathbf{a}}{\mathbf{n}\cdot\boldsymbol{\epsilon}_0\cdot\mathbf{n}} = \frac{\boldsymbol{\Gamma}_1(\mathbf{n})\cdot\mathbf{a}}{\mathbf{n}\cdot\boldsymbol{\epsilon}_0\cdot\mathbf{n}}\,. $$
From the definition of $\boldsymbol{\Gamma}(\mathbf{n})$ we then have
$$ \text{(5)}\qquad \boldsymbol{\Gamma}(\mathbf{n}) = \frac{\boldsymbol{\Gamma}_1(\mathbf{n})}{\mathbf{n}\cdot\boldsymbol{\epsilon}_0\cdot\mathbf{n}}\,. $$
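Both conditions in (4) can be verified directly for the closed form (5): pick any symmetric reference tensor, form $\mathbf{b} = \boldsymbol{\Gamma}(\mathbf{n})\cdot\mathbf{a}$, and check that $\mathbf{b}$ is parallel to $\mathbf{n}$ while $\mathbf{a}-\boldsymbol{\epsilon}_0\cdot\mathbf{b}$ has no component along $\mathbf{n}$. A short numpy sketch (the particular $\boldsymbol{\epsilon}_0$, $\mathbf{n}$, and $\mathbf{a}$ are arbitrary test values):

```python
import numpy as np

rng = np.random.default_rng(0)

eps0 = np.diag([3.0, 5.0, 7.0])        # an anisotropic reference tensor
n = np.array([1.0, 2.0, 2.0]) / 3.0    # unit normal
a = rng.standard_normal(3)             # arbitrary vector

gamma1 = np.outer(n, n)                # Gamma_1(n) = n (x) n
gamma = gamma1 / (n @ eps0 @ n)        # equation (5)
b = gamma @ a

# Condition 1 of (4): Gamma_1(n) . b = b, i.e. b is parallel to n.
assert np.allclose(gamma1 @ b, b)
# Condition 2 of (4): Gamma_1(n) . (a - eps0 . b) = 0.
assert np.allclose(gamma1 @ (a - eps0 @ b), 0.0)
```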
Taking the projection of both sides of equation (3) we get
$$ \boldsymbol{\Gamma}_1(\mathbf{n})\cdot[\mathbf{P}(\mathbf{x})-\langle\mathbf{P}\rangle] = \boldsymbol{\Gamma}_1(\mathbf{n})\cdot[\mathbf{D}(\mathbf{x})-\langle\mathbf{D}\rangle] - \boldsymbol{\Gamma}_1(\mathbf{n})\cdot\{\boldsymbol{\epsilon}_0\cdot[\mathbf{E}(\mathbf{x})-\langle\mathbf{E}\rangle]\}\,. $$
Now, continuity of the normal component of $\mathbf{D}$ and the piecewise constant nature of the field imply that the normal component of $\mathbf{D}$ is constant. Therefore,
$$ \boldsymbol{\Gamma}_1(\mathbf{n})\cdot[\mathbf{D}(\mathbf{x})-\langle\mathbf{D}\rangle] = [\mathbf{n}\cdot\mathbf{D}(\mathbf{x}) - \mathbf{n}\cdot\langle\mathbf{D}\rangle]\,\mathbf{n} = \boldsymbol{0}\,. $$
Hence we have,
$$ \text{(6)}\qquad \boldsymbol{\Gamma}_1(\mathbf{n})\cdot\left\{\mathbf{P}(\mathbf{x})-\langle\mathbf{P}\rangle + \boldsymbol{\epsilon}_0\cdot[\mathbf{E}(\mathbf{x})-\langle\mathbf{E}\rangle]\right\} = \boldsymbol{0}\,. $$
Recall from the previous lecture that
$$ \text{(7)}\qquad \boldsymbol{\Gamma}_1(\mathbf{n})\cdot\mathbf{E}(\mathbf{x}) = \mathbf{E}(\mathbf{x}) - \langle\mathbf{E}\rangle + \boldsymbol{\Gamma}_1(\mathbf{n})\cdot\langle\mathbf{E}\rangle \quad\implies\quad \boldsymbol{\Gamma}_1(\mathbf{n})\cdot[\mathbf{E}(\mathbf{x})-\langle\mathbf{E}\rangle] = \mathbf{E}(\mathbf{x})-\langle\mathbf{E}\rangle\,. $$
Since the conditions in (4) are satisfied with
$$ \mathbf{a} := \mathbf{P}(\mathbf{x})-\langle\mathbf{P}\rangle \qquad\text{and}\qquad \mathbf{b} := -[\mathbf{E}(\mathbf{x})-\langle\mathbf{E}\rangle]\,, $$
it follows from the definition of $\boldsymbol{\Gamma}$ that
$$ \text{(8)}\qquad -[\mathbf{E}(\mathbf{x})-\langle\mathbf{E}\rangle] = \boldsymbol{\Gamma}(\mathbf{n})\cdot[\mathbf{P}(\mathbf{x})-\langle\mathbf{P}\rangle]\,. $$
Now, from equation (2) we have
$$ \mathbf{E}(\mathbf{x}) = [\boldsymbol{\epsilon}(\mathbf{x})-\boldsymbol{\epsilon}_0]^{-1}\cdot\mathbf{P}(\mathbf{x})\,. $$
Plugging this into (8) gives
$$ -[\boldsymbol{\epsilon}(\mathbf{x})-\boldsymbol{\epsilon}_0]^{-1}\cdot\mathbf{P}(\mathbf{x}) + \langle\mathbf{E}\rangle = \boldsymbol{\Gamma}(\mathbf{n})\cdot[\mathbf{P}(\mathbf{x})-\langle\mathbf{P}\rangle] $$
or,
$$ \{[\boldsymbol{\epsilon}(\mathbf{x})-\boldsymbol{\epsilon}_0]^{-1} + \boldsymbol{\Gamma}(\mathbf{n})\}\cdot\mathbf{P}(\mathbf{x}) = \langle\mathbf{E}\rangle + \boldsymbol{\Gamma}(\mathbf{n})\cdot\langle\mathbf{P}\rangle\,. $$
Define
$$ \mathbf{V} := \langle\mathbf{E}\rangle + \boldsymbol{\Gamma}(\mathbf{n})\cdot\langle\mathbf{P}\rangle $$
and note that this quantity is constant throughout the laminate. Therefore we can write
$$ \{[\boldsymbol{\epsilon}(\mathbf{x})-\boldsymbol{\epsilon}_0]^{-1} + \boldsymbol{\Gamma}(\mathbf{n})\}\cdot\mathbf{P}(\mathbf{x}) = \mathbf{V} $$
or
$$ \mathbf{P}(\mathbf{x}) = \{[\boldsymbol{\epsilon}(\mathbf{x})-\boldsymbol{\epsilon}_0]^{-1} + \boldsymbol{\Gamma}(\mathbf{n})\}^{-1}\cdot\mathbf{V}\,. $$
If we now take a volume average, we get
$$ \text{(9)}\qquad \langle\mathbf{P}\rangle = \langle\{[\boldsymbol{\epsilon}(\mathbf{x})-\boldsymbol{\epsilon}_0]^{-1} + \boldsymbol{\Gamma}(\mathbf{n})\}^{-1}\rangle\cdot\mathbf{V}\,. $$
Also, from the definition of $\mathbf{P}(\mathbf{x})$ we have
$$ \langle\mathbf{P}\rangle = [\boldsymbol{\epsilon}_{\text{eff}}-\boldsymbol{\epsilon}_0]\cdot\langle\mathbf{E}\rangle \qquad\implies\qquad \langle\mathbf{E}\rangle = [\boldsymbol{\epsilon}_{\text{eff}}-\boldsymbol{\epsilon}_0]^{-1}\cdot\langle\mathbf{P}\rangle\,. $$
Therefore,
$$ \mathbf{V} = \{[\boldsymbol{\epsilon}_{\text{eff}}-\boldsymbol{\epsilon}_0]^{-1} + \boldsymbol{\Gamma}(\mathbf{n})\}\cdot\langle\mathbf{P}\rangle $$
or,
$$ \text{(10)}\qquad \langle\mathbf{P}\rangle = \{[\boldsymbol{\epsilon}_{\text{eff}}-\boldsymbol{\epsilon}_0]^{-1} + \boldsymbol{\Gamma}(\mathbf{n})\}^{-1}\cdot\mathbf{V}\,. $$
Comparing equations (9) and (10) and invoking the arbitrariness of $\mathbf{V}$ (the average field $\langle\mathbf{E}\rangle$, and hence $\mathbf{V}$, can be chosen freely), we get
$$ \text{(11)}\qquad \{[\boldsymbol{\epsilon}_{\text{eff}}-\boldsymbol{\epsilon}_0]^{-1} + \boldsymbol{\Gamma}(\mathbf{n})\}^{-1} = \langle\{[\boldsymbol{\epsilon}(\mathbf{x})-\boldsymbol{\epsilon}_0]^{-1} + \boldsymbol{\Gamma}(\mathbf{n})\}^{-1}\rangle\,. $$
This relation has a simple form and can be used when the phases are anisotropic.
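One way to check equation (11) is to evaluate its right side for a two-phase rank-1 laminate with some arbitrary reference tensor, solve for the effective permittivity, and compare against the Tartar-Murat-Lurie-Cherkaev result; the answer should not depend on the choice of $\boldsymbol{\epsilon}_0$. A numpy sketch (the phase values, normal, and reference tensors are illustrative test choices):

```python
import numpy as np

I = np.eye(3)
eps1, eps2, f1 = 2.0, 10.0, 0.4
f2 = 1 - f1
n = np.array([3.0, 0.0, 4.0]) / 5.0   # unit layer normal

def eps_eff_from_11(eps0):
    """Solve (11) for eps_eff, given a reference tensor eps0 (3x3)."""
    gamma = np.outer(n, n) / (n @ eps0 @ n)
    avg = (f1 * np.linalg.inv(np.linalg.inv(eps1 * I - eps0) + gamma)
           + f2 * np.linalg.inv(np.linalg.inv(eps2 * I - eps0) + gamma))
    return eps0 + np.linalg.inv(np.linalg.inv(avg) - gamma)

# Tartar-Murat-Lurie-Cherkaev value for the same laminate.
tmlc = eps2 * I - f1 * np.linalg.inv(
    np.linalg.inv((eps2 - eps1) * I) - (f2 / eps2) * np.outer(n, n))

e_a = eps_eff_from_11(np.diag([5.0, 6.0, 7.0]))   # anisotropic reference
e_b = eps_eff_from_11(4.0 * I)                    # isotropic reference
assert np.allclose(e_a, tmlc) and np.allclose(e_b, tmlc)
```

The reference tensors are chosen so that $[\boldsymbol{\epsilon}(\mathbf{x})-\boldsymbol{\epsilon}_0]$ stays invertible in each phase; the degenerate choice $\boldsymbol{\epsilon}_0 = \boldsymbol{\epsilon}_2$ must be handled as a limit, as in the reduction below equation (11).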
For a simple (rank-1) laminate where $\boldsymbol{\epsilon}_0 = \boldsymbol{\epsilon}_2$, equation (11) reduces to
$$ f_1\,(\boldsymbol{\epsilon}_{\text{eff}}-\boldsymbol{\epsilon}_2)^{-1} = (\boldsymbol{\epsilon}_1-\boldsymbol{\epsilon}_2)^{-1} + f_2\,\boldsymbol{\Gamma}(\mathbf{n}) $$
where
$$ \boldsymbol{\Gamma}(\mathbf{n}) = \frac{\mathbf{n}\otimes\mathbf{n}}{\mathbf{n}\cdot\boldsymbol{\epsilon}_2\cdot\mathbf{n}}\,. $$
Linear Elastic Laminates
For elasticity, exactly the same analysis can be applied. In this case we introduce a reference stiffness tensor $\boldsymbol{\mathsf{C}}_0$ and define the second-order polarization tensor as
$$ \boldsymbol{P}(\mathbf{x}) = [\boldsymbol{\mathsf{C}}(\mathbf{x})-\boldsymbol{\mathsf{C}}_0] : \boldsymbol{\varepsilon}(\mathbf{x}) $$
where the strain $\boldsymbol{\varepsilon}(\mathbf{x})$ is given by
$$ \boldsymbol{\varepsilon}(\mathbf{x}) = \frac{1}{2}\,[\boldsymbol{\nabla}\mathbf{u} + (\boldsymbol{\nabla}\mathbf{u})^T]\,. $$
Following the same process as before, we can show that the effective elastic stiffness of a hierarchical laminate can be determined from the formula
$$ \text{(12)}\qquad \{[\boldsymbol{\mathsf{C}}_{\text{eff}}-\boldsymbol{\mathsf{C}}_0]^{-1} + \boldsymbol{\Gamma}(\mathbf{n})\}^{-1} = \langle\{[\boldsymbol{\mathsf{C}}(\mathbf{x})-\boldsymbol{\mathsf{C}}_0]^{-1} + \boldsymbol{\Gamma}(\mathbf{n})\}^{-1}\rangle $$
where (the components are in a rectangular Cartesian basis)
$$ [\boldsymbol{\Gamma}(\mathbf{n})]_{ijlm} = \frac{1}{4}\left[n_i\,\{\boldsymbol{C}^{-1}(\mathbf{n})\}_{jl}\,n_m + n_i\,\{\boldsymbol{C}^{-1}(\mathbf{n})\}_{jm}\,n_l + n_j\,\{\boldsymbol{C}^{-1}(\mathbf{n})\}_{il}\,n_m + n_j\,\{\boldsymbol{C}^{-1}(\mathbf{n})\}_{im}\,n_l\right] $$
and
$$ \boldsymbol{C}(\mathbf{n}) = \mathbf{n}\cdot\boldsymbol{\mathsf{C}}_0\cdot\mathbf{n}\,; $$
it is the inverse $\boldsymbol{C}^{-1}(\mathbf{n})$ of this matrix that appears in the formula above.
Note that $\boldsymbol{C}(\mathbf{n})$ has the same form as the acoustic tensor.
If $\boldsymbol{\mathsf{C}}_0$ is isotropic, i.e.,
$$ \boldsymbol{\mathsf{C}}_0 : \boldsymbol{\varepsilon} = \lambda_0\,\text{tr}(\boldsymbol{\varepsilon})\,\boldsymbol{1} + 2\,\mu_0\,\boldsymbol{\varepsilon} $$
where $\lambda_0$ is the Lamé modulus and $\mu_0$ is the shear modulus, $\boldsymbol{\Gamma}(\mathbf{n})$ simplifies to
$$ [\boldsymbol{\Gamma}(\mathbf{n})]_{ijlm} = \left(\frac{1}{\lambda_0+2\,\mu_0} - \frac{1}{\mu_0}\right) n_i\,n_j\,n_l\,n_m + \frac{1}{4\,\mu_0}\,(n_i\,\delta_{jl}\,n_m + n_i\,\delta_{jm}\,n_l + n_j\,\delta_{il}\,n_m + n_j\,\delta_{im}\,n_l)\,. $$
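The simplified expression can be checked against the general component formula by assembling the isotropic $\boldsymbol{\mathsf{C}}_0$, computing the acoustic-type matrix $\mathbf{n}\cdot\boldsymbol{\mathsf{C}}_0\cdot\mathbf{n}$ and its inverse, and comparing the two fourth-order tensors entry by entry. A numpy sketch (the moduli and normal are arbitrary test values):

```python
import numpy as np

lam, mu = 3.0, 2.0                     # lambda_0, mu_0 (test values)
n = np.array([2.0, 3.0, 6.0]) / 7.0    # unit normal
d = np.eye(3)                          # Kronecker delta

# Isotropic stiffness: C0_ijlm = lam d_ij d_lm + mu (d_il d_jm + d_im d_jl)
C0 = (lam * np.einsum('ij,lm->ijlm', d, d)
      + mu * (np.einsum('il,jm->ijlm', d, d) + np.einsum('im,jl->ijlm', d, d)))

# Acoustic-type matrix C(n)_jl = n_i C0_ijlm n_m, and its inverse
A = np.einsum('i,ijlm,m->jl', n, C0, n)
Ainv = np.linalg.inv(A)

# General component formula for Gamma(n)
G_general = 0.25 * (np.einsum('i,jl,m->ijlm', n, Ainv, n)
                    + np.einsum('i,jm,l->ijlm', n, Ainv, n)
                    + np.einsum('j,il,m->ijlm', n, Ainv, n)
                    + np.einsum('j,im,l->ijlm', n, Ainv, n))

# Simplified isotropic formula
nnnn = np.einsum('i,j,l,m->ijlm', n, n, n, n)
sym = (np.einsum('i,jl,m->ijlm', n, d, n) + np.einsum('i,jm,l->ijlm', n, d, n)
       + np.einsum('j,il,m->ijlm', n, d, n) + np.einsum('j,im,l->ijlm', n, d, n))
G_iso = (1 / (lam + 2 * mu) - 1 / mu) * nnnn + sym / (4 * mu)

assert np.allclose(G_general, G_iso)
```

For the isotropic case the acoustic-type matrix is $(\lambda_0+\mu_0)\,\mathbf{n}\otimes\mathbf{n} + \mu_0\,\boldsymbol{1}$, whose inverse supplies the $1/(\lambda_0+2\mu_0)$ and $1/\mu_0$ factors in the simplified formula.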
The methods discussed above can be generalized if we think in terms of a Hilbert space formalism. Recall that our goal is to find a general formula for $\boldsymbol{\epsilon}_{\text{eff}}$ and $\boldsymbol{\mathsf{C}}_{\text{eff}}$.
Let us consider a periodic material with unit cell $Q$. We will call such materials $Q$-periodic.
Consider the Hilbert space $\mathcal{H}$ of square-integrable, $Q$-periodic, complex vector fields with the inner product
$$ (\mathbf{a},\mathbf{b}) = \int_Q \overline{\mathbf{a}(\mathbf{x})}\cdot\mathbf{b}(\mathbf{x})\,\text{d}\mathbf{x} $$
where $\mathbf{a}$ and $\mathbf{b}$ are vector fields and $\overline{(\bullet)}$ denotes the complex conjugate. We can use Parseval's theorem to express the inner product in Fourier space as
$$ (\mathbf{a},\mathbf{b}) = \int_{\hat{Q}} \overline{\widehat{\mathbf{a}}(\mathbf{k})}\cdot\widehat{\mathbf{b}}(\mathbf{k})\,\text{d}\mathbf{k} $$
where $\mathbf{k}$ is the wave vector.
The Hilbert space $\mathcal{H}$ can be decomposed into three orthogonal subspaces:
The subspace $\mathcal{U}$ of uniform fields, i.e., $\mathbf{a}(\mathbf{x})$ is independent of $\mathbf{x}$, or in Fourier space, $\widehat{\mathbf{a}}(\mathbf{k}) = \boldsymbol{0}$ unless $\mathbf{k} = \boldsymbol{0}$.
The subspace $\mathcal{J}$ of zero-divergence, zero-average fields, i.e., $\boldsymbol{\nabla}\cdot\mathbf{a}(\mathbf{x}) = 0$ and $\langle\mathbf{a}(\mathbf{x})\rangle = \boldsymbol{0}$, or in Fourier space, $\widehat{\mathbf{a}}(\boldsymbol{0}) = \boldsymbol{0}$ and $\mathbf{k}\cdot\widehat{\mathbf{a}}(\mathbf{k}) = 0$.
The subspace $\mathcal{E}$ of zero-curl, zero-average fields, i.e., $\boldsymbol{\nabla}\times\mathbf{a}(\mathbf{x}) = \boldsymbol{0}$ and $\langle\mathbf{a}(\mathbf{x})\rangle = \boldsymbol{0}$, or in Fourier space, $\widehat{\mathbf{a}}(\boldsymbol{0}) = \boldsymbol{0}$ and $\widehat{\mathbf{a}}(\mathbf{k}) = \alpha(\mathbf{k})\,\mathbf{k}$.
Thus we can write
$$ \mathcal{H} = \mathcal{U}\oplus\mathcal{J}\oplus\mathcal{E}\,. $$
In Fourier space, we can clearly see that $(\mathbf{a},\mathbf{b}) = 0$ if we choose $\mathbf{a}$ from any one of $\mathcal{U}$, $\mathcal{J}$, $\mathcal{E}$ and $\mathbf{b}$ from a different subspace. Therefore the three subspaces are orthogonal.
Similarly, for elasticity, $\mathcal{H}$ is the Hilbert space of square-integrable, $Q$-periodic, complex, symmetric matrix valued fields with inner product
$$ (\boldsymbol{A},\boldsymbol{B}) = \int_Q \overline{\boldsymbol{A}(\mathbf{x})} : \boldsymbol{B}(\mathbf{x})\,\text{d}\mathbf{x}\,. $$
In Fourier space, we have
$$ (\boldsymbol{A},\boldsymbol{B}) = \int_{\hat{Q}} \overline{\widehat{\boldsymbol{A}}(\mathbf{k})} : \widehat{\boldsymbol{B}}(\mathbf{k})\,\text{d}\mathbf{k}\,. $$
Again we decompose the space $\mathcal{H}$ into three orthogonal subspaces $\mathcal{U}$, $\mathcal{J}$, and $\mathcal{E}$, where $\mathcal{U}$ is the subspace of uniform fields, i.e., $\boldsymbol{A}(\mathbf{x})$ is independent of $\mathbf{x}$, or in Fourier space, $\widehat{\boldsymbol{A}}(\mathbf{k}) = \boldsymbol{\mathit{0}}$ unless $\mathbf{k} = \boldsymbol{0}$.
$\mathcal{J}$ is the subspace of zero-divergence, zero-average fields, i.e., $\boldsymbol{\nabla}\cdot\boldsymbol{A}(\mathbf{x}) = \boldsymbol{0}$ and $\langle\boldsymbol{A}(\mathbf{x})\rangle = \boldsymbol{\mathit{0}}$, or in Fourier space, $\widehat{\boldsymbol{A}}(\boldsymbol{0}) = \boldsymbol{\mathit{0}}$ and $\widehat{\boldsymbol{A}}(\mathbf{k})\cdot\mathbf{k} = \boldsymbol{0}$.
$\mathcal{E}$ is the subspace of zero-average "strain" fields, i.e., $\langle\boldsymbol{A}(\mathbf{x})\rangle = \boldsymbol{\mathit{0}}$, or in Fourier space, $\widehat{\boldsymbol{A}}(\boldsymbol{0}) = \boldsymbol{\mathit{0}}$ and $\widehat{\boldsymbol{A}}(\mathbf{k}) = \mathbf{k}\otimes\mathbf{u}(\mathbf{k}) + \mathbf{u}(\mathbf{k})\otimes\mathbf{k}$ for some vector field $\mathbf{u}(\mathbf{k})$.
Problem of determining the effective tensor in an abstract setting
Let us first consider the problem of determining the effective permittivity. The approach will be to split the relevant fields into components that belong to orthogonal subspaces of $\mathcal{H}$.
Since $\boldsymbol{\nabla}\cdot\mathbf{D} = 0$, we can split $\mathbf{D}(\mathbf{x})$ into two parts,
$$ \mathbf{D}(\mathbf{x}) = \mathbf{D}_0 + \mathbf{D}_1(\mathbf{x}) $$
where $\mathbf{D}_0\in\mathcal{U}$ and $\mathbf{D}_1\in\mathcal{J}$.
Also, since $\boldsymbol{\nabla}\times\mathbf{E} = \boldsymbol{0}$, we can split $\mathbf{E}(\mathbf{x})$ into two parts,
$$ \mathbf{E}(\mathbf{x}) = \mathbf{E}_0 + \mathbf{E}_1(\mathbf{x}) $$
where $\mathbf{E}_0\in\mathcal{U}$ and $\mathbf{E}_1\in\mathcal{E}$.
The constitutive relation linking $\mathbf{D}$ and $\mathbf{E}$ is
$$ \mathbf{D}(\mathbf{x}) = \boldsymbol{\epsilon}(\mathbf{x})\cdot\mathbf{E}(\mathbf{x}) \equiv \boldsymbol{\epsilon}(\mathbf{E}) $$
where $\boldsymbol{\epsilon}(\bullet)$ can be thought of as an operator which is local in real space and maps $\mathcal{H}$ to $\mathcal{H}$. Therefore, we can write
$$ \mathbf{D}_0 + \mathbf{D}_1 = \boldsymbol{\epsilon}(\mathbf{E}_0 + \mathbf{E}_1)\,. $$
The effective permittivity $\boldsymbol{\epsilon}_{\text{eff}}$ is defined through the relation
$$ \mathbf{D}_0 = \boldsymbol{\epsilon}_{\text{eff}}(\mathbf{E}_0)\,. $$
Let $\boldsymbol{\Gamma}_1$ denote the projection operator that effects the projection of any vector in $\mathcal{H}$ onto the subspace $\mathcal{E}$. This projection is local in Fourier space. We can show that, if $\mathbf{b}(\mathbf{x}) = \boldsymbol{\Gamma}_1\cdot\mathbf{a}(\mathbf{x})$, then
$$ \widehat{\mathbf{b}}(\mathbf{k}) = \widehat{\boldsymbol{\Gamma}}_1(\mathbf{k})\cdot\widehat{\mathbf{a}}(\mathbf{k}) $$
where
$$ \widehat{\boldsymbol{\Gamma}}_1(\mathbf{k}) = \begin{cases} \dfrac{\mathbf{k}\otimes\mathbf{k}}{|\mathbf{k}|^2} = \boldsymbol{\Gamma}_1(\mathbf{n}) & \text{with } \mathbf{n} = \dfrac{\mathbf{k}}{|\mathbf{k}|} \text{ if } \mathbf{k}\neq\boldsymbol{0} \\[1ex] \boldsymbol{\mathit{0}} & \text{if } \mathbf{k} = \boldsymbol{0}\,. \end{cases} $$
More generally, if we choose some reference matrix $\boldsymbol{\epsilon}_0$, we can define an operator $\boldsymbol{\Gamma}$ which is local in Fourier space via the relation $\mathbf{b}(\mathbf{x}) = \boldsymbol{\Gamma}\cdot\mathbf{a}(\mathbf{x})$ if and only if
$$ \mathbf{b}\in\mathcal{E} \qquad\text{and}\qquad \mathbf{a} - \boldsymbol{\epsilon}_0(\mathbf{b})\in\mathcal{U}\oplus\mathcal{J} \implies \boldsymbol{\nabla}\cdot(\mathbf{a} - \boldsymbol{\epsilon}_0(\mathbf{b})) = 0\,. $$
In Fourier space,
$$ \widehat{\mathbf{b}}(\mathbf{k}) = \widehat{\boldsymbol{\Gamma}}(\mathbf{k})\cdot\widehat{\mathbf{a}}(\mathbf{k}) $$
where
$$ \widehat{\boldsymbol{\Gamma}}(\mathbf{k}) = \begin{cases} \dfrac{\mathbf{k}\otimes\mathbf{k}}{\mathbf{k}\cdot\boldsymbol{\epsilon}_0\cdot\mathbf{k}} = \boldsymbol{\Gamma}(\mathbf{n}) & \text{with } \mathbf{n} = \dfrac{\mathbf{k}}{|\mathbf{k}|} \text{ if } \mathbf{k}\neq\boldsymbol{0} \\[1ex] \boldsymbol{\mathit{0}} & \text{if } \mathbf{k} = \boldsymbol{0}\,. \end{cases} $$
In the next lecture we will derive relations for the effective tensors
using these ideas.
[1] The discussion in this lecture is based on [Milton02]. Please consult that book for more details and references.
[Milton02] G. W. Milton. Theory of Composites. Cambridge University Press, New York, 2002.