The polar decomposition theorem states that any second-order tensor whose determinant is positive can be decomposed uniquely into the product of an orthogonal tensor and a symmetric positive-definite tensor.
In continuum mechanics, the deformation gradient $\boldsymbol{F}$ is such a tensor because $\det(\boldsymbol{F}) > 0$. Therefore we can write
$$
  \boldsymbol{F} = \boldsymbol{R}\cdot\boldsymbol{U} = \boldsymbol{V}\cdot\boldsymbol{R}
$$
where $\boldsymbol{R}$ is an orthogonal tensor ($\boldsymbol{R}\cdot\boldsymbol{R}^T = \boldsymbol{\mathit{1}}$) and $\boldsymbol{U}, \boldsymbol{V}$ are symmetric tensors ($\boldsymbol{U} = \boldsymbol{U}^T$ and $\boldsymbol{V} = \boldsymbol{V}^T$) called the right stretch tensor and the left stretch tensor, respectively. This decomposition is called the polar decomposition of $\boldsymbol{F}$.
Recall that the right Cauchy-Green deformation tensor is defined as
$$
  \boldsymbol{C} = \boldsymbol{F}^T \cdot \boldsymbol{F}
$$
Clearly this is a symmetric tensor. From the polar decomposition of $\boldsymbol{F}$ we have
$$
  \boldsymbol{C} = \boldsymbol{U}^T\cdot\boldsymbol{R}^T\cdot\boldsymbol{R}\cdot\boldsymbol{U} = \boldsymbol{U}\cdot\boldsymbol{U} = \boldsymbol{U}^2
$$
If you know $\boldsymbol{C}$ then you can calculate $\boldsymbol{U}$ and hence $\boldsymbol{R}$ using $\boldsymbol{R} = \boldsymbol{F}\cdot\boldsymbol{U}^{-1}$.
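This recipe can be sketched numerically. The following is a minimal illustration using NumPy's symmetric eigensolver (`numpy.linalg.eigh`); the function name `polar_decomposition` and the sample matrix are ours, not from the text, and this is a sketch rather than a robust implementation:

```python
import numpy as np

def polar_decomposition(F):
    """Right polar decomposition F = R.U via the spectral
    decomposition of C = F^T.F (a sketch, not a robust routine)."""
    C = F.T @ F
    lam2, N = np.linalg.eigh(C)            # eigenvalues lambda_i^2 and eigenvectors of C
    U = N @ np.diag(np.sqrt(lam2)) @ N.T   # U = sqrt(C), symmetric positive-definite
    R = F @ np.linalg.inv(U)               # R = F.U^{-1}, orthogonal
    return R, U

# Any F with positive determinant works, e.g.
F = np.array([[1.0, 0.5], [0.2, 2.0]])
R, U = polar_decomposition(F)
print(np.allclose(R @ U, F), np.allclose(R @ R.T, np.eye(2)))  # True True
```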
How do you find the square root of a tensor?
If you want to find $\boldsymbol{U}$ given $\boldsymbol{C}$ you will need to take the square root of $\boldsymbol{C}$. How does one do that?
We use what is called the spectral decomposition or eigenprojection of $\boldsymbol{C}$. The spectral decomposition involves expressing $\boldsymbol{C}$ in terms of its eigenvalues and eigenvectors. The tensor products of the eigenvectors act as a basis while the eigenvalues give the magnitude of the projection. Thus,
$$
  \boldsymbol{C} = \sum_{i=1}^3 \lambda_i^2~\boldsymbol{N}_i\otimes\boldsymbol{N}_i
$$
where $\lambda_i^2$ are the principal values (eigenvalues) of $\boldsymbol{C}$ and $\boldsymbol{N}_i$ are the principal directions (eigenvectors) of $\boldsymbol{C}$.
Therefore,
$$
  \boldsymbol{U}^2 = \sum_{i=1}^3 \lambda_i^2~\boldsymbol{N}_i\otimes\boldsymbol{N}_i
$$
Since the basis does not change, we then have
$$
  \boldsymbol{U} = \sum_{i=1}^3 \lambda_i~\boldsymbol{N}_i\otimes\boldsymbol{N}_i
$$
Therefore the $\lambda_i$ can be interpreted as principal stretches and the vectors $\boldsymbol{N}_i$ are the directions of the principal stretches.
If
$$
  \boldsymbol{U} = \sum_{i=1}^3 \lambda_i~\boldsymbol{N}_i\otimes\boldsymbol{N}_i
$$
show that
$$
  \boldsymbol{U}^2 = \boldsymbol{U}\cdot\boldsymbol{U} = \sum_{i=1}^3 \lambda_i^2~\boldsymbol{N}_i\otimes\boldsymbol{N}_i~.
$$
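The identity can also be checked numerically. This short NumPy sketch (our own, using any symmetric positive-definite matrix) builds $\boldsymbol{U}$ from the eigenvalues and eigenvectors of $\boldsymbol{C}$ and verifies that $\boldsymbol{U}\cdot\boldsymbol{U}$ reproduces $\boldsymbol{C}$:

```python
import numpy as np

# Any symmetric positive-definite C will do for the check
C = np.array([[65.0, 27.0], [27.0, 41.0]]) / 16.0
lam2, N = np.linalg.eigh(C)     # eigenvalues lambda_i^2, eigenvector columns N_i
lam = np.sqrt(lam2)
# U = sum_i lambda_i N_i (x) N_i
U = sum(lam[i] * np.outer(N[:, i], N[:, i]) for i in range(2))
print(np.allclose(U @ U, C))    # True: U.U reproduces C
```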
Example of polar decomposition
Let us assume that the motion is given by
$$
\begin{aligned}
  x_1 &= \cfrac{1}{4}\left[4~X_1 + (9 - 3~X_1 - 5~X_2 - X_1~X_2)~t\right] \\
  x_2 &= X_2 + (4 + 2~X_1)~t
\end{aligned}
$$
The adjacent figure ("An example of a motion") shows how a unit square subjected to this motion evolves over time.
The deformation gradient is given by
$$
  \boldsymbol{F} = \frac{\partial\mathbf{x}}{\partial\boldsymbol{X}} \quad\implies\quad F_{ij} = \frac{\partial x_i}{\partial X_j}
$$
Therefore
$$
\begin{aligned}
  F_{11} &= \frac{\partial x_1}{\partial X_1} = \cfrac{1}{4}\left[4 + (-3 - X_2)~t\right] \\
  F_{12} &= \frac{\partial x_1}{\partial X_2} = \cfrac{1}{4}\left[(-5 - X_1)~t\right] \\
  F_{21} &= \frac{\partial x_2}{\partial X_1} = 2~t \\
  F_{22} &= \frac{\partial x_2}{\partial X_2} = 1
\end{aligned}
$$
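These analytic derivatives can be spot-checked with central finite differences. The helper names `motion` and `deformation_gradient` below are ours, and this is only a numerical sanity check, assuming NumPy:

```python
import numpy as np

def motion(X1, X2, t):
    """The motion given above, mapping reference coordinates (X1, X2) to (x1, x2)."""
    x1 = 0.25 * (4*X1 + (9 - 3*X1 - 5*X2 - X1*X2) * t)
    x2 = X2 + (4 + 2*X1) * t
    return np.array([x1, x2])

def deformation_gradient(X1, X2, t, h=1e-6):
    """F_ij = dx_i/dX_j approximated by central differences."""
    dX1 = (motion(X1 + h, X2, t) - motion(X1 - h, X2, t)) / (2*h)
    dX2 = (motion(X1, X2 + h, t) - motion(X1, X2 - h, t)) / (2*h)
    return np.column_stack([dX1, dX2])

F = deformation_gradient(0.0, 0.0, 1.0)
print(F)   # approximately [[0.25, -1.25], [2.0, 1.0]]
```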
At $t = 1$ at the position $\boldsymbol{X} = (0, 0)$ we have
$$
  \mathbf{F} = \begin{bmatrix} \frac{\partial x_1}{\partial X_1} & \frac{\partial x_1}{\partial X_2} \\ \frac{\partial x_2}{\partial X_1} & \frac{\partial x_2}{\partial X_2} \end{bmatrix}
  = \cfrac{1}{4}\begin{bmatrix} 1 & -5 \\ 8 & 4 \end{bmatrix}
$$
You can calculate the deformation gradient at other points in a similar manner.
We have
$$
  \boldsymbol{C} = \boldsymbol{F}^T\cdot\boldsymbol{F}
$$
Therefore,
$$
  \mathbf{C} = \mathbf{F}^T~\mathbf{F} = \cfrac{1}{16}\begin{bmatrix} 65 & 27 \\ 27 & 41 \end{bmatrix}
$$
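This matrix product is quick to reproduce in NumPy (a check of the arithmetic only):

```python
import numpy as np

F = np.array([[1.0, -5.0], [8.0, 4.0]]) / 4.0   # F at t = 1, X = (0, 0)
C = F.T @ F
print(C * 16)   # C * 16 is the matrix [[65, 27], [27, 41]]
```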
To compute $\boldsymbol{U}$ we have to find the eigenvalues and eigenvectors of $\boldsymbol{C}$.
The eigenvalue problem is
$$
  (\mathbf{C} - \lambda^2~\mathbf{I})\,\mathbf{N} = \mathbf{0}
$$
where
$$
  \mathbf{I} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}
$$
To find the eigenvalues we solve the characteristic equation
$$
  \det(\mathbf{C} - \lambda^2~\mathbf{I}) = 0
$$
Plugging in the numbers, we get
$$
  \det\begin{bmatrix} \cfrac{65}{16} - \lambda^2 & \cfrac{27}{16} \\ \cfrac{27}{16} & \cfrac{41}{16} - \lambda^2 \end{bmatrix} = 0
$$
or
$$
  \lambda^4 - \cfrac{53}{8}~\lambda^2 + \cfrac{121}{16} = 0
$$
This equation has two solutions
$$
\begin{aligned}
  \lambda_1^2 &= \cfrac{53}{16} + \cfrac{3}{16}~\sqrt{97} = 5.159 \\
  \lambda_2^2 &= \cfrac{53}{16} - \cfrac{3}{16}~\sqrt{97} = 1.466
\end{aligned}
$$
Taking the square roots, we get the values of the principal stretches
$$
  \lambda_1 = 2.2714 \qquad \lambda_2 = 1.2107
$$
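The same values come out of NumPy's symmetric eigensolver; `numpy.linalg.eigvalsh` returns the eigenvalues in ascending order, so the smaller stretch appears first:

```python
import numpy as np

C = np.array([[65.0, 27.0], [27.0, 41.0]]) / 16.0
lam2 = np.linalg.eigvalsh(C)   # eigenvalues lambda_i^2, ascending order
lam = np.sqrt(lam2)            # principal stretches
print(lam)                     # approximately [1.2107, 2.2714]
```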
To compute the eigenvectors we plug the eigenvalues into the eigenvalue problem to get
$$
  \left\{ \cfrac{1}{16}\begin{bmatrix} 65 & 27 \\ 27 & 41 \end{bmatrix} - \lambda_1^2~\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \right\}~\begin{bmatrix} N_1^{(1)} \\ N_2^{(1)} \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}
$$
Because the two equations in this system are linearly dependent, we need another equation to solve for $N_1^{(1)}$ and $N_2^{(1)}$.
This problem is eliminated by using the following equation (which implies that $\mathbf{N}$ is a unit vector)
$$
  N_2^{(1)} = \sqrt{1 - (N_1^{(1)})^2}
$$
Solving, we get
$$
  \mathbf{N}_1 = \begin{bmatrix} N_1^{(1)} \\ N_2^{(1)} \end{bmatrix} = \begin{bmatrix} 0.8385 \\ 0.5449 \end{bmatrix}
$$
We can do the same thing for the other eigenvector $\mathbf{N}_2$ to get
$$
  \mathbf{N}_2 = \begin{bmatrix} N_1^{(2)} \\ N_2^{(2)} \end{bmatrix} = \begin{bmatrix} -0.5449 \\ 0.8385 \end{bmatrix}
$$
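A library eigensolver returns the same directions, with one caveat worth noting: eigenvectors are only defined up to sign, so the columns of `numpy.linalg.eigh` may come out negated relative to the hand calculation. The sign-fixing convention below is ours:

```python
import numpy as np

C = np.array([[65.0, 27.0], [27.0, 41.0]]) / 16.0
lam2, N = np.linalg.eigh(C)
# Columns of N are unit eigenvectors, each defined only up to sign.
# eigh orders eigenvalues ascending, so column 1 pairs with lambda_1^2 = 5.159.
N1 = N[:, 1] * np.sign(N[0, 1])   # fix sign so the first component is positive
N2 = N[:, 0] * np.sign(N[1, 0])   # fix sign so the second component is positive
print(N1, N2)   # approximately [0.8385, 0.5449] and [-0.5449, 0.8385]
```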
Therefore,
$$
  \boldsymbol{N}_1\otimes\boldsymbol{N}_1 = \mathbf{N}_1~\mathbf{N}_1^T = \begin{bmatrix} 0.8385 \\ 0.5449 \end{bmatrix}\begin{bmatrix} 0.8385 & 0.5449 \end{bmatrix} = \begin{bmatrix} 0.7031 & 0.4569 \\ 0.4569 & 0.2969 \end{bmatrix}
$$
and
$$
  \boldsymbol{N}_2\otimes\boldsymbol{N}_2 = \mathbf{N}_2~\mathbf{N}_2^T = \begin{bmatrix} -0.5449 \\ 0.8385 \end{bmatrix}\begin{bmatrix} -0.5449 & 0.8385 \end{bmatrix} = \begin{bmatrix} 0.2969 & -0.4569 \\ -0.4569 & 0.7031 \end{bmatrix}
$$
Therefore,
$$
  \boldsymbol{C} = \lambda_1^2~\boldsymbol{N}_1\otimes\boldsymbol{N}_1 + \lambda_2^2~\boldsymbol{N}_2\otimes\boldsymbol{N}_2 \quad\implies\quad \mathbf{C} = 5.159~\begin{bmatrix} 0.7031 & 0.4569 \\ 0.4569 & 0.2969 \end{bmatrix} + 1.466~\begin{bmatrix} 0.2969 & -0.4569 \\ -0.4569 & 0.7031 \end{bmatrix}
$$
Since we already know $\boldsymbol{C}$, evaluating this sum serves only as a check; we can go straight to the right stretch tensor.
The right stretch tensor $\boldsymbol{U}$ is given by
$$
  \boldsymbol{U} = \lambda_1~\boldsymbol{N}_1\otimes\boldsymbol{N}_1 + \lambda_2~\boldsymbol{N}_2\otimes\boldsymbol{N}_2 \quad\implies\quad \mathbf{U} = 2.2714~\begin{bmatrix} 0.7031 & 0.4569 \\ 0.4569 & 0.2969 \end{bmatrix} + 1.2107~\begin{bmatrix} 0.2969 & -0.4569 \\ -0.4569 & 0.7031 \end{bmatrix}
$$
or
$$
  \mathbf{U} = \begin{bmatrix} 1.9565 & 0.4846 \\ 0.4846 & 1.5256 \end{bmatrix}
$$
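The spectral reconstruction of $\boldsymbol{U}$ can be written compactly as a matrix product, since $\sum_i \lambda_i~\boldsymbol{N}_i\otimes\boldsymbol{N}_i$ equals $\mathbf{N}\,\mathrm{diag}(\lambda_i)\,\mathbf{N}^T$ (the eigenvector sign ambiguity cancels in the outer products):

```python
import numpy as np

C = np.array([[65.0, 27.0], [27.0, 41.0]]) / 16.0
lam2, N = np.linalg.eigh(C)
U = N @ np.diag(np.sqrt(lam2)) @ N.T   # spectral square root of C
print(U)   # approximately [[1.9565, 0.4846], [0.4846, 1.5256]]
```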
We can invert this matrix to get
$$
  \mathbf{U}^{-1} = \begin{bmatrix} 0.5548 & -0.1762 \\ -0.1762 & 0.7114 \end{bmatrix}
$$
We can now find the rotation matrix by using the relation
$$
  \boldsymbol{R} = \boldsymbol{F}\cdot\boldsymbol{U}^{-1}
$$
In matrix form,
$$
  \mathbf{R} = \cfrac{1}{4}\begin{bmatrix} 1 & -5 \\ 8 & 4 \end{bmatrix}\begin{bmatrix} 0.5548 & -0.1762 \\ -0.1762 & 0.7114 \end{bmatrix} = \begin{bmatrix} 0.3590 & -0.9334 \\ 0.9334 & 0.3590 \end{bmatrix}
$$
You can check whether this matrix is orthogonal by seeing whether $\mathbf{R}~\mathbf{R}^T = \mathbf{R}^T~\mathbf{R} = \mathbf{I}$.
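The whole worked example, from $\boldsymbol{F}$ to the orthogonality check on $\boldsymbol{R}$, fits in a few lines of NumPy (a verification sketch, not a production routine):

```python
import numpy as np

F = np.array([[1.0, -5.0], [8.0, 4.0]]) / 4.0   # F at t = 1, X = (0, 0)
C = F.T @ F
lam2, N = np.linalg.eigh(C)
U = N @ np.diag(np.sqrt(lam2)) @ N.T             # right stretch tensor
R = F @ np.linalg.inv(U)                         # rotation tensor
print(R)                                         # approximately [[0.359, -0.9334], [0.9334, 0.359]]
print(np.allclose(R @ R.T, np.eye(2)))           # True: R is orthogonal to machine precision
```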
You thus get the polar decomposition of $\boldsymbol{F}$. In an actual calculation you have to be careful about floating-point errors; otherwise you might not get a matrix that is orthogonal.