Linear algebra/Orthogonal matrix

Alternative notations
${\displaystyle Q^{-1}=Q^{\mathrm {T} }}$ ${\displaystyle Q^{\mathrm {T} }Q=QQ^{\mathrm {T} }=I}$
${\displaystyle {\underline {\underline {Q}}}^{-1}={\underline {\underline {Q}}}^{\mathrm {T} }}$ ${\displaystyle {\underline {\underline {Q}}}^{\mathrm {T} }\cdot {\underline {\underline {Q}}}={\underline {\underline {Q}}}\cdot {\underline {\underline {Q}}}^{\mathrm {T} }={\underline {\underline {I}}}}$
${\displaystyle Q_{jk}^{-1}=Q_{kj}\equiv Q_{jk}^{\mathrm {T} }}$ ${\displaystyle \sum _{k}Q_{ki}Q_{kj}=\sum _{k}Q_{ik}Q_{jk}=\delta _{ij}}$

A real square matrix is orthogonal[1] if and only if its columns form an orthonormal basis of a Euclidean space in which all numbers are real-valued and the dot product is defined in the usual fashion.[2][3] An orthonormal basis in an N-dimensional space is one where (1) all the basis vectors have unit magnitude, and (2) distinct basis vectors are mutually orthogonal.[4]
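This definition is easy to verify numerically. The sketch below is a minimal check in plain Python (no external libraries); the 2×2 rotation matrix and the angle 0.3 are arbitrary illustrative choices, not part of the definition:

```python
import math

# Illustrative 2x2 orthogonal matrix: a rotation by an arbitrary angle
theta = 0.3
Q = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Extract the columns of Q
cols = [[Q[r][c] for r in range(2)] for c in range(2)]

# (1) each column has unit magnitude; (2) distinct columns are orthogonal
assert abs(dot(cols[0], cols[0]) - 1.0) < 1e-12
assert abs(dot(cols[1], cols[1]) - 1.0) < 1e-12
assert abs(dot(cols[0], cols[1])) < 1e-12
```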

Three important results that are easy to prove

Among the first things a novice should learn are those that are easy to prove.

Orthonormal basis vectors are hiding in plain sight

Theorem:

• If the rows of a square matrix form an orthonormal set of (basis) vectors,
then the transpose of that matrix is its own inverse ${\displaystyle (\mathbf {M} ^{T}=\mathbf {M} ^{-1})}$

Visual understanding

Suppose the rows of a matrix form an orthonormal set of basis vectors, as shown in the i-th row of matrix A to the right. The ij-th element of the product AB is the dot product of the i-th row of A with the j-th column of matrix B, as shown in the upper part of the diagram, where the j-th column is highlighted in yellow. In the diagram's lower part, matrix B is replaced by its transpose, which shifts the elements in column j to a row (highlighted in cyan). This establishes that the product of A with the transpose of B creates elements that are the dot product of rows of A with rows of B.

If A is an orthogonal matrix and B is its transpose, this procedure creates matrix elements that are dot products among the rows of the orthogonal matrix.
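The diagram's claim can be tested directly: a minimal sketch in plain Python, using a 2×2 rotation matrix (an arbitrary illustrative choice) whose rows are orthonormal, so that A times its transpose must be the identity:

```python
import math

# A's rows form an orthonormal set (A is a rotation matrix); theta is arbitrary
theta = 0.5
A = [[math.cos(theta), math.sin(theta)],
     [-math.sin(theta), math.cos(theta)]]

def transpose(X):
    return [list(row) for row in zip(*X)]

def matmul(X, Y):
    n, m, p = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(m)) for j in range(p)] for i in range(n)]

# Entry (i, j) of A times A-transpose is the dot product of row i of A
# with row j of A, so the product is the identity matrix
P = matmul(A, transpose(A))
for i in range(2):
    for j in range(2):
        expected = 1.0 if i == j else 0.0
        assert abs(P[i][j] - expected) < 1e-12
```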

Rigorous proof

This proof illustrates how subscripts are used to manipulate and understand tensors.

1. Suppose

${\displaystyle \mathbf {v_{i}} =\sum _{j}M_{ij}\,\mathbf {{\hat {e}}_{j}} }$  is the i-th member of an orthonormal set of basis vectors.
Here ${\displaystyle \mathbf {{\hat {e}}_{j}} }$  are the original unit vectors used to define the new set of unit vectors extracted from the rows of matrix ${\displaystyle \mathbf {\underline {\underline {M}}} }$

2. Now we relabel the dummy summation indices for ${\displaystyle \mathbf {v_{i}} }$  and ${\displaystyle \mathbf {v_{j}} }$  as follows:

${\displaystyle \mathbf {v_{i}} =\Sigma _{\alpha }M_{i\alpha }\,\mathbf {{\hat {e}}_{\alpha }} }$
${\displaystyle \mathbf {v_{j}} =\Sigma _{\beta }M_{j\beta }\,\mathbf {{\hat {e}}_{\beta }} }$
Hint: In the first of these two equations, I replace ${\displaystyle j}$  by ${\displaystyle \alpha }$ because summed variables can be changed at will. Sometimes they are called "dummy variables" because they "do not speak" after the sum is done. For example, summing n from 1 to 3 equals 1+2+3, which is the same as summing m from 1 to 3. In the second one I relabeled my dummy variable as ${\displaystyle \beta }$  because the same dummy variable cannot serve two purposes in a single expression.

3. This yields the following expression for the dot product between our two vectors:

${\displaystyle \mathbf {v_{i}} \cdot \mathbf {v_{j}} =\left(\sum _{\alpha }M_{i\alpha }\,\mathbf {{\hat {e}}_{\alpha }} \right)\cdot \left(\sum _{\beta }M_{j\beta }\,\mathbf {{\hat {e}}_{\beta }} \right)}$
${\displaystyle \mathbf {v_{i}} \cdot \mathbf {v_{j}} =\left(\sum _{\alpha }M_{i\alpha }\,\mathbf {{\hat {e}}_{\alpha }} \right)\cdot \left(\sum _{\beta }M_{j\beta }\,\mathbf {{\hat {e}}_{\beta }} \right)=\sum _{\alpha \beta }\,(\mathbf {{\hat {e}}_{\alpha }} \,\cdot \,\mathbf {{\hat {e}}_{\beta }} )\,M_{i\alpha }M_{j\beta }\,}$

4. This last term introduces the Kronecker delta symbol:

${\displaystyle \mathbf {v_{i}} \cdot \mathbf {v_{j}} =\left(\sum _{\alpha }M_{i\alpha }\,\mathbf {{\hat {e}}_{\alpha }} \right)\cdot \left(\sum _{\beta }M_{j\beta }\,\mathbf {{\hat {e}}_{\beta }} \right)=\sum _{\alpha \beta }M_{i\alpha }M_{j\beta }\,\underbrace {\mathbf {{\hat {e}}_{\alpha }} \,\cdot \,\mathbf {{\hat {e}}_{\beta }} } _{\delta _{\alpha \beta }}=\sum _{\alpha }M_{i\alpha }M_{j\alpha }\,}$

The last sum almost looks like the product of the matrix with itself. It becomes a genuine matrix product if the second factor is written as a transpose, using ${\displaystyle M_{j\alpha }=M_{\alpha j}^{T}.}$

5. Since the rows of ${\displaystyle \,{\underline {\underline {M}}}}$  (i.e., the vectors ${\displaystyle \mathbf {v_{i}} )}$  form an orthonormal collection of vectors (i.e. a "rotated" basis for the vector space), we have ${\displaystyle \mathbf {v_{i}} \cdot \mathbf {v_{j}} =\delta _{ij}.}$  Combined with the previous step, this shows that ${\displaystyle \,{\underline {\underline {M}}}\cdot {\underline {\underline {M}}}^{T}={\underline {\underline {I}}},}$  and we conclude that ${\displaystyle \,{\underline {\underline {M}}}^{T}={\underline {\underline {M}}}^{-1}:}$  the matrix is orthogonal.

${\displaystyle \mathbf {v_{i}} \cdot \mathbf {v_{j}} =\left(\sum _{\alpha }M_{i\alpha }\,\mathbf {{\hat {e}}_{\alpha }} \right)\cdot \left(\sum _{\beta }M_{j\beta }\,\mathbf {{\hat {e}}_{\beta }} \right)=\sum _{\alpha }M_{i\alpha }M_{j\alpha }\,=\sum _{\alpha }M_{i\alpha }M_{\alpha j}^{T}\,=\left({\underline {\underline {M}}}\cdot {\underline {\underline {M}}}^{T}\right)_{ij}=\delta _{ij}}$
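The index chain in step 4 can be checked numerically. A minimal sketch in plain Python; the 2×2 rotation matrix used for M is an arbitrary illustrative choice of a matrix with orthonormal rows, not part of the proof:

```python
import math

# Illustrative M whose rows are orthonormal (a 2x2 rotation); theta is arbitrary
theta = 1.1
M = [[math.cos(theta), math.sin(theta)],
     [-math.sin(theta), math.cos(theta)]]

# v_i . v_j = sum over alpha of M[i][alpha] * M[j][alpha]  (step 4 above)
def row_dot(M, i, j):
    return sum(M[i][a] * M[j][a] for a in range(len(M)))

# The sum reproduces the Kronecker delta
for i in range(2):
    for j in range(2):
        delta = 1.0 if i == j else 0.0
        assert abs(row_dot(M, i, j) - delta) < 1e-12
```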

Change of basis for tensors

If a matrix is used to rotate vectors, then use it twice to rotate tensors

A common use of an orthogonal matrix is to express a vector given in one reference frame in terms of a "rotated"[8] frame.

Here, we let ${\displaystyle {\underline {\underline {M}}}}$  denote any matrix (i.e. "tensor"), while ${\displaystyle {\underline {\underline {R}}}}$  is any orthogonal matrix (typically a rotation.) Let ${\displaystyle {\underline {v}}}$  and ${\displaystyle {\underline {p}}}$  be two vectors, and let ${\displaystyle {\underline {v}}'}$  and ${\displaystyle {\underline {p}}'}$  represent the same vectors in a rotated reference frame.

Theorem
• If   ${\displaystyle {\underline {v}}'={\underline {\underline {R}}}\cdot {\underline {v}}}$ ,   then:  ${\displaystyle {\underline {\underline {M}}}'={\underline {\underline {R}}}\cdot {\underline {\underline {M}}}\cdot {\underline {\underline {R}}}^{-1}}$
Proof
1. Define ${\displaystyle {\underline {p}}={\underline {\underline {M}}}\cdot {\underline {v}}.}$
2. Assume ${\displaystyle {\underline {v}}'={\underline {\underline {R}}}\cdot {\underline {v}}}$  and ${\displaystyle {\underline {p}}'={\underline {\underline {R}}}\cdot {\underline {p}}.}$
3. Do some tensor algebra and express ${\displaystyle {\underline {p}}'}$  in terms of ${\displaystyle {\underline {v}}'.}$
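The three steps above can be verified numerically. A minimal sketch in plain Python; the rotation angle, the tensor M, and the vector v are all arbitrary illustrative choices:

```python
import math

# R: an orthogonal matrix (rotation by an arbitrary angle), so R^{-1} = R^T
theta = 0.4
R = [[math.cos(theta), math.sin(theta)],
     [-math.sin(theta), math.cos(theta)]]

# M: an arbitrary (non-orthogonal) tensor; v: an arbitrary vector
M = [[2.0, 1.0],
     [0.5, 3.0]]
v = [1.0, -2.0]

def matvec(X, u):
    return [sum(X[i][k] * u[k] for k in range(len(u))) for i in range(len(X))]

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def transpose(X):
    return [list(row) for row in zip(*X)]

p = matvec(M, v)                              # p  = M . v
v_prime = matvec(R, v)                        # v' = R . v
p_prime = matvec(R, p)                        # p' = R . p
M_prime = matmul(matmul(R, M), transpose(R))  # M' = R . M . R^{-1}

# The rotated tensor maps the rotated input to the rotated output: p' = M' . v'
for a, b in zip(matvec(M_prime, v_prime), p_prime):
    assert abs(a - b) < 1e-12
```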

In this context, the only difference between tensor algebra and scalar algebra is that matrices do not always commute: ${\displaystyle {\underline {\underline {A}}}\cdot {\underline {\underline {B}}}-{\underline {\underline {B}}}\cdot {\underline {\underline {A}}}}$  does not always vanish.
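Non-commutativity is easy to exhibit with a concrete pair of matrices. A minimal sketch in plain Python; the two matrices are arbitrary small examples chosen to make the commutator obviously nonzero:

```python
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

# Two simple matrices chosen so that the commutator is obviously nonzero
A = [[0, 1],
     [0, 0]]
B = [[0, 0],
     [1, 0]]

AB = matmul(A, B)
BA = matmul(B, A)
commutator = [[AB[i][j] - BA[i][j] for j in range(2)] for i in range(2)]

# A.B - B.A = [[1, 0], [0, -1]], which does not vanish
assert commutator == [[1, 0], [0, -1]]
```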

Derivation of the rotation tensor

The rotation matrix is usually the first orthogonal matrix students encounter. While it is conceptually easier to rotate vectors than to rotate a coordinate system, it is algebraically easier to rotate a coordinate system. From the figure, the unit vectors in a rotated reference frame obey:

${\displaystyle {\begin{aligned}{\hat {x}}=&{\hat {x}}'\cos \theta -{\hat {y}}'\sin \theta \qquad &{\hat {y}}=&{\hat {x}}'\sin \theta +{\hat {y}}'\cos \theta \end{aligned}}}$

Students will quickly see the sine and cosine components in this equation, but the minus sign might seem confusing. It comes from the fact that ${\displaystyle {\hat {x}}}$  has a negative component when projected along the ${\displaystyle {\hat {y}}'}$  direction. Now express the vector ${\displaystyle {\underline {V}}}$ , first in the unprimed coordinate system, then in primed:

${\displaystyle {\underline {V}}=V_{x}{\hat {x}}+V_{y}{\hat {y}}}$

To complete the proof, substitute the expressions for the ${\displaystyle ({\hat {x}},{\hat {y}})}$  unit vectors in terms of the ${\displaystyle ({\hat {x}}',{\hat {y}}')}$  unit vectors:

${\displaystyle {\underline {V}}=V_{x}\overbrace {\left({\hat {x}}'\cos \theta -{\hat {y}}'\sin \theta \right)} ^{\hat {x}}+V_{y}\overbrace {\left({\hat {x}}'\sin \theta +{\hat {y}}'\cos \theta \right)} ^{\hat {y}}}$  ${\displaystyle {\underline {V}}=+V_{x}{\hat {x}}'\cos \theta -V_{x}{\hat {y}}'\sin \theta +V_{y}{\hat {x}}'\sin \theta +V_{y}{\hat {y}}'\cos \theta }$

${\displaystyle {\underline {V}}=+\underbrace {\left(V_{x}\cos \theta +V_{y}\sin \theta \right)} _{V_{x}'}{\hat {x}}'+\underbrace {\left(-V_{x}\sin \theta +V_{y}\cos \theta \right)} _{V_{y}'}{\hat {y}}'}$

This latter expression solves our problem, as we were seeking an expression of the form, ${\displaystyle {\underline {V}}=V_{x}'{\hat {x}}'+V_{y}'{\hat {y}}'.}$

Note how in this formalism there is no distinction between the primed and unprimed vector ${\displaystyle {\underline {V}}\,.}$  This tends to confuse everyone, including the author. Such confusion can be avoided in a carefully written textbook or article, but in the free-wheeling world of the scientific literature and wikis alike it cannot, which is why it pays to read carefully.

Going back to the notation of many WMF pages, we have the following formula for the components of a vector if the coordinate system is rotated by ${\displaystyle \theta }$  about the z axis: ${\displaystyle {\begin{bmatrix}V'_{x}\\V'_{y}\end{bmatrix}}={\begin{bmatrix}\cos \theta &\sin \theta \\-\sin \theta &\cos \theta \end{bmatrix}}\,{\begin{bmatrix}V_{x}\\V_{y}\end{bmatrix}}}$
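The component formula above can be exercised on a concrete vector. A minimal sketch in plain Python; the 30-degree angle and the unit vector along x are arbitrary illustrative choices:

```python
import math

theta = math.pi / 6   # rotate the coordinate system by 30 degrees
Vx, Vy = 1.0, 0.0     # a unit vector along the x axis

# Components of the same vector in the rotated (primed) frame
Vx_prime =  Vx * math.cos(theta) + Vy * math.sin(theta)
Vy_prime = -Vx * math.sin(theta) + Vy * math.cos(theta)

# The rotation preserves the vector's length
assert abs((Vx_prime**2 + Vy_prime**2) - (Vx**2 + Vy**2)) < 1e-12
```

Here Vx_prime comes out to cos(30°) ≈ 0.866 and Vy_prime to −sin(30°) ≈ −0.5, as expected for the x-axis unit vector viewed from a frame rotated by 30 degrees.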

3. The physics student's first alternative to the "usual fashion" is the dot product in special relativity, where ${\displaystyle \mathbf {\ell } \cdot \mathbf {\ell } '=xx'+yy'+zz'-c^{2}tt'}$