Linear algebra/Orthogonal matrix
- This article contains excerpts from Wikipedia's Orthogonal matrix.
- If complex numbers are involved, see Unitary matrix.
A real square matrix is orthogonal[1] if and only if its columns form an orthonormal basis of a Euclidean space in which all numbers are real-valued and the dot product is defined in the usual fashion.[2][3] An orthonormal basis in an N dimensional space is one where (1) all the basis vectors have unit magnitude,[4] and (2) the dot product of any two distinct basis vectors is zero.
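As a quick illustration (not part of the cited sources), here is a minimal NumPy sketch that checks both conditions for a hypothetical 3 × 3 matrix whose columns were chosen to be orthonormal:

```python
import numpy as np

# A hypothetical 3x3 matrix whose columns are chosen to be orthonormal:
# each column has unit length and distinct columns have zero dot product.
Q = np.array([[1.0, 0.0,  0.0],
              [0.0, 0.6, -0.8],
              [0.0, 0.8,  0.6]])

# Orthonormal columns are equivalent to Q^T Q = I (the identity matrix).
print(np.allclose(Q.T @ Q, np.eye(3)))         # True
# Unit magnitude: each column dotted with itself equals 1.
print(np.allclose((Q * Q).sum(axis=0), 1.0))   # True
```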
Fundamental properties
- A real square matrix is orthogonal if and only if its columns form an orthonormal basis of the Euclidean space ℝn, which is the case if and only if its rows form an orthonormal basis of ℝn.[5]
- The determinant of any orthogonal matrix is +1 or −1. But the converse is not true; having a determinant of ±1 is no guarantee of orthogonality. An orthogonal matrix with a determinant equal to +1 is called a special orthogonal matrix.
- An orthogonal matrix can always be diagonalized over the complex numbers to exhibit a full set of eigenvalues, all of which must have (complex) modulus 1.
- All permutation matrices are orthogonal (but the converse is not true.)[6]
- All orthogonal matrices are unitary (but the converse is not true.)[7]
- Under the operation of multiplication, the n × n orthogonal matrices form the orthogonal group known as O(n). (Several of these properties are verified numerically in the sketch after this list.)
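The sketch below, which assumes NumPy and uses a rotation about the z axis as its example matrix, spot-checks the determinant, the eigenvalue moduli, the orthonormality of rows and columns, and closure under multiplication:

```python
import numpy as np

# A rotation about the z axis by 0.3 radians; rotation matrices are the
# special orthogonal matrices (determinant +1).
t = 0.3
R = np.array([[np.cos(t), -np.sin(t), 0.0],
              [np.sin(t),  np.cos(t), 0.0],
              [0.0,        0.0,       1.0]])

print(np.isclose(np.linalg.det(R), 1.0))              # determinant is +1
print(np.allclose(np.abs(np.linalg.eigvals(R)), 1))   # eigenvalues have modulus 1
print(np.allclose(R @ R.T, np.eye(3)))                # rows are orthonormal
print(np.allclose(R.T @ R, np.eye(3)))                # columns are orthonormal

# Closure under multiplication: the product of two orthogonal matrices
# is again orthogonal (here R @ R is a rotation by 0.6 radians).
P = R @ R
print(np.allclose(P @ P.T, np.eye(3)))                # True
```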
Three important results that are easy to prove
Among the first things a novice should learn are those that are easy to prove.
Orthonormal basis vectors are hiding in plain sight
Theorem:
- If the rows of a square matrix form an orthonormal set of (basis) vectors,
- then the transpose of that matrix is its inverse.
Visual understanding
Suppose the rows of a matrix form an orthonormal set of basis vectors, as shown in the i-th row of matrix A to the right. The ij-th element of the product AB is the dot product of the i-th row of A with the j-th column of matrix B, as shown in the upper part of the diagram, where the j-th column is highlighted in yellow. In the diagram's lower part, matrix B is replaced by its transpose, which shifts the elements in column j to a row (highlighted in cyan). This establishes that the product of A with the transpose of B creates elements that are the dot products of rows of A with rows of B.
If A is an orthogonal matrix and B is its transpose, this procedure creates matrix elements that are dot products among the rows of the orthogonal matrix.
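The diagram's claim, that the entries of ABᵀ are dot products of rows of A with rows of B, can be confirmed with a short NumPy sketch on hypothetical random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# (A @ B.T)[i, j] is the dot product of row i of A with row j of B,
# because transposing B moves its rows into columns.
i, j = 1, 2
print(np.isclose((A @ B.T)[i, j], np.dot(A[i, :], B[j, :])))  # True

# When A is orthogonal and B = A, these row-row dot products are 0 or 1,
# so A @ A.T is the identity matrix.
```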
Rigorous proof
This proof illustrates how subscripts are used to manipulate and understand tensors.
1. Suppose
- $\hat e_i' = \sum_j A_{ij}\,\hat e_j$ is the i-th element of an orthonormal set of basis vectors.
- Here the $\hat e_j$ are the original unit vectors used to define the new set of unit vectors $\hat e_i'$ that we extract from the rows of matrix $A$.
2. Now we relabel how we write the sums for $\hat e_i'$ and $\hat e_j'$ as follows:
$\hat e_i' = \sum_m A_{im}\,\hat e_m \qquad\text{and}\qquad \hat e_j' = \sum_n A_{jn}\,\hat e_n$
- Hint: In the first of these two equations, I replace $j$ by $m$ because summed variables can be changed at will. Sometimes they are called "dummy variables" because they "do not speak" after the sum is done. For example, summing n from 1 to 3 equals 1+2+3, which is the same as summing m from 1 to 3. In the second equation I relabeled my dummy variable as $n$ because the same dummy variable cannot serve two purposes in a single expression.
3. This yields the following expression for the dot product between our two vectors:
$\hat e_i' \cdot \hat e_j' = \sum_m \sum_n A_{im} A_{jn}\,(\hat e_m \cdot \hat e_n)$
4. This last term introduces the Kronecker delta symbol: $\hat e_m \cdot \hat e_n = \delta_{mn}$, where $\delta_{mn} = 1$ if $m = n$ and $\delta_{mn} = 0$ if $m \ne n$. Hence
$\hat e_i' \cdot \hat e_j' = \sum_m \sum_n A_{im} A_{jn}\,\delta_{mn} = \sum_n A_{in} A_{jn}$
The last term almost looks like the product of the matrix $A$ with itself. It can be turned into a matrix product by taking the transpose of the second factor, using $A_{jn} = (A^{\mathsf T})_{nj}$:
$\sum_n A_{in} A_{jn} = \sum_n A_{in} (A^{\mathsf T})_{nj} = (A A^{\mathsf T})_{ij}$
5. If $A$ is orthogonal, then $A A^{\mathsf T} = I$, so $(A A^{\mathsf T})_{ij} = \delta_{ij}$, and we conclude that the rows of $A$ (i.e., the vectors $\hat e_i'$) form an orthonormal collection of vectors (i.e., a "rotated" basis for the vector space). A numerical check of this result is sketched below.
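Here is a hedged numerical check of step 5. The sketch builds a generic orthogonal matrix from a QR factorization (an assumption of the example, not part of the proof) and evaluates the index sum $\sum_n A_{in} A_{jn}$ directly:

```python
import numpy as np

# Build a generic orthogonal matrix from the QR factorization of a random matrix.
rng = np.random.default_rng(1)
A, _ = np.linalg.qr(rng.standard_normal((4, 4)))

# Steps 3-5 of the proof, written as an explicit index sum:
# sum_n A[i, n] * A[j, n] should equal the Kronecker delta delta_ij.
delta = np.zeros((4, 4))
for i in range(4):
    for j in range(4):
        delta[i, j] = sum(A[i, n] * A[j, n] for n in range(4))

print(np.allclose(delta, np.eye(4)))      # True: rows are orthonormal
print(np.allclose(A @ A.T, np.eye(4)))    # same statement as a matrix product
```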
Change of basis for tensors
A common use of the orthogonal matrix is to express a vector given in one reference frame in terms of a "rotated"[8] frame.
Here, we let $T$ denote any matrix (i.e. "tensor"), while $R$ is any orthogonal matrix (typically a rotation). Let $\vec x$ and $\vec y$ be two vectors, and let $\vec x\,'$ and $\vec y\,'$ represent the same vectors in a rotated reference frame.
- Theorem
- If $\vec y = T\vec x$, then:
$\vec y\,' = T'\,\vec x\,', \qquad\text{where}\qquad T' = R\,T\,R^{\mathsf T}$
- Proof
- Define $T' = R\,T\,R^{\mathsf T}$
- Assume $\vec x\,' = R\,\vec x$ and $\vec y\,' = R\,\vec y$
- Do some tensor algebra and express $\vec y\,'$ in terms of $\vec x\,'$:
$\vec y\,' = R\,\vec y = R\,T\,\vec x = R\,T\,(R^{\mathsf T} R)\,\vec x = (R\,T\,R^{\mathsf T})(R\,\vec x) = T'\,\vec x\,'$
Here we used $R^{\mathsf T} R = I$, which holds because $R$ is orthogonal.
In this context, the only difference between the tensor and scalar algebras is that with tensors, the factors do not always commute: $R\,T - T\,R$ does not always vanish.
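As an illustration of the theorem, the following sketch uses a hypothetical tensor $T$, a rotation $R$ about the z axis, and a sample vector $\vec x$ (all chosen for the example), and confirms numerically that $\vec y\,' = T'\vec x\,'$ with $T' = R\,T\,R^{\mathsf T}$:

```python
import numpy as np

# Passive rotation by theta about the z axis (an orthogonal matrix).
theta = 0.7
R = np.array([[ np.cos(theta), np.sin(theta), 0.0],
              [-np.sin(theta), np.cos(theta), 0.0],
              [ 0.0,           0.0,           1.0]])

# A hypothetical tensor T and vector x, plus y = T x in the original frame.
T = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 0.0],
              [0.0, 0.0, 1.0]])
x = np.array([1.0, 2.0, 3.0])
y = T @ x

# Components in the rotated frame and the transformed tensor.
x_p = R @ x
y_p = R @ y
T_p = R @ T @ R.T

# The same linear relation holds in the rotated frame: y' = T' x'.
print(np.allclose(y_p, T_p @ x_p))   # True
```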
Derivation of the rotation tensor
The rotation matrix is usually the first orthogonal matrix students encounter. While it is conceptually easier to rotate vectors than to rotate a coordinate system, it is algebraically easier to rotate a coordinate system. From the figure, the unit vectors in a rotated reference frame obey:
$\hat x\,' = \cos\theta\,\hat x + \sin\theta\,\hat y \qquad\text{and}\qquad \hat y\,' = -\sin\theta\,\hat x + \cos\theta\,\hat y$
Students will quickly see the sine and cosine components in this equation, but the minus sign might seem confusing. It comes from the fact that $\hat y\,'$ has a negative component when projected along the $\hat x$ direction. Now express the vector $\vec v$, first in the unprimed coordinate system, then in primed:
$\vec v = v_x\,\hat x + v_y\,\hat y = v_x'\,\hat x\,' + v_y'\,\hat y\,'$
To complete the proof, substitute the expressions that expressed the primed unit vectors in terms of the unprimed unit vectors and match the coefficients of $\hat x$ and $\hat y$:
$v_x\,\hat x + v_y\,\hat y = v_x'(\cos\theta\,\hat x + \sin\theta\,\hat y) + v_y'(-\sin\theta\,\hat x + \cos\theta\,\hat y)$
so that $v_x = v_x'\cos\theta - v_y'\sin\theta$ and $v_y = v_x'\sin\theta + v_y'\cos\theta$. Solving for the primed components gives
$v_x' = v_x\cos\theta + v_y\sin\theta \qquad\text{and}\qquad v_y' = -v_x\sin\theta + v_y\cos\theta$
This latter expression solves our problem, as we were seeking an expression of the form,
$v_i' = \sum_j R_{ij}\,v_j$
Note how in this formalism there is no distinction between the primed and unprimed vector $\vec v$: it is the same vector, only its components change. This tends to confuse everyone, including the author. Such confusion can be avoided when writing a textbook or article, but in the free-wheeling world of the scientific literature, as well as wikis, such chaos cannot be avoided. That is why it is good to carefully read books.
Going back to the notation of many WMF pages, we have the following formula for the components of a vector $\vec v$ if the coordinate system is rotated by $\theta$ about the z axis:
$v_x' = v_x\cos\theta + v_y\sin\theta, \qquad v_y' = -v_x\sin\theta + v_y\cos\theta, \qquad v_z' = v_z$
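A short numerical check of this component formula, using an arbitrary sample vector and angle (both are assumptions of the example):

```python
import numpy as np

theta = np.pi / 6               # rotate the coordinate system by 30 degrees about z
v = np.array([1.0, 2.0, 3.0])   # components in the original (unprimed) frame

# Components of the same vector in the rotated (primed) frame.
vx_p =  v[0] * np.cos(theta) + v[1] * np.sin(theta)
vy_p = -v[0] * np.sin(theta) + v[1] * np.cos(theta)
vz_p =  v[2]

# The same result as a matrix product with the orthogonal rotation matrix.
R = np.array([[ np.cos(theta), np.sin(theta), 0.0],
              [-np.sin(theta), np.cos(theta), 0.0],
              [ 0.0,           0.0,           1.0]])
print(np.allclose(R @ v, [vx_p, vy_p, vz_p]))                 # True

# The length of the vector is unchanged by the rotation.
print(np.isclose(np.linalg.norm(R @ v), np.linalg.norm(v)))   # True
```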
See also
Notes
- ↑ The term "orthogonal" is confusing. A better word in this context would be orthonormal. See the lede sentence in w:special:Permalink/1181197344
- ↑ w:Special:Permalink/1181197344#Matrix_properties
- ↑ The physics student's first alternative to the "usual fashion" is the dot product in special relativity, where the dot product of two four-vectors, $a\cdot b = a_t b_t - a_x b_x - a_y b_y - a_z b_z$ (up to a sign convention), is not positive-definite.
- ↑ "Unit magnitude" means the dot product of the vector with itself equals 1
- ↑ Most of this page is based on https://en.wikipedia.org/w/index.php?title=Orthogonal_matrix&oldid=1028769520
- ↑ https://en.wikipedia.org/w/index.php?title=Permutation_matrix&oldid=1015641816#Properties
- ↑ https://en.wikipedia.org/w/index.php?title=Permutation_matrix&oldid=1015641816#Properties
- ↑ The quotation marks on "rotation" are intended to include orthogonal matrices that are also reflections of an axis through the origin.