# PlanetPhysics/Matrix

A matrix is defined as a rectangular array of elements (usually the elements are real or complex numbers). An algebra of matrices is developed by defining addition of matrices, multiplication of matrices, multiplication of a matrix by a scalar (real or complex number), differentiation of matrices, etc. The definitions chosen for the above-mentioned operations will be such as to make the calculus of matrices highly applicable. A matrix ${\displaystyle \mathbf {A} }$ may be denoted as follows:

${\displaystyle \mathbf {A} =\left({\begin{matrix}a_{1}^{1}&a_{2}^{1}&\dots &a_{n}^{1}\\a_{1}^{2}&a_{2}^{2}&\dots &a_{n}^{2}\\\dots &\dots &\dots &\dots \\a_{1}^{m}&a_{2}^{m}&\dots &a_{n}^{m}\end{matrix}}\right)=\left\|a_{j}^{i}\right\|}$

If ${\displaystyle m=n}$, we say that ${\displaystyle \mathbf {A} }$ is a square matrix of order ${\displaystyle n}$. If ${\displaystyle \mathbf {B} }$ is the matrix of elements ${\displaystyle \left\|b_{j}^{i}\right\|,i=1,2,\dots ,m,j=1,2,\dots ,n}$, then ${\displaystyle \mathbf {B} }$ is said to be equal to ${\displaystyle \mathbf {A} }$, written ${\displaystyle \mathbf {B} =\mathbf {A} }$ or ${\displaystyle \mathbf {A} =\mathbf {B} }$, if and only if ${\displaystyle a_{j}^{i}=b_{j}^{i}}$ for the complete range of values of ${\displaystyle i}$ and ${\displaystyle j}$.

Two matrices can be compared for equality if and only if they are comparable in the sense that they have the same number of rows and the same number of columns.

The sum of two comparable matrices ${\displaystyle \mathbf {A} }$, ${\displaystyle \mathbf {B} }$ is defined as a new matrix ${\displaystyle \mathbf {C} }$ whose elements ${\displaystyle c_{j}^{i}}$ are obtained by adding the corresponding elements of ${\displaystyle \mathbf {A} }$ and ${\displaystyle \mathbf {B} }$. Thus

${\displaystyle \left\|c_{j}^{i}\right\|=\mathbf {C} =\mathbf {A} +\mathbf {B} =\left\|a_{j}^{i}\right\|+\left\|b_{j}^{i}\right\|=\left\|a_{j}^{i}+b_{j}^{i}\right\|}$

We note that ${\displaystyle \mathbf {A} +\mathbf {B} =\mathbf {B} +\mathbf {A} }$.
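The element-wise definition of addition, and its commutativity, can be sketched in a few lines of Python (the function name `mat_add` and the matrices-as-nested-lists representation are illustrative choices, not from the source):

```python
def mat_add(A, B):
    # The matrices must be comparable: same number of rows and columns.
    assert len(A) == len(B) and len(A[0]) == len(B[0])
    # Each element of C is the sum of the corresponding elements of A and B.
    return [[a + b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(A, B)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mat_add(A, B))                    # [[6, 8], [10, 12]]
print(mat_add(A, B) == mat_add(B, A))   # True: addition is commutative
```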

We call ${\displaystyle \mathbf {A} }$ a zero matrix if and only if each element of ${\displaystyle \mathbf {A} }$ is equal to the real number zero.

The product of a matrix ${\displaystyle \mathbf {A} }$ by a number ${\displaystyle k}$ (real or complex) is defined as the matrix whose elements are each ${\displaystyle k}$ times those of ${\displaystyle \mathbf {A} }$, that is

${\displaystyle k\mathbf {A} =k\left\|a_{j}^{i}\right\|=\|ka_{j}^{i}\|}$

Every matrix ${\displaystyle \mathbf {A} }$ can be associated with a negative matrix ${\displaystyle \mathbf {B} =-\mathbf {A} }$ such that ${\displaystyle \mathbf {A} +(-\mathbf {A} )=(-\mathbf {A} )+\mathbf {A} =0}$ (zero matrix).

The rule for multiplying a matrix ${\displaystyle \mathbf {A} }$ by a scalar ${\displaystyle k}$ should not be confused with the rule for multiplying a determinant by ${\displaystyle k}$, for in this latter case the elements of only one row or only one column are multiplied by ${\displaystyle k}$.

Before defining the product of two matrices let us consider the following sets of linear transformations:

${\displaystyle {\begin{matrix}A:&z^{i}=a_{j}^{i}y^{j}&i=1,2,\dots ,m;\;j=1,2,\dots ,n\\B:&y^{j}=b_{k}^{j}x^{k}&k=1,2,\dots ,p\end{matrix}}}$

Since the ${\displaystyle z}$'s depend on the ${\displaystyle y}$'s, which in turn depend on the ${\displaystyle x}$'s, we can solve for the ${\displaystyle z}$'s in terms of the ${\displaystyle x}$'s. We write this transformation as follows:

${\displaystyle {\begin{matrix}AB:&z^{i}=a_{j}^{i}b_{k}^{j}x^{k}&c_{k}^{i}=a_{j}^{i}b_{k}^{j}\end{matrix}}}$

This suggests a method for defining multiplication of the matrices ${\displaystyle \mathbf {A} }$, ${\displaystyle \mathbf {B} }$.

If ${\displaystyle \mathbf {A} =\left\|a_{j}^{i}\right\|,i=1,2,\dots ,m,j=1,2,\dots ,n,\mathbf {B} =\left\|b_{j}^{i}\right\|,i=1,2,\dots ,n,j=1,2,\dots ,p}$, then ${\displaystyle \mathbf {A} \mathbf {B} }$ is defined as the matrix ${\displaystyle \mathbf {C} }$ such that

${\displaystyle \mathbf {C} =\mathbf {A} \mathbf {B} =\left\|a_{j}^{i}\right\|\cdot \left\|b_{j}^{i}\right\|=\left\|a_{\alpha }^{i}b_{j}^{\alpha }\right\|=\left\|c_{j}^{i}\right\|}$

Let us note that the number of columns of the matrix ${\displaystyle \mathbf {A} }$ must equal the number of rows of ${\displaystyle \mathbf {B} }$. The product matrix ${\displaystyle \mathbf {C} }$ defined above is an ${\displaystyle m\times p}$ matrix. In the case of square matrices the definition for multiplication of matrices corresponds to that for multiplication of determinants. This implies that ${\displaystyle \left|C\right|=\left|A\right|\cdot \left|B\right|}$, where ${\displaystyle \left|C\right|}$ denotes the determinant of the set of elements comprising the square matrix ${\displaystyle \mathbf {C} =\mathbf {A} \mathbf {B} }$.
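The row-by-column rule ${\displaystyle c_{j}^{i}=a_{\alpha }^{i}b_{j}^{\alpha }}$ and the determinant identity ${\displaystyle \left|C\right|=\left|A\right|\cdot \left|B\right|}$ can both be checked directly; here is a minimal sketch (the helper names `mat_mul` and `det2` are illustrative):

```python
def mat_mul(A, B):
    # The number of columns of A must equal the number of rows of B.
    assert len(A[0]) == len(B)
    # c[i][j] is the sum over t of a[i][t] * b[t][j].
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

def det2(M):
    # Determinant of a 2 x 2 matrix.
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
C = mat_mul(A, B)
print(C)                                # [[2, 1], [4, 3]]
print(det2(C) == det2(A) * det2(B))     # True: |C| = |A| * |B|
```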

A square matrix ${\displaystyle \mathbf {A} }$ is said to be a symmetric matrix if and only if ${\displaystyle \mathbf {A} =\mathbf {A} ^{T}}$. If ${\displaystyle \mathbf {A} =-\mathbf {A} ^{T}}$, we say the ${\displaystyle \mathbf {A} }$ is a skew-symmetric matrix. We now exhibit a symmetric matrix ${\displaystyle \mathbf {A} }$ and a skew-symmetric matrix ${\displaystyle \mathbf {B} }$.

${\displaystyle \mathbf {A} =\mathbf {A} ^{T}=\left({\begin{matrix}2&-1&4&-2\\-1&0&3&5\\4&3&1&-1\\-2&5&-1&3\end{matrix}}\right)\qquad \mathbf {B} =-\mathbf {B} ^{T}=\left({\begin{matrix}0&-1&3\\1&0&-2\\-3&2&0\end{matrix}}\right)}$

We leave it to the reader to verify that ${\displaystyle {\frac {1}{2}}\left(\mathbf {A} +\mathbf {A} ^{T}\right)}$ is a symmetric matrix whenever ${\displaystyle \mathbf {A} }$ is a square matrix. The reader should first prove that ${\displaystyle \left(\mathbf {A} ^{T}\right)^{T}=\mathbf {A} }$ and

${\displaystyle \left(\mathbf {A} +\mathbf {B} \right)^{T}=\mathbf {A} ^{T}+\mathbf {B} ^{T}}$

From these two facts it is easily seen that ${\displaystyle {\frac {1}{2}}\left(\mathbf {A} +\mathbf {A} ^{T}\right)}$ is a symmetric matrix and that ${\displaystyle {\frac {1}{2}}\left(\mathbf {A} -\mathbf {A} ^{T}\right)}$ is skew-symmetric. Any square matrix ${\displaystyle \mathbf {A} }$ can be written as

${\displaystyle \mathbf {A} ={\frac {1}{2}}\left(\mathbf {A} +\mathbf {A} ^{T}\right)+{\frac {1}{2}}\left(\mathbf {A} -\mathbf {A} ^{T}\right)}$

Hence every square matrix can be written as the sum of a symmetric and a skew-symmetric matrix.
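The decomposition above can be sketched numerically; this example builds the symmetric and skew-symmetric parts of a square matrix and checks their defining properties (all helper names are illustrative):

```python
def transpose(A):
    # Rows of the transpose are the columns of A.
    return [list(row) for row in zip(*A)]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scalar_mul(k, A):
    return [[k * a for a in row] for row in A]

A = [[1.0, 2.0], [4.0, 3.0]]
At = transpose(A)
S = scalar_mul(0.5, mat_add(A, At))                    # symmetric part
K = scalar_mul(0.5, mat_add(A, scalar_mul(-1.0, At)))  # skew-symmetric part

print(S == transpose(S))                    # True: S = S^T
print(K == scalar_mul(-1.0, transpose(K)))  # True: K = -K^T
print(mat_add(S, K) == A)                   # True: A = S + K
```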

## References

[1] Lass, Harry. *Elements of Pure and Applied Mathematics*. New York: McGraw-Hill, 1957.

This entry is a derivative of the Public domain work [1].