Let $v_1 , \ldots , v_n$ and $w_1 , \ldots , w_m$ denote the bases of $V$ and $W$, respectively, and let $s_1 , \ldots , s_n$ denote the column vectors of $M$. (1). The mapping $\varphi$ has the property

$$\varphi (v_{j}) = \sum _{i=1}^{m} s_{ij} w_{i}\,,$$
where $s_{ij}$ is the $i$-th entry of the $j$-th column vector. Therefore,

$$\varphi {\left(\sum _{j=1}^{n} a_{j} v_{j}\right)} = \sum _{j=1}^{n} a_{j} {\left(\sum _{i=1}^{m} s_{ij} w_{i}\right)} = \sum _{i=1}^{m} {\left(\sum _{j=1}^{n} a_{j} s_{ij}\right)} w_{i}\,.$$
This is $0$ if and only if $\sum_{j=1}^{n} a_j s_{ij} = 0$ for all $i$, and this is equivalent with

$$\sum _{j=1}^{n} a_{j} s_{j} = 0\,.$$
For this vector equation, there exists a nontrivial tuple $(a_1 , \ldots , a_n)$ if and only if the columns are linearly dependent; such a tuple yields the nonzero vector $\sum_{j=1}^{n} a_j v_j$ in the kernel, so this holds if and only if $\varphi$ is not injective.
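The equivalence in (1) can be checked numerically. The sketch below uses a hypothetical $2 \times 3$ matrix whose third column is the sum of the first two, so the tuple $(1, 1, -1)$ solves the vector equation and witnesses non-injectivity:

```python
import numpy as np

# Hypothetical example: the columns satisfy s_3 = s_1 + s_2,
# so a = (1, 1, -1) is a nontrivial solution of sum_j a_j s_j = 0.
M = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
a = np.array([1.0, 1.0, -1.0])

# The vector equation holds, i.e. the columns are linearly dependent:
print(np.allclose(M @ a, 0))  # → True

# Hence a is a nonzero vector in the kernel of x -> M x, and the
# mapping is not injective: M a = M 0 = 0.
```

Here the nonzero kernel vector `a` plays the role of the coordinate tuple $(a_1 , \ldots , a_n)$ in the proof.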
(2). See the exercise.
(3). Let $n = m$. The first equivalence follows from (1) and (2). If $\varphi$ is bijective, then there exists a (linear) inverse mapping $\varphi^{-1}$ with

$$\varphi \circ \varphi^{-1} = \operatorname{Id}_{W} \quad \text{and} \quad \varphi^{-1} \circ \varphi = \operatorname{Id}_{V}\,.$$
Let $M$ denote the matrix for $\varphi$, and $N$ the matrix for $\varphi^{-1}$. The matrix for the identity mapping is the $n \times n$ identity matrix $E_n$. Since the composition of linear mappings corresponds to the product of their matrices, we have
$$M \circ N = E_{n} = N \circ M\,$$
and therefore $M$ is invertible. The reverse implication is proved similarly.
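Part (3) can likewise be illustrated numerically: for a small invertible matrix $M$ (the example below is hypothetical), the matrix $N$ of the inverse mapping satisfies $M \circ N = E_n = N \circ M$:

```python
import numpy as np

# Hypothetical invertible 2x2 matrix M (determinant 1, so invertible).
M = np.array([[2.0, 1.0],
              [1.0, 1.0]])
N = np.linalg.inv(M)  # matrix of the inverse mapping
E = np.eye(2)         # the identity matrix E_n

# Both compositions give the identity matrix, as in the proof:
print(np.allclose(M @ N, E) and np.allclose(N @ M, E))  # → True
```

Conversely, if such an `N` exists, the mapping $x \mapsto N x$ inverts $x \mapsto M x$, which is the reverse implication mentioned above.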