Linear algebra (Osnabrück 2024-2025)/Part I/Lecture 14



Linear forms

In a shop, there are $n$ different products. A purchase is described by an $n$-tuple $(a_1, \ldots, a_n)$ (the purchase tuple), where the $i$-th entry says how much of the $i$-th product is bought (with respect to a certain unit). The set of all purchases (including refunds) forms an $n$-dimensional vector space. A price list for the products is also described by an $n$-tuple $(p_1, \ldots, p_n)$ (the price tuple), where the $i$-th entry gives the price of the $i$-th product (with respect to the same unit). The set of all price tuples also forms an $n$-dimensional vector space (think about comparing prices, inflation, tax, etc.). It is obviously nonsense to consider a purchase tuple and a price tuple as elements of the same vector space and to add them. Instead, the right way to process a purchase tuple and a price tuple is to compute the total price $\sum_{i=1}^n a_i p_i$, which belongs to the base field. Purchase tuples and price tuples are dual to each other: the vector space of price tuples is dual to the vector space of purchase tuples.
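The total-price computation can be sketched in a few lines of Python (the product counts and prices below are made-up values):

```python
# Hypothetical shop with three products; amounts and prices are made up.
purchase = [2, 0, 5]          # purchase tuple: amount of each product
prices = [1.5, 4.0, 0.8]      # price tuple: price per unit of each product

def total_price(purchase, prices):
    """Apply the price list, as a linear form, to a purchase tuple."""
    return sum(a * p for a, p in zip(purchase, prices))

print(total_price(purchase, prices))   # 2*1.5 + 0*4.0 + 5*0.8 = 7.0
```

Note that the price list acts on purchases linearly: the price of two purchases combined is the sum of their individual prices.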


Let $K$ be a field and let $V$ be a $K$-vector space. A linear mapping

$$f \colon V \longrightarrow K$$

is called a linear form on $V$.

A linear form on $K^n$ is of the form

$$(x_1, \ldots, x_n) \longmapsto a_1 x_1 + a_2 x_2 + \cdots + a_n x_n$$

for a tuple $(a_1, \ldots, a_n) \in K^n$. The projections

$$p_i \colon K^n \longrightarrow K, \, (x_1, \ldots, x_n) \longmapsto x_i,$$

are the simplest linear forms.

The zero mapping from $V$ to $K$ is also a linear form, called the zero form.

We have encountered many linear forms already, for example, the price function for a purchase of several products, or the vitamin content of fruit salads made from various fruits. With respect to a basis $v_1, \ldots, v_n$ of $V$ and a basis $w$ of $K$ (where $w$ is just an element of $K$ different from $0$), the describing matrix of a linear form is simply a row with $n$ entries.


Many important examples of linear forms on vector spaces of infinite dimension arise in analysis. For a real interval $I \subseteq \mathbb{R}$, the set of functions $f \colon I \to \mathbb{R}$, the set of continuous functions on $I$, and the set of continuously differentiable functions on $I$ form real vector spaces. For a point $P \in I$, the evaluation

$$f \longmapsto f(P)$$

is a linear form (because addition and scalar multiplication are defined pointwise on these spaces). Also, the evaluation of the derivative at $P$,

$$f \longmapsto f'(P),$$

is a linear form. For $I = [a, b]$, the integral, that is, the mapping

$$f \longmapsto \int_a^b f(t) \, dt,$$

is a linear form. This rests on the linearity of the integral.
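A minimal numerical sketch (the functions and the evaluation point below are made-up examples) of the fact that evaluation at a point is linear with respect to the pointwise operations:

```python
# Evaluation at a point P as a linear form on a space of real functions.
def evaluate_at(P):
    """Return the linear form 'evaluate at P'."""
    return lambda f: f(P)

def add(f, g):           # pointwise addition of functions
    return lambda x: f(x) + g(x)

def scale(s, f):         # pointwise scalar multiplication
    return lambda x: s * f(x)

delta = evaluate_at(2.0)
f = lambda x: x * x
g = lambda x: 3 * x + 1

# Linearity: delta(f + g) = delta(f) + delta(g), delta(s f) = s delta(f)
print(delta(add(f, g)), delta(f) + delta(g))   # 11.0 11.0
print(delta(scale(5, f)), 5 * delta(f))        # 20.0 20.0
```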


Let $K$ be a field, and let $V$ and $W$ denote vector spaces over $K$. For a linear form

$$f \colon V \longrightarrow K$$

and a vector $w \in W$, the mapping

$$V \longrightarrow W, \, v \longmapsto f(v) w,$$

is linear. It is just the composition

$$V \stackrel{f}{\longrightarrow} K \stackrel{\cdot w}{\longrightarrow} W,$$

where $\cdot w$ denotes the mapping $s \longmapsto s w$.

The kernel of the zero form is the total space; for any other linear form $f \colon V \to K$ on a vector space of finite dimension $n$, the dimension of the kernel is $n - 1$. This follows from the dimension formula. With the exception of the zero form, a linear form is always surjective.
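For a concrete illustration (the coefficients below are a made-up example), the kernel of a nonzero linear form on $\mathbb{R}^3$ is a plane, i.e. has dimension $3 - 1 = 2$:

```python
# The linear form f(x, y, z) = x + 2y + 3z on R^3 (coefficients made up).
def f(v):
    a = (1, 2, 3)
    return sum(ai * vi for ai, vi in zip(a, v))

# Two linearly independent vectors in the kernel, spanning the plane:
k1 = (2, -1, 0)   # f(k1) = 2 - 2 + 0 = 0
k2 = (3, 0, -1)   # f(k2) = 3 + 0 - 3 = 0
print(f(k1), f(k2))   # 0 0
print(f((1, 0, 0)))   # 1, so f is not the zero form and hence surjective
```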


Lemma 14.6

Let $V$ be an $n$-dimensional $K$-vector space, and let $U \subseteq V$ denote an $(n-1)$-dimensional linear subspace. Then there exists a linear form $f \colon V \to K$ such that

$$U = \operatorname{ker} f.$$

Proof

Choose a basis $u_1, \ldots, u_{n-1}$ of $U$ and extend it to a basis $u_1, \ldots, u_{n-1}, v$ of $V$. Let $f$ be the linear form determined by $f(u_i) = 0$ for all $i$ and $f(v) = 1$. Then $U \subseteq \operatorname{ker} f$, and since $f \neq 0$, the kernel has dimension $n-1$, so $U = \operatorname{ker} f$.



Let $V$ denote a $K$-vector space, and let $v \in V$ be a vector different from $0$. Then there exists a linear form $f \colon V \to K$ such that

$$f(v) \neq 0.$$

The one-dimensional linear subspace $Kv$ has a direct complement, that is,

$$V = Kv \oplus U$$

with some linear subspace $U \subseteq V$. The projection onto $Kv$ for this decomposition, followed by the identification $Kv \to K, \, sv \mapsto s$, sends $v$ to $1$.



Let $K$ be a field, and let $V$ be a $K$-vector space. Let $v_1, \ldots, v_n$ be vectors in $V$. Suppose that for every $i$, there exists a linear form

$$f_i \colon V \longrightarrow K$$

such that

$$f_i(v_i) \neq 0 \text{ and } f_i(v_j) = 0 \text{ for all } j \neq i.$$

Then the $v_1, \ldots, v_n$ are linearly independent.

Proof

Suppose that $\sum_{j=1}^n a_j v_j = 0$. Applying $f_i$ to this equation yields $a_i f_i(v_i) = 0$, and since $f_i(v_i) \neq 0$, we get $a_i = 0$ for every $i$.




The dual space

Let $K$ be a field and let $V$ denote a $K$-vector space. Then the space of homomorphisms

$$V^* := \operatorname{Hom}_K(V, K)$$

is called the dual space of $V$.

Addition and scalar multiplication are defined as in the general case of a space of homomorphisms, thus $(f + g)(v) = f(v) + g(v)$ and $(sf)(v) = s \cdot f(v)$. For a finite-dimensional $V$, we obtain, due to Corollary 13.12, that the dimension of the dual space equals the dimension of $V$.


Let $V$ denote a finite-dimensional $K$-vector space, endowed with a basis $v_1, \ldots, v_n$. Then the linear forms

$$v_1^*, \ldots, v_n^* \colon V \longrightarrow K,$$

defined by[1]

$$v_i^*(v_j) = \delta_{ij} = \begin{cases} 1, & \text{if } i = j, \\ 0, & \text{if } i \neq j, \end{cases}$$

are called the dual basis of the given basis.


Because of Theorem 10.10, this rule indeed defines a linear form. The linear form $v_i^*$ assigns to an arbitrary vector $v$ the $i$-th coordinate of $v$ with respect to the given basis. Note that for $v = s_1 v_1 + \cdots + s_n v_n$, we have

$$v_i^*(v) = s_i.$$

It is important to stress that $v_i^*$ does not only depend on the vector $v_i$, but on the entire basis. There does not exist something like a "dual vector" for a single vector. This looks different in the situation where an inner product is given on $V$.
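The dependence on the whole basis can be seen concretely (the two bases of $\mathbb{R}^2$ below are made-up examples): the same first basis vector $v_1 = (1, 0)$ has different dual forms $v_1^*$ depending on the second basis vector.

```python
# v1 = (1, 0) as first vector of two different bases of R^2.
v1 = (1, 0)

# Basis 1: v1, (0, 1).  Coordinates of (x, y) are (x, y), so v1* = x.
def v1_star_basis1(x, y):
    return x

# Basis 2: v1, (1, 1).  (x, y) = (x - y) * v1 + y * (1, 1), so v1* = x - y.
def v1_star_basis2(x, y):
    return x - y

# Both forms send v1 to 1, but they differ on other vectors:
print(v1_star_basis1(3, 2), v1_star_basis2(3, 2))   # 3 1
```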


For the standard basis $e_1, \ldots, e_n$ of $K^n$, the dual basis consists of the projections onto the components; that is, we have $e_i^* = p_i$, where

$$p_i \colon K^n \longrightarrow K, \, (x_1, \ldots, x_n) \longmapsto x_i.$$

This basis is called the standard dual basis.


Let $V$ be a finite-dimensional $K$-vector space, endowed with a basis $v_1, \ldots, v_n$. Then the dual basis

$$v_1^*, \ldots, v_n^*$$

is a basis of the dual space $V^*$.

Suppose that

$$\sum_{i=1}^n a_i v_i^* = 0,$$

where $a_i \in K$. If we apply this linear form to $v_j$, we get directly

$$a_j = \sum_{i=1}^n a_i v_i^*(v_j) = 0.$$

Therefore, the $v_i^*$ are linearly independent. Due to Corollary 13.12, the dual space has dimension $n$, thus we have a basis already.



Lemma 14.13

Let $V$ be a finite-dimensional $K$-vector space, endowed with a basis $v_1, \ldots, v_n$, and the corresponding dual basis

$$v_1^*, \ldots, v_n^*.$$

Then, for every vector $v \in V$, the equality

$$v = \sum_{i=1}^n v_i^*(v) \, v_i$$

holds. The linear forms $v_i^*$ thus yield the scalars (coordinates) of a vector with respect to a basis.

The vector $v$ has a unique representation

$$v = \sum_{j=1}^n s_j v_j$$

with $s_j \in K$. The right-hand side of the claimed equality is therefore

$$\sum_{i=1}^n v_i^*(v) \, v_i = \sum_{i=1}^n v_i^* \Bigl( \sum_{j=1}^n s_j v_j \Bigr) v_i = \sum_{i=1}^n s_i v_i = v.$$
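The coordinate formula can be checked numerically; the basis of $\mathbb{R}^2$ below is a made-up example, and the dual forms are computed by Cramer's rule:

```python
# Made-up basis of R^2: v1 = (1, 2), v2 = (1, 3), determinant 1.
v1, v2 = (1, 2), (1, 3)
det = v1[0] * v2[1] - v2[0] * v1[1]     # 1*3 - 1*2 = 1

def coords(v):
    """Return (v1*(v), v2*(v)), the coordinates of v in the basis v1, v2,
    obtained by Cramer's rule for s1*v1 + s2*v2 = v."""
    s1 = (v[0] * v2[1] - v2[0] * v[1]) / det
    s2 = (v1[0] * v[1] - v[0] * v1[1]) / det
    return s1, s2

s1, s2 = coords((3, 8))
# Reassemble v from its coordinates: s1 * v1 + s2 * v2 gives back (3, 8).
print(s1, s2, (s1 * v1[0] + s2 * v2[0], s1 * v1[1] + s2 * v2[1]))
```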



Let $V$ be a finite-dimensional $K$-vector space. Let $v_1, \ldots, v_n$ be a basis of $V$ with the dual basis $v_1^*, \ldots, v_n^*$, and let $w_1, \ldots, w_n$ be another basis with the dual basis $w_1^*, \ldots, w_n^*$, and with

$$w_j = \sum_{i=1}^n a_{ij} v_i.$$

Then

$$w_k^* = \sum_{i=1}^n b_{ik} v_i^*,$$

where $B = (b_{ik})$ is the transposed matrix of the inverse matrix of

$$A = (a_{ij}).$$

We have

$$\Bigl( \sum_{i=1}^n b_{ik} v_i^* \Bigr)(w_j) = \sum_{i=1}^n b_{ik} \, v_i^* \Bigl( \sum_{l=1}^n a_{lj} v_l \Bigr) = \sum_{i=1}^n b_{ik} a_{ij}.$$

Here, we have the "product" of the $k$-th column of $B$ and the $j$-th column of $A$, which is also the product of the $k$-th row of $A^{-1}$ and the $j$-th column of $A$. For $k = j$, this is $1$, and for $k \neq j$, this is $0$. Therefore, the given linear form coincides with $w_k^*$.


With the help of transformation matrices, this can be expressed as follows: the transformation matrix between the dual bases is the transposed matrix of the inverse of the transformation matrix between the original bases.
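This can be spot-checked numerically; the $2 \times 2$ base change below is a made-up example chosen to have determinant $1$:

```python
# Columns of A are the new basis vectors w1, w2 in standard coordinates.
A = [[2, 1],
     [5, 3]]                      # w1 = (2, 5), w2 = (1, 3); det(A) = 1
Ainv = [[3, -1],
        [-5, 2]]                  # inverse of A

def w_star(k, x):
    """Evaluate w_k* on a vector x: the coefficients of w_k* in the
    standard dual basis form the k-th row of A^{-1} (a column of the
    transposed inverse)."""
    return sum(Ainv[k][i] * x[i] for i in range(2))

w = [(2, 5), (1, 3)]              # the basis vectors as tuples
values = [[w_star(k, w[j]) for j in range(2)] for k in range(2)]
print(values)   # [[1, 0], [0, 1]] -- the defining property of a dual basis
```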


We consider $K^2$ with the standard basis $e_1, e_2$ and its dual basis $e_1^*, e_2^*$, together with a second basis consisting of vectors $u_1$ and $u_2$. We want to express the dual basis vectors $u_1^*$ and $u_2^*$ as linear combinations of the standard dual basis, that is, we want to determine the coefficients $a$ and $b$ (and $c$ and $d$) in

$$u_1^* = a e_1^* + b e_2^*$$

(and $u_2^* = c e_1^* + d e_2^*$). Here, $a = u_1^*(e_1)$ and $b = u_1^*(e_2)$. In order to compute these values, we have to express $e_1$ and $e_2$ as linear combinations of $u_1$ and $u_2$; applying $u_1^*$ (and $u_2^*$) to these representations yields the coefficients $a$ and $b$ (and $c$ and $d$). The transformation matrix from $u_1^*, u_2^*$ to $e_1^*, e_2^*$ is, as shown above, the transposed matrix of the inverse of the transformation matrix from $u_1, u_2$ to $e_1, e_2$.

The inverse task, to express the standard dual basis with $u_1^*$ and $u_2^*$, is easier to solve, because we can read off directly the representations of the $u_j$ with respect to the standard basis. We have

$$e_1^* = e_1^*(u_1) \, u_1^* + e_1^*(u_2) \, u_2^*$$

and

$$e_2^* = e_2^*(u_1) \, u_1^* + e_2^*(u_2) \, u_2^*,$$

where $e_i^*(u_j)$ is simply the $i$-th entry of $u_j$, as becomes clear by evaluation on both sides.
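A concrete instance of this computation, with made-up basis vectors $u_1 = (1, 2)$ and $u_2 = (3, 7)$ (chosen so that the determinant is $1$ and the inverse has integer entries):

```python
# From e1 = 7*u1 - 2*u2 and e2 = -3*u1 + 1*u2 one reads off
#   u1* = 7 e1* - 3 e2*   and   u2* = -2 e1* + 1 e2*.
def u1_star(x, y):
    return 7 * x - 3 * y

def u2_star(x, y):
    return -2 * x + y

u1, u2 = (1, 2), (3, 7)
# The defining property of the dual basis: u_i*(u_j) = delta_ij.
print(u1_star(*u1), u1_star(*u2), u2_star(*u1), u2_star(*u2))  # 1 0 0 1
```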



The trace

Let $K$ be a field and let $M = (a_{ij})$ be an $n \times n$-matrix over $K$. Then

$$\operatorname{Tr}(M) := \sum_{i=1}^n a_{ii}$$

is called the trace of $M$.

Let $K$ be a field, and let $V$ denote a finite-dimensional $K$-vector space. Let $\varphi \colon V \to V$ be a linear mapping, which is described by the matrix $M$ with respect to a basis. Then $\operatorname{Tr}(M)$ is called the trace of $\varphi$, written as $\operatorname{Tr}(\varphi)$.

Because of Exercise 14.15, this is independent of the chosen basis. The trace is a linear form on the vector space of all $n \times n$-matrices, and on the vector space of all endomorphisms of $V$.
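A small sketch of the trace and of its basis independence (the matrices below are made up): conjugating a matrix by an invertible base-change matrix does not change the trace.

```python
# Trace of a square matrix, given as a list of rows.
def trace(M):
    return sum(M[i][i] for i in range(len(M)))

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 2],
     [3, 4]]
B = [[2, 1],
     [5, 3]]          # base-change matrix, det = 1
Binv = [[3, -1],
        [-5, 2]]      # inverse of B

conjugated = matmul(matmul(Binv, A), B)   # B^{-1} A B
print(trace(A), trace(conjugated))        # 5 5
```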



Footnotes
  1. This symbol, $\delta_{ij}$, is called the Kronecker delta.


<< | Linear algebra (Osnabrück 2024-2025)/Part I | >>
PDF-version of this lecture
Exercise sheet for this lecture (PDF)