# Tensors/Bases, components, and dual spaces

This article discusses the space of linear forms (that is, linear, real-valued functions of vectors) and the space of linear, real-valued functions of linear forms. Recall from Definitions that another name for a linear, real-valued function of linear forms is a *1^{st} rank contravariant tensor*.

Subject classification: this is a mathematics resource.

Subject classification: this is a physics resource.

Educational level: this is a secondary education resource.

Educational level: this is a tertiary (university) resource.

*This article presumes that the reader has read Tensors/Definitions*.

*In this article, all vector spaces are real and finite-dimensional*.

The goal of this article is to show that a linear, real-valued function of linear forms is really just a vector in the original vector space, and that this association is completely natural.

First, a quick review of linear forms. The set of linear forms is itself a vector space, though it is not the same as the original vector space. To be a vector space, it must

- have a zero element. The zero form is just the form that gives a result of zero for any argument vector.

- allow scalar multiplication. This is defined by $(a\varphi)(\mathbf{V}) = a\,\varphi(\mathbf{V})$ for any scalar $a$, any form $\varphi$, and any vector $\mathbf{V}$.

- allow addition. This is defined by $(\varphi + \psi)(\mathbf{V}) = \varphi(\mathbf{V}) + \psi(\mathbf{V})$ for any forms $\varphi$ and $\psi$ and any vector $\mathbf{V}$.
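The three requirements above can be checked concretely. Here is a minimal numeric sketch (not from the article; the function names are my own) that represents a linear form on 3-dimensional space as a Python function, and builds the zero form, scalar multiples, and sums:

```python
# Represent a linear form on R^3 as a function taking a 3-component
# vector (a tuple) to a number.
def form(c1, c2, c3):
    """Build the linear form V -> c1*V[0] + c2*V[1] + c3*V[2]."""
    return lambda V: c1 * V[0] + c2 * V[1] + c3 * V[2]

zero_form = form(0, 0, 0)   # gives a result of zero for any argument vector

def scale(a, phi):          # scalar multiplication: (a*phi)(V) = a * phi(V)
    return lambda V: a * phi(V)

def add(phi, psi):          # addition: (phi + psi)(V) = phi(V) + psi(V)
    return lambda V: phi(V) + psi(V)

V = (5, 3, -2)
phi = form(1, 2, 3)
print(zero_form(V))         # 0
print(scale(2, phi)(V))     # 2 * (5 + 6 - 6) = 10
print(add(phi, phi)(V))     # 10
```

The point of the sketch is only that these operations close on the set of forms: scaling or adding linear forms yields another linear form, so the set of forms is itself a vector space.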

Also, a quick review of some facts about vector spaces. Every vector space has a *basis* (plural *bases*, from the Latin), also sometimes called a "coordinate system"^{[1]}. In fact, any vector space has many bases. Each basis is just a set of **N** (3 for definiteness) *basis vectors* such that any vector is a *linear combination* of the basis vectors. That is, any vector can be written as a sum of basis vectors with certain coefficients. If we use the traditional notation for Cartesian basis vectors in 3-dimensional space, they are $\hat{x}$, $\hat{y}$, and $\hat{z}$. So any vector **V** can be written as a sum of those basis vectors, with certain coefficients, as in:

$$\mathbf{V} = 5\hat{x} + 3\hat{y} - 2\hat{z}$$

The numbers 5, 3, and -2 are the *components* of the vector. In general, the components of **V** are written V^{1}, V^{2}, and V^{3}. So we can write:

$$\mathbf{V} = V^1\hat{x} + V^2\hat{y} + V^3\hat{z}$$

- Once a basis for a vector space has been chosen, every vector is completely characterized by its components in that basis. The components of a vector in a different basis will be different, even though it's the same vector. If all of the vectors in a vector space can be expressed as a linear combination of a set of vectors, that set is said to "span" the vector space.
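The dependence of components on the choice of basis can be made concrete. The following sketch (my own illustration, not from the article) builds the same vector as a linear combination of two different bases, with different coefficients each time:

```python
# Basis vectors are tuples in R^3; a vector is a linear combination of them.
def combine(coeffs, basis):
    """Return the linear combination sum_i coeffs[i] * basis[i]."""
    return tuple(sum(c * b[k] for c, b in zip(coeffs, basis)) for k in range(3))

cartesian = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]    # x-hat, y-hat, z-hat
V = combine((5, 3, -2), cartesian)               # components 5, 3, -2
print(V)                                         # (5, 3, -2)

# A different (non-orthogonal) basis spans the same space; the same
# vector V has different components in it.
skewed = [(1, 0, 0), (1, 1, 0), (0, 0, 1)]
print(combine((2, 3, -2), skewed))               # also (5, 3, -2)
```

Both calls produce the same vector, even though the coefficient lists differ; the components characterize the vector only relative to a chosen basis.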

It needs to be pointed out that the concepts being discussed here, and tensor algebra in general, do **not** depend on the dimension being 3, do **not** depend on the basis being Cartesian or orthogonal, and do **not** depend on the basis vectors having length 1. We are only using the common x/y/z basis as an example to make things look familiar to the reader.

Now the set of linear forms is itself a vector space, as noted above. In fact, whenever one chooses a basis for the original vector space, there is a "natural" basis for the form space. We will call that basis $\omega^1$, $\omega^2$, and $\omega^3$. These basis forms are defined as the functions that pick out the corresponding component of their argument vector.

- $\omega^1(\mathbf{V}) = V^1$ for any vector **V**.

- $\omega^2(\mathbf{V}) = V^2$ for any vector **V**.

- $\omega^3(\mathbf{V}) = V^3$ for any vector **V**.
Are these 3 forms a basis? That is, can any form be expressed as a linear combination of $\omega^1$, $\omega^2$, and $\omega^3$? Yes. Given any form $\varphi$, let

$$\varphi_1 = \varphi(\hat{x}) \qquad \varphi_2 = \varphi(\hat{y}) \qquad \varphi_3 = \varphi(\hat{z})$$

That is, apply the form to the corresponding basis vectors to get its components.

To show that these numbers are the components of $\varphi$, we must show that

$$\varphi = \varphi_1\,\omega^1 + \varphi_2\,\omega^2 + \varphi_3\,\omega^3$$

That is,

$$\varphi(\mathbf{V}) = \varphi_1\,\omega^1(\mathbf{V}) + \varphi_2\,\omega^2(\mathbf{V}) + \varphi_3\,\omega^3(\mathbf{V})$$

for any vector **V**.
Proof: The right-hand side is

$$\varphi_1\,\omega^1(\mathbf{V}) + \varphi_2\,\omega^2(\mathbf{V}) + \varphi_3\,\omega^3(\mathbf{V}) = \varphi(\hat{x})\,V^1 + \varphi(\hat{y})\,V^2 + \varphi(\hat{z})\,V^3$$

$$= \varphi(V^1\hat{x}) + \varphi(V^2\hat{y}) + \varphi(V^3\hat{z}) \qquad \text{by linearity}$$

$$= \varphi(V^1\hat{x} + V^2\hat{y} + V^3\hat{z}) = \varphi(\mathbf{V}) \qquad \text{by linearity again}$$
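The proof above can also be checked numerically. This is a sketch of my own (not from the article): take an arbitrary linear form, read off its components by applying it to the basis vectors, and confirm that the corresponding combination of basis forms agrees with it on a sample vector:

```python
def omega(i):
    """Dual-basis form: picks out component i of its argument vector."""
    return lambda V: V[i]

xhat, yhat, zhat = (1, 0, 0), (0, 1, 0), (0, 0, 1)
phi = lambda V: 4 * V[0] - V[1] + 7 * V[2]       # an arbitrary linear form

# The components of phi are its values on the basis vectors.
comps = [phi(xhat), phi(yhat), phi(zhat)]        # [4, -1, 7]

V = (5, 3, -2)
lhs = phi(V)                                     # 20 - 3 - 14 = 3
rhs = sum(c * omega(i)(V) for i, c in enumerate(comps))
print(lhs, rhs)                                  # 3 3
```

Of course a single sample vector is not a proof; the linearity argument above is what guarantees agreement for every vector.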

## Theorem: 1^{st} rank contravariant tensors are just vectors, in a natural way

Recall from Definitions that a 1^{st} rank contravariant tensor is a linear, real-valued function of linear forms. That is, for any form $\varphi$, $T(\varphi)$ is a number.

Given a vector **V**, there is a natural tensor $T$ that corresponds to it, given by

- $T(\varphi) = \varphi(\mathbf{V})$ for any form $\varphi$.

But does this work the other way? Given a contravariant tensor $T$, can we come up with a vector **V** that corresponds to it in the same way?

We simply define **V** by its components, as follows:

$$V^1 = T(\omega^1) \qquad V^2 = T(\omega^2) \qquad V^3 = T(\omega^3)$$

That's all there is to it! We need to show that, with this definition of **V**, we have

- $T(\varphi) = \varphi(\mathbf{V})$ for all forms $\varphi$.

Let the components of $\varphi$ be $\varphi_1$, $\varphi_2$, and $\varphi_3$, as usual. Then

$$\varphi(\mathbf{V}) = \varphi_1 V^1 + \varphi_2 V^2 + \varphi_3 V^3 = \varphi_1\,T(\omega^1) + \varphi_2\,T(\omega^2) + \varphi_3\,T(\omega^3) = T(\varphi_1\,\omega^1 + \varphi_2\,\omega^2 + \varphi_3\,\omega^3) = T(\varphi)$$

by the linearity of $T$.

Note that we required that the vector space have a basis, and that the basis be finite (that is, that the space be finite-dimensional; that's why the disclaimer at the top of the page). But the result is independent of the basis—once we have found **V**, it corresponds to $T$ in a natural way that does not depend on the basis chosen.
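The construction can be checked with a small sketch (my own illustration, not from the article): start from a linear function of forms, define the components of **V** by applying it to the basis forms, and confirm that the recovered vector reproduces the tensor's value on a sample form:

```python
def omega(i):
    """Dual-basis form: picks out component i of its argument vector."""
    return lambda V: V[i]

# A 1st-rank contravariant tensor: a linear, real-valued function of forms.
# (Here it secretly comes from the vector W, but we pretend not to know that.)
W = (5, 3, -2)
T = lambda phi: phi(W)

# Define V by its components: V^i = T(omega^i).
V = tuple(T(omega(i)) for i in range(3))
print(V)                                         # (5, 3, -2)

phi = lambda U: 4 * U[0] - U[1] + 7 * U[2]       # a sample form
print(T(phi), phi(V))                            # equal, as the theorem requires
```

The recovered tuple is exactly the hidden vector, and `T(phi)` agrees with `phi(V)`, which is the content of the double dual theorem in this finite-dimensional setting.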

By the way, the formula

$$\varphi(\mathbf{V}) = \varphi_1 V^1 + \varphi_2 V^2 + \varphi_3 V^3$$

will come in handy in the next article.

What we have proved is the "double dual" theorem. For any vector space, its *dual space* is the space of linear forms on the original vectors. A 1^{st} rank contravariant tensor is actually a linear form on the dual space, so it is an element of the dual space of the dual space. The double dual theorem says that the double dual space is the same as the original space—there is a natural correspondence between vectors and forms on forms. The double dual theorem is *not true* if the vector space is not finite-dimensional. (An example of an infinite-dimensional vector space is the space of real-valued functions of real numbers.)

**Why on Earth are we writing vector components with superscripts instead of subscripts?** The reader may have noticed the notational peculiarity of using superscripts for vector components. This is the convention when working with tensors. It is an aspect of "index notation." One would do well to get used to this if one is interested in the study of tensors. It makes many manipulations very simple in their presentation, as will be seen in the next article.

The next article in this series is Tensors/Calculations with index notation.

## See also

## Footnotes

- ↑ It's best not to use the term "coordinate system" for vector spaces, because of confusion when one gets into differential geometry and curvilinear coordinates.