Vector notation is ubiquitous in the modern literature on solid mechanics, fluid mechanics, biomechanics, nonlinear finite elements and a host of other subjects in mechanics. A student has to be familiar with the notation in order to be able to read the literature. In this section we introduce the notation that is used, common operations in vector algebra, and some ideas from vector calculus.
A vector is an object with two defining properties:

a vector has a magnitude (or length)
a vector has a direction.

To make the definition of the vector object more precise, we may also say that vectors are objects that satisfy the axioms of a vector space.
The standard notation for a vector is lower case bold type (for example $\mathbf{a}$).
In Figure 1(a) you can see a vector $\mathbf{a}$ in red. This vector can be represented in component form with respect to the basis ($\mathbf{e}_1, \mathbf{e}_2$) as

$$\mathbf{a} = a_1\,\mathbf{e}_1 + a_2\,\mathbf{e}_2$$

where $\mathbf{e}_1$ and $\mathbf{e}_2$ are orthonormal unit vectors. Recall that unit vectors are vectors of length 1. These vectors are also called basis vectors.
You could also represent the same vector $\mathbf{a}$ in terms of another set of basis vectors ($\mathbf{g}_1, \mathbf{g}_2$), as shown in Figure 1(b). In that case, the components of the vector are $(b_1, b_2)$ and we can write

$$\mathbf{a} = b_1\,\mathbf{g}_1 + b_2\,\mathbf{g}_2~.$$

Note that the basis vectors $\mathbf{g}_1$ and $\mathbf{g}_2$ do not necessarily have to be unit vectors. All we need is that they be linearly independent, that is, it should not be possible to represent one of them solely in terms of the others.
In three dimensions, using an orthonormal basis, we can write the vector $\mathbf{a}$ as

$$\mathbf{a} = a_1\,\mathbf{e}_1 + a_2\,\mathbf{e}_2 + a_3\,\mathbf{e}_3$$

where $\mathbf{e}_3$ is perpendicular to both $\mathbf{e}_1$ and $\mathbf{e}_2$. This is the usual basis in which we express arbitrary vectors.
Figure 1: A vector and its basis.
Some vector operations are shown in Figure 2.
Figure 2: Vector operations.
Addition and subtraction

If $\mathbf{a}$ and $\mathbf{b}$ are vectors, then the sum

$$\mathbf{c} = \mathbf{a} + \mathbf{b}$$

is also a vector (see Figure 2(a)). The two vectors can also be subtracted from one another to give another vector

$$\mathbf{d} = \mathbf{a} - \mathbf{b}~.$$
Multiplication by a scalar

Multiplication of a vector $\mathbf{b}$ by a scalar $\lambda$ has the effect of stretching or shrinking the vector (see Figure 2(b)).

You can form a unit vector $\hat{\mathbf{b}}$ that is parallel to $\mathbf{b}$ by dividing by the length of the vector $|\mathbf{b}|$. Thus,

$$\hat{\mathbf{b}} = \cfrac{\mathbf{b}}{|\mathbf{b}|}~.$$
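As a minimal numerical sketch (not part of the original text, NumPy assumed), the operations above can be carried out on arrays of components:

```python
# Basic vector operations, using NumPy arrays as vectors in the e1, e2 basis.
import numpy as np

a = np.array([3.0, 4.0])
b = np.array([1.0, 2.0])

c = a + b                        # vector addition
d = a - b                        # vector subtraction
s = 2.5 * b                      # multiplication by a scalar stretches b
b_hat = b / np.linalg.norm(b)    # unit vector parallel to b

assert np.allclose(c, [4.0, 6.0])
assert np.allclose(d, [2.0, 2.0])
assert np.isclose(np.linalg.norm(b_hat), 1.0)   # b_hat has length 1
```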
Scalar product of two vectors

The scalar product (also called the inner product or dot product) of two vectors is defined as

$$\mathbf{a}\bullet\mathbf{b} = |\mathbf{a}||\mathbf{b}|\cos(\theta)$$

where $\theta$ is the angle between the two vectors (see Figure 2(b)). If $\mathbf{a}$ and $\mathbf{b}$ are perpendicular to each other, $\theta = \pi/2$ and $\cos(\theta) = 0$. Therefore, $\mathbf{a}\bullet\mathbf{b} = 0$.

The dot product also has a geometric interpretation: the dot product of $\mathbf{a}$ with the unit vector $\hat{\mathbf{b}}$ gives the length of the projection of $\mathbf{a}$ onto $\hat{\mathbf{b}}$ when the two vectors are placed so that they start from the same point.
The scalar product leads to a scalar quantity and can also be written in component form (with respect to a given basis) as

$$\mathbf{a}\bullet\mathbf{b} = a_1 b_1 + a_2 b_2 + a_3 b_3 = \sum_{i=1}^{3} a_i b_i~.$$
If the vector is $n$-dimensional, the dot product is written as

$$\mathbf{a}\bullet\mathbf{b} = \sum_{i=1}^{n} a_i b_i~.$$

Using the Einstein summation convention, we can also write the scalar product as

$$\mathbf{a}\bullet\mathbf{b} = a_i b_i~.$$
Notice that the following relations also hold for the scalar product:

$$\mathbf{a}\bullet\mathbf{b} = \mathbf{b}\bullet\mathbf{a} \quad \text{(commutative law)}$$

$$\mathbf{a}\bullet(\mathbf{b}+\mathbf{c}) = \mathbf{a}\bullet\mathbf{b} + \mathbf{a}\bullet\mathbf{c} \quad \text{(distributive law)}$$
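A quick numerical check (a sketch, not from the original text; NumPy assumed) that the geometric and component definitions of the dot product agree, and that the commutative law holds:

```python
import numpy as np

a = np.array([1.0, 2.0, 2.0])
b = np.array([2.0, 0.0, 1.0])

# Component form: a1*b1 + a2*b2 + a3*b3
dot_components = np.sum(a * b)

# Geometric form: |a| |b| cos(theta), recovering theta from the components
theta = np.arccos(dot_components / (np.linalg.norm(a) * np.linalg.norm(b)))
dot_geometric = np.linalg.norm(a) * np.linalg.norm(b) * np.cos(theta)

assert np.isclose(dot_components, dot_geometric)
assert np.isclose(np.dot(a, b), np.dot(b, a))   # commutative law
```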
Vector product of two vectors

The vector product (or cross product) of two vectors $\mathbf{a}$ and $\mathbf{b}$ is another vector $\mathbf{c}$ defined as

$$\mathbf{c} = \mathbf{a}\times\mathbf{b} = |\mathbf{a}||\mathbf{b}|\sin(\theta)\,\hat{\mathbf{c}}$$

where $\theta$ is the angle between $\mathbf{a}$ and $\mathbf{b}$, and $\hat{\mathbf{c}}$ is a unit vector perpendicular to the plane containing $\mathbf{a}$ and $\mathbf{b}$ in the right-handed sense (see Figure 3 for a geometric interpretation).
Figure 3: Vector product of two vectors.
In terms of the orthonormal basis $(\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3)$, the cross product can be written in the form of a determinant

$$\mathbf{a}\times\mathbf{b} = \begin{vmatrix} \mathbf{e}_1 & \mathbf{e}_2 & \mathbf{e}_3 \\ a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \end{vmatrix}~.$$
In index notation, the cross product can be written as

$$\mathbf{a}\times\mathbf{b} \equiv \varepsilon_{ijk}\,\mathbf{e}_i\,a_j\,b_k$$

where $\varepsilon_{ijk}$ is the Levi-Civita symbol (also called the permutation symbol or alternating tensor).
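The index-notation formula can be evaluated directly. The sketch below (an illustration, not from the original text; NumPy assumed) builds the Levi-Civita symbol as a 3×3×3 array, contracts it with the components $a_j$ and $b_k$, and compares the result with `np.cross`:

```python
import numpy as np

# Levi-Civita symbol eps[i, j, k]: +1 for even permutations of (1,2,3),
# -1 for odd permutations, 0 otherwise.
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# (a x b)_i = eps_ijk a_j b_k, with summation over the repeated indices j, k
c = np.einsum('ijk,j,k->i', eps, a, b)

assert np.allclose(c, np.cross(a, b))
```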
Identities from vector algebra

Some useful vector identities are given below.

$$\mathbf{a}\times\mathbf{b} = -\mathbf{b}\times\mathbf{a}$$

$$\mathbf{a}\times(\mathbf{b}+\mathbf{c}) = \mathbf{a}\times\mathbf{b} + \mathbf{a}\times\mathbf{c}$$

$$\mathbf{a}\times(\mathbf{b}\times\mathbf{c}) = \mathbf{b}\,(\mathbf{a}\bullet\mathbf{c}) - \mathbf{c}\,(\mathbf{a}\bullet\mathbf{b})$$

$$(\mathbf{a}\times\mathbf{b})\times\mathbf{c} = \mathbf{b}\,(\mathbf{a}\bullet\mathbf{c}) - \mathbf{a}\,(\mathbf{b}\bullet\mathbf{c})$$

$$\mathbf{a}\times\mathbf{a} = \mathbf{0}$$

$$\mathbf{a}\bullet(\mathbf{a}\times\mathbf{b}) = \mathbf{b}\bullet(\mathbf{a}\times\mathbf{b}) = 0$$

$$(\mathbf{a}\times\mathbf{b})\bullet\mathbf{c} = \mathbf{a}\bullet(\mathbf{b}\times\mathbf{c})$$
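Identities like these are easy to sanity-check numerically on random vectors. A sketch (not from the original text; NumPy assumed) for the triple-product identities:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c = rng.standard_normal((3, 3))   # three random 3-vectors

# a x (b x c) = b (a.c) - c (a.b)
lhs = np.cross(a, np.cross(b, c))
rhs = b * np.dot(a, c) - c * np.dot(a, b)
assert np.allclose(lhs, rhs)

# a x b is perpendicular to both a and b
assert np.isclose(np.dot(a, np.cross(a, b)), 0.0)

# scalar triple product: (a x b).c = a.(b x c)
assert np.isclose(np.dot(np.cross(a, b), c), np.dot(a, np.cross(b, c)))
```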
So far we have dealt with constant vectors. In mechanics, however, we usually work with vectors that vary in space. We can then define derivatives and integrals and deal with vector fields. Some basic ideas of vector calculus are discussed below.
Derivative of a vector valued function

Let $\mathbf{a}(x)$ be a vector function that can be represented as

$$\mathbf{a}(x) = a_1(x)\,\mathbf{e}_1 + a_2(x)\,\mathbf{e}_2 + a_3(x)\,\mathbf{e}_3$$

where $x$ is a scalar.
Then the derivative of $\mathbf{a}(x)$ with respect to $x$ is

$$\cfrac{d\mathbf{a}(x)}{dx} = \lim_{\Delta x \rightarrow 0} \cfrac{\mathbf{a}(x+\Delta x) - \mathbf{a}(x)}{\Delta x} = \cfrac{da_1(x)}{dx}\,\mathbf{e}_1 + \cfrac{da_2(x)}{dx}\,\mathbf{e}_2 + \cfrac{da_3(x)}{dx}\,\mathbf{e}_3~.$$

The second equality holds because the basis vectors $\mathbf{e}_i$ are constant.
If $\mathbf{a}(x)$ and $\mathbf{b}(x)$ are two vector functions, then from the product rule we get

$$\begin{aligned}
\cfrac{d(\mathbf{a}\bullet\mathbf{b})}{dx} &= \mathbf{a}\bullet\cfrac{d\mathbf{b}}{dx} + \cfrac{d\mathbf{a}}{dx}\bullet\mathbf{b} \\
\cfrac{d(\mathbf{a}\times\mathbf{b})}{dx} &= \mathbf{a}\times\cfrac{d\mathbf{b}}{dx} + \cfrac{d\mathbf{a}}{dx}\times\mathbf{b} \\
\cfrac{d[\mathbf{a}\bullet(\mathbf{b}\times\mathbf{c})]}{dx} &= \cfrac{d\mathbf{a}}{dx}\bullet(\mathbf{b}\times\mathbf{c}) + \mathbf{a}\bullet\left(\cfrac{d\mathbf{b}}{dx}\times\mathbf{c}\right) + \mathbf{a}\bullet\left(\mathbf{b}\times\cfrac{d\mathbf{c}}{dx}\right)
\end{aligned}$$
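The first product rule can be checked against a central finite difference. The vector functions below are illustrative choices, not from the original text (NumPy assumed):

```python
import numpy as np

# Illustrative vector functions a(x), b(x) and their exact derivatives
def a(x):  return np.array([np.sin(x), x**2, 1.0])
def b(x):  return np.array([np.cos(x), x, np.exp(x)])
def da(x): return np.array([np.cos(x), 2.0 * x, 0.0])
def db(x): return np.array([-np.sin(x), 1.0, np.exp(x)])

x, h = 0.7, 1e-6

# Central-difference approximation of d(a.b)/dx
numeric = (np.dot(a(x + h), b(x + h)) - np.dot(a(x - h), b(x - h))) / (2 * h)

# Product rule: a . db/dx + da/dx . b
analytic = np.dot(a(x), db(x)) + np.dot(da(x), b(x))

assert np.isclose(numeric, analytic, atol=1e-6)
```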
Scalar and vector fields

Let $\mathbf{x}$ be the position vector of any point in space. Suppose that there is a scalar function $g$ that assigns a value to each point in space. Then

$$g = g(\mathbf{x})$$

represents a scalar field. An example of a scalar field is the temperature. See Figure 4(a).

Figure 4: Scalar and vector fields.

If there is a vector function $\mathbf{a}$ that assigns a vector to each point in space, then

$$\mathbf{a} = \mathbf{a}(\mathbf{x})$$

represents a vector field. An example is the displacement field. See Figure 4(b).
Gradient of a scalar field

Let $\varphi(\mathbf{x})$ be a scalar function. Assume that the partial derivatives of the function are continuous in some region of space. If the point $\mathbf{x}$ has coordinates ($x_1, x_2, x_3$) with respect to the basis ($\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3$), the gradient of $\varphi$ is defined as

$$\boldsymbol{\nabla}\varphi = \frac{\partial \varphi}{\partial x_1}\,\mathbf{e}_1 + \frac{\partial \varphi}{\partial x_2}\,\mathbf{e}_2 + \frac{\partial \varphi}{\partial x_3}\,\mathbf{e}_3~.$$

In index notation,

$$\boldsymbol{\nabla}\varphi \equiv \varphi_{,i}\,\mathbf{e}_i~.$$
The gradient is clearly a vector and has a direction. We can think of the gradient at a point as the vector perpendicular to the level contour of $\varphi$ through that point.
It is often useful to think of the symbol $\boldsymbol{\nabla}$ as an operator of the form

$$\boldsymbol{\nabla} = \frac{\partial}{\partial x_1}\,\mathbf{e}_1 + \frac{\partial}{\partial x_2}\,\mathbf{e}_2 + \frac{\partial}{\partial x_3}\,\mathbf{e}_3~.$$
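A gradient computed from the definition can be checked by central differences. The scalar field below is an illustrative choice, not from the original text (NumPy assumed):

```python
import numpy as np

def phi(x):
    # Illustrative scalar field: phi = x1^2 + x2*x3
    return x[0]**2 + x[1] * x[2]

def grad_phi(x):
    # Analytic gradient of phi: (2*x1, x3, x2)
    return np.array([2.0 * x[0], x[2], x[1]])

x, h = np.array([1.0, 2.0, 3.0]), 1e-6

# Central-difference approximation of each partial derivative
numeric = np.array([(phi(x + h * e) - phi(x - h * e)) / (2 * h)
                    for e in np.eye(3)])

assert np.allclose(numeric, grad_phi(x), atol=1e-6)
```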
Divergence of a vector field

If we form a scalar product of a vector field $\mathbf{u}(\mathbf{x})$ with the $\boldsymbol{\nabla}$ operator, we get a scalar quantity called the divergence of the vector field. Thus,

$$\boldsymbol{\nabla}\bullet\mathbf{u} = \frac{\partial u_1}{\partial x_1} + \frac{\partial u_2}{\partial x_2} + \frac{\partial u_3}{\partial x_3}~.$$
In index notation,

$$\boldsymbol{\nabla}\bullet\mathbf{u} \equiv u_{i,i}~.$$

If $\boldsymbol{\nabla}\bullet\mathbf{u} = 0$, then $\mathbf{u}$ is called a divergence-free field.
The physical significance of the divergence of a vector field is the rate at which some density exits a given region of space. In the absence of creation or destruction of matter, the density within a region of space can change only through flow into or out of the region.
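The component formula for the divergence translates directly into code. The vector field here is an illustrative choice, not from the original text (NumPy assumed):

```python
import numpy as np

def u(x):
    # Illustrative vector field u = (x1^2, x2*x3, x3); div u = 2*x1 + x3 + 1
    return np.array([x[0]**2, x[1] * x[2], x[2]])

def divergence(f, x, h=1e-6):
    # Sum of partial derivatives u_i,i by central differences
    return sum((f(x + h * e)[i] - f(x - h * e)[i]) / (2 * h)
               for i, e in enumerate(np.eye(3)))

x = np.array([1.0, 2.0, 3.0])
assert np.isclose(divergence(u, x), 2 * x[0] + x[2] + 1.0, atol=1e-6)
```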
Curl of a vector field

The curl of a vector field $\mathbf{u}(\mathbf{x})$ is a vector defined as

$$\boldsymbol{\nabla}\times\mathbf{u} = \begin{vmatrix} \mathbf{e}_1 & \mathbf{e}_2 & \mathbf{e}_3 \\ \dfrac{\partial}{\partial x_1} & \dfrac{\partial}{\partial x_2} & \dfrac{\partial}{\partial x_3} \\ u_1 & u_2 & u_3 \end{vmatrix}~.$$
The physical significance of the curl of a vector field is the amount of rotation or angular momentum of the contents of a region of space.
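The rotation interpretation can be seen on the classic rigid-rotation field $\mathbf{u} = (-x_2, x_1, 0)$, whose curl is $(0, 0, 2)$ everywhere. A sketch (illustrative field, not from the original text; NumPy assumed):

```python
import numpy as np

def u(x):
    # Rigid rotation about e3: u = (-x2, x1, 0), with curl (0, 0, 2)
    return np.array([-x[1], x[0], 0.0])

def curl(f, x, h=1e-6):
    # J[i, j] = du_i / dx_j by central differences
    J = np.array([(f(x + h * e) - f(x - h * e)) / (2 * h)
                  for e in np.eye(3)]).T
    # Expand the determinant definition component by component
    return np.array([J[2, 1] - J[1, 2],
                     J[0, 2] - J[2, 0],
                     J[1, 0] - J[0, 1]])

x = np.array([0.3, -0.7, 1.2])
assert np.allclose(curl(u, x), [0.0, 0.0, 2.0], atol=1e-6)
```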
Laplacian of a scalar or vector field

The Laplacian of a scalar field $\varphi(\mathbf{x})$ is a scalar defined as

$$\nabla^2\varphi := \boldsymbol{\nabla}\bullet\boldsymbol{\nabla}\varphi = \frac{\partial^2\varphi}{\partial x_1^2} + \frac{\partial^2\varphi}{\partial x_2^2} + \frac{\partial^2\varphi}{\partial x_3^2}~.$$
The Laplacian of a vector field $\mathbf{u}(\mathbf{x})$ is a vector which, in Cartesian coordinates, is given componentwise by

$$\nabla^2\mathbf{u} := (\nabla^2 u_1)\,\mathbf{e}_1 + (\nabla^2 u_2)\,\mathbf{e}_2 + (\nabla^2 u_3)\,\mathbf{e}_3~.$$
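For a quick check, the field $\varphi = |\mathbf{x}|^2$ has Laplacian identically 6. A finite-difference sketch (illustrative choice, not from the original text; NumPy assumed):

```python
import numpy as np

def phi(x):
    # phi = x1^2 + x2^2 + x3^2, whose Laplacian is 2 + 2 + 2 = 6
    return np.dot(x, x)

def laplacian(f, x, h=1e-4):
    # Sum of second partial derivatives by central second differences
    return sum((f(x + h * e) - 2.0 * f(x) + f(x - h * e)) / h**2
               for e in np.eye(3))

x = np.array([0.5, -1.0, 2.0])
assert np.isclose(laplacian(phi, x), 6.0, atol=1e-4)
```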
Green-Gauss divergence theorem

Let $\mathbf{u}(\mathbf{x})$ be a continuous and differentiable vector field on a body $\Omega$ with boundary $\Gamma$. The divergence theorem states that

$$\int_{\Omega} \boldsymbol{\nabla}\bullet\mathbf{u}~dV = \int_{\Gamma} \mathbf{n}\bullet\mathbf{u}~dA$$

where $\mathbf{n}$ is the outward unit normal to the surface (see Figure 5).
In index notation,

$$\int_{\Omega} u_{i,i}~dV = \int_{\Gamma} n_i u_i~dA~.$$
Figure 5: Volume for application of the divergence theorem.
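The theorem can be verified on a concrete domain. For $\mathbf{u} = (x_1^2, x_2^2, x_3^2)$ on the unit cube, $\boldsymbol{\nabla}\bullet\mathbf{u} = 2(x_1 + x_2 + x_3)$, and on each face $x_i = 1$ we have $\mathbf{n}\bullet\mathbf{u} = 1$ while the faces $x_i = 0$ contribute nothing. A midpoint-rule sketch (illustrative field and domain, not from the original text; NumPy assumed):

```python
import numpy as np

# Midpoint grid on the unit cube [0, 1]^3
n = 50
pts = (np.arange(n) + 0.5) / n
X1, X2, X3 = np.meshgrid(pts, pts, pts, indexing='ij')

# Volume integral of div u = 2*(x1 + x2 + x3); cube volume is 1,
# so the integral is just the mean of the integrand.
vol = np.mean(2.0 * (X1 + X2 + X3))

# Surface integral: each face x_i = 1 has n.u = x_i^2 = 1 over unit area,
# and each face x_i = 0 has n.u = -x_i^2 = 0, so the total is 3.
surf = 3.0

assert np.isclose(vol, surf, atol=1e-3)
```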
Identities in vector calculus

Some frequently used identities from vector calculus are listed below.

$$\boldsymbol{\nabla}\bullet(\mathbf{a}+\mathbf{b}) = \boldsymbol{\nabla}\bullet\mathbf{a} + \boldsymbol{\nabla}\bullet\mathbf{b}$$

$$\boldsymbol{\nabla}\times(\mathbf{a}+\mathbf{b}) = \boldsymbol{\nabla}\times\mathbf{a} + \boldsymbol{\nabla}\times\mathbf{b}$$

$$\boldsymbol{\nabla}\bullet(\varphi\,\mathbf{a}) = (\boldsymbol{\nabla}\varphi)\bullet\mathbf{a} + \varphi\,(\boldsymbol{\nabla}\bullet\mathbf{a})$$

$$\boldsymbol{\nabla}\times(\varphi\,\mathbf{a}) = (\boldsymbol{\nabla}\varphi)\times\mathbf{a} + \varphi\,(\boldsymbol{\nabla}\times\mathbf{a})$$

$$\boldsymbol{\nabla}\bullet(\mathbf{a}\times\mathbf{b}) = \mathbf{b}\bullet(\boldsymbol{\nabla}\times\mathbf{a}) - \mathbf{a}\bullet(\boldsymbol{\nabla}\times\mathbf{b})$$

$$\boldsymbol{\nabla}(\varphi\,\mathbf{a}) = \mathbf{a}\otimes(\boldsymbol{\nabla}\varphi) + \varphi\,\boldsymbol{\nabla}\mathbf{a}$$
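The third identity, $\boldsymbol{\nabla}\bullet(\varphi\,\mathbf{a}) = (\boldsymbol{\nabla}\varphi)\bullet\mathbf{a} + \varphi\,(\boldsymbol{\nabla}\bullet\mathbf{a})$, can be spot-checked with finite differences. The fields below are illustrative choices, not from the original text (NumPy assumed):

```python
import numpy as np

def phi(x):
    return np.sin(x[0]) * x[1]

def a(x):
    return np.array([x[1] * x[2], x[0]**2, np.cos(x[2])])

def grad(f, x, h=1e-6):
    # Gradient of a scalar function by central differences
    return np.array([(f(x + h * e) - f(x - h * e)) / (2 * h)
                     for e in np.eye(3)])

def div(f, x, h=1e-6):
    # Divergence of a vector function by central differences
    return sum((f(x + h * e)[i] - f(x - h * e)[i]) / (2 * h)
               for i, e in enumerate(np.eye(3)))

x = np.array([0.4, 1.1, -0.6])
lhs = div(lambda y: phi(y) * a(y), x)
rhs = np.dot(grad(phi, x), a(x)) + phi(x) * div(a, x)

assert np.isclose(lhs, rhs, atol=1e-5)
```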