# Robotic Mechanics and Modeling/Inverse Kinematics

## Inverse Position Kinematics

The inverse kinematics problem is the opposite of the forward kinematics problem and can be summarized as follows: given the desired position of the end effector, what combinations of joint angles (or prismatic displacements) can be used to achieve this position?

Two types of solutions can be considered: a closed-form solution and a numerical solution. Closed-form or analytical solutions are sets of equations that fully describe the connection between the end-effector position and the joint angles. Numerical solutions are found through the use of numerical algorithms, and can exist even when no closed-form solution is available. There may also be multiple solutions, or no solution at all.

The inverse kinematics problem for this 2D manipulator can be solved algebraically. The solution to the forward kinematics problem is:

${\displaystyle _{bs}^{ee}T=\,_{0}^{4}T(\theta _{1},\theta _{2},\theta _{3})={\begin{bmatrix}c_{123}&-s_{123}&l_{1}c_{1}+l_{2}c_{12}+l_{3}c_{123}\\s_{123}&c_{123}&l_{1}s_{1}+l_{2}s_{12}+l_{3}s_{123}\\0&0&1\end{bmatrix}}}$

To find the kinematic equations, we can use the following relationship:

${\displaystyle {\begin{bmatrix}x\\y\\1\end{bmatrix}}=T{\begin{bmatrix}0\\0\\1\end{bmatrix}}}$

Thus, the resulting kinematic equations are:

${\displaystyle {\begin{array}{l}x=l_{1}\cos \theta _{1}+l_{2}\cos(\theta _{1}+\theta _{2})+l_{3}\cos(\theta _{1}+\theta _{2}+\theta _{3})\\y=l_{1}\sin \theta _{1}+l_{2}\sin(\theta _{1}+\theta _{2})+l_{3}\sin(\theta _{1}+\theta _{2}+\theta _{3})\end{array}}}$
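These two kinematic equations translate directly into code. The following is a minimal sketch (the function name and default link lengths are illustrative assumptions, not from the text); it also returns the end-effector orientation ${\displaystyle \phi =\theta _{1}+\theta _{2}+\theta _{3}}$:

```python
import math

def forward_kinematics(theta1, theta2, theta3, l1=1.0, l2=1.0, l3=0.5):
    """Evaluate the forward kinematic equations of the planar
    three-link arm. Link lengths are illustrative defaults."""
    x = (l1 * math.cos(theta1)
         + l2 * math.cos(theta1 + theta2)
         + l3 * math.cos(theta1 + theta2 + theta3))
    y = (l1 * math.sin(theta1)
         + l2 * math.sin(theta1 + theta2)
         + l3 * math.sin(theta1 + theta2 + theta3))
    phi = theta1 + theta2 + theta3  # end-effector orientation
    return x, y, phi
```

With all joint angles zero the arm lies stretched along the x-axis, so the position is simply the sum of the link lengths.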

For simplicity, the inverse problem is first examined with the third link length ${\displaystyle l_{3}}$ set to zero, so that the three-link manipulator of the above figure is modeled as a two-link manipulator. An appropriate 4×4 homogeneous transform (in 3D) that includes the orientation of the third body, with ${\displaystyle l_{3}=0}$, is the following:

${\displaystyle _{0}^{3}T={\begin{bmatrix}c_{123}&-s_{123}&0&l_{1}c_{1}+l_{2}c_{12}\\s_{123}&c_{123}&0&l_{1}s_{1}+l_{2}s_{12}\\0&0&1&0\\0&0&0&1\\\end{bmatrix}}}$

Now assume a given end-effector orientation in the following form:

${\displaystyle _{bs}^{ee}T={\begin{bmatrix}c_{\phi }&-s_{\phi }&0&x\\s_{\phi }&c_{\phi }&0&y\\0&0&1&0\\0&0&0&1\\\end{bmatrix}}}$

Equating the two previous expressions results in:

${\displaystyle {\begin{array}{lll}c_{\phi }&=&c_{123}\\s_{\phi }&=&s_{123}\\x&=&l_{1}c_{1}+l_{2}c_{12}\\y&=&l_{1}s_{1}+l_{2}s_{12}\\\end{array}}}$

As:

${\displaystyle {\begin{array}{lll}c_{12}&=&c_{1}c_{2}-s_{1}s_{2}\\s_{12}&=&c_{1}s_{2}+s_{1}c_{2}\\\end{array}}}$ ,

squaring the expressions for ${\displaystyle x}$  and ${\displaystyle y}$  and adding them leads to:

${\displaystyle x^{2}+y^{2}=l_{1}^{2}+l_{2}^{2}+2l_{1}l_{2}c_{2}}$

Solving for ${\displaystyle c_{2}}$  leads to:

${\displaystyle c_{2}={\frac {x^{2}+y^{2}-l_{1}^{2}-l_{2}^{2}}{2l_{1}l_{2}}}}$ ,

while ${\displaystyle s_{2}}$  equals:

${\displaystyle s_{2}=\pm {\sqrt {1-c_{2}^{2}}}}$ ,

and, finally, ${\displaystyle \theta _{2}}$ :

${\displaystyle \theta _{2}={\mbox{Atan2}}(s_{2},c_{2})}$

Note: The choice of the sign for ${\displaystyle s_{2}}$  corresponds to one of the two solutions in the figure above.
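The two sign choices can be made explicit in code. A minimal sketch (names are illustrative) that returns both ${\displaystyle \theta _{2}}$ solutions, or none when the target is out of reach:

```python
import math

def solve_theta2(x, y, l1, l2):
    """Both solutions for theta2 of the two-link arm (l3 = 0).
    Returns a list with two solutions, or [] if unreachable."""
    c2 = (x**2 + y**2 - l1**2 - l2**2) / (2 * l1 * l2)
    if abs(c2) > 1.0:  # target outside the reachable workspace
        return []
    s2 = math.sqrt(1.0 - c2**2)
    # +s2 and -s2 give the two elbow configurations
    return [math.atan2(s2, c2), math.atan2(-s2, c2)]
```

For equal unit links and the target ${\displaystyle (x,y)=(1,1)}$, ${\displaystyle c_{2}=0}$, so the two solutions are ${\displaystyle \theta _{2}=\pm \pi /2}$.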

The expressions for ${\displaystyle x}$  and ${\displaystyle y}$  may now be solved for ${\displaystyle \theta _{1}}$ . In order to do so, write them like this:

${\displaystyle {\begin{array}{lll}x&=&k_{1}c_{1}-k_{2}s_{1}\\y&=&k_{1}s_{1}+k_{2}c_{1}\\\end{array}}}$

where ${\displaystyle k_{1}=l_{1}+l_{2}c_{2}}$ , and ${\displaystyle k_{2}=l_{2}s_{2}}$ .

Let:

${\displaystyle {\begin{array}{lll}r&=&{\sqrt {k_{1}^{2}+k_{2}^{2}}}\\\gamma &=&{\mbox{Atan2}}(k_{2},k_{1})\\\end{array}}}$

Then:

${\displaystyle {\begin{array}{lll}k_{1}&=&r\cos \gamma \\k_{2}&=&r\sin \gamma \\\end{array}}}$

Applying these to the above equations for ${\displaystyle x}$  and ${\displaystyle y}$ :

${\displaystyle {\begin{array}{lll}x/r&=&\cos \gamma \,c_{1}-\sin \gamma \,s_{1}\\y/r&=&\cos \gamma \,s_{1}+\sin \gamma \,c_{1}\\\end{array}}}$ ,

or:

${\displaystyle {\begin{array}{lll}\cos(\gamma +\theta _{1})&=&{\frac {x}{r}}\\\sin(\gamma +\theta _{1})&=&{\frac {y}{r}}\\\end{array}}}$

Thus:

${\displaystyle \gamma +\theta _{1}={\mbox{Atan2}}(y,x)}$

Hence:

${\displaystyle \theta _{1}={\mbox{Atan2}}(y,x)-{\mbox{Atan2}}(k_{2},k_{1})}$

Note: If ${\displaystyle x=y=0}$ , ${\displaystyle \theta _{1}}$  actually becomes arbitrary.

${\displaystyle \theta _{3}}$  may now be solved from the first two equations for ${\displaystyle s_{\phi }}$  and ${\displaystyle c_{\phi }}$ :

${\displaystyle \theta _{3}=\phi -\theta _{1}-\theta _{2}={\mbox{Atan2}}(s_{\phi },c_{\phi })-\theta _{1}-\theta _{2}}$
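The complete closed-form solution can be assembled into one routine. The sketch below (names are illustrative) first steps back from the end effector by ${\displaystyle l_{3}}$ along ${\displaystyle \phi }$ to obtain the wrist point; this reduces the problem to the two-link case derived above, and with ${\displaystyle l_{3}=0}$ the step disappears and the equations match the text exactly:

```python
import math

def inverse_kinematics(x, y, phi, l1, l2, l3):
    """Closed-form IK for the planar three-link arm, following
    the derivation above. Returns a list of (theta1, theta2,
    theta3) tuples, or [] if the target is unreachable."""
    # Wrist position: step back from the end effector along phi
    xw = x - l3 * math.cos(phi)
    yw = y - l3 * math.sin(phi)
    c2 = (xw**2 + yw**2 - l1**2 - l2**2) / (2 * l1 * l2)
    if abs(c2) > 1.0:
        return []
    solutions = []
    for s2 in (math.sqrt(1 - c2**2), -math.sqrt(1 - c2**2)):
        theta2 = math.atan2(s2, c2)
        k1 = l1 + l2 * c2
        k2 = l2 * s2
        theta1 = math.atan2(yw, xw) - math.atan2(k2, k1)
        theta3 = phi - theta1 - theta2
        solutions.append((theta1, theta2, theta3))
    return solutions
```

Each returned solution can be verified by substituting it back into the forward kinematic equations.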

The inverse velocity problem seeks the joint rates that provide a specified end-effector twist. This is solved by inverting the Jacobian matrix. It can happen that the robot is in a configuration where the Jacobian does not have an inverse. These are termed singular configurations of the robot.

## Inverse Velocity Kinematics

This problem can be solved by inverting the Jacobian. Given a desired (Cartesian) end-effector velocity ${\displaystyle {\dot {x}}}$, the joint rates follow from ${\displaystyle {\dot {x}}=J{\dot {\theta }}}$ as ${\displaystyle {\dot {\theta }}=J^{-1}{\dot {x}}}$, provided ${\displaystyle J}$ is invertible.
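For the planar two-link arm, the Jacobian follows from differentiating the kinematic equations for ${\displaystyle x}$ and ${\displaystyle y}$ with respect to ${\displaystyle \theta _{1}}$ and ${\displaystyle \theta _{2}}$, and the resulting 2×2 system can be inverted directly. A sketch (names are illustrative):

```python
import math

def jacobian_2link(theta1, theta2, l1, l2):
    """Analytic Jacobian of the two-link arm's (x, y) position,
    obtained by differentiating the kinematic equations."""
    s1, c1 = math.sin(theta1), math.cos(theta1)
    s12, c12 = math.sin(theta1 + theta2), math.cos(theta1 + theta2)
    return [[-l1 * s1 - l2 * s12, -l2 * s12],
            [ l1 * c1 + l2 * c12,  l2 * c12]]

def joint_rates(theta1, theta2, l1, l2, xdot, ydot):
    """Solve J * thetadot = (xdot, ydot) via the 2x2 inverse."""
    (a, b), (c, d) = jacobian_2link(theta1, theta2, l1, l2)
    det = a * d - b * c
    if abs(det) < 1e-9:
        raise ValueError("singular configuration: J not invertible")
    return ((d * xdot - b * ydot) / det,
            (-c * xdot + a * ydot) / det)
```

At a singular configuration (see below) the determinant vanishes and no joint rates exist for a general end-effector velocity.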

### Singularities

If the Jacobian ${\displaystyle J}$  is invertible, the joint velocities can be calculated directly from a given (Cartesian) end-effector velocity. Configurations (combinations of ${\displaystyle \theta _{i}}$) where the Jacobian is not invertible are called singularities. They can be found by setting the determinant of ${\displaystyle J}$  equal to zero and solving for ${\displaystyle \theta }$. These configurations correspond to the loss of a degree of freedom.
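For the two-link arm the determinant works out to ${\displaystyle \det J=l_{1}l_{2}s_{2}}$, so the singularities are ${\displaystyle \theta _{2}=0}$ (arm fully stretched) and ${\displaystyle \theta _{2}=\pi }$ (arm folded back), independent of ${\displaystyle \theta _{1}}$. A numerical check (names are illustrative):

```python
import math

def jacobian_det(theta1, theta2, l1, l2):
    """Determinant of the two-link Jacobian, built from the partial
    derivatives of x and y; it simplifies to l1 * l2 * sin(theta2)."""
    s1, c1 = math.sin(theta1), math.cos(theta1)
    s12, c12 = math.sin(theta1 + theta2), math.cos(theta1 + theta2)
    a, b = -l1 * s1 - l2 * s12, -l2 * s12   # dx/dtheta1, dx/dtheta2
    c, d = l1 * c1 + l2 * c12, l2 * c12     # dy/dtheta1, dy/dtheta2
    return a * d - b * c
```

At the singularities the arm can only move its endpoint tangentially, which is the lost degree of freedom mentioned above.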

### The Jacobian Matrix

In vector calculus, the Jacobian matrix (/dʒəˈkoʊbiən/,[1][2][3] /dʒɪ-, jɪ-/) of a vector-valued function in several variables is the matrix of all its first-order partial derivatives. When this matrix is square, that is, when the function takes the same number of variables as input as the number of vector components of its output, its determinant is referred to as the Jacobian determinant. Both the matrix and (if applicable) the determinant are often referred to simply as the Jacobian in literature.[4]

Suppose f : ℝn → ℝm is a function such that each of its first-order partial derivatives exists on ℝn. This function takes a point x ∈ ℝn as input and produces the vector f(x) ∈ ℝm as output. Then the Jacobian matrix of f is defined to be an m×n matrix, denoted by J, whose (i,j)th entry is ${\displaystyle \mathbf {J} _{ij}={\frac {\partial f_{i}}{\partial x_{j}}}}$ , or explicitly

${\displaystyle \mathbf {J} ={\begin{bmatrix}{\dfrac {\partial \mathbf {f} }{\partial x_{1}}}&\cdots &{\dfrac {\partial \mathbf {f} }{\partial x_{n}}}\end{bmatrix}}={\begin{bmatrix}{\dfrac {\partial f_{1}}{\partial x_{1}}}&\cdots &{\dfrac {\partial f_{1}}{\partial x_{n}}}\\\vdots &\ddots &\vdots \\{\dfrac {\partial f_{m}}{\partial x_{1}}}&\cdots &{\dfrac {\partial f_{m}}{\partial x_{n}}}\end{bmatrix}}.}$

This matrix, whose entries are functions of x, is also denoted variously by Df, Jf, and ∂(f1, ..., fm)/∂(x1, ..., xn). (However, some literature defines the Jacobian as the transpose of the matrix given above.)
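The definition can also be approximated numerically with central differences, one column per input variable. A sketch (names and the step size are illustrative):

```python
import math

def numerical_jacobian(f, x, h=1e-6):
    """m x n Jacobian of f: R^n -> R^m by central differences,
    approximating J_ij = d f_i / d x_j column by column."""
    fx = f(x)
    m, n = len(fx), len(x)
    J = [[0.0] * n for _ in range(m)]
    for j in range(n):
        xp = list(x); xp[j] += h
        xm = list(x); xm[j] -= h
        fp, fm = f(xp), f(xm)
        for i in range(m):
            J[i][j] = (fp[i] - fm[i]) / (2 * h)
    return J
```

This is useful as a check on hand-derived analytic Jacobians such as the one for the two-link arm above.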

The Jacobian matrix represents the differential of f at every point where f is differentiable. In detail, if h is a displacement vector represented by a column matrix, the matrix product J(x) ⋅ h is another displacement vector, that is the best approximation of the change of f in a neighborhood of x, if f is differentiable at x. This means that the function that maps y to f(x) + J(x) ⋅ (y − x) is the best linear approximation of f for points close to x. This linear function is known as the derivative or the differential of f at x.

When m = n, the Jacobian matrix is square, so its determinant is a well-defined function of x, known as the Jacobian determinant of f. It carries important information about the local behavior of f. In particular, f has a differentiable inverse function in a neighborhood of a point x if and only if the Jacobian determinant is nonzero at x (see inverse function theorem). The Jacobian determinant also appears when changing the variables in multiple integrals (see substitution rule for multiple variables).

When m = 1, that is when f : ℝn → ℝ is a scalar-valued function, the Jacobian matrix reduces to a row vector. This row vector of all first-order partial derivatives of f is the transpose of the gradient of f, i.e. ${\displaystyle \mathbf {J} _{f}=(\nabla f)^{\intercal }}$ . Here we are adopting the convention that the gradient vector ${\displaystyle \nabla f}$  is a column vector. Specialising further, when m = n = 1, that is when f : ℝ → ℝ is a scalar-valued function of a single variable, the Jacobian matrix has a single entry. This entry is the derivative of the function f.

These concepts are named after the mathematician Carl Gustav Jacob Jacobi (1804–1851).

## Inverse

According to the inverse function theorem, the matrix inverse of the Jacobian matrix of an invertible function is the Jacobian matrix of the inverse function. That is, if the Jacobian of the function f : ℝn → ℝn is continuous and nonsingular at the point p in ℝn, then f is invertible when restricted to some neighborhood of p and

${\displaystyle \mathbf {J} _{\mathbf {f} ^{-1}}\circ \mathbf {f} ={\mathbf {J} _{\mathbf {f} }}^{-1}.}$

Conversely, if the Jacobian determinant is not zero at a point, then the function is locally invertible near this point, that is, there is a neighbourhood of this point in which the function is invertible.
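This local invertibility is what numerical inverse-kinematics methods exploit: Newton's method inverts f near a point by repeatedly solving a linear system with the Jacobian. A sketch for the ℝ2 → ℝ2 case (names are illustrative; it assumes the Jacobian stays nonsingular along the iteration):

```python
import math

def newton_inverse(f, jac, target, x0, iters=25):
    """Numerically invert f: R^2 -> R^2 near x0 with Newton's
    method. Each step solves jac(x) * dx = target - f(x)."""
    x = list(x0)
    for _ in range(iters):
        fx = f(x)
        r = [target[0] - fx[0], target[1] - fx[1]]
        (a, b), (c, d) = jac(x)
        det = a * d - b * c  # assumed nonzero near the solution
        x[0] += (d * r[0] - b * r[1]) / det
        x[1] += (-c * r[0] + a * r[1]) / det
    return x
```

Applied to the two-link forward kinematics, this recovers joint angles from a reachable, nonsingular target position without a closed-form solution.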

The (unproved) Jacobian conjecture is related to global invertibility in the case of a polynomial function, that is a function defined by n polynomials in n variables. It asserts that, if the Jacobian determinant is a non-zero constant (or, equivalently, that it does not have any complex zero), then the function is invertible and its inverse is a polynomial function.

## References

1. "Jacobian - Definition of Jacobian in English by Oxford Dictionaries". Oxford Dictionaries - English. Archived from the original on 1 December 2017. Retrieved 2 May 2018.
2. "the definition of jacobian". Dictionary.com. Archived from the original on 1 December 2017. Retrieved 2 May 2018.
3. Team, Forvo. "Jacobian pronunciation: How to pronounce Jacobian in English". forvo.com. Retrieved 2 May 2018.
4. Weisstein, Eric W. "Jacobian". mathworld.wolfram.com. Archived from the original on 3 November 2017. Retrieved 2 May 2018.
