University of Florida/Egm6321/f12.Rep5hid

R5.1 Proof that the Exponentiation of the Transpose of a Matrix Equals the Transpose of the Exponentiation

On our honor, we did this problem on our own, without looking at the solutions in previous semesters or other online solutions.

Given

An $n \times n$ matrix $A$,

and the power-series definition of the exponentiation of a matrix,

$e^{A} = I + A + \dfrac{A^{2}}{2!} + \dfrac{A^{3}}{3!} + \cdots = \displaystyle\sum_{k=0}^{\infty} \dfrac{A^{k}}{k!}$

(Eq.(2)p.20-2b)

Show That [1]

$e^{A^{T}} = \left(e^{A}\right)^{T}$

(Eq.5.1.1)

Solution

We will first expand the LHS, then the RHS of (Eq. 5.1.1) using (Eq.(2)p.20-2b) and compare the two expressions.

Expanding the LHS,

$e^{A^{T}} = I + A^{T} + \dfrac{\left(A^{T}\right)^{2}}{2!} + \dfrac{\left(A^{T}\right)^{3}}{3!} + \cdots$
But we know that

$\left(A^{T}\right)^{k} = \left(A^{k}\right)^{T}$

Therefore,

$e^{A^{T}} = I + A^{T} + \dfrac{\left(A^{2}\right)^{T}}{2!} + \dfrac{\left(A^{3}\right)^{T}}{3!} + \cdots$

(Eq.5.1.2)


Now expanding the RHS,

$\left(e^{A}\right)^{T} = \left(I + A + \dfrac{A^{2}}{2!} + \dfrac{A^{3}}{3!} + \cdots\right)^{T}$

which, since the transpose of a sum is the sum of the transposes, reduces to

$\left(e^{A}\right)^{T} = I^{T} + A^{T} + \dfrac{\left(A^{2}\right)^{T}}{2!} + \dfrac{\left(A^{3}\right)^{T}}{3!} + \cdots$

or

$\left(e^{A}\right)^{T} = I + A^{T} + \dfrac{\left(A^{2}\right)^{T}}{2!} + \dfrac{\left(A^{3}\right)^{T}}{3!} + \cdots$

(Eq.5.1.3)


Comparing (Eq. 5.1.2) and (Eq. 5.1.3), the two expansions agree term by term. We conclude that LHS = RHS; hence proved.
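As an informal numerical sanity check (our own, not part of the proof), the identity can be verified in MATLAB; the matrix below is an arbitrary real example, not from the lecture notes:

A = [1 2; 3 4];                      % arbitrary real test matrix
err = norm(expm(A') - expm(A)');     % should be on the order of machine precision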

R5.2. Exponentiation of a Complex Diagonal Matrix [2]

On our honor, we did this problem on our own, without looking at the solutions in previous semesters or other online solutions.

Given

A diagonal matrix

$\Lambda = \operatorname{diag}\left(\lambda_{1}, \lambda_{2}, \ldots, \lambda_{n}\right)$, where $\lambda_{j} \in \mathbb{C}$

(Eq.(2)p.20-2b)

Problem

Show that

$e^{\Lambda} = \operatorname{diag}\left(e^{\lambda_{1}}, e^{\lambda_{2}}, \ldots, e^{\lambda_{n}}\right)$, where $\lambda_{j} \in \mathbb{C}$

(Eq.(3)p.20-2b)

Solution

We know, from the Lecture Notes [3], the power-series expansion

$e^{A} = I + A + \dfrac{A^{2}}{2!} + \dfrac{A^{3}}{3!} + \cdots = \displaystyle\sum_{k=0}^{\infty} \dfrac{A^{k}}{k!}$

(Eq.(2)p.15-3)

Let us consider a simple yet generic 4×4 complex diagonal matrix $\Lambda$,

$\Lambda = \begin{bmatrix} \lambda_{1} & 0 & 0 & 0 \\ 0 & \lambda_{2} & 0 & 0 \\ 0 & 0 & \lambda_{3} & 0 \\ 0 & 0 & 0 & \lambda_{4} \end{bmatrix}$

(Eq.5.2.2)

where $\lambda_{j} \in \mathbb{C}$.

Applying (Eq.(2)p.15-3) to (Eq.5.2.2) and expanding,

$e^{\Lambda} = I + \Lambda + \dfrac{\Lambda^{2}}{2!} + \dfrac{\Lambda^{3}}{3!} + \cdots + \dfrac{\Lambda^{k}}{k!} + \cdots$

(Eq.5.2.3)

Simplifying Term 2 and the other higher-power terms (up to Term k) in the following way,

$\dfrac{\Lambda^{2}}{2!} = \dfrac{1}{2!}\begin{bmatrix} \lambda_{1}^{2} & 0 & 0 & 0 \\ 0 & \lambda_{2}^{2} & 0 & 0 \\ 0 & 0 & \lambda_{3}^{2} & 0 \\ 0 & 0 & 0 & \lambda_{4}^{2} \end{bmatrix}$

(Eq.5.2.4)

Similarly,

$\dfrac{\Lambda^{k}}{k!} = \dfrac{1}{k!}\begin{bmatrix} \lambda_{1}^{k} & 0 & 0 & 0 \\ 0 & \lambda_{2}^{k} & 0 & 0 \\ 0 & 0 & \lambda_{3}^{k} & 0 \\ 0 & 0 & 0 & \lambda_{4}^{k} \end{bmatrix}$

(Eq.5.2.5)

Using (Eq.5.2.4) and (Eq.5.2.5) in (Eq.5.2.3) and carrying out simple matrix addition, we get

$e^{\Lambda} = \begin{bmatrix} \sum_{k=0}^{\infty}\frac{\lambda_{1}^{k}}{k!} & 0 & 0 & 0 \\ 0 & \sum_{k=0}^{\infty}\frac{\lambda_{2}^{k}}{k!} & 0 & 0 \\ 0 & 0 & \sum_{k=0}^{\infty}\frac{\lambda_{3}^{k}}{k!} & 0 \\ 0 & 0 & 0 & \sum_{k=0}^{\infty}\frac{\lambda_{4}^{k}}{k!} \end{bmatrix}$

(Eq.5.2.6)

But every diagonal term of the matrix is of the form

$\displaystyle\sum_{k=0}^{\infty}\dfrac{\lambda_{j}^{k}}{k!} = e^{\lambda_{j}}$

(Eq.5.2.7)

Therefore, (Eq.5.2.6) can be rewritten as

$e^{\Lambda} = \begin{bmatrix} e^{\lambda_{1}} & 0 & 0 & 0 \\ 0 & e^{\lambda_{2}} & 0 & 0 \\ 0 & 0 & e^{\lambda_{3}} & 0 \\ 0 & 0 & 0 & e^{\lambda_{4}} \end{bmatrix}$

(Eq.5.2.8)

The $\lambda_{j}$ are nothing but the diagonal elements of the original matrix in (Eq.5.2.2). Hence,

$e^{\Lambda} = \operatorname{diag}\left(e^{\lambda_{1}}, e^{\lambda_{2}}, e^{\lambda_{3}}, e^{\lambda_{4}}\right)$

(Eq.5.2.9)

Similarly, it is easily found for an $n \times n$ complex diagonal matrix that

$e^{\Lambda} = \operatorname{diag}\left(e^{\lambda_{1}}, e^{\lambda_{2}}, \ldots, e^{\lambda_{n}}\right)$, where $\lambda_{j} \in \mathbb{C}$

(Eq.5.2.10)

Hence Proved.
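Again as an informal numerical check (our own, with arbitrary sample eigenvalues), MATLAB confirms the result for a complex diagonal matrix:

lambda = [1+2i; -0.5; 3i];                            % arbitrary complex eigenvalues
err = norm(expm(diag(lambda)) - diag(exp(lambda)));   % ~ machine precision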

R5.3 Show the Form of the Exponentiation of a Matrix in Terms of the Eigenvalues of That Matrix

On our honor, we did this problem on our own, without looking at the solutions in previous semesters or other online solutions.

Given

A matrix $A$ can be decomposed as

$A = \Phi \Lambda \Phi^{-1}$

(Eq.(2) p.20-4)

where

$\Lambda$ is the diagonal matrix of eigenvalues of matrix $A$,

$\Lambda = \operatorname{diag}\left(\lambda_{1}, \lambda_{2}, \ldots, \lambda_{n}\right)$

(Eq.(5) p.20-3)

and

$\Phi$ is the matrix whose columns are the n linearly independent eigenvectors $\phi_{i}$ of matrix $A$, that is,

$\Phi = \left[\phi_{1}, \phi_{2}, \ldots, \phi_{n}\right]$

(Eq.(2) p.20-3)

Problem

Show that

$e^{A} = \Phi \, e^{\Lambda} \, \Phi^{-1} = \Phi \operatorname{diag}\left(e^{\lambda_{1}}, \ldots, e^{\lambda_{n}}\right) \Phi^{-1}$

Solution

The power-series expansion of the exponentiation of a matrix $A$ in terms of that matrix has been given as

$e^{A} = I + A + \dfrac{A^{2}}{2!} + \cdots = \displaystyle\sum_{k=0}^{\infty} \dfrac{A^{k}}{k!}$

(Eq.(2) p.15-3)

Since matrix $A$ can be decomposed as

$A = \Phi \Lambda \Phi^{-1}$

(Eq.(2) p.20-4)

the $k$-th power of matrix $A$ expands as

$A^{k} = \left(\Phi \Lambda \Phi^{-1}\right)^{k} = \underbrace{\left(\Phi \Lambda \Phi^{-1}\right)\left(\Phi \Lambda \Phi^{-1}\right)\cdots\left(\Phi \Lambda \Phi^{-1}\right)}_{k \text{ factors}}$

(Eq.5.3.1)

where the factors $\Phi^{-1}$ that neighbor factors $\Phi$ all cancel in pairs, that is,

$\Phi^{-1} \Phi = I$

(Eq.5.3.2)

$A^{2} = \Phi \Lambda \left(\Phi^{-1} \Phi\right) \Lambda \Phi^{-1} = \Phi \Lambda^{2} \Phi^{-1}$

(Eq.5.3.3)

$A^{3} = \Phi \Lambda \left(\Phi^{-1} \Phi\right) \Lambda \left(\Phi^{-1} \Phi\right) \Lambda \Phi^{-1} = \Phi \Lambda^{3} \Phi^{-1}$

(Eq.5.3.4)

Thus, the $k$-th power in the equation (Eq. 5.3.1) can be expressed as

$A^{k} = \Phi \Lambda^{k} \Phi^{-1}$

(Eq.5.3.5)

and therefore

$e^{A} = \displaystyle\sum_{k=0}^{\infty} \dfrac{\Phi \Lambda^{k} \Phi^{-1}}{k!} = \Phi \left(\displaystyle\sum_{k=0}^{\infty} \dfrac{\Lambda^{k}}{k!}\right) \Phi^{-1}$

(Eq.5.3.6)

According to the equation (Eq.(2) p.15-3), we now have

$e^{A} = \Phi \, e^{\Lambda} \, \Phi^{-1}$

(Eq.5.3.7)

Referring to the conclusion obtained in R5.2, which is

$e^{\Lambda} = \operatorname{diag}\left(e^{\lambda_{1}}, e^{\lambda_{2}}, \ldots, e^{\lambda_{n}}\right)$

(Eq.(3) p.20-2b)

replacing the diagonal matrix in (Eq.(3) p.20-2b) with the eigenvalue matrix $\Lambda$ of $A$, whose elements are $\lambda_{j}$, $j = 1, \ldots, n$, and then substituting into (Eq. 5.3.7) yields

$e^{A} = \Phi \operatorname{diag}\left(e^{\lambda_{1}}, e^{\lambda_{2}}, \ldots, e^{\lambda_{n}}\right) \Phi^{-1}$

(Eq.5.3.8)
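A short MATLAB sketch (with an arbitrary diagonalizable example of our own choosing) illustrates the decomposition result numerically:

A = [0 1; -2 -3];                                        % arbitrary diagonalizable matrix
[Phi, Lambda] = eig(A);                                  % columns of Phi are eigenvectors
err = norm(expm(A) - Phi*diag(exp(diag(Lambda)))/Phi);   % X/Phi means X*inv(Phi)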

R5.4 Show Decomposed Form of Matrix and Its Exponentiation

On our honor, we did this problem on our own, without looking at the solutions in previous semesters or other online solutions.

Given

The exponentiation of a matrix $A$ can be decomposed as

$e^{A} = \Phi \, e^{\Lambda} \, \Phi^{-1}$

(Eq.(3) p.20-4)

The matrix $A$ is defined in the lecture notes as

 

(Eq.(1) p.20-2)

Problem

Show

 

(Eq.(1) p.20-5)

and

 

(Eq.(2) p.20-5)

Solution

To show the equation (Eq.(1) p.20-5), we should first find the eigenvalues $\lambda$ of matrix $A$ from the characteristic equation below, where $I$ denotes the identity matrix:

$\det\left(A - \lambda I\right) = 0$

Since the two eigenvalues of matrix $A$ have both been obtained, we now solve for the corresponding two eigenvectors from

$\left(A - \lambda I\right) \phi = 0$

(Eq.5.4.1)

Thus, for the first value of $\lambda$, we have

 

(Eq.5.4.2)

Substituting $\lambda_{1}$ into the equations above and solving yields, for the eigenvalue $\lambda_{1}$, the eigenvector

 

(Eq.5.4.3)

Similarly, we have the equation used to solve for the eigenvector corresponding to $\lambda_{2}$,

 

(Eq.5.4.4)

Substituting $\lambda_{2}$ into the equations above and solving yields, for the eigenvalue $\lambda_{2}$, the eigenvector

 

(Eq.5.4.5)

Now we have obtained the two eigenvectors $\phi_{1}$ and $\phi_{2}$ of matrix $A$, where

 

(Eq.5.4.6)

Thus we have

$\Phi = \left[\phi_{1}, \phi_{2}\right]$

(Eq.5.4.7)

Then, calculating the inverse of matrix $\Phi$ yields

 

(Eq.5.4.8)

Therefore we reach the conclusion that

 

(Eq.(1) p.20-5)


According to the conclusion we reached in R5.3, we have

$e^{A} = \Phi \operatorname{diag}\left(e^{\lambda_{1}}, e^{\lambda_{2}}\right) \Phi^{-1}$

Carrying out the matrix multiplication on the right-hand side of the equation above yields

 

(Eq.5.4.9)

Consider Euler's formula,[4]

$e^{i\theta} = \cos\theta + i\sin\theta$

(Eq.5.4.10)

Replacing $\theta$ with $-\theta$ yields

$e^{-i\theta} = \cos\theta - i\sin\theta$

(Eq.5.4.11)

Solving (Eq.5.4.10) together with (Eq.5.4.11), we have

$\cos\theta = \dfrac{e^{i\theta} + e^{-i\theta}}{2}, \qquad \sin\theta = \dfrac{e^{i\theta} - e^{-i\theta}}{2i}$

(Eq.5.4.12)

Substituting (Eq.5.4.12) into (Eq.5.4.9) yields

 

(Eq.(2) p.20-5)

Obviously,

 
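Although the specific matrix $A$ of (Eq.(1) p.20-2) is not reproduced above, a MATLAB sketch with the standard skew-symmetric example (an assumption of ours, chosen because its eigenvalues $\pm i$ produce exactly the cosine/sine structure of (Eq.5.4.12)) illustrates the computation:

A = [0 1; -1 0];                 % hypothetical example; eigenvalues are +i and -i
t = 0.7;                         % arbitrary evaluation point
err = norm(expm(A*t) - [cos(t) sin(t); -sin(t) cos(t)]);   % ~ machine precision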

R*5.5 Generating a Class of Exact L2-ODE-VC [5]

On our honor, we did this problem on our own, without looking at the solutions in previous semesters or other online solutions.

Given

An L2-ODE-VC [6]:

 

(Eq. 5.5.1)

The first integral   can also be expressed as:

 

(Eq. 5.5.2)

Problem [7]

Show that (Eq. 5.5.1) and (Eq. 5.5.2) lead to a general class of exact L2-ODE-VC of the form:

 

(Eq. 5.5.3)

Solution

Nomenclature

 

Derivation of Eq. 5.5.3

The first exactness condition for an L2-ODE-VC: [8]

 

(Eq. 5.5.4)

From (Eq. 5.5.1) and (Eq. 5.5.4), we can infer that

 

(Eq. 5.5.5)

Integrating (Eq. 5.5.5) w.r.t. p, we obtain:

 

(Eq. 5.5.6)

The partial derivatives of   w.r.t. x and y can be written as:

 

(Eq. 5.5.7)

 

(Eq. 5.5.8)

Substituting the partial derivatives of   w.r.t. x, y, and p [(Eq. 5.5.7), (Eq. 5.5.8), (Eq. 5.5.6)] into (Eq. 5.5.4), we obtain:

 

(Eq. 5.5.8a)

Comparing (Eq. 5.5.8a) with (Eq. 5.5.1), we can write:

 

(Eq. 5.5.9)

Thus  

Integrating w.r.t. x,

 

(Eq. 5.5.10)

Substituting the expression obtained in (Eq. 5.5.10) back into the expression for   obtained in (Eq. 5.5.6), we obtain:

 

(Eq. 5.5.11)

Taking the partial derivative of   in (Eq. 5.5.11) w.r.t. y,

 

(Eq. 5.5.12)

But from (Eq. 5.5.1) and (Eq. 5.5.2), we see that  .

So,  .

Since   is a function of   only, we can now say that   and  .

Thus   is a constant.

Hence we obtain the following expression for  :

 

(Eq. 5.5.13)

which represents a general class of exact L2-ODE-VC.
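For orientation, here is a compact sketch (in our own notation, which need not match the lecture notes) of the classical exactness computation for the linear case. Writing the L2-ODE-VC as $b_{0}(x)\,y'' + b_{1}(x)\,y' + b_{2}(x)\,y = r(x)$ with $p = y'$, one checks directly that

$\dfrac{d}{dx}\left[b_{0}\,p + \left(b_{1} - b_{0}'\right)y\right] = b_{0}\,y'' + b_{1}\,y' + \left(b_{1}' - b_{0}''\right)y$

so the equation is exact precisely when $b_{0}'' - b_{1}' + b_{2} = 0$, with first integral $\phi = b_{0}\,p + \left(b_{1} - b_{0}'\right)y - \int r\,dx$.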

R*5.6 Solving an L2-ODE-VC [9]

On our honor, we did this problem on our own, without looking at the solutions in previous semesters or other online solutions.

Given

 

(Eq. 5.6.1)

Problem

1. Show that (Eq. 5.6.1) is exact.

2. Find  .

3. Solve for  .

Solution

Nomenclature

 

Exactness Conditions [10]

The exactness conditions for an N2-ODE (nonlinear second-order differential equation) are as follows.

First Exactness Condition

For an equation to be exact, it must be of the form

 

(Eq. 5.6.2)

 

(Eq. 5.6.3)


Second Exactness Condition

 

(Eq. 5.6.4)

 

(Eq. 5.6.5)

Work

We have

 

where we can identify

 

and

 

Thus the equation satisfies the first exactness condition.


For the second exactness condition, we first calculate the various partial derivatives of f and g.

 


Substituting the values into (Eq. 5.6.4), we get

 

Therefore the first equation is satisfied.

Substituting the values into (Eq. 5.6.5), we get

 

Therefore the second equation is satisfied as well.

Thus the second exactness condition is satisfied, and the given differential equation is exact.

Now, we have  

Integrating w.r.t. p, we get

 

where h(x,y) is a function of integration, since we integrated only partially w.r.t. p.

 

(Eq. 5.6.6)


Partially differentiating (Eq. 5.6.6) w.r.t. x,

 

Partially differentiating (Eq. 5.6.6) w.r.t. y,

 

From equation (Eq. 5.6.3), we have

 


We have established that

 

Comparing the two equations, we get

 

On integrating,

 

Thus,

 


Thus we have

 

This N1-ODE can be solved using the familiar integrating factor method:

 
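Since the specific ODE of (Eq. 5.6.1) is not reproduced above, here is a generic MATLAB (Symbolic Math Toolbox) sketch of the integrating factor method for a linear first-order ODE y' + P(x) y = Q(x), with the arbitrary illustrative choices P = 1/x and Q = x:

syms y(x)
mu = exp(int(1/x, x));                  % integrating factor mu(x) = exp(int(P)) = x
sol = dsolve(diff(y,x) + y/x == x)      % same result as integrating (mu*y)' = mu*Q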

R*5.7 Show Equivalence to Symmetry of Second Partial Derivatives of the First Integral [11]

On our honor, we did this problem on our own, without looking at the solutions in previous semesters or other online solutions.

Given

 

(Eq.(1) p.22-3)

where

 

(Eq.(3) p.21-7)

Problem

Show equivalence to the symmetry of the mixed second partial derivatives of the first integral, that is,

 

where

 

Solution

 

(Eq.(3) p.21-7)

 

(Eq.(2) p.22-4)

From (Eq.(2) p.22-4), we have

 

(Eq. 5.7.1)

 

(Eq.(3) p.22-4)

Substituting (Eq.(3) p.21-7), (Eq. 5.7.1), and (Eq.(3) p.22-4) into (Eq.(1) p.22-3) yields

 

(Eq. 5.7.2)

Because

 

Thus

 

(Eq. 5.7.3)

Substituting (Eq. 5.7.3) into (Eq. 5.7.2) yields

 

(Eq. 5.7.4)

Because

 

(Eq. 5.7.5)

Substituting (Eq. 5.7.5) into (Eq. 5.7.4), we have

 

(Eq. 5.7.6)

Here   and   can be the second and first derivatives of any solution function   of any second-order ODE for which the equation   holds. That is, the factor  , which consists of two derivatives of the solution function together with the derivative operator, and which thus depends partly on the solution function of the ODE, can be arbitrary; it is therefore linearly independent of the derivative operator  , which is a factor of the third term on the left-hand side of (Eq. 5.7.6).

Similarly, comparing the first and third terms on the left-hand side of (Eq. 5.7.6) shows that the factor 1 of the first term (which can be treated as a unit basis of the function space) and the derivative operator of the third term (which is another basis of the derivative function space) are linearly independent of each other.

For the left-hand side of (Eq. 5.7.6) to be zero under all circumstances, we must have

 

(Eq. 5.7.7)

while

 

(Eq. 5.7.8)

 

(Eq. 5.7.9)

From (Eq. 5.7.7), since the factor   is arbitrary, we obtain

 

(Eq. 5.7.10)

Thus,

 

(Eq. 5.7.11)

From (Eq. 5.7.9), consider   to be also a function of the variables x, y, and p, which can be represented as  ; thus,

 

(Eq. 5.7.12)

Since the partial derivative operators   are linearly independent, we have

 

(Eq. 5.7.13)

 

(Eq. 5.7.14)

 

(Eq. 5.7.15)

Obviously, the only way all three equations above can be satisfied is for the function   to be a numerical constant.

Thus, we have

 

(Eq. 5.7.16)

where   is a constant. To find the value of the constant  , we proceed as follows.

 

(Eq. 5.7.17)

Integrating both sides of (Eq. 5.7.17) with respect to x,

 

(Eq. 5.7.18)

where the term   is an arbitrarily selected function of the independent variables y and p. Then integrating both sides of (Eq. 5.7.18) with respect to p,

 

(Eq. 5.7.19)

where the term   is an arbitrarily selected function of the variables x and y.

Taking the first partial derivative of both sides of (Eq. 5.7.19) with respect to x gives

 

(Eq. 5.7.20)

 

(Eq. 5.7.21)

Then taking the partial derivative of both sides of (Eq. 5.7.21) with respect to p,

 

(Eq. 5.7.22)

 

(Eq. 5.7.23)

 

(Eq. 5.7.24)

Because the right-hand side of (Eq. 5.7.24) is a function of the two variables y and p, while the left-hand side is a function of p' only, the equation (Eq. 5.7.24) cannot hold if the constant   has a non-zero value. Thus, the only condition under which (Eq. 5.7.24) is satisfied is that   while  , that is,  .

Substituting   into (Eq. 5.7.16) yields

 

(Eq. 5.7.25)

Thus we have

 

(Eq. 5.7.26)

We are now left with  .

Thus

 

(Eq. 5.7.27)
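The symmetry invoked throughout is Clairaut's theorem, restated here in our own notation as a reminder: for a first integral $\phi(x, y, p)$ with continuous second partial derivatives,

$\dfrac{\partial^{2}\phi}{\partial x\,\partial p} = \dfrac{\partial^{2}\phi}{\partial p\,\partial x}, \qquad \dfrac{\partial^{2}\phi}{\partial y\,\partial p} = \dfrac{\partial^{2}\phi}{\partial p\,\partial y}, \qquad \dfrac{\partial^{2}\phi}{\partial x\,\partial y} = \dfrac{\partial^{2}\phi}{\partial y\,\partial x}$

and the conditions derived above are exactly the statements that these mixed partials agree.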

R*5.8. Working with the Coefficients in the 1st Exactness Condition

On our honor, we did this problem on our own, without looking at the solutions in previous semesters or other online solutions.


Given

 

(Eq.(1) p.22-2)

Problem [12]

Using the coefficients in the 1st exactness condition, prove that (Eq.(1) p.22-3) can be written in the form

 

Solution

Nomenclature

 

For an equation to be exact, it must be of the form

 

(Eq. 5.8.2)

 

Using the chain and product rules,

 

(Eq. 5.8.3)

 

(Eq. 5.8.4)

Plugging (Eq. 5.8.2), (Eq. 5.8.3), and (Eq. 5.8.4) into (Eq.(1) p.22-2),

 

After cancellation of the opposing terms,

 

Now we can group the terms

 

and

 

Since 1 and q (the second derivative of y) are, in general, linearly independent, for the equation to hold true their coefficients must both be equal to zero.

Thus we say that

 

and

 

which is the required proof.
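As a compact restatement in our own notation (an assumption about the intended setup, not a quotation of the lecture notes): for a first integral $\phi(x, y, p)$ with $p = y'$ and $q = y''$, the chain rule gives

$\dfrac{d\phi}{dx} = \dfrac{\partial\phi}{\partial x} + \dfrac{\partial\phi}{\partial y}\,p + \dfrac{\partial\phi}{\partial p}\,q$

and matching the coefficients of $q^{1}$ and $q^{0}$ against an exact equation written as $f(x,y,p)\,q + g(x,y,p) = 0$ yields $f = \partial\phi/\partial p$ and $g = \partial\phi/\partial x + p\,\partial\phi/\partial y$, which is precisely the coefficient-matching argument used above.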


R5.9: Use of Maclaurin Series

On our honor, we did this problem on our own, without looking at the solutions in previous semesters or other online solutions.

Problem

Use the Taylor series at x = 0 (Maclaurin series) to derive [13]

 

 

Solution

The Taylor series [14] expansion of a function f(x) about a real or complex number c is given by the formula

$f(x) = \displaystyle\sum_{n=0}^{\infty} \dfrac{f^{(n)}(c)}{n!}\,(x - c)^{n}$

(Eq. 5.9.1)

When the expansion point is zero, i.e., c = 0, the resulting series is called the Maclaurin series.

Part a

We have the function

 

Table for the Maclaurin series:

 

and so on.

Rewriting the Maclaurin series expansion,

$f(x) = f(0) + f'(0)\,x + \dfrac{f''(0)}{2!}\,x^{2} + \dfrac{f'''(0)}{3!}\,x^{3} + \cdots$

(Eq. 5.9.2)


Substituting the values from the table into (Eq. 5.9.2), we get

 

(Eq. 5.9.3)

 

(Eq. 5.9.4)

where [15]

 

We can represent

 

(Eq. 5.9.4) can thus be written as  ; hence proved.

Part b

We have the function

 

We use a slightly different approach here compared with part a: we expand   and then multiply the resulting expansion by  .

Table for the Maclaurin series:

 

and so on.

Rewriting the Maclaurin series expansion,

$f(x) = f(0) + f'(0)\,x + \dfrac{f''(0)}{2!}\,x^{2} + \dfrac{f'''(0)}{3!}\,x^{3} + \cdots$

(Eq. 5.9.5)


Substituting the values from the table into (Eq. 5.9.5), we get

 

(Eq. 5.9.6)


Multiplying (Eq. 5.9.6) by  ,

 

This expression does not match the expression that we were asked to prove. We believe this is because of a misprint, and the expression to be derived must be  .


Expanding   using the Maclaurin series:

Table for the Maclaurin series:

 

and so on.


Rewriting the Maclaurin series expansion,

$f(x) = f(0) + f'(0)\,x + \dfrac{f''(0)}{2!}\,x^{2} + \dfrac{f'''(0)}{3!}\,x^{3} + \cdots$

(Eq. 5.9.7)


Substituting the values from the table into (Eq. 5.9.7), we get

 

(Eq. 5.9.8)


Multiplying (Eq. 5.9.8) by  ,

 

(Eq. 5.9.9)

which is the expression on the RHS.
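For reference, MATLAB's Symbolic Math Toolbox can produce such Maclaurin expansions directly; since the target functions of parts a and b are not reproduced above, exp(x) below is an arbitrary stand-in of ours:

syms x
taylor(exp(x), x, 'Order', 6)    % Maclaurin polynomial of exp(x) through the x^5 term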

R5.10 Gauss Hypergeometric Series [16]

On our honor, we did this problem on our own, without looking at the solutions in previous semesters or other online solutions.

Problem

1. Use MATLAB to plot   near x = 0 to show the local maximum (or maxima) in this region.

2. Show that

 

((1) pg. 64-9b)

Solution

The MATLAB code shown below plots the hypergeometric function   over the given interval:

x = [0:0.01:0.8]';                  % evaluation points on [0, 0.8]
plot(x,hypergeom([5,-10],1,x))      % 2F1(5,-10;1;x); hypergeom requires the Symbolic Math Toolbox

The plot of the hypergeometric function near x = 0 reveals a local maximum of 0.1481 at x = 0.23.


The hypergeometric function $_{2}F_{1}(\alpha, \beta; \gamma; x)$ can be expressed, using the Pochhammer symbol, as

$_{2}F_{1}(\alpha, \beta; \gamma; x) = \displaystyle\sum_{k=0}^{\infty} \dfrac{(\alpha)_{k}\,(\beta)_{k}}{(\gamma)_{k}}\,\dfrac{x^{k}}{k!}$

where $(a)_{k} = a(a+1)(a+2)\cdots(a+k-1)$, with $(a)_{0} = 1$, is the Pochhammer symbol (rising factorial).
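The reported maximum can also be located numerically (a sketch of ours, not part of the original problem statement); fminbnd minimizes, so the function is negated:

f = @(x) -hypergeom([5,-10],1,x);    % negate to turn the maximum into a minimum
[xmax, fneg] = fminbnd(f, 0, 0.5);   % xmax ~ 0.23, and -fneg ~ 0.1481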