1.75 problem 75

1.75.1 Solution using Matrix exponential method
1.75.2 Solution using explicit Eigenvalue and Eigenvector method
1.75.3 Maple step by step solution
1.75.4 Maple dsolve solution
1.75.5 Mathematica DSolve solution

Internal problem ID [7767]
Book : Own collection of miscellaneous problems
Section : section 1.0
Problem number : 75
Date solved : Monday, October 21, 2024 at 04:16:14 PM
CAS classification : system_of_ODEs

\begin{align*} \frac {d}{d t}x \left (t \right )&=x \left (t \right )+y \left (t \right )\\ \frac {d}{d t}y \left (t \right )&=y \left (t \right )\\ \frac {d}{d t}z \left (t \right )&=z \left (t \right ) \end{align*}

1.75.1 Solution using Matrix exponential method

In this method, we assume the matrix exponential \(e^{A t}\) has already been found. There are different methods to determine it, but they are not shown here. This is a system of linear ODEs given as

\begin{align*} \vec {x}'(t) &= A\, \vec {x}(t) \end{align*}

Or

\begin{align*} \left [\begin {array}{c} \frac {d}{d t}x \left (t \right ) \\ \frac {d}{d t}y \left (t \right ) \\ \frac {d}{d t}z \left (t \right ) \end {array}\right ] &= \left [\begin {array}{ccc} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end {array}\right ]\, \left [\begin {array}{c} x \left (t \right ) \\ y \left (t \right ) \\ z \left (t \right ) \end {array}\right ] \end{align*}

For the above matrix \(A\), the matrix exponential can be found to be

\begin{align*} e^{A t} &= \left [\begin {array}{ccc} {\mathrm e}^{t} & t \,{\mathrm e}^{t} & 0 \\ 0 & {\mathrm e}^{t} & 0 \\ 0 & 0 & {\mathrm e}^{t} \end {array}\right ] \end{align*}
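
As an optional check (a minimal Maple sketch, not part of the original derivation; the name A below is only an illustration), the same matrix exponential can be obtained from the LinearAlgebra package:

with(LinearAlgebra):
A := Matrix([[1, 1, 0], [0, 1, 0], [0, 0, 1]]):
MatrixExponential(A, t);   # should reproduce the matrix e^(A t) shown above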

Therefore the homogeneous solution is

\begin{align*} \vec {x}_h(t) &= e^{A t} \vec {c} \\ &= \left [\begin {array}{ccc} {\mathrm e}^{t} & t \,{\mathrm e}^{t} & 0 \\ 0 & {\mathrm e}^{t} & 0 \\ 0 & 0 & {\mathrm e}^{t} \end {array}\right ] \left [\begin {array}{c} c_{1} \\ c_{2} \\ c_{3} \end {array}\right ] \\ &= \left [\begin {array}{c} {\mathrm e}^{t} c_{1}+t \,{\mathrm e}^{t} c_{2} \\ {\mathrm e}^{t} c_{2} \\ {\mathrm e}^{t} c_{3} \end {array}\right ]\\ &= \left [\begin {array}{c} {\mathrm e}^{t} \left (c_{2} t +c_{1}\right ) \\ {\mathrm e}^{t} c_{2} \\ {\mathrm e}^{t} c_{3} \end {array}\right ] \end{align*}

Since there is no forcing function, the final solution is \(\vec {x}_h(t)\) above.
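
To confirm that \(\vec {x}_h(t)\) satisfies \(\vec {x}'(t)=A\,\vec {x}(t)\), one can differentiate it and subtract \(A\,\vec {x}_h\); the result should be the zero vector. A minimal Maple sketch of this check (the names xh, c1, c2, c3 are illustrative, not taken from the original code):

with(LinearAlgebra):
A  := Matrix([[1, 1, 0], [0, 1, 0], [0, 0, 1]]):
xh := MatrixExponential(A, t) . Vector([c1, c2, c3]):   # e^(A t) c
simplify~(map(diff, xh, t) - A . xh);                   # expected: zero vector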

1.75.2 Solution using explicit Eigenvalue and Eigenvector method

This is a system of linear ODEs given as

\begin{align*} \vec {x}'(t) &= A\, \vec {x}(t) \end{align*}

Or

\begin{align*} \left [\begin {array}{c} \frac {d}{d t}x \left (t \right ) \\ \frac {d}{d t}y \left (t \right ) \\ \frac {d}{d t}z \left (t \right ) \end {array}\right ] &= \left [\begin {array}{ccc} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end {array}\right ]\, \left [\begin {array}{c} x \left (t \right ) \\ y \left (t \right ) \\ z \left (t \right ) \end {array}\right ] \end{align*}

The first step is to find the homogeneous solution. We start by finding the eigenvalues of \(A\). This is done by solving the following equation for the eigenvalues \(\lambda \)

\begin{align*} \operatorname {det} \left ( A- \lambda I \right ) &= 0 \end{align*}

Expanding gives

\begin{align*} \operatorname {det} \left (\left [\begin {array}{ccc} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end {array}\right ]-\lambda \left [\begin {array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end {array}\right ]\right ) &= 0 \end{align*}

Therefore

\begin{align*} \operatorname {det} \left (\left [\begin {array}{ccc} 1-\lambda & 1 & 0 \\ 0 & 1-\lambda & 0 \\ 0 & 0 & 1-\lambda \end {array}\right ]\right ) &= 0 \end{align*}

Since the matrix \(A\) is a triangular matrix, the determinant is the product of the elements along the diagonal. Therefore the above becomes

\begin{align*} (1-\lambda )(1-\lambda )(1-\lambda )&=0 \end{align*}

The roots of the above are the eigenvalues. There is one (repeated) root

\begin{align*} \lambda _1 &= 1 \end{align*}
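
The characteristic polynomial and the repeated eigenvalue can be cross-checked directly (a small Maple sketch; Maple's convention is \(\det (\lambda I - A)\), which has the same roots):

with(LinearAlgebra):
A := Matrix([[1, 1, 0], [0, 1, 0], [0, 0, 1]]):
factor(CharacteristicPolynomial(A, lambda));   # expected: (lambda - 1)^3
Eigenvalues(A);                                # expected: <1, 1, 1>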

This table summarises the above result

\[ \begin {array}{|c|c|c|} \hline \text {eigenvalue} & \text {algebraic multiplicity} & \text {type of eigenvalue} \\ \hline 1 & 3 & \text {real eigenvalue} \\ \hline \end {array} \]

Now the eigenvectors for each eigenvalue are found.

Considering the eigenvalue \(\lambda _{1} = 1\)

We need to solve \(A \vec {v} = \lambda \vec {v}\) or \((A-\lambda I) \vec {v} = \vec {0}\) which becomes

\begin{align*} \left (\left [\begin {array}{ccc} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end {array}\right ] - \left (1\right ) \left [\begin {array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end {array}\right ]\right ) \left [\begin {array}{c} v_{1} \\ v_{2} \\ v_{3} \end {array}\right ]&=\left [\begin {array}{c} 0 \\ 0 \\ 0 \end {array}\right ]\\ \left [\begin {array}{ccc} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end {array}\right ] \left [\begin {array}{c} v_{1} \\ v_{2} \\ v_{3} \end {array}\right ]&=\left [\begin {array}{c} 0 \\ 0 \\ 0 \end {array}\right ] \end{align*}

Now forward elimination is applied to solve for the eigenvector \(\vec {v}\). The augmented matrix is

\[ \left [\begin {array}{ccc|c} 0&1&0&0\\ 0&0&0&0\\ 0&0&0&0 \end {array} \right ] \]

Therefore the system in Echelon form is

\[ \left [\begin {array}{ccc} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end {array}\right ] \left [\begin {array}{c} v_{1} \\ v_{2} \\ v_{3} \end {array}\right ] = \left [\begin {array}{c} 0 \\ 0 \\ 0 \end {array}\right ] \]

The free variables are \(\{v_{1}, v_{3}\}\) and the leading variable is \(v_{2}\). Let \(v_{1} = t\) and \(v_{3} = s\). Now we start the back substitution. Solving the above equation for the leading variable in terms of the free variables gives \(v_{2} = 0\).

Hence the solution is

\[ \left [\begin {array}{c} t \\ v_{2} \\ s \end {array}\right ] = \left [\begin {array}{c} t \\ 0 \\ s \end {array}\right ] \]

Since there are two free variables, we have found two eigenvectors associated with this eigenvalue. The above can be written as

\begin{align*} \left [\begin {array}{c} t \\ v_{2} \\ s \end {array}\right ] &= \left [\begin {array}{c} t \\ 0 \\ 0 \end {array}\right ] + \left [\begin {array}{c} 0 \\ 0 \\ s \end {array}\right ]\\ &= t \left [\begin {array}{c} 1 \\ 0 \\ 0 \end {array}\right ] + s \left [\begin {array}{c} 0 \\ 0 \\ 1 \end {array}\right ] \end{align*}

By letting \(t = 1\) and \(s = 1\) the above becomes

\[ \left [\begin {array}{c} t \\ v_{2} \\ s \end {array}\right ] = \left [\begin {array}{c} 1 \\ 0 \\ 0 \end {array}\right ] + \left [\begin {array}{c} 0 \\ 0 \\ 1 \end {array}\right ] \]

Hence the two eigenvectors associated with this eigenvalue are

\[ \left (\left [\begin {array}{c} 1 \\ 0 \\ 0 \end {array}\right ],\left [\begin {array}{c} 0 \\ 0 \\ 1 \end {array}\right ]\right ) \]
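
Equivalently, these two eigenvectors form a basis of the null space of \(A-\lambda I\), which can be verified as follows (a minimal Maple sketch; the returned basis may differ in order or scaling):

with(LinearAlgebra):
A := Matrix([[1, 1, 0], [0, 1, 0], [0, 0, 1]]):
NullSpace(A - IdentityMatrix(3));   # expected: a basis of two vectors such as {<1,0,0>, <0,0,1>}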

The following table gives a summary of this result. It shows, for each eigenvalue, the algebraic multiplicity \(m\), the geometric multiplicity \(k\), and the eigenvectors associated with the eigenvalue. If \(m>k\) then the eigenvalue is defective, which means the number of linearly independent eigenvectors associated with this eigenvalue (the geometric multiplicity \(k\)) is less than the algebraic multiplicity \(m\), and we need to determine an additional \(m-k\) generalized eigenvectors for this eigenvalue.

\[ \begin {array}{|c|c|c|c|c|} \hline \text {eigenvalue} & \text {algebraic multiplicity } m & \text {geometric multiplicity } k & \text {defective?} & \text {eigenvectors} \\ \hline 1 & 3 & 2 & \text {Yes} & \left [\begin {array}{c} 0 \\ 0 \\ 1 \end {array}\right ],\ \left [\begin {array}{c} 1 \\ 0 \\ 0 \end {array}\right ] \\ \hline \end {array} \]
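
The multiplicities in this table can be cross-checked from the Jordan form of \(A\): the number of Jordan blocks for an eigenvalue equals its geometric multiplicity. A small Maple sketch:

with(LinearAlgebra):
A := Matrix([[1, 1, 0], [0, 1, 0], [0, 0, 1]]):
JordanForm(A);   # expected: one 2x2 and one 1x1 Jordan block for eigenvalue 1, so k = 2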

Now that we have found the eigenvalues and associated eigenvectors, we go over each eigenvalue and generate the solution basis. The only complication to take care of is when an eigenvalue is defective. Eigenvalue \(1\) is a real, repeated eigenvalue of multiplicity \(3\). There are three possible cases, depending on whether its geometric multiplicity is \(3\), \(2\) or \(1\).

This eigenvalue has algebraic multiplicity \(3\) and geometric multiplicity \(2\), therefore it is a defective eigenvalue. The defect is \(1\). This is the second of the three cases. We need to find a rank-2 generalized eigenvector \(\vec {v}_3\). This eigenvector must therefore satisfy \(\left (A-\lambda I \right )^2 \vec {v}_3= \vec {0}\). But

\begin{align*} \left (A-\lambda I \right )^2 &= \left ( \left [\begin {array}{ccc} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end {array}\right ]-1 \left [\begin {array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end {array}\right ]\right )^2 \\ &= \left [\begin {array}{ccc} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end {array}\right ] \end{align*}
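
This computation of \(\left (A-\lambda I \right )^2\) is easy to confirm (a one-line Maple check; the name A is illustrative):

with(LinearAlgebra):
A := Matrix([[1, 1, 0], [0, 1, 0], [0, 0, 1]]):
(A - IdentityMatrix(3))^2;   # expected: the 3x3 zero matrix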

Therefore \(\vec {v}_3\) could be any vector we want (but not the zero vector). Let

\[ \vec {v}_3 = \left [\begin {array}{c} \eta _{1} \\ \eta _{2} \\ \eta _{3} \end {array}\right ] \]

To determine the actual \(\vec {v}_3\) we now need to enforce the condition that \(\vec {v}_3\) satisfies

\begin{align*} \left (A-\lambda I \right ) \vec {v}_3 &= \vec {u} \tag {1} \end{align*}

Where \(\vec {u}\) is a linear combination of \(\vec {v}_1,\vec {v}_2\). Hence

\begin{align*} \vec {u} &= \alpha \vec {v}_1 + \beta \vec {v}_2 \end{align*}

Where \(\alpha ,\beta \) are arbitrary constants (not both zero). Eq. (1) becomes

\begin{align*} \left (A-\lambda I \right ) \left [\begin {array}{c} \eta _{1} \\ \eta _{2} \\ \eta _{3} \end {array}\right ] &= \alpha \left [\begin {array}{c} 0 \\ 0 \\ 1 \end {array}\right ] + \beta \left [\begin {array}{c} 1 \\ 0 \\ 0 \end {array}\right ]\\ \left [\begin {array}{ccc} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end {array}\right ] \left [\begin {array}{c} \eta _{1} \\ \eta _{2} \\ \eta _{3} \end {array}\right ] &= \alpha \left [\begin {array}{c} 0 \\ 0 \\ 1 \end {array}\right ] + \beta \left [\begin {array}{c} 1 \\ 0 \\ 0 \end {array}\right ]\\ \left [\begin {array}{c} \eta _{2} \\ 0 \\ 0 \end {array}\right ] &= \left [\begin {array}{c} \beta \\ 0 \\ \alpha \end {array}\right ] \end{align*}

Expanding the above gives the following equations

\begin{align*} \eta _{2} = \beta \\ 0 = \alpha \end{align*}

These equations determine \(\alpha ,\beta \) in terms of the entries of \(\vec {v}_3\): the second equation forces \(\alpha = 0\), and the first gives \(\beta = \eta _{2}\).

Since \(\alpha ,\beta \) must not both be zero, we need \(\beta \ne 0\), that is \(\eta _{2} \ne 0\); the remaining entries \(\eta _{1},\eta _{3}\) are unconstrained. By inspection, the following value satisfies this condition

\[ [\eta _{2} = -1] \]

Hence, taking the unconstrained entries \(\eta _{1},\eta _{3}\) to be zero, we have found the missing generalized eigenvector

\[ \vec {v}_3 = \left [\begin {array}{c} 0 \\ -1 \\ 0 \end {array}\right ] \]

Which implies that

\begin{align*} \alpha &=0\\ \beta &=-1 \end{align*}

Therefore

\begin{align*} \vec {u} &= \alpha \vec {v}_1 + \beta \vec {v}_2\\ &= 0 \left [\begin {array}{c} 0 \\ 0 \\ 1 \end {array}\right ] + (-1) \left [\begin {array}{c} 1 \\ 0 \\ 0 \end {array}\right ]\\ &= \left [\begin {array}{c} -1 \\ 0 \\ 0 \end {array}\right ] \end{align*}
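
As a quick check (a minimal Maple sketch using the vectors found in this section), \(\left (A-\lambda I \right )\vec {v}_3\) should indeed equal \(\vec {u}\):

with(LinearAlgebra):
A  := Matrix([[1, 1, 0], [0, 1, 0], [0, 0, 1]]):
v3 := Vector([0, -1, 0]):
(A - IdentityMatrix(3)) . v3;   # expected: <-1, 0, 0>, i.e. the vector u above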

The missing generalized eigenvector is now found, so we have three linearly independent (generalized) eigenvectors for eigenvalue \(1\). Therefore the three basis solutions associated with this eigenvalue are

\begin{align*} \vec {x}_1(t) &= \vec {v}_1 e^{\lambda t}\\ &= \left [\begin {array}{c} 0 \\ 0 \\ 1 \end {array}\right ] {\mathrm e}^{t}\\ &= \left [\begin {array}{c} 0 \\ 0 \\ {\mathrm e}^{t} \end {array}\right ] \end{align*}

And

\begin{align*} \vec {x}_2(t) &= \vec {v}_2 e^{\lambda t}\\ &= \left [\begin {array}{c} 1 \\ 0 \\ 0 \end {array}\right ] {\mathrm e}^{t}\\ &= \left [\begin {array}{c} {\mathrm e}^{t} \\ 0 \\ 0 \end {array}\right ] \end{align*}

And

\begin{align*} \vec {x}_3(t) &=\left ( \vec {u} t + \vec {v}_3 \right ) e^{\lambda t} \\ &= \left (\left [\begin {array}{c} -1 \\ 0 \\ 0 \end {array}\right ] t + \left [\begin {array}{c} 0 \\ -1 \\ 0 \end {array}\right ]\right ) {\mathrm e}^{t} \end{align*}

Therefore the final solution is

\begin{align*} \vec {x}_h(t) &= c_{1} \vec {x}_{1}(t) + c_{2} \vec {x}_{2}(t) + c_{3} \vec {x}_{3}(t) \end{align*}

Which is written as

\begin{align*} \left [\begin {array}{c} x \left (t \right ) \\ y \left (t \right ) \\ z \left (t \right ) \end {array}\right ] &= c_{1} \left [\begin {array}{c} 0 \\ 0 \\ {\mathrm e}^{t} \end {array}\right ] + c_{2} \left [\begin {array}{c} {\mathrm e}^{t} \\ 0 \\ 0 \end {array}\right ] + c_{3} \left [\begin {array}{c} -t \,{\mathrm e}^{t} \\ -{\mathrm e}^{t} \\ 0 \end {array}\right ] \end{align*}

Which becomes

\begin{align*} \left [\begin {array}{c} x \left (t \right ) \\ y \left (t \right ) \\ z \left (t \right ) \end {array}\right ] = \left [\begin {array}{c} {\mathrm e}^{t} \left (-t c_3 +c_2 \right ) \\ -c_3 \,{\mathrm e}^{t} \\ c_1 \,{\mathrm e}^{t} \end {array}\right ] \end{align*}
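
This solution can be verified by substituting it back into the system; every residual should simplify to zero. A minimal Maple sketch (the name sol is illustrative, and \(c_1,c_2,c_3\) are the same arbitrary constants as above):

sol := {x(t) = exp(t)*(c2 - c3*t), y(t) = -c3*exp(t), z(t) = c1*exp(t)}:
simplify(eval(diff(x(t), t) - x(t) - y(t), sol));   # expected: 0
simplify(eval(diff(y(t), t) - y(t), sol));          # expected: 0
simplify(eval(diff(z(t), t) - z(t), sol));          # expected: 0
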
1.75.3 Maple step by step solution
1.75.4 Maple dsolve solution

Solving time : 0.063 (sec)
Leaf size : 26

dsolve([diff(x(t),t) = x(t)+y(t), diff(y(t),t) = y(t), diff(z(t),t) = z(t)] 
       ,{op([x(t), y(t), z(t)])})
 
\begin{align*} x \left (t \right ) &= \left (c_2 t +c_1 \right ) {\mathrm e}^{t} \\ y \left (t \right ) &= c_2 \,{\mathrm e}^{t} \\ z \left (t \right ) &= c_3 \,{\mathrm e}^{t} \\ \end{align*}
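
The dsolve output can be checked with odetest, which substitutes the returned solution back into the system and should report zero residuals (a small sketch; in an actual session the arbitrary constants appear as _C1, _C2, _C3 rather than \(c_1,c_2,c_3\)):

sys := {diff(x(t),t) = x(t)+y(t), diff(y(t),t) = y(t), diff(z(t),t) = z(t)}:
sol := dsolve(sys, {x(t), y(t), z(t)}):
odetest(sol, sys);   # should return zero residuals
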
1.75.5 Mathematica DSolve solution

Solving time : 0.022 (sec)
Leaf size : 62

DSolve[{{D[x[t],t]== x[t]+y[t],D[y[t],t] == y[t],D[z[t],t]==z[t]},{}}, 
       {x[t],y[t],z[t]},t,IncludeSingularSolutions->True]
 
\begin{align*} x(t)\to e^t (c_2 t+c_1) \\ y(t)\to c_2 e^t \\ z(t)\to c_3 e^t \\ x(t)\to e^t (c_2 t+c_1) \\ y(t)\to c_2 e^t \\ z(t)\to 0 \\ \end{align*}