2.5 HW5
2.5.1 Questions
2.5.2 Problem 1
Let \alpha _{i} be the i^{th} eigenvalue of M_{1} and let v_{i} be an eigenvector associated with \alpha _{i}. This implies M_{1}v_{i}=\alpha _{i}v_{i}
Similarly, let \beta _{i}
be the i^{th} eigenvalue of M_{2} and let u_{i} be an eigenvector associated with \beta _{i}. This implies M_{2}u_{i}=\beta _{i}u_{i}
We start by post-multiplying M_{1}M_{2} by an eigenvector v_{i} of M_{1} associated with the eigenvalue \alpha _{i}. Since M_{1} and M_{2} commute, we may switch the order of the product, which gives M_{1}M_{2}v_{i}=M_{2}M_{1}v_{i}
But M_{1}v_{i}=\alpha _{i}v_{i}, hence the above becomes M_{1}M_{2}v_{i}=M_{2}\alpha _{i}v_{i}
Since \alpha _{i} is a scalar, we can move it to the left and obtain M_{1}\left ( M_{2}v_{i}\right ) =\alpha _{i}\left ( M_{2}v_{i}\right )
We see now that M_{2}v_{i} is itself an eigenvector of M_{1} (provided M_{2}v_{i}\neq 0; if M_{2}v_{i}=0 then v_{i} is an eigenvector of M_{2} with eigenvalue zero and the conclusion below holds trivially).
What the above means is that if v_{i} is an eigenvector of M_{1} associated with an eigenvalue \alpha _{i}, then so is M_{2}v_{i}. Now an important point follows: since the eigenvalues are distinct, all the eigenvectors that belong to a given eigenvalue are scalar multiples of each other. What this means is that M_{2}v_{i} is some scaled version of v_{i}, since both are in the same eigenspace associated with \alpha _{i}. (The eigenspace associated with an eigenvalue is the space spanned by all the eigenvectors of that eigenvalue, so this space is one dimensional in this case.)
This is critical, since it tells us that M_{2}v_{i}=\beta _{i}v_{i}, where \beta _{i} is the scaling factor above and is therefore an eigenvalue of M_{2}. Without this restriction, we could not say that M_{2}v_{i}=\beta _{i}v_{i}.
Therefore, the above means that each eigenvector of M_{1} is also an eigenvector of M_{2}. Said another way, the matrices M_{1} and M_{2} share the same eigenspaces.
But this completes the proof, since the nonsingular matrix T which diagonalizes a matrix is made up of the eigenvectors of that matrix: the columns of T are the eigenvectors. Since M_{1},M_{2} share the same eigenvectors, the same T diagonalizes both of them at the same time.
QED
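As a quick numerical sanity check (not part of the assigned solution), the sketch below builds two commuting matrices with distinct eigenvalues from an arbitrarily chosen T and verifies that the eigenvector matrix of the first also diagonalizes the second. The specific matrices and the use of numpy are illustrative assumptions only.

import numpy as np

# Build two commuting matrices with distinct eigenvalues (shared eigenvectors by construction).
rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3))                      # arbitrary (generically invertible) matrix
M1 = T @ np.diag([1.0, 2.0, 3.0]) @ np.linalg.inv(T)
M2 = T @ np.diag([5.0, -1.0, 4.0]) @ np.linalg.inv(T)
assert np.allclose(M1 @ M2, M2 @ M1)                 # they commute

# Eigenvectors of M1 (columns of V); distinct eigenvalues give one-dimensional eigenspaces.
_, V = np.linalg.eig(M1)
D1 = np.linalg.inv(V) @ M1 @ V                       # diagonal by construction of V
D2 = np.linalg.inv(V) @ M2 @ V                       # should also be (numerically) diagonal

off_diag = lambda D: D - np.diag(np.diag(D))
print(np.max(np.abs(off_diag(D1))), np.max(np.abs(off_diag(D2))))   # both near machine precision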
2.5.3 Problem 2
Part (a)
We want to show that \frac{d}{dt}\Psi \left ( t\right ) =A\left ( t\right ) \Psi \left ( t\right ) , where \Psi \left ( t\right ) =e^{\int _{0}^{t}A\left ( \tau \right ) d\tau }. To expand e^{\int _{0}^{t}A\left ( \tau \right ) d\tau } we will use the definition of the matrix exponential e^{M}=I+M+\frac{1}{2}M^{2}+\frac{1}{3!}M^{3}+\cdots
Therefore e^{\int _{0}^{t}A\left ( \tau \right ) d\tau }=I+\int _{0}^{t}A\left ( \tau \right ) d\tau +\frac{1}{2}\left ( \int _{0}^{t}A\left ( \tau \right ) d\tau \right ) \left ( \int _{0}^{t}A\left ( \tau \right ) d\tau \right ) +\frac{1}{3!}\left ( \int _{0}^{t}A\left ( \tau \right ) d\tau \right ) \left ( \int _{0}^{t}A\left ( \tau \right ) d\tau \right ) \left ( \int _{0}^{t}A\left ( \tau \right ) d\tau \right ) +\cdots
To make it easier to see, we will write out only the first few terms of the expansion:\begin{equation} e^{\int _{0}^{t}A\left ( \tau \right ) d\tau }=I+\int _{0}^{t}A\left ( \tau \right ) d\tau +\frac{1}{2}\left [ \int _{0}^{t}A\left ( \tau \right ) d\tau \int _{0}^{t}A\left ( \tau \right ) d\tau \right ] +\cdots \tag{1} \end{equation}
Taking the time derivative of the above and using the matrix product rule \frac{d}{dt}\left ( XY\right ) =\frac{dX}{dt}Y+X\frac{dY}{dt} (the order of the factors must be preserved) gives\begin{align*} \frac{d}{dt}\Psi \left ( t\right ) & =\frac{d}{dt}\left ( e^{\int _{0}^{t}A\left ( \tau \right ) d\tau }\right ) \\ & =\frac{d}{dt}\left ( I+\int _{0}^{t}A\left ( \tau \right ) d\tau +\frac{1}{2}\left [ \int _{0}^{t}A\left ( \tau \right ) d\tau \int _{0}^{t}A\left ( \tau \right ) d\tau \right ] +\cdots \right ) \\ & =\overset{0}{\overbrace{\frac{d}{dt}I}}+\frac{d}{dt}\int _{0}^{t}A\left ( \tau \right ) d\tau +\frac{1}{2}\frac{d}{dt}\left [ \int _{0}^{t}A\left ( \tau \right ) d\tau \int _{0}^{t}A\left ( \tau \right ) d\tau \right ] +\cdots \\ & =A\left ( t\right ) +\frac{1}{2}\left [ \left ( \int _{0}^{t}A\left ( \tau \right ) d\tau \right ) A\left ( t\right ) +A\left ( t\right ) \left ( \int _{0}^{t}A\left ( \tau \right ) d\tau \right ) \right ] +\cdots \end{align*}
Using the assumed commutation property A\left ( t\right ) \left ( \int _{0}^{t}A\left ( \tau \right ) d\tau \right ) =\left ( \int _{0}^{t}A\left ( \tau \right ) d\tau \right ) A\left ( t\right ) , we write the second term above as \frac{d}{dt}\Psi \left ( t\right ) =A\left ( t\right ) +\frac{1}{2}\left [ A\left ( t\right ) \left ( \int _{0}^{t}A\left ( \tau \right ) d\tau \right ) +A\left ( t\right ) \left ( \int _{0}^{t}A\left ( \tau \right ) d\tau \right ) \right ] +\cdots
Therefore\begin{align*} \frac{d}{dt}\Psi \left ( t\right ) & =A\left ( t\right ) +\frac{1}{2}\left [ 2A\left ( t\right ) \left ( \int _{0}^{t}A\left ( \tau \right ) d\tau \right ) \right ] +\cdots \\ & =A\left ( t\right ) +A\left ( t\right ) \int _{0}^{t}A\left ( \tau \right ) d\tau +\cdots \end{align*}
Since all the A\left ( t\right ) are on the left side, we can now factor A\left ( t\right ) out and obtain \frac{d}{dt}\Psi \left ( t\right ) =A\left ( t\right ) \overset{\Psi \left ( t\right ) }{\overbrace{\left ( I+\int _{0}^{t}A\left ( \tau \right ) d\tau +\cdots \right ) }}
Comparing the term inside \left ( \cdot \right ) in the expression above with equation (1), we see it is \Psi \left ( t\right ) . (If we had expanded more terms this would be clearer, but the idea is the same as shown above.) Therefore we conclude that \frac{d}{dt}e^{\int _{0}^{t}A\left ( \tau \right ) d\tau }=A\left ( t\right ) e^{\int _{0}^{t}A\left ( \tau \right ) d\tau }
Or \fbox{$\frac{d}{dt}\Psi \left ( t\right ) =A\left ( t\right ) \Psi \left ( t\right ) $}
Hence \Psi \left ( t\right ) satisfies the state equation. Now we need to show that x\left ( t\right ) =e^{\int _{0}^{t}A\left ( \tau \right ) d\tau }x\left ( 0\right ) is the state solution. Since \Psi \left ( t\right ) is the fundamental matrix, each of its columns is, by definition, an independent solution to x^{\prime }=A\left ( t\right ) x. Hence a linear combination of the columns of \Psi \left ( t\right ) gives the solution x\left ( t\right ) . As shown in class, we obtain the general solution by assuming x\left ( t\right ) =\Psi \left ( t\right ) \theta \left ( t\right ) , which leads to \vec{x}\left ( t\right ) =\Psi \left ( t\right ) \Psi ^{-1}\left ( 0\right ) x\left ( 0\right ) +{\displaystyle \int \limits _{0}^{t}} \Psi \left ( t\right ) \Psi ^{-1}\left ( \tau \right ) B\left ( \tau \right ) u\left ( \tau \right ) d\tau
But since this is a free system there is no input u\left ( t\right ) , and since \Psi \left ( 0\right ) =I we have \Psi ^{-1}\left ( 0\right ) =I, so the above reduces to \vec{x}\left ( t\right ) =\Psi \left ( t\right ) x\left ( 0\right )
But \Psi \left ( t\right ) =e^{\int _{0}^{t}A\left ( \tau \right ) d\tau }
hence \vec{x}\left ( t\right ) =e^{\int _{0}^{t}A\left ( \tau \right ) d\tau }x\left ( 0\right )
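As an illustrative check of part (a) (not part of the assigned solution), the sketch below picks the commuting family A\left ( t\right ) =tI+N with N=\begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix}, which satisfies the assumption A\left ( t\right ) \left ( \int _{0}^{t}A\left ( \tau \right ) d\tau \right ) =\left ( \int _{0}^{t}A\left ( \tau \right ) d\tau \right ) A\left ( t\right ), and verifies symbolically with sympy that \Psi \left ( t\right ) =e^{\int _{0}^{t}A\left ( \tau \right ) d\tau } satisfies \frac{d}{dt}\Psi \left ( t\right ) =A\left ( t\right ) \Psi \left ( t\right ). The choice of A\left ( t\right ) and of sympy is an assumption made only for this sketch.

import sympy as sp

# Commuting family A(t) = t*I + N, so A(t) commutes with its own integral.
t, tau = sp.symbols('t tau', real=True)
A = sp.Matrix([[tau, 1], [1, tau]])

B = A.integrate((tau, 0, t))            # int_0^t A(tau) dtau = [[t^2/2, t], [t, t^2/2]]
Psi = B.exp()                           # matrix exponential, computed symbolically

residual = (Psi.diff(t) - A.subs(tau, t) * Psi).applyfunc(sp.simplify)
print(residual)                         # expected: the 2x2 zero matrix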
Part (b)
We are told that A\left ( t\right ) A\left ( \tau \right ) =A\left ( \tau \right ) A\left ( t\right )
Let us integrate both sides from 0 to t with respect to \tau . The equality is preserved since we are integrating equal quantities over the same interval, hence{\displaystyle \int \limits _{0}^{t}} A\left ( t\right ) A\left ( \tau \right ) d\tau ={\displaystyle \int \limits _{0}^{t}} A\left ( \tau \right ) A\left ( t\right ) d\tau
Now, A\left ( t\right ) does not depend on the integration variable \tau , so it can be taken out of the LHS integral (keeping it on the left) and out of the RHS integral (keeping it on the right), which results in A\left ( t\right ) \left ({\displaystyle \int \limits _{0}^{t}} A\left ( \tau \right ) d\tau \right ) =\left ({\displaystyle \int \limits _{0}^{t}} A\left ( \tau \right ) d\tau \right ) A\left ( t\right )
But the above is exactly the assumption we used in part (a). Therefore, A\left ( t\right ) A\left ( \tau \right ) =A\left ( \tau \right ) A\left ( t\right ) \overset{\text{implies}}{\Rightarrow }A\left ( t\right ) \left ({\displaystyle \int \limits _{0}^{t}} A\left ( \tau \right ) d\tau \right ) =\left ({\displaystyle \int \limits _{0}^{t}} A\left ( \tau \right ) d\tau \right ) A\left ( t\right )
Therefore we can use
the same solution found in (a) \vec{x}\left ( t\right ) =e^{\int _{0}^{t}A\left ( \tau \right ) d\tau }x\left ( 0\right )
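A brief symbolic check of this implication (illustrative only, using the same assumed family A\left ( t\right ) =tI+N as in the sketch for part (a)):

import sympy as sp

# Pointwise commutation A(t)A(s) = A(s)A(t) should carry over to the integral.
t, tau, s = sp.symbols('t tau s', real=True)
A = lambda u: sp.Matrix([[u, 1], [1, u]])

assert A(t)*A(s) - A(s)*A(t) == sp.zeros(2, 2)                # A(t) and A(s) commute

B = A(tau).integrate((tau, 0, t))                             # int_0^t A(tau) dtau
print((A(t)*B - B*A(t)).applyfunc(sp.simplify))               # expected: zero matrix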
2.5.4 Problem 3
Since A\left ( t\right ) A\left ( \tau \right ) =A\left ( \tau \right ) A\left ( t\right ) , then from problem (2) we know that \Psi \left ( t\right ) =e^{\int _{0}^{t}A\left ( \tau \right ) d\tau }
is the fundamental matrix for x^{\prime }\left ( t\right ) =A\left ( t\right ) x\left ( t\right ) . We now
need to show that, given that A\left ( t\right ) has distinct eigenvalues for each t, the fundamental
matrix can be written as \Psi \left ( t\right ) =T^{-1}e^{\int _{0}^{t}\Lambda \left ( \tau \right ) d\tau }T for some constant matrix T. The important point is that T
must be constant in the above. In addition, we need to show that the above \Psi \left ( t\right ) satisfies
\frac{d}{dt}\Psi \left ( t\right ) =A\left ( t\right ) \Psi \left ( t\right ) .
The first step is to find the constant matrix T. Since A\left ( t\right ) A\left ( \tau \right ) =A\left ( \tau \right ) A\left ( t\right ) , then by selecting \tau =0, which is the initial time, we obtain A\left ( t\right ) A\left ( 0\right ) =A\left ( 0\right ) A\left ( t\right ) . Therefore, each A\left ( t\right ) commutes with the same matrix A\left ( 0\right ) , i.e. A\left ( t_{1}\right ) commutes with A\left ( 0\right ) , A\left ( t_{2}\right ) commutes with A\left ( 0\right ) , and so on. But in problem 1 we showed that when two such matrices commute, they have the same eigenvectors. Therefore, we can construct the T matrix from the eigenvectors of A\left ( 0\right ) , by using the n linearly independent eigenvectors of A\left ( 0\right ) as the columns of T. Let us call it T_{0}. Therefore, T_{0} is constant and does not change. Now that we have found a constant matrix T_{0} to use for the diagonalization of each A\left ( t\right ) matrix, we show the rest of the solution using T_{0}. Since e^{M}=I+M+\frac{1}{2}M^{2}+\frac{1}{3!}M^{3}+\cdots ={\displaystyle \sum \limits _{i=0}^{\infty }} \frac{M^{i}}{i!}
Therefore, applying the above to \begin{align*} \Psi \left ( t\right ) & =e^{\int _{0}^{t}A\left ( \tau \right ) d\tau }\\ & ={\displaystyle \sum \limits _{i=0}^{\infty }} \frac{1}{i!}\left ( \int _{0}^{t}A\left ( \tau \right ) d\tau \right ) ^{i} \end{align*}
Since A has distinct eigenvalues at all time, we can diagonalize it using the constant T_{0}, hence\begin{align*} \Psi \left ( t\right ) & ={\displaystyle \sum \limits _{i=0}^{\infty }} \frac{1}{i!}\left ( \int _{0}^{t}T_{0}^{-1}\Lambda \left ( \tau \right ) T_{0}d\tau \right ) ^{i}\\ & =I+\int _{0}^{t}T_{0}^{-1}\Lambda \left ( \tau \right ) T_{0}d\tau +\frac{1}{2}\int _{0}^{t}T_{0}^{-1}\Lambda \left ( \tau \right ) T_{0}d\tau \int _{0}^{t}T_{0}^{-1}\Lambda \left ( \tau \right ) T_{0}d\tau +\cdots \\ & =I+T_{0}^{-1}\left ( \int _{0}^{t}\Lambda \left ( \tau \right ) d\tau \right ) T_{0}+\frac{1}{2}T_{0}^{-1}\left ( \int _{0}^{t}\Lambda \left ( \tau \right ) d\tau \right ) \left ( T_{0}T_{0}^{-1}\right ) \left ( \int _{0}^{t}\Lambda \left ( \tau \right ) d\tau \right ) T_{0}+\cdots \end{align*}
All the inner T_{0}T_{0}^{-1} result in I since T_{0} is invertible, therefore the above becomes \Psi \left ( t\right ) =I+T_{0}^{-1}\left ( \int _{0}^{t}\Lambda \left ( \tau \right ) d\tau \right ) T_{0}+\frac{1}{2!}T_{0}^{-1}\left ( \int _{0}^{t}\Lambda \left ( \tau \right ) d\tau \right ) ^{2}T_{0}+\frac{1}{3!}T_{0}^{-1}\left ( \int _{0}^{t}\Lambda \left ( \tau \right ) d\tau \right ) ^{3}T_{0}+\cdots
Pre-multiplying both sides by T_{0} gives T_{0}\Psi \left ( t\right ) =T_{0}+\left ( \int _{0}^{t}\Lambda \left ( \tau \right ) d\tau \right ) T_{0}+\frac{1}{2!}\left ( \int _{0}^{t}\Lambda \left ( \tau \right ) d\tau \right ) ^{2}T_{0}+\frac{1}{3!}\left ( \int _{0}^{t}\Lambda \left ( \tau \right ) d\tau \right ) ^{3}T_{0}+\cdots
Post multiply both sides by T_{0}^{-1}, and again replacing all of the T_{0}T_{0}^{-1} products with I gives\begin{align*} T_{0}\Psi \left ( t\right ) T_{0}^{-1} & =I+\left ( \int _{0}^{t}\Lambda \left ( \tau \right ) d\tau \right ) T_{0}T_{0}^{-1}+\frac{1}{2!}\left ( \int _{0}^{t}\Lambda \left ( \tau \right ) d\tau \right ) ^{2}T_{0}T_{0}^{-1}+\frac{1}{3!}\left ( \int _{0}^{t}\Lambda \left ( \tau \right ) d\tau \right ) ^{3}T_{0}T_{0}^{-1}+\cdots \\ & =I+\left ( \int _{0}^{t}\Lambda \left ( \tau \right ) d\tau \right ) +\frac{1}{2!}\left ( \int _{0}^{t}\Lambda \left ( \tau \right ) d\tau \right ) ^{2}+\frac{1}{3!}\left ( \int _{0}^{t}\Lambda \left ( \tau \right ) d\tau \right ) ^{3}+\cdots \\ & =e^{\int _{0}^{t}\Lambda \left ( \tau \right ) d\tau } \end{align*}
Therefore\begin{equation} \Psi \left ( t\right ) =T_{0}^{-1}e^{\int _{0}^{t}\Lambda \left ( \tau \right ) d\tau }T_{0} \tag{1} \end{equation}
Given equation (1), we need now to show that it leads to \frac{d}{dt}\Psi \left ( t\right ) =A\left ( t\right ) \Psi \left ( t\right ) . \begin{align} \frac{d}{dt}\Psi \left ( t\right ) & =\frac{d}{dt}\left ( T_{0}^{-1}e^{\int _{0}^{t}\Lambda \left ( \tau \right ) d\tau }T_{0}\right ) \nonumber \\ & =T_{0}^{-1}\left ( \frac{d}{dt}e^{\int _{0}^{t}\Lambda \left ( \tau \right ) d\tau }\right ) T_{0} \tag{2} \end{align}
Since \Lambda \left ( \tau \right ) is a diagonal matrix (by definition, it has the eigenvalues on the diagonal), it commutes with \Lambda \left ( t\right ) (any two diagonal matrices commute). Hence
\begin{equation} \fbox{$\Lambda \left ( \tau \right ) \Lambda \left ( t\right ) =\Lambda \left ( t\right ) \Lambda \left ( \tau \right ) $} \tag{3} \end{equation}
What this means is that we can expand e^{\int _{0}^{t}\Lambda \left ( \tau \right ) d\tau } in a power series and simplify it as follows: e^{\int _{0}^{t}\Lambda \left ( \tau \right ) d\tau }=I+\int _{0}^{t}\Lambda \left ( \tau \right ) d\tau +\frac{1}{2}\int _{0}^{t}\Lambda \left ( \tau \right ) d\tau \int _{0}^{t}\Lambda \left ( \tau \right ) d\tau +\frac{1}{3!}\int _{0}^{t}\Lambda \left ( \tau \right ) d\tau \int _{0}^{t}\Lambda \left ( \tau \right ) d\tau \int _{0}^{t}\Lambda \left ( \tau \right ) d\tau +\cdots
Substituting
this into (2)\begin{align*} \frac{d}{dt}\Psi \left ( t\right ) & =T_{0}^{-1}\left ( \frac{d}{dt}\left [ I+\int _{0}^{t}\Lambda \left ( \tau \right ) d\tau +\frac{1}{2}\int _{0}^{t}\Lambda \left ( \tau \right ) d\tau \int _{0}^{t}\Lambda \left ( \tau \right ) d\tau +\frac{1}{3!}\int _{0}^{t}\Lambda \left ( \tau \right ) d\tau \int _{0}^{t}\Lambda \left ( \tau \right ) d\tau \int _{0}^{t}\Lambda \left ( \tau \right ) d\tau +\cdots \right ] \right ) T_{0}\\ & =T_{0}^{-1}\left ( \left [ \Lambda \left ( t\right ) +\frac{1}{2}\left ( \Lambda \left ( t\right ) \int _{0}^{t}\Lambda \left ( \tau \right ) d\tau +\int _{0}^{t}\Lambda \left ( \tau \right ) d\tau \Lambda \left ( t\right ) \right ) +\cdots \right ] \right ) T_{0} \end{align*}
Since \Lambda \left ( \tau \right ) and \Lambda \left ( t\right ) commute, then using (3)\begin{align*} \frac{d}{dt}\Psi \left ( t\right ) & =T_{0}^{-1}\left ( \left [ \Lambda \left ( t\right ) +\frac{1}{2}\left ( \Lambda \left ( t\right ) \int _{0}^{t}\Lambda \left ( \tau \right ) d\tau +\Lambda \left ( t\right ) \int _{0}^{t}\Lambda \left ( \tau \right ) d\tau \right ) +\cdots \right ] \right ) T_{0}\\ & =T_{0}^{-1}\left ( \left [ \Lambda \left ( t\right ) +\Lambda \left ( t\right ) \int _{0}^{t}\Lambda \left ( \tau \right ) d\tau +\cdots \right ] \right ) T_{0}\\ & =T_{0}^{-1}\left ( \Lambda \left ( t\right ) \left [ I+\int _{0}^{t}\Lambda \left ( \tau \right ) d\tau +\frac{1}{2}\int _{0}^{t}\Lambda \left ( \tau \right ) d\tau \int _{0}^{t}\Lambda \left ( \tau \right ) d\tau +\cdots \right ] \right ) T_{0}\\ & =\overset{A(t)}{\overbrace{\left [ T_{0}^{-1}\Lambda \left ( t\right ) T_{0}\right ] }}\overset{\Psi \left ( t\right ) \text{ from (1)}}{\overbrace{T_{0}^{-1}\left ( I+\int _{0}^{t}\Lambda \left ( \tau \right ) d\tau +\frac{1}{2}\int _{0}^{t}\Lambda \left ( \tau \right ) d\tau \int _{0}^{t}\Lambda \left ( \tau \right ) d\tau +\cdots \right ) T_{0}}} \end{align*}
Hence \frac{d}{dt}\Psi \left ( t\right ) =A\left ( t\right ) \Psi \left ( t\right )
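The following sketch (illustrative only) checks the claim for an assumed example built in the document's convention A\left ( t\right ) =T_{0}^{-1}\Lambda \left ( t\right ) T_{0}: with a fixed invertible T_{0} and a diagonal \Lambda \left ( t\right ) with distinct entries, the matrix \Psi \left ( t\right ) =T_{0}^{-1}e^{\int _{0}^{t}\Lambda \left ( \tau \right ) d\tau }T_{0} should satisfy \frac{d}{dt}\Psi \left ( t\right ) =A\left ( t\right ) \Psi \left ( t\right ). The particular T_{0} and \Lambda \left ( t\right ) below are arbitrary choices made for the sketch.

import sympy as sp

t, tau = sp.symbols('t tau', real=True)
T0 = sp.Matrix([[1, 2], [3, 5]])                  # any constant invertible matrix
Lam = lambda s: sp.diag(s + 1, 2*s + 3)           # diagonal, distinct entries for every s
A = lambda s: T0.inv() * Lam(s) * T0              # commuting family: A(t)A(s) = A(s)A(t)

# exp of the integral of a diagonal matrix is the diagonal of the entrywise exponentials
E = sp.diag(*[sp.exp(sp.integrate(Lam(tau)[i, i], (tau, 0, t))) for i in range(2)])
Psi = T0.inv() * E * T0

residual = (Psi.diff(t) - A(t) * Psi).applyfunc(sp.simplify)
print(residual)                                   # expected: the 2x2 zero matrix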
2.5.5 Problem 4
Part (a)
For A\left ( t\right ) =\begin{pmatrix} -\frac{4}{t} & -\frac{2}{t^{2}}\\ 1 & 0 \end{pmatrix} , we first need to find the fundamental matrix \Psi \left ( t\right ) and then \Phi \left ( t,\tau \right ) =\Psi \left ( t\right ) \Psi ^{-1}\left ( \tau \right ) . Let the 2 linearly independent
initial conditions be X^{01}=\begin{pmatrix} 1\\ 0 \end{pmatrix} ,X^{02}=\begin{pmatrix} 0\\ 1 \end{pmatrix}
We now solve x^{\prime }=A\left ( t\right ) x using both of these initial conditions and obtain two linearly independent solutions with which to construct \Psi \left ( t\right ) . Using the first initial condition x_{1}\left ( 1\right ) =1,x_{2}\left ( 1\right ) =0, the two
equations to solve are\begin{align} x_{1}^{\prime } & =-\frac{4}{t}x_{1}-\frac{2}{t^{2}}x_{2}\tag{1}\\ x_{2}^{\prime } & =x_{1}\tag{2} \end{align}
From the second equation \frac{d}{dt}x_{2}=x_{1}
Integrate both sides\begin{align*} \int _{1}^{t}dx_{2} & =\int _{1}^{t}x_{1}\left ( \tau \right ) d\tau \\ x_{2}\left ( t\right ) -x_{2}\left ( 1\right ) & =\int _{1}^{t}x_{1}\left ( \tau \right ) d\tau \end{align*}
But x_{2}\left ( 1\right ) =0, hence x_{2}\left ( t\right ) =\int _{1}^{t}x_{1}\left ( \tau \right ) d\tau . Substituting this in (1) gives x_{1}^{\prime }=-\frac{4}{t}x_{1}-\frac{2}{t^{2}}\int _{1}^{t}x_{1}\left ( \tau \right ) d\tau
Multiply both sides by \frac{t^{2}}{2} to get \frac{t^{2}}{2}x_{1}^{\prime }=-2tx_{1}-\int _{1}^{t}x_{1}\left ( \tau \right ) d\tau
Taking derivative of both sides
with respect to t gives\begin{align} tx_{1}^{\prime }+\frac{t^{2}}{2}x_{1}^{\prime \prime } & =-2x_{1}-2tx_{1}^{\prime }-x_{1}\left ( t\right ) \nonumber \\ \frac{t^{2}}{2}x_{1}^{\prime \prime }+3tx_{1}^{\prime }+3x_{1} & =0\nonumber \\ t^{2}x_{1}^{\prime \prime }+6tx_{1}^{\prime }+6x_{1} & =0\tag{3} \end{align}
This second order differential equation is now solved for x_{1}\left ( t\right ) . The initial conditions are x_{1}\left ( 1\right ) =1 and x_{1}^{\prime }\left ( 1\right ) . We are not given x_{1}^{\prime }\left ( 1\right ) directly, but we can obtain it from the first equation (1) by noting that at t=1 we find x_{1}^{\prime }\left ( 1\right ) =-\frac{4}{1}x_{1}\left ( 1\right ) -\frac{2}{1^{2}}x_{2}(1)=-4. Therefore (3) can now be solved for x_{1} since we have two initial conditions. Hence the
problem to solve is\begin{align*} t^{2}x_{1}^{\prime \prime }+6tx_{1}^{\prime }+6x_{1} & =0\\ x_{1}\left ( 1\right ) & =1\\ x_{1}^{\prime }\left ( 1\right ) & =-4 \end{align*}
Equation (3) is in the form of an Euler equation. An Euler ODE has solutions of the form x_{1}\left ( t\right ) =t^{\alpha }. Substituting
this trial solution in (3) gives\begin{align*} t^{2}\left ( \alpha \left ( \alpha -1\right ) t^{\alpha -2}\right ) +6t\alpha t^{\alpha -1}+6t^{\alpha } & =0\\ \alpha \left ( \alpha -1\right ) t^{\alpha }+6\alpha t^{\alpha }+6t^{\alpha } & =0 \end{align*}
For a non-trivial solution, and assuming t>0 (which is the case here), dividing the above by t^{\alpha } gives\begin{align*} \alpha \left ( \alpha -1\right ) +6\alpha +6 & =0\\ \alpha ^{2}+5\alpha +6 & =0 \end{align*}
Hence \alpha =\left \{ -2,-3\right \}
Therefore the solution is a linear combination of these two solutions, which is\begin{equation} x_{1}\left ( t\right ) =\frac{c_{1}}{t^{2}}+\frac{c_{2}}{t^{3}}\tag{4} \end{equation}
Now we apply the initial conditions. At t=1\,,x_{1}\left ( 1\right ) =1, hence\begin{equation} 1=c_{1}+c_{2}\tag{5} \end{equation}
And x_{1}^{\prime }\left ( t\right ) =-2\frac{c_{1}}{t^{3}}-3\frac{c_{2}}{t^{4}}
And we have x_{1}^{\prime }\left ( 1\right ) =-4 hence\begin{equation} -4=-2c_{1}-3c_{2}\tag{6} \end{equation}
We now have (5),(6), which are two equations in two unknowns:\begin{align*} 1 & =c_{1}+c_{2}\\ -4 & =-2c_{1}-3c_{2} \end{align*}
The solution is c_{1}=-1,c_{2}=2. Hence, using (4), the solution is \fbox{$x_1\left ( t\right ) =\frac{-1}{t^2}+\frac{2}{t^3}$}
Now that we know x_{1}\left ( t\right ) \,, we can
find x_{2}\left ( t\right ) from x_{2}\left ( t\right ) =\int _{1}^{t}x_{1}\left ( \tau \right ) d\tau , therefore x_{2}\left ( t\right ) =\int _{1}^{t}\frac{-1}{\tau ^{2}}+\frac{2}{\tau ^{3}}d\tau
Hence \fbox{$x_2\left ( t\right ) =\frac{t-1}{t^2}$}
This gives us the first column of \Psi ^{1}=\begin{pmatrix} \frac{-1}{t^{2}}+\frac{2}{t^{3}}\\ \frac{t-1}{t^{2}}\end{pmatrix}
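As an illustrative check (not part of the assigned solution), sympy's dsolve can be used to confirm this first column; the use of sympy here is an assumption made only for the check.

import sympy as sp

# Solve the Euler equation t^2 x'' + 6 t x' + 6 x = 0 with x(1) = 1, x'(1) = -4,
# then integrate from 1 to t to recover x2.
t, tau = sp.symbols('t tau', positive=True)
x = sp.Function('x')

ode = sp.Eq(t**2 * x(t).diff(t, 2) + 6*t*x(t).diff(t) + 6*x(t), 0)
sol = sp.dsolve(ode, x(t), ics={x(1): 1, x(t).diff(t).subs(t, 1): -4})
x1 = sp.simplify(sol.rhs)
print(x1)                                        # expected: -1/t**2 + 2/t**3

x2 = sp.integrate(x1.subs(t, tau), (tau, 1, t))
print(sp.simplify(x2))                           # expected: (t - 1)/t**2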
Now we need to do the same for X^{02}.
Using the second initial conditions x_{1}\left ( 1\right ) =0,x_{2}\left ( 1\right ) =1. The two equations to solve are\begin{align} x_{1}^{\prime } & =-\frac{4}{t}x_{1}-\frac{2}{t^{2}}x_{2}\tag{1A}\\ x_{2}^{\prime } & =x_{1} \tag{2A} \end{align}
From the second equation \frac{d}{dt}x_{2}=x_{1}
Integrate both sides\begin{align*} \int _{1}^{t}dx_{2} & =\int _{1}^{t}x_{1}\left ( \tau \right ) d\tau \\ x_{2}\left ( t\right ) -x_{2}\left ( 1\right ) & =\int _{1}^{t}x_{1}\left ( \tau \right ) d\tau \end{align*}
But x_{2}\left ( 1\right ) =1, hence x_{2}\left ( t\right ) =1+\int _{1}^{t}x_{1}\left ( \tau \right ) d\tau . Substituting this in (1A) gives x_{1}^{\prime }=-\frac{4}{t}x_{1}-\frac{2}{t^{2}}\left ( 1+\int _{1}^{t}x_{1}\left ( \tau \right ) d\tau \right )
Multiply both sides by \frac{t^{2}}{2} to get \frac{t^{2}}{2}x_{1}^{\prime }=-2tx_{1}-1-\int _{1}^{t}x_{1}\left ( \tau \right ) d\tau
Taking derivative of both
sides with respect to t gives\begin{align*} tx_{1}^{\prime }+\frac{t^{2}}{2}x_{1}^{\prime \prime } & =-2x_{1}-2tx_{1}^{\prime }-x_{1}\left ( t\right ) \\ \frac{t^{2}}{2}x_{1}^{\prime \prime }+3tx_{1}^{\prime }+3x_{1} & =0\\ t^{2}x_{1}^{\prime \prime }+6tx_{1}^{\prime }+6x_{1} & =0 \end{align*}
This is the same second order differential equation as was found for X^{01}, but the initial conditions are now different. The initial conditions are x_{1}\left ( 1\right ) =0 and x_{1}^{\prime }\left ( 1\right ) . We are not given x_{1}^{\prime }\left ( 1\right ) directly, but we can obtain it from the first equation (1A) by noting that at t=1 we find x_{1}^{\prime }\left ( 1\right ) =-\frac{4}{1}x_{1}\left ( 1\right ) -\frac{2}{1^{2}}x_{2}(1)=-2. Therefore (3A) can
now be solved for x_{1} since we have two initial conditions. Hence the problem to solve is\begin{align} t^{2}x_{1}^{\prime \prime }+6tx_{1}^{\prime }+6x_{1} & =0\tag{3A}\\ x_{1}\left ( 1\right ) & =0\nonumber \\ x_{1}^{\prime }\left ( 1\right ) & =-2\nonumber \end{align}
Equation (3A) is in the form of an Euler equation. An Euler ODE has solutions of the form x_{1}\left ( t\right ) =t^{\alpha }.
Substituting this trial solution in (3A) gives\begin{align*} t^{2}\left ( \alpha \left ( \alpha -1\right ) t^{\alpha -2}\right ) +6t\alpha t^{\alpha -1}+6t^{\alpha } & =0\\ \alpha \left ( \alpha -1\right ) t^{\alpha }+6\alpha t^{\alpha }+6t^{\alpha } & =0 \end{align*}
For a non-trivial solution, and assuming t>0 (which is the case here), dividing the above by t^{\alpha } gives\begin{align*} \alpha \left ( \alpha -1\right ) +6\alpha +6 & =0\\ \alpha ^{2}+5\alpha +6 & =0 \end{align*}
Hence \alpha =\left \{ -2,-3\right \}
Therefore the solution is a linear combination of these two solutions, which is\begin{equation} x_{1}\left ( t\right ) =\frac{c_{1}}{t^{2}}+\frac{c_{2}}{t^{3}} \tag{4A} \end{equation}
Now we apply the initial conditions. At t=1\,,x_{1}\left ( 1\right ) =0, hence\begin{equation} 0=c_{1}+c_{2} \tag{5A} \end{equation}
And x_{1}^{\prime }\left ( t\right ) =-2\frac{c_{1}}{t^{3}}-3\frac{c_{2}}{t^{4}}
And we have x_{1}^{\prime }\left ( 1\right ) =-2 hence\begin{equation} -2=-2c_{1}-3c_{2} \tag{6A} \end{equation}
We now have (5A),(6A), which are two equations in two unknowns:\begin{align*} 0 & =c_{1}+c_{2}\\ -2 & =-2c_{1}-3c_{2} \end{align*}
The solution is c_{1}=-2,c_{2}=2. Hence, using (4A), the solution is x_{1}\left ( t\right ) =\frac{-2}{t^{2}}+\frac{2}{t^{3}}
Now that we know x_{1}\left ( t\right ) \,,
we can find x_{2}\left ( t\right ) from x_{2}\left ( t\right ) =1+\int _{1}^{t}x_{1}\left ( \tau \right ) d\tau , therefore x_{2}\left ( t\right ) =1+\int _{1}^{t}\frac{-2}{\tau ^{2}}+\frac{2}{\tau ^{3}}d\tau
Hence \fbox{$x_2\left ( t\right ) =\frac{2t-1}{t^2}$}
This gives us the second column of \Psi ^{2}=\begin{pmatrix} \frac{-2}{t^{2}}+\frac{2}{t^{3}}\\ \frac{2t-1}{t^{2}}\end{pmatrix}
Hence the
fundamental matrix is \Psi =\begin{pmatrix} \frac{-1}{t^{2}}+\frac{2}{t^{3}} & \frac{-2}{t^{2}}+\frac{2}{t^{3}}\\ \frac{t-1}{t^{2}} & \frac{2t-1}{t^{2}}\end{pmatrix}
The inverse is now found. Since \det \Psi =\frac{1}{t^{4}}, \Psi ^{-1}=t^{4}\begin{pmatrix} \frac{2t-1}{t^{2}} & \frac{2}{t^{2}}-\frac{2}{t^{3}}\\ -\frac{t-1}{t^{2}} & \frac{-1}{t^{2}}+\frac{2}{t^{3}}\end{pmatrix} =\begin{pmatrix} 2t^{3}-t^{2} & 2t^{2}-2t\\ t^{2}-t^{3} & 2t-t^{2}\end{pmatrix}
Therefore the state transition function is\begin{align*} \Phi \left ( t,\tau \right ) & =\Psi \left ( t\right ) \Psi ^{-1}\left ( \tau \right ) \\ & =\begin{pmatrix} \frac{-1}{t^{2}}+\frac{2}{t^{3}} & \frac{-2}{t^{2}}+\frac{2}{t^{3}}\\ \frac{t-1}{t^{2}} & \frac{2t-1}{t^{2}}\end{pmatrix}\begin{pmatrix} 2\tau ^{3}-\tau ^{2} & 2\tau ^{2}-2\tau \\ \tau ^{2}-\tau ^{3} & 2\tau -\tau ^{2}\end{pmatrix} \\ & =\begin{pmatrix} -\frac{1}{t^{3}}\tau ^{2}\left ( t-2\tau \right ) & -\frac{2}{t^{3}}\tau \left ( t-\tau \right ) \\ \frac{1}{t^{2}}\tau ^{2}\left ( t-\tau \right ) & -\frac{1}{t^{2}}\tau \left ( \tau -2t\right ) \end{pmatrix} \\ & =\frac{\tau }{t^{2}}\begin{pmatrix} -\frac{\tau }{t}\left ( t-2\tau \right ) & -\frac{2}{t}\left ( t-\tau \right ) \\ \tau \left ( t-\tau \right ) & -\left ( \tau -2t\right ) \end{pmatrix} \end{align*}
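As a final consistency check for part (a) (illustrative only, using sympy), the state transition function should satisfy \frac{\partial }{\partial t}\Phi \left ( t,\tau \right ) =A\left ( t\right ) \Phi \left ( t,\tau \right ) and \Phi \left ( \tau ,\tau \right ) =I:

import sympy as sp

t, tau = sp.symbols('t tau', positive=True)
A = sp.Matrix([[-4/t, -2/t**2], [1, 0]])

# Phi(t, tau) as computed above
Phi = sp.Matrix([[-tau**2*(t - 2*tau)/t**3, -2*tau*(t - tau)/t**3],
                 [ tau**2*(t - tau)/t**2,   -tau*(tau - 2*t)/t**2]])

print((Phi.diff(t) - A*Phi).applyfunc(sp.simplify))   # expected: zero matrix
print(Phi.subs(t, tau).applyfunc(sp.simplify))        # expected: identity matrix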
Part (b)
For A\left ( t\right ) =\begin{pmatrix} 2 & -e^{t}\\ e^{-t} & 1 \end{pmatrix} , we first need to find the fundamental matrix \Psi \left ( t\right ) and then \Phi \left ( t,\tau \right ) =\Psi \left ( t\right ) \Psi ^{-1}\left ( \tau \right ) . Let the two linearly independent
initial conditions be X^{01}=\begin{pmatrix} 1\\ 0 \end{pmatrix} ,X^{02}=\begin{pmatrix} 0\\ 1 \end{pmatrix}
We now solve x^{\prime }=A\left ( t\right ) x using both of these initial conditions and obtain two linearly independent solutions with which to construct \Psi \left ( t\right ) . Using the first initial condition x_{1}\left ( 1\right ) =1,x_{2}\left ( 1\right ) =0, the two
equations to solve are\begin{align} x_{1}^{\prime } & =2x_{1}-e^{t}x_{2}\tag{1}\\ x_{2}^{\prime } & =e^{-t}x_{1}+x_{2} \tag{2} \end{align}
Starting with (2), x_{2}^{\prime }-x_{2}=e^{-t}x_{1}. This is in the form x^{\prime }+p\left ( t\right ) x=f\left ( t\right ) , hence the integrating factor is e^{\int p\left ( t\right ) dt}=e^{-\int dt}=e^{-t} and therefore \frac{d}{dt}\left ( e^{-t}x_{2}\right ) =e^{-t}\left ( e^{-t}x_{1}\right )
Integrating both sides\begin{align*} \left [ e^{-\tau }x_{2}\left ( \tau \right ) \right ] _{1}^{t} & =\int _{1}^{t}e^{-2\tau }x_{1}\left ( \tau \right ) d\tau \\ e^{-t}x_{2}\left ( t\right ) -\overset{\text{zero}}{\overbrace{e^{-1}x_{2}\left ( 1\right ) }} & =\int _{1}^{t}e^{-2\tau }x_{1}\left ( \tau \right ) d\tau \\ e^{-t}x_{2}\left ( t\right ) & =\int _{1}^{t}e^{-2\tau }x_{1}\left ( \tau \right ) d\tau \end{align*}
Hence\begin{equation} x_{2}=e^{t}\int _{1}^{t}e^{-2\tau }x_{1}\left ( \tau \right ) d\tau \tag{3} \end{equation}
Substituting this solution in (1) gives\begin{align*} x_{1}^{\prime } & =2x_{1}-e^{2t}\int _{1}^{t}e^{-2\tau }x_{1}\left ( \tau \right ) d\tau \\ e^{-2t}x_{1}^{\prime }-2x_{1}e^{-2t} & =-\int _{1}^{t}e^{-2\tau }x_{1}\left ( \tau \right ) d\tau \end{align*}
Differentiating\begin{align*} -2e^{-2t}x_{1}^{\prime }+e^{-2t}x_{1}^{\prime \prime }-2x_{1}^{\prime }e^{-2t}+4x_{1}e^{-2t} & =-e^{-2t}x_{1}\left ( t\right ) \\ e^{-2t}x_{1}^{\prime \prime }-4e^{-2t}x_{1}^{\prime }+5e^{-2t}x_{1} & =0\\ x_{1}^{\prime \prime }-4x_{1}^{\prime }+5x_{1} & =0 \end{align*}
This is a constant coefficient ODE. Its solution can be found from the characteristic polynomial \lambda ^{2}-4\lambda +5=0, whose roots are \left \{ 2+i,2-i\right \} , hence x_{1}=c_{1}e^{\left ( 2+i\right ) t}+c_{2}e^{\left ( 2-i\right ) t}
Since the roots are complex, this can be written as \sin /\cos , \begin{align*} x_{1} & =c_{1}e^{2t}e^{it}+c_{2}e^{2t}e^{-it}\\ & =e^{2t}\left ( c_{1}e^{it}+c_{2}e^{-it}\right ) \\ & =e^{2t}\left ( c_{1}\left ( \cos t+i\sin t\right ) +c_{2}\left ( \cos t-i\sin t\right ) \right ) \\ & =e^{2t}\left ( \cos t\left ( c_{1}+c_{2}\right ) +\sin t\left ( ic_{1}-ic_{2}\right ) \right ) \end{align*}
Let c_{1}+c_{2}=A and i\left ( c_{1}-c_{2}\right ) =B, some new constants. Hence the above becomes\begin{equation} x_{1}\left ( t\right ) =e^{2t}\left ( A\cos t+B\sin t\right ) \tag{4} \end{equation}
From the initial conditions, x_{1}\left ( 1\right ) =1. But we are not given x_{1}^{\prime }\left ( 1\right ) . We can find this from (1), x_{1}^{\prime }=2x_{1}-e^{t}x_{2}, by noting that at t=1,\begin{align*} x_{1}^{\prime }\left ( 1\right ) & =2x_{1}\left ( 1\right ) -e^{1}x_{2}\left ( 1\right ) \\ & =2 \end{align*}
Hence now we have the two initial conditions to find A,B from (4). At t=1, (4) becomes\begin{equation} 1=e^{2}\left ( A\cos 1+B\sin 1\right ) \tag{5} \end{equation}
Taking derivative of (4) x_{1}^{\prime }\left ( t\right ) =2e^{2t}\left ( A\cos t+B\sin t\right ) +e^{2t}\left ( -A\sin t+B\cos t\right )
And at t=1 this becomes\begin{equation} 2=2e^{2}\left ( A\cos 1+B\sin 1\right ) +e^{2}\left ( -A\sin 1+B\cos 1\right ) \tag{6} \end{equation}
From (5),(6) we can solve for A,B, \begin{align*} 1 & =e^{2}\left ( A\cos 1+B\sin 1\right ) \\ 2 & =2e^{2}\left ( A\cos 1+B\sin 1\right ) +e^{2}\left ( -A\sin 1+B\cos 1\right ) \end{align*}
The solution is A=\frac{\cos 1}{e^{2}},B=\frac{\sin 1}{e^{2}}. Therefore from (4) we obtain\begin{align*} x_{1}\left ( t\right ) & =e^{2t}\left ( \frac{\cos 1\cos t}{e^{2}}+\frac{\sin 1\sin t}{e^{2}}\right ) \\ & =e^{2t}\left ( \frac{\cos 1\cos t+\sin 1\sin t}{e^{2}}\right ) \end{align*}
But \cos 1\cos t+\sin 1\sin t=\cos \left ( 1-t\right ) , hence x_{1}\left ( t\right ) =e^{2\left ( t-1\right ) }\cos \left ( 1-t\right )
Now that we found x_{1}\left ( t\right ) we go to (3) and find x_{2}\left ( t\right ) \begin{align*} x_{2} & =e^{t}\int _{1}^{t}e^{-2\tau }x_{1}\left ( \tau \right ) d\tau \\ & =e^{t}\int _{1}^{t}e^{-2\tau }e^{2\left ( \tau -1\right ) }\cos \left ( 1-\tau \right ) d\tau \\ & =e^{t}\int _{1}^{t}e^{-2}\cos \left ( 1-\tau \right ) d\tau \end{align*}
Hence x_{2}=-e^{t-2}\sin \left ( 1-t\right )
Therefore, the first column of the fundamental matrix is found: \Psi ^{1}=\begin{pmatrix} e^{2t-2}\cos \left ( 1-t\right ) \\ -e^{t-2}\sin \left ( 1-t\right ) \end{pmatrix}
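A quick substitution check of this column (illustrative only, using sympy and equations (1)-(2) as written above):

import sympy as sp

# The pair (x1, x2) should satisfy x1' = 2 x1 - exp(t) x2 and x2' = exp(-t) x1 + x2
# with x1(1) = 1, x2(1) = 0.
t = sp.symbols('t', real=True)
x1 = sp.exp(2*(t - 1)) * sp.cos(1 - t)
x2 = -sp.exp(t - 2) * sp.sin(1 - t)

print(sp.simplify(x1.diff(t) - (2*x1 - sp.exp(t)*x2)))    # expected: 0
print(sp.simplify(x2.diff(t) - (sp.exp(-t)*x1 + x2)))     # expected: 0
print(x1.subs(t, 1), x2.subs(t, 1))                       # expected: 1 0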
We now find the
second column \Psi ^{2}. Using the second initial conditions x_{1}\left ( 1\right ) =0,x_{2}\left ( 1\right ) =1. The two equations to solve are\begin{align} x_{1}^{\prime } & =2x_{1}-e^{t}x_{2}\tag{1A}\\ x_{2}^{\prime } & =e^{-t}x_{1}+x_{2} \tag{2A} \end{align}
Starting with (2A), x_{2}^{\prime }-x_{2}=e^{-t}x_{1}. This is in the form x^{\prime }+p\left ( t\right ) x=f\left ( t\right ) , hence the integrating factor is e^{\int p\left ( t\right ) dt}=e^{-\int dt}=e^{-t} and therefore \frac{d}{dt}\left ( e^{-t}x_{2}\right ) =e^{-t}\left ( e^{-t}x_{1}\right )
Integrating both sides\begin{align} \left [ e^{-\tau }x_{2}\left ( \tau \right ) \right ] _{1}^{t} & =\int _{1}^{t}e^{-2\tau }x_{1}\left ( \tau \right ) d\tau \nonumber \\ e^{-t}x_{2}\left ( t\right ) -e^{-1}x_{2}\left ( 1\right ) & =\int _{1}^{t}e^{-2\tau }x_{1}\left ( \tau \right ) d\tau \nonumber \\ e^{-t}x_{2}\left ( t\right ) -e^{-1} & =\int _{1}^{t}e^{-2\tau }x_{1}\left ( \tau \right ) d\tau \nonumber \\ x_{2}\left ( t\right ) & =e^{t}\int _{1}^{t}e^{-2\tau }x_{1}\left ( \tau \right ) d\tau +e^{t-1} \tag{3A} \end{align}
Substituting this solution in (1A) gives\begin{align*} x_{1}^{\prime } & =2x_{1}-e^{t}\left ( e^{t}\int _{1}^{t}e^{-2\tau }x_{1}\left ( \tau \right ) d\tau +e^{t-1}\right ) \\ x_{1}^{\prime } & =2x_{1}-e^{2t}\int _{1}^{t}e^{-2\tau }x_{1}\left ( \tau \right ) d\tau -e^{2t-1}\\ e^{-2t}x_{1}^{\prime }-2x_{1}e^{-2t} & =-\int _{1}^{t}e^{-2\tau }x_{1}\left ( \tau \right ) d\tau -e^{-1} \end{align*}
Differentiating\begin{align*} -2e^{-2t}x_{1}^{\prime }+e^{-2t}x_{1}^{\prime \prime }-2x_{1}^{\prime }e^{-2t}+4x_{1}e^{-2t} & =-e^{-2t}x_{1}\left ( t\right ) \\ e^{-2t}x_{1}^{\prime \prime }-4e^{-2t}x_{1}^{\prime }+5e^{-2t}x_{1} & =0\\ x_{1}^{\prime \prime }-4x_{1}^{\prime }+5x_{1} & =0 \end{align*}
This is a constant coefficient ODE. Its solution can be found from the characteristic polynomial \lambda ^{2}-4\lambda +5=0, whose roots are \left \{ 2+i,2-i\right \} , hence x_{1}=c_{1}e^{\left ( 2+i\right ) t}+c_{2}e^{\left ( 2-i\right ) t}
Since the roots are complex, this can be written as \sin /\cos , giving, as
above\begin{equation} x_{1}\left ( t\right ) =e^{2t}\left ( A\cos t+B\sin t\right ) \tag{4A} \end{equation}
From the initial conditions, x_{1}\left ( 1\right ) =0. But we are not given x_{1}^{\prime }\left ( 1\right ) . We can find this from (1A), x_{1}^{\prime }=2x_{1}-e^{t}x_{2}, by noting that at t=1,\begin{align*} x_{1}^{\prime }\left ( 1\right ) & =2x_{1}\left ( 1\right ) -e^{1}x_{2}\left ( 1\right ) \\ & =-e^{1} \end{align*}
Hence now we have the two initial conditions to find A,B from (4A). At t=1, (4A) becomes\begin{align} 0 & =e^{2}\left ( A\cos 1+B\sin 1\right ) \nonumber \\ 0 & =A\cos 1+B\sin 1 \tag{5A} \end{align}
Taking derivative of (4A) x_{1}^{\prime }\left ( t\right ) =2e^{2t}\left ( A\cos t+B\sin t\right ) +e^{2t}\left ( -A\sin t+B\cos t\right )
And at t=1 this becomes\begin{equation} -e^{1}=2e^{2}\left ( A\cos 1+B\sin 1\right ) +e^{2}\left ( -A\sin 1+B\cos 1\right ) \tag{6A} \end{equation}
From (5A),(6A) we can solve for A,B, \begin{align*} 0 & =e^{2}\left ( A\cos 1+B\sin 1\right ) \\ -e^{1} & =2e^{2}\left ( A\cos \left ( 1\right ) +B\sin \left ( 1\right ) \right ) +e^{2}\left ( -A\sin \left ( 1\right ) +B\cos \left ( 1\right ) \right ) \end{align*}
The solution is A=\frac{\sin 1}{e},B=\frac{-\cos 1}{e}. Therefore (4A) becomes\begin{align*} x_{1}\left ( t\right ) & =e^{2t}\left ( A\cos t+B\sin t\right ) \\ & =e^{2t}\left ( \frac{\sin 1\cos t}{e}-\frac{\cos 1\sin t}{e}\right ) \\ & =e^{2t}\left ( \frac{\sin 1\cos t-\cos 1\sin t}{e}\right ) \end{align*}
But \sin 1\cos t-\cos 1\sin t=\sin \left ( 1-t\right ) , hence x_{1}\left ( t\right ) =e^{2t-1}\sin \left ( 1-t\right )
Now that we found x_{1}\left ( t\right ) we go to (3A) and find x_{2}\left ( t\right ) \begin{align*} x_{2}\left ( t\right ) & =e^{t}\int _{1}^{t}e^{-2\tau }x_{1}\left ( \tau \right ) d\tau +e^{t-1}\\ & =e^{t-1}+e^{t}\int _{1}^{t}e^{-2\tau }e^{2\tau -1}\sin \left ( 1-\tau \right ) d\tau \\ & =e^{t-1}+e^{t}\int _{1}^{t}e^{-1}\sin \left ( 1-\tau \right ) d\tau \\ & =e^{t-1}+e^{t-1}\int _{1}^{t}\sin \left ( 1-\tau \right ) d\tau \\ & =e^{t-1}+e^{t-1}\left ( -1+\cos \left ( 1-t\right ) \right ) \\ & =e^{t-1}\cos \left ( 1-t\right ) \end{align*}
Therefore, the second column of the fundamental matrix is found \Psi ^{2}=\begin{pmatrix} e^{2t-1}\sin \left ( 1-t\right ) \\ e^{t-1}\cos \left ( 1-t\right ) \end{pmatrix}
Hence the fundamental matrix
is \Psi =\begin{pmatrix} e^{2t-2}\cos \left ( 1-t\right ) & e^{2t-1}\sin \left ( 1-t\right ) \\ -e^{t-2}\sin \left ( 1-t\right ) & e^{t-1}\cos \left ( 1-t\right ) \end{pmatrix}
The inverse is now found. \Psi ^{-1}=\begin{pmatrix} e^{2-2t}\cos \left ( 1-t\right ) & -e^{2-t}\sin \left ( 1-t\right ) \\ e^{1-2t}\sin \left ( 1-t\right ) & e^{1-t}\cos \left ( 1-t\right ) \end{pmatrix}
Therefore the state transition function, after some simplification, is\begin{align*} \Phi \left ( t,\tau \right ) & =\Psi \left ( t\right ) \Psi ^{-1}\left ( \tau \right ) \\ & =\begin{pmatrix} e^{2t-2}\cos \left ( 1-t\right ) & e^{2t-1}\sin \left ( 1-t\right ) \\ -e^{t-2}\sin \left ( 1-t\right ) & e^{t-1}\cos \left ( 1-t\right ) \end{pmatrix}\begin{pmatrix} e^{2-2\tau }\cos \left ( 1-\tau \right ) & -e^{2-\tau }\sin \left ( 1-\tau \right ) \\ e^{1-2\tau }\sin \left ( 1-\tau \right ) & e^{1-\tau }\cos \left ( 1-\tau \right ) \end{pmatrix} \\ & =\begin{pmatrix} e^{2\left ( t-\tau \right ) }\cos \left ( t-\tau \right ) & -e^{2t-\tau }\sin \left ( t-\tau \right ) \\ e^{t-2\tau }\sin \left ( t-\tau \right ) & e^{t-\tau }\cos \left ( t-\tau \right ) \end{pmatrix} \end{align*}
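Finally, an illustrative sympy check that the part (b) result satisfies \frac{\partial }{\partial t}\Phi \left ( t,\tau \right ) =A\left ( t\right ) \Phi \left ( t,\tau \right ) and \Phi \left ( \tau ,\tau \right ) =I, with A\left ( t\right ) taken as in equations (1)-(2) above:

import sympy as sp

t, tau = sp.symbols('t tau', real=True)
A = sp.Matrix([[2, -sp.exp(t)], [sp.exp(-t), 1]])

# Phi(t, tau) as computed above
Phi = sp.Matrix([[sp.exp(2*(t - tau))*sp.cos(t - tau), -sp.exp(2*t - tau)*sp.sin(t - tau)],
                 [sp.exp(t - 2*tau)*sp.sin(t - tau),    sp.exp(t - tau)*sp.cos(t - tau)]])

print((Phi.diff(t) - A*Phi).applyfunc(sp.simplify))   # expected: zero matrix
print(Phi.subs(t, tau).applyfunc(sp.simplify))        # expected: identity matrix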
2.5.6 Key solution