Given\begin{align*} x^{\prime } & =Ax+bu\\ y & =Cx \end{align*}
Replacing \(u\) with \(kx+v\) results in\begin{align*} x^{\prime } & =Ax+b\left ( kx+v\right ) \\ & =Ax+bkx+bv\\ & =\left ( A+bk\right ) x+bv \end{align*}
In the above the dimensions are \(A_{n\times n},\) \(b_{n\times 1},k_{1\times n},v_{1\times 1},x_{n\times 1}\). The transfer function is\begin{equation} H_{cl}\left ( s\right ) =C\left ( sI-\left ( A+bk\right ) \right ) ^{-1}b \tag{1} \end{equation}
Let the controllability matrix for the open loop system \(\left ( A,b\right ) \) be \(\mathbb{C} \) with some rank \(m\), not necessarily full rank.\[\mathbb{C} =\begin{bmatrix} b & Ab & A^{2}b & \cdots & A^{n-1}b \end{bmatrix} \] We need to show that the closed loop controllability matrix \(\mathbb{C} _{cl}\) also has rank \(m\).\[\mathbb{C} _{cl}=\begin{bmatrix} b & \left ( A+bk\right ) b & \left ( A+bk\right ) ^{2}b & \cdots & \left ( A+bk\right ) ^{n-1}b \end{bmatrix} \] Given any matrix, we can perform elementary column or row operations on it without changing its rank; in other words, column operations are rank-preserving. This is the main tool used in the proof.
For example, we can add the first column to the second, and this will not change the rank of the matrix. So the idea of the proof is this: we will perform column operations on each column of \(\mathbb{C} _{cl}\) to convert it back to the corresponding column of \(\mathbb{C} \).
The first step is to expand the columns of \(\mathbb{C} _{cl}\) in order to see more clearly what operations are needed. Only the first three columns are expanded due to space limitations; this is sufficient to show the pattern.
\begin{align} \mathbb{C} _{cl} & =\begin{bmatrix} b & Ab+bkb & \left ( A^{2}+\left ( bk\right ) ^{2}+Abk+bkA\right ) b & \cdots \end{bmatrix} \nonumber \\ & =\begin{bmatrix} b & Ab+bkb & A^{2}b+bkbkb+Abkb+bkAb & \cdots \end{bmatrix} \tag{2} \end{align}
The first column of \(\mathbb{C} _{cl}\) is the same as the first column of \(\mathbb{C} \), so we move to the second column, \(Ab+bkb\), which we want to reduce to \(Ab\). Post-multiplying the first column by the scalar \(kb\) and subtracting the result from the second column turns the second column into \(Ab\).
Now we work on the third column, which is \(A^{2}b+\left ( bk\right ) ^{2}b+Abkb+bkAb\), and search for column operations that convert it to \(A^{2}b\). If we post-multiply the second column of \(\mathbb{C} _{cl}\) by \(kb\) and subtract the result from the third column, the third column becomes \begin{align*} \mathbb{C} _{cl}\left ( 3\right ) & =\left [ A^{2}b+bkbkb+Abkb+bkAb\right ] -\left [ Ab+bkb\right ] kb\\ & =\left [ A^{2}b+bkbkb+Abkb+bkAb\right ] -\left [ Abkb+bkbkb\right ] \\ & =A^{2}b+bkAb \end{align*}
We still have \(bkAb\) left to remove. So we need to do more column operations. If we now post-multiply the first column by \(kAb\) and subtract the result from \(\mathbb{C} _{cl}\left ( 3\right ) \), then we finally obtain \(\mathbb{C} _{cl}\left ( 3\right ) =A^{2}b.\) We continue doing this for each column in \(\mathbb{C} _{cl}\) converting each column to the same columns as \(\mathbb{C} \).
This shows that whatever rank \(\mathbb{C} \) had, then \(\mathbb{C} _{cl}\) will have the same rank. This is what we are asked to show.
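The rank-preservation claim can also be checked numerically. The sketch below (not part of the original write-up; the matrices and the feedback row \(k\) are arbitrary illustrative values) builds the controllability matrices of \(\left ( A,b\right ) \) and \(\left ( A+bk,b\right ) \) and compares their ranks.

```python
import numpy as np

def ctrb(A, b):
    """Controllability matrix [b, Ab, ..., A^(n-1) b]."""
    cols = [b]
    for _ in range(A.shape[0] - 1):
        cols.append(A @ cols[-1])
    return np.hstack(cols)

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))   # arbitrary system matrix
b = rng.standard_normal((n, 1))   # arbitrary input vector
k = rng.standard_normal((1, n))   # arbitrary state feedback row vector

r_open = np.linalg.matrix_rank(ctrb(A, b))
r_closed = np.linalg.matrix_rank(ctrb(A + b @ k, b))
print(r_open, r_closed)  # the two ranks agree
```

Repeating this with a non-controllable pair (rank \(m<n\)) shows the same agreement, consistent with the column-operation argument above.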
The controllability matrix \(\mathbb{C} \) is\begin{align*} \mathbb{C} & =\begin{bmatrix} b & Ab & A^{2}b & A^{3}b \end{bmatrix} \\ & =\fbox{$\begin{bmatrix} 0 & 1 & 0 & -2\\ 0 & 0 & 0 & 1\\ 1 & 0 & -2 & 0\\ 0 & 0 & 1 & 0 \end{bmatrix} $} \end{align*}
To find the rank of this matrix we exchange rows to convert it to row echelon form\[\begin{bmatrix} 0 & 1 & 0 & -2\\ 0 & 0 & 0 & 1\\ 1 & 0 & -2 & 0\\ 0 & 0 & 1 & 0 \end{bmatrix} \Rightarrow \begin{bmatrix} 1 & 0 & -2 & 0\\ 0 & 1 & 0 & -2\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1 \end{bmatrix} \] We now see that it is full rank, since there are no zero pivots. Hence the rank is \(4\). Since the rank equals \(n\), the size of \(A\), the system is controllable.
\begin{align*} p\left ( \lambda \right ) & =\left \vert \lambda I-A\right \vert \\ & =\begin{vmatrix} \lambda & 0 & -1 & 0\\ 0 & \lambda & 0 & -1\\ 2 & 1 & \lambda & 0\\ -1 & 1 & 0 & \lambda \end{vmatrix} \\ & =\lambda ^{4}+3\lambda ^{2}+3 \end{align*}
Now we solve \(p\left ( \lambda \right ) =0\). The roots of this characteristic equation (the same as the eigenvalues of \(A\)) are found to be\begin{align*} \lambda _{1} & =-0.34+1.27j\\ \lambda _{2} & =-0.34-1.27j\\ \lambda _{3} & =+0.34+1.27j\\ \lambda _{4} & =+0.34-1.27j \end{align*}
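As a numerical check (a sketch, not part of the original solution), the eigenvalues can be computed directly from \(A\), which is read off from the determinant above; its characteristic polynomial coefficients should be \(1,0,3,0,3\).

```python
import numpy as np

# System matrix read off from |lambda*I - A| above.
A = np.array([[0., 0., 1., 0.],
              [0., 0., 0., 1.],
              [-2., -1., 0., 0.],
              [1., -1., 0., 0.]])

coeffs = np.poly(A)          # characteristic polynomial coefficients
eigs = np.linalg.eigvals(A)  # its roots, i.e. the eigenvalues of A
print(np.round(coeffs + 0.0, 6))
print(np.round(np.sort_complex(eigs), 2))
```

Two of the computed eigenvalues have positive real part, matching the list above.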
We see that there are two eigenvalues whose real part is positive, hence the open loop system is unstable.
The target system is the companion form, which is \[ x^{\prime }=\overset{\tilde{A}}{\overbrace{\begin{bmatrix} 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\\ -a_{0} & -a_{1} & -a_{2} & -a_{3}\end{bmatrix} }}x+\overset{\tilde{b}}{\overbrace{\begin{bmatrix} 0\\ 0\\ 0\\ 1 \end{bmatrix} }}u \] Where the last row of \(\tilde{A}\) is formed from the coefficients of the characteristic polynomial of the original \(A\), in reverse order and with signs changed. The characteristic polynomial of the original \(A\) was found above; here it is again\begin{align*} P\left ( s\right ) & =s^{4}+a_{3}s^{3}+a_{2}s^{2}+a_{1}s+a_{0}\\ & =s^{4}+3s^{2}+3 \end{align*}
Hence \(a_{0}=3,a_{1}=0,a_{2}=3,a_{3}=0\), therefore the target system is\[ x^{\prime }=\begin{bmatrix} 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\\ -3 & 0 & -3 & 0 \end{bmatrix} x+\begin{bmatrix} 0\\ 0\\ 0\\ 1 \end{bmatrix} u \] Now we find \(\mathbb{C} ,\tilde{\mathbb{C}}\) and then find \begin{equation} T=\tilde{\mathbb{C}}\mathbb{C} ^{-1} \tag{3} \end{equation} The controllability matrix \(\mathbb{C} \) of the original system was found in part (a) as\[\mathbb{C} =\begin{bmatrix} 0 & 1 & 0 & -2\\ 0 & 0 & 0 & 1\\ 1 & 0 & -2 & 0\\ 0 & 0 & 1 & 0 \end{bmatrix} \] Hence\[\mathbb{C} ^{-1}=\begin{bmatrix} 0 & 0 & 1 & 2\\ 1 & 2 & 0 & 0\\ 0 & 0 & 0 & 1\\ 0 & 1 & 0 & 0 \end{bmatrix} \] The controllability matrix \(\tilde{\mathbb{C}}\) is given by the following\begin{align*} \tilde{\mathbb{C}} & =\begin{bmatrix} \tilde{b} & \tilde{A}\tilde{b} & \tilde{A}^{2}\tilde{b} & \tilde{A}^{3}\tilde{b}\end{bmatrix} \\ & =\begin{bmatrix} 0 & 0 & 0 & 1\\ 0 & 0 & 1 & 0\\ 0 & 1 & 0 & -3\\ 1 & 0 & -3 & 0 \end{bmatrix} \end{align*}
Now we can find \(T\) using (3)
\begin{align*} T & =\begin{bmatrix} 0 & 0 & 0 & 1\\ 0 & 0 & 1 & 0\\ 0 & 1 & 0 & -3\\ 1 & 0 & -3 & 0 \end{bmatrix}\begin{bmatrix} 0 & 0 & 1 & 2\\ 1 & 2 & 0 & 0\\ 0 & 0 & 0 & 1\\ 0 & 1 & 0 & 0 \end{bmatrix} \\ & =\fbox{$\begin{bmatrix} 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 1\\ 1 & -1 & 0 & 0\\ 0 & 0 & 1 & -1 \end{bmatrix} $} \end{align*}
To check \(T\) we apply it to \(A\) and see if we obtain \(\tilde{A}\)\begin{align*} \tilde{A} & =TAT^{-1}\\ & =\begin{bmatrix} 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 1\\ 1 & -1 & 0 & 0\\ 0 & 0 & 1 & -1 \end{bmatrix}\begin{bmatrix} 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\\ -2 & -1 & 0 & 0\\ 1 & -1 & 0 & 0 \end{bmatrix}\begin{bmatrix} 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 1\\ 1 & -1 & 0 & 0\\ 0 & 0 & 1 & -1 \end{bmatrix} ^{-1}\\ & =\begin{bmatrix} 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\\ -3 & 0 & -3 & 0 \end{bmatrix} \end{align*}
This confirms that \(T\) is correct.
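The same verification can be scripted (a sketch; the original check may well have been done by hand or in Matlab):

```python
import numpy as np

# Original system matrix A and the transformation T found above.
A = np.array([[0., 0., 1., 0.],
              [0., 0., 0., 1.],
              [-2., -1., 0., 0.],
              [1., -1., 0., 0.]])
T = np.array([[0., 1., 0., 0.],
              [0., 0., 0., 1.],
              [1., -1., 0., 0.],
              [0., 0., 1., -1.]])

# T A T^{-1} should be the companion form with last row [-3, 0, -3, 0].
A_tilde = T @ A @ np.linalg.inv(T)
print(np.round(A_tilde) + 0.0)
```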
Let the control input be \(u=\tilde{K}x+v\), where \(\tilde{K}=\begin{bmatrix} k_{0} & k_{1} & k_{2} & k_{3}\end{bmatrix} \). Therefore the closed loop system becomes\begin{align*} x^{\prime } & =\tilde{A}x+\tilde{b}\left ( \tilde{K}x+v\right ) \\ & =\overset{A_{\text{closed}}}{\overbrace{\left ( \tilde{A}+\tilde{b}\tilde{K}\right ) }}x+\tilde{b}v \end{align*}
Hence\begin{align} A_{\text{closed}} & =\begin{bmatrix} 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\\ -3 & 0 & -3 & 0 \end{bmatrix} +\begin{bmatrix} 0\\ 0\\ 0\\ 1 \end{bmatrix}\begin{bmatrix} k_{0} & k_{1} & k_{2} & k_{3}\end{bmatrix} \nonumber \\ & =\begin{bmatrix} 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\\ k_{0}-3 & k_{1} & k_{2}-3 & k_{3}\end{bmatrix} \tag{4} \end{align}
The characteristic polynomial of the closed loop \(A_{\text{closed}}\) is found from\begin{align} p\left ( \lambda \right ) & =\left \vert \lambda I-A_{\text{closed}}\right \vert \nonumber \\ & =\begin{vmatrix} \lambda & -1 & 0 & 0\\ 0 & \lambda & -1 & 0\\ 0 & 0 & \lambda & -1\\ 3-k_{0} & -k_{1} & 3-k_{2} & \lambda -k_{3}\end{vmatrix} \nonumber \\ & =\lambda ^{4}-\lambda ^{3}k_{3}+\lambda ^{2}\left ( 3-k_{2}\right ) -\lambda k_{1}+\left ( 3-k_{0}\right ) \tag{5} \end{align}
We want the above polynomial to be equal to the polynomial with the desired roots given below, where the two unstable roots of the open loop have now been replaced with the given two stable roots. The stable roots of the original system are not modified since they are already stable.\begin{align*} \lambda _{1} & =-0.34+1.27j\\ \lambda _{2} & =-0.34-1.27j\\ \lambda _{3} & =-1+1j\\ \lambda _{4} & =-1-j \end{align*}
In other words, we want to force (5) to be the same as the following desired characteristic polynomial\begin{align} p_{\text{design}}\left ( \lambda \right ) & =\left ( \lambda -\lambda _{1}\right ) \left ( \lambda -\lambda _{2}\right ) \left ( \lambda -\lambda _{3}\right ) \left ( \lambda -\lambda _{4}\right ) \nonumber \\ & =\left ( \lambda -\left ( -0.34+1.27j\right ) \right ) \left ( \lambda -\left ( -0.34-1.27j\right ) \right ) \left ( \lambda -\left ( -1+1j\right ) \right ) \left ( \lambda -\left ( -1-j\right ) \right ) \nonumber \\ & =\lambda ^{4}+2.68\lambda ^{3}+5.088\,5\lambda ^{2}+4.817\lambda +3.457 \tag{6} \end{align}
Comparing coefficients of (5) with (6) and solving for \(k_{i}\) gives\begin{align*} k_{3} & =-2.68\\ 3-k_{2} & =5.088\,5\\ k_{1} & =-4.817\\ \left ( 3-k_{0}\right ) & =3.457 \end{align*}
Hence\begin{align*} k_{3} & =-2.68\\ k_{2} & =-2.088\,5\\ k_{1} & =-4.817\\ k_{0} & =-0.457 \end{align*}
And the required gain vector is\[ \fbox{$\tilde{K}=\begin{bmatrix} -0.457 & -4.817 & -2.0885 & -2.68 \end{bmatrix} $}\]
In the above, the gain vector \(\tilde{K}\) was found for the \(\tilde{A}\)-based system (the controllable companion form); however, our original system is \(A\). The gain vector is transformed back using the \(T\) found earlier\begin{align*} \tilde{K} & =KT^{-1}\\ K & =\tilde{K}T\\ & =\begin{bmatrix} -0.457 & -4.817 & -2.0885 & -2.68 \end{bmatrix}\begin{bmatrix} 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 1\\ 1 & -1 & 0 & 0\\ 0 & 0 & 1 & -1 \end{bmatrix} \end{align*}
Hence\[ \fbox{$K=\begin{bmatrix} -2.0885 & 1.631\,5 & -2.68 & -2.137 \end{bmatrix} $}\] To verify the above, we now find the eigenvalues of \(\left [ A+bK\right ] \) and see if they match the eigenvalues we designed for under \(\tilde{A}\).\begin{align*} \left [ A+bK\right ] & =\begin{bmatrix} 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\\ -2 & -1 & 0 & 0\\ 1 & -1 & 0 & 0 \end{bmatrix} +\begin{bmatrix} 0\\ 0\\ 1\\ 0 \end{bmatrix}\begin{bmatrix} -2.0885 & 1.6315 & -2.68 & -2.137 \end{bmatrix} \\ & =\begin{bmatrix} 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\\ -4.0885 & 0.6315 & -2.68 & -2.137\\ 1 & -1 & 0 & 0 \end{bmatrix} \end{align*}
The eigenvalues of the above matrix are \(\left \{ -0.34-1.27j,-0.34+1.27j,-1-j,-1+j\right \} \), the same eigenvalues used in the design under \(\tilde{A}.\)
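This check can also be done numerically (a sketch, using the matrices above):

```python
import numpy as np

A = np.array([[0., 0., 1., 0.],
              [0., 0., 0., 1.],
              [-2., -1., 0., 0.],
              [1., -1., 0., 0.]])
b = np.array([[0.], [0.], [1.], [0.]])
K = np.array([[-2.0885, 1.6315, -2.68, -2.137]])

# Closed-loop eigenvalues should be -1 +/- j and -0.34 +/- 1.27j.
eigs = np.linalg.eigvals(A + b @ K)
print(np.round(np.sort_complex(eigs), 2))
```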
Let the first system be\begin{align} x_{1}^{\prime } & =A_{1}x_{1}+B_{1}u\nonumber \\ y & =C_{1}x_{1}+D_{1}u \tag{1} \end{align}
And the second system be\begin{align*} x_{2}^{\prime } & =A_{2}x_{2}+B_{2}u\\ y & =C_{2}x_{2}+D_{2}u \end{align*}
And assume there exists a non-singular constant matrix \(T\) such that \(x_{2}=Tx_{1}\). We need to find \(T\). Applying this transformation to (1) gives the relations\begin{align*} A_{2} & = TA_{1}T^{-1}\\ B_{2} & = TB_{1}\\ C_{2} & = C_{1}T^{-1}\\ D_{2} & = D_{1} \end{align*}
Now, let \(\Theta _{2}\) be the observability matrix for the second system, given by \[ \Theta _{2}=\begin{pmatrix} C_{2}\\ C_{2}A_{2}\\ C_{2}A_{2}^{2}\\ \vdots \\ C_{2}A_{2}^{n-1}\end{pmatrix} \] Applying the above transformations to \(\Theta _{2}\) results in\[ \Theta _{2}=\begin{pmatrix} C_{2}\\ C_{2}A_{2}\\ C_{2}A_{2}^{2}\\ \vdots \\ C_{2}A_{2}^{n-1}\end{pmatrix} =\begin{pmatrix} C_{1}T^{-1}\\ \left ( C_{1}T^{-1}\right ) \left ( TA_{1}T^{-1}\right ) \\ \left ( C_{1}T^{-1}\right ) \left ( TA_{1}^{2}T^{-1}\right ) \\ \vdots \\ \left ( C_{1}T^{-1}\right ) \left ( TA_{1}^{n-1}T^{-1}\right ) \end{pmatrix} =\begin{pmatrix} C_{1}T^{-1}\\ C_{1}A_{1}T^{-1}\\ C_{1}A_{1}^{2}T^{-1}\\ \vdots \\ C_{1}A_{1}^{n-1}T^{-1}\end{pmatrix} =\begin{pmatrix} C_{1}\\ C_{1}A_{1}\\ C_{1}A_{1}^{2}\\ \vdots \\ C_{1}A_{1}^{n-1}\end{pmatrix} T^{-1}=\Theta _{1}T^{-1}\] Therefore\begin{align*} \Theta _{2} & =\Theta _{1}T^{-1}\\ \Theta _{2}T & =\Theta _{1} \end{align*}
Hence\[ \fbox{$T=\Theta _2^{-1}\Theta _1$} \]
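A quick numerical sanity check of this result (a sketch; the random system and transformation below are illustrative, not from the problem): build a second system from a known \(T\), then recover \(T\) from the two observability matrices.

```python
import numpy as np

def obsv(A, C):
    """Observability matrix [C; CA; ...; CA^(n-1)]."""
    rows = [C]
    for _ in range(A.shape[0] - 1):
        rows.append(rows[-1] @ A)
    return np.vstack(rows)

rng = np.random.default_rng(1)
n = 3
A1 = rng.standard_normal((n, n))
C1 = rng.standard_normal((1, n))
T_true = rng.standard_normal((n, n))  # generically invertible

# Build the transformed (second) system from T_true.
A2 = T_true @ A1 @ np.linalg.inv(T_true)
C2 = C1 @ np.linalg.inv(T_true)

# Recover T = Theta_2^{-1} Theta_1 and compare with T_true.
T_rec = np.linalg.inv(obsv(A2, C2)) @ obsv(A1, C1)
print(np.allclose(T_rec, T_true, atol=1e-6))
```

Note this requires both systems to be observable, which holds generically for random matrices.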
The system is given by \(x^{\prime }=Ax;y=Cx\) \begin{align*} \begin{pmatrix} x_{1}^{\prime }\\ x_{2}^{\prime }\end{pmatrix} & =\overset{A}{\overbrace{\begin{pmatrix} 0 & \omega _{0}\\ -\omega _{0} & 0 \end{pmatrix} }}\begin{pmatrix} x_{1}\\ x_{2}\end{pmatrix} \\ y & =\overset{C}{\overbrace{\begin{pmatrix} 1 & 0 \end{pmatrix} }}\begin{pmatrix} x_{1}\\ x_{2}\end{pmatrix} \end{align*}
The observer state estimator is given by \(\hat{x}^{\prime }=A\hat{x}+L\left ( y-\hat{y}\right ) \)
This diagram shows the flow for the observer
In our case there is no input \(u(t)\), since it is a free system, and the diagram simplifies accordingly
And the goal is to determine \(L\) based on eigenvalue requirements. In the above diagram, \(y=Cx\) and \(\hat{y}=C\hat{x}\).
Now, let the error in state estimation be \(e=\left ( \hat{x}-x\right ) \). Since \(\hat{x}^{\prime }=A\hat{x}+L\left ( Cx-C\hat{x}\right ) \) and \(x^{\prime }=Ax\), subtracting gives the error dynamics \[ e^{\prime }=\left ( A-LC\right ) e \] We need to determine \(L\) such that the eigenvalues of \(\left ( A-LC\right ) \) are \(\lambda _{1}=-1\) and \(\lambda _{2}=-2\). Before showing the design steps using the actual data given in the problem, the design steps are given below for the general case.
The first step is to check that \(\left ( A,C\right ) \) is observable. The observability matrix is \[\begin{pmatrix} C\\ CA \end{pmatrix} =\begin{pmatrix} 1 & 0\\ 0 & \omega _{0}\end{pmatrix} \] Since the determinant is \(\omega _{0}\), which is nonzero (assuming \(\omega _{0}\neq 0\)), the matrix is invertible and full rank. Hence \(\left ( A,C\right ) \) is observable, and therefore \(\left ( A^{T},C^{T}\right ) \) is a controllable pair. Let us call them \(\left ( A_{o},B_{o}\right ) \) so that we do not have to carry the transpose through the notation. Hence \(A_{o}=A^{T}=\begin{pmatrix} 0 & -\omega _{0}\\ \omega _{0} & 0 \end{pmatrix} \) and \(B_{o}=C^{T}=\begin{pmatrix} 1\\ 0 \end{pmatrix} \). Therefore we can design \(A_{o}+B_{o}K\) as we did for state feedback to find \(K\), then determine \(L\) using \(L=-K^{T}.\) The controllability matrix for \(\left ( A_{o},B_{o}\right ) \) is\begin{align*} \mathbb{C} & =\begin{pmatrix} B_{o} & A_{o}B_{o}\end{pmatrix} \\ & =\begin{pmatrix} 1 & 0\\ 0 & \omega _{0}\end{pmatrix} \end{align*}
And the characteristic equation is \begin{align*} \left \vert sI-A^{T}\right \vert & =0\\\begin{vmatrix} s & \omega _{0}\\ -\omega _{0} & s \end{vmatrix} & =0\\ s^{2}+\omega _{0}^{2} & =0 \end{align*}
Hence, the controllability companion form is\begin{align*} \tilde{A}_{o} & =\begin{pmatrix} 0 & 1\\ -\alpha _{0} & -\alpha _{1}\end{pmatrix} =\begin{pmatrix} 0 & 1\\ -\omega _{0}^{2} & 0 \end{pmatrix} \\ \tilde{B}_{o} & =\begin{pmatrix} 0\\ 1 \end{pmatrix} \end{align*}
Hence the controllability matrix of the companion form is\begin{align*} \tilde{\mathbb{C}} & =\begin{pmatrix} \tilde{B}_{o} & \tilde{A}_{o}\tilde{B}_{o}\end{pmatrix} \\ & =\begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix} \end{align*}
Therefore the transformation operator \(T\) is \begin{align*} T & =\tilde{\mathbb{C}}\mathbb{C} ^{-1}\\ & =\begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix}\begin{pmatrix} 1 & 0\\ 0 & \omega _{0}\end{pmatrix} ^{-1}=\begin{pmatrix} 0 & \frac{1}{\omega _{0}}\\ 1 & 0 \end{pmatrix} \end{align*}
Now we want\begin{align*} A_{\text{closed}} & =\tilde{A}_{o}+\tilde{B}_{o}\tilde{K}\\ & =\begin{pmatrix} 0 & 1\\ -\omega _{0}^{2} & 0 \end{pmatrix} +\begin{bmatrix} 0\\ 1 \end{bmatrix}\begin{bmatrix} k_{0} & k_{1}\end{bmatrix} \\ & =\begin{bmatrix} 0 & 1\\ k_{0}-\omega _{0}^{2} & k_{1}\end{bmatrix} \end{align*}
It has the following characteristic polynomial\[ p\left ( \lambda \right ) =\lambda ^{2}-\lambda k_{1}+\left ( \omega _{0}^{2}-k_{0}\right ) \] The desired \(p^{\ast }\left ( \lambda \right ) =\left ( \lambda +1\right ) \left ( \lambda +2\right ) =\allowbreak \lambda ^{2}+3\lambda +2\). Comparing coefficients of this polynomial to the above gives\begin{align*} \,k_{1} & =-3\\ \omega _{0}^{2}-k_{0} & =2\\ k_{0} & =\omega _{0}^{2}-2 \end{align*}
Hence, the gain vector is found to be \[ \tilde{K}=\begin{bmatrix} k_{0} & k_{1}\end{bmatrix} =\begin{bmatrix} \omega _{0}^{2}-2 & -3 \end{bmatrix} \] The above \(\tilde{K}\) was designed for the controllable companion form. We need to transform it back to the original \(\left ( A^{T},C^{T}\right ) \) system using \(T\) found earlier\begin{align*} K & =\tilde{K}T\\ & =\begin{bmatrix} \omega _{0}^{2}-2 & -3 \end{bmatrix}\begin{pmatrix} 0 & \frac{1}{\omega _{0}}\\ 1 & 0 \end{pmatrix} \\ & =\begin{pmatrix} -3 & \frac{1}{\omega _{0}}\left ( \omega _{0}^{2}-2\right ) \end{pmatrix} \end{align*}
Therefore, the observer gain vector is found as \begin{align*} L & =-K^{T}\\ & =-\begin{pmatrix} -3 & \frac{1}{\omega _{0}}\left ( \omega _{0}^{2}-2\right ) \end{pmatrix} ^{T}\\ & =\begin{pmatrix} 3\\ -\frac{1}{\omega _{0}}\left ( \omega _{0}^{2}-2\right ) \end{pmatrix} \end{align*}
Before continuing, let us verify the eigenvalues of \(\left ( A-LC\right ) \) are where they should be now.\begin{align*} \left ( A-LC\right ) & =\begin{pmatrix} 0 & \omega _{0}\\ -\omega _{0} & 0 \end{pmatrix} -\begin{pmatrix} 3\\ -\frac{1}{\omega _{0}}\left ( \omega _{0}^{2}-2\right ) \end{pmatrix}\begin{pmatrix} 1 & 0 \end{pmatrix} \\ & =\begin{pmatrix} -3 & \omega _{0}\\ \frac{1}{\omega _{0}}\left ( \omega _{0}^{2}-2\right ) -\omega _{0} & 0 \end{pmatrix} \end{align*}
The eigenvalues are \(-1,-2\). Verified.
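This verification can also be scripted for a sample value of \(\omega_0\) (a sketch; any nonzero \(\omega_0\) gives the same eigenvalues, since the characteristic polynomial of \(A-LC\) is \(\lambda^2+3\lambda+2\) identically):

```python
import numpy as np

w0 = 10.0  # sample frequency; the eigenvalues do not depend on w0
A = np.array([[0., w0], [-w0, 0.]])
C = np.array([[1., 0.]])
L = np.array([[3.], [-(w0**2 - 2) / w0]])

eigs = np.sort(np.linalg.eigvals(A - L @ C).real)
print(eigs)  # [-2. -1.]
```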
Now we continue the observer design. The observer is the following system\begin{align*} \hat{x}^{\prime } & =A\hat{x}+L\left ( y-\hat{y}\right ) \\ & =A\hat{x}+L\left ( y-C\hat{x}\right ) \end{align*}
Or\begin{align*} \begin{pmatrix} \hat{x}_{1}^{\prime }\\ \hat{x}_{2}^{\prime }\end{pmatrix} & =\overset{A}{\overbrace{\begin{pmatrix} 0 & \omega _{0}\\ -\omega _{0} & 0 \end{pmatrix} }}\begin{pmatrix} \hat{x}_{1}\\ \hat{x}_{2}\end{pmatrix} +\begin{pmatrix} 3\\ -\frac{1}{\omega _{0}}\left ( \omega _{0}^{2}-2\right ) \end{pmatrix} \left ( \overset{y}{\overbrace{\begin{pmatrix} 1 & 0 \end{pmatrix}\begin{pmatrix} x_{1}\\ x_{2}\end{pmatrix} }}-\overset{\hat{y}}{\overbrace{\begin{pmatrix} 1 & 0 \end{pmatrix}\begin{pmatrix} \hat{x}_{1}\\ \hat{x}_{2}\end{pmatrix} }}\right ) \\ & =\begin{pmatrix} 0 & \omega _{0}\\ -\omega _{0} & 0 \end{pmatrix}\begin{pmatrix} \hat{x}_{1}\\ \hat{x}_{2}\end{pmatrix} +\begin{pmatrix} 3\\ -\frac{1}{\omega _{0}}\left ( \omega _{0}^{2}-2\right ) \end{pmatrix} \left ( x_{1}-\hat{x}_{1}\right ) \\ & =\begin{pmatrix} 0 & \omega _{0}\\ -\omega _{0} & 0 \end{pmatrix}\begin{pmatrix} \hat{x}_{1}\\ \hat{x}_{2}\end{pmatrix} +\begin{pmatrix} 3\left ( x_{1}-\hat{x}_{1}\right ) \\ -\frac{1}{\omega _{0}}\left ( \omega _{0}^{2}-2\right ) \left ( x_{1}-\hat{x}_{1}\right ) \end{pmatrix} \end{align*}
Therefore \begin{align*} \hat{x}_{1}^{\prime } & =\omega _{0}\hat{x}_{2}+L\left ( 1\right ) \left ( x_{1}-\hat{x}_{1}\right ) \\ \hat{x}_{2}^{\prime } & =-\omega _{0}\hat{x}_{1}+L\left ( 2\right ) \left ( x_{1}-\hat{x}_{1}\right ) \end{align*}
Where \(L\left ( 1\right ) =3\) and \(L\left ( 2\right ) =-\frac{1}{\omega _{0}}\left ( \omega _{0}^{2}-2\right ) \). In part (d) we will change these values to tune the observer. A Matlab script was written to generate \(L\) from different design eigenvalue locations.
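The convergence implied by these error dynamics can be sketched numerically (an illustrative forward-Euler integration, not the Simulink model used in the original write-up), assuming \(\omega_0=1\) and an initial estimation error of \((1,1)\):

```python
import numpy as np

w0 = 1.0
A = np.array([[0., w0], [-w0, 0.]])
C = np.array([[1., 0.]])
L = np.array([[3.], [-(w0**2 - 2) / w0]])
M = A - L @ C            # error dynamics matrix, eigenvalues -1 and -2

e = np.array([1., 1.])   # initial estimation error e = x_hat - x
dt = 1e-3
for _ in range(10_000):  # integrate 10 seconds with forward Euler
    e = e + dt * (M @ e)

print(np.abs(e).max())   # essentially zero: the estimate has converged
```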
The system we are given is a free system, which means it is driven only by its initial conditions. Therefore the model for the plant itself is the following, where \(\omega _{0}=1\) was used to test the free system before adding the observer. The states \(x_{1},x_{2}\) were both initialized to \(1\) in this example.
Now we will add the observer designed in part (a) and compare the observer's state estimate to the actual \(x_{1}\) of the plant. The model is the following
Tracking of \(x_{1}\): \(\omega _{0}\) is given the values \(\left \{ 1,10,100,1000\right \} \) rad/sec, and a plot showing \(x_{1}\left ( t\right ) \) and \(\hat{x}_{1}\left ( t\right ) \) together is displayed to see how well the observer estimates the true \(x_{1}\left ( t\right ) \) as the frequency changes. The result is the following
Tracking of \(x_{2}\left ( t\right ) \): a plot showing the true \(x_{2}\) and \(\hat{x}_{2}\) is now given, similar to the above. The model was changed slightly to add a sink to plot \(x_{2},\hat{x}_{2}\), as follows
Now the frequency was set to \(\omega _{0}=1,10,100,1000\) rad/sec and the simulation was run. The following is the result and the observations
The model was changed slightly to add an XY graph as follows
The result of the simulation to generate the phase plots is shown below
A small Matlab program was written to tune the observer. This was done by changing the locations of the design eigenvalues and generating a new observer gain vector \(L\) for each new set of eigenvalues, then using the new \(L\) in the Simulink model in part (c) to see the effect on the phase plot. The goal is to obtain a straight line in the phase plane, since a straight line indicates that \(\hat{x}_{1}\) is tracking \(x_{1}\) well. A few eigenvalue pairs were tried. The table below summarizes each pair of eigenvalues and the corresponding \(L\) vector generated; we show one final result, which was found to be the best of those tried
eigenvalues | \(L\) generated by the design script |
\(\left [ -1,-2\right ] \) (original eigenvalues) | \(\begin{pmatrix} 3\\ -\frac{1}{\omega _{0}}\left ( \omega _{0}^{2}-2\right ) \end{pmatrix} \) |
\(\left [ -1.5,-2\right ] \) | \(\begin{pmatrix} 3.5\\ -\frac{1}{\omega _{0}}\left ( \omega _{0}^{2}-3\right ) \end{pmatrix} \) |
\(\left [ -2,-3\right ] \) | \(\begin{pmatrix} 5\\ -\frac{1}{\omega _{0}}\left ( \omega _{0}^{2}-6\right ) \end{pmatrix} \) |
\(\left [ -2.5,-3.5\right ] \) | \(\begin{pmatrix} 6\\ -\frac{1}{\omega _{0}}\left ( \omega _{0}^{2}-8.75\right ) \end{pmatrix} \) |
\(\left [ -3,-4\right ] \) | \(\begin{pmatrix} 7\\ -\frac{1}{\omega _{0}}\left ( \omega _{0}^{2}-12\right ) \end{pmatrix} \) |
\(\left [ -4,-5\right ] \) | \(\begin{pmatrix} 9\\ -\frac{1}{\omega _{0}}\left ( \omega _{0}^{2}-20\right ) \end{pmatrix} \) |
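The tuning script itself was written in Matlab; the sketch below reproduces its logic in Python (an assumption on my part), using the coefficient matching derived in part (a): for desired eigenvalues \(\lambda_a,\lambda_b\) the desired polynomial is \(\lambda^2+c_1\lambda+c_0\) with \(c_1=-(\lambda_a+\lambda_b)\) and \(c_0=\lambda_a\lambda_b\), giving \(L(1)=c_1\) and \(L(2)=-(\omega_0^2-c_0)/\omega_0\).

```python
import numpy as np

def observer_gain(la, lb, w0):
    """Observer gain L placing the eigenvalues of (A - L C) at la, lb."""
    c1 = -(la + lb)   # coefficient of lambda in the desired polynomial
    c0 = la * lb      # constant coefficient
    return np.array([[c1], [-(w0**2 - c0) / w0]])

w0 = 1.0  # sample frequency used for the check
A = np.array([[0., w0], [-w0, 0.]])
C = np.array([[1., 0.]])

# Reproduce the table above and confirm the eigenvalue placement.
for la, lb in [(-1, -2), (-1.5, -2), (-2, -3), (-2.5, -3.5), (-3, -4), (-4, -5)]:
    L = observer_gain(la, lb, w0)
    eigs = np.sort(np.linalg.eigvals(A - L @ C).real)
    print((la, lb), L.ravel(), eigs)
```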
Using the eigenvalues at \(\left [ -4,-5\right ] \), the initial overshoot was found to become small, most noticeably at high frequencies. Here is the phase plot of \(x_{1}\) vs. \(\hat{x}_{1}\) using the last entry in the above table. To make it easier to compare with the original eigenvalue design, a plot of \(x_{1},\hat{x}_{1}\) vs. time was also added. This plot shows more clearly that making the eigenvalues more negative made the convergence faster.
An example of its use is
There are two main reasons, as was explained in class. One is that we do not know the initial conditions the plant starts at, and these could change each time. But most importantly, the observer could be started at any time during the operation of the overall system; it does not have to be started at the same instant as the plant. Since the observer could start at a later time, the initial conditions the plant was in have been lost and are no longer available to the observer, so there will always be some initial settling time. Having different initial conditions for the plant and the observer is therefore the more common case.