State space realization. This is fundamental to the state space approach.
Assume we do not get to look inside the system, but we still want to model it. When we model the system, we have an idea of what is relevant: the input and the output. From the input/output point of view, it does not matter what the internals of the system are. The only constraint is that the internal states remain bounded.
Given a system \(\Sigma (A,B,C,D)\), we write \[ H_\ast (s)= C (s I-A)^{-1} B+D \] This is called the transfer function of the system. Let \(H(s)\) be some given transfer function matrix, of dimensions \(r \times m\), where \(r\) is the number of outputs and \(m\) is the number of inputs. Each entry in this matrix is a ratio of two polynomials in \(s\). We say that \(\Sigma (A,B,C,D)\) is a realization of \(H(s)\) if \(H_\ast (s) = H(s)\).
In other words, \(H(s)\) is realizable if we can find, or construct, a \(\Sigma (A,B,C,D)\) such that \(H_\ast (s) = H(s)\).
When is \(H(s)\) realizable? That is, given some \(H(s)\), can we find a \(\Sigma (A,B,C,D)\) whose transfer function is this \(H(s)\)?
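To make the formula \(H_\ast (s)= C (s I-A)^{-1} B+D\) concrete, here is a minimal numerical sketch (the matrices and the sample frequency are arbitrary, chosen only for illustration) that evaluates the transfer function of a small system at one point:

```python
# Sketch (illustrative system, not from the notes): evaluate
# H(s) = C (sI - A)^{-1} B + D at a single complex frequency s.
import numpy as np

def transfer_function(A, B, C, D, s):
    """Evaluate C (sI - A)^{-1} B + D at the complex number s."""
    n = A.shape[0]
    return C @ np.linalg.solve(s * np.eye(n) - A, B) + D

# Arbitrary 2-state SISO example.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

print(transfer_function(A, B, C, D, s=1.0 + 2.0j))
```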
Is the set of realizable transfer functions common or rare? If we can realize \(H(s)\), then let \(\Sigma\) be a realization. Now do the "Gedanken" experiment. Pick any nonsingular \(n \times n\) matrix \(T\), and form\begin{align*} \tilde{A} &= TAT^{-1}\\ \tilde{B} &= TB\\ \tilde{C} &= CT^{-1}\\ \tilde{D} &= D \end{align*}
Hence \begin{align*} \tilde{H}_\ast (s) &= \tilde{C}( sI- \tilde{A} )^{-1} \tilde{B} + \tilde{D}\\ &= CT^{-1} (s I-TAT^{-1})^{-1} TB + D\\ &= CT^{-1} (s ITT^{-1} - TAT^{-1})^{-1} TB + D\\ &= CT^{-1} (T ( sI - A) T^{-1})^{-1}TB + D\\ &= CT^{-1}T ( sI-A)^{-1} T^{-1} T B + D \\ &= C( sI-A)^{-1} B + D \end{align*}
So we see that \((\tilde{A},\tilde{B},\tilde{C},\tilde{D})\) has the same transfer function as \((A,B,C,D)\). Hence if one realization exists, then there are infinitely many realizations, which can be found using the \(T\) transformation as above.
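A quick numerical check of this fact (a sketch with randomly generated matrices, not taken from the lecture): the transformed quadruple gives the same value of the transfer function at a sample frequency.

```python
# Sketch: verify that (TAT^{-1}, TB, CT^{-1}, D) has the same transfer
# function as (A, B, C, D) at one sample frequency.
import numpy as np

rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))
D = rng.standard_normal((1, 1))
T = rng.standard_normal((n, n))       # a random T is almost surely nonsingular
Ti = np.linalg.inv(T)

At, Bt, Ct, Dt = T @ A @ Ti, T @ B, C @ Ti, D

def H(A, B, C, D, s):
    return C @ np.linalg.solve(s * np.eye(A.shape[0]) - A, B) + D

s = 0.7 + 1.3j
print(np.allclose(H(A, B, C, D, s), H(At, Bt, Ct, Dt, s)))   # True
```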
Reader: How does the state \(x\) relate to the state \(\tilde{x}\) under \(T\)?
Let \(\tilde{x}=Tx\), then\begin{align*} \tilde{x}^{\prime } & =Tx^{\prime }\\ & =T\left ( Ax+Bu\right ) \\ & =T\left ( AT^{-1}\tilde{x}+T^{-1}\tilde{B}u\right ) \\ & =TAT^{-1}\tilde{x}+TT^{-1}\tilde{B}u\\ & =\tilde{A}\tilde{x}+\tilde{B}u \end{align*}
So the new system describes the same dynamics as the original, only in transformed coordinates. The big question is: when is \(H(s)\) realizable? We start with SISO; after that we will talk about MIMO. Let \(H(s)=\frac{N(s)}{D (s)}\) be some given \(H(s)\) that we want to realize. Define a proper T.F. as one which has \(\deg \left ( N\left ( s\right ) \right ) \leq \deg \left ( D\left ( s\right ) \right ) \). Define a strictly proper T.F. as one which has \(\deg \left ( N\left ( s\right ) \right ) <\deg \left ( D(s)\right ) \).
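As a small illustration (the helper name `classify` is mine, and the coefficient lists are assumed to carry no leading zeros), properness is just a comparison of polynomial degrees:

```python
# Sketch: classify a SISO transfer function N(s)/D(s) by degree.
# Coefficients are listed from highest power down; a strictly proper
# T.F. is of course also proper, the labels below just distinguish the cases.
def classify(num, den):
    dn, dd = len(num) - 1, len(den) - 1
    if dn < dd:
        return "strictly proper"
    if dn == dd:
        return "proper"
    return "improper"

# H(s) = (4s^3 + 3s^2 + 2s + 4) / (s^3 + 6s^2 - 2s - 7), the example below.
print(classify([4, 3, 2, 4], [1, 6, -2, -7]))   # proper
```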
Every proper T.F. is realizable. Here the word proper is important. Is the improper case important? An example was given of an improper system (for instance a differentiator, \(H(s)=s\)) where the input is a step function and the output is a Dirac delta \(\delta (t)\).
Theorem 1: If \(H(s)\) is proper, then it is realizable. Example: \(H(s) = \frac{4s^{3}+3s^{2}+2s+4}{s^{3}+6s^{2}-2s-7}\); the associated ODE is \(y'''+6y''-2 y'-7y = 4u'''+3u''+2u'+4u\).
A recipe to realize \(H(s)\): if \(H(s)\) is proper, make it strictly proper by long division and write it as \(H_{proper}(s) =\gamma +H_{strict}(s)\). Doing the long division gives
\[ H = 4 + \frac{-21 s^2+ 10 s + 32}{s^3+ 6 s^2 - 2 s -7} \]
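This division can be checked numerically; the sketch below (using `numpy.polydiv`, coefficients in descending powers) recovers \(\gamma = 4\) and the strictly proper remainder.

```python
# Sketch: polynomial long division of the example H(s).
import numpy as np

num = [4, 3, 2, 4]        # 4s^3 + 3s^2 + 2s + 4
den = [1, 6, -2, -7]      # s^3 + 6s^2 - 2s - 7

q, r = np.polydiv(num, den)
print(q)   # [4.]                -> gamma = 4
print(r)   # [-21.  10.  32.]    -> -21 s^2 + 10 s + 32
```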
Reader: Verify the following is a realization of the above \(H(s)\):
\(A= \begin{pmatrix} 0 & 1 & 0\\ 0 & 0 & 1\\ 7 & 2 & -6 \end{pmatrix} ,B=\begin{pmatrix} 0\\ 0\\ 1 \end{pmatrix} ,C=\begin{pmatrix} 32 & 10 & -21 \end{pmatrix} ,D=\left [ 4\right ] \)
We generalize the above to \(H(s) =\gamma +\frac{\beta _{2}s^{2}+\beta _{1}s+\beta _{0}}{s^{3}+\alpha _{2}s^{2}+\alpha _{1}s+\alpha _{0}}\). Notice we always keep the leading coefficient of the denominator as unity. A realization of this is \(A=\begin{pmatrix} 0 & 1 & 0\\ 0 & 0 & 1\\ -\alpha _{0} & -\alpha _{1} & -\alpha _{2}\end{pmatrix} ,B=\begin{pmatrix} 0\\ 0\\ 1 \end{pmatrix} ,C=\begin{pmatrix} \beta _{0} & \beta _{1} & \beta _{2}\end{pmatrix} ,D=\left [ \gamma \right ] \)
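The sketch below (the helper `companion_realization` is my own name for this construction) builds the companion-form realization above and checks, at one sample frequency, that it reproduces the example \(H(s)\) from the long division.

```python
# Sketch: companion-form realization of
# H(s) = gamma + (b2 s^2 + b1 s + b0) / (s^3 + a2 s^2 + a1 s + a0),
# checked against the example H(s) at a sample frequency.
import numpy as np

def companion_realization(alpha, beta, gamma):
    """alpha = [a0, a1, a2], beta = [b0, b1, b2] (ascending powers)."""
    n = len(alpha)
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)          # superdiagonal of ones
    A[-1, :] = -np.asarray(alpha)       # last row: -a0, -a1, -a2
    B = np.zeros((n, 1)); B[-1, 0] = 1.0
    C = np.asarray(beta, dtype=float).reshape(1, n)
    D = np.array([[float(gamma)]])
    return A, B, C, D

# Denominator s^3 + 6s^2 - 2s - 7  ->  alpha = [-7, -2, 6]
A, B, C, D = companion_realization(alpha=[-7, -2, 6], beta=[32, 10, -21], gamma=4)

s = 1.0 + 1.0j
H_state = (C @ np.linalg.solve(s * np.eye(3) - A, B) + D)[0, 0]
H_rational = (4*s**3 + 3*s**2 + 2*s + 4) / (s**3 + 6*s**2 - 2*s - 7)
print(np.isclose(H_state, H_rational))   # True
```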
Reader: Propose a realization for the general case. Mason's gain rule is used to generalize to the \(n \times n\) case.
HW 1 assigned.