If the transfer function \(H\left ( s\right ) \) is proper, then it is realizable. (SISO for now.) Reminder: we still need to show this for the general case.
Proof: We must find \({\displaystyle \sum } \left ( A,B,C,D\right ) \) such that \(H_{\ast }\left ( s\right ) =H\left ( s\right ) \), where \(H_{\ast }\left ( s\right ) \) is the transfer function obtained from \({\displaystyle \sum } \left ( A,B,C,D\right ) \) and \(H\left ( s\right ) \) is the transfer function we are given. We propose \(A=\begin{pmatrix} 0 & 1 & 0 & \cdots & 0\\ 0 & 0 & 1 & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & \cdots & 0 & 0 & 1\\ -\alpha _{0} & -\alpha _{1} & -\alpha _{2} & \cdots & -\alpha _{n-1}\end{pmatrix} \). Let \(H\left ( s\right ) =\gamma +\frac{\beta _{n-1}s^{n-1}+\beta _{n-2}s^{n-2}+\cdots +\beta _{0}}{s^{n}+\alpha _{n-1}s^{n-1}+\cdots +\alpha _{0}}\) and propose \(B=\begin{pmatrix} 0\\ 0\\ \vdots \\ 0\\ 1 \end{pmatrix} \), \(C=\begin{pmatrix} \beta _{0} & \beta _{1} & \cdots & \beta _{n-1}\end{pmatrix} \) and \(D=\left [ \gamma \right ] \). Now we need to show that \(H_{\ast }\left ( s\right ) =H\left ( s\right ) \) using Mason's rule.
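The following Python sketch (not part of the proof; the coefficients are assumed purely for illustration) builds this companion-form realization for an example proper \(H(s)\) with \(n=3\) and spot-checks numerically that \(C\left ( sI-A\right ) ^{-1}B+D\) agrees with \(H(s)\):

```python
import numpy as np

# Assumed example coefficients for a proper H(s) with n = 3:
#   H(s) = gamma + (b2 s^2 + b1 s + b0) / (s^3 + a2 s^2 + a1 s + a0)
a0, a1, a2 = 6.0, 11.0, 6.0      # alpha_i (denominator)
b0, b1, b2 = 1.0, 2.0, 3.0       # beta_i  (numerator)
gamma = 0.5                      # direct feedthrough

# Proposed companion-form realization (A, B, C, D)
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-a0, -a1, -a2]])
B = np.array([[0.0], [0.0], [1.0]])
C = np.array([[b0, b1, b2]])
D = np.array([[gamma]])

def H(s):
    """The given transfer function H(s)."""
    return gamma + (b2*s**2 + b1*s + b0) / (s**3 + a2*s**2 + a1*s + a0)

def H_star(s):
    """Transfer function of the realization: C (sI - A)^{-1} B + D."""
    return (C @ np.linalg.inv(s*np.eye(3) - A) @ B + D)[0, 0]

# Spot-check H_*(s) = H(s) at a few test points
for s in [1.0 + 0.0j, 2.0 + 3.0j, -0.5 + 1.0j]:
    assert np.isclose(H(s), H_star(s))
print("H_*(s) matches H(s) at the test points")
```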
Reader: Use Mason's rule to show that this realization works. Now what about MIMO? Assume we are given \(H\left ( s\right ) =\begin{pmatrix} H_{11}\left ( s\right ) & H_{12}\left ( s\right ) \\ H_{21}\left ( s\right ) & H_{22}\left ( s\right ) \end{pmatrix} \). We can realize each entry on its own, but the realizations then need to be "patched" together to show that the matrix as a whole works. Example using \(2\times 2\).
If each \(H_{ij}\left ( s\right ) \) is proper, let \({\displaystyle \sum \limits _{ij}} =\left ( A_{ij},B_{ij},C_{ij},D_{ij}\right ) \) be a realization of \(H_{ij}\left ( s\right ) \). Note that each \(A_{ij}\) can be a different size. Propose \(A=\begin{pmatrix} A_{11} & 0 & 0 & 0\\ 0 & A_{12} & 0 & 0\\ 0 & 0 & A_{21} & 0\\ 0 & 0 & 0 & A_{22}\end{pmatrix} \), \(B=\begin{pmatrix} B_{11} & 0\\ 0 & B_{12}\\ B_{21} & 0\\ 0 & B_{22}\end{pmatrix} \) (each block \(B_{ij}\) sits in the column of the input \(j\) that drives it), \(C=\begin{pmatrix} C_{11} & C_{12} & 0 & 0\\ 0 & 0 & C_{21} & C_{22}\end{pmatrix} \) and \(D=\begin{pmatrix} D_{11} & D_{12}\\ D_{21} & D_{22}\end{pmatrix} \). Now we claim \(\left ( A,B,C,D\right ) \) is a realization of \(\begin{pmatrix} H_{11}\left ( s\right ) & H_{12}\left ( s\right ) \\ H_{21}\left ( s\right ) & H_{22}\left ( s\right ) \end{pmatrix} \). We need to calculate \begin{align*} H_{\ast }\left ( s\right ) & =C\left ( sI-A\right ) ^{-1}B+D\\ & =\begin{pmatrix} C_{11} & C_{12} & 0 & 0\\ 0 & 0 & C_{21} & C_{22}\end{pmatrix} \left ( sI-\begin{pmatrix} A_{11} & 0 & 0 & 0\\ 0 & A_{12} & 0 & 0\\ 0 & 0 & A_{21} & 0\\ 0 & 0 & 0 & A_{22}\end{pmatrix} \right ) ^{-1}\begin{pmatrix} B_{11} & 0\\ 0 & B_{12}\\ B_{21} & 0\\ 0 & B_{22}\end{pmatrix} +\begin{pmatrix} D_{11} & D_{12}\\ D_{21} & D_{22}\end{pmatrix} \end{align*}
Reader: The above reduces to\[ H_{\ast }\left ( s\right ) =\begin{pmatrix} C_{11}\left ( sI-A_{11}\right ) ^{-1}B_{11}+D_{11} & C_{12}\left ( sI-A_{12}\right ) ^{-1}B_{12}+D_{12}\\ C_{21}\left ( sI-A_{21}\right ) ^{-1}B_{21}+D_{21} & C_{22}\left ( sI-A_{22}\right ) ^{-1}B_{22}+D_{22}\end{pmatrix} \] What about other dimensions?
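As a numerical sanity check of this patching construction (again only a sketch: the entries \(H_{ij}(s)\) and their coefficients below are assumed for illustration, and `siso_canonical` is just the companion-form recipe from the SISO case), the blocks can be assembled and compared entrywise:

```python
import numpy as np
from scipy.linalg import block_diag

def siso_canonical(alphas, betas, gamma):
    """Companion-form realization of one proper SISO entry H_ij(s)."""
    n = len(alphas)
    A = np.eye(n, k=1)
    A[-1, :] = -np.asarray(alphas, dtype=float)
    B = np.zeros((n, 1)); B[-1, 0] = 1.0
    C = np.asarray(betas, dtype=float).reshape(1, n)
    return A, B, C, np.array([[gamma]])

# Assumed example entries H_ij(s): (alphas, betas, gamma)
A11, B11, C11, D11 = siso_canonical([2.0, 3.0], [1.0, 0.0], 0.0)
A12, B12, C12, D12 = siso_canonical([5.0],      [4.0],      1.0)
A21, B21, C21, D21 = siso_canonical([1.0, 2.0], [0.0, 1.0], 0.0)
A22, B22, C22, D22 = siso_canonical([3.0],      [2.0],      0.0)

Z = np.zeros   # zero block of a given shape

# Patch the four SISO realizations together as proposed above
A = block_diag(A11, A12, A21, A22)
B = np.block([[B11,       Z((2, 1))],
              [Z((1, 1)), B12      ],
              [B21,       Z((2, 1))],
              [Z((1, 1)), B22      ]])
C = np.block([[C11,       C12,       Z((1, 2)), Z((1, 1))],
              [Z((1, 2)), Z((1, 1)), C21,       C22      ]])
D = np.block([[D11, D12], [D21, D22]])

def tf(A, B, C, D, s):
    """C (sI - A)^{-1} B + D."""
    return C @ np.linalg.inv(s*np.eye(A.shape[0]) - A) @ B + D

# Each entry of H_*(s) should equal C_ij (sI - A_ij)^{-1} B_ij + D_ij
s = 1.0 + 2.0j
H_star = tf(A, B, C, D, s)
for (i, j), sys_ij in {(0, 0): (A11, B11, C11, D11),
                       (0, 1): (A12, B12, C12, D12),
                       (1, 0): (A21, B21, C21, D21),
                       (1, 1): (A22, B22, C22, D22)}.items():
    assert np.isclose(H_star[i, j], tf(*sys_ij, s)[0, 0])
print("blockwise realization reproduces every H_ij(s)")
```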
Reader: Propose a realization for an \(H\left ( s\right ) \) that is \(3\times 2\), i.e. \(\begin{pmatrix} H_{11}\left ( s\right ) & H_{12}\left ( s\right ) \\ H_{21}\left ( s\right ) & H_{22}\left ( s\right ) \\ H_{31}\left ( s\right ) & H_{32}\left ( s\right ) \end{pmatrix} \,.\) Try it. What should \(A,B,C,D\) look like? Note: Even though each \(H_{ij}\left ( s\right ) \) might be minimal, the combined realization obtained this way might no longer be minimal. Some realizations are "nicer" than others for analysis and design.
Motivation example: \(A=\begin{pmatrix} 0 & 1 & 0\\ 0 & 0 & 1\\ -\alpha _{0} & -\alpha _{1} & -\alpha _{2}\end{pmatrix} ,b=\begin{pmatrix} 0\\ 0\\ 1 \end{pmatrix} ,C=\begin{pmatrix} \beta _{0} & \beta _{1} & \beta _{2}\end{pmatrix} \). When we add feedback, we ask: what is its effect? This form is convenient for studying feedback, which we often add to improve time-domain performance. Let \(u\left ( t\right ) =k_{1}x_{1}+k_{2}x_{2}+k_{3}x_{3}+v\) where \(v\) is the new input and we are free to pick the \(k_{i}\). The closed loop becomes
\[ x^{\prime }=\begin{pmatrix} 0 & 1 & 0\\ 0 & 0 & 1\\ -\alpha _{0} & -\alpha _{1} & -\alpha _{2}\end{pmatrix} x+\begin{pmatrix} 0\\ 0\\ 1 \end{pmatrix} \left ( k_{1}x_{1}+k_{2}x_{2}+k_{3}x_{3}+v\right ) \] Reader: Determine the new \(A\) matrix from the old. \[ x^{\prime }=\begin{pmatrix} 0 & 1 & 0\\ 0 & 0 & 1\\ k_{1}-\alpha _{0} & k_{2}-\alpha _{1} & k_{3}-\alpha _{2}\end{pmatrix} x+\begin{pmatrix} 0\\ 0\\ 1 \end{pmatrix} v \] Notice: State feedback preserves the companion form of \(A\) and \(b\). Find the closed-loop transfer function. \[ H_{closed}=\frac{\beta _{0}+\beta _{1}s+\beta _{2}s^{2}}{s^{3}+\left ( \alpha _{2}-k_{3}\right ) s^{2}+\left ( \alpha _{1}-k_{2}\right ) s+\left ( \alpha _{0}-k_{1}\right ) }\] Note: each \(k_{i}\) affects exactly one coefficient. So the poles of the closed loop can be assigned anywhere we want.
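A short Python sketch of the resulting pole-placement recipe (the open-loop \(\alpha _{i}\) and the desired closed-loop poles are assumed for illustration): equate the closed-loop denominator above to a desired characteristic polynomial, read off each \(k_{i}\), and confirm the eigenvalues of \(A+bK\).

```python
import numpy as np

# Open-loop companion-form plant (assumed illustrative alpha_i values)
a0, a1, a2 = 6.0, 11.0, 6.0                   # open-loop poles at -1, -2, -3
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-a0, -a1, -a2]])
b = np.array([[0.0], [0.0], [1.0]])

# Desired closed-loop poles (assumed), e.g. faster real poles
desired = np.array([-4.0, -5.0, -6.0])
d2, d1, d0 = np.poly(desired)[1:]             # s^3 + d2 s^2 + d1 s + d0

# From H_closed: alpha_2 - k_3 = d2, alpha_1 - k_2 = d1, alpha_0 - k_1 = d0
k3 = a2 - d2
k2 = a1 - d1
k1 = a0 - d0
K = np.array([[k1, k2, k3]])

# Closed loop: x' = (A + bK) x + b v
A_cl = A + b @ K
print("closed-loop eigenvalues:", np.sort(np.linalg.eigvals(A_cl).real))
# -> approximately [-6, -5, -4], the poles we assigned
```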