Notes by Nasser M Abbasi
\(\blacksquare \) Linear system \[\begin{pmatrix} \dot{x}_{1}\\ \dot{x}_{2}\end{pmatrix} =\left [ A\right ] _{n\times n}\begin{pmatrix} x_{1}\\ x_{2}\end{pmatrix} \] For a nonlinear system, \[\begin{pmatrix} \dot{x}_{1}\\ \dot{x}_{2}\end{pmatrix} =\begin{pmatrix} f_{1}\left ( x_{1},x_{2}\right ) \\ f_{2}\left ( x_{1},x_{2}\right ) \end{pmatrix} \] To find the first integral \(F\left ( x_{1},x_{2}\right ) \) (also the orbit equation in the phase plane), solve \(\frac{dx_{2}}{dx_{1}}=\frac{f_{2}}{f_{1}}\). This gives an ODE to solve. For example, if \[ x^{\prime \prime }+x-\frac{1}{2}x^{2}=0 \] then \(x_{1}=x,x_{2}=x^{\prime }\,\) and \(\dot{x}_{1}=x_{2},\dot{x}_{2}=-x_{1}+\frac{1}{2}x_{1}^{2}\). Hence\[\begin{pmatrix} \dot{x}_{1}\\ \dot{x}_{2}\end{pmatrix} =\begin{pmatrix} f_{1}\\ f_{2}\end{pmatrix} =\begin{pmatrix} x_{2}\\ -x_{1}+\frac{1}{2}x_{1}^{2}\end{pmatrix} \] And \(\frac{dx_{2}}{dx_{1}}=\frac{f_{2}}{f_{1}}\,\) gives \(\frac{dx_{2}}{dx_{1}}=\frac{-x_{1}+\frac{1}{2}x_{1}^{2}}{x_{2}}\) or \(x_{2}dx_{2}=\left ( -x_{1}+\frac{1}{2}x_{1}^{2}\right ) dx_{1}\). Integrating gives \(\frac{1}{2}x_{2}^{2}=-\frac{1}{2}x_{1}^{2}+\frac{1}{6}x_{1}^{3}+E\) or \(\frac{1}{2}x_{2}^{2}+\frac{1}{2}x_{1}^{2}-\frac{1}{6}x_{1}^{3}=E\). Hence \[ F\left ( x_{1},x_{2}\right ) =\frac{1}{2}x_{2}^{2}+\frac{1}{2}x_{1}^{2}-\frac{1}{6}x_{1}^{3}\] is the first integral.
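A quick numerical sanity check (a sketch, not part of the notes): integrate the example system with a classical Runge-Kutta stepper and verify that \(F\left ( x_{1},x_{2}\right ) =\frac{1}{2}x_{2}^{2}+\frac{1}{2}x_{1}^{2}-\frac{1}{6}x_{1}^{3}\) stays (nearly) constant along a solution. The initial condition and step size are arbitrary choices.

```python
# Check that F(x1, x2) = x2^2/2 + x1^2/2 - x1^3/6 is conserved along
# solutions of x1' = x2, x2' = -x1 + x1^2/2 (classical RK4, pure Python).

def f(state):
    x1, x2 = state
    return (x2, -x1 + 0.5 * x1**2)

def rk4_step(state, h):
    # one classical 4th-order Runge-Kutta step
    k1 = f(state)
    k2 = f([s + 0.5 * h * k for s, k in zip(state, k1)])
    k3 = f([s + 0.5 * h * k for s, k in zip(state, k2)])
    k4 = f([s + h * k for s, k in zip(state, k3)])
    return [s + (h / 6) * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

def F(x1, x2):
    return 0.5 * x2**2 + 0.5 * x1**2 - x1**3 / 6

state = [0.5, 0.0]           # arbitrary initial condition inside the well
E0 = F(*state)
for _ in range(2000):        # integrate to t = 20 with h = 0.01
    state = rk4_step(state, 0.01)
drift = abs(F(*state) - E0)  # energy drift; should be tiny
```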
\(\blacksquare \) The Hamiltonian \(H\left ( x_{1},x_{2}\right ) \) is a first integral. It is the energy of the system. For a second order ODE, the equations of motion using \(H\) become\begin{align*} \dot{q} & =\frac{\partial H}{\partial p}\\ \dot{p} & =-\frac{\partial H}{\partial q} \end{align*}
But \(p=\dot{x}\), or in state space notation \[ p=\dot{x}_{1}=x_{2}\] And \(q=x\), or in state space notation \[ q=x_{1}\] Hence the above can be written in state space as \[\begin{pmatrix} \dot{q}\\ \dot{p}\end{pmatrix} =\begin{pmatrix} \dot{x}_{1}\\ \dot{x}_{2}\end{pmatrix} \] Therefore, the state space form can be written as (for a second order ODE)\[\begin{pmatrix} \dot{x}_{1}\\ \dot{x}_{2}\end{pmatrix} =\begin{pmatrix} \dot{q}\\ \dot{p}\end{pmatrix} =\begin{pmatrix} \frac{\partial H}{\partial x_{2}}\\ -\frac{\partial H}{\partial x_{1}}\end{pmatrix} \] For example, in the last example we had \(F\left ( x_{1},x_{2}\right ) =\frac{1}{2}x_{2}^{2}+\frac{1}{2}x_{1}^{2}-\frac{1}{6}x_{1}^{3}\), which is also \(H\). Applying the above gives\[\begin{pmatrix} \dot{q}\\ \dot{p}\end{pmatrix} =\begin{pmatrix} x_{2}\\ -\left ( x_{1}-\frac{1}{2}x_{1}^{2}\right ) \end{pmatrix} =\begin{pmatrix} x_{2}\\ -x_{1}+\frac{1}{2}x_{1}^{2}\end{pmatrix} \] which is the original equation of motion of the system. Hence, if we know \(H\) or \(F\), we can recover the equations of motion (assuming \(H\) is constant, which is the normal case).
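The claim that Hamilton's equations recover the system can be checked symbolically. A minimal sketch, assuming sympy is available:

```python
# Recover the equations of motion from H(x1, x2) = x2^2/2 + x1^2/2 - x1^3/6
# via Hamilton's equations x1' = dH/dx2, x2' = -dH/dx1.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
H = x2**2/2 + x1**2/2 - x1**3/6

x1dot = sp.diff(H, x2)    # should give x2
x2dot = -sp.diff(H, x1)   # should give -x1 + x1^2/2
print(x1dot, x2dot)
```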
\(\blacksquare \) Hessian. Given \(F\left ( x_{1},x_{2}\right ) \) as first integral, \(\nabla F\) is the gradient, written as \(\begin{pmatrix} \frac{\partial F}{\partial x_{1}}\\ \frac{\partial F}{\partial x_{2}}\end{pmatrix} \). \(\nabla \) is called the del operator, and the Hessian is \(\nabla ^{2}F=\begin{pmatrix} \frac{\partial ^{2}F}{\partial x_{1}^{2}} & \frac{\partial ^{2}F}{\partial x_{1}\partial x_{2}}\\ \frac{\partial ^{2}F}{\partial x_{2}\partial x_{1}} & \frac{\partial ^{2}F}{\partial x_{2}^{2}}\end{pmatrix} \). A critical point is called non-degenerate if \(\det \nabla ^{2}F\), evaluated at the critical point, is non-zero. This means the linearization around that critical point is non-degenerate.
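The non-degeneracy test can be carried out symbolically for the running example. Setting \(\nabla F=0\) for \(F=\frac{1}{2}x_{2}^{2}+\frac{1}{2}x_{1}^{2}-\frac{1}{6}x_{1}^{3}\) gives the critical points \((0,0)\) and \((2,0)\) used below; the sketch assumes sympy:

```python
# Hessian determinant at the critical points of F = x2^2/2 + x1^2/2 - x1^3/6.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
F = x2**2/2 + x1**2/2 - x1**3/6
H = sp.hessian(F, (x1, x2))          # [[1 - x1, 0], [0, 1]]

d1 = H.subs({x1: 0, x2: 0}).det()    # at (0, 0): det = 1, non-degenerate
d2 = H.subs({x1: 2, x2: 0}).det()    # at (2, 0): det = -1, non-degenerate
print(d1, d2)
```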
\(\blacksquare \) For a scalar function, say \(f\left ( x,y\right ) \), its gradient is \(\nabla f=\begin{pmatrix} \frac{\partial f}{\partial x}\\ \frac{\partial f}{\partial y}\end{pmatrix} \), which, when evaluated at some point \(p=\left ( x_{0},y_{0}\right ) \), points in the direction of steepest ascent (it is normal to the level curves of \(f\), not tangent to them). This can also be written as \(\nabla f=\frac{\partial f}{\partial x}\mathbf{i+}\frac{\partial f}{\partial y}\mathbf{j}\) in vector notation.
For a vector of functions, say \(\vec{F}=\begin{pmatrix} f\left ( x,y\right ) \\ g\left ( x,y\right ) \end{pmatrix} \), its gradient is the matrix\(\ \nabla \vec{F}=\) \(\begin{pmatrix} \frac{\partial f}{\partial x} & \frac{\partial f}{\partial y}\\ \frac{\partial g}{\partial x} & \frac{\partial g}{\partial y}\end{pmatrix} \). This is also called the Jacobian. Normally it is evaluated at a point (an equilibrium point), and its eigenvalues indicate whether the linearized system is stable or not.
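For the running example system \(\dot{x}_{1}=x_{2},\dot{x}_{2}=-x_{1}+\frac{1}{2}x_{1}^{2}\), the Jacobian and its eigenvalues at the two equilibria can be computed symbolically. A sketch, assuming sympy:

```python
# Jacobian of (x2, -x1 + x1^2/2) and its eigenvalues at the equilibria.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
Fvec = sp.Matrix([x2, -x1 + x1**2/2])
J = Fvec.jacobian([x1, x2])      # [[0, 1], [-1 + x1, 0]]

ev0 = J.subs(x1, 0).eigenvals()  # at (0, 0): +/- i  (a center for the linearization)
ev2 = J.subs(x1, 2).eigenvals()  # at (2, 0): +/- 1  (a saddle)
print(ev0, ev2)
```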
\(\blacksquare \) The directional derivative at a point is given by \(\hat{n}\cdot \nabla f\left ( x,y\right ) \), where \(\nabla f\left ( x,y\right ) \) is the gradient (a vector field), and the result is evaluated at some specific point. For example, if \(f=\sqrt{x^{2}+y^{2}}\) and we want the directional derivative in the direction defined by the vector \(2\hat{\imath }+2\hat{\jmath }+\hat{k}\), then \(\hat{n}=\frac{1}{3}\left ( 2\hat{\imath }+2\hat{\jmath }+\hat{k}\right ) \) and \(\nabla f\left ( x,y\right ) =\frac{1}{\sqrt{x^{2}+y^{2}}}\left ( x\hat{\imath }+y\hat{\jmath }\right ) \). Hence \(\frac{1}{3}\left ( 2\hat{\imath }+2\hat{\jmath }+\hat{k}\right ) \cdot \frac{1}{\sqrt{x^{2}+y^{2}}}\left ( x\hat{\imath }+y\hat{\jmath }\right ) =\frac{2}{3}\left ( \frac{x+y}{\sqrt{x^{2}+y^{2}}}\right ) \). At the point \(\left ( 0,-2,1\right ) \) this gives \(-\frac{2}{3}\).
This all can also be written using vector notation. If \(f=\sqrt{x^{2}+y^{2}}\) and we want the directional derivative in the direction defined by the vector \(2\hat{\imath }+2\hat{\jmath }+\hat{k}\), then \(\hat{n}=\frac{1}{3}\left ( 2\hat{\imath }+2\hat{\jmath }+\hat{k}\right ) =\begin{pmatrix} \frac{2}{3}\\ \frac{2}{3}\\ \frac{1}{3}\end{pmatrix} \) and \(\nabla f\left ( x,y\right ) =\frac{1}{\sqrt{x^{2}+y^{2}}}\left ( x\hat{\imath }+y\hat{\jmath }\right ) =\begin{pmatrix} \frac{x}{\sqrt{x^{2}+y^{2}}}\\ \frac{y}{\sqrt{x^{2}+y^{2}}}\\ 0 \end{pmatrix} \). Hence \begin{align*} \begin{pmatrix} \frac{2}{3}\\ \frac{2}{3}\\ \frac{1}{3}\end{pmatrix} \cdot \begin{pmatrix} \frac{x}{\sqrt{x^{2}+y^{2}}}\\ \frac{y}{\sqrt{x^{2}+y^{2}}}\\ 0 \end{pmatrix} & =\frac{2}{3}\frac{x}{\sqrt{x^{2}+y^{2}}}+\frac{2}{3}\frac{y}{\sqrt{x^{2}+y^{2}}}\\ & =\frac{2}{3}\left ( \frac{x+y}{\sqrt{x^{2}+y^{2}}}\right ) \end{align*}
At the point \(\left ( 0,-2,1\right ) \) this gives \(-\frac{2}{3}\).
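The same value can be confirmed numerically (a sketch, not part of the notes), evaluating \(\hat{n}\cdot \nabla f\) at \(\left ( 0,-2\right ) \):

```python
# Directional derivative of f = sqrt(x^2 + y^2) along n = (2, 2, 1)/3.
import math

def directional_derivative(x, y, n):
    # gradient of sqrt(x^2 + y^2) is (x, y, 0) / sqrt(x^2 + y^2)
    r = math.sqrt(x**2 + y**2)
    grad = (x / r, y / r, 0.0)
    return sum(g * c for g, c in zip(grad, n))

n = (2/3, 2/3, 1/3)                      # unit vector along 2i + 2j + k
val = directional_derivative(0, -2, n)   # expect -2/3
print(val)
```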
\(\blacksquare \) For a vector function, say \(F\left ( x,y\right ) =\begin{pmatrix} f\left ( x,y\right ) \\ g\left ( x,y\right ) \end{pmatrix} \), its divergence is a scalar, given by
\begin{align*} \nabla \cdot \vec{F} & =\operatorname{div}\left ( F\right ) \\ & =\frac{\partial f}{\partial x}+\frac{\partial g}{\partial y} \end{align*}
\(\blacksquare \) If in the above \(F\left ( x_{1},x_{2}\right ) \) was the state space vector in \(\dot{x}=F\left ( x_{1},x_{2}\right ) =\begin{pmatrix} f_{1}\left ( x_{1},x_{2}\right ) \\ f_{2}\left ( x_{1},x_{2}\right ) \end{pmatrix} \), then there is a theorem (Bendixson's criterion) which says that if \(\nabla \cdot \vec{F}\) does not change sign (and is not identically zero) over the whole domain \(D\), then the system has no periodic solutions lying entirely in \(D\). This assumes \(D\) is simply connected (i.e. no holes in it) and that \(f_{1}\left ( x_{1},x_{2}\right ) ,f_{2}\left ( x_{1},x_{2}\right ) \) are smooth functions.
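As an illustration (a hypothetical example, not from the notes), take the damped oscillator \(x^{\prime \prime }+x^{\prime }+x=0\), i.e. \(f_{1}=x_{2}\), \(f_{2}=-x_{1}-x_{2}\). Its divergence is \(-1\) everywhere, so it keeps one sign over the whole plane, which by Bendixson's criterion rules out periodic orbits:

```python
# Divergence of the damped oscillator x'' + x' + x = 0 in state space form.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
f1 = x2
f2 = -x1 - x2

div = sp.diff(f1, x1) + sp.diff(f2, x2)  # = -1, same sign everywhere
print(div)
```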
\(\blacksquare \) Morse function. If \(F\left ( x_{1},x_{2}\right ) \) is non-degenerate at a critical point \(x=a\), then \(F\left ( x_{1},x_{2}\right ) \) is called a Morse function around \(x=a\). To find the Morse form, expand \(F\left ( x\right ) \) in a Taylor series around the critical point. \[ F\left ( x_{1},x_{2}\right ) =F\left ( a\right ) +\left ( x-a\right ) ^{T}\nabla F\left ( a\right ) +\frac{1}{2}\left ( x-a\right ) ^{T}\nabla ^{2}F\left ( a\right ) \left ( x-a\right ) \] But \(\nabla F\left ( a\right ) =0\) since \(a\) is a critical point, hence\[ F\left ( x_{1},x_{2}\right ) =F\left ( a\right ) +\frac{1}{2}\left ( x-a\right ) ^{T}\nabla ^{2}F\left ( a\right ) \left ( x-a\right ) \] The above comes out quadratic in \(x_{1},x_{2}\) if \(F\) is non-degenerate.
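A symbolic sketch of this expansion for the running example \(F=\frac{1}{2}x_{2}^{2}+\frac{1}{2}x_{1}^{2}-\frac{1}{6}x_{1}^{3}\) around its critical point \((2,0)\) (assuming sympy); the result \(\frac{2}{3}-\frac{1}{2}\left ( x_{1}-2\right ) ^{2}+\frac{1}{2}x_{2}^{2}\) is an indefinite quadratic form, i.e. a saddle:

```python
# Quadratic (Morse) approximation of F around the critical point (2, 0).
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
F = x2**2/2 + x1**2/2 - x1**3/6

d = sp.Matrix([x1 - 2, x2])                  # displacement from (2, 0)
Hess = sp.hessian(F, (x1, x2)).subs(x1, 2)   # Hessian at the critical point
quad = F.subs({x1: 2, x2: 0}) + (d.T * Hess * d)[0, 0] / 2
print(sp.expand(quad))
```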
\(\blacksquare \) Jordan form
Given the linearized system \(\dot{x}=Ax\), find \(T\) such that \(z=T^{-1}x\), which transforms the system to \(\dot{z}=Bz\) where it is now decoupled. \(B\) is the Jordan form of \(A\). For the case of non-zero eigenvalues of \(A\) at each critical point, \(B=\begin{pmatrix} \lambda _{1} & 0\\ 0 & \lambda _{2}\end{pmatrix} \) or \(B=\begin{pmatrix} \lambda _{1} & 1\\ 0 & \lambda _{1}\end{pmatrix} \), depending on whether the eigenvalues of \(A\) are distinct or repeated. Now solve \(\dot{z}=Bz\), since it is decoupled, and then convert back to \(x\) space using \(x=Tz\). The matrix \(T\) is the matrix of the eigenvectors of \(A\); each eigenvector is a column of \(T\). Note that \(A\) is constant, since it is evaluated at the critical point.
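sympy can compute \(T\) and \(B\) directly. A sketch with an assumed example matrix (not from the notes) that has the repeated eigenvalue \(\lambda =2\) and only one eigenvector, so \(B\) is the non-diagonal Jordan block:

```python
# Jordan form of a 2x2 matrix with a repeated, defective eigenvalue.
import sympy as sp

A = sp.Matrix([[1, 1], [-1, 3]])  # assumed example: lambda = 2 (twice)
T, B = A.jordan_form()            # A = T * B * T^{-1}
print(B)                          # [[2, 1], [0, 2]]
```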
\(\blacksquare \) Eigenvalues of \(A\) can also be found using \(\lambda _{1,2}=\frac{1}{2}\left ( \operatorname{trace}\left ( A\right ) \pm \sqrt{\operatorname{trace}\left ( A\right ) ^{2}-4\det \left ( A\right ) }\right ) \)
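A quick symbolic check of this trace/determinant formula against sympy's own eigenvalue routine (the example matrix is an arbitrary choice):

```python
# Verify lambda = (trace(A) +/- sqrt(trace(A)^2 - 4 det(A))) / 2 for 2x2 A.
import sympy as sp

A = sp.Matrix([[1, 2], [3, 4]])
tr, det = A.trace(), A.det()
lam = {sp.expand((tr + s * sp.sqrt(tr**2 - 4 * det)) / 2) for s in (1, -1)}
print(lam)   # should match A.eigenvals()
```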
\(\blacksquare \) From the book "a critical point which, after linearisation, corresponds with a positive attractor, turns out to be asymptotically stable". This means that if all eigenvalues (of the Jacobian at that point) have negative real parts, then the critical point is asymptotically stable. But if one eigenvalue has zero real part, we can not conclude this from the linearization. Normally we just say we are unable to decide (when the system is non-linear).