Convert the pair of second-order equations \begin{align*} y^{\prime \prime }\left ( t\right ) +3z^{\prime }\left ( t\right ) +2y\left ( t\right ) & =0\\ z^{\prime \prime }\left ( t\right ) +3y^{\prime }\left ( t\right ) +2z\left ( t\right ) & =0 \end{align*}
into a system of 4 first-order equations for the variables \(x_{1}=y,x_{2}=y^{\prime },x_{3}=z,x_{4}=z^{\prime }\).
Solution\[ x_{1}=y,x_{2}=y^{\prime },x_{3}=z,x_{4}=z^{\prime }\] Taking derivatives, and using the original equations to eliminate \(y^{\prime \prime }\) and \(z^{\prime \prime }\), gives\begin{align*} \dot{x}_{1} & =y^{\prime },\dot{x}_{2}=y^{\prime \prime },\dot{x}_{3}=z^{\prime },\dot{x}_{4}=z^{\prime \prime }\\ \dot{x}_{1} & =x_{2},\dot{x}_{2}=-\left ( 3z^{\prime }+2y\right ) ,\dot{x}_{3}=x_{4},\dot{x}_{4}=-\left ( 3y^{\prime }+2z\right ) \end{align*}
Hence\begin{align*} \dot{x}_{1} & =x_{2}\\ \dot{x}_{2} & =-3x_{4}-2x_{1}\\ \dot{x}_{3} & =x_{4}\\ \dot{x}_{4} & =-3x_{2}-2x_{3} \end{align*}
Or, in matrix form, \[\begin{pmatrix} \dot{x}_{1}\\ \dot{x}_{2}\\ \dot{x}_{3}\\ \dot{x}_{4}\end{pmatrix} =\begin{pmatrix} 0 & 1 & 0 & 0\\ -2 & 0 & 0 & -3\\ 0 & 0 & 0 & 1\\ 0 & -3 & -2 & 0 \end{pmatrix} \,\begin{pmatrix} x_{1}\\ x_{2}\\ x_{3}\\ x_{4}\end{pmatrix} \]\[ \mathbf{\dot{x}}=\mathbf{Ax}\]
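As a sanity check (a minimal sketch using NumPy, not part of the original solution), we can confirm that multiplying the matrix \(\mathbf{A}\) above by an arbitrary state vector \(\left ( y,y^{\prime },z,z^{\prime }\right ) \) reproduces the four first-order relations:

```python
import numpy as np

# Coefficient matrix of the first-order system x' = A x derived above
A = np.array([
    [ 0,  1,  0,  0],   # x1' = x2
    [-2,  0,  0, -3],   # x2' = -2 x1 - 3 x4
    [ 0,  0,  0,  1],   # x3' = x4
    [ 0, -3, -2,  0],   # x4' = -3 x2 - 2 x3
], dtype=float)

# Arbitrary sample values for y, y', z, z'
y, yp, z, zp = 1.5, -0.7, 2.0, 0.3
x = np.array([y, yp, z, zp])

# Right-hand sides read off directly from the original pair of equations:
# y'' = -(3 z' + 2 y),  z'' = -(3 y' + 2 z)
expected = np.array([yp, -(3 * zp + 2 * y), zp, -(3 * yp + 2 * z)])

assert np.allclose(A @ x, expected)
```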
Determine whether the given set of elements \(\mathbf{x}=\begin{pmatrix} x_{1}\\ x_{2}\\ x_{3}\end{pmatrix} \,\) where \(x_{1}+x_{2}+x_{3}=1\) form a vector space under the properties of vector addition and scalar multiplication defined in Section 3.1.
Solution
We need to check closure under vector addition and scalar multiplication: for any \(\mathbf{x},\mathbf{y}\) in \(V\), the sum \(\mathbf{x}+\mathbf{y}\) must still be in \(V\), and for any \(\mathbf{x}\) in \(V\) and any scalar \(c\), the product \(c\mathbf{x}\) must still be in \(V\).
Checking closure under addition: this fails. Here is a counterexample. Let \(\mathbf{x}=\begin{pmatrix} \frac{1}{3}\\ \frac{1}{3}\\ \frac{1}{3}\end{pmatrix} ,\mathbf{y=}\begin{pmatrix} \frac{2}{3}\\ 0\\ \frac{1}{3}\end{pmatrix} \). Both \(\mathbf{x,y}\) are in \(V\), but vector addition gives\[\begin{pmatrix} \frac{1}{3}\\ \frac{1}{3}\\ \frac{1}{3}\end{pmatrix} +\begin{pmatrix} \frac{2}{3}\\ 0\\ \frac{1}{3}\end{pmatrix} =\begin{pmatrix} 1\\ \frac{1}{3}\\ \frac{2}{3}\end{pmatrix} \] The resulting vector is not in \(V\), because the sum of its elements is \(1+\frac{1}{3}+\frac{2}{3}=2\neq 1\).
Since closure under vector addition fails, the set does not form a vector space.
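The counterexample is easy to confirm numerically (a minimal sketch; the helper `in_V` is a hypothetical name for the membership test "components sum to 1"):

```python
import numpy as np

def in_V(v):
    """Membership test for the given set: components must sum to 1."""
    return bool(np.isclose(v.sum(), 1.0))

x = np.array([1/3, 1/3, 1/3])
y = np.array([2/3, 0.0, 1/3])

assert in_V(x) and in_V(y)   # both vectors lie in the set
assert not in_V(x + y)       # but their sum does not: components sum to 2
```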
Find a basis for \(\Re ^{3}\) which includes the vectors \(\begin{pmatrix} 1\\ 1\\ 0 \end{pmatrix} \) and \(\begin{pmatrix} 1\\ 3\\ 4 \end{pmatrix} \)
Solution
We need to find a third vector \(\begin{pmatrix} x_{1}\\ x_{2}\\ x_{3}\end{pmatrix} \) that is linearly independent of the above two vectors. If we take the cross product of the above two vectors, then we get a vector that is perpendicular to the plane that the two given vectors span. This will give us the third vector we need\begin{align*} \begin{pmatrix} 1\\ 1\\ 0 \end{pmatrix} \times \begin{pmatrix} 1\\ 3\\ 4 \end{pmatrix} & =\begin{vmatrix} i & j & k\\ 1 & 1 & 0\\ 1 & 3 & 4 \end{vmatrix} \\ & =i\left ( 4\right ) -j\left ( 4\right ) +k\left ( 3-1\right ) \\ & =4i-4j+2k \end{align*}
Hence the third vector is \(\begin{pmatrix} 4\\ -4\\ 2 \end{pmatrix} \).
To verify this result, we now check that \(c_{1}\begin{pmatrix} 1\\ 1\\ 0 \end{pmatrix} +c_{2}\begin{pmatrix} 1\\ 3\\ 4 \end{pmatrix} +c_{3}\begin{pmatrix} 4\\ -4\\ 2 \end{pmatrix} =\begin{pmatrix} 0\\ 0\\ 0 \end{pmatrix} \) has \(c_{1}=c_{2}=c_{3}=0\) as its only solution. Writing the above as \[\begin{pmatrix} 1 & 1 & 4\\ 1 & 3 & -4\\ 0 & 4 & 2 \end{pmatrix}\begin{pmatrix} c_{1}\\ c_{2}\\ c_{3}\end{pmatrix} =\begin{pmatrix} 0\\ 0\\ 0 \end{pmatrix} \] Hence the augmented matrix is\[\begin{pmatrix} 1 & 1 & 4 & 0\\ 1 & 3 & -4 & 0\\ 0 & 4 & 2 & 0 \end{pmatrix} \] Replacing row 2 by row 2 minus row 1 gives\[\begin{pmatrix} 1 & 1 & 4 & 0\\ 0 & 2 & -8 & 0\\ 0 & 4 & 2 & 0 \end{pmatrix} \] Replacing row 3 by row 3 minus twice row 2 gives\[\begin{pmatrix} 1 & 1 & 4 & 0\\ 0 & 2 & -8 & 0\\ 0 & 0 & 18 & 0 \end{pmatrix} \] Gaussian elimination therefore gives\[\begin{pmatrix} 1 & 1 & 4\\ 0 & 2 & -8\\ 0 & 0 & 18 \end{pmatrix}\begin{pmatrix} c_{1}\\ c_{2}\\ c_{3}\end{pmatrix} =\begin{pmatrix} 0\\ 0\\ 0 \end{pmatrix} \] Back substituting gives \(c_{3}=0\). From the second row we obtain \(2c_{2}-8c_{3}=0\), hence \(c_{2}=0\), and from the first row \(c_{1}+c_{2}+4c_{3}=0\), hence \(c_{1}=0\). This shows that \[\begin{pmatrix} 1\\ 1\\ 0 \end{pmatrix} ,\begin{pmatrix} 1\\ 3\\ 4 \end{pmatrix} ,\begin{pmatrix} 4\\ -4\\ 2 \end{pmatrix} \]
are linearly independent. Hence they span \(\Re ^{3}\) and form a basis.
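Both steps can be reproduced with NumPy (a minimal sketch, not part of the original solution): the cross product yields the third vector, and a nonzero determinant of the matrix with the three vectors as columns confirms linear independence, hence a basis for \(\Re ^{3}\).

```python
import numpy as np

v1 = np.array([1.0, 1.0, 0.0])
v2 = np.array([1.0, 3.0, 4.0])

# Third vector: perpendicular to the plane spanned by v1 and v2
v3 = np.cross(v1, v2)
assert np.array_equal(v3, np.array([4.0, -4.0, 2.0]))

# The three vectors form a basis iff the matrix with them as columns has
# nonzero determinant, i.e. c1 v1 + c2 v2 + c3 v3 = 0 only trivially
M = np.column_stack([v1, v2, v3])
assert abs(np.linalg.det(M)) > 1e-9
```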
Determine whether the given solutions are a basis for the set of all solutions\[ \mathbf{\dot{x}}=\begin{pmatrix} 4 & -2 & 2\\ -1 & 3 & 1\\ 1 & -1 & 5 \end{pmatrix} \mathbf{x}\]\[ x^{1}\left ( t\right ) =\begin{pmatrix} e^{2t}\\ e^{2t}\\ 0 \end{pmatrix} ,x^{2}\left ( t\right ) =\begin{pmatrix} 0\\ e^{4t}\\ e^{4t}\end{pmatrix} ,x^{3}\left ( t\right ) =\begin{pmatrix} e^{6t}\\ 0\\ e^{6t}\end{pmatrix} \] Solution
We pick \(t=0\) to check linear independence (we can choose any \(t\) value, but \(t=0\) is the simplest). At \(t=0\) the given solutions become\[ x^{1}=\begin{pmatrix} 1\\ 1\\ 0 \end{pmatrix} ,x^{2}=\begin{pmatrix} 0\\ 1\\ 1 \end{pmatrix} ,x^{3}=\begin{pmatrix} 1\\ 0\\ 1 \end{pmatrix} \] We now check that \(c_{1}\begin{pmatrix} 1\\ 1\\ 0 \end{pmatrix} +c_{2}\begin{pmatrix} 0\\ 1\\ 1 \end{pmatrix} +c_{3}\begin{pmatrix} 1\\ 0\\ 1 \end{pmatrix} =\begin{pmatrix} 0\\ 0\\ 0 \end{pmatrix} \) has \(c_{1}=c_{2}=c_{3}=0\) as its only solution. Writing the above as \[\begin{pmatrix} 1 & 0 & 1\\ 1 & 1 & 0\\ 0 & 1 & 1 \end{pmatrix}\begin{pmatrix} c_{1}\\ c_{2}\\ c_{3}\end{pmatrix} =\begin{pmatrix} 0\\ 0\\ 0 \end{pmatrix} \] Hence the augmented matrix is\[\begin{pmatrix} 1 & 0 & 1 & 0\\ 1 & 1 & 0 & 0\\ 0 & 1 & 1 & 0 \end{pmatrix} \] Replacing row 2 by row 2 minus row 1 gives\[\begin{pmatrix} 1 & 0 & 1 & 0\\ 0 & 1 & -1 & 0\\ 0 & 1 & 1 & 0 \end{pmatrix} \] Replacing row 3 by row 3 minus row 2 gives\[\begin{pmatrix} 1 & 0 & 1 & 0\\ 0 & 1 & -1 & 0\\ 0 & 0 & 2 & 0 \end{pmatrix} \] Gaussian elimination therefore gives\[\begin{pmatrix} 1 & 0 & 1\\ 0 & 1 & -1\\ 0 & 0 & 2 \end{pmatrix}\begin{pmatrix} c_{1}\\ c_{2}\\ c_{3}\end{pmatrix} =\begin{pmatrix} 0\\ 0\\ 0 \end{pmatrix} \] Back substituting gives \(2c_{3}=0\), or \(c_{3}=0\). From the second row \(c_{2}-c_{3}=0\), hence \(c_{2}=0\), and from the first row \(c_{1}+c_{3}=0\), hence \(c_{1}=0\). This shows that \[ x^{1}\left ( t\right ) =\begin{pmatrix} e^{2t}\\ e^{2t}\\ 0 \end{pmatrix} ,x^{2}\left ( t\right ) =\begin{pmatrix} 0\\ e^{4t}\\ e^{4t}\end{pmatrix} ,x^{3}\left ( t\right ) =\begin{pmatrix} e^{6t}\\ 0\\ e^{6t}\end{pmatrix} \] are linearly independent. Hence they form a basis for the set of all solutions of the system given above.
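The same conclusion can be checked numerically (a minimal sketch using NumPy, not part of the original solution): each given solution has the form \(e^{\lambda t}\mathbf{v}\) where \(\mathbf{v}\) is an eigenvector of the coefficient matrix with \(\lambda =2,4,6\) respectively, so each is indeed a solution, and the determinant of the solutions evaluated at \(t=0\) is nonzero.

```python
import numpy as np

# Coefficient matrix of the system x' = A x
A = np.array([
    [ 4, -2,  2],
    [-1,  3,  1],
    [ 1, -1,  5],
], dtype=float)

# Values of the three given solutions at t = 0
x1 = np.array([1.0, 1.0, 0.0])   # x^1(t) = e^{2t} (1, 1, 0)
x2 = np.array([0.0, 1.0, 1.0])   # x^2(t) = e^{4t} (0, 1, 1)
x3 = np.array([1.0, 0.0, 1.0])   # x^3(t) = e^{6t} (1, 0, 1)

# Each is a solution: A v = lambda v, so
# d/dt [e^{lambda t} v] = lambda e^{lambda t} v = A (e^{lambda t} v)
for v, lam in [(x1, 2.0), (x2, 4.0), (x3, 6.0)]:
    assert np.allclose(A @ v, lam * v)

# Linear independence at t = 0: nonzero determinant
W = np.column_stack([x1, x2, x3])
assert abs(np.linalg.det(W)) > 1e-9
```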
Compute the determinant of\[\begin{pmatrix} 2 & -1 & 6 & 3\\ 1 & 0 & 1 & -1\\ 1 & 3 & 0 & 2\\ 1 & -1 & 1 & 0 \end{pmatrix} \] Solution\begin{equation} \begin{vmatrix} 2 & -1 & 6 & 3\\ 1 & 0 & 1 & -1\\ 1 & 3 & 0 & 2\\ 1 & -1 & 1 & 0 \end{vmatrix} =2\begin{vmatrix} 0 & 1 & -1\\ 3 & 0 & 2\\ -1 & 1 & 0 \end{vmatrix} +\begin{vmatrix} 1 & 1 & -1\\ 1 & 0 & 2\\ 1 & 1 & 0 \end{vmatrix} +6\begin{vmatrix} 1 & 0 & -1\\ 1 & 3 & 2\\ 1 & -1 & 0 \end{vmatrix} -3\begin{vmatrix} 1 & 0 & 1\\ 1 & 3 & 0\\ 1 & -1 & 1 \end{vmatrix} \tag{1} \end{equation} But \begin{align} \begin{vmatrix} 0 & 1 & -1\\ 3 & 0 & 2\\ -1 & 1 & 0 \end{vmatrix} & =0-\begin{vmatrix} 3 & 2\\ -1 & 0 \end{vmatrix} -\begin{vmatrix} 3 & 0\\ -1 & 1 \end{vmatrix} \nonumber \\ & =-2-3\nonumber \\ & =-5\tag{2} \end{align}
And\begin{align} \begin{vmatrix} 1 & 1 & -1\\ 1 & 0 & 2\\ 1 & 1 & 0 \end{vmatrix} & =\begin{vmatrix} 0 & 2\\ 1 & 0 \end{vmatrix} -\begin{vmatrix} 1 & 2\\ 1 & 0 \end{vmatrix} -\begin{vmatrix} 1 & 0\\ 1 & 1 \end{vmatrix} \nonumber \\ & =-2+2-1\nonumber \\ & =-1\tag{3} \end{align}
And\begin{align} \begin{vmatrix} 1 & 0 & -1\\ 1 & 3 & 2\\ 1 & -1 & 0 \end{vmatrix} & =\begin{vmatrix} 3 & 2\\ -1 & 0 \end{vmatrix} +0-\begin{vmatrix} 1 & 3\\ 1 & -1 \end{vmatrix} \nonumber \\ & =2-\left ( -1-3\right ) \nonumber \\ & =6\tag{4} \end{align}
And\begin{align} \begin{vmatrix} 1 & 0 & 1\\ 1 & 3 & 0\\ 1 & -1 & 1 \end{vmatrix} & =\begin{vmatrix} 3 & 0\\ -1 & 1 \end{vmatrix} +0+\begin{vmatrix} 1 & 3\\ 1 & -1 \end{vmatrix} \nonumber \\ & =3+\left ( -1-3\right ) \nonumber \\ & =-1\tag{5} \end{align}
Substituting (2,3,4,5) into (1) gives\begin{align*} \begin{vmatrix} 2 & -1 & 6 & 3\\ 1 & 0 & 1 & -1\\ 1 & 3 & 0 & 2\\ 1 & -1 & 1 & 0 \end{vmatrix} & =2\left ( -5\right ) +\left ( -1\right ) +6\left ( 6\right ) -3\left ( -1\right ) \\ & =28 \end{align*}
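The cofactor expansion above can be cross-checked numerically (a minimal sketch using NumPy, which computes the determinant via LU factorization):

```python
import numpy as np

M = np.array([
    [2, -1, 6,  3],
    [1,  0, 1, -1],
    [1,  3, 0,  2],
    [1, -1, 1,  0],
], dtype=float)

# Agrees with the cofactor expansion along the first row carried out above
assert np.isclose(np.linalg.det(M), 28.0)
```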
Find the inverse, if it exists, of \[ A=\begin{pmatrix} \cos \theta & 0 & -\sin \theta \\ 0 & 1 & 0\\ \sin \theta & 0 & \cos \theta \end{pmatrix} \] Solution\begin{equation} A^{-1}=\frac{1}{\left \vert A\right \vert }adj\left ( A\right ) ^{T}\tag{1} \end{equation} Where \(adj\left ( A\right ) \) here denotes the matrix of cofactors of \(A\). But \begin{align} \left \vert A\right \vert & =\cos \theta \begin{vmatrix} 1 & 0\\ 0 & \cos \theta \end{vmatrix} -0-\sin \theta \begin{vmatrix} 0 & 1\\ \sin \theta & 0 \end{vmatrix} \nonumber \\ & =\cos ^{2}\theta +\sin ^{2}\theta \nonumber \\ & =1\tag{2} \end{align}
And\begin{align*} adj\left ( A\right ) & =\begin{pmatrix} \cos \theta & 0 & -\sin \theta \\ 0 & \cos ^{2}\theta +\sin ^{2}\theta & 0\\ \sin \theta & 0 & \cos \theta \end{pmatrix} \\ & =\begin{pmatrix} \cos \theta & 0 & -\sin \theta \\ 0 & 1 & 0\\ \sin \theta & 0 & \cos \theta \end{pmatrix} \end{align*}
Hence\begin{equation} adj\left ( A\right ) ^{T}=\begin{pmatrix} \cos \theta & 0 & \sin \theta \\ 0 & 1 & 0\\ -\sin \theta & 0 & \cos \theta \end{pmatrix} \tag{3} \end{equation} Substituting (2,3) into (1) gives\[ A^{-1}=\begin{pmatrix} \cos \theta & 0 & \sin \theta \\ 0 & 1 & 0\\ -\sin \theta & 0 & \cos \theta \end{pmatrix} \]
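Note that \(A\) is orthogonal (a rotation about the second coordinate axis), so its inverse is simply its transpose, which matches the result above. This is easy to confirm numerically for a sample angle (a minimal sketch, not part of the original solution):

```python
import numpy as np

theta = 0.7  # arbitrary sample angle
c, s = np.cos(theta), np.sin(theta)

A = np.array([
    [ c, 0, -s],
    [ 0, 1,  0],
    [ s, 0,  c],
])

# The inverse found above: the transpose of A
A_inv = np.array([
    [ c, 0,  s],
    [ 0, 1,  0],
    [-s, 0,  c],
])

assert np.isclose(np.linalg.det(A), 1.0)   # |A| = cos^2 + sin^2 = 1
assert np.allclose(A @ A_inv, np.eye(3))   # A A^{-1} = I
assert np.allclose(A_inv, A.T)             # inverse equals transpose
```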