Given the rotation matrix \(R_{\theta }=\begin {bmatrix} \cos \theta & \sin \theta \\ -\sin \theta & \cos \theta \end {bmatrix} \), verify that \(R_{\theta +\theta ^{\prime }}=R_{\theta ^{\prime }}R_{\theta }\).
Solution
\begin {equation} R_{\theta +\theta ^{\prime }}=\begin {bmatrix} \cos \left (\theta +\theta ^{\prime }\right ) & \sin \left (\theta +\theta ^{\prime }\right ) \\ -\sin \left (\theta +\theta ^{\prime }\right ) & \cos \left (\theta +\theta ^{\prime }\right ) \end {bmatrix} \tag {1} \end {equation} But \begin {align} R_{\theta ^{\prime }}R_{\theta } & =\begin {bmatrix} \cos \theta ^{\prime } & \sin \theta ^{\prime }\\ -\sin \theta ^{\prime } & \cos \theta ^{\prime }\end {bmatrix}\begin {bmatrix} \cos \theta & \sin \theta \\ -\sin \theta & \cos \theta \end {bmatrix} \nonumber \\ & =\begin {bmatrix} \cos \theta ^{\prime }\cos \theta -\sin \theta ^{\prime }\sin \theta & \cos \theta ^{\prime }\sin \theta +\sin \theta ^{\prime }\cos \theta \\ -\sin \theta ^{\prime }\cos \theta -\cos \theta ^{\prime }\sin \theta & -\sin \theta ^{\prime }\sin \theta +\cos \theta ^{\prime }\cos \theta \end {bmatrix} \tag {2} \end {align}
But from trig identities we know that \begin {align} \cos \theta ^{\prime }\cos \theta -\sin \theta ^{\prime }\sin \theta & =\cos \left ( \theta +\theta ^{\prime }\right ) \tag {3}\\ \cos \theta ^{\prime }\sin \theta +\sin \theta ^{\prime }\cos \theta & =\sin \left ( \theta +\theta ^{\prime }\right ) \tag {4}\\ -\sin \theta ^{\prime }\cos \theta -\cos \theta ^{\prime }\sin \theta & =-\left ( \sin \theta ^{\prime }\cos \theta +\cos \theta ^{\prime }\sin \theta \right ) \nonumber \\ & =-\sin \left (\theta +\theta ^{\prime }\right ) \tag {5}\\ -\sin \theta ^{\prime }\sin \theta +\cos \theta ^{\prime }\cos \theta & =\cos \left ( \theta +\theta ^{\prime }\right ) \tag {6} \end {align}
Substituting (3,4,5,6) into (2) gives\[ R_{\theta ^{\prime }}R_{\theta }=\begin {bmatrix} \cos \left (\theta +\theta ^{\prime }\right ) & \sin \left (\theta +\theta ^{\prime }\right ) \\ -\sin \left (\theta +\theta ^{\prime }\right ) & \cos \left (\theta +\theta ^{\prime }\right ) \end {bmatrix} \] Which is the same as (1). Hence \[ R_{\theta +\theta ^{\prime }}=R_{\theta ^{\prime }}R_{\theta }\]
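The identity is easy to spot-check numerically as well. A short Python sketch (the helper names `R` and `matmul` are mine, using the text's convention for \(R_{\theta }\)):

```python
import math

# Rotation matrix in the text's convention: R_theta = [[cos t, sin t], [-sin t, cos t]]
def R(t):
    return [[math.cos(t), math.sin(t)],
            [-math.sin(t), math.cos(t)]]

# Plain 2x2 matrix product
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

theta, theta_p = 0.3, 0.7
lhs = R(theta + theta_p)          # R_{theta + theta'}
rhs = matmul(R(theta_p), R(theta))  # R_{theta'} R_theta
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12 for i in range(2) for j in range(2))
```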
Part 1
Recall from problem 1.6.4 in chapter 1, that the relativistic transformation of coordinates when we go from frame of reference to another is \begin {align*} x^{\prime } & =x\cosh \theta -ct\sinh \theta \\ ct^{\prime } & =-x\sinh \theta +ct\cosh \theta \end {align*}
(Note: I added \(c\) to the formulas, since the book assumes \(c=1\); writing it out makes the units clearer.)
Where \(\theta \) is the rapidity difference between the two frames. Write this in matrix form. Say we go to a third frame with coordinates \(x^{\prime \prime },t^{\prime \prime }\), moving with rapidity \(\theta ^{\prime }\) with respect to the one with primed coordinates. Show that the matrix relating the doubly primed coordinates to the unprimed ones corresponds to rapidity \(\theta +\theta ^{\prime }\).
Part 2. Find the expression of \(\theta \) in terms of the relative velocity.
Solution
In Matrix form Lorentz transformation becomes\begin {equation} \begin {pmatrix} x^{\prime }\\ ct^{\prime }\end {pmatrix} =\begin {pmatrix} \cosh \theta & -\sinh \theta \\ -\sinh \theta & \cosh \theta \end {pmatrix}\begin {pmatrix} x\\ ct \end {pmatrix} \tag {1} \end {equation} In the third frame (double primed), we have\begin {equation} \begin {pmatrix} x^{\prime \prime }\\ ct^{\prime \prime }\end {pmatrix} =\begin {pmatrix} \cosh \theta ^{\prime } & -\sinh \theta ^{\prime }\\ -\sinh \theta ^{\prime } & \cosh \theta ^{\prime }\end {pmatrix}\begin {pmatrix} x^{\prime }\\ ct^{\prime }\end {pmatrix} \tag {2} \end {equation} Substituting (1) in the RHS of (2) gives\begin {equation} \begin {pmatrix} x^{\prime \prime }\\ ct^{\prime \prime }\end {pmatrix} =\begin {pmatrix} \cosh \theta ^{\prime } & -\sinh \theta ^{\prime }\\ -\sinh \theta ^{\prime } & \cosh \theta ^{\prime }\end {pmatrix}\begin {pmatrix} \cosh \theta & -\sinh \theta \\ -\sinh \theta & \cosh \theta \end {pmatrix}\begin {pmatrix} x\\ ct \end {pmatrix} \tag {3} \end {equation} But \begin {align*} \begin {pmatrix} \cosh \theta ^{\prime } & -\sinh \theta ^{\prime }\\ -\sinh \theta ^{\prime } & \cosh \theta ^{\prime }\end {pmatrix}\begin {pmatrix} \cosh \theta & -\sinh \theta \\ -\sinh \theta & \cosh \theta \end {pmatrix} & =\begin {pmatrix} \cosh \theta ^{\prime }\cosh \theta +\sinh \theta ^{\prime }\sinh \theta & -\cosh \theta ^{\prime }\sinh \theta -\sinh \theta ^{\prime }\cosh \theta \\ -\sinh \theta ^{\prime }\cosh \theta -\cosh \theta ^{\prime }\sinh \theta & \sinh \theta ^{\prime }\sinh \theta +\cosh \theta ^{\prime }\cosh \theta \end {pmatrix} \\ & =\begin {pmatrix} \cosh \left (\theta +\theta ^{\prime }\right ) & -\sinh \left (\theta +\theta ^{\prime }\right ) \\ -\sinh \left (\theta +\theta ^{\prime }\right ) & \cosh \left (\theta +\theta ^{\prime }\right ) \end {pmatrix} \end {align*}
Substituting the above in (3) gives\begin {equation} \begin {pmatrix} x^{\prime \prime }\\ ct^{\prime \prime }\end {pmatrix} =\begin {pmatrix} \cosh \left (\theta +\theta ^{\prime }\right ) & -\sinh \left (\theta +\theta ^{\prime }\right ) \\ -\sinh \left (\theta +\theta ^{\prime }\right ) & \cosh \left (\theta +\theta ^{\prime }\right ) \end {pmatrix}\begin {pmatrix} x\\ ct \end {pmatrix} \tag {4} \end {equation} Therefore the matrix \[\begin {pmatrix} \cosh \left (\theta +\theta ^{\prime }\right ) & -\sinh \left (\theta +\theta ^{\prime }\right ) \\ -\sinh \left (\theta +\theta ^{\prime }\right ) & \cosh \left (\theta +\theta ^{\prime }\right ) \end {pmatrix} \] Relates the unprimed frame to the doubly primed by rapidity \(\theta +\theta ^{\prime }\), which is what we are asked to show.
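The same composition law can be spot-checked numerically. In this Python sketch (helper names mine), the product of two boost matrices is compared entrywise against the single boost with rapidity \(\theta +\theta ^{\prime }\):

```python
import math

# Boost matrix acting on (x, ct), as in equation (1) of the text
def L(t):
    return [[math.cosh(t), -math.sinh(t)],
            [-math.sinh(t), math.cosh(t)]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

theta, theta_p = 0.5, 0.25
product = matmul(L(theta_p), L(theta))   # two successive boosts
combined = L(theta + theta_p)            # single boost with added rapidity
assert all(abs(product[i][j] - combined[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```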
We need to find the expression of \(\theta \) in terms of the relative velocity. The relative velocity is taken as that between the unprimed frame \(\left (x,ct\right ) \) and the primed frame \(\left (x^{\prime },ct^{\prime }\right ) \).
The Lorentz transformation can also be written as \begin {align} x^{\prime } & =\frac {x-vt}{\sqrt {1-\frac {v^{2}}{c^{2}}}}\tag {1}\\ t^{\prime } & =\frac {t-\frac {vx}{c^{2}}}{\sqrt {1-\frac {v^{2}}{c^{2}}}} \tag {2} \end {align}
But we also can write the above in terms of rapidity \(\theta \) as given in the text book as\begin {equation} \begin {pmatrix} x^{\prime }\\ ct^{\prime }\end {pmatrix} =\begin {pmatrix} \cosh \theta & -\sinh \theta \\ -\sinh \theta & \cosh \theta \end {pmatrix}\begin {pmatrix} x\\ ct \end {pmatrix} \tag {3} \end {equation} Or\begin {align} x^{\prime } & =x\cosh \theta -ct\sinh \theta \tag {4}\\ ct^{\prime } & =-x\sinh \theta +ct\cosh \theta \nonumber \\ t^{\prime } & =-\frac {x}{c}\sinh \theta +t\cosh \theta \tag {5} \end {align}
Equating (1,4) and (2,5) gives the following two equations\begin {align} \frac {x-vt}{\sqrt {1-\frac {v^{2}}{c^{2}}}} & =x\cosh \theta -ct\sinh \theta \tag {6}\\ \frac {t-\frac {vx}{c^{2}}}{\sqrt {1-\frac {v^{2}}{c^{2}}}} & =-\frac {x}{c}\sinh \theta +t\cosh \theta \tag {7} \end {align}
Dividing Eq (6) by Eq (7) to get rid of the root term gives\begin {equation} \frac {x-vt}{t-\frac {vx}{c^{2}}}=\frac {x\cosh \theta -ct\sinh \theta }{-\frac {x}{c}\sinh \theta +t\cosh \theta } \tag {8} \end {equation} Dividing the numerator and the denominator of RHS of the above by \(\cosh \theta \) gives\[ \frac {x-vt}{t-\frac {vx}{c^{2}}}=\frac {x-ct\tanh \theta }{t-\frac {x}{c}\tanh \theta }\] Now we solve for \(v\), the relative velocity from the above by simplifying the above. This results in\begin {align*} \left (x-vt\right ) \left (t-\frac {x}{c}\tanh \theta \right ) & =\left ( t-\frac {vx}{c^{2}}\right ) \left (x-ct\tanh \theta \right ) \\ xt-\frac {x^{2}}{c}\tanh \theta -vt^{2}+vt\frac {x}{c}\tanh \theta & =tx-ct^{2}\tanh \theta -\frac {vx^{2}}{c^{2}}+\frac {vx}{c^{2}}ct\tanh \theta \\ v\left (-t^{2}+t\frac {x}{c}\tanh \theta +\frac {x^{2}}{c^{2}}-\frac {x}{c}t\tanh \theta \right ) & =tx-xt+\frac {x^{2}}{c}\tanh \theta -ct^{2}\tanh \theta \\ v\left (-t^{2}+\frac {x^{2}}{c^{2}}\right ) & =\left (\frac {x^{2}}{c}-ct^{2}\right ) \tanh \theta \\ v & =\frac {\frac {x^{2}}{c}-ct^{2}}{\frac {x^{2}}{c^{2}}-t^{2}}\tanh \theta \\ & =\frac {\frac {x^{2}-c^{2}t^{2}}{c}}{\frac {x^{2}-c^{2}t^{2}}{c^{2}}}\tanh \theta \\ & =\frac {c^{2}\left (x^{2}-c^{2}t^{2}\right ) }{c\left (x^{2}-c^{2}t^{2}\right ) }\tanh \theta \\ & =\frac {c\left (x^{2}-c^{2}t^{2}\right ) }{x^{2}-c^{2}t^{2}}\tanh \theta \\ & =c\tanh \theta \end {align*}
Therefore, the relative velocity is\[ v=c\tanh \theta \]
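Since rapidities add while velocities do not, \(v=c\tanh \theta \) reproduces the relativistic velocity-addition formula through the identity \(\tanh \left (\theta +\theta ^{\prime }\right ) =\frac {\tanh \theta +\tanh \theta ^{\prime }}{1+\tanh \theta \tanh \theta ^{\prime }}\). A numerical spot check in Python (with \(c=1\) for brevity):

```python
import math

theta, theta_p = 0.8, 1.3
v, v_p = math.tanh(theta), math.tanh(theta_p)   # v = c tanh(theta), with c = 1

v_combined = math.tanh(theta + theta_p)   # single boost with rapidity theta + theta'
v_addition = (v + v_p) / (1 + v * v_p)    # standard velocity-addition formula

assert abs(v_combined - v_addition) < 1e-12
assert abs(v_combined) < 1                # the combined speed stays below c
```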
Find the inverse of the Lorentz transformation matrix from problem 8.1.2 and of the rotation matrix \(R_{\theta }\). Does the answer make sense? (You must be on top of the identities for hyperbolic and trigonometric functions to do this. Remember: when in trouble, go back to the definitions in terms of exponentials.)
Solution
The Lorentz Transformation matrix from problem 8.1.2 above is\begin {align*} \begin {pmatrix} x^{\prime }\\ t^{\prime }\end {pmatrix} & =\begin {pmatrix} \cosh \theta & -\sinh \theta \\ -\sinh \theta & \cosh \theta \end {pmatrix}\begin {pmatrix} x\\ t \end {pmatrix} \\ & =L_{\theta }\begin {pmatrix} x\\ t \end {pmatrix} \end {align*}
Where \[ L_{\theta }=\begin {pmatrix} \cosh \theta & -\sinh \theta \\ -\sinh \theta & \cosh \theta \end {pmatrix} \] While the rotation matrix is \[ R_{\theta }=\begin {pmatrix} \cos \theta & \sin \theta \\ -\sin \theta & \cos \theta \end {pmatrix} \] The question is asking to find the \(L_{\theta }^{-1}\) and \(R_{\theta }^{-1}\). \begin {align} L_{\theta }^{-1} & =\frac {1}{\det \left (L_{\theta }\right ) }\begin {pmatrix} L_{22} & -L_{12}\\ -L_{21} & L_{11}\end {pmatrix} \nonumber \\ & =\frac {1}{\cosh ^{2}\theta -\sinh ^{2}\theta }\begin {pmatrix} \cosh \theta & \sinh \theta \\ \sinh \theta & \cosh \theta \end {pmatrix} \nonumber \\ & =\begin {pmatrix} \cosh \theta & \sinh \theta \\ \sinh \theta & \cosh \theta \end {pmatrix} \tag {1} \end {align}
The inverse of the matrix undoes whatever the matrix does. Let us check this on the above result. \begin {equation} L_{\left (-\theta \right ) }=\begin {pmatrix} \cosh \left (-\theta \right ) & -\sinh \left (-\theta \right ) \\ -\sinh \left (-\theta \right ) & \cosh \left (-\theta \right ) \end {pmatrix} =\begin {pmatrix} \cosh \left (\theta \right ) & \sinh \left (\theta \right ) \\ \sinh \left (\theta \right ) & \cosh \left (\theta \right ) \end {pmatrix} \tag {2} \end {equation} We see that (2) is the same as (1). Hence the result of (1) makes sense. For the rotation matrix, we have\begin {align} R_{\theta }^{-1} & =\frac {1}{\det \left (R_{\theta }\right ) }\begin {pmatrix} R_{22} & -R_{12}\\ -R_{21} & R_{11}\end {pmatrix} \nonumber \\ & =\frac {1}{\cos ^{2}\theta +\sin ^{2}\theta }\begin {pmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end {pmatrix} \nonumber \\ & =\begin {pmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end {pmatrix} \tag {3} \end {align}
The inverse of the matrix undoes whatever the matrix does. Let us check this on the above result. \begin {equation} R_{\left (-\theta \right ) }=\begin {pmatrix} \cos \left (-\theta \right ) & \sin \left (-\theta \right ) \\ -\sin \left (-\theta \right ) & \cos \left (-\theta \right ) \end {pmatrix} =\begin {pmatrix} \cos \left (\theta \right ) & -\sin \left (\theta \right ) \\ \sin \left (\theta \right ) & \cos \left (\theta \right ) \end {pmatrix} \tag {4} \end {equation} We see that (4) is the same as (3). Hence the result of (3) makes sense.
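Both inverse results can be confirmed numerically by multiplying each matrix by its claimed inverse (a Python sketch; `matmul` and `is_identity` are my own helpers):

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def is_identity(M, tol=1e-12):
    return all(abs(M[i][j] - (1 if i == j else 0)) < tol
               for i in range(2) for j in range(2))

t = 0.9
L = [[math.cosh(t), -math.sinh(t)], [-math.sinh(t), math.cosh(t)]]
L_inv = [[math.cosh(t), math.sinh(t)], [math.sinh(t), math.cosh(t)]]   # L_{-theta}
R = [[math.cos(t), math.sin(t)], [-math.sin(t), math.cos(t)]]
R_inv = [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]      # R_{-theta}

assert is_identity(matmul(L, L_inv)) and is_identity(matmul(L_inv, L))
assert is_identity(matmul(R, R_inv)) and is_identity(matmul(R_inv, R))
```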
(1) Solve the following simultaneous equations using Cramer's rule.\begin {align*} 3x-y-z & =2\\ x-2y-3z & =0\\ 4x+y+2z & =4 \end {align*}
Solution
In matrix form\[\begin {pmatrix} 3 & -1 & -1\\ 1 & -2 & -3\\ 4 & 1 & 2 \end {pmatrix}\begin {pmatrix} x\\ y\\ z \end {pmatrix} =\begin {pmatrix} 2\\ 0\\ 4 \end {pmatrix} \] Then, using Cramer's rule\begin {equation} x=\frac {\begin {vmatrix} 2 & -1 & -1\\ 0 & -2 & -3\\ 4 & 1 & 2 \end {vmatrix} }{\begin {vmatrix} 3 & -1 & -1\\ 1 & -2 & -3\\ 4 & 1 & 2 \end {vmatrix} },y=\frac {\begin {vmatrix} 3 & 2 & -1\\ 1 & 0 & -3\\ 4 & 4 & 2 \end {vmatrix} }{\begin {vmatrix} 3 & -1 & -1\\ 1 & -2 & -3\\ 4 & 1 & 2 \end {vmatrix} },z=\frac {\begin {vmatrix} 3 & -1 & 2\\ 1 & -2 & 0\\ 4 & 1 & 4 \end {vmatrix} }{\begin {vmatrix} 3 & -1 & -1\\ 1 & -2 & -3\\ 4 & 1 & 2 \end {vmatrix} } \tag {A} \end {equation} But the determinant of the coefficient matrix is (using expansion along the first row)\begin {align} \begin {vmatrix} 3 & -1 & -1\\ 1 & -2 & -3\\ 4 & 1 & 2 \end {vmatrix} & =3\begin {vmatrix} -2 & -3\\ 1 & 2 \end {vmatrix} -\left (-1\right ) \begin {vmatrix} 1 & -3\\ 4 & 2 \end {vmatrix} +\left (-1\right ) \begin {vmatrix} 1 & -2\\ 4 & 1 \end {vmatrix} \tag {1}\\ & =3\left (-4+3\right ) +\left (2+12\right ) -\left (1+8\right ) \nonumber \\ & =2\nonumber \end {align}
And\begin {align} \begin {vmatrix} 2 & -1 & -1\\ 0 & -2 & -3\\ 4 & 1 & 2 \end {vmatrix} & =2\begin {vmatrix} -2 & -3\\ 1 & 2 \end {vmatrix} -\left (-1\right ) \begin {vmatrix} 0 & -3\\ 4 & 2 \end {vmatrix} +\left (-1\right ) \begin {vmatrix} 0 & -2\\ 4 & 1 \end {vmatrix} \tag {2}\\ & =2\left (-4+3\right ) +\left (12\right ) -\relax (8) \nonumber \\ & =2\nonumber \end {align}
And\begin {align} \begin {vmatrix} 3 & 2 & -1\\ 1 & 0 & -3\\ 4 & 4 & 2 \end {vmatrix} & =3\begin {vmatrix} 0 & -3\\ 4 & 2 \end {vmatrix} -\relax (2) \begin {vmatrix} 1 & -3\\ 4 & 2 \end {vmatrix} +\left (-1\right ) \begin {vmatrix} 1 & 0\\ 4 & 4 \end {vmatrix} \tag {3}\\ & =3\left (12\right ) -2\left (2+12\right ) -\relax (4) \nonumber \\ & =4\nonumber \end {align}
And \begin {align} \begin {vmatrix} 3 & -1 & 2\\ 1 & -2 & 0\\ 4 & 1 & 4 \end {vmatrix} & =3\begin {vmatrix} -2 & 0\\ 1 & 4 \end {vmatrix} -\left (-1\right ) \begin {vmatrix} 1 & 0\\ 4 & 4 \end {vmatrix} +\relax (2) \begin {vmatrix} 1 & -2\\ 4 & 1 \end {vmatrix} \tag {4}\\ & =3\left (-8\right ) +\relax (4) +2\left (1+8\right ) \nonumber \\ & =-2\nonumber \end {align}
Substituting (1,2,3,4) into (A) gives the solution\begin {align*} x & =\frac {2}{2}=1\\ y & =\frac {4}{2}=2\\ z & =\frac {-2}{2}=-1 \end {align*}
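Cramer's rule is mechanical enough to script. This Python sketch (exact rational arithmetic via `fractions`; the function names are mine) reproduces the solution above:

```python
from fractions import Fraction

# Determinant of a 3x3 matrix by expansion along the first row
def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

# Cramer's rule: replace column j of A with b, divide determinants
def solve_cramer(A, b):
    d = det3(A)
    sol = []
    for j in range(3):
        Aj = [[b[i] if k == j else A[i][k] for k in range(3)] for i in range(3)]
        sol.append(Fraction(det3(Aj), d))
    return sol

A = [[3, -1, -1], [1, -2, -3], [4, 1, 2]]
b = [2, 0, 4]
assert det3(A) == 2
assert solve_cramer(A, b) == [1, 2, -1]
```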
(The same system is solved again in the next problem, using matrix inversion by Gaussian elimination.)
(1) Solve the following simultaneous equations by matrix inversion\begin {align*} 3x-y-z & =2\\ x-2y-3z & =0\\ 4x+y+2z & =4 \end {align*}
(2)\begin {align*} 3x+y+2z & =3\\ 2x-3y-z & =-2\\ x+y+z & =1 \end {align*}
Solution
In Matrix form\[\begin {pmatrix} 3 & -1 & -1\\ 1 & -2 & -3\\ 4 & 1 & 2 \end {pmatrix}\begin {pmatrix} x\\ y\\ z \end {pmatrix} =\begin {pmatrix} 2\\ 0\\ 4 \end {pmatrix} \] Then \begin {equation} \begin {pmatrix} x\\ y\\ z \end {pmatrix} =\begin {pmatrix} 3 & -1 & -1\\ 1 & -2 & -3\\ 4 & 1 & 2 \end {pmatrix} ^{-1}\begin {pmatrix} 2\\ 0\\ 4 \end {pmatrix} \tag {1} \end {equation} To find the matrix inverse, the method of Gaussian elimination is used. \[\begin {pmatrix} 3 & -1 & -1 & 1 & 0 & 0\\ 1 & -2 & -3 & 0 & 1 & 0\\ 4 & 1 & 2 & 0 & 0 & 1 \end {pmatrix} \] Swapping \(R_{2}\) and \(R_{1}\)\[\begin {pmatrix} 1 & -2 & -3 & 0 & 1 & 0\\ 3 & -1 & -1 & 1 & 0 & 0\\ 4 & 1 & 2 & 0 & 0 & 1 \end {pmatrix} \] \(R_{2}=R_{2}-3R_{1}\)\[\begin {pmatrix} 1 & -2 & -3 & 0 & 1 & 0\\ 0 & 5 & 8 & 1 & -3 & 0\\ 4 & 1 & 2 & 0 & 0 & 1 \end {pmatrix} \] \(R_{3}=R_{3}-4R_{1}\)\[\begin {pmatrix} 1 & -2 & -3 & 0 & 1 & 0\\ 0 & 5 & 8 & 1 & -3 & 0\\ 0 & 9 & 14 & 0 & -4 & 1 \end {pmatrix} \] \(R_{2}=9R_{2}\ \) and \(R_{3}=5R_{3}\) gives\[\begin {pmatrix} 1 & -2 & -3 & 0 & 1 & 0\\ 0 & 45 & 72 & 9 & -27 & 0\\ 0 & 45 & 70 & 0 & -20 & 5 \end {pmatrix} \] \(R_{3}=R_{3}-R_{2}\)\[\begin {pmatrix} 1 & -2 & -3 & 0 & 1 & 0\\ 0 & 45 & 72 & 9 & -27 & 0\\ 0 & 0 & -2 & -9 & 7 & 5 \end {pmatrix} \] \(R_{2}=\frac {R_{2}}{45},R_{3}=\frac {R_{3}}{-2}\)\[\begin {pmatrix} 1 & -2 & -3 & 0 & 1 & 0\\ 0 & 1 & \frac {72}{45} & \frac {9}{45} & -\frac {27}{45} & 0\\ 0 & 0 & 1 & \frac {9}{2} & \frac {7}{-2} & \frac {5}{-2}\end {pmatrix} \] \(R_{2}=R_{2}-\frac {72}{45}R_{3}\)\[\begin {pmatrix} 1 & -2 & -3 & 0 & 1 & 0\\ 0 & 1 & 0 & \frac {9}{45}-\left (\frac {72}{45}\right ) \left (\frac {9}{2}\right ) & -\frac {27}{45}-\left (\frac {72}{45}\right ) \left (-\frac {7}{2}\right ) & -\left (\frac {72}{45}\right ) \left (-\frac {5}{2}\right ) \\ 0 & 0 & 1 & \frac {9}{2} & \frac {7}{-2} & \frac {5}{-2}\end {pmatrix} =\begin {pmatrix} 1 & -2 & -3 & 0 & 1 & 0\\ 0 & 1 & 0 & -7 & 5 & 4\\ 0 & 0 & 1 & \frac {9}{2} & \frac {7}{-2} & \frac 
{5}{-2}\end {pmatrix} \] \(R_{1}=R_{1}+3R_{3}\)\[\begin {pmatrix} 1 & -2 & 0 & 3\left (\frac {9}{2}\right ) & 1+3\left (-\frac {7}{2}\right ) & 3\left (-\frac {5}{2}\right ) \\ 0 & 1 & 0 & -7 & 5 & 4\\ 0 & 0 & 1 & \frac {9}{2} & \frac {7}{-2} & \frac {5}{-2}\end {pmatrix} =\begin {pmatrix} 1 & -2 & 0 & \frac {27}{2} & -\frac {19}{2} & -\frac {15}{2}\\ 0 & 1 & 0 & -7 & 5 & 4\\ 0 & 0 & 1 & \frac {9}{2} & \frac {7}{-2} & \frac {5}{-2}\end {pmatrix} \] \(R_{1}=R_{1}+2R_{2}\)\[\begin {pmatrix} 1 & 0 & 0 & \frac {27}{2}+2\left (-7\right ) & -\frac {19}{2}+2\relax (5) & -\frac {15}{2}+2\relax (4) \\ 0 & 1 & 0 & -7 & 5 & 4\\ 0 & 0 & 1 & \frac {9}{2} & -\frac {7}{2} & -\frac {5}{2}\end {pmatrix} =\begin {pmatrix} 1 & 0 & 0 & -\frac {1}{2} & \frac {1}{2} & \frac {1}{2}\\ 0 & 1 & 0 & -7 & 5 & 4\\ 0 & 0 & 1 & \frac {9}{2} & -\frac {7}{2} & -\frac {5}{2}\end {pmatrix} \] Since now the LHS matrix is \(I\), then the RHS is the inverse. Therefore \[\begin {pmatrix} 3 & -1 & -1\\ 1 & -2 & -3\\ 4 & 1 & 2 \end {pmatrix} ^{-1}=\begin {pmatrix} -\frac {1}{2} & \frac {1}{2} & \frac {1}{2}\\ -7 & 5 & 4\\ \frac {9}{2} & -\frac {7}{2} & -\frac {5}{2}\end {pmatrix} \] Using the above in (1) gives\begin {align*} \begin {pmatrix} x\\ y\\ z \end {pmatrix} & =\begin {pmatrix} -\frac {1}{2} & \frac {1}{2} & \frac {1}{2}\\ -7 & 5 & 4\\ \frac {9}{2} & -\frac {7}{2} & -\frac {5}{2}\end {pmatrix}\begin {pmatrix} 2\\ 0\\ 4 \end {pmatrix} \\ & =\begin {pmatrix} 1\\ 2\\ -1 \end {pmatrix} \end {align*}
Hence \(x=1,y=2,z=-1.\)
In matrix form
\begin {align} \begin {pmatrix} 3 & 1 & 2\\ 2 & -3 & -1\\ 1 & 1 & 1 \end {pmatrix}\begin {pmatrix} x\\ y\\ z \end {pmatrix} & =\begin {pmatrix} 3\\ -2\\ 1 \end {pmatrix} \nonumber \\\begin {pmatrix} x\\ y\\ z \end {pmatrix} & =\begin {pmatrix} 3 & 1 & 2\\ 2 & -3 & -1\\ 1 & 1 & 1 \end {pmatrix} ^{-1}\begin {pmatrix} 3\\ -2\\ 1 \end {pmatrix} \tag {1} \end {align}
To find the matrix inverse, the method of Gaussian elimination is used. \[\begin {pmatrix} 3 & 1 & 2 & 1 & 0 & 0\\ 2 & -3 & -1 & 0 & 1 & 0\\ 1 & 1 & 1 & 0 & 0 & 1 \end {pmatrix} \] Swapping \(R_{3}\) and \(R_{1}\)\[\begin {pmatrix} 1 & 1 & 1 & 0 & 0 & 1\\ 2 & -3 & -1 & 0 & 1 & 0\\ 3 & 1 & 2 & 1 & 0 & 0 \end {pmatrix} \] \(R_{2}=R_{2}-2R_{1}\)\[\begin {pmatrix} 1 & 1 & 1 & 0 & 0 & 1\\ 0 & -5 & -3 & 0 & 1 & -2\\ 3 & 1 & 2 & 1 & 0 & 0 \end {pmatrix} \] \(R_{3}=R_{3}-3R_{1}\)\[\begin {pmatrix} 1 & 1 & 1 & 0 & 0 & 1\\ 0 & -5 & -3 & 0 & 1 & -2\\ 0 & -2 & -1 & 1 & 0 & -3 \end {pmatrix} \] \(R_{2}=2R_{2},R_{3}=5R_{3}\)\[\begin {pmatrix} 1 & 1 & 1 & 0 & 0 & 1\\ 0 & -10 & -6 & 0 & 2 & -4\\ 0 & -10 & -5 & 5 & 0 & -15 \end {pmatrix} \] \(R_{3}=R_{3}-R_{2}\)\[\begin {pmatrix} 1 & 1 & 1 & 0 & 0 & 1\\ 0 & -10 & -6 & 0 & 2 & -4\\ 0 & 0 & 1 & 5 & -2 & -11 \end {pmatrix} \] \(R_{2}=\frac {R_{2}}{-10}\)\[\begin {pmatrix} 1 & 1 & 1 & 0 & 0 & 1\\ 0 & 1 & \frac {3}{5} & 0 & \frac {1}{-5} & \frac {2}{5}\\ 0 & 0 & 1 & 5 & -2 & -11 \end {pmatrix} \] \(R_{2}=R_{2}-\frac {3}{5}R_{3}\)\[\begin {pmatrix} 1 & 1 & 1 & 0 & 0 & 1\\ 0 & 1 & 0 & -\frac {3}{5}\relax (5) & \frac {1}{-5}-\frac {3}{5}\left ( -2\right ) & \frac {2}{5}-\frac {3}{5}\left (-11\right ) \\ 0 & 0 & 1 & 5 & -2 & -11 \end {pmatrix} =\begin {pmatrix} 1 & 1 & 1 & 0 & 0 & 1\\ 0 & 1 & 0 & -3 & 1 & 7\\ 0 & 0 & 1 & 5 & -2 & -11 \end {pmatrix} \] \(R_{1}=R_{1}-R_{3}\)\[\begin {pmatrix} 1 & 1 & 0 & -5 & 2 & 12\\ 0 & 1 & 0 & -3 & 1 & 7\\ 0 & 0 & 1 & 5 & -2 & -11 \end {pmatrix} \] \(R_{1}=R_{1}-R_{2}\)\[\begin {pmatrix} 1 & 0 & 0 & -2 & 1 & 5\\ 0 & 1 & 0 & -3 & 1 & 7\\ 0 & 0 & 1 & 5 & -2 & -11 \end {pmatrix} \] Since now the LHS matrix is \(I\), then the RHS is the inverse. 
Therefore \[\begin {pmatrix} 3 & 1 & 2\\ 2 & -3 & -1\\ 1 & 1 & 1 \end {pmatrix} ^{-1}=\begin {pmatrix} -2 & 1 & 5\\ -3 & 1 & 7\\ 5 & -2 & -11 \end {pmatrix} \] Using the above in (1) gives\begin {align*} \begin {pmatrix} x\\ y\\ z \end {pmatrix} & =\begin {pmatrix} -2 & 1 & 5\\ -3 & 1 & 7\\ 5 & -2 & -11 \end {pmatrix}\begin {pmatrix} 3\\ -2\\ 1 \end {pmatrix} \\ & =\begin {pmatrix} -3\\ -4\\ 8 \end {pmatrix} \end {align*}
Hence \(x=-3,y=-4,z=8.\)
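The Gaussian-elimination inversion carried out above can be sketched in a few lines of Python with exact arithmetic (the `invert` helper is mine; it pivots on the first nonzero entry rather than reproducing the exact hand-chosen row swaps):

```python
from fractions import Fraction

# Gauss-Jordan inversion: augment with the identity, reduce the left half to I
def invert(A):
    n = len(A)
    M = [[Fraction(A[i][j]) for j in range(n)]
         + [Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for col in range(n):
        pivot = next(r for r in range(col, n) if M[r][col] != 0)  # nonzero pivot
        M[col], M[pivot] = M[pivot], M[col]                       # swap rows
        M[col] = [x / M[col][col] for x in M[col]]                # normalize pivot row
        for r in range(n):
            if r != col and M[r][col] != 0:
                M[r] = [a - M[r][col] * b for a, b in zip(M[r], M[col])]
    return [row[n:] for row in M]   # right half is the inverse

A = [[3, 1, 2], [2, -3, -1], [1, 1, 1]]
assert invert(A) == [[-2, 1, 5], [-3, 1, 7], [5, -2, -11]]
```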
For the matrix\[ A=\begin {bmatrix} 1 & 2 & 3\\ 4 & 5 & 6\\ 7 & 8 & 10 \end {bmatrix} \] Find the cofactor and the inverse. Verify that your inverse does the job.
Solution
The cofactor matrix \(A_{C}\) has elements \(\left (A_{C}\right ) _{ij}=\left ( -1\right ) ^{i+j}\) \(\left \vert A\right \vert _{ij}\) where \(\left \vert A\right \vert _{ij}\) is determinant of \(A\) with row \(i\) and column \(j\) removed. Hence\begin {equation} A_{C}=\begin {bmatrix} +A_{11} & -A_{12} & +A_{13}\\ -A_{21} & +A_{22} & -A_{23}\\ +A_{31} & -A_{32} & +A_{33}\end {bmatrix} \tag {1} \end {equation} Where \begin {align*} A_{11} & =\begin {vmatrix} 5 & 6\\ 8 & 10 \end {vmatrix} =2\\ A_{12} & =\begin {vmatrix} 4 & 6\\ 7 & 10 \end {vmatrix} =-2\\ A_{13} & =\begin {vmatrix} 4 & 5\\ 7 & 8 \end {vmatrix} =-3\\ A_{21} & =\begin {vmatrix} 2 & 3\\ 8 & 10 \end {vmatrix} =-4\\ A_{22} & =\begin {vmatrix} 1 & 3\\ 7 & 10 \end {vmatrix} =-11\\ A_{23} & =\begin {vmatrix} 1 & 2\\ 7 & 8 \end {vmatrix} =-6\\ A_{31} & =\begin {vmatrix} 2 & 3\\ 5 & 6 \end {vmatrix} =-3\\ A_{32} & =\begin {vmatrix} 1 & 3\\ 4 & 6 \end {vmatrix} =-6\\ A_{33} & =\begin {vmatrix} 1 & 2\\ 4 & 5 \end {vmatrix} =-3 \end {align*}
Substituting all the above into (1) gives the cofactor matrix\begin {align*} A_{C} & =\begin {bmatrix} +2 & -\left (-2\right ) & +\left (-3\right ) \\ -\left (-4\right ) & +\left (-11\right ) & -\left (-6\right ) \\ +\left (-3\right ) & -\left (-6\right ) & +\left (-3\right ) \end {bmatrix} \\ & =\begin {bmatrix} 2 & 2 & -3\\ 4 & -11 & 6\\ -3 & 6 & -3 \end {bmatrix} \end {align*}
The inverse of \(A\) is\begin {equation} A^{-1}=\frac {1}{\det \relax (A) }A_{C}^{T} \tag {2} \end {equation} So we just need to find \(\det \relax (A) \) and transpose the cofactor matrix. But \[ \det \relax (A) =A_{11}-2A_{12}+3A_{13}\] By expanding along the first row. Hence\begin {align*} \det \relax (A) & =\relax (2) -2\left (-2\right ) +3\left ( -3\right ) \\ & =-3 \end {align*}
Hence (2) becomes\begin {align*} A^{-1} & =\frac {-1}{3}\begin {bmatrix} 2 & 2 & -3\\ 4 & -11 & 6\\ -3 & 6 & -3 \end {bmatrix} ^{T}\\ & =\frac {-1}{3}\begin {bmatrix} 2 & 4 & -3\\ 2 & -11 & 6\\ -3 & 6 & -3 \end {bmatrix} \\ & =\begin {bmatrix} -\frac {2}{3} & -\frac {4}{3} & 1\\ -\frac {2}{3} & \frac {11}{3} & -2\\ 1 & -2 & 1 \end {bmatrix} \end {align*}
To verify \begin {align*} AA^{-1} & =\begin {bmatrix} 1 & 2 & 3\\ 4 & 5 & 6\\ 7 & 8 & 10 \end {bmatrix}\begin {bmatrix} -\frac {2}{3} & -\frac {4}{3} & 1\\ -\frac {2}{3} & \frac {11}{3} & -2\\ 1 & -2 & 1 \end {bmatrix} \\ & =\begin {bmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end {bmatrix} \end {align*}
And \begin {align*} A^{-1}A & =\begin {bmatrix} -\frac {2}{3} & -\frac {4}{3} & 1\\ -\frac {2}{3} & \frac {11}{3} & -2\\ 1 & -2 & 1 \end {bmatrix}\begin {bmatrix} 1 & 2 & 3\\ 4 & 5 & 6\\ 7 & 8 & 10 \end {bmatrix} \\ & =\begin {bmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end {bmatrix} \end {align*}
Verified. It does the job.
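The cofactor-transpose construction translates directly into a short Python sketch (function names are mine; exact arithmetic keeps the thirds from turning into rounding noise):

```python
from fractions import Fraction

# Delete row i and column j of a 3x3 matrix
def minor(M, i, j):
    return [[M[r][c] for c in range(3) if c != j] for r in range(3) if r != i]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

# Inverse via A^{-1} = adj(A)/det(A), adj(A) = transpose of the cofactor matrix
def inverse_via_cofactors(A):
    cof = [[(-1) ** (i + j) * det2(minor(A, i, j)) for j in range(3)]
           for i in range(3)]
    detA = sum(A[0][j] * cof[0][j] for j in range(3))  # expand along first row
    return [[Fraction(cof[j][i], detA) for j in range(3)] for i in range(3)]

A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
expected = [[Fraction(-2, 3), Fraction(-4, 3), 1],
            [Fraction(-2, 3), Fraction(11, 3), -2],
            [1, -2, 1]]
assert inverse_via_cofactors(A) == expected
```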
Show that\[ \left (MN\right ) ^{\dag }=N^{\dag }M^{\dag }\] Consequently the product of two Hermitian matrices is not generally Hermitian unless they commute.
Solution
\(A^{\dag }\) is called the adjoint of the matrix \(A\). It is the transpose of \(A\) followed by taking the complex conjugate of each entry in the result. Hence for a real matrix \(A\) the adjoint is the same as the transpose, since the complex conjugate of a real value is itself. So we start by finding the transpose \(\left (MN\right ) ^{T}\) and at the end apply the conjugate.\begin {align*} \left (MN\right ) _{ij}^{T} & =\left (MN\right ) _{ji}\\ & =\sum _{k}M_{jk}N_{ki}\\ & =\sum _{k}M_{kj}^{T}N_{ik}^{T}\\ & =\sum _{k}N_{ik}^{T}M_{kj}^{T}\\ & =\left (N^{T}M^{T}\right ) _{ij} \end {align*}
The sum above is over \(k\), which goes from \(1\) to the number of columns in \(M\) (which must be the same as the number of rows in \(N\) for the product to be possible). The above shows that\[ \left (MN\right ) ^{T}=N^{T}M^{T}\] Therefore, using the fact that complex conjugation distributes over sums and products of entries,\begin {align*} \left (MN\right ) ^{\dag } & =\left (N^{T}M^{T}\right ) ^{\ast }\\ & =\left (N^{T}\right ) ^{\ast }\left (M^{T}\right ) ^{\ast }\\ & =N^{\dag }M^{\dag } \end {align*}
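A quick numerical spot check of \(\left (MN\right ) ^{\dag }=N^{\dag }M^{\dag }\) on small complex matrices (Python; `dagger` and `matmul` are my own helpers):

```python
# Spot check of (MN)^dagger = N^dagger M^dagger with exact small complex entries

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def dagger(A):
    # transpose, then complex-conjugate each entry
    return [[A[j][i].conjugate() for j in range(len(A))] for i in range(len(A[0]))]

M = [[1 + 2j, 3], [0, 4 - 1j]]
N = [[2, -1j], [1 + 1j, 5]]
assert dagger(matmul(M, N)) == matmul(dagger(N), dagger(M))
```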
A matrix \(A\) is called Hermitian if \(A^{\dag }=A\) (if instead \(A^{\dag }=-A\), it is called anti-Hermitian). A real matrix is Hermitian exactly when it is symmetric, since conjugation leaves real entries unchanged.
Assume \(M,N\) are Hermitian, i.e. \(M^{\dag }=M\) and \(N^{\dag }=N\). Then from the result above\[ \left (MN\right ) ^{\dag }=N^{\dag }M^{\dag }=NM \] Now, if \(N,M\) commute, then \(NM=MN\) and the above becomes\[ \left (MN\right ) ^{\dag }=MN \] Hence the product \(MN\) is Hermitian. But if \(N,M\) do not commute, then \(\left (MN\right ) ^{\dag }=NM\neq MN\), so the product is not Hermitian.
(1) Show that \begin {equation} \operatorname {Tr}\left (MN\right ) =\operatorname {Tr}\left (NM\right ) \tag {8.4.53} \end {equation} (First part only).
Solution
The trace of a matrix \(A\) is the sum of the elements on its diagonal. The matrix must be square for this to apply. Hence \[ \operatorname {Tr}\left (A\right ) =\sum _{v}A_{vv}\] Where the sum over \(v\) runs over the rows (equivalently the columns, since the matrix is square).
In the following, we use the definition of the matrix product, \(\left (MN\right ) _{ij}=\sum _{k}M_{ik}N_{kj}\), where the sum over \(k\) runs over the columns of \(M\) (which must equal the number of rows of \(N\)). Now we can write\begin {align*} \operatorname {Tr}\left (MN\right ) & =\sum _{v}\left (MN\right ) _{vv}\\ & =\sum _{v}\sum _{k}M_{vk}N_{kv} \end {align*}

Since these are finite sums of numbers, we may swap both the order of summation and the order of the factors. Hence\begin {align*} \operatorname {Tr}\left (MN\right ) & =\sum _{k}\sum _{v}N_{kv}M_{vk}\\ & =\sum _{k}\left (NM\right ) _{kk}\\ & =\operatorname {Tr}\left (NM\right ) \end {align*}

Hence \(\operatorname {Tr}\left (MN\right ) =\operatorname {Tr}\left (NM\right ) \). Note that both traces are defined as long as \(MN\) and \(NM\) are square, which holds whenever \(M\) is \(n\times m\) and \(N\) is \(m\times n\); the matrices \(M,N\) need not themselves be square.
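A numerical spot check in Python (helper names mine); the factors here are deliberately non-square, since the identity needs only that both products exist and are square:

```python
import random

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

random.seed(0)
M = [[random.randint(-5, 5) for _ in range(4)] for _ in range(3)]   # 3 x 4
N = [[random.randint(-5, 5) for _ in range(3)] for _ in range(4)]   # 4 x 3
assert trace(matmul(M, N)) == trace(matmul(N, M))
```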
Consider four Dirac matrices that obey\begin {equation} M_{i}M_{j}+M_{j}M_{i}=2\delta _{ij}I \tag {8.4.56} \end {equation} Where the Kronecker delta symbol is defined as follows\begin {equation} \delta _{ij}=1\qquad \text {if}\ i=j,0\text { if}\ i\neq j \tag {8.4.57} \end {equation} Thus the square of each Dirac matrix is the unit matrix and any two distinct Dirac matrices anticommute. Using the latter property show that the matrices are traceless. (Use equation (8.4.54).)
Solution
Eq (8.4.54) from the book says\begin {equation} \operatorname {Tr}\left (ABC\right ) =\operatorname {Tr}\left (BCA\right ) =\operatorname {Tr}\left (CAB\right ) \tag {8.4.54} \end {equation} Some definitions first. Two matrices \(A,B\) anticommute if \(AB=-BA\). A matrix is traceless if its trace (the sum of its diagonal elements) is zero.
There are four Dirac matrices \(M_{1},M_{2},M_{3},M_{4}\). Each is a \(4\times 4\) matrix.
Setting \(j=i\) in \(M_{i}M_{j}+M_{j}M_{i}=2\delta _{ij}I\), and noting \(\delta _{ii}=1\) (no sum over \(i\)), gives\begin {align*} 2M_{i}M_{i} & =2I\\ M_{i}^{2} & =I \end {align*}

Premultiplying both sides by \(M_{j}\) with \(j\neq i\) gives\[ M_{j}M_{i}M_{i}=M_{j}\] Taking the trace of both sides\[ \operatorname {Tr}\left (M_{j}\right ) =\operatorname {Tr}\left (M_{j}M_{i}M_{i}\right ) \] But distinct Dirac matrices anticommute, hence \(M_{j}M_{i}=-M_{i}M_{j}\) for \(i\neq j\), and the above becomes\[ \operatorname {Tr}\left (M_{j}\right ) =-\operatorname {Tr}\left (M_{i}M_{j}M_{i}\right ) \] Using the cyclic property (8.4.54), \(\operatorname {Tr}\left (M_{i}M_{j}M_{i}\right ) =\operatorname {Tr}\left (M_{j}M_{i}M_{i}\right ) \), the above becomes\begin {align*} \operatorname {Tr}\left (M_{j}\right ) & =-\operatorname {Tr}\left (M_{j}M_{i}M_{i}\right ) \\ & =-\operatorname {Tr}\left (M_{j}M_{i}^{2}\right ) \end {align*}

But \(M_{i}^{2}=I\), therefore\[ \operatorname {Tr}\left (M_{j}\right ) =-\operatorname {Tr}\left (M_{j}\right ) \] This is like saying \(n=-n\), which is only possible if \(n=0\). Hence \(\operatorname {Tr}\left (M_{j}\right ) =0\) for every \(j\): the Dirac matrices are traceless.
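The \(2\times 2\) Pauli matrices satisfy the same algebra \(\sigma _{i}\sigma _{j}+\sigma _{j}\sigma _{i}=2\delta _{ij}I\), so they make a convenient low-dimensional stand-in for checking this argument numerically (a Python sketch; helper names mine):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def trace(A):
    return A[0][0] + A[1][1]

sigma = [
    [[0, 1], [1, 0]],       # sigma_x
    [[0, -1j], [1j, 0]],    # sigma_y
    [[1, 0], [0, -1]],      # sigma_z
]

for i in range(3):
    for j in range(3):
        anti = add(matmul(sigma[i], sigma[j]), matmul(sigma[j], sigma[i]))
        # expected: 2*I when i == j, the zero matrix otherwise
        expected = [[2 * (1 if (i == j and r == c) else 0) for c in range(2)]
                    for r in range(2)]
        assert anti == expected
    assert trace(sigma[i]) == 0   # and indeed each one is traceless
```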
Show that the following matrix \(U\) is unitary. Argue that the determinant of a unitary matrix must be a unimodular complex number. What is it for this example?\[ U=\begin {bmatrix} \frac {1+i\sqrt {3}}{4} & \frac {\sqrt {3}\left (1+i\right ) }{2\sqrt {2}}\\ \frac {-\sqrt {3}\left (1+i\right ) }{2\sqrt {2}} & \frac {i+\sqrt {3}}{4}\end {bmatrix} \] Solution
A matrix \(U\) is unitary if \(U^{\dag }=U^{-1}\). Where \(U^{\dag }\) means to take the transpose followed by complex conjugate. For the above\begin {equation} U^{-1}=\frac {1}{\det \relax (U) }\begin {bmatrix} U_{22} & -U_{12}\\ -U_{21} & U_{11}\end {bmatrix} \tag {1} \end {equation} But \begin {align*} \det \relax (U) & =\begin {vmatrix} \frac {1+i\sqrt {3}}{4} & \frac {\sqrt {3}\left (1+i\right ) }{2\sqrt {2}}\\ \frac {-\sqrt {3}\left (1+i\right ) }{2\sqrt {2}} & \frac {i+\sqrt {3}}{4}\end {vmatrix} \\ & =\left (\frac {1+i\sqrt {3}}{4}\right ) \left (\frac {i+\sqrt {3}}{4}\right ) -\left (\frac {\sqrt {3}\left (1+i\right ) }{2\sqrt {2}}\right ) \left ( \frac {-\sqrt {3}\left (1+i\right ) }{2\sqrt {2}}\right ) \\ & =\frac {1}{4}i-\left (-\frac {3}{4}i\right ) \\ & =i \end {align*}
Hence (1) becomes\begin {align} U^{-1} & =\frac {1}{i}\begin {bmatrix} \frac {i+\sqrt {3}}{4} & -\frac {\sqrt {3}\left (1+i\right ) }{2\sqrt {2}}\\ \frac {\sqrt {3}\left (1+i\right ) }{2\sqrt {2}} & \frac {1+i\sqrt {3}}{4}\end {bmatrix} \nonumber \\ & =-i\begin {bmatrix} \frac {i+\sqrt {3}}{4} & -\frac {\sqrt {3}\left (1+i\right ) }{2\sqrt {2}}\\ \frac {\sqrt {3}\left (1+i\right ) }{2\sqrt {2}} & \frac {1+i\sqrt {3}}{4}\end {bmatrix} \nonumber \\ & =\begin {bmatrix} -i\left (\frac {i+\sqrt {3}}{4}\right ) & \left (-i\right ) \left ( -\frac {\sqrt {3}\left (1+i\right ) }{2\sqrt {2}}\right ) \\ \left (-i\right ) \left (\frac {\sqrt {3}\left (1+i\right ) }{2\sqrt {2}}\right ) & \left (-i\right ) \left (\frac {1+i\sqrt {3}}{4}\right ) \end {bmatrix} \nonumber \\ & =\begin {bmatrix} \left (\frac {1-i\sqrt {3}}{4}\right ) & \frac {\sqrt {3}\left (i-1\right ) }{2\sqrt {2}}\\ \frac {\sqrt {3}\left (1-i\right ) }{2\sqrt {2}} & \frac {-i+\sqrt {3}}{4}\end {bmatrix} \tag {1} \end {align}
Now \(U^{\dag }\) is found.\begin {align} U^{\dag } & =\left (U^{T}\right ) ^{\ast }\nonumber \\ & =\left ( \begin {bmatrix} \frac {1+i\sqrt {3}}{4} & \frac {\sqrt {3}\left (1+i\right ) }{2\sqrt {2}}\\ \frac {-\sqrt {3}\left (1+i\right ) }{2\sqrt {2}} & \frac {i+\sqrt {3}}{4}\end {bmatrix} ^{T}\right ) ^{\ast }\nonumber \\ & =\begin {bmatrix} \frac {1+i\sqrt {3}}{4} & \frac {-\sqrt {3}\left (1+i\right ) }{2\sqrt {2}}\\ \frac {\sqrt {3}\left (1+i\right ) }{2\sqrt {2}} & \frac {i+\sqrt {3}}{4}\end {bmatrix} ^{\ast }\nonumber \\ & =\begin {bmatrix} \frac {1-i\sqrt {3}}{4} & \frac {-\sqrt {3}\left (1-i\right ) }{2\sqrt {2}}\\ \frac {\sqrt {3}\left (1-i\right ) }{2\sqrt {2}} & \frac {-i+\sqrt {3}}{4}\end {bmatrix} \nonumber \\ & =\begin {bmatrix} \frac {1-i\sqrt {3}}{4} & \frac {\sqrt {3}\left (i-1\right ) }{2\sqrt {2}}\\ \frac {\sqrt {3}\left (1-i\right ) }{2\sqrt {2}} & \frac {-i+\sqrt {3}}{4}\end {bmatrix} \tag {2} \end {align}
Comparing (1,2) shows they are the same. Hence \(U\) is unitary.
A complex number \(z\) is unimodular if \(\left \vert z\right \vert =1\). For this example we found above that \(\det \relax (U) =i\), and \(\left \vert i\right \vert =1\). Verified.
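As an independent numerical check (a NumPy sketch, not part of the assigned solution), we can verify directly that \(U^{\dag }=U^{-1}\) and that \(\det \relax (U) =i\):

```python
import numpy as np

# Entries of U as given in the problem statement
s3, s2 = np.sqrt(3), np.sqrt(2)
U = np.array([
    [(1 + 1j * s3) / 4,          s3 * (1 + 1j) / (2 * s2)],
    [-s3 * (1 + 1j) / (2 * s2),  (1j + s3) / 4],
])

Udag = U.conj().T                             # adjoint: transpose then conjugate
print(np.allclose(Udag, np.linalg.inv(U)))    # unitarity: U^dag = U^-1
print(np.isclose(np.linalg.det(U), 1j))       # det(U) = i, a unimodular number
```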
Show that if\[ L=\begin {bmatrix} 0 & -1\\ 1 & 0 \end {bmatrix} \] then\[ L^{2}=-\begin {bmatrix} 1 & 0\\ 0 & 1 \end {bmatrix} \] Now consider \(F\relax (L) =e^{\theta L}\) and show by writing out the series and using \(L^{2}=-I\), that the series converges to a familiar matrix discussed earlier in the chapter.
Solution
\begin {align} e^{\theta L} & =I+\theta L+\frac {\left (\theta L\right ) ^{2}}{2!}+\frac {\left (\theta L\right ) ^{3}}{3!}+\cdots \nonumber \\ & =I+\theta \begin {bmatrix} 0 & -1\\ 1 & 0 \end {bmatrix} +\frac {1}{2!}\theta ^{2}\begin {bmatrix} 0 & -1\\ 1 & 0 \end {bmatrix} ^{2}+\frac {1}{3!}\theta ^{3}\begin {bmatrix} 0 & -1\\ 1 & 0 \end {bmatrix} ^{3}+\cdots \tag {1} \end {align}
But \[\begin {bmatrix} 0 & -1\\ 1 & 0 \end {bmatrix} ^{2}=-I \]\[\begin {bmatrix} 0 & -1\\ 1 & 0 \end {bmatrix} ^{3}=\begin {bmatrix} 0 & -1\\ 1 & 0 \end {bmatrix} ^{2}\begin {bmatrix} 0 & -1\\ 1 & 0 \end {bmatrix} =-IL=-L \]\[\begin {bmatrix} 0 & -1\\ 1 & 0 \end {bmatrix} ^{4}=\begin {bmatrix} 0 & -1\\ 1 & 0 \end {bmatrix} ^{2}\begin {bmatrix} 0 & -1\\ 1 & 0 \end {bmatrix} ^{2}=\left (-I\right ) \left (-I\right ) =I \]\[\begin {bmatrix} 0 & -1\\ 1 & 0 \end {bmatrix} ^{5}=\begin {bmatrix} 0 & -1\\ 1 & 0 \end {bmatrix} ^{4}\begin {bmatrix} 0 & -1\\ 1 & 0 \end {bmatrix} =IL=L \]\[\begin {bmatrix} 0 & -1\\ 1 & 0 \end {bmatrix} ^{6}=\begin {bmatrix} 0 & -1\\ 1 & 0 \end {bmatrix} ^{4}\begin {bmatrix} 0 & -1\\ 1 & 0 \end {bmatrix} ^{2}=-I \]\[\begin {bmatrix} 0 & -1\\ 1 & 0 \end {bmatrix} ^{7}=\begin {bmatrix} 0 & -1\\ 1 & 0 \end {bmatrix} ^{6}\begin {bmatrix} 0 & -1\\ 1 & 0 \end {bmatrix} =-L \] And so on. Hence (1) becomes\begin {align*} e^{\theta L} & =I+\theta L-\frac {1}{2!}\theta ^{2}I-\frac {1}{3!}\theta ^{3}L+\frac {1}{4!}\theta ^{4}I+\frac {1}{5!}\theta ^{5}L-\frac {1}{6!}\theta ^{6}I-\frac {1}{7!}\theta ^{7}L+\cdots \\ & =\begin {bmatrix} 1 & 0\\ 0 & 1 \end {bmatrix} +\theta \begin {bmatrix} 0 & -1\\ 1 & 0 \end {bmatrix} -\frac {1}{2!}\theta ^{2}\begin {bmatrix} 1 & 0\\ 0 & 1 \end {bmatrix} -\frac {1}{3!}\theta ^{3}\begin {bmatrix} 0 & -1\\ 1 & 0 \end {bmatrix} +\frac {1}{4!}\theta ^{4}\begin {bmatrix} 1 & 0\\ 0 & 1 \end {bmatrix} +\frac {1}{5!}\theta ^{5}\begin {bmatrix} 0 & -1\\ 1 & 0 \end {bmatrix} -\frac {1}{6!}\theta ^{6}\begin {bmatrix} 1 & 0\\ 0 & 1 \end {bmatrix} -\cdots \\ & =\begin {bmatrix} 1 & 0\\ 0 & 1 \end {bmatrix} +\begin {bmatrix} 0 & -\theta \\ \theta & 0 \end {bmatrix} -\frac {1}{2!}\begin {bmatrix} \theta ^{2} & 0\\ 0 & \theta ^{2}\end {bmatrix} -\frac {1}{3!}\begin {bmatrix} 0 & -\theta ^{3}\\ \theta ^{3} & 0 \end {bmatrix} +\frac {1}{4!}\begin {bmatrix} \theta ^{4} & 0\\ 0 & \theta ^{4}\end {bmatrix} +\frac {1}{5!}\begin {bmatrix} 0 & -\theta ^{5}\\ \theta ^{5} & 0 \end {bmatrix} -\frac {1}{6!}\begin {bmatrix} \theta ^{6} & 0\\ 0 & \theta ^{6}\end {bmatrix} -\cdots \\ & =\begin {bmatrix} 1-\frac {1}{2!}\theta ^{2}+\frac {1}{4!}\theta ^{4}-\frac {1}{6!}\theta ^{6}+\cdots & -\theta +\frac {1}{3!}\theta ^{3}-\frac {1}{5!}\theta ^{5}+\cdots \\ \theta -\frac {1}{3!}\theta ^{3}+\frac {1}{5!}\theta ^{5}-\cdots & 1-\frac {1}{2!}\theta ^{2}+\frac {1}{4!}\theta ^{4}-\frac {1}{6!}\theta ^{6}+\cdots \end {bmatrix} \\ & =\begin {bmatrix} \cos \left (\theta \right ) & -\sin \left (\theta \right ) \\ \sin \left (\theta \right ) & \cos \left (\theta \right ) \end {bmatrix} \end {align*}
Hence \[ e^{\theta L}=R_{\theta }^{T}\] where \(R_{\theta }\) is the rotation matrix in 2D.
Show that if \(H\) is Hermitian, then \(U=e^{iH}\) is unitary. (Write the exponential as a series and take the adjoint of each term and sum and re-exponentiate. Use the fact that exponents can be combined if only one matrix is in the picture).
Solution
A matrix \(H\) is Hermitian if \(H^{\dag }=H\), where the dagger means to take the transpose followed by the complex conjugate. If \(H\) is real, this is the same as saying \(H\) is symmetric. A matrix \(U\) is unitary if its dagger equals its inverse, i.e. \[ U^{\dag }=U^{-1}\] Starting from the given expression and expanding in a Taylor series gives\begin {align*} U & =e^{iH}\\ & =I+iH+\frac {\left (iH\right ) ^{2}}{2!}+\frac {\left (iH\right ) ^{3}}{3!}+\frac {\left (iH\right ) ^{4}}{4!}+\frac {\left (iH\right ) ^{5}}{5!}+\frac {\left (iH\right ) ^{6}}{6!}+\cdots \\ & =I+iH-\frac {H^{2}}{2!}-i\frac {H^{3}}{3!}+\frac {H^{4}}{4!}+i\frac {H^{5}}{5!}-\frac {H^{6}}{6!}+\cdots \\ & =\left (I-\frac {H^{2}}{2!}+\frac {H^{4}}{4!}-\frac {H^{6}}{6!}+\cdots \right ) +i\left (H-\frac {H^{3}}{3!}+\frac {H^{5}}{5!}-\cdots \right ) \end {align*}
Hence \[ U^{\dag }=\left (I^{\dag }-\frac {H^{\dag 2}}{2!}+\frac {H^{\dag 4}}{4!}-\frac {H^{\dag 6}}{6!}+\cdots \right ) -i\left (H^{\dag }-\frac {H^{\dag 3}}{3!}+\frac {H^{\dag 5}}{5!}-\cdots \right ) \] where the \(+i\) changed to \(-i\) because taking the adjoint involves complex conjugation. But \(H^{\dag }=H\) since \(H\) is Hermitian. The above becomes\begin {align*} U^{\dag } & =\left (I-\frac {H^{2}}{2!}+\frac {H^{4}}{4!}-\frac {H^{6}}{6!}+\cdots \right ) -i\left (H-\frac {H^{3}}{3!}+\frac {H^{5}}{5!}-\cdots \right ) \\ & =e^{-iH} \end {align*}
But \(e^{-iH}=U^{-1}\) from definition of \(U=e^{iH}\). Therefore\[ U^{\dag }=U^{-1}\] Hence \(U\) is unitary.
Show that \begin {equation} \begin {bmatrix} \vec {\sigma }\cdot \vec {a}\end {bmatrix}\begin {bmatrix} \vec {\sigma }\cdot \vec {b}\end {bmatrix} =\vec {a}\cdot \vec {b}I+i\vec {\sigma }\cdot \left (\vec {a}\times \vec {b}\right ) \tag {1} \end {equation} where \(\vec {a},\vec {b}\) are ordinary three-dimensional vectors and\[ \vec {\sigma }=\vec {i}\sigma _{x}+\vec {j}\sigma _{y}+\vec {k}\sigma _{z}\] Solution
The LHS of (1) is \begin {align*} \begin {bmatrix} \vec {\sigma }\cdot \vec {a}\end {bmatrix}\begin {bmatrix} \vec {\sigma }\cdot \vec {b}\end {bmatrix} & =\left (\sigma _{x}a_{x}+\sigma _{y}a_{y}+\sigma _{z}a_{z}\right ) \left ( \sigma _{x}b_{x}+\sigma _{y}b_{y}+\sigma _{z}b_{z}\right ) \\ & =\sigma _{x}^{2}a_{x}b_{x}+\sigma _{x}\sigma _{y}a_{x}b_{y}+\sigma _{x}\sigma _{z}a_{x}b_{z}\\ & +\sigma _{y}\sigma _{x}a_{y}b_{x}+\sigma _{y}^{2}a_{y}b_{y}+\sigma _{y}\sigma _{z}a_{y}b_{z}\\ & +\sigma _{z}\sigma _{x}a_{z}b_{x}+\sigma _{z}\sigma _{y}a_{z}b_{y}+\sigma _{z}^{2}a_{z}b_{z} \end {align*}
But for Pauli matrices, \(\sigma _{i}^{2}=I\). Hence the above becomes\[\begin {bmatrix} \vec {\sigma }\cdot \vec {a}\end {bmatrix}\begin {bmatrix} \vec {\sigma }\cdot \vec {b}\end {bmatrix} =I\left (a_{x}b_{x}+a_{y}b_{y}+a_{z}b_{z}\right ) +\sigma _{x}\sigma _{y}a_{x}b_{y}+\sigma _{x}\sigma _{z}a_{x}b_{z}+\sigma _{y}\sigma _{x}a_{y}b_{x}+\sigma _{y}\sigma _{z}a_{y}b_{z}+\sigma _{z}\sigma _{x}a_{z}b_{x}+\sigma _{z}\sigma _{y}a_{z}b_{y}\] But \(\sigma _{y}\sigma _{x}=-\sigma _{x}\sigma _{y}\) and \(\sigma _{x}\sigma _{z}=-\sigma _{z}\sigma _{x}\) and \(\sigma _{z}\sigma _{y}=-\sigma _{y}\sigma _{z}\). (I verified these by working them out). Hence the above becomes\begin {align} \begin {bmatrix} \vec {\sigma }\cdot \vec {a}\end {bmatrix}\begin {bmatrix} \vec {\sigma }\cdot \vec {b}\end {bmatrix} & =I\left (a_{x}b_{x}+a_{y}b_{y}+a_{z}b_{z}\right ) +\sigma _{x}\sigma _{y}a_{x}b_{y}+\sigma _{x}\sigma _{z}a_{x}b_{z}-\sigma _{x}\sigma _{y}a_{y}b_{x}+\sigma _{y}\sigma _{z}a_{y}b_{z}-\sigma _{x}\sigma _{z}a_{z}b_{x}-\sigma _{y}\sigma _{z}a_{z}b_{y}\nonumber \\ & =I\left (a_{x}b_{x}+a_{y}b_{y}+a_{z}b_{z}\right ) +\left (\sigma _{x}\sigma _{y}\right ) \left (a_{x}b_{y}-a_{y}b_{x}\right ) +\left (\sigma _{x}\sigma _{z}\right ) \left (a_{x}b_{z}-a_{z}b_{x}\right ) +\left ( \sigma _{y}\sigma _{z}\right ) \left (a_{y}b_{z}-a_{z}b_{y}\right ) \tag {2} \end {align}
Now we will simplify RHS of (1) and see if we get the same result as above. \begin {align} \vec {a}\cdot \vec {b}I+i\vec {\sigma }\cdot \left (\vec {a}\times \vec {b}\right ) & =I\left (a_{x}b_{x}+a_{y}b_{y}+a_{z}b_{z}\right ) +i\vec {\sigma }\cdot \left (\vec {a}\times \vec {b}\right ) \nonumber \\ & =I\left (a_{x}b_{x}+a_{y}b_{y}+a_{z}b_{z}\right ) +i\begin {pmatrix} \sigma _{x} & \sigma _{y} & \sigma _{z}\end {pmatrix} \cdot \begin {vmatrix} e_{i} & e_{j} & e_{k}\\ a_{x} & a_{y} & a_{z}\\ b_{x} & b_{y} & b_{z}\end {vmatrix} \nonumber \\ & =I\left (a_{x}b_{x}+a_{y}b_{y}+a_{z}b_{z}\right ) +i\begin {pmatrix} \sigma _{x} & \sigma _{y} & \sigma _{z}\end {pmatrix} \cdot \begin {pmatrix} a_{y}b_{z}-a_{z}b_{y} & -\left (a_{x}b_{z}-a_{z}b_{x}\right ) & a_{x}b_{y}-a_{y}b_{x}\end {pmatrix} \nonumber \\ & =I\left (a_{x}b_{x}+a_{y}b_{y}+a_{z}b_{z}\right ) +i\begin {pmatrix} \sigma _{x} & \sigma _{y} & \sigma _{z}\end {pmatrix} \cdot \begin {pmatrix} a_{y}b_{z}-a_{z}b_{y} & a_{z}b_{x}-a_{x}b_{z} & a_{x}b_{y}-a_{y}b_{x}\end {pmatrix} \nonumber \\ & =I\left (a_{x}b_{x}+a_{y}b_{y}+a_{z}b_{z}\right ) +i\left (\sigma _{x}\left (a_{y}b_{z}-a_{z}b_{y}\right ) +\sigma _{y}\left (a_{z}b_{x}-a_{x}b_{z}\right ) +\sigma _{z}\left (a_{x}b_{y}-a_{y}b_{x}\right ) \right ) \nonumber \\ & =I\left (a_{x}b_{x}+a_{y}b_{y}+a_{z}b_{z}\right ) +i\sigma _{x}\left ( a_{y}b_{z}-a_{z}b_{y}\right ) +i\sigma _{y}\left (a_{z}b_{x}-a_{x}b_{z}\right ) +i\sigma _{z}\left (a_{x}b_{y}-a_{y}b_{x}\right ) \tag {3} \end {align}
But from property of Pauli matrices (eq 8.4.48) in text, we have (Verified these by working them out)\begin {align} i\sigma _{z} & =\sigma _{x}\sigma _{y}\tag {4}\\ i\sigma _{x} & =\sigma _{y}\sigma _{z}\tag {5}\\ -i\sigma _{y} & =\sigma _{x}\sigma _{z} \tag {6} \end {align}
Substituting (4,5,6) into (3) gives\begin {align} \vec {a}\cdot \vec {b}I+i\vec {\sigma }\cdot \left (\vec {a}\times \vec {b}\right ) & =I\left (a_{x}b_{x}+a_{y}b_{y}+a_{z}b_{z}\right ) +\left (\sigma _{y}\sigma _{z}\right ) \left (a_{y}b_{z}-a_{z}b_{y}\right ) -\left ( \sigma _{x}\sigma _{z}\right ) \left (a_{z}b_{x}-a_{x}b_{z}\right ) +\left ( \sigma _{x}\sigma _{y}\right ) \left (a_{x}b_{y}-a_{y}b_{x}\right ) \nonumber \\ & =I\left (a_{x}b_{x}+a_{y}b_{y}+a_{z}b_{z}\right ) +\left (\sigma _{y}\sigma _{z}\right ) \left (a_{y}b_{z}-a_{z}b_{y}\right ) +\left ( \sigma _{x}\sigma _{z}\right ) \left (a_{x}b_{z}-a_{z}b_{x}\right ) +\left ( \sigma _{x}\sigma _{y}\right ) \left (a_{x}b_{y}-a_{y}b_{x}\right ) \tag {7} \end {align}
Comparing (2,7) shows they are the same. Hence\[\begin {bmatrix} \vec {\sigma }\cdot \vec {a}\end {bmatrix}\begin {bmatrix} \vec {\sigma }\cdot \vec {b}\end {bmatrix} =\vec {a}\cdot \vec {b}I+i\vec {\sigma }\cdot \left (\vec {a}\times \vec {b}\right ) \]
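The identity can also be spot-checked numerically for random vectors \(\vec {a},\vec {b}\) (a NumPy sketch, not part of the solution):

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = np.array([sx, sy, sz])

rng = np.random.default_rng(1)
a, b = rng.normal(size=3), rng.normal(size=3)   # arbitrary real 3-vectors

sig_a = np.einsum('i,ijk->jk', a, sigma)        # sigma . a
sig_b = np.einsum('i,ijk->jk', b, sigma)        # sigma . b
rhs = np.dot(a, b) * np.eye(2) + 1j * np.einsum('i,ijk->jk', np.cross(a, b), sigma)
print(np.allclose(sig_a @ sig_b, rhs))
```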
(a) Consider a horizontal spring-mass system. The spring has a spring constant \(k\) and is fixed at one end. The other end is attached to a block of mass \(m\) that can move without friction on a horizontal surface. The spring is stretched a length \(a\) beyond its rest length and let go. Without solving the problem using Newton’s second law, find the angular frequency of oscillations and show that it is independent of \(a\).
(b) Derive the Planck mass, length, and time in terms of Planck’s constant \(\hbar \), Newton’s constant \(G\), and speed of light \(c\). Evaluate these quantities in SI units. (10 points)
(c) Identify the relevant physical quantities and use dimensional analysis to find the characteristic length for a black hole of mass \(M\).
Solution
I was not sure whether we were supposed to solve this using dimensional analysis or using physics, so I solved it both ways. Please select the method we were supposed to use.
Using physics
Taking the relaxed position as the equilibrium position \(x=0\), with the spring extension measured relative to it, the spring potential energy is \(V\relax (x) =\frac {1}{2}kx^{2}\) and the force the spring exerts on the mass is \(F=-kx\). Using the relation \[ V^{\prime }\relax (x) =m\omega ^{2}x \] Then\begin {align*} kx & =m\omega ^{2}x\\ k & =m\omega ^{2} \end {align*}
Hence\[ \omega =\sqrt {\frac {k}{m}}\] where \(m\) is the mass of the block attached to the spring. We see the angular frequency of oscillations \(\omega \) is independent of \(a\). The mass will oscillate around \(x=0\), from \(x=+a\) to \(x=-a\). At \(x=\pm a\) the force on the mass has its maximum magnitude \(\left \vert F\right \vert =ka\) and the velocity is zero. When the mass is at \(x=0\), the force is zero but the speed is largest. The maximum amplitude of the mass from equilibrium is \(a\).
Using dimensional analysis
Let us assume that the angular frequency of the spring depends on the attached mass \(m\) and on the spring constant \(k\) and on the initial displacement \(a\) (we will find later that it does not depend on \(a\)).
The units of angular frequency \(\omega \) are radians per second, or \(T^{-1}\). The units of the mass \(m\) are \(M\), the units of \(k\) are \(MT^{-2}\) (force per unit length), and the initial extension \(a\) is a length with units \(L\). Hence assuming
\begin {equation} \omega =m^{x}k^{y}a^{z} \tag {1} \end {equation}
Using dimensional analysis, we replace each physical quantity above with its units, which gives
\begin {align*} T^{-1} & =\left [ M\right ] ^{x}\left [ MT^{-2}\right ] ^{y}\left [ L\right ] ^{z}\\ & =M^{x+y}T^{-2y}L^{z} \end {align*}
Comparing exponents gives\begin {align*} -2y & =-1\\ x+y & =0\\ z & =0 \end {align*}
Hence \(y=\frac {1}{2}\) and \(x=-\frac {1}{2}\) and \(z=0\). Therefore Eq. (1) becomes\begin {align*} \omega & =m^{-\frac {1}{2}}k^{\frac {1}{2}}\\ & =\sqrt {\frac {k}{m}} \end {align*}
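The three exponent equations form a small linear system that can be solved mechanically; a sketch:

```python
import numpy as np

# omega = m^x k^y a^z.  Rows balance M, L, T; columns are the exponents x, y, z.
# Units: m -> M,  k -> M T^-2,  a -> L;  omega -> T^-1.
A = np.array([[1.0, 1.0, 0.0],    # M:  x + y  = 0
              [0.0, 0.0, 1.0],    # L:  z      = 0
              [0.0, -2.0, 0.0]])  # T:  -2y    = -1
x, y, z = np.linalg.solve(A, np.array([0.0, 0.0, -1.0]))
print(x, y, z)   # x = -1/2, y = 1/2, z = 0, i.e. omega = sqrt(k/m)
```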
Which is the same result obtained above. This shows that \(\omega \) does not depend on \(a\), because \(z=0\).
Planck mass
Using dimensional analysis, let \(m_{p}\) be the Planck mass. Using units \(M,L,T\) for mass, length and time respectively, the units of \(m_{p}\) are \(M\). Since we want \(m_{p}\) expressed in terms of \(\hbar ,G,c\), we write\begin {equation} m_{p}=\hbar ^{x}G^{y}c^{z} \tag {1} \end {equation} And then solve for the exponents \(x,y,z\) such that the RHS has units of \(M\). The units of \(\hbar \) are \(ML^{2}T^{-1}\), the units of \(G\) are \(M^{-1}L^{3}T^{-2}\), and the units of \(c\) are \(LT^{-1}\). The above becomes\begin {align*} M & =\left (ML^{2}T^{-1}\right ) ^{x}\left (M^{-1}L^{3}T^{-2}\right ) ^{y}\left (LT^{-1}\right ) ^{z}\\ & =M^{x}L^{2x}T^{-x}M^{-y}L^{3y}T^{-2y}L^{z}T^{-z}\\ & =M^{x-y}L^{2x+3y+z}T^{-x-2y-z} \end {align*}
Therefore we need to satisfy the following equations\begin {align*} x-y & =1\\ 2x+3y+z & =0\\ -x-2y-z & =0 \end {align*}
Or\[\begin {pmatrix} 1 & -1 & 0\\ 2 & 3 & 1\\ -1 & -2 & -1 \end {pmatrix}\begin {pmatrix} x\\ y\\ z \end {pmatrix} =\begin {pmatrix} 1\\ 0\\ 0 \end {pmatrix} \] The augmented matrix is\[\begin {pmatrix} 1 & -1 & 0 & 1\\ 2 & 3 & 1 & 0\\ -1 & -2 & -1 & 0 \end {pmatrix} \] \(R_{2}=R_{2}-2R_{1}\)\[\begin {pmatrix} 1 & -1 & 0 & 1\\ 0 & 5 & 1 & -2\\ -1 & -2 & -1 & 0 \end {pmatrix} \] \(R_{3}=R_{3}+R_{1}\)\[\begin {pmatrix} 1 & -1 & 0 & 1\\ 0 & 5 & 1 & -2\\ 0 & -3 & -1 & 1 \end {pmatrix} \] \(R_{2}=3R_{2},R_{3}=5R_{3}\)\[\begin {pmatrix} 1 & -1 & 0 & 1\\ 0 & 15 & 3 & -6\\ 0 & -15 & -5 & 5 \end {pmatrix} \] \(R_{3}=R_{3}+R_{2}\)\[\begin {pmatrix} 1 & -1 & 0 & 1\\ 0 & 15 & 3 & -6\\ 0 & 0 & -2 & -1 \end {pmatrix} \] The system is now in echelon form, so no more transformations are needed. The system becomes\[\begin {pmatrix} 1 & -1 & 0\\ 0 & 15 & 3\\ 0 & 0 & -2 \end {pmatrix}\begin {pmatrix} x\\ y\\ z \end {pmatrix} =\begin {pmatrix} 1\\ -6\\ -1 \end {pmatrix} \] The last row gives \(-2z=-1\) or \(z=\frac {1}{2}\). The second row gives \(15y+3z=-6\) or \(15y+3\left (\frac {1}{2}\right ) =-6\), or \(y=-\frac {1}{2}\), and the first row gives \(x-y=1\) or \(x+\frac {1}{2}=1\), hence \(x=\frac {1}{2}\). The solution is\begin {equation} \begin {pmatrix} x\\ y\\ z \end {pmatrix} =\begin {pmatrix} \frac {1}{2}\\ -\frac {1}{2}\\ \frac {1}{2}\end {pmatrix} \tag {2} \end {equation} Using (2) in (1) gives\begin {align*} m_{p} & =\hbar ^{x}G^{y}c^{z}\\ & =\hbar ^{\frac {1}{2}}G^{-\frac {1}{2}}c^{\frac {1}{2}}\\ & =\sqrt {\frac {\hbar c}{G}} \end {align*}
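The same elimination can be checked by solving the linear system numerically (a sketch; the rows are the \(M,L,T\) equations above):

```python
import numpy as np

# m_p = hbar^x G^y c^z, with hbar -> M L^2 T^-1, G -> M^-1 L^3 T^-2, c -> L T^-1
A = np.array([[1.0, -1.0, 0.0],     # M:  x - y        = 1
              [2.0, 3.0, 1.0],      # L:  2x + 3y + z  = 0
              [-1.0, -2.0, -1.0]])  # T:  -x - 2y - z  = 0
sol = np.linalg.solve(A, np.array([1.0, 0.0, 0.0]))
print(sol)   # (1/2, -1/2, 1/2), i.e. m_p = sqrt(hbar c / G)
```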
Units in SI: using \(c=299792458\) m/s, \(\hbar =1.054571817\times 10^{-34}\) J\(\cdot \)s, and \(G=6.67430\times 10^{-11}\) m\(^{3}\)kg\(^{-1}\)s\(^{-2}\), the above gives\begin {align*} m_{p} & =\sqrt {\frac {\left (1.054571817\times 10^{-34}\right ) \left ( 299792458\right ) }{\left (6.67430\times 10^{-11}\right ) }}\\ & =2.1764\times 10^{-8}\text { kg} \end {align*}
Planck length
We now repeat the above method, but for Planck length which has units \(L\). Therefore the equation is\begin {equation} l_{p}=\hbar ^{x}G^{y}c^{z} \tag {3} \end {equation} And now we solve for \(x,y,z\) exponents such that RHS gives units of \(L\). We know that units of \(\hbar =ML^{2}T^{-1}\) and units of \(G=M^{-1}L^{3}T^{-2}\) and units of \(c=LT^{-1}\). Using dimensional analysis, the above becomes\begin {align*} L & =\left (ML^{2}T^{-1}\right ) ^{x}\left (M^{-1}L^{3}T^{-2}\right ) ^{y}\left (LT^{-1}\right ) ^{z}\\ & =M^{x}L^{2x}T^{-x}M^{-y}L^{3y}T^{-2y}L^{z}T^{-z}\\ L & =M^{x-y}L^{2x+3y+z}T^{-x-2y-z} \end {align*}
Therefore we need to satisfy the following equations\begin {align*} x-y & =0\\ 2x+3y+z & =1\\ -x-2y-z & =0 \end {align*}
The same augmented-matrix elimination steps as above apply, so they are not repeated here. The final solution is\begin {equation} \begin {pmatrix} x\\ y\\ z \end {pmatrix} =\begin {pmatrix} \frac {1}{2}\\ \frac {1}{2}\\ -\frac {3}{2}\end {pmatrix} \tag {4} \end {equation} Using (4) in (3) gives\begin {align*} l_{p} & =\hbar ^{x}G^{y}c^{z}\\ & =\hbar ^{\frac {1}{2}}G^{\frac {1}{2}}c^{-\frac {3}{2}}\\ & =\sqrt {\frac {\hbar G}{c^{3}}} \end {align*}
Units in SI: using \(c=299792458\) m/s, \(\hbar =1.054571817\times 10^{-34}\) J\(\cdot \)s, and \(G=6.67430\times 10^{-11}\) m\(^{3}\)kg\(^{-1}\)s\(^{-2}\), the above gives\begin {align*} l_{p} & =\sqrt {\frac {\left (1.054571817\times 10^{-34}\right ) \left ( 6.67430\times 10^{-11}\right ) }{\left (299792458\right ) ^{3}}}\\ & =1.6163\times 10^{-35}\text { meter} \end {align*}
Planck time
We now repeat the above method, but for Planck time which has units \(T\). Therefore the equation is\begin {equation} t_{p}=\hbar ^{x}G^{y}c^{z} \tag {5} \end {equation} And now solve for \(x,y,z\) exponents such that RHS gives units of \(T\). We know that units of \(\hbar =ML^{2}T^{-1}\) and units of \(G=M^{-1}L^{3}T^{-2}\) and units of \(c=LT^{-1}\). The above becomes\begin {align*} T & =\left (ML^{2}T^{-1}\right ) ^{x}\left (M^{-1}L^{3}T^{-2}\right ) ^{y}\left (LT^{-1}\right ) ^{z}\\ & =M^{x}L^{2x}T^{-x}M^{-y}L^{3y}T^{-2y}L^{z}T^{-z}\\ T & =M^{x-y}L^{2x+3y+z}T^{-x-2y-z} \end {align*}
Therefore we need to satisfy the following equations\begin {align*} x-y & =0\\ 2x+3y+z & =0\\ -x-2y-z & =1 \end {align*}
The same augmented-matrix elimination steps as above apply, so they are not repeated here. The final solution is\begin {equation} \begin {pmatrix} x\\ y\\ z \end {pmatrix} =\begin {pmatrix} \frac {1}{2}\\ \frac {1}{2}\\ -\frac {5}{2}\end {pmatrix} \tag {6} \end {equation} Using (6) in (5) gives\begin {align*} t_{p} & =\hbar ^{x}G^{y}c^{z}\\ & =\hbar ^{\frac {1}{2}}G^{\frac {1}{2}}c^{-\frac {5}{2}}\\ & =\sqrt {\frac {\hbar G}{c^{5}}} \end {align*}
Units in SI: using \(c=299792458\) m/s, \(\hbar =1.054571817\times 10^{-34}\) J\(\cdot \)s, and \(G=6.67430\times 10^{-11}\) m\(^{3}\)kg\(^{-1}\)s\(^{-2}\), the above gives\begin {align*} t_{p} & =\sqrt {\frac {\left (1.054571817\times 10^{-34}\right ) \left ( 6.67430\times 10^{-11}\right ) }{\left (299792458\right ) ^{5}}}\\ & =5.3912\times 10^{-44}\text { second} \end {align*}
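All three values can be reproduced numerically (a sketch using the SI constants quoted above):

```python
import numpy as np

c    = 299792458.0       # m/s (exact by definition)
hbar = 1.054571817e-34   # J s
G    = 6.67430e-11       # m^3 kg^-1 s^-2

m_p = np.sqrt(hbar * c / G)     # Planck mass
l_p = np.sqrt(hbar * G / c**3)  # Planck length
t_p = np.sqrt(hbar * G / c**5)  # Planck time
print(f"m_p = {m_p:.4e} kg, l_p = {l_p:.4e} m, t_p = {t_p:.4e} s")
```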
The characteristic length of a black hole should depend on its mass \(M\), the universal gravitational constant \(G\), and the speed of light \(c\). Therefore\[ L_{c}=M^{x}G^{y}c^{z}\] The units of \(G\) are \(M^{-1}L^{3}T^{-2}\) and the units of \(c\) are \(LT^{-1}\). The above becomes\begin {align*} L_{c} & =M^{x}\left (M^{-1}L^{3}T^{-2}\right ) ^{y}\left (LT^{-1}\right ) ^{z}\\ & =M^{x-y}L^{3y+z}T^{-2y-z} \end {align*}
Hence\begin {align*} x-y & =0\\ 3y+z & =1\\ 2y+z & =0 \end {align*}
Solving gives\[\begin {pmatrix} x\\ y\\ z \end {pmatrix} =\begin {pmatrix} 1\\ 1\\ -2 \end {pmatrix} \] Hence\[ L_{c}=\frac {MG}{c^{2}}\] This agrees, up to the dimensionless factor of \(2\) that dimensional analysis cannot determine, with the Schwarzschild radius \(r_{s}=\frac {2GM}{c^{2}}\).
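The exponent system for the black-hole length can be solved the same way; the second print evaluates \(GM/c^{2}\) for one solar mass (an illustrative value, \(M\approx 1.989\times 10^{30}\) kg, not part of the problem):

```python
import numpy as np

# L_c = M^x G^y c^z.  Rows balance M, L, T; the target has units L.
A = np.array([[1.0, -1.0, 0.0],   # M:  x - y     = 0
              [0.0, 3.0, 1.0],    # L:  3y + z    = 1
              [0.0, -2.0, -1.0]]) # T:  -2y - z   = 0
sol = np.linalg.solve(A, np.array([0.0, 1.0, 0.0]))
print(sol)   # (1, 1, -2), i.e. L_c = G M / c^2

# Illustrative value for an assumed solar mass:
G, c, M_sun = 6.67430e-11, 299792458.0, 1.989e30
r_sun = G * M_sun / c**2
print(r_sun)   # on the order of 1.5 km
```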