Covered parts of chapter 1: LU decomposition. We go from \(Ax=b\) to \(Ux=c\), where \(U\) has the pivots on the diagonal and zeros below it. To find \(c\) we solve \(Lc=b\), where \(L\) has ones on the diagonal, the multipliers used to obtain \(U\) below the diagonal, and zeros everywhere else. Putting these together gives \(LUx=b\). And this is the whole point of LU decomposition: for each new \(b'\) we find \(c'\) from \(Lc'=b'\) and then find \(x\) from \(Ux=c'\). We do not need to redo the elimination to obtain \(U\) and \(L\); it is done once. The cost is then only \(n^2\) each time we solve for a new \(x\).
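A minimal sketch of this factor-once, solve-many workflow, assuming no row exchanges are needed (the matrix and right-hand sides below are made up for illustration):

```python
import numpy as np

def lu_no_exchange(A):
    """Elimination without row exchanges: A = L @ U, L unit lower triangular."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for k in range(n):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]      # multiplier, stored in L below the diagonal
            U[i, k:] -= L[i, k] * U[k, k:]   # subtract multiplier * pivot row
    return L, U

def forward_sub(L, b):
    """Solve Lc = b from the top down."""
    c = np.zeros(len(b))
    for i in range(len(b)):
        c[i] = (b[i] - L[i, :i] @ c[:i]) / L[i, i]
    return c

def back_sub(U, c):
    """Solve Ux = c from the bottom up."""
    x = np.zeros(len(c))
    for i in range(len(c) - 1, -1, -1):
        x[i] = (c[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

# Elimination is done once on A...
A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
L, U = lu_no_exchange(A)

# ...then each new right-hand side costs only the two triangular solves, O(n^2).
b1 = np.array([4.0, 10.0, 24.0])
b2 = np.array([1.0, 0.0, 0.0])
x1 = back_sub(U, forward_sub(L, b1))
x2 = back_sub(U, forward_sub(L, b2))
```

Note the two triangular solves never touch \(A\) again; only \(L\) and \(U\) are reused.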
The above is valid for any \(A\). It does not have to be symmetric.
We go one step further. Let \(U=D\tilde{U}\), where \(D\) is a diagonal matrix containing the pivots on its diagonal, so \(\tilde{U}\) has ones on its diagonal. Again, \(A\) does not have to be symmetric for this.
Now, if \(A\) is symmetric, then \(\tilde{U}=L^T\) and we get the special case \(A=LDL^T\).
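A quick numerical check of the symmetric case, on a small made-up \(2\times 2\) matrix (one elimination step with multiplier \(2/4=0.5\) gives \(L\) and \(U\) directly):

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [2.0, 3.0]])   # made-up symmetric example

# One elimination step: multiplier 2/4 = 0.5 goes into L.
L = np.array([[1.0, 0.0],
              [0.5, 1.0]])
U = np.array([[4.0, 2.0],
              [0.0, 2.0]])   # pivots 4 and 2 on the diagonal

D = np.diag(np.diag(U))                    # pull the pivots out into D
U_tilde = np.diag(1.0 / np.diag(U)) @ U    # divide each row of U by its pivot
```

Here `U_tilde` comes out equal to `L.T`, which is exactly the symmetric special case \(A=LDL^T\).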
If, in addition, \(A\) has all positive pivots, then we can write \(D=\sqrt{D}\sqrt{D}\), and \(A=LDL^T\) becomes \(A=\tilde{L}\tilde{L}^T\) where \(\tilde{L}=L\sqrt{D}\). This is the Cholesky factorization.
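Sketch of the positive-pivot case on a made-up symmetric matrix: building \(\tilde{L}=L\sqrt{D}\) by hand and comparing it against the lower factor that `np.linalg.cholesky` computes.

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [2.0, 3.0]])   # made-up symmetric matrix; pivots 4 and 2 are positive

# L and D from elimination: one multiplier 2/4 = 0.5, pivots 4 and 2.
L = np.array([[1.0, 0.0],
              [0.5, 1.0]])
D = np.diag([4.0, 2.0])

L_tilde = L @ np.sqrt(D)          # L~ = L * sqrt(D), so A = L~ L~^T
L_chol = np.linalg.cholesky(A)    # NumPy's lower-triangular factor of A
```

Both routes give the same lower-triangular factor, since the factorization with positive diagonal is unique.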
Did 1.2.1 to practice the pivoting and finding \(LU\).