These are the lecture notes of the second lecture for the course Math 501, Spring 2007 at CSUF. These notes are written by Nasser M. Abbasi (student in the class).
Lecture was given by Dr C.H.Lee, Mathematics dept. CSU Fullerton on 1/24/2007.
Lecture started with a walkthrough on using Microsoft PowerPoint for presentations.
Note the following correction to the last lecture notes. The error term in the 2D Taylor expansion is given by
\[ E_{n}\left (x,y\right ) =\frac {1}{\left (n+1\right ) !}\left ( h\frac {\partial }{\partial x}+k\frac {\partial }{\partial y}\right ) ^{n+1}f\left (x+\theta h,y+\theta k\right ) \]
where \(0\leq \theta \leq 1\)
For example, for \(n=1\) and \(f\left (x,y\right ) =e^{xy}\), the above works out to
\[ E_{1}\left (x,y\right ) =\left ( hk+\frac {1}{2}\left (h\left (y+\theta k\right ) +k\left (x+\theta h\right ) \right ) ^{2}\right ) e^{\left (x+\theta h\right ) \left (y+\theta k\right ) }\]
I wrote a small function which computes the above error term for any \(n\); here are some of the terms for increasing \(n\)
(The table of \(E_{n}\) for \(n=1,2,3\) is not reproduced in these notes.)
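The original function is not reproduced in these notes. The following is a minimal sketch of how such a computation could be done in Python with SymPy; the function name `taylor_error_2d` is my own, and the operator \(\left( h\frac{\partial}{\partial x}+k\frac{\partial}{\partial y}\right)^{n+1}\) is expanded with the binomial theorem.

```python
import sympy as sp

x, y, h, k, theta = sp.symbols('x y h k theta')

def taylor_error_2d(f, n):
    """E_n = (h d/dx + k d/dy)^(n+1) f / (n+1)!, evaluated at the
    shifted point (x + theta*h, y + theta*k)."""
    total = sp.Integer(0)
    for j in range(n + 2):
        # term C(n+1, j) h^j k^(n+1-j) d^(n+1) f / (dx^j dy^(n+1-j))
        deriv = f
        for _ in range(j):
            deriv = sp.diff(deriv, x)
        for _ in range(n + 1 - j):
            deriv = sp.diff(deriv, y)
        total += sp.binomial(n + 1, j) * h**j * k**(n + 1 - j) * deriv
    # evaluate all derivatives at the shifted point
    shifted = total.subs({x: x + theta*h, y: y + theta*k}, simultaneous=True)
    return sp.simplify(shifted / sp.factorial(n + 1))

E1 = taylor_error_2d(sp.exp(x*y), 1)
```

For \(n=1\) and \(f=e^{xy}\) this reproduces the \(E_{1}\) expression given above.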
To see how the 2D error term behaves as \(n\) increases, we can select some values for \(\theta ,h,k\) and evaluate \(E_{n}\). This is the result for \(\theta =0.5,h=0.1,k=0.1\). (The plot of the function \(e^{xy}\) and the plots of \(E_{n}\) for \(\left (x,y\right ) =\left (2,5\right ) \), \(\left (1,1\right ) \), and \(\left (0,0\right ) \) are not reproduced in these notes.)
Notice the following: if the selected point \(\left (x,y\right ) \) maps to a large function value \(f\left (x,y\right ) \), then more terms are needed to make the error term smaller. Notice we need at least 4 terms before \(E_{n}\) continues to decrease as \(n\) increases. For example, \(E_{2}\) was actually smaller than \(E_{3}\) in the examples above. It is only after \(n=4\) that the error term continues to become smaller as \(n\) is increased.
Definition: Order of convergence:
Let \([x_{n}]\) be a sequence, \([x_{n}]\) is said to converge to \(L\) at order \(P\) if \(\exists \) two positive constants \(c<1\) and \(N>0\) s.t. \[ \left \vert x_{n+1}-L\right \vert \leq c\left \vert x_{n}-L\right \vert ^{P}\]
for all \(n\geq N\)
Note that if we write \(E_{n}\equiv \left \vert x_{n}-L\right \vert \), then the above can be written as
\[ E_{n+1}\leq c\left (E_{n}\right ) ^{P}\]
Taking the log, we write
\[ \ln E_{n+1}\leq \ln c+P\ln E_{n}\]
This can be considered the equation of a line with intercept \(\ln c\) and slope \(P\).
The larger the slope \(P\), the faster the sequence \([x_{n}]\) converges to \(L\)
Example
Suppose \(x_{n}\rightarrow L\) as \(n\rightarrow \infty \) at order \(2\) (quadratic), then
\[ E_{n+1}\leq c\ \left (E_{n}\right ) ^{2}\]
For some positive constant \(c.\) (Note lecture notes said \(c<1\) here, but text book says \(c\) not necessarily less than one for the quadratic case). Now suppose \(E_{1}=10^{-1}\), then
\begin {align*} E_{2} & \leq c\left (E_{1}\right ) ^{2}\leq c\left (10^{-1}\right ) ^{2}=c\ 10^{-2}\\ E_{3} & \leq c\left (E_{2}\right ) ^{2}\leq c\left (c\ 10^{-2}\right ) ^{2}=c^{3}10^{-4}\\ E_{4} & \leq c\left (E_{3}\right ) ^{2}\leq c\left (c^{3}10^{-4}\right ) ^{2}=c^{7}10^{-8}\\ E_{5} & \leq c\left (E_{4}\right ) ^{2}\leq c\left (c^{7}10^{-8}\right ) ^{2}=c^{15}10^{-16} \end {align*}
So after \(5\) iterations the error became very small: it went from order \(10^{-1}\) to order \(10^{-16}\). Compare that to something that converges linearly:
Suppose \(x_{n}\rightarrow L\) as \(n\rightarrow \infty \) at order \(1\) (linear), then
\[ E_{n+1}\leq c\ E_{n}\]
For some positive constant \(c<1\) (note for linear case, \(c\) must be less than one)\(.\) Now suppose \(E_{1}=10^{-1}\), then
\begin {align*} E_{2} & \leq cE_{1}\leq c10^{-1}\\ E_{3} & \leq cE_{2}\leq c\left (c\ 10^{-1}\right ) \leq c^{2}10^{-1}\\ E_{4} & \leq cE_{3}\leq c\left (c^{2}10^{-1}\right ) \leq c^{3}10^{-1}\\ E_{5} & \leq c\left (c^{3}10^{-1}\right ) \leq c^{4}10^{-1} \end {align*}
Compare that to the quadratic case above.
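To make the comparison concrete, the two bound recursions above can be iterated numerically. This is my own small experiment (the choice \(c=0.5\) and \(E_{1}=10^{-1}\) are illustrative, not from the lecture):

```python
def iterate_bound(E1, c, order, steps=4):
    """Iterate the error bound E_{n+1} = c * E_n^order and return all E_n."""
    errors = [E1]
    for _ in range(steps):
        errors.append(c * errors[-1]**order)
    return errors

quadratic = iterate_bound(0.1, 0.5, order=2)  # quadratic convergence bound
linear = iterate_bound(0.1, 0.5, order=1)     # linear convergence bound
```

After four steps the quadratic bound is already below \(10^{-20}\), while the linear bound has only shrunk to \(0.5^{4}\times 10^{-1}\).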
Any order \(P>1\) is considered superlinear convergence.
Example:
Show that \(x_{n+1}=\frac {1}{2}x_{n}+\frac {1}{x_{n}}\) is a sequence which converges to \(L=\sqrt {2}\) quadratically. Use \(x_{1}=2.\)
We want to show that \(E_{n+1}\leq c\left (E_{n}\right ) ^{2}\) for some positive \(c\) (note: lecture notes said also that \(c<1\) here) and for all \(n\geq N\)
\begin {align*} E_{n+1} & =\left \vert x_{n+1}-L\right \vert \\ & =\frac {1}{2}x_{n}+\frac {1}{x_{n}}-\sqrt {2}\qquad \text {(since }x_{n}\geq \sqrt {2}\text { when }x_{1}=2\text {)}\\ & =\frac {x_{n}^{2}+2}{2x_{n}}-\sqrt {2}\\ & =\frac {x_{n}^{2}-2\sqrt {2}x_{n}+2+2\sqrt {2}x_{n}}{2x_{n}}-\sqrt {2}\\ & =\frac {\left (x_{n}-\sqrt {2}\right ) ^{2}+2\sqrt {2}x_{n}}{2x_{n}}-\sqrt {2}\\ & =\frac {\left (x_{n}-\sqrt {2}\right ) ^{2}}{2x_{n}}+\sqrt {2}-\sqrt {2}\\ & =\frac {\left (x_{n}-\sqrt {2}\right ) ^{2}}{2x_{n}}\\ & =\frac {\left (E_{n}\right ) ^{2}}{2\left (E_{n}+\sqrt {2}\right ) }\\ & \leq \frac {\left (E_{n}\right ) ^{2}}{2\sqrt {2}} \end {align*}
Hence it converges quadratically, with \(c=\frac {1}{2\sqrt {2}}\)
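As a quick numeric sanity check (my own addition, not part of the lecture), the iteration can be run in Python and the bound \(E_{n+1}\leq \frac {1}{2\sqrt {2}}E_{n}^{2}\) verified for the first few steps, before floating-point round-off dominates:

```python
import math

def newton_sqrt2(x1=2.0, steps=5):
    """Iterate x_{n+1} = (1/2) x_n + 1/x_n, which converges to sqrt(2)."""
    xs = [x1]
    for _ in range(steps):
        x = xs[-1]
        xs.append(0.5 * x + 1.0 / x)
    return xs

L = math.sqrt(2)
errors = [abs(x - L) for x in newton_sqrt2()]
c = 1 / (2 * math.sqrt(2))

# E_{n+1} <= c * E_n^2 holds for the first few iterates
for e_n, e_next in zip(errors[:3], errors[1:4]):
    assert e_next <= c * e_n**2 * (1 + 1e-12)
```

After five steps the error has already dropped to the level of machine precision, which is the quadratic-convergence behavior derived above.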
Definition: Big \(O:\) Let \(\left [ x_{n}\right ] \) and \(\left [ \alpha _{n}\right ] \) be two sequences. We say that \[ \,x_{n}=O\left (\alpha _{n}\right ) \] if \(\exists \) \(C>0\) and \(N>0\) s.t. \(\left \vert x_{n}\right \vert \leq C\left \vert \alpha _{n}\right \vert \) for all \(n\geq N\) (Note: the book only says that \(C\) and \(N\) are constants; the lecture notes say they are also positive).
Example: Given \(x_{n}=\frac {n+1}{n^{2}}\) and \(\alpha _{n}=\frac {1}{n}\) show that \(x_{n}=O\left (\alpha _{n}\right ) \)
\begin {align*} x_{n} & =\frac {n+1}{n^{2}}\\ & =\frac {1}{n}+\frac {1}{n^{2}}\\ & \leq \frac {1}{n}+\frac {1}{n}=2\left (\frac {1}{n}\right ) \end {align*}
Hence
\[ x_{n}\leq 2\left (\alpha _{n}\right ) \]
Hence \(x_{n}=O\left (\alpha _{n}\right ) \)
Example: Find a simpler sequence \(\left [ \alpha _{n}\right ] \) s.t. \(x_{n}=O\left (\alpha _{n}\right ) \) where \(x_{n}=\frac {\sqrt {n}}{\left ( 3n+1\right ) ^{3}}\)
\begin {align*} x_{n} & =\frac {\sqrt {n}}{\left (3n+1\right ) ^{3}}\\ & \leq \frac {\sqrt {n}}{\left (3n\right ) ^{3}}\\ & =\frac {1}{27}\frac {n^{\frac {1}{2}}}{n^{3}}\\ & =\frac {1}{27}n^{-3+\frac {1}{2}}\\ & =\frac {1}{27}n^{-\frac {5}{2}} \end {align*}
Hence \(x_{n}=O\left (\frac {1}{n^{\frac {5}{2}}}\right ) \) where here \(\alpha _{n}=\frac {1}{n^{\frac {5}{2}}}\) and \(C=\frac {1}{27}\)
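As a sanity check (my own addition), the bound \(x_{n}\leq C\,n^{-5/2}\) with \(C=\frac {1}{27}\) can be verified numerically over a large range of \(n\):

```python
# Check x_n = sqrt(n)/(3n+1)^3 <= C * alpha_n with C = 1/27, alpha_n = n^(-5/2)
C = 1.0 / 27.0
for n in range(1, 10001):
    x_n = n**0.5 / (3*n + 1)**3
    assert x_n <= C * n**(-2.5)
```

The bound holds for every \(n\geq 1\) here because \(\left (3n+1\right ) ^{3}>\left (3n\right ) ^{3}\).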
Definition: small \(o\). Given 2 sequences \(\left [ x_{n}\right ] \) and \(\left [ \alpha _{n}\right ] \) we say \(x_{n}=o\left (\alpha _{n}\right ) \) if \(\lim _{n\rightarrow \infty }\frac {x_{n}}{\alpha _{n}}=0\). i.e. \(\exists \) sequence \(\left [ \varepsilon _{n}\right ] \) that converges to zero, and \(N>0\) s.t. \(\left \vert x_{n}\right \vert \leq \left \vert \varepsilon _{n}\right \vert \left \vert \alpha _{n}\right \vert \) for all \(n>N\)
The above means that \(x_{n}\rightarrow 0\) much faster than \(\alpha _{n}\) does. (The illustrating figure is not reproduced in these notes.)
Example: \(x_{n}=\frac {1}{n\ln n}\), hence \(x_{n}=o\left (\frac {1}{n}\right ) \) where here \(\varepsilon _{n}=\frac {1}{\ln n}\)
Also we can write \(x_{n}=o\left (\frac {1}{\ln n}\right ) \) where \(\varepsilon _{n}=\frac {1}{n}\)
Example: Show that \(e^{-n}=o\left (n^{-k}\right ) \) for any \(k>0\)
One way is to show that \(\lim _{n\rightarrow \infty }\frac {x_{n}}{\alpha _{n}}=0\)
\begin {align*} \lim _{n\rightarrow \infty }\frac {x_{n}}{\alpha _{n}} & =\lim _{n\rightarrow \infty }\frac {n^{k}}{e^{n}}\\ & =\lim _{n\rightarrow \infty }\frac {n^{k}}{1+n+\frac {n^{2}}{2}+\frac {n^{3}}{3!}+\cdots }\\ & =0 \end {align*}
Hence \(x_{n}=o\left (\alpha _{n}\right ) \)
HW section 1.2, problems 6(b,e), 7(b,c), 10,28,40
Note: Next week we meet in room MH380
Definition: Let us denote by \([x_{n}]=[x_{1},x_{2},x_{3},\cdots ]\) an infinite sequence of numbers. Then \(E\) is called the shift operator if \(E\left [ x\right ] =\left [ x_{2},x_{3},x_{4},\cdots \right ] \)
Some properties of \(E\)
\(Ex_{1}=x_{2}\)
\(Ex_{n}=x_{n+1}\)
\(E\left (Ex_{1}\right ) =x_{3}\)
\(E^{k}x_{n}=x_{n+k}\)
\(E^{0}x_{n}=x_{n}\)
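These properties are easy to model in code. Here is a minimal sketch in Python (my own, not from the lecture), representing a sequence as a function \(n\mapsto x_{n}\) so that \(E^{k}\) simply re-indexes it:

```python
def shift(x, k=1):
    """Apply the shift operator E^k to a sequence given as a
    function n -> x_n: (E^k x)(n) = x(n + k)."""
    return lambda n: x(n + k)

# example sequence x_n = n^2, indexed from n = 1
x = lambda n: n**2

assert shift(x)(1) == x(2)            # E x_1 = x_2
assert shift(shift(x))(1) == x(3)     # E(E x_1) = x_3
assert shift(x, k=5)(2) == x(7)       # E^k x_n = x_{n+k}
assert shift(x, k=0)(4) == x(4)       # E^0 x_n = x_n
```

Representing the sequence as a function keeps the "infinite" character of \([x_{n}]\) without storing it.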
Definition: \(x_{n+1}\) is called a recursive sequence if \(x_{n+1}=f\left (x_{n},x_{n-1},\cdots ,x_{r}\right ) \) where \(r\leq n\)
Example: \(x_{n+1}=2x_{n}^{2}-7x_{n-1}+5\). Here 2 initial conditions are needed for this sequence to be defined for all \(n\). In this example the sequence is not linear.
Definition: \(x_{n+1}\) is called a linear recursive sequence if \(x_{n+1}=a_{n}x_{n}+a_{n-1}x_{n-1}+\cdots +a_{r}x_{r}\) where \(r\leq n\). (i.e. \(x_{n+1}\) is a linear combination of previous numbers in the sequence)
Given a linear recursive sequence our goal is to find its solution explicitly. i.e. in a non-recursive form.
\begin {align*} x_{n+1} & =a_{n}x_{n}+a_{n-1}x_{n-1}+\cdots +a_{r}x_{r}\\ E^{n+1-r}x_{r} & =a_{n}E^{n-r}x_{r}+a_{n-1}E^{n-r-1}x_{r}+\cdots +a_{r}E^{0}x_{r}\\ E^{n+1-r}x_{r}-a_{n}E^{n-r}x_{r}-a_{n-1}E^{n-r-1}x_{r}-\cdots -a_{r}E^{0}x_{r} & =0\\ \left (E^{n+1-r}-a_{n}E^{n-r}-a_{n-1}E^{n-r-1}-\cdots -a_{r}E^{0}\right ) x_{r} & =0 \end {align*}
Hence, since \(x_{r}\neq 0\), we obtain a polynomial in \(E\) of degree \(n+1-r\)
\[ E^{n+1-r}-a_{n}E^{n-r}-a_{n-1}E^{n-r-1}-\cdots -a_{r}=0 \]
We can now find its roots. Assume the roots are called \(\lambda _{1},\lambda _{2},\cdots ,\lambda _{n+1-r}\) then
Case 1: all roots \(\lambda \) are distinct (simple roots). Then the solution to the difference equation is
\[ x_{n}=k_{1}\lambda _{1}^{n}+k_{2}\lambda _{2}^{n}+\cdots +k_{n+1-r}\lambda _{n+1-r}^{n}\]
Where \(k_{1},k_{2},\cdots ,k_{n+1-r}\) are coefficients to be determined from initial conditions.
Example: Suppose we have linear recursive equation \(x_{n+1}=3x_{n}-2x_{n-1}\) with \(x_{1}=1,x_{2}=0\)
Applying the shift operator, we write
\begin {align*} E^{2}x_{n-1} & =3E^{1}x_{n-1}-2E^{0}x_{n-1}\\ E^{2}x_{n-1}-3E^{1}x_{n-1}+2E^{0}x_{n-1} & =0\\ \left (E^{2}-3E^{1}+2E^{0}\right ) x_{n-1} & =0 \end {align*}
Hence the polynomial is \(E^{2}-3E^{1}+2=0\), which factors as \(\left ( E-2\right ) \left (E-1\right ) =0\), hence \(\lambda _{1}=2,\lambda _{2}=1\). These are simple roots, so the solution is given by
\begin {align*} x_{n} & =k_{1}\lambda _{1}^{n}+k_{2}\lambda _{2}^{n}\\ & =k_{1}2^{n}+k_{2}1^{n}\\ & =k_{1}2^{n}+k_{2} \end {align*}
Now apply initial conditions to find \(k_{1}\) and \(k_{2}\)
\(n=1,x_{1}=1\Rightarrow 1=k_{1}2+k_{2}\)
\(n=2,x_{2}=0\Rightarrow 0=k_{1}4+k_{2}\)
Solving the above 2 equations for \(k_{1},k_{2}\) we obtain \(k_{1}=-\frac {1}{2},k_{2}=2\). Hence the solution is
\[ x_{n}=-\frac {1}{2}2^{n}+2 \]
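A quick check (my own addition) that this closed form agrees with the recursion for the first several \(n\):

```python
def x_recursive(n):
    """x_{n+1} = 3 x_n - 2 x_{n-1} with x_1 = 1, x_2 = 0, computed directly."""
    a, b = 1, 0  # x_1, x_2
    if n == 1:
        return a
    for _ in range(n - 2):
        a, b = b, 3*b - 2*a
    return b

def x_closed(n):
    """The closed form x_n = -(1/2) 2^n + 2, i.e. 2 - 2^(n-1)."""
    return 2 - 2**(n - 1)

for n in range(1, 20):
    assert x_recursive(n) == x_closed(n)
```

Both formulas give the same integers, e.g. \(x_{3}=-2\), \(x_{4}=-6\), \(x_{5}=-14\).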
Case 2: Roots are multiple roots. Let \(\lambda _{\ast }\) be a root of multiplicity \(k\). Then
\begin {align*} x_{n} & =A_{0}\lambda _{\ast }^{n}+A_{1}\frac {d}{d\lambda _{\ast }}\left ( \lambda _{\ast }^{n}\right ) +A_{2}\frac {d^{2}}{d\lambda _{\ast }^{2}}\left ( \lambda _{\ast }^{n}\right ) +\cdots +A_{k-1}\frac {d^{k-1}}{d\lambda _{\ast }^{k-1}}\left (\lambda _{\ast }^{n}\right ) \\ & =A_{0}\lambda _{\ast }^{n}+A_{1}n\lambda _{\ast }^{n-1}+A_{2}n\left ( n-1\right ) \lambda _{\ast }^{n-2}+\cdots +A_{k-1}n\left (n-1\right ) \left ( n-2\right ) \cdots \left (n-k+2\right ) \lambda _{\ast }^{n-k+1} \end {align*}
Example: Solve \(4x_{n}=-7x_{n-1}-2x_{n-2}+x_{n-3}\)
The polynomial in \(E\) is \begin {align*} 4E^{3}x_{n-3}+7E^{2}x_{n-3}+2E^{1}x_{n-3}-E^{0}x_{n-3} & =0\\ 4E^{3}+7E^{2}+2E^{1}-1 & =0\\ \left (E+1\right ) ^{2}\left (4E-1\right ) & =0 \end {align*}
Hence \(\lambda _{\ast }=-1\) with multiplicity \(k=2\) and \(\lambda _{1}=\frac {1}{4}\) a simple root.
Solution is \begin {align*} x_{n} & =\overset {\text {simple}}{\overbrace {\left [ k_{0}\left (\lambda _{1}\right ) ^{n}\right ] }}+\overset {\text {multiple root}}{\overbrace {\left [ A_{0}\lambda _{\ast }^{n}+A_{1}\frac {d}{d\lambda _{\ast }}\left (\lambda _{\ast }^{n}\right ) \right ] }}\\ & =k_{0}\left (\frac {1}{4}\right ) ^{n}+\left [ A_{0}\left (-1\right ) ^{n}+A_{1}n\left (\lambda _{\ast }^{n-1}\right ) \right ] \\ & =k_{0}\left (\frac {1}{4}\right ) ^{n}+A_{0}\left (-1\right ) ^{n}+A_{1}n\left (-1\right ) ^{n-1} \end {align*}
Where the 3 coefficients \(k_{0},A_{0},A_{1}\) can be found from initial conditions.
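Since no initial conditions were given, one can still verify (my own check, with arbitrarily chosen coefficients) that the general solution satisfies the recurrence exactly, using rational arithmetic to avoid round-off:

```python
from fractions import Fraction

# arbitrary illustrative coefficients (any values work)
k0, A0, A1 = Fraction(3), Fraction(-2), Fraction(5)

def x(n):
    """General solution x_n = k0 (1/4)^n + A0 (-1)^n + A1 n (-1)^(n-1)."""
    return k0 * Fraction(1, 4)**n + A0 * (-1)**n + A1 * n * (-1)**(n - 1)

# the recurrence 4 x_n = -7 x_{n-1} - 2 x_{n-2} + x_{n-3} holds exactly
for n in range(4, 20):
    assert 4*x(n) == -7*x(n-1) - 2*x(n-2) + x(n-3)
```

This works because \(\lambda =\frac {1}{4}\) is a root of the characteristic polynomial and \(\lambda =-1\) is a root of both the polynomial and its derivative.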
Next we address stability of these difference equations. For this we need definitions of stable and bounded sequence.
Definition: Bounded: A sequence \([x_{n}]\) is bounded if \(\exists \) \(c>0\) and \(N>0\) s.t. \(\left \vert x_{n}\right \vert \leq c\) \(\forall n\geq N\)
Definition: Stability: A difference equation is stable if its solutions are bounded.
Theorem: Let \([x_{n}]\) be the solution of the difference equation, and let the \(\lambda \)'s be the roots of its characteristic polynomial. The solution of the difference equation is stable if
all simple roots satisfy \(\left \vert \lambda \right \vert \leq 1\)
all repeated roots satisfy \(\left \vert \lambda \right \vert <1\)
Proof: Suppose the \(\lambda \)'s are the simple roots of the characteristic polynomial of the difference equation; then
\begin {align*} \left \vert x_{n}\right \vert & =\left \vert k_{1}\lambda _{1}^{n}+k_{2}\lambda _{2}^{n}+\cdots +k_{n-r+1}\lambda _{n-r+1}^{n}\right \vert \\ & \leq \left \vert k_{1}\right \vert \left \vert \lambda _{1}^{n}\right \vert +\left \vert k_{2}\right \vert \left \vert \lambda _{2}^{n}\right \vert +\cdots +\left \vert k_{n-r+1}\right \vert \left \vert \lambda _{n-r+1}^{n}\right \vert \end {align*}
The above is bounded if each \(\left \vert \lambda \right \vert \leq 1\)
Now assume a root \(\lambda \) is a multiple root of order \(k\). Then if \(\left \vert \lambda \right \vert <1\),
\[ \lim _{n\rightarrow \infty }n^{k}\lambda ^{n}=0 \]
Hence \(\left \vert x_{n}\right \vert \) is bounded. See textbook page 33 for more detailed proof.
Example: \[ x_{n+2}=3x_{n+1}-2x_{n}\]
The characteristic equation is
\begin {align*} E^{2}-3E+2 & =0\\ \left (E-1\right ) \left (E-2\right ) & =0 \end {align*}
Hence \(\lambda _{1}=1,\lambda _{2}=2\). The roots are simple, but one root \(\lambda _{2}=2>1\), hence the equation is NOT stable.
Example:
\[ x_{n+2}-2x_{n+1}+2x_{n}=0 \]
The characteristic equation is
\[ E^{2}-2E+2=0 \]
The roots are \(\lambda _{1}=1-i,\lambda _{2}=1+i\)
Since the roots are simple and their modulus is \(\left \vert \lambda \right \vert =\sqrt {2}>1\), the equation is NOT stable.
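Both stability checks can be reproduced numerically. The sketch below (my own, not from the lecture) uses NumPy to compute the characteristic roots and inspect their moduli:

```python
import numpy as np

# Example 1: x_{n+2} = 3 x_{n+1} - 2 x_n  ->  characteristic E^2 - 3E + 2 = 0
roots1 = np.roots([1, -3, 2])
# Example 2: x_{n+2} - 2 x_{n+1} + 2 x_n = 0  ->  characteristic E^2 - 2E + 2 = 0
roots2 = np.roots([1, -2, 2])

# both equations have a root of modulus > 1, hence both are unstable
assert max(abs(roots1)) > 1                       # roots are 2 and 1
assert np.isclose(max(abs(roots2)), np.sqrt(2))   # roots are 1 +/- i
```

For complex roots, `abs` returns the modulus, which is exactly the quantity the stability theorem compares against 1.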
HW, section 1.3 problem 9,11,12,25
In base 10 (decimal), the digits are \(0\)-\(9\) and each digit is multiplied by a power of 10. For example the number \(427.325\) in base 10 can be written as follows
\[ \left (427.325\right ) _{10}=4\times 10^{2}+2\times 10^{1}+7\times 10^{0}+3\times 10^{-1}+2\times 10^{-2}+5\times 10^{-3}\]
In base 2 (binary), the digits are \(0,1\) and each digit is multiplied by a power of 2. For example the number \(1001.11101\) in base 2 can be written as
\begin {align*} \left (1001.11101\right ) _{2} & =1\times 2^{3}+0\times 2^{2}+0\times 2^{1}+1\times 2^{0}+1\times 2^{-1}+1\times 2^{-2}+1\times 2^{-3}+0\times 2^{-4}+1\times 2^{-5}\\ & =8+0+0+1+\frac {1}{2}+\frac {1}{4}+\frac {1}{8}+0+\frac {1}{32}\\ & =\left (9.90625\right ) _{10} \end {align*}
Example: Calculate \(\frac {1}{10}\)
In decimal, the answer is \(0.1\)
In binary (depending on machine accuracy) the answer is \begin {align*} 1\div 10 & \simeq \left (1.000\cdots \right ) _{2}\div \left (1010.000\cdots \right ) _{2}\\ & =\left (0.00011001100110011\cdots \right ) _{2} \end {align*}
On a machine with about 7 decimal digits of accuracy (single precision) we get
\[ 1\div 10\simeq 0.1000000\overset {\text {error}}{\overbrace {1490116}}\cdots \]
Let us now do conversions between binary and decimal.
Example: Convert \(53_{10}\) to binary
\begin {align*} 53\div 2 & =26\ \ R\ 1\\ 26\div 2 & =13\ \ R\ 0\\ 13\div 2 & =6\ \ R\ \ 1\\ 6\div 2 & =3\ \ R\ 0\\ 3\div 2 & =1\ \ R\ 1\\ 1\div 2 & =0\ \ R\ \ 1 \end {align*}
Hence \(53_{10}=110101_{2}\)
Example: Convert \(19_{10}\) to binary\begin {align*} 19\div 2 & =9\ \ R\ 1\\ 9\div 2 & =4\ \ R\ 1\\ 4\div 2 & =2\ \ R\ \ 0\\ 2\div 2 & =1\ \ R\ 0\\ 1\div 2 & =0\ \ R\ 1 \end {align*}
Hence \(19_{10}=10011_{2}\)

Example: Convert \(0.7_{10}\) to binary. At each step we multiply the fractional part by 2; the integer part of the result is the next binary digit.\begin {align*} 0.7\times 2 & =1.4\rightarrow 1\\ 0.4\times 2 & =0.8\rightarrow 0\\ 0.8\times 2 & =1.6\rightarrow 1\\ 0.6\times 2 & =1.2\rightarrow 1\\ 0.2\times 2 & =0.4\rightarrow 0\\ 0.4\times 2 & =0.8\rightarrow 0\\ & \vdots \\ & \text {repeated} \end {align*}
Hence \(0.7_{10}=\left (0.1\,\overline {0110}\right ) _{2}\), where the overline marks the repeating block.
Hence \(53.7_{10}=\left (110101.1\,\overline {0110}\right ) _{2}\)
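The repeated-division (integer part) and repeated-doubling (fractional part) procedures above can be sketched in Python; the function name `to_binary` is my own:

```python
def to_binary(value, frac_digits=12):
    """Convert a non-negative decimal number to a binary string:
    repeated division by 2 for the integer part, repeated doubling
    for the fractional part."""
    integer, fraction = int(value), value - int(value)
    # integer part: remainders of division by 2, least significant bit first
    int_bits = ''
    while integer > 0:
        int_bits = str(integer % 2) + int_bits
        integer //= 2
    # fractional part: the integer part of 2 * fraction is the next bit
    frac_bits = ''
    for _ in range(frac_digits):
        fraction *= 2
        bit, fraction = int(fraction), fraction - int(fraction)
        frac_bits += str(bit)
    return (int_bits or '0') + ('.' + frac_bits if frac_bits else '')
```

For the worked examples: `to_binary(53, frac_digits=0)` gives `'110101'`, `to_binary(19, frac_digits=0)` gives `'10011'`, and `to_binary(0.7, 8)` gives `'0.10110011'`, matching the repeating pattern found above (floating-point round-off eventually corrupts the tail for many fractional digits).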