Which of the following series satisfy the M–test and hence converge uniformly on the interval \(I=[0,1]\)?
(e) \(\sum _{k=1}^{\infty }\frac{e^{kx}}{k^{2}}\), (f) \(\sum _{k=1}^{\infty }\frac{e^{-kx}}{k^{2}}\)
Solution
Using Theorem 3.27 (the Weierstrass M–test), to show that \(\sum _{k=1}^{\infty }u_{k}\left ( x\right ) \) converges uniformly we need bounds \(\left \vert u_{k}\left ( x\right ) \right \vert \leq m_{k}\) for all \(x\in I\) with \(\sum _{k=1}^{\infty }m_{k}<\infty \). In this case \(u_{k}\left ( x\right ) =\frac{e^{kx}}{k^{2}}\), which is increasing in \(x\), so on \(I=[0,1]\) it ranges from \(u_{k}\left ( 0\right ) =\frac{1}{k^{2}}\) to \(u_{k}\left ( 1\right ) =\frac{e^{k}}{k^{2}}\). Hence the smallest admissible bound is \(m_{k}=\frac{e^{k}}{k^{2}}\). But\[ \sum _{k=1}^{\infty }m_{k}=\sum _{k=1}^{\infty }\frac{e^{k}}{k^{2}}\] does not converge. This can be shown by the ratio test: \(\frac{m_{k+1}}{m_{k}}=\frac{\frac{e^{k+1}}{\left ( k+1\right ) ^{2}}}{\frac{e^{k}}{k^{2}}}=e\frac{k^{2}}{\left ( k+1\right ) ^{2}}\rightarrow e>1\) as \(k\rightarrow \infty \). Hence the M–test cannot be applied. In fact, at \(x=1\) the terms \(\frac{e^{k}}{k^{2}}\) do not even tend to zero, so the series diverges there; therefore \(\sum _{k=1}^{\infty }\frac{e^{kx}}{k^{2}}\) is not uniformly convergent on \(I\).
For (f), \(u_{k}\left ( x\right ) =\frac{e^{-kx}}{k^{2}}\) is decreasing in \(x\), so on \(I=[0,1]\) its maximum is at \(x=0\), where \(u_{k}\left ( 0\right ) =\frac{1}{k^{2}}\) (at \(x=1\) it is only \(\frac{1}{e^{k}k^{2}}\)). Hence we pick \(m_{k}=\frac{1}{k^{2}}\), which satisfies \(\left \vert u_{k}\left ( x\right ) \right \vert \leq m_{k}\) for all \(x\in I\), and\[ \sum _{k=1}^{\infty }m_{k}=\sum _{k=1}^{\infty }\frac{1}{k^{2}}\] converges (a \(p\)–series with \(p=2>1\)). Therefore, by the M–test, \(\sum _{k=1}^{\infty }\frac{e^{-kx}}{k^{2}}\) converges uniformly on \(I\).
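As a quick numerical sanity check (not part of the Theorem 3.27 argument), the following Python sketch estimates the supremum over a grid on \([0,1]\) of the tail \(\sum _{k=N+1}^{K}\frac{e^{-kx}}{k^{2}}\); the cutoff \(K=5000\) and the 201–point grid are arbitrary choices. The tail supremum shrinks as \(N\) grows, independently of \(x\), which is what uniform convergence asserts.
\begin{verbatim}
import numpy as np

# Estimate sup over x in [0,1] of the tail sum_{k=N+1}^{K} e^{-k x}/k^2.
# K = 5000 and the 201-point grid are arbitrary choices.
x = np.linspace(0.0, 1.0, 201)
K = 5000

def tail_sup(N):
    k = np.arange(N + 1, K + 1)
    terms = np.exp(-np.outer(k, x)) / (k[:, None] ** 2)  # terms[i, j] = e^{-k_i x_j}/k_i^2
    return terms.sum(axis=0).max()

for N in (1, 10, 100, 1000):
    print(N, tail_sup(N))  # the sup of the tail decreases with N, uniformly in x
\end{verbatim}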
First, without explicitly evaluating them, how fast do you expect the Fourier coefficients of the following functions to go to zero as \(k\rightarrow \infty \) ? Then prove your claim by evaluating the coefficients. (a) \(x-\pi \), (c) \(x^{2}\), (e) \(\sin ^{2}x\).
Solution
\(f\left ( x\right ) =x-\pi \). This is not an odd function, but it is the sum of the constant \(-\pi \) and the odd function \(x\), so apart from the constant term only sine terms appear in its Fourier series. Since \(f\left ( -\pi \right ) =-2\pi \neq 0=f\left ( \pi \right ) \), there will be a jump discontinuity in the \(2\pi \) periodic extension. This also implies that the Fourier series is not uniformly convergent.
Due to the jump discontinuity the convergence will be slow relative to a Fourier series which converges uniformly; we therefore expect the coefficients \(b_{n}\) to decay like \(\frac{1}{n}\) rather than like \(\frac{1}{n^{r}}\) with \(r>1\), as would be the case with the faster, uniform convergence.
Now we will find the Fourier series to confirm this. Since \(x\cos nx\) is odd, \(a_{n}=\frac{1}{\pi }\int _{-\pi }^{\pi }\left ( x-\pi \right ) \cos nx\,dx=0\) for \(n\geq 1\), while \(a_{0}=\frac{1}{\pi }\int _{-\pi }^{\pi }\left ( x-\pi \right ) dx=-2\pi \), so the constant term is \(\frac{a_{0}}{2}=-\pi \). For the sine coefficients,\begin{align*} b_{n} & =\frac{1}{\pi }\int _{-\pi }^{\pi }\left ( x-\pi \right ) \sin nxdx\\ & =\frac{1}{\pi }\int _{-\pi }^{\pi }x\sin nxdx-\frac{1}{\pi }\int _{-\pi }^{\pi }\pi \sin nxdx \end{align*}
But \(\int _{-\pi }^{\pi }\pi \sin nxdx=0\) since this is an integration over one period. Therefore the above becomes\begin{align*} b_{n} & =\frac{1}{\pi }\int _{-\pi }^{\pi }x\sin nxdx\\ & =\frac{1}{\pi }\left ( -\frac{1}{n}\left [ x\cos nx\right ] _{-\pi }^{\pi }+\frac{1}{n}\int _{-\pi }^{\pi }\cos nxdx\right ) \end{align*}
But \(\int _{-\pi }^{\pi }\cos nxdx=0\) since this is an integration over one period. The above becomes\begin{align*} b_{n} & =\frac{-1}{n\pi }\left [ x\cos nx\right ] _{-\pi }^{\pi }\\ & =\frac{-1}{n\pi }\left [ \pi \cos n\pi +\pi \cos n\pi \right ] \\ & =\frac{-1}{n\pi }\left [ 2\pi \left ( -1\right ) ^{n}\right ] \\ & =\frac{-2\left ( -1\right ) ^{n}}{n} \end{align*}
Hence the Fourier series is\[ x-\pi \sim -\pi +\sum _{n=1}^{\infty }\frac{-2\left ( -1\right ) ^{n}}{n}\sin nx \] The coefficients are \(b_{n}=\frac{-2\left ( -1\right ) ^{n}}{n}\). We see now that \[ \sum _{n=1}^{\infty }\left \vert b_{n}\right \vert =2\sum _{n=1}^{\infty }\frac{1}{n}\] But \(\sum _{n=1}^{\infty }\frac{1}{n}\) does not converge, so the uniform convergence criterion fails, as expected from the jump discontinuity. The pointwise convergence (away from the jump) is of order \(O\left ( \frac{1}{n}\right ) \) (slow).
\(f\left ( x\right ) =x^{2}\). This is an even function and \(f\left ( -\pi \right ) =f\left ( \pi \right ) \). Hence there will be no jump discontinuity in the \(2\pi \) periodic extension, and the Fourier series converges uniformly. We therefore expect the coefficients to decay like \(\frac{1}{n^{r}}\) with \(r>1\), for example \(\frac{1}{n^{2}}\), so that \(\sum _{n=1}^{\infty }\left \vert a_{n}\right \vert \) converges. This is considered fast convergence. Now we will find the Fourier series to confirm this. Since \(f\left ( x\right ) \) is an even function only the \(a_{n}\) exist.
\[ a_{0}=\frac{1}{\pi }\int _{-\pi }^{\pi }x^{2}dx=\frac{1}{\pi }\left [ \frac{x^{3}}{3}\right ] _{-\pi }^{\pi }=\frac{1}{3\pi }\left ( \pi ^{3}+\pi ^{3}\right ) =\frac{2}{3}\pi ^{2}\] And, since the integrand is even,\[ a_{n}=\frac{1}{\pi }\int _{-\pi }^{\pi }x^{2}\cos nxdx=\frac{2}{\pi }\int _{0}^{\pi }x^{2}\cos nxdx \] Integration by parts. Let \(u=x^{2},dv=\cos \left ( nx\right ) dx\). Then \(du=2x\,dx,v=\frac{1}{n}\sin \left ( nx\right ) \). Then using \(\int udv=uv-\int vdu\) the above becomes\begin{align*} a_{n} & =\frac{2}{\pi }\left ( \frac{1}{n}\left [ x^{2}\sin \left ( nx\right ) \right ] _{0}^{\pi }-\frac{2}{n}\int _{0}^{\pi }x\sin \left ( nx\right ) dx\right ) \\ & =\frac{2}{\pi }\left ( -\frac{2}{n}\int _{0}^{\pi }x\sin \left ( nx\right ) dx\right ) \\ & =-\frac{4}{n\pi }\int _{0}^{\pi }x\sin \left ( nx\right ) dx \end{align*}
Integration by parts again. Let \(u=x,dv=\sin \left ( nx\right ) dx\). Then \(du=dx,v=\frac{-1}{n}\cos \left ( nx\right ) \). Then using \(\int udv=uv-\int vdu\) the above becomes\begin{align*} a_{n} & =-\frac{4}{n\pi }\left ( \frac{-1}{n}\left [ x\cos \left ( nx\right ) \right ] _{0}^{\pi }+\frac{1}{n}\int _{0}^{\pi }\cos \left ( nx\right ) dx\right ) \\ & =-\frac{4}{n\pi }\left ( \frac{-1}{n}\left [ \pi \cos \left ( n\pi \right ) \right ] +\frac{1}{n}\left [ \frac{\sin nx}{n}\right ] _{0}^{\pi }\right ) \\ & =-\frac{4}{n\pi }\left ( \frac{-1}{n}\left [ \pi \left ( -1\right ) ^{n}\right ] \right ) \\ & =\frac{4}{n^{2}\pi }\left [ \pi \left ( -1\right ) ^{n}\right ] \\ & =\frac{4}{n^{2}}\left ( -1\right ) ^{n} \end{align*}
Hence\begin{equation} x^{2}\sim \frac{1}{3}\pi ^{2}+\sum _{n=1}^{\infty }\frac{4}{n^{2}}\left ( -1\right ) ^{n}\cos \left ( nx\right ) \tag{1} \end{equation} We see that the coefficient is \(a_{n}=\frac{4}{n^{2}}\left ( -1\right ) ^{n}\), therefore \[ \sum _{n=1}^{\infty }\left \vert a_{n}\right \vert =4\sum _{n=1}^{\infty }\frac{1}{n^{2}}\] which converges since the power of \(n\) is larger than \(1\); this implies uniform convergence. The convergence is of order \(O\left ( \frac{1}{n^{2}}\right ) \) (fast).
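As a cross check on the closed forms found above (not part of the required solution), the coefficients \(b_{n}\) of \(x-\pi \) and \(a_{n}\) of \(x^{2}\) can be computed by numerical quadrature with SciPy and compared against \(\frac{-2\left ( -1\right ) ^{n}}{n}\) and \(\frac{4\left ( -1\right ) ^{n}}{n^{2}}\):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Compare numerically computed Fourier coefficients on [-pi, pi]
# with the closed forms b_n = -2(-1)^n/n  (for x - pi)
# and a_n = 4(-1)^n/n^2                   (for x^2).
for n in range(1, 6):
    bn, _ = quad(lambda x: (x - np.pi) * np.sin(n * x) / np.pi, -np.pi, np.pi)
    an, _ = quad(lambda x: x**2 * np.cos(n * x) / np.pi, -np.pi, np.pi)
    print(n, bn, -2 * (-1)**n / n, an, 4 * (-1)**n / n**2)
\end{verbatim}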
\(f\left ( x\right ) =\sin ^{2}x\). This is an even function and \(f\left ( -\pi \right ) =f\left ( \pi \right ) \). As in part (c), there will be no jump discontinuity in the \(2\pi \) periodic extension, so the Fourier series converges uniformly. Hence we expect the coefficients to decay like \(\frac{1}{n^{r}}\) with \(r>1\), for example \(\frac{1}{n^{2}}\), so that \(\sum _{n=1}^{\infty }\left \vert a_{n}\right \vert \) converges. This is fast convergence. Now we will find the Fourier series to confirm this.
But \(\sin ^{2}x=\frac{1}{2}-\frac{1}{2}\cos 2x\), hence this is the Fourier series for \(\sin ^{2}x\). If we need to show this explicitly, then since even function only \(a_{n}\) exist. \begin{align*} a_{0} & =\frac{1}{\pi }\int _{-\pi }^{\pi }\sin ^{2}xdx\\ & =\frac{1}{\pi }\int _{-\pi }^{\pi }\left ( \frac{1}{2}-\frac{1}{2}\cos 2x\right ) dx\\ & =\frac{1}{2\pi }\left ( \int _{-\pi }^{\pi }dx-\overset{0}{\overbrace{\int _{-\pi }^{\pi }\cos 2xdx}}\right ) \\ & =\frac{1}{2\pi }2\pi \\ & =1 \end{align*}
And\begin{align*} a_{n} & =\frac{1}{\pi }\int _{-\pi }^{\pi }\sin ^{2}x\cos nxdx\\ & =\frac{1}{\pi }\int _{-\pi }^{\pi }\left ( \frac{1}{2}-\frac{1}{2}\cos 2x\right ) \cos nxdx\\ & =\frac{1}{2\pi }\left ( \int _{-\pi }^{\pi }\cos nxdx-\int _{-\pi }^{\pi }\cos 2x\cos nxdx\right ) \end{align*}
But \(\int _{-\pi }^{\pi }\cos nxdx=0\) since integration over one period, and \(\int _{-\pi }^{\pi }\cos 2x\cos nxdx=0\) for all values other than \(n=2\) by orthogonality. Hence the above simplifies to \begin{align*} a_{2} & =\frac{1}{2\pi }\left ( -\int _{-\pi }^{\pi }\cos ^{2}2xdx\right ) \\ & =\frac{1}{2\pi }\left ( -\pi \right ) \\ & =-\frac{1}{2} \end{align*}
Hence\begin{align*} \sin ^{2}x & =\frac{a_{0}}{2}+\sum _{n=1}^{\infty }a_{n}\cos nx\\ & =\frac{1}{2}-\frac{1}{2}\cos 2x \end{align*}
We see that \(\sum _{n=1}^{\infty }\left \vert a_{n}\right \vert =\frac{1}{2}<\infty \), so the convergence is uniform. In fact the series terminates: only two terms are needed, so the convergence is very fast.
Using the criteria of Theorem 3.31, determine how many continuous derivatives the functions represented by the following Fourier series have (a) \(\sum _{k=-\infty }^{\infty }\frac{e^{ikx}}{1+k^{4}}\,\), (f) \(\sum _{k=1}^{\infty }\left ( 1-\cos \frac{1}{k^{2}}\right ) e^{ikx}\)
Theorem 3.31. Let \(0\leq n\in \mathbb{Z} \). If the Fourier coefficients of \(f(x)\) satisfy\[ \sum _{k=-\infty }^{\infty }\left \vert k\right \vert ^{n}\left \vert c_{k}\right \vert <\infty \] then the Fourier series \(f\left ( x\right ) =\sum _{k=-\infty }^{\infty }c_{k}e^{ikx}\) converges uniformly to an \(n\)–times continuously differentiable function \(\tilde{f}\left ( x\right ) \in C^{n}\), which is the \(2\pi \) periodic extension of \(f(x)\).
Solution
\[ f\left ( x\right ) \sim \sum _{k=-\infty }^{\infty }\frac{e^{ikx}}{1+k^{4}}\] Therefore \(c_{k}=\frac{1}{1+k^{4}}\), hence the series to consider is \[ \sum _{k=-\infty }^{\infty }\left \vert k\right \vert ^{n}\left \vert c_{k}\right \vert =\sum _{k=-\infty }^{\infty }\frac{\left \vert k\right \vert ^{n}}{1+k^{4}}\] For large \(\left \vert k\right \vert \) the summand behaves like \(\frac{1}{\left \vert k\right \vert ^{4-n}}\), and such a series converges exactly when the exponent exceeds \(1\). Hence we need\begin{align*} 4-n & >1\\ n & <3 \end{align*}

Therefore the largest admissible integer is \(n=2\) (for \(n=3\) the series behaves like the divergent \(\sum \frac{1}{\left \vert k\right \vert }\)). By Theorem 3.31 the Fourier series converges uniformly to a \(2\)–times continuously differentiable function.
\[ f\left ( x\right ) \sim \sum _{k=1}^{\infty }\left ( 1-\cos \frac{1}{k^{2}}\right ) e^{ikx}\] Therefore \(c_{k}=1-\cos \frac{1}{k^{2}}\) for \(k\geq 1\) (and \(c_{k}=0\) otherwise), hence the series to consider is \[ \sum _{k=-\infty }^{\infty }\left \vert k\right \vert ^{n}\left \vert c_{k}\right \vert =\sum _{k=1}^{\infty }k^{n}\left ( 1-\cos \frac{1}{k^{2}}\right ) \] Using the Taylor expansion \(\cos \theta =1-\frac{\theta ^{2}}{2}+O\left ( \theta ^{4}\right ) \) with \(\theta =\frac{1}{k^{2}}\), we have \(1-\cos \frac{1}{k^{2}}=\frac{1}{2k^{4}}+O\left ( \frac{1}{k^{8}}\right ) \), so the summand behaves like \(\frac{1}{2k^{4-n}}\). As in part (a), this converges exactly when \(4-n>1\), i.e. \(n\leq 2\), and diverges for \(n=3\). Therefore the Fourier series converges uniformly to a \(2\)–times continuously differentiable function.
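The estimate \(1-\cos \frac{1}{k^{2}}\approx \frac{1}{2k^{4}}\) used above is easy to check numerically; this small Python sketch (an illustration only, with an arbitrary choice of sample values of \(k\)) prints \(k^{4}\left ( 1-\cos \frac{1}{k^{2}}\right ) \), which approaches \(\frac{1}{2}\):
\begin{verbatim}
import math

# 1 - cos(1/k^2) behaves like 1/(2 k^4), so k^4 * (1 - cos(1/k^2)) -> 1/2.
for k in (1, 2, 5, 10, 50, 100):
    c_k = 1.0 - math.cos(1.0 / k**2)
    print(k, c_k, k**4 * c_k)
\end{verbatim}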
Which of the following sequences converge in norm to the zero function for \(x\in \mathbb{R} \)? (c) \(v_{n}\left ( x\right ) =\left \{ \begin{array} [c]{ccc}1 & & n<x<n+\frac{1}{n}\\ 0 & & \text{otherwise}\end{array} \right . \), (e) \(v_{n}\left ( x\right ) =\left \{ \begin{array} [c]{ccc}\frac{1}{\sqrt{n}} & & n<x<2n\\ 0 & & \text{otherwise}\end{array} \right . \)
Solution
Using Definition 3.35: a sequence \(v_{n}\left ( x\right ) \) is said to converge in norm to \(f\) if \(\left \Vert v_{n}-f\right \Vert \rightarrow 0\) as \(n\rightarrow \infty \). Since the functions are defined for all \(x\in \mathbb{R} \) and their supports move off to infinity, the relevant norm is the \(L^{2}\) norm on the whole line, \(\left \Vert v_{n}\right \Vert =\sqrt{\int _{-\infty }^{\infty }\left \vert v_{n}\left ( x\right ) \right \vert ^{2}dx}\). Since \(f=0\) here, we need to decide whether \(\lim _{n\rightarrow \infty }\left \Vert v_{n}\right \Vert =0\).

For (c), \(v_{n}\) equals \(1\) on the interval \(\left ( n,n+\frac{1}{n}\right ) \) and \(0\) elsewhere, so\begin{align*} \left \Vert v_{n}\right \Vert ^{2} & =\int _{-\infty }^{\infty }\left \vert v_{n}\left ( x\right ) \right \vert ^{2}dx\\ & =\int _{n}^{n+\frac{1}{n}}1\,dx\\ & =\frac{1}{n} \end{align*}

Therefore \(\left \Vert v_{n}\right \Vert =\frac{1}{\sqrt{n}}\rightarrow 0\) as \(n\rightarrow \infty \). Hence this sequence converges in norm to the zero function.
For (e), \(v_{n}\) equals \(\frac{1}{\sqrt{n}}\) on the interval \(\left ( n,2n\right ) \) and \(0\) elsewhere, so\begin{align*} \left \Vert v_{n}\right \Vert ^{2} & =\int _{-\infty }^{\infty }\left \vert v_{n}\left ( x\right ) \right \vert ^{2}dx\\ & =\int _{n}^{2n}\frac{1}{n}dx\\ & =\frac{1}{n}\left ( 2n-n\right ) \\ & =1 \end{align*}

Therefore \(\left \Vert v_{n}\right \Vert =1\) for every \(n\): the norm does not tend to \(0\), even though \(v_{n}\left ( x\right ) \rightarrow 0\) pointwise at every fixed \(x\) (the pulse eventually moves past any fixed point). Hence this sequence does not converge in norm to the zero function.
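The contrast between the two sequences can also be illustrated numerically. The sketch below (a simple Riemann sum over each pulse's support; the grid size is an arbitrary choice) computes the \(L^{2}\left ( \mathbb{R} \right ) \) norms: for (c) the norm behaves like \(\frac{1}{\sqrt{n}}\) and tends to zero, while for (e) it stays equal to \(1\).
\begin{verbatim}
import numpy as np

# L^2(R) norms of the two pulse sequences, by a Riemann sum over each support.
def l2_norm(values, grid):
    dx = grid[1] - grid[0]
    return np.sqrt(np.sum(values**2) * dx)

for n in (1, 10, 100, 1000):
    xc = np.linspace(n, n + 1.0 / n, 10000, endpoint=False)   # support of (c)
    xe = np.linspace(n, 2.0 * n, 10000, endpoint=False)       # support of (e)
    norm_c = l2_norm(np.ones_like(xc), xc)                    # -> 1/sqrt(n) -> 0
    norm_e = l2_norm(np.full_like(xe, 1.0 / np.sqrt(n)), xe)  # = 1 for every n
    print(n, norm_c, norm_e)
\end{verbatim}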
For each \(n=1,2,\cdots \), define the function \(f_{n}\left ( x\right ) =\left \{ \begin{array} [c]{ccc}1 & & \frac{k}{m}\leq x\leq \frac{k+1}{m}\\ 0 & & \text{otherwise}\end{array} \right . \), where \(n=\frac{1}{2}m\left ( m+1\right ) +k\) and \(0\leq k\leq m\). (a) Show first that \(m,k\) are uniquely determined by \(n.\) (b) Then prove that, on the interval \(\left [ 0,1\right ] \) the sequence \(f_{n}\left ( x\right ) \) converges in norm to \(0\) but does not converge pointwise anywhere.
Solution
Proof by contradiction. Assume there exist two representations\begin{align*} n & =\frac{1}{2}m_{1}\left ( m_{1}+1\right ) +k_{1}\\ n & =\frac{1}{2}m_{2}\left ( m_{2}+1\right ) +k_{2} \end{align*}

with \(0\leq k_{1}\leq m_{1}\), \(0\leq k_{2}\leq m_{2}\) and \(m_{1}\neq m_{2}\); say \(m_{1}<m_{2}\), so \(m_{2}\geq m_{1}+1\). Then\begin{align*} n & =\frac{1}{2}m_{2}\left ( m_{2}+1\right ) +k_{2}\geq \frac{1}{2}\left ( m_{1}+1\right ) \left ( m_{1}+2\right ) \\ & =\frac{1}{2}m_{1}\left ( m_{1}+1\right ) +\left ( m_{1}+1\right ) >\frac{1}{2}m_{1}\left ( m_{1}+1\right ) +k_{1}=n \end{align*}

which is a contradiction (we used \(k_{2}\geq 0\) and \(k_{1}\leq m_{1}<m_{1}+1\)). Hence \(m_{1}=m_{2}\), i.e. \(m\) is uniquely determined by \(n\).
Now fix this unique \(m\) and suppose there exist \(k_{1},k_{2}\geq 0\) with \(k_{1}\neq k_{2}\) such that\begin{align*} n & =\frac{1}{2}m\left ( m+1\right ) +k_{1}\\ n & =\frac{1}{2}m\left ( m+1\right ) +k_{2} \end{align*}
Then \[ \frac{1}{2}m\left ( m+1\right ) +k_{1}=\frac{1}{2}m\left ( m+1\right ) +k_{2}\] Hence \(k_{1}=k_{2}\), contradicting the assumption. Therefore \(k\) is also unique, and the pair \(m,k\) is uniquely determined by \(n\).
\[ f_{n}\left ( x\right ) =\left \{ \begin{array} [c]{ccc}1 & & \frac{k}{m}\leq x\leq \frac{k+1}{m}\\ 0 & & \text{otherwise}\end{array} \right . \] On the interval \(\left [ 0,1\right ] \) we use the \(L^{2}\) norm, \(\left \Vert f_{n}\right \Vert =\sqrt{\int _{0}^{1}\left \vert f_{n}\left ( x\right ) \right \vert ^{2}dx}\) (a constant normalization factor in front of the integral would not affect whether the norm tends to zero). Hence\[ \left \Vert f_{n}\right \Vert =\sqrt{\int _{0}^{1}\left \vert \left \{ \begin{array} [c]{ccc}1 & & \frac{k}{m}\leq x\leq \frac{k+1}{m}\\ 0 & & \text{otherwise}\end{array} \right . \right \vert ^{2}dx}=\sqrt{\int _{0}^{1}\left \{ \begin{array} [c]{ccc}1 & & \frac{k}{m}\leq x\leq \frac{k+1}{m}\\ 0 & & \text{otherwise}\end{array} \right . dx}\]
Let us look at a few values of \(n\) and see what happens.
For \(n=1,n=\frac{1}{2}m\left ( m+1\right ) +k\). Hence if \(m=1\) then \(n=\frac{1}{2}\left ( 2\right ) +0=1\), Hence \(m=1,k=0\). Therefore \(\frac{k}{m}\leq x\leq \frac{k+1}{m}\) becomes \(0\leq x\leq 1\).
For \(n=2,n=\frac{1}{2}m\left ( m+1\right ) +k\). Hence if \(m=1\) then \(n=\frac{1}{2}\left ( 2\right ) +1=2\), hence \(m=1,k=1\). Therefore \(\frac{k}{m}\leq x\leq \frac{k+1}{m}\) becomes \(1\leq x\leq 2\).
For \(n=3,n=\frac{1}{2}m\left ( m+1\right ) +k\). If \(m=1\) we would need \(k=2\), but this violates \(k\leq m\). Try \(m=2\): then \(n=\frac{1}{2}\left ( 2\right ) \left ( 3\right ) +0=3\). Hence \(m=2,k=0\). Therefore \(\frac{k}{m}\leq x\leq \frac{k+1}{m}\) becomes \(0\leq x\leq \frac{1}{2}\).
It looks like the width is becoming smaller as \(n\) increases. To verify this, I wrote a small program which determines, for each \(n\), the values \(m,k\), the corresponding interval \(\frac{k}{m}\leq x\leq \frac{k+1}{m}\), and the width of the part of that interval that remains inside \(\left [ 0,1\right ] \). A sketch of the computation is given below.
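The original listing is not reproduced here; the following Python sketch performs the same computation: for each \(n\) it finds the unique \(m,k\) with \(n=\frac{1}{2}m\left ( m+1\right ) +k\), \(0\leq k\leq m\), and prints the interval \(\left [ \frac{k}{m},\frac{k+1}{m}\right ] \) together with the width of its part inside \(\left [ 0,1\right ] \).
\begin{verbatim}
# For each n, find the unique (m, k) with n = m(m+1)/2 + k, 0 <= k <= m,
# and print the interval [k/m, (k+1)/m] and the width of its part inside [0, 1].
def decompose(n):
    m = 1
    while m * (m + 1) // 2 + m < n:   # advance m until n falls in its block
        m += 1
    k = n - m * (m + 1) // 2
    return m, k

for n in range(1, 16):
    m, k = decompose(n)
    lo, hi = k / m, (k + 1) / m
    width_inside = max(0.0, min(hi, 1.0) - max(lo, 0.0))
    print(n, (m, k), (lo, hi), width_inside)
\end{verbatim}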
Running this for the first several values of \(n\) shows that as \(n\) increases, the interval \(\frac{k}{m}\leq x\leq \frac{k+1}{m}\) either moves outside the domain \(\left [ 0,1\right ] \) (as for \(n=2,5,9\)) or stays inside \(\left [ 0,1\right ] \) but becomes narrower: for \(n=10\) it is \(0\leq x\leq \frac{1}{4}\), while for \(n=1\) it was \(0\leq x\leq 1\).
Since we are integrating \(1\) over an interval whose width inside \(\left [ 0,1\right ] \) is at most \(\frac{1}{m}\), and \(m\rightarrow \infty \) as \(n\rightarrow \infty \) (because \(n\leq \frac{1}{2}m\left ( m+1\right ) +m\)), the integral goes to zero for large \(n\).

In other words, we can bound the norm from above as\begin{align*} \left \Vert f_{n}\right \Vert & =\sqrt{\int _{0}^{1}\left \{ \begin{array} [c]{ccc}1 & & \frac{k}{m}\leq x\leq \frac{k+1}{m}\\ 0 & & \text{otherwise}\end{array} \right . dx}\\ & \leq \sqrt{\frac{1}{m}}\longrightarrow 0\qquad \text{as }n\rightarrow \infty \end{align*}
Hence the sequence \(f_{n}\left ( x\right ) \) converges in norm to \(0\). Now consider pointwise convergence. The definition is that for every \(\varepsilon >0\) there exists \(N\left ( \varepsilon ,x\right ) \) such that \(\left \vert f_{n}\left ( x\right ) \right \vert <\varepsilon \) for all \(n\geq N\); in other words, for each fixed \(x\) we would need \(\lim _{n\rightarrow \infty }f_{n}\left ( x\right ) =0\). This does not happen here: for any fixed \(x\in \left [ 0,1\right ] \) and any \(m\), there is some \(k\) with \(0\leq k\leq m\) such that \(\frac{k}{m}\leq x\leq \frac{k+1}{m}\), so \(f_{n}\left ( x\right ) =1\) for the corresponding \(n\). Thus \(f_{n}\left ( x\right ) =1\) for infinitely many \(n\) and \(f_{n}\left ( x\right ) =0\) for infinitely many other \(n\) (the pulse keeps sweeping across \(\left [ 0,1\right ] \) as it narrows), so \(\lim _{n\rightarrow \infty }f_{n}\left ( x\right ) \) does not exist. Hence the sequence does not converge pointwise at any point of \(\left [ 0,1\right ] \).
The convection-diffusion equation \(u_{t}+cu_{x}=\gamma u_{xx}\) is a simple model for the diffusion of a pollutant in a fluid flow moving with constant speed \(c\). Show that \(v\left ( t,x\right ) =u\left ( t,x+ct\right ) \) solves the heat equation. What is the physical interpretation of this change of variables?
Solution
Write \(v\left ( t,x\right ) =u\left ( t,\xi \right ) \) where \(\xi =x+ct\). By the chain rule,\[ \frac{\partial v}{\partial t}=\frac{\partial u}{\partial t}+\frac{\partial u}{\partial \xi }\frac{\partial \xi }{\partial t}=\frac{\partial u}{\partial t}+c\frac{\partial u}{\partial \xi }\] where the derivatives of \(u\) are evaluated at \(\left ( t,x+ct\right ) \). Since \(u\) solves the convection–diffusion equation, \(\frac{\partial u}{\partial t}+c\frac{\partial u}{\partial \xi }=\gamma \frac{\partial ^{2}u}{\partial \xi ^{2}}\), hence\[ \frac{\partial v}{\partial t}=\gamma \frac{\partial ^{2}u}{\partial \xi ^{2}}\] Similarly \(\frac{\partial v}{\partial x}=\frac{\partial u}{\partial \xi }\frac{\partial \xi }{\partial x}=\frac{\partial u}{\partial \xi }\) and \(\frac{\partial ^{2}v}{\partial x^{2}}=\frac{\partial ^{2}u}{\partial \xi ^{2}}\). Therefore\[ \frac{\partial v}{\partial t}=\gamma \frac{\partial ^{2}v}{\partial x^{2}}\] which is the heat equation. The change of variables describes the pollutant concentration in a frame of reference moving with the fluid at speed \(c\): an observer moving with the flow sees pure diffusion, with the convection term removed. It is a coordinate transformation to the moving frame.
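As an illustration (not part of the required derivation), the change of variables can be verified symbolically on a particular solution: the drifting heat kernel solves the convection–diffusion equation, and shifting \(x\mapsto x+ct\) turns it into the usual heat kernel, which solves the heat equation. A SymPy sketch, with the choice of test solution being mine rather than part of the problem:
\begin{verbatim}
import sympy as sp

x, t, c, gamma = sp.symbols('x t c gamma', positive=True)

# Drifting heat kernel: a particular solution of u_t + c u_x = gamma u_xx.
u = sp.exp(-(x - c*t)**2 / (4*gamma*t)) / sp.sqrt(4*sp.pi*gamma*t)
print(sp.simplify(sp.diff(u, t) + c*sp.diff(u, x) - gamma*sp.diff(u, x, 2)))  # 0

# v(t, x) = u(t, x + c t) then solves the heat equation v_t = gamma v_xx.
v = u.subs(x, x + c*t)
print(sp.simplify(sp.diff(v, t) - gamma*sp.diff(v, x, 2)))  # 0
\end{verbatim}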
For each of the following initial temperature distributions, (i ) write out the Fourier series solution to the heated ring (4.30–32), and (ii ) find the resulting equilibrium temperature (a) \(f\left ( x\right ) =\cos x\), (c) \(f\left ( x\right ) =\left \vert x\right \vert \).
The heated ring problem (4.30–32) is: Solve for \(u\left ( x,t\right ) \) in \[ \frac{\partial u}{\partial t}=\frac{\partial ^{2}u}{\partial x^{2}}\qquad -\pi <x<\pi ,t>0 \] With periodic BC \(u\left ( -\pi ,t\right ) =u\left ( \pi ,t\right ) ,u_{x}\left ( -\pi ,t\right ) =u_{x}\left ( \pi ,t\right ) \) for \(t\geq 0\). With initial conditions \(u\left ( x,0\right ) =f\left ( x\right ) \)
Solution
Starting with the series solution as given in (4.34)\begin{equation} u\left ( x,t\right ) =\frac{a_{0}}{2}+\sum _{n=1}^{\infty }e^{-n^{2}t}\left ( a_{n}\cos nx+b_{n}\sin nx\right ) \tag{1} \end{equation} At \(t=0\) the above becomes (using \(u\left ( x,0\right ) =\cos x\))\[ \cos x=\frac{a_{0}}{2}+\sum _{n=1}^{\infty }a_{n}\cos nx+b_{n}\sin nx \] Hence \(a_{n},b_{n}\) are the Fourier series coefficients of \(\cos x\). Therefore \(a_{1}=1\) and all other \(a_{n},b_{n}\) are zero in order to match the left side with the right side.
The solution in (1) now becomes\[ u\left ( x,t\right ) =e^{-t}\cos x \] The above is the Fourier series solution. To answer (ii), we let \(t\rightarrow \infty \) in the above. This shows that equilibrium temperature will be zero.
Starting with the series solution as given in (4.34)\begin{equation} u\left ( x,t\right ) =\frac{a_{0}}{2}+\sum _{n=1}^{\infty }e^{-n^{2}t}\left ( a_{n}\cos nx+b_{n}\sin nx\right ) \tag{1} \end{equation} At \(t=0\) the above becomes (using \(u\left ( x,0\right ) =\left \vert x\right \vert \))\[ \left \vert x\right \vert =\frac{a_{0}}{2}+\sum _{n=1}^{\infty }a_{n}\cos nx+b_{n}\sin nx \] Hence \(a_{n},b_{n}\) are the Fourier series coefficients of \(\left \vert x\right \vert \). But \(\left \vert x\right \vert \) is even. Hence \(b_{n}=0\). So we only need to find \(a_{0},a_{n}\)\[ a_{0}=\frac{1}{\pi }\int _{-\pi }^{\pi }f\left ( x\right ) dx \] Because \(f\left ( x\right ) \) is even the above simplifies to\begin{align*} a_{0} & =\frac{2}{\pi }\int _{0}^{\pi }f\left ( x\right ) dx\\ & =\frac{2}{\pi }\int _{0}^{\pi }xdx\\ & =\frac{1}{\pi }\left [ x^{2}\right ] _{0}^{\pi }\\ & =\frac{1}{\pi }\left [ \pi ^{2}\right ] \\ & =\pi \end{align*}
And\[ a_{n}=\frac{1}{\pi }\int _{-\pi }^{\pi }f\left ( x\right ) \cos nxdx \] But \(f\left ( x\right ) \) is even and \(\cos nx\) is even, hence product is even. The above simplifies to\begin{align*} a_{n} & =\frac{2}{\pi }\int _{0}^{\pi }f\left ( x\right ) \cos nxdx\\ & =\frac{2}{\pi }\int _{0}^{\pi }x\cos nxdx \end{align*}
Integration by parts gives\begin{align*} a_{n} & =\frac{2}{\pi }\left ( \overset{0}{\overbrace{\left [ x\frac{\sin nx}{n}\right ] _{0}^{\pi }}}-\int _{0}^{\pi }\frac{\sin nx}{n}dx\right ) \\ & =\frac{2}{\pi }\left ( \frac{1}{n}\left [ \frac{\cos nx}{n}\right ] _{0}^{\pi }\right ) \\ & =\frac{2}{\pi n^{2}}\left ( \cos n\pi -1\right ) \\ & =\frac{2}{\pi n^{2}}\left ( \left ( -1\right ) ^{n}-1\right ) \end{align*}
Therefore (1) becomes\begin{equation} u\left ( x,t\right ) =\frac{\pi }{2}+\sum _{n=1}^{\infty }e^{-n^{2}t}\frac{2}{\pi n^{2}}\left ( \left ( -1\right ) ^{n}-1\right ) \cos nx \tag{1A} \end{equation} The above is the Fourier series solution. To answer (ii), we let \(t\rightarrow \infty \); all the exponential terms decay to zero, so the equilibrium temperature is the constant term (the mean of the initial data)\[ u_{eq}=\frac{\pi }{2}\]
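A quick numerical illustration (not required by the problem): truncating (1A) after a modest number of terms reproduces \(\left \vert x\right \vert \) at \(t=0\) up to a small truncation error, and for moderately large \(t\) the solution is essentially the constant \(\frac{\pi }{2}\). The truncation level \(N=50\) and the evaluation times below are arbitrary choices.
\begin{verbatim}
import numpy as np

# Truncated Fourier solution (1A) of the heated ring with u(x,0) = |x|.
def u(x, t, N=50):
    total = np.pi / 2 * np.ones_like(x)
    for n in range(1, N + 1):
        a_n = 2.0 / (np.pi * n**2) * ((-1)**n - 1)
        total += np.exp(-n**2 * t) * a_n * np.cos(n * x)
    return total

x = np.linspace(-np.pi, np.pi, 9)
print(np.abs(u(x, 0.0) - np.abs(x)).max())    # small truncation error: series matches |x|
print(np.abs(u(x, 10.0) - np.pi / 2).max())   # negligible: temperature has equalized to pi/2
\end{verbatim}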
The cable equation \(v_{t}=\gamma v_{xx}-\alpha v\) with \(\gamma ,\alpha >0\), also known as the lossy heat equation, was derived by the nineteenth-century Scottish physicist William Thomson to model propagation of signals in a transatlantic cable. Later, in honor of his work on thermodynamics, including determining the value of absolute zero temperature, he was named Lord Kelvin by Queen Victoria. The cable equation was later used to model the electrical activity of neurons. (a) Show that the general solution to the cable equation is given by \(v\left ( x,t\right ) =e^{-\alpha t}u\left ( x,t\right ) \), where \(u\left ( x,t\right ) \) solves the heat equation \(u_{t}=\gamma u_{xx}\).
(b) Find a Fourier series solution to the Dirichlet initial-boundary value problem \(v_{t}=\gamma v_{xx}-\alpha v\), with initial conditions \(v\left ( x,0\right ) =f\left ( x\right ) \) and boundary conditions \(v\left ( 0,t\right ) =0,v\left ( 1,t\right ) =0\) for \(0\leq x\leq 1,t>0\). Does your solution approach an equilibrium value? If so, how fast?
Solution
Given \begin{equation} v\left ( x,t\right ) =e^{-\alpha t}u\left ( x,t\right ) \tag{1} \end{equation} Hence\begin{equation} \frac{\partial v}{\partial t}=-\alpha e^{-\alpha t}u+e^{-\alpha t}\frac{\partial u}{\partial t} \tag{2} \end{equation} And\begin{align} \frac{\partial v}{\partial x} & =e^{-\alpha t}\frac{\partial u}{\partial x}\nonumber \\ \frac{\partial ^{2}v}{\partial x^{2}} & =e^{-\alpha t}\frac{\partial ^{2}u}{\partial x^{2}} \tag{3} \end{align}
Substituting (1,2,3) into \(v_{t}=\gamma v_{xx}-\alpha v\) gives\[ -\alpha e^{-\alpha t}u+e^{-\alpha t}\frac{\partial u}{\partial t}=\gamma e^{-\alpha t}\frac{\partial ^{2}u}{\partial x^{2}}-\alpha e^{-\alpha t}u \] Canceling \(e^{-\alpha t}\neq 0\) from all the terms gives\begin{align*} -\alpha u+\frac{\partial u}{\partial t} & =\gamma \frac{\partial ^{2}u}{\partial x^{2}}-\alpha u\\ \frac{\partial u}{\partial t} & =\gamma \frac{\partial ^{2}u}{\partial x^{2}} \end{align*}
which is the heat equation, as required.
Now we need to solve\begin{equation} v_{t}=\gamma v_{xx}-\alpha v \tag{1} \end{equation} With initial and boundary conditions given. Using separation of variable, let \(v=T\left ( t\right ) X\left ( x\right ) \) where \(T\left ( t\right ) \) is function that depends on time only and \(X\left ( x\right ) \) is a function that depends on \(x\) only. Using this substitution in (1) gives\[ T^{\prime }X=\gamma X^{\prime \prime }T-\alpha XT \] Dividing by \(XT\neq 0\) gives\[ \frac{1}{\gamma }\frac{T^{\prime }}{T}+\frac{\alpha }{\gamma }=\frac{X^{\prime \prime }}{X}=-\lambda \] Where \(\lambda \) is the separation constant. The above gives two ODE’s to solve\begin{align} X^{\prime \prime }+\lambda X & =0\nonumber \\ X\left ( 0\right ) & =0\nonumber \\ X\left ( 1\right ) & =0 \tag{2} \end{align}
And\begin{align} \frac{1}{\gamma }\frac{T^{\prime }}{T}+\frac{\alpha }{\gamma } & =-\lambda \nonumber \\ T^{\prime }+\alpha T & =-\lambda \gamma T\nonumber \\ T^{\prime }+\alpha T+\lambda \gamma T & =0\nonumber \\ T^{\prime }+\left ( \alpha +\lambda \gamma \right ) T & =0 \tag{3} \end{align}
ODE (2) is the boundary value ODE which will generate the eigenvalues and eigenfunctions.
case \(\lambda <0\)
Let \(-\lambda =\mu ^{2}\). The solution to (2) becomes\[ X=c_{1}\cosh \left ( \mu x\right ) +c_{2}\sinh \left ( \mu x\right ) \] At \(x=0\)\[ 0=c_{1}\] Hence the solution becomes \(X=c_{2}\sinh \left ( \mu x\right ) \). At \(x=1\) this gives \(0=c_{2}\sinh \left ( \mu \right ) \). But \(\sinh \left ( \mu \right ) =0\) only when \(\mu =0\) which is not the case here. Hence \(c_{2}=0\) leading to trivial solution. Therefore \(\lambda <0\) is not eigenvalue.
case \(\lambda =0\)
The solution is \(X\left ( x\right ) =c_{1}x+c_{2}\). At \(x=0\) this becomes \(0=c_{2}\). Hence solution is \(X=c_{1}x\). At \(x=1\) this gives \(0=c_{1}\). Therefore trivial solution. Hence \(\lambda =0\) is not eigenvalue.
case \(\lambda >0\)
Solution is \[ X\left ( x\right ) =c_{1}\cos \left ( \sqrt{\lambda }x\right ) +c_{2}\sin \left ( \sqrt{\lambda }x\right ) \] At \(x=0\) this results in \(0=c_{1}\). The above now becomes\[ X\left ( x\right ) =c_{2}\sin \left ( \sqrt{\lambda }x\right ) \] At \(x=1\)\[ 0=c_{2}\sin \left ( \sqrt{\lambda }\right ) \] For non-trivial solution we want \(\sin \left ( \sqrt{\lambda }\right ) =0\) or \(\sqrt{\lambda }=n\pi ,n=1,2,\cdots \). Hence \[ \lambda _{n}=n^{2}\pi ^{2}\qquad n=1,2,\cdots \] And the corresponding eigenfunctions\begin{equation} X_{n}\left ( x\right ) =\sin \left ( n\pi x\right ) \tag{4} \end{equation} Now we can solve (3)\begin{align*} T^{\prime }+\left ( \alpha +\lambda \gamma \right ) T & =0\\ T_{n}^{\prime }+\left ( \alpha +n^{2}\pi ^{2}\gamma \right ) T_{n} & =0 \end{align*}
The solution is\begin{equation} T_{n}\left ( t\right ) =b_{n}e^{-\left ( \alpha +n^{2}\pi ^{2}\gamma \right ) t}\tag{5} \end{equation} where \(b_{n}\) is an arbitrary constant of integration that depends on \(n\). From (4,5) we obtain the fundamental solutions\[ v_{n}\left ( x,t\right ) =b_{n}e^{-\left ( \alpha +n^{2}\pi ^{2}\gamma \right ) t}\sin \left ( n\pi x\right ) \] The general solution is a linear combination of these\begin{equation} v\left ( x,t\right ) =\sum _{n=1}^{\infty }b_{n}e^{-\left ( \alpha +n^{2}\pi ^{2}\gamma \right ) t}\sin \left ( n\pi x\right ) \tag{6} \end{equation} At \(t=0\) the above becomes\[ f\left ( x\right ) =\sum _{n=1}^{\infty }b_{n}\sin \left ( n\pi x\right ) \] We see that the \(b_{n}\) are the Fourier sine coefficients of \(f\left ( x\right ) \) on \(\left [ 0,1\right ] \), i.e. the Fourier coefficients of its odd extension to \(\left [ -1,1\right ] \), which has period \(2\). Hence\[ b_{n}=\int _{-1}^{1}f\left ( x\right ) \sin \left ( n\pi x\right ) dx \] Since the odd extension of \(f\left ( x\right ) \) is odd and \(\sin \left ( n\pi x\right ) \) is odd, their product is even, and the above becomes\[ b_{n}=2\int _{0}^{1}f\left ( x\right ) \sin \left ( n\pi x\right ) dx \] Using the above in (6) gives\[ v\left ( x,t\right ) =\sum _{n=1}^{\infty }2\left ( \int _{0}^{1}f\left ( x\right ) \sin \left ( n\pi x\right ) dx\right ) e^{-\left ( \alpha +n^{2}\pi ^{2}\gamma \right ) t}\sin \left ( n\pi x\right ) \] To find the equilibrium, we let \(t\rightarrow \infty \); then \(e^{-\left ( \alpha +n^{2}\pi ^{2}\gamma \right ) t}\rightarrow 0\) for every \(n\) because \(\alpha ,\gamma >0\), and the solution approaches \[ v_{eq}\left ( x\right ) =0 \] The approach is exponentially fast: the slowest-decaying mode is \(n=1\), so \(v\left ( x,t\right ) =O\left ( e^{-\left ( \alpha +\pi ^{2}\gamma \right ) t}\right ) \) as \(t\rightarrow \infty \). The decay is faster than for the pure heat equation because of the extra damping factor \(e^{-\alpha t}\).
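To illustrate the rate of approach to equilibrium (an illustration only, using an arbitrary initial condition \(f\left ( x\right ) =x\left ( 1-x\right ) \) and arbitrary values \(\alpha =1\), \(\gamma =\frac{1}{2}\)), the sketch below evaluates the truncated series at \(x=\frac{1}{2}\) and compares it with the slowest mode \(e^{-\left ( \alpha +\pi ^{2}\gamma \right ) t}\); the ratio levels off near \(b_{1}\), confirming the exponential decay rate \(\alpha +\pi ^{2}\gamma \).
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

alpha, gam = 1.0, 0.5              # arbitrary positive constants
f = lambda x: x * (1 - x)          # arbitrary initial condition with f(0) = f(1) = 0

# Fourier sine coefficients b_n = 2 * integral_0^1 f(x) sin(n pi x) dx
b = [2 * quad(lambda x: f(x) * np.sin(n * np.pi * x), 0, 1)[0] for n in range(1, 21)]

def v(x, t):
    return sum(bn * np.exp(-(alpha + n**2 * np.pi**2 * gam) * t) * np.sin(n * np.pi * x)
               for n, bn in enumerate(b, start=1))

for t in (0.0, 0.5, 1.0, 2.0):
    slowest = np.exp(-(alpha + np.pi**2 * gam) * t)
    print(t, v(0.5, t), v(0.5, t) / slowest)  # ratio levels off near b_1
\end{verbatim}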