
my mathematics cheat sheet

Nasser M. Abbasi

December 20, 2024   Compiled on December 20, 2024 at 4:48am

Contents

1 Special ode’s and their solutions
1.1 Airy \(y^{\prime \prime }+axy=0\)
1.2 Chebyshev \(\left ( 1-x^{2}\right ) y^{\prime \prime }-xy^{\prime }+n^{2}y=0\)
1.3 Hermite \(y^{\prime \prime }-2xy^{\prime }+2ny=0\)
1.4 Legendre \(\left ( 1-x^{2}\right ) y^{\prime \prime }-2xy^{\prime }+n\left ( n+1\right ) y=0\)
1.5 Bessel \(x^{2}y^{\prime \prime }+xy^{\prime }+\left ( x^{2}-n^{2}\right ) y=0\)
1.6 Reduced Riccati \(y^{\prime }=ax^{n}+by^{2}\)
1.7 Gauss Hypergeometric ode \(x\left ( 1-x\right ) y^{\prime \prime }+\left ( c-\left ( a+b+1\right ) x\right ) y^{\prime }-aby=0\)
2 Change of variables and chain rule in differential equation
2.1 Example 1 Change of the independent variable using \(z=g\left ( x\right ) \)
2.2 Example 2 Change of the independent variable using \(t=\ln \left ( x\right ) \) Euler ode
2.3 Example 3 Change of the dependent variable using \(y=x^{r}\) Euler ode
3 Changing the role of independent and dependent variable in an ode
3.1 Example 1
3.2 Example 2
3.3 Example 3
3.4 Example 4
3.5 Example 5
3.6 Example 6
3.7 Example 7
4 general notes
5 Converting first order ODE which is homogeneous to separable ODE
6 Direct solving of some simple PDE’s
7 Fourier series flow chart
7.1 Theorem on when we can do term by term differentiation
7.2 Relation between coefficients of Fourier series of \(f\left ( x\right ) \) and Fourier series of \(f^{\prime }\left ( x\right ) \)
7.3 Theorem on convergence of Fourier series
8 Laplacian in different coordinates
9 Linear combination of two solutions is a solution to ODE
10 To find the Wronskian ODE
11 Green functions notes
12 Laplace transform notes
13 Series, power series, Laurent series notes
13.1 Some tricks to find sums
13.1.1 Example 1
13.2 Methods to find Laurent series
13.2.1 Method one
13.2.2 Method Two
13.2.3 Method Three
13.2.4 Conclusion
14 Gamma function notes
15 Riemann zeta function notes
16 Complex functions notes
16.1 Find \(b_{n}\) coefficients in the Laurent series expansion
17 Hints to solve some problems
17.1 Complex analysis and power and Laurent series
17.2 Errors and relative errors
18 Some CAS notes
19 d’Alembert’s Solution to wave PDE
20 Convergence
21 Note on when to raise ln to exp when solving an ode
21.1 Example 1
21.2 Example 2
21.3 Example 3
22 References

A place to keep quick notes about math that I keep forgetting. This is meant to be scratch notes and a cheat sheet for me to write math notes before I forget them or move them somewhere else. It can and will contain errors and/or incomplete descriptions in a number of places. Use at your own risk.

1 Special ode’s and their solutions

These are ode’s whose solutions are in terms of special functions. I will update this as I find more. Most of these special functions come up from working out the series solution of a second order ode which has a regular singular point at the expansion point. These are the more interesting ode’s which generate these special functions.

1.1 Airy  \(y^{\prime \prime }+axy=0\)

The solution is

\[ y\left ( x\right ) =c_{1}\operatorname {AiryAi}\left ( -a^{\frac {1}{3}}x\right ) +c_{2}\operatorname {AiryBi}\left ( -a^{\frac {1}{3}}x\right ) \]
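As a quick sanity check, the claimed solution can be substituted back into the ode with sympy (this is my own check, not from Maple; it assumes sympy’s airyai and airybi):

    import sympy as sp

    x, a, c1, c2 = sp.symbols('x a c1 c2', positive=True)

    # candidate solution from above
    y = c1*sp.airyai(-a**sp.Rational(1, 3)*x) + c2*sp.airybi(-a**sp.Rational(1, 3)*x)

    # substitute into y'' + a*x*y; this should simplify to 0
    print(sp.simplify(sp.diff(y, x, 2) + a*x*y))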

1.2 Chebyshev \(\left ( 1-x^{2}\right ) y^{\prime \prime }-xy^{\prime }+n^{2}y=0\)

For

\[ \left ( 1-x^{2}\right ) y^{\prime \prime }-xy^{\prime }+n^{2}y=0 \]

Singular points are at \(x=1,-1\) and \(\infty \). The solution is valid for \(\left \vert x\right \vert <1\). Maple gives the solution

\[ y\left ( x\right ) =c_{1}\frac {1}{\left ( x+\sqrt {x^{2}-1}\right ) ^{n}}+c_{2}\left ( x+\sqrt {x^{2}-1}\right ) ^{n}\]

For

\[ \left ( 1-x^{2}\right ) y^{\prime \prime }-axy^{\prime }+n^{2}y=0 \]

Maple gives solution

\begin{multline*} y\left ( x\right ) =c_{1}\left ( x^{2}-1\right ) ^{\frac {1}{2}-\frac {a}{4}}\operatorname {LegendreP}\left ( \frac {\sqrt {a^{2}+4n^{2}-2a+1}}{2}-\frac {1}{2},-1+\frac {a}{2},x\right ) \\ +c_{2}\left ( x^{2}-1\right ) ^{\frac {1}{2}-\frac {a}{4}}\operatorname {LegendreQ}\left ( \frac {\sqrt {a^{2}+4n^{2}-2a+1}}{2}-\frac {1}{2},-1+\frac {a}{2},x\right ) \end{multline*}

If \(n\) is a positive integer, then the series solution gives a polynomial solution of degree \(n\). These are called the Chebyshev polynomials.

1.3 Hermite \(y^{\prime \prime }-2xy^{\prime }+2ny=0\)

The series solution converges for all \(x\). If \(n\) is a positive integer, one series terminates. The series solution is in terms of Hermite polynomials.

Maple gives solution

\[ y\left ( x\right ) =c_{1}x\operatorname {KummerM}\left ( \frac {1}{2}-\frac {n}{2},\frac {3}{2},x^{2}\right ) +c_{2}x\operatorname {KummerU}\left ( \frac {1}{2}-\frac {n}{2},\frac {3}{2},x^{2}\right ) \]

1.4 Legendre \(\left ( 1-x^{2}\right ) y^{\prime \prime }-2xy^{\prime }+n\left ( n+1\right ) y=0\)

The series solution is in terms of Legendre functions. When \(n\) is a positive integer, one series terminates (i.e. becomes a polynomial).

Maple gives solution

\[ y\left ( x\right ) =c_{1}\operatorname {LegendreP}\left ( n,x\right ) +c_{2}\operatorname {LegendreQ}\left ( n,x\right ) \]

If the ode is given in form

\[ \sin \left ( \theta \right ) P^{\prime \prime }\left ( \theta \right ) +\cos \left ( \theta \right ) P^{\prime }\left ( \theta \right ) +n\sin \left ( \theta \right ) P\left ( \theta \right ) =0 \]

Then using \(x=\cos \theta \) transforms it to the earlier more familiar form. Maple gives this solution

\[ P\left ( \theta \right ) =c_{1}\operatorname {LegendreP}\left ( \frac {\sqrt {4n+1}}{2}-\frac {1}{2},\cos \theta \right ) +c_{2}\operatorname {LegendreQ}\left ( \frac {\sqrt {4n+1}}{2}-\frac {1}{2},\cos \theta \right ) \]

1.5 Bessel \(x^{2}y^{\prime \prime }+xy^{\prime }+\left ( x^{2}-n^{2}\right ) y=0\)

\(x=0\) is a regular singular point. The solution is in terms of Bessel functions

\[ y\left ( x\right ) =c_{1}\operatorname {BesselJ}\left ( n,x\right ) +c_{2}\operatorname {BesselY}\left ( n,x\right ) \]

1.6 Reduced Riccati \(y^{\prime }=ax^{n}+by^{2}\)

For the special case of \(n=-2\) the solution is

\[ y\left ( x\right ) =\frac {\lambda }{x}-\frac {x^{2b\lambda }}{\frac {bx}{2b\lambda +1}x^{2b\lambda }+c_{1}}\]

Where in the above \(\lambda \) is a root of \(b\lambda ^{2}+\lambda +a=0\).

For \(n\neq -2\)

\begin{align*} w & =\sqrt {x}\left \{ \begin {array} [c]{cc}c_{1}\operatorname {BesselJ}\left ( \frac {1}{2k},\frac {1}{k}\sqrt {ab}x^{k}\right ) +c_{2}\operatorname {BesselY}\left ( \frac {1}{2k},\frac {1}{k}\sqrt {ab}x^{k}\right ) & ab>0\\ c_{1}\operatorname {BesselI}\left ( \frac {1}{2k},\frac {1}{k}\sqrt {-ab}x^{k}\right ) +c_{2}\operatorname {BesselK}\left ( \frac {1}{2k},\frac {1}{k}\sqrt {-ab}x^{k}\right ) & ab<0 \end {array} \right . \\ y & =-\frac {1}{b}\frac {w^{\prime }}{w}\\ k & =1+\frac {n}{2}\end{align*}

1.7 Gauss Hypergeometric ode \(x\left ( 1-x\right ) y^{\prime \prime }+\left ( c-\left ( a+b+1\right ) x\right ) y^{\prime }-aby=0\)

The solution, valid for \(\left \vert x\right \vert <1\), is in terms of the hypergeometric function. The ode has 3 regular singular points: \(x=0,x=1,x=\infty \).

Maple gives this solution

\[ y\left ( x\right ) =c_{1}\operatorname {hypergeom}\left ( \left [ a,b\right ] ,\left [ c\right ] ,x\right ) +c_{2}x^{1-c}\operatorname {hypergeom}\left ( \left [ 1+a-c,1+b-c\right ] ,\left [ 2-c\right ] ,x\right ) \]

And Mathematica gives

\[ y\left ( x\right ) =c_{1}\operatorname {Hypergeometric2F1}\left ( a,b,c,x\right ) +\left ( -1\right ) ^{1-c}x^{1-c}c_{2}\operatorname {Hypergeometric2F1}\left ( 1+a-c,1+b-c,2-c,x\right ) \]
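A numeric spot check of the first solution against the ode, using mpmath’s hyp2f1 (the parameter values below are arbitrary test values I picked):

    from mpmath import hyp2f1, diff, mpf

    a, b, c = mpf('1.3'), mpf('0.7'), mpf('2.1')   # arbitrary test parameters
    x0 = mpf('0.3')                                # a point inside |x| < 1

    f = lambda x: hyp2f1(a, b, c, x)

    # residual of x(1-x)y'' + (c-(a+b+1)x)y' - a*b*y, computed numerically
    r = x0*(1 - x0)*diff(f, x0, 2) + (c - (a + b + 1)*x0)*diff(f, x0) - a*b*f(x0)
    print(r)   # should be ~0, up to numerical error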

2 Change of variables and chain rule in differential equation

These are examples doing change of variable for an ode.

2.1 Example 1 Change of the independent variable using \(z=g\left ( x\right ) \)

Given the ode

\[ \frac {d^{2}y}{dx^{2}}+\frac {dy}{dx}+y=\sin \left ( x\right ) \]

We are asked to change variables from \(x\) to \(z\) where \(z=g\left ( x\right ) \). We can then also write

\[ x=g^{-1}\left ( z\right ) \]

Where \(g^{-1}\left ( z\right ) \) is the inverse function. Using chain rule gives

\[ \frac {dy}{dx}=\frac {dy}{dz}\frac {dz}{dx}\]

And for second derivative

\begin{align*} \frac {d^{2}y}{dx^{2}} & =\frac {d}{dx}\left ( \frac {dy}{dx}\right ) \\ & =\frac {d}{dx}\left ( \frac {dy}{dz}\frac {dz}{dx}\right ) \end{align*}

And now we use the product rule, which is \(\frac {d}{dx}\left ( ab\right ) =a^{\prime }b+ab^{\prime }\) on the above, which gives

\begin{equation} \frac {d^{2}y}{dx^{2}}=\left ( \frac {d}{dx}\frac {dy}{dz}\right ) \left ( \frac {dz}{dx}\right ) +\left ( \frac {dy}{dz}\right ) \left ( \frac {d}{dx}\frac {dz}{dx}\right ) \tag {1}\end{equation}

Let us do each of the terms on the right above one by one.  The second term on the RHS above is easy. It is

\begin{equation} \left ( \frac {dy}{dz}\right ) \left ( \frac {d}{dx}\frac {dz}{dx}\right ) =\left ( \frac {dy}{dz}\right ) \left ( \frac {d^{2}z}{dx^{2}}\right ) \tag {2}\end{equation}

It is the first term in (1) which needs more care. The problem is how to handle \(\frac {d}{dx}\frac {dy}{dz}\), since the denominators are different. The trick is to write \(\frac {d}{dx}\frac {dy}{dz}\) as \(\frac {d}{dz}\frac {dz}{dx}\left ( \frac {dy}{dz}\right ) \), which does not change anything, but now we can change the order and write this as \(\frac {dz}{dx}\frac {d}{dz}\left ( \frac {dy}{dz}\right ) \), which makes the denominators the same, and now it is smooth sailing:

\begin{align*} \frac {d}{dx}\frac {dy}{dz} & =\frac {d}{dz}\frac {dz}{dx}\left ( \frac {dy}{dz}\right ) \\ & =\frac {dz}{dx}\frac {d}{dz}\left ( \frac {dy}{dz}\right ) \\ & =\frac {dz}{dx}\left ( \frac {d^{2}y}{dz^{2}}\right ) \end{align*}

Therefore, the first term in (1) becomes

\begin{align} \left ( \frac {d}{dx}\frac {dy}{dz}\right ) \left ( \frac {dz}{dx}\right ) & =\frac {dz}{dx}\left ( \frac {d^{2}y}{dz^{2}}\right ) \left ( \frac {dz}{dx}\right ) \nonumber \\ & =\left ( \frac {dz}{dx}\right ) ^{2}\left ( \frac {d^{2}y}{dz^{2}}\right ) \tag {3}\end{align}

Using (2,3) then we have

\[ \frac {d^{2}y}{dx^{2}}=\left ( \frac {dz}{dx}\right ) ^{2}\left ( \frac {d^{2}y}{dz^{2}}\right ) +\left ( \frac {dy}{dz}\right ) \left ( \frac {d^{2}z}{dx^{2}}\right ) \]

Hence the original ode now becomes

\begin{align*} \frac {d^{2}y}{dx^{2}}+\frac {dy}{dx}+y & =\sin \left ( x\right ) \\ \overset {y^{\prime \prime }\left ( x\right ) }{\overbrace {\left ( \frac {dz}{dx}\right ) ^{2}\left ( \frac {d^{2}y}{dz^{2}}\right ) +\left ( \frac {dy}{dz}\right ) \left ( \frac {d^{2}z}{dx^{2}}\right ) }}+\overset {y^{\prime }\left ( x\right ) }{\overbrace {\frac {dy}{dz}\frac {dz}{dx}}}+y\left ( z\right ) & =\sin \left ( g^{-1}\left ( z\right ) \right ) \end{align*}

We could have written the RHS above as just \(\sin \left ( x\right ) \) instead of \(\sin \left ( g^{-1}\left ( z\right ) \right ) \), but since the independent variable is now \(z\), it seemed better to write it this way. Both are correct.  Now, since \(z=g\left ( x\right ) \), the above can also be written as

\begin{align*} \left ( \frac {dg}{dx}\right ) ^{2}\left ( \frac {d^{2}y}{dz^{2}}\right ) +\left ( \frac {dy}{dz}\right ) \left ( \frac {d^{2}g}{dx^{2}}\right ) +\frac {dy}{dz}\frac {dg}{dx}+y\left ( z\right ) & =\sin \left ( g^{-1}\left ( z\right ) \right ) \\ \left ( g^{\prime }\left ( x\right ) \right ) ^{2}y^{\prime \prime }\left ( z\right ) +y^{\prime }\left ( z\right ) g^{\prime \prime }\left ( x\right ) +y^{\prime }\left ( z\right ) g^{\prime }\left ( x\right ) +y\left ( z\right ) & =\sin \left ( x\right ) \end{align*}

OK, since the above was so much fun, let’s do the third derivative \(\frac {d^{3}y}{dx^{3}}\)

\begin{align} \frac {d^{3}y}{dx^{3}} & =\frac {d}{dx}\left ( \frac {d^{2}y}{dx^{2}}\right ) \nonumber \\ & =\frac {d}{dx}\left ( \left ( \frac {dz}{dx}\right ) ^{2}\left ( \frac {d^{2}y}{dz^{2}}\right ) +\left ( \frac {dy}{dz}\right ) \left ( \frac {d^{2}z}{dx^{2}}\right ) \right ) \nonumber \\ & =\frac {d}{dx}\left [ \left ( \frac {dz}{dx}\right ) ^{2}\left ( \frac {d^{2}y}{dz^{2}}\right ) \right ] +\frac {d}{dx}\left [ \left ( \frac {dy}{dz}\right ) \left ( \frac {d^{2}z}{dx^{2}}\right ) \right ] \tag {4}\end{align}

Each term above is now found in turn. Looking at the first term in (4)

\[ \frac {d}{dx}\left [ \left ( \frac {dz}{dx}\right ) ^{2}\left ( \frac {d^{2}y}{dz^{2}}\right ) \right ] \]

Using the product rule, which is \(\frac {d}{dx}\left ( ab\right ) =a^{\prime }b+ab^{\prime }\) on the above gives

\[ \frac {d}{dx}\left [ \left ( \frac {dz}{dx}\right ) ^{2}\left ( \frac {d^{2}y}{dz^{2}}\right ) \right ] =\frac {d}{dx}\left [ \left ( \frac {dz}{dx}\right ) ^{2}\right ] \frac {d^{2}y}{dz^{2}}+\left ( \frac {dz}{dx}\right ) ^{2}\frac {d}{dx}\left [ \frac {d^{2}y}{dz^{2}}\right ] \]

But \(\frac {d}{dx}\left [ \left ( \frac {dz}{dx}\right ) ^{2}\right ] =2\frac {dz}{dx}\frac {d^{2}z}{dx^{2}}\) and for \(\frac {d}{dx}\left ( \frac {d^{2}y}{dz^{2}}\right ) \) we have to use the same trick as before by writing \(\frac {d}{dx}\left ( \frac {d^{2}y}{dz^{2}}\right ) =\frac {d}{dz}\frac {dz}{dx}\left ( \frac {d^{2}y}{dz^{2}}\right ) =\frac {dz}{dx}\frac {d}{dz}\left ( \frac {d^{2}y}{dz^{2}}\right ) \), and now we have \(\frac {d}{dx}\left ( \frac {d^{2}y}{dz^{2}}\right ) =\frac {dz}{dx}\frac {d^{3}y}{dz^{3}}\). Hence the first term in (4) is now done.

\begin{align} \frac {d}{dx}\left [ \left ( \frac {dz}{dx}\right ) ^{2}\left ( \frac {d^{2}y}{dz^{2}}\right ) \right ] & =2\frac {dz}{dx}\frac {d^{2}z}{dx^{2}}\frac {d^{2}y}{dz^{2}}+\left ( \frac {dz}{dx}\right ) ^{2}\frac {dz}{dx}\frac {d^{3}y}{dz^{3}}\nonumber \\ & =2\frac {dz}{dx}\frac {d^{2}z}{dx^{2}}\frac {d^{2}y}{dz^{2}}+\left ( \frac {dz}{dx}\right ) ^{3}\frac {d^{3}y}{dz^{3}} \tag {5}\end{align}

Now we look at the second term in (4) which is \(\frac {d}{dx}\left [ \left ( \frac {dy}{dz}\right ) \left ( \frac {d^{2}z}{dx^{2}}\right ) \right ] \) and apply the product rule, this gives

\begin{align} \frac {d}{dx}\left [ \left ( \frac {dy}{dz}\right ) \left ( \frac {d^{2}z}{dx^{2}}\right ) \right ] & =\frac {d}{dx}\left [ \frac {dy}{dz}\right ] \left ( \frac {d^{2}z}{dx^{2}}\right ) +\frac {dy}{dz}\frac {d}{dx}\left [ \frac {d^{2}z}{dx^{2}}\right ] \nonumber \\ & =\frac {d}{dz}\frac {dz}{dx}\left [ \frac {dy}{dz}\right ] \left ( \frac {d^{2}z}{dx^{2}}\right ) +\frac {dy}{dz}\frac {d^{3}z}{dx^{3}}\nonumber \\ & =\frac {dz}{dx}\frac {d}{dz}\left [ \frac {dy}{dz}\right ] \left ( \frac {d^{2}z}{dx^{2}}\right ) +\frac {dy}{dz}\frac {d^{3}z}{dx^{3}}\nonumber \\ & =\frac {dz}{dx}\frac {d^{2}y}{dz^{2}}\left ( \frac {d^{2}z}{dx^{2}}\right ) +\frac {dy}{dz}\frac {d^{3}z}{dx^{3}} \tag {6}\end{align}

That is it. We are done. (5,6) are the two terms in (4). Therefore

\begin{align*} \frac {d^{3}y}{dx^{3}} & =2\frac {dz}{dx}\frac {d^{2}z}{dx^{2}}\frac {d^{2}y}{dz^{2}}+\left ( \frac {dz}{dx}\right ) ^{3}\frac {d^{3}y}{dz^{3}}+\frac {dz}{dx}\frac {d^{2}y}{dz^{2}}\left ( \frac {d^{2}z}{dx^{2}}\right ) +\frac {dy}{dz}\frac {d^{3}z}{dx^{3}}\\ & =3\frac {dz}{dx}\frac {d^{2}z}{dx^{2}}\frac {d^{2}y}{dz^{2}}+\left ( \frac {dz}{dx}\right ) ^{3}\frac {d^{3}y}{dz^{3}}+\frac {dy}{dz}\frac {d^{3}z}{dx^{3}}\end{align*}

Now, since \(z=g\left ( x\right ) \) the above can also be written as

\[ y^{\prime \prime \prime }\left ( x\right ) =3g^{\prime }\left ( x\right ) g^{\prime \prime }\left ( x\right ) y^{\prime \prime }\left ( z\right ) +\left ( g^{\prime }\left ( x\right ) \right ) ^{3}y^{\prime \prime \prime }\left ( z\right ) +y^{\prime }\left ( z\right ) g^{\prime \prime \prime }\left ( x\right ) \]

This table shows a summary of the transformation for each derivative \(y^{\left ( n\right ) }\left ( x\right ) \) when using the change of variables \(z=g\left ( x\right ) \)

\(y^{\prime }\left ( x\right ) =y^{\prime }\left ( z\right ) g^{\prime }\left ( x\right ) \)
\(y^{\prime \prime }\left ( x\right ) =\left ( g^{\prime }\left ( x\right ) \right ) ^{2}y^{\prime \prime }\left ( z\right ) +y^{\prime }\left ( z\right ) g^{\prime \prime }\left ( x\right ) \)
\(y^{\prime \prime \prime }\left ( x\right ) =3g^{\prime }\left ( x\right ) g^{\prime \prime }\left ( x\right ) y^{\prime \prime }\left ( z\right ) +\left ( g^{\prime }\left ( x\right ) \right ) ^{3}y^{\prime \prime \prime }\left ( z\right ) +y^{\prime }\left ( z\right ) g^{\prime \prime \prime }\left ( x\right ) \)
\(y^{\prime \prime \prime \prime }\left ( x\right ) =3\left ( g^{\prime \prime }\left ( x\right ) \right ) ^{2}y^{\prime \prime }\left ( z\right ) +4g^{\prime }\left ( x\right ) y^{\prime \prime }\left ( z\right ) g^{\prime \prime \prime }\left ( x\right ) +6\left ( g^{\prime }\left ( x\right ) \right ) ^{2}g^{\prime \prime }\left ( x\right ) y^{\prime \prime \prime }\left ( z\right ) +y^{\prime }\left ( z\right ) g^{\prime \prime \prime \prime }\left ( x\right ) +\left ( g^{\prime }\left ( x\right ) \right ) ^{4}y^{\prime \prime \prime \prime }\left ( z\right ) \)

Strictly speaking, it would be better to use a different variable than \(y\) when changing the independent variable, i.e. instead of writing \(y\left ( z\right ) \) in all the above, we should write \(u\left ( z\right ) \) in its place. The above table will then look like

\(y^{\prime }\left ( x\right ) =u^{\prime }\left ( z\right ) g^{\prime }\left ( x\right ) \)
\(y^{\prime \prime }\left ( x\right ) =\left ( g^{\prime }\left ( x\right ) \right ) ^{2}u^{\prime \prime }\left ( z\right ) +u^{\prime }\left ( z\right ) g^{\prime \prime }\left ( x\right ) \)
\(y^{\prime \prime \prime }\left ( x\right ) =3g^{\prime }\left ( x\right ) g^{\prime \prime }\left ( x\right ) u^{\prime \prime }\left ( z\right ) +\left ( g^{\prime }\left ( x\right ) \right ) ^{3}u^{\prime \prime \prime }\left ( z\right ) +u^{\prime }\left ( z\right ) g^{\prime \prime \prime }\left ( x\right ) \)
\(y^{\prime \prime \prime \prime }\left ( x\right ) =3\left ( g^{\prime \prime }\left ( x\right ) \right ) ^{2}u^{\prime \prime }\left ( z\right ) +4g^{\prime }\left ( x\right ) u^{\prime \prime }\left ( z\right ) g^{\prime \prime \prime }\left ( x\right ) +6\left ( g^{\prime }\left ( x\right ) \right ) ^{2}g^{\prime \prime }\left ( x\right ) u^{\prime \prime \prime }\left ( z\right ) +u^{\prime }\left ( z\right ) g^{\prime \prime \prime \prime }\left ( x\right ) +\left ( g^{\prime }\left ( x\right ) \right ) ^{4}u^{\prime \prime \prime \prime }\left ( z\right ) \)

So any place where \(y\left ( z\right ) \) shows up in the transformed expression, it should be written with the new letter \(u\left ( z\right ) \) for the dependent variable. But this is not always enforced.
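The second and third derivative rows of the table can be verified with sympy for a concrete test pair, say \(g\left ( x\right ) =e^{x}\) and \(u=\sin \) (my choice, just for testing):

    import sympy as sp

    x, z = sp.symbols('x z')
    gx = sp.exp(x)     # test g(x)
    u = sp.sin         # test u(z)

    y = u(gx)          # y(x) = u(g(x))

    # u', u'', u''' evaluated at z = g(x)
    up = lambda k: sp.diff(u(z), z, k).subs(z, gx)

    row2 = sp.diff(gx, x)**2*up(2) + up(1)*sp.diff(gx, x, 2)
    row3 = (3*sp.diff(gx, x)*sp.diff(gx, x, 2)*up(2)
            + sp.diff(gx, x)**3*up(3) + up(1)*sp.diff(gx, x, 3))

    print(sp.simplify(sp.diff(y, x, 2) - row2))   # 0
    print(sp.simplify(sp.diff(y, x, 3) - row3))   # 0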

2.2 Example 2 Change of the independent variable using \(t=\ln \left ( x\right ) \) Euler ode

Given the ode

\[ x^{2}\frac {d^{2}y}{dx^{2}}+2x\frac {dy}{dx}+y=0 \]

We are asked to do the change of variable \(t=\ln \left ( x\right ) \). The chain rule gives

\begin{align*} \frac {dy}{dx} & =\frac {dy}{dt}\frac {dt}{dx}\\ & =\frac {dy}{dt}\frac {1}{x}\end{align*}

And

\begin{align*} \frac {d^{2}y}{dx^{2}} & =\frac {d}{dx}\left ( \frac {dy}{dx}\right ) \\ & =\frac {d}{dx}\left ( \frac {dy}{dt}\frac {1}{x}\right ) \\ & =\frac {d}{dx}\left [ \frac {dy}{dt}\right ] \frac {1}{x}+\frac {dy}{dt}\frac {d}{dx}\left ( \frac {1}{x}\right ) \\ & =\frac {d}{dt}\frac {dt}{dx}\left [ \frac {dy}{dt}\right ] \frac {1}{x}-\frac {dy}{dt}\frac {1}{x^{2}}\\ & =\frac {dt}{dx}\frac {d^{2}y}{dt^{2}}\frac {1}{x}-\frac {dy}{dt}\frac {1}{x^{2}}\\ & =\frac {1}{x}\frac {d^{2}y}{dt^{2}}\frac {1}{x}-\frac {dy}{dt}\frac {1}{x^{2}}\\ & =\frac {1}{x^{2}}\frac {d^{2}y}{dt^{2}}-\frac {dy}{dt}\frac {1}{x^{2}}\end{align*}

Hence the original ode becomes

\begin{align*} x^{2}\left ( \frac {1}{x^{2}}\frac {d^{2}y}{dt^{2}}-\frac {dy}{dt}\frac {1}{x^{2}}\right ) +2x\left ( \frac {dy}{dt}\frac {1}{x}\right ) +y & =0\\ \frac {d^{2}y}{dt^{2}}-\frac {dy}{dt}+2\frac {dy}{dt}+y & =0\\ \frac {d^{2}y}{dt^{2}}+\frac {dy}{dt}+y & =0 \end{align*}
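Both forms can be checked with sympy’s dsolve; the Euler ode in \(x\) and the constant coefficient ode in \(t\) should give the same solution once \(t=\ln \left ( x\right ) \) is substituted (a check I added, assuming sympy):

    import sympy as sp

    x, t = sp.symbols('x t', positive=True)
    y = sp.Function('y')

    # original Euler ode in x
    print(sp.dsolve(x**2*y(x).diff(x, 2) + 2*x*y(x).diff(x) + y(x), y(x)))

    # transformed constant coefficient ode in t
    print(sp.dsolve(y(t).diff(t, 2) + y(t).diff(t) + y(t), y(t)))

    # expect: x**(-1/2) times cos/sin of (sqrt(3)/2)*log(x) for the first,
    # and exp(-t/2) times cos/sin of (sqrt(3)/2)*t for the second --
    # the same thing after substituting t = ln(x)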

2.3 Example 3 Change of the dependent variable using \(y=x^{r}\) Euler ode

Given the ode

\[ x^{2}\frac {d^{2}y}{dx^{2}}+2x\frac {dy}{dx}+y=0 \]

We are asked to do the change of variable \(y=x^{r}\). Then

\[ \frac {dy}{dx}=rx^{r-1}\]

And

\begin{align*} \frac {d^{2}y}{dx^{2}} & =\frac {d}{dx}\left ( rx^{r-1}\right ) \\ & =r\left ( r-1\right ) x^{r-2}\end{align*}

Hence the original ode becomes

\begin{align*} x^{2}\left ( r\left ( r-1\right ) x^{r-2}\right ) +2x\left ( rx^{r-1}\right ) +x^{r} & =0\\ r\left ( r-1\right ) x^{r}+2rx^{r}+x^{r} & =0\\ r\left ( r-1\right ) +2r+1 & =0 \end{align*}

Solving for \(r\) gives the roots (here \(r^{2}+r+1=0\)). Hence the solutions are \(y_{1}=x^{r_{1}}\) and \(y_{2}=x^{r_{2}}\). The final solution is therefore

\begin{align*} y & =c_{1}y_{1}+c_{2}y_{2}\\ & =c_{1}x^{r_{1}}+c_{2}x^{r_{2}}\end{align*}

This method of solving the Euler ode is much simpler than using the \(t=\ln \left ( x\right ) \) change of variables, but for some reason most textbooks use the latter.

3 Changing the role of independent and dependent variable in an ode

(added Dec 14, 2024).

Given an ode \(y^{\prime }\left ( x\right ) =f\left ( x,y\right ) \), we want to change it so that instead of \(y\left ( x\right ) \) being the dependent variable, \(x\left ( y\right ) \) becomes the dependent variable. For example, given the ode

\[ \frac {d^{2}y}{dx^{2}}=\frac {dy}{dx}e^{y\left ( x\right ) }\]

The new ode becomes

\[ \frac {d^{2}x}{dy^{2}}=-\left ( \frac {dx}{dy}\right ) ^{2}e^{y}\]

This is easier to solve for \(x\left ( y\right ) \). Once solved, we flip back and find \(y\) from the solution. Sometimes this trick can make solving a hard ode very easy; it can also make solving an easy ode very hard. The only way to find out is to try it. So if we have an ode that we are having a hard time solving, we can try this trick.

For a first order ode, the method is easy. We just isolate \(\frac {dy}{dx}\), then flip the left hand side and the right hand side, and change all \(y\left ( x\right ) \) to just \(y\) and all \(x\) to \(x\left ( y\right ) \).

More formally, this can also be done using a change of variables. The first step is the change of variables

\begin{align*} x & =v\left ( t\right ) \\ y & =t \end{align*}

If we carry out the above change of variables, the new ode will be in terms of \(v\left ( t\right ) ,v^{\prime }\left ( t\right ) \) and so on.

Now we replace all the \(v^{\left ( n\right ) }\left ( t\right ) \) with \(x^{\left ( n\right ) }\left ( y\right ) \) where \(n\) here is the order of derivative. And replace any \(t\) by \(y\) (not \(y\left ( x\right ) \) but just \(y\)). And replace any \(v\left ( t\right ) \) by \(x\). The new ode will be the flipped ode.

When we do the above change of variables using chain rule, these will result in the following

\begin{align*} \frac {dy}{dx} & \rightarrow \frac {1}{\frac {dv}{dt}}=\frac {dt}{dv}\\ \frac {d^{2}y}{dx^{2}} & \rightarrow -\frac {\frac {d^{2}v}{dt^{2}}}{\left ( \frac {dv}{dt}\right ) ^{3}}\\ \frac {d^{3}y}{dx^{3}} & \rightarrow \frac {3\left ( \frac {d^{2}v}{dt^{2}}\right ) ^{2}-\frac {dv}{dt}\frac {d^{3}v}{dt^{3}}}{\left ( \frac {dv}{dt}\right ) ^{5}}\end{align*}

And so on. Once the above is done, the rest is easy. We just replace any \(\frac {dv}{dt}\) by \(\frac {dx}{dy}\), any \(t\) by \(y\), and any \(v\) by \(x\). We will not change the roles for an ode of order higher than two in these examples.
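These flip formulas can be tested with sympy on a concrete invertible function, say \(x=v\left ( t\right ) =e^{t}\) so that \(y\left ( x\right ) =\ln x\) (my test case):

    import sympy as sp

    t, x = sp.symbols('t x', positive=True)

    v = sp.exp(t)   # x = v(t) = e^t, so the inverse is y(x) = ln(x), with t = y
    vp, vpp, vppp = sp.diff(v, t), sp.diff(v, t, 2), sp.diff(v, t, 3)

    # flipped-derivative formulas from above
    dy1 = 1/vp
    dy2 = -vpp/vp**3
    dy3 = (3*vpp**2 - vp*vppp)/vp**5

    # direct derivatives of y(x) = ln(x), written back in terms of t via x = e^t
    direct = [sp.diff(sp.log(x), x, k).subs(x, sp.exp(t)) for k in (1, 2, 3)]

    print([sp.simplify(f - d) for f, d in zip((dy1, dy2, dy3), direct)])   # [0, 0, 0]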

3.1 Example 1

Change the role for the ode

\[ \frac {dy}{dx}=x \]

This has solution

\begin{equation} y\left ( x\right ) =\frac {1}{2}x^{2}+c_{1} \tag {1}\end{equation}

Since this is first order, we can do it the easy way without change of variable. Flip the left side and flip the right side and do the renaming

\[ \frac {dx}{dy}=\frac {1}{x}\]

If we want to do it via change of variables, the method is: Let

\begin{align*} x & =v\left ( t\right ) \\ y & =t \end{align*}

Then

\[ \frac {dy}{dx}=\frac {dy}{dt}\frac {dt}{dv}\frac {dv}{dx}\]

But \(\frac {dy}{dt}=1\) and the above becomes

\[ \frac {dy}{dx}=\frac {dt}{dv}\frac {dv}{dx}\]

And \(\frac {dv}{dx}=1\) and the above becomes

\[ \frac {dy}{dx}=\frac {dt}{dv}\]

Hence ode becomes

\begin{align*} \frac {dt}{dv} & =v\left ( t\right ) \\ \frac {dv}{dt} & =\frac {1}{v}\end{align*}

Now we replace \(v^{\prime }\left ( t\right ) \) by \(x^{\prime }\left ( y\right ) \) and \(v\) by \(x\). The above becomes (which is the flipped ode)

\[ \frac {dx}{dy}=\frac {1}{x}\]

Solving for \(x\left ( y\right ) \) gives

\begin{align*} x_{1} & =\sqrt {2y+c_{1}}\\ x_{2} & =-\sqrt {2y+c_{1}}\end{align*}

Let’s take the first solution and solve for \(y\). This gives

\begin{align} x^{2} & =2y+c_{1}\nonumber \\ y & =\frac {1}{2}x^{2}-\frac {1}{2}c_{1}\nonumber \\ & =\frac {1}{2}x^{2}+c_{1} \tag {2}\end{align}

Which is the same as (1), after renaming the constant \(-\frac {1}{2}c_{1}\) as \(c_{1}\). Of course, in this example there is no point in changing the roles, but this was just an example.

3.2 Example 2

Change the role for the ode

\[ \frac {dy}{dx}=e^{y}\]

This has solution

\begin{equation} y\left ( x\right ) =\ln \left ( \frac {-1}{x+c_{1}}\right ) \tag {1}\end{equation}

Since this is first order, we will do it the easy way. Flip the left side and flip the right side and do the renaming. This gives

\[ \frac {dx}{dy}=e^{-y}\]

Solving this gives

\[ x=-e^{-y}+c_{1}\]

Solving for \(y\) gives

\begin{align*} -x+c_{1} & =e^{-y}\\ \ln \left ( -x+c_{1}\right ) & =-y\\ y & =-\ln \left ( -x+c_{1}\right ) \\ & =\ln \left ( \frac {1}{-x+c_{1}}\right ) \\ & =\ln \left ( \frac {-1}{x-c_{1}}\right ) \\ & =\ln \left ( \frac {-1}{x+c_{2}}\right ) \end{align*}

Which is the same as (1).

3.3 Example 3

Change the role for the ode

\begin{equation} y\ln y+\left ( x-\ln y\right ) \frac {dy}{dx}=0 \tag {1}\end{equation}

Solving the above gives

\begin{align} y_{1} & =e^{x-\sqrt {x^{2}-2c_{1}}}\tag {2}\\ y_{2} & =e^{x+\sqrt {x^{2}-2c_{1}}}\nonumber \end{align}

Since this is first order, we will do it the easy way. First isolate \(\frac {dy}{dx}\) then flip the left side and the right side and rename. Solving for \(\frac {dy}{dx}\) from (1) gives

\[ \frac {dy}{dx}=\frac {-y\ln y}{x-\ln y}\]

Flipping

\begin{align} \frac {dx}{dy} & =\frac {\ln \left ( y\right ) -x}{y\ln y}\nonumber \\ & =\frac {1}{y}-\frac {x}{y\ln y}\nonumber \\ \frac {dx}{dy}+\frac {x}{y\ln y} & =\frac {1}{y} \tag {3}\end{align}

In this example we see that changing roles really paid off: Eq. (3) is a linear ode in \(x\left ( y\right ) \), but (1) is very hard to solve for \(y\left ( x\right ) \) and needs Lie symmetry to solve. Solving (3) gives

\[ x=\frac {\ln y}{2}+\frac {c_{1}}{\ln y}\]

Solving the above for \(y\) gives the same solutions as (2).
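Eq. (3) being linear means sympy’s dsolve handles it directly (a quick check I added):

    import sympy as sp

    y = sp.symbols('y', positive=True)
    x = sp.Function('x')

    ode = sp.Eq(x(y).diff(y) + x(y)/(y*sp.log(y)), 1/y)
    print(sp.dsolve(ode, x(y)))   # expect x(y) = log(y)/2 + C1/log(y)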

3.4 Example 4

Change the role for the ode

\begin{equation} \frac {d^{2}y}{dx^{2}}=\frac {dy}{dx}e^{y\left ( x\right ) } \tag {1}\end{equation}

This has solution

\begin{equation} y\left ( x\right ) =c_{1}c_{2}+c_{1}x+\ln \left ( \frac {-c_{1}}{e^{c_{1}c_{2}}e^{xc_{1}}-1}\right ) \tag {2}\end{equation}

Since this is not first order, we cannot use the easy method as with first order; we have to do the change of variables, since with the second derivative it is more complicated. Let

\begin{align*} x & =v\left ( t\right ) \\ y & =t \end{align*}

Using the rules given above, we know that

\begin{align} \frac {dy}{dx} & =\frac {1}{\frac {dv}{dt}}\tag {3}\\ \frac {d^{2}y}{dx^{2}} & =-\frac {\frac {d^{2}v}{dt^{2}}}{\left ( \frac {dv}{dt}\right ) ^{3}}\nonumber \end{align}

Substituting (3) into (1) (and changing any \(y\left ( x\right ) \) to \(t\) and any \(x\) to \(v\left ( t\right ) \)) gives

\begin{align*} -\frac {\frac {d^{2}v}{dt^{2}}}{\left ( \frac {dv}{dt}\right ) ^{3}} & =\frac {1}{\frac {dv}{dt}}e^{t}\\ -\frac {d^{2}v}{dt^{2}} & =\frac {\left ( \frac {dv}{dt}\right ) ^{3}}{\frac {dv}{dt}}e^{t}\\ -\frac {d^{2}v}{dt^{2}} & =\left ( \frac {dv}{dt}\right ) ^{2}e^{t}\end{align*}

We now replace each \(\frac {dv}{dt}\) by \(\frac {dx}{dy}\) and each \(t\) by \(y\). The above becomes

\[ \frac {d^{2}x}{dy^{2}}=-\left ( \frac {dx}{dy}\right ) ^{2}e^{y}\]

And the above is the final flipped ode. The solution is

\[ x=-\frac {1}{c_{1}}\ln \left ( e^{y}\right ) +\frac {1}{c_{1}}\ln \left ( e^{y}-c_{1}\right ) +c_{2}\]

To obtain \(y\) as a function of \(x\), we just isolate \(y\) from the above.

\begin{align*} c_{1}x & =-\ln \left ( e^{y}\right ) +\ln \left ( e^{y}-c_{1}\right ) +c_{1}c_{2}\\ c_{1}x-c_{1}c_{2} & =\ln \left ( \frac {e^{y}-c_{1}}{e^{y}}\right ) \\ e^{\left ( c_{1}x-c_{1}c_{2}\right ) } & =\frac {e^{y}-c_{1}}{e^{y}}\\ e^{\left ( c_{1}x-c_{1}c_{2}\right ) } & =1-c_{1}e^{-y}\\ 1-e^{\left ( c_{1}x-c_{1}c_{2}\right ) } & =c_{1}e^{-y}\\ \frac {1-e^{\left ( c_{1}x-c_{1}c_{2}\right ) }}{c_{1}} & =e^{-y}\\ -y & =\ln \left ( \frac {1-e^{\left ( c_{1}x-c_{1}c_{2}\right ) }}{c_{1}}\right ) \\ y & =\ln \left ( \frac {c_{1}}{1-e^{\left ( c_{1}x-c_{1}c_{2}\right ) }}\right ) \end{align*}

Which is the solution to the original ode, obtained by first flipping the ode.

3.5 Example 5

Change the role for the ode

\begin{equation} 1+xy\left ( 1+xy^{2}\right ) \frac {dy}{dx}=0 \tag {1}\end{equation}

As it stands, this is hard to solve; it needs Lie symmetry. The solution is

\begin{align*} y_{1} & =\frac {1}{x}\sqrt {-x\left ( 2x\operatorname {LambertW}\left ( -\frac {1}{2}c_{1}e^{\frac {-2x-1}{2x}}\right ) +2x+1\right ) }\\ y_{2} & =-\frac {1}{x}\sqrt {-x\left ( 2x\operatorname {LambertW}\left ( -\frac {1}{2}c_{1}e^{\frac {-2x-1}{2x}}\right ) +2x+1\right ) }\end{align*}

By flipping roles, the ode becomes Bernoulli, which is much easier. Since this is first order, we will use the easy method. First we isolate\(\ \frac {dy}{dx}\) from (1) then flip both sides and rename. Solving for\(\ \frac {dy}{dx}\) in (1) gives

\[ \frac {dy}{dx}=\frac {-1}{xy\left ( 1+xy^{2}\right ) }\]

Flipping and renaming \(y\left ( x\right ) \) to \(y\) and \(x\) to \(x\left ( y\right ) \) gives

\[ \frac {dx}{dy}=-xy-x^{2}y^{3}\]

This is in the form

\[ x^{\prime }=Px+Qx^{n}\]

where \(n=2\) here. Hence it is a Bernoulli ode, which is easily solved. The solution is

\[ x=\frac {1}{-2+c_{1}e^{\frac {y^{2}}{2}}-y^{2}}\]

The last step is to solve for \(y\) as function of \(x\).

\begin{align*} x\left ( -2+c_{1}e^{\frac {y^{2}}{2}}-y^{2}\right ) & =1\\ -2x+c_{1}xe^{\frac {y^{2}}{2}}-xy^{2} & =1\\ c_{1}e^{\frac {y^{2}}{2}}-y^{2} & =\frac {1+2x}{x}\end{align*}

Solving for \(y\) from the above gives the same answer as before. This is an example where flipping roles paid off well. But the only way to know is to try it and see.
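The Bernoulli solution can be verified by substituting it back into the flipped ode (my own sympy check):

    import sympy as sp

    y, c1 = sp.symbols('y c1')

    x = 1/(-2 + c1*sp.exp(y**2/2) - y**2)        # solution of the flipped ode
    residual = sp.diff(x, y) - (-x*y - x**2*y**3)
    print(sp.simplify(residual))                 # 0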

3.6 Example 6

Change the role for the ode

\begin{equation} \left ( 1-4xy^{2}\right ) \frac {dy}{dx}=y^{3} \tag {1}\end{equation}

As it stands, this is homogeneous of class G. The solution is

\begin{align*} y_{1} & =-\frac {1}{2x}\sqrt {x\left ( 1+\sqrt {16c_{1}x+1}\right ) }\\ y_{2} & =\frac {1}{2x}\sqrt {x\left ( 1+\sqrt {16c_{1}x+1}\right ) }\\ y_{3} & =-\frac {1}{2x}\sqrt {-x\left ( -1+\sqrt {16c_{1}x+1}\right ) }\\ y_{4} & =\frac {1}{2x}\sqrt {-x\left ( -1+\sqrt {16c_{1}x+1}\right ) }\end{align*}

By flipping roles, the ode becomes linear, which is much easier to solve. Since this is first order, we will use the easy method. First we isolate\(\ \frac {dy}{dx}\) from (1) then flip both sides and rename. Solving for\(\ \frac {dy}{dx}\) in (1) gives

\[ \frac {dy}{dx}=\frac {y^{3}}{1-4xy^{2}}\]

Flipping and renaming \(y\left ( x\right ) \) to \(y\) and \(x\) to \(x\left ( y\right ) \) gives

\begin{align*} \frac {dx}{dy} & =\frac {1-4xy^{2}}{y^{3}}\\ & =\frac {1}{y^{3}}-4\frac {x}{y}\end{align*}

Or

\[ \frac {dx}{dy}+\frac {4}{y}x=\frac {1}{y^{3}}\]

Which is linear ode in \(x\left ( y\right ) \). Solving gives

\[ x=\frac {1}{y^{4}}\left ( \frac {y^{2}}{2}+c_{1}\right ) \]

The last step is to solve for \(y\), which will give the same solution as above.

3.7 Example 7

Change the role for the ode

\begin{equation} \frac {dy}{dx}=\frac {x}{y^{2}x^{2}+y^{5}} \tag {1}\end{equation}

As it stands, this can be solved using Lie symmetry, or as an exact ode but with an integrating factor that needs to be found first. The solution is

\[ y_{1}=\frac {1}{2}\left ( -8x^{2}-12\operatorname {LambertW}\left ( \frac {4}{3}c_{1}e^{-\frac {2}{3}x^{2}-1}\right ) -12\right ) ^{\frac {1}{3}}\]

And 2 more (too long to type). By flipping roles the new ode becomes

\begin{align*} \frac {dx}{dy} & =\frac {y^{2}x^{2}+y^{5}}{x}\\ & =xy^{2}+y^{5}x^{-1}\end{align*}

This has form

\[ x^{\prime }=P\left ( y\right ) x+Q\left ( y\right ) x^{n}\]

Which is a Bernoulli ode, which is simpler to solve. Solving gives

\begin{align*} x & =-\frac {1}{2}\sqrt {-6-4y^{3}+c_{1}4e^{\frac {2}{3}y^{3}}}\\ x & =\frac {1}{2}\sqrt {-6-4y^{3}+c_{1}4e^{\frac {2}{3}y^{3}}}\end{align*}

Finally, we solve for \(y\) from the above. This will give the same solutions as above.

4 general notes

\(\blacksquare \) Some rules to remember. These are for the real domain.

  1. \(\sqrt {ab}=\sqrt {a}\sqrt {b}\) only for \(a\geq 0,b\geq 0\). In general \(\left ( ab\right ) ^{\frac {1}{n}}=a^{\frac {1}{n}}b^{\frac {1}{n}}\) for \(a\geq 0,b\geq 0\) where \(n\) is a positive integer (see the numeric check after this list).
  2. \(\sqrt {y}=x\) implies \(y=x^{2}\) only when \(x>0\). So be careful when squaring both sides to get rid of a square root on one side. To see this, let \(\sqrt {y}=4\); then \(y=16\), because \(4\) is positive. But if we had \(\sqrt {y}=-4\), then we can’t say that \(y=16\), since \(\sqrt {16}\) is \(4\) and not \(-4\) (we always take the positive root). So each time we square both sides of an equation to get rid of \(\sqrt {}\) on one side, always note that this is valid only when the other side is not negative.
  3. Generalization of the above: given \(\left ( ab\right ) ^{\frac {n}{m}}\) where both \(n,m\) are integers, then \(\left ( ab\right ) ^{\frac {n}{m}}=a^{\frac {n}{m}}b^{\frac {n}{m}}\) only when \(a\geq 0,b\geq 0\). This applies if \(\frac {n}{m}<1\) such as \(\frac {2}{3}\), or when \(\frac {n}{m}>1\) such as \(\frac {3}{2}\). The only time we can write \(\left ( ab\right ) ^{n}=a^{n}b^{n}\) for any \(a,b\) is when \(n\) is an integer (positive or negative). When the power is a ratio of integers, we can split it only under the condition that all terms are positive.
  4. \(\sqrt {\frac {1}{b}}=\frac {1}{\sqrt {b}}\) only for \(b>0\). This can be used for example to simplify \(\sqrt {\frac {1}{1-x^{2}}}\sqrt {1-x^{2}}\) to \(1\) under the condition \(1-x^{2}>0\) or \(-1<x<1\). Because in this case the input becomes \(\frac {1}{\sqrt {1-x^{2}}}\sqrt {1-x^{2}}=1\).
  5. Generalization of the above: \(\sqrt {\frac {a}{b}}=\frac {\sqrt {a}}{\sqrt {b}}\) only for \(a\geq 0,b>0\)
  6. \(\sqrt {x^{2}}=x\) only for \(x\geq 0\)
  7. Generalization of the above: \(\left ( x^{n}\right ) ^{\frac {1}{n}}=x\) only when \(x\geq 0\) (assuming \(n\) is an integer).
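A quick numeric illustration of rule 1 failing for negative factors (using Python’s cmath for complex square roots):

    import cmath

    a, b = -4, -9
    print(cmath.sqrt(a*b))               # (6+0j)
    print(cmath.sqrt(a)*cmath.sqrt(b))   # (-6+0j), not equal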

\(\blacksquare \) Given \(u\equiv u\left ( x,y\right ) \), the total differential of \(u\) is

\[ du=\frac {\partial u}{\partial x}dx+\frac {\partial u}{\partial y}dy \]

\(\blacksquare \) A Lyapunov function is used to determine the stability of an equilibrium point. Take this equilibrium point to be the origin, and suppose we are given a set of differential equations \(\begin {pmatrix} x^{\prime }\left ( t\right ) \\ y^{\prime }\left ( t\right ) \\ z^{\prime }\left ( t\right ) \end {pmatrix} =\begin {pmatrix} f_{1}\left ( x,y,z,t\right ) \\ f_{2}\left ( x,y,z,t\right ) \\ f_{3}\left ( x,y,z,t\right ) \end {pmatrix} \) with \(\left ( 0,0,0\right ) \) an equilibrium point. The question is, how do we determine if it is stable or not? There are two main ways to do this. One is by linearization of the system around the origin. This means we find the Jacobian matrix, evaluate it at the origin, and check the signs of the real parts of the eigenvalues. This is the common way to do it. Another method, called Lyapunov’s method, is more direct: no linearization is needed. But we need to find a function \(V\left ( x,y,z\right ) \), called a Lyapunov function for the system, which meets the following conditions

  1. \(V\left ( x,y,z\right ) \) is a continuously differentiable function in \(\mathbb {R} ^{3}\) and \(V\left ( x,y,z\right ) \geq 0\) (positive definite or positive semidefinite) for all \(x,y,z\) away from the origin, or everywhere inside some fixed region around the origin. This function represents the total energy of the system (for Hamiltonian systems). Hence \(V\left ( x,y,z\right ) \) can be zero away from the origin, but it can never be negative.
  2. \(V\left ( 0,0,0\right ) =0\). This says the system has no energy when it is at the equilibrium point. (rest state).
  3. The orbital derivative \(\frac {dV}{dt}\leq 0\) (i.e. negative definite or negative semi-definite) for all \(x,y,z\), or inside some fixed region around the origin. The orbital derivative is the same as \(\frac {dV}{dt}\) along any solution trajectory. This condition says that the total energy is either constant in time (the zero case) or decreasing in time (the negative definite case). Both of these indicate that the origin is a stable equilibrium point.

If \(\frac {dV}{dt}\) is negative semi-definite then the origin is stable in the Lyapunov sense. If \(\frac {dV}{dt}\) is negative definite then the origin is an asymptotically stable equilibrium. Negative semi-definite means that when the system is perturbed away from the origin, a trajectory will remain around the origin, since its energy neither increases nor decreases. So it is stable. Asymptotic stability is a stronger form of stability: it means that when perturbed from the origin, the solution will eventually return to the origin, since the energy is decreasing. Global stability means \(\frac {dV}{dt}\leq 0\) everywhere, not just in some closed region around the origin. Local stability means \(\frac {dV}{dt}\leq 0\) in some closed region around the origin. Global stability is stronger than local stability.

The main difficulty with this method is finding \(V\left ( x,y,z\right ) \). If the system is Hamiltonian, then \(V\) is the same as the total energy. Otherwise, one has to guess. Typically a quadratic function such as \(V=ax^{2}+cxy+dy^{2}\) is used (for a system in \(x,y\)), and then we try to find \(a,c,d\) which make it positive definite everywhere away from the origin, and also, more importantly, make \(\frac {dV}{dt}\leq 0\). If so, we say the origin is stable. Most of the problems we had start by giving us \(V\) and then ask us to show it is a Lyapunov function and determine what kind of stability applies.

To determine if \(V\) is positive definite or not, the common way is to find the Hessian and check the signs of the eigenvalues. Another way is to find the Hessian and check the signs of the leading minors; for a \(2\times 2\) matrix, this means the determinant is positive and the \(\left ( 1,1\right ) \) entry of the matrix is positive. We do a similar thing to check that \(\frac {dV}{dt}\leq 0\): we find the Hessian of \(\frac {dV}{dt}\) and do the same thing, but now we check for negative eigenvalues instead.
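A small sympy sketch of these checks, on a test system and candidate \(V\) that I made up for illustration:

    import sympy as sp

    x, y = sp.symbols('x y', real=True)

    # hypothetical test system: x' = -x + y, y' = -x - y
    f1, f2 = -x + y, -x - y
    V = x**2 + y**2                                # candidate Lyapunov function

    Vdot = sp.diff(V, x)*f1 + sp.diff(V, y)*f2     # orbital derivative dV/dt
    print(sp.simplify(Vdot))                       # -2*x**2 - 2*y**2, negative definite

    H = sp.hessian(V, (x, y))                      # check V is positive definite
    print(H.eigenvals())                           # {2: 2}, both eigenvalues positive

Since \(V>0\) away from the origin and \(\frac {dV}{dt}<0\), the origin of this test system is asymptotically stable.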

\(\blacksquare \) Methods to find Green function are

  1. Fredholm theory
  2. methods of images
  3. separation of variables
  4. Laplace transform

Reference: Wikipedia. I need to make one example and apply each of the above methods to it.

\(\blacksquare \) In solving an ODE with constant coefficients, just use the characteristic equation to find the solution.

\(\blacksquare \) In solving an ODE with coefficients that are functions of the independent variable, as in \(y^{\prime \prime }\left ( x\right ) +q\left ( x\right ) y^{\prime }\left ( x\right ) +p\left ( x\right ) y\left ( x\right ) =0\), first classify the type of the point \(x_{0}\). This means to check how \(p\left ( x\right ) \) and \(q\left ( x\right ) \) behave at \(x_{0}\). We are talking about the ODE here, not the solution yet.

There are 3 kinds of points: \(x_{0}\) can be a normal point, a regular singular point, or an irregular singular point. A normal point \(x_{0}\) means that \(p\left ( x\right ) \) and \(q\left ( x\right ) \) have convergent Taylor series expansions at \(x_{0}\); the solution can then be written as \(y\left ( x\right ) =\sum _{n=0}^{\infty }a_{n}\left ( x-x_{0}\right ) ^{n}\), which converges to \(y\left ( x\right ) \) at \(x_{0}\).
A regular singular point \(x_{0}\) means that the above test fails, but \(\left ( x-x_{0}\right ) q\left ( x\right ) \) has a convergent Taylor series at \(x_{0}\), and \(\left ( x-x_{0}\right ) ^{2}p\left ( x\right ) \) also now has a convergent Taylor series at \(x_{0}\). This also means the limits \(\lim _{x\rightarrow x_{0}}\left ( x-x_{0}\right ) q\left ( x\right ) \) and \(\lim _{x\rightarrow x_{0}}\left ( x-x_{0}\right ) ^{2}p\left ( x\right ) \) exist.

All this just means we can get rid of the singularity. i.e. \(x_{0}\) is a removable singularity. If this is the case, then the solution at \(x_{0}\) can be assumed to have a Frobenius series \(y\left ( x\right ) =\sum _{n=0}^{\infty }a_{n}\left ( x-x_{0}\right ) ^{n+\alpha }\) where \(a_{0}\neq 0\) and \(\alpha \) is the root of the Frobenius indicial equation. There are three cases to consider. See https://math.usask.ca/~cheviakov/courses/m338/text/Frobenius_Case3_ill.pdf for more discussion on this.

The third type of point is the hard one, called an irregular singular point. We can’t get rid of it using the above, so we also say the ODE has an essential singularity at \(x_{0}\) (another fancy name for an irregular singular point). What this means is that we can’t approximate the solution at \(x_{0}\) using either a Taylor or a Frobenius series.

If the point is an irregular singular point, then use the methods of asymptotic analysis. See Advanced Mathematical Methods for Scientists and Engineers chapter 3. For a normal point, use \(y\left ( x\right ) =\sum _{n=0}^{\infty }a_{n}x^{n}\); for a regular singular point, use \(y\left ( x\right ) =\sum _{n=0}^{\infty }a_{n}x^{n+r}\). Remember to solve for \(r\) first; this should give two values. If you get one root, then use reduction of order to find the second solution.

\(\blacksquare \) An asymptotic series \(S\left ( z\right ) =c_{0}+\frac {c_{1}}{z}+\frac {c_{2}}{z^{2}}+\cdots \) is a series expansion of \(f\left ( z\right ) \) which gives a good and rapid approximation for large \(z\), as long as we know when to truncate \(S\left ( z\right ) \) before it becomes divergent. This is the main difference between an asymptotic series expansion and a Taylor series expansion.

\(S\left ( z\right ) \) is used to approximate a function for large \(z\), while a Taylor (or power) series is used for local approximation, i.e. a small distance away from the point of expansion. \(S\left ( z\right ) \) will become divergent, hence it needs to be truncated at some \(n\) to be usable, where \(n\) is the number of terms in \(S_{n}\left ( z\right ) \). It is optimally truncated when \(n\approx \left \vert z\right \vert ^{2}\).

\(S\left ( z\right ) \) has the following two important properties

  1. \(\lim _{\left \vert z\right \vert \rightarrow \infty }z^{n}\left ( f\left ( z\right ) -S_{n}\left ( z\right ) \right ) =0\) for fixed \(n\).
  2. \(\lim _{n\rightarrow \infty }z^{n}\left ( f\left ( z\right ) -S_{n}\left ( z\right ) \right ) =\infty \) for fixed \(z\).

We write \(S\left ( z\right ) \sim f\left ( z\right ) \) when \(S\left ( z\right ) \) is the asymptotic series expansion of \(f\left ( z\right ) \) for large \(z\). The most common method to find \(S\left ( z\right ) \) is integration by parts. At least this is what we did in the class I took.

\(\blacksquare \) For a Taylor series, the leading behavior is \(a_{0}\) (no controlling factor?). For a Frobenius series, the leading behavior term is \(a_{0}x^{\alpha }\) and the controlling factor is \(x^{\alpha }\). For an asymptotic series, the controlling factor is always assumed to be \(e^{S\left ( x\right ) }\), as proposed by Carlini (1817).

\(\blacksquare \) The method for finding the leading behavior of the solution \(y\left ( x\right ) \) near an irregular singular point using asymptotics is called the method of dominant balance.

\(\blacksquare \) When solving \(\epsilon y^{\prime \prime }+p\left ( x\right ) y^{\prime }+q\left ( x\right ) y=0\) for very small \(\epsilon \), use the WKB method if there is no boundary layer between the boundary conditions. If the ODE is non-linear, we can’t use WKB and have to use boundary layer (B.L.) analysis. Example: \(\epsilon y^{\prime \prime }+yy^{\prime }-y=0\) with \(y\left ( 0\right ) =0,y\left ( 1\right ) =-2\); use B.L.

\(\blacksquare \) A good exercise is to solve, say, \(\epsilon y^{\prime \prime }+(1+x)y^{\prime }+y=0\) with \(y\left ( 0\right ) =y\left ( 1\right ) \) using both B.L. and WKB and compare the solutions; they should come out the same. \(y\sim \frac {2}{1+x}-\exp \left ( \frac {-x}{\epsilon }-\frac {x^{2}}{2\epsilon }\right ) +O\left ( \epsilon \right ) .\) With B.L. we had to do the matching between the outer and the inner solutions. WKB is easier, but can’t be used for a non-linear ODE.

\(\blacksquare \) When there is rapid oscillation over the entire domain, WKB is better. Use WKB to solve the Schrodinger equation, where \(\epsilon \) becomes a function of \(\hslash \) (the reduced Planck constant; Planck’s constant is \(6.62606957\times 10^{-34}\) m\(^{2}\)kg/s).

\(\blacksquare \) In second order ODE with non constant coefficient, \(y^{\prime \prime }\left ( x\right ) +p\left ( x\right ) y^{\prime }\left ( x\right ) +q\left ( x\right ) y\left ( x\right ) =0\), if we know one solution \(y_{1}\left ( x\right ) \), then a method called the reduction of order can be used to find the second solution \(y_{2}\left ( x\right ) \). Write \(y_{2}\left ( x\right ) =u\left ( x\right ) y_{1}\left ( x\right ) \), plug this in the ODE, and solve for \(u\left ( x\right ) \). The final solution will be \(y\left ( x\right ) =c_{1}y_{1}\left ( x\right ) +c_{2}y_{2}\left ( x\right ) \). Now apply I.C.’s to find \(c_{1},c_{2}\).

\(\blacksquare \) To find a particular solution to \(y^{\prime \prime }\left ( x\right ) +p\left ( x\right ) y^{\prime }\left ( x\right ) +q\left ( x\right ) y\left ( x\right ) =f\left ( x\right ) \), we can use a method called undetermined coefficients. But a better method is called variation of parameters. In this method, assume \(y_{p}\left ( x\right ) =u_{1}\left ( x\right ) y_{1}\left ( x\right ) +u_{2}\left ( x\right ) y_{2}\left ( x\right ) \), where \(y_{1}\left ( x\right ) ,y_{2}\left ( x\right ) \) are the two linearly independent solutions of the homogeneous ODE and \(u_{1}\left ( x\right ) ,u_{2}\left ( x\right ) \) are to be determined. This ends up with \(u_{1}\left ( x\right ) =-\int \frac {y_{2}\left ( x\right ) f\left ( x\right ) }{W}dx\) and \(u_{2}\left ( x\right ) =\int \frac {y_{1}\left ( x\right ) f\left ( x\right ) }{W}dx\). Remember to put the ODE in standard form first, so that \(a=1\) in \(ay^{\prime \prime }\left ( x\right ) +\cdots \). In here, \(W\) is the Wronskian \(W=\begin {vmatrix} y_{1}\left ( x\right ) & y_{2}\left ( x\right ) \\ y_{1}^{\prime }\left ( x\right ) & y_{2}^{\prime }\left ( x\right ) \end {vmatrix} \)

\(\blacksquare \) Two solutions of \(y^{\prime \prime }\left ( x\right ) +p\left ( x\right ) y^{\prime }\left ( x\right ) +q\left ( x\right ) y\left ( x\right ) =0\) are linearly independent if \(W\left ( x\right ) \neq 0\), where \(W\) is the Wronskian.

\(\blacksquare \) For a second order linear ODE defined over the whole real line, the Wronskian is either always zero or never zero. This comes from Abel’s formula for the Wronskian, which is \(W\left ( x\right ) =k\exp \left ( -\int \frac {B\left ( x\right ) }{A\left ( x\right ) }dx\right ) \) for an ODE of the form \(A\left ( x\right ) y^{\prime \prime }+B\left ( x\right ) y^{\prime }+C\left ( x\right ) y=0\). Since \(\exp \left ( -\int \frac {B\left ( x\right ) }{A\left ( x\right ) }dx\right ) >0\), it is decided by \(k\), the constant of integration. If \(k=0\) then \(W\left ( x\right ) =0\) everywhere; else it is nonzero everywhere.

\(\blacksquare \) For a linear PDE, if the boundary conditions are time dependent, we cannot use separation of variables. Try a transform method (Laplace or Fourier) to solve the PDE.

\(\blacksquare \) If unable to invert Laplace analytically, try numerical inversion or asymptotic methods. Need to find example of this.

\(\blacksquare \) A Green function takes the homogeneous solution and the forcing function and constructs a particular solution. For PDE’s, we always want a symmetric Green’s function.

\(\blacksquare \) To get a symmetric Green’s function given an ODE, start by converting the ODE to a Sturm-Liouville form first. This way the Green’s function comes out symmetric.

\(\blacksquare \) For numerical solutions of field problems, there are basically two different problems: those with closed boundaries, and those with open boundaries but with initial conditions. Closed boundary problems are elliptical and can be cast in the form \(Au=f\); the others are either hyperbolic or parabolic.

\(\blacksquare \) For numerical solution of elliptical problems, the basic layout is something like this:

Always start with a trial solution \(u(x)\) such that \(u_{trial}(x)=\sum _{i=0}^{i=N}C_{i}\phi _{i}(x)\), where the \(C_{i}\) are the unknowns to be determined and the \(\phi _{i}\) are a set of linearly independent functions (polynomials) in \(x\).

How to determine those \(C_{i}\) comes next. Use either a residual method (such as Galerkin) or a variational method (Ritz). For a residual method, we form the residual from the error in the equation \(Au=f\), i.e. \(R=Au_{trial}-f\), and it all comes down to requiring weighted integrals of the residual to vanish, \(\int wR=0\), over the domain. The methods are organized as follows:

  1. Residual methods: absolute error, collocation, subdomain, and orthogonality. The orthogonality family includes the method of moments, Galerkin, and least squares.
  2. Variational (Ritz): substitute \(u_{trial}\) into \(I(u)\), where \(I(u)\) is the functional to minimize.

\(\blacksquare \) Geometric probability distribution. Use it when you want an answer to the question: what is the probability of having to do the experiment \(N\) times to finally get the outcome you are looking for, given a probability \(p\) of the outcome showing up from doing one experiment.

For example: What is the probability one has to flip a fair coin \(N\) times to get a head? The answer is \(P(X=N)=(1-p)^{N-1}p\). For a fair coin, \(p=\frac {1}{2}\) is the probability that a head will show up in one flip. So the probability that we have to flip a coin \(10\) times to get the first head is \(P(X=10)=(1-0.5)^{9}(0.5)=0.00097\), which is very low, as expected.
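The same arithmetic in a couple of lines of Python:

    # probability of needing exactly N flips to see the first head, with p = 1/2
    p = 0.5
    for N in (1, 2, 10):
        print(N, (1 - p)**(N - 1)*p)   # 0.5, 0.25, 0.0009765625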

\(\blacksquare \) To generate a random variable drawn from some distribution different from the uniform distribution, using only the uniform distribution \(U(0,1)\), do this: let’s say we want to generate a random number from the exponential distribution with mean \(\mu \).

This distribution has \(pdf(X)=\frac {1}{\mu }e^{\frac {-x}{\mu }}\), the first step is to find the cdf of exponential distribution, which is known to be \(F(x)=P(X<=x)=1-e^{\frac {-x}{\mu }}\).

Now find the inverse of this, which is \(F^{-1}(x)=-\mu \ln (1-x)\). Then generate a random number from the uniform distribution \(U(0,1)\). Let this value be called \(z\).

Now plug this value into \(F^{-1}(z)\); this gives a random number from the exponential distribution, which will be \(-\mu \ \ln (1-z)\) (obtained by taking the natural log of both sides of \(F(x)\)).

This method can be used to generate random variables from any other distribution knowing only \(U(0,1)\). But it requires knowing the CDF and the inverse of the CDF for the other distribution. This is called the inverse CDF method. Another method is called the rejection method.
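A minimal sketch of the inverse CDF method for the exponential case (the function name is mine):

    import math
    import random

    def exp_sample(mu):
        """One draw from the exponential distribution with mean mu."""
        z = random.random()             # z ~ U(0,1)
        return -mu*math.log(1 - z)      # F^{-1}(z) = -mu*ln(1 - z)

    samples = [exp_sample(2.0) for _ in range(100000)]
    print(sum(samples)/len(samples))    # should be close to the mean mu = 2.0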

\(\blacksquare \) Given \(u\), a r.v. from uniform distribution over [0,1], then to obtain \(v\), a r.v. from uniform distribution over [A,B], then the relation is \(v=A+(B-A)u\).

\(\blacksquare \) When solving using F.E.M., it is best to do everything using isoparametric elements (natural coordinates), then find the Jacobian of the transformation between the natural and physical coordinates to evaluate the integrals needed. For the force function, use the Gaussian quadrature method.

\(\blacksquare \) A solution to differential equation is a function that can be expressed as a convergent series. (Cauchy. Briot and Bouquet, Picard)

\(\blacksquare \) To solve a first order ODE using an integrating factor:

\[ x^{\prime }(t)+p(t)x(t)=f(t) \]

then as long as it is linear and \(p(t),f(t)\) are integrable functions in \(t\), follow these steps (a quick sympy check follows the steps)

  1. multiply the ODE by function \(I(t)\), this is called the integrating factor.

    \[ I(t)x^{\prime }(t)+I(t)p(t)x(t)=I(t)f(t) \]
  2. We solve for \(I(t)\) such that the left side satisfies

    \[ \frac {d}{dt}\left ( I(t)x(t)\right ) =I(t)x^{\prime }(t)+I(t)p(t)x(t) \]
  3. Solving the above for \(I(t)\) gives

    \begin{align*} I^{\prime }(t)x(t)+I(t)x^{\prime }(t) & =I(t)x^{\prime }(t)+I(t)p(t)x(t)\\ I^{\prime }(t)x(t) & =I(t)p(t)x(t)\\ I^{\prime }(t) & =I(t)p(t)\\ \frac {dI}{I} & =p(t)dt \end{align*}

    Integrating both sides gives

    \begin{align*} \ln (I) & =\int {p(t)dt}\\ I(t) & =e^{\int {p(t)dt}}\end{align*}
  4. Now equation (1) can be written as

    \[ \frac {d}{dt}\left ( I(t)x(t)\right ) =I(t)f(t) \]
    We now integrate the above to give
    \begin{align*} I(t)x(t) & =\int {I(t)f(t)\,dt}+C\\ x(t) & =\frac {\int {I(t)f(t)\,dt}+C}{I(t)}\end{align*}

    Where \(I(t)\) is the integrating factor found in step 3. Hence

    \[ x(t)=\frac {\int {e^{\int {p(t)dt}}f(t)\,dt}+C}{e^{\int {p(t)dt}}}\]
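A quick sympy check of the final formula on a test case (my choice: \(p(t)=2\), \(f(t)=e^{t}\)):

    import sympy as sp

    t, C = sp.symbols('t C')
    x = sp.Function('x')

    # test case: p(t) = 2, f(t) = exp(t)
    p, f = sp.Integer(2), sp.exp(t)

    I = sp.exp(sp.integrate(p, t))             # integrating factor e^{2t}
    x_manual = (sp.integrate(I*f, t) + C)/I    # formula from step 4
    print(sp.simplify(x_manual))               # C*exp(-2*t) + exp(t)/3

    # cross check against dsolve (same answer, with C1 in place of C)
    print(sp.dsolve(sp.Eq(x(t).diff(t) + p*x(t), f), x(t)))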
\(\blacksquare \) A polynomial is called ill-conditioned if a small change to one of its coefficients causes a large change in one of its roots.

\(\blacksquare \) To find the rank of a matrix \(A\) by hand, find the row echelon form, then count how many zero rows there are, and subtract that from the number of rows \(n\).

\(\blacksquare \) To find the basis of the column space of \(A\), find the row echelon form and pick the columns with the pivots; these are the basis (the linearly independent columns of \(A\)).

\(\blacksquare \) For a symmetric matrix \(A\), its second norm is its spectral radius \(\rho (A)\), which is the largest eigenvalue of \(A\) in absolute value.

\(\blacksquare \) The eigenvalues of the inverse of a matrix \(A\) are the inverses of the eigenvalues of \(A\).

\(\blacksquare \) If matrix \(A\) is of order \(n\times n\) and it has \(n\) distinct eigenvalues, then it can be diagonalized as \(A=V\Lambda V^{-1}\), where

\[ \Lambda =\begin {pmatrix} \lambda _{1} & 0 & 0\\ 0 & \ddots & 0\\ 0 & 0 & \lambda _{n}\end {pmatrix} \]

and \(V\) is the matrix that has the \(n\) eigenvectors as its columns.
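A numpy check of \(A=V\Lambda V^{-1}\) on a small test matrix (chosen arbitrarily, with distinct eigenvalues):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])             # test matrix with distinct eigenvalues
    lam, V = np.linalg.eig(A)              # eigenvalues and eigenvector columns
    Lambda = np.diag(lam)                  # Lambda = diag(lambda_1, ..., lambda_n)
    print(np.allclose(A, V @ Lambda @ np.linalg.inv(V)))   # True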

\(\blacksquare \) \(\lim _{k\rightarrow \infty }\int _{x_{1}}^{x_{2}}f_{k}\left ( x\right ) dx=\int _{x_{1}}^{x_{2}}\lim _{k\rightarrow \infty }f_{k}\left ( x\right ) dx\) only if \(f_{k}\left ( x\right ) \) converges uniformly over \(\left [ x_{1},x_{2}\right ] \).

\(\blacksquare \) \(A^{3}=I\) has an infinite number of solutions \(A\). Think of \(A^{3}\) as 3 rotations, each of \(120^{\circ }\), going back to where we started, with each rotation around a straight line. Hence there are an infinite number of solutions.

\(\blacksquare \) How to integrate \(I=\int \frac {\sqrt {x^{3}-1}}{x}\,dx\).

Let \(u=x^{3}-1\), then \(du=3x^{2}dx\) and the above becomes

\[ I=\int \frac {\sqrt {u}}{3x^{3}}\,du=\frac {1}{3}\int \frac {\sqrt {u}}{u+1}\,du \]

Now let \(u=\tan ^{2}v\) or \(\sqrt {u}=\tan v\), hence \(\frac {1}{2}\frac {1}{\sqrt {u}}du=\sec ^{2}v\,dv\) and the above becomes

\begin{align*} I & =\frac {1}{3}\int \frac {\sqrt {u}}{\tan ^{2}v+1}\left ( 2\sqrt {u}\sec ^{2}v\right ) \,dv\\ & =\frac {2}{3}\int \frac {u}{\tan ^{2}v+1}\sec ^{2}v\,dv\\ & =\frac {2}{3}\int \frac {\tan ^{2}v}{\tan ^{2}v+1}\sec ^{2}v\,dv \end{align*}

But \(\tan ^{2}v+1=\sec ^{2}v\), hence

\begin{align*} I & =\frac {2}{3}\int \tan ^{2}v\,dv\\ & =\frac {2}{3}\left ( \tan v-v\right ) \end{align*}

Substituting back

\[ I=\frac {2}{3}\left ( \sqrt {u}-\arctan \left ( \sqrt {u}\right ) \right ) \]

Substituting back

\[ I=\frac {2}{3}\left ( \sqrt {x^{3}-1}-\arctan \left ( \sqrt {x^{3}-1}\right ) \right ) \]
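Differentiating the result should recover the integrand; a sympy check (valid for \(x>1\), where the radicand is positive):

    import sympy as sp

    x = sp.symbols('x', positive=True)

    F = sp.Rational(2, 3)*(sp.sqrt(x**3 - 1) - sp.atan(sp.sqrt(x**3 - 1)))
    print(sp.simplify(sp.diff(F, x) - sp.sqrt(x**3 - 1)/x))   # 0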

\(\blacksquare \) (added Nov. 4, 2015) Made a small diagram (not included here) to help me remember the terms used in long division.

\(\blacksquare \) If a linear ODE is equidimensional, as in \(a_{n}x^{n}y^{(n)}+a_{n-1}x^{n-1}y^{(n-1)}+\dots \), for example \(x^{2}y^{\prime \prime }-2y=0\), then use the ansatz \(y=x^{r}\). This will give an equation in \(r\) only. Solve for \(r\) and obtain \(y_{1}=x^{r_{1}},y_{2}=x^{r_{2}}\), and the solution will be

\[ y=c_{1}y_{1}+c_{2}y_{2}\]

For example, for the above ode the solution is \(c_{1}x^{2}+\frac {c_{2}}{x}\). This ansatz works only if the ODE is equidimensional, so we can’t use it on \(xy^{\prime \prime }+y=0\) for example.

If \(r\) is a repeated root, use \(x^{r},x^{r}\log (x),x^{r}(\log (x))^{2}\dots \) as solutions.

\(\blacksquare \) for \(x^{i}\), where \(i=\sqrt {-1}\), write it as \(x=e^{\log {x}}\) hence \(x^{i}=e^{i\,\log {x}}=\cos (\log {x})+i\,\sin (\log {x})\)

\(\blacksquare \) Some integral tricks: \(\int \sqrt {a^{2}-x^{2}}dx\) use \(x=a\sin \theta \). For \(\int \sqrt {a^{2}+x^{2}}dx\) use \(x=a\tan \theta \) and for \(\int \sqrt {x^{2}-a^{2}}dx\) use \(x=a\sec \theta \).

\(\blacksquare \) \(y^{\prime \prime }+x^{n}y=0\) is called Emden-Fowler form.

\(\blacksquare \) For second order ODE, boundary value problem, with eigenvalue (Sturm-Liouville), remember that having two boundary conditions is not enough to fully solve it.

One boundary condition is used to find the first constant of integration, and the second boundary condition is used to find the eigenvalues.

We still need another input to find the second constant of integration. This is normally done by giving an initial value. This situation arises in initial-boundary value problems. The point is, with a boundary value problem that also contains an eigenvalue, we need 3 inputs to fully solve it. Two boundary conditions are not enough.

\(\blacksquare \) If given an ODE \(y^{\prime \prime }\left ( x\right ) +p\left ( x\right ) y^{\prime }\left ( x\right ) +q\left ( x\right ) y\left ( x\right ) =0\) and we are asked to classify whether it is singular at \(x=\infty \), then let \(x=\frac {1}{t}\) and check what happens at \(t=0\). The \(\frac {d^{2}}{dx^{2}}\) operator becomes \(\left ( 2t^{3}\frac {d}{dt}+t^{4}\frac {d^{2}}{dt^{2}}\right ) \) and the \(\frac {d}{dx}\) operator becomes \(-t^{2}\frac {d}{dt}\). Write the ode with \(t\) as the independent variable, and follow the standard procedure: look at \(\lim _{t\rightarrow 0}t\,p\left ( t\right ) \) and \(\lim _{t\rightarrow 0}t^{2}q\left ( t\right ) \), where \(p,q\) are now the coefficients of the transformed ode in standard form, and see if these are finite or not. To see how the operators are mapped, always start with \(x=\frac {1}{t}\), then write \(\frac {d}{dx}=\frac {d}{dt}\frac {dt}{dx}\) and \(\frac {d^{2}}{dx^{2}}=\left ( \frac {d}{dx}\right ) \left ( \frac {d}{dx}\right ) \). For example, \(\frac {d}{dx}=-t^{2}\frac {d}{dt}\) and

\begin{align*} \frac {d^{2}}{dx^{2}} & =\left ( -t^{2}\frac {d}{dt}\right ) \left ( -t^{2}\frac {d}{dt}\right ) \\ & =-t^{2}\left ( -2t\frac {d}{dt}-t^{2}\frac {d^{2}}{dt^{2}}\right ) \\ & =\left ( 2t^{3}\frac {d}{dt}+t^{4}\frac {d^{2}}{dt^{2}}\right ) \end{align*}

Then the new ODE becomes

\begin{align*} \left ( 2t^{3}\frac {d}{dt}+t^{4}\frac {d^{2}}{dt^{2}}\right ) y\left ( t\right ) +p\left ( t\right ) \left ( -t^{2}\frac {d}{dt}y\left ( t\right ) \right ) +q\left ( t\right ) y\left ( t\right ) & =0\\ t^{4}\frac {d^{2}}{dt^{2}}y+\left ( -t^{2}p\left ( t\right ) +2t^{3}\right ) \frac {d}{dt}y+q\left ( t\right ) y & =0\\ \frac {d^{2}}{dt^{2}}y+\frac {\left ( -p\left ( t\right ) +2t\right ) }{t^{2}}\frac {d}{dt}y+\frac {q\left ( t\right ) }{t^{4}}y & =0 \end{align*}

The above is how the ODE will always look after the transformation. Remember to change \(p\left ( x\right ) \) to \(p\left ( t\right ) \) using \(x=\frac {1}{t}\), and the same for \(q\left ( x\right ) \). Now the new \(p\) is \(\frac {\left ( -p\left ( t\right ) +2t\right ) }{t^{2}}\) and the new \(q\) is \(\frac {q\left ( t\right ) }{t^{4}}\). Then check \(\lim _{t\rightarrow 0}t\,\frac {\left ( -p\left ( t\right ) +2t\right ) }{t^{2}}\) and \(\lim _{t\rightarrow 0}t^{2}\frac {q\left ( t\right ) }{t^{4}}\) as before.
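
The operator mapping itself can be verified with sympy via the chain rule (a minimal sketch; \(y\) is treated as a function of \(t\)):

    import sympy as sp

    t = sp.symbols('t', positive=True)
    y = sp.Function('y')
    x = 1/t                                    # the substitution x = 1/t
    dydx = sp.diff(y(t), t)/sp.diff(x, t)      # dy/dx = (dy/dt)/(dx/dt)
    print(sp.simplify(dydx + t**2*sp.diff(y(t), t)))    # 0, so d/dx -> -t^2 d/dt
    d2ydx2 = sp.diff(dydx, t)/sp.diff(x, t)    # apply d/dx once more
    print(sp.simplify(d2ydx2
          - (2*t**3*sp.diff(y(t), t) + t**4*sp.diff(y(t), t, 2))))    # 0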

\(\blacksquare \) If the ODE is \(a\left ( x\right ) y^{\prime \prime }+b\left ( x\right ) y^{\prime }+c\left ( x\right ) y=0\), say on \(0\leq x\leq 1\), and there is an essential singularity at either end, then use boundary layer or WKB. The boundary layer method works on non-linear ODE’s (and also on linear ODE’s), but only if the boundary layer is at an end of the domain, i.e. at \(x=0\) or \(x=1\).

The WKB method, on the other hand, works only on linear ODE’s, but the singularity can be anywhere (i.e. inside the domain). As a rule of thumb: if the ODE is linear, use WKB. If the ODE is non-linear, we must use boundary layer.

Another difference is that with boundary layer, we need to do a matching phase at the interface between the boundary layer and the outer layer in order to find the constants of integration. This can be tricky, and it is the hardest part of solving using boundary layer.

Using WKB, no matching phase is needed. We apply the boundary conditions to the whole solution obtained. See my HWs for NE 548 for problems solved from Bender and Orszag text book.

\(\blacksquare \) In numerics, to find if a scheme will converge, check that it is stable and also that it is consistent.

It could also be conditionally stable, or unconditionally stable, or unstable.

Checking that it is consistent is the same as finding the LTE (local truncation error) and checking that as the time step and the space step both go to zero, the LTE goes to zero. What is the LTE? Take the scheme and plug the actual solution into it. An example is better to explain this part. Let us solve \(u_{t}=u_{xx}\). Using forward in time and centered difference in space, the numerical scheme (explicit) is

\[ U_{j}^{n+1}=U_{j}^{n}+\frac {k}{h^{2}}\left ( U_{j-1}^{n}-2U_{j}^{n}+U_{j+1}^{n}\right ) \]

The LTE is the difference between these two (error)

\[ LTE=U_{j}^{n+1}-\left ( U_{j}^{n}+\frac {k}{h^{2}}\left ( U_{j-1}^{n}-2U_{j}^{n}+U_{j+1}^{n}\right ) \right ) \]

Now plug-in \(u\left ( t^{n},x_{j}\right ) \) in place of \(U_{j}^{n}\), \(u\left ( t^{n}+k,x_{j}\right ) \) in place of \(U_{j}^{n+1}\), \(u\left ( t^{n},x_{j}+h\right ) \) in place of \(U_{j+1}^{n}\) and \(u\left ( t^{n},x_{j}-h\right ) \) in place of \(U_{j-1}^{n}\) in the above. It becomes

\begin{equation} LTE=u\left ( t^{n}+k,x_{j}\right ) -\left ( u\left ( t^{n},x_{j}\right ) +\frac {k}{h^{2}}\left ( u\left ( t^{n},x_{j}-h\right ) -2u\left ( t^{n},x_{j}\right ) +u\left ( t^{n},x_{j}+h\right ) \right ) \right ) \tag {1}\end{equation}

Where in the above \(k\) is the time step (also written as \(\Delta t\)) and \(h\) is the space step size. Now comes the main trick. Expanding the term \(u\left ( t^{n}+k,x_{j}\right ) \) in Taylor,

\begin{equation} u\left ( t^{n}+k,x_{j}\right ) =u\left ( t^{n},x_{j}\right ) +k\left . \frac {\partial u}{\partial t}\right \vert _{t^{n}}+\frac {k^{2}}{2}\left . \frac {\partial ^{2}u}{\partial t^{2}}\right \vert _{t^{n}}+O\left ( k^{3}\right ) \tag {2}\end{equation}

And expanding

\begin{equation} u\left ( t^{n},x_{j}+h\right ) =u\left ( t^{n},x_{j}\right ) +h\left . \frac {\partial u}{\partial x}\right \vert _{x_{j}}+\frac {h^{2}}{2}\left . \frac {\partial ^{2}u}{\partial x^{2}}\right \vert _{x_{j}}+O\left ( h^{3}\right ) \tag {3}\end{equation}

And expanding

\begin{equation} u\left ( t^{n},x_{j}-h\right ) =u\left ( t^{n},x_{j}\right ) -h\left . \frac {\partial u}{\partial x}\right \vert _{x_{j}}+\frac {h^{2}}{2}\left . \frac {\partial ^{2}u}{\partial x^{2}}\right \vert _{x_{j}}-O\left ( h^{3}\right ) \tag {4}\end{equation}

Now plug-in (2,3,4) back into (1). Simplifying, many things drop out (using the fact that \(u_{t}=u_{xx}\) for the exact solution), and after dividing by the time step \(k\) (the usual normalization of the LTE) we should obtain

\[ LTE=O(k)+O\left ( h^{2}\right ) \]

Which says that \(LTE\rightarrow 0\) as \(h\rightarrow 0,k\rightarrow 0\). Hence it is consistent.

To check it is stable, use the Von Neumann method for stability. This checks that the solution at the next time step does not become larger than the solution at the current time step. There can be a condition for this, such as: it is stable if \(k\leq \frac {h^{2}}{2}\). This says that using this scheme, it will be stable as long as the time step is smaller than \(\frac {h^{2}}{2}\). This makes the time step much smaller than the space step.
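
A minimal numpy sketch of this scheme (the grid, step sizes and initial condition are illustrative assumptions; note \(k\leq \frac {h^{2}}{2}\) is satisfied here):

    import numpy as np

    x = np.linspace(0.0, 1.0, 51)       # grid on [0,1] with h = 0.02
    h = x[1] - x[0]
    k = 0.4*h**2                        # time step, stable since k <= h^2/2
    r = k/h**2
    u = np.sin(np.pi*x)                 # u(x,0) = sin(pi x), u = 0 at both ends
    steps = 500
    for _ in range(steps):
        u[1:-1] += r*(u[:-2] - 2*u[1:-1] + u[2:])
    exact = np.exp(-np.pi**2*steps*k)*np.sin(np.pi*x)
    print(np.max(np.abs(u - exact)))    # small; try k > h**2/2 and it blows up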

\(\blacksquare \) For \(ax^{2}+bx+c=0\), with roots \(\alpha ,\beta \) then the relation between roots and coefficients is

\begin{align*} \alpha +\beta & =-\frac {b}{a}\\ \alpha \beta & =\frac {c}{a}\end{align*}

\(\blacksquare \) Leibniz rules for integration

\begin{align*} \frac {d}{dx}\int _{a\left ( x\right ) }^{b\left ( x\right ) }f\left ( t\right ) dt & =f\left ( b\left ( x\right ) \right ) b^{\prime }\left ( x\right ) -f\left ( a\left ( x\right ) \right ) a^{\prime }\left ( x\right ) \\ \frac {d}{dx}\int _{a\left ( x\right ) }^{b\left ( x\right ) }f\left ( t,x\right ) dt & =f\left ( b\left ( x\right ) ,x\right ) b^{\prime }\left ( x\right ) -f\left ( a\left ( x\right ) ,x\right ) a^{\prime }\left ( x\right ) +\int _{a\left ( x\right ) }^{b\left ( x\right ) }\frac {\partial }{\partial x}f\left ( t,x\right ) dt \end{align*}
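
A sympy check of the second form, with arbitrary concrete choices \(f\left ( t,x\right ) =\sin \left ( tx\right ) \), \(a\left ( x\right ) =x\), \(b\left ( x\right ) =x^{2}\) (a minimal sketch):

    import sympy as sp

    x, t = sp.symbols('x t', positive=True)
    f = sp.sin(t*x)
    a, b = x, x**2
    lhs = sp.diff(sp.integrate(f, (t, a, b)), x)
    rhs = (f.subs(t, b)*sp.diff(b, x) - f.subs(t, a)*sp.diff(a, x)
           + sp.integrate(sp.diff(f, x), (t, a, b)))
    print(sp.simplify(lhs - rhs))    # 0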

\(\blacksquare \) \(\int _{a}^{b}f\left ( x\right ) dx=\int _{a}^{b}f\left ( a+b-x\right ) dx\)

\(\blacksquare \) A differentiable function is continuous. But a continuous function is not necessarily differentiable. An example is the \(\left \vert x\right \vert \) function.

\(\blacksquare \) Mean curvature being zero is a characteristic of minimal surfaces.

\(\blacksquare \) How to find the phase difference between 2 signals \(x_{1}(t),x_{2}(t)\)? One way is to find the DFT of both signals (in Mathematica this is Fourier, in Matlab fft()), then find the bin where the peak frequency is located (in either output), then find the phase difference between the two outputs at that bin. The value of the DFT at that bin is a complex number; use Arg in Mathematica to find its phase. The difference gives the phase difference between the original signals in the time domain. See https://mathematica.stackexchange.com/questions/11046/how-to-find-the-phase-difference-of-two-sampled-sine-waves for an example.
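
A minimal numpy sketch of this procedure (the sample rate, frequency and the \(60^{\circ }\) offset are made-up values):

    import numpy as np

    fs, f0, n = 1000.0, 50.0, 1000        # sample rate, signal frequency, samples
    t = np.arange(n)/fs                   # exactly 50 cycles, so f0 lands on a bin
    x1 = np.sin(2*np.pi*f0*t)
    x2 = np.sin(2*np.pi*f0*t + np.pi/3)   # second signal, shifted by 60 degrees
    X1, X2 = np.fft.rfft(x1), np.fft.rfft(x2)
    peak = np.argmax(np.abs(X1))          # bin of the peak frequency
    dphi = np.angle(X2[peak]) - np.angle(X1[peak])
    print(np.degrees(dphi))               # approximately 60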

\(\blacksquare \) Watch out when squaring both sides of an equation. For example, given \(y=\sqrt {x}\), squaring both sides gives \(y^{2}=x\). But this is only true for \(y\geq 0\). Why? Let us take the square root of this in order to get back to the original equation. This gives \(\sqrt {y^{2}}=\sqrt {x}\). And here is the problem: \(\sqrt {y^{2}}=y\) only for \(y\geq 0\). Why? Let us assume \(y=-1\). Then \(\sqrt {y^{2}}=\sqrt {\left ( -1\right ) ^{2}}=\sqrt {1}=1\), which is not \(-1\). So when squaring both sides of an equation, remember this condition.

\(\blacksquare \) Do not replace \(\sqrt {x^{2}}\) by \(x\), but by \(|x|\), since \(x=\sqrt {x^{2}}\) only for non-negative \(x\).

\(\blacksquare \) Given an equation, and we want to solve for \(x\). We can square both sides in order to get rid of a sqrt if needed on one side. But be careful. Even though after squaring both sides the new equation is still true, the new equation can have extraneous solutions that do not satisfy the original equation. Here is an example I saw on the internet which illustrates this. Given \(\sqrt {x}=x-6\), and we want to solve for \(x\). Squaring both sides gives \(x=\left ( x-6\right ) ^{2}\). This has solutions \(x=9,x=4\). But only \(x=9\) is a valid solution of the original equation before squaring. The solution \(x=4\) is extraneous. So we need to check all solutions found after squaring against the original equation, and remove the extraneous ones. In summary, if \(a^{2}=b^{2}\) then this does not mean that \(a=b\). But if \(a=b\) then it means that \(a^{2}=b^{2}\). For example \(\left ( -5\right ) ^{2}=5^{2}\). But \(-5\neq 5\).
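
A quick sympy illustration of the extraneous root (a minimal sketch):

    import sympy as sp

    x = sp.symbols('x')
    print(sp.solve(sp.Eq(sp.sqrt(x), x - 6), x))   # [9]: solve checks and drops x = 4
    print(sp.solve(sp.Eq(x, (x - 6)**2), x))       # [4, 9]: squaring introduced x = 4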

\(\blacksquare \) How to find Laplace transform of product of two functions?

There is no formula for the Laplace transform of the product \(f\left ( t\right ) g\left ( t\right ) \). (But if this were a convolution, it would be a different story.) But you could always try the definition and see if you can integrate it. Since \(\mathcal {L}\left ( f\left ( t\right ) \right ) =\int _{0}^{\infty }e^{-st}f\left ( t\right ) dt\) then \(\mathcal {L}\left ( f\left ( t\right ) g\left ( t\right ) \right ) =\int _{0}^{\infty }e^{-st}f\left ( t\right ) g\left ( t\right ) dt\). Hence for \(f\left ( t\right ) =e^{at},g\left ( t\right ) =t\) this becomes

\begin{align*}\mathcal {L}\left ( te^{at}\right ) & =\int _{0}^{\infty }e^{-st}te^{at}dt\\ & =\int _{0}^{\infty }te^{-t\left ( s-a\right ) }dt \end{align*}

Let \(s-a\equiv z\) then

\begin{align*}\mathcal {L}\left ( te^{at}\right ) & =\int _{0}^{\infty }te^{-tz}dt\\ & =\mathcal {L}_{z}\left ( t\right ) \\ & =\frac {1}{z^{2}}\\ & =\frac {1}{\left ( s-a\right ) ^{2}}\end{align*}

Similarly for \(f\left ( t\right ) =e^{at},g\left ( t\right ) =t^{2}\)

\begin{align*}\mathcal {L}\left ( t^{2}e^{at}\right ) & =\int _{0}^{\infty }e^{-st}t^{2}e^{at}dt\\ & =\int _{0}^{\infty }t^{2}e^{-t\left ( s-a\right ) }dt \end{align*}

Let \(s-a\equiv z\) then

\begin{align*}\mathcal {L}\left ( t^{2}e^{at}\right ) & =\int _{0}^{\infty }t^{2}e^{-tz}dt\\ & =\mathcal {L}_{z}\left ( t^{2}\right ) \\ & =\frac {2}{z^{3}}\\ & =\frac {2}{\left ( s-a\right ) ^{3}}\end{align*}

Similarly for \(f\left ( t\right ) =e^{at},g\left ( t\right ) =t^{3}\)

\begin{align*}\mathcal {L}\left ( t^{3}e^{at}\right ) & =\int _{0}^{\infty }e^{-st}t^{3}e^{at}dt\\ & =\int _{0}^{\infty }t^{3}e^{-t\left ( s-a\right ) }dt \end{align*}

Let \(s-a\equiv z\) then

\begin{align*}\mathcal {L}\left ( t^{3}e^{at}\right ) & =\int _{0}^{\infty }t^{3}e^{-tz}dt\\ & =\mathcal {L}_{z}\left ( t^{3}\right ) \\ & =\frac {6}{z^{4}}\\ & =\frac {6}{\left ( s-a\right ) ^{4}}\end{align*}

And so on. Hence we see that for \(f\left ( t\right ) =e^{at},g\left ( t\right ) =t^{n}\)

\[\mathcal {L}\left ( t^{n}e^{at}\right ) =\frac {n!}{\left ( s-a\right ) ^{n+1}}\]
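
This can be confirmed with sympy's laplace_transform (a minimal sketch):

    import sympy as sp

    t, s, a = sp.symbols('t s a', positive=True)
    # expect 6/(s - a)**4, i.e. n!/(s - a)**(n + 1) with n = 3
    print(sp.laplace_transform(t**3*sp.exp(a*t), t, s, noconds=True))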

5 Converting first order ODE which is homogeneous to separable ODE

(Added July, 2017).

If the ODE \(M\left ( x,y\right ) +N\left ( x,y\right ) \frac {dy}{dx}=0\) has both \(M\) and \(N\) homogeneous functions of the same power, then this ODE can be converted to separable. Here is an example. We want to solve

\begin{equation} \left ( x^{3}+8x^{2}y\right ) +\left ( 4xy^{2}-y^{3}\right ) y^{\prime }=0 \tag {1}\end{equation}

The above is homogeneous in \(M,N\), since the total power of each term in them is \(3\).

\[ \left ( \overset {3}{\overbrace {x^{3}}}+8\overset {3}{\overbrace {x^{2}y}}\right ) +\left ( 4\overset {3}{\overbrace {xy^{2}}}-\overset {3}{\overbrace {y^{3}}}\right ) y^{\prime }=0 \]

So we look at each term in \(N\) and \(M\) and add the powers on each \(x,y\) in them. All powers should add to the same value, which is \(3\) in this case. Of course \(N,M\) should be polynomials for this to work, so one should check that they are polynomials in \(x,y\) before starting this process. Once we check \(M,N\) are homogeneous, then we let

\[ y=xv \]

Therefore now

\begin{align} M & =x^{3}+8x^{2}\left ( xv\right ) \nonumber \\ & =x^{3}+8x^{3}v \tag {2}\end{align}

And

\begin{align} N & =4x\left ( xv\right ) ^{2}-\left ( xv\right ) ^{3}\nonumber \\ & =4x^{3}v^{2}-x^{3}v^{3} \tag {3}\end{align}

And

\begin{equation} y^{\prime }=v+xv^{\prime } \tag {4}\end{equation}

Substituting (2,3,4) into (1) gives

\begin{align*} \left ( x^{3}+8x^{3}v\right ) +\left ( 4x^{3}v^{2}-x^{3}v^{3}\right ) \left ( v+xv^{\prime }\right ) & =0\\ \left ( x^{3}+8x^{3}v\right ) +\left ( 4x^{3}v^{3}-x^{3}v^{4}\right ) +\left ( 4x^{4}v^{2}-x^{4}v^{3}\right ) v^{\prime } & =0 \end{align*}

Dividing by \(x^{3}\neq 0\) it simplifies to

\[ \left ( 1+8v\right ) +\left ( 4v^{3}-v^{4}\right ) +x\left ( 4v^{2}-v^{3}\right ) v^{\prime }=0 \]

Which can be written as

\begin{align*} x\left ( 4v^{2}-v^{3}\right ) v^{\prime } & =-\left ( \left ( 1+8v\right ) +\left ( 4v^{3}-v^{4}\right ) \right ) \\ v^{\prime } & =\frac {-\left ( \left ( 1+8v\right ) +\left ( 4v^{3}-v^{4}\right ) \right ) }{\left ( 4v^{2}-v^{3}\right ) }\left ( \frac {1}{x}\right ) \end{align*}

We see that it is now separable. We now solve this for \(v\left ( x\right ) \) by direct integration of both sides, and then using \(y=xv\) we find \(y\left ( x\right ) \).
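
A sympy sketch of the substitution step (after dividing by \(x^{3}\), the only remaining \(x\) multiplies \(v^{\prime }\), which is exactly what makes the equation separable):

    import sympy as sp

    x = sp.symbols('x', positive=True)
    v = sp.Function('v')
    y = x*v(x)                                   # the substitution y = x v
    ode = (x**3 + 8*x**2*y) + (4*x*y**2 - y**3)*sp.diff(y, x)
    # 1 + 8 v + 4 v^3 - v^4 + x (4 v^2 - v^3) v' : x appears only with v'
    print(sp.expand(ode/x**3))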

6 Direct solving of some simple PDE’s

Some simple PDE’s can be solved by direct integration; here are a few examples.

Example 1

\[ \frac {\partial z\left ( x,y\right ) }{\partial x}=0 \]

Integrating w.r.t. \(x\), and remembering that the constant of integration will now be a function of \(y\), gives

\[ z\left ( x,y\right ) =f\left ( y\right ) \]

Example 2

\[ \frac {\partial ^{2}z\left ( x,y\right ) }{\partial x^{2}}=x \]

Integrating once w.r.t. \(x\) gives

\[ \frac {\partial z\left ( x,y\right ) }{\partial x}=\frac {x^{2}}{2}+f\left ( y\right ) \]

Integrating again gives

\[ z\left ( x,y\right ) =\frac {x^{3}}{6}+xf\left ( y\right ) +g\left ( y\right ) \]

Example 3

\[ \frac {\partial ^{2}z\left ( x,y\right ) }{\partial y^{2}}=y \]

Integrating once w.r.t. \(y\) gives

\[ \frac {\partial z\left ( x,y\right ) }{\partial y}=\frac {y^{2}}{2}+f\left ( x\right ) \]

Integrating again gives

\[ z\left ( x,y\right ) =\frac {y^{3}}{6}+yf\left ( x\right ) +g\left ( x\right ) \]

Example 4

\[ \frac {\partial ^{2}z\left ( x,y\right ) }{\partial x\partial y}=0 \]

Integrating once w.r.t \(x\) gives

\[ \frac {\partial z\left ( x,y\right ) }{\partial y}=f\left ( y\right ) \]

Integrating again w.r.t. \(y\) gives

\[ z\left ( x,y\right ) =\int f\left ( y\right ) dy+g\left ( x\right ) \]

Example 5

Solve \(u_{t}+u_{x}=0\) with \(u\left ( x,1\right ) =\frac {x}{1+x^{2}}\). Let \(u\equiv u\left ( x\left ( t\right ) ,t\right ) \), therefore

\[ \frac {du}{dt}=\frac {\partial u}{\partial t}+\frac {\partial u}{\partial x}\frac {dx}{dt}\]

Comparing the above with the given PDE, we see that if \(\frac {dx}{dt}=1\) then \(\frac {du}{dt}=0\) or \(u\left ( x\left ( t\right ) ,t\right ) \) is constant. At \(t=1\) we are given that

\begin{equation} u=\frac {x\left ( 1\right ) }{1+x\left ( 1\right ) ^{2}} \tag {1}\end{equation}

To find \(x\left ( 1\right ) \), from \(\frac {dx}{dt}=1\) we obtain that \(x\left ( t\right ) =t+c\). At \(t=1\), \(c=x\left ( 1\right ) -1\). Hence \(x\left ( t\right ) =t+x\left ( 1\right ) -1\) or

\[ x\left ( 1\right ) =x\left ( t\right ) +1-t \]

Hence solution from (1) becomes

\[ u=\frac {x-t+1}{1+\left ( x-t+1\right ) ^{2}}\]
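
A quick sympy verification of this solution (a minimal sketch):

    import sympy as sp

    x, t = sp.symbols('x t')
    u = (x - t + 1)/(1 + (x - t + 1)**2)
    print(sp.simplify(sp.diff(u, t) + sp.diff(u, x)))   # 0, so u_t + u_x = 0
    print(sp.simplify(u.subs(t, 1) - x/(1 + x**2)))     # 0, the data at t = 1 matches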

Example 6

Solve \(u_{t}+u_{x}+u^{2}=0\).

Let \(u\equiv u\left ( x\left ( t\right ) ,t\right ) \), therefore

\[ \frac {du}{dt}=\frac {\partial u}{\partial t}+\frac {\partial u}{\partial x}\frac {dx}{dt}\]

Comparing the above with the given PDE, we see that if \(\frac {dx}{dt}=1\) then \(\frac {du}{dt}=-u^{2}\) or \(\frac {-1}{u}=-t+c.\) Hence

\[ u=\frac {1}{t+c}\]

At \(t=0\), \(c=\frac {1}{u\left ( x\left ( 0\right ) ,0\right ) }\). Let \(u\left ( x\left ( 0\right ) ,0\right ) =f\left ( x\left ( 0\right ) \right ) \). Therefore

\[ u=\frac {1}{t+\frac {1}{f\left ( x\left ( 0\right ) \right ) }}\]

Now we need to find \(x\left ( 0\right ) \). From \(\frac {dx}{dt}=1\), then \(x=t+c\) or \(c=x\left ( 0\right ) \), hence \(x\left ( 0\right ) =x-t\) and the above becomes

\[ u\left ( x,t\right ) =\frac {1}{t+\frac {1}{f\left ( x-t\right ) }}=\frac {f\left ( x-t\right ) }{tf\left ( x-t\right ) +1}\]
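
Again this can be verified with sympy, for an arbitrary initial profile \(f\) (a minimal sketch):

    import sympy as sp

    x, t = sp.symbols('x t')
    f = sp.Function('f')
    u = f(x - t)/(t*f(x - t) + 1)
    print(sp.simplify(sp.diff(u, t) + sp.diff(u, x) + u**2))   # 0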

7 Fourier series flow chart

(added Oct. 20, 2016)

(figure omitted: Fourier series flow chart)

7.1 Theorem on when we can do term by term differentiation

If \(f\left ( x\right ) \) on \(-L\leq x\leq L\) is continuous (notice, NOT piecewise continuous), meaning \(f\left ( x\right ) \) has no jumps in it, and \(f^{\prime }\left ( x\right ) \) exists on \(-L<x<L\) and is either continuous or piecewise continuous (notice that \(f^{\prime }\left ( x\right ) \) is allowed to be piecewise continuous (P.W.C.), i.e. have a finite number of jump discontinuities), and also, and this is very important, \(f\left ( -L\right ) =f\left ( L\right ) \), then we can do term by term differentiation of the Fourier series of \(f\left ( x\right ) \) and use \(=\) instead of \(\sim \). Not only that, but the term by term differentiation of the Fourier series of \(f\left ( x\right ) \) will give the Fourier series of \(f^{\prime }\left ( x\right ) \) itself.

So the main restriction here is that \(f\left ( x\right ) \) on \(-L\leq x\leq L\) is continuous (no jump discontinuities) and that \(f\left ( -L\right ) =f\left ( L\right ) \). So look at \(f\left ( x\right ) \) first and see if it is continuous or not (remember, the whole \(f\left ( x\right ) \) has to be continuous, not just piecewise, so no jump discontinuities). If this condition is met, check if \(f\left ( -L\right ) =f\left ( L\right ) \).

For example \(f\left ( x\right ) =x\) on \(-1\leq x\leq 1\) is continuous, but \(f\left ( -1\right ) \neq f\left ( 1\right ) \), so the F.S. of \(f\left ( x\right ) \) can’t be term by term differentiated (well, it can, but the result will not be the Fourier series of \(f^{\prime }\left ( x\right ) \)). So we should not do term by term differentiation in this case.

But the Fourier series for \(f\left ( x\right ) =x^{2}\) can be term by term differentiated. This has its \(f^{\prime }\left ( x\right ) \) being continuous, since it meets all the conditions. Also Fourier series for \(f\left ( x\right ) =\left \vert x\right \vert \) can be term by term differentiated. This has its \(f^{\prime }\left ( x\right ) \) being P.W.C. due to a jump at \(x=0\) but that is OK, as \(f^{\prime }\left ( x\right ) \) is allowed to be P.W.C., but it is \(f\left ( x\right ) \) which is not allowed to be P.W.C.

There is a useful corollary that comes from the above. If \(f\left ( x\right ) \) meets all the conditions above, then its Fourier series is absolutely convergent and also uniformly convergent. The M-test can be used to verify that the Fourier series is uniformly convergent.

7.2 Relation between coefficients of Fourier series of \(f\left ( x\right ) \) and Fourier series of \(f^{\prime }\left ( x\right ) \)

If term by term differentiation is allowed, then let

\begin{align*} f\left ( x\right ) & =\frac {a_{0}}{2}+\sum _{n=1}^{\infty }a_{n}\cos \left ( n\frac {\pi }{L}x\right ) +b_{n}\sin \left ( n\frac {\pi }{L}x\right ) \\ f^{\prime }\left ( x\right ) & =\frac {\alpha _{0}}{2}+\sum _{n=1}^{\infty }\alpha _{n}\cos \left ( n\frac {\pi }{L}x\right ) +\beta _{n}\sin \left ( n\frac {\pi }{L}x\right ) \end{align*}

Then

\begin{align*} \alpha _{0} & =0\\ \alpha _{n} & =\frac {n\pi }{L}b_{n}\\ \beta _{n} & =-\frac {n\pi }{L}a_{n}\end{align*}

And Bessel’s inequality, instead of \(\frac {a_{0}^{2}}{2}+\sum _{n=1}^{\infty }\left ( a_{n}^{2}+b_{n}^{2}\right ) <\infty \), now becomes \(\sum _{n=1}^{\infty }n^{2}\left ( a_{n}^{2}+b_{n}^{2}\right ) <\infty \) (up to the constant factor \(\left ( \frac {\pi }{L}\right ) ^{2}\)). So it is stronger.

7.3 Theorem on convergence of Fourier series

If \(f\left ( x\right ) \) is piecewise continuous on \(-L<x<L\), is periodic with period \(2L\), and at every point \(x\) in \(-\infty <x<\infty \) both the left sided derivative and the right sided derivative exist (but these do not have to be the same!), then the Fourier series of \(f\left ( x\right ) \) converges, and it converges to the average of the left and right limits \(\frac {1}{2}\left ( f\left ( x^{-}\right ) +f\left ( x^{+}\right ) \right ) \) at each point, including points that have jump discontinuities.

8 Laplacian in different coordinates

(added Jan. 10, 2019)
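
For reference, the standard forms in Cartesian \(\left ( x,y,z\right ) \), cylindrical \(\left ( r,\theta ,z\right ) \) and spherical \(\left ( r,\theta ,\phi \right ) \) coordinates (with \(\theta \) the polar angle in the spherical case) are

\begin{align*} \nabla ^{2}u & =\frac {\partial ^{2}u}{\partial x^{2}}+\frac {\partial ^{2}u}{\partial y^{2}}+\frac {\partial ^{2}u}{\partial z^{2}}\\ \nabla ^{2}u & =\frac {1}{r}\frac {\partial }{\partial r}\left ( r\frac {\partial u}{\partial r}\right ) +\frac {1}{r^{2}}\frac {\partial ^{2}u}{\partial \theta ^{2}}+\frac {\partial ^{2}u}{\partial z^{2}}\\ \nabla ^{2}u & =\frac {1}{r^{2}}\frac {\partial }{\partial r}\left ( r^{2}\frac {\partial u}{\partial r}\right ) +\frac {1}{r^{2}\sin \theta }\frac {\partial }{\partial \theta }\left ( \sin \theta \frac {\partial u}{\partial \theta }\right ) +\frac {1}{r^{2}\sin ^{2}\theta }\frac {\partial ^{2}u}{\partial \phi ^{2}}\end{align*}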

9 Linear combination of two solution is solution to ODE

If \(y_{1},y_{2}\) are two solutions to \(ay^{\prime \prime }+by^{\prime }+cy=0\) then to show that \(c_{1}y_{1}+c_{2}y_{2}\) is also solution:

\begin{align*} ay_{1}^{\prime \prime }+by_{1}^{\prime }+cy_{1} & =0\\ ay_{2}^{\prime \prime }+by_{2}^{\prime }+cy_{2} & =0 \end{align*}

Multiply the first ODE by \(c_{1}\) and second ODE by \(c_{2}\)

\begin{align*} a\left ( c_{1}y_{1}\right ) ^{\prime \prime }+b\left ( c_{1}y_{1}\right ) ^{\prime }+c\left ( c_{1}y_{1}\right ) & =0\\ a\left ( c_{2}y_{2}\right ) ^{\prime \prime }+b\left ( c_{2}y_{2}\right ) ^{\prime }+c\left ( c_{2}y_{2}\right ) & =0 \end{align*}

Add the above two equations, using linearity of differentials

\[ a\left ( c_{1}y_{1}+c_{2}y_{2}\right ) ^{\prime \prime }+b\left ( c_{1}y_{1}+c_{2}y_{2}\right ) ^{\prime }+c\left ( c_{1}y_{1}+c_{2}y_{2}\right ) =0 \]

Therefore \(c_{1}y_{1}+c_{2}y_{2}\) satisfies the original ODE. Hence it is a solution.

10 To find the Wronskian ODE

Since

\[ W\left ( x\right ) =\begin {vmatrix} y_{1} & y_{2}\\ y_{1}^{\prime } & y_{2}^{\prime }\end {vmatrix} =y_{1}y_{2}^{\prime }-y_{2}y_{1}^{\prime }\]

Where \(y_{1},y_{2}\) are two solutions to \(ay^{\prime \prime }+py^{\prime }+cy=0.\) Write

\begin{align*} ay_{1}^{\prime \prime }+py_{1}^{\prime }+cy_{1} & =0\\ ay_{2}^{\prime \prime }+py_{2}^{\prime }+cy_{2} & =0 \end{align*}

Multiply the first ODE above by \(y_{2}\) and the second by \(y_{1}\)

\begin{align*} ay_{2}y_{1}^{\prime \prime }+py_{2}y_{1}^{\prime }+cy_{2}y_{1} & =0\\ ay_{1}y_{2}^{\prime \prime }+py_{1}y_{2}^{\prime }+cy_{1}y_{2} & =0 \end{align*}

Subtract the second from the first

\begin{equation} a\left ( y_{2}y_{1}^{\prime \prime }-y_{1}y_{2}^{\prime \prime }\right ) +p\left ( y_{2}y_{1}^{\prime }-y_{1}y_{2}^{\prime }\right ) =0 \tag {1}\end{equation}

But

\begin{equation} p\left ( y_{2}y_{1}^{\prime }-y_{1}y_{2}^{\prime }\right ) =-pW \tag {2}\end{equation}

And

\begin{align} \frac {dW}{dx} & =\frac {d}{dx}\left ( y_{1}y_{2}^{\prime }-y_{2}y_{1}^{\prime }\right ) \nonumber \\ & =y_{1}^{\prime }y_{2}^{\prime }+y_{1}y_{2}^{\prime \prime }-y_{2}^{\prime }y_{1}^{\prime }-y_{2}y_{1}^{\prime \prime }\nonumber \\ & =y_{1}y_{2}^{\prime \prime }-y_{2}y_{1}^{\prime \prime } \tag {3}\end{align}

Substituting (2,3) into (1) gives the Wronskian differential equation

\begin{align*} -a\left ( \frac {dW}{dx}\right ) -pW & =0\\ aW^{\prime }+pW & =0 \end{align*}

Whose solution is

\[ W\left ( x\right ) =Ce^{-\int \frac {p}{a}dx}\]

Where \(C\) is constant of integration.

Remember: \(W\left ( x_{0}\right ) =0\) does not mean the two functions are linearly dependent. The functions can still be linearly independent on another interval; it just means \(x_{0}\) can’t be in the domain on which the two functions are solutions of the ODE. However, if the two functions are linearly dependent, then this implies \(W=0\) everywhere. So to check that two functions are L.D., we need to show that \(W=0\) everywhere.
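
A quick sympy check of this formula on the Euler ode \(x^{2}y^{\prime \prime }+xy^{\prime }-y=0\), whose standard form has \(\frac {p}{a}=\frac {1}{x}\) and whose solutions are \(x\) and \(\frac {1}{x}\) (a minimal sketch):

    import sympy as sp

    x = sp.symbols('x', positive=True)
    y1, y2 = x, 1/x                      # two solutions of x^2 y'' + x y' - y = 0
    W = y1*sp.diff(y2, x) - y2*sp.diff(y1, x)
    print(sp.simplify(W))                               # -2/x
    print(sp.simplify(sp.exp(-sp.integrate(1/x, x))))   # 1/x, so W = C/x with C = -2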

11 Green functions notes

\(\blacksquare \) Green function is what is called impulse response in control. But it is more general, and can be used for solving PDE also.

Given a differential equation with some forcing function on the right side. To solve this, we replace the forcing function with an impulse. The solution of the DE now is called the impulse response, which is the Green’s function of the differential equation.

Now to find the solution to the original problem with the original forcing function, we just convolve the Green function with the original forcing function. Here is an example. Suppose we want to solve \(L\left [ y\left ( t\right ) \right ] =f\left ( t\right ) \) with zero initial conditions. Then we solve \(L\left [ g\left ( t\right ) \right ] =\delta \left ( t\right ) \). The solution is \(g\left ( t\right ) \). Now \(y\left ( t\right ) =g\left ( t\right ) \circledast f\left ( t\right ) \). This is for the initial value problem. For example, take \(y^{\prime }\left ( t\right ) +ky=e^{at}\), with \(y\left ( 0\right ) =0\). Then we solve \(g^{\prime }\left ( t\right ) +kg=\delta \left ( t\right ) \). The solution is \(g\left ( t\right ) =\left \{ \begin {array} [c]{cc}e^{-kt} & t>0\\ 0 & t<0 \end {array} \right . \), this is for a causal system. Hence \(y\left ( t\right ) =g\left ( t\right ) \circledast f\left ( t\right ) \). The nice thing here is that once we find \(g\left ( t\right ) \), we can solve \(y^{\prime }\left ( t\right ) +ky=f\left ( t\right ) \) for any \(f\left ( t\right ) \) by just convolving the Green function (impulse response) with the new \(f\left ( t\right ) \).
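
A sympy sketch of this convolution for \(y^{\prime }+ky=e^{at}\) with \(y\left ( 0\right ) =0\) (a minimal check that the convolved solution satisfies the ODE):

    import sympy as sp

    t, tau, k, a = sp.symbols('t tau k a', positive=True)
    g = sp.exp(-k*(t - tau))                        # causal impulse response of y' + k y
    y = sp.integrate(g*sp.exp(a*tau), (tau, 0, t))  # y = g convolved with f
    print(sp.simplify(sp.diff(y, t) + k*y - sp.exp(a*t)))   # 0, the ODE is satisfied
    print(y.subs(t, 0))                                     # 0, zero initial condition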

\(\blacksquare \) We can think of the Green function as an inverse operator. Given \(L\left [ y\left ( t\right ) \right ] =f\left ( t\right ) \), we want to find the solution \(y\left ( t\right ) =\int _{-\infty }^{\infty }G\left ( t;\tau \right ) f\left ( \tau \right ) d\tau \). So in a sense, integrating against the kernel \(G\left ( t;\tau \right ) \) acts like \(L^{-1}\).

\(\blacksquare \) Need to add notes for the Green function for the Sturm-Liouville boundary value ODE. Need to be clear on what boundary conditions to use. What if the B.C. is not homogeneous?

\(\blacksquare \) Green function properties:

  1. \(G\left ( t;\tau \right ) \) is continuous at \(t=\tau \). This is where the impulse is located.
  2. The derivative \(G^{\prime }\) just before \(t=\tau \) is not the same as \(G^{\prime }\) just after \(t=\tau \), i.e. \(\lim _{\varepsilon \rightarrow 0}\left ( G^{\prime }\left ( \tau +\varepsilon ;\tau \right ) -G^{\prime }\left ( \tau -\varepsilon ;\tau \right ) \right ) \neq 0\). This means there is a jump discontinuity in the derivative there.
  3. \(G\left ( t;\tau \right ) \) should satisfy same boundary conditions as original PDE or ODE (this is for Sturm-Liouville or boundary value problems).
  4. \(L\left [ G\left ( t;\tau \right ) \right ] =0\) for \(t\neq \tau \)
  5. \(G\left ( x;\tau \right ) \) is symmetric. i.e. \(G\left ( x;\tau \right ) =G\left ( \tau ;x\right ) \).

\(\blacksquare \) When solving for \(G\left ( t;\tau \right ) \), in context of 1D, hence two boundary conditions, one at each end, and second order ODE (Sturm-Liouville), we now get two solutions, one for \(t<\tau \) and one for \(t>\tau \).

So we have \(4\) constants of integration to find (this is for a second order ODE), not just two constants as one would normally get, since we now have 2 different solutions. Two of these constants come from the two boundary conditions, and two more come from the Green function properties mentioned above. \(G\left ( t;\tau \right ) =\left \{ \begin {array} [c]{cc}A_{1}y_{1}+A_{2}y_{2} & 0<t<\tau \\ A_{3}y_{1}+A_{4}y_{2} & \tau <t<L \end {array} \right . \)

12 Laplace transform notes

\(\blacksquare \) Remember that \(u_{c}\left ( t\right ) f\left ( t-c\right ) \Longleftrightarrow e^{-cs}F\left ( s\right ) \) and \(u_{c}\left ( t\right ) f\left ( t\right ) \Longleftrightarrow e^{-cs}\mathcal {L}\left \{ f\left ( t+c\right ) \right \} \). For example, if we are given \(u_{2}\left ( t\right ) t\), then \(\mathcal {L}\left ( u_{2}\left ( t\right ) t\right ) =e^{-2s}\mathcal {L}\left \{ t+2\right \} =e^{-2s}\left ( \frac {1}{s^{2}}+\frac {2}{s}\right ) =e^{-2s}\left ( \frac {1+2s}{s^{2}}\right ) \). Do not do \(u_{c}\left ( t\right ) f\left ( t\right ) \Longleftrightarrow e^{-cs}\mathcal {L}\left \{ f\left ( t\right ) \right \} \)! That would be a big error. We use this a lot when asked to write a piecewise function using Heaviside functions.
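
A sympy check of the \(u_{2}\left ( t\right ) t\) example (a minimal sketch; assumes a sympy version whose laplace_transform handles Heaviside):

    import sympy as sp

    t, s = sp.symbols('t s', positive=True)
    F = sp.laplace_transform(sp.Heaviside(t - 2)*t, t, s, noconds=True)
    print(sp.simplify(F - sp.exp(-2*s)*(1 + 2*s)/s**2))   # 0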

13 Series, power series, Laurent series notes

\(\blacksquare \) If we have a function \(f\left ( x\right ) \) represented as a series (say power series or Fourier series), then we say the series converges to \(f\left ( x\right ) \) uniformly in a region \(D\) if, given \(\varepsilon >0\), we can find a number \(N\) which depends only on \(\varepsilon \), such that \(\left \vert f\left ( x\right ) -S_{N}\left ( x\right ) \right \vert <\varepsilon \) for every \(x\) in \(D\).

Where here \(S_{N}\left ( x\right ) \) is the partial sum of the series using \(N\) terms. The difference between uniform convergence and non-uniform convergence is that with uniform convergence the number \(N\) depends only on \(\varepsilon \) and not on which \(x\) we are trying to approximate \(f\left ( x\right ) \) at. With non-uniform convergence, the number \(N\) depends on both \(\varepsilon \) and \(x\). So this means at some locations in \(D\) we need a much larger \(N\) than at other locations to converge to \(f\left ( x\right ) \) with the same accuracy. Uniform convergence is better. It depends on the basis functions used to approximate \(f\left ( x\right ) \) in the series.

If the function \(f\left ( x\right ) \) is discontinuous at some point, then it is not possible to have uniform convergence there. As we get closer and closer to the discontinuity, more and more terms are needed to obtain the same accuracy as away from the discontinuity, hence the convergence is not uniform. For example, the Fourier series approximation of a step function can not be uniformly convergent, due to the discontinuity in the step function.

\(\blacksquare \) \(\ \)Geometric series:

\begin{align*} \sum _{n=0}^{N}r^{n} & =1+r+r^{2}+r^{3}+\cdots +r^{N}=\frac {1-r^{N+1}}{1-r}\\ \sum _{n=1}^{N}r^{n} & =-1+\sum _{n=0}^{N}r^{n}=-1+\frac {1-r^{N+1}}{1-r}=r\frac {1-r^{N}}{1-r}\\ \sum _{n=0}^{\infty }r^{n} & =1+r+r^{2}+r^{3}+\cdots =\frac {1}{1-r}\qquad \left \vert r\right \vert <1\\ \sum _{n=0}^{\infty }\left ( -1\right ) ^{n}r^{n} & =1-r+r^{2}-r^{3}+\cdots =\frac {1}{1+r}\qquad \left \vert r\right \vert <1 \end{align*}

\(\blacksquare \) \(\ \)Binomial series:

General binomial is

\[ \left ( x+y\right ) ^{n}=x^{n}+nx^{n-1}y+\frac {n\left ( n-1\right ) }{2!}x^{n-2}y^{2}+\frac {n\left ( n-1\right ) \left ( n-2\right ) }{3!}x^{n-3}y^{3}+\cdots \]

From the above we can generate all other special cases. For example,

\[ \left ( 1+x\right ) ^{n}=1+nx+\frac {n\left ( n-1\right ) x^{2}}{2!}+\frac {n\left ( n-1\right ) \left ( n-2\right ) x^{3}}{3!}+\cdots \]

This works for positive and negative \(n\), rational or not. The sum converges only for \(\left \vert x\right \vert <1\) (unless \(n\) is a non-negative integer, in which case the series terminates and holds for all \(x\)). From this, we can derive the above sums for the geometric series. For example, for \(n=-1\) the above becomes

\begin{align*} \frac {1}{\left ( 1+x\right ) } & =1-x+x^{2}-x^{3}+\cdots \qquad \left \vert x\right \vert <1\\ \frac {1}{\left ( 1-x\right ) } & =1+x+x^{2}+x^{3}+\cdots \qquad \left \vert x\right \vert <1 \end{align*}

For \(\left \vert x\right \vert >1\), we can still find series expansion in negative powers of \(x\) as follows

\begin{align*} \left ( 1+x\right ) ^{n} & =\left ( x\left ( 1+\frac {1}{x}\right ) \right ) ^{n}\\ & =x^{n}\left ( 1+\frac {1}{x}\right ) ^{n}\end{align*}

And now since \(\left \vert \frac {1}{x}\right \vert <1\), we can use binomial expansion to expand the term \(\left ( 1+\frac {1}{x}\right ) ^{n}\) in the above and obtain a convergent series, since now \(\left \vert \frac {1}{x}\right \vert <1\,.\) This will give the following expansion

\begin{align*} \left ( 1+x\right ) ^{n} & =x^{n}\left ( 1+\frac {1}{x}\right ) ^{n}\\ & =x^{n}\left ( 1+n\left ( \frac {1}{x}\right ) +\frac {n\left ( n-1\right ) }{2!}\left ( \frac {1}{x}\right ) ^{2}+\frac {n\left ( n-1\right ) \left ( n-2\right ) }{3!}\left ( \frac {1}{x}\right ) ^{3}+\cdots \right ) \end{align*}

So everything is the same, we just change \(x\) with \(\frac {1}{x}\) and remember to multiply the whole expansion with \(x^{n}\).  For example, for \(n=-1\)

\begin{align*} \frac {1}{\left ( 1+x\right ) } & =\frac {1}{x\left ( 1+\frac {1}{x}\right ) }=\frac {1}{x}\left ( 1-\frac {1}{x}+\left ( \frac {1}{x}\right ) ^{2}-\left ( \frac {1}{x}\right ) ^{3}+\cdots \right ) \qquad \left \vert x\right \vert >1\\ \frac {1}{\left ( 1-x\right ) } & =\frac {-1}{x\left ( 1-\frac {1}{x}\right ) }=-\frac {1}{x}\left ( 1+\frac {1}{x}+\left ( \frac {1}{x}\right ) ^{2}+\left ( \frac {1}{x}\right ) ^{3}+\cdots \right ) \qquad \left \vert x\right \vert >1 \end{align*}

These tricks are very useful when working with Laurent series.

\(\blacksquare \) \(\ \)Arithmetic series:

\begin{align*} \sum _{n=1}^{N}n & =\frac {1}{2}N\left ( N+1\right ) \\ \sum _{n=1}^{N}a_{n} & =N\left ( \frac {a_{1}+a_{N}}{2}\right ) \end{align*}

i.e. the sum is \(N\) times the arithmetic mean.

\(\blacksquare \) \(\ \)Taylor series: Expanded around \(x=a\) is

\[ f\left ( x\right ) =f\left ( a\right ) +\left ( x-a\right ) f^{\prime }\left ( a\right ) +\frac {\left ( x-a\right ) ^{2}f^{\prime \prime }\left ( a\right ) }{2!}+\frac {\left ( x-a\right ) ^{3}f^{\left ( 3\right ) }\left ( a\right ) }{3!}+\cdots +R_{n}\]

Where \(R_{n}\) is remainder \(R_{n}=\frac {\left ( x-a\right ) ^{n+1}}{\left ( n+1\right ) !}f^{\left ( n+1\right ) }\left ( x_{0}\right ) \) where \(x_{0}\) is some point between \(x\) and \(a\).

\(\blacksquare \) \(\ \)Maclaurin series: Is just Taylor expanded around zero. i.e. \(a=0\)

\[ f\left ( x\right ) =f\left ( 0\right ) +xf^{\prime }\left ( 0\right ) +\frac {x^{2}f^{\prime \prime }\left ( 0\right ) }{2!}+\frac {x^{3}f^{\left ( 3\right ) }\left ( 0\right ) }{3!}+\cdots \]

\(\blacksquare \) \(\ \)This diagram shows the different convergence of series and the relation between them

(figure omitted: relation between the convergence classes \(A,B,C,D\))

The above shows that an absolutely convergent series (\(B\)) is also convergent. Also a uniformly convergent series (\(D\)) is also convergent. But a series in \(B\) is absolutely convergent and not uniformly convergent, while one in \(D\) is uniformly convergent and not absolutely convergent.

A series in \(C\) is both absolutely and uniformly convergent. And finally a series in \(A\) is convergent, but not absolutely (called conditionally convergent). An example of \(B\) (converges absolutely but not uniformly) is

\begin{align*} \sum _{n=0}^{\infty }x^{2}\frac {1}{\left ( 1+x^{2}\right ) ^{n}} & =x^{2}\left ( 1+\frac {1}{1+x^{2}}+\frac {1}{\left ( 1+x^{2}\right ) ^{2}}+\frac {1}{\left ( 1+x^{2}\right ) ^{3}}+\cdots \right ) \\ & =x^{2}+\frac {x^{2}}{1+x^{2}}+\frac {x^{2}}{\left ( 1+x^{2}\right ) ^{2}}+\frac {x^{2}}{\left ( 1+x^{2}\right ) ^{3}}+\cdots \end{align*}

And example of \(D\) (converges uniformly but not absolutely) is

\[ \sum _{n=1}^{\infty }\left ( -1\right ) ^{n+1}\frac {1}{x^{2}+n}=\frac {1}{x^{2}+1}-\frac {1}{x^{2}+2}+\frac {1}{x^{2}+3}-\frac {1}{x^{2}+4}+\cdots \]

An example of \(A\) (converges but not absolutely) is the alternating harmonic series

\[ \sum _{n=1}^{\infty }\left ( -1\right ) ^{n+1}\frac {1}{n}=1-\frac {1}{2}+\frac {1}{3}-\frac {1}{4}+\cdots \]

The above converges to \(\ln \left ( 2\right ) \), but taken with absolute values it becomes the harmonic series, and it diverges

\[ \sum _{n=1}^{\infty }\frac {1}{n}=1+\frac {1}{2}+\frac {1}{3}+\frac {1}{4}+\cdots \]

For uniform convergence, we really need to have an \(x\) in the series and not just numbers, since the idea behind uniform convergence is that the series converges to within an error tolerance \(\varepsilon \) using the same number of terms, independent of the point \(x\) in the region.

\(\blacksquare \) The series \(\sum _{n=1}^{\infty }\frac {1}{n^{a}}\) converges for \(a>1\) and diverges for \(a\leq 1\). So \(a=1\) is the boundary value. For example

\[ 1+\frac {1}{2}+\frac {1}{3}+\frac {1}{4}+\cdots \]

Diverges, since \(a=1\), also \(1+\frac {1}{\sqrt {2}}+\frac {1}{\sqrt {3}}+\frac {1}{\sqrt {4}}+\cdots \) diverges, since \(a=\frac {1}{2}\leq 1\). But \(1+\frac {1}{4}+\frac {1}{9}+\frac {1}{16}+\cdots \) converges, where \(a=2\) here and the sum is \(\frac {\pi ^{2}}{6}\).

\(\blacksquare \) Using partial sums. Let \(\sum _{n=0}^{\infty }a_{n}\) be some series. The partial sum is \(S_{N}=\sum _{n=0}^{N}a_{n}\). Then

\[ \sum _{n=0}^{\infty }a_{n}=\lim _{N\rightarrow \infty }S_{N}\]

If \(\lim _{N\rightarrow \infty }S_{N}\) exists and is finite, then we can say that \(\sum _{n=0}^{\infty }a_{n}\) converges. So here we set up a sequence whose terms are the partial sums, and then look at what happens to \(S_{N}\) in the limit as \(N\rightarrow \infty \). Need to find an example where this method is easier to use to test for convergence than the other methods below.

\(\blacksquare \) Given a series, we are allowed to rearrange order of terms only when the series is absolutely convergent. Therefore for the alternating series \(1-\frac {1}{2}+\frac {1}{3}-\frac {1}{4}+\cdots \), do not rearrange terms since this is not absolutely convergent. This means the series sum is independent of the order in which terms are added only when the series is absolutely convergent.

\(\blacksquare \) An infinite series of complex numbers converges if the real part of the series and also the imaginary part of the series each converges on its own.

\(\blacksquare \) Power series: \(f\left ( z\right ) =\sum _{n=0}^{\infty }a_{n}\left ( z-z_{0}\right ) ^{n}\). This series is centered at \(z_{0}\), or expanded around \(z_{0}\). It has radius of convergence \(R\) if the series converges for \(\left \vert z-z_{0}\right \vert <R\) and diverges for \(\left \vert z-z_{0}\right \vert >R\).

\(\blacksquare \) Tests for convergence.

  1. Always start with preliminary test. If \(\lim _{n\rightarrow \infty }a_{n}\) does not go to zero, then no need to do anything else. The series \(\sum _{n=0}^{\infty }a_{n}\) does not converge. It diverges. But if \(\lim _{n\rightarrow \infty }a_{n}=0\), it still can diverge. So this is a necessary but not sufficient condition for convergence. An example is \(\sum \frac {1}{n}\). Here \(a_{n}\rightarrow 0\) in the limit, but we know that this series does not converge.
  2. For uniform convergence, there is a test called the Weierstrass M-test, which can be used to check if the series is uniformly convergent. But if this test fails, this does not necessarily mean the series is not uniformly convergent. It still can be uniformly convergent. (need an example).
  3. To test for absolute convergence, use the ratio test. If \(L=\lim _{n\rightarrow \infty }\left \vert \frac {a_{n+1}}{a_{n}}\right \vert <1\) then absolutely convergent. If \(L=1\) then inconclusive. Try the integral test. If \(L>1\) then not absolutely convergent. There is also the root test. \(L=\lim _{n\rightarrow \infty }\sqrt [n]{\left \vert a_{n}\right \vert }=\lim _{n\rightarrow \infty }\left \vert a_{n}\right \vert ^{\frac {1}{n}}\).
  4. The integral test, used when the ratio test is inconclusive. \(L=\lim _{N\rightarrow \infty }\int ^{N}f\left ( x\right ) dx\) where \(a_{n}\) becomes \(f\left ( x\right ) \). Remember to use this only if the terms of the series are monotonically decreasing and are all positive. For example, for \(\sum _{n=1}^{\infty }\ln \left ( 1+\frac {1}{n}\right ) \), use \(L=\lim _{N\rightarrow \infty }\int ^{N}\ln \left ( 1+\frac {1}{x}\right ) dx=\lim _{N\rightarrow \infty }\left [ \left ( 1+x\right ) \ln \left ( 1+x\right ) -x\ln \left ( x\right ) \right ] ^{N}\). Notice, we only use the upper limit in the integral. This grows like \(\ln \left ( N\right ) \), so the limit \(L\) is infinite and the series diverges. (Indeed, \(\ln \left ( 1+\frac {1}{n}\right ) =\ln \left ( n+1\right ) -\ln \left ( n\right ) \), so the partial sum telescopes to \(\ln \left ( N+1\right ) \rightarrow \infty \).)
  5. Radius of convergence is called \(R=\frac {1}{L}\) where \(L\) is from (3) above.
  6. Comparison test. Compare the series with one we happen to already know it converges. Let \(\sum b_{n}\) be a series which we know is convergent (for example \(\sum \frac {1}{n^{2}}\)), and we want to find if \(\sum a_{n}\) converges. If all terms of both series are positive and if \(a_{n}\leq b_{n}\) for each \(n\), then we conclude that \(\sum a_{n}\) converges also.

\(\blacksquare \) For Laurent series, let’s say the singularities are at \(z=0\) and \(z=1\). To expand about \(z=0\), get \(f\left ( z\right ) \) to look like \(\frac {1}{1-z}\) and use the geometric series for \(\left \vert z\right \vert <1\). To expand about \(z=1\), there are two choices: to the inside and to the outside. For the outside, i.e. \(\left \vert z\right \vert >1\), get \(f\left ( z\right ) \) into the form \(\frac {1}{1-\frac {1}{z}}\), since this is valid for \(\left \vert z\right \vert >1\).

\(\blacksquare \) We can only use the power series \(\sum a_{n}\left ( z-z_{0}\right ) ^{n}\) to expand \(f\left ( z\right ) \) around \(z_{0}\) if \(f\left ( z\right ) \) is analytic at \(z_{0}\). If \(f\left ( z\right ) \) is not analytic at \(z_{0}\) we need to use a Laurent series. Think of the Laurent series as an extension of the power series to handle singularities.

13.1 Some tricks to find sums

13.1.1 Example 1

Find \(\sum _{n=1}^{\infty }\frac {e^{inx}}{n}\)

solution: Let \(f\left ( x\right ) =\sum _{n=1}^{\infty }\frac {e^{inx}}{n}\); taking the derivative gives

\begin{align*} f^{\prime }\left ( x\right ) & =i\sum _{n=1}^{\infty }e^{inx}\\ & =i\sum _{n=1}^{\infty }\left ( e^{ix}\right ) ^{n}\\ & =i\left ( \sum _{n=0}^{\infty }\left ( e^{ix}\right ) ^{n}-1\right ) \\ & =\frac {i}{1-e^{ix}}-i \end{align*}

Hence

\begin{align*} f\left ( x\right ) & =\int \left ( \frac {i}{1-e^{ix}}-i\right ) dx\\ & =i\int \frac {dx}{1-e^{ix}}-ix+C\\ & =i\left ( x+i\ln \left ( 1-e^{ix}\right ) \right ) -ix+C\\ & =ix-\ln \left ( 1-e^{ix}\right ) -ix+C\\ & =-\ln \left ( 1-e^{ix}\right ) +C \end{align*}

We can set \(C=0\) (checking at \(x=\pi \): the left side is \(\sum _{n=1}^{\infty }\frac {\left ( -1\right ) ^{n}}{n}=-\ln 2\) and the right side is \(-\ln \left ( 1-e^{i\pi }\right ) =-\ln 2\), so no constant is needed) to obtain

\[ \sum _{n=1}^{\infty }\frac {e^{inx}}{n}=-\ln \left ( 1-e^{ix}\right ) \]
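
A quick numerical sanity check at an arbitrary point \(x=0.7\) (the series converges slowly, like \(\frac {1}{n}\), so many terms are used):

    import numpy as np

    x = 0.7
    n = np.arange(1, 200001)
    partial = np.sum(np.exp(1j*n*x)/n)
    print(partial)                       # partial sum of the series
    print(-np.log(1 - np.exp(1j*x)))     # agrees to several decimal places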

More tricks to add...

13.2 Methods to find Laurent series

Let us find the Laurent series for \(f\left ( z\right ) =\frac {5z-2}{z\left ( z-1\right ) }\). There are singularities of order \(1\) (simple poles) at \(z=0\) and \(z=1\).

13.2.1 Method one

Expansion around \(z=0\). Let

\begin{align*} g\left ( z\right ) & =zf\left ( z\right ) \\ & =\frac {5z-2}{\left ( z-1\right ) }\end{align*}

This makes \(g\left ( z\right ) \) analytic around \(z=0\), since \(g\left ( z\right ) \) does not have a pole at \(z=0\), and therefore it has a power series expansion around \(z=0\) given by

\begin{equation} g\left ( z\right ) =\sum _{n=0}^{\infty }a_{n}z^{n} \tag {1}\end{equation}

Where

\[ a_{n}=\frac {1}{n!}\left . g^{\left ( n\right ) }\left ( z\right ) \right \vert _{z=0}\]

But

\[ g\left ( 0\right ) =2 \]

And

\begin{align*} g^{\prime }\left ( z\right ) & =\frac {5\left ( z-1\right ) -\left ( 5z-2\right ) }{\left ( z-1\right ) ^{2}}=\frac {-3}{\left ( z-1\right ) ^{2}}\\ g^{\prime }\left ( 0\right ) & =-3 \end{align*}

And

\begin{align*} g^{\prime \prime }\left ( z\right ) & =\frac {-3\left ( -2\right ) }{\left ( z-1\right ) ^{3}}=\frac {6}{\left ( z-1\right ) ^{3}}\\ g^{\prime \prime }\left ( 0\right ) & =-6 \end{align*}

And

\begin{align*} g^{\prime \prime \prime }\left ( z\right ) & =\frac {6\left ( -3\right ) }{\left ( z-1\right ) ^{4}}=\frac {-18}{\left ( z-1\right ) ^{4}}\\ g^{\prime \prime \prime }\left ( 0\right ) & =-18 \end{align*}

And so on. Therefore, from (1)

\begin{align*} g\left ( z\right ) & =g\left ( 0\right ) +g^{\prime }\left ( 0\right ) z+\frac {1}{2!}g^{\prime \prime }\left ( 0\right ) z^{2}+\frac {1}{3!}g^{\prime \prime \prime }\left ( 0\right ) z^{3}+\cdots \\ & =2-3z-\frac {6}{2}z^{2}-\frac {18}{3!}z^{3}-\cdots \\ & =2-3z-3z^{2}-3z^{3}-\cdots \end{align*}

Therefore

\begin{align*} f\left ( z\right ) & =\frac {g\left ( z\right ) }{z}\\ & =\frac {2}{z}-3-3z-3z^{2}-\cdots \end{align*}

The residue is \(2\). The above expansion is valid around \(z=0\) up to but not including the next singularity, which is at \(z=1\). Now we find the expansion of \(f\left ( z\right ) \) around \(z=1\). Let

\begin{align*} g\left ( z\right ) & =\left ( z-1\right ) f\left ( z\right ) \\ & =\frac {5z-2}{z}\end{align*}

This makes \(g\left ( z\right ) \) analytic around \(z=1\), since \(g\left ( z\right ) \) does not have a pole at \(z=1\). Therefore it has a power series expansion about \(z=1\) given by

\begin{equation} g\left ( z\right ) =\sum _{n=0}^{\infty }a_{n}\left ( z-1\right ) ^{n} \tag {1}\end{equation}

Where

\[ a_{n}=\frac {1}{n!}\left . g^{\left ( n\right ) }\left ( z\right ) \right \vert _{z=1}\]

But

\[ g\left ( 1\right ) =3 \]

And

\begin{align*} g^{\prime }\left ( z\right ) & =\frac {5z-\left ( 5z-2\right ) }{z^{2}}=\frac {2}{z^{2}}\\ g^{\prime }\left ( 1\right ) & =2 \end{align*}

And

\begin{align*} g^{\prime \prime }\left ( z\right ) & =\frac {2\left ( -2\right ) }{z^{3}}=\frac {-4}{z^{3}}\\ g^{\prime \prime }\left ( 1\right ) & =-4 \end{align*}

And

\begin{align*} g^{\prime \prime \prime }\left ( z\right ) & =\frac {-4\left ( -3\right ) }{z^{4}}=\frac {12}{z^{4}}\\ g^{\prime \prime \prime }\left ( 1\right ) & =12 \end{align*}

And so on. Therefore, from (1)

\begin{align*} g\left ( z\right ) & =g\left ( 1\right ) +g^{\prime }\left ( 1\right ) \left ( z-1\right ) +\frac {1}{2!}g^{\prime \prime }\left ( 1\right ) \left ( z-1\right ) ^{2}+\frac {1}{3!}g^{\prime \prime \prime }\left ( 1\right ) \left ( z-1\right ) ^{3}+\cdots \\ & =3+2\left ( z-1\right ) -\frac {4}{2}\left ( z-1\right ) ^{2}+\frac {12}{3!}\left ( z-1\right ) ^{3}-\cdots \\ & =3+2\left ( z-1\right ) -2\left ( z-1\right ) ^{2}+2\left ( z-1\right ) ^{3}-\cdots \end{align*}

Therefore

\begin{align*} f\left ( z\right ) & =\frac {g\left ( z\right ) }{z-1}\\ & =\frac {3}{z-1}+2-2\left ( z-1\right ) +2\left ( z-1\right ) ^{2}-2\left ( z-1\right ) ^{3}+\cdots \end{align*}

The residue is \(3\). The above expansion is valid around \(z=1\) up to but not including the next singularity at \(z=0\), i.e. inside a circle of radius \(1\) centered at \(z=1\).

(figure omitted: regions of validity of the two expansions)

Putting the above two regions together, we see there is a series expansion of \(f\left ( z\right ) \) that is shared between the two regions, in the shaded region below.

(figure omitted: shaded overlap region shared by the two expansions)

Let us check that the two series give the same values in the shared region. Using the series expansion about \(z=0\) to find \(f\left ( z\right ) \) at the point \(z=\frac {1}{2}\) gives \(-2\) when using \(10\) terms. Using the series expansion around \(z=1\) to find \(f\left ( \frac {1}{2}\right ) \) using \(10\) terms also gives \(-2\). So both series are valid and produce the same result.
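
sympy's series can reproduce both expansions, since it handles poles (a minimal sketch):

    import sympy as sp

    z = sp.symbols('z')
    f = (5*z - 2)/(z*(z - 1))
    print(sp.series(f, z, 0, 3))   # 2/z - 3 - 3*z - 3*z**2 + O(z**3)
    print(sp.series(f, z, 1, 3))   # 3/(z - 1) + 2 - 2*(z - 1) + 2*(z - 1)**2 + ...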

13.2.2 Method Two

This method is simpler than the above, but it results in different regions. It is based on rewriting the expression so that the geometric series expansion can be used on it.

\[ f\left ( z\right ) =\frac {5z-2}{z\left ( z-1\right ) }\]

Since there is a pole at \(z=0\) and at \(z=1\), we first find the expansion for \(0<\left \vert z\right \vert <1\). To do this, we write the above as

\begin{align*} f\left ( z\right ) & =\frac {5z-2}{z}\left ( \frac {1}{z-1}\right ) \\ & =\frac {2-5z}{z}\left ( \frac {1}{1-z}\right ) \end{align*}

And now expand \(\frac {1}{1-z}\) using geometric series, which is valid for \(\left \vert z\right \vert <1\). This gives

\begin{align*} f\left ( z\right ) & =\frac {2-5z}{z}\left ( 1+z+z^{2}+z^{3}+\cdots \right ) \\ & =\frac {2}{z}\left ( 1+z+z^{2}+z^{3}+\cdots \right ) -5\left ( 1+z+z^{2}+z^{3}+\cdots \right ) \\ & =\left ( \frac {2}{z}+2+2z+2z^{2}+\cdots \right ) -\left ( 5+5z+5z^{2}+5z^{3}+\cdots \right ) \\ & =\frac {2}{z}-3-3z-3z^{2}-3z^{3}-\cdots \end{align*}

The above is valid for \(0<\left \vert z\right \vert <1\) which agrees with result of method 1.

Now, to find the expansion for \(\left \vert z\right \vert >1\), we need a term that looks like \(\frac {1}{1-\frac {1}{z}}\), since this can be expanded for \(\left \vert \frac {1}{z}\right \vert <1\), i.e. \(\left \vert z\right \vert >1\), which is what we want. Therefore, writing \(f\left ( z\right ) \) as

\[ f\left ( z\right ) =\frac {5z-2}{z\left ( z-1\right ) }=\frac {5z-2}{z^{2}\left ( 1-\frac {1}{z}\right ) }=\frac {5z-2}{z^{2}}\left ( \frac {1}{1-\frac {1}{z}}\right ) \]

But for \(\left \vert \frac {1}{z}\right \vert <1\) the above becomes

\begin{align*} f\left ( z\right ) & =\frac {5z-2}{z^{2}}\left ( 1+\frac {1}{z}+\frac {1}{z^{2}}+\frac {1}{z^{3}}+\cdots \right ) \\ & =\frac {5}{z}\left ( 1+\frac {1}{z}+\frac {1}{z^{2}}+\frac {1}{z^{3}}+\cdots \right ) -\frac {2}{z^{2}}\left ( 1+\frac {1}{z}+\frac {1}{z^{2}}+\frac {1}{z^{3}}+\cdots \right ) \\ & =\left ( \frac {5}{z}+\frac {5}{z^{2}}+\frac {5}{z^{3}}+\frac {5}{z^{4}}+\cdots \right ) -\left ( \frac {2}{z^{2}}+\frac {2}{z^{3}}+\frac {2}{z^{4}}+\frac {2}{z^{5}}+\cdots \right ) \\ & =\frac {5}{z}+\frac {3}{z^{2}}+\frac {3}{z^{3}}+\frac {3}{z^{4}}+\cdots \end{align*}

The coefficient of \(\frac {1}{z}\) is \(5\), which is the sum of the two residues \(2+3\), as expected for an expansion valid outside both poles. The above is valid for \(\left \vert z\right \vert >1\). The following diagram illustrates the result obtained from method 2.

(figure omitted: regions for the method two expansions)

13.2.3 Method Three

For expansion about \(z=0\), this uses the same method as above, giving the same series valid for \(\left \vert z\right \vert <1\). This method differs a little for points other than zero. The idea is to replace \(z\) by \(\xi =z-z_{0}\), where \(z_{0}\) is the point we want to expand about, in \(f\left ( z\right ) \) itself. So for \(z_{0}=1\) in this example, we let \(\xi =z-1\), hence \(z=\xi +1\). Then \(f\left ( z\right ) \) becomes

\begin{align*} f\left ( z\right ) & =\frac {5z-2}{z\left ( z-1\right ) }\\ & =\frac {5\left ( \xi +1\right ) -2}{\left ( \xi +1\right ) \left ( \xi \right ) }\\ & =\frac {5\left ( \xi +1\right ) -2}{\xi }\left ( \frac {1}{\xi +1}\right ) \\ & =\frac {5\xi +3}{\xi }\left ( \frac {1}{1+\xi }\right ) \end{align*}

Now we expand \(\frac {1}{1+\xi }\) for \(\left \vert \xi \right \vert <1\) and the above becomes

\begin{align*} f\left ( z\right ) & =\frac {5\xi +3}{\xi }\left ( 1-\xi +\xi ^{2}-\xi ^{3}+\xi ^{4}-\cdots \right ) \\ & =\left ( \frac {5\xi +3}{\xi }-\left ( 5\xi +3\right ) +\left ( 5\xi +3\right ) \xi -\left ( 5\xi +3\right ) \xi ^{2}+\cdots \right ) \\ & =\left ( 5+\frac {3}{\xi }-5\xi -3+5\xi ^{2}+3\xi -5\xi ^{3}-3\xi ^{2}+\cdots \right ) \\ & =\left ( 2+\frac {3}{\xi }-2\xi +2\xi ^{2}-2\xi ^{3}+\cdots \right ) \end{align*}

We now replace \(\xi =z-1\) and the above becomes

\[ f\left ( z\right ) =\left ( \frac {3}{\left ( z-1\right ) }+2-2\left ( z-1\right ) +2\left ( z-1\right ) ^{2}-2\left ( z-1\right ) ^{3}+2\left ( z-1\right ) ^{4}-\cdots \right ) \]

The above is valid for \(\left \vert \xi \right \vert <1\), i.e. \(\left \vert z-1\right \vert <1\), a disk of radius \(1\) centered at \(z=1\) (on the real line this is \(0<x<2\)). This gives the same series, for the same region, as method one. But this is a little faster, as it uses the binomial series shortcut to find the expansion instead of calculating derivatives as in method one.

13.2.4 Conclusion

Method one and method three give the same series for the same regions. Method three uses the binomial expansion as a shortcut, and requires one to convert \(f\left ( z\right ) \) to a form that allows using the binomial expansion. Method one does not use the binomial expansion but requires taking many derivatives to evaluate the terms of the power series. It is the more direct method.

Method two also uses the binomial expansion, but gives different regions than methods one and three.

If one is good at differentiation, method one seems the most direct. Otherwise, the choice is between method two or three, as they both use the binomial expansion. Method two seems a little more direct than method three. It also depends on what the problem is asking for. If the problem asks to expand around \(z_{0}\) vs. asking to find the expansion in \(\left \vert z\right \vert >1\), for example, then this decides which method to use.

14 Gamma function notes

\(\blacksquare \) Gamma function is defined by

\[ \Gamma \left ( x\right ) =\int _{0}^{\infty }t^{x-1}e^{-t}dt\qquad x>0 \]

The above is called the Euler representation. Or if we want it defined in complex domain, the above becomes

\[ \Gamma \left ( z\right ) =\int _{0}^{\infty }t^{z-1}e^{-t}dt\qquad \operatorname {Re}\left ( z\right ) >0 \]

Since the above is defined only for the right half plane, there is a way to extend this to the left half plane using what is called analytical continuation. More on this below. First, some relations involving \(\Gamma \left ( x\right ) \)

\begin{align*} \Gamma \left ( z\right ) & =\left ( z-1\right ) \Gamma \left ( z-1\right ) \qquad \operatorname {Re}\left ( z\right ) >1\\ \Gamma \left ( 1\right ) & =1\\ \Gamma \left ( 2\right ) & =1\\ \Gamma \left ( 3\right ) & =2\\ \Gamma \left ( 4\right ) & =3!\\ \Gamma \left ( n\right ) & =\left ( n-1\right ) !\\ \Gamma \left ( n+1\right ) & =n!\\ \Gamma \left ( \frac {1}{2}\right ) & =\sqrt {\pi }\\ \Gamma \left ( z+1\right ) & =z\Gamma \left ( z\right ) \qquad \text {recursive formula}\\ \Gamma \left ( \bar {z}\right ) & =\overline {\Gamma \left ( z\right ) }\\ \Gamma \left ( n+\frac {1}{2}\right ) & =\frac {1\cdot 3\cdot 5\cdots \left ( 2n-1\right ) }{2^{n}}\sqrt {\pi }\end{align*}

\(\blacksquare \) To extend \(\Gamma \left ( z\right ) \) to the left half plane, i.e. to negative values, let us define, using the above recursive formula,

\[ \bar {\Gamma }\left ( z\right ) =\frac {\Gamma \left ( z+1\right ) }{z}\qquad \operatorname {Re}\left ( z\right ) >-1 \]

For example

\[ \bar {\Gamma }\left ( -\frac {1}{2}\right ) =\frac {\Gamma \left ( \frac {1}{2}\right ) }{-\frac {1}{2}}=-2\Gamma \left ( \frac {1}{2}\right ) =-2\sqrt {\pi }\]

And for \(\operatorname {Re}\left ( z\right ) >-2\)

\[ \bar {\Gamma }\left ( -\frac {3}{2}\right ) =\frac {\bar {\Gamma }\left ( -\frac {3}{2}+1\right ) }{-\frac {3}{2}}=\left ( \frac {1}{-\frac {3}{2}}\right ) \bar {\Gamma }\left ( -\frac {1}{2}\right ) =\left ( \frac {1}{-\frac {3}{2}}\right ) \left ( \frac {1}{-\frac {1}{2}}\right ) \Gamma \left ( \frac {1}{2}\right ) =\left ( \frac {1}{-\frac {3}{2}}\right ) \left ( \frac {1}{-\frac {1}{2}}\right ) \sqrt {\pi }=\frac {4}{3}\sqrt {\pi }\]

And so on. Notice that \(\Gamma \left ( x\right ) \) is not defined at the negative integers \(x=-1,-2,\cdots \) nor at \(x=0\).

\(\blacksquare \) The above method of extending (or analytically continuing) the Gamma function to negative values is due to Euler. Another method to extend Gamma is due to Weierstrass. It starts by rewriting the definition as follows, where \(a>0\)

\begin{align} \Gamma \left ( z\right ) & =\int _{0}^{\infty }t^{z-1}e^{-t}dt\nonumber \\ & =\int _{0}^{a}t^{z-1}e^{-t}dt+\int _{a}^{\infty }t^{z-1}e^{-t}dt \tag {1}\end{align}

Expanding the integrand in the first integral using Taylor series gives

\begin{align*} \int _{0}^{a}t^{z-1}e^{-t}dt & =\int _{0}^{a}t^{z-1}\left ( 1+\left ( -t\right ) +\frac {\left ( -t\right ) ^{2}}{2!}+\frac {\left ( -t\right ) ^{3}}{3!}+\cdots \right ) dt\\ & =\int _{0}^{a}t^{z-1}\sum _{n=0}^{\infty }\frac {\left ( -1\right ) ^{n}t^{n}}{n!}dt\\ & =\int _{0}^{a}\sum _{n=0}^{\infty }\frac {\left ( -1\right ) ^{n}t^{n+z-1}}{n!}dt\\ & =\sum _{n=0}^{\infty }\int _{0}^{a}\frac {\left ( -1\right ) ^{n}t^{n+z-1}}{n!}dt\\ & =\sum _{n=0}^{\infty }\frac {\left ( -1\right ) ^{n}}{n!}\int _{0}^{a}t^{n+z-1}dt\\ & =\sum _{n=0}^{\infty }\frac {\left ( -1\right ) ^{n}}{n!}\left [ \frac {t^{n+z}}{n+z}\right ] _{0}^{a}\\ & =\sum _{n=0}^{\infty }\frac {\left ( -1\right ) ^{n}}{n!\left ( n+z\right ) }a^{n+z}\end{align*}

This takes care of the first integral in (1). Now, since the lower limit of the second integral in (1) is not zero, there is no problem integrating it directly. Remember that in the Euler definition the lower limit was zero, which is why we needed \(\operatorname {Re}\left ( z\right ) >0\) there. Now we can choose any value for \(a\). Weierstrass chose \(a=1\). Hence (1) becomes

\begin{align} \Gamma \left ( z\right ) & =\int _{0}^{a}t^{z-1}e^{-t}dt+\int _{a}^{\infty }t^{z-1}e^{-t}dt\nonumber \\ & =\sum _{n=0}^{\infty }\frac {\left ( -1\right ) ^{n}}{n!\left ( n+z\right ) }+\int _{1}^{\infty }t^{z-1}e^{-t}dt \tag {2}\end{align}

Notice the term \(a^{n+z}\) is now just \(1\) since \(a=1\). The second integral above can now be integrated directly. Let us now verify that the Euler continuation \(\bar {\Gamma }\left ( z\right ) \) for say \(z=-\frac {1}{2}\) gives the same result as the Weierstrass formula. From above, we found that \(\bar {\Gamma }\left ( -\frac {1}{2}\right ) =-2\sqrt {\pi }\). Equation (2) for \(z=-\frac {1}{2}\) becomes

\begin{equation} \bar {\Gamma }\left ( -\frac {1}{2}\right ) =\sum _{n=0}^{\infty }\frac {\left ( -1\right ) ^{n}}{n!\left ( n-\frac {1}{2}\right ) }+\int _{1}^{\infty }t^{-\frac {3}{2}}e^{-t}dt \tag {3}\end{equation}

Using the computer

\[ \sum _{n=0}^{\infty }\frac {\left ( -1\right ) ^{n}}{n!\left ( n-\frac {1}{2}\right ) }=-2\sqrt {\pi }+2\sqrt {\pi }\left ( 1-\operatorname {erf}\left ( 1\right ) \right ) -2\frac {1}{e}\]

And direct integration

\[ \int _{1}^{\infty }t^{-\frac {3}{2}}e^{-t}dt=-2\sqrt {\pi }+2\sqrt {\pi }\operatorname {erf}\left ( 1\right ) +\frac {2}{e}\]

Hence (3) becomes

\begin{align*} \bar {\Gamma }\left ( -\frac {1}{2}\right ) & =\left ( -2\sqrt {\pi }+2\sqrt {\pi }\left ( 1-\operatorname {erf}\left ( 1\right ) \right ) -2\frac {1}{e}\right ) +\left ( -2\sqrt {\pi }+2\sqrt {\pi }\operatorname {erf}\left ( 1\right ) +\frac {2}{e}\right ) \\ & =-2\sqrt {\pi }\end{align*}

Which is the same as using the Euler method. Let us check \(z=-\frac {3}{2}\). We found above that \(\bar {\Gamma }\left ( -\frac {3}{2}\right ) =\frac {4}{3}\sqrt {\pi }\) using the Euler method of analytic continuation. Now we check using the Weierstrass method. Equation (2) for \(z=-\frac {3}{2}\) becomes

\[ \bar {\Gamma }\left ( -\frac {3}{2}\right ) =\sum _{n=0}^{\infty }\frac {\left ( -1\right ) ^{n}}{n!\left ( n-\frac {3}{2}\right ) }+\int _{1}^{\infty }t^{-\frac {5}{2}}e^{-t}dt \]

Using the computer

\[ \sum _{n=0}^{\infty }\frac {\left ( -1\right ) ^{n}}{n!\left ( n-\frac {3}{2}\right ) }=\frac {4\sqrt {\pi }}{3}-\frac {4\sqrt {\pi }\left ( 1-\operatorname {erf}\left ( 1\right ) \right ) }{3}+\frac {2}{3e}\]

And

\[ \int _{1}^{\infty }t^{-\frac {5}{2}}e^{-t}dt=-\frac {4\sqrt {\pi }\operatorname {erf}\left ( 1\right ) }{3}+\frac {4\sqrt {\pi }}{3}-\frac {2}{3e}\]

Hence

\begin{align*} \bar {\Gamma }\left ( -\frac {3}{2}\right ) & =\left ( \frac {4\sqrt {\pi }}{3}-\frac {4\sqrt {\pi }\left ( 1-\operatorname {erf}\left ( 1\right ) \right ) }{3}+\frac {2}{3e}\right ) +\left ( -\frac {4\sqrt {\pi }\operatorname {erf}\left ( 1\right ) }{3}+\frac {4\sqrt {\pi }}{3}-\frac {2}{3e}\right ) \\ & =\frac {4}{3}\sqrt {\pi }\end{align*}

Which is the same as using the Euler method. Clearly the Euler method for analytic continuation of the Gamma function is simpler to compute.
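
These checks can be reproduced on the computer. Here is a minimal Python sketch, assuming the mpmath library is available, that evaluates the Weierstrass formula (2) directly:

import mpmath as mp

mp.mp.dps = 30   # working precision

def gamma_weierstrass(z):
    # equation (2): the series piece plus the integral from 1 to infinity
    s = mp.nsum(lambda n: (-1)**int(n) / (mp.factorial(int(n)) * (int(n) + z)),
                [0, mp.inf])
    i = mp.quad(lambda t: t**(z - 1) * mp.exp(-t), [1, mp.inf])
    return s + i

print(gamma_weierstrass(mp.mpf('-0.5')))   # -3.5449... = -2*sqrt(pi)
print(gamma_weierstrass(mp.mpf('-1.5')))   # 2.3633...  = (4/3)*sqrt(pi)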

\(\blacksquare \) Euler reflection formula

\begin{align*} \Gamma \left ( x\right ) \Gamma \left ( 1-x\right ) & =\int _{0}^{\infty }\frac {t^{x-1}}{1+t}dt\qquad 0<x<1\\ & =\frac {\pi }{\sin \left ( \pi x\right ) }\end{align*}

Where contour integration was used to derive the above. See Mary Boas text book, page 607, second edition, example 5 for full derivation.

\(\blacksquare \) \(\Gamma \left ( z\right ) \) has singularities at \(z=0,-1,-2,\cdots \) and \(\Gamma \left ( 1-z\right ) \) has singularities at \(z=1,2,3,\cdots \). In the above reflection formula, the zeros of \(\sin \left ( \pi x\right ) \) cancel the singularities of \(\Gamma \left ( x\right ) \) when it is written as

\[ \Gamma \left ( 1-x\right ) =\frac {\pi }{\Gamma \left ( x\right ) \sin \left ( \pi x\right ) }\]

\(\blacksquare \) \(\frac {1}{\Gamma \left ( z\right ) }\) is entire.

\(\blacksquare \) There are other representations for \(\Gamma \left ( z\right ) \). One due to Euler that uses products is

\begin{align*} \Gamma \left ( z\right ) & =\frac {1}{z}\Pi _{n=1}^{\infty }\frac {\left ( 1+\frac {1}{n}\right ) ^{z}}{1+\frac {z}{n}}\\ & =\lim _{n\rightarrow \infty }\frac {n!\left ( n+1\right ) ^{z}}{z\left ( z+1\right ) \cdots \left ( z+n\right ) }\end{align*}

And another due to Weierstrass is

\begin{align*} \Gamma \left ( z\right ) & =\frac {e^{-\gamma z}}{z}\Pi _{n=1}^{\infty }\frac {e^{\frac {z}{n}}}{1+\frac {z}{n}}\\ & =e^{-\gamma z}\lim _{n\rightarrow \infty }\frac {n!\exp \left ( z\left ( 1+\frac {1}{2}+\cdots +\frac {1}{n}\right ) \right ) }{z\left ( z+1\right ) \left ( z+2\right ) \cdots \left ( z+n\right ) }\end{align*}
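
As a quick sanity check of these product representations, here is a minimal Python sketch (the truncation level N and the numeric value of Euler's constant \(\gamma \) are my choices) comparing truncated products against math.gamma:

import math

EULER_GAMMA = 0.5772156649015329   # Euler's constant, gamma

def gamma_euler_product(z, N=100000):
    # Gamma(z) = (1/z) prod_{n>=1} (1 + 1/n)^z / (1 + z/n), truncated at N
    p = 1.0 / z
    for n in range(1, N + 1):
        p *= (1 + 1/n)**z / (1 + z/n)
    return p

def gamma_weierstrass_product(z, N=100000):
    # Gamma(z) = (e^(-gamma z)/z) prod_{n>=1} e^(z/n) / (1 + z/n), truncated at N
    p = math.exp(-EULER_GAMMA * z) / z
    for n in range(1, N + 1):
        p *= math.exp(z / n) / (1 + z / n)
    return p

z = 0.5
print(gamma_euler_product(z), gamma_weierstrass_product(z), math.gamma(z))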

15 Riemann zeta function notes

\(\blacksquare \) Given by \(\zeta \left ( s\right ) =\sum _{n=1}^{\infty }\frac {1}{n^{s}}\) for \(\operatorname {Re}\left ( s\right ) >1\). Euler studied it on the real line, and Riemann extended it to the whole complex plane, so the Riemann zeta function refers to the one with the extension to the whole complex plane. It has a simple pole at \(s=1\). It has trivial zeros at \(s=-2,-4,-6,\cdots \) and all its non-trivial zeros lie inside the critical strip \(0<\operatorname {Re}\left ( s\right ) <1\); the Riemann Hypothesis (still unproven) says they all lie on the critical line \(\operatorname {Re}\left ( s\right ) =\frac {1}{2}\). \(\zeta \left ( s\right ) \) is also defined by the integral formula

\[ \zeta \left ( s\right ) =\frac {1}{\Gamma \left ( s\right ) }\int _{0}^{\infty }\frac {1}{e^{t}-1}\frac {t^{s}}{t}dt\qquad \operatorname {Re}\left ( s\right ) >1 \]

\(\blacksquare \) The connection between \(\zeta \left ( s\right ) \) and the prime numbers is given by the Euler product formula

\begin{align*} \zeta \left ( s\right ) & =\Pi _{p}\frac {1}{1-p^{-s}}\\ & =\left ( \frac {1}{1-2^{-s}}\right ) \left ( \frac {1}{1-3^{-s}}\right ) \left ( \frac {1}{1-5^{-s}}\right ) \left ( \frac {1}{1-7^{-s}}\right ) \cdots \\ & =\left ( \frac {1}{1-\frac {1}{2^{s}}}\right ) \left ( \frac {1}{1-\frac {1}{3^{s}}}\right ) \left ( \frac {1}{1-\frac {1}{5^{s}}}\right ) \left ( \frac {1}{1-\frac {1}{7^{s}}}\right ) \cdots \\ & =\left ( \frac {2^{s}}{2^{s}-1}\right ) \left ( \frac {3^{s}}{3^{s}-1}\right ) \left ( \frac {5^{s}}{5^{s}-1}\right ) \left ( \frac {7^{s}}{7^{s}-1}\right ) \cdots \end{align*}
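
The Euler product can be checked numerically by truncating it over the primes up to some bound. A minimal Python sketch, assuming sympy is available for its primerange (the bound \(10^{4}\) is arbitrary):

import math
from sympy import primerange

def zeta_by_euler_product(s, pmax=10**4):
    # truncated Euler product over the primes p <= pmax
    prod = 1.0
    for p in primerange(2, pmax + 1):
        prod *= 1 / (1 - p**(-s))
    return prod

print(zeta_by_euler_product(2.0))   # approaches zeta(2)
print(math.pi**2 / 6)               # zeta(2) = pi^2/6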

\(\blacksquare \) \(\zeta \left ( s\right ) \) functional equation is

\[ \zeta \left ( s\right ) =2^{s}\pi ^{s-1}\sin \left ( \frac {\pi s}{2}\right ) \Gamma \left ( 1-s\right ) \zeta \left ( 1-s\right ) \]
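
A quick numerical check of the functional equation, as a minimal Python sketch assuming mpmath (the test point \(s=0.3\) is an arbitrary choice):

import mpmath as mp

mp.mp.dps = 25
s = mp.mpf('0.3')
lhs = mp.zeta(s)
rhs = 2**s * mp.pi**(s - 1) * mp.sin(mp.pi * s / 2) * mp.gamma(1 - s) * mp.zeta(1 - s)
print(lhs)
print(rhs)   # same value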

16 Complex functions notes

\(\blacksquare \) Complex identities

\begin{align*} \left \vert z\right \vert ^{2} & =z\bar {z}\\ \overline {\left ( \bar {z}\right ) } & =z\\ \overline {\left ( z_{1}+z_{2}\right ) } & =\bar {z}_{1}+\bar {z}_{2}\\ \left \vert \bar {z}\right \vert & =\left \vert z\right \vert \\ \left \vert z_{1}z_{2}\right \vert & =\left \vert z_{1}\right \vert \left \vert z_{2}\right \vert \\ \operatorname {Re}\left ( z\right ) & =\frac {z+\bar {z}}{2}\\ \operatorname {Im}\left ( z\right ) & =\frac {z-\bar {z}}{2i}\\ \arg \left ( z_{1}z_{2}\right ) & =\arg \left ( z_{1}\right ) +\arg \left ( z_{2}\right ) \end{align*}

\(\blacksquare \) A complex function \(f\left ( z\right ) \) is analytic in a region \(D\) if it is defined and differentiable at all points in \(D\). One way to check for analyticity is to use the Cauchy Riemann (CR) equations (a necessary but not sufficient condition on its own). If \(f\left ( z\right ) \) satisfies CR everywhere in that region and the partial derivatives are continuous there, then it is analytic. Let \(f\left ( z\right ) =u\left ( x,y\right ) +iv\left ( x,y\right ) \), then these two equations in Cartesian coordinates are

\begin{align*} \frac {\partial u}{\partial x} & =\frac {\partial v}{\partial y}\\ -\frac {\partial u}{\partial y} & =\frac {\partial v}{\partial x}\end{align*}

Sometimes it is easier to use the polar form of these. Let \(z=re^{i\theta }\) and \(f\left ( z\right ) =u\left ( r,\theta \right ) +iv\left ( r,\theta \right ) \), then the equations become

\begin{align*} \frac {\partial u}{\partial r} & =\frac {1}{r}\frac {\partial v}{\partial \theta }\\ -\frac {1}{r}\frac {\partial u}{\partial \theta } & =\frac {\partial v}{\partial r}\end{align*}

To remember them, think of the \(r\) as the \(x\) and \(\theta \) as the \(y\).

Let us apply these on \(\sqrt {z}\) to see how it works. Since \(z=re^{i\left ( \theta +2n\pi \right ) }\) then \(f\left ( z\right ) =\sqrt {r}e^{i\left ( \frac {\theta }{2}+n\pi \right ) }\). This is a multi-valued function: one value for \(n=0\) and another for \(n=1\). The first step is to make it single valued. Choosing \(n=0\) gives the principal value. Then \(f\left ( z\right ) =\sqrt {r}e^{i\frac {\theta }{2}}\). Now we find the branch points. \(z=0\) is a branch point. We can pick \(-\pi <\theta <\pi \) and pick the negative real axis as the branch cut (the other branch point being \(-\infty \)). This is one choice.

We could have picked \(0<\theta <2\pi \) and had the positive \(x\) axis as the branch cut, where now the second branch point is \(+\infty \) but in both cases, origin is still part of the branch cut. Let us stick with \(-\pi <\theta <\pi \).

Given all of this, now \(\sqrt {z}=\sqrt {r}e^{i\frac {\theta }{2}}=\sqrt {r}\left ( \cos \left ( \frac {\theta }{2}\right ) +i\sin \left ( \frac {\theta }{2}\right ) \right ) \), hence \(u=\sqrt {r}\cos \left ( \frac {\theta }{2}\right ) \) and \(v=\sqrt {r}\sin \left ( \frac {\theta }{2}\right ) \). Therefore \(\frac {\partial u}{\partial r}=\frac {1}{2}\frac {1}{\sqrt {r}}\cos \left ( \frac {\theta }{2}\right ) \), \(\frac {\partial v}{\partial \theta }=\frac {1}{2}\sqrt {r}\cos \left ( \frac {\theta }{2}\right ) \), \(\frac {\partial u}{\partial \theta }=-\frac {1}{2}\sqrt {r}\sin \left ( \frac {\theta }{2}\right ) \) and \(\frac {\partial v}{\partial r}=\frac {1}{2}\frac {1}{\sqrt {r}}\sin \left ( \frac {\theta }{2}\right ) \). Applying Cauchy-Riemann above gives

\begin{align*} \frac {1}{2}\frac {1}{\sqrt {r}}\cos \left ( \frac {\theta }{2}\right ) & =\frac {1}{r}\frac {1}{2}\sqrt {r}\cos \left ( \frac {\theta }{2}\right ) \\ \frac {1}{2}\frac {1}{\sqrt {r}}\cos \left ( \frac {\theta }{2}\right ) & =\frac {1}{2}\frac {1}{\sqrt {r}}\cos \left ( \frac {\theta }{2}\right ) \end{align*}

Satisfied. And for the second equation

\begin{align*} -\frac {1}{r}\left ( -\frac {1}{2}\sqrt {r}\sin \left ( \frac {\theta }{2}\right ) \right ) & =\frac {1}{2}\frac {1}{\sqrt {r}}\sin \left ( \frac {\theta }{2}\right ) \\ \frac {1}{2}\frac {1}{\sqrt {r}}\sin \left ( \frac {\theta }{2}\right ) & =\frac {1}{2}\frac {1}{\sqrt {r}}\sin \left ( \frac {\theta }{2}\right ) \end{align*}

So \(\sqrt {z}\) is analytic in the region \(-\pi <\theta <\pi \), not including the branch points and the branch cut.
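
This polar Cauchy-Riemann check can also be done on the computer. A minimal Python sketch, assuming the sympy library:

from sympy import symbols, sqrt, cos, sin, diff, simplify

r, theta = symbols('r theta', positive=True)
u = sqrt(r) * cos(theta / 2)   # real part of sqrt(z) on the principal branch
v = sqrt(r) * sin(theta / 2)   # imaginary part

# polar CR equations: u_r = (1/r) v_theta  and  -(1/r) u_theta = v_r
print(simplify(diff(u, r) - diff(v, theta) / r))    # 0
print(simplify(-diff(u, theta) / r - diff(v, r)))   # 0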

\(\blacksquare \) We can’t just say \(f\left ( z\right ) \) is analytic and stop. We have to say \(f\left ( z\right ) \) is analytic in a region or at a point. When we say \(f\left ( z\right ) \) is analytic at a point, we mean analytic in a small region around the point.

If \(f\left ( z\right ) \) is defined only at an isolated point \(z_{0}\) and not defined anywhere around it, then the function can not be analytic at \(z_{0}\) since it is not differentiable at \(z_{0}\). Also \(f\left ( z\right ) \) is analytic at a point \(z_{0}\) if the power series for \(f\left ( z\right ) \) expanded around \(z_{0}\) converges to \(f\left ( z\right ) \) in a neighborhood of \(z_{0}\). An analytic complex function is infinitely many times differentiable in the region, which means the limit \(\lim _{\Delta z\rightarrow 0}\frac {f\left ( z+\Delta z\right ) -f\left ( z\right ) }{\Delta z}\) exists and does not depend on the direction of approach.

\(\blacksquare \) Before applying the Cauchy Riemann equations, make sure the complex function is first made to be single valued.

\(\blacksquare \) Remember that the Cauchy Riemann equations are a necessary but not sufficient condition for a function to be analytic. The extra condition needed is that all the partial derivatives are continuous. Need to find an example where CR is satisfied but not the continuity of the partial derivatives. Most of the HW problems just need CR, but it is good to keep an eye on this other condition.

\(\blacksquare \) Cauchy-Goursat: If \(f\left ( z\right ) \) is analytic on and inside closed contour \(C\) then \({\displaystyle \oint \limits _{C}} f\left ( z\right ) dz=0\). But remember that if \({\displaystyle \oint \limits _{C}} f\left ( z\right ) dz=0\) then this does not necessarily imply \(f\left ( z\right ) \) is analytic on and inside \(C\). So this is an IF and not an IFF relation. For example \({\displaystyle \oint \limits _{C}} \frac {1}{z^{2}}dz=0\) around unit circle centered at origin, but clearly \(\frac {1}{z^{2}}\) is not analytic everywhere inside \(C\), since it has a singularity at \(z=0\).

Proof of Cauchy-Goursat: The proof uses two main ideas: the Cauchy-Riemann equations and Green’s theorem. Green’s theorem says

\begin{equation} \int _{C}Pdx+Qdy=\int _{D}\left ( \frac {\partial Q}{\partial x}-\frac {\partial P}{\partial y}\right ) dA \tag {1}\end{equation}

So Green’s theorem transforms integration over the boundary \(C\) of region \(D\) into integration over the area inside the boundary \(C\). Let \(f\left ( z\right ) =u+iv\). And since \(z=x+iy\) then \(dz=dx+idy\). Therefore

\begin{align}{\displaystyle \oint \limits _{C}} f\left ( z\right ) dz & ={\displaystyle \oint \limits _{C}} \left ( u+iv\right ) \left ( dx+idy\right ) \nonumber \\ & ={\displaystyle \oint \limits _{C}} udx+uidy+ivdx-vdy\nonumber \\ & ={\displaystyle \oint \limits _{C}} \left ( udx-vdy\right ) +i{\displaystyle \oint \limits _{C}} vdx+udy \tag {2}\end{align}

We now apply (1) to each of the two integrals in (2). Hence the first integral in (2) becomes

\[{\displaystyle \oint \limits _{C}} \left ( udx-vdy\right ) =\int _{D}\left ( -\frac {\partial v}{\partial x}-\frac {\partial u}{\partial y}\right ) dA \]

But from CR, we know that \(-\frac {\partial u}{\partial y}=\frac {\partial v}{\partial x}\), hence the above is zero. And the second integral in (2) becomes

\[{\displaystyle \oint \limits _{C}} vdx+udy=\int _{D}\left ( \frac {\partial u}{\partial x}-\frac {\partial v}{\partial y}\right ) dA \]

But from CR, we know that \(\frac {\partial u}{\partial x}=\frac {\partial v}{\partial y}\), hence the above is zero. Therefore the whole integral in (2) is zero. Therefore \({\displaystyle \oint \limits _{C}} f\left ( z\right ) dz=0\). QED.

\(\blacksquare \) Cauchy residue: If \(f\left ( z\right ) \) is analytic on and inside closed contour \(C\) except at some isolated points \(z_{1},z_{2},\cdots ,z_{N}\) then \({\displaystyle \oint \limits _{C}} f\left ( z\right ) dz=2\pi i\sum _{j=1}^{N}\operatorname {Res}\left ( f\left ( z\right ) \right ) _{z=z_{j}}\). The term \(\operatorname {Res}\left ( f\left ( z\right ) \right ) _{z=z_{j}}\) is the residue of \(f\left ( z\right ) \) at point \(z_{j}\). Use Laurent expansion of \(f\left ( z\right ) \) to find residues. See above on methods how to find Laurent series.

\(\blacksquare \) Maximum modulus principle: If \(f\left ( z\right ) \) is analytic in some region \(D\) and is not constant inside \(D\), then the maximum value of \(\left \vert f\left ( z\right ) \right \vert \) must be on the boundary. Its minimum is also on the boundary, as long as \(f\left ( z\right ) \neq 0\) anywhere inside \(D\). On the other hand, if \(\left \vert f\left ( z\right ) \right \vert \) happened to have a maximum at some point \(z_{0}\) somewhere inside \(D\), then this implies that \(f\left ( z\right ) \) is constant everywhere with the value \(f\left ( z_{0}\right ) \). What all this really means is that if \(f\left ( z\right ) \) is analytic and not constant in \(D\), then the maximum of its modulus is on the boundary and not inside.

There is a complicated proof of this. See my notes for Physics 501. Hopefully this will not come up in the exam since I did not study the proof.

\(\blacksquare \) These definitions are from the book of Joseph Bak

  1. \(f\) is analytic at \(z\) if \(f\) is differentiable in a neighborhood of \(z\). Similarly \(f\) is analytic on set \(S\) if \(f\) is differentiable at all points in some open set containing \(S\).
  2. \(f\left ( z\right ) \) is analytic on an open set \(U\) if \(f\left ( z\right ) \) is differentiable at each point of \(U\) and \(f^{\prime }\left ( z\right ) \) is continuous on \(U\).

\(\blacksquare \) Some important formulas.

  1. If \(f\left ( z\right ) \) is analytic on and inside \(C\) then

    \[{\displaystyle \oint \limits _{C}} f\left ( z\right ) dz=0 \]
  2. If \(f\left ( z\right ) \) is analytic on and inside \(C\) and \(z_{0}\) is a point inside \(C\), then

    \begin{align*} 2\pi if\left ( z_{0}\right ) & ={\displaystyle \oint \limits _{C}} \frac {f\left ( z\right ) }{z-z_{0}}dz\\ 2\pi if^{\prime }\left ( z_{0}\right ) & ={\displaystyle \oint \limits _{C}} \frac {f\left ( z\right ) }{\left ( z-z_{0}\right ) ^{2}}dz\\ \frac {2\pi i}{2!}f^{\prime \prime }\left ( z_{0}\right ) & ={\displaystyle \oint \limits _{C}} \frac {f\left ( z\right ) }{\left ( z-z_{0}\right ) ^{3}}dz\\ & \vdots \\ \frac {2\pi i}{n!}f^{\left ( n\right ) }\left ( z_{0}\right ) & ={\displaystyle \oint \limits _{C}} \frac {f\left ( z\right ) }{\left ( z-z_{0}\right ) ^{n+1}}dz \end{align*}
  3. From the above, we find, where here \(f\left ( z\right ) =1\)

    \[{\displaystyle \oint \limits _{C}} \frac {1}{\left ( z-z_{0}\right ) ^{n+1}}dz=\left \{ \begin {array} [c]{ccc}2\pi i & & n=0\\ 0 & & n=1,2,\cdots \end {array} \right . \]
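These formulas are easy to verify numerically by parameterizing the contour as \(z=z_{0}+re^{it}\). A minimal Python sketch using numpy (the entire function \(e^{z}\) and the point \(z_{0}\) are arbitrary choices):

import numpy as np

def circle_integral(g, center=0.0, radius=1.0, n=4000):
    # oint g(z) dz over the circle z = center + radius*e^(it),
    # using the trapezoid rule on the periodic parameterization
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    z = center + radius * np.exp(1j * t)
    dz = 1j * radius * np.exp(1j * t)
    return np.sum(g(z) * dz) * (2 * np.pi / n)

z0 = 0.3 + 0.1j   # a point inside the unit circle

# oint e^z/(z-z0) dz = 2 pi i e^(z0)
print(circle_integral(lambda z: np.exp(z) / (z - z0)))
print(2j * np.pi * np.exp(z0))

# oint e^z/(z-z0)^2 dz = 2 pi i (d/dz e^z) at z0, which is again e^(z0)
print(circle_integral(lambda z: np.exp(z) / (z - z0)**2))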

16.1 Find \(b_{n}\) coefficients in the Laurent series expansion

On finding the coefficients of the principal part of the Laurent series expansion around \(z_{0}\). Let

\begin{align} f\left ( z\right ) & =\sum _{n=0}^{\infty }c_{n}\left ( z-z_{0}\right ) ^{n}+\sum _{n=1}^{N}\frac {b_{n}}{\left ( z-z_{0}\right ) ^{n}}\tag {1}\\ & =\sum _{n=0}^{\infty }c_{n}\left ( z-z_{0}\right ) ^{n}+\frac {b_{1}}{\left ( z-z_{0}\right ) }+\frac {b_{2}}{\left ( z-z_{0}\right ) ^{2}}+\frac {b_{3}}{\left ( z-z_{0}\right ) ^{3}}+\cdots +\frac {b_{N}}{\left ( z-z_{0}\right ) ^{N}}\nonumber \end{align}

The goal is to determine all the coefficients \(b_{1},b_{2},\cdots ,b_{N}\) in the Laurent series expansion. This assumes the order of the pole is finite. To find \(b_{1}\), we multiply both sides of the above by \(\left ( z-z_{0}\right ) ^{N}\) which gives

\begin{equation} \left ( z-z_{0}\right ) ^{N}f\left ( z\right ) =\sum _{n=0}^{\infty }c_{n}\left ( z-z_{0}\right ) ^{n+N}+b_{1}\left ( z-z_{0}\right ) ^{N-1}+b_{2}\left ( z-z_{0}\right ) ^{N-2}+b_{3}\left ( z-z_{0}\right ) ^{N-3}+\cdots +b_{N} \tag {2}\end{equation}

Differentiating both sides \(N-1\) times w.r.t. \(z\) gives

\[ \frac {d^{N-1}}{dz^{\left ( N-1\right ) }}\left ( \left ( z-z_{0}\right ) ^{N}f\left ( z\right ) \right ) =\sum _{n=0}^{\infty }\frac {d^{N-1}}{dz^{\left ( N-1\right ) }}\left ( c_{n}\left ( z-z_{0}\right ) ^{n+N}\right ) +b_{1}\left ( N-1\right ) ! \]

Evaluating the above at \(z=z_{0}\) gives

\[ b_{1}=\frac {\lim _{z\rightarrow z_{0}}\frac {d^{N-1}}{dz^{\left ( N-1\right ) }}\left ( \left ( z-z_{0}\right ) ^{N}f\left ( z\right ) \right ) }{\left ( N-1\right ) !}\]

To find \(b_{2}\) we differentiate both sides of (2) \(N-2\) times which gives

\[ \frac {d^{N-2}}{dz^{\left ( N-2\right ) }}\left ( \left ( z-z_{0}\right ) ^{N}f\left ( z\right ) \right ) =\sum _{n=0}^{\infty }\frac {d^{N-2}}{dz^{\left ( N-2\right ) }}\left ( c_{n}\left ( z-z_{0}\right ) ^{n+N}\right ) +b_{1}\left ( N-1\right ) !\left ( z-z_{0}\right ) +b_{2}\left ( N-2\right ) ! \]

Hence

\[ b_{2}=\frac {\lim _{z\rightarrow z_{0}}\frac {d^{N-2}}{dz^{\left ( N-2\right ) }}\left ( \left ( z-z_{0}\right ) ^{N}f\left ( z\right ) \right ) }{\left ( N-2\right ) !}\]

We keep doing the above to find \(b_{3},b_{4},\cdots ,b_{N}\). Therefore the general formula is

\begin{equation} b_{n}=\frac {\lim _{z\rightarrow z_{0}}\frac {d^{N-n}}{dz^{\left ( N-n\right ) }}\left ( \left ( z-z_{0}\right ) ^{N}f\left ( z\right ) \right ) }{\left ( N-n\right ) !} \tag {3A}\end{equation}

And for the special case of the last term \(b_{N}\) the above simplifies to

\begin{equation} b_{N}=\lim _{z\rightarrow z_{0}}\left ( z-z_{0}\right ) ^{N}f\left ( z\right )  \tag {3B}\end{equation}

Where in (3A) \(n\) is the index of the coefficient \(b_{n}\) to be evaluated, \(N\) is the pole order and \(z_{0}\) is the expansion point. The special value \(b_{1}\) is called the residue of \(f\left ( z\right ) \) at \(z_{0}\).
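
Here is a minimal Python sketch of formula (3A) using sympy, applied to \(f\left ( z\right ) =\frac {e^{z}}{z^{3}}\) (my example), which has a pole of order \(N=3\) at \(z_{0}=0\) with \(b_{3}=1,b_{2}=1,b_{1}=\frac {1}{2}\):

from sympy import symbols, exp, diff, limit, factorial

z = symbols('z')
f = exp(z) / z**3        # pole of order N = 3 at z0 = 0
z0, N = 0, 3

def b(n):
    # equation (3A): differentiate (z-z0)^N f(z) exactly N-n times
    g = diff((z - z0)**N * f, z, N - n)
    return limit(g, z, z0) / factorial(N - n)

print([b(n) for n in (1, 2, 3)])   # [1/2, 1, 1]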

17 Hints to solve some problems

17.1 Complex analysis and power and Laurent series

  1. Laurent series of \(f\left ( z\right ) \) around point \(z_{0}\) is \(\sum _{n=-\infty }^{\infty }a_{n}\left ( z-z_{0}\right ) ^{n}\) and \(a_{n}=\frac {1}{2\pi i}{\displaystyle \oint } \frac {f\left ( z\right ) }{\left ( z-z_{0}\right ) ^{n+1}}dz\). Integration is around a path enclosing \(z_{0}\) in the counterclockwise direction.
  2. Power series of \(f\left ( z\right ) \) around \(z_{0}\) is \(\sum _{0}^{\infty }a_{n}\left ( z-z_{0}\right ) ^{n}\) where \(a_{n}=\frac {1}{n!}\left . f^{\left ( n\right ) }\left ( z\right ) \right \vert _{z=z_{0}}\)
  3. Problem asks to use the Cauchy integral formula \({\displaystyle \oint \limits _{C}} \frac {f\left ( z\right ) }{z-z_{0}}dz=2\pi if\left ( z_{0}\right ) \) to evaluate another integral \({\displaystyle \oint \limits _{C}} g\left ( z\right ) dz\), both over the same \(C\). The idea is to rewrite \(g\left ( z\right ) \) as \(\frac {f\left ( z\right ) }{z-z_{0}}\) by absorbing the poles of \(g\left ( z\right ) \) that are outside \(C\) into \(f\left ( z\right ) \), leaving one pole inside \(C\). Then we can write

    \begin{align*}{\displaystyle \oint \limits _{C}} g\left ( z\right ) dz & ={\displaystyle \oint \limits _{C}} \frac {f\left ( z\right ) }{z-z_{0}}dz\\ & =2\pi if\left ( z_{0}\right ) \end{align*}

    For example, to solve \({\displaystyle \oint \limits _{C}} \frac {1}{\left ( z+1\right ) \left ( z+2\right ) }dz\) around \(C\) the unit circle, rewrite this as \({\displaystyle \oint \limits _{C}} \frac {\frac {1}{z+2}}{\left ( z-\left ( -1\right ) \right ) }dz\) where now \(f\left ( z\right ) =\frac {1}{z+2}\), and now we can use the Cauchy integral formula. So all we have to do is evaluate \(\frac {1}{z+2}\) at \(z=-1\), which gives \({\displaystyle \oint \limits _{C}} \frac {1}{\left ( z+1\right ) \left ( z+2\right ) }dz=2\pi i\). This works if \(g\left ( z\right ) \) can be factored into \(\frac {f\left ( z\right ) }{z-z_{0}}\) where \(f\left ( z\right ) \) is analytic on and inside \(C\). This would not work if \(g\left ( z\right ) \) has more than one pole inside \(C\).

  4. Problem asks to find \({\displaystyle \oint \limits _{C}} f\left ( z\right ) dz\) where \(C\) is some closed contour. For this, if \(f\left ( z\right ) \) has a number of isolated singularities inside \(C\), then just use

    \[{\displaystyle \oint \limits _{C}} f\left ( z\right ) dz=2\pi i\sum \text {residues of}\ f\left ( z\right ) \ \text {at each singularity inside }C \]
  5. Problem asks to find \(\int _{C}f\left ( z\right ) dz\) where \(C\) is some open path, i.e. not closed (if it is closed, try Cauchy), such as a straight line or a half circle arc. For these problems, use parameterization. This converts the integral to a line integration. If \(C\) is a straight line, use the standard \(t\) parameterization, which is found by using

    \begin{align*} x\left ( t\right ) & =\left ( 1-t\right ) x_{0}+tx_{1}\\ y\left ( t\right ) & =\left ( 1-t\right ) y_{0}+ty_{1}\end{align*}

    where \(\left ( x_{0},y_{0}\right ) \) is the line’s initial point and \(\left ( x_{1},y_{1}\right ) \) is the line’s end point. This works for straight lines. Now use the above and rewrite \(z=x+iy\) as \(z\left ( t\right ) =x\left ( t\right ) +iy\left ( t\right ) \), then plug this \(z\left ( t\right ) \) into \(f\left ( z\right ) \) to obtain \(f\left ( t\right ) \); the integral becomes

    \[ \int _{C}f\left ( z\right ) dz=\int _{t=0}^{t=1}f\left ( t\right ) z^{\prime }\left ( t\right ) dt \]
    And now evaluate this integral using normal integration rules. If the path is a circular arc, then no need to use \(t\), just use \(\theta \). Rewrite \(z=re^{i\theta }\) and use \(\theta \) instead of \(t\) and follow the same steps as above.
  6. Problem gives \(u\left ( x,y\right ) \) and asks to find \(v\left ( x,y\right ) \) in order for \(f\left ( x,y\right ) =u\left ( x,y\right ) +iv\left ( x,y\right ) \) to be analytic in some region. To solve these, use Cauchy Riemann equations. Need to use both equations. One equation will introduce a constant of integration (a function) and the second equation is used to solve for it. This gives \(v\left ( x,y\right ) \). See problem 2, HW 2, Physics 501 as example.
  7. Problem asks to evaluate \({\displaystyle \oint \limits _{C}} \frac {f\left ( z\right ) }{\left ( z-z_{0}\right ) ^{n}}dz\) where \(n\) is some number. This is the order of the pole, and \(f\left ( z\right ) \) is analytic on and inside \(C\). Then use the Cauchy integral formula for higher pole order. \({\displaystyle \oint \limits _{C}} \frac {f\left ( z\right ) }{\left ( z-z_{0}\right ) ^{n}}dz=2\pi i\ \operatorname {Residue}\left ( z_{0}\right ) \). The only difference here is that this is pole of order \(n\). So to find residue, use

    \begin{align*} \operatorname {Residue}\left ( z_{0}\right ) & =\lim _{z\rightarrow z_{0}}\frac {d^{n-1}}{dz^{n-1}}\frac {\left ( z-z_{0}\right ) ^{n}}{\left ( n-1\right ) !}\frac {f\left ( z\right ) }{\left ( z-z_{0}\right ) ^{n}}\\ & =\lim _{z\rightarrow z_{0}}\frac {d^{n-1}}{dz^{n-1}}\frac {f\left ( z\right ) }{\left ( n-1\right ) !}\end{align*}
  8. Problem gives \(f\left ( z\right ) \) and asks to find branch points and branch cuts. One way is to first find where \(f\left ( z\right ) =0\) and for each zero, make a small circle around it, going from \(\theta =0\) to \(\theta =2\pi \). If the function at \(\theta =0\) has a different value from the one at \(\theta =2\pi \), then this is a branch point. Do this for the other zeros. Then connect the branch points. This will give the branch cut. It is not always clear how to connect the branch points though; one might need to try different ways. For example \(f\left ( z\right ) =\sqrt {z^{2}+1}\) has two zeros at \(z=\pm i\). Both turn out to be branch points. The branch cut is the line between \(-i\) and \(+i\) on the imaginary axis.
  9. Problem gives a series \(\sum _{n=0}^{\infty }a_{n}z^{n}\) and asks to find radius of convergence \(R\). Two ways, find \(L=\lim _{n\rightarrow \infty }\frac {\left \vert a_{n+1}\right \vert }{\left \vert a_{n}\right \vert }\) and then \(R=\frac {1}{L}\). Another way is to find \(L\) using \(L=\lim _{n\rightarrow \infty }\left \vert a_{n}\right \vert ^{\frac {1}{n}}\).
  10. Problem gives integral \(\int _{0}^{2\pi }f\left ( \theta \right ) d\theta \) and asks to evaluate using residues. We start by converting everything to \(z\) using \(z=e^{i\theta }\) on \(\left \vert z\right \vert =1\). No need to use \(z=re^{i\theta }\). The idea is to convert it to \({\displaystyle \oint } f\left ( z\right ) dz\), for which we can then use \({\displaystyle \oint } f\left ( z\right ) dz=2\pi i\sum \) residues inside. Replace \(f\left ( \theta \right ) \) by \(f\left ( z\right ) \); this could require using Euler relations such as \(\cos n\theta =\frac {z^{n}+z^{-n}}{2}\) and similar for \(\sin n\theta \). Remember also that \(d\theta =\frac {dz}{iz}\). Now all that is needed is to find the residues of any poles inside the unit circle. Do not worry about poles outside the unit circle. To find residues use the shortcut tricks. No need to find the Laurent series.
    For example, to evaluate \(\int _{0}^{2\pi }\frac {1}{5+4\cos \theta }d\theta \): using \(z=e^{i\theta }\) gives \(d\theta =\frac {dz}{iz}\), and \(\frac {d\theta }{5+4\cos \theta }\) becomes \(\frac {dz}{i\left ( 2z+1\right ) \left ( z+2\right ) }\). There is only one pole inside the unit circle, at \(z=-\frac {1}{2}\), and the integral evaluates to \(\frac {2\pi }{3}\) (checked in the sketch after this list).
  11. Problem gives integral \(\int _{0}^{\infty }f\left ( \theta \right ) d\theta \) and asks to evaluate using residues. The contour here goes from \(-R\) to \(+R\) and then a semi circle in the upper half plane. This works for even \(f\left ( \theta \right ) \) since we can write \(\int _{0}^{\infty }f\left ( \theta \right ) d\theta =\frac {1}{2}\int _{-\infty }^{\infty }f\left ( \theta \right ) d\theta \). If there is a pole inside the upper half plane, then the integral over the closed contour is \(2\pi i\) times the sum of the residues in the upper half plane. If there is a pole on the real line, then make a small semi circle around the pole, say at \(z=a\), and then the integral over the small semi circle is \(-\pi i\) times the residue at \(a\). The minus sign here is due to moving clockwise on the small circle.
  12. Problem gives a series \(\sum _{n=0}^{\infty }a_{n}z^{n}\) and asks if it is uniformly convergent. For general series, use the M-test. But for this kind of series, just find radius of convergence as above using ratio test, and if it is absolutely convergent, then say it converges uniformly for \(\left \vert z\right \vert \leq r<R\). It is important to write it this way, and not just \(\left \vert z\right \vert <R\).
  13. Problem gives \(\sum _{n=0}^{\infty }a_{n}\) and asks to find the sum. Sometimes this trick works for some series. For example for the alternating series \(\sum _{n=1}^{\infty }\left ( -1\right ) ^{n+1}\frac {1}{n}=1-\frac {1}{2}+\frac {1}{3}-\frac {1}{4}+\cdots \), write it as \(x-\frac {x^{2}}{2}+\frac {x^{3}}{3}-\frac {x^{4}}{4}+\cdots \) which is the same when \(x=1\), and now notice that this is the Taylor series for \(\ln \left ( 1+x\right ) \), which means \(1-\frac {1}{2}+\frac {1}{3}-\frac {1}{4}+\cdots =\ln \left ( 2\right ) \).
  14. Problem gives \(f\left ( z\right ) \) and asks to find the residue at some \(z=z_{0}\). Of course we can always expand \(f\left ( z\right ) \) around \(z=z_{0}\) using Laurent series and find the coefficient of \(\frac {1}{z-z_{0}}\). But this is too much work. Instead, if \(f\left ( z\right ) \) has a simple pole (order one) at \(z_{0}\), then we use

    \[ R\left ( z_{0}\right ) =\lim _{z\rightarrow z_{0}}\left ( z-z_{0}\right ) f\left ( z\right ) \]
    In general, if \(f\left ( z\right ) =\frac {g\left ( z\right ) }{h\left ( z\right ) }\) then there are two cases: either the factor \(\left ( z-z_{0}\right ) \) cancels directly against a factor of \(h\left ( z\right ) \), or it does not, in which case L’Hôpital’s rule is needed. For example, if \(f\left ( z\right ) =\frac {z}{\left ( 2z+1\right ) \left ( 5-z\right ) }\) and we want the residue at \(z_{0}=5\), then since it is a simple pole and the factor cancels, using
    \begin{align*} R\left ( 5\right ) & =\lim _{z\rightarrow 5}\left ( z-5\right ) \frac {z}{\left ( 2z+1\right ) \left ( 5-z\right ) }\\ & =\lim _{z\rightarrow 5}\frac {-z}{\left ( 2z+1\right ) }\\ & =-\frac {5}{11}\end{align*}

    But if the factor does not cancel directly, then we apply L’Hôpital, like this. If \(f\left ( z\right ) =\frac {\sin z}{1-z^{4}}\) and we want to find the residue at \(z=i\), then do as above, but with an extra step, like this

    \begin{align*} R\left ( i\right ) & =\lim _{z\rightarrow i}\left ( z-i\right ) \frac {\sin z}{1-z^{4}}\\ & =\left ( \lim _{z\rightarrow i}\sin z\right ) \left ( \lim _{z\rightarrow i}\left ( z-i\right ) \frac {1}{1-z^{4}}\right ) \\ & =\sin i\left ( \lim _{z\rightarrow i}\frac {\left ( z-i\right ) }{1-z^{4}}\right ) \qquad \text {Now apply L'Hôpital}\\ & =\sin i\left ( \lim _{z\rightarrow i}\frac {1}{-4z^{3}}\right ) \\ & =\frac {\sin i}{-4i^{3}}\\ & =\frac {1}{4}\sinh \left ( 1\right ) \end{align*}

    Now if the pole is not a simple pole of order one, say of order \(m\), then we first multiply \(f\left ( z\right ) \) by \(\left ( z-z_{0}\right ) ^{m}\), then differentiate the result \(m-1\) times, then divide by \(\left ( m-1\right ) !\), and then evaluate the result at \(z=z_{0}\). In other words,

    \[ R\left ( z_{0}\right ) =\lim _{z\rightarrow z_{0}}\frac {1}{\left ( m-1\right ) !}\frac {d^{m-1}}{dz^{m-1}}\left ( \left ( z-z_{0}\right ) ^{m}f\left ( z\right ) \right ) \]
    For example, if \(f\left ( z\right ) =\frac {z\sin z}{\left ( z-\pi \right ) ^{3}}\) and we want residue at \(z=\pi \). Since order is \(m=3\), then
    \begin{align*} R\left ( z_{0}\right ) & =\lim _{z\rightarrow \pi }\frac {1}{2!}\frac {d^{2}}{dz^{2}}\left ( \left ( z-\pi \right ) ^{3}\frac {z\sin z}{\left ( z-\pi \right ) ^{3}}\right ) \\ & =\lim _{z\rightarrow \pi }\frac {1}{2}\frac {d^{2}}{dz^{2}}\left ( z\sin z\right ) \\ & =\lim _{z\rightarrow \pi }\frac {1}{2}\left ( -z\sin z+2\cos z\right ) \\ & =-1 \end{align*}

    The above methods will work on most of the HW problems I’ve seen so far, but if all else fails, try Laurent series; that always works. A quick computer check of these residue examples is sketched right after this list.
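
A minimal Python sketch using sympy that checks the residue examples from item 14 above, plus the trig integral from item 10:

from sympy import symbols, residue, integrate, sin, cos, pi, I

z, theta = symbols('z theta')

# the three residue examples worked out in item 14
print(residue(z / ((2*z + 1) * (5 - z)), z, 5))    # -5/11
print(residue(sin(z) / (1 - z**4), z, I))          # sinh(1)/4
print(residue(z * sin(z) / (z - pi)**3, z, pi))    # -1

# item 10: direct evaluation of the theta integral gives 2*pi/3,
# matching 2*pi*i times the residue at z = -1/2
print(integrate(1 / (5 + 4*cos(theta)), (theta, 0, 2*pi)))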

17.2 Errors and relative errors

  1. A problem gives an expression in \(x,y\) such as \(f\left ( x,y\right ) \) and asks how much a relative error in both \(x\) and \(y\) will affect \(f\left ( x,y\right ) \) in the worst case. For these problems, find \(df\) and then find \(\frac {df}{f}\). For example, if \(f\left ( x,y\right ) =\sqrt {\frac {x}{y^{3}}}\) and the relative error in both \(x\) and \(y\) is \(2\%\), what is the worst relative error in \(f\left ( x,y\right ) \)? Since

    \begin{align*} df & =\frac {\partial f}{\partial x}dx+\frac {\partial f}{\partial y}dy\\ & =\frac {1}{2}x^{-\frac {1}{2}}y^{-\frac {3}{2}}dx-\frac {3}{2}x^{\frac {1}{2}}y^{-\frac {5}{2}}dy \end{align*}

    Then

    \[ \frac {df}{f}=\frac {1}{2}\frac {dx}{x}-\frac {3}{2}\frac {dy}{y}\]
    But \(\frac {dx}{x}\) and \(\frac {dy}{y}\) are the relative errors in \(x\) and \(y\). So if we plug in \(2\) for \(\frac {dx}{x}\) and \(-2\) for \(\frac {dy}{y}\) we get \(4\%\) as the worst relative error in \(f\left ( x,y\right ) \). Notice we used \(-2\%\) relative error for \(y\) and \(+2\%\) relative error for \(x\) since we wanted the worst (largest) relative error. If we wanted the least relative error in \(f\left ( x,y\right ) \), then we would use \(+2\%\) for \(y\) also, which gives \(1-3=-2\), or \(2\%\) relative error in \(f\left ( x,y\right ) \). A quick numeric check is sketched below.
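
A quick numeric check of this error propagation, as a minimal Python sketch (the test point is an arbitrary choice):

def f(x, y):
    return (x / y**3) ** 0.5

x, y = 3.0, 7.0                               # arbitrary positive test point
worst = f(1.02 * x, 0.98 * y) / f(x, y) - 1   # +2% error in x, -2% in y
print(worst)                                  # about 0.04, i.e. 4%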

18 Some CAS notes

\(\blacksquare \) In Mathematica, Exp is a symbol (Head[Exp] gives Symbol), but in Maple it is not.

In Maple

indets(z^2-exp(x^2-1)+1+Pi+Gamma*foo()-sin(y),'name');

gives \(\left \{ \Gamma ,\pi ,x,y,z\right \} \) but in Mathematica

expr=z^2-Exp[x^2-1]+1+Pi+Gamma*foo[]-Sin[y]; 
Cases[expr,_Symbol,Infinity]

gives \(\{e,x,\pi ,z,\text {Gamma},y\}\)

Notice that \(e\) shows up in Mathematica, but not in Maple.

19 d’Alembert’s Solution to wave PDE

(added December 13, 2018)

The PDE is

\begin{equation} \frac {\partial ^{2}\psi }{\partial t^{2}}=c^{2}\frac {\partial ^{2}\psi }{\partial x^{2}} \tag {1}\end{equation}

Let

\begin{align*} u & =x-ct\\ v & =x+ct \end{align*}

Then

\begin{align} \frac {\partial \psi }{\partial t} & =\frac {\partial \psi }{\partial u}\frac {\partial u}{\partial t}+\frac {\partial \psi }{\partial v}\frac {\partial v}{\partial t}\nonumber \\ & =-c\frac {\partial \psi }{\partial u}+c\frac {\partial \psi }{\partial v} \tag {2}\end{align}

And

\begin{align} \frac {\partial \psi }{\partial x} & =\frac {\partial \psi }{\partial u}\frac {\partial u}{\partial x}+\frac {\partial \psi }{\partial v}\frac {\partial v}{\partial x}\nonumber \\ & =\frac {\partial \psi }{\partial u}+\frac {\partial \psi }{\partial v} \tag {3}\end{align}

Then, from (2)

\begin{align} \frac {\partial ^{2}\psi }{\partial t^{2}} & =-c\left ( \frac {\partial ^{2}\psi }{\partial u^{2}}\frac {\partial u}{\partial t}+\frac {\partial ^{2}\psi }{\partial u\partial v}\frac {\partial v}{\partial t}\right ) +c\left ( \frac {\partial ^{2}\psi }{\partial v^{2}}\frac {\partial v}{\partial t}+\frac {\partial ^{2}\psi }{\partial v\partial u}\frac {\partial u}{\partial t}\right ) \nonumber \\ & =-c\left ( -c\frac {\partial ^{2}\psi }{\partial u^{2}}+c\frac {\partial ^{2}\psi }{\partial u\partial v}\right ) +c\left ( c\frac {\partial ^{2}\psi }{\partial v^{2}}-c\frac {\partial ^{2}\psi }{\partial v\partial u}\right ) \nonumber \\ & =c^{2}\frac {\partial ^{2}\psi }{\partial u^{2}}-c^{2}\frac {\partial ^{2}\psi }{\partial u\partial v}+c^{2}\frac {\partial ^{2}\psi }{\partial v^{2}}-c^{2}\frac {\partial ^{2}\psi }{\partial v\partial u}\nonumber \\ & =c^{2}\frac {\partial ^{2}\psi }{\partial u^{2}}+c^{2}\frac {\partial ^{2}\psi }{\partial v^{2}}-2c^{2}\frac {\partial ^{2}\psi }{\partial v\partial u} \tag {4}\end{align}

And from (3)

\begin{align} \frac {\partial ^{2}\psi }{\partial x^{2}} & =\left ( \frac {\partial ^{2}\psi }{\partial u^{2}}\frac {\partial u}{\partial x}+\frac {\partial ^{2}\psi }{\partial u\partial v}\frac {\partial v}{\partial x}\right ) +\left ( \frac {\partial ^{2}\psi }{\partial v^{2}}\frac {\partial v}{\partial x}+\frac {\partial ^{2}\psi }{\partial v\partial u}\frac {\partial u}{\partial x}\right ) \nonumber \\ & =\left ( \frac {\partial ^{2}\psi }{\partial u^{2}}+\frac {\partial ^{2}\psi }{\partial u\partial v}\right ) +\left ( \frac {\partial ^{2}\psi }{\partial v^{2}}+\frac {\partial ^{2}\psi }{\partial v\partial u}\right ) \nonumber \\ & =\frac {\partial ^{2}\psi }{\partial u^{2}}+\frac {\partial ^{2}\psi }{\partial v^{2}}+2\frac {\partial ^{2}\psi }{\partial v\partial u} \tag {5}\end{align}

Substituting (4,5) into (1) gives

\begin{align*} -2c^{2}\frac {\partial ^{2}\psi }{\partial v\partial u} & =2c^{2}\frac {\partial ^{2}\psi }{\partial v\partial u}\\ -4c^{2}\frac {\partial ^{2}\psi }{\partial v\partial u} & =0 \end{align*}

Since \(c\neq 0\) then

\[ \frac {\partial ^{2}\psi }{\partial v\partial u}=0 \]

Integrating w.r.t \(v\) gives

\[ \frac {\partial \psi }{\partial u}=f\left ( u\right ) \]

Integrating w.r.t \(u\)

\[ \psi \left ( x,t\right ) =F\left ( u\right ) +G\left ( v\right ) \]

Therefore

\begin{equation} \psi \left ( x,t\right ) =F\left ( x-ct\right ) +G\left ( x+ct\right ) \tag {6}\end{equation}

The functions \(F,G\) are arbitrary functions found from the initial and boundary conditions if given. Let the initial conditions be

\begin{align*} \psi \left ( x,0\right ) & =f_{0}\left ( x\right ) \\ \frac {\partial }{\partial t}\psi \left ( x,0\right ) & =g_{0}\left ( x\right ) \end{align*}

Where the first condition above is the shape of the string at time \(t=0\) and the second condition is the initial velocity.

Applying first condition to (6) gives

\begin{equation} f_{0}\left ( x\right ) =F\left ( x\right ) +G\left ( x\right ) \tag {7}\end{equation}

Applying the second condition gives

\begin{align} g_{0}\left ( x\right ) & =\left [ \frac {\partial }{\partial t}F\left ( x-ct\right ) \right ] _{t=0}+\left [ \frac {\partial }{\partial t}G\left ( x+ct\right ) \right ] _{t=0}\nonumber \\ & =\left [ \frac {dF\left ( x-ct\right ) }{d\left ( x-ct\right ) }\frac {\partial \left ( x-ct\right ) }{\partial t}\right ] _{t=0}+\left [ \frac {dG\left ( x+ct\right ) }{d\left ( x+ct\right ) }\frac {\partial \left ( x+ct\right ) }{\partial t}\right ] _{t=0}\nonumber \\ & =\left [ -c\frac {dF\left ( x-ct\right ) }{d\left ( x-ct\right ) }\right ] _{t=0}+\left [ c\frac {dG\left ( x+ct\right ) }{d\left ( x+ct\right ) }\right ] _{t=0}\nonumber \\ & =-c\frac {dF\left ( x\right ) }{dx}+c\frac {dG\left ( x\right ) }{dx} \tag {8}\end{align}

Now we have two equations (7,8) and two unknowns \(F,G\) to solve for. But (8) has derivatives of \(F,G\). So to make it easier to solve, we integrate (8) w.r.t. \(x\) to obtain

\begin{equation} \int ^{x}g_{0}\left ( s\right ) ds=-cF\left ( x\right ) +cG\left ( x\right ) \tag {9}\end{equation}

So we will use (9) instead of (8) with (7) to solve for \(F,G\). From (7)

\begin{equation} F\left ( x\right ) =f_{0}\left ( x\right ) -G\left ( x\right ) \tag {10}\end{equation}

Substituting (10) in (9) gives

\begin{align} \int ^{x}g_{0}\left ( s\right ) ds & =-c\left ( f_{0}\left ( x\right ) -G\left ( x\right ) \right ) +cG\left ( x\right ) \nonumber \\ & =-cf_{0}\left ( x\right ) +2cG\left ( x\right ) \nonumber \\ G\left ( x\right ) & =\frac {\left ( \int ^{x}g_{0}\left ( s\right ) ds\right ) +cf_{0}\left ( x\right ) }{2c}\nonumber \\ & =\frac {1}{2c}\left ( \int ^{x}g_{0}\left ( s\right ) ds+cf_{0}\left ( x\right ) \right ) \tag {11}\end{align}

Using the above back in (10) gives \(F\left ( x\right ) \) as

\begin{equation} F\left ( x\right ) =f_{0}\left ( x\right ) -\frac {1}{2c}\left ( \int ^{x}g_{0}\left ( s\right ) ds+cf_{0}\left ( x\right ) \right ) \tag {12}\end{equation}

Using (11,12) in (6) gives the final solution

\begin{align*} \psi \left ( x,t\right ) & =F\left ( x-ct\right ) +G\left ( x+ct\right ) \\ & =f_{0}\left ( x-ct\right ) -\frac {1}{2c}\left ( \int ^{x-ct}g_{0}\left ( s\right ) ds+cf_{0}\left ( x-ct\right ) \right ) +\frac {1}{2c}\left ( \int ^{x+ct}g_{0}\left ( s\right ) ds+cf_{0}\left ( x+ct\right ) \right ) \\ & =f_{0}\left ( x-ct\right ) -\frac {1}{2c}\int ^{x-ct}g_{0}\left ( s\right ) ds-\frac {1}{2}f_{0}\left ( x-ct\right ) +\frac {1}{2c}\int ^{x+ct}g_{0}\left ( s\right ) ds+\frac {1}{2}f_{0}\left ( x+ct\right ) \\ & =\frac {1}{2}\left ( f_{0}\left ( x-ct\right ) +f_{0}\left ( x+ct\right ) \right ) +\frac {1}{2c}\int _{x-ct}^{x+ct}g_{0}\left ( s\right ) ds \end{align*}

The above is the final solution. So if we are given initial position and initial velocity of the string as function of \(x\), we can find exact solution to the wave PDE.
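
Here is a minimal Python sketch of the d'Alembert formula, assuming numpy and scipy are available; the initial shape \(f_{0}\) (a Gaussian) and the zero initial velocity are my example choices:

import numpy as np
from scipy.integrate import quad

c = 1.0
f0 = lambda x: np.exp(-x**2)   # initial shape (example choice)
g0 = lambda x: 0.0             # initial velocity (example choice)

def psi(x, t):
    # d'Alembert: average of the shifted initial shapes plus velocity term
    avg = 0.5 * (f0(x - c*t) + f0(x + c*t))
    vel = quad(g0, x - c*t, x + c*t)[0] / (2*c)
    return avg + vel

print(psi(0.5, 0.0), f0(0.5))   # at t=0 we recover the initial shape
print(psi(0.5, 2.0))            # two half-height pulses moving apart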

20 Convergence

\(\blacksquare \) Definition of pointwise convergence: \(f_{n}\left ( x\right ) \) converges pointwise to \(f_{\ast }\left ( x\right ) \) if for each \(\varepsilon >0\) there exists an integer \(N\left ( \varepsilon ,x\right ) \) such that \(\left \vert f_{n}\left ( x\right ) -f_{\ast }\left ( x\right ) \right \vert <\varepsilon \) for all \(n\geq N\).

\(\blacksquare \) Definition of uniform convergence: \(f_{n}\left ( x\right ) \) converges uniformly to \(f_{\ast }\left ( x\right ) \) if for each \(\varepsilon >0\) there exists an integer \(N\left ( \varepsilon \right ) \), independent of \(x\), such that \(\left \vert f_{n}\left ( x\right ) -f_{\ast }\left ( x\right ) \right \vert <\varepsilon \) for all \(n\geq N\) and all \(x\).

\(\blacksquare \) Another way to find uniform convergence: first find the pointwise limit of \(f_{n}\left ( x\right ) \). Say it converges to \(f_{\ast }\left ( x\right ) \). Now show that

\[ \left \Vert f_{n}-f_{\ast }\right \Vert =\sup _{x\in I}\left \vert f_{n}\left ( x\right ) -f_{\ast }\left ( x\right ) \right \vert \]

goes to zero as \(n\rightarrow \infty \). To find \(\sup \left \vert f_{n}-f_{\ast }\right \vert \) we might need to find the maximum of \(f_{n}-f_{\ast }\), i.e. differentiate it, set it to zero, find the \(x\) where it is maximum, then evaluate \(f_{n}\left ( x\right ) -f_{\ast }\left ( x\right ) \) there. This gives the \(\sup \). Then see if this goes to zero as \(n\rightarrow \infty \)

\(\blacksquare \) If a sequence of continuous functions \(f_{n}\) converges uniformly to \(f_{\ast }\), then \(f_{\ast }\) must be continuous. So this gives a quick check on whether uniform convergence exists. First find the pointwise limit \(f_{\ast }\left ( x\right ) \) and check if it is continuous or not. If not, then there is no need to check for uniform convergence; it does not exist. But if \(f_{\ast }\left ( x\right ) \) is a continuous function, we still need to check, because it is possible there is no uniform convergence.
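
The classic example is \(f_{n}\left ( x\right ) =x^{n}\) on \(\left [ 0,1\right ) \): the pointwise limit is \(0\), but \(\sup _{x\in \left [ 0,1\right ) }\left \vert x^{n}\right \vert =1\) for every \(n\), so the convergence is not uniform. A minimal Python sketch illustrating this on a grid:

import numpy as np

x = np.linspace(0, 1, 10001, endpoint=False)   # grid on [0, 1)
for n in (10, 100, 1000):
    # grid maximum of |f_n - f_*| with f_* = 0; the true sup over
    # [0,1) is exactly 1 for every n, so this never goes to 0
    print(n, np.max(np.abs(x**n)))
# contrast with f_n(x) = x/n, where sup|f_n - 0| = 1/n -> 0 (uniform)
for n in (10, 100, 1000):
    print(n, np.max(np.abs(x / n)))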

21 Note on when to raise ln to exp when solving an ode

Sometimes in the middle of solving an ode we get \(\ln \) on both sides. We can raise both sides to \(\exp \) as soon as these show up, or wait until the end, after solving for the constant of integration. The examples below show we get the same result in both cases.

21.1 Example 1

\begin{equation} y^{\prime }=2\frac {2y-x}{x+y} \tag {1}\end{equation}

With initial conditions \(y\left ( 0\right ) =2\). This is a homogeneous type ode. It is solved by the substitution \(u=\frac {y}{x}\) which results in the new ode in \(u\) given by

\[ u^{\prime }=\frac {1}{x}\left ( \frac {-u^{2}+3u-2}{1+u}\right ) \]

This is now separable

\begin{align*} \frac {du}{dx} & =\frac {1}{x}\left ( \frac {-u^{2}+3u-2}{1+u}\right ) \\ \int \frac {1+u}{-u^{2}+3u-2}du & =\int \frac {1}{x}dx \end{align*}

Integrating gives

\begin{equation} 2\ln \left ( 1-u\right ) -3\ln \left ( 2-u\right ) =\ln x+c \tag {1A}\end{equation}

Replacing \(u\) by \(\frac {y}{x}\) gives

\begin{align} 2\ln \left ( 1-\frac {y}{x}\right ) -3\ln \left ( 2-\frac {y}{x}\right ) & =\ln x+c\nonumber \\ \ln \left ( \frac {\left ( 1-\frac {y}{x}\right ) ^{2}}{\left ( 2-\frac {y}{x}\right ) ^{3}}\right ) & =\ln x+c\nonumber \\ \ln \left ( \frac {\frac {1}{x^{2}}\left ( x-y\right ) ^{2}}{\frac {1}{x^{3}}\left ( 2x-y\right ) ^{3}}\right ) & =\ln x+c\nonumber \\ \ln \left ( x\frac {\left ( x-y\right ) ^{2}}{\left ( 2x-y\right ) ^{3}}\right ) & =\ln x+c\nonumber \\ \ln x+\ln \frac {\left ( x-y\right ) ^{2}}{\left ( 2x-y\right ) ^{3}} & =\ln x+c \tag {1B}\end{align}

\(\ln x\) cancels out giving

\begin{equation} \ln \frac {\left ( x-y\right ) ^{2}}{\left ( 2x-y\right ) ^{3}}=c \tag {2}\end{equation}

Now let us solve for \(c\) from the IC \(y\left ( 0\right ) =2\). The above becomes

\begin{align*} \ln \left ( \frac {\left ( -2\right ) ^{2}}{\left ( -2\right ) ^{3}}\right ) & =c\\ c & =\ln \left ( \frac {4}{-8}\right ) \\ & =\ln \left ( -\frac {1}{2}\right ) \end{align*}

So the solution (2) is

\[ \ln \frac {\left ( x-y\right ) ^{2}}{\left ( 2x-y\right ) ^{3}}=\ln \left ( -\frac {1}{2}\right ) \]

And only now after \(c\) is found, we raise both sides to \(\exp \) (to simplify it) which gives the solution as

\[ \frac {\left ( x-y\right ) ^{2}}{\left ( 2x-y\right ) ^{3}}=\frac {-1}{2}\]

Or

\begin{equation} \frac {\left ( x-y\right ) ^{2}}{\left ( y-2x\right ) ^{3}}=\frac {1}{2} \tag {3}\end{equation}

Let us see what happens if we had raised both sides to \(\exp \) earlier on, instead of waiting until after solving for the constant of integration, i.e. from step (1A) above

\begin{align*} 2\ln \left ( 1-u\right ) -3\ln \left ( 2-u\right ) & =\ln x+c\\ \ln \frac {\left ( 1-u\right ) ^{2}}{\left ( 2-u\right ) ^{3}} & =\ln x+c\\ \frac {\left ( 1-u\right ) ^{2}}{\left ( 2-u\right ) ^{3}} & =e^{\ln x+c}\\ \frac {\left ( 1-u\right ) ^{2}}{\left ( 2-u\right ) ^{3}} & =Ax \end{align*}

Where \(A\) is a new constant. And only now we replace \(u\) by \(\frac {y}{x}\) which gives

\begin{align} \frac {\left ( 1-\frac {y}{x}\right ) ^{2}}{\left ( 2-\frac {y}{x}\right ) ^{3}} & =Ax\nonumber \\ x\frac {\left ( x-y\right ) ^{2}}{\left ( 2x-y\right ) ^{3}} & =Ax\nonumber \\ \frac {\left ( x-y\right ) ^{2}}{\left ( 2x-y\right ) ^{3}} & =A \tag {4}\end{align}

Using IC \(y\left ( 0\right ) =2\). The above becomes

\begin{align*} \frac {\left ( -2\right ) ^{2}}{\left ( -2\right ) ^{3}} & =A\\ A & =-\frac {1}{2}\end{align*}

Hence (4) becomes

\begin{align*} \frac {\left ( x-y\right ) ^{2}}{\left ( 2x-y\right ) ^{3}} & =-\frac {1}{2}\\ \frac {\left ( x-y\right ) ^{2}}{\left ( y-2x\right ) ^{3}} & =\frac {1}{2}\end{align*}

Which is the same answer obtained earlier in (3). This shows both methods work. It might be better to delay raising to the exponential until the very end, so it is all done in one place. As a computer check, the sketch below verifies that the implicit solution (3) satisfies the ode.
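
A minimal Python sketch using sympy, verifying by implicit differentiation that (3) satisfies ode (1) identically:

from sympy import symbols, idiff, simplify, Rational

x, y = symbols('x y')
sol = (x - y)**2 / (y - 2*x)**3 - Rational(1, 2)   # solution (3) as F(x,y) = 0

dydx = idiff(sol, y, x)            # implicit derivative dy/dx on F(x,y) = 0
ode_rhs = 2 * (2*y - x) / (x + y)
print(simplify(dydx - ode_rhs))    # 0, so (3) satisfies the ode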

21.2 Example 2

\begin{align} y^{\prime } & =\frac {y^{2}-2xy-x^{2}}{y^{2}+2xy-x^{2}}\tag {1}\\ y\left ( 1\right ) & =-1\nonumber \end{align}

This is a homogenous ode, solved by the substitution \(u=\frac {y}{x}\) which results in new ode in \(u\) given by

\[ u^{\prime }=\frac {1}{x}\frac {-u^{3}-u^{2}-u-1}{u^{2}+2u-1}\]

This is separable

\[ \frac {u^{2}+2u-1}{-u^{3}-u^{2}-u-1}du=\frac {1}{x}dx \]

Integrating gives

\begin{equation} \ln \left ( u+1\right ) -\ln \left ( u^{2}+1\right ) =\ln \left ( x\right ) +c_{1} \tag {1A}\end{equation}

There are two choices now. Raise both sides to \(\exp \) to simplify the \(u\) solution or wait until the end. Option 1:

Replacing \(u\) by \(\frac {y}{x}\) in (1A) gives

\begin{align} \ln \left ( \frac {y}{x}+1\right ) -\ln \left ( \left ( \frac {y}{x}\right ) ^{2}+1\right ) & =\ln \left ( x\right ) +c_{1}\nonumber \\ \ln \left ( \frac {\frac {y}{x}+1}{\left ( \frac {y}{x}\right ) ^{2}+1}\right ) & =\ln \left ( x\right ) +c_{1}\nonumber \\ \ln \left ( \frac {\frac {1}{x}\left ( y+x\right ) }{\frac {1}{x^{2}}\left ( y^{2}+x^{2}\right ) }\right ) & =\ln \left ( x\right ) +c_{1}\nonumber \\ \ln \left ( x\frac {\left ( y+x\right ) }{\left ( y^{2}+x^{2}\right ) }\right ) & =\ln \left ( x\right ) +c_{1}\nonumber \\ \ln x+\ln \left ( \frac {y+x}{y^{2}+x^{2}}\right ) & =\ln \left ( x\right ) +c_{1}\nonumber \\ \ln \left ( \frac {y+x}{y^{2}+x^{2}}\right ) & =c_{1} \tag {2}\end{align}

Now let us solve for \(c_{1}\) from the IC \(y\left ( 1\right ) =-1\). The above becomes

\begin{align*} \ln \left ( \frac {0}{2}\right ) & =c_{1}\\ c_{1} & =-\infty \end{align*}

Hence (2) becomes

\[ \ln \left ( \frac {y+x}{y^{2}+x^{2}}\right ) =-\infty \]

Now raising both sides to \(\exp \) gives

\begin{align*} \frac {y+x}{y^{2}+x^{2}} & =e^{-\infty }\\ \frac {y+x}{y^{2}+x^{2}} & =0\\ y+x & =0\\ y & =-x \end{align*}

Let us see what happens if we raise to \(\exp \) immediately after solving for \(u\), which is the second option. From (1A)

\[ \ln \left ( \frac {u+1}{u^{2}+1}\right ) =\ln \left ( x\right ) +c_{1}\]

Raising both to \(\exp \) gives

\[ \frac {u+1}{u^{2}+1}=Ax \]

Where \(A\) is a new constant. Now we replace \(u\) by \(\frac {y}{x}\)

\begin{align} \frac {\frac {y}{x}+1}{\left ( \frac {y}{x}\right ) ^{2}+1} & =Ax\tag {3}\\ x\frac {y+x}{y^{2}+x^{2}} & =Ax\nonumber \\ \frac {y+x}{y^{2}+x^{2}} & =A\nonumber \end{align}

Solving for \(A\) using the IC \(y\left ( 1\right ) =-1\) in the above gives

\begin{align*} \frac {0}{2} & =A\\ A & =0 \end{align*}

Hence the solution (3) becomes

\[ \frac {\frac {y}{x}+1}{\left ( \frac {y}{x}\right ) ^{2}+1}=0 \]

or

\begin{align*} \frac {y}{x}+1 & =0\\ y & =-x \end{align*}

So both methods worked, the early one and the later one, and both give the same result.

21.3 Example 3

\begin{align} \left ( x+2y\right ) y^{\prime } & =1\tag {1}\\ y\left ( 0\right ) & =-1\nonumber \end{align}

This is tricky in that the solution needs special handling of the initial conditions. Let us solve it by substituting \(z=x+2y\). Then \(z^{\prime }=1+2y^{\prime }\). The ode now becomes

\begin{align*} z\frac {\left ( z^{\prime }-1\right ) }{2} & =1\\ z^{\prime }-1 & =\frac {2}{z}\\ z^{\prime } & =\frac {2}{z}+1 \end{align*}

This is separable

\[ \frac {dz}{1+\frac {2}{z}}=dx \]

Integrating

\begin{align} \int \frac {dz}{1+\frac {2}{z}} & =\int dx\nonumber \\ z-2\ln \left ( 2+z\right ) & =x+c \tag {1A}\end{align}

We could raise both sides to \(\exp \) now, or wait until after converting back to \(y\). Raising to \(\exp \) now gives

\begin{align*} e^{z-2\ln \left ( 2+z\right ) } & =Ae^{x}\\ \frac {e^{z}}{\left ( 2+z\right ) ^{2}} & =Ae^{x}\end{align*}

But \(z=x+2y\) and the above becomes

\begin{align} \frac {e^{x+2y}}{\left ( 2+x+2y\right ) ^{2}} & =Ae^{x}\nonumber \\ \frac {e^{2y}}{\left ( 2+x+2y\right ) ^{2}} & =A \tag {2}\end{align}

Which is the correct implicit solution. Now the IC is used to find \(A\). Using \(y\left ( 0\right ) =-1\) the above becomes

\[ \frac {e^{-2}}{0}=A \]

So \(A=\infty \). Hence the solution (2) is

\[ \frac {e^{2y}}{\left ( 2+x+2y\right ) ^{2}}=\infty \]

When this happens, the denominator must vanish, i.e. \(\left ( 2+x+2y\right ) ^{2}=0\) or \(2+x+2y=0\). This gives \(2y=-2-x\). Hence

\[ y=-1-\frac {x}{2}\]
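
A quick computer check of this solution, as a minimal Python sketch using sympy:

from sympy import symbols, Function, Eq
from sympy.solvers.ode import checkodesol

x = symbols('x')
y = Function('y')
ode = Eq((x + 2*y(x)) * y(x).diff(x), 1)
sol = Eq(y(x), -1 - x/2)

print(checkodesol(ode, sol))   # (True, 0) means the ode is satisfied
print(sol.rhs.subs(x, 0))      # -1, matching y(0) = -1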

22 References

Too many references were used, but I will try to remember to start recording the books used from now on. Here is the current list

  1. Applied Partial Differential Equations, Haberman.
  2. Advanced Mathematical Methods for Scientists and Engineers, Bender and Orszag, Springer.
  3. Boundary Value Problems in Physics and Engineering, Frank Chorlton, Van Nostrand, 1969.
  4. Class notes, Math 322, University of Wisconsin, Madison, Fall 2016, Professor Smith, Math dept.
  5. Mathematical Methods in the Physical Sciences, Mary Boas, second edition.
  6. Mathematical Methods for Physics and Engineering, Riley, Hobson, Bence, second edition.
  7. Various pages, Wikipedia.
  8. MathWorld at Wolfram.
  9. Fourier Series and Boundary Value Problems, 8th edition, James Brown, Ruel Churchill.
  10. Good note on Sturm-Liouville: http://ramanujan.math.trinity.edu/rdaileda/teach/s12/m3357/lectures/lecture_4_10_short.pdf