
my mathematics cheat sheet

Nasser M. Abbasi

February 6, 2025   Compiled on February 6, 2025 at 6:02pm

Contents

1 What is a first integral of a differential equation and how to find it?
1.1 Example y' = -x/y
1.2 Example y' = xy
1.3 Example y' = 2x^2 y^2
1.4 Example y' = (x-1)/y
1.5 Example y' = y sin x
2 Special ode’s and their solutions
2.1 Airy y+axy=0
2.2 Chebyshev (1x2)yxy+n2y=0
2.3 Hermite y2xy+2ny=0
2.4 Legendre (1x2)y2xy+n(n+1)y=0
2.5 Bessel x2y+xy+(x2n2)y=0
2.6 Reduced Riccati y=axn+by2
2.7 Gauss Hypergeometric ode x(1x)y+(c(a+b+1)x)yaby=0
3 Change of variables and chain rule in differential equation
3.1 Example 1 Change of the independent variable using z=g(x)
3.2 Example 2 Change of the independent variable using t=ln(x) Euler ode
3.3 Example 3 Change of the dependent variable using y=xr Euler ode
4 Changing the role of independent and dependent variable in an ode
4.1 Example 1
4.2 Example 2
4.3 Example 3
4.4 Example 4
4.5 Example 5
4.6 Example 6
4.7 Example 7
5 general notes
6 Converting first order ODE which is homogeneous to separable ODE
7 Direct solving of some simple PDE’s
8 Fourier series flow chart
8.1 Theorem on when we can do term by term differentiation
8.2 Relation between coefficients of Fourier series of f(x) and Fourier series of f'(x)
8.3 Theorem on convergence of Fourier series
9 Laplacian in different coordinates
10 Linear combination of two solutions is a solution to the ODE
11 To find the Wronskian ODE
12 Green functions notes
13 Laplace transform notes
14 Series, power series, Laurent series notes
14.1 Some tricks to find sums
14.1.1 Example 1
14.2 Methods to find Laurent series
14.2.1 Method one
14.2.2 Method Two
14.2.3 Method Three
14.2.4 Conclusion
15 Gamma function notes
16 Riemann zeta function notes
17 Complex functions notes
17.1 Find bn coefficients in the Laurent series expansion
18 Hints to solve some problems
18.1 Complex analysis and power and Laurent series
18.2 Errors and relative errors
19 Some CAS notes
20 d’Alembert’s Solution to wave PDE
21 Convergence
22 Note on when to raise ln to exp when solving an ode
22.1 Example 1
22.2 Example 2
22.3 Example 3
23 References

A place to keep quick notes about math that I keep forgetting. This is meant to be scratch notes and a cheat sheet where I write math notes before I forget them or move them somewhere else. It can and will contain errors and/or incomplete descriptions in a number of places. Use at your own risk.

1 What is a first integral of a differential equation and how to find it?

Let's start with a first order ode; this generalizes to any order. Say our ode is

dy/dx = f(x,y)

The first integral of the above ode is any function Φ(x,y) such that its rate of change along x is zero, i.e. (d/dx)Φ(x,y) = 0. This means

(d/dx)Φ(x,y) = ∂Φ/∂x + (∂Φ/∂y)(dy/dx) = Φ_x + Φ_y (dy/dx)

But dy/dx = f(x,y), hence the above is

(d/dx)Φ(x,y) = Φ_x + Φ_y f(x,y)

If the above comes out to be zero, then Φ(x,y) is called a first integral of the ode dy/dx = f(x,y). In the above, we should make sure to replace any y in the RHS with the solution itself of the ode.

Notice that the first integral itself is not a constant function of (x,y). It is its value along a solution curve, as the independent variable changes, that is constant. We should not mix these two things.

But how do we find the first integral function Φ(x,y)? This is easy: solve the ode itself, then move all terms to one side, and that is Φ(x,y). Let us look at a few examples to make this clear.

It is also possible to find the first integral Φ(x,y) without solving the ode. We just need to find any Φ(x,y) such that its derivative w.r.t. x, which is Φ_x + Φ_y f(x,y), becomes zero. Here f(x,y) must be the RHS of the ode. So if, by inspection or other means, we can find such a Φ(x,y), then there is no need to solve the ode to find it. For some easy ode's, the method of inspection might work. There are more advanced methods for finding first integrals. But here, for simplicity, we assume we have the solution to the ode available.

If we want to find the first integral without having the solution of the first order ode, and the ode is already exact, then the same method used to solve an exact ode can be used to find the first integral. i.e. if the ode has the form

M(x,y)dx + N(x,y)dy = 0    (1)

Where this is exact (i.e. ∂M/∂y = ∂N/∂x), then we assume a first integral exists and is given by Φ(x,y) = c1. Hence

(d/dx)Φ(x,y) = Φ_x + Φ_y (dy/dx) = 0    (2)

Comparing (1,2) we see that

Φ_x = M, Φ_y = N

From these two equations we can now find Φ(x,y) using the same methods we use when solving an exact first order ode. So this is an example where we can find Φ(x,y) without knowing the solution of the ode. The above works if the ode is exact.

If the ode is not exact, then we try to find an integrating factor which makes the ode exact first.

The point of this note is to show that a first integral is a function Φ(x,y) which happens to be constant along solution curves.

The first integral Φ(x,y) of an ode is not unique; we just need to find one. Even though the ode itself can have a unique solution, there can be many different first integrals Φ(x,y). The only condition is that (d/dx)Φ(x,y) = 0.
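To make the idea concrete, here is a small numerical sanity check (my own sketch, not part of the original notes): integrate the ode y' = -x/y of the first example below with RK4, and watch the candidate first integral Φ(x,y) = y^2 + x^2 stay constant along the computed trajectory (the additive constant c1 is dropped since it only shifts Φ).

```python
# ODE y' = -x/y with initial condition y(0) = 2.
# Candidate first integral: Phi(x, y) = y^2 + x^2 (constant along solutions).
def f(x, y):
    return -x / y

def rk4_step(x, y, h):
    # one classical Runge-Kutta step for y' = f(x, y)
    k1 = f(x, y)
    k2 = f(x + h/2, y + h*k1/2)
    k3 = f(x + h/2, y + h*k2/2)
    k4 = f(x + h, y + h*k3)
    return y + h*(k1 + 2*k2 + 2*k3 + k4)/6

def phi(x, y):
    return y**2 + x**2

x, y, h = 0.0, 2.0, 0.01
start = phi(x, y)
for _ in range(100):       # integrate from x = 0 to x = 1
    y = rk4_step(x, y, h)
    x += h
drift = abs(phi(x, y) - start)
print(drift)  # tiny: Phi is (numerically) constant along the solution curve
```

Any other first integral of this ode (for example a function of Φ) would pass the same check, which illustrates that first integrals are not unique.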

1.1 Example y' = -x/y

The solution can be found to be

y^2 = c1 - x^2

Hence the first integral is (moving everything to one side)

Φ(x,y) = y^2 + x^2 - c1

To show this is the first integral, we have to show that (d/dx)Φ(x,y) = 0. This is given by

(d/dx)Φ(x,y) = Φ_x + Φ_y f(x,y)

Looking at our ode we see that f(x,y) = -x/y. The above becomes

(d/dx)Φ(x,y) = Φ_x + Φ_y (-x/y)

But Φ_x = 2x and Φ_y = 2y, then the above becomes

(d/dx)Φ(x,y) = 2x + 2y(-x/y) = 0

Since (d/dx)Φ(x,y) = 0, then Φ(x,y) = y^2 + x^2 - c1 is a first integral.

1.2 Example y' = xy

This is a linear ode; the solution is

y = c1 e^(x^2/2)

Hence the first integral is (moving everything to one side)

Φ(x,y) = y - c1 e^(x^2/2)

To show this is the first integral, we have to show that (d/dx)Φ(x,y) = 0. This is given by

(d/dx)Φ(x,y) = Φ_x + Φ_y f(x,y)

Looking at our ode we see that f(x,y) = xy. The above becomes

(d/dx)Φ(x,y) = Φ_x + Φ_y (xy)

But Φ_x = -c1 x e^(x^2/2) and Φ_y = 1, then the above becomes

(d/dx)Φ(x,y) = -c1 x e^(x^2/2) + xy

But y is the solution y = c1 e^(x^2/2), hence the above becomes

(d/dx)Φ(x,y) = -c1 x e^(x^2/2) + x c1 e^(x^2/2) = 0

Since (d/dx)Φ(x,y) = 0, then Φ(x,y) = y - c1 e^(x^2/2) is a first integral.

1.3 Example y' = 2x^2 y^2

The solution can be found to be

-1/y = (2/3)x^3 + c1

Hence the first integral is (moving everything to one side)

Φ(x,y) = 1/y + (2/3)x^3 + c1

To show this is indeed the first integral, we have to show that (d/dx)Φ(x,y) = 0. This is given by

(d/dx)Φ(x,y) = Φ_x + Φ_y f(x,y)

Looking at our ode we see that f(x,y) = 2x^2 y^2. The above becomes

(d/dx)Φ(x,y) = Φ_x + Φ_y (2x^2 y^2)

But Φ_x = 2x^2 and Φ_y = -1/y^2, then the above becomes

(d/dx)Φ(x,y) = 2x^2 - (1/y^2)(2x^2 y^2) = 0

Since (d/dx)Φ(x,y) = 0, then

Φ(x,y) = 1/y + (2/3)x^3 + c1

is a first integral of the ode y' = 2x^2 y^2.

1.4 Example y' = (x-1)/y

The solution is

y^2/2 = -x + x^2/2 + c1

Hence the first integral is

Φ(x,y) = y^2/2 + x - x^2/2 - c1

To show this is the first integral, we have to show that (d/dx)Φ(x,y) = 0

(d/dx)Φ(x,y) = Φ_x + Φ_y f(x,y)

Looking at our ode we see that f(x,y) = (x-1)/y, then the above becomes

(d/dx)Φ(x,y) = Φ_x + Φ_y ((x-1)/y)

But Φ_x = 1 - x and Φ_y = y, then the above becomes

(d/dx)Φ(x,y) = (1-x) + y((x-1)/y) = 0

Since (d/dx)Φ(x,y) = 0, then

Φ(x,y) = y^2/2 + x - x^2/2 - c1

is a first integral of the ode y' = (x-1)/y.

1.5 Example y' = y sin x

The solution is

y = c1 e^(-cos x)

Hence the first integral is

Φ(x,y) = y - c1 e^(-cos x)

To show this is the first integral, we have to show that (d/dx)Φ(x,y) = 0

(d/dx)Φ(x,y) = Φ_x + Φ_y f(x,y)

Looking at our ode we see that f(x,y) = y sin x, then the above becomes

(d/dx)Φ(x,y) = Φ_x + Φ_y (y sin x)

But Φ_x = -c1 sin(x) e^(-cos x) and Φ_y = 1, then the above becomes

(d/dx)Φ(x,y) = -c1 sin(x) e^(-cos x) + y sin x

Notice that in this example we have a y in the RHS above that did not cancel, as it did in some of the earlier examples. In this case, we have to replace this y by the solution y = c1 e^(-cos x), and the above now becomes

(d/dx)Φ(x,y) = -c1 sin(x) e^(-cos x) + c1 e^(-cos x) sin x = 0

Since (d/dx)Φ(x,y) = 0, then

Φ(x,y) = y - c1 e^(-cos x)

is a first integral of the ode y' = y sin x.

2 Special ode's and their solutions

These are ode's whose solutions are in terms of special functions. Will update as I find more. Most of the special functions come up from working out the series solution of a second order ode which has a regular singular point at the expansion point. These are the more interesting odes which generate these special functions.

2.1 Airy y'' + a x y = 0

The solution is

y(x) = c1 AiryAi(-a^(1/3) x) + c2 AiryBi(-a^(1/3) x)

2.2 Chebyshev (1-x^2)y'' - x y' + n^2 y = 0

For

(1-x^2)y'' - x y' + n^2 y = 0

Singular points at x = -1, 1 and infinity. Solution valid for |x| < 1. Maple gives the solution

y(x) = c1 / (x + sqrt(x^2-1))^n + c2 (x + sqrt(x^2-1))^n

For

(1-x^2)y'' - a x y' + n^2 y = 0

Maple gives the solution

y(x) = c1 (x^2-1)^(1/2 - a/4) LegendreP( sqrt(a^2+4n^2-2a+1)/2 - 1/2, -1 + a/2, x ) + c2 (x^2-1)^(1/2 - a/4) LegendreQ( sqrt(a^2+4n^2-2a+1)/2 - 1/2, -1 + a/2, x )

If n is a positive integer, then the series solution gives a polynomial solution of degree n, called the Chebyshev polynomials.

2.3 Hermite y'' - 2x y' + 2n y = 0

Converges for all x. If n is a positive integer, one series terminates. Series solution in terms of Hermite polynomials.

Maple gives the solution

y(x) = c1 x KummerM(1/2 - n/2, 3/2, x^2) + c2 x KummerU(1/2 - n/2, 3/2, x^2)

2.4 Legendre (1-x^2)y'' - 2x y' + n(n+1) y = 0

Series solution in terms of Legendre functions. When n is a positive integer, one series terminates (i.e. becomes a polynomial).

Maple gives the solution

y(x) = c1 LegendreP(n, x) + c2 LegendreQ(n, x)

If the ode is given in the form

sin(θ) P''(θ) + cos(θ) P'(θ) + n sin(θ) P(θ) = 0

Then using x = cos θ transforms it to the earlier more familiar form. Maple gives this as solution

P(θ) = c1 LegendreP( sqrt(4n+1)/2 - 1/2, cos θ ) + c2 LegendreQ( sqrt(4n+1)/2 - 1/2, cos θ )

2.5 Bessel x^2 y'' + x y' + (x^2-n^2) y = 0

x = 0 is a regular singular point. Solution in terms of Bessel functions

y(x) = c1 BesselJ(n, x) + c2 BesselY(n, x)

2.6 Reduced Riccati y' = a x^n + b y^2

For the special case of n = -2 the solution is

y(x) = λ/x - x^(2bλ) / ( b x^(2bλ+1)/(2bλ+1) + c1 )

Where in the above λ is a root of bλ^2 + λ + a = 0.

For n ≠ -2

w = sqrt(x) [ c1 BesselJ( 1/(2k), (sqrt(ab)/k) x^k ) + c2 BesselY( 1/(2k), (sqrt(ab)/k) x^k ) ]    if ab > 0
w = sqrt(x) [ c1 BesselI( 1/(2k), (sqrt(-ab)/k) x^k ) + c2 BesselK( 1/(2k), (sqrt(-ab)/k) x^k ) ]    if ab < 0

y = -(1/b) w'/w,    k = 1 + n/2

2.7 Gauss Hypergeometric ode x(1-x)y'' + (c-(a+b+1)x) y' - a b y = 0

The solution for |x| < 1 is in terms of the hypergeometric function. Has 3 regular singular points: x = 0, x = 1, x = infinity.

Maple gives this solution

y(x) = c1 hypergeom([a, b], [c], x) + c2 x^(1-c) hypergeom([1+a-c, 1+b-c], [2-c], x)

And Mathematica gives

y(x) = c1 Hypergeometric2F1(a, b, c, x) + (-1)^(1-c) x^(1-c) c2 Hypergeometric2F1(1+a-c, 1+b-c, 2-c, x)

3 Change of variables and chain rule in differential equation

These are examples of doing a change of variables in an ode.

3.1 Example 1 Change of the independent variable using z=g(x)

Given the ode

d^2y/dx^2 + dy/dx + y = sin(x)

And we are asked to do a change of variables from x to z where z = g(x). In this, we can also write

x = g^(-1)(z)

Where g^(-1)(z) is the inverse function. Using the chain rule gives

dy/dx = (dy/dz)(dz/dx)

And for the second derivative

d^2y/dx^2 = d/dx(dy/dx) = d/dx( (dy/dz)(dz/dx) )

And now we use the product rule, which is (ab)' = a'b + ab', on the above, which gives

d^2y/dx^2 = ( d/dx(dy/dz) )(dz/dx) + (dy/dz)( d/dx(dz/dx) )    (1)

Let us do each of the terms on the right above one by one. The second term on the RHS above is easy. It is

(dy/dz)( d/dx(dz/dx) ) = (dy/dz)(d^2z/dx^2)    (2)

It is the first term in (1) which needs more care. The problem is how to handle d/dx(dy/dz), since the denominators are different. The trick is to write d/dx(dy/dz) as (d/dz)(dz/dx)(dy/dz), which does not change anything, but now we can change the order and write this as (dz/dx)(d/dz)(dy/dz), which makes the denominators the same, and now it is free sailing:

d/dx(dy/dz) = (d/dz)(dz/dx)(dy/dz) = (dz/dx)(d/dz)(dy/dz) = (dz/dx)(d^2y/dz^2)

Therefore, the first term in (1) becomes

( d/dx(dy/dz) )(dz/dx) = (dz/dx)(d^2y/dz^2)(dz/dx) = (dz/dx)^2 (d^2y/dz^2)    (3)

Using (2,3) then we have

d^2y/dx^2 = (dz/dx)^2 (d^2y/dz^2) + (dy/dz)(d^2z/dx^2)

Hence the original ode now becomes

d^2y/dx^2 + dy/dx + y = sin(x)
(dz/dx)^2 (d^2y/dz^2) + (dy/dz)(d^2z/dx^2) + (dy/dz)(dz/dx) + y(z) = sin(g^(-1)(z))

We could have written the RHS above as just sin(x) instead of sin(g^(-1)(z)), but since the independent variable is now z, it seemed better to do it this way. Both are correct. Now, since z = g(x), the above can also be written as

(dg/dx)^2 (d^2y/dz^2) + (dy/dz)(d^2g/dx^2) + (dy/dz)(dg/dx) + y(z) = sin(g^(-1)(z))
(g'(x))^2 y''(z) + y'(z) g''(x) + y'(z) g'(x) + y(z) = sin(x)

OK, since the above was so much fun, let's do the third derivative d^3y/dx^3

d^3y/dx^3 = d/dx(d^2y/dx^2)
          = d/dx( (dz/dx)^2 (d^2y/dz^2) + (dy/dz)(d^2z/dx^2) )
          = d/dx[ (dz/dx)^2 (d^2y/dz^2) ] + d/dx[ (dy/dz)(d^2z/dx^2) ]    (4)

Each term above is now found. Looking at the first term in (4)

d/dx[ (dz/dx)^2 (d^2y/dz^2) ]

Using the product rule, which is (ab)' = a'b + ab', on the above gives

d/dx[ (dz/dx)^2 (d^2y/dz^2) ] = d/dx[ (dz/dx)^2 ] (d^2y/dz^2) + (dz/dx)^2 d/dx[ d^2y/dz^2 ]

But d/dx[ (dz/dx)^2 ] = 2(dz/dx)(d^2z/dx^2), and for d/dx(d^2y/dz^2) we have to use the same trick as before, writing d/dx(d^2y/dz^2) = (d/dz)(dz/dx)(d^2y/dz^2) = (dz/dx)(d/dz)(d^2y/dz^2), and now we have d/dx(d^2y/dz^2) = (dz/dx)(d^3y/dz^3). Hence the first term in (4) is now done.

d/dx[ (dz/dx)^2 (d^2y/dz^2) ] = 2(dz/dx)(d^2z/dx^2)(d^2y/dz^2) + (dz/dx)^2 (dz/dx)(d^3y/dz^3)
                              = 2(dz/dx)(d^2z/dx^2)(d^2y/dz^2) + (dz/dx)^3 (d^3y/dz^3)    (5)

Now we look at the second term in (4), which is d/dx[ (dy/dz)(d^2z/dx^2) ], and apply the product rule. This gives

d/dx[ (dy/dz)(d^2z/dx^2) ] = d/dx[dy/dz](d^2z/dx^2) + (dy/dz) d/dx[d^2z/dx^2]
                           = (d/dz)(dz/dx)[dy/dz](d^2z/dx^2) + (dy/dz)(d^3z/dx^3)
                           = (dz/dx)(d/dz)[dy/dz](d^2z/dx^2) + (dy/dz)(d^3z/dx^3)
                           = (dz/dx)(d^2y/dz^2)(d^2z/dx^2) + (dy/dz)(d^3z/dx^3)    (6)

That is it. We are done. (5,6) are the two terms in (4). Therefore

d^3y/dx^3 = 2(dz/dx)(d^2z/dx^2)(d^2y/dz^2) + (dz/dx)^3 (d^3y/dz^3) + (dz/dx)(d^2y/dz^2)(d^2z/dx^2) + (dy/dz)(d^3z/dx^3)
          = 3(dz/dx)(d^2z/dx^2)(d^2y/dz^2) + (dz/dx)^3 (d^3y/dz^3) + (dy/dz)(d^3z/dx^3)

Now, since z = g(x), the above can also be written as

y'''(x) = 3 g'(x) g''(x) y''(z) + (g'(x))^3 y'''(z) + y'(z) g'''(x)

This table shows a summary of the transformation for each derivative y^(n)(x) when using the change of variables z = g(x)

y'(x)    y'(z) g'(x)
y''(x)   (g'(x))^2 y''(z) + y'(z) g''(x)
y'''(x)  3 g'(x) g''(x) y''(z) + (g'(x))^3 y'''(z) + y'(z) g'''(x)
y''''(x) 3 (g''(x))^2 y''(z) + 4 g'(x) y''(z) g'''(x) + 6 (g'(x))^2 g''(x) y'''(z) + y'(z) g''''(x) + (g'(x))^4 y''''(z)

Strictly speaking, it would be better to use a different variable than y when changing the independent variable, i.e. instead of writing y(z) in all the above, we should write u(z) in its place. So the above table would look like

y'(x)    u'(z) g'(x)
y''(x)   (g'(x))^2 u''(z) + u'(z) g''(x)
y'''(x)  3 g'(x) g''(x) u''(z) + (g'(x))^3 u'''(z) + u'(z) g'''(x)
y''''(x) 3 (g''(x))^2 u''(z) + 4 g'(x) u''(z) g'''(x) + 6 (g'(x))^2 g''(x) u'''(z) + u'(z) g''''(x) + (g'(x))^4 u''''(z)

So any place where y(z) shows up in the transformed expression, it should be written with a new letter for the dependent variable, u(z). But this is not always enforced.
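The second row of the table can be sanity-checked numerically. The sketch below (my own, not from the notes) uses u(z) = sin(z) and g(x) = x^2, so y(x) = sin(x^2), and compares a finite-difference value of y''(x) against (g'(x))^2 u''(z) + u'(z) g''(x).

```python
import math

# Test case: u(z) = sin(z), z = g(x) = x^2, hence y(x) = sin(x^2).
def y(x):
    return math.sin(x**2)

x = 1.3
h = 1e-5

# centered finite difference for y''(x)
lhs = (y(x + h) - 2*y(x) + y(x - h)) / h**2

# chain-rule formula from the table: y''(x) = (g'(x))^2 u''(z) + u'(z) g''(x)
z = x**2                              # z = g(x)
gp, gpp = 2*x, 2.0                    # g'(x), g''(x)
up, upp = math.cos(z), -math.sin(z)   # u'(z), u''(z)
rhs = gp**2 * upp + up * gpp

print(abs(lhs - rhs))  # small: the transformation formula checks out
```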

3.2 Example 2 Change of the independent variable using t=ln(x) Euler ode

Given the ode

x^2 d^2y/dx^2 + 2x dy/dx + y = 0

And asked to do the change of variable t = ln(x)

dy/dx = (dy/dt)(dt/dx) = (dy/dt)(1/x)

And

d^2y/dx^2 = d/dx(dy/dx)
          = d/dx( (dy/dt)(1/x) )
          = d/dx[dy/dt](1/x) + (dy/dt) d/dx(1/x)
          = (d/dt)(dt/dx)[dy/dt](1/x) - (dy/dt)(1/x^2)
          = (dt/dx)(d^2y/dt^2)(1/x) - (dy/dt)(1/x^2)
          = (1/x)(d^2y/dt^2)(1/x) - (dy/dt)(1/x^2)
          = (1/x^2)(d^2y/dt^2) - (dy/dt)(1/x^2)

Hence the original ode becomes

x^2( (1/x^2)(d^2y/dt^2) - (1/x^2)(dy/dt) ) + 2x( (dy/dt)(1/x) ) + y = 0
d^2y/dt^2 - dy/dt + 2 dy/dt + y = 0
d^2y/dt^2 + dy/dt + y = 0

3.3 Example 3 Change of the dependent variable using y=x^r Euler ode

Given the ode

x^2 d^2y/dx^2 + 2x dy/dx + y = 0

And asked to do the change of variable y = x^r

dy/dx = r x^(r-1)

And

d^2y/dx^2 = d/dx( r x^(r-1) ) = r(r-1) x^(r-2)

Hence the original ode becomes

x^2( r(r-1) x^(r-2) ) + 2x( r x^(r-1) ) + x^r = 0
r(r-1) x^r + 2r x^r + x^r = 0
r(r-1) + 2r + 1 = 0

Solving for r gives the roots. Hence the solutions are y1 = x^r1 and y2 = x^r2. The final solution is therefore

y = c1 y1 + c2 y2 = c1 x^r1 + c2 x^r2

This method of solving the Euler ode is much simpler than using the t = ln(x) change of variables, but for some reason most textbooks use the latter.
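The steps above can be sketched in code (a check of mine, not from the notes): compute the roots of the characteristic equation r(r-1) + 2r + 1 = r^2 + r + 1 = 0 and verify that y = x^r satisfies the ode. The roots here are complex, so complex arithmetic is used.

```python
import cmath

# characteristic equation r^2 + r + 1 = 0 from x^2 y'' + 2x y' + y = 0
a, b, c = 1, 1, 1
disc = cmath.sqrt(b*b - 4*a*c)
r1 = (-b + disc) / (2*a)
r2 = (-b - disc) / (2*a)

def residual(r, x):
    # plug y = x^r into x^2 y'' + 2x y' + y  (x > 0)
    y = x**r
    yp = r * x**(r - 1)
    ypp = r * (r - 1) * x**(r - 2)
    return x**2 * ypp + 2*x*yp + y

res1 = abs(residual(r1, 2.0))
res2 = abs(residual(r2, 2.0))
print(res1, res2)  # both ~0
```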

4 Changing the role of independent and dependent variable in an ode

(added Dec 14, 2024).

Given an ode y'(x) = f(x,y), we want to change it so that instead of y(x) being the dependent variable, x(y) is the dependent variable. For example, given the ode

d^2y/dx^2 = (dy/dx) e^y(x)

The new ode becomes

d^2x/dy^2 = -(dx/dy)^2 e^y

Which is easier to solve for x(y). Once solved, we flip back and find y from the solution. Sometimes this trick can make solving a hard ode very easy. It can also make solving an easy ode very hard. The only way to find out is to try it. So if we have an ode that we are having a hard time solving, we can try this trick.

For a first order ode, the method is easy. We just isolate dy/dx, then flip the left hand side and flip the right hand side, and change all y(x) to just y and all x to x(y).

More formally, this can also be done using a change of variables, like this. The first step is to do the change of variables

x = v(t)
y = t

If we carry out the above change of variables, the new ode will be in terms of v(t), v'(t) and so on.

Now we replace all the v^(n)(t) with x^(n)(y), where n here is the order of the derivative. And replace any t by y (not y(x), just y). And replace any v(t) by x. The new ode will be the flipped ode.

When we do the above change of variables using the chain rule, this results in the following

dy/dx --> 1/(dv/dt) = dt/dv
d^2y/dx^2 --> -(d^2v/dt^2) / (dv/dt)^3
d^3y/dx^3 --> ( 3(d^2v/dt^2)^2 - (dv/dt)(d^3v/dt^3) ) / (dv/dt)^5

And so on. Once the above is done, the rest is easy. We just replace any dv/dt by dx/dy, any t by y and any v by x. We will not change roles for odes higher than order two in these examples.
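The second formula in the list (after renaming, d^2x/dy^2 = -y''/(y')^3) can be checked numerically. A sketch of mine using y(x) = e^x, whose inverse is x(y) = ln(y):

```python
import math

x0 = 0.7
y0 = math.exp(x0)

# right hand side: -y''/(y')^3 with y = exp(x), so y' = y'' = exp(x)
yp = math.exp(x0)
ypp = math.exp(x0)
rhs = -ypp / yp**3

# left hand side: d^2x/dy^2 of the inverse function x(y) = ln(y),
# via a centered finite difference in y
h = 1e-5
lhs = (math.log(y0 + h) - 2*math.log(y0) + math.log(y0 - h)) / h**2

print(abs(lhs - rhs))  # small: the identity holds
```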

4.1 Example 1

Change the roles for the ode

dy/dx = x

This has solution

y(x) = (1/2)x^2 + c1    (1)

Since this is first order, we can do it the easy way without a change of variables. Flip the left side and flip the right side and do the renaming

dx/dy = 1/x

If we want to do it via change of variables, the method is: Let

x = v(t)
y = t

Then

dy/dx = (dy/dt)(dt/dv)(dv/dx)

But dy/dt = 1 and the above becomes

dy/dx = (dt/dv)(dv/dx)

And dv/dx = 1 and the above becomes

dy/dx = dt/dv

Hence the ode becomes

dt/dv = v(t)
dv/dt = 1/v

Now we replace v(t) by x(y) and v by x. The above becomes (which is the flipped ode)

dx/dy = 1/x

Solving for x(y) gives

x1 = sqrt(2y + c1)
x2 = -sqrt(2y + c1)

Let's take the first solution and solve for y. This gives

x^2 = 2y + c1
y = (1/2)x^2 - (1/2)c1
  = (1/2)x^2 + c1    (2)

Which is the same as (1). Of course, in this example there is no point in changing the roles, but this was just an example.

4.2 Example 2

Change the roles for the ode

dy/dx = e^y

This has solution

y(x) = ln( 1/(-x + c1) )    (1)

Since this is first order, we will do it the easy way. Flip the left side and flip the right side and do the renaming. This gives

dx/dy = e^(-y)

Solving this gives

x = -e^(-y) + c1

Solving for y gives

-x + c1 = e^(-y)
ln(-x + c1) = -y
y = -ln(-x + c1)
  = ln( 1/(-x + c1) )

Which is the same as (1).

4.3 Example 3

Change the roles for the ode

y ln y + (x - ln y) dy/dx = 0    (1)

Solving the above gives

y1 = e^(x - sqrt(x^2 - 2c1))
y2 = e^(x + sqrt(x^2 - 2c1))    (2)

Since this is first order, we will do it the easy way. First isolate dy/dx, then flip the left side and the right side and rename. Solving for dy/dx from (1) gives

dy/dx = -y ln y / (x - ln y)

Flipping

dx/dy = -(x - ln y)/(y ln y) = 1/y - x/(y ln y)
dx/dy + x/(y ln y) = 1/y    (3)

In this example, we see that changing roles really paid off, as Eq. (3) is a linear ode in x(y), but (1) is very hard to solve for y(x) and needs Lie symmetry to solve it. Solving (3) gives

x = ln(y)/2 + c1/ln(y)

Solving the above for y gives the same solutions as (2).
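A quick numerical check (a sketch; the constant c1 = 3 and sample point y = 5 are arbitrary choices of mine) that x(y) = ln(y)/2 + c1/ln(y) really satisfies the flipped linear ode dx/dy + x/(y ln y) = 1/y:

```python
import math

c1 = 3.0

def x(y):
    return math.log(y)/2 + c1/math.log(y)

y0 = 5.0
h = 1e-6
dxdy = (x(y0 + h) - x(y0 - h)) / (2*h)   # numerical dx/dy
residual = dxdy + x(y0)/(y0*math.log(y0)) - 1/y0
print(abs(residual))  # ~0
```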

4.4 Example 4

Change the roles for the ode

d^2y/dx^2 = (dy/dx) e^y(x)    (1)

This has solution

y(x) = ln( c1/(1 - e^(c1 x - c1 c2)) )    (2)

Since this is not first order, we can not do the easy method as with first order odes; we have to do the change of variables, since with the second derivative it is more complicated. Let

x = v(t)
y = t

Using the rules given above, we know that

dy/dx = 1/(dv/dt)
d^2y/dx^2 = -(d^2v/dt^2)/(dv/dt)^3    (3)

Substituting (3) into (1) (and changing any y(x) to t and any x to v(t)) gives

-(d^2v/dt^2)/(dv/dt)^3 = ( 1/(dv/dt) ) e^t
-d^2v/dt^2 = (dv/dt)^3 ( 1/(dv/dt) ) e^t
d^2v/dt^2 = -(dv/dt)^2 e^t

We now replace each dv/dt by dx/dy and each t by y. The above becomes

d^2x/dy^2 = -(dx/dy)^2 e^y

And the above is the final flipped ode. The solution is

x = -(1/c1) ln(e^y) + (1/c1) ln(e^y - c1) + c2

To obtain y as a function of x, we just isolate y from the above.

c1 x = -ln(e^y) + ln(e^y - c1) + c1 c2
c1 x - c1 c2 = ln( (e^y - c1)/e^y )
e^(c1 x - c1 c2) = (e^y - c1)/e^y
e^(c1 x - c1 c2) = 1 - c1 e^(-y)
1 - e^(c1 x - c1 c2) = c1 e^(-y)
(1 - e^(c1 x - c1 c2))/c1 = e^(-y)
y = -ln( (1 - e^(c1 x - c1 c2))/c1 )
y = ln( c1/(1 - e^(c1 x - c1 c2)) )

Which is the solution to the original ode obtained by first flipping the ode.

4.5 Example 5

Change the roles for the ode

1 - xy(1 - xy^2) dy/dx = 0    (1)

As this stands, it is hard to solve, as it needs Lie symmetry. The solution is

y1 = (1/x) sqrt( x( 2x LambertW( -(1/2)c1 e^(-(2x+1)/(2x)) ) + 2x + 1 ) )
y2 = -(1/x) sqrt( x( 2x LambertW( -(1/2)c1 e^(-(2x+1)/(2x)) ) + 2x + 1 ) )

By flipping roles, the ode becomes Bernoulli, which is much easier. Since this is first order, we will use the easy method. First we isolate dy/dx from (1), then flip both sides and rename. Solving for dy/dx in (1) gives

dy/dx = 1/( xy(1 - xy^2) )

Flipping and renaming y(x) to y and x to x(y) gives

dx/dy = xy - x^2 y^3

This is in the form

x' = Px + Qx^n

Where n = 2 here. Hence Bernoulli, which is easily solved. The solution is

x = 1/( y^2 - 2 + c1 e^(-y^2/2) )

The last step is to solve for y as a function of x.

x( y^2 - 2 + c1 e^(-y^2/2) ) = 1
x y^2 - 2x + c1 x e^(-y^2/2) = 1
c1 e^(-y^2/2) + y^2 = (1 + 2x)/x

Solving for y from the above gives the same answer as above. This is an example where flipping roles paid off well. But the only way to know is to try it and see.

4.6 Example 6

Change the roles for the ode

(1 - 4xy^2) dy/dx = y^3    (1)

As this stands, this is homogeneous class G. The solution is

y1 = (1/(2x)) sqrt( x(1 + sqrt(16 c1 x + 1)) )
y2 = -(1/(2x)) sqrt( x(1 + sqrt(16 c1 x + 1)) )
y3 = (1/(2x)) sqrt( x(1 - sqrt(16 c1 x + 1)) )
y4 = -(1/(2x)) sqrt( x(1 - sqrt(16 c1 x + 1)) )

By flipping roles, the ode becomes linear, which is much easier to solve. Since this is first order, we will use the easy method. First we isolate dy/dx from (1), then flip both sides and rename. Solving for dy/dx in (1) gives

dy/dx = y^3/(1 - 4xy^2)

Flipping and renaming y(x) to y and x to x(y) gives

dx/dy = (1 - 4xy^2)/y^3 = 1/y^3 - 4x/y

Or

dx/dy + (4/y)x = 1/y^3

Which is a linear ode in x(y). Solving gives

x = (1/y^4)( y^2/2 + c1 )

The last step is to solve for y, which will give the same solution as above.
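Same kind of numerical check as before (a sketch of mine, with arbitrary c1 and sample point): the solution x(y) = (1/y^4)(y^2/2 + c1) satisfies dx/dy + (4/y)x = 1/y^3.

```python
c1 = 2.0

def x(y):
    return (y**2/2 + c1) / y**4

y0 = 1.7
h = 1e-6
dxdy = (x(y0 + h) - x(y0 - h)) / (2*h)   # numerical dx/dy
residual = dxdy + 4*x(y0)/y0 - 1/y0**3
print(abs(residual))  # ~0
```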

4.7 Example 7

Change the roles for the ode

dy/dx = x/( y^2 x^2 + y^5 )    (1)

As this stands, this can be solved using Lie symmetry, or as an exact ode but with an integrating factor that needs to be found first. The solution is

y1 = (1/2)( -8x^2 - 12 LambertW( -(4/3)c1 e^(-(2/3)x^2 - 1) ) - 12 )^(1/3)

And 2 more (too long to type). By flipping roles the new ode becomes

dx/dy = (y^2 x^2 + y^5)/x = x y^2 + y^5 x^(-1)

This has the form

x' = P(y) x + Q(y) x^n

Which is a Bernoulli ode, which is simpler to solve. Solving gives

x = (1/2) sqrt( -6 - 4y^3 + 4 c1 e^((2/3)y^3) )
x = -(1/2) sqrt( -6 - 4y^3 + 4 c1 e^((2/3)y^3) )

Finally, we solve for y from the above. This will give the same solutions as above.

5 general notes

Some rules to remember. This is in the real domain

  1. sqrt(ab) = sqrt(a) sqrt(b) only for a >= 0, b >= 0. In general (ab)^(1/n) = a^(1/n) b^(1/n) for a >= 0, b >= 0, where n is a positive integer.
  2. y^2 = x implies y = sqrt(x) only when y >= 0. So be careful when squaring both sides to get rid of a square root on one side. To see this, let y = 4; then y = sqrt(16) because 4 is positive. But if we had y = -4 then we can't say that y = sqrt(16), since sqrt(16) is 4 and not -4 (we always take the positive root). So each time we square both sides of an equation to get rid of a sqrt on one side, always note that this is valid only when the other side is not negative.
  3. Generalization of the above: given (ab)^(n/m) where both n,m are integers, then (ab)^(n/m) = a^(n/m) b^(n/m) only when a >= 0, b >= 0. This applies whether n/m < 1, such as 2/3, or n/m > 1, such as 3/2. The only time we can write (ab)^n = a^n b^n for any a,b is when n is an integer (positive or negative). When the power is a ratio of integers, we can split it only under the condition that all terms are positive.
  4. sqrt(1/b) = 1/sqrt(b) only for b > 0. This can be used for example to simplify sqrt(1/(1-x^2)) sqrt(1-x^2) to 1 under the condition 1-x^2 > 0, i.e. -1 < x < 1. Because in this case the input becomes sqrt( (1-x^2)/(1-x^2) ) = 1.
  5. Generalization of the above: sqrt(a/b) = sqrt(a)/sqrt(b) only for a >= 0, b > 0.
  6. sqrt(x^2) = x only for x >= 0.
  7. Generalization of the above: (x^n)^(1/n) = x only when x >= 0 (assuming n is an integer).
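Rule 1 is easy to see in code. A small demo (mine) using Python's cmath, which takes principal square roots: splitting sqrt(ab) into sqrt(a)sqrt(b) fails when both a and b are negative.

```python
import cmath

a, b = -1.0, -1.0
lhs = cmath.sqrt(a * b)              # sqrt(1) = 1
rhs = cmath.sqrt(a) * cmath.sqrt(b)  # i * i = -1
print(lhs, rhs)  # (1+0j) vs (-1+0j): not equal
```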

Given u = u(x,y), the total differential of u is

du = (∂u/∂x) dx + (∂u/∂y) dy

A Lyapunov function is used to determine the stability of an equilibrium point. Taking this equilibrium point to be zero, suppose someone gives us a set of differential equations x'(t) = f1(x,y,z,t), y'(t) = f2(x,y,z,t), z'(t) = f3(x,y,z,t), and assume (0,0,0) is an equilibrium point. The question is, how do we determine if it is stable or not? There are two main ways to do this. One is by linearization of the system around the origin. This means we find the Jacobian matrix, evaluate it at the origin, and check the signs of the real parts of the eigenvalues. This is the common way to do this. Another method, called Lyapunov, is more direct. There is no linearization needed. But we need to do the following. We need to find a function V(x,y,z), called a Lyapunov function for the system, which meets the following conditions

  1. V(x,y,z) is a continuously differentiable function in R^3 and V(x,y,z) >= 0 (positive definite or positive semidefinite) for all x,y,z away from the origin, or everywhere inside some fixed region around the origin. This function represents the total energy of the system (for Hamiltonian systems). Hence V(x,y,z) can be zero away from the origin. But it can never be negative.
  2. V(0,0,0) = 0. This says the system has no energy when it is at the equilibrium point (rest state).
  3. The orbital derivative dV/dt <= 0 (i.e. negative definite or negative semi-definite) for all x,y,z, or inside some fixed region around the origin. The orbital derivative is the same as dV/dt along any solution trajectory. This condition says that the total energy is either constant in time (the zero case) or decreasing in time (the negative definite case). Both of these indicate that the origin is a stable equilibrium point.

If dV/dt is negative semi-definite then the origin is stable in the Lyapunov sense. If dV/dt is negative definite then the origin is an asymptotically stable equilibrium. Negative semi-definite means that when the system is perturbed away from the origin, a trajectory will remain around the origin, since its energy does not increase nor decrease. So it is stable. But an asymptotically stable equilibrium is a stronger stability. It means that when perturbed from the origin, the solution will eventually return back to the origin since the energy is decreasing. Global stability means dV/dt <= 0 everywhere, and not just in some closed region around the origin. Local stability means dV/dt <= 0 in some closed region around the origin. Global stability is stronger than local stability.

The main difficulty with this method is finding V(x,y,z). If the system is Hamiltonian, then V is the same as the total energy. Otherwise, one has to guess. Typically a quadratic function such as V = a x^2 + c xy + d y^2 is used (for a system in x,y); then we try to find a,c,d which make it positive definite everywhere away from the origin, and also, more importantly, make dV/dt <= 0. If so, we say the origin is stable. Most of the problems we had start by giving us V, then ask to show it is a Lyapunov function and what kind of stability it implies.

To determine if V is positive definite or not, the common way is to find the Hessian and check the signs of the eigenvalues. Another way is to find the Hessian and check the signs of the leading minors. For a 2x2 matrix, this means the determinant is positive and the entry (1,1) in the matrix is positive. A similar thing is done to check if dV/dt <= 0: we find the Hessian of dV/dt and do the same thing, but now we check for negative eigenvalues instead.
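For the 2x2 case the leading-minors test is a one-liner. A sketch (with a hypothetical example V = 2x^2 + 2xy + 2y^2, whose Hessian is [[4, 2], [2, 4]]):

```python
def is_positive_definite_2x2(H):
    # Sylvester's criterion for a symmetric 2x2 matrix:
    # leading minors H[0][0] and det(H) must both be positive
    a11 = H[0][0]
    det = H[0][0]*H[1][1] - H[0][1]*H[1][0]
    return a11 > 0 and det > 0

ok = is_positive_definite_2x2([[4, 2], [2, 4]])    # True
bad = is_positive_definite_2x2([[1, 3], [3, 1]])   # False: det = -8
print(ok, bad)
```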

Methods to find Green function are

  1. Fredholm theory
  2. methods of images
  3. separation of variables
  4. Laplace transform

Reference: Wikipedia. I need to make one example and apply each of the above methods to it.

In solving an ODE with constant coefficients, just use the characteristic equation to find the solution.

In solving an ODE with coefficients that are functions of the independent variable, as in y''(x) + q(x) y'(x) + p(x) y(x) = 0, first classify the type of the point x0. This means checking how p(x) and q(x) behave at x0. We are talking about the ODE here, not the solution yet.

There are 3 kinds of points. x0 can be normal, a regular singular point, or an irregular singular point. A normal point x0 means p(x) and q(x) are analytic at x0, so the solution has a Taylor series expansion y(x) = sum_{n=0}^inf a_n (x-x0)^n that converges to y(x) at x0.
A regular singular point x0 means that the above test fails, but (x-x0) q(x) has a convergent Taylor series at x0, and (x-x0)^2 p(x) also has a convergent Taylor series at x0. This also means the limits lim_{x->x0} (x-x0) q(x) and lim_{x->x0} (x-x0)^2 p(x) exist.

All this just means we can get rid of the singularity, i.e. x0 is a removable singularity. If this is the case, then the solution at x0 can be assumed to have a Frobenius series y(x) = sum_{n=0}^inf a_n (x-x0)^(n+α), where a0 ≠ 0 and α is a root of the Frobenius indicial equation. There are three cases to consider. See https://math.usask.ca/~cheviakov/courses/m338/text/Frobenius_Case3_ill.pdf for more discussion on this.

The third type of point is the hard one, called an irregular singular point. We can't get rid of it using the above. So we also say the ODE has an essential singularity at x0 (another fancy name for an irregular singular point). What this means is that we can't approximate the solution at x0 using either a Taylor or a Frobenius series.

If the point is an irregular singular point, then use the methods of asymptotics. See Advanced Mathematical Methods for Scientists and Engineers chapter 3. For a normal point, use y(x) = sum_{n=0}^inf a_n x^n; for a regular singular point use y(x) = sum_{n=0}^inf a_n x^(n+r). Remember to solve for r first. This should give two values. If you get one root, then use reduction of order to find the second solution.

An asymptotic series S(z) = c0 + c1/z + c2/z^2 + ... is a series expansion of f(z) which gives a good and rapid approximation for large z, as long as we know when to truncate S(z) before it becomes divergent. This is the main difference between an asymptotic series expansion and a Taylor series expansion.

S(z) is used to approximate a function for large z, while a Taylor (or power) series is used for local approximation, i.e. small distances away from the point of expansion. S(z) will become divergent, hence it needs to be truncated at some n to be used, where n is the number of terms in S_n(z). It is optimally truncated when n ≈ |z|^2.

S(z) has the following two important properties

  1. lim_{|z|->inf} z^n ( f(z) - S_n(z) ) = 0 for fixed n.
  2. lim_{n->inf} z^n ( f(z) - S_n(z) ) = inf for fixed z.

We write S(z) ~ f(z) when S(z) is the asymptotic series expansion of f(z) for large z. The most common method to find S(z) is integration by parts. At least this is what we did in the class I took.

For a Taylor series, the leading behavior is a0 and there is no controlling factor. For a Frobenius series, the leading behavior term is a0 x^α and the controlling factor is x^α. For an asymptotic series, the controlling factor is assumed to be e^(S(x)) always, as proposed by Carlini (1817).

Method to find the leading behavior of the solution y(x) near irregular singular point using asymptotic is called the dominant balance method.

When solving ε y'' + p(x) y' + q(x) y = 0 for very small ε, use the WKB method if there is no boundary layer between the boundary conditions. If the ODE is non-linear, we can't use WKB and have to use boundary layer (B.L.) analysis. Example: ε y'' + y y' - y = 0 with y(0) = 0, y(1) = 2; then use B.L.

A good exercise is to solve, say, ε y'' + (1+x) y' + y = 0 with y(0) = y(1) using both B.L. and WKB and compare the solutions; they should come out the same: y ~ 2/(1+x) - e^(-x/ε - x^2/(2ε)) + O(ε). With B.L. one has to do the matching between the outer and the inner solutions. WKB is easier. But it can't be used for a non-linear ODE.

When there is rapid oscillation over the entire domain, WKB is better. Use WKB to solve the Schrodinger equation, where ε becomes a function of Planck's constant (6.62606957e-34 m^2 kg/s).

In a second order ODE with non-constant coefficients, y''(x) + p(x) y'(x) + q(x) y(x) = 0, if we know one solution y1(x), then a method called reduction of order can be used to find the second solution y2(x). Write y2(x) = u(x) y1(x), plug this into the ODE, and solve for u(x). The final solution will be y(x) = c1 y1(x) + c2 y2(x). Now apply the I.C.'s to find c1, c2.

To find a particular solution to y''(x) + p(x) y'(x) + q(x) y(x) = f(x), we can use a method called undetermined coefficients. But a better method is called variation of parameters. In this method, assume yp(x) = u1(x) y1(x) + u2(x) y2(x), where y1(x), y2(x) are the two linearly independent solutions of the homogeneous ODE and u1(x), u2(x) are to be determined. This ends up with u1(x) = -∫ y2(x) f(x)/W dx and u2(x) = ∫ y1(x) f(x)/W dx. Remember to put the ODE in standard form first, so a = 1 in a y''(x) + .... Here W is the Wronskian, W = y1(x) y2'(x) - y1'(x) y2(x).
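A worked instance of variation of parameters (a sketch of mine, not from the notes): for y'' + y = x we have y1 = cos x, y2 = sin x, W = 1, and the integrals u1 = -∫ x sin x dx and u2 = ∫ x cos x dx can be done in closed form (constants of integration dropped; they only add homogeneous pieces). The code forms yp and checks yp'' + yp = x numerically.

```python
import math

def u1(x):
    # -∫ x sin x dx = x cos x - sin x
    return x*math.cos(x) - math.sin(x)

def u2(x):
    # ∫ x cos x dx = cos x + x sin x
    return math.cos(x) + x*math.sin(x)

def yp(x):
    # particular solution yp = u1*y1 + u2*y2 with y1 = cos, y2 = sin
    return u1(x)*math.cos(x) + u2(x)*math.sin(x)

x0, h = 0.9, 1e-5
ypp = (yp(x0 + h) - 2*yp(x0) + yp(x0 - h)) / h**2   # numerical yp''
residual = abs(ypp + yp(x0) - x0)
print(residual)  # ~0: yp solves y'' + y = x
```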

Two solutions of y''(x) + p(x) y'(x) + q(x) y(x) = 0 are linearly independent if W(x) ≠ 0, where W is the Wronskian.

For a second order linear ODE defined over the whole real line, the Wronskian is either always zero or never zero. This comes from Abel's formula for the Wronskian, which is W(x) = k exp( -∫ B(x)/A(x) dx ) for an ODE of the form A(x) y'' + B(x) y' + C(x) y = 0. Since exp( -∫ B(x)/A(x) dx ) > 0, it is decided by k, the constant of integration. If k = 0, then W(x) = 0 everywhere; else it is not zero anywhere.
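Abel's formula can be checked on a concrete constant-coefficient example (a sketch of mine): for y'' + 3y' + 2y = 0 we have A = 1 and B = 3, so W(x) = W(0) e^(-3x); the two independent solutions are y1 = e^(-x) and y2 = e^(-2x).

```python
import math

def wronskian(x):
    y1, y1p = math.exp(-x), -math.exp(-x)
    y2, y2p = math.exp(-2*x), -2*math.exp(-2*x)
    return y1*y2p - y2*y1p

w0 = wronskian(0.0)   # = -1
err = max(abs(wronskian(x) - w0*math.exp(-3*x)) for x in (0.5, 1.0, 2.0))
print(err)  # ~0: W(x) = W(0) exp(-3x), as Abel's formula predicts
```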

For linear PDEs, if the boundary conditions are time dependent, we cannot use separation of variables. Try a transform method (Laplace or Fourier) to solve the PDE.

If unable to invert Laplace analytically, try numerical inversion or asymptotic methods. Need to find example of this.

Green function takes the homogeneous solution and the forcing function and constructs a particular solution. For PDE’s, we always want a symmetric Green’s function.

To get a symmetric Green’s function given an ODE, start by converting the ODE to a Sturm-Liouville form first. This way the Green’s function comes out symmetric.

For numerical solutions of field problems, there are basically two different problems: Those with closed boundaries and those with open boundaries but with initial conditions. Closed boundaries are elliptical problems which can be cast in the form Au=f, and the other are either hyperbolic or parabolic.

For numerical solution of elliptical problems, the basic layout is something like this:

Always start with a trial solution u(x) such that utrial(x)=∑i=0N Ciϕi(x), where the Ci are the unknowns to be determined and the ϕi are a set of linearly independent functions (polynomials) in x.

How to determine those Ci comes next. Use either a residual method (e.g. Galerkin) or a variational method (Ritz). For residual methods, form the residual R=Autrial−f; it all comes down to forcing a weighted integral of R to vanish over the domain. This is a picture

|
+---------------+------------------------------------------+
|                                                          |
residual                                   Variational (sub u_trial in I(u),
|                                          where I(u) is the functional to minimize)
|
+----------------+-------------+-------------+
|                |             |             |
absolute error   collocation   subdomain     orthogonality
                                             |
                              +--------------+------------+
                              |              |            |
                       method of moments   Galerkin   least squares

Geometric probability distribution. Use when you want an answer to the question: what is the probability that we have to repeat an experiment N times to finally get the outcome we are looking for, given that the outcome has probability p of showing up in one experiment.

For example: what is the probability that we have to flip a fair coin N times to get the first head? The answer is P(X=N)=(1−p)N−1p. For a fair coin, p=1/2 that a head shows up in one flip. So the probability that the first head takes 10 flips is P(X=10)=(1−0.5)9(0.5)=0.00097, which is very low as expected.

To generate a random variable drawn from some distribution different from the uniform distribution, using only the uniform distribution U(0,1), do this: let us say we want to generate a random number from the exponential distribution with mean μ.

This distribution has pdf(x)=(1/μ)e−x/μ. The first step is to find the cdf of the exponential distribution, which is known to be F(x)=P(X≤x)=1−e−x/μ.

Now find the inverse of this, which is F−1(x)=−μ ln(1−x). Then generate a random number from the uniform distribution U(0,1). Let this value be called z.

Now plug this value into F−1: this gives a random number from the exponential distribution, which will be −μ ln(1−z) (obtained by solving z=F(x) for x).

This method can be used to generate random variables from any other distribution using only U(0,1). But it requires knowing the CDF and the inverse of the CDF of the other distribution. This is called the inverse CDF method. Another method is called the rejection method.
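A minimal sketch of the inverse CDF method in Python (the mean μ=2, sample count and seed are arbitrary choices for illustration):

```python
import math
import random

def sample_exponential(mu, rng):
    # Inverse CDF method: z ~ U(0,1), then F^{-1}(z) = -mu*ln(1-z)
    # is a draw from the exponential distribution with mean mu.
    z = rng.random()
    return -mu * math.log(1.0 - z)

# Note: to get a U(A,B) draw from u ~ U(0,1), the relation is v = A + (B-A)*u.
rng = random.Random(0)   # fixed seed so the run is repeatable
mu = 2.0
samples = [sample_exponential(mu, rng) for _ in range(100_000)]
mean = sum(samples) / len(samples)
print(mean)              # should be close to mu = 2
```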

Given u, a r.v. from uniform distribution over [0,1], then to obtain v, a r.v. from uniform distribution over [A,B], then the relation is v=A+(BA)u.

When solving using F.E.M., it is best to do everything using isoparametric elements (natural coordinates), then find the Jacobian of the transformation between the natural and physical coordinates to evaluate the integrals needed. For the force function, use the Gaussian quadrature method.

A solution to differential equation is a function that can be expressed as a convergent series. (Cauchy. Briot and Bouquet, Picard)

To solve a first order ODE using integrating factor.

x(t)+p(t)x(t)=f(t)

then as long as it is linear and p(t),f(t) are integrable functions in t, then follow these steps

  1. multiply the ODE by function I(t), this is called the integrating factor.

    I(t)x(t)+I(t)p(t)x(t)=I(t)f(t)
  2. We solve for I(t) such that the left side satisfies

    ddt(I(t)x(t))=I(t)x(t)+I(t)p(t)x(t)
  3. Solving the above for I(t) gives

    I′(t)x(t)+I(t)x′(t)=I(t)x′(t)+I(t)p(t)x(t) ⟹ I′(t)x(t)=I(t)p(t)x(t) ⟹ I′(t)=I(t)p(t) ⟹ dI/I=p(t)dt

    Integrating both sides gives

    ln(I)=∫p(t)dt ⟹ I(t)=e∫p(t)dt
  4. Now equation (1) can be written as

    ddt(I(t)x(t))=I(t)f(t)
    We now integrate the above to give
    I(t)x(t)=∫I(t)f(t)dt+C ⟹ x(t)=(∫I(t)f(t)dt+C)/I(t)

    Where I(t) is given by (2). Hence

    x(t)=e−∫p(t)dt∫e∫p(t)dtf(t)dt+Ce−∫p(t)dt
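As a quick sanity check of the final formula, a small Python sketch (constant p=2, f=1 and x(0)=0 are arbitrary choices; the closed form is compared against a brute-force Euler integration):

```python
import math

# Solve x'(t) + 2x = 1, x(0)=0.  Integrating factor I(t)=e^{2t} gives
# x(t) = e^{-2t}( ∫ e^{2t}·1 dt + C ) = 1/2 + C e^{-2t}, with C = -1/2.
def exact(t):
    return 0.5 - 0.5 * math.exp(-2.0 * t)

# Cross-check with forward Euler on the same ODE.
dt, t, x = 1e-4, 0.0, 0.0
while t < 1.0 - 1e-12:
    x += dt * (1.0 - 2.0 * x)   # x' = f - p*x
    t += dt
print(x, exact(1.0))            # the two values should agree closely
```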
    A polynomial is called ill-conditioned if we make small change to one of its coefficients and this causes large change to one of its roots.

To find the rank of a matrix A by hand, find the row echelon form, then count how many zero rows there are, and subtract that from the number of rows n.

To find a basis of the column space of A, find the row echelon form and pick the columns with the pivots; these (as columns of the original A) are the basis, i.e. the linearly independent columns of A.
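A sketch of this procedure in exact rational arithmetic (the example matrix is an arbitrary choice for illustration):

```python
from fractions import Fraction

def rref(rows):
    # Gauss-Jordan elimination; returns the reduced form and pivot columns.
    m = [[Fraction(v) for v in r] for r in rows]
    nrows, ncols = len(m), len(m[0])
    pivots, r = [], 0
    for c in range(ncols):
        piv = next((i for i in range(r, nrows) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        m[r] = [v / m[r][c] for v in m[r]]          # normalize pivot row
        for i in range(nrows):
            if i != r and m[i][c] != 0:             # eliminate other rows
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
    return m, pivots

A = [[1, 2, 3], [2, 4, 6], [1, 0, 1]]   # second row is twice the first
_, pivots = rref(A)
rank = len(pivots)
print(rank, pivots)   # rank 2; columns 0 and 1 of A form a column-space basis
```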

For symmetric matrix A, its second norm is its spectral radius ρ(A) which is the largest eigenvalue of A (in absolute terms).

The eigenvalues of the inverse of matrix A is the inverse of the eigenvalues of A.

If matrix A is of order n×n and it has n distinct eigenvalues, then it can be diagonalized A=VΛV−1, where

Λ=diag(λ1,…,λn)

and V is the matrix that has the n eigenvectors as its columns.
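A small numeric check of A=VΛV−1 on a hand-computable example (the matrix and its eigenpairs below are a standard illustration, not from the text):

```python
# A = [[2,1],[1,2]] has distinct eigenvalues 3 and 1,
# with eigenvectors (1,1) and (1,-1).
A = [[2.0, 1.0], [1.0, 2.0]]
lam = [3.0, 1.0]                   # Λ holds the eigenvalues themselves
V = [[1.0, 1.0], [1.0, -1.0]]      # eigenvectors as columns
det = V[0][0] * V[1][1] - V[0][1] * V[1][0]
Vinv = [[ V[1][1] / det, -V[0][1] / det],
        [-V[1][0] / det,  V[0][0] / det]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

L = [[lam[0], 0.0], [0.0, lam[1]]]
R = matmul(matmul(V, L), Vinv)     # should reconstruct A
print(R)
```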

limkx1x2fk(x)dx=x1x2limkfk(x)dx only if fk(x) converges uniformly over [x1,x2].

A3=I has an infinite number of solutions A. Think of A3 as 3 rotations, each of 120∘, going back to where we started. The rotation can be about any straight line (axis), hence an infinite number of solutions.

How to integrate I=∫(√(x3−1)/x)dx.

Let u=x3−1, then du=3x2dx and, since x3=u+1, the above becomes

I=∫(√u/3x3)du=(1/3)∫(√u/(u+1))du

Now let u=tan2v, i.e. √u=tanv, hence du=2tanv sec2v dv and the above becomes

I=(1/3)∫(tanv/(tan2v+1))2tanv sec2v dv=(2/3)∫(tan2v/(tan2v+1))sec2v dv

But tan2v+1=sec2v hence

I=(2/3)∫tan2v dv=(2/3)(tanv−v)

Substituting back tanv=√u

I=(2/3)(√u−arctan(√u))

Substituting back u=x3−1

I=(2/3)(√(x3−1)−arctan(√(x3−1)))
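A quick numerical check that this result differentiates back to the integrand (the sample point x=2 is an arbitrary choice):

```python
import math

def F(x):
    # candidate antiderivative of sqrt(x^3 - 1)/x
    s = math.sqrt(x**3 - 1.0)
    return (2.0 / 3.0) * (s - math.atan(s))

def integrand(x):
    return math.sqrt(x**3 - 1.0) / x

x0, h = 2.0, 1e-6
deriv = (F(x0 + h) - F(x0 - h)) / (2.0 * h)   # central difference
print(deriv, integrand(x0))                    # both ≈ sqrt(7)/2
```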

(added Nov. 4, 2015) Made small diagram to help me remember long division terms used.

(figure: long division terms)

If a linear ODE is equidimensional, as in anxny(n)+an−1xn−1y(n−1)+⋯, for example x2y″−2y=0, then use the ansatz y=xr. This will give an equation in r only. Solve for r to obtain y1=xr1,y2=xr2 and the solution will be

y=c1y1+c2y2

For example, for the above ode, r(r−1)−2=0 gives r=2,r=−1, so the solution is c1x2+c2x−1. This ansatz works only if the ODE is equidimensional. So we can’t use it on xy″+y=0 for example.

If r is multiple root, use xr,xrlog(x),xr(log(x))2 as solutions.

For xi, where i=√−1, write x=elog x, hence xi=ei log x=cos(log x)+i sin(log x)

Some integral tricks: for ∫√(a2−x2)dx use x=a sinθ. For ∫√(a2+x2)dx use x=a tanθ, and for ∫√(x2−a2)dx use x=a secθ.

y″+xny=0 is called the Emden-Fowler form.

For second order ODE, boundary value problem, with eigenvalue (Sturm-Liouville), remember that having two boundary conditions is not enough to fully solve it.

One boundary condition is used to find the first constant of integration, and the second boundary condition is used to find the eigenvalues.

We still need another input to find the second constant of integration. This is normally done by giving the initial value. This problem happens as part of initial value, boundary value problem. The point is, with boundary value and eigenvalue also present, we need 3 inputs to fully solve it. Two boundary conditions is not enough.

If given ODE y″(x)+p(x)y′(x)+q(x)y(x)=0 and we are asked to classify whether it is singular at x=∞, then let x=1/t and check what happens at t=0. The d2/dx2 operator becomes (2t3 d/dt+t4 d2/dt2) and the d/dx operator becomes −t2 d/dt. Write the ode with t as the independent variable, and follow the standard procedure, i.e. look at limt→0 t p̃(t) and limt→0 t2 q̃(t) (where p̃,q̃ are the transformed coefficients found below) and see if these are finite or not. To see how the operators are mapped, always start with x=1/t, so dt/dx=−t2, and write d/dx=(dt/dx)(d/dt)=−t2 d/dt and d2/dx2=(d/dx)(d/dx). For example,

d2/dx2=(−t2 d/dt)(−t2 d/dt)=t2(2t d/dt+t2 d2/dt2)=2t3 d/dt+t4 d2/dt2

Then the new ODE becomes

(2t3 d/dt+t4 d2/dt2)y(t)+p(1/t)(−t2 d/dt y(t))+q(1/t)y(t)=0
t4 d2y/dt2+(2t3−t2p(1/t))dy/dt+q(1/t)y=0
d2y/dt2+((2/t)−(p(1/t)/t2))dy/dt+(q(1/t)/t4)y=0

The above is how the ODE will always look after the transformation. Remember to change p(x) to p(1/t) using x=1/t and the same for q(x). Now the new p is p̃(t)=(2/t)−(p(1/t)/t2) and the new q is q̃(t)=q(1/t)/t4. Then check limt→0 t p̃(t) and limt→0 t2 q̃(t) as before.

If the ODE a(x)y″+b(x)y′+c(x)y=0, over say 0≤x≤1, has an essential singularity at either end, then use boundary layer or WKB. But the boundary layer method works on non-linear ODE’s (and also on linear ODE’s) and only if the boundary layer is at an end of the domain, i.e. at x=0 or x=1.

WKB method on the other hand, works only on linear ODE, but the singularity can be any where (i.e. inside the domain). As rule of thumb, if the ODE is linear, use WKB. If the ODE is non-linear, we must use boundary layer.

Another difference, is that with boundary layer, we need to do matching phase at the interface between the boundary layer and the outer layer in order to find the constants of integrations. This can be tricky and is the hardest part of solving using boundary layer.

Using WKB, no matching phase is needed. We apply the boundary conditions to the whole solution obtained. See my HWs for NE 548 for problems solved from Bender and Orszag text book.

In numerical analysis, to find if a scheme will converge, check that it is stable and that it is consistent.

It could also be conditionally stable, or unconditionally stable, or unstable.

To check that it is consistent is the same as finding the LTE (local truncation error) and checking that as the time step and the space step both go to zero, the LTE goes to zero. What is the LTE? You take the scheme and plug the exact solution into it. An example is better to explain this part. Let us solve ut=uxx. Using forward difference in time and centered difference in space, the (explicit) numerical scheme is

Ujn+1=Ujn+kh2(Uj1n2Ujn+Uj+1n)

The LTE is the difference between the two sides of the scheme (the error)

LTE=Ujn+1−(Ujn+kh2(Uj−1n−2Ujn+Uj+1n))

Now plug in u(tn,xj) in place of Ujn, u(tn+k,xj) in place of Ujn+1, u(tn,xj+h) in place of Uj+1n and u(tn,xj−h) in place of Uj−1n in the above. It becomes

(1) LTE=u(tn+k,xj)−(u(tn,xj)+(k/h2)(u(tn,xj−h)−2u(tn,xj)+u(tn,xj+h)))

Where in the above k is the time step (also written as Δt) and h is the space step size. Now comes the main trick. Expanding the term u(tn+k,xj) in Taylor,

(2)u(tn+k,xj)=u(tn,xj)+kut|tn+k222ut2|tn+O(k3)

And expanding

(3)u(tn,xj+h)=u(tn,xj)+hux|xj+h222ux2|xj+O(h3)

And expanding

(4)u(tn,xjh)=u(tn,xj)hux|xj+h222ux2|xjO(h3)

Now plug-in (2,3,4) back into (1). Simplifying, many things drop out, and we should obtain that

LTE=O(k)+O(h2)

Which says that LTE0 as h0,k0. Hence it is consistent.

To check that it is stable, use the Von Neumann method for stability. This checks that the solution at the next time step does not become larger than the solution at the current time step. There can be a condition for this, such as: it is stable if k/h2≤1/2. This says that using this scheme, it will be stable as long as the time step k is smaller than h2/2. This makes the time step much smaller than the space step.
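A minimal sketch of this stability condition (grid sizes are arbitrary choices; r denotes k/h2, with r=0.4 satisfying the condition and r=0.6 violating it):

```python
# Explicit forward-time centered-space scheme for u_t = u_xx,
# with zero boundary conditions and an impulse as initial data
# (the impulse excites all spatial modes, including the unstable one).
def run(r, steps, npts=11):
    u = [0.0] * npts
    u[npts // 2] = 1.0
    for _ in range(steps):
        u = [0.0] + [u[j] + r * (u[j - 1] - 2 * u[j] + u[j + 1])
                     for j in range(1, npts - 1)] + [0.0]
    return max(abs(v) for v in u)

stable = run(0.4, 100)     # r <= 1/2: solution stays bounded (decays)
unstable = run(0.6, 100)   # r > 1/2: solution blows up
print(stable, unstable)
```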

For ax2+bx+c=0, with roots α,β then the relation between roots and coefficients is

α+β=baαβ=ca

Leibniz rules for integration

d/dx ∫a(x)b(x) f(t)dt=f(b(x))b′(x)−f(a(x))a′(x)
d/dx ∫a(x)b(x) f(t,x)dt=f(b(x),x)b′(x)−f(a(x),x)a′(x)+∫a(x)b(x) ∂f(t,x)/∂x dt

∫ab f(x)dx=∫ab f(a+b−x)dx

Differentiable function implies continuous. But continuous does not imply differentiable. Example is |x| function.

Mean curvature being zero is a characteristic of minimal surfaces.

How to find the phase difference between 2 signals x1(t),x2(t)? One way is to find the DFT of both signals (in Mathematica this is Fourier, in Matlab fft()), then find the bin where the peak frequency is located (in either output), then find the phase difference between the 2 bins at that location. The value of the DFT at that bin is a complex number. Use Arg in Mathematica to find its phase. The difference gives the phase difference between the original signals in the time domain. See https://mathematica.stackexchange.com/questions/11046/how-to-find-the-phase-difference-of-two-sampled-sine-waves for an example.
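A minimal stdlib sketch of this idea (the signal frequency, record length and the 0.7 rad offset are arbitrary choices; a real implementation would use an fft routine rather than computing one bin directly):

```python
import cmath
import math

N, k, phase = 100, 5, 0.7   # k whole cycles per record; known phase offset
x1 = [math.sin(2 * math.pi * k * n / N) for n in range(N)]
x2 = [math.sin(2 * math.pi * k * n / N + phase) for n in range(N)]

def dft_bin(x, k):
    # single DFT bin, same sign convention as numpy.fft / Mathematica Fourier
    return sum(v * cmath.exp(-2j * math.pi * k * n / len(x))
               for n, v in enumerate(x))

dphi = cmath.phase(dft_bin(x2, k)) - cmath.phase(dft_bin(x1, k))
print(dphi)   # recovers the 0.7 rad offset
```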

Watch out when squaring both sides of an equation. For example, given y=√x, squaring both sides gives y2=x. But this is only true for y≥0. Why? Let us take the square root of this in order to get back to the original equation. This gives √(y2)=√x. And here is the problem: √(y2)=y only for y≥0. Why? Let us assume y=−1. Then √(y2)=√((−1)2)=√1=1, which is not −1. So when squaring both sides of an equation, remember this condition.

Do not replace √(x2) by x, but by |x|, since x=√(x2) only for non negative x.

Given an equation, and we want to solve for x. We can square both sides in order to get rid of a square root on one side. But be careful: even though the new equation is still true after squaring, it can have extraneous solutions that do not satisfy the original equation. Here is an example I saw on the internet which illustrates this. Given √x=x−6, and we want to solve for x. Squaring both sides gives x=(x−6)2. This has solutions x=9,x=4. But only x=9 is a valid solution of the original equation before squaring. The solution x=4 is extraneous. So we need to check all solutions found after squaring against the original equation, and remove the extraneous ones. In summary, if a2=b2 this does not mean that a=b. But if a=b then it means that a2=b2. For example (−5)2=52, but −5≠5.

How to find Laplace transform of product of two functions?

There is no formula for the Laplace transform of product f(t)g(t). (But if this was convolution, it is different story). But you could always try the definition and see if you can integrate it. Since L(f(t))=0estf(t)dt then L(f(t)g(t))=0estf(t)g(t)dt. Hence for f(t)=eat,g(t)=t this becomes

L(teat)=0estteatdt=0tet(sa)dt

Let z=s−a, then

L(teat)=0tetzdt=Lz(t)=1z2=1(sa)2

Similarly for f(t)=eat,g(t)=t2

L(t2eat)=0estt2eatdt=0t2et(sa)dt

Let z=s−a, then

L(t2eat)=∫0∞t2e−tzdt=Lz(t2)=2/z3=2/(s−a)3

Similarly for f(t)=eat,g(t)=t3

L(t3eat)=∫0∞e−stt3eatdt=∫0∞t3e−t(s−a)dt

Let z=s−a, then

L(t3eat)=∫0∞t3e−tzdt=Lz(t3)=6/z4=6/(s−a)4

And so on. Hence we see that for f(t)=eat,g(t)=tn

L(tneat)=n!/(s−a)n+1
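A quick numerical check of this formula for one case (s=3, a=1, n=2 chosen arbitrarily, so the integral should equal 2!/(3−1)3=0.25):

```python
import math

# Numerically evaluate ∫_0^∞ e^{-st} t^2 e^{at} dt with s=3, a=1
# using the trapezoid rule on a truncated domain (the tail is negligible).
s, a = 3.0, 1.0
h, T = 1e-3, 40.0
ts = [i * h for i in range(int(T / h) + 1)]
f = [t**2 * math.exp(-(s - a) * t) for t in ts]
integral = h * (sum(f) - 0.5 * (f[0] + f[-1]))
print(integral)   # ≈ 2/(s-a)^3 = 0.25
```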

6 Converting first order ODE which is homogeneous to separable ODE

(Added July, 2017).

If the ODE M(x,y)+N(x,y)dydx=0 has both M and N homogeneous functions of the same degree, then this ODE can be converted to a separable one. Here is an example. We want to solve

(1)(x3+8x2y)+(4xy2y3)y=0

The above is homogeneous in M,N, since the total degree of each term in them is 3: in M=x3+8x2y the degrees are 3 and 2+1=3, and in N=4xy2−y3 they are 1+2=3 and 3.

So we look at each term in N and M and add all the powers of x,y in it. All powers should add to the same value, which is 3 in this case. Of course N,M should be polynomials for this to work, so one should check that they are polynomials in x,y before starting this process. Once we check M,N are homogeneous, then we let

y=xv

Therefore now

M=x3+8x2(xv)=x3+8x3v  (2)

And

N=4x(xv)2−(xv)3=4x3v2−x3v3  (3)

And

y′=v+xv′  (4)

Substituting (2,3,4) into (1) gives

(x3+8x3v)+(4x3v2x3v3)(v+xv)=0(x3+8x3v)+(4x3v3x3v4)+(4x4v2x4v3)v=0

Dividing by x30 it simplifies to

(1+8v)+(4v3v4)+x(4v2v3)v=0

Which can be written as

x(4v2−v3)v′=−((1+8v)+(4v3−v4))
v′(4v2−v3)/((1+8v)+(4v3−v4))=−1/x

We see that it is now separable. We now solve this for v(x) by direct integration of both sides, and then using y=xv find y(x).
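The key property used above is that the slope y′=−M/N depends only on the ratio y/x; a quick numeric check on this example (the sample points are arbitrary, scaled by the same factor):

```python
# For (x^3 + 8x^2 y) + (4x y^2 - y^3) y' = 0, the slope y' = -M/N is
# invariant under (x,y) -> (λx, λy), which is why y = x v separates it.
def slope(x, y):
    M = x**3 + 8 * x**2 * y
    N = 4 * x * y**2 - y**3
    return -M / N

s1 = slope(1.0, 2.0)
s2 = slope(3.0, 6.0)   # same ratio y/x = 2, scaled by λ = 3
print(s1, s2)          # both equal -17/8
```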

7 Direct solving of some simple PDE’s

Some simple PDE’s can be solved by direct integration, here are few examples.

Example 1

z(x,y)x=0

Integrating w.r.t. x, and remembering that the constant of integration will now be a function of y, gives

z(x,y)=f(y)

Example 2

2z(x,y)x2=x

Integrating once w.r.t. x gives

z(x,y)x=x22+f(y)

Integrating again gives

z(x,y)=x36+xf(y)+g(y)

Example 3

2z(x,y)y2=y

Integrating once w.r.t. y gives

z(x,y)y=y22+f(x)

Integrating again gives

z(x,y)=y36+yf(x)+g(x)

Example 4

2z(x,y)xy=0

Integrating once w.r.t x gives

z(x,y)y=f(y)

Integrating again w.r.t. y gives

z(x,y)=f(y)dy+g(x)

Example 5

Solve ut+ux=0 with u(x,1)=x1+x2. Let uu(x(t),t), therefore

dudt=ut+uxdxdt

Comparing the above with the given PDE, we see that if dxdt=1 then dudt=0 or u(x(t),t) is constant. At t=1 we are given that

(1)u=x(1)1+x(1)2

To find x(1): from dx/dt=1 we obtain x(t)=t+c. At t=1, c=x(1)−1. Hence x(t)=t+x(1)−1 or

x(1)=x(t)+1−t

Hence the solution from (1) becomes

u=(x−t+1)/(1+(x−t+1)2)

Example 6

Solve ut+ux+u2=0.

Let uu(x(t),t), therefore

dudt=ut+uxdxdt

Comparing the above with the given PDE, we see that if dx/dt=1 then du/dt=−u2, or 1/u=t+c. Hence

u=1t+c

At t=0, c=1u(x(0),0). Let u(x(0),0)=f(x(0)). Therefore

u=1t+1f(x(0))

Now we need to find x(0). From dx/dt=1, x=t+c with c=x(0), hence x(0)=x−t and the above becomes

u(x,t)=1/(t+1/f(x−t))=f(x−t)/(t f(x−t)+1)
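A quick finite-difference check that this solution satisfies the PDE (f=sin and the sample point are arbitrary choices):

```python
import math

# u(x,t) = f(x-t)/(t f(x-t) + 1) should satisfy u_t + u_x + u^2 = 0.
def u(x, t):
    w = math.sin(x - t)          # f = sin is an arbitrary smooth profile
    return w / (t * w + 1.0)

x0, t0, e = 0.3, 0.4, 1e-5
ut = (u(x0, t0 + e) - u(x0, t0 - e)) / (2 * e)   # central differences
ux = (u(x0 + e, t0) - u(x0 - e, t0)) / (2 * e)
residual = ut + ux + u(x0, t0)**2
print(residual)   # ≈ 0
```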

8 Fourier series flow chart

(added Oct. 20, 2016)

(figure: Fourier series flow chart)

8.1 Theorem on when we can do term by term differentiation

If f(x) on −L≤x≤L is continuous (notice, NOT piecewise continuous), meaning f(x) has no jumps in it, and f′(x) exists on −L<x<L and f′(x) is either continuous or piecewise continuous (notice that f′(x) can be piecewise continuous (P.W.C.), i.e. have a finite number of jump discontinuities), and also, and this is very important, f(−L)=f(L), then we can do term by term differentiation of the Fourier series of f(x) and use = instead of ∼. Not only that, but the term by term differentiation of the Fourier series of f(x) will give the Fourier series of f′(x) itself.

So the main restriction here is that f(x) on −L≤x≤L is continuous (no jump discontinuities) and that f(−L)=f(L). So look at f(x) first and see if it is continuous or not (remember, the whole of f(x) has to be continuous, not piecewise, so no jump discontinuities). If this condition is met, check whether f(−L)=f(L).

For example f(x)=x on −1≤x≤1 is continuous, but f(−1)≠f(1), so the F.S. of f(x) can’t be term by term differentiated (well, it can, but the result will not be the Fourier series of f′(x)). So we should not do term by term differentiation in this case.

But the Fourier series for f(x)=x2 can be term by term differentiated, since it meets all the conditions. Also the Fourier series for f(x)=|x| can be term by term differentiated. This has its f′(x) being P.W.C. due to a jump at x=0, but that is OK, as f′(x) is allowed to be P.W.C.; it is f(x) which is not allowed to be P.W.C.

There is a useful corollary that comes from the above. If f(x) meets all the conditions above, then its Fourier series is absolutely convergent and also uniformly convergent. The M-test can be used to verify that the Fourier series is uniformly convergent.

8.2 Relation between coefficients of Fourier series of f(x) and Fourier series of f′(x)

If term by term differentiation allowed, then let

f(x)=a0/2+∑n=1∞ an cos(nπx/L)+bn sin(nπx/L)
f′(x)=α0/2+∑n=1∞ αn cos(nπx/L)+βn sin(nπx/L)

Then

αn=(nπ/L)bn
βn=−(nπ/L)an

And Bessel’s inequality instead of a022+n=1(an2+bn2)< now becomes n=1n2(an2+bn2)<. So it is stronger.

8.3 Theorem on convergence of Fourier series

If f(x) is piecewise continuous on −L<x<L and if it is periodic with period 2L and if at any point x on the entire domain −∞<x<∞ both the left sided derivative and the right sided derivative exist (but these do not have to be the same!), then we say that the Fourier series of f(x) converges, and it converges at each point to the average of the left and right limits of f(x) there, including at points that have jump discontinuities.

9 Laplacian in different coordinates

(added Jan. 10, 2019)
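The standard formulas, in Cartesian, cylindrical (r,θ,z) and spherical (r with θ the polar and ϕ the azimuthal angle) coordinates:

```latex
\nabla^2 u = \frac{\partial^2 u}{\partial x^2}
           + \frac{\partial^2 u}{\partial y^2}
           + \frac{\partial^2 u}{\partial z^2}

\nabla^2 u = \frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial u}{\partial r}\right)
           + \frac{1}{r^2}\frac{\partial^2 u}{\partial \theta^2}
           + \frac{\partial^2 u}{\partial z^2}

\nabla^2 u = \frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2\frac{\partial u}{\partial r}\right)
           + \frac{1}{r^2\sin\theta}\frac{\partial}{\partial \theta}\left(\sin\theta\,\frac{\partial u}{\partial \theta}\right)
           + \frac{1}{r^2\sin^2\theta}\frac{\partial^2 u}{\partial \phi^2}
```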

10 Linear combination of two solution is solution to ODE

If y1,y2 are two solutions to ay″+by′+cy=0 then to show that c1y1+c2y2 is also a solution:

ay1″+by1′+cy1=0
ay2″+by2′+cy2=0

Multiply the first ODE by c1 and the second ODE by c2

a(c1y1)″+b(c1y1)′+c(c1y1)=0
a(c2y2)″+b(c2y2)′+c(c2y2)=0

Add the above two equations, using linearity of differentiation

a(c1y1+c2y2)″+b(c1y1+c2y2)′+c(c1y1+c2y2)=0

Therefore c1y1+c2y2 satisfies the original ODE, hence it is a solution.

11 To find the Wronskian ODE

Since

W(x)=y1y2′−y2y1′

Where y1,y2 are two solutions to ay″+py′+cy=0. Write

ay1″+py1′+cy1=0
ay2″+py2′+cy2=0

Multiply the first ODE above by y2 and the second by y1

ay2y1″+py2y1′+cy2y1=0
ay1y2″+py1y2′+cy1y2=0

Subtract the second from the first

(1) a(y2y1″−y1y2″)+p(y2y1′−y1y2′)=0

But

(2) p(y2y1′−y1y2′)=−pW

And

(3) dW/dx=d/dx(y1y2′−y2y1′)=y1′y2′+y1y2″−y2′y1′−y2y1″=y1y2″−y2y1″

Substituting (2,3) into (1) gives the Wronskian differential equation

a(−dW/dx)−pW=0 ⟹ aW′+pW=0

Whose solution is

W(x)=Ce−∫(p/a)dx

Where C is constant of integration.

Remember: W(x0)=0 at a single point does not mean the two functions are linearly dependent; the functions can still be linearly independent on another interval. However, if the two functions are linearly dependent, then this implies W=0 everywhere. So to check that two functions are L.D., we need to show that W=0 everywhere.

12 Green functions notes

Green function is what is called impulse response in control. But it is more general, and can be used for solving PDE also.

Given a differential equation with some forcing function on the right side. To solve this, we replace the forcing function with an impulse. The solution of the DE now is called the impulse response, which is the Green’s function of the differential equation.

Now to find the solution to the original problem with the original forcing function, we just convolve the Green function with the original forcing function. Here is an example. Suppose we want to solve L[y(t)]=f(t) with zero initial conditions. Then we solve L[g(t)]=δ(t). The solution is g(t). Now y(t)=g(t)⊛f(t). This is for the initial value problem. For example, y′(t)+ky=e−at, with y(0)=0. Then we solve g′(t)+kg=δ(t). The solution is g(t)={e−kt for t>0; 0 for t<0}; this is for a causal system. Hence y(t)=g(t)⊛f(t). The nice thing here is that once we find g(t), we can solve y′(t)+ky=f(t) for any f(t) by just convolving the Green function (impulse response) with the new f(t).

We can think of Green function as an inverse operator. Given L[y(t)]=f(t), we want to find solution y(t)=G(t;τ)f(τ)dτ. So in a sense, G(t;τ) is like L1[y(t)].

Need to add notes for Green function for Sturm-Liouville boundary value ODE. Need to be clear on what boundary conditions to use. What is B.C. is not homogeneous?

Green function properties:

  1. G(t;τ) is continuous at t=τ. This is where the impulse is located.
  2. The derivative G′(t;τ) just before t=τ is not the same as just after t=τ, i.e. G′(t;τ)|t=τ−ε − G′(t;τ)|t=τ+ε ≠ 0. This means there is a jump discontinuity in the derivative at the impulse.
  3. G(t;τ) should satisfy same boundary conditions as original PDE or ODE (this is for Sturm-Liouville or boundary value problems).
  4. L[G(t;τ)]=0 for tτ
  5. G(x;τ) is symmetric. i.e. G(x;τ)=G(τ;x).

When solving for G(t;τ), in context of 1D, hence two boundary conditions, one at each end, and second order ODE (Sturm-Liouville), we now get two solutions, one for t<τ and one for t>τ.

So we have 4 constants of integrations to find (this is for second order ODE) not just two constants as normally one would get , since now we have 2 different solutions. Two of these constants from the two boundary conditions, and two more come from property of Green function as mentioned above. G(t;τ)={A1y1+A2y20<t<τA3y1+A4y2τ<t<L
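A sketch for the simplest case L=d2/dx2 with y(0)=y(1)=0 (the Green's function below is the standard one for this operator, symmetric as property 5 requires; the forcing f=1 is an arbitrary test case, whose exact solution is y=(x2−x)/2):

```python
# Green's function for y'' = f(x), y(0)=y(1)=0:
#   G(x;ξ) = x(ξ-1) for x < ξ,  ξ(x-1) for x > ξ
def G(x, xi):
    return x * (xi - 1.0) if x < xi else xi * (x - 1.0)

def solve(f, x, n=2000):
    # y(x) = ∫_0^1 G(x;ξ) f(ξ) dξ, midpoint rule
    h = 1.0 / n
    return sum(G(x, (j + 0.5) * h) * f((j + 0.5) * h) for j in range(n)) * h

x = 0.3
y = solve(lambda xi: 1.0, x)
print(y, (x * x - x) / 2.0)   # numeric vs exact particular solution
```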

13 Laplace transform notes

Remember that uc(t)f(t−c)⟺e−csF(s) and uc(t)f(t)⟺e−csL{f(t+c)}. For example, if we are given u2(t)t, then L(u2(t)t)=e−2sL{t+2}=e−2s(1/s2+2/s)=e−2s((1+2s)/s2). Do not do uc(t)f(t)⟺e−csL{f(t)}! That would be a big error. We use this a lot when asked to write a piecewise function using Heaviside functions.

14 Series, power series, Laurent series notes

If we have a function f(x) represented as a series (say a power series or Fourier series), then we say the series converges to f(x) uniformly in region D if, given ε>0, we can find a number N which depends only on ε, such that |f(x)−SN(x)|<ε.

Where here SN(x) is the partial sum of the series using N terms. The difference between uniform and non-uniform convergence is that with uniform convergence the number N depends only on ε and not on which x we are trying to approximate f(x) at. In non-uniform convergence, the number N depends on both ε and x. So this means at some locations in D we need a much larger N than at other locations to converge to f(x) with the same accuracy. Uniform convergence is better. It depends on the basis functions used to approximate f(x) in the series.

If the function f(x) is discontinuous at some point, then it is not possible to have uniform convergence there. As we get closer and closer to the discontinuity, more and more terms are needed to obtain the same accuracy as away from the discontinuity, hence the convergence is not uniform. For example, the Fourier series approximation of a step function cannot be uniformly convergent due to the discontinuity in the step function.

 Geometric series:

∑n=0N rn=1+r+r2+r3+⋯+rN=(1−rN+1)/(1−r)
∑n=1N rn=−1+∑n=0N rn=−1+(1−rN+1)/(1−r)=(r−rN+1)/(1−r)
∑n=0∞ rn=1+r+r2+r3+⋯=1/(1−r)   |r|<1
∑n=0∞ (−1)nrn=1−r+r2−r3+⋯=1/(1+r)   |r|<1

 Binomial series:

General binomial is

(x+y)n=xn+nxn1y+n(n1)2!xn2y2+n(n1)(n2)3!xn3y3+

From the above we can generate all other special cases. For example,

(1+x)n=1+nx+n(n1)x22!+n(n1)(n2)x33!+

This works for positive and negative n, rational or not. For n that is not a positive integer, the sum converges only for |x|<1. From this, we can derive the above sums for the geometric series as well. For example, for n=−1 the above becomes

1(1+x)=1x+x2x3+|x|<11(1x)=1+x+x2+x3+|x|<1

For |x|>1, we can still find series expansion in negative powers of x as follows

(1+x)n=(x(1+1x))n=xn(1+1x)n

And now, since |1/x|<1, we can use the binomial expansion on the term (1+1/x)n in the above and obtain a convergent series. This will give the following expansion

(1+x)n=xn(1+1x)n=xn(1+n(1x)+n(n1)2!(1x)2+n(n1)(n2)3!(1x)3+)

So everything is the same, we just change x with 1x and remember to multiply the whole expansion with xn.  For example, for n=1

1/(1+x)=1/(x(1+1/x))=(1/x)(1−(1/x)+(1/x)2−(1/x)3+⋯)   |x|>1
1/(1−x)=−1/(x(1−1/x))=−(1/x)(1+(1/x)+(1/x)2+(1/x)3+⋯)   |x|>1

These tricks are very useful when working with Laurent series.

 Arithmetic series:

n=1Nn=12N(N+1)n=1Nan=N(a1+aN2)

i.e. the sum is N times the arithmetic mean.

 Taylor series: Expanded around x=a is

f(x)=f(a)+(xa)f(a)+(xa)2f(a)2!+(xa)3f(3)(a)3!++Rn

Where Rn is remainder Rn=(xa)n+1(n+1)!f(n+1)(x0) where x0 is some point between x and a.

 Maclaurin series: Is just Taylor expanded around zero. i.e. a=0

f(x)=f(0)+xf(0)+x2f(0)2!+x3f(3)(0)3!+

 This diagram shows the different convergence of series and the relation between them

(figure: relation between the types of convergence of series)

The above shows that an absolutely convergent series (B) is also convergent. Also a uniformly convergent series (D) is also convergent. But series in region B are absolutely convergent and not uniformly convergent, while those in D are uniformly convergent and not absolutely convergent.

Series in C are both absolutely and uniformly convergent. And finally series in A are convergent, but not absolutely (called conditionally convergent). An example for B (converges absolutely but not uniformly) is

n=0x21(1+x2)n=x2(1+11+x2+1(1+x2)2+1(1+x2)3+)=x2+x21+x2+x2(1+x2)2+x2(1+x2)3+

And example of D (converges uniformly but not absolutely) is

∑n=1∞ (−1)n+1/(x2+n)=1/(x2+1)−1/(x2+2)+1/(x2+3)−1/(x2+4)+⋯

Example of A (converges but not absolutely) is the alternating harmonic series

n=1(1)n+11n=112+1314+

The above converges to ln(2) but absolutely it now becomes the harmonic series and it diverges

n=11n=1+12+13+14+

For uniform convergence, we really need to have an x in the series and not just numbers, since the idea behind uniform convergence is if the series convergence to within an error tolerance ε using the same number of terms independent of the point x in the region.

The series ∑n=1∞ 1/na converges for a>1 and diverges for a≤1. So a=1 is the flip value. For example

1+12+13+14+

diverges, since a=1. Also 1+1/√2+1/√3+1/√4+⋯ diverges, since a=1/2≤1. But 1+1/4+1/9+1/16+⋯ converges, where a=2 here, and the sum is π2/6.

Using partial sums: let ∑n=0∞an be some series. The partial sum is SN=∑n=0N an. Then

∑n=0∞an=limN→∞ SN

If limN→∞ SN exists and is finite, then we can say that ∑n=0∞an converges. So here we set up a sequence whose terms are the partial sums, and then look at what happens to SN in the limit as N→∞. Need to find an example where this method is easier to use to test for convergence than the other methods below.

Given a series, we are allowed to rearrange order of terms only when the series is absolutely convergent. Therefore for the alternating series 112+1314+, do not rearrange terms since this is not absolutely convergent. This means the series sum is independent of the order in which terms are added only when the series is absolutely convergent.

In an infinite series of complex numbers, the series converges, if the real part of the series and also the complex part of the series, each converges on their own.

Power series: f(z)=∑n=0∞an(z−z0)n. This series is centered at z0, or expanded around z0. It has radius of convergence R if the series converges for |z−z0|<R and diverges for |z−z0|>R.

Tests for convergence.

  1. Always start with the preliminary test. If limn→∞an does not go to zero, then no need to do anything else. The series ∑n=0∞an does not converge. It diverges. But if limn→∞an=0, it still can diverge. So this is a necessary but not sufficient condition for convergence. An example is ∑1/n. Here an→0 in the limit, but we know that this series does not converge.
  2. For uniform convergence, there is a test called the Weierstrass M test, which can be used to check if the series is uniformly convergent. But if this test fails, this does not necessarily mean the series is not uniformly convergent. It still can be uniformly convergent. (need an example).
  3. To test for absolute convergence, use the ratio test. If L=limn→∞|an+1/an|<1 then absolutely convergent. If L=1 then inconclusive. Try the integral test. If L>1 then not absolutely convergent. There is also the root test, L=limn→∞ |an|1/n.
  4. The integral test: use when the ratio test is inconclusive. The series ∑an (with positive, monotonically decreasing terms) converges if and only if ∫∞f(x)dx is finite, where f(n)=an. For example, for ∑n=1∞ ln(1+1/n), the antiderivative is ∫ln(1+1/x)dx=(1+x)ln(1+x)−x ln(x), which grows like ln(N) as the upper limit N→∞. Hence the integral diverges, and therefore the series diverges (indeed the partial sums telescope: ∑n=1N ln(1+1/n)=ln(N+1)→∞).
  5. Radius of convergence is called R=1L where L is from (3) above.
  6. Comparison test. Compare the series with one we happen to already know it converges. Let bn be a series which we know is convergent (for example 1n2), and we want to find if an converges. If all terms of both series are positive and if anbn for each n, then we conclude that an converges also.
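The integral-test example above has partial sums available in closed form, since ln(1+1/n)=ln(n+1)−ln(n) telescopes; a quick numerical check:

```python
import math

# The partial sums of Σ ln(1+1/n) equal ln(N+1) exactly (telescoping),
# so they grow without bound even though the terms go to zero.
N = 100_000
S = sum(math.log(1.0 + 1.0 / n) for n in range(1, N + 1))
print(S, math.log(N + 1))
```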

For Laurent series, let us say the singularities are at z=0 and z=1. To expand about z=0, get f(z) to look like 1/(1−z) and use the geometric series, valid for |z|<1. To expand about z=1, there are two choices, to the inside and to the outside. For the outside, i.e. |z|>1, get f(z) to have the form 1/(1−1/z), since this can be expanded for |z|>1.

We can use a power series ∑an(z−z0)n to expand f(z) around z0 only if f(z) is analytic at z0. If f(z) is not analytic at z0 we need to use a Laurent series. Think of the Laurent series as an extension of the power series to handle singularities.

14.1 Some tricks to find sums

14.1.1 Example 1

Find n=1einxn

solution Let f(x)=n=1einxn, taking derivative gives

f′(x)=i∑n=1∞einx=i∑n=1∞(eix)n=i(∑n=0∞(eix)n−1)=i/(1−eix)−i

Hence

f(x)=∫(i/(1−eix)−i)dx=i∫dx/(1−eix)−ix+C=i(x+i ln(1−eix))−ix+C=ix−ln(1−eix)−ix+C=−ln(1−eix)+C

We can set C=0 to obtain

n=1einxn=ln(1eix)
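A quick numerical check of this sum (x=1.0 is an arbitrary choice; the series converges only conditionally on the unit circle, so many terms are needed):

```python
import cmath

# Check Σ_{n>=1} e^{inx}/n against -ln(1 - e^{ix}) by brute force.
x = 1.0
N = 100_000
S = sum(cmath.exp(1j * n * x) / n for n in range(1, N + 1))
target = -cmath.log(1 - cmath.exp(1j * x))
print(abs(S - target))   # small residual, shrinking like 1/N
```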

More tricks to add...

14.2 Methods to find Laurent series

Let us find the Laurent series for f(z)=5z2z(z1). There is a singularity of order 1 at z=0 and z=1.

14.2.1 Method one

Expansion around z=0. Let

g(z)=zf(z)=5z2(z1)

This makes g(z) analytic around z=0, since g(z) does not have a pole at z=0; therefore it has a power series expansion around z=0 given by

(1)g(z)=n=0anzn

Where

an=1n!g(n)(z)|z=0

But

g(0)=2

And

g′(z)=(5(z−1)−(5z−2))/(z−1)2=−3/(z−1)2 ⟹ g′(0)=−3

And

g″(z)=(−3)(−2)/(z−1)3=6/(z−1)3 ⟹ g″(0)=6/(−1)3=−6

And

g‴(z)=6(−3)/(z−1)4=−18/(z−1)4 ⟹ g‴(0)=−18

And so on. Therefore, from (1)

g(z)=g(0)+g′(0)z+(1/2!)g″(0)z2+(1/3!)g‴(0)z3+⋯=2−3z−(6/2!)z2−(18/3!)z3−⋯=2−3z−3z2−3z3−⋯

Therefore

f(z)=g(z)/z=2/z−3−3z−3z2−⋯

The residue is 2. The above expansion is valid around z=0, up to and not including the next singularity, which is at z=1. Now we find the expansion of f(z) around z=1. Let

\[ g(z) = (z-1)\,f(z) = \frac{5z-2}{z} \]

This makes $g(z)$ analytic around $z=1$, since $g(z)$ does not have a pole at $z=1$. Therefore it has a power series expansion about $z=1$ given by

\[ g(z) = \sum_{n=0}^{\infty} a_n (z-1)^n \tag{1} \]

Where

\[ a_n = \frac{1}{n!}\,g^{(n)}(z)\Big|_{z=1} \]

But

\[ g(1) = 3 \]

And

\[ g'(z) = \frac{5z-(5z-2)}{z^2} = \frac{2}{z^2} \qquad g'(1) = 2 \]

And

\[ g''(z) = \frac{(2)(-2)}{z^3} = \frac{-4}{z^3} \qquad g''(1) = -4 \]

And

\[ g'''(z) = \frac{(-4)(-3)}{z^4} = \frac{12}{z^4} \qquad g'''(1) = 12 \]

And so on. Therefore, from (1)

\[ g(z) = g(1) + g'(1)(z-1) + \frac{1}{2!}g''(1)(z-1)^2 + \frac{1}{3!}g'''(1)(z-1)^3 + \cdots = 3 + 2(z-1) - \frac{4}{2!}(z-1)^2 + \frac{12}{3!}(z-1)^3 - \cdots = 3 + 2(z-1) - 2(z-1)^2 + 2(z-1)^3 - \cdots \]

Therefore

\[ f(z) = \frac{g(z)}{z-1} = \frac{3}{z-1} + 2 - 2(z-1) + 2(z-1)^2 - 2(z-1)^3 + \cdots \]

The residue is 3. The above expansion is valid around $z=1$, up to but not including the next singularity, which is at $z=0$, i.e. inside a circle of radius 1 centered at $z=1$.

[figure: the two disks of validity of the expansions about $z=0$ and $z=1$]

Putting the above two regions together, we see there is a region where both series expansions of $f(z)$ are valid: the shaded region below.

[figure: shaded overlap region shared by the two expansions]

Let us check that the two series give the same values in the shared region. Using the series expansion about $z=0$ to find $f(z)$ at the point $z=\frac{1}{2}$ gives $-2$ when using 10 terms in the series. Using the series expansion around $z=1$ to find $f\!\left(\frac{1}{2}\right)$ with 10 terms also gives $-2$. So both series are valid and produce the same result.
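The check just described can be scripted (a minimal sketch; the truncation at 10 terms mirrors the text):

```python
def f(z):
    return (5 * z - 2) / (z * (z - 1))

def series_about_0(z, terms=10):
    """2/z - 3 - 3z - 3z^2 - ... truncated to `terms` non-singular terms."""
    return 2 / z - 3 * sum(z ** k for k in range(terms))

def series_about_1(z, terms=10):
    """3/(z-1) + 2 - 2(z-1) + 2(z-1)^2 - ... truncated similarly."""
    w = z - 1
    return 3 / w + 2 * sum((-w) ** k for k in range(terms))

z = 0.5                 # a point inside the shared region
exact = f(z)            # equals -2
a0 = series_about_0(z)
a1 = series_about_1(z)
```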

14.2.2 Method Two

This method is simpler than the above, but it results in different regions. It is based on converting the expression in order to use geometric series expansion on it.

\[ f(z) = \frac{5z-2}{z(z-1)} \]

Since there is a pole at $z=0$ and at $z=1$, we first find the expansion for $0<|z|<1$. To do this, we write the above as

\[ f(z) = \frac{5z-2}{-z(1-z)} = \frac{2-5z}{z}\,\frac{1}{1-z} \]

And now expand $\frac{1}{1-z}$ using the geometric series, which is valid for $|z|<1$. This gives

\[ f(z) = \frac{2-5z}{z}\left(1+z+z^2+z^3+\cdots\right) = \frac{2}{z}\left(1+z+z^2+\cdots\right) - 5\left(1+z+z^2+\cdots\right) = \left(\frac{2}{z}+2+2z+2z^2+\cdots\right) - \left(5+5z+5z^2+\cdots\right) = \frac{2}{z} - 3 - 3z - 3z^2 - \cdots \]

The above is valid for $0<|z|<1$, which agrees with the result of method 1.

Now, to find the expansion for $|z|>1$, we need a term of the form $\frac{1}{1-\frac{1}{z}}$, since this can be expanded for $\left|\frac{1}{z}\right|<1$, i.e. $|z|>1$, which is what we want. Therefore, writing $f(z)$ as

\[ f(z) = \frac{5z-2}{z(z-1)} = \frac{5z-2}{z^2\left(1-\frac{1}{z}\right)} = \frac{5z-2}{z^2}\,\frac{1}{1-\frac{1}{z}} \]

But for $\left|\frac{1}{z}\right|<1$ the above becomes

\[ f(z) = \frac{5z-2}{z^2}\left(1+\frac{1}{z}+\frac{1}{z^2}+\cdots\right) = \frac{5}{z}\left(1+\frac{1}{z}+\cdots\right) - \frac{2}{z^2}\left(1+\frac{1}{z}+\cdots\right) = \left(\frac{5}{z}+\frac{5}{z^2}+\frac{5}{z^3}+\cdots\right) - \left(\frac{2}{z^2}+\frac{2}{z^3}+\cdots\right) = \frac{5}{z}+\frac{3}{z^2}+\frac{3}{z^3}+\frac{3}{z^4}+\cdots \]

The coefficient of $\frac{1}{z}$ is 5 (the sum of the residues at $z=0$ and $z=1$). The above is valid for $|z|>1$. The following diagram illustrates the result obtained from method 2.

[figure: regions of validity for the two expansions of method 2]
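A small numeric check of the outer expansion (my own sketch; the test point $z=2$ and truncation are arbitrary):

```python
def f(z):
    return (5 * z - 2) / (z * (z - 1))

def outer_series(z, terms=30):
    """5/z + 3/z^2 + 3/z^3 + ..., valid for |z| > 1, truncated."""
    return 5 / z + sum(3 / z ** k for k in range(2, terms + 2))

z = 2.0
err = abs(outer_series(z) - f(z))   # f(2) = 4
```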

14.2.3 Method Three

For the expansion about $z=0$, this uses the same method as above, giving the same series, valid for $|z|<1$. This method is a little different for points other than zero. The idea is to substitute $\xi = z - z_0$, where $z_0$ is the point we want to expand about, in $f(z)$ itself. So for $z_0=1$ in this example, we let $\xi = z-1$, hence $z = \xi+1$. Then $f(z)$ becomes

\[ f(z) = \frac{5z-2}{z(z-1)} = \frac{5(\xi+1)-2}{(\xi+1)\,\xi} = \frac{5\xi+3}{\xi\,(1+\xi)} \]

Now we expand $\frac{1}{1+\xi}$ for $|\xi|<1$ and the above becomes

\[ f(z) = \frac{5\xi+3}{\xi}\left(1-\xi+\xi^2-\xi^3+\xi^4-\cdots\right) = \left(5+\frac{3}{\xi}\right)\left(1-\xi+\xi^2-\xi^3+\cdots\right) = \frac{3}{\xi} + 2 - 2\xi + 2\xi^2 - 2\xi^3 + \cdots \]

We now replace $\xi = z-1$ and the above becomes

\[ f(z) = \frac{3}{z-1} + 2 - 2(z-1) + 2(z-1)^2 - 2(z-1)^3 + 2(z-1)^4 - \cdots \]

The above is valid for $|\xi|<1$, i.e. $|z-1|<1$: the disk of radius 1 centered at $z=1$. This gives the same series, for the same region, as method one, but it is a little faster since it uses the geometric series shortcut to find the expansion instead of calculating derivatives as in method one.

14.2.4 Conclusion

Method one and method three give the same series for the same regions. Method three uses the binomial (geometric) expansion as a shortcut and requires converting $f(z)$ to a form that allows using it. Method one does not use the binomial expansion but requires computing many derivatives to evaluate the terms of the power series; it is the more direct method.

Method two also uses the binomial expansion, but gives different regions than methods one and three.

If one is comfortable with differentiation, method one seems the most direct. Otherwise, the choice is between method two and method three, as they both use the binomial expansion. Method two seems a little more direct than method three. It also depends on what the problem is asking for: if the problem asks to expand around $z_0$, versus asking to find an expansion in $|z|>1$ for example, that decides which method to use.

15 Gamma function notes

The Gamma function is defined by

\[ \Gamma(x) = \int_0^\infty t^{x-1} e^{-t}\,dt \qquad x>0 \]

The above is called the Euler representation. Or, if we want it defined in the complex domain, the above becomes

\[ \Gamma(z) = \int_0^\infty t^{z-1} e^{-t}\,dt \qquad \operatorname{Re}(z)>0 \]

Since the above is defined only for the right half plane, there is a way to extend it to the left half plane, using what is called analytic continuation. More on this below. First, some relations involving $\Gamma(x)$:

\[ \Gamma(z) = (z-1)\,\Gamma(z-1) \qquad \operatorname{Re}(z)>1 \]
\[ \Gamma(1)=1 \qquad \Gamma(2)=1 \qquad \Gamma(3)=2 \qquad \Gamma(4)=3! \]
\[ \Gamma(n)=(n-1)! \qquad \Gamma(n+1)=n! \]
\[ \Gamma\!\left(\tfrac{1}{2}\right)=\sqrt{\pi} \]
\[ \Gamma(z+1)=z\,\Gamma(z) \qquad \text{(recursive formula)} \]
\[ \Gamma(\bar z)=\overline{\Gamma(z)} \]
\[ \Gamma\!\left(n+\tfrac{1}{2}\right)=\frac{1\cdot 3\cdot 5\cdots(2n-1)}{2^n}\,\sqrt{\pi} \]

To extend $\Gamma(z)$ to the left half plane, i.e. for negative values, let us define, using the above recursive formula,

\[ \bar\Gamma(z) = \frac{\Gamma(z+1)}{z} \qquad \operatorname{Re}(z) > -1 \]

For example

\[ \bar\Gamma\!\left(-\tfrac{1}{2}\right) = \frac{\Gamma\!\left(\tfrac{1}{2}\right)}{-\tfrac{1}{2}} = -2\,\Gamma\!\left(\tfrac{1}{2}\right) = -2\sqrt{\pi} \]

And for $\operatorname{Re}(z) > -2$

\[ \bar\Gamma\!\left(-\tfrac{3}{2}\right) = \frac{\bar\Gamma\!\left(-\tfrac{3}{2}+1\right)}{-\tfrac{3}{2}} = \left(-\tfrac{2}{3}\right)\bar\Gamma\!\left(-\tfrac{1}{2}\right) = \left(-\tfrac{2}{3}\right)(-2)\sqrt{\pi} = \tfrac{4}{3}\sqrt{\pi} \]

And so on. Notice that for $x<0$ the function $\Gamma(x)$ is not defined at the negative integers $x=-1,-2,\dots$; it is also not defined at $x=0$.
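Python's math.gamma already implements this continuation for negative non-integer arguments, so the two values above can be checked directly (a quick sketch of my own):

```python
import math

g1 = math.gamma(-0.5)   # should equal -2*sqrt(pi)
g2 = math.gamma(-1.5)   # should equal (4/3)*sqrt(pi)
```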

The above method of extending (analytically continuing) the Gamma function to negative values is due to Euler. Another method to extend Gamma is due to Weierstrass. It starts by rewriting the definition as follows, where $a>0$

\[ \Gamma(z) = \int_0^\infty t^{z-1}e^{-t}\,dt = \int_0^a t^{z-1}e^{-t}\,dt + \int_a^\infty t^{z-1}e^{-t}\,dt \tag{1} \]

Expanding the integrand in the first integral using the Taylor series of $e^{-t}$ gives

\[ \int_0^a t^{z-1}e^{-t}\,dt = \int_0^a t^{z-1}\sum_{n=0}^\infty \frac{(-1)^n t^n}{n!}\,dt = \sum_{n=0}^\infty \frac{(-1)^n}{n!}\int_0^a t^{n+z-1}\,dt = \sum_{n=0}^\infty \frac{(-1)^n}{n!}\left[\frac{t^{n+z}}{n+z}\right]_0^a = \sum_{n=0}^\infty \frac{(-1)^n}{n!\,(n+z)}\,a^{n+z} \]

This takes care of the first integral in (1). Now, since the lower limit of the second integral in (1) is not zero, there is no problem integrating it directly. Remember that the Euler definition has zero as its lower limit; that is why we required $\operatorname{Re}(z)>0$ there. Now we can choose any value for $a$. Weierstrass chose $a=1$. Hence (1) becomes

\[ \Gamma(z) = \sum_{n=0}^\infty \frac{(-1)^n}{n!\,(n+z)} + \int_1^\infty t^{z-1}e^{-t}\,dt \tag{2} \]

Notice the term $a^{n+z}$ is now just 1 since $a=1$. The second integral above can now be integrated directly. Let us now verify that the Euler continuation $\bar\Gamma(z)$ for, say, $z=-\tfrac{1}{2}$ gives the same result as the Weierstrass formula. From above, we found $\bar\Gamma\!\left(-\tfrac{1}{2}\right)=-2\sqrt{\pi}$. Equation (2) for $z=-\tfrac{1}{2}$ becomes

\[ \bar\Gamma\!\left(-\tfrac{1}{2}\right) = \sum_{n=0}^\infty \frac{(-1)^n}{n!\left(n-\tfrac{1}{2}\right)} + \int_1^\infty t^{-3/2}e^{-t}\,dt \tag{3} \]

Using the computer

\[ \sum_{n=0}^\infty \frac{(-1)^n}{n!\left(n-\tfrac{1}{2}\right)} = -2\sqrt{\pi} + 2\sqrt{\pi}\,(1-\operatorname{erf}(1)) - \frac{2}{e} \]

And direct integration

\[ \int_1^\infty t^{-3/2}e^{-t}\,dt = -2\sqrt{\pi} + 2\sqrt{\pi}\,\operatorname{erf}(1) + \frac{2}{e} \]

Hence (3) becomes

\[ \bar\Gamma\!\left(-\tfrac{1}{2}\right) = \left(-2\sqrt{\pi} + 2\sqrt{\pi}\,(1-\operatorname{erf}(1)) - \frac{2}{e}\right) + \left(-2\sqrt{\pi} + 2\sqrt{\pi}\,\operatorname{erf}(1) + \frac{2}{e}\right) = -2\sqrt{\pi} \]

Which is the same as using the Euler method. Let us check $z=-\tfrac{3}{2}$. We found above that $\bar\Gamma\!\left(-\tfrac{3}{2}\right)=\tfrac{4}{3}\sqrt{\pi}$ using the Euler method of analytic continuation. Now we check using the Weierstrass method. Equation (2) for $z=-\tfrac{3}{2}$ becomes

\[ \bar\Gamma\!\left(-\tfrac{3}{2}\right) = \sum_{n=0}^\infty \frac{(-1)^n}{n!\left(n-\tfrac{3}{2}\right)} + \int_1^\infty t^{-5/2}e^{-t}\,dt \]

Using the computer

\[ \sum_{n=0}^\infty \frac{(-1)^n}{n!\left(n-\tfrac{3}{2}\right)} = \frac{4\sqrt{\pi}}{3} - \frac{4\sqrt{\pi}\,(1-\operatorname{erf}(1))}{3} + \frac{2}{3e} \]

And

\[ \int_1^\infty t^{-5/2}e^{-t}\,dt = -\frac{4\sqrt{\pi}\,\operatorname{erf}(1)}{3} + \frac{4\sqrt{\pi}}{3} - \frac{2}{3e} \]

Hence

\[ \bar\Gamma\!\left(-\tfrac{3}{2}\right) = \left(\frac{4\sqrt{\pi}}{3} - \frac{4\sqrt{\pi}\,(1-\operatorname{erf}(1))}{3} + \frac{2}{3e}\right) + \left(-\frac{4\sqrt{\pi}\,\operatorname{erf}(1)}{3} + \frac{4\sqrt{\pi}}{3} - \frac{2}{3e}\right) = \frac{4}{3}\sqrt{\pi} \]

Which is the same as using the Euler method. Clearly the Euler method for analytic continuation of the Gamma function is simpler to compute.
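The Weierstrass split can also be evaluated purely numerically (my own sketch; the truncation limits and the trapezoid step are arbitrary choices), avoiding the closed forms with erf:

```python
import math

def weierstrass_gamma(z, n_terms=80, t_max=60.0, steps=400_000):
    """Gamma(z) from equation (2): series part plus the tail integral,
    with the integral done by the trapezoid rule on [1, t_max]."""
    s = sum((-1) ** n / (math.factorial(n) * (n + z)) for n in range(n_terms))
    h = (t_max - 1.0) / steps
    f = lambda t: t ** (z - 1) * math.exp(-t)
    integ = 0.5 * (f(1.0) + f(t_max)) + sum(f(1.0 + k * h) for k in range(1, steps))
    return s + integ * h

g = weierstrass_gamma(-0.5)          # should be -2*sqrt(pi)
target = -2 * math.sqrt(math.pi)
```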

Euler reflection formula

\[ \Gamma(x)\,\Gamma(1-x) = \int_0^\infty \frac{t^{x-1}}{1+t}\,dt = \frac{\pi}{\sin(\pi x)} \qquad 0<x<1 \]

Where contour integration was used to derive the above. See the Mary Boas textbook, page 607, second edition, example 5 for the full derivation.

$\Gamma(x)$ has singularities at $x=0,-1,-2,\dots$ and $\Gamma(1-x)$ has singularities at $x=1,2,3,\dots$, so in the above reflection formula the zeros of $\sin(\pi x)$ cancel the singularities of $\Gamma(x)$ when it is written as

\[ \Gamma(1-x) = \frac{\pi}{\Gamma(x)\sin(\pi x)} \]

$\frac{1}{\Gamma(z)}$ is entire.

There are other representations for $\Gamma(z)$. One that uses products, also due to Euler, is

\[ \Gamma(z) = \frac{1}{z}\prod_{n=1}^\infty \frac{\left(1+\frac{1}{n}\right)^z}{1+\frac{z}{n}} = \lim_{n\to\infty}\frac{n!\,(n+1)^z}{z(z+1)\cdots(z+n)} \]

And another, due to Weierstrass, is

\[ \Gamma(z) = \frac{e^{-\gamma z}}{z}\prod_{n=1}^\infty \frac{e^{z/n}}{1+\frac{z}{n}} = e^{-\gamma z}\lim_{n\to\infty}\frac{n!\,\exp\!\left(z\left(1+\frac{1}{2}+\cdots+\frac{1}{n}\right)\right)}{z(z+1)(z+2)\cdots(z+n)} \]

16 Riemann zeta function notes

Given by $\zeta(s)=\sum_{n=1}^\infty \frac{1}{n^s}$ for $\operatorname{Re}(s)>1$. Euler studied this, and it was extended to the whole complex plane by Riemann; the Riemann zeta function refers to the one extended to the whole complex plane (Euler only looked at it on the real line). It has a simple pole at $s=1$ and trivial zeros at $s=-2,-4,-6,\dots$, and all its non-trivial zeros lie inside the critical strip $0<\operatorname{Re}(s)<1$; the Riemann Hypothesis (still unproven) asserts that they all lie on the critical line $\operatorname{Re}(s)=\frac{1}{2}$. $\zeta(s)$ is also defined by the integral formula

\[ \zeta(s) = \frac{1}{\Gamma(s)}\int_0^\infty \frac{t^{s-1}}{e^t-1}\,dt \qquad \operatorname{Re}(s)>1 \]

The connection between $\zeta(s)$ and the prime numbers is given by the Euler product formula

\[ \zeta(s) = \prod_p \frac{1}{1-p^{-s}} = \frac{1}{1-2^{-s}}\cdot\frac{1}{1-3^{-s}}\cdot\frac{1}{1-5^{-s}}\cdot\frac{1}{1-7^{-s}}\cdots = \left(\frac{2^s}{2^s-1}\right)\left(\frac{3^s}{3^s-1}\right)\left(\frac{5^s}{5^s-1}\right)\left(\frac{7^s}{7^s-1}\right)\cdots \]

The $\zeta(s)$ functional equation is

\[ \zeta(s) = 2^s \pi^{s-1}\sin\!\left(\frac{\pi s}{2}\right)\Gamma(1-s)\,\zeta(1-s) \]
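The Euler product can be checked against the known value $\zeta(2)=\pi^2/6$ (a minimal sketch; the prime cutoff is an arbitrary choice):

```python
import math

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for m in range(p * p, n + 1, p):
                sieve[m] = False
    return [i for i, ok in enumerate(sieve) if ok]

s = 2
product = 1.0
for p in primes_up_to(100_000):
    product *= 1.0 / (1.0 - p ** (-s))

zeta_2 = math.pi ** 2 / 6   # known value of zeta(2)
```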

17 Complex functions notes

Complex identities

\[ |z|^2 = z\bar z \qquad \overline{(\bar z)} = z \qquad \overline{z_1+z_2} = \bar z_1 + \bar z_2 \qquad |\bar z| = |z| \qquad |z_1 z_2| = |z_1|\,|z_2| \]
\[ \operatorname{Re}(z) = \frac{z+\bar z}{2} \qquad \operatorname{Im}(z) = \frac{z-\bar z}{2i} \qquad \arg(z_1 z_2) = \arg(z_1) + \arg(z_2) \]

A complex function $f(z)$ is analytic in a region $D$ if it is defined and differentiable at all points in $D$. One way to check for analyticity is to use the Cauchy-Riemann (CR) equations (a necessary condition but, on its own, not sufficient). If $f(z)$ satisfies CR everywhere in the region and its partial derivatives are continuous there, then it is analytic. Let $f(z)=u(x,y)+iv(x,y)$; then these two equations in Cartesian coordinates are

\[ \frac{\partial u}{\partial x} = \frac{\partial v}{\partial y} \qquad \frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x} \]

Sometimes it is easier to use the polar form of these. Let $f(z)=u(r,\theta)+iv(r,\theta)$; then the equations become

\[ \frac{\partial u}{\partial r} = \frac{1}{r}\frac{\partial v}{\partial \theta} \qquad \frac{1}{r}\frac{\partial u}{\partial \theta} = -\frac{\partial v}{\partial r} \]

To remember them, think of $r$ as $x$ and $\theta$ as $y$.

Let us apply these to $f(z)=\sqrt z$ to see how it works. Since $z=re^{i(\theta+2n\pi)}$, then $f(z)=\sqrt{r}\,e^{i\left(\frac{\theta}{2}+n\pi\right)}$. This is a multi-valued function: one value for $n=0$ and another for $n=1$. The first step is to make it single valued. Choosing $n=0$ gives the principal value. Then $f(z)=\sqrt{r}\,e^{i\frac{\theta}{2}}$. Now we find the branch points. $z=0$ is a branch point. We can pick $-\pi<\theta<\pi$ and pick the negative real axis as the branch cut (the other branch point being $-\infty$). This is one choice.

We could have picked $0<\theta<2\pi$ and had the positive real axis as the branch cut, where now the second branch point is $+\infty$, but in both cases the origin is still part of the branch cut. Let us stick with $-\pi<\theta<\pi$.

Given all of this, $\sqrt z = \sqrt{r}\,e^{i\frac{\theta}{2}} = \sqrt{r}\left(\cos\frac{\theta}{2}+i\sin\frac{\theta}{2}\right)$, hence $u=\sqrt{r}\cos\frac{\theta}{2}$ and $v=\sqrt{r}\sin\frac{\theta}{2}$. Therefore $\frac{\partial u}{\partial r}=\frac{1}{2\sqrt r}\cos\frac{\theta}{2}$, $\frac{\partial v}{\partial \theta}=\frac{\sqrt r}{2}\cos\frac{\theta}{2}$, $\frac{\partial u}{\partial \theta}=-\frac{\sqrt r}{2}\sin\frac{\theta}{2}$ and $\frac{\partial v}{\partial r}=\frac{1}{2\sqrt r}\sin\frac{\theta}{2}$. Applying the Cauchy-Riemann equations above gives

\[ \frac{1}{2\sqrt r}\cos\frac{\theta}{2} = \frac{1}{r}\cdot\frac{\sqrt r}{2}\cos\frac{\theta}{2} = \frac{1}{2\sqrt r}\cos\frac{\theta}{2} \]

which is satisfied. And for the second equation

\[ \frac{1}{r}\left(-\frac{\sqrt r}{2}\sin\frac{\theta}{2}\right) = -\frac{1}{2\sqrt r}\sin\frac{\theta}{2} = -\frac{\partial v}{\partial r} \]

so $\sqrt z$ is analytic in the region $-\pi<\theta<\pi$, not including the branch points and the branch cut.
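The polar Cauchy-Riemann equations for $\sqrt z$ can also be checked by finite differences (my own sketch; the sample point and step size are arbitrary):

```python
import math

def u(r, th):   # real part of sqrt(z), principal branch
    return math.sqrt(r) * math.cos(th / 2)

def v(r, th):   # imaginary part
    return math.sqrt(r) * math.sin(th / 2)

r, th, h = 2.0, 0.7, 1e-6
u_r  = (u(r + h, th) - u(r - h, th)) / (2 * h)   # central differences
v_r  = (v(r + h, th) - v(r - h, th)) / (2 * h)
u_th = (u(r, th + h) - u(r, th - h)) / (2 * h)
v_th = (v(r, th + h) - v(r, th - h)) / (2 * h)

cr1 = abs(u_r - v_th / r)       # u_r = (1/r) v_theta
cr2 = abs(u_th / r + v_r)       # (1/r) u_theta = -v_r
```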

We can't just say $f(z)$ is analytic and stop; we have to say $f(z)$ is analytic in a region or at a point. When we say $f(z)$ is analytic at a point, we mean analytic in a small region around the point.

If $f(z)$ is defined only at an isolated point $z_0$ and not defined anywhere around it, then the function cannot be analytic at $z_0$ since it is not differentiable at $z_0$. Also, $f(z)$ is analytic at a point $z_0$ if the power series for $f(z)$ expanded around $z_0$ converges to $f(z)$ in a neighborhood of $z_0$. An analytic complex function is infinitely many times differentiable in the region, which means the limit $\lim_{\Delta z\to 0}\frac{f(z+\Delta z)-f(z)}{\Delta z}$ exists and does not depend on direction.

Before applying the Cauchy-Riemann equations, make sure the complex function is first made single valued.

Remember that the Cauchy-Riemann equations are a necessary but not sufficient condition for a function to be analytic. The extra condition needed is that all the partial derivatives are continuous. It would be good to find an example where CR is satisfied but the partial derivatives are not continuous. Most of the HW problems just need CR, but it is good to keep an eye on this other condition.

Cauchy-Goursat: If $f(z)$ is analytic on and inside a closed contour $C$ then $\oint_C f(z)\,dz = 0$. But remember that if $\oint_C f(z)\,dz = 0$ then this does not necessarily imply $f(z)$ is analytic on and inside $C$. So this is an IF and not an IFF relation. For example $\oint_C \frac{1}{z^2}\,dz = 0$ around the unit circle centered at the origin, but clearly $\frac{1}{z^2}$ is not analytic everywhere inside $C$, since it has a singularity at $z=0$.

Proof of Cauchy-Goursat: The proof uses two main ideas: the Cauchy-Riemann equations and Green's theorem. Green's theorem says

\[ \oint_C P\,dx + Q\,dy = \iint_D \left(\frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}\right) dA \tag{1} \]

So Green's theorem transforms integration on the boundary $C$ of a region $D$ into integration over the area inside the boundary $C$. Let $f(z)=u+iv$. And since $z=x+iy$, then $dz=dx+i\,dy$. Therefore

\[ \oint_C f(z)\,dz = \oint_C (u+iv)(dx+i\,dy) = \oint_C u\,dx + iu\,dy + iv\,dx - v\,dy = \oint_C (u\,dx - v\,dy) + i\oint_C v\,dx + u\,dy \tag{2} \]

We now apply (1) to each of the two integrals in (2). The first integral in (2) becomes

\[ \oint_C (u\,dx - v\,dy) = \iint_D \left(-\frac{\partial v}{\partial x} - \frac{\partial u}{\partial y}\right) dA \]

But from CR, we know that $\frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x}$, hence the above is zero. And the second integral in (2) becomes

\[ \oint_C v\,dx + u\,dy = \iint_D \left(\frac{\partial u}{\partial x} - \frac{\partial v}{\partial y}\right) dA \]

But from CR, we know that $\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}$, hence the above is zero. Therefore the whole integral in (2) is zero, and $\oint_C f(z)\,dz = 0$. QED.

Cauchy residue theorem: If $f(z)$ is analytic on and inside a closed contour $C$ except at some isolated points $z_1,z_2,\dots,z_N$, then $\oint_C f(z)\,dz = 2\pi i\sum_{j=1}^N \operatorname{Res}\big(f(z)\big)\big|_{z=z_j}$. The term $\operatorname{Res}\big(f(z)\big)\big|_{z=z_j}$ is the residue of $f(z)$ at the point $z_j$. Use the Laurent expansion of $f(z)$ to find residues; see above for methods to find the Laurent series.

Maximum modulus principle: If $f(z)$ is analytic in some region $D$ and is not constant inside $D$, then the maximum of $|f(z)|$ must be on the boundary. Its minimum is also on the boundary, as long as $f(z)\neq 0$ anywhere inside $D$. On the other hand, if $|f(z)|$ happened to attain a maximum at some point $z_0$ inside $D$, then this implies that $f(z)$ is constant everywhere and has the value $f(z_0)$ everywhere. What all this really means is that if $f(z)$ is analytic and not constant in $D$, then its maximum modulus is on the boundary and not inside.

There is a complicated proof of this. See my notes for Physics 501. Hopefully this will not come up in the exam since I did not study the proof.

These definitions are from the book of Joseph Bak:

  1. $f$ is analytic at $z$ if $f$ is differentiable in a neighborhood of $z$. Similarly, $f$ is analytic on a set $S$ if $f$ is differentiable at all points of some open set containing $S$.
  2. $f(z)$ is analytic on an open set $U$ if $f(z)$ is differentiable at each point of $U$ and $f'(z)$ is continuous on $U$.

Some important formulas.

  1. If $f(z)$ is analytic on and inside $C$ then

\[ \oint_C f(z)\,dz = 0 \]

  2. If $f(z)$ is analytic on and inside $C$ and $z_0$ is a point inside $C$, then

\[ 2\pi i\,f(z_0) = \oint_C \frac{f(z)}{z-z_0}\,dz \qquad 2\pi i\,f'(z_0) = \oint_C \frac{f(z)}{(z-z_0)^2}\,dz \qquad \frac{2\pi i}{2!}\,f''(z_0) = \oint_C \frac{f(z)}{(z-z_0)^3}\,dz \qquad \frac{2\pi i}{n!}\,f^{(n)}(z_0) = \oint_C \frac{f(z)}{(z-z_0)^{n+1}}\,dz \]

  3. From the above we find, with $f(z)=1$,

\[ \oint_C \frac{1}{(z-z_0)^{n+1}}\,dz = \begin{cases} 2\pi i & n=0 \\ 0 & n=1,2,\dots \end{cases} \]
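Item 3 can be confirmed by brute-force numerical contour integration (my own sketch; the circle center, radius, and step count are arbitrary choices):

```python
import cmath, math

def contour_integral(n, z0=0.4 + 0.2j, radius=1.0, steps=20_000):
    """Numerically integrate 1/(z-z0)^(n+1) over a circle around z0."""
    total = 0j
    dth = 2 * math.pi / steps
    for k in range(steps):
        e = cmath.exp(1j * k * dth)
        z = z0 + radius * e
        dz = 1j * radius * e * dth   # dz = i r e^(i theta) d(theta)
        total += dz / (z - z0) ** (n + 1)
    return total

I0 = contour_integral(0)   # expect 2*pi*i
I1 = contour_integral(1)   # expect 0
```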

17.1 Find bn coefficients in the Laurent series expansion

On finding the coefficients of the principal part of the Laurent series expansion around $z_0$. Let

\[ f(z) = \sum_{n=0}^\infty c_n (z-z_0)^n + \sum_{n=1}^N \frac{b_n}{(z-z_0)^n} = \sum_{n=0}^\infty c_n (z-z_0)^n + \frac{b_1}{z-z_0} + \frac{b_2}{(z-z_0)^2} + \frac{b_3}{(z-z_0)^3} + \cdots + \frac{b_N}{(z-z_0)^N} \tag{1} \]

The goal is to determine all the coefficients $b_1,b_2,\dots,b_N$ in the Laurent series expansion. This assumes the order of the pole, $N$, is finite. To find $b_1$, we multiply both sides of the above by $(z-z_0)^N$, which gives

\[ (z-z_0)^N f(z) = \sum_{n=0}^\infty c_n (z-z_0)^{n+N} + b_1 (z-z_0)^{N-1} + b_2 (z-z_0)^{N-2} + b_3 (z-z_0)^{N-3} + \cdots + b_N \tag{2} \]

Differentiating both sides $N-1$ times w.r.t. $z$ gives

\[ \frac{d^{N-1}}{dz^{N-1}}\left[(z-z_0)^N f(z)\right] = \sum_{n=0}^\infty \frac{d^{N-1}}{dz^{N-1}}\left[c_n (z-z_0)^{n+N}\right] + b_1 (N-1)! \]

Evaluating at $z=z_0$ the above gives

\[ b_1 = \lim_{z\to z_0} \frac{\dfrac{d^{N-1}}{dz^{N-1}}\left[(z-z_0)^N f(z)\right]}{(N-1)!} \]

To find $b_2$ we differentiate both sides of (2) $N-2$ times, which gives

\[ \frac{d^{N-2}}{dz^{N-2}}\left[(z-z_0)^N f(z)\right] = \sum_{n=0}^\infty \frac{d^{N-2}}{dz^{N-2}}\left[c_n (z-z_0)^{n+N}\right] + b_1 (N-1)!\,(z-z_0) + b_2 (N-2)! \]

Hence

\[ b_2 = \lim_{z\to z_0} \frac{\dfrac{d^{N-2}}{dz^{N-2}}\left[(z-z_0)^N f(z)\right]}{(N-2)!} \]

We keep doing the above to find $b_3,b_4,\dots,b_N$. Therefore the general formula is

\[ b_n = \lim_{z\to z_0} \frac{\dfrac{d^{N-n}}{dz^{N-n}}\left[(z-z_0)^N f(z)\right]}{(N-n)!} \tag{3A} \]

And for the special case of the last term $b_N$ the above simplifies to

\[ b_N = \lim_{z\to z_0}\,(z-z_0)^N f(z) \tag{3B} \]

Where in (3A) $n$ is the index of the coefficient $b_n$ to be evaluated, $N$ is the pole order and $z_0$ is the expansion point. The special value $b_1$ is called the residue of $f(z)$ at $z_0$.
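Formulas (3A) and (3B) can be tested on a function with a known principal part, e.g. $f(z)=\frac{e^z}{z^2}=\frac{1}{z^2}+\frac{1}{z}+\frac{1}{2}+\cdots$, so $b_1=b_2=1$ (my own example, using numerical limits and derivatives):

```python
import cmath

def f(z):
    return cmath.exp(z) / z ** 2       # pole of order N = 2 at z0 = 0

g = lambda z: z ** 2 * f(z)            # (z - z0)^N f(z), equals e^z here

h = 1e-5
b2 = g(h).real                         # (3B): limit of (z-z0)^N f(z)
b1 = ((g(h) - g(-h)) / (2 * h)).real   # (3A) with n=1: first derivative / 1!
```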

18 Hints to solve some problems

18.1 Complex analysis and power and Laurent series

  1. The Laurent series of $f(z)$ around a point $z_0$ is $\sum_{n=-\infty}^{\infty} a_n (z-z_0)^n$ with $a_n = \frac{1}{2\pi i}\oint \frac{f(z)}{(z-z_0)^{n+1}}\,dz$. Integration is around a path enclosing $z_0$, counterclockwise.
  2. The power series of $f(z)$ around $z_0$ is $\sum_{n=0}^{\infty} a_n (z-z_0)^n$ where $a_n = \frac{1}{n!}\,f^{(n)}(z)\big|_{z=z_0}$.
  3. The problem asks to use the Cauchy integral formula $\oint_C \frac{f(z)}{z-z_0}\,dz = 2\pi i\,f(z_0)$ to evaluate another integral $\oint_C g(z)\,dz$, both over the same $C$. The idea is to rewrite $g(z)$ as $\frac{f(z)}{z-z_0}$ by absorbing into $f(z)$ the poles of $g(z)$ that are outside $C$, leaving one pole inside $C$. Then we can write

\[ \oint_C g(z)\,dz = \oint_C \frac{f(z)}{z-z_0}\,dz = 2\pi i\,f(z_0) \]

    For example, to solve $\oint_C \frac{1}{(z+1)(z+2)}\,dz$ around a contour $C$ enclosing $z=-1$ but not $z=-2$ (for example the circle $|z+1|=\frac{1}{2}$), rewrite it as $\oint_C \frac{\frac{1}{z+2}}{z-(-1)}\,dz$ where now $f(z)=\frac{1}{z+2}$, and we can use the Cauchy integral formula. So all we have to do is evaluate $\frac{1}{z+2}$ at $z=-1$, which gives $\oint_C \frac{1}{(z+1)(z+2)}\,dz = 2\pi i$. This works if $g(z)$ can be factored into $\frac{f(z)}{z-z_0}$ where $f(z)$ is analytic on and inside $C$. It would not work if $g(z)$ had more than one pole inside $C$.

  4. The problem asks to find $\oint_C f(z)\,dz$ where $C$ is some closed contour. For this, if $f(z)$ has a number of isolated singularities inside $C$, then just use

\[ \oint_C f(z)\,dz = 2\pi i\sum\big(\text{residues of } f(z) \text{ at each singularity inside } C\big) \]
  5. The problem asks to find $\int_C f(z)\,dz$ where $C$ is some open path, i.e. not closed (if it is closed, try Cauchy), such as a straight line or a half circle arc. For these problems, use parameterization; this converts the integral to a line integral. If $C$ is a straight line, use the standard $t$ parameterization, which is found by using

\[ x(t) = (1-t)x_0 + t x_1 \qquad y(t) = (1-t)y_0 + t y_1 \]

    where $(x_0,y_0)$ is the line's initial point and $(x_1,y_1)$ is its end point. This works for straight lines. Now use the above to rewrite $z=x+iy$ as $z(t)=x(t)+iy(t)$, plug this $z(t)$ into $f(z)$ to obtain $f(z(t))$, and the integral becomes

\[ \int_C f(z)\,dz = \int_{t=0}^{t=1} f(z(t))\,z'(t)\,dt \]

    And now evaluate this integral using normal integration rules. If the path is a circular arc, then there is no need to use $t$; just use $\theta$. Write $z=re^{i\theta}$, use $\theta$ instead of $t$, and follow the same steps as above.
  6. The problem gives $u(x,y)$ and asks to find $v(x,y)$ in order for $f(x,y)=u(x,y)+iv(x,y)$ to be analytic in some region. To solve these, use the Cauchy-Riemann equations. Both equations are needed: one equation introduces a constant of integration (a function), and the second equation is used to solve for it. This gives $v(x,y)$. See problem 2, HW 2, Physics 501 as an example.
  7. The problem asks to evaluate $\oint_C \frac{f(z)}{(z-z_0)^n}\,dz$ where $n$ is some number. This is the order of the pole, and $f(z)$ is analytic on and inside $C$. Use the Cauchy integral formula for higher pole order: $\oint_C \frac{f(z)}{(z-z_0)^n}\,dz = 2\pi i\,\operatorname{Residue}(z_0)$. The only difference here is that this is a pole of order $n$, so to find the residue use

\[ \operatorname{Residue}(z_0) = \lim_{z\to z_0}\frac{1}{(n-1)!}\,\frac{d^{n-1}}{dz^{n-1}}\left[(z-z_0)^n\,\frac{f(z)}{(z-z_0)^n}\right] = \lim_{z\to z_0}\frac{f^{(n-1)}(z)}{(n-1)!} \]
  8. The problem gives $f(z)$ and asks to find the branch points and branch cuts. One way is to first find where $f(z)=0$ and, for each zero, traverse a small circle around it, going from $\theta=0$ to $\theta=2\pi$. If the function at $\theta=0$ has a different value than at $\theta=2\pi$, then this is a branch point. Do this for the other zeros, then connect the branch points; this gives the branch cut. It is not always clear how to connect the branch points though; one might need to try different ways. For example $f(z)=\sqrt{z^2+1}$ has two zeros, at $z=\pm i$. Both turn out to be branch points, and the branch cut is the line from $-i$ to $+i$ on the imaginary axis.
  9. The problem gives a series $\sum_{n=0}^\infty a_n z^n$ and asks to find the radius of convergence $R$. Two ways: find $L=\lim_{n\to\infty}\frac{|a_{n+1}|}{|a_n|}$ and then $R=\frac{1}{L}$; another way is to find $L$ using $L=\lim_{n\to\infty}|a_n|^{\frac{1}{n}}$.
  10. The problem gives an integral $\int_0^{2\pi} f(\theta)\,d\theta$ and asks to evaluate it using residues. We start by converting everything to $z$ using $z=e^{i\theta}$, with $|z|=1$ (no need to use $z=re^{i\theta}$). The idea is to convert it to $\oint f(z)\,dz$, which we can then evaluate as $2\pi i\,\sum(\text{residues inside})$. Replacing $f(\theta)$ by $f(z)$ may require Euler relations such as $\cos n\theta = \frac{z^n+z^{-n}}{2}$ (and similarly for $\sin n\theta$), together with $d\theta = \frac{dz}{iz}$. Now all that is needed is to find the residues of any poles inside the unit circle; do not worry about poles outside the unit circle. To find residues use the shortcut tricks; there is no need to find the Laurent series.
    For an example, to evaluate $\int_0^{2\pi}\frac{1}{5+4\cos\theta}\,d\theta$: $\frac{1}{5+4\cos\theta}$ becomes $\frac{z}{(2z+1)(z+2)}$, and there is only one pole inside the unit circle, at $z=-\frac{1}{2}$.
  11. The problem gives an integral $\int_0^\infty f(x)\,dx$ and asks to evaluate it using residues. The contour here goes from $-R$ to $+R$ and then closes with a semicircle in the upper half plane. This works for even $f(x)$, since we can write $\int_0^\infty f(x)\,dx = \frac{1}{2}\int_{-\infty}^\infty f(x)\,dx$. If there are poles inside the upper half plane, the integral over the closed contour is $2\pi i$ times the sum of their residues. If there is a pole on the real line, say at $z=a$, then make a small semicircle around the pole; the integral over the small semicircle is $-\pi i$ times the residue at $a$. The minus sign here is due to moving clockwise on the small circle.
  12. The problem gives a series $\sum_{n=0}^\infty a_n z^n$ and asks if it is uniformly convergent. For a general series, use the M-test. But for this kind of series, just find the radius of convergence $R$ as above using the ratio test, and if it is absolutely convergent, then say it converges uniformly for $|z|\le r<R$. It is important to write it this way, and not just $|z|<R$.
  13. The problem gives $\sum_{n=0}^\infty a_n$ and asks to find the sum. Sometimes this trick works for some series: for example, for the alternating series $\sum_{n=1}^\infty (-1)^{n+1}\frac{1}{n} = 1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\cdots$, write it as $x-\frac{x^2}{2}+\frac{x^3}{3}-\frac{x^4}{4}+\cdots$, which is the same when $x=1$, and now notice that this is the Taylor series for $\ln(1+x)$, which means that at $x=1$, $1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\cdots = \ln(2)$.
  14. The problem gives $f(z)$ and asks to find the residue at some $z=z_0$. Of course we can always expand $f(z)$ around $z_0$ using a Laurent series and find the coefficient of $\frac{1}{z-z_0}$, but this is too much work. Instead, if $f(z)$ has a simple pole of order one, then we use

\[ R(z_0) = \lim_{z\to z_0}\,(z-z_0)\,f(z) \]

    In general, if $f(z)=\frac{g(z)}{h(z)}$ there are two cases, depending on whether the factor $(z-z_0)$ cancels against the denominator explicitly or not. If it cancels explicitly, we can just use the above. For example, if $f(z)=\frac{z}{(2z+1)(5-z)}$ and we want the residue at $z_0=5$, then since it is a simple pole, using

\[ R(5) = \lim_{z\to 5}\,(z-5)\,\frac{z}{(2z+1)(5-z)} = \lim_{z\to 5}\frac{-z}{2z+1} = -\frac{5}{11} \]

    But if the denominator vanishes at $z_0$ without an explicit factor to cancel, then we need to apply L'Hôpital, like this. If $f(z)=\frac{\sin z}{1-z^4}$ and we want the residue at $z=i$, then do as above but with an extra step:

\[ R(i) = \lim_{z\to i}\,(z-i)\,\frac{\sin z}{1-z^4} = \left(\lim_{z\to i}\sin z\right)\left(\lim_{z\to i}\frac{z-i}{1-z^4}\right) = \sin(i)\lim_{z\to i}\frac{1}{-4z^3} \quad\text{(L'Hôpital)} = \frac{\sin i}{-4i^3} = \frac{1}{4}\sinh(1) \]

    Now if the pole is not a simple pole of order one, say of order $m$, then we first multiply $f(z)$ by $(z-z_0)^m$, then differentiate the result $m-1$ times, then divide by $(m-1)!$, and then evaluate the result at $z=z_0$. In other words,

\[ R(z_0) = \lim_{z\to z_0}\frac{1}{(m-1)!}\,\frac{d^{m-1}}{dz^{m-1}}\left[(z-z_0)^m f(z)\right] \]

    For example, if $f(z)=\frac{z\sin z}{(z-\pi)^3}$ and we want the residue at $z=\pi$: since the order is $m=3$,

\[ R(\pi) = \lim_{z\to\pi}\frac{1}{2!}\,\frac{d^2}{dz^2}\left[(z-\pi)^3\,\frac{z\sin z}{(z-\pi)^3}\right] = \lim_{z\to\pi}\frac{1}{2}\,\frac{d^2}{dz^2}\left(z\sin z\right) = \lim_{z\to\pi}\frac{1}{2}\left(2\cos z - z\sin z\right) = -1 \]

    The above methods will work on most of the HW problems I've seen so far, but if all else fails, try the Laurent series; that always works.
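Two of the shortcuts above can be cross-checked numerically (my own sketch; the circle radius and step counts are arbitrary): the order-3 residue of $\frac{z\sin z}{(z-\pi)^3}$ at $z=\pi$, which the derivative shortcut evaluates to $-1$, via a small contour integral, and the trig integral of item 10, whose residue evaluation gives $\frac{2\pi}{3}$, via direct quadrature:

```python
import cmath, math

def residue(f, z0, radius=0.5, steps=50_000):
    """(1/(2 pi i)) times the closed integral of f over a circle around z0."""
    total = 0j
    dth = 2 * math.pi / steps
    for k in range(steps):
        e = cmath.exp(1j * k * dth)
        total += f(z0 + radius * e) * 1j * radius * e * dth
    return total / (2j * math.pi)

res = residue(lambda z: z * cmath.sin(z) / (z - math.pi) ** 3, math.pi)

n = 20_000   # trapezoid rule on a periodic integrand is spectrally accurate
trig = (2 * math.pi / n) * sum(
    1.0 / (5 + 4 * math.cos(2 * math.pi * k / n)) for k in range(n))
```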

18.2 Errors and relative errors

  1. A problem gives an expression in $x,y$ such as $f(x,y)$ and asks how much a relative error in both $x$ and $y$ will affect $f(x,y)$ in the worst case. For these problems, find $df$ and then find $\frac{df}{f}$. For example, if $f(x,y)=\frac{\sqrt x}{y^{3/2}}$ and the relative error in $x$ and $y$ is $2\%$, then what is the worst relative error in $f(x,y)$? Since

\[ df = \frac{\partial f}{\partial x}\,dx + \frac{\partial f}{\partial y}\,dy = \frac{1}{2}x^{-\frac{1}{2}}y^{-\frac{3}{2}}\,dx - \frac{3}{2}x^{\frac{1}{2}}y^{-\frac{5}{2}}\,dy \]

    Then

\[ \frac{df}{f} = \frac{1}{2}\frac{dx}{x} - \frac{3}{2}\frac{dy}{y} \]

    But $\frac{dx}{x}$ and $\frac{dy}{y}$ are the relative errors in $x$ and $y$. So if we plug in $+2$ for $\frac{dx}{x}$ and $-2$ for $\frac{dy}{y}$ we get $1+3=4\%$ as the worst relative error in $f(x,y)$. Notice we used $-2\%$ relative error for $y$ and $+2\%$ relative error for $x$ since we wanted the worst (largest) relative error. If we wanted the least relative error in $f(x,y)$, then we would use $+2\%$ for $y$ also, which gives $1-3=-2$, i.e. $-2\%$ relative error in $f(x,y)$.
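The worst-case estimate can be checked by brute force over the four sign combinations (a sketch; the base point values are arbitrary choices of mine):

```python
def f(x, y):
    return x ** 0.5 / y ** 1.5

x0, y0, rel = 4.0, 9.0, 0.02
worst = max(abs(f(x0 * (1 + sx * rel), y0 * (1 + sy * rel)) / f(x0, y0) - 1)
            for sx in (-1, 1) for sy in (-1, 1))
# the linear estimate is (1/2)*2% + (3/2)*2% = 4%
```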

19 Some CAS notes

In Mathematica, Exp is a symbol (Head[Exp] gives Symbol), but in Maple it is not.

In Maple

indets(z^2-exp(x^2-1)+1+Pi+Gamma*foo()-sin(y),'name');

gives {Γ,π,x,y,z} but in Mathematica

expr=z^2-Exp[x^2-1]+1+Pi+Gamma*foo[]-Sin[y]; 
Cases[expr,_Symbol,Infinity]

gives {e,x,π,z,Gamma,y}

Notice that e shows up in Mathematica, but not in Maple.

20 d’Alembert’s Solution to wave PDE

(added December 13, 2018)

The PDE is

\[ \frac{\partial^2\psi}{\partial t^2} = c^2\,\frac{\partial^2\psi}{\partial x^2} \tag{1} \]

Let

\[ u = x-ct \qquad v = x+ct \]

Then

\[ \frac{\partial\psi}{\partial t} = \frac{\partial\psi}{\partial u}\frac{\partial u}{\partial t} + \frac{\partial\psi}{\partial v}\frac{\partial v}{\partial t} = -c\,\frac{\partial\psi}{\partial u} + c\,\frac{\partial\psi}{\partial v} \tag{2} \]

And

\[ \frac{\partial\psi}{\partial x} = \frac{\partial\psi}{\partial u}\frac{\partial u}{\partial x} + \frac{\partial\psi}{\partial v}\frac{\partial v}{\partial x} = \frac{\partial\psi}{\partial u} + \frac{\partial\psi}{\partial v} \tag{3} \]

Then, from (2)

\[ \frac{\partial^2\psi}{\partial t^2} = -c\left(\frac{\partial^2\psi}{\partial u^2}\frac{\partial u}{\partial t} + \frac{\partial^2\psi}{\partial u\,\partial v}\frac{\partial v}{\partial t}\right) + c\left(\frac{\partial^2\psi}{\partial v^2}\frac{\partial v}{\partial t} + \frac{\partial^2\psi}{\partial v\,\partial u}\frac{\partial u}{\partial t}\right) = c^2\frac{\partial^2\psi}{\partial u^2} + c^2\frac{\partial^2\psi}{\partial v^2} - 2c^2\frac{\partial^2\psi}{\partial u\,\partial v} \tag{4} \]

And from (3)

\[ \frac{\partial^2\psi}{\partial x^2} = \left(\frac{\partial^2\psi}{\partial u^2} + \frac{\partial^2\psi}{\partial u\,\partial v}\right) + \left(\frac{\partial^2\psi}{\partial v^2} + \frac{\partial^2\psi}{\partial v\,\partial u}\right) = \frac{\partial^2\psi}{\partial u^2} + \frac{\partial^2\psi}{\partial v^2} + 2\,\frac{\partial^2\psi}{\partial u\,\partial v} \tag{5} \]

Substituting (4,5) into (1) gives

\[ c^2\frac{\partial^2\psi}{\partial u^2} + c^2\frac{\partial^2\psi}{\partial v^2} - 2c^2\frac{\partial^2\psi}{\partial u\,\partial v} = c^2\frac{\partial^2\psi}{\partial u^2} + c^2\frac{\partial^2\psi}{\partial v^2} + 2c^2\frac{\partial^2\psi}{\partial u\,\partial v} \qquad\Longrightarrow\qquad 4c^2\,\frac{\partial^2\psi}{\partial u\,\partial v} = 0 \]

Since $c\neq 0$ then

\[ \frac{\partial^2\psi}{\partial u\,\partial v} = 0 \]

Integrating w.r.t. $v$ gives

\[ \frac{\partial\psi}{\partial u} = f(u) \]

Integrating w.r.t. $u$

\[ \psi = F(u) + G(v) \]

Therefore

\[ \psi(x,t) = F(x-ct) + G(x+ct) \tag{6} \]

The functions $F,G$ are arbitrary functions found from the initial and boundary conditions, if given. Let the initial conditions be

\[ \psi(x,0) = f_0(x) \qquad \frac{\partial\psi}{\partial t}(x,0) = g_0(x) \]

Where the first condition above is the shape of the string at time $t=0$ and the second condition is the initial velocity.

Applying the first condition to (6) gives

\[ f_0(x) = F(x) + G(x) \tag{7} \]

Applying the second condition gives

\[ g_0(x) = \left[\frac{\partial}{\partial t}F(x-ct)\right]_{t=0} + \left[\frac{\partial}{\partial t}G(x+ct)\right]_{t=0} = \left[-c\,\frac{dF(x-ct)}{d(x-ct)}\right]_{t=0} + \left[c\,\frac{dG(x+ct)}{d(x+ct)}\right]_{t=0} = -c\,\frac{dF(x)}{dx} + c\,\frac{dG(x)}{dx} \tag{8} \]

Now we have two equations (7,8) and two unknowns $F,G$ to solve for. But (8) has derivatives of $F,G$, so to make it easier to solve, we integrate (8) w.r.t. $x$ to obtain

\[ \int^x g_0(s)\,ds = -cF(x) + cG(x) \tag{9} \]

So we will use (9) instead of (8), together with (7), to solve for $F,G$. From (7)

\[ F(x) = f_0(x) - G(x) \tag{10} \]

Substituting (10) in (9) gives

\[ \int^x g_0(s)\,ds = -c\left(f_0(x) - G(x)\right) + cG(x) = -c\,f_0(x) + 2c\,G(x) \qquad\Longrightarrow\qquad G(x) = \frac{1}{2c}\left(\int^x g_0(s)\,ds + c\,f_0(x)\right) \tag{11} \]

Using the above back in (10) gives $F(x)$ as

\[ F(x) = f_0(x) - \frac{1}{2c}\left(\int^x g_0(s)\,ds + c\,f_0(x)\right) \tag{12} \]

Using (11,12) in (6) gives the final solution

\[ \psi(x,t) = F(x-ct) + G(x+ct) = f_0(x-ct) - \frac{1}{2c}\left(\int^{x-ct} g_0(s)\,ds + c\,f_0(x-ct)\right) + \frac{1}{2c}\left(\int^{x+ct} g_0(s)\,ds + c\,f_0(x+ct)\right) = \frac{1}{2}\left(f_0(x-ct) + f_0(x+ct)\right) + \frac{1}{2c}\int_{x-ct}^{x+ct} g_0(s)\,ds \]

The above is the final solution. So if we are given initial position and initial velocity of the string as function of x, we can find exact solution to the wave PDE.
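The formula can be verified numerically for sample data, e.g. $f_0=\sin$ and $g_0=\cos$ (my own choices), by finite-differencing $\psi$ and checking both the PDE and the initial conditions:

```python
import math

c = 1.5
f0, g0, G0 = math.sin, math.cos, math.sin   # G0 is an antiderivative of g0

def psi(x, t):
    """d'Alembert solution: average of shifted shapes plus velocity integral."""
    return 0.5 * (f0(x - c * t) + f0(x + c * t)) \
        + (G0(x + c * t) - G0(x - c * t)) / (2 * c)

x, t, h = 0.3, 0.7, 1e-4
psi_tt = (psi(x, t + h) - 2 * psi(x, t) + psi(x, t - h)) / h ** 2
psi_xx = (psi(x + h, t) - 2 * psi(x, t) + psi(x - h, t)) / h ** 2
pde_residual = psi_tt - c ** 2 * psi_xx

ic_shape = abs(psi(x, 0.0) - f0(x))                        # psi(x,0) = f0(x)
ic_speed = abs((psi(x, h) - psi(x, -h)) / (2 * h) - g0(x)) # psi_t(x,0) = g0(x)
```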

21 Convergence

Definition of pointwise convergence: $f_n(x)$ converges pointwise to $f(x)$ if for each $\varepsilon>0$ and each $x$ there exists an integer $N(\varepsilon,x)$ such that $|f_n(x)-f(x)|<\varepsilon$ for all $n\ge N$.

Definition of uniform convergence: $f_n(x)$ converges uniformly to $f(x)$ if for each $\varepsilon>0$ there exists an integer $N(\varepsilon)$, independent of $x$, such that $|f_n(x)-f(x)|<\varepsilon$ for all $n\ge N$ and all $x$.

Another way to find uniform convergence: first find the pointwise limit of $f_n(x)$, say $f(x)$. Now show that

\[ \|f_n-f\| = \sup_{x\in I}\,|f_n(x)-f(x)| \]

goes to zero as $n\to\infty$. To find $\sup|f_n-f|$ one might need to find the maximum of $|f_n-f|$: differentiate it, set it to zero, find the $x$ where it is maximal, then evaluate $|f_n(x)-f(x)|$ at this maximum. This gives the sup. Then see if this goes to zero as $n\to\infty$.

If a sequence of continuous functions $f_n$ converges uniformly to $f$, then $f$ must be continuous. This gives a quick check for whether uniform convergence exists: first find the pointwise limit $f(x)$ and check if it is continuous or not. If not, then there is no need to check further; uniform convergence does not hold. But if $f(x)$ is a continuous function, we still need to check, because it is possible there is no uniform convergence.
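A grid-based sup-norm check illustrates the difference (my own example): $f_n(x)=\frac{x}{1+nx}\to 0$ uniformly on $[0,1]$, while $g_n(x)=x^n$ converges pointwise to a discontinuous limit, so its sup on $[0,1)$ does not go to zero:

```python
grid = [k / 1000 for k in range(1001)]   # sample points in [0, 1]
n = 50

sup_f = max(abs(x / (1 + n * x)) for x in grid)     # = 1/(1+n), shrinks with n
sup_g = max(abs(x ** n) for x in grid if x < 1)     # stays close to 1
```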

22 Note on when to raise ln to exp when solving an ode

Sometimes in the middle of solving an ode we get ln on both sides. We can raise both sides to exp as soon as these show up, or wait until the end, after solving for the constant of integration. The examples below show we get the same result in both cases.

22.1 Example 1

\[ y' = \frac{2(2y-x)}{x+y} \tag{1} \]

With initial condition $y(0)=2$. This is a homogeneous type ode. It is solved by the substitution $u=\frac{y}{x}$, which results in the new ode in $u$ given by

\[ u' = -\frac{1}{x}\,\frac{u^2-3u+2}{1+u} \]

This is now separable

\[ \frac{du}{dx} = -\frac{1}{x}\,\frac{u^2-3u+2}{1+u} \qquad\Longrightarrow\qquad \frac{1+u}{u^2-3u+2}\,du = -\frac{1}{x}\,dx \]

Integrating gives

\[ 2\ln(1-u) - 3\ln(2-u) = \ln x + c \tag{1A} \]

Replacing $u$ by $\frac{y}{x}$ gives

\[ 2\ln\!\left(1-\frac{y}{x}\right) - 3\ln\!\left(2-\frac{y}{x}\right) = \ln x + c \]
\[ \ln\frac{\left(1-\frac{y}{x}\right)^2}{\left(2-\frac{y}{x}\right)^3} = \ln x + c \]
\[ \ln\frac{\frac{1}{x^2}(x-y)^2}{\frac{1}{x^3}(2x-y)^3} = \ln x + c \]
\[ \ln\frac{x\,(x-y)^2}{(2x-y)^3} = \ln x + c \]
\[ \ln x + \ln\frac{(x-y)^2}{(2x-y)^3} = \ln x + c \tag{1B} \]

$\ln x$ cancels out, giving

\[ \ln\frac{(x-y)^2}{(2x-y)^3} = c \tag{2} \]

Now let's solve for $c$ from the IC $y(0)=2$. The above becomes

\[ \ln\frac{(-2)^2}{(-2)^3} = c \qquad\Longrightarrow\qquad c = \ln\frac{4}{-8} = \ln\!\left(-\frac{1}{2}\right) \]

So the solution (2) is

\[ \ln\frac{(x-y)^2}{(2x-y)^3} = \ln\!\left(-\frac{1}{2}\right) \]

And only now, after $c$ is found, we raise both sides to exp (to simplify it), which gives the solution as

\[ \frac{(x-y)^2}{(2x-y)^3} = -\frac{1}{2} \]

Or

\[ \frac{(x-y)^2}{(y-2x)^3} = \frac{1}{2} \tag{3} \]

Let's see what happens if we raise both sides to exp earlier on, instead of waiting until after solving for the constant of integration, i.e. from step (1A) above

\[ 2\ln(1-u) - 3\ln(2-u) = \ln x + c \qquad\Longrightarrow\qquad \ln\frac{(1-u)^2}{(2-u)^3} = \ln x + c \qquad\Longrightarrow\qquad \frac{(1-u)^2}{(2-u)^3} = e^{\ln x + c} = Ax \]

Where $A$ is a new constant. And only now we replace $u$ by $\frac{y}{x}$, which gives

\[ \frac{\left(1-\frac{y}{x}\right)^2}{\left(2-\frac{y}{x}\right)^3} = Ax \qquad\Longrightarrow\qquad \frac{x\,(x-y)^2}{(2x-y)^3} = Ax \qquad\Longrightarrow\qquad \frac{(x-y)^2}{(2x-y)^3} = A \tag{4} \]

Using the IC $y(0)=2$, the above becomes

\[ \frac{(-2)^2}{(-2)^3} = A \qquad\Longrightarrow\qquad A = -\frac{1}{2} \]

Hence (4) becomes

\[ \frac{(x-y)^2}{(2x-y)^3} = -\frac{1}{2} \qquad\Longrightarrow\qquad \frac{(x-y)^2}{(y-2x)^3} = \frac{1}{2} \]

Which is the same answer obtained earlier in (3). This shows both methods work. It might be better to delay raising to the exponential until the very end, so it is all done in one place.
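The implicit solution can be sanity-checked by integrating the ode of this example, $y'=\frac{2(2y-x)}{x+y}$ with $y(0)=2$, numerically with RK4 and confirming that $\frac{(x-y)^2}{(y-2x)^3}$ stays at $\frac{1}{2}$ along the trajectory (my own sketch; the step size and endpoint are arbitrary choices):

```python
def yprime(x, y):
    return 2 * (2 * y - x) / (x + y)

def invariant(x, y):
    return (x - y) ** 2 / (y - 2 * x) ** 3   # should stay 1/2

# classic RK4 from y(0) = 2 to x = 0.5
x, y, h = 0.0, 2.0, 1e-3
for _ in range(500):
    k1 = yprime(x, y)
    k2 = yprime(x + h / 2, y + h * k1 / 2)
    k3 = yprime(x + h / 2, y + h * k2 / 2)
    k4 = yprime(x + h, y + h * k3)
    y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    x += h

drift = abs(invariant(x, y) - 0.5)
```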

22.2 Example 2

\[ y' = \frac{y^2-2xy-x^2}{y^2+2xy-x^2} \qquad y(1)=-1 \tag{1} \]

This is a homogeneous ode, solved by the substitution $u=\frac{y}{x}$, which results in the new ode in $u$ given by

\[ u' = -\frac{1}{x}\,\frac{u^3+u^2+u+1}{u^2+2u-1} \]

This is separable

\[ \frac{u^2+2u-1}{u^3+u^2+u+1}\,du = -\frac{1}{x}\,dx \]

Integrating gives

\[ \ln(u+1) - \ln(u^2+1) = \ln(x) + c_1 \tag{1} \]

There are two choices now: raise both sides to exp to simplify the $u$ solution, or wait until the end. Option 1:

Replacing $u$ by $\frac{y}{x}$ in (1) gives

\[ \ln\!\left(\frac{y}{x}+1\right) - \ln\!\left(\left(\frac{y}{x}\right)^2+1\right) = \ln(x) + c_1 \]
\[ \ln\frac{\frac{y}{x}+1}{\left(\frac{y}{x}\right)^2+1} = \ln(x) + c_1 \]
\[ \ln\frac{\frac{1}{x}(y+x)}{\frac{1}{x^2}(y^2+x^2)} = \ln(x) + c_1 \]
\[ \ln\frac{x\,(y+x)}{y^2+x^2} = \ln(x) + c_1 \]
\[ \ln x + \ln\frac{y+x}{y^2+x^2} = \ln(x) + c_1 \]
\[ \ln\frac{y+x}{y^2+x^2} = c_1 \tag{2} \]

Now let's solve for $c_1$ from the IC $y(1)=-1$. The above becomes

\[ \ln\frac{0}{2} = c_1 \qquad\Longrightarrow\qquad c_1 = -\infty \]

Hence (2) becomes

\[ \ln\frac{y+x}{y^2+x^2} = -\infty \]

Now raising both sides to exp gives

\[ \frac{y+x}{y^2+x^2} = e^{-\infty} = 0 \qquad\Longrightarrow\qquad y+x = 0 \qquad\Longrightarrow\qquad y = -x \]

Let's see what happens if we raise to exp immediately after solving for $u$, which is the second option. From (1)

\[ \ln\frac{u+1}{u^2+1} = \ln(x) + c_1 \]

Raising both sides to exp gives

\[ \frac{u+1}{u^2+1} = Ax \]

Where $A$ is a new constant. Now we replace $u$ by $\frac{y}{x}$:

\[ \frac{\frac{y}{x}+1}{\left(\frac{y}{x}\right)^2+1} = Ax \qquad\Longrightarrow\qquad \frac{x\,(y+x)}{y^2+x^2} = Ax \qquad\Longrightarrow\qquad \frac{y+x}{y^2+x^2} = A \tag{2} \]

Solving for $A$ from the IC $y(1)=-1$ gives

\[ \frac{0}{2} = A \qquad\Longrightarrow\qquad A = 0 \]

Hence the solution (2) becomes

\[ \frac{\frac{y}{x}+1}{\left(\frac{y}{x}\right)^2+1} = 0 \]

or

\[ \frac{y}{x}+1 = 0 \qquad\Longrightarrow\qquad y = -x \]

So both methods worked, the early one and the later one; both give the same result.

22.3 Example 3

\[ (x+2y)\,y' = 1 \qquad y(0)=-1 \tag{1} \]

This is tricky, as how it is solved needs special handling of the initial conditions. Let us solve it by substituting $z=x+2y$. Then $z'=1+2y'$. The ode now becomes

\[ \frac{z'-1}{2} = \frac{1}{z} \qquad\Longrightarrow\qquad z' = \frac{2}{z}+1 = \frac{2+z}{z} \]

This is separable

\[ \frac{z\,dz}{2+z} = dx \]

Integrating

\[ \int\frac{z\,dz}{2+z} = \int dx \qquad\Longrightarrow\qquad z - 2\ln(2+z) = x + c \tag{1} \]

We could raise both sides to exp now, or wait until after converting back to $y$. Raising to exp now gives

\[ e^{z-2\ln(2+z)} = Ae^x \qquad\Longrightarrow\qquad \frac{e^z}{(2+z)^2} = Ae^x \]

But $z=x+2y$ and the above becomes

\[ \frac{e^{x+2y}}{(2+x+2y)^2} = Ae^x \qquad\Longrightarrow\qquad \frac{e^{2y}}{(2+x+2y)^2} = A \tag{2} \]

Which is the correct solution. Now the IC is used to find $A$. Using $y(0)=-1$ the above becomes

\[ \frac{e^{-2}}{0} = A \]

So $A=\infty$. Hence the solution (2) is

\[ \frac{e^{2y}}{(2+x+2y)^2} = \infty \]

When this happens, to satisfy the above we say that $(2+x+2y)^2 = 0$, or $2+x+2y=0$. This gives $2y=-2-x$. Hence

\[ y = -1 - \frac{x}{2} \]

23 References

Too many references were used, but I will try to remember to start recording the books used from now on. Here is the current list:

  1. Applied Partial Differential Equations, Haberman.
  2. Advanced Mathematical Methods for Scientists and Engineers, Bender and Orszag, Springer.
  3. Boundary Value Problems in Physics and Engineering, Frank Chorlton, Van Nostrand, 1969.
  4. Class notes, Math 322, University of Wisconsin, Madison, Fall 2016, Professor Smith, Math dept.
  5. Mathematical Methods in the Physical Sciences, Mary Boas, second edition.
  6. Mathematical Methods for Physics and Engineering, Riley, Hobson, Bence, second edition.
  7. Various pages, Wikipedia.
  8. MathWorld at Wolfram.
  9. Fourier Series and Boundary Value Problems, 8th edition, James Brown, Ruel Churchill.
  10. A good note on Sturm-Liouville: http://ramanujan.math.trinity.edu/rdaileda/teach/s12/m3357/lectures/lecture_4_10_short.pdf