A place to keep quick notes about math that I keep forgetting. These are meant to be scratch notes and a cheat sheet where I write math notes before I forget them or move them somewhere else. They can and will contain errors and/or incomplete descriptions in a number of places. Use at your own risk.
1 What is a first integral of a differential equation and how to find it?
Let us start with a first order ode; this generalizes to any order. Say our ode is y' = f(x, y). A first integral of the above ode is any function Φ(x, y) such that its rate of change along a solution y(x) is zero, i.e. dΦ/dx = 0. This means
dΦ/dx = ∂Φ/∂x + ∂Φ/∂y · dy/dx
But dy/dx = f(x, y), hence the above is
dΦ/dx = ∂Φ/∂x + ∂Φ/∂y · f(x, y)
If the above comes out to be zero, then Φ is called a first integral of the ode. In the above, we should make sure to replace any y in the RHS with the solution itself of the ode.
Notice that the first integral Φ(x, y) itself is not a constant function. It is its value along a solution, as the independent variable changes, that is constant. We should not mix these two things.
But how do we find the first integral function Φ? This is easy. We solve the ode itself, then move all terms to one side, and that expression is Φ. Let us look at a few examples to make this clear.
It is also possible to find a first integral without solving the ode. We just need to find any Φ such that ∂Φ/∂x + ∂Φ/∂y · f becomes zero, where f is the RHS of the ode y' = f(x, y). So if by inspection or other means we can find such a Φ, there is no need to solve the ode first. For some easy odes, the method of inspection might work. There are more advanced methods for finding first integrals, but here, for simplicity, we assume we have the solution to the ode available.
If we want to find a first integral without having the solution to the first order ode, and if the ode is already exact, then the same method used in solving an exact ode can be used to find the first integral. I.e., if the ode has the form
M(x, y) + N(x, y) y' = 0    (1)
where this is exact (i.e. ∂M/∂y = ∂N/∂x), then we assume a first integral Φ(x, y) exists, hence
dΦ/dx = ∂Φ/∂x + ∂Φ/∂y · y' = 0    (2)
Comparing (1,2) we see that
∂Φ/∂x = M,  ∂Φ/∂y = N
From these two equations we can now find Φ using the same methods we use when solving an exact first order ode. So this is an example where we can find Φ without knowing the solution to the ode. The above works if the ode is exact.
If the ode is not exact, then we try to find an integrating factor which makes the ode exact
first.
The point of this note is to show that a first integral is a function which happens to be constant along solution curves.
The first integral of an ode is not unique; we just need to find one. Even though the ode itself can have a unique solution, there can be many different first integrals Φ. The only condition is that dΦ/dx = 0 along solutions.
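The condition above can be checked mechanically with a computer algebra system. Below is a small sketch using sympy and my own example (not one of the examples in these notes): for the ode y' = −y, the solution is y = c e^{−x}, so moving everything to one side gives the candidate first integral Φ(x, y) = y e^x.

```python
# Verify a first integral with sympy (my own example: the ode y' = -y).
# A first integral Phi(x, y) must satisfy Phi_x + Phi_y * f = 0 where f is
# the RHS of the ode.
import sympy as sp

x, y = sp.symbols('x y')
f = -y                      # RHS of the ode y' = f(x, y)
Phi = y * sp.exp(x)         # candidate first integral

# rate of change of Phi along solutions
dPhi = sp.diff(Phi, x) + sp.diff(Phi, y) * f
print(sp.simplify(dPhi))    # 0, so Phi is a first integral
```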
1.1 Example
The solution can be found to be
Hence the first integral is (moving everything to
one side)
To show this is the first integral, we have to show that dΦ/dx = 0. Computing dΦ/dx = ∂Φ/∂x + ∂Φ/∂y · y' and substituting the RHS of the ode for y', everything cancels. Since dΦ/dx = 0, Φ is a first integral.
1.2 Example
This is linear ode, the solution is
Hence the first integral is (moving everything to
one side)
To show this is the first integral, we have to show that dΦ/dx = 0. Computing dΦ/dx = ∂Φ/∂x + ∂Φ/∂y · y' and substituting the RHS of the ode for y', then replacing the remaining y by the solution itself, the result is zero. Since dΦ/dx = 0, Φ is a first integral.
1.3 Example
The solution can be found to be
Hence the first integral is (moving everything to one side)
To show this is indeed the first integral, we have to show that dΦ/dx = 0. Computing dΦ/dx = ∂Φ/∂x + ∂Φ/∂y · y' and substituting the RHS of the ode for y', the result is zero. Hence Φ is a first integral of the ode.
1.4 Example
The solution is
Hence the first integral is
To show this is the first integral, we have to show that dΦ/dx = 0. Computing dΦ/dx = ∂Φ/∂x + ∂Φ/∂y · y' and substituting the RHS of the ode for y', the result is zero. Hence Φ is a first integral of the ode.
1.5 Example
The solution is
Hence the first integral is
To show this is the first integral, we have to show that dΦ/dx = 0. Computing dΦ/dx = ∂Φ/∂x + ∂Φ/∂y · y' and substituting the RHS of the ode for y', notice that in this example a y remains in the result that did not cancel, as was the case in the first two examples. In this case, we have to replace this y by the solution itself, and the result then becomes zero. Since dΦ/dx = 0, Φ is a first integral of the ode.
2 Special ode’s and their solutions
These are odes whose solutions are in terms of special functions. Will update as I find more. Most of the special functions come up from working out the series solution of a second order ode which has a regular singular point at the expansion point. These are the more interesting odes, which generate these special functions.
2.1 Airy
The Airy ode is y'' − x y = 0. The solution is y = c₁ Ai(x) + c₂ Bi(x).
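The Airy equation can be checked numerically. A quick sanity check (my own, using scipy) that Ai really satisfies y'' = x y: scipy's airy(x) returns (Ai, Ai', Bi, Bi'), so we estimate Ai'' by a central difference of Ai' and compare with x·Ai(x).

```python
# Numeric check that the Airy function Ai satisfies y'' = x*y.
from scipy.special import airy

x0, h = 1.3, 1e-5
Ai = lambda t: airy(t)[0]
Aip = lambda t: airy(t)[1]
Ai_second = (Aip(x0 + h) - Aip(x0 - h)) / (2 * h)   # ~ Ai''(x0)
residual = Ai_second - x0 * Ai(x0)                  # should be ~ 0
print(abs(residual))
```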
2.2 Chebyshev
The Chebyshev ode is (1 − x²) y'' − x y' + n² y = 0. Singular points are at x = −1 and x = 1. The series solution about x = 0 is valid for |x| < 1. If n is a positive integer, then the series solution gives a polynomial solution of degree n; these are called the Chebyshev polynomials.
2.3 Hermite
The Hermite ode is y'' − 2x y' + 2n y = 0. The series solution converges for all x. If n is a positive integer, one series terminates, and the solution is in terms of the Hermite polynomials.
2.4 Legendre
The Legendre ode is (1 − x²) y'' − 2x y' + n(n + 1) y = 0. The series solution is in terms of Legendre functions. When n is a positive integer, one series terminates (i.e. becomes a polynomial). If the ode is given in trigonometric form, then the substitution x = cos θ transforms it to the earlier, more familiar form.
2.5 Bessel
The Bessel ode is x² y'' + x y' + (x² − ν²) y = 0. x = 0 is a regular singular point. The solution is in terms of the Bessel functions J_ν(x) and Y_ν(x).
2.6 Reduced Riccati
For one special case the solution is elementary; the constant in that solution is a root of an auxiliary algebraic equation. For the remaining cases the solution is in terms of special functions.
2.7 Gauss Hypergeometric ode
The solution is in terms of the hypergeometric function. The ode has 3 regular singular points, at x = 0, 1 and ∞.
3 Change of variables and chain rule in differential equation
These are examples of doing a change of variable for an ode.
3.1 Example 1 Change of the independent variable using t = g(x)
Given an ode in y(x), we are asked to do a change of variables from x to t where t = g(x). In this we can also write x = g⁻¹(t), where g⁻¹ is the inverse function. Using the chain rule gives
dy/dx = dy/dt · dt/dx = g'(x) dy/dt
And for the second derivative
d²y/dx² = d/dx (g'(x) dy/dt)
And now we use the product rule on the above, which gives
d²y/dx² = g'(x) d/dx (dy/dt) + g''(x) dy/dt    (1)
Let us do each of the terms on the right above one by one. The second term on the RHS above is easy. It is
g''(x) dy/dt    (2)
It is the first term in (1) which needs more care. The problem is how to handle d/dx (dy/dt), since the denominators are different. The trick is to write it as d/dt (dy/dt) · dt/dx, which does not change anything, but now the denominators match and it is free sailing:
d/dx (dy/dt) = g'(x) d²y/dt²    (3)
Therefore, the first term in (1) becomes g'(x)² d²y/dt². Using (2,3) then we have
d²y/dx² = g'(x)² d²y/dt² + g''(x) dy/dt
Hence the original ode can now be rewritten entirely in terms of t as the independent variable.
OK, since the above was so much fun, let us do the third derivative:
d³y/dx³ = d/dx (g'(x)² d²y/dt² + g''(x) dy/dt)    (4)
Looking at the first term in (4) and using the product rule gives
g'(x)² d/dx (d²y/dt²) + 2 g'(x) g''(x) d²y/dt²
For d/dx (d²y/dt²) we have to use the same trick as before, writing it as d/dt (d²y/dt²) · dt/dx = g'(x) d³y/dt³. Hence the first term in (4) is
g'(x)³ d³y/dt³ + 2 g'(x) g''(x) d²y/dt²    (5)
Now we look at the second term in (4), which is d/dx (g''(x) dy/dt); applying the product rule gives
g'(x) g''(x) d²y/dt² + g'''(x) dy/dt    (6)
That is it. We are done. (5,6) are the two terms in (4). Therefore
d³y/dx³ = g'(x)³ d³y/dt³ + 3 g'(x) g''(x) d²y/dt² + g'''(x) dy/dt
This table shows a summary of the transformation for each derivative when using the change of variables t = g(x):
dy/dx   →  g'(x) dy/dt
d²y/dx² →  g'(x)² d²y/dt² + g''(x) dy/dt
d³y/dx³ →  g'(x)³ d³y/dt³ + 3 g'(x) g''(x) d²y/dt² + g'''(x) dy/dt
Strictly speaking, it would be better to use a different letter for the dependent variable when changing the independent variable, i.e. instead of writing y(t) in all the above, we should write, say, u(t) in its place. So any place where y shows up in a transformed expression, it should be written with the new letter for the dependent variable. But this is not always enforced.
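The derivative transformations above can be verified symbolically. A sketch using sympy and my own concrete choices g(x) = ln x and u(t) = sin t, so that y(x) = sin(ln x):

```python
# Verify the second- and third-derivative transformation formulas for
# t = g(x), using the concrete pair g(x) = ln(x), u(t) = sin(t).
import sympy as sp

x = sp.symbols('x', positive=True)
t = sp.symbols('t')
g = sp.log(x)
u = sp.sin(t)
y = u.subs(t, g)                                   # y(x) = u(g(x))

gp, gpp, gppp = [sp.diff(g, x, n) for n in (1, 2, 3)]
u1, u2, u3 = [sp.diff(u, t, n).subs(t, g) for n in (1, 2, 3)]

# the tabulated formulas
d2 = gp**2 * u2 + gpp * u1
d3 = gp**3 * u3 + 3*gp*gpp*u2 + gppp * u1

assert sp.simplify(sp.diff(y, x, 2) - d2) == 0
assert sp.simplify(sp.diff(y, x, 3) - d3) == 0
print("formulas check out")
```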
3.2 Example 2 Change of the independent variable using Euler ode
Given the Euler ode x² y'' + a x y' + b y = 0, we are asked to do the change of variable t = ln x. Then
dy/dx = (1/x) dy/dt
d²y/dx² = (1/x²) (d²y/dt² − dy/dt)
Hence the original ode becomes
d²y/dt² + (a − 1) dy/dt + b y = 0
which has constant coefficients.
3.3 Example 3 Change of the dependent variable using Euler ode
Given the Euler ode x² y'' + a x y' + b y = 0, try the ansatz y = x^r. Then y' = r x^{r−1} and y'' = r(r − 1) x^{r−2}. Hence the original ode becomes
(r(r − 1) + a r + b) x^r = 0
Solving r(r − 1) + a r + b = 0 for r gives the roots r₁, r₂. Hence the solutions are x^{r₁} and x^{r₂}. The final solution is therefore y = c₁ x^{r₁} + c₂ x^{r₂}.
This method of solving the Euler ode is much simpler than using the change of variables t = ln x, but for some reason most textbooks use the latter.
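The ansatz method is easy to check with sympy. A sketch with my own choice of coefficients, x² y'' + x y' − y = 0, where substituting y = x^r gives r(r − 1) + r − 1 = r² − 1 = 0:

```python
# Solve an Euler ode with the ansatz y = x^r (example coefficients are mine).
import sympy as sp

x = sp.symbols('x', positive=True)
r = sp.symbols('r')
y = x**r
ode = x**2*sp.diff(y, x, 2) + x*sp.diff(y, x) - y
char = sp.simplify(ode / x**r)            # characteristic equation in r only
roots = sp.solve(sp.Eq(char, 0), r)
print(roots)                              # r = -1 and r = 1, so y = c1*x + c2/x
```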
4 Changing the role of independent and dependent variable in an ode
(added Dec 14, 2024).
Given an ode in y(x), we want to change it so that instead of y being the dependent variable, x is the dependent variable. For example, given the ode dy/dx = f(x, y), the new ode becomes dx/dy = 1/f(x, y), which may be easier to solve for x(y). Once solved, we flip back and find y(x) from the solution. Sometimes this trick can make solving a hard ode very easy. It also can make solving an easy ode very hard. The only way to find out is to try it. So if we have an ode that we are having a hard time solving, we can try this trick.
For a first order ode, the method is easy. We just isolate dy/dx, then flip the left hand side and flip the right hand side, and change all y to x and all x to y.
More formally, this can also be done using a change of variables, treating x as a function of y. When we do this change of variables using the chain rule, the result is
dy/dx = 1 / (dx/dy)
d²y/dx² = −(d²x/dy²) / (dx/dy)³
and so on. Once the above is done, the rest is easy: substitute these into the original ode, then rename the variables so that y becomes the independent variable and x the dependent one. The new ode is the flipped ode. We will not change roles for odes higher than second order in these examples.
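The two role-swap formulas above can be verified on a concrete inverse pair. A sketch with sympy, using my own pair y = e^x, x = ln y:

```python
# Check the role-swap formulas dy/dx = 1/x'(y) and d2y/dx2 = -x''(y)/x'(y)^3
# on the inverse pair y = exp(x), x = log(y).
import sympy as sp

xs, ys = sp.symbols('x y', positive=True)
y_of_x = sp.exp(xs)
x_of_y = sp.log(ys)

lhs1 = sp.diff(y_of_x, xs)                        # dy/dx directly
rhs1 = (1/sp.diff(x_of_y, ys)).subs(ys, y_of_x)   # 1/x'(y) evaluated at y(x)
assert sp.simplify(lhs1 - rhs1) == 0

xp = sp.diff(x_of_y, ys)
xpp = sp.diff(x_of_y, ys, 2)
lhs2 = sp.diff(y_of_x, xs, 2)                     # d2y/dx2 directly
rhs2 = (-xpp/xp**3).subs(ys, y_of_x)              # -x''/x'^3 evaluated at y(x)
assert sp.simplify(lhs2 - rhs2) == 0
print("flip formulas verified")
```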
4.1 Example 1
Change the role for the ode
This has solution
Since this is first order, we can do it the
easy way without change of variable. Flip the left side and flip the right side and
do the renaming
If we want to do it via change of variables, the method is: substitute dy/dx = 1/(dx/dy) into the ode, then rename x to y and y to x. This gives the flipped ode. Solving it for x(y), and then solving that back for y, gives the same result as (1). Of course, in this example there is no point in changing the roles; this was just an example.
4.2 Example 2
Change the role for the ode
This has solution
Since this is first order, we will do it
the easy way. Flip the left side and flip the right side and do the renaming. This
gives
Solving this gives
Solving for gives
Which is the same as (1).
4.3 Example 3
Change the role for the ode
Solving the above gives
Since this is first order, we will do it the easy way. First isolate dy/dx, then flip the left side and the right side and rename. Solving for dy/dx from (1) gives
Flipping
In this example, we see that changing roles really paid off, as Eq. (3) is a linear ode in x, but (1) is very hard to solve for y and needs Lie symmetry to solve it. Solving (3), and then solving the result for y, gives the same solutions as (2).
4.4 Example 4
Change the role for the ode
This has solution
Since this is not first order, we cannot do the easy method as with first order; we have to do the change of variables, since with the second derivative it is more complicated. Using the rules given above, we know that
dy/dx = 1/(dx/dy),  d²y/dx² = −(d²x/dy²)/(dx/dy)³    (3)
Substituting (3) into (1), and then renaming x to y and y to x, gives the flipped ode.
And the above is the final
flipped ode. The solution is
To obtain y as a function of x, we just isolate y from the above. This gives the solution to the original ode, obtained by first flipping the ode.
4.5 Example 5
Change the role for the ode
As this stands, it is hard to solve, as it needs Lie symmetry. The solution is
By flipping roles, the ode becomes Bernoulli, which is much easier. Since this is first order,
we will use the easy method. First we isolate from (1) then flip both sides and rename.
Solving for in (1) gives
Flipping and renaming to and to gives
This is in Bernoulli form, which is easily solved. The solution is
The last step is to solve for y as a function of x. Doing so gives the same answer as above. This is an example where flipping roles paid off well. But the only way to know is to try it and see.
4.6 Example 6
Change the role for the ode
As this stands, this is homogeneous class G. The solution
is
By flipping roles, the ode becomes linear, which is much easier to solve. Since this is
first order, we will use the easy method. First we isolate from (1) then flip both
sides and rename. Solving for in (1) gives
Flipping and renaming to and to
gives
Or
Which is linear ode in . Solving gives
The last step is to solve for y, which will give the same solution as above.
4.7 Example 7
Change the role for the ode
As this stands, this can be solved using Lie symmetry
or as an exact ode but with an integrating factor that needs to be found first,
The solution is
And 2 more (too long to type). By flipping roles the new ode
becomes
This has form
Which is a Bernoulli ode, and simpler to solve. Solving
gives
Finally, we solve for y from the above. This will give the same solutions as above.
5 General notes
Some rules to remember. These are in the real domain:
√(x y) = √x √y only for x, y ≥ 0. In general (x y)^{1/n} = x^{1/n} y^{1/n} only for x, y ≥ 0, where n is a positive integer.
√(x²) = x only when x ≥ 0. So be careful when squaring both sides to get rid of a square root on one side. To see this, let x = 2: then √(x²) = x because x is positive. But if we had x = −2 then we can't say that √(x²) = x, since √(x²) is 2 and not −2 (we always take the positive root). So each time we square both sides of an equation to get rid of a √ on one side, always say this is valid only when the other side is not negative.
Generalization of the above: given (x y)^{m/n} where m, n are both integers, then (x y)^{m/n} = x^{m/n} y^{m/n} only when x, y ≥ 0. The only time we can write (x y)^p = x^p y^p for any x, y is when p is an integer (positive or negative). When the power is a ratio of integers, we can split it only under the condition that all terms are positive.
ln(x y) = ln x + ln y only for x, y > 0. This can be used, for example, to simplify a log of a product, but only under the condition that both factors are positive. Generalization of the above: ln(xⁿ) = n ln x only for x > 0 (assuming n is an integer).
Given F = F(x, y), the total differential of F is dF = (∂F/∂x) dx + (∂F/∂y) dy.
A Lyapunov function is used to determine the stability of an equilibrium point. Take this equilibrium point to be zero: someone gives us a set of differential equations, and we assume the origin is an equilibrium point. The question is, how do we determine if it is stable or not? There are two main ways to do this. One is by linearization of the system around the origin. This means we find the Jacobian matrix, evaluate it at the origin, and check the signs of the real parts of the eigenvalues. This is the common way to do it. Another method, called Lyapunov's method, is more direct; no linearization is needed. But we need to find a function V, called a Lyapunov function for the system, which meets the following conditions:
V is a continuously differentiable function of the state, and V ≥ 0 (positive definite or positive semidefinite) for all states away from the origin, or everywhere inside some fixed region around the origin. This function represents the total energy of the system (for Hamiltonian systems). Hence V can be zero away from the origin (the semidefinite case), but it can never be negative.
V = 0 at the origin. This says the system has no energy when it is at the equilibrium point (rest state).
The orbital derivative V̇ ≤ 0 (i.e. negative definite or negative semidefinite) for all states, or inside some fixed region around the origin. The orbital derivative is the same as dV/dt along any solution trajectory. This condition says that the total energy is either constant in time (the zero case) or decreasing in time (the negative definite case), both of which indicate that the origin is a stable equilibrium point.
If V̇ is negative semidefinite then the origin is stable in the Lyapunov sense. If V̇ is negative definite then the origin is an asymptotically stable equilibrium. Negative semidefinite means that when the system is perturbed away from the origin, a trajectory will remain near the origin, since its energy neither increases nor decreases. So it is stable. But an asymptotically stable equilibrium is a stronger kind of stability: when perturbed from the origin, the solution will eventually return to the origin, since the energy is decreasing. Global stability means everywhere, and not just in some closed region around the origin. Local stability means in some closed region around the origin. Global stability is stronger than local stability.
The main difficulty with this method is finding V. If the system is Hamiltonian, then V is the same as the total energy. Otherwise, one has to guess. Typically a quadratic function such as V = a x² + b y² is used (for a system in x, y); then we try to find a, b which make V positive definite everywhere away from the origin and, more importantly, make V̇ ≤ 0. If so, we say the origin is stable. Most of the problems we had start by giving us V and then ask to show it is a Lyapunov function and what kind of stability it implies.
To determine if V is positive definite or not, the common way is to find the Hessian and check the signs of the eigenvalues. Another way is to find the Hessian and check the signs of the leading minors; for a 2 × 2 matrix this means the determinant is positive and the (1,1) entry is positive. A similar check applies to V̇: we find the Hessian of V̇ and do the same thing, but now we check for negative eigenvalues instead.
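A worked sketch of the Lyapunov check, using my own system (not one from these notes): for x' = −x − y, y' = x − y, take V = x² + y² and compute the orbital derivative V̇ = V_x x' + V_y y'.

```python
# Lyapunov-function check on the example system x' = -x - y, y' = x - y.
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = (-x - y, x - y)              # RHS of the system
V = x**2 + y**2                  # candidate Lyapunov function
Vdot = sp.expand(sp.diff(V, x)*f[0] + sp.diff(V, y)*f[1])
print(Vdot)                       # -2*x**2 - 2*y**2
```

Here V̇ = −2(x² + y²) is negative definite, so the origin is asymptotically stable for this system.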
Methods to find Green function are
Fredholm theory
methods of images
separation of variables
Laplace transform
(reference: Wikipedia). I need to make one example and apply each of the above methods to it.
In solving an ODE with constant coefficients, just use the characteristic equation to find the solution.
In solving an ODE whose coefficients are functions of the independent variable, as in y'' + p(x) y' + q(x) y = 0, first classify the point type. This means checking how p(x) and q(x) behave at the expansion point x₀. We are talking about the ODE here, not the solution yet.
There are 3 kinds of points. x₀ can be a normal point, a regular singular point, or an irregular singular point. A normal point means p(x) and q(x) have Taylor series expansions that converge at x₀. A regular singular point means that the above test fails, but (x − x₀) p(x) has a convergent Taylor series, and also (x − x₀)² q(x) now has a convergent Taylor series at x₀. This also means the limits of (x − x₀) p(x) and (x − x₀)² q(x) as x → x₀ exist.
All this just means we can get rid of the singularity, i.e. it is a removable singularity. If this is the case, then the solution at x₀ can be assumed to have a Frobenius series y = Σ aₙ (x − x₀)^{n+r}, where r is a root of the Frobenius indicial equation. There are three cases to consider. See
https://math.usask.ca/~cheviakov/courses/m338/text/Frobenius_Case3_ill.pdf for
more discussion on this.
The third type of point is the hard one, called an irregular singular point. We can't get rid of it using the above. So we also say the ODE has an essential singularity at x₀ (another fancy name for an irregular singular point). What this means is that we can't approximate the solution at x₀ using either a Taylor or a Frobenius series.
If the point is an irregular singular point, then use the methods of asymptotics. See Advanced Mathematical Methods for Scientists and Engineers, chapter 3. For a normal point, use y = Σ aₙ (x − x₀)ⁿ; for a regular singular point use y = Σ aₙ (x − x₀)^{n+r}. Remember to solve for r first. This should give two values. If you get only one root, then use reduction of order to find the second solution.
An asymptotic series S(x) is a series expansion of a function f(x) which gives a good and rapid approximation for large x, as long as we know when to truncate it before it becomes divergent. This is the main difference between an asymptotic series expansion and a Taylor series expansion:
an asymptotic series is used to approximate a function for large x, while a Taylor (or power) series is used for local approximation, a small distance away from the point of expansion. The asymptotic series will eventually become divergent, hence it needs to be truncated at some number of terms N to be used. It is optimally truncated near its smallest term.
The partial sums S_N(x) have the following two important properties:
x^N (f(x) − S_N(x)) → 0 as x → ∞, for fixed N.
x^N (f(x) − S_N(x)) → ∞ as N → ∞, for fixed x.
We write f(x) ∼ S(x) when S is the asymptotic series expansion of f for large x. The most common method to find S is integration by parts. At least this is what we did in the class I took.
For a Taylor series, the leading behavior is the first term, and there is no controlling factor. For a Frobenius series, the leading behavior term is a₀ (x − x₀)^r and the controlling factor is (x − x₀)^r. For an asymptotic series, the controlling factor is assumed to be exponential, e^{S(x)}; this form was proposed by Carlini (1817).
The method for finding the leading behavior of the solution near an irregular singular point using asymptotics is called the method of dominant balance.
When solving a problem with a very small parameter multiplying the highest derivative, use the WKB method if there is no boundary layer between the boundary conditions. If the ODE is non-linear, WKB can't be used; one has to use the boundary layer (B.L.) method.
A good exercise is to solve the same problem using both B.L. and WKB and compare the solutions; they should come out the same. With B.L. one has to do the matching between the outer and the inner solutions. WKB is easier, but can't be used for non-linear ODEs.
When there is rapid oscillation over the entire domain, WKB is better. Use WKB to solve the Schrödinger equation, where the small parameter is Planck's constant ħ (units kg·m²/s).
In a second order ODE with non-constant coefficients, y'' + p(x) y' + q(x) y = 0, if we know one solution y₁, then a method called reduction of order can be used to find the second solution y₂. Write y₂ = u(x) y₁, plug this into the ODE, and solve for u. The final solution will be y = c₁ y₁ + c₂ y₂. Now apply the I.C.'s to find c₁, c₂.
To find a particular solution to y'' + p(x) y' + q(x) y = f(x), we can use a method called undetermined coefficients. But a better method is called variation of parameters. In this method, assume y_p = u₁ y₁ + u₂ y₂, where y₁, y₂ are the two linearly independent solutions of the homogeneous ODE and u₁, u₂ are to be determined. This ends up with
u₁ = −∫ (y₂ f / W) dx,  u₂ = ∫ (y₁ f / W) dx
Remember to put the ODE in standard form first, i.e. the coefficient of y'' must be 1. In here, W is the Wronskian
W = y₁ y₂' − y₂ y₁'
Two solutions of y'' + p(x) y' + q(x) y = 0 are linearly independent if W ≠ 0, where W is the Wronskian.
For a second order linear ODE defined over the whole real line, the Wronskian is either always zero or never zero. This comes from Abel's formula for the Wronskian, which is W = C e^{−∫ p dx} for an ODE of the form y'' + p(x) y' + q(x) y = 0. Since the exponential is never zero, everything is decided by C, the constant of integration. If C = 0, then W = 0 everywhere; else it is not zero anywhere.
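A quick sympy check of this, with my own example y'' + y = 0 (so p = 0 and Abel's formula says W is a nonzero constant), using the pair y₁ = cos x, y₂ = sin x:

```python
# Wronskian of cos(x), sin(x) for y'' + y = 0: by Abel's formula (p = 0)
# it must be a constant, and here it is 1 everywhere (never zero).
import sympy as sp

x = sp.symbols('x')
y1, y2 = sp.cos(x), sp.sin(x)
W = sp.simplify(y1*sp.diff(y2, x) - y2*sp.diff(y1, x))
print(W)    # 1, so cos and sin are independent on the whole real line
```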
For linear PDEs, if the boundary conditions are time dependent, separation of variables can not be used. Try a transform method (Laplace or Fourier) to solve the PDE.
If unable to invert the Laplace transform analytically, try numerical inversion or asymptotic methods. Need to find an example of this.
A Green's function takes the homogeneous solution and the forcing function and constructs a particular solution. For PDEs, we always want a symmetric Green's function.
To get a symmetric Green's function for an ODE, start by converting the ODE to Sturm-Liouville form first. This way the Green's function comes out symmetric.
For numerical solutions of field problems, there are basically two different classes of problem: those with closed boundaries, and those with open boundaries but with initial conditions. Closed boundary problems are elliptic, and the others are either hyperbolic or parabolic.
For the numerical solution of elliptic problems, the basic layout is something like this: always start with a trial solution u_trial = Σ Cᵢ φᵢ, where the Cᵢ are the unknowns to be determined and the φᵢ are a set of linearly independent functions (polynomials) over the domain.
How to determine those Cᵢ comes next. Use either a residual method (Galerkin) or a variational method (Ritz). For the residual method, we form a function based on the error R, and it all comes down to driving a weighted residual to zero over the domain. This is a picture:
- Residual methods (drive a weighted residual R to zero): collocation (absolute error at points), subdomain, and orthogonality methods; the orthogonality methods include the method of moments, Galerkin, and least squares.
- Variational method: substitute u_trial into I(u), where I(u) is the functional to minimize.
Geometric probability distribution. Use it when you want an answer to the question: what is the probability of having to do the experiment n times to finally get the outcome you are looking for, given p, the probability of that outcome showing up from doing the experiment once.
For example: what is the probability of having to flip a fair coin n times to get the first head? The answer is P(X = n) = (1 − p)^{n−1} p. For a fair coin p = 1/2, so the probability of having to flip the coin n times to get the first head is (1/2)ⁿ, which for large n is very low, as expected.
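The formula is easy to sanity-check against a simulation (my own quick sketch):

```python
# Compare P(first head on flip n) = (1-p)^(n-1) * p against a coin-flip
# simulation for a fair coin and n = 3.
import random

random.seed(1)
p, n, trials = 0.5, 3, 200_000
exact = (1 - p)**(n - 1) * p                 # 0.125 for n = 3

hits = 0
for _ in range(trials):
    flips = 0
    while True:
        flips += 1
        if random.random() < p:              # got a head
            break
    if flips == n:
        hits += 1
print(exact, hits / trials)                  # the two should be close
```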
To generate a random variable drawn from some distribution different from the uniform distribution, using only the uniform distribution, do this. Let's say we want to generate a random number from the exponential distribution with mean μ.
The first step is to find the cdf of the exponential distribution, which is known to be F(x) = 1 − e^{−x/μ}.
Now find the inverse of this, which is F⁻¹(u) = −μ ln(1 − u). Then generate a random number u from the uniform distribution over [0, 1].
Now plug this value into F⁻¹; this gives a random number from the exponential distribution, which will be −μ ln(1 − u) (obtained by taking the natural log of both sides of u = 1 − e^{−x/μ} and solving for x).
This method can be used to generate random variables from any other distribution by knowing only the uniform distribution. But it requires knowing the CDF and the inverse of the CDF of the target distribution. This is called the inverse CDF method. Another method is called the rejection method.
Given U, a r.v. from the uniform distribution over [0,1], then to obtain X, a r.v. from the uniform distribution over [A,B], the relation is X = A + (B − A) U.
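The two recipes above can be sketched in a few lines of Python (my own sketch):

```python
# Inverse-CDF method: exponential samples with mean mu from uniform [0,1)
# draws, plus the [A,B] uniform rescaling X = A + (B-A)*U.
import random, math

random.seed(42)
mu = 2.0
samples = [-mu * math.log(1 - random.random()) for _ in range(200_000)]
print(sum(samples) / len(samples))        # sample mean, close to mu = 2.0

A, B = 3.0, 7.0
xab = A + (B - A) * random.random()       # uniform over [A, B]
assert A <= xab <= B
```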
When solving using F.E.M., it is best to do everything using isoparametric elements (natural coordinates), then find the Jacobian of the transformation between the natural and physical coordinates to evaluate the integrals needed. For the force function, use the Gaussian quadrature method.
A solution to a differential equation is a function that can be expressed as a convergent series. (Cauchy, Briot and Bouquet, Picard)
To solve a first order ODE using an integrating factor: given
y' + p(x) y = q(x)    (1)
then, as long as it is linear and p, q are integrable functions in x, follow these steps:
multiply the ODE by a function μ(x); this is called the integrating factor.
We solve for μ such that the left side satisfies
d/dx (μ y) = μ y' + μ p y
Solving the above for μ gives μ'/μ = p. Integrating both sides gives
μ = e^{∫ p dx}    (2)
Now equation (1) can be written as
d/dx (μ y) = μ q
We now integrate the above to give
μ y = ∫ μ q dx + c
where μ is given by (2). Hence
y = (∫ μ q dx + c) / μ
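The steps above can be checked with sympy on a concrete example of my own, y' + y = x, where μ = e^x and the recipe gives y = x − 1 + c e^{−x}:

```python
# Integrating-factor recipe for y' + y = x, then verify the result
# satisfies the ode.
import sympy as sp

x, c = sp.symbols('x c')
p, q = 1, x
mu = sp.exp(sp.integrate(p, x))                 # mu = e^x
y = sp.simplify((sp.integrate(mu*q, x) + c) / mu)
residual = sp.simplify(sp.diff(y, x) + p*y - q) # plug back into the ode
print(y, residual)                              # residual is 0
```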
A polynomial is called ill-conditioned if a small change to one of its coefficients causes a large change to one of its roots.
To find the rank of a matrix by hand, find the row echelon form, then count how many zero rows there are and subtract that from the number of rows, i.e. rank = (number of rows) − (number of zero rows).
To find a basis of the column space of A, find the row echelon form and pick the columns with the pivots; the corresponding columns of A are the basis (the linearly independent columns of A).
For a symmetric matrix A, its 2-norm is its spectral radius, which is the largest eigenvalue of A in absolute value.
The eigenvalues of the inverse of a matrix A are the inverses of the eigenvalues of A.
If a matrix A of order n × n has n distinct eigenvalues, then it can be diagonalized as A = P D P⁻¹, where D is the diagonal matrix of eigenvalues and P is the matrix that has the eigenvectors as its columns.
A limit can be moved inside an integral only if the integrand converges uniformly over the region of integration.
The matrix equation A³ = I has an infinite number of solutions. Think of A as a rotation by 120°; applying it 3 times brings us back to where we started. The rotation can be around any straight line through the origin, hence the infinite number of solutions.
How to integrate by substitution: let u be a suitable inner function so that the integral becomes one in u alone; if needed, make a second substitution; integrate, then substitute back in reverse order to return to the original variable.
(added Nov. 4, 2015) Made small diagram to help me remember long division terms
used.
If a linear ODE is equidimensional, as in for example x² y'' − 2 y = 0, then use the ansatz y = x^r. This will give an equation in r only. Solve for r to obtain the roots, and the solution will be y = c₁ x^{r₁} + c₂ x^{r₂}. For the example ode above, the equation is r(r − 1) − 2 = 0 with roots r = 2 and r = −1, so the solution is y = c₁ x² + c₂/x. This ansatz works only if the ODE is equidimensional, so it can't be used on, say, y'' + x y = 0.
If r is a multiple root, use x^r and x^r ln x as the solutions.
For x^x, where x > 0, write it as e^{x ln x}; hence it can be differentiated the usual way.
Some integral tricks: for √(a² − x²) use x = a sin θ; for √(a² + x²) use x = a tan θ; and for √(x² − a²) use x = a sec θ.
An ode of the form y'' = a xⁿ yᵐ is called the Emden-Fowler form.
For a second order ODE boundary value problem with an eigenvalue (Sturm-Liouville), remember that having two boundary conditions is not enough to fully solve it.
One boundary condition is used to find the first constant of integration, and the second boundary condition is used to find the eigenvalues.
We still need another input to find the second constant of integration. This is normally done by giving an initial value; this situation happens as part of an initial value, boundary value problem. The point is, with boundary values and an eigenvalue also present, we need 3 inputs to fully solve it. Two boundary conditions are not enough.
If given an ODE and we are asked to classify whether it is singular at x = ∞, then let x = 1/t and check what happens at t = 0. The operator d/dx becomes −t² d/dt, and the operator d²/dx² becomes t⁴ d²/dt² + 2t³ d/dt. Write the ode now with t as the independent variable, and follow standard operating procedures, i.e. look at the new p(t) and q(t) and see if these are finite or not at t = 0. To see how the operators are mapped, always start with x = 1/t, then write dy/dx = (dy/dt)(dt/dx) and note that dt/dx = −t². After the transformation, remember to change every x to 1/t, find the new p and q, and then classify t = 0 as before.
If the ODE has an essential singularity at either end of the domain, then use the boundary layer or WKB methods. The boundary layer method works on non-linear ODEs (and also on linear ODEs), but only if the boundary layer is at an end of the domain.
The WKB method, on the other hand, works only on linear ODEs, but the singularity can be anywhere (i.e. inside the domain). As a rule of thumb: if the ODE is linear, use WKB; if the ODE is non-linear, we must use a boundary layer.
Another difference is that with the boundary layer method, we need to do a matching phase at the interface between the boundary layer and the outer layer in order to find the constants of integration. This can be tricky, and it is the hardest part of solving using boundary layers.
Using WKB, no matching phase is needed; we apply the boundary conditions to the whole solution obtained. See my HWs for NE 548 for problems solved from the Bender and Orszag textbook.
In numerical analysis, to find if a scheme will converge, check that it is stable and also check that it is consistent.
It could also be conditionally stable, or unconditionally stable, or unstable.
To check it is consistent, this is the same as finding the LTE (local truncation error) and
checking that as the time step and the space step both go to zero, the LTE goes to zero.
What is the LTE? You take the scheme and plug the actual solution into it. An example is better to explain this part. Let us solve u_t = c u_xx. Using forward difference in time and centered difference in space, the numerical scheme (explicit) is
u(x, t + k) = u(x, t) + (c k / h²) (u(x + h, t) − 2 u(x, t) + u(x − h, t))    (1)
where k is the time step (also written as Δt) and h is the space step size. The LTE is the difference between the two sides of (1) (the error) when the exact solution is plugged in. Now comes the main trick: expand the terms u(x, t + k), u(x + h, t) and u(x − h, t) in Taylor series about (x, t), and plug these expansions back into (1). Simplifying, many things drop out, and we should obtain that LTE = O(k) + O(h²), which goes to zero as k → 0 and h → 0. Hence the scheme is consistent.
To check that it is stable, use the Von Neumann method for stability. This checks that the solution at the next time step does not become larger than the solution at the current time step. There can be a condition for this, such as: the scheme is stable provided c k / h² ≤ 1/2. This says that using this scheme, it will be stable as long as the time step satisfies k ≤ h²/(2c). This makes the time step much smaller than the space step.
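The stability condition is easy to see numerically. A small sketch (my own, for u_t = u_xx with a sine initial condition and zero boundary conditions): with r = k/h² ≤ 1/2 the explicit scheme stays bounded, and with r > 1/2 roundoff noise is amplified until the solution blows up.

```python
# FTCS scheme for u_t = u_xx: stable for r = k/h^2 <= 1/2, unstable otherwise.
import math

def ftcs_max(r, steps=200, n=21):
    """Run the explicit scheme and return the largest |u| at the end."""
    h = 1.0 / (n - 1)
    u = [math.sin(math.pi * i * h) for i in range(n)]   # initial condition
    for _ in range(steps):
        new = u[:]
        for i in range(1, n - 1):
            new[i] = u[i] + r * (u[i+1] - 2*u[i] + u[i-1])
        u = new
    return max(abs(v) for v in u)

print(ftcs_max(0.4))   # decays: stays below 1
print(ftcs_max(0.6))   # unstable: grows enormously
```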
For a monic polynomial xⁿ + a_{n−1} x^{n−1} + … + a₀ with roots r₁, …, rₙ, the relations between roots and coefficients (Vieta's formulas) include r₁ + … + rₙ = −a_{n−1} and r₁ r₂ ⋯ rₙ = (−1)ⁿ a₀.
Leibniz's rule for differentiation under the integral sign:
d/dx ∫_{a(x)}^{b(x)} f(x, t) dt = f(x, b(x)) b'(x) − f(x, a(x)) a'(x) + ∫_{a(x)}^{b(x)} ∂f/∂x dt
A differentiable function is continuous, but a continuous function need not be differentiable. An example is the function |x|, which is continuous everywhere but not differentiable at x = 0.
Mean curvature being zero is a characteristic of minimal surfaces.
How to find phase difference between 2 signals ? One way is to find the DFT of both signals (in
Mathematica this is Fourier, in Matlab fft()), then find where the bin where peak frequency is
located (in either output), then find the phase difference between the 2 bins at that location.
Value of DFT at that bin is complex number. Use Arg in Mathematica to find its phase. The
difference gives the phase difference between the original signals in time domain. See
https://mathematica.stackexchange.com/questions/11046/how-to-find-the-phase-difference-of-two-sampled-sine-waves
for an example.
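The same steps can be sketched in Python with numpy (the sample rate, test frequency, and phase shift below are assumed values for illustration; np.angle plays the role of Arg):

```python
import numpy as np

fs = 1000.0                       # sampling rate in Hz (assumed for this sketch)
f0 = 50.0                         # common frequency of the two test signals
true_shift = 0.7                  # phase difference (radians) we try to recover

t = np.arange(0.0, 1.0, 1.0 / fs)    # exactly 1 s: 50 whole cycles, no leakage
s1 = np.sin(2 * np.pi * f0 * t)
s2 = np.sin(2 * np.pi * f0 * t + true_shift)

S1 = np.fft.rfft(s1)
S2 = np.fft.rfft(s2)
k = int(np.argmax(np.abs(S1)))               # bin of the peak frequency
measured = np.angle(S2[k]) - np.angle(S1[k]) # phase difference at that bin
```

Because an integer number of cycles fits in the window, the recovered phase difference matches the true shift almost exactly; with leakage one would window the signals first.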
Watch out when squaring both sides of an equation. For example, given , squaring both sides
gives . But this is only true for . Why? Let us take the square root of this in order to get
back to the original equation. This gives . And here is the problem: only for . Why? Let us
assume . Then , which is not . So when squaring both sides of the equation,
remember this condition.
Do not replace by , but by , since only for non-negative .
Given an equation, and we want to solve for . We can square both sides in order to get rid of
sqrt if needed on one side. But be careful. Even though after squaring both sides, the new
equation is still true, the solutions of the new equation can include an extraneous solution
that does not satisfy the original equation. Here is an example I saw on the internet which
illustrates this. Given , and we want to solve for . Squaring both sides gives . This has
solutions . But only is a valid solution of the original equation before squaring.
The solution is extraneous. So we need to check all solutions found after squaring
against the original equation, and remove the extraneous ones. In summary, if
then this does not mean that . But if then it means that . For example . But
.
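Since the note's own equation is elided, here is a minimal sympy sketch of the same pitfall using the hypothetical equation √x = x − 2: squaring gives a quadratic with two roots, only one of which satisfies the original.

```python
import sympy as sp

x = sp.symbols('x', real=True)

# hypothetical example (the note's own equation is not shown): sqrt(x) = x - 2
original = sp.Eq(sp.sqrt(x), x - 2)
squared = sp.Eq(x, (x - 2)**2)          # after squaring both sides

candidates = sp.solve(squared, x)       # roots of the squared equation
valid = [r for r in candidates
         if original.subs(x, r) == True]  # keep only roots of the original
```

The squared equation has roots 1 and 4, but substituting back shows only 4 solves the original: x = 1 is the extraneous solution introduced by squaring.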
How to find Laplace transform of product of two functions?
There is no general formula for the Laplace transform of a product . (But if this were a convolution, it would be
a different story.) But you could always try the definition and see if you can integrate it. Since
then . Hence for this becomes
Let then
Similarly for
Let then
Similarly for
Let then
And so on. Hence we see that for
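The worked products above are elided, but the point — that a specific product can still be pushed through the defining integral even though no product rule exists — can be checked in sympy. The product t·e^t below is a hypothetical example, not the one from the notes:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# No general product rule exists, but a specific product can be
# evaluated directly from the defining integral:
F = sp.laplace_transform(t * sp.exp(t), t, s, noconds=True)
```

Here the transform comes out as 1/(s−1)², consistent with the shift/multiplication-by-t rules applied by hand.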
6 Converting first order ODE which is homogeneous to separable ODE
(Added July, 2017).
If the ODE has both and homogeneous functions of the same power, then the ODE can be
converted to a separable one. Here is an example. We want to solve
The above is homogeneous in ,
since the total powers of each term in them add to .
So we look at each term in and and add all
the powers of in them. All powers should add to the same value, which is in this case. Of
course should be polynomials for this to work. So one should check that they are
polynomials in before starting this process. Once we check that are homogeneous, we let
Therefore now
And
And
Substituting (3,4,5) into (1) gives
Dividing by it simplifies to
Which can be written as
We see that it is now separable. We now solve this for by direct integration of both sides
And then using find .
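Since the example ODE itself is elided, here is a hypothetical homogeneous ODE of the same kind checked with sympy, which applies the v = y/x substitution internally:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.Function('y')

# hypothetical homogeneous example: y' = (x**2 + y**2)/(x*y);
# every term in numerator and denominator has total degree 2
ode = sp.Eq(y(x).diff(x), (x**2 + y(x)**2) / (x * y(x)))

sols = sp.dsolve(ode)
sols = sols if isinstance(sols, list) else [sols]
ok = all(sp.checkodesol(ode, s)[0] for s in sols)   # verify each solution
```

checkodesol substitutes each returned solution back into the ODE, confirming the substitution method works on this example.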
7 Direct solving of some simple PDE’s
Some simple PDE’s can be solved by direct integration; here are a few examples.
Example 1
Integrating w.r.t. , and remembering that the constant of integration will now be a function of ,
hence
Example 2
Integrating once w.r.t. gives
Integrating again gives
Example 3
Integrating
once w.r.t. gives
Integrating again gives
Example 4
Integrating once w.r.t gives
Integrating
again w.r.t. gives
Example 5
Solve with . Let , therefore
Comparing the above with the given PDE, we see that if then
or is constant. At we are given that
To find , from we obtain that . At , . Hence or
Hence
solution from (1) becomes
Example 6
Solve .
Let , therefore
Comparing the above with the given PDE, we see that if then or Hence
At ,
. Let . Therefore
Now we need to find . From , then or , hence and the above
becomes
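The direct-integration idea can be sanity-checked symbolically. As a hypothetical example of the pattern in Examples 2–4, for the PDE u_xy = 0, integrating once in each variable gives u = F(x) + G(y) with arbitrary functions F and G:

```python
import sympy as sp

x, y = sp.symbols('x y')
F = sp.Function('F')
G = sp.Function('G')

# general solution of u_xy = 0 obtained by direct integration:
# integrating in x gives u_y = G'(y), integrating in y gives u = F(x) + G(y)
u = F(x) + G(y)
check = sp.diff(u, x, y)        # substitute back into the PDE
```

The mixed derivative vanishes identically, confirming the arbitrary-function form.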
8 Fourier series flow chart
(added Oct. 20, 2016)
8.1 Theorem on when we can do term by term differentiation
If on is continuous (notice: NOT piecewise continuous), meaning has no jumps in it, and
exists on and is either continuous or piecewise continuous (notice that can be
piecewise continuous (P.W.C.), i.e. have a finite number of jump discontinuities), and also, and
this is very important, that , then we can do term by term differentiation of the Fourier series
of and use instead of . Not only that, but the term by term differentiation of the Fourier
series of will give the Fourier series of itself.
So the main restrictions here are that on is continuous (no jump discontinuities) and that .
So look at first and see if it is continuous or not (remember, the whole of has to be
continuous, not piecewise, so no jump discontinuities). If this condition is met, check whether
.
For example, on is continuous, but , so the F.S. of can’t be term by term differentiated
(well, it can, but the result will not be the Fourier series of ). So we should not do term by
term differentiation in this case.
But the Fourier series for can be term by term differentiated, since its is continuous and
it meets all the conditions. Also, the Fourier series for can be term by term differentiated.
This has its being P.W.C. due to a jump at , but that is OK, as is allowed to be P.W.C.;
it is which is not allowed to be P.W.C.
There is a useful corollary that comes from the above. If meets all the conditions above,
then its Fourier series is absolutely convergent and also uniformly convergent. The M-test
can be used to verify that the Fourier series is uniformly convergent.
8.2 Relation between the coefficients of the Fourier series of and the Fourier series of
If term by term differentiation allowed, then let
Then
And Bessel’s inequality instead of now becomes . So it is stronger.
8.3 Theorem on convergence of Fourier series
If is piecewise continuous on and if it is periodic with period , and if at every point of the
entire domain both the left-sided derivative and the right-sided derivative exist (but these
do not have to be the same!), then the Fourier series of converges,
and it converges to the average of at each point, including points that have jump
discontinuities.
9 Laplacian in different coordinates
(added Jan. 10, 2019)
10 Linear combination of two solutions is a solution to the ODE
If are two solutions to then to show that is also a solution:
Multiply the first ODE by and the second ODE by
Add the above two equations, using linearity of differentiation
Therefore satisfies the original
ODE. Hence it is a solution.
11 To find the Wronskian ODE
Since
Where are two solutions to Write
Multiply the first ODE above by and the second by
Subtract the second from the first
But
And
Substituting (2,3) into (1) gives the Wronskian differential equation
Whose solution is
Where is constant of integration.
Remember: does not mean the two functions are linearly dependent. The functions can still
be linearly independent on another interval; it just means can’t be in the domain of the
solution for the two functions to be solutions. However, if the two functions are linearly
dependent, then this implies everywhere. So to check whether two functions are L.D., we need to
show that everywhere.
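The Wronskian ODE derived above (Abel's formula) can be checked on a concrete case. The Euler equation below is a hypothetical example chosen because its two solutions are elementary:

```python
import sympy as sp

x = sp.symbols('x', positive=True)

# two solutions of the (hypothetical example) Euler equation
#   x**2*y'' + x*y' - y = 0,  i.e.  y'' + (1/x)*y' - (1/x**2)*y = 0
y1, y2 = x, 1 / x

W = sp.simplify(y1 * sp.diff(y2, x) - y2 * sp.diff(y1, x))   # Wronskian
abel = sp.exp(-sp.integrate(sp.Integer(1) / x, x))           # e^{-∫p dx}, p = 1/x
```

The computed Wronskian is −2/x, and Abel's formula gives C/x with C a constant, so the two agree (with C = −2), as the derivation predicts.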
12 Green functions notes
The Green’s function is what is called the impulse response in control theory. But it is more general, and can
be used for solving PDEs also.
Given a differential equation with some forcing function on the right side, to solve
it we replace the forcing function with an impulse. The solution of the DE is now
called the impulse response, which is the Green’s function of the differential
equation.
Now to find the solution to the original problem with the original forcing function, we just
convolve the Green function with the original forcing function. Here is an example. Suppose
we want to solve with zero initial conditions. Then we solve . The solution is . Now . This
is for an initial value problem. For example, , with . Then we solve . The solution is ; this is for
a causal system. Hence . The nice thing here is that once we find , we can solve
for any by just convolving the Green function (impulse response) with the new
.
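The convolution step can be demonstrated numerically. Since the note's example ODE is elided, the sketch below assumes the hypothetical first-order problem y′ + y = f with y(0) = 0, whose impulse response is g(t) = e^(−t):

```python
import numpy as np

dt = 0.001
t = np.arange(0.0, 5.0, dt)

# hypothetical example: y' + y = f(t), y(0) = 0.
# The impulse response (Green's function) is g(t) = exp(-t),
# and the solution is the convolution y = g * f.
g = np.exp(-t)
f = np.ones_like(t)                        # forcing f(t) = 1

y = np.convolve(f, g)[: len(t)] * dt       # discrete convolution ≈ ∫ g(t-s) f(s) ds
exact = 1.0 - np.exp(-t)                   # known closed-form solution
err = float(np.max(np.abs(y - exact)))
```

The convolved solution matches the closed form 1 − e^(−t) to within the discretization error, and the same g handles any new forcing f by just redoing the convolution.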
We can think of Green function as an inverse operator. Given , we want to find solution . So
in a sense, is like .
Need to add notes on the Green’s function for the Sturm-Liouville boundary value ODE. Need to be
clear on what boundary conditions to use. What if the B.C. is not homogeneous?
Green function properties:
is continuous at . This is where the impulse is located.
The derivative just before is not the same as just after , i.e. . This means there
is a discontinuity in the derivative.
should satisfy same boundary conditions as original PDE or ODE (this is for
Sturm-Liouville or boundary value problems).
for
is symmetric. i.e. .
When solving for in the context of 1D, with two boundary conditions, one at each end, and a
second order ODE (Sturm-Liouville), we now get two solutions, one for and one for
.
So we have constants of integration to find (for a second order ODE), not just two
constants as one would normally get, since we now have two different solutions. Two of these
constants come from the two boundary conditions, and two more come from the properties of the Green’s
function mentioned above.
13 Laplace transform notes
Remember that and . For example, if we are given , then . Do not do ! That would be a big
error. We use this a lot when asked to write a piecewise function using Heaviside
functions.
14 Series, power series, Laurent series notes
If we have a function represented as a series (say a power series or Fourier series), then we say
the series converges to uniformly in a region if, given , we can find a number which depends only
on , such that .
Where is the partial sum of the series using terms. The difference between uniform
convergence and non-uniform convergence is that with uniform convergence the number depends only on
and not on the point at which we are trying to approximate . In non-uniform convergence, the
number depends on both and . This means at some locations in we need a much
larger than in other locations to converge to with the same accuracy. Uniform
convergence is better. It depends on the basis functions used to approximate in the
series.
If the function is discontinuous at some point, then it is not possible to obtain uniform
convergence there. As we get closer and closer to the discontinuity, more and more
terms are needed to obtain the same approximation as away from the discontinuity,
hence no uniform convergence. For example, the Fourier series approximation of a
step function cannot be uniformly convergent due to the discontinuity in the step
function.
Geometric series:
Binomial series:
General binomial is
From the above we can generate all other special cases. For example,
This works for positive and negative , rational or not. The sum converges only for .
From this, we can derive the above sums for the geometric series also. For example, for the
above becomes
For , we can still find a series expansion in negative powers of as follows
And now since , we can use the binomial expansion to expand the term in the above and obtain
a convergent series, since now . This gives the following expansion
So everything is the same; we just replace with and remember to multiply the whole
expansion by . For example, for
These tricks are very useful when working with Laurent series.
Arithmetic series:
i.e. the sum is times the arithmetic mean.
Taylor series: Expanded around is
Where is remainder where is some point between and
.
Maclaurin series: Is just Taylor expanded around zero. i.e.
This diagram shows the
different convergence of series and the relation between them
The above shows that an absolutely convergent series () is also convergent. Also,
a uniformly convergent series () is also convergent. But the series is absolutely
convergent and not uniformly convergent, while is uniformly convergent and not absolutely
convergent.
The series is both absolutely and uniformly convergent. And finally, the series is convergent
but not absolutely (called conditionally convergent). An example of (converges absolutely but
not uniformly) is
And an example of (converges uniformly but not absolutely) is
An example of (converges but not
absolutely) is the alternating harmonic series
The above converges to , but taken absolutely it
becomes the harmonic series, which diverges
For uniform convergence, we really need to have
an in the series and not just numbers, since the idea behind uniform convergence is whether the
series converges to within an error tolerance using the same number of terms, independent
of the point in the region.
The sequence converges for and diverges for . So is the flip value. For example
diverges, since ; also diverges, since . But converges, where here , and the sum is
.
Using partial sums: let be some sequence. The partial sum is . Then
if exists and is finite,
we can say that converges. So here we set up a sequence whose terms are the partial
sums, and then look at what happens to such a term in the limit as . Need to find an
example where this method is easier to use to test for convergence than the other methods
below.
Given a series, we are allowed to rearrange order of terms only when the series is absolutely
convergent. Therefore for the alternating series , do not rearrange terms since this is not
absolutely convergent. This means the series sum is independent of the order in which terms
are added only when the series is absolutely convergent.
In an infinite series of complex numbers, the series converges if the real part of the series
and the imaginary part of the series each converge on their own.
Power series: . This series is centered at , or expanded around . It has radius of
convergence if the series converges for and diverges for .
Tests for convergence.
Always start with the preliminary test. If does not go to zero, then there is no need to do
anything else: the series does not converge; it diverges. But if , it still can
diverge. So this is a necessary but not sufficient condition for convergence. An
example is . Here in the limit, but we know that this series does not converge.
For uniform convergence, there is a test called the Weierstrass M-test, which can
be used to check if the series is uniformly convergent. But if this test fails, it
does not necessarily mean the series is not uniformly convergent. It still can be
uniformly convergent. (Need an example.)
To test for absolute convergence, use the ratio test. If then absolutely convergent.
If then inconclusive. Try the integral test. If then not absolutely convergent.
There is also the root test. .
The integral test: use it when the ratio test is inconclusive. where becomes . Remember
to use this only if the terms of the sequence are monotonically decreasing and
all positive. For example, , then use . Notice, we only use the upper limit in
the integral. This becomes (after simplifications) . Since the limit is finite,
the series converges.
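As a quick sympy check of the integral test on a standard example (the p-series with p = 2, an assumption since the note's own series is elided):

```python
import sympy as sp

n = sp.symbols('n', positive=True)

# integral test for the series sum 1/n**2: the matching improper
# integral from 1 to infinity is finite, so the series converges
val = sp.integrate(1 / n**2, (n, 1, sp.oo))
```

The integral evaluates to 1, which is finite, so the series converges (its actual sum, π²/6, is a different number — the test only decides convergence).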
Radius of convergence is called where is from (3) above.
Comparison test. Compare the series with one we happen to already know it
converges. Let be a series which we know is convergent (for example ), and we
want to find if converges. If all terms of both series are positive and if for each
, then we conclude that converges also.
For Laurent series, let’s say the singularity is at and . To expand about , get to look like and
use the geometric series for . To expand about , there are two choices: to the inside and
to the outside. For the outside, i.e. , get into the form , since this is then valid for
.
We can only use a power series to expand around if is analytic at . If is not analytic at
we need to use a Laurent series. Think of the Laurent series as an extension of the power series
that handles singularities.
14.1 Some tricks to find sums
14.1.1 Example 1
Find
Solution: Let . Taking the derivative gives
Hence
We can set to obtain
More tricks to add...
14.2 Methods to find Laurent series
Let us find the Laurent series for . There is a singularity of order at and .
14.2.1 Method one
Expansion around . Let
This makes analytic around : since does not have a pole at , it is analytic around
and therefore has a power series expansion around given by
Where
But
And
And
And
And so on. Therefore, from (1)
Therefore
The residue is . The above expansion is valid around up to but not including the next
singularity, which is at . Now we find the expansion of around . Let
This makes analytic around , since does not have a pole at . Therefore it has a power series
expansion about given by
Where
But
And
And
And
And so on. Therefore, from (1)
Therefore
The residue is . The above expansion is valid around up to but not including the next
singularity, which is at , inside a circle of radius .
Putting the above two regions together, we see there is a series expansion of that is
shared between the two regions, in the shaded region below.
Let us check that both series give the same values in the shared region. Using the series expansion
about to find at the point gives when using terms in the series. Using the series expansion
around to find using terms also gives . So both series are valid and produce the same
result.
14.2.2 Method Two
This method is simpler than the above, but it results in different regions. It is based on
converting the expression in order to use geometric series expansion on it.
Since there is a
pole at and at , we first find the expansion for . To do this, we write the above
as
And now expand using the geometric series, which is valid for . This gives
The above is valid for , which agrees with the result of method 1.
Now, to find the expansion for , we need a term that looks like , since it can then be
expanded for , or , which is what we want. Therefore, writing as
But for the above
becomes
With residue . The above is valid for . The following diagram illustrates the result obtained
from method 2.
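Since the function being expanded is elided above, here is a sympy sketch on a hypothetical function with the same structure (simple poles at z = 0 and z = 1), expanded about z = 0 in the punctured disk:

```python
import sympy as sp

z = sp.symbols('z')

# hypothetical example with simple poles at z = 0 and z = 1
f = 1 / (z * (1 - z))

# Laurent expansion about z = 0, valid for 0 < |z| < 1:
# (1/z) * (1 + z + z**2 + ...) = 1/z + 1 + z + z**2 + ...
ser = sp.series(f, z, 0, 4).removeO()
res = sp.residue(f, z, 0)
```

The coefficient of 1/z (the residue) is 1, matching both the geometric-series expansion by hand and sympy's residue routine.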
14.2.3 Method Three
For the expansion about , this uses the same method as above, giving the same series valid for . This
method is a little different for points other than zero. The idea is to replace by , where
is the point we want to expand about, and make this replacement in itself. So for this
example, we let , hence . Then becomes
Now we expand for and the above becomes
We now replace and the above becomes
The above is valid for or or or . This gives the same
series, for the same region, as method one. But it is a little faster, as it uses the binomial series
shortcut to find the expansion instead of calculating derivatives as in method
one.
14.2.4 Conclusion
Method one and method three give the same series over the same regions. Method three uses
the binomial expansion as a shortcut and requires one to convert to the form that allows using
the binomial expansion. Method one does not use the binomial expansion but requires
computing many derivatives to evaluate the terms of the power series. It is a more direct
method.
Method two also uses the binomial expansion, but gives different regions than methods one and
three.
If one is good at differentiation, method one seems the most direct. Otherwise, the choice is
between method two or three, as they both use the binomial expansion. Method two seems a
little more direct than method three. It also depends on what the problem is asking for. If the
problem asks to expand around vs. asking to find the expansion in , for example, then this
decides which method to use.
15 Gamma function notes
Gamma function is defined by
The above is called the Euler representation. Or, if we want
it defined in the complex domain, the above becomes
Since the above is defined only
for the right half plane, there is a way to extend it to the left half plane, using what is
called analytic continuation. More on this below. First, some relations involving
To extend to the left half plane, i.e. for negative values, we define, using
the above recursive formula
For example
And for
And so on. Notice that
the function is not defined at the negative integers; it is also not defined for
The above method of extending (or analytical continuation) of the Gamma function to
negative values is due to Euler. Another method to extend Gamma is due to Weierstrass. It
starts by rewriting from the definition as follows, where
Expanding the integrand in the first integral using Taylor series gives
This takes care of the first integral in (1). Now, since the lower limit of the second integral
in (1) is not zero, there is no problem integrating it directly. Remember that the
Euler definition had zero in the lower limit; that is why we said there . Now we can
choose any value for . Weierstrass chose . Hence (1) becomes
Notice the term now is just since . The second integral above can now be integrated
directly. Let us now verify that Euler continuation for say gives the same result as
Weierstrass formula. From above, we found that . Equation (2) for becomes
Using the
computer
And direct integration
Hence (3) becomes
Which is the same as using the Euler method. Let us check for . We found above that using the
Euler method of analytical continuation. Now we will check using the Weierstrass method.
Equation (2) for becomes
Using the computer
And
Hence
Which is the same as using the Euler method. Clearly the Euler method for analytical
continuation of the Gamma function is simpler to compute.
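The recursion-based continuation can be verified against a library Gamma implementation; math.gamma accepts negative non-integer arguments, so we can check a standard value:

```python
import math

# the recursion Gamma(z+1) = z*Gamma(z), read backwards as
# Gamma(z) = Gamma(z+1)/z, extends Gamma to negative non-integers;
# e.g. Gamma(-1/2) = Gamma(1/2)/(-1/2) = -2*sqrt(pi)
g_half = math.gamma(0.5)         # = sqrt(pi)
g_neg_half = math.gamma(-0.5)    # library value of the continuation
recursed = g_half / (-0.5)       # value from the Euler recursion
```

The library value and the recursed value agree, and both equal −2√π, confirming the Euler method at this point.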
Euler reflection formula
Where contour integration was used to derive the above. See Mary Boas text book, page 607,
second edition, example 5 for full derivation.
has singularities at and has singularities at , so in the above reflection formula the zeros of
cancel the singularities of when it is written as
is entire.
There are other representations for . One that uses products by Euler also is
And another due to Weierstrass is
16 Riemann zeta function notes
Given by for . Euler studied this, and it was extended to the whole complex plane by
Riemann. So the Riemann zeta function refers to the one with the extension to the whole
complex plane; Euler only looked at it on the real line. It has a pole at . It has trivial zeros at
and all its non-trivial zeros are inside the critical strip; that they all lie on the critical line is the Riemann hypothesis.
is also defined by the integral formula
The connection to the prime numbers is given by the Euler product formula
The functional equation is
17 Complex functions notes
Complex identities
A complex function is analytic in a region if it is defined and differentiable at all points in .
One way to check for analyticity is to use the Cauchy-Riemann (CR) equations (a
necessary but not sufficient condition). If satisfies CR everywhere in that
region, and the partial derivatives are continuous there, then it is analytic. Let ; then these two equations in Cartesian coordinates
are
Sometimes it is easier to use the polar form of these. Let ; then the equations
become
To remember them, think of the as the and the as the .
Let us apply these to to see how it works. Since , then . This is a multi-valued function: one
value for and another for . The first step is to make it single-valued. Choosing gives the
principal value. Then . Now we find the branch points. is a branch point. We can pick and
pick the negative real axis as the branch cut (the other branch point being ). This is one
choice.
We could have picked and had the positive axis as the branch cut, where now the second
branch point is , but in both cases the origin is still part of the branch cut. Let us stick with
.
Given all of this, now , hence and . Therefore and and and . Applying Cauchy-Riemann
above gives
Satisfied. And for the second equation
So is analytic in the region , not including the branch points and the branch cut.
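The Cartesian CR equations can be machine-checked on a simpler (hypothetical) example, f(z) = z², whose real and imaginary parts are written out by hand below:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# hypothetical example: f(z) = z**2 = (x**2 - y**2) + i*(2*x*y)
u = x**2 - y**2
v = 2 * x * y

cr1 = sp.simplify(sp.diff(u, x) - sp.diff(v, y))   # u_x = v_y ?
cr2 = sp.simplify(sp.diff(u, y) + sp.diff(v, x))   # u_y = -v_x ?
```

Both residual expressions vanish identically, and since the partial derivatives are polynomials (hence continuous), f(z) = z² is analytic everywhere.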
We can’t just say is analytic and stop. We have to say is analytic in a region or at a point.
When we say analytic at a point, we mean analytic in a small region around the
point.
If is defined only at an isolated point and not defined anywhere around it, then the function
cannot be analytic at , since it is not differentiable at . Also, is analytic at a point if the
power series for expanded around converges to evaluated at . An analytic complex function
is infinitely many times differentiable in the region, which means the limit exists and
does not depend on direction.
Before applying the Cauchy-Riemann equations, make sure the complex function is first
made single-valued.
Remember that the Cauchy-Riemann equations are a necessary but not sufficient condition for a
function to be analytic. The extra condition needed is that all the partial derivatives are
continuous. Need to find an example where CR is satisfied but not the continuity of the partial
derivatives. Most of the HW problems just need CR, but it is good to keep an eye on this
other condition.
Cauchy-Goursat: If is analytic on and inside closed contour then . But remember
that if then this does not necessarily imply is analytic on and inside . So this
is an IF and not an IFF relation. For example around unit circle centered at
origin, but clearly is not analytic everywhere inside , since it has a singularity at
.
Proof of Cauchy-Goursat: The proof uses two main ideas: the Cauchy-Riemann
equations and Green’s theorem. Green’s theorem says
So Green’s theorem
transforms integration over the boundary of a region into integration over the area inside the
boundary . Let . And since , then . Therefore
We now apply (1) to each of the two integrals in (2). Hence the first integral in (2) becomes
But from CR, we know that , hence the above is zero. And the second integral in (2)
becomes
But from CR, we know that , hence the above is zero. Therefore the whole integral
in (2) is zero. Therefore . QED.
Cauchy residue: If is analytic on and inside closed contour except at some isolated points
then . The term is the residue of at point . Use Laurent expansion of to find residues. See
above on methods how to find Laurent series.
Maximum modulus principle: If is analytic in some region and is not constant inside
, then its maximum value must be on the boundary. Its minimum is also on the
boundary, as long as anywhere inside . On the other hand, if happens to have a
maximum at some point inside , then this implies that is constant
everywhere, with the value everywhere. What all this really means is that if
is analytic and not constant in , then its maximum is on the boundary and not
inside.
There is a complicated proof of this. See my notes for Physics 501. Hopefully this will not
come up in the exam since I did not study the proof.
These definitions from book of Joseph Bak
is analytic at if is differentiable in a neighborhood of . Similarly, is analytic on a
set if is differentiable at all points of some open set containing .
is analytic on an open set if it is differentiable at each point of and is continuous
on .
Some important formulas.
If is analytic on and inside then
If is analytic on and inside then and is a point in then
From the above, we find, where here
17.1 Find coefficients in the Laurent series expansion
On finding the coefficients of the principal part of the Laurent series expansion around . Let
The goal is to determine all the coefficients in the Laurent series expansion. This assumes the
largest order of the pole is finite. To find , we multiply both sides of the above by , which gives
Differentiating both sides times w.r.t. gives
Evaluating at the above gives
To find we
differentiate both sides of (2) times which gives
Hence
We keep doing the above to find .
Therefore the general formula is
And for the special case of the last term the above
simplifies to
Where in (3) is the coefficient needed to be evaluated and is the pole
order and is the expansion point. The special value is called the residue of at
.
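The derivative formula for the residue of a higher-order pole can be checked in sympy on a hypothetical example, f(z) = e^z/z³ with a pole of order 3 at the origin:

```python
import sympy as sp

z = sp.symbols('z')

# hypothetical example: f has a pole of order m = 3 at z = 0
f = sp.exp(z) / z**3
m = 3

# residue = 1/(m-1)! * d^{m-1}/dz^{m-1} [ (z - z0)**m * f ] at z0 = 0
res_formula = sp.diff(z**m * f, z, m - 1).subs(z, 0) / sp.factorial(m - 1)
res_builtin = sp.residue(f, z, 0)    # cross-check with sympy's own routine
```

Both routes give 1/2, which is also the coefficient of 1/z in the series e^z/z³ = 1/z³ + 1/z² + 1/(2z) + …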
18 Hints to solve some problems
18.1 Complex analysis and power and Laurent series
Laurent series of around point is and . Integration is around path enclosing
in counter clockwise.
Power series of around is where
Problem asks to use Cauchy integral formula to evaluate another integral . Both over
same . The idea is to rewrite as by factoring out the poles of that are outside leaving
one inside . Then we can write
For example, to solve around the unit circle, rewrite it as , where now , and
now we can use the Cauchy integral formula. So all we have to do is
evaluate at , which gives . This works if can be factored into , where is
analytic on and inside . This would not work if has more than one pole inside
.
Problem asks to find where is some closed contour. For this, if has a number of
isolated singularities inside , then just use
Problem asks to find where is some open path, i.e. not closed (if it is closed, try
Cauchy), such as a straight line or a half-circle arc. For these problems, use
parameterization. This converts the integral to a line integration. If is a straight line, use the
standard parameterization, which is found by using
where is the line’s initial point and is its end point. This works for
straight lines. Now use the above and rewrite as , then plug this
into to obtain , so the integral becomes
And now evaluate this integral
using normal integration rules. If the path is a circular arc, then there is no need
to use ; just use . Rewrite and use in place of and follow the same steps as
above.
Problem gives and asks to find in order for to be analytic in some region. To solve
these, use Cauchy Riemann equations. Need to use both equations. One equation will
introduce a constant of integration (a function) and the second equation
is used to solve for it. This gives . See problem 2, HW 2, Physics 501 as
example.
Problem asks to evaluate where is some number. This is the order of the pole, and is
analytic on and inside . Then use the Cauchy integral formula for a higher pole order: .
The only difference here is that this is a pole of order . So to find the residue, use
Problem gives and asks to find branch points and branch cuts. One way is to first find
where , and for each zero, make a small circle around it, going from to . If the
function at has a different value from , then this is a branch point. Do this for the
other zeros. Then connect the branch points. This will give the branch cut. It
is not always clear how to connect the branch points though; one might need
to try different ways. For example, has two zeros at . Both turn out to be
branch points. The branch cut is the line between and on the imaginary
axis.
Problem gives a series and asks to find radius of convergence . Two ways, find and
then . Another way is to find using .
Problem gives an integral and asks to evaluate it using residues. We start by converting
everything to using . No need to use . The idea is to convert it to , after which
we can use residues inside. Replace to become ; this could require using the Euler
relation such as and similarly for . Now all that is needed is to find the residues
of any poles inside the unit circle. Do not worry about poles outside the
unit circle. To find the residues, use the shortcut tricks. No need to find the Laurent
series. For an example, to evaluate , becomes and there is only one pole inside the unit
circle, at .
Problem gives an integral and asks to evaluate it using residues. The contour here goes from
to and then a semicircle in the upper half plane. This works for even , since we can write
. If there is a pole inside the upper half plane, then the integral over the semicircle is
times the sum of the residues. If there is a pole on the real line, then make a small semicircle
around the pole, say at , and then the integral over the small semicircle is times the
residue at . The minus sign here is due to moving clockwise on the small
circle.
Problem gives a series and asks if it is uniformly convergent. For general series, use the
M-test. But for this kind of series, just find radius of convergence as above using ratio
test, and if it is absolutely convergent, then say it converges uniformly for . It is
important to write it this way, and not just .
Problem gives and asks to find the sum. Sometimes this trick works for some series.
For example, for the alternating series , write it as , which is the same when ,
and now notice that this is the Taylor series for , which means when then
.
Problem gives and asks to find the residue at some . Of course we can always expand
around using the Laurent series and find the coefficient of , but this is too much
work. Instead, if has a simple pole of order one, then we use
In general, if
then there are two cases: or not. If , then we can just use the above. For
example, if and we want the residue at , then since it is a simple pole, using
But if then we need to apply La’Hopital like this. If and we want to find residue at .
Then do as above, but with extra step, like this
Now if the pole is not a simple pole or order one,.say of order , then we first multiply
by then differentiate the result times, then divide by , and then evaluate the result at
in other words,
For example, if and we want residue at . Since order is , then
The above methods will work on most of the HW problems I’ve seen so far but If all
else fails, try Laurent series, that always works.
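A numeric sketch of both shortcuts on a stand-in function (my choice, not from the notes): f(z) = 1/(z²(z−1)) has a simple pole at z = 1 and a pole of order 2 at z = 0.

```python
# Hypothetical example: f(z) = 1/(z^2 (z - 1)), simple pole at z = 1,
# order-2 pole at z = 0.
def f(z):
    return 1.0 / (z * z * (z - 1.0))

# Simple-pole shortcut: Res_{z=1} f = lim_{z->1} (z - 1) f(z) = 1/z^2 at 1 = 1.
z = 1.0 + 1e-7
res_simple = (z - 1.0) * f(z)             # ~ 1

# Order-2 pole: multiply by z^2 (giving 1/(z-1)), differentiate once,
# divide by 1!, evaluate at 0.  Exact answer: -1/(z-1)^2 at 0 = -1.
def g(z):                                 # g(z) = z^2 f(z) = 1/(z - 1)
    return 1.0 / (z - 1.0)

d = 1e-6
res_order2 = (g(d) - g(-d)) / (2 * d)     # central difference ~ -1

print(res_simple, res_order2)
```

As a sanity check, the two residues sum to zero, consistent with f decaying like 1/z³ at infinity.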
18.2 Errors and relative errors
A problem gives an expression in , such as , and asks how much a relative error in both
and will affect in the worst case. For these problems, find and then find . For example,
if and the relative error in is and in is , then what is the worst relative error in ? Since
Then
But and are the relative errors in and . So if we plug in for and for , we get
as the worst relative error in . Notice we used relative error for and relative error for
since we wanted the worst (largest) relative error. If we wanted the least
relative error in , then we would use for also, which gives , or relative error in .
19 Some CAS notes
In Mathematica, Exp is a symbol (Head[Exp] gives Symbol), but in Maple it is
not.
Notice that shows up in Mathematica, but not in Maple.
20 d’Alembert’s Solution to wave PDE
(added December 13, 2018)
The PDE is
Let
Then
And
Then, from (2)
And from (3)
Substituting (4,5) into (1) gives
Since then
Integrating w.r.t. gives
Integrating w.r.t.
Therefore
The functions are arbitrary
functions, found from the initial and boundary conditions if given. Let the initial conditions
be
Where the first condition above is the shape of the string at time and the second condition
is the initial velocity.
Applying first condition to (6) gives
Applying the second condition gives
Now we have two equations (7,8) and two unknowns to solve for. But (8) has
derivatives of . So to make it easier to solve, we integrate (8) w.r.t. to obtain
So we will
use (9) instead of (8), together with (7), to solve for . From (7)
Substituting (10) in (9)
gives
Using the above back in (10) gives as
Using (11,12) in (6) gives the final solution
The above is the final solution. So if we are given the initial position and initial velocity of the
string as functions of , we can find the exact solution to the wave PDE.
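The notes' symbols for the initial shape and velocity were lost; assuming the standard names f (shape), g (velocity), and wave speed c, d'Alembert's formula is u(x,t) = ½[f(x−ct) + f(x+ct)] + (1/2c)∫_{x−ct}^{x+ct} g(s) ds. A numeric spot check with f(x) = sin x, g(x) = cos x, c = 2 (so the integral term evaluates in closed form):

```python
import math

# d'Alembert solution for f(x) = sin x, g(x) = cos x, c = 2.
# The antiderivative of g is sin, which gives the integral term directly.
c = 2.0

def u(x, t):
    left, right = x - c * t, x + c * t
    return 0.5 * (math.sin(left) + math.sin(right)) \
         + (math.sin(right) - math.sin(left)) / (2 * c)

# Check the wave equation u_tt = c^2 u_xx by central finite differences.
x0, t0, h = 0.7, 0.3, 1e-4
u_tt = (u(x0, t0 + h) - 2 * u(x0, t0) + u(x0, t0 - h)) / h**2
u_xx = (u(x0 + h, t0) - 2 * u(x0, t0) + u(x0 - h, t0)) / h**2
print(u_tt - c**2 * u_xx)        # ~ 0

# Check the initial conditions u(x,0) = sin x and u_t(x,0) = cos x.
print(u(x0, 0.0) - math.sin(x0))                         # exactly 0
print((u(x0, h) - u(x0, -h)) / (2 * h) - math.cos(x0))   # ~ 0
```

The finite-difference residuals are at the rounding-error level, consistent with u being an exact solution.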
21 Convergence
Definition of pointwise convergence: converges pointwise to if for each there exists an integer
such that for all .
Definition of uniform convergence: converges uniformly to if for each there exists an integer
such that for all .
Another way to check for uniform convergence: first find the pointwise limit of . Say it
converges to . Now show that
goes to zero as . To find this, we might need to find the maximum of , i.e. differentiate it, set
the derivative to zero, find where the maximum is, then evaluate at that maximum. This gives the . Then see if this
goes to zero as .
If a sequence of continuous functions converges uniformly to , then must be continuous. So this gives a
quick check for whether uniform convergence exists: first find the pointwise limit and check if
it is continuous or not. If not, then there is no need to check for uniform convergence; it does not
exist. But if the limit is a continuous function, we still need to check, because it is possible there is no
uniform convergence.
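A sketch of the sup-based test on two stand-in sequences (my choice of examples): f_n(x) = x/(1 + nx²) converges pointwise to 0, and setting the derivative to zero gives a maximum at x = 1/√n with value 1/(2√n) → 0, so the convergence is uniform. By contrast, f_n(x) = xⁿ on [0, 1) also converges pointwise to 0, but the sup does not go to zero.

```python
import math

# Approximate sup |f_n - f| (here f = 0) by scanning a fine grid.
def sup_on_grid(fn, lo, hi, pts=100_000):
    step = (hi - lo) / pts
    return max(abs(fn(lo + k * step)) for k in range(pts + 1))

n = 100

# Uniform case: max of x/(1+n x^2) is at x = 1/sqrt(n), value 1/(2 sqrt(n)).
sup1 = sup_on_grid(lambda x: x / (1 + n * x * x), 0.0, 10.0)
print(sup1, 1 / (2 * math.sqrt(n)))   # both ~ 0.05, and -> 0 as n grows

# Non-uniform case: sup of x^n on [0, 1) stays close to 1 for every n.
sup2 = sup_on_grid(lambda x: x**n, 0.0, 0.9999)
print(sup2)                            # ~ 0.99, does not -> 0
```

Note that xⁿ on [0, 1) also illustrates the continuity shortcut: extended to [0, 1], its pointwise limit is discontinuous at x = 1, so uniform convergence on [0, 1] is ruled out immediately.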
22 Note on when to raise ln to exp when solving an ode
Sometimes in the middle of solving an ode, we get on both sides. We can raise both sides to
as soon as these show up, or wait until the end, after solving for the constant of integration, to
do that. This section shows we get the same result in both cases.
22.1 Example 1
With initial conditions . This is a homogeneous type ode. It is solved by the substitution ,
which results in the new ode in given by
This is now separable
Integrating gives
Replacing by which gives
cancels out giving
Now let us solve for from the IC . The above becomes
So the solution (2) is
And only now, after is found, do we raise both sides to (to simplify it),
which gives the solution as
Or
Let us see what happens if we had raised both sides to earlier
on, instead of waiting until after solving for the constant of integration, i.e. from step (1A)
above
Where is a new constant. And only now do we replace by , which gives
Using IC . The above becomes
Hence (4) becomes
Which is the same answer obtained earlier in (3). This shows both methods work. It might
be better to delay raising to the exponential until the very end, so it is all done in one
place.
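The ode in this example was lost from the notes, so as a stand-in take the separable ode y′ = y/x with IC y(1) = 2, where integrating gives ln y = ln x + C on both sides. Both orderings of "apply IC" and "exponentiate" give the same answer:

```python
import math

# Hypothetical ode (the notes' own is missing): y' = y/x, y(1) = 2,
# with general solution ln(y) = ln(x) + C.

# Option 1: keep the logs, solve for C from the IC, exponentiate last.
C = math.log(2.0) - math.log(1.0)        # from ln(2) = ln(1) + C
def y_late(x):
    return math.exp(math.log(x) + C)     # y = e^(ln x + C) = 2x

# Option 2: exponentiate first (y = A*x with A = e^C), then apply the IC.
A = 2.0 / 1.0                            # from y(1) = A*1 = 2
def y_early(x):
    return A * x

print(y_late(3.0), y_early(3.0))   # both 6.0
```

The two constants are related by A = e^C; exponentiating merely renames the constant of integration, which is why the order does not matter.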
22.2 Example 2
This is a homogeneous ode, solved by the substitution , which results in a new ode in given by
This is separable
Integrating gives
There are two choices now: raise both sides to to
simplify the solution, or wait until the end. Option 1:
Replacing by in (1) gives
Now let us solve for from the IC . The above becomes
Hence (2) becomes
Now raising both sides to gives
Let us see what happens if we raise to immediately, which is the second
option. From (1)
Raising both sides to gives
Where is a new constant. Now we replace by
Solving for from IC from the above gives
Hence the solution (2) becomes
or
So both methods work, the early one and the later one, and both give the same result.
22.3 Example 3
This is tricky, as solving it needs special handling of the initial conditions. Let us solve it
by substituting . Then . The ode now becomes
This is separable
Integrating
We could raise both sides to now, or wait until after converting back to . Let us look at what
happens in both cases. Raising to now gives
But and the above becomes
Which is the correct solution. Now the IC is used to find . Using , the above becomes
So .
Hence the solution (2) is
When this happens, to simplify the above, we say that or . This
gives . Hence
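The ode here was lost from the notes, but the sign issue it points at is standard: integrating dy/y gives ln|y|, and writing |y| = e^(x+C) as y = A e^x with A = ±e^C absorbs the sign, which the IC then picks. A stand-in example (my choice): y′ = y with y(0) = −1.

```python
import math

# Hypothetical ode: y' = y with y(0) = -1.  Integrating gives
# ln|y| = x + C, so |y| = e^(x+C).  Write y = A e^x with A = +/- e^C;
# the IC y(0) = -1 forces A = -1 (the negative branch).
A = -1.0                      # from y(0) = A * e^0 = -1

def y(x):
    return A * math.exp(x)

# The solution stays negative; ln(y) itself was never defined here,
# which is why the absolute value (and the sign constant A) matters.
print(y(1.0))    # ~ -2.71828 (that is, -e)
```

Blindly dropping the absolute value would give y = e^x, which cannot satisfy y(0) = −1 at all; the ± constant is what rescues the negative-solution branch.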
23 References
Too many references were used, but I will try to remember to start recording the books used from now
on. Here is the current list:
Applied Partial Differential Equations, Haberman.
Advanced Mathematical Methods for Scientists and Engineers, Bender and Orszag, Springer.
Boundary Value Problems in Physics and Engineering, Frank Chorlton, Van Nostrand, 1969.
Class notes, Math 322, University of Wisconsin, Madison, Fall 2016, Professor Smith, Math dept.
Mathematical Methods in the Physical Sciences, Mary Boas, second edition.
Mathematical Methods for Physics and Engineering, Riley, Hobson, Bence, second edition.
Various pages, Wikipedia.
MathWorld, Wolfram.
Fourier Series and Boundary Value Problems, 8th edition, James Brown, Ruel Churchill.