A place to keep quick notes about math that I keep forgetting. These are scratch notes; they may contain errors and are not complete. Use at your own risk.
Reference: Wikipedia. I need to make one example and apply each of the above methods to it.
For example: what is the probability that one has to flip a fair coin $n$ times to get the first head? The answer is $p(1-p)^{n-1}$, where $p$ is the probability of a head. For a fair coin, $p=\frac{1}{2}$, so a head will show up from one flip with probability $\frac{1}{2}$. Hence the probability that we have to flip a coin $n$ times to get the first head is $\left(\frac{1}{2}\right)^{n}$, which is very low for large $n$, as expected.
This method can be used to generate random variables from any other distribution given samples from the uniform distribution on $(0,1)$. But it requires knowing the CDF and the inverse of the CDF of the target distribution. This is called the inverse CDF method. Another method is called the rejection method.
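As a small illustration of the inverse CDF method (my own sketch, not from the original notes; the exponential distribution and the rate `lam` are chosen just as an example), the CDF $F(x)=1-e^{-\lambda x}$ inverts in closed form to $F^{-1}(u)=-\ln(1-u)/\lambda$:

```python
import math
import random

def sample_exponential(lam, n, seed=0):
    """Inverse CDF method: draw u ~ Uniform(0,1), return F^{-1}(u).

    For Exponential(lam): F(x) = 1 - exp(-lam*x), so F^{-1}(u) = -ln(1-u)/lam.
    """
    rng = random.Random(seed)
    return [-math.log(1.0 - rng.random()) / lam for _ in range(n)]

samples = sample_exponential(lam=2.0, n=100_000)
mean = sum(samples) / len(samples)   # theoretical mean is 1/lam = 0.5
```

The same pattern works for any distribution whose CDF can be inverted in closed form; when it cannot, the rejection method mentioned above is the usual fallback.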
Then, as long as the ODE is linear and its coefficients are integrable functions, follow these steps
Integrating both sides gives
We now integrate the above to give
Where is given by (2). Hence
and is the matrix that has the eigenvectors as its columns.
Let , then and the above becomes
Now let or , hence and the above becomes
But hence
Substituting back
Substituting back
For example, for the above ODE, the solution is . This ansatz works only if the ODE is equidimensional, so it can't be used on an ODE that is not equidimensional, for example.
If $r$ is a repeated root, use $x^r$ and $x^r\ln x$ as solutions.
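A quick numerical sanity check (my own example, not from the notes): for the equidimensional ODE $x^2y''-3xy'+4y=0$ the indicial equation is $(r-2)^2=0$, a repeated root $r=2$, so both $x^2$ and $x^2\ln x$ should be solutions:

```python
import math

def residual(y, x, h=1e-4):
    """Residual of x^2 y'' - 3x y' + 4y at x, derivatives by central differences."""
    d1 = (y(x + h) - y(x - h)) / (2 * h)
    d2 = (y(x + h) - 2 * y(x) + y(x - h)) / h**2
    return x**2 * d2 - 3 * x * d1 + 4 * y(x)

y1 = lambda x: x**2
y2 = lambda x: x**2 * math.log(x)

r1 = max(abs(residual(y1, x)) for x in (0.5, 1.5, 3.0))
r2 = max(abs(residual(y2, x)) for x in (0.5, 1.5, 3.0))
# both residuals should be near zero
```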
Then the new ODE becomes
The above is how the ODE will always look after the transformation. Remember to change the old variables to the new ones using the substitution, and the same for the derivatives. Now read off the new coefficients, then proceed as before.
Mathematica 9.01
gives
Maple 18
gives
(Added Sept. 30, 2016). When solving an exact ODE $M(x,y)+N(x,y)\frac{dy}{dx}=0$, the following two equations are set up

$\frac{\partial F}{\partial x}=M$   (1)

$\frac{\partial F}{\partial y}=N$   (2)

Next, the first equation is integrated w.r.t. $x$, leading to

$F=\int M\,dx+g(y)$   (3)

where $g(y)$ replaces the "integration constant". Now the above is differentiated w.r.t. $y$ and the resulting equation is compared to (2) to solve for $g'(y)$. Next $g(y)$ is found by integrating. Then, now that $g(y)$ is found, $F$ is found from (3). And since $F$ is some constant, an implicit solution $F(x,y)=C$ is thus obtained.
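A short worked instance of the above procedure (my own example, not from the notes): solve $(2xy+1)+(x^2+1)\frac{dy}{dx}=0$, which is exact since $\frac{\partial M}{\partial y}=2x=\frac{\partial N}{\partial x}$:

```latex
% M = 2xy + 1,  N = x^2 + 1,  M_y = N_x = 2x  (exact)
F = \int M\,dx + g(y) = x^2 y + x + g(y)
% Compare F_y with N:
F_y = x^2 + g'(y) = x^2 + 1 \;\Longrightarrow\; g'(y) = 1 \;\Longrightarrow\; g(y) = y
% Implicit solution:
x^2 y + x + y = C
```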
Some simple PDE's can be solved by direct integration; here are a few examples.
Example 1
Integrate w.r.t. one variable, remembering that the constant of integration will now be a function of the other variable, hence
Example 2
Integrating once w.r.t. gives
Integrating again gives
Example 3
Integrating once w.r.t. gives
Integrating again gives
Example 4
Integrating once w.r.t. gives
Integrating again w.r.t. gives
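Since the equations of the examples above did not survive, here is the general pattern on one concrete PDE (my own example), showing how each "integration constant" becomes a function of the other variable:

```latex
% Solve u_{xy} = 0 by direct integration.
\frac{\partial^2 u}{\partial y\,\partial x} = 0
% Integrate w.r.t. y (the "constant" is a function of x):
\;\Longrightarrow\; \frac{\partial u}{\partial x} = f(x)
% Integrate w.r.t. x (the "constant" is a function of y):
\;\Longrightarrow\; u(x,y) = F(x) + G(y), \qquad F(x) = \int f(x)\,dx
```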
(added Oct. 20, 2016).
If $y_1,y_2$ are two solutions to $y''+p(x)y'+q(x)y=0$, then to show that $c_1y_1+c_2y_2$ is also a solution:
Multiply the first ODE by $c_1$ and the second ODE by $c_2$
Add the above two equations, using linearity of differentiation
Therefore $c_1y_1+c_2y_2$ satisfies the original ODE. Hence it is a solution.
Since
where $y_1$ and $y_2$ are two solutions to the ODE. Write
Multiply the first ODE above by the second solution and the second ODE by the first solution
Subtract the second from the first
| (1) |
But
| (2) |
And
Substituting (2,3) into (1) gives the Wronskian differential equation
Whose solution is
where $C$ is a constant of integration (so $W=Ce^{-\int p\,dx}$, Abel's formula).
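Abel's formula $W=Ce^{-\int p\,dx}$ can be checked numerically (my own example): for the Euler ODE $x^2y''+xy'-y=0$, whose standard form has $p(x)=1/x$, the solutions are $x$ and $1/x$, so $W$ should be proportional to $e^{-\ln x}=1/x$:

```python
def wronskian(y1, y2, x, h=1e-6):
    """W = y1*y2' - y2*y1', derivatives by central differences."""
    d1 = (y1(x + h) - y1(x - h)) / (2 * h)
    d2 = (y2(x + h) - y2(x - h)) / (2 * h)
    return y1(x) * d2 - y2(x) * d1

y1 = lambda x: x
y2 = lambda x: 1.0 / x

# Abel: W(x) = C * exp(-∫(1/x)dx) = C/x; here the exact Wronskian is -2/x,
# so x * W(x) should be the same constant (-2) at every x.
vals = [wronskian(y1, y2, x) * x for x in (0.5, 1.0, 2.0, 4.0)]
```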
Remember: $W=0$ at a point does not mean the two functions are linearly dependent. The functions can still be linearly independent on another interval; it just means that point can't be in the domain of the solution if both functions are to be solutions. However, if the two functions are linearly dependent, then $W=0$ everywhere. So $W=0$ everywhere is a necessary condition for linear dependence (though not a sufficient one in general).
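The classic counterexample here (my own addition): $f(x)=x^2$ and $g(x)=x\lvert x\rvert$ have $W\equiv 0$ on all of $\mathbb{R}$ yet are linearly independent on any interval containing $0$. The sketch below uses the exact derivatives $f'=2x$, $g'=2\lvert x\rvert$:

```python
def wronskian(f, df, g, dg, x):
    """W = f*g' - g*f' using exact derivative functions."""
    return f(x) * dg(x) - g(x) * df(x)

f, df = lambda x: x * x, lambda x: 2 * x
g, dg = lambda x: x * abs(x), lambda x: 2 * abs(x)

ws = [wronskian(f, df, g, dg, x) for x in (-2.0, -0.5, 0.0, 0.5, 2.0)]
# W is identically zero ...
ratio_left = g(-1.0) / f(-1.0)    # -1.0
ratio_right = g(1.0) / f(1.0)     #  1.0
# ... yet g is not one fixed constant multiple of f across x = 0
```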
The regular Sturm-Liouville ODE is an eigenvalue boundary value ODE. This means the ODE has an eigenvalue $\lambda$ in it, where nontrivial solutions exist only for specific values of $\lambda$. The ODE is

$\frac{d}{dx}\left(p(x)\frac{dy}{dx}\right)+q(x)\,y+\lambda\,\sigma(x)\,y=0 \qquad a<x<b$

Or

$p\,y''+p'\,y'+q\,y+\lambda\,\sigma\,y=0$

With the restrictions that $p,q,\sigma$ are real functions that are continuous everywhere over $[a,b]$, and also we need $p>0$ and $\sigma>0$. The $\sigma$ is called the weight function. But this is not all. The boundary conditions must be linear homogeneous, of this form

$\beta_1\,y(a)+\beta_2\,y'(a)=0$
$\beta_3\,y(b)+\beta_4\,y'(b)=0$

where the $\beta_i$ are just real constants. Some of them can be zero, but not both at the same end. For example, $y(a)=0$ together with $y'(b)=0$ is OK. So boundary conditions do not have to be mixed, but they can be in general. But they must be homogeneous. Notice that periodic boundary conditions are not allowed. Well, they are allowed, but then the problem is no longer called a regular Sturm-Liouville problem. The above is just the definition of the equation and its boundary conditions. Below is a list of the important properties of this ODE. Each of these properties has a proof.
Carrying out integration by parts on the first part of the integral in the numerator, the quotient becomes

$\lambda=\dfrac{-\left[p\,y\,y'\right]_a^b+\int_a^b\left(p\,(y')^{2}-q\,y^{2}\right)dx}{\int_a^b y^{2}\,\sigma\,dx}$

But this becomes much simpler when we plug in the boundary conditions that we must use, since the boundary term then simplifies (for Dirichlet or Neumann conditions it vanishes). And if $q\le 0$ and the boundary term is non-negative, then $\lambda\ge 0$. The Rayleigh quotient is useful to show that $\lambda\ge 0$ without solving the ODE, and also to estimate the minimum eigenvalue by replacing $y$ with a trial function and evaluating the quotient, obtaining a numerical upper bound on $\lambda_{\min}$.
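A concrete instance of the trial-function idea (my own example): for $y''+\lambda y=0$, $y(0)=y(\pi)=0$ (so $p=1$, $q=0$, $\sigma=1$, and the exact $\lambda_{\min}=1$), the trial function $y_t=x(\pi-x)$ satisfies the boundary conditions, and the Rayleigh quotient $\int y_t'^2\,dx\big/\int y_t^2\,dx$ gives an upper bound:

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

trial = lambda x: x * (math.pi - x)       # satisfies y(0) = y(pi) = 0
dtrial = lambda x: math.pi - 2 * x

num = simpson(lambda x: dtrial(x) ** 2, 0.0, math.pi)
den = simpson(lambda x: trial(x) ** 2, 0.0, math.pi)
bound = num / den   # exact value 10/pi^2 ≈ 1.0132, an upper bound on lambda_min = 1
```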
$\int_a^b\phi_n\,\phi_m\,\sigma\,dx=0$, where $\phi_n,\phi_m$ are two eigenfunctions with $n\neq m$. But remember, this does NOT mean the integrand is identically zero. Only when two eigenfunctions happen to have the same eigenvalue can we say one is a constant multiple of the other. For eigenfunctions with distinct eigenvalues, only the integral form is zero. This is an important difference, used in the proof below, together with the fact that different eigenfunctions have different eigenvalues (for the scalar SL case). For higher dimensions, we can have more than one eigenfunction for the same eigenvalue.
Proofs for these properties follow.
Given regular Sturm Liouville (RSL) ODE
In operator form
And (1) becomes
When solving the RSL ODE, since it is an eigenvalue ODE with associated boundary conditions, we will get an infinite number of real eigenvalues $\lambda_1<\lambda_2<\cdots$. For each eigenvalue, there is associated with it one eigenfunction (for the 1D case). Looking at any two different eigenfunctions, say $\phi_n,\phi_m$, the symmetry relation says the following

$\int_a^b\left(\phi_n\,L[\phi_m]-\phi_m\,L[\phi_n]\right)dx=0$
Now we will show the above is true. This requires integration by parts two times. We start from the LHS expression and at the end we should end up with the integral on the RHS. Let , then
Where we used in the above. Hence
| (2) |
Now we will do integration by parts on the first integral. Let . Using , and if we let , then . Hence
We now apply integration by parts again to the second integral above. But now let and , hence and , therefore the above becomes
Substituting the above into (2) gives
| (3) |
Now comes the part where the boundary conditions are important. In RSL, the boundary conditions are such that all the boundary terms vanish. This is because the boundary conditions are
And
So now (3) becomes
Therefore, we showed that the symmetry relation holds. The only thing to watch for here is which term to make $u$ and which to make $dv$ when integrating by parts. To remember: make $dv$ the term that contains the derivatives.
Assume $\lambda$ is complex. The corresponding eigenfunctions are complex also. Let $\phi$ be the corresponding eigenfunction
| (1) |
Taking the complex conjugate of both sides, and since the weight is real, we obtain
But all the coefficients of the ODE are real, so the above becomes
| (2) |
But by symmetry, we know that
Substituting (1),(2) into the above gives
But $\int_a^b\lvert\phi\rvert^2\sigma\,dx$ is positive, since $\lvert\phi\rvert^2$ is positive and the weight $\sigma$ is positive by definition. Hence for the above to be zero, it must be that $\lambda-\bar{\lambda}=0$, which means $\lambda$ is real. QED.
So the main tools to use in this proof: the definition of the operator, the symmetry relation, and the positivity of the weight. This might come up in the exam.
Now we will show that there is one eigenfunction associated with each eigenvalue (again, this is for 1D; it is possible to get more than one eigenfunction for the same eigenvalue in 2D, as mentioned earlier). By contradiction, assume that one eigenvalue has two eigenfunctions associated with it. Hence
From the first equation, , substituting this into the second equation gives
By Lagrange identity, , hence this means that
where the right hand side is some constant. This is the main difference between the above argument and the Lagrange identity. This can be confusing, so let me talk more about this. In the Lagrange identity, we write
And only when the two functions also satisfy the SL boundary conditions do we say that
But the above is not the same as saying the integrand itself is zero. This is important to keep in mind; only the integral form is zero for any two functions with the SL B.C. Now we continue. We showed that the bracketed expression equals a constant. In SL, this constant is zero due to the B.C. Hence
But by definition. Hence or
or one eigenfunction is a constant multiple of the other. So the eigenfunctions are linearly dependent; one is just a scaled version of the other. But eigenfunctions must be linearly independent. Hence the assumption is not valid, and there cannot be two linearly independent eigenfunctions for the same eigenvalue. Notice also that the expression above is just the Wronskian; when it is zero, we know the functions are linearly dependent. The important part of the above proof is that this applies only when the two eigenfunctions happen to have the same eigenvalue.
The idea of this proof is to assume the eigenfunction is complex, then show that its real part and its imaginary part both satisfy the ODE and the boundary conditions. But since they both use the same eigenvalue, the real part and the imaginary part must be linearly dependent. This implies the eigenfunction must be real (think of the Argand diagram).
Assume that the eigenfunction is complex, with a real part and an imaginary part. Then since
The above is just writing the Sturm-Liouville ODE in operator form, where is the operator as above. Now we have
By linearity of operator
Which implies
So we showed that the real and imaginary parts satisfy the S.L. ODE. Now we need to show they also satisfy the S.L. boundary conditions. Since
Where are the left and right ends of the domain. Then
Hence
So the above means both the real and imaginary parts satisfy the boundary conditions of S.L. But since both have the same eigenvalue, they must be linearly dependent, since we know that with S.L. each eigenvalue has only one eigenfunction (up to a scale factor). This means
Where is some constant. In other words,
where the factor is a new constant (OK, it happens to be a complex constant, but that is fine; we always do this trick in other places, and if it makes me feel better, I can take the magnitude of the constant). So all the above says is that we assumed the eigenfunction to be complex, and found that it must be real up to a constant factor. So it can't be genuinely complex.
Given two different eigenfunctions $\phi_n$ and $\phi_m$. Hence
From symmetry integral relation, since these eigenfunctions also satisfy S.L. boundary conditions, we can write
Replacing (1,2) into the above
But $\lambda_n-\lambda_m\neq 0$, since there are different eigenvalues for different eigenfunctions. Hence
which means $\phi_n$ and $\phi_m$ are orthogonal to each other with weight $\sigma$.
When the S.L. problem is singular, meaning $p=0$ at one or both ends, we end up with an important class of ODE's whose solutions are special functions (not $\sin$ or $\cos$ as in the regular S.L. case). Recall that the S.L. ODE is

$\frac{d}{dx}\left(p\frac{dy}{dx}\right)+q\,y+\lambda\,\sigma\,y=0$

With the regular S.L., we say that $p>0$ over the whole domain, including the end points. But with the singular case, this is not so. Here are three important S.L. ODE's that are singular.
Bessel equation:

$\frac{d}{dx}\left(x\frac{dy}{dx}\right)+\left(-\frac{m^2}{x}+\lambda x\right)y=0$

Or in standard form

$y''+\frac{1}{x}y'+\left(\lambda-\frac{m^2}{x^2}\right)y=0$

So we see that $p=x$, $q=-\frac{m^2}{x}$, $\sigma=x$. At $x=0$ we have $p=0$, which is what makes it singular (the weight also happens to be zero there), but we only care about $p$ being zero or not at one of the ends. So to check if an S.L. problem is singular or not, just check whether $p$ is zero at one of the ends. As mentioned before, when $p=0$ at one of the ends, we can't use the standard B.C. of the regular S.L.; instead, at the end where $p=0$, we must use what is called a bounded boundary condition, which in this case is $\lvert y(0)\rvert<\infty$. The solution to this ODE will be in terms of Bessel functions. Notice that this became singular because the domain starts at $x=0$. If the domain happened to run between two positive points instead, then it would no longer be a singular S.L. problem but a regular one.
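For such Bessel problems the eigenvalues end up at zeros of $J_0$. A stdlib-only sketch (my own, using the power series of $J_0$ and bisection) locates the first zero, which is known to be $\approx 2.40483$:

```python
def j0(x, terms=40):
    """Bessel J0 via its power series: sum over k of (-1)^k (x/2)^{2k} / (k!)^2."""
    total = 0.0
    term = 1.0                          # k = 0 term
    for k in range(1, terms + 1):
        total += term
        term *= -((x / 2) ** 2) / k**2  # ratio of consecutive terms
    return total

def bisect(f, a, b, tol=1e-12):
    """Find a root of f in [a, b] assuming a sign change."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

root = bisect(j0, 2.0, 3.0)   # first zero of J0, ≈ 2.404825557...
```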
Legendre equation

$\frac{d}{dx}\left((1-x^2)\frac{dy}{dx}\right)+\lambda y=0$

Or it can be written as

$(1-x^2)y''-2xy'+\lambda y=0$

We see that $p=1-x^2$, $q=0$, $\sigma=1$. And now it happens that $p=0$ at $x=1$. So it is singular at the other end compared to Bessel. In this case the singular end is the right one. Again, the boundary condition at $x=1$ must now be a bounded one, i.e. $\lvert y(1)\rvert<\infty$. On the other end, where $p$ is not zero, we still use the standard homogeneous boundary conditions.
Chebyshev equation

$\frac{d}{dx}\left(\sqrt{1-x^2}\,\frac{dy}{dx}\right)+\frac{\lambda}{\sqrt{1-x^2}}\,y=0$

Or

$(1-x^2)y''-xy'+\lambda y=0$

So we see that $p=\sqrt{1-x^2}$, $q=0$, $\sigma=\frac{1}{\sqrt{1-x^2}}$. So where is $p=0$ here? At $x=1$ we get $p=0$, and at $x=-1$ we get $p=0$ also. So this is singular at both ends, and we need to use bounded boundary conditions at both ends now to solve it.
The solution to this singular S.L. problem is given in terms of the Chebyshev special functions.
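As a sanity check (my own example): the Chebyshev polynomial $T_3(x)=4x^3-3x$ should satisfy $(1-x^2)y''-xy'+n^2y=0$ with $n=3$:

```python
# T3 and its exact derivatives
y = lambda x: 4 * x**3 - 3 * x
dy = lambda x: 12 * x**2 - 3
d2y = lambda x: 24 * x

def residual(x, n=3):
    """Residual of the Chebyshev ODE (1 - x^2) y'' - x y' + n^2 y."""
    return (1 - x**2) * d2y(x) - x * dy(x) + n**2 * y(x)

worst = max(abs(residual(x)) for x in (-0.9, -0.3, 0.0, 0.4, 0.8))
# worst should be zero up to floating point rounding
```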
where $c_1,c_2$ are found from the boundary conditions. The above is valid for "large" $\lambda$, and is found by first introducing a small parameter $\epsilon$, then assuming an exponential (WKB) form for the solution and working through the WKB method. Remember, WKB only works for linear homogeneous ODEs and is used to estimate the solution for large $\lambda$ (or small $\epsilon$).
where $L$ above is the S-L operator, so that we write the ODE as $L[y]=-\lambda\sigma y$.
And for the axis-symmetric case (no angular dependency), it becomes
The solution is found by separation of variables. For the radial part, the solution is in terms of the Bessel function of order zero, $J_0$,
which is found using the series method. The eigenvalues $\lambda$ are determined by the zeros: $J_0(\sqrt{\lambda}\,a)=0$, where $a$ is the disk radius.
| (1) |
Let $\phi_n$ be the eigenfunction associated with eigenvalue $\lambda_n$. Since eigenfunctions satisfy the ODE itself, we can write, for any arbitrary eigenfunction (subscript removed for clarity in what follows)
Integrating both sides
| (2) |
Looking at the first term, we integrate by parts, hence
Substituting (3) into (2) gives
Hence
Compare to the one in the book.
Problem 1 If the problem gives an S-L equation and asks for an estimate of the smallest eigenvalue, use the Rayleigh quotient. We do not need to solve the SL problem to find the solution; this is the whole point. Everything in the RHS of the quotient is given, except of course the solution itself. Here comes the main idea: come up with any trial function and use it in place of the exact eigenfunction. This trial function just needs to satisfy the boundary conditions, which are also given. Then all we need to do is evaluate the integrals. Pick the simplest function of $x$ which satisfies the boundary conditions; all other terms can be read from the given problem. At the end, we should get a numerical value. This is an upper bound on the lowest eigenvalue.
Problem 2 We are given an SL problem with boundary conditions and asked to show that $\lambda\ge 0$ without solving the ODE. Use the Rayleigh quotient and argue that the denominator can't be zero (else the eigenfunction is zero, which is not possible) and it also can't be negative. Then argue that the numerator is not negative. Do not solve the ODE! That is the whole point of using the Rayleigh quotient.
Problem We are given an SL problem with boundary conditions and asked to estimate the large eigenvalues and corresponding eigenfunctions. This is different from being asked to estimate the smallest eigenvalue, where we use the Rayleigh quotient and a trial function. Here we instead use WKB. For 1D, just use what is called the physical optics correction, given by $y\approx\sigma^{-1/4}\left(c_1\cos\left(\sqrt{\lambda}\int_0^x\sqrt{\sigma}\,dt\right)+c_2\sin\left(\sqrt{\lambda}\int_0^x\sqrt{\sigma}\,dt\right)\right)$, where everything in this expression is a known function (from the problem itself). Notice that $q$ is not there. Now use the first boundary condition and solve for one of the constants; it should come out to be zero. Use the second boundary condition (remember, the boundary conditions are homogeneous), and we get one equation in the remaining constant; for a non-trivial solution, solve for the allowed values of $\lambda$. This gives the large-eigenvalue estimate, i.e. for when $n$ is very large. Depending on the problem, $n$ does not have to be too large to get a good estimate compared with the exact solution. See HW7, problem 5.9.2 for an example.
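A sketch of the whole recipe on a made-up problem (entirely my own, not from the notes): $y''+\lambda(1+x)y=0$, $y(0)=y(1)=0$. The physical-optics quantization gives $\sqrt{\lambda_n}\int_0^1\sqrt{1+x}\,dx=n\pi$; a shooting method (RK4 plus bisection) finds the true first eigenvalue for comparison:

```python
import math

def shoot(lam, sigma, n_steps=1000):
    """Integrate y'' + lam*sigma(x)*y = 0 with y(0)=0, y'(0)=1 by RK4; return y(1)."""
    h = 1.0 / n_steps
    x, y, v = 0.0, 0.0, 1.0
    f = lambda x, y, v: (v, -lam * sigma(x) * y)
    for _ in range(n_steps):
        k1 = f(x, y, v)
        k2 = f(x + h / 2, y + h / 2 * k1[0], v + h / 2 * k1[1])
        k3 = f(x + h / 2, y + h / 2 * k2[0], v + h / 2 * k2[1])
        k4 = f(x + h, y + h * k3[0], v + h * k3[1])
        y += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        v += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        x += h
    return y

sigma = lambda x: 1.0 + x

# WKB estimate: sqrt(lam_1) * integral of sqrt(1+x) over [0,1] = pi
phase = (2.0 / 3.0) * (2.0**1.5 - 1.0)      # = 1.21895...
lam_wkb = (math.pi / phase) ** 2            # ≈ 6.64

# Shooting: bisect on lam so that y(1) = 0 (first eigenvalue lies in [1, 20])
lo, hi = 1.0, 20.0
for _ in range(40):
    mid = 0.5 * (lo + hi)
    if shoot(lo, sigma) * shoot(mid, sigma) <= 0:
        hi = mid
    else:
        lo = mid
lam_num = 0.5 * (lo + hi)
rel_err = abs(lam_wkb - lam_num) / lam_num  # WKB is close even for n = 1
```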
Problem We are given the 1D heat PDE $u_t=ku_{xx}$. If the B.C. are homogeneous, we are done; we know the solution: separation of variables. If the B.C. are not homogeneous, then we have a small problem: we can't do separation of variables directly. If we can find an equilibrium solution, which only needs to satisfy the non-homogeneous B.C., then write the solution as the equilibrium solution plus a new unknown, and plug this into the PDE. The new unknown satisfies homogeneous B.C. This works when the original non-homogeneous B.C. were Dirichlet. If they were Neumann, then we will get an extra term that does not vanish after differentiating twice; we solve for the new unknown, since it has homogeneous B.C., but treat the extra term as a new source. We apply the initial conditions also to find all the eigenfunction-expansion coefficients. We are done; now we know the solution.
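A minimal sketch of the equilibrium-solution splitting for Dirichlet B.C. (the symbols $A$, $B$, $L$ are illustrative, not from the original):

```latex
% u_t = k u_{xx},  u(0,t)=A,  u(L,t)=B   (nonhomogeneous Dirichlet B.C.)
% Equilibrium solution: u_E'' = 0 with u_E(0)=A, u_E(L)=B:
u_E(x) = A + (B-A)\frac{x}{L}
% Write u = u_E + v.  Then v satisfies
v_t = k\,v_{xx},\qquad v(0,t)=0,\qquad v(L,t)=0,\qquad v(x,0)=u(x,0)-u_E(x)
% which has homogeneous B.C. and is solved by separation of variables.
```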
But if we can't find the reference function, or we do not want to use this method, then we can use another method, called eigenfunction expansion. We assume an expansion in the eigenfunctions and plug it into the PDE. But now we can't do term-by-term differentiation, since the eigenfunctions satisfy the homogeneous B.C. while the solution has non-homogeneous B.C. So the trick is to use Green's formula to rewrite the expansion of the second derivative as a series plus the contribution from the boundary terms (this is like doing integration by parts twice, but much easier). See pages 355-356, Haberman.
Any linear second order ODE can be converted to S-L form. The S-L form is the following
| (1) |
Sometimes there is a minus sign there. I really never understood why some books put a minus sign and some do not. Maybe I'll find out one day. But for now, (1) is used as the S-L form. The goal now is: given a general eigenvalue ODE, we want to convert it (rewrite it) into the above form. The second order linear ODE will have this form
| (2) |
The parameter $\lambda$ in the S-L form is the eigenvalue. We really only use the S-L form for eigenvalue problems. First, we will show how to convert (2) to (1), and then show a few examples. We are given (2) and want to convert it to (1). The first step is to convert (2) to standard form
Then multiply (2) by some unknown function $\mu(x)$ which is assumed positive.
Now, rewrite as . These are the same thing. Now we replace this in the above and obtain
Here comes the main trick in all of this. We want to force the unwanted term to be zero. This implies a condition on $\mu$ (it comes from just solving a first order ODE for $\mu$). Therefore, with $\mu$ so chosen, (3) becomes
We are done. Hence
And
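With the missing symbols filled in under standard notation (a hedged reconstruction, since the original equations did not survive), the conversion can be summarized as:

```latex
% General eigenvalue ODE:   a(x)y'' + b(x)y' + c(x)y + \lambda d(x)y = 0
% Standard form (divide by a):  y'' + \tfrac{b}{a}y' + \tfrac{c}{a}y + \lambda\tfrac{d}{a}y = 0
% Multiply by \mu(x) = e^{\int \frac{b}{a}\,dx}, chosen so that \mu' = \tfrac{b}{a}\mu:
\frac{d}{dx}\!\left(p\,\frac{dy}{dx}\right) + q\,y + \lambda\,\sigma\,y = 0,
\qquad p = e^{\int \frac{b}{a}\,dx},\quad q = \frac{c}{a}\,p,\quad \sigma = \frac{d}{a}\,p
```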
Let's see how this works on some examples.
Convert to S-L. We see the general form here is , with . Hence , therefore the SL form is
| (1) |
Where
Hence (1) becomes
This was easy, since the ODE given was already in SL form.
Convert
Rewrite as
We see the general form here is , with . Hence
Therefore the SL form is
| (1) |
Where
Hence (1) becomes
Convert
Rewrite as
We see the general form here is , with . Hence
Therefore the SL form is
| (1) |
Where
Hence (1) becomes
Convert
We see the general form here is , with . Hence
Therefore the SL form is
| (1) |
Where
Hence (1) becomes
I seem to have a sign mistake there. I just do not see it now. This should come out to be
Too many references were used, but I will try to remember to start recording the books used from now on. Here is the current list: