
4.2 HW 2

  4.2.1 Problem 1
  4.2.2 Problem 2
  4.2.3 Problem 3
  4.2.4 Problem 4
  4.2.5 Problem 5
  4.2.6 HW 2 key solution

4.2.1 Problem 1

problem description

(figure omitted)

solution The following diagram illustrates epi J for n=1. In words, it is the set of all points on or above the curve of the function J\left ( u\right ) .

(figure omitted)

This is an iff proof, hence we need to show the following

1.
Given J is convex function, then show that epi J is a convex set.
2.
Given that epi J is a convex set, then show that J is a convex function.

Proof of first direction We pick any two arbitrary points in epi J, say p_{0}=(u^{0},y^{0}) and p_{1}=(u^{1},y^{1}). To show epi J is a convex set, we now need to show that any point on the line segment between p_{0},p_{1} is also in epi J. A point on this segment is given by p_{\lambda }=\left ( u^{\lambda },y^{\lambda }\right ) where \lambda \in \left [ 0,1\right ] . The following diagram helps illustrate this for n=1.

(figure omitted)

The point p_{\lambda } is given by\begin{align*} \left ( u^{\lambda },y^{\lambda }\right ) & =\left ( 1-\lambda \right ) p_{0}+\lambda p_{1}\\ & =\left ( 1-\lambda \right ) \left ( u^{0},y^{0}\right ) +\lambda \left ( u^{1},y^{1}\right ) \\ & =\left ( \left ( 1-\lambda \right ) u^{0}+\lambda u^{1},\left ( 1-\lambda \right ) y^{0}+\lambda y^{1}\right ) \end{align*}

Therefore y^{\lambda }=\left ( 1-\lambda \right ) y^{0}+\lambda y^{1}. Since p_{0},p_{1} are in epi J, then by the definition of epi J, we know that y^{0}\geq J\left ( u^{0}\right ) and y^{1}\geq J\left ( u^{1}\right ) . Therefore we conclude that\begin{equation} y^{\lambda }\geq \left ( 1-\lambda \right ) J\left ( u^{0}\right ) +\lambda J\left ( u^{1}\right ) \tag{1} \end{equation} But since we assumed J is a convex function, then we also know that \left ( 1-\lambda \right ) J\left ( u^{0}\right ) +\lambda J\left ( u^{1}\right ) \geq J\left ( u^{\lambda }\right ) where u^{\lambda }=\left ( 1-\lambda \right ) u^{0}+\lambda u^{1}. Therefore (1) becomes y^{\lambda }\geq J\left ( u^{\lambda }\right ) , which shows that the arbitrary point p_{\lambda } is in epi J.

We now need to prove the other direction: given that epi J is a convex set, show that J is a convex function. Since epi J is a convex set, we pick two arbitrary points in epi J, say p_{0},p_{1}. We can choose p_{0}=\left ( u^{0},J\left ( u^{0}\right ) \right ) and p_{1}=\left ( u^{1},J\left ( u^{1}\right ) \right ) . These are still in epi J, but on its lower boundary, lying on the curve of J\left ( u\right ) .

(figure omitted)

Since p_{0},p_{1} are two points in a convex set, then any point p^{\lambda } on the line segment between them is also in epi J (by definition of a convex set). And since p^{\lambda }=\left ( 1-\lambda \right ) p_{0}+\lambda p_{1} this implies\begin{align} p^{\lambda } & =\left ( u^{\lambda },y^{\lambda }\right ) \nonumber \\ & =\left ( 1-\lambda \right ) p_{0}+\lambda p_{1}\nonumber \\ & =\left ( 1-\lambda \right ) \left ( u^{0},J\left ( u^{0}\right ) \right ) +\lambda \left ( u^{1},J\left ( u^{1}\right ) \right ) \nonumber \\ & =\left ( \left ( 1-\lambda \right ) u^{0}+\lambda u^{1},\overset{y^{\lambda }}{\overbrace{\left ( 1-\lambda \right ) J\left ( u^{0}\right ) +\lambda J\left ( u^{1}\right ) }}\right ) \tag{1} \end{align}

Since p^{\lambda } is in epi J then by definition of epi J \begin{equation} y^{\lambda }\geq J\left ( u^{\lambda }\right ) \tag{2} \end{equation} But from (1) we see that y^{\lambda }=\left ( 1-\lambda \right ) J\left ( u^{0}\right ) +\lambda J\left ( u^{1}\right ) , therefore (2) is the same as writing\begin{equation} \left ( 1-\lambda \right ) J\left ( u^{0}\right ) +\lambda J\left ( u^{1}\right ) \geq J\left ( u^{\lambda }\right ) \tag{3} \end{equation} But u^{\lambda }=\left ( 1-\lambda \right ) u^{0}+\lambda u^{1}, hence the above becomes \left ( 1-\lambda \right ) J\left ( u^{0}\right ) +\lambda J\left ( u^{1}\right ) \geq J\left ( \left ( 1-\lambda \right ) u^{0}+\lambda u^{1}\right ) But this is precisely the definition of a convex function. Therefore J\left ( u\right ) is a convex function. QED.
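The equivalence can be sanity-checked numerically. The sketch below is an illustration, not part of the proof: it uses the sample convex function J(u)=u^2 (my choice, not from the problem) and verifies that convex combinations of randomly chosen epigraph points remain in the epigraph, matching the first direction of the argument.

```python
import random

def J(u):
    # sample convex function used for the check (an assumption, not from the problem)
    return u * u

def in_epi(u, y, tol=1e-9):
    # (u, y) is in epi J when y >= J(u); tol guards against float rounding
    return y >= J(u) - tol

def check_epi_convexity(trials=1000, seed=0):
    rng = random.Random(seed)
    for _ in range(trials):
        u0, u1 = rng.uniform(-5, 5), rng.uniform(-5, 5)
        y0 = J(u0) + rng.uniform(0, 3)       # points on or above the curve
        y1 = J(u1) + rng.uniform(0, 3)
        lam = rng.random()
        u_lam = (1 - lam) * u0 + lam * u1    # convex combination of the points
        y_lam = (1 - lam) * y0 + lam * y1
        if not in_epi(u_lam, y_lam):
            return False
    return True

print(check_epi_convexity())  # -> True
```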

4.2.2 Problem 2

problem description

(figure omitted)

solution Assume, to the contrary, that u_{0}^{\ast } and u_{1}^{\ast } are two different minimizing elements in \Re ^{n}, and assume without loss of generality that J\left ( u_{1}^{\ast }\right ) \leq J\left ( u_{0}^{\ast }\right ) . We will show that this leads to a contradiction. Since u_{0}^{\ast } is a minimizer, there exists some R>0 such that all points u that satisfy \left \Vert u_{0}^{\ast }-u\right \Vert \leq R also satisfy J\left ( u_{0}^{\ast }\right ) \leq J\left ( u\right )

(figure omitted)

We will consider the points along the line joining u_{0}^{\ast },u_{1}^{\ast }, written as the convex combination u^{\lambda }=\left ( 1-\lambda \right ) u_{0}^{\ast }+\lambda u_{1}^{\ast } with \lambda \in \left [ 0,1\right ] , and pick one point u^{\lambda } that satisfies \left \Vert u_{0}^{\ast }-u^{\lambda }\right \Vert \leq R. Since \left \Vert u_{0}^{\ast }-u^{\lambda }\right \Vert =\lambda \left \Vert u_{0}^{\ast }-u_{1}^{\ast }\right \Vert , any 0<\lambda \leq \frac{R}{\left \Vert u_{0}^{\ast }-u_{1}^{\ast }\right \Vert } will put u^{\lambda } inside the sphere of radius R.

(figure omitted)

Hence now we can say that\begin{equation} J\left ( u_{0}^{\ast }\right ) \leq J\left ( u^{\lambda }\right ) \tag{1} \end{equation} But given that J\left ( u\right ) is a strictly convex function, u_{0}^{\ast }\neq u_{1}^{\ast } and \lambda \in \left ( 0,1\right ) , then\begin{equation} J( u^{\lambda }) <\left ( 1-\lambda \right ) J\left ( u_{0}^{\ast }\right ) +\lambda J\left ( u_{1}^{\ast }\right ) \tag{2} \end{equation} Using J\left ( u_{1}^{\ast }\right ) \leq J\left ( u_{0}^{\ast }\right ) (relabeling the two points if necessary), replacing J\left ( u_{1}^{\ast }\right ) by J\left ( u_{0}^{\ast }\right ) in the RHS of (2) can only increase the right hand side, so the strict inequality is preserved:\begin{align} J ( u^{\lambda } ) & <\left ( 1-\lambda \right ) J\left ( u_{0}^{\ast }\right ) +\lambda J\left ( u_{0}^{\ast }\right ) \nonumber \\ J ( u^{\lambda } ) & <J\left ( u_{0}^{\ast }\right ) \tag{3} \end{align}

Equation (3) contradicts equation (1). Therefore our assumption was wrong: there cannot be two different minimizing elements, and u_{0}^{\ast } must be the same as u_{1}^{\ast }.
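A small numerical illustration of the key inequality (2): for a strictly convex function (here J(u)=(u-2)^2, chosen only as an example), every strict convex combination of two distinct points strictly beats the corresponding mixture of the endpoint values.

```python
def J(u):
    # example strictly convex function (my choice, not from the problem statement)
    return (u - 2.0) ** 2

def strict_convexity_holds(u0, u1):
    # check J(u_lam) < (1 - lam) J(u0) + lam J(u1) for several lam in (0, 1)
    for k in range(1, 10):
        lam = k / 10.0
        u_lam = (1 - lam) * u0 + lam * u1
        if not J(u_lam) < (1 - lam) * J(u0) + lam * J(u1):
            return False
    return True

print(strict_convexity_holds(1.0, 4.0))  # -> True
```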

4.2.3 Problem 3

problem description

(figure omitted)

solution We are given that J\left ( u^{\ast }\right ) \leq J\left ( u\right ) for all u such that \left \Vert u^{\ast }-u\right \Vert <\delta . Let us pick any arbitrary point u^{1} outside the ball of radius \delta . Then any point on the line between u^{\ast } and u^{1} is given by u^{\lambda }=\left ( 1-\lambda \right ) u^{\ast }+\lambda u^{1} The diagram below illustrates the setup so far.

(figure omitted)

We now need to show that J\left ( u^{\ast }\right ) \leq J\left ( u^{1}\right ) even though u^{1} is outside the ball. Since J is a convex function, then\begin{equation} J\left ( u^{\lambda }\right ) \leq \left ( 1-\lambda \right ) J\left ( u^{\ast }\right ) +\lambda J\left ( u^{1}\right ) \tag{1} \end{equation} We can now select \lambda small enough to push u^{\lambda } inside the ball, while keeping u^{1} fixed outside the ball; any 0<\lambda <\frac{\delta }{\left \Vert u^{1}-u^{\ast }\right \Vert } works. For such \lambda we have J\left ( u^{\ast }\right ) \leq J\left ( u^{\lambda }\right )

(figure omitted)

Hence (1) becomes\begin{equation} J\left ( u^{\ast }\right ) \leq \left ( 1-\lambda \right ) J\left ( u^{\ast }\right ) +\lambda J\left ( u^{1}\right ) \tag{2} \end{equation} where we replaced J\left ( u^{\lambda }\right ) in (1) by the smaller (or equal) quantity J\left ( u^{\ast }\right ) , so the \leq relation remains valid. Simplifying (2) gives\begin{align*} J\left ( u^{\ast }\right ) & \leq J\left ( u^{\ast }\right ) -\lambda J\left ( u^{\ast }\right ) +\lambda J\left ( u^{1}\right ) \\ \lambda J\left ( u^{\ast }\right ) & \leq \lambda J\left ( u^{1}\right ) \end{align*}

For \lambda >0 this means J\left ( u^{\ast }\right ) \leq J\left ( u^{1}\right ) . This completes the proof, since u^{1} was an arbitrary point. Hence u^{\ast } is a global minimum. QED
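The \lambda-selection step can be made concrete. In the sketch below, J(u)=u^2 with u^{\ast }=0 and \delta =0.5 are illustrative choices (not from the problem); choosing \lambda below \delta /\left \Vert u^{1}-u^{\ast }\right \Vert puts u^{\lambda } inside the ball, after which the chain of inequalities yields J(u^{\ast })\leq J(u^{1}).

```python
def J(u):
    # illustrative convex function with minimum at u* = 0 (my choice)
    return u * u

u_star, delta = 0.0, 0.5
u1 = 7.0                                # arbitrary point outside the ball
lam = delta / (2 * abs(u1 - u_star))    # small enough to land inside the ball
u_lam = (1 - lam) * u_star + lam * u1

assert abs(u_lam - u_star) < delta                      # u_lam is inside the ball
assert J(u_star) <= J(u_lam)                            # local minimality
assert J(u_lam) <= (1 - lam) * J(u_star) + lam * J(u1)  # convexity, eq (1)
assert J(u_star) <= J(u1)                               # the global conclusion
print("local minimum is global for this example")
```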

4.2.4 Problem 4

problem description

(figure omitted)

solution

We need to show that J\left ({\displaystyle \sum \limits _{i=1}^{N}} \lambda _{i}u^{i}\right ) \leq{\displaystyle \sum \limits _{i=1}^{N}} \lambda _{i}J\left ( u^{i}\right ) where {\displaystyle \sum \limits _{i=1}^{N}} \lambda _{i}=1. Proof by induction. For N=1 and since \lambda _{1}=1, then we have J\left ( u^{1}\right ) =J\left ( u^{1}\right ) The case for N=2 comes for free, from the definition of J being a convex function\begin{equation} J\left ( \left ( 1-\lambda \right ) u^{1}+\lambda u^{2}\right ) \leq \left ( 1-\lambda \right ) J\left ( u^{1}\right ) +\lambda J\left ( u^{2}\right ) \tag{A} \end{equation} By making \left ( 1-\lambda \right ) \equiv \lambda _{1},\lambda \equiv \lambda _{2}, the above can be written as J\left ( \lambda _{1}u^{1}+\lambda _{2}u^{2}\right ) \leq \lambda _{1}J\left ( u^{1}\right ) +\lambda _{2}J\left ( u^{2}\right ) We now assume it is true for N=k-1. In other words, the inductive hypothesis below is given as true\begin{equation} J\left ({\displaystyle \sum \limits _{i=1}^{k-1}} \lambda _{i}u^{i}\right ) \leq{\displaystyle \sum \limits _{i=1}^{k-1}} \lambda _{i}J\left ( u^{i}\right ) \tag{*} \end{equation} Now we have to show it will also be true for N=k. Assuming \lambda _{k}\neq 1 (otherwise all the other \lambda _{i} are zero and the result is immediate), we write\begin{align}{\displaystyle \sum \limits _{i=1}^{k}} \lambda _{i}J\left ( u^{i}\right ) & =\lambda _{1}J\left ( u^{1}\right ) +\lambda _{2}J\left ( u^{2}\right ) +\cdots +\lambda _{k}J\left ( u^{k}\right ) \nonumber \\ & =\left ( 1-\lambda _{k}\right ) \left ( \frac{\lambda _{1}}{\left ( 1-\lambda _{k}\right ) }J\left ( u^{1}\right ) +\frac{\lambda _{2}}{\left ( 1-\lambda _{k}\right ) }J\left ( u^{2}\right ) +\cdots +\frac{\lambda _{k-1}}{\left ( 1-\lambda _{k}\right ) }J\left ( u^{k-1}\right ) +\frac{\lambda _{k}}{\left ( 1-\lambda _{k}\right ) }J\left ( u^{k}\right ) \right ) \nonumber \\ & =\left ( 1-\lambda _{k}\right ) \left ( \frac{\lambda _{1}}{\left ( 1-\lambda _{k}\right ) }J\left ( u^{1}\right ) +\frac{\lambda _{2}}{\left ( 1-\lambda _{k}\right ) }J\left ( u^{2}\right ) +\cdots +\frac{\lambda _{k-1}}{\left ( 1-\lambda _{k}\right ) }J\left ( u^{k-1}\right ) \right ) +\lambda _{k}J\left ( u^{k}\right ) \nonumber \\ & =\left ( 1-\lambda _{k}\right ) \left ({\displaystyle \sum \limits _{i=1}^{k-1}} \frac{\lambda _{i}}{\left ( 1-\lambda _{k}\right ) }J\left ( u^{i}\right ) \right ) +\lambda _{k}J\left ( u^{k}\right ) \tag{1} \end{align}

Now we take advantage of the inductive hypothesis Eq. (*) applied with the k-1 weights \frac{\lambda _{i}}{\left ( 1-\lambda _{k}\right ) }, which sum to one since {\displaystyle \sum \limits _{i=1}^{k-1}} \lambda _{i}=1-\lambda _{k}. It says that J\left ({\displaystyle \sum \limits _{i=1}^{k-1}} \frac{\lambda _{i}}{\left ( 1-\lambda _{k}\right ) }u^{i}\right ) \leq {\displaystyle \sum \limits _{i=1}^{k-1}} \frac{\lambda _{i}}{\left ( 1-\lambda _{k}\right ) }J\left ( u^{i}\right ) . Using this in (1) gives the \geq relation\begin{equation}{\displaystyle \sum \limits _{i=1}^{k}} \lambda _{i}J\left ( u^{i}\right ) \geq \left ( 1-\lambda _{k}\right ) J\left ({\displaystyle \sum \limits _{i=1}^{k-1}} \frac{\lambda _{i}}{\left ( 1-\lambda _{k}\right ) }u^{i}\right ) +\lambda _{k}J\left ( u^{k}\right ) \tag{2} \end{equation} We now take advantage of the case N=2 in (A) by viewing the RHS of (2) as \left ( 1-\lambda _{k}\right ) J\left ( u^{1}\right ) +\lambda _{k}J\left ( u^{2}\right ) , where we let u^{1}\equiv{\displaystyle \sum \limits _{i=1}^{k-1}} \frac{\lambda _{i}}{\left ( 1-\lambda _{k}\right ) }u^{i},u^{2}\equiv u^{k}. Hence we conclude that\begin{equation} \left ( 1-\lambda _{k}\right ) J\left ({\displaystyle \sum \limits _{i=1}^{k-1}} \frac{\lambda _{i}}{\left ( 1-\lambda _{k}\right ) }u^{i}\right ) +\lambda _{k}J\left ( u^{k}\right ) \geq J\left ( \left ( 1-\lambda _{k}\right ){\displaystyle \sum \limits _{i=1}^{k-1}} \frac{\lambda _{i}}{\left ( 1-\lambda _{k}\right ) }u^{i}+\lambda _{k}u^{k}\right ) \tag{3} \end{equation} Using (3) in (2) gives (the \geq relation remains valid, since we replaced the RHS of (2) by something no larger) \begin{align*}{\displaystyle \sum \limits _{i=1}^{k}} \lambda _{i}J\left ( u^{i}\right ) & \geq J\left ( \left ( 1-\lambda _{k}\right ){\displaystyle \sum \limits _{i=1}^{k-1}} \frac{\lambda _{i}}{\left ( 1-\lambda _{k}\right ) }u^{i}+\lambda _{k}u^{k}\right ) \\ & =J\left ( \left ({\displaystyle \sum \limits _{i=1}^{k-1}} \lambda _{i}u^{i}\right ) +\lambda _{k}u^{k}\right ) \end{align*}

Hence {\displaystyle \sum \limits _{i=1}^{k}} \lambda _{i}J\left ( u^{i}\right ) \geq J\left ({\displaystyle \sum \limits _{i=1}^{k}} \lambda _{i}u^{i}\right ) , which completes the induction. QED.
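The finite Jensen inequality just proved can be spot-checked numerically. The sketch below uses J(u)=e^{u} (a convenient convex function, my choice) with random weights normalized to sum to one.

```python
import math
import random

def J(u):
    # exp is convex, so the finite Jensen inequality should hold for it
    return math.exp(u)

def check_jensen(trials=500, seed=1, tol=1e-12):
    rng = random.Random(seed)
    for _ in range(trials):
        n = rng.randint(1, 6)
        w = [rng.random() + 1e-9 for _ in range(n)]
        s = sum(w)
        lam = [x / s for x in w]                    # weights summing to 1
        u = [rng.uniform(-2, 2) for _ in range(n)]
        lhs = J(sum(l * ui for l, ui in zip(lam, u)))
        rhs = sum(l * J(ui) for l, ui in zip(lam, u))
        if lhs > rhs + tol:                         # tol guards float rounding
            return False
    return True

print(check_jensen())  # -> True
```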

4.2.5 Problem 5

   4.2.5.1 Appendix

problem description

(figure omitted)

solution

Assuming J\left ( u\right ) is twice continuously differentiable (C^{2}) in u_{1},u_{2},\cdots ,u_{n}, if we can show that the Hessian \nabla ^{2}J\left ( u\right ) is positive semi-definite on u_{i}>0, then J\left ( u\right ) is convex. The first step is to determine \nabla ^{2}J\left ( u\right ) . \begin{align*} \frac{\partial J}{\partial u_{i}} & =-\frac{1}{n}\left ( u_{1}u_{2}\cdots u_{n}\right ) ^{\frac{1}{n}-1}{\displaystyle \prod \limits _{k=1,k\neq i}^{n}} u_{k}=\frac{1}{n}\frac{J\left ( u\right ) }{\left ( u_{1}u_{2}\cdots u_{n}\right ) }{\displaystyle \prod \limits _{k=1,k\neq i}^{n}} u_{k}=\frac{1}{n}\frac{J\left ( u\right ) }{{\displaystyle \prod \limits _{k=1}^{n}} u_{k}}{\displaystyle \prod \limits _{k=1,k\neq i}^{n}} u_{k}\\ & =\frac{1}{n}\frac{J\left ( u\right ) }{u_{i}} \end{align*}

And\begin{align*} \frac{\partial ^{2}J}{\partial u_{i}^{2}} & =\frac{1}{n}\frac{\left ( \frac{1}{n}\frac{J\left ( u\right ) }{u_{i}}\right ) }{u_{i}}-\frac{1}{n}\frac{J\left ( u\right ) }{u_{i}^{2}}\\ & =\frac{1}{n^{2}}\frac{J\left ( u\right ) }{u_{i}^{2}}-\frac{1}{n}\frac{J\left ( u\right ) }{u_{i}^{2}}\\ & =\frac{1}{n}\frac{J\left ( u\right ) }{u_{i}^{2}}\left ( \frac{1}{n}-1\right ) \end{align*}

And the cross derivatives are\begin{align*} \frac{\partial ^{2}J}{\partial u_{i}\partial u_{j}} & =\frac{\partial }{\partial u_{j}}\left ( \frac{1}{n}\frac{J\left ( u\right ) }{u_{i}}\right ) \\ & =\frac{1}{n}\frac{\frac{1}{n}\frac{J\left ( u\right ) }{u_{j}}}{u_{i}}\\ & =\frac{1}{n^{2}}\frac{J\left ( u\right ) }{u_{i}u_{j}} \end{align*}

Therefore \boxed{ \nabla ^{2}J\left ( u\right ) =\begin{pmatrix} \frac{1}{n^{2}}\frac{J\left ( u\right ) }{u_{1}^{2}}\left ( 1-n\right ) & \frac{1}{n^{2}}\frac{J\left ( u\right ) }{u_{1}u_{2}} & \cdots & \frac{1}{n^{2}}\frac{J\left ( u\right ) }{u_{1}u_{n}}\\ \frac{1}{n^{2}}\frac{J\left ( u\right ) }{u_{2}u_{1}} & \frac{1}{n^{2}}\frac{J\left ( u\right ) }{u_{2}^{2}}\left ( 1-n\right ) & \cdots & \frac{1}{n^{2}}\frac{J\left ( u\right ) }{u_{2}u_{n}}\\ \vdots & \cdots & \ddots & \vdots \\ \frac{1}{n^{2}}\frac{J\left ( u\right ) }{u_{n}u_{1}} & \frac{1}{n^{2}}\frac{J\left ( u\right ) }{u_{n}u_{2}} & \cdots & \frac{1}{n^{2}}\frac{J\left ( u\right ) }{u_{n}^{2}}\left ( 1-n\right ) \end{pmatrix} } Now we need to show that \nabla ^{2}J\left ( u\right ) is positive semi-definite. For n=1, the above reduces to \nabla ^{2}J\left ( u\right ) =\frac{J\left ( u\right ) }{u_{1}^{2}}\left ( 1-1\right ) =0 Hence it is non-negative; this is the same as saying the second derivative is zero. For n=2 \nabla ^{2}J\left ( u\right ) =\begin{pmatrix} \frac{1}{4}J\left ( u\right ) \frac{1-2}{u_{1}^{2}} & \frac{1}{u_{1}u_{2}}\frac{1}{4}J\left ( u\right ) \\ \frac{1}{u_{2}u_{1}}\frac{1}{4}J\left ( u\right ) & \frac{1}{4}J\left ( u\right ) \frac{1-2}{u_{2}^{2}}\end{pmatrix} =\begin{pmatrix} \frac{-1}{u_{1}^{2}}\frac{1}{4}J\left ( u\right ) & \frac{1}{u_{1}u_{2}}\frac{1}{4}J\left ( u\right ) \\ \frac{1}{u_{2}u_{1}}\frac{1}{4}J\left ( u\right ) & \frac{-1}{u_{2}^{2}}\frac{1}{4}J\left ( u\right ) \end{pmatrix} The first leading minor is \frac{-1}{4u_{1}^{2}}J\left ( u\right ) , which is positive, since J\left ( u\right ) <0 and u_{i}>0 (given). The second leading minor is \Delta _{2}=\begin{vmatrix} \frac{-1}{u_{1}^{2}}\frac{1}{4}J\left ( u\right ) & \frac{1}{u_{1}u_{2}}\frac{1}{4}J\left ( u\right ) \\ \frac{1}{u_{2}u_{1}}\frac{1}{4}J\left ( u\right ) & \frac{-1}{u_{2}^{2}}\frac{1}{4}J\left ( u\right ) \end{vmatrix} =0 Hence all the leading minors are non-negative.
This means \nabla ^{2}J\left ( u\right ) is positive semi-definite. We now look at n=3 \nabla ^{2}J\left ( u\right ) =\begin{pmatrix} \frac{-2}{u_{1}^{2}}\frac{1}{9}J\left ( u\right ) & \frac{1}{u_{1}u_{2}}\frac{1}{9}J\left ( u\right ) & \frac{1}{u_{1}u_{3}}\frac{1}{9}J\left ( u\right ) \\ \frac{1}{u_{2}u_{1}}\frac{1}{9}J\left ( u\right ) & \frac{-2}{u_{2}^{2}}\frac{1}{9}J\left ( u\right ) & \frac{1}{u_{2}u_{3}}\frac{1}{9}J\left ( u\right ) \\ \frac{1}{u_{3}u_{1}}\frac{1}{9}J\left ( u\right ) & \frac{1}{u_{3}u_{2}}\frac{1}{9}J\left ( u\right ) & \frac{-2}{u_{3}^{2}}\frac{1}{9}J\left ( u\right ) \end{pmatrix} The first leading minor is \frac{-2}{9u_{1}^{2}}J\left ( u\right ) , which is positive again, since J\left ( u\right ) <0 for u_{i}>0 (given). And the second leading minor is \frac{1}{27}\frac{J^{2}\left ( u\right ) }{u_{1}^{2}u_{2}^{2}}

which is positive, since all terms are positive. The third leading minor is \Delta _{3}=\begin{vmatrix} \frac{-2}{u_{1}^{2}}\frac{1}{9}J\left ( u\right ) & \frac{1}{u_{1}u_{2}}\frac{1}{9}J\left ( u\right ) & \frac{1}{u_{1}u_{3}}\frac{1}{9}J\left ( u\right ) \\ \frac{1}{u_{2}u_{1}}\frac{1}{9}J\left ( u\right ) & \frac{-2}{u_{2}^{2}}\frac{1}{9}J\left ( u\right ) & \frac{1}{u_{2}u_{3}}\frac{1}{9}J\left ( u\right ) \\ \frac{1}{u_{3}u_{1}}\frac{1}{9}J\left ( u\right ) & \frac{1}{u_{3}u_{2}}\frac{1}{9}J\left ( u\right ) & \frac{-2}{u_{3}^{2}}\frac{1}{9}J\left ( u\right ) \end{vmatrix} =0 Hence none of the leading minors is negative. Therefore \nabla ^{2}J\left ( u\right ) is positive semi-definite. The same pattern repeats for higher values of n: all leading minors are positive, except the last leading minor, which is zero.
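As a numerical cross-check (illustration only, not part of the argument), the sketch below builds the Hessian directly from the entries derived above, with J(u)=-\left ( u_{1}\cdots u_{n}\right ) ^{1/n}, and verifies x^{T}\left ( \nabla ^{2}J\right ) x\geq 0 for random positive u and random x.

```python
import random

def hessian(u):
    # Hessian of J(u) = -(u_1 ... u_n)^(1/n), using the entries derived above:
    #   H[i][i] = (1/n^2) J(u) (1 - n) / u_i^2
    #   H[i][j] = (1/n^2) J(u) / (u_i u_j)   for i != j
    n = len(u)
    prod = 1.0
    for x in u:
        prod *= x
    Jval = -(prod ** (1.0 / n))
    return [[Jval / (n * n * u[i] * u[j]) * ((1 - n) if i == j else 1.0)
             for j in range(n)] for i in range(n)]

def check_psd(trials=200, seed=2, tol=1e-9):
    rng = random.Random(seed)
    for _ in range(trials):
        n = rng.randint(1, 5)
        u = [rng.uniform(0.1, 3.0) for _ in range(n)]
        H = hessian(u)
        x = [rng.uniform(-1.0, 1.0) for _ in range(n)]
        q = sum(x[i] * H[i][j] * x[j] for i in range(n) for j in range(n))
        if q < -tol:                      # tol guards float rounding
            return False
    return True

print(check_psd())  # -> True
```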

4.2.5.1 Appendix

Another way to show that \nabla ^{2}J\left ( u\right ) is positive semi-definite is to show that x^{T}\left ( \nabla ^{2}J\left ( u\right ) \right ) x\geq 0 for any vector x (which suffices since \nabla ^{2}J\left ( u\right ) is symmetric). x^{T}\left ( \nabla ^{2}J\left ( u\right ) \right ) x=\begin{pmatrix} x_{1} & x_{2} & \cdots & x_{n}\end{pmatrix}\begin{pmatrix} \frac{1}{n}\frac{J\left ( u\right ) }{u_{1}^{2}}\left ( \frac{1}{n}-1\right ) & \frac{1}{n^{2}}\frac{J\left ( u\right ) }{u_{1}u_{2}} & \cdots & \frac{1}{n^{2}}\frac{J\left ( u\right ) }{u_{1}u_{n}}\\ \frac{1}{n^{2}}\frac{J\left ( u\right ) }{u_{2}u_{1}} & \frac{1}{n}\frac{J\left ( u\right ) }{u_{2}^{2}}\left ( \frac{1}{n}-1\right ) & \cdots & \frac{1}{n^{2}}\frac{J\left ( u\right ) }{u_{2}u_{n}}\\ \vdots & \cdots & \ddots & \vdots \\ \frac{1}{n^{2}}\frac{J\left ( u\right ) }{u_{n}u_{1}} & \frac{1}{n^{2}}\frac{J\left ( u\right ) }{u_{n}u_{2}} & \cdots & \frac{1}{n}\frac{J\left ( u\right ) }{u_{n}^{2}}\left ( \frac{1}{n}-1\right ) \end{pmatrix}\begin{pmatrix} x_{1}\\ x_{2}\\ \vdots \\ x_{n}\end{pmatrix} Now the idea is to set n=1,2,3,\cdots and show that the resulting value is \geq 0 in each case. For n=1, we obtain 0 as before. For n=2, let

\Delta =\begin{pmatrix} x_{1} & x_{2}\end{pmatrix}\begin{pmatrix} \frac{1}{n}\frac{J\left ( u\right ) }{u_{1}^{2}}\left ( \frac{1}{n}-1\right ) & \frac{1}{n^{2}}\frac{J\left ( u\right ) }{u_{1}u_{2}}\\ \frac{1}{n^{2}}\frac{J\left ( u\right ) }{u_{2}u_{1}} & \frac{1}{n}\frac{J\left ( u\right ) }{u_{2}^{2}}\left ( \frac{1}{n}-1\right ) \end{pmatrix}\begin{pmatrix} x_{1}\\ x_{2}\end{pmatrix} . Expanding gives\begin{align*} \Delta & =\begin{pmatrix} x_{1}\frac{1}{n}\frac{J\left ( u\right ) }{u_{1}^{2}}\left ( \frac{1}{n}-1\right ) +x_{2}\frac{1}{n^{2}}\frac{J\left ( u\right ) }{u_{2}u_{1}} & x_{1}\frac{1}{n^{2}}\frac{J\left ( u\right ) }{u_{1}u_{2}}+x_{2}\frac{1}{n}\frac{J\left ( u\right ) }{u_{2}^{2}}\left ( \frac{1}{n}-1\right ) \end{pmatrix}\begin{pmatrix} x_{1}\\ x_{2}\end{pmatrix} \\ & =x_{1}^{2}\frac{1}{n}\frac{J\left ( u\right ) }{u_{1}^{2}}\left ( \frac{1}{n}-1\right ) +x_{1}x_{2}\frac{1}{n^{2}}\frac{J\left ( u\right ) }{u_{2}u_{1}}+x_{2}x_{1}\frac{1}{n^{2}}\frac{J\left ( u\right ) }{u_{1}u_{2}}+x_{2}^{2}\frac{1}{n}\frac{J\left ( u\right ) }{u_{2}^{2}}\left ( \frac{1}{n}-1\right ) \\ & =x_{1}^{2}\frac{1}{2}\frac{J\left ( u\right ) }{u_{1}^{2}}\left ( \frac{1}{2}-1\right ) +x_{1}x_{2}\frac{1}{4}\frac{J\left ( u\right ) }{u_{2}u_{1}}+x_{2}x_{1}\frac{1}{4}\frac{J\left ( u\right ) }{u_{1}u_{2}}+x_{2}^{2}\frac{1}{2}\frac{J\left ( u\right ) }{u_{2}^{2}}\left ( \frac{1}{2}-1\right ) \end{align*}

Replacing J\left ( u\right ) =-\sqrt{u_{1}u_{2}} for n=2, the RHS above becomes\begin{align*} -\frac{1}{4}x_{1}^{2}\frac{J\left ( u\right ) }{u_{1}^{2}}+x_{1}x_{2}\frac{1}{2}\frac{J\left ( u\right ) }{u_{2}u_{1}}-\frac{1}{4}x_{2}^{2}\frac{J\left ( u\right ) }{u_{2}^{2}} & =\frac{1}{4}x_{1}^{2}\frac{\sqrt{u_{1}u_{2}}}{u_{1}^{2}}-x_{1}x_{2}\frac{1}{2}\frac{\sqrt{u_{1}u_{2}}}{u_{2}u_{1}}+\frac{1}{4}x_{2}^{2}\frac{\sqrt{u_{1}u_{2}}}{u_{2}^{2}}\\ & =\left ( \frac{1}{2}\frac{\left ( u_{1}u_{2}\right ) ^{\frac{1}{4}}}{u_{1}}x_{1}-\frac{1}{2}\frac{\left ( u_{1}u_{2}\right ) ^{\frac{1}{4}}}{u_{2}}x_{2}\right ) ^{2} \end{align*}

We completed the square in the last step above. Hence x^{T}\left ( \nabla ^{2}J\left ( u\right ) \right ) x\geq 0. The same process can be continued for higher n. Hence \nabla ^{2}J\left ( u\right ) is positive semi-definite.
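The completed square can also be verified numerically. The sketch below assumes the n=2 case with J(u)=-\sqrt{u_{1}u_{2}} and checks that the quadratic form and the perfect square agree to rounding error for random inputs.

```python
import math
import random

def quad_form(u1, u2, x1, x2):
    # x^T (grad^2 J) x for n = 2, using the entries derived above
    Jv = -math.sqrt(u1 * u2)
    return (-0.25 * x1 * x1 * Jv / u1 ** 2
            + 0.5 * x1 * x2 * Jv / (u1 * u2)
            - 0.25 * x2 * x2 * Jv / u2 ** 2)

def completed_square(u1, u2, x1, x2):
    # the equivalent perfect square from the last step above
    r = (u1 * u2) ** 0.25
    return (0.5 * r / u1 * x1 - 0.5 * r / u2 * x2) ** 2

rng = random.Random(3)
for _ in range(200):
    u1, u2 = rng.uniform(0.1, 3.0), rng.uniform(0.1, 3.0)
    x1, x2 = rng.uniform(-2.0, 2.0), rng.uniform(-2.0, 2.0)
    assert math.isclose(quad_form(u1, u2, x1, x2),
                        completed_square(u1, u2, x1, x2),
                        rel_tol=1e-9, abs_tol=1e-9)
print("quadratic form matches the completed square")
```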

4.2.6 HW 2 key solution

(scanned key solution pages not included)