______part a
\[\begin{array} [c]{lll}\mu _{Y}\left ( n\right ) & = & E\left [ Y_{n}\right ] \\ & & \\ & = & E\left [{\displaystyle \sum \limits _{i=1}^{n}} X_{i}\right ] \\ & & \\ & = &{\displaystyle \sum \limits _{i=1}^{n}} E\left [ X_{i}\right ] \end{array} \]
but \(E\left [ X_{i}\right ] =1\cdot P\left \{ X_{i}=1\right \} +0\cdot P\left \{ X_{i}=0\right \} =p\)
\[\begin{array} [c]{lll}\mu _{Y}\left ( n\right ) & = &{\displaystyle \sum \limits _{i=1}^{n}} p\\ & & \\ & = & np \end{array} \]
so\[ \fbox{$\mu _{Y}\left ( n\right ) =np$} \]
I’ll now find a general expression for \(E\left [ Y_{m}Y_{n}\right ] \) that I need to use in this problem.
\[\begin{array} [c]{lll}Y_{m}Y_{n} & = & \left ({\displaystyle \sum \limits _{j=1}^{m}} X_{j}\right ) \left ({\displaystyle \sum \limits _{i=1}^{n}} X_{i}\right ) \\ & & \\ & = & \left ( X_{1}+X_{2}+\cdot \cdot \cdot +X_{m}\right ) \left ( X_{1}+X_{2}+\cdot \cdot \cdot +X_{n}\right ) \\ & & \\ & = & X_{1}X_{1}+X_{1}X_{2}+\cdot \cdot \cdot +X_{1}X_{n}\\ & + & X_{2}X_{1}+X_{2}X_{2}+\cdot \cdot \cdot +X_{2}X_{n}\\ & + & X_{3}X_{1}+X_{3}X_{2}+\cdot \cdot \cdot +X_{3}X_{n}\\ & + & \cdot \cdot \cdot \\ & + & X_{m}X_{1}+X_{m}X_{2}+\cdot \cdot \cdot +X_{m}X_{n}\end{array} \]
so, there are m rows, and n columns. also note that \(E\left [ X_{i}X_{i}\right ] =E\left [ X_{i}^{2}\right ] =0\cdot \left ( 1-p\right ) +1^{2}\cdot p=p\)
and since \(X_{1},X_{2},X_{3},\cdot \cdot \cdot \) are all independent of each other, then for \(i\neq j\), \(E\left [ X_{i}X_{j}\right ] =E\left [ X_{i}\right ] E\left [ X_{j}\right ] =p\cdot p=p^{2}\)
now, if \(m<n\) then there are \(m\) diagonal pairs \(X_{i}X_{i}\) (each contributing \(p\)) and there are \(m\left ( n-1\right ) \) cross pairs \(X_{i}X_{j}\) with \(i\neq j\) (each contributing \(p^{2}\)).
if \(n<m\), then there are \(n\) diagonal pairs \(X_{i}X_{i}\) and \(n\left ( m-1\right ) \) cross pairs.
so, the general case is then \[ \fbox{$E\left [ Y_{m}Y_{n}\right ] =\min \left ( m,n\right ) p+\min \left ( m,n\right ) \left ( \max \left ( m,n\right ) -1\right ) p^{2}$} \]
i.e. if \[ \fbox{$m<n\Rightarrow E\left [ Y_{m}Y_{n}\right ] =mp+m\left ( n-1\right ) p^{2}$} \]
if \[ \fbox{$m>n\Rightarrow E\left [ Y_{m}Y_{n}\right ] =np+n\left ( m-1\right ) p^{2}$} \]
when \[ \fbox{$m=n\Rightarrow E\left [ Y_{n}Y_{n}\right ] =E\left [ Y_{n}^{2}\right ] =np+n\left ( n-1\right ) p^{2}$} \]
now,
\[\begin{array} [c]{lll}\sigma _{Y}^{2}\left ( n\right ) & = & E\left [ Y_{n}^{2}\right ] -E^{2}\left [ Y_{n}\right ] \\ & & \\ & = & np+n\left ( n-1\right ) p^{2}-\left ( np\right ) ^{2}\end{array} \]
so\[\begin{array} [c]{c}\sigma _{Y}^{2}\left ( n\right ) =np+n^{2}p^{2}-np^{2}-n^{2}p^{2}\\ =np-np^{2}\\ =np\left ( 1-p\right ) \end{array} \]
so\[ \fbox{$\sigma _{Y}^{2}\left ( n\right ) =np\left ( 1-p\right ) $} \]
which is the familiar variance of a binomial count.
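as a quick numerical sanity check of the last two boxed results (a minimal numpy sketch; the values of \(n\), \(p\) and the trial count are arbitrary choices of mine):
\begin{verbatim}
# Monte Carlo check of E[Y_n] = np and Var(Y_n) = np(1-p)
# for Y_n = X_1 + ... + X_n with X_i i.i.d. Bernoulli(p).
import numpy as np

rng = np.random.default_rng(0)
n, p, trials = 20, 0.3, 200_000

# each row is one realization of (X_1, ..., X_n); Y_n is the row sum
Y = rng.binomial(1, p, size=(trials, n)).sum(axis=1)

print(Y.mean(), n * p)              # sample mean     vs. np
print(Y.var(), n * p * (1 - p))     # sample variance vs. np(1-p)
\end{verbatim}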
______part b
\[\begin{array} [c]{lll}K_{Y}\left ( m,n\right ) & = & E\left [ Y_{m}Y_{n}^{*}\right ] -\mu _{Y}\left ( m\right ) \mu _{Y}\left ( n\right ) \\ & & \\ & = & \min \left ( m,n\right ) p+\min \left ( m,n\right ) \left ( \max \left ( m,n\right ) -1\right ) p^{2}-\left ( mp\right ) \left ( np\right ) \end{array} \]
so\[ K_{Y}\left ( m,n\right ) =\min \left ( m,n\right ) p+\min \left ( m,n\right ) \left ( \max \left ( m,n\right ) -1\right ) p^{2}-mnp^{2} \]
i.e. \[ m<n\Rightarrow K_{Y}\left ( m,n\right ) =mp+m\left ( n-1\right ) p^{2}-mnp^{2}=mp\left ( 1-p\right ) \]
\[ n<m\Rightarrow K_{Y}\left ( m,n\right ) =np+n\left ( m-1\right ) p^{2}-mnp^{2}=np\left ( 1-p\right ) \]
so\[ \fbox{$K_{Y}\left ( m,n\right ) =\min \left ( m,n\right ) p\left ( 1-p\right ) $} \]
______part c
\[\begin{array} [c]{lll}\sigma _{A}^{2} & = & E\left [ A^{2}\right ] -E^{2}\left [ A\right ] \\ & & \\ & = & E\left [ \left ( Y_{m}-Y_{n}\right ) ^{2}\right ] -E^{2}\left [ Y_{m}-Y_{n}\right ] \\ & & \\ & = & E\left [ Y_{m}^{2}+Y_{n}^{2}-2Y_{m}Y_{n}\right ] -\left ( E\left [ Y_{m}\right ] -E\left [ Y_{n}\right ] \right ) ^{2}\\ & & \\ & = & E\left [ Y_{m}^{2}\right ] +E\left [ Y_{n}^{2}\right ] -2E\left [ Y_{m}Y_{n}\right ] -\left ( E^{2}\left [ Y_{m}\right ] +E^{2}\left [ Y_{n}\right ] -2E\left [ Y_{m}\right ] E\left [ Y_{n}\right ] \right ) \\ & = & E\left [ Y_{m}^{2}\right ] +E\left [ Y_{n}^{2}\right ] -2E\left [ Y_{m}Y_{n}\right ] -E^{2}\left [ Y_{m}\right ] -E^{2}\left [ Y_{n}\right ] +2E\left [ Y_{m}\right ] E\left [ Y_{n}\right ] \\ & = & \left ( E\left [ Y_{m}^{2}\right ] -E^{2}\left [ Y_{m}\right ] \right ) +\left ( E\left [ Y_{n}^{2}\right ] -E^{2}\left [ Y_{n}\right ] \right ) -2E\left [ Y_{m}Y_{n}\right ] +2E\left [ Y_{m}\right ] E\left [ Y_{n}\right ] \\ & & \\ & = & \sigma _{y}^{2}\left ( m\right ) +\sigma _{y}^{2}\left ( n\right ) -2E\left [ Y_{m}Y_{n}\right ] +2E\left [ Y_{m}\right ] E\left [ Y_{n}\right ] \end{array} \]
now, even though the \(X_{i}\) are independent of each other, \(E\left [ Y_{m}Y_{n}\right ] \neq E\left [ Y_{m}\right ] E\left [ Y_{n}\right ] \): the two sums share their first \(\min \left ( m,n\right ) \) terms, and for those shared terms \(E\left [ X_{i}X_{i}\right ] =p\) while \(E\left [ X_{i}\right ] E\left [ X_{i}\right ] =p^{2}.\)
so \(Y_{m}\) and \(Y_{n}\) are not independent of each other even though \(X_{i},X_{j}\) are. so the general expression becomes:
\(\sigma _{A}^{2}=mp\left ( 1-p\right ) +np\left ( 1-p\right ) -2\left [ \min \left ( m,n\right ) p+\min \left ( m,n\right ) \left ( \max \left ( m,n\right ) -1\right ) p^{2}\right ] +2nmp^{2}\)
so\[\begin{array} [c]{c}m<n\Rightarrow \sigma _{A}^{2}=mp\left ( 1-p\right ) +np\left ( 1-p\right ) -2\left [ mp+m\left ( n-1\right ) p^{2}\right ] +2nmp^{2}\\ =np-np^{2}-mp+mp^{2}=\left ( n-m\right ) \left ( p-p^{2}\right ) \end{array} \]
and\[ n<m\Rightarrow \sigma _{A}^{2}=\left ( m-n\right ) \left ( p-p^{2}\right ) \]
and\[ n=m\Rightarrow \sigma _{A}^{2}=0 \]
I can simplify this more by writing \[ \gamma =p-p^{2} \]
so, finally
\[ \fbox{$m<n\Rightarrow \sigma _{A}^{2}=\left ( n-m\right ) \gamma $} \]
and\[ \fbox{$n<m\Rightarrow \sigma _{A}^{2}=\left ( m-n\right ) \gamma $} \]
i.e.\[ \fbox{$\sigma _{A}^{2}=\left | m-n\right | \gamma $} \]
where\[ \fbox{$\gamma =p-p^{2}=p\left ( 1-p\right ) $} \]
this makes sense: for \(m<n\), \(A=Y_{m}-Y_{n}=-\left ( X_{m+1}+\cdot \cdot \cdot +X_{n}\right ) \) is (minus) a sum of \(n-m\) i.i.d. Bernoulli random variables, whose variance is \(\left ( n-m\right ) p\left ( 1-p\right ) .\)
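the same kind of Monte Carlo sketch checks the part b and part c boxes (again, \(m\), \(n\), \(p\) are arbitrary choices of mine):
\begin{verbatim}
# Monte Carlo check of K_Y(m,n) = min(m,n) p (1-p) and of
# Var(Y_m - Y_n) = |m-n| p (1-p), using one set of Bernoulli draws.
import numpy as np

rng = np.random.default_rng(1)
m, n, p, trials = 8, 20, 0.3, 200_000

X = rng.binomial(1, p, size=(trials, n))   # rows of (X_1, ..., X_n)
Ym, Yn = X[:, :m].sum(axis=1), X.sum(axis=1)

print(np.cov(Ym, Yn)[0, 1], min(m, n) * p * (1 - p))   # K_Y(m,n)
print((Ym - Yn).var(), abs(m - n) * p * (1 - p))       # variance of A
\end{verbatim}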
\[\begin{array} [c]{lll}R_{X}\left ( l\right ) & = & 5\delta \left ( l\right ) \\ S_{X}\left ( \omega \right ) & = & 5\\ S_{Y}\left ( \omega \right ) & = & 5\\ & & \\ R_{X,Y}\left ( l\right ) & = & 2\delta \left ( l\right ) \\ S_{XY}\left ( \omega \right ) & = & 2\\ & & \\ h_{1}\left ( n\right ) & = & u\left ( n+2\right ) -u\left ( n-3\right ) =\{1,1, \frame{1},1,1\}\\ H_{1}\left ( j\omega \right ) & = & \frac{\sin \left ( \frac 52\omega \right ) }{\sin \left ( \frac \omega 2\right ) }\\ & & \\ h_{2}\left ( n\right ) & = & \left [ 2-\left | n\right | \right ] h_{1}\left ( n\right ) =\{1, \frame{2},1\}\\ H_{2}\left ( j\omega \right ) & = & 2\left ( 1+\cos \omega \right ) \\ h_{3}\left ( n\right ) & = & \left ( \frac 12\right ) ^{\left | n\right | }=\left \{ \cdot \cdot \cdot ,\frac 14,\frac 12, \frame{1},\frac 12,\frac 14,\cdot \cdot \cdot \right \} \\ H_{3}\left ( j\omega \right ) & = & \frac{1-\left ( \frac 12\right ) ^{2}}{1-2\frac 12\cos \omega +\left ( \frac 12\right ) ^{2}}=\frac{\frac 34}{\frac 54-\cos \omega }=\frac 3{5-4\cos \omega }\end{array} \]
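as a quick check on \(H_{3}\), I can sum the two-sided geometric series directly:\[ H_{3}\left ( j\omega \right ) =\sum \limits _{n=-\infty }^{\infty }\left ( \frac 12\right ) ^{\left | n\right | }e^{-j\omega n}=\frac 1{1-\frac 12e^{-j\omega }}+\frac{\frac 12e^{j\omega }}{1-\frac 12e^{j\omega }}=\frac{1-\frac 14}{1-\cos \omega +\frac 14}=\frac 3{5-4\cos \omega } \]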
\[\begin{array} [c]{lll}R_{U}\left ( l\right ) & = & R_{X}\left ( l\right ) *h_{1}\left ( l\right ) *h_{3}\left ( l\right ) *h_{1}^{*}\left ( -l\right ) *h_{3}^{*}\left ( -l\right ) \\ & & +\\ & & R_{Y}\left ( l\right ) *h_{2}\left ( l\right ) *h_{3}\left ( l\right ) *h_{2}^{*}\left ( -l\right ) *h_{3}^{*}\left ( -l\right ) \\ & & +\\ & & R_{XY}\left ( l\right ) *h_{3}\left ( l\right ) *h_{3}^{*}\left ( -l\right ) \\ & & \\ S_{U}\left ( \omega \right ) & = & S_{X}\left ( \omega \right ) \left | H_{1}\left ( j\omega \right ) \right | ^{2}\left | H_{3}\left ( j\omega \right ) \right | ^{2}\\ & & +\\ & & S_{Y}\left ( \omega \right ) \left | H_{2}\left ( j\omega \right ) \right | ^{2}\left | H_{3}\left ( j\omega \right ) \right | ^{2}\\ & & +\\ & & S_{XY}\left ( \omega \right ) \left | H_{3}\left ( j\omega \right ) \right | ^{2}\\ & & \\ & = & 5\left | \frac{\sin \left ( \frac 52\omega \right ) }{\sin \left ( \frac \omega 2\right ) }\right | ^{2}\left | \frac 3{5-4\cos \omega }\right | ^{2}\\ & & +\\ & & 5\left | 2\left ( 1+\cos \omega \right ) \right | ^{2}\left | \frac 3{5-4\cos \omega }\right | ^{2}\\ & & +\\ & & 2\left | \frac 3{5-4\cos \omega }\right | ^{2}\end{array} \]
so\[ S_{U}\left ( \omega \right ) =5\left | \frac{\sin \left ( \frac 52\omega \right ) }{\sin \left ( \frac \omega 2\right ) }\right | ^{2}\left | \frac 3{5-4\cos \omega }\right | ^{2}+5\left | 2\left ( 1+\cos \omega \right ) \right | ^{2}\left | \frac 3{5-4\cos \omega }\right | ^{2}+2\left | \frac 3{5-4\cos \omega }\right | ^{2} \]
since \(5-4\cos \omega \geq 1>0\), the absolute values can be dropped, giving\[ \fbox{$S_{U}\left ( \omega \right ) =\left ( 5\left ( \frac{\sin \left ( \frac 52\omega \right ) }{\sin \left ( \frac \omega 2\right ) }\right ) ^{2}+20\left ( 1+\cos \omega \right ) ^{2}+2\right ) \left ( \frac 3{5-4\cos \omega }\right ) ^{2}$} \]
______part a
let the time average of \(X_{n}\) be \(\widehat{M}\), where \[ \widehat{M}\equiv \frac 1N{\displaystyle \sum \limits _{n=1}^{N}} X_{n} \]
the mean of \(\widehat{M}\) is the ensemble mean of process \(X_{n}\), i.e.\[ E\left [ \widehat{M}\right ] =E\left [ X_{n}\right ] =\mu _{X} \]
so, if the variance of \(\widehat{M}\) is small, then we can say that the time average of R.P. \(X_{n}\) converges to the ensemble average of \(X_{n}\). that is, we say that \(X_{n}\) is ergodic in the mean.
so, the condition I need to look for is to see if the variance of \(\widehat{M}\) goes to zero as \(N\) goes very large.
i.e. if \[ \lim _{N\nearrow \infty }\sigma _{\widehat{M}}^{2}\longrightarrow 0 \]
then \(X_{n}\) is ergodic in the mean.
since \(\widehat{M}\) is a random variable, the convergence above is in the mean square sense.
Now, I find an expression for this condition:
\[\begin{array} [c]{lll}\sigma _{\widehat{M}}^{2} & = & E\left [ \left | \widehat{M}-E\left [ \widehat{M}\right ] \right | ^{2}\right ] \\ & & \\ & = & E\left [ \left | \widehat{M}-\mu _{X}\right | ^{2}\right ] \end{array} \]
but \[\begin{array} [c]{lll}\widehat{M}-\mu _{X} & = & \left ( \frac{1}{N}{\displaystyle \sum \limits _{n=1}^{N}} X_{n}\right ) -\mu _{X}\end{array} \]
but \[\begin{array} [c]{lll}\frac 1N{\displaystyle \sum \limits _{n=1}^{N}} X_{n} & = & \frac 1N\left ( X_{1}+X_{2}+X_{3}+\cdot \cdot \cdot +X_{N}+\left ( N\cdot \mu _{X}-N\cdot \mu _{X}\right ) \right ) \\ & & \\ & = & \frac 1N\left ( \left ( X_{1}-\mu _{X}\right ) +\left ( X_{2}-\mu _{X}\right ) +\cdot \cdot \cdot +\left ( X_{N}-\mu _{X}\right ) +\left ( N\cdot \mu _{X}\right ) \right ) \\ & & \\ & = & \frac 1N\left ( \left ( X_{1}-\mu _{X}\right ) +\left ( X_{2}-\mu _{X}\right ) +\cdot \cdot \cdot +\left ( X_{N}-\mu _{X}\right ) \right ) +\mu _{X}\\ & & \\ & = & \left ( \frac 1N{\displaystyle \sum \limits _{n=1}^{N}} X_{n}-\mu _{X}\right ) +\mu _{X}\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \end{array} \]
so, substituting the above into the expression for \(\widehat{M}-\mu _{X}\), we get:
\[\begin{array} [c]{lll}\widehat{M}-\mu _{X} & = & \left ( \frac 1N{\displaystyle \sum \limits _{n=1}^{N}} X_{n}-\mu _{X}\right ) +\mu _{X}-\mu _{X}=\frac 1N{\displaystyle \sum \limits _{n=1}^{N}} X_{n}-\mu _{X}\end{array} \]
so \[\begin{array} [c]{lll}\sigma _{\widehat{M}}^{2} & = & E\left [ \left | \widehat{M}-\mu _{X}\right | ^{2}\right ] \\ & & \\ & = & E\left [ \left | \frac 1N{\displaystyle \sum \limits _{n=1}^{N}} X_{n}-\mu _{X}\right | ^{2}\right ] \\ & & \\ & = & \frac 1{N^{2}}E\left [ \left |{\displaystyle \sum \limits _{n=1}^{N}} \left ( X_{n}-\mu _{X}\right ) \right | ^{2}\right ] \\ & & \\ & = & \frac 1{N^{2}}E\left [{\displaystyle \sum \limits _{n_{1}=1,n_{2}=1}^{N}} \left ( X_{n_{1}}-\mu _{X}\right ) \left ( X_{n_{2}}-\mu _{X}\right ) ^{*}\right ] \\ & & \\ & = & \frac 1{N^{2}}{\displaystyle \sum \limits _{n_{1},n_{2}=1}^{N}} E\left [ \left ( X_{n_{1}}-\mu _{X}\right ) \left ( X_{n_{2}}-\mu _{X}\right ) ^{*}\right ] \end{array} \]
since \(E\left [ \cdot \right ] \) is linear, it commutes with the finite sum, so:
\[ \sigma _{\widehat{M}}^{2}=\frac 1{N^{2}}{\displaystyle \sum \limits _{n_{1},n_{2}=1}^{N}} K_{X}\left ( n_{1}-n_{2}\right ) \]
since the process is stationary.
so my condition can be stated as \[ \lim _{N\rightarrow \infty }\sigma _{\widehat{M}}^{2}=\lim _{N\rightarrow \infty }\ \frac{1}{N^{2}}{\displaystyle \sum \limits _{\substack{n_{1}=1\\n_{2}=1}}^{N}} K_{X}\left ( n_{1}-n_{2}\right ) \longrightarrow 0\;\;\;\;\;\;\;\;\;\;\text{(3)} \]
so, if the above goes to zero in the limit as indicated, then one can say that \(X_{n}\) is M.S. ergodic in the mean.
This is in addition to the condition stated above, that\[ \fbox{$E\left [ \widehat{M}\right ] \equiv E\left [ \frac 1N{\displaystyle \sum \limits _{n=1}^{N}} X_{n}\right ] =E\left [ X_{n}\right ] $} \]
To simplify the condition in equation (3) above:
I need to find the sum \({\displaystyle \sum \limits _{\substack{n_{1}=1\\n_{2}=1}}^{N}} K_{X}\left [ n_{1}-n_{2}\right ] \)
fix \(n_{2}=1,\)then partial sum = \(K_{X}[1-1]+K_{X}[2-1]+K_{X}\left [ 3-1\right ] +\cdot \cdot \cdot +K_{X}\left [ N-1\right ] \)
fix \(n_{2}=2,\)then partial sum = \(K_{X}[1-2]+K_{X}[2-2]+K_{X}\left [ 3-2\right ] +\cdot \cdot \cdot +K_{X}\left [ N-2\right ] \)
fix \(n_{2}=3,\)then partial sum = \(K_{X}[1-3]+K_{X}[2-3]+K_{X}\left [ 3-3\right ] +\cdot \cdot \cdot +K_{X}\left [ N-3\right ] \)
\(\cdot \cdot \cdot \cdot \)
fix \(n_{2}=N,\)then partial sum = \(K_{X}[1-N]+K_{X}[2-N]+K_{X}\left [ 3-N\right ] +\cdot \cdot \cdot +K_{X}\left [ N-N\right ] \)
so, the above total sum is
\(\left ( K_{X}[0]+K_{X}[1]+K_{X}\left [ 2\right ] +\cdot \cdot \cdot +K_{X}\left [ N-1\right ] \right ) +\left ( K_{X}[-1]+K_{X}[0]+K_{X}\left [ 1\right ] +\cdot \cdot \cdot +K_{X}\left [ N-2\right ] \right ) +...\left ( K_{X}[1-N]+K_{X}[2-N]+K_{X}\left [ 3-N\right ] +\cdot \cdot \cdot +K_{X}\left [ 0\right ] \right ) \)
so\({\displaystyle \sum \limits _{\substack{n_{1}=1\\n_{2}=1}}^{N}} K_{X}\left [ n_{1}-n_{2}\right ] =N\cdot K_{X}\left [ 0\right ] +\left ( N-1\right ) \left ( K_{X}\left [ 1\right ] +K_{X}\left [ -1\right ] \right ) +\left ( N-2\right ) \left ( K_{X}\left [ -2\right ] +K_{X}\left [ 2\right ] \right ) +\left ( N-3\right ) \left ( K_{X}\left [ -3\right ] +K_{X}\left [ 3\right ] \right ) +\cdot \cdot \cdot +\left ( 1\right ) \left ( K_{X}\left [ -\left ( N-1\right ) \right ] +K_{X}\left [ N-1\right ] \right ) \)
so\[ \frac 1{N^{2}}{\displaystyle \sum \limits _{\substack{n_{1}=1 \\n_{2}=1}}^{N}} K_{X}\left [ n_{1}-n_{2}\right ] =\frac 1N{\displaystyle \sum \limits _{n=-N}^{N}} \left ( 1-\frac{\left | n\right | }N\right ) K_{X}\left [ n\right ] \]
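a small numeric sketch confirms this double-sum identity for an arbitrary covariance sequence (here \(K_{X}\left [ n\right ] =0.9^{\left | n\right | }\), my choice):
\begin{verbatim}
# Check: (1/N^2) sum_{n1,n2=1..N} K[n1-n2]
#      = (1/N)  sum_{n=-N..N} (1 - |n|/N) K[n]
N = 50
K = lambda n: 0.9 ** abs(n)

lhs = sum(K(n1 - n2) for n1 in range(1, N + 1)
                     for n2 in range(1, N + 1)) / N**2
rhs = sum((1 - abs(n) / N) * K(n) for n in range(-N, N + 1)) / N

print(lhs, rhs)   # the two values agree
\end{verbatim}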
______part a
\(X\left ( t\right ) \), for \(t>0\), takes on the two values \(\left \{ 1,-1\right \} \), so
\(E\left [ X\left ( t\right ) \right ] =\left ( 1\cdot P\left \{ X\left ( t\right ) =1\right \} +\left ( -1\right ) \cdot P\left \{ X\left ( t\right ) =-1\right \} \right ) =P\left \{ X\left ( t\right ) =1\right \} -P\left \{ X\left ( t\right ) =-1\right \} \text{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (1)}\)
but\[\begin{array} [c]{lll}\text{ $P\left \{ X\left ( t\right ) =1\right \} $} & = & P\left \{ \left ( -1\right ) ^{N\left ( t\right ) }=1\right \} \end{array} \]
but \(P\left \{ \left ( -1\right ) ^{N\left ( t\right ) }=1\right \} \) is the same as the probability that \(N\left ( t\right ) \) takes on an even value, because when \(N\left ( t\right ) \) is even, \(\left ( -1\right ) ^{N\left ( t\right ) }\) has the value 1.
so, \(P\left \{ \left ( -1\right ) ^{N\left ( t\right ) }=1\right \} =P\left \{ N\left ( t\right ) =\text{even values}\right \} \)
but the probability that \(N\left ( t\right ) \) is even \(=P\left \{ N\left ( t\right ) =0\right \} +P\left \{ N\left ( t\right ) =2\right \} +P\left \{ N\left ( t\right ) =4\right \} +\cdot \cdot \cdot \), since the events \(\left \{ N\left ( t\right ) =k\right \} \) are disjoint.
then \(P\left \{ \left ( -1\right ) ^{N\left ( t\right ) }=1\right \} =P\left \{ N\left ( t\right ) =\text{even values}\right \} =P_{t}\left ( 0\right ) +P_{t}\left ( 2\right ) +P_{t}\left ( 4\right ) +\cdot \cdot \cdot =\sum \limits _{n=0}^{\infty }P_{t}\left ( 2n\right ) \)
Similarly,\[\begin{array} [c]{lll}\text{$P\left \{ X\left ( t\right ) =-1\right \} $} & = & P\left \{ \left ( -1\right ) ^{N\left ( t\right ) }=-1\right \} \end{array} \]
again, similar to the above argument, \(P\left \{ \left ( -1\right ) ^{N\left ( t\right ) }=-1\right \} \) is the same as the probability that \(N\left ( t\right ) \) takes on an odd value, because when \(N\left ( t\right ) \) is odd, \(\left ( -1\right ) ^{N\left ( t\right ) }\) has the value -1.
so \(P\left \{ \left ( -1\right ) ^{N\left ( t\right ) }=-1\right \} =P\left \{ N\left ( t\right ) =\text{odd values}\right \} \)
but the probability that \(N\left ( t\right ) \) is odd \(=P\left \{ N\left ( t\right ) =1\right \} +P\left \{ N\left ( t\right ) =3\right \} +P\left \{ N\left ( t\right ) =5\right \} +\cdot \cdot \cdot \)
then \(P\left \{ \left ( -1\right ) ^{N\left ( t\right ) }=-1\right \} =P\left \{ N\left ( t\right ) =\text{odd values}\right \} =P_{t}\left ( 1\right ) +P_{t}\left ( 3\right ) +P_{t}\left ( 5\right ) +\cdot \cdot \cdot =\fbox{$\sum \limits _{n=1}^{\infty }P_{t}\left ( 2n-1\right ) $}\)
so, substituting in equation 1 above, we see
\begin{equation} \label{2}\text{ $E\left [ X\left ( t\right ) \right ] =P\left \{ X\left ( t\right ) =1\right \} -P\left \{ X\left ( t\right ) =-1\right \} =$}\sum \limits _{n=0}^{\infty }P_{t}\left ( 2n\right ) -\sum \limits _{n=1}^{\infty }P_{t}\left ( 2n-1\right ) \end{equation}
but \[\begin{array} [c]{lll}\sum \limits _{n=0}^{\infty }P_{t}\left ( 2n\right ) & = & P_{t}\left ( 0\right ) +P_{t}\left ( 2\right ) +P_{t}\left ( 4\right ) +\cdot \cdot \cdot \\ & & \\ & = & \frac{\left ( \lambda t\right ) ^{0}}{0!}e^{-\lambda t}+\frac{\left ( \lambda t\right ) ^{2}}{2!}e^{-\lambda t}+\frac{\left ( \lambda t\right ) ^{4}}{4!}e^{-\lambda t}+\cdot \cdot \cdot \\ & & \\ & = & e^{-\lambda t}\left ( \frac{\left ( \lambda t\right ) ^{0}}{0!}+\frac{\left ( \lambda t\right ) ^{2}}{2!}+ \frac{\left ( \lambda t\right ) ^{4}}{4!}+\cdot \cdot \cdot \right ) \\ & & \\ & = & e^{-\lambda t}\cosh \lambda t \end{array} \]
and
\[\begin{array} [c]{lll}\sum \limits _{n=1}^{\infty }P_{t}\left ( 2n-1\right ) & = & P_{t}\left ( 1\right ) +P_{t}\left ( 3\right ) +P_{t}\left ( 5\right ) +\cdot \cdot \cdot \\ & & \\ & = & \frac{\left ( \lambda t\right ) ^{1}}{1!}e^{-\lambda t}+\frac{\left ( \lambda t\right ) ^{3}}{3!}e^{-\lambda t}+\frac{\left ( \lambda t\right ) ^{5}}{5!}e^{-\lambda t}+\cdot \cdot \cdot \\ & & \\ & = & e^{-\lambda t}\left ( \frac{\left ( \lambda t\right ) ^{1}}{1!}+\frac{\left ( \lambda t\right ) ^{3}}{3!}+ \frac{\left ( \lambda t\right ) ^{5}}{5!}+\cdot \cdot \cdot \right ) \\ & & \\ & = & e^{-\lambda t}\sinh \lambda t \end{array} \]
so, equation 2 above becomes\[\begin{array} [c]{lll}\mu _{X}\left ( t\right ) & = & \sum \limits _{n=0}^{\infty }P_{t}\left ( 2n\right ) -\sum \limits _{n=1}^{\infty }P_{t}\left ( 2n-1\right ) \\ & & \\ & = & e^{-\lambda t}\cosh \lambda t-e^{-\lambda t}\sinh \lambda t\\ & & \\ & = & e^{-\lambda t}\left ( \cosh \lambda t-\sinh \lambda t\right ) \;\;\;\;\;\;\;\;\text{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (3)}\end{array} \]
now, \(e^{-x}=\cosh x-\sinh x\), so with \(x=\lambda t\)\[ e^{-\lambda t}=\cosh \lambda t-\sinh \lambda t \]
we see immediately that equation (3) becomes\[ \fbox{$\mu _{X}\left ( t\right ) =e^{-\lambda t}e^{-\lambda t}=e^{-2\lambda t}$}\text{\ \ \ \ \ \ \ \ \ \ \ \ \ }t>0 \]
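this boxed mean is easy to confirm by simulation (a minimal sketch; \(\lambda \), \(t\) and the trial count are my choices):
\begin{verbatim}
# Monte Carlo check that E[(-1)^N(t)] = exp(-2 lambda t)
# when N(t) is Poisson with parameter lambda*t.
import numpy as np

rng = np.random.default_rng(2)
lam, t, trials = 1.5, 0.8, 500_000

N = rng.poisson(lam * t, size=trials)   # N(t)
X = (-1.0) ** N                         # X(t) = (-1)^N(t)

print(X.mean(), np.exp(-2 * lam * t))
\end{verbatim}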
first, let \(t_{1}-t_{2}=\tau >0\) . now\[\begin{array} [c]{lll}R_{X}\left ( t_{1},t_{2}\right ) & = & E\left [ X\left ( t_{1}\right ) X\left ( t_{2}\right ) \right ] \\ & & \\ & = & \left ( 1\right ) \cdot P\left \{ X\left ( t_{1}\right ) =1,X\left ( t_{2}\right ) =1\right \} \\ & & +\left ( -1\right ) \cdot P\left \{ X\left ( t_{1}\right ) =-1,X\left ( t_{2}\right ) =1\right \} \\ & & +\left ( -1\right ) \cdot P\left \{ X\left ( t_{1}\right ) =1,X\left ( t_{2}\right ) =-1\right \} \\ & & +\left ( 1\right ) P\left \{ X\left ( t_{1}\right ) =-1,X\left ( t_{2}\right ) =-1\right \} \text{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (5)}\end{array} \]
now,using the relation that \(P\left \{ A\mid B\right \} =\frac{P\left \{ A,B\right \} }{P\left \{ B\right \} }\), then \[\begin{array} [c]{lll}P\left \{ X\left ( t_{1}\right ) =1,X\left ( t_{2}\right ) =1\right \} & = & P\left \{ X\left ( t_{1}\right ) =1\mid X\left ( t_{2}\right ) =1\right \} \cdot P\left \{ X\left ( t_{2}\right ) =1\right \} \\ & & \\ & = & P\left \{ \left ( -1\right ) ^{N\left ( t_{1}\right ) }=1\mid \left ( -1\right ) ^{N\left ( t_{2}\right ) }=1\right \} \cdot P\left \{ \left ( -1\right ) ^{N\left ( t_{2}\right ) }=1\right \} \\ & & \\ & = & P\left \{ N\left ( t_{1}\right ) =even\mid N\left ( t_{2}\right ) =even\right \} \cdot P\left \{ N\left ( t_{2}\right ) =even\right \} \text{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (6)}\end{array} \]
now, when \(X\left ( t_{2}\right ) =1\), for \(X\left ( t_{1}\right ) \) to have the value 1 there must be an even number of points between \(t_{2}\) and \(t_{1}\), where a point is an instant of time at which \(X\left ( t\right ) \) switches between 1 and -1.
so \(P\left \{ X\left ( t_{1}\right ) =1\mid X\left ( t_{2}\right ) =1\right \} =P\left \{ \text{there is an even number of points between }t_{2}\text{ and }t_{1}\right \} \)
but from part a, \(P\left \{ \text{there is an even number of points between 0 and }t\right \} \) is exactly the probability that \(X\left ( t\right ) \) takes the value 1 at time \(t\).
so, now I can say that \(P\left \{ \text{there is even number of points between 0 and }t\right \} =\sum \limits _{n=0}^{\infty }P_{t}\left ( 2n\right ) =e^{-\lambda t}\cosh \lambda t\)
since the Poisson process has stationary and independent increments, the number of points in \(\left ( t_{2},t_{1}\right ] \) is Poisson with parameter \(\lambda \left ( t_{1}-t_{2}\right ) \), independent of \(N\left ( t_{2}\right ) \). so when \(t_{1}-t_{2}=\tau \geq 0\), I can write the above by replacing \(t\) with \(\tau \) as
\(P\left \{ \text{there is an even number of points between }t_{1}\text{ and }t_{2}\right \} =\sum \limits _{n=0}^{\infty }P_{t_{1}-t_{2}}\left ( 2n\right ) =e^{-\lambda \tau }\cosh \lambda \tau \)
in other words, \[ P\left \{ X\left ( t_{1}\right ) =1\mid X\left ( t_{2}\right ) =1\right \} =e^{-\lambda \tau }\cosh \lambda \tau \]
and , from part a, we know that
\(P\left \{ X\left ( t_{2}\right ) =1\right \} =P\left \{ \text{there is even number of points between 0 and }t_{2}\right \} =e^{-\lambda t_{2}}\cosh \lambda t_{2}\)
\[ P\left \{ X\left ( t_{2}\right ) =1\right \} =e^{-\lambda t_{2}}\cosh \lambda t_{2} \]
so, substitute the above 2 relations in equation (6) gives:
\[ \fbox{$P\left \{ X\left ( t_{1}\right ) =1,X\left ( t_{2}\right ) =1\right \} =e^{-\lambda \tau }\cosh \lambda \tau \;e^{-\lambda t_{2}}\cosh \lambda t_{2}$}\ \ \ \ \ \ \ \ \ \ \ \ \ (7) \]
similarly,\[ P\left \{ X\left ( t_{1}\right ) =-1,X\left ( t_{2}\right ) =1\right \} =P\left \{ X\left ( t_{1}\right ) =-1\mid X\left ( t_{2}\right ) =1\right \} \cdot P\left \{ X\left ( t_{2}\right ) =1\right \} \]
but again \(P\left \{ X\left ( t_{1}\right ) =-1\mid X\left ( t_{2}\right ) =1\right \} \equiv P\left \{ \text{there is an odd number of points between }t_{1}\text{ and }t_{2}\right \} \)
but \(P\left \{ \text{there is an odd number of points between 0 and }t\right \} =\sum \limits _{n=1}^{\infty }P_{t}\left ( 2n-1\right ) =e^{-\lambda t}\sinh \lambda t\)
so this means that \(P\left \{ \text{there is an odd number of points between }t_{1}\text{ and }t_{2}\right \} =\sum \limits _{n=1}^{\infty }P_{t_{1}-t_{2}}\left ( 2n-1\right ) =e^{-\lambda \tau }\sinh \lambda \tau \)
and \(P\left \{ X\left ( t_{2}\right ) =1\right \} =P\left \{ \text{there is an even number of points between 0 and }t_{2}\right \} =\sum \limits _{n=0}^{\infty }P_{t_{2}}\left ( 2n\right ) =e^{-\lambda t_{2}}\cosh \lambda t_{2}\)
so,
\(P\left \{ X\left ( t_{1}\right ) =-1,X\left ( t_{2}\right ) =1\right \} =P\left \{ N\left ( t_{1}\right ) =odd\mid N\left ( t_{2}\right ) =even\right \} \cdot P\left \{ N\left ( t_{2}\right ) =even\right \} =e^{-\lambda \tau }\sinh \lambda \tau e^{-\lambda t_{2}}\cosh \lambda t_{2}\)
i.e.\[ \fbox{$P\left \{ X\left ( t_{1}\right ) =-1,X\left ( t_{2}\right ) =1\right \} =e^{-\lambda \tau }\sinh \lambda \tau \;e^{-\lambda t_{2}}\cosh \lambda t_{2}$}\ \ \ \ \ \ \ \ \ \ (8) \]
similarly, i find
\[ \fbox{$P\left \{ X\left ( t_{1}\right ) =1,X\left ( t_{2}\right ) =-1\right \} =e^{-\lambda \tau }\sinh \lambda \tau \;e^{-\lambda t_{2}}\sinh \lambda t_{2}$}\ \ \ \ \ \ (9) \]
and finally
\[ \fbox{$P\left \{ X\left ( t_{1}\right ) =-1,X\left ( t_{2}\right ) =-1\right \} =e^{-\lambda \tau }\cosh \lambda \tau \;e^{-\lambda t_{2}}\sinh \lambda t_{2}$}\ \ \ \ \ (10) \]
so, substituting equations (7), (8), (9) and (10) into equation (5), I get
\[\begin{array} [c]{lll}R_{X}\left ( t_{1},t_{2}\right ) & = & e^{-\lambda \tau }\cosh \lambda \tau \;e^{-\lambda t_{2}}\cosh \lambda t_{2}\\ & & \\ & & -e^{-\lambda \tau }\sinh \lambda \tau e^{-\lambda t_{2}}\cosh \lambda t_{2}\\ & & \\ & & -e^{-\lambda \tau }\sinh \lambda \tau e^{-\lambda t_{2}}\sinh \lambda t_{2}\\ & & \\ & & +e^{-\lambda \tau }\cosh \lambda \tau e^{-\lambda t_{2}}\sinh \lambda t_{2}\end{array} \]
so,
\(R_{X}\left ( t_{1},t_{2}\right ) =e^{-\lambda \tau }e^{-\lambda t_{2}}\left ( \cosh \lambda \tau \cosh \lambda t_{2}-\sinh \lambda \tau \cosh \lambda t_{2}-\sinh \lambda \tau \sinh \lambda t_{2}+\cosh \lambda \tau \sinh \lambda t_{2}\right ) \)
\(\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;=e^{-\lambda \tau }e^{-\lambda t_{2}}\left ( \cosh \lambda \tau \left ( \cosh \lambda t_{2}+\sinh \lambda t_{2}\right ) -\sinh \lambda \tau \left ( \cosh \lambda t_{2}+\sinh \lambda t_{2}\right ) \right ) \;\;\;\;\;\;\;\;\;\)(11)
but,\[\begin{array} [c]{c}e^{x}=\cosh x+\sinh x\\ e^{-x}=\cosh x-\sinh x \end{array} \]
so, equation (11) becomes
\[ R_{X}\left ( t_{1},t_{2}\right ) =e^{-\lambda \tau }e^{-\lambda t_{2}}\left ( \cosh \lambda \tau \left ( e^{\lambda t_{2}}\right ) -\sinh \lambda \tau \left ( e^{\lambda t_{2}}\right ) \right ) =e^{-\lambda \tau }\left ( \cosh \lambda \tau -\sinh \lambda \tau \right ) =e^{-\lambda \tau }e^{-\lambda \tau }=e^{-2\lambda \tau } \]
i.e. for \(t_{1}>t_{2}\geq 0\) and \(\tau =t_{1}-t_{2}\),\[ \fbox{$R_{X}\left ( t_{1},t_{2}\right ) =e^{-2\lambda \left ( t_{1}-t_{2}\right ) }$} \]
similarly, one can let \(t_{2}>t_{1}>0\) and \(\tau =t_{2}-t_{1}\), and that would lead to\[ \fbox{$R_{X}\left ( t_{1},t_{2}\right ) =e^{-2\lambda \left ( t_{2}-t_{1}\right ) }$} \]
so, from the above we see that\[ \fbox{$R_{X}\left ( t_{1},t_{2}\right ) =e^{-2\lambda \left | t_{1}-t_{2}\right | }$\ \ \ \ \ \ \ \ \ \ \ \ \ $t_{1},t_{2}\geq 0$} \]
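the boxed autocorrelation can be checked the same way, using the independent-increments property to build \(N\left ( t_{1}\right ) \) from \(N\left ( t_{2}\right ) \) (a sketch; the numbers are mine):
\begin{verbatim}
# Monte Carlo check that E[X(t1) X(t2)] = exp(-2 lambda (t1 - t2)),
# with N(t1) = N(t2) + M, M ~ Poisson(lambda (t1 - t2)) independent.
import numpy as np

rng = np.random.default_rng(3)
lam, t2, t1, trials = 1.5, 0.5, 1.2, 500_000

N2 = rng.poisson(lam * t2, size=trials)          # N(t2)
M  = rng.poisson(lam * (t1 - t2), size=trials)   # points in (t2, t1]
prod = (-1.0) ** N2 * (-1.0) ** (N2 + M)         # X(t2) X(t1)

print(prod.mean(), np.exp(-2 * lam * (t1 - t2)))
\end{verbatim}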
______part c
since \(\mu _{X}\left ( t\right ) \) is a function of \(t\), \(X\left ( t\right ) \) is a non-stationary process, and \(X\left ( t\right ) \) is M.S. continuous at time \(t\) iff \(R_{X}\left ( t_{1},t_{2}\right ) \) is continuous at the diagonal point \(t_{1}=t_{2}\equiv t.\)
so \(R_{X}\left ( t,t\right ) =e^{-2\lambda \left | t-t\right | }=1\), and since \(\left | t_{1}-t_{2}\right | \) is continuous, \(R_{X}\left ( t_{1},t_{2}\right ) =e^{-2\lambda \left | t_{1}-t_{2}\right | }\) is continuous everywhere, in particular on the diagonal.
so __________________________\(X\left ( t\right ) \) is M.S. continuous.
R.P. \(X\left ( t\right ) \) has a M.S. derivative at time \(t\) iff \(R_{X}\left ( t_{1},t_{2}\right ) \) has a second order mixed derivative at \(t_{1}=t_{2}\equiv t.\)
writing \(R_{X}\left ( t_{1},t_{2}\right ) =e^{2\lambda \left ( t_{1}-t_{2}\right ) }u\left ( -\left ( t_{1}-t_{2}\right ) \right ) +e^{-2\lambda \left ( t_{1}-t_{2}\right ) }u\left ( t_{1}-t_{2}\right ) \), then away from the line \(t_{1}=t_{2}\)\[ \frac{\partial R_{X}\left ( t_{1},t_{2}\right ) }{\partial t_{1}}=\left \{ \begin{array} [c]{ll}2\lambda e^{2\lambda \left ( t_{1}-t_{2}\right ) } & t_{2}>t_{1}\\ & \\ -2\lambda e^{-2\lambda \left ( t_{1}-t_{2}\right ) } & t_{1}>t_{2}\end{array} \right . \]
as \(t_{1}-t_{2}\rightarrow 0\) this first derivative approaches \(2\lambda \) from one side and \(-2\lambda \) from the other: \(R_{X}\) has a corner along the line \(t_{1}=t_{2}\) (equivalently, \(e^{-2\lambda \left | \tau \right | }\) is not differentiable at \(\tau =0\)), so the mixed derivative does not exist there. the same thing can be seen from the difference quotient:\[ E\left [ \left ( \frac{X\left ( t+h\right ) -X\left ( t\right ) }h\right ) ^{2}\right ] =\frac{2\left ( R_{X}\left ( 0\right ) -R_{X}\left ( h\right ) \right ) }{h^{2}}=\frac{2\left ( 1-e^{-2\lambda h}\right ) }{h^{2}}\approx \frac{4\lambda }h\longrightarrow \infty \;\;\;\text{as }h\searrow 0 \]
so, the limit does not exist, and \(X\left ( t\right ) \) is NOT M.S. differentiable.
\(X\left ( t\right ) \) being an uncorrelated process means \(K_{X}\left ( t_{1},t_{2}\right ) =0\) for \(t_{1}\neq t_{2}\); since the mean is zero, this is the same as \(R_{X}\left ( t_{1},t_{2}\right ) =0\) for \(t_{1}\neq t_{2}\), in other words, \(R_{X}\left ( \tau \right ) =0\) for \(\tau \neq 0.\)
also note that \(X\left ( t\right ) \) and \(N\left ( t\right ) \) are orthogonal since they are uncorrelated with zero-mean.
\[ K_{X}\left ( t_{1},t_{2}\right ) =\sigma _{X}^{2}\left ( t_{1}\right ) \;\delta \left ( t_{1}-t_{2}\right ) =e^{-\left | t_{1}\right | }\;\delta \left ( t_{1}-t_{2}\right ) \]
so, since \(X\left ( t\right ) \) is a zero-mean process, then\[ R_{X}\left ( t_{1},t_{2}\right ) =e^{-\left | t_{1}\right | }\;\delta \left ( t_{1}-t_{2}\right ) \]
let\[ h\left ( t\right ) =h_{1}\left ( t\right ) *h_{2}\left ( t\right ) \]
in what follows, \(L_{i}\) means the time variable of the operator \(L\) is \(t_{i}\), and \(L^{*}\) is the adjoint operator whose impulse response is \(h^{*}\left ( t,\tau \right ) .\)\[\begin{array} [c]{lll}h\left ( t\right ) & = & h_{1}\left ( t\right ) *h_{2}\left ( t\right ) \\ & & \\ H\left ( \omega \right ) & = & H_{1}\left ( \omega \right ) \;H_{2}\left ( \omega \right ) \\ & & \\ & = & \frac 1{1+j\omega }\frac 2{2+j\omega }\\ & & \\ & = & \frac 2{1+j\omega }-\frac 2{2+j\omega }\\ & & \\ h\left ( t\right ) & = & F^{-1}\left \{ \frac 2{1+j\omega }-\frac 2{2+j\omega }\right \} \end{array} \]
so\[ \fbox{$h\left ( t\right ) =2\left ( e^{-t}-e^{-2t}\right ) u\left ( t\right ) $} \]
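as a quick check, the same \(h\left ( t\right ) \) comes out of the time-domain convolution of \(h_{1}\left ( t\right ) =e^{-t}u\left ( t\right ) \) with \(h_{2}\left ( t\right ) =2e^{-2t}u\left ( t\right ) \):\[ h\left ( t\right ) =\int _{0}^{t}e^{-\left ( t-s\right ) }\;2e^{-2s}\;ds=2e^{-t}\int _{0}^{t}e^{-s}\;ds=2e^{-t}\left ( 1-e^{-t}\right ) =2\left ( e^{-t}-e^{-2t}\right ) ,\;\;\;\;t>0 \]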
so\[\begin{array} [c]{lll}R_{XY}\left ( t_{1},t_{2}\right ) & = & L_{2}^{*}\left \{ R_{X}\left ( t_{1},t_{2}\right ) \right \} \\ & & \\ & = & \int \limits _{-\infty }^{\infty }h^{*}\left ( \alpha \right ) R_{X}\left ( t_{1};t_{2}-\alpha \right ) \;d\alpha \\ & & \\ & = & \int \limits _{-\infty }^{\infty }2\left ( e^{-\alpha }-e^{-2\alpha }\right ) u\left ( \alpha \right ) e^{-\left | t_{1}\right | }\;\delta \left ( t_{1}-\left ( t_{2}-\alpha \right ) \right ) \;d\alpha \\ & & \\ & = & \int \limits _{0}^{\infty }2\left ( e^{-\alpha }-e^{-2\alpha }\right ) e^{-\left | t_{1}\right | }\;\delta \left ( t_{1}-\left ( t_{2}-\alpha \right ) \right ) \;d\alpha \end{array} \]
when \(t_{2}-\alpha =t_{1}\Longrightarrow \alpha =t_{2}-t_{1}\;>0\;\;\;\)then the above integral has a value of \[ R_{XY}\left ( t_{1},t_{2}\right ) =2\left ( e^{-\left ( t_{2}-t_{1}\right ) }-e^{-2\left ( t_{2}-t_{1}\right ) }\right ) e^{-\left | t_{1}\right | }u\left ( t_{2}-t_{1}\right ) \]
or\[ \fbox{$R_{XY}\left ( t_{1},t_{2}\right ) =2\left ( e^{-\left ( t_{2}-t_{1}\right ) }-e^{-2\left ( t_{2}-t_{1}\right ) }\right ) e^{-\left | t_{1}\right | }\;u\left ( t_{2}-t_{1}\right ) $} \]
now, I find \(R_{_{1}YY}\) from the contribution of \(R_{XY}\) and \(R_{_{2}YY}\) from the contribution of \(R_{NY}\), and add them to get the final \(R_{YY}=\) \(R_{_{1}YY}+R_{_{2}YY}\) (since \(N\perp X\))
now, find the contribution due to \(R_{XY}\):
\[\begin{array} [c]{lll}R_{_{1}YY}\left ( t_{1},t_{2}\right ) & = & L_{1}\left \{ R_{XY}\left ( t_{1},t_{2}\right ) \right \} \\ & & \\ & = & \int \limits _{-\infty }^{\infty }h\left ( \alpha \right ) R_{XY}\left ( t_{1}-\alpha ;t_{2}\right ) \;d\alpha \\ & & \\ & = & \int \limits _{-\infty }^{\infty }2\left ( e^{-\alpha }-e^{-2\alpha }\right ) u\left ( \alpha \right ) \;\left [ 2\left ( e^{-\left ( t_{2}-\left ( t_{1}-\alpha \right ) \right ) }-e^{-2\left ( t_{2}-\left ( t_{1}-\alpha \right ) \right ) }\right ) e^{-\left | t_{1}-\alpha \right | }u\left ( t_{2}-\left ( t_{1}-\alpha \right ) \right ) \right ] \;d\alpha \end{array} \]
the integrand is nonzero only for \(\alpha >0\) (because of the factor \(u\left ( \alpha \right ) \)), so
\[\begin{array} [c]{lll}R_{_{1}YY}\left ( t_{1},t_{2}\right ) & = & \int \limits _{0}^{\infty }2\left ( e^{-\alpha }-e^{-2\alpha }\right ) \;\left [ 2\left ( e^{-\left ( t_{2}-\left ( t_{1}-\alpha \right ) \right ) }-e^{-2\left ( t_{2}-\left ( t_{1}-\alpha \right ) \right ) }\right ) e^{-\left | t_{1}-\alpha \right | }u\left ( t_{2}-\left ( t_{1}-\alpha \right ) \right ) \right ] \;d\alpha \end{array} \]
now, the remaining step function is nonzero when \(t_{2}-\left ( t_{1}-\alpha \right ) >0\), i.e. \(\alpha >t_{1}-t_{2}\); for the case \(t_{2}<t_{1}\) evaluated below, this is carried along as the factor \(u\left ( t_{1}-t_{2}\right ) \)
then \begin{equation} \label{1}\begin{array} [c]{lll}R_{_{1}YY}\left ( t_{1},t_{2}\right ) & = & \int \limits _{0}^{\infty }2\left ( e^{-\alpha }-e^{-2\alpha }\right ) \;\left [ 2\left ( e^{-\left ( t_{2}-\left ( t_{1}-\alpha \right ) \right ) }-e^{-2\left ( t_{2}-\left ( t_{1}-\alpha \right ) \right ) }\right ) e^{-\left | t_{1}-\alpha \right | }u\left ( t_{1}-t_{2}\right ) \right ] \;d\alpha \end{array} \end{equation}
now \[ e^{-\left | t_{1}-\alpha \right | }=e^{t_{1}-\alpha }u\left ( -t_{1}+\alpha \right ) +e^{-t_{1}+\alpha }u\left ( t_{1}-\alpha \right ) \]
so if \(t_{1}<0\) then, since \(\alpha >0\) then\[ \int \limits _{0}^{\infty }e^{-\left | t_{1}-\alpha \right | }d\alpha =\int \limits _{0}^{\infty }e^{t_{1}-\alpha }d\alpha \]
and, when \(t_{1}>0\)
\[ \int \limits _{0}^{\infty }e^{-\left | t_{1}-\alpha \right | }d\alpha =\int \limits _{0}^{t_{1}}e^{-t_{1}+\alpha }d\alpha +\int \limits _{t_{1}}^{\infty }e^{t_{1}-\alpha }d\alpha \]
so, equation (1) can be written in 2 parts as
when \(t_{2}<t_{1}\) and \(t_{1}<0\) then
\[\begin{array} [c]{lll}R_{_{1}YY}\left ( t_{1},t_{2}\right ) & = & \int \limits _{0}^{\infty }2\left ( e^{-\alpha }-e^{-2\alpha }\right ) \;\left [ 2\left ( e^{-\left ( t_{2}-\left ( t_{1}-\alpha \right ) \right ) }-e^{-2\left ( t_{2}-\left ( t_{1}-\alpha \right ) \right ) }\right ) e^{t_{1}-\alpha }\right ] \;d\alpha \\ & & \\ R_{_{1}YY}\left ( t_{1},t_{2}\right ) & = & \fbox{$\frac 13e^{2t_{1}-t_{2}}-\frac 15e^{3t_{1}-2t_{2}}$}\end{array} \]
when \(t_{2}<t_{1}\) and \(t_{1}>0\) then \begin{equation} \label{3}\begin{array} [c]{lll}R_{_{1}YY}\left ( t_{1},t_{2}\right ) & = & \int \limits _{0}^{t_{1}}2\left ( e^{-\alpha }-e^{-2\alpha }\right ) \;\left [ 2\left ( e^{-\left ( t_{2}-\left ( t_{1}-\alpha \right ) \right ) }-e^{-2\left ( t_{2}-\left ( t_{1}-\alpha \right ) \right ) }\right ) e^{-t_{1}+\alpha }\right ] \;d\alpha \\ & & \\ & & +\int \limits _{t_{1}}^{\infty }2\left ( e^{-\alpha }-e^{-2\alpha }\right ) \;\left [ 2\left ( e^{-\left ( t_{2}-\left ( t_{1}-\alpha \right ) \right ) }-e^{-2\left ( t_{2}-\left ( t_{1}-\alpha \right ) \right ) }\right ) e^{t_{1}-\alpha }\right ] d\alpha \\ & & \\ & = & \fbox{$-\frac 83e^{-t_{1}-t_{2}}+e^{-t_{1}-2t_{2}}+e^{-2t_{1}-t_{2}}-\frac 8{15}e^{-2t_{1}-2t_{2}}+2e^{-t_{2}}-\frac 23e^{t_{1}-2t_{2}}$}\end{array} \end{equation}
so, combining the above 2 boxed expressions, we get for \(t_{2}<t_{1}\)\[\begin{array} [c]{lll}R_{_{1}YY}\left ( t_{1},t_{2}\right ) & = & \left ( \frac 13e^{2t_{1}-t_{2}}-\frac 15e^{3t_{1}-2t_{2}}\right ) u\left ( -t_{1}\right ) \\ & & \\ & & +\left ( -\frac 83e^{-t_{1}-t_{2}}+e^{-t_{1}-2t_{2}}+e^{-2t_{1}-t_{2}}-\frac 8{15}e^{-2t_{1}-2t_{2}}+2e^{-t_{2}}-\frac 23e^{t_{1}-2t_{2}}\right ) u\left ( t_{1}\right ) \end{array} \]
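the two boxed \(R_{_{1}YY}\) expressions can be spot-checked by direct numerical quadrature of equation (1) (a scipy sketch; the test points, both with \(t_{2}<t_{1}\), are arbitrary choices of mine):
\begin{verbatim}
# Quadrature check of the two closed forms for R_1YY(t1, t2), t2 < t1.
import numpy as np
from scipy.integrate import quad

def integrand(a, t1, t2):
    g = np.exp(-(t2 - t1 + a)) - np.exp(-2 * (t2 - t1 + a))
    return 2 * (np.exp(-a) - np.exp(-2 * a)) * 2 * g * np.exp(-abs(t1 - a))

def closed_form(t1, t2):
    if t1 < 0:                      # case t2 < t1 < 0
        return np.exp(2*t1 - t2)/3 - np.exp(3*t1 - 2*t2)/5
    return (-8/3*np.exp(-t1 - t2) + np.exp(-t1 - 2*t2)
            + np.exp(-2*t1 - t2) - 8/15*np.exp(-2*t1 - 2*t2)
            + 2*np.exp(-t2) - 2/3*np.exp(t1 - 2*t2))

for t1, t2 in [(-0.5, -1.0), (1.0, 0.2)]:
    val, _ = quad(integrand, 0, np.inf, args=(t1, t2))
    print(val, closed_form(t1, t2))
\end{verbatim}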
For white noise, \[ S_{N}\left ( \omega \right ) =\sigma _{N}^{2}=5 \]
so\[ S_{NY}\left ( \omega \right ) =S_{N}\left ( \omega \right ) H_{2}^{*}\left ( j\omega \right ) =5\frac 2{2+j\omega }=\frac{10}{2+j\omega } \]
so\[ R_{NY}\left ( \tau \right ) =F^{-1}\left \{ S_{NY}\left ( \omega \right ) \right \} =F^{-1}\left \{ \frac{10}{2+j\omega }\right \} =10e^{-2\tau }\;u\left ( \tau \right ) \]
or we can write this by saying \(\tau =t_{1}-t_{2}\) then
\[ \fbox{$R_{NY}\left ( t_{1}-t_{2}\right ) =10e^{-2\left ( t_{1}-t_{2}\right ) }u\left ( t_{1}-t_{2}\right ) $} \]
Now,\[ S_{_{2}YY}\left ( \omega \right ) =S_{N}\left ( \omega \right ) \left | H_{2}\left ( j\omega \right ) \right | ^{2} \]
but\[ H_{2}\left ( j\omega \right ) =\frac 2{2+j\omega } \]
so\[ \left | H_{2}\left ( j\omega \right ) \right | ^{2}=\frac 4{4+\omega ^{2}} \]
so\[ S_{_{2}YY}\left ( \omega \right ) =5\frac 4{4+\omega ^{2}} \]
so\[ R_{_{2}YY}\left ( \tau \right ) =F^{-1}\left \{ \frac{20}{4+\omega ^{2}}\right \} =5e^{-2\left | \tau \right | } \]
so, for \(t_{1}>t_{2}\), combine all results from part a and part b to get \[\begin{array} [c]{lll}R_{YY}\left ( t_{1},t_{2}\right ) & = & R_{_{1}YY}\left ( t_{1},t_{2}\right ) +R_{_{2}YY}\left ( t_{1},t_{2}\right ) \\ & & \\ & = & \left ( \frac 13e^{2t_{1}-t_{2}}-\frac 15e^{3t_{1}-2t_{2}}\right ) u\left ( -t_{1}\right ) u\left ( t_{1}-t_{2}\right ) \\ & & \\ & & +\left ( -\frac 83e^{-t_{1}-t_{2}}+e^{-t_{1}-2t_{2}}+e^{-2t_{1}-t_{2}}-\frac 8{15}e^{-2t_{1}-2t_{2}}+2e^{-t_{2}}-\frac 23e^{t_{1}-2t_{2}}\right ) u\left ( t_{1}\right ) u\left ( t_{1}-t_{2}\right ) \\ & & \\ & & +5e^{-2\left | t_{1}-t_{2}\right | }\end{array} \]
since
\[ K_{X}\left ( t_{1},t_{2}\right ) =R_{X}\left ( t_{1},t_{2}\right ) -\mu _{X}\mu _{X}^{*} \]
then\[\begin{array} [c]{lll}R_{X}\left ( t_{1},t_{2}\right ) & = & 25e^{-\left | t_{1}-t_{2}\right | }+36\\ & & \\ R_{X}\left ( \tau =t_{1}-t_{2}\right ) & = & 25e^{-\left | \tau \right | }+36 \end{array} \]
\(\bullet \)\(X\left ( t\right ) \) is strict sense stationary:
since the autocorrelation function \(R_{X}\left ( t_{1},t_{2}\right ) \) is a function of \(\left ( t_{1}-t_{2}\right ) \) only and the mean is constant, \(X\left ( t\right ) \) is a WSS process. However, to decide if it is an SSS process, I need to determine whether \(X\left ( t+T\right ) \) has the same density function as \(X\left ( t\right ) \) for every order, and this is not known from the given information. so ___________________________________________it cannot be concluded that \(X\left ( t\right ) \) is an SSS process based on what is given.
\(\bullet \)\(X\left ( t\right ) \) has a total average DC power of 36 watt:
find the power spectral density:\[\begin{array} [c]{lll}S_{X}\left ( \omega \right ) & = & \int \limits _{-\infty }^{\infty }R_{X}\left ( \tau \right ) e^{-j\omega \tau }d\tau \\ & & \\ & = & \int \limits _{-\infty }^{\infty }\left ( 25e^{-\left | \tau \right | }+36\right ) e^{-j\omega \tau }\;d\tau \\ & & \\ & = & 36\int \limits _{-\infty }^{\infty }e^{-j\omega \tau }d\tau +25\int \limits _{-\infty }^{\infty }e^{-\left | \tau \right | }e^{-j\omega \tau }\;d\tau \\ & & \\ & = & 36\cdot 2\pi \delta \left ( \omega \right ) +50\frac 1{1+\omega ^{2}}\end{array} \]
the DC power is the power carried by the impulse at \(\omega =0\): \(\frac 1{2\pi }\int \limits _{-\infty }^{\infty }36\cdot 2\pi \delta \left ( \omega \right ) \;d\omega =36\) watt, which is just \(\mu _{X}^{2}=6^{2}.\)
so, ______________________________________________________________________________________________the statement that \(X\left ( t\right ) \) has a total average DC power of 36 watt IS true.
\(\bullet X\left ( t\right ) \) is M.S. ergodic in the mean:
a stationary R.P. is M.S. ergodic in the mean iff\[ \lim _{T\nearrow \infty }\;\frac 1{2T}\int \limits _{-2T}^{2T}\left ( 1-\frac{\left | \tau \right | }{2T}\right ) K_{X}\left ( \tau \right ) \;d\tau \longrightarrow 0 \]
The Fourier transform of the triangular pulse \(\left ( 1-\frac{\left | \tau \right | }{2T}\right ) \) is \(2T\left ( \frac{\sin 2\pi fT}{2\pi fT}\right ) ^{2}\), so using Parseval’s theorem\[\begin{array} [c]{lll}\sigma _{M}^{2} & = & \frac 1{2T}\int \limits _{-2T}^{2T}\left ( 1- \frac{\left | \tau \right | }{2T}\right ) K_{X}\left ( \tau \right ) \;d\tau \\ & & \\ & = & \frac 1{2T}\int \limits _{-2T}^{2T}\left ( 1- \frac{\left | \tau \right | }{2T}\right ) 25e^{-\left | \tau \right | }\;d\tau \\ & & \\ & = & \frac 1{2T}\int \limits _{-\infty }^{\infty }2T\left ( \frac{\sin 2\pi fT}{2\pi fT}\right ) ^{2}50\frac 1{1+\left ( 2\pi f\right ) ^{2}}\;df\\ & & \\ & = & 50\int \limits _{-\infty }^{\infty }\left ( \frac{\sin 2\pi fT}{2\pi fT}\right ) ^{2}\frac 1{1+\left ( 2\pi f\right ) ^{2}}\;df\\ & & \\ \lim _{T\nearrow \infty }\sigma _{M}^{2} & = & 50\int \limits _{-\infty }^{\infty }\lim _{T\nearrow \infty }\left ( \frac{\sin 2\pi fT}{2\pi fT}\right ) ^{2}\frac 1{1+\left ( 2\pi f\right ) ^{2}}\;df\\ & & \\ & = & 50\int \limits _{-\infty }^{\infty }0\cdot df=0 \end{array} \]
(the interchange of limit and integral is justified since the integrand is dominated by \(\frac 1{1+\left ( 2\pi f\right ) ^{2}}\))
so ______________________________\(X\left ( t\right ) \) IS M.S. ergodic in the mean.
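the rate at which this variance dies out can also be seen numerically (a sketch; the \(T\) values are my choices):
\begin{verbatim}
# sigma_M^2(T) = (1/2T) int_{-2T}^{2T} (1 - |tau|/2T) 25 e^{-|tau|} dtau,
# evaluated by the trapezoid rule; for large T it decays roughly like 25/T.
import numpy as np

def sigma_M2(T, npts=400_001):
    tau = np.linspace(-2 * T, 2 * T, npts)
    f = (1 - np.abs(tau) / (2 * T)) * 25 * np.exp(-np.abs(tau))
    return ((f[:-1] + f[1:]) / 2 * np.diff(tau)).sum() / (2 * T)

for T in (1, 10, 100, 1000):
    print(T, sigma_M2(T))   # -> 0 as T grows
\end{verbatim}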
\(\bullet X\left ( t\right ) \) has a periodic component:
A WSS process is a wide sense periodic if \[ \mu _{X}\left ( t\right ) =\mu _{X}\left ( t+T\right ) \;\;\;\forall t \]
and the auto-covariance \(K_{X}\left ( t_{1},t_{2}\right ) \) is periodic.
the second condition above fails (\(K_{X}\left ( \tau \right ) =25e^{-\left | \tau \right | }\) is not periodic), so this is not a wide sense periodic process. This also implies it is not M.S. periodic, since M.S. periodicity is stronger than WS periodicity.
However, the question asks if at least one component of \(X\left ( t\right ) \) is periodic, not if the process itself is periodic. It is possible for \(X\left ( t\right ) \) to have a periodic component without \(X\left ( t\right ) \) being periodic.
________________________________________________________________so I can't say with certainty whether \(X\left ( t\right ) \) has a periodic component.
\(\bullet X\left ( t\right ) \) has an AC power of 61 Watt:
total power \(=\frac 1{2\pi }\int \limits _{-\infty }^{\infty }S_{X}\left ( \omega \right ) \;d\omega =\frac 1{2\pi }\left ( \int \limits _{-\infty }^{\infty }36\cdot 2\pi \delta \left ( \omega \right ) \;d\omega +50\int \limits _{-\infty }^{\infty }\frac 1{1+\omega ^{2}}\;d\omega \right ) =36+\frac{50\pi }{2\pi }=36+25=61\) watt, which agrees with \(R_{X}\left ( 0\right ) =25+36=61\)
but the DC power was found to be 36 watt, so AC power \(=61-36=25\) watt; 61 watt is the total power, not the AC power
so\(\;\)____________________________________________________\(X\left ( t\right ) \) does NOT have an AC power of 61 Watt .
\(\bullet X\left ( t\right ) \) has a variance of 25:
Variance \(=\sigma _{X}^{2}\left ( t\right ) =K_{X}\left ( t,t\right ) =K_{X}\left ( 0\right ) =25e^{0}=25\)
so_____________________________________\(X\left ( t\right ) \) has a variance of 25 is True .
\[ R_{X}\left ( \tau \right ) =3+2\exp \left ( -4\tau ^{2}\right ) \]
______part a
the power spectral density \(S_{X}\left ( \omega \right ) \) is\[ S_{X}\left ( \omega \right ) =F\left \{ R_{X}\left ( \tau \right ) \right \} =\int _{-\infty }^{\infty }R_{X}\left ( \tau \right ) \exp (-j\omega \tau )\;d\tau \]
so\[\begin{array} [c]{lll}S_{X}\left ( \omega \right ) & = & \int \limits _{-\infty }^{\infty }\left ( 3+2\exp \left ( -4\tau ^{2}\right ) \right ) \exp (-j\omega \tau )\;d\tau \\ & & \\ & = & \int \limits _{-\infty }^{\infty }3\exp (-j\omega \tau )\;d\tau +2\int \limits _{-\infty }^{\infty }\exp \left ( -4\tau ^{2}\right ) \exp (-j\omega \tau )\;d\tau \\ & & \\ & = & 6\pi \;\delta \left ( \omega \right ) +\sqrt{\pi }\exp \left ( -\frac{\omega ^{2}}{16}\right ) \end{array} \]
so\[ \fbox{$S_{X}\left ( \omega \right ) =6\pi \;\delta \left ( \omega \right ) +\sqrt{\pi }\exp \left ( -\frac{\omega ^{2}}{16}\right ) $} \]
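the Gaussian part of this transform can be spot-checked numerically (a scipy sketch; the sample frequencies are arbitrary):
\begin{verbatim}
# F{2 exp(-4 tau^2)} should equal sqrt(pi) exp(-omega^2 / 16).
import numpy as np
from scipy.integrate import quad

def S_cont(w):
    # the integrand is even, so the transform reduces to a cosine integral
    val, _ = quad(lambda t: 2 * np.exp(-4 * t**2) * np.cos(w * t),
                  -np.inf, np.inf)
    return val

for w in (0.0, 1.0, 4.0):
    print(S_cont(w), np.sqrt(np.pi) * np.exp(-w**2 / 16))
\end{verbatim}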
\[\begin{array} [c]{lll}\text{total power} & = & \frac 1{2\pi }\int \limits _{-\infty }^{\infty }S_{X}\left ( \omega \right ) \;d\omega \\ & & \\ & = & \frac 1{2\pi }\int \limits _{-\infty }^{\infty }\left ( 6\pi \;\delta \left ( \omega \right ) + \sqrt{\pi }\exp \left ( -\frac{\omega ^{2}}{16}\right ) \right ) d\omega \\ & & \\ & = & \frac 1{2\pi }\left ( 6\pi \right ) +\frac 1{2\pi }\int \limits _{-\infty }^{\infty }\sqrt{\pi }\exp \left ( -\frac{\omega ^{2}}{16}\right ) d\omega \\ & & \\ & = & 3+\frac{4\pi }{2\pi }=3+2 \end{array} \]
\[ \fbox{total power$\;=5$ watt$\;=R_{X}\left ( 0\right ) $} \]
now, the power between \(\frac{-1}{\sqrt{\pi }}\) and \(\frac 1{\sqrt{\pi }}\), call it \(p_{1}\), is given by\[\begin{array} [c]{lll}p_{1} & = & \frac 1{2\pi }\int \limits _{-\frac 1{\sqrt{\pi }}}^{\frac 1{\sqrt{\pi }}}S_{X}\left ( \omega \right ) \;d\omega \\ & & \\ & = & \frac 1{2\pi }\int \limits _{-\frac 1{\sqrt{\pi }}}^{\frac 1{\sqrt{\pi }}}6\pi \;\delta \left ( \omega \right ) \;d\omega +\frac 1{2\pi }\int \limits _{-\frac 1{\sqrt{\pi }}}^{\frac 1{\sqrt{\pi }}}\sqrt{\pi }\exp \left ( -\frac{\omega ^{2}}{16}\right ) d\omega \\ & & \\ & = & 3+2\;\text{erf}\left ( \frac 1{4\sqrt{\pi }}\right ) \end{array} \]
where \[ \text{erf}\left ( x\right ) =\frac 2{\sqrt{\pi }}\int _{0}^{x}\exp \left ( -t^{2}\right ) \;dt \]
so, erf\(\left ( \frac 1{4\sqrt{\pi }}\right ) =\)erf\(\left ( 0.141\right ) =0.158\)\[ p_{1}=3+2\left ( 0.158\right ) =3.316\text{\ \ \ \ \ Watt} \]
so the fraction of the total power is\[ \frac{p_{1}}{\text{total power }}=\frac{3.316}{5}=0.663\Longrightarrow \fbox{\%\ 66.3} \]