Exercises

ex-ch07-01

Easy

Let $\theta \sim \text{Uniform}(-a, a)$ for some $a > 0$ and $Y = \theta$ (noiseless observation). Compute $\hat\theta_{\text{MMSE}}(y)$, $\hat\theta_{\text{MAP}}(y)$, and the MMSE.

ex-ch07-02

Easy

Show that if $\hat\theta(Y)$ satisfies $\mathbb{E}[(\theta - \hat\theta(Y))\phi(Y)] = 0$ for every bounded measurable $\phi$, then $\hat\theta(Y) = \mathbb{E}[\theta \mid Y]$ almost surely.

ex-ch07-03

Easy

Let $\theta \sim \text{Exponential}(\lambda)$ with density $f_\theta(\theta) = \lambda e^{-\lambda\theta}$ for $\theta \geq 0$. Given $N$ iid observations $Y_i \mid \theta \sim \text{Poisson}(\theta)$, compute the posterior density and the MMSE estimator.
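A derived posterior and MMSE estimator can be sanity-checked numerically by evaluating the unnormalized log posterior on a grid and taking the weighted mean. The sketch below does this for the Exponential-prior/Poisson-likelihood model; the values of `lam`, `N`, and the simulated data are illustrative choices, not part of the exercise.

```python
import numpy as np

# Grid-based check for the Exponential(lambda) prior / Poisson likelihood model:
# evaluate the (unnormalized) log posterior and compute the posterior mean,
# so a closed-form MMSE estimator can be compared against it.
rng = np.random.default_rng(0)
lam, N = 2.0, 10                      # illustrative prior rate and sample size
theta_true = rng.exponential(1.0 / lam)
y = rng.poisson(theta_true, size=N)

grid = np.linspace(1e-6, 20.0, 200_000)
log_post = -lam * grid + y.sum() * np.log(grid) - N * grid  # log prior + log likelihood (+ const)
w = np.exp(log_post - log_post.max())
w /= w.sum()
print("numerical posterior mean:", (grid * w).sum())
```

The printed value should match the closed-form posterior mean you derive, up to grid-discretization error.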

ex-ch07-04

Medium

Derive the LMMSE estimator directly from the orthogonality principle, without using the completing-the-square argument of the theorem "The LMMSE Formula".

ex-ch07-05

Medium

Let $(\theta, Y)$ be a zero-mean jointly Gaussian pair with variances $\sigma_\theta^2, \sigma_y^2$ and correlation coefficient $\rho$. Compute $\hat\theta_{\text{MMSE}}(y)$, the MMSE, and the conditional variance $\text{Var}(\theta \mid Y = y)$.
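The conditional mean of a jointly Gaussian pair can be estimated empirically without knowing its closed form: simulate the pair and average $\theta$ over samples whose $Y$ falls in a narrow window around a chosen $y_0$. The parameter values below are illustrative.

```python
import numpy as np

# Empirical conditional mean for a zero-mean jointly Gaussian pair:
# average theta over samples with Y close to y0, then compare the printed
# value with the closed-form answer.
rng = np.random.default_rng(5)
s_t, s_y, rho, y0 = 1.5, 2.0, 0.6, 1.0   # illustrative parameters
z = rng.standard_normal((2, 2_000_000))
theta = s_t * z[0]
Y = s_y * (rho * z[0] + np.sqrt(1 - rho**2) * z[1])  # Corr(theta, Y) = rho

mask = np.abs(Y - y0) < 0.05             # narrow conditioning window
print("empirical E[theta | Y ~ y0]:", theta[mask].mean())
```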

ex-ch07-06

Medium

Consider $Y = \theta + W$ with $\theta \sim \mathcal{N}(0, \sigma_\theta^2)$ and $W \sim \text{Laplace}(0, b)$ (density $\frac{1}{2b} e^{-|w|/b}$) independent of $\theta$. Compute $\hat\theta_{\text{MAP}}(y)$ in closed form. Is $\hat\theta_{\text{MMSE}}(y)$ still affine in $y$?
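A closed-form MAP answer for this model is easy to check by brute force: maximize the log posterior on a fine grid. The values of $\sigma_\theta^2$, $b$, and $y$ below are illustrative.

```python
import numpy as np

# Grid search for the MAP estimate under a Gaussian prior and Laplace noise,
# usable as a check on a closed-form answer.
s_t2, b, y = 1.0, 0.5, 3.0            # illustrative prior variance, scale, observation
grid = np.linspace(-10.0, 10.0, 200_001)
log_post = -grid**2 / (2 * s_t2) - np.abs(y - grid) / b  # log prior + log likelihood (+ const)
print("numerical MAP:", grid[np.argmax(log_post)])
```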

ex-ch07-07

Medium

Let $\boldsymbol\theta \sim \mathcal{N}(\mathbf{0}, \mathbf{I}_n)$ and $\mathbf{Y} = \mathbf{H}\boldsymbol\theta + \mathbf{W}$ with $\mathbf{W} \sim \mathcal{N}(\mathbf{0}, \sigma_w^2 \mathbf{I}_m)$ independent. Show that the LMMSE estimator can be written as $\hat{\boldsymbol\theta}_{\text{LMMSE}} = (\mathbf{H}^\top\mathbf{H} + \sigma_w^2\mathbf{I})^{-1}\mathbf{H}^\top\mathbf{Y}$.
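The algebraic core of this exercise is the "push-through" identity $(\mathbf{H}^\top\mathbf{H} + \sigma_w^2\mathbf{I})^{-1}\mathbf{H}^\top = \mathbf{H}^\top(\mathbf{H}\mathbf{H}^\top + \sigma_w^2\mathbf{I})^{-1}$, which can be spot-checked numerically before proving it. Dimensions and noise variance below are arbitrary illustrative choices.

```python
import numpy as np

# Numerical spot-check of the push-through identity:
# (H^T H + s2 I)^{-1} H^T  ==  H^T (H H^T + s2 I)^{-1}.
rng = np.random.default_rng(1)
n, m, s2 = 4, 6, 0.5                  # illustrative dimensions and noise variance
H = rng.standard_normal((m, n))

lhs = np.linalg.solve(H.T @ H + s2 * np.eye(n), H.T)   # n x m
rhs = H.T @ np.linalg.inv(H @ H.T + s2 * np.eye(m))    # n x m
print("max abs difference:", np.abs(lhs - rhs).max())
```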

ex-ch07-08

Medium

Show that for any Bayesian model, the posterior mean $\hat{\boldsymbol\theta}_{\text{MMSE}}(\mathbf{Y})$ is unconditionally unbiased: $\mathbb{E}[\hat{\boldsymbol\theta}_{\text{MMSE}}(\mathbf{Y})] = \mathbb{E}[\boldsymbol\theta]$. Give an example where it is not conditionally unbiased, i.e. $\mathbb{E}[\hat{\boldsymbol\theta}_{\text{MMSE}}(\mathbf{Y}) \mid \boldsymbol\theta] \neq \boldsymbol\theta$.
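The distinction can be seen by simulation in the scalar Gaussian model $\theta \sim \mathcal{N}(0,1)$, $Y = \theta + W$, $W \sim \mathcal{N}(0,1)$, whose posterior mean is $Y/2$ (a standard result). The sample size and conditioning point below are illustrative.

```python
import numpy as np

# Monte Carlo illustration of unconditional vs. conditional bias of the
# posterior mean Y/2 in the scalar Gaussian model.
rng = np.random.default_rng(6)
T = 1_000_000
theta = rng.standard_normal(T)
Y = theta + rng.standard_normal(T)
theta_hat = Y / 2                                  # posterior mean for this model

print("E[theta_hat] (near E[theta] = 0):", theta_hat.mean())
mask = np.abs(theta - 1.0) < 0.01                  # condition on theta near 1
print("E[theta_hat | theta = 1] (near 1/2, not 1):", theta_hat[mask].mean())
```

The shrinkage toward the prior mean is exactly what makes the estimator conditionally biased while remaining unconditionally unbiased.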

ex-ch07-09

Medium

Verify that in the Gaussian model of the example "Complex Gaussian Signal in Gaussian Noise", the posterior mean $\hat{\mathbf{X}}_{\text{MMSE}}$ and the estimation error $\mathbf{e} = \mathbf{X} - \hat{\mathbf{X}}_{\text{MMSE}}$ are independent (not just uncorrelated).

ex-ch07-10

Medium

In the pilot-based channel estimation model of the definition "Pilot-Based Channel Estimation Model", compute the Bayesian CRLB: the lower bound on $\mathbb{E}[\|\mathbf{h} - \hat{\mathbf{h}}\|^2]$ over all estimators $\hat{\mathbf{h}}$. Verify that the MMSE estimator achieves it.

ex-ch07-11

Medium

Let $\theta \sim \text{Beta}(a, b)$ and $Y_1, \ldots, Y_N \mid \theta \stackrel{\text{iid}}{\sim} \text{Bernoulli}(\theta)$. Compute the posterior, the MAP estimate, and the MMSE estimate of $\theta$ given $\mathbf{Y}$. What happens as $a, b \to 0$?
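Both the MMSE (posterior mean) and MAP (posterior mode) answers can be checked against a brute-force grid evaluation of the posterior. The values of $a$, $b$, $N$, and the simulated data are illustrative.

```python
import numpy as np

# Grid-based check for the Beta-Bernoulli model: evaluate the posterior
# numerically and report its mean and mode, for comparison with the derived
# MMSE and MAP expressions.
rng = np.random.default_rng(2)
a, b, N = 2.0, 3.0, 20                # illustrative hyperparameters and sample size
theta_true = rng.beta(a, b)
y = rng.binomial(1, theta_true, size=N)
s = y.sum()

grid = np.linspace(1e-6, 1 - 1e-6, 100_000)
log_post = (a - 1 + s) * np.log(grid) + (b - 1 + N - s) * np.log(1 - grid)
w = np.exp(log_post - log_post.max())
w /= w.sum()
print("posterior mean:", (grid * w).sum())
print("posterior mode:", grid[np.argmax(w)])
```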

ex-ch07-12

Hard

Show that for any estimator $\hat{\boldsymbol\theta}(\mathbf{Y})$ with finite MSE,

$$\mathbb{E}[\|\boldsymbol\theta - \hat{\boldsymbol\theta}(\mathbf{Y})\|^2] \;=\; \text{MMSE} \;+\; \mathbb{E}[\|\hat{\boldsymbol\theta}(\mathbf{Y}) - \mathbb{E}[\boldsymbol\theta \mid \mathbf{Y}]\|^2].$$

In words, the excess MSE is the average squared deviation of the estimator from the conditional mean.

ex-ch07-13

Hard

A transmitter sends $\theta \in \{+1, -1\}$ equiprobably through a fading channel: $Y = H\theta + W$, where $H \sim \mathcal{CN}(0, 1)$ and $W \sim \mathcal{CN}(0, \sigma_w^2)$, with $H, W, \theta$ independent. Compute $\hat\theta_{\text{MMSE}}(y)$, i.e. the non-coherent MMSE estimator.

ex-ch07-14

Hard

Let $\theta$ have a Gaussian mixture prior: $f_\theta(\theta) = \sum_{k=1}^K \pi_k \mathcal{N}(\theta; \mu_k, \sigma_k^2)$ with weights $\pi_k > 0$ and $\sum_k \pi_k = 1$. The observation is $Y = \theta + W$ with $W \sim \mathcal{N}(0, \sigma_w^2)$ independent. Compute $\hat\theta_{\text{MMSE}}(y)$ in closed form.
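A closed-form mixture answer can be validated pointwise against direct grid integration of $\mathbb{E}[\theta \mid Y = y]$. The mixture parameters and observed $y$ below are illustrative.

```python
import numpy as np

# Numerical reference for the Gaussian-mixture prior: compute E[theta | Y=y]
# by grid integration of prior(theta) * likelihood(y | theta).
pi_k = np.array([0.3, 0.7])           # illustrative mixture weights
mu = np.array([-2.0, 1.0])            # illustrative component means
sig2 = np.array([0.5, 1.5])           # illustrative component variances
sw2, y = 1.0, 0.8                     # illustrative noise variance and observation

grid = np.linspace(-15.0, 15.0, 400_000)
prior = sum(p * np.exp(-(grid - m) ** 2 / (2 * s)) / np.sqrt(2 * np.pi * s)
            for p, m, s in zip(pi_k, mu, sig2))
lik = np.exp(-(y - grid) ** 2 / (2 * sw2))   # Gaussian likelihood, up to a constant
w = prior * lik
w /= w.sum()
print("E[theta | y]:", (grid * w).sum())
```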

ex-ch07-15

Hard

Suppose the assumed channel covariance $\tilde{\boldsymbol\Sigma}_h$ differs from the true $\boldsymbol\Sigma_h$. Compute the mean-square error of the mismatched MMSE estimator $\hat{\mathbf{h}}(\mathbf{y}) = \tilde{\boldsymbol\Sigma}_h \mathbf{X}_p^H (\mathbf{X}_p \tilde{\boldsymbol\Sigma}_h \mathbf{X}_p^H + \sigma^2 \mathbf{I})^{-1} \mathbf{y}$ under the true statistics. Show that the correctly matched MMSE estimator is always at least as good.
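The MSE of any linear estimator $\hat{\mathbf{h}} = \mathbf{W}\mathbf{y}$ under the true statistics has the exact expression $\text{tr}(\boldsymbol\Sigma_h) - 2\,\text{tr}(\mathbf{W}\mathbf{X}_p\boldsymbol\Sigma_h) + \text{tr}(\mathbf{W}\mathbf{R}_y\mathbf{W}^H)$ with $\mathbf{R}_y = \mathbf{X}_p\boldsymbol\Sigma_h\mathbf{X}_p^H + \sigma^2\mathbf{I}$, so the matched-vs-mismatched comparison can be evaluated without Monte Carlo. The sketch below uses illustrative real-valued covariances and a diagonal mismatch (correlations dropped).

```python
import numpy as np

# Exact MSE of a linear channel estimator h_hat = W y under the true
# statistics, used to compare the matched Wiener filter with a mismatched one.
rng = np.random.default_rng(4)
n, m, s2 = 4, 8, 1.0                       # illustrative dimensions, noise variance
Xp = rng.standard_normal((m, n))           # illustrative (real) pilot matrix
A = rng.standard_normal((n, n))
S = A @ A.T + np.eye(n)                    # true covariance (illustrative)
S_tilde = np.diag(np.diag(S))              # mismatched: correlations dropped

def wiener(Sigma):
    return Sigma @ Xp.T @ np.linalg.inv(Xp @ Sigma @ Xp.T + s2 * np.eye(m))

def mse(W):
    Ry = Xp @ S @ Xp.T + s2 * np.eye(m)
    return np.trace(S) - 2 * np.trace(W @ Xp @ S) + np.trace(W @ Ry @ W.T)

print("matched MSE:   ", mse(wiener(S)))
print("mismatched MSE:", mse(wiener(S_tilde)))
```

Since the matched filter minimizes this quadratic in $\mathbf{W}$, the first printed value can never exceed the second.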

ex-ch07-16

Medium

Show that the LMMSE error covariance satisfies $\boldsymbol\Sigma_{\theta|y} \preceq \boldsymbol\Sigma_\theta$ in the positive-semidefinite ordering, with equality iff $\boldsymbol\Sigma_{\theta y} = \mathbf{0}$.

ex-ch07-17

Hard

Prove the "orthogonality $\Rightarrow$ optimality" half of the orthogonality principle as an inequality, without the perturbation argument: for any $g(\mathbf{Y})$ with $\mathbb{E}[(\boldsymbol\theta - g(\mathbf{Y}))^\top \phi(\mathbf{Y})] = 0$ for every $\phi$, and any other estimator $\tilde g(\mathbf{Y})$, $\mathbb{E}\|\boldsymbol\theta - g(\mathbf{Y})\|^2 \leq \mathbb{E}\|\boldsymbol\theta - \tilde g(\mathbf{Y})\|^2$.

ex-ch07-18

Medium

Let $Y = \sqrt{\text{SNR}}\,\theta + W$ with $\theta, W \sim \mathcal{N}(0, 1)$ independent. Express the MMSE as a function of SNR and verify the I-MMSE identity $\frac{d}{d\,\text{SNR}} I(\theta; Y) = \frac{1}{2}\text{MMSE}(\text{SNR})$ for this Gaussian case.
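The identity can also be checked numerically, using the standard Gaussian-channel mutual information $I(\text{SNR}) = \tfrac12\ln(1+\text{SNR})$ and a Monte Carlo estimate of the MMSE via the conditional-mean estimator (linear in $y$ for jointly Gaussian pairs, as in an earlier exercise). The SNR value and sample size are illustrative.

```python
import numpy as np

# Finite-difference check of the I-MMSE identity in the Gaussian case.
rng = np.random.default_rng(3)
snr, h, T = 2.0, 1e-4, 1_000_000              # illustrative SNR, step, sample size
theta = rng.standard_normal(T)
Y = np.sqrt(snr) * theta + rng.standard_normal(T)
theta_hat = np.sqrt(snr) / (1 + snr) * Y      # E[theta | Y] for this Gaussian model
mmse_mc = np.mean((theta - theta_hat) ** 2)   # Monte Carlo MMSE estimate

I = lambda s: 0.5 * np.log1p(s)               # Gaussian-channel mutual information
dI = (I(snr + h) - I(snr - h)) / (2 * h)      # central finite difference
print("dI/dSNR:", dI, " MMSE/2:", mmse_mc / 2)
```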

ex-ch07-19

Challenge

Let $\theta$ be uniform on $[0, 1]$ and $Y \mid \theta \sim \text{Bernoulli}(\theta)$. Compute $\hat\theta_{\text{MMSE}}(y)$ for $y \in \{0, 1\}$ and the resulting MMSE.
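Since every quantity in this exercise reduces to the moments $\mathbb{E}[\theta^k] = \frac{1}{k+1}$ of the uniform prior, the answers can be cross-checked by exact rational arithmetic. This sketch computes the two posterior means and the MMSE as exact fractions for comparison with a hand derivation.

```python
from fractions import Fraction as F

# Exact-arithmetic check for the Uniform(0,1) prior / Bernoulli likelihood.
m = lambda k: F(1, k + 1)                  # E[theta^k] for theta ~ Uniform(0,1)

p1 = m(1)                                  # P(Y = 1) = E[theta]
mean_1 = m(2) / p1                         # E[theta | Y = 1]
mean_0 = (m(1) - m(2)) / (1 - p1)          # E[theta | Y = 0]
# MMSE = E[theta^2] - E[theta_hat^2] by orthogonality of the posterior mean
mmse = m(2) - (p1 * mean_1**2 + (1 - p1) * mean_0**2)
print(mean_0, mean_1, mmse)
```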

ex-ch07-20

Challenge

Derive the Bayesian CRLB (Van Trees inequality) for scalar $\theta$ with prior $f_\theta$ and likelihood $f_{Y|\theta}$:

$$\mathbb{E}[(\theta - \hat\theta(Y))^2] \;\geq\; \frac{1}{J_B}, \qquad J_B = J_D + J_P,$$

where $J_D = \mathbb{E}[J(\theta)]$ is the expected Fisher information and $J_P = \mathbb{E}\!\left[-\frac{d^2}{d\theta^2}\log f_\theta(\theta)\right]$ is the prior information.