Exercises

ex-fsi-ch01-01

Easy

Under $\mathcal{H}_0$, $Y \sim \mathcal{N}(0, 4)$; under $\mathcal{H}_1$, $Y \sim \mathcal{N}(2, 4)$. For the detector $g(y) = \mathbb{1}\{y > 1\}$, compute $P_f$, $P_d$, and $P_M$.
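A quick numerical check (a sketch, assuming `scipy` is available): all three quantities reduce to Gaussian tail evaluations with standard deviation 2.

```python
from scipy.stats import norm

# Under H0: Y ~ N(0, 4), i.e. std dev 2; under H1: Y ~ N(2, 4).
# The detector declares H1 exactly when y > 1.
Pf = norm.sf(1, loc=0, scale=2)   # false alarm: P(Y > 1 | H0) = Q(1/2)
Pd = norm.sf(1, loc=2, scale=2)   # detection:   P(Y > 1 | H1) = Q(-1/2)
Pm = 1 - Pd                       # miss probability
```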

ex-fsi-ch01-02

Easy

Compute the likelihood ratio $L(y)$ for the exponential problem $\mathcal{H}_0: Y \sim \text{Exp}(\lambda_0)$ vs. $\mathcal{H}_1: Y \sim \text{Exp}(\lambda_1)$ with $\lambda_0 > \lambda_1$. Show that the LRT is equivalent to a threshold test on $y$.

ex-fsi-ch01-03

Easy

For equal priors and 0-1 costs with $f_0 = \mathcal{N}(-1, 1)$, $f_1 = \mathcal{N}(+1, 1)$, compute the MAP error probability $P_e^\star$.
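A numerical sanity check (assuming `scipy`): with equal priors and unit-variance Gaussians at $\pm 1$, the MAP rule thresholds at the midpoint $y = 0$, and the two conditional errors are equal by symmetry.

```python
from scipy.stats import norm

# Equal priors, 0-1 costs: MAP decides H1 when y > 0 (midpoint of -1 and +1).
# Pe = 0.5*P(Y > 0 | H0) + 0.5*P(Y < 0 | H1), which collapses to Q(1).
Pe = 0.5 * norm.sf(0, loc=-1, scale=1) + 0.5 * norm.cdf(0, loc=1, scale=1)
```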

ex-fsi-ch01-04

Easy

Show that the Bhattacharyya coefficient $\rho_B = \int \sqrt{f_0 f_1}\,dy$ satisfies $0 \leq \rho_B \leq 1$, with $\rho_B = 1$ iff $f_0 = f_1$ a.e.

ex-fsi-ch01-05

Easy

For BPSK over AWGN, $Y = s + W$ with $s \in \{-\sqrt{E_s}, +\sqrt{E_s}\}$ equiprobable and $W \sim \mathcal{N}(0, N_0/2)$. Derive the MAP decision rule and compute $P_e$.
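A numerical check against the standard closed form $P_e = Q(\sqrt{2E_s/N_0})$ (the parameter values below are hypothetical, chosen so the noise has unit variance):

```python
from scipy.stats import norm

# Hypothetical example values: Es = 1, N0 = 2, so noise variance N0/2 = 1.
Es, N0 = 1.0, 2.0
# MAP rule for equiprobable symbols: decide +sqrt(Es) when y > 0.
# Error occurs when noise pushes the signal across the threshold:
# Pe = Q(sqrt(Es) / sqrt(N0/2)) = Q(sqrt(2*Es/N0)).
Pe = norm.sf((2 * Es / N0) ** 0.5)
```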

ex-fsi-ch01-06

Medium

For $\mathcal{H}_0: Y \sim \text{Unif}(0, 1)$ vs. $\mathcal{H}_1: Y \sim \text{Unif}(0, 2)$, derive the Neyman-Pearson test at level $\alpha = 0.1$ and compute its power $P_d$.
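A numerical check of one level-$0.1$ solution (a sketch; the likelihood ratio is constant $1/2$ on $(0,1)$ and infinite on $(1,2]$, so rejecting for $y > 0.9$ is one valid choice of rejection region):

```python
# NP test at level alpha = 0.1: always reject for y > 1 (LR is infinite there),
# plus a subset of (0,1) of length 0.1 -- e.g. reject when y > 0.9.
alpha = 0.1
Pf = 1 - 0.9         # P(Y > 0.9 | H0), H0 ~ Unif(0,1)
Pd = (2 - 0.9) / 2   # P(Y > 0.9 | H1), H1 ~ Unif(0,2)
```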

ex-fsi-ch01-07

Medium

Let $Y_1, \ldots, Y_n$ be i.i.d. Bernoulli($p$) under $\mathcal{H}_0$ with $p = p_0$, and under $\mathcal{H}_1$ with $p = p_1 > p_0$. Derive the LRT and show that the sufficient statistic is $S = \sum_k Y_k$.

ex-fsi-ch01-08

Medium

Prove that the Chernoff exponent $\mu(s)$ recovers the Kullback-Leibler divergences at the endpoints: $\mu'(0) = D(f_0 \| f_1)$ and $\mu'(1) = -D(f_1 \| f_0)$. (The signs depend on the sign convention; establish yours carefully.)

ex-fsi-ch01-09

Medium

Consider the Gaussian pair $f_0 = \mathcal{N}(0, \sigma_0^2)$, $f_1 = \mathcal{N}(0, \sigma_1^2)$ with $\sigma_1 > \sigma_0$ (variance shift). Derive the LRT sufficient statistic and show that it is not a threshold test on $y$.

ex-fsi-ch01-10

Medium

A radar system must achieve $P_f \leq 10^{-4}$ while maximising $P_d$. The observation consists of $n = 100$ i.i.d. samples, with $\mathcal{H}_0: Y_k \sim \mathcal{N}(0, 1)$ vs. $\mathcal{H}_1: Y_k \sim \mathcal{N}(\mu, 1)$. What is the minimum $\mu$ required to achieve $P_d \geq 0.9$?
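A numerical check (assuming `scipy`): the sum of the samples is sufficient, and both constraints reduce to inverse Q-function evaluations.

```python
from scipy.stats import norm

# Sufficient statistic: T = sum of the n samples, ~ N(0, n) under H0 and
# ~ N(n*mu, n) under H1.  Pf = 1e-4 fixes the threshold at sqrt(n)*Q^{-1}(1e-4);
# Pd >= 0.9 then requires mu >= (Q^{-1}(1e-4) - Q^{-1}(0.9)) / sqrt(n).
n = 100
mu_min = (norm.isf(1e-4) - norm.isf(0.9)) / n ** 0.5
```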

ex-fsi-ch01-11

Medium

Compute the Bhattacharyya coefficient between $f_0 = \text{Exp}(\lambda_0)$ and $f_1 = \text{Exp}(\lambda_1)$, and show that $\rho_B = 2\sqrt{\lambda_0 \lambda_1}/(\lambda_0 + \lambda_1)$.
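A numerical verification of the closed form (a sketch, assuming `scipy`; the rate values are hypothetical):

```python
import numpy as np
from scipy.integrate import quad

# Compare the Bhattacharyya integral against the closed form for one pair.
l0, l1 = 2.0, 0.5
integrand = lambda y: np.sqrt(l0 * np.exp(-l0 * y) * l1 * np.exp(-l1 * y))
rho_num, _ = quad(integrand, 0, np.inf)
rho_closed = 2 * np.sqrt(l0 * l1) / (l0 + l1)   # = 0.8 for these rates
```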

ex-fsi-ch01-12

Medium

Show that for i.i.d. observations the Chernoff bound becomes $P_e^\star \leq \pi_0^{1-s} \pi_1^s e^{-n\mu(s)}$, where $\mu(s)$ is computed for a single sample. Interpret the exponent.

ex-fsi-ch01-13

Hard

Prove that the ROC curve of any LRT is concave without using the slope identity. (Use only time-sharing and the Neyman-Pearson lemma.)

ex-fsi-ch01-14

Hard

Let $f_0, f_1$ be densities on $\mathbb{R}$ and let $T(y)$ be any real-valued statistic. Prove the data-processing inequality for binary testing: the error exponent based on $T(Y)$ alone cannot exceed the exponent based on the full observation $Y$: $C(f_0^T, f_1^T) \leq C(f_0, f_1)$, where $f_j^T$ is the law of $T(Y)$ under $\mathcal{H}_j$.

ex-fsi-ch01-15

Hard

For two Gaussians with different means and variances, $f_0 = \mathcal{N}(\mu_0, \sigma_0^2)$, $f_1 = \mathcal{N}(\mu_1, \sigma_1^2)$, derive $\mu(s)$ and express the Chernoff information as a transcendental equation.
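Since the optimising $s$ has no closed form here, a numerical evaluation is a useful companion to the derivation (a sketch, assuming `scipy`; the four Gaussian parameters below are hypothetical). It computes $\mu(s) = -\log \int f_0^{1-s} f_1^s \, dy$ by quadrature and maximises over $s \in (0, 1)$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar
from scipy.stats import norm

# Hypothetical parameters: f0 = N(0, 1), f1 = N(1, 4).
f0 = lambda y: norm.pdf(y, 0.0, 1.0)
f1 = lambda y: norm.pdf(y, 1.0, 2.0)

def mu(s):
    """mu(s) = -log int f0^{1-s} f1^s dy, computed by quadrature."""
    val, _ = quad(lambda y: f0(y) ** (1 - s) * f1(y) ** s, -np.inf, np.inf)
    return -np.log(val)

# Chernoff information: C = max over s in (0,1) of mu(s).
res = minimize_scalar(lambda s: -mu(s), bounds=(1e-6, 1 - 1e-6), method='bounded')
C = mu(res.x)
```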

ex-fsi-ch01-16

Challenge

Stein's lemma. Show that for i.i.d. observations, under the Neyman-Pearson criterion with $P_f \leq \alpha$ fixed, the miss probability decays as $\lim_{n\to\infty} \frac{1}{n} \log P_M^\star(n, \alpha) = -D(f_0 \| f_1)$. Sketch the proof via the LLR and the weak law of large numbers.

ex-fsi-ch01-17

Challenge

Minimax detection. Consider a binary test where the priors are unknown. The minimax rule chooses $g$ to minimise $\max(P_f(g), P_M(g))$. Show that the minimax rule is the LRT whose threshold $\eta^\star$ equalises the two error probabilities, and describe how to compute $\eta^\star$ numerically.
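One way to carry out the numerical equalisation (a sketch, assuming `scipy`, for a hypothetical Gaussian mean-shift pair where the LRT reduces to a threshold on $y$): root-find on the balance $P_f - P_M$, which is monotone in the threshold.

```python
from scipy.optimize import brentq
from scipy.stats import norm

# Hypothetical example: f0 = N(0, 1), f1 = N(mu, 1).  The LRT is y > tau,
# with Pf(tau) = Q(tau) and Pm(tau) = Phi(tau - mu).  The minimax threshold
# equalises them; by symmetry the root here is tau = mu/2.
mu = 1.0
balance = lambda tau: norm.sf(tau) - norm.cdf(tau - mu)   # Pf - Pm
tau_star = brentq(balance, -10.0, 10.0)
```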