Chapter Summary

Key Points

1. The $M$-ary decision problem. Given $M$ hypotheses $\mathcal{H}_0,\ldots,\mathcal{H}_{M-1}$ with densities $f_m$, priors $\pi_m$, and observation $\mathbf{y}$, the MAP rule is $g(\mathbf{y}) = \arg\max_m \pi_m f_m(\mathbf{y})$, and the ML rule (equal priors) drops the $\pi_m$. (Sketch 1 below.)

2. Signal-space detection. For $M$ finite-energy signals in AWGN, Gram--Schmidt orthogonalisation produces an orthonormal basis $\{\phi_1,\ldots,\phi_N\}$ with $N \leq M$. The vector of projections $\mathbf{y} \in \mathbb{R}^N$ is a sufficient statistic, and ML detection is the minimum-Euclidean-distance rule: pick the constellation point closest to $\mathbf{y}$. (Sketch 2 below.)

3. Voronoi decision regions. The minimum-distance decoder partitions $\mathbb{R}^N$ into polytopes (Voronoi cells). The geometry of these cells, in particular the distance to the nearest boundary, controls performance. (Sketch 3 below.)

4. Union and nearest-neighbor bounds. Symbol error probability obeys $P_e \leq \sum_{m'\neq m} P(m\to m') \leq (M-1)\,Q\big(d_{\min}/(2\sigma)\big)$. The nearest-neighbor approximation $P_e \approx K_{\min}\, Q\big(d_{\min}\sqrt{\text{SNR}/2}\big)$, where $K_{\min}$ counts the neighbors at distance $d_{\min}$, is tight at high SNR. (Sketch 4 below.)

5. Exact formulas for standard constellations. BPSK: $P_e = Q(\sqrt{2E_s/N_0})$. $M$-PSK: $P_e \approx 2Q\big(\sqrt{2\,\text{SNR}}\,\sin(\pi/M)\big)$. Square $M$-QAM: $P_e = 1 - \big(1 - 2(1-1/\sqrt{M})\,Q(\sqrt{3\,\text{SNR}/(M-1)})\big)^2$. (Sketch 5 below.)

6. MGF averaging and Craig's formula. The Craig integral $Q(x) = \frac{1}{\pi}\int_0^{\pi/2}\exp\big(-x^2/(2\sin^2\phi)\big)\,d\phi$ converts SER in fading to a single finite-range integral of the SNR MGF, a reduction that yields closed forms for Rayleigh, Nakagami, and Rician channels. (Sketch 6 below.)

7. Error exponents. For $n$ i.i.d. observations, the ML error probability decays as $P_e \doteq e^{-n D_{\min}}$, where $D_{\min} = \min_{m\neq m'} D(f_m \| f_{m'})$ (roughly; the precise exponent is the minimum Chernoff information). The same KL quantity that governs detection also governs channel capacity (ITA Ch. 4); this is the operational link between Books FSI and ITA. (Sketch 7 below.)
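
Sketch 1. A minimal illustration of the MAP and ML rules for Gaussian hypotheses $\mathcal{H}_m: \mathbf{y} \sim \mathcal{N}(\boldsymbol{\mu}_m, \sigma^2 I)$. The means, priors, and noise level below are illustrative choices, not values from the chapter.

```python
# MAP and ML decisions among M Gaussian hypotheses (illustrative parameters).
import numpy as np

rng = np.random.default_rng(0)
mus = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])  # one mean per hypothesis
priors = np.array([0.5, 0.25, 0.25])
sigma = 1.0

def map_rule(y, mus, priors, sigma):
    """argmax_m pi_m f_m(y); the Gaussian log-density is
    -||y - mu_m||^2 / (2 sigma^2) up to a constant common to all m."""
    log_post = np.log(priors) - np.sum((y - mus) ** 2, axis=1) / (2 * sigma**2)
    return int(np.argmax(log_post))

def ml_rule(y, mus, sigma):
    """Equal priors: MAP reduces to the minimum-distance (ML) rule."""
    return map_rule(y, mus, np.ones(len(mus)) / len(mus), sigma)

m_true = 1
y = mus[m_true] + sigma * rng.standard_normal(2)
print("MAP decision:", map_rule(y, mus, priors, sigma),
      "  ML decision:", ml_rule(y, mus, sigma))
```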
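
Sketch 2. Gram--Schmidt orthogonalisation of $M = 3$ sampled waveforms, one of which is linearly dependent, so the basis has $N = 2 < M$ functions; the projections onto the basis are the signal-space coordinates. The waveforms are illustrative.

```python
# Gram--Schmidt signal-space construction on sampled waveforms.
import numpy as np

t = np.linspace(0, 1, 1000, endpoint=False)
dt = t[1] - t[0]
signals = np.stack([
    np.sin(2 * np.pi * t),                 # s_0
    np.cos(2 * np.pi * t),                 # s_1
    np.sin(2 * np.pi * t + np.pi / 4),     # s_2 lies in span{s_0, s_1}
])

def gram_schmidt(sigs, dt, tol=1e-9):
    """Orthonormalise sampled waveforms; dependent signals add no basis function."""
    basis = []
    for s in sigs:
        r = s - sum(np.sum(s * phi) * dt * phi for phi in basis)  # strip projections
        energy = np.sum(r**2) * dt
        if energy > tol:
            basis.append(r / np.sqrt(energy))
    return np.stack(basis)

phis = gram_schmidt(signals, dt)
print("M =", len(signals), " N =", len(phis))    # N = 2 < M = 3
coords = signals @ phis.T * dt                    # coordinates of each s_m in R^N
print(np.round(coords, 3))
```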
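
Sketch 3. The minimum-distance (Voronoi) decoder for an illustrative QPSK constellation, together with each point's distance to its nearest cell boundary, which is $d_{\min}/2$.

```python
# Voronoi (minimum-distance) decoding and nearest-boundary distances for QPSK.
import numpy as np

points = np.array([[1, 1], [-1, 1], [-1, -1], [1, -1]]) / np.sqrt(2)  # Es = 1

def decode(y, points):
    """Minimum-distance rule: index of the constellation point nearest to y."""
    return int(np.argmin(np.sum((y - points) ** 2, axis=1)))

# pairwise distances; each point's nearest Voronoi boundary lies at d_min(m)/2
d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
np.fill_diagonal(d, np.inf)
print("d_min per point   :", d.min(axis=1))       # sqrt(2) for every QPSK point
print("boundary distance :", d.min(axis=1) / 2)
print("decode([0.9, 0.8]) ->", decode(np.array([0.9, 0.8]), points))
```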
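
Sketch 4. A Monte-Carlo check of the union and nearest-neighbor bounds for the same QPSK constellation (unit symbol energy, so $\sigma^2 = 1/(2\,\text{SNR})$ per dimension). The SNR grid and trial count are arbitrary.

```python
# Union bound and nearest-neighbor approximation vs. simulated SER for QPSK.
import numpy as np
from scipy.special import erfc

def Q(x):
    """Gaussian tail function Q(x)."""
    return 0.5 * erfc(x / np.sqrt(2))

rng = np.random.default_rng(1)
points = np.array([[1, 1], [-1, 1], [-1, -1], [1, -1]]) / np.sqrt(2)  # Es = 1
M, d_min, K_min = len(points), np.sqrt(2), 2

for snr_db in [2, 6, 10]:
    snr = 10 ** (snr_db / 10)
    sigma = np.sqrt(1 / (2 * snr))       # Es = 1  =>  sigma^2 = 1/(2 SNR) per dim
    n = 300_000
    tx = rng.integers(0, M, n)
    y = points[tx] + sigma * rng.standard_normal((n, 2))
    rx = np.argmin(((y[:, None, :] - points[None, :, :]) ** 2).sum(axis=2), axis=1)
    pe_mc = np.mean(rx != tx)
    pe_union = (M - 1) * Q(d_min / (2 * sigma))      # union bound
    pe_nn = K_min * Q(d_min * np.sqrt(snr / 2))      # nearest-neighbor approx
    print(f"{snr_db:2d} dB  MC {pe_mc:.2e}  union {pe_union:.2e}  NN {pe_nn:.2e}")
```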
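
Sketch 5. The closed-form SER expressions of Key Point 5 as functions of $\text{SNR} = E_s/N_0$; the constellation sizes and the 10 dB evaluation point are illustrative.

```python
# Closed-form SER expressions for BPSK, M-PSK (approx.), and square M-QAM.
import numpy as np
from scipy.special import erfc

def Q(x):
    return 0.5 * erfc(x / np.sqrt(2))

def ser_bpsk(snr):
    return Q(np.sqrt(2 * snr))

def ser_mpsk(snr, M):                  # high-SNR approximation
    return 2 * Q(np.sqrt(2 * snr) * np.sin(np.pi / M))

def ser_mqam(snr, M):                  # exact, square M-QAM
    p = 2 * (1 - 1 / np.sqrt(M)) * Q(np.sqrt(3 * snr / (M - 1)))
    return 1 - (1 - p) ** 2

snr = 10 ** (10 / 10)                   # 10 dB
print("BPSK  :", ser_bpsk(snr))
print("8-PSK :", ser_mpsk(snr, 8))
print("16-QAM:", ser_mqam(snr, 16))
```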
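
Sketch 6. A numerical check of Craig's formula against the standard $Q$-function, followed by the MGF averaging step for BPSK over a Rayleigh channel, where the averaged Craig integral matches the known closed form $\frac{1}{2}\big(1-\sqrt{\bar\gamma/(1+\bar\gamma)}\big)$. The average SNR $\bar\gamma = 10$ is an arbitrary choice.

```python
# Craig's formula and MGF averaging over a Rayleigh SNR distribution.
import numpy as np
from scipy.integrate import quad
from scipy.special import erfc

def Q(x):
    return 0.5 * erfc(x / np.sqrt(2))

def Q_craig(x):
    """Craig's finite-range form of the Gaussian Q-function."""
    val, _ = quad(lambda phi: np.exp(-x**2 / (2 * np.sin(phi) ** 2)) / np.pi,
                  0, np.pi / 2)
    return val

print(Q(1.5), Q_craig(1.5))             # the two forms agree

# BPSK over Rayleigh: averaging Q(sqrt(2*gamma)) over gamma ~ Exp(mean gbar)
# replaces the integrand by the SNR MGF: E[exp(-gamma/sin^2 phi)] = 1/(1 + gbar/sin^2 phi)
gbar = 10.0                              # average SNR (illustrative)
mgf_avg, _ = quad(lambda phi: 1 / (1 + gbar / np.sin(phi) ** 2) / np.pi,
                  0, np.pi / 2)
closed_form = 0.5 * (1 - np.sqrt(gbar / (1 + gbar)))
print(mgf_avg, closed_form)              # both ~ 2.33e-2 at gbar = 10
```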
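
Sketch 7. An empirical look at the error-exponent statement for two equal-variance Gaussian hypotheses, where the Chernoff information is $(\mu_1-\mu_0)^2/(8\sigma^2)$ and $D(f_0\|f_1) = (\mu_1-\mu_0)^2/(2\sigma^2)$. The empirical exponent $-(1/n)\log P_e$ decreases toward the Chernoff value as $n$ grows, though slowly, because of polynomial prefactors. All numbers are illustrative.

```python
# Empirical error exponent for ML testing of two Gaussian hypotheses.
import numpy as np

rng = np.random.default_rng(2)
mu0, mu1, sigma = 0.0, 1.0, 1.0
kl = (mu1 - mu0) ** 2 / (2 * sigma**2)         # D(f0 || f1) for equal variances
chernoff = (mu1 - mu0) ** 2 / (8 * sigma**2)   # Chernoff information
print(f"D(f0||f1) = {kl:.3f}   Chernoff C = {chernoff:.3f}")

for n in [10, 20, 40]:
    trials = 100_000
    # ML test for equal-variance Gaussians: threshold the sample mean at the midpoint
    x = mu0 + sigma * rng.standard_normal((trials, n))   # data generated under H0
    pe = np.mean(x.mean(axis=1) > (mu0 + mu1) / 2)       # P(decide H1 | H0)
    print(f"n = {n:3d}   Pe = {pe:.2e}   -(1/n) log Pe = {-np.log(pe) / n:.3f}")
```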

Looking Ahead

Chapter 4 takes up sequential detection (SPRT, CUSUM, CFAR), where the number of samples is no longer fixed in advance. Chapter 5 opens Part II with parameter estimation, shifting from finite hypothesis sets to continuous parameters. The error-exponent connection previewed here will be developed rigorously in Chapter 5's CRLB and in Chapter 8's EM convergence results.