Chapter Summary
Key Points
1. The $M$-ary decision problem. Given hypotheses $H_i$, $i = 0, \dots, M-1$, with densities $p_i(y)$, priors $\pi_i$, and observation $y$, the MAP rule is $\hat{i} = \arg\max_i \pi_i\, p_i(y)$, and the ML rule (equal priors) drops the $\pi_i$.
2. Signal-space detection. For $M$ finite-energy signals in AWGN, Gram--Schmidt orthogonalisation produces an orthonormal basis $\{\phi_k\}_{k=1}^{N}$ with $N \le M$. The vector of projections $\mathbf{y} = (\langle y, \phi_1 \rangle, \dots, \langle y, \phi_N \rangle)$ is a sufficient statistic, and ML detection is the minimum-Euclidean-distance rule: pick the constellation point $\mathbf{s}_i$ closest to $\mathbf{y}$.
3. Voronoi decision regions. The minimum-distance decoder partitions $\mathbb{R}^N$ into polytopes (Voronoi cells). The geometry of these cells --- in particular the distance $d_{\min}/2$ to the nearest boundary --- controls performance.
4. Union and nearest-neighbor bounds. Symbol error probability obeys $P_e \le (M-1)\, Q\!\left(d_{\min}/2\sigma\right)$. The nearest-neighbor approximation $P_e \approx N_{\min}\, Q\!\left(d_{\min}/2\sigma\right)$ is tight at high SNR, where $N_{\min}$ counts nearest neighbors.
5. Exact formulas for standard constellations. BPSK: $P_b = Q\!\left(\sqrt{2E_b/N_0}\right)$. $M$-PSK: $P_s = \frac{1}{\pi}\int_0^{(M-1)\pi/M} \exp\!\left(-\frac{(E_s/N_0)\sin^2(\pi/M)}{\sin^2\theta}\right)\,d\theta$. Square $M$-QAM: $P_s = 1 - \left[1 - 2\left(1 - \tfrac{1}{\sqrt{M}}\right) Q\!\left(\sqrt{\tfrac{3E_s}{(M-1)N_0}}\right)\right]^2$.
6. MGF averaging and Craig's formula. Craig's form $Q(x) = \frac{1}{\pi}\int_0^{\pi/2} \exp\!\left(-\frac{x^2}{2\sin^2\theta}\right)\,d\theta$ converts SER in fading to a single integral of the SNR MGF, $\bar{P}_s = \frac{1}{\pi}\int_0^{\pi/2} \mathcal{M}_\gamma\!\left(-\frac{g}{\sin^2\theta}\right)\,d\theta$ --- a closed-form reduction that works for Rayleigh, Nakagami, and Rician channels.
7. Error exponents. For $n$ i.i.d. observations, the ML error probability decays as $P_e \doteq e^{-nE}$, where $E \approx D(p_0 \| p_1)$ (roughly; the precise exponent is the minimum pairwise Chernoff information $C(p_0, p_1) = \max_{0 \le \lambda \le 1} \left\{ -\log \int p_0^\lambda\, p_1^{1-\lambda} \right\}$). The same KL quantity that governs detection also governs channel capacity (ITA Ch. 4) --- this is the operational link between Books FSI and ITA.
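The MAP and ML rules of point 1 can be sketched in a few lines. This is an illustrative example, not code from the text: the function names and the two unit-variance Gaussian hypotheses are assumptions chosen for the demo.

```python
import math

def map_rule(y, densities, priors):
    """MAP: pick the index i maximizing pi_i * p_i(y)."""
    scores = [pi * p(y) for p, pi in zip(densities, priors)]
    return max(range(len(scores)), key=scores.__getitem__)

def ml_rule(y, densities):
    """ML: equal priors, so the prior factor drops out."""
    return map_rule(y, densities, [1.0] * len(densities))

def gauss(mu):
    """Unit-variance Gaussian density with mean mu (example hypotheses)."""
    return lambda y: math.exp(-0.5 * (y - mu) ** 2) / math.sqrt(2 * math.pi)

densities = [gauss(-1.0), gauss(+1.0)]
print(ml_rule(0.3, densities))                # 1: y = 0.3 is closer to +1
print(map_rule(0.3, densities, [0.9, 0.1]))   # 0: a heavy prior on H0 flips the decision
```

The second call shows how a skewed prior can override the likelihood, which is exactly the difference between the two rules.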
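Point 2's minimum-distance rule is equally short once the projection vector is in hand. A minimal sketch, assuming a unit-energy QPSK constellation as the example:

```python
import math

def min_distance_detect(y, constellation):
    """ML detection in AWGN: return the index of the closest constellation point."""
    dists = [sum((a - b) ** 2 for a, b in zip(y, s)) for s in constellation]
    return dists.index(min(dists))

# QPSK: four unit-energy points on the circle (assumed example constellation)
qpsk = [(math.cos(t), math.sin(t))
        for t in (math.pi / 4, 3 * math.pi / 4, 5 * math.pi / 4, 7 * math.pi / 4)]

# A noisy projection vector near the fourth-quadrant point
print(min_distance_detect((0.9, -0.5), qpsk))  # 3
```

The decision regions this rule induces are precisely the Voronoi cells of point 3.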
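A useful self-consistency check on point 5's formulas: setting $M=4$ in the square-QAM expression must reproduce the exact QPSK SER, since 4-QAM and QPSK are the same constellation. Function names here are illustrative, not from the text.

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

def ser_bpsk(ebn0):
    """Exact BPSK bit error rate, Q(sqrt(2 Eb/N0))."""
    return Q(math.sqrt(2 * ebn0))

def ser_qam(M, esn0):
    """Exact square M-QAM symbol error rate."""
    p = 2 * (1 - 1 / math.sqrt(M)) * Q(math.sqrt(3 * esn0 / (M - 1)))
    return 1 - (1 - p) ** 2

# 4-QAM reduces to QPSK: P_s = 1 - (1 - Q(sqrt(Es/N0)))^2
esn0 = 8.0
print(ser_qam(4, esn0) - (1 - (1 - Q(math.sqrt(esn0))) ** 2))  # 0.0
```

For $M=4$ the prefactor $2(1-1/\sqrt{M})$ is exactly 1 and $3/(M-1)$ is exactly 1, so the reduction is term-by-term.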
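Point 6 in action: for BPSK over Rayleigh fading, the MGF is $\mathcal{M}_\gamma(s) = (1 - s\bar{\gamma})^{-1}$ and $g = 1$, so the Craig integral can be evaluated numerically and compared against the known closed form $\tfrac{1}{2}\bigl(1 - \sqrt{\bar{\gamma}/(1+\bar{\gamma})}\bigr)$. This sketch uses a simple midpoint rule; step count and SNR are assumptions.

```python
import math

def rayleigh_bpsk_mgf(avg_snr, n=4000):
    """Average BPSK BER over Rayleigh fading via the Craig/MGF integral:
    Pb = (1/pi) * int_0^{pi/2} M_gamma(-1/sin^2 t) dt,
    with M_gamma(s) = 1/(1 - s*avg_snr)."""
    h = (math.pi / 2) / n
    total = 0.0
    for k in range(n):
        t = (k + 0.5) * h               # midpoint rule avoids t = 0
        s2 = math.sin(t) ** 2
        total += s2 / (s2 + avg_snr)    # M_gamma(-1/sin^2 t), simplified
    return total * h / math.pi

def rayleigh_bpsk_closed(avg_snr):
    """Known closed form for BPSK over Rayleigh fading."""
    return 0.5 * (1 - math.sqrt(avg_snr / (1 + avg_snr)))

g = 10.0
print(rayleigh_bpsk_mgf(g), rayleigh_bpsk_closed(g))
```

The finite integration limit $\pi/2$ is what makes the MGF average tractable: the channel enters only through $\mathcal{M}_\gamma$, so swapping in a Nakagami or Rician MGF changes one line.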
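Point 7's exponent can also be computed directly. For unit-variance Gaussians at distance $d$, the Chernoff information is $d^2/8$ (attained at $\lambda = 1/2$), while the one-sided KL divergence is the larger $d^2/2$. The grid sizes below are assumptions for a quick numerical check.

```python
import math

def chernoff_information(mu0, mu1, n_lambda=41, y_lo=-10.0, y_hi=10.0, n_y=2001):
    """Numerically maximize -log int p0^lam * p1^(1-lam) dy over lam in (0, 1),
    for unit-variance Gaussian densities with means mu0 and mu1."""
    dy = (y_hi - y_lo) / (n_y - 1)
    ys = [y_lo + i * dy for i in range(n_y)]

    def p(mu, y):
        return math.exp(-0.5 * (y - mu) ** 2) / math.sqrt(2 * math.pi)

    best = 0.0
    for k in range(1, n_lambda - 1):
        lam = k / (n_lambda - 1)
        integral = sum(p(mu0, y) ** lam * p(mu1, y) ** (1 - lam) for y in ys) * dy
        best = max(best, -math.log(integral))
    return best

# Means 0 and 2, so d = 2 and the Chernoff information should be d^2/8 = 0.5
print(chernoff_information(0.0, 2.0))
```

That the optimizer sits at $\lambda = 1/2$ here reflects the symmetry of the two hypotheses; for asymmetric pairs the maximizing $\lambda$ shifts, which is exactly why the Chernoff exponent rather than a single KL divergence gives the precise decay rate.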
Looking Ahead
Chapter 4 takes up sequential detection (SPRT, CUSUM, CFAR): the number of samples is no longer fixed. Chapter 5 opens Part II with parameter estimation, shifting from finite hypothesis sets to continuous parameters. The error-exponent connection previewed here will be developed rigorously in Chapter 5's CRLB and in Chapter 8's EM convergence results.