Prerequisites & Notation
Before You Begin
Chapter 16 is the operational payoff of the lattice theory built in Ch. 15. We now use lattices not as geometric curiosities but as coding primitives that actually achieve the AWGN capacity $C = \frac{1}{2}\log_2(1+\mathrm{SNR})$. The mathematical cost beyond Ch. 15 is modest: the main additions are a clean understanding of the MMSE estimator and of modulo arithmetic against a lattice. The payoff is enormous: the mod-$\Lambda$ scheme of Erez and Zamir (2004) closes, with one algebraic construction, a gap Shannon left open in 1948: how to achieve capacity with a structured codebook. If any item below feels shaky, take the cross-reference before starting.
- Lattices, generator matrices, fundamental volume, Voronoi region, packing/covering radius (Review ch15)
Self-check: Can you state the definition of a lattice $\Lambda = G\mathbb{Z}^n$, compute the fundamental volume $V(\Lambda) = |\det G|$, and describe the Voronoi region $\mathcal{V}(\Lambda)$?
- Coset codes and the partition $\Lambda/\Lambda'$ (Review ch04)
Self-check: Given nested lattices $\Lambda' \subset \Lambda$, can you explain the index $|\Lambda/\Lambda'|$, identify cosets, and relate the fundamental volumes by $V(\Lambda') = |\Lambda/\Lambda'|\, V(\Lambda)$?
- AWGN capacity and Shannon's random-coding achievability (Review ch09)
Self-check: Can you state the AWGN capacity formula $C = \frac{1}{2}\log_2(1+\mathrm{SNR})$ per real dimension and outline Shannon's random-Gaussian-codebook achievability argument?
- MMSE estimation: the orthogonality principle and the scalar MMSE coefficient $\alpha$ (Review ch03)
Self-check: For $Y = X + N$ with $X \sim \mathcal{N}(0, P)$ and $N \sim \mathcal{N}(0, \sigma^2)$ independent, can you derive the MMSE estimator $\hat{X} = \alpha Y$ with $\alpha = \frac{P}{P+\sigma^2}$ and explain why the estimate is orthogonal to the error?
- Tomlinson–Harashima precoding (ISI precoder via modulo arithmetic) (Review ch10)
Self-check: Can you recall the THP idea for known ISI: at the transmitter, subtract the known interference and reduce mod-$\Lambda$ into a compact box? This is the scalar cousin of Costa's lattice DPC.
- Complexity: NP-hardness, average-case polynomial algorithms, sphere-decoder search tree
Self-check: Can you explain the difference between worst-case and average-case complexity and why a branch-and-bound algorithm can beat the exponential worst case in practice?
- Uniform distributions on bounded sets, moduli, and the crypto-lemma: if $U$ is uniform on $\mathcal{V}(\Lambda)$ and independent of $X$, then $(X + U) \bmod \Lambda$ is uniform on $\mathcal{V}(\Lambda)$ and independent of $X$ (Review ch04)
Self-check: Can you reproduce the one-line proof of the crypto-lemma via shift-invariance of the uniform distribution on $\mathcal{V}(\Lambda)$? This is the single most-used lemma in this chapter.
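The crypto-lemma can be checked numerically in one dimension, where $\Lambda = \mathbb{Z}$ and $\mathcal{V}(\Lambda) = [-\tfrac{1}{2}, \tfrac{1}{2})$. The sketch below is purely illustrative (variable names and parameters are mine, not from the text): a uniform dither makes the mod-reduced output uniform and decorrelated from the signal, no matter how $X$ is distributed.

```python
import numpy as np

def mod_lattice(x):
    """Reduce x modulo the lattice Z into the Voronoi cell [-0.5, 0.5]."""
    return x - np.round(x)

rng = np.random.default_rng(1)
n = 500_000

U = rng.uniform(-0.5, 0.5, n)   # dither, uniform on the Voronoi cell
X = rng.normal(2.0, 3.0, n)     # arbitrary signal, independent of U
Y = mod_lattice(X + U)          # crypto-lemma: Y uniform, independent of X

# Uniformity: every sub-interval receives an equal share of the mass.
hist, _ = np.histogram(Y, bins=10, range=(-0.5, 0.5))
print(hist / n)                 # each entry ~ 0.1

# Independence of X (checked here only to first order): correlation ~ 0.
print(np.corrcoef(X, Y)[0, 1])
```

The same experiment with a non-uniform dither (try `U = rng.normal(0, 0.1, n)`) visibly breaks the uniformity, which is why the dither must live uniformly on the Voronoi cell.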
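The scalar MMSE self-check can likewise be verified by simulation. This is a minimal sketch under my own choice of parameters ($P = 4$, $\sigma^2 = 1$): the linear estimate $\hat{X} = \alpha Y$ with $\alpha = P/(P+\sigma^2)$ yields an error uncorrelated with the observation, and its MSE matches $P\sigma^2/(P+\sigma^2)$.

```python
import numpy as np

rng = np.random.default_rng(0)
P, sigma2, n = 4.0, 1.0, 200_000          # signal power, noise variance, samples

X = rng.normal(0.0, np.sqrt(P), n)        # X ~ N(0, P)
N = rng.normal(0.0, np.sqrt(sigma2), n)   # N ~ N(0, sigma2), independent of X
Y = X + N

alpha = P / (P + sigma2)                  # scalar MMSE coefficient
err = X - alpha * Y                       # estimation error

# Orthogonality principle: the error is uncorrelated with the observation.
print(np.mean(err * Y))                   # ~ 0
# Achieved MSE matches the closed form P*sigma2/(P+sigma2).
print(np.mean(err**2), P * sigma2 / (P + sigma2))
```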
Notation for This Chapter
Lattice-theoretic symbols from Ch. 4 and Ch. 15 remain in force. The new symbols all circle around the Erez–Zamir construction: a fine coding lattice $\Lambda$, a coarse shaping lattice $\Lambda'$, an MMSE coefficient $\alpha$, and a dither $U$. The book-wide information-theoretic symbols (SNR, noise variance $\sigma^2$, capacity $C$, rate $R$) follow the global CM notation.
| Symbol | Meaning | Introduced |
|---|---|---|
| $\Lambda$ | Fine (coding) lattice; provides coding gain | s01 |
| $\Lambda'$ | Coarse (shaping) lattice; $\Lambda' \subset \Lambda$; provides shaping gain and power constraint | s01 |
| $\Lambda/\Lambda'$ | Nesting quotient; the set of cosets of $\Lambda'$ in $\Lambda$; indexed by integers $0, \dots, \lvert\Lambda/\Lambda'\rvert - 1$ | s01 |
| $\mathcal{V}(\Lambda)$ | Voronoi region (fundamental cell) of the lattice $\Lambda$, centred at the origin | s01 |
| $\mathcal{C}$ | Nested lattice code: $\mathcal{C} = \Lambda \cap \mathcal{V}(\Lambda')$ | s01 |
| $x \bmod \Lambda$ | Modulo-lattice reduction: the unique element of $x + \Lambda$ inside $\mathcal{V}(\Lambda)$, i.e., $x \bmod \Lambda = x - Q_\Lambda(x)$ | s01 |
| $Q_\Lambda(\cdot)$ | Nearest-neighbour quantiser to the lattice $\Lambda$: $Q_\Lambda(x) = \arg\min_{\lambda \in \Lambda} \Vert x - \lambda \Vert$ | s01 |
| $R$ | Code rate (bits per real dimension): $R = \frac{1}{n}\log_2\lvert\mathcal{C}\rvert = \frac{1}{n}\log_2\frac{V(\Lambda')}{V(\Lambda)}$ | s01 |
| $\sigma^2(\Lambda)$ | Second moment (average energy per dimension) of $\mathcal{V}(\Lambda)$: $\sigma^2(\Lambda) = \frac{1}{n\,V(\Lambda)}\int_{\mathcal{V}(\Lambda)} \Vert x \Vert^2 \, dx$ | s02 |
| $G(\Lambda)$ | Normalised second moment of $\Lambda$: $G(\Lambda) = \sigma^2(\Lambda)/V(\Lambda)^{2/n}$; a dimensionless shape factor with $G(\Lambda) \ge \frac{1}{2\pi e}$ | s03 |
| $\alpha$ | MMSE scaling coefficient: $\alpha = \frac{\mathrm{SNR}}{1+\mathrm{SNR}}$ | s02 |
| $U$ | Dither vector, uniform on $\mathcal{V}(\Lambda')$ and independent of the message | s02 |
| $S$ | Known interference (state) at the transmitter in the dirty-paper channel | s04 |
| $\gamma_s(\Lambda)$ | Shaping gain of $\Lambda$ over the cube: $\gamma_s(\Lambda) = \frac{1/12}{G(\Lambda)}$; upper-bounded by $10\log_{10}\frac{2\pi e}{12} \approx 1.53$ dB | s03 |
| $\gamma_c(\Lambda)$ | Coding gain of $\Lambda$ over $\mathbb{Z}^n$: $\gamma_c(\Lambda) = \frac{d_{\min}^2(\Lambda)}{V(\Lambda)^{2/n}}$, normalised to $\gamma_c(\mathbb{Z}^n) = 1$ | s03 |
| $P_e$ | Lattice-decoding error probability: $P_e = \Pr\{N \notin \mathcal{V}(\Lambda)\}$ when $N$ is the effective noise at the lattice decoder | s05 |
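To make the table's definitions concrete, here is a toy nested pair of my own choosing (not a construction from the text): fine lattice $\Lambda = \mathbb{Z}^2$ and coarse lattice $\Lambda' = 4\mathbb{Z}^2$. The sketch computes the quantiser $Q_{\Lambda'}$, the mod-$\Lambda'$ reduction, the codebook $\mathcal{C} = \Lambda \cap \mathcal{V}(\Lambda')$, the rate $R$, and the cube cell's second moment.

```python
import numpy as np
from itertools import product

n, M = 2, 4                       # dimension; nesting Λ' = M·Z^n inside Λ = Z^n

def Q(scale, x):
    """Nearest-neighbour quantiser to the lattice scale·Z^n."""
    return scale * np.round(np.asarray(x, float) / scale)

def mod_coarse(x):
    """x mod Λ': representative of x + Λ' in the cell [-M/2, M/2)^n."""
    return np.asarray(x, float) - Q(M, x)

# Codebook: the M^n fine-lattice points inside the coarse cell [-M/2, M/2)^n.
codebook = list(product(range(-M // 2, M // 2), repeat=n))
assert len(codebook) == M ** n    # |Λ/Λ'| = V(Λ')/V(Λ) = M^n

R = np.log2(len(codebook)) / n    # rate in bits per real dimension
print(R)                          # 2.0 bits/dim for M = 4

# Second moment per dimension of the cube cell is M^2/12, so its normalised
# second moment is G = 1/12 — the cube baseline that shaping gain improves on.
samples = np.random.default_rng(2).uniform(-M / 2, M / 2, (100_000, n))
sec_moment = np.mean(np.sum(samples**2, axis=1)) / n
print(sec_moment)                 # ~ M^2/12 ≈ 1.333
```

Note that for the cube-shaped cell the quantiser decouples per coordinate; for a general coarse lattice, $Q_{\Lambda'}$ is a closest-vector search, which is where the complexity prerequisites above come in.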