Prerequisites & Notation

Before You Begin

Chapter 16 is the operational payoff of the lattice theory built in Ch. 15. We now use lattices not as geometric curiosities but as coding primitives that actually achieve the AWGN capacity $C = \tfrac12 \log_2(1 + \mathrm{SNR})$. The mathematical cost beyond Ch. 15 is modest: the main additions are a clean understanding of the MMSE estimator and of modulo arithmetic against a lattice. But the payoff is enormous; the mod-$\Lambda$ scheme of Erez and Zamir (2004) closes, with one algebraic construction, a gap Shannon left open in 1948: how to achieve capacity with a structured codebook. If any item below feels shaky, follow the cross-reference before starting.

  • Lattices, generator matrices, fundamental volume, Voronoi region, packing/covering radius (Review Ch. 15)

    Self-check: Can you state the definition of a lattice $\Lambda \subset \mathbb{R}^n$, compute $V(\Lambda) = |\det \mathbf{G}|$, and describe the Voronoi region $\mathcal{V}(\Lambda)$? (A numerical sketch of these quantities follows this list.)

  • Coset codes and the partition $\Lambda / \Lambda'$ (Review Ch. 4)

    Self-check: Given lattices $\Lambda' \subset \Lambda$, can you explain the index $|\Lambda / \Lambda'|$, identify cosets, and relate the fundamental volumes by $V(\Lambda') = |\Lambda/\Lambda'|\, V(\Lambda)$? (The lattice sketch after this list also checks the index numerically.)

  • AWGN capacity $C = \tfrac12 \log_2(1 + \mathrm{SNR})$ and Shannon's random-coding achievability (Review Ch. 9)

    Self-check: Can you state the AWGN capacity formula per real dimension and outline Shannon's random-Gaussian-codebook achievability argument?

  • MMSE estimation: the orthogonality principle and the scalar MMSE coefficient $\alpha = \mathrm{SNR}/(1+\mathrm{SNR})$ (Review Ch. 3)

    Self-check: For $y = x + w$ with $x \sim \mathcal{N}(0, P)$ and $w \sim \mathcal{N}(0, \sigma^2)$, can you derive the MMSE estimator $\hat{x} = \alpha y$ with $\alpha = P/(P + \sigma^2)$ and explain why the estimate is orthogonal to the error (the orthogonality principle)? (A Monte-Carlo check of these facts, and of the capacity formula above, appears after this list.)

  • Tomlinson–Harashima precoding (ISI precoder via modulo arithmetic) (Review Ch. 10)

    Self-check: Can you recall the THP idea for known ISI: at the transmitter, subtract the known interference and reduce mod-$2M$ into a compact box? This is the scalar cousin of Costa's lattice DPC. (A scalar THP sketch appears after this list.)

  • Complexity: NP-hardness, average-case polynomial algorithms, sphere-decoder search tree

    Self-check: Can you explain the difference between worst-case and average-case complexity, and why a branch-and-bound algorithm can beat the exponential worst case in practice? (A minimal sphere decoder illustrating the pruned search tree appears after this list.)

  • Uniform distributions on bounded sets, moduli, and the crypto-lemma: if $\mathbf{d}$ is uniform on $\mathcal{V}(\Lambda)$ and independent of $\mathbf{x}$, then $[\mathbf{x} + \mathbf{d}] \bmod \Lambda$ is uniform on $\mathcal{V}(\Lambda)$, independent of $\mathbf{x}$ (Review Ch. 4)

    Self-check: Can you reproduce the one-line proof of the crypto-lemma via shift-invariance of the uniform distribution on $\mathcal{V}(\Lambda)$? This is the single most-used lemma in this chapter. (An empirical one-dimensional check appears after this list.)
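To make the first two items concrete, here is a minimal numerical sketch (not from the text; it assumes only numpy) that computes the fundamental volume $V(\Lambda) = |\det \mathbf{G}|$ of the hexagonal lattice $A_2$, checks the index/volume relation for the sublattice $3\Lambda$, and implements a brute-force nearest-neighbour quantiser $Q_\Lambda$:

```python
# Illustrative sketch (numpy only): fundamental volume, nested-lattice index,
# and brute-force nearest-neighbour quantisation for the hexagonal lattice A2.
import numpy as np

G = np.array([[1.0, 0.0],
              [0.5, np.sqrt(3) / 2]])    # rows are the basis vectors of A2
V = abs(np.linalg.det(G))                # V(Lambda) = |det G| ~ 0.866

G_sub = 3 * G                            # sublattice Lambda' = 3 * Lambda
index = round(abs(np.linalg.det(G_sub)) / V)   # |Lambda/Lambda'| = V(Lambda')/V(Lambda) = 9

def quantize(y, G, radius=4):
    """Q_Lambda(y) by exhaustive search over a small window of integer
    coordinates; fine for a 2-D illustration, not for real decoding."""
    best, best_d = None, np.inf
    for a in range(-radius, radius + 1):
        for b in range(-radius, radius + 1):
            p = a * G[0] + b * G[1]
            d = np.linalg.norm(y - p)
            if d < best_d:
                best, best_d = p, d
    return best

y = np.array([0.9, 1.3])
print("V =", V, " index =", index, " Q_Lambda(y) =", quantize(y, G))
# By definition, y - Q_Lambda(y) lands in the Voronoi region V(Lambda).
```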
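Next, a Monte-Carlo check (numpy only; the values of $P$ and $\sigma^2$ are illustrative) of the scalar MMSE facts and the capacity formula quoted in the items above: $\alpha = P/(P+\sigma^2)$ attains the MMSE, the error is orthogonal to the observation, and $C = \tfrac12\log_2(1+\mathrm{SNR})$ per real dimension.

```python
# Monte-Carlo verification of the scalar MMSE coefficient, the orthogonality
# principle, and the AWGN capacity formula. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
P, sigma2, n = 4.0, 1.0, 1_000_000
x = rng.normal(0.0, np.sqrt(P), n)       # signal x ~ N(0, P)
w = rng.normal(0.0, np.sqrt(sigma2), n)  # noise  w ~ N(0, sigma^2)
y = x + w

alpha = P / (P + sigma2)                 # = SNR / (1 + SNR)
err = x - alpha * y                      # error of the estimate x_hat = alpha * y
print("E[err * y] (orthogonality, ~0):", np.mean(err * y))
print("empirical MMSE vs P*sigma2/(P+sigma2):",
      np.mean(err ** 2), P * sigma2 / (P + sigma2))
print("C = 0.5 * log2(1 + SNR) =", 0.5 * np.log2(1 + P / sigma2), "bits/dim")
```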
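A sketch of scalar Tomlinson–Harashima precoding follows, in the noise-free case for clarity: the transmitter subtracts the known interference and folds the result back into the PAM box with a centred mod-$2M$ reduction, and the receiver applies the same reduction. The box $[-M, M)$ and the helper name mod2M are illustrative choices, not the book's notation.

```python
# Noise-free scalar THP sketch: pre-subtract known interference, fold into
# the box with a mod-2M reduction, and undo the folding at the receiver.
import numpy as np

def mod2M(v, M):
    """Centred mod-2M reduction of v into the interval [-M, M)."""
    return v - 2 * M * np.floor((v + M) / (2 * M))

M = 4.0
a = np.array([-3.0, 1.0, 3.0])           # data symbols inside the box
s = np.array([10.2, -7.5, 0.4])          # interference known at the transmitter
x = mod2M(a - s, M)                      # pre-subtract, then fold into the box
y = x + s                                # channel adds the interference back
a_hat = mod2M(y, M)                      # receiver folds y into the same box
print(np.allclose(a_hat, a))             # True: the data survives the folding
```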
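To see why branch-and-bound beats the exponential worst case on average, here is a minimal depth-first sphere decoder (a standard textbook construction sketched for illustration, not the book's specific algorithm): it rotates the problem with a QR decomposition, reaches a first candidate greedily, and then prunes every branch whose partial distance already exceeds the best radius found so far.

```python
# Minimal depth-first sphere decoder (numpy only); not tuned or hardened.
import numpy as np

def sphere_decode(G, y):
    """Return integer coordinates u minimising ||y - G.T @ u||."""
    Q, R = np.linalg.qr(G.T)             # rotate: ||y - G.T u|| = ||z - R u||
    z = Q.T @ y
    n = G.shape[0]
    best = {"d2": np.inf, "u": None}
    u = np.zeros(n, dtype=int)

    def search(k, d2):
        if k < 0:                        # leaf: a complete lattice point
            if d2 < best["d2"]:
                best["d2"], best["u"] = d2, u.copy()
            return
        r = (z[k] - R[k, k + 1:] @ u[k + 1:]) / R[k, k]
        base = int(np.round(r))
        for m in range(64):              # offsets 0, +-1, +-2, ... (safety cap)
            cands = {base + m, base - m} # a single candidate when m == 0
            pruned = 0
            for cand in cands:
                inc = (R[k, k] * (cand - r)) ** 2
                if d2 + inc >= best["d2"]:
                    pruned += 1          # prune: this whole subtree is too far
                else:
                    u[k] = cand
                    search(k - 1, d2 + inc)
            if pruned == len(cands):
                break                    # costs only grow with m: stop level k

    search(n - 1, 0.0)
    return best["u"], np.sqrt(best["d2"])

G = np.array([[1.0, 0.0], [0.5, np.sqrt(3) / 2]])   # A2 again
u, dist = sphere_decode(G, np.array([2.3, 1.1]))
print("Q_Lambda(y) =", u @ G, " at distance", dist)
```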
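Finally, a one-dimensional empirical check of the crypto-lemma with $\Lambda = \mathbb{Z}$, whose Voronoi region is $[-\tfrac12, \tfrac12)$: whatever the input $x$ is, $[x + d] \bmod \Lambda$ shows the same uniform statistics.

```python
# Crypto-lemma check in 1-D with Lambda = Z, Voronoi cell [-1/2, 1/2).
import numpy as np

def mod_lattice(v):
    return v - np.floor(v + 0.5)         # [v] mod Z, result in [-1/2, 1/2)

rng = np.random.default_rng(1)
d = rng.uniform(-0.5, 0.5, 200_000)      # dither: uniform on the Voronoi cell
for x in (0.0, 0.3, 123.456):            # arbitrary, even adversarial, inputs
    t = mod_lattice(x + d)
    print(f"x = {x:8.3f}  mean = {t.mean():+.4f}  var = {t.var():.4f}"
          f"  (uniform target: 0, {1/12:.4f})")
```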

Notation for This Chapter

Lattice-theoretic symbols from Ch. 4 and Ch. 15 remain in force. The new symbols all circle around the Erez–Zamir construction: a fine coding lattice $\Lambda_c$, a coarse shaping lattice $\Lambda_s \subset \Lambda_c$, an MMSE coefficient $\alpha$, and a dither $\mathbf{d}$. The book-wide information-theoretic symbols ($\mathrm{SNR}$, noise variance $\sigma^2$, capacity $C$, rate $R$) follow the global CM notation.

| Symbol | Meaning | Introduced |
| --- | --- | --- |
| $\Lambda_c$ | Fine (coding) lattice; provides coding gain | §16.1 |
| $\Lambda_s$ | Coarse (shaping) lattice; $\Lambda_s \subset \Lambda_c$; provides shaping gain and the power constraint | §16.1 |
| $\Lambda_c / \Lambda_s$ | Nesting quotient; the set of cosets of $\Lambda_s$ in $\Lambda_c$, indexed by integers $0, 1, \ldots, \lvert\Lambda_c/\Lambda_s\rvert - 1$ | §16.1 |
| $\mathcal{V}(\Lambda)$ | Voronoi region (fundamental cell) of the lattice $\Lambda$, centred at the origin | §16.1 |
| $\mathcal{C}$ | Nested lattice code: $\mathcal{C} = \Lambda_c \cap \mathcal{V}(\Lambda_s)$ | §16.1 |
| $[\mathbf{y}] \bmod \Lambda$ | Modulo-lattice reduction: the unique element of $\mathbf{y} + \Lambda$ inside $\mathcal{V}(\Lambda)$, i.e., $\mathbf{y} - Q_\Lambda(\mathbf{y})$ | §16.1 |
| $Q_\Lambda(\mathbf{y})$ | Nearest-neighbour quantiser to the lattice $\Lambda$: $Q_\Lambda(\mathbf{y}) = \arg\min_{\boldsymbol{\lambda} \in \Lambda} \lVert \mathbf{y} - \boldsymbol{\lambda} \rVert$ | §16.1 |
| $R$ | Code rate (bits per real dimension): $R = \tfrac{1}{n} \log_2 \lvert\Lambda_c / \Lambda_s\rvert$ | §16.1 |
| $P$ | Second moment (average energy per dimension) of $\mathcal{V}(\Lambda_s)$: $P = \tfrac{1}{n V(\Lambda_s)} \int_{\mathcal{V}(\Lambda_s)} \lVert \mathbf{x} \rVert^2 \, d\mathbf{x}$ | §16.2 |
| $G(\Lambda)$ | Normalised second moment of $\Lambda$: $G(\Lambda) = P(\Lambda) / V(\Lambda)^{2/n}$; a dimensionless shape factor with $G(\Lambda) \ge 1/(2 \pi e)$ | §16.3 |
| $\alpha$ | MMSE scaling coefficient: $\alpha = \mathrm{SNR} / (1 + \mathrm{SNR}) = P / (P + \sigma^2)$ | §16.2 |
| $\mathbf{d}$ | Dither vector, uniform on $\mathcal{V}(\Lambda_s)$ and independent of the message | §16.2 |
| $\mathbf{s}$ | Known interference (state) at the transmitter, in the dirty-paper channel $\mathbf{y} = \mathbf{x} + \mathbf{s} + \mathbf{w}$ | §16.4 |
| $\gamma_s(\Lambda)$ | Shaping gain of $\Lambda$ over the cube: $\gamma_s(\Lambda) = G(\mathbb{Z}^n) / G(\Lambda) = (1/12) / G(\Lambda)$; upper-bounded by $\pi e / 6 \approx 1.53$ dB | §16.3 |
| $\gamma_c(\Lambda)$ | Coding gain of $\Lambda$ over $\mathbb{Z}^n$: $d_{\min}^2(\Lambda) / V(\Lambda)^{2/n}$, normalised to $\mathbb{Z}^n$ | §16.3 |
| $P_e^{\mathrm{lat}}$ | Lattice-decoding error probability: $\Pr[\hat{\mathbf{x}} \ne \mathbf{x}]$ when $\hat{\mathbf{x}} = Q_{\Lambda}(\mathbf{y})$ | §16.5 |
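As a quick orientation, the sketch below (a hypothetical one-dimensional example, not from the text) ties several table entries together: with fine lattice $\Lambda_c = \tfrac{1}{M}\mathbb{Z}$ and coarse lattice $\Lambda_s = \mathbb{Z}$, the nested code $\mathcal{C} = \Lambda_c \cap \mathcal{V}(\Lambda_s)$ has $\lvert\Lambda_c/\Lambda_s\rvert = M$ points and rate $R = \log_2 M$ bits per dimension.

```python
# Hypothetical 1-D example: Lambda_c = (1/M) Z nested in Lambda_s = Z gives
# an M-point Voronoi codebook, decoded by quantise-then-reduce.
import numpy as np

M = 8

def mod_s(y):
    return y - np.floor(y + 0.5)         # [y] mod Lambda_s, result in [-1/2, 1/2)

def Q_fine(y):
    return np.round(y * M) / M           # Q_{Lambda_c} for Lambda_c = (1/M) Z

codebook = mod_s(np.arange(M) / M)       # C = Lambda_c intersect V(Lambda_s)
R = np.log2(M)                           # R = (1/n) log2 |Lambda_c/Lambda_s|, n = 1
print("codebook:", np.sort(codebook), "  R =", R, "bits/dim")

y = 2.37                                 # an arbitrary received value
print("decoded codeword:", mod_s(Q_fine(y)))   # quantise to Lambda_c, reduce mod Lambda_s
```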