Prerequisites & Notation

Before You Begin

This chapter examines coding constructions that approach AWGN capacity. We assume familiarity with the AWGN capacity formula (Chapter 10), the channel coding theorem for DMCs (Chapter 9), and basic concepts of linear algebra over $\mathbb{R}^n$ and $\mathbb{F}_2$.

  • AWGN channel capacity $C = \frac{1}{2}\log(1 + \mathrm{SNR})$ (Chapter 10)

    Self-check: Can you state the AWGN capacity and explain why Gaussian input is optimal?

  • Channel coding theorem: achievability via random coding, converse via Fano's inequality (Chapter 9)

    Self-check: Can you outline the random coding argument for the DMC?

  • Binary symmetric channel (BSC) and binary erasure channel (BEC) capacities (Chapter 9)

    Self-check: What is the capacity of a BSC with crossover probability $p$?

  • Linear algebra: lattice, basis, Voronoi region, volume of fundamental region

    Self-check: Can you define a lattice in $\mathbb{R}^n$ and sketch its Voronoi region in 2D?

  • Basic coding theory: linear codes, generator and parity-check matrices

    Self-check: Can you define a linear code via its parity-check matrix over $\mathbb{F}_2$?
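As a quick aid for the capacity self-checks above, the three capacity formulas from Chapters 9 and 10 can be sketched in a few lines (a minimal sketch; function names are my own, not from the text):

```python
import math

def awgn_capacity(snr: float) -> float:
    """AWGN capacity in bits per real channel use: C = (1/2) log2(1 + SNR)."""
    return 0.5 * math.log2(1.0 + snr)

def bsc_capacity(p: float) -> float:
    """BSC capacity: C = 1 - H2(p), where H2 is the binary entropy function."""
    if p in (0.0, 1.0):
        return 1.0  # deterministic channel: full capacity
    h2 = -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)
    return 1.0 - h2

def bec_capacity(eps: float) -> float:
    """BEC capacity: C = 1 - eps, the fraction of symbols that survive."""
    return 1.0 - eps
```

For instance, `awgn_capacity(3.0)` gives exactly 1 bit per channel use, since $\frac{1}{2}\log_2(1+3) = 1$, and `bsc_capacity(0.5)` is 0, since a fully random BSC conveys nothing.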

Notation for This Chapter

Key symbols for coding constructions over the Gaussian channel.

| Symbol | Meaning | Introduced |
| --- | --- | --- |
| $\Lambda$ | Lattice in $\mathbb{R}^n$ | s01 |
| $\mathcal{V}(\Lambda)$ | Voronoi region of lattice $\Lambda$ | s01 |
| $\mathrm{Vol}(\Lambda)$ | Volume of the fundamental region of $\Lambda$ | s01 |
| $G(\Lambda)$ | Normalized second moment of $\Lambda$ (measures quantization efficiency) | s01 |
| $W$ | Binary-input DMC (for polarization) | s03 |
| $W_N^{(i)}$ | $i$-th bit-channel after polar transform of size $N$ | s03 |
| $I(W)$ | Symmetric capacity of channel $W$ | s03 |
| $C_{\text{Sh}}$ | Shaping gain (dB) | s04 |
| $C_{\text{Cd}}$ | Coding gain (dB) | s04 |
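To make the lattice notation concrete, the normalized second moment $G(\Lambda)$ can be estimated by Monte Carlo for the simplest case, the cubic lattice $\mathbb{Z}^n$, whose Voronoi region is the unit cube $[-\tfrac{1}{2},\tfrac{1}{2})^n$ with $\mathrm{Vol} = 1$. A minimal sketch (the function name is my own, and this illustrates only the definition, not the chapter's constructions):

```python
import random

def nsm_cubic_lattice(n: int, samples: int = 200_000, seed: int = 0) -> float:
    """Monte Carlo estimate of G(Z^n) = (1/n) E||X||^2 / Vol^(2/n).

    X is drawn uniformly over the Voronoi region of Z^n, i.e. the cube
    [-1/2, 1/2)^n. Since Vol(Z^n) = 1, the estimate reduces to the mean
    of x^2 per dimension, which equals 1/12 for every n.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        total += sum(rng.uniform(-0.5, 0.5) ** 2 for _ in range(n))
    return total / (samples * n)  # Vol(Z^n)^(2/n) = 1, so no further scaling
```

The estimate converges to $G(\mathbb{Z}^n) = \tfrac{1}{12} \approx 0.0833$; better lattices have smaller $G$, which is exactly the quantization (shaping) efficiency the table's $G(\Lambda)$ entry refers to.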