Prerequisites & Notation

Before You Begin

This chapter opens Book CM by revisiting signal-space coding through the eyes of someone who already knows about codes, Gaussian channels, and capacity. The reader is expected to have worked through the following material, or to have it at hand while reading.

  • AWGN channel model, matched filtering, and signal-space representation (Review ch08)

    Self-check: Can you state the discrete-time AWGN model $y = x + w$ with $w \sim \mathcal{N}(0, N_0/2)$ and explain the $E_s/N_0$ vs. $E_b/N_0$ normalization?

  • Shannon channel capacity for the AWGN channel and mutual information basics (Review ch09)

    Self-check: Can you derive $C = W \log_2(1 + \mathrm{SNR})$ and explain why this is an upper bound on any reliable communication rate?

  • Basic binary channel coding: Hamming distance, minimum distance, union bound (Review ch11)

    Self-check: Can you state a union bound on the word-error probability of a linear binary code in terms of its weight enumerator and the pairwise error probability over AWGN?

  • M-PAM, M-PSK, and M-QAM constellations, Gray labeling, uncoded error probability (Review ch08)

    Self-check: Can you sketch 16-QAM, write the uncoded BER under Gray labeling, and relate $d_{\min}^2$ to $E_s$ for a square QAM?

  • Probability, Gaussian tail bounds, and the $Q$ function (Review ch04)

    Self-check: Can you state the Chernoff bound $Q(x) \leq \tfrac{1}{2} e^{-x^2/2}$ and explain how it differs from the exact value?
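A quick numerical sanity check is a good way to answer the last self-check. The sketch below (assuming NumPy and SciPy are available; the function names are illustrative, not from the book) evaluates the exact Gaussian tail $Q(x)$ via the complementary error function and compares it against the Chernoff bound $Q(x) \leq \tfrac{1}{2} e^{-x^2/2}$:

```python
import numpy as np
from scipy.special import erfc

def qfunc(x):
    """Exact Gaussian tail: Q(x) = integral from x to infinity of the N(0,1) pdf."""
    return 0.5 * erfc(x / np.sqrt(2.0))

def chernoff_bound(x):
    """Chernoff-type bound Q(x) <= (1/2) exp(-x^2 / 2), valid for x >= 0."""
    return 0.5 * np.exp(-x ** 2 / 2.0)

for x in [1.0, 2.0, 4.0]:
    exact, bound = qfunc(x), chernoff_bound(x)
    print(f"x = {x}: Q(x) = {exact:.3e}, bound = {bound:.3e}, ratio = {bound / exact:.2f}")
```

Running this shows the bound has the right exponential decay but an increasingly loose constant as $x$ grows, which is exactly the distinction the self-check is probing.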

Notation for This Chapter

Symbols used throughout Chapter 1. In later chapters of Book CM we will extend this with lattice-theoretic, MIMO, and coding-specific notation.

| Symbol | Meaning | Introduced |
|---|---|---|
| $\mathcal{X} \subset \mathbb{R}^N$ | Signal-space constellation (finite set of transmitted vectors) | s01 |
| $M = \lvert\mathcal{X}\rvert$ | Constellation size (number of code points) | s01 |
| $N$ | Signal-space dimension (real dimensions per code point) | s01 |
| $R$ | Information rate in bits per channel use (or bits/2D for 2D constellations) | s01 |
| $\eta$ | Spectral efficiency in bits/s/Hz (bits per 2D dimension pair) | s02 |
| $E_s, E_b$ | Average energy per symbol and per information bit; $E_s = R\,E_b$ | s01 |
| $N_0$ | One-sided noise power spectral density (noise variance per real dimension is $N_0/2$) | s01 |
| $\mathrm{SNR}$ | Signal-to-noise ratio, typically $E_s/N_0$ in this chapter | s01 |
| $C$ | Channel capacity (bits per channel use or bits/s/Hz) | s02 |
| $d_{\min}, d_{\mathrm{E}}$ | Minimum Euclidean distance of the constellation | s01 |
| $\gamma_c$ | Coding gain (dB) relative to a baseline uncoded constellation | s01 |
| $\gamma_s$ | Shaping gain (dB); ultimate limit $\pi e / 6 \approx 1.53$ dB | s03 |
| $P_e$ | Symbol (or codeword) error probability | s01 |
| $Q(x)$ | Gaussian tail: $Q(x) = \int_x^\infty \tfrac{1}{\sqrt{2\pi}} e^{-t^2/2}\,dt$ | s01 |
| $\mathbf{x} \to \hat{\mathbf{x}}$ | Pairwise error event: transmitted $\mathbf{x}$, decoded $\hat{\mathbf{x}}$ | s04 |
| $\boldsymbol{\Delta} = \mathbf{x} - \hat{\mathbf{x}}$ | Error vector in signal space | s04 |
| $\Lambda$ | Lattice (infinite point set in $\mathbb{R}^N$ with group structure) | s03 |
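To see several of these symbols work together, here is a minimal sketch (my own illustrative code, not from the book) that builds a square 16-QAM constellation $\mathcal{X}$ on the odd-integer grid, computes the average symbol energy $E_s$, and checks the standard relation $d_{\min}^2 / E_s = 6/(M-1)$ for square QAM:

```python
import numpy as np

def square_qam(M):
    """Square M-QAM constellation on the odd-integer grid, so d_min = 2."""
    m = int(np.sqrt(M))
    assert m * m == M, "M must be a perfect square"
    pam = np.arange(-(m - 1), m, 2)      # e.g. [-3, -1, 1, 3] for m = 4
    I, Q = np.meshgrid(pam, pam)
    return (I + 1j * Q).ravel()          # M complex (2D) constellation points

X = square_qam(16)
Es = np.mean(np.abs(X) ** 2)             # average energy per 2D symbol; Es = 10 here
dmin = 2.0
print(Es, dmin ** 2 / Es)                # d_min^2 / Es = 6/(M-1) = 0.4 for M = 16
```

Normalizations like $d_{\min}^2 / E_s$ recur throughout the chapter when comparing constellations at equal energy, so it is worth being comfortable computing them.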