Prerequisites & Notation

Before You Begin

Chapter 20 sits at the interface of three strands. From Part II (BICM) we take the point-to-point coded-modulation framework: the separation of code and modulation, and the capacity benchmark $C = \log_2(1 + \mathrm{SNR})$ against which every later chapter of the book is compared. From Ch. 10 (MIMO capacity) we take the multi-antenna model $\mathbf{Y} = \mathbf{H}\mathbf{X} + \mathbf{w}$, the Rayleigh-fading assumption, and the spatial-multiplexing sum capacity $\log_2 \det(\mathbf{I} + \mathrm{SNR}\,\mathbf{H}\mathbf{H}^{H}/n_t)$. From the MIMO book (Ch. 18, forward-referenced throughout) we borrow the massive-MIMO regime $n_r \gg 1$, in which the normalised Wishart matrix $\tfrac{1}{n_r}\mathbf{H}^{H}\mathbf{H} \to \mathbf{I}$ (channel hardening) and the fading variance collapses. Channel hardening is the main reason the coded-modulation problem simplifies as the array grows: the fade becomes effectively deterministic, and the link budget is dominated not by diversity but by hardware constraints, specifically the cost of high-resolution ADCs and RF chains at 64-256 antennas.
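Channel hardening is easy to see numerically. The sketch below (an illustration with assumed array sizes, not a construction from the chapter) draws i.i.d. $\mathcal{CN}(0,1)$ channel matrices and shows the normalised Gram matrix $\tfrac{1}{n_r}\mathbf{H}^{H}\mathbf{H}$ approaching the identity as $n_r$ grows:

```python
import numpy as np

rng = np.random.default_rng(0)
n_t = 4
devs = []
# As n_r grows, (1/n_r) H^H H -> I_{n_t}: per-entry fluctuations shrink like 1/sqrt(n_r)
for n_r in (8, 64, 512):
    H = (rng.standard_normal((n_r, n_t)) + 1j * rng.standard_normal((n_r, n_t))) / np.sqrt(2)
    G = (H.conj().T @ H) / n_r                      # normalised Wishart matrix
    devs.append(np.linalg.norm(G - np.eye(n_t), "fro"))
    print(f"n_r = {n_r:3d}:  ||(1/n_r) H^H H - I||_F = {devs[-1]:.3f}")
```

The Frobenius deviation falls roughly as $1/\sqrt{n_r}$, which is the quantitative content of "the fade becomes effectively deterministic".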

This chapter is about those hardware constraints and the coded-modulation design choices they force. Two constraints dominate. First, ADC resolution: a 12-bit ADC at a few GSamples/sec burns $\sim 1$ W per receive chain, which at $n_r = 128$ is intractable; the pragmatic answer has been 1-4 bit ADCs and careful coded-modulation design around the resulting quantisation loss. Second, RF chain count: activating $n_t = 64$ full RF chains is again prohibitive, motivating spatial modulation and related index-modulation schemes that embed information in which antennas are active, not in their amplitudes alone. These two ideas, low-resolution quantisation and index-based information transmission, are the operational content of this chapter.

  • BICM and the CM/BICM capacity gap (Ch. 5)

    Self-check: Can you write down the BICM achievable rate $R_{\text{BICM}} = \sum_{i=1}^m I(B_i; Y)$, contrast it with the CM capacity $I(X; Y)$, and identify the conditions under which the gap is small (high SNR, Gray labelling) or large (low SNR, non-Gray)?

  • MIMO channel model and capacity (Ch. 10)

    Self-check: Can you state the ergodic MIMO capacity with CSI at the receiver only, $\mathbb{E}[\log_2 \det(\mathbf{I}_{n_r} + (\mathrm{SNR}/n_t)\, \mathbf{H}\mathbf{H}^{H})]$, and explain why it grows as $\min(n_t, n_r) \log_2 \mathrm{SNR}$ at high SNR?

  • Entropy and mutual information on a binary-symmetric channel (Ch. 4 / Book ITA)

    Self-check: Can you derive the BSC capacity $C_{\text{BSC}} = 1 - h_2(p)$, where $h_2(p) = -p \log_2 p - (1-p)\log_2(1-p)$? This formula is the heart of the 1-bit-quantised capacity proof in §2.

  • Q-function and AWGN error probability

    Self-check: Can you write $Q(x) = \tfrac{1}{\sqrt{2\pi}} \int_x^\infty e^{-t^2/2}\, dt$, recall that $Q(0) = 1/2$ and $Q(x) \approx \tfrac{1}{2} e^{-x^2/2}$ for large $x$, and distinguish this from the quantiser $Q(\cdot)$ used later in this chapter?

  • Basic ADC operation and sampling theorem (Book Telecom Ch. 3)

    Self-check: Can you describe how an analogue-to-digital converter samples at rate $f_s$ and uniformly quantises each sample to one of $2^b$ levels, and explain why each additional bit of resolution costs roughly $2\times$ in power (for a $6\,$dB SNR improvement)?
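The first self-check can be worked numerically. The sketch below (illustrative parameters of my choosing: Gray-labelled 4-PAM at 3 dB SNR, Monte Carlo averaging) estimates both the CM rate $I(X;Y)$ and the BICM rate $\sum_i I(B_i;Y)$, and shows the Gray-labelling gap is small:

```python
import numpy as np

rng = np.random.default_rng(1)
# Gray-labelled 4-PAM: bit pairs 00, 01, 11, 10 on increasing amplitude levels
levels = np.array([-3.0, -1.0, 1.0, 3.0]) / np.sqrt(5.0)  # unit average symbol energy
labels = np.array([[0, 0], [0, 1], [1, 1], [1, 0]])       # Gray labelling
sigma2 = 0.5                                              # noise variance (SNR = 3 dB)
N = 200_000

idx = rng.integers(0, 4, size=N)
y = levels[idx] + rng.normal(scale=np.sqrt(sigma2), size=N)

def avg_lik(y, symbols):
    # Mean Gaussian likelihood of each y over a uniform set of candidate symbols
    return np.mean(np.exp(-(y[:, None] - symbols[None, :])**2 / (2 * sigma2)), axis=1)

p_y = avg_lik(y, levels)
# CM rate: I(X; Y) = E[log2 p(y|x) / p(y)]
r_cm = np.mean(np.log2(np.exp(-(y - levels[idx])**2 / (2 * sigma2)) / p_y))
# BICM rate: sum over bit positions i of I(B_i; Y) = E[log2 p(y|b_i) / p(y)]
r_bicm = 0.0
for i in range(2):
    b = labels[idx, i]
    lik0 = avg_lik(y, levels[labels[:, i] == 0])
    lik1 = avg_lik(y, levels[labels[:, i] == 1])
    r_bicm += np.mean(np.log2(np.where(b == 0, lik0, lik1) / p_y))
print(f"CM rate ~ {r_cm:.3f} bits/use,  BICM rate ~ {r_bicm:.3f} bits/use")
```

Swapping `labels` for a non-Gray labelling (e.g. natural binary on the same levels) widens the gap, which is exactly the point of the self-check.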
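The ergodic MIMO capacity in the second self-check has no simple closed form, but a Monte Carlo estimate is a few lines. A sketch, with assumed parameters (a $4 \times 4$ Rayleigh channel at 20 dB SNR):

```python
import numpy as np

rng = np.random.default_rng(2)
n_t, n_r, snr, trials = 4, 4, 100.0, 2000   # 4x4 Rayleigh MIMO, SNR = 20 dB

caps = []
for _ in range(trials):
    H = (rng.standard_normal((n_r, n_t)) + 1j * rng.standard_normal((n_r, n_t))) / np.sqrt(2)
    # log2 det(I + (SNR/n_t) H H^H), computed stably via slogdet
    _, logdet = np.linalg.slogdet(np.eye(n_r) + (snr / n_t) * (H @ H.conj().T))
    caps.append(logdet / np.log(2))
print(f"ergodic capacity ~ {np.mean(caps):.1f} bits/use; "
      f"min(n_t, n_r) * log2(SNR) = {min(n_t, n_r) * np.log2(snr):.1f}")
```

The estimate lands near $\min(n_t,n_r)\log_2 \mathrm{SNR}$, confirming the high-SNR slope of $\min(n_t, n_r)$ bits per 3 dB.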
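The BSC capacity formula from the third self-check can be checked at its three landmark points. A minimal sketch:

```python
import numpy as np

def h2(p):
    """Binary entropy in bits, with h2(0) = h2(1) = 0 by convention."""
    p = np.asarray(p, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        out = -p * np.log2(p) - (1 - p) * np.log2(1 - p)
    return np.where((p == 0) | (p == 1), 0.0, out)

def c_bsc(p):
    # C_BSC = 1 - h2(p)
    return 1.0 - h2(p)

print(c_bsc(0.0))    # noiseless channel: capacity 1 bit
print(c_bsc(0.5))    # useless channel: capacity 0
print(c_bsc(0.11))   # ~0.5 bits: h2(0.11) is almost exactly 1/2
```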
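The Q-function facts in the fourth self-check follow from the identity $Q(x) = \tfrac{1}{2}\operatorname{erfc}(x/\sqrt{2})$; the sketch below also shows that $\tfrac{1}{2}e^{-x^2/2}$ upper-bounds the exact tail while matching its exponential decay:

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x), via the complementary erf."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

print(q_func(0.0))                            # 0.5 exactly
x = 4.0
print(q_func(x), 0.5 * math.exp(-x * x / 2))  # exact tail vs the exponential bound
```

Note this `q_func` is the tail probability only; the quantiser $Q(\cdot)$ of this chapter is an unrelated deterministic map.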
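The "6 dB per bit" rule in the last self-check, $\mathrm{SNR}_q = 6.02\,b + 1.76$ dB, can be verified empirically. A sketch (a uniform mid-rise quantiser of my own construction applied to a full-scale sinusoid):

```python
import numpy as np

def adc_snr_db(b, n=100_000):
    """Empirical SNR of a uniform b-bit mid-rise quantiser on a full-scale sinusoid."""
    t = np.arange(n)
    x = np.sin(2 * np.pi * 0.01234567 * t)      # full-scale sine, non-commensurate frequency
    step = 2.0 / 2**b                           # quantisation step over [-1, 1)
    xq = np.floor(x / step) * step + step / 2   # mid-rise uniform quantiser
    xq = np.clip(xq, -1 + step / 2, 1 - step / 2)
    err = x - xq                                # quantisation error, ~uniform on [-step/2, step/2]
    return 10 * np.log10(np.mean(x**2) / np.mean(err**2))

for b in (4, 8, 12):
    print(f"b = {b:2d}: measured {adc_snr_db(b):5.1f} dB vs 6.02b + 1.76 = {6.02 * b + 1.76:5.1f} dB")
```

The measured figures track the rule to within a fraction of a dB, since the error variance is $\Delta^2/12$ and the sine power is $1/2$.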

Notation for This Chapter

Notation combines the MIMO symbols from Ch. 10 (channel matrix $\mathbf{H}$, signal-to-noise ratio $\mathrm{SNR}$, transmitted codeword $\mathbf{X}$) with chapter-specific symbols for quantisation (ADC bits $b$, quantiser $Q(\cdot)$, quantisation-SNR $\mathrm{SNR}_q$) and for index modulation (active antennas $n_a$, antenna index $k$). Note the potential collision with the Q-function $Q(\cdot)$: the quantiser $Q(\cdot)$ is a deterministic nonlinear function that maps real or complex numbers to a finite alphabet, whereas $Q$ is the Gaussian tail probability. They are different objects, and the text carefully distinguishes them.

| Symbol | Meaning | Introduced |
| --- | --- | --- |
| $n_t, n_r$ | Number of transmit and receive antennas | §1 |
| $n_a$ | Number of active transmit antennas in Generalised Spatial Modulation ($1 \le n_a \le n_t$) | §4 |
| $b$ | Number of bits per sample at the ADC (resolution) | §1 |
| $Q(\cdot)$ | Quantiser: deterministic map from $\mathbb{R}$ (or $\mathbb{C}$) to a $2^b$-level alphabet. NOT the Q-function. | §1 |
| $\mathrm{SNR}_q$ | Quantisation SNR: $\mathrm{SNR}_q = 6.02\, b + 1.76$ dB for a uniform $b$-bit ADC on a full-scale sinusoid | §1 |
| $\mathbf{H} \in \mathbb{C}^{n_r \times n_t}$ | MIMO channel matrix (i.i.d. $\mathcal{CN}(0,1)$ under Rayleigh) | §2 |
| $\mathrm{SNR}$ | Receive signal-to-noise ratio per antenna | §2 |
| $C$ | Capacity (bits per channel use) | §2 |
| $Q(\cdot)$ | Gaussian tail probability (Q-function); distinct from the quantiser $Q(\cdot)$ | §2 |
| $h_2(p)$ | Binary entropy function: $-p \log_2 p - (1-p) \log_2(1-p)$ | §2 |
| $M$ | QAM constellation size (symbols per transmit antenna) | §3 |
| $k$ | Antenna index, $k \in \{1, 2, \ldots, n_t\}$ | §3 |
| $\mathbf{X}$ | Transmitted codeword / signal vector in $\mathbb{C}^{n_t}$ | §2 |
| $\mathbf{w} \in \mathbb{C}^{n_r}$ | Receiver noise, $\mathcal{CN}(0, \sigma^2 \mathbf{I})$ | §2 |
| $R_{\mathrm{SM}}$ | Spatial Modulation rate: $R_{\mathrm{SM}} = \log_2 n_t + \log_2 M$ bits/ch.use | §3 |
| $R_{\mathrm{GSM}}$ | Generalised SM rate: $\lfloor \log_2 \binom{n_t}{n_a} \rfloor + n_a \log_2 M$ bits/ch.use | §4 |
| $N_{\mathrm{RF}}$ | Number of active RF chains (digital-to-RF conversions per Tx) | §5 |
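The two rate formulas in the table, $R_{\mathrm{SM}}$ and $R_{\mathrm{GSM}}$, are simple enough to evaluate directly. A minimal sketch (array sizes and constellation chosen for illustration):

```python
import math

def r_sm(n_t, M):
    # Spatial Modulation: log2(n_t) antenna-index bits plus one M-QAM symbol
    return math.log2(n_t) + math.log2(M)

def r_gsm(n_t, n_a, M):
    # Generalised SM: floor(log2 C(n_t, n_a)) index bits plus n_a M-QAM symbols
    return math.floor(math.log2(math.comb(n_t, n_a))) + n_a * math.log2(M)

print(r_sm(4, 16))     # log2 4 + log2 16 = 2 + 4 = 6 bits/ch.use
print(r_gsm(8, 3, 4))  # floor(log2 56) + 3*2 = 5 + 6 = 11 bits/ch.use
```

The floor in $R_{\mathrm{GSM}}$ is the price of mapping an integer number of bits onto $\binom{n_t}{n_a}$ antenna patterns; the fractional remainder of $\log_2\binom{n_t}{n_a}$ is simply discarded.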