Prerequisites & Notation

Before You Begin

Chapter 22 is the research-frontier chapter of the book. It does not introduce a new self-contained theory; it surveys four open research areas (non-coherent space-time coding, finite-blocklength URLLC, deep learning for physical-layer codes, and optical-fibre coded modulation) and closes the book with a forward-looking map. Every section draws heavily on earlier material: think of this chapter as a research reading list with the prerequisites annotated.

What the reader should bring. Comfort with (i) the BICM framework of Ch. 5 and the gap between Shannon capacity and BICM-achievable rate; (ii) the MIMO capacity formula $C(\mathbf{H}) = \log\det(\mathbf{I} + \text{SNR}\,\mathbf{H}\mathbf{H}^{H})$ of Ch. 10 and the DMT curve of Ch. 12; (iii) the algebraic (CDA-NVD, Ch. 13) and lattice (LAST, Ch. 17) routes to DMT-optimal codes; (iv) the PAS construction and distribution matching of Ch. 19; (v) the practice of BICM-OFDM and high-mobility channels from Ch. 21. Readers coming directly from Ch. 5 will have enough background for §2 and §3, but §1 and §4 draw on Ch. 10-13 and Ch. 17.

What is genuinely new here. Four pieces of modern machinery not used elsewhere in the book: (a) the Grassmannian geometry of non-coherent MIMO signalling, (b) the Polyanskiy-Poor-Verdú finite-blocklength normal approximation, (c) the autoencoder view of the physical layer, and (d) the GN model for optical fibre propagation. Each is introduced at the minimum depth needed to appreciate the open questions that remain.

  • BICM framework and BICM capacity (review Ch. 5)

    Self-check: Can you state the BICM capacity formula and recall the gap between $C_{\text{BICM}}$ and the CM Shannon capacity for Gray-labelled QAM at moderate SNR (well below $0.1$ bits/channel use)? Do you see why BICM is the dominant design paradigm in standards like 5G NR and 400ZR, even though it is not capacity-achieving? (A Monte Carlo check of this gap appears after this list.)

  • MIMO capacity and the Zheng-Tse DMT (review Ch. 12)

    Self-check: Can you state $C(\mathbf{H}) = \log\det(\mathbf{I} + \text{SNR}\,\mathbf{H}\mathbf{H}^{H})$ and the DMT curve $d^*(r) = (n_t - r)(n_r - r)$? These are the coherent results; §1 asks what changes when the receiver does not know $\mathbf{H}$. (A numerical sketch of both appears after this list.)

  • LAST and CDA-NVD constructions (review Ch. 13 and Ch. 17)

    Self-check: Can you recall why both CDA-NVD (algebraic) and LAST + MMSE-GDFE (lattice) achieve the DMT? The non-coherent DMT problem of §1 asks whether a structural result of this calibre exists when channel knowledge is absent — and the honest answer is: not yet.

  • Probabilistic amplitude shaping (PAS) and geometric shaping (review Ch. 19)

    Self-check: Can you state the PAS construction (distribution matcher + systematic LDPC + sign bit) and the ultimate shaping gain toward capacity ($10\log_{10}(\pi e/6) \approx 1.53$ dB at high SNR)? §4 will discuss PAS for optical-fibre coherent systems (400ZR/800ZR), which use this construction essentially verbatim.

  • Central limit theorem and the $Q$-function

    Self-check: Do you remember that $Q(x) = \int_x^\infty \tfrac{1}{\sqrt{2\pi}}\, e^{-t^2/2}\, dt$ and that $Q^{-1}(\epsilon)$ is its inverse? The finite-blocklength bound of §2 takes the form $R(n, \epsilon) \approx C - \sqrt{V/n}\, Q^{-1}(\epsilon)$, a CLT-like correction to Shannon's capacity. (A numerical sketch appears after this list.)

  • Neural network basics: gradient descent and backpropagation

    Self-check: Can you recall that a neural network is a composition of parameterised affine maps and nonlinearities, trained to minimise a loss function by stochastic gradient descent? §3 treats an encoder-decoder pair as a pair of such networks with a differentiable channel layer between them (a minimal sketch of this architecture appears after this list).

  • Optical fibre basics: Kerr nonlinearity and chromatic dispersion

    Self-check: Do you know that an optical fibre has both a linear effect (chromatic dispersion: frequency-dependent group velocity) and a nonlinear effect (Kerr: intensity-dependent refractive index, $n = n_0 + n_2|E|^2$)? §4 explains why the interaction of these two effects destroys the Shannon-type capacity picture for fibre-optic communication. (The GN-model sketch after this list shows the resulting non-monotonic rate-versus-power curve.)
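
The first self-check above asks for the gap between $C_{\text{BICM}}$ and the CM mutual information for Gray-labelled QAM. The sketch below estimates both by Monte Carlo for 16-QAM on the complex AWGN channel; the labelling, SNR grid, and sample size are illustrative choices, not taken from the book.

```python
# Monte Carlo estimate of C_CM = I(X;Y) and C_BICM = sum_i I(B_i;Y) for
# Gray-labelled 16-QAM on the complex AWGN channel (uniform inputs).
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(0)

# Gray-labelled 16-QAM: two Gray-coded bits per rail, normalised to E|x|^2 = 1
gray_pam = {(0, 0): -3.0, (0, 1): -1.0, (1, 1): 1.0, (1, 0): 3.0}
const, labels = [], []
for b in range(16):
    bits = [(b >> k) & 1 for k in (3, 2, 1, 0)]
    const.append(gray_pam[(bits[0], bits[1])] + 1j * gray_pam[(bits[2], bits[3])])
    labels.append(bits)
const = np.asarray(const) / np.sqrt(10.0)
labels = np.asarray(labels)                  # (16, 4) bit labels

def cm_and_bicm(snr_db, n=100_000):
    sigma2 = 10 ** (-snr_db / 10)            # SNR = Es/N0 with Es = 1
    idx = rng.integers(16, size=n)
    noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    y = const[idx] + noise
    metric = -np.abs(y[:, None] - const[None, :]) ** 2 / sigma2  # log p(y|x') + const
    log_all = logsumexp(metric, axis=1)
    # CM mutual information: I(X;Y) = 4 - E[-log2 p(x|y)]
    cm = 4 - np.mean(log_all - metric[np.arange(n), idx]) / np.log(2)
    # BICM rate: 4 - sum_i E[-log2 p(b_i|y)], with p(b_i|y) from subconstellations
    bicm = 4.0
    for i in range(4):
        den0 = logsumexp(metric[:, labels[:, i] == 0], axis=1)
        den1 = logsumexp(metric[:, labels[:, i] == 1], axis=1)
        den = np.where(labels[idx, i] == 0, den0, den1)
        bicm -= np.mean(log_all - den) / np.log(2)
    return cm, bicm

for snr_db in (5, 10, 15):
    cm, bicm = cm_and_bicm(snr_db)
    print(f"{snr_db:2d} dB: C_CM ≈ {cm:.3f}, C_BICM ≈ {bicm:.3f}, gap ≈ {cm - bicm:.4f} bit")
```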
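For the MIMO self-check, the following sketch evaluates the ergodic Rayleigh-fading capacity $\mathbb{E}[\log_2\det(\mathbf{I} + \text{SNR}\,\mathbf{H}\mathbf{H}^{H})]$ by Monte Carlo and tabulates the Zheng-Tse curve $d^*(r) = (n_t - r)(n_r - r)$ at integer $r$. The $2\times 2$ setup and the convention that SNR multiplies $\mathbf{H}\mathbf{H}^{H}$ directly (no per-antenna normalisation) follow the formula as stated above; other texts divide by $n_t$.

```python
# Monte Carlo estimate of the ergodic capacity E[log2 det(I + SNR H H^H)] for
# i.i.d. Rayleigh fading, plus the Zheng-Tse DMT d*(r) = (nt - r)(nr - r).
import numpy as np

rng = np.random.default_rng(1)
nt, nr = 2, 2  # illustrative antenna counts

def ergodic_capacity(snr_db, trials=20_000):
    snr = 10 ** (snr_db / 10)
    total = 0.0
    for _ in range(trials):
        H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
        _, logdet = np.linalg.slogdet(np.eye(nr) + snr * H @ H.conj().T)
        total += logdet / np.log(2)
    return total / trials

for snr_db in (0, 10, 20, 30):
    print(f"SNR {snr_db:2d} dB: C ≈ {ergodic_capacity(snr_db):6.2f} bit / channel use")

# Coherent DMT at integer multiplexing gains (linear interpolation in between)
for r in range(min(nt, nr) + 1):
    print(f"r = {r}: d*(r) = {(nt - r) * (nr - r)}")
```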
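For the $Q$-function self-check, this sketch evaluates the normal approximation on the complex AWGN channel, using the standard Polyanskiy-Poor-Verdú expressions $C = \log_2(1+\text{SNR})$ and $V = \big(1 - (1+\text{SNR})^{-2}\big)\log_2^2 e$; the $O(\log n / n)$ term is dropped, and the SNR and BLER values are illustrative.

```python
# Normal approximation R(n, eps) ≈ C - sqrt(V/n) * Q^{-1}(eps), complex AWGN.
import numpy as np
from scipy.stats import norm

def normal_approx_rate(snr_db, n, eps):
    snr = 10 ** (snr_db / 10)
    C = np.log2(1 + snr)                                  # capacity, bit / channel use
    V = (1 - 1 / (1 + snr) ** 2) * np.log2(np.e) ** 2     # channel dispersion
    return C - np.sqrt(V / n) * norm.isf(eps)             # norm.isf(eps) = Q^{-1}(eps)

snr_db, eps = 10.0, 1e-5
print(f"C = {np.log2(1 + 10 ** (snr_db / 10)):.3f} bit / channel use")
for n in (100, 500, 1_000, 10_000):
    print(f"n = {n:6d}: R(n, eps) ≈ {normal_approx_rate(snr_db, n, eps):.3f}")
```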
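For the neural-network self-check, here is a minimal numpy sketch of the §3 architecture: a one-hot message through an encoder $f_\theta$, a power-normalised AWGN channel layer, and a decoder $g_\phi$ scored by cross-entropy. The layer sizes, noise level, and seven-channel-use frame are illustrative assumptions; the parameters are left random (untrained), since the point is the differentiable forward path, not a training recipe.

```python
# Forward pass of a toy physical-layer autoencoder: message -> f_theta -> AWGN
# -> g_phi -> posterior over messages. Training would run SGD on (theta, phi)
# through this same path, e.g. in PyTorch or JAX.
import numpy as np

rng = np.random.default_rng(2)
M, n_uses = 16, 7             # 16 messages, 7 complex channel uses (illustrative)
d_in, d_out = M, 2 * n_uses   # real and imaginary parts stacked

relu = lambda z: np.maximum(z, 0.0)

# Random (untrained) encoder parameters theta and decoder parameters phi
W1, b1 = 0.1 * rng.standard_normal((d_in, 64)), np.zeros(64)
W2, b2 = 0.1 * rng.standard_normal((64, d_out)), np.zeros(d_out)
V1, c1 = 0.1 * rng.standard_normal((d_out, 64)), np.zeros(64)
V2, c2 = 0.1 * rng.standard_normal((64, M)), np.zeros(M)

def encoder(s_onehot):
    x = relu(s_onehot @ W1 + b1) @ W2 + b2
    return x / np.sqrt(np.mean(x ** 2))        # batch average-power constraint

def decoder(y):
    logits = relu(y @ V1 + c1) @ V2 + c2
    p = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return p / p.sum(axis=-1, keepdims=True)   # softmax posterior over messages

# One minibatch through the differentiable channel layer
msgs = rng.integers(M, size=256)
s = np.eye(M)[msgs]
x = encoder(s)
y = x + np.sqrt(0.05) * rng.standard_normal(x.shape)   # AWGN layer
p = decoder(y)
loss = -np.mean(np.log(p[np.arange(len(msgs)), msgs] + 1e-12))
print(f"cross-entropy before training: {loss:.3f}  (log M = {np.log(M):.3f})")
```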
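For the fibre self-check, the GN model of §4 replaces the AWGN SNR with $P_{\rm launch}/(P_{\rm ASE} + \eta P_{\rm launch}^3)$: nonlinear interference grows cubically with launch power, so the rate estimate $C_{\rm GN}$ first rises, peaks, then falls. The sketch below uses placeholder values for $P_{\rm ASE}$ and $\eta$ (not fitted to any real link) and checks the grid peak against the analytic optimum $(P_{\rm ASE}/2\eta)^{1/3}$.

```python
# GN-model rate estimate versus launch power: linear ASE floor plus cubic Kerr
# nonlinear interference gives a non-monotonic C_GN(P_launch).
import numpy as np

# Placeholder link parameters (illustrative); all powers in the same linear units
P_ase = 1e-3    # accumulated amplifier (ASE) noise power
eta = 0.1       # GN-model nonlinear-interference coefficient

P = np.logspace(-2, 1, 13)                  # launch-power sweep
snr_eff = P / (P_ase + eta * P ** 3)        # effective SNR under the GN model
C_gn = np.log2(1 + snr_eff)                 # per-polarisation rate estimate

for p, c in zip(P, C_gn):
    print(f"P_launch = {p:7.3f}: C_GN ≈ {c:5.2f} bit / symbol")
print(f"grid optimum:     P ≈ {P[np.argmax(C_gn)]:.3f}")
print(f"analytic optimum: (P_ase / (2 eta))^(1/3) = {(P_ase / (2 * eta)) ** (1 / 3):.3f}")
```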

Notation for This Chapter

Since this chapter surveys four distinct research areas, the notation is grouped by area. Symbols local to one section are reused freely (e.g., $T$ is the coherence block length in §1 and also the code blocklength in §2, but context disambiguates). All MIMO symbols are inherited from Ch. 10-13; we use standard notation for the new objects: $V$ for channel dispersion, $\theta, \phi$ for the autoencoder parameters, $P_{\rm launch}$ for the fibre launch power.

| Symbol | Meaning | Introduced |
|---|---|---|
| $T$ | Coherence block length (non-coherent MIMO, §1) or code blocklength (URLLC, §2) | §1 |
| $n_t, n_r$ | Number of transmit and receive antennas | §1 |
| $n_t^\star$ | Effective non-coherent degrees of freedom $\min(n_t, n_r, \lfloor T/2 \rfloor)$ | §1 |
| $\mathcal{G}_{T,n_t}(\mathbb{C})$ | Complex Grassmannian of $n_t$-planes in $\mathbb{C}^T$ | §1 |
| $n$ | Code blocklength (number of channel uses per codeword) | §2 |
| $\epsilon$ | Target block error rate (BLER) | §2 |
| $V$ | Channel dispersion (variance of the information density) | §2 |
| $Q(\cdot), Q^{-1}(\cdot)$ | Gaussian tail function and its inverse | §2 |
| $R(n, \epsilon)$ | Maximum achievable rate at blocklength $n$ and BLER $\epsilon$ | §2 |
| $f_\theta, g_\phi$ | Encoder and decoder neural networks with parameters $\theta, \phi$ | §3 |
| $\mathcal{L}$ | Autoencoder training loss (cross-entropy or binary cross-entropy) | §3 |
| $P_{\rm launch}$ | Optical launch power into the fibre (per channel, $\rm mW$) | §4 |
| $L_{\rm span}$ | Fibre span length ($\rm km$); $L_{\rm link}$ for the end-to-end link | §4 |
| $\gamma_{\rm K}$ | Kerr nonlinear coefficient ($\rm W^{-1}\,km^{-1}$); $n_2$ in some references | §4 |
| $\beta_2$ | Group-velocity-dispersion (GVD) parameter ($\rm ps^2/km$) | §4 |
| $C_{\rm GN}(P_{\rm launch})$ | Gaussian-noise (GN) model rate estimate at launch power $P_{\rm launch}$ | §4 |