Prerequisites & Notation

Before You Begin

Chapters 5–7 gave the "one-shot" BICM story: a bit interleaver feeds a binary code's output into a constellation mapper, the receiver demaps each channel observation into per-bit LLRs, and the decoder takes those LLRs as final. Capacity, PEP, and error exponents were analysed under this sequential data flow. Chapter 8 closes the loop. The decoder's soft output is fed BACK to the demapper as a priori information, the demapper refines its LLRs, and the two boxes iterate to convergence. The reader should be comfortable with BICM's bit-metric LLR, with soft-in/soft-out (SISO) decoding of binary codes, and with the Gaussian-approximation machinery behind mutual-information analysis.

  • BICM encoder/decoder, bit metric, and LLR extraction (Review ch05)

    Self-check: Can you write the BICM bit-metric LLR $\lambda_\ell(y) = \log \sum_{s \in \mathcal{X}_\ell^{(0)}} p(y \mid s) - \log \sum_{s \in \mathcal{X}_\ell^{(1)}} p(y \mid s)$ and explain why, on a Gaussian channel, it is a function of Euclidean distances?

  • Soft-in/soft-out (SISO) decoding of binary codes (Review ch05)

    Self-check: Can you describe what a SISO decoder does, namely take per-bit a priori LLRs and channel LLRs as input and return per-bit a posteriori LLRs along with their EXTRINSIC components (a posteriori minus input)?

  • Set-partition (Ungerboeck) labelling vs Gray labelling (Review ch02)

    Self-check: Can you state the Ungerboeck set-partition chain rule and explain why SP labelling maximises the intra-subset minimum distance while Gray labelling maximises the average squared distance $d^2_{\rm avg}$?

  • BICM union-bound BER and the role of the labelling (Review ch06)

    Self-check: Can you explain why Gray labelling outperforms SP labelling under one-shot BICM decoding on AWGN, and why the conclusion flips on fully-interleaved fading?

  • BICM capacity and the gap to CM capacity (Review ch05)

    Self-check: Can you sketch the $C_{\rm BICM}(\mu)$ versus $C_{\rm CM}$ curves for 16-QAM and identify the SNR range where the gap is most visible?

  • Mutual information, entropy, and the data-processing inequality (Review ch02)

    Self-check: Can you compute the mutual information between a binary input and a continuous output numerically, and state why reducing the output to a sufficient statistic does not lose mutual information?

  • Binary-input AWGN channel and the consistent-Gaussian LLR model (Review ch03)

    Self-check: Can you state that for BI-AWGN with SNR $\gamma$, the LLR is Gaussian with mean $\pm 4\gamma$ and variance $8\gamma$, i.e. the "consistent" Gaussian with mean equal to half the variance that underlies the J-function?
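The first and last of these self-checks can be exercised in a few lines. The sketch below is illustrative, not from the text: the Gray-labelled 4-PAM constellation and the noise variance are assumed choices, and the function simply evaluates the bit-metric LLR formula term by term, making visible that with Gaussian noise each likelihood depends on $y$ only through the Euclidean distance to a symbol.

```python
# A small sketch (constellation and noise level are illustrative choices):
# the BICM bit-metric LLR for Gray-labelled 4-PAM on a real AWGN channel,
# i.e. log-sum of p(y|s) over the subset labelled 0 at position ell, minus
# the log-sum over the subset labelled 1.
import math

# symbol -> (b1, b0); Gray labelling: adjacent symbols differ in one bit
CONSTELLATION = {-3.0: (0, 0), -1.0: (0, 1), +1.0: (1, 1), +3.0: (1, 0)}
SIGMA2 = 0.5  # noise variance (assumed)

def bit_llr(y, ell):
    """Bit-metric LLR at position ell. Each p(y|s) term depends on y
    only through the squared Euclidean distance (y - s)^2."""
    num = sum(math.exp(-(y - s) ** 2 / (2 * SIGMA2))
              for s, bits in CONSTELLATION.items() if bits[ell] == 0)
    den = sum(math.exp(-(y - s) ** 2 / (2 * SIGMA2))
              for s, bits in CONSTELLATION.items() if bits[ell] == 1)
    return math.log(num) - math.log(den)

print(bit_llr(-2.8, 0) > 0)   # True: y near -3 makes b1 = 0 very likely
print(abs(bit_llr(0.0, 0)))   # ~0: y = 0 is equidistant from both subsets
```

Note the sign convention matches the formula above: a positive LLR favours the bit value 0.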

Notation for This Chapter

Chapters 5–7's BICM notation (constellation $\mathcal{X}$, labelling $\mu$, bit subsets $\mathcal{X}_\ell^{(b)}$, bit-metric LLR $\lambda_\ell$) carries over. The symbols below are the additional ones introduced for EXIT analysis and iterative decoding.

| Symbol | Meaning | Introduced |
| --- | --- | --- |
| $\lambda$, $\lambda_{\rm ch}$, $\lambda_A$, $\lambda_E$ | LLR variables: total LLR $\lambda$, channel LLR $\lambda_{\rm ch}$, a-priori LLR $\lambda_A$, extrinsic LLR $\lambda_E$; $\lambda_E = \lambda - \lambda_A$ | s01 |
| $I_A$, $I_E$ | A-priori mutual information $I_A = I(B; \lambda_A)$ and extrinsic mutual information $I_E = I(B; \lambda_E)$, both valued in $[0, 1]$ | s02 |
| $J(\sigma)$ | The J-function: mutual information between a binary $\pm 1$ input and a consistent-Gaussian LLR of standard deviation $\sigma$ (mean $\sigma^2/2$) | s02 |
| $T_{\mathrm{dem}}(I_A, \mathrm{SNR})$ | Demapper EXIT curve: maps a-priori MI at the demapper input to extrinsic MI at its output, parametrised by $\mathrm{SNR}$ | s02 |
| $T_{\mathrm{dec}}(I_A, R)$ | Decoder EXIT curve: maps a-priori MI at the decoder input (from the demapper) to extrinsic MI at its output, parametrised by code rate $R$ | s02 |
| $T_{\mathrm{dec}}^{-1}(\cdot, R)$ | Inverse decoder EXIT curve, plotted on the same axes as $T_{\mathrm{dem}}$ for the convergence-tunnel visualisation | s03 |
| $\Delta(I_A)$ | Tunnel width at a-priori level $I_A$: $\Delta(I_A) = T_{\mathrm{dem}}(I_A, \mathrm{SNR}) - T_{\mathrm{dec}}^{-1}(I_A, R)$ | s03 |
| $\mathrm{SNR}_{\mathrm{conv}}$, $E_b/N_0\vert_{\mathrm{conv}}$ | Convergence threshold: the smallest SNR (or $E_b/N_0$) at which the tunnel is open, $\min_{I_A \in [0,1)} \Delta(I_A) > 0$ | s03 |
| $\mu_G$, $\mu_{\rm SP}$, $\mu_{\rm aG}$ | Gray, set-partition, and anti-Gray labellings respectively; $\mu_{\rm aG}$ is the bitwise complement of $\mu_G$ on a rotated constellation | s04 |
| $R^*(\mathrm{SNR}, \mu)$ | Maximum achievable code rate at the given SNR under labelling $\mu$ such that the inverted decoder curve lies below the demapper curve (the EXIT-matched rate) | s05 |
| $\{\lambda_i\}_{i=0}^{d_v}$ | Variable-node degree distribution of an LDPC code from the edge perspective: $\lambda_i$ is the fraction of edges attached to a variable node of degree $i$ | s05 |
| $\{\rho_j\}_{j=0}^{d_c}$ | Check-node degree distribution of an LDPC code from the edge perspective | s05 |
| $t$ | Iteration index in BICM-ID, $t = 0, 1, 2, \ldots$; $I_E^{(t)}$ is the extrinsic MI after iteration $t$ | s03 |
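Two of the quantities in the table lend themselves to quick numerical sanity checks. The sketch below is a toy, not the chapter's method: $J(\sigma)$ is estimated by Monte Carlo from the consistent-Gaussian LLR model, and the tunnel-openness condition $\min_{I_A} \Delta(I_A) > 0$ is tested with made-up stand-in curves for $T_{\mathrm{dem}}$ and $T_{\mathrm{dec}}^{-1}$, since the real EXIT characteristics depend on the actual demapper and code.

```python
# Toy numerical checks (all curve shapes below are illustrative stand-ins).
import math
import random

random.seed(0)

def J(sigma, n=200_000):
    """Monte-Carlo estimate of J(sigma): I(B; lambda) for a consistent
    Gaussian LLR with mean sigma^2/2 and variance sigma^2, conditioned
    on the transmitted bit."""
    mu = sigma ** 2 / 2.0
    acc = 0.0
    for _ in range(n):
        lam = random.gauss(mu, sigma)
        acc += math.log2(1.0 + math.exp(-lam))  # conditional-entropy term
    return 1.0 - acc / n

def tunnel_open(T_dem, T_dec_inv, steps=1000):
    """True iff Delta(I_A) = T_dem(I_A) - T_dec_inv(I_A) > 0 on [0, 1)."""
    return all(T_dem(i / steps) - T_dec_inv(i / steps) > 0
               for i in range(steps))

# Made-up curves: a demapper curve rising with a-priori MI from an
# SNR-dependent start, and an inverse decoder curve steepening with rate.
dem = lambda IA, snr=1.0: snr / (1 + snr) + (1 - snr / (1 + snr)) * 0.6 * IA
dec_inv = lambda IA, rate=0.5: rate * IA ** 2

print(round(J(0.1), 2), round(J(8.0), 2))  # near 0 and near 1
print(tunnel_open(dem, dec_inv))
```

As expected, $J$ climbs monotonically from 0 to 1 with $\sigma$, and the toy demapper/decoder pair leaves the tunnel open everywhere on $[0, 1)$.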