Prerequisites & Notation
Before You Begin
Chapters 5–7 gave the "one-shot" BICM story: a bit interleaver feeds a binary code's output into a constellation mapper, the receiver demaps each channel observation into per-bit LLRs, and the decoder takes those LLRs as final. Capacity, PEP, and error exponents were analysed under this sequential data flow. Chapter 8 closes the loop. The decoder's soft output is fed BACK to the demapper as a priori information, the demapper refines its LLRs, and the two boxes iterate to convergence. The reader should be comfortable with BICM's bit-metric LLR, with soft-in/soft-out (SISO) decoding of binary codes, and with the Gaussian-approximation machinery behind mutual-information analysis.
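The data flow just described is a loop rather than a pipeline, and it helps to see it as one. Below is a minimal Python sketch of that loop, assuming hypothetical `soft_demap` and `siso_decode` callables that stand in for the demapper and SISO decoder; they are placeholders, not functions from this book or any library, and interleaving/deinterleaving are omitted for brevity.

```python
import numpy as np

def bicm_id_receive(y, soft_demap, siso_decode, n_bits, n_iters=10):
    """Iterate demapper and SISO decoder, exchanging extrinsic LLRs.

    y           -- channel observations, one per symbol
    soft_demap  -- hypothetical callable(y, L_a) -> demapper extrinsic LLRs (length n_bits)
    siso_decode -- hypothetical callable(L_in) -> (L_app, L_ext) per-bit LLRs
    """
    L_a = np.zeros(n_bits)        # iteration 1: no a priori information at the demapper
    L_app = np.zeros(n_bits)
    for _ in range(n_iters):
        L_e_dem = soft_demap(y, L_a)           # demapper refines its LLRs using fed-back priors
        L_app, L_e_dec = siso_decode(L_e_dem)  # SISO decoder: a posteriori and extrinsic LLRs
        L_a = L_e_dec                          # decoder extrinsic becomes demapper a priori
    return (L_app < 0).astype(int)             # hard decisions, assuming L = log P(b=0)/P(b=1)
```

The checklist below spells out the background this loop relies on.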
- BICM encoder/decoder, bit metric, and LLR extraction (Review ch05)
Self-check: Can you write the BICM bit-metric LLR and explain why, on a Gaussian channel, it is a function of the Euclidean distances from the observation to the constellation points? (A numerical sketch follows this checklist.)
- Soft-in/soft-out (SISO) decoding of binary codes (Review ch05)
Self-check: Can you describe what a SISO decoder does: it takes per-bit a priori LLRs and channel LLRs as input and returns per-bit a posteriori LLRs together with their EXTRINSIC components (a posteriori minus input)?
- Set-partition (Ungerboeck) labelling vs Gray labelling (Review ch02)
Self-check: Can you state the Ungerboeck set-partition chain rule and explain why SP labelling maximises intra-subset minimum distance while Gray maximises the average ?
- BICM union-bound BER and the role of the labelling (Review ch06)
Self-check: Can you explain why Gray labelling outperforms SP labelling under one-shot BICM decoding on AWGN, and why the conclusion flips on fully-interleaved fading?
- BICM capacity and the gap to CM capacity (Review ch05)
Self-check: Can you sketch the $C_{\mathrm{BICM}}$ versus $C_{\mathrm{CM}}$ curves for 16-QAM and identify the SNR range where the gap is most visible? (A Monte Carlo sketch follows this checklist.)
- Mutual information, entropy, and the data-processing inequality (Review ch02)
Self-check: Can you compute the mutual information between a binary input and a continuous output numerically, and state why reducing the channel output to a sufficient statistic does not lose mutual information?
- Binary-input AWGN channel and the consistent-Gaussian LLR model (Review ch03)
Self-check: Can you state that for BI-AWGN with SNR $\gamma = E_s/N_0$, the LLR is Gaussian with mean $4\gamma$ and variance $8\gamma$ (the "consistent" Gaussian with mean = variance/2 that underlies the J-function)?
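As a concrete instance of the bit-metric self-check above, here is a minimal sketch for a Gray-labelled 4-PAM constellation on real AWGN; the constellation, labelling, and noise variance are illustrative choices, not values from the chapter. Each LLR is a log-ratio of summed Gaussian likelihoods, so it depends on the observation only through its squared Euclidean distances to the two labelled subsets.

```python
import numpy as np

points = np.array([-3.0, -1.0, 1.0, 3.0])            # 4-PAM symbols
labels = np.array([[0, 0], [0, 1], [1, 1], [1, 0]])  # Gray labels, one row per symbol

def bit_llrs(y, sigma2):
    """Per-bit LLRs L_i = log P(b_i = 0 | y) / P(b_i = 1 | y) under uniform priors."""
    metrics = -(y - points) ** 2 / (2 * sigma2)       # log-likelihood of each symbol (up to a constant)
    llrs = []
    for i in range(labels.shape[1]):
        m0 = metrics[labels[:, i] == 0]               # symbols whose i-th label bit is 0
        m1 = metrics[labels[:, i] == 1]               # symbols whose i-th label bit is 1
        llrs.append(np.logaddexp.reduce(m0) - np.logaddexp.reduce(m1))
    return np.array(llrs)

print(bit_llrs(y=0.4, sigma2=0.5))
```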
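For the capacity self-check, a Monte Carlo sketch estimating both the CM and the BICM capacity of Gray-labelled 16-QAM on complex AWGN; the unit-energy normalisation, the product-Gray labelling, and the SNR points are illustrative assumptions rather than the chapter's setup. Evaluating it over a denser SNR grid produces the two curves the self-check asks about.

```python
import numpy as np

rng = np.random.default_rng(0)

# Gray-labelled 16-QAM built as two independent Gray-labelled 4-PAMs (I and Q).
pam = np.array([-3.0, -1.0, 1.0, 3.0]) / np.sqrt(10)      # unit average symbol energy
gray2 = np.array([[0, 0], [0, 1], [1, 1], [1, 0]])
const = (pam[:, None] + 1j * pam[None, :]).ravel()          # 16 complex points
labels = np.hstack([np.repeat(gray2, 4, axis=0),            # two I bits ...
                    np.tile(gray2, (4, 1))])                # ... and two Q bits per point

def capacities(esn0_db, n=50_000):
    """Monte Carlo CM and BICM capacities (bits/symbol) of 16-QAM on complex AWGN."""
    N0 = 10 ** (-esn0_db / 10)                              # Es = 1 by construction
    idx = rng.integers(16, size=n)
    y = const[idx] + np.sqrt(N0 / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    logp = -np.abs(y[:, None] - const[None, :]) ** 2 / N0   # log p(y | x'), up to a constant
    log_sum_all = np.logaddexp.reduce(logp, axis=1)
    c_cm = 4 - np.mean((log_sum_all - logp[np.arange(n), idx]) / np.log(2))
    c_bicm = 0.0
    for i in range(4):
        b = labels[idx, i]                                   # transmitted value of label bit i
        mask = labels[:, i][None, :] == b[:, None]           # points whose bit i matches it
        log_sum_b = np.logaddexp.reduce(np.where(mask, logp, -np.inf), axis=1)
        c_bicm += 1 - np.mean((log_sum_all - log_sum_b) / np.log(2))
    return c_cm, c_bicm

for snr_db in (0, 5, 10, 15):
    cm, bicm = capacities(snr_db)
    print(f"Es/N0 = {snr_db:2d} dB: C_CM ~ {cm:.3f}, C_BICM ~ {bicm:.3f} bits/symbol")
```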
Notation for This Chapter
Chapters 5–7's BICM notation (constellation $\mathcal{X}$, labelling $\mu$, bit subsets $\mathcal{X}_b^i$, bit-metric LLR $L_i$) carries over. The symbols below are the additional ones introduced for EXIT analysis and iterative decoding; short numerical sketches of the J-function, the convergence-tunnel check, and the degree-distribution rate relation follow the table.
| Symbol | Meaning | Introduced |
|---|---|---|
| $L$, $L_{\mathrm{ch}}$, $L_a$, $L_e$ | LLR variables: total LLR $L$, channel LLR $L_{\mathrm{ch}}$, a-priori LLR $L_a$, extrinsic LLR $L_e$ | s01 |
| $I_A$, $I_E$ | A-priori mutual information $I_A$ and extrinsic mutual information $I_E$, both valued in $[0,1]$ | s02 |
| $J(\sigma)$ | The J-function: mutual information between a binary input and a consistent-Gaussian LLR of standard deviation $\sigma$ (mean $\sigma^2/2$) | s02 |
| $T_{\mathrm{dem}}(I_A)$ | Demapper EXIT curve: maps a-priori MI at the demapper input to extrinsic MI at its output, parametrised by the SNR and the labelling $\mu$ | s02 |
| $T_{\mathrm{dec}}(I_A)$ | Decoder EXIT curve: maps a-priori MI at the decoder input (from the demapper) to extrinsic MI at its output, parametrised by the code rate $R$ | s02 |
| $T_{\mathrm{dec}}^{-1}$ | Inverse decoder EXIT curve, plotted on the same axes as $T_{\mathrm{dem}}$ for the convergence-tunnel visualisation | s03 |
| $T_{\mathrm{dem}}(I) - T_{\mathrm{dec}}^{-1}(I)$ | Tunnel width at a-priori level $I$ | s03 |
| $\mathrm{SNR}_{\mathrm{th}}$, $(E_b/N_0)_{\mathrm{th}}$ | Convergence threshold: the smallest SNR (or $E_b/N_0$) at which the tunnel is open, $T_{\mathrm{dem}}(I) > T_{\mathrm{dec}}^{-1}(I)$ for all $I \in [0,1)$ | s03 |
| $\mu_{\mathrm{Gray}}$, $\mu_{\mathrm{SP}}$, $\mu_{\mathrm{AG}}$ | Gray, set-partition, and anti-Gray labellings respectively; $\mu_{\mathrm{AG}}$ is the bitwise complement of $\mu_{\mathrm{Gray}}$ on a rotated constellation | s04 |
| $R^{*}(\mathrm{SNR}, \mu)$ | Maximum achievable code rate at a given SNR under labelling $\mu$ such that the inverted decoder curve lies below the demapper curve (the EXIT-matched rate) | s05 |
| $\lambda(x) = \sum_i \lambda_i x^{i-1}$ | Variable-node degree distribution of an LDPC code from the edge perspective: $\lambda_i$ is the fraction of edges attached to a variable node of degree $i$ | s05 |
| $\rho(x) = \sum_j \rho_j x^{j-1}$ | Check-node degree distribution of an LDPC code from the edge perspective | s05 |
| $\ell$ | Iteration index in BICM-ID, $\ell = 1, 2, \dots$; $I_E^{(\ell)}$ is the extrinsic MI after iteration $\ell$ | s03 |
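A Monte Carlo sketch of the J-function row and the consistent-Gaussian model behind it; the last two lines tie it to the BI-AWGN self-check, assuming the real-channel convention SNR $= E_s/N_0$, so a different SNR convention would rescale the LLR mean and variance.

```python
import numpy as np

def J(sigma, n=200_000, seed=0):
    """Monte Carlo estimate of J(sigma): I(b; L) for a consistent-Gaussian LLR.

    Conditioned on the BPSK symbol x = +/-1 carrying bit b, L ~ N(x * sigma^2 / 2, sigma^2),
    i.e. mean = variance / 2, the "consistent" Gaussian of the table above.
    """
    rng = np.random.default_rng(seed)
    x = rng.choice([-1.0, 1.0], size=n)
    L = x * sigma ** 2 / 2 + sigma * rng.standard_normal(n)
    # For a true LLR and equiprobable bits, I(b; L) = 1 - E[ log2(1 + exp(-x * L)) ].
    return 1.0 - np.mean(np.logaddexp(0.0, -x * L)) / np.log(2)

for sigma in (0.5, 1.0, 2.0, 4.0):
    print(f"J({sigma}) ~ {J(sigma):.3f}")

# For BI-AWGN at gamma = Es/N0 (assumed convention), the channel LLR has mean 4*gamma
# and variance 8*gamma, so the channel's capacity equals J(sqrt(8*gamma)).
gamma = 1.0
print(f"BI-AWGN capacity at Es/N0 = {gamma}: ~ {J(np.sqrt(8 * gamma)):.3f} bit")
```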
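The tunnel-width and threshold rows can be checked numerically once both EXIT curves are sampled on a common grid of a-priori MI values. The two curves below are synthetic placeholders chosen only so that the tunnel is (barely) open; they are not curves computed in this chapter.

```python
import numpy as np

I_grid = np.linspace(0.0, 0.999, 200)
T_dem = 0.55 + 0.45 * I_grid       # placeholder demapper EXIT curve (rises with the a priori MI)
T_dec_inv = I_grid ** 1.5          # placeholder inverse decoder EXIT curve

# Tunnel width at each a-priori level; the tunnel is open iff it stays positive on [0, 1).
width = T_dem - T_dec_inv
print(f"tunnel open: {bool(np.all(width > 0))}, minimum width: {width.min():.4f}")

# Iterative trajectory: alternate demapper and decoder activations and track the extrinsic MI.
I_a = 0.0
for it in range(1, 9):
    I_e_dem = np.interp(I_a, I_grid, T_dem)       # demapper extrinsic MI given its current prior
    I_a = np.interp(I_e_dem, T_dec_inv, I_grid)   # decoder extrinsic MI, i.e. T_dec(I_e_dem)
    print(f"iteration {it}: extrinsic MI after the decoder ~ {I_a:.3f}")
```

Sweeping the SNR that parametrises the demapper curve and recording the smallest value for which the width stays positive gives the convergence threshold of the table.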
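Finally, for the degree-distribution rows: a standard consequence of the edge-perspective definitions, not stated in the table, is the design rate $R = 1 - \int_0^1 \rho(x)\,dx \big/ \int_0^1 \lambda(x)\,dx$. A short sketch with made-up example distributions:

```python
# Design rate from edge-perspective degree distributions. The coefficients below are
# illustrative examples, not an ensemble from this chapter.
lam = {2: 0.3, 3: 0.4, 6: 0.3}   # lambda_i: fraction of edges on degree-i variable nodes
rho = {5: 0.5, 6: 0.5}           # rho_j: fraction of edges on degree-j check nodes

int_lam = sum(c / d for d, c in lam.items())   # integral of lambda(x) on [0, 1] = sum_i lambda_i / i
int_rho = sum(c / d for d, c in rho.items())   # integral of rho(x) on [0, 1]    = sum_j rho_j / j
print(f"design rate R = {1.0 - int_rho / int_lam:.3f}")   # 0.450 for these coefficients
```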