Prerequisites & Notation

Before You Begin

This chapter takes the BICM capacity formula of Chapter 5 and the PEP analysis of Chapter 6 and places them on a rigorous information-theoretic footing through the lens of mismatched decoding. The reader should be comfortable with the parallel-binary-channel decomposition of BICM, the Gray and set-partition labellings, and the pairwise-error-probability / union-bound machinery. A working knowledge of Gallager's random-coding exponent for binary-input channels is also assumed; we review only what is needed and do not reprove the classical results.

  • BICM capacity formula $C_{\rm BICM} = \sum_\ell I(Y; B_\ell)$ (Review ch05)

    Self-check: Can you state why this is the achievable rate under the BICM product bit metric, and why it is bounded above by $C_{\rm CM}$? Can you write down the gap as a sum of conditional mutual informations?

  • BICM PEP and union bound on fading channels (Review ch06)

    Self-check: Can you write the union bound on codeword error probability in terms of pairwise error probabilities, and identify the BICM diversity order from the product of free distance and minimum distinct-bit count?

  • Gallager's random-coding exponent for binary-input channels (Review ch13)

    Self-check: Can you state the form $E_r(R) = \max_{0 \le \rho \le 1} [E_0(\rho) - \rho R]$ and identify the channel-dependent function $E_0(\rho, q)$ for a given input distribution $q$?

  • Mismatched decoding and generalised mutual information (GMI) (Review ch14)

    Self-check: Can you state the GMI formula $I^{\mathrm{GMI}}(s) = \mathbb{E}\left[ \log \frac{q(Y,X)^s}{\mathbb{E}_{\bar X}[q(Y,\bar X)^s]} \right]$ and explain when it coincides with the mutual information (matched case) versus when it lies strictly below the capacity?

  • Cutoff rate $R_0 = E_0(1)$ and its operational meaning (Review ch13)

    Self-check: Can you state Gallager's cutoff-rate theorem and explain why, for sequential and list decoders, $R_0$ is the practical rate limit while capacity is the theoretical one?

  • Bhattacharyya parameter and Chernoff exponent on binary-input channels (Review ch13)

    Self-check: Can you compute the Bhattacharyya parameter $\beta = \sum_y \sqrt{p(y \mid 0)\, p(y \mid 1)}$ for a BI-AWGN channel and identify it as $e^{-\mathrm{SNR}}$?
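
Several of the prerequisite quantities above can be checked numerically. The sketch below is illustrative rather than taken from the chapter: for a BSC with crossover probability 0.05 and uniform inputs it evaluates Gallager's $E_0(\rho, q)$ (in bits), the random-coding exponent $E_r(R)$, and the cutoff rate $R_0 = E_0(1)$, and confirms the binary-input identity $R_0 = 1 - \log_2(1 + \beta)$ relating $R_0$ to the Bhattacharyya parameter $\beta$; the function names and parameter values are ours.

```python
import numpy as np

def gallager_e0(rho, p_y_x, q):
    """E0(rho, q) = -log2 sum_y [ sum_x q(x) p(y|x)^(1/(1+rho)) ]^(1+rho), in bits."""
    inner = (q[:, None] * p_y_x ** (1.0 / (1.0 + rho))).sum(axis=0)
    return -np.log2((inner ** (1.0 + rho)).sum())

def random_coding_exponent(rate, p_y_x, q):
    # E_r(R) = max over 0 <= rho <= 1 of [E0(rho) - rho * R], taken on a fine grid
    rhos = np.linspace(0.0, 1.0, 1001)
    return max(gallager_e0(r, p_y_x, q) - r * rate for r in rhos)

# BSC with crossover 0.05; rows index the input x, columns the output y
p = 0.05
p_y_x = np.array([[1 - p, p], [p, 1 - p]])
q_in = np.array([0.5, 0.5])                 # uniform binary input

r0 = gallager_e0(1.0, p_y_x, q_in)          # cutoff rate R0 = E0(1)
beta = 2 * np.sqrt(p * (1 - p))             # Bhattacharyya parameter of the BSC
# For uniform binary inputs R0 = 1 - log2(1 + beta); r0 matches that expression
```

Since $E_0(\rho)$ is nondecreasing in $\rho$ on $[0, 1]$, the maximisation at $R = 0$ returns $\rho = 1$, i.e. $E_r(0) = R_0$, which is a quick sanity check on the implementation.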

Notation for This Chapter

Symbols specific to the capacity / error-exponent analysis of BICM. The Chapter 5–6 BICM notation (constellation $\mathcal{X}$, labelling $\mu$, bit positions $b_\ell$, per-bit channel $W_\ell$, LLR $\lambda_\ell$, subset $\mathcal{X}_\ell^{(b)}$) continues to apply and is not repeated here.

| Symbol | Meaning | Introduced |
| --- | --- | --- |
| $q(y, b)$ | Decoding metric (generic). Matched case $q = p$; BICM product metric $q(y, \mathbf{b}) = \prod_\ell p_{W_\ell}(y \mid b_\ell)$ | s02 |
| $s > 0$ | Decoder scaling parameter. The mismatched rate depends on $s$; the optimal $s^\star$ maximises the GMI | s02 |
| $I^{\mathrm{GMI}}(s)$ | Generalised mutual information at scaling $s$; the BICM achievable rate is $\sup_{s>0} I^{\mathrm{GMI}}(s)$ | s03 |
| $E_0(\rho, q)$ | Gallager function for input distribution $q$ and Gallager parameter $\rho \in [0, 1]$ | s04 |
| $E_r(R)$ | Random-coding error exponent, $E_r(R) = \max_{0 \le \rho \le 1} [E_0(\rho) - \rho R]$ | s04 |
| $E_r^{\mathrm{CM}}(R)$, $E_r^{\mathrm{BICM}}(R)$ | Random-coding error exponents for the CM decoder and the BICM (mismatched, product-metric) decoder, respectively | s04 |
| $R_0$ | Gallager cutoff rate, $R_0 = E_0(1)$; equivalently $-\log \sum_y \bigl( \sum_x q(x) \sqrt{p(y \mid x)} \bigr)^2$ for binary inputs | s05 |
| $\beta$ | Bhattacharyya parameter of a binary-input channel, $\beta = \sum_y \sqrt{p(y \mid 0)\, p(y \mid 1)}$ | s05 |
| $d_H(\mathbf{c}, \mathbf{c}')$ | Hamming distance between two codewords in the binary code (carried over from Ch. 6) | s05 |
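
As a concrete instance of this notation, the sketch below estimates $I^{\mathrm{GMI}}(s)$ by Monte Carlo for a binary-input AWGN channel decoded with a Gaussian metric $q(y, b)$ that assumes the wrong noise variance; the variances and sample size are illustrative, not from the chapter. Scaling the metric exponent by $s$ rescales the assumed variance to $\hat\sigma^2 / s$, so the maximising $s^\star$ equals the ratio of assumed to true variance, at which point the metric is matched and the GMI equals the mutual information.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
sigma2_true, sigma2_dec = 1.0, 2.0       # the decoder assumes twice the true variance
x = rng.choice([-1.0, 1.0], size=n)      # equiprobable BPSK input
y = x + rng.normal(scale=np.sqrt(sigma2_true), size=n)

def gmi(s):
    # I_GMI(s) = E[ log q(Y,X)^s - log E_Xbar[ q(Y,Xbar)^s ] ], in bits, where
    # q(y, b) = exp(-(y - b)^2 / (2 * sigma2_dec)) up to a constant that cancels
    num = -s * (y - x) ** 2 / (2 * sigma2_dec)
    den = np.logaddexp(-s * (y - 1) ** 2 / (2 * sigma2_dec),
                       -s * (y + 1) ** 2 / (2 * sigma2_dec)) - np.log(2)
    return np.mean(num - den) / np.log(2)

rates = {s: gmi(s) for s in (0.25, 0.5, 1.0, 2.0)}
# The maximum over this grid sits at s = sigma2_dec / sigma2_true = 2
```

Sharing one sample set across all values of $s$ keeps the Monte Carlo comparison between scaling parameters low-variance.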