Cutoff Rate and Practical PEP Bounds
Above the Cutoff, Decoder Work Grows Exponentially
The Gallager random-coding exponent of §4 treats the decoder as a maximum-metric decoder — the receiver computes the metric for every codeword in the codebook and picks the largest. That is conceptually clean but operationally expensive: the decoder's work is exponential in the blocklength times the rate. Real BICM decoders — LDPC belief propagation, sequential decoders, list decoders — are not of this type.
For sequential decoders on tree codes, Massey-Fano-Jelinek-Zigangirov showed in the 1960s that the practical rate limit is not the capacity but the cutoff rate $R_0$: at rates $R < R_0$, the expected decoder work per decoded bit is finite; at rates $R > R_0$, the expected work diverges. For list decoders with list size growing polynomially in the blocklength $n$, the same threshold separates "polynomial work" from "exponential work". The cutoff rate is therefore the practical rate limit of any non-ML decoder on a binary-input channel, and its BICM specialisation is the quantity every receiver designer cares about.
Beyond the cutoff rate, §5 also revisits the pairwise error probability (PEP) of a BICM codeword pair on a fading channel from Ch. 6. The classical Caire-Taricco-Biglieri 1998 union bound uses a Bhattacharyya-factor product over distinct bit positions; the improved bound of Martinez, Guillén i Fàbregas, and Caire 2006 replaces this with the GMI-tuned saddle-point at each distance, tightening the bound by 1–2 dB at moderate SNR. Both bounds share the same exponential order (same diversity, same coding gain), but the prefactor in the Martinez-Fàbregas-Caire bound is sharper — a direct benefit of the mismatched-decoding framework.
This section defines $R_0$ precisely, develops its BICM specialisation $R_0^{\mathrm{bicm}}$, compares it to the CM cutoff rate $R_0^{\mathrm{cm}}$, and derives the saddle-point PEP bound. We close with engineering and pitfall remarks on what the cutoff rate does and does not mean for practical decoders.
Definition: Gallager Cutoff Rate
Gallager Cutoff Rate
The cutoff rate $R_0$ of a memoryless channel $P(y|x)$ with input distribution $Q$ is the value of Gallager's $E_0(\rho, Q)$ at $\rho = 1$:
$$R_0 = E_0(1, Q) = -\log_2 \sum_y \Big[ \sum_x Q(x)\sqrt{P(y|x)} \Big]^2 .$$
Equivalently, for binary-input channels with uniform input,
$$R_0 = 1 - \log_2(1 + B), \qquad B = \sum_y \sqrt{P(y|0)\,P(y|1)},$$
where $B$ is the Bhattacharyya parameter of the channel. $R_0$ has units of bits per channel use under $\log_2$, nats under $\ln$.
For BICM with the product bit metric, the cutoff rate per symbol is
$$R_0^{\mathrm{bicm}} = \sum_{i=1}^{m} \big[ 1 - \log_2(1 + B_i) \big],$$
where $B_i$ is the Bhattacharyya parameter of the $i$-th bit channel. This is just the parallel-channel cutoff-rate sum over the $m$ bit channels — a direct consequence of the parallel-channel decomposition of §1.
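The per-symbol sum above is easy to evaluate numerically. A minimal sketch, assuming QPSK on AWGN decomposes into two BPSK bit channels, each with Bhattacharyya parameter $B = e^{-E_s/N_0}$ per real dimension (function names are illustrative):

```python
import math

def cutoff_rate_binary(B):
    """R0 = 1 - log2(1 + B) bits/use for a binary-input channel, uniform input."""
    return 1.0 - math.log2(1.0 + B)

def bicm_cutoff_rate(bhattacharyyas):
    """BICM cutoff rate per symbol: the parallel-channel sum over the bit channels."""
    return sum(cutoff_rate_binary(B) for B in bhattacharyyas)

# QPSK at Es/N0 = 3 dB: two BPSK bit channels, each carrying half the
# symbol energy; for BPSK on AWGN, B = exp(-Es/N0) per real dimension.
snr_per_dim = 10 ** (3 / 10) / 2
B = math.exp(-snr_per_dim)
r0_bicm = bicm_cutoff_rate([B, B])   # roughly 1.1 bits per QPSK symbol
```

The two bit channels of QPSK are identical, so the sum is just twice the per-bit cutoff rate; for 16-QAM the list would hold four distinct $B_i$.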
Operational meaning. For any decoder with polynomial complexity in the blocklength $n$ (sequential, list, threshold), the rate $R_0$ is the threshold above which the expected complexity grows exponentially in $n$. Below $R_0$, complexity is polynomial — or even linear for a well-tuned sequential decoder. This makes $R_0$ the practical rate limit of non-ML decoders.
Three comments on $R_0$:
- $R_0$ is always below capacity: $R_0 = E_0(1) \le C$ by concavity of $E_0(\rho)$ in $\rho$ with $E_0(0) = 0$ and $E_0'(0) = C$. At low SNR the gap is small in absolute terms; at high SNR the gap grows.
- Gallager thought $R_0$ was the "true" practical capacity until the invention of LDPC decoding in the late 1990s. LDPC+BP is an exception: it can operate above the cutoff rate with polynomial complexity, because its complexity depends on the graph sparsity, not on a search over codeword candidates. This is part of why LDPC-BICM became standard.
- Even so, $R_0$ remains a useful sanity check: when a 5G NR LDPC decoder is reported to operate within 0.3–0.5 dB of Shannon capacity, one can verify that the corresponding rate is above $R_0^{\mathrm{bicm}}$, confirming that the decoder is extracting information beyond the Gallager threshold.
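The ordering $R_0 \le C$ in the first comment can be checked on a binary symmetric channel, where both quantities are closed-form. A small sketch, assuming a BSC with crossover probability $p$ and uniform input:

```python
import math

def h2(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    return 1.0 - h2(p)

def bsc_cutoff_rate(p):
    """R0 = 1 - log2(1 + B), with B = 2*sqrt(p(1-p)) for a BSC with crossover p."""
    return 1.0 - math.log2(1.0 + 2.0 * math.sqrt(p * (1.0 - p)))

# R0 <= C at every crossover probability, with the gap closing as p -> 0.
gaps = {p: bsc_capacity(p) - bsc_cutoff_rate(p) for p in (0.001, 0.01, 0.05, 0.11)}
```

The gap is zero only in the noiseless limit $p \to 0$, mirroring the zero-SNR equality case in the theorem below.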
Theorem: $R_0 \le C$ and $R_0^{\mathrm{bicm}} \le R_0^{\mathrm{cm}}$
For any memoryless channel and constellation $\mathcal{X}$ with labelling $\mu$:
(i) Cutoff rate is below capacity: $R_0^{\mathrm{cm}} \le C^{\mathrm{cm}}$ and $R_0^{\mathrm{bicm}} \le C^{\mathrm{bicm}}$, both with equality only at zero SNR (trivial channel), and the gap grows as a fraction of capacity with SNR, becoming substantial at moderate SNR for QAM.
(ii) BICM cutoff rate $\le$ CM cutoff rate: $R_0^{\mathrm{bicm}} \le R_0^{\mathrm{cm}}$, with equality at zero SNR and strict inequality for $m \ge 2$ at positive SNR. The ordering mirrors the capacity ordering of Ch. 5.
$R_0$ is a point evaluation of the Gallager function at $\rho = 1$; since $E_0^{\mathrm{bicm}}(\rho) \le E_0^{\mathrm{cm}}(\rho)$ pointwise (Thm. BICM Random-Coding Exponent $\le$ CM Exponent), the $\rho = 1$ values inherit the ordering. The capacity-vs-cutoff ordering is standard from Gallager's concavity argument.
Specialise Thm. BICM Random-Coding Exponent $\le$ CM Exponent to $\rho = 1$ to get $R_0^{\mathrm{bicm}} \le R_0^{\mathrm{cm}}$.
Use concavity of $E_0(\rho)$ in $\rho$ with $E_0(0) = 0$ and $E_0'(0) = C$ to get $R_0 \le C$.
Cutoff $\le$ capacity
$E_0(\rho)$ is concave in $\rho$ with $E_0(0) = 0$ and $E_0'(0) = C$. By concavity the chord slope $E_0(\rho)/\rho$ is non-increasing in $\rho$, so $E_0(1) \le E_0'(0) = C$. Thus $R_0 \le C$, with equality only at channels where $E_0(\rho)$ is linear (zero-SNR limit).
BICM $\le$ CM cutoff
Thm. BICM Random-Coding Exponent $\le$ CM Exponent gave $E_0^{\mathrm{bicm}}(\rho) \le E_0^{\mathrm{cm}}(\rho)$ for all $\rho \in [0, 1]$. Setting $\rho = 1$ yields $R_0^{\mathrm{bicm}} \le R_0^{\mathrm{cm}}$ directly.
Strict inequality
For $m \ge 2$ on AWGN at positive SNR, the chain-rule residual is strictly positive, which propagates to $E_0^{\mathrm{bicm}}(1) < E_0^{\mathrm{cm}}(1)$ strictly — hence $R_0^{\mathrm{bicm}} < R_0^{\mathrm{cm}}$ strictly.
Three PEP Bounds at Fixed Hamming Distance
Compare three different bounds on the pairwise error probability $\mathrm{PEP}(d_H)$, where the codewords $\boldsymbol{c}$ and $\boldsymbol{c}'$ differ in $d_H$ bit positions, for BICM on AWGN with 16-QAM and Gray labelling:
- Caire-Taricco-Biglieri 1998 union bound (Ch. 6): a product of $d_H$ Bhattacharyya factors, one per distinct bit position.
- Chernoff/union with fixed scaling $s$: the GMI-style bound evaluated at the decoder's fixed metric scaling.
- Martinez-Fàbregas-Caire 2006 saddle-point bound: optimise the scaling $s$ at each $d_H$ to tighten the bound.
The saddle-point bound is 1–2 dB tighter than the union bound at moderate SNR for the same $d_H$ — a non-trivial improvement that makes PEP-based union bounds on BICM much more accurate than the classical ones. At high SNR and at low SNR all three bounds converge to the same exponential order.
Theorem: Martinez-Fàbregas-Caire Saddle-Point PEP Bound
Let $\boldsymbol{c}, \boldsymbol{c}'$ be two codewords differing in $d_H$ bit positions, transmitted over a BICM system with the product bit metric. The pairwise error probability satisfies
$$\mathrm{PEP}(d_H) \;\le\; \min_{s > 0} \; \prod_{k=1}^{d_H} \mathbb{E}\!\left[ \left( \frac{q_{i_k}(\bar{c}_k, Y_k)}{q_{i_k}(c_k, Y_k)} \right)^{\!s} \right],$$
where $\bar{c}_k$ is the complementary bit at position $k$, and the product runs over the $d_H$ distinct bit positions. The minimisation over $s$ is the saddle-point optimisation.
At $s = 1/2$ this reduces to the Bhattacharyya-factor product of the Caire-Taricco-Biglieri 1998 bound; optimising over $s$ gives the tighter Martinez-Fàbregas-Caire 2006 bound. The improvement is 1–2 dB at moderate $d_H$ and moderate SNR; at large $d_H$ the saddle-point $s^\star$ approaches $1/2$ and the improvement vanishes.
The classical union bound uses a Chernoff-type tilt at $s = 1/2$ — the "neutral" choice that symmetrically weights the two codewords. The saddle-point argument optimises the tilt per distance: for short distances (where the decoder's "surprise" is largest) the optimal $s^\star$ is smaller than $1/2$, placing more weight on the transmitted codeword; for long distances, $s^\star$ approaches $1/2$ and the bound reduces to the Bhattacharyya form. The GMI framework naturally provides this tilt optimisation as part of the same machinery used for capacity.
Apply a Chernoff bound to the pairwise error event with tilt parameter $s > 0$.
Recognise the per-position expectation as the $s$-tilted Bhattacharyya-like factor.
Optimise $s$ to tighten the bound, obtaining the saddle-point form.
Chernoff bound on the PEP
Define the per-position log metric ratio $\Lambda_k = \log\dfrac{q_{i_k}(\bar{c}_k, Y_k)}{q_{i_k}(c_k, Y_k)}$ for bit positions where $c_k \ne c'_k$ (zero elsewhere). The PEP is
$$\mathrm{PEP} = \Pr\Big[ \textstyle\sum_k \Lambda_k \ge 0 \Big].$$
For any $s > 0$, Chernoff gives
$$\Pr\Big[ \textstyle\sum_k \Lambda_k \ge 0 \Big] \;\le\; \mathbb{E}\Big[ e^{s \sum_k \Lambda_k} \Big] \;=\; \prod_k \mathbb{E}\big[ e^{s \Lambda_k} \big],$$
the factorisation following from the independence of the positions after interleaving.
Evaluate per-position factor
For bit positions where $c_k = c'_k$, $\Lambda_k = 0$ and the factor is 1. For the $d_H$ distinct positions,
$$\mathbb{E}\big[ e^{s \Lambda_k} \big] = \mathbb{E}\!\left[ \left( \frac{q_{i_k}(\bar{c}_k, Y_k)}{q_{i_k}(c_k, Y_k)} \right)^{\!s} \right].$$
Taking the product over the $d_H$ positions gives the product form in the theorem statement.
Saddle-point optimisation
The bound holds for every $s > 0$, so the tightest bound is the $s$-optimised one. Call the optimum $s^\star(d_H)$. This value depends on $d_H$, the channel, and the labelling. For fixed SNR and Gray labelling, $s^\star$ varies smoothly with $d_H$, approaching the Bhattacharyya tilt $s = 1/2$ at large distances.
Reduction to CTB union bound
At $s = 1/2$, the per-position factor reduces to the standard Bhattacharyya parameter $B_i$ of the level-$i$ bit channel. The product over distinct positions is then $\prod_i B_i^{d_i}$, where $d_i$ is the number of distinct positions at level $i$ — the Caire-Taricco-Biglieri union-bound form from Ch. 6 Thm. [?ch06:thm-bicm-union-bound]. The saddle-point bound tightens this by allowing $s \ne 1/2$.
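The effect of optimising the tilt is easiest to see on a toy asymmetric channel rather than a BICM bit channel: on a Z-channel the Bhattacharyya choice $s = 1/2$ is far from optimal, and a grid search over $s$ tightens the per-use Chernoff factor considerably. A sketch under that assumption (the Z-channel is an illustration only, not the setting of the theorem):

```python
def chernoff_factor(p0, p1, s):
    """Per-use tilted factor sum_y p0(y)^(1-s) * p1(y)^s, for 0 < s < 1."""
    return sum(a ** (1.0 - s) * b ** s for a, b in zip(p0, p1))

# Z-channel with crossover eps: input 0 is always received as 0,
# input 1 flips to 0 with probability eps.
eps = 0.01
p0 = [1.0, 0.0]        # P(y | x = 0) for y in {0, 1}
p1 = [eps, 1.0 - eps]  # P(y | x = 1)

bhattacharyya = chernoff_factor(p0, p1, 0.5)       # = sqrt(eps) = 0.1
grid = [k / 100.0 for k in range(1, 100)]          # s in {0.01, ..., 0.99}
optimised = min(chernoff_factor(p0, p1, s) for s in grid)
```

Here the optimised factor is close to $\varepsilon$ rather than $\sqrt{\varepsilon}$: the more asymmetric the channel, the more the tilt optimisation buys, which is the qualitative mechanism behind the MFC improvement.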
Example: Saddle-Point vs Union Bound: 16-QAM Gray at 8 dB
For BICM with 16-QAM Gray labelling on AWGN at $\mathrm{SNR} = 8$ dB and Hamming distance $d_H$, compute (i) the CTB union-bound PEP, (ii) the MFC saddle-point PEP, and (iii) compare the two in dB of SNR margin.
Bhattacharyya factor at $s = 1/2$
For 16-QAM Gray at 8 dB ($\approx 6.3$ in linear scale), the per-position Bhattacharyya factors split into a smaller value for the outer bits (high effective SNR) and a larger value for the inner bits (effectively lower SNR). Averaging uniformly over bit positions gives a mean factor $\bar{B}$, and the CTB union bound at distance $d_H$ is $\bar{B}^{\,d_H}$.
Saddle-point at optimised $s$
Numerical saddle-point search gives the optimal tilt $s^\star$ for this distance. The per-position factor at $s^\star$ is slightly smaller than at $s = 1/2$, and the resulting MFC bound is about a factor of 5 tighter than the union bound.
Convert to SNR margin
At this SNR, a factor of 5 in PEP corresponds to about 0.8–1 dB of SNR (via the Q-function slope). So the MFC saddle-point bound tightens the union bound by roughly 1 dB. For a union bound over many $d_H$ values in a full codeword error estimate, this reduction propagates and can mean the difference between being within or outside the standard performance target.
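The 0.8–1 dB figure can be sanity-checked with a Q-function slope calculation. The sketch below assumes a generic $\mathrm{PEP}(\gamma) = Q(\sqrt{\gamma})$ model evaluated near a $10^{-4}$ working point (illustrative, not the example's exact setup):

```python
import math

def qfunc(x):
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def qfunc_inv(p):
    """Invert the (decreasing) Q-function by bisection on [0, 40]."""
    lo, hi = 0.0, 40.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if qfunc(mid) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Model PEP(gamma) = Q(sqrt(gamma)): how much extra SNR (in dB) buys a
# factor-of-5 PEP reduction near a working point of PEP = 1e-4?
g1 = qfunc_inv(1e-4) ** 2
g2 = qfunc_inv(1e-4 / 5.0) ** 2
margin_db = 10.0 * math.log10(g2 / g1)   # roughly 0.8-0.9 dB
```

The steep Gaussian tail is why a modest multiplicative PEP improvement translates into only a fraction of a dB of SNR.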
Large-$d_H$ convergence
As $d_H \to \infty$, $s^\star \to 1/2$ (the symmetric Chernoff tilt) and the MFC bound reduces to the CTB bound. The improvement is therefore distance-dependent: largest at moderate $d_H$, vanishing at large $d_H$. This is consistent with the classical Bhattacharyya bound being asymptotically tight at long distances.
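The split between outer- and inner-bit Bhattacharyya factors in this example can be reproduced by direct numerical integration, treating 16-QAM as two independent Gray-labelled 4-PAM real dimensions. A sketch under that assumption (the values it produces are illustrative, not the worked example's exact numbers):

```python
import math

def pam4_bit_bhattacharyyas(snr_db_16qam):
    """Bhattacharyya parameters of the two Gray 4-PAM bit channels (sign, magnitude).

    16-QAM at the given Es/N0 is modelled as two independent 4-PAM real
    dimensions, each carrying half the symbol energy; N0 = 1 (noise var 0.5).
    """
    es_dim = 10 ** (snr_db_16qam / 10.0) / 2.0
    delta = math.sqrt(es_dim / 5.0)            # E[x^2] = 5*delta^2 for {-3,-1,1,3}*delta
    sigma = math.sqrt(0.5)
    pts = [-3 * delta, -delta, delta, 3 * delta]
    labels = [(0, 0), (0, 1), (1, 1), (1, 0)]  # Gray map: (sign bit, magnitude bit)

    def pdf(y, x):
        return math.exp(-(y - x) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

    lim, n = 8.0, 4000                         # integration grid for y
    dy = 2 * lim / n
    result = []
    for i in range(2):
        B = 0.0
        for k in range(n):
            y = -lim + (k + 0.5) * dy
            q0 = sum(pdf(y, x) for x, b in zip(pts, labels) if b[i] == 0) / 2.0
            q1 = sum(pdf(y, x) for x, b in zip(pts, labels) if b[i] == 1) / 2.0
            B += math.sqrt(q0 * q1) * dy
        result.append(B)
    return result

B_sign, B_mag = pam4_bit_bhattacharyyas(8.0)   # sign bit is the stronger channel
mean_B = 0.5 * (B_sign + B_mag)
```

The sign bit (one decision boundary) comes out more reliable than the magnitude bit (two boundaries), matching the outer/inner split quoted in the example; the uniform average $\bar{B}$ then feeds the union bound $\bar{B}^{\,d_H}$.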
Common Mistake: Cutoff Rate Is Not Capacity
Mistake:
Quoting $R_0^{\mathrm{bicm}}$ as the "BICM capacity" or assuming that LDPC+BP decoders cannot operate at rates above $R_0$.
Correction:
The cutoff rate is not the capacity. It is a tighter rate threshold for a specific class of decoders — namely those whose decoding complexity grows with the number of candidate codewords examined per information bit (sequential, list, threshold decoders). For these decoders, above $R_0$ the expected complexity diverges exponentially in the blocklength $n$.
LDPC + belief propagation is NOT in this class. BP complexity is proportional to the number of edges in the factor graph, which is independent of rate (for fixed graph density). BP therefore can operate above the cutoff rate — in fact, at rates within 0.3–0.5 dB of capacity in 5G NR, well above $R_0$. The rediscovery of LDPC in the 1990s is what retired the cutoff rate as "the practical capacity"; the practical capacity is now (nearly) the Shannon capacity.
So when you see $R_0$ quoted in a paper, ask:
- What decoder architecture is the paper about?
- If the answer is "LDPC+BP", then $R_0$ is a historical curiosity — informative but not operational.
- If the answer is "sequential" or "list" or "polar-SCL with small list", then $R_0$ is the tight operational limit below which complexity stays polynomial.
5G NR LDPC Decoders Operate Above the Cutoff Rate
Every 5G NR LDPC decoder operates above the Gallager cutoff rate of the BICM channel. For QPSK at 3 dB, $R_0^{\mathrm{bicm}}$ (Example: Cutoff Rate of BICM-QPSK at 3 dB) is below the typical 5G NR QPSK operating rate of 1.7 bits/symbol (code rate 0.85, QPSK spectral efficiency 2, giving 1.7). LDPC's density-evolution threshold is within 0.3–0.5 dB of Shannon, which is well above $R_0^{\mathrm{bicm}}$.
This is why the cutoff rate is, in the 5G design process, a sanity check, not a design constraint. The design constraint is the density-evolution threshold of the chosen LDPC base graph on the BICM channel, which determines the achievable rate under BP decoding. That threshold is computed on the mixture-channel LLR histogram, and is the quantity a 5G LDPC designer actually optimises.
The cutoff-rate sanity check is useful when diagnosing simulation behaviour: if a decoder is reported to operate at $R > R_0$ with low error probability, this confirms the decoder is extracting information beyond the Gallager threshold — a hallmark of modern iterative / message-passing decoders. If instead the decoder is reported to work only at $R < R_0$, it is likely a sequential or list-type decoder and the cutoff rate is doing real operational work.
- $R_0^{\mathrm{bicm}}$ is below practical 5G NR LDPC operating rates
- Density-evolution threshold (not cutoff rate) is the design target for LDPC
- For sequential decoders (polar SCL with small list), the cutoff rate is tight
- $R_0$ is useful as a diagnostic when comparing decoder architectures
Common Mistake: Saddle-Point Is a Provable Upper Bound, Not an Exact PEP
Mistake:
Interpreting the Martinez-Fàbregas-Caire saddle-point bound as an exact PEP expression or as a lower bound on PEP.
Correction:
Both the CTB union bound and the MFC saddle-point bound are upper bounds on the pairwise error probability. The saddle-point tightens the Chernoff exponent but does not achieve equality; at finite blocklength the actual PEP is still below both bounds, often by a noticeable margin at the SNRs of interest. To match simulation, one typically uses the MFC bound plus a small correction term (or fits directly to the simulated PEP curve).
In particular, the MFC bound is a union bound in spirit: it is tight on the codeword-pair-level event, but a codeword-error event is a union over many pairs, and the union bound can loosen further as the list of pairs grows. For low error rates the MFC bound is tight; near the waterfall region the union bound component dominates and the bound loosens by 1–3 dB.
Historical Note: Martinez-Fàbregas-Caire 2006: Sharpening the BICM PEP
The 2006 IEEE Transactions on Information Theory paper of Alfonso Martinez, Albert Guillén i Fàbregas, and Giuseppe Caire — "Error probability analysis of bit-interleaved coded modulation" — is the companion analysis paper to the 2008 Foundations & Trends monograph. It derives the saddle-point PEP bound of Thm. Martinez-Fàbregas-Caire Saddle-Point PEP Bound and shows that it tightens the classical Caire-Taricco-Biglieri 1998 union bound by 1–2 dB at moderate SNR.
The paper's contribution is twofold. First, it exhibits the exact saddle-point $s^\star(d_H)$ as a function of Hamming distance — a numerical knob that the 1998 paper had not optimised. Second, it demonstrates that the improved bound tracks simulation far more closely than the union bound — to within a few tenths of a dB at low error rates. This makes the MFC bound the practical PEP bound used in modern BICM analyses; the CTB union bound is retained for its simplicity (single evaluation at $s = 1/2$) when rough estimates suffice.
Methodologically, the 2006 paper is the first concrete demonstration that the mismatched-decoding GMI framework (then being developed for the 2008 monograph) pays numerical dividends beyond the information-theoretic cleanup. The $s$-optimisation that gives the exact achievable rate in §3 also gives the sharpest PEP bound in §5 — the framework is internally coherent.
Why This Matters: From Cutoff Rate to BICM-ID: Closing the Gap
Chapter 7's takeaway is that BICM — at one-shot demapping — has a modest rate gap to CM on Gray-QAM, a reduced random-coding exponent, and a PEP bound tightened by 1–2 dB via the MFC saddle-point. All three quantities can be further improved by iterative demapping + decoding, where the binary decoder's extrinsic LLRs are fed back to the demapper for a second pass through the per-symbol likelihood computation.
BICM-ID — the subject of Chapter 8 — closes the product-metric mismatch by making the demapper posterior conditioned on the current LLRs of the other bits. In the limit of perfect extrinsic information, BICM-ID's demapper computes the full joint symbol likelihood; i.e., the iterative system eliminates the mismatched-decoding penalty entirely. The penalty for this gain is complexity (multiple demapper passes) and the need for set-partition labelling rather than Gray (because iterative feedback needs structural separation between bits). Chapter 8 develops the EXIT-chart analysis and the code-design implications.
Forward reference: the GMI saddle-point of this chapter and the EXIT chart of Ch. 8 are two sides of the same coin. The GMI parameterises single-pass BICM rate; the EXIT chart parameterises iterative BICM-ID rate. Both are mismatched-decoding quantities; both reduce to CM capacity as the mismatch vanishes (at high SNR for GMI, at full iteration for EXIT).
Quick Check
The cutoff rate $R_0$ is
the practical rate limit of sequential / list decoders, above which decoder complexity grows exponentially in the blocklength $n$
the BICM capacity minus a fixed gap of 3 dB
the rate above which the error probability is always 1
the rate at which the BICM mismatched decoder achieves the same PEP as the CM matched decoder
This is the operational meaning of the cutoff rate. For sequential (Fano, stack) and list (Forney) decoders, $R_0$ is the rate threshold that separates polynomial from exponential expected work. LDPC+BP is a notable exception that can operate above $R_0$.
Quick Check
The Martinez-Fàbregas-Caire saddle-point PEP bound is tighter than the Caire-Taricco-Biglieri union bound because
it optimises the Chernoff tilt $s$ at each Hamming distance rather than fixing $s = 1/2$ (Bhattacharyya)
it uses a different channel model (block fading instead of i.i.d.)
it accounts for BICM-ID iterative feedback
it assumes Gray labelling and a specific constellation
Both bounds are Chernoff-type. The CTB bound fixes $s = 1/2$, giving a Bhattacharyya-factor product. The MFC bound optimises $s$ per distance, giving a tighter bound especially at short $d_H$. This is the content of Thm. Martinez-Fàbregas-Caire Saddle-Point PEP Bound.
Cutoff Rate
The Gallager function evaluated at $\rho = 1$: $R_0 = E_0(1)$. For binary-input channels with uniform input, $R_0 = 1 - \log_2(1 + B)$, where $B$ is the Bhattacharyya parameter. Operationally, the rate threshold above which sequential and list decoders have exponential expected complexity.
Related: Gallager Function $E_0(\rho)$, Bhattacharyya Parameter $B$, Sequential Decoder
Bhattacharyya Parameter
For a binary-input channel, $B = \sum_y \sqrt{P(y|0)\,P(y|1)}$ — a scalar in $[0, 1]$ characterising the discriminability of the two inputs. $B = 0$: perfectly discriminable (the zero-SNR noise limit is $B = 1$, the noiseless limit is $B = 0$). The Bhattacharyya parameter controls the Chernoff PEP bound at $s = 1/2$ and the cutoff rate via $R_0 = 1 - \log_2(1 + B)$.
Related: Gallager Cutoff Rate $R_0$, Chernoff Bound, Martinez-Fàbregas-Caire Saddle-Point PEP Bound
Saddle-Point PEP Bound (Martinez-Fàbregas-Caire 2006)
The PEP bound $\mathrm{PEP}(d_H) \le \min_{s>0} \prod_{k=1}^{d_H} \mathbb{E}\big[ (q_{i_k}(\bar{c}_k, Y_k)/q_{i_k}(c_k, Y_k))^s \big]$, where the minimum is over the decoder scaling $s$ and the product runs over the $d_H$ distinct bit positions. Tightens the classical Bhattacharyya bound by 1–2 dB at moderate SNR; reduces to the classical bound at $s = 1/2$ for long distances.
Related: Martinez-Fàbregas-Caire Saddle-Point PEP Bound, Chernoff Bound, BICM as Parallel Binary Channels, Union Bound