EXIT Chart Fundamentals

From LLR Densities to Mutual-Information Scalars

The analysis of BICM-ID in its most general form is a dynamical system on a space of LLR distributions: at each iteration the demapper maps an input LLR density to an output LLR density, and the decoder does the same. That is the "density evolution" analysis used for LDPC codes, and while it is exact, it lives in an infinite-dimensional space that is hard to visualise, slow to compute, and painful to optimise over.

The EXIT chart, introduced by Stephan ten Brink in 2001, is the projection of density evolution onto a SINGLE scalar per iteration — the mutual information between the true bit and its LLR. The tracking is no longer exact, but it preserves enough information for design purposes whenever the LLR density is approximately Gaussian. What we get is this: two curves on the unit square, a staircase trajectory that walks between them, and a "tunnel" whose width is the SNR margin. You can read convergence off the picture with a ruler. That is why the EXIT chart took over the iterative-coding literature within a year of its publication — it turned a research problem into a design chart.

The point is that iterative BICM-ID is just alternating projections between two nonlinear maps — the demapper and the decoder. EXIT charts are the dynamical-systems view: we watch the orbit and ask whether it escapes to $(1, 1)$ or gets trapped.

Definition:

Consistent-Gaussian LLR Assumption

An LLR random variable $\lambda$ conditioned on a bit $B \in \{0, 1\}$ is said to be consistent Gaussian if $$\lambda \mid B = b \;\sim\; \mathcal{N}\!\left((1 - 2b) \cdot \tfrac{\sigma^2}{2}, \; \sigma^2\right),$$ i.e., Gaussian with mean $\pm \sigma^2/2$ (positive for $b = 0$, negative for $b = 1$) and variance $\sigma^2$. The parameter $\sigma$ (standard deviation of $\lambda$) is the sole summary statistic: mean and variance are locked together by the "consistency" relation $\mathrm{mean} = \mathrm{var}/2$. Equivalently, the likelihood ratio $e^{\lambda}$ is log-normal with the correct Bayes property $P(B = 0 \mid \lambda) = 1/(1 + e^{-\lambda})$.

The consistent-Gaussian model is EXACT for the BI-AWGN channel: if $y = (1 - 2b)\sqrt{E_s} + w$ with $w \sim \mathcal{N}(0, N_0/2)$, then $\lambda = (4\sqrt{E_s}/N_0)\, y$ is consistent Gaussian with $\sigma^2 = 8 E_s/N_0$. For other channels — non-Gaussian noise, higher-order modulation, LDPC messages after a few iterations — the Gaussian model is an APPROXIMATION that turns out to be surprisingly accurate for mutual-information purposes, even when the exact density is markedly non-Gaussian.
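The BI-AWGN exactness claim is easy to verify numerically. The sketch below (with arbitrary illustrative choices $E_s = 1$, $N_0 = 0.5$) simulates the channel, forms the channel LLR $\lambda = (4\sqrt{E_s}/N_0)\,y$, and checks the consistency relation empirically:

```python
import numpy as np

# Monte Carlo check of consistency for the BI-AWGN LLR
# (Es, N0, and the sample size are arbitrary illustrative choices).
rng = np.random.default_rng(1)
Es, N0, n = 1.0, 0.5, 1_000_000
b = rng.integers(0, 2, n)                       # uniform bits
w = rng.normal(0.0, np.sqrt(N0 / 2), n)         # AWGN with variance N0/2
y = (1 - 2 * b) * np.sqrt(Es) + w               # BPSK observation
lam = (4 * np.sqrt(Es) / N0) * y                # channel LLR

sigma2 = 8 * Es / N0                            # predicted LLR variance
lam0 = lam[b == 0]                              # condition on b = 0
print(lam0.mean(), sigma2 / 2)                  # empirical vs predicted mean
print(lam0.var(), sigma2)                       # empirical vs predicted variance
```

Conditioned on $b = 0$, the empirical mean lands on $\sigma^2/2$ and the empirical variance on $\sigma^2$, confirming that no second parameter is needed.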


Definition:

The J-Function

The J-function is the mutual information between a uniformly distributed binary input $B \in \{0, 1\}$ and a consistent-Gaussian LLR $\lambda$ of standard deviation $\sigma$: $$J(\sigma) \triangleq I(B; \lambda) = 1 - \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi \sigma^2}} \exp\!\left(-\frac{(\xi - \sigma^2/2)^2}{2\sigma^2}\right) \log_2\!\left(1 + e^{-\xi}\right) d\xi.$$ It is a monotonically increasing function $J : [0, \infty) \to [0, 1]$ with $J(0) = 0$ and $\lim_{\sigma \to \infty} J(\sigma) = 1$. The inverse $J^{-1} : [0, 1] \to [0, \infty)$ is equally important: given a target mutual information $I \in [0, 1]$, $J^{-1}(I)$ is the LLR standard deviation needed to achieve it.

Several accurate closed-form approximations to $J(\sigma)$ are tabulated in [?ten-brink-kramer-ashikhmin-2004], including a two-piece fit of the form $J(\sigma) \approx 1 - 2^{-H_1 \sigma^{2 H_2}}$ for small $\sigma$ and $J(\sigma) \approx 1 - e^{-H_3 \sigma + H_4}$ for large $\sigma$. Any numerical implementation of EXIT analysis should tabulate $J$ and $J^{-1}$ once and interpolate.


Example: Computing $J(1)$ and $J(2)$

Compute $J(\sigma)$ for $\sigma = 1$ and $\sigma = 2$ to 4-digit accuracy using the defining integral or a tabulated approximation. Interpret the values.

Definition:

Demapper EXIT Curve $T_{\mathrm{dem}}$

Let the demapper take independent consistent-Gaussian a-priori LLRs $\lambda_A$ with variance $\sigma_A^2$ satisfying $J(\sigma_A) = I_A$, and channel observations at a given SNR. Assume the labelling $\mu$ is symmetric (bit-channel invariance under the label group). The demapper's extrinsic output $\lambda_E$ is modelled as consistent Gaussian with variance $\sigma_E^2$, and the demapper EXIT curve is the map $$T_{\mathrm{dem}}(I_A, \mathrm{SNR}) \triangleq I(B; \lambda_E) = J(\sigma_E), \qquad \sigma_E^2 = f_{\mathrm{dem}}(\sigma_A^2, \mathrm{SNR}, \mu),$$ where $f_{\mathrm{dem}}$ is determined by the labelling, the constellation, and the channel. In practice $T_{\mathrm{dem}}$ is computed by Monte Carlo over the demapper, fitting the output LLR to a consistent Gaussian by MI matching.

Three landmarks identify a demapper EXIT curve at a glance. At $I_A = 0$ the curve starts at the one-shot BICM bit-channel MI: $T_{\mathrm{dem}}(0, \mathrm{SNR}) = I(B; \lambda_{\mathrm{ch}})$, the Ch. 5 bit-channel capacity. At $I_A = 1$ the curve ends at 1 if the demapper-with-perfect-a-priori can identify the bit uniquely (which depends on the labelling, s04), otherwise at a value less than 1. Between the two endpoints, the steepness of the curve is set by how much the a priori on other bits SHARPENS the bit-0 LLR — the crucial quantity that distinguishes SP from Gray (s04).
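The Monte Carlo recipe can be made concrete on a deliberately small toy setup. The sketch below computes one point of a demapper EXIT curve for 4-PAM with Gray labelling on a real AWGN channel; the constellation, labelling, grid sizes, and sample counts are all illustrative choices, not anything fixed by the text:

```python
import numpy as np

rng = np.random.default_rng(7)

def J(sigma, num=2001, span=10.0):
    # J(sigma) by trapezoidal integration of the defining integral
    if sigma < 1e-9:
        return 0.0
    m = sigma ** 2 / 2
    xi = np.linspace(m - span * sigma, m + span * sigma, num)
    pdf = np.exp(-((xi - m) ** 2) / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2)
    f = pdf * np.logaddexp(0.0, -xi) / np.log(2.0)
    return 1.0 - float(np.sum((f[1:] + f[:-1]) * np.diff(xi)) / 2)

# Tabulate J once so that J^{-1} is an interpolated lookup.
SIGMAS = np.linspace(0.0, 8.0, 161)
JTAB = np.array([J(s) for s in SIGMAS])

def mi_from_llr(llr, bits):
    # I(B; lambda) ~= 1 - E[log2(1 + e^{-(1-2b) lambda})]
    return 1.0 - np.mean(np.logaddexp(0.0, -(1 - 2 * bits) * llr)) / np.log(2.0)

# Toy setup: 4-PAM (2 bits/symbol) with Gray labelling, real AWGN.
LABELS = np.array([[0, 0], [0, 1], [1, 1], [1, 0]])      # Gray map
AMPS = np.array([-3.0, -1.0, 1.0, 3.0]) / np.sqrt(5.0)   # unit average energy

def demapper_exit(IA, snr_db, n=50_000):
    """One point of T_dem(I_A, SNR): feed consistent-Gaussian a-priori
    LLRs of MI I_A into an exact MAP demapper and measure extrinsic MI."""
    N0 = 10 ** (-snr_db / 10)                    # Es = 1
    sA = float(np.interp(IA, JTAB, SIGMAS))      # sigma_A = J^{-1}(I_A)
    idx = rng.integers(0, 4, n)
    bits = LABELS[idx]                           # (n, 2) true bits
    y = AMPS[idx] + rng.normal(0.0, np.sqrt(N0 / 2), n)
    la = (1 - 2 * bits) * sA ** 2 / 2 + sA * rng.standard_normal((n, 2))
    # log metric per candidate symbol: channel term + full a-priori term
    met = -(y[:, None] - AMPS[None, :]) ** 2 / N0 - la @ LABELS.T
    le = np.empty_like(la)
    for i in range(2):                           # extrinsic LLR per bit position
        m0 = met[:, LABELS[:, i] == 0]
        m1 = met[:, LABELS[:, i] == 1]
        lpost = np.logaddexp.reduce(m0, axis=1) - np.logaddexp.reduce(m1, axis=1)
        le[:, i] = lpost - la[:, i]              # extrinsic = posterior - a-priori
    return mi_from_llr(le.ravel(), bits.ravel())
```

Evaluating `demapper_exit(0.0, 6.0)` gives the left landmark (one-shot bit-channel MI); sweeping `IA` over a grid traces the whole curve, and the right endpoint at `IA = 1.0` shows how much perfect a priori sharpens this labelling.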


Theorem: Demapper EXIT Is a Function of $(I_A, \mathrm{SNR})$ Alone

Under the consistent-Gaussian LLR assumption for the a-priori input and the symmetry of the labelling, the demapper extrinsic mutual information $I_E$ depends on the a-priori LLR distribution only through its mutual information $I_A = J(\sigma_A)$. Equivalently, $T_{\mathrm{dem}}(I_A, \mathrm{SNR})$ is well-defined as a function of two scalars: a-priori MI and channel SNR. The same conclusion holds for the decoder EXIT $T_{\mathrm{dec}}(I_A, R)$: under the consistent-Gaussian model, the decoder extrinsic MI depends on the input LLR distribution only through $I_A$ and the code parameters (rate, degree distribution).

Within the consistent-Gaussian family, a single parameter — variance, equivalently MI — describes the entire distribution. So any functional of the distribution (including the demapper's extrinsic output) can be parametrised by a single scalar. This is what allows us to plot $I_E$ against $I_A$ instead of tracking full densities.


Definition:

Decoder EXIT Curve $T_{\mathrm{dec}}$

Let the SISO decoder take a-priori LLRs $\lambda_A$ with MI $I_A$ (assumed consistent Gaussian) and produce extrinsic LLRs $\lambda_E$. The decoder EXIT curve is $$T_{\mathrm{dec}}(I_A, R) \triangleq I(B; \lambda_E),$$ where $R$ is the code rate (and, for LDPC, the degree profile). For a $(d_v, d_c)$-regular LDPC code, the curve factorises cleanly into check-node and variable-node sub-curves; for convolutional codes it is computed by running the BCJR algorithm with consistent-Gaussian a-priori noise. On the EXIT chart the decoder curve is plotted with the ROLES OF $I_A$ AND $I_E$ SWAPPED: its INVERSE $I_A = T_{\mathrm{dec}}^{-1}(I_E, R)$ is drawn on the same axes as the demapper curve, so both maps share the $(I_A, I_E)$ coordinate system.

The "swap" convention matters: when following the iteration on the chart, the demapper output $I_E$ becomes the decoder input $I_A$ (after deinterleaving, which preserves MI), so the natural reading direction alternates horizontal — vertical — horizontal between the two curves. The staircase trajectory is what falls out.
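For the $(d_v, d_c)$-regular LDPC case, the variable-node and check-node sub-curves can be written directly in terms of $J$ and $J^{-1}$. The sketch below uses the variance-addition rule at variable nodes and the widely used duality approximation at check nodes; both are approximations under the consistent-Gaussian model, and the numerical-integration parameters are arbitrary choices:

```python
import numpy as np

def J(sigma, num=2001, span=10.0):
    # J(sigma) by trapezoidal integration of the defining integral
    if sigma < 1e-9:
        return 0.0
    m = sigma ** 2 / 2
    xi = np.linspace(m - span * sigma, m + span * sigma, num)
    pdf = np.exp(-((xi - m) ** 2) / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2)
    f = pdf * np.logaddexp(0.0, -xi) / np.log(2.0)
    return 1.0 - float(np.sum((f[1:] + f[:-1]) * np.diff(xi)) / 2)

def J_inv(I, hi=40.0):
    # invert the monotone J by bisection
    lo = 0.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if J(mid) < I:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def vnd_exit(IA, Ich, dv):
    """Variable-node sub-curve: the (dv - 1) incoming edge messages and
    the demapper message of MI Ich combine by adding LLR variances."""
    return J(np.sqrt((dv - 1) * J_inv(IA) ** 2 + J_inv(Ich) ** 2))

def cnd_exit(IA, dc):
    """Check-node sub-curve via the standard duality approximation."""
    return 1.0 - J(np.sqrt(dc - 1) * J_inv(1.0 - IA))
```

Composing `vnd_exit` and `cnd_exit` over the edge-message iteration yields the decoder EXIT curve that gets inverted and drawn on the chart.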


EXIT Chart: Demapper, Decoder, and Iteration Trajectory

The EXIT chart for 16-QAM BICM-ID. The blue curve is the demapper EXIT $T_{\mathrm{dem}}(I_A, \mathrm{SNR})$ for the selected labelling and SNR. The red curve is the INVERTED decoder EXIT $T_{\mathrm{dec}}^{-1}(I_E, R)$ for a regular rate-$R$ LDPC code (or, equivalently, a convolutional code of the same rate), plotted on the same $(I_A, I_E)$ axes. The green staircase is the iterative trajectory starting from $(0, 0)$: each vertical step is one demapper pass (moves $I_A \to T_{\mathrm{dem}}(I_A)$), each horizontal step is one decoder pass (moves along the inverted decoder curve). The trajectory converges to $(1, 1)$ if the tunnel is open, i.e., the demapper curve lies strictly above the inverted decoder curve. Try SNR = 2 dB with SP and watch the tunnel close.


Pattern: EXIT Curves Are Transfer Functions

Thinking of $T_{\mathrm{dem}}$ and $T_{\mathrm{dec}}$ as transfer functions in the sense of linear-system theory is productive even though both maps are nonlinear. The iteration $I_A^{(t+1)} = T_{\mathrm{dec}}(T_{\mathrm{dem}}(I_A^{(t)}, \mathrm{SNR}), R)$ is the feedback loop; its fixed points are where $I_A = T_{\mathrm{dec}}(T_{\mathrm{dem}}(I_A, \mathrm{SNR}), R)$; stability of a fixed point is governed by the local slopes of the two curves. When we talk about "matched" codes in s05, the right analogy is loop shaping: arrange the decoder curve to mirror the demapper curve from just below, so the closed-loop gain is slightly greater than 1 everywhere on $[0, 1)$ and the trajectory is pulled up to $(1, 1)$.
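The feedback-loop view fits in a few lines of code. The two "curves" below are purely illustrative stand-ins (an affine demapper and a power-law decoder, not real transfer functions); the point is the fixed-point iteration itself and the open- vs closed-tunnel dichotomy:

```python
def iterate(T_dem, T_dec, n_iter=60):
    """Run the loop I_A <- T_dec(T_dem(I_A)) from I_A = 0 and return the
    trajectory of decoder a-priori MI values (the staircase corners)."""
    traj = [0.0]
    for _ in range(n_iter):
        traj.append(T_dec(T_dem(traj[-1])))
    return traj

# Toy stand-in curves, not real demapper/decoder EXITs:
def dem_open(I):      # "high-SNR" demapper: tunnel open
    return 0.40 + 0.60 * I

def dem_closed(I):    # "low-SNR" demapper: tunnel closed
    return 0.10 + 0.60 * I

def dec(I):           # monotone decoder stand-in with dec(1) = 1
    return I ** 0.6

print(iterate(dem_open, dec)[-1])    # climbs to ~1: convergence
print(iterate(dem_closed, dec)[-1])  # stalls below 1: trapped fixed point
```

With the open curve the composite map satisfies $T_{\mathrm{dec}}(T_{\mathrm{dem}}(I)) > I$ on $[0, 1)$ and the orbit climbs to 1; lowering the demapper intercept closes the tunnel and the orbit is captured by a fixed point well short of 1.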

Common Mistake: The Gaussian-LLR Assumption Is Not Always Accurate

Mistake:

EXIT charts assume all LLR densities along the iteration are consistent Gaussian. The chart looks clean and gives sharp convergence thresholds. It is tempting to report those thresholds as if they were operating SNRs for finite-block-length implementations.

Correction:

The consistent-Gaussian approximation is excellent for regular LDPC decoders at moderate iterations on BI-AWGN, and good enough for higher-order modulation DEMAPPERS that average over multiple symbols. It fails in three regimes: (i) very low SNR, where demapper LLRs become heavy-tailed non-Gaussian; (ii) short block lengths (< 1000 bits), where the interleaver does not fully decorrelate and the i.i.d. a-priori assumption breaks; (iii) early iterations of irregular LDPC codes with high-degree variable nodes, where the output distribution is visibly bimodal. When in doubt, validate the EXIT-chart prediction with a Monte Carlo BER simulation at 2–3 design SNRs before committing to the design. Density evolution is the expensive but exact alternative.

Historical Note: ten Brink's EXIT Chart (2001)

2001

Stephan ten Brink introduced the EXIT chart in 2001 in an IEEE Transactions on Communications paper titled "Convergence behavior of iteratively decoded parallel concatenated codes." The context was turbo-code analysis — two parallel soft decoders exchanging extrinsic information — and the paper's central innovation was the realisation that the full LLR-density dynamics could be replaced by one-dimensional mutual-information dynamics with negligible loss of predictive power. Within a year the methodology had been ported to BICM-ID (by ten Brink himself and by Li and Ritcey), to iterative equalisation (Tüchler et al.), and to LDPC decoding (ten Brink, Kramer, Ashikhmin). By the mid-2000s EXIT charts were a standard design tool in industrial coding labs. The underlying Gaussian-LLR approximation has been rigorously justified for binary erasure channels [?ashikhmin-kramer-ten-brink-2004] and remains an excellent engineering approximation on more general channels.

Historical Note: Li and Ritcey: BICM-ID Before EXIT Charts

1997–1999

Four years before ten Brink's EXIT-chart paper, Xiaodong Li and James Ritcey of the University of Washington noticed that the iterative-decoding principle developed by Berrou, Glavieux, and Thitimajshima for turbo codes could be applied to BICM: let the decoder's extrinsic output serve as a priori for the demapper, and iterate. Their 1997 IEEE Communications Letters paper showed, by Monte Carlo simulation, that on Rayleigh fading BICM-ID could close 2–3 dB of the one-shot BICM gap — and that set-partition labelling was needed to realise that gain, overturning the Gray-is-best conclusion for one-shot BICM (Ch. 6). They lacked a theoretical tool for predicting the convergence threshold, however, and their 1999 follow-up paper used ad-hoc fixed-point analysis. ten Brink's EXIT chart supplied the missing theoretical tool and, in retrospect, explains exactly WHY SP's steeper demapper curve is the right choice under iteration.


Quick Check

What does the J-function $J(\sigma)$ give?

The BER of BPSK over AWGN at SNR $\sigma^2$.

The mutual information between a binary input and a consistent-Gaussian LLR of standard deviation $\sigma$.

The capacity of 16-QAM at SNR $\sigma$.

The rate of a BICM system at SNR $\sigma$ with Gray labelling.

J-function

The mutual information between a uniformly distributed binary input $B \in \{0, 1\}$ and a consistent-Gaussian LLR $\Lambda \sim \mathcal{N}((1 - 2B)\sigma^2/2, \sigma^2)$. A bijection $J : [0, \infty) \to [0, 1]$ with $J(0) = 0$, $J(\infty) = 1$. The J-function and its inverse are the bridge between the LLR-variance and the mutual-information views of a bit channel, and are the numerical workhorse of every EXIT-chart computation.

Related: EXIT Chart, Consistent-Gaussian LLR, Mutual Information

EXIT Chart (Extrinsic Information Transfer Chart)

A two-dimensional diagram on the unit square $(I_A, I_E) \in [0, 1]^2$ showing the demapper and decoder transfer curves of an iterative receiver. The demapper curve is $I_E = T_{\mathrm{dem}}(I_A, \mathrm{SNR})$; the decoder curve is plotted with swapped axes, $I_A = T_{\mathrm{dec}}^{-1}(I_E, R)$. The iteration follows a staircase between the two curves; convergence to $(1, 1)$ occurs iff the tunnel between them is open.

Related: J-function, Fixed-Point Rate of a Converged BICM-ID Receiver, Convergence Threshold

Key Takeaway

EXIT charts compress density evolution into a 1D dynamical system. Under the consistent-Gaussian LLR assumption, each soft-in/soft-out box is a scalar map from a-priori MI to extrinsic MI. The demapper map $T_{\mathrm{dem}}(\cdot, \mathrm{SNR})$ and decoder map $T_{\mathrm{dec}}(\cdot, R)$ jointly define BICM-ID's iteration, and the J-function converts between MI and LLR variance so the maps can be implemented with a single tabulated function. Everything interesting — convergence, thresholds, code design — becomes a geometric property of two curves on the unit square.