Exercises
ex-ch19-1
Easy: A SISO decoder produces the a-posteriori LLR $L_D$ for a systematic bit whose channel LLR is $L_c$ and whose a-priori LLR (from the peer decoder) is $L_A$. Compute the extrinsic LLR that must be passed back to the peer decoder, and state why the subtraction is essential.
Recall the definition $L_E = L_D - L_c - L_A$.
What happens to LLR magnitudes if $L_D$ is fed back instead?
Apply the extrinsic formula
$L_E = L_D - L_c - L_A$.
Interpret the subtraction
The peer decoder already knows $L_c$ for the systematic bit and supplied $L_A$ itself. Reusing either amounts to positive feedback: the same evidence is counted twice, LLR magnitudes blow up, and the decoder becomes overconfident about wrong decisions. Only $L_E$ represents new information learned through the code constraints.
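As a sanity check, the subtraction can be exercised with made-up LLR values (the numbers below are illustrative, not from the exercise):

```python
# Illustrative LLR values (assumed, not from the exercise)
L_D = 4.2   # a-posteriori LLR out of the SISO decoder
L_c = 1.5   # channel LLR of the systematic bit
L_A = 0.9   # a-priori LLR supplied by the peer decoder

# Extrinsic LLR: remove everything the peer already knows
L_E = L_D - L_c - L_A

# Feeding back L_D instead of L_E would re-inject L_c + L_A into the peer,
# inflating LLR magnitudes iteration after iteration.
assert abs(L_E - 1.8) < 1e-9
assert abs(L_D) > abs(L_E)
```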
ex-ch19-2
Easy: A parallel concatenated convolutional code uses two identical rate-$1/2$ RSC encoders, puncturing the parity of the second encoder to keep only every other output bit. What is the overall rate of the PCCC, and what fraction of transmitted bits is systematic?
For $K$ information bits count how many parity bits survive.
Rate equals info bits divided by total transmitted bits.
Count transmitted bits per $K$ info bits
Systematic: $K$. Parity 1: $K$. Parity 2 punctured to $K/2$. Total: $5K/2$.
Compute rate and systematic fraction
Rate $= K/(5K/2) = 2/5$. Systematic fraction is $K/(5K/2) = 2/5$ of transmitted bits.
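The count holds for any block length; a quick check with exact rational arithmetic (the block length is chosen arbitrarily):

```python
from fractions import Fraction

K = 1000                   # information bits (any even K works)
systematic = K             # systematic outputs
parity1 = K                # RSC 1 at rate 1/2: one parity bit per info bit
parity2 = K // 2           # RSC 2 parity punctured to every other bit
total = systematic + parity1 + parity2

rate = Fraction(K, total)
sys_fraction = Fraction(systematic, total)
assert rate == Fraction(2, 5)
assert sys_fraction == Fraction(2, 5)
```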
ex-ch19-3
Easy: For the symmetric-Gaussian LLR model, verify that $J(\sigma) \to 0$ as $\sigma \to 0$ and $J(\sigma) \to 1$ as $\sigma \to \infty$. What do these two limits mean operationally for a SISO block?
At $\sigma = 0$ the LLR is deterministic: which value?
At $\sigma \to \infty$ the conditional means of $L$ diverge.
Limit $\sigma \to 0$
$L \mid x \sim \mathcal{N}(x\,\sigma^2/2, \sigma^2)$. As $\sigma \to 0$ both distributions concentrate at $0$, so $L$ carries no information about $x$: $J(0) = 0$.
Limit $\sigma \to \infty$
The two conditional distributions separate (the means $\pm\sigma^2/2$ grow faster than the standard deviation $\sigma$), so $\operatorname{sign}(L)$ determines $x$: $J(\infty) = 1$ bit.
Operational meaning
$J = 0$ means a-priori LLRs are useless; the SISO block decodes from channel data alone. $J = 1$ means a-priori LLRs are perfect; nothing new can be learned and the iteration terminates.
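The two limits can be checked numerically. The sketch below evaluates $J(\sigma) = 1 - \mathbb{E}[\log_2(1 + e^{-L})]$ for $L \sim \mathcal{N}(\sigma^2/2, \sigma^2)$ with a midpoint Riemann sum; the function name and integration settings are ours:

```python
import math

def J(sigma, n=20000):
    """Mutual information of a consistent Gaussian LLR:
    L | x=+1 ~ N(sigma^2/2, sigma^2), J = 1 - E[log2(1 + exp(-L))]."""
    if sigma == 0.0:
        return 0.0
    mu, sd = sigma**2 / 2.0, sigma
    lo, hi = mu - 10.0 * sd, mu + 10.0 * sd
    h = (hi - lo) / n
    expect = 0.0
    for i in range(n):
        L = lo + (i + 0.5) * h
        pdf = math.exp(-((L - mu) ** 2) / (2.0 * sd * sd)) / (sd * math.sqrt(2.0 * math.pi))
        expect += pdf * math.log1p(math.exp(-L)) / math.log(2.0) * h
    return 1.0 - expect

assert J(0.01) < 1e-3     # sigma -> 0: LLR concentrates at 0, no information
assert J(10.0) > 0.999    # sigma -> infinity: sign(L) reveals x, one full bit
```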
ex-ch19-4
Medium: Two identical RSC decoders have EXIT functions $T_1 = T_2 = T$ that depend continuously on SNR. Explain geometrically why the turbo cliff occurs where the curves $T_1$ and $T_2^{-1}$ first become tangent, and sketch how the trajectory stair-steps just above and just below the cliff.
Trajectory is $I^{(n+1)} = T_2(T_1(I^{(n)}))$.
A fixed point is an intersection of $T_1$ and $T_2^{-1}$.
Fixed points are intersections
A fixed point of the iteration satisfies $T_2(T_1(I)) = I$, i.e., $T_1(I) = T_2^{-1}(I)$. Graphically, these are crossings of $T_1$ with its mirror. Between two crossings the trajectory moves monotonically toward the upper one and is trapped below the upper fixed point.
Just below the cliff
The curves cross at some $I^* < 1$. The trajectory climbs stairwise from $I = 0$, squeezes into the narrowing tunnel, and stalls at $I^*$. Residual BER is determined by $I^* < 1$ and is nonzero.
At and above the cliff
At the cliff the two curves touch tangentially: the trajectory makes an extremely slow passage through the pinch point, so convergence takes many iterations. Just above, the tunnel is open and the trajectory reaches $I = 1$ within a handful of iterations, producing the steep BER drop known as the waterfall.
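The stall/converge dichotomy can be reproduced with a toy transfer function (illustrative only, not an RSC EXIT curve): $T(I) = g + (1-g)I^2$ has fixed points $I = g/(1-g)$ and $I = 1$, which merge tangentially at $g = 1/2$, so $g$ plays the role of SNR sweeping through the cliff.

```python
def T(I, g):
    """Toy EXIT curve (illustrative, not an RSC curve): T(I) = g + (1 - g) I^2.
    Fixed points: I = g/(1-g) and I = 1; they merge (tangency) at g = 1/2."""
    return g + (1.0 - g) * I * I

def run(g, iters=500):
    I = 0.0
    for _ in range(iters):
        I = T(T(I, g), g)   # one turbo iteration = both identical decoders
    return I

below_cliff = run(0.40)     # curves cross at I* = g/(1-g) = 2/3 < 1: trajectory stalls
above_cliff = run(0.55)     # only remaining fixed point is I = 1: open tunnel
assert abs(below_cliff - 2.0 / 3.0) < 1e-9
assert above_cliff > 0.999
```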
ex-ch19-5
Medium: A BCJR decoder receives a-priori LLR $L_A(u_k)$ on the information bits and channel LLRs $L_c(x_k^s)$, $L_c(x_k^p)$ on the systematic and parity bits. It outputs a-posteriori LLR $L_D(u_k)$ on the information bits. Derive the expression for the extrinsic LLR of bit $u_k$ that the decoder should forward to its peer, and explain why only the systematic channel LLR is subtracted.
BCJR computes the a-posteriori probability of $u_k$ as a product of forward, backward, and branch metrics.
Which inputs did the peer decoder not provide?
Structure of $L_D$
For a rate-1/2 RSC, $L_D(u_k) = L_c(x_k^s) + L_A(u_k) + L_E(u_k)$, where $L_E(u_k)$ is the information extracted from the parity constraints via forward-backward propagation.
What to subtract
The peer decoder (via the interleaver) already supplied $L_A(u_k)$ and will directly use $L_c(x_k^s)$ for its own systematic bit. Hence the extrinsic output is $L_E(u_k) = L_D(u_k) - L_A(u_k) - L_c(x_k^s)$.
Parity LLRs are not subtracted
$L_c(x_k^p)$ (parity of encoder 1) is information that only this decoder owns; the peer cannot double-count it. Hence only the systematic channel LLR is subtracted.
ex-ch19-6
Medium: For BPSK symbols $x \in \{\pm 1\}$ with extrinsic LLR $L$ from the decoder, derive the soft mean $\bar{x} = \mathbb{E}[x]$ and variance $v = \mathbb{E}[x^2] - \bar{x}^2$ used by the soft-IC equalizer.
Use $P(x = \pm 1) = e^{\pm L/2}/(e^{L/2} + e^{-L/2})$.
$x^2 = 1$ for BPSK, so $\mathbb{E}[x^2] = 1$.
Compute the mean
$\bar{x} = (+1)\,P(x = +1) + (-1)\,P(x = -1) = \tanh(L/2)$.
Compute the variance
Since $x^2 = 1$ deterministically, $v = 1 - \bar{x}^2 = 1 - \tanh^2(L/2)$.
Interpret the limits
$L = 0$ gives $\bar{x} = 0$, $v = 1$ (no decoder knowledge). $|L| \to \infty$ gives $\bar{x} \to \pm 1$, $v \to 0$ (hard decision).
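A direct numeric check of the two formulas and their limits (stdlib only):

```python
import math

def soft_symbol(L):
    """BPSK posterior mean and variance given LLR L."""
    p_plus = 1.0 / (1.0 + math.exp(-L))          # P(x = +1)
    x_bar = p_plus - (1.0 - p_plus)              # E[x], equals tanh(L/2)
    v = 1.0 - x_bar * x_bar                      # E[x^2] = 1 for BPSK
    return x_bar, v

x_bar, v = soft_symbol(2.0)
assert abs(x_bar - math.tanh(1.0)) < 1e-12
assert abs(v - (1.0 - math.tanh(1.0) ** 2)) < 1e-12

assert soft_symbol(0.0) == (0.0, 1.0)            # no knowledge: x_bar = 0, v = 1
x_hard, v_hard = soft_symbol(40.0)
assert abs(x_hard - 1.0) < 1e-12 and v_hard < 1e-12   # hard-decision limit
```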
ex-ch19-7
Hard: Consider a single-tap ISI channel with received signal $y_k = h_0 x_k + h_1 x_{k-1} + n_k$, $n_k \sim \mathcal{N}(0, \sigma^2)$. Assume BPSK symbols and that the decoder has provided soft estimates $\bar{x}_j$ and variances $v_j$. Derive the soft-IC LMMSE estimator for $x_k$ from the window $(y_k, y_{k+1})$, treating the interfering symbols as Gaussian with mean $\bar{x}_j$ and variance $v_j$.
Subtract $h_1 \bar{x}_{k-1}$ from $y_k$ and $h_0 \bar{x}_{k+1}$ from $y_{k+1}$.
Then apply the LMMSE solution to the residual model.
Residual after cancellation
Form $\tilde{y}_k = y_k - h_1 \bar{x}_{k-1}$ and $\tilde{y}_{k+1} = y_{k+1} - h_0 \bar{x}_{k+1}$. The residual model is $\tilde{\mathbf{y}} = \mathbf{h}\,x_k + \mathbf{w}$ with $\mathbf{h} = (h_0, h_1)^T$, whose interference-plus-noise term $\mathbf{w}$ has mean zero and covariance $\mathbf{R}_w = \operatorname{diag}(\sigma^2 + h_1^2 v_{k-1},\ \sigma^2 + h_0^2 v_{k+1})$.
LMMSE coefficient
For a scalar estimator $\hat{x}_k = \mathbf{f}^T \tilde{\mathbf{y}}$, the LMMSE gain is $\mathbf{f} = (\mathbf{h}\mathbf{h}^T + \mathbf{R}_w)^{-1}\mathbf{h}$.
Effective SINR
The output SINR is $\mathrm{SINR}_k = \mathbf{h}^T \mathbf{R}_w^{-1} \mathbf{h} = \dfrac{h_0^2}{\sigma^2 + h_1^2 v_{k-1}} + \dfrac{h_1^2}{\sigma^2 + h_0^2 v_{k+1}}$. As decoder feedback sharpens ($v \to 0$) the SINR grows toward the matched-filter bound $(h_0^2 + h_1^2)/\sigma^2$.
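A small numeric check of the SINR expression; the tap and noise values below are assumed for illustration:

```python
def soft_ic_sinr(h0, h1, sigma2, v):
    """Output SINR of the two-observation soft-IC LMMSE estimator:
    h^T R_w^{-1} h with R_w = diag(sigma2 + h1^2 v, sigma2 + h0^2 v)."""
    return h0**2 / (sigma2 + h1**2 * v) + h1**2 / (sigma2 + h0**2 * v)

h0, h1, sigma2 = 1.0, 0.6, 0.25                 # assumed values
mfb = (h0**2 + h1**2) / sigma2                  # matched-filter bound

assert abs(soft_ic_sinr(h0, h1, sigma2, 0.0) - mfb) < 1e-12   # perfect feedback
assert soft_ic_sinr(h0, h1, sigma2, 1.0) < mfb                # no feedback: loss
# SINR increases monotonically as the feedback variance v shrinks
sinrs = [soft_ic_sinr(h0, h1, sigma2, v / 10.0) for v in range(10, -1, -1)]
assert all(a < b for a, b in zip(sinrs, sinrs[1:]))
```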
ex-ch19-8
Medium: A rate-$1/2$ repetition code (repeat each bit twice) sees an AWGN channel with channel mutual information $I_{ch}$ per output bit. Assuming symmetric-Gaussian messages, show that the EXIT transfer function of the repetition decoder is $I_E(I_A) = J\!\left(\sqrt{[J^{-1}(I_{ch})]^2 + [J^{-1}(I_A)]^2}\right)$.
The extrinsic LLR for the $i$-th repetition is the sum of the channel LLR of the other repetition plus the a-priori LLR.
Variances of independent symmetric-Gaussian LLRs add.
LLR combination
For two repetitions of bit $b$ producing channel LLRs $L_{c,1}, L_{c,2}$, the extrinsic LLR on the first copy is $L_{E,1} = L_{c,2} + L_A$.
Add variances
Under symmetric-Gaussian consistency, variance is additive: $\sigma_E^2 = \sigma_{ch}^2 + \sigma_A^2$, where $\sigma_{ch} = J^{-1}(I_{ch})$ and $\sigma_A = J^{-1}(I_A)$.
Apply $J$
$I_E = J\!\left(\sqrt{\sigma_{ch}^2 + \sigma_A^2}\right) = J\!\left(\sqrt{[J^{-1}(I_{ch})]^2 + [J^{-1}(I_A)]^2}\right)$. Since $J$ is strictly increasing, repetition codes have strictly increasing EXIT curves.
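The variance-additivity step can be verified by simulation, drawing consistent Gaussian LLRs conditioned on $x = +1$; the consistency parameters below are arbitrary:

```python
import random

random.seed(0)
sig_ch, sig_a = 1.5, 2.0                 # assumed consistency parameters
n = 200_000
# Conditioned on x = +1: L_c2 ~ N(sig_ch^2/2, sig_ch^2) and
# L_A ~ N(sig_a^2/2, sig_a^2); the extrinsic LLR is their sum.
samples = [random.gauss(sig_ch**2 / 2, sig_ch) + random.gauss(sig_a**2 / 2, sig_a)
           for _ in range(n)]

mean = sum(samples) / n
var = sum((s - mean) ** 2 for s in samples) / n
sig_e2 = sig_ch**2 + sig_a**2            # predicted: variances add

assert abs(mean - sig_e2 / 2) < 0.05     # the sum stays consistent: mean sigma_E^2/2
assert abs(var - sig_e2) < 0.1           # sigma_E^2 = sigma_ch^2 + sigma_A^2
```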
ex-ch19-9
Medium: A turbo equalizer pairs an MMSE-PIC SISO equalizer with a memory-3 RSC decoder. At the operating SNR the EXIT curves are $T_{\mathrm{eq}}$ and $T_{\mathrm{dec}}$ on $[0,1]$. Compute the first three iterations of the trajectory starting from $I_A = 0$, and decide whether the loop converges to $I = 1$.
Compute $I_{E,\mathrm{eq}}$, then $I_{E,\mathrm{dec}}$, alternately.
Stop when the updates become negligible or you see divergence.
Iteration 1
. Then .
Iteration 2
. Then .
Iteration 3 and conclusion
. Then . Trajectory monotonically approaches 1. The tunnel is open and the loop converges.
ex-ch19-10
Medium: After many turbo iterations the decoder feedback becomes nearly perfect, so $\bar{x}_j \to x_j$ and $v_j \to 0$. Argue why the BER of an ISI+RSC turbo equalizer then approaches the matched-filter bound of the underlying code on an AWGN channel, independent of the ISI coefficients.
What happens to interference after perfect cancellation?
What does the equalizer reduce to in that limit?
Interference collapses
With $\bar{x}_j = x_j$ for all interfering symbols, the soft-IC residual is $\tilde{y}_k = h_0 x_k + n_k$: a genuine AWGN scalar observation for the current symbol. The ISI taps have been perfectly subtracted.
Equalizer becomes matched filter
The LMMSE filter reduces to a scalar matched-filter gain and passes per-symbol LLRs of AWGN quality to the decoder.
MFB attainment
The decoder now operates as if on AWGN without ISI, so its BER is the AWGN matched-filter bound of the outer code, independent of the ISI response (assuming nonzero main-tap energy).
ex-ch19-11
Hard: An $N_t \times N_r$ MIMO system uses layered detection with soft decoder feedback. The channel matrix $\mathbf{H}$ is i.i.d. complex Gaussian with $\mathbb{E}|h_{ij}|^2 = 1$; noise variance per receive antenna is $\sigma^2$; symbols are BPSK with feedback variance $v$ for each interferer. After soft cancellation, the SINR of layer 1 under matched-filter detection is approximately $\mathrm{SINR}_1 \approx \dfrac{N_r}{\sigma^2 + (N_t - 1)v}$. Verify this expression and interpret the two limits $v = 1$ and $v = 0$.
Signal power after matched filtering by $\mathbf{h}_1$ is $\|\mathbf{h}_1\|^4$.
Each interferer $j$ contributes residual variance through $|\mathbf{h}_1^H \mathbf{h}_j|^2 v$.
Signal and noise at MF output
The matched-filter output has signal power $\|\mathbf{h}_1\|^4 \approx N_r^2$ (deterministic limit) and noise power $\|\mathbf{h}_1\|^2 \sigma^2 \approx N_r \sigma^2$.
Residual interference after soft cancellation
Each of the $N_t - 1$ cancelled interferers contributes residual variance $|\mathbf{h}_1^H \mathbf{h}_j|^2 v$, which on average equals $\|\mathbf{h}_1\|^2 v \approx N_r v$ per interferer. Total interference power $\approx (N_t - 1) N_r v$. Dividing signal power by noise plus interference gives $\mathrm{SINR}_1 \approx N_r/(\sigma^2 + (N_t - 1)v)$.
Form SINR and interpret limits
$v = 1$ (no feedback): $\mathrm{SINR}_1 = N_r/(\sigma^2 + N_t - 1)$, the naive MF SINR. $v = 0$ (perfect feedback): $\mathrm{SINR}_1 = N_r/\sigma^2$, the single-user MFB.
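A Monte Carlo sketch of the approximation, with assumed dimensions and noise level (the approximation is a large-system statement, so moderate dimensions and a loose tolerance are used):

```python
import random

random.seed(1)

def cgauss():
    """CN(0,1) sample: real and imaginary parts N(0, 1/2)."""
    s = 0.5 ** 0.5
    return complex(random.gauss(0.0, s), random.gauss(0.0, s))

Nt, Nr, sigma2, trials = 16, 32, 0.5, 400       # assumed system parameters

def empirical_sinr(v):
    acc = 0.0
    for _ in range(trials):
        H = [[cgauss() for _ in range(Nt)] for _ in range(Nr)]
        h1 = [H[r][0] for r in range(Nr)]
        g = sum(abs(z) ** 2 for z in h1)          # ||h1||^2
        interf = sum(abs(sum(h1[r].conjugate() * H[r][j] for r in range(Nr))) ** 2
                     for j in range(1, Nt))       # sum_j |h1^H hj|^2
        acc += g * g / (g * sigma2 + v * interf)  # per-realization layer-1 SINR
    return acc / trials

predicted = lambda v: Nr / (sigma2 + (Nt - 1) * v)
for v in (1.0, 0.3, 0.0):
    assert abs(empirical_sinr(v) - predicted(v)) / predicted(v) < 0.15
```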
ex-ch19-12
Hard: Consider a scalar inference problem with true posterior $p(x \mid y)$, where $x \in \{\pm 1\}$ is a BPSK symbol. Derive the EP Gaussian approximation $q(x) = \mathcal{N}(\mu, \eta^2)$ obtained by matching the mean and variance of $p(x \mid y)$, in terms of the posterior LLR $L = \ln\frac{p(x = +1 \mid y)}{p(x = -1 \mid y)}$.
For a discrete-prior posterior of BPSK, $P(x = \pm 1 \mid y) = e^{\pm L/2}/(e^{L/2} + e^{-L/2})$.
$\mathbb{E}[x^2 \mid y] = 1$ since $x \in \{\pm 1\}$.
Compute the posterior probabilities
$P(x = +1 \mid y) = \frac{1}{1 + e^{-L}}$, $P(x = -1 \mid y) = \frac{1}{1 + e^{L}}$.
Match mean
$\mu = \mathbb{E}[x \mid y] = \tanh(L/2)$.
Match variance
$\eta^2 = \mathbb{E}[x^2 \mid y] - \mu^2 = 1 - \tanh^2(L/2)$. The EP Gaussian has these moments; its natural parameters are then passed as an effective Gaussian prior to neighbouring factors.
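The moment matching fits in a few lines of code; the helper name is ours:

```python
import math

def ep_project(L):
    """Moment-match a Gaussian N(mu, eta2) to the BPSK posterior with LLR L."""
    p_plus = 1.0 / (1.0 + math.exp(-L))   # P(x = +1 | y)
    mu = 2.0 * p_plus - 1.0               # E[x | y]
    eta2 = 1.0 - mu * mu                  # E[x^2 | y] = 1, minus mu^2
    return mu, eta2

for L in (-3.0, -0.4, 0.0, 0.7, 5.0):
    mu, eta2 = ep_project(L)
    assert abs(mu - math.tanh(L / 2.0)) < 1e-12
    assert abs(eta2 - (1.0 - math.tanh(L / 2.0) ** 2)) < 1e-12
```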
ex-ch19-13
Medium: LMMSE-PIC detection treats the symbol prior as $\mathcal{N}(0, 1)$ (matching only the second moment of a unit-energy constellation). Explain why EP's moment-matched Gaussian $\mathcal{N}(\mu, \eta^2)$ with feedback-dependent $\mu$ and $\eta^2$ is a tighter approximation, and describe one regime where EP provides the largest gains over LMMSE.
Both are Gaussian, so why is EP better?
Think about high-order QAM vs BPSK.
Prior used by each method
LMMSE-PIC always uses $\mathcal{N}(0, 1)$ for the residual symbol; this ignores decoder feedback about specific constellation points. EP's $\mathcal{N}(\mu, \eta^2)$ incorporates the posterior discrete constellation marginals and tightens with iteration.
Tightness
As feedback sharpens, EP's $\eta^2$ shrinks toward zero and $\mu$ concentrates on the true symbol; the LMMSE prior variance stays fixed at $1$. EP's filter is thus closer to matched filtering earlier.
Regime of largest gain
High-order QAM with moderate-to-large MIMO dimensions: LMMSE-PIC wastes prior information because $\mathcal{N}(0, 1)$ is far from the actual discrete QAM posterior, while EP tracks the full posterior mean and variance.
ex-ch19-14
Hard: In EP the message from a factor $f_a$ to variable $x$ is obtained as follows: (i) compute the belief $q(x)$; (ii) form the cavity $q^{\setminus a}(x) = q(x)/m_{a \to x}(x)$; (iii) tilt by the true factor: $\tilde{p}(x) \propto f_a(x)\,q^{\setminus a}(x)$; (iv) project onto a Gaussian by moment matching; (v) divide out the cavity. Explain why step (ii), forming the cavity, is essential and what happens if it is skipped.
Cavities prevent a message from using its own previous value.
This is the EP analogue of 'extrinsic'.
Role of the cavity
The cavity $q^{\setminus a}$ represents what the other factors collectively believe about $x$ without factor $a$'s contribution. Tilting by $f_a$ then incorporates $a$'s exact constraint exactly once.
What happens without a cavity
Without division by $m_{a \to x}$, the tilted distribution contains $m_{a \to x}$ itself, which double-counts factor $a$. The new message includes a copy of the previous message: the EP analogue of feeding the a-posteriori LLR back into the peer decoder.
Extrinsic interpretation
The cavity is exactly the extrinsic information of the turbo principle: the belief minus the contribution of the factor that will consume the new message. Both constructs enforce that only new evidence is propagated.
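For Gaussians, the cavity step is just a subtraction in natural parameters (precision $\lambda = 1/\sigma^2$ and precision-mean $\eta = \mu/\sigma^2$); a minimal sketch with made-up numbers:

```python
def gaussian_divide(num, den):
    """Divide 1-D Gaussians given as natural parameters (precision, precision*mean):
    q / m corresponds to subtracting natural parameters."""
    return (num[0] - den[0], num[1] - den[1])

belief = (4.0, 2.0)          # precision 4 -> var 1/4, mean 2/4 = 1/2
msg_a = (1.0, 1.0)           # factor a's current message: var 1, mean 1
cavity = gaussian_divide(belief, msg_a)

assert cavity == (3.0, 1.0)  # what everyone *else* believes: var 1/3, mean 1/3
# Skipping this division would tilt f_a against a belief that already
# contains m_a, counting factor a twice.
```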
ex-ch19-15
Medium: EP is known to sometimes diverge, oscillating between two moment-matched fixed points. A standard remedy is damping: update the natural parameters as $\theta_{\text{new}} = \epsilon\,\theta_{\text{prop}} + (1 - \epsilon)\,\theta_{\text{old}}$ with $\epsilon \in (0, 1]$. Explain intuitively why damping stabilises EP and what role the factor $\epsilon$ plays.
Undamped EP is a fixed-point iteration; large steps can overshoot.
Think of $\epsilon$ as a step size.
EP as fixed-point iteration
EP seeks moment-matching fixed points of a nonlinear map. In ill-conditioned regimes (e.g., strongly non-Gaussian posteriors) consecutive proposals can leapfrog the fixed point, producing oscillation.
Damping as step-size control
Damping mixes the new proposal with the old, reducing the effective update magnitude. Mixing natural parameters mixes the distributions geometrically, so this is analogous to gradient descent with a learning rate $\epsilon$ on a log scale.
Practical choice
Small $\epsilon$ is robust but slow; $\epsilon$ close to $1$ is fast but risks oscillation. Adaptive damping based on change-in-belief is common in practice.
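The stabilising effect can be seen on a toy one-dimensional fixed-point map with an unstable fixed point; the map is illustrative only and merely mimics an oscillating EP update:

```python
def F(x):
    """Toy fixed-point map standing in for the EP update. Its fixed point
    x* = 1 - 1/3.2 = 0.6875 has |F'(x*)| = 1.2 > 1, so the undamped
    iteration settles into a period-2 oscillation around it."""
    return 3.2 * x * (1.0 - x)

def iterate(eps, n=500, x=0.3):
    for _ in range(n):
        x = eps * F(x) + (1.0 - eps) * x   # damped update, eps = step size
    return x

x_star = 1.0 - 1.0 / 3.2
assert abs(iterate(1.0) - x_star) > 0.05   # undamped: oscillates, never settles
assert abs(iterate(0.5) - x_star) < 1e-9   # eps = 0.5: contraction restored
```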
ex-ch19-16
Easy: Why does a long interleaver (large $N$) give much better turbo-code performance than a short one, even at the same rate and channel SNR?
Think about the minimum distance spectrum.
Think about the validity of EXIT's Gaussian assumption.
Distance spectrum
A long random interleaver spreads low-weight input patterns across the two component codes; the joint codeword distance is much larger than either component's minimum distance. Short interleavers produce many low-weight codewords, causing high error floors.
EXIT validity
EXIT analysis assumes i.i.d. symmetric-Gaussian extrinsic LLRs. This holds asymptotically as $N \to \infty$. At small $N$ the Gaussian approximation fails and the waterfall threshold is much worse than predicted.
Consequence
Long interleavers lower both the waterfall (convergence threshold) and the error floor. The 5G-NR data channel uses LDPC codes with effective block sizes in the thousands for precisely this reason.
ex-ch19-17
Hard: An EXIT curve $I_E(I_A)$ encloses an area $A$ with the $I_A$ axis. For a code used on a BEC, ten Brink's area property relates $A$ to the code rate: $A = 1 - R$ for the decoder EXIT curve (and approximately so under the Gaussian model). Explain the implication: if the inner EXIT curve of a concatenated scheme encloses area $A_{\text{in}}$, what rate can the outer code have?
For convergence, the inner curve must lie above the mirrored outer curve.
Areas provide a matched-area condition.
Area property
Under the BEC (or, approximately, the Gaussian model), a code of rate $R$ has decoder EXIT area $A = 1 - R$.
Matched-area condition
For the tunnel to stay open, the outer decoder curve (when mirrored) must fit beneath the inner curve. This requires the outer area to satisfy $1 - A_{\text{out}} \le A_{\text{in}}$ with $A_{\text{out}} = 1 - R_{\text{out}}$, i.e., $R_{\text{out}} \le A_{\text{in}}$.
Design implication
The inner EXIT area determines the maximal achievable outer rate. Matching $R_{\text{out}}$ to $A_{\text{in}}$ by curve shaping (e.g., irregular LDPC degree distributions) approaches channel capacity.
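The area property has a closed-form instance that is easy to check numerically: on the BEC, the extrinsic transfer of an $(n, n-1)$ single parity-check code is $I_E = I_A^{\,n-1}$, whose area is $1/n = 1 - R$.

```python
def area(f, m=100_000):
    """Midpoint-rule approximation of the area under f on [0, 1]."""
    return sum(f((i + 0.5) / m) for i in range(m)) / m

for n in (3, 4, 8):
    exit_curve = lambda i, d=n - 1: i ** d   # SPC extrinsic transfer on the BEC
    R = (n - 1) / n                          # rate of the (n, n-1) SPC code
    assert abs(area(exit_curve) - (1.0 - R)) < 1e-6
```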
ex-ch19-18
Medium: Consider a 2-tap ISI channel $y_k = x_k + h_1 x_{k-1} + n_k$ with BPSK input, unit-energy main tap, and noise variance $\sigma^2$ (i.e., main-tap SNR $1/\sigma^2$). Assume perfect decoder feedback ($\bar{x}_j = x_j$, $v_j = 0$ for all symbols). Compute the per-symbol SINR at the equalizer output and compare to the matched-filter-bound SINR of a single-tap AWGN channel with the same main-tap energy.
With perfect feedback the ISI is exactly cancelled.
The MFB is $1/\sigma^2$ for a unit-energy tap.
After perfect cancellation
$\tilde{y}_k = y_k - h_1 \bar{x}_{k-1} = x_k + n_k$, since $\bar{x}_{k-1} = x_{k-1}$ exactly. The equalizer output equals the matched-filter output on AWGN with unit energy.
SINR computation
$\mathrm{SINR} = 1/\sigma^2$.
Comparison
This equals the MFB of unit-energy AWGN. The ISI tap $h_1$ is irrelevant under perfect feedback: it contributes no interference and no energy loss. The additional tap energy could be used if matched filtering across the ISI span were employed (combining tap $h_1$'s contribution from the next sample), but soft-IC without matched filtering discards it.
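A simulation sketch of the perfect-feedback claim; the second tap and noise variance are assumed values:

```python
import random

random.seed(3)
h1, sigma2, n = 0.8, 0.25, 100_000     # assumed tap and noise variance
x = [random.choice((-1.0, 1.0)) for _ in range(n + 1)]
y = [x[k] + h1 * x[k - 1] + random.gauss(0.0, sigma2 ** 0.5)
     for k in range(1, n + 1)]

# Perfect feedback: x_bar = x, so after cancellation only AWGN remains
err = [y[k - 1] - h1 * x[k - 1] - x[k] for k in range(1, n + 1)]
noise_power = sum(e * e for e in err) / n

assert abs(noise_power - sigma2) < 0.01              # residual is N(0, sigma2)
assert abs(1.0 / noise_power - 1.0 / sigma2) < 0.2   # SINR = 1/sigma2, the MFB
```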
References
- R. J. McEliece, D. J. C. MacKay, and J.-F. Cheng, Turbo Decoding as an Instance of Pearl's Belief Propagation Algorithm, 1998. Establishes that iterative turbo decoding is the sum-product (belief propagation) algorithm on the PCCC factor graph, run with a flooding schedule. Rigorously connects turbo codes to graphical models.
- S. ten Brink, Convergence Behavior of Iteratively Decoded Parallel Concatenated Codes, 2001. Introduces EXIT charts. Shows that under a symmetric-Gaussian model for extrinsic LLRs, the decoder trajectory is predicted by the composition of one-dimensional transfer functions.
- X. Wang and H. V. Poor, Iterative (Turbo) Soft Interference Cancellation and Decoding for Coded CDMA, 1999. Foundational paper on turbo multiuser detection. Derives soft MMSE-PIC equalization with extrinsic-LLR feedback from the decoder. The template for turbo equalization and iterative MIMO detection.
- J. Choi, A. C. Singer, J. Lee, and N. I. Cho, Improved Linear Soft-Input Soft-Output Detection via Soft Feedback Successive Interference Cancellation, 2010. Analyses soft feedback SIC with extrinsic information and derives the variance-matched LMMSE filter. Quantifies the SINR improvement from iteration to iteration for coded MIMO and ISI channels.
- T. P. Minka, Expectation Propagation for Approximate Bayesian Inference, 2001. The original expectation-propagation paper. Develops EP as a generalization of assumed-density filtering and a message-passing algorithm that matches moments on factor-graph edges.
- T. J. Richardson, M. A. Shokrollahi, and R. L. Urbanke, Design of Capacity-Approaching Irregular Low-Density Parity-Check Codes, 2001. Density evolution and EXIT-based design of irregular LDPC codes. The analytical engine underlying modern iterative-receiver code design.
- J. Cespedes, P. M. Olmos, M. Sanchez-Fernandez, and F. Perez-Cruz, Expectation Propagation Detection for High-Order High-Dimensional MIMO Systems, 2014. Applies EP to MIMO detection with high-order QAM. Demonstrates substantial gains over LMMSE-PIC for large constellations and moderate-dimension systems.