Exercises
ex-ch12-01
Easy: A binary code has the following four codewords: .
(a) Compute the minimum Hamming distance $d_{\min}$.
(b) How many errors can the code detect? Correct?
(c) Is this code linear?
Compute pairwise distances for all pairs.
A code is linear if the XOR of any two codewords is also a codeword.
Pairwise distances
The six pairwise distances $d(\mathbf{c}_i, \mathbf{c}_j)$ are found by counting the positions in which each pair differs.
$d_{\min}$ is the smallest of these.
Capabilities
Detects up to $d_{\min} - 1$ errors.
Corrects up to $\lfloor (d_{\min} - 1)/2 \rfloor$ errors.
Linearity
Check every pairwise XOR. All XOR results are codewords, and the XOR of any codeword with itself is the zero word, which is also in the code. The code is linear.
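These checks can be automated. A minimal sketch in Python, using a hypothetical 4-codeword code (a stand-in, since the exercise's own codewords are not restated here); the functions themselves are generic:

```python
from itertools import combinations

# Hypothetical 4-codeword binary code (stand-in for the exercise's
# codewords); the checks below work for any codebook.
code = ["00000", "01101", "10110", "11011"]

def hamming(a, b):
    """Number of positions in which two equal-length words differ."""
    return sum(x != y for x, y in zip(a, b))

def xor(a, b):
    """Bitwise XOR of two binary strings."""
    return "".join(str(int(x) ^ int(y)) for x, y in zip(a, b))

d_min = min(hamming(a, b) for a, b in combinations(code, 2))  # (a)
detect = d_min - 1                      # (b) guaranteed detection
correct = (d_min - 1) // 2              # (b) guaranteed correction
is_linear = all(xor(a, b) in code       # (c) closure under XOR
                for a, b in combinations(code, 2))
```

For this sample codebook, $d_{\min} = 3$, so it detects 2 errors, corrects 1, and is linear.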
ex-ch12-02
Medium: For the $(7,4)$ Hamming code whose parity-check matrix $H$ has the 3-bit binary representation of $i$ as its $i$-th column,
the received word is $\mathbf{r}$.
(a) Compute the syndrome $\mathbf{s} = H\mathbf{r}^T$.
(b) Identify the error position.
(c) Correct the received word.
The syndrome equals the column of corresponding to the error position.
A Hamming code has all non-zero 3-bit vectors as columns of .
Syndrome computation
Computing $\mathbf{s} = H\mathbf{r}^T \bmod 2$ gives $\mathbf{s} = (1,1,1)^T$.
Error location
$\mathbf{s} = (1,1,1)^T$ matches column 7 of $H$. The error is in position 7.
Correction
Flip bit 7 of $\mathbf{r}$ to obtain the corrected word $\hat{\mathbf{r}}$.
Verify: $H\hat{\mathbf{r}}^T = \mathbf{0}$.
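The syndrome-decoding steps above can be sketched in code. The received word `r` here is a hypothetical example (the all-zero codeword with bit 7 flipped), since the exercise's own $\mathbf{r}$ is not restated:

```python
# Parity-check matrix of the (7,4) Hamming code: column i (1..7) is
# the 3-bit binary expansion of i, so the syndrome, read as a binary
# number, is directly the error position.
H = [[(i >> k) & 1 for i in range(1, 8)] for k in (2, 1, 0)]

# Hypothetical received word: all-zero codeword with bit 7 flipped.
r = [0, 0, 0, 0, 0, 0, 1]

syndrome = [sum(h * x for h, x in zip(row, r)) % 2 for row in H]
pos = int("".join(map(str, syndrome)), 2)   # 0 means "no error detected"

r_hat = list(r)
if pos:
    r_hat[pos - 1] ^= 1                     # flip the flagged bit
```

Here the syndrome is $(1,1,1)$, i.e. position 7, and the corrected word has zero syndrome.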
ex-ch12-03
Medium: A rate-1/2 code with free distance $d_{\text{free}}$ is used with BPSK over an AWGN channel.
(a) Compute the asymptotic coding gain in dB.
(b) If uncoded BPSK requires $E_b/N_0 = 9.6$ dB for $P_b = 10^{-5}$, estimate the required $E_b/N_0$ with this code (using the asymptotic approximation).
(c) Why is the actual required $E_b/N_0$ typically higher than the asymptotic estimate?
Asymptotic coding gain (soft decision): $G_{\text{asym}} = 10\log_{10}(R\, d_{\text{free}})$ dB.
The asymptotic gain is tight only at very high SNR.
Coding gain
$G = R\, d_{\text{free}}$.
In dB: $G_{\text{dB}} = 10\log_{10}(R\, d_{\text{free}})$ dB.
Required $E_b/N_0$
Required $E_b/N_0 \approx 9.6\ \text{dB} - G_{\text{dB}}$.
Why higher in practice
The asymptotic approximation considers only the dominant (minimum-distance) error events. At moderate SNR, higher-weight error patterns contribute significantly. Also, the union bound used in the asymptotic analysis is not tight at low SNR. Practical codes typically operate 1-2 dB above the asymptotic prediction.
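A small helper makes the asymptotic estimate concrete. The values `rate = 0.5` and `d_free = 10` are illustrative assumptions (the exercise's $d_{\text{free}}$ is not restated here); 9.6 dB is the standard uncoded-BPSK requirement at $P_b = 10^{-5}$:

```python
import math

def asymptotic_gain_db(rate, d_free):
    """Soft-decision asymptotic coding gain G = R * d_free, in dB."""
    return 10 * math.log10(rate * d_free)

# Illustrative parameters (assumed, not the exercise's given values):
gain = asymptotic_gain_db(0.5, 10)   # 10*log10(5), about 7.0 dB
required = 9.6 - gain                # uncoded requirement minus gain
```

With these assumed numbers the code would need roughly 2.6 dB, and (per part (c)) a real decoder would need 1-2 dB more.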
ex-ch12-04
Easy: A communication system requires transmitting 1000 information bits per block.
(a) If a rate-3/4 code is used, how many coded bits are transmitted?
(b) If the channel bandwidth is 1 MHz and QPSK modulation is used, what is the maximum information throughput?
(c) What is the overhead (fraction of bandwidth used for redundancy)?
With QPSK, each symbol carries 2 bits.
Coded block length
$1000/(3/4) = 1333.3$ bits (rounded up to 1334).
Information throughput
QPSK symbol rate $= 1$ MHz (approximately, ignoring roll-off). Raw bit rate $= 2$ Mbps. Information rate $= (3/4) \times 2 = 1.5$ Mbps.
Overhead
Redundancy fraction $= 1 - R = 1/4 = 25\%$.
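The arithmetic of parts (a)-(c) in a few lines:

```python
import math

# Rate-3/4 coding of a 1000-bit information block, QPSK in 1 MHz.
k, rate = 1000, 3 / 4
n = math.ceil(k / rate)               # coded bits, rounded up

symbol_rate = 1e6                     # ~ bandwidth, ignoring roll-off
raw_bit_rate = 2 * symbol_rate        # QPSK carries 2 bits/symbol
info_rate = rate * raw_bit_rate       # information throughput
overhead = 1 - rate                   # fraction spent on redundancy
```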
ex-ch12-05
Medium: A rate-1/2 convolutional code has constraint length $K = 3$ and generators $g_1 = (111)$ and $g_2 = (101)$ (octal 7 and 5).
(a) Encode the input sequence $u = (1, 0, 1, 1)$ (assume the encoder starts in the all-zero state and is terminated with 2 tail bits).
(b) How many output bits are produced in total?
(c) What is the effective code rate including tail bits?
Use the shift register: at each step, shift in the new bit and compute both outputs.
$K - 1 = 2$ tail bits are needed to return to the zero state.
Encoding step by step
State = (register contents); input $u$ produces outputs $(c_1, c_2)$:

| Time | Input | State (old → new) | $c_1$ | $c_2$ |
|---|---|---|---|---|
| 1 | 1 | 00 → 10 | 1 | 1 |
| 2 | 0 | 10 → 01 | 1 | 0 |
| 3 | 1 | 01 → 10 | 0 | 0 |
| 4 | 1 | 10 → 11 | 0 | 1 |
| 5 | 0 (tail) | 11 → 01 | 0 | 1 |
| 6 | 0 (tail) | 01 → 00 | 1 | 1 |
Output: $(11\ 10\ 00\ 01\ 01\ 11)$.
Total output bits
$6 \times 2 = 12$ output bits.
Effective code rate
$R_{\text{eff}} = 4/12 = 1/3$.
Without tail bits: $R = 4/8 = 1/2$. The tail bits reduce the effective rate, especially for short blocks.
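The shift-register encoding can be checked programmatically. A minimal sketch of the (7,5) encoder, including the two tail bits:

```python
def conv_encode(bits, K=3, g1=0b111, g2=0b101):
    """Rate-1/2 feed-forward convolutional encoder, generators (7,5)
    octal. Appends K-1 zero tail bits to return to state 00."""
    state = 0                          # K-1 = 2 register bits
    out = []
    for u in bits + [0] * (K - 1):     # data followed by tail bits
        reg = (u << (K - 1)) | state   # [input, register contents]
        c1 = bin(reg & g1).count("1") % 2
        c2 = bin(reg & g2).count("1") % 2
        out += [c1, c2]
        state = reg >> 1               # shift the register
    return out

coded = conv_encode([1, 0, 1, 1])      # -> 11 10 00 01 01 11
```

This reproduces the table above: 12 output bits, matching the codeword $(11\ 10\ 00\ 01\ 01\ 11)$.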
ex-ch12-06
Hard: Using the rate-1/2, $K = 3$ convolutional code from Exercise 5, decode the received hard-decision sequence $(11\ 00\ 00\ 01\ 01\ 11)$ using the Viterbi algorithm.
Assume the encoder starts and ends in state 00.
Use Hamming distance as the branch metric.
At each time step, keep only the best path (survivor) to each state.
Branch metrics
At each time step, for each state transition, compute the Hamming distance between the received pair and the expected output pair.
Trellis traversal
Working through the trellis with path metric accumulation and survivor selection at each of the 6 time steps:
After processing all 6 received pairs and requiring termination at state 00, the survivor path gives the decoded input sequence.
Decoded sequence
The ML path through the trellis corresponds to the input sequence $(1, 0, 1, 1)$ followed by the two tail bits, which produces the codeword $(11\ 10\ 00\ 01\ 01\ 11)$.
The received sequence differs from this codeword in the second pair ($00$ vs $10$), indicating one channel error (in the first bit of pair 2).
The Viterbi decoder correctly recovers the original information bits $(1, 0, 1, 1)$.
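A compact hard-decision Viterbi decoder over the same (7,5) trellis confirms the result; `rx` below is the received sequence with the single error in pair 2:

```python
def viterbi_decode(rx, K=3, g1=0b111, g2=0b101):
    """Hard-decision Viterbi decoder for the rate-1/2 (7,5) code,
    assuming the encoder starts and ends in state 0."""
    n_states = 1 << (K - 1)
    INF = float("inf")
    metric = [0] + [INF] * (n_states - 1)   # start in state 0
    paths = [[] for _ in range(n_states)]
    pairs = [rx[i:i + 2] for i in range(0, len(rx), 2)]
    for c1, c2 in pairs:
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for u in (0, 1):
                reg = (u << (K - 1)) | s
                e1 = bin(reg & g1).count("1") % 2   # expected outputs
                e2 = bin(reg & g2).count("1") % 2
                ns = reg >> 1                       # next state
                m = metric[s] + (e1 != c1) + (e2 != c2)
                if m < new_metric[ns]:              # survivor selection
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [u]
        metric, paths = new_metric, new_paths
    # Terminate in state 0; strip the K-1 tail bits.
    return paths[0][:-(K - 1)], metric[0]

rx = [1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1]   # one error in pair 2
info, dist = viterbi_decode(rx)
```

The decoder returns $(1,0,1,1)$ with a final path metric of 1, i.e. the single channel error was corrected.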
ex-ch12-07
Medium: A rate-1/3 convolutional code with $K = 4$ has $d_{\text{free}} = 10$.
(a) What is the asymptotic coding gain over uncoded BPSK?
(b) Compare with a rate-1/2 code with $K = 7$ and $d_{\text{free}} = 10$.
(c) Which code would you prefer in practice and why?
Both codes have the same $d_{\text{free}}$ but different rates.
Coding gain for rate-1/3 code
$G = R\, d_{\text{free}} = (1/3)(10) = 10/3 \approx 3.33$.
In dB: $10\log_{10}(10/3) \approx 5.2$ dB.
Coding gain for rate-1/2 code
$G = (1/2)(10) = 5$.
In dB: $10\log_{10} 5 \approx 7.0$ dB.
Comparison
The rate-1/2 code has 1.76 dB more coding gain despite the same $d_{\text{free}}$, because the higher rate means less bandwidth expansion. However, the rate-1/2, $K = 7$ code requires $2^6 = 64$ Viterbi states vs. $2^3 = 8$ states for the rate-1/3, $K = 4$ code.
In practice, the rate-1/2, $K = 7$ code is preferred (it is the NASA standard) because the 1.76 dB gain justifies the moderate complexity of 64 states.
ex-ch12-08
Medium: A turbo code uses two identical rate-1/2 recursive systematic convolutional (RSC) constituent encoders.
(a) What is the overall code rate without puncturing?
(b) If every other parity bit from each encoder is punctured, what is the resulting code rate?
(c) Design a puncturing pattern to achieve rate 3/4.
Without puncturing, the output per information bit is $(x, p_1, p_2)$ (systematic, parity1, parity2).
Puncturing removes parity bits to increase the rate.
Rate without puncturing
Each encoder produces 1 parity bit per information bit. Output: 1 systematic + 1 parity1 + 1 parity2 = 3 bits per information bit.
$R = 1/3$.
Rate with 1/2 puncturing
Puncturing every other parity bit from each encoder removes half the parity bits: $1 + \tfrac{1}{2} + \tfrac{1}{2} = 2$ bits per info bit.
$R = 1/2$.
Rate 3/4 design
Need $R = 3/4$, so $4/3$ coded bits per info bit.
Output per 3 info bits must be 4 coded bits: 3 systematic + 1 parity. Puncture 5 out of 6 parity bits (keep 1 from alternating encoders every 3 bits).
$R = 3/4$.
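All three rates follow from one formula (1 systematic bit plus the surviving parity bits per information bit), sketched here with exact fractions:

```python
from fractions import Fraction

def turbo_rate(parity_per_info):
    """Turbo-code rate given the surviving parity bits per info bit."""
    return Fraction(1) / (1 + parity_per_info)

r_unpunctured = turbo_rate(Fraction(2))    # all parity kept -> 1/3
r_half = turbo_rate(Fraction(1))           # half of each stream -> 1/2
r_target = turbo_rate(Fraction(1, 3))      # 1 parity per 3 info -> 3/4
```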
ex-ch12-09
Hard: The EXIT function of a constituent decoder is approximated as
where $a$ and $b$ depend on $E_b/N_0$.
(a) Plot (or sketch) the EXIT chart for the turbo decoder.
(b) Determine whether the iterative decoder converges at this SNR.
(c) Estimate the minimum $E_b/N_0$ for convergence.
For a turbo code, overlay the decoder-1 curve $I_E = T(I_A)$ with the mirrored decoder-2 curve $I_A = T^{-1}(I_E)$.
Convergence requires an open tunnel between the two curves.
EXIT chart construction
Decoder 1: plot $I_{E,1} = T(I_{A,1})$ for $I_{A,1} \in [0, 1]$.
Decoder 2 (axes swapped): plot $I_{A,2} = T^{-1}(I_{E,2})$.
Evaluate the transfer function at the endpoints $I_A = 0$ and $I_A = 1$ to anchor both curves.
Convergence check
The tunnel is open if decoder 2's curve (with swapped axes) lies below decoder 1's curve everywhere. Evaluating both curves at intermediate points shows that this condition fails here.
The curves intersect, so the tunnel is closed. The decoder does not converge at this SNR.
Threshold estimation
Convergence requires the curves to just touch (tangent condition). Increasing $E_b/N_0$ (higher SNR) opens the tunnel; the convergence threshold is the $E_b/N_0$ at which tangency first occurs.
ex-ch12-10
Easy: The Shannon limit for a rate-1/2 binary-input AWGN channel is $E_b/N_0 \approx 0.19$ dB.
(a) A turbo code achieves $P_b = 10^{-5}$ at $E_b/N_0 = 0.7$ dB. How far is it from the Shannon limit?
(b) A convolutional code with $K = 7$ achieves $P_b = 10^{-5}$ at $E_b/N_0 = 4.2$ dB. How much improvement does the turbo code provide?
The gap to Shannon is simply the difference in dB.
Gap to Shannon
Gap $= 0.7 - 0.19 \approx 0.5$ dB from the Shannon limit.
Improvement over convolutional
Improvement $= 4.2 - 0.7 = 3.5$ dB.
The turbo code provides 3.5 dB more coding gain than the best practical convolutional code.
ex-ch12-11
Medium: An LDPC code has the parity-check matrix
(a) Draw the Tanner graph. (b) Is this a regular LDPC code? (c) What is the code rate? (d) Does the graph contain any 4-cycles?
Check if all column weights are equal and all row weights are equal.
A 4-cycle means two variable nodes are both connected to the same two check nodes.
Tanner graph
6 variable nodes ($v_1$ to $v_6$) and 3 check nodes ($c_1, c_2, c_3$).
Edges: each variable node $v_j$ connects to the check nodes whose rows of $H$ contain a 1 in column $j$.
Regularity
Column weights (variable-node degrees) are not all equal.
Hence the code is not regular.
Code rate
$R = (n - m)/n = (6 - 3)/6 = 1/2$ (assuming full-rank $H$).
Cycle check
Each of the listed variable-node pairs shares exactly one check node. No pair of variable nodes shares two check nodes simultaneously, so there are no 4-cycles.
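The regularity, rate, and 4-cycle checks generalise to any parity-check matrix. A sketch using a hypothetical 3×6 matrix `H` consistent with the properties derived above (the exercise's own $H$ is not restated here):

```python
from itertools import combinations

# Hypothetical 3x6 parity-check matrix (assumed example, not the
# exercise's H); the checks below work for any binary matrix.
H = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]

m, n = len(H), len(H[0])
col_weights = [sum(row[j] for row in H) for j in range(n)]
row_weights = [sum(row) for row in H]
regular = len(set(col_weights)) == 1 and len(set(row_weights)) == 1

rate = (n - m) / n        # 1/2, assuming H has full row rank

def has_4_cycle(H):
    """A 4-cycle exists iff two columns share a 1 in >= 2 rows."""
    cols = list(zip(*H))
    return any(sum(a & b for a, b in zip(c1, c2)) >= 2
               for c1, c2 in combinations(cols, 2))
```

For this sample matrix: irregular (column weights 2,2,2,1,1,1), rate 1/2, and no 4-cycles.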
ex-ch12-12
Hard: Using the Tanner graph from Exercise 11, perform one iteration of min-sum BP decoding given the channel LLRs $L_1, \dots, L_6$.
Compute the updated LLRs after one full iteration.
Min-sum approximation: the check-to-variable message magnitude is the minimum of incoming magnitudes, with sign = product of incoming signs.
Variable-to-check initialisation
Each variable node sends its channel LLR $L_j$ to its connected checks.
Check-to-variable (min-sum)
$c_1$: the message on each edge has sign equal to the product of the other incoming signs and magnitude equal to the minimum of the other incoming magnitudes (here 3).
$c_2$: likewise, with minimum magnitude 2.
$c_3$: likewise, with minimum magnitude 2.
Each check sends such a message on every edge, excluding that edge's own input, giving all outgoing messages.
Updated LLRs
$L_j^{\text{new}} = L_j + \sum_{c \in N(v_j)} m_{c \to v_j}$.
Computing all check-to-variable messages and summing gives the updated LLRs after one iteration. The signs of the updated LLRs give the tentative hard decisions.
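The min-sum check-node rule can be written as a small function. The incoming LLRs in the example are illustrative assumptions, not the exercise's values:

```python
def min_sum_check(msgs):
    """Min-sum check-node update: each outgoing message has sign equal
    to the product of the *other* incoming signs and magnitude equal
    to the minimum of the *other* incoming magnitudes."""
    out = []
    for i in range(len(msgs)):
        others = msgs[:i] + msgs[i + 1:]
        sign = 1
        for m in others:
            sign = -sign if m < 0 else sign
        out.append(sign * min(abs(m) for m in others))
    return out

# Illustrative incoming variable-to-check LLRs (assumed values):
# a degree-3 check fed by LLRs -3, +4, +5.
example = min_sum_check([-3.0, 4.0, 5.0])
```

For the example inputs the outgoing messages are $(+4, -3, -3)$: each edge excludes its own input, and the single negative input flips the sign of the other two outputs.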
ex-ch12-13
Medium: For a BEC with erasure probability $\varepsilon$ and $N = 4$:
(a) Compute the Bhattacharyya parameters for all 4 synthetic channels.
(b) Select the information set for rate 1/2.
(c) What is the expected block error probability under SC decoding?
BEC polarization: $Z(W^-) = 2Z - Z^2$ and $Z(W^+) = Z^2$.
$Z(W) = \varepsilon$ for a BEC.
Level 1 ($N=2$)
$Z(W^-) = 2\varepsilon - \varepsilon^2$.
$Z(W^+) = \varepsilon^2$.
Level 2 ($N=4$)
$Z_1 = 2Z(W^-) - Z(W^-)^2$. $Z_2 = Z(W^-)^2$. $Z_3 = 2Z(W^+) - Z(W^+)^2$. $Z_4 = Z(W^+)^2$.
Channel selection and error probability
Ranking: $Z_4 < Z_3 < Z_2 < Z_1$ (this ordering holds for every $\varepsilon \in (0,1)$).
Information set ($K = 2$): $\{u_3, u_4\}$, the two most reliable synthetic channels.
$P_e \leq Z_3 + Z_4$ (union bound under SC decoding).
This is a loose upper bound; actual is lower.
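The Bhattacharyya recursion is easy to run for any erasure probability; `eps = 0.5` below is an assumed example value (the exercise's $\varepsilon$ is not restated), and channel indices are 0-based:

```python
def bec_bhattacharyya(eps, levels):
    """Bhattacharyya parameters of the 2**levels polarized BECs:
    Z(W-) = 2Z - Z^2 (degraded), Z(W+) = Z^2 (upgraded)."""
    zs = [eps]
    for _ in range(levels):
        zs = [w for z in zs for w in (2 * z - z * z, z * z)]
    return zs

# Assumed example erasure probability:
Z = bec_bhattacharyya(0.5, 2)       # Z_1..Z_4 at 0-based indices 0..3
info_set = sorted(sorted(range(4), key=lambda i: Z[i])[:2])
p_block_bound = sum(Z[i] for i in info_set)   # union bound under SC
```

For $\varepsilon = 0.5$ this gives $Z = (0.9375,\ 0.5625,\ 0.4375,\ 0.0625)$ and selects channels 3 and 4, matching the ranking above.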
ex-ch12-14
Hard: For the polar code from Exercise 13, with frozen set $\{u_1, u_2\}$ (frozen to 0) and information set $\{u_3, u_4\}$:
(a) Encode the message bits $(u_3, u_4)$.
(b) The received LLRs after the channel are $L_1, \dots, L_4$. Decode using SC decoding.
The encoding uses $\mathbf{x} = \mathbf{u} G_4$ with $G_4 = F^{\otimes 2}$, where $F = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}$.
SC decoding processes bits $u_1, \dots, u_4$ in order.
Encoding
$\mathbf{u} = (0, 0, u_3, u_4)$.
$\mathbf{x} = u_3 \cdot (\text{row 3 of } G_4) \oplus u_4 \cdot (\text{row 4 of } G_4) \pmod{2}$.
SC decoding
Process $u_1$ (frozen = 0): $\hat{u}_1 = 0$.
Process $u_2$ (frozen = 0): $\hat{u}_2 = 0$.
Process $u_3$ (information): compute $L(u_3)$ using the butterfly recursion and the previously decoded $\hat{u}_1, \hat{u}_2$. If $L(u_3) < 0$, set $\hat{u}_3 = 1$; else 0.
Process $u_4$ (information): similarly compute $L(u_4)$ and decide.
The decoded message should recover $(u_3, u_4)$ if the channel is not too noisy.
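The encoding $\mathbf{x} = \mathbf{u} G_4$ can be computed with the standard butterfly network instead of an explicit matrix multiply. The message bits $(u_3, u_4) = (1, 1)$ below are an assumed example:

```python
def polar_encode(u):
    """x = u * G_N over GF(2), with G_N the n-fold Kronecker power of
    F = [[1, 0], [1, 1]] (natural order, no bit-reversal)."""
    x = list(u)
    n, step = len(x), 1
    while step < n:
        for i in range(0, n, 2 * step):
            for j in range(i, i + step):
                x[j] ^= x[j + step]     # butterfly stage (mod-2 add)
        step *= 2
    return x

# Frozen u1 = u2 = 0; assumed example message (u3, u4) = (1, 1):
x = polar_encode([0, 0, 1, 1])
```

For this example, $\mathbf{x} = (0, 1, 0, 1)$: the XOR of rows 3 and 4 of $G_4$, as in the encoding step above.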
ex-ch12-15
Medium: A system uses a rate-5/6 LDPC code with 64-QAM modulation and BICM.
(a) What is the spectral efficiency $\eta$?
(b) At what minimum $E_b/N_0$ does the AWGN Shannon capacity equal this spectral efficiency?
(c) If the system operates at $(E_b/N_0)_{\text{op}}$ dB, what is the gap to Shannon?
$\eta = R \log_2 M$.
Shannon: $\eta = \log_2(1 + \mathrm{SNR})$, where $\mathrm{SNR} = \eta\, E_b/N_0$.
Spectral efficiency
$\eta = R \log_2 M = (5/6) \times 6 = 5$ bits/s/Hz.
Shannon limit
$\log_2(1 + \mathrm{SNR}) = 5$ gives $\mathrm{SNR} = 31$.
$E_b/N_0 = \mathrm{SNR}/\eta = 31/5 = 6.2$, or $10\log_{10} 6.2 \approx 7.9$ dB.
Gap to Shannon
Gap $= (E_b/N_0)_{\text{op}} - 7.9$ dB from the Shannon limit.
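Parts (a) and (b) in code:

```python
import math

def min_ebno_db(eta):
    """Smallest Eb/N0 (dB) at which the Shannon capacity
    log2(1 + eta * Eb/N0) reaches spectral efficiency eta."""
    return 10 * math.log10((2 ** eta - 1) / eta)

eta = (5 / 6) * 6            # rate-5/6 LDPC on 64-QAM: 5 bit/s/Hz
limit = min_ebno_db(eta)     # (2^5 - 1)/5 = 6.2, about 7.9 dB
```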
ex-ch12-16
Easy: For 8-PSK, compare Gray labelling and set-partitioning (SP) labelling.
(a) How many bit errors occur for the most common symbol error (adjacent symbols) under each labelling?
(b) Which labelling is better for BICM? For BICM-ID?
Gray: adjacent symbols differ by 1 bit. SP: adjacent symbols may differ by more bits.
Bit errors per adjacent symbol error
Gray: adjacent symbols differ in 1 bit, so an adjacent-symbol error causes 1 bit error.
SP: adjacent symbols may differ in 2-3 bits, so an adjacent-symbol error causes 2-3 bit errors.
Preferred labelling
BICM (no iteration): Gray mapping is optimal because it minimises the bit error rate and maximises BICM capacity.
BICM-ID (iterative): SP labelling benefits more from decoder feedback because the bit-level channels are more "polarised" (some bits are much more reliable than others), enabling iterative gain.
ex-ch12-17
Medium: A system uses Chase combining HARQ with a rate-1/2 code.
(a) After 2 identical transmissions with independent noise, what is the effective SNR improvement (in dB)?
(b) After 3 transmissions?
(c) What is the maximum throughput (bits/channel use) if on average 1.8 transmissions are needed?
Chase combining adds LLRs, effectively doubling the SNR.
SNR after 2 transmissions
LLR addition doubles the effective SNR: $\mathrm{SNR}_{\text{eff}} = 2\,\mathrm{SNR}$.
Gain: $10\log_{10} 2 \approx 3.0$ dB.
SNR after 3 transmissions
$\mathrm{SNR}_{\text{eff}} = 3\,\mathrm{SNR}$.
Gain: $10\log_{10} 3 \approx 4.8$ dB.
Throughput
Throughput $= R/\bar{N} = 0.5/1.8 \approx 0.28$ bits/channel use.
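The Chase-combining numbers in code:

```python
import math

def chase_gain_db(k):
    """Effective SNR gain from LLR-combining k identical copies."""
    return 10 * math.log10(k)

gain2 = chase_gain_db(2)     # about 3.0 dB
gain3 = chase_gain_db(3)     # about 4.8 dB
throughput = 0.5 / 1.8       # rate-1/2 code, 1.8 transmissions average
```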
ex-ch12-18
Hard: An IR-HARQ system uses a rate-compatible code family with mother code rate $R_m$. The initial transmission uses rate $R_1$, and each retransmission lowers the effective rate to $R_2$, $R_3$, $R_4$.
At a given SNR, the cumulative packet success probabilities after $k$ attempts are $P_1, P_2, P_3, P_4$.
(a) Compute the average number of transmissions.
(b) Compute the average throughput in bits/channel use.
(c) Compare with Chase combining, whose cumulative success probabilities $P_k^{\text{CC}}$ are given.
Average transmissions: $\bar{N} = \sum_k k\, q_k$, where $q_k$ is the probability of success on exactly the $k$-th attempt.
Success probabilities per attempt
The given $P_k$ are cumulative success probabilities after $k$ attempts, so the per-attempt probabilities are $q_1 = P_1$ and $q_k = P_k - P_{k-1}$ for $k \geq 2$.
Average transmissions and throughput
For IR-HARQ, each retransmission sends additional parity, so the total number of channel uses grows with the number of attempts.
Transmission 1: $k/R_1$ channel uses for $k$ info bits. Transmission $j > 1$: $k/R_j - k/R_{j-1}$ additional channel uses, bringing the cumulative rate down to $R_j$.
Average channel uses per info bit: $\bar{U} = \sum_k q_k / R_k$.
Throughput: $1/\bar{U}$ bits/channel use.
Chase combining comparison
For CC, every retransmission repeats the full rate-$R_1$ codeword, so $j$ attempts use $j/R_1$ channel uses per info bit: $\bar{U}_{\text{CC}} = (1/R_1)\sum_k k\, q_k^{\text{CC}}$.
Throughput$_{\text{CC}} = 1/\bar{U}_{\text{CC}}$ bits/channel use.
IR-HARQ achieves 44% higher throughput.
ex-ch12-19
Medium: On a Rayleigh fading channel, uncoded BPSK achieves $P_b = 10^{-3}$ at $E_b/N_0 \approx 24$ dB.
(a) A rate-1/2 code with is used with sufficient interleaving. What is the diversity order?
(b) Estimate the coding gain at $P_b = 10^{-3}$ using the high-SNR approximation $\mathrm{BER} \propto \bar{\gamma}^{-d}$.
(c) At what approximate $E_b/N_0$ would $P_b = 10^{-6}$ be achieved with and without coding?
With diversity order $d$, BER $\propto \mathrm{SNR}^{-d}$ at high SNR.
Without coding, BER $\approx 1/(4\bar{\gamma})$ on Rayleigh.
Diversity order
With sufficient interleaving, diversity order $d = d_{\text{free}}$.
Coding gain estimate
Coded BER $\propto \bar{\gamma}^{-d_{\text{free}}}$: the error-rate curve falls $d_{\text{free}}$ times faster than the uncoded slope of one decade per 10 dB.
Solving for the $\bar{\gamma}$ that reaches $10^{-3}$ on this much steeper slope gives a value far below the uncoded 24 dB.
The coding gain is therefore enormous on fading, far larger than the same code's gain on AWGN.
BER = $10^{-6}$ comparison
Uncoded: $1/(4\bar{\gamma}) = 10^{-6}$ gives $\bar{\gamma} = 2.5 \times 10^5$, or $\approx 54$ dB.
Coded: the $\bar{\gamma}^{-d_{\text{free}}}$ slope reaches $10^{-6}$ at a far lower $E_b/N_0$.
Coding saves tens of dB at $P_b = 10^{-6}$!
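The uncoded Rayleigh numbers follow directly from the high-SNR approximation; a sketch:

```python
import math

def rayleigh_uncoded_snr_db(ber):
    """BPSK on Rayleigh fading at high SNR: BER ~ 1/(4*gamma),
    so the required average SNR is gamma = 1/(4*BER)."""
    return 10 * math.log10(1 / (4 * ber))

snr_1e3 = rayleigh_uncoded_snr_db(1e-3)   # about 24 dB
snr_1e6 = rayleigh_uncoded_snr_db(1e-6)   # about 54 dB
```

Note the uncoded slope: three decades of BER cost 30 dB, which is what diversity-providing coding avoids.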
ex-ch12-20
Challenge: You are designing the physical layer for a wireless system with:
- Bandwidth: 20 MHz
- Required data rate: 50 Mbps
- Target BER:
- Channel: Rayleigh fading, Doppler spread $f_D$ Hz
- Modulation: 64-QAM ($\log_2 64 = 6$ bits/symbol)
(a) Determine the minimum code rate needed to meet the data rate.
(b) Select an appropriate coding scheme and justify your choice.
(c) Compute the required interleaving depth and resulting delay.
(d) Should the system use HARQ? If so, which type?
Symbol rate $\approx$ bandwidth (with raised-cosine filtering, roll-off $\alpha \approx 0$).
Consider LDPC or polar codes for 5G-like performance.
Minimum code rate
Assume $R_s \approx B$: $R_s = 20$ Msymbols/s.
Raw bit rate $= 20 \times 6 = 120$ Mbps.
Minimum code rate: $R \geq 50/120 = 5/12 \approx 0.42$.
Choose $R = 2/3$ (standard rate, provides margin).
Actual data rate: $120 \times 2/3 = 80$ Mbps $\geq 50$ Mbps.
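The link-dimensioning arithmetic as a quick script; the chosen rate $2/3$ matches the 5G-style LDPC configuration discussed under code selection:

```python
bandwidth = 20e6            # Hz
bits_per_symbol = 6         # 64-QAM
target_rate = 50e6          # bit/s

symbol_rate = bandwidth                    # roll-off ~ 0 assumed
raw_rate = symbol_rate * bits_per_symbol   # 120 Mbps
r_min = target_rate / raw_rate             # 5/12, about 0.417
r_chosen = 2 / 3                           # standard rate with margin
net_rate = raw_rate * r_chosen             # 80 Mbps >= 50 Mbps
```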
Code selection
LDPC code (5G NR-style QC-LDPC) is recommended:
- Near-capacity performance at rate 2/3.
- Parallelisable BP decoding for 20 MHz bandwidth.
- Rate-compatible structure enables IR-HARQ.
- Well-suited to BICM with 64-QAM and Gray mapping.
Interleaving depth
Coherence time $T_c \approx 1/(2 f_D)$.
Symbols per coherence time: $R_s T_c$.
For LDPC, the relevant distance is the girth or stopping distance. With a long block length and time-frequency interleaving across OFDM subcarriers, the interleaving depth is inherently provided by the OFDM structure (with frequency-domain interleaving across subcarriers at 15 kHz spacing).
Delay from time interleaving: a few ms (one slot).
HARQ recommendation
Yes, use Type III IR-HARQ (incremental redundancy):
- Initial transmission at rate 2/3.
- Retransmissions lower rate to 1/2, 1/3, etc.
- Provides graceful adaptation to channel variations.
- Essential for meeting BER target under deep fades.