Exercises

ex-ch26-01

Easy

Compute the capacity $C$ and dispersion $V$ of the BEC with erasure probability $\delta = 0.3$. Determine the blocklength needed for rate $R = 0.5$ bits/use at $\epsilon = 10^{-3}$.
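A quick numerical sketch for checking the answer, assuming the normal approximation $R \approx C - \sqrt{V/n}\,Q^{-1}(\epsilon)$ and ignoring the $O(\log n / n)$ correction:

```python
from math import sqrt
from statistics import NormalDist

delta, R, eps = 0.3, 0.5, 1e-3
C = 1 - delta              # BEC capacity, bits/use
V = delta * (1 - delta)    # BEC dispersion, bits^2 (uniform input)
Qinv = NormalDist().inv_cdf(1 - eps)   # Q^{-1}(eps)

# Normal approximation R = C - sqrt(V/n) * Q^{-1}(eps), solved for n
n = V * (Qinv / (C - R)) ** 2
print(f"C = {C}, V = {V}, n ~ {n:.0f}")
```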

ex-ch26-02

Easy

Show that the information density of the BSC with crossover probability $p$ at the capacity-achieving uniform input is: $$\iota(X; Y) = \begin{cases} \log\frac{1-p}{1/2} = 1 + \log(1-p) & \text{if } Y = X \\ \log\frac{p}{1/2} = 1 + \log p & \text{if } Y \ne X \end{cases}$$ (in bits, with $p_Y(0) = p_Y(1) = 1/2$).
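As a sanity check on the claimed expression, the two values of $\iota$ average to the BSC capacity $1 - h(p)$. A minimal numerical verification, using an arbitrary test value of $p$ (not from the exercise):

```python
from math import log2

p = 0.11                       # arbitrary test value
i_eq  = 1 + log2(1 - p)        # information density when Y = X  (prob 1-p)
i_neq = 1 + log2(p)            # information density when Y != X (prob p)

mean = (1 - p) * i_eq + p * i_neq
C = 1 - (-p * log2(p) - (1 - p) * log2(1 - p))   # capacity 1 - h(p)
print(mean, C)
```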

ex-ch26-03

Easy

For the AWGN channel at $\text{SNR} = 20$ dB, compute the maximum coding rate at blocklength $n = 1000$ and error probability $\epsilon = 10^{-5}$. How close is this to the Shannon capacity?
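A sketch of the computation under the normal approximation, converting the dispersion from nats$^2$ (the unit used in this chapter) to bits$^2$:

```python
from math import log, log2, sqrt
from statistics import NormalDist

snr_db, n, eps = 20, 1000, 1e-5
snr = 10 ** (snr_db / 10)                        # = 100
C = 0.5 * log2(1 + snr)                          # capacity, bits/use (real AWGN)
V = snr * (snr + 2) / (2 * (1 + snr) ** 2)       # dispersion, nats^2
V_bits = V / log(2) ** 2                         # convert to bits^2
Qinv = NormalDist().inv_cdf(1 - eps)

R = C - sqrt(V_bits / n) * Qinv                  # normal approximation
print(f"C = {C:.4f}, R ~ {R:.4f}, gap = {C - R:.4f} bits/use")
```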

ex-ch26-04

Easy

Verify that the AWGN dispersion $V = \text{SNR}(\text{SNR}+2)/(2(1+\text{SNR})^2)$ satisfies $V \to 0$ as $\text{SNR} \to 0$ and $V \to 1/2$ as $\text{SNR} \to \infty$ (in nats$^2$).
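The two limits can be spot-checked numerically before doing the algebra:

```python
def V(snr):
    # AWGN dispersion in nats^2, as a function of linear SNR
    return snr * (snr + 2) / (2 * (1 + snr) ** 2)

print(V(1e-8), V(1e8))   # should approach 0 and 1/2
```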

ex-ch26-05

Easy

A binary communication system uses a BSC with $p = 0.05$. How many channel uses are needed to transmit $B = 256$ information bits with error probability $\epsilon = 10^{-6}$?
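A sketch solving for the blocklength under the normal approximation, using the standard BSC capacity and dispersion expressions:

```python
from math import log2, sqrt
from statistics import NormalDist

p, B, eps = 0.05, 256, 1e-6
C = 1 + p * log2(p) + (1 - p) * log2(1 - p)      # 1 - h(p), bits/use
V = p * (1 - p) * log2((1 - p) / p) ** 2         # BSC dispersion, bits^2
Qinv = NormalDist().inv_cdf(1 - eps)

# smallest n with n*C - sqrt(n*V)*Q^{-1}(eps) >= B
n = next(n for n in range(1, 10_000)
         if n * C - sqrt(n * V) * Qinv >= B)
print(n)
```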

ex-ch26-06

Medium

Derive the dispersion of the BSC from the information density. Specifically, show that: $$V_{\text{BSC}}(p) = p(1-p)\left(\log\frac{1-p}{p}\right)^2$$ (in the base matching the logarithm used for capacity).
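The claimed formula is just the variance of a two-point distribution, which a few lines of Python can confirm for any test value of $p$:

```python
from math import log2

p = 0.2   # arbitrary test value
# information density takes two values (in bits):
vals = [(1 - p, 1 + log2(1 - p)),   # (prob, value) when Y = X
        (p,     1 + log2(p))]       # (prob, value) when Y != X

mean = sum(w * v for w, v in vals)
var = sum(w * (v - mean) ** 2 for w, v in vals)
formula = p * (1 - p) * log2((1 - p) / p) ** 2
print(var, formula)
```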

ex-ch26-07

Medium

Show that the RCU bound for the AWGN channel with Gaussian codebook gives:

$$\epsilon_{\text{RCU}} = \mathbb{E}\!\left[\min\!\left(1,\; (M-1)\, e^{-nE_{\text{sp}}(R)}\right)\right]$$

where $E_{\text{sp}}(R)$ is the sphere-packing exponent, in the limit of large $n$. Explain why this recovers the error exponent at rates below capacity.

ex-ch26-08

Medium

For the AWGN channel, compute the SNR penalty $\Delta_{\text{SNR}}$ (in dB) for operating at $n = 128$ versus $n = \infty$ at $\epsilon = 10^{-5}$, for $\text{SNR} \in \{0, 5, 10, 20\}$ dB. Plot the penalty as a function of SNR.
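One way to organize the computation: a sketch that uses the normal approximation and bisects on the SNR needed to restore, at $n = 128$, the capacity of the nominal SNR:

```python
from math import log, log2, log10, sqrt
from statistics import NormalDist

n, eps = 128, 1e-5
Qinv = NormalDist().inv_cdf(1 - eps)

def C(snr):
    return 0.5 * log2(1 + snr)          # real-AWGN capacity, bits/use

def R(snr):
    # normal-approximation rate at blocklength n (dispersion in bits^2)
    V = snr * (snr + 2) / (2 * (1 + snr) ** 2) / log(2) ** 2
    return C(snr) - sqrt(V / n) * Qinv

penalties = {}
for snr_db in (0, 5, 10, 20):
    target = C(10 ** (snr_db / 10))     # rate achievable as n -> infinity
    lo, hi = 10 ** (snr_db / 10), 10 ** ((snr_db + 30) / 10)
    for _ in range(200):                # bisect: find SNR with R(snr) = target
        mid = (lo + hi) / 2
        if R(mid) < target:
            lo = mid
        else:
            hi = mid
    penalties[snr_db] = 10 * log10(lo) - snr_db
print(penalties)
```

The penalty shrinks as the SNR grows, since the dispersion saturates while each extra bit of capacity costs roughly a fixed number of dB.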

ex-ch26-09

Medium

Prove that the Neyman-Pearson $\beta$ function satisfies $-\frac{1}{n}\log \beta_{1-\epsilon}(P^n, Q^n) \to D(P \| Q)$ as $n \to \infty$, provided $\epsilon \in (0, 1)$ is fixed.

ex-ch26-10

Medium

For a Rayleigh fading channel $Y = HX + Z$ where $H \sim \mathcal{CN}(0, 1)$ is known at the receiver, $Z \sim \mathcal{CN}(0, 1)$, and $\mathbb{E}[|X|^2] \le P$:

(a) Compute the capacity $C = \mathbb{E}[\log(1 + P|H|^2)]$.

(b) Show that the dispersion includes a fading term: $V = V_{\text{AWGN}} + \text{Var}_H[\log(1 + P|H|^2)]$.
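A Monte Carlo sketch for parts (a) and (b), assuming $P = 10$ as a test value; under Rayleigh fading $|H|^2$ is exponentially distributed with unit mean:

```python
import random
from math import log2

random.seed(0)
P, N = 10.0, 200_000
gains = (random.expovariate(1.0) for _ in range(N))   # |H|^2 ~ Exp(1)
rates = [log2(1 + P * g) for g in gains]

C = sum(rates) / N                                 # ergodic capacity, bits/use
fading_var = sum((r - C) ** 2 for r in rates) / N  # Var_H[log2(1 + P|H|^2)]
print(C, fading_var)
```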

ex-ch26-11

Medium

Consider the Z-channel with $\Pr[Y=0|X=0] = 1$, $\Pr[Y=0|X=1] = p$, $\Pr[Y=1|X=1] = 1-p$. Compute the capacity-achieving input distribution, the capacity, and the dispersion for $p = 0.2$.
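A numerical sketch for the first two parts: since the mutual information is concave in the input distribution, a ternary search over $\Pr[X=1]$ finds the capacity-achieving input:

```python
from math import log2

def h2(q):   # binary entropy, bits
    return 0.0 if q <= 0.0 or q >= 1.0 else -q * log2(q) - (1 - q) * log2(1 - q)

def I(a, p):
    # Z-channel mutual information with Pr[X=1] = a:
    # H(Y) - H(Y|X), where Pr[Y=1] = a(1-p) and H(Y|X) = a*h2(p)
    return h2(a * (1 - p)) - a * h2(p)

p = 0.2
lo, hi = 0.0, 1.0
for _ in range(200):                     # ternary search for the maximizer
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if I(m1, p) < I(m2, p):
        lo = m1
    else:
        hi = m2
a_star = (lo + hi) / 2
print(a_star, I(a_star, p))
```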

ex-ch26-12

Medium

A URLLC system operates over a quasi-static Rayleigh fading channel (one fading realization per codeword). Show that the outage probability lower-bounds the error probability at any blocklength:

$$\epsilon \ge P_{\text{out}}(R) = \Pr[\log(1 + \text{SNR}|H|^2) < R]$$

and that no finite-blocklength code can beat the outage probability in this regime.
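For Rayleigh fading, $|H|^2 \sim \text{Exp}(1)$ and the outage probability has a closed form, sketched here at an assumed operating point (SNR and rate chosen for illustration):

```python
from math import exp

snr_db, R = 10.0, 1.0          # assumed operating point; R in bits/use
snr = 10 ** (snr_db / 10)
# Pr[log2(1 + snr*|H|^2) < R] with |H|^2 ~ Exp(1)
P_out = 1 - exp(-(2 ** R - 1) / snr)
print(P_out)
```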

ex-ch26-13

Hard

Prove the achievability part of the normal approximation using the Berry-Esseen theorem. Specifically, show that for a DMC with capacity $C$, dispersion $V > 0$, and third absolute moment $T = \mathbb{E}[|\iota(X;Y) - C|^3]$:

$$R^*(n, \epsilon) \ge C - \sqrt{\frac{V}{n}}\, Q^{-1}(\epsilon) - \frac{c_0 T}{V^{3/2}\sqrt{n}}$$

for a universal constant $c_0$.

ex-ch26-14

Hard

Derive the meta-converse bound for the BSC at blocklength $n = 64$ and error probability $\epsilon = 0.01$. Compute the exact maximum code size $M$ by evaluating $\beta_{1-\epsilon}$ numerically.
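A sketch of the numerical evaluation: with uniform input and auxiliary output distribution $Q$ uniform on $\{0,1\}^n$, the likelihood ratio depends only on the number of flips, so $\beta_{1-\epsilon}$ reduces to binomial sums with randomization at the threshold:

```python
from math import comb, log2

def bsc_metaconverse_log2M(n, p, eps):
    # Neyman-Pearson beta_{1-eps}(P, Q): P = BSC(p) noise on n bits,
    # Q = uniform on {0,1}^n; the optimal test thresholds the flip count d.
    beta, acc = 0.0, 0.0
    for d in range(n + 1):
        pd = comb(n, d) * p ** d * (1 - p) ** (n - d)   # P-mass at d flips
        qd = comb(n, d) / 2 ** n                        # Q-mass at d flips
        if acc + pd >= 1 - eps:
            beta += (1 - eps - acc) / pd * qd           # randomized boundary
            break
        acc += pd
        beta += qd
    return -log2(beta)          # meta-converse: log2 M <= -log2 beta

print(bsc_metaconverse_log2M(64, 0.11, 0.01))
```

Here $p = 0.11$ is an assumed crossover probability (the exercise does not fix one); swap in whichever BSC parameter the chapter uses.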

ex-ch26-15

Hard

Show that for a channel with zero dispersion ($V = 0$), the finite-blocklength rate converges to capacity at rate $O(\log n / n)$ instead of $O(1/\sqrt{n})$. Give an example of such a channel.

ex-ch26-16

Hard

For the two-user Gaussian MAC with $P_1 = P_2 = P$ and $\sigma^2 = 1$, compute the full $3 \times 3$ dispersion matrix $\mathbf{V}$ at $P = 10$. Verify that the $(3,3)$ entry equals the point-to-point dispersion at $\text{SNR} = 2P$.

ex-ch26-17

Hard

Saddlepoint approximation. The normal approximation can be refined using the saddlepoint method. For the AWGN channel, the cumulant generating function of the information density is: $$\Lambda(s) = \log \mathbb{E}[e^{s\,\iota(X;Y)}].$$

(a) Compute $\Lambda(s)$ for the real AWGN channel with Gaussian input.

(b) Show that the saddlepoint approximation gives: $$\log M^*(n, \epsilon) \approx n\Lambda'(s^*) - n s^* \Lambda'(s^*)$$ where $s^*$ is the saddlepoint satisfying a specific equation.

ex-ch26-18

Challenge

Research-level. Derive the second-order coding rate for the Gaussian MIMO channel $\mathbf{Y} = \mathbf{H}\mathbf{X} + \mathbf{Z}$ with $\mathbf{H} \in \mathbb{C}^{N_r \times N_t}$ known at the receiver. Show that the dispersion is:

$$V = \sum_{i=1}^{\min(N_t, N_r)} \frac{\text{SNR}_{i}(\text{SNR}_{i} + 2)}{2(1 + \text{SNR}_{i})^2}$$

where $\text{SNR}_{i} = P_i \sigma_i^2 / \sigma^2$ are the per-stream SNRs after water-filling, and $\sigma_i$ are the singular values of $\mathbf{H}$.
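A sketch of the numerical side, assuming the per-stream dispersion takes the same form as the scalar AWGN dispersion used in this chapter; the channel gains below are hypothetical test values, not from the exercise:

```python
def waterfill(gains, P):
    # gains g_i = sigma_i^2 / sigma^2, sorted descending; returns powers P_i
    g = sorted(gains, reverse=True)
    for k in range(len(g), 0, -1):
        mu = (P + sum(1 / x for x in g[:k])) / k      # water level
        powers = [mu - 1 / x for x in g[:k]]
        if powers[-1] >= 0:
            return powers + [0.0] * (len(g) - k)

def mimo_dispersion(gains, P):
    g = sorted(gains, reverse=True)
    snrs = [pw * x for pw, x in zip(waterfill(gains, P), g)]
    # sum of per-stream AWGN dispersions (assumed form)
    return sum(s * (s + 2) / (2 * (1 + s) ** 2) for s in snrs)

gains, P = [1.0, 0.5, 0.1], 10.0    # hypothetical singular-value gains
print(mimo_dispersion(gains, P))
```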

ex-ch26-19

Challenge

Open problem flavor. The meta-converse for the fading MAC with $K$ users and blocklength $n$ is not fully characterized when $K$ grows with $n$ (the many-access regime). Formulate the problem for $K = \Theta(n/\log n)$ active users, each sending $B = O(\log n)$ bits, and derive an achievability bound using random Gaussian codebooks.

ex-ch26-20

Challenge

Implementation project. Write a simulation that computes the exact RCU bound and meta-converse for the BSC with $p = 0.11$ at blocklengths $n \in \{32, 64, 128, 256, 512\}$ and error probabilities $\epsilon \in \{10^{-2}, 10^{-3}, 10^{-5}\}$.

(a) Plot the exact bounds alongside the normal approximation.

(b) Quantify the gap between the normal approximation and the exact bounds.

(c) Overlay the performance of the best known codes (e.g., BCH, Reed-Muller, polar) from the literature.
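A starting point for the project: for the BSC with a uniform random codebook, the RCU bound reduces to binomial sums. This sketch uses the common "$\le$" tie convention rather than exact tie randomization, and an assumed code size for illustration:

```python
from math import comb

def rcu_bsc(n, log2M, p):
    # eps_RCU = E[ min(1, (M-1) * Pr[d(Xbar, Y) <= d(X, Y)]) ],
    # where d(X, Y) is the number of channel flips
    M = 2 ** log2M
    eps, cum = 0.0, 0.0
    for d in range(n + 1):
        cum += comb(n, d) / 2 ** n                      # Pr[d(Xbar, y) <= d]
        pd = comb(n, d) * p ** d * (1 - p) ** (n - d)   # Pr[d flips]
        eps += pd * min(1.0, (M - 1) * cum)
    return eps

print(rcu_bsc(128, 40, 0.11))   # n = 128, log2 M = 40 (assumed), p = 0.11
```

Sweeping `n` and `log2M` over the grid in the exercise, and inverting the bound for the largest `log2M` meeting each target epsilon, gives the achievability curves for part (a).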