Exercises

ex-ch22-01

Easy

Compute the Marzetta-Hochwald non-coherent pre-log for $n_t = 2$, $n_r = 4$, $T = 6$. Compare to the coherent pre-log.
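As a numeric check, the non-coherent pre-log can be evaluated with the Zheng-Tse formula $M^*(1 - M^*/T)$, where $M^* = \min(n_t, n_r, \lfloor T/2 \rfloor)$; a minimal sketch:

```python
def noncoherent_prelog(nt: int, nr: int, T: int) -> float:
    """Marzetta-Hochwald/Zheng-Tse non-coherent pre-log M*(1 - M*/T),
    with M* = min(nt, nr, floor(T/2))."""
    m_star = min(nt, nr, T // 2)
    return m_star * (1 - m_star / T)

nt, nr, T = 2, 4, 6
prelog_nc = noncoherent_prelog(nt, nr, T)  # M* = min(2, 4, 3) = 2, so 2*(1 - 2/6) = 4/3
prelog_coh = min(nt, nr)                   # coherent pre-log: min(nt, nr) = 2
print(prelog_nc, prelog_coh)
```

The coherent pre-log $\min(n_t, n_r)$ is the comparison point the exercise asks for.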

ex-ch22-02

Easy

A URLLC system operates at 10 dB SNR with blocklength $n = 300$ and target BLER $\epsilon = 10^{-6}$. Use the Polyanskiy normal approximation to estimate the maximum achievable rate.
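A quick numeric sketch, assuming the complex AWGN dispersion $V(\rho) = \frac{\rho(\rho+2)}{(\rho+1)^2}(\log_2 e)^2$ and the normal approximation with the $\frac{1}{2}\log_2 n$ correction term:

```python
from math import e, log2, sqrt
from statistics import NormalDist

def normal_approx_rate(snr_db: float, n: int, eps: float) -> float:
    """Normal approximation R ~= C - sqrt(V/n) * Qinv(eps) + log2(n)/(2n)
    for the complex AWGN channel."""
    rho = 10 ** (snr_db / 10)
    C = log2(1 + rho)                                    # capacity, bits/use
    V = rho * (rho + 2) / (rho + 1) ** 2 * log2(e) ** 2  # complex AWGN dispersion
    q_inv = NormalDist().inv_cdf(1 - eps)                # Q^{-1}(eps)
    return C - sqrt(V / n) * q_inv + log2(n) / (2 * n)

R = normal_approx_rate(10.0, 300, 1e-6)
print(f"R ~= {R:.3f} bits/channel use")
```

The backoff from capacity ($\log_2 11 \approx 3.46$ bits/use) is dominated by the $\sqrt{V/n}\,Q^{-1}(\epsilon)$ term at this blocklength.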

ex-ch22-03

Easy

For the GN model with $P_{\rm ASE} = 10^{-5}$ mW and nonlinear efficiency $\eta_{\rm NL} = 10^{-3}$ mW$^{-2}$, compute the optimal launch power $P_{\rm opt}$ and the peak SNR.
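A sketch of the standard GN-model computation: with ${\rm SNR}(P) = P/(P_{\rm ASE} + \eta_{\rm NL} P^3)$, setting the derivative to zero gives $P_{\rm opt} = (P_{\rm ASE}/2\eta_{\rm NL})^{1/3}$, at which point the nonlinear noise equals $P_{\rm ASE}/2$:

```python
from math import log10

P_ase = 1e-5  # mW
eta = 1e-3    # mW^-2

# Stationary point of SNR(P) = P / (P_ase + eta * P**3):
P_opt = (P_ase / (2 * eta)) ** (1 / 3)  # mW
snr_peak = P_opt / (1.5 * P_ase)        # at optimum, NLI power = P_ase / 2
print(f"P_opt = {P_opt * 1000:.1f} uW, peak SNR = {10 * log10(snr_peak):.1f} dB")
```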

ex-ch22-04

Medium

An autoencoder trained on a Rapp HPA with smoothness $p = 2$ and IBO 3 dB delivers a 0.8 dB coding gain. When deployed on a real HPA with $p = 1.5$ and IBO 2 dB, experimental measurements show the gain drops to 0.2 dB. Explain.
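To see the model mismatch concretely, the Rapp AM/AM curve $g(r) = r\,(1 + (r/A_{\rm sat})^{2p})^{-1/2p}$ can be compared at the two operating points (a sketch with $A_{\rm sat}$ normalised to 1):

```python
def rapp_am_am(r: float, p: float, a_sat: float = 1.0) -> float:
    """Rapp solid-state HPA AM/AM curve; larger p means harder saturation."""
    return r / (1 + (r / a_sat) ** (2 * p)) ** (1 / (2 * p))

# IBO sets the operating amplitude: r = A_sat * 10**(-IBO/20)
for p, ibo_db in [(2.0, 3.0), (1.5, 2.0)]:
    r = 10 ** (-ibo_db / 20)
    print(f"p={p}, IBO={ibo_db} dB: gain compression = {rapp_am_am(r, p) / r:.3f}")
```

The deployed HPA (smaller $p$, smaller IBO) compresses earlier and harder than the training model, so the encoder's learned constellation no longer matches the distortion it was optimised against.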

ex-ch22-05

Medium

Derive the dispersion formula $V(\rho) = \frac{\rho(\rho+2)}{2(\rho+1)^2} (\log_2 e)^2$ for the real AWGN channel.
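One route (a sketch, conditioning on a power-shell input with $x^2 = \rho$ per symbol, as in Polyanskiy-Poor-Verdu):

```latex
% Information density with Y = x + Z, Z ~ N(0,1), x^2 = \rho fixed:
i(x;Y) = \log_2\frac{p_{Y|X}(Y\mid x)}{p_Y(Y)}
       = \tfrac{1}{2}\log_2(1+\rho)
         + \frac{\log_2 e}{2}\Big(\frac{Y^2}{1+\rho} - Z^2\Big)
       = \tfrac{1}{2}\log_2(1+\rho)
         + \frac{\log_2 e}{2(1+\rho)}\big(\rho + 2xZ - \rho Z^2\big).
% With Var(Z) = 1, Var(Z^2) = 2, Cov(Z, Z^2) = 0:
\mathrm{Var}\, i(x;Y)
  = \frac{(\log_2 e)^2}{4(1+\rho)^2}\,(4\rho + 2\rho^2)
  = \frac{\rho(\rho+2)}{2(\rho+1)^2}\,(\log_2 e)^2 = V(\rho).
```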

ex-ch22-06

Medium

Compare the capacity of a $2 \times 2$ MIMO system: coherent (perfect CSI) versus non-coherent at $T = 4$.
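A minimal numeric sketch: Monte Carlo for the coherent ergodic capacity at an assumed 20 dB operating point, against the high-SNR pre-logs of both models:

```python
import numpy as np

nt = nr = 2
T = 4
snr = 10 ** (20 / 10)  # 20 dB operating point (an assumed example value)
rng = np.random.default_rng(0)

# Coherent ergodic capacity with perfect receiver CSI, by Monte Carlo:
trials = 20000
C = 0.0
for _ in range(trials):
    H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
    C += np.log2(np.linalg.det(np.eye(nr) + (snr / nt) * H @ H.conj().T).real)
C /= trials

# High-SNR slopes (pre-logs):
m_star = min(nt, nr, T // 2)
prelog_nc = m_star * (1 - m_star / T)  # non-coherent: 2 * (1 - 2/4) = 1
prelog_coh = min(nt, nr)               # coherent: 2
print(f"coherent C ~= {C:.2f} b/s/Hz; pre-logs: coherent {prelog_coh}, non-coherent {prelog_nc}")
```

At $T = 4$ the non-coherent pre-log is half the coherent one, which is the headline of the comparison.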

ex-ch22-07

Medium

Show that the Polyanskiy normal approximation is tight as $n \to \infty$ but loose at small $n$. Explicitly, give a non-trivial lower bound on $R^*(n, \epsilon)$ valid at moderate $n$.

ex-ch22-08

Medium

Explain why the optical fibre channel has no classical Shannon-type capacity theorem, despite being a physical channel with noise and bandwidth.

ex-ch22-09

Hard

An autoencoder is trained on AWGN with 16 messages, 2 channel uses, and SNR 5 dB. After training, it is deployed on the same channel at SNR 10 dB. Would you expect the BER to improve, stay the same, or worsen relative to a hand-designed 16-QAM?

ex-ch22-10

Hard

Prove that for the non-coherent block-fading MIMO channel with $T = n_t + n_r$, the Zheng-Tse (2002) non-coherent DMT equals the coherent DMT.

ex-ch22-11

Hard

For 800G coherent optical links at 128 GBaud with 64-QAM PAS shaping, estimate the reach on SMF-28e fibre assuming $P_{\rm ASE} = 10^{-5}$ mW/span and $\eta_{\rm NL} = 10^{-3}$ mW$^{-2}$/km.
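The numbers invite a back-of-envelope reach sweep. The sketch below assumes 80 km spans, linear-in-$N$ (incoherent) NLI accumulation, and a hypothetical 17.5 dB SNR threshold for the shaped 64-QAM format; all three are assumptions introduced here, not values from the exercise:

```python
from math import log10

P_ase_span = 1e-5  # mW of ASE per span (given)
eta_km = 1e-3      # mW^-2 per km (given)
span_km = 80       # hypothetical span length (assumption)
snr_req_db = 17.5  # hypothetical required SNR for shaped 64-QAM (assumption)

def peak_snr_db(n_spans: int) -> float:
    """Peak SNR of the GN model after n_spans, incoherent NLI accumulation."""
    P_ase = n_spans * P_ase_span
    eta = n_spans * span_km * eta_km
    P_opt = (P_ase / (2 * eta)) ** (1 / 3)
    return 10 * log10(P_opt / (1.5 * P_ase))

n = 1
while peak_snr_db(n + 1) >= snr_req_db:
    n += 1
print(f"estimated reach: {n} spans (~{n * span_km} km)")
```

Because $P_{\rm opt}$ is constant while ASE grows with $N$, the peak SNR falls as $10\log_{10} N$, which is what sets the reach.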

ex-ch22-12

Hard

Autoencoders trained with MSE loss versus cross-entropy loss learn different encoders. Explain the difference, and state which loss is preferred for communication.

ex-ch22-13

Hard

An old proverb in information theory says that "every joint input-output constraint either adds a $\log n$ penalty or cuts a constant off the pre-log." Apply this to the non-coherent model: compared to a fully coherent, CSI-known system, what is the cost of not knowing the channel?

ex-ch22-14

Hard

The book's golden thread, that every chapter establishes a code design criterion and a construction, fails in Ch 22. Why? What structure would a future Ch 22 use to be a proper design chapter?

ex-ch22-15

Challenge

Open research: suggest a specific research direction that combines two of the book's landmark results (e.g., CDA codes of Ch 13 and PAS of Ch 19) and would be a natural PhD thesis topic.