Exercises

ex-ch19-01

Easy

Compute the Maxwell-Boltzmann distribution for 16-QAM (4-PAM per axis with points $\{-3, -1, +1, +3\}$) at $\lambda = 0.2$. What is the entropy of the marginal amplitude distribution?
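The computation can be checked numerically; the sketch below assumes "marginal amplitude" means the distribution over $|x| \in \{1, 3\}$ after folding the two signs together:

```python
import math

# Maxwell-Boltzmann weights p(x) proportional to exp(-lam * x^2) over 4-PAM points
points = [-3, -1, +1, +3]
lam = 0.2
w = [math.exp(-lam * x * x) for x in points]
Z = sum(w)
p = [wi / Z for wi in w]

# Marginal amplitude distribution: fold +/-x into |x| in {1, 3}
amps = sorted(set(abs(x) for x in points))
pa = {a: sum(pi for x, pi in zip(points, p) if abs(x) == a) for a in amps}

# Entropy of the amplitude marginal, in bits
H = -sum(q * math.log2(q) for q in pa.values())
print(pa, H)
```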

ex-ch19-02

Easy

Using the PAS rate formula $R_{\rm PAS} = R_c\,(H(P_A) + \log_2 \sqrt{M}) + R_c - 1$, compute the PAS rate for 64-QAM with LDPC rate $R_c = 0.75$ and $H(P_A) = 2.5$ bits (per-axis amplitude entropy).
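A direct plug-in of the given numbers into the formula as stated (a numeric check, not a solution discussion):

```python
import math

Rc = 0.75        # LDPC code rate
H_PA = 2.5       # per-axis amplitude entropy, bits
M = 64           # QAM order, so log2(sqrt(M)) = 3 bits per axis
R_pas = Rc * (H_PA + math.log2(math.sqrt(M))) + Rc - 1
print(R_pas)
```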

ex-ch19-03

Easy

Compute the CCDM rate loss at $n = 500$ for an 8-PAM (8-ary amplitude) target with entropy $H(P_A) = 2.8$ bits.
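The exercise specifies only the entropy, not the composition itself; the helper below computes the exact finite-$n$ rate loss for any chosen constant composition (the composition used in the comment is illustrative, not the exercise's target):

```python
import math

def ccdm_rate_loss(counts):
    """Exact CCDM rate loss for a constant composition `counts`
    (occurrences of each amplitude letter in a block of length n):
    R_loss = H(P_A) - (1/n) * log2( n! / prod(n_a!) )."""
    n = sum(counts)
    p = [c / n for c in counts if c > 0]
    H = -sum(pi * math.log2(pi) for pi in p)
    ln2 = math.log(2)
    log2_multinom = math.lgamma(n + 1) / ln2 - sum(
        math.lgamma(c + 1) / ln2 for c in counts)
    return H - log2_multinom / n

# e.g. ccdm_rate_loss([200, 150, 100, 50]) for an n = 500 block
```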

ex-ch19-04

Medium

A 400ZR coherent optical link uses 16-QAM with LDPC rate $R_c = 15/16$. Estimate the PAS rate adaptation range by varying $\lambda$ from $0$ (uniform) to $\lambda \to \infty$ (maximal shaping).
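The two endpoints can be evaluated with the rate expression from ex-ch19-02; the per-axis amplitude alphabet $\{1, 3\}$ for 16-QAM is assumed:

```python
import math

def pas_rate(lam, Rc=15/16, M=16):
    # 16-QAM: 4-PAM per axis, amplitude alphabet {1, 3}
    amps = [1, 3]
    w = [math.exp(-lam * a * a) for a in amps]
    Z = sum(w)
    p = [wi / Z for wi in w]
    H = -sum(pi * math.log2(pi) for pi in p if pi > 0)
    # PAS rate formula from ex-ch19-02
    return Rc * (H + math.log2(math.sqrt(M))) + Rc - 1

# Endpoints of the adaptation range: uniform (lam = 0) vs. heavy shaping
print(pas_rate(0.0), pas_rate(10.0))
```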

ex-ch19-05

Medium

Prove the maximum-entropy theorem: the distribution $p(x) \propto e^{-\lambda x^2}$ maximises $H(X)$ subject to the constraints $\mathbb{E}[X^2] = \bar{E}$ and $\sum_x p(x) = 1$.
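One standard route is a Lagrange-multiplier set-up; a sketch of the first step only (the full exercise also requires verifying that the stationary point is a maximum, e.g. via concavity of entropy):

```latex
\mathcal{L}[p] = -\sum_x p(x)\log p(x)
  \;-\; \lambda\Bigl(\sum_x p(x)\,x^2 - \bar{E}\Bigr)
  \;-\; \mu\Bigl(\sum_x p(x) - 1\Bigr),
\qquad
\frac{\partial \mathcal{L}}{\partial p(x)}
  = -\log p(x) - 1 - \lambda x^2 - \mu = 0
\;\Longrightarrow\;
p(x) = e^{-1-\mu}\, e^{-\lambda x^2} \propto e^{-\lambda x^2}.
```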

ex-ch19-06

Medium

Describe why the SIGN bits in PAS do not need shaping, while the AMPLITUDE bits do.

ex-ch19-07

Medium

A system uses 256-QAM (16-PAM per axis) with $\lambda = 0.04$ shaping. Compute the per-axis amplitude entropy $H(P_A)$ and the shaping gain.
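A sketch of the entropy computation. Note the energy figure printed here is only the raw energy saving against uniform 16-PAM on the same points; the shaping gain proper compares energies at equal rate, which the exercise still requires as an extra step:

```python
import math

lam = 0.04
pts = list(range(-15, 16, 2))            # 16-PAM points {-15, -13, ..., +15}
w = [math.exp(-lam * x * x) for x in pts]
Z = sum(w)
p = [wi / Z for wi in w]

# Per-axis amplitude entropy: fold +/-x into |x| in {1, 3, ..., 15}
amps = sorted(set(abs(x) for x in pts))
pa = {a: sum(pi for x, pi in zip(pts, p) if abs(x) == a) for a in amps}
H_A = -sum(q * math.log2(q) for q in pa.values())

# Raw energy saving vs. uniform 16-PAM (a proxy, not the equal-rate gain)
E_mb = sum(pi * x * x for x, pi in zip(pts, p))
E_uni = sum(x * x for x in pts) / len(pts)
saving_dB = 10 * math.log10(E_uni / E_mb)
print(H_A, saving_dB)
```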

ex-ch19-08

Medium

Compare PAS and discrete MCS adaptation: at 18 dB SNR, a system can either use 64-QAM rate 3/4 (4.5 bits/symbol) or 16-QAM rate 5/6 (3.33 bits/symbol). What continuous PAS rate could achieve the same BER performance at 18 dB?

ex-ch19-09

Hard

Prove that geometric shaping and probabilistic shaping achieve the same mutual information for the SAME set of constellation points and marginal amplitude distribution β€” i.e., the two approaches are INFORMATION-EQUIVALENT.

ex-ch19-10

Hard

The CCDM rate loss formula $O(\log n / n)$ is for CONSTANT composition. Show that the rate loss of a HIERARCHICAL DM (multi-stage tree of constant-composition blocks) is smaller at moderate $n$.

ex-ch19-11

Hard

An autoencoder trained on an AWGN channel at SNR 10 dB is likely to learn a constellation similar to probabilistic-shaped 16-QAM. Why? What does this suggest about the fundamental uniqueness of shaping solutions?

ex-ch19-12

Hard

Derive the 1.53 dB asymptotic shaping ceiling $\pi e / 6$ in units of dB using the normalised second moment of a sphere.
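A numeric sanity check of the target value, using the normalised second moments of the cube ($1/12$, uniform inputs) and of the sphere in the limit $n \to \infty$ ($1/(2\pi e)$):

```python
import math

# Normalised second moments: cube = 1/12, sphere (n -> infinity) = 1/(2*pi*e)
nsm_cube = 1 / 12
nsm_sphere = 1 / (2 * math.pi * math.e)

gain_linear = nsm_cube / nsm_sphere      # equals pi*e/6
gain_dB = 10 * math.log10(gain_linear)
print(gain_linear, gain_dB)
```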

ex-ch19-13

Hard

A common criticism of PAS is that it adds encoder/decoder complexity. Quantify the complexity of CCDM encoding at $n = 1000$, $|\mathcal{A}| = 8$.

ex-ch19-14

Hard

The PAS architecture requires a SYSTEMATIC LDPC code. What goes wrong if you use a non-systematic turbo code instead?

ex-ch19-15

Challenge

Open research: can PAS be combined with CDA codes (Ch 13) to achieve BOTH the DMT (rank+determinant) and shaping gain simultaneously? Sketch what would need to be proved.