The Shaping Gain and the $\pi e / 6$ Ceiling

The 1.53 dB Shannon Tax

We now derive the number $\pi e / 6 \approx 1.533$ dB — the single most famous number in signal shaping. Where does it come from? Two observations, connected by one inequality:

  1. Gaussian is optimal. The maximum-entropy distribution at fixed energy on $\mathbb{R}^n$ is the isotropic Gaussian. A finite, uniformly distributed constellation cannot match its entropy at the same energy.
  2. Cubes are bad boundaries. A bounded-support distribution with the same entropy as a Gaussian has higher energy. For a cube the penalty is $\pi e / 6$. For a sphere the penalty vanishes as $n \to \infty$.

Combining these gives the claim: the best possible shaping gain (over cubic QAM) is $\pi e / 6$, and it is achieved asymptotically by spherical constellations. This is the part of the gap to Shannon capacity that we can never close with coding alone — we have to shape the marginal distribution. But it is also upper bounded by a single universal dB number. Once you have your 1.53 dB, you are done with shaping forever. The rest of the gap is coding.


Theorem: Shaping Gain Ceiling: $\gamma_s \le \pi e / 6$

For any lattice $\Lambda_s \subset \mathbb{R}^n$,
$$G(\Lambda_s) \;\ge\; \frac{1}{2 \pi e},$$
with equality only in the limit $n \to \infty$ for the $n$-dimensional ball. Consequently the shaping gain satisfies
$$\gamma_s(\Lambda_s) \;=\; \frac{1}{12\, G(\Lambda_s)} \;\le\; \frac{2 \pi e}{12} \;=\; \frac{\pi e}{6} \;\approx\; 1.4233,$$
i.e. $\gamma_s \le 10 \log_{10}(\pi e / 6) \approx 1.533$ dB.

The proof is a one-liner combining two classical facts: the Gaussian maximum-entropy theorem (the most random-looking distribution at fixed energy is Gaussian) and the fact that the differential entropy of a uniform distribution on a bounded support equals $\log V$, where $V$ is the support volume. A uniform distribution on $\mathcal{V}(\Lambda_s)$ has the same entropy as the Gaussian if and only if their variances are related by $V^{2/n} / \sigma^2 = 2 \pi e$, i.e. $G = 1/(2 \pi e)$.
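The two constants in the theorem are easy to check numerically. A minimal sketch in plain Python: the cube's normalised second moment is $1/12$, the Gaussian limit is $1/(2\pi e)$, and their ratio, expressed in dB, is the ceiling.

```python
import math

# Cube (baseline QAM region): uniform on [-1/2, 1/2)^n has per-dimension
# second moment 1/12 in every dimension n, so G(cube) = 1/12.
G_cube = 1.0 / 12.0

# Gaussian / large-n ball limit from the theorem: G -> 1/(2*pi*e).
G_limit = 1.0 / (2.0 * math.pi * math.e)

# Shaping gain is the ratio of normalised second moments.
gamma_s = G_cube / G_limit                 # = 2*pi*e/12 = pi*e/6
gamma_s_dB = 10.0 * math.log10(gamma_s)

print(f"pi*e/6        = {gamma_s:.4f}")    # 1.4233
print(f"ceiling in dB = {gamma_s_dB:.3f}") # 1.533
```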


Shaping Gain vs Dimension

The per-dimension shaping gain of the $n$-ball (the ball minimises $G$ at each dimension, so this is the best achievable gain at dimension $n$) climbs from $0$ dB at $n = 1$ to the $\pi e / 6 \approx 1.533$ dB ceiling as $n \to \infty$. The curve grows slowly: you need $n \sim 16$ to get the first dB, and another factor of $10$ in dimension to approach $1.4$ dB. This is why practical shaping stays in moderate dimensions (shell mapping uses $n = 16$) and why the last $0.1$ dB of shaping gain is almost impossibly expensive.


Theorem: Normalised Second Moment of the $n$-Ball

The $n$-dimensional ball of volume $V$ has normalised second moment
$$G(B^n) \;=\; \frac{1}{n+2} \cdot \frac{1}{\pi} \cdot \Gamma(1 + n/2)^{2/n}.$$
As $n \to \infty$, $G(B^n) \to 1/(2 \pi e)$ by Stirling's approximation.

The ball is the "roundest" possible region at a given volume, so it achieves the smallest possible second moment per unit volume. The fact that $G(B^n)$ converges to the Gaussian limit says that a high-dimensional uniform distribution on a ball is indistinguishable from a Gaussian in its moments — the entropy gap closes.
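The formula above can be evaluated directly to trace the shaping-gain-versus-dimension curve. A short sketch, using `math.lgamma` so the gamma function does not overflow at large $n$ (the list of dimensions is illustrative):

```python
import math

def G_ball(n: int) -> float:
    """Normalised second moment of the n-ball: Gamma(1+n/2)^(2/n) / ((n+2)*pi).

    lgamma is used instead of gamma so large n does not overflow a double.
    """
    return math.exp(math.lgamma(1.0 + n / 2.0) * 2.0 / n) / ((n + 2) * math.pi)

def shaping_gain_dB(n: int) -> float:
    """Shaping gain of the n-ball over the cube (G_cube = 1/12), in dB."""
    return 10.0 * math.log10(1.0 / (12.0 * G_ball(n)))

for n in (1, 2, 8, 16, 24, 100, 1000):
    print(f"n = {n:5d}: gamma_s = {shaping_gain_dB(n):.3f} dB")
```

At $n = 1$ the ball is an interval, i.e. the 1-D cube, so the gain is exactly $0$ dB; around $n \sim 16$ it is just under $1$ dB, consistent with the curve described above, and it creeps toward $1.533$ dB only for very large $n$.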


Reading the Ultimate Gap

A canonical way to read Shannon's capacity formula $C = \tfrac12 \log_2(1 + \mathrm{SNR})$ is as a prescription: if you are $\Delta$ dB below capacity at some rate $R$, then $\Delta$ decomposes as

$$\Delta \;=\; \underbrace{\gamma_{c,\mathrm{uncoded}}}_{\text{coding gap}} \;-\; \underbrace{\gamma_c}_{\text{coding gain}} \;-\; \underbrace{\gamma_s}_{\text{shaping gain}}.$$

The three components are independent. An uncoded QAM constellation at moderate rate sits at $\Delta \approx 9$ dB from capacity. A good code closes roughly $7.5$ dB of that (the coding side); the last $1.53$ dB is the shaping tax, which no code — however long or cleverly designed — can close. Shaping is the necessary complement to coding, not an optional refinement.
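The decomposition is plain dB arithmetic. A sketch using the paragraph's approximate figures (the $9$ dB uncoded gap is the text's round number for moderate-rate QAM, not a derived constant):

```python
import math

delta_uncoded = 9.0  # approximate gap of uncoded QAM to capacity, dB (round figure)

# The shaping term is capped by the universal ceiling pi*e/6 in dB.
shaping_ceiling = 10.0 * math.log10(math.pi * math.e / 6)

# Whatever shaping cannot recover must come from the coding term.
coding_budget = delta_uncoded - shaping_ceiling

print(f"shaping ceiling: {shaping_ceiling:.2f} dB")  # 1.53
print(f"coding budget:   {coding_budget:.2f} dB")    # 7.47
```

The split is fixed by the ceiling: of the $\approx 9$ dB, about $7.5$ dB is addressable by coding and the remaining $\approx 1.53$ dB only by shaping.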

Example: Shaping Gain of $E_8$

Compute the shaping gain $\gamma_s(E_8)$ and express it as a fraction of the ultimate ceiling $\pi e / 6$.
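A hedged numerical sketch of the solution, taking $G(E_8) \approx 0.0717$ for the normalised second moment of the $E_8$ Voronoi region (a commonly tabulated value; treat it as an assumption and verify it against your own tables):

```python
import math

G_E8 = 0.0717                       # normalised second moment of E8 (tabulated; assumption)

gamma_s = 1.0 / (12.0 * G_E8)       # shaping gain over the cube, as a ratio
gamma_s_dB = 10.0 * math.log10(gamma_s)
ceiling_dB = 10.0 * math.log10(math.pi * math.e / 6)

print(f"gamma_s(E8) = {gamma_s_dB:.2f} dB")                    # 0.65
print(f"fraction of ceiling = {gamma_s_dB / ceiling_dB:.2f}")  # 0.43
```

So shaping with the $E_8$ Voronoi region already captures a bit over $40\%$ of the ultimate $1.533$ dB in only eight dimensions.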

Common Mistake: Shaping and Coding Contributions Are Orthogonal, Not Interchangeable

Mistake:

Claiming that a stronger code can compensate for missing shaping, or that a better shaping strategy reduces the need for coding.

Correction:

The total gap to Shannon capacity equals (in dB) the sum of the coding-gap and shaping-gap terms. A code improves the coding term but leaves the shaping term untouched; a shaping scheme improves the shaping term but leaves the coding term untouched. The two cannot substitute for each other. Even a capacity-achieving binary code on a uniform QAM constellation is $\pi e / 6 \approx 1.53$ dB short of Shannon — the shaping tax is unavoidable without shaping.

Common Mistake: The $\pi e / 6$ Ceiling Is Asymptotic

Mistake:

Assuming a practical shaping scheme at $n = 8$ or $n = 16$ dimensions can recover the full $1.53$ dB.

Correction:

The ceiling $\gamma_s = \pi e / 6$ is attained only in the limit $n \to \infty$. At $n = 8$ (Gosset lattice $E_8$) the best shaping gain is $\approx 0.65$ dB; at $n = 24$ (Leech lattice $\Lambda_{24}$) it is $\approx 1.03$ dB. The last half-decibel of shaping gain is prohibitively expensive in complexity, which is why all practical systems accept $\gamma_s \approx 1$ dB and move on.

Why This Matters: Probabilistic Shaping: Recovering 1 dB with a Different Mechanism

Modern standards (DVB-S2X, 5G, optical coherent) recover the same shaping gain through a different mechanism: probabilistic shaping. Instead of confining lattice points to a bounded region, the constellation is left fixed and the input distribution is made non-uniform — inner points appear more frequently than outer points. Böcherer, Steiner, and Schulte (2015) showed that this is equivalent to spherical shaping in the high-rate regime. Chapter 19 will return to this in detail, with the probabilistic-amplitude-shaping (PAS) architecture of modern coherent optical links as the main example. The connection to this chapter is direct: the information-theoretic ceiling $\pi e / 6$ applies to probabilistic shaping as well, and from the same max-entropy argument.
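A minimal sketch of the mechanism, assuming a Maxwell–Boltzmann input $P(x) \propto e^{-\nu x^2}$ on a fixed 8-PAM alphabet (the standard distribution family in PAS; the particular alphabet and $\nu$ values here are illustrative):

```python
import math

AMPLITUDES = [-7, -5, -3, -1, 1, 3, 5, 7]   # fixed 8-PAM constellation

def mb_stats(nu: float):
    """Entropy (bits) and mean energy of the Maxwell-Boltzmann input
    P(x) proportional to exp(-nu * x^2) on the fixed alphabet."""
    w = [math.exp(-nu * x * x) for x in AMPLITUDES]
    Z = sum(w)
    p = [wi / Z for wi in w]
    H = -sum(pi * math.log2(pi) for pi in p)
    E = sum(pi * x * x for pi, x in zip(p, AMPLITUDES))
    return H, E

# nu = 0 recovers the uniform constellation: H = 3 bits, E[x^2] = 21.
for nu in (0.0, 0.01, 0.03, 0.06):
    H, E = mb_stats(nu)
    print(f"nu = {nu:.2f}: H = {H:.3f} bits, E[x^2] = {E:.2f}")
```

Increasing $\nu$ trades a little entropy for a larger energy reduction; comparing energies at equal rate (after rescaling the constellation) is what yields the shaping gain, and the same max-entropy argument caps it at $\pi e / 6$.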

Quick Check

Why is $\pi e / 6$ a universal upper bound on shaping gain?

Because the Gaussian is the maximum-entropy distribution at fixed variance, and the $n$-ball achieves this limit as $n \to \infty$.

Because the AWGN channel capacity is $\tfrac12 \log_2(1 + \mathrm{SNR})$.

Because $\pi e$ is the Euler number times $\pi$.

Because the $n$-ball has volume $\pi^{n/2} / \Gamma(n/2 + 1)$.

Key Takeaway

The shaping gain ceiling is $\pi e / 6 \approx 1.53$ dB. This number comes from a single inequality: the differential entropy of a uniform distribution on a bounded region is at most the differential entropy of a Gaussian at the same variance. The ceiling is attained in the limit $n \to \infty$ by the $n$-ball; all finite-dimensional constellations fall short. In practice, $1$ dB is readily attainable at $n \sim 16$; the last half-decibel costs enormous complexity. The next section surveys the two practical shaping schemes — shell mapping and trellis shaping — that attempt to approach the ceiling.