Packing Bounds and Viazovska's Theorems
How Dense Can Packings Get? The Two-Sided Question
Having seen the best lattices available in each dimension, the natural question is: how dense could a lattice packing be, if we were free to pick any lattice? This question has two sides:
- Lower bound (existence). How dense can we prove some lattice to be? This is the Minkowski–Hlawka theorem: a random-lattice argument shows that some lattice achieves density $\Delta \ge \zeta(n)/2^{n-1} \approx 2^{-(n-1)}$, a bound whose exponent has not been improved in over 70 years.
- Upper bound (impossibility). How dense can any lattice (or any packing) possibly be? This is the Kabatiansky–Levenshtein bound and its successors, culminating in the Cohn–Elkies linear-programming bound used by Viazovska.
The two bounds sandwich the optimum between roughly $2^{-n}$ (Minkowski–Hlawka) and $2^{-0.599n}$ (Kabatiansky–Levenshtein). Closing the exponential gap between them is a separate, long-open problem; in dimensions $8$ and $24$ the gap was closed by Viazovska's exact constructions.
This section states both bounds, sketches the Minkowski–Hlawka argument in the style of Shannon's random coding, and explains why these bounds shape the theory of coded modulation in Part IV.
Definition: Hermite Constant
The Hermite constant in dimension $n$ is $\gamma_n = \sup_{\Lambda} \, d_{\min}(\Lambda)^2 / \det(\Lambda)^{2/n}$, the supremum taken over all $n$-dimensional lattices $\Lambda$. Equivalently, $\gamma_n$ is the supremum of $d_{\min}(\Lambda)^2$ over lattices with $\det(\Lambda) = 1$.
$\gamma_n$ is related to the best lattice packing density $\Delta_n$ by $\Delta_n = V_n \, (\gamma_n/4)^{n/2}$, where $V_n$ is the volume of the unit $n$-ball. The Hermite constant is known exactly for $n = 1, \dots, 8$ and $n = 24$:
| $n$ | $\gamma_n$ | Extremal lattice |
|---|---|---|
| 1 | $1$ | $\mathbb{Z}$ |
| 2 | $(4/3)^{1/2}$ | $A_2$ |
| 3 | $2^{1/3}$ | $A_3 = D_3$ |
| 4 | $2^{1/2}$ | $D_4$ |
| 5 | $8^{1/5}$ | $D_5$ |
| 6 | $(64/3)^{1/6}$ | $E_6$ |
| 7 | $64^{1/7}$ | $E_7$ |
| 8 | $2$ | $E_8$ |
| 24 | $4$ | $\Lambda_{24}$ |
For other dimensions the Hermite constant is unknown — only upper and lower bounds are available.
Note that $\gamma_8 = 2$ exactly, with extremiser $E_8$ — a particularly clean value. $\gamma_{24} = 4$ is also exact, with extremiser $\Lambda_{24}$. Both lattice-optimality results (Blichfeldt for $n = 8$; Cohn–Kumar for $n = 24$) predate Viazovska's proof of global sphere-packing optimality.
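The conversion between Hermite constant and packing density, $\Delta = V_n (\gamma_n/4)^{n/2}$, is easy to check numerically; a minimal sketch (function names are ours):

```python
import math

def ball_volume(n: int) -> float:
    """Volume of the unit ball in R^n: V_n = pi^(n/2) / Gamma(n/2 + 1)."""
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

def density_from_hermite(n: int, gamma_n: float) -> float:
    """Packing density of the extremal lattice: Delta = V_n * (gamma_n/4)^(n/2)."""
    return ball_volume(n) * (gamma_n / 4) ** (n / 2)

# gamma_8 = 2 (E8) and gamma_24 = 4 (Leech) reproduce the known densities:
print(density_from_hermite(8, 2.0))    # pi^4/384  ~ 0.2537
print(density_from_hermite(24, 4.0))   # pi^12/12! ~ 0.00193
```

Both values agree with the closed forms $\pi^4/384$ and $\pi^{12}/12!$ quoted for $E_8$ and $\Lambda_{24}$.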
Theorem: Minkowski–Hlawka Lower Bound on Densest Lattice Packing
For every $n$, there exists an $n$-dimensional lattice with packing density $\Delta \ge \zeta(n)/2^{n-1}$, where $\zeta(n) = \sum_{k \ge 1} k^{-n}$. In particular, $\Delta \ge 2^{-(n-1)}$ (the zeta factor is close to $1$ for large $n$).
The proof is an averaging argument, in the same spirit as Shannon's random-coding theorem: we construct a random lattice, compute the expected number of short vectors, and apply Markov's inequality to pull out a lattice with few short vectors and hence large packing radius. The randomisation is over a specific parameterised family of lattices (due to Siegel) whose density is easy to compute on average.
Consider the family of lattices obtained from $\mathbb{Z}^n$ by a random linear transformation of determinant $1$.
Compute the expected number of nonzero lattice points inside a ball of radius $r$.
Apply Markov's inequality to extract a lattice with no nonzero points in the ball.
The random lattice family
Consider the family of lattices $\Lambda_A = A\,\mathbb{Z}^n$, where $A$ is drawn from a suitable probability distribution on $\mathrm{SL}_n(\mathbb{R})$ (Siegel's invariant measure). Normalise so that $\det(\Lambda_A) = 1$. The density then depends only on the packing radius, which is half the minimum distance.
Expected count of short vectors
For any radius $r$ with $V_n r^n < 2\zeta(n)$ (so that the expectation below is less than $1$), Siegel's mean-value theorem gives $\mathbb{E}\,\#\{\,\{v, -v\} : v \in \Lambda_A \text{ primitive},\ 0 < \|v\| \le r\,\} = \dfrac{V_n r^n}{2\,\zeta(n)}.$ (The factor $\zeta(n)$ comes from restricting to primitive lattice vectors — lattice points whose coordinates have no common factor; the factor $2$ from counting $v$ and $-v$ as one pair.)
Markov's inequality
If the expected count is less than $1$, then with positive probability the count is zero. Setting $V_n r^n = 2\zeta(n) - \varepsilon$, the bound gives a lattice with no primitive vector of length at most $r$, hence no nonzero vector at all (the shortest nonzero vector of a lattice is always primitive); a lattice with packing radius $r/2$ exists. Therefore $\Delta \ge V_n (r/2)^n = \dfrac{2\zeta(n) - \varepsilon}{2^n} \to \dfrac{\zeta(n)}{2^{n-1}}.$ The factor $2\zeta(n)$ relative to the naive choice $V_n r^n = 1$ is exactly the gain from counting $\pm$-pairs of primitive vectors rather than all nonzero vectors.
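The resulting guarantee is easy to evaluate; a sketch comparing it at $n = 8$ with what $E_8$ actually achieves ($\zeta$ by direct summation, function names ours):

```python
import math

def zeta(s: float, terms: int = 10_000) -> float:
    """Riemann zeta by direct summation (adequate accuracy for s >= 2)."""
    return sum(k ** -s for k in range(1, terms + 1))

def minkowski_hlawka(n: int) -> float:
    """Minkowski-Hlawka guarantee: some n-dim lattice has density >= zeta(n)/2^(n-1)."""
    return zeta(n) / 2 ** (n - 1)

# The existence bound at n = 8 is far below the density E8 actually achieves:
print(minkowski_hlawka(8))     # ~ 0.00784
print(math.pi ** 4 / 384)      # E8 density ~ 0.2537
```

The comparison illustrates that Minkowski–Hlawka is an existence floor, not a prediction: in low dimensions explicit lattices beat it by a wide margin.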
The Minkowski–Hlawka Argument is Shannon's Random Coding
The Minkowski–Hlawka proof proceeds by averaging, not by construction. Shannon's channel-coding theorem averages the error probability over a uniformly random codebook, then picks a realisation at least as good as the average. Minkowski–Hlawka averages the count of short vectors over a randomly chosen lattice, then picks a realisation at least as good as the average.
This parallel is not accidental. Loeliger (1997) showed that the random-coding arguments for the AWGN channel and the random-lattice arguments for sphere packing are two faces of the same measure-theoretic identity. It explains why Erez–Zamir's capacity-achieving lattice code (Ch. 16) works the way it does: a random lattice achieves the AWGN capacity in expectation, and so a deterministic lattice exists that achieves it.
Theorem: Kabatiansky–Levenshtein Upper Bound (Informational)
For every $n$, the density of any sphere packing in $\mathbb{R}^n$ satisfies $\Delta \le 2^{-0.599\,n\,(1+o(1))}$. Combined with the Minkowski–Hlawka lower bound $\Delta^* \ge 2^{-n\,(1+o(1))}$, we have $2^{-n} \lesssim \Delta^* \lesssim 2^{-0.599 n}$ for large $n$. The gap is approximately $0.4$ bits per dimension, or roughly $1.2$ dB per dimension in coding-gain language.
Kabatiansky–Levenshtein's bound comes from a linear-programming formulation on the sphere: a dense packing yields dense spherical codes (kissing-type configurations), and orthogonal-polynomial LP bounds constrain how many points such a configuration can have. Cohn–Elkies (2003) formulated a Fourier-analytic LP directly in $\mathbb{R}^n$ that improves the bound in specific dimensions, which Viazovska then sharpened to equality for $n = 8$ and $24$.
The proof uses Gegenbauer polynomials and the Delsarte method — beyond our scope.
We state the bound and illustrate its gap to Minkowski–Hlawka.
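The two exponents can be tabulated directly; a sketch (the constant $0.599$ is the Kabatiansky–Levenshtein exponent, subexponential factors are ignored, and the function names are ours):

```python
def mh_exponent(n: int) -> float:
    """log2 of the Minkowski-Hlawka density guarantee, ignoring the zeta factor."""
    return -(n - 1)

def kl_exponent(n: int) -> float:
    """log2 of the Kabatiansky-Levenshtein upper bound, ~ -0.599 n."""
    return -0.599 * n

for n in (16, 64, 256):
    gap_bits = kl_exponent(n) - mh_exponent(n)   # bits of unresolved density
    print(n, gap_bits, gap_bits / n, "bits/dim")
# The per-dimension gap approaches 1 - 0.599 = 0.401 bits (~1.2 dB).
```

The loop makes the linear growth of the gap visible: the per-dimension figure creeps up toward the asymptotic $0.401$ bits as $n$ grows.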
Statement only
The Kabatiansky–Levenshtein bound is the sharpest known upper bound that holds uniformly as $n \to \infty$. Dimension-specific bounds (Cohn–Elkies 2003 and its descendants) can sharpen it, to the point of being tight in $n = 8$ and $24$, as Viazovska and collaborators showed. In most other dimensions, the gap between the best known lattice and Kabatiansky–Levenshtein remains exponential in $n$.
Practical interpretation
For the coded-modulation designer: no code can beat the Kabatiansky–Levenshtein density in any dimension. The gap between the best known lattice and this upper bound is the "reserve capacity" that might be unlocked by better lattice constructions — or, per Viazovska, is sometimes zero (no reserve; the best lattice is already optimal).
Packing Density vs Dimension: Bounds and Best-Known Lattices
Log-scale plot of $\log_2 \Delta$ (or equivalently center density $\delta$) vs dimension $n$, showing four curves: (i) the cube lattice $\mathbb{Z}^n$ (baseline), (ii) the best known lattice in each dimension (from Conway–Sloane Table 1.2), (iii) the Minkowski–Hlawka lower bound, and (iv) the Kabatiansky–Levenshtein upper bound. The gap between the two bounds grows linearly with $n$. The $n = 8$ and $n = 24$ points sit on the upper bound (Viazovska). A toggle lets you add or remove the Minkowski–Hlawka bound.
Theorem: Viazovska's Theorems: $E_8$ and $\Lambda_{24}$ Close the Gap
In dimensions $8$ and $24$, the densest sphere packings are $E_8$ (density $\pi^4/384 \approx 0.2537$) and $\Lambda_{24}$ (density $\pi^{12}/12! \approx 0.00193$). These are the first, and as of this writing only, dimensions beyond $n = 3$ in which the densest packing is known exactly and proved. The LP upper bound (Kabatiansky–Levenshtein after Cohn–Elkies refinement) is tight for $n = 8$ and $24$, and the lattices $E_8$ and $\Lambda_{24}$ achieve it.
What makes $n = 8$ and $24$ exceptional is that the Cohn–Elkies LP has a "magic" Schwartz function whose zero structure exactly matches the minimum norms of $E_8$ (resp. $\Lambda_{24}$). Viazovska constructed the magic function using Eisenstein series and theta functions of half-integer weight, a tour de force of modular-form theory. In every other dimension $n > 3$ the LP does not close, and a genuine gap between lower and upper bounds persists.
We state the result and connect it to the Hermite constants $\gamma_8 = 2$ and $\gamma_{24} = 4$.
The proof uses Poisson summation plus a modular magic function; see Viazovska (2017) and the Cohn–Kumar–Miller–Radchenko–Viazovska companion paper.
Proof technique sketch
The Cohn–Elkies linear-programming method starts from Poisson summation: for a Schwartz function $f$ with nonnegative Fourier transform $\hat f \ge 0$, applied to a sphere packing with balls of radius $r/2$, one obtains $\Delta \le V_n (r/2)^n \, f(0)/\hat f(0)$, provided $f(x) \le 0$ for $\|x\| \ge r$. Finding the best such $f$ is a linear programme. Viazovska's breakthrough was to construct this optimal $f$ explicitly for $n = 8$ (and with her collaborators for $n = 24$) using modular forms.
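Under those hypotheses, the Poisson-summation chain for a lattice packing $\Lambda$ with covolume $|\Lambda|$ can be written out; a sketch of the standard argument:

```latex
% Poisson summation relates the lattice \Lambda and its dual \Lambda^*:
%   \sum_{x \in \Lambda} f(x) = \frac{1}{|\Lambda|} \sum_{y \in \Lambda^*} \hat f(y).
% Since \hat f \ge 0, the right-hand side is at least \hat f(0)/|\Lambda|;
% since f(x) \le 0 for \|x\| \ge r, the left-hand side is at most f(0):
\begin{align}
  f(0) \;\ge\; \sum_{x \in \Lambda} f(x)
       \;=\; \frac{1}{|\Lambda|} \sum_{y \in \Lambda^*} \hat f(y)
       \;\ge\; \frac{\hat f(0)}{|\Lambda|}.
\end{align}
% Hence |\Lambda| \ge \hat f(0)/f(0), and the packing density obeys
\begin{align}
  \Delta \;=\; \frac{V_n\,(r/2)^n}{|\Lambda|}
         \;\le\; V_n \left(\frac{r}{2}\right)^{\!n} \frac{f(0)}{\hat f(0)}.
\end{align}
```

For non-lattice (periodic) packings the same chain goes through with Poisson summation over the period lattice.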
Consequence for the Hermite constant
For $n = 8$: $d_{\min}^2(E_8) = 2$ and $\det(E_8) = 1$, so $\gamma_8 = 2$. For $n = 24$: $d_{\min}^2(\Lambda_{24}) = 4$ in the standard normalisation $\det(\Lambda_{24}) = 1$, so $\gamma_{24} = 4$. Viazovska et al. proved these are the global suprema over all lattices in those dimensions. The result also applies to non-lattice packings, making these the globally optimal sphere packings.
Why This Matters: Forward to Ch. 16: Lattice Codes Achieving AWGN Capacity
The Minkowski–Hlawka bound tells us that random lattices, with high probability, have density close to the asymptotic bound. This is the key existence result used by Erez–Zamir (2004) to prove that lattice encoding and decoding achieve the AWGN capacity — the first non-Gaussian construction to do so. Chapter 16 unfolds this argument in detail, showing how a sequence of random lattices plus MMSE scaling plus dithering converges to capacity-achieving performance. The ingredients are exactly the ones we've introduced in this chapter: Voronoi regions, second moments, theta series, and the Minkowski–Hlawka existence theorem.
Why This Matters: Forward to Ch. 17: LAST Codes and DMT Optimality
El Gamal, Caire, and Damen (2004) took the nested-lattice construction one step further: a MIMO channel version where one lattice is used for coding and a sublattice for shaping, and the whole construction achieves the diversity–multiplexing trade-off (DMT) optimally on MIMO fading channels. These are the LAST codes (LAttice Space–Time codes), one of the CommIT group's major contributions. Chapter 17 builds LAST on top of the lattice vocabulary of this chapter; you cannot state the LAST result without first having Minkowski–Hlawka and a concrete lattice family in hand.
Asymptotic Bounds vs Practical Coding Gain
The gap between Minkowski–Hlawka and Kabatiansky–Levenshtein is about $0.4$ bits of density, or roughly $1.2$ dB of coding gain, per added dimension. In practice, using $E_8$ in an $8$-D lattice code gives a $3.0$-dB nominal coding gain over $\mathbb{Z}^8$; using $\Lambda_{24}$ in $24$-D gives $6.0$ dB. Both are well below the asymptotic bounds but are already more than practical decoding complexity can support: a 4-QAM + LDPC code in a CMOS modem delivers comparable gain at a tiny fraction of the complexity.
The message for the practitioner: lattice theory is the upper bound on what coded modulation can achieve in a given dimension. Practical codes usually stop well short of this upper bound and make up the gap with shaping (Ch. 4) and iterative decoding (Parts II and V). The asymptotic bounds of this section are benchmarks, not design targets.
- Minkowski–Hlawka: guarantees $\Delta \ge 2^{-n(1+o(1))}$ asymptotically, but does not construct an explicit lattice.
- Kabatiansky–Levenshtein: upper bound $\Delta \le 2^{-0.599 n (1+o(1))}$; the LP refinement is tight only at $n = 8, 24$ (Viazovska).
- Practical codes: $3$–$6$ dB of coding gain in dimensions up to $24$; shaping (Ch. 4) adds up to $1.53$ dB independently.
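The nominal coding gain of a lattice is $\gamma_c(\Lambda) = d_{\min}^2 / \det(\Lambda)^{2/n}$, and expressing it in dB recovers the familiar figures for $E_8$ and $\Lambda_{24}$; a sketch (function name ours):

```python
import math

def nominal_coding_gain_db(d_min_sq: float, det: float, n: int) -> float:
    """Nominal lattice coding gain gamma_c = d_min^2 / det^(2/n), in dB."""
    gamma_c = d_min_sq / det ** (2 / n)
    return 10 * math.log10(gamma_c)

# E8: d_min^2 = 2, det = 1  ->  ~3.0 dB over Z^8
print(nominal_coding_gain_db(2, 1, 8))
# Leech lattice: d_min^2 = 4, det = 1  ->  ~6.0 dB over Z^24
print(nominal_coding_gain_db(4, 1, 24))
```

Note that $\gamma_c$ is exactly the Hermite constant evaluated at the lattice, so these dB figures are just $10 \log_{10} \gamma_8$ and $10 \log_{10} \gamma_{24}$.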
Common Mistake: Large Kissing Number is Not the Same as High Density
Mistake:
Assuming that a lattice with many short vectors ("large kissing number") automatically has a high packing density. In $8$ dimensions, for example, $E_8$ has kissing number $240$, which is also the maximum possible in that dimension (Odlyzko–Sloane kissing bound), and the laminated lattice $\Lambda_8$ is in fact isomorphic to $E_8$. But the implication does not run from kissing number to density: a lattice can have many minimum-length vectors and still have low density, because its fundamental volume may be much larger.
Correction:
Density and kissing number are independent invariants. Density is $\Delta = V_n (d_{\min}/2)^n / \det(\Lambda)$; the kissing number $\tau$ is the count of minimum-length vectors. Conway–Sloane list both separately. To compare lattices, look at $\Delta$ (or equivalently the center density $\delta$); to predict error-rate multiplicity, look at $\tau$. These are different numbers.
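The two invariants can be computed side by side from standard table data ($d_{\min}^2$, determinant, kissing number); a sketch (function names ours):

```python
import math

def ball_volume(n: int) -> float:
    """Volume of the unit ball in R^n."""
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

def packing_density(n: int, d_min_sq: float, det: float) -> float:
    """Delta = V_n * (d_min / 2)^n / det."""
    return ball_volume(n) * (math.sqrt(d_min_sq) / 2) ** n / det

# (name, n, d_min^2, det, kissing number) in standard normalisations
lattices = [
    ("Z^8",    8, 1, 1,     16),
    ("E8",     8, 2, 1,    240),
    ("Leech", 24, 4, 1, 196560),
]
for name, n, d2, det, tau in lattices:
    print(f"{name:>6}: density={packing_density(n, d2, det):.5f}  kissing={tau}")
```

The printout makes the point concrete: the Leech lattice has by far the largest kissing number yet a far smaller density than $E_8$, simply because density decays with dimension.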
Most Dimensions Are Still Open
As of this writing, the densest sphere packing is known and proved in dimensions $1$, $2$, $3$, $8$, and $24$ (Hales in 1998 for $n = 3$, Viazovska in 2016 for $n = 8$). For all other dimensions the optimal packing is conjectured but not proven. In particular:
- $n = 4$: the densest lattice is known to be $D_4$ (Korkine–Zolotarev, 1877), but global optimality over all packings is unproven, and the lattice proof is less celebrated than Viazovska's.
- $n = 5$–$7$: conjecturally $D_5$, $E_6$, $E_7$ (the densest lattices), but unproven as global packings.
- $n = 9$–$15$: the laminated lattices $\Lambda_n$ are believed best but unproven.
- $n = 16$–$23$: $\Lambda_{16}$ (Barnes–Wall) and the higher laminated lattices are best known but unproven.
- $n \gg 24$: the best known lattices come from extremal constructions via algebraic number fields.
For coded modulation, the takeaway is: use the best known lattice (Nebe–Sloane database), know that better lattices may or may not exist but haven't been found, and build your code assuming the best-known density is what you can deliver. In practice, this is more than enough.
Minkowski–Hlawka bound
A lower bound on the densest lattice packing in $n$ dimensions: $\Delta \ge \zeta(n)/2^{n-1} \approx 2^{-(n-1)}$. Proved by a random-lattice averaging argument in the spirit of Shannon's random coding.
Related: Packing Density and Center Density, Hermite Constant
Kabatiansky–Levenshtein bound
An upper bound on the density of any sphere packing in $\mathbb{R}^n$: $\Delta \le 2^{-0.599 n (1+o(1))}$. Refined by Cohn–Elkies and sharpened to tight in $n = 8$ and $24$ by Viazovska.
Related: Packing Density and Center Density, Viazovska's Theorems: $E_8$ and $\Lambda_{24}$ Close the Gap, Cohn–Elkies
Quick Check
Which of the following best describes the "gap" between Minkowski–Hlawka and Kabatiansky–Levenshtein?
Zero in every dimension.
Exponentially small: the bounds differ only by a factor $1 + 2^{-\Theta(n)}$.
Exponential: roughly a factor $2^{0.4 n}$ in density.
Proved zero by Viazovska in all dimensions.
The lower bound is $2^{-n}$ and the upper bound is $2^{-0.599 n}$ (ignoring subexponential factors). The ratio is $2^{0.401 n}$, or about $1.2$ dB per dimension. Closing this gap is the central open problem of sphere-packing theory.
Quick Check
In how many dimensions $n > 3$ is the globally densest sphere packing currently known and proved?
One ($n = 8$ only).
Two ($n = 8$ and $n = 24$).
Four ($n = 4, 5, 8, 24$).
All of them.
$n = 4$: $D_4$ is the densest lattice (Korkine–Zolotarev, 1877); $n = 5$: $D_5$ (same authors); but these are lattice-only results. Globally, $n = 8$: $E_8$ (Viazovska, 2016); $n = 24$: $\Lambda_{24}$ (Cohn–Kumar–Miller–Radchenko–Viazovska, 2016). The $n = 3$ case is also known (Hales's Kepler-conjecture proof, 1998), but we asked for $n > 3$.
Key Takeaway
Packing density is bounded above by Kabatiansky–Levenshtein at $2^{-0.599 n (1+o(1))}$ and below by Minkowski–Hlawka at $2^{-n(1+o(1))}$ — an exponential-in-$n$ gap that is the central open problem of sphere-packing theory. In dimensions $8$ and $24$, Viazovska and collaborators closed the gap and proved that $E_8$ and $\Lambda_{24}$ are globally optimal. The Minkowski–Hlawka existence argument (random-lattice averaging) is the direct analogue of Shannon's random-coding argument; Ch. 16 will use it to build a capacity-achieving lattice code for the AWGN channel, and Ch. 17 will use a nested-lattice variation to achieve the DMT-optimal LAST code on MIMO fading channels. The lattice-theoretic bounds we have proved are the absolute upper envelope for Part IV's constructions.