The LAST Code Construction
From Algebra to Information Theory: Why We Need LAST Codes
Chapter 13 closed the DMT question for linear space-time codes algebraically: pick a cyclic division algebra of degree $M$, verify the non-vanishing-determinant property, and you have a code that achieves the Zheng-Tse curve for every multiplexing gain $r$. The catch is that this proof rests on the minimum determinant of the codebook, a number-theoretic invariant of the CDA. It works, but it does not tell us why lattice codes are DMT-optimal; it tells us that one particular lattice (the image of the CDA's regular representation) is.
Here we take the complementary route. We ask: is there a construction that achieves the DMT using only information-theoretic ingredients, namely a random lattice (Minkowski-Hlawka, Ch. 15), an MMSE-type pre-processor (Erez-Zamir, Ch. 16), and a union bound? The answer, due to El Gamal, Caire, and Damen (2004), is yes: the Lattice Space-Time (LAST) code family. The construction is a direct composition of two prior chapters.
From Ch. 16 (Erez-Zamir) we take the recipe for achieving the AWGN capacity: a fine coding lattice $\Lambda_f$ nested with a coarse shaping lattice $\Lambda_c \subset \Lambda_f$, common random dithering, and MMSE scaling to defeat the effective noise. From Ch. 10-13 (MIMO) we take the idea that a MIMO fading channel can be triangularised by a decision-feedback receiver. The LAST construction does both: it sends Erez-Zamir codewords over the MIMO channel, triangularises the channel with the MMSE-GDFE receiver (§2), and, as the main theorem of §3 shows, achieves the Zheng-Tse DMT at every rate.
Intuitively, the MMSE-GDFE is to lattice codes what MMSE-SIC is to Gaussian random codes: it is the receiver that makes the capacity-achieving codebook work on a multi-layer channel. The analogy is precise and will be the central thread of §§2-3.
The construction was immediately extended (§4) to structured LAST codes by K. R. Kumar and G. Caire (2008), who showed that replacing the random lattice by $E_8$, the Leech lattice, or any other known dense lattice preserves DMT-optimality AND gives a 3-6 dB coding-gain boost at moderate SNR. This practically-oriented extension is the bridge between information theory and deployment.
Definition: LAST Codebook
A Lattice Space-Time (LAST) codebook of rate $R$ bits per channel use, block length $T$, and transmit dimension $M$ is defined by a pair of nested lattices $\Lambda_c \subset \Lambda_f$ in $\mathbb{R}^m$, $m = 2MT$ (real dimensions; $\mathbb{C}^{MT}$ is identified with $\mathbb{R}^{2MT}$ via the standard complex-to-real isomorphism). The codebook is $\mathcal{C} = \Lambda_f \cap \mathcal{V}(\Lambda_c)$, where $\mathcal{V}(\Lambda_c)$ is the Voronoi region of the coarse (shaping) lattice $\Lambda_c$, and $\Lambda_f$ is the fine (coding) lattice. The rate is $R = \frac{1}{T}\log_2|\mathcal{C}|$ bits per channel use.
Given message $w$ with codeword $c(w) \in \mathcal{C}$ and a common random dither $u$, uniform on $\mathcal{V}(\Lambda_c)$ and known at both transmitter and receiver, the transmitted vector $\underline{x} = [c(w) + u] \bmod \Lambda_c$ is reshaped into a space-time matrix $X \in \mathbb{C}^{M \times T}$. The per-channel-use transmit energy is $\frac{m}{T}\,\sigma^2(\Lambda_c)$, where $\sigma^2(\Lambda_c)$ is the second moment of the shaping Voronoi region; scaling $\Lambda_c$ so that this equals the transmit energy budget meets the power constraint.
Three design knobs define a LAST code: (i) the fine lattice $\Lambda_f$, which controls coding gain (denser is better); (ii) the coarse lattice $\Lambda_c$, which controls shaping gain (rounder means closer to capacity-achieving Gaussian signalling); and (iii) the common random dither $u$, which makes the transmit signal uniform on $\mathcal{V}(\Lambda_c)$, the lattice-code analog of i.i.d. random Gaussian coding. The dither is what turns a deterministic codebook into one whose error-probability analysis is tractable via averaging, directly analogous to random-coding achievability proofs.
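The counting in the definition can be checked on a toy instance. The sketch below uses an illustrative lattice pair ($\Lambda_f = \mathbb{Z}^2$, $\Lambda_c = 4\mathbb{Z}^2$, chosen for this example and not taken from the text): it enumerates the fine-lattice points inside the coarse Voronoi region and recovers $|\mathcal{C}| = V(\Lambda_c)/V(\Lambda_f)$.

```python
import numpy as np
from itertools import product

# Toy LAST-codebook slice (illustrative choice, not from the text):
# fine lattice Lambda_f = Z^2, coarse lattice Lambda_c = 4 Z^2.
# Codebook = fine-lattice points inside the Voronoi region of Lambda_c,
# which is the square [-2, 2)^2, so |C| = V(Lambda_c)/V(Lambda_f) = 16.
fine_points = list(product(range(-8, 9), repeat=2))

def in_voronoi_of_coarse(x, side=4):
    # Voronoi region of 4Z^2: the half-open square [-side/2, side/2)^2
    return all(-side / 2 <= xi < side / 2 for xi in x)

codebook = [p for p in fine_points if in_voronoi_of_coarse(p)]
card = len(codebook)              # 16 codewords: {-2,-1,0,1}^2
T = 1                             # one channel use in this 2D toy slice
rate = np.log2(card) / T          # R = (1/T) log2 |C|
print(card, rate)                 # 16 codewords, 4.0 bits/ch.use
```

In a real LAST code the same count appears as a volume ratio in dimension $2MT$; the enumeration is only feasible here because the toy slice is two-dimensional.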
Definition: LAST Transmission over a MIMO Block-Fading Channel
Let the MIMO channel be $Y = HX + W$, where $H \in \mathbb{C}^{N \times M}$ is the channel matrix (constant over the block of length $T$), $W$ has i.i.d. $\mathcal{CN}(0,1)$ entries, and the transmit constraint is $\frac{1}{T}\,\mathbb{E}\|X\|_F^2 \le \mathrm{SNR}$. The SNR is then the ratio of transmit energy per channel use to the noise variance (equivalently: per-symbol energy equal to SNR times the noise variance, which is normalised to $1$).
LAST transmission. Vectorise the codeword matrix by stacking columns: $\underline{x} = \mathrm{vec}(X) \in \mathbb{C}^{MT}$. Using the Kronecker identity $\mathrm{vec}(HX) = (I_T \otimes H)\,\mathrm{vec}(X)$, the block observation becomes $\underline{y} = (I_T \otimes H)\,\underline{x} + \underline{w}$, i.e., an effective flat linear channel of block length $1$ over $\mathbb{R}^{2MT}$ after complex-to-real conversion. LAST decoding operates on this vectorised representation, treating the entire block as a single lattice-encoded super-symbol.
The key modelling move is the vectorisation: after stacking, the MIMO-over-time problem looks like a flat lattice-channel problem of enlarged dimension $m = 2MT$. The fine lattice $\Lambda_f$ lives in this enlarged space. This is why $T$ appears as a design parameter: larger $T$ means a larger lattice dimension, hence more opportunity for shaping and coding gain, but also exponentially heavier sphere-decoding complexity.
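The vectorisation step rests on a one-line identity that is easy to verify numerically. The sketch below checks $\mathrm{vec}(HX) = (I_T \otimes H)\,\mathrm{vec}(X)$ for a random channel; the dimensions are arbitrary illustrative values.

```python
import numpy as np

# Check the vectorisation identity vec(H X) = (I_T kron H) vec(X)
# that turns the M x T space-time block into one super-symbol.
rng = np.random.default_rng(0)
M, N, T = 2, 2, 3
H = rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))
X = rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T))

def vec(A):
    # Column-stacking vectorisation (Fortran order = stack columns)
    return A.flatten(order="F")

lhs = vec(H @ X)
rhs = np.kron(np.eye(T), H) @ vec(X)
print(np.allclose(lhs, rhs))  # True: the block channel is (I_T kron H)
```

The block-diagonal structure of $I_T \otimes H$ is exactly what lets the MMSE-GDFE of §2 process the whole block as one enlarged linear channel.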
Structure of a LAST Codebook (2D Slice)
A two-dimensional slice through the $2MT$-dimensional LAST codebook. Fine-lattice points of $\Lambda_f$ are shown as small dots; the Voronoi region of the coarse lattice $\Lambda_c$ is shaded. The codebook consists of the fine-lattice points inside the shaded region. Increasing $T$ or $M$ increases the dimension of $\Lambda_f$ (and therefore the number of shaped codewords inside $\mathcal{V}(\Lambda_c)$). The visualisation projects the high-dimensional lattice pair onto a 2D subspace for illustration; the actual construction lives in $\mathbb{R}^{2MT}$.
Lattice Space-Time Codes Achieving the Optimal DMT
Hesham El Gamal (Ohio State), Giuseppe Caire (then at Politecnico di Torino, later USC, now TU Berlin), and Mohamed Oussama Damen (École Nationale Supérieure des Télécommunications de Paris, later Waterloo) introduced the Lattice Space-Time (LAST) code family and gave the first information-theoretic proof that lattice codes are diversity-multiplexing-tradeoff (DMT) optimal on MIMO fading channels. The paper has become one of the foundational references of lattice-based space-time coding.
The construction. Before this paper, MIMO codes were either algebraic (Alamouti, OSTBC, CDA-NVD codes of Ch. 13) or ad-hoc linear-dispersion codes (Hassibi-Hochwald). El Gamal, Caire, and Damen proposed a new family: take a random fine lattice $\Lambda_f$ (in the sense of Minkowski-Hlawka, Ch. 15), a shaping lattice $\Lambda_c$, and a common random dither $u$; transmit shaped codewords over the vectorised MIMO channel. The codebook has no algebraic structure, only geometric (lattice) structure.
The receiver: MMSE-GDFE. The key insight of the paper is the receiver. The MMSE-SIC receiver of V-BLAST (Wolniansky et al. 1998) achieves the MIMO sum capacity for Gaussian random codebooks by triangularising the channel via a decision-feedback structure. El Gamal, Caire, and Damen identified the lattice analog: the MMSE-GDFE (Minimum-Mean-Square-Error Generalised Decision Feedback Equaliser). It is derived from the QR decomposition of the channel matrix augmented with $\mathrm{SNR}^{-1/2} I_M$, and it converts the MIMO channel into parallel triangular layers, each of which can be lattice-decoded in turn. The analogy MMSE-GDFE : lattice codes :: MMSE-SIC : Gaussian codes is precise and is developed in §2.
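A minimal numerical sketch of the augmented-QR derivation, under the standard assumption that the augmentation stacks $I_M/\sqrt{\mathrm{SNR}}$ under $H$: the triangular factor $R$ then satisfies $R^H R = H^H H + I/\mathrm{SNR}$, the MMSE-regularised Gram matrix, and serves as the feedback matrix.

```python
import numpy as np

# MMSE-GDFE sketch via QR of the augmented channel matrix.
# Augmenting H with I/sqrt(SNR) regularises the Gram matrix: the R
# factor obeys R^H R = H^H H + I/SNR and plays the role of the
# triangular feedback matrix B of the GDFE.
rng = np.random.default_rng(1)
M, N, snr = 3, 3, 100.0
H = rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))
H_aug = np.vstack([H, np.eye(M) / np.sqrt(snr)])   # (N+M) x M
Q, R = np.linalg.qr(H_aug)                          # reduced QR
gram = H.conj().T @ H + np.eye(M) / snr
print(np.allclose(R.conj().T @ R, gram))            # True
# Forward filter: F = Q1^H with Q1 the top N rows of Q; then F H
# matches R up to an MMSE residual that vanishes as SNR grows.
```

The point of the regularisation is visible in the code: even if $H$ is rank-deficient, $H^H H + I/\mathrm{SNR}$ is invertible, so the triangular layers never collapse.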
The main theorem. The paper's main result is that the LAST codebook decoded with MMSE-GDFE lattice decoding achieves the full Zheng-Tse DMT curve $d^*(r)$ for every multiplexing gain $r$ and every block length $T \ge M + N - 1$. The proof has three ingredients: (i) averaging over random lattices (Minkowski-Hlawka existence); (ii) Erez-Zamir analysis of the effective AWGN channel after MMSE-GDFE; (iii) matching the Laplace-method Wishart outage exponent of Zheng-Tse. The theorem is reproduced and proved in full in §3.
Why it redefined the field. Before 2004, DMT optimality was known only via algebraic CDA constructions (Ch. 13). Those constructions require deep number theory (cyclic division algebras, non-norm conditions, Selmer groups) and are inflexible: the codebook dimension is fixed by the algebra. The LAST paper showed that any lattice (random, $E_8$, Leech, Barnes-Wall) wrapped in the MMSE-GDFE receiver achieves the DMT. This decoupled code design from algebra: you can pick the lattice to match your finite-SNR coding-gain target (which is what Kumar and Caire 2008, below, did with $E_8$ and Leech) and still have DMT optimality for free. It also established the decoder as the load-bearing component: it is not the codebook alone but the codebook plus MMSE-GDFE combination that achieves the DMT. This mindset of code+receiver co-design has been foundational to every subsequent lattice-STC paper, including IR-LAST (Ch. 14), compute-and-forward (Ch. 18), and integer-forcing linear receivers.
Theorem: LAST Codebook Realises a Nested-Lattice Code over the MIMO Channel
The LAST codebook $\mathcal{C} = \Lambda_f \cap \mathcal{V}(\Lambda_c)$ with common random dither $u$, transmitted over the vectorised MIMO channel $\underline{y} = (I_T \otimes H)\,\underline{x} + \underline{w}$, is statistically equivalent to an Erez-Zamir nested-lattice code transmitted over an AWGN channel with channel matrix $I_T \otimes H$ and noise $\underline{w}$, in the sense that, averaged over the dither $u$, the modulo-lattice operation converts the space-time channel into the same equivalent AWGN channel that Erez and Zamir analyse.
The point is that dithering plus modulo reduction removes all codeword structure from the transmitted signal: what the channel sees is a uniform distribution on $\mathcal{V}(\Lambda_c)$, which is nearly isotropic in $\mathbb{R}^m$ when $\Lambda_c$ is Voronoi-rounded. This is the lattice analog of Gaussian-random-code achievability: by randomising the codebook across dither realisations, we get ensemble-averaged error probabilities that match the AWGN setting.
Recall the Erez-Zamir equivalent-AWGN theorem (Ch. 16): for nested-lattice codes with common random dither, the modulo-lattice output is equivalent in distribution to AWGN at the effective noise level.
Apply the same argument to the vectorised MIMO channel. The channel matrix $I_T \otimes H$ does not affect the dither-modulo argument.
Step 1 β Vectorise and dither
Write $\underline{x} = [c(w) + u] \bmod \Lambda_c$. By the crypto lemma (Erez-Zamir, Ch. 16), for any deterministic codeword $c(w)$, the distribution of $\underline{x}$ over the dither $u \sim \mathrm{Unif}(\mathcal{V}(\Lambda_c))$ is uniform on $\mathcal{V}(\Lambda_c)$.
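The crypto lemma has a finite analog that can be checked by exhaustive enumeration. The sketch below is a discrete toy model over $\mathbb{Z}/q\mathbb{Z}$ (not the continuous lemma itself): with a uniform dither, the dithered point is uniform for every codeword.

```python
from collections import Counter

# Discrete analog of the crypto lemma: over Z mod qZ with dither u
# uniform on {0, ..., q-1}, the dithered point x = (c + u) mod q is
# uniform for EVERY codeword c, so x carries no information about c.
q = 8

def dithered_distribution(c):
    # Exact distribution of (c + u) mod q over all q dither values
    return Counter((c + u) % q for u in range(q))

uniform = Counter({v: 1 for v in range(q)})
print(all(dithered_distribution(c) == uniform for c in range(q)))  # True
```

The continuous statement replaces $\mathbb{Z}/q\mathbb{Z}$ with $\mathbb{R}^m / \Lambda_c$ and the counting argument with a volume argument, but the mechanism, a group shift of a uniform measure, is the same.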
Step 2 β Channel output after modulo
The receiver forms $\underline{y}' = [\alpha\,\underline{y} - u] \bmod \Lambda_c$, undoing the dither. By modulo arithmetic and the MMSE scaling of §2 (taken here as a black box), this is equivalent in distribution to $\underline{y}' = [c(w) + \underline{w}'] \bmod \Lambda_c$, where $\underline{w}'$ is the MMSE-effective noise, whose per-dimension variance approaches the MMSE estimation error as $m \to \infty$.
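The variance bookkeeping behind the MMSE scaling can be checked in the scalar case (an illustration assuming unit noise variance and per-symbol energy SNR; the MIMO version is the §2 black box): with $\alpha = \mathrm{SNR}/(1+\mathrm{SNR})$, the effective noise $\alpha w - (1-\alpha)x$ has variance $\mathrm{SNR}/(1+\mathrm{SNR}) < 1$, strictly smaller than the raw noise.

```python
# Scalar MMSE-scaling arithmetic (unit noise variance, signal power SNR).
# Effective noise after scaling by alpha: alpha*w - (1 - alpha)*x, with
# variance alpha^2 * 1 + (1 - alpha)^2 * SNR. At the MMSE choice
# alpha = SNR/(1 + SNR) this equals SNR/(1 + SNR) < 1.
snr = 10.0
alpha = snr / (1 + snr)
eff_var = alpha**2 * 1.0 + (1 - alpha) ** 2 * snr
print(eff_var, snr / (1 + snr))  # both equal 10/11
```

Any other choice of $\alpha$ gives a strictly larger effective variance, which is why the MMSE value, and not e.g. zero-forcing $\alpha = 1$, is the right scaling for lattice decoding.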
Step 3 β Equivalent lattice-AWGN channel
The receiver is now decoding $c(w)$ from $\underline{y}' = [c(w) + \underline{w}'] \bmod \Lambda_c$, an Erez-Zamir lattice-AWGN decoding problem in dimension $m = 2MT$. All the machinery of Ch. 16 applies verbatim: the minimum-distance decoder on $\Lambda_f$ correctly identifies $c(w)$ as long as the effective noise falls inside the Voronoi region of $\Lambda_f$.
Step 4 β Error probability via Minkowski-Hlawka averaging
Averaging over the random-lattice ensemble (Minkowski-Hlawka, Ch. 15), the error probability is bounded by the ratio of the effective-noise ball volume to the Voronoi-cell volume of $\Lambda_f$. This will give DMT optimality in §3 when combined with the channel-outage analysis. The assertion of the theorem, statistical equivalence to a nested-lattice code over AWGN, is established.
Example: Computing Rate and Dimension for a LAST Code
Consider an $M \times N$ MIMO channel with block length $T$. Let $\Lambda_f$ be a random lattice in $\mathbb{R}^{2MT}$ (after complex-to-real conversion) with fundamental volume $V_f$, and let $\Lambda_c = 2^k \Lambda_f$ for some integer $k \ge 1$. Compute: (a) the cardinality of the LAST codebook in terms of $k$; (b) the per-channel-use rate $R$ in bits/ch.use; (c) the block-wise rate of information symbols in bits; (d) the total multiplexing gain as a function of the SNR-scaling of $k$.
Part (a): Codebook size
$|\mathcal{C}| = \dfrac{\mathrm{Vol}(\mathcal{V}(\Lambda_c))}{\mathrm{Vol}(\mathcal{V}(\Lambda_f))} = \dfrac{2^{2MTk}\,V_f}{V_f} = 2^{2MTk}$.
Part (b): Per-channel-use rate
$R = \frac{1}{T}\log_2|\mathcal{C}| = \frac{2MTk}{T} = 2Mk$ bits/ch.use.
Part (c): Block-wise information rate
Total bits per block $= \log_2|\mathcal{C}| = 2MTk$. Per channel use of the block: $2MTk/T = 2Mk$ bits/ch.use, matching (b).
Part (d): Multiplexing gain
The multiplexing gain is $r = \lim_{\mathrm{SNR}\to\infty} R/\log_2\mathrm{SNR}$. If $k = \frac{r}{2M}\log_2\mathrm{SNR}$, then $R = r\log_2\mathrm{SNR}$ and the multiplexing gain is $r$.
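The bookkeeping of parts (a)-(c) can be collected in a few lines; the values $M = T = 2$, $k = 3$ below are illustrative assumptions for the demo, not from the text.

```python
import numpy as np

# Rate/dimension bookkeeping for a LAST code with Lambda_c = 2^k Lambda_f
# in real dimension m = 2*M*T (illustrative parameter values).
M, T, k = 2, 2, 3
m = 2 * M * T                    # lattice dimension after C -> R^{2MT}
card = 2 ** (k * m)              # |C| = V(Lambda_c)/V(Lambda_f) = 2^(2MTk)
R = np.log2(card) / T            # per-channel-use rate = 2*M*k bits
print(m, card, R)                # 8, 2**24, 12.0
```

Scaling $k$ with $\log_2\mathrm{SNR}$ as in part (d) then turns the fixed rate $2Mk$ into the SNR-dependent rate $r\log_2\mathrm{SNR}$ required by the DMT framework.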
LAST vs. CDA: Two Routes to the Same DMT Peak
It is worth pausing to place LAST codes next to the CDA-NVD codes of Ch. 13. Both constructions achieve the Zheng-Tse DMT curve, but they do so through radically different mathematical machinery.
- CDA-NVD codes (Ch. 13). The proof uses the non-vanishing-determinant property of cyclic division algebras, a number-theoretic statement about the algebraic integers of a cyclic Galois extension. The code is explicit (a CDA generator matrix) and deterministic (no random dither needed). Coding gain is an algebraic constant depending on the choice of non-norm element and the class of the ideal.
- LAST codes (this chapter). The proof uses the Minkowski-Hlawka averaging over random lattices, an information-theoretic statement about the generic dense lattice. The code is non-explicit (existence proof) and randomised (common random dither). Coding gain is a constant depending on the second moment of the shaping Voronoi region and the density of the random fine lattice.
The two constructions are complementary, not competing. CDA codes are the best explicit codes for moderate antenna numbers, where the number-theoretic machinery is manageable and the resulting sphere decoder is tractable. LAST codes are the information-theoretic baseline that tells us why MIMO lattice codes can achieve the DMT; crucially, LAST also gives us the freedom to substitute any well-known dense lattice ($E_8$, Leech, $BW_{16}$) for the fine lattice, which is the path Kumar and Caire (2008) took in §4. For larger antenna counts or large $MT$ products, structured LAST is competitive with or exceeds CDA in finite-SNR performance.
The underlying lesson is the same one we have seen repeatedly: if you have two proofs of the same theorem, each proof offers a different design freedom. Use both.
Common Mistake: Random LAST Is an Existence Proof, Not a Deployable Code
Mistake:
A casual reading of the 2004 LAST paper might suggest that any random lattice, once wrapped in MMSE-GDFE, gives a DMT-optimal, practical space-time code. One might then proceed to sample a "random lattice from the Minkowski-Hlawka ensemble" and try to implement it.
Correction:
The Minkowski-Hlawka existence theorem (Ch. 15) is non-constructive: it averages over the space of lattices with a Haar-like measure and says that a lattice achieving the density bound exists, but does not tell us which lattice does. Sampling from the ensemble requires high-precision arithmetic, is non-reproducible, and does not yield the best finite-SNR coding gain. For practical deployment one must use an explicit dense lattice, which is why the 2008 Kumar-Caire paper (§4) proposed $E_8$, Leech, and Barnes-Wall lattices. The 2004 existence result is valuable because it tells us what the information-theoretic baseline is; the 2008 constructive result tells us how to deploy a practical code.