The Nazer-Gastpar Framework
The Relay That Decodes Functions, Not Messages
Classical relaying theory — Cover-El Gamal 1979, amplify-and-forward, decode-and-forward, compress-and-forward — rests on a shared assumption: the relay's job is to recover the individual users' messages (or soft versions of them) and retransmit that information toward a destination. In a $K$-user MAC phase, this means the relay must decode $K$ separate messages, one per user, each at a rate below its single-user cut. For large $K$, or for channels where no single user can be decoded reliably, this strategy runs out of steam.
The point is that in many network settings the relay does not need to know individual messages — it only needs to know functions of them. If a destination already holds $w_2$ and receives $w_1 + w_2$ from the relay, it recovers $w_1$ by subtraction. The relay never had to separate the two streams; it only had to compute their sum. And if the function the relay needs to compute is a linear combination $\sum_k a_k w_k$ with integer coefficients $a_k$, then the natural coding primitive is a lattice — because a lattice is closed under integer linear combinations.
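The subtraction trick can be checked in a few lines. A minimal sketch with messages as vectors over an illustrative modulus $q$ (all parameter choices here are hypothetical, not from the Nazer-Gastpar construction):

```python
import numpy as np

rng = np.random.default_rng(0)
q = 257   # illustrative modulus; messages live in Z_q^n
n = 8

w1 = rng.integers(0, q, size=n)   # user 1's message
w2 = rng.integers(0, q, size=n)   # user 2's message = destination's side information

relay_output = (w1 + w2) % q      # the relay only ever computes the sum

# Destination subtracts its side information to recover w1.
w1_recovered = (relay_output - w2) % q
assert np.array_equal(w1_recovered, w1)
```

The relay's output reveals neither $w_1$ nor $w_2$ on its own; only the destination's side information turns the sum back into a message.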
Nazer and Gastpar (2011) formalised this observation in the compute-and-forward framework. They ask: given a Gaussian MAC with known channel $\mathbf{h} \in \mathbb{R}^K$, and an integer coefficient vector $\mathbf{a} \in \mathbb{Z}^K$ chosen by the system designer, at what rate can each user transmit so that the relay reliably decodes the linear combination with coefficients $\mathbf{a}$? The answer — the Nazer-Gastpar computation rate $R_{\mathrm{comp}}(\mathbf{h}, \mathbf{a})$ — is the beginning of a vast literature on computation-theoretic coding. This section sets up the model; §2 proves the rate formula; §3-§5 apply it.
Historical Note: From Ahlswede-Cai-Li-Yeung 2000 to Nazer-Gastpar 2011
Network coding was born in 2000 with the Ahlswede-Cai-Li-Yeung paper "Network Information Flow" (IEEE Trans. IT 2000), which proved that for wired multicast networks the capacity region is achieved by transmitting coded combinations of packets at intermediate nodes — not by routing raw packets. Koetter and Médard (2003) made the coding linear; Ho-Médard-Koetter-Karger-Effros-Shi-Leong (2006) made it random. For six years, network coding was a purely digital (packet-level) concept.
In 2006 Zhang, Liew, and Lam ("Hot topic: physical-layer network coding", ACM MobiCom) proposed a radical idea: on a wireless two-way relay channel, the analog superposition at the relay's antenna already performs the network-coding combination. If $x_1$ and $x_2$ are BPSK symbols, then with the right mapping the XOR $b_1 \oplus b_2$ is read off directly from the sum $x_1 + x_2$. This Physical-layer Network Coding (PNC) doubles the throughput of the two-way relay — from $4$ slots (pure routing) to $2$ slots.
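The sum-to-XOR demapping is easy to verify exhaustively in the noiseless case — a sketch of the mapping only, not the full PNC receiver:

```python
# BPSK maps bit b to symbol 1 - 2b, so x ∈ {+1, -1}.
# The superposition x1 + x2 takes values in {-2, 0, +2}:
# it is 0 exactly when the bits differ, i.e. when b1 XOR b2 = 1.
for b1 in (0, 1):
    for b2 in (0, 1):
        x1, x2 = 1 - 2 * b1, 1 - 2 * b2
        s = x1 + x2                        # analog superposition at the relay antenna
        xor_from_sum = 1 if s == 0 else 0  # PNC demapping (noiseless case)
        assert xor_from_sum == (b1 ^ b2)
```

The relay maps $\{-2, +2\} \mapsto 0$ and $0 \mapsto 1$; it never learns which of the two ±2-producing bit pairs occurred, which is exactly the point.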
PNC was elegant but fragile: it required careful amplitude and phase alignment, and it only worked for symmetric channels. The Nazer-Gastpar 2011 paper generalised PNC into a rigorous information-theoretic framework. They replaced the BPSK + XOR intuition with a lattice-coded superposition: instead of landing at a BPSK point, the sum $x_1 + x_2$ lands at a lattice point $\lambda_1 + \lambda_2 \in \Lambda$. The relay decodes this lattice point — not the individual messages — at a computation rate that is the Nazer-Gastpar bound of §2. Zhang-Liew-Lam's PNC becomes the special case $\mathbf{a} = (1, 1)$; the general framework covers arbitrary integer combinations and arbitrary channels. It also connects to the Körner-Marton 1979 distributed source coding problem and to the Ahlswede-Han 1983 Gaussian network-coding converse — three threads that the Nazer-Gastpar synthesis wove together.
Definition: The $K$-User Gaussian MAC-Relay Model
The compute-and-forward channel model consists of $K$ transmitters (users) and one relay. At each channel use $i$, user $k$ transmits a real symbol $x_k[i]$; the symbols of a block form the length-$n$ codeword $\mathbf{x}_k$ of user $k$, which satisfies the average power constraint $\frac{1}{n}\|\mathbf{x}_k\|^2 \le P$. The relay observes the analog superposition $y[i] = \sum_{k=1}^{K} h_k x_k[i] + z[i]$, where $\mathbf{h} = (h_1, \dots, h_K)$ is the channel vector (known at the relay) and $z[i]$ is unit-variance AWGN. The SNR per user is $P$.
The relay's goal is not to decode the individual messages $w_1, \dots, w_K$. Instead, given an integer coefficient vector $\mathbf{a} = (a_1, \dots, a_K) \in \mathbb{Z}^K$, the relay tries to decode the modulo-$\Lambda$ lattice linear combination $\mathbf{v} = \big[\sum_{k=1}^{K} a_k \mathbf{t}_k\big] \bmod \Lambda$, where $\mathbf{t}_k$ is the lattice codeword of user $k$. This is a single lattice point — one element out of the $2^{nR}$-point codebook — and decoding it is a hypothesis-testing problem over $2^{nR}$ alternatives, not $2^{nKR}$.
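A toy sketch of what the relay's target object is: using the integer lattice $\mathbb{Z}^n$ as a stand-in for the nested lattice codebook of §2 (all parameters illustrative, noise chosen weak enough for nearest-point decoding to succeed), the relay rounds its observation to the nearest lattice point and recovers the sum — never $\mathbf{t}_1$ or $\mathbf{t}_2$ individually:

```python
import numpy as np

rng = np.random.default_rng(1)
n, K = 16, 2
# Toy "fine lattice": the integer lattice Z^n. Real compute-and-forward uses
# nested lattices; this only illustrates the relay's target.
t = rng.integers(-5, 6, size=(K, n)).astype(float)   # lattice codewords
a = np.array([1, 1])                                 # desired integer combination
h = np.array([1.0, 1.0])                             # matched channel
z = 0.1 * rng.standard_normal(n)                     # weak AWGN

y = h @ t + z                        # analog superposition at the relay
v_hat = np.round(y)                  # nearest-point decoding in Z^n
assert np.array_equal(v_hat, a @ t)  # relay recovers the *sum*, not t1 or t2
```

Note that `v_hat` is consistent with many $(\mathbf{t}_1, \mathbf{t}_2)$ pairs; the relay's hypothesis test ranges only over the possible sums.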
Definition: Computation Rate
A rate $R$ is achievable for compute-and-forward with channel vector $\mathbf{h}$ and integer coefficient vector $\mathbf{a}$ if, for every $\epsilon > 0$ and every sufficiently large $n$, there exists a nested lattice codebook with coding rate $R$ and a relay decoder such that $\Pr\{\hat{\mathbf{v}} \neq \mathbf{v}\} \le \epsilon$. The computation rate $R_{\mathrm{comp}}(\mathbf{h}, \mathbf{a})$ is the supremum over all achievable rates for that pair $(\mathbf{h}, \mathbf{a})$.
Three things to note. First, the users are synchronous — they share a common codebook, and each transmits at rate $R$. Second, the channel $\mathbf{h}$ is known at the relay but not at the transmitters (the relay chooses $\mathbf{a}$ and broadcasts it back, or the system designer picks it once for a quasi-static block). Third, the relay decodes a single lattice point — it never identifies the individual $w_k$'s. Information about which message is which must be recovered downstream, usually at a destination that has side information.
Why Lattices Are the Natural Primitive
What makes lattices the right coding primitive for computing sums? Consider the alternative: random Gaussian codebooks (classical Shannon). If user $k$ encodes with an independent Gaussian codebook, then $\mathbf{x}_1 + \mathbf{x}_2$ is not a codeword in any standard sense — the "sum" of two random codewords lives nowhere in particular. The relay would have to decode each $\mathbf{x}_k$ separately and add them digitally, forfeiting the superposition.
Lattices escape this trap by design. If $\mathbf{t}_1, \mathbf{t}_2 \in \Lambda$, then $\mathbf{t}_1 + \mathbf{t}_2 \in \Lambda$ as well, because $\Lambda$ is a group under addition. And $a_1 \mathbf{t}_1 + a_2 \mathbf{t}_2 \in \Lambda$ for any integers $a_1, a_2$, because $\mathbb{Z}$ acts on $\Lambda$ by scalar multiplication. So the relay's target is a legal codeword — in fact, it is a point of $\Lambda$, and after mod-$\Lambda$ reduction it lives in the same shaping region as each individual $\mathbf{t}_k$. The lattice structure converts the analog superposition into a structured sum that the relay can hope to decode. Caire's summary: "Lattice codes are to Gaussian networks what linear codes are to finite-field networks — the structured primitive that makes computation-over-channels possible."
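Closure under integer combinations follows directly from the generator-matrix description $\Lambda = \{G\mathbf{m} : \mathbf{m} \in \mathbb{Z}^n\}$. A quick numerical check with a hypothetical generator $G$ (chosen only for illustration):

```python
import numpy as np

# Any lattice is { G @ m : m in Z^n } for an invertible generator matrix G.
G = np.array([[2.0, 1.0],
              [0.0, 3.0]])          # hypothetical generator, for illustration

def lattice_point(m):
    """Map an integer coordinate vector m to its lattice point G @ m."""
    return G @ np.asarray(m, dtype=float)

t1 = lattice_point([1, -2])
t2 = lattice_point([3, 4])
a1, a2 = 2, -3                      # arbitrary integer coefficients

combo = a1 * t1 + a2 * t2
# Closure: combo = G @ (a1*m1 + a2*m2), so its coordinates under G are integers.
m_combo = np.linalg.solve(G, combo)
assert np.allclose(m_combo, np.round(m_combo))
```

Replacing the integers $a_1, a_2$ with non-integer reals breaks the final assertion — which is precisely why compute-and-forward restricts $\mathbf{a}$ to $\mathbb{Z}^K$.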
Definition: Integer Coefficient Vector
The integer coefficient vector $\mathbf{a} = (a_1, \dots, a_K)$ is any nonzero vector in $\mathbb{Z}^K$ that the relay (or system designer) chooses to specify which linear function is to be decoded. The set of candidates is the countable lattice $\mathbb{Z}^K$; the system designer picks the $\mathbf{a}$ that maximises the computation rate for the given channel $\mathbf{h}$.
Ideally $\mathbf{a}$ is close to $\mathbf{h}$ in the sense that $\frac{|\mathbf{h}^\top \mathbf{a}|}{\|\mathbf{h}\|\,\|\mathbf{a}\|}$ is large (the cosine of the angle between $\mathbf{a}$ and $\mathbf{h}$ is close to $1$). For $\mathbf{h} = (1, 1)$, $\mathbf{a} = (1, 1)$ is a perfect match. For $\mathbf{h} = (1, \sqrt{2})$, no integer vector matches — the Diophantine approximations of $\sqrt{2}$ by rationals $p/q$ (e.g., $3/2, 7/5, 17/12$) yield candidates $\mathbf{a} = (q, p)$, but none is exact and the computation rate pays for the residual mismatch. The search for the optimiser is the integer-coefficient problem of §5.
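The search can be sketched as a brute-force loop over small integer vectors, scoring each candidate with the computation-rate formula in the form proved in §2. `best_coeffs` and the search bound `amax` are illustrative choices, not part of the framework:

```python
import itertools
import numpy as np

def comp_rate(h, a, P):
    """Nazer-Gastpar computation rate for channel h, integer vector a, SNR P."""
    h, a = np.asarray(h, float), np.asarray(a, float)
    # Cauchy-Schwarz guarantees this denominator is positive for any nonzero a.
    denom = a @ a - P * (h @ a) ** 2 / (1.0 + P * (h @ h))
    return max(0.0, 0.5 * np.log2(1.0 / denom))

def best_coeffs(h, P, amax=5):
    """Exhaustive search over small integer vectors (the §5 problem)."""
    best_rate, best_a = 0.0, None
    for a in itertools.product(range(-amax, amax + 1), repeat=len(h)):
        if any(a):  # skip the all-zero vector
            r = comp_rate(h, a, P)
            if r > best_rate:
                best_rate, best_a = r, np.array(a)
    return best_rate, best_a

P = 10.0
r, a = best_coeffs([1.0, np.sqrt(2.0)], P)
print(a, r)  # the irrational ratio forces a nonzero residual mismatch
```

Exhaustive search is exponential in $K$; §5 discusses structured alternatives, but for $K = 2$ and small `amax` the loop above is instant.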
Compute-and-Forward Assumes Quasi-Static Channels
The Nazer-Gastpar computation rate assumes the channel $\mathbf{h}$ is constant over the coding block of length $n$, and that the relay has chosen the integer coefficient vector $\mathbf{a}$ appropriate for that $\mathbf{h}$. In fast fading — e.g., vehicular 5G NR channels with Doppler spreads exceeding a few hundred Hz at GHz-range carrier frequencies — the channel varies within a block, and the integer vector chosen at block start becomes suboptimal (or infeasible) by block end.
- Coherence time $T_c$ must exceed the lattice-codeword duration $n T_s$, where $T_s$ is the symbol period
- At 100 km/h and a 0.9 GHz carrier: Doppler $f_D \approx 83$ Hz, $T_c \approx 1/f_D \approx 12$ ms; with $T_s \approx 66.7\,\mu$s (5G NR, 15 kHz subcarrier spacing), only $n \lesssim T_c / T_s \approx 180$ is safe
- Relay must feed back the chosen $\mathbf{a}$ to users (or derive it from a common pilot), adding overhead
Common Mistake: The CF Relay Does Not Decode Individual Messages
Mistake:
Assuming the relay in compute-and-forward first decodes $w_1, \dots, w_K$ individually (by successive cancellation or joint ML) and then computes their integer combination.
Correction:
The point of CF is precisely to avoid decoding individual messages. The relay sees only $\mathbf{y} = \sum_k h_k \mathbf{x}_k + \mathbf{z}$ and decodes the single lattice point $\big[\sum_k a_k \mathbf{t}_k\big] \bmod \Lambda$ — a hypothesis test over $2^{nR}$ candidates, not $2^{nKR}$. When a single user is much stronger than the others, a SIC receiver may do better by decoding that user first; but in symmetric or balanced channels, CF beats SIC because its decoding cost does not scale with $K$. This is the essential network-coding insight: compute the function, not the individual inputs.
Compute-and-Forward vs. Multiple-Access Decoding
| Feature | CF at relay | Joint MAC decoding | SIC |
|---|---|---|---|
| Target | $\big[\sum_k a_k \mathbf{t}_k\big] \bmod \Lambda$ | All $K$ messages | All $K$ messages, sequentially |
| Hypothesis space | $2^{nR}$ (one codebook) | $2^{nKR}$ ($K$ codebooks) | $K$ stages |
| Rate formula | $R_{\mathrm{comp}}(\mathbf{h}, \mathbf{a})$ (Nazer-Gastpar) | Pareto boundary of MAC polytope | Corner of MAC polytope |
| Symmetric sum-rate | $\frac{K}{2}\log\big(\frac{1}{K} + \mathrm{SNR}\big)$ for $\mathbf{h} = \mathbf{a} = \mathbf{1}$ | $\frac{1}{2}\log(1 + K\,\mathrm{SNR})$ | Same as joint (at corners) |
| Channel knowledge | Relay knows $\mathbf{h}$ | Relay knows $\mathbf{h}$ | Relay knows $\mathbf{h}$ |
| Codebook | Shared nested lattice | $K$ independent Gaussian codebooks | $K$ independent Gaussian codebooks |
| Best use case | Network settings (TWRC, interference, distributed source coding) | Classical uplink | Classical uplink with asymmetric rates |
Key Takeaway
The Nazer-Gastpar compute-and-forward model has a relay observe $\mathbf{y} = \sum_k h_k \mathbf{x}_k + \mathbf{z}$ and decode a single lattice linear combination $\big[\sum_k a_k \mathbf{t}_k\big] \bmod \Lambda$ — not the individual messages. This works because the shared nested lattice is closed under integer linear combinations, converting the analog superposition of the channel into a structured sum at a legal lattice point.