The Nazer-Gastpar Framework

The Relay That Decodes Functions, Not Messages

Classical relaying theory — Cover-El Gamal 1979, amplify-and-forward, decode-and-forward, compress-and-forward — rests on a shared assumption: the relay's job is to recover the individual users' messages (or soft versions of them) and retransmit that information toward a destination. In a $K$-user MAC phase, this means the relay must decode $K$ separate messages, one per user, each at a rate below its single-user cut. For large $K$, or for channels where no single user can be decoded reliably, this strategy runs out of steam.

The key observation is that in many network settings the relay does not need to know individual messages — it only needs to know functions of them. If a destination already holds $\mathbf{u}_A$ and receives $\mathbf{u}_A + \mathbf{u}_B$ from the relay, it recovers $\mathbf{u}_B$ by subtraction. The relay never had to separate the two streams; it only had to compute their sum. And if the function the relay needs is a linear combination $\sum_k a_k \mathbf{u}_k$, then the natural coding primitive is a lattice — because a lattice is closed under integer linear combinations.
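The subtraction step is worth seeing concretely. A minimal sketch, assuming mod-$q$ message vectors with a hypothetical modulus $q$ and vector length (stand-ins for the mod-$\Lambda$ arithmetic defined later in this section):

```python
import numpy as np

q = 257  # hypothetical prime modulus for the message alphabet
rng = np.random.default_rng(0)

u_A = rng.integers(0, q, size=8)    # side information held at the destination
u_B = rng.integers(0, q, size=8)    # the message the destination wants

relay_output = (u_A + u_B) % q      # the relay forwards only the sum

u_B_hat = (relay_output - u_A) % q  # subtract known side information
assert np.array_equal(u_B_hat, u_B)
```

The relay never represents $\mathbf{u}_A$ and $\mathbf{u}_B$ separately; only their sum crosses the second hop.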

Nazer and Gastpar (2011) formalised this observation in the compute-and-forward framework. They ask: given a Gaussian MAC $\mathbf{y} = \sum_{k=1}^K h_k \mathbf{x}_k + \mathbf{w}$ with known channel $\mathbf{h} \in \mathbb{R}^K$, and an integer coefficient vector $\mathbf{a} \in \mathbb{Z}^K$ chosen by the system designer, at what rate can each user send so that the relay reliably decodes $\sum_k a_k \mathbf{u}_k \bmod \Lambda$? The answer — the Nazer-Gastpar computation rate $R(\mathbf{h}, \mathbf{a})$ — is the beginning of a vast literature on computation-theoretic coding. This section sets up the model; §2 proves the rate formula; §3-§5 apply it.


Historical Note: From Ahlswede-Cai-Li-Yeung 2000 to Nazer-Gastpar 2011

2000-2011

Network coding was born in 2000 with the Ahlswede-Cai-Li-Yeung paper "Network Information Flow" (IEEE Trans. IT 2000), which proved that for wired multicast networks the capacity region is achieved by transmitting coded combinations of packets at intermediate nodes — not by routing raw packets. Koetter and Médard (2003) made the coding linear; Ho-Médard-Koetter-Karger-Effros-Shi-Leong (2006) made it random. For six years, network coding was a purely digital (packet-level) concept.

In 2006 Zhang, Liew, and Lam ("Hot topic: physical-layer network coding", ACM MobiCom) proposed a radical idea: on a wireless two-way relay channel, the analog superposition $x_A + x_B$ at the relay's antenna already performs the network-coding combination. If $x_A$ and $x_B$ are BPSK symbols, then with the right mapping the XOR $u_A \oplus u_B$ is read off directly from the sum. This Physical-layer Network Coding (PNC) doubles the throughput of the two-way relay — from 4 slots (pure routing) to 2 slots.

PNC was elegant but fragile: it required careful amplitude and phase alignment, and it only worked for symmetric channels. The Nazer-Gastpar 2011 paper generalised PNC into a rigorous information-theoretic framework. They replaced the BPSK + XOR intuition with a lattice-coded superposition: instead of $x_A + x_B$ landing at a BPSK point, it lands at a lattice point $a_A \mathbf{u}_A + a_B \mathbf{u}_B \in \Lambda$. The relay decodes this lattice point — not the individual messages — at a computation rate that is the Nazer-Gastpar bound of §2. Zhang-Liew-Lam's PNC becomes the $\mathbf{a} = (1, 1)^T$ special case; the general framework covers arbitrary integer combinations and arbitrary channels. It also connects to the Körner-Marton 1979 distributed source coding problem and to the Ahlswede-Han 1983 Gaussian network-coding converse — three threads that the Nazer-Gastpar synthesis wove together.


Definition:

The $K$-User Gaussian MAC-Relay Model

The compute-and-forward channel model consists of $K$ transmitters (users) and one relay. User $k$ transmits a length-$n$ real codeword $\mathbf{x}_k \in \mathbb{R}^n$ satisfying the average power constraint
$$\frac{1}{n} \mathbb{E}\big[\|\mathbf{x}_k\|^2\big] \le P, \qquad k = 1, \ldots, K.$$
The relay observes the analog superposition
$$\mathbf{y} \;=\; \sum_{k=1}^K h_k \mathbf{x}_k + \mathbf{w} \;\in\; \mathbb{R}^n,$$
where $\mathbf{h} = (h_1, \ldots, h_K)^T \in \mathbb{R}^K$ is the channel vector (known at the relay) and $\mathbf{w} \sim \mathcal{N}(\mathbf{0}, \mathbf{I}_n)$ is unit-variance AWGN. The SNR per user is $\text{SNR} = P$.
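As a sanity check, the observation model is easy to simulate. The i.i.d. Gaussian codeword placeholders below are an illustrative assumption (real CF codewords are nested-lattice points); they do satisfy the stated power constraint:

```python
import numpy as np

rng = np.random.default_rng(0)
K, n, P = 3, 100_000, 10.0              # users, block length, power (SNR = P)

h = rng.standard_normal(K)              # channel vector, known at the relay
# Placeholder codewords: i.i.d. N(0, P) entries, so (1/n) E||x_k||^2 = P
x = np.sqrt(P) * rng.standard_normal((K, n))
w = rng.standard_normal(n)              # unit-variance AWGN

y = h @ x + w                           # the relay's observation in R^n
per_user_power = (x ** 2).mean(axis=1)  # empirically close to P for each user
```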

The relay's goal is not to decode the $K$ individual messages $\mathbf{u}_1, \ldots, \mathbf{u}_K$. Instead, given an integer coefficient vector $\mathbf{a} = (a_1, \ldots, a_K)^T \in \mathbb{Z}^K \setminus \{\mathbf{0}\}$, the relay tries to decode the modulo-lattice linear combination
$$\mathbf{v}(\mathbf{a}) \;\triangleq\; \sum_{k=1}^K a_k \mathbf{u}_k \bmod \Lambda_s \;\in\; \Lambda_c \cap \mathcal{V}(\Lambda_s).$$
This is a single lattice point — one element of the codebook — and decoding it is a hypothesis-testing problem over $|\Lambda_c \cap \mathcal{V}(\Lambda_s)| = 2^{nR}$ alternatives, not $2^{nKR}$.

Definition:

Computation Rate

A rate $R$ is achievable for compute-and-forward with channel vector $\mathbf{h}$ and integer coefficient vector $\mathbf{a}$ if, for every $\epsilon > 0$ and all sufficiently large $n$, there exist a nested lattice codebook $\Lambda_s \subset \Lambda_c \subset \mathbb{R}^n$ with coding rate $R_{\rm cb} = \tfrac{1}{n} \log_2\big(V(\Lambda_s)/V(\Lambda_c)\big) \ge R$ and a relay decoder $\hat{\mathbf{v}}(\mathbf{y})$ such that
$$\Pr\!\left[\hat{\mathbf{v}}(\mathbf{y}) \neq \sum_{k=1}^K a_k \mathbf{u}_k \bmod \Lambda_s\right] \;\le\; \epsilon.$$
The computation rate $R(\mathbf{h}, \mathbf{a})$ is the supremum of all achievable rates for that pair $(\mathbf{h}, \mathbf{a})$.
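For a feel of the coding-rate expression, take the self-similar nested pair $\Lambda_s = 2^m \Lambda_c$ (an illustrative choice, not required by the definition): the volume ratio is $V(\Lambda_s)/V(\Lambda_c) = 2^{mn}$, so $R_{\rm cb} = m$ bits per channel use.

```python
import numpy as np

n, m = 64, 4                    # example block length and nesting depth
vol_ratio = (2.0 ** m) ** n     # V(Λ_s)/V(Λ_c) = 2**(m*n) for Λ_s = 2**m · Λ_c
R_cb = np.log2(vol_ratio) / n   # coding rate: exactly m bits per channel use
# R_cb → 4.0, and the codebook Λ_c ∩ V(Λ_s) has 2**(n*m) points
```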

Three things to note. First, the $K$ users are synchronous — they share a common codebook $\Lambda_c \cap \mathcal{V}(\Lambda_s)$, and each transmits at rate $R$. Second, the channel $\mathbf{h}$ is known at the relay but not at the transmitters (the relay chooses $\mathbf{a}$ and broadcasts it back, or the system designer picks it once for a quasi-static block). Third, the relay decodes a single lattice point — it never identifies the $K$ individual $\mathbf{u}_k$'s. Information about which $\mathbf{u}_k$ is which must be recovered downstream, usually at a destination that has side information.

Why Lattices Are the Natural Primitive

What makes lattices the right coding primitive for computing sums? Consider the alternative: random Gaussian codebooks (classical Shannon). If user $k$ encodes $\mathbf{u}_k$ with an independent Gaussian codebook, then $\sum_k a_k \mathbf{u}_k$ is not a codeword in any standard sense — the "sum" of two random codewords lives nowhere in particular. The relay would have to decode each $\mathbf{u}_k$ separately and add them digitally, forfeiting the superposition.

Lattices escape this trap by design. If $\mathbf{u}_1, \mathbf{u}_2 \in \Lambda_c$, then $\mathbf{u}_1 + \mathbf{u}_2 \in \Lambda_c$ as well, because $\Lambda_c$ is a group under addition. And $a_1 \mathbf{u}_1 + a_2 \mathbf{u}_2 \in \Lambda_c$ for any integers $a_1, a_2$, because $\mathbb{Z}$ acts on $\Lambda_c$ by scalar multiplication. So the relay's target $\sum_k a_k \mathbf{u}_k$ is a legal codeword — a point of $\Lambda_c$ that, after mod-$\Lambda_s$ reduction, lives in the same shaping region as each individual $\mathbf{u}_k$. The lattice structure converts the analog superposition into a structured sum that the relay can hope to decode. Caire's summary: "Lattice codes are to Gaussian networks what linear codes are to finite-field networks — the structured primitive that makes computation-over-channels possible."
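The closure property can be checked on a concrete lattice. The sketch uses the checkerboard lattice $D_4$ (integer vectors with even coordinate sum) as a convenient stand-in, not the coding lattice of the definitions above:

```python
import numpy as np

def in_D4(p):
    """Membership in the checkerboard lattice D_4: integer vectors, even coordinate sum."""
    p = np.asarray(p)
    return bool(np.all(p == np.round(p)) and int(p.sum()) % 2 == 0)

u1 = np.array([2, 0, 1, 1])    # a point of D_4 (coordinate sum 4, even)
u2 = np.array([1, 1, -2, 2])   # another point (coordinate sum 2, even)
assert in_D4(u1) and in_D4(u2)

# Closure under integer linear combinations: a1*u1 + a2*u2 stays in D_4
for a1, a2 in [(1, 1), (3, -2), (-5, 7)]:
    assert in_D4(a1 * u1 + a2 * u2)
```

The same check fails for arbitrary real combinations, which is exactly why the coefficients in compute-and-forward are restricted to integers.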


Definition:

Integer Coefficient Vector

The integer coefficient vector is any vector $\mathbf{a} = (a_1, \ldots, a_K)^T \in \mathbb{Z}^K \setminus \{\mathbf{0}\}$ that the relay (or system designer) chooses to specify which linear function $\sum_k a_k \mathbf{u}_k$ is to be decoded. The set of candidates is the countable lattice $\mathbb{Z}^K \setminus \{\mathbf{0}\}$; the system designer picks the $\mathbf{a}$ that maximises the computation rate $R(\mathbf{h}, \mathbf{a})$ for the given channel $\mathbf{h}$.

Ideally $\mathbf{a}$ is close to $\mathbf{h}$ in the sense that $(\mathbf{h}^T \mathbf{a})^2 / \|\mathbf{a}\|^2$ is large (the cosine between $\mathbf{a}$ and $\mathbf{h}$ is close to 1). For $\mathbf{h} = (1, 2)^T$, $\mathbf{a} = (1, 2)^T$ is a perfect match. For $\mathbf{h} = (1, \sqrt{2})^T$, no integer vector matches — the Diophantine approximation of $\sqrt{2}$ is by $(1, 1), (2, 3), (5, 7), \ldots$, but none is exact and the computation rate pays for the residual mismatch. The search for the optimiser $\mathbf{a}^*(\mathbf{h})$ is the integer-coefficient problem of §5.
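For small $K$ the optimiser can be found by brute force. The sketch below uses the computation-rate formula that §2 proves, $R(\mathbf{h},\mathbf{a}) = \tfrac{1}{2}\log_2^+\big[\big(\|\mathbf{a}\|^2 - \tfrac{\text{SNR}\,(\mathbf{h}^T\mathbf{a})^2}{1+\text{SNR}\,\|\mathbf{h}\|^2}\big)^{-1}\big]$, and caps the entries of $\mathbf{a}$ at a hypothetical bound:

```python
import itertools
import numpy as np

def comp_rate(h, a, snr):
    """Computation rate R(h, a): the Nazer-Gastpar formula proved in §2."""
    denom = a @ a - snr * (h @ a) ** 2 / (1.0 + snr * (h @ h))
    return max(0.0, 0.5 * np.log2(1.0 / denom))  # denom > 0 by Cauchy-Schwarz

def best_coefficients(h, snr, amax=10):
    """Exhaustive search over nonzero integer vectors with |a_k| <= amax."""
    best_rate, best_a = 0.0, None
    for a in itertools.product(range(-amax, amax + 1), repeat=len(h)):
        a = np.array(a, dtype=float)
        if not a.any():
            continue                              # skip the all-zero vector
        r = comp_rate(h, a, snr)
        if r > best_rate:
            best_rate, best_a = r, a
    return best_rate, best_a

h = np.array([1.0, np.sqrt(2.0)])                 # the mismatched channel from the text
rate, a_star = best_coefficients(h, snr=100.0)
```

The exhaustive search is exponential in $K$; the lattice-reduction shortcuts of §5 are what make the problem tractable for larger $K$.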

⚠️ Engineering Note

Compute-and-Forward Assumes Quasi-Static Channels

The Nazer-Gastpar computation rate $R(\mathbf{h}, \mathbf{a})$ assumes the channel $\mathbf{h}$ is constant over the coding block of length $n$, and that the relay has chosen the integer coefficient vector $\mathbf{a}$ appropriate for that $\mathbf{h}$. In fast fading — e.g., vehicular 5G NR channels with Doppler spreads exceeding a few hundred Hz at 3 GHz — the channel varies within a block, and the integer vector $\mathbf{a}$ chosen at block start becomes suboptimal (or infeasible) by block end.

Practical Constraints
  • Coherence time $T_c$ must exceed the lattice-codeword duration $n T_s$, where $T_s$ is the symbol period

  • At $v = 30$ km/h and $f_c = 3$ GHz: Doppler $f_D \approx 83$ Hz, $T_c \approx 12$ ms; with $T_s = 33\,\mu$s (5G NR), only $n \le 360$ is safe

  • Relay must feed back the chosen $\mathbf{a}$ to users (or derive it from a common pilot), adding overhead

📋 Ref: 3GPP TS 38.300 (5G NR overall architecture) — no CF mode is specified
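The numbers in the second bullet follow from standard Doppler arithmetic, using the $T_c \approx 1/f_D$ convention (which matches the quoted 12 ms):

```python
# Doppler / coherence-time budget for the note's scenario
c = 3e8                 # speed of light, m/s
v = 30 / 3.6            # 30 km/h in m/s
f_c = 3e9               # carrier frequency, Hz
T_s = 33e-6             # 5G NR symbol period, s

f_D = v * f_c / c       # Doppler spread: ~83 Hz
T_c = 1.0 / f_D         # coherence time (1/f_D convention): ~12 ms
n_max = int(T_c / T_s)  # ~363 symbols; the note's n <= 360 leaves a small margin
```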

Common Mistake: The CF Relay Does Not Decode Individual Messages

Mistake:

Assuming the relay in compute-and-forward first decodes $\mathbf{u}_1, \ldots, \mathbf{u}_K$ individually (by successive cancellation or joint ML) and then computes their integer combination.

Correction:

The point of CF is precisely to avoid decoding individual messages. The relay sees only $\mathbf{y} = \sum_k h_k \mathbf{x}_k + \mathbf{w}$ and decodes the single lattice point $\sum_k a_k \mathbf{u}_k \bmod \Lambda_s$ — a hypothesis test over $2^{nR}$ candidates, not $2^{nKR}$. When a single user is much stronger than the others, a SIC receiver may do better by decoding that user first; but in symmetric or balanced channels, CF beats SIC because its decoding cost does not scale with $K$. This is the essential network-coding insight: compute the function, not the individual inputs.

Compute-and-Forward vs. Multiple-Access Decoding

| Feature | CF at relay | Joint MAC decoding | SIC |
|---|---|---|---|
| Target | $\sum_k a_k \mathbf{u}_k \bmod \Lambda_s$ | All $\mathbf{u}_1, \ldots, \mathbf{u}_K$ | All $\mathbf{u}_1, \ldots, \mathbf{u}_K$ sequentially |
| Hypothesis space | $2^{nR}$ (one codebook) | $2^{nKR}$ ($K$ codebooks) | $2^{nR} \times K$ stages |
| Rate formula | $R(\mathbf{h}, \mathbf{a})$ (Nazer-Gastpar) | Pareto boundary of MAC polytope | Corner of MAC polytope |
| Symmetric sum-rate | $K \cdot R(\mathbf{h}, \mathbf{a}^*)$ | $\tfrac{1}{2} \log_2(1 + \text{SNR}\,\|\mathbf{h}\|^2)$ | Same as joint (at corners) |
| Channel knowledge | Relay knows $\mathbf{h}$ | Relay knows $\mathbf{h}$ | Relay knows $\mathbf{h}$ |
| Codebook | Shared nested lattice $\Lambda_c \supset \Lambda_s$ | $K$ independent Gaussian codebooks | $K$ independent Gaussian codebooks |
| Best use case | Network settings (TWRC, interference, distributed source coding) | Classical uplink | Classical uplink with asymmetric rates |
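To put numbers on the symmetric sum-rate row, a short sketch for a symmetric two-user channel at SNR $= 100$ (note that the CF figure counts all $K$ user rates while the relay recovers only $nR$ bits of function, so it is not bounded by the MAC sum capacity):

```python
import numpy as np

def comp_rate(h, a, snr):
    """Computation rate R(h, a), the formula proved in §2."""
    denom = a @ a - snr * (h @ a) ** 2 / (1.0 + snr * (h @ h))
    return max(0.0, 0.5 * np.log2(1.0 / denom))

snr = 100.0
h = np.array([1.0, 1.0])                    # symmetric two-user channel
a = np.array([1.0, 1.0])                    # PNC-style sum coefficients

cf_sum = 2 * comp_rate(h, a, snr)           # K * R(h, a): ~6.65 bits/use
mac_sum = 0.5 * np.log2(1 + snr * (h @ h))  # joint MAC sum capacity: ~3.83 bits/use
```

For this symmetric $\mathbf{h}$ the formula collapses to $R = \tfrac12 \log_2(\tfrac12 + \text{SNR})$, the familiar two-way-relay expression.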

Key Takeaway

The Nazer-Gastpar compute-and-forward model has a relay observe $\mathbf{y} = \sum_{k=1}^K h_k \mathbf{x}_k + \mathbf{w}$ and decode a single lattice linear combination $\sum_k a_k \mathbf{u}_k \bmod \Lambda_s$ — not the $K$ individual messages. This works because the shared nested lattice $\Lambda_c \supset \Lambda_s$ is closed under integer linear combinations, converting the analog superposition of the channel into a structured sum at a legal lattice point.