The Computation Rate Theorem
From Model to Rate
Section 1 set up the model: users transmit lattice codewords $\mathbf{x}_\ell$, the relay receives $\mathbf{y} = \sum_\ell h_\ell \mathbf{x}_\ell + \mathbf{z}$, and the goal is to decode the integer combination $\sum_\ell a_\ell \mathbf{t}_\ell$. Section 2 answers the natural next question: at what rate? The Nazer-Gastpar computation-rate theorem gives the answer as a clean closed-form expression in $\mathbf{h}$, $\mathbf{a}$, and $\mathrm{SNR}$. Its proof is a direct generalisation of the Erez-Zamir achievability of Ch. 16: scale the received signal by a well-chosen $\alpha$, subtract a lattice estimate of the sum, and bound the residual noise. The new ingredient — compared to single-user AWGN — is that the users share a common lattice, so the integer-coefficient combination is itself a lattice point mod $\Lambda_s$. The MMSE coefficient $\alpha$ now depends on both $\mathbf{h}$ and $\mathbf{a}$; so does the effective noise variance.
Theorem: Nazer-Gastpar Computation Rate
Consider the $L$-user Gaussian MAC-relay channel with real channel vector $\mathbf{h} \in \mathbb{R}^L$, per-user power constraint $P$, and unit-variance noise $\mathbf{z} \sim \mathcal{N}(0, \mathbf{I})$. Let $\mathrm{SNR} = P$. Then for every integer coefficient vector $\mathbf{a} \in \mathbb{Z}^L$, the following rate is achievable for compute-and-forward: $$R(\mathbf{h}, \mathbf{a}) = \frac{1}{2}\log^+\!\left(\frac{1 + \mathrm{SNR}\,\|\mathbf{h}\|^2}{\|\mathbf{a}\|^2 + \mathrm{SNR}\left(\|\mathbf{h}\|^2\|\mathbf{a}\|^2 - (\mathbf{h}^T\mathbf{a})^2\right)}\right),$$ where $\log^+(x) = \max(\log x, 0)$. The MMSE scaling coefficient that achieves this rate is $$\alpha_{\rm MMSE} = \frac{\mathrm{SNR}\,\mathbf{h}^T\mathbf{a}}{1 + \mathrm{SNR}\,\|\mathbf{h}\|^2}.$$
Read the formula as a signal-to-noise ratio inside a capacity expression. The numerator in the log is $1 + \mathrm{SNR}\,\|\mathbf{h}\|^2$, and the denominator is $\|\mathbf{a}\|^2 + \mathrm{SNR}\left(\|\mathbf{h}\|^2\|\mathbf{a}\|^2 - (\mathbf{h}^T\mathbf{a})^2\right)$. Equivalently, $R(\mathbf{h},\mathbf{a}) = \frac{1}{2}\log^+(\mathrm{SNR}/N_{\rm eff})$ with $N_{\rm eff} = \min_\alpha\big[\alpha^2 + \mathrm{SNR}\,\|\alpha\mathbf{h} - \mathbf{a}\|^2\big]$. When $\mathbf{h}$ is perfectly aligned with $\mathbf{a}$ ($\mathbf{h} = \beta\mathbf{a}$ for some scalar $\beta$), the term $\|\alpha\mathbf{h} - \mathbf{a}\|^2$ nearly cancels at high SNR, and the rate approaches $\frac{1}{2}\log\!\big((1 + \mathrm{SNR}\,\|\mathbf{h}\|^2)/\|\mathbf{a}\|^2\big)$ — the single-user capacity modulated by the integer-coefficient norm. Misalignment shrinks $(\mathbf{h}^T\mathbf{a})^2$ below $\|\mathbf{h}\|^2\|\mathbf{a}\|^2$ and collapses the rate.
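The closed form is easy to get wrong by a stray factor, so it is worth evaluating mechanically. A minimal sketch (helper names are ours, NumPy assumed) that computes the rate and the MMSE coefficient directly from the theorem:

```python
import numpy as np

def compute_rate(h, a, snr):
    """Nazer-Gastpar computation rate in bits per real channel use:
    (1/2) log2+ of (1 + snr ||h||^2) over
    (||a||^2 + snr (||h||^2 ||a||^2 - (h.a)^2))."""
    h, a = np.asarray(h, float), np.asarray(a, float)
    num = 1.0 + snr * (h @ h)
    den = (a @ a) + snr * ((h @ h) * (a @ a) - (h @ a) ** 2)
    return max(0.5 * np.log2(num / den), 0.0)

def alpha_mmse(h, a, snr):
    """MMSE scaling coefficient: snr * h.a / (1 + snr ||h||^2)."""
    h, a = np.asarray(h, float), np.asarray(a, float)
    return snr * (h @ a) / (1.0 + snr * (h @ h))
```

For $\mathbf{h} = \mathbf{a} = (1,1)^T$ at $\mathrm{SNR} = 10$, `compute_rate` returns $\tfrac12\log_2 10.5 \approx 1.70$ bits; for an orthogonal pair like $\mathbf{h} = (1,0)^T$, $\mathbf{a} = (0,1)^T$ the $\log^+$ clips to $0$.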
Setup: nested Erez-Zamir codebook shared across users
Fix nested lattices $\Lambda_s \subset \Lambda$ with coarse shaping lattice $\Lambda_s$ (so the second-moment power of $\Lambda_s$ is $\sigma^2(\Lambda_s) = P$) and coding rate $R = \frac{1}{n}\log_2|\Lambda \cap \mathcal{V}_s|$. Let $\Lambda$ achieve the Poltyrev exponent (from Ch. 16 — AWGN-good). All users share this codebook. User $\ell$ picks its message as a lattice point $\mathbf{t}_\ell \in \Lambda \cap \mathcal{V}_s$, dithers with a uniform dither $\mathbf{u}_\ell \sim \mathrm{Unif}(\mathcal{V}_s)$ (known to the relay), and transmits $\mathbf{x}_\ell = [\mathbf{t}_\ell - \mathbf{u}_\ell] \bmod \Lambda_s$. The crypto-lemma (Erez-Zamir) makes $\mathbf{x}_\ell$ uniform over $\mathcal{V}_s$, hence power $P$ per dimension, hence satisfying the power constraint.
Relay MMSE scaling and lattice estimate subtraction
The relay scales $\mathbf{y}$ by $\alpha$ and adds back the $\mathbf{a}$-weighted dithers: $$\mathbf{s} = \alpha\mathbf{y} + \sum_{\ell=1}^{L} a_\ell \mathbf{u}_\ell.$$ Writing $\mathbf{v} = \big[\sum_\ell a_\ell \mathbf{t}_\ell\big] \bmod \Lambda_s$, we get $$\mathbf{s} \equiv \mathbf{v} + \mathbf{z}_{\rm eff} \pmod{\Lambda_s},$$ where the effective noise is $$\mathbf{z}_{\rm eff} = \sum_{\ell=1}^{L} (\alpha h_\ell - a_\ell)\,\mathbf{x}_\ell + \alpha\mathbf{z}.$$
Effective noise variance and its MMSE minimiser
Under the crypto-lemma, each $\mathbf{x}_\ell$ is uniform over $\mathcal{V}_s$ — in particular, the $\mathbf{x}_\ell$'s are independent of $\mathbf{v}$ and have covariance $P\,\mathbf{I}$ (approximately, by the Erez-Zamir second-moment result — sphere-like shaping makes $\mathcal{V}_s$ look like an isotropic ball). The per-dimension variance of $\mathbf{z}_{\rm eff}$ is $$N_{\rm eff}(\alpha) = \alpha^2 + P\,\|\alpha\mathbf{h} - \mathbf{a}\|^2.$$ Minimise over $\alpha$: setting $dN_{\rm eff}/d\alpha = 0$ gives $$\alpha_{\rm MMSE} = \frac{P\,\mathbf{h}^T\mathbf{a}}{1 + P\,\|\mathbf{h}\|^2}.$$ Substituting back, with $\mathrm{SNR} = P$: $$N_{\rm eff} = \frac{\mathrm{SNR}\; D}{1 + \mathrm{SNR}\,\|\mathbf{h}\|^2}, \qquad D = \|\mathbf{a}\|^2 + \mathrm{SNR}\left(\|\mathbf{h}\|^2\|\mathbf{a}\|^2 - (\mathbf{h}^T\mathbf{a})^2\right),$$ where we defined $D$ as the denominator that appears in the rate formula.
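The closed-form minimiser is easy to sanity-check against a brute-force grid over $\alpha$; a quick sketch (the channel and coefficient values are arbitrary test inputs of ours):

```python
import numpy as np

def n_eff(alpha, h, a, snr):
    # per-dimension effective noise: alpha^2 (from alpha*z)
    # plus snr * ||alpha*h - a||^2 (self-interference of the codewords)
    h, a = np.asarray(h, float), np.asarray(a, float)
    return alpha ** 2 + snr * np.sum((alpha * h - a) ** 2)

h, a, snr = np.array([1.0, 1.2]), np.array([1.0, 1.0]), 10.0
alpha_star = snr * (h @ a) / (1.0 + snr * (h @ h))   # closed-form MMSE minimiser

# brute-force comparison over a fine grid of alternative scalings
grid = np.linspace(-3.0, 3.0, 200001)
vals = grid ** 2 + snr * np.sum((grid[:, None] * h - a) ** 2, axis=1)
best_on_grid = vals.min()
```

`n_eff(alpha_star, ...)` matches `best_on_grid` to grid resolution, confirming the stationary point is the minimum (the objective is a convex quadratic in $\alpha$).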
Decoding via modulo-$\Lambda_s$ lattice quantisation
The relay applies the fine-lattice quantiser $Q_\Lambda$ to $\mathbf{s}$ — rounding to the nearest point of $\Lambda$ — and then reduces mod $\Lambda_s$. The sum $\sum_\ell a_\ell \mathbf{t}_\ell \in \Lambda$ by closure of $\Lambda$ under integer linear combinations; its residue mod $\Lambda_s$ lies in $\Lambda \cap \mathcal{V}_s$. The decoder succeeds iff $\mathbf{z}_{\rm eff}$ falls inside $\mathcal{V}(\Lambda)$ — the fine-lattice Voronoi region. By the Poltyrev exponent (Erez-Zamir), this happens w.p. $\to 1$ whenever the coding rate satisfies $$R < \frac{1}{2}\log_2\frac{\mathrm{SNR}}{N_{\rm eff}} = \frac{1}{2}\log_2\frac{1 + \mathrm{SNR}\,\|\mathbf{h}\|^2}{\|\mathbf{a}\|^2 + \mathrm{SNR}\left(\|\mathbf{h}\|^2\|\mathbf{a}\|^2 - (\mathbf{h}^T\mathbf{a})^2\right)}.$$ Taking the truncation $\log^+$ (the rate is $0$ when the denominator reaches $1 + \mathrm{SNR}\,\|\mathbf{h}\|^2$, i.e., when the integer-coefficient mismatch dominates) yields the theorem.
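The dither-and-modulo algebra can be exercised end-to-end in one dimension with fine lattice $\Lambda = \mathbb{Z}$ and coarse lattice $\Lambda_s = 8\mathbb{Z}$ — a toy sketch of ours (it demonstrates the decoding chain, not the rate claims), assuming $\mathbf{h} = \mathbf{a} = (1,1)$ and near-noiseless reception:

```python
import numpy as np

rng = np.random.default_rng(1)
M = 8                        # coarse lattice 8Z; messages live in {0, ..., 7}
a = np.array([1, 1])         # relay targets t1 + t2
noise_std = 0.02             # tiny noise so rounding to Z almost never errs

def relay_decode(t, u, noise_std):
    x = np.mod(t - u, M)                      # dithered, mod-Lambda_s transmit
    y = x.sum() + noise_std * rng.standard_normal()   # channel h = (1, 1)
    s = y + (a * u).sum()                     # alpha = 1 (h = a), add dithers back
    return int(np.round(s)) % M               # nearest Z point, reduce mod 8Z

t = rng.integers(0, M, size=2)                # lattice messages
u = rng.uniform(0.0, M, size=2)               # uniform dithers, known to relay
v = relay_decode(t, u, noise_std)             # equals (t1 + t2) mod 8 w.h.p.
```

Note that the relay never learns $t_1$ or $t_2$ individually — only their residue sum, exactly as the theorem promises.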
Where the integer coefficient vector enters
Every step above uses only to define which lattice combination the relay targets. The codebook is -independent; the transmitters do not even know . The relay's scalar scaling and the effective noise both depend on , but neither depends on the individual messages. This is the deep property that makes CF useful: the same codebook works for every , and the relay can choose post-hoc, after observing .
Reading the Rate Formula
The Nazer-Gastpar rate depends on three quantities: the channel $\mathbf{h}$, the integer vector $\mathbf{a}$, and the SNR. Three regimes are worth naming.
(i) High-SNR, exact alignment ($\mathbf{h} = \mathbf{a} \in \mathbb{Z}^L$): the Cauchy-Schwarz gap vanishes and $R = \frac{1}{2}\log\!\big((1 + \mathrm{SNR}\,\|\mathbf{h}\|^2)/\|\mathbf{a}\|^2\big)$ — within $\frac{1}{2}\log\|\mathbf{a}\|^2$ of $\frac{1}{2}\log(1 + \mathrm{SNR}\,\|\mathbf{h}\|^2)$, the single-user AWGN capacity of the channel $\mathbf{h}$. Compute-and-forward essentially matches single-user capacity when $\mathbf{h}$ is an integer vector.
(ii) Low-SNR or strong misalignment ($R = 0$; equivalently $\|\mathbf{a}\|^2 + \mathrm{SNR}\left(\|\mathbf{h}\|^2\|\mathbf{a}\|^2 - (\mathbf{h}^T\mathbf{a})^2\right) \ge 1 + \mathrm{SNR}\,\|\mathbf{h}\|^2$, which at very low $\mathrm{SNR}$ reduces to $\|\mathbf{a}\|^2 \ge 1$): the $\log^+$ clips to zero. The relay simply cannot decode that combination. Choose a different $\mathbf{a}$ or abandon CF for this channel.
(iii) Mid-SNR, near-integer channel: the typical operating regime. For a near-integer $\mathbf{h}$ and its integer rounding $\mathbf{a}$, the inner product $\mathbf{h}^T\mathbf{a}$ falls just short of $\|\mathbf{h}\|\|\mathbf{a}\|$, so the Cauchy-Schwarz gap $G = \|\mathbf{h}\|^2\|\mathbf{a}\|^2 - (\mathbf{h}^T\mathbf{a})^2$ is small but positive, and its contribution $\mathrm{SNR}\cdot G$ to the denominator grows linearly in SNR. The rate starts at $0$ (at $\mathrm{SNR} = 0$ the log argument is $1/\|\mathbf{a}\|^2 \le 1$), rises monotonically, and saturates at $\frac{1}{2}\log_2(\|\mathbf{h}\|^2/G)$ as $\mathrm{SNR} \to \infty$.
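The regime-(iii) behaviour — zero at low SNR, monotone rise, saturation at $\frac12\log_2(\|\mathbf{h}\|^2/G)$ — can be traced numerically. A sketch with an illustrative near-integer channel $\mathbf{h} = (1, 1.1)^T$ (our choice, not the chapter's):

```python
import numpy as np

def rate(h, a, snr):
    h, a = np.asarray(h, float), np.asarray(a, float)
    num = 1.0 + snr * (h @ h)
    den = (a @ a) + snr * ((h @ h) * (a @ a) - (h @ a) ** 2)
    return max(0.5 * np.log2(num / den), 0.0)

h, a = np.array([1.0, 1.1]), np.array([1.0, 1.0])
G = (h @ h) * (a @ a) - (h @ a) ** 2          # Cauchy-Schwarz gap (= 0.01 here)
ceiling = 0.5 * np.log2((h @ h) / G)          # high-SNR saturation level
rates = [rate(h, a, s) for s in (0.01, 1.0, 10.0, 1e3, 1e6)]
```

The sweep starts clipped at $0$, increases at every step, and approaches `ceiling` ($\approx 3.89$ bits here) from below.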
Computation Rate Over the Integer Coefficient Plane
Heatmap of the Nazer-Gastpar computation rate as a function of the integer coefficient pair $(a_1, a_2)$, for a fixed channel $\mathbf{h}$ and SNR. The brightest cells are the integer vectors that best align with the channel direction; the dark cells are those where the $\log^+$ clips and the rate is $0$. Try $\mathbf{h} = (1,1)^T$ (symmetric — $\mathbf{a} = (1,1)^T$ is optimal); $\mathbf{h} = (1,2)^T$ ($\mathbf{a} = (1,2)^T$ is exact and best); or $\mathbf{h} = (1,\sqrt{2})^T$ (no exact integer match — compare the rates at $(1,1)$, $(1,2)$, and $(2,3)$).
Example: Computing $t_1 + t_2$ at $\mathbf{h} = (1,1)^T$, SNR 10 dB
For the Gaussian MAC-relay channel with $\mathbf{h} = (1,1)^T$, $\mathbf{a} = (1,1)^T$, and $\mathrm{SNR} = 10$, compute the Nazer-Gastpar rate and compare it to the single-user capacity $\frac{1}{2}\log_2(1 + \mathrm{SNR}) \approx 1.73$ bits.
Compute $\|\mathbf{a}\|^2$, $\mathbf{h}^T \mathbf{a}$, $\|\mathbf{h}\|^2$
$\|\mathbf{a}\|^2 = 2$; $\mathbf{h}^T\mathbf{a} = 2$; $\|\mathbf{h}\|^2 = 2$. Note $\mathbf{h} = \mathbf{a}$ exactly (parallel and identical): this is the "perfect alignment" case.
Compute $N_{\rm eff}$
$N_{\rm eff} = \mathrm{SNR}\,\dfrac{\|\mathbf{a}\|^2 + \mathrm{SNR}\left(\|\mathbf{h}\|^2\|\mathbf{a}\|^2 - (\mathbf{h}^T\mathbf{a})^2\right)}{1 + \mathrm{SNR}\,\|\mathbf{h}\|^2} = 10 \cdot \dfrac{2 + 10\,(4 - 4)}{21} = \dfrac{20}{21} \approx 0.952$.
Compute $R(\mathbf{h}, \mathbf{a})$
$R(\mathbf{h}, \mathbf{a}) = \frac{1}{2}\log_2\frac{\mathrm{SNR}}{N_{\rm eff}} = \frac{1}{2}\log_2\frac{21}{2} = \frac{1}{2}\log_2 10.5 \approx 1.70$ bits per channel use per user.
Compare with single-user capacity
Single-user capacity at $\mathrm{SNR} = 10$ with channel gain $1$ is $\frac{1}{2}\log_2(11) \approx 1.73$ bits. The CF rate is slightly lower — but it is a per-user rate: the relay decodes one combination carrying both users' messages. The sum-rate delivered at the relay is $2R \approx 3.39$ bits per channel use, exceeding the single-user cap.
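The arithmetic of this example is small enough to machine-check; a sketch re-deriving the numbers above:

```python
import numpy as np

snr = 10.0                                   # 10 dB
h = a = np.array([1.0, 1.0])                 # perfect alignment: h = a
num = 1.0 + snr * (h @ h)                    # 21
den = (a @ a) + snr * ((h @ h) * (a @ a) - (h @ a) ** 2)   # 2 (gap vanishes)
R = 0.5 * np.log2(num / den)                 # ~1.696 bits per user
C_single = 0.5 * np.log2(1.0 + snr)          # ~1.730 bits, single-user capacity
```

`R` lands just below `C_single`, while `2 * R` (the sum-rate at the relay) comfortably exceeds it.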
Example: Integer Mismatch: $\mathbf{h} = (1, \sqrt{2})^T$
For $\mathbf{h} = (1, \sqrt{2})^T$, $\mathbf{a} = (1, 1)^T$, and $\mathrm{SNR} = 10$, compute $R(\mathbf{h}, \mathbf{a})$ and check whether the integer vector $\mathbf{a} = (1, 2)^T$ gives a better rate.
$\mathbf{a} = (1, 1)^T$
$\|\mathbf{a}\|^2 = 2$; $\mathbf{h}^T\mathbf{a} = 1 + \sqrt{2} \approx 2.414$; $\|\mathbf{h}\|^2 = 3$. Denominator $= 2 + 10\left(6 - (3 + 2\sqrt{2})\right) \approx 3.72$; numerator $= 1 + 30 = 31$. $R = \frac{1}{2}\log_2(31/3.72) \approx 1.53$ bits per user.
$\mathbf{a} = (1, 2)^T$
$\|\mathbf{a}\|^2 = 5$; $\mathbf{h}^T\mathbf{a} = 1 + 2\sqrt{2} \approx 3.828$; the inner product is larger but so is the norm-squared. Denominator $= 5 + 10\left(15 - (9 + 4\sqrt{2})\right) \approx 8.43$. $R = \frac{1}{2}\log_2(31/8.43) \approx 0.94$ bits per user.
Conclusion
The simpler integer vector $(1,1)^T$ beats $(1,2)^T$ here. The irrational ratio $\sqrt{2}$ has its best small-norm Diophantine approximation at $1/1$; using $(1,2)^T$ incurs the $\|\mathbf{a}\|^2$ norm penalty without a proportional alignment gain. In general, the integer-coefficient search is the central optimisation of §5.
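A quick numerical check of the comparison, assuming the channel $\mathbf{h} = (1, \sqrt{2})^T$ at $\mathrm{SNR} = 10$:

```python
import numpy as np

def rate(h, a, snr):
    h, a = np.asarray(h, float), np.asarray(a, float)
    num = 1.0 + snr * (h @ h)
    den = (a @ a) + snr * ((h @ h) * (a @ a) - (h @ a) ** 2)
    return max(0.5 * np.log2(num / den), 0.0)

h, snr = [1.0, np.sqrt(2.0)], 10.0
r11 = rate(h, [1, 1], snr)    # ~1.53 bits per user
r12 = rate(h, [1, 2], snr)    # ~0.94 bits per user
```

The smaller-norm coefficient vector wins, matching the hand calculation.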
Example: Optimal $\mathbf{a}$ for a Two-User Channel at 15 dB
For the channel $\mathbf{h}$ and $\mathrm{SNR} = 10^{1.5} \approx 31.6$ (15 dB), find the integer vector $\mathbf{a}^*$ that maximises $R(\mathbf{h}, \mathbf{a})$ among a short list of candidate vectors.
Compute $\|\mathbf{h}\|^2$ and $R$ for each candidate
With $\mathrm{SNR} = 10^{1.5} \approx 31.6$, compute $\|\mathbf{h}\|^2$ once; the numerator $1 + \mathrm{SNR}\,\|\mathbf{h}\|^2$ is common to every candidate. Tabulate:
| $\mathbf{a}$ | $\|\mathbf{a}\|^2$ | $\mathbf{h}^T\mathbf{a}$ | $N_{\rm eff}$ | $R$ (bits) |
|---|---|---|---|---|
Identify $\mathbf{a}^*$
$\mathbf{a}^*$ is the candidate with the largest $R$ in the table, in bits per user. A close integer approximation to the channel direction wins, while larger-norm candidates — which better approximate the channel ratio — lose because the $\|\mathbf{a}\|^2$ penalty dominates.
Remark: the integer-coefficient search
For a general channel the search over integer vectors $\mathbf{a} \in \mathbb{Z}^L$ is infinite. In practice one restricts to a bounded box $\|\mathbf{a}\|_\infty \le a_{\max}$ (a small number) and enumerates; or one formulates the problem as a shortest-vector problem on the channel lattice (§5, Thm. "Integer-Forcing Achieves a Constant Gap to MIMO Sum-Capacity"). For $L = 2$ at moderate SNR, a small box is essentially always sufficient.
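The bounded-box search is a few lines; a sketch (the box radius and the sample channel are our illustrative choices):

```python
import itertools
import numpy as np

def rate(h, a, snr):
    h, a = np.asarray(h, float), np.asarray(a, float)
    num = 1.0 + snr * (h @ h)
    den = (a @ a) + snr * ((h @ h) * (a @ a) - (h @ a) ** 2)
    return max(0.5 * np.log2(num / den), 0.0)

def best_coefficients(h, snr, box=4):
    """Enumerate nonzero a in {-box, ..., box}^L; return the rate maximiser."""
    L = len(h)
    best = max(
        (a for a in itertools.product(range(-box, box + 1), repeat=L) if any(a)),
        key=lambda a: rate(h, a, snr),
    )
    return np.array(best), rate(h, best, snr)

a_star, r_star = best_coefficients([1.0, 1.4], snr=10 ** 1.5)
```

Note that $\mathbf{a}$ and $-\mathbf{a}$ always give the same rate, so the maximiser is only defined up to sign; for a shortest-vector-style solver one would instead reduce the Gram matrix of the effective channel lattice.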
The Single-User Limit: $L = 1$
The Erez-Zamir single-user AWGN-capacity result (Ch. 16) is a degenerate case of the Nazer-Gastpar theorem. With $L = 1$, $h = 1$, $a = 1$: $\|\mathbf{a}\|^2 = 1$, $\mathbf{h}^T\mathbf{a} = 1$, $\|\mathbf{h}\|^2 = 1$. Then the Cauchy-Schwarz gap vanishes and $R = \frac{1}{2}\log(1 + \mathrm{SNR})$ — exactly the AWGN capacity at effective SNR. The MMSE coefficient reduces to $\alpha = \mathrm{SNR}/(1 + \mathrm{SNR})$, the scalar Wiener filter. This continuity is reassuring: compute-and-forward specialises correctly to the classical case.
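The degeneration is a one-line check at any SNR; a sketch:

```python
import numpy as np

def rate(h, a, snr):
    h, a = np.asarray(h, float), np.asarray(a, float)
    num = 1.0 + snr * (h @ h)
    den = (a @ a) + snr * ((h @ h) * (a @ a) - (h @ a) ** 2)
    return max(0.5 * np.log2(num / den), 0.0)

# L = 1, h = a = 1: the denominator collapses to ||a||^2 = 1, so the
# CF rate equals the classical AWGN capacity (1/2) log2(1 + SNR)
checks = {snr: (rate([1.0], [1], snr), 0.5 * np.log2(1.0 + snr))
          for snr in (0.5, 1.0, 10.0, 100.0)}
```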
Common Mistake: $R(\mathbf{h}, \mathbf{a})$ Can Be Negative Before Clipping — Don't Panic
Mistake:
Computing $\frac{1}{2}\log_2\!\big((1 + \mathrm{SNR}\,\|\mathbf{h}\|^2)/(\|\mathbf{a}\|^2 + \mathrm{SNR}(\|\mathbf{h}\|^2\|\mathbf{a}\|^2 - (\mathbf{h}^T\mathbf{a})^2))\big)$ without the truncation and getting a negative number at high SNR — then concluding the formula is wrong or the rate is infinite.
Correction:
Algebraically the untruncated log can be negative, but the effective noise variance cannot: MMSE estimation can only reduce the effective noise variance, never make it negative (by Cauchy-Schwarz the denominator above is strictly positive). The truncation in the Nazer-Gastpar formula uses $\log^+$; when the denominator exceeds the numerator $1 + \mathrm{SNR}\,\|\mathbf{h}\|^2$, the effective noise exceeds the signal power and the codebook simply cannot support any information on that combination. In practice one reports $R = 0$ and treats the negative value as a certificate that the chosen $\mathbf{a}$ is infeasible at this SNR — pick another coefficient vector, or fall back to decoding a single user directly.
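The clipping is visible immediately in code; a sketch with a deliberately misaligned $\mathbf{a}$ (values are our illustration):

```python
import numpy as np

h, a, snr = np.array([1.0, 0.0]), np.array([1.0, 3.0]), 10.0
num = 1.0 + snr * (h @ h)                                   # 11
den = (a @ a) + snr * ((h @ h) * (a @ a) - (h @ a) ** 2)    # 100, still positive
raw = 0.5 * np.log2(num / den)      # negative: combination not decodable
clipped = max(raw, 0.0)             # the log+ truncation reports 0
```

The denominator stays positive (the noise variance is well defined); only the rate expression goes negative, and `clipped` is what the theorem actually asserts.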
Quick Check
In the Nazer-Gastpar computation-rate theorem, the MMSE scaling coefficient at the relay is $\alpha = \mathrm{SNR}\,\mathbf{h}^T\mathbf{a}/(1 + \mathrm{SNR}\,\|\mathbf{h}\|^2)$. Which of the following is its correct limiting behaviour?
At $\mathrm{SNR} \to 0$, $\alpha \to 0$ (the relay ignores the received signal).
At $\mathrm{SNR} \to 0$, $\alpha \to 1$ (the relay uses the raw received signal).
At $\mathrm{SNR} \to 0$, $\alpha \to \mathbf{h}^T\mathbf{a}/\|\mathbf{h}\|^2$ (the least-squares projection).
$\alpha$ depends on the codebook but not on $\mathbf{h}$.
Correct. At very low SNR the MMSE estimate of the lattice combination is dominated by the noise; $\alpha = \mathrm{SNR}\,\mathbf{h}^T\mathbf{a}/(1 + \mathrm{SNR}\,\|\mathbf{h}\|^2) \to 0$ and the relay effectively reports the zero lattice point — the rate is clipped to $0$.
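Both limits of $\alpha$ can be confirmed numerically; a sketch (test channel values are ours):

```python
import numpy as np

def alpha_mmse(h, a, snr):
    # snr * h.a / (1 + snr * ||h||^2)
    h, a = np.asarray(h, float), np.asarray(a, float)
    return snr * (h @ a) / (1.0 + snr * (h @ h))

h, a = [1.0, 1.2], [1, 1]
lo = alpha_mmse(h, a, 1e-9)          # SNR -> 0: alpha -> 0, ignore y
hi = alpha_mmse(h, a, 1e9)           # SNR -> inf: alpha -> h.a / ||h||^2
ls = np.dot(h, a) / np.dot(h, h)     # least-squares projection limit
```

The high-SNR limit is the least-squares projection, which is why that option is wrong as a low-SNR claim.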
Compute-and-Forward Requires Precise Time and Frequency Sync
The rate formula assumes perfect symbol-level synchronisation among the transmitters, so that the analog sum $\sum_\ell h_\ell x_\ell(t)$ is sample-aligned at the relay. In practice, a timing offset of half a symbol destroys the lattice sum: the received combination is no longer an integer lattice point plus noise but a continuous-time-shifted mixture.
- Timing alignment must be a small fraction of the symbol duration (a 5G NR 30-kHz-SCS symbol lasts roughly 33 µs)
- Frequency alignment must be a small fraction of the inverse block duration — the tolerance tightens for long blocks, e.g. a 360-symbol block
- Phase noise and carrier frequency offset must be pre-compensated per user
- Pilot exchange among users (or a common master clock / GPS reference) is mandatory
CF in Satellite Two-Way Communication Prototypes
Satellite two-way relay systems are a natural home for compute-and-forward: geostationary and LEO satellites relay uplink signals from two terrestrial terminals to each other. The quasi-static channel (low Doppler relative to a typical coding block), the fixed terminal positions (synchronisation is feasible), and the throughput-limited nature of satellite backhaul all favour CF over AF/DF. Wübben and Rost (2015) reported a 2× throughput gain in a two-user satellite TWRC prototype, validating the theoretical 2-slot-vs-4-slot prediction of §3. Commercial adoption remains limited because the throughput gain is offset by hardware cost (phased arrays, precision synchronisation, lattice decoders).
Key Takeaway
The Nazer-Gastpar computation rate is $$R(\mathbf{h}, \mathbf{a}) = \frac{1}{2}\log^+\!\left(\frac{1 + \mathrm{SNR}\,\|\mathbf{h}\|^2}{\|\mathbf{a}\|^2 + \mathrm{SNR}\left(\|\mathbf{h}\|^2\|\mathbf{a}\|^2 - (\mathbf{h}^T\mathbf{a})^2\right)}\right),$$ where $\log^+(x) = \max(\log x, 0)$. It is achieved by a shared nested Erez-Zamir codebook, MMSE scaling $\alpha$ at the relay, and modulo-$\Lambda_s$ decoding to the nearest fine-lattice point. The rate is high when $\mathbf{a}$ is a close integer approximation to $\mathbf{h}$; it degenerates to the single-user AWGN capacity at $L = 1$.