The Computation Rate Theorem

From Model to Rate

Section 1 set up the model: $K$ users transmit lattice codewords, the relay receives $\mathbf{y} = \sum_k h_k \mathbf{x}_k + \mathbf{w}$, and the goal is to decode $\sum_k a_k \mathbf{u}_k \bmod \Lambda_s$. Section 2 answers the natural next question: at what rate? The Nazer-Gastpar computation-rate theorem gives the answer as a clean closed-form expression in $\mathbf{h}$, $\mathbf{a}$, and $\text{SNR}$. Its proof is a direct generalisation of the Erez-Zamir achievability of Ch. 16: scale the received signal by a well-chosen $\alpha$, subtract a lattice estimate of the sum, and bound the residual noise. The new ingredient, compared to single-user AWGN, is that the $K$ users share a common lattice, so the integer-coefficient combination $\sum_k a_k \mathbf{x}_k$ is itself a lattice point mod $\Lambda_s$. The MMSE coefficient $\alpha^*$ now depends on both $\mathbf{h}$ and $\mathbf{a}$, and so does the effective noise variance.


Theorem: Nazer-Gastpar Computation Rate

Consider the $K$-user Gaussian MAC-relay channel $\mathbf{y} = \sum_{k=1}^K h_k \mathbf{x}_k + \mathbf{w}$ with real channel vector $\mathbf{h} \in \mathbb{R}^K$, per-user power constraint $\mathbb{E}[\|\mathbf{x}_k\|^2]/n \le P$, and $\mathbf{w} \sim \mathcal{N}(\mathbf{0}, \mathbf{I}_n)$. Let $\text{SNR} = P$. Then for every integer coefficient vector $\mathbf{a} \in \mathbb{Z}^K \setminus \{\mathbf{0}\}$, the following rate is achievable for compute-and-forward:
$$\boxed{\; R(\mathbf{h}, \mathbf{a}) = \frac{1}{2} \log_2^+\!\left( \frac{1}{\|\mathbf{a}\|^2 - \frac{\text{SNR}\,(\mathbf{h}^T \mathbf{a})^2}{1 + \text{SNR}\,\|\mathbf{h}\|^2}} \right) \;}$$
where $\log_2^+(x) = \max(\log_2 x, 0)$. The MMSE scaling coefficient that achieves this rate is
$$\alpha^*(\mathbf{h}, \mathbf{a}) = \frac{\text{SNR}\,\mathbf{h}^T \mathbf{a}}{1 + \text{SNR}\,\|\mathbf{h}\|^2}.$$

Read the formula as a signal-to-noise ratio inside a capacity expression. The numerator in the log is $1$, and the denominator is the effective noise variance $N_{\rm eff}(\mathbf{h}, \mathbf{a}) = \|\mathbf{a}\|^2 - \text{SNR}\,(\mathbf{h}^T \mathbf{a})^2/(1 + \text{SNR}\,\|\mathbf{h}\|^2)$. Equivalently:
$$R(\mathbf{h}, \mathbf{a}) = \frac{1}{2} \log_2^+\!\left( \frac{1 + \text{SNR}\,\|\mathbf{h}\|^2}{\|\mathbf{a}\|^2 (1 + \text{SNR}\,\|\mathbf{h}\|^2) - \text{SNR}\,(\mathbf{h}^T \mathbf{a})^2} \right).$$
When $\mathbf{a}$ is perfectly aligned with $\mathbf{h}$ ($\mathbf{a} = c\mathbf{h}$ for some scalar $c$), the subtracted term $\text{SNR}\,(\mathbf{h}^T \mathbf{a})^2/(1 + \text{SNR}\,\|\mathbf{h}\|^2) = \|\mathbf{a}\|^2\,\text{SNR}\,\|\mathbf{h}\|^2/(1 + \text{SNR}\,\|\mathbf{h}\|^2) \approx \|\mathbf{a}\|^2$ at high SNR, so $N_{\rm eff}$ nearly vanishes and the rate approaches $\tfrac{1}{2}\log_2(\text{SNR}\,\|\mathbf{h}\|^2/\|\mathbf{a}\|^2)$: the single-user capacity modulated by the integer-coefficient norm. Misalignment (e.g., $\mathbf{h} = (1, \sqrt{2})^T$, $\mathbf{a} = (1, 1)^T$) instead keeps $N_{\rm eff}$ bounded away from zero: by Cauchy-Schwarz, $N_{\rm eff} \to \|\mathbf{a}\|^2 - (\mathbf{h}^T \mathbf{a})^2/\|\mathbf{h}\|^2 > 0$ as $\text{SNR} \to \infty$ (about $0.057$ in this example), so the rate saturates near $2.06$ bits while the single-user capacity keeps growing with SNR.
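Both forms of the rate can be checked numerically. A minimal sketch (the function names `compute_rate`, `compute_rate_alt`, and `alpha_mmse` are illustrative, not from any library; SNR is in linear units throughout):

```python
import math

def alpha_mmse(h, a, snr):
    """MMSE scaling coefficient alpha* = SNR (h.a) / (1 + SNR |h|^2)."""
    ha = sum(hi * ai for hi, ai in zip(h, a))
    h2 = sum(hi * hi for hi in h)
    return snr * ha / (1 + snr * h2)

def compute_rate(h, a, snr):
    """Nazer-Gastpar rate R(h, a) = (1/2) log2+ (1 / N_eff)."""
    ha = sum(hi * ai for hi, ai in zip(h, a))
    h2 = sum(hi * hi for hi in h)
    a2 = sum(ai * ai for ai in a)
    n_eff = a2 - snr * ha**2 / (1 + snr * h2)
    return max(0.0, 0.5 * math.log2(1 / n_eff))

def compute_rate_alt(h, a, snr):
    """Equivalent form: numerator and denominator scaled by 1 + SNR |h|^2."""
    ha = sum(hi * ai for hi, ai in zip(h, a))
    h2 = sum(hi * hi for hi in h)
    a2 = sum(ai * ai for ai in a)
    num = 1 + snr * h2
    den = a2 * num - snr * ha**2
    return max(0.0, 0.5 * math.log2(num / den))

print(round(compute_rate((1.0, 2.0), (1, 2), 10.0), 3))
```

The two forms agree to floating-point precision, since the second is the first with numerator and denominator multiplied by $1 + \text{SNR}\,\|\mathbf{h}\|^2$.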


Reading the Rate Formula

The Nazer-Gastpar rate $R(\mathbf{h}, \mathbf{a})$ depends on three quantities: the channel $\mathbf{h}$, the integer vector $\mathbf{a}$, and the SNR. Three regimes are worth naming.

(i) High-SNR, exact alignment ($\mathbf{a} = \mathbf{h} \in \mathbb{Z}^K$, $\text{SNR} \to \infty$): $N_{\rm eff} = \|\mathbf{h}\|^2/(1 + \text{SNR}\,\|\mathbf{h}\|^2) \to 0$, so $R = \tfrac{1}{2}\log_2\!\big((1 + \text{SNR}\,\|\mathbf{h}\|^2)/\|\mathbf{h}\|^2\big)$ grows without bound. That is the single-user AWGN capacity $\tfrac{1}{2}\log_2(1 + \text{SNR}\,\|\mathbf{h}\|^2)$ minus the constant $\tfrac{1}{2}\log_2\|\mathbf{h}\|^2$: compute-and-forward is within a constant of single-user capacity when $\mathbf{a} = \mathbf{h}$ is an integer vector, and matches it exactly when $\|\mathbf{h}\| = 1$.

(ii) Low-SNR or strong misalignment ($N_{\rm eff} \ge 1$): $R = 0$. This happens at very low $\text{SNR}$ (where $N_{\rm eff} \approx \|\mathbf{a}\|^2 \ge 1$) or when $\mathbf{a}$ points too far from $\mathbf{h}$ (at high SNR, $N_{\rm eff} \to \|\mathbf{a}\|^2 - (\mathbf{h}^T \mathbf{a})^2/\|\mathbf{h}\|^2$, which can exceed $1$). The relay simply cannot decode that combination. Choose a different $\mathbf{a}$ or abandon CF for this channel.

(iii) Mid-SNR, near-integer channel: the typical operating regime. For $\mathbf{h} = (1, 1.5)^T$ and $\mathbf{a} = (1, 2)^T$, the inner product $\mathbf{h}^T \mathbf{a} = 4$, the norm-squared $\|\mathbf{a}\|^2 = 5$, and $\|\mathbf{h}\|^2 = 3.25$, so $N_{\rm eff} = 5 - 16\,\text{SNR}/(1 + 3.25\,\text{SNR})$: above $1$ for $\text{SNR} < 4/3$ (where the $\log_2^+$ clips the rate to zero) and decreasing monotonically toward $5 - 16/3.25 = 1/13$ as $\text{SNR} \to \infty$. The rate starts at $0$, turns on at $\text{SNR} = 4/3$, rises, and saturates near $\tfrac{1}{2}\log_2 13 \approx 1.85$ bits.
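Regime (iii) can be traced with a short SNR sweep; a sketch (the `rate` helper is a throwaway, not a library function):

```python
import math

def rate(h, a, snr):
    # N_eff = |a|^2 - SNR (h.a)^2 / (1 + SNR |h|^2), clipped via log2+
    ha = sum(x * y for x, y in zip(h, a))
    n_eff = sum(x * x for x in a) - snr * ha**2 / (1 + snr * sum(x * x for x in h))
    return max(0.0, 0.5 * math.log2(1 / n_eff))

h, a = (1.0, 1.5), (1, 2)
for snr in (0.5, 1.0, 4 / 3, 2.0, 10.0, 1e6):
    print(f"SNR={snr:>10.2f}  R={rate(h, a, snr):.4f} bits")
# The rate is 0 up to SNR = 4/3, then rises and saturates near (1/2) log2(13).
```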

Computation Rate $R(\mathbf{h}, \mathbf{a})$ Over the Integer Coefficient Plane

Heatmap of the Nazer-Gastpar computation rate $R(\mathbf{h}, \mathbf{a})$ as a function of the integer coefficient pair $\mathbf{a} = (a_1, a_2) \in \{-5, \ldots, 5\}^2$, for a fixed $K = 2$ channel $(h_1, h_2)$ and SNR. The brightest cells are the integer vectors that best align with the channel direction; the dark cells are those where $N_{\rm eff} \ge 1$ and the rate is $0$. Try $(h_1, h_2) = (1, 1)$ (symmetric: $(1, 1)$ is optimal); $(h_1, h_2) = (1, 2)$ ($(1, 2)$ is exact and best); or $(h_1, h_2) = (0.7, 1.1)$ (no exact integer match: compare the rates at $(1, 1)$, $(1, 2)$, and $(2, 3)$).
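If the interactive heatmap is unavailable, the same grid can be scanned offline. A sketch for the third suggested channel (the values $(h_1, h_2) = (0.7, 1.1)$ and $\text{SNR} = 10$ come from the caption; everything else is illustrative):

```python
import math

def rate(h, a, snr):
    ha = h[0] * a[0] + h[1] * a[1]
    n_eff = a[0]**2 + a[1]**2 - snr * ha**2 / (1 + snr * (h[0]**2 + h[1]**2))
    return max(0.0, 0.5 * math.log2(1 / n_eff))

h, snr = (0.7, 1.1), 10.0
grid = {(a1, a2): rate(h, (a1, a2), snr)
        for a1 in range(-5, 6) for a2 in range(-5, 6)
        if (a1, a2) != (0, 0)}                 # a = 0 is excluded by the theorem
best = max(grid, key=grid.get)
# Best cell up to a global sign flip, since R(h, a) = R(h, -a)
print(best, round(grid[best], 3))
```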


Compute-and-Forward: Decoding $a_1 \mathbf{u}_1 + a_2 \mathbf{u}_2$ at the Relay

User 1 transmits $\mathbf{x}_1 = (\mathbf{u}_1 - \mathbf{d}_1) \bmod \Lambda_s$ and user 2 transmits $\mathbf{x}_2 = (\mathbf{u}_2 - \mathbf{d}_2) \bmod \Lambda_s$ simultaneously. The relay receives the analog superposition $\mathbf{y} = h_1 \mathbf{x}_1 + h_2 \mathbf{x}_2 + \mathbf{w}$, applies the MMSE scaling $\alpha^*$, removes the known dithers (adding $a_1 \mathbf{d}_1 + a_2 \mathbf{d}_2$ back modulo $\Lambda_s$), and rounds to the nearest fine-lattice point within the coarse-lattice Voronoi region. The decoded lattice point is $a_1 \mathbf{u}_1 + a_2 \mathbf{u}_2 \bmod \Lambda_s$: an integer combination of the two messages, not either message individually.

Example: Computing $R(\mathbf{h}, \mathbf{a})$ at $\mathbf{h} = (1, 2)^T$, $\mathbf{a} = (1, 2)^T$, SNR 10 dB

For the $K = 2$ Gaussian MAC-relay channel with $\mathbf{h} = (1, 2)^T$, $\mathbf{a} = (1, 2)^T$, and $\text{SNR} = 10\,\text{dB} = 10$, compute the Nazer-Gastpar rate $R(\mathbf{h}, \mathbf{a})$ and compare it to the single-user capacity $\tfrac{1}{2}\log_2(1 + \text{SNR}) = 1.73$ bits.
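A quick numerical check of this example (a sketch; SNR in linear units, base-2 logs):

```python
import math

h, a, snr = (1.0, 2.0), (1.0, 2.0), 10.0       # 10 dB = factor 10, linear
ha = h[0] * a[0] + h[1] * a[1]                 # h.a = 5
n_eff = (a[0]**2 + a[1]**2) - snr * ha**2 / (1 + snr * (h[0]**2 + h[1]**2))
R = 0.5 * math.log2(1 / n_eff)                 # N_eff = 5 - 250/51
C = 0.5 * math.log2(1 + snr)                   # single-user capacity
print(f"R = {R:.3f} bits, C = {C:.3f} bits")
```

The exact match of $\mathbf{a}$ to $\mathbf{h}$ keeps the computation rate within a small fraction of a bit of the single-user capacity.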

Example: Integer Mismatch: $\mathbf{h} = (1, \sqrt{2})^T$, $\mathbf{a} = (1, 1)^T$

For $\mathbf{h} = (1, \sqrt{2})^T$, $\mathbf{a} = (1, 1)^T$, and $\text{SNR} = 20\,\text{dB} = 100$, compute $R(\mathbf{h}, \mathbf{a})$ and check whether the integer vector $\mathbf{a} = (1, 2)^T$ gives a better rate.
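The comparison can be scripted under the same conventions (the `rate` helper is illustrative; SNR in linear units):

```python
import math

def rate(h, a, snr):
    ha = sum(x * y for x, y in zip(h, a))
    n_eff = sum(x * x for x in a) - snr * ha**2 / (1 + snr * sum(x * x for x in h))
    return max(0.0, 0.5 * math.log2(1 / n_eff))

h, snr = (1.0, math.sqrt(2)), 100.0            # 20 dB
r11 = rate(h, (1, 1), snr)
r12 = rate(h, (1, 2), snr)
print(f"a=(1,1): {r11:.3f} bits   a=(1,2): {r12:.3f} bits")
```

Here $(1, 1)$ wins by roughly half a bit: $\sqrt{2} \approx 1.414$ sits closer in direction to $(1, 1)$ than to $(1, 2)$.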

Example: Optimal $\mathbf{a}^*$ for $\mathbf{h} = (1, 1.5)^T$ at 15 dB

For the channel $\mathbf{h} = (1, 1.5)^T$ and SNR $15\,\text{dB} = 31.6$, find the integer vector $\mathbf{a}^* \in \mathbb{Z}^2$ that maximises $R(\mathbf{h}, \mathbf{a})$ among the candidates $\{(1, 1), (1, 2), (2, 3), (1, 0), (0, 1)\}$.
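The candidate search is a one-liner over the rate function; a sketch (helper is illustrative; SNR in linear units):

```python
import math

def rate(h, a, snr):
    ha = sum(x * y for x, y in zip(h, a))
    n_eff = sum(x * x for x in a) - snr * ha**2 / (1 + snr * sum(x * x for x in h))
    return max(0.0, 0.5 * math.log2(1 / n_eff))

h, snr = (1.0, 1.5), 10 ** 1.5                 # 15 dB ~ 31.6
candidates = [(1, 1), (1, 2), (2, 3), (1, 0), (0, 1)]
for a in sorted(candidates, key=lambda a: -rate(h, a, snr)):
    print(a, f"{rate(h, a, snr):.3f} bits")
```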

The Single-User Limit: $K = 1$

The Erez-Zamir single-user AWGN-capacity result (Ch. 16) is a degenerate case of the Nazer-Gastpar theorem. With $K = 1$, $h_1 = h$, $a_1 = 1$: $\|\mathbf{a}\|^2 = 1$, $\mathbf{h}^T \mathbf{a} = h$, $\|\mathbf{h}\|^2 = h^2$. Then
$$N_{\rm eff} = 1 - \frac{\text{SNR}\,h^2}{1 + \text{SNR}\,h^2} = \frac{1}{1 + \text{SNR}\,h^2},$$
and $R(\mathbf{h}, \mathbf{a}) = \tfrac{1}{2}\log_2(1 + \text{SNR}\,h^2)$, exactly the AWGN capacity at effective SNR $\text{SNR}\,h^2$. The MMSE coefficient reduces to $\alpha^* = \text{SNR}\,h/(1 + \text{SNR}\,h^2)$, the scalar Wiener filter. This continuity is reassuring: compute-and-forward specialises correctly to the classical case.
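The reduction is easy to verify numerically; a sketch with an illustrative helper:

```python
import math

def compute_rate(h, a, snr):
    ha = sum(x * y for x, y in zip(h, a))
    n_eff = sum(x * x for x in a) - snr * ha**2 / (1 + snr * sum(x * x for x in h))
    return max(0.0, 0.5 * math.log2(1 / n_eff))

# K = 1, a = (1,): the CF rate collapses to the AWGN capacity at SNR h^2
for h, snr in [(0.5, 10.0), (1.0, 100.0), (2.0, 3.0)]:
    cf = compute_rate((h,), (1,), snr)
    awgn = 0.5 * math.log2(1 + snr * h * h)
    assert abs(cf - awgn) < 1e-12
    print(f"h={h}: CF rate {cf:.4f} == AWGN capacity {awgn:.4f}")
```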


Common Mistake: A Negative $N_{\rm eff}$ Means an Arithmetic Slip

Mistake:

Computing $N_{\rm eff} = \|\mathbf{a}\|^2 - \text{SNR}\,(\mathbf{h}^T \mathbf{a})^2/(1 + \text{SNR}\,\|\mathbf{h}\|^2)$ and getting a negative number at high SNR, then concluding the formula is wrong or the rate is infinite.

Correction:

Algebraically, $N_{\rm eff}$ is always strictly positive. By Cauchy-Schwarz, $(\mathbf{h}^T \mathbf{a})^2 \le \|\mathbf{h}\|^2 \|\mathbf{a}\|^2$, so
$$\frac{\text{SNR}\,(\mathbf{h}^T \mathbf{a})^2}{1 + \text{SNR}\,\|\mathbf{h}\|^2} \le \|\mathbf{a}\|^2 \cdot \frac{\text{SNR}\,\|\mathbf{h}\|^2}{1 + \text{SNR}\,\|\mathbf{h}\|^2} < \|\mathbf{a}\|^2;$$
MMSE estimation can only reduce the effective noise variance, never make it negative. A negative computed value therefore signals an arithmetic slip; the usual culprits are dropping the $1 +{}$ in the denominator or plugging in SNR in dB rather than linear units. The same bound shows $\|\mathbf{a}\|^2(1 + \text{SNR}\,\|\mathbf{h}\|^2) - \text{SNR}\,(\mathbf{h}^T \mathbf{a})^2 \ge \|\mathbf{a}\|^2 \ge 1$, so $R(\mathbf{h}, \mathbf{a}) \le \tfrac{1}{2}\log_2(1 + \text{SNR}\,\|\mathbf{h}\|^2)$ automatically: the computation rate never exceeds the single-user capacity of the combined channel.
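By Cauchy-Schwarz, $N_{\rm eff}$ is strictly positive for any nonzero integer vector $\mathbf{a}$, and this can be stress-tested with random draws; a sketch (seed and parameter ranges are arbitrary):

```python
import math
import random

random.seed(0)
for _ in range(10_000):
    K = random.randint(1, 4)
    h = [random.uniform(-3, 3) for _ in range(K)]
    a = [random.randint(-5, 5) for _ in range(K)]
    if all(x == 0 for x in a):
        continue                               # theorem excludes a = 0
    snr = 10 ** random.uniform(-2, 6)          # -20 dB to +60 dB, linear
    ha = sum(x * y for x, y in zip(h, a))
    n_eff = sum(x * x for x in a) - snr * ha**2 / (1 + snr * sum(x * x for x in h))
    assert n_eff > 0, (h, a, snr)              # Cauchy-Schwarz guarantees this
print("N_eff stayed strictly positive in every trial")
```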

Quick Check

In the Nazer-Gastpar computation-rate theorem, the MMSE scaling coefficient at the relay is $\alpha^* = \text{SNR}\,(\mathbf{h}^T \mathbf{a})/(1 + \text{SNR}\,\|\mathbf{h}\|^2)$. Which of the following is its correct limiting behaviour?

At $\text{SNR} \to 0$, $\alpha^* \to 0$ (the relay ignores the received signal).

At $\text{SNR} \to \infty$, $\alpha^* \to 1$ (the relay uses the raw received signal).

At $\text{SNR} \to \infty$, $\alpha^* \to \mathbf{h}^T \mathbf{a}/\|\mathbf{h}\|^2$ (the least-squares projection).

$\alpha^*$ depends on the codebook $\Lambda_c$ but not on $\mathbf{h}$.

🚨Critical Engineering Note

Compute-and-Forward Requires Precise Time and Frequency Sync

The rate formula $R(\mathbf{h}, \mathbf{a})$ assumes perfect symbol-level synchronisation among the $K$ transmitters, so that the analog sum $\sum_k h_k \mathbf{x}_k$ is sample-aligned at the relay. In practice, a timing offset of half a symbol destroys the lattice sum: $\mathbf{x}_1(t) + \mathbf{x}_2(t - T_s/2)$ is no longer an integer lattice point but a continuous-time-shifted mixture.

Practical Constraints
  • Timing alignment must be better than $T_s/10$, i.e., $\pm 3\,\mu$s for 5G NR 30-kHz SCS

  • Frequency alignment better than $1/(n T_s)$, i.e., $< 100$ Hz for a 360-symbol block

  • Phase noise and carrier frequency offset must be pre-compensated per user

  • Pilot exchange among users (or a common master clock / GPS reference) is mandatory
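The budget numbers above can be reproduced from the 30-kHz numerology; a sketch that uses the nominal symbol duration $T_s = 1/\Delta f$ and ignores the cyclic prefix (an assumption, since the exact NR symbol duration includes the CP):

```python
scs_hz = 30_000                      # 30 kHz subcarrier spacing
t_s = 1 / scs_hz                     # nominal OFDM symbol duration ~ 33.3 us
timing_budget = t_s / 10             # T_s / 10 timing-alignment requirement
n_symbols = 360
freq_budget = 1 / (n_symbols * t_s)  # 1 / (n T_s) over a 360-symbol block

print(f"T_s           = {t_s * 1e6:.1f} us")
print(f"timing budget = {timing_budget * 1e6:.1f} us")
print(f"freq budget   = {freq_budget:.1f} Hz  (< 100 Hz)")
```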

📋 Ref: 5G NR does NOT specify CF-style synchronised uplink — NR assumes orthogonal multiplexing via OFDMA
🔧Engineering Note

CF in Satellite Two-Way Communication Prototypes

Satellite two-way relay systems are a natural home for compute-and-forward: geostationary and LEO satellites relay uplink signals from two terrestrial terminals to each other. The quasi-static channel (low Doppler relative to a typical coding block), the fixed terminal positions (sync is feasible), and the throughput-limited nature of satellite backhaul all favour CF over AF/DF. Wübben and Rost (2015) reported a 2x throughput gain in a two-user satellite TWRC prototype with $\text{SNR} \approx 15$ dB and integer coefficients $\mathbf{a} = (1, 1)^T$, validating the theoretical 2-slot-vs-4-slot prediction of §3. Commercial adoption remains limited because the throughput gain is offset by hardware cost (phased arrays, precision synchronisation, lattice decoders).

Key Takeaway

The Nazer-Gastpar computation rate is $R(\mathbf{h}, \mathbf{a}) = \tfrac{1}{2}\log_2^+(1/N_{\rm eff})$ where $N_{\rm eff} = \|\mathbf{a}\|^2 - \text{SNR}\,(\mathbf{h}^T \mathbf{a})^2/(1 + \text{SNR}\,\|\mathbf{h}\|^2)$. It is achieved by a shared nested Erez-Zamir codebook, MMSE scaling $\alpha^* = \text{SNR}\,\mathbf{h}^T \mathbf{a}/(1 + \text{SNR}\,\|\mathbf{h}\|^2)$ at the relay, and modulo-$\Lambda_s$ decoding to the nearest $\Lambda_c$ point. The rate is high when $\mathbf{a}$ is a close integer approximation to $\mathbf{h}$; it reduces to the single-user AWGN capacity at $K = 1$.