Compute-and-Forward

Beyond Decode or Compress: Computing Functions

Decode-and-forward decodes the entire message. Compress-and-forward forwards a compressed observation. Is there something in between? Compute-and-forward (CaF) answers this with an elegant idea: the relay decodes a function of the transmitted messages, specifically a linear combination over an algebraic structure.

The key insight, due to Nazer and Gastpar (2011), is that if we use lattice codes instead of random codes, the superposition of codewords at the relay is itself a lattice codeword. The relay can therefore decode a linear combination of the source messages without decoding any individual message. This idea bridges channel coding and network coding: we get the benefits of network coding (forwarding functions rather than individual messages) even over noisy channels.

Definition: Lattice and Nested Lattice Code

A lattice $\Lambda \subset \mathbb{R}^n$ is a discrete additive subgroup of $\mathbb{R}^n$: if $\lambda_1, \lambda_2 \in \Lambda$, then $\lambda_1 + \lambda_2 \in \Lambda$ and $\lambda_1 - \lambda_2 \in \Lambda$.

A nested lattice code uses two lattices: a fine lattice $\Lambda_f$ (for coding) and a coarse lattice $\Lambda_c \subset \Lambda_f$ (for shaping). The codebook is $\mathcal{C} = \Lambda_f \cap \mathcal{V}(\Lambda_c)$, where $\mathcal{V}(\Lambda_c)$ is the fundamental Voronoi region of the coarse lattice. The rate is
$$R = \frac{1}{n} \log |\mathcal{C}| = \frac{1}{n} \log \frac{\operatorname{Vol}(\mathcal{V}(\Lambda_c))}{\operatorname{Vol}(\mathcal{V}(\Lambda_f))}.$$

The nested lattice construction is the analogue of "random binning" for structured codes. The fine lattice provides the coding gain, while the coarse lattice provides the shaping gain and enforces the power constraint.
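To make the codebook-and-rate definition concrete, here is a minimal numerical sketch using a one-dimensional nested pair: fine lattice $\Lambda_f = \tfrac{1}{4}\mathbb{Z}$ inside coarse lattice $\Lambda_c = \mathbb{Z}$. The step sizes are illustrative choices, not part of the original definition:

```python
import numpy as np

# Toy 1-D nested lattice code (real CaF uses high-dimensional lattices).
fine_step = 0.25    # fine lattice Lambda_f = 0.25 * Z  (coding)
coarse_step = 1.0   # coarse lattice Lambda_c = 1 * Z   (shaping)

# Codebook: fine-lattice points inside one fundamental cell of the
# coarse lattice (using [0, 1) in place of the Voronoi region, which
# for 1-D lattices has the same volume).
codebook = np.arange(0.0, coarse_step, fine_step)
print(codebook)  # [0.   0.25 0.5  0.75]

# Rate per dimension: (1/n) log2 |C| = (1/n) log2 (Vol(V_c) / Vol(V_f)).
n = 1
rate = np.log2(len(codebook)) / n
print(rate)      # 2.0 bits per dimension
```

Note that $|\mathcal{C}| = \operatorname{Vol}(\mathcal{V}(\Lambda_c)) / \operatorname{Vol}(\mathcal{V}(\Lambda_f)) = 1 / 0.25 = 4$, so the two expressions for the rate agree.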


Nested Lattice Code

A structured channel code defined by two nested lattices: a fine lattice for channel coding and a coarse lattice for power shaping. The codewords are fine lattice points within the Voronoi region of the coarse lattice.

Related: Compute-and-Forward

Definition: Compute-and-Forward Framework

In the compute-and-forward framework:

  • LL sources each encode their message Wβ„“W_\ell using a common nested lattice code with lattice point tβ„“βˆˆΞ›f∩V(Ξ›c)\mathbf{t}_\ell \in \Lambda_f \cap \mathcal{V}(\Lambda_c).
  • The transmitted codeword is xβ„“=[tβ„“+dβ„“]β€Šmodβ€ŠΞ›c\mathbf{x}_\ell = [\mathbf{t}_\ell + \mathbf{d}_\ell] \bmod \Lambda_c, where dβ„“\mathbf{d}_\ell is a dither vector known to all parties.
  • A relay receives y=βˆ‘β„“=1Lhβ„“xβ„“+z\mathbf{y} = \sum_{\ell=1}^L h_\ell \mathbf{x}_\ell + \mathbf{z}, where hβ„“h_\ell are channel coefficients and z\mathbf{z} is noise.
  • The relay decodes an integer linear combination u=⨁ℓ=1Laβ„“tβ„“=[βˆ‘β„“aβ„“tβ„“]β€Šmodβ€ŠΞ›c\mathbf{u} = \bigoplus_{\ell=1}^L a_\ell \mathbf{t}_\ell = [\sum_\ell a_\ell \mathbf{t}_\ell] \bmod \Lambda_c for some integer coefficient vector a=(a1,…,aL)T∈ZL\mathbf{a} = (a_1, \ldots, a_L)^T \in \mathbb{Z}^L.

The relay's task is to find $\mathbf{a}$ and decode the corresponding linear combination $\mathbf{u}$. The algebraic structure of the lattice ensures that $\sum_\ell a_\ell \mathbf{t}_\ell$ is itself a lattice point, making decoding well-defined.
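The encode/decode chain above can be checked numerically in the simplest possible setting: a scalar nested lattice ($\Lambda_f = \tfrac{1}{4}\mathbb{Z}$, $\Lambda_c = \mathbb{Z}$, so $\bmod\,\Lambda_c$ is just $\bmod\,1$), a noiseless channel, and integer gains equal to the desired coefficients $a_\ell$. All values below are illustrative; the point is that subtracting the scaled dithers and reducing mod $\Lambda_c$ recovers exactly $\mathbf{u} = [\sum_\ell a_\ell \mathbf{t}_\ell] \bmod \Lambda_c$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Scalar nested lattice: fine = 0.25*Z, coarse = Z; mod-Lambda_c is mod 1.
fine_step, coarse = 0.25, 1.0
L = 3
t = rng.integers(0, 4, size=L) * fine_step  # lattice codewords t_l in [0, 1)
d = rng.random(L)                           # dithers d_l, known to all parties
x = (t + d) % coarse                        # x_l = [t_l + d_l] mod Lambda_c

# Noiseless channel whose integer gains equal the target coefficients a.
a = np.array([1, 2, 1])
y = np.dot(a, x)

# Relay: subtract the a-scaled dithers, reduce mod the coarse lattice,
# then quantize to the fine lattice. Because the a_l are integers, the
# dither offsets fold back into the coarse lattice and cancel.
u_hat = (y - np.dot(a, d)) % coarse
u_hat = np.round(u_hat / fine_step) * fine_step % coarse

u_true = np.dot(a, t) % coarse              # u = [sum_l a_l t_l] mod Lambda_c
print(u_hat, u_true)                        # the two values agree
```

If any $a_\ell$ were non-integer, the $\bmod\,\Lambda_c$ reduction would no longer cancel the unknown coarse-lattice shifts, and the decoded value would not be a codeword.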

Compute-and-Forward

A relay strategy where the relay decodes an integer linear combination of the transmitted lattice codewords rather than individual messages. Bridges channel coding and network coding for noisy networks.

Related: Nested Lattice Code

Theorem: Compute-and-Forward Achievable Rate

Consider $L$ sources transmitting over a Gaussian multiple-access channel to a relay with channel coefficients $\mathbf{h} = (h_1, \ldots, h_L)^T$ and noise variance $\sigma^2$. Each source uses a nested lattice code with power $P$. The relay can decode the integer linear combination with coefficient vector $\mathbf{a} \in \mathbb{Z}^L$ at any rate
$$R_{\text{CaF}}(\mathbf{h}, \mathbf{a}) = \frac{1}{2} \log^+\!\left(\frac{P/\sigma^2}{\|\mathbf{a}\|^2 + (P/\sigma^2)\bigl(\|\mathbf{a}\|^2 - |\mathbf{h}^T \mathbf{a}|^2 / \|\mathbf{h}\|^2\bigr)}\right),$$
where $\log^+ x = \max(\log x, 0)$ and the effective noise depends on how well $\mathbf{a}$ "aligns" with $\mathbf{h}$.

The optimal coefficient vector is $\mathbf{a}^* = \arg\min_{\mathbf{a} \in \mathbb{Z}^L \setminus \{\mathbf{0}\}} \|\mathbf{a} - \alpha \mathbf{h}\|^2$ for an appropriate scaling $\alpha$, i.e., the integer vector closest to a scaled version of the channel vector.

The rate depends on how well the integer coefficient vector a\mathbf{a} matches the channel vector h\mathbf{h}. When h\mathbf{h} is close to an integer (or rational) vector, the relay can decode a linear combination at a high rate because the channel naturally "computes" close to an integer combination. When h\mathbf{h} is irrational or poorly aligned, the effective noise is large and the rate drops.
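The alignment effect is easy to see numerically. The sketch below implements the theorem's rate expression directly and brute-forces the best small integer vector $\mathbf{a}$ (a stand-in for the closest-integer-vector search; the SNR value and search radius are illustrative):

```python
import itertools
import numpy as np

def caf_rate(h, a, snr):
    """Rate from the theorem above, with log^+ = max(log, 0), snr = P/sigma^2."""
    h, a = np.asarray(h, float), np.asarray(a, float)
    misalign = np.dot(a, a) - np.dot(h, a) ** 2 / np.dot(h, h)
    denom = np.dot(a, a) + snr * misalign
    return max(0.5 * np.log2(snr / denom), 0.0)

def best_coeffs(h, snr, amax=5):
    """Brute-force the best nonzero integer vector with entries in [-amax, amax]."""
    best_rate, best_a = 0.0, None
    for a in itertools.product(range(-amax, amax + 1), repeat=len(h)):
        if any(a):
            r = caf_rate(h, a, snr)
            if r > best_rate:
                best_rate, best_a = r, np.array(a)
    return best_rate, best_a

snr = 100.0  # P / sigma^2
print(best_coeffs([1.0, 2.0], snr))           # integer channel: a aligns exactly, high rate
print(best_coeffs([1.0, np.sqrt(2.0)], snr))  # irrational ratio: residual misalignment, lower rate
```

For $\mathbf{h} = (1, 2)$ the misalignment term vanishes at $\mathbf{a} = \pm(1, 2)$ and the rate is $\tfrac{1}{2}\log_2(\mathrm{SNR}/\|\mathbf{a}\|^2)$; for $\mathbf{h} = (1, \sqrt{2})$ every integer vector leaves residual effective noise and the achievable rate drops.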

Compute-and-forward is thus most powerful when the channel has a "nice" algebraic structure, a fascinating connection between number theory and communication.

🎓 CommIT Contribution (2011)

Compute-and-Forward: Harnessing Interference Through Structured Codes

B. Nazer and M. Gastpar, IEEE Transactions on Information Theory

Nazer and Gastpar introduced the compute-and-forward framework, showing that structured (lattice) codes allow relays to decode integer linear combinations of codewords at rates that depend on how well the channel coefficients align with integer vectors. This unified physical-layer network coding with information-theoretic achievability, opening a rich research direction in algebraic network information theory.

The framework has had broad impact: it provides the foundation for physical-layer network coding, integer-forcing receivers for MIMO systems, and cooperative relay schemes where relays forward algebraic functions rather than individual messages.


Common Mistake: Compute-and-Forward Requires Channel-Coefficient Alignment

Mistake:

Expecting compute-and-forward to always outperform decode-and-forward because it "uses structure" and is "more sophisticated."

Correction:

CaF achieves high rates only when the channel coefficient vector h\mathbf{h} is well approximated by an integer vector. For generic (irrational) channel coefficients, the mismatch between h\mathbf{h} and the best integer approximation a\mathbf{a} creates a rate penalty that can be severe. In such cases, DF or CF may achieve higher rates. The practical lesson: CaF is most beneficial in symmetric networks or when the channel gains happen to be near-integer ratios.

Historical Note: From Erez-Zamir to Nazer-Gastpar: Lattices in Information Theory

2004-2011

The use of lattices in information theory has a fascinating trajectory. Erez and Zamir (2004) showed that nested lattice codes can achieve the AWGN channel capacity, settling a long-standing question about whether structured codes can match the performance of random codes. This result was crucial for compute-and-forward: without capacity-achieving lattice codes, the algebraic structure of CaF would come at an unacceptable rate loss.

Nazer and Gastpar (2011) built on this foundation to show that lattice codes enable something random codes cannot: the ability to decode functions of codewords over noisy channels. This idea connects information theory to number theory (through lattice problems like the shortest vector problem) and to algebraic coding theory in unexpected ways.

Quick Check

In compute-and-forward, the relay decodes an integer linear combination $\mathbf{u} = \bigl[\sum_\ell a_\ell \mathbf{t}_\ell\bigr] \bmod \Lambda_c$. Why must the coefficients $a_\ell$ be integers?

Because the channel coefficients are always integers

Because integer combinations of lattice points remain in the lattice, making decoding well-defined

To minimize the decoding complexity

Because the messages are binary

Compute-and-Forward: Rate vs. Channel Alignment

Explore how the CaF achievable rate depends on the alignment between the channel coefficient vector and integer vectors. When $h_1/h_2$ is close to a simple ratio (e.g., 1, 2, or 1/2), the rate is high.

[Interactive demo: adjustable channel coefficients for source 1 and source 2.]