Compute-and-Forward
Beyond Decode or Compress: Computing Functions
Decode-and-forward decodes the entire message. Compress-and-forward forwards a compressed observation. Is there something in between? Compute-and-forward (CaF) answers this with an elegant idea: the relay decodes a function of the transmitted messages, specifically a linear combination over an algebraic structure.
The key insight, due to Nazer and Gastpar (2011), is that if we use lattice codes instead of random codes, the superposition of codewords at the relay is itself a lattice codeword. The relay can therefore decode a linear combination of the source messages without decoding any individual message. This idea bridges channel coding and network coding: we get the benefits of network coding (forwarding functions rather than individual messages) even over noisy channels.
Definition: Lattice and Nested Lattice Code
A lattice $\Lambda$ is a discrete additive subgroup of $\mathbb{R}^n$: if $\lambda_1, \lambda_2 \in \Lambda$, then $\lambda_1 + \lambda_2 \in \Lambda$ and $-\lambda_1 \in \Lambda$.
A nested lattice code uses two lattices: a fine lattice $\Lambda_f$ (for coding) and a coarse lattice $\Lambda_c \subseteq \Lambda_f$ (for shaping). The codebook is $\mathcal{C} = \Lambda_f \cap \mathcal{V}_c$, where $\mathcal{V}_c$ is the fundamental Voronoi region of the coarse lattice. The rate is $R = \frac{1}{n} \log_2 |\mathcal{C}|$.
The nested lattice construction is the analogue of "random binning" for structured codes. The fine lattice provides the coding gain, while the coarse lattice provides the shaping gain and enforces the power constraint.
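To make this concrete, here is a minimal one-dimensional sketch in Python; the choice of fine lattice $\mathbb{Z}$, coarse lattice $q\mathbb{Z}$, and nesting ratio $q = 8$ is purely illustrative, not part of the definition above.

```python
import numpy as np

# Minimal 1-D nested lattice code: fine lattice Z, coarse lattice q*Z.
# The codebook is the set of fine-lattice points inside the coarse lattice's
# fundamental Voronoi region [-q/2, q/2); the rate is (1/n) log2 |codebook|.
q = 8                                       # nesting ratio (illustrative)
codebook = np.arange(-q // 2, q // 2)       # Z intersected with [-q/2, q/2)
rate = np.log2(len(codebook))               # bits per dimension (n = 1)
print(f"codebook = {codebook}, rate = {rate} bits/dim")

def mod_coarse(x, q=q):
    """Reduce x modulo the coarse lattice q*Z into [-q/2, q/2)."""
    return (x + q / 2) % q - q / 2
```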
Nested Lattice Code
A structured channel code defined by two nested lattices: a fine lattice for channel coding and a coarse lattice for power shaping. The codewords are fine lattice points within the Voronoi region of the coarse lattice.
Related: Compute-and-Forward
Definition: Compute-and-Forward Framework
In the compute-and-forward framework:
- $L$ sources each encode their message $w_\ell$ using a common nested lattice code, mapping it to a lattice point $\mathbf{t}_\ell \in \Lambda_f \cap \mathcal{V}_c$.
- The transmitted codeword is $\mathbf{x}_\ell = [\mathbf{t}_\ell + \mathbf{d}_\ell] \bmod \Lambda_c$, where $\mathbf{d}_\ell$ is a dither vector known to all parties.
- A relay receives $\mathbf{y} = \sum_{\ell=1}^{L} h_\ell \mathbf{x}_\ell + \mathbf{z}$, where $h_1, \dots, h_L$ are channel coefficients and $\mathbf{z}$ is Gaussian noise.
- The relay decodes an integer linear combination $\mathbf{v} = \left[\sum_{\ell=1}^{L} a_\ell \mathbf{t}_\ell\right] \bmod \Lambda_c$ for some integer coefficient vector $\mathbf{a} = (a_1, \dots, a_L) \in \mathbb{Z}^L$.
The relay's task is to find $\mathbf{a}$ and decode the corresponding linear combination $\mathbf{v}$. The algebraic structure of the lattice ensures that $\sum_\ell a_\ell \mathbf{t}_\ell$ is itself a lattice point, making decoding well-defined.
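The sketch below checks this algebra numerically in the clean case of an integer channel with no noise (the one-dimensional lattices, block length, and coefficients are illustrative; the full scheme also handles non-integer $h_\ell$ and noise via the MMSE scaling discussed below).

```python
import numpy as np

rng = np.random.default_rng(0)
q, n = 8, 6                        # nesting ratio, block length (illustrative)

def mod_coarse(x):
    """Reduce mod the coarse lattice q*Z^n into the Voronoi region [-q/2, q/2)^n."""
    return (x + q / 2) % q - q / 2

# Lattice codewords: fine-lattice (Z^n) points in the coarse Voronoi region
t1 = rng.integers(-q // 2, q // 2, size=n).astype(float)
t2 = rng.integers(-q // 2, q // 2, size=n).astype(float)
d1, d2 = rng.uniform(-q / 2, q / 2, size=(2, n))   # dithers known to everyone

x1, x2 = mod_coarse(t1 + d1), mod_coarse(t2 + d2)  # transmitted codewords

a1, a2 = 1, 2                      # integer channel: h = a = (1, 2), no noise
y = a1 * x1 + a2 * x2              # relay's observation

# The relay removes the dithers and reduces mod the coarse lattice
estimate = mod_coarse(y - a1 * d1 - a2 * d2)
v = mod_coarse(a1 * t1 + a2 * t2)  # target linear combination of codewords
assert np.allclose(mod_coarse(estimate - v), 0, atol=1e-9)  # recovered exactly
```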
Compute-and-Forward
A relay strategy where the relay decodes an integer linear combination of the transmitted lattice codewords rather than individual messages. Bridges channel coding and network coding for noisy networks.
Related: Nested Lattice Code
Theorem: Compute-and-Forward Achievable Rate
Consider $L$ sources transmitting over a Gaussian multiple-access channel to a relay with channel coefficients $\mathbf{h} = (h_1, \dots, h_L)$ and noise variance $\sigma^2$. Each source uses a nested lattice code with power $P$. The relay can decode the integer linear combination with coefficient vector $\mathbf{a} \in \mathbb{Z}^L$ at any rate $R < \frac{1}{2} \log_2^+\!\left( \frac{P}{\sigma_{\text{eff}}^2(\mathbf{h}, \mathbf{a})} \right)$, where the effective noise $\sigma_{\text{eff}}^2(\mathbf{h}, \mathbf{a})$ depends on how well $\mathbf{a}$ "aligns" with $\mathbf{h}$.
The optimal coefficient vector is $\mathbf{a} = \lfloor \alpha \mathbf{h} \rceil$ (componentwise rounding) for an appropriate scaling $\alpha$, i.e., the integer vector closest to a scaled version of the channel vector.
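As a quick illustration of this rounding heuristic (the channel vector and the grid of $\alpha$ values are assumptions made for the example; the principled choice of $\alpha$ is the MMSE scaling described below):

```python
import numpy as np

# Rounding heuristic: candidate a is the integer vector nearest to alpha*h.
h = np.array([1.0, 1.9])                 # example channel vector (assumed)
for alpha in (0.5, 1.0, 2.0):            # illustrative scalings
    a = np.round(alpha * h).astype(int)
    gap = np.linalg.norm(alpha * h - a)  # residual misalignment
    print(f"alpha = {alpha}: a = {a}, ||alpha*h - a|| = {gap:.3f}")
```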
The rate depends on how well the integer coefficient vector $\mathbf{a}$ matches the channel vector $\mathbf{h}$. When $\mathbf{h}$ is close to an integer (or rational) vector, the relay can decode a linear combination at a high rate because the channel naturally "computes" close to an integer combination. When $\mathbf{h}$ is irrational or poorly aligned, the effective noise is large and the rate drops.
The point is that compute-and-forward is most powerful when the channel has a "nice" algebraic structure: a fascinating connection between number theory and communication.
MMSE scaling
The relay scales its observation: $\alpha \mathbf{y} = \sum_\ell \alpha h_\ell \mathbf{x}_\ell + \alpha \mathbf{z}$. The MMSE scaling $\alpha^\star = \frac{P\, \mathbf{h}^T \mathbf{a}}{\sigma^2 + P \|\mathbf{h}\|^2}$ minimizes the variance of the "effective noise" $\mathbf{z}_{\text{eff}} = \sum_\ell (\alpha h_\ell - a_\ell)\, \mathbf{x}_\ell + \alpha \mathbf{z}$.
Effective noise computation
The effective noise has variance per dimension $\sigma_{\text{eff}}^2 = \alpha^2 \sigma^2 + P \|\alpha \mathbf{h} - \mathbf{a}\|^2$. Substituting $\alpha^\star$ and simplifying gives the effective SNR, from which the rate follows via the lattice coding theorem (Erez and Zamir, 2004).
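A short Python sketch of this computation, assuming the rate expression $R = \frac{1}{2}\log_2^+(P/\sigma_{\text{eff}}^2)$ from the theorem above (the function and parameter names are illustrative):

```python
import numpy as np

def caf_rate(h, a, P, sigma2=1.0):
    """CaF computation rate (bits/dim) for channel h and integer vector a,
    using the MMSE scaling alpha* = P h^T a / (sigma2 + P ||h||^2)."""
    h, a = np.asarray(h, float), np.asarray(a, float)
    alpha = P * (h @ a) / (sigma2 + P * (h @ h))
    eff_noise = alpha**2 * sigma2 + P * np.sum((alpha * h - a) ** 2)
    return 0.5 * max(0.0, np.log2(P / eff_noise))

# Example: a nearly integer channel, so a = (1, 2) aligns well
print(f"R = {caf_rate([1.0, 1.98], [1, 2], P=10.0):.3f} bits/dim")
```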
Choosing the coefficient vector
The optimal $\mathbf{a}$ minimizes the effective noise, which reduces to finding the shortest vector in a shifted lattice, a closest vector problem (CVP). For small $L$, this can be solved exactly; for large $L$, efficient algorithms such as LLL lattice reduction provide good approximations.
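A brute-force version of this search for small $L$ (the search radius amax is an illustrative cutoff; the rate function repeats the formula from the previous sketch so this block runs on its own):

```python
import itertools
import numpy as np

def caf_rate(h, a, P, sigma2=1.0):
    """Same rate computation as the previous sketch."""
    h, a = np.asarray(h, float), np.asarray(a, float)
    alpha = P * (h @ a) / (sigma2 + P * (h @ h))
    eff_noise = alpha**2 * sigma2 + P * np.sum((alpha * h - a) ** 2)
    return 0.5 * max(0.0, np.log2(P / eff_noise))

def best_coeffs(h, P, amax=4):
    """Exhaustive search over the integer box [-amax, amax]^L (fine for
    small L); LLL-based lattice reduction replaces this in high dimensions."""
    cands = itertools.product(range(-amax, amax + 1), repeat=len(h))
    return max((np.array(c) for c in cands if any(c)),
               key=lambda c: caf_rate(h, c, P))

h, P = [1.0, 1.9], 10.0                 # illustrative channel and power
a = best_coeffs(h, P)
print(f"best a = {a}, R = {caf_rate(h, a, P):.3f} bits/dim")
```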
Compute-and-Forward: Lattice Coding for Wireless Relay Networks
Nazer and Gastpar introduced the compute-and-forward framework, showing that structured (lattice) codes allow relays to decode integer linear combinations of codewords at rates that depend on how well the channel coefficients align with integer vectors. This unified physical-layer network coding with information-theoretic achievability, opening a rich research direction in algebraic network information theory.
The framework has had broad impact: it provides the foundation for physical-layer network coding, integer-forcing receivers for MIMO systems, and cooperative relay schemes where relays forward algebraic functions rather than individual messages.
Common Mistake: Compute-and-Forward Requires Channel-Coefficient Alignment
Mistake:
Expecting compute-and-forward to always outperform decode-and-forward because it "uses structure" and is "more sophisticated."
Correction:
CaF achieves high rates only when the scaled channel coefficient vector $\alpha \mathbf{h}$ is well approximated by an integer vector. For generic (irrational) channel coefficients, the mismatch between $\alpha \mathbf{h}$ and the best integer approximation $\mathbf{a}$ creates a rate penalty that can be severe. In such cases, DF or CF may achieve higher rates. The practical lesson: CaF is most beneficial in symmetric networks or when the channel gains happen to be near-integer ratios.
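The penalty is easy to see numerically; here is a sketch under the same assumptions as the earlier snippets (two sources, unit-variance noise, illustrative power $P = 100$), comparing a near-integer channel with a generic one:

```python
import itertools
import numpy as np

def caf_rate(h, a, P=100.0, sigma2=1.0):
    """Same rate computation as the earlier sketches."""
    h, a = np.asarray(h, float), np.asarray(a, float)
    alpha = P * (h @ a) / (sigma2 + P * (h @ h))
    eff_noise = alpha**2 * sigma2 + P * np.sum((alpha * h - a) ** 2)
    return 0.5 * max(0.0, np.log2(P / eff_noise))

def best_rate(h, amax=5):
    """Best computation rate over a small box of integer coefficient vectors."""
    cands = itertools.product(range(-amax, amax + 1), repeat=len(h))
    return max(caf_rate(h, c) for c in cands if any(c))

print(f"aligned h = (1, 2): {best_rate([1.0, 2.0]):.2f} bits/dim")
print(f"generic h = (1, e): {best_rate([1.0, np.e]):.2f} bits/dim")
```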
Historical Note: From Erez-Zamir to Nazer-Gastpar: Lattices in Information Theory
2004–2011
The use of lattices in information theory has a fascinating trajectory. Erez and Zamir (2004) showed that nested lattice codes can achieve the AWGN channel capacity, settling a long-standing question about whether structured codes can match the performance of random codes. This result was crucial for compute-and-forward: without capacity-achieving lattice codes, the algebraic structure of CaF would come at an unacceptable rate loss.
Nazer and Gastpar (2011) built on this foundation to show that lattice codes enable something random codes cannot: the ability to decode functions of codewords over noisy channels. This idea connects information theory to number theory (through lattice problems like the shortest vector problem) and to algebraic coding theory in unexpected ways.
Quick Check
In compute-and-forward, the relay decodes an integer linear combination $\left[\sum_\ell a_\ell \mathbf{t}_\ell\right] \bmod \Lambda_c$. Why must the coefficients $a_\ell$ be integers?
Because the channel coefficients are always integers
Because integer combinations of lattice points remain in the lattice, making decoding well-defined
To minimize the decoding complexity
Because the messages are binary
Correct. A lattice is closed under integer linear combinations: if $\mathbf{t}_1, \dots, \mathbf{t}_L \in \Lambda$, then $\sum_\ell a_\ell \mathbf{t}_\ell \in \Lambda$ for any $a_1, \dots, a_L \in \mathbb{Z}$. This algebraic closure is what makes the decoded linear combination a valid codeword that can be forwarded or further processed.
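A quick numerical check of this closure property (the basis matrix is an arbitrary example chosen for illustration):

```python
import numpy as np

# Closure check: integer combinations of lattice points are lattice points.
B = np.array([[2.0, 1.0],
              [0.0, 3.0]])               # columns generate an example lattice
t1, t2 = B @ np.array([1, -2]), B @ np.array([0, 3])  # two lattice points
v = 2 * t1 + 5 * t2                      # integer linear combination
coords = np.linalg.solve(B, v)           # coordinates of v in the basis B
print(coords, np.allclose(coords, np.round(coords)))  # integer coords -> in lattice
```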
Interactive Demo: Compute-and-Forward: Rate vs. Channel Alignment
Explore how the CaF achievable rate depends on the alignment between the channel coefficient vector $\mathbf{h} = (h_1, h_2)$ and integer vectors. When the ratio $h_2 / h_1$ is close to a simple fraction (e.g., 1, 2, 1/2), the rate is high. Parameters: the channel coefficients $h_1$ (source 1) and $h_2$ (source 2).