Noisy Network Coding
A Universal Strategy for Relay Networks
We have seen that compress-and-forward achieves within 0.5 bits of the cut-set bound for the single-relay Gaussian channel. But what about general networks with multiple relays? Can we find a single strategy that works well for any network topology?
Noisy network coding (NNC), introduced by Lim, Kim, El Gamal, and Chung (2011), answers this affirmatively. NNC generalizes compress-and-forward to arbitrary networks: every relay compresses its observation and maps it to a channel input, and the destination jointly decodes using all received signals. The key result is that NNC achieves within a constant gap of the cut-set bound for any Gaussian relay network, and the gap depends only on the number of nodes, not on the channel parameters.
Definition: Noisy Network Coding (NNC)
Consider a network with $N$ nodes, a source node $1$, and a destination node $N$. Nodes $2, \ldots, N-1$ are relays. The noisy network coding scheme operates as follows:
- Compression: Each relay $k$ quantizes its received signal $Y_k$ to a compressed version $\hat{Y}_k$ using a random codebook at rate $\hat{R}_k$.
- Random mapping: Each relay maps its compression index to a channel input codeword $X_k$ drawn from an i.i.d. Gaussian codebook.
- Joint decoding: The destination jointly decodes the source message and all compression indices from its own observation $Y_N$.
The scheme operates over many long blocks, with the same message transmitted in every block, and the compression codebooks are generated independently across blocks. Unlike standard CF (which uses Wyner-Ziv binning and sequential decoding), NNC uses joint decoding of all compression indices simultaneously.
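The three steps above can be made concrete with a toy sketch (not part of the original scheme's analysis). All parameters and names here are illustrative, and nearest-codeword search stands in for joint-typicality encoding:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n = 8                  # toy block length
num_q = 2 ** 12        # quantization codewords, playing the role of 2^{n R_hat}
num_x = 2 ** 12        # transmission codewords, same count for simplicity
P, sigma2 = 4.0, 1.0   # transmit power and noise variance (assumed values)

# Random codebooks: quantization codewords drawn to match the observation's
# marginal statistics; transmission codewords i.i.d. Gaussian with power P.
q_codebook = rng.normal(0.0, np.sqrt(P + sigma2), size=(num_q, n))
x_codebook = rng.normal(0.0, np.sqrt(P), size=(num_x, n))

# Fixed random hash from compression indices to transmission codewords.
hash_map = rng.integers(0, num_x, size=num_q)

def relay_step(y):
    """One NNC relay step: quantize the observed block, then map the
    compression index through the random hash to the next channel input."""
    idx = int(np.argmin(np.sum((q_codebook - y) ** 2, axis=1)))  # nearest codeword
    return x_codebook[hash_map[idx]]

y = rng.normal(0.0, np.sqrt(P + sigma2), size=n)  # stand-in received block
x_next = relay_step(y)
print(f"next transmit codeword has power ~ {np.mean(x_next**2):.2f}")
```

Note that the relay never decodes anything: its entire operation is quantize, hash, transmit.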
Noisy Network Coding
A generalization of compress-and-forward to arbitrary relay networks where each relay quantizes its observation and maps it to a channel input via random coding. The destination jointly decodes the source message and all compression indices. Achieves within a constant gap of the cut-set bound for Gaussian networks.
Related: Compress-and-Forward
Theorem: Noisy Network Coding Achievable Rate
For a general discrete memoryless relay network with source node 1, destination node $N$, and relay nodes $2, \ldots, N-1$, noisy network coding achieves all rates
$$R < \min_{\mathcal{S}} \Big[ I\big(X(\mathcal{S}); \hat{Y}(\mathcal{S}^c), Y_N \mid X(\mathcal{S}^c)\big) - I\big(Y(\mathcal{S}); \hat{Y}(\mathcal{S}) \mid X^N, \hat{Y}(\mathcal{S}^c), Y_N\big) \Big],$$
where the minimum is over all cut sets $\mathcal{S}$ containing the source but not the destination (equivalently, over all subsets of relay nodes adjoined to the source), and the mutual informations are evaluated under the product distribution $\prod_{k} p(x_k)\, p(\hat{y}_k \mid y_k, x_k)$.
The rate expression has a "cut-set-like" structure: for each subset $\mathcal{S}$ of relays that "cooperate" with the source, the achievable rate is bounded by the mutual information across the cut minus the cost of describing the compression indices of the relays on the source side of the cut, whose compressions the destination cannot fully resolve. The scheme is universal in the sense that it does not require the relays to know the network topology or the other relays' codebooks.
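To make the theorem concrete, here is a minimal numerical sketch (not from the original text) that evaluates this expression for a Gaussian network under the assumptions NNC itself uses: independent $\mathcal{N}(0, P)$ inputs at every node and Gaussian test channels $\hat{Y}_k = Y_k + \hat{Z}_k$. Under these assumptions the first term is a log-det formula and the penalty term reduces to $\frac{1}{2}\log_2(1 + \sigma_k^2/\hat{\sigma}_k^2)$ per source-side relay. The function name, the 0-based node indexing (node 0 is the source, node $N-1$ the destination), and the example gains are all illustrative:

```python
import itertools
import numpy as np

def gaussian_nnc_rate(H, P, noise_var, q_var):
    """NNC achievable rate for a Gaussian network with independent N(0, P)
    inputs and Gaussian quantization Yhat_k = Y_k + Zhat_k at each relay.

    H[k, j] is the gain from node j's input to node k's output (H[k, k] = 0);
    node 0 is the source, node N-1 the destination, nodes 1..N-2 relays.
    """
    N = H.shape[0]
    relays = list(range(1, N - 1))
    best = np.inf
    for r in range(len(relays) + 1):
        for T in itertools.combinations(relays, r):
            S = [0] + list(T)                                  # source side of the cut
            Dc = [k for k in relays if k not in T] + [N - 1]   # destination-side observers
            G = H[np.ix_(Dc, S)]                               # channel across the cut
            # Effective noise: channel noise, plus quantization noise at relays
            # (the destination uses its observation directly, unquantized).
            sigma = np.array([noise_var[k] + (q_var[k] if k != N - 1 else 0.0)
                              for k in Dc])
            Sigma = np.diag(sigma)
            _, logdet_num = np.linalg.slogdet(Sigma + P * (G @ G.T))
            _, logdet_den = np.linalg.slogdet(Sigma)
            cut_value = 0.5 * (logdet_num - logdet_den) / np.log(2)
            # Penalty: describing the compressions of the source-side relays.
            penalty = sum(0.5 * np.log2(1.0 + noise_var[k] / q_var[k]) for k in T)
            best = min(best, cut_value - penalty)
    return best

# Example: single-relay channel, noise-level quantization, illustrative gains.
H = np.array([[0., 0., 0.],
              [2., 0., 0.],    # source -> relay
              [1., 1., 0.]])   # source -> dest, relay -> dest
nv = np.ones(3)
print(f"NNC rate: {gaussian_nnc_rate(H, P=10.0, noise_var=nv, q_var=nv):.3f} bits")
```

Quantizing at the noise level corresponds to `q_var = noise_var`, the choice used in the constant-gap result below.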
Codebook generation
For each node $k$, generate $2^{n\hat{R}_k}$ compression codewords $\hat{y}_k^n$ i.i.d. $\sim p(\hat{y}_k)$ and transmission codewords $x_k^n$ i.i.d. $\sim p(x_k)$, indexed by the message (for the source) or by a random mapping of the compression index (for the relays).
Relay operation
In each block, relay $k$ finds a compression codeword $\hat{y}_k^n$ jointly typical with its received sequence $y_k^n$ and its input $x_k^n$. The compression index is mapped to the next block's transmission codeword via a deterministic function (random hash).
Joint decoding at destination
The destination looks for the unique message $\hat{m}$ and compression indices such that all sequences are jointly typical in every block. Error analysis via the multivariate packing lemma gives the rate constraint for each cut $\mathcal{S}$.
Noisy Network Coding
Lim, Kim, El Gamal, and Chung introduced noisy network coding as a universal relay strategy for general networks. The scheme generalizes compress-and-forward by replacing Wyner-Ziv binning with joint decoding of all compression indices at the destination. This simplification enables a single scheme to achieve within a constant gap of the cut-set bound for any Gaussian network, a landmark result in network information theory.
Theorem: NNC Constant-Gap Result for Gaussian Networks
For any Gaussian relay network with $N$ nodes, noisy network coding with Gaussian quantization at the noise level achieves within $(N-1)/2$ bits of the cut-set bound. The gap depends only on the number of nodes $N$, not on the channel gains, powers, or SNRs.
Quantization at the noise level costs each relay at most 0.5 bits, and using independent inputs instead of correlated ones costs roughly another 0.5 bits, similar to the single-relay case. With $N-2$ relays, the total gap is at most $(N-1)/2$ bits. While this grows with the network size, it is independent of SNR, a universal approximation.
The point is that noisy network coding reduces the capacity approximation problem for any Gaussian network to within a constant that depends only on the network size. For a network with $N = 5$ nodes, the gap is at most 2 bits, quite useful in practice.
Quantize at the noise level
Each relay $k$ sets its quantization distortion equal to $\sigma_k^2$ (the noise variance at relay $k$). This choice ensures that the compression index can be forwarded at a rate not exceeding the relay-to-downstream link capacity, while capturing all the useful information in $Y_k$ beyond the noise floor.
Gap analysis per cut
For each cut $\mathcal{S}$, the NNC rate with Gaussian quantization is the cut's mutual information less a bounded compression overhead. The overhead for each relay is at most $1/2$ bit: a relay on the destination side of the cut contributes doubled effective noise (quantizing at the noise level costs at most half a bit of rate), while a relay on the source side costs at most half a bit to describe its compression index. The total gap across any cut is at most $(N-2)/2$ bits.
Independence loss
An additional 0.5 bits is lost from using independent inputs (no beamforming gain), giving a total gap of at most $(N-2)/2 + 1/2 = (N-1)/2$ bits.
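To see the half-bit claim concretely: quantizing at the noise level doubles the effective noise variance in the corresponding cut term, so for any received SNR $S$ across the cut,
$$\frac{1}{2}\log(1+S) - \frac{1}{2}\log\Big(1+\frac{S}{2}\Big) = \frac{1}{2}\log\frac{1+S}{1+S/2} \;\leq\; \frac{1}{2}\log 2 = \frac{1}{2} \text{ bit},$$
with equality approached only as $S \to \infty$.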
Example: NNC Applied to the Diamond Network
Apply noisy network coding to the Gaussian diamond network from Section 22.6 (source node 1, relays 2 and 3, destination node 4, with no direct source-destination link), using the channel gains and powers specified there. Compute the NNC rate and the cut-set bound.
Cut-set bound
Broadcast cut: $I(X_1; Y_2, Y_3)$. MAC cut: $I(X_2, X_3; Y_4)$. Together with the two mixed cuts, the cut-set bound is the minimum of the four cut values, evaluated with the gains and powers of Section 22.6.
NNC rate
Each relay quantizes at the noise level: $\hat{Y}_k = Y_k + \hat{Z}_k$ with $\hat{Z}_k \sim \mathcal{N}(0, \sigma_k^2)$. The NNC rate is the minimum cut value less the compression overhead. With $N = 4$ nodes, the gap is at most $1.5$ bits; the actual gap is typically much smaller than this worst-case bound (see the numerical sketch below).
Interpretation
NNC achieves a rate within about 1 bit of the cut-set bound for this 4-node network, without any coordination between relays. Each relay simply compresses and forwards independently. The gap would shrink further with optimized compression parameters.
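The numerical values from Section 22.6 are not reproduced in this example, so the following sketch evaluates the four cuts with assumed gains and power of its own; the structure of the computation is what matters. Noise variances are taken to be 1, and the cut values use independent inputs (what NNC itself employs; allowing correlated relay inputs on the MAC cut would add at most about half a bit):

```python
import numpy as np

# Illustrative diamond parameters (assumptions, not the Section 22.6 values).
P = 10.0
g12, g13 = 2.0, 1.0   # source -> relay 2, source -> relay 3
g24, g34 = 1.0, 2.0   # relay 2 -> destination, relay 3 -> destination

C = lambda snr: 0.5 * np.log2(1.0 + snr)  # Gaussian capacity in bits

broadcast = C((g12**2 + g13**2) * P)        # cut {1} vs {2,3,4}
mac       = C((g24**2 + g34**2) * P)        # cut {1,2,3} vs {4}
mixed_a   = C(g13**2 * P) + C(g24**2 * P)   # cut {1,2} vs {3,4}
mixed_b   = C(g12**2 * P) + C(g34**2 * P)   # cut {1,3} vs {2,4}
cutset = min(broadcast, mac, mixed_a, mixed_b)

# Worst-case NNC guarantee for an N = 4 node network: within 1.5 bits.
print(f"cut-set bound (indep. inputs): {cutset:.2f} bits")
print(f"NNC guaranteed at least:       {cutset - 1.5:.2f} bits")
```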
Example: NNC for a Serial Relay Chain
Consider a chain of relays: source → relay 1 → relay 2 → destination, with Gaussian links $1 \to 2$, $2 \to 3$, $3 \to 4$, all with the same noise variance and transmit powers. Compare DF (decode at each hop) with NNC.
DF rate (decode at each hop)
The DF rate is limited by the weakest hop: $R_{\mathrm{DF}} = \min_k \tfrac{1}{2}\log(1 + \mathrm{SNR}_k)$ bits.
NNC rate
With NNC, each relay compresses and forwards. The cut-set bound gives $\min_k \tfrac{1}{2}\log(1 + \mathrm{SNR}_k)$ bits (the bottleneck is any single hop). NNC with Gaussian quantization achieves approximately this value minus the per-relay compression overhead, roughly 1 bit below the cut-set bound for the two relays.
Comparison
For a serial chain, DF is clearly superior: each relay can fully decode (the observation contains only one upstream signal), so there is no loss from decoding. NNC is suboptimal here because it does not exploit the sequential structure. NNC shines in non-serial topologies (diamond, mesh) where decoding at relays may not be optimal.
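A quick numerical comparison under assumed parameters (identical per-hop SNR, with the rough half-bit-per-relay overhead estimate from the gap analysis above):

```python
import numpy as np

snr = 15.0                            # assumed identical per-hop SNR
C = lambda s: 0.5 * np.log2(1.0 + s)  # Gaussian capacity in bits

df_rate = C(snr)        # DF: each relay decodes; the weakest hop binds
# Rough NNC estimate: noise-level quantization costs about half a bit per
# relay across the binding cut, so about 1 bit in total for two relays.
nnc_rate = C(snr) - 1.0

print(f"DF: {df_rate:.2f} bits   NNC (approx.): {nnc_rate:.2f} bits")
```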
NNC Gap to Cut-Set Bound vs. Network Size
Visualize how the worst-case gap between NNC and the cut-set bound grows with the number of relay nodes. Compare with the actual gap for specific network topologies.
Common Mistake: NNC Is Not Always Better Than DF
Mistake:
Assuming NNC is universally superior because it has a "constant gap" guarantee.
Correction:
NNC's constant gap is a worst-case result. For specific topologies like serial relay chains or degraded relay channels, decode-and-forward can achieve the exact capacity while NNC falls short. The value of NNC is its universality: it works well for any topology without requiring topology-specific optimization. In practice, the best approach is to compare DF, CF, NNC, and hybrid schemes for the specific network at hand.
Historical Note: The Quest for a Universal Relay Strategy
2005–2011: The development of noisy network coding was the culmination of a decade-long quest for a universal relay strategy. Cover and El Gamal (1979) provided DF and CF for the single relay. Kramer, Gastpar, and Gupta (2005) extended these to networks but found that the gap could grow unboundedly. Avestimehr, Diggavi, and Tse (2011) showed, via the deterministic approach, that a constant-gap approximation exists for Gaussian networks, but their QMF scheme was complex and not based on standard information-theoretic tools.
Lim, Kim, El Gamal, and Chung (2011) achieved the same constant-gap result using a surprisingly simple modification of compress-and-forward: replace Wyner-Ziv binning with joint decoding. This insight, that joint decoding of compression indices is the key to universality, was unexpected and elegant.
Practical Implications of NNC
Noisy network coding has significant implications for practical relay network design:
- Simplicity: each relay compresses independently, so no coordination between relays is needed. This maps naturally to distributed systems.
- Robustness: the scheme works for any topology and channel parameters. No topology-specific optimization is required.
- Scalability: adding a new relay can only improve performance (it provides additional compressed observations).
The main practical challenge is the joint decoding at the destination, which has complexity exponential in the number of relays; the sketch after the list below illustrates the growth. For small networks (a handful of relays), this is feasible; for large networks, iterative decoding approximations are needed.
- Joint decoding complexity grows exponentially with the number of relays
- Each relay needs to estimate its noise level for optimal quantization
- Synchronization across relays is required for coherent reception
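A back-of-the-envelope count of the joint-decoding search space, with illustrative block length and rates:

```python
# Joint decoding searches over the message and one compression index per
# relay per block: |hypotheses| = 2^{nR} * prod_k 2^{n R_k}.
n, R, R_hat = 100, 1.0, 1.0   # block length and rates (assumed values)
for num_relays in range(1, 6):
    exponent = n * R + num_relays * n * R_hat
    print(f"{num_relays} relays: 2^{exponent:.0f} joint hypotheses")
```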
Quick Check
What is the key difference between standard compress-and-forward (Chapter 22) and noisy network coding?
NNC uses better compression codebooks
NNC replaces Wyner-Ziv binning with joint decoding of all compression indices at the destination
NNC allows relays to cooperate with each other
NNC uses structured (lattice) codes instead of random codes
Correct. In standard CF, the relay uses Wyner-Ziv binning and the destination decodes sequentially. In NNC, there is no binning: the destination jointly decodes the source message and all compression indices simultaneously. This removes the need for the Wyner-Ziv rate constraint to hold for each relay individually.
Key Takeaway
Noisy network coding provides a universal relay strategy: each relay compresses independently, and the destination jointly decodes everything. For any Gaussian relay network with $N$ nodes, NNC achieves within $(N-1)/2$ bits of the cut-set bound, a constant gap independent of channel parameters. This makes NNC the default strategy when the network topology is complex or unknown.