Noisy Network Coding

A Universal Strategy for Relay Networks

We have seen that compress-and-forward achieves within 0.5 bits of the cut-set bound for the single-relay Gaussian channel. But what about general networks with multiple relays? Can we find a single strategy that works well for any network topology?

Noisy network coding (NNC), introduced by Lim, Kim, El Gamal, and Chung (2011), answers this affirmatively. NNC generalizes compress-and-forward to arbitrary networks: every relay compresses its observation and maps it to a channel input, and the destination jointly decodes using all received signals. The key result is that NNC achieves within a constant gap of the cut-set bound for any Gaussian relay network, and the gap depends only on the number of nodes, not on the channel parameters.

Definition:

Noisy Network Coding (NNC)

Consider a network with $K$ nodes, a source node $1$, and a destination node $K$. Nodes $2, \ldots, K-1$ are relays. The noisy network coding scheme operates as follows:

  1. Compression: Each relay $k$ quantizes its received signal $Y_k^n$ to a compressed version $\hat{Y}_k^n$ using a random codebook at rate $\hat{R}_k$.

  2. Random mapping: Each relay maps its compression index to a channel input codeword $X_k^n$ drawn from an i.i.d. Gaussian codebook.

  3. Joint decoding: The destination jointly decodes the source message $W$ and all compression indices from its own observation $Y_K^n$.

The scheme repeats the same message over many blocks, and the compression codebooks are generated independently at each relay. Unlike standard CF (which uses Wyner-Ziv binning and sequential decoding), NNC uses joint decoding of all compression indices simultaneously.
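To make steps 1 and 2 concrete, here is a toy Python sketch of one relay's operation. It is a minimal illustration, not the actual scheme: nearest-neighbor quantization stands in for joint-typicality encoding, the block length and rates are illustrative, and all variable names are our own.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n = 8            # block length (toy value; the real scheme uses large n)
R_hat = 1.0      # compression rate R̂_k in bits per symbol
P_relay = 10.0   # relay transmit power
M = int(2 ** (n * R_hat))  # 2^{n R̂_k} compression indices

# Step 1 codebook: random quantization codewords Ŷ_k^n (variance is a
# toy choice near Var(Y_k) plus quantization noise).
yhat_book = rng.normal(0.0, np.sqrt(12.0), size=(M, n))
# Step 2 codebook: i.i.d. Gaussian channel inputs X_k^n, one per index.
x_book = rng.normal(0.0, np.sqrt(P_relay), size=(M, n))

def relay_block(y):
    """One NNC relay block: quantize the observation Y_k^n to the nearest
    quantization codeword, then transmit the channel-input codeword
    addressed by that compression index."""
    idx = int(np.argmin(np.sum((yhat_book - y) ** 2, axis=1)))
    return idx, x_book[idx]

y_received = rng.normal(0.0, np.sqrt(11.0), size=n)  # stand-in for Var(X+Z) = 11
idx, x_next = relay_block(y_received)
print(f"compression index = {idx}, first input symbol = {x_next[0]:+.2f}")
```

Note that the relay never decodes the message and never bins its compression index; it simply quantizes and forwards.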

Noisy Network Coding

A generalization of compress-and-forward to arbitrary relay networks where each relay quantizes its observation and maps it to a channel input via random coding. The destination jointly decodes the source message and all compression indices. Achieves within a constant gap of the cut-set bound for Gaussian networks.

Related: Compress-and-Forward

Theorem: Noisy Network Coding Achievable Rate

For a general discrete memoryless relay network with source node $1$, destination node $K$, and relay nodes $\mathcal{N} = \{2, \ldots, K-1\}$, noisy network coding achieves all rates

$$R < \min_{\mathcal{S} \subseteq \mathcal{N}} \left[ I(X_1, X_{\mathcal{S}}; \hat{Y}_{\mathcal{S}^c}, Y_K \mid X_{\mathcal{S}^c}) - I(Y_{\mathcal{S}}; \hat{Y}_{\mathcal{S}} \mid X_1, X_{\mathcal{N}}, \hat{Y}_{\mathcal{S}^c}, Y_K) \right],$$

where the minimum is over all subsets $\mathcal{S}$ of relay nodes, $\mathcal{S}^c = \mathcal{N} \setminus \mathcal{S}$, and the mutual informations are evaluated under the product distribution $\prod_k p(x_k)\, p(\hat{y}_k \mid y_k, x_k)$.

The rate expression has a "cut-set-like" structure: for each subset $\mathcal{S}$ of relays that "cooperate" with the source, the achievable rate is the mutual information across the cut minus a penalty for the quantization noise at the relays in $\mathcal{S}$, whose compressed observations must themselves be conveyed across the cut. The scheme is universal in the sense that it does not require the relays to know the network topology or the other relays' codebooks.
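The min-over-cuts structure of the theorem maps directly onto code. The sketch below enumerates all $2^{|\mathcal{N}|}$ relay subsets and takes the minimum; `cut_term` and `penalty_term` are placeholders for the network-specific mutual-information expressions (the Gaussian examples later in this section instantiate them in closed form). This is an assumed skeleton, not a library API.

```python
from itertools import chain, combinations
from typing import Callable, FrozenSet, Iterable

def subsets(relays: Iterable[int]):
    """All subsets S of the relay set N (one per cut in the NNC bound)."""
    relays = list(relays)
    return chain.from_iterable(
        combinations(relays, r) for r in range(len(relays) + 1))

def nnc_rate(relays: Iterable[int],
             cut_term: Callable[[FrozenSet[int]], float],
             penalty_term: Callable[[FrozenSet[int]], float]) -> float:
    """min over S of [ I(X_1, X_S; Ŷ_{S^c}, Y_K | X_{S^c})
                       - I(Y_S; Ŷ_S | X_1, X_N, Ŷ_{S^c}, Y_K) ]."""
    return min(cut_term(frozenset(S)) - penalty_term(frozenset(S))
               for S in subsets(relays))
```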

🎓 CommIT Contribution (2011)

Noisy Network Coding

S. H. Lim, Y.-H. Kim, A. El Gamal, and S.-Y. Chung, IEEE Transactions on Information Theory

Lim, Kim, El Gamal, and Chung introduced noisy network coding as a universal relay strategy for general networks. The scheme generalizes compress-and-forward by replacing Wyner-Ziv binning with joint decoding of all compression indices at the destination. This simplification enables a single scheme to achieve within a constant gap of the cut-set bound for any Gaussian network, a landmark result in network information theory.


Theorem: NNC Constant-Gap Result for Gaussian Networks

For any Gaussian relay network with $K$ nodes, noisy network coding with Gaussian quantization at the noise level achieves within $\text{gap} \leq \frac{K-1}{2}$ bits per channel use of the cut-set bound. The gap depends only on the number of nodes $K$, not on the channel gains, powers, or SNRs.

Each transmitting node introduces at most 0.5 bits of loss (from using independent inputs instead of correlated ones, as in the single-relay case). With $K-1$ transmitting nodes (the source plus the $K-2$ relays), the total gap is at most $(K-1)/2$ bits. While this grows with the network size, it is independent of SNR: a universal approximation.

The point is that noisy network coding reduces the capacity approximation problem for any Gaussian network to within a constant that depends only on the network size. For a network with 5 nodes, the gap is at most 2 bits, which is quite useful in practice.

Example: NNC Applied to the Diamond Network

Apply noisy network coding to the Gaussian diamond network from Section 22.6: $Y_{r_k} = h_k X + Z_{r_k}$ for $k = 1, 2$, and $Y = g_1 X_{r_1} + g_2 X_{r_2} + Z$, with $h_1 = h_2 = 1$, $g_1 = g_2 = 1$, $P = P_{r_1} = P_{r_2} = 10$, and $N = N_{r_1} = N_{r_2} = 1$. Compute the NNC rate and the cut-set bound.
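Here is a worked numerical sketch of this example, assuming independent Gaussian inputs and quantization noise equal to the receiver noise level (the choice used in the gap theorem). The closed-form cut expressions are our own instantiation of the NNC theorem for this symmetric diamond; treat the numbers as a sanity check rather than an authoritative solution.

```python
import math

def C(snr):
    """Gaussian capacity function 0.5 * log2(1 + SNR), in bits."""
    return 0.5 * math.log2(1 + snr)

P = Pr1 = Pr2 = 10.0   # source and relay powers
N = Nr1 = Nr2 = 1.0    # noise variances on all links
sq1, sq2 = Nr1, Nr2    # quantization noise at the receiver noise level

# S = {}: decode X from both compressed looks Ŷ_k = X + Z_k + Q_k.
r_empty = C(P / (Nr1 + sq1) + P / (Nr2 + sq2))
# S = {r1, r2}: MAC cut with independent inputs, minus the per-relay
# quantization penalty C(Nr_k / sq_k).
r_full = C((Pr1 + Pr2) / N) - C(Nr1 / sq1) - C(Nr2 / sq2)
# S = {r1}: compressed source->r2 look plus the r1->destination link,
# minus relay 1's penalty (S = {r2} is symmetric).
r_s1 = C(P / (Nr2 + sq2)) + C(Pr1 / N) - C(Nr1 / sq1)
r_s2 = C(P / (Nr1 + sq1)) + C(Pr2 / N) - C(Nr2 / sq2)
R_nnc = min(r_empty, r_full, r_s1, r_s2)

# Cut-set bound: broadcast cut vs. fully coherent MAC cut; the broadcast
# cut 0.5*log2(21) is binding here.
R_cut = min(C(P / Nr1 + P / Nr2),
            C((math.sqrt(Pr1) + math.sqrt(Pr2)) ** 2 / N))

print(f"NNC rate      ~ {R_nnc:.3f} bits/use")   # ~1.196
print(f"cut-set bound ~ {R_cut:.3f} bits/use")   # ~2.196
print(f"gap ~ {R_cut - R_nnc:.3f} <= (K-1)/2 = 1.5 for K = 4")
```

The gap of about 1 bit sits comfortably inside the $(K-1)/2 = 1.5$ bit guarantee for this $K = 4$ network.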

Example: NNC for a Serial Relay Chain

Consider a chain of relays: source → relay 1 → relay 2 → destination, with Gaussian links $Y_{r_1} = X + Z_1$, $Y_{r_2} = X_{r_1} + Z_2$, and $Y = X_{r_2} + Z$, all with noise variance $N = 1$ and powers $P = P_{r_1} = P_{r_2} = 10$. Compare DF (decode at each hop) with NNC.
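The sketch below compares the two under the same assumptions as before (independent inputs, quantization at the noise level); the chain's cut expressions are again our own instantiation. NNC pays quantization penalties on every cut, while DF matches this topology perfectly and achieves the chain's capacity, foreshadowing the Common Mistake note below.

```python
import math

def C(snr):
    return 0.5 * math.log2(1 + snr)

P = P1 = P2 = 10.0     # source and relay powers
N1 = N2 = N = 1.0      # per-hop noise variances
sq1, sq2 = N1, N2      # quantization at the noise level

# DF: each hop decodes fully, so the rate is the weakest hop capacity,
# which here equals the cut-set bound of the chain.
R_df = min(C(P / N1), C(P1 / N2), C(P2 / N))

# NNC cut values (no skip links, so many terms vanish).
cuts = [
    C(P / (N1 + sq1)),                               # S = {}
    C(P1 / (N2 + sq2)) - C(N1 / sq1),                # S = {r1}
    C(P / (N1 + sq1)) + C(P2 / N) - C(N2 / sq2),     # S = {r2}
    C(P2 / N) - C(N1 / sq1) - C(N2 / sq2),           # S = {r1, r2}
]
R_nnc = min(cuts)

print(f"DF  rate ~ {R_df:.3f} bits/use (chain capacity)")  # ~1.730
print(f"NNC rate ~ {R_nnc:.3f} bits/use")                  # ~0.730
```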

NNC Gap to Cut-Set Bound vs. Network Size

Visualize how the worst-case gap between NNC and the cut-set bound grows with the number of relay nodes. Compare with the actual gap for specific network topologies.


Noisy Network Coding: Universal Relay Strategy

Animated walkthrough of the noisy network coding scheme for a multi-relay network: each relay independently compresses its observation, forwards it, and the destination jointly decodes the source message and all compression indices.

Common Mistake: NNC Is Not Always Better Than DF

Mistake:

Assuming NNC is universally superior because it has a "constant gap" guarantee.

Correction:

NNC's constant gap is a worst-case result. For specific topologies like serial relay chains or degraded relay channels, decode-and-forward can achieve the exact capacity while NNC falls short. The value of NNC is its universality: it works well for any topology without requiring topology-specific optimization. In practice, the best approach is to compare DF, CF, NNC, and hybrid schemes for the specific network at hand.

Historical Note: The Quest for a Universal Relay Strategy

2005-2011

The development of noisy network coding was the culmination of a decade-long quest for a universal relay strategy. Cover and El Gamal (1979) provided DF and CF for the single relay. Kramer, Gastpar, and Gupta (2005) extended these to networks but found that the gap could grow unboundedly. Avestimehr, Diggavi, and Tse (2011) showed, via the deterministic approach, that a constant-gap approximation exists for Gaussian networks, but their QMF scheme was complex and not based on standard information-theoretic tools.

Lim, Kim, El Gamal, and Chung (2011) achieved the same constant-gap result using a surprisingly simple modification of compress-and-forward: replace Wyner-Ziv binning with joint decoding. This insight, that joint decoding of compression indices is the key to universality, was unexpected and elegant.

⚠️ Engineering Note

Practical Implications of NNC

Noisy network coding has significant implications for practical relay network design:

  1. Simplicity: each relay compresses independently; no coordination between relays is needed. This maps naturally to distributed systems.
  2. Robustness: the scheme works for any topology and channel parameters. No topology-specific optimization is required.
  3. Scalability: adding a new relay can only improve performance (it provides additional compressed observations).

The main practical challenge is the joint decoding at the destination, which has complexity exponential in the number of relays. For small networks ($K \leq 5$), this is feasible; for large networks, iterative decoding approximations are needed.
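For a rough sense of scale, the count below is a back-of-the-envelope sketch with illustrative numbers (block length 10, message and compression rates of 1 bit/symbol); real NNC decoders operate over many blocks, which only enlarges the search space.

```python
# Naive joint decoding searches over the message and every relay's
# compression index: |search space| ~ 2^{nR} * prod_k 2^{n R̂_k}.
n, R = 10, 1.0  # toy block length and per-symbol rates
for num_relays in range(1, 8):
    hypotheses = 2 ** (n * R) * (2 ** (n * R)) ** num_relays
    print(f"{num_relays} relays: ~{hypotheses:.1e} hypotheses")
```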

Practical Constraints

  • Joint decoding complexity grows exponentially with the number of relays
  • Each relay needs to estimate its noise level for optimal quantization
  • Synchronization across relays is required for coherent reception

Quick Check

What is the key difference between standard compress-and-forward (Chapter 22) and noisy network coding?

NNC uses better compression codebooks

NNC replaces Wyner-Ziv binning with joint decoding of all compression indices at the destination

NNC allows relays to cooperate with each other

NNC uses structured (lattice) codes instead of random codes

Key Takeaway

Noisy network coding provides a universal relay strategy: each relay compresses independently, and the destination jointly decodes everything. For any Gaussian relay network with $K$ nodes, NNC achieves within $(K-1)/2$ bits of the cut-set bound, a constant gap independent of channel parameters. This makes NNC the default strategy when the network topology is complex or unknown.