Chapter Summary

Key Points

  1. Max-flow equals min-cut for unicast. For a single source and single sink in a noiseless directed graph, the maximum information flow equals the minimum cut capacity. This rate is achievable by routing alone (Ford–Fulkerson); no coding at intermediate nodes is needed.

  2. Network coding achieves multicast capacity. For single-source multicast to $K$ sinks, the maximum achievable rate is $\min_k \text{mincut}(s, t_k)$, the minimum of the individual max-flows. Routing alone cannot always achieve this rate; network coding at intermediate nodes is necessary and sufficient.

  3. Linear codes suffice for multicast. Linear network codes over a finite field $\mathbb{F}_q$ with $q > K$ achieve the multicast capacity. Each intermediate node forms linear combinations of its incoming symbols: simple algebraic operations that are easy to implement.

  4. Random codes work. Random linear network coding (RLNC), in which each node independently draws random coefficients from $\mathbb{F}_q$, achieves the multicast capacity with high probability. This eliminates the need for centralized code design and provides robustness to topology changes and link failures.

  5. The butterfly network is the canonical example. A single XOR at an intermediate node doubles the multicast rate from 1 to 2. This simple example captures the essential idea: a coded packet is simultaneously useful to multiple receivers, each of which uses different side information to decode.

  6. Multi-source is much harder. The capacity region of general multi-source multi-sink networks is unknown. The cut-set bound is not always tight, linear codes are not always sufficient, and even the two-unicast problem remains open. This is one of the major unsolved problems in network information theory.

  7. Connections to practice. Network coding has been deployed in peer-to-peer systems (Avalanche), wireless mesh networks (COPE), and coded caching. The computational overhead (linear combinations over $\mathbb{F}_{256}$) is modest for modern hardware.
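The max-flow computation behind point 1 is easy to check numerically. Below is a minimal Edmonds–Karp sketch (Ford–Fulkerson with BFS augmenting paths), run on the butterfly topology with unit-capacity edges; the adjacency-matrix encoding and node labels are illustrative assumptions, not a prescribed representation.

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp: Ford-Fulkerson with shortest (BFS) augmenting paths."""
    n = len(capacity)
    residual = [row[:] for row in capacity]  # residual capacities, mutated below
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = [-1] * n
        parent[source] = source
        queue = deque([source])
        while queue and parent[sink] == -1:
            u = queue.popleft()
            for v in range(n):
                if residual[u][v] > 0 and parent[v] == -1:
                    parent[v] = u
                    queue.append(v)
        if parent[sink] == -1:
            return flow  # no augmenting path left: flow equals min-cut capacity
        # collect the path, then push the bottleneck flow along it
        path = []
        v = sink
        while v != source:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck

# Butterfly topology: source s=0, relays a=1 and b=2, coding point m=3,
# its successor n=4, sinks t1=5 and t2=6; every edge has unit capacity.
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4), (1, 5), (4, 5), (2, 6), (4, 6)]
cap = [[0] * 7 for _ in range(7)]
for u, v in edges:
    cap[u][v] = 1

rate_t1 = max_flow(cap, 0, 5)  # individual max-flow to sink t1
rate_t2 = max_flow(cap, 0, 6)  # individual max-flow to sink t2
```

Each sink's individual max-flow is 2, so by point 2 the multicast capacity is $\min(2, 2) = 2$, even though routing alone cannot serve both sinks at that rate simultaneously.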
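The encode-and-decode cycle of RLNC (point 4) can be sketched in a few lines. This toy uses the prime field $\mathbb{F}_{257}$ rather than the $\mathbb{F}_{256}$ of practice, because 257 is prime and plain modular arithmetic suffices (real deployments use GF($2^8$) polynomial arithmetic); the function names and packet counts are illustrative assumptions.

```python
import random

P = 257  # a prime field; a large q makes random coefficient matrices full rank w.h.p.

def rlnc_encode(symbols, num_packets, rng):
    """Each coded packet carries its random coefficients plus the combination."""
    packets = []
    for _ in range(num_packets):
        coeffs = [rng.randrange(P) for _ in symbols]
        value = sum(c * s for c, s in zip(coeffs, symbols)) % P
        packets.append((coeffs, value))
    return packets

def rlnc_decode(packets, k):
    """Recover k source symbols by Gauss-Jordan elimination over F_P."""
    rows = [coeffs[:] + [value] for coeffs, value in packets]  # [coeffs | value]
    for col in range(k):
        # find a row with a nonzero pivot in this column
        pivot = next(r for r in range(col, len(rows)) if rows[r][col] != 0)
        rows[col], rows[pivot] = rows[pivot], rows[col]
        inv = pow(rows[col][col], -1, P)  # modular inverse (Python 3.8+)
        rows[col] = [x * inv % P for x in rows[col]]
        # eliminate this column from every other row
        for r in range(len(rows)):
            if r != col and rows[r][col]:
                f = rows[r][col]
                rows[r] = [(a - f * b) % P for a, b in zip(rows[r], rows[col])]
    return [rows[i][k] for i in range(k)]

symbols = [10, 20, 30]
rng = random.Random(0)  # seeded only for reproducibility of the demo
packets = rlnc_encode(symbols, 5, rng)  # a few redundant packets for robustness
decoded = rlnc_decode(packets, 3)
```

Because the coefficients travel with each packet, any three linearly independent packets suffice to decode, regardless of which network paths produced them; this is the robustness to topology changes that point 4 highlights.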
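The butterfly mechanism of point 5 reduces to a single XOR. A minimal sketch, with the two source bits and each sink's side information as assumed placeholders:

```python
# One multicast round on the butterfly: the source emits bits b1 and b2.
b1, b2 = 1, 0

# Direct edges deliver b1 to sink t1 and b2 to sink t2 as side information.
# The bottleneck edge carries one coded symbol instead of choosing one bit:
coded = b1 ^ b2  # the XOR performed at the coding point

# Each sink XORs the coded symbol with its side information to get the other bit.
b2_at_t1 = coded ^ b1  # t1 already knows b1, so it recovers b2
b1_at_t2 = coded ^ b2  # t2 already knows b2, so it recovers b1
```

Both sinks obtain both bits in one round (rate 2); forwarding either plain bit on the bottleneck would have served only one sink (rate 1).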

Looking Ahead

Chapter 22 introduces noise into the multi-hop setting: the relay channel, where an intermediate node helps the source communicate with the destination over noisy links. The techniques from this chapter — coding at intermediate nodes, combining information before forwarding — will reappear in noisy form. The three fundamental relay strategies — decode-and-forward, compress-and-forward, and compute-and-forward — each represent a different answer to the question "what should the relay do with the noisy signal it receives?" Decode-and-forward is the noisy analogue of routing; compress-and-forward uses Wyner–Ziv coding (Chapter 8); and compute-and-forward directly extends the linear network coding of this chapter to noisy channels, using lattice codes to decode linear combinations over the integers.