Prerequisites & Notation

Before You Begin

This chapter moves from single-hop to multi-hop communication. The setting is noiseless networks, so the focus is on graph theory and algebraic coding rather than probabilistic arguments. We assume the following background.

  • Basic graph theory: directed graphs, paths, cuts

    Self-check: Can you define a directed graph, a path from source to sink, and a cut separating source from sink?

  • Finite fields: $\mathbb{F}_q$, addition and multiplication

    Self-check: Can you perform arithmetic in $\mathbb{F}_2$ (binary XOR) and $\mathbb{F}_4$?

  • Linear algebra over finite fields

    Self-check: Can you determine the rank of a matrix over $\mathbb{F}_q$ and solve a linear system?

  • Channel capacity and the channel coding theorem (for context; review ch09)

    Self-check: Can you explain why channel coding matters — what does reliable communication over a noisy channel require?

  • Entropy and mutual information (for rate bounds; review ch01)

    Self-check: Can you state the chain rule for entropy and the data processing inequality?
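
Two of the self-checks above can be exercised directly in code. The sketch below (a minimal illustration, not part of the chapter's formalism) represents elements of $\mathbb{F}_4$ as 2-bit integers — the bits are the coefficients of $a_1\omega + a_0$, with $\omega^2 = \omega + 1$ — and computes the rank of a binary matrix whose rows are given as bitmasks.

```python
def gf4_add(a, b):
    # Addition in F_4 (characteristic 2) is bitwise XOR.
    return a ^ b

def gf4_mul(a, b):
    # Multiply as polynomials over F_2, then reduce modulo x^2 + x + 1.
    p = 0
    for i in range(2):
        if (b >> i) & 1:
            p ^= a << i
    if p & 0b100:       # degree-2 term present: subtract (XOR) the modulus
        p ^= 0b111
    return p

def rank_f2(rows):
    # Rank over F_2 via Gaussian elimination: maintain an XOR basis
    # keyed by each basis row's leading (highest set) bit.
    basis = {}
    for r in rows:
        while r:
            lead = r.bit_length() - 1
            if lead not in basis:
                basis[lead] = r
                break
            r ^= basis[lead]
    return len(basis)

# omega * omega = omega + 1 in F_4 (omega is 0b10, omega + 1 is 0b11):
print(gf4_mul(0b10, 0b10))            # -> 3
# Rows (1,1,0), (0,1,1), (1,0,1) are linearly dependent over F_2:
print(rank_f2([0b110, 0b011, 0b101])) # -> 2
```

If these computations feel unfamiliar, review finite-field arithmetic before proceeding; the field $\mathbb{F}_q$ and matrix rank over it are used throughout the chapter.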

Notation for This Chapter

This chapter uses graph-theoretic notation alongside information-theoretic symbols.

| Symbol | Meaning | Introduced |
|---|---|---|
| $G = (V, E)$ | Directed graph with vertex set $V$ and edge set $E$ | s01 |
| $s$ | Source node | s01 |
| $t$ (or $t_1, \ldots, t_K$) | Sink node(s) / destination(s) | s01 |
| $c(e)$ | Capacity of edge $e$ (integer, in symbols per time unit) | s01 |
| $\text{maxflow}(s, t)$ | Maximum flow from $s$ to $t$ | s01 |
| $\text{mincut}(s, t)$ | Minimum cut capacity separating $s$ from $t$ | s01 |
| $\mathbb{F}_q$ | Finite field with $q$ elements | s02 |
| $\mathbf{M}$ | Transfer matrix of a linear network code | s02 |
| $h$ | Multicast rate (symbols per time unit to each sink) | s02 |
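
The quantities $\text{maxflow}(s, t)$ and $\text{mincut}(s, t)$ can be computed on any small example with a standard augmenting-path algorithm. The sketch below (Edmonds-Karp; the graph and node names are illustrative, not from the chapter) verifies that the butterfly network with unit-capacity edges has max flow 2 from $s$ to each sink:

```python
from collections import deque

def max_flow(cap, s, t):
    # Edmonds-Karp: repeatedly augment along a shortest path (by edge
    # count) in the residual graph; by max-flow min-cut, the result
    # also equals the minimum cut capacity separating s from t.
    res = {u: dict(vs) for u, vs in cap.items()}
    for u in list(cap):
        for v in cap[u]:
            res.setdefault(v, {}).setdefault(u, 0)  # reverse residual edges
    flow = 0
    while True:
        parent = {s: None}                          # BFS for an augmenting path
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow                             # no augmenting path left
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[u][v] for u, v in path)       # bottleneck capacity
        for u, v in path:
            res[u][v] -= aug
            res[v][u] += aug
        flow += aug

# Butterfly network, every edge with capacity 1:
butterfly = {
    's': {'a': 1, 'b': 1},
    'a': {'t1': 1, 'c': 1},
    'b': {'t2': 1, 'c': 1},
    'c': {'d': 1},
    'd': {'t1': 1, 't2': 1},
}
print(max_flow(butterfly, 's', 't1'))  # -> 2
print(max_flow(butterfly, 's', 't2'))  # -> 2
```

Each sink individually can receive at rate 2, even though routing alone cannot deliver rate 2 to both sinks simultaneously; resolving that gap is what linear network coding (the transfer matrix $\mathbf{M}$ and multicast rate $h$) is for.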