Chapter 17 Summary
Key Points
1. A factor graph is a bipartite graph with variable nodes and factor nodes, making the factorization explicit. Edges connect each factor to the variables it depends on.
2. Markov chains, HMMs, convolutional codes, ISI channels, LDPC codes, and MIMO detection all have natural factor graph representations. The graph topology (chain, trellis, bipartite, dense) determines whether inference is tractable.
3. On a tree factor graph, sum-product message passing computes exact marginals in two passes (leaves to root, root to leaves), with cost linear in the number of edges. This is the message-passing scheme behind forward-backward, Viterbi, BCJR, and Kalman filtering.
4. On a loopy graph, applying the tree updates iteratively gives loopy BP. Its fixed points are stationary points of the Bethe free energy, a local approximation to the true free energy. On trees, Bethe is exact; on graphs with loops, it is an approximation.
5. Random bipartite graphs are locally tree-like: for any fixed depth t, the depth-t neighborhood of a randomly chosen node is a tree with probability tending to 1 as the graph size n → ∞. This is why density evolution gives sharp asymptotic predictions for LDPC codes.
6. Short cycles (especially 4-cycles) degrade loopy BP by double-counting information. Good graph constructions enforce girth at least 6 or 8.
7. Loopy BP works well empirically when (i) the graph is locally tree-like, (ii) the factors are weak enough that information does not flow strongly around cycles, and (iii) the message-update schedule is chosen carefully.
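The two-pass sum-product recipe in point 3 can be sketched on the smallest nontrivial tree: a chain of three binary variables with two pairwise factors. The factor tables below are illustrative values chosen for this sketch, not taken from the chapter; the check against brute-force enumeration confirms that the message products give exact marginals.

```python
import numpy as np

# Chain x0 -- f01 -- x1 -- f12 -- x2 over a binary alphabet.
# Factor tables are arbitrary illustrative values (an assumption of this sketch).
f01 = np.array([[1.0, 0.5],
                [0.5, 2.0]])   # f01[x0, x1]
f12 = np.array([[2.0, 1.0],
                [0.3, 1.0]])   # f12[x1, x2]

# Pass 1: leaves to root (root = x2).
msg_f01_to_x1 = f01.sum(axis=0)        # marginalize out x0
msg_f12_to_x2 = msg_f01_to_x1 @ f12    # marginalize out x1

# Pass 2: root to leaves.
msg_f12_to_x1 = f12.sum(axis=1)        # marginalize out x2
msg_f01_to_x0 = f01 @ msg_f12_to_x1    # marginalize out x1

# Exact marginal of x1: product of its incoming messages, normalized.
p_x1 = msg_f01_to_x1 * msg_f12_to_x1
p_x1 /= p_x1.sum()

# Sanity check: enumerate the full joint and marginalize directly.
joint = np.einsum('ab,bc->abc', f01, f12)
p_x1_brute = joint.sum(axis=(0, 2))
p_x1_brute /= p_x1_brute.sum()
assert np.allclose(p_x1, p_x1_brute)
print(p_x1)
```

Each message is computed once per edge and direction, which is where the linear-in-edges cost comes from; brute-force enumeration, by contrast, grows exponentially in the number of variables.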
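The 4-cycle condition in point 6 has a simple matrix test: in the bipartite graph of a parity-check matrix H, a 4-cycle exists exactly when two check rows share two or more variable columns. A minimal sketch, with small illustrative matrices of my own choosing:

```python
import numpy as np

def has_four_cycle(H):
    """True if the bipartite graph of 0/1 matrix H contains a 4-cycle."""
    overlap = H @ H.T                           # overlap[i, j] = columns shared by rows i and j
    off_diag = overlap - np.diag(np.diag(overlap))
    return bool((off_diag >= 2).any())          # two shared columns close a length-4 cycle

# Rows 0 and 1 share columns 0 and 1, so the graph has a 4-cycle.
H_bad = np.array([[1, 1, 0, 0],
                  [1, 1, 1, 0]])
# Any two rows share at most one column: girth at least 6.
H_ok = np.array([[1, 1, 0, 0],
                 [1, 0, 1, 0]])
print(has_four_cycle(H_bad), has_four_cycle(H_ok))
```

Constructions that enforce girth 6 or 8 amount to ensuring this overlap condition fails for every pair of rows (and, for girth 8, ruling out 6-cycles as well).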
Looking Ahead
Chapter 18 derives the sum-product algorithm in full detail, develops the max-product variant (which computes MAP estimates), and specializes to Gaussian belief propagation. We will see that density evolution predicts the decoding threshold of LDPC codes, and that Gaussian BP provides the computational backbone for iterative receivers in modern wireless systems.