Chapter Summary
Key Points
1. Conditional probability is a probability measure. $P(\cdot \mid B)$ satisfies the Kolmogorov axioms for fixed $B$ with $P(B) > 0$. The multiplication rule $P(A \cap B) = P(A \mid B)\,P(B)$ and the chain rule follow directly: every joint probability factors into a telescoping product of conditional probabilities (a small numeric check appears after this list).
2. Bayes' theorem inverts the conditioning direction. Given a partition $\{B_i\}$ and likelihoods $P(A \mid B_i)$, the posterior is $P(B_i \mid A) = P(A \mid B_i)\,P(B_i) \big/ \sum_j P(A \mid B_j)\,P(B_j)$. This prior-likelihood-posterior update is the mathematical foundation of MAP detection, Bayesian channel estimation, and belief propagation (a worked update follows the list).
3. Independence means no information flows. $P(A \cap B) = P(A)\,P(B)$, equivalently $P(A \mid B) = P(A)$ when $P(B) > 0$. Mutual independence for $n$ events requires the product condition for every subcollection; pairwise independence is strictly weaker (a counterexample is coded after this list). Disjointness is the opposite of independence for events with positive probability.
4. Conditional independence is the language of graphical models. $P(A \cap B \mid C) = P(A \mid C)\,P(B \mid C)$ means $C$ screens off all information flow between $A$ and $B$. Conditioning on a common cause removes correlation; conditioning on a common effect (collider) introduces it. The Markov chain $X \to Y \to Z$ encodes $P(Z \mid X, Y) = P(Z \mid Y)$ (verified numerically after the list).
5. Three distributions from one experiment. Binomial$(n, p)$, geometric$(p)$, and negative binomial$(k, p)$ all arise from repeated independent Bernoulli trials by asking different questions of the same trial stream: the number of successes in the first $n$ trials, the trial on which the first success occurs, and the trial on which the $k$-th success occurs (a simulation after this list illustrates all three). The geometric distribution's memoryless property mirrors that of the exponential distribution.
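As a small check of point 1, the sketch below computes the probability that the first three cards dealt from a 52-card deck are all aces, first as a telescoping product of conditional probabilities and then by brute-force enumeration. The card example and variable names are illustrative, not taken from the chapter.

```python
from itertools import permutations
from fractions import Fraction

# Chain rule: P(A1 ∩ A2 ∩ A3) = P(A1) P(A2 | A1) P(A3 | A1 ∩ A2)
# for the event "first three cards dealt are aces".
deck = ["ace"] * 4 + ["other"] * 48

chain = Fraction(4, 52) * Fraction(3, 51) * Fraction(2, 50)

# Brute force: enumerate every ordered deal of three distinct positions.
deals = list(permutations(range(52), 3))
hits = sum(all(deck[i] == "ace" for i in deal) for deal in deals)
direct = Fraction(hits, len(deals))

assert chain == direct
print(chain)  # 1/5525
```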
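For point 2, here is a minimal numeric Bayes update over a three-event partition; the priors and likelihoods are made-up values chosen only to show the prior-likelihood-posterior mechanics.

```python
# Hypothetical setting: three sources B1, B2, B3 emit a symbol error
# with different probabilities, and an error A is observed.
priors      = {"B1": 0.5, "B2": 0.3, "B3": 0.2}     # P(B_i)
likelihoods = {"B1": 0.01, "B2": 0.05, "B3": 0.20}  # P(A | B_i)

# Total probability: P(A) = sum_i P(A | B_i) P(B_i)
p_a = sum(likelihoods[b] * priors[b] for b in priors)

# Bayes' theorem: P(B_i | A) = P(A | B_i) P(B_i) / P(A)
posterior = {b: likelihoods[b] * priors[b] / p_a for b in priors}

print(posterior)  # sums to 1 and favours the error-prone source B3
```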
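For point 3, the classic two-coin counterexample shows that pairwise independence does not imply mutual independence; the events A, B, C follow the usual textbook construction rather than anything specific to this chapter.

```python
from itertools import product
from fractions import Fraction

# Two fair coin flips: A = first is heads, B = second is heads, C = flips agree.
omega = list(product("HT", repeat=2))           # 4 equally likely outcomes

def prob(event):
    return Fraction(sum(1 for w in omega if event(w)), len(omega))

A = lambda w: w[0] == "H"
B = lambda w: w[1] == "H"
C = lambda w: w[0] == w[1]

pA, pB, pC = prob(A), prob(B), prob(C)          # each equals 1/2

# Every pair satisfies the product condition ...
assert prob(lambda w: A(w) and B(w)) == pA * pB
assert prob(lambda w: A(w) and C(w)) == pA * pC
assert prob(lambda w: B(w) and C(w)) == pB * pC

# ... but the triple does not: P(A ∩ B ∩ C) = 1/4, while P(A)P(B)P(C) = 1/8.
assert prob(lambda w: A(w) and B(w) and C(w)) != pA * pB * pC
```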
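For point 4, a short numeric verification that a chain $X \to Y \to Z$ built from two cascaded binary channels satisfies $P(x, z \mid y) = P(x \mid y)\,P(z \mid y)$; the prior and transition matrices are arbitrary illustrative values.

```python
import numpy as np

# X -> Y -> Z: two cascaded binary channels with illustrative parameters.
px  = np.array([0.7, 0.3])                    # P(x)
pyx = np.array([[0.9, 0.1],                   # P(y | x), row index = x
                [0.2, 0.8]])
pzy = np.array([[0.85, 0.15],                 # P(z | y), row index = y
                [0.30, 0.70]])

# Chain rule with the Markov property: P(x, y, z) = P(x) P(y | x) P(z | y).
pxyz = px[:, None, None] * pyx[:, :, None] * pzy[None, :, :]

py  = pxyz.sum(axis=(0, 2))                   # P(y)
pxy = pxyz.sum(axis=2)                        # P(x, y)

# Conditional independence of X and Z given Y:
# P(x, z | y) must equal P(x | y) P(z | y) for every y.
lhs = pxyz / py[None, :, None]
rhs = (pxy / py[None, :])[:, :, None] * pzy[None, :, :]
assert np.allclose(lhs, rhs)
print("X and Z are conditionally independent given Y")
```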
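For point 5, a simulation asks all three questions of the same stream of Bernoulli trials and compares the empirical means with $np$, $1/p$, and $k/p$; the parameter values ($p = 0.3$, $n = 10$, $k = 3$) are illustrative.

```python
import random

random.seed(1)
p, n, k, reps = 0.3, 10, 3, 100_000

binom_counts, geom_first, nbinom_kth = [], [], []
for _ in range(reps):
    stream, successes = [], 0
    # Extend the trial stream until it covers both questions: n trials and k successes.
    while len(stream) < n or successes < k:
        hit = random.random() < p
        stream.append(hit)
        successes += hit
    success_times = [t for t, hit in enumerate(stream, start=1) if hit]
    binom_counts.append(sum(stream[:n]))    # Binomial(n, p): successes in first n trials
    geom_first.append(success_times[0])     # Geometric(p): trial of the first success
    nbinom_kth.append(success_times[k - 1]) # Negative binomial(k, p): trial of the k-th success

print(sum(binom_counts) / reps, "vs n*p =", n * p)   # ≈ 3.0
print(sum(geom_first) / reps, "vs 1/p =", 1 / p)     # ≈ 3.33
print(sum(nbinom_kth) / reps, "vs k/p =", k / p)     # ≈ 10.0
```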
Looking Ahead
Chapter 3 applies the tools of this chapter to reliability and combinatorial probability. Chapter 5 lifts the framework from events to random variables, where conditional probability becomes conditional distribution and independence becomes the factorisation of joint PMFs. The Markov chain concept returns in Chapter 13 as a model for stochastic processes evolving in discrete time, and Bayes' theorem reappears in Book FSI as the optimal detection rule.