Prerequisites & Notation
What You Should Know Before Reading This Chapter
The Kalman filter lives at the intersection of three ideas we have already developed: linear MMSE estimation in jointly Gaussian models (Chapter 8), Wiener filtering and innovations (Chapter 9), and the Markov structure that arises from a discrete-time dynamical system. If any of those three feels hazy, this chapter will feel like magic rather than mathematics. Revisit the marked topics before proceeding.
- Conditional Gaussian distribution and the block-matrix inverse (review Chapter 8)
Self-check: For jointly Gaussian $(\mathbf{x}, \mathbf{y})$, write $\mathbb{E}[\mathbf{x} \mid \mathbf{y}]$ and $\mathrm{Cov}(\mathbf{x} \mid \mathbf{y})$ in closed form. The Kalman update is this formula, iterated.
- Orthogonality principle for LMMSE (review Chapter 8)
Self-check: State why the LMMSE estimation error is orthogonal (uncorrelated) to every affine function of the observations. The Kalman gain is derived from exactly this condition.
- Innovations sequence and causal Wiener filtering (review Chapter 9)
Self-check: The innovations are white. Explain why recursive filtering is equivalent to projecting onto the innovations basis.
- Linear time-invariant systems and stability
Self-check: A matrix $F$ is (Schur) stable iff its eigenvalues lie strictly inside the unit disc. You should also recognize the controllability and observability Gramians.
- Matrix inversion lemma (Sherman--Morrison--Woodbury) (review Chapter 8)
Self-check: Apply the identity $(A + UCV)^{-1} = A^{-1} - A^{-1}U(C^{-1} + VA^{-1}U)^{-1}VA^{-1}$ to convert between covariance-form and information-form Kalman updates.
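For quick reference, the conditional-Gaussian identities behind the first self-check read as follows in the notation of Chapter 8:

$$
\mathbb{E}[\mathbf{x} \mid \mathbf{y}]
  = \boldsymbol{\mu}_x + \Sigma_{xy}\Sigma_{yy}^{-1}(\mathbf{y} - \boldsymbol{\mu}_y),
\qquad
\mathrm{Cov}(\mathbf{x} \mid \mathbf{y})
  = \Sigma_{xx} - \Sigma_{xy}\Sigma_{yy}^{-1}\Sigma_{yx}.
$$

The conditional covariance does not depend on the realized $\mathbf{y}$, which is why the Kalman covariance recursion can be run offline.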
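The stability item is easy to test numerically. A minimal sketch (the helper name `is_schur_stable` is ours, for illustration only):

```python
import numpy as np

def is_schur_stable(F):
    # Schur stability: every eigenvalue of F lies strictly inside the unit disc.
    return bool(np.max(np.abs(np.linalg.eigvals(F))) < 1.0)

print(is_schur_stable(np.array([[0.9, 0.2], [0.0, 0.5]])))  # eigenvalues 0.9, 0.5 -> True
print(is_schur_stable(np.array([[1.1, 0.0], [0.0, 0.5]])))  # eigenvalue 1.1 -> False
```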
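The last self-check, converting between covariance-form and information-form updates, can also be verified numerically. A sketch with randomly generated stand-ins for a prior covariance `P`, noise covariance `R`, and observation matrix `H` (all names here are illustrative, not the chapter's):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 2

# Random symmetric positive-definite stand-ins for a prior covariance P
# and an observation-noise covariance R, plus an observation matrix H.
A = rng.standard_normal((n, n)); P = A @ A.T + n * np.eye(n)
B = rng.standard_normal((m, m)); R = B @ B.T + m * np.eye(m)
H = rng.standard_normal((m, n))

# Covariance form: P - P H^T (H P H^T + R)^{-1} H P
S = H @ P @ H.T + R
P_cov = P - P @ H.T @ np.linalg.solve(S, H @ P)

# Information form via Sherman--Morrison--Woodbury: (P^{-1} + H^T R^{-1} H)^{-1}
P_info = np.linalg.inv(np.linalg.inv(P) + H.T @ np.linalg.solve(R, H))

print(np.allclose(P_cov, P_info))  # True: the two update forms coincide
```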
Chapter-Specific Notation
The state-space literature uses two nearly equivalent symbol sets: the controls/aerospace tradition writes $(A, B, C)$ for the transition, input, and observation matrices, while Caire's course notes follow their own convention. We adopt $(F, G, H)$ because it matches the vast majority of signal-processing references, and we flag the mapping in the first example. The observation matrix $H$ in this chapter has nothing to do with the MIMO channel matrix.
| Symbol | Meaning | Introduced |
|---|---|---|
| $\mathbf{x}_k$ | Hidden state at discrete time $k$ | s01 |
| $\mathbf{y}_k$ | Observation at time $k$ | s01 |
| $F$ | State transition matrix | s01 |
| $G$ | Process-noise input matrix | s01 |
| $H$ | Observation matrix (NOT a channel matrix in this chapter) | s01 |
| $\mathbf{u}_k$ | Process noise (driving noise), white | s01 |
| $\mathbf{v}_k$ | Observation noise, white, independent of $\mathbf{u}_k$ | s01 |
| $\hat{\mathbf{x}}_{k\mid k}$ | Conditional mean $\mathbb{E}[\mathbf{x}_k \mid \mathbf{y}_1,\dots,\mathbf{y}_k]$ | s02 |
| $P_{k\mid k}$ | Conditional covariance | s02 |
| $K_k$ | Kalman gain at step $k$ | s02 |
| $\mathbf{e}_k$ | Innovation (prediction residual) at step $k$ | s02 |
| $S_k$ | Innovation covariance | s02 |
| $\bar{P}$ | Steady-state prediction covariance, solves the DARE | s03 |
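In this notation, one predict/update cycle of the filter can be sketched as follows. This is a minimal illustration, not the chapter's derivation; `Q` and `R` denote the process- and observation-noise covariances:

```python
import numpy as np

def kalman_step(x_hat, P, y, F, G, H, Q, R):
    """One predict/update cycle in the (F, G, H) notation above."""
    # Predict: push the filtered mean and covariance through the state equation.
    x_pred = F @ x_hat
    P_pred = F @ P @ F.T + G @ Q @ G.T
    # Update: innovation e_k, innovation covariance S_k, Kalman gain K_k.
    e = y - H @ x_pred                    # innovation (prediction residual)
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ e
    P_new = P_pred - K @ S @ K.T          # filtered covariance
    return x_new, P_new
```

With $F = G = H = I$ and unit observation noise, a single call pulls the estimate toward the measurement by exactly the gain $K$, which is the scalar blending rule of Chapter 8 in matrix form.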
Key Takeaway
The Kalman filter is not a new idea; it is the LMMSE estimator of Chapter 8, evaluated recursively along the innovations basis of Chapter 9, applied to a linear Gaussian Markov chain. Everything in this chapter follows from those three facts. Keep them in mind and the algebra will feel inevitable rather than ad hoc.