Chapter Summary
Key Points
1. The discrete-time linear Gaussian state-space model, with white Gaussian process and measurement noise, admits a recursive closed-form MMSE estimator: the Kalman filter.
2. The Kalman recursion alternates a prediction step (propagate the state mean and covariance through the dynamics) with an update step (incorporate the new observation through the Kalman gain).
3. The innovation sequence is white and Gaussian under the model. This is the state-space realization of the innovations representation from the Wiener theory of Chapter 9.
4. For time-invariant system matrices, the prediction covariance converges, under detectability and stabilizability conditions, to the fixed point of the discrete algebraic Riccati equation (DARE). The steady-state Kalman filter is LTI and coincides with the causal Wiener filter for the same signal model.
5. For nonlinear dynamics or observations, the EKF linearizes around the running estimate, while the UKF propagates sigma points through the nonlinearity. Both are approximations; they lose optimality and can diverge when the nonlinearity is strong or the state distribution is multimodal. Particle filters provide a Monte Carlo alternative at higher computational cost.
Looking Ahead
Chapter 11 returns to the communications setting and treats symbol detection over ISI channels as a different inference problem on a sequence, but the state-space perspective developed here is exactly what underlies the Viterbi algorithm on the channel trellis.