Chapter Summary

Key Points

  1. The discrete-time linear Gaussian state-space model $\mathbf{x}[n+1] = \mathbf{F}\mathbf{x}[n] + \mathbf{G}\mathbf{w}[n]$, $\mathbf{y}[n] = \mathbf{H}\mathbf{x}[n] + \mathbf{v}[n]$, with white Gaussian $\mathbf{w}, \mathbf{v}$, admits a recursive closed-form MMSE estimator: the Kalman filter.
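This model is easy to simulate directly from its defining recursion. A minimal scalar sketch follows; the values of `F`, `G`, `H`, `Q`, and `R` are illustrative choices, not taken from the text:

```python
import numpy as np

# Illustrative scalar instance of x[n+1] = F x[n] + G w[n], y[n] = H x[n] + v[n],
# with w[n] ~ N(0, Q) and v[n] ~ N(0, R). Parameter values are assumptions.
rng = np.random.default_rng(0)
F, G, H = 0.9, 1.0, 1.0
Q, R = 0.1, 0.5  # variances of the process noise w[n] and measurement noise v[n]

N = 200
x = np.zeros(N)
for n in range(N - 1):
    x[n + 1] = F * x[n] + G * rng.normal(scale=np.sqrt(Q))
y = H * x + rng.normal(scale=np.sqrt(R), size=N)
```

With $|F| < 1$ the state is stable and its variance settles near $Q/(1-F^2)$, which the simulated trajectory reflects.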

  2. The Kalman recursion alternates a prediction step (propagate the mean and covariance through the dynamics) and an update step (incorporate the new observation through the Kalman gain $\mathbf{K}[n] = \mathbf{P}[n|n-1]\mathbf{H}^T(\mathbf{H}\mathbf{P}[n|n-1]\mathbf{H}^T + \mathbf{R})^{-1}$).
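One predict/update cycle can be sketched in matrix form. The constant-velocity dynamics and position-only measurement below are illustrative assumptions, not the chapter's example:

```python
import numpy as np

# Illustrative 2-state model (position/velocity); values are assumptions.
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # constant-velocity dynamics
H = np.array([[1.0, 0.0]])               # position-only measurement
Q = 0.01 * np.eye(2)                     # process-noise covariance
R = np.array([[0.25]])                   # measurement-noise covariance

def kalman_step(x_hat, P, y):
    # Prediction: propagate mean and covariance through the dynamics.
    x_pred = F @ x_hat
    P_pred = F @ P @ F.T + Q
    # Update: gain K = P H^T (H P H^T + R)^{-1}, correct with the innovation.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = (np.eye(len(x_hat)) - K @ H) @ P_pred
    return x_new, P_new

x_hat, P = np.zeros(2), np.eye(2)
x_hat, P = kalman_step(x_hat, P, np.array([1.0]))
```

Note that the covariance recursion does not depend on the data, so $\mathbf{P}$ and $\mathbf{K}$ can be precomputed offline.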

  3. The innovation $\mathbf{y}[n] - \mathbf{H}\hat{\mathbf{x}}[n|n-1]$ is a white Gaussian sequence under the model. This is the state-space realization of the innovations representation from the Wiener theory of Chapter 9.
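The whiteness of the innovations can be checked empirically: run a scalar Kalman filter on simulated data and verify that the lag-1 sample autocorrelation of the innovation sequence is near zero. All parameter values here are illustrative assumptions:

```python
import numpy as np

# Scalar simulation + filter; parameters are illustrative, not from the text.
rng = np.random.default_rng(1)
F, H, Q, R = 0.95, 1.0, 0.2, 1.0
N = 5000

x = 0.0
x_hat, P = 0.0, 1.0
innov = np.empty(N)
for n in range(N):
    # simulate the true state and the observation
    x = F * x + rng.normal(scale=np.sqrt(Q))
    y = H * x + rng.normal(scale=np.sqrt(R))
    # predict
    x_pred = F * x_hat
    P_pred = F * P * F + Q
    # update
    S = H * P_pred * H + R
    K = P_pred * H / S
    innov[n] = y - H * x_pred           # the innovation at time n
    x_hat = x_pred + K * innov[n]
    P = (1 - K * H) * P_pred

e = innov - innov.mean()
rho1 = np.dot(e[:-1], e[1:]) / np.dot(e, e)  # lag-1 sample autocorrelation
```

For a correctly specified model, `rho1` fluctuates around zero at the $1/\sqrt{N}$ level; a persistently large value is a standard diagnostic for model mismatch.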

  4. For time-invariant $(\mathbf{F},\mathbf{H},\mathbf{Q},\mathbf{R})$, the prediction covariance converges to the fixed point of the discrete algebraic Riccati equation (DARE) under detectability and stabilizability. The steady-state Kalman filter is LTI and coincides with the causal Wiener filter for the same signal model.
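The DARE fixed point can be found by simply iterating the prediction-covariance recursion until it stops changing. A scalar sketch, with illustrative parameter values:

```python
# Iterate P <- F P F' - F P H' (H P H' + R)^{-1} H P F' + G Q G' to its
# fixed point. Scalar case; parameter values are illustrative assumptions.
F, H, G, Q, R = 0.9, 1.0, 1.0, 0.1, 0.5

P = 1.0
for _ in range(200):
    S = H * P * H + R
    P = F * P * F - F * P * H * (1.0 / S) * H * P * F + G * Q * G

# Residual of the DARE at the converged P (should be ~ 0)
resid = F * P * F - F * P * H / (H * P * H + R) * H * P * F + G * Q * G - P
# Steady-state gain of the resulting LTI filter
K_ss = P * H / (H * P * H + R)
```

In practice one would use a dedicated solver (e.g. SciPy's `solve_discrete_are`) rather than fixed-point iteration, but the iteration mirrors how the filter itself reaches steady state.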

  5. For nonlinear dynamics or observations, the EKF linearizes around the running estimate and the UKF propagates sigma points through the nonlinearity. Both are approximations; they lose optimality and can diverge when the nonlinearity is strong or the state distribution is multimodal. Particle filters provide a Monte Carlo alternative at higher computational cost.
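A minimal EKF cycle makes the linearization step concrete. This sketch assumes linear dynamics and the nonlinear observation $h(x) = x^2/20$; both the model and all parameter values are illustrative, not from the text:

```python
# One EKF predict/update cycle, scalar case. Model and values are assumptions:
# linear dynamics x[n+1] = F x[n] + w[n], nonlinear observation y = h(x) + v.
F, Q, R = 0.9, 0.1, 0.5

def h(x):
    return x**2 / 20.0          # nonlinear measurement function

def h_jac(x):
    return x / 10.0             # dh/dx, evaluated at the linearization point

def ekf_step(x_hat, P, y):
    # predict through the (here linear) dynamics
    x_pred = F * x_hat
    P_pred = F * P * F + Q
    # update: linearize h around the predicted estimate
    Hn = h_jac(x_pred)
    S = Hn * P_pred * Hn + R
    K = P_pred * Hn / S
    x_new = x_pred + K * (y - h(x_pred))
    P_new = (1 - K * Hn) * P_pred
    return x_new, P_new

x_hat, P = ekf_step(2.0, 1.0, 0.3)
```

The only change from the linear recursion is that the mean is propagated through $h(\cdot)$ itself while the covariance uses the Jacobian; when $h$ curves sharply over the spread of the state distribution, this first-order approximation is exactly where the EKF breaks down.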

Looking Ahead

Chapter 11 returns to the communications setting and treats symbol detection over ISI channels as a different inference problem on a sequence, but the state-space perspective developed here is exactly what underlies the Viterbi algorithm on the channel trellis.