Exercises
ex-fsi-ch10-01
Easy: Write the discrete-time state-space model for a constant-velocity 1-D tracker with position $p_n$ and velocity $v_n$, sampling period $T$, process acceleration noise $a_n$ with variance $\sigma_a^2$, and position-only observations with variance $\sigma_w^2$. Identify $A$, $G$, $H$, $Q$, $R$.
The state vector is $x_n = [p_n \;\; v_n]^T$.
Acceleration enters as $T^2/2$ into position and $T$ into velocity.
State equation
$x_{n+1} = A x_n + G a_n$, $A = \begin{bmatrix} 1 & T \\ 0 & 1 \end{bmatrix}$, $G = \begin{bmatrix} T^2/2 \\ T \end{bmatrix}$, $Q = \sigma_a^2\, G G^T$.
Observation equation
$y_n = H x_n + w_n$, $H = [1 \;\; 0]$, $R = \sigma_w^2$.
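The model above can be sketched numerically; the parameter values below ($T$, $\sigma_a^2$, $\sigma_w^2$) are hypothetical stand-ins:

```python
import numpy as np

# Hypothetical values for the sampling period and noise variances.
T, var_a, var_w = 0.5, 0.1, 1.0

A = np.array([[1.0, T],
              [0.0, 1.0]])       # constant-velocity transition
G = np.array([[T**2 / 2],
              [T]])              # how acceleration enters position/velocity
Q = var_a * (G @ G.T)            # process-noise covariance sigma_a^2 G G^T
H = np.array([[1.0, 0.0]])       # position-only observation row
R = np.array([[var_w]])          # observation-noise variance
```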
ex-fsi-ch10-02
Easy: The scalar Kalman filter has state model $x_{n+1} = a x_n + u_n$ and observation $y_n = x_n + w_n$, with $\mathrm{Var}(u_n) = q$ and $\mathrm{Var}(w_n) = r$. Compute $K_n$ and $P_{n|n}$ assuming the prediction variance $P_{n|n-1}$ is given.
$K_n = P_{n|n-1} / (P_{n|n-1} + r)$.
$P_{n|n} = (1 - K_n) P_{n|n-1}$ (scalar case).
Predict
$P_{n|n-1} = a^2 P_{n-1|n-1} + q$.
Update
$K_n = \frac{P_{n|n-1}}{P_{n|n-1} + r}$, so $P_{n|n} = (1 - K_n) P_{n|n-1} = \frac{P_{n|n-1}\, r}{P_{n|n-1} + r}$.
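A minimal numerical sketch of the predict/update variance recursion, with hypothetical values for $a$, $q$, $r$ and the starting prediction variance:

```python
# Hypothetical scalar-model parameters.
a, q, r = 0.9, 0.25, 1.0
M = 2.0                      # given prediction variance P_{n|n-1}

# Update: gain and posterior variance.
K = M / (M + r)              # Kalman gain (scalar observation, H = 1)
P = (1 - K) * M              # posterior variance P_{n|n}

# Predict: propagate the posterior variance one step ahead.
M_next = a**2 * P + q        # next prediction variance P_{n+1|n}
```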
ex-fsi-ch10-03
Easy: Show that the Kalman gain minimizes the trace of the posterior covariance over all linear updates.
Set $\hat{x}_{n|n} = \hat{x}_{n|n-1} + K (y_n - H \hat{x}_{n|n-1})$ with free gain $K$.
Differentiate $\operatorname{tr} P(K)$ w.r.t. $K$ and set to zero.
Posterior covariance
$P(K) = (I - K H) P_{n|n-1} (I - K H)^T + K R K^T$.
Differentiate
$\frac{\partial}{\partial K} \operatorname{tr} P(K) = -2 (I - K H) P_{n|n-1} H^T + 2 K R = 0$, giving $K = P_{n|n-1} H^T (H P_{n|n-1} H^T + R)^{-1}$.
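The stationarity of the trace at the Kalman gain can be checked numerically in the scalar case ($H = 1$); the values of $M$ and $r$ below are illustrative:

```python
import numpy as np

# Scalar Joseph-form posterior variance P(K) = (1-K)^2 M + K^2 r,
# minimized over a grid of candidate gains K.
M, r = 2.0, 1.0                          # illustrative variances
Ks = np.linspace(0.0, 1.0, 100001)
P = (1 - Ks)**2 * M + Ks**2 * r          # posterior variance for each K
K_best = Ks[np.argmin(P)]                # grid minimizer
K_star = M / (M + r)                     # Kalman gain
```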
ex-fsi-ch10-04
Easy: Prove that the innovation $e_n = y_n - H \hat{x}_{n|n-1}$ has zero mean and covariance $S_n = H P_{n|n-1} H^T + R$.
Use $y_n = H x_n + w_n$ and the unbiasedness $E[x_n - \hat{x}_{n|n-1}] = 0$.
The prediction error $\tilde{x}_{n|n-1} = x_n - \hat{x}_{n|n-1}$ and $w_n$ are independent by construction.
Mean
$E[e_n] = E[H \tilde{x}_{n|n-1} + w_n] = H\, E[\tilde{x}_{n|n-1}] + E[w_n] = 0$.
Covariance
$\operatorname{Cov}(e_n) = H P_{n|n-1} H^T + R$ since the prediction error and observation noise are independent.
ex-fsi-ch10-05
Easy: For a scalar LTI model $x_{n+1} = a x_n + u_n$, $y_n = x_n + w_n$, $\mathrm{Var}(u_n) = q$, $\mathrm{Var}(w_n) = r$, solve the DARE to find the steady-state prediction covariance $M$.
DARE: $M = a^2 M - \frac{a^2 M^2}{M + r} + q$.
Multiply through by $(M + r)$ and solve the resulting quadratic.
Form the quadratic
$M (M + r) = a^2 M (M + r) - a^2 M^2 + q (M + r)$, which reduces to $M^2 + (r - a^2 r - q) M - q r = 0$.
Numerical solution
With the given $a$, $q$, $r$, the quadratic is $M^2 + (r - a^2 r - q) M - q r = 0$. Positive root: $M = \frac{(a^2 - 1) r + q + \sqrt{\left((a^2 - 1) r + q\right)^2 + 4 q r}}{2}$.
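The fixed point can also be found by iterating the Riccati recursion and compared against the quadratic's positive root; $a$, $q$, $r$ here are illustrative stand-ins for the exercise's values:

```python
import math

# Illustrative scalar-model parameters.
a, q, r = 0.9, 0.5, 1.0

# Fixed-point iteration of the scalar DARE: M <- a^2 M - a^2 M^2/(M+r) + q.
M = 0.0
for _ in range(1000):
    M = a**2 * M - a**2 * M**2 / (M + r) + q

# Positive root of M^2 + (r - a^2 r - q) M - q r = 0.
b = r - a**2 * r - q
M_root = (-b + math.sqrt(b**2 + 4 * q * r)) / 2
```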
ex-fsi-ch10-06
Medium: Derive the information-form update: show that $P_{n|n}^{-1} = P_{n|n-1}^{-1} + H^T R^{-1} H$.
Apply the Woodbury matrix identity to the standard covariance update.
Equivalently, interpret $H^T R^{-1} H$ as the information contributed by the observation.
Start from the covariance update
$P_{n|n} = (I - K_n H) P_{n|n-1} = P_{n|n-1} - P_{n|n-1} H^T (H P_{n|n-1} H^T + R)^{-1} H P_{n|n-1}$.
Apply Woodbury
With $A = P_{n|n-1}^{-1}$, $U = H^T$, $C = R^{-1}$, $V = H$, Woodbury gives $(P_{n|n-1}^{-1} + H^T R^{-1} H)^{-1} = P_{n|n-1} - P_{n|n-1} H^T (R + H P_{n|n-1} H^T)^{-1} H P_{n|n-1} = P_{n|n}$, which is exactly the identity to be shown.
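The identity can be sanity-checked numerically on arbitrary SPD test matrices (sizes and values below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 2
S = rng.standard_normal((n, n))
P_pred = S @ S.T + np.eye(n)             # a generic SPD prediction covariance
H = rng.standard_normal((m, n))
R = 0.5 * np.eye(m)

# Standard covariance update via the Kalman gain.
K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
P_post = (np.eye(n) - K @ H) @ P_pred

# Information-form identity: P_post^{-1} = P_pred^{-1} + H^T R^{-1} H.
lhs = np.linalg.inv(P_post)
rhs = np.linalg.inv(P_pred) + H.T @ np.linalg.inv(R) @ H
```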
ex-fsi-ch10-07
Medium: Prove that in steady state the Kalman filter is an LTI system, and express its transfer function from $y_n$ to $\hat{x}_{n|n}$.
In steady state $K_n \to \bar{K}$ and $P_{n|n-1} \to M$, so the recursion has constant coefficients.
Use the $z$-transform on $\hat{x}_{n|n} = (I - \bar{K} H) A\, \hat{x}_{n-1|n-1} + \bar{K} y_n$.
LTI recursion
Substitute the update into the predict step to get $\hat{x}_{n|n} = F \hat{x}_{n-1|n-1} + \bar{K} y_n$, where $F = (I - \bar{K} H) A$.
Transfer function
$\mathcal{H}(z) = (I - F z^{-1})^{-1} \bar{K}$. This is the state-space realization of the causal Wiener filter for the signal model.
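In the scalar case the steady-state filter's single pole is $(1 - \bar{K}) a$; a quick check with illustrative parameters confirms it lies inside the unit circle:

```python
# Illustrative scalar-model parameters.
a, q, r = 0.9, 0.5, 1.0

# Iterate the DARE to the steady-state prediction variance M.
M = 0.0
for _ in range(1000):
    M = a**2 * M * r / (M + r) + q   # equivalent form of the Riccati step

K = M / (M + r)                      # steady-state gain K-bar
pole = (1 - K) * a                   # pole of H(z) = K / (1 - (1-K) a z^{-1})
```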
ex-fsi-ch10-08
Medium: A random walk $x_{n+1} = x_n + u_n$ with $\mathrm{Var}(u_n) = q$ is observed as $y_n = x_n + w_n$ with $\mathrm{Var}(w_n) = r$. Find the steady-state Kalman gain and the MSE.
DARE: $M = M - \frac{M^2}{M + r} + q$.
Reduce to a quadratic and take the positive root.
DARE
$M = M - \frac{M^2}{M + r} + q$, i.e., $M^2 - q M - q r = 0$.
Solution
$M = \frac{q + \sqrt{q^2 + 4 q r}}{2}$; steady-state gain $\bar{K} = \frac{M}{M + r}$, filtering MSE $P = (1 - \bar{K}) M = \frac{M r}{M + r}$.
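A quick check of the closed form against direct iteration, with illustrative $q$ and $r$ (for $q = 0.5$, $r = 1$ the positive root is exactly $M = 1$):

```python
import math

q, r = 0.5, 1.0                      # illustrative noise variances

# Closed-form positive root of M^2 - q M - q r = 0.
M_closed = (q + math.sqrt(q**2 + 4 * q * r)) / 2

# Direct iteration of the random-walk DARE (a = 1).
M = 0.0
for _ in range(2000):
    M = M - M**2 / (M + r) + q

K = M / (M + r)                      # steady-state gain
mse = M * r / (M + r)                # filtering MSE, equal to (1 - K) M
```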
ex-fsi-ch10-09
Medium: Show that the Kalman filter recovers the LMMSE estimator when the model is jointly Gaussian, and give a one-sentence argument why it is MMSE (not just LMMSE) under Gaussian assumptions.
Gaussian conditional distributions are linear in the observation.
LMMSE = MMSE when $(x, y)$ is jointly Gaussian.
LMMSE
For a jointly Gaussian pair, the conditional mean $E[x \mid y]$ is an affine function of $y$ with coefficients given by the Wiener-Hopf normal equations. The Kalman gain is exactly that coefficient at step $n$.
Why MMSE
For jointly Gaussian random variables the conditional mean is linear, so the best affine estimator is the conditional mean, which is the MMSE estimator.
ex-fsi-ch10-10
Medium: Derive the EKF equations from the Kalman equations by linearizing the nonlinear state equation $x_{n+1} = f(x_n) + u_n$ around $\hat{x}_{n|n}$.
Use a first-order Taylor expansion.
Evaluate the Jacobian $F_n = \left.\frac{\partial f}{\partial x}\right|_{x = \hat{x}_{n|n}}$.
Linearization
$f(x_n) \approx f(\hat{x}_{n|n}) + F_n (x_n - \hat{x}_{n|n})$.
Plug into Kalman
Replace $A$ by $F_n$ in the covariance propagation; propagate the mean through the nonlinearity: $\hat{x}_{n+1|n} = f(\hat{x}_{n|n})$, $P_{n+1|n} = F_n P_{n|n} F_n^T + Q$.
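A one-step EKF time update for a toy scalar nonlinearity (the function $f$ and all values below are purely illustrative):

```python
import numpy as np

dt, Q = 0.1, 0.01                    # illustrative step size and process noise

def f(x):
    # Toy nonlinear state map, chosen only for illustration.
    return x + np.sin(x) * dt

def F_jac(x):
    # Jacobian df/dx evaluated at the linearization point.
    return 1.0 + np.cos(x) * dt

x_post, P_post = 0.5, 0.2            # current posterior mean and variance
x_pred = f(x_post)                   # mean goes through the true nonlinearity
F = F_jac(x_post)                    # Jacobian at the posterior mean
P_pred = F * P_post * F + Q          # covariance propagated via the Jacobian
```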
ex-fsi-ch10-11
Medium: Why is the UKF generally more accurate than the EKF? Give a two-sentence argument tied to the sigma-point approximation.
The EKF propagates only the mean through the nonlinearity.
The UKF captures second-order moments via a deterministic sampling set.
EKF limitation
The EKF linearizes around the current mean; it is only accurate to first order in the Taylor expansion of the nonlinearity.
UKF advantage
The UKF deterministically samples sigma points that match the mean and covariance, propagates each through the true nonlinearity, and recomputes moments. This captures mean and covariance to (at least) second order of the Taylor expansion.
ex-fsi-ch10-12
Medium: Given a $k$-step predictor $\hat{x}_{n+k|n}$ for an LTI state-space model, express it in terms of $\hat{x}_{n|n}$ and $A$.
Iterate the prediction step.
Use $E[u_{n+j} \mid y_{1:n}] = 0$ for all $j \ge 0$.
Recursion
$\hat{x}_{n+k|n} = A^k \hat{x}_{n|n}$.
Covariance
$P_{n+k|n} = A^k P_{n|n} (A^k)^T + \sum_{j=0}^{k-1} A^j Q (A^j)^T$.
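The closed form agrees with iterating the one-step predictor; the model below is an illustrative constant-velocity example:

```python
import numpy as np

T = 1.0
A = np.array([[1.0, T], [0.0, 1.0]])     # illustrative LTI transition
Q = 0.1 * np.eye(2)                      # illustrative process noise
x_post = np.array([1.0, 0.5])
P_post = np.eye(2)
k = 3

# Closed form: A^k x and A^k P (A^k)^T + sum_j A^j Q (A^j)^T.
Ak = np.linalg.matrix_power(A, k)
x_pred = Ak @ x_post
P_pred = Ak @ P_post @ Ak.T + sum(
    np.linalg.matrix_power(A, j) @ Q @ np.linalg.matrix_power(A, j).T
    for j in range(k)
)

# Same result by applying the one-step predictor k times.
x_it, P_it = x_post.copy(), P_post.copy()
for _ in range(k):
    x_it = A @ x_it
    P_it = A @ P_it @ A.T + Q
```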
ex-fsi-ch10-13
Hard: Prove that the DARE has a unique positive-semidefinite solution under detectability of $(A, H)$ and stabilizability of $(A, Q^{1/2})$.
Use the monotonicity of the Riccati operator.
Consult Anderson-Moore Ch. 4 for the full proof.
Existence by monotone convergence
Initialize with $P_0 = 0$. The Riccati recursion produces a monotonically non-decreasing sequence bounded above under detectability, hence convergent. The limit satisfies the DARE.
Uniqueness
Suppose two PSD solutions $P_1 \neq P_2$ existed. Then $\Delta = P_1 - P_2$ satisfies a Lyapunov equation with a stable closed-loop matrix under stabilizability, forcing $\Delta = 0$. See Anderson-Moore, Ch. 4.
ex-fsi-ch10-14
Hard: Design a Kalman filter for bearings-only tracking: an observer at the origin sees a target at $(x_n, y_n)$ moving with constant velocity, and measures only the bearing angle $\theta_n = \arctan(y_n / x_n)$. Explain why this problem is not observable from a stationary observer.
Write the 4-state constant-velocity model and the scalar nonlinear observation.
Consider a target moving along a ray from the origin.
State-space model and EKF
The state is $s_n = [x_n \;\; y_n \;\; \dot{x}_n \;\; \dot{y}_n]^T$; $A = \begin{bmatrix} I_2 & T I_2 \\ 0 & I_2 \end{bmatrix}$ is block-upper triangular with $T I_2$ coupling. The EKF linearizes the arctan observation via the Jacobian $H_n = \frac{1}{x_n^2 + y_n^2} [-y_n \;\; x_n \;\; 0 \;\; 0]$.
Observability failure
Scaling a target's position and velocity by a common factor leaves every bearing unchanged, so range is unobservable from a stationary observer. The observer must manoeuvre to break this symmetry and make the system locally observable.
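The bearing Jacobian used by the EKF can be verified by finite differences; the state value below is illustrative:

```python
import numpy as np

def h(s):
    # Bearing observation for state s = [x, y, vx, vy].
    return np.arctan2(s[1], s[0])

def H_jac(s):
    # Analytic Jacobian of the bearing w.r.t. the 4-dimensional state.
    x, y = s[0], s[1]
    d2 = x**2 + y**2
    return np.array([-y / d2, x / d2, 0.0, 0.0])

s = np.array([3.0, 4.0, 0.1, -0.2])      # illustrative state
J = H_jac(s)

# Central finite-difference check of each partial derivative.
eps = 1e-6
J_fd = np.array([(h(s + eps * e) - h(s - eps * e)) / (2 * eps)
                 for e in np.eye(4)])
```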
ex-fsi-ch10-15
Hard: A student notices that his Kalman filter "diverges": the state estimate drifts away from truth even though innovations look small. Give two distinct causes and the corresponding fix for each.
Numerical: loss of symmetry/positive-definiteness in $P$.
Modeling: $Q$ or $R$ mis-set.
Numerical divergence
Fix: use a square-root or Joseph-form update to maintain $P$ symmetric and positive semi-definite at every step.
Model mismatch
Underestimated $Q$ makes the filter overconfident; it ignores new measurements. Fix: inflate $Q$, or add adaptive noise estimation (IMM / Mehra's innovations method).
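A sketch of the Joseph-form update, which preserves symmetry and positive semi-definiteness by construction (the test matrices below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 4, 2
S = rng.standard_normal((n, n))
M = S @ S.T + np.eye(n)                  # SPD prediction covariance
H = rng.standard_normal((m, n))
R = 0.5 * np.eye(m)

K = M @ H.T @ np.linalg.inv(H @ M @ H.T + R)
I = np.eye(n)

# Joseph form: (I - KH) M (I - KH)^T + K R K^T, symmetric PSD for any gain K.
P_joseph = (I - K @ H) @ M @ (I - K @ H).T + K @ R @ K.T
```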