Examples and Beyond Wiener: Kalman, LMS, RLS
The Wiener Filter as the Centerpiece of Linear Estimation
The Wiener filter is beautiful, classical, and, in its pure form, rarely used in practice. Why? Because it assumes we know the second-order statistics exactly, and because it assumes stationarity. Real systems face non-stationarity (a user walks into a tunnel, a channel changes) and uncertainty about the statistics (we do not know $S_X(\omega)$ and $S_V(\omega)$ from first principles; we have to estimate them). The Wiener filter is the theoretical ceiling against which three practical alternatives are benchmarked:
- The Kalman filter (Chapter 10) generalizes Wiener to time-varying state-space models. Its steady-state converges to the causal Wiener filter when the model is time-invariant.
- Adaptive filters (LMS, RLS) estimate the statistics on the fly, producing a time-varying approximation to the Wiener filter that tracks slow changes in the channel.
- Model-based Wiener filters are the causal Wiener filter with the PSDs replaced by parametric models (AR or ARMA) whose parameters are estimated from data.
This closing section works through the canonical AR(1)+noise example end-to-end, makes the Kalman-Wiener bridge explicit, and sketches the transition to adaptive filtering.
Example: AR(1)+Noise: End-to-End Walkthrough
Consider the AR(1) signal $X_n = aX_{n-1} + W_n$ with $|a| < 1$ and innovation variance $\sigma_w^2$, observed in white noise of variance $\sigma_v^2$: $Y_n = X_n + V_n$. Compute (a) the non-causal Wiener filter and its MMSE; (b) the spectral factors of $S_Y$; (c) the causal Wiener filter, in recursive form; (d) the one-step prediction MMSE for $Y$.
(a) Non-causal Wiener and MMSE
$H(e^{j\omega}) = \frac{S_X(e^{j\omega})}{S_X(e^{j\omega}) + \sigma_v^2}$. Signal variance: $\sigma_x^2 = \sigma_w^2/(1-a^2)$. SNR $= \sigma_x^2/\sigma_v^2$ (7.45 dB). $\mathrm{MMSE}_{\mathrm{nc}} = \frac{1}{2\pi}\int_{-\pi}^{\pi} \frac{S_X(e^{j\omega})\,\sigma_v^2}{S_X(e^{j\omega}) + \sigma_v^2}\,d\omega$. The numerical value follows by integrating this expression.
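The integral above is easy to evaluate numerically. A minimal sketch, using illustrative parameter values $a = 0.9$, $\sigma_w^2 = \sigma_v^2 = 1$ (placeholders, not the chapter's own numbers):

```python
import numpy as np

# Illustrative placeholder values: AR pole, innovation var, noise var
a, sw2, sv2 = 0.9, 1.0, 1.0

# Uniform grid over one period; the mean over the grid approximates (1/2pi) * integral
w = np.linspace(-np.pi, np.pi, 200001, endpoint=False)
Sx = sw2 / np.abs(1 - a * np.exp(-1j * w))**2          # AR(1) PSD
mmse = np.mean(Sx * sv2 / (Sx + sv2))                  # non-causal Wiener MMSE

sigma_x2 = sw2 / (1 - a**2)                            # signal variance
print(sigma_x2, 10 * np.log10(sigma_x2 / sv2), mmse)
```

With these placeholder values the non-causal MMSE comes out well below the noise variance, quantifying how much smoothing with the full (two-sided) observation record buys.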
(b) Spectral factorization
Apply the spectral factorization of the AR(1)+noise PSD with pole $a$, innovation variance $\sigma_w^2$, and noise variance $\sigma_v^2$: $S_Y(z) = \frac{\sigma_w^2}{(1-az^{-1})(1-az)} + \sigma_v^2 = \sigma_\varepsilon^2\,\frac{(1-bz^{-1})(1-bz)}{(1-az^{-1})(1-az)}$. Matching coefficients gives $\sigma_\varepsilon^2 b = a\sigma_v^2$ and $\sigma_\varepsilon^2(1+b^2) = \sigma_v^2(1+a^2) + \sigma_w^2$, so solve $b + \frac{1}{b} = \frac{\sigma_v^2(1+a^2) + \sigma_w^2}{a\,\sigma_v^2}$, i.e., $a\sigma_v^2\, b^2 - \bigl(\sigma_v^2(1+a^2)+\sigma_w^2\bigr)b + a\sigma_v^2 = 0$. The roots are reciprocals of each other; pick the one with $|b| < 1$. Then $\sigma_\varepsilon^2 = a\sigma_v^2/b$. Therefore $S_Y^+(z) = \sigma_\varepsilon\,\frac{1-bz^{-1}}{1-az^{-1}}$.
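The factorization reduces to one quadratic. A quick numerical check, again with the illustrative placeholders $a = 0.9$, $\sigma_w^2 = \sigma_v^2 = 1$, verifying that the factored form reproduces $S_Y$ on the unit circle:

```python
import numpy as np

# Placeholder parameters: AR pole, innovation variance, noise variance
a, sw2, sv2 = 0.9, 1.0, 1.0

c = (sv2 * (1 + a**2) + sw2) / (sv2 * a)   # b + 1/b = c
b = (c - np.sqrt(c**2 - 4)) / 2            # root inside the unit circle
se2 = sv2 * a / b                          # innovation variance sigma_eps^2

# Verify: PSD rebuilt from the spectral factors matches the original PSD
w = np.linspace(-np.pi, np.pi, 1001)
Sy = sw2 / np.abs(1 - a * np.exp(-1j * w))**2 + sv2
Sy_fact = se2 * np.abs(1 - b * np.exp(-1j * w))**2 / np.abs(1 - a * np.exp(-1j * w))**2
print(b, se2, np.max(np.abs(Sy - Sy_fact)))
```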
(c) Causal Wiener filter
Apply the closed-form causal Wiener filter for AR(1)+noise. The recursion is $\hat{X}_n = b\,\hat{X}_{n-1} + K\,Y_n$ with gain $K = 1 - b/a$ and closed-loop pole $b$; the causal MMSE is $K\sigma_v^2$. It exceeds the non-causal MMSE, and the ratio $\mathrm{MMSE}_{\mathrm{c}}/\mathrm{MMSE}_{\mathrm{nc}}$ quantifies the price of causality.
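A simulation sketch, under the same placeholder parameters, running the first-order recursion on synthetic data and checking the empirical error variance against the theoretical causal MMSE $K\sigma_v^2$:

```python
import numpy as np

rng = np.random.default_rng(0)
a, sw2, sv2 = 0.9, 1.0, 1.0                      # placeholder parameters

# Spectral factor b and Wiener gain K = 1 - b/a (from the factorization step)
c = (sv2 * (1 + a**2) + sw2) / (sv2 * a)
b = (c - np.sqrt(c**2 - 4)) / 2
K = 1 - b / a

# Simulate X_n = a X_{n-1} + W_n, Y_n = X_n + V_n, then run the recursion
n = 200_000
x = np.zeros(n)
xhat = np.zeros(n)
wn = rng.normal(0, np.sqrt(sw2), n)
vn = rng.normal(0, np.sqrt(sv2), n)
for t in range(1, n):
    x[t] = a * x[t - 1] + wn[t]
y = x + vn
for t in range(1, n):
    xhat[t] = b * xhat[t - 1] + K * y[t]         # causal Wiener recursion

emp_mse = np.mean((x[1000:] - xhat[1000:])**2)   # discard the transient
print(K, K * sv2, emp_mse)                       # theory vs. simulation
```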
(d) One-step prediction for $Y$
By Kolmogorov-Szegő, the one-step prediction MMSE for $Y$ is $\sigma_\varepsilon^2 = \exp\!\left(\frac{1}{2\pi}\int_{-\pi}^{\pi} \ln S_Y(e^{j\omega})\,d\omega\right)$, the geometric mean of $S_Y$; for rational factorable spectra of this form it equals $a\sigma_v^2/b$. Compare with the total variance $\sigma_y^2 = \sigma_x^2 + \sigma_v^2$. The ratio $\sigma_\varepsilon^2/\sigma_y^2$ measures the predictability of $Y$: the smaller it is, the more predictable the process.
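The Kolmogorov-Szegő integral and the innovation variance from the factorization must agree; a numerical check under the same placeholder parameters:

```python
import numpy as np

a, sw2, sv2 = 0.9, 1.0, 1.0                      # placeholder parameters

w = np.linspace(-np.pi, np.pi, 200001, endpoint=False)
Sy = sw2 / np.abs(1 - a * np.exp(-1j * w))**2 + sv2

# Kolmogorov-Szego: one-step prediction MMSE = geometric mean of S_Y
se2_ks = np.exp(np.mean(np.log(Sy)))

# Same quantity from the spectral factorization: sigma_eps^2 = a*sv2/b
c = (sv2 * (1 + a**2) + sw2) / (sv2 * a)
b = (c - np.sqrt(c**2 - 4)) / 2
se2_fact = sv2 * a / b

sigma_y2 = sw2 / (1 - a**2) + sv2                # total variance of Y
print(se2_ks, se2_fact, se2_ks / sigma_y2)       # predictability ratio < 1
```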
Wiener Prediction for Massive MIMO Channel Aging
In massive MIMO systems with user mobility the channel evolves between uplink pilot transmissions, and the base station's precoder, designed from a stale estimate, suffers a SINR loss known as channel aging. The CommIT group showed that by modeling each antenna's channel as a narrowband WSS process in time (governed by a Jakes-like Doppler spectrum), one can apply the Wiener-Kolmogorov causal prediction framework of this chapter on a per-eigenbeam basis. The resulting subspace-based predictor tracks the dominant eigenmodes of the channel covariance and predicts each one independently. The prediction horizon is limited by the Kolmogorov-Szegő bound: for a user moving at 30 km/h at 3 GHz carrier, the geometric mean of the Jakes PSD falls one to three dB below the variance, corresponding to a useful prediction horizon of roughly one channel coherence time. Beyond this horizon the predictor loses to simply assuming the channel is unchanged.
The Bridge to Kalman: State-Space Is the Finite-Memory Formulation
For the AR(1)+noise problem, the causal Wiener filter is a first-order recursion: $\hat{X}_n = b\,\hat{X}_{n-1} + K\,Y_n$. This is exactly the steady-state Kalman filter for the scalar state-space model $X_n = aX_{n-1} + W_n$, $Y_n = X_n + V_n$. Here is the dictionary:
| Wiener quantity | Kalman quantity |
|---|---|
| AR pole $a$ | State transition scalar $a$ |
| Innovation variance $\sigma_w^2$ | Process noise variance $Q$ |
| Noise variance $\sigma_v^2$ | Measurement noise variance $R$ |
| Wiener gain $K = 1 - b/a$ | Steady-state Kalman gain $\bar{K}$ |
| Closed-loop pole $b$ | $a(1-\bar{K})$ |
The Kalman formulation has two advantages. First, it applies to time-varying state-space models, where the Wiener filter simply does not apply. Second, it is recursive by construction: no spectral factorization is needed, because the recursive update effectively computes it on the fly through the Riccati equation. The Wiener filter, in turn, has the virtue of an explicit closed-form expression and a cleaner connection to frequency-domain intuition. Both perspectives are essential, and Chapter 10 builds the Kalman framework from scratch.
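The dictionary can be verified directly: iterating the scalar Riccati recursion to steady state recovers the Wiener gain and closed-loop pole. A sketch under the placeholder parameters $a = 0.9$, $Q = R = 1$:

```python
import numpy as np

a, Q, R = 0.9, 1.0, 1.0            # placeholder state-space parameters

# Iterate the scalar Riccati recursion to steady state
P = 1.0                            # initial filtered-state variance
for _ in range(200):
    Pp = a * a * P + Q             # predict
    K = Pp / (Pp + R)              # Kalman gain
    P = (1 - K) * Pp               # update (filtered variance)

# Wiener-side quantities from the spectral factorization
c = (R * (1 + a**2) + Q) / (R * a)
b = (c - np.sqrt(c**2 - 4)) / 2
print(K, 1 - b / a, a * (1 - K))   # K equals 1 - b/a; closed-loop pole is b
```

The Riccati iteration converges geometrically, and the steady-state filtered variance $P = KR$ matches the causal Wiener MMSE of the example.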
Adaptive Wiener: LMS and RLS
When the statistics $\mathbf{R} = E[\mathbf{x}_n\mathbf{x}_n^T]$ and $\mathbf{p} = E[\mathbf{x}_n d_n]$ are unknown, we can estimate them from data. Two archetypal algorithms:
- Least Mean Squares (LMS) uses a stochastic-gradient update: $\mathbf{w}_{n+1} = \mathbf{w}_n + \mu\, e_n\, \mathbf{x}_n$, where $e_n = d_n - \mathbf{w}_n^T\mathbf{x}_n$ is the instantaneous error. LMS converges in mean to the Wiener filter, but slowly, with $O(N)$ complexity per sample for a length-$N$ filter. Convergence requires $0 < \mu < 2/\lambda_{\max}(\mathbf{R})$, and the steady-state excess MSE (the "misadjustment") is approximately $\mu\,\mathrm{tr}(\mathbf{R})/2$ times the minimum MSE.
- Recursive Least Squares (RLS) solves the exponentially-weighted least-squares problem recursively via the matrix inversion lemma. RLS converges much faster, on the order of $N$ samples regardless of the eigenvalue spread of $\mathbf{R}$, at the cost of $O(N^2)$ operations per sample.
Both are on-line approximations to the Wiener filter. In the limit of infinite data and constant statistics, both converge to the FIR Wiener solution. Their value lies in tracking time-varying statistics, something the stationary Wiener filter cannot do.
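A minimal LMS sketch in a system-identification setting: the desired signal is a fixed linear combination of the input plus noise, so the Wiener solution is the true weight vector, and the LMS iterates should approach it up to misadjustment. The weight vector, step size, and noise level below are illustrative choices, not values from the chapter.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 4                                      # filter length
w_opt = np.array([1.0, -0.5, 0.25, 0.1])   # illustrative Wiener solution
mu = 0.01                                  # step size (well inside 2/lambda_max)
w = np.zeros(N)

for _ in range(50_000):
    x = rng.normal(size=N)                 # white input => R = I
    d = w_opt @ x + 0.1 * rng.normal()     # desired signal with observation noise
    e = d - w @ x                          # instantaneous error
    w = w + mu * e * x                     # stochastic-gradient (LMS) update

print(np.round(w, 3))                      # close to w_opt, up to misadjustment
```

With white input ($\mathbf{R} = \mathbf{I}$) every mode converges at the same rate; a colored input would slow the smallest-eigenvalue mode, which is exactly the regime where RLS earns its extra cost.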
Why This Matters: MMSE Equalization as Wiener Filtering
In a frequency-selective fading channel, the received signal is $Y(f) = H(f)X(f) + V(f)$ in the frequency domain. The MMSE equalizer, which you will meet in Chapter 11, is exactly the Wiener filter for this problem: $G(f) = \frac{H^*(f)\,S_X(f)}{|H(f)|^2 S_X(f) + S_V(f)}$. The zero-forcing equalizer $G(f) = 1/H(f)$ is its high-SNR limit. For OFDM (Book Telecom, Ch 14) each subcarrier's per-tone MMSE estimator is a scalar Wiener filter applied independently; this is one of the major reasons OFDM is the dominant physical-layer architecture in 4G, 5G, and Wi-Fi.
See full treatment in Chapter 11
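A per-tone sketch of this idea, with an illustrative 3-tap channel, BPSK symbols, and a 20 dB SNR (all assumptions for the demo, not chapter values):

```python
import numpy as np

rng = np.random.default_rng(2)

Nfft = 256
h = np.array([1.0, 0.5, 0.2])                    # illustrative channel taps
H = np.fft.fft(h, Nfft)                          # per-subcarrier channel gains
snr = 100.0                                      # sigma_x^2 / sigma_v^2 = 20 dB

X = rng.choice([-1.0, 1.0], Nfft)                # unit-variance BPSK symbols
V = rng.normal(0, np.sqrt(0.5 / snr), Nfft) \
    + 1j * rng.normal(0, np.sqrt(0.5 / snr), Nfft)
Y = H * X + V                                    # one received OFDM symbol

G_mmse = np.conj(H) / (np.abs(H)**2 + 1 / snr)   # scalar Wiener filter per tone
Xhat = np.real(G_mmse * Y)                       # equalized soft symbols
acc = np.mean(np.sign(Xhat) == X)                # BPSK detection accuracy
print(acc)
```

Replacing `G_mmse` with the zero-forcing `1 / H` illustrates the trade-off: at tones where $|H(f)|$ is small, zero-forcing amplifies noise, while the MMSE gain backs off toward zero.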
Robustness Under Statistical Mismatch
A Wiener filter designed with PSD estimates that differ from the true PSDs produces a suboptimal MSE. For signal-plus-noise problems, a standard robustness result says that because the Wiener solution is a stationary point of the MSE, a modest error in the assumed SNR produces only a second-order MSE degradation relative to the optimum at moderate SNR. The practical implication: Wiener filters are most useful when the statistics are reasonably stable and estimable from training data. In rapidly changing environments an adaptive filter, which uses only a short effective memory, typically outperforms a mis-designed static Wiener filter.
- Performance degrades gracefully under statistical mismatch but requires correct qualitative structure (e.g., signal-in-noise).
- Robust variants (e.g., minimax Wiener filters) exist for worst-case uncertainty sets.
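Graceful degradation is easy to see in the scalar signal-in-noise case: design the Wiener gain under a mismatched SNR and compare the resulting MSE to the optimum. The variances below are illustrative placeholders.

```python
# Scalar signal-in-noise: X has variance sx2, Y = X + V, V has variance sv2.
sx2, sv2 = 5.0, 1.0                        # illustrative true variances
g_opt = sx2 / (sx2 + sv2)                  # optimal Wiener gain
mse = lambda g: (1 - g)**2 * sx2 + g**2 * sv2

ratios = []
for rho in [0.5, 1.0, 2.0]:                # assumed SNR = rho * true SNR
    g = rho * sx2 / (rho * sx2 + sv2)      # gain designed under mismatch
    ratios.append(mse(g) / mse(g_opt))     # relative MSE penalty
print([round(r, 3) for r in ratios])
```

Even a factor-of-two SNR error costs only a few percent in MSE here, while getting the qualitative model wrong (e.g., treating correlated noise as white) can cost far more.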
The Wiener Family Tree
Key Takeaway
The Wiener filter sits at the crossroads of linear estimation. Every adaptive or model-based filter in practice (Kalman, LMS, RLS, MMSE equalizer) is a variant or approximation of it. Knowing the Wiener solution gives you the fundamental limit against which any practical algorithm must be measured.
Quick Check
In what sense does the steady-state Kalman filter equal the causal Wiener filter?
They coincide exactly when the state-space model is time-invariant and the Kalman filter has reached its steady-state gain.
They are exactly equal for any state-space model, time-varying or not.
They differ by a time-invariant all-pass filter.
For a time-invariant state-space model with time-invariant noise covariances, the Riccati recursion of the Kalman filter converges to a steady-state value; the corresponding steady-state Kalman gain produces exactly the same input-output relationship as the causal Wiener filter derived from the model's implied PSDs.