Adaptive Equalization
Learning the Channel on the Fly
All equalizers discussed so far assume the channel is known. In practice, the channel must be estimated and may change over time (e.g., due to mobility). Adaptive equalization algorithms update the equalizer coefficients iteratively, either using a known training sequence or operating in decision-directed mode where past symbol decisions serve as the reference. The two dominant algorithms are the least mean squares (LMS) algorithm, prized for its simplicity, and the recursive least squares (RLS) algorithm, which converges faster at the cost of higher complexity.
Definition: Least Mean Squares (LMS) Algorithm
The LMS algorithm is a stochastic gradient descent method that adapts the equalizer tap vector $\mathbf{w}[n]$ to minimise the mean squared error $\mathbb{E}\left[|e[n]|^2\right]$, where $e[n] = d[n] - z[n]$ is the error between the desired output $d[n]$ and the equalizer output $z[n]$.
At each time step $n$:
- Compute the output: $z[n] = \mathbf{w}^H[n]\,\mathbf{y}[n]$
- Compute the error: $e[n] = d[n] - z[n]$
- Update the taps: $\mathbf{w}[n+1] = \mathbf{w}[n] + \mu\, e^*[n]\,\mathbf{y}[n]$
where $\mu$ is the step size and $d[n]$ is either the known training symbol (training mode) or the hard decision $\hat{a}[n]$ (decision-directed mode).
The LMS algorithm has complexity $O(N)$ per symbol --- one multiply-accumulate per tap.
The LMS update is the instantaneous (sample-by-sample) approximation to the steepest descent algorithm $\mathbf{w}[n+1] = \mathbf{w}[n] + \mu\left(\mathbf{p} - \mathbf{R}\,\mathbf{w}[n]\right)$. The noise in the gradient estimate is the price paid for avoiding the computation of the autocorrelation matrix $\mathbf{R} = \mathbb{E}\left[\mathbf{y}[n]\mathbf{y}^H[n]\right]$ and the cross-correlation vector $\mathbf{p} = \mathbb{E}\left[\mathbf{y}[n]\,d^*[n]\right]$.
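To make the three steps concrete, here is a minimal NumPy sketch of an LMS transversal equalizer. The function name, the newest-sample-first buffer convention, and the absence of a decision delay are illustrative assumptions, not details from the source.

```python
import numpy as np

def lms_equalizer(y, d, num_taps, mu):
    """Minimal LMS sketch: adapt taps w over received samples y,
    with d as the desired (training or decision) sequence."""
    w = np.zeros(num_taps, dtype=complex)      # tap vector w[n]
    buf = np.zeros(num_taps, dtype=complex)    # y[n], y[n-1], ..., y[n-N+1]
    err = np.zeros(len(d), dtype=complex)      # error history e[n]
    for n in range(len(d)):
        buf = np.roll(buf, 1)
        buf[0] = y[n]                          # newest sample first
        z = np.vdot(w, buf)                    # output z[n] = w^H y[n]
        err[n] = d[n] - z                      # error e[n] = d[n] - z[n]
        w = w + mu * np.conj(err[n]) * buf     # w[n+1] = w[n] + mu e*[n] y[n]
    return w, err
```

Each iteration touches every tap exactly once, consistent with the $O(N)$ per-symbol cost stated above.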
Definition: Recursive Least Squares (RLS) Algorithm
The RLS algorithm minimises the exponentially weighted least-squares cost
$$J[n] = \sum_{k=0}^{n} \lambda^{n-k}\,|e[k]|^2,$$
where $0 < \lambda \le 1$ is the forgetting factor.
The RLS update is:
- Gain vector: $\mathbf{k}[n] = \dfrac{\mathbf{P}[n-1]\,\mathbf{y}[n]}{\lambda + \mathbf{y}^H[n]\,\mathbf{P}[n-1]\,\mathbf{y}[n]}$
- Error: $e[n] = d[n] - \mathbf{w}^H[n-1]\,\mathbf{y}[n]$
- Tap update: $\mathbf{w}[n] = \mathbf{w}[n-1] + \mathbf{k}[n]\,e^*[n]$
- Inverse correlation update: $\mathbf{P}[n] = \lambda^{-1}\left(\mathbf{P}[n-1] - \mathbf{k}[n]\,\mathbf{y}^H[n]\,\mathbf{P}[n-1]\right)$
where $\mathbf{P}[n]$ is a running estimate of $\mathbf{R}^{-1}$.
RLS converges much faster than LMS (its convergence rate is independent of the eigenvalue spread of $\mathbf{R}$) but has complexity $O(N^2)$ per symbol.
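A corresponding sketch of the four-step RLS recursion, using the standard initialisation $\mathbf{P}[0] = \delta\mathbf{I}$ (the value of $\delta$ below is an assumption):

```python
import numpy as np

def rls_equalizer(y, d, num_taps, lam=0.99, delta=100.0):
    """Minimal RLS sketch; lam is the forgetting factor and
    P is initialised to delta*I (delta is an assumed value)."""
    w = np.zeros(num_taps, dtype=complex)
    P = delta * np.eye(num_taps, dtype=complex)      # running estimate of R^{-1}
    buf = np.zeros(num_taps, dtype=complex)
    for n in range(len(d)):
        buf = np.roll(buf, 1)
        buf[0] = y[n]
        Py = P @ buf
        k = Py / (lam + np.vdot(buf, Py))            # gain vector k[n]
        e = d[n] - np.vdot(w, buf)                   # a priori error e[n]
        w = w + k * np.conj(e)                       # tap update
        P = (P - np.outer(k, np.conj(buf)) @ P) / lam  # inverse correlation update
    return w
```

The $O(N^2)$ per-symbol cost is visible in the matrix-vector products involving `P`.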
Definition: Training Mode
In training mode, the receiver knows the transmitted symbols during a preamble or midamble period. The adaptive algorithm uses these known symbols as the desired output to converge the equalizer taps. Training mode provides reliable convergence but reduces throughput because the training symbols carry no user data.
Definition: Decision-Directed Mode
In decision-directed (DD) mode, the desired output is set to the hard decision on the equalizer output: $d[n] = \hat{a}[n] = \operatorname{dec}(z[n])$. This allows the equalizer to track slow channel variations without interrupting data transmission.
Decision-directed mode works reliably only when the BER is already low (typically below $10^{-2}$), because incorrect decisions corrupt the adaptation. In practice, the equalizer first converges using a training sequence, then switches to decision-directed mode for tracking.
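A sketch of the usual train-then-track sequencing, reusing the LMS conventions from the earlier listing; the nearest-point slicer and the argument names are illustrative assumptions:

```python
import numpy as np

def lms_train_then_track(y, train, n_data, num_taps, mu, constellation):
    """Adapt in training mode over the known preamble, then switch to
    decision-directed mode using the nearest constellation point as d[n]."""
    w = np.zeros(num_taps, dtype=complex)
    buf = np.zeros(num_taps, dtype=complex)
    decisions = []
    for n in range(len(train) + n_data):
        buf = np.roll(buf, 1)
        buf[0] = y[n]
        z = np.vdot(w, buf)
        if n < len(train):
            d = train[n]                                  # training mode: known symbol
        else:
            d = constellation[np.argmin(np.abs(constellation - z))]  # hard decision
            decisions.append(d)
        e = d - z
        w = w + mu * np.conj(e) * buf                     # same LMS update in both modes
    return w, np.array(decisions)
```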
LMS Adaptive Equalizer
Complexity: Time: $O(N)$ per symbol (one complex multiply-accumulate per tap). Memory: $O(N)$ for the tap vector and input buffer. The step size must satisfy $0 < \mu < 2/\lambda_{\max}$ for convergence, where $\lambda_{\max}$ is the largest eigenvalue of the input autocorrelation matrix $\mathbf{R}$. A common rule of thumb is $\mu \approx \dfrac{0.1}{N\,\mathbb{E}\left[|y[n]|^2\right]}$, a small fraction of the conservative bound $2/\operatorname{tr}(\mathbf{R})$.
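Since $\lambda_{\max}$ is rarely known in advance, it can be estimated from the received samples themselves. A rough sketch assuming a wide-sense-stationary input; the function name and the simple sample-autocorrelation estimator are illustrative choices:

```python
import numpy as np

def lms_step_size_bound(y, num_taps):
    """Estimate the stability bound 2/lambda_max from a Toeplitz
    sample-autocorrelation estimate of the equalizer input."""
    # sample autocorrelation r[k] = E[y[n] y*[n-k]]
    r = np.array([np.mean(y[k:] * np.conj(y[:len(y) - k]))
                  for k in range(num_taps)])
    # Hermitian Toeplitz autocorrelation matrix R
    R = np.array([[r[i - j] if i >= j else np.conj(r[j - i])
                   for j in range(num_taps)] for i in range(num_taps)])
    lam_max = np.linalg.eigvalsh(R)[-1]   # eigvalsh returns ascending order
    return 2.0 / lam_max
```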
LMS Convergence Animation
Watch the LMS algorithm converge in real time. The animation shows the equalizer tap values and the MSE learning curve as a function of iteration number. Adjust the step size to see the trade-off between convergence speed and steady-state misadjustment.
Example: LMS Step Size and Convergence
An LMS equalizer with $N$ taps operates on a channel whose tap-input autocorrelation matrix $\mathbf{R}$ has largest eigenvalue $\lambda_{\max}$ and smallest eigenvalue $\lambda_{\min}$.
(a) Determine the range of stable step sizes.
(b) Compute the convergence time constants of the slowest and fastest modes for a given step size $\mu$.
(c) Compute the steady-state excess MSE (misadjustment).
Stability bound
For convergence, the step size must satisfy
$$0 < \mu < \frac{2}{\lambda_{\max}}.$$
A sufficient (more conservative) condition is $\mu < \dfrac{2}{\operatorname{tr}(\mathbf{R})} = \dfrac{2}{N\bar{\lambda}}$, since $\operatorname{tr}(\mathbf{R}) = \sum_i \lambda_i \ge \lambda_{\max}$.
So any step size below the trace bound guarantees stability.
Convergence time constant
The LMS convergence time constant for the slowest mode is
$$\tau_{\text{slow}} \approx \frac{1}{\mu\,\lambda_{\min}} \text{ symbols}.$$
The fastest mode converges in
$$\tau_{\text{fast}} \approx \frac{1}{\mu\,\lambda_{\max}} \text{ symbols}.$$
The eigenvalue spread $\chi(\mathbf{R}) = \lambda_{\max}/\lambda_{\min} = \tau_{\text{slow}}/\tau_{\text{fast}}$ determines how much slower the slowest mode is compared to the fastest.
Misadjustment
The misadjustment (excess MSE relative to the minimum MSE $J_{\min}$) is approximately
$$M \approx \frac{\mu}{2}\sum_{i=1}^{N}\lambda_i = \frac{\mu}{2}\,N\,\bar{\lambda},$$
where $\bar{\lambda}$ is the average eigenvalue. For instance, any combination of values with $\mu\,N\,\bar{\lambda} = 0.63$ gives $M \approx 0.315$.
So the steady-state MSE is about 31.5% above the Wiener optimum --- this is the price of using a stochastic gradient.
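As a quick arithmetic check of the three formulas, here is the computation with illustrative values assumed for this sketch ($N = 7$, $\lambda_{\max} = 1.0$, $\lambda_{\min} = 0.2$, $\mu = 0.15$, so that $\mu\,N\,\bar{\lambda} = 0.63$):

```python
# Arithmetic check with assumed illustrative values (not from the source).
lam_max, lam_min, N, mu = 1.0, 0.2, 7, 0.15
lam_avg = (lam_max + lam_min) / 2        # stand-in for the true average eigenvalue

mu_max = 2.0 / lam_max                   # (a) stability bound: mu < 2.0
tau_slow = 1.0 / (mu * lam_min)          # (b) slowest mode: ~33.3 symbols
tau_fast = 1.0 / (mu * lam_max)          #     fastest mode: ~6.7 symbols
M = 0.5 * mu * N * lam_avg               # (c) misadjustment: 0.315, i.e. 31.5%

print(mu_max, tau_slow, tau_fast, M)
```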
Quick Check
What is the main advantage of the RLS algorithm over LMS for adaptive equalization?
- RLS has lower computational complexity per symbol
- RLS converges faster, especially for channels with large eigenvalue spread
- RLS achieves lower steady-state MSE
- RLS does not require a training sequence

Correct. RLS convergence rate is independent of the eigenvalue spread of $\mathbf{R}$, while LMS convergence slows down in proportion to $\chi(\mathbf{R}) = \lambda_{\max}/\lambda_{\min}$.
Common Mistake: Choosing the LMS Step Size
Mistake:
Setting the LMS step size too large for fast convergence without checking the stability condition, causing the algorithm to diverge.
Correction:
The step size must satisfy $0 < \mu < 2/\lambda_{\max}$. A practical guideline is $\mu \approx \dfrac{0.1}{N\,\mathbb{E}\left[|y[n]|^2\right]}$, which provides a good trade-off between convergence speed and misadjustment. If convergence is too slow, increase $\mu$ cautiously and monitor the learning curve. If the MSE starts increasing, $\mu$ is too large.
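A toy experiment illustrating the failure mode, with an assumed two-tap ISI channel and BPSK symbols; the conservative step size converges while the oversized one drives the error to overflow:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.choice([-1.0, 1.0], size=2000)       # BPSK training symbols (assumed setup)
y = np.convolve(a, [1.0, 0.5])[:len(a)]      # mild two-tap ISI channel (assumed)
y += 0.05 * rng.standard_normal(len(a))      # additive noise

for mu in (0.01, 1.5):                       # conservative vs. recklessly large
    w = np.zeros(11)
    buf = np.zeros(11)
    e = 0.0
    for n in range(len(a)):
        buf = np.roll(buf, 1)
        buf[0] = y[n]
        e = a[n] - w @ buf                   # real-valued signals, so w^T y suffices
        if not np.isfinite(e):
            print(f"mu={mu}: diverged at n={n}")
            break
        w += mu * e * buf
    else:
        print(f"mu={mu}: final |e| = {abs(e):.3f}")
```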
Common Mistake: Premature Switch to Decision-Directed Mode
Mistake:
Switching from training mode to decision-directed mode before the equalizer has sufficiently converged, causing the equalizer to lock onto a wrong solution or diverge.
Correction:
Decision-directed mode requires the BER to be below approximately $10^{-2}$ for reliable adaptation. Always verify that the training-mode MSE has converged to a sufficiently low level before switching. In fast-fading environments, use longer training sequences or pilot-aided adaptation rather than pure decision-directed tracking.
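One simple way to enforce this is to gate the mode switch on the smoothed training-mode MSE; the window length and threshold below are assumed values for illustration:

```python
import numpy as np

def ready_for_dd(err_history, window=100, threshold=0.05):
    """Return True when the smoothed training-mode MSE over the last
    `window` symbols has dropped below `threshold` (assumed values)."""
    if len(err_history) < window:
        return False
    recent = np.asarray(err_history[-window:])
    return np.mean(np.abs(recent) ** 2) < threshold
```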
LMS vs. RLS Adaptive Algorithms
| Property | LMS | RLS |
|---|---|---|
| Complexity per symbol | $O(N)$ | $O(N^2)$ |
| Convergence speed | Depends on eigenvalue spread $\chi(\mathbf{R})$ | Independent of $\chi(\mathbf{R})$ |
| Tuning parameter | Step size $\mu$ | Forgetting factor $\lambda$ |
| Tracking ability | Good for slow variations | Better for fast variations |
| Numerical stability | Very stable | Can suffer from finite-precision issues |
| Memory | $O(N)$ | $O(N^2)$ (stores $\mathbf{P}[n]$) |
| Typical application | Low-complexity receivers | Fast convergence scenarios |
Deeper Treatment in the FSI Book
The Wiener filter and its adaptive implementations (LMS, RLS) are covered in depth in the FSI book (Chapters 6–8), which treats the general LMMSE estimation framework, Kalman filtering for time-varying channels, and convergence analysis with full measure-theoretic rigor. The FSP book (Chapter 8) covers the spectral factorisation underlying the MMSE-DFE from the stochastic processes perspective.
Least Mean Squares (LMS)
A stochastic gradient descent algorithm that adapts filter coefficients by updating in the direction of the instantaneous gradient of the squared error. Complexity is $O(N)$ per sample.
Related: Recursive Least Squares (RLS), Equalization
Recursive Least Squares (RLS)
An adaptive filtering algorithm that recursively minimises an exponentially weighted least-squares cost function. Converges faster than LMS but requires $O(N^2)$ operations per sample.
Related: Least Mean Squares (LMS), Equalization
Training Mode
An operating mode of an adaptive equalizer in which known pilot/training symbols are used as the desired output for coefficient adaptation.
Related: Decision-Directed Mode
Decision-Directed Mode
An operating mode of an adaptive equalizer in which the hard decision on the equalizer output is used as the desired signal for continued adaptation, enabling tracking without training overhead.
Related: Training Mode