Exercises
ex-ch13-01
Easy. A discrete-time channel has impulse response $h[k]$, $k = 0, 1, \dots, \nu$.
(a) What is the channel memory $\nu$?
(b) For BPSK ($a_n \in \{\pm 1\}$), how many distinct noise-free received levels are possible?
(c) Compute all possible noise-free levels and find the minimum eye opening.
Each received sample depends on $\nu + 1$ symbols.
There are $2^{\nu+1}$ possible noise-free levels.
Channel memory
The last nonzero tap of $h[k]$ is at delay $\nu$, so the channel memory is $\nu$.
Number of levels
The noise-free output is $y[n] = \sum_{k=0}^{\nu} h[k]\, a_{n-k}$. With $\nu + 1$ binary symbols involved, there are $2^{\nu+1}$ possible combinations.
Enumerate levels
For $a_n = +1$: the levels are $h[0] + \sum_{k \ge 1} (\pm h[k])$; the minimum, $h[0] - \sum_{k \ge 1} |h[k]|$, sets the inner edge of the eye.
For $a_n = -1$: the levels are the negatives of the above; the maximum is $-h[0] + \sum_{k \ge 1} |h[k]|$.
The minimum eye opening is $2\bigl(h[0] - \sum_{k \ge 1} |h[k]|\bigr)$.
Compare with the ISI-free case, where the eye opening is $2h[0]$: for this channel, ISI reduces the eye opening by a factor of 2.5.
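The enumeration can be sketched numerically. The taps below are illustrative, chosen so the eye shrinks by the quoted factor of 2.5; the exercise's exact taps are not reproduced here.

```python
from itertools import product

# Hypothetical 3-tap channel (assumption, not the exercise's taps),
# chosen so that ISI cuts the eye opening by a factor of 2.5.
h = [1.0, 0.4, 0.2]

# All noise-free levels y = sum_k h[k] * a[k] over BPSK symbol patterns.
levels = sorted(sum(hk * ak for hk, ak in zip(h, a))
                for a in product([+1, -1], repeat=len(h)))

# Eye opening: twice the smallest level among patterns with current symbol +1.
inner = min(sum(hk * ak for hk, ak in zip(h, a))
            for a in product([+1, -1], repeat=len(h)) if a[0] == +1)
eye = 2 * inner             # 2 * (h0 - |h1| - |h2|)
eye_isi_free = 2 * h[0]     # ISI-free reference eye opening
```

For these taps there are $2^3 = 8$ levels and the eye shrinks from $2h[0] = 2.0$ to $0.8$.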
ex-ch13-02
Easy. Compute the matched-filter bound (MFB) for a BPSK system operating on a three-tap channel $h = (h_0, h_1, h_2)$ at a given $E_b/N_0$ in dB.
The MFB collects all multipath energy coherently.
Total channel energy
$E_h = \sum_{k=0}^{2} |h_k|^2.$
Effective SNR
The effective $E_b/N_0$ at the MFB is $E_h \cdot E_b/N_0$, since the matched filter collects the energy of all taps coherently.
BER
$P_b = Q\!\left(\sqrt{2 E_h\, E_b/N_0}\right). \blacksquare$
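A numeric sketch of the MFB computation. The taps and $E_b/N_0$ value below are assumptions for illustration, not the exercise's values.

```python
import math

def qfunc(x):
    # Gaussian tail Q(x) via the complementary error function.
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# Hypothetical three-tap channel and Eb/N0 (assumed values).
h = [0.8, 0.5, 0.33]
ebn0_db = 8.0

E_h = sum(hk ** 2 for hk in h)               # total multipath energy
ebn0 = 10 ** (ebn0_db / 10)
ber_mfb = qfunc(math.sqrt(2 * E_h * ebn0))   # MFB collects all taps coherently
```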
ex-ch13-03
Medium. Prove that the folded spectrum condition of the Nyquist ISI-free criterion,
$\frac{1}{T} \sum_{m=-\infty}^{\infty} P\!\left(f - \frac{m}{T}\right) = 1,$
is equivalent to $p(nT) = \delta_n$ in the time domain.
Start from the DTFT of the samples $p(nT)$ and use the Poisson summation formula.
Evaluate the inverse transform at integer values of $n$.
DTFT relationship
The DTFT of the sample sequence $p[n] = p(nT)$ is the folded spectrum: $P_d(f) = \frac{1}{T} \sum_m P(f - m/T)$ (Poisson summation).
The inverse DTFT gives $p(nT) = T \int_{-1/(2T)}^{1/(2T)} P_d(f)\, e^{j2\pi f nT}\, df$.
Folded spectrum implies ISI-free
If $\frac{1}{T}\sum_m P(f - m/T) = 1$ for all $f$, then $p(nT) = T \int_{-1/(2T)}^{1/(2T)} e^{j2\pi f nT}\, df$.
For $n = 0$: the integral equals $1/T$, so $p(0) = 1$.
For $n \neq 0$: the integral of $e^{j2\pi f nT}$ over one full period vanishes by orthogonality of complex exponentials, so $p(nT) = 0$. Hence $p(nT) = \delta_n$. Conversely, if $p(nT) = \delta_n$, the DTFT of the samples is identically 1, which is exactly the folded spectrum condition. $\blacksquare$
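A quick numeric sanity check: sampling $p(t) = \mathrm{sinc}(t/T)$ at $t = nT$ gives $\delta_n$, and the DFT of those samples (a discrete proxy for the folded spectrum) is flat at 1.

```python
import numpy as np

# Sample the ideal Nyquist pulse p(t) = sinc(t/T) at t = nT.
T = 1.0
n = np.arange(-8, 9)
samples = np.sinc(n * T / T)     # sinc(n) = delta[n] at the integers

# DFT of the delta sequence: flat spectrum, consistent with the
# folded-spectrum condition equalling 1 everywhere.
folded = np.fft.fft(np.fft.ifftshift(samples))
```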
ex-ch13-04
Medium. For a two-tap channel with $H(z) = 1 + \alpha z^{-1}$, $|\alpha| < 1$:
(a) Compute the infinite-length ZF equalizer impulse response.
(b) Show that the noise enhancement factor is $1/(1 - \alpha^2)$.
(c) For $\alpha = 0.9$, compute the noise enhancement in dB.
$W(z) = 1/H(z)$ --- expand as a geometric series.
The noise enhancement is $\sum_k |w[k]|^2$.
ZF equalizer transfer function
$W(z) = \frac{1}{1 + \alpha z^{-1}} = \sum_{k=0}^{\infty} (-\alpha)^k z^{-k}$, so $w[k] = (-\alpha)^k$ for $k \geq 0$.
Noise enhancement
$\sum_{k=0}^{\infty} |w[k]|^2 = \sum_{k=0}^{\infty} \alpha^{2k} = \frac{1}{1 - \alpha^2}.$
Numerical value
For $\alpha = 0.9$: $\frac{1}{1 - 0.81} = \frac{1}{0.19} \approx 5.26$.
In dB: $10 \log_{10} 5.26 \approx 7.21$ dB.
The ZF equalizer amplifies noise by 7.21 dB, which is a severe penalty.
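The geometric-series result can be checked numerically with a truncated equalizer:

```python
import math

alpha = 0.9
# Truncated ZF equalizer taps w[k] = (-alpha)^k (geometric series).
w = [(-alpha) ** k for k in range(200)]
noise_gain = sum(wk ** 2 for wk in w)          # approaches 1 / (1 - alpha^2)
noise_gain_exact = 1.0 / (1.0 - alpha ** 2)
noise_gain_db = 10 * math.log10(noise_gain_exact)
```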
ex-ch13-05
Medium. Derive the frequency-domain expression for the MMSE linear equalizer,
$W(e^{j\omega}) = \frac{H^*(e^{j\omega})}{|H(e^{j\omega})|^2 + N_0/E_s},$
starting from the Wiener-Hopf equation $\mathbf{R}\mathbf{w} = \mathbf{p}$.
Work in the frequency domain, where the Toeplitz autocorrelation matrix becomes a scalar PSD.
$S_r(\omega) = E_s |H(e^{j\omega})|^2 + N_0$ and $S_{ra}(\omega) = E_s H^*(e^{j\omega})\, e^{-j\omega d}$.
Frequency-domain Wiener-Hopf
In the frequency domain, the Wiener-Hopf equation becomes
$S_r(\omega)\, W(e^{j\omega}) = S_{ra}(\omega),$
where $S_r(\omega)$ is the PSD of the received signal $r[n]$ and $S_{ra}(\omega)$ is the cross-PSD between $r[n]$ and the desired signal $a_{n-d}$.
Compute spectra
$S_r(\omega) = E_s |H(e^{j\omega})|^2 + N_0$ and $S_{ra}(\omega) = E_s H^*(e^{j\omega})\, e^{-j\omega d}$. (Take $d = 0$ for simplicity.)
MMSE equalizer
$W(e^{j\omega}) = \frac{E_s H^*}{E_s |H|^2 + N_0} = \frac{H^*}{|H|^2 + N_0/E_s}.$ As $N_0/E_s \to 0$, $W \to H^*/|H|^2 = 1/H$ (zero-forcing); as $N_0/E_s \to \infty$, $W \to (E_s/N_0)\, H^*$ (matched filter). $\blacksquare$
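The two limiting regimes can be verified on a frequency grid; the two-tap $H(e^{j\omega})$ below is a hypothetical example.

```python
import numpy as np

# Evaluate W = H* / (|H|^2 + N0/Es) on a grid for an assumed two-tap channel.
omega = np.linspace(-np.pi, np.pi, 512, endpoint=False)
H = 1.0 + 0.5 * np.exp(-1j * omega)        # hypothetical H(e^{jw})

def mmse_eq(H, n0_over_es):
    return np.conj(H) / (np.abs(H) ** 2 + n0_over_es)

W_hi_snr = mmse_eq(H, 1e-9)     # approaches 1/H (zero-forcing limit)
W_lo_snr = mmse_eq(H, 1e9)      # approaches (Es/N0) H* (matched-filter limit)
```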
ex-ch13-06
Hard. Consider a 3-tap channel $h = (h_0, h_1, h_2)$.
(a) Design a 5-tap MMSE equalizer at a given SNR with optimal delay $d$.
(b) Compute the output SINR and compare with the matched-filter bound.
(c) How much does increasing the equalizer to 11 taps improve performance?
Build the channel matrix $\mathbf{H}$ and solve the Wiener-Hopf equation for each candidate delay $d$.
The optimal $d$ minimises the resulting MSE $J(d)$.
Channel matrix
For a 5-tap equalizer, the received vector $[r_n, \dots, r_{n-4}]^T$ has contributions from symbols $a_n, \dots, a_{n-6}$. The channel matrix $\mathbf{H}$ is $5 \times 7$ (5 output samples, 7 symbols involved).
Optimal delay search
Compute $J(d)$ for $d = 0, 1, \dots, 6$ and choose the delay that minimises it.
Typically, a delay near the middle of the window gives the best performance.
Performance comparison
Numerical computation for this channel shows:
- 5-tap MMSE: output SINR about 3.2 dB below the MFB
- 11-tap MMSE: output SINR about 2.4 dB below the MFB
The 5-tap equalizer is about 3.2 dB from the MFB; the 11-tap equalizer closes the gap to 2.4 dB, a gain of roughly 0.8 dB.
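The delay-search procedure can be sketched as follows. The channel taps and SNR are placeholders, not the exercise's values.

```python
import numpy as np

# Assumed 3-tap channel and SNR for illustration.
h = np.array([0.5, 0.8, 0.3])
Nw, snr_db = 5, 10.0
n0 = 10 ** (-snr_db / 10)                  # noise PSD with Es normalised to 1

# Channel convolution matrix: r = H a + v, with H of size 5 x 7.
L = Nw + len(h) - 1
Hm = np.zeros((Nw, L))
for i in range(Nw):
    Hm[i, i:i + len(h)] = h

R = Hm @ Hm.T + n0 * np.eye(Nw)            # received autocorrelation matrix
mses = []
for d in range(L):
    p = Hm[:, d]                           # cross-correlation for delay d
    w = np.linalg.solve(R, p)              # Wiener-Hopf solution
    mses.append(1.0 - p @ w)               # J(d) = Es - p^T R^{-1} p
d_opt = int(np.argmin(mses))
```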
ex-ch13-07
Easy. Show that the MMSE linear equalizer reduces to the matched filter (up to a scalar) at low SNR.
In the MMSE formula, consider the limit $N_0/E_s \to \infty$.
Low-SNR limit
The MMSE equalizer is
$W(e^{j\omega}) = \frac{H^*(e^{j\omega})}{|H(e^{j\omega})|^2 + N_0/E_s}.$
As $N_0/E_s \to \infty$, the $|H|^2$ term becomes negligible:
$W(e^{j\omega}) \to \frac{H^*(e^{j\omega})}{N_0/E_s} = \frac{E_s}{N_0}\, H^*(e^{j\omega}).$
This is the matched filter scaled by $E_s/N_0$.
At low SNR, ISI is less harmful than noise, so the optimal strategy is to maximise SNR (matched filtering) rather than suppress ISI.
ex-ch13-08
Medium. A channel has impulse response $h = (h_0, h_1, h_2)$.
(a) Design a DFE with feedforward taps and feedback taps at an SNR of 20 dB.
(b) Compare the MMSE-DFE MSE with the linear MMSE MSE.
(c) By what factor does the DFE improve the output SINR?
The feedback filter removes the postcursor ISI from $h_1$ and $h_2$.
Under correct decisions, the effective channel seen by the feedforward filter changes.
Ideal feedback
With correct past decisions, the feedback filter subtracts $b_1 \hat{a}_{n-1} + b_2 \hat{a}_{n-2}$.
Ideally, $b_1 = h_1$ and $b_2 = h_2$ cancel all postcursor ISI, reducing the effective channel to the single tap $h_0$.
Feedforward filter
With postcursor ISI removed, the feedforward filter only needs to maximise SNR for the single-tap effective channel. This becomes a simple matched filter, with MSE determined primarily by the noise level.
MSE comparison
At 20 dB SNR:
- Linear MMSE: a comparatively high residual MSE, because inverting the channel enhances the noise
- MMSE-DFE: a much lower residual MSE, because the feedback path cancels postcursor ISI without noise enhancement
SINR improvement: approximately 8 dB.
The DFE provides nearly 8 dB of improvement on this channel with strong postcursor ISI.
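The linear-MMSE and MMSE-DFE residual MSEs can be compared numerically via their spectral formulas (arithmetic mean vs. geometric mean of the per-frequency MSE density). The channel below is an assumed example, not the exercise's.

```python
import numpy as np

# Assumed 3-tap channel with strong postcursor ISI, at 20 dB SNR (Es = 1).
h = np.array([1.0, 0.6, 0.3])
n0 = 10 ** (-20 / 10)
omega = np.linspace(-np.pi, np.pi, 4096, endpoint=False)
# H(e^{jw}) = h0 + h1 e^{-jw} + h2 e^{-2jw}
Hf = np.polyval(h[::-1], np.exp(-1j * omega))

S = n0 / (np.abs(Hf) ** 2 + n0)             # per-frequency MSE density
mse_le = np.mean(S)                         # linear MMSE: arithmetic mean
mse_dfe = np.exp(np.mean(np.log(S)))        # MMSE-DFE: geometric mean
gain_db = 10 * np.log10(mse_le / mse_dfe)   # DFE advantage
```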
ex-ch13-09
Hard. Show that the MMSE-DFE achieves a lower MSE than any linear equalizer by proving
$J^{\text{DFE}} = \exp\!\left(\frac{1}{2\pi} \int_{-\pi}^{\pi} \ln S(\omega)\, d\omega\right) \;\le\; \frac{1}{2\pi} \int_{-\pi}^{\pi} S(\omega)\, d\omega = J^{\text{LE}},$
where $S(\omega) = \dfrac{N_0}{|H(e^{j\omega})|^2 + N_0}$ (with $E_s = 1$).
Use Jensen's inequality for the concave function $\ln(\cdot)$.
$\exp(\mathbb{E}[\ln X]) \le \mathbb{E}[X]$ for any positive random variable $X$.
Apply Jensen's inequality
Define $S(\omega) = \dfrac{N_0}{|H(e^{j\omega})|^2 + N_0}$.
The DFE MSE is $J^{\text{DFE}} = \exp\!\left(\frac{1}{2\pi}\int_{-\pi}^{\pi} \ln S(\omega)\, d\omega\right)$.
Since $\ln$ is concave, Jensen's inequality gives
$\frac{1}{2\pi}\int_{-\pi}^{\pi} \ln S(\omega)\, d\omega \;\le\; \ln\!\left(\frac{1}{2\pi}\int_{-\pi}^{\pi} S(\omega)\, d\omega\right).$
Exponentiate
Exponentiating both sides (the exponential is monotone):
$J^{\text{DFE}} \;\le\; \frac{1}{2\pi}\int_{-\pi}^{\pi} S(\omega)\, d\omega = J^{\text{LE}}.$
Equality holds iff $S(\omega)$ is constant, i.e., $|H(e^{j\omega})|$ is constant (flat channel). $\blacksquare$
ex-ch13-10
Medium. Explain the error propagation phenomenon in a DFE. For a two-tap channel $h = (h_0, h_1)$ with BPSK, calculate the probability that a single decision error causes the next symbol to also be detected incorrectly (at the given SNR).
A wrong decision causes the feedback to add the postcursor $h_1$ contribution instead of subtracting it.
The effective decision statistic for the next symbol includes a bias of $2|h_1|$.
Error propagation mechanism
When $\hat{a}_{n-1} = -a_{n-1}$, the feedback subtracts $h_1 \hat{a}_{n-1}$ instead of $h_1 a_{n-1}$. The residual ISI is
$h_1 (a_{n-1} - \hat{a}_{n-1}) = \pm 2 h_1.$
This creates a bias of magnitude $2|h_1|$ on the next sample.
Conditional error probability
The noise-free decision statistic for $a_n = +1$ is $h_0$ (assuming the feedforward filter works correctly). With the worst-case error-propagation bias of $-2|h_1|$, the effective signal becomes $h_0 - 2|h_1|$; if this is negative, the next symbol tends to be detected as $-1$ (error).
More precisely, the probability of error given the previous error is
$P(\text{error}_n \mid \text{error}_{n-1}) = Q\!\left(\frac{h_0 - 2|h_1|}{\sigma}\right).$
Substituting the channel taps and the noise standard deviation $\sigma$ at the stated SNR gives $P \approx 0.287$.
There is a 28.7% chance that each error propagates to the next symbol.
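The conditional error probability can be evaluated for assumed channel and noise values (the exercise's exact numbers are not reproduced here):

```python
import math

def qfunc(x):
    # Gaussian tail Q(x) via the complementary error function.
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# Hypothetical two-tap channel and slicer noise level (assumed values).
h0, h1 = 1.0, 0.5
sigma = 0.3

p_first = qfunc(h0 / sigma)                   # ordinary error probability
p_prop = qfunc((h0 - 2 * abs(h1)) / sigma)    # after a wrong feedback decision
```

With $h_0 = 2|h_1|$, the bias exactly cancels the signal and the propagation probability is 0.5, far above the first-error probability.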
ex-ch13-11
Medium. For a two-tap channel $h = (h_0, h_1)$ with BPSK:
(a) Draw the 2-state trellis for the Viterbi MLSE.
(b) Compute the free distance .
(c) Compare the asymptotic BER with the matched-filter bound.
The trellis has $2^{\nu} = 2$ states for memory $\nu = 1$.
$d_{\text{free}}^2 = \min_{\mathbf{e} \neq \mathbf{0}} \|\mathbf{h} * \mathbf{e}\|^2$ over binary error sequences $\mathbf{e}$ with entries in $\{0, \pm 2\}$.
Trellis description
States: $S_{+}$ ($a_{n-1} = +1$) and $S_{-}$ ($a_{n-1} = -1$).
Transitions from $S_{+}$:
- $a_n = +1$: output $h_0 + h_1$
- $a_n = -1$: output $-h_0 + h_1$
Transitions from $S_{-}$:
- $a_n = +1$: output $h_0 - h_1$
- $a_n = -1$: output $-h_0 - h_1$
Free distance
The minimum-weight error event is a single symbol error (difference $2$ between $+1$ and $-1$).
So $d_{\text{free}}^2 = 4 (h_0^2 + h_1^2) = 4 E_h$.
Asymptotic BER comparison
MLSE BER $\approx K\, Q\!\left(\sqrt{d_{\text{free}}^2\, E_b/(2 N_0)}\right) = K\, Q\!\left(\sqrt{2 E_h\, E_b/N_0}\right)$ for a small multiplicity constant $K$.
MFB BER $= Q\!\left(\sqrt{2 E_h\, E_b/N_0}\right)$.
The MLSE BER is close to the MFB because it effectively collects multipath energy coherently.
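The free distance can be found by brute force over short error sequences; $h = (1, 0.5)$ is an assumed example channel.

```python
from itertools import product

# Assumed two-tap channel for illustration.
h = [1.0, 0.5]

def dist2(e):
    # Squared Euclidean norm of the channel-filtered error sequence h * e.
    y = [sum(h[k] * e[n - k] for k in range(len(h)) if 0 <= n - k < len(e))
         for n in range(len(e) + len(h) - 1)]
    return sum(v * v for v in y)

# Minimise over all nonzero BPSK error sequences (entries 0, +/-2) up to length 4.
dfree2 = min(dist2(e)
             for L in range(1, 5)
             for e in product([0, 2, -2], repeat=L)
             if any(e))
```

For this channel the single-error event wins: $d_{\text{free}}^2 = 4(h_0^2 + h_1^2) = 5$.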
ex-ch13-12
Hard. A channel has memory $\nu = 3$ with 4-PAM modulation ($M = 4$).
(a) How many trellis states does the MLSE detector require?
(b) How many branch metric computations per time step?
(c) If the symbol rate is 1 Msymbol/s and each branch metric requires 10 floating-point operations, what is the computational load in GFLOPS?
(d) Suggest a reduced-complexity alternative and estimate its complexity.
States = $M^{\nu}$, transitions per state = $M$.
Total transitions per time step = $M^{\nu+1}$.
MLSE complexity
(a) Trellis states: $M^{\nu} = 4^3 = 64$.
(b) Each state has $M = 4$ incoming transitions, so the total number of branch metrics per step is $64 \times 4 = 256$.
Computational load
(c) Computations per second: $256 \times 10 \times 10^6 = 2.56 \times 10^9$ FLOPS $= 2.56$ GFLOPS.
This is substantial for a real-time mobile receiver.
Reduced complexity
(d) A reduced-state sequence estimation (RSSE) approach merges multiple states. For example, reduce to $M^2 = 16$ states by tracking only the last 2 symbols:
Branch metrics per step: $16 \times 4 = 64$.
Computational load: $64 \times 10 \times 10^6 = 0.64$ GFLOPS.
This is a 4x reduction with only modest performance loss.
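The complexity figures follow from a one-line calculation:

```python
def mlse_gflops(M, nu, fsym, flops_per_branch=10):
    # States M^nu, M transitions per state -> M^(nu+1) branch metrics per step.
    branches = M ** (nu + 1)
    return branches * flops_per_branch * fsym / 1e9

full = mlse_gflops(4, 3, 1e6)          # full 64-state MLSE
rsse = 16 * 4 * 10 * 1e6 / 1e9         # 16-state RSSE: 64 branches per step
```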
ex-ch13-13
Challenge. Prove that the Viterbi algorithm finds the maximum-likelihood sequence by showing that the principle of optimality holds for the ISI channel trellis: the best path to any state at time $n$ must contain the best path to some state at time $n - 1$.
Use the additive structure of the sequence metric.
Show that extending a suboptimal sub-path can never yield a globally optimal path.
Additive metric
The ML criterion minimises
$\Lambda(\mathbf{a}) = \sum_{n} \bigl| r_n - s(\sigma_n, a_n) \bigr|^2,$
where the branch output $s(\sigma_n, a_n)$ depends only on the state $\sigma_n$ and the input $a_n$.
This metric decomposes additively along the trellis path.
Principle of optimality
Consider two paths arriving at state $\sigma$ at time $n$: path $P_1$ with metric $\Lambda_1$ and path $P_2$ with $\Lambda_2 > \Lambda_1$.
Any extension of $P_1$ by future symbols achieves total metric $\Lambda_1 + \Lambda_{\text{future}}$.
The same extension applied to $P_2$ achieves $\Lambda_2 + \Lambda_{\text{future}} > \Lambda_1 + \Lambda_{\text{future}}$; the future metric is identical for both paths because it depends only on the shared state $\sigma$ and the future inputs.
Therefore, $P_2$ can never be part of the globally optimal path. Discarding $P_2$ (keeping only the survivor $P_1$) does not eliminate the optimal sequence.
Induction
By induction over $n$, at each time step we need to retain only one survivor per state. The Viterbi algorithm does exactly this, and by the principle of optimality, the final traceback yields the ML sequence. $\blacksquare$
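A minimal sketch of the algorithm, checked against exhaustive ML search on a short block. The channel and noise values are illustrative assumptions.

```python
import itertools
import random

# Assumed two-tap channel h = (1, 0.5); BPSK block of 8 symbols plus noise.
h0, h1 = 1.0, 0.5
rng = random.Random(0)
a = [rng.choice([-1, 1]) for _ in range(8)]
r = [h0 * a[n] + h1 * (a[n - 1] if n > 0 else 0) + rng.gauss(0, 0.4)
     for n in range(len(a))]

def seq_metric(seq):
    # Additive squared-error path metric (prior symbol taken as 0 at n = 0).
    return sum((r[n] - h0 * seq[n] - h1 * (seq[n - 1] if n > 0 else 0)) ** 2
               for n in range(len(seq)))

# Viterbi: one survivor per state; the state is the previous symbol.
surv = {0: (0.0, [])}
for n in range(len(r)):
    new = {}
    for prev, (m, path) in surv.items():
        for x in (-1, 1):
            bm = (r[n] - h0 * x - h1 * prev) ** 2
            if x not in new or m + bm < new[x][0]:
                new[x] = (m + bm, path + [x])
    surv = new
vit = min(surv.values(), key=lambda t: t[0])[1]

# Exhaustive ML search over all 2^8 sequences for comparison.
best = min(itertools.product([-1, 1], repeat=len(r)), key=seq_metric)
```

By the principle of optimality, the survivor path attains exactly the exhaustive-search metric.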
ex-ch13-14
Easy. An LMS equalizer with $N$ taps operates at a given SNR. The input signal power is $\sigma_x^2$.
(a) Compute the maximum stable step size.
(b) Recommend a practical step size using the rule of thumb $\mu = \mu_{\max}/10$.
The stability bound is $\mu < \dfrac{2}{N \sigma_x^2}$.
Maximum step size
$\mu_{\max} = \frac{2}{N \sigma_x^2}.$
Practical step size
$\mu = \frac{\mu_{\max}}{10} = \frac{0.2}{N \sigma_x^2}. \blacksquare$
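As a sketch, with assumed values for $N$ and $\sigma_x^2$ (the exercise's numbers are not reproduced here):

```python
# Assumed parameters for illustration.
N = 11              # number of equalizer taps
sigma_x2 = 1.0      # input signal power

mu_max = 2.0 / (N * sigma_x2)    # stability bound mu < 2 / (N * sigma_x^2)
mu = mu_max / 10.0               # rule-of-thumb practical step size
```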
ex-ch13-15
Medium. Derive the convergence time constant of the LMS algorithm for the slowest eigenmode. Show that
$\tau_{\max} \approx \frac{1}{\mu \lambda_{\min}},$
where $\lambda_{\min}$ is the smallest eigenvalue of the input autocorrelation matrix $\mathbf{R}$.
Decompose the weight error into eigenmodes of $\mathbf{R}$.
Each eigenmode decays as $(1 - \mu \lambda_i)^n$.
Weight error dynamics
Define the weight error $\mathbf{c}[n] = \mathbf{w}[n] - \mathbf{w}_{\text{opt}}$.
The mean weight-error evolution is
$\mathbb{E}\{\mathbf{c}[n+1]\} = (\mathbf{I} - \mu \mathbf{R})\, \mathbb{E}\{\mathbf{c}[n]\}.$
Eigendecomposition
Let $\mathbf{R} = \mathbf{Q} \boldsymbol{\Lambda} \mathbf{Q}^H$. In the eigenbasis:
$v_i[n+1] = (1 - \mu \lambda_i)\, v_i[n],$
where $\mathbf{v} = \mathbf{Q}^H \mathbf{c}$.
Time constant
The $i$-th mode decays exponentially with time constant
$\tau_i = \frac{-1}{\ln(1 - \mu \lambda_i)} \approx \frac{1}{\mu \lambda_i}$
for small $\mu \lambda_i$. The slowest mode corresponds to the smallest eigenvalue:
$\tau_{\max} \approx \frac{1}{\mu \lambda_{\min}}.$
A large eigenvalue spread $\chi = \lambda_{\max}/\lambda_{\min}$ makes the slowest mode converge $\chi$ times more slowly than the fastest.
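The approximation $\tau_i \approx 1/(\mu \lambda_i)$ can be verified by iterating the scalar eigenmode recursion directly:

```python
import math

# Iterate v[n+1] = (1 - mu*lambda_i) v[n] and count the steps until the
# mode falls below e^{-1} of its initial value.
mu = 0.01
results = []
for lam in (0.2, 1.0, 5.0):
    v, n = 1.0, 0
    while v > math.exp(-1):
        v *= (1.0 - mu * lam)
        n += 1
    results.append((n, 1.0 / (mu * lam)))   # (measured steps, predicted tau)
```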
ex-ch13-16
Medium. Compare the LMS and RLS algorithms for a channel with a large eigenvalue spread. The equalizer has $N = 15$ taps.
(a) If the LMS step size gives a slowest time constant of $\tau_{\max} = 500$ iterations, how many iterations does it take for the slowest mode to converge (to within $e^{-3}$ of its initial error)?
(b) If RLS uses a forgetting factor close to 1 and the standard initialisation $\mathbf{P}(0) = \delta^{-1}\mathbf{I}$, approximately how many iterations does RLS take to converge?
(c) Compare the computational cost (multiply-accumulates per iteration) of both algorithms.
LMS convergence: $k = 3 \tau_{\max}$ iterations for the error to decay to $e^{-3}$.
RLS typically converges in about $2N$ iterations regardless of eigenvalue spread.
LMS convergence
Decay to $e^{-3}$ of the initial error requires $k = 3 \tau_{\max} = 3 \times 500 = 1500$ iterations.
RLS convergence
RLS converges in approximately $2N = 30$ iterations, independent of the eigenvalue spread.
RLS therefore converges 50 times faster than LMS.
Computational cost
- LMS: $2N + 1 \approx 31$ complex MACs per iteration
- RLS: $\mathcal{O}(N^2) \approx 900$ complex MACs per iteration
RLS is about 30x more expensive per iteration, but converges in 30 iterations vs. 1500.
Total cost to convergence:
- LMS: $1500 \times 31 \approx 4.7 \times 10^4$ MACs
- RLS: $30 \times 900 = 2.7 \times 10^4$ MACs
In this scenario, RLS is actually more efficient in total computation to convergence.
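The totals follow from a short calculation using the iteration counts and per-iteration costs in the solution (the $4N^2$ RLS cost model is an assumption consistent with the quoted 30x ratio):

```python
N = 15                          # equalizer taps
lms_iters, rls_iters = 1500, 30
lms_per_iter = 2 * N + 1        # ~31 complex MACs per LMS iteration
rls_per_iter = 4 * N ** 2       # ~900 complex MACs per RLS iteration (assumed model)

lms_total = lms_iters * lms_per_iter
rls_total = rls_iters * rls_per_iter
```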
ex-ch13-17
Hard. Show that the misadjustment of the LMS algorithm (the ratio of excess MSE to minimum MSE) is approximately
$\mathcal{M} \approx \frac{\mu}{2}\, \mathrm{tr}(\mathbf{R}) = \frac{\mu}{2} \sum_i \lambda_i$
for small step size $\mu$. Use this to show the fundamental trade-off: faster convergence (larger $\mu$) increases steady-state error.
The excess MSE contributed by each eigenmode is $\lambda_i$ times its steady-state weight-error variance.
Sum over all modes and normalise by $J_{\min}$.
Per-mode excess MSE
In the $i$-th eigenmode, the steady-state weight-error variance is
$\mathrm{var}(v_i) \approx \frac{\mu J_{\min}}{2}$
for small $\mu$. The contribution to excess MSE is $\lambda_i\, \mathrm{var}(v_i)$.
A rigorous sum over all modes is involved; the standard approximation yields $J_{\text{ex}} \approx \frac{\mu J_{\min}}{2} \sum_i \lambda_i$.
Misadjustment
$\mathcal{M} = \frac{J_{\text{ex}}}{J_{\min}} \approx \frac{\mu}{2}\, \mathrm{tr}(\mathbf{R}) = \frac{\mu N}{2}\, \bar{\lambda},$ where $\bar{\lambda}$ is the average eigenvalue.
Convergence-misadjustment trade-off
Convergence speed: $\tau_{\max} \approx 1/(\mu \lambda_{\min})$ (faster for larger $\mu$).
Misadjustment: $\mathcal{M} \approx \mu\, \mathrm{tr}(\mathbf{R})/2$ (larger for larger $\mu$).
Therefore
$\mathcal{M} \cdot \tau_{\max} \approx \frac{\mathrm{tr}(\mathbf{R})}{2 \lambda_{\min}},$
a constant independent of $\mu$. This is the fundamental LMS trade-off: you cannot simultaneously have fast convergence and low misadjustment.
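The invariance of the product can be checked numerically for an assumed eigenvalue set:

```python
import numpy as np

# Assumed eigenvalues of R for illustration.
lam = np.array([0.1, 0.5, 1.0, 2.0])

# M ≈ (mu/2) tr(R), tau_max ≈ 1/(mu * lambda_min): their product is mu-free.
products = []
for mu in (0.001, 0.01, 0.05):
    misadj = mu * lam.sum() / 2
    tau_max = 1.0 / (mu * lam.min())
    products.append(misadj * tau_max)    # always tr(R) / (2 * lambda_min)
```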
ex-ch13-18
ChallengeDesign a complete equalization strategy for a mobile wireless channel with the following parameters:
- Channel memory: $\nu$ taps
- Modulation: 16-QAM ($M = 16$)
- Symbol rate: 2 Msymbol/s
- Doppler spread: 100 Hz (coherence time $T_c \approx 5$ ms)
- Training overhead budget: 10%
(a) Evaluate the feasibility of MLSE. Compute the number of trellis states and the required processing rate.
(b) Propose a practical equalizer architecture (DFE or linear) and specify the number of taps.
(c) Choose between LMS and RLS for adaptation and justify your choice. Determine the training sequence length needed.
(d) Estimate the performance gap relative to the matched-filter bound.
MLSE with 16-QAM and memory $\nu$ requires $16^{\nu}$ states.
The coherence time determines how often the equalizer must reconverge.
MLSE feasibility
Number of states: $16^{\nu}$ --- already over a million for $\nu = 5$.
Branch metrics per symbol: $16^{\nu+1}$.
At 2 Msymbol/s this implies on the order of $10^{13}$ operations/s or more.
MLSE is completely infeasible for this scenario.
Practical architecture
Propose an MMSE-DFE:
- Feedforward: enough taps to span the channel (roughly $2\nu$)
- Feedback: $\nu$ taps to cancel the postcursors
Complexity per symbol: a few tens of MACs --- easily feasible at 2 Msymbol/s.
Adaptation algorithm
Coherence time: 5 ms $\times$ 2 Msymbol/s $= 10{,}000$ symbols.
The equalizer must reconverge within a fraction of this.
LMS: with a well-chosen step size, convergence takes on the order of a few hundred symbols, even allowing for some eigenvalue spread. This fits within the coherence interval.
Training budget: 10% of 10,000 symbols $= 1000$ symbols per coherence interval. An LMS training phase of 500 symbols is sufficient.
Choice: LMS is adequate and has $\mathcal{O}(N)$ complexity per symbol. RLS would converge faster, but its $\mathcal{O}(N^2)$ cost per symbol is unnecessary given the margin.
Performance estimate
The MMSE-DFE on a 6-tap channel typically operates 2--4 dB from the MFB for 16-QAM, depending on the specific channel realisation. With proper training and decision-directed tracking, the expected gap is approximately 3 dB.
Alternative: Use OFDM with per-subcarrier equalization to avoid time-domain equalization entirely.
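The headline numbers can be reproduced with a short calculation; the memory $\nu = 5$ here is an assumption, since the exercise's exact value is not preserved.

```python
# Back-of-envelope feasibility numbers (nu = 5 is an assumed memory).
M, nu, fsym = 16, 5, 2e6

mlse_states = M ** nu                        # trellis states for full MLSE
mlse_ops = M ** (nu + 1) * fsym              # branch metrics per second
coherence_symbols = 0.005 * fsym             # 5 ms coherence -> symbols
training_budget = 0.10 * coherence_symbols   # 10% training overhead
```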