Exercises

ex-ch13-01

Easy

A discrete-time channel has impulse response $h = [1,\; 0.4,\; -0.2]$.

(a) What is the channel memory $L$?

(b) For BPSK ($a_k \in \{+1, -1\}$), how many distinct noise-free received levels are possible?

(c) Compute all possible noise-free levels and find the minimum eye opening $d_{\min}$.
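A brute-force enumeration is a good starting point for parts (b) and (c). The sketch below assumes the common convention that the eye opening is measured at the main tap, i.e. $d_{\min} = |h_0| - \sum_{k \geq 1} |h_k|$:

```python
from itertools import product

h = [1.0, 0.4, -0.2]

# Enumerate all noise-free levels sum_k h[k]*a[k] over BPSK symbols a[k] = +/-1.
levels = sorted({sum(hk * ak for hk, ak in zip(h, a))
                 for a in product([+1, -1], repeat=len(h))})
print(len(levels), levels)          # 8 distinct levels

# Worst-case eye opening at the main tap: desired contribution h[0]
# minus the maximum possible ISI magnitude from the remaining taps.
d_min = abs(h[0]) - sum(abs(hk) for hk in h[1:])
print(d_min)                        # ~0.4
```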

ex-ch13-02

Easy

Compute the matched-filter bound (MFB) for a BPSK system operating on a three-tap channel $h = [0.5,\; 1.0,\; 0.3]$ at $E_b/N_0 = 10$ dB.
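The MFB assumes all channel energy is collected coherently, so the effective SNR is $\|h\|^2 \cdot E_b/N_0$. A quick numerical evaluation:

```python
import math

h = [0.5, 1.0, 0.3]
EbN0 = 10 ** (10.0 / 10)                   # 10 dB

energy = sum(hk**2 for hk in h)            # ||h||^2 = 1.34
snr_mfb = energy * EbN0
snr_mfb_dB = 10 * math.log10(snr_mfb)      # ~11.27 dB

# Corresponding BPSK error probability Q(sqrt(2 * ||h||^2 * Eb/N0)).
Q = lambda x: 0.5 * math.erfc(x / math.sqrt(2))
ber_mfb = Q(math.sqrt(2 * snr_mfb))
print(energy, snr_mfb_dB, ber_mfb)
```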

ex-ch13-03

Medium

Prove that the folded spectrum condition for the Nyquist ISI-free criterion,

$$\sum_{k=-\infty}^{\infty} C\!\left(e^{j(\omega - 2\pi k)}\right) = 1,$$

is equivalent to $c[n] = \delta[n]$ in the time domain.

ex-ch13-04

Medium

For a two-tap channel $h = [1,\; \alpha]$ with $|\alpha| < 1$:

(a) Compute the infinite-length ZF equalizer impulse response.

(b) Show that the noise enhancement factor is $1/(1 - \alpha^2)$.

(c) For $\alpha = 0.9$, compute the noise enhancement in dB.
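Parts (b) and (c) can be checked numerically by truncating the IIR inverse of the channel, $w[n] = (-\alpha)^n$ for $n \geq 0$, and summing its squared taps:

```python
import math

alpha = 0.9

# Infinite-length ZF equalizer for H(z) = 1 + alpha*z^{-1}:
# W(z) = 1/H(z), so w[n] = (-alpha)^n for n >= 0 (a decaying IIR tail).
N = 500                                       # truncation; tail is negligible
w = [(-alpha) ** n for n in range(N)]

# Noise enhancement = sum of squared equalizer taps = 1/(1 - alpha^2).
enhancement = sum(wn**2 for wn in w)
closed_form = 1 / (1 - alpha**2)              # ~5.263
enhancement_dB = 10 * math.log10(closed_form) # ~7.21 dB
print(enhancement, closed_form, enhancement_dB)
```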

ex-ch13-05

Medium

Derive the frequency-domain expression for the MMSE linear equalizer,

$$W_{\text{MMSE}}(e^{j\omega}) = \frac{H^*(e^{j\omega})}{|H(e^{j\omega})|^2 + N_0/E_s},$$

starting from the Wiener-Hopf equation $\mathbf{w} = \mathbf{R}_{yy}^{-1}\,\mathbf{p}$.
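The closed form can be sanity-checked against a long finite-length Wiener solution: for enough taps and a central decision delay, the FIR solution's frequency response should match the formula. A sketch, assuming an illustrative channel $h = [1, 0.5]$ with $E_s = 1$, $N_0 = 0.1$ (these values are not part of the exercise):

```python
import numpy as np

h = np.array([1.0, 0.5])          # illustrative channel (assumption)
N0, Es = 0.1, 1.0
N, d = 51, 25                     # long equalizer, center decision delay

# Finite-length Wiener-Hopf solution w = R_yy^{-1} p.
lag0 = len(h) - 1
r = np.convolve(h, h[::-1])       # channel autocorrelation, lags -lag0..lag0
ryy = np.zeros(N)
ryy[: lag0 + 1] = Es * r[lag0:]
ryy[0] += N0
R = np.array([[ryy[abs(i - j)] for j in range(N)] for i in range(N)])
p = np.zeros(N)
for k, hk in enumerate(h):        # p[n] = Es * h[d - n]
    if 0 <= d - k < N:
        p[d - k] = Es * hk
w = np.linalg.solve(R, p)

# Compare the FIR frequency response (delay removed) with the closed form.
omega = 2 * np.pi * np.arange(256) / 256
W_fir = np.exp(1j * omega * d) * np.array(
    [np.sum(w * np.exp(-1j * wo * np.arange(N))) for wo in omega])
H = np.array([np.sum(h * np.exp(-1j * wo * np.arange(len(h)))) for wo in omega])
W_formula = np.conj(H) / (np.abs(H) ** 2 + N0 / Es)
print(np.max(np.abs(W_fir - W_formula)))   # tiny for a long equalizer
```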

ex-ch13-06

Hard

Consider a 3-tap channel $h = [1,\; 0.7,\; -0.3]$.

(a) Design a 5-tap MMSE equalizer at $E_s/N_0 = 15$ dB with optimal delay $d$.

(b) Compute the output SINR and compare with the matched-filter bound.

(c) How much does increasing the equalizer to 11 taps improve performance?
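A design sketch for this exercise, assuming $E_s = 1$ and the standard relations $J = 1 - \mathbf{p}^T\mathbf{w}$ and $\text{SINR} = (1-J)/J$; the delay search range and tap counts below follow the exercise statement:

```python
import numpy as np

h = np.array([1.0, 0.7, -0.3])
N0 = 10 ** (-15.0 / 10)                        # Es/N0 = 15 dB, Es = 1

def mmse_equalizer(h, N0, N):
    """Length-N MMSE linear equalizer; returns (w, d, sinr) at the best delay."""
    L = len(h) - 1
    r = np.convolve(h, h[::-1])[L:]            # autocorrelation, lags 0..L
    ryy = np.zeros(N)
    ryy[: L + 1] = r
    ryy[0] += N0
    R = np.array([[ryy[abs(i - j)] for j in range(N)] for i in range(N)])
    best = None
    for d in range(N + L):
        p = np.zeros(N)
        for k, hk in enumerate(h):
            if 0 <= d - k < N:
                p[d - k] = hk
        w = np.linalg.solve(R, p)
        J = 1.0 - p @ w                        # MMSE at this delay
        sinr = (1 - J) / J
        if best is None or sinr > best[2]:
            best = (w, d, sinr)
    return best

results = {}
for N in (5, 11):
    w, d, sinr = mmse_equalizer(h, N0, N)
    results[N] = sinr
    print(N, d, 10 * np.log10(sinr))
# Matched-filter bound for comparison: ||h||^2 * Es/N0.
print(10 * np.log10(np.sum(h**2) / N0))
```

Note that this channel has a spectral null at $\omega = \pi$ (a zero at $z = -1$), so the linear-equalizer SINR stays well below the MFB even with more taps.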

ex-ch13-07

Easy

Show that the MMSE linear equalizer reduces to the matched filter Hβˆ—(ejΟ‰)H^*(e^{j\omega}) (up to a scalar) when SNRβ†’0\text{SNR} \to 0.

ex-ch13-08

Medium

A channel has impulse response $h = [1,\; 0.8,\; 0.3]$.

(a) Design a DFE with $N_f = 3$ feedforward taps and $N_b = 2$ feedback taps at $E_s/N_0 = 20$ dB.

(b) Compare the MMSE-DFE MSE with the linear MMSE MSE.

(c) By what factor does the DFE improve the output SINR?
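The closed-form MMSE-DFE design is somewhat involved; a data-driven least-squares check gives the same comparison empirically. The sketch below feeds back the *correct* past symbols (no error propagation) and uses an assumed decision delay $d = 2$; both are simplifying assumptions, not part of the exercise statement:

```python
import numpy as np

rng = np.random.default_rng(0)
h = np.array([1.0, 0.8, 0.3])
sigma = np.sqrt(10 ** (-20.0 / 10))     # Es/N0 = 20 dB, Es = 1
Nf, Nb, d = 3, 2, 2                     # feedforward/feedback taps, delay

# Simulate BPSK through the channel.
K = 50_000
a = rng.choice([-1.0, 1.0], size=K)
y = np.convolve(a, h)[:K] + sigma * rng.standard_normal(K)

# Regression matrices: a feedforward window of y, plus (for the DFE)
# past symbols a[k-d-1], ..., a[k-d-Nb] assumed correctly decided.
rows = range(Nf + d, K)
Yff = np.array([y[k:k - Nf:-1] for k in rows])
Apast = np.array([a[k - d - 1:k - d - 1 - Nb:-1] for k in rows])
target = a[np.array(rows) - d]

def ls_mse(X, t):
    coef, *_ = np.linalg.lstsq(X, t, rcond=None)
    return np.mean((X @ coef - t) ** 2)

J_lin = ls_mse(Yff, target)
J_dfe = ls_mse(np.hstack([Yff, Apast]), target)
print(J_lin, J_dfe, J_lin / J_dfe)      # DFE attains a lower MSE
```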

ex-ch13-09

Hard

Show that the MMSE-DFE achieves a lower MSE than any linear equalizer by proving

$$J_{\text{DFE}} = \exp\!\left[\frac{1}{2\pi} \int_{-\pi}^{\pi} \ln\!\left(\frac{N_0/E_s}{|H(e^{j\omega})|^2 + N_0/E_s}\right) d\omega\right] \leq J_{\text{lin}},$$

where $J_{\text{lin}} = \frac{1}{2\pi} \int_{-\pi}^{\pi} \frac{N_0/E_s}{|H(e^{j\omega})|^2 + N_0/E_s}\, d\omega$ is the infinite-length linear MMSE. (Hint: $J_{\text{DFE}}$ is a geometric mean of the integrand and $J_{\text{lin}}$ is its arithmetic mean; apply Jensen's inequality.)
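Since the DFE expression is a geometric mean over frequency and the linear MMSE involves the corresponding arithmetic mean, the ordering can be spot-checked numerically. A sketch, assuming an illustrative channel and noise level (not part of the exercise):

```python
import numpy as np

h = np.array([1.0, 0.5])                         # illustrative channel
s = 0.1                                          # N0/Es (assumption)

omega = 2 * np.pi * np.arange(4096) / 4096
H = np.sum(h[None, :] * np.exp(-1j * np.outer(omega, np.arange(len(h)))),
           axis=1)
x = s / (np.abs(H) ** 2 + s)                     # integrand s/(|H|^2 + s)

J_dfe = np.exp(np.mean(np.log(x)))               # geometric mean
J_lin = np.mean(x)                               # arithmetic mean
print(J_dfe, J_lin)                              # J_dfe <= J_lin by AM-GM
```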

ex-ch13-10

Medium

Explain the error propagation phenomenon in a DFE. For the channel $h = [1,\; 0.9]$ with BPSK, calculate the probability that a single decision error causes the next symbol to also be detected incorrectly (assuming $E_b/N_0 = 12$ dB).
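A numerical evaluation of the conditional error probability, assuming $E_b = 1$ at the main tap and noise variance $\sigma^2 = N_0/2$: after a feedback error the DFE adds residual ISI of magnitude $2h_1$, whose sign helps or hurts the current decision with probability $1/2$ each:

```python
import math

h0, h1 = 1.0, 0.9
EbN0 = 10 ** (12 / 10)
sigma = math.sqrt(1 / (2 * EbN0))        # noise std dev (Eb = 1 assumed)

Q = lambda x: 0.5 * math.erfc(x / math.sqrt(2))

# Residual ISI of magnitude 2*h1 with random sign relative to the symbol.
p_next = 0.5 * Q((h0 + 2 * h1) / sigma) + 0.5 * Q((h0 - 2 * h1) / sigma)
print(p_next)    # ~0.5: since 2*h1 > h0, an error is near-certain half the time
```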

ex-ch13-11

Medium

For a channel $h = [1,\; 0.5]$ with BPSK:

(a) Draw the 2-state trellis for the Viterbi MLSE.

(b) Compute the free distance $d_{\text{free}}$.

(c) Compare the asymptotic BER with the matched-filter bound.
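Parts (b) and (c) can be checked by searching short error events: for BPSK, error sequences $e = a - a'$ have entries in $\{+2, 0, -2\}$, and $d_{\text{free}}^2$ is the minimum output energy $\|e * h\|^2$ over events with a nonzero first entry. A brute-force sketch over modest event lengths:

```python
import numpy as np
from itertools import product

h = np.array([1.0, 0.5])

# Minimum squared distance over error events up to length 5.
d2_min = min(
    np.sum(np.convolve(np.array(e), h) ** 2)
    for n in range(1, 6)
    for e in product([-2.0, 0.0, 2.0], repeat=n)
    if e[0] != 0.0
)
print(d2_min, np.sqrt(d2_min))

# The single-symbol error event achieves 4*||h||^2, so for this channel
# the asymptotic MLSE BER coincides with the matched-filter bound.
print(4 * np.sum(h**2))
```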

ex-ch13-12

Hard

A channel has memory $L = 3$ with 4-PAM modulation ($M = 4$).

(a) How many trellis states does the MLSE detector require?

(b) How many branch metric computations per time step?

(c) If the symbol rate is 1 Msymbol/s and each branch metric requires 10 floating-point operations, what is the computational load in GFLOPS?

(d) Suggest a reduced-complexity alternative and estimate its complexity.
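The arithmetic for parts (a)-(c) follows directly from the trellis dimensions $M^L$ states and $M^{L+1}$ branches per step:

```python
M, L = 4, 3                       # 4-PAM, channel memory
Rs = 1e6                          # symbols per second
flops_per_branch = 10

states = M ** L                   # 64 trellis states
branches = states * M             # 256 branch metrics per symbol
gflops = branches * flops_per_branch * Rs / 1e9
print(states, branches, gflops)   # 64 256 2.56
```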

ex-ch13-13

Challenge

Prove that the Viterbi algorithm finds the maximum-likelihood sequence by showing that the principle of optimality holds for the ISI channel trellis: the best path to any state at time kk must contain the best path to some state at time kβˆ’1k-1.

ex-ch13-14

Easy

An LMS equalizer with $N_f = 11$ taps operates at $\text{SNR} = 20$ dB. The input signal power is $\sigma_y^2 = 2.0$.

(a) Compute the maximum stable step size.

(b) Recommend a practical step size using the rule of thumb $\mu \approx 1/(5 N_f \sigma_y^2)$.
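A two-line check, using the common stability estimate $\mu_{\max} \approx 2/(N_f \sigma_y^2)$ for the total input power seen by the filter:

```python
Nf = 11
sigma_y2 = 2.0

# Stability bound and the rule-of-thumb practical step size.
mu_max = 2 / (Nf * sigma_y2)             # ~0.0909
mu_practical = 1 / (5 * Nf * sigma_y2)   # ~0.00909
print(mu_max, mu_practical)
```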

ex-ch13-15

Medium

Derive the convergence time constant of the LMS algorithm for the slowest eigenmode. Show that

$$\tau_{\text{slow}} = \frac{1}{2 \mu \lambda_{\min}},$$

where $\lambda_{\min}$ is the smallest eigenvalue of $\mathbf{R}_{yy}$.

ex-ch13-16

Medium

Compare the LMS and RLS algorithms for a channel with eigenvalue spread $\chi = \lambda_{\max}/\lambda_{\min} = 20$. The equalizer has $N_f = 15$ taps.

(a) If LMS uses $\mu = 0.01$ and $\lambda_{\min} = 0.1$, how many iterations does it take for the slowest mode to converge (to within $e^{-3} \approx 5\%$ of the initial error)?

(b) If RLS uses $\lambda = 0.99$ and $\delta = 0.01$ (initial $\mathbf{P}_0 = \delta^{-1} \mathbf{I}$), approximately how many iterations does RLS take to converge?

(c) Compare the computational cost (multiply-accumulates per iteration) of both algorithms.
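A back-of-the-envelope sketch for all three parts; the RLS convergence and cost figures are common rules of thumb (roughly $2N_f$ iterations and $O(N_f^2)$ operations per iteration), not exact results:

```python
Nf = 15
mu, lam_min = 0.01, 0.1

# (a) LMS: slowest time constant tau = 1/(2*mu*lambda_min); 3*tau for e^-3.
tau_slow = 1 / (2 * mu * lam_min)       # 500
n_lms = 3 * tau_slow                    # 1500 iterations
# (b) RLS: roughly 2*Nf iterations, largely independent of eigenvalue
# spread (rule of thumb).
n_rls = 2 * Nf                          # ~30 iterations
# (c) Multiply-accumulates per iteration (order-of-magnitude estimates).
cost_lms = 2 * Nf                       # filter output + tap update
cost_rls = 2.5 * Nf**2 + 4 * Nf        # O(Nf^2) gain-vector/P update
print(n_lms, n_rls, cost_lms, cost_rls)
```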

ex-ch13-17

Hard

Show that the misadjustment of the LMS algorithm (the ratio of excess MSE to minimum MSE) is approximately

$$\mathcal{M} = \frac{J_{\text{excess}}}{J_{\min}} \approx \mu \sum_{i=1}^{N_f} \lambda_i = \mu\, \operatorname{tr}(\mathbf{R}_{yy})$$

for small step size $\mu$. Use this to show the fundamental trade-off: faster convergence (larger $\mu$) increases the steady-state error.
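The trade-off can be made concrete with a few illustrative numbers (the values of $\operatorname{tr}(\mathbf{R}_{yy})$ and $\lambda_{\min}$ below are assumptions): misadjustment scales up with $\mu$ while the time constant scales down, so their product is fixed by the input statistics alone:

```python
tr_R = 2.0                              # hypothetical tr(R_yy)
lam_min = 0.1                           # hypothetical smallest eigenvalue
for mu in (0.005, 0.01, 0.02):
    M = mu * tr_R                       # steady-state misadjustment
    tau = 1 / (2 * mu * lam_min)        # slowest convergence time constant
    print(mu, M, tau, M * tau)          # M*tau = tr_R/(2*lam_min), constant
```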

ex-ch13-18

Challenge

Design a complete equalization strategy for a mobile wireless channel with the following parameters:

  • Channel memory: $L = 5$ taps
  • Modulation: 16-QAM ($M = 16$)
  • Symbol rate: 2 Msymbol/s
  • Doppler spread: 100 Hz (coherence time $\sim 5$ ms)
  • Training overhead budget: $\leq 10\%$

(a) Evaluate the feasibility of MLSE. Compute the number of trellis states and the required processing rate.

(b) Propose a practical equalizer architecture (DFE or linear) and specify the number of taps.

(c) Choose between LMS and RLS for adaptation and justify your choice. Determine the training sequence length needed.

(d) Estimate the performance gap relative to the matched-filter bound.
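The headline numbers for parts (a) and (c) follow from simple arithmetic on the given parameters:

```python
M, L = 16, 5
Rs = 2e6                                # symbols per second
Tc = 5e-3                               # coherence time
overhead = 0.10

# (a) MLSE feasibility.
states = M ** L                         # 1,048,576 trellis states
branches_per_s = states * M * Rs        # branch metrics per second
print(states, branches_per_s)           # ~3.4e13/s: clearly infeasible

# (c) Training budget: symbols available per coherence interval.
train_budget = overhead * Rs * Tc       # 1000 training symbols per 5 ms
print(train_budget)
```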