Prerequisites

Before You Begin

This chapter bridges machine learning and wireless communications. If any item below feels unfamiliar, revisit the linked material before proceeding.

  • Linear algebra: least squares, SVD, matrix gradients (Review ch01)

    Self-check: Can you compute the gradient of a quadratic cost $f(\mathbf{x}) = \|\mathbf{A}\mathbf{x} - \mathbf{b}\|^2$ and write the closed-form minimiser $\hat{\mathbf{x}} = (\mathbf{A}^T\mathbf{A})^{-1}\mathbf{A}^T\mathbf{b}$?
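
This self-check can be verified numerically. The sketch below, with illustrative sizes and a fixed seed, solves the normal equations and confirms that the gradient $2\mathbf{A}^T(\mathbf{A}\mathbf{x} - \mathbf{b})$ vanishes at the closed-form minimiser:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 3))   # tall matrix: overdetermined system
b = rng.standard_normal(8)

# Closed-form minimiser: x_hat = (A^T A)^{-1} A^T b, via the normal equations.
x_hat = np.linalg.solve(A.T @ A, A.T @ b)

# Gradient of f(x) = ||Ax - b||^2 is 2 A^T (Ax - b); it is zero at x_hat.
grad = 2 * A.T @ (A @ x_hat - b)
print(np.allclose(grad, 0))
```

If the printed value is True, the first-order optimality condition holds at $\hat{\mathbf{x}}$.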

  • Probability: Gaussian distributions, maximum likelihood, KL divergence (Review ch02)

    Self-check: Can you write the log-likelihood of a Gaussian observation model $\mathbf{y} = \mathbf{H}\mathbf{x} + \mathbf{n}$ with $\mathbf{n} \sim \mathcal{CN}(\mathbf{0}, \sigma^2\mathbf{I})$ and derive the ML estimate of $\mathbf{x}$?
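
For white Gaussian noise, maximising the log-likelihood is equivalent to minimising $\|\mathbf{y} - \mathbf{H}\mathbf{x}\|^2$, so the ML estimate is the complex least-squares solution $\hat{\mathbf{x}}_{\mathrm{ML}} = (\mathbf{H}^H\mathbf{H})^{-1}\mathbf{H}^H\mathbf{y}$. A minimal sketch (dimensions and noise level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 16, 4                       # observations, unknowns
H = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)
sigma = 0.01                       # low noise so the estimate is near the truth
n = sigma * (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
y = H @ x + n

# ML estimate under CN(0, sigma^2 I) noise = complex least squares.
x_ml = np.linalg.solve(H.conj().T @ H, H.conj().T @ y)
print(np.linalg.norm(x_ml - x))   # small residual error at low noise
```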

  • Digital modulation: constellations, BER, signal space geometry (Review ch03)

    Self-check: Can you draw the constellation for QPSK and 16-QAM, compute the average symbol energy, and explain how minimum-distance detection works?
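
The constellation self-check can also be carried out in a few lines of NumPy. The sketch below builds unit-average-energy QPSK and 16-QAM constellations (a standard construction, chosen here for illustration), verifies the normalisation $\mathbb{E}[|s|^2] = 1$, and performs minimum-distance detection on one received sample:

```python
import numpy as np

# QPSK: four points on the diagonals, normalised to unit average energy.
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

# 16-QAM: 4x4 grid on levels {-3,-1,1,3}; average energy 10 before scaling.
levels = np.array([-3, -1, 1, 3])
qam16 = np.array([a + 1j * b for a in levels for b in levels]) / np.sqrt(10)

# Both constellations now have average symbol energy E[|s|^2] = 1.
e_qpsk = np.mean(np.abs(qpsk) ** 2)
e_qam16 = np.mean(np.abs(qam16) ** 2)

# Minimum-distance detection: pick the constellation point closest to r.
r = (0.9 + 1.1j) / np.sqrt(2)              # noisy received sample
s_hat = qpsk[np.argmin(np.abs(qpsk - r))]  # detects (1 + 1j)/sqrt(2)
```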

  • Estimation and detection theory: MMSE, MAP, Cramér-Rao bound (Review ch09)

    Self-check: Can you state the MMSE estimator $\hat{\mathbf{x}}_{\text{MMSE}} = \mathbb{E}[\mathbf{x}|\mathbf{y}]$, explain the bias-variance trade-off, and describe when the linear MMSE equals the full MMSE?
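
When $\mathbf{x}$ and $\mathbf{y}$ are jointly Gaussian, the linear MMSE estimator equals the full MMSE $\mathbb{E}[\mathbf{x}|\mathbf{y}]$. The sketch below (illustrative sizes; prior $\mathbf{x} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$) computes the LMMSE estimate in two algebraically equivalent forms and checks they agree:

```python
import numpy as np

rng = np.random.default_rng(2)
N, M = 3, 6
H = rng.standard_normal((M, N))
sigma2 = 0.5
x = rng.standard_normal(N)                        # draw from the N(0, I) prior
y = H @ x + np.sqrt(sigma2) * rng.standard_normal(M)

# Regularised form: x_hat = (H^T H + sigma^2 I)^{-1} H^T y.
x_mmse = np.linalg.solve(H.T @ H + sigma2 * np.eye(N), H.T @ y)

# Covariance form: x_hat = C_xy C_yy^{-1} y with C_xy = H^T, C_yy = H H^T + sigma^2 I.
# The two agree by the matrix inversion lemma.
x_cov = H.T @ np.linalg.solve(H @ H.T + sigma2 * np.eye(M), y)
print(np.allclose(x_mmse, x_cov))
```

Under the Gaussian assumptions above, either expression is the full MMSE estimator; for non-Gaussian priors the same formula is only the best *linear* estimator.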

Notation for This Chapter

Symbols introduced in this chapter. See also the master Global Notation Table in the front matter.

| Symbol | Meaning | Introduced |
|---|---|---|
| $f_\theta(\cdot)$ | Parametric function (neural network) with learnable parameters $\theta$ | s01 |
| $\mathcal{L}(\theta)$ | Loss function to be minimised during training | s01 |
| $\hat{\mathbf{H}}_{\mathrm{NN}}$ | Neural-network channel estimate | s01 |
| $\mathcal{S}_\theta$ | Soft-thresholding operator with learnable threshold $\theta$ | s02 |
| $\mathbf{W}^{(l)}, \boldsymbol{\theta}^{(l)}$ | Weight matrix and threshold at layer $l$ of an unfolded network | s02 |
| $\pi(a|s)$ | Policy mapping state $s$ to action $a$ in reinforcement learning | s03 |
| $Q(s,a)$ | State-action value function (Q-function) | s03 |
| $r_t$ | Reward at time step $t$ | s03 |
| $\gamma$ | Discount factor in the cumulative reward objective | s03 |
| $\mathbf{w}_{\text{global}}$ | Global model parameters in federated learning | s04 |
| $C$ | Number of clients (base stations) in federated learning | s04 |
| $E$ | Number of local SGD epochs per communication round | s04 |