Prerequisites
Before You Begin
This chapter bridges machine learning and wireless communications. If any item below feels unfamiliar, revisit the linked material before proceeding.
- Linear algebra: least squares, SVD, matrix gradients (Review ch01)
  Self-check: Can you compute the gradient of a quadratic cost and write down the closed-form minimiser? (See the first worked sketch after this list.)
- Probability: Gaussian distributions, maximum likelihood, KL divergence (Review ch02)
  Self-check: Can you write the log-likelihood of a Gaussian observation model with known noise variance and derive the ML estimate of the mean? (See the second sketch after this list.)
- Digital modulation: constellations, BER, signal-space geometry (Review ch03)
  Self-check: Can you draw the QPSK and 16-QAM constellations, compute the average symbol energy, and explain how minimum-distance detection works? (See the numerical sketch after this list.)
- Estimation and detection theory: MMSE, MAP, Cramér-Rao bound (Review ch09)
  Self-check: Can you state the MMSE estimator as the conditional mean, explain the bias-variance trade-off, and describe when the linear MMSE estimator coincides with the full MMSE estimator? (See the final sketch after this list.)
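For the linear-algebra self-check, a minimal worked sketch, assuming the standard least-squares cost $J(\mathbf{x}) = \|\mathbf{A}\mathbf{x}-\mathbf{b}\|_2^2$ with $\mathbf{A}$ of full column rank (this particular cost is chosen for illustration, not notation from this chapter):

$$
\nabla_{\mathbf{x}} J(\mathbf{x}) = 2\mathbf{A}^{\mathsf{T}}(\mathbf{A}\mathbf{x}-\mathbf{b}),
\qquad
\mathbf{x}^{\star} = (\mathbf{A}^{\mathsf{T}}\mathbf{A})^{-1}\mathbf{A}^{\mathsf{T}}\mathbf{b}.
$$

Setting the gradient to zero yields the normal equations, whose unique solution under the full-rank assumption is the closed-form minimiser above.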
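For the probability self-check, a sketch under the assumption of $N$ i.i.d. observations $y_i \sim \mathcal{N}(\mu, \sigma^2)$ with $\sigma^2$ known (this observation model is an illustrative assumption):

$$
\log p(\mathbf{y} \mid \mu) = -\frac{N}{2}\log(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^{N}(y_i-\mu)^2,
\qquad
\hat{\mu}_{\text{ML}} = \frac{1}{N}\sum_{i=1}^{N} y_i.
$$

Differentiating the log-likelihood with respect to $\mu$ and setting the result to zero gives the sample mean.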
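For the modulation self-check, a minimal numerical sketch in Python. The unit-energy Gray-style QPSK mapping and the unit-spacing square 16-QAM grid are illustrative assumptions, not this book's exact conventions:

```python
import numpy as np

# QPSK: four points on the diagonals, normalised to unit symbol energy.
qpsk = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)

# Square 16-QAM with amplitude levels {-3, -1, +1, +3} on each axis.
levels = np.array([-3, -1, 1, 3])
qam16 = np.array([i + 1j * q for i in levels for q in levels])

# Average symbol energy: E_s = mean(|s|^2) over the constellation.
print("QPSK   E_s:", np.mean(np.abs(qpsk) ** 2))   # 1.0
print("16-QAM E_s:", np.mean(np.abs(qam16) ** 2))  # 10.0

# Minimum-distance detection: return the constellation point closest to the
# received sample (ML-optimal for equiprobable symbols in AWGN).
def detect(received, constellation):
    return constellation[np.argmin(np.abs(constellation - received))]

print(detect(0.9 - 1.2j, qam16))  # -> (1-1j)
```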
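For the estimation-theory self-check: the MMSE estimator is the conditional mean, and in the scalar case the linear MMSE estimator takes the form

$$
\hat{x}_{\text{MMSE}} = \mathbb{E}[x \mid y],
\qquad
\hat{x}_{\text{LMMSE}} = \mathbb{E}[x] + \frac{\operatorname{Cov}(x,y)}{\operatorname{Var}(y)}\bigl(y - \mathbb{E}[y]\bigr).
$$

The two coincide whenever $x$ and $y$ are jointly Gaussian, which is the key case to recall for the last part of the self-check.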
Notation for This Chapter
Symbols introduced in this chapter. See also the master Global Notation Table in the front matter.
| Symbol | Meaning | Introduced |
|---|---|---|
| $f_\theta(\cdot)$ | Parametric function (neural network) with learnable parameters $\theta$ | s01 |
| $\mathcal{L}$ | Loss function to be minimised during training | s01 |
| $\hat{\mathbf{h}}$ | Neural-network channel estimate | s01 |
| $\eta_\lambda(\cdot)$ | Soft-thresholding operator with learnable threshold $\lambda$ | s02 |
| $\mathbf{W}^{(\ell)}, \lambda^{(\ell)}$ | Weight matrix and threshold at layer $\ell$ of an unfolded network | s02 |
| $\pi$ | Policy mapping state $s$ to action $a$ in reinforcement learning | s03 |
| $Q(s,a)$ | State-action value function (Q-function) | s03 |
| $r_t$ | Reward at time step $t$ | s03 |
| $\gamma$ | Discount factor in the cumulative reward objective | s03 |
| $\mathbf{w}$ | Global model parameters in federated learning | s04 |
| $K$ | Number of clients (base stations) in federated learning | s04 |
| $E$ | Number of local SGD epochs per communication round | s04 |
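As a reading aid for the s02 and s03 symbols, here are the conventional definitions they enter; treat these as standard-reference sketches, since the chapter body may fix slightly different conventions:

$$
\eta_{\lambda}(x) = \operatorname{sign}(x)\,\max(|x|-\lambda,\,0),
\qquad
G_t = \sum_{k=0}^{\infty}\gamma^{k} r_{t+k},
\qquad
Q^{\pi}(s,a) = \mathbb{E}_{\pi}\bigl[G_t \mid s_t = s,\, a_t = a\bigr].
$$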