Chapter Summary

Key Points

  1. Supervised learning maps observations to estimates without explicit models. Neural-network channel estimators learn the mapping from pilot observations to full channel estimates, implicitly capturing the power delay profile and cross-subcarrier correlations that classical LS estimation ignores. A two-layer network with $\sim 5000$ parameters, trained on $\sim 500$ channel realisations, can approach MMSE performance without requiring explicit knowledge of the channel correlation matrix $\mathbf{R}_{HH}$. The universal approximation theorem guarantees that a sufficiently wide network can approximate the MMSE estimator $\hat{\mathbf{x}}_{\mathrm{MMSE}} = \mathbb{E}[\mathbf{x} \mid \mathbf{y}]$ to arbitrary accuracy.

  2. End-to-end autoencoders jointly optimise transmitter and receiver. By treating the communication chain (encoder $\to$ channel $\to$ decoder) as a single neural network, the autoencoder framework (O'Shea and Hoydis, 2017) discovers optimal constellation geometries and detection rules simultaneously. For AWGN channels, the learned constellations converge to classical PSK/QAM arrangements, validating the approach; for non-standard channels (non-linear amplifiers, limited feedback), the autoencoder can discover novel solutions that outperform hand-designed schemes. Backpropagation through the AWGN channel is trivial ($\partial\mathbf{y}/\partial\mathbf{x} = \mathbf{I}$); for non-differentiable channels, surrogate models or policy-gradient methods are required.

  3. Deep unfolding converts iterative algorithms into trainable networks with dramatic efficiency gains. LISTA unfolds $L$ ISTA iterations into $L$ layers with per-layer learnable thresholds, achieving linear convergence ($\rho^L$) compared to ISTA's sublinear $O(1/t)$ rate. Typically $L = 5$--$20$ learned layers match $T = 50$--$500$ classical iterations, yielding a $5$--$25\times$ reduction in computational cost. The principle extends beyond sparse recovery: any iterative algorithm (ADMM, gradient descent, belief propagation, WMMSE) can be unfolded, with the algorithm structure providing a strong inductive bias that dramatically reduces the number of trainable parameters compared to black-box alternatives.

  4. Reinforcement learning enables adaptive resource allocation without explicit labels. Power control, scheduling, and beam management are sequential decision problems naturally modelled as MDPs. Tabular Q-learning discovers near-optimal power allocations for small systems ($K \leq 4$ users), while deep RL (DQN, DDPG, PPO) scales to larger networks. The key challenges are sample efficiency (RL requires many environment interactions), the curse of dimensionality in the joint action space ($J^K$ for $K$ users with $J$ power levels), and multi-agent coordination in decentralised settings.

  5. Federated learning trains models across distributed base stations without sharing raw data. FedAvg aggregates local model updates from $C$ clients in communication rounds, reducing communication cost and preserving data privacy. Under IID data, FedAvg converges to the global optimum at a rate comparable to centralised SGD. Under non-IID data (the common case in wireless, where each BS sees different channel statistics), a persistent bias $O(E \cdot \Gamma / \mu)$ remains, proportional to the data heterogeneity $\Gamma$ and the number of local epochs $E$. Over-the-air aggregation, client selection, and gradient compression are active research areas for making FL practical over wireless links.

  6. Model-based ML outperforms black-box networks in the data-scarce regime that characterises wireless. Deep unfolding requires $10$--$100\times$ fewer training samples than black-box NNs for comparable performance, because the algorithm structure encodes physical invariants. Model-based approaches also generalise better under distribution shift (different SNR, channel model, or number of users) and offer interpretability (each layer corresponds to one algorithm iteration). Black-box NNs are preferred when no suitable algorithm exists, abundant data is available, or the channel model is too complex for analytical treatment. In practice, hybrid approaches --- model-based backbone with NN refinement --- often achieve the best trade-off.
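The data-driven estimator of key point 1 can be sketched in a few lines of numpy. This is an illustrative toy rather than the chapter's experiment: the exponential correlation model, the dimensions, and the noise level are assumptions chosen for the sketch. A linear estimator fitted from $\sim 500$ noisy realisations approaches the analytic LMMSE filter without ever being given $\mathbf{R}_{HH}$.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16                                     # subcarriers (toy size)
# assumed exponential correlation model for R_HH
R = 0.9 ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))
L = np.linalg.cholesky(R)
sigma2 = 0.1                               # pilot noise variance

def realisations(n):
    """Draw n correlated channels h and noisy pilot observations y = h + w."""
    h = rng.standard_normal((n, N)) @ L.T
    y = h + np.sqrt(sigma2) * rng.standard_normal((n, N))
    return h, y

# "training": fit a linear estimator from ~500 realisations; R_HH never used
h_tr, y_tr = realisations(500)
W_learned, *_ = np.linalg.lstsq(y_tr, h_tr, rcond=None)

# evaluation against LS (the raw observation) and the analytic LMMSE filter
h_te, y_te = realisations(2000)
W_mmse = R @ np.linalg.inv(R + sigma2 * np.eye(N))
mse_ls = np.mean((y_te - h_te) ** 2)
mse_learned = np.mean((y_te @ W_learned - h_te) ** 2)
mse_mmse = np.mean((y_te @ W_mmse.T - h_te) ** 2)
```

Because the channel is Gaussian here, the MMSE estimator is linear, so even this single-layer fit closes most of the LS-to-MMSE gap; the two-layer network of the chapter handles the general case.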
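Key point 2's claim that learned constellations recover classical geometries can be illustrated without training a full autoencoder. The sketch below is a deliberate simplification: a repulsion-based surrogate for the cross-entropy objective, with invented step sizes, that optimises $M = 4$ unit-power symbols by gradient ascent and arrives at a QPSK-like arrangement with minimum distance near $\sqrt{2}$.

```python
import numpy as np

rng = np.random.default_rng(1)
M = 4                                  # messages -> 2-D (I/Q) symbols
x = 0.1 * rng.standard_normal((M, 2))  # random initial constellation

def normalise(x):
    """Project onto the average-power constraint E[|x|^2] = 1."""
    return x / np.sqrt(np.mean(np.sum(x ** 2, axis=1)))

lr, T = 0.05, 0.5
for _ in range(500):
    x = normalise(x)
    diff = x[:, None, :] - x[None, :, :]
    d2 = np.sum(diff ** 2, axis=-1)
    w = np.exp(-d2 / T)                # nearby symbols repel most strongly
    np.fill_diagonal(w, 0.0)
    x = x + lr * (w[:, :, None] * diff).sum(axis=1)

x = normalise(x)
d = np.sqrt(np.sum((x[:, None] - x[None, :]) ** 2, axis=-1))
min_dist = d[~np.eye(M, dtype=bool)].min()   # QPSK reference: sqrt(2) ~ 1.414
```

The actual autoencoder reaches the same geometry by backpropagating a classification loss through the channel, which is possible precisely because $\partial\mathbf{y}/\partial\mathbf{x} = \mathbf{I}$ for AWGN.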
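The unfolding idea of key point 3 can also be made concrete: below, a classical ISTA loop sits next to an $L$-layer network whose per-layer weights and thresholds are initialised at the ISTA values. The training loop that would then adapt them per layer (what LISTA adds) is omitted, and the problem sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, k = 20, 40, 3                    # measurements, dimension, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

def soft(v, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

lam = 0.05
eta = 1.0 / np.linalg.norm(A, 2) ** 2  # step size from the Lipschitz constant

# classical ISTA: one shared update rule, many iterations
x = np.zeros(n)
for _ in range(300):
    x = soft(x + eta * A.T @ (y - A @ x), eta * lam)

# unfolded network: L layers, each with its OWN (W1, W2, theta),
# initialised at the ISTA values; training these is what LISTA adds
layers = [{"W1": eta * A.T,
           "W2": np.eye(n) - eta * A.T @ A,
           "theta": eta * lam} for _ in range(10)]

def forward(y, layers):
    x = np.zeros(n)
    for p in layers:
        x = soft(p["W1"] @ y + p["W2"] @ x, p["theta"])
    return x

x_unfolded = forward(y, layers)
```

Untrained, the 10-layer network is just 10 ISTA iterations; after training on channel realisations from the deployment distribution, those 10 layers stand in for hundreds of classical iterations.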
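The tabular case of key point 4 fits in a few lines. The toy MDP below is invented for illustration (the gains, power levels, and price $\lambda$ are not from the chapter): the reward is rate minus a power cost, fading makes the next state independent of the action, and $\varepsilon$-greedy Q-learning recovers the intuitive policy of full power on the strong channel and silence on the weak one.

```python
import numpy as np

rng = np.random.default_rng(3)
gains = np.array([0.2, 1.0])           # channel states (weak, strong)
powers = np.array([0.0, 1.0, 2.0])     # discrete power levels
lam = 0.5                              # price per unit transmit power
alpha, gamma, eps = 0.1, 0.9, 0.2
Q = np.zeros((len(gains), len(powers)))

def reward(s, a):
    """Achievable rate minus a linear power cost (unit noise power)."""
    return np.log2(1.0 + powers[a] * gains[s]) - lam * powers[a]

s = 0
for _ in range(20000):
    # epsilon-greedy exploration
    a = rng.integers(len(powers)) if rng.random() < eps else int(np.argmax(Q[s]))
    r = reward(s, a)
    s_next = int(rng.integers(len(gains)))   # i.i.d. block fading
    Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
    s = s_next

policy = np.argmax(Q, axis=1)   # learned power index per channel state
```

The $J^K$ blow-up mentioned in the text appears as soon as $K$ users choose powers jointly: the action axis of `Q` grows to $J^K$ columns, which is what pushes larger systems toward deep RL.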
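Finally, the FedAvg loop of key point 5, stripped to its essentials. This is an IID toy problem (every base station's data comes from the same linear model, so the non-IID bias term $O(E \cdot \Gamma / \mu)$ vanishes), and the sizes, epoch count, and learning rate are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(4)
C, d = 4, 5                          # clients (base stations), model size
w_star = rng.standard_normal(d)      # common ground-truth model (IID case)
A = [rng.standard_normal((50, d)) for _ in range(C)]
b = [A[c] @ w_star + 0.01 * rng.standard_normal(50) for c in range(C)]

def local_update(w, Ac, bc, epochs=5, lr=0.05):
    """E gradient epochs on one client's private least-squares loss."""
    for _ in range(epochs):
        w = w - lr * Ac.T @ (Ac @ w - bc) / len(bc)
    return w

w = np.zeros(d)
for _ in range(100):                             # communication rounds
    updates = [local_update(w.copy(), A[c], b[c]) for c in range(C)]
    w = np.mean(updates, axis=0)                 # FedAvg: average the models
```

Only the $d$-dimensional updates cross the network, never the raw data `(A[c], b[c])`. Making the clients' ground truths differ (distinct `w_star` per client) reproduces the persistent non-IID bias the text describes: the average then hovers between the local optima instead of converging to any of them.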

Looking Ahead

Machine learning for wireless communications is a rapidly evolving field at the intersection of signal processing, optimisation, and deep learning. Several directions are shaping the next generation of research. The O-RAN architecture, with its RAN Intelligent Controller (RIC), provides a standardised platform for deploying ML models in production networks, moving these techniques from simulation to real-world deployment. Semantic communication, where the goal is to transmit meaning rather than bits, leverages joint source-channel coding via autoencoders to achieve dramatic compression gains for structured data (images, text, sensor readings). Generative AI, including diffusion models and large language models, is being explored for channel modelling, codebook design, and network planning. The deep unfolding paradigm is being extended to more complex algorithms, including turbo decoding, massive MIMO precoding, and joint communication-sensing signal processing. Finally, the convergence of federated learning with over-the-air computation, differential privacy, and edge intelligence is creating a new systems paradigm where the wireless network itself becomes a distributed learning platform.