Prerequisites & Notation
Before You Begin
Chapter 17 closes the technical development of the book by composing AirComp (Chapter 16), federated learning fundamentals (Chapter 9), and secure aggregation (Chapter 10) into a single wireless-FL pipeline. The golden thread — privacy, robustness, communication efficiency — is here recast as a three-way design trade-off over a physical wireless channel. The CommIT contribution on information-theoretically secure federated representation learning appears in §17.4.
- FedAvg and FL convergence (§9.2; review Chapter 9)
  - Self-check: State the convergence rate of FedAvg under smoothness and bounded-gradient assumptions (a rate sketch follows this list).
- Secure aggregation threat model (review Chapter 10)
  - Self-check: Recall the honest-but-curious server model and its privacy objective (a masking sketch follows this list).
- Wireless resource allocation basics (review Chapter 6)
  - Self-check: What does water-filling power allocation optimize? (A code sketch follows this list.)
- Stochastic gradient descent convergence (review Chapter 28)
  - Self-check: How does SGD with bounded gradient-noise variance behave asymptotically for strongly convex losses? (See the sketch after this list.)
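A hedged reminder for the first self-check (this is the standard result reviewed in §9.2; the exact constants depend on the assumptions made there): for an $L$-smooth, $\mu$-strongly convex global loss with bounded gradient variance, FedAvg with an appropriately decaying learning rate satisfies

$$
\mathbb{E}\big[F(\mathbf{w}_T)\big] - F^\star \;=\; \mathcal{O}\!\left(\frac{1}{T}\right),
$$

where the hidden constant grows with the gradient-variance bound, the number of local steps, and a non-IID heterogeneity term.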
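For the second self-check: under the honest-but-curious model, the server follows the protocol but tries to infer individual updates, and the privacy objective is that it learns only the aggregate. Below is a toy sketch of pairwise-mask cancellation in the spirit of Chapter 10's secure aggregation; the names and the seed-derivation shortcut are illustrative, and key agreement and dropout handling are omitted.

```python
import numpy as np

def pairwise_mask(user_a, user_b, dim, modulus=2**16):
    # Both users derive the same mask from a shared seed (a stand-in for a
    # key agreed via Diffie-Hellman in the real protocol).
    seed = hash((min(user_a, user_b), max(user_a, user_b))) % (2**32)
    rng = np.random.default_rng(seed)
    return rng.integers(0, modulus, size=dim)

def masked_update(user_id, update, all_ids, modulus=2**16):
    # The lower-indexed user of each pair adds the mask, the other subtracts it,
    # so every mask cancels in the server-side sum.
    masked = update.copy()
    for peer in all_ids:
        if peer == user_id:
            continue
        m = pairwise_mask(user_id, peer, update.size, modulus)
        masked = (masked + m) % modulus if user_id < peer else (masked - m) % modulus
    return masked

# Toy example: three users with integer-quantized model updates.
ids = [0, 1, 2]
updates = [np.array([3, 7, 1]), np.array([2, 2, 2]), np.array([5, 0, 4])]
masked = [masked_update(i, u, ids) for i, u in zip(ids, updates)]

# The honest-but-curious server sees only `masked` (each looks uniformly random),
# yet the masks cancel and the sum reveals exactly the aggregate update.
print(sum(masked) % 2**16)   # equals ...
print(sum(updates) % 2**16)  # ... the plain aggregate (mod 2**16)
```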
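For the third self-check: water-filling splits a total transmit-power budget across parallel subchannels so as to maximize the sum rate, giving more power to stronger channels and none to very weak ones. A minimal sketch, assuming Gaussian subchannels with gains `g` and a bisection search for the water level; the function name and parameters are illustrative, not Chapter 6's exact formulation.

```python
import numpy as np

def water_filling(g, P_total, noise=1.0, iters=100):
    """Maximize sum(log2(1 + g_i * p_i / noise)) subject to sum(p_i) <= P_total.
    The optimum is p_i = max(0, level - noise / g_i); bisection finds the water level."""
    g = np.asarray(g, dtype=float)
    lo, hi = 0.0, P_total + (noise / g).max()       # bracket for the water level
    for _ in range(iters):
        level = 0.5 * (lo + hi)
        p = np.maximum(0.0, level - noise / g)
        if p.sum() > P_total:
            hi = level       # too much power poured in: lower the water level
        else:
            lo = level       # budget not exhausted: raise the water level
    return np.maximum(0.0, lo - noise / g)

p = water_filling(g=[2.0, 1.0, 0.25], P_total=3.0)
print(p, p.sum())   # stronger channels get more power; the weakest gets none
```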
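For the last self-check, a hedged summary of the standard behavior for an $L$-smooth, $\mu$-strongly convex loss with gradient-noise variance at most $\sigma^2$: with a constant step size $\eta \le 1/L$, SGD converges linearly only to a noise floor,

$$
\mathbb{E}\,\big\|\mathbf{w}_t - \mathbf{w}^\star\big\|^2 \;\le\; (1 - \eta\mu)^t\,\big\|\mathbf{w}_0 - \mathbf{w}^\star\big\|^2 \;+\; \mathcal{O}\!\left(\frac{\eta\sigma^2}{\mu}\right),
$$

whereas a decaying step size $\eta_t = \Theta\!\big(1/(\mu t)\big)$ removes the floor and yields an $\mathcal{O}\!\big(\sigma^2/(\mu T)\big)$ bound on the expected suboptimality. Constants differ across references; only this qualitative behavior is needed here.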
Notation for This Chapter
Chapter 17 combines FL notation (rounds, learning rate, gradients) with wireless notation (channel, power, MSE).
| Symbol | Meaning | Introduced |
|---|---|---|
| $t$ | FL round index, $t = 1, \dots, T$ | §17.1 |
| $T$ | Total number of FL rounds (budget) | §17.1 |
| $\mathbf{w}_t$ | Global model parameters at round $t$ | §17.1 |
| $\mathbf{g}_{k,t}$ | Local gradient of user $k$ at round $t$ | §17.1 |
| $\eta$ | Learning rate (distinguished from the receive amplitude) | §17.1 |
| $\hat{\mathbf{g}}_t$ | Estimated aggregate gradient at round $t$ | §17.1 |
| $\mathrm{MSE}_t$ | Per-round aggregation MSE | §17.2 |
| $\mathcal{S}_t$ | Set of scheduled users at round $t$ | §17.3 |
| $L$ | Smoothness constant of the FL loss | §17.2 |
| $\mu$ | Strong-convexity constant of the FL loss | §17.2 |
| $\mathbf{z}_k$ | Learned representation of user $k$ (§17.4) | §17.4 |
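To tie the two notations together, here is a minimal end-to-end sketch of one aggregation round: each user's local gradient $\mathbf{g}_{k,t}$ is summed over a noisy channel, the server forms the estimate $\hat{\mathbf{g}}_t$, records the per-round $\mathrm{MSE}_t$, and applies the learning-rate-$\eta$ update. The quadratic losses, channel model, and parameter values are illustrative stand-ins, not the chapter's system model.

```python
import numpy as np

rng = np.random.default_rng(0)
K, d, T, eta = 10, 5, 50, 0.1        # users, model dimension, rounds T, learning rate eta
targets = rng.normal(size=(K, d))     # user k holds the quadratic loss 0.5 * ||w - targets[k]||^2
w = np.zeros(d)                       # global model w_t
noise_std = 0.05                      # additive receiver noise on the aggregated signal

for t in range(T):
    # Local gradients g_{k,t} of the per-user quadratic losses at the current w_t.
    grads = np.stack([w - targets[k] for k in range(K)])
    true_avg = grads.mean(axis=0)

    # Over-the-air aggregation: the channel delivers the superimposed sum plus
    # receiver noise; the server rescales by 1/K to estimate the average gradient.
    rx = grads.sum(axis=0) + noise_std * rng.normal(size=d)
    g_hat = rx / K                    # estimated aggregate gradient \hat{g}_t

    mse_t = np.mean((g_hat - true_avg) ** 2)   # per-round aggregation MSE_t
    w = w - eta * g_hat               # global update with learning rate eta

print("last-round aggregation MSE:", mse_t)
print("distance to the average-loss optimum:", np.linalg.norm(w - targets.mean(axis=0)))
```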