Chapter 18 Summary

Key Points

1. Non-linear coded computing (§18.1): polynomial approximation gives a recovery threshold of $K = O(\log(1/\varepsilon))$ for smooth $f$, but breaks down for non-smooth functions (e.g., ReLU). Learned coded computing handles arbitrary functions empirically but lacks a theoretical foundation. Information-theoretic optimality is open.
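As a toy illustration of the $O(\log(1/\varepsilon))$ scaling (a sketch of the underlying approximation-theory fact, not a scheme from the chapter; all parameter choices are illustrative): for a smooth function like $\exp$, the Chebyshev degree needed for uniform error $\varepsilon$ grows roughly logarithmically in $1/\varepsilon$, and a polynomial-evaluation code then needs a recovery threshold on the order of that degree.

```python
import numpy as np

# Illustrative sketch: degree needed for a Chebyshev fit of a smooth f
# to reach uniform error eps on [-1, 1]. For smooth f this grows like
# O(log(1/eps)), which drives the K = O(log(1/eps)) recovery threshold.
xs = np.linspace(-1.0, 1.0, 2001)

def cheb_degree_for(eps, f=np.exp, max_d=60):
    """Smallest degree whose least-squares Chebyshev fit is within eps of f."""
    for d in range(1, max_d + 1):
        coeffs = np.polynomial.chebyshev.chebfit(xs, f(xs), d)
        err = np.max(np.abs(np.polynomial.chebyshev.chebval(xs, coeffs) - f(xs)))
        if err <= eps:
            return d
    return max_d

for eps in (1e-2, 1e-4, 1e-8):
    print(f"eps={eps:g}: degree {cheb_degree_for(eps)}")
```

For a non-smooth target such as ReLU, the required degree instead grows polynomially in $1/\varepsilon$, which is why the polynomial route breaks down there.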

2. Three-axis frontier (§18.2): joint privacy-robustness-DP has the known lower bound $\Omega\bigl(n(T + B + \log(1/\delta)/\varepsilon^2)d\bigr)$. It is achievable via serial composition at 2–3× cost; jointly optimal schemes are open. CommIT research extends ByzSecAgg with DP.
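A quick back-of-envelope evaluation of the lower bound can show which axis dominates in a given regime (constants dropped; all parameter values below are illustrative, not from the text):

```python
import math

# Per-axis contributions to Omega(n (T + B + log(1/delta)/eps^2) d),
# with the Omega-constant taken as 1. Parameter choices are illustrative.
def lower_bound_terms(n, T, B, eps, delta, d):
    """T-privacy, B-robustness, and DP contributions to the bound."""
    return {
        "privacy": n * T * d,
        "robustness": n * B * d,
        "dp": n * (math.log(1 / delta) / eps**2) * d,
    }

for eps in (0.1, 1.0, 10.0):
    terms = lower_bound_terms(n=100, T=10, B=5, eps=eps, delta=1e-6, d=10**6)
    print(eps, max(terms, key=terms.get))  # which axis dominates the bound
```

In high-privacy regimes (small $\varepsilon$) the DP term dominates; with loose DP the collusion threshold $T$ takes over, which is one way to see why a jointly optimal scheme must adapt across regimes rather than pay for all three axes serially.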

3. Decentralized FL (§18.3): D-SGD over graphs matches the centralized asymptotic rate up to graph-dependent constants. CCESA-style sparsification, AirComp-over-gossip, and graph-aware coded computing are open directions.
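A minimal D-SGD sketch, assuming scalar local quadratic objectives, a ring topology, and a doubly stochastic mixing matrix (all choices illustrative, not from the chapter):

```python
import numpy as np

# Each node i holds f_i(x) = 0.5 * (x - b_i)^2, so the global minimizer
# of the average objective is b.mean(). Nodes gossip with neighbors via a
# doubly stochastic W, then take a local gradient step.
rng = np.random.default_rng(0)
n = 8
b = rng.normal(size=n)      # local targets; global optimum is b.mean()
x = np.zeros(n)             # one scalar model per node

# Doubly stochastic mixing matrix for a ring: 1/2 self, 1/4 each neighbor.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

for t in range(5000):
    eta = 1.0 / (t + 10)            # diminishing step size
    x = W @ x - eta * (x - b)       # gossip average, then local gradient step

print("consensus error:", np.max(np.abs(x - b.mean())))
```

The graph enters through the spectral gap of $W$: a better-connected topology contracts the disagreement between nodes faster, which is the source of the graph-dependent constants in the rate.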

4. Modern ML (§18.4): coded attention, coded MoE, and federated RAG bridge the book's framework to transformer-era workloads. Quantum PIR offers strict rate advantages in some regimes. All are active research areas.

5. The golden thread persists: privacy, robustness, and communication efficiency remain fundamentally coupled across all five parts. The book closes by mapping the open frontier, not by wrapping it up.