References & Further Reading

References

  1. K. Bonawitz, V. Ivanov, B. Kreuter, A. Marcedone, H. B. McMahan, S. Patel, D. Ramage, A. Segal, and K. Seth, Practical Secure Aggregation for Privacy-Preserving Machine Learning, 2017

    The headline reference for this chapter. Introduces the pairwise-masking secure-aggregation protocol with Shamir-shared seeds for dropout handling. The de facto standard for privacy-preserving aggregation in production FL.
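    The masking idea can be sketched in a few lines: each pair of clients derives a common seed, expands it into a mask, and the two parties apply that mask with opposite signs, so every mask cancels in the server's sum. A toy sketch only — Python's `random` stands in for a cryptographic PRG, and the real protocol additionally handles dropouts and quantization:

```python
import random

MOD = 2**16  # toy modulus for integer-quantized updates

def seeds_for(client, pair_seeds):
    """Collect the seeds this client shares with each other client."""
    out = {}
    for (a, b), s in pair_seeds.items():
        if client == a:
            out[b] = s
        elif client == b:
            out[a] = s
    return out

def mask_update(client, update, pair_seeds):
    """Per partner: the lower-id party adds the mask, the higher-id subtracts it."""
    masked = list(update)
    for other, seed in seeds_for(client, pair_seeds).items():
        prg = random.Random(seed)  # stand-in for a cryptographic PRG
        mask = [prg.randrange(MOD) for _ in update]
        sign = 1 if client < other else -1
        masked = [(m + sign * r) % MOD for m, r in zip(masked, mask)]
    return masked

# Three clients, one shared seed per pair
pair_seeds = {(0, 1): 11, (0, 2): 22, (1, 2): 33}
updates = {0: [1, 2], 1: [3, 4], 2: [5, 6]}
masked = {i: mask_update(i, u, pair_seeds) for i, u in updates.items()}

# Each masked update individually looks random, but the masks cancel in the sum
total = [sum(col) % MOD for col in zip(*masked.values())]
print(total)  # [9, 12] — the true aggregate
```

    Because both members of a pair seed the same PRG, they generate identical masks; only the sign differs, which is why the server learns the sum and nothing else.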

  2. A. R. Elkordy, A. S. Avestimehr, and G. Caire, On the Information-Theoretic Optimality of Secure Aggregation in Federated Learning with Uncoded Groupwise Keys, 2022

    Establishes the information-theoretic optimality of the Bonawitz protocol within the class of uncoded groupwise keys. The primary CommIT contribution of this chapter; also discussed in §10.4.

  3. A. Shamir, How to Share a Secret, 1979

    Foundational paper on threshold secret sharing — the cryptographic primitive used for seed-sharing in §10.3's dropout handling.
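    A minimal sketch of $(t, n)$ threshold sharing over a prime field — illustrative only, not the encoding a production library would use: the secret is the constant term of a random degree-$(t-1)$ polynomial, shares are point evaluations, and any $t$ shares reconstruct via Lagrange interpolation at zero.

```python
import random

PRIME = 2**31 - 1  # a Mersenne prime; all share arithmetic is mod PRIME

def make_shares(secret, threshold, n, rng=random.Random(0)):
    """Split `secret` into n shares; any `threshold` of them reconstruct it."""
    coeffs = [secret] + [rng.randrange(PRIME) for _ in range(threshold - 1)]
    def poly(x):
        return sum(c * pow(x, k, PRIME) for k, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = make_shares(secret=123456, threshold=3, n=5)
assert reconstruct(shares[:3]) == 123456   # any 3 shares suffice
assert reconstruct(shares[2:]) == 123456   # a different subset works too
```

    Fewer than `threshold` shares reveal nothing about the secret, which is exactly the property §10.3 relies on when surviving clients reconstruct a dropped client's seed.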

  4. K. Bonawitz, H. Eichner, W. Grieskamp, and others, Towards Federated Learning at Scale: System Design, 2019

    Google's engineering perspective on production federated learning. Describes Bonawitz's deployment in Gboard, including dropout-handling parameters.

  5. L. Zhu, Z. Liu, and S. Han, Deep Leakage from Gradients, 2019

    The gradient-inversion attack motivating the privacy guarantees of this chapter.

  6. H. B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, Communication-Efficient Learning of Deep Networks from Decentralized Data, 2017

    FedAvg paper — necessary background for the FL context of this chapter.

  7. W. Diffie and M. E. Hellman, New Directions in Cryptography, 1976

    The foundational paper on Diffie–Hellman key exchange, used in §10.2's seed derivation step.
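    The exchange itself is two modular exponentiations per party. A toy example with deliberately tiny parameters — real deployments use large standardized groups or elliptic curves:

```python
# Tiny-parameter Diffie–Hellman, for illustration only
p, g = 2087, 5                  # toy prime modulus and base

a_secret, b_secret = 123, 456   # each party's private exponent
A = pow(g, a_secret, p)         # public values, exchanged in the clear
B = pow(g, b_secret, p)

shared_a = pow(B, a_secret, p)  # party A computes g^(ab) mod p
shared_b = pow(A, b_secret, p)  # party B computes the same value
assert shared_a == shared_b     # both sides now hold the same pairwise seed
```

    In the protocol of §10.2, this shared value is what each client pair feeds into the PRG to derive its pairwise mask.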

  8. O. Goldreich, Foundations of Cryptography, Vol. 2, Cambridge University Press, 2004

    Standard reference for the honest-but-curious adversary model and simulation-based proofs of protocol security. Background for §10.1's threat model.

  9. P. Kairouz, H. B. McMahan, B. Avent, A. Bellet, and others, Advances and Open Problems in Federated Learning, 2021

    Comprehensive FL survey including secure aggregation and its variants. Excellent secondary reference.

  10. J. So, B. Güler, and A. S. Avestimehr, Byzantine-Resilient Secure Federated Learning, 2021

    Byzantine-resilient FL framework — bridge between Chapter 10's honest-but-curious model and Chapter 11's malicious adversary. Useful preview.

  11. S. Kadhe, N. Rajaraman, O. O. Koyluoglu, and K. Ramchandran, FastSecAgg: Scalable Secure Aggregation for Privacy-Preserving Federated Learning, 2020

    Predecessor to CCESA (Chapter 12). Introduces sparse-graph variants of Bonawitz with reduced communication overhead. Important context for Chapter 12's CommIT contribution.

  12. J. Geiping, H. Bauermeister, H. Dröge, and M. Moeller, Inverting Gradients — How easy is it to break privacy in federated learning?, 2020

    Strengthened gradient-inversion attacks; motivates the necessity of Bonawitz-style protocols.

Further Reading

Resources for going deeper into secure aggregation, its variants, and the underlying cryptographic primitives.

  • Secure multi-party computation foundations

    Cramer, Damgård, and Nielsen, *Secure Multiparty Computation and Secret Sharing*, Cambridge University Press, 2015

    The standard textbook on MPC and secret sharing. Provides the cryptographic foundation for understanding Bonawitz at full depth.

  • Communication-efficient secure aggregation

    Chapter 12 of this book — CCESA (CommIT contribution)

    The natural follow-on: how to break the $O(n^2)$ Bonawitz ceiling via sparse random graphs. The Caire et al. optimality theorem of §10.4 directly motivates Chapter 12's approach.

  • Byzantine-resilient secure aggregation

    Chapter 11 of this book — ByzSecAgg (CommIT contribution)

    Extends this chapter's honest-but-curious model to the malicious adversary. Combines Bonawitz with ramp secret sharing and coded computing.

  • Differential privacy in federated learning

    Abadi et al., *Deep Learning with Differential Privacy*, ACM CCS 2016

    Complementary privacy mechanism. Often combined with Bonawitz in production: SecAgg for individual privacy, DP for aggregate-level privacy.

  • Production secure-aggregation libraries

    Google TensorFlow Federated; OpenFL; NVIDIA FLARE

    Working implementations of Bonawitz with engineering details (key derivation, dropout handling, etc.). Essential for anyone deploying FL in practice.