References & Further Reading
References
- T. Jahani-Nezhad, M. A. Maddah-Ali, and G. Caire, Byzantine-Resilient Secure Aggregation for Federated Learning Based on Ramp Sharing, 2023.
The CommIT-group contribution presented in this chapter; should be read in full alongside it. Foundational reference for privacy- and Byzantine-resilient FL.
- P. Blanchard, E. M. El Mhamdi, R. Guerraoui, and J. Stainer, Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent, 2017. [Link]
The Krum algorithm for Byzantine-tolerant gradient aggregation. Foundational for §11.3's filtering step. ByzSecAgg's coded-distance trick makes Krum compatible with privacy.
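As a concrete reference point for the filtering step, Krum's selection rule fits in a few lines. The sketch below operates on plaintext gradients with illustrative names; in ByzSecAgg the same scoring is applied to coded distances rather than raw vectors.

```python
import numpy as np

def krum(gradients: np.ndarray, f: int) -> np.ndarray:
    """Return the gradient whose n - f - 2 nearest neighbours
    (in squared Euclidean distance) are closest overall."""
    n = len(gradients)
    # Pairwise squared distances between all submitted gradients.
    dists = np.sum((gradients[:, None, :] - gradients[None, :, :]) ** 2, axis=-1)
    scores = []
    for i in range(n):
        # Distances from worker i to every other worker, ascending.
        others = np.sort(np.delete(dists[i], i))
        # Krum score: sum over the n - f - 2 closest neighbours.
        scores.append(others[: n - f - 2].sum())
    return gradients[int(np.argmin(scores))]
```

Because an attacker's vector sits far from the honest cluster, its score is dominated by large distances and it is never selected (for n > 2f + 2).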
- E. M. El Mhamdi, R. Guerraoui, and S. Rouault, The Hidden Vulnerability of Distributed Learning in Byzantium, 2018. [Link]
Introduces Bulyan, Krum's stronger variant with trimmed-mean post-processing. Relevant as an alternative filter within ByzSecAgg.
- D. Yin, Y. Chen, R. Kannan, and P. Bartlett, Byzantine-Robust Distributed Learning: Towards Optimal Statistical Rates, 2018. [Link]
Trimmed-mean Byzantine aggregator with rigorous statistical guarantees. Alternative to Krum in ByzSecAgg's filtering step.
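The trimmed-mean aggregator is even simpler than Krum; a minimal coordinate-wise sketch (illustrative names, plaintext inputs):

```python
import numpy as np

def trimmed_mean(gradients: np.ndarray, f: int) -> np.ndarray:
    """Coordinate-wise trimmed mean: in each coordinate, drop the f
    largest and f smallest values, then average the remainder."""
    n = len(gradients)
    assert n > 2 * f, "need more workers than trimmed entries"
    sorted_grads = np.sort(gradients, axis=0)  # sort each coordinate independently
    return sorted_grads[f : n - f].mean(axis=0)
```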
- K. Bonawitz, V. Ivanov, B. Kreuter, et al., Practical Secure Aggregation for Privacy-Preserving Machine Learning, 2017.
The honest-but-curious baseline (Chapter 10). ByzSecAgg extends this with Byzantine resilience; the privacy guarantee is preserved.
- A. Shamir, How to Share a Secret, 1979.
Threshold secret sharing; the ramp generalization (§11.2) is its natural extension. Chapter 3 §3.2 covers the Shamir case in detail.
- G. R. Blakley and C. Meadows, Security of Ramp Schemes, 1985.
Ramp secret sharing — the $g$-fold share-size savings that makes ByzSecAgg's communication complexity attractive. Covered in Chapter 3 §3.4.
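To make the $g$-fold saving concrete, here is a toy ramp-sharing sketch over a small prime field. All names and the field size are illustrative; the construction places the $g$ secrets and $t$ random values at fixed points of a single degree-$(g+t-1)$ polynomial, so each share is one field element per $g$ secrets.

```python
import random

P = 2_147_483_647  # illustrative prime field (2^31 - 1)

def lagrange_eval(points, x):
    """Evaluate the unique polynomial through `points` at `x`, mod P."""
    total = 0
    for xi, yi in points:
        num = den = 1
        for xj, _ in points:
            if xj != xi:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def ramp_share(secrets, t, n):
    """Pack g secrets into one degree-(g+t-1) polynomial: secrets sit
    at g fixed points, t random anchors provide privacy, and the
    shares are the evaluations at x = 1..n."""
    g = len(secrets)
    base = [((-(j + 1)) % P, s % P) for j, s in enumerate(secrets)]
    base += [(n + 1 + k, random.randrange(P)) for k in range(t)]
    return [(x, lagrange_eval(base, x)) for x in range(1, n + 1)]

def ramp_reconstruct(shares, g):
    """Recover the g secrets from any g+t shares by re-evaluating the
    interpolated polynomial at the secret points."""
    return [lagrange_eval(shares, (-(j + 1)) % P) for j in range(g)]
```

Any $g+t$ shares determine the polynomial and hence all $g$ secrets, while sets of at most $t$ shares are masked by the random anchors; a production implementation would vectorize this over the chapter's chosen field.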
- Q. Yu, N. Raviv, H. Soleymani, and A. S. Avestimehr, Lagrange Coded Computing: Optimal Design for Resiliency, Security, and Privacy, 2019. [Link]
Lagrange Coded Computing — the framework for quadratic-function computation on ramp shares used in §11.3's coded distance computation.
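The key fact LCC exploits is that evaluating a quadratic function pointwise on shares yields shares of the result, at twice the polynomial degree. A toy illustration with plain Shamir sharing (names and the prime are illustrative; LCC generalizes this to batched ramp shares):

```python
import random

P = 2_147_483_647  # illustrative prime field

def share(secret, t, n):
    """Shamir-share `secret` with a random degree-t polynomial f, f(0) = secret."""
    coeffs = [secret % P] + [random.randrange(P) for _ in range(t)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    total = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (P - xj) % P       # (0 - xj) mod P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def local_product(shares_a, shares_b):
    """Each worker multiplies its two shares locally: the results are
    shares of a degree-2t polynomial with constant term a*b, so any
    2t + 1 of them reconstruct the product."""
    return [(x, (ya * yb) % P) for (x, ya), (_, yb) in zip(shares_a, shares_b)]
```

This degree-doubling is why a squared distance (a quadratic in the model updates) can be computed on shares without revealing the updates themselves, at the cost of a higher reconstruction threshold.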
- M. Fang, X. Cao, J. Jia, and N. Z. Gong, Local Model Poisoning Attacks to Byzantine-Robust Federated Learning, 2020. [Link]
Practical demonstrations of Byzantine attacks on FL. Motivates the engineering importance of protocols like ByzSecAgg.
- A. N. Bhagoji, S. Chakraborty, P. Mittal, and S. Calo, Analyzing Federated Learning Through an Adversarial Lens, 2019. [Link]
Targeted model-poisoning attacks in FL. Further motivation for Byzantine-resilient protocols.
- J. So, B. Güler, and A. S. Avestimehr, Byzantine-Resilient Secure Federated Learning, 2021.
Predecessor to ByzSecAgg. Combines Bonawitz et al.'s secure aggregation with Byzantine-tolerant aggregation, at higher overhead.
- Y. Sun, S. Garg, et al., FedSeg: Segmented Federated Learning for Byzantine Robustness, 2021. [Link]
Segmented-filtering alternative to Krum within ByzSecAgg-like frameworks. Useful comparison for the filtering-step design choices.
- R. C. Merkle, A Digital Signature Based on a Conventional Encryption Function, 1988.
Merkle trees — the vector-commitment primitive used in ByzSecAgg's Phase 1. Classical cryptographic reference.
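For orientation, the root computation fits in a few lines. This is a minimal sketch of the root only; the opening proofs that ByzSecAgg's Phase 1 commitment also requires are omitted.

```python
import hashlib

def merkle_root(leaves: list[bytes]) -> bytes:
    """Hash the leaves, then repeatedly hash adjacent pairs until a
    single root remains; an odd last node is paired with itself."""
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]
```

Changing any leaf changes the root, which is what lets a server commit to a vector of shares and later be held to it.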
Further Reading
Resources for going deeper into Byzantine-resilient federated learning and the ByzSecAgg framework.
ByzSecAgg implementation guide
CommIT group GitHub repository (reference implementation in Python)
The TU Berlin CommIT group maintains a reference implementation with engineering notes, test vectors, and deployment case studies.
Byzantine attacks — comprehensive survey
Lyu et al., *Threats to Federated Learning: A Survey*, arXiv 2020
Systematic survey of Byzantine attack strategies and defenses. Useful for understanding what ByzSecAgg defends against and where it has limits.
Verifiable secret sharing and vector commitments
Chor, Goldwasser, Micali, Awerbuch, *Verifiable Secret Sharing and Achieving Simultaneity in the Presence of Faults*, FOCS 1985
Foundational work on verifiable secret sharing; vector commitments in §11.2 build on this tradition.
Composing Byzantine FL with differential privacy
Nguyen et al., *Private Federated Learning with Byzantine Robustness*, IEEE TIFS 2024
Follow-on work exploring the DP + Byzantine composition. Relevant for deployments needing both guarantees.
Information-theoretic lower bounds for FL
Kairouz et al., *Advances and Open Problems in Federated Learning*, Foundations and Trends in Machine Learning, 2021, Chapter 5
Comprehensive lower-bound analysis for FL privacy and robustness. Provides context for ByzSecAgg's optimality claims.