References & Further Reading
References
- Y. Chen, A. R. Elkordy, and A. S. Avestimehr, A Survey on Information-Theoretic Approaches to Secure and Private Federated Learning, 2023
Comprehensive review of the information-theoretic frontier of secure FL. Maps out the open problems discussed in this chapter.
- Q. Yu, M. A. Maddah-Ali, and A. S. Avestimehr, Straggler Mitigation in Distributed Optimization Through Data Encoding, 2020
Polynomial approach to non-linear coded computing. Basis for §18.1.
- J. Kosaian, K. V. Rashmi, and S. Venkataraman, Parity Models: Erasure-Coded Resilience for Prediction Serving Systems, 2019
Learned coded computing for non-linear functions. Referenced in §18.1.
- S. Dutta, Z. Bai, T. Yun, G. Suh, and P. Grover, Short-Dot Products and Hybrid Coded Computing, 2020
Hybrid coded-uncoded schemes for non-linear coded computing. Referenced in §18.1.
- T. Jahani-Nezhad, M. A. Maddah-Ali, S. Li, and G. Caire, Byzantine-Resilient Secure Aggregation for Federated Learning With Consistency Guarantees, 2022
ByzSecAgg: joint privacy and Byzantine robustness. Referenced in §18.2.
- P. Kairouz et al., Advances and Open Problems in Federated Learning, 2021
Comprehensive FL survey. Open problems discussed in §18.2 and §18.4.
- P. Blanchard, E. M. E. Mhamdi, R. Guerraoui, and J. Stainer, Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent, 2017
Introduces Krum and aggregation-level Byzantine resilience. Referenced in §18.2; see the sketch after this list.
- X. Lian, C. Zhang, H. Zhang, C.-J. Hsieh, W. Zhang, and J. Liu, Can Decentralized Algorithms Outperform Centralized Algorithms? A Case Study for Decentralized Parallel Stochastic Gradient Descent, 2017
Foundational D-SGD paper. Basis for §18.3; see the sketch after this list.
- A. Lalitha, S. Shekhar, T. Javidi, and F. Koushanfar, Fully Decentralized Federated Learning, 2018
Sparse D-SGD with graph-based gossip. Referenced in §18.3.
- R. Bommasani et al., On the Opportunities and Risks of Foundation Models, 2021
Survey of modern ML landscape (foundation models, transformers, MoE). Motivates the §18.4 frontiers.
- Y. Lu, C. Jia, K. Naouss, and P. Torr, Coded Attention for Transformer Acceleration, 2023
Early work on coded transformer attention. §18.4 frontier.
- N. Shazeer et al., Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer, 2017
The foundational MoE paper. Referenced in §18.4 on coded MoE.
- P. Lewis et al., Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks, 2020
The RAG paper. Referenced in §18.4 for the PIR connection.
- S. Song and M. Hayashi, Capacity of Quantum Private Information Retrieval with Multiple Servers, 2019
Quantum PIR capacity. Referenced in §18.4.
- M. Allaix, S. Song, L. Holzbaur, T. Pllaha, M. Hayashi, and C. Hollanti, On the Capacity of Quantum Private Information Retrieval From MDS-Coded and Colluding Servers, 2022
Quantum advantage in $T$-colluding coded-storage PIR. Source of Theorem 18.4.1.
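Since Krum and D-SGD recur throughout §18.2 and §18.3, two minimal sketches follow. First, the Krum selection rule as described by Blanchard et al. (2017): each submitted gradient is scored by its summed squared distance to its closest peers, and the best-scoring gradient is taken as the aggregate. This is an illustrative NumPy reconstruction from the paper's description; the function name `krum` and its interface are ours, not from the cited work.

```python
import numpy as np

def krum(gradients, f):
    """Krum selection rule (after Blanchard et al., 2017).

    Each of the n submitted gradients is scored by the sum of squared
    L2 distances to its n - f - 2 nearest other gradients; the gradient
    with the lowest score is returned. Requires n > 2f + 2.
    """
    n = len(gradients)
    G = np.stack(gradients)  # shape (n, d)
    # Pairwise squared distances between all submitted gradients.
    diffs = G[:, None, :] - G[None, :, :]
    dists = np.einsum("ijk,ijk->ij", diffs, diffs)
    scores = np.empty(n)
    for i in range(n):
        d = np.sort(dists[i])
        # d[0] is the zero distance to itself; keep the n - f - 2 closest others.
        scores[i] = d[1 : n - f - 1].sum()
    return G[int(np.argmin(scores))]
```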
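Second, one synchronous round of D-SGD from Lian et al. (2017): a gossip-averaging step over a doubly stochastic mixing matrix $W$, followed by a local stochastic-gradient step. Again a sketch, not the paper's code; `dsgd_step` and the dense-matrix formulation are illustrative, since in practice row $i$ of $W$ is supported only on node $i$'s neighbours and the product is realized by local exchanges rather than a global operation.

```python
import numpy as np

def dsgd_step(X, W, local_grads, lr):
    """One synchronous D-SGD round (after Lian et al., 2017).

    X:           (n, d) current parameters, one row per node.
    W:           (n, n) doubly stochastic mixing matrix for the graph.
    local_grads: (n, d) stochastic gradients computed at each node.

    Each node averages its neighbours' parameters (via W), then takes
    a local SGD step; no parameter server is involved.
    """
    return W @ X - lr * local_grads
```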
Further Reading
Resources for navigating the open frontiers of secure and distributed computing.
- Information-theoretic FL frontier: Chen, Elkordy, and Avestimehr, FnT-CIT 2023.
The most comprehensive recent survey of the information-theoretic frontier. Essential for finding current research programs.
- Federated learning open problems: Kairouz et al., FTML 2021.
A field-wide view of open problems across FL. Complements this chapter's more information-theoretic perspective.
- Decentralized FL: Lian et al., NeurIPS 2017 (D-SGD).
Foundational D-SGD paper. Start here for decentralized FL research.
- Quantum information theory: Wilde, *Quantum Information Theory*, 2017.
Book-length treatment of quantum information theory. Useful for understanding quantum PIR in depth.
- Modern ML architectures: Bommasani et al., *On the Opportunities and Risks of Foundation Models*, 2021.
Surveys the computational patterns to which coded-computing theory must adapt.
- The CommIT portfolio: G. Caire and collaborators, 2017–2024.
The five CommIT contributions presented in this book, and the ongoing research program that produces them. Follow Caire's subsequent publications for new developments.