References & Further Reading
References
- Q. Yu, N. Raviv, H. Soleymani, and A. S. Avestimehr, Lagrange Coded Computing: Optimal Design for Resiliency, Security, and Privacy, 2019. [Link]
The headline paper introducing Lagrange Coded Computing — unifies coded-computing schemes for arbitrary multivariate polynomial functions. Essential reading for this chapter.
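The paper's core construction can be sketched in a few lines: encode the data blocks with a Lagrange interpolating polynomial, let each worker apply $f$ to one evaluation, and decode by re-interpolating. A minimal numerical sketch, using exact rational arithmetic over the rationals rather than the paper's finite-field setting (the data, degrees, and evaluation points are all illustrative):

```python
# Minimal Lagrange Coded Computing sketch (resiliency only, scalar data,
# exact rational arithmetic) -- illustrative, not the paper's finite-field
# construction.
from fractions import Fraction

def lagrange_eval(points, values, z):
    """Evaluate the interpolating polynomial through (points, values) at z."""
    total = Fraction(0)
    for j, (xj, yj) in enumerate(zip(points, values)):
        term = Fraction(yj)
        for m, xm in enumerate(points):
            if m != j:
                term *= Fraction(z - xm, xj - xm)
        total += term
    return total

f = lambda x: x * x + 1          # polynomial workload, degree d = 2
data = [2, 5, 7]                 # K = 3 input blocks (scalars here)
betas = [0, 1, 2]                # interpolation points carrying the data
alphas = [3, 4, 5, 6, 7, 8]     # evaluation points for N = 6 workers

# Encode: worker i receives u(alpha_i), where u interpolates the data.
shares = [lagrange_eval(betas, data, a) for a in alphas]

# Compute: each worker applies f locally to its single share.
results = [f(s) for s in shares]

# Decode: f(u(z)) has degree d*(K-1) = 4, so any 5 of the 6 results
# suffice; here the sixth worker is treated as a straggler.
decoded = [lagrange_eval(alphas[:5], results[:5], b) for b in betas]
assert decoded == [f(x) for x in data]
```

Any $d(K-1)+1 = 5$ of the 6 worker results suffice, which is LCC's recovery threshold for pure resiliency (no security or privacy).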
- Q. Yu, M. A. Maddah-Ali, and A. S. Avestimehr, Entangled Polynomial Codes for Coded Distributed Matrix Multiplication, 2020.
Generalization of polynomial codes with entangled exponents, achieving recovery threshold $K = p + q - 1$ for convolution-like structured operations. Main reference for §8.2.
- Q. Yu, M. A. Maddah-Ali, and A. S. Avestimehr, Polynomial Codes: An Optimal Design for High-Dimensional Coded Matrix Multiplication, 2017. [Link]
The Chapter 5 polynomial code — background for this chapter's extensions.
- S. Dutta, V. Cadambe, and P. Grover, Short-Dot: Computing Large Linear Transforms Distributedly Using Coded Short Dot Products, 2019.
Computes large linear transforms distributedly via coded short dot products. Engineering background for §8.1's convolution treatment.
- H. Soleymani, M. Aliasgari, and S. El Rouayheb, Privacy for Free? Analysis of Coded Private Matrix Multiplication with Straggler Exploitation, 2021.
$T$-private LCC — combines LCC with privacy primitives. Background for Chapters 10–11's secure-aggregation framework.
- A. Solanki, M. Cardone, and S. Mohajer, Byzantine-Resilient Distributed Coded Matrix Multiplication, 2020.
Byzantine-robust LCC. Predecessor of ByzSecAgg (Chapter 11 — CommIT contribution).
- M. Fahim, V. R. Cadambe, and P. Grover, Numerically Stable Polynomially-Coded Computation, 2021.
Addresses floating-point conditioning issues in polynomial codes. Relevant for practical LCC deployment.
- K. Lee, C. Suh, and K. Ramchandran, High-Dimensional Coded Matrix Multiplication, 2017.
Earlier MDS-coded matrix multiplication. Historical context for the evolution leading to LCC.
- K. Wan, D. Tuninetti, M. Ji, G. Caire, and P. Piantanida, A Framework for Coded Data Shuffling in Distributed ML Training, 2021.
CommIT-group coded-shuffling result (Chapter 7), included as a forward reference because the framework extends naturally to tensor-shuffle operations.
- S. Li and A. S. Avestimehr, Coded Computing: Mitigating Fundamental Bottlenecks in Large-Scale Distributed Computing and Machine Learning, 2020.
Comprehensive survey covering LCC, entangled polynomial codes, and the extensions treated in this chapter.
- M. Dahl, J. Mancuso, Y. Dupis, B. Decoste, M. Giraud, I. Livingstone, J. Patriquin, and G. Uhma, Private Machine Learning in TensorFlow Using Secure Computation, 2019. [Link]
TF-Encrypted framework — a research-level integration of cryptographic MPC with ML. Example of the hybrid coded + MPC approach discussed in §8.4.
Further Reading
Resources for going deeper into coded convolutions, tensor operations, and the non-polynomial frontier.
Tensor contractions — modern framework
Liu, Yu, Avestimehr, *Coded Tensor Computation*, IEEE T-IT 2022
Tensor-specific coded-computing results extending matrix-multiplication codes to higher-order contractions. Relevant for attention-mechanism workloads in Transformers.
Secure MPC for ML
Evans, Kolesnikov, Rosulek, *A Pragmatic Introduction to Secure Multi-Party Computation*, FnT-PS 2018
Comprehensive introduction to the cryptographic tools for non-polynomial operations (garbled circuits, SPDZ, BGW). Relevant for the hybrid approaches discussed in §8.4.
Privacy-preserving deep learning
Mohassel & Zhang, *SecureML: A System for Scalable Privacy-Preserving Machine Learning*, IEEE S&P 2017
Describes a production-oriented privacy-preserving deep-learning system using polynomial approximations for non-polynomial operations. Connects §8.4's polynomial-approximation strategy to real engineering choices.
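To make the polynomial-approximation strategy concrete, here is a generic least-squares polynomial fit to the sigmoid. The degree, fitting interval, and fitting method are illustrative assumptions, not SecureML's actual construction:

```python
# Generic polynomial approximation of a non-polynomial activation, in the
# spirit of the strategy discussed in §8.4. Degree and interval are
# illustrative choices, not SecureML's construction.
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def polyfit_ls(xs, ys, degree):
    """Least-squares polynomial fit via normal equations + Gaussian elim."""
    n = degree + 1
    # Normal equations A c = b with A[i][k] = sum of x^(i+k).
    A = [[sum(x ** (i + k) for x in xs) for k in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[p] = A[p], A[col]
        b[col], b[p] = b[p], b[col]
        for r in range(col + 1, n):
            fac = A[r][col] / A[col][col]
            for k in range(col, n):
                A[r][k] -= fac * A[col][k]
            b[r] -= fac * b[col]
    c = [0.0] * n
    for i in reversed(range(n)):
        c[i] = (b[i] - sum(A[i][k] * c[k] for k in range(i + 1, n))) / A[i][i]
    return c  # coefficients, lowest degree first

xs = [x / 10 for x in range(-20, 21)]          # fit on [-2, 2]
coeffs = polyfit_ls(xs, [sigmoid(x) for x in xs], degree=3)
poly = lambda t: sum(c * t ** k for k, c in enumerate(coeffs))
assert max(abs(poly(x) - sigmoid(x)) for x in xs) < 0.05
```

Once the activation is a low-degree polynomial, the whole layer becomes a polynomial function of the inputs and so falls back inside the coded-computing framework.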
Approximate and noisy coded computing
Jahani-Nezhad & Maddah-Ali, *Berrut Approximated Coded Computing: Straggler Resistance Beyond Polynomial Computing*, IEEE T-IT 2022
Explicit construction for approximate coded computing that handles non-polynomial operations via rational-function approximations. Directly addresses the non-polynomial barrier of §8.4.
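The rational-function primitive underlying this approach is Berrut's barycentric interpolant, which is pole-free on the real line and so can approximate non-polynomial functions such as ReLU. A small sketch of the interpolant alone, not the full coded scheme; the node count and test point are illustrative:

```python
# Sketch of the rational-interpolation primitive behind Berrut Approximated
# Coded Computing: Berrut's barycentric interpolant (weights (-1)^j) has no
# real poles, so it can approximate non-polynomial worker functions.
import math

def berrut_interpolate(nodes, values, x):
    """Berrut's barycentric rational interpolant at ordered distinct nodes."""
    num = den = 0.0
    for j, (xj, fj) in enumerate(zip(nodes, values)):
        if x == xj:              # exact hit on a node
            return fj
        w = (-1) ** j / (x - xj)
        num += w * fj
        den += w
    return num / den

relu = lambda t: max(t, 0.0)     # a non-polynomial target function

# Chebyshev points on [-1, 1], a standard stable node choice.
n = 20
nodes = [math.cos(math.pi * j / n) for j in range(n + 1)]
values = [relu(z) for z in nodes]

# The interpolant reproduces node values exactly and tracks relu
# away from the kink at 0.
approx = berrut_interpolate(nodes, values, 0.6)
```

In the coded-computing setting, workers evaluate the target function at encoded points and the master recovers an approximation of the desired outputs by rational interpolation of the returned values.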