Chapter Summary
Key Points
1. Coded convolution achieves a recovery threshold linear in the number of partitions. With the two inputs split into m and n blocks, the entangled polynomial code exploits convolution's reduced output structure (only m + n - 1 distinct coefficient blocks, not mn pairwise products) to beat the standard polynomial code's threshold of mn at the same per-worker storage.
2. Lagrange Coded Computing (LCC) unifies the framework for arbitrary multivariate polynomials. The recovery threshold is (K - 1) deg f + 1, where deg f is the function's total degree and K is the number of input partitions. Polynomial codes, entangled polynomial codes, and gradient coding are all special cases.
3. Specialized schemes beat LCC for common operations. For matrix multiplication, polynomial codes achieve a threshold of mn, while a generic degree-2 LCC embedding over the mn input pairs achieves 2mn - 1. For convolution, entangled codes achieve m + n - 1, again well below the generic LCC threshold of 2mn - 1. Use specialized schemes when the operation is known; use LCC when generality is needed (higher-degree polynomials, unusual structures).
4. The non-polynomial barrier is the current frontier. Coded computing handles polynomial operations optimally but cannot exactly compute non-polynomial functions (ReLU, softmax, exponentials, cross-entropy). Workarounds include polynomial approximation, hybrid coding (polynomial layers coded, non-polynomial layers replicated), and cryptographic MPC for non-polynomial operations.
5. Part II closes with a mature framework for polynomial distributed ML: polynomial codes (Ch. 5), gradient coding (Ch. 6), coded shuffling (Ch. 7), and entangled/Lagrange coded computing (Ch. 8). Each class of polynomial computation has an optimal scheme with matching achievability and converse bounds. Production adoption remains limited; research continues on the non-polynomial frontier and on composition with privacy and Byzantine-robustness primitives.
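The threshold comparisons above can be tabulated with a short script. This is an illustrative sketch, assuming the standard formulas: mn for the polynomial code, m + n - 1 for the entangled convolution code, and (K - 1) deg f + 1 for LCC; the function names are mine, not from the chapter.

```python
# Recovery thresholds for computing with two inputs split into
# m and n blocks (assumed standard formulas, for illustration).

def polynomial_code_threshold(m: int, n: int) -> int:
    # Polynomial code for matrix multiplication: one distinct
    # product block per (i, j) pair, so threshold m * n.
    return m * n

def entangled_convolution_threshold(m: int, n: int) -> int:
    # Convolution has only m + n - 1 distinct coefficient blocks.
    return m + n - 1

def lcc_threshold(num_partitions: int, total_degree: int) -> int:
    # Lagrange Coded Computing: (K - 1) * deg f + 1.
    return (num_partitions - 1) * total_degree + 1

m, n = 4, 4
print(polynomial_code_threshold(m, n))        # 16
print(entangled_convolution_threshold(m, n))  # 7
print(lcc_threshold(m * n, 2))                # 31 (degree-2 embedding)
```

Even at this small scale the gap is visible: the specialized convolution code needs 7 results where a generic degree-2 LCC embedding needs 31.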
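One of the workarounds listed for the non-polynomial barrier, polynomial approximation, can be sketched with a least-squares fit. This is a toy illustration of the idea, not a scheme from the chapter: a degree-2 surrogate for ReLU makes the layer a polynomial operation (at the cost of approximation error and a higher total degree for the coded computation).

```python
import numpy as np

# Fit a degree-2 polynomial to ReLU on [-1, 1] by least squares.
xs = np.linspace(-1.0, 1.0, 201)
relu = np.maximum(xs, 0.0)

coeffs = np.polyfit(xs, relu, deg=2)   # highest-degree coefficient first
approx = np.polyval(coeffs, xs)

max_err = np.max(np.abs(approx - relu))
print(f"max |ReLU(x) - p(x)| on [-1, 1]: {max_err:.3f}")
```

The surrogate p(x) is exactly computable by LCC-style coding, while the approximation error it introduces is the price of crossing the non-polynomial barrier.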
Looking Ahead
Part III begins with Chapter 9's overview of federated learning, the paradigm where the three axes of the golden thread (privacy, robustness, efficiency) most urgently meet. Chapter 10 treats secure aggregation (the Bonawitz et al. protocol plus the CommIT optimality result). Chapter 11 carries the CommIT-group ByzSecAgg result, which builds directly on the polynomial-code, gradient-coding, and ramp-secret-sharing primitives of Chapters 3, 5, 6, and 8. Chapter 12 closes Part III with the CCESA sparse-graph construction, another CommIT contribution. The coded-computing framework of Part II is the foundation on which Part III builds.