Chapter Summary

Key Points

  1. Coded convolution achieves recovery threshold K = p + q - 1, linear in the number of partitions. The entangled polynomial code exploits convolution's reduced output structure (only p + q - 1 distinct coefficients, not pq) to beat the standard polynomial code's K = pq at the same per-worker storage.
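
The threshold can be seen concretely. In the minimal sketch below (not the book's construction; the scalar partitions, worker evaluation points, and helper names are illustrative assumptions), each operand is encoded as a polynomial with its partitions as coefficients, workers multiply the two evaluations at their point, and the master interpolates the degree-(p + q - 2) product polynomial from any p + q - 1 surviving results. Its coefficients are exactly the convolution outputs.

```python
from fractions import Fraction

def interpolate_coeffs(points):
    """Exact coefficients (low to high) of the unique polynomial through `points`."""
    n = len(points)
    coeffs = [Fraction(0)] * n
    for i, (xi, yi) in enumerate(points):
        basis = [Fraction(1)]          # Lagrange basis L_i, expanded in powers of x
        denom = Fraction(1)
        for j, (xj, _) in enumerate(points):
            if j == i:
                continue
            denom *= xi - xj
            new = [Fraction(0)] * (len(basis) + 1)
            for k, c in enumerate(basis):   # multiply basis by (x - xj)
                new[k + 1] += c
                new[k] -= xj * c
            basis = new
        for k in range(n):
            coeffs[k] += Fraction(yi) * basis[k] / denom
    return coeffs

def poly_eval(coeffs, x):
    return sum(c * x ** k for k, c in enumerate(coeffs))

# Two operands, split into p and q scalar partitions (illustrative values).
a = [3, 1, 4]                      # p = 3
b = [1, 5]                         # q = 2
p, q = len(a), len(b)
K = p + q - 1                      # recovery threshold = 4

# Worker t evaluates both generator polynomials at its point and multiplies.
n_workers = 6                      # more workers than K -> straggler tolerance
results = {t: poly_eval(a, t) * poly_eval(b, t) for t in range(1, n_workers + 1)}

# Any K results determine the degree-(p+q-2) product polynomial, whose
# coefficients are the p + q - 1 convolution outputs.
survivors = [1, 3, 4, 6]           # pretend workers 2 and 5 straggled
recovered = interpolate_coeffs([(t, results[t]) for t in survivors])
expected = [sum(a[i] * b[m - i] for i in range(p) if 0 <= m - i < q)
            for m in range(K)]
assert [int(c) for c in recovered] == expected
```

Exact rational arithmetic is used here only to make the interpolation check exact; a production scheme would work over a finite field.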

  2. Lagrange Coded Computing unifies the framework for arbitrary multivariate polynomials. The recovery threshold is K_rec = d_f(K - 1) + 1, where d_f is the function's total degree and K is the number of input partitions. Polynomial codes, entangled polynomial codes, and gradient coding are all special cases.
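
A scalar sketch of the LCC pipeline, under illustrative assumptions (the inputs, the degree-2 test function f, and the evaluation points alpha/beta are all made up for the demo): the inputs are threaded onto a degree-(K - 1) Lagrange polynomial u(z), workers evaluate f(u(beta_t)), and since f(u(z)) has degree d_f(K - 1), any K_rec = d_f(K - 1) + 1 results let the master interpolate it and read off f(X_k) = f(u(alpha_k)).

```python
from fractions import Fraction

def lagrange_eval(points, z):
    """Evaluate the unique polynomial through `points` at z (exact arithmetic)."""
    total = Fraction(0)
    for xi, yi in points:
        term = Fraction(yi)
        for xj, _ in points:
            if xj != xi:
                term *= Fraction(z - xj, xi - xj)
        total += term
    return total

f = lambda x: x * x + 1            # illustrative polynomial, total degree d_f = 2
X = [2, 7, 5]                      # K = 3 input partitions (scalars for the demo)
K, d_f = len(X), 2
K_rec = d_f * (K - 1) + 1          # recovery threshold = 5

alphas = [0, 1, 2]                 # points carrying the inputs: u(alpha_k) = X_k
encode = lambda z: lagrange_eval(list(zip(alphas, X)), z)

# Workers evaluate f on encoded points beta_t, disjoint from the alphas.
betas = list(range(3, 10))         # 7 workers, threshold 5
results = {b: f(encode(b)) for b in betas}

# f(u(z)) has degree d_f * (K - 1) = 4, so any 5 worker results determine it.
survivors = [3, 5, 6, 8, 9]        # two stragglers tolerated
pts = [(b, results[b]) for b in survivors]
decoded = [lagrange_eval(pts, a) for a in alphas]
assert decoded == [f(x) for x in X]
```

Setting f to a bilinear matrix product recovers the (entangled) polynomial codes as the special cases the text mentions.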

  3. Specialized schemes beat LCC for common operations. For matrix multiplication, polynomial codes achieve K = pq while LCC achieves 2pq - 1. For convolution, entangled codes achieve K = p + q - 1 while LCC achieves 2(p + q - 1) - 1. Use a specialized scheme when the operation is known; use LCC when generality is needed (higher-degree polynomials, unusual structures).
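
The gap is easy to tabulate. The one-liners below just restate the thresholds from the text as functions of the partition parameters, to make the "specialized beats general" comparison checkable at a glance:

```python
# Recovery thresholds as stated in the text.
poly_code      = lambda p, q: p * q                  # matrix multiplication, specialized
lcc_matmul     = lambda p, q: 2 * p * q - 1          # LCC on the same degree-2 task
entangled_conv = lambda p, q: p + q - 1              # convolution, specialized
lcc_conv       = lambda p, q: 2 * (p + q - 1) - 1    # LCC on convolution

for p, q in [(2, 2), (4, 8), (16, 16)]:
    # The specialized scheme is strictly cheaper at every partition size.
    assert poly_code(p, q) < lcc_matmul(p, q)
    assert entangled_conv(p, q) < lcc_conv(p, q)
```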

  4. The non-polynomial barrier is the current frontier. Coded computing handles polynomial operations optimally but cannot exactly compute non-polynomial functions (ReLU, softmax, exponentials, cross-entropy). Workarounds include polynomial approximation, hybrid coding (polynomial layers coded, non-polynomial layers replicated), and cryptographic MPC for the non-polynomial operations.
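
The first workaround can be sketched: replace ReLU with a low-degree least-squares polynomial surrogate on a bounded interval, which a polynomial-coded scheme can then evaluate exactly. The fitting routine, degree, and interval below are illustrative choices, not a construction from the book:

```python
def polyfit_ls(xs, ys, degree):
    """Least-squares polynomial fit via normal equations (pure Python)."""
    n = degree + 1
    # Normal equations A^T A c = A^T y for the Vandermonde matrix A.
    ata = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    aty = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, n):
            factor = ata[r][col] / ata[col][col]
            for c in range(col, n):
                ata[r][c] -= factor * ata[col][c]
            aty[r] -= factor * aty[col]
    coeffs = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = aty[r] - sum(ata[r][c] * coeffs[c] for c in range(r + 1, n))
        coeffs[r] = s / ata[r][r]
    return coeffs                      # low to high

relu = lambda x: max(x, 0.0)
xs = [i / 50 for i in range(-50, 51)]  # sample the interval [-1, 1]
coeffs = polyfit_ls(xs, [relu(x) for x in xs], degree=2)

approx = lambda x: sum(c * x ** k for k, c in enumerate(coeffs))
max_err = max(abs(approx(x) - relu(x)) for x in xs)
assert max_err < 0.15   # crude, but now a degree-2 polynomial -> codable
```

The surrogate trades exactness for codability: the approximation error (worst near x = 0 for ReLU) propagates through training, which is why hybrid coding and MPC remain the alternatives for accuracy-critical layers.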

  5. Part II closes with a mature framework for polynomial distributed ML: polynomial codes (Ch. 5), gradient coding (Ch. 6), coded shuffling (Ch. 7), and entangled/Lagrange coded computing (Ch. 8). Each class of polynomial computation has an optimal scheme with matching achievability and converse. Production adoption remains limited; research continues on the non-polynomial frontier and on composition with privacy and Byzantine-robustness primitives.

Looking Ahead

Part III begins with Chapter 9's overview of federated learning, the paradigm where the three axes of the golden thread (privacy, robustness, efficiency) most urgently meet. Chapter 10 treats secure aggregation (the Bonawitz et al. protocol and the CommIT optimality result). Chapter 11 carries the CommIT-group ByzSecAgg result, which builds directly on the polynomial-code, gradient-coding, and ramp-secret-sharing primitives of Chapters 3, 5, 6, and 8. Chapter 12 closes Part III with the CCESA sparse-graph construction, another CommIT contribution. The coded-computing framework of Part II is the foundation on which Part III builds.