Part 2: Coded Computing

Chapter 6: Coded Gradient Computation

Advanced · ~200 min

Learning Objectives

  • Formulate distributed (synchronous) gradient descent and isolate the gradient-aggregation step
  • Construct gradient coding (Tandon–Lei–Dimakis–Karampatziakis) and prove its K-of-N recovery guarantee; a minimal construction is sketched just after this list
  • Quantify the per-worker storage and computation cost of gradient coding
  • Develop approximate gradient coding (Charles–Papailiopoulos–Ellenberg) and characterize the rate–accuracy tradeoff, illustrated in the second sketch after this list
  • Compare coded schemes with uncoded SGD, dropout-only baselines, and gradient sparsification in terms of straggler tolerance and convergence
  • Recognize when coded gradient computation pays off in production federated learning
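
To make the exact-recovery objective concrete, here is a minimal NumPy sketch of fractional repetition, one of the simplest constructions in the Tandon–Lei–Dimakis–Karampatziakis family. It assumes N workers, a straggler budget s with (s + 1) dividing N, and one data partition per worker; the function names are illustrative, not taken from the chapter.

```python
import numpy as np

def fractional_repetition_code(n_workers, s, grads):
    # Exact gradient code via fractional repetition; requires (s + 1) | n_workers.
    # Workers are split into groups of size s + 1, and every worker in a group
    # sends the SAME message: the sum of the group's partial gradients.
    # grads[p] is the gradient computed on data partition p.
    group = s + 1
    assert n_workers % group == 0
    return {w: sum(grads[p] for p in range((w // group) * group,
                                           (w // group + 1) * group))
            for w in range(n_workers)}

def decode(responses, n_workers, s):
    # With at most s stragglers, every group of s + 1 workers has at least
    # one responder, and any single responder carries its group's whole sum.
    group = s + 1
    parts = []
    for g in range(n_workers // group):
        w = next(w for w in range(g * group, (g + 1) * group) if w in responses)
        parts.append(responses[w])
    return sum(parts)

# Toy check: N = 6 workers, tolerate s = 2 stragglers.
rng = np.random.default_rng(0)
grads = [rng.standard_normal(4) for _ in range(6)]
sent = fractional_repetition_code(6, 2, grads)
alive = {w: sent[w] for w in (0, 3, 4, 5)}  # workers 1 and 2 straggle
assert np.allclose(decode(alive, 6, 2), sum(grads))
```

Any N − s responses suffice (here K = N − s = 4 of N = 6), which is exactly the K-of-N recovery guarantee in the second objective; the price is that each worker computes s + 1 partial gradients instead of one, the overhead quantified in the third objective.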
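For the approximate objective, the second sketch uses a toy cyclic-replication code rather than the expander-graph constructions of Charles–Papailiopoulos–Ellenberg; the point is only to expose the rate–accuracy tradeoff. Each partition is replicated on d workers, decoding simply rescales whatever arrives, and the parameter names are again illustrative.

```python
import numpy as np

def cyclic_approx_code(n_workers, d, grads):
    # Worker w sends the sum of d consecutive partitions (mod n_workers),
    # so every partition is replicated on exactly d workers.
    return {w: sum(grads[(w + j) % n_workers] for j in range(d))
            for w in range(n_workers)}

def approx_decode(responses, d):
    # Sum whatever arrived and rescale by 1/d. Exact when all workers
    # respond; each straggler biases the estimate by 1/d of the sum of its
    # d partitions, so the error typically shrinks as replication d grows.
    return sum(responses.values()) / d

# Toy look at the rate-accuracy tradeoff: same stragglers, growing d.
rng = np.random.default_rng(1)
grads = [rng.standard_normal(4) for _ in range(12)]
true_grad = sum(grads)
for d in (1, 2, 4):
    sent = cyclic_approx_code(12, d, grads)
    alive = {w: sent[w] for w in range(12) if w not in (3, 7)}  # two stragglers
    err = np.linalg.norm(approx_decode(alive, d) - true_grad)
    print(f"d = {d}: error = {err:.3f}")
```

Here d = 1 is the uncoded baseline; increasing d buys accuracy under stragglers at the cost of proportionally more per-worker computation, which is one concrete face of the rate–accuracy tradeoff.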
