## Prerequisites & Notation

### Before You Begin
Chapter 8 closes Part II by generalizing the polynomial-code framework of Chapter 5 to richer computational primitives: convolutions, matrix-vector products at scale, and general multivariate-polynomial tensor operations. The prerequisites are the polynomial-code background of Chapter 5 and comfort with tensor notation from basic linear algebra.
- Polynomial codes (Chapter 5) (Review ch05)
  Self-check: Write the encoding polynomials and state the recovery threshold $K$.
- Gradient coding (Chapter 6) (Review ch06)
  Self-check: In what sense is gradient coding a specialization of the polynomial-code framework?
- Tensor notation and the Kronecker product (Review ch01)
  Self-check: What is the defining block structure of the Kronecker product $A \otimes B$?
- Convolution: 1D, 2D, and their polynomial representations
  Self-check: If $\mathbf{c} = \mathbf{a} * \mathbf{b}$ (linear convolution), what is the relationship between the generating polynomials $p_{\mathbf{a}}(x)$, $p_{\mathbf{b}}(x)$, and $p_{\mathbf{c}}(x)$?
- Multivariate polynomials and Lagrange interpolation (Review ch03)
  Self-check: Given a polynomial written in Lagrange form, what does the collection of values at the interpolation points tell you about its coefficients?
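Two of the self-checks above can be verified numerically. The sketch below (NumPy-based; the variable names are illustrative, not the chapter's notation) checks that linear convolution of coefficient vectors corresponds to multiplying their generating polynomials, and that a Lagrange interpolant reproduces its prescribed values at the interpolation points:

```python
import numpy as np

# Convolution self-check: linear convolution of coefficient vectors
# equals multiplication of the corresponding generating polynomials.
a = np.array([1.0, 2.0, 3.0])   # p_a(x) = 1 + 2x + 3x^2 (lowest degree first)
b = np.array([4.0, 5.0])        # p_b(x) = 4 + 5x
c = np.convolve(a, b)           # coefficients of p_a(x) * p_b(x)

x = 1.7                          # an arbitrary evaluation point
pa = np.polyval(a[::-1], x)     # np.polyval expects highest degree first
pb = np.polyval(b[::-1], x)
pc = np.polyval(c[::-1], x)
assert np.isclose(pa * pb, pc)  # p_c(x) = p_a(x) * p_b(x)

# Lagrange self-check: the degree-(K-1) interpolant through K points
# takes exactly the prescribed values at the interpolation points.
beta = np.array([0.0, 1.0, 2.0])    # interpolation points
vals = np.array([5.0, -1.0, 7.0])   # prescribed values
coeffs = np.polyfit(beta, vals, deg=len(beta) - 1)
assert np.allclose(np.polyval(coeffs, beta), vals)
```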
Notation for This Chapter
Tensor operations introduce new notation. We write tensors in bold upper-case (e.g., $\mathbf{T}$) and linear convolution as $*$. The generating polynomial of a tensor along one axis, written $p_{\mathbf{T}}(x)$, collects the slices along that axis as the coefficients of a polynomial in a single indeterminate $x$. The Lagrange Coded Computing framework uses interpolation points $\beta_j$, at which the master reads off results, and evaluation points $\alpha_i$, which are assigned to workers.
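The generating-polynomial idea can be made concrete with a short sketch (the helper `generating_poly` is illustrative, not the chapter's definition): slices of a tensor along a chosen axis become the vector- or matrix-valued coefficients of a polynomial in one indeterminate.

```python
import numpy as np

def generating_poly(T, x, axis=0):
    """Evaluate the generating polynomial of tensor T along `axis` at x:
    the slices along that axis are the coefficients of increasing powers."""
    slices = np.moveaxis(T, axis, 0)
    return sum(s * x**i for i, s in enumerate(slices))

T = np.arange(6.0).reshape(2, 3)   # an order-2 tensor (a matrix)
# Along axis 0: p_T(x) = T[0, :] + T[1, :] * x, a vector-valued polynomial.
assert np.allclose(generating_poly(T, 2.0), T[0] + 2.0 * T[1])
assert np.allclose(generating_poly(T, 0.0), T[0])
```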
| Symbol | Meaning | Introduced |
|---|---|---|
| $A$ | Input matrix (same as Chapter 5) | s01 |
| $\mathbf{T}$ | Tensor of order $d$ (generalizes matrices) | s01 |
| $*$ | Linear convolution (1D or 2D) | s01 |
| $m, n$ | Partition counts (same role as Chapter 5) | s02 |
| $K$ | Recovery threshold (minimum responses) | s02 |
| $\beta_j$ | Lagrange interpolation points at which the master evaluates the target function | s03 |
| $\alpha_i$ | Evaluation points assigned to workers | s03 |
| $f$ | Target function: a multivariate polynomial over vectors | s03 |
| $\deg f$ | Degree of the target function | s03 |
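The interplay between the interpolation points $\beta_j$ and the evaluation points $\alpha_i$ can be sketched end to end. The following is a minimal real-valued illustration, not the chapter's construction (practical Lagrange Coded Computing operates over a finite field, and the helper `lagrange_eval` and all variable names are assumptions for this sketch): the master encodes $K$ data blocks with the Lagrange polynomial through $(\beta_j, X_j)$, each worker evaluates the degree-$\deg f$ target function on one encoded input, and the master recovers every $f(X_j)$ by interpolation from any $(K-1)\deg f + 1$ responses.

```python
import numpy as np

def lagrange_eval(points, values, z):
    """Evaluate the interpolating polynomial through (points[j], values[j]) at z."""
    total = np.zeros_like(values[0])
    for j, (bj, vj) in enumerate(zip(points, values)):
        weight = np.prod([(z - bk) / (bj - bk)
                          for k, bk in enumerate(points) if k != j])
        total = total + weight * vj
    return total

K, N = 3, 8                            # data blocks, workers
f = lambda X: X @ X.T                  # degree-2 target function
threshold = (K - 1) * 2 + 1            # recovery threshold (K-1)*deg(f)+1 = 5

rng = np.random.default_rng(0)
X = [rng.standard_normal((2, 2)) for _ in range(K)]
beta = np.arange(K, dtype=float)           # interpolation points beta_j
alpha = np.arange(K, K + N, dtype=float)   # worker evaluation points alpha_i

# Encoding polynomial u(z) satisfies u(beta_j) = X_j; worker i gets u(alpha_i).
encoded = [lagrange_eval(beta, X, a) for a in alpha]
responses = [f(W) for W in encoded]        # each worker returns f(u(alpha_i))

# f(u(z)) has degree (K-1)*deg(f) = 4, so any `threshold` = 5 responses
# determine it; reading it off at beta_0 recovers f(X_1).
recovered = lagrange_eval(alpha[:threshold], responses[:threshold], beta[0])
assert np.allclose(recovered, f(X[0]))
```

The design point worth noting is that the threshold depends only on $K$ and $\deg f$, not on the number of workers $N$: adding workers adds straggler tolerance without changing how many responses the master must collect.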