Ramp Secret Sharing: Trading Security for Efficiency
Why Shrink the Shares?
Section 3.1 showed that perfect secrecy in a $t$-out-of-$n$ threshold scheme forces every share to be at least as large as the secret. That is a real cost: if we want to share a 1 GB gradient vector across federated users, every user must store a 1 GB share. In federated learning, where per-user storage is at a premium, this overhead is crippling.
The ramp variant relaxes the perfect-secrecy constraint. Instead of "any $t$ reconstruct, any $t-1$ learn nothing", we demand "any $t_2$ reconstruct, any $t_1$ learn nothing" with $t_1 < t_2$. The gap $L = t_2 - t_1$ is called the ramp width; coalitions of size between $t_1$ and $t_2$ learn a controlled, quantifiable amount about the secret. In exchange, shares become $L$ times smaller — exactly what we need for federated learning.
Ramp secret sharing is the workhorse of Chapter 11's ByzSecAgg protocol and of several CommIT-group results in later chapters. This section gives the construction and quantifies the trade-off.
Definition: $(t_1, t_2)$-Ramp Secret Sharing
A $(t_1, t_2)$-ramp secret-sharing scheme with $t_1 < t_2$ and ramp width $L = t_2 - t_1$ consists of Share and Reconstruct algorithms satisfying:
- Correctness. Any $t_2$ shares reconstruct the secret exactly (same as Shamir).
- Perfect secrecy below $t_1$. Any subset $T$ of size $|T| \le t_1$ has $I(S; \mathrm{Shares}_T) = 0$.
- Linear leakage in the ramp zone. For a subset $T$ of size $m$ with $t_1 < m < t_2$, the mutual information is $I(S; \mathrm{Shares}_T) = \frac{m - t_1}{L} H(S)$, and it scales smoothly with the coalition size between the two thresholds.
Equivalently, ramp splits the secret into $L$ pieces, each of which is Shamir-shared with a different threshold, producing a scheme with share size $|S|/L$ and a smoothly degrading security guarantee.
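A minimal sketch of the polynomial ramp construction, assuming a prime field and evaluation points $1, \dots, n$; the function names, field modulus, and evaluation points are my own illustrative choices, not the book's:

```python
import random

P = 2**31 - 1  # prime field modulus (illustrative choice)

def ramp_share(secret_chunks, t1, t2, n):
    """Share L = t2 - t1 field elements among n parties.

    Coefficient vector: the L secret chunks, then t1 uniformly random
    masking coefficients -> a degree-(t2 - 1) polynomial f.
    """
    L = t2 - t1
    assert len(secret_chunks) == L and t1 < t2 <= n
    coeffs = list(secret_chunks) + [random.randrange(P) for _ in range(t1)]

    def f(x):
        acc = 0
        for c in reversed(coeffs):   # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc

    return {i: f(i) for i in range(1, n + 1)}  # party i holds f(i)

def ramp_reconstruct(shares, t1, t2):
    """Recover the L secret chunks from any t2 shares by solving the
    Vandermonde system V * coeffs = values over F_P."""
    L = t2 - t1
    pts = list(shares.items())[:t2]
    M = [[pow(x, j, P) for j in range(t2)] + [y] for x, y in pts]
    for c in range(t2):              # Gauss-Jordan elimination mod P
        piv = next(r for r in range(c, t2) if M[r][c] % P)
        M[c], M[piv] = M[piv], M[c]
        inv = pow(M[c][c], P - 2, P)
        M[c] = [v * inv % P for v in M[c]]
        for r in range(t2):
            if r != c and M[r][c]:
                fac = M[r][c]
                M[r] = [(v - fac * w) % P for v, w in zip(M[r], M[c])]
    return [M[r][t2] for r in range(L)]  # first L coefficients = secret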
Ramp Secret Sharing
A generalization of Shamir in which the reconstruction threshold $t_2$ and the perfect-secrecy threshold $t_1$ are separated by a ramp width $L = t_2 - t_1$. Shares are $L$ times smaller than in Shamir; coalitions of intermediate size learn a fraction of the secret that grows linearly with the coalition size.
Theorem: Ramp Rate–Security Tradeoff
The $(t_1, t_2)$-ramp scheme with ramp width $L = t_2 - t_1$ satisfies:
- Correctness. Any $t_2$ shares reconstruct the full secret $\mathbf{s} \in \mathbb{F}_q^L$.
- Perfect secrecy below $t_1$. Any subset $T$ of size $|T| \le t_1$ has $I(S; \mathrm{Shares}_T) = 0$.
- Share rate. $|\text{share}| = 1$ field element, while $|S| = L$ field elements. Hence $\frac{|\text{share}|}{|S|} = \frac{1}{L}$ — an $L$-fold savings over Shamir's rate of $1$.
- Linear leakage. For coalition size $m$ ($t_1 < m < t_2$), $I(S; \mathrm{Shares}_T) = (m - t_1)\log_2 q$ bits, linear in the number of "extra" shares beyond $t_1$.
Ramp "reuses" the same degree-$(t_2 - 1)$ polynomial to encode $L$ secrets simultaneously. The perfect-secrecy guarantee is preserved for any coalition that cannot evaluate the polynomial at more than $t_1$ points — those coalitions see only a $t_1$-dimensional subspace of random variables. Once the coalition size exceeds $t_1$, each additional share reveals one more secret chunk in a controlled fashion. The $L$-fold savings in share size is why ramp is the scheme of choice for federated learning with large gradients.
Correctness
Same as Shamir: a degree-$(t_2 - 1)$ polynomial is uniquely determined by its values at $t_2$ distinct points, so Lagrange interpolation from any $t_2$ shares recovers the full coefficient vector, including the $L$ secret chunks.
Perfect secrecy below $t_1$
Any size-$t_1$ coalition observes $t_1$ evaluations — a Vandermonde-invertible linear transformation of the $t_1$ uniform random coefficients (combined with the fixed secret chunks). The secret chunks, being fixed, do not contribute to the conditional distribution of the view, which is exactly uniform over $\mathbb{F}_q^{t_1}$ — hence independent of $S$.
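To see the Vandermonde argument concretely, the sketch below (my own illustration, with fixed randomness and tiny parameters $t_1 = 2$, $t_2 = 4$) shows that a coalition of size $t_1$ cannot rule out any candidate secret: for an arbitrary alternative secret, masking coefficients exist that reproduce the coalition's view exactly.

```python
P = 2**31 - 1          # prime field modulus (illustrative choice)
t1, t2 = 2, 4          # ramp width L = 2
L = t2 - t1

def poly_eval(coeffs, x):
    acc = 0
    for c in reversed(coeffs):   # Horner evaluation mod P
        acc = (acc * x + c) % P
    return acc

def solve_mod(A, b):
    """Solve A r = b over F_P by Gauss-Jordan elimination."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        piv = next(r for r in range(c, n) if M[r][c] % P)
        M[c], M[piv] = M[piv], M[c]
        inv = pow(M[c][c], P - 2, P)
        M[c] = [v * inv % P for v in M[c]]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c]
                M[r] = [(v - f * w) % P for v, w in zip(M[r], M[c])]
    return [M[r][n] for r in range(n)]

# True sharing: secret chunks [7, 11], masking coefficients [3, 5] (fixed).
true_coeffs = [7, 11, 3, 5]
coalition = [1, 2]                       # any t1 = 2 parties
view = [poly_eval(true_coeffs, x) for x in coalition]

# Adversary's alternative hypothesis for the secret:
fake_secret = [100, 200]
# Solve for masking coefficients r with sum_k r_k x^(L+k) = y - g'(x),
# where g' is the fake secret part of the polynomial:
A = [[pow(x, L + k, P) for k in range(t1)] for x in coalition]
b = [(y - poly_eval(fake_secret, x)) % P for x, y in zip(coalition, view)]
r = solve_mod(A, b)
fake_coeffs = fake_secret + r

# The coalition's view is identical under both secrets:
assert [poly_eval(fake_coeffs, x) for x in coalition] == view
```

Since this works for every candidate secret, the $t_1$-share view carries no information about which secret was shared.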
Share rate
Each share is one field element; the secret is $L$ field elements. Per-share efficiency is $1/L$ of the secret size — a direct $L$-fold savings over Shamir.
Linear leakage in the ramp
For a coalition of size $m$ with $t_1 < m < t_2$, the $m - t_1$ extra shares partially determine the polynomial's coefficients. A linear-algebraic calculation shows that exactly $m - t_1$ secret chunks can be expressed as linear combinations of the observations, giving $I(S; \mathrm{Shares}_T) = (m - t_1)\log_2 q$ bits.
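The leakage bound is easy to tabulate. A small helper (the function name is my own) that returns the theorem's value for any coalition size:

```python
import math

def ramp_leakage_bits(m, t1, t2, q):
    """I(S; Shares_T) in bits for a coalition of size m:
    0 up to t1, (m - t1) * log2(q) in the ramp zone, the full
    L * log2(q) bits of the secret once m reaches t2."""
    L = t2 - t1
    extra = max(0, min(m - t1, L))   # secret chunks determined
    return extra * math.log2(q)

# Example with a (30, 70) scheme over a 32-bit field:
q = 2**31 - 1
print(ramp_leakage_bits(30, 30, 70, q))   # 0.0 (perfect secrecy)
print(ramp_leakage_bits(45, 30, 70, q))   # 15 chunks' worth of bits leaked
```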
Ramp Secret Sharing: Rate vs. Security
The figure plots the share-size rate $1/L$ as a function of the ramp width $L$, with the overlaid security guarantee (perfect-secrecy threshold fraction). As $L$ grows, the shares shrink but the perfect-secrecy tolerance ($t_1 = t_2 - L$) shrinks too. The Pareto frontier is a simple hyperbola.
Example: Why Ramp Is the Right Choice for Federated Learning
In a federated-learning round, each user holds a gradient vector of $d$ coordinates. We want to secret-share each user's gradient among the other users, requiring perfect secrecy against any coalition of $\le 30$ users and reconstructability by any $70$. Compare the share size of Shamir vs. ramp.
Shamir approach
Use Shamir with threshold $31$ (for perfect secrecy against any 30-user coalition). Each gradient scalar requires one share per user, so each user's share is $d$ scalars — the full gradient size per user!
Ramp approach
Use a $(30, 70)$-ramp with $L = 40$. Each share is $d/40$ scalars, a $40\times$ savings. Perfect secrecy holds against any 30-user coalition; reconstruction still works with any 70 users; coalitions of 31–69 users learn a controlled, quantifiable amount (linear in coalition size beyond $t_1 = 30$).
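The arithmetic behind the comparison, for a hypothetical gradient dimension $d = 10^6$ (the dimension is an assumed value for illustration; the original leaves it unspecified):

```python
d = 1_000_000          # assumed gradient dimension (illustrative)
t1, t2 = 30, 70
L = t2 - t1            # ramp width 40

shamir_per_user = d    # one share element per gradient coordinate
ramp_per_user = d // L # L gradient coordinates packed per polynomial

print(shamir_per_user, ramp_per_user, shamir_per_user // ramp_per_user)
# 1000000 25000 40
```

Per user, the ramp share is 25,000 scalars instead of a full million — the $40\times$ savings claimed above.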
Caveat
The ramp scheme does not provide perfect secrecy against coalitions in the ramp zone. In federated learning this is usually acceptable because (i) the server is honest-but-curious and sees no coalition of users, and (ii) realistic threat models assume $\le t_1$ colluders. ByzSecAgg (Chapter 11) adds cryptographic bounds that make the ramp leakage provably small even in adversarial settings.
Ramp Leakage as Coalition Grows
Key Takeaway
Ramp secret sharing trades graceful degradation of secrecy for an $L$-fold share-size savings. Set $L = 1$ and you recover Shamir with perfect secrecy; set $L$ large and shares become tiny at the cost of partial leakage in the ramp zone. The optimal operating point for federated learning typically sets the ramp width to grow with the number of users, balancing storage with realistic adversary bounds. ByzSecAgg (Chapter 11) makes this choice explicit.
Common Mistake: Ramp is NOT Perfectly Secret Beyond $t_1$
Mistake:
Use a ramp scheme and describe its security guarantee as "perfect secrecy against any coalition smaller than $t_2$".
Correction:
Ramp is perfectly secret only against coalitions of size $\le t_1$. Between $t_1$ and $t_2$, a carefully-chosen coalition learns a linear amount of information about the secret. If the threat model includes coalitions in the ramp zone, you must either use Shamir (with its $L$-fold storage cost) or supplement the ramp scheme with an additional cryptographic layer (as ByzSecAgg does in Chapter 11). Confusing "no reconstruction" with "perfect secrecy" is a common source of subtle bugs in FL system design.
Ramp in Production: ByzSecAgg's Key Primitive
ByzSecAgg (Chapter 11 of this book) is the CommIT group's Byzantine-resilient secure-aggregation protocol. It combines ramp secret sharing, coded computing for distance-based outlier detection, and vector commitments to achieve a substantially lower communication complexity against Byzantine users than the protocol of Bonawitz et al. The key insight is that the ramp parameters can be chosen to grow proportionally with the number of users, which makes the per-user share small while preserving privacy. Without the ramp generalization, ByzSecAgg's savings would be impossible.
- ByzSecAgg sets the ramp width proportional to the number of users $n$
- Share size scales as $O(d/n)$ per user — much better than the $O(d)$ of plain Shamir
- Trade-off: must accept some leakage in the ramp zone
Historical Note: Ramp Sharing (Blakley and Meadows; Yamamoto)
1984–1986. Ramp secret sharing was introduced independently by George Blakley and Catherine Meadows (Crypto 1984, published 1985) and by Hirosuke Yamamoto (IEEE T-IT 1986). Blakley and Meadows motivated ramp from a storage-efficiency angle (the Karnin–Greene–Hellman perfect-secrecy lower bound is too expensive for large secrets); Yamamoto gave the information-theoretic characterization of the leakage in the ramp zone and proved the capacity formulation that dominates modern treatments. The two papers are a classic illustration of the cross-pollination between cryptography and multi-user information theory that this book tries to embody.
Quick Check
A ramp scheme has $t_1 = 30$ and $t_2 = 70$. What is the per-share size as a fraction of the secret size?
$L = t_2 - t_1 = 40$. Each share is $1/40$ the size of the secret — a $40\times$ savings over Shamir's rate of $1$.