Cyclic Wrap-Around Scheme (HKD 2017)

Theorem: HKD Cyclic Multi-Access Rate

For the multi-access coded caching problem with cyclic wrap-around access ($\Lambda = K$, $L$ consecutive caches per user), and per-cache memory $M$ satisfying $LM \leq N$, the achievable delivery rate is
$$R_\text{MA}(L) \;=\; \frac{K(1 - LM/N)}{1 + KLM/N}.$$
This is the MAN rate with effective memory $LM$.

With cyclic disjoint placement, each user's $L$ caches together store $LM$ files' worth of content. Apply MAN with this expanded effective memory. The rate-reduction factor becomes $1 + KLM/N$: its caching term $KLM/N$ is $L$ times MAN's $KM/N$.
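The rate formula above is just MAN evaluated at effective memory $LM$; a minimal sketch (exact arithmetic via `Fraction`, assuming the integrality condition holds):

```python
from fractions import Fraction

def man_rate(K, M, N):
    """MAN delivery rate K(1 - M/N)/(1 + KM/N), for integer t = K*M/N."""
    t = Fraction(K * M, N)
    return K * (1 - Fraction(M, N)) / (1 + t)

def hkd_rate(K, L, M, N):
    """HKD cyclic multi-access rate: MAN with effective per-user memory L*M."""
    return man_rate(K, L * M, N)

# Example parameters from the walkthrough later in this section.
K, L, M, N = 6, 2, 2, 6
print(man_rate(K, M, N))     # 4/3  (single-access MAN)
print(hkd_rate(K, L, M, N))  # 2/5  (multi-access, effective memory LM = 4)
```

The helper names `man_rate`/`hkd_rate` are illustrative, not from the source.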

Definition:

HKD Placement (Cyclic, Disjoint)

HKD placement for $K = \Lambda$ caches, $L$-access:

  1. Split each file $W_n$ into $K/\gcd(K, L)$ equal subfiles. For simplicity, assume $K = \Lambda$ and $\gcd(K, L) = 1$.
  2. Cache $j$ stores subfile $W_{n,j}$, with indexing designed so that any $L$ consecutive caches store $L$ distinct subfiles of $W_n$ (disjoint coverage).
  3. Each cache stores a fraction $M/N$ of the library.

Total user-accessible memory: $L \cdot M$ files' worth (disjoint pieces).

The combinatorial structure is non-trivial: placement must be consistent across all $K$ cyclic access sets. A Latin-square-like construction handles it.
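The disjoint-coverage requirement can be checked mechanically. A sketch, using the shift-by-$L$ placement from the delivery algorithm below (cache $j$ stores subfile indices $j, j+L, \ldots \bmod K$); the function names are illustrative:

```python
def placement(K, L, M, N):
    """Subfile indices stored at each cache: cache j holds W_{n,(j+i*L) mod K}."""
    per_cache = M * K // N  # subfiles of each file stored per cache
    return {j: {(j + i * L) % K for i in range(per_cache)} for j in range(K)}

def disjoint_in_every_window(K, L, M, N):
    """True iff every L consecutive caches cover L*M*K/N distinct subfiles."""
    store = placement(K, L, M, N)
    need = L * M * K // N
    return all(
        len(set().union(*(store[(k + d) % K] for d in range(L)))) == need
        for k in range(K)
    )

print(disjoint_in_every_window(6, 2, 2, 6))  # True: shifts by L avoid overlap
```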

Multi-Access Caching Gain

Gain factor $1 + KL\mu$ versus number of caches accessed $L$, for several memory ratios $\mu = M/N$. The gain is linear in $L$: accessing $L$ caches multiplies the caching term $K\mu$ in the reduction factor by $L$.
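A tiny sketch reproducing the plotted quantity (the value $K = 30$ and the memory ratios are assumed for illustration, not from the source):

```python
# Multicast gain 1 + K*L*mu as a function of L, for a few memory ratios mu = M/N.
K = 30  # assumed number of caches for this illustration
for mu in (0.1, 0.2, 0.3):
    gains = [1 + K * L * mu for L in (1, 2, 3, 4)]
    print(f"mu = {mu}: {gains}")  # linear growth in L
```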


HKD Cyclic Multi-Access Delivery

Complexity: Subpacketization $\binom{K}{\tilde t}$. Placement requires a combinatorial design to ensure disjoint coverage in every cyclic window.
Input: Library $\{W_n\}$; $K$ caches arranged cyclically; each user $k$ accesses caches $\{k, k+1, \ldots, k+L-1\} \bmod K$; demands $\mathbf{d}$.
Output: Shared-link broadcast $X$ delivering $W_{d_k}$ to user $k$ via multicasting.
1. Placement (offline).
2. \quad Split each $W_n$ into $K$ subfiles of size $F/K$.
3. \quad Cache $j$ stores subfiles $W_{n,\,(j + iL) \bmod K}$ for $i = 0, \ldots, MK/N - 1$, for all $n$.
4. \quad Verify: user $k$'s $L$ caches cover $LMK/N$ distinct subfiles of each file ($LM$ files' worth in total).
5. Delivery (online).
6. \quad Treat each user's $L$ caches as one effective cache of size $LM$.
7. \quad Compute the effective caching parameter $\tilde t = KLM/N$.
8. \quad Construct MAN XOR messages over $(\tilde t + 1)$-subsets of users.
9. \quad Broadcast all such XOR messages.
10. User decoding. Each user XOR-cancels using its $L$ caches' subfiles.

Works cleanly when $KLM/N$ is an integer and the combinatorial structure exists. Non-integer or irregular cases require time-sharing or resolvable-design generalizations (§18.3).
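Steps 7–9 determine the rate by counting: one XOR per $(\tilde t + 1)$-subset, each of size $1/\binom{K}{\tilde t}$ of a file. A minimal sketch (helper name assumed):

```python
from itertools import combinations
from math import comb

def man_delivery_rate(K, t):
    """Rate of MAN delivery: one XOR per (t+1)-subset of users,
    each message of size 1/C(K,t) file; equals (K - t)/(t + 1)."""
    messages = list(combinations(range(K), t + 1))
    return len(messages) / comb(K, t)

K, t_eff = 6, 4  # t_eff = K*L*M/N for K=6, L=2, M=2, N=6
print(man_delivery_rate(K, t_eff))  # 0.4, i.e. 2/5, matching the theorem
```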

Example: HKD Walkthrough: $K = 6$, $L = 2$, $M = 2$, $N = 6$

Walk through placement and delivery for $K = \Lambda = 6$ caches, $L = 2$ consecutive access, $M = 2$ files per cache, $N = 6$ files. Demands $\mathbf{d} = (1, 2, 3, 4, 5, 6)$.
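For these parameters $\tilde t = KLM/N = 4$, so MAN over effective caches splits each file by 4-subsets of users and sends one XOR per 5-subset. A symbolic decodability check (subfiles as labels and set-cancellation rather than bitwise XOR; a sketch, not the source's notation):

```python
from itertools import combinations

K, t = 6, 4  # effective caching parameter for K=6, L=2, M=2, N=6
subsets_t = list(combinations(range(K), t))
demand = list(range(K))  # user k wants file k (demands (1,...,6), zero-indexed)

# Effective cache of user k: subfiles W_{n,T} for every file n, every T containing k.
caches = [{(n, T) for n in range(K) for T in subsets_t if k in T}
          for k in range(K)]

decoded_ok = True
for S in combinations(range(K), t + 1):        # one XOR message per 5-subset
    payload = {(demand[k], tuple(sorted(set(S) - {k}))) for k in S}
    for k in S:                                # user k cancels terms it has cached
        unknown = payload - caches[k]
        # exactly one term should remain: user k's missing subfile of W_{d_k}
        decoded_ok &= unknown == {(demand[k], tuple(sorted(set(S) - {k})))}
print(decoded_ok)  # True: every XOR is decodable by all its intended users
```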

Cyclic Wrap-Around Multi-Access Coded Caching

Six caches in a ring (left). Each user (right) accesses $L = 2$ consecutive caches (highlighted). Disjoint placement gives $L\mu = 2/3$ effective memory. Coded delivery achieves rate $2/5$, versus $4/3$ for single-access MAN.

Is HKD Optimal?

HKD's scheme is order-optimal: within a constant factor of information-theoretic lower bounds for cyclic topologies. However, it is not exactly optimal. Later works (Ma-Tuninetti 2019, Serbetci-Parrinello-Elia 2019, Katyal-Ramkumar 2021) improve the rate by exploiting richer combinatorial structure.

The situation mirrors MAN's improvement over uncoded in Ch 2: HKD's scheme is the combinatorial baseline; fancier designs close the gap.

Common Mistake: Don't Assume $L\mu$ Effective Memory Is Always Achievable

Mistake:

Applying the formula $R = K(1 - L\mu)/(1 + KL\mu)$ to any multi-access topology without checking disjoint placement feasibility.

Correction:

The $L\mu$ effective memory requires disjoint coverage: a user's $L$ accessed caches together must store $LM$ distinct files' worth of content. Random access topologies often violate this: overlapping caches hold redundant content, wasting effective memory.

Cyclic wrap-around admits a clean disjoint placement; arbitrary topologies (random intersections) need case-specific analysis.
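The failure mode is easy to see by counting. A sketch, reusing the shift-by-$L$ placement from the delivery algorithm with $K = 6$, $L = 2$ (each cache holds 2 of the 6 subfile indices per file); the access set $\{0, 2\}$ is a hypothetical non-consecutive topology:

```python
def stored(j, K=6, L=2, per_cache=2):
    """Cyclic placement: cache j holds subfile indices {j, j+L, ...} mod K."""
    return {(j + i * L) % K for i in range(per_cache)}

# Consecutive (cyclic wrap-around) access set {0, 1}: disjoint coverage.
print(len(stored(0) | stored(1)))  # 4 distinct subfiles: full L*M effective memory

# Arbitrary access set {0, 2}: both caches store subfile 2, so content overlaps.
print(len(stored(0) | stored(2)))  # 3 < 4: effective memory falls short of L*M
```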

Key Takeaway

HKD 2017 cyclic wrap-around scheme achieves $R_\text{MA} = K(1 - LM/N)/(1 + KLM/N)$: the MAN rate with effective memory $LM$. Key insight: disjoint combinatorial placement across a user's access set expands effective memory by a factor of $L$. Placement complexity grows with $L$, but the multicasting gain is proportional.