Deployment: Fronthaul, Synchronization, Scaling

Making Cell-Free Real

The theoretical framework of §§1-4 produces dramatic performance numbers: 35% throughput gain, cm-level coverage uniformity, 10^{-13} BER at high mobility. But these assume: (i) sufficient fronthaul bandwidth, (ii) sub-microsecond AP synchronization, (iii) CPU compute matching the L \times K scale. Each of these is a non-trivial engineering challenge at deployment scale. This section lays out the practical considerations that determine whether cell-free OTFS is actually deployable in a given context.


Definition:

Fronthaul Bandwidth Requirements

Cell-free OTFS fronthaul bandwidth per AP per frame:

  • Raw received signal (option 1, worst case): N_a \cdot MN \cdot \text{bit width}. For N_a = 4, MN = 4096, 16-bit samples: 32 KB per frame.
  • Channel estimates only (option 2, standard): P \cdot 4 \cdot \text{bit width} per UE in cluster. For P = 8, L_k = 6 UEs: 768 B per frame per AP.
  • Precoded symbols (option 3, minimal): MN \cdot N_a \cdot \text{bit width} after per-AP conjugate BF. Same size as raw, but content is data-bearing.

At 100 Hz frame rate:

  • Option 1: 3.2 MB/s per AP.
  • Option 2: 77 kB/s per AP.
  • Option 3: 3.2 MB/s per AP.

Aggregate across L=100L = 100 APs:

  • Option 1: 320 MB/s. Supported by 10 GbE.
  • Option 2: 7.7 MB/s. Supported by 1 GbE.
  • Option 3: 320 MB/s. 10 GbE needed.

Practical choice: Option 2 (channel estimates) with optional Option 3 for high-precision scenarios.
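The per-option arithmetic above can be reproduced in a short back-of-envelope sketch. Names and values follow the definition; for option 2, each of the 4 per-path fields is assumed complex (2 × 16 bits), which is the interpretation that reproduces the 768 B figure.

```python
N_A = 4            # antennas per AP
MN = 4096          # delay-Doppler grid cells
BITS = 16          # bit width per sample / field component
P = 8              # paths per channel
L_K = 6            # UEs in the AP's cluster
FRAME_RATE = 100   # frames per second

def fronthaul_bytes_per_frame(option: int) -> int:
    """Fronthaul payload per AP per OTFS frame, in bytes."""
    if option == 1:                             # raw received signal
        return N_A * MN * BITS // 8
    if option == 2:                             # DD channel estimates only
        return L_K * P * 4 * 2 * BITS // 8      # 4 complex fields/path (assumed)
    if option == 3:                             # precoded symbols (same size as raw)
        return MN * N_A * BITS // 8
    raise ValueError(option)

for opt in (1, 2, 3):
    b = fronthaul_bytes_per_frame(opt)
    print(f"option {opt}: {b} B/frame -> {b * FRAME_RATE / 1e6:.2f} MB/s per AP")
```

Running this reproduces the 3.2 MB/s (options 1 and 3) and 77 kB/s (option 2) per-AP rates quoted above.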


Theorem: Fronthaul Bandwidth Scaling

The total fronthaul bandwidth for cell-free OTFS with L APs, K UEs, N_a antennas per AP, P paths per channel, MN DD cells, and 100 Hz frame rate is B_{\text{fronthaul}} \;=\; 100 \cdot L \cdot \min(L_k, K) \cdot P \cdot (4 \times 16 \text{ bits}). For urban-scale (L = 100, L_k = 8, K = 200, P = 8): B \approx 5 Gbps total fronthaul bandwidth. Distributed across L = 100 APs: 50 Mbps per AP average.

Optimization: user-centric clustering reduces the per-UE fronthaul bandwidth by factor L_k/L. For L_k = 8, L = 100: 12\times reduction.

Fronthaul bandwidth scales as L \cdot L_k \cdot P — linear in AP count, linear in cluster size, linear in path count. None of these scale with the DD grid size (MN) when we forward estimates only. This is the structural advantage of cell-free OTFS over cell-free OFDM: OFDM's per-subcarrier channel estimation requires MN \cdot L_k coefficients per UE, a factor MN/P \sim 500\times more fronthaul.
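The scaling comparison can be made concrete: a DD-domain estimate forwards P path parameters per (AP, UE) link, while per-subcarrier OFDM estimation forwards MN coefficients per link. With this chapter's parameters (MN = 4096, P = 8) the ratio works out to MN/P = 512.

```python
MN = 4096   # delay-Doppler grid cells
P = 8       # paths per channel
L_k = 8     # serving APs per UE (user-centric cluster)

otfs_per_ue = L_k * P     # coefficients forwarded per UE across serving APs
ofdm_per_ue = L_k * MN    # one coefficient per subcarrier per serving AP
ratio = ofdm_per_ue // otfs_per_ue
print(f"OTFS: {otfs_per_ue} coeffs/UE, OFDM: {ofdm_per_ue} coeffs/UE, ratio {ratio}x")
```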

Definition:

Synchronization Requirements

Cell-free OTFS requires timing and phase synchronization across APs:

Timing sync (10 ns class): all APs must sample the OTFS frame at the same instants. Error \epsilon_t translates to an effective inter-AP delay of \epsilon_t — causes ISI if exceeding \Delta\tau = 1/W.

Phase sync (1° class): conjugate BF requires coherent combining at the UE. Phase error \delta\phi across APs causes \cos(\delta\phi) signal loss. For < 0.5 dB loss: \delta\phi < 15°.

Frequency sync: carrier frequencies across APs must match to \sim 100 Hz (for \delta\nu < 1/T). Usually achieved via a shared frequency reference (GNSS-disciplined oscillator).

Synchronization methods:

  • GNSS-PPS + GNSS-disciplined oscillator: 50 ns timing, 1° phase, 100 Hz frequency. Standard for outdoor deployments.
  • PTP-1588v2 over fiber: 10 ns timing, 0.5° phase. For indoor cell-free or high-density deployments.
  • Hybrid GNSS + PTP: outdoor APs use GNSS; indoor/shielded use PTP fed from GNSS anchor.

Example: Sync Budget for Urban Cell-Free OTFS

Urban cell-free OTFS at 28 GHz, W = 100 MHz, T = 16 ms. Derive the sync budget requirements.
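One hedged worked solution, applying the three requirements from the definition above (timing error below the delay resolution 1/W, frequency offset below 1/T, phase error under 15°):

```python
import math

W = 100e6   # bandwidth, Hz
T = 16e-3   # frame duration, s

timing_budget_ns = (1 / W) * 1e9      # delay resolution: 10 ns
freq_budget_hz = 1 / T                # Doppler resolution: 62.5 Hz
phase_budget_deg = 15.0               # phase error for < 0.5 dB loss
# coherent-combining amplitude loss from a phase error of 15 degrees
loss_db = -20 * math.log10(math.cos(math.radians(phase_budget_deg)))

print(f"timing:    < {timing_budget_ns:.0f} ns  (PTP-class, per the methods list)")
print(f"frequency: < {freq_budget_hz:.1f} Hz at the 28 GHz carrier")
print(f"phase:     < {phase_budget_deg:.0f} deg -> {loss_db:.2f} dB combining loss")
```

With W = 100 MHz, the 10 ns timing budget sits at the PTP-1588v2 class rather than the 50 ns GNSS class, and the 15° phase budget costs about 0.3 dB, inside the 0.5 dB target.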

Definition:

Central Processing Unit (CPU) Architecture

The cell-free CPU architecture:

  • Input layer: receives per-AP channel estimates over fronthaul. Aggregates into an L \times K channel database.
  • Processing layer: computes precoders, performs resource allocation, monitors link quality.
    • Per-UE precoder: \mathcal{O}(P + N_a \cdot L_k) ops.
    • Resource allocation: solves a multi-user proportional-fairness or MMSE problem. \mathcal{O}(L K) per iteration.
  • Output layer: forwards precoder vectors to APs (option 3 fronthaul), or forwards data symbols (option 1).

Deployment form factors:

  • Edge server (local, per-site): ~2U server, \sim 10 GFLOPS/core, 10-20 cores. Suitable for L \leq 50.
  • Cloud CPU (remote, multi-site): GPU cluster, \sim 100 GFLOPS. L \leq 500.
  • Hierarchical: local edge CPUs + regional cloud coordination. Standard for 6G O-RAN architectures.
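A rough per-frame compute estimate follows from the complexity terms listed above. The iteration count for resource allocation is an assumption for illustration, not a figure from the text.

```python
def cpu_ops_per_frame(L: int, K: int, N_a: int = 4, P: int = 8,
                      L_k: int = 8, ra_iters: int = 10) -> int:
    """Order-of-magnitude CPU operations per OTFS frame."""
    precoding = K * (P + N_a * L_k)   # per-UE precoder: O(P + N_a * L_k)
    allocation = ra_iters * L * K     # resource allocation: O(L*K) per iteration
    return precoding + allocation

FRAME_RATE = 100  # frames per second
for L, K in [(50, 100), (100, 200), (1000, 2000)]:
    ops_per_sec = cpu_ops_per_frame(L, K) * FRAME_RATE
    print(f"L={L:4d}, K={K:4d}: ~{ops_per_sec:.2e} ops/s")
```

Even at L = 1000, K = 2000 this lands orders of magnitude below the \sim 10 GFLOPS/core edge-server figure, consistent with compute not being the binding constraint at modest scales.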

Fronthaul Bandwidth vs Deployment Scale

Plot total fronthaul bandwidth as a function of LL for different options (raw, channel estimates, precoded symbols). Overlay capabilities of commercial fronthaul hardware.

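The data behind such a plot can be generated directly from the per-AP rates derived earlier. The 1 GbE and 10 GbE capacities are nominal line rates, treated here as simple thresholds.

```python
PER_AP_MBS = {"raw": 3.2, "estimates": 0.077, "precoded": 3.2}  # MB/s per AP
CAPACITY_MBS = {"1 GbE": 125.0, "10 GbE": 1250.0}               # MB/s per link

def total_mbs(option: str, L: int) -> float:
    """Aggregate fronthaul bandwidth in MB/s for L APs under one option."""
    return PER_AP_MBS[option] * L

for L in (10, 50, 100, 500, 1000):
    est, raw = total_mbs("estimates", L), total_mbs("raw", L)
    fits = "1 GbE ok" if est <= CAPACITY_MBS["1 GbE"] else "needs 10 GbE"
    print(f"L={L:4d}: estimates {est:7.1f} MB/s ({fits}), raw {raw:7.1f} MB/s")
```

Channel-estimate fronthaul stays within a single 1 GbE trunk even at L = 1000, while raw/precoded forwarding crosses into 10 GbE territory around L = 50.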

Theorem: Cell-Free OTFS Scaling Limits

Cell-free OTFS is deployable up to scales limited by:

  1. Fronthaul: total bandwidth \leq 400 Gbps (10 GbE \times L links). Supports L \leq 1000 APs with channel-estimate fronthaul.
  2. CPU compute: per-frame ops \leq 10^{10}. Supports L K \leq 10^6 user-AP pairs.
  3. Synchronization: maintains quality for L \leq 500 APs with PTP; L \leq 5000 with GNSS + periodic calibration.
  4. Pilot contamination: for the 35% gain, requires L/K \geq 5. Above K = 1000: needs L = 5000+ APs.
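The four limits can be read as a feasibility check. All thresholds below are taken directly from the theorem; the per-AP fronthaul rate is the channel-estimate figure (77 kB/s \approx 0.62 Mbps), an assumption for this sketch.

```python
def binding_constraints(L: int, K: int, per_ap_mbps: float = 0.62) -> list:
    """Return which of the theorem's four limits bind at scale (L, K)."""
    binds = []
    if per_ap_mbps * L > 400_000:   # 1. fronthaul: 400 Gbps total
        binds.append("fronthaul")
    if L * K > 10**6:               # 2. CPU: <= 1e6 user-AP pairs
        binds.append("cpu")
    if L > 5000:                    # 3. sync: GNSS + periodic calibration
        binds.append("sync")
    if L / K < 5:                   # 4. pilot contamination: L/K >= 5
        binds.append("pilot")
    return binds

print(binding_constraints(100, 1000))    # pilot-contamination-limited
print(binding_constraints(5000, 1000))   # CPU-compute-limited
```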

Practical deployment scales:

  • Urban hot spot: L = 50-100, K = 100-200.
  • Dense urban: L = 500-1000, K = 1000-2000.
  • National-scale (6G): L = 10^6 APs. Possible, but requires hierarchical CPU + AI-based resource allocation.

The binding constraint depends on scale. For modest deployments (L < 100), a single edge CPU handles it with standard eCPRI fronthaul. For large deployments, a hierarchical architecture distributes the load. National-scale deployments need AI-based coordination — beyond current research but within the 6G roadmap.

🔧Engineering Note

O-RAN and eCPRI Integration

Cell-free OTFS integrates with the O-RAN (Open RAN) architecture:

  • AP ↔ CPU fronthaul: eCPRI 7.2 split. AP handles RF + sampling; CPU handles everything else including channel estimation.
  • AP ↔ CPU bandwidth: 10-25 GbE per AP. Standard for 5G deployments.
  • Synchronization: PTP-1588v2 over Ethernet. GNSS backup.
  • AI/ML integration: CPU can run AI models for resource allocation, user clustering, prediction. Part of 6G O-RAN RIC.

Cost: \sim \$10k-50k per AP fully provisioned (antenna, RF, fronthaul gear). At L = 100: \$1-5M per site — comparable to a cellular base station.

Deployment timeline: O-RAN cell-free prototypes in labs 2024-2026. Commercial trials 2026-2028. Mass deployment 2028+ with 6G standardization.

Practical Constraints
  • eCPRI 7.2: standard cell-free fronthaul split

  • PTP-1588v2 for sub-microsecond sync

  • \$10-50k per AP fully deployed

  • Commercial: 2028+ with 6G

Cell-Free OTFS Architecture and Data Flow

Animation of the cell-free OTFS architecture: distributed APs, UE, CPU, fronthaul. Data flow: pilots from UE → per-AP DD channel estimation → forwarded to CPU → aggregated → conjugate beamforming at APs → coherent signal at UE. The DD-domain processing happens at each AP locally, coordinated only through channel estimates at the CPU.

Why This Matters: Chapter 18: The Ultimate Cell-Free — LEO Constellation

Cell-free OTFS on the ground scales from 50 to 1000 APs over 1-100 km² areas. The next step — a LEO satellite constellation — is a cell-free network in the sky, with hundreds to thousands of satellites coordinating over the entire Earth. Chapter 18 (Buzzi-Caire-Colavolpe CommIT contribution) extends the cell-free framework to the orbital scale, where Doppler reaches ±50 kHz and OFDM cannot compete.