Fronthaul Requirements and Compression
The Fronthaul Bottleneck
Every cooperation level requires some information exchange between the APs and the CPU. Even Level 1 must forward complex scalars (one local estimate per served user) per channel use. In practice, these complex numbers must be quantized to a finite number of bits for digital transmission over the fronthaul link. The quantization introduces distortion that degrades the effective SINR. The system designer faces a tradeoff: more quantization bits reduce distortion but increase the fronthaul rate requirement. This section develops the theory of fronthaul-aware distributed processing: how many bits are needed, how to allocate them across APs and users, and what happens when the fronthaul capacity is the binding constraint.
Definition: Scalar Quantization of Local Estimates
In distributed processing (Levels 1–3), AP $l$ computes local estimates $\hat{s}_{kl}$ for each user $k$ and quantizes them before fronthaul transmission. Using a $b$-bit uniform scalar quantizer on the real and imaginary parts independently, the quantized estimate is
$$\tilde{s}_{kl} = \hat{s}_{kl} + q_{kl},$$
where $q_{kl}$ is the quantization error with variance
$$\sigma_q^2 = \frac{\Delta^2}{12}$$
per real dimension, where $\Delta$ is the quantization step size. The fronthaul rate per AP is
$$R_{\mathrm{fh},l} = 2\, b\, K\, B \quad \text{bit/s},$$
where $B$ is the sampling rate (equal to the system bandwidth for Nyquist sampling).
The factor of 2 accounts for quantizing both real and imaginary parts. For representative values of $b$, $K$, and $B$, the rate $2bKB$ works out to a few Gbit/s per AP, which is within the capacity of a single 10G Ethernet fronthaul link.
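A minimal numeric sketch of the two formulas above (quantization-noise variance and per-AP fronthaul rate); the values of $b$, $K$, and $B$ below are assumptions chosen for illustration, not taken from the text.

```python
import numpy as np

def quantization_noise_var(dynamic_range, b):
    """sigma_q^2 = Delta^2 / 12 per real dimension for a b-bit uniform quantizer
    whose 2^b levels span the given dynamic range."""
    delta = dynamic_range / 2**b
    return delta**2 / 12.0

def fronthaul_rate(b, K, B):
    """Per-AP fronthaul rate in bit/s: 2 real dimensions x b bits x K users x B samples/s."""
    return 2 * b * K * B

# Assumed example values: K = 10 users, B = 20 MHz, b = 4..6 bits.
for b in (4, 5, 6):
    print(f"b = {b}: {fronthaul_rate(b, K=10, B=20e6) / 1e9:.1f} Gbit/s per AP")
```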
Definition: Fronthaul Capacity Constraint
The fronthaul link between AP $l$ and the CPU has a finite capacity $C_l$ (in bit/s). The quantization resolution is constrained by
$$R_{\mathrm{fh},l} = 2\, b\, K\, B \le C_l,$$
which gives the maximum number of quantization bits per AP:
$$b_{\max} = \left\lfloor \frac{C_l}{2 K B} \right\rfloor.$$
For Level 4 (centralized processing), the fronthaul must carry the full received signal $\mathbf{y}_l \in \mathbb{C}^{N}$:
$$R_{\mathrm{fh},l} = 2\, b\, N\, B \le C_l.$$
Since typically $N > K$ for multi-antenna APs, Level 4 requires higher fronthaul capacity (or fewer quantization bits) than Levels 1–3.
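A small helper illustrating this constraint; the link capacity, $K$, $N$, and $B$ below are assumed values (not from the text), chosen so that the same link supports far more resolution for the local estimates than for the full received vector.

```python
import math

def max_bits_levels_1_3(C_l, K, B):
    """Maximum bits per real dimension when K local estimates (one complex
    scalar per served user) are forwarded per channel use."""
    return math.floor(C_l / (2 * K * B))

def max_bits_level_4(C_l, N, B):
    """Maximum bits per real dimension when the full N-dimensional received
    vector y_l is forwarded per channel use (Level 4)."""
    return math.floor(C_l / (2 * N * B))

# Assumed illustration: a 10 Gbit/s link, K = 10 users, N = 32 antennas, B = 20 MHz.
print(max_bits_levels_1_3(10e9, K=10, B=20e6))   # 25 bits -> resolution is not the bottleneck
print(max_bits_level_4(10e9, N=32, B=20e6))      # 7 bits  -> quantization-limited
```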
Theorem: SINR Degradation Under Fronthaul Quantization
Under $b$-bit uniform scalar quantization of the local estimates, the effective SINR of user $k$ with LSFD (Level 3) has the same form as the unquantized LSFD SINR, except that the noise term in the denominator is increased by the quantization noise contribution $2\sigma_q^2 \sum_{l} |a_{kl}|^2$, where $\sigma_q^2$ is the per-component quantization noise variance and $a_{kl}$ are the LSFD weights. In the high-resolution regime (large $b$), the SINR loss relative to unquantized LSFD is approximately
$$\Delta_{\mathrm{SINR}} \approx c \cdot 2^{-2b}$$
for a constant $c$ that depends on the signal dynamic range.
Quantization adds independent noise at each AP's fronthaul output. This noise is equivalent to increasing the thermal noise variance by $2\sigma_q^2$ per AP. With $b = 5$ bits, the quantization noise is roughly 30 dB below the signal power, which is negligible in most scenarios. With $b = 3$ bits, the noise is only 18 dB below, which can degrade the SINR by 1–2 dB.
Additive quantization noise model
The quantized local estimate is $\tilde{s}_{kl} = \hat{s}_{kl} + q_{kl}$, where $q_{kl}$ is independent of $\hat{s}_{kl}$ (additive quantization noise model, valid for high-resolution quantization). The variance of $q_{kl}$ is $\sigma_q^2 = \Delta^2/12$ per real dimension.
Effect on the CPU estimate
The CPU forms $\hat{s}_k = \sum_{l=1}^{L} a_{kl}^{*}\, \tilde{s}_{kl}$, where $a_{kl}$ are the LSFD weights. The resulting quantization noise power is $2\sigma_q^2 \sum_{l=1}^{L} |a_{kl}|^2$.
Modified SINR
Adding this quantization noise term to the denominator of the LSFD SINR gives the stated result. The approximate SINR loss follows from a first-order expansion, valid when the quantization noise is small compared with the effective noise floor.
Fronthaul Rate vs. Quantization Distortion
Explore the tradeoff between the fronthaul rate (determined by the number of quantization bits $b$) and the resulting SINR degradation. Observe that 4–6 bits per dimension are sufficient for near-lossless performance in most scenarios.
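In place of the interactive plot, here is a minimal sweep under simple assumptions: unit-power local estimates, a quantizer spanning ±4 standard deviations per real dimension (an assumed loading factor), quantization noise added directly to the effective noise floor, and illustrative values of $K$, $B$, and the unquantized SINR.

```python
import numpy as np

def quant_loss_db(b, sinr_unq_db, loading=4.0):
    """SINR degradation (dB) when b-bit quantization noise is added to the
    effective noise floor; quantizer spans +/- loading*sigma per real dimension."""
    delta = 2 * loading * np.sqrt(0.5) / 2**b   # step size for a unit-power complex estimate
    nq = 2 * delta**2 / 12.0                    # total quantization noise / signal power
    return 10 * np.log10(1 + 10**(sinr_unq_db / 10) * nq)

K, B = 10, 20e6                                 # assumed number of users and bandwidth
print(" b   rate [Gbit/s]   SINR loss [dB] at 15 dB")
for b in range(2, 9):
    rate = 2 * b * K * B / 1e9
    print(f"{b:2d}   {rate:12.2f}   {quant_loss_db(b, 15):14.2f}")
```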
Example: Required Quantization Bits for 1 dB SINR Loss
A cell-free network operates at an average unquantized SINR of 15 dB. The dynamic range of the local estimates is $A$, a small multiple of the standard deviation $\sigma_s$ (where $\sigma_s^2$ is the variance of the local estimates). How many quantization bits are needed to limit the SINR degradation to at most 1 dB?
Quantization noise variance
With $b$ bits and dynamic range $A$, the step size is $\Delta = A/2^b$ and the quantization noise variance per real dimension is
$$\sigma_q^2 = \frac{(A/2^b)^2}{12} = \frac{A^2}{12}\, 4^{-b}.$$
SINR degradation condition
The SINR loss in dB is approximately $10\log_{10}\!\big(1 + 2\sigma_q^2/\sigma_{\mathrm{eff}}^2\big)$, where $\sigma_{\mathrm{eff}}^2$ is the effective noise floor (thermal noise plus interference). For at most 1 dB loss, $10\log_{10}\!\big(1 + 2\sigma_q^2/\sigma_{\mathrm{eff}}^2\big) \le 1$ dB, so $2\sigma_q^2 \le 0.26\,\sigma_{\mathrm{eff}}^2$.
At 15 dB SINR, the effective noise floor is a factor $10^{1.5} \approx 31.6$ below the signal power. The condition therefore requires the quantization noise to be at least $15 + 5.9 \approx 21$ dB below the signal power.
Solve for $b$
Since $\sigma_q^2 \propto 4^{-b}$, each additional bit reduces the quantization noise by about 6 dB. Solving the condition above for $b$ gives a resolution in the 4–6 bit range (the exact value depends on the assumed dynamic range $A$), consistent with the rule of thumb stated earlier. The corresponding fronthaul rate per AP, $2bKB$, evaluates to 1.6 Gbit/s for this example's number of users and bandwidth.
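A quick numeric check of the last step, assuming a ±4σ quantizer span per real dimension (the loading factor and unit-power normalization are assumptions, not given in the example):

```python
import numpy as np

def loss_db(b, sinr_db=15.0, loading=4.0):
    # Quantization noise of a b-bit uniform quantizer spanning +/- loading*sigma
    # per real dimension, relative to a unit-power complex local estimate.
    delta = 2 * loading * np.sqrt(0.5) / 2**b
    nq_over_signal = 2 * delta**2 / 12.0
    return 10 * np.log10(1 + 10**(sinr_db / 10) * nq_over_signal)

b_min = next(b for b in range(1, 16) if loss_db(b) <= 1.0)
print(f"smallest b with <= 1 dB loss: {b_min} bits")   # 5 bits under these assumptions
```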
Definition: Vector Quantization for Fronthaul Compression
Instead of quantizing each local estimate independently (scalar quantization), vector quantization jointly quantizes the vector $\hat{\mathbf{s}}_l = [\hat{s}_{1l}, \ldots, \hat{s}_{Kl}]^{\mathrm{T}}$ of all local estimates at AP $l$. By exploiting the correlation structure (nearby users have correlated channels), vector quantization achieves the same distortion as scalar quantization with fewer bits.
The rate-distortion bound for Gaussian sources gives the minimum fronthaul rate per vector sample:
$$R(D) = \sum_{i=1}^{K} \max\!\left(0,\; \tfrac{1}{2}\log_2\frac{\lambda_i}{D}\right),$$
where $\lambda_1, \ldots, \lambda_K$ are the eigenvalues of the local estimate covariance matrix and $D$ is the target distortion level (the reverse water-filling level).
Vector quantization provides diminishing returns when the local estimates are nearly uncorrelated (well-separated users). The largest gain occurs when $K$ is large and users share similar channel subspaces.
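A sketch of this bound; the two eigenvalue profiles below are assumptions with equal total power, contrasting correlated and nearly uncorrelated local estimates.

```python
import numpy as np

def rd_rate_bits(eigvals, D):
    """Minimum bits per vector sample at distortion level D (reverse water-filling):
    components with lambda_i <= D are not encoded at all."""
    eigvals = np.asarray(eigvals, dtype=float)
    return np.sum(np.maximum(0.0, 0.5 * np.log2(eigvals / D)))

# Assumed eigenvalue profiles, both with total power 10, target distortion D = 0.1:
print(rd_rate_bits([8.0, 1.0, 0.5, 0.5], D=0.1))   # correlated users  -> ~7.1 bits
print(rd_rate_bits([2.5, 2.5, 2.5, 2.5], D=0.1))   # uncorrelated users -> ~9.3 bits
```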
Scalar vs. Vector Quantization for Fronthaul
| Aspect | Scalar Quantization | Vector Quantization |
|---|---|---|
| Rate (bit/sample) | $2b$ bits per complex sample (fixed) | Approaches the rate-distortion bound $R(D)$ |
| Distortion | $\sigma_q^2 = \Delta^2/12$ per component | Optimally distributed across eigenvalues (water-filling) |
| Complexity | Negligible (per-sample rounding) | Covariance estimation + KLT |
| Rate saving | Baseline | 20–40% for correlated users |
| Implementation | Simple: standard ADC | Requires eigendecomposition of local estimate covariance |
Optimal Bit Allocation Across APs
When APs differ in fronthaul capacity or in their importance to a user's estimate, the quantization bits should be allocated unevenly: a greedy algorithm repeatedly assigns the next bit to the AP where it improves the SINR the most (see the sketch below). Complexity: on the order of $L \cdot b_{\max}$ greedy steps, where $b_{\max}$ is the maximum allowed number of bits per AP. The greedy algorithm gives a near-optimal solution because the SINR gain from each additional bit is a concave function of $b_l$ (diminishing returns). APs with strong LSFD weights receive more bits because their quantization noise has a larger impact on the final estimate.
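A minimal sketch of such a greedy allocation, under a simplified single-user SINR model in which each AP's quantization noise is weighted by its squared LSFD weight; the weights, powers, loading factor, and bit budget below are illustrative assumptions.

```python
import numpy as np

def sinr_with_quantization(bits, weights, signal=10.0, noise=1.0, loading=4.0):
    """Simplified single-user SINR: each AP's quantization noise, weighted by its
    squared LSFD weight, adds to the noise floor (illustrative model)."""
    delta = 2 * loading * np.sqrt(0.5) / 2.0 ** np.asarray(bits, dtype=float)
    q_noise = np.sum(np.asarray(weights) ** 2 * 2 * delta**2 / 12.0)
    return signal / (noise + q_noise)

def greedy_bit_allocation(weights, total_bits, b_min=1, b_max=10):
    """Start every AP at b_min bits, then repeatedly give one more bit to the AP
    whose extra bit yields the largest SINR gain (near-optimal under diminishing returns)."""
    L = len(weights)
    bits = np.full(L, b_min)
    for _ in range(total_bits - b_min * L):
        gains = np.full(L, -np.inf)
        for l in range(L):
            if bits[l] < b_max:
                trial = bits.copy()
                trial[l] += 1
                gains[l] = sinr_with_quantization(trial, weights)
        if not np.isfinite(gains).any():
            break                       # every AP already at b_max
        bits[np.argmax(gains)] += 1
    return bits

# Assumed LSFD weight magnitudes for L = 4 APs and a 20-bit total budget:
print(greedy_bit_allocation(weights=[1.0, 0.8, 0.3, 0.1], total_bits=20))
```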
Fronthaul Technologies and Their Capacities
The choice of fronthaul technology determines the achievable cooperation level:
- Dark fiber: 10–100 Gbps per link. Supports Level 4 with high-resolution quantization. Cost: high (fiber deployment), power: 0.5–2 W per link.
- Ethernet (eCPRI): 10–25 Gbps per link. Supports Level 4 with moderate quantization or Level 3 with high resolution. The O-RAN standard mandates eCPRI for the fronthaul interface.
- Millimeter-wave wireless: 1–10 Gbps per link. Supports Levels 2–3. Used in urban small cells where fiber is not available.
- Sub-6 GHz wireless: 100 Mbps–1 Gbps. Supports only Levels 1–2 with aggressive compression.
In practice, heterogeneous fronthaul is common: some APs have fiber (enabling Level 4) while others use wireless backhaul (limited to Levels 2–3). The cooperation level can be selected per AP based on the available fronthaul capacity, as the sketch below illustrates.
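A simple per-AP selection rule based on the rate expressions above; the capacity mix, $K$, $N$, $B$, and the minimum resolution are assumed values, and the mapping itself is an illustrative simplification rather than a standardized rule.

```python
def max_cooperation_level(C_l, K, N, B, b_min=4):
    """Map an AP's fronthaul capacity C_l (bit/s) to the highest cooperation level
    it can support at b_min bits per real dimension (illustrative thresholds)."""
    if C_l >= 2 * b_min * N * B:     # full N-dimensional received vector -> Level 4
        return 4
    if C_l >= 2 * b_min * K * B:     # K local estimates -> Level 3 (LSFD overhead is negligible)
        return 3
    return 2                          # local estimates with aggressive compression only

# Assumed mix of links for K = 10 users, N = 32 antennas, B = 20 MHz:
for C in (25e9, 10e9, 2e9, 0.5e9):
    print(f"{C/1e9:5.1f} Gbit/s -> Level {max_cooperation_level(C, K=10, N=32, B=20e6)}")
```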
Additional practical considerations:
- eCPRI fronthaul adds 1–2 microseconds of latency per hop
- Wireless fronthaul adds 10–50 microseconds, limiting real-time cooperation
- Fronthaul power scales with the data rate: approximately 0.1 W per Gbps
- Multi-hop fronthaul (daisy-chain topology) accumulates latency and limits cooperation
Common Mistake: Using the Additive Noise Model for Low-Resolution Quantization
Mistake:
Applying the additive quantization noise model ($\tilde{s}_{kl} = \hat{s}_{kl} + q_{kl}$ with $q_{kl}$ independent of $\hat{s}_{kl}$) when $b$ is very small (1–2 bits).
Correction:
The additive noise model is accurate only from roughly 3–4 bits upward. For very low resolution (1–2 bits), the quantization error is correlated with the input, and the Bussgang decomposition should be used instead: $\tilde{s}_{kl} = G\,\hat{s}_{kl} + d_{kl}$, where $G$ is the Bussgang gain and $d_{kl}$ is uncorrelated with $\hat{s}_{kl}$. The Bussgang model accounts for the signal attenuation caused by coarse quantization.
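A minimal Monte-Carlo check of the decomposition for the extreme case of a 1-bit (sign) quantizer with Gaussian input; the output levels ±1 are an assumed normalization.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(500_000)          # Gaussian input with unit variance
y = np.sign(x)                            # 1-bit quantizer output (levels +/- 1)

G = np.mean(y * x) / np.mean(x**2)        # Bussgang gain: G = E[y x] / E[x^2]
d = y - G * x                             # distortion term of the decomposition

print(f"G      = {G:.3f}   (theory: sqrt(2/pi) = {np.sqrt(2 / np.pi):.3f})")
print(f"E[d x] = {np.mean(d * x):.4f}  (uncorrelated with the input by construction)")
```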
Quick Check
An AP with $N$ antennas has a 10 Gbps eCPRI fronthaul link. The system bandwidth is $B$ MHz. What is the maximum number of quantization bits per real dimension for Level 4 operation?
Level 4 requires forwarding the full $N$-dimensional received vector $\mathbf{y}_l$ every channel use, so the constraint is $2\,b\,N\,B \le 10$ Gbit/s and $b_{\max} = \lfloor 10\ \text{Gbit/s} / (2NB) \rfloor$. In practice, control and timing overhead reduce the effective capacity below the nominal 10 Gbit/s, which lowers $b_{\max}$ accordingly.
The Fronthaul Perspective on Cooperation Levels
The fronthaul analysis provides a unified view of the cooperation levels:
- Level 1–2 (local combining): Fronthaul carries $K$ complex scalars (the local estimates) per channel use. With 4–6 bit quantization: 0.8–1.2 Gbps per AP.
- Level 3 (LSFD): Same fronthaul as Level 2, plus LSFD weights updated every 100–1000 ms (negligible additional overhead).
- Level 4 (centralized MMSE): Fronthaul carries $N$-dimensional complex vectors per channel use. With 6–8 bit quantization: 4.8–6.4 Gbps per AP.
This quantifies the fundamental tradeoff: Level 3 provides 70–90% of Level 4's SINR performance at 15–25% of the fronthaul cost. When fronthaul capacity is the bottleneck, which it is in most deployments, Level 3 is the rational choice.
Key Takeaway
Fronthaul quantization with 4–6 bits per dimension is sufficient for near-lossless distributed processing. The SINR loss from quantization is less than 0.5 dB in most scenarios, which is far smaller than the gap between cooperation levels. The system designer should choose the cooperation level first (based on fronthaul capacity and AP antennas), then allocate quantization bits within the chosen level. Vector quantization provides 20–40% rate savings over scalar quantization when users are spatially correlated, but the added complexity is justified only in fronthaul-constrained deployments.
Fronthaul Compression
The process of quantizing and encoding the signals transmitted between access points and the central processing unit over capacity-limited fronthaul links. Includes scalar quantization, vector quantization, and information-theoretic compression based on Wyner-Ziv coding.
Related: Fronthaul, Scalar Quantization of Local Estimates, Cooperation Level
Bussgang Decomposition
A technique for analyzing nonlinear systems (such as quantizers) applied to Gaussian inputs. The output is decomposed as $y = G\,x + d$, where $G$ is a deterministic gain and $d$ is uncorrelated with the input $x$. Used to analyze low-resolution ADCs and fronthaul quantization in massive MIMO systems.
Related: Scalar Quantization of Local Estimates, Fronthaul Compression