Uplink Fronthaul Strategies
Uplink Fronthaul: From Observations to Decisions
In the cell-free uplink, each AP observes a noisy superposition of all users' signals. The fundamental challenge is: how should AP $l$ compress its $N$-dimensional observation into at most $C$ bits per symbol to forward to the CPU?
We study three strategies of increasing sophistication: compress-and-forward (scalar quantization), quantize-and-forward (vector quantization matched to the source statistics), and estimate-and-forward (local MMSE estimation followed by forwarding the estimate).
Definition: Uplink Signal Model with Fronthaul
Uplink Signal Model with Fronthaul
The received signal at AP $l$ ($l = 1, \ldots, L$) is
$$\mathbf{y}_l = \sum_{k=1}^{K} \mathbf{h}_{kl} s_k + \mathbf{n}_l,$$
where $\mathbf{h}_{kl} \in \mathbb{C}^N$ is the channel from user $k$ to AP $l$, $s_k$ is the transmit signal with power $p_k = \mathbb{E}\{|s_k|^2\}$, and $\mathbf{n}_l \sim \mathcal{CN}(\mathbf{0}, \sigma^2 \mathbf{I}_N)$.
AP $l$ maps $\mathbf{y}_l$ to a fronthaul message $m_l \in \{1, \ldots, 2^{nC}\}$, where $n$ is the block length. The CPU decodes all users' data from $(m_1, \ldots, m_L)$.
Definition: Quantize-and-Forward (QF)
Quantize-and-Forward (QF)
In quantize-and-forward, AP $l$ applies vector quantization to its received signal to produce a quantized version $\hat{\mathbf{y}}_l$. The quantization is modeled as
$$\hat{\mathbf{y}}_l = \mathbf{y}_l + \mathbf{q}_l,$$
where $\mathbf{q}_l \sim \mathcal{CN}(\mathbf{0}, \mathbf{Q}_l)$ is the quantization noise, independent of $\mathbf{y}_l$.
The fronthaul constraint requires
$$\log_2 \frac{\det(\mathbf{R}_l + \mathbf{Q}_l)}{\det \mathbf{Q}_l} \le C,$$
where $\mathbf{R}_l = \mathbb{E}\{\mathbf{y}_l \mathbf{y}_l^H\}$ is the covariance of $\mathbf{y}_l$.
The optimal quantization noise covariance $\mathbf{Q}_l$ depends on the channel statistics and can be optimized jointly across APs.
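The fronthaul constraint above ties the quantization noise power to the capacity $C$. As a minimal sketch (all parameters hypothetical, restricting to white quantization noise $\mathbf{Q}_l = \sigma_q^2 \mathbf{I}_N$ at a single AP), one can bisect for the noise power that exactly meets the constraint:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: one AP with N antennas, K users, unit noise power
N, K, sigma2 = 4, 2, 1.0
H = (rng.normal(size=(N, K)) + 1j * rng.normal(size=(N, K))) / np.sqrt(2)
p = np.ones(K)                              # per-user transmit powers

# Covariance of the received signal y = H s + n
R = H @ np.diag(p) @ H.conj().T + sigma2 * np.eye(N)

def fronthaul_rate(sq2):
    """Bits/symbol of the Gaussian test channel y_hat = y + q
    with white quantization noise q ~ CN(0, sq2 * I):
    log2 det(R + sq2*I) - log2 det(sq2*I) = log2 det(I + R/sq2)."""
    _, logdet = np.linalg.slogdet(np.eye(N) + R / sq2)
    return logdet / np.log(2)

def solve_quant_noise(C, lo=1e-9, hi=1e9, iters=200):
    """Geometric bisection for the white quantization-noise power
    meeting the fronthaul constraint (rate is monotone in sq2)."""
    for _ in range(iters):
        mid = np.sqrt(lo * hi)
        if fronthaul_rate(mid) > C:
            lo = mid    # noise too small -> rate exceeds C
        else:
            hi = mid
    return np.sqrt(lo * hi)

for C in [2.0, 8.0, 32.0]:
    sq2 = solve_quant_noise(C)
    print(f"C = {C:5.1f} bits/symbol -> quantization noise power {sq2:.3e}")
```

As expected, the solved noise power shrinks rapidly as the fronthaul capacity grows.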
Theorem: Achievable Rate with Quantize-and-Forward
Under quantize-and-forward with optimal Gaussian quantization, the achievable rate for user $k$ is
$$R_k = \mathbb{E}\left\{ \log_2\left( 1 + p_k \mathbf{h}_k^H \Big( \sum_{i \ne k} p_i \mathbf{h}_i \mathbf{h}_i^H + \sigma^2 \mathbf{I}_{LN} + \mathbf{Q} \Big)^{-1} \mathbf{h}_k \right) \right\},$$
where $\mathbf{h}_k = [\mathbf{h}_{k1}^T, \ldots, \mathbf{h}_{kL}^T]^T$ is the collective channel of user $k$ and $\mathbf{Q} = \mathrm{diag}(\mathbf{Q}_1, \ldots, \mathbf{Q}_L)$. The key difference from the unlimited-fronthaul expression is the additional $\mathbf{Q}$ term in the interference-plus-noise covariance, representing the penalty of quantization.
Quantization noise acts as additional interference. As $C \to \infty$, we have $\mathbf{Q}_l \to \mathbf{0}$ and recover the full-fronthaul rate. As $C \to 0$, the quantization noise dominates and the rate vanishes.
Model the CPU's observation
The CPU observes $\hat{\mathbf{y}}_l = \mathbf{y}_l + \mathbf{q}_l$ for each AP. Stacking all APs, $\hat{\mathbf{y}} = \mathbf{y} + \mathbf{q}$, where $\mathbf{q}$ has block-diagonal covariance $\mathbf{Q} = \mathrm{diag}(\mathbf{Q}_1, \ldots, \mathbf{Q}_L)$.
Apply the SINR formula
Treating $\mathbf{q}$ as effective noise, so that the total noise covariance is $\sigma^2 \mathbf{I}_{LN} + \mathbf{Q}$, the SINR for user $k$ under linear MMSE decoding follows from the standard MMSE-SINR expression with this modified noise covariance.
Relate quantization noise to fronthaul capacity
The fronthaul constraint determines the minimum achievable $\mathbf{Q}_l$. For scalar quantization with $b$ bits per real dimension, $\mathbf{Q}_l = \sigma_q^2 \mathbf{I}_N$ with $\sigma_q^2 \propto 2^{-2b}$.
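The three steps above can be checked numerically. A small sketch (hypothetical single-AP parameters) evaluating the MMSE-SINR rate of one user as the white quantization noise power $\sigma_q^2$ varies, confirming convergence to the full-fronthaul rate:

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, sigma2 = 4, 2, 1.0
H = (rng.normal(size=(N, K)) + 1j * rng.normal(size=(N, K))) / np.sqrt(2)
p = np.ones(K)

def user_rate(k, Q):
    """MMSE-SINR rate of user k for one channel realization,
    with quantization noise covariance Q added to the noise."""
    interf = sum(p[i] * np.outer(H[:, i], H[:, i].conj())
                 for i in range(K) if i != k)
    Cov = interf + sigma2 * np.eye(N) + Q
    sinr = (p[k] * H[:, k].conj() @ np.linalg.solve(Cov, H[:, k])).real
    return np.log2(1 + sinr)

rate_full = user_rate(0, np.zeros((N, N)))   # unlimited fronthaul: Q = 0
for sq2 in [1.0, 0.1, 0.001]:
    r = user_rate(0, sq2 * np.eye(N))
    print(f"sigma_q^2 = {sq2:6.3f}: rate = {r:.4f}  "
          f"(full-fronthaul: {rate_full:.4f})")
```

The rate increases monotonically toward `rate_full` as the quantization noise shrinks, mirroring the $C \to \infty$ limit in the theorem.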
Definition: Estimate-and-Forward (EF)
Estimate-and-Forward (EF)
In estimate-and-forward, AP $l$ first computes a local MMSE estimate of the users' signals, then quantizes and forwards the estimate. Specifically:
1. Local estimation: AP $l$ computes $\tilde{\mathbf{s}}_l = \mathbf{V}_l^H \mathbf{y}_l$, where $\mathbf{V}_l \in \mathbb{C}^{N \times K}$ is the local MMSE combining matrix.
2. Quantization: The estimate is quantized to produce $\hat{\mathbf{s}}_l = \tilde{\mathbf{s}}_l + \mathbf{q}_l$ with quantization noise covariance $\mathbf{D}_l$.
3. Forwarding: $\hat{\mathbf{s}}_l$ is forwarded to the CPU.
The advantage is that $\tilde{\mathbf{s}}_l$ is $K$-dimensional (one component per user) rather than $N$-dimensional, so the fronthaul load scales with $K$ instead of $N$.
When $N \gg K$ (the massive MIMO regime), EF provides substantial fronthaul savings over QF. This is the strategy adopted in most practical cell-free implementations.
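The dimension reduction in step 1 is easy to see in code. A minimal sketch (hypothetical parameters; $N = 16 \gg K = 2$) forming the local MMSE combining matrix and the $K$-dimensional estimate that EF forwards in place of the $N$-dimensional raw observation:

```python
import numpy as np

rng = np.random.default_rng(2)
N, K, sigma2 = 16, 2, 1.0     # N >> K: the massive MIMO regime
H = (rng.normal(size=(N, K)) + 1j * rng.normal(size=(N, K))) / np.sqrt(2)
p = np.ones(K)
P = np.diag(p)

# Local MMSE combining matrix: V = (H P H^H + sigma2 I)^{-1} H P,
# so that s_tilde = V^H y is the MMSE estimate of s given y
V = np.linalg.solve(H @ P @ H.conj().T + sigma2 * np.eye(N), H @ P)

# One received vector and its local estimate
s = (rng.normal(size=K) + 1j * rng.normal(size=K)) / np.sqrt(2)
n = (rng.normal(size=N) + 1j * rng.normal(size=N)) * np.sqrt(sigma2 / 2)
y = H @ s + n
s_tilde = V.conj().T @ y      # forwarded vector: K entries instead of N

print(f"QF forwards {y.size} complex values per symbol; "
      f"EF forwards {s_tilde.size}")
```

At the same per-component resolution, this is an $N/K = 8\times$ reduction in fronthaul load for this toy configuration.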
Quantize-and-Forward vs. Estimate-and-Forward
| Property | Quantize-and-Forward (QF) | Estimate-and-Forward (EF) |
|---|---|---|
| What is forwarded | Quantized observation $\hat{\mathbf{y}}_l$ | Quantized local estimate $\hat{\mathbf{s}}_l$ |
| Fronthaul dimension | $N$ per AP | $K$ per AP |
| Fronthaul scaling | Proportional to $N$ | Proportional to $K$ |
| Local processing | None (raw quantization) | MMSE combining at each AP |
| Information loss | Only quantization noise | Local estimation error + quantization noise |
| Optimality | Optimal as $C \to \infty$ | Near-optimal when $N \gg K$ |
| Practical appeal | Simple but high fronthaul cost | Standard approach in cell-free deployments |
Definition: Wyner-Ziv Compression for Fronthaul
Wyner-Ziv Compression for Fronthaul
Wyner-Ziv compression exploits the fact that the CPU has side information from other APs when decoding the fronthaul message from AP $l$. Instead of compressing $\mathbf{y}_l$ independently, AP $l$ can use distributed source coding to reduce the fronthaul rate.
The Wyner-Ziv rate-distortion function for Gaussian sources states that the minimum rate to describe $y_l$ with distortion $D$, given side information at the decoder, is
$$R_{\mathrm{WZ}}(D) = \log_2 \frac{\sigma^2_{l \mid -l}}{D} \quad \text{(per complex sample, for } D \le \sigma^2_{l \mid -l}\text{)},$$
where $\sigma^2_{l \mid -l}$ is the conditional variance of $y_l$ given all other APs' observations.
The Wyner-Ziv gain is largest when the APs' observations are highly correlated, which occurs when APs are closely spaced or when users are in the overlapping coverage region of multiple APs.
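The gain from side information can be illustrated with a scalar two-AP example (all parameters hypothetical): both APs observe the same user's signal, so their observations are correlated, and conditioning on the second AP's observation shrinks the variance the first AP must describe:

```python
import numpy as np

# Two single-antenna APs observe one user: y_l = h_l * s + n_l
h1, h2 = 1.0, 0.8              # channel gains (hypothetical)
p, sigma2 = 10.0, 1.0          # transmit power and noise power

var1 = h1**2 * p + sigma2      # Var(y1)
var2 = h2**2 * p + sigma2      # Var(y2)
cov12 = h1 * h2 * p            # Cov(y1, y2): correlated through s

# Conditional variance of y1 given the side information y2 at the CPU
var1_given_2 = var1 - cov12**2 / var2

D = 0.1                        # target distortion
R_indep = np.log2(var1 / D)          # compress y1 on its own
R_wz = np.log2(var1_given_2 / D)     # Wyner-Ziv with side info y2

print(f"independent: {R_indep:.2f} bits/sample, "
      f"Wyner-Ziv: {R_wz:.2f} bits/sample")
```

The stronger the inter-AP correlation (large `cov12` relative to the noise), the larger the rate saving, matching the remark above about closely spaced APs.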
Theorem: Achievable Rate with Estimate-and-Forward
Under estimate-and-forward with local MMSE combining at each AP, the achievable rate for user $k$ with the UatF bound is $R_k = \log_2(1 + \mathrm{SINR}_k)$, where
$$\mathrm{SINR}_k = \frac{p_k \left| \sum_{l=1}^{L} \mathbb{E}\{\mathbf{v}_{kl}^H \mathbf{h}_{kl}\} \right|^2}{\displaystyle\sum_{i=1}^{K} p_i \, \mathbb{E}\left\{ \Big| \sum_{l=1}^{L} \mathbf{v}_{kl}^H \mathbf{h}_{il} \Big|^2 \right\} - p_k \left| \sum_{l=1}^{L} \mathbb{E}\{\mathbf{v}_{kl}^H \mathbf{h}_{kl}\} \right|^2 + \sigma^2 \sum_{l=1}^{L} \mathbb{E}\{\|\mathbf{v}_{kl}\|^2\} + \sum_{l=1}^{L} [\mathbf{D}_l]_{kk}},$$
where $\mathbf{v}_{kl}$ is the $k$-th column of the local combining matrix $\mathbf{V}_l$, and the last term in the denominator captures the fronthaul quantization penalty.
The rate expression has the same structure as the standard UatF bound from Chapter 4, with an additional quantization noise term. As the fronthaul capacity increases, the quantization term $\sum_l [\mathbf{D}_l]_{kk}$ vanishes and we recover the unlimited-fronthaul rate. The EF approach is attractive because the quantization operates on $K$-dimensional estimates rather than $N$-dimensional raw observations.
Start from the UatF bound
The UatF technique treats the average effective channel gain as deterministic and the deviation around it as noise. With EF, the CPU receives $\hat{\mathbf{s}}_l = \mathbf{V}_l^H \mathbf{y}_l + \mathbf{q}_l$, where $\mathbf{q}_l \sim \mathcal{CN}(\mathbf{0}, \mathbf{D}_l)$ is the quantization noise component.
Decompose the received signal
Writing the $k$-th component of $\hat{\mathbf{s}}_l$ as
$$[\hat{\mathbf{s}}_l]_k = \mathbb{E}\{\mathbf{v}_{kl}^H \mathbf{h}_{kl}\} s_k + \left( \mathbf{v}_{kl}^H \mathbf{h}_{kl} - \mathbb{E}\{\mathbf{v}_{kl}^H \mathbf{h}_{kl}\} \right) s_k + \sum_{i \ne k} \mathbf{v}_{kl}^H \mathbf{h}_{il} s_i + \mathbf{v}_{kl}^H \mathbf{n}_l + [\mathbf{q}_l]_k$$
separates the coherent signal (first term) from the channel-gain uncertainty, inter-user interference, thermal noise, and quantization noise.
Compute the SINR
Summing across all APs and applying the UatF bound (treating all but the coherent signal as uncorrelated noise) yields the stated SINR expression. The quantization noise covariance enters the denominator as the fronthaul penalty.
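The expectations in the UatF SINR can be estimated by Monte Carlo. A sketch for a single AP (hypothetical parameters; maximum-ratio combining used as an illustrative choice in place of local MMSE, and a fixed per-entry quantization noise power `Dq`):

```python
import numpy as np

rng = np.random.default_rng(3)
N, K, sigma2 = 8, 2, 1.0
p = np.ones(K)
Dq = 0.05                       # quantization noise power per estimate entry
trials = 20000

# Monte Carlo estimates of the UatF expectations
num = np.zeros(K, complex)          # E{v_k^H h_k}
e_abs2 = np.zeros((K, K))           # E{|v_k^H h_i|^2}
e_norm2 = np.zeros(K)               # E{||v_k||^2}
for _ in range(trials):
    H = (rng.normal(size=(N, K)) + 1j * rng.normal(size=(N, K))) / np.sqrt(2)
    V = H                           # MR combining (illustrative)
    for k in range(K):
        num[k] += V[:, k].conj() @ H[:, k]
        for i in range(K):
            e_abs2[k, i] += abs(V[:, k].conj() @ H[:, i])**2
        e_norm2[k] += np.linalg.norm(V[:, k])**2
num /= trials; e_abs2 /= trials; e_norm2 /= trials

for k in range(K):
    signal = p[k] * abs(num[k])**2
    denom = p @ e_abs2[k] - signal + sigma2 * e_norm2[k] + Dq
    print(f"user {k}: UatF SE = {np.log2(1 + signal / denom):.3f} bit/s/Hz")
```

Setting `Dq = 0` recovers the standard UatF bound without the fronthaul penalty, so the gap between the two runs quantifies the quantization loss.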
Quantize-and-Forward vs. Estimate-and-Forward
Compare the achievable sum rate of QF and EF strategies as a function of per-AP fronthaul capacity. Observe how EF outperforms QF when the number of antennas per AP $N$ is much larger than the number of users $K$.
Example: Scalar Quantization Fronthaul Rate
Consider a single-antenna AP ($N = 1$) with received signal $y = \sum_{k=1}^{K} h_k s_k + n$, where $h_k \in \mathbb{C}$ and $n \sim \mathcal{CN}(0, \sigma^2)$. The total received power is $P_{\mathrm{rx}} = \sum_{k=1}^{K} p_k |h_k|^2 + \sigma^2$. If the AP uses uniform scalar quantization with $b$ bits per real dimension, what is the quantization noise variance and the required fronthaul rate?
Quantization noise variance
For Gaussian sources with optimal Lloyd-Max quantization at high resolution, the quantization noise variance is approximately
$$\sigma_q^2 \approx \frac{\sqrt{3}\,\pi}{2} \, P_{\mathrm{rx}} \, 2^{-2b}.$$
Fronthaul rate
Transmitting $b$ bits for each of the I and Q components, i.e., $2b$ bits per complex sample, at the Nyquist rate of $B$ complex samples per second over a bandwidth of $B$ Hz requires
$$C_{\mathrm{fh}} = 2 b B \ \text{bits/s}.$$
Rate-distortion interpretation
From rate-distortion theory for a Gaussian source with variance $P_{\mathrm{rx}}$, the distortion-rate function is $D(b) = P_{\mathrm{rx}} \, 2^{-2b}$, confirming that each bit of resolution reduces the quantization noise by about 6 dB.
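The 6-dB-per-bit rule can be verified empirically. A sketch (hypothetical received power, clipping at $4\sigma$) comparing the distortion of a uniform mid-rise quantizer against the rate-distortion bound:

```python
import numpy as np

rng = np.random.default_rng(4)
P_rx = 4.0                    # variance of the real sample (hypothetical)
samples = rng.normal(scale=np.sqrt(P_rx), size=200_000)

def uniform_quantize(x, b, xmax):
    """Uniform b-bit mid-rise quantizer on [-xmax, xmax] with clipping."""
    step = 2 * xmax / 2**b
    q = np.floor(x / step) * step + step / 2
    return np.clip(q, -xmax + step / 2, xmax - step / 2)

xmax = 4 * np.sqrt(P_rx)      # clip at 4 standard deviations
for b in [4, 6, 8]:
    xq = uniform_quantize(samples, b, xmax)
    D = np.mean((samples - xq)**2)
    print(f"b = {b}: empirical distortion {D:.2e}, "
          f"rate-distortion bound {P_rx * 2**(-2*b):.2e}")
```

Each extra bit halves the step size and hence quarters the distortion (6 dB); the empirical values sit above the rate-distortion bound, as a practical uniform quantizer must.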
Common Mistake: Ignoring the Dimension Mismatch Between QF and EF
Mistake:
Comparing QF and EF at the same total fronthaul rate (in bits/s) without accounting for the dimensionality difference: QF forwards $N$-dimensional observations while EF forwards $K$-dimensional estimates, so at the same total fronthaul rate, EF allocates more bits per dimension.
Correction:
When $N \gg K$ and the total fronthaul rate is fixed at $C$ bits per symbol, EF can allocate more bits per component than QF. The correct comparison normalizes by the dimension of the forwarded vector: QF gets $C/N$ bits per complex dimension while EF gets $C/K$.
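The normalization is a one-liner; combined with the 6-dB-per-bit rule from the worked example, it translates directly into a per-dimension distortion advantage (all numbers hypothetical):

```python
# Same total fronthaul budget, different per-dimension resolution
C_total = 256        # bits per channel use on the fronthaul link
N, K = 64, 8         # antennas per AP vs number of users

bits_qf = C_total / N            # QF: spread over N observation dimensions
bits_ef = C_total / K            # EF: spread over K estimate dimensions

# By D ~ 2^(-2b), EF's per-dimension distortion advantage over QF:
advantage_db = 20 * (bits_ef - bits_qf) * 0.30103   # ~6.02 dB per extra bit

print(f"QF: {bits_qf:.0f} bits/dim, EF: {bits_ef:.0f} bits/dim, "
      f"EF distortion advantage ~{advantage_db:.0f} dB")
```

With these numbers EF gets 32 bits per dimension versus QF's 4, an enormous resolution gap at equal fronthaul cost.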
Quick Check
In which scenario does estimate-and-forward (EF) provide the largest advantage over quantize-and-forward (QF)?
Single-antenna APs ($N = 1$) serving many users
Many-antenna APs ($N \gg K$) serving few users
Equal number of antennas and users ($N = K$)
Very high fronthaul capacity (unlimited)
EF forwards $K$-dimensional estimates instead of $N$-dimensional observations, giving a dimension reduction of $N/K$.
Quick Check
What is the key advantage of Wyner-Ziv compression over independent compression for fronthaul?
It eliminates quantization noise entirely
It exploits side information at the CPU to reduce the required fronthaul rate
It requires no local processing at the AP
It works only with single-antenna APs
The CPU has observations from other APs, which serve as correlated side information for decoding the fronthaul message from AP $l$.
Historical Note: The Wyner-Ziv Theorem and Its Wireless Renaissance
1976–2009. Aaron Wyner and Jacob Ziv published their foundational result on source coding with side information at the decoder in 1976. The theorem showed that for Gaussian sources, there is no rate penalty from not having side information at the encoder. This result remained largely theoretical for decades until the emergence of Cloud-RAN and distributed MIMO in the 2010s, where the fronthaul compression problem maps precisely onto Wyner-Ziv coding. Simeone, Somekh, Poor, and Shamai (2009) were among the first to make this connection explicit, launching a fruitful cross-fertilization between network information theory and wireless system design.
Key Takeaway
Estimate-and-forward is the practical workhorse for cell-free uplink fronthaul. By performing local MMSE combining before quantization, each AP reduces the fronthaul dimension from $N$ (antennas) to $K$ (users), enabling massive MIMO with affordable fronthaul infrastructure.
Quantize-and-Forward
An uplink fronthaul strategy where each AP directly quantizes its received signal vector and forwards the quantized version to the central processor. Optimal when fronthaul capacity is abundant.
Related: Estimate-and-Forward, Compress-and-Forward
Estimate-and-Forward
An uplink fronthaul strategy where each AP first computes local MMSE estimates of the users' signals, then quantizes and forwards the lower-dimensional estimates. Preferred when $N \gg K$.
Related: Quantize-and-Forward, Local MMSE for Distributed Antenna Systems