Cloud-RAN and Cooperative MIMO
From User Cooperation to Infrastructure Cooperation
The cooperative diversity framework of Section 25.1 shows that cooperation creates virtual MIMO. But user-to-user cooperation faces practical hurdles: inter-user channels are unreliable, synchronization is difficult, and users have limited battery life. The natural question is: what if the infrastructure cooperates instead?
In Cloud-RAN (Cloud Radio Access Network), multiple base stations (called remote radio heads, or RRHs) are connected to a central processor (CP) via fronthaul links of finite capacity. The RRHs are simple: they receive the signal, perform minimal processing, and forward a compressed version to the CP. All the "intelligence" (channel estimation, decoding, interference management) is centralized.
The information-theoretic question is: what is the uplink capacity of this system as a function of the fronthaul capacity? This is a relay network problem with quantization constraints.
Cloud-RAN Architecture
Definition: Cloud-RAN System Model
Cloud-RAN System Model
Consider $K$ single-antenna users and $L$ single-antenna access points (APs), connected to a central processor via orthogonal fronthaul links of capacity $C$ bits per channel use each. The received signal at AP $\ell$ is:

$$y_\ell = \sum_{k=1}^{K} h_{\ell k} x_k + z_\ell,$$

where $h_{\ell k}$ is the channel from user $k$ to AP $\ell$, $x_k$ satisfies $\mathbb{E}[x_k^2] \le P$, and $z_\ell \sim \mathcal{N}(0, \sigma^2)$.
In vector form:

$$\mathbf{y} = \mathbf{H}\mathbf{x} + \mathbf{z},$$

where $\mathbf{y} = (y_1, \dots, y_L)^{\mathsf{T}}$ with entries $y_\ell$, $\mathbf{H} \in \mathbb{R}^{L \times K}$, and $\mathbf{z} \sim \mathcal{N}(\mathbf{0}, \sigma^2 \mathbf{I}_L)$.
Each AP $\ell$ maps its received block $y_\ell^n$ to a fronthaul message $m_\ell \in \{1, \dots, 2^{nC}\}$, which is sent to the CP. The CP decodes all user messages from $(m_1, \dots, m_L)$.
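As a concrete sanity check, the received-signal model can be simulated in a few lines of NumPy. This is only a sketch: the numeric values of $K$, $L$, $P$, and $\sigma^2$ below are illustrative assumptions, and a single fixed channel realization is used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumed values, not from the text)
K, L = 4, 6            # users, access points
P, sigma2 = 10.0, 1.0  # per-user transmit power, AP noise power
n = 50_000             # channel uses simulated

H = rng.normal(size=(L, K)) / np.sqrt(K)            # one channel realization
x = rng.normal(scale=np.sqrt(P), size=(K, n))       # Gaussian codewords, E[x_k^2] = P
z = rng.normal(scale=np.sqrt(sigma2), size=(L, n))  # AP thermal noise

y = H @ x + z   # y_l = sum_k h_{lk} x_k + z_l, stacked as y = Hx + z
```

The empirical power of row $\ell$ of `y` matches the model's prediction $P\|\mathbf{h}_\ell\|^2 + \sigma^2$, which is the quantity each AP must quantize under its fronthaul budget.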
Cloud-RAN
A network architecture where distributed access points are connected to a central processor via capacity-limited fronthaul links, enabling centralized signal processing and cooperative MIMO.
Related: Fronthaul, Cell-free massive MIMO
Fronthaul
The communication link between a remote radio head (or access point) and the central processor in a Cloud-RAN architecture. Its finite capacity constrains the achievable cooperative gain.
Related: Cloud-RAN
Definition: Oblivious Relay Processing
Oblivious Relay Processing
AP processing is called oblivious if the compression codebook at AP $\ell$ depends only on the channel statistics (e.g., SNR, fading distribution), not on the instantaneous channel realization $\mathbf{H}$. Formally, the encoding function at AP $\ell$ is:

$$f_\ell : \mathbb{R}^n \to \{1, \dots, 2^{nC}\},$$

which maps $y_\ell^n$ to a fronthaul index, and is designed based on the distribution of $\mathbf{H}$ but not on its realization.
Oblivious processing is practically appealing because it avoids the overhead of CSI acquisition and codebook adaptation at the APs. The information-theoretic question is: how much capacity do we lose by being oblivious?
Theorem: Uplink C-RAN Capacity with Compress-and-Forward
For the uplink C-RAN with $K$ users and $L$ APs, each with fronthaul capacity $C$, the compress-and-forward strategy where each AP quantizes its observation and forwards it to the CP achieves the rate region:

$$\sum_{k \in \mathcal{S}} R_k \le I\big(x_{\mathcal{S}};\, \hat{y}_1, \dots, \hat{y}_L \,\big|\, x_{\mathcal{S}^c}\big)$$

for all $\mathcal{S} \subseteq \{1, \dots, K\}$ and all quantization distributions $p(\hat{y}_\ell \mid y_\ell)$ such that:

$$I(y_\ell; \hat{y}_\ell) \le C, \qquad \ell = 1, \dots, L,$$

where $\hat{y}_\ell$ is the quantized version of $y_\ell$ at distortion level chosen to satisfy the fronthaul constraint.
For Gaussian signaling and Gaussian quantization noise $\hat{y}_\ell = y_\ell + q_\ell$ with $q_\ell \sim \mathcal{N}(0, \sigma_q^2)$ independent of everything else:

$$\sum_{k=1}^{K} R_k \le \frac{1}{2}\log_2\det\!\left(\mathbf{I}_L + \frac{P}{\sigma^2 + \sigma_q^2}\,\mathbf{H}\mathbf{H}^{\mathsf{T}}\right)$$

subject to $\tfrac{1}{2}\log_2\!\big(1 + \sigma_y^2/\sigma_q^2\big) \le C$ at each AP, where $\sigma_y^2$ is the received signal power.
Each AP acts as a Wyner-Ziv encoder: it compresses its observation $y_\ell^n$ and sends the compressed version to the CP. The CP then decodes as if it had received the (noisier) observations $\hat{y}_\ell$. The fronthaul constraint determines the quantization noise $\sigma_q^2$: a larger fronthaul allows finer quantization, making $\hat{y}_\ell$ closer to $y_\ell$. In the limit $C \to \infty$, the quantization noise vanishes and we recover the full cooperative MIMO capacity $\tfrac{1}{2}\log_2\det\!\big(\mathbf{I}_L + \tfrac{P}{\sigma^2}\mathbf{H}\mathbf{H}^{\mathsf{T}}\big)$.
Quantization codebook
Each AP $\ell$ generates $2^{nC}$ i.i.d. quantization codewords $\hat{y}_\ell^n$ from $p(\hat{y}_\ell)$. Upon receiving $y_\ell^n$, AP $\ell$ finds the quantization index $m_\ell$ such that $(y_\ell^n, \hat{y}_\ell^n(m_\ell))$ are jointly typical. By the rate-distortion theorem, this succeeds if $C \ge I(y_\ell; \hat{y}_\ell)$.
Central processor decoding
The CP receives $(m_1, \dots, m_L)$, recovers the quantized observations $\hat{\mathbf{y}} = (\hat{y}_1, \dots, \hat{y}_L)^{\mathsf{T}}$, and performs joint typicality decoding for all user messages. The effective channel is $\hat{\mathbf{y}} = \mathbf{H}\mathbf{x} + \mathbf{z} + \mathbf{q}$, where $\mathbf{q} \sim \mathcal{N}(\mathbf{0}, \sigma_q^2 \mathbf{I}_L)$ is the quantization noise. The achievable sum rate follows from the capacity of this effective Gaussian channel.
Optimization over quantization noise
The quantization noise is determined by the fronthaul constraint: $\tfrac{1}{2}\log_2\!\big(1 + \sigma_y^2/\sigma_q^2\big) = C$, where $\sigma_y^2$ is the received power at the AP. Solving: $\sigma_q^2 = \sigma_y^2 / (2^{2C} - 1)$. As $C \to \infty$, $\sigma_q^2 \to 0$ and the system behaves as a centralized MIMO receiver with perfect observations.
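The three proof steps above can be combined into a short numerical sketch. The function name and the equal-per-AP Gaussian quantization noise are illustrative choices; the rate expression is the $\tfrac{1}{2}\log_2\det$ formula for the effective Gaussian channel, with $\sigma_q^2$ set by the fronthaul constraint.

```python
import numpy as np

def cf_sum_rate(H, P, sigma2, C):
    """Achievable CF sum rate (real signals, 1/2*log2 convention):
    each AP adds Gaussian quantization noise sigma_q^2 determined by
    the fronthaul constraint C = 1/2*log2(1 + sigma_y^2 / sigma_q^2)."""
    L = H.shape[0]
    sigma_y2 = P * np.sum(H**2, axis=1) + sigma2   # received power per AP
    sigma_q2 = sigma_y2 / (2.0**(2 * C) - 1.0)     # quantization noise per AP
    N_inv = np.diag(1.0 / (sigma2 + sigma_q2))     # effective noise: z + q
    sign, logdet = np.linalg.slogdet(np.eye(L) + P * (N_inv @ H @ H.T))
    return 0.5 * logdet / np.log(2.0)              # convert ln -> log2

H = np.eye(3)   # toy case: 3 users, 3 APs, orthogonal unit-gain channels
for C in (2.0, 5.0, 10.0, 30.0):
    print(C, cf_sum_rate(H, P=10.0, sigma2=1.0, C=C))
```

The printed rates increase monotonically with $C$ and approach the ideal centralized-MIMO value $3 \cdot \tfrac{1}{2}\log_2(11) \approx 5.19$ bits/use, illustrating the $C \to \infty$ limit.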
C-RAN Uplink Capacity vs Fronthaul Capacity
Explore how the uplink sum rate of a C-RAN system varies with fronthaul capacity. Compare compress-and-forward (CF) with decode-and-forward (DF) at the APs. The "ideal" curve shows the capacity with unlimited fronthaul (centralized MIMO).
Decode-and-Forward vs Compress-and-Forward in C-RAN
| Property | Decode-and-Forward | Compress-and-Forward |
|---|---|---|
| AP processing | Full decoding of user messages | Quantize and forward raw observations |
| Fronthaul content | Decoded bits (digital) | Compressed baseband samples |
| Required CSI at AP | Full CSI for decoding | None (oblivious processing possible) |
| Fronthaul scaling | Scales with the user data rate $R_k$ | Scales with the observation rate $I(y_\ell; \hat{y}_\ell)$ |
| Best regime | Strong fronthaul, few users per AP | Limited fronthaul, many APs |
| Practical realization | Small-cell with backhaul | C-RAN with CPRI/eCPRI fronthaul |
| Information-theoretic model | Relay channel with DF | Relay channel with CF (Wyner-Ziv) |
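The "best regime" row can be illustrated with a toy single-user calculation. This is a sketch under stated assumptions: unit channel gains, normalized noise $\sigma^2 = 1$, and the real-signal $\tfrac{1}{2}\log_2$ convention; the helper names are illustrative.

```python
import numpy as np

def df_rate(C, snr):
    # DF: the AP must fully decode before forwarding, so the user rate is
    # capped by both the AP's decoding rate and the fronthaul capacity.
    return min(C, 0.5 * np.log2(1.0 + snr))

def cf_rate(C, snr, L):
    # CF: L APs quantize and forward; the CP coherently combines the
    # quantized observations (equal unit channel gains assumed).
    sigma_y2 = snr + 1.0                        # received power, sigma^2 = 1
    sigma_q2 = sigma_y2 / (2.0**(2 * C) - 1.0)  # per-AP quantization noise
    return 0.5 * np.log2(1.0 + L * snr / (1.0 + sigma_q2))

snr = 10.0  # 10 dB
print(df_rate(2.0, snr), cf_rate(2.0, snr, L=4))  # many APs, tight fronthaul: CF wins
print(df_rate(2.0, snr), cf_rate(2.0, snr, L=1))  # single AP: DF wins
```

With $C = 2$ and four APs, CF's combining gain outweighs its quantization penalty; with a single AP, DF's clean decoded bits win, matching the table's "best regime" row.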
Example: Quantization Noise vs Fronthaul Capacity
A C-RAN system has $L$ APs, each receiving a signal with power $P = 10$ (10 dB SNR at each AP) and noise power $\sigma^2 = 1$. Each AP uses scalar Gaussian quantization for the CF strategy.
(a) Find the quantization noise power $\sigma_q^2$ for fronthaul capacities $C \in \{2, 5, 10\}$ bits per channel use.
(b) Compute the effective SNR at the CP and the resulting sum-rate loss compared to ideal fronthaul.
Quantization noise formula
From the fronthaul constraint:

$$C = \frac{1}{2}\log_2\!\left(1 + \frac{P + \sigma^2}{\sigma_q^2}\right) \quad\Longrightarrow\quad \sigma_q^2 = \frac{P + \sigma^2}{2^{2C} - 1} = \frac{11}{2^{2C} - 1}.$$

| $C$ | $\sigma_q^2$ | Effective noise $\sigma^2 + \sigma_q^2$ |
|---|---|---|
| 2 bits/use | 0.733 | 1.733 |
| 5 bits/use | 0.011 | 1.011 |
| 10 bits/use | $1.05 \times 10^{-5}$ | 1.000 |
Sum-rate comparison
With ideal fronthaul ($C \to \infty$), the sum rate with $L$ APs and $K$ equal-power users is upper bounded by $\tfrac{1}{2}\log_2\det\!\big(\mathbf{I}_L + \tfrac{P}{\sigma^2}\mathbf{H}\mathbf{H}^{\mathsf{T}}\big)$. For illustrative purposes, consider $\mathbf{H} = \mathbf{I}$ (orthogonal channels), so the per-user rate is $\tfrac{1}{2}\log_2\!\big(1 + P/(\sigma^2 + \sigma_q^2)\big)$ versus the ideal $\tfrac{1}{2}\log_2(1 + P/\sigma^2) \approx 1.73$ bits/use. With $C = 2$: $\tfrac{1}{2}\log_2(1 + 10/1.733) \approx 1.38$ bits/use ($\approx 20\%$ loss). With $C = 5$: $\approx 1.72$ bits/use ($\approx 0.4\%$ loss).
The point is that even moderate fronthaul capacity ($C = 5$ bits/use) is sufficient to nearly match the ideal cooperative MIMO performance.
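The example's numbers can be reproduced with a short script, assuming the real-signal $\tfrac{1}{2}\log_2$ convention and per-user rates over orthogonal channels:

```python
import numpy as np

P, sigma2 = 10.0, 1.0                        # 10 dB SNR, unit noise power
ideal = 0.5 * np.log2(1.0 + P / sigma2)      # per-user rate, unlimited fronthaul

results = {}
for C in (2, 5, 10):
    sigma_q2 = (P + sigma2) / (2.0**(2 * C) - 1.0)   # fronthaul constraint
    rate = 0.5 * np.log2(1.0 + P / (sigma2 + sigma_q2))
    loss_pct = 100.0 * (1.0 - rate / ideal)
    results[C] = (sigma_q2, rate, loss_pct)
    print(f"C={C:2d}: sigma_q^2={sigma_q2:.2e}, rate={rate:.3f}, loss={loss_pct:.1f}%")
```

Running this recovers the table's quantization-noise values and the roughly 20% and 0.4% rate losses for $C = 2$ and $C = 5$.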
Common Mistake: Fronthaul Must Scale with Signal Bandwidth
Mistake:
Assuming a fixed fronthaul capacity (e.g., 10 Gbps) is sufficient regardless of the signal bandwidth. In practice, the required fronthaul rate scales linearly with the RF bandwidth: a 100 MHz carrier requires more fronthaul than a 10 MHz carrier for the same quantization quality.
Correction:
The fronthaul rate for quantize-and-forward is approximately $R_{\mathrm{fh}} \approx 2 B \cdot b \cdot N_{\mathrm{ant}}$ bits/s, where $B$ is the bandwidth, $b$ is the number of quantization bits per I/Q sample, and $N_{\mathrm{ant}}$ is the number of AP antennas. For a 100 MHz 5G NR carrier with 12-bit quantization and 4 antennas per AP, this is $\approx 10$ Gbps per AP, which strains even fiber-optic fronthaul. This motivates partial decoding (functional splits) in practical C-RAN deployments.
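The back-of-the-envelope calculation can be scripted. The helper below is a hypothetical illustration that ignores oversampling and framing overhead, and samples at exactly the signal bandwidth:

```python
def quantize_forward_rate_gbps(bandwidth_hz, bits_per_sample, n_antennas):
    """Approximate quantize-and-forward fronthaul rate in Gbps:
    2 real samples (I and Q) per complex baseband sample, taken at
    roughly the signal bandwidth (oversampling/overhead ignored)."""
    return 2 * bandwidth_hz * bits_per_sample * n_antennas / 1e9

# 100 MHz 5G NR carrier, 12-bit I/Q quantization, 4 antennas per AP
print(quantize_forward_rate_gbps(100e6, 12, 4))  # ~9.6 Gbps per AP
```

Scaling the bandwidth from 10 MHz to 100 MHz multiplies the required rate tenfold, which is exactly the mistake flagged above.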
Why This Matters: CoMP in LTE-Advanced and 5G NR
Coordinated Multi-Point (CoMP) in 3GPP standards implements a practical version of the C-RAN framework studied here. LTE-Advanced (Release 11) introduced Joint Transmission CoMP (JT-CoMP, analogous to cooperative MIMO) and Coordinated Beamforming (CB-CoMP, which shares only CSI, not data). 5G NR extends this with the concept of Transmission Reception Points (TRPs), which can belong to different cells but coordinate transmission. The information-theoretic analysis of C-RAN provides the performance ceiling that practical CoMP implementations approach.
See Book telecom, Ch. 21 for the system-level treatment of CoMP.
Functional Splits in 5G NR C-RAN
3GPP defines eight functional split options between the Central Unit (CU) and Distributed Unit (DU), ranging from Option 1 (RRC/PDCP split, low fronthaul rate) to Option 8 (RF/PHY split, highest fronthaul rate matching our CF model). The information-theoretic analysis corresponds to Option 8 (fully centralized baseband processing). In practice, most deployments use Option 7-2x (intra-PHY split after FFT and beamforming), which reduces the fronthaul requirement by a factor proportional to the number of spatial layers vs antennas.
- Option 8 requires ~25 Gbps fronthaul per 100 MHz carrier (CPRI)
- Option 7-2x requires ~5 Gbps (eCPRI), but loses some centralization gain
- Fronthaul latency budget: 100-250 μs for HARQ timing in 5G NR
Quick Check
In oblivious C-RAN processing, the AP's compression codebook does not depend on the instantaneous channel realization. What is the main advantage of this approach?
It achieves higher capacity than channel-aware compression
It eliminates the need for channel estimation and codebook adaptation at the APs, reducing overhead and complexity
It reduces the required fronthaul capacity
Oblivious processing is attractive because APs can be simple, cheap hardware that just quantize and forward. No CSI acquisition, no adaptive codebook design. The intelligence is centralized at the CP.