Cloud-RAN and Cooperative MIMO

From User Cooperation to Infrastructure Cooperation

The cooperative diversity framework of Section 25.1 shows that cooperation creates virtual MIMO. But user-to-user cooperation faces practical hurdles: inter-user channels are unreliable, synchronization is difficult, and users have limited battery life. The natural question is: what if the infrastructure cooperates instead?

In Cloud-RAN (Cloud Radio Access Network), multiple base stations (called remote radio heads, or RRHs) are connected to a central processor (CP) via fronthaul links of finite capacity $C_{\text{fh}}$. The RRHs are simple: they receive the signal, perform minimal processing, and forward a compressed version to the CP. All the "intelligence" β€” channel estimation, decoding, interference management β€” is centralized.

The information-theoretic question is: what is the uplink capacity of this system as a function of the fronthaul capacity? This is a relay network problem with quantization constraints.

Cloud-RAN Architecture

The Cloud-RAN uplink: distributed access points quantize their received signals and forward compressed observations to a central processor via fronthaul links. The CP performs joint decoding of all user messages.

Definition:

Cloud-RAN System Model

Consider $K$ single-antenna users and $L$ single-antenna access points (APs), connected to a central processor via orthogonal fronthaul links of capacity $C_{\text{fh}}$ bits per channel use each. The received signal at AP $l$ is:

$$Y_l = \sum_{k=1}^{K} g_{kl} X_k + Z_l, \qquad l = 1, \ldots, L$$

where $g_{kl} \in \mathbb{C}$ is the channel from user $k$ to AP $l$, $X_k$ satisfies $\mathbb{E}[|X_k|^2] \le P$, and $Z_l \sim \mathcal{CN}(0, \sigma^2)$.

In vector form: $\mathbf{y} = \mathbf{G}\mathbf{x} + \mathbf{z}$

where $\mathbf{G} \in \mathbb{C}^{L \times K}$ with $[\mathbf{G}]_{lk} = g_{kl}$, $\mathbf{x} = [X_1, \ldots, X_K]^T$, and $\mathbf{z} \sim \mathcal{CN}(\mathbf{0}, \sigma^2\mathbf{I}_L)$.

Each AP $l$ maps its received signal $Y_l^n$ to a fronthaul message $M_l \in [1:2^{nC_{\text{fh}}}]$, which is sent to the CP. The CP decodes all $K$ user messages from $(M_1, \ldots, M_L)$.
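As a quick sanity check on the system model, the short simulation below (with illustrative values for $K$, $L$, $P$, and $\sigma^2$, not taken from the text) generates the AP observations $\mathbf{y} = \mathbf{G}\mathbf{x} + \mathbf{z}$ and verifies that the average receive power at an AP matches $\|\mathbf{g}_l\|^2 P + \sigma^2$:

```python
import numpy as np

rng = np.random.default_rng(0)

K, L = 4, 8            # users, APs (illustrative values)
P, sigma2 = 1.0, 0.1   # per-user power and AP noise power (illustrative)
n = 10000              # channel uses

# Rayleigh-fading channel matrix G (L x K), [G]_{lk} = g_{kl}
G = (rng.standard_normal((L, K)) + 1j * rng.standard_normal((L, K))) / np.sqrt(2)

# i.i.d. complex Gaussian user signals with E|X_k|^2 = P
X = np.sqrt(P / 2) * (rng.standard_normal((K, n)) + 1j * rng.standard_normal((K, n)))

# AP observations y = G x + z
Z = np.sqrt(sigma2 / 2) * (rng.standard_normal((L, n)) + 1j * rng.standard_normal((L, n)))
Y = G @ X + Z

# Empirical receive power at AP 0 should match ||g_0||^2 P + sigma^2
emp = np.mean(np.abs(Y[0]) ** 2)
theory = np.linalg.norm(G[0]) ** 2 * P + sigma2
print(f"AP 0 receive power: empirical {emp:.3f}, theory {theory:.3f}")
```

The quantity $\|\mathbf{g}_l\|^2 P + \sigma^2$ is exactly the signal power each AP must describe over its fronthaul link, which is why it reappears in the quantization-rate constraint below.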

Cloud-RAN

A network architecture where distributed access points are connected to a central processor via capacity-limited fronthaul links, enabling centralized signal processing and cooperative MIMO.

Related: Fronthaul, Cell-free massive MIMO

Fronthaul

The communication link between a remote radio head (or access point) and the central processor in a Cloud-RAN architecture. Its finite capacity constrains the achievable cooperative gain.

Related: Cloud-RAN

Definition:

Oblivious Relay Processing

AP processing is called oblivious if the compression codebook at AP $l$ depends only on the channel statistics (e.g., SNR, fading distribution), not on the instantaneous channel realization $\{g_{kl}\}_{k=1}^K$. Formally, the encoding function at AP $l$ is:

$$f_l: \mathbb{C}^n \to [1:2^{nC_{\text{fh}}}]$$

which maps $Y_l^n$ to a fronthaul index, and is designed based on $\mathbb{E}[|Y_l|^2]$ but not on $\{g_{kl}\}$.

Oblivious processing is practically appealing because it avoids the overhead of CSI acquisition and codebook adaptation at the APs. The information-theoretic question is: how much capacity do we lose by being oblivious?

Theorem: Uplink C-RAN Capacity with Compress-and-Forward

For the uplink C-RAN with $K$ users and $L$ APs, each with fronthaul capacity $C_{\text{fh}}$, the compress-and-forward strategy where each AP quantizes its observation and forwards it to the CP achieves the rate region:

$$\sum_{k \in \mathcal{S}} R_k \le I(\mathbf{X}_{\mathcal{S}}; \hat{\mathbf{Y}}_{\mathcal{T}} \mid \mathbf{X}_{\mathcal{S}^c})$$

for all $\mathcal{S} \subseteq [K]$ and all $\mathcal{T} \subseteq [L]$ such that:

$$\sum_{l \in \mathcal{T}} C_{\text{fh}} \ge I(\mathbf{Y}_{\mathcal{T}}; \hat{\mathbf{Y}}_{\mathcal{T}} \mid \hat{\mathbf{Y}}_{\mathcal{T}^c})$$

where $\hat{Y}_l$ is the quantized version of $Y_l$, with distortion level chosen to satisfy the fronthaul constraint.

For Gaussian signaling and Gaussian quantization noise $\hat{Y}_l = Y_l + Q_l$ with $Q_l \sim \mathcal{CN}(0, \sigma_q^2)$ independent of everything else:

$$R_{\text{sum}} \le \log\det\!\left(\mathbf{I}_L + \frac{1}{\sigma^2 + \sigma_q^2}\mathbf{G}\mathbf{K}_x\mathbf{G}^H\right)$$

subject to $L \cdot \log\!\left(1 + \frac{\|\mathbf{g}_l\|^2 P + \sigma^2}{\sigma_q^2}\right) \le L \cdot C_{\text{fh}}$ (no factor $\tfrac{1}{2}$, since the samples are complex-valued).

Each AP acts as a Wyner-Ziv encoder: it compresses its observation $Y_l$ and sends the compressed version $\hat{Y}_l$ to the CP. The CP then decodes as if it had received the (noisier) observations $\hat{\mathbf{Y}}$. The fronthaul constraint determines the quantization noise $\sigma_q^2$: a larger fronthaul allows finer quantization, making $\hat{Y}_l$ closer to $Y_l$. In the limit $C_{\text{fh}} \to \infty$, the quantization noise vanishes and we recover the full cooperative MIMO capacity $\log\det(\mathbf{I} + \text{SNR}\,\mathbf{G}\mathbf{G}^H)$.
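To see the tradeoff numerically, the sketch below evaluates the compress-and-forward sum rate as the fronthaul grows. It uses illustrative parameter values, allows a per-AP quantization noise $\sigma_{q,l}^2$ chosen to meet the fronthaul constraint with equality, takes $\mathbf{K}_x = P\,\mathbf{I}$, and applies the determinant identity $\log\det(\mathbf{I}_L + \mathbf{D}^{-1}\mathbf{G}\mathbf{K}_x\mathbf{G}^H) = \log\det(\mathbf{I}_K + P\,\mathbf{G}^H\mathbf{D}^{-1}\mathbf{G})$:

```python
import numpy as np

rng = np.random.default_rng(1)
K, L = 4, 8            # users, APs (illustrative)
P, sigma2 = 1.0, 0.1   # per-user power, noise power (illustrative)

G = (rng.standard_normal((L, K)) + 1j * rng.standard_normal((L, K))) / np.sqrt(2)

def cf_sum_rate(C_fh):
    """CF sum rate with per-AP quantization noise meeting C_fh with equality."""
    recv_pow = np.sum(np.abs(G) ** 2, axis=1) * P + sigma2   # ||g_l||^2 P + sigma^2
    sigma_q2 = recv_pow / (2 ** C_fh - 1)                    # invert log2(1 + recv/sq2) = C_fh
    Dinv = np.diag(1.0 / (sigma2 + sigma_q2))                # inverse effective noise covariance
    M = np.eye(K) + P * G.conj().T @ Dinv @ G                # K_x = P I
    return float(np.log2(np.linalg.det(M).real))

# Ideal fronthaul: quantization noise vanishes, full cooperative MIMO capacity
ideal = float(np.log2(np.linalg.det(np.eye(K) + (P / sigma2) * G.conj().T @ G).real))

for C_fh in [1, 2, 4, 8, 16]:
    print(f"C_fh = {C_fh:2d} bits: R_sum = {cf_sum_rate(C_fh):5.2f}  (ideal {ideal:.2f})")
```

The sum rate increases monotonically in $C_{\text{fh}}$ and approaches the centralized MIMO capacity, matching the limiting argument above.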

C-RAN Uplink Capacity vs Fronthaul Capacity

Explore how the uplink sum rate of a C-RAN system varies with fronthaul capacity. Compare compress-and-forward (CF) with decode-and-forward (DF) at the APs. The "ideal" curve shows the capacity with unlimited fronthaul (centralized MIMO).


Decode-and-Forward vs Compress-and-Forward in C-RAN

| Property | Decode-and-Forward | Compress-and-Forward |
| --- | --- | --- |
| AP processing | Full decoding of user messages | Quantize and forward raw observations |
| Fronthaul content | Decoded bits (digital) | Compressed baseband samples |
| Required CSI at AP | Full CSI for decoding | None (oblivious processing possible) |
| Fronthaul scaling | $R_{\text{sum}}$ (user data rate) | $I(Y_l; \hat{Y}_l)$ (observation rate) |
| Best regime | Strong fronthaul, few users per AP | Limited fronthaul, many APs |
| Practical realization | Small-cell with backhaul | C-RAN with CPRI/eCPRI fronthaul |
| Information-theoretic model | Relay channel with DF | Relay channel with CF (Wyner-Ziv) |

Example: Quantization Noise vs Fronthaul Capacity

A C-RAN system has L=4L = 4 APs, each receiving a signal with power βˆ₯glβˆ₯2P=10\|\mathbf{g}_l\|^2 P = 10 (10 dB SNR at each AP) and noise power Οƒ2=1\sigma^2 = 1. Each AP uses scalar Gaussian quantization for the CF strategy.

(a) Find the quantization noise power Οƒq2\sigma_q^2 for fronthaul capacities Cfh∈{2,5,10}C_{\text{fh}} \in \{2, 5, 10\} bits per channel use.

(b) Compute the effective SNR at the CP and the resulting sum-rate loss compared to ideal fronthaul.
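A sketch of the computation: part (a) inverts the per-AP constraint $C_{\text{fh}} = \log_2\bigl(1 + (\|\mathbf{g}_l\|^2 P + \sigma^2)/\sigma_q^2\bigr)$, and part (b) reports the effective SNR $S/(\sigma^2 + \sigma_q^2)$ and the per-AP rate loss relative to ideal fronthaul:

```python
import numpy as np

S, sigma2 = 10.0, 1.0  # ||g_l||^2 P and sigma^2 from the example

results = {}
for C_fh in [2, 5, 10]:
    # (a) invert C_fh = log2(1 + (S + sigma^2)/sigma_q^2)
    sigma_q2 = (S + sigma2) / (2 ** C_fh - 1)
    # (b) effective SNR at the CP, and per-AP rate loss vs ideal fronthaul
    snr_eff = S / (sigma2 + sigma_q2)
    loss = np.log2(1 + S / sigma2) - np.log2(1 + snr_eff)
    results[C_fh] = (sigma_q2, snr_eff, loss)
    print(f"C_fh={C_fh:2d}: sigma_q^2={sigma_q2:.4f}, SNR_eff={snr_eff:.3f}, loss={loss:.3f} bit/use")
```

With these numbers, $C_{\text{fh}} = 2$ gives $\sigma_q^2 = 11/3 \approx 3.67$ and an effective SNR of about $2.14$, while $C_{\text{fh}} = 10$ leaves $\sigma_q^2 \approx 0.011$, essentially ideal.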

Common Mistake: Fronthaul Must Scale with Signal Bandwidth

Mistake:

Assuming a fixed fronthaul capacity (e.g., 10 Gbps) is sufficient regardless of the signal bandwidth. In practice, the required fronthaul rate scales linearly with the RF bandwidth: a 100 MHz carrier requires $\approx 10\times$ more fronthaul than a 10 MHz carrier for the same quantization quality.

Correction:

The fronthaul rate for quantize-and-forward is approximately $C_{\text{fh}} \approx 2B \cdot b \cdot N_{\text{ant}}$ bits/s, where $B$ is the bandwidth, $b$ is the number of quantization bits per I/Q sample, and $N_{\text{ant}}$ is the number of AP antennas. For a 100 MHz 5G NR carrier with 12-bit quantization and 4 antennas per AP, this is $2 \times 100\text{M} \times 12 \times 4 = 9.6$ Gbps per AP, which strains even fiber-optic fronthaul. This motivates partial decoding (functional splits) in practical C-RAN deployments.
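The back-of-envelope rule $C_{\text{fh}} \approx 2B \cdot b \cdot N_{\text{ant}}$ is easy to encode; the function name and the Nyquist-rate sampling assumption below are ours:

```python
def fronthaul_gbps(bandwidth_hz: float, bits_per_sample: int, n_antennas: int) -> float:
    """Approximate quantize-and-forward fronthaul rate C_fh ~ 2*B*b*N_ant.

    The factor 2 accounts for separate I and Q samples at (roughly)
    Nyquist-rate sampling of a bandwidth-B carrier.
    """
    return 2 * bandwidth_hz * bits_per_sample * n_antennas / 1e9

# 100 MHz 5G NR carrier, 12-bit I/Q samples, 4 antennas per AP
print(fronthaul_gbps(100e6, 12, 4))   # 9.6 Gbps, matching the text
# Same configuration on a 10 MHz carrier: one tenth the rate
print(fronthaul_gbps(10e6, 12, 4))    # 0.96 Gbps
```

The linear scaling in $B$ is exactly the point of the common mistake above: the fronthaul provisioning must track the RF bandwidth, not just the user data rate.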

Why This Matters: CoMP in LTE-Advanced and 5G NR

Coordinated Multi-Point (CoMP) in 3GPP standards implements a practical version of the C-RAN framework studied here. LTE-Advanced (Release 11) introduced Joint Transmission CoMP (JT-CoMP, analogous to cooperative MIMO) and Coordinated Beamforming (CB-CoMP, which shares only CSI, not data). 5G NR extends this with the concept of Transmission Reception Points (TRPs), which can belong to different cells but coordinate transmission. The information-theoretic analysis of C-RAN provides the performance ceiling that practical CoMP implementations approach.

See the companion telecom book, Ch. 21, for the system-level treatment of CoMP.

⚠️Engineering Note

Functional Splits in 5G NR C-RAN

3GPP defines eight functional split options between the Central Unit (CU) and Distributed Unit (DU), ranging from Option 1 (RRC/PDCP split, low fronthaul rate) to Option 8 (RF/PHY split, highest fronthaul rate matching our CF model). The information-theoretic analysis corresponds to Option 8 (fully centralized baseband processing). In practice, most deployments use Option 7-2x (intra-PHY split after FFT and beamforming), which reduces the fronthaul requirement by a factor proportional to the number of spatial layers vs antennas.

Practical Constraints

β€’ Option 8 requires ~25 Gbps fronthaul per 100 MHz carrier (CPRI)

β€’ Option 7-2x requires ~5 Gbps (eCPRI), but loses some centralization gain

β€’ Fronthaul latency budget: 100–250 Β΅s for HARQ timing in 5G NR

πŸ“‹ Ref: 3GPP TR 38.801

Quick Check

In oblivious C-RAN processing, the AP's compression codebook does not depend on the instantaneous channel realization. What is the main advantage of this approach?

It achieves higher capacity than channel-aware compression

It eliminates the need for channel estimation and codebook adaptation at the APs, reducing overhead and complexity

It reduces the required fronthaul capacity