Levels of Cooperation (L1–L4)

A Taxonomy of Cooperation

The gap between fully distributed (each AP acts alone) and fully centralized (the CPU sees everything) is large. Björnson and Sanguinetti formalized four cooperation levels that partition this gap into meaningful operating points. Each level corresponds to a different answer to two questions: (1) What combining does the AP perform locally? and (2) What information does the CPU use to form the final estimate? Moving from Level 1 to Level 4, both the local processing complexity and the fronthaul requirement increase, but so does the achievable SINR.

Definition:

Level 1 — Local MRC (Conjugate Beamforming)

At Level 1, each AP $m$ applies matched filtering (MRC) to its received signal using only the channel estimate to user $k$:

\mathbf{a}_{mk}^{(1)} = \hat{\mathbf{g}}_{mk}

The local estimate is $\hat{s}_{mk} = \hat{\mathbf{g}}_{mk}^H \mathbf{y}_m$. The CPU combines these with equal weights: $\alpha_{mk} = 1$ for all $m \in \mathcal{M}_k$. The Level 1 SINR is

\text{SINR}_k^{(1)} = \frac{p_k \left( \sum_{m \in \mathcal{M}_k} \gamma_{mk} \right)^2}{\sum_{j=1}^{K} p_j \sum_{m \in \mathcal{M}_k} \beta_{mj} \left( \gamma_{mk} + \beta_{mk} \right) + \sigma^2 \sum_{m \in \mathcal{M}_k} \gamma_{mk}}

where $\gamma_{mk} = \mathbb{E}[|\hat{g}_{mk}|^2]$ for single-antenna APs.

Level 1 is the original cell-free massive MIMO formulation of Ngo et al. (2017). It requires no inter-user knowledge at the AP and minimal computation: a single inner product (one complex multiplication per antenna) per user per sample.
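As a minimal sketch of this pipeline (toy Rayleigh channels, perfect CSI, and every AP serving every user are illustrative assumptions, not part of the level definition), Level 1 reduces to one inner product per AP and user, followed by an unweighted sum at the CPU:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, K = 4, 2, 3        # APs, antennas per AP, users (toy sizes)
p = np.ones(K)           # uplink transmit powers
sigma2 = 0.1             # noise variance

# Toy Rayleigh channels; perfect CSI is assumed here, so g_hat = g
g = (rng.normal(size=(M, N, K)) + 1j * rng.normal(size=(M, N, K))) / np.sqrt(2)
g_hat = g.copy()

s = np.exp(2j * np.pi * rng.random(K))   # unit-power transmitted symbols
noise = np.sqrt(sigma2 / 2) * (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N)))
# y_m = sum_k sqrt(p_k) g_mk s_k + n_m
y = np.einsum('mnk,k->mn', g, np.sqrt(p) * s) + noise

# Level 1 at each AP: matched filtering, s_hat_mk = g_hat_mk^H y_m
s_local = np.einsum('mnk,mn->mk', g_hat.conj(), y)

# CPU: equal-weight combining over the serving set (here: all APs serve all users)
s_cpu = s_local.sum(axis=0)
```

Note that only the $K$ scalars per AP cross the fronthaul, not the $N$-dimensional received vectors themselves.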

Definition:

Level 2 — Local MMSE Combining

At Level 2, each AP $m$ applies local MMSE combining using the channel estimates to all users it serves:

\mathbf{a}_{mk}^{(2)} = \left( \sum_{j \in \mathcal{D}_m} p_j \hat{\mathbf{g}}_{mj} \hat{\mathbf{g}}_{mj}^H + \sum_{j \in \mathcal{D}_m} p_j \mathbf{C}_{mj} + \sigma^2 \mathbf{I}_N \right)^{-1} \hat{\mathbf{g}}_{mk}

where $\mathcal{D}_m$ is the set of users served by AP $m$, and $\mathbf{C}_{mj} = \beta_{mj} \mathbf{R}_{mj} - \gamma_{mj} \hat{\mathbf{R}}_{mj}$ is the estimation error covariance.

The CPU combines local estimates with equal weights: $\alpha_{mk} = 1$.

The critical difference from Level 1 is that the AP now suppresses interference from the other users it serves. This requires the AP to know the channel estimates $\hat{\mathbf{g}}_{mj}$ for all $j \in \mathcal{D}_m$, which are available locally from pilot processing. No additional fronthaul is needed for the combining weights.

Definition:

Level 3 — Large-Scale Fading Decoding (LSFD)

Level 3 augments Level 2 with optimized CPU weights. Each AP still performs local MMSE combining, but the CPU applies per-AP weights $\boldsymbol{\alpha}_k = [\alpha_{1k}, \ldots, \alpha_{Mk}]^T$ chosen to maximize the SINR of user $k$:

\hat{s}_k = \boldsymbol{\alpha}_k^H \hat{\mathbf{s}}_k, \qquad \hat{\mathbf{s}}_k = [\hat{s}_{1k}, \ldots, \hat{s}_{Mk}]^T

The LSFD weights depend only on the large-scale fading coefficients $\{\beta_{mk}\}$ and the local combining statistics, not on the instantaneous channel realizations. They are derived in Section 13.3.

The name "large-scale fading decoding" reflects the fact that the CPU weights change only when the large-scale fading changes (on the order of seconds), not at the small-scale fading rate. This makes Level 3 highly practical.
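A hedged sketch of the weight computation: assuming the per-user SINR takes the generalized Rayleigh-quotient form $\text{SINR}_k(\boldsymbol{\alpha}) \propto |\boldsymbol{\alpha}^H \mathbf{b}_k|^2 / (\boldsymbol{\alpha}^H \mathbf{C}_k \boldsymbol{\alpha})$ (the structure derived in Section 13.3), the maximizer is $\boldsymbol{\alpha}_k^\star \propto \mathbf{C}_k^{-1} \mathbf{b}_k$. The statistics `b_k` and `C_k` below are illustrative numbers, not values from the text:

```python
import numpy as np

def lsfd_weights(b_k, C_k):
    """Maximize |alpha^H b|^2 / (alpha^H C alpha); the maximizer of this
    generalized Rayleigh quotient is alpha* proportional to C^{-1} b."""
    return np.linalg.solve(C_k, b_k)

# Toy large-scale statistics for a user served by 3 APs (illustrative):
# b_k holds mean effective gains, C_k the interference-plus-noise statistics
b_k = np.array([0.9, 0.5, 0.2])
C_k = np.diag([0.3, 0.2, 0.4])   # diagonal toy case for readability

alpha = lsfd_weights(b_k, C_k)   # -> [3.0, 2.5, 0.5]
# APs with a strong gain and low interference receive the largest weight;
# the weights change only when the large-scale statistics change.
```

The solve involves only $|\mathcal{M}_k|$-dimensional quantities per user and runs on the slow timescale, which is what makes Level 3 so cheap at the CPU.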

Definition:

Level 4 — Centralized MMSE

At Level 4, the APs do not perform any local combining. Instead, each AP $m$ forwards its full received vector $\mathbf{y}_m \in \mathbb{C}^N$ to the CPU. The CPU forms the network-wide vector $\mathbf{y} = [\mathbf{y}_1^T, \ldots, \mathbf{y}_M^T]^T \in \mathbb{C}^{MN}$ and applies centralized MMSE combining as defined in Centralized Processing.

Level 4 is the theoretical upper bound for linear processing. Every other cooperation level can be viewed as a structured restriction on the Level 4 combining vector.

Processing at Each Cooperation Level

Complexity: Level 1: $O(N |\mathcal{D}_m|)$ per AP. Level 2: $O(N^2 |\mathcal{D}_m| + N^3)$ per AP (matrix inversion). Level 3: same as Level 2 at the APs; $O(MK)$ at the CPU for LSFD. Level 4: $O((MN)^2 K)$ at the CPU.
Input: Received signals $\{\mathbf{y}_m\}_{m=1}^M$, channel estimates $\{\hat{\mathbf{g}}_{mk}\}$, large-scale coefficients $\{\beta_{mk}\}$
Level 1 (Local MRC):
1. At AP $m$: For each $k \in \mathcal{D}_m$, compute
\hat{s}_{mk} = \hat{\mathbf{g}}_{mk}^H \mathbf{y}_m
2. Forward $\{\hat{s}_{mk}\}_{k \in \mathcal{D}_m}$ to the CPU
3. At CPU: $\hat{s}_k = \sum_{m \in \mathcal{M}_k} \hat{s}_{mk}$
Level 2 (Local MMSE):
1. At AP $m$: Compute $\mathbf{A}_m = (\sum_{j \in \mathcal{D}_m} p_j \hat{\mathbf{g}}_{mj} \hat{\mathbf{g}}_{mj}^H + \mathbf{Z}_m)^{-1}$, where $\mathbf{Z}_m = \sum_{j \in \mathcal{D}_m} p_j \mathbf{C}_{mj} + \sigma^2 \mathbf{I}_N$
2. For each $k \in \mathcal{D}_m$: $\hat{s}_{mk} = (\mathbf{A}_m \hat{\mathbf{g}}_{mk})^H \mathbf{y}_m$
3. Forward $\{\hat{s}_{mk}\}$ to the CPU
4. At CPU: $\hat{s}_k = \sum_{m \in \mathcal{M}_k} \hat{s}_{mk}$
Level 3 (LSFD):
1–3. Same as Level 2
4. At CPU: $\hat{s}_k = \sum_{m \in \mathcal{M}_k} \alpha_{mk}^{\star} \hat{s}_{mk}$, where $\boldsymbol{\alpha}_k^{\star}$ are the optimal LSFD weights (Section 13.3)
Level 4 (Centralized MMSE):
1. At AP $m$: Forward $\mathbf{y}_m$ to the CPU
2. At CPU: Form $\mathbf{y} = [\mathbf{y}_1^T, \ldots, \mathbf{y}_M^T]^T$
3. Compute $\hat{s}_k = \mathbf{v}_{k}^{H} \mathbf{y}$ with the centralized MMSE combiner

The computational bottleneck shifts from the APs (Levels 1–3) to the CPU (Level 4). Levels 1–3 are AP-scalable: the per-AP computation depends only on $N$ and $|\mathcal{D}_m|$, both bounded. Level 4 is not AP-scalable unless additional structure (e.g., block-diagonal approximations) is exploited.
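A compact sketch of the Level 4 step under perfect CSI (the stacked-channel construction and toy dimensions are illustrative assumptions): the CPU stacks the received vectors, builds the $MN \times MN$ covariance, and solves one linear system per user:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, K = 4, 2, 3        # toy dimensions: MN = 8
sigma2 = 0.1
p = np.ones(K)

# Stacked network-wide channel: column k is [g_1k^T, ..., g_Mk^T]^T in C^{MN}
G = (rng.normal(size=(M * N, K)) + 1j * rng.normal(size=(M * N, K))) / np.sqrt(2)

s = np.exp(2j * np.pi * rng.random(K))     # unit-power symbols
y = G @ (np.sqrt(p) * s) + np.sqrt(sigma2 / 2) * (
    rng.normal(size=M * N) + 1j * rng.normal(size=M * N))

# Level 4 at the CPU: MN x MN covariance, one solve per user (done jointly here)
R = G @ np.diag(p) @ G.conj().T + sigma2 * np.eye(M * N)
V = np.linalg.solve(R, G * np.sqrt(p))     # column k is v_k (perfect-CSI MMSE)
s_hat = V.conj().T @ y
```

Solving against the full $MN$-dimensional covariance is exactly the $O((MN)^2 K)$ CPU cost listed above, and the raw vectors $\mathbf{y}_m$ must all cross the fronthaul first.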

SINR CDF for Cooperation Levels L1–L4

Compare the cumulative distribution function of per-user SINR across the four cooperation levels. The 95%-likely SINR (5th percentile) is the fairness metric. Observe how Level 3 (LSFD) closes much of the gap between Level 2 and Level 4.

Parameters: M = 100 access points, N = 4 antennas per access point, K = 20 users, average transmit SNR = 10 dB.

Example: Local MMSE Combining at a Single AP

An AP with $N = 2$ antennas serves $|\mathcal{D}_m| = 2$ users with channel estimates

\hat{\mathbf{g}}_{m1} = \begin{bmatrix} 1.2 \\ 0.3j \end{bmatrix}, \quad \hat{\mathbf{g}}_{m2} = \begin{bmatrix} 0.5 \\ 1.1 \end{bmatrix}

and equal transmit powers $p_1 = p_2 = 1$, noise variance $\sigma^2 = 0.1$. Assume perfect CSI (no estimation error: $\mathbf{C}_{mj} = \mathbf{0}$). Compute the Level 2 local MMSE combining vector for user 1.
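A sketch of the computation (NumPy is used for the linear solve; the numeric values come from the example above):

```python
import numpy as np

g1 = np.array([1.2, 0.3j])        # channel estimate to user 1
g2 = np.array([0.5, 1.1 + 0j])    # channel estimate to user 2
sigma2 = 0.1

# Perfect CSI: C_mj = 0, so only the noise term regularizes the inverse
A = np.outer(g1, g1.conj()) + np.outer(g2, g2.conj()) + sigma2 * np.eye(2)
a1 = np.linalg.solve(A, g1)       # Level 2 local MMSE combiner for user 1

# Interference leakage ratio |a^H g2| / |a^H g1|: MMSE vs. plain MRC
leak_mmse = abs(a1.conj() @ g2) / abs(a1.conj() @ g1)
leak_mrc = abs(g1.conj() @ g2) / abs(g1.conj() @ g1)
```

Working the numbers through gives $\mathbf{a}_{m1} \approx [0.758 - 0.080j,\; -0.318 + 0.051j]^T$: the combiner steers toward user 1 while largely nulling user 2, cutting the leakage ratio from roughly 0.45 (MRC) to about 0.04.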

Common Mistake: LSFD (Level 3) Is Not Centralized Processing

Mistake:

Treating Level 3 as a form of centralized processing because the CPU optimizes the combining weights.

Correction:

In Level 3, the CPU applies scalar weights $\alpha_{mk}$ that depend only on large-scale fading statistics, not on the instantaneous received signals. The AP still performs all signal-level processing locally. Level 3 is a form of distributed processing with centralized coordination of the weight selection. Centralized processing (Level 4) means the CPU processes the raw received signals.

Quick Check

What additional information does a Level 2 AP require compared to a Level 1 AP?

Channel estimates to all users in the network

Channel estimates to all users in its serving set $\mathcal{D}_m$

The full network covariance matrix

Large-scale fading coefficients from the CPU

How Much Cooperation Is Enough?

The key insight from the four-level framework is that Level 3 (LSFD) achieves most of the gain of Level 4 at a fraction of the cost. The LSFD weights exploit the macro-diversity of the cell-free architecture — the fact that different APs have very different large-scale fading coefficients to the same user — without requiring the CPU to process raw signals. In networks with $N \geq 4$ antennas per AP, the gap between Level 3 and Level 4 is typically less than 1 dB in median SINR. This observation has profound implications for system design: it suggests that the fronthaul bottleneck can be largely avoided without significant performance loss.

⚠️ Engineering Note

Selecting the Cooperation Level in Practice

The choice of cooperation level depends on three deployment parameters:

  1. Fronthaul capacity: Fiber-connected APs can support Level 4; wireless fronthaul (e.g., mmWave or sub-6 GHz) typically limits the network to Level 2–3.
  2. Number of antennas per AP ($N$): With $N = 1$, local MMSE degenerates to MRC, so the gain from Level 2 over Level 1 vanishes. Level 4 becomes essential for single-antenna APs. With $N \geq 4$, Levels 2–3 provide excellent performance.
  3. User density: In dense deployments ($K/M$ close to 1), interference is severe and centralized processing provides the largest gain. In sparse deployments, local processing suffices.

In 5G NR with O-RAN, the functional split (7.2x, 6, etc.) determines the achievable cooperation level. Most O-RAN deployments use Split 7.2x for urban macro (enabling Level 4) and Split 6 for rural small cells (Level 2–3).

Practical Constraints
  • Fronthaul capacity for Level 4 must carry $2NB$ real samples per second per AP (where $B$ is the system bandwidth), multiplied by the quantizer resolution in bits per sample
  • Level 2 local MMSE requires $O(N^3)$ computation per coherence block at each AP
  • Level 3 LSFD weights must be updated whenever the large-scale fading changes (every 100–1000 ms)
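To make the first constraint concrete, a back-of-envelope calculation (the bandwidth and quantizer resolution below are illustrative assumptions, not values from the text):

```python
# Raw Level 4 fronthaul load per AP: 2NB real samples/s, Q bits per sample
N = 4          # antennas per AP
B = 100e6      # system bandwidth in Hz (assumed)
Q = 12         # quantizer resolution in bits per real sample (assumed)

rate_bps = 2 * N * B * Q
print(f"{rate_bps / 1e9:.1f} Gbit/s per AP")   # 9.6 Gbit/s
```

At these numbers a Level 4 deployment needs roughly 10 Gbit/s of fronthaul per AP, which is why fiber connectivity is typically assumed for centralized processing.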

Historical Note: The Origin of Large-Scale Fading Decoding

2013–2017

The concept of LSFD was first introduced by Ngo and Larsson in a 2013 conference paper on multi-cell massive MIMO. In that context, the "large-scale weights" were used to combine uplink signal estimates from multiple base stations to a user at the cell edge — a form of macro-diversity combining. When Ngo, Ashikhmin, Yang, Larsson, and Marzetta transplanted this idea to the cell-free paradigm in 2017, LSFD became a natural fit: instead of combining across base stations with cell boundaries, it combines across access points without boundaries. The key insight — that optimal combining weights depend only on large-scale statistics — carries over directly, making LSFD a cornerstone of practical cell-free processing.

Cooperation Level

A classification of cell-free processing architectures based on the degree of information sharing between APs and the CPU. Level 1 (local MRC) requires minimal sharing; Level 4 (centralized MMSE) requires full signal forwarding. Levels 2 (local MMSE) and 3 (LSFD) provide intermediate tradeoffs.

Related: Centralized Processing, Distributed Processing, Level 3 — Large-Scale Fading Decoding (LSFD)

Large-Scale Fading Decoding (LSFD)

A technique where the CPU combines local symbol estimates from distributed APs using weights that depend only on the large-scale fading coefficients. LSFD weights change on the timescale of seconds (as users move), not on the fast-fading timescale (milliseconds), making them practical to optimize and communicate.

Related: Cooperation Level, Distributed Processing

Quick Check

For Level 4 centralized MMSE, the CPU must invert a matrix of dimension:

$N \times N$

$M \times M$

$MN \times MN$

$K \times K$