Joint Fronthaul Load Balancing
Beyond Uniform Fronthaul Allocation
The strategies in Sections 14.2 and 14.3 assume fixed fronthaul capacities at each AP. In practice, the fronthaul infrastructure may support flexible allocation of capacity and computation resources across APs. This section presents the CommIT group's work on joint fronthaul load balancing and computation resource allocation, which optimizes how fronthaul and processing resources are distributed across the network.
Joint Fronthaul Load Balancing and Computation Resource Allocation
Goettsch, Li, and Caire developed a joint optimization framework for cell-free massive MIMO that simultaneously allocates fronthaul capacity and computation resources across APs and processing units.
The key insight is that fronthaul load and computation load are coupled: an AP that performs more local processing (e.g., local MMSE combining) generates less fronthaul traffic but requires more computation. Conversely, an AP that forwards raw observations consumes more fronthaul but less local computation.
The framework formulates a network utility maximization problem:

$$\max_{\{\rho_m\}, \{c_m\}, \{\mathbf{V}_m\}} \; \sum_{k=1}^{K} R_k \quad \text{subject to} \quad \rho_m \le 1 \;\; \forall m, \qquad \sum_{m=1}^{M} c_m \le C_{\text{comp}},$$

where $\rho_m$ is the fronthaul load factor, $c_m$ is the computation allocation, $\mathbf{V}_m$ is the local processing matrix, and $C_{\text{comp}}$ is the total computation budget.
The paper shows that the joint optimization provides 15--30% sum rate improvement over separate fronthaul and computation optimization, with the largest gains in heterogeneous networks where APs have different fronthaul and computation capabilities.
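The benefit of joint over separate optimization can be reproduced qualitatively with a toy model. In the sketch below, all numbers and the bottleneck rate model are illustrative assumptions, not the paper's formulation: each AP's rate is limited by the smaller of its fronthaul share and its (efficiency-scaled) computation share, with the two APs needing the resources in different proportions.

```python
import itertools
import math

# Toy 2-AP model (illustrative assumptions, not the paper's model):
# each AP's rate is bottlenecked by min(fronthaul share, k * compute share).
# AP 0 is compute-hungry (k = 1), AP 1 is compute-light (k = 4).
C_TOTAL, P_TOTAL = 20.0, 10.0          # fronthaul / computation budgets
COMP_EFF = [1.0, 4.0]                  # effective resource per compute unit

def sum_rate(C, P):
    return sum(math.log2(1 + min(c, k * p))
               for c, p, k in zip(C, P, COMP_EFF))

grid = [i / 20 for i in range(21)]     # fraction of a budget given to AP 0

# Separate optimization: tune fronthaul with computation fixed uniform,
# tune computation with fronthaul fixed uniform, then combine the two.
C_sep = max(([a * C_TOTAL, (1 - a) * C_TOTAL] for a in grid),
            key=lambda Cv: sum_rate(Cv, [P_TOTAL / 2] * 2))
P_sep = max(([b * P_TOTAL, (1 - b) * P_TOTAL] for b in grid),
            key=lambda Pv: sum_rate([C_TOTAL / 2] * 2, Pv))
separate = sum_rate(C_sep, P_sep)

# Joint optimization: search both splits together.
joint = max(sum_rate([a * C_TOTAL, (1 - a) * C_TOTAL],
                     [b * P_TOTAL, (1 - b) * P_TOTAL])
            for a, b in itertools.product(grid, grid))

print(f"separate: {separate:.3f}  joint: {joint:.3f} bits/s/Hz")
```

The separate passes each tune one resource against a uniform split of the other and end up mismatched (fronthaul routed to an AP whose computation cannot use it); the joint search exploits the coupling and achieves a higher sum rate.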
Definition: Fronthaul Load Factor
Fronthaul Load Factor
The fronthaul load factor at AP $m$ quantifies the fraction of fronthaul capacity utilized: $\rho_m = R_m^{\mathrm{fh}} / C_m$, where $R_m^{\mathrm{fh}}$ is the fronthaul rate required to forward the locally processed signal and $C_m$ is the fronthaul capacity of AP $m$. The load factor depends on the local processing strategy $\mathbf{V}_m$:
- No local processing (QF): $R_m^{\mathrm{fh}}$ scales with the number of antennas at the AP
- Full local combining (EF): $R_m^{\mathrm{fh}}$ scales with the number of users
- Partial local combining: $R_m^{\mathrm{fh}}$ scales with the number of users served by AP $m$'s cluster
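A minimal sketch of how the load factor varies with the processing strategy, under assumed numbers (16 antennas, 8 total users, a 3-user cluster, 4 quantization bits per forwarded dimension, fronthaul capacity 100 bits/s/Hz; none of these figures come from the text):

```python
# Toy load-factor comparison (all numbers are illustrative assumptions).
N, K, CLUSTER = 16, 8, 3       # antennas, total users, cluster size
B_BITS, C_M = 4.0, 100.0       # bits per forwarded dimension, capacity

def load_factor(dims_forwarded):
    """rho_m = R_fh / C_m with R_fh = b * (forwarded signal dimensions)."""
    return B_BITS * dims_forwarded / C_M

rho = {
    "QF (raw observations)": load_factor(N),       # one stream per antenna
    "EF (full local combining)": load_factor(K),   # one stream per user
    "partial combining": load_factor(CLUSTER),     # one stream per cluster user
}
for name, value in rho.items():
    print(f"{name}: rho = {value:.2f}")
```

With more antennas than users, local combining cuts the forwarded dimensions and hence the load factor, which is the coupling the joint framework exploits.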
Theorem: Load Balancing Gain
Let $R_{\mathrm{uniform}}$ be the sum rate with uniform fronthaul allocation ($C_{\mathrm{total}}/M$ per AP) and $R_{\mathrm{opt}}$ be the sum rate with optimized load balancing. For a network with $M$ APs and total fronthaul budget $C_{\mathrm{total}}$, the load balancing gain satisfies

$$R_{\mathrm{opt}} - R_{\mathrm{uniform}} \approx \frac{\kappa}{2} \sum_{m=1}^{M} \left( C_m^{\star} - \frac{C_{\mathrm{total}}}{M} \right)^2,$$

where $C_m^{\star}$ are the optimal (unconstrained) fronthaul rates, $\kappa > 0$ reflects the curvature of the rate function, and the variance term reflects the heterogeneity of the optimal allocation.
The gain from load balancing is proportional to the variance in the optimal fronthaul allocation. If all APs naturally need the same fronthaul (e.g., a perfectly symmetric deployment), the gain is zero. The gain is largest in heterogeneous networks where some APs serve many users (high fronthaul need) while others serve few.
Convexity argument
The achievable rate is a concave function of the fronthaul allocation. By Jensen's inequality, the uniform allocation is suboptimal whenever the optimal allocation is non-uniform.
Quantify the gap
A second-order Taylor expansion of the rate around the uniform allocation point gives the stated bound. The Hessian of the rate function is negative semidefinite, and the gap is proportional to the variance of the optimal allocation.
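The variance argument can be checked numerically. The sketch below uses toy per-AP weights (standing in for user loads; the concave utility $w_m \log_2(1 + C_m)$ is an assumption, not the chapter's exact rate expression) and compares the waterfilling-optimal allocation against the uniform one for increasingly heterogeneous networks:

```python
import math

# Gap between optimal and uniform fronthaul allocation vs. Var(C*).
def sum_rate(w, C):
    return sum(wi * math.log2(1 + ci) for wi, ci in zip(w, C))

def optimal_alloc(w, total):
    # Bisect the water level lam: C_m = max(0, w_m / lam - 1), sum(C) = total.
    lo, hi = 1e-12, sum(w)
    for _ in range(200):
        lam = (lo + hi) / 2
        C = [max(0.0, wi / lam - 1) for wi in w]
        if sum(C) > total:
            lo = lam
        else:
            hi = lam
    return C

M, total = 4, 40.0
gains, variances = [], []
for w in ([1, 1, 1, 1], [3, 3, 1, 1], [6, 1, 0.5, 0.5]):
    C_opt = optimal_alloc(w, total)
    gains.append(sum_rate(w, C_opt) - sum_rate(w, [total / M] * M))
    variances.append(sum((c - total / M) ** 2 for c in C_opt) / M)
    print(f"w={w}: gain={gains[-1]:.2f} bits/s/Hz, Var(C*)={variances[-1]:.1f}")
```

For symmetric weights the optimal allocation is uniform and the gain is zero; as the weights spread out, both the variance of the optimal allocation and the gain grow together, matching the Taylor-expansion argument above.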
Joint Fronthaul and Computation Allocation
Complexity: Per iteration: $O(MN^3)$ for the local MMSE combining updates (one $N \times N$ matrix inversion per AP) and $O(M \log M)$ for the waterfilling step. Convergence is typically reached in 5--10 iterations. The waterfilling step allocates more fronthaul to APs with high-quality channels (many users in their cluster), mirroring classical waterfilling in MIMO capacity optimization.
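A sorting-based waterfilling step with $O(M \log M)$ complexity can be sketched as follows, assuming per-AP weights $w_m$ (e.g., cluster sizes) and the concave utility $\sum_m w_m \log(1 + C_m)$ (an illustrative stand-in for the per-AP rate):

```python
def waterfill(w, total):
    """Allocate fronthaul C_m = max(0, w_m/lam - 1) so that sum(C) = total,
    maximizing sum_m w_m * log(1 + C_m). One sort -> O(M log M)."""
    order = sorted(range(len(w)), key=lambda m: -w[m])
    prefix, k_active, lam = 0.0, 0, float("inf")
    for k, m in enumerate(order, start=1):
        prefix += w[m]
        cand = prefix / (total + k)        # water level if top-k APs active
        if w[m] > cand:                    # AP m receives positive fronthaul
            k_active, lam = k, cand
        else:
            break                          # remaining APs are below water
    C = [0.0] * len(w)
    for m in order[:k_active]:
        C[m] = w[m] / lam - 1
    return C

# Busy APs (large clusters) get more fronthaul, lightly loaded ones less.
print(waterfill([3, 3, 1, 1], 40.0))   # approximately [15.5, 15.5, 4.5, 4.5]
```

With a small budget the lowest-weight APs can drop out entirely (zero fronthaul), which the active-set loop handles without iteration.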
Load Balancing Gain in Cell-Free Networks
Compare the sum rate with uniform vs. optimized fronthaul allocation. Observe how the gain increases with network heterogeneity (unequal user distributions across APs).
Parameters: user-distribution heterogeneity (0 = uniform user distribution, 1 = highly clustered)
Example: Load Balancing with Heterogeneous APs
Consider a cell-free network with $M = 4$ APs and $K = 8$ users. APs 1 and 2 each serve a cluster of 3 users, while APs 3 and 4 each serve a single user. The total fronthaul budget is $C_{\mathrm{total}} = 40$ bits/s/Hz. Compare uniform allocation ($C_{\mathrm{total}}/M = 10$ bits/s/Hz per AP) with proportional allocation (fronthaul proportional to cluster size).
Uniform allocation
Each AP gets $C_{\mathrm{total}}/M = 10$ bits/s/Hz. APs 1 and 2 serve 3 users each: $10/3 \approx 3.3$ bits per user dimension. APs 3 and 4 serve 1 user each: $10$ bits per user dimension. The bottleneck is at APs 1 and 2 with low per-user resolution.
Proportional allocation
Total cluster weight: $3 + 3 + 1 + 1 = 8$. AP 1: $(3/8) \times 40 = 15$ bits/s/Hz $\Rightarrow 5$ bits per user. AP 2: $(3/8) \times 40 = 15$ bits/s/Hz $\Rightarrow 5$ bits per user. AP 3: $(1/8) \times 40 = 5$ bits/s/Hz $\Rightarrow 5$ bits per user. AP 4: $(1/8) \times 40 = 5$ bits/s/Hz $\Rightarrow 5$ bits per user.
Compare
Proportional allocation equalizes the per-user fronthaul resolution at 5 bits per user across all APs, eliminating the bottleneck at APs 1 and 2. This yields a more balanced rate distribution with higher minimum user rate. The sum rate improvement depends on the channels but is typically 10--25% in this configuration.
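The arithmetic of the worked example can be verified in a few lines:

```python
# Worked example: 4 APs with cluster sizes 3, 3, 1, 1 and a total
# fronthaul budget of 40 bits/s/Hz.
clusters = [3, 3, 1, 1]
total = 40.0
M = len(clusters)

uniform = [total / M] * M                             # 10 bits/s/Hz per AP
prop = [total * s / sum(clusters) for s in clusters]  # 15, 15, 5, 5

per_user_uniform = [c / s for c, s in zip(uniform, clusters)]
per_user_prop = [c / s for c, s in zip(prop, clusters)]

print("uniform, bits per user:     ", [round(x, 2) for x in per_user_uniform])
print("proportional, bits per user:", [round(x, 2) for x in per_user_prop])
```

Proportional allocation equalizes all four APs at 5 bits per user, while the uniform split leaves APs 1 and 2 at about 3.3 bits per user.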
Common Mistake: Optimizing Fronthaul and Computation Separately
Mistake:
Treating fronthaul allocation and computation allocation as independent problems. This ignores the coupling: an AP that performs more local processing needs less fronthaul but more computation, and vice versa.
Correction:
Use joint optimization as in the Goettsch/Li/Caire framework. The joint approach moves along the Pareto frontier of the fronthaul-computation tradeoff, achieving 15--30% higher sum rates than separate optimization in heterogeneous networks.
Quick Check
In which scenario does fronthaul load balancing provide the largest gain over uniform allocation?
- All APs serve the same number of users
- Some APs serve many users while others serve few (correct)
- All APs have unlimited fronthaul
- The network has only one AP

Heterogeneous load creates a mismatch between fronthaul need and uniform allocation, which load balancing resolves.
Key Takeaway
Joint fronthaul load balancing and computation resource allocation exploits the coupling between local processing and fronthaul usage. The Goettsch/Li/Caire framework shows that waterfilling fronthaul across APs (more capacity to busy APs, less to idle ones) provides 15--30% sum rate gains in heterogeneous cell-free networks.
Fronthaul Load Balancing
The optimization of fronthaul capacity allocation across access points in a distributed MIMO network, accounting for heterogeneous user distributions and per-AP processing capabilities.
Related: Fronthaul, Resource Allocation