Sum-Rate Maximization
The Hardest Fairness Criterion
Sum-rate maximization, choosing the powers $p_1, \dots, p_K$ to maximize $\sum_{k=1}^{K} \log_2(1 + \mathrm{SINR}_k)$, is the most natural objective from a throughput perspective but also the most challenging to optimize. Unlike max-min fairness (quasi-concave) and proportional fairness (GP-amenable), the sum-rate objective is non-concave in the power vector due to the interference coupling. This section develops two powerful algorithmic frameworks: successive convex approximation (SCA) and the weighted minimum mean-square error (WMMSE) algorithm.
Definition: Weighted Sum-Rate Maximization
Weighted Sum-Rate Maximization
The weighted sum-rate maximization (WSRMax) problem is

$$\max_{\mathbf{p}} \; \sum_{k=1}^{K} w_k \log_2\!\bigl(1 + \mathrm{SINR}_k(\mathbf{p})\bigr) \quad \text{subject to} \quad 0 \le p_k \le P_{\max}, \quad k = 1, \dots, K,$$

where $w_k > 0$ are user-specific weights. When $w_k = 1$ for all $k$, this is the unweighted sum-rate problem. By varying the weights, one can trace the boundary of the achievable rate region.
The weights can encode priority (premium users get higher weight), queue state (users with more buffered data get higher weight), or fairness constraints (setting each weight inversely proportional to the user's long-term average rate approximates proportional fairness).
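The objective can be evaluated directly. Below is a minimal sketch for a $K$-user single-antenna interference channel; the gain-matrix convention $G[k, i]$ (gain from transmitter $i$ to receiver $k$) and all identifiers are illustrative assumptions, not notation from the text.

```python
import numpy as np

def weighted_sum_rate(p, G, sigma2, w):
    """Weighted sum rate sum_k w_k * log2(1 + SINR_k).

    G[k, i] is the gain from transmitter i to receiver k, so the
    diagonal carries the desired-signal gains (illustrative convention).
    """
    signal = np.diag(G) * p                  # desired receive powers
    interference = G @ p - signal            # sum_{i != k} G[k, i] * p_i
    sinr = signal / (interference + sigma2)
    return float(np.sum(w * np.log2(1.0 + sinr)))

# Two users with weak symmetric coupling: SINR_k = 1 / (0.1 + 0.1) = 5.
G = np.array([[1.0, 0.1],
              [0.1, 1.0]])
rate = weighted_sum_rate(np.array([1.0, 1.0]), G, sigma2=0.1, w=np.ones(2))
```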
Theorem: NP-Hardness of Sum-Rate Maximization
The weighted sum-rate maximization problem with interference is NP-hard in general. Specifically, even for the two-user interference channel with single-antenna nodes, the problem of finding the power allocation maximizing $\log_2(1 + \mathrm{SINR}_1) + \log_2(1 + \mathrm{SINR}_2)$ is NP-hard when both users have non-zero interference coupling.
The difficulty arises from the non-concavity of the sum-rate function. Increasing $p_k$ improves $\mathrm{SINR}_k$ (more signal) but degrades $\mathrm{SINR}_i$ for $i \neq k$ (more interference). This creates a landscape with potentially many local maxima, and no polynomial-time algorithm is known to find the global optimum.
Reduction from MAX-CUT
Luo and Zhang (2008) showed that the sum-rate maximization problem for a $K$-user interference channel can encode an instance of MAX-CUT on a weighted graph. Since MAX-CUT is NP-hard, the reduction establishes NP-hardness of the power control problem.
Implication for massive MIMO
In massive MIMO, the effective interference coupling is typically much smaller than the desired signal (favorable propagation). This makes the problem "closer to concave" in practice, and local methods often find near-optimal solutions. However, no polynomial-time algorithm guarantees global optimality.
Successive Convex Approximation (SCA)
An iterative optimization technique that replaces a non-convex problem with a sequence of convex subproblems. Each subproblem is constructed by approximating the non-convex parts with concave lower bounds, and the iterates provably converge to a stationary point of the original problem.
Related: WMMSE Algorithm
Definition: Successive Convex Approximation for Sum-Rate
Successive Convex Approximation for Sum-Rate
The SCA approach to WSRMax works by lower-bounding each rate $R_k(\mathbf{p}) = \log_2(1 + \mathrm{SINR}_k(\mathbf{p}))$: the concave part is kept exactly and the interference term is linearized. At iteration $t$, given the current powers $\mathbf{p}^{(t)}$, we solve

$$\mathbf{p}^{(t+1)} = \arg\max_{0 \le p_k \le P_{\max}} \; \sum_{k=1}^{K} w_k \tilde{R}_k(\mathbf{p}; \mathbf{p}^{(t)}),$$

where $\tilde{R}_k(\cdot\,; \mathbf{p}^{(t)})$ is a concave surrogate of $R_k$ satisfying: (i) $\tilde{R}_k(\mathbf{p}^{(t)}; \mathbf{p}^{(t)}) = R_k(\mathbf{p}^{(t)})$ (tight at the current point), (ii) $\nabla \tilde{R}_k(\mathbf{p}^{(t)}; \mathbf{p}^{(t)}) = \nabla R_k(\mathbf{p}^{(t)})$ (matching gradient), and (iii) $\tilde{R}_k(\mathbf{p}; \mathbf{p}^{(t)}) \le R_k(\mathbf{p})$ for all feasible $\mathbf{p}$ (global lower bound). Properties (i)-(iii) guarantee that the sequence $\{\mathbf{p}^{(t)}\}$ converges to a stationary point.
Theorem: Convergence of SCA
Let $\{\mathbf{p}^{(t)}\}$ be the sequence of iterates produced by the SCA algorithm. If the surrogate functions satisfy properties (i)-(iii) above, then:
- The objective $f(\mathbf{p}) = \sum_k w_k R_k(\mathbf{p})$ is non-decreasing: $f(\mathbf{p}^{(t+1)}) \ge f(\mathbf{p}^{(t)})$.
- Every limit point of $\{\mathbf{p}^{(t)}\}$ satisfies the KKT conditions of the original WSRMax problem.
The key idea is that each SCA iteration maximizes a lower bound that is tight at the current point. Because the bound is tight, the new iterate achieves at least the same objective value. Because the gradient matches, the stationary points of the surrogates coincide with those of the original problem.
Monotonicity
By property (iii), $f(\mathbf{p}^{(t+1)}) \ge \sum_k w_k \tilde{R}_k(\mathbf{p}^{(t+1)}; \mathbf{p}^{(t)})$. Since $\mathbf{p}^{(t+1)}$ maximizes the surrogate objective, this is at least $\sum_k w_k \tilde{R}_k(\mathbf{p}^{(t)}; \mathbf{p}^{(t)})$. By property (i), the right side equals $f(\mathbf{p}^{(t)})$.
Stationarity
At a limit point $\mathbf{p}^\star$, we have $\mathbf{p}^\star \in \arg\max_{\mathbf{p}} \sum_k w_k \tilde{R}_k(\mathbf{p}; \mathbf{p}^\star)$. The KKT conditions of the surrogate subproblem at $\mathbf{p}^\star$ involve the gradients $\nabla \tilde{R}_k(\mathbf{p}^\star; \mathbf{p}^\star) = \nabla R_k(\mathbf{p}^\star)$ by property (ii), which are exactly the KKT conditions of the original problem.
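The construction above can be made concrete. The sketch below applies SCA to a single-antenna interference channel: each rate is written as $\log_2(\text{signal} + \text{interference} + \sigma^2) - \log_2(\text{interference} + \sigma^2)$, the concave first term is kept, the convex second term is linearized at $\mathbf{p}^{(t)}$ (yielding a concave global lower bound satisfying (i)-(iii)), and each concave subproblem is solved approximately by projected gradient ascent. The function names, gain convention, and inner solver are illustrative assumptions, not from the text.

```python
import numpy as np

def sca_sum_rate(G, sigma2, p_max, w, outer_iters=30, inner_iters=200, lr=0.02):
    """SCA for weighted sum-rate power control on a K-user interference
    channel with gains G[k, i] (transmitter i -> receiver k)."""
    K = G.shape[0]
    g = np.diag(G).copy()
    C = G - np.diag(g)                # cross-gains only (interference)
    p = np.full(K, p_max)             # equal-power initialization

    def surrogate_grad(p, p_t):
        ln2 = np.log(2.0)
        total = G @ p + sigma2        # signal + interference + noise (concave log term)
        intf_t = C @ p_t + sigma2     # interference + noise at p_t (linearization point)
        # Gradient of sum_k w_k [log2(total_k) - linearized log2(intf_k)]
        return G.T @ (w / (total * ln2)) - C.T @ (w / (intf_t * ln2))

    for _ in range(outer_iters):
        p_t = p.copy()                # fix the linearization point
        for _ in range(inner_iters):  # solve the concave subproblem approximately
            p = np.clip(p + lr * surrogate_grad(p, p_t), 0.0, p_max)
    return p

# Asymmetric two-user example: strong coupling makes equal power suboptimal.
G = np.array([[1.0, 1.0],
              [1.0, 0.5]])
p_star = sca_sum_rate(G, sigma2=0.1, p_max=1.0, w=np.ones(2))
```

On this example SCA drives the weaker user's power toward zero, recovering the near-TDMA solution that far exceeds the equal-power sum rate.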
Definition: The WMMSE Algorithm
The WMMSE Algorithm
The Weighted Minimum Mean-Square Error (WMMSE) algorithm is a particular instance of SCA that exploits the SINR-MSE relationship. For user $k$ with receive filter $u_k$ and MSE weight $d_k$, define the MSE

$$e_k = \mathbb{E}\bigl[\,|u_k^{*}\, y_k - s_k|^2\,\bigr],$$

where $s_k$ is user $k$'s unit-power data symbol and $y_k$ is the received signal.
The WMMSE algorithm alternates between:
- Fix $(d_k, p_k)$, update $u_k$: the MMSE receiver, which minimizes $e_k$
- Fix $(u_k, p_k)$, update $d_k$: the weight $d_k = 1/e_k$ (inverse MSE)
- Fix $(u_k, d_k)$, update $p_k$: solve a convex quadratic program in the powers
This alternating optimization provably converges to a stationary point of the WSRMax problem.
WMMSE Algorithm
The Weighted Minimum Mean-Square Error algorithm for sum-rate maximization. It alternates between updating MMSE receivers, MSE weights, and transmit powers, converging to a KKT point of the weighted sum-rate problem.
Related: Successive Convex Approximation for Sum-Rate, Sum Rate Maximization
WMMSE Algorithm for Weighted Sum-Rate Maximization
Complexity: $\mathcal{O}(K M^3)$ per iteration, dominated by the per-user MMSE filter computations; typically 10-50 iterations to converge.

The key insight of WMMSE is the identity

$$\log_2\!\bigl(1 + \mathrm{SINR}_k\bigr) = -\log_2 e_k^{\mathrm{MMSE}},$$

which connects the MSE to the rate. This allows the non-convex rate maximization to be decomposed into alternating convex steps.
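As an illustration of the three alternating steps, here is a compact sketch of the scalar WMMSE updates for a $K$-user single-antenna interference channel with real gains; the function name and channel convention $H[k, i]$ are illustrative assumptions, and the multi-antenna version replaces the scalar filters with vector filters and matrix inversions.

```python
import numpy as np

def wmmse(H, sigma2, p_max, w, iters=50):
    """Scalar WMMSE for a K-user interference channel.

    H[k, i]: real gain from transmitter i to receiver k. Alternates the
    three closed-form updates: MMSE receivers u, MSE weights d, and
    transmit amplitudes v (powers p = v**2).
    """
    K = H.shape[0]
    h = np.diag(H)
    v = np.sqrt(np.full(K, p_max))               # equal-power initialization
    rates = []
    for _ in range(iters):
        T = (H ** 2) @ (v ** 2) + sigma2         # total receive power + noise
        u = h * v / T                            # MMSE receiver update
        e = 1.0 - u * h * v                      # resulting MMSE, e_k in (0, 1)
        d = 1.0 / e                              # weight update d_k = 1/e_k
        # Power step: the weighted MSE is a separable quadratic in each v_j,
        # so the minimizer is closed-form, then clipped to the power box.
        a = w * d * u * h
        B = (H ** 2).T @ (w * d * u ** 2)        # B_j = sum_k w_k d_k u_k^2 H[k,j]^2
        v = np.clip(a / B, 0.0, np.sqrt(p_max))
        # Track the weighted sum rate (non-decreasing by the SCA guarantee).
        p = v ** 2
        sig = h ** 2 * p
        sinr = sig / ((H ** 2) @ p - sig + sigma2)
        rates.append(float(np.sum(w * np.log2(1.0 + sinr))))
    return v ** 2, rates

H = np.array([[1.0, 0.5],
              [0.6, 0.9]])
p_opt, rates = wmmse(H, sigma2=0.1, p_max=1.0, w=np.ones(2))
```

The tracked `rates` sequence increases monotonically, which is a useful sanity check when debugging an implementation.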
Example: WMMSE Convergence for a 4-User System
Consider a base station with $M$ antennas serving $K = 4$ users with random i.i.d. Rayleigh channels, ZF combining, and a fixed SNR operating point. Run the WMMSE algorithm from equal-power initialization and compare the converged sum rate to the equal-power baseline.
Equal-power baseline
With equal power $p_k = P_{\max}$ for all $k$, the sum rate depends on the channel realization; for a typical realization with path losses spread over a 10 dB range, this equal-power sum rate (in bits/s/Hz) serves as the baseline.
WMMSE iterations
After 15 iterations, the WMMSE algorithm converges to a sum rate roughly 18% above the equal-power baseline. The algorithm shifts power from users with weak effective channels (after ZF processing) to users with strong effective channels.
Convergence behavior
The sum rate increases monotonically at each iteration (guaranteed by the SCA framework). Most of the improvement occurs in the first 5 iterations; the remaining iterations refine the solution by less than 0.1 bits/s/Hz per step.
Common Mistake: WMMSE Finds Local Optima, Not Global
Mistake:
A common error is to trust that the WMMSE algorithm finds the global optimum of the sum-rate problem because it is based on alternating optimization with closed-form updates.
Correction:
WMMSE converges to a stationary point (KKT point) of the sum-rate problem, which is a local optimum or saddle point, not necessarily the global optimum. In practice, running WMMSE from multiple random initializations and taking the best result improves the chance of finding a good solution. For massive MIMO with favorable propagation, the interference coupling is weak and the landscape has few local optima, so a single run usually suffices.
WMMSE Algorithm Convergence
Observe how the weighted sum rate evolves over WMMSE iterations. Compare convergence speed for different numbers of users and antenna counts.
Computational Cost of WMMSE in Real-Time Systems
The WMMSE algorithm requires 10-50 iterations, each involving matrix inversions of size $M \times M$ and a convex power allocation step. For massive MIMO with $K$ users and 1 ms scheduling granularity (5G NR), the algorithm must complete within approximately 100 microseconds on the baseband processor. This is achievable for small $K$ but becomes challenging for dense deployments with 64+ users.
- 5G NR scheduling period: 1 ms (2 slots at 30 kHz SCS)
- Baseband processing budget: typically 100-200 microseconds for power allocation
- WMMSE with 30 iterations: ~50 microseconds on a modern DSP at moderate user counts
- For larger user counts, heuristic methods from Section 5.4 are preferred
Quick Check
What is the key mathematical identity that enables the WMMSE algorithm?
The rate equals $-\log_2 e_k^{\mathrm{MMSE}}$, where $e_k^{\mathrm{MMSE}}$ is the minimum MSE achieved by the MMSE receiver
The sum rate is concave in the power vector under favorable propagation
The interference term can be linearized without loss of optimality
Water-filling is optimal when treating interference as noise
This identity connects rate maximization to MSE minimization. The WMMSE algorithm exploits this by introducing auxiliary weights that decouple the problem into alternating convex steps, each with a closed-form solution.
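The identity is easy to verify numerically. Below is a toy scalar-channel check; the gain and noise values are illustrative assumptions.

```python
import numpy as np

# Scalar channel y = h0*s0 + h1*s1 + n with unit-power symbols and
# noise power sigma2 (all values are illustrative).
h0, h1, sigma2 = 1.3, 0.4, 0.2

sinr = h0**2 / (h1**2 + sigma2)
# Minimum MSE of the MMSE receiver for s0:
e_mmse = 1.0 - h0**2 / (h0**2 + h1**2 + sigma2)

# Check log2(1 + SINR) = -log2(e_MMSE)
assert abs(np.log2(1.0 + sinr) + np.log2(e_mmse)) < 1e-12
```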
Key Takeaway
Weighted sum-rate maximization is NP-hard in general but yields to iterative methods. The WMMSE algorithm, which alternates between MMSE receiver updates, weight updates, and convex power allocation, converges to a KKT point and is the standard workhorse for sum-rate optimization in massive MIMO systems.