Semidefinite Relaxation and Gaussian Randomization
Lift, Relax, Randomize
Semidefinite relaxation (SDR) is the most powerful polynomial-time approach for unit-modulus QCQPs. Its three-step structure (lift the vector to a PSD matrix, relax the rank-1 constraint, randomize to get a feasible solution) is a central optimization pattern that appears throughout signal processing. For RIS problems at moderate $N$, SDR gives the tightest polynomial-time bound on the global optimum and the highest-quality feasible solution.
Definition: The Lifting Transformation
The unit-modulus QCQP $\max_{\boldsymbol{\theta}} \boldsymbol{\theta}^H \mathbf{A} \boldsymbol{\theta}$ subject to $|\theta_n| = 1$ is rewritten in terms of the lifted variable $\boldsymbol{\Theta} = \boldsymbol{\theta}\boldsymbol{\theta}^H \in \mathbb{C}^{N \times N}$. Key identities:
- $\boldsymbol{\theta}^H \mathbf{A} \boldsymbol{\theta} = \operatorname{tr}(\mathbf{A}\,\boldsymbol{\theta}\boldsymbol{\theta}^H) = \operatorname{tr}(\mathbf{A}\boldsymbol{\Theta})$.
- $\boldsymbol{\Theta} \succeq \mathbf{0}$ and $\operatorname{rank}(\boldsymbol{\Theta}) = 1$.
- The unit-modulus constraint $|\theta_n| = 1$ is equivalent to the diagonal constraint $[\boldsymbol{\Theta}]_{nn} = 1$, $n = 1, \dots, N$.
The lifted QCQP is
$$\max_{\boldsymbol{\Theta}} \; \operatorname{tr}(\mathbf{A}\boldsymbol{\Theta}) \quad \text{s.t.} \quad [\boldsymbol{\Theta}]_{nn} = 1 \;\; \forall n, \quad \boldsymbol{\Theta} \succeq \mathbf{0}, \quad \operatorname{rank}(\boldsymbol{\Theta}) = 1.$$
The objective and feasibility constraints are now linear in $\boldsymbol{\Theta}$; the only non-convexity is the rank constraint.
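The lifting identities above can be checked numerically. The sketch below uses a random Hermitian PSD objective matrix and a random unit-modulus vector (all names are illustrative, not from a specific RIS model):

```python
# Numerical check of the lifting identities on a random instance.
import numpy as np

rng = np.random.default_rng(0)
N = 8

# Random Hermitian PSD objective matrix A = B B^H.
B = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
A = B @ B.conj().T

# Random unit-modulus vector theta and its lift Theta = theta theta^H.
theta = np.exp(1j * rng.uniform(0, 2 * np.pi, N))
Theta = np.outer(theta, theta.conj())

# Identity 1: theta^H A theta == tr(A Theta).
assert np.isclose((theta.conj() @ A @ theta).real, np.trace(A @ Theta).real)

# Identity 2: Theta is PSD, rank 1, with unit diagonal.
eigvals = np.linalg.eigvalsh(Theta)
assert eigvals.min() > -1e-10                 # PSD
assert np.sum(eigvals > 1e-10) == 1           # rank 1
assert np.allclose(np.diag(Theta).real, 1.0)  # encodes |theta_n| = 1
```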
The rank constraint is the sole source of non-convexity. Without it, the problem is a standard SDP, solvable in polynomial time by interior-point methods. The SDR approach drops the rank constraint and solves the resulting SDP.
Theorem: SDR Provides a Tight Upper Bound
The SDR problem
$$\max_{\boldsymbol{\Theta}} \; \operatorname{tr}(\mathbf{A}\boldsymbol{\Theta}) \quad \text{s.t.} \quad [\boldsymbol{\Theta}]_{nn} = 1 \;\; \forall n, \quad \boldsymbol{\Theta} \succeq \mathbf{0}$$
is a convex SDP, solvable in polynomial time by interior-point methods ($\mathcal{O}(N^{4.5} \log(1/\epsilon))$ for solution accuracy $\epsilon$). Its optimal value $f_{\mathrm{SDR}}$ provides an upper bound on the original QCQP optimum $f^\star$. When $\operatorname{rank}(\boldsymbol{\Theta}^\star) = 1$, the relaxation is tight: $f_{\mathrm{SDR}} = f^\star$, and $\boldsymbol{\theta}^\star$ is extracted from $\boldsymbol{\Theta}^\star = \boldsymbol{\theta}^\star (\boldsymbol{\theta}^\star)^H$ by eigendecomposition.
For $N \le 3$, the SDR is always tight: a complex SDP with $m$ constraints admits an optimal solution of rank $r$ with $r^2 \le m$, so $m = N \le 3$ forces $r = 1$. For larger $N$, tightness is not guaranteed, but the bound is typically close to the global optimum.
Dropping a constraint can only enlarge the feasible region, so the relaxed optimum is at least as large as the original optimum. Empirically, the relaxation gap is small at RIS-relevant problem sizes, usually within a few percent.
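When the SDR solution is rank-1, the extraction step from the theorem is a single eigendecomposition. A minimal sketch, using an exact rank-1 lift as a stand-in for the solver output:

```python
# Extracting theta from a rank-1 SDR solution by eigendecomposition.
import numpy as np

rng = np.random.default_rng(1)
N = 6
theta_true = np.exp(1j * rng.uniform(0, 2 * np.pi, N))
Theta_star = np.outer(theta_true, theta_true.conj())  # stands in for the SDP output

# Principal eigenpair of the Hermitian SDR solution.
vals, vecs = np.linalg.eigh(Theta_star)
u = np.sqrt(vals[-1]) * vecs[:, -1]   # sqrt(lambda_max) * principal eigenvector

# Project phases onto the unit-modulus set (a no-op when Theta is exactly rank 1).
theta_hat = np.exp(1j * np.angle(u))

# theta is recovered only up to a global phase; align before comparing.
phase = np.exp(-1j * np.angle(theta_hat[0] * theta_true[0].conj()))
assert np.allclose(theta_hat * phase, theta_true)
```

The global-phase ambiguity is harmless for RIS: the objective depends only on phase differences.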
SDR with Gaussian Randomization
Complexity: $\mathcal{O}(N^{4.5})$ for the SDP plus $\mathcal{O}(G N^2)$ for $G$ randomizations; typically $G$ is on the order of $10^2$ to $10^3$. The randomizations essentially "sample" the set of rank-1 approximations to $\boldsymbol{\Theta}^\star$. The probability that the best of the $G$ samples achieves a given fraction of the SDR bound increases with $G$.
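The randomization step itself needs no solver. A numpy sketch, assuming $\boldsymbol{\Theta}^\star$ has already been computed (here replaced by a synthetic unit-diagonal PSD matrix, so `f_relax` is the relaxed objective at that matrix rather than the true SDR optimum):

```python
# Gaussian randomization: best-of-G rounding of a lifted solution.
import numpy as np

rng = np.random.default_rng(2)
N, G = 8, 500

# Stand-in for the SDR output: PSD with unit diagonal.
M = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
S = M @ M.conj().T
d = np.sqrt(np.diag(S).real)
Theta_star = S / np.outer(d, d)

# Hermitian PSD objective matrix A.
B = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
A = B @ B.conj().T
f_relax = np.trace(A @ Theta_star).real   # relaxed objective at Theta_star

# Sample xi ~ CN(0, Theta_star), project phases, keep the best.
L = np.linalg.cholesky(Theta_star + 1e-12 * np.eye(N))
best_val, best_theta = -np.inf, None
for _ in range(G):
    xi = L @ (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
    theta = np.exp(1j * np.angle(xi))     # feasible: |theta_n| = 1
    val = (theta.conj() @ A @ theta).real
    if val > best_val:
        best_val, best_theta = val, theta

# With G = 500 the best rounding clears the pi/4 fraction with overwhelming probability.
assert best_val >= (np.pi / 4) * f_relax
```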
Theorem: The Approximation Guarantee
For the unit-modulus QCQP with $\mathbf{A} \succeq \mathbf{0}$, the expected objective of a single random rounding $\hat{\boldsymbol{\theta}} = e^{j \arg(\boldsymbol{\xi})}$ with $\boldsymbol{\xi} \sim \mathcal{CN}(\mathbf{0}, \boldsymbol{\Theta}^\star)$ is
$$\mathbb{E}\big[\hat{\boldsymbol{\theta}}^H \mathbf{A} \hat{\boldsymbol{\theta}}\big] \ge \frac{\pi}{4} f_{\mathrm{SDR}},$$
where $\pi/4 \approx 0.785$. Hence, taking the best of $G$ randomizations gives
$$\max_{g = 1, \dots, G} \hat{\boldsymbol{\theta}}_g^H \mathbf{A} \hat{\boldsymbol{\theta}}_g \ge \frac{\pi}{4} f_{\mathrm{SDR}} \ge \frac{\pi}{4} f^\star$$
with probability approaching 1 as $G \to \infty$. For moderate $G$ (a few hundred samples), this guarantee holds with high confidence in practice.
Gaussian randomization preserves the "direction" of the SDR solution and perturbs by a controlled amount. In expectation the randomized objective is at least a $\pi/4$ fraction of the SDR objective; the bound arises from computing the expectation of $e^{j \arg(\xi_m)} e^{-j \arg(\xi_n)}$ for jointly complex Gaussian $(\xi_m, \xi_n)$.
Single randomization bound
By Zhang and Huang (2006), for Hermitian PSD $\mathbf{A}$: $\mathbb{E}[\hat{\boldsymbol{\theta}}^H \mathbf{A} \hat{\boldsymbol{\theta}}] \ge \frac{\pi}{4} \operatorname{tr}(\mathbf{A} \boldsymbol{\Theta}^\star)$ under appropriate normalization of $\boldsymbol{\Theta}^\star$.
Boost by maximum
With $G$ i.i.d. samples, the maximum of the $G$ realizations converges to the supremum of the rounding distribution. Under mild concentration assumptions, the maximum is within a vanishing gap of the supremum, so a moderate $G$ is effectively optimal.
Tight: $\pi/4$ for certain $\mathbf{A}$
For worst-case $\mathbf{A}$ (MAX-CUT-type instances), the $\pi/4$ factor is tight and cannot be improved. For RIS-relevant $\mathbf{A}$ (rank-1 or nearly rank-1), the actual approximation ratio approaches 1.
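The rank-1 case of the last remark can be seen directly: if $\boldsymbol{\Theta}^\star = \boldsymbol{\theta}\boldsymbol{\theta}^H$ with unit-modulus $\boldsymbol{\theta}$, then $\boldsymbol{\xi} = g\boldsymbol{\theta}$ for a scalar Gaussian $g$, and phase projection returns $\boldsymbol{\theta}$ up to the global phase of $g$, so every rounding achieves ratio exactly 1. A quick check:

```python
# For a rank-1 lifted solution, every Gaussian rounding is exact.
import numpy as np

rng = np.random.default_rng(3)
N = 8
theta = np.exp(1j * rng.uniform(0, 2 * np.pi, N))
Theta_star = np.outer(theta, theta.conj())        # rank-1 SDR solution

B = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
A = B @ B.conj().T
f_sdr = np.trace(A @ Theta_star).real

# xi ~ CN(0, theta theta^H) is xi = g * theta for scalar g ~ CN(0, 1).
for _ in range(20):
    g = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
    theta_hat = np.exp(1j * np.angle(g * theta))  # = theta times a global phase
    val = (theta_hat.conj() @ A @ theta_hat).real
    assert np.isclose(val, f_sdr)                 # approximation ratio exactly 1
```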
Empirically, Randomization Is Nearly Tight
The $\pi/4$ bound is a worst-case guarantee. In practice, for RIS problems with typical channels, Gaussian randomization achieves a far larger fraction of the SDR objective, often nearly all of it. The theory is much more pessimistic than the practice. This is why SDR is the "gold standard" in RIS papers: it delivers near-optimal solutions at the cost of a heavy SDP solve.
[Figure: SDR lift and Gaussian randomization (schematic).]
[Figure: SDR approximation gap vs. $G$. For a synthetic problem, the normalized SDR approximation (best-of-$G$ objective over the SDR upper bound) is plotted as a function of $G$: the average ratio rises quickly with $G$ and approaches 1 as $G$ grows into the hundreds.]
Example: SDR on a Toy Problem
Let $\mathbf{A} \in \mathbb{R}^{2 \times 2}$ be symmetric PSD with off-diagonal entry $a_{12} > 0$. Find the unit-modulus maximizer and compare with SDR.
Set up SDR
Variables: Hermitian $\boldsymbol{\Theta} \succeq \mathbf{0}$ with $[\boldsymbol{\Theta}]_{11} = [\boldsymbol{\Theta}]_{22} = 1$, so $|[\boldsymbol{\Theta}]_{12}| \le 1$ (implied by the PSD constraint). Objective: $\operatorname{tr}(\mathbf{A}\boldsymbol{\Theta}) = a_{11} + a_{22} + 2 a_{12} \operatorname{Re}([\boldsymbol{\Theta}]_{12})$.
Maximize
Maximum at $[\boldsymbol{\Theta}]_{12} = 1$: objective $a_{11} + a_{22} + 2 a_{12}$. This corresponds to $\boldsymbol{\Theta}^\star = \mathbf{1}\mathbf{1}^H$, rank 1, so the SDR is tight and $\boldsymbol{\theta}^\star = [1, 1]^T$.
Verify against QCQP
Direct: with $\boldsymbol{\theta} = [e^{j\phi_1}, e^{j\phi_2}]^T$, the objective is $\boldsymbol{\theta}^H \mathbf{A} \boldsymbol{\theta} = a_{11} + a_{22} + 2 a_{12} \cos(\phi_1 - \phi_2)$. Maximized at $\phi_1 = \phi_2$: objective $a_{11} + a_{22} + 2 a_{12}$. Matches SDR.
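The toy example is easy to verify by brute force. The sketch below picks a hypothetical concrete instance ($a_{11} = a_{22} = 2$, $a_{12} = 1$, chosen here for illustration) and sweeps the phase difference:

```python
# Brute-force check of the 2x2 toy problem on a hypothetical instance.
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])   # a11 = a22 = 2, a12 = 1 (illustrative)

# Global-phase invariance: fix phi1 = 0 and sweep phi = phi1 - phi2.
phis = np.linspace(-np.pi, np.pi, 100001)
vals = A[0, 0] + A[1, 1] + 2 * A[0, 1] * np.cos(phis)  # theta^H A theta

best = vals.max()
# Agrees with the SDR value a11 + a22 + 2*a12 = 6, attained at phi = 0.
assert np.isclose(best, A[0, 0] + A[1, 1] + 2 * A[0, 1])
assert np.isclose(phis[vals.argmax()], 0.0)
```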
Common Mistake: SDR Does Not Scale
Mistake:
"SDR is the gold standard, so I'll use it for my panel."
Correction:
SDR's interior-point cost is prohibitive beyond a few hundred elements: the $\mathcal{O}(N^{4.5})$ flop count translates to hours of computation per AO iteration once $N$ reaches the thousands. Manifold optimization (Section 6.3) scales as $\mathcal{O}(N^2)$ per iteration and is the only practical choice for large $N$. Use SDR for benchmarking or small-to-medium deployments; use manifold or element-wise methods for production at scale.
SDP Solvers in Practice
Deploying SDR requires an SDP solver. Common choices:
- CVX (MATLAB) or CVXPY (Python): Disciplined convex programming frameworks. Easy to express the problem; use MOSEK or SeDuMi as the back-end solver.
- MOSEK: Commercial, fast, excellent for medium-size SDPs ($N$ up to a few hundred).
- SCS: Open-source alternative, solver of choice for batch-mode Monte Carlo. Slightly less accurate, much faster.
- Custom ADMM: For production, a custom implementation can exploit problem structure (low-rank $\mathbf{A}$, Toeplitz structure) for a substantial speedup over general-purpose SDP solvers.
- CVX / MOSEK solve times grow steeply with $N$, roughly following the $\mathcal{O}(N^{4.5})$ interior-point estimate: seconds per SDP at small $N$, minutes and beyond as $N$ reaches the hundreds.
- Memory: the $N \times N$ lifted variable alone stores $N^2$ complex doubles (a few MB at $N$ in the hundreds), and interior-point solvers keep additional factorization workspace on top of that.
- Randomization typically takes a negligible fraction of the total SDR runtime; the SDP solve dominates.