Semidefinite Relaxation and Gaussian Randomization

Lift, Relax, Randomize

Semidefinite relaxation (SDR) is the most powerful polynomial-time approach for unit-modulus QCQPs. Its three-step structure β€” lift the vector to a PSD matrix, relax the rank-1 constraint, randomize to get a feasible solution β€” is a central optimization pattern that appears throughout signal processing. For RIS problems at moderate $N$, SDR gives the tightest polynomial-time bound on the global optimum and the highest-quality feasible solution.

Definition: The Lifting Transformation

The unit-modulus QCQP $\max_{|\tilde{\phi}_n| = 1} \boldsymbol{\tilde{\phi}}^H \mathbf{A} \boldsymbol{\tilde{\phi}}$ is rewritten in terms of the lifted variable $\mathbf{X} = \boldsymbol{\tilde{\phi}} \boldsymbol{\tilde{\phi}}^H \in \mathbb{C}^{(N+1) \times (N+1)}$. Key identities:

  • $\boldsymbol{\tilde{\phi}}^H \mathbf{A} \boldsymbol{\tilde{\phi}} = \text{tr}(\mathbf{A}\mathbf{X})$.
  • $\mathbf{X} = \boldsymbol{\tilde{\phi}} \boldsymbol{\tilde{\phi}}^H \succeq 0$ and $\text{rank}(\mathbf{X}) = 1$.
  • The unit-modulus constraint $|\tilde{\phi}_n| = 1$ is equivalent to the diagonal constraint $[\mathbf{X}]_{nn} = 1$.

The lifted QCQP is

$$\max_{\mathbf{X} \succeq 0}\ \text{tr}(\mathbf{A}\mathbf{X}) \quad \text{s.t.} \quad [\mathbf{X}]_{nn} = 1 \ \forall n, \quad \text{rank}(\mathbf{X}) = 1.$$

The objective and the diagonal constraints are now linear in $\mathbf{X}$, and the PSD constraint is convex β€” the only non-convexity is the rank constraint.
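As a quick sanity check, the lifting identities above can be verified numerically for a random unit-modulus vector (a minimal numpy sketch; the matrix standing in for $\mathbf{A}$ is an arbitrary Hermitian PSD example, not from any specific channel model):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8  # lifted dimension (N+1 in the text); arbitrary for this check

# Arbitrary Hermitian PSD matrix standing in for A
M = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
A = M @ M.conj().T

# Random unit-modulus vector and its lift X = phi phi^H
phi = np.exp(1j * rng.uniform(0, 2 * np.pi, N))
X = np.outer(phi, phi.conj())

# Identity 1: phi^H A phi == tr(A X)
quad = np.real(phi.conj() @ A @ phi)
lifted = np.real(np.trace(A @ X))

# Identity 2: X is PSD with rank 1 (one nonzero eigenvalue)
eigvals = np.linalg.eigvalsh(X)

# Identity 3: unit modulus per entry <=> unit diagonal of X
print(np.isclose(quad, lifted), np.sum(eigvals > 1e-9), np.allclose(np.diag(X).real, 1.0))
```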

The rank constraint is the sole source of non-convexity. Without it, the problem is a standard SDP β€” solvable in polynomial time by interior-point methods. The SDR approach drops the rank constraint and solves the resulting SDP.

Theorem: SDR Provides a Tight Upper Bound

The SDR problem

$$\mathbf{X}^{\text{SDR}} = \arg\max_{\mathbf{X} \succeq 0}\ \text{tr}(\mathbf{A}\mathbf{X}) \quad \text{s.t.} \quad [\mathbf{X}]_{nn} = 1$$

is a convex SDP, solvable in polynomial time by interior-point methods ($O(N^{6.5})$). Its optimal value $\text{tr}(\mathbf{A}\mathbf{X}^{\text{SDR}})$ provides an upper bound on the original QCQP optimum. When $\text{rank}(\mathbf{X}^{\text{SDR}}) = 1$, the relaxation is tight: $\mathbf{X}^{\text{SDR}} = \boldsymbol{\tilde{\phi}}^\star (\boldsymbol{\tilde{\phi}}^\star)^H$, and $\boldsymbol{\tilde{\phi}}^\star$ is extracted by eigendecomposition.

For $\text{rank}(\mathbf{A}) = 1$, the SDR is always tight. For higher rank, tightness is not guaranteed, but the bound is typically close to the global optimum.

Dropping a constraint can only enlarge the feasible region, so the relaxed optimum is at least as large as the original optimum. Empirically, the relaxation gap is small for RIS-relevant $\mathbf{A}$ β€” usually $\leq 1$ dB.

SDR with Gaussian Randomization

Complexity: $O(N^{6.5})$ for the SDP plus $O(L N^2)$ for $L$ randomizations; typically $L = 1000$.
Input: PSD matrix $\mathbf{A} \in \mathbb{C}^{(N+1) \times (N+1)}$; randomization count $L$.
Output: Unit-modulus $\boldsymbol{\tilde{\phi}}^\star$ with objective $\geq (\pi/4) \cdot$ (SDR bound).
1. Solve the SDP $\mathbf{X}^{\text{SDR}} = \arg\max_{\mathbf{X} \succeq 0}\ \text{tr}(\mathbf{A}\mathbf{X})$ s.t. $[\mathbf{X}]_{nn} = 1$.
2. If $\text{rank}(\mathbf{X}^{\text{SDR}}) = 1$: eigendecompose $\mathbf{X}^{\text{SDR}} = \mathbf{u}\mathbf{u}^H$; return $\tilde{\phi}_n = u_n/|u_n|$.
3. Otherwise (Gaussian randomization):
4. $\quad$ Draw $L$ random vectors $\mathbf{z}_\ell \sim \mathcal{CN}(\mathbf{0}, \mathbf{X}^{\text{SDR}})$ for $\ell = 1, \ldots, L$.
5. $\quad$ Quantize each to unit modulus element-wise: $[\boldsymbol{\tilde{\phi}}_\ell]_n = [\mathbf{z}_\ell]_n / |[\mathbf{z}_\ell]_n|$.
6. $\quad$ Evaluate the objective $f(\boldsymbol{\tilde{\phi}}_\ell)$ for each.
7. $\quad$ Return $\boldsymbol{\tilde{\phi}}^\star = \arg\max_\ell f(\boldsymbol{\tilde{\phi}}_\ell)$.
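Steps 4–7 can be sketched with numpy alone. In this hedged sketch, `X_sdr` is a hand-built feasible matrix (unit diagonal, PSD, rank 2) rather than an actual SDP solution, so the randomization mechanics can be shown without a solver; the helper name `gaussian_randomization` is illustrative:

```python
import numpy as np

def gaussian_randomization(A, X_sdr, L=500, seed=None):
    """Best-of-L unit-modulus rounding of samples z ~ CN(0, X_sdr)."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    # Factor X_sdr = C C^H via eigendecomposition (X_sdr may be singular)
    w, V = np.linalg.eigh(X_sdr)
    C = V * np.sqrt(np.clip(w, 0.0, None))
    best_f, best_phi = -np.inf, None
    for _ in range(L):
        g = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
        z = C @ g                              # z ~ CN(0, X_sdr)
        phi = z / np.abs(z)                    # element-wise unit-modulus quantization
        f = np.real(phi.conj() @ A @ phi)      # objective phi^H A phi
        if f > best_f:
            best_f, best_phi = f, phi
    return best_phi, best_f

# Illustrative feasible X (unit diagonal, PSD, rank 2) and PSD objective A
rng = np.random.default_rng(2)
n = 6
u = np.exp(1j * rng.uniform(0, 2 * np.pi, n))
v = np.exp(1j * rng.uniform(0, 2 * np.pi, n))
X = 0.5 * (np.outer(u, u.conj()) + np.outer(v, v.conj()))
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = M @ M.conj().T
phi, f_best = gaussian_randomization(A, X, L=500, seed=3)
print(f"best-of-500 objective: {f_best:.3f}")
```

Sampling via the eigendecomposition factor (rather than Cholesky) handles the rank-deficient covariances that higher-rank SDR solutions typically produce.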

The $L$ randomizations essentially "sample" the set of rank-1 approximations to $\mathbf{X}^{\text{SDR}}$. The probability that at least one of the $L$ samples achieves a $(\pi/4)$-approximation of the SDR bound increases with $L$.

Theorem: The $(\pi/4)$ Approximation Guarantee

For the unit-modulus QCQP with $\mathbf{A} \succeq 0$, the expected objective of a single random rounding is

$$\mathbb{E}_{\mathbf{z} \sim \mathcal{CN}(\mathbf{0}, \mathbf{X}^{\text{SDR}})}\big[f(\mathbf{z}/|\mathbf{z}|)\big] \geq \frac{\pi}{4}\, f_{\text{SDR}},$$

where $f_{\text{SDR}} = \text{tr}(\mathbf{A}\mathbf{X}^{\text{SDR}})$. Hence, taking the best of $L$ randomizations gives

$$f(\boldsymbol{\tilde{\phi}}^\star) \geq \frac{\pi}{4}\, f_{\text{SDR}}$$

with probability approaching 1 as $L \to \infty$. In practice, $L \geq 1000$ suffices for the guarantee to hold with confidence $\geq 0.99$.

Gaussian randomization preserves the "direction" of the SDR solution and perturbs it by a controlled amount. In expectation the randomized objective is at least a fraction $(\pi/4)$ of the SDR objective β€” the constant arises from evaluating the pairwise expectation $\mathbb{E}[\mathrm{sgn}(z_i)\,\overline{\mathrm{sgn}(z_j)}]$ for jointly circularly symmetric complex Gaussian $(z_i, z_j)$.

Empirically, Randomization Is Nearly Tight

The $(\pi/4)$ bound is a worst-case guarantee. In practice, for RIS problems with typical channels, Gaussian randomization achieves $> 95\%$ of the SDR objective, often $> 99\%$. The theory is far more pessimistic than the practice. This is why SDR is the "gold standard" in RIS papers β€” it delivers near-optimal solutions at the cost of a heavy SDP solve.

SDR Lift and Gaussian Randomization

Visualization of the three-step SDR pipeline. First, the unit-modulus vector is lifted to a rank-1 PSD matrix. Second, the rank constraint is relaxed and an SDP is solved, yielding a (generally higher-rank) $\mathbf{X}^{\text{SDR}}$. Third, Gaussian samples are drawn from $\mathcal{CN}(\mathbf{0}, \mathbf{X}^{\text{SDR}})$ and projected onto the unit-modulus torus; the best projection is returned.

SDR Approximation Gap vs. $L$

For a synthetic rank-$r$ problem, plot the normalized SDR approximation (best-of-$L$ over the SDR upper bound) as a function of $L$. At $L = 1$ the average is $\sim \pi/4 \approx 0.785$; by $L = 100$ nearly all instances achieve $> 0.95$; by $L = 1000$ the result is practically optimal.


Example: SDR on an $N = 2$ Toy Problem

Let $\mathbf{A} = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}$ (real and PSD). Find the unit-modulus maximizer and compare with SDR. Writing $\boldsymbol{\phi} = (e^{j\theta_1}, e^{j\theta_2})^T$, the objective is $\boldsymbol{\phi}^H \mathbf{A} \boldsymbol{\phi} = 3 + 2\cos(\theta_1 - \theta_2)$, maximized at $\theta_1 = \theta_2$ with value 5; the SDR optimum is the rank-1 $\mathbf{X} = \mathbf{1}\mathbf{1}^T$ with $\text{tr}(\mathbf{A}\mathbf{X}) = 5$, so the relaxation is tight here.
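The toy problem can be checked in a few lines of numpy (a sketch: since the objective depends only on the phase difference, a 1-D grid suffices for the brute force):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 1.0]])  # real, PSD

# Unit-modulus objective: phi = (e^{j t1}, e^{j t2}) gives
# phi^H A phi = 3 + 2 cos(t1 - t2), so only the phase difference matters.
dt = np.linspace(0, 2 * np.pi, 3600, endpoint=False)
f_max = (3 + 2 * np.cos(dt)).max()   # maximum 5, attained at t1 = t2

# SDR: max tr(A X) over PSD X with unit diagonal. For 2x2 this is
# 3 + 2 Re(X12) with |X12| <= 1, maximized by X12 = 1, i.e. X = ones (rank 1).
X = np.ones((2, 2))
f_sdr = np.trace(A @ X)              # also 5: the relaxation is tight
print(f_max, f_sdr)
```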

Common Mistake: SDR Does Not Scale

Mistake:

"SDR is the gold standard, so I'll use it for my N=512N = 512 panel."

Correction:

SDR's $O(N^{6.5})$ interior-point cost is prohibitive above $N \sim 128$. For $N = 512$: $512^{6.5} \approx 4 \times 10^{17}$ flops β€” hours of computation per AO iteration. Manifold optimization (Section 6.3) scales as $O(N^2)$ per iteration and is the only practical choice for large $N$. Use SDR for benchmarking or small-to-medium deployments; use manifold or element-wise methods for production at scale.
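The back-of-envelope scaling behind this correction can be reproduced directly (the flop counts here are just the asymptotic exponents with constants ignored, so they indicate ratios, not wall-clock times):

```python
# Compare the interior-point SDP proxy N^6.5 with the per-iteration
# manifold-optimization proxy N^2 (constants ignored).
for N in (64, 128, 256, 512):
    sdp, manifold = N ** 6.5, N ** 2
    print(f"N={N:4d}  N^6.5 ~ {sdp:.1e}   N^2 ~ {manifold:.0e}   ratio ~ {sdp / manifold:.1e}")
```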

πŸ”§ Engineering Note

SDP Solvers in Practice

Deploying SDR requires an SDP solver. Common choices:

  • CVX (MATLAB) or CVXPY (Python): Disciplined convex programming frameworks. Easy to express the problem; use MOSEK or SeDuMi as the back-end solver.
  • MOSEK: Commercial, fast, excellent for medium-size SDPs ($N \leq 100$).
  • SCS: Open-source alternative, solver of choice for batch-mode Monte Carlo. Slightly less accurate, much faster.
  • Custom ADMM: For production, a custom implementation can exploit problem structure (low-rank $\mathbf{A}$, Toeplitz $\mathbf{B}$) for a $5$–$10\times$ speedup over general-purpose SDP solvers.
Practical Constraints

  • CVX / MOSEK runtimes: $N = 100$: $\sim 0.1$ s per SDP; $N = 200$: $\sim 2$ s; $N = 400$: $\sim 30$ s.
  • Memory for the $\mathbf{X}$ variable: $N^2$ complex doubles; $N = 512$ needs $\approx 4$ MB for $\mathbf{X}$ alone.
  • Randomization typically takes $< 1\%$ of the total SDR runtime.