The Passive-Beamforming Subproblem

Fix the Precoder, Choose the Phases

The second half of each AO iteration holds $\mathbf{W}$ fixed and optimizes $\boldsymbol{\Phi}$. Unlike the active subproblem, the passive one is genuinely non-convex: the unit-modulus constraints $|\phi_n| = 1$ define the complex torus, a non-convex set. This chapter frames the passive subproblem abstractly; Chapter 6 develops the three workhorse algorithms (SDR, manifold optimization, element-wise optimization) that solve it.

Definition:

The Passive Beamforming Subproblem

Given fixed $\mathbf{W} = [\mathbf{v}_1, \ldots, \mathbf{v}_K]$, the passive subproblem is

$$\boxed{\;\max_{\boldsymbol{\phi} \in \mathbb{C}^N : |\phi_n| = 1}\; f_{\boldsymbol{\phi}}(\boldsymbol{\phi})\;}$$

where $f_{\boldsymbol{\phi}}$ is the sum-rate (or another utility) with $\boldsymbol{\Phi} = \text{diag}(\boldsymbol{\phi})$ plugged into the per-user SINRs. Using the diagonal-product identity from Chapter 3, $\mathbf{h}_{k,\text{eff}}^H \mathbf{v}_j = \mathbf{h}_{k,d}^H \mathbf{v}_j + \boldsymbol{\phi}^T \left(\text{diag}(\mathbf{h}_{k,2}^*) \mathbf{H}_1 \mathbf{v}_j\right)$ is an affine function of $\boldsymbol{\phi}$, so the SINRs are ratios of quadratics in $\boldsymbol{\phi}$.

The passive subproblem has the structure "maximize a ratio of quadratic forms subject to unit-modulus constraints." This problem class, a QCQP over the complex torus, has a substantial optimization literature, most of which Chapter 6 surveys.

Theorem: Passive Objective as a Ratio of Quadratic Forms

For fixed $\mathbf{W}$, the $k$-th SINR can be written as

$$\text{SINR}_k(\boldsymbol{\phi}) = \frac{\tilde{\boldsymbol{\phi}}^H \mathbf{A}_k \tilde{\boldsymbol{\phi}}}{\tilde{\boldsymbol{\phi}}^H \mathbf{B}_k \tilde{\boldsymbol{\phi}}}$$

where $\tilde{\boldsymbol{\phi}} = [\boldsymbol{\phi}^T, 1]^T \in \mathbb{C}^{N+1}$ (augmented to handle the direct path's constant term) and $\mathbf{A}_k, \mathbf{B}_k \in \mathbb{C}^{(N+1) \times (N+1)}$ are Hermitian PSD matrices constructed from the channels and $\mathbf{W}$. The passive sum-rate subproblem is

$$\max_{\boldsymbol{\phi} : |\phi_n| = 1} \sum_k \log_2\!\left(1 + \frac{\tilde{\boldsymbol{\phi}}^H \mathbf{A}_k \tilde{\boldsymbol{\phi}}}{\tilde{\boldsymbol{\phi}}^H \mathbf{B}_k \tilde{\boldsymbol{\phi}}}\right)$$

with the additional constraint $|\tilde{\phi}_{N+1}| = 1$ (set to $1$ without loss of generality by choice of phase reference).

Each SINR is a ratio whose numerator and denominator are both quadratic in $\boldsymbol{\phi}$, because the effective channel is affine in $\boldsymbol{\phi}$ and products of affine functions are quadratic. The full objective is then a sum of logs of ratios of quadratics.
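To make the construction concrete, the sketch below builds $\mathbf{A}_k$ and $\mathbf{B}_k$ from randomly drawn toy channels (all sizes, channel draws, and precoders here are illustrative assumptions, not values from the text) and checks numerically that the ratio of quadratic forms matches a direct SINR computation:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, K = 4, 8, 2   # BS antennas, RIS elements, users (toy sizes)
sigma2 = 0.1        # noise power

# Toy channels: BS-RIS H1, direct h_d[k], RIS-user h_2[k]; fixed precoders V
H1 = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
h_d = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)
h_2 = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)
V = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)

# q_{k,j} = [diag(h_{k,2}^*) H1 v_j ; h_{k,d}^H v_j], so h_eff^H v_j = phi_tilde^T q_{k,j}
def q(k, j):
    c = np.diag(h_2[k].conj()) @ H1 @ V[:, j]
    d = h_d[k].conj() @ V[:, j]
    return np.concatenate([c, [d]])

e = np.zeros(N + 1); e[-1] = 1.0
A = [np.outer(q(k, k).conj(), q(k, k)) for k in range(K)]            # desired-signal quadratic
B = [sum(np.outer(q(k, j).conj(), q(k, j)) for j in range(K) if j != k)
     + sigma2 * np.outer(e, e) for k in range(K)]                    # interference + noise

# Compare against a direct SINR computation for a random unit-modulus phi
phi = np.exp(1j * rng.uniform(0, 2 * np.pi, N))
pt = np.concatenate([phi, [1.0]])                                    # augmented phi_tilde
for k in range(K):
    heff = h_d[k].conj() + phi @ np.diag(h_2[k].conj()) @ H1         # row vector h_eff^H
    sig = abs(heff @ V[:, k]) ** 2
    intf = sum(abs(heff @ V[:, j]) ** 2 for j in range(K) if j != k) + sigma2
    ratio = (pt.conj() @ A[k] @ pt).real / (pt.conj() @ B[k] @ pt).real
    assert np.isclose(sig / intf, ratio)
```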

Three Ways to Attack the Subproblem

The unit-modulus quadratic problem has three canonical algorithmic approaches, each explored in Chapter 6:

  1. Semidefinite Relaxation (SDR): Lift $\tilde{\boldsymbol{\phi}}$ to a PSD matrix $\mathbf{X} = \tilde{\boldsymbol{\phi}} \tilde{\boldsymbol{\phi}}^H$, relax the rank-1 constraint, solve an SDP, then use Gaussian randomization to recover $\boldsymbol{\phi}$. Strong theoretical guarantees; $O(N^{6.5})$ solve time.
  2. Manifold optimization: View the torus as a Riemannian manifold, perform gradient ascent in its tangent space, and retract to the manifold. Fast, $O(N \log N)$ per iteration; converges to a stationary point under standard smoothness assumptions.
  3. Element-wise optimization: Update one $\phi_n$ at a time while fixing the others. Each update is a one-dimensional problem with a closed-form solution (maximizing a sinusoid in the phase). Simplest; linear in $N$ per full sweep, and often competitive in practice.
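A taste of approach 1's recovery stage: the snippet below performs only the Gaussian-randomization step, using a toy Hermitian PSD objective $\mathbf{A}$ and a hand-built PSD $\mathbf{X}$ with unit diagonal standing in for an actual SDP solution (both are illustrative assumptions; Chapter 6 develops the full SDR pipeline):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 8

# Toy stand-ins (assumptions): Hermitian PSD objective A and an "SDP solution" X
G = rng.standard_normal((N + 1, N + 1)) + 1j * rng.standard_normal((N + 1, N + 1))
A = G @ G.conj().T
u = np.exp(1j * rng.uniform(0, 2 * np.pi, N + 1))
X = 0.8 * np.outer(u, u.conj()) + 0.2 * np.eye(N + 1)   # PSD with unit diagonal

Lc = np.linalg.cholesky(X)
best_val, best_phi = -np.inf, None
for _ in range(100):
    g = (rng.standard_normal(N + 1) + 1j * rng.standard_normal(N + 1)) / np.sqrt(2)
    r = Lc @ g                                       # r ~ CN(0, X)
    cand = np.exp(1j * np.angle(r))                  # project onto unit modulus
    cand = cand * np.exp(-1j * np.angle(cand[-1]))   # phase reference: last entry -> 1
    val = (cand.conj() @ A @ cand).real              # evaluate the quadratic objective
    if val > best_val:
        best_val, best_phi = val, cand
```

Each random sample is feasible by construction, so the best of many samples gives a feasible point whose quality the SDR literature bounds relative to the SDP optimum.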

For the AO framework of this chapter, any of the three works as the passive update step.
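To give approach 2 concrete form, here is a minimal Riemannian-gradient sketch on a toy unit-modulus quadratic $\max_{|\phi_n| = 1} \boldsymbol{\phi}^H \mathbf{R} \boldsymbol{\phi}$ (a stand-in objective, not the sum rate; step size and iteration count are also illustrative): compute the Euclidean gradient, project it onto the tangent space of the torus, step, and retract by renormalizing each element.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 16

# Toy Hermitian PSD objective matrix R (stand-in for the actual sum-rate objective)
G = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
R = G @ G.conj().T / N

phi = np.exp(1j * rng.uniform(0, 2 * np.pi, N))
f = lambda p: (p.conj() @ R @ p).real
f_init = f(phi)

step = 0.01
for _ in range(500):
    g = 2 * R @ phi                           # Euclidean gradient of phi^H R phi wrt conj(phi)
    rg = g - np.real(g * phi.conj()) * phi    # project onto the tangent space of the torus
    phi = phi + step * rg                     # gradient step in the tangent direction
    phi = phi / np.abs(phi)                   # retraction: back to unit modulus
```

The projection subtracts, per element, the radial component $\mathrm{Re}(g_n \phi_n^*)\phi_n$, leaving only the direction that rotates the phase; the retraction then keeps the iterate feasible.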

Example: Element-wise Update for a Single-User Problem

Derive the closed-form element-wise update for the single-user MISO-RIS problem: $\max_{\boldsymbol{\phi} : |\phi_n| = 1} |\mathbf{h}_{\text{eff}}^H \mathbf{v}^\star|^2$, where $\mathbf{h}_{\text{eff}}^H \mathbf{v}^\star = a + \boldsymbol{\phi}^T \mathbf{b}$ for some scalar $a$ and vector $\mathbf{b}$.
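One way to see the closed form: holding every element except $\phi_n$ fixed, write $a + \boldsymbol{\phi}^T \mathbf{b} = c_n + \phi_n b_n$ with $c_n = a + \sum_{m \neq n} \phi_m b_m$; then $|c_n + \phi_n b_n|^2$ is maximized by rotating $\phi_n b_n$ onto $c_n$, i.e. $\phi_n^\star = e^{j(\arg c_n - \arg b_n)}$. A minimal NumPy sketch of the resulting sweep, with random toy values of $a$ and $\mathbf{b}$ (illustrative assumptions) in place of actual channel quantities:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 32

# Toy stand-ins for a = h_d^H v* (scalar) and b = diag(h_2^*) H1 v* (vector)
a = rng.standard_normal() + 1j * rng.standard_normal()
b = rng.standard_normal(N) + 1j * rng.standard_normal(N)

phi = np.exp(1j * rng.uniform(0, 2 * np.pi, N))
obj = lambda p: abs(a + p @ b) ** 2
obj_init = obj(phi)

for sweep in range(5):
    for n in range(N):
        c_n = a + phi @ b - phi[n] * b[n]                       # everything except element n
        phi[n] = np.exp(1j * (np.angle(c_n) - np.angle(b[n])))  # align phi_n b_n with c_n
```

Each inner update exactly maximizes the one-dimensional subproblem, so the sweep is monotone coordinate ascent, and after a few sweeps the value sits close to the matched-filter upper bound $|a| + \sum_n |b_n|$.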

Element-wise Passive Update: One Element at a Time

Visualize the element-wise update on a complex-plane diagram. Each $\phi_n$ rotates to align its contribution with the combined contribution of the other elements and the direct path. After one sweep, the phases reach near-matched-filter alignment; after 3-5 sweeps, the objective plateaus.

Common Mistake: The Rank-1 Assumption Is Single-User-Only

Mistake:

"The element-wise matched-filter rule (Example 5.4) is the optimal passive update for any scenario."

Correction:

The closed-form element-wise rule is optimal only for single-user scenarios (or, more generally, whenever the passive subproblem reduces to maximizing a single $|\boldsymbol{\phi}^T \mathbf{b}|^2$). For multi-user sum-rate with non-trivial interference, the objective is a sum of logs of ratios of quadratics, not a single squared inner product, and element-wise updates become a heuristic. Use SDR or manifold methods (Chapter 6) for multi-user scenarios.

πŸ”§Engineering Note

Which Passive Algorithm to Use?

Rule of thumb for selecting the passive-update algorithm:

  • Single-user: element-wise closed-form. Fastest.
  • $K \leq 4$, modest $N$ ($\leq 128$): SDR. Strong theoretical solution quality.
  • Large $N$ ($> 256$): manifold optimization. SDR becomes too expensive for larger RIS due to the $O(N^{6.5})$ SDP cost.
  • Real-time deployment: element-wise if acceptable; otherwise manifold with warm-starting from the previous iteration.
  • Discrete phases (Chapter 8): project element-wise onto the finite phase grid at each step, or use projected gradient in the manifold framework.
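For the discrete-phase case, nearest-grid projection is one line per element. The helper below (`quantize_phases` is a hypothetical name, not from the text) snaps each continuous phase onto the nearest point of a $2^b$-level grid:

```python
import numpy as np

def quantize_phases(phi, bits):
    """Project each unit-modulus entry of phi onto the nearest 2**bits-level phase grid."""
    grid = 2 * np.pi / (2 ** bits)                    # grid spacing in radians
    return np.exp(1j * grid * np.round(np.angle(phi) / grid))

phi = np.exp(1j * np.array([0.1, 1.9, -2.0]))
phi_q = quantize_phases(phi, bits=2)                  # nearest of {0, pi/2, pi, 3*pi/2}
```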
Practical Constraints
  • SDR time for $N = 256$ via CVX/MOSEK: $\sim 1$ s per solve. Prohibitive in real time.
  • Element-wise sweep cost per step: $\sim 0.1$ ms at $N = 256$ in NumPy.
  • Manifold gradient descent convergence: typically 50-200 inner iterations.