Direct Discrete Optimization
Skip the Continuous Detour
Projection works well but takes a continuous-phase detour. Direct discrete optimization works natively in the discrete feasible set, potentially saving compute and recovering the small multi-user penalty of Section 8.2. The workhorse method is discrete BCD: update one phase at a time by choosing the best level from the discrete grid. This reduces to solving an L-way discrete search per coordinate, which is trivial for practical numbers of levels. For very small problems, branch-and-bound gives provable global optima. For very large problems with a tight pilot budget, tabu search and simulated annealing serve as escape heuristics.
Discrete Block Coordinate Descent
Complexity: one L-way search per element per sweep for an L-level grid; typically 3-10 sweeps. Step 5 is a closed-form maximization of a cosine over L points; in fact, the maximizer is the nearest grid point to the continuous per-coordinate optimum. So the discrete BCD update is simply the continuous BCD result projected to the nearest grid level, identical in the single-user case to Section 8.2.
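The sweep can be sketched as follows. This is a minimal single-user stand-in, assuming the objective |Σ h_n e^{jθ_n}|² with a hypothetical cascaded-channel vector `h`; the chapter's exact model and notation may differ:

```python
import numpy as np

def discrete_bcd(h, b, n_sweeps=10, seed=0):
    """Discrete BCD for max |sum_n h_n exp(j*theta_n)|^2 over a 2^b-level grid.

    Illustrative single-user stand-in: `h` plays the role of a cascaded
    per-element channel (an assumption, not the chapter's exact setup).
    """
    L = 2 ** b
    grid = 2 * np.pi * np.arange(L) / L          # {0, 2pi/L, ..., 2pi(L-1)/L}
    rng = np.random.default_rng(seed)
    theta = grid[rng.integers(L, size=len(h))]   # random discrete start

    def obj(t):
        return np.abs(np.sum(h * np.exp(1j * t))) ** 2

    prev = obj(theta)
    for _ in range(n_sweeps):
        for n in range(len(h)):
            # residual from all other (already possibly quantized) elements
            rest = np.sum(h * np.exp(1j * theta)) - h[n] * np.exp(1j * theta[n])
            t_star = np.angle(rest) - np.angle(h[n])   # continuous maximizer
            # best discrete level = nearest grid point to t_star (mod 2pi)
            theta[n] = grid[np.argmin(np.abs(np.angle(
                np.exp(1j * (grid - t_star)))))]
        cur = obj(theta)
        if cur <= prev + 1e-12:                  # monotone; stop once flat
            break
        prev = cur
    return theta, obj(theta)
```

Each inner update is an L-way search; the sweep terminates as soon as a full pass leaves the objective unchanged.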
Discrete BCD = Projected Continuous BCD
The closed-form continuous-BCD update aligns each coordinate with its residual phase; the discrete-BCD update rounds that value to the nearest grid level. For a single user (where one full continuous sweep already reaches the global optimum), this is just projection of the continuous sweep's output, no different from Section 8.2. Discrete BCD yields benefits over projection only in multi-user scenarios where multiple continuous-BCD sweeps are needed and the coupling between coordinates evolves across sweeps: the discrete update at each sweep uses the already-quantized phases from the previous coordinates, not continuous values. This tracks discrete feasibility at every step and recovers a small additional gain relative to projection-after-convergence.
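The per-coordinate equivalence is easy to check numerically: exhaustively maximizing the cosine over the grid and rounding the continuous maximizer to the nearest grid level pick the same point. A small sketch, assuming the grid convention {0, 2π/L, ...}:

```python
import numpy as np

def nearest_grid(t_star, b):
    """Round a continuous phase to the nearest point of a 2^b-level grid."""
    grid = 2 * np.pi * np.arange(2 ** b) / 2 ** b
    # wrapped angular distance handles the 0 / 2pi seam correctly
    return grid[np.argmin(np.abs(np.angle(np.exp(1j * (grid - t_star)))))]

def best_discrete_level(t_star, b):
    """Exhaustively maximize cos(theta - t_star) over the same grid."""
    grid = 2 * np.pi * np.arange(2 ** b) / 2 ** b
    return grid[np.argmax(np.cos(grid - t_star))]
```

Both pick the same level because cos(d) decreases monotonically in the wrapped distance |d| on [0, π].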
Theorem: Monotone Convergence of Discrete BCD
Discrete BCD produces a monotone non-decreasing sequence of objective values. Since the discrete feasible set is finite, the iterates can visit only finitely many distinct points, so the algorithm terminates after finitely many iterations. In practice, 3-10 sweeps suffice for convergence.
At termination, the output is a coordinate-stationary point: no single-element update can improve the objective.
Same monotonicity argument as continuous BCD (Chapter 6), adapted to the finite feasible set. Each coordinate update picks the best discrete level, so the objective is non-decreasing. Since the feasible set has finitely many points and the objective is bounded, the sequence must eventually stabilize.
Monotonicity
By definition, each coordinate update picks the objective-maximizing level, so the objective value cannot decrease.
Termination
The feasible set is finite, so the objective takes finitely many values; a monotone non-decreasing sequence over finitely many values must stabilize.
Branch-and-Bound for Small Problems
Complexity: exponential in the number of elements in the worst case, but typically much faster with good bounds. Feasible only for small element counts in practice. The SDR upper bound at each node is the workhorse: it prunes most branches, giving orders-of-magnitude speedup over naive exhaustive search.
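A minimal branch-and-bound sketch on the same toy single-user objective. For self-containedness it substitutes a simple triangle-inequality bound for the SDR bound: with the first k phases fixed, |total| ≤ |partial sum| + Σ of the remaining |h_n|, which is a valid (if looser) pruning bound:

```python
import numpy as np

def branch_and_bound(h, b):
    """Exact discrete maximization of |sum_n h_n exp(j*theta_n)|^2 by B&B.

    Uses a triangle-inequality upper bound in place of the SDR bound
    (a simplification for this sketch).
    """
    N = len(h)
    grid = 2 * np.pi * np.arange(2 ** b) / 2 ** b
    # tail[k] = sum_{n >= k} |h_n|; tail[N] = 0
    tail = np.concatenate([np.cumsum(np.abs(h)[::-1])[::-1], [0.0]])
    best = {"val": -1.0, "theta": None}

    def recurse(k, partial, theta):
        if k == N:
            val = abs(partial) ** 2
            if val > best["val"]:
                best["val"], best["theta"] = val, theta.copy()
            return
        # prune: even perfectly aligned remaining terms cannot beat incumbent
        if (abs(partial) + tail[k]) ** 2 <= best["val"]:
            return
        for g in grid:
            theta[k] = g
            recurse(k + 1, partial + h[k] * np.exp(1j * g), theta)

    recurse(0, 0.0 + 0.0j, np.zeros(N))
    return best["theta"], best["val"]
```

The depth-first search enumerates levels element by element, and the bound check at each node discards whole subtrees that cannot improve on the incumbent.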
Example: Discrete BCD for a Four-Element, Two-Bit Problem
For a four-element surface with two-bit phases, pick an initial phase vector and perform one discrete BCD sweep over the four-level grid {0, pi/2, pi, 3pi/2}.
Continuous optimum for reference
The unconstrained optimum co-phases each element; these continuous phases, mapped into [0, 2pi), serve as the reference values that the discrete sweep rounds.
Discrete BCD: element 1
With the other elements held at their current values, the continuous update for element 1 comes out to approximately -1.739 rad. The nearest level in {0, pi/2, pi, 3pi/2} is 3pi/2, equivalently -pi/2 (closest to -1.739). Update element 1's phase to 3pi/2.
Elements 2, 3, 4
Continue similarly for elements 2, 3, and 4. After one sweep, all four phases sit at their discrete matched-filter values (since this is a rank-1 / single-user problem, one sweep suffices). The result coincides with the projection of the continuous optimum.
Verify
After one sweep, each phase equals its continuous value rounded to the nearest grid point. The objective is close to the continuous optimum scaled by the quantization loss, matching the discrete-loss formula.
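The rounding step for element 1 can be checked directly. Here -1.739 rad is the continuous update quoted above, and the wrapped-distance computation confirms that 3pi/2 is its nearest two-bit level:

```python
import numpy as np

grid = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])   # b = 2 grid
t_star = -1.739                  # continuous update for element 1 (from the text)
wrapped = np.angle(np.exp(1j * (grid - t_star)))           # signed wrapped distances
q = grid[np.argmin(np.abs(wrapped))]
print(q)                         # 3*pi/2 = 4.712..., i.e. -pi/2 mod 2pi
```

The wrapped distance to 3pi/2 is about 0.168 rad, versus about 1.40 rad to the next-closest level pi.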
Discrete BCD vs. Projection
Compare discrete BCD and projection (with refinement) for the multi-user passive subproblem. With fine quantization the two coincide; with coarse quantization discrete BCD recovers a small edge by tracking quantized coordinates during the iteration.
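A sketch of such a comparison harness on a single-user stand-in (the multi-user subproblem of the text would substitute its own objective): project the continuous optimum onto a coarse one-bit grid, then run discrete BCD sweeps from the projected point. By monotonicity the refined objective can never be worse than the projection baseline:

```python
import numpy as np

def obj(h, theta):
    """Toy single-user objective |sum_n h_n exp(j*theta_n)|^2 (an assumption)."""
    return np.abs(np.sum(h * np.exp(1j * theta))) ** 2

rng = np.random.default_rng(1)
h = rng.standard_normal(8) + 1j * rng.standard_normal(8)
grid = np.array([0.0, np.pi])                      # b = 1: coarsest grid

# Projection baseline: continuous single-user optimum, rounded per element.
theta_c = (-np.angle(h)) % (2 * np.pi)
idx = np.argmin(np.abs(np.angle(np.exp(1j * (grid[:, None] - theta_c)))), axis=0)
theta_p = grid[idx]
v_proj = obj(h, theta_p)

# Discrete refinement: BCD sweeps starting from the projected point.
theta = theta_p.copy()
for _ in range(5):
    for n in range(len(h)):
        rest = np.sum(h * np.exp(1j * theta)) - h[n] * np.exp(1j * theta[n])
        t_star = np.angle(rest) - np.angle(h[n])
        theta[n] = grid[np.argmax(np.cos(grid - t_star))]
v_bcd = obj(h, theta)
```

Because each coordinate update includes the current level among its candidates, `v_bcd >= v_proj` holds by construction; on coarse grids the improvement is often strict.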
Parameters
Discrete BCD: Rotating Through Phase Levels
Common Mistake: Over-Engineering When Projection Is Enough
Mistake:
"I need the absolute best discrete solution, so I'll use branch-and-bound with SDR bounds."
Correction:
For most RIS problems, projection and discrete BCD produce essentially the same answer. Branch-and-bound is only worth the complexity for very small problems (research comparison studies). In deployment, projection-then-refine is the pragmatic choice: 99% of the quality at 1% of the compute.