Integer and Combinatorial Optimization Basics

When Convexity Breaks Down

Many wireless problems are inherently discrete: which users to schedule, which sub-carriers to assign, which modulation order to use. These problems are generally NP-hard — no polynomial-time algorithm is known. This section introduces the key ideas: complexity, relaxation, and approximation.

Definition:

Integer Program (IP)

An integer program is an optimisation problem of the form

$$\begin{aligned} \text{minimise} \quad & \mathbf{c}^T \mathbf{x} \\ \text{subject to} \quad & \mathbf{A}\mathbf{x} \preceq \mathbf{b} \\ & \mathbf{x} \in \mathbb{Z}^n \end{aligned}$$

When some variables are integer and others continuous, it is a mixed-integer program (MIP). When all variables are binary ($x_i \in \{0, 1\}$), it is a binary integer program (BIP).

Integer programming is NP-hard in general. However, special structures (total unimodularity, submodularity) can sometimes be exploited.

Definition:

NP-Hardness (Informal)

A problem is NP-hard if every problem in the class NP can be reduced to it in polynomial time. For such problems no polynomial-time algorithm is known, and none exists unless P = NP.

Key NP-hard problems in wireless:

  • User scheduling in downlink MIMO (maximise sum rate by selecting $K$ users from $M$)
  • Sub-carrier assignment in OFDMA
  • Antenna selection in massive MIMO
  • Maximum-likelihood detection (ML decoding)

Definition:

LP Relaxation

The LP relaxation of an integer program replaces $\mathbf{x} \in \mathbb{Z}^n$ with $\mathbf{x} \in \mathbb{R}^n$ (and typically $0 \leq x_i \leq 1$ for binary programs):

$$\begin{aligned} \text{minimise} \quad & \mathbf{c}^T \mathbf{x} \\ \text{subject to} \quad & \mathbf{A}\mathbf{x} \preceq \mathbf{b}, \quad 0 \leq x_i \leq 1 \end{aligned}$$

The LP relaxation provides a lower bound on the integer optimal value: $p^\star_{\text{LP}} \leq p^\star_{\text{IP}}$.

If the LP relaxation happens to return an integer solution, then that solution is also optimal for the integer program. This is guaranteed when the constraint matrix $\mathbf{A}$ is totally unimodular (every square sub-determinant is in $\{0, \pm 1\}$) and $\mathbf{b}$ is integer: all vertices of the relaxation polytope are then integral.
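A minimal sketch of this in code, assuming SciPy is available: a tiny 2×2 assignment problem (one resource per user) whose constraint matrix is totally unimodular, so solving only the LP relaxation already yields an integral, IP-optimal solution. The cost values are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

# Cost of assigning user i to resource j, flattened as [x00, x01, x10, x11].
c = np.array([4.0, 1.0, 2.0, 3.0])

# Assignment constraints: each user gets one resource, each resource one user.
# This constraint matrix is totally unimodular.
A_eq = np.array([
    [1, 1, 0, 0],   # user 0
    [0, 0, 1, 1],   # user 1
    [1, 0, 1, 0],   # resource 0
    [0, 1, 0, 1],   # resource 1
])
b_eq = np.ones(4)

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 4)
x = res.x
print(np.round(x, 6))   # integral solution: assign user 0 -> res 1, user 1 -> res 0
print(res.fun)          # optimal cost 1 + 2 = 3
```

Because every vertex of this polytope is integral, no branch-and-bound is needed here; the LP solver lands on the IP optimum directly.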

LP Relaxation vs. Integer Points

[Interactive figure: the LP relaxation polytope (shaded region) and the feasible integer points (dots) for a 2D integer program. The LP optimum may not be integer, and the integrality gap varies with the constraint structure.]

Definition:

SDP Relaxation

For a binary quadratic program (e.g., $\max\;\mathbf{x}^T \mathbf{Q} \mathbf{x}$ subject to $x_i \in \{-1, +1\}$), the SDP relaxation lifts the variable to a matrix $\mathbf{X} = \mathbf{x}\mathbf{x}^T$:

$$\begin{aligned} \text{maximise} \quad & \text{tr}(\mathbf{Q}\mathbf{X}) \\ \text{subject to} \quad & X_{ii} = 1, \quad \mathbf{X} \succeq 0 \end{aligned}$$

The constraint $\text{rank}(\mathbf{X}) = 1$ is dropped (this is the relaxation). The SDP solution provides an upper bound on the maximum, and a feasible binary vector is recovered via randomised rounding (Goemans–Williamson).

SDP relaxation with randomised rounding achieves a $0.878$-approximation for MAX-CUT. In wireless, SDP relaxation is used for MIMO detection and multicast beamforming.
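The rounding step can be sketched in a few lines. This assumes the SDP solution $\mathbf{X}$ is already available; to keep the example solver-free, $\mathbf{X}$ is built here as a toy rank-1 matrix from a known $\pm 1$ vector, so the random-hyperplane rounding recovers that vector up to a global sign.

```python
import numpy as np

rng = np.random.default_rng(0)

x_true = np.array([1, -1, 1, 1, -1], dtype=float)
X = np.outer(x_true, x_true)      # feasible SDP point: X_ii = 1, X PSD

# Factor X = V^T V; column v_i of V is the lifted vector for variable i.
w, U = np.linalg.eigh(X)
w = np.clip(w, 0, None)           # guard against tiny negative eigenvalues
V = (U * np.sqrt(w)).T

# Goemans-Williamson rounding: cut by a random hyperplane, i.e. take
# the sign of each vector's projection onto a Gaussian direction.
r = rng.standard_normal(V.shape[0])
x_hat = np.sign(V.T @ r)
x_hat[x_hat == 0] = 1.0           # break (measure-zero) ties deterministically

print(x_hat)                      # equals x_true up to a global sign flip
```

In practice $\mathbf{X}$ comes from an SDP solver and has rank above 1; the same factor-and-project step then yields a random feasible binary vector whose expected objective carries the approximation guarantee.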

Definition:

Greedy Algorithm

A greedy algorithm builds a solution incrementally by making the locally optimal choice at each step:

Start with $S = \emptyset$. At each step, add the element $e^\star = \arg\max_{e \notin S} \Delta(e \mid S)$ that gives the largest marginal gain $\Delta(e \mid S) = f(S \cup \{e\}) - f(S)$.

For general combinatorial problems, greedy may be arbitrarily bad. But for monotone submodular objectives (diminishing returns) under a cardinality constraint, greedy achieves a $(1 - 1/e) \approx 0.632$ approximation ratio.
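The greedy rule above is a one-loop algorithm. A minimal sketch, assuming the set function is given as a Python callable (the coverage example and its names are illustrative):

```python
def greedy_max(f, ground, K):
    """Pick up to K elements, each step adding the best marginal gain."""
    S = frozenset()
    for _ in range(K):
        best = max((e for e in ground if e not in S),
                   key=lambda e: f(S | {e}) - f(S),
                   default=None)
        if best is None:
            break
        S = S | {best}
    return S

# Example: set coverage, a classic monotone submodular function.
sets = {"a": {1, 2}, "b": {2, 3}, "c": {4}}
f = lambda S: len(set().union(*(sets[e] for e in S)) if S else set())
chosen = greedy_max(f, sets.keys(), 2)
print(chosen, f(chosen))   # two sets covering 3 of the 4 elements
```

Each iteration costs one pass over the ground set, so the whole run is $O(K \cdot |\mathcal{V}|)$ evaluations of $f$; lazy evaluation can cut this further in practice.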

Definition:

Submodularity

A set function $f : 2^{\mathcal{V}} \to \mathbb{R}$ is submodular if for all $S \subseteq T \subseteq \mathcal{V}$ and $e \notin T$:

$$f(S \cup \{e\}) - f(S) \geq f(T \cup \{e\}) - f(T).$$

This is the diminishing-returns property: adding $e$ to a smaller set helps at least as much as adding it to a larger set.

In wireless, the mutual information $I(\mathbf{y}; \mathbf{h}_S)$, viewed as a function of the selected antenna set $S$, is submodular. This makes greedy antenna selection near-optimal.
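A quick numeric spot-check of this property (a sketch, not a proof): for the log-det capacity $f(S) = \log_2 \det(\mathbf{I} + \text{SNR}\,\mathbf{H}_S \mathbf{H}_S^H)$ on a random Gaussian channel, the marginal gain of an antenna on a subset is at least its gain on a superset. Dimensions and SNR here are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
H = rng.standard_normal((4, 6)) + 1j * rng.standard_normal((4, 6))
snr = 1.0

def f(S):
    """log2 det(I + SNR * H_S H_S^H) for antenna subset S."""
    if not S:
        return 0.0
    Hs = H[:, sorted(S)]
    return np.log2(np.linalg.det(np.eye(4) + snr * Hs @ Hs.conj().T).real)

S, T, e = {0}, {0, 1, 2}, 3          # S is a subset of T, e is in neither
gain_S = f(S | {e}) - f(S)
gain_T = f(T | {e}) - f(T)
print(gain_S, gain_T)                # diminishing returns: gain_S >= gain_T
```

The inequality holds for every channel realisation, not just this seed; that is exactly the submodularity of the log-det objective.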

Theorem: Greedy Approximation for Monotone Submodular Maximisation

Let $f$ be a non-negative, monotone, submodular set function. The greedy algorithm that selects $K$ elements to maximise $f(S)$ subject to $|S| \leq K$ achieves

$$f(S_{\text{greedy}}) \geq \left(1 - \frac{1}{e}\right) f(S^\star) \approx 0.632\, f(S^\star)$$

where $S^\star$ is the optimal $K$-element set.

Diminishing returns guarantees that the greedy choice at each step captures at least a $1/K$ fraction of the remaining gap to the optimum. After $K$ steps, the fraction of the optimum missed is at most $(1-1/K)^K \leq 1/e$.

Example: Greedy Antenna Selection

A base station has $M = 8$ antennas but can only use $K = 4$ RF chains. The channel matrix is $\mathbf{H} \in \mathbb{C}^{N_r \times M}$. The capacity with antenna subset $S$ is $C(S) = \log_2 \det(\mathbf{I} + \text{SNR} \cdot \mathbf{H}_S \mathbf{H}_S^H)$, where $\mathbf{H}_S$ selects the columns indexed by $S$. Apply greedy selection.
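The example can be worked through in code. This sketch draws a random Rayleigh channel (the receive dimension $N_r = 4$ and SNR value are illustrative assumptions, not given in the text) and greedily adds the antenna with the largest capacity gain until all $K$ RF chains are used.

```python
import numpy as np

rng = np.random.default_rng(42)
Nr, M, K, snr = 4, 8, 4, 10.0
# Rayleigh channel: i.i.d. unit-variance complex Gaussian entries.
H = (rng.standard_normal((Nr, M)) + 1j * rng.standard_normal((Nr, M))) / np.sqrt(2)

def capacity(S):
    """C(S) = log2 det(I + SNR * H_S H_S^H) for antenna subset S."""
    if not S:
        return 0.0
    Hs = H[:, sorted(S)]
    return np.log2(np.linalg.det(np.eye(Nr) + snr * Hs @ Hs.conj().T).real)

S = set()
for _ in range(K):
    best = max(set(range(M)) - S, key=lambda a: capacity(S | {a}))
    S.add(best)
    print(f"selected antenna {best}, C = {capacity(S):.2f} bit/s/Hz")
```

Since the log-det objective is monotone and submodular, the final $C(S)$ is guaranteed to be within a factor $1 - 1/e$ of the best 4-antenna subset, while using $K \cdot M$ determinant evaluations instead of $\binom{8}{4} = 70$.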

Convex vs. Non-Convex Optimization Landscape

[Figure. Left: a convex objective has a single global minimum. Right: a non-convex objective has multiple local minima — gradient descent may get stuck.]

Why This Matters: Combinatorial Problems in Wireless Scheduling

In every OFDMA or MU-MIMO time slot, the base station must select which users to serve and which resources to assign — a combinatorial problem. Practical schedulers (proportional fair, max-weight) use greedy heuristics. Understanding the approximation guarantees from submodularity tells us how close these heuristics are to optimal.

See full treatment in Cooperative Diversity

Historical Note: Relaxation in Optimisation

1947–2005

The idea of relaxing integer constraints goes back to Dantzig's simplex method (1947) for linear programs. Goemans and Williamson's 1995 SDP relaxation for MAX-CUT was a breakthrough in approximation algorithms. In wireless, Luo, Ma, and colleagues popularised SDP relaxation for beamforming and detection in the mid-2000s.

Common Mistake: Naive Rounding Can Be Arbitrarily Bad

Mistake:

Solving the LP relaxation and rounding each $x_i$ to the nearest integer, assuming the result is near-optimal.

Correction:

Simple rounding provides no approximation guarantee in general. The integrality gap (ratio of LP to IP optimal values) can be arbitrarily large for poorly structured problems. Use randomised rounding, dependent rounding, or problem-specific techniques.

Quick Check

Which of the following wireless problems is generally NP-hard?

Water-filling power allocation

Maximum-likelihood MIMO detection

MMSE receiver computation

SVD computation

Quick Check

The greedy algorithm achieves a $(1 - 1/e)$-approximation for maximising a monotone submodular function under a cardinality constraint. What is $1 - 1/e$ approximately?

0.500

0.632

0.750

0.878

NP-Hard

A problem to which every problem in the class NP can be reduced in polynomial time. No polynomial-time algorithm is known for any NP-hard problem, and none exists unless P = NP.

Related: Integer Program (IP), LP Relaxation

LP Relaxation

Replacing integer constraints with continuous relaxations ($x_i \in \{0,1\} \to x_i \in [0,1]$) to obtain a tractable lower bound on the integer program's optimal value.

Related: Integer Program (IP), Integrality Gap

Submodular Function

A set function with the diminishing-returns property: adding an element to a smaller set gives at least as much marginal gain as adding it to a larger set.

Related: Greedy Algorithm, Antenna Selection