Exercises

ex-ch03-01

Easy

Show that the intersection of two halfspaces $\{\mathbf{x} : \mathbf{a}_1^T \mathbf{x} \leq b_1\}$ and $\{\mathbf{x} : \mathbf{a}_2^T \mathbf{x} \leq b_2\}$ is convex.

ex-ch03-02

Easy

Determine whether $f(x) = |x|$ is convex on $\mathbb{R}$. (Note that $f$ is not differentiable at $x = 0$.)

ex-ch03-03

Easy

Show that $f(\mathbf{x}) = \max(x_1, x_2, \ldots, x_n)$ is convex.

ex-ch03-04

Easy

Write the dual of the LP: $\min\; 3x_1 + 2x_2$ subject to $x_1 + x_2 \geq 4$, $x_1 \geq 0$, $x_2 \geq 0$.
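Once you have the dual on paper, a numerical sanity check is easy (a sketch, assuming `numpy` and `scipy` are available). It uses only the textbook template: for $\min\, \mathbf{c}^T\mathbf{x}$ s.t. $\mathbf{A}\mathbf{x} \geq \mathbf{b}$, $\mathbf{x} \geq \mathbf{0}$, the dual is $\max\, \mathbf{b}^T\mathbf{y}$ s.t. $\mathbf{A}^T\mathbf{y} \leq \mathbf{c}$, $\mathbf{y} \geq \mathbf{0}$, and strong duality makes the two optimal values coincide.

```python
# Strong-duality check for the LP (a sketch; assumes scipy is installed).
import numpy as np
from scipy.optimize import linprog

c, A, b = np.array([3.0, 2.0]), np.array([[1.0, 1.0]]), np.array([4.0])

# linprog minimises with inequalities written as A_ub @ x <= b_ub.
primal = linprog(c, A_ub=-A, b_ub=-b, bounds=[(0, None)] * 2)  # min c^T x, Ax >= b
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(0, None)])       # max b^T y, A^T y <= c

print(primal.fun, -dual.fun)  # equal optimal values => zero duality gap
```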

ex-ch03-05

Easy

Verify the KKT conditions for the problem $\min\; x^2$ subject to $x \geq 1$. What are $x^\star$ and $\lambda^\star$?

ex-ch03-06

Easy

Compute one step of gradient descent on $f(x) = (x-3)^2$ starting from $x_0 = 0$ with step size $\alpha = 0.5$.
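A three-line arithmetic check (the only ingredient is the update $x_{k+1} = x_k - \alpha f'(x_k)$):

```python
# One gradient step on f(x) = (x - 3)^2, whose derivative is f'(x) = 2(x - 3).
x0, alpha = 0.0, 0.5
x1 = x0 - alpha * 2 * (x0 - 3)  # x_{k+1} = x_k - alpha * f'(x_k)
print(x1)                       # compare against your hand computation
```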

ex-ch03-07

Medium

Show that $f(\mathbf{x}) = \log\!\left(\sum_{i=1}^n e^{x_i}\right)$ (the log-sum-exp function) is convex.
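The standard proof computes the Hessian. As a numerical companion (a sketch, assuming `numpy`), the snippet below spot-checks that the Hessian you should obtain, $\nabla^2 f(\mathbf{x}) = \operatorname{diag}(\mathbf{p}) - \mathbf{p}\mathbf{p}^T$ with $\mathbf{p} = \operatorname{softmax}(\mathbf{x})$, is positive semidefinite at random points:

```python
# Spot check: the log-sum-exp Hessian diag(p) - p p^T should be PSD everywhere.
import numpy as np

rng = np.random.default_rng(0)
for _ in range(5):
    x = rng.normal(size=4)
    p = np.exp(x - x.max()); p /= p.sum()         # softmax, computed stably
    H = np.diag(p) - np.outer(p, p)
    print(np.linalg.eigvalsh(H).min() >= -1e-12)  # True on every draw
```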

ex-ch03-08

Medium

Derive the dual of the QP: $\min\; \frac{1}{2}\mathbf{x}^T \mathbf{Q}\mathbf{x} + \mathbf{c}^T \mathbf{x}$ subject to $\mathbf{A}\mathbf{x} = \mathbf{b}$, where $\mathbf{Q} \succ 0$.
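To validate your derivation numerically (a sketch, assuming `numpy`): for this equality-constrained QP the KKT conditions form a linear system, and the dual function evaluated at the optimal multiplier should equal the primal optimum.

```python
# Zero-duality-gap check for the equality-constrained QP (a sketch).
import numpy as np

rng = np.random.default_rng(1)
n, m = 5, 2
M = rng.normal(size=(n, n)); Q = M @ M.T + np.eye(n)   # Q > 0
A, b, c = rng.normal(size=(m, n)), rng.normal(size=m), rng.normal(size=n)

# KKT system:  [Q  A^T; A  0] [x; nu] = [-c; b].
K = np.block([[Q, A.T], [A, np.zeros((m, m))]])
x, nu = np.split(np.linalg.solve(K, np.concatenate([-c, b])), [n])

primal = 0.5 * x @ Q @ x + c @ x
r = c + A.T @ nu                                       # stationarity residual
dual = -0.5 * r @ np.linalg.solve(Q, r) - b @ nu       # g(nu), the dual function
print(np.isclose(primal, dual))                        # True
```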

ex-ch03-09

Medium

Water-filling with $N = 3$ sub-channels: noise floors $\sigma_i^2/g_i = [1, 2, 5]$ and total power $P_{\text{tot}} = 3$. Find the optimal allocation and total rate.
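You can check your hand solution with a bisection on the water level $\mu$ (a sketch, assuming `numpy`): allocate $p_i = \max(0, \mu - \sigma_i^2/g_i)$ and tune $\mu$ until the powers sum to $P_{\text{tot}}$.

```python
# Water-filling by bisection on the water level mu (a sketch).
import numpy as np

n = np.array([1.0, 2.0, 5.0])             # noise floors sigma_i^2 / g_i
P_tot = 3.0

lo, hi = n.min(), n.max() + P_tot         # mu is bracketed in [lo, hi]
for _ in range(100):
    mu = 0.5 * (lo + hi)
    if np.maximum(0.0, mu - n).sum() > P_tot:
        hi = mu                           # too much power: lower the level
    else:
        lo = mu

p = np.maximum(0.0, mu - n)
print(p, np.log2(1.0 + p / n).sum())      # allocation and total rate
```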

ex-ch03-10

Medium

Show that gradient descent with step size $\alpha > 2/L$ diverges on the quadratic $f(x) = \frac{L}{2}x^2$.
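An empirical companion to the proof (a sketch): on this quadratic the gradient map is linear, $x_{k+1} = (1 - \alpha L)x_k$, so the iterates grow geometrically once $|1 - \alpha L| > 1$.

```python
# Divergence of gradient descent when alpha exceeds 2/L (a sketch).
L, x = 1.0, 1.0
alpha = 1.05 * 2.0 / L          # slightly above the 2/L threshold
for _ in range(50):
    x = x - alpha * L * x       # gradient step; grad f(x) = L * x
print(abs(x))                   # grows like |1 - alpha*L|^50
```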

ex-ch03-11

Medium

Project the vector $\mathbf{y} = (3, -1, 2)$ onto the probability simplex $\Delta = \{\mathbf{x} : x_i \geq 0,\; \sum_i x_i = 1\}$.
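For checking your answer, the classic sort-and-threshold method computes this projection exactly (a sketch, assuming `numpy`; `project_simplex` is an illustrative name):

```python
# Exact Euclidean projection onto the probability simplex (a sketch):
# sort, find the support size, subtract a common threshold tau, clip at zero.
import numpy as np

def project_simplex(y):
    u = np.sort(y)[::-1]                                # descending order
    css = np.cumsum(u)
    k = np.arange(1, len(y) + 1)
    rho = np.nonzero(u + (1.0 - css) / k > 0)[0][-1]    # last active index
    tau = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(y - tau, 0.0)

print(project_simplex(np.array([3.0, -1.0, 2.0])))      # compare by hand
```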

ex-ch03-12

Medium

Show that $f(\mathbf{X}) = -\log\det(\mathbf{X})$ is convex on the cone of positive definite matrices $\mathbb{S}^n_{++}$.
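A proof is required, but a numerical spot check is reassuring (a sketch, assuming `numpy`): a function is convex iff it is convex along every line segment, so test the midpoint inequality on random positive definite pairs.

```python
# Midpoint-convexity spot check for f(X) = -log det X on random PD matrices.
import numpy as np

f = lambda X: -np.linalg.slogdet(X)[1]        # -log det, computed stably

rng = np.random.default_rng(2)
n = 4
for _ in range(5):
    A = rng.normal(size=(n, n)); X = A @ A.T + np.eye(n)     # X > 0
    B = rng.normal(size=(n, n)); Y = B @ B.T + np.eye(n)     # Y > 0
    print(f(0.5 * (X + Y)) <= 0.5 * (f(X) + f(Y)) + 1e-12)   # True
```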

ex-ch03-13

Medium

The Rosenbrock function is $f(x_1, x_2) = (1-x_1)^2 + 100(x_2 - x_1^2)^2$. Is it convex? Find its global minimum.
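One way to settle the convexity question is to exhibit a point where the Hessian has a negative eigenvalue (a sketch, assuming `numpy`; the Hessian entries in the code are the ones you should derive by hand):

```python
# The Rosenbrock Hessian at a chosen point (a sketch).
import numpy as np

def hessian(x1, x2):
    # Hessian of (1 - x1)^2 + 100*(x2 - x1^2)^2
    return np.array([[2.0 - 400.0 * x2 + 1200.0 * x1**2, -400.0 * x1],
                     [-400.0 * x1, 200.0]])

print(np.linalg.eigvalsh(hessian(0.0, 1.0)))  # one eigenvalue is negative
```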

ex-ch03-14

Medium

Show that the MMSE estimator $\hat{\mathbf{x}} = (\mathbf{H}^H \mathbf{H} + \sigma^2 \mathbf{I})^{-1} \mathbf{H}^H \mathbf{y}$ solves the convex QP: $\min_{\mathbf{x}} \|\mathbf{y} - \mathbf{H}\mathbf{x}\|^2 + \sigma^2 \|\mathbf{x}\|^2$.
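A numerical cross-check (a sketch, assuming `numpy`): the regularised objective is an ordinary least-squares problem on the stacked system $[\mathbf{H}; \sigma\mathbf{I}]\mathbf{x} \approx [\mathbf{y}; \mathbf{0}]$, so its least-squares solution should match the closed form.

```python
# MMSE closed form vs. stacked least squares (a sketch).
import numpy as np

rng = np.random.default_rng(3)
m, n, sigma = 6, 4, 0.5
H = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))
y = rng.normal(size=m) + 1j * rng.normal(size=m)

closed = np.linalg.solve(H.conj().T @ H + sigma**2 * np.eye(n), H.conj().T @ y)

Hs = np.vstack([H, sigma * np.eye(n)])       # ||y - Hx||^2 + sigma^2 ||x||^2
ys = np.concatenate([y, np.zeros(n)])        #   = || [y; 0] - [H; sigma I] x ||^2
stacked = np.linalg.lstsq(Hs, ys, rcond=None)[0]

print(np.allclose(closed, stacked))          # True
```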

ex-ch03-15

Hard

(Dual decomposition for multi-cell power control.) Consider $K$ single-antenna links sharing a band. Link $k$ transmits with power $p_k$ and achieves rate $R_k = \log_2(1 + p_k g_{kk} / (\sum_{j \neq k} p_j g_{jk} + \sigma^2))$. The network wants to maximise $\sum_k \log R_k$ (proportional fair) subject to $p_k \leq P_{\max}$.

  1. Explain why this is non-convex.
  2. Formulate the high-SINR approximation (treat interference as noise and hold it fixed) as a convex problem.
  3. Write the Lagrangian dual and interpret the dual variables.

ex-ch03-16

Hard

(SDP relaxation for MIMO detection.) The ML detection problem is $\min_{\mathbf{x} \in \{-1,+1\}^n} \|\mathbf{y} - \mathbf{H}\mathbf{x}\|^2$.

  1. Show this is equivalent to $\max_{\mathbf{x} \in \{-1,+1\}^n} \mathbf{x}^T \mathbf{Q}\mathbf{x} + \mathbf{q}^T \mathbf{x}$ for appropriate $\mathbf{Q}$ and $\mathbf{q}$.
  2. Write the SDP relaxation.
  3. Explain how randomised rounding recovers a candidate $\mathbf{x} \in \{-1,+1\}^n$ from the SDP solution (see the sketch after this list).
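For part 3, a minimal rounding sketch, assuming `numpy` and assuming you already hold an optimal matrix `Z` of the relaxation $\max\, \operatorname{tr}(\tilde{\mathbf{Q}}\mathbf{Z})$ s.t. $\mathbf{Z} \succeq 0$, $\operatorname{diag}(\mathbf{Z}) = \mathbf{1}$ from an SDP solver (the names `Z`, `Qt`, and `round_sdp` are illustrative):

```python
# Randomised rounding from an SDP solution (a sketch). Qt is the homogenised
# cost [[Q, q/2], [q^T/2, 0]], so the last coordinate plays the role of +1.
import numpy as np

def round_sdp(Z, Qt, trials, rng):
    w, U = np.linalg.eigh(Z)
    V = U * np.sqrt(np.maximum(w, 0.0))               # factor Z = V V^T (PSD-safe)
    best, best_val = None, -np.inf
    for _ in range(trials):
        z = np.sign(V @ rng.normal(size=Z.shape[0]))  # random hyperplane cut
        z = z * z[-1]                                 # force homogenising entry to +1
        val = z @ Qt @ z
        if val > best_val:
            best, best_val = z, val
    return best[:-1], best_val                        # drop the homogenising entry
```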

ex-ch03-17

Hard

(Convergence of projected gradient descent.) Let $f$ be $L$-smooth and convex, and $\mathcal{C}$ be a closed convex set. Show that projected gradient descent with $\alpha = 1/L$ satisfies

$$f(\mathbf{x}_k) - f(\mathbf{x}^\star) \leq \frac{L\|\mathbf{x}_0 - \mathbf{x}^\star\|^2}{2k},$$

where $\mathbf{x}^\star \in \mathcal{C}$ is the constrained minimiser.
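Before proving the bound, it can help to watch it hold numerically (a sketch, assuming `numpy`; the box constraint and problem data are illustrative, and a long run stands in for $\mathbf{x}^\star$):

```python
# Projected gradient descent on a smooth quadratic over a box (a sketch),
# checking f(x_k) - f(x*) <= L ||x0 - x*||^2 / (2k) at every step.
import numpy as np

rng = np.random.default_rng(4)
n = 5
M = rng.normal(size=(n, n)); A = M @ M.T       # f(x) = 0.5 x^T A x - b^T x
b = rng.normal(size=n)
L = np.linalg.eigvalsh(A).max()                # smoothness constant
proj = lambda x: np.clip(x, 0.0, 1.0)          # projection onto C = [0,1]^n
f = lambda x: 0.5 * x @ A @ x - b @ x
step = lambda x: proj(x - (A @ x - b) / L)     # one PGD step with alpha = 1/L

x_star = np.zeros(n)                           # long run as a stand-in for x*
for _ in range(20000):
    x_star = step(x_star)

x0 = np.zeros(n); x = x0.copy()
for k in range(1, 51):
    x = step(x)
    assert f(x) - f(x_star) <= L * np.sum((x0 - x_star) ** 2) / (2 * k) + 1e-9
print("bound verified for k = 1..50")
```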

ex-ch03-18

Hard

(Greedy antenna selection is near-optimal.) For a MIMO system with $M$ antennas and $K$ RF chains, the capacity with selected set $S$ is $C(S) = \log_2\det(\mathbf{I} + \text{SNR} \cdot \mathbf{H}_S \mathbf{H}_S^H)$.

  1. Show that $C(S)$ is monotone: $C(S) \leq C(T)$ for $S \subseteq T$.
  2. Show submodularity by proving the marginal gain from adding antenna $m$ decreases as the set grows.
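These two properties are exactly what the greedy heuristic exploits. A minimal selection sketch (assuming `numpy`; `greedy_select` and the random channel are illustrative):

```python
# Greedy antenna selection: repeatedly add the antenna with the largest
# marginal capacity gain (a sketch). Monotonicity plus submodularity give the
# classic (1 - 1/e) near-optimality guarantee for this procedure.
import numpy as np

def capacity(H, S, snr):
    if not S:
        return 0.0
    Hs = H[sorted(S), :]                         # rows = selected antennas
    G = np.eye(len(S)) + snr * (Hs @ Hs.conj().T)
    return np.linalg.slogdet(G)[1] / np.log(2)   # log2 det

def greedy_select(H, K, snr):
    S = set()
    for _ in range(K):
        gains = {m: capacity(H, S | {m}, snr) - capacity(H, S, snr)
                 for m in range(H.shape[0]) if m not in S}
        S.add(max(gains, key=gains.get))         # best marginal gain
    return S

rng = np.random.default_rng(5)
H = rng.normal(size=(8, 4))                      # M = 8 antennas
print(greedy_select(H, K=3, snr=1.0))
```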

ex-ch03-19

Challenge

(Dual decomposition for NUM.) Consider $K$ flows sharing $L$ links in a network. Flow $k$ uses the links in set $\mathcal{L}_k$ and achieves rate $x_k$. Link $\ell$ has capacity $c_\ell$. The network utility maximisation (NUM) problem is:

$$\max \sum_{k=1}^K U_k(x_k) \quad \text{s.t.} \quad \sum_{k : \ell \in \mathcal{L}_k} x_k \leq c_\ell \;\; \forall \ell, \qquad x_k \geq 0.$$

Assume $U_k$ is strictly concave and increasing (e.g., $\log x_k$ for proportional fairness).

  1. Form the Lagrangian dual.
  2. Show the dual decomposes into KK independent sub-problems.
  3. Propose a distributed algorithm using gradient ascent on the dual (see the sketch after this list).
  4. Interpret the dual variables as link prices.
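A compact simulation of parts 3 and 4 for $U_k = \log x_k$ (a sketch, assuming `numpy`; the two-link, three-flow topology is illustrative). Each flow solves its own sub-problem, $x_k = \arg\max_x (\log x - x \cdot \text{price}_k) = 1/\text{price}_k$, and each link adjusts its price in proportion to its excess load:

```python
# Distributed dual gradient ascent for NUM with log utilities (a sketch).
import numpy as np

R = np.array([[1, 1, 0],        # routing: R[l, k] = 1 if flow k crosses link l
              [0, 1, 1]])
c = np.array([1.0, 2.0])        # link capacities
lam = np.ones(2)                # link prices (dual variables)
gamma = 0.05                    # dual step size

for _ in range(5000):
    price = R.T @ lam           # each flow sums the prices along its path
    x = 1.0 / price             # per-flow sub-problem, solved locally
    lam = np.maximum(lam + gamma * (R @ x - c), 1e-6)  # price update,
    # floored at a tiny epsilon instead of 0 for numerical safety

print(x, R @ x)                 # rates and link loads; loads approach capacity
```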

ex-ch03-20

Challenge

(Nesterov acceleration.) Prove that Nesterov's accelerated gradient method achieves an $O(1/k^2)$ convergence rate for $L$-smooth convex functions, matching the theoretical lower bound for first-order methods.

The algorithm: set $\mathbf{y}_0 = \mathbf{x}_0$. For $k \geq 0$:

$$\mathbf{x}_{k+1} = \mathbf{y}_k - \frac{1}{L}\nabla f(\mathbf{y}_k), \quad \mathbf{y}_{k+1} = \mathbf{x}_{k+1} + \frac{k}{k+3}(\mathbf{x}_{k+1} - \mathbf{x}_k).$$
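The proof is the point of the exercise, but the speed-up is easy to see empirically (a sketch, assuming `numpy`; the quadratic test function is illustrative):

```python
# Nesterov's accelerated method vs. plain gradient descent on f(x) = 0.5 x^T A x.
import numpy as np

rng = np.random.default_rng(6)
n = 50
M = rng.normal(size=(n, n)); A = M.T @ M       # PSD quadratic, minimum value 0
L = np.linalg.eigvalsh(A).max()
f = lambda x: 0.5 * x @ A @ x

x0 = rng.normal(size=n)
x_gd = x0.copy()                               # plain gradient descent iterate
x, y = x0.copy(), x0.copy()                    # Nesterov iterates
for k in range(200):
    x_gd = x_gd - (A @ x_gd) / L
    x_new = y - (A @ y) / L                    # gradient step at the look-ahead y
    y = x_new + (k / (k + 3)) * (x_new - x)    # momentum extrapolation
    x = x_new

print(f(x_gd), f(x))   # the accelerated gap is markedly smaller
```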