Exercises

ex-ch01-01

Easy

Find the eigenvalues and eigenvectors of $\mathbf{A} = \begin{bmatrix} 3 & 1 \\ 1 & 3 \end{bmatrix}$. Then verify the spectral decomposition $\mathbf{A} = \mathbf{Q}\mathbf{\Lambda}\mathbf{Q}^T$ by explicitly multiplying the factors.
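A quick NumPy check can confirm the hand computation; the sketch below is illustrative and not part of the required solution.

```python
import numpy as np

# Hand-check aid: eigendecomposition of the symmetric matrix in the exercise.
A = np.array([[3.0, 1.0],
              [1.0, 3.0]])

# eigh is specialized for symmetric/Hermitian matrices;
# it returns eigenvalues in ascending order and orthonormal eigenvectors.
eigvals, Q = np.linalg.eigh(A)

# Spectral decomposition: A = Q @ diag(eigvals) @ Q.T for a real symmetric A.
reconstructed = Q @ np.diag(eigvals) @ Q.T
print(eigvals)                        # [2. 4.]
print(np.allclose(A, reconstructed))  # True
```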

ex-ch01-02

Easy

Compute the singular value decomposition (SVD) of $\mathbf{A} = \begin{bmatrix} 1 & 1 \\ 1 & 0 \\ 0 & 1 \end{bmatrix}$. Verify the factorization $\mathbf{A} = \mathbf{U}\mathbf{\Sigma}\mathbf{V}^H$.
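A numerical cross-check using NumPy's thin SVD (the hand derivation is still the point of the exercise):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])

# Thin SVD: U is 3x2, s holds the two singular values, Vh is 2x2.
U, s, Vh = np.linalg.svd(A, full_matrices=False)
print(s)                                    # approx [sqrt(3), 1]
print(np.allclose(A, U @ np.diag(s) @ Vh))  # True
```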

ex-ch01-03

Easy

For $\mathbf{A} = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}, \qquad \mathbf{B} = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix},$ compute the Kronecker product $\mathbf{A} \otimes \mathbf{B}$ and verify that the result is $4 \times 4$.
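The hand computation can be checked against NumPy's built-in Kronecker product (illustrative sketch):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])

# np.kron builds the block matrix whose (i, j) block is a_ij * B.
K = np.kron(A, B)
print(K.shape)  # (4, 4)
print(K)
```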

ex-ch01-04

Easy

Apply the Gram–Schmidt process to the set of vectors $\left\{\begin{bmatrix}1\\1\\0\end{bmatrix},\; \begin{bmatrix}1\\0\\1\end{bmatrix},\; \begin{bmatrix}0\\1\\1\end{bmatrix}\right\}$ in $\mathbb{R}^3$ to produce an orthonormal basis.
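A minimal classical Gram–Schmidt sketch for checking the result (the function name and loop structure are illustrative, not prescribed by the exercise):

```python
import numpy as np

def gram_schmidt(vectors):
    """Classical Gram-Schmidt: returns a list of orthonormal vectors."""
    basis = []
    for v in vectors:
        w = v.astype(float)
        for q in basis:
            w = w - (q @ w) * q      # subtract projection onto each earlier q
        norm = np.linalg.norm(w)
        if norm > 1e-12:             # skip (near-)linearly-dependent vectors
            basis.append(w / norm)
    return basis

vs = [np.array([1, 1, 0]), np.array([1, 0, 1]), np.array([0, 1, 1])]
Q = np.column_stack(gram_schmidt(vs))
print(np.allclose(Q.T @ Q, np.eye(3)))  # True: columns are orthonormal
```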

ex-ch01-05

Easy

Given the complex vector $\mathbf{x} = \begin{bmatrix} 1 \\ j \\ 1-j \end{bmatrix}$, compute the $\ell_1$-norm $\|\mathbf{x}\|_1$, the $\ell_2$-norm $\|\mathbf{x}\|_2$, and the $\ell_\infty$-norm $\|\mathbf{x}\|_\infty$.
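The three norms can be sanity-checked in NumPy (note that NumPy writes the imaginary unit as `1j`):

```python
import numpy as np

x = np.array([1, 1j, 1 - 1j])
mags = np.abs(x)                 # entrywise moduli: [1, 1, sqrt(2)]

l1 = mags.sum()                  # 2 + sqrt(2)
l2 = np.sqrt((mags ** 2).sum())  # sqrt(1 + 1 + 2) = 2
linf = mags.max()                # sqrt(2)
print(l1, l2, linf)
```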

ex-ch01-06

Medium

Let $\mathbf{A} \in \mathbb{C}^{m \times n}$ with $\operatorname{rank}(\mathbf{A}) = n$ (i.e., $\mathbf{A}$ has full column rank, $m \ge n$). Show that $\mathbf{A}^H\mathbf{A}$ is positive definite.

ex-ch01-07

Medium

Prove that for any $\mathbf{A} \in \mathbb{C}^{m \times n}$, $\operatorname{tr}(\mathbf{A}^H\mathbf{A}) = \sum_{i=1}^{m}\sum_{j=1}^{n} |a_{ij}|^2 = \|\mathbf{A}\|_F^2$, where $\|\mathbf{A}\|_F$ is the Frobenius norm.
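Before proving the identity, it can be spot-checked numerically on a random complex matrix (an illustrative sketch, not a proof):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))

trace_form = np.trace(A.conj().T @ A).real      # tr(A^H A)
entry_form = np.sum(np.abs(A) ** 2)             # sum of |a_ij|^2
fro_norm_sq = np.linalg.norm(A, 'fro') ** 2     # ||A||_F^2
print(trace_form, entry_form, fro_norm_sq)      # all three agree
```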

ex-ch01-08

Medium

Show that for any unitary matrix $\mathbf{U}$ (i.e., $\mathbf{U}^H\mathbf{U} = \mathbf{I}$), the Euclidean norm is preserved: $\|\mathbf{U}\mathbf{x}\|_2 = \|\mathbf{x}\|_2$ for all $\mathbf{x}$.

ex-ch01-09

Medium

For real vectors $\mathbf{a}, \mathbf{x} \in \mathbb{R}^n$, derive the gradients $\frac{\partial}{\partial \mathbf{x}}(\mathbf{a}^T\mathbf{x}) = \mathbf{a}$ and $\frac{\partial}{\partial \mathbf{x}}(\mathbf{x}^T\mathbf{a}) = \mathbf{a}$.

ex-ch01-10

Medium

Prove that all eigenvalues of a positive definite matrix $\mathbf{A}$ are strictly positive.

ex-ch01-11

Medium

Let $\mathbf{A} \succeq 0$ and $\mathbf{B} \succeq 0$ (both positive semidefinite). Prove that $\operatorname{tr}(\mathbf{A}\mathbf{B}) \ge 0$.

ex-ch01-12

Medium

Prove the Eckart–Young theorem: if $\mathbf{A} = \sum_{i=1}^r \sigma_i \mathbf{u}_i \mathbf{v}_i^H$ is the SVD of $\mathbf{A}$ (with $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_r > 0$), then the best rank-$k$ approximation in Frobenius norm is $\mathbf{A}_k = \sum_{i=1}^k \sigma_i \mathbf{u}_i \mathbf{v}_i^H$, i.e., $\mathbf{A}_k$ minimizes $\|\mathbf{A} - \mathbf{X}\|_F$ over all matrices $\mathbf{X}$ of rank at most $k$.

ex-ch01-13

Medium

(Water-filling) Consider the optimization problem $$\max_{p_1,\dots,p_n} \;\sum_{i=1}^n \log(1 + p_i \lambda_i) \quad\text{subject to}\quad \sum_{i=1}^n p_i = P,\;\; p_i \ge 0,$$ where $\lambda_i > 0$ are given constants and $P > 0$ is a total power budget. Derive the water-filling solution using the KKT conditions.
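The KKT solution takes the form $p_i = \max(0, \mu - 1/\lambda_i)$, with the water level $\mu$ chosen so the powers sum to $P$. A small bisection sketch (the $\lambda_i$ and $P$ values are illustrative):

```python
import numpy as np

def water_fill(lam, P):
    """Water-filling: p_i = max(0, mu - 1/lam_i) with sum(p) = P.

    The water level mu is found by bisection; this is a simple
    illustrative solver, not an optimized one.
    """
    lam = np.asarray(lam, dtype=float)
    lo, hi = 0.0, P + np.max(1.0 / lam)   # the water level lies in [lo, hi]
    for _ in range(100):
        mu = 0.5 * (lo + hi)
        p = np.maximum(0.0, mu - 1.0 / lam)
        if p.sum() > P:                   # too much water: lower the level
            hi = mu
        else:                             # too little: raise the level
            lo = mu
    return np.maximum(0.0, 0.5 * (lo + hi) - 1.0 / lam)

p = water_fill([2.0, 1.0, 0.25], P=1.0)
print(p, p.sum())  # strongest channels get the most power; weakest gets none
```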

ex-ch01-14

Medium

Show that the function $f(\mathbf{A}) = \log\det(\mathbf{A})$ is concave on the cone of positive definite matrices. That is, for any $\mathbf{A} \succ 0$, $\mathbf{B} \succ 0$, and $t \in [0,1]$: $f(t\mathbf{A} + (1-t)\mathbf{B}) \ge t\,f(\mathbf{A}) + (1-t)\,f(\mathbf{B})$.

ex-ch01-15

Medium

Prove the mixed-product property of Kronecker products: if $\mathbf{A}$, $\mathbf{B}$, $\mathbf{C}$, $\mathbf{D}$ are matrices of compatible dimensions (so that $\mathbf{AC}$ and $\mathbf{BD}$ exist), then $(\mathbf{A} \otimes \mathbf{B})(\mathbf{C} \otimes \mathbf{D}) = (\mathbf{AC}) \otimes (\mathbf{BD})$.
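The identity is easy to spot-check numerically before proving it in general (random matrices with compatible shapes; the dimensions here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
# Shapes chosen so that both AC and BD exist.
A = rng.standard_normal((2, 3)); C = rng.standard_normal((3, 2))
B = rng.standard_normal((4, 2)); D = rng.standard_normal((2, 4))

lhs = np.kron(A, B) @ np.kron(C, D)   # (A ⊗ B)(C ⊗ D)
rhs = np.kron(A @ C, B @ D)           # (AC) ⊗ (BD)
print(np.allclose(lhs, rhs))          # True
```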

ex-ch01-16

Hard

Prove the spectral theorem for Hermitian matrices from scratch: every Hermitian matrix $\mathbf{A} \in \mathbb{C}^{n \times n}$ (i.e., $\mathbf{A} = \mathbf{A}^H$) can be written as $\mathbf{A} = \mathbf{Q}\mathbf{\Lambda}\mathbf{Q}^H$, where $\mathbf{Q}$ is unitary and $\mathbf{\Lambda}$ is real diagonal. Use induction on the matrix size $n$.

ex-ch01-17

Hard

Prove Hadamard's inequality using Fischer's inequality. Specifically:

(a) First prove Fischer's inequality: if $\mathbf{A} \succ 0$ is partitioned as $\mathbf{A} = \begin{bmatrix} \mathbf{A}_{11} & \mathbf{A}_{12} \\ \mathbf{A}_{21} & \mathbf{A}_{22} \end{bmatrix}$, then $\det(\mathbf{A}) \le \det(\mathbf{A}_{11})\det(\mathbf{A}_{22})$.

(b) Then apply Fischer's inequality iteratively to obtain Hadamard's inequality: $\det(\mathbf{A}) \le \prod_{i=1}^n a_{ii}$.

ex-ch01-18

Hard

Let $\mathbf{H} \in \mathbb{C}^{n_r \times n_t}$ be a MIMO channel matrix with SVD $\mathbf{H} = \mathbf{U}\mathbf{\Sigma}\mathbf{V}^H$. Prove that the MIMO capacity $$C = \max_{\mathbf{Q} \succeq 0,\; \operatorname{tr}(\mathbf{Q}) \le P} \log\det\!\left(\mathbf{I}_{n_r} + \mathbf{H}\mathbf{Q}\mathbf{H}^H\right)$$ is achieved by $\mathbf{Q}^\star = \mathbf{V}\mathbf{P}\mathbf{V}^H$, where $\mathbf{P} = \operatorname{diag}(p_1,\dots,p_{n_t})$ with $p_i \ge 0$, reducing the capacity to $$C = \sum_{i=1}^{\min(n_r,n_t)} \log(1 + p_i \sigma_i^2),$$ where the $p_i$ satisfy the water-filling solution from Exercise 13 with $\lambda_i = \sigma_i^2$.

ex-ch01-19

Challenge

Power Iteration Algorithm.

Implement the power iteration algorithm in Python to find the dominant eigenvalue and eigenvector of a real symmetric matrix.

  1. Generate a random $5 \times 5$ symmetric matrix $\mathbf{A} = \mathbf{M} + \mathbf{M}^T$, where $\mathbf{M}$ has i.i.d. $\mathcal{N}(0,1)$ entries.
  2. Starting from a random unit vector $\mathbf{q}^{(0)}$, iterate: $$\mathbf{z}^{(k+1)} = \mathbf{A}\mathbf{q}^{(k)}, \qquad \mathbf{q}^{(k+1)} = \frac{\mathbf{z}^{(k+1)}}{\|\mathbf{z}^{(k+1)}\|}, \qquad \lambda^{(k+1)} = (\mathbf{q}^{(k+1)})^T \mathbf{A}\,\mathbf{q}^{(k+1)}.$$
  3. Plot $|\lambda^{(k)} - \lambda_1|$ versus $k$ on a log scale, where $\lambda_1$ is the true dominant eigenvalue (from numpy.linalg.eigh).
  4. Verify that the convergence rate is $O(|\lambda_2/\lambda_1|^{2k})$.

Reference implementation: ch01/python/power_iteration.py.
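A minimal sketch of steps 1–2 (plotting omitted; the reference script is the full version, and this fragment only illustrates the iteration itself):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M + M.T                          # random symmetric test matrix

q = rng.standard_normal(5)
q /= np.linalg.norm(q)               # random unit starting vector

estimates = []
for _ in range(300):
    z = A @ q                        # power step
    q = z / np.linalg.norm(z)        # renormalize
    estimates.append(q @ A @ q)      # Rayleigh-quotient eigenvalue estimate

# The dominant eigenvalue is the one with largest magnitude.
eigvals = np.linalg.eigvalsh(A)
lam1 = eigvals[np.argmax(np.abs(eigvals))]
print(abs(estimates[-1] - lam1))     # small when |lambda_2/lambda_1| < 1
```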

ex-ch01-20

Challenge

SVD-Based Image Compression.

Implement SVD-based image compression in Python.

  1. Load a grayscale image as a matrix $\mathbf{A} \in \mathbb{R}^{m \times n}$.
  2. Compute the full SVD: $\mathbf{A} = \mathbf{U}\mathbf{\Sigma}\mathbf{V}^T$.
  3. Form rank-$k$ approximations $\mathbf{A}_k = \sum_{i=1}^{k} \sigma_i \mathbf{u}_i \mathbf{v}_i^T$ for $k = 1, 5, 10, 20, 50$.
  4. Display the reconstructed images side by side.
  5. Plot the relative reconstruction error $\|\mathbf{A} - \mathbf{A}_k\|_F / \|\mathbf{A}\|_F$ versus $k$ and verify that it equals $\sqrt{\sum_{i>k} \sigma_i^2} \,/\, \sqrt{\sum_{i} \sigma_i^2}$.
  6. Also plot the compression ratio: the rank-$k$ approximation stores $k(m + n + 1)$ numbers versus $mn$ for the full image.

Reference implementation: ch01/python/svd_compression.py.