Exercises

ex-sp-ch06-01

Easy

Create a $4 \times 4$ random complex matrix $\mathbf{A}$ and a random complex vector $\mathbf{b}$. Solve $\mathbf{A}\mathbf{x} = \mathbf{b}$ using both `np.linalg.solve` and `np.linalg.inv`. Verify that both give the same answer (up to numerical precision) and time both methods.
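A minimal sketch of the setup (the seed is arbitrary; timing with `timeit` is left to the reader):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
b = rng.standard_normal(4) + 1j * rng.standard_normal(4)

# Preferred: LU-based solve, no explicit inverse formed
x_solve = np.linalg.solve(A, b)
# Explicit inverse: slower and numerically worse, shown for comparison
x_inv = np.linalg.inv(A) @ b
```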

ex-sp-ch06-02

Easy

Compute the eigenvalues and eigenvectors of the $3 \times 3$ correlation matrix $R_{ij} = 0.9^{|i-j|}$. Verify that the eigenvalues are real, the eigenvectors are orthogonal, and $\mathbf{R} = \mathbf{V}\boldsymbol{\Lambda}\mathbf{V}^T$.
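The checks can be sketched as follows; `eigh` is the natural choice here since the matrix is symmetric:

```python
import numpy as np

n = 3
i = np.arange(n)
R = 0.9 ** np.abs(i[:, None] - i[None, :])   # R_ij = 0.9^|i-j|

# eigh exploits symmetry: real eigenvalues, orthonormal eigenvectors
lam, V = np.linalg.eigh(R)
```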

ex-sp-ch06-03

Easy

Construct a $1000 \times 1000$ sparse tridiagonal matrix with $-2$ on the diagonal and $1$ on the sub/super-diagonals. Compare its memory usage (sparse vs dense) and compute a matrix-vector product timing for both.
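One way to build the matrix and count CSR storage (the byte accounting below reflects the CSR layout specifically; matvec timing is left to the reader):

```python
import numpy as np
from scipy import sparse

n = 1000
T = sparse.diags([1.0, -2.0, 1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
T_dense = T.toarray()

# CSR stores only the ~3n nonzeros plus two index arrays
csr_bytes = T.data.nbytes + T.indices.nbytes + T.indptr.nbytes
dense_bytes = T_dense.nbytes
```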

ex-sp-ch06-04

Easy

Compute the SVD of a $6 \times 4$ random matrix. Print the singular values, verify $\mathbf{U}^H\mathbf{U} = \mathbf{I}$ and $\mathbf{V}^H\mathbf{V} = \mathbf{I}$, and reconstruct the original matrix from the SVD factors.

ex-sp-ch06-05

Easy

Compute the Kronecker product of $\mathbf{A} = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$ and $\mathbf{B} = \begin{bmatrix} 0 & 5 \\ 6 & 7 \end{bmatrix}$. Verify the property $\det(\mathbf{A} \otimes \mathbf{B}) = (\det\mathbf{A})^2 (\det\mathbf{B})^2$.
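A quick numerical check of the determinant property:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 5.0], [6.0, 7.0]])
K = np.kron(A, B)          # 4x4 block matrix [[a_ij * B]]
```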

ex-sp-ch06-06

Easy

Create a $4 \times 4$ Toeplitz matrix with first column $[2, 1, 0, 0]$ and first row $[2, -1, 0, 0]$. Verify that it represents a specific filter by comparing `T @ x` with `np.convolve(h, x)`.
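A possible setup, assuming the taps read off the matrix are $h = [-1, 2, 1]$ (so row $i$ computes $-x_{i+1} + 2x_i + x_{i-1}$) and comparing against the interior of the full convolution:

```python
import numpy as np
from scipy.linalg import toeplitz

T = toeplitz(c=[2, 1, 0, 0], r=[2, -1, 0, 0])
x = np.array([1.0, 4.0, 2.0, 3.0])

# Hypothetical filter taps implied by the first row/column
h = np.array([-1.0, 2.0, 1.0])
y_conv = np.convolve(h, x)        # full convolution, length 6
```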

ex-sp-ch06-07

Medium

Implement the MMSE detector for a $4 \times 4$ MIMO system. Given $\mathbf{y} = \mathbf{H}\mathbf{x} + \mathbf{n}$, compute

$$\hat{\mathbf{x}}_{\mathrm{MMSE}} = (\mathbf{H}^H\mathbf{H} + \sigma_n^2\mathbf{I})^{-1}\mathbf{H}^H\mathbf{y}$$

Compare with ZF detection at SNR = 0, 10, 20 dB over 1000 trials. Plot the symbol error rate vs SNR.
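A single-trial sketch of the two detectors (the trial loop, SNR sweep, and SER plot are left to the reader; the BPSK constellation and seed are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
H = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
x = rng.choice([-1.0, 1.0], size=n) + 0j       # BPSK symbols (illustrative)
sigma2 = 0.1                                   # assumed noise variance
noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
y = H @ x + noise

# MMSE: solve (H^H H + sigma^2 I) x = H^H y rather than forming the inverse
Hh = H.conj().T
x_mmse = np.linalg.solve(Hh @ H + sigma2 * np.eye(n), Hh @ y)
# Zero-forcing baseline: pseudo-inverse, amplifies noise in weak directions
x_zf = np.linalg.lstsq(H, y, rcond=None)[0]
```

As $\sigma_n^2 \to 0$ the MMSE solution reduces to the ZF solution, which makes a convenient sanity check.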

ex-sp-ch06-08

Medium

Compute the condition number of the Hilbert matrix $H_{ij} = 1/(i+j-1)$ for sizes $n = 2, 4, 6, 8, 10, 12$. Show how the condition number grows and explain why Hilbert matrices are notoriously ill-conditioned.

ex-sp-ch06-09

Medium

Build a graph Laplacian for a $10$-node ring graph (each node connected to its two neighbors) as a sparse matrix. Find the 3 smallest eigenvalues using `eigsh`. Verify that the smallest eigenvalue is 0 (connected graph).

ex-sp-ch06-10

Medium

Implement `kron_matvec(A, B, x)` that computes $(\mathbf{A} \otimes \mathbf{B})\mathbf{x}$ without forming the full Kronecker product. Benchmark it against `np.kron(A, B) @ x` for $50 \times 50$ factors.
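One standard trick uses the identity $(\mathbf{A} \otimes \mathbf{B})\,\mathrm{vec}(\mathbf{X}) = \mathrm{vec}(\mathbf{A}\mathbf{X}\mathbf{B}^T)$; with NumPy's row-major reshapes, `x.reshape(n, q)` plays the role of $\mathbf{X}$ (benchmarking is left to the reader):

```python
import numpy as np

def kron_matvec(A, B, x):
    """Compute (A kron B) @ x without forming the Kronecker product.

    Uses (A kron B) vec(X) = vec(A X B^T) with row-major reshapes,
    turning an O((nq)^2) matvec into two small matrix products.
    """
    n, q = A.shape[1], B.shape[1]
    X = x.reshape(n, q)
    return (A @ X @ B.T).reshape(-1)

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
B = rng.standard_normal((50, 50))
x = rng.standard_normal(50 * 50)
```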

ex-sp-ch06-11

Medium

Verify the circulant diagonalization theorem: for a circulant matrix $\mathbf{C}$ with first column $\mathbf{c} = [1, 2, 3, 4, 5]$, compute the eigenvalues both via `np.linalg.eig` and via the DFT ($\boldsymbol{\lambda} = \sqrt{n}\,\mathbf{F}\mathbf{c}$). Show they match.
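A sketch of the check; `np.fft.fft(c)` is the unnormalized DFT, which equals $\sqrt{n}\,\mathbf{F}\mathbf{c}$ for the unitary DFT matrix $\mathbf{F}$:

```python
import numpy as np
from scipy.linalg import circulant

c = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
C = circulant(c)                 # first column is c

lam_eig = np.linalg.eigvals(C)   # eigenvalues in arbitrary order
lam_dft = np.fft.fft(c)          # eigenvalues via the DFT of the first column
```

The two sets come back in different orders, so the comparison should match each DFT eigenvalue to its nearest computed eigenvalue rather than compare element-wise.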

ex-sp-ch06-12

Medium

Solve an overdetermined system $\mathbf{A}\mathbf{x} \approx \mathbf{b}$ with $m = 100$, $n = 5$ using three methods: normal equations, QR factorization, and `np.linalg.lstsq`. Compare the errors.
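The three solvers can be set up as follows (the error comparison against `x_true` is left to the reader; the noise level is illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 100, 5
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true + 0.01 * rng.standard_normal(m)

# 1. Normal equations (squares the condition number of A)
x_ne = np.linalg.solve(A.T @ A, A.T @ b)
# 2. QR: A = QR, then solve the triangular system R x = Q^T b
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)
# 3. Library routine (SVD-based)
x_ls = np.linalg.lstsq(A, b, rcond=None)[0]
```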

ex-sp-ch06-13

Medium

Compute the matrix exponential $e^{\mathbf{A}t}$ for the $2 \times 2$ system $\mathbf{A} = \begin{bmatrix} -1 & 1 \\ 0 & -2 \end{bmatrix}$ at 20 time points from $t = 0$ to $t = 5$. Verify that $e^{\mathbf{A} \cdot 0} = \mathbf{I}$ and that $\|e^{\mathbf{A}t}\| \to 0$ as $t \to \infty$ (stable system).
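A sketch using `scipy.linalg.expm`; the decay follows from both eigenvalues ($-1$ and $-2$) having negative real parts:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-1.0, 1.0], [0.0, -2.0]])
ts = np.linspace(0.0, 5.0, 20)
# Spectral norm of e^{At} at each time point
norms = [np.linalg.norm(expm(A * t), 2) for t in ts]
```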

ex-sp-ch06-14

Hard

Implement PCA (Principal Component Analysis) from scratch using eigendecomposition:

  1. Generate 1000 points from a 2D Gaussian with covariance $\mathbf{R} = \begin{bmatrix} 3 & 1.5 \\ 1.5 & 1 \end{bmatrix}$.
  2. Center the data (subtract the mean).
  3. Compute the sample covariance matrix.
  4. Find the eigenvectors via `eigh`.
  5. Project the data onto the principal component and compute the fraction of variance explained.
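The steps above can be sketched as (the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
R = np.array([[3.0, 1.5], [1.5, 1.0]])
X = rng.multivariate_normal(mean=[0.0, 0.0], cov=R, size=1000)  # step 1

Xc = X - X.mean(axis=0)                 # step 2: center
S = Xc.T @ Xc / (len(Xc) - 1)           # step 3: sample covariance
lam, V = np.linalg.eigh(S)              # step 4: ascending eigenvalues
pc1 = V[:, -1]                          # principal component
scores = Xc @ pc1                       # step 5: projection
explained = lam[-1] / lam.sum()         # fraction of variance explained
```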

ex-sp-ch06-15

Hard

Implement a function `sparse_pagerank(A, alpha=0.85, tol=1e-6)` that computes the PageRank vector for a sparse adjacency matrix using the power iteration method. Test on a random sparse graph with 10,000 nodes.
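One possible shape for the function; handling dangling nodes with a uniform out-distribution is an assumption the exercise leaves open, and a 3-node sanity check stands in for the 10,000-node benchmark:

```python
import numpy as np
from scipy import sparse

def sparse_pagerank(A, alpha=0.85, tol=1e-6, max_iter=200):
    """PageRank by power iteration; A[i, j] != 0 means an edge i -> j."""
    n = A.shape[0]
    out = np.asarray(A.sum(axis=1)).ravel()   # out-degrees
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        w = np.zeros(n)
        nz = out > 0
        w[nz] = r[nz] / out[nz]
        dangling = r[~nz].sum()               # mass from nodes with no out-links
        r_new = alpha * (A.T @ w + dangling / n) + (1.0 - alpha) / n
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new
    return r

# Sanity check: 0 -> {1, 2}, 1 -> 2, 2 -> 0
A = sparse.csr_matrix(np.array([[0, 1, 1],
                                [0, 0, 1],
                                [1, 0, 0]], dtype=float))
r = sparse_pagerank(A)
```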

ex-sp-ch06-16

Hard

Implement Tikhonov regularization with L-curve analysis:

  1. Generate a noisy overdetermined system with a known solution.
  2. Compute the Tikhonov solution for 50 values of $\alpha$ logarithmically spaced from $10^{-6}$ to $10^2$.
  3. Plot the L-curve: $\log\|\mathbf{A}\mathbf{x}_\alpha - \mathbf{b}\|$ vs $\log\|\mathbf{x}_\alpha\|$.
  4. Find the optimal $\alpha$ at the corner of the L-curve.

ex-sp-ch06-17

Hard

Show that the 2D DFT of an $N \times N$ image can be computed as a Kronecker product: $\mathrm{vec}(\mathbf{F}\mathbf{X}\mathbf{F}^T) = (\mathbf{F} \otimes \mathbf{F})\,\mathrm{vec}(\mathbf{X})$. Verify numerically for $N = 8$ and compare the timing of the direct Kronecker product vs `np.fft.fft2`.
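A sketch of the numerical check for $N = 8$ using `scipy.linalg.dft` (timing is left to the reader); with NumPy's row-major `reshape`, `X.reshape(-1)` plays the role of $\mathrm{vec}(\mathbf{X})$:

```python
import numpy as np
from scipy.linalg import dft

N = 8
F = dft(N)                          # unnormalized DFT matrix: fft(x) == F @ x
rng = np.random.default_rng(3)
X = rng.standard_normal((N, N))

lhs = (F @ X @ F.T).reshape(-1)     # vec(F X F^T)
rhs = np.kron(F, F) @ X.reshape(-1) # (F kron F) vec(X)
```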

ex-sp-ch06-18

Hard

Implement the Lanczos algorithm for finding the $k$ largest eigenvalues of a symmetric matrix. Compare your implementation with `scipy.sparse.linalg.eigsh` on a $1000 \times 1000$ sparse matrix.

ex-sp-ch06-19

Challenge

Implement a complete SVD-based MIMO system:

  1. Generate a $4 \times 4$ Rayleigh channel $\mathbf{H}$.
  2. Compute $\mathbf{H} = \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^H$.
  3. Implement SVD precoding: transmit $\mathbf{x} = \mathbf{V}\tilde{\mathbf{x}}$, receive $\tilde{\mathbf{y}} = \mathbf{U}^H\mathbf{y}$.
  4. Implement water-filling power allocation across the parallel channels $\tilde{y}_i = \sigma_i \tilde{x}_i + \tilde{n}_i$.
  5. Compute and plot the capacity vs SNR, comparing uniform power allocation with water-filling.

ex-sp-ch06-20

Challenge

Build a preconditioned conjugate gradient (PCG) solver for a large sparse system:

  1. Construct the 2D Laplacian on a $100 \times 100$ grid (10,000 unknowns) as a sparse matrix.
  2. Implement CG from scratch.
  3. Add Jacobi preconditioning ($\mathbf{M}^{-1} = \mathrm{diag}(\mathbf{A})^{-1}$).
  4. Compare convergence (iterations to tolerance $10^{-8}$) of CG, PCG, and `scipy.sparse.linalg.cg`.
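A from-scratch CG/PCG sketch, demonstrated on a smaller 1D Laplacian with the same tridiagonal structure (the 2D grid operator can be assembled with `sparse.kron`); note that with a constant diagonal the Jacobi step changes nothing, so its payoff only appears once the diagonal varies:

```python
import numpy as np
from scipy import sparse

def cg(A, b, M_inv=None, tol=1e-8, max_iter=1000):
    """Conjugate gradients; M_inv is an optional diagonal preconditioner (vector)."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = r * M_inv if M_inv is not None else r.copy()
    p = z.copy()
    rz = r @ z
    for k in range(1, max_iter + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k
        z = r * M_inv if M_inv is not None else r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# 1D Laplacian stand-in for the 2D grid
n = 100
A = sparse.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
x_cg, iters_cg = cg(A, b)
x_pcg, iters_pcg = cg(A, b, M_inv=1.0 / A.diagonal())
```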