Exercises

ex-sp-ch12-01

Easy

Create a $5 \times 5$ identity matrix as a PyTorch tensor with float64 dtype. Verify its dtype, device, and that it equals torch.eye(5).
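A minimal sketch of one approach (the comparison tensor is built with a matching dtype so the equality check is unambiguous):

```python
import torch

# Build the identity directly with the requested dtype.
I = torch.eye(5, dtype=torch.float64)

print(I.dtype)    # torch.float64
print(I.device)   # cpu unless a device was requested
print(torch.equal(I, torch.eye(5, dtype=torch.float64)))  # True
```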

ex-sp-ch12-02

Easy

Convert a NumPy array a = np.linspace(0, 1, 100) to a PyTorch tensor using torch.from_numpy. Verify that modifying the tensor changes the original array (shared memory).
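One possible check, relying on the fact that torch.from_numpy wraps the existing CPU buffer rather than copying it:

```python
import numpy as np
import torch

a = np.linspace(0, 1, 100)
t = torch.from_numpy(a)   # no copy: the tensor shares memory with `a`

t[0] = 42.0               # modify through the tensor...
print(a[0])               # ...and the change is visible in the array: 42.0
```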

ex-sp-ch12-03

Easy

Compute the gradient of $f(x) = x^3 - 2x^2 + x$ at $x = 3$ using autograd. Verify against the analytical derivative $f'(x) = 3x^2 - 4x + 1$.
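A short sketch; since $f'(3) = 27 - 12 + 1 = 16$, both printed values should agree:

```python
import torch

x = torch.tensor(3.0, requires_grad=True)
f = x**3 - 2 * x**2 + x
f.backward()

analytical = 3 * 3.0**2 - 4 * 3.0 + 1   # f'(x) = 3x^2 - 4x + 1 at x = 3
print(x.grad.item(), analytical)         # 16.0 and 16.0
```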

ex-sp-ch12-04

Easy

Use in-place operations to normalize a random tensor to have zero mean and unit variance. Do not create any intermediate tensors.
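One way to do this with in-place methods; the scalar mean and standard deviation are still 0-d tensors, so "no intermediates" here is read as "no full-size copies of the data":

```python
import torch

x = torch.randn(10_000)
x.sub_(x.mean()).div_(x.std())   # subtract mean, then divide by std, both in place

print(x.mean().item(), x.std().item())   # approximately 0.0 and 1.0
```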

ex-sp-ch12-05

Easy

Create a complex tensor $z = [1+2j, 3+4j, 5+6j]$ and compute its magnitude $|z|$, phase $\angle z$, and conjugate $z^*$.
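A minimal sketch using the built-in complex helpers:

```python
import torch

z = torch.tensor([1 + 2j, 3 + 4j, 5 + 6j])

print(torch.abs(z))     # magnitude |z|
print(torch.angle(z))   # phase in radians
print(torch.conj(z))    # complex conjugate z*
```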

ex-sp-ch12-06

Medium

Compute the Jacobian matrix of the function $\mathbf{f}(\mathbf{x}) = [\sin(x_1 x_2),\, \cos(x_1 + x_2),\, x_1^2 x_2]$ at $\mathbf{x} = [1, 2]$ using torch.autograd.functional.jacobian. Verify against the analytical Jacobian.
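One possible approach; the analytical Jacobian rows are $[x_2\cos(x_1x_2),\, x_1\cos(x_1x_2)]$, $[-\sin(x_1+x_2),\, -\sin(x_1+x_2)]$, and $[2x_1x_2,\, x_1^2]$:

```python
import math
import torch
from torch.autograd.functional import jacobian

def f(x):
    return torch.stack([
        torch.sin(x[0] * x[1]),
        torch.cos(x[0] + x[1]),
        x[0] ** 2 * x[1],
    ])

x = torch.tensor([1.0, 2.0])
J = jacobian(f, x)                       # shape (3, 2)

x1, x2 = 1.0, 2.0
J_true = torch.tensor([
    [x2 * math.cos(x1 * x2), x1 * math.cos(x1 * x2)],
    [-math.sin(x1 + x2),     -math.sin(x1 + x2)],
    [2 * x1 * x2,            x1 ** 2],
])
print(torch.allclose(J, J_true))         # True
```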

ex-sp-ch12-07

Medium

Implement Newton's method to find the root of $f(x) = x^3 - 2x - 5$ using autograd to compute $f'(x)$ at each step. Start from $x_0 = 2$.
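A sketch of the autograd-driven Newton iteration (the root is near 2.0946):

```python
import torch

def f(x):
    return x**3 - 2 * x - 5

x = torch.tensor(2.0, requires_grad=True)
for _ in range(20):
    y = f(x)
    (dfdx,) = torch.autograd.grad(y, x)   # f'(x) via autograd
    with torch.no_grad():
        x -= y / dfdx                     # Newton update
    if abs(y.item()) < 1e-10:
        break

print(x.item())   # ~2.0945514815
```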

ex-sp-ch12-08

Medium

Use batched torch.linalg.solve to solve 500 independent $3 \times 3$ linear systems $\mathbf{A}_i \mathbf{x}_i = \mathbf{b}_i$. Compare the time against a Python loop with NumPy.
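A possible timing harness; adding a multiple of the identity keeps the random systems well conditioned:

```python
import time
import numpy as np
import torch

A = torch.randn(500, 3, 3) + 3 * torch.eye(3)   # batch of well-conditioned systems
b = torch.randn(500, 3)

t0 = time.perf_counter()
x_batched = torch.linalg.solve(A, b)            # one batched call
t_torch = time.perf_counter() - t0

A_np, b_np = A.numpy(), b.numpy()
t0 = time.perf_counter()
x_loop = np.stack([np.linalg.solve(A_np[i], b_np[i]) for i in range(500)])
t_numpy = time.perf_counter() - t0

print(f"batched torch: {t_torch:.4f} s, NumPy loop: {t_numpy:.4f} s")
print(np.allclose(x_batched.numpy(), x_loop, atol=1e-5))
```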

ex-sp-ch12-09

Medium

Compute the FFT of a chirp signal $x(t) = \cos(2\pi(f_0 + \beta t)t)$ with $f_0 = 10$ Hz, $\beta = 50$ Hz/s, sampled at 500 Hz for 1 second. Plot the spectrogram to show the frequency increasing over time.
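A sketch using torch.fft for the spectrum and torch.stft for the spectrogram; the window length and hop size below are arbitrary choices:

```python
import math
import torch
import matplotlib.pyplot as plt

fs, f0, beta = 500, 10.0, 50.0
t = torch.arange(0, 1, 1 / fs)
x = torch.cos(2 * math.pi * (f0 + beta * t) * t)   # chirp: frequency rises with time

X = torch.fft.rfft(x)
freqs = torch.fft.rfftfreq(x.numel(), d=1 / fs)

spec = torch.stft(x, n_fft=64, hop_length=16,
                  window=torch.hann_window(64), return_complex=True).abs()

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(freqs, X.abs())
ax1.set(xlabel="Frequency (Hz)", ylabel="|X(f)|")
ax2.imshow(spec, origin="lower", aspect="auto", extent=[0, 1, 0, fs / 2])
ax2.set(xlabel="Time (s)", ylabel="Frequency (Hz)")
plt.tight_layout()
plt.show()
```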

ex-sp-ch12-10

Medium

Verify the Wirtinger derivative: for $L(z) = |z - z_0|^2$ where $z_0 = 2 + 3j$, confirm that z.grad equals $z - z_0$ at several test points.
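A possible test harness; PyTorch reports gradients of a real-valued loss with respect to a complex leaf using its conjugate-Wirtinger convention, so print both quantities and check whether z.grad matches $z - z_0$ directly or only up to an overall scale factor:

```python
import torch

z0 = torch.tensor(2.0 + 3.0j)

for z_val in [0.0 + 0.0j, 1.0 - 1.0j, -2.0 + 0.5j]:
    z = torch.tensor(z_val, requires_grad=True)
    L = (z - z0).abs() ** 2         # real-valued loss |z - z0|^2
    L.backward()
    print(z.grad, z.detach() - z0)  # compare the autograd result with z - z0
```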

ex-sp-ch12-11

Hard

Implement the power method for finding the dominant eigenvalue of a matrix using PyTorch, and differentiate through it with autograd to compute $\partial \lambda_1 / \partial \mathbf{A}$.
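A sketch for a symmetric test matrix, where the analytical result $\partial \lambda_1 / \partial \mathbf{A}$ is the outer product of the (normalized) dominant eigenvector with itself:

```python
import torch

def dominant_eigenvalue(A, n_iter=200):
    """Power iteration built entirely from differentiable torch ops."""
    v = torch.ones(A.shape[0], dtype=A.dtype)
    for _ in range(n_iter):
        v = A @ v
        v = v / torch.linalg.norm(v)
    return v @ (A @ v)                 # Rayleigh quotient (v has unit norm)

A = torch.tensor([[4.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]], requires_grad=True)

lam = dominant_eigenvalue(A)
lam.backward()                          # d(lambda_1)/dA, shape (3, 3)
print(lam.item())
print(A.grad)                           # ~ outer(v1, v1) for a symmetric A
```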

ex-sp-ch12-12

Hard

Write a backend-agnostic function that computes the condition number of a matrix. It should work with NumPy, PyTorch, and CuPy arrays using the Array API.
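One possible approach builds on the array-api-compat package (an assumption: it is installed and provides a common Array API namespace for NumPy, PyTorch, and CuPy) and computes the 2-norm condition number from singular values:

```python
import array_api_compat

def cond(A):
    """2-norm condition number via the Array API: max(sigma) / min(sigma)."""
    xp = array_api_compat.array_namespace(A)   # backend-specific namespace
    s = xp.linalg.svdvals(A)                   # singular values
    return xp.max(s) / xp.min(s)
```

The same function should then accept a NumPy array, a PyTorch tensor, or a CuPy array without modification, since only the namespace returned by array_namespace changes.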

ex-sp-ch12-13

Hard

Implement a differentiable Tikhonov-regularized least squares solver: $\hat{\mathbf{x}} = (\mathbf{A}^H\mathbf{A} + \alpha \mathbf{I})^{-1}\mathbf{A}^H\mathbf{b}$. Then use autograd to find the optimal regularization parameter $\alpha$ that minimizes the MSE against a known ground truth.
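A sketch under an illustrative synthetic setup: the solver is a pure torch expression, so autograd can push the MSE gradient back to the regularization parameter, which is optimized in log-space here to keep it positive:

```python
import torch

def tikhonov_solve(A, b, alpha):
    """x_hat = (A^H A + alpha I)^{-1} A^H b, built from differentiable ops."""
    n = A.shape[1]
    AH = A.conj().transpose(-2, -1)
    return torch.linalg.solve(AH @ A + alpha * torch.eye(n, dtype=A.dtype), AH @ b)

torch.manual_seed(0)
A = torch.randn(50, 20)
x_true = torch.randn(20)                       # known ground truth
b = A @ x_true + 0.1 * torch.randn(50)         # noisy measurements

log_alpha = torch.tensor(0.0, requires_grad=True)
opt = torch.optim.Adam([log_alpha], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    x_hat = tikhonov_solve(A, b, log_alpha.exp())
    mse = torch.mean((x_hat - x_true) ** 2)
    mse.backward()
    opt.step()

print(log_alpha.exp().item(), mse.item())
```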

ex-sp-ch12-14

Challenge

Implement an end-to-end differentiable beamformer. Given a batch of channel vectors $\mathbf{h}_i \in \mathbb{C}^4$, find beamforming weights $\mathbf{w} \in \mathbb{C}^4$ that maximize the average SINR across the batch, using gradient ascent through the complex-valued SINR expression.
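A minimal sketch under stated assumptions: each batch element has a desired channel $\mathbf{h}_i$, an interference channel $\mathbf{g}_i$, and fixed noise power $\sigma^2$, so $\mathrm{SINR}_i = |\mathbf{w}^H\mathbf{h}_i|^2 / (|\mathbf{w}^H\mathbf{g}_i|^2 + \sigma^2\|\mathbf{w}\|^2)$; the channel data below are random placeholders:

```python
import torch

torch.manual_seed(0)
B = 64
h = torch.randn(B, 4, dtype=torch.cfloat)   # desired channels (placeholder data)
g = torch.randn(B, 4, dtype=torch.cfloat)   # interference channels (placeholder)
sigma2 = 0.1                                # noise power (assumed)

w = torch.randn(4, dtype=torch.cfloat, requires_grad=True)
opt = torch.optim.Adam([w], lr=0.01)        # optimizers accept complex parameters

def avg_sinr(w):
    signal = (h @ w.conj()).abs() ** 2      # |w^H h_i|^2, shape (B,)
    interf = (g @ w.conj()).abs() ** 2      # |w^H g_i|^2
    noise = sigma2 * (w.abs() ** 2).sum()   # sigma^2 * ||w||^2
    return (signal / (interf + noise)).mean()

for _ in range(500):
    opt.zero_grad()
    loss = -avg_sinr(w)                     # gradient ascent = minimize the negative
    loss.backward()
    opt.step()

print(avg_sinr(w).item())
```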

ex-sp-ch12-15

Challenge

Build a mixed SciPy + PyTorch pipeline: use scipy.sparse to assemble a finite-difference Laplacian matrix for a 2D Poisson equation on a $50 \times 50$ grid, convert to a dense PyTorch tensor, solve the system on GPU using torch.linalg.solve, then differentiate the solution with respect to the right-hand side forcing function.
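One way to wire the pipeline together; the 5-point Laplacian is assembled as a Kronecker sum with implicit Dirichlet boundaries, the forcing is an illustrative choice, and the device falls back to CPU when no GPU is present:

```python
import numpy as np
import scipy.sparse as sp
import torch

n = 50
# 2D finite-difference Laplacian via a Kronecker sum (scipy.sparse assembly)
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
L = sp.kron(sp.identity(n), T) + sp.kron(T, sp.identity(n))   # (2500, 2500)

device = "cuda" if torch.cuda.is_available() else "cpu"
A = torch.tensor(L.toarray(), dtype=torch.float64, device=device)

# Forcing function sampled on the grid and flattened
x = torch.linspace(0, 1, n, dtype=torch.float64, device=device)
X, Y = torch.meshgrid(x, x, indexing="ij")
f = (torch.sin(np.pi * X) * torch.sin(np.pi * Y)).reshape(-1).requires_grad_(True)

u = torch.linalg.solve(A, f)    # dense solve (on GPU if available)

# Gradient of a scalar functional of the solution w.r.t. the forcing
J = u.sum()
J.backward()
print(f.grad.shape)             # (2500,) = d(sum u)/df, i.e. A^{-T} applied to ones
```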