Exercises
ex-sp-ch04-01
Easy: Add type annotations to the following function. Include parameter types,
a return type, and a TypeAlias for the channel matrix:
def make_channel(n_rx, n_tx, seed=None):
    rng = np.random.default_rng(seed)
    return (rng.standard_normal((n_rx, n_tx))
            + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
Use TypeAlias from typing for ChannelMatrix = np.ndarray.
The seed can be int | None.
Annotated version
from typing import TypeAlias
import numpy as np
ChannelMatrix: TypeAlias = np.ndarray
def make_channel(
    n_rx: int,
    n_tx: int,
    seed: int | None = None,
) -> ChannelMatrix:
    rng = np.random.default_rng(seed)
    return (rng.standard_normal((n_rx, n_tx))
            + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
ex-sp-ch04-02
Easy: Write a function validate_mimo_inputs that takes a channel matrix H
and received vector y, checks their shapes, and raises appropriate
exceptions (not assertions) with descriptive error messages.
Check ndim first, then check dimension compatibility.
Use ValueError with f-strings that include the actual shapes.
Implementation
def validate_mimo_inputs(H: np.ndarray, y: np.ndarray) -> None:
    if H.ndim != 2:
        raise ValueError(f"H must be 2-D, got {H.ndim}-D with shape {H.shape}")
    if y.ndim != 1:
        raise ValueError(f"y must be 1-D, got {y.ndim}-D with shape {y.shape}")
    if H.shape[0] != y.shape[0]:
        raise ValueError(
            f"Dimension mismatch: H has {H.shape[0]} rows but "
            f"y has {y.shape[0]} elements"
        )
ex-sp-ch04-03
Easy: Write a pytest test function that verifies np.fft.ifft(np.fft.fft(x))
recovers the original signal x for a random input vector.
Use np.testing.assert_allclose with an appropriate tolerance.
Create a seeded random vector with np.random.default_rng(42).
The FFT/IFFT roundtrip should be exact to machine precision: use atol=1e-14.
Test implementation
import numpy as np
from numpy.testing import assert_allclose
def test_fft_roundtrip():
    rng = np.random.default_rng(42)
    x = rng.standard_normal(128) + 1j * rng.standard_normal(128)
    x_recovered = np.fft.ifft(np.fft.fft(x))
    assert_allclose(x_recovered, x, atol=1e-14)
ex-sp-ch04-04
Easy: Write a parametrized pytest test that checks np.linalg.det(A @ B) == det(A) * det(B)
for square matrices of sizes 2, 3, 5, and 10.
Use @pytest.mark.parametrize('n', [2, 3, 5, 10]).
Use a seeded RNG for reproducibility.
Use assert_allclose with rtol=1e-10.
Parametrized test
import pytest
import numpy as np
from numpy.testing import assert_allclose
@pytest.mark.parametrize("n", [2, 3, 5, 10])
def test_det_multiplicative(n):
    rng = np.random.default_rng(42)
    A = rng.standard_normal((n, n))
    B = rng.standard_normal((n, n))
    det_AB = np.linalg.det(A @ B)
    det_A_times_det_B = np.linalg.det(A) * np.linalg.det(B)
    assert_allclose(det_AB, det_A_times_det_B, rtol=1e-10)
ex-sp-ch04-05
Easy: Create a pyproject.toml file for a package named signal-tools
with version 0.1.0, Python >= 3.11, dependencies on NumPy and SciPy,
and a dev dependency group containing pytest and mypy.
Use [build-system], [project], and [project.optional-dependencies] sections.
Complete pyproject.toml
[build-system]
requires = ["setuptools>=68.0"]
build-backend = "setuptools.build_meta"
[project]
name = "signal-tools"
version = "0.1.0"
requires-python = ">=3.11"
dependencies = [
"numpy>=1.26",
"scipy>=1.12",
]
[project.optional-dependencies]
dev = ["pytest>=8.0", "mypy>=1.8"]
[tool.setuptools.packages.find]
where = ["src"]
ex-sp-ch04-06
Medium: Write a generic function apply_to_columns that applies a callable
to each column of a 2-D NumPy array and returns the stacked results.
Use TypeVar and Callable for proper typing. Write a test that
uses it with both np.mean and np.fft.fft.
The callable signature is Callable[[np.ndarray], np.ndarray].
Use np.column_stack to assemble results.
Implementation and test
from typing import Callable
import numpy as np
from numpy.testing import assert_allclose

def apply_to_columns(
    data: np.ndarray,
    func: Callable[[np.ndarray], np.ndarray],
) -> np.ndarray:
    if data.ndim != 2:
        raise ValueError(f"Expected 2-D array, got {data.ndim}-D")
    results = [func(data[:, i]) for i in range(data.shape[1])]
    return np.column_stack(results)

def test_apply_mean():
    data = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
    result = apply_to_columns(data, lambda col: np.array([np.mean(col)]))
    assert_allclose(result, np.array([[3.0, 4.0]]))
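The exercise also calls for exercising apply_to_columns with np.fft.fft; a sketch of that second test (the function is restated inline so the snippet runs on its own):

```python
import numpy as np
from numpy.testing import assert_allclose

def apply_to_columns(data, func):
    if data.ndim != 2:
        raise ValueError(f"Expected 2-D array, got {data.ndim}-D")
    return np.column_stack([func(data[:, i]) for i in range(data.shape[1])])

def test_apply_fft():
    rng = np.random.default_rng(0)
    data = rng.standard_normal((16, 3))
    # Column-wise FFT should match one FFT taken along axis 0
    assert_allclose(apply_to_columns(data, np.fft.fft), np.fft.fft(data, axis=0))
```

Comparing against np.fft.fft(data, axis=0) is a handy oracle: the vectorized axis call and the per-column loop compute the same transform.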
ex-sp-ch04-07
Medium: Use beartype to decorate a function that computes the MMSE equalizer
W = (H^H H + noise_var * I)^(-1) H^H.
The function should accept n_rx >= n_tx and noise_var > 0.
Write tests that verify beartype catches invalid inputs.
beartype checks types but not value constraints — add explicit raise ValueError for value checks.
Use pytest.raises to test that errors are raised.
Implementation
import pytest
from beartype import beartype
import numpy as np

@beartype
def mmse_equalizer(
    H: np.ndarray,
    noise_var: float,
) -> np.ndarray:
    if noise_var <= 0:
        raise ValueError(f"noise_var must be positive, got {noise_var}")
    if H.ndim != 2:
        raise ValueError(f"H must be 2-D, got shape {H.shape}")
    Nr, Nt = H.shape
    return np.linalg.inv(H.conj().T @ H + noise_var * np.eye(Nt)) @ H.conj().T

def test_mmse_rejects_string():
    with pytest.raises(Exception):
        mmse_equalizer("not an array", 0.1)

def test_mmse_rejects_negative_noise():
    H = np.eye(4) + 0j
    with pytest.raises(ValueError, match="positive"):
        mmse_equalizer(H, -0.1)
ex-sp-ch04-08
Medium: Write a pytest fixture that provides a "simulation environment" with
a channel matrix, noise, and transmitted signal. Parametrize the
fixture over (n_rx, n_tx) pairs [(4,2), (8,4), (16,8)].
Use this fixture to test that ZF detection recovers the transmitted
signal at high SNR.
Use @pytest.fixture(params=[...]) with request.param.
At high SNR (e.g., 40 dB), detection should be near-perfect.
Fixture and test
import pytest
import numpy as np
from numpy.testing import assert_allclose

@pytest.fixture(params=[(4, 2), (8, 4), (16, 8)])
def mimo_env(request):
    n_rx, n_tx = request.param
    rng = np.random.default_rng(42)
    H = (rng.standard_normal((n_rx, n_tx))
         + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
    x = rng.choice([-1, 1], n_tx) + 0j
    snr_linear = 10 ** (40 / 10)
    noise = rng.standard_normal(n_rx) / np.sqrt(snr_linear) + 0j
    y = H @ x + noise
    return {"H": H, "y": y, "x": x, "config": (n_rx, n_tx)}

def test_zf_high_snr(mimo_env):
    x_hat = np.linalg.pinv(mimo_env["H"]) @ mimo_env["y"]
    x_detected = np.sign(x_hat.real)
    assert_allclose(x_detected, mimo_env["x"].real, atol=0.1)
ex-sp-ch04-09
Medium: Profile the following two implementations of a correlation matrix computation and report which is faster and why:
def corr_loop(X):
    n = X.shape[1]
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            C[i, j] = np.mean(X[:, i] * X[:, j])
    return C

def corr_vectorized(X):
    return (X.T @ X) / X.shape[0]
Use time.perf_counter() or cProfile to compare.
Generate X with shape (10000, 50) for meaningful timing.
Profiling comparison
import time
import numpy as np
rng = np.random.default_rng(42)
X = rng.standard_normal((10000, 50))
t0 = time.perf_counter()
C1 = corr_loop(X)
t_loop = time.perf_counter() - t0
t0 = time.perf_counter()
C2 = corr_vectorized(X)
t_vec = time.perf_counter() - t0
print(f"Loop: {t_loop:.4f}s")
print(f"Vectorized: {t_vec:.6f}s")
print(f"Speedup: {t_loop / t_vec:.0f}x")
# Typical output:
# Loop: 0.8500s
# Vectorized: 0.000800s
# Speedup: 1000x
np.testing.assert_allclose(C1, C2, rtol=1e-12)
ex-sp-ch04-10
Medium: Use hypothesis to test that for any square matrix A, the trace
equals the sum of the eigenvalues: tr(A) = sum_i lambda_i.
Handle the fact that eigenvalues may be complex.
Use st.integers for the matrix size and a second st.integers for the seed.
Compare np.trace(A) with np.sum(np.linalg.eigvals(A)).
Use np.real() since the trace of a real matrix is real.
Property test
from hypothesis import given, settings
from hypothesis import strategies as st
import numpy as np
from numpy.testing import assert_allclose
@given(
n=st.integers(min_value=1, max_value=30),
seed=st.integers(min_value=0, max_value=2**32 - 1),
)
@settings(max_examples=200)
def test_trace_equals_eigenvalue_sum(n, seed):
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, n))
    trace = np.trace(A)
    eigenvalue_sum = np.sum(np.linalg.eigvals(A)).real
    assert_allclose(trace, eigenvalue_sum, rtol=1e-10)
ex-sp-ch04-11
Medium: Create a complete project structure with src layout for a package
called ofdm-tools. Include:
- pyproject.toml with dependencies
- src/ofdm_tools/__init__.py with a public API
- src/ofdm_tools/modulation.py with a qam_modulate function
- tests/test_modulation.py with at least 3 tests
- A CLI entry point that modulates a random bit sequence
Use [project.scripts] for the entry point.
The qam_modulate function maps bit groups to QAM constellation points.
Project structure
ofdm-tools/
+-- pyproject.toml
+-- src/ofdm_tools/
| +-- __init__.py
| +-- modulation.py
| +-- cli.py
+-- tests/
+-- test_modulation.py
Key files
# src/ofdm_tools/modulation.py
import numpy as np
from typing import Literal
def qam_modulate(
    bits: np.ndarray,
    order: Literal[4, 16, 64] = 4,
) -> np.ndarray:
    bits_per_symbol = int(np.log2(order))
    n_symbols = len(bits) // bits_per_symbol
    bits = bits[:n_symbols * bits_per_symbol]
    # Square QAM constellation with natural binary mapping
    # (a production modulator would use Gray coding)
    M = int(np.sqrt(order))
    symbols = np.zeros(n_symbols, dtype=complex)
    for i in range(n_symbols):
        b = bits[i * bits_per_symbol:(i + 1) * bits_per_symbol]
        idx = int("".join(str(x) for x in b), 2)
        re = 2 * (idx % M) - M + 1
        im = 2 * (idx // M) - M + 1
        symbols[i] = complex(re, im)
    return symbols / np.sqrt(np.mean(np.abs(symbols) ** 2))
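The exercise asks for at least three tests, which the solution does not show. A sketch of tests/test_modulation.py (the modulator is restated inline so the file runs standalone; test names are illustrative):

```python
import numpy as np

def qam_modulate(bits, order=4):
    # Inline copy of the modulator above, so this test file is self-contained
    bits_per_symbol = int(np.log2(order))
    n_symbols = len(bits) // bits_per_symbol
    bits = bits[:n_symbols * bits_per_symbol]
    M = int(np.sqrt(order))
    symbols = np.zeros(n_symbols, dtype=complex)
    for i in range(n_symbols):
        b = bits[i * bits_per_symbol:(i + 1) * bits_per_symbol]
        idx = int("".join(str(x) for x in b), 2)
        symbols[i] = complex(2 * (idx % M) - M + 1, 2 * (idx // M) - M + 1)
    return symbols / np.sqrt(np.mean(np.abs(symbols) ** 2))

def test_symbol_count():
    s = qam_modulate(np.zeros(8, dtype=int), order=4)
    assert len(s) == 4  # QPSK carries 2 bits per symbol

def test_unit_average_power():
    rng = np.random.default_rng(1)
    s = qam_modulate(rng.integers(0, 2, 600), order=16)
    assert abs(np.mean(np.abs(s) ** 2) - 1.0) < 1e-12

def test_qpsk_constant_modulus():
    rng = np.random.default_rng(2)
    s = qam_modulate(rng.integers(0, 2, 100), order=4)
    assert np.allclose(np.abs(s), 1.0)  # all QPSK points lie on the unit circle
```

The power test holds by construction, since the modulator normalizes by the measured mean symbol power.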
ex-sp-ch04-12
Hard: Implement a @validate_shapes decorator that checks array argument
shapes against a specification string at runtime. Support:
- Named dimensions: "(N, M)", where the same letter must match across arguments
- Wildcard: "(*,)", which accepts any shape
- Fixed: "(3, 3)", which requires that exact shape
Write comprehensive tests including edge cases.
Parse the shape spec string to extract dimension names/values.
Track named dimensions across arguments for consistency.
Use inspect.signature to bind arguments by name.
Decorator implementation
import functools, inspect
import numpy as np

def validate_shapes(**specs):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            sig = inspect.signature(fn)
            bound = sig.bind(*args, **kwargs)
            bound.apply_defaults()
            dim_map = {}
            for name, spec in specs.items():
                arr = bound.arguments[name]
                # Drop empty entries so "(*,)" parses to ["*"]
                dims = [d.strip() for d in spec.strip("()").split(",") if d.strip()]
                if dims == ["*"]:
                    continue
                if len(dims) != arr.ndim:
                    raise ValueError(f"{name}: expected {len(dims)}-D, got {arr.ndim}-D")
                for i, d in enumerate(dims):
                    if d.isdigit():
                        if arr.shape[i] != int(d):
                            raise ValueError(f"{name} dim {i}: expected {d}, got {arr.shape[i]}")
                    elif d.isalpha():
                        if d in dim_map:
                            if dim_map[d] != arr.shape[i]:
                                raise ValueError(f"Dim {d}: expected {dim_map[d]}, got {arr.shape[i]}")
                        else:
                            dim_map[d] = arr.shape[i]
            return fn(*args, **kwargs)
        return wrapper
    return decorator
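A smoke-test sketch for the decorator (restated compactly here so the snippet is self-contained; matmul and its "(N, K)"/"(K, M)" specs are an illustrative target, not part of the exercise statement):

```python
import functools, inspect
import numpy as np

def validate_shapes(**specs):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            bound = inspect.signature(fn).bind(*args, **kwargs)
            bound.apply_defaults()
            dim_map = {}
            for name, spec in specs.items():
                arr = bound.arguments[name]
                dims = [d.strip() for d in spec.strip("()").split(",") if d.strip()]
                if dims == ["*"]:
                    continue
                if len(dims) != arr.ndim:
                    raise ValueError(f"{name}: expected {len(dims)}-D, got {arr.ndim}-D")
                for i, d in enumerate(dims):
                    if d.isdigit() and arr.shape[i] != int(d):
                        raise ValueError(f"{name} dim {i}: expected {d}, got {arr.shape[i]}")
                    if d.isalpha():
                        # Named dimensions must agree across all arguments
                        if dim_map.setdefault(d, arr.shape[i]) != arr.shape[i]:
                            raise ValueError(f"Dim {d}: expected {dim_map[d]}, got {arr.shape[i]}")
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@validate_shapes(A="(N, K)", B="(K, M)")
def matmul(A, B):
    return A @ B

assert matmul(np.ones((2, 3)), np.ones((3, 4))).shape == (2, 4)
try:
    matmul(np.ones((2, 3)), np.ones((5, 4)))  # inner dims disagree: 3 vs 5
except ValueError as e:
    print("caught:", e)
```

In a pytest suite the failure case would use pytest.raises(ValueError, match="Dim K") instead of the try/except shown here.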
ex-sp-ch04-13
Hard: Write a complete test suite for a KalmanFilter class with at least
10 test functions covering:
- Constructor validation (dimensions, positive definite covariance)
- Predict step (state and covariance update)
- Update step (Kalman gain, innovation)
- Full filter on a synthetic 1-D tracking problem
- Edge cases (zero noise, identity matrices)
Use fixtures, parametrize, and assert_allclose.
Create a @pytest.fixture that returns a configured KalmanFilter.
Test properties like: predict increases uncertainty, update decreases it.
For the tracking problem, generate ground truth + noisy measurements.
Test structure overview
@pytest.fixture
def kf_1d():
    return KalmanFilter(
        F=np.array([[1, 1], [0, 1]]),  # Constant velocity
        H=np.array([[1, 0]]),          # Observe position
        Q=0.1 * np.eye(2),             # Process noise
        R=np.array([[1.0]]),           # Measurement noise
        x0=np.zeros(2),
        P0=np.eye(2),
    )

def test_predict_increases_uncertainty(kf_1d):
    trace_before = np.trace(kf_1d.P)
    kf_1d.predict()
    # P_after = F @ P_before @ F.T + Q; total uncertainty (trace) grows.
    # (P_after - P_before is not PSD in general for this F, so compare traces.)
    assert np.trace(kf_1d.P) > trace_before

def test_update_decreases_uncertainty(kf_1d):
    kf_1d.predict()
    P_predicted = kf_1d.P.copy()
    kf_1d.update(np.array([1.0]))
    # P_pred - P_post = K @ S @ K.T is PSD, so all eigenvalues are >= 0
    diff = P_predicted - kf_1d.P
    eigenvalues = np.linalg.eigvalsh(diff)
    assert np.all(eigenvalues >= -1e-12)
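For the synthetic 1-D tracking test, a hedged sketch: the KalmanFilter class under test is not shown in this chapter, so a minimal textbook implementation with the same constructor signature as the fixture is assumed here purely to make the test runnable.

```python
import numpy as np

class KalmanFilter:
    # Minimal reference implementation (assumed interface matching the
    # fixture above; the exercise's class may differ in detail)
    def __init__(self, F, H, Q, R, x0, P0):
        self.F, self.H, self.Q, self.R = F, H, Q, R
        self.x = x0.astype(float)
        self.P = P0.astype(float)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z):
        S = self.H @ self.P @ self.H.T + self.R   # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)  # Kalman gain
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(len(self.x)) - K @ self.H) @ self.P

def test_tracks_constant_velocity():
    rng = np.random.default_rng(7)
    kf = KalmanFilter(
        F=np.array([[1.0, 1.0], [0.0, 1.0]]),
        H=np.array([[1.0, 0.0]]),
        Q=0.01 * np.eye(2),
        R=np.array([[1.0]]),
        x0=np.zeros(2),
        P0=10.0 * np.eye(2),
    )
    true_pos = np.arange(100) * 0.5              # ground truth, velocity 0.5
    for z in true_pos + rng.standard_normal(100):
        kf.predict()
        kf.update(np.array([z]))
    assert abs(kf.x[0] - true_pos[-1]) < 3.0     # position within noise level
    assert abs(kf.x[1] - 0.5) < 0.3              # velocity estimate converged
```

The tolerances are deliberately loose: with R = 1 the single-measurement noise floor dominates, and the test should only assert that the filter converged, not a precise error.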
ex-sp-ch04-14
Hard: Profile a naïve OFDM transmitter that uses Python loops and optimize it
to be at least 50x faster using vectorized NumPy operations. Measure
and report the speedup using cProfile or time.perf_counter.
The transmitter pipeline: QAM modulation -> serial-to-parallel -> IFFT -> add cyclic prefix -> parallel-to-serial.
The naïve version uses a loop over OFDM symbols.
The vectorized version processes all symbols as a 2-D array.
Profiling and optimization
def ofdm_tx_naive(bits, n_fft=64, cp_len=16, mod_order=4):
    symbols = qam_modulate(bits, mod_order)
    n_ofdm = len(symbols) // n_fft
    output = []
    for i in range(n_ofdm):
        freq_domain = symbols[i * n_fft:(i + 1) * n_fft]
        time_domain = np.fft.ifft(freq_domain)
        with_cp = np.concatenate([time_domain[-cp_len:], time_domain])
        output.extend(with_cp)
    return np.array(output)

def ofdm_tx_vectorized(bits, n_fft=64, cp_len=16, mod_order=4):
    symbols = qam_modulate(bits, mod_order)
    n_ofdm = len(symbols) // n_fft
    freq = symbols[:n_ofdm * n_fft].reshape(n_ofdm, n_fft)
    time = np.fft.ifft(freq, axis=1)
    with_cp = np.hstack([time[:, -cp_len:], time])
    return with_cp.ravel()
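A small harness can confirm the two versions agree before timing them. In this sketch the IFFT-plus-cyclic-prefix stage is factored out (as add_cp_naive / add_cp_vectorized, illustrative names) so the snippet does not depend on qam_modulate; it is fed raw symbols instead of bits.

```python
import time
import numpy as np

def add_cp_naive(symbols, n_fft=64, cp_len=16):
    # Loop version: IFFT + cyclic prefix, one OFDM symbol at a time
    out = []
    for i in range(len(symbols) // n_fft):
        td = np.fft.ifft(symbols[i * n_fft:(i + 1) * n_fft])
        out.extend(np.concatenate([td[-cp_len:], td]))
    return np.array(out)

def add_cp_vectorized(symbols, n_fft=64, cp_len=16):
    # Batch version: one 2-D IFFT, prefix added via slicing
    n = len(symbols) // n_fft
    td = np.fft.ifft(symbols[:n * n_fft].reshape(n, n_fft), axis=1)
    return np.hstack([td[:, -cp_len:], td]).ravel()

rng = np.random.default_rng(0)
symbols = rng.standard_normal(64 * 2000) + 1j * rng.standard_normal(64 * 2000)
np.testing.assert_allclose(add_cp_naive(symbols), add_cp_vectorized(symbols), atol=1e-12)

t0 = time.perf_counter(); add_cp_naive(symbols); t_naive = time.perf_counter() - t0
t0 = time.perf_counter(); add_cp_vectorized(symbols); t_vec = time.perf_counter() - t0
print(f"Speedup: {t_naive / t_vec:.1f}x")
```

The equivalence check matters as much as the timing: an optimization that changes the output is a bug, not a speedup.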
ex-sp-ch04-15
Hard: Create a Protocol called Estimator that specifies fit(X, y) and
predict(X) methods with full type annotations. Implement two
classes satisfying this Protocol: LeastSquares and RidgeRegression.
Write property-based tests with hypothesis verifying:
- Fitting on noiseless data recovers the true coefficients
- Ridge regularization shrinks coefficients toward zero
- Both estimators produce predictions with the correct shape
Use @runtime_checkable so you can isinstance check.
For property 1, generate random (N, D) data with N > D.
Protocol and implementations
from typing import Protocol, runtime_checkable
import numpy as np
@runtime_checkable
class Estimator(Protocol):
    def fit(self, X: np.ndarray, y: np.ndarray) -> None: ...
    def predict(self, X: np.ndarray) -> np.ndarray: ...

class LeastSquares:
    def __init__(self) -> None:
        self.coef_: np.ndarray | None = None
    def fit(self, X: np.ndarray, y: np.ndarray) -> None:
        self.coef_ = np.linalg.lstsq(X, y, rcond=None)[0]
    def predict(self, X: np.ndarray) -> np.ndarray:
        return X @ self.coef_

class RidgeRegression:
    def __init__(self, alpha: float = 1.0) -> None:
        self.alpha = alpha
        self.coef_: np.ndarray | None = None
    def fit(self, X: np.ndarray, y: np.ndarray) -> None:
        self.coef_ = np.linalg.solve(
            X.T @ X + self.alpha * np.eye(X.shape[1]), X.T @ y
        )
    def predict(self, X: np.ndarray) -> np.ndarray:
        return X @ self.coef_
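Before generalizing with hypothesis, the three properties can be spot-checked deterministically. A sketch with inline copies of the estimators (the sizes, seed, and alpha value are arbitrary choices, not from the exercise):

```python
import numpy as np

class LeastSquares:
    def fit(self, X, y):
        self.coef_ = np.linalg.lstsq(X, y, rcond=None)[0]
    def predict(self, X):
        return X @ self.coef_

class RidgeRegression:
    def __init__(self, alpha=1.0):
        self.alpha = alpha
    def fit(self, X, y):
        self.coef_ = np.linalg.solve(
            X.T @ X + self.alpha * np.eye(X.shape[1]), X.T @ y)
    def predict(self, X):
        return X @ self.coef_

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5))          # N > D, full column rank w.h.p.
w_true = rng.standard_normal(5)
y = X @ w_true                             # noiseless data

ls = LeastSquares()
ls.fit(X, y)
assert np.allclose(ls.coef_, w_true)       # property 1: exact recovery

ridge = RidgeRegression(alpha=10.0)
ridge.fit(X, y)
assert np.linalg.norm(ridge.coef_) < np.linalg.norm(ls.coef_)  # property 2: shrinkage

assert ls.predict(X).shape == y.shape      # property 3: prediction shape
```

Each assertion maps directly onto one hypothesis property; the hypothesis version replaces the fixed seed and sizes with st.integers strategies.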
ex-sp-ch04-16
Hard: Build a complete Python package spectral-analyzer with:
- src layout with two modules: transforms.py and windows.py
- CLI entry point that reads a WAV file and plots the spectrogram
- Type annotations on all public functions
- A test suite with at least 8 tests including fixtures and parametrize
- pyproject.toml with dependencies and tool configuration for mypy and pytest
Use scipy.io.wavfile for reading WAV files.
Test Parseval's theorem: energy in time domain equals energy in frequency domain.
Project structure
spectral-analyzer/
+-- pyproject.toml
+-- src/spectral_analyzer/
| +-- __init__.py
| +-- transforms.py
| +-- windows.py
| +-- cli.py
+-- tests/
+-- conftest.py
+-- test_transforms.py
+-- test_windows.py
ex-sp-ch04-17
Challenge: Implement a shape-aware type system for NumPy arrays using Python's
__class_getitem__ protocol. Create a ShapedArray[N, M] type that
can be used in annotations and checked at runtime:
def matmul(A: ShapedArray[N, K], B: ShapedArray[K, M]) -> ShapedArray[N, M]:
    ...
The type checker should verify dimension consistency across function arguments. Write a comprehensive test suite.
Use TypeVar for dimension variables N, K, M.
Override __class_getitem__ to store shape metadata.
A decorator can extract and validate shape metadata at runtime.
Approach
This requires a custom metaclass or __class_getitem__ to capture
shape parameters, plus a decorator that inspects annotations and
validates shapes at call time. The key insight is that dimension
variables must be tracked across arguments to ensure consistency
(e.g., the K in A's columns must match the K in B's rows).
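The approach above can be sketched as follows; Dim, check_shapes, and the N/K/M variables are illustrative names for one possible design, not a standard API:

```python
import functools
import inspect
import numpy as np

class Dim:
    # A dimension variable used as a ShapedArray parameter
    def __init__(self, name):
        self.name = name

N, K, M = Dim("N"), Dim("K"), Dim("M")

class ShapedArray:
    def __class_getitem__(cls, params):
        # Capture the dimension variables as metadata on a fresh subclass
        if not isinstance(params, tuple):
            params = (params,)
        return type("ShapedArray", (cls,), {"__shape_spec__": params})

def check_shapes(fn):
    # Validate annotated shapes at call time, tracking dimension variables
    # across arguments so the K in A must equal the K in B
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        bound = inspect.signature(fn).bind(*args, **kwargs)
        bound.apply_defaults()
        dim_map = {}
        for name, ann in fn.__annotations__.items():
            spec = getattr(ann, "__shape_spec__", None)
            if name == "return" or spec is None:
                continue
            arr = bound.arguments[name]
            if arr.ndim != len(spec):
                raise TypeError(f"{name}: expected {len(spec)}-D, got {arr.ndim}-D")
            for dim, size in zip(spec, arr.shape):
                seen = dim_map.setdefault(dim.name, size)
                if seen != size:
                    raise TypeError(f"{name}: dim {dim.name} is {size}, expected {seen}")
        return fn(*args, **kwargs)
    return wrapper

@check_shapes
def matmul(A: ShapedArray[N, K], B: ShapedArray[K, M]) -> ShapedArray[N, M]:
    return A @ B

assert matmul(np.ones((2, 3)), np.ones((3, 4))).shape == (2, 4)
try:
    matmul(np.ones((2, 3)), np.ones((5, 4)))  # K is 3 in A but 5 in B
except TypeError as e:
    print("caught:", e)
```

Because __class_getitem__ returns an ordinary class, the annotations evaluate cleanly at def time and the decorator only has to read __shape_spec__ back off them.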
ex-sp-ch04-18
Challenge: Create a mutation testing framework for numerical code. The framework should:
- Parse Python source files and identify numerical operations
- Generate mutants (e.g., replace + with -, * with /, swap < and >)
- Run the test suite against each mutant
- Report the mutation score (fraction of mutants killed by tests)
Apply it to a linear algebra module and analyze which mutations survive (indicating weak tests).
Use ast module to parse and modify Python source code.
Use importlib to reload mutated modules.
Track which mutants are killed (test fails) vs. survive (test passes).
Framework outline
import ast
import copy
class NumericalMutator(ast.NodeTransformer):
    mutations = {
        ast.Add: ast.Sub,
        ast.Sub: ast.Add,
        ast.Mult: ast.Div,
        ast.Div: ast.Mult,
    }

    def visit_BinOp(self, node):
        self.generic_visit(node)  # recurse so nested BinOps are also visited
        op_type = type(node.op)
        if op_type in self.mutations:
            mutant = copy.deepcopy(node)
            mutant.op = self.mutations[op_type]()
            return mutant
        return node
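A minimal driver shows the loop the outline implies: mutate one operator at a time, execute the mutant, and run a test against it. Here dot and its single toy test are stand-ins for a real module and suite.

```python
import ast

SRC = """
def dot(a, b):
    total = 0.0
    for x, y in zip(a, b):
        total = total + x * y
    return total
"""

class NumericalMutator(ast.NodeTransformer):
    # Mutates exactly one eligible operator per pass, selected by index
    mutations = {ast.Add: ast.Sub, ast.Mult: ast.Div}

    def __init__(self, target_index):
        self.target_index = target_index
        self.count = -1

    def visit_BinOp(self, node):
        self.generic_visit(node)  # visit nested expressions too
        if type(node.op) in self.mutations:
            self.count += 1
            if self.count == self.target_index:
                node.op = self.mutations[type(node.op)]()
        return node

def run_toy_test(ns):
    # Stand-in for a test-suite run; True means the mutant survived
    try:
        assert abs(ns["dot"]([1, 2], [3, 4]) - 11.0) < 1e-12
        return True
    except Exception:
        return False

n_mutants = 2  # dot() contains one '+' and one '*'
killed = 0
for i in range(n_mutants):
    tree = NumericalMutator(i).visit(ast.parse(SRC))
    ast.fix_missing_locations(tree)
    ns = {}
    exec(compile(tree, "<mutant>", "exec"), ns)
    if not run_toy_test(ns):
        killed += 1
print(f"mutation score: {killed}/{n_mutants}")  # both mutants are killed here
```

A real framework would enumerate mutation sites with a counting pass, run pytest in a subprocess per mutant, and record which mutations survive; a surviving mutant points at an expression no test actually constrains.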