Chapter Summary
Key Points
1. Vector spaces as the setting. $\mathbb{C}^n$ is the workhorse space of communications: every signal vector, channel vector, and beamforming weight vector lives here. Dimension, basis, and span are the organizing concepts that tell us how many degrees of freedom a system has and how to represent signals efficiently.
2. Inner products giving geometry. The inner product endows $\mathbb{C}^n$ with geometry: angles, distances, and projections all derive from it. The Cauchy–Schwarz inequality directly yields the matched-filter beamformer as the projection that maximizes received SNR.
3. Matrices as linear maps. Range, null space, and rank characterize what a linear map can and cannot do; the rank of a channel matrix determines the number of independent data streams a MIMO link can support. Hermitian ($A^H = A$), unitary ($U^H U = I$), and positive definite ($x^H A x > 0$ for all $x \neq 0$) matrices are the three special classes that dominate wireless system analysis.
4. Eigendecomposition for Hermitian matrices. The spectral theorem reveals that every Hermitian matrix acts by scaling along orthogonal directions defined by its eigenvectors. The Rayleigh quotient connects eigenvalues to constrained optimization, a pattern that recurs throughout communications theory.
5. SVD for everything. Every matrix decomposes as $A = U \Sigma V^H$: rotation, scaling, rotation. MIMO channel capacity, optimal beamforming, and low-rank channel approximation all trace back to the SVD; the singular values quantify the strength of each spatial sub-channel.
6. Trace/det inequalities in capacity. The MIMO capacity formula is built from trace and determinant. Hadamard's inequality, Fischer's inequality, and the concavity of $\log\det$ on the positive definite cone are the analytical workhorses for bounding and optimizing capacity expressions.
7. Kronecker products in channel models. The Kronecker product structures the separable spatial correlation model of MIMO channels: $R = R_{\mathrm{tx}} \otimes R_{\mathrm{rx}}$, factoring the full correlation into transmit and receive sides. The $\operatorname{vec}$ operator converts matrix equations to vector form, enabling compact manipulation via the identity $\operatorname{vec}(AXB) = (B^T \otimes A)\operatorname{vec}(X)$.
8. Matrix calculus for optimization. Gradients of the key building blocks ($x^H A x$, $\operatorname{tr}(AX)$, and $\log\det X$) reduce wireless optimization problems such as beamformer design and precoder optimization to eigenvalue problems with closed-form solutions.
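Point 2 can be checked numerically. The sketch below (NumPy; the channel vector and variable names are illustrative, not from the text) draws a random complex channel $h$ and confirms that the matched-filter weight $w = h/\|h\|$ attains the Cauchy–Schwarz upper bound on received power, beating an arbitrary unit-norm beamformer.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
h = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # channel vector

# Matched filter: align the weight with the channel
w_mf = h / np.linalg.norm(h)
# An arbitrary unit-norm competitor
w_rand = rng.standard_normal(n) + 1j * rng.standard_normal(n)
w_rand /= np.linalg.norm(w_rand)

# Received power |w^H h|^2; Cauchy-Schwarz bounds it by ||w||^2 ||h||^2 = ||h||^2
p_mf = abs(np.vdot(w_mf, h)) ** 2    # np.vdot conjugates its first argument
p_rand = abs(np.vdot(w_rand, h)) ** 2

assert p_rand <= p_mf
assert np.isclose(p_mf, np.linalg.norm(h) ** 2)  # equality case: w parallel to h
```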
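Point 4 is also easy to verify: for a Hermitian matrix, the Rayleigh quotient is trapped between the smallest and largest eigenvalues and touches them at the corresponding eigenvectors. A minimal sketch (the matrix here is a random positive semidefinite example, not from the text):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
A = B @ B.conj().T  # Hermitian (in fact positive semidefinite)

# eigh is for Hermitian matrices: real eigenvalues (ascending), orthonormal eigenvectors
eigvals, eigvecs = np.linalg.eigh(A)

def rayleigh(A, x):
    """Rayleigh quotient x^H A x / x^H x (real for Hermitian A)."""
    return (x.conj() @ A @ x).real / (x.conj() @ x).real

x = rng.standard_normal(5) + 1j * rng.standard_normal(5)
r = rayleigh(A, x)

assert eigvals[0] - 1e-9 <= r <= eigvals[-1] + 1e-9   # bounded by extreme eigenvalues
assert np.isclose(rayleigh(A, eigvecs[:, -1]), eigvals[-1])  # max attained at top eigenvector
```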
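Points 5 and 6 meet in one identity: with equal power split across transmit antennas, $\log_2\det\!\big(I + \tfrac{P}{N_t} H H^H\big) = \sum_i \log_2\!\big(1 + \tfrac{P}{N_t}\sigma_i^2\big)$, i.e. capacity is a sum over the SVD sub-channels. The sketch below uses a random channel and an assumed power budget purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
Nr, Nt = 4, 3
H = rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))  # MIMO channel

U, s, Vh = np.linalg.svd(H, full_matrices=False)
assert np.allclose(U @ np.diag(s) @ Vh, H)  # rotation, scaling, rotation

P = 10.0  # total transmit power (illustrative)
# Determinant form vs. sum over spatial sub-channels
cap_det = np.log2(np.linalg.det(np.eye(Nr) + (P / Nt) * H @ H.conj().T).real)
cap_svd = np.sum(np.log2(1 + (P / Nt) * s**2))
assert np.isclose(cap_det, cap_svd)
```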
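The vec/Kronecker identity of point 7, $\operatorname{vec}(AXB) = (B^T \otimes A)\operatorname{vec}(X)$, can be confirmed on random matrices (note `order="F"` so that `reshape` stacks columns, matching the usual definition of vec):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 4))
X = rng.standard_normal((4, 2))
B = rng.standard_normal((2, 5))

def vec(M):
    """Column-stacking vec operator."""
    return M.reshape(-1, order="F")

lhs = vec(A @ X @ B)
rhs = np.kron(B.T, A) @ vec(X)
assert np.allclose(lhs, rhs)
```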
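Finally, the closed-form gradients of point 8 can be sanity-checked against finite differences. The sketch below works in the real symmetric case to keep the complex-derivative conventions out of the way: $\nabla_x\, x^T A x = 2Ax$ for symmetric $A$, and the directional derivative of $\log\det X$ is $\operatorname{tr}(X^{-1} \Delta)$.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4
A = rng.standard_normal((n, n)); A = A + A.T   # symmetric
x = rng.standard_normal(n)
eps = 1e-6

# Finite-difference gradient of f(x) = x^T A x; closed form is 2 A x
g_num = np.array([
    ((x + eps * np.eye(n)[i]) @ A @ (x + eps * np.eye(n)[i]) - x @ A @ x) / eps
    for i in range(n)
])
assert np.allclose(g_num, 2 * A @ x, atol=1e-4)

# Directional derivative of log det X along symmetric Delta is tr(X^{-1} Delta)
B = rng.standard_normal((n, n))
X = B @ B.T + n * np.eye(n)                    # positive definite
Delta = np.zeros((n, n)); Delta[0, 1] = 1.0; Delta[1, 0] = 1.0
d_num = (np.linalg.slogdet(X + eps * Delta)[1] - np.linalg.slogdet(X)[1]) / eps
assert np.isclose(d_num, np.trace(np.linalg.inv(X) @ Delta), atol=1e-4)
```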
Looking Ahead
Chapter 2 introduces probability, random variables, and stochastic processes, building directly on the linear-algebraic foundations laid here. Random vectors live in the same spaces we have been studying, and their second-order statistics are captured by covariance matrices: precisely the positive semidefinite Hermitian matrices we have just learned to eigendecompose. The circularly symmetric complex Gaussian distribution $\mathcal{CN}(0, C)$, which models thermal noise and Rayleigh fading, is fully characterized by its covariance matrix, so every tool from Sections 1.3–1.4 (rank, eigenvalues, spectral decomposition) transfers directly to statistical analysis. Stochastic processes then add the time and frequency dimensions that complement the spatial dimension from linear algebra, completing the mathematical framework needed to analyze modern communication systems.