References & Further Reading
References
- A. Papoulis and S. U. Pillai, Probability, Random Variables and Stochastic Processes, McGraw-Hill, 4th ed., 2002
Ch. 9 (m.s. calculus) and Ch. 13 (KL expansion) are the primary references for this chapter.
- S. Haykin, Adaptive Filter Theory, Prentice Hall, 4th ed., 2002
Ch. 2 covers stochastic process fundamentals including m.s. continuity and differentiation.
- H. L. Van Trees, Detection, Estimation, and Modulation Theory, Part I, John Wiley & Sons, 1968
Ch. 2, §2.4 provides the definitive treatment of the KL expansion in the context of detection theory.
- H. V. Poor, An Introduction to Signal Detection and Estimation, Springer, 2nd ed., 1994
Ch. 3 uses the KL expansion to reduce continuous-time detection to discrete-time.
- E. T. Whittaker, On the Functions Which Are Represented by the Expansions of the Interpolation Theory, Proceedings of the Royal Society of Edinburgh, 1915
The original cardinal interpolation formula, the mathematical ancestor of the sampling theorem.
- H. Nyquist, Certain Topics in Telegraph Transmission Theory, Transactions of the AIEE, 1928
Establishes the fundamental relationship between bandwidth and signaling rate.
- C. E. Shannon, Communication in the Presence of Noise, Proceedings of the IRE, 1949
The sampling theorem in its modern form, plus its application to channel capacity.
- K. Karhunen, Über lineare Methoden in der Wahrscheinlichkeitsrechnung, Annales Academiae Scientiarum Fennicae, 1947
The original paper introducing the KL expansion (written in German, published in a Finnish journal).
- M. Loève, Probability Theory, Springer, 4th ed., 1978
Ch. 37 develops the KL expansion rigorously within the framework of $L^2$ theory.
- R. G. Gallager, Stochastic Processes: Theory for Applications, Cambridge University Press, 2013
Ch. 8 provides an accessible treatment of sampling, bandlimited processes, and KL expansion.
- J. G. Proakis and M. Salehi, Digital Communications, McGraw-Hill, 5th ed., 2008
Ch. 8 covers practical aspects of sampling and anti-aliasing in communication receivers.
- A. V. Oppenheim, A. S. Willsky, and S. H. Nawab, Signals and Systems, Prentice Hall, 2nd ed., 1997
Ch. 7 (sampling) provides the deterministic signals background for the random process extension.
- G. Caire, Fundamentals of Stochastic Processes: Lecture Notes, TU Berlin, 2024
Lecture notes; Ch. 8 (Second-Order Processes) covers m.s. calculus and sampling.
- A. Adhikary, J. Nam, J.-Y. Ahn, and G. Caire, Joint Spatial Division and Multiplexing: Realizing Massive MIMO Gains with Limited Channel State Information, 2014
Uses KL-type covariance eigendecomposition for two-stage massive MIMO beamforming.
Further Reading
These resources extend the chapter material to advanced topics in stochastic analysis, time-frequency concentration, and applications in signal processing and machine learning.
Prolate spheroidal wave functions and the Slepian concentration problem
D. Slepian, 'Prolate Spheroidal Wave Functions, Fourier Analysis, and Uncertainty — V,' Bell System Technical Journal, 1978
The KL eigenfunctions of an ideally bandlimited process on a finite interval are the prolate spheroidal wave functions: the bandlimited functions that are maximally concentrated in time. Slepian's work reveals deep connections between sampling, the KL expansion, and the uncertainty principle.
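As a quick numerical illustration, the sketch below (assuming NumPy; the interval length, bandwidth, and grid size are arbitrary choices, not values from this chapter) discretizes the sinc covariance kernel of an ideally bandlimited process on $[0, T]$. Its leading eigenvectors approximate the prolate spheroidal wave functions, and the eigenvalues show Slepian's characteristic plateau near 1 followed by a plunge toward 0 around index $WT/\pi$.

```python
import numpy as np

# Nystrom discretization of the KL integral equation for an ideally
# bandlimited process on [0, T], with covariance kernel
#   K(t, s) = sin(W (t - s)) / (pi (t - s)).
# T, W, and n are arbitrary illustration values.
T, W, n = 1.0, 40.0, 500                  # interval, bandwidth (rad/s), grid size
t = np.linspace(0.0, T, n)
dt = t[1] - t[0]
u = t[:, None] - t[None, :]
K = (W / np.pi) * np.sinc(W * u / np.pi)  # np.sinc(x) = sin(pi x) / (pi x)

# Eigenpairs of K * dt approximate those of the integral operator; the
# columns of phi sample the (approximate) prolate eigenfunctions.
lam, phi = np.linalg.eigh(K * dt)
lam, phi = lam[::-1], phi[:, ::-1]        # sort in decreasing order
print(np.round(lam[:15], 4))              # ~W*T/pi values near 1, then a plunge
```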
Itô calculus and stochastic differential equations
B. Øksendal, 'Stochastic Differential Equations,' 6th ed., Springer, 2003
The m.s. calculus of this chapter covers integration of random processes with respect to ordinary (Lebesgue) measure. Itô calculus extends this to integration *with respect to* a Brownian motion, enabling stochastic differential equations that model diffusion processes.
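As a minimal taste of that machinery (a standard Euler-Maruyama discretization in NumPy; all parameter values are arbitrary illustration choices), the sketch below simulates the Ornstein-Uhlenbeck SDE $dX_t = -\theta X_t\,dt + \sigma\,dB_t$, whose driving term is a Brownian increment rather than an ordinary time increment.

```python
import numpy as np

# Euler-Maruyama simulation of the Ornstein-Uhlenbeck SDE
#   dX_t = -theta * X_t dt + sigma * dB_t.
# theta, sigma, T, n, and the seed are arbitrary illustration values.
rng = np.random.default_rng(0)
theta, sigma = 1.0, 0.5
T, n = 50.0, 50_000
dt = T / n

x = np.empty(n + 1)
x[0] = 1.0
dB = rng.normal(0.0, np.sqrt(dt), size=n)  # Brownian increments ~ N(0, dt)
for k in range(n):
    x[k + 1] = x[k] - theta * x[k] * dt + sigma * dB[k]

# Sanity check: the stationary variance of the OU process is sigma^2 / (2 theta).
print(x[n // 2:].var(), sigma**2 / (2 * theta))
```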
Random matrix theory and eigenvalue distributions
A. M. Tulino and S. Verdú, 'Random Matrix Theory and Wireless Communications,' Now Publishers, 2004
The decay of the KL eigenvalues determines the effective dimensionality of a process. In massive MIMO, the analogous object is the eigenvalue distribution of the sample covariance matrix, which converges to the Marchenko-Pastur law; this is the bridge from KL theory to MIMO capacity analysis.
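A minimal sketch of this convergence (NumPy; the dimensions and seed are arbitrary illustration choices): draw a $p \times n$ matrix with i.i.d. unit-variance entries, form the sample covariance, and compare its eigenvalue range to the Marchenko-Pastur support edges.

```python
import numpy as np

# Empirical eigenvalues of a sample covariance matrix vs. the
# Marchenko-Pastur law.  p, n, and the seed are arbitrary choices.
rng = np.random.default_rng(1)
p, n = 200, 800                      # dimension, number of samples
c = p / n                            # aspect ratio (c < 1 here)
X = rng.normal(size=(p, n))          # i.i.d. N(0, 1) entries, true covariance = I
S = X @ X.T / n                      # sample covariance matrix
ev = np.linalg.eigvalsh(S)

# For unit variance, the MP density is supported on [(1-sqrt(c))^2, (1+sqrt(c))^2],
# with density sqrt((hi - x)(x - lo)) / (2 pi c x) on that interval.
lo, hi = (1 - np.sqrt(c)) ** 2, (1 + np.sqrt(c)) ** 2
print("empirical range:", ev.min(), ev.max())
print("MP support edges:", lo, hi)
```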
Gaussian process regression and machine learning
C. E. Rasmussen and C. K. I. Williams, 'Gaussian Processes for Machine Learning,' MIT Press, 2006
The KL expansion underpins Gaussian process (GP) regression: the kernel eigenfunctions define the feature space, and truncating the KL expansion gives sparse GP approximations that are central to modern Bayesian machine learning.
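To make the connection concrete, here is a minimal NumPy sketch (the kernel, its parameters, and the data are illustrative assumptions, not taken from the referenced book) of GP posterior-mean regression with a squared-exponential kernel, plus a low-rank variant that keeps only the top eigenpairs of the kernel Gram matrix, a finite-sample analogue of truncating the KL expansion.

```python
import numpy as np

# GP regression posterior mean, exact vs. a rank-r truncation of the
# kernel Gram matrix eigendecomposition (a finite-sample analogue of a
# truncated KL expansion).  All values below are illustration choices.
rng = np.random.default_rng(2)

def rbf(a, b, ell=0.3):
    """Squared-exponential kernel k(a, b) = exp(-(a - b)^2 / (2 ell^2))."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

x = rng.uniform(0.0, 1.0, 30)                               # training inputs
y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=x.size)   # noisy targets
xs = np.linspace(0.0, 1.0, 200)                             # test inputs
sn2 = 0.01                                                  # noise variance

# Exact posterior mean: K(xs, x) (K(x, x) + sn2 I)^{-1} y.
mean_exact = rbf(xs, x) @ np.linalg.solve(rbf(x, x) + sn2 * np.eye(x.size), y)

# Rank-r truncation: keep the top-r eigenpairs of K(x, x) and invert in
# that r-dimensional eigenbasis.
lam, U = np.linalg.eigh(rbf(x, x))                          # ascending eigenvalues
r = 12
alpha = U[:, -r:] @ ((U[:, -r:].T @ y) / (lam[-r:] + sn2))
mean_lowrank = rbf(xs, x) @ alpha

# The gap shrinks as r grows, quickly when the kernel spectrum decays fast.
print(np.max(np.abs(mean_exact - mean_lowrank)))
```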