References & Further Reading

References

  1. L. N. Trefethen and D. Bau III, Numerical Linear Algebra, SIAM, 1997

    The gold-standard textbook on numerical linear algebra. Covers SVD, LU, QR, eigenvalue algorithms, and iterative methods with exceptional clarity. Directly relevant to all sections of this chapter.

  2. G. H. Golub and C. F. Van Loan, Matrix Computations, Johns Hopkins University Press, 4th ed., 2013

    The comprehensive reference for matrix algorithms. Chapter 12 on special matrix structures (Toeplitz, Hankel, circulant) and Chapter 4 on SVD are especially relevant.

  3. G. Strang, Linear Algebra and Learning from Data, Wellesley-Cambridge Press, 2019

    Connects linear algebra to machine learning and data science. Excellent treatment of SVD, PCA, and low-rank approximation from a geometric perspective.

  4. N. J. Higham, Functions of Matrices: Theory and Computation, SIAM, 2008

    The definitive reference on matrix functions (exponential, logarithm, square root). Chapter 10 on the matrix exponential describes the scaling-and-squaring Padé algorithm used by SciPy's `expm`.

  5. SciPy Community, scipy.linalg — Linear Algebra, 2024. [Link]

    Official documentation for SciPy's linear algebra module. Includes detailed descriptions of all functions used in this chapter with examples.

  6. NumPy Community, numpy.linalg — Linear Algebra, 2024. [Link]

    Official NumPy linear algebra documentation. Covers solve, eig, svd, lstsq, and norm with broadcasting semantics.
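    As a small sketch of the broadcasting semantics mentioned above, `numpy.linalg.solve` treats leading dimensions as a batch, solving a stack of systems in one call (the matrices here are arbitrary illustrative data):

    ```python
    import numpy as np

    # numpy.linalg.solve broadcasts over leading dimensions: here a stack
    # of three 2x2 systems is solved in a single call.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 2, 2)) + 2 * np.eye(2)  # batch of 3 matrices
    b = rng.standard_normal((3, 2, 1))                  # batch of column vectors
    x = np.linalg.solve(A, b)                           # shape (3, 2, 1)

    # Each slice matches the corresponding single-system solve.
    assert np.allclose(A @ x, b)
    ```

    Keeping `b` as a stack of explicit column vectors (trailing dimension 1) avoids any ambiguity between the vector and matrix right-hand-side conventions.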

  7. T. A. Davis, Direct Methods for Sparse Linear Systems, SIAM, 2006

    The standard reference for sparse matrix storage formats and direct sparse solvers. Covers CSR, CSC, COO formats and the algorithms behind SuperLU and CHOLMOD.

  8. Y. Saad, Iterative Methods for Sparse Linear Systems, SIAM, 2nd ed., 2003

    Comprehensive treatment of Krylov subspace methods (CG, GMRES, BiCGSTAB) and preconditioning. Relevant to Exercise 20 and large-scale sparse problems.
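    A minimal sketch of the Krylov methods Saad covers, applied through SciPy; the matrix is a toy nonsymmetric tridiagonal stencil chosen for illustration, not an example from the book:

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import gmres

    # Toy nonsymmetric system: a convection-diffusion-like tridiagonal
    # stencil, diagonally dominant so GMRES converges unpreconditioned.
    n = 200
    A = sp.diags([-1.2, 2.5, -0.8], [-1, 0, 1], shape=(n, n), format="csr")
    b = np.ones(n)

    x, info = gmres(A, b)          # info == 0 signals convergence
    assert info == 0
    assert np.linalg.norm(A @ x - b) < 1e-3 * np.linalg.norm(b)
    ```

    For harder problems, the same call accepts a preconditioner via the `M` keyword, which is where the book's preconditioning chapters pay off.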

Further Reading

  • Randomized numerical linear algebra

    N. Halko, P. G. Martinsson, and J. A. Tropp, *Finding Structure with Randomness*, SIAM Review, 2011

    Randomized SVD and related algorithms enable computing low-rank approximations of massive matrices in $O(mn\log k)$ time instead of $O(mn\min(m,n))$. Implemented in `sklearn.utils.extmath.randomized_svd`.
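    A bare-bones prototype of the Halko–Martinsson–Tropp randomized range finder, sketched in plain NumPy (the `randomized_svd` name and oversampling value here are illustrative; sklearn's implementation adds power iterations and other refinements):

    ```python
    import numpy as np

    def randomized_svd(A, k, p=10, seed=0):
        """Rank-k truncated SVD via a randomized range finder (prototype)."""
        rng = np.random.default_rng(seed)
        m, n = A.shape
        # 1. Sample the range of A with a Gaussian test matrix (k + p columns).
        Omega = rng.standard_normal((n, k + p))
        Q, _ = np.linalg.qr(A @ Omega)           # orthonormal basis for range(A)
        # 2. Project A onto the basis and take a small dense SVD.
        B = Q.T @ A                               # (k + p, n), much smaller than A
        Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
        return (Q @ Ub)[:, :k], s[:k], Vt[:k]

    # Usage: a 500 x 300 matrix of exact rank 5 is recovered to machine precision.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((500, 5)) @ rng.standard_normal((5, 300))
    U, s, Vt = randomized_svd(A, k=5)
    err = np.linalg.norm(A - (U * s) @ Vt)
    assert err < 1e-8 * np.linalg.norm(A)
    ```

    The dense SVD runs on the small (k + p) × n projection rather than on A itself, which is where the cost savings come from.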

  • Sparse eigenvalue problems in wireless

    R. B. Lehoucq, D. C. Sorensen, and C. Yang, *ARPACK Users' Guide*, SIAM, 1998

    ARPACK is the Fortran library behind `scipy.sparse.linalg.eigsh` and `svds`. Understanding its implicitly restarted Arnoldi iteration (Lanczos, in the symmetric case) helps you choose shift-invert parameters for interior eigenvalue problems.
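    A short sketch of shift-invert in practice, using a 1-D discrete Laplacian as a stand-in problem (the matrix and shift are illustrative assumptions, not from the guide):

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import eigsh

    # Passing sigma switches eigsh into shift-invert mode: it factors
    # (A - sigma*I) once and converges to the eigenvalues nearest sigma,
    # which plain Lanczos would reach only slowly for interior values.
    n = 500
    A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
    vals, vecs = eigsh(A, k=4, sigma=1.2, which="LM")  # 4 eigenvalues nearest 1.2

    # Check against the analytic spectrum 2 - 2*cos(j*pi/(n+1)), j = 1..n.
    exact = 2 - 2 * np.cos(np.arange(1, n + 1) * np.pi / (n + 1))
    nearest = np.sort(exact[np.argsort(np.abs(exact - 1.2))[:4]])
    assert np.allclose(np.sort(vals), nearest)
    ```

    Note that `which="LM"` refers to the transformed operator (A − σI)⁻¹, so "largest magnitude" there means "closest to σ" in the original spectrum.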

  • Matrix-free methods and linear operators

    scipy.sparse.linalg.LinearOperator documentation

    When your matrix is too large to store but you can compute matvecs (e.g., Kronecker structure), wrap it as a LinearOperator and use iterative solvers directly.
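    A minimal sketch of that pattern: the 2-D Laplacian kron(T, I) + kron(I, T) applied without ever forming the n² × n² matrix, then handed to CG (the tridiagonal T and problem size are illustrative choices):

    ```python
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, cg

    n = 40
    # Symmetric tridiagonal stencil T = tridiag(-1, 2, -1), kept dense (n x n).
    T = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

    def matvec(x):
        # For symmetric T, (T X + X T).ravel() equals
        # (kron(T, I) + kron(I, T)) @ x under row-major vectorization.
        X = x.reshape(n, n)
        return (T @ X + X @ T).ravel()

    A = LinearOperator((n * n, n * n), matvec=matvec, dtype=float)
    b = np.ones(n * n)
    x, info = cg(A, b)      # SPD operator, so CG applies; info == 0 means converged
    assert info == 0
    ```

    Each matvec costs two dense n × n matrix products instead of storage and multiplication with an n² × n² matrix, which is the whole point of the matrix-free formulation.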

  • GPU-accelerated linear algebra

    CuPy documentation (https://cupy.dev/) and cuSOLVER

    For matrices larger than roughly 2000×2000, GPU acceleration via CuPy can provide 10–50× speedups. CuPy mirrors NumPy's API, so `cupy.linalg.solve`, `cupy.linalg.svd`, and the rest serve as near drop-in replacements.