References & Further Reading

References

  1. G. H. Golub and C. F. Van Loan, Matrix Computations, Johns Hopkins University Press, 4th ed., 2013

    The standard reference for numerical linear algebra. Chapter 12 covers Kronecker products, the vec operator, and their computational properties.

  2. J. W. Cooley and J. W. Tukey, An algorithm for the machine calculation of complex Fourier series, Mathematics of Computation, 1965

    The original FFT paper that reduced the DFT from $O(N^2)$ to $O(N \log N)$.

  3. J. A. Fessler and B. P. Sutton, Nonuniform fast Fourier transforms using min-max interpolation, IEEE Transactions on Signal Processing, 2003

    Develops the NUFFT with min-max optimal interpolation kernels. The accuracy and efficiency analysis motivates our use of NUFFT for non-uniform wavenumber sampling.

  4. S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers, Foundations and Trends in Machine Learning, 2011

    The standard monograph on ADMM: derivation, convergence theory, stopping criteria (primal and dual residuals), and adaptive penalty parameter selection.

  5. A. Chambolle and T. Pock, A first-order primal-dual algorithm for convex problems with applications to imaging, Journal of Mathematical Imaging and Vision, 2011

    Introduces the Chambolle--Pock primal-dual algorithm with convergence analysis and primal-dual gap certificate.

  6. A. G. Baydin, B. A. Pearlmutter, A. A. Radul, and J. M. Siskind, Automatic differentiation in machine learning: a survey, Journal of Machine Learning Research, 2018

    Comprehensive survey of AD techniques, covering forward and reverse modes, implementation strategies, and applications in scientific computing and ML.

  7. A. Griewank and A. Walther, Evaluating Derivatives: Principles and Techniques of Algorithmic Differentiation, SIAM, 2nd ed., 2008

    The definitive textbook on AD. Covers forward and reverse modes, checkpointing, higher-order derivatives, and complexity analysis.

  8. H. W. Engl, M. Hanke, and A. Neubauer, Regularization of Inverse Problems, Kluwer Academic Publishers, 1996

    Foundational text on regularization theory including Morozov's discrepancy principle, iterative regularization, and convergence analysis.

  9. G. Caire, On the Illumination and Sensing Model for RF Imaging, 2026

    The unifying framework for RF imaging that derives the Kronecker structure of the sensing operator and establishes the matched-filter baseline.

  10. V. Monga, Y. Li, and Y. C. Eldar, Algorithm Unrolling: Interpretable, Efficient Deep Learning for Signal and Image Processing, IEEE Signal Processing Magazine, 2021

    Survey of algorithm unrolling: converting iterative algorithms into trainable neural networks. Motivates the need for automatic differentiation through reconstruction iterations.

  11. E. K. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, and W. Yin, Plug-and-Play Methods Provably Converge with Properly Trained Denoisers, Proceedings of ICML, 2019

    Establishes convergence guarantees for PnP algorithms under nonexpansiveness conditions on the denoiser Jacobian.

  12. NVIDIA Corporation, CUDA C++ Programming Guide, 2024

    Reference for GPU computing fundamentals: thread hierarchy, memory model, batched linear algebra, and mixed-precision computation.

  13. T. G. Kolda and B. W. Bader, Tensor Decompositions and Applications, SIAM Review, 2009

    Comprehensive review of tensor algebra including n-mode products, which generalize the Kronecker vec trick to multi-factor settings.

  14. N. Halko, P.-G. Martinsson, and J. A. Tropp, Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions, SIAM Review, 2011

    Foundational paper on randomized matrix algorithms that operate in a matrix-free access model.

  15. H. H. Bauschke and P. L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces, Springer, 2nd ed., 2017

    Reference for fixed-point theory and nonexpansive operators underlying the convergence of proximal algorithms.

  16. CommIT Group and G. Caire, Multi-sensor fusion with OAMP and learned denoiser for RF imaging, 2025

    CommIT group's work on unrolled OAMP with learned denoisers for multi-sensor RF imaging fusion.
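
Several of the references above ([1], [13], and Van Loan's survey in the Further Reading) center on the Kronecker vec identity $(A \otimes B)\,\mathrm{vec}(X) = \mathrm{vec}(B X A^\top)$, which lets one apply a Kronecker-structured operator without ever forming it. A minimal NumPy sketch of the identity (illustrative only, not code from any of the cited works):

```python
import numpy as np

# Vec identity (Golub & Van Loan, Ch. 12): (A kron B) vec(X) = vec(B X A^T),
# where vec stacks columns. The right-hand side needs only two small matmuls,
# never the (mp x nq) Kronecker matrix -- the "matrix-free" access model of [14].
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))   # m x n
B = rng.standard_normal((5, 2))   # p x q
X = rng.standard_normal((2, 3))   # q x n

def vec(M):
    # Column-stacking vec: reshape in Fortran (column-major) order.
    return M.reshape(-1, order="F")

lhs = np.kron(A, B) @ vec(X)      # explicit O(mnpq) Kronecker product
rhs = vec(B @ X @ A.T)            # vec trick: two small matrix products
assert np.allclose(lhs, rhs)
```

The same reshaping idea generalizes to the n-mode products surveyed by Kolda and Bader [13].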

Further Reading

Resources for deeper study of the computational tools developed in this chapter.

  • Kronecker products in signal processing

    Van Loan, C. F., 'The Ubiquitous Kronecker Product,' J. Comput. Appl. Math., 2000

    Comprehensive survey of Kronecker product applications including the vec trick and its generalizations.

  • GPU programming for scientific computing

    Kirk, D. and Hwu, W., 'Programming Massively Parallel Processors,' Morgan Kaufmann, 2016

    Practical introduction to CUDA programming with examples from numerical linear algebra.

  • Non-uniform FFTs in imaging

    Barnett, A. H. et al., 'A Parallel Nonuniform Fast Fourier Transform Library Based on an Exponential of Semicircle Kernel,' SIAM Journal on Scientific Computing, 2019

    State-of-the-art NUFFT library (FINUFFT) with performance benchmarks relevant to 3D imaging.

  • Automatic differentiation for inverse problems

    Adler, J. and Öktem, O., 'Learned Primal-Dual Reconstruction,' IEEE Transactions on Medical Imaging, 2018

    Demonstrates end-to-end training of an unrolled primal-dual algorithm for CT reconstruction — directly analogous to RF imaging.

  • Convergence of iterative regularization

    Kaltenbacher, B. et al., 'Iterative Regularization Methods for Nonlinear Ill-Posed Problems,' de Gruyter, 2008

    Advanced treatment of semi-convergence, discrepancy principle, and stopping rules for nonlinear problems.
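
To make the discrepancy-principle stopping rule treated in [8] and in Kaltenbacher et al. concrete, here is a minimal Landweber sketch on a synthetic problem. All sizes, the noise model, and the safety factor `eta` are illustrative assumptions, not taken from the cited texts:

```python
import numpy as np

# Morozov's discrepancy principle: stop the Landweber iteration
#   x_{k+1} = x_k + tau * A^T (y - A x_k)
# as soon as ||A x_k - y|| <= eta * delta, where delta bounds the noise
# level and eta > 1 is a safety factor. Early stopping acts as the
# regularization (semi-convergence).
rng = np.random.default_rng(1)
A = rng.standard_normal((60, 40))
x_true = rng.standard_normal(40)
delta = 0.5                                   # assumed known noise level
noise = rng.standard_normal(60)
y = A @ x_true + delta * noise / np.linalg.norm(noise)   # ||y - A x_true|| = delta

tau = 1.0 / np.linalg.norm(A, 2) ** 2         # step size <= 1/||A||^2 for stability
eta = 1.1
x = np.zeros(40)
for k in range(10_000):
    r = y - A @ x
    if np.linalg.norm(r) <= eta * delta:      # discrepancy reached: stop early
        break
    x = x + tau * A.T @ r
```

The iteration index at which the loop breaks plays the role of the regularization parameter; running past it lets noise propagate into the reconstruction.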