Prerequisites & Notation

Before You Begin

This chapter develops the computational machinery that turns the abstract inverse problem $\mathbf{y} = \mathbf{A}\mathbf{c} + \mathbf{w}$ into a tractable numerical pipeline. We assume familiarity with the following material.

  • Kronecker products and the $\text{vec}$ operator (Review telecom-ch01-s07)

    Self-check: Can you evaluate $(\mathbf{A} \otimes \mathbf{B})\,\text{vec}(\mathbf{X})$ without forming the Kronecker product explicitly?

  • Proximal operators, ADMM, and primal-dual splitting (Review telecom-ch03)

    Self-check: Can you state the ADMM update equations and explain the role of the augmented Lagrangian parameter?

  • The RF imaging forward model and sensing operator (Review rfi-ch02)

    Self-check: Can you write out $\mathbf{y} = \mathbf{A}\mathbf{c} + \mathbf{w}$ and explain each term?

  • Regularization and inverse problem well-posedness (Review rfi-ch02)

    Self-check: Can you explain why regularization is necessary for underdetermined linear systems?

  • Basic Python/NumPy and familiarity with array broadcasting

    Self-check: Can you reshape and transpose multidimensional arrays without confusion?
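The first and last self-checks above can be combined into a short NumPy exercise. The sketch below verifies the standard vec identity $(\mathbf{A}_1 \otimes \mathbf{A}_2)\,\text{vec}(\mathbf{X}) = \text{vec}(\mathbf{A}_2 \mathbf{X} \mathbf{A}_1^{\mathsf T})$ with column-major vectorization, matching the chapter's convention; the matrix sizes are arbitrary illustrative choices, not values from the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)
A1 = rng.standard_normal((3, 4))   # hypothetical Kronecker factors
A2 = rng.standard_normal((5, 6))
X = rng.standard_normal((6, 4))    # shape: (cols of A2, cols of A1)

def vec(M):
    # Column-major (Fortran-order) vectorization, as used in this chapter.
    return M.reshape(-1, order="F")

direct = np.kron(A1, A2) @ vec(X)   # forms the full 15x24 Kronecker product
implicit = vec(A2 @ X @ A1.T)       # same result, never forms it

print(np.allclose(direct, implicit))  # True
```

The implicit form costs two small matrix products instead of one large matrix-vector product, which is why Kronecker-structured sensing matrices are attractive computationally.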

Notation for This Chapter

Symbols introduced or used prominently in this chapter. See also the global notation table in the front matter.

| Symbol | Meaning | Introduced |
|---|---|---|
| $\mathbf{A}$ | Sensing / measurement matrix | ch02 |
| $\mathbf{A}_{1}, \mathbf{A}_{2}$ | Kronecker factors of the sensing matrix: $\mathbf{A} = \mathbf{A}_{1} \otimes \mathbf{A}_{2}$ | s01 |
| $\mathbf{c}$ | Discretized reflectivity vector | ch02 |
| $\mathbf{y}$ | Observation vector | ch02 |
| $\mathbf{w}$ | Noise vector | ch02 |
| $\otimes$ | Kronecker product | s01 |
| $\text{vec}(\cdot)$ | Column-major vectorization of a matrix | s01 |
| $r^{(t)}$ | Primal residual at iteration $t$ | s04 |
| $s^{(t)}$ | Dual residual at iteration $t$ | s04 |
| $\nabla f$ | Gradient of $f$ | s03 |
| $\mathbf{J}_f$ | Jacobian matrix of $f$ | s03 |