RF Imaging and Sparse Recovery

RF Imaging: From Radar Echoes to Spatial Maps

Beyond detecting and tracking individual targets, ISAC systems can construct images of the environment: spatial maps of reflectivity that reveal the geometry of objects, walls, and scatterers. RF imaging generalises the point-target model to a distributed scene, where the goal is to estimate the spatial reflectivity function $x(\mathbf{r})$ at every point $\mathbf{r}$ in the scene from a limited set of radar measurements.

In a multi-static sensing geometry, $N_T$ transmitters at positions $\{\mathbf{p}_i\}$ and $N_R$ receivers at positions $\{\mathbf{q}_j\}$ form $N_T N_R$ bistatic pairs. For each pair $(i, j)$, the round-trip delay to a scatterer at position $\mathbf{r}$ is:

$$\tau_{i,j}(\mathbf{r}) = \frac{\|\mathbf{p}_i - \mathbf{r}\| + \|\mathbf{q}_j - \mathbf{r}\|}{c}$$

The measurement from the $(i,j)$ pair at frequency $f$ is:

$$y_{i,j}(f) = \int x(\mathbf{r}) \, e^{-j2\pi f \tau_{i,j}(\mathbf{r})} \, d\mathbf{r} + n_{i,j}(f)$$

Discretising the scene onto a grid of $G$ pixels, this becomes the linear measurement model central to RF imaging.

Most indoor and outdoor environments are sparse in the reflectivity domain: only a small fraction of the scene pixels contain significant scatterers (walls, people, vehicles, furniture edges), while most of the scene is empty. This sparsity can be exploited to reconstruct the scene from far fewer measurements than the number of grid points, using tools from compressed sensing.

Definition: Sparse Recovery Problem for RF Imaging

The sparse recovery problem for RF imaging is formulated as follows. Discretise the scene into $G$ grid points and stack the reflectivities into a vector $\mathbf{x} \in \mathbb{C}^G$. The $P$ measurements (across all transmitter-receiver pairs and frequencies) are collected into a vector $\mathbf{y} \in \mathbb{C}^P$:

$$\mathbf{y} = \mathbf{A}\mathbf{x} + \mathbf{n}$$

where $\mathbf{A} \in \mathbb{C}^{P \times G}$ is the sensing matrix (or measurement matrix) whose $(p, g)$ entry encodes the phase shift from the $g$-th grid point to the $p$-th measurement:

$$A_{p,g} = e^{-j2\pi f_p \tau_p(\mathbf{r}_g)}$$

and $\mathbf{n} \sim \mathcal{CN}(\mathbf{0}, \sigma^2\mathbf{I})$ is measurement noise.

The system is typically underdetermined ($P \ll G$): there are far fewer measurements than unknowns. However, if the scene is $s$-sparse (at most $s$ non-zero entries in $\mathbf{x}$, with $s \ll G$), exact or near-exact recovery is possible provided the sensing matrix $\mathbf{A}$ satisfies certain conditions.
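As a concrete illustration of the measurement model, the sketch below builds the sensing matrix $\mathbf{A}$ for a toy 2D scene with a single bistatic pair and a frequency sweep. All geometry and frequency values here are illustrative assumptions, not values from the text:

```python
import numpy as np

# Sketch: build the sensing matrix A for a toy 2D scene observed by one
# transmitter-receiver pair over a sweep of frequencies. Positions, grid
# extent, and frequencies are illustrative choices (assumptions).
c = 3e8                                    # speed of light (m/s)
G_side = 8                                 # 8 x 8 grid -> G = 64 pixels
xs = np.linspace(-2.0, 2.0, G_side)
grid = np.array([(x, y) for x in xs for y in xs])   # (G, 2) pixel centres

p_tx = np.array([-5.0, 0.0])               # transmitter position
q_rx = np.array([5.0, 0.0])                # receiver position
freqs = np.linspace(3.0e9, 3.5e9, 16)      # P = 16 frequency measurements

# Round-trip delay tau(r) = (|p - r| + |q - r|) / c for every pixel
tau = (np.linalg.norm(grid - p_tx, axis=1)
       + np.linalg.norm(grid - q_rx, axis=1)) / c   # shape (G,)

# A[p, g] = exp(-j 2 pi f_p tau(r_g)); here P = 16 << G = 64,
# so the system is underdetermined, as the text notes.
A = np.exp(-2j * np.pi * freqs[:, None] * tau[None, :])
print(A.shape)
```

Every entry of this $\mathbf{A}$ has unit magnitude, since each measurement only applies a phase shift to each pixel's reflectivity.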

The LASSO (least absolute shrinkage and selection operator) formulation promotes sparsity through $\ell_1$ regularisation:

$$\hat{\mathbf{x}} = \arg\min_{\mathbf{x}} \; \frac{1}{2}\|\mathbf{y} - \mathbf{A}\mathbf{x}\|_2^2 + \lambda \|\mathbf{x}\|_1$$

where $\lambda > 0$ is the regularisation parameter that controls the trade-off between data fidelity and sparsity.

The $\ell_1$ norm $\|\mathbf{x}\|_1 = \sum_g |x_g|$ is a convex relaxation of the $\ell_0$ "norm" (the number of non-zero entries), which makes the optimisation problem tractable. Under suitable conditions on the sensing matrix, the $\ell_1$ minimiser coincides with the sparsest solution.

Theorem: Restricted Isometry Property (RIP) Recovery Guarantee

Let $\mathbf{A} \in \mathbb{C}^{P \times G}$ be a sensing matrix. The restricted isometry constant $\delta_s$ of order $s$ is the smallest constant such that:

$$(1 - \delta_s)\|\mathbf{x}\|_2^2 \leq \|\mathbf{A}\mathbf{x}\|_2^2 \leq (1 + \delta_s)\|\mathbf{x}\|_2^2$$

for all $s$-sparse vectors $\mathbf{x}$ (vectors with at most $s$ non-zero entries).
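The definition can be made tangible by brute force on a tiny matrix: enumerate every support of size $s$, take the singular values of the corresponding column submatrix, and record the worst deviation from an isometry. This is only feasible for toy sizes (computing $\delta_s$ exactly is NP-hard in general); the matrix below is a random fixture, not from the text:

```python
import numpy as np
from itertools import combinations

# Brute-force the restricted isometry constant delta_s of a tiny random
# matrix by enumerating all s-column submatrices (toy sizes only).
def rip_constant(A, s):
    delta = 0.0
    for S in combinations(range(A.shape[1]), s):
        sv = np.linalg.svd(A[:, list(S)], compute_uv=False)
        # worst deviation of ||A_S x||^2 / ||x||^2 from 1 on this support
        delta = max(delta, sv[0]**2 - 1.0, 1.0 - sv[-1]**2)
    return delta

rng = np.random.default_rng(0)
P, G = 6, 10
A = rng.standard_normal((P, G)) / np.sqrt(P)   # ~unit-norm columns in expectation
print(rip_constant(A, 2))
```

Note that $\delta_s$ is non-decreasing in $s$: every $s$-sparse vector is also $(s+1)$-sparse, so enlarging the support set can only worsen the constant.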

Recovery guarantee (Candès, 2008): If the sensing matrix $\mathbf{A}$ satisfies $\delta_{2s} < \sqrt{2} - 1 \approx 0.4142$, then the solution $\hat{\mathbf{x}}$ of the constrained $\ell_1$ minimisation problem (basis pursuit denoising):

$$\hat{\mathbf{x}} = \arg\min_{\mathbf{x}} \|\mathbf{x}\|_1 \quad \text{s.t.} \quad \|\mathbf{y} - \mathbf{A}\mathbf{x}\|_2 \leq \epsilon$$

satisfies the error bound:

$$\|\hat{\mathbf{x}} - \mathbf{x}_0\|_2 \leq C_0 \frac{\sigma_s(\mathbf{x}_0)_1}{\sqrt{s}} + C_1 \epsilon$$

where $\mathbf{x}_0$ is the true signal, $\sigma_s(\mathbf{x}_0)_1 = \min_{\|\mathbf{z}\|_0 \leq s} \|\mathbf{x}_0 - \mathbf{z}\|_1$ is the $\ell_1$ error of the best $s$-sparse approximation, $\epsilon$ bounds the noise level, and $C_0, C_1$ are constants depending on $\delta_{2s}$.

Measurement requirement: Random matrices (Gaussian, partial Fourier) with $P \geq C \, s \log(G/s)$ rows satisfy the RIP with high probability. For RF imaging, this means that $O(s \log(G/s))$ measurements suffice to recover an $s$-sparse scene of $G$ grid points: far fewer than the $G$ measurements needed to sample every pixel directly.
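To get a feel for how this budget scales, the snippet below evaluates $s \log(G/s)$ for a few scene sizes. The constant $C$ is left unspecified by the theorem; taking $C = 1$ here is purely an assumption to expose the scaling:

```python
import numpy as np

# Rough compressed-sensing measurement budget P ~ C * s * log(G / s).
# C = 1 is an illustrative assumption, not a value from the theorem.
def cs_measurements(G, s, C=1.0):
    return int(np.ceil(C * s * np.log(G / s)))

for G, s in [(1024, 20), (4096, 20), (1024, 5)]:
    print(f"G = {G:5d}, s = {s:2d}  ->  P ~ {cs_measurements(G, s)}")
```

Doubling the scene size $G$ only adds a logarithmic number of measurements, while the budget grows roughly linearly in the sparsity $s$.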

Iterative Shrinkage-Thresholding Algorithm (ISTA)

Input: Sensing matrix $\mathbf{A} \in \mathbb{C}^{P \times G}$;
measurements $\mathbf{y} \in \mathbb{C}^P$;
regularisation parameter $\lambda > 0$;
step size $\mu = 1/\|\mathbf{A}^H\mathbf{A}\|$ (reciprocal of the largest eigenvalue of $\mathbf{A}^H\mathbf{A}$);
max iterations $K_{\max}$; tolerance $\epsilon > 0$
Output: Sparse estimate $\hat{\mathbf{x}} \in \mathbb{C}^G$
1. Initialise: $\mathbf{x}^{(0)} = \mathbf{0} \in \mathbb{C}^G$
2. For $k = 0, 1, \ldots, K_{\max} - 1$:
a. Gradient step (data fidelity):
$\mathbf{z}^{(k)} = \mathbf{x}^{(k)} + \mu \, \mathbf{A}^H (\mathbf{y} - \mathbf{A}\mathbf{x}^{(k)})$
b. Soft thresholding (sparsity promotion):
For each element $g = 1, \ldots, G$:
$x_g^{(k+1)} = \mathcal{S}_{\mu\lambda}(z_g^{(k)})$
where the soft-thresholding operator is:
$\mathcal{S}_{\tau}(z) = \mathrm{sign}(z) \cdot \max(|z| - \tau, 0)$
(for complex $z$: $\mathcal{S}_{\tau}(z) = z \cdot \max(1 - \tau/|z|, 0)$)
c. Check convergence:
If $\|\mathbf{x}^{(k+1)} - \mathbf{x}^{(k)}\|_2 / \|\mathbf{x}^{(k)}\|_2 < \epsilon$, stop.
3. Return $\hat{\mathbf{x}} = \mathbf{x}^{(k+1)}$, the final iterate
Complexity per iteration: $O(PG)$ for the matrix-vector products $\mathbf{A}\mathbf{x}$ and $\mathbf{A}^H\mathbf{r}$.
Convergence rate: $O(1/k)$ for the objective value.
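The pseudocode above translates almost line for line into NumPy. This is a minimal sketch: the random Gaussian sensing matrix and the synthetic sparse scene below are test fixtures chosen for illustration, not part of the algorithm itself:

```python
import numpy as np

# Minimal NumPy implementation of ISTA as specified in the pseudocode.
def soft_threshold(z, tau):
    """Complex soft thresholding: S_tau(z) = z * max(1 - tau/|z|, 0)."""
    mag = np.maximum(np.abs(z), 1e-12)     # guard against division by zero
    return z * np.maximum(1.0 - tau / mag, 0.0)

def ista(A, y, lam, K_max=2000, tol=1e-8):
    mu = 1.0 / np.linalg.norm(A.conj().T @ A, 2)   # 1 / largest eigenvalue
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(K_max):
        z = x + mu * (A.conj().T @ (y - A @ x))    # a. gradient step
        x_new = soft_threshold(z, mu * lam)        # b. shrinkage step
        if np.linalg.norm(x_new - x) <= tol * max(np.linalg.norm(x), 1e-12):
            return x_new                           # c. converged
        x = x_new
    return x

# Synthetic test fixture: noiseless measurements of an s-sparse scene.
rng = np.random.default_rng(0)
P, G, s = 60, 200, 5
A = rng.standard_normal((P, G)) / np.sqrt(P)       # ~unit-norm columns
x0 = np.zeros(G)
x0[rng.choice(G, s, replace=False)] = 1.0 + rng.random(s)
y = A @ x0
x_hat = ista(A, y, lam=0.05)
nmse = np.linalg.norm(x_hat - x0)**2 / np.linalg.norm(x0)**2
print(f"NMSE = {nmse:.2e}")
```

With noiseless data and a small $\lambda$, the reconstruction error is dominated by the mild shrinkage bias that LASSO applies to the non-zero coefficients.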

Neural Network-Based Learned Reconstruction

While ISTA and its variants are theoretically well understood, their convergence can be slow and the regularisation parameter $\lambda$ must be carefully tuned. Deep unfolding replaces the fixed iterations of ISTA with a learned neural network that mimics the iterative structure but learns the step sizes, thresholds, and even the effective sensing matrix from training data.

LISTA (Learned ISTA) unfolds $K$ iterations of ISTA into $K$ neural network layers, where each layer $k$ computes:

$$\mathbf{x}^{(k+1)} = \mathcal{S}_{\boldsymbol{\theta}_k}\!\bigl(\mathbf{W}_k \mathbf{x}^{(k)} + \mathbf{B}_k \mathbf{y}\bigr)$$

The matrices $\mathbf{W}_k$, $\mathbf{B}_k$ and thresholds $\boldsymbol{\theta}_k$ are learned from training data (pairs of measurements and ground-truth scenes) via backpropagation. LISTA typically achieves the same reconstruction quality as ISTA in 10-20 learned layers, compared to hundreds or thousands of ISTA iterations.
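The forward pass of such a network can be sketched without any deep-learning framework. Here the weights are simply initialised from ISTA ($\mathbf{W} = \mathbf{I} - \mu\mathbf{A}^H\mathbf{A}$, $\mathbf{B} = \mu\mathbf{A}^H$, $\boldsymbol{\theta} = \mu\lambda$), so the untrained $K$-layer network reproduces exactly $K$ ISTA iterations; training would then adjust these per-layer parameters by backpropagation (not shown):

```python
import numpy as np

# LISTA-style forward pass (sketch). Untrained weights are initialised
# from ISTA, so K layers == K ISTA iterations before any learning.
def soft(z, tau):
    mag = np.maximum(np.abs(z), 1e-12)
    return z * np.maximum(1.0 - tau / mag, 0.0)

def lista_forward(y, Ws, Bs, thetas):
    x = np.zeros(Ws[0].shape[0])
    for W, B, theta in zip(Ws, Bs, thetas):
        x = soft(W @ x + B @ y, theta)     # one unfolded layer
    return x

rng = np.random.default_rng(1)
P, G, K, s = 40, 100, 16, 4
A = rng.standard_normal((P, G)) / np.sqrt(P)
mu = 1.0 / np.linalg.norm(A.T @ A, 2)
Ws = [np.eye(G) - mu * (A.T @ A)] * K      # untrained, ISTA-style weights
Bs = [mu * A.T] * K
thetas = [mu * 0.05] * K

x0 = np.zeros(G)
x0[rng.choice(G, s, replace=False)] = 1.0 + rng.random(s)
y = A @ x0
x_hat = lista_forward(y, Ws, Bs, thetas)
err = np.linalg.norm(x_hat - x0) / np.linalg.norm(x0)
print(f"relative error after {K} untrained layers: {err:.3f}")
```

The point of training is precisely to close the gap that remains after these few untrained layers, which is why learned networks match ISTA's quality with far fewer layers.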

More recent architectures use convolutional neural networks (CNNs) or vision transformers as learned priors, replacing the $\ell_1$ regulariser with a data-driven denoiser in a plug-and-play framework. These approaches can exploit structured sparsity (e.g., targets lying on walls or road surfaces) that the $\ell_1$ norm cannot capture.

Practical considerations:

  • Training requires representative scene data (simulated or measured).
  • The learned network is specific to the sensing geometry ($\mathbf{A}$); changes in array configuration require retraining.
  • Interpretability and worst-case guarantees are weaker than for convex methods; the RIP-based bounds of the previous section do not directly apply to learned reconstructions.

Sparse Recovery via ISTA

Simulate sparse RF imaging using the ISTA algorithm. A scene with a specified number of point targets is placed on a 2D grid. Random Fourier measurements (mimicking a multi-static array) are taken, and ISTA iteratively recovers the sparse scene. Adjust the grid size to control the image resolution, the number of targets to vary the scene sparsity, and the SNR to observe how noise affects the reconstruction quality. The plot shows the true scene, the measurements, and the ISTA reconstruction with the normalised mean squared error (NMSE) reported.

Parameters (interactive demo defaults): grid size 32, number of targets 5, SNR 20 dB.

Why This Matters: Complete RF Imaging Theory in the RFI Book

This section introduces RF imaging and sparse recovery at an introductory level. The RFI book (RF Imaging) develops the complete theory from wave physics to neural reconstruction:

  • Chs 1-6: Electromagnetic scattering, Green's functions, Born approximation, diffraction tomography
  • Chs 7-12: Multi-static sensing model, Caire's illumination and sensing model, wavenumber analysis, resolution limits
  • Chs 13-18: Compressed sensing for imaging, LASSO/ISTA/FISTA, OAMP, structured sparsity models
  • Chs 19-25: Deep unfolding (LISTA, ADMM-Net), plug-and-play priors, neural scene representations (NeRF for radar)
  • Chs 26-30: Experimental validation, hardware-in-the-loop results, sub-THz imaging demonstrations

The RFI book is the natural next step for readers interested in pushing RF imaging from the simplified point-target model of this chapter to distributed scattering and real-world imaging systems.

Why This Matters: OTFS and Delay-Doppler ISAC in the OTFS Book

The OTFS waveform mentioned in the CommIT contribution by Yuan, Schober, and Caire is developed in full depth in the OTFS book:

  • Chs 1-4: Zak transform, delay-Doppler domain fundamentals
  • Chs 5-8: OTFS modulation, detection, and channel estimation
  • Chs 13-16: OTFS-ISAC: joint sensing and communication in the delay-Doppler domain, including the DD-domain waveform design framework by Gaudio/Kobayashi/Caire
  • Chs 17-19: MIMO-OTFS and LEO satellite OTFS

For high-mobility ISAC scenarios where OFDM suffers from ICI, OTFS provides a natural framework.

Quick Check

A scene is discretised into $G = 1024$ grid points and contains $s = 20$ significant scatterers. According to the RIP-based recovery guarantee, approximately how many random measurements $P$ are needed for reliable sparse recovery via $\ell_1$ minimisation?

$P = G = 1024$ (Nyquist rate: one measurement per grid point)

$P \approx s = 20$ (one measurement per target)

$P = O(s \log(G/s)) \approx 20 \log(1024/20) \approx 79$ measurements

$P = 2s = 40$ (twice the sparsity level)