Cross-Pollination: What RF Imaging Can Learn and Contribute

Section Roadmap: Cross-Pollination Between Medical and RF Imaging

The previous three sections established that CT, MRI, and ultrasound share the same mathematical skeleton as RF imaging: a linear forward model, incomplete measurements, and the need for regularization or learning. This section draws the transfer map: which architectures and ideas from medical imaging carry over to RF, what modifications they need, and — critically — what RF imaging offers in return. The punchline: medical imaging has mature learned reconstruction pipelines that RF imaging can borrow, while RF imaging's ISAC paradigm (combining sensing with communication) has no analogue in medical imaging and represents a genuine intellectual export.


Definition:

Model-Based Deep Learning (MoDL)

MoDL (Aggarwal et al., 2019) alternates between a data-consistency step and a learned denoiser within a conjugate-gradient (CG) solver:

\mathbf{m}^{(t+1)} = \arg\min_{\mathbf{m}} \|\mathcal{A}\mathbf{m} - \mathbf{y}\|_2^2 + \lambda\|\mathbf{m} - \mathcal{D}_\theta(\mathbf{m}^{(t)})\|_2^2,

where \mathcal{D}_\theta is a CNN denoiser with shared weights across iterations. The CG inner loop is differentiable, so \theta is learned end-to-end.
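
The alternation can be sketched in a few lines of NumPy. This is a minimal illustration, not the published implementation: `cg_solve` and `modl_iterations` are hypothetical names, and the denoiser is passed in as a plain callable standing in for the trained CNN \mathcal{D}_\theta.

```python
import numpy as np

def cg_solve(apply_M, b, n_iter=50, tol=1e-10):
    """Conjugate gradient for M x = b, with M symmetric positive definite
    and accessed only through matrix-vector products (black-box access)."""
    x = np.zeros_like(b)
    r = b - apply_M(x)
    p = r.copy()
    rs = r.conj() @ r
    for _ in range(n_iter):
        Mp = apply_M(p)
        alpha = rs / (p.conj() @ Mp)
        x = x + alpha * p
        r = r - alpha * Mp
        rs_new = r.conj() @ r
        if np.sqrt(abs(rs_new)) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def modl_iterations(A, y, denoiser, lam=1.0, n_outer=5):
    """One MoDL pass: alternate the denoiser step z = D_theta(m) with the
    data-consistency solve (A^H A + lam I) m = A^H y + lam z via CG."""
    AH = A.conj().T
    m = AH @ y                                   # adjoint initialization
    normal = lambda x: AH @ (A @ x) + lam * x    # A^H A + lam I
    for _ in range(n_outer):
        z = denoiser(m)                          # learned prior (placeholder)
        m = cg_solve(normal, AH @ y + lam * z)   # data-consistency step
    return m
```

In training, the CG loop is backpropagated through, so the gradient with respect to the denoiser weights flows through the data-consistency solve.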

Transfer to RF imaging: Replace \mathcal{A} with \mathbf{A}. The CG inner loop must be adapted to exploit the Kronecker structure of \mathbf{A} (Ch 07) for computational efficiency. The denoiser \mathcal{D}_\theta requires retraining on RF scene statistics, which differ substantially from medical images (discrete scatterers vs. continuous tissue).
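
The Kronecker speed-up rests on the identity (\mathbf{A}_1 \otimes \mathbf{A}_2)\,\mathrm{vec}(\mathbf{X}) = \mathrm{vec}(\mathbf{A}_2 \mathbf{X} \mathbf{A}_1^T), which replaces one large matrix-vector product with two small matrix-matrix products. A minimal sketch (`kron_matvec` is an illustrative helper name):

```python
import numpy as np

def kron_matvec(A1, A2, x):
    """Compute (A1 kron A2) @ x without ever forming the Kronecker product,
    using (A1 kron A2) vec(X) = vec(A2 @ X @ A1.T), with vec = column-major
    stacking. Cost drops from O(m1 m2 n1 n2) to O(m2 n2 n1 + m2 n1 m1)."""
    n1, n2 = A1.shape[1], A2.shape[1]
    X = x.reshape(n2, n1, order="F")      # un-vec the input
    Y = A2 @ X @ A1.T                     # two small products
    return Y.reshape(-1, order="F")       # re-vec the output
```

The adjoint product \mathbf{A}^H \mathbf{y} follows the same pattern with \mathbf{A}_1^H, \mathbf{A}_2^H, so every forward/adjoint call inside a CG or unrolled loop benefits.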

Definition:

Learned Primal-Dual (Adler & Öktem, 2018)

The Learned Primal-Dual algorithm unrolls a primal-dual optimization scheme (cf. Telecom Ch 03) into T layers:

Dual update: \mathbf{h}^{(t+1)} = \Gamma_{\theta_t^d}(\mathbf{h}^{(t)}, \mathcal{A}\mathbf{m}^{(t)}, \mathbf{y})

Primal update: \mathbf{m}^{(t+1)} = \Lambda_{\theta_t^p}(\mathbf{m}^{(t)}, \mathcal{A}^*\mathbf{h}^{(t+1)})

where \Gamma_{\theta_t^d} and \Lambda_{\theta_t^p} are small CNNs. Originally developed for CT (where \mathcal{A} = \mathcal{R}), this architecture generalises to any linear inverse problem.

Transfer to RF: This is structurally the same as the unrolled OAMP (Ch 18) viewed as a primal-dual algorithm. The RF-specific modification is to exploit the Kronecker structure in the forward/adjoint operations \mathbf{A}, \mathbf{A}^{H}.
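
Before any learning, the skeleton being unrolled is an ordinary primal-dual (Chambolle-Pock) iteration for \min_{\mathbf{m}} \tfrac{1}{2}\|\mathbf{A}\mathbf{m} - \mathbf{y}\|_2^2. The sketch below shows that skeleton with hand-crafted closed-form updates in place of the CNNs; the function name and step sizes \tau, \sigma are illustrative, and in the learned version each update becomes a small network and T is truncated to roughly ten layers.

```python
import numpy as np

def primal_dual_skeleton(A, y, T=200, tau=0.1, sigma=0.1):
    """Chambolle-Pock primal-dual iteration for min 0.5||Am - y||^2.
    Learned Primal-Dual replaces the two closed-form updates below with
    CNNs Gamma_theta (dual) and Lambda_theta (primal)."""
    m = np.zeros(A.shape[1])      # primal iterate (image)
    m_bar = m.copy()              # overrelaxed primal
    h = np.zeros(A.shape[0])      # dual iterate (measurement space)
    for _ in range(T):
        # dual update: inputs are h, the forward projection A m_bar, and y
        h = (h + sigma * (A @ m_bar - y)) / (1.0 + sigma)
        # primal update: inputs are m and the back-projection A^T h
        m_new = m - tau * (A.conj().T @ h)
        m_bar = 2 * m_new - m     # overrelaxation step
        m = m_new
    return m
```

The inputs each update sees (current iterate, projected counterpart, data) are exactly the inputs \Gamma and \Lambda receive in the learned version, which is why the substitution is architecturally clean.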

Definition:

Self-Supervised Data Undersampling (SSDU)

SSDU (Yaman et al., 2020) enables training MRI reconstruction networks without fully sampled ground truth. The key idea: split the acquired k-space data into two disjoint subsets \Omega_1, \Omega_2 with \Omega_1 \cup \Omega_2 = \Omega (the full acquired set):

  1. Train the reconstruction network f_\theta using \Omega_1 as input (further undersampled data).
  2. Compute the loss on \Omega_2: the network's output is compared against the measurements in \Omega_2 that it never saw.

The loss is \mathcal{L}(\theta) = \|\mathbf{F}_{\Omega_2} f_\theta(\mathbf{F}_{\Omega_1}^H \mathbf{y}_{\Omega_1}) - \mathbf{y}_{\Omega_2}\|_2^2.

Transfer to RF imaging: RF imaging often lacks ground-truth images entirely (no "fully sampled" RF measurement exists in practice). SSDU is directly applicable: split the Tx-Rx-frequency measurements into training and validation subsets. This is one of the most impactful transfers from medical imaging to RF.
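An SSDU-style loss for RF measurements can be sketched as follows. `ssdu_loss` is an illustrative name, and `recon` is a placeholder for any reconstruction routine (in practice the network f_\theta being trained); the point is that only acquired measurements are needed, never a ground-truth image.

```python
import numpy as np

def ssdu_loss(A, y, recon, split_frac=0.6, rng=None):
    """SSDU-style self-supervised loss: randomly split the acquired
    Tx-Rx-frequency measurement indices into disjoint Omega_1 (network
    input) and Omega_2 (loss), so no ground-truth image is required."""
    if rng is None:
        rng = np.random.default_rng()
    perm = rng.permutation(len(y))
    k = int(split_frac * len(y))
    omega1, omega2 = perm[:k], perm[k:]
    m_hat = recon(A[omega1], y[omega1])      # reconstruct from Omega_1 only
    resid = A[omega2] @ m_hat - y[omega2]    # validate on held-out Omega_2
    return np.sum(np.abs(resid) ** 2)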

Theorem: Architecture Transfer via Forward-Operator Substitution

Let f_\theta^{\mathcal{A}} be a model-based learned reconstruction network that accesses the forward operator \mathcal{A} and its adjoint \mathcal{A}^* only through matrix-vector products \mathcal{A}\mathbf{x} and \mathcal{A}^*\mathbf{y} (black-box access). Then f_\theta^{\mathcal{A}} can be transferred to any new inverse problem with forward operator \mathcal{B} by:

  1. Replacing \mathcal{A} \mapsto \mathcal{B} and \mathcal{A}^* \mapsto \mathcal{B}^* in the network graph.
  2. Retraining the learnable parameters \theta on data from the new problem.

The network architecture (number of layers, CNN structure, skip connections) transfers unchanged. Only the data distribution and forward operator change.

Model-based deep learning separates the physics (forward model) from the prior (learned network). If the physics changes (e.g., from MRI's Fourier operator to RF's Kronecker sensing matrix), only the physics interface needs updating — the prior structure (U-Net, ResNet, etc.) remains. This is the "plug-and-play" principle at the architecture level.
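The black-box-access condition can be made concrete with a small operator interface. The sketch below is illustrative (all names are hypothetical): the unrolled network touches the physics only through `op.forward` and `op.adjoint`, so swapping an MRI-style Fourier operator for an RF-style Kronecker operator requires no change to the network code. A plain Landweber (gradient) iteration stands in for the learned layers.

```python
import numpy as np

class LinearOperator:
    """Black-box forward model: callers see only forward/adjoint products."""
    def __init__(self, forward, adjoint):
        self.forward, self.adjoint = forward, adjoint

def landweber_net(op, y, step=0.1, n_layers=10):
    """Minimal model-based 'network': unrolled gradient steps on
    0.5||op(m) - y||^2. Physics enters only via op.forward / op.adjoint;
    a learned refinement block would sit after each step."""
    m = op.adjoint(y)
    for _ in range(n_layers):
        m = m - step * op.adjoint(op.forward(m) - y)
    return m

def mri_op(n, omega):
    """MRI-style operator: rows omega of the unitary DFT matrix."""
    F = np.fft.fft(np.eye(n)) / np.sqrt(n)
    Fo = F[omega]
    return LinearOperator(lambda x: Fo @ x, lambda z: Fo.conj().T @ z)

def rf_op(A1, A2):
    """RF-style operator: Kronecker-structured sensing matrix A1 kron A2."""
    A = np.kron(A1, A2)
    return LinearOperator(lambda x: A @ x, lambda z: A.conj().T @ z)
```

The same `landweber_net` runs unchanged on either operator; only the data it is (re)trained on differs, which is the content of the theorem.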


What Medical Imaging Can Learn from RF: The ISAC Paradigm

The transfer from medical imaging to RF is well-documented. But the reverse direction is equally important and less recognized.

Integrated Sensing and Communications (ISAC) — the paradigm where the same waveform simultaneously communicates data and images the environment (Ch 29--30) — has no analogue in medical imaging. An MRI scanner does not communicate with the patient; a CT scanner does not serve as a data link. The ISAC constraint fundamentally changes the waveform design and the forward operator: the transmitted signal must be optimized for two objectives (data rate and imaging quality) simultaneously.

This dual-function constraint opens new theoretical questions (capacity-distortion tradeoffs, ITA Ch 18) and new architectural challenges (joint communication-sensing beamforming, Ch 29) that offer fertile ground for cross-disciplinary collaboration with medical imaging researchers.

Learned Reconstruction Architectures and Their RF Imaging Counterparts

| Architecture | Medical Imaging Origin | RF Imaging Counterpart | Key Modification for RF |
| --- | --- | --- | --- |
| FBPConvNet | CT post-processing (Jin 2017) | MF + U-Net (Ch 26) | Replace FBP with \hat{\mathbf{c}}^{\text{BP}} |
| E2E VarNet | MRI unrolled gradient descent (Sriram 2020) | Unrolled OAMP (Ch 18) | Exploit Kronecker structure in \mathbf{A} |
| MoDL | MRI CG + denoiser (Aggarwal 2019) | CG-Net for RF | Kronecker-accelerated CG inner loop |
| Learned Primal-Dual | CT/MRI (Adler 2018) | Unrolled primal-dual for RF | Kronecker forward/adjoint products |
| SSDU | Self-supervised MRI (Yaman 2020) | Self-supervised RF imaging | Split Tx-Rx measurements instead of k-space lines |
| DPS (Diffusion Posterior Sampling) | General inverse problems (Chung 2023) | Diffusion prior for RF (Ch 30 analogue) | Score function trained on RF scenes |

Example: Transferring MoDL from MRI to RF Imaging: What Changes?

Consider MoDL trained for 4x-accelerated brain MRI with \mathcal{A}_{\mathrm{MRI}} = \mathbf{F}_{\Omega}\,\text{diag}(\mathbf{s}). We want to apply MoDL to RF imaging with \mathbf{A} = \mathbf{A}_1 \otimes \mathbf{A}_2 (Kronecker structure, Ch 07).

Task: List the components that transfer unchanged and the components that must be modified.

Historical Note: The Deep Learning Revolution in Medical Imaging (2016--Present)


The application of deep learning to medical image reconstruction accelerated dramatically after 2016. The fastMRI challenge (launched by Facebook AI Research and NYU Langone in 2018) provided a large-scale benchmark with raw k-space data from over 10,000 clinical MRI scans. This open dataset catalysed the field: within two years, learned methods (U-Net, E2E VarNet) surpassed compressed sensing on every metric.

The fastMRI model — open data, standardized evaluation, and reproducible baselines — is precisely what RF imaging lacks. The CommIT group's simulation framework (Ch 31) aims to fill this gap by providing standardized RF imaging benchmarks, but measured RF data at scale remains an open challenge.

Uncertainty Quantification — A Gift from Medical Imaging

Medical imaging requires uncertainty estimates for clinical decision-making: a radiologist needs to know whether a detected lesion is reliable or an artifact of the reconstruction. Methods developed for this purpose — Monte Carlo dropout, deep ensembles, conformal prediction for images, posterior sampling via diffusion models — transfer directly to RF imaging, where uncertainty quantification is equally important (is that detected scatterer real or a reconstruction artifact?).

The Bayesian framework of Ch 03 provides the theoretical foundation; medical imaging provides the practical implementations.
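The common core of the ensemble-style methods (deep ensembles, MC dropout passes, diffusion posterior samples) is a pixelwise statistic over repeated reconstructions. A minimal sketch, with an illustrative function name:

```python
import numpy as np

def ensemble_uncertainty(recons):
    """Pixelwise uncertainty from an ensemble of reconstructions.
    The mean is the point estimate; the standard deviation flags
    unreliable pixels, e.g. a scatterer that appears in only some
    ensemble members and is likely a reconstruction artifact."""
    stack = np.stack(recons)               # shape: (n_members, *image_shape)
    return stack.mean(axis=0), stack.std(axis=0)
```

A detection decision can then be gated on the uncertainty map: accept a scatterer only where the mean is large relative to the local standard deviation.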

🎓 CommIT Contribution (2026)

ISAC as a Unique RF Imaging Capability

G. Caire, Internal note, TU Berlin

Caire's unified forward model (Ch 07) connects the imaging community's diffraction-tomography view with the wireless community's MIMO-radar view. A key insight: the RF imaging forward operator \mathbf{y} = \mathbf{A}\mathbf{c} + \mathbf{w} arises from a communication waveform that simultaneously carries data. This dual-function nature — the ISAC paradigm — has no counterpart in CT, MRI, or ultrasound, where the transmitted signal serves only a sensing purpose.

The capacity-distortion tradeoff (ITA Ch 18, RFI Ch 29) formalises how much imaging quality must be sacrificed to maintain communication rate, and vice versa. This is a genuinely new theoretical question that medical imaging cannot pose.

Tags: ISAC, forward-model, cross-pollination

Quick Check

When transferring E2E VarNet from MRI to RF imaging, which component requires the most significant modification?

  • The forward and adjoint operations (replacing Fourier with the Kronecker sensing matrix)

  • The U-Net refinement blocks

  • The loss function

  • The learning rate schedule

Common Mistake: Do Not Copy Medical Imaging Architectures Without Adapting the Forward Model

Mistake:

A common mistake in the RF imaging literature is to adopt a medical imaging architecture (e.g., U-Net for MRI denoising) as a black-box image-to-image network, discarding the forward model entirely. This works for post-processing (Ch 26) but sacrifices the physics-based data-consistency that makes model-based deep learning powerful.

Correction:

Always embed the RF forward model \mathbf{A} and its adjoint \mathbf{A}^{H} in the network graph. Use the medical imaging architecture (E2E VarNet, MoDL, Learned Primal-Dual) as a template, replacing \mathcal{A}_{\mathrm{MRI}} with \mathbf{A} and retraining on RF data. Caire's principle: "learned blocks in principled model-based schemes."

Definition:

Physics-Informed Neural Operators

A Fourier Neural Operator (FNO) (Li et al., 2021) learns a mapping between function spaces by parameterising integral operators in the Fourier domain:

[\mathcal{K}_\theta u](\mathbf{r}) = \mathcal{F}^{-1} \left[R_\theta \cdot \mathcal{F}[u]\right](\mathbf{r}),

where R_\theta is a learnable spectral filter and \mathcal{F} is the Fourier transform. FNOs are resolution-independent: once trained, they can be evaluated at any discretisation.

Transfer to RF imaging: FNOs can learn the inverse of the Helmholtz equation (the wave-equation analogue of the RF forward model) as a single forward pass, bypassing iterative solvers. However, they require large training datasets and struggle with sharp features (edges, point scatterers) due to the Gibbs phenomenon in truncated Fourier representations.
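The core FNO building block is easy to sketch in one dimension. This is an illustrative single spectral convolution (`spectral_conv_1d` is a hypothetical name); a full FNO stacks several of these with pointwise linear layers and nonlinearities, and R_\theta is learned rather than fixed.

```python
import numpy as np

def spectral_conv_1d(u, R):
    """One FNO spectral convolution: FFT, multiply the lowest len(R)
    modes by the learnable filter R, zero the rest, inverse FFT.
    Resolution-independent: the same R applies at any grid size n."""
    n = len(u)
    U = np.fft.rfft(u)                 # to Fourier space
    out = np.zeros_like(U)
    k = min(len(R), len(U))
    out[:k] = R[:k] * U[:k]            # learned filter on retained modes
    return np.fft.irfft(out, n=n)      # back to physical space
```

The hard truncation of high modes is also where the Gibbs-phenomenon weakness noted above comes from: point scatterers and sharp edges need exactly the modes the layer discards.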

⚠️Engineering Note

Data Scarcity in RF Imaging vs Medical Imaging

Medical imaging benefits from decades of accumulated clinical data. The fastMRI dataset alone contains 10,000+ MRI scans with raw k-space data. RF imaging has no comparable dataset: measured RF imaging data is scarce, expensive to collect, and often proprietary.

Practical consequences for learned methods:

  1. Self-supervised methods (SSDU, deep image prior) are more important for RF than for MRI because supervised training data is unavailable.
  2. Simulation-based training (using the forward model from Ch 07 to generate synthetic data) is the primary training strategy, but domain gap between simulation and reality ("inverse crime") must be carefully managed (Ch 31).
  3. Transfer learning from medical imaging (pre-training on MRI data, fine-tuning on RF) is an open research question with promising early results.

Practical Constraints
  • No public RF imaging dataset with raw measurements exists as of 2026

  • Simulation-to-real transfer requires explicit domain randomization and noise model calibration

ISAC (Integrated Sensing and Communications)

A paradigm where a single waveform and hardware platform simultaneously communicates data and senses/images the environment. The RF imaging forward model arises naturally from the communication waveform.

Related: ISAC as a Unique RF Imaging Capability

MoDL (Model-Based Deep Learning)

A learned reconstruction architecture alternating between a data-consistency step (CG solver) and a CNN denoiser. Originally developed for MRI, transferable to any linear inverse problem by replacing the forward operator.

Related: Model-Based Deep Learning (MoDL)

Key Takeaway

The most impactful transfers from medical imaging to RF are: (1) learned reconstruction architectures (E2E VarNet, MoDL, Learned Primal-Dual) that separate the physics from the prior and transfer by substituting the forward operator; (2) self-supervised training methods (SSDU) that circumvent the lack of ground-truth RF data; and (3) uncertainty quantification methods that enable reliable detection decisions. In the reverse direction, RF imaging's unique ISAC paradigm — joint sensing and communication from a shared waveform — poses new theoretical questions (capacity-distortion tradeoffs) that have no analogue in medical imaging and represent a genuine intellectual contribution from the wireless community.