Cross-Pollination: What RF Imaging Can Learn and Contribute
Section Roadmap: Cross-Pollination Between Medical and RF Imaging
The previous three sections established that CT, MRI, and ultrasound share the same mathematical skeleton as RF imaging: a linear forward model, incomplete measurements, and the need for regularization or learning. This section draws the transfer map: which architectures and ideas from medical imaging carry over to RF, what modifications they need, and — critically — what RF imaging offers in return. The punchline: medical imaging has mature learned reconstruction pipelines that RF imaging can borrow, while RF imaging's ISAC paradigm (combining sensing with communication) has no analogue in medical imaging and represents a genuine intellectual export.
Definition: Model-Based Deep Learning (MoDL)
MoDL (Aggarwal et al., 2019) alternates between a learned denoiser and a data-consistency step solved by a conjugate-gradient (CG) solver:

$$z_k = \mathcal{D}_\theta(x_k), \qquad x_{k+1} = \arg\min_x \|Ax - y\|_2^2 + \lambda \|x - z_k\|_2^2 = (A^H A + \lambda I)^{-1}\left(A^H y + \lambda z_k\right),$$

where $\mathcal{D}_\theta$ is a CNN denoiser with shared weights across iterations. The CG inner loop is differentiable, so $\theta$ is learned end-to-end.
Transfer to RF imaging: Replace the MRI forward operator $A_{\mathrm{MRI}}$ with the RF sensing matrix $A_{\mathrm{RF}}$. The CG inner loop must be adapted to exploit the Kronecker structure of $A_{\mathrm{RF}}$ (Ch 07) for computational efficiency. The denoiser requires retraining on RF scene statistics, which differ substantially from medical images (discrete scatterers vs. continuous tissue).
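The data-consistency half of the MoDL iteration can be sketched in NumPy. This is a minimal, illustrative implementation (the function name, CG tolerance, and interface are ours): it solves $(A^H A + \lambda I)x = A^H y + \lambda z_k$ by CG, touching the physics only through black-box matvec callables — exactly the interface that makes the forward operator swappable.

```python
import numpy as np

def modl_data_consistency(A, AH, y, z, lam, n_cg=10, tol=1e-10):
    """One MoDL data-consistency step: solve (A^H A + lam*I) x = A^H y + lam*z
    by conjugate gradients, accessing the physics only through matvec callables."""
    normal_op = lambda v: AH(A(v)) + lam * v
    b = AH(y) + lam * z
    x = np.zeros_like(b)
    r = b - normal_op(x)              # initial residual (x = 0)
    p = r.copy()
    rs = np.vdot(r, r).real
    for _ in range(n_cg):
        if np.sqrt(rs) < tol:
            break                     # converged early; avoids a 0/0 below
        Ap = normal_op(p)
        alpha = rs / np.vdot(p, Ap).real
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = np.vdot(r, r).real
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

In the full MoDL network this step alternates with the denoiser $\mathcal{D}_\theta$, and gradients are backpropagated through the CG iterations.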
Definition: Learned Primal-Dual (Adler & Oktem, 2018)
The Learned Primal-Dual algorithm unrolls a primal-dual optimization scheme (cf. Telecom Ch 03) into $K$ layers:

Dual update: $h_{k+1} = \Gamma_{\theta_k}\!\left(h_k, A x_k, y\right)$

Primal update: $x_{k+1} = \Lambda_{\phi_k}\!\left(x_k, A^H h_{k+1}\right)$

where $\Gamma_{\theta_k}$ and $\Lambda_{\phi_k}$ are small CNNs. Originally developed for CT (where $A$ is the Radon transform), this architecture generalises to any linear inverse problem.
Transfer to RF: This is structurally the same as the unrolled OAMP (Ch 18) viewed as a primal-dual algorithm. The RF-specific modification is to exploit the Kronecker structure in the forward/adjoint products $A x_k$ and $A^H h_{k+1}$.
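The unrolled structure can be sketched as a computation-graph skeleton. The learned CNNs $\Gamma_{\theta_k}$ and $\Lambda_{\phi_k}$ are stood in for by generic callables, so this is an illustration of the wiring, not a trained model:

```python
import numpy as np

def unrolled_primal_dual(A, AH, y, dual_nets, primal_nets, x0, h0):
    """Skeleton of Learned Primal-Dual: alternate dual and primal updates,
    touching the physics only through A / A^H products. dual_nets and
    primal_nets stand in for the small CNNs Gamma_k and Lambda_k."""
    x, h = x0, h0
    for Gamma, Lam in zip(dual_nets, primal_nets):
        h = Gamma(h, A(x), y)   # dual update:   h_{k+1} = Gamma_k(h_k, A x_k, y)
        x = Lam(x, AH(h))       # primal update: x_{k+1} = Lambda_k(x_k, A^H h_{k+1})
    return x
```

With hand-coded updates (dual = residual, primal = gradient step) this skeleton reduces to plain gradient descent on $\|Ax - y\|_2^2$, which is a useful sanity check before swapping in learned blocks.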
Definition: Self-Supervised Data Undersampling (SSDU)
SSDU (Yaman et al., 2020) enables training MRI reconstruction networks without fully sampled ground truth. The key idea: split the acquired k-space index set $\Omega$ into two disjoint subsets $\Theta$ and $\Lambda$ with $\Theta \cup \Lambda = \Omega$ (the full acquired set):
- Train the reconstruction network using $y_\Theta$ as input (further undersampled data).
- Compute the loss on $\Lambda$: the network's output is compared against the measurements in $\Lambda$ that it never saw.
The loss is $\mathcal{L}(\theta) = \big\| y_\Lambda - A_\Lambda\, f_\theta(y_\Theta, A_\Theta) \big\|$ (a normalized $\ell_1$--$\ell_2$ norm in the original paper).
Transfer to RF imaging: RF imaging often lacks ground-truth images entirely (no "fully sampled" RF measurement exists in practice). SSDU is directly applicable: split the Tx-Rx-frequency measurements into disjoint input and loss subsets. This is one of the most impactful transfers from medical imaging to RF.
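A minimal sketch of the SSDU split, assuming the acquired measurements are indexed by a flat array of Tx-Rx-frequency indices (the function name and split ratio `rho` are illustrative):

```python
import numpy as np

def ssdu_split(measured_idx, rho=0.4, rng=None):
    """Split the acquired measurement index set Omega into two disjoint
    subsets: Theta (network input) and Lambda (loss), Theta U Lambda = Omega."""
    if rng is None:
        rng = np.random.default_rng()
    omega = np.asarray(measured_idx)
    in_lambda = rng.random(omega.size) < rho   # Bernoulli mask over Omega
    return omega[~in_lambda], omega[in_lambda]  # (Theta, Lambda)
```

At training time the loss compares the forward-projected network output on $\Lambda$ against $y_\Lambda$; a fresh split per epoch is a common variant.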
Theorem: Architecture Transfer via Forward-Operator Substitution
Let $f_\theta$ be a model-based learned reconstruction network that accesses the forward operator $A$ and its adjoint $A^H$ only through matrix-vector products $Av$ and $A^H w$ (black-box access). Then $f_\theta$ can be transferred to any new inverse problem with forward operator $B$ by:
- Replacing $A \to B$ and $A^H \to B^H$ in the network graph.
- Retraining the learnable parameters on data from the new problem.
The network architecture (number of layers, CNN structure, skip connections) transfers unchanged. Only the data distribution and forward operator change.
Model-based deep learning separates the physics (forward model) from the prior (learned network). If the physics changes (e.g., from MRI's Fourier operator to RF's Kronecker sensing matrix), only the physics interface needs updating — the prior structure (U-Net, ResNet, etc.) remains. This is the "plug-and-play" principle at the architecture level.
Black-box access suffices
Networks like MoDL, E2E VarNet, and Learned Primal-Dual interact with $A$ only through forward and adjoint products. The internal structure of $A$ (Fourier vs. Kronecker vs. Radon) does not affect the network graph — it only affects the numerical values passed between layers.
Retraining adapts the learned prior
The CNN blocks learn image-domain priors (e.g., edge structure, texture patterns). RF scenes have different statistics than medical images, so the weights must be retrained. But the architecture — number of layers, receptive field, skip connections — transfers because it captures general image-processing operations.
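A toy illustration of black-box operator substitution: the same unrolled reconstruction graph is instantiated with two different forward operators without any change to its structure. The class and its fixed gradient step are illustrative stand-ins for a trained network; the learned prior block is omitted.

```python
import numpy as np

class UnrolledRecon:
    """Toy unrolled network that touches the physics only through
    black-box forward/adjoint callables, so the operator can be swapped
    without changing the graph."""
    def __init__(self, A, AH, step=0.5, n_iter=60):
        self.A, self.AH, self.step, self.n_iter = A, AH, step, n_iter

    def __call__(self, y, x0):
        x = x0
        for _ in range(self.n_iter):
            x = x - self.step * self.AH(self.A(x) - y)  # data-consistency gradient
            # a learned prior/denoiser block would act on x here
        return x
```

Instantiating `UnrolledRecon` once with an MRI-like operator and once with an RF-like operator exercises the theorem: only the two callables change; the architecture, step count, and training pipeline stay fixed.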
What Medical Imaging Can Learn from RF: The ISAC Paradigm
The transfer from medical imaging to RF is well-documented. But the reverse direction is equally important and less recognized.
Integrated Sensing and Communications (ISAC) — the paradigm where the same waveform simultaneously communicates data and images the environment (Ch 29--30) — has no analogue in medical imaging. An MRI scanner does not communicate with the patient; a CT scanner does not serve as a data link. The ISAC constraint fundamentally changes the waveform design and the forward operator: the transmitted signal must be optimized for two objectives (data rate and imaging quality) simultaneously.
This dual-function constraint opens new theoretical questions (capacity-distortion tradeoffs, ITA Ch 18) and new architectural challenges (joint communication-sensing beamforming, Ch 29) that offer medical imaging researchers fertile ground for cross-disciplinary collaboration.
Learned Reconstruction Architectures and Their RF Imaging Counterparts
| Architecture | Medical Imaging Origin | RF Imaging Counterpart | Key Modification for RF |
|---|---|---|---|
| FBPConvNet | CT post-processing (Jin 2017) | MF + U-Net (Ch 26) | Replace FBP with the matched filter $A^H y$ |
| E2E VarNet | MRI unrolled gradient descent (Sriram 2020) | Unrolled OAMP (Ch 18) | Exploit Kronecker structure in $A$, $A^H$ |
| MoDL | MRI CG + denoiser (Aggarwal 2019) | CG-Net for RF | Kronecker-accelerated CG inner loop |
| Learned Primal-Dual | CT/MRI (Adler 2018) | Unrolled primal-dual for RF | Kronecker forward/adjoint products |
| SSDU | Self-supervised MRI (Yaman 2020) | Self-supervised RF imaging | Split Tx-Rx measurements instead of k-space lines |
| DPS (Diffusion Posterior Sampling) | General inverse problems (Chung 2023) | Diffusion prior for RF (Ch 30 analogue) | Score function trained on RF scenes |
Example: Transferring MoDL from MRI to RF Imaging: What Changes?
Consider MoDL trained for 4x-accelerated brain MRI with forward operator $A = P_\Omega F$ (an undersampled Fourier transform). We want to apply MoDL to RF imaging, where the forward operator has Kronecker structure (Ch 07).
Task: List the components that transfer unchanged and the components that must be modified.
Components that transfer unchanged
- Architecture: The alternating CG + denoiser structure.
- Denoiser structure: The U-Net architecture and skip connections (though weights must be retrained).
- End-to-end training pipeline: Backpropagation through the CG steps.
- Loss function: measurement-domain loss $\|A\hat{x} - y\|_2^2$ or an image-domain loss.
Components that must change
- Forward/adjoint operations: $A$ changes from the undersampled Fourier operator (FFT-based, $O(N \log N)$ matvecs) to the Kronecker-structured RF sensing matrix (Kronecker products, per Ch 07).
- Training data: Medical images → RF scenes (discrete scatterers, walls, metallic objects — very different statistics).
- CG inner loop: Must exploit Kronecker structure for efficiency; naive dense CG is prohibitively slow for RF.
- Regularization weight $\lambda$: Must be re-tuned for the RF noise level $\sigma^2$ and the conditioning of $A$.
- Complex-valued processing: MRI is inherently complex; some RF implementations use real-valued processing of magnitude images, which loses phase information.
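The Kronecker-accelerated matvec referred to above rests on the identity $(A \otimes B)\,\mathrm{vec}(X) = \mathrm{vec}(B X A^{\top})$, which avoids ever forming the Kronecker product. A sketch (column-major `vec`; the shapes and function name are ours):

```python
import numpy as np

def kron_matvec(A, B, x):
    """Apply (A kron B) to x without forming the Kronecker product, using
    (A kron B) vec(X) = vec(B X A^T) with column-major (Fortran-order) vec."""
    p, q = A.shape
    r, s = B.shape
    X = x.reshape((s, q), order="F")           # unstack x into an s-by-q matrix
    return (B @ X @ A.T).reshape(-1, order="F")
```

For $A$ of size $p \times q$ and $B$ of size $r \times s$, this costs two small matrix products instead of one dense $pr \times qs$ matvec — the difference between a practical and a prohibitively slow CG inner loop.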
Historical Note: The Deep Learning Revolution in Medical Imaging (2016--Present)
The application of deep learning to medical image reconstruction accelerated dramatically after 2016. The fastMRI challenge (launched by Facebook AI Research and NYU Langone in 2018) provided a large-scale benchmark with raw k-space data from over 10,000 clinical MRI scans. This open dataset catalysed the field: within two years, learned methods (U-Net, E2E VarNet) surpassed compressed sensing on every metric.
The fastMRI model — open data, standardized evaluation, and reproducible baselines — is precisely what RF imaging lacks. The CommIT group's simulation framework (Ch 31) aims to fill this gap by providing standardized RF imaging benchmarks, but measured RF data at scale remains an open challenge.
Uncertainty Quantification — A Gift from Medical Imaging
Medical imaging requires uncertainty estimates for clinical decision-making: a radiologist needs to know whether a detected lesion is reliable or an artifact of the reconstruction. Methods developed for this purpose — Monte Carlo dropout, deep ensembles, conformal prediction for images, posterior sampling via diffusion models — transfer directly to RF imaging, where uncertainty quantification is equally important (is that detected scatterer real or a reconstruction artifact?).
The Bayesian framework of Ch 03 provides the theoretical foundation; medical imaging provides the practical implementations.
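As a deliberately simple instance, a deep-ensemble uncertainty map: reconstruct with several independently trained networks and use the pixelwise spread to flag unreliable estimates. The interface below is an illustrative sketch, not a specific library API:

```python
import numpy as np

def ensemble_uncertainty(reconstructors, y):
    """Pixelwise mean and spread across an ensemble of reconstructions.
    High std flags pixels whose estimate is unreliable (possible artifact);
    low std supports a confident detection decision."""
    recons = np.stack([f(y) for f in reconstructors])
    return recons.mean(axis=0), recons.std(axis=0)
```

The same pattern covers Monte Carlo dropout (the "ensemble" is one network with stochastic forward passes) and posterior sampling with diffusion models (the ensemble members are posterior samples).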
ISAC as a Unique RF Imaging Capability
Caire's unified forward model (Ch 07) connects the imaging community's diffraction-tomography view with the wireless community's MIMO-radar view. A key insight: the RF imaging forward operator arises from a communication waveform that simultaneously carries data. This dual-function nature — the ISAC paradigm — has no counterpart in CT, MRI, or ultrasound, where the transmitted signal serves only a sensing purpose.
The capacity-distortion tradeoff (ITA Ch 18, RFI Ch 29) formalises how much imaging quality must be sacrificed to maintain communication rate, and vice versa. This is a genuinely new theoretical question that medical imaging cannot pose.
Quick Check
When transferring E2E VarNet from MRI to RF imaging, which component requires the most significant modification?
- The forward and adjoint operations (replacing Fourier with the Kronecker sensing matrix) (correct answer)
- The U-Net refinement blocks
- The loss function
- The learning rate schedule
E2E VarNet accesses the forward operator through matrix-vector products. Replacing the FFT-based MRI operator with the Kronecker-structured RF sensing matrix is the critical change. The U-Net refinement blocks transfer architecturally unchanged (though weights must be retrained).
Common Mistake: Do Not Copy Medical Imaging Architectures Without Adapting the Forward Model
Mistake:
A common mistake in the RF imaging literature is to adopt a medical imaging architecture (e.g., U-Net for MRI denoising) as a black-box image-to-image network, discarding the forward model entirely. This works for post-processing (Ch 26) but sacrifices the physics-based data-consistency that makes model-based deep learning powerful.
Correction:
Always embed the RF forward model and its adjoint in the network graph. Use the medical imaging architecture (E2E VarNet, MoDL, Learned Primal-Dual) as a template, replacing $A_{\mathrm{MRI}}$ with $A_{\mathrm{RF}}$ and retraining on RF data. Caire's principle: "learned blocks in principled model-based schemes."
Definition: Physics-Informed Neural Operators
A Fourier Neural Operator (FNO) (Li et al., 2021) learns a mapping between function spaces by parameterising integral operators in the Fourier domain:

$$(\mathcal{K} v)(x) = \mathcal{F}^{-1}\!\big( R_\phi \cdot (\mathcal{F} v) \big)(x),$$

where $R_\phi$ is a learnable spectral filter and $\mathcal{F}$ is the Fourier transform. FNOs are resolution-independent: once trained, they can be evaluated at any discretisation.
Transfer to RF imaging: FNOs can learn the inverse of the Helmholtz equation (the wave-equation analogue of the RF forward model) as a single forward pass, bypassing iterative solvers. However, they require large training datasets and struggle with sharp features (edges, point scatterers) due to the Gibbs phenomenon in truncated Fourier representations.
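The core FNO building block can be sketched in a few lines: transform, multiply the lowest Fourier modes by a learnable filter $R_\phi$, transform back. This 1-D NumPy sketch (the mode count and filter are placeholders) also makes the two caveats above concrete: the same truncated filter can be applied at any signal length (resolution independence), and the mode truncation is the source of Gibbs ringing on sharp features.

```python
import numpy as np

def spectral_conv_1d(v, R):
    """One FNO-style spectral layer in 1-D (real input): transform, multiply
    the lowest Fourier modes by the learnable filter R, transform back.
    Modes beyond len(R) are truncated to zero."""
    V = np.fft.rfft(v)
    k = min(len(R), len(V))
    out = np.zeros_like(V)
    out[:k] = R[:k] * V[:k]
    return np.fft.irfft(out, n=len(v))
```

A full FNO layer adds a pointwise linear term and a nonlinearity around this spectral convolution, and stacks several such layers.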
Data Scarcity in RF Imaging vs Medical Imaging
Medical imaging benefits from decades of accumulated clinical data. The fastMRI dataset alone contains 10,000+ MRI scans with raw k-space data. RF imaging has no comparable dataset: measured RF imaging data is scarce, expensive to collect, and often proprietary.
Practical consequences for learned methods:
- Self-supervised methods (SSDU, deep image prior) are more important for RF than for MRI because supervised training data is unavailable.
- Simulation-based training (using the forward model from Ch 07 to generate synthetic data) is the primary training strategy, but domain gap between simulation and reality ("inverse crime") must be carefully managed (Ch 31).
- Transfer learning from medical imaging (pre-training on MRI data, fine-tuning on RF) is an open research question with promising early results.
- No public RF imaging dataset with raw measurements exists as of 2026.
- Simulation-to-real transfer requires explicit domain randomization and noise model calibration.
ISAC (Integrated Sensing and Communications)
A paradigm where a single waveform and hardware platform simultaneously communicates data and senses/images the environment. The RF imaging forward model arises naturally from the communication waveform.
MoDL (Model-Based Deep Learning)
A learned reconstruction architecture alternating between a data-consistency step (CG solver) and a CNN denoiser. Originally developed for MRI, transferable to any linear inverse problem by replacing the forward operator.
Key Takeaway
The most impactful transfers from medical imaging to RF are: (1) learned reconstruction architectures (E2E VarNet, MoDL, Learned Primal-Dual) that separate the physics from the prior and transfer by substituting the forward operator; (2) self-supervised training methods (SSDU) that circumvent the lack of ground-truth RF data; and (3) uncertainty quantification methods that enable reliable detection decisions. In the reverse direction, RF imaging's unique ISAC paradigm — joint sensing and communication from a shared waveform — poses new theoretical questions (capacity-distortion tradeoffs) that have no analogue in medical imaging and represent a genuine intellectual contribution from the wireless community.