Simulation Frameworks and Our Simulator

Why Simulation Matters

Almost all RF imaging research relies on simulation at some stage: validating algorithms before hardware deployment, generating training data for learned methods, and exploring scenarios that are difficult to measure. A well-designed simulator is the foundation of credible research. This section surveys the major simulation frameworks, describes the CommIT group's own simulator, and covers the Monte Carlo methodology needed for statistically rigorous evaluation.


Open-Source Simulation Frameworks

Sionna RT (NVIDIA): differentiable link-level simulator with ray tracing for 5G/6G. Supports OFDM, MIMO, and channel estimation. GPU-accelerated via TensorFlow/JAX. Python.

FEKO (Altair): commercial full-wave electromagnetic solver supporting MoM, FEM, FDTD, and physical optics. Used for antenna design and RCS computation. Accurate but slow for large scenes.

MATLAB Radar Toolbox: comprehensive radar simulation including FMCW, OFDM, MIMO, and SAR. Signal-level simulation with propagation models.

RadarSimPy: open-source Python radar simulator supporting FMCW, OFDM, phased array, and MIMO configurations. Includes target models, interference, and clutter.

OpenRadar: open-source Python library for TI mmWave radar data processing. Includes range-Doppler, CFAR, and angle estimation.

Sionna's differentiable ray tracing is particularly valuable for learned imaging: gradients flow through the channel model, enabling end-to-end training of reconstruction networks.


Hierarchy of Forward Models

Forward models trade accuracy for speed:

| Model | Physics | Speed | Multipath | Diffraction |
| --- | --- | --- | --- | --- |
| Point scatterer | Born approx. | μs | No | No |
| Ray tracing | Geometric optics | ms | Yes | Limited |
| FDTD/MoM | Full Maxwell | hours | Yes | Yes |

The point-scatterer / Born model

$\mathbf{y}_m(f_k) = \sum_{q=1}^{Q} \mathbf{c}_q \cdot a_m(\hat{\mathbf{d}}_q) \cdot e^{-j 2\pi f_k \tau_{m,q}} + \mathbf{w}_m(f_k)$

is the most common in RF imaging papers but is also the most prone to the inverse crime (Section 31.2).
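As a concrete illustration, the Born model above can be simulated in a few lines of NumPy. This is a hedged sketch, not the group's simulator: the array geometry, frequencies, and reflectivities are illustrative, the antenna gains $a_m$ are set to 1, and monostatic round-trip delays $\tau_{m,q} = 2\|\mathbf{p}_m - \mathbf{x}_q\|/c_0$ are assumed.

```python
import numpy as np

rng = np.random.default_rng(0)
c0 = 3e8                                       # speed of light [m/s]
f = 28e9 + np.arange(16) * 10e6                # K = 16 subcarriers (illustrative plan)
rx = np.zeros((8, 3))                          # M = 8 element ULA along x
rx[:, 0] = np.arange(8) * 0.005                # element spacing (illustrative)
targets = rng.uniform([-1.0, 2.0, 0.0], [1.0, 4.0, 1.0], size=(5, 3))  # Q = 5 scatterers
refl = rng.standard_normal(5) + 1j * rng.standard_normal(5)            # reflectivities c_q

# Born forward model: y_m(f_k) = sum_q c_q * exp(-j 2 pi f_k tau_{m,q}) + w_m(f_k),
# with unit antenna gains a_m = 1 and monostatic round-trip delays.
tau = 2.0 * np.linalg.norm(rx[:, None, :] - targets[None, :, :], axis=-1) / c0  # (M, Q)
phase = np.exp(-2j * np.pi * f[None, None, :] * tau[:, :, None])                # (M, Q, K)
y = np.einsum('q,mqk->mk', refl, phase)                                         # noiseless y
y = y + 1e-2 * (rng.standard_normal(y.shape) + 1j * rng.standard_normal(y.shape))  # noise w

print(y.shape)
```

Because every step is a dense array operation, the same code vectorises over targets and frequencies; this is the structure exploited by the inverse-crime discussion below.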


Simulation Framework Comparison

| Framework | Physics | GPU | Differentiable | Language | Cost |
| --- | --- | --- | --- | --- | --- |
| Sionna RT | Ray tracing | Yes | Yes | Python | Free |
| FEKO | Full-wave (MoM/FEM/FDTD) | Limited | No | GUI/Script | Commercial |
| MATLAB Radar | Signal-level | Limited | No | MATLAB | Commercial |
| RadarSimPy | Point scatterer + ray | No | No | Python | Free |
| OpenRadar | Data processing only | No | No | Python | Free |
| CommIT Simulator | Born / Kronecker | Yes (CuPy) | Yes (PyTorch) | Python | Internal |


The CommIT RF Imaging Simulator

The CommIT group's simulator implements the full imaging pipeline:

Architecture:

  • 2D or 3D voxel grid with configurable resolution.
  • Multi-view, multi-frequency sensing with Kronecker-structured $\mathbf{A} = \mathbf{A}_{\mathrm{freq}} \otimes \mathbf{A}_{\mathrm{space}}$.
  • GPU-accelerated forward and adjoint operations via CuPy/PyTorch.

The 13-phase pipeline:

  1. Scene generation (ShapeNet / THuman / random)
  2. Array configuration (ULA / UPA / circular / custom)
  3. Frequency plan ($f_0$, $W$, $K$ subcarriers)
  4. Sensing matrix $\mathbf{A}$ construction
  5. Kronecker factorisation and preconditioning
  6. Forward measurement $\mathbf{y} = \mathbf{A}\mathbf{c} + \mathbf{w}$
  7. Noise and impairment injection
  8. Reconstruction (MF, LASSO, OAMP, learned)
  9. Post-processing (thresholding, CFAR)
  10. Metric computation (PSNR, SSIM, detection)
  11. Logging and visualisation
  12. Monte Carlo repetition
  13. Statistical aggregation and reporting

The Kronecker structure enables fast matrix-vector products: $\mathbf{A}\mathbf{c}$ costs $O(MK + NQ)$ instead of $O(MQ)$, where $M$ is the number of measurements, $K$ the number of frequencies, $N$ the number of spatial channels, and $Q$ the number of voxels.
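The factored product can be checked directly against the explicit Kronecker matrix. The sketch below uses small illustrative dimensions and the standard identity $(\mathbf{B} \otimes \mathbf{A})\operatorname{vec}(\mathbf{X}) = \operatorname{vec}(\mathbf{A}\mathbf{X}\mathbf{B}^{T})$, where $\operatorname{vec}$ stacks columns (Fortran order in NumPy):

```python
import numpy as np

rng = np.random.default_rng(1)
K, P = 6, 4                                   # rows / cols of A_freq
N, S = 5, 7                                   # rows / cols of A_space
A_freq = rng.standard_normal((K, P)) + 1j * rng.standard_normal((K, P))
A_space = rng.standard_normal((N, S)) + 1j * rng.standard_normal((N, S))
c = rng.standard_normal(P * S) + 1j * rng.standard_normal(P * S)

# Reference: explicit Kronecker matrix (infeasible at real imaging scale)
A = np.kron(A_freq, A_space)                  # (K*N) x (P*S)
y_full = A @ c

# Matrix-free forward: (B kron A) vec(X) = vec(A X B^T), vec = column stacking
C = c.reshape(S, P, order='F')                # un-vectorise the scene
y_fast = (A_space @ C @ A_freq.T).reshape(-1, order='F')

# Matrix-free adjoint: (B kron A)^H y = vec(A^H Y conj(B))
Y = y_full.reshape(N, K, order='F')
g_fast = (A_space.conj().T @ Y @ A_freq.conj()).reshape(-1, order='F')
g_full = A.conj().T @ y_full

print(np.allclose(y_full, y_fast), np.allclose(g_full, g_fast))
```

The two small matrix products replace one huge one; on GPU (CuPy/PyTorch) the same two-line pattern gives the millisecond-scale forward-adjoint pairs quoted in the engineering note below.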

Differentiable Simulators

For training learned imaging methods end-to-end, the simulator must be differentiable: gradients of the loss with respect to the input scene must propagate through the forward model.

  • Point scatterer model: fully differentiable (linear in $\mathbf{c}_q$, smooth in positions).
  • Ray tracing (Sionna): differentiable with respect to material parameters and geometry.
  • FDTD/MoM: not differentiable (discrete grid, iterative solver). Adjoint methods provide approximate gradients.
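Because the Born model is linear in the reflectivities, the gradient of a least-squares loss is available in closed form as $2\mathbf{A}^{H}(\mathbf{A}\mathbf{c} - \mathbf{y})$: the same adjoint operation used in reconstruction. A minimal numerical check, using an illustrative random operator rather than the full simulator:

```python
import numpy as np

rng = np.random.default_rng(2)
M, Q = 12, 8
A = rng.standard_normal((M, Q)) + 1j * rng.standard_normal((M, Q))  # stand-in Born operator
y = rng.standard_normal(M) + 1j * rng.standard_normal(M)
c = rng.standard_normal(Q) + 1j * rng.standard_normal(Q)

def loss(c):
    r = A @ c - y
    return float(np.real(np.vdot(r, r)))      # ||Ac - y||^2

# Closed-form (Wirtinger) gradient: the adjoint applied to the residual
grad = 2.0 * A.conj().T @ (A @ c - y)

# Central finite-difference check along a random complex direction d
d = rng.standard_normal(Q) + 1j * rng.standard_normal(Q)
eps = 1e-6
fd = (loss(c + eps * d) - loss(c - eps * d)) / (2 * eps)
analytic = float(np.real(np.vdot(d, grad)))   # directional derivative Re<d, grad>

print(abs(fd - analytic) / abs(analytic))     # relative error, near machine precision
```

An autodiff framework (PyTorch, JAX) produces the same gradient automatically, which is what makes end-to-end training through the point-scatterer model cheap.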

Using a differentiable simulator for both training and evaluation creates the inverse crime (Section 31.2).


Monte Carlo Evaluation Framework

A Monte Carlo evaluation consists of:

  1. Parameter space: define the variables being swept (SNR, number of targets, compression ratio, etc.).

  2. Random factors: identify the random elements (noise, target positions, channel realisations).

  3. Number of trials: $N_{\mathrm{MC}}$ independent realisations per parameter setting.

  4. Performance metric: compute the metric $m_i$ for each trial $i \in \{1, \ldots, N_{\mathrm{MC}}\}$.

  5. Statistics: report the sample mean $\bar{m} = \frac{1}{N_{\mathrm{MC}}} \sum_{i=1}^{N_{\mathrm{MC}}} m_i$ and confidence interval $\mathrm{CI}_{1-\alpha} = \bar{m} \pm z_{1-\alpha/2} \cdot s/\sqrt{N_{\mathrm{MC}}}$, where $s$ is the sample standard deviation.
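The five steps above can be sketched in a few lines. Here `run_trial` is a hypothetical placeholder returning a synthetic PSNR-like value; in practice it would execute phases 1-10 of the pipeline for one random realisation.

```python
import numpy as np

rng = np.random.default_rng(3)

def run_trial(snr_db):
    """Hypothetical stand-in for one simulate-and-reconstruct trial."""
    return 25.0 + 0.4 * snr_db + 2.5 * rng.standard_normal()

N_MC = 200
z = 1.96                                       # z_{0.975} for 95% confidence
metrics = np.array([run_trial(snr_db=10.0) for _ in range(N_MC)])

mean = metrics.mean()
s = metrics.std(ddof=1)                        # sample standard deviation
half_width = z * s / np.sqrt(N_MC)             # CI half-width
print(f"PSNR = {mean:.2f} +/- {half_width:.2f} dB (95% CI, N_MC = {N_MC})")
```

Note `ddof=1` for the unbiased sample standard deviation; with `ddof=0` the CI is slightly optimistic for small $N_{\mathrm{MC}}$.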

Theorem: Required Number of Monte Carlo Trials

To achieve a confidence interval half-width of $\epsilon$ at confidence level $1 - \alpha$, the required number of trials is:

$N_{\mathrm{MC}} \geq \left(\frac{z_{1-\alpha/2} \cdot \sigma_m}{\epsilon}\right)^2,$

where $\sigma_m$ is the standard deviation of the metric and $z_{1-\alpha/2}$ is the standard normal quantile ($z_{0.975} = 1.96$ for 95% confidence).

Example: How Many Monte Carlo Trials?

An OFDM radar imaging algorithm is evaluated using PSNR as the metric. In a pilot run of 20 trials, the sample standard deviation is $s = 2.5$ dB. How many trials are needed for a 95% confidence interval of $\pm 0.5$ dB?
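Plugging the pilot numbers into the sample-size formula (a quick sketch; the helper name `required_trials` is ours):

```python
import math

def required_trials(sigma, eps, z=1.96):
    """Trials needed so the CI half-width at quantile z is at most eps."""
    return math.ceil((z * sigma / eps) ** 2)

# Pilot run: s = 2.5 dB, target half-width 0.5 dB at 95% confidence
print(required_trials(sigma=2.5, eps=0.5))  # 97
```

So roughly 100 trials are needed, consistent with the $N_{\mathrm{MC}} \geq 100$ rule of thumb used elsewhere in this section; halving $\epsilon$ would quadruple the count.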

Monte Carlo Convergence

Watch the running mean PSNR and its 95% confidence interval converge as the number of Monte Carlo trials increases. The shaded band is the 95% CI. Notice how the CI is wide for few trials and narrows as $1/\sqrt{N_{\mathrm{MC}}}$.



Statistical Significance Testing

To claim that method A outperforms method B, use a paired statistical test:

  1. Paired $t$-test: compares mean performance on the same test instances. Null hypothesis: $\mu_A - \mu_B = 0$.

  2. Wilcoxon signed-rank test: non-parametric alternative when the metric distribution is non-Gaussian.

  3. Effect size (Cohen's $d$): $d = |\bar{m}_A - \bar{m}_B| / s_{\mathrm{pooled}}$. Conventions: $d < 0.2$ (negligible), $d \in [0.2, 0.5]$ (small), $d \in [0.5, 0.8]$ (medium), $d > 0.8$ (large).

Report both the $p$-value and the effect size. A statistically significant but tiny improvement ($d < 0.2$) may not be practically meaningful.
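All three quantities are one-liners with SciPy. The PSNR arrays below are synthetic and purely illustrative (both methods share a per-scene difficulty term; method A is made ~0.8 dB better on average), and Cohen's $d$ is computed with the pooled standard deviation as defined above:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Synthetic paired PSNR values on the same 100 test scenes (illustrative numbers)
scene = 28.0 + 2.0 * rng.standard_normal(100)        # shared per-scene difficulty
psnr_a = scene + 0.8 + 0.5 * rng.standard_normal(100)
psnr_b = scene + 0.5 * rng.standard_normal(100)

t_stat, p_t = stats.ttest_rel(psnr_a, psnr_b)        # paired t-test
w_stat, p_w = stats.wilcoxon(psnr_a - psnr_b)        # non-parametric alternative

# Effect size: Cohen's d with the pooled standard deviation
s_pooled = np.sqrt((psnr_a.var(ddof=1) + psnr_b.var(ddof=1)) / 2.0)
cohen_d = abs(psnr_a.mean() - psnr_b.mean()) / s_pooled

print(f"p (paired t) = {p_t:.1e}, p (Wilcoxon) = {p_w:.1e}, Cohen's d = {cohen_d:.2f}")
```

Pairing matters here: the shared scene-difficulty term cancels in the paired differences, so the test resolves a 0.8 dB gap that an unpaired test on the raw (high-variance) PSNR values might miss.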

Common Monte Carlo Mistakes

  1. Too few trials: 10 trials cannot resolve a 0.5 dB difference. Use the sample size formula above.

  2. Same random seed: using identical seeds for all methods makes a paired comparison fair, but it must not be used to hide run-to-run variability. Report both same-seed (paired) and independent-seed results.

  3. Cherry-picking: showing the best trial or a hand-picked scene. Always report aggregate statistics.

  4. Ignoring variance: reporting only the mean without confidence intervals or standard deviations.

  5. Multiple comparisons: testing 10 methods pairwise (45 tests) inflates the false-positive rate. Use Bonferroni correction: $\alpha_{\mathrm{adj}} = 0.05/45 \approx 0.001$.

⚠️ Engineering Note

GPU Acceleration for Large-Scale Simulation

For a $128 \times 128 \times 128$ voxel grid with 64 frequencies and a $16 \times 16$ MIMO array, the sensing matrix $\mathbf{A}$ has $16{,}384 \times 2{,}097{,}152$ entries: it cannot be stored explicitly. The Kronecker factorisation $\mathbf{A} = \mathbf{A}_{\mathrm{freq}} \otimes \mathbf{A}_{\mathrm{space}}$ enables matrix-free operations:

  • Forward: $\mathbf{y} = (\mathbf{A}_{\mathrm{freq}} \otimes \mathbf{A}_{\mathrm{space}}) \operatorname{vec}(\mathbf{c})$ via two smaller matrix-vector products.
  • Adjoint: $\mathbf{A}^{H} \mathbf{y}$ via the same factored structure.

On an NVIDIA A100, this reduces the time per forward-adjoint pair from minutes (explicit matrix) to milliseconds (Kronecker).

Example: End-to-End Simulation with Sionna RT

Set up a Sionna simulation for RF imaging at 28 GHz with a 16-antenna base station. The scene contains 5 radar targets in an indoor room. Outline the pipeline.

Common Mistake: Reporting Results from a Single Simulation Run

Mistake:

Running one simulation with a single noise realisation and reporting the resulting PSNR as representative.

Correction:

A single run depends on the specific noise realisation and target configuration. Use at least $N_{\mathrm{MC}} = 100$ trials and report the mean $\pm$ 95% confidence interval. For comparing two methods, use a paired $t$-test to verify that the difference is statistically significant.

Why This Matters: From Channel Models to Imaging Simulators

The telecommunications community has developed sophisticated channel models (3GPP, QuaDRiGa) for link-level simulation. RF imaging simulators extend these by requiring geometric consistency: the scatterer positions that generate the channel must correspond to a physically meaningful scene that can serve as ground truth. Standard channel models lack this geometric consistency, which is why the CommIT group is developing a next-generation geometrically consistent channel simulation tool.

See full treatment in Chapter 32

Monte Carlo Method

A computational technique that uses repeated random sampling to estimate statistical quantities (means, variances, distributions) of a system. In RF imaging, used to evaluate algorithm performance over many noise and scene realisations.

Kronecker Structure

A matrix factorisation $\mathbf{A} = \mathbf{A}_{\mathrm{freq}} \otimes \mathbf{A}_{\mathrm{space}}$ that separates the frequency and spatial dimensions of the sensing operator, enabling fast matrix-vector products and reduced memory.

Related: Sensing Matrix

Quick Check

A simulation study uses 15 Monte Carlo trials and reports that method A achieves $28.3 \pm 1.8$ dB PSNR while method B achieves $27.5 \pm 2.1$ dB. Can we conclude that A is better than B?

Yes, A has higher mean PSNR

No, the confidence intervals overlap and 15 trials is too few

Yes, if we use a one-sided test

Cannot determine without the raw data

Key Takeaway

Sionna RT provides GPU-accelerated, differentiable simulation for end-to-end imaging research. The CommIT simulator uses Kronecker-structured sensing for fast 2D/3D imaging with a 13-phase pipeline. Monte Carlo evaluation with $N_{\mathrm{MC}} \geq 100$ trials, confidence intervals, and paired statistical tests is essential for credible performance claims. Common pitfalls include too few trials, cherry-picking, and ignoring multiple comparisons.