Simulation Frameworks and Our Simulator
Why Simulation Matters
Almost all RF imaging research relies on simulation at some stage: validating algorithms before hardware deployment, generating training data for learned methods, and exploring scenarios that are difficult to measure. A well-designed simulator is the foundation of credible research. This section surveys the major simulation frameworks, describes the CommIT group's own simulator, and covers the Monte Carlo methodology needed for statistically rigorous evaluation.
Definition: Open-Source Simulation Frameworks
- Sionna RT (NVIDIA): differentiable link-level simulator with ray tracing for 5G/6G. Supports OFDM, MIMO, and channel estimation. GPU-accelerated via TensorFlow/JAX. Python.
- FEKO (Altair): commercial full-wave electromagnetic solver supporting MoM, FEM, FDTD, and physical optics. Used for antenna design and RCS computation. Accurate but slow for large scenes.
- MATLAB Radar Toolbox: comprehensive radar simulation including FMCW, OFDM, MIMO, and SAR. Signal-level simulation with propagation models.
- RadarSimPy: open-source Python radar simulator supporting FMCW, OFDM, phased array, and MIMO configurations. Includes target models, interference, and clutter.
- OpenRadar: open-source Python library for TI mmWave radar data processing. Includes range-Doppler, CFAR, and angle estimation.
Sionna's differentiable ray tracing is particularly valuable for learned imaging: gradients flow through the channel model, enabling end-to-end training of reconstruction networks.
Definition: Hierarchy of Forward Models
Forward models trade accuracy for speed:
| Model | Physics | Speed | Multipath | Diffraction |
|---|---|---|---|---|
| Point scatterer | Born approx. | µs | No | No |
| Ray tracing | Geometric optics | ms | Yes | Limited |
| FDTD/MoM | Full Maxwell | hours | Yes | Yes |
The point-scatterer / Born model is the most common in RF imaging papers but is also the most prone to the inverse crime (Section 31.2).
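To make the Born model concrete, the following minimal NumPy sketch builds a point-scatterer sensing matrix for a hypothetical 8-element ULA and 16 subcarriers, simulates two point targets, and forms a matched-filter image. All geometry, sizes, and noise levels are illustrative assumptions, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
c = 3e8                                                  # speed of light [m/s]
freqs = np.linspace(28.0e9, 28.4e9, 16)                  # 16 subcarriers over 400 MHz (hypothetical)
ant_pos = np.stack([np.linspace(-0.05, 0.05, 8),
                    np.zeros(8), np.zeros(8)], axis=1)   # 8-element ULA along x [m]

# Image grid: 32 x 32 voxels in the (x, z) plane, 1-2 m in front of the array.
xs, zs = np.meshgrid(np.linspace(-0.5, 0.5, 32), np.linspace(1.0, 2.0, 32))
voxels = np.stack([xs.ravel(), np.zeros(xs.size), zs.ravel()], axis=1)

# Born / point-scatterer sensing matrix: one row per (frequency, antenna) pair,
# one column per voxel, with the two-way propagation phase exp(-j 2*pi*f * 2R/c).
R = np.linalg.norm(ant_pos[:, None, :] - voxels[None, :, :], axis=2)       # (8, 1024)
A = np.exp(-2j * np.pi * freqs[:, None, None] * 2.0 * R[None] / c)         # (16, 8, 1024)
A = A.reshape(-1, voxels.shape[0])                                         # (128, 1024)

x_true = np.zeros(voxels.shape[0], dtype=complex)
x_true[[200, 600]] = 1.0                                                   # two point targets
noise = 0.01 * (rng.standard_normal(A.shape[0]) + 1j * rng.standard_normal(A.shape[0]))
y = A @ x_true + noise                                                     # noisy measurement
x_mf = A.conj().T @ y                                                      # matched-filter image
```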
Simulation Framework Comparison
| Framework | Physics | GPU | Differentiable | Language | Cost |
|---|---|---|---|---|---|
| Sionna RT | Ray tracing | Yes | Yes | Python | Free |
| FEKO | Full-wave (MoM/FEM/FDTD) | Limited | No | GUI/Script | Commercial |
| MATLAB Radar | Signal-level | Limited | No | MATLAB | Commercial |
| RadarSimPy | Point scatterer + ray | No | No | Python | Free |
| OpenRadar | Data processing only | No | No | Python | Free |
| CommIT Simulator | Born / Kronecker | Yes (CuPy) | Yes (PyTorch) | Python | Internal |
Definition: The CommIT RF Imaging Simulator
The CommIT group's simulator implements the full imaging pipeline:
Architecture:
- 2D or 3D voxel grid with configurable resolution.
- Multi-view, multi-frequency sensing with a Kronecker-structured sensing matrix $\mathbf{A} = \mathbf{A}_{\mathrm{f}} \otimes \mathbf{A}_{\mathrm{s}}$ (frequency and spatial factors).
- GPU-accelerated forward and adjoint operations via CuPy/PyTorch.
The 13-phase pipeline:
- Scene generation (ShapeNet / THuman / random)
- Array configuration (ULA / UPA / circular / custom)
- Frequency plan (carrier frequency, bandwidth, number of subcarriers)
- Sensing matrix construction
- Kronecker factorisation and preconditioning
- Forward measurement
- Noise and impairment injection
- Reconstruction (MF, LASSO, OAMP, learned)
- Post-processing (thresholding, CFAR)
- Metric computation (PSNR, SSIM, detection)
- Logging and visualisation
- Monte Carlo repetition
- Statistical aggregation and reporting
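As an example of phase 7 (noise and impairment injection), here is a small, self-contained helper that adds complex white Gaussian noise at a prescribed SNR. The function name and defaults are hypothetical and are not part of the CommIT simulator's actual interface.

```python
import numpy as np

def add_awgn(y, snr_db, rng=None):
    """Add complex AWGN to a measurement vector at a given SNR in dB (hypothetical helper)."""
    rng = rng or np.random.default_rng(0)
    sig_power = np.mean(np.abs(y) ** 2)
    noise_power = sig_power / 10 ** (snr_db / 10)
    noise = np.sqrt(noise_power / 2) * (rng.standard_normal(y.shape)
                                        + 1j * rng.standard_normal(y.shape))
    return y + noise

y_noisy = add_awgn(np.ones(128, dtype=complex), snr_db=10.0)   # toy measurement at 10 dB SNR
```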
The Kronecker structure enables fast matrix-vector products: $\mathbf{A}\mathbf{x}$ costs roughly $\mathcal{O}(N(F+S))$ instead of $\mathcal{O}(MN)$, where $M$ is the number of measurements, $F$ the number of frequencies, $S$ the number of spatial channels, and $N$ the number of voxels.
Differentiable Simulators
For training learned imaging methods end-to-end, the simulator must be differentiable: gradients of the loss with respect to the input scene must propagate through the forward model.
- Point scatterer model: fully differentiable (linear in the reflectivity $\mathbf{x}$, smooth in the target positions).
- Ray tracing (Sionna): differentiable with respect to material parameters and geometry.
- FDTD/MoM: not differentiable (discrete grid, iterative solver). Adjoint methods provide approximate gradients.
Using a differentiable simulator for both training and evaluation creates the inverse crime (Section 31.2).
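To illustrate the differentiability requirement, the sketch below pushes gradients of a data-fit loss back through a point-scatterer (linear) forward model with PyTorch autograd. The sensing matrix here is a random stand-in and the sizes are arbitrary assumptions.

```python
import torch

torch.manual_seed(0)
M, N = 128, 1024                                   # measurements, voxels (hypothetical sizes)
A = torch.randn(M, N, dtype=torch.cfloat)          # stand-in for a Born sensing matrix

x_scene = torch.zeros(N, dtype=torch.cfloat, requires_grad=True)
y_meas = torch.randn(M, dtype=torch.cfloat)        # stand-in for measured data

y_pred = A @ x_scene                               # differentiable forward model (linear in x)
loss = torch.sum(torch.abs(y_pred - y_meas) ** 2)  # data-fit loss
loss.backward()                                    # gradients flow back to the scene
print(x_scene.grad.shape)                          # torch.Size([1024])
```

In an end-to-end learned setup, the same backward pass would continue into the weights of a reconstruction network.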
Definition: Monte Carlo Evaluation Framework
A Monte Carlo evaluation consists of:
- Parameter space: define the variables being swept (SNR, number of targets, compression ratio, etc.).
- Random factors: identify the random elements (noise, target positions, channel realisations).
- Number of trials: $N_{\mathrm{MC}}$ independent realisations per parameter setting.
- Performance metric: compute the metric $m_i$ for each trial $i = 1, \dots, N_{\mathrm{MC}}$.
- Statistics: report the sample mean $\bar{m} = \frac{1}{N_{\mathrm{MC}}}\sum_{i} m_i$ and confidence interval $\bar{m} \pm z_{1-\alpha/2}\, s/\sqrt{N_{\mathrm{MC}}}$, where $s$ is the sample standard deviation.
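A minimal sketch of such an evaluation loop, with hypothetical PSNR values standing in for a full simulation trial:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 100
psnr = np.empty(n_trials)

for i in range(n_trials):
    # Placeholder for one full simulation trial (new noise / scene realisation).
    psnr[i] = 30.0 + 2.5 * rng.standard_normal()            # hypothetical PSNR [dB]

mean = psnr.mean()
half_width = 1.96 * psnr.std(ddof=1) / np.sqrt(n_trials)     # 95% CI half-width
print(f"PSNR = {mean:.2f} dB +/- {half_width:.2f} dB (95% CI, {n_trials} trials)")
```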
Theorem: Required Number of Monte Carlo Trials
To achieve a confidence-interval half-width of $\epsilon$ at confidence level $1-\alpha$, the required number of trials is
$$N_{\mathrm{MC}} \ge \left(\frac{z_{1-\alpha/2}\,\sigma}{\epsilon}\right)^{2},$$
where $\sigma$ is the standard deviation of the metric and $z_{1-\alpha/2}$ is the standard normal quantile ($z_{0.975} \approx 1.96$ for 95% confidence).
Proof
By the central limit theorem, $\bar{m} \sim \mathcal{N}\!\left(\mu, \sigma^{2}/N_{\mathrm{MC}}\right)$ for large $N_{\mathrm{MC}}$. The half-width of the $(1-\alpha)$ confidence interval is $z_{1-\alpha/2}\,\sigma/\sqrt{N_{\mathrm{MC}}}$. Setting this equal to $\epsilon$ and solving for $N_{\mathrm{MC}}$ gives the result.
Example: How Many Monte Carlo Trials?
An OFDM radar imaging algorithm is evaluated using PSNR as the metric. In a pilot run of 20 trials, the sample standard deviation is $s = 2.5$ dB. How many trials are needed for a 95% confidence interval of $\pm 0.5$ dB?
Computation
$N_{\mathrm{MC}} \ge \left(\frac{1.96 \times 2.5}{0.5}\right)^{2} = 9.8^{2} \approx 96$. Round up: $N_{\mathrm{MC}} \approx 100$ trials.
Interpretation
With 100 trials, the reported mean PSNR is accurate to $\pm 0.5$ dB with 95% confidence. The pilot run of 20 trials was insufficient (its CI would have been roughly $\pm 1.1$ dB). Many RF imaging papers report results from only 10--20 trials, which is inadequate to support claims of a $\sim$0.5 dB improvement.
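The same calculation as a small helper (the 1.96 quantile assumes a 95% confidence level):

```python
import numpy as np

def required_trials(sigma, half_width, z=1.96):
    """Number of Monte Carlo trials for a given CI half-width at ~95% confidence."""
    return int(np.ceil((z * sigma / half_width) ** 2))

print(required_trials(sigma=2.5, half_width=0.5))   # -> 97, i.e. roughly 100 trials
```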
Monte Carlo Convergence
Watch the running mean PSNR and its 95% confidence interval converge as the number of Monte Carlo trials increases. The shaded band is the 95% CI. Notice how the CI is wide for few trials and narrows like $1/\sqrt{N_{\mathrm{MC}}}$ as the number of trials grows.
Definition: Statistical Significance Testing
To claim that method A outperforms method B, use a paired statistical test:
- Paired $t$-test: compares mean performance on the same test instances. Null hypothesis: $H_0: \mu_A = \mu_B$.
- Wilcoxon signed-rank test: non-parametric alternative when the metric distribution is non-Gaussian.
- Effect size (Cohen's $d$): $d = (\bar{m}_A - \bar{m}_B)/s_{\mathrm{pooled}}$. Conventions: $|d| < 0.2$ (negligible), $0.2 \le |d| < 0.5$ (small), $0.5 \le |d| < 0.8$ (medium), $|d| \ge 0.8$ (large).

Report both the $p$-value and the effect size. A statistically significant but tiny improvement (e.g., $d < 0.2$) may not be practically meaningful.
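A sketch of these tests using SciPy on hypothetical per-trial PSNR arrays; the data are synthetic stand-ins for paired results from methods A and B.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
psnr_a = 30.0 + 2.5 * rng.standard_normal(100)            # per-trial PSNR, method A (synthetic)
psnr_b = psnr_a - 0.5 + 0.8 * rng.standard_normal(100)    # method B on the same trials

t_stat, p_t = stats.ttest_rel(psnr_a, psnr_b)             # paired t-test
w_stat, p_w = stats.wilcoxon(psnr_a - psnr_b)             # Wilcoxon signed-rank test

s_pooled = np.sqrt((psnr_a.var(ddof=1) + psnr_b.var(ddof=1)) / 2)
cohens_d = (psnr_a.mean() - psnr_b.mean()) / s_pooled      # effect size (Cohen's d)
print(f"paired t-test p={p_t:.3g}, Wilcoxon p={p_w:.3g}, d={cohens_d:.2f}")
```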
Common Monte Carlo Mistakes
- Too few trials: 10 trials cannot resolve a 0.5 dB difference. Use the sample-size formula above.
- Same random seed: using identical seeds for all methods ensures a fair comparison but must not be used to avoid reporting variability. Report both same-seed and independent-seed results.
- Cherry-picking: showing the best trial or a hand-picked scene. Always report aggregate statistics.
- Ignoring variance: reporting only the mean without confidence intervals or standard deviations.
- Multiple comparisons: testing 10 methods pairwise (45 tests) inflates the false-positive rate. Use the Bonferroni correction $\alpha_{\mathrm{corrected}} = \alpha / K$ for $K$ tests, as in the snippet below.
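For example, comparing 10 methods pairwise gives 45 tests and a Bonferroni-corrected per-test threshold of about 0.0011 at $\alpha = 0.05$:

```python
alpha, n_methods = 0.05, 10
n_tests = n_methods * (n_methods - 1) // 2       # 45 pairwise comparisons
alpha_corrected = alpha / n_tests                # Bonferroni-corrected per-test threshold
print(f"{n_tests} tests, per-test alpha = {alpha_corrected:.4f}")
```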
GPU Acceleration for Large-Scale Simulation
For a dense 3D voxel grid with 64 frequencies and a MIMO array, the sensing matrix can easily exceed $10^9$ entries --- it cannot be stored explicitly. The Kronecker factorisation enables matrix-free operations:
- Forward: $\mathbf{y} = \mathbf{A}\mathbf{x} = \mathrm{vec}\!\left(\mathbf{A}_{\mathrm{s}}\,\mathbf{X}\,\mathbf{A}_{\mathrm{f}}^{\mathsf{T}}\right)$ via two smaller matrix products, where $\mathbf{X}$ is the matricised scene vector.
- Adjoint: $\mathbf{A}^{\mathsf{H}}\mathbf{y} = \mathrm{vec}\!\left(\mathbf{A}_{\mathrm{s}}^{\mathsf{H}}\,\mathbf{Y}\,\mathbf{A}_{\mathrm{f}}^{*}\right)$ via the same factored structure, where $\mathbf{Y}$ is the matricised measurement vector.
On an NVIDIA A100, this reduces the time per forward-adjoint pair from minutes (explicit matrix) to milliseconds (Kronecker).
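A minimal NumPy sketch of the matrix-free forward and adjoint under the factorisation $\mathbf{A} = \mathbf{A}_{\mathrm{f}} \otimes \mathbf{A}_{\mathrm{s}}$. The factor matrices here are random placeholders (a real simulator builds them from the frequency plan and array geometry), the sizes are assumptions, and a GPU version would simply replace NumPy with CuPy.

```python
import numpy as np

rng = np.random.default_rng(0)
F, S = 64, 64             # frequencies, spatial channels (hypothetical sizes)
Nf, Ns = 256, 1024        # voxel-grid factors, N = Nf * Ns = 262144

A_f = rng.standard_normal((F, Nf)) + 1j * rng.standard_normal((F, Nf))   # placeholder factor
A_s = rng.standard_normal((S, Ns)) + 1j * rng.standard_normal((S, Ns))   # placeholder factor

def forward(x):
    """y = (A_f kron A_s) x without forming the full M x N matrix."""
    X = x.reshape(Ns, Nf, order="F")                      # column-major unvec of the scene
    return (A_s @ X @ A_f.T).ravel(order="F")             # vec(A_s X A_f^T), length M = F*S

def adjoint(y):
    """A^H y via the same factored structure."""
    Y = y.reshape(S, F, order="F")                        # column-major unvec of the data
    return (A_s.conj().T @ Y @ A_f.conj()).ravel(order="F")   # length N = Nf*Ns

x = rng.standard_normal(Nf * Ns)
y = forward(x)
print(y.shape, adjoint(y).shape)                          # (4096,) (262144,)
```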
Example: End-to-End Simulation with Sionna RT
Set up a Sionna simulation for RF imaging at 28 GHz with a multi-antenna base station. The scene contains 5 radar targets in an indoor room. Outline the pipeline.
Scene setup
Load a 3D indoor model in Sionna RT. Place the base station at one wall. Place 5 metallic targets of known RCS at known positions within the room.
Channel simulation
Run Sionna RT to compute target echoes via ray paths (up to 3 reflections). Export channel impulse responses at 28 GHz with 400 MHz bandwidth, sampled on the OFDM subcarrier grid.
Reconstruction
Form the sensing matrix $\mathbf{A}$ from the array geometry and frequency plan. Apply matched-filter imaging ($\hat{\mathbf{x}} = \mathbf{A}^{\mathsf{H}}\mathbf{y}$) and LASSO for comparison. The Sionna ray-tracing data serves as the "honest" forward model; reconstruction uses the Born approximation, avoiding the inverse crime.
Evaluation
Compare reconstructed images to the known target positions. Compute PSNR, SSIM, and the detection probability at a fixed false-alarm rate.
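A sketch of the scene-setup and path-computation step. It assumes the Sionna RT API of the 0.x releases (names such as `load_scene`, `PlanarArray`, and `compute_paths` may differ in newer versions), a hypothetical scene file `indoor_room.xml` containing the targets, and placeholder positions.

```python
from sionna.rt import load_scene, PlanarArray, Transmitter, Receiver

scene = load_scene("indoor_room.xml")                    # hypothetical Mitsuba scene with the 5 targets
scene.frequency = 28e9                                   # carrier frequency [Hz]
scene.tx_array = PlanarArray(num_rows=8, num_cols=8,
                             vertical_spacing=0.5, horizontal_spacing=0.5,
                             pattern="iso", polarization="V")
scene.rx_array = scene.tx_array                          # monostatic base station
scene.add(Transmitter(name="bs_tx", position=[0.0, -3.0, 2.5]))
scene.add(Receiver(name="bs_rx", position=[0.0, -3.0, 2.5]))

paths = scene.compute_paths(max_depth=3)                 # up to 3 reflections
a, tau = paths.cir()                                     # path gains and delays per antenna pair
```

From `a` and `tau` one can assemble the channel frequency response on the subcarrier grid and feed it to the Born-model reconstruction described above.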
Common Mistake: Reporting Results from a Single Simulation Run
Mistake:
Running one simulation with a single noise realisation and reporting the resulting PSNR as representative.
Correction:
A single run depends on the specific noise realisation and target configuration. Use at least $\sim$100 trials and report the mean with a 95% confidence interval. For comparing two methods, use a paired $t$-test to verify that the difference is statistically significant.
Why This Matters: From Channel Models to Imaging Simulators
The telecommunications community has developed sophisticated channel models (3GPP, QuaDRiGa) for link-level simulation. RF imaging simulators extend these by requiring geometric consistency: the scatterer positions that generate the channel must correspond to a physically meaningful scene that can serve as ground truth. Standard channel models lack this geometric consistency, which is why the CommIT group is developing a next-generation geometrically consistent channel simulation tool.
See full treatment in Chapter 32
Monte Carlo Method
A computational technique that uses repeated random sampling to estimate statistical quantities (means, variances, distributions) of a system. In RF imaging, used to evaluate algorithm performance over many noise and scene realisations.
Kronecker Structure
A matrix factorisation that separates the frequency and spatial dimensions of the sensing operator, enabling fast matrix-vector products and reduced memory.
Related: Sensing Matrix
Quick Check
A simulation study uses 15 Monte Carlo trials and reports that method A achieves a mean PSNR 0.8 dB higher than method B. Can we conclude that A is better than B?
Yes, A has higher mean PSNR
No, the confidence intervals overlap and 15 trials is too few
Yes, if we use a one-sided test
Cannot determine without the raw data
Correct. The 0.8 dB difference is within the confidence intervals. Need ~100 trials for a CI of +/- 0.5 dB.
Key Takeaway
Sionna RT provides GPU-accelerated, differentiable simulation for end-to-end imaging research. The CommIT simulator uses Kronecker-structured sensing for fast 2D/3D imaging with a 13-phase pipeline. Monte Carlo evaluation with enough trials (typically $\sim$100), confidence intervals, and paired statistical tests is essential for credible performance claims. Common pitfalls include too few trials, cherry-picking, and ignoring multiple comparisons.