Exercises
ex31-01-hardware-selection
Easy. A student wants to build a low-cost MIMO radar imaging demonstrator for a course project. The budget is $500. Recommend a hardware platform and specify the achievable range resolution, angular resolution, and maximum range.
TI evaluation boards are the most affordable option for MIMO radar.
Platform selection
TI IWR6843ISK evaluation board: $N_t = 3$, $N_r = 4$ MIMO, 4 GHz bandwidth (60--64 GHz). The remaining budget covers USB cables, mounting hardware, and corner reflectors.
Performance
Range resolution: $c/2B = 3.75$ cm. Virtual array: $3 \times 4 = 12$ elements at $\lambda/2 = 2.5$ mm spacing. Angular resolution: $\approx 9.5^\circ$ (12-element ULA). Maximum range: tens of metres (limited by transmit power and small antenna gain). Sufficient for indoor imaging demonstrations.
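These numbers follow from two standard approximations, $c/2B$ for range and $\lambda/(Nd)$ for angle; a quick pure-Python check (half-wavelength element spacing is assumed):

```python
import math

C = 3e8  # speed of light, m/s

def range_resolution(bandwidth_hz: float) -> float:
    """FMCW range resolution: c / (2B)."""
    return C / (2 * bandwidth_hz)

def ula_beamwidth_deg(n_elements: int, spacing_wavelengths: float = 0.5) -> float:
    """Approximate ULA beamwidth: lambda / (N d) radians."""
    return math.degrees(1.0 / (n_elements * spacing_wavelengths))

print(range_resolution(4e9) * 100)   # 3.75 cm for the 4 GHz sweep
print(ula_beamwidth_deg(12))         # ~9.5 deg for the 12-element virtual array
```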
ex31-02-psnr-ssim
Easy. A radar image has pixel values in $[0, 1]$. The reconstruction has MSE $10^{-3}$. Compute the PSNR. If the SSIM is 0.85, which metric indicates better quality and why might they disagree?
PSNR for unit-range images.
PSNR
PSNR $= 10\log_{10}(1^2/\mathrm{MSE}) = 10\log_{10}(10^3) = 30$ dB. This is moderate quality.
SSIM interpretation
An SSIM of 0.85 indicates decent structural similarity. They may disagree because PSNR counts all errors equally (including background noise), while SSIM focuses on structural features. A reconstruction that is noisy everywhere but preserves edges and targets will have low PSNR but high SSIM. Conversely, a smooth but blurred reconstruction has high PSNR but low SSIM.
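The PSNR calculation can be checked directly; an MSE of $10^{-3}$ is assumed as the illustrative value:

```python
import math

def psnr(mse: float, peak: float = 1.0) -> float:
    """PSNR in dB for pixel values in [0, peak]."""
    return 10 * math.log10(peak ** 2 / mse)

print(psnr(1e-3))   # 30.0 dB: moderate quality
```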
ex31-03-forward-model
Easy. For each scenario, recommend the appropriate forward model (point scatterer, ray tracing, or FDTD) and justify your choice: (a) training a deep unrolling network for sparse target detection; (b) validating an indoor imaging algorithm in a furnished room; (c) analysing diffraction around a building corner.
Consider the physics needed and the computational budget.
Recommendations
(a) Point scatterer: fast (microseconds per realisation), generates millions of training samples. Sufficient for sparse targets (Born approximation valid). But use ray tracing for the test set to avoid the inverse crime. (b) Ray tracing: captures wall reflections and furniture multipath. Moderate speed (ms per scene). Sionna with a 3D room model provides realistic channels. (c) FDTD: diffraction requires solving Maxwell's equations; geometric optics (ray tracing) cannot model it accurately. Slow but necessary for this scenario.
ex31-04-noise-model
Easy. A radar receiver has noise figure $\mathrm{NF}$ dB and bandwidth $B$ MHz. Compute: (1) the noise power at $T = 290$ K; (2) the SNR for a target return with power $P_r$ dBm.
$k_B = 1.38 \times 10^{-23}$ J/K.
Noise power
$P_n = k_B T B \cdot 10^{\mathrm{NF}/10}$ W; equivalently, in dBm, $P_n = -174 + 10\log_{10}(B/\mathrm{Hz}) + \mathrm{NF}$.
SNR
$\mathrm{SNR} = P_r - P_n$ dB. The target is below the noise floor; coherent integration or CS reconstruction is needed for detection.
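A sketch of the computation with assumed illustrative numbers (NF $= 10$ dB, $B = 100$ MHz, the standard 290 K reference temperature):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def noise_power_dbm(bandwidth_hz: float, nf_db: float, temp_k: float = 290.0) -> float:
    """Receiver noise power: kTB in dBm plus the noise figure."""
    return 10 * math.log10(K_B * temp_k * bandwidth_hz / 1e-3) + nf_db

pn = noise_power_dbm(100e6, 10.0)   # ~ -84 dBm for the assumed NF and B
print(pn, -90 - pn)                 # a -90 dBm return sits ~6 dB below the floor
```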
ex31-05-inverse-crime-test
Medium. You are running a simulation study comparing ISTA and a deep unrolling network for OFDM radar imaging. Your current setup uses the same DFT-based forward model for data generation and reconstruction. Design a modification that avoids the inverse crime. Quantify the expected performance degradation.
Use a higher-fidelity model for data generation.
Current (crime)
Forward model for data generation: $y = Ax$ with $A$ a DFT on the reconstruction grid. Reconstruction: the same $A$, i.e. identical models (the inverse crime).
Modified (honest)
Generate data on a finer grid using the exact Green's function (near-field model with range-dependent phase). Add mutual coupling (a coupling matrix with 5% off-diagonal coefficients). Reconstruct on the original coarse grid using the DFT model.
Expected degradation
Crime PSNR: $\approx 35$ dB. Honest PSNR: 22--28 dB (a 7--13 dB drop). The deep network may degrade more than ISTA because it overfitted to the crime model during training. Solution: retrain the network on data from the honest forward model.
ex31-06-mc-design
Medium. Design a Monte Carlo study to compare 4 imaging algorithms (MF, LASSO, ISTA-Net, U-Net) across 5 SNR levels. The metric is PSNR with expected standard deviation 3 dB. Target: 95% CI of $\pm 0.5$ dB. Compute: (1) trials per setting; (2) total compute time if each trial takes 10 s (MF), 60 s (LASSO), 5 s (ISTA-Net), 2 s (U-Net).
$n = (z_{0.975}\,\sigma/\epsilon)^2$ with $z_{0.975} = 1.96$.
Trials per setting
$n = (1.96 \times 3/0.5)^2 = 138.3$. Round up: $n = 139$.
Total compute time
Per algorithm: 5 SNR levels $\times$ 139 trials $= 695$ trials. MF: 6950 s ($\approx 1.9$ h). LASSO: 41,700 s ($\approx 11.6$ h). ISTA-Net: 3475 s ($\approx 1.0$ h). U-Net: 1390 s ($\approx 0.4$ h). Total: $\approx 14.9$ hours. Parallelisable across the 5 SNR levels: $\approx 3$ hours on 5 GPUs.
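The trial count and total time can be sketched as follows (the $\pm 0.5$ dB CI half-width is an assumption):

```python
import math

def trials_needed(sigma: float, half_width: float, z: float = 1.96) -> int:
    """Trials for a 95% CI of +/- half_width on the mean, per-trial std sigma."""
    return math.ceil((z * sigma / half_width) ** 2)

n = trials_needed(sigma=3.0, half_width=0.5)            # 139 trials per setting
per_trial_s = {"MF": 10, "LASSO": 60, "ISTA-Net": 5, "U-Net": 2}
total_s = sum(5 * n * t for t in per_trial_s.values())  # 5 SNR levels each
print(n, round(total_s / 3600, 1))                      # ~14.9 h total
```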
ex31-07-statistical-test
Medium. In a Monte Carlo study with 100 trials, ISTA-Net achieves mean PSNR 28.3 dB and LASSO achieves 27.5 dB on the same test instances. (1) Perform a paired $t$-test at $\alpha = 0.05$. (2) Compute Cohen's $d$. (3) Is the improvement practically meaningful?
For paired test, compute the mean and std of the differences.
Paired $t$-test
Mean difference $\bar{d} = 0.8$ dB. Assume $\sigma_d = 1.6$ dB (smaller than the individual stds because paired trials are positively correlated). $t = \bar{d}/(\sigma_d/\sqrt{n}) = 0.8/(1.6/10) = 5.0 > t_{0.975,99} \approx 1.98$: statistically significant.
Effect size
$d = \bar{d}/\sigma_d = 0.8/1.6 = 0.5$: medium effect size.
Practical significance
A 0.8 dB improvement is statistically significant but modest in practice (barely perceptible visually). The medium effect size suggests the improvement is real but the practical relevance depends on the application: for target detection near the noise floor, 0.8 dB can be meaningful; for image display, it is negligible.
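A minimal sketch of the paired test from summary statistics, assuming $\sigma_d = 1.6$ dB for the per-pair differences:

```python
import math

def paired_t(mean_diff: float, std_diff: float, n: int) -> float:
    """t-statistic for a paired test, from the per-pair differences."""
    return mean_diff / (std_diff / math.sqrt(n))

def cohens_d(mean_diff: float, std_diff: float) -> float:
    """Paired-data effect size: mean difference over std of differences."""
    return mean_diff / std_diff

t = paired_t(0.8, 1.6, 100)    # 5.0 > 1.98, significant
d = cohens_d(0.8, 1.6)         # 0.5, medium effect
print(t, d)
```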
ex31-08-dataset-generation
Medium. Design a simulated dataset for training a learned RF imaging network. Specify: the target models (types, sources), the radar parameters (carrier frequency, bandwidth, array geometry), the noise model, the dataset size, and the train/val/test split.
Use ShapeNet or THuman for diverse targets.
Avoid the inverse crime in the test set.
Target models
Source: 500 ShapeNet models from 5 categories (chair, table, person, vehicle, box). Voxelise each onto a reflectivity grid. 10 random orientations per model gives 5000 scenes.
Radar parameters
Carrier $f_c$ in the mmWave band (e.g. 28 GHz) with a few GHz of bandwidth spanned by the OFDM subcarriers. UPA with $N_t$ transmit and $N_r$ receive elements, giving an $N_t N_r$-element virtual array. SNR drawn uniformly over a wide range (e.g. 0--30 dB).
Data generation
Training/val: Born model on the reconstruction grid (fast, allows a large dataset). Test: Born model on a finer grid with off-grid targets and mutual coupling (avoids the inverse crime). Split: 4000 train, 500 val, 500 test.
ex31-09-fair-comparison
Medium. You are reviewing a paper that claims a new deep learning method achieves 5 dB PSNR improvement over LASSO for OFDM radar imaging. The paper uses 10,000 training samples for the DL method and default CVXPY parameters for LASSO. Identify the issues and design a fair comparison.
LASSO's $\lambda$ must be tuned; default parameters are not optimal.
Issues
(1) LASSO is not tuned: the default $\lambda$ is likely suboptimal. (2) No validation set mentioned. (3) The DL method has 10,000 samples for learning; LASSO uses none. (4) Computational cost is not reported. (5) PSNR alone may not capture task-relevant quality.
Fair comparison design
(1) Tune $\lambda$ via 5-fold cross-validation. (2) Use a fixed train/val/test split. (3) Report PSNR, SSIM, NMSE, and $F_1$. (4) Report inference time and memory. (5) Include ISTA-Net as an intermediate baseline.
ex31-10-chamfer
Medium. Two 3D RF imaging algorithms reconstruct a scene with 5 point targets. Algorithm A localises all 5 targets but with position errors of about 2 cm. Algorithm B localises only 4 targets (misses one) with errors below 1 cm. Compute the Chamfer distance for each. Which is better?
Chamfer distance penalises both missing points and position error.
Algorithm A
All 5 targets found. $d_{\mathrm{CD}} \approx 2$ cm (assuming symmetric nearest-neighbour matching).
Algorithm B
4 found, 1 missed. Pred-to-truth: $\approx 1$ cm. Truth-to-pred: $(4 \times 1 + d)/5$ cm, where $d$ is the distance from the missed target to the nearest detected target. If $d = 50$ cm, truth-to-pred $= 10.8$ cm, and the symmetric total is $\approx 5.9$ cm.
Comparison
Algorithm A is better by Chamfer distance despite larger individual errors, because it detects all targets. The missed target in B incurs a large penalty. This illustrates that Chamfer distance rewards coverage (detecting all targets) over per-target precision.
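A minimal pure-Python Chamfer distance, illustrated on a hypothetical 5-target line scene (units in metres, the 2 cm / 1 cm error values and the miss distance are assumptions):

```python
import math

def chamfer(pred, truth):
    """Symmetric Chamfer distance: mean nearest-neighbour distance, both directions."""
    def one_way(src, dst):
        return sum(min(math.dist(p, q) for q in dst) for p in src) / len(src)
    return 0.5 * (one_way(pred, truth) + one_way(truth, pred))

truth = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0), (4.0, 0.0)]
pred_a = [(x + 0.02, 0.0) for x, _ in truth]       # all 5 found, 2 cm errors
pred_b = [(x + 0.01, 0.0) for x, _ in truth[:4]]   # one target missed, 1 cm errors
print(chamfer(pred_a, truth), chamfer(pred_b, truth))
```

The missed target dominates B's truth-to-pred term, so B scores worse despite smaller per-target errors.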
ex31-11-calibration
Medium. A MIMO radar with $N_t = 4$, $N_r = 8$ (32 virtual elements) at 77 GHz requires phase calibration. A corner reflector is placed at a known boresight range $R$. The measured phases deviate from the expected values by up to $30^\circ$. Design the calibration procedure and analyse the residual error impact.
Phase errors cause pointing errors and increased sidelobes.
Calibration procedure
Place the corner reflector on boresight at range $R$. Measure the complex response $y_k$ for each virtual channel $k$. Expected phase: $\phi_k = -4\pi R/\lambda$ (identical across channels for a boresight target). Calibration vector: $c_k = e^{j(\phi_k - \angle y_k)}$, applied multiplicatively to each channel.
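The calibration step can be sketched as follows; the reflector range (5 m) and the per-channel phase errors are hypothetical values:

```python
import cmath
import math

def calibration_vector(measured, expected_phase):
    """Per-channel correction rotating each measured response to the expected phase."""
    return [cmath.exp(1j * (expected_phase - cmath.phase(y))) for y in measured]

lam = 3e8 / 77e9                      # wavelength at 77 GHz
R = 5.0                               # hypothetical reflector range, m
expected = -4 * math.pi * R / lam     # two-way boresight phase
errors = [0.3, -0.2, 0.1, -0.4]       # hypothetical per-channel phase errors, rad
measured = [cmath.exp(1j * (expected + e)) for e in errors]
cal = calibration_vector(measured, expected)
corrected = [y * c for y, c in zip(measured, cal)]
# After correction every channel carries the same (expected) phase.
```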
Residual error
If calibration reduces the phase error from $30^\circ$ to a few degrees RMS, the residual sidelobe degradation is negligible for 32 elements. Without calibration ($30^\circ$ RMS), the sidelobe floor rises to roughly $10\log_{10}(\sigma_\phi^2/N) \approx -21$ dB, significant for fine angular imaging.
ex31-12-dynamic-range
Hard. A VNA-based channel sounder has 120 dB dynamic range, while a TI IWR6843 has 45 dB. For a scene with targets at 0, $-20$, $-40$, and $-60$ dB relative to the strongest, determine which targets are visible on each system. Discuss implications for CS reconstruction.
Dynamic range limits the weakest detectable target relative to the strongest.
VNA sounder
120 dB dynamic range: all 4 targets visible. The $-60$ dB target has a margin of 60 dB above the noise floor.
TI IWR6843
45 dB dynamic range: targets at 0 and $-20$ dB are visible. The target at $-40$ dB is marginal (only 5 dB above the floor). The target at $-60$ dB is invisible.
CS implications
For CS-SAR: the VNA data supports recovery of all 4 targets via sparsity; the TI data supports only 2--3. For learned methods: training on VNA data produces networks that expect high dynamic range; deploying on TI hardware causes performance degradation. Train (or fine-tune) on data matching the deployment hardware.
ex31-13-sim-vs-meas
Hard. An RF imaging algorithm achieves 32 dB PSNR in simulation but only 18 dB on measured data. List and analyse 5 potential causes for this 14 dB gap, ordered by likelihood. For each, propose a diagnostic test.
The gap comes from model mismatch, hardware impairments, and calibration errors.
Potential causes and diagnostics
- Inverse crime (simulation uses the same model for generation and reconstruction): test by using different forward/inverse models in simulation. If the simulated PSNR drops to $\approx 22$ dB, the crime accounts for $\sim 10$ dB of the gap.
- Calibration errors (uncorrected phase/amplitude): test by imaging a single known reflector.
- Model mismatch (Born vs multipath reality): test by adding ray-traced multipath to simulation.
- Hardware impairments (IQ imbalance, phase noise, ADC quantisation): test by adding these to simulation.
- Environmental factors (clutter, interference): test by measuring in an anechoic chamber.
Diagnostic priority
Start with (2) calibration (easiest to check), then (1) inverse crime, then (4) hardware, then (3) model mismatch, finally (5) environment.
ex31-14-benchmark-design
Hard. Design a standardised benchmark for OFDM radar imaging algorithms. Specify: (1) test scenarios (easy/medium/hard); (2) metrics; (3) baseline algorithms; (4) scoring methodology.
Vary SNR, number of targets, and target spacing across the difficulty levels.
Test scenarios
Easy: 5 well-separated targets, high SNR (e.g. 20 dB), no clutter. Medium: 15 targets (some closely spaced), moderate SNR (e.g. 10 dB), static clutter. Hard: 30 targets (dense), low SNR (e.g. 0 dB), dynamic clutter. Each level: 100 test images.
Metrics and scoring
Primary: NMSE (dB), $F_1$ at a fixed detection threshold, SSIM. Secondary: inference time, memory, parameter count. Composite: a weighted sum of the normalised primary metrics.
Baselines
MF (reference), LASSO (tuned), ISTA-Net+ (16 layers), U-Net post-processing. All provided in the benchmark package. Automated leaderboard with an evaluation server.
ex31-15-ground-truth
Hard. For an outdoor SAR imaging measurement campaign, compare three ground truth methods: (1) GPS survey; (2) drone-mounted LiDAR; (3) photogrammetry. For each, state the accuracy, cost, and limitations for validating a SAR image with 15 cm resolution.
Ground truth accuracy should be better than the imaging resolution.
GPS survey
RTK-GPS: 1--2 cm horizontal, 2--3 cm vertical. Cost: $5k. Limitation: measures point positions only, not continuous surfaces.
Drone LiDAR
Accuracy: 2--5 cm. Cost: $15k--$50k. Produces a dense 3D point cloud. Limitations: vegetation penetration; multipath from metal surfaces in the LiDAR returns.
Photogrammetry
Accuracy: 5--10 cm. Cost: $2k. Produces a textured 3D mesh. Limitation: fails on textureless surfaces.
Recommendation
For 15 cm SAR resolution, all three are adequate (each is finer than the imaging resolution). Best: LiDAR for the full scene, GPS for reference points. Photogrammetry as a low-cost supplement.
ex31-16-deepinverse
Hard. You want to implement PnP-ADMM for RF imaging using DeepInverse. The sensing matrix $A$ has Kronecker structure. Outline the implementation steps: (a) defining the forward operator; (b) choosing the denoiser; (c) setting ADMM parameters; (d) evaluating against baselines.
DeepInverse's LinearPhysics class accepts a matrix or a function pair (forward, adjoint).
Forward operator
Define a custom LinearPhysics subclass. Implement A(x) as the Kronecker-structured forward and A_adjoint(y) as the adjoint. Use PyTorch tensors for GPU acceleration.
Denoiser
Use DeepInverse's pretrained DRUNet (handles multiple noise levels via noise-level conditioning). Alternatively, train a denoiser on RF reflectivity maps if domain-specific performance is needed.
ADMM parameters
Penalty $\rho$: start with $\rho = 1$ and tune on a validation set. Denoiser noise level: tie $\sigma$ to the penalty (e.g. $\sigma \propto 1/\sqrt{\rho}$). Number of ADMM iterations: 20--50 (monitor convergence).
Evaluation
Compare against MF, LASSO, and U-Net. Use enough trials for tight confidence intervals (e.g. 100), and report PSNR, SSIM, $F_1$, and inference time. Use an honest forward model (different grid) for the test data.
ex31-17-roc-analysis
Hard. Three algorithms produce the following $(P_d, P_{fa})$ operating points: MF $P_d = 0.72$, LASSO $P_d = 0.88$, U-Net $P_d = 0.93$, each at its recommended $P_{fa}$. Compute the improvement of each algorithm over MF. Is the U-Net improvement over LASSO statistically significant if $P_d$ has standard deviation 0.03 over 100 trials?
Compare $P_d$ at the same $P_{fa}$.
Improvements
MF to LASSO: $P_d$ increases from 0.72 to 0.88 ($+0.16$) at a lower $P_{fa}$, making the comparison even more favourable for LASSO. MF to U-Net: $P_d$ increases from 0.72 to 0.93 ($+0.21$).
Statistical significance (U-Net vs LASSO)
$\Delta P_d = 0.05$. $\mathrm{SE} = \sigma\sqrt{2/n} = 0.03\sqrt{2/100} \approx 0.0042$. $z = 0.05/0.0042 \approx 11.8 \gg 1.96$: highly significant. Cohen's $d = 0.05/0.03 \approx 1.67$: large effect size. The improvement is both statistically and practically significant.
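The significance computation, as a sketch (an unpaired two-sample $z$-test with equal per-trial std is assumed):

```python
import math

def z_two_sample(p1: float, p2: float, sigma: float, n: int) -> float:
    """z-statistic for a difference of means, equal per-trial std sigma."""
    return (p1 - p2) / (sigma * math.sqrt(2.0 / n))

z = z_two_sample(0.93, 0.88, sigma=0.03, n=100)   # ~11.8, far beyond 1.96
d = (0.93 - 0.88) / 0.03                          # Cohen's d ~1.67, large
print(round(z, 1), round(d, 2))
```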
ex31-18-full-system
Challenge. Design and implement (pseudocode) a complete RF imaging experiment: from hardware setup through data collection, algorithm application, and evaluation. The goal is to image a room using a TI IWR6843 mounted on a rotating turntable (creating a synthetic aperture). Specify all parameters, the data collection procedure, the SAR reconstruction algorithm, and the evaluation against a ground-truth floor plan.
Turntable rotation creates a circular SAR aperture.
Ground truth: laser-measured room dimensions.
Hardware setup
TI IWR6843ISK on a turntable at the room centre. Rotation: $360^\circ$ in 60 s ($6^\circ$/s). FMCW chirp: 60--64 GHz ($B = 4$ GHz), 256 chirps per frame, 30 frames/s. Total data: 1800 frames $\times$ 12 virtual channels.
Calibration and data collection
Calibrate with a corner reflector at a known position. Collect data during one full rotation. Record the turntable angle (encoder) synchronised to radar frames. Export raw ADC data via the DCA1000EVM. Repeat 3 times.
SAR reconstruction
1. Range compression: FFT across fast-time (resolution $c/2B = 3.75$ cm).
2. Virtual array formation: 12 elements.
3. Circular SAR back-projection: coherently sum contributions from all rotation angles.
4. LASSO refinement on an oversampled grid.
5. CFAR detection.
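The back-projection step can be sketched as a toy circular-SAR example; the 5 cm antenna offset, single point target, phase-only signal model, and 2D geometry are simplifying assumptions:

```python
import cmath
import math

LAM = 3e8 / 62e9     # wavelength near the middle of the 60-64 GHz sweep
RADIUS = 0.05        # assumed antenna offset from the rotation axis, m

def simulate(target, n_angles=72):
    """Phase history of a unit point scatterer seen from a circular aperture."""
    data = []
    for k in range(n_angles):
        th = 2 * math.pi * k / n_angles
        ant = (RADIUS * math.cos(th), RADIUS * math.sin(th))
        r = math.dist(ant, target)
        data.append((ant, cmath.exp(-1j * 4 * math.pi * r / LAM)))
    return data

def backproject(data, grid):
    """Coherently sum phase-compensated returns at each candidate pixel."""
    return [abs(sum(v * cmath.exp(1j * 4 * math.pi * math.dist(ant, q) / LAM)
                    for ant, v in data)) for q in grid]

target = (1.0, 0.5)
grid = [(1.0 + 0.02 * i, 0.5 + 0.02 * j) for i in range(-5, 6) for j in range(-5, 6)]
img = backproject(simulate(target), grid)
peak = grid[max(range(len(grid)), key=img.__getitem__)]
print(peak)  # brightest pixel coincides with the target position
```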
Evaluation
Ground truth: laser measurements (mm-level accuracy). Overlay the SAR image on the floor plan. Metrics: wall-position RMSE, furniture $F_1$, false alarm rate. Expected: wall RMSE of a few cm and high furniture $F_1$.
ex31-19-full-methodology
Challenge. Design a complete experimental methodology for a paper claiming that a new deep learning method outperforms LASSO for OFDM radar imaging. Specify: (1) simulation setup (avoiding the inverse crime); (2) Monte Carlo design; (3) measurement campaign; (4) statistical analysis; (5) the figures and tables the paper should include.
The paper needs both simulation and measurement results.
Simulation
Forward model (generation): Sionna ray tracing at 28 GHz, indoor scene, fine grid. Reconstruction: Born approximation on a coarser grid. Off-grid targets. Noise: AWGN over a range of SNRs. Scenes: 3 rooms of increasing complexity.
Monte Carlo
At least 100 trials per SNR $\times$ scene setting (7 SNR levels $\times$ 3 scenes $=$ 21 settings). Metrics: PSNR, SSIM, NMSE, $F_1$. Report: mean $\pm$ 95% CI.
Measurement
TI IWR6843. Calibrate at 3 positions. Scenes: empty room, office, corridor. 3 runs each. Ground truth: laser.
Statistical analysis
Paired $t$-test for each SNR and scene. Bonferroni correction for the 21 tests ($\alpha = 0.05/21 \approx 0.0024$). Cohen's $d$. Friedman test for overall ranking.
Required outputs
Table 1: PSNR/SSIM with 95% CI. Figure 1: PSNR vs SNR with error bars. Figure 2: example reconstructions (best, median, worst). Figure 3: measurement images with metrics. Table 2: compute cost. Table 3: statistical test $p$-values and effect sizes.
ex31-20-metric-design
Challenge. Current metrics (PSNR, SSIM, LPIPS) were designed for natural images. Propose a new metric specifically for RF reflectivity maps that captures: (a) target localisation accuracy; (b) target amplitude fidelity; (c) background suppression. Define the metric mathematically, show it reduces to PSNR in a special case, and discuss its properties.
Decompose the image into target and background regions.
Weight errors differently in each region.
Definition
Let $\mathcal{T}$ be the target voxels ($|x_i| > \tau$) and $\mathcal{B}$ the background. Define $\mathrm{RFIQ} = \alpha\,\mathrm{PSNR}_{\mathcal{T}} + \beta\left(1 - \left|\tfrac{\sum_{\mathcal{T}} \hat{x}_i}{\sum_{\mathcal{T}} x_i} - 1\right|\right) + \gamma\, 10\log_{10}\tfrac{\overline{|\hat{x}|^2}_{\mathcal{T}}}{\overline{|\hat{x}|^2}_{\mathcal{B}}}$, where the three terms measure: (a) target-region PSNR; (b) amplitude fidelity (a ratio of 1 is perfect); (c) contrast ratio (higher is better).
Special case
When $\mathcal{T}$ is the entire image and $(\alpha, \beta, \gamma) = (1, 0, 0)$, RFIQ reduces to standard PSNR.
Properties
RFIQ is threshold-dependent (via $\tau$). It separately penalises target errors and background leakage. The contrast term rewards methods that suppress background clutter even at the cost of overall PSNR. Limitations: it requires known target locations (from ground truth) and a threshold choice.
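A sketch of one possible implementation of the three components described in the solution (target-region PSNR, amplitude ratio, contrast); the exact term forms and weights $(\alpha, \beta, \gamma)$ are illustrative choices, not a canonical definition:

```python
import math

def rfiq(x_hat, x, tau=0.1, alpha=1.0, beta=0.0, gamma=0.0):
    """Region-weighted quality for RF reflectivity maps (illustrative form)."""
    n = len(x)
    T = [i for i in range(n) if abs(x[i]) > tau]    # target voxels
    B = [i for i in range(n) if abs(x[i]) <= tau]   # background voxels
    mse_t = sum((x_hat[i] - x[i]) ** 2 for i in T) / len(T)
    score = alpha * 10 * math.log10(1.0 / mse_t)    # (a) target-region PSNR
    if beta:
        ratio = sum(x_hat[i] for i in T) / sum(x[i] for i in T)
        score += beta * (1 - abs(ratio - 1))        # (b) amplitude fidelity
    if gamma and B:
        tgt = sum(x_hat[i] ** 2 for i in T) / len(T)
        bg = sum(x_hat[i] ** 2 for i in B) / len(B)
        score += gamma * 10 * math.log10(tgt / bg)  # (c) contrast ratio
    return score

x = [0.9, 0.8, 0.2, 0.1]
x_hat = [0.85, 0.82, 0.15, 0.12]
# tau = 0 puts every nonzero voxel in T; with the default weights this is plain PSNR
print(rfiq(x_hat, x, tau=0.0))
```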