Exercises
ex24-01-transmittance
Easy: A ray passes through two walls of thickness 15 cm each, separated by 3 m of free space. The walls have attenuation coefficients $\alpha_1$ and $\alpha_2$ Np/m. Compute the transmittance at the far side of the second wall.
The total transmittance is the product of the transmittances through each segment.
Wall 1
$T_1 = e^{-0.15\,\alpha_1}$.
Air gap
$T_{\text{air}} = 1$ (free space does not attenuate).
Wall 2
$T_2 = e^{-0.15\,\alpha_2}$. Total: $T = T_1 T_{\text{air}} T_2 = e^{-0.15(\alpha_1 + \alpha_2)}$. In dB: $-10\log_{10} T \approx 0.65\,(\alpha_1 + \alpha_2)$ dB total power attenuation.
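The segment-product computation can be sketched in a few lines. The attenuation coefficients below are illustrative placeholders, since the exercise's own values are not reproduced here; $T$ is treated as a power transmittance, matching the rendering-equation convention used in this chapter.

```python
import math

def wall_transmittance(alphas_np_per_m, thickness_m):
    """Power transmittance through a stack of walls: T = prod(exp(-alpha_i * d)).

    Free-space gaps have alpha = 0 and therefore contribute T = 1."""
    total_np = sum(a * thickness_m for a in alphas_np_per_m)
    T = math.exp(-total_np)
    loss_db = -10 * math.log10(T)   # = 4.343 * total_np
    return T, loss_db

# Illustrative coefficients for the two 15 cm walls:
T, loss_db = wall_transmittance([2.0, 3.0], 0.15)
```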
ex24-02-positional-encoding
Easy: Compute the positional encoding for $x = 0.5$ with $L$ frequency bands. What is the dimension of the encoded vector for a 3D input $\mathbf{x}$?
$\gamma(x) = \big(\sin(2^0\pi x), \cos(2^0\pi x), \ldots, \sin(2^{L-1}\pi x), \cos(2^{L-1}\pi x)\big)$.
Encoding for $x = 0.5$
For $k = 0$: $(\sin(\pi/2), \cos(\pi/2)) = (1, 0)$. For $k = 1$: $(\sin\pi, \cos\pi) = (0, -1)$. For $k \ge 2$: $2^k\pi \cdot 0.5$ is an even multiple of $\pi$, giving $(0, 1)$.
Dimension
Each scalar produces $2L$ components. For a 3D input: dimension $6L$ ($6L + 3$ if the raw coordinates are appended).
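A minimal sketch of the standard encoding defined in the hint; $L = 4$ below is illustrative:

```python
import math

def positional_encoding(x, L):
    """gamma(x) = (sin(2^k * pi * x), cos(2^k * pi * x)) for k = 0 .. L-1."""
    feats = []
    for k in range(L):
        feats.append(math.sin(2**k * math.pi * x))
        feats.append(math.cos(2**k * math.pi * x))
    return feats

enc = positional_encoding(0.5, 4)   # each scalar gives 2L features
dim_3d = 3 * len(enc)               # a 3D point encodes to 6L values
```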
ex24-03-volume-rendering-weights
Easy: For three samples along a ray with alpha values $\alpha_1$, $\alpha_2$, $\alpha_3$, compute the rendering weights $w_i = T_i\alpha_i$ and verify that $\sum_i w_i \le 1$.
$T_1 = 1$; $T_{i+1} = T_i(1 - \alpha_i)$.
Transmittances
$T_1 = 1$, $T_2 = 1 - \alpha_1$, $T_3 = (1 - \alpha_1)(1 - \alpha_2)$.
Weights
$w_1 = \alpha_1$, $w_2 = (1 - \alpha_1)\alpha_2$, $w_3 = (1 - \alpha_1)(1 - \alpha_2)\alpha_3$.
Sum check
$\sum_i w_i = 1 - (1 - \alpha_1)(1 - \alpha_2)(1 - \alpha_3) \le 1$. The remaining $1 - \sum_i w_i$ is the transmittance past the last sample: $T_4 = \prod_{i=1}^{3}(1 - \alpha_i)$.
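The sum check can be verified numerically with front-to-back compositing; the alpha values below are illustrative, since the exercise's own numbers are not reproduced here:

```python
def composite_weights(alphas):
    """Front-to-back compositing: T_1 = 1, w_i = T_i * alpha_i, T_{i+1} = T_i * (1 - alpha_i)."""
    weights, T = [], 1.0
    for a in alphas:
        weights.append(T * a)
        T *= 1.0 - a
    return weights, T   # T is the transmittance past the last sample

# Illustrative alpha values:
w, T_rest = composite_weights([0.3, 0.5, 0.8])
# sum(w) + T_rest == 1 holds for any alphas in [0, 1].
```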
ex24-04-fresnel-number
Easy: Compute the Fresnel number for: (a) a 2 m wide window at 10 m range, $\lambda = 5$ mm (60 GHz); (b) a 10 cm edge at 5 m range, $\lambda = 6$ cm (5 GHz). In which case is ray optics valid?
$N = a^2/(\lambda d)$, with $a$ the aperture half-width. Ray optics is valid when $N \gg 1$.
Case (a)
$a = 1$ m: $N = 1^2/(0.005 \times 10) = 20 \gg 1$. Ray optics is valid.
Case (b)
$a = 0.05$ m: $N = 0.05^2/(0.06 \times 5) \approx 0.008 \ll 1$. Wave optics required --- diffraction dominates.
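Both cases in one short sketch, taking $a$ as the aperture half-width as in the hint:

```python
def fresnel_number(half_width_m, wavelength_m, range_m):
    """N = a^2 / (lambda * d); ray optics is a good approximation when N >> 1."""
    return half_width_m**2 / (wavelength_m * range_m)

N_window = fresnel_number(1.0, 0.005, 10.0)   # (a) 2 m window at 60 GHz
N_edge = fresnel_number(0.05, 0.06, 5.0)      # (b) 10 cm edge at 5 GHz
```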
ex24-05-rf-rendering
Medium: Implement the discrete RF volume rendering equation for $N = 4$ samples along a ray with the following parameters (at carrier frequency $f$):
| $i$ | $t_i$ (m) | $\sigma_i$ (1/m) |
|---|---|---|
| 1 | 2.0 | 0.1 |
| 2 | 3.0 | 5.0 |
| 3 | 3.2 | 5.0 |
| 4 | 5.0 | 0.05 |
Compute the weights $w_i = T_i\alpha_i$ and identify the dominant scatterer.
Step sizes: $\delta_i = t_{i+1} - t_i$; $\alpha_i = 1 - e^{-\sigma_i\delta_i}$.
Most energy comes from the sample with the highest $w_i = T_i\alpha_i$.
Step sizes and alpha values
$\delta_1 = 1.0$, $\delta_2 = 0.2$, $\delta_3 = 1.8$, $\delta_4 = 1.8$ (the last sample has no successor, so the previous spacing is reused). $\alpha_1 = 1 - e^{-0.1} \approx 0.095$, $\alpha_2 = 1 - e^{-1.0} \approx 0.632$, $\alpha_3 = 1 - e^{-9.0} \approx 0.9999$, $\alpha_4 = 1 - e^{-0.09} \approx 0.086$.
Transmittances
$T_1 = 1$, $T_2 = 1 - \alpha_1 \approx 0.905$, $T_3 = T_2(1 - \alpha_2) \approx 0.333$, $T_4 = T_3(1 - \alpha_3) \approx 4 \times 10^{-5}$.
Dominant scatterer
Sample 2: $w_2 = T_2\alpha_2 \approx 0.905 \times 0.632 \approx 0.572$. Sample 3: $w_3 = T_3\alpha_3 \approx 0.333$. Sample 2 (wall front face at 3 m) dominates.
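The full weight computation, using the table's $t_i$ and $\sigma_i$; reusing the previous spacing for the last step is one common convention (the exercise's own choice for $\delta_4$ is not given):

```python
import math

# Samples from the exercise table: t_i in metres, sigma_i in 1/m.
t = [2.0, 3.0, 3.2, 5.0]
sigma = [0.1, 5.0, 5.0, 0.05]

# Step sizes; the last step reuses the previous spacing (assumed convention).
delta = [t[i + 1] - t[i] for i in range(len(t) - 1)]
delta.append(delta[-1])

weights, T = [], 1.0
for s, d in zip(sigma, delta):
    alpha = 1.0 - math.exp(-s * d)   # opacity of one segment
    weights.append(T * alpha)        # w_i = T_i * alpha_i
    T *= math.exp(-s * d)            # transmittance to the next sample

dominant = max(range(len(weights)), key=lambda i: weights[i])   # index 1 = sample 2
```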
ex24-06-nerf2-training
Medium: A NeRF model is trained on 1,000 RSS measurements in an office. The validation RMSE is 6.2 dB after 10k iterations, 5.8 dB after 50k iterations, but 6.5 dB after 100k iterations. Diagnose the problem and propose three fixes.
Validation error increasing while training error decreases indicates overfitting.
Diagnosis
The model overfits beyond roughly 50k iterations, where the validation RMSE bottoms out at 5.8 dB before rising again. With only 1,000 measurements and a large MLP, the model memorises the training data.
Three fixes
(1) Early stopping at 50k iterations. (2) Increase weight decay. (3) Reduce the MLP to 4 layers $\times$ 128 units. Additional options: a hash encoding with a smaller table, data augmentation.
ex24-07-coherent-incoherent
Medium: Two scatterers at ranges $d_1$ and $d_2 = d_1 + 1.5$ cm have equal reflectivities and equal weights $w$. At 5 GHz, compute the rendered power using: (a) coherent RF rendering; (b) incoherent (optical-style) rendering. Explain the difference.
$\lambda = c/f = 6$ cm. Round-trip phase difference: $\Delta\phi = 2k(d_2 - d_1)$.
Phase difference
$k = 2\pi/\lambda \approx 104.7$ rad/m. $\Delta\phi = 2 \times 104.7 \times 0.015 \approx \pi$ rad.
Coherent rendering
$P_{\text{coh}} = \big|w e^{j\phi_1} + w e^{j\phi_2}\big|^2 = w^2\big|1 + e^{j\pi}\big|^2 = 0$. Destructive interference: the two contributions cancel exactly.
Incoherent rendering
$P_{\text{incoh}} = w^2 + w^2 = 2w^2$. The incoherent sum always adds constructively.
Explanation
The 1.5 cm spacing is exactly $\lambda/4$ at 5 GHz, so the round-trip path difference is $\lambda/2$ ($\pi$ radians), causing perfect cancellation in coherent rendering. Ignoring phase (optical-style) misses this physical effect entirely.
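The coherent/incoherent contrast in a short sketch. Only the 1.5 cm spacing matters; the absolute range `d1` is an illustrative placeholder:

```python
import cmath, math

wavelength = 0.06                  # 5 GHz
k = 2 * math.pi / wavelength
d1 = 3.0                           # illustrative absolute range (not given in the exercise)
d2 = d1 + 0.015                    # the 1.5 cm spacing from the exercise
w = 1.0                            # equal weights

# (a) Coherent: sum round-trip phasors, then take the squared magnitude.
P_coh = abs(w * cmath.exp(-2j * k * d1) + w * cmath.exp(-2j * k * d2)) ** 2
# (b) Incoherent: sum per-scatterer powers, ignoring phase.
P_incoh = w**2 + w**2
```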
ex24-08-multipath
Medium: In a rectangular room of width $W$ and depth $H$, a transmitter is at $\mathbf{p}_{tx}$ and a receiver at $\mathbf{p}_{rx}$. Enumerate the first-order (single-bounce) reflection paths off each wall using the image method. For each, compute the path length and additional delay relative to the LOS path.
Image method: reflect the Tx across each wall.
LOS path
$d_{\text{LOS}} = \|\mathbf{p}_{rx} - \mathbf{p}_{tx}\|$. Delay: $\tau = d_{\text{LOS}}/c$ (about 3.3 ns per metre).
Reflections
North wall ($y = H$): image Tx at $(x_{tx},\, 2H - y_{tx})$; extra delay 11.1 ns. South wall ($y = 0$): image Tx at $(x_{tx},\, -y_{tx})$; extra delay 11.1 ns. West wall ($x = 0$): image Tx at $(-x_{tx},\, y_{tx})$; extra delay 6.7 ns. East wall ($x = W$): image Tx at $(2W - x_{tx},\, y_{tx})$; extra delay 6.7 ns. Each reflected path length is the straight-line distance from the image Tx to the Rx; the extra delay is $(d_{\text{refl}} - d_{\text{LOS}})/c$.
Relevance for RF-NeRF
A single-ray NeRF misses all 4 reflections. WiNeRT traces these secondary rays to capture multipath energy.
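The image-method enumeration can be sketched as follows. The room size and Tx/Rx positions are hypothetical, since the exercise's values are not reproduced here:

```python
import math

def first_order_paths(W, H, tx, rx):
    """Image method: reflect the Tx across each wall; the single-bounce path
    length is the straight-line distance from the image to the Rx."""
    x, y = tx
    images = {
        "south": (x, -y),           # wall y = 0
        "north": (x, 2 * H - y),    # wall y = H
        "west": (-x, y),            # wall x = 0
        "east": (2 * W - x, y),     # wall x = W
    }
    return {wall: math.dist(img, rx) for wall, img in images.items()}

# Hypothetical geometry:
paths = first_order_paths(8.0, 5.0, tx=(2.0, 2.5), rx=(6.0, 2.5))
los = math.dist((2.0, 2.5), (6.0, 2.5))
extra_delay_ns = {wall: (d - los) / 3e8 * 1e9 for wall, d in paths.items()}
```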
ex24-09-hash-encoding
Medium: A hash grid encoding uses $L$ resolution levels with hash table size $T$ and feature dimension $F$ per level. Compute: (1) total trainable parameters in the hash tables; (2) memory in MB (float32); (3) compare to an 8-layer, 256-unit MLP.
Each level has $T$ entries of dimension $F$.
Hash table parameters
$L \times T \times F$ parameters.
Memory
$4LTF$ bytes $= 4LTF/2^{20}$ MB (4 bytes per float32 parameter).
MLP comparison
Input: the concatenated hash features ($LF$-dimensional). MLP: roughly $8 \times 256^2 \approx 0.5$M parameters ($\approx 2$ MB). For typical settings (e.g. Instant-NGP's $L = 16$, $T = 2^{19}$, $F = 2$, i.e. $\approx 16.8$M parameters, 64 MB), the hash table is larger in memory but enables much faster convergence.
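Both parameter counts in one sketch. The $L$, $T$, $F$ values are Instant-NGP's defaults used for illustration (the exercise's own values are not given here), and the MLP input dimension is an assumed 32:

```python
def hash_grid_params(levels, table_size, feat_dim):
    """Trainable parameter count and float32 memory (MiB) of a hash grid."""
    n_params = levels * table_size * feat_dim
    return n_params, n_params * 4 / 2**20

def mlp_params(n_layers, width, in_dim, out_dim):
    """Fully connected MLP parameter count (weights + biases)."""
    dims = [in_dim] + [width] * n_layers + [out_dim]
    return sum(d_in * d_out + d_out for d_in, d_out in zip(dims, dims[1:]))

hash_n, hash_mb = hash_grid_params(16, 2**19, 2)    # illustrative L, T, F
mlp_n = mlp_params(8, 256, in_dim=32, out_dim=4)    # assumed input/output dims
```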
ex24-10-db-loss
Medium: Two measurements have true received power $P_1$ dBm (at 2 m range) and $P_2$ dBm (at 50 m range), where $P_1 \gg P_2$ because of path loss. A model predicts $\hat{P}_1$ and $\hat{P}_2$ dBm, each a few dB off. Compute the loss in: (a) linear-domain MSE; (b) dB-domain MSE. Which is more balanced?
Convert dBm to mW: $P_{\text{mW}} = 10^{P_{\text{dBm}}/10}$.
Linear-domain MSE
A fixed dB error corresponds to a linear error proportional to the power itself: $\Delta P_{\text{mW}} \approx \frac{\ln 10}{10} P_{\text{mW}}\,\Delta_{\text{dB}}$. The 2 m measurement carries orders of magnitude more power than the 50 m one, so the linear MSE is completely dominated by measurement 1.
dB-domain MSE
Equal dB errors give equal squared contributions regardless of absolute power level, so both measurements contribute equally to the dB-domain MSE. This is why RSS-trained NeRF models use a dB-domain loss.
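The imbalance is easy to demonstrate numerically. The powers and the common 2 dB prediction error below are hypothetical, since the exercise's values are not reproduced here:

```python
def mse(errors):
    return sum(e * e for e in errors) / len(errors)

# Hypothetical: same 2 dB error at a near (2 m) and a far (50 m) receiver.
true_dbm = [-30.0, -90.0]
pred_dbm = [-28.0, -88.0]

lin_true = [10 ** (p / 10) for p in true_dbm]    # dBm -> mW
lin_pred = [10 ** (p / 10) for p in pred_dbm]

lin_errors = [a - b for a, b in zip(lin_pred, lin_true)]
db_errors = [a - b for a, b in zip(pred_dbm, true_dbm)]

mse_linear = mse(lin_errors)   # dominated by the strong (-30 dBm) measurement
mse_db = mse(db_errors)        # both measurements contribute equally
```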
ex24-11-mip-nerf-cone
Medium: A Mip-NeRF cone has half-angle $\theta$ equal to a typical antenna beamwidth. At distance $d$, compute the cross-sectional radius. Compare with $\lambda = 6$ cm (5 GHz) and discuss the relevance of cone-tracing for RF-NeRF.
Cone radius: $r = d\tan\theta$.
Cone radius
$r = d\tan\theta$; at ranges of several metres and beamwidths of a few degrees, $r$ is tens of centimetres.
Comparison
The cross-section then spans several wavelengths ($r \gg \lambda = 6$ cm). Point sampling misses this spatial averaging.
Relevance
Mip-NeRF's integrated positional encoding naturally models antenna beamwidth averaging, relevant for directional mmWave antennas.
ex24-12-dart-doppler
Hard: A DART model observes a stationary building and a car moving at speed $v$. The radar operates at carrier frequency $f$. For a ray at angle $\theta$ from the velocity vector: (1) compute the Doppler shift; (2) explain how DART separates the car from the building.
$f_D = 2v\cos\theta/\lambda$ (round trip), with $\lambda = c/f$.
Doppler shift
$\lambda = c/f$ (millimetres at automotive radar frequencies). $f_D = 2v\cos\theta/\lambda$, on the order of kHz for road-speed targets.
Separation
The building has $f_D = 0$. DART's velocity MLP assigns $v \neq 0$ at car locations. An incorrect velocity produces a Doppler mismatch in the loss, driving the correct assignment via gradients.
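The round-trip Doppler formula as a one-liner; the speed, angle, and 77 GHz carrier below are hypothetical, since the exercise's values are not reproduced here:

```python
import math

def doppler_shift(v_mps, theta_rad, freq_hz, c=3e8):
    """Round-trip radar Doppler: f_D = 2 * v * cos(theta) / lambda."""
    return 2 * v_mps * math.cos(theta_rad) / (c / freq_hz)

f_building = doppler_shift(0.0, 0.0, 77e9)             # stationary -> 0 Hz
f_car = doppler_shift(15.0, math.radians(30.0), 77e9)  # hypothetical v, theta, f
```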
ex24-13-sar-nerf-resolution
Hard: SAR at carrier frequency $f$, bandwidth $B$, aperture length $L$, and range $R$. (1) Compute range and cross-range resolutions. (2) With 20% aperture sampling, describe the artifacts and how ISAR-NeRF mitigates them.
Sparse aperture produces grating lobes.
Resolutions
Range: $\delta_r = c/(2B)$. Cross-range: $\delta_{cr} = \lambda R/(2L)$.
Sparse artifacts
Grating lobes produce ghost targets. ISAR-NeRF's spectral bias suppresses high-frequency artifacts; TV regulariser further cleans the image. Achieves 5--10 dB sidelobe suppression over back-projection.
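The two resolution formulas in a sketch; the parameter values are hypothetical, since the exercise's numbers are not reproduced here:

```python
def sar_resolution(freq_hz, bandwidth_hz, aperture_m, range_m, c=3e8):
    """Range resolution c/(2B) and cross-range resolution lambda * R / (2L)."""
    wavelength = c / freq_hz
    return c / (2 * bandwidth_hz), wavelength * range_m / (2 * aperture_m)

# Hypothetical parameters: 10 GHz carrier, 500 MHz bandwidth, 2 m aperture, 100 m range.
dr, dcr = sar_resolution(10e9, 500e6, aperture_m=2.0, range_m=100.0)
```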
ex24-14-material-dual-band
Hard: Design a training procedure for material-aware RF-NeRF using dual-band (2.4 and 5.8 GHz) Wi-Fi. Define the loss function and explain why single-band is insufficient.
The frequency-dependent attenuation model has two unknowns per point (e.g. permittivity and conductivity).
Loss function
$\mathcal{L} = \sum_{f \in \{2.4,\, 5.8\}\,\text{GHz}} \sum_{\text{rays}} \big(\hat{P}(f) - P(f)\big)^2 + \lambda_{\text{reg}}\,\mathcal{R}(\text{material parameters})$: the RSS error summed over both bands, with both bands rendered from one shared material field.
Why dual-band
Two unknowns need two equations. A single frequency gives only one constraint per ray, so the two material parameters are not separable.
ex24-15-born-connection
Hard: Derive the Born forward model as a special case of RF volume rendering for isotropic point scatterers. State the three required assumptions.
Set $T(t) \equiv 1$, an isotropic scattering response, and a point-scatterer density.
Three assumptions
(1) $T(t) \equiv 1$ (weak scattering: no attenuation along the ray). (2) The scattering response is isotropic (no view dependence). (3) Single scattering only (each echo interacts with the scene once).
Derivation
With these assumptions the coherent rendering integral reduces to a linear superposition of point-scatterer echoes, $s \propto \int \sigma(\mathbf{x})\, e^{-j2k\|\mathbf{x} - \mathbf{r}\|}\,\mathrm{d}\mathbf{x}$ (monostatic case), which is the Born model with the density $\sigma$ playing the role of the scattering potential.
ex24-16-spectral-bias
Hard: Explain why an MLP without positional encoding fails to learn a high-frequency target on $[0, 1]$. Show that $L = 8$ resolves the issue while $L = 3$ does not.
The maximum encoded frequency with $L$ bands is $2^{L-1}$ cycles over the unit interval.
Spectral bias
MLPs learn low-frequency components first (Rahaman et al., 2019). The target oscillates far faster than the smooth functions a raw-coordinate MLP fits early in training, placing it beyond the MLP's effective bandwidth.
$L = 8$
Maximum encoded frequency $2^{8-1} = 128$ cycles. The MLP can express the target as a low-frequency function of the $\sin(2^k\pi x)$, $\cos(2^k\pi x)$ encoding features.
$L = 3$
Maximum encoded frequency $2^{3-1} = 4$ cycles. Severe under-resolution for a rapidly oscillating target.
ex24-17-winert-paths
Medium: In WiNeRT, compute $|H|^2$ for a LOS path (length $d_1$, amplitude $a_1$, no reflection) and one reflected path (length $d_2$, amplitude $a_2$, reflection coefficient $\Gamma$) at carrier frequency $f$.
Compute each phasor, sum, then take magnitude squared.
Phase terms
$k = 2\pi f/c$ rad/m. LOS phasor: $a_1 e^{-jkd_1}$. Reflected phasor: $a_2\Gamma e^{-jkd_2}$.
Power estimate
Over random phase realisations, the expected power is the sum of the path powers: $\mathbb{E}|H|^2 = a_1^2 + a_2^2|\Gamma|^2$. At the specific geometry, the phasor sum determines whether the reflected path adds or subtracts power relative to LOS alone.
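The phasor-sum computation in a sketch; the path lengths, amplitudes, and $\Gamma$ below are hypothetical, since the exercise's values are not reproduced here:

```python
import cmath, math

def channel_gain(paths, freq_hz, c=3e8):
    """Sum complex path phasors a * Gamma * exp(-j k d) and return |H|^2.

    Each path is a tuple (amplitude, reflection_coeff, length_m)."""
    k = 2 * math.pi * freq_hz / c
    H = sum(a * gamma * cmath.exp(-1j * k * d) for a, gamma, d in paths)
    return abs(H) ** 2

# Hypothetical paths at a hypothetical 5.3 GHz carrier:
paths = [
    (1.0, 1.0, 5.0),    # LOS: Gamma = 1
    (0.5, -0.7, 7.2),   # one wall bounce with Gamma = -0.7
]
power = channel_gain(paths, 5.3e9)
```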
ex24-18-full-pipeline
Challenge: Design a complete RF-NeRF pipeline for indoor wall reconstruction from Wi-Fi CSI. Specify measurement setup, architecture, training, and evaluation. Target: wall-position error on the order of centimetres.
80 MHz Wi-Fi CSI gives a range resolution of $c/(2B) \approx 1.9$ m.
Setup
6 APs $\times$ 2,000 Rx locations = 12,000 CSI measurements, at 80 MHz bandwidth.
Architecture
Hash grid ($L$ levels, table size $T$, $F$ features per level) + geometry MLP (4 layers $\times$ 128 units) + signal MLP (3 layers $\times$ 64 units). 17M parameters total, dominated by the hash tables.
Training
Loss: MSE between predicted and measured CSI. Adam with a cosine learning-rate schedule. 100k iterations, about 30 min on an A100.
Evaluation
Threshold the learned density field, then skeletonise. Metrics: wall RMSE, IoU, Hausdorff distance. Baselines: ray-tracing inversion, back-projection, RSS-only NeRF.
ex24-19-foundation-model
Challenge: Propose a transfer-learning strategy that pre-trains on 50 indoor environments and fine-tunes on a new one with only 100 RSS measurements. Define the pre-training objective, architecture modifications, and expected performance.
Shared geometry backbone + per-scene signal heads.
Pre-training
Conditional hash encoding with a per-scene embedding $z_s$. Pre-train on all 50 environments jointly with an RSS MSE loss.
Architecture
Shared: hash encoder + 4-layer geometry MLP (17M params). Per-scene: embedding $z_s$ + 2-layer signal MLP (30k params).
Fine-tuning
Freeze the backbone; learn a new $z_s$ + signal MLP. Expected: about 5 dB RMSE after 5k iterations (vs 8 dB from scratch).
ex24-20-born-vs-nerf
Challenge: A 30 cm concrete wall (attenuation coefficient $\alpha$ Np/m) at range $d_1$ shadows a metal reflector ($|\Gamma| = 1$) at range $d_2 > d_1$. At carrier frequency $f$, compare Born (no attenuation) vs full RF rendering. Quantify Born's error.
Compute the round-trip transmittance through the wall.
Born prediction
Born ignores attenuation: the metal reflector contributes at full strength.
RF rendering
One-way wall transmittance: $T = e^{-0.3\alpha}$. Two-way: $T^2 = e^{-0.6\alpha}$. The metal contribution is attenuated by $10\log_{10}(e^{0.6\alpha}) \approx 2.6\,\alpha$ dB compared to Born.
Error
Born overestimates the metal reflector's contribution by $\approx 2.6\,\alpha$ dB. Born is acceptable only for free-space scenes without intervening attenuating material.
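The round-trip attenuation that Born ignores, as a sketch. The concrete attenuation coefficient below is an illustrative placeholder, since the exercise's $\alpha$ is not reproduced here; power-transmittance conventions are assumed, as elsewhere in these solutions.

```python
import math

def born_error_db(alpha_np_per_m, wall_thickness_m):
    """Round-trip wall attenuation (dB) missing from the Born model.

    Round-trip attenuation is 2 * alpha * d nepers; 1 Np of power loss = 4.343 dB."""
    round_trip_np = 2 * alpha_np_per_m * wall_thickness_m
    return 10 * math.log10(math.e) * round_trip_np

# Illustrative concrete attenuation:
err_db = born_error_db(alpha_np_per_m=8.0, wall_thickness_m=0.3)
```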