RF-NeRF Variants
Beyond NeRF²: Specialised RF Architectures
NeRF² demonstrated that neural radiance fields can model RF propagation, but it left several gaps: no multipath, no explicit material properties, outputs limited to RSS, and slow training. A burst of follow-up work in 2023--2024 addressed these gaps by specialising the architecture for specific RF modalities. This section surveys six important variants.
Definition: WiNeRT: Wireless Neural Ray Tracing
WiNeRT (Orekondy et al., 2023) extends NeRF for wireless by incorporating multi-bounce ray tracing within the neural rendering framework:
- Primary rays are cast from Tx to Rx (as in NeRF²).
- Reflection rays are generated at high-density surfaces using learned normal vectors, estimated from the density gradient: $\hat{\mathbf{n}}(\mathbf{x}) = -\nabla\sigma(\mathbf{x}) / \lVert \nabla\sigma(\mathbf{x}) \rVert$.
- Diffraction rays are added at edges detected by density gradients.
The total received signal is:
$$S_{\mathrm{rx}} = \sum_{p \in \mathcal{P}} w_p \, \frac{e^{-jk\ell_p}}{\ell_p},$$
where $\mathcal{P}$ is the set of traced paths, $\ell_p$ the path length, $w_p$ a learned path weight, and $k = 2\pi/\lambda$ the wavenumber.
WiNeRT achieves 2--3 dB lower RSS prediction error than NeRF² in multipath-rich environments, but substantially increases training time due to the multi-bounce ray tracing.
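The path sum above can be sketched in a few lines; a minimal example with hypothetical path lengths and learned weights (values invented for illustration, not from the paper):

```python
import numpy as np

def received_signal(path_lengths, path_weights, wavelength=0.06):
    """Coherent sum of traced-path contributions: each path p adds a
    learned weight w_p times the phase/spreading factor exp(-j k l_p) / l_p."""
    k = 2.0 * np.pi / wavelength                   # wavenumber
    l = np.asarray(path_lengths, dtype=float)
    w = np.asarray(path_weights, dtype=complex)
    return np.sum(w * np.exp(-1j * k * l) / l)

# Hypothetical LOS path plus one reflection at 5 GHz (wavelength ~ 6 cm)
s = received_signal([4.0, 6.5], [1.0, 0.4])
rss_db = 20.0 * np.log10(np.abs(s))                # relative RSS in dB
```

Because the paths are summed coherently, small changes in path length shift the relative phase and can move the RSS by several dB, which is exactly the multipath fading a single-ray model cannot capture.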
Definition: R-NeRF: RIS-Enabled RF Neural Fields
R-NeRF extends the RF-NeRF framework to environments with reconfigurable intelligent surfaces (RIS). The RIS is modelled as a controllable reflecting layer with element-wise phase shifts $\phi_n$:
$$h = \sum_{n} G(\mathbf{x}_{\mathrm{rx}}, \mathbf{x}_n)\, e^{j\phi_n}\, G(\mathbf{x}_n, \mathbf{x}_{\mathrm{tx}}),$$
where $G(\cdot,\cdot)$ denotes the free-space Green's function and $\mathbf{x}_n$ the position of the $n$-th RIS element. The RIS phase configuration $\{\phi_n\}$ becomes a controllable input to the neural field, enabling joint scene reconstruction and RIS optimisation.
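The cascaded RIS channel can be sketched directly; a toy example with a hypothetical 4-element RIS (geometry invented), showing that co-phasing the elements can only increase the received magnitude:

```python
import numpy as np

def greens(x1, x2, wavelength=0.06):
    """Scalar free-space Green's function between two points."""
    r = np.linalg.norm(np.asarray(x1) - np.asarray(x2))
    k = 2.0 * np.pi / wavelength
    return np.exp(-1j * k * r) / (4.0 * np.pi * r)

def ris_channel(tx, rx, ris_elements, phases, wavelength=0.06):
    """Cascaded Tx -> RIS element -> Rx channel with per-element phase shifts."""
    h = 0j
    for xn, phi in zip(ris_elements, phases):
        h += greens(rx, xn, wavelength) * np.exp(1j * phi) * greens(xn, tx, wavelength)
    return h

tx, rx = np.array([0.0, 0.0, 1.0]), np.array([5.0, 0.0, 1.0])
ris = [np.array([2.5, 2.0, 1.0 + 0.05 * i]) for i in range(4)]

zero = ris_channel(tx, rx, ris, np.zeros(4))                 # unconfigured RIS
opt = [-np.angle(greens(rx, xn) * greens(xn, tx)) for xn in ris]
best = ris_channel(tx, rx, ris, opt)                         # co-phased RIS
```

Co-phasing aligns every cascaded path at the receiver, so `abs(best)` is the sum of the individual path magnitudes; making the phases differentiable inputs is what lets R-NeRF optimise them jointly with the scene.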
Definition: VoxelRF: Voxel-Based Acceleration
VoxelRF replaces the MLP with a dense or sparse voxel grid storing density and features at each voxel. For a query point $\mathbf{x}$, trilinear interpolation retrieves the local density and feature, which a small MLP (1--2 layers) decodes into the signal contribution.
Advantage: Eliminating the deep MLP evaluation at each sample reduces inference time substantially, typically by one or more orders of magnitude.
Disadvantage: Memory scales as $O(N^3)$ in the per-axis grid resolution $N$. At 5 cm resolution, even a modest indoor scene contains millions of voxels, requiring pruning or sparse storage.
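The trilinear lookup at the heart of VoxelRF's fast query path is simple enough to show in full; a minimal sketch with arbitrary placeholder grid values:

```python
import numpy as np

def trilinear(grid, p):
    """Trilinearly interpolate a dense voxel grid at continuous point p.

    grid: (Nx, Ny, Nz) array of stored densities/features
    p:    point in voxel coordinates, 0 <= p_i <= N_i - 1
    """
    i = np.floor(p).astype(int)
    i = np.minimum(i, np.array(grid.shape) - 2)    # clamp to the last full cell
    f = p - i                                      # fractional offsets in the cell
    acc = 0.0
    for dx in (0, 1):                              # blend the 8 cell corners
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((f[0] if dx else 1 - f[0]) *
                     (f[1] if dy else 1 - f[1]) *
                     (f[2] if dz else 1 - f[2]))
                acc += w * grid[i[0] + dx, i[1] + dy, i[2] + dz]
    return acc

grid = np.arange(8, dtype=float).reshape(2, 2, 2)  # placeholder densities
center = trilinear(grid, np.array([0.5, 0.5, 0.5]))  # mean of the 8 corners
corner = trilinear(grid, np.array([0.0, 0.0, 0.0]))  # exact grid value
```

Each query touches only 8 stored values plus a tiny decoder MLP, which is why removing the deep per-sample MLP pays off so heavily at inference time.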
Definition: NeRF-APT: NeRF for Access Point Localisation
NeRF-APT inverts the typical NeRF workflow: instead of predicting RSS at known Rx locations given known Tx positions, it localises unknown Tx positions from RSS measurements at known Rx locations.
The approach treats the Tx position $\mathbf{x}_{\mathrm{tx}}$ as a learnable parameter and jointly optimises:
$$\min_{\theta,\, \mathbf{x}_{\mathrm{tx}}} \sum_{i} \left\| \hat{S}_\theta(\mathbf{x}_{\mathrm{rx},i};\, \mathbf{x}_{\mathrm{tx}}) - S_i \right\|^2,$$
where $\theta$ are the scene-network weights and $S_i$ the RSS measured at receiver $i$.
The scene representation and Tx positions are learned simultaneously, analogous to how COLMAP jointly estimates camera poses and scene structure in optical NeRF.
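The inversion idea can be illustrated with a toy example; a sketch that substitutes a simple log-distance RSS model for the trained neural field and descends the squared RSS error over the Tx position (all names, geometry, and values hypothetical):

```python
import numpy as np

def rss_model(tx, rx_points, p0=-30.0, n=2.0):
    """Log-distance RSS model standing in for the trained neural field."""
    d = np.linalg.norm(rx_points - tx, axis=1)
    return p0 - 10.0 * n * np.log10(d)

def localise_tx(rx_points, rss_meas, tx0, lr=0.005, steps=4000):
    """NeRF-APT-style inversion: treat the Tx position as a learnable
    parameter and minimise squared RSS error by (numerical) gradient descent."""
    tx = np.array(tx0, dtype=float)
    loss = lambda t: np.mean((rss_model(t, rx_points) - rss_meas) ** 2)
    for _ in range(steps):
        g = np.zeros(3)
        for k in range(3):                       # central-difference gradient
            e = np.zeros(3)
            e[k] = 1e-5
            g[k] = (loss(tx + e) - loss(tx - e)) / 2e-5
        tx -= lr * g
    return tx

rng = np.random.default_rng(0)
rx = rng.uniform(0.0, 10.0, size=(20, 3))        # known receiver positions
true_tx = np.array([4.0, 6.0, 1.5])              # unknown access point
meas = rss_model(true_tx, rx)                    # noiseless RSS measurements
est = localise_tx(rx, meas, tx0=[5.0, 5.0, 1.0])
```

In the real method the gradient flows through the neural field itself rather than a closed-form path-loss model, but the structure of the joint optimisation is the same.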
Definition: DART: Doppler-Aided Radiance Transfer
DART (Huang et al., 2024) extends NeRF for radar by incorporating the Doppler dimension:
- Input: Radar returns in range--angle--Doppler space, from multiple viewpoints.
- Scene MLP: Predicts radar cross-section $\sigma(\mathbf{x})$ and velocity $\mathbf{v}(\mathbf{x})$ at each 3D point.
- Doppler rendering: The Doppler shift at point $\mathbf{x}$ along ray direction $\mathbf{d}$ is:
$$f_D(\mathbf{x}) = \frac{2\, \mathbf{v}(\mathbf{x}) \cdot \mathbf{d}}{\lambda},$$
and the rendered radar cube accumulates each sample's transmittance-weighted cross-section into the Doppler bin containing $f_D(\mathbf{x}_i)$:
$$C(r, \theta, f_D) = \sum_{i} T_i\, \sigma(\mathbf{x}_i)\, \mathbf{1}\!\left[f_D(\mathbf{x}_i) \in \mathrm{bin}(f_D)\right].$$
DART is particularly suited for automotive radar, where targets have distinct velocities. The Doppler dimension provides discrimination absent in communication-focused RF-NeRF variants.
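The Doppler-shift formula is straightforward to evaluate; a sketch with illustrative automotive-radar numbers (77 GHz band and bin layout assumed, not from the paper):

```python
import numpy as np

def doppler_shift(velocity, ray_dir, wavelength=0.004):
    """Two-way Doppler shift f_D = 2 (v . d) / lambda along unit ray d
    (77 GHz automotive radar -> wavelength about 4 mm)."""
    d = np.asarray(ray_dir, dtype=float)
    d = d / np.linalg.norm(d)
    return 2.0 * float(np.dot(velocity, d)) / wavelength

# A point moving at 10 m/s along the ray direction
fd = doppler_shift(np.array([10.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]))

# When rendering the cube, each sample's contribution lands in a Doppler bin
edges = np.linspace(-10e3, 10e3, 65)       # 64 Doppler bins, +/- 10 kHz
bin_idx = int(np.digitize(fd, edges))
```

A 10 m/s radial velocity maps to a shift of a few kHz at 77 GHz, comfortably resolvable across the Doppler bins, which is what gives DART its target discrimination.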
Definition: ISAR-NeRF: Neural Fields for Synthetic Aperture Radar
ISAR-NeRF adapts neural fields for coherent SAR / inverse SAR imaging. The forward model maps the 3D scene reflectivity $\rho(\mathbf{x})$ to the measured phase history:
$$s(u) = \int \rho(\mathbf{x})\, e^{-j \frac{4\pi}{\lambda} R(\mathbf{x}, u)}\, d\mathbf{x},$$
where $u$ is the aperture coordinate and $R(\mathbf{x}, u)$ the range from aperture position $u$ to $\mathbf{x}$. ISAR-NeRF parameterises $\rho$ with an MLP $\rho_\theta$ and minimises:
$$\mathcal{L}(\theta) = \sum_{u} \left| s_\theta(u) - s_{\mathrm{meas}}(u) \right|^2.$$
The MLP's spectral bias acts as an implicit regulariser, suppressing grating-lobe artifacts in sparse-aperture regimes.
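The forward model discretises naturally for point scatterers; a toy sketch (geometry and reflectivities invented for illustration) showing that the phase-history data-fit loss vanishes only at the true reflectivities:

```python
import numpy as np

def phase_history(scatterers, amplitudes, aperture, wavelength=0.03):
    """Discretised ISAR forward model: s(u) = sum_k rho_k exp(-j 4 pi R_k(u) / lambda),
    where R_k(u) is the range from aperture position u to scatterer k
    (the 4 pi / lambda factor accounts for the two-way path)."""
    s = np.zeros(len(aperture), dtype=complex)
    for rho, x in zip(amplitudes, scatterers):
        R = np.linalg.norm(aperture - x, axis=1)   # range per pulse
        s += rho * np.exp(-1j * 4.0 * np.pi * R / wavelength)
    return s

# Toy scene: two point scatterers seen from a short linear aperture
aperture = np.stack([np.linspace(-1.0, 1.0, 64), np.full(64, -50.0)], axis=1)
scat = np.array([[0.0, 0.0], [0.5, 0.2]])
meas = phase_history(scat, [1.0, 0.7], aperture)

# The data-fit loss is zero at the true reflectivities, positive elsewhere
loss_true = float(np.sum(np.abs(phase_history(scat, [1.0, 0.7], aperture) - meas) ** 2))
loss_wrong = float(np.sum(np.abs(phase_history(scat, [1.0, 0.2], aperture) - meas) ** 2))
```

ISAR-NeRF replaces the explicit scatterer list with an MLP $\rho_\theta$ and backpropagates this same loss through the differentiable forward model.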
Theorem: Material-Aware RF-NeRF
By parameterising the attenuation as $\alpha(\mathbf{x}, f) = \alpha_0(\mathbf{x}) + \alpha_1(\mathbf{x})\, f$, the learned coefficients $(\alpha_0, \alpha_1)$ can be interpreted as material properties. Define the complex permittivity map:
$$\epsilon(\mathbf{x}) = \epsilon'(\mathbf{x}) - j\,\epsilon''(\mathbf{x}),$$
where $\epsilon''(\mathbf{x})$ is related to $\alpha(\mathbf{x}, f)$ via the plane-wave attenuation formula. If the learned attenuation satisfies $\alpha(\mathbf{x}, f) \ge 0$ for all $f$ in the training bandwidth, then $\epsilon''(\mathbf{x}) \ge 0$ and the map corresponds to a physically realisable (passive) material.
Linearised attenuation in lossy media
For a plane wave in a lossy dielectric with complex permittivity $\epsilon = \epsilon' - j\epsilon''$, the attenuation constant is
$$\alpha = \omega \sqrt{\frac{\mu \epsilon'}{2}} \left[ \sqrt{1 + \left(\frac{\epsilon''}{\epsilon'}\right)^2} - 1 \right]^{1/2}.$$
For low-loss materials ($\epsilon''/\epsilon' \ll 1$):
$$\alpha \approx \frac{\omega \epsilon''}{2} \sqrt{\frac{\mu}{\epsilon'}} = \pi f\, \epsilon'' \sqrt{\frac{\mu}{\epsilon'}},$$
which is linear in $f$, matching the $\alpha_1(\mathbf{x})\, f$ term.
Passivity constraint
Non-negative attenuation $\alpha(\mathbf{x}, f) \ge 0$ implies $\epsilon''(\mathbf{x}) \ge 0$ (passive material). A regularisation term of the form $\lambda \sum_{\mathbf{x}} \max\bigl(0, -\alpha(\mathbf{x}, f)\bigr)^2$ in the training loss enforces this physics constraint.
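The passivity regulariser can be sketched as a hinge penalty evaluated over the training band (the exact functional form used in training is our assumption):

```python
import numpy as np

def attenuation(alpha0, alpha1, f_ghz):
    """Linear frequency model: alpha(f) = alpha0 + alpha1 * f (Np/m, f in GHz)."""
    return alpha0 + alpha1 * np.asarray(f_ghz, dtype=float)

def passivity_penalty(alpha0, alpha1, f_grid, lam=1.0):
    """Hinge penalty: nonzero only where attenuation goes negative in-band."""
    a = attenuation(alpha0, alpha1, f_grid)
    return lam * float(np.sum(np.maximum(0.0, -a) ** 2))

f = np.linspace(2.0, 7.0, 11)             # training bandwidth (GHz)
ok = passivity_penalty(0.1, 0.005, f)     # passive material: zero penalty
bad = passivity_penalty(-0.2, 0.01, f)    # negative attenuation: penalised
```

Because the penalty is zero on the feasible set, it steers the learned coefficients toward passive materials without biasing already-physical solutions.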
Example: Material Classification from Learned Attenuation
An RF-NeRF trained on 5 GHz Wi-Fi data learns the attenuation coefficients $(\alpha_0, \alpha_1)$ at three locations. Classify the materials.
| Location | $\alpha_0$ (Np/m) | $\alpha_1$ (Np/m/GHz) |
|---|---|---|
| A | 0.1 | 0.005 |
| B | 2.5 | 0.15 |
| C | 0.8 | 0.04 |
Location A (low attenuation)
At 5 GHz, $\alpha = 0.1 + 0.005 \times 5 = 0.125$ Np/m. Consistent with drywall or thin plasterboard.
Location B (high attenuation)
At 5 GHz, $\alpha = 2.5 + 0.15 \times 5 = 3.25$ Np/m. Consistent with reinforced concrete or a metal-backed partition.
Location C (moderate attenuation)
At 5 GHz, $\alpha = 0.8 + 0.04 \times 5 = 1.0$ Np/m. Consistent with glass or wooden furniture.
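The arithmetic above can be scripted; a small sketch in which the classification thresholds are our own illustrative choice, not from any standard:

```python
def alpha_at(alpha0, alpha1, f_ghz=5.0):
    """Total attenuation alpha0 + alpha1 * f at the Wi-Fi carrier (Np/m)."""
    return alpha0 + alpha1 * f_ghz

def classify(a):
    """Illustrative thresholds (our assumption, chosen to match the example)."""
    if a < 0.5:
        return "light partition (e.g. drywall)"
    elif a < 2.0:
        return "moderate loss (e.g. glass/wood)"
    return "heavy loss (e.g. concrete/metal-backed)"

# Learned coefficients from the example table
locations = {"A": (0.1, 0.005), "B": (2.5, 0.15), "C": (0.8, 0.04)}
results = {name: classify(alpha_at(a0, a1)) for name, (a0, a1) in locations.items()}
# A -> 0.125 Np/m, B -> 3.25 Np/m, C -> 1.0 Np/m
```

In practice one would compare the learned $(\alpha_0, \alpha_1)$ against tabulated material properties rather than fixed thresholds, but the lookup structure is the same.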
RF-NeRF Variant Comparison
| Method | Input Data | Multipath | Special Feature |
|---|---|---|---|
| NeRF² | RSS | No (single ray) | Foundational method |
| WiNeRT | RSS/CSI | Yes (multi-bounce) | Differentiable ray tracing |
| R-NeRF | RSS | Partial | RIS-aware rendering |
| VoxelRF | RSS/CSI | No | Fast inference |
| NeRF-APT | RSS | No | Joint Tx localisation |
| DART | Range--Doppler cubes | Learned correction | RCS + velocity fields |
| ISAR-NeRF | Phase history | Not modelled | Implicit regularisation |
Hash Grid Encoding for Faster RF-NeRF Training
Instant-NGP's multi-resolution hash encoding, when applied to RF-NeRF, provides three specific benefits:
- Large spatial extent: RF scenes (rooms, buildings) span meters to tens of meters, requiring many frequency bands in positional encoding. Hash encoding handles large scenes natively.
- Tolerable collisions: RF propagation is smoother than optical radiance (fewer high-frequency details), so hash collisions are less damaging.
- Memory efficiency: Storage scales as $O(T)$, where $T$ is the hash table size, independent of scene volume.
Typical speedup: an order of magnitude or more compared to MLP-based positional encoding, with comparable or better accuracy.
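A minimal (untrained) sketch of the multi-resolution hash lookup, assuming Instant-NGP-style XOR spatial hashing with random feature tables standing in for learned ones:

```python
import numpy as np

PRIMES = (1, 2654435761, 805459861)   # spatial-hash primes (Instant-NGP style)

def hash_encode(p, levels=4, base_res=16, growth=1.5,
                table_size=2 ** 14, feat_dim=2, seed=0):
    """Multi-resolution hash encoding sketch: at each level, hash the 8
    corners of the voxel containing p into a small feature table and
    trilinearly blend the looked-up features."""
    rng = np.random.default_rng(seed)
    tables = rng.normal(0.0, 0.01, (levels, table_size, feat_dim))
    feats = []
    for lvl in range(levels):
        res = int(base_res * growth ** lvl)        # finer grid per level
        x = np.asarray(p, dtype=float) * res
        i0 = np.floor(x).astype(int)
        f = x - i0                                 # position within the cell
        acc = np.zeros(feat_dim)
        for corner in range(8):                    # 8 cell corners
            w, h = 1.0, 0
            for k in range(3):
                b = (corner >> k) & 1
                w *= f[k] if b else 1.0 - f[k]     # trilinear weight
                h ^= (i0[k] + b) * PRIMES[k]       # XOR spatial hash
            acc += w * tables[lvl, h % table_size]
        feats.append(acc)
    return np.concatenate(feats)                   # levels * feat_dim features

enc = hash_encode(np.array([0.3, 0.7, 0.1]))       # 4 levels x 2 features
```

In training, the table entries are optimised by gradient descent alongside the decoder MLP; collisions between distant voxels are resolved implicitly, which RF scenes tolerate well given their spatial smoothness.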
Historical Note: The Rapid Evolution of RF-NeRF (2023-2025)
The adaptation of NeRF for RF propagation happened remarkably quickly. NeRF² appeared in early 2023, WiNeRT later that year, and by 2024 a dozen specialised variants existed for channel modelling, localisation, radar, and SAR. This speed mirrors the optical NeRF explosion of 2020--2022 and reflects the machine learning community's ability to rapidly port successful ideas across domains.
The key enabler was the realisation that volume rendering --- the mathematical core of NeRF --- is agnostic to the physical quantity being rendered. Replacing colour with complex RF reflectivity is a change to the output layer and loss function, not to the fundamental architecture.
Common Mistake: Ignoring Multipath in Dense Indoor Environments
Mistake:
Using a single-ray RF-NeRF (NeRF² or VoxelRF) for indoor environments with significant non-line-of-sight propagation.
Correction:
In indoor environments, multipath contributes 30--50% of received power. For NLOS scenarios, use multi-bounce methods like WiNeRT or add a learned multipath correction network. Alternatively, accept the accuracy trade-off and use NeRF² only for LOS-dominated scenarios (outdoor urban, corridors).
Quick Check
What additional physical quantity does DART's scene MLP predict compared to NeRF²?
Surface normal vectors
Object velocity
Material permittivity
Temperature
Correct. DART predicts both radar cross-section and velocity at each point, enabling Doppler rendering for automotive radar.
Key Takeaway
The RF-NeRF ecosystem has diversified rapidly: WiNeRT captures multipath via multi-bounce ray tracing; R-NeRF incorporates controllable RIS elements; DART adds Doppler for radar; ISAR-NeRF enables sparse-aperture SAR reconstruction. The common thread is differentiable volume rendering with physics-specific output layers (complex reflectivity, velocity, material properties). Hash grid encoding provides a substantial training speedup across all variants.