Volume Rendering Adapted for RF
The RF Rendering Equation
We now derive the complete RF volume rendering equation from first principles, connecting the neural NeRF framework to the physics-based forward model of Chapters 5--6. The central question is: under what conditions does neural volumetric rendering agree with the Born-approximation forward model?
The answer reveals both the power and the limitations of the NeRF approach for RF imaging.
Theorem: RF Volume Rendering Equation
For a monochromatic RF signal at frequency $f$, the received complex baseband signal along a ray from the transmitter at the origin is:
$$ s(f) = \int_0^\infty T(t, f)\,\sigma(t)\,\rho(t, \mathbf{d}, f)\,e^{-j2kt}\,dt $$
where $\rho(t, \mathbf{d}, f)$ is the complex reflectivity, the factor $e^{-j2kt}$ captures round-trip propagation phase ($k = 2\pi f/c$), and the RF transmittance is:
$$ T(t, f) = \exp\!\left(-\int_0^t \alpha(s, f)\,ds\right) $$
with $\alpha(s, f)$ the frequency-dependent attenuation coefficient.
Each point along the ray contributes a complex phasor to the received signal. The transmittance $T$ accounts for cumulative absorption (energy lost traversing walls). The reflectivity $\rho$ determines how much, and at what phase, the material scatters. The exponential $e^{-j2kt}$ carries the round-trip propagation delay.
Physics derivation
At each point $t$, the incident wave is: (1) attenuated by $T(t, f)$ (absorption), (2) scattered with complex coefficient $\rho(t)$, and (3) phase-shifted by the round-trip path delay $2t/c$. Integrating along the ray and enforcing causality (the transmittance accounts for attenuation of all preceding segments) yields the result.
Discrete approximation
With samples $\{t_i\}_{i=1}^{N}$ and spacings $\delta_i$:
$$ \hat{s}(f) = \sum_{i=1}^{N} T_i\,\alpha_i\,\rho_i\,e^{-j2kt_i} $$
where $\alpha_i = 1 - e^{-\sigma_i \delta_i}$ and $T_i = \prod_{j<i}(1 - \alpha_j)$. This is differentiable with respect to all MLP parameters.
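The discrete sum can be sketched in a few lines of NumPy. The function name and argument layout below are illustrative, not from any particular codebase:

```python
import numpy as np

def render_rf(sigma, rho, delta, t, f, c=3e8):
    """Discrete RF volume rendering along one ray (illustrative sketch).

    sigma : (N,) densities [1/m]
    rho   : (N,) complex reflectivities
    delta : (N,) sample spacings [m]
    t     : (N,) sample distances from the transmitter [m]
    f     : carrier frequency [Hz]
    """
    k = 2 * np.pi * f / c                 # wavenumber
    alpha = 1.0 - np.exp(-sigma * delta)  # per-sample opacity
    # Transmittance: product of (1 - alpha_j) over all preceding samples
    T = np.concatenate(([1.0], np.cumprod(1.0 - alpha)[:-1]))
    phase = np.exp(-1j * 2 * k * t)       # round-trip propagation phase
    return np.sum(T * alpha * rho * phase)
```

Because every operation is a differentiable NumPy primitive, the same structure transfers directly to an autodiff framework for training.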
Definition: Frequency-Dependent RF Transmittance
Frequency-Dependent RF Transmittance
Unlike optical transmittance, the RF transmittance depends on frequency:
$$ T(t, f) = \exp\!\left(-\int_0^t \alpha(s, f)\,ds\right) $$
For a material with linear frequency dependence $\alpha(s, f) = a(s)\,f$, the transmittance factors as:
$$ T(t, f) = \exp\!\left(-f \int_0^t a(s)\,ds\right) = T(t, f_0)^{f/f_0} $$
This factorisation is useful for multi-frequency training: the geometry MLP learns the frequency-independent profile $a(s)$ (frequency-independent loss), while the signal MLP learns the frequency-dependent dispersion in $\rho$.
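The factorisation can be checked numerically. The segment profile `a` and lengths below are arbitrary assumed values for illustration:

```python
import numpy as np

# Assumed linear frequency dependence alpha(s, f) = a(s) * f; then
# T(t, f) = exp(-f * integral of a) = T(t, f0) ** (f / f0).
a = np.array([0.0, 2e-10, 2e-10, 0.0])   # hypothetical a(s) per segment [1/(m*Hz)]
delta = np.array([1.0, 0.2, 0.2, 1.0])   # segment lengths [m]

def transmittance(f):
    """Transmittance through all segments at frequency f [Hz]."""
    return np.exp(-f * np.sum(a * delta))

f0 = 2.4e9
T0 = transmittance(f0)                   # evaluate the integral once...
T_5GHz = T0 ** (5.0e9 / f0)              # ...and rescale to any other frequency
```

The geometry integral is evaluated once at the reference frequency; every other subcarrier's transmittance follows by exponentiation.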
Definition: View-Dependent RF Reflectivity
View-Dependent RF Reflectivity
The complex reflectivity $\rho(\mathbf{x}, \mathbf{d}, f)$ depends on position, viewing direction, and frequency. For a planar surface with normal $\hat{\mathbf{n}}$ at $\mathbf{x}$, the Fresnel reflection coefficient gives:
$$ \rho(\mathbf{x}, \mathbf{d}, f) = \Gamma(\theta, f)\,\delta(\mathbf{d} - \mathbf{d}_s) $$
where $\theta$ is the incidence angle, $\mathbf{d}_s$ is the specular reflection direction, and $\Gamma(\theta, f)$ is the Fresnel coefficient.
In the neural field, the MLP implicitly learns an approximation to this directional dependence. The stronger the specular component, the more capacity the MLP must allocate to directional features.
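As a concrete instance of the directional dependence the MLP must capture, here is the standard TE (perpendicular-polarisation) Fresnel coefficient for an air-to-dielectric interface; the permittivity value is an assumed, concrete-like number, not from the text:

```python
import numpy as np

def fresnel_te(theta, eps_r):
    """TE (perpendicular) Fresnel reflection coefficient, air -> dielectric.

    theta : incidence angle [rad]; eps_r : relative permittivity (may be complex).
    """
    root = np.sqrt(eps_r - np.sin(theta) ** 2)
    return (np.cos(theta) - root) / (np.cos(theta) + root)

# Assumed eps_r ~ 5.3 (roughly concrete-like): reflection strengthens
# monotonically toward grazing incidence, so the directional profile the
# MLP must represent is far from flat.
theta = np.linspace(0, np.pi / 2 - 1e-3, 50)
gamma = np.abs(fresnel_te(theta, 5.3))
```

For TE polarisation there is no Brewster angle, so $|\Gamma|$ grows monotonically from its normal-incidence value toward 1 at grazing.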
Theorem: Connection to the Born Forward Model
Under the Born approximation (weak scattering, single scattering), the RF volume rendering equation reduces to the linear forward model:
$$ s(f) = \int \sigma(t)\,\rho(t, f)\,e^{-j2kt}\,dt $$
which is the continuous form of $\mathbf{y} = \mathbf{A}\mathbf{x}$. Specifically, when:
- $T(t) = 1$ everywhere (negligible attenuation --- weak scattering);
- the density $\sigma$ is an indicator function of the scatterer support;
- $\rho$ is view-independent (isotropic scattering);
then the neural rendering integral becomes the Born integral with the sensing kernel given by the free-space Green's function.
Setting $T = 1$ (weak scattering)
If attenuation is negligible, $T(t) = 1$ for all $t$. The rendering integral becomes $s(f) = \int \sigma(t)\,\rho(t, \mathbf{d}, f)\,e^{-j2kt}\,dt$.
Isotropic scattering
If $\rho(\mathbf{x}, \mathbf{d}, f) = \rho(\mathbf{x}, f)$ (no view dependence), the product $\sigma(\mathbf{x})\,\rho(\mathbf{x}, f)$ reduces to the reflectivity function from Chapter 5.
Bistatic geometry
For separated Tx at $\mathbf{x}_{tx}$ and Rx at $\mathbf{x}_{rx}$, the round-trip phase becomes $e^{-jk(|\mathbf{x} - \mathbf{x}_{tx}| + |\mathbf{x} - \mathbf{x}_{rx}|)}$, recovering the Born forward model. The neural field replaces the discrete reflectivity vector $\mathbf{x}$, and the rendering integral replaces the matrix-vector product $\mathbf{A}\mathbf{x}$.
Example: 1D RF Volume Rendering with Two Walls
A ray traverses two walls, the first at distance $t_1$ and the second at $t_2 > t_1$ (metres). Each wall has density $\sigma_w$ (m$^{-1}$) and thickness $\delta_w$ (m); free space has $\sigma = 0$. The reflectivities are $\rho_1$ and $\rho_2$. At carrier frequency $f$, compute the rendered received signal using one representative sample per region.
Sample positions and densities
Sample 1 (free space, before wall 1): $\sigma = 0$. Sample 2 (wall 1, at $t_1$): $\sigma = \sigma_w$, thickness $\delta_w$. Sample 3 (free space, between the walls): $\sigma = 0$. Sample 4 (wall 2, at $t_2$): $\sigma = \sigma_w$, thickness $\delta_w$.
Alpha values and transmittances
$\alpha_1 = 0$ (free space contributes nothing). $\alpha_2 = 1 - e^{-\sigma_w \delta_w}$. $\alpha_3 = 0$. $\alpha_4 = 1 - e^{-\sigma_w \delta_w}$. $T_1 = 1$, $T_2 = 1$, $T_3 = 1 - \alpha_2$, $T_4 = 1 - \alpha_2$.
Phase terms
$k = 2\pi f/c$ rad/m. Phase at wall 1: $e^{-j2kt_1}$. Phase at wall 2: $e^{-j2kt_2}$.
Rendered signal
$\hat{s}(f) = T_2\,\alpha_2\,\rho_1\,e^{-j2kt_1} + T_4\,\alpha_4\,\rho_2\,e^{-j2kt_2} = \alpha_2\,\rho_1\,e^{-j2kt_1} + (1 - \alpha_2)\,\alpha_4\,\rho_2\,e^{-j2kt_2}$.
Wall 1 contributes $\alpha_2\,\rho_1\,e^{-j2kt_1}$ and wall 2 contributes $(1 - \alpha_2)\,\alpha_4\,\rho_2\,e^{-j2kt_2}$. For comparable reflectivities, wall 1 dominates because it is encountered first (higher transmittance).
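A numeric instance of this example can be run directly; all parameter values here (walls at 2 m and 4 m, $\sigma_w = 10$ m$^{-1}$, $\delta_w = 0.2$ m, $\rho_1 = 0.6$, $\rho_2 = 0.4$, $f = 2.4$ GHz) are assumed for illustration:

```python
import numpy as np

c, f = 3e8, 2.4e9
k = 2 * np.pi * f / c
t = np.array([1.0, 2.0, 3.0, 4.0])           # sample distances [m] (assumed)
sigma = np.array([0.0, 10.0, 0.0, 10.0])     # free space / wall densities (assumed)
delta = np.full(4, 0.2)                      # sample thickness [m] (assumed)
rho = np.array([0, 0.6, 0, 0.4], complex)    # reflectivities (assumed)

alpha = 1 - np.exp(-sigma * delta)           # wall samples: 1 - e^{-2} ~ 0.865
T = np.concatenate(([1.0], np.cumprod(1 - alpha)[:-1]))
s = np.sum(T * alpha * rho * np.exp(-1j * 2 * k * t))
# Wall 1's term sees T = 1; wall 2's term is scaled by T = 1 - alpha_wall1,
# so the first wall dominates the rendered signal.
```

With these values, wall 2's contribution is suppressed by a factor $1 - \alpha_2 \approx 0.135$ relative to wall 1's.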
Discrete RF Volume Rendering
Complexity: $O(N)$ per ray, where $N$ is the number of samples. Each sample requires one MLP evaluation: $O(N \cdot C_{\text{MLP}})$ total, where $C_{\text{MLP}}$ is the cost of a forward pass. The algorithm is identical to optical volume rendering except for the complex-valued MLP output, the round-trip phase term, and the complex-valued accumulation. Backpropagation through this algorithm is straightforward because all operations are differentiable.
When NeRF Disagrees with Born
The connection between neural volumetric rendering and the Born forward model breaks down when:
- Strong scattering: Attenuation is significant ($T \ll 1$), violating the Born single-scattering assumption. The NeRF transmittance captures this naturally, while the Born model does not.
- Multiple scattering: The Born model assumes single scattering; NeRF's single-ray rendering likewise misses multi-bounce paths. Both fail in the same way.
- View dependence: The Born model assumes isotropic scattering ($\rho$ independent of $\mathbf{d}$). NeRF can model view-dependent reflectivity, which is necessary for specular surfaces but adds model complexity.
The point is that NeRF extends the Born model by adding attenuation and view dependence, but both share the single-scattering limitation. Multi-bounce extensions (WiNeRT) address this at the cost of increased training complexity.
Example: Born Model as a Special Case of RF-NeRF
Show explicitly that for a 2D scene with point scatterers at positions $\{\mathbf{x}_m\}_{m=1}^{M}$ with reflectivities $\{x_m\}$, and a single Tx at $\mathbf{x}_{tx}$ and Rx at $\mathbf{x}_{rx}$, the Born forward model is recovered from the RF-NeRF rendering equation under the Born assumptions.
Set $T = 1$
Negligible attenuation: $\alpha(s, f) = 0$ everywhere, so $T_i = 1$ for all $i$.
Point scatterer density
The density is $\sigma(\mathbf{x}) = \sum_m \delta(\mathbf{x} - \mathbf{x}_m)$. In the discrete approximation, $\sigma_i > 0$ at scatterer positions and $\sigma_i = 0$ elsewhere.
Isotropic scattering
$\rho(\mathbf{x}, \mathbf{d}, f) = \rho(\mathbf{x}, f)$ (view-independent). The rendering sum becomes $\hat{s}(f) = \sum_m x_m\,e^{-j2k|\mathbf{x}_m|}$. For a bistatic geometry, $2|\mathbf{x}_m|$ is replaced by the total path length $|\mathbf{x}_m - \mathbf{x}_{tx}| + |\mathbf{x}_m - \mathbf{x}_{rx}|$.
Conclude
This is exactly the Born forward model for point scatterers. The sensing matrix has entries $A_{nm} = e^{-jk(|\mathbf{x}_m - \mathbf{x}_{tx,n}| + |\mathbf{x}_m - \mathbf{x}_{rx,n}|)}$, and the NeRF rendering integral implements each row of this matrix continuously.
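The reduction can be checked numerically for a small bistatic scene; the scatterer positions and reflectivities below are assumed values for illustration:

```python
import numpy as np

# With T = 1 and isotropic rho, the NeRF rendering sum over point
# scatterers equals one row of the Born model y = A x.
c, f = 3e8, 2.4e9
k = 2 * np.pi * f / c
x_tx, x_rx = np.array([0.0, 0.0]), np.array([1.0, 0.0])
scat = np.array([[2.0, 1.0], [3.0, -1.0]])   # scatterer positions (assumed)
refl = np.array([0.5 + 0.1j, 0.3 - 0.2j])    # reflectivities x_m (assumed)

# Born: matrix-row entries exp(-j k (|x_m - x_tx| + |x_m - x_rx|))
path = np.linalg.norm(scat - x_tx, axis=1) + np.linalg.norm(scat - x_rx, axis=1)
A_row = np.exp(-1j * k * path)
y_born = A_row @ refl

# NeRF rendering with T_i = 1 and the same bistatic phase term
y_nerf = np.sum(np.ones(2) * refl * np.exp(-1j * k * path))
```

The two expressions are algebraically identical once the Born assumptions are imposed; running them confirms the term-by-term match.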
Computational Cost of RF Volume Rendering
The computational bottleneck in RF-NeRF is the MLP evaluation at each sample point. For a scene with $M$ Tx--Rx pairs, $N$ samples per ray, and $K$ frequency subcarriers, the total number of MLP evaluations per training iteration is:
$$ N_{\text{eval}} = B \cdot N \cdot K $$
where $B \le M$ is the batch size (number of Tx--Rx pairs per batch).
Practical numbers: a typical batch size, sample count, and the subcarrier count of an 80 MHz Wi-Fi CSI measurement multiply to a product $B \cdot N \cdot K$ in the millions of MLP evaluations per iteration. With an 8-layer, 256-unit MLP, each iteration takes milliseconds on an A100 GPU, so 100k iterations amount to minutes of training.
Mitigation: Hash encoding reduces the per-sample MLP cost by a large constant factor. Frequency-sharing (evaluating the geometry MLP once per spatial point and only the lighter signal MLP per frequency) amortises the cost across subcarriers.
- GPU memory limits the batch size when the product of rays, samples, and subcarriers is large
- Multi-frequency rendering requires one signal-MLP evaluation per subcarrier per sample
- Hash-encoding memory grows with the table size, number of resolution levels, and features per entry
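A back-of-envelope cost comparison makes the frequency-sharing trade-off concrete. All numbers below ($B$, $N$, $K$ and the relative per-pass costs of the two MLPs) are assumed, illustrative values:

```python
# Assumed illustrative values, not the book's figures.
B, N, K = 1024, 64, 256        # batch size, samples per ray, subcarriers
cost_geo, cost_sig = 8, 2      # relative per-pass costs of the two MLPs

# Naive: evaluate both MLPs for every (sample, subcarrier) pair.
naive = B * N * K * (cost_geo + cost_sig)
# Frequency-sharing: geometry MLP once per spatial sample,
# signal MLP once per subcarrier per sample.
shared = B * N * (cost_geo + K * cost_sig)
speedup = naive / shared       # ~5x for these assumed costs
```

The benefit grows as the signal MLP gets cheaper relative to the geometry MLP, since the geometry cost is then paid only once per spatial sample.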
RF Volume Rendering vs Born Forward Model
Compare the Born forward model (no attenuation, single scattering) with full RF volume rendering (with attenuation) for a 1D scene. When $T = 1$, both agree perfectly. As attenuation increases, the Born model overestimates contributions from distant scatterers (it ignores shadowing by nearer objects), while the volume rendering correctly applies transmittance.
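The divergence is easy to demonstrate with a near scatterer, an absorbing slab, and a far scatterer; all scene values below are assumed for illustration:

```python
import numpy as np

# Born (T = 1) vs full rendering for a shadowed far scatterer.
k = 2 * np.pi * 2.4e9 / 3e8
t = np.array([1.0, 2.0, 3.0])             # near scatterer, absorber, far scatterer [m]
sigma = np.array([0.5, 5.0, 0.5])         # the middle sample is strongly absorbing
rho = np.array([0.5, 0.0, 0.5], complex)  # the absorber does not reflect
delta = np.full(3, 0.3)

alpha = 1 - np.exp(-sigma * delta)
T = np.concatenate(([1.0], np.cumprod(1 - alpha)[:-1]))
far_full = np.abs(T[2] * alpha[2] * rho[2])  # shadowed by the slab
far_born = np.abs(1.0 * alpha[2] * rho[2])   # Born ignores the shadow
```

The full rendering scales the far scatterer by the cumulative transmittance through the slab, while the Born prediction leaves it at full strength.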
Common Mistake: Assuming Born Validity for Strong Scatterers
Mistake:
Using the Born forward model for scenes with thick concrete walls or metallic objects, where through-wall attenuation exceeds 10 dB.
Correction:
The Born approximation assumes negligible attenuation ($T \approx 1$). For strong scatterers, the NeRF transmittance model naturally handles shadowing, while the Born model produces physically incorrect predictions. Use the full RF volume rendering equation, or at minimum validate Born predictions against measurements.
Open Problems in RF Volume Rendering
Several fundamental challenges remain:
- Multi-bounce rendering: Single-ray NeRF captures only direct paths. WiNeRT adds reflections, but at substantially higher rendering cost. Efficient differentiable multi-bounce rendering remains open.
- Diffraction: Volume rendering handles absorption and specular reflection but not diffraction around edges. Incorporating the uniform theory of diffraction (UTD) into neural rendering is an active research direction.
- Generalisation: Current RF-NeRFs train per-scene. Foundation models that generalise across environments (trained on diverse RF scenes) would mirror the trajectory of optical NeRF generalisation.
- Phase coherence: Many practical systems measure only power (no phase). Training from power-only measurements introduces a phase-retrieval problem within the NeRF framework (see Chapter 16).
Quick Check
Under which condition does the RF volume rendering equation reduce to the Born forward model?
When the MLP has infinite capacity
When $T = 1$, scattering is isotropic, and the density is supported on the scatterer locations
When the frequency is above 100 GHz
When hash encoding is used instead of positional encoding
Correct. These three conditions eliminate attenuation, view dependence, and the continuous density field, recovering the discrete Born model.
RF Reflectivity
The complex-valued function $\rho(\mathbf{x}, \mathbf{d}, f)$ describing how a material at position $\mathbf{x}$ scatters an incident RF wave from direction $\mathbf{d}$ at frequency $f$. The magnitude $|\rho|$ determines the scattered power; the phase $\angle\rho$ determines the phase shift upon reflection. For specular surfaces, $\rho$ is strongly peaked around the specular direction.
Related: Volume Density
Key Takeaway
The RF volume rendering equation integrates complex-valued reflectivity, frequency-dependent attenuation, and round-trip propagation phase along each ray. Under the Born-approximation assumptions (no attenuation, isotropic scattering), it reduces exactly to the linear forward model of Chapters 5--6. The NeRF framework extends the Born model by naturally handling shadowing (via transmittance) and specular scattering (via view-dependent reflectivity), at the cost of per-scene neural network training.