From Optical to RF Rendering
The Translation Problem
NeRF was designed for optical wavelengths ($\lambda \approx 400$--$700$ nm), where ray optics is an excellent approximation. Translating it to RF ($\lambda \approx 1$--$10$ cm) is not a matter of changing parameters --- it requires rethinking the physics. This section identifies the five fundamental differences between optical and RF rendering, each of which demands a structural modification to the NeRF framework.
Definition: Five Key Differences Between Optical and RF Rendering
The following table summarises the physics that must change when adapting NeRF from optics to RF:
| Property | Optical NeRF | RF NeRF |
|---|---|---|
| Wavelength | $\approx 400$--$700$ nm | $\approx 1$--$10$ cm |
| Propagation regime | Ray optics (geometric) | Wave optics (diffraction dominates) |
| Scattering | Predominantly diffuse (Lambertian) | Predominantly specular (mirror-like) |
| Imaging element | Lens (pinhole camera model) | No lens (lensless imaging) |
| Signal type | Real-valued intensity (RGB) | Complex-valued field (amplitude + phase) |
| Rendering integral | Integrates incoherent intensities | Integrates coherent complex fields |
Theorem: Diffraction Dominance at RF Wavelengths
For an object of characteristic size $a$, the transition from ray optics to wave optics occurs when the Fresnel number

$$N_F = \frac{a^2}{\lambda d}$$

satisfies $N_F \lesssim 1$, where $d$ is the propagation distance. For RF imaging at $\lambda = 10$ cm, a wall segment of width $a = 1$ m at range $d = 10$ m gives $N_F = 1$ (ray optics marginal). For a corner reflector with $a = 10$ cm, $N_F = 0.01$ (wave optics required).
Consequence: RF scenes contain a mixture of ray-optics-valid regions (large flat surfaces) and diffraction-dominated regions (edges, corners, apertures). An RF-NeRF must handle both regimes.
Fresnel number derivation
The Fresnel number counts the number of Fresnel half-zones visible through an aperture of size $a$ at distance $d$. Each half-zone has width $\sim \sqrt{\lambda d}$, so $N_F = a^2 / (\lambda d)$. When $N_F \gg 1$, many zones contribute and their phasor sum approximates the geometric-optics limit. When $N_F \lesssim 1$, only a few zones contribute and diffraction effects (bending around edges, spreading of beams) dominate.
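A quick numerical check of the two regimes, assuming $\lambda = 10$ cm and a 10 m range (the aperture sizes are illustrative):

```python
def fresnel_number(a, wavelength, d):
    """Fresnel number N_F = a^2 / (lambda * d) for aperture size a at range d."""
    return a**2 / (wavelength * d)

lam = 0.10                                  # 10 cm wavelength (~3 GHz)
n_wall = fresnel_number(1.0, lam, 10.0)     # 1 m wall segment at 10 m range
n_corner = fresnel_number(0.10, lam, 10.0)  # 10 cm corner reflector at 10 m
print(n_wall, n_corner)                     # ~1 (marginal), ~0.01 (wave optics)
```

The same scene thus mixes both regimes: the wall sits at the ray-optics boundary while the corner reflector is deep in the diffraction-dominated regime.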
Specular vs Diffuse: Why RF Scattering Is Different
At optical wavelengths, most surfaces appear rough relative to the wavelength (surface roughness $\gtrsim 500$ nm), producing approximately Lambertian (diffuse) scattering. At RF wavelengths, the same surfaces appear smooth (roughness $\ll \lambda \sim$ several cm), producing specular (mirror-like) reflections governed by the law of reflection.
This has a profound consequence for NeRF: optical NeRF's view-dependent colour $\mathbf{c}(\mathbf{x}, \mathbf{d})$ models mild specular highlights on mostly-diffuse surfaces. RF-NeRF's reflectivity $\rho(\mathbf{x}, \mathbf{d})$ must model strongly view-dependent reflections, since the scattered field changes dramatically with angle. The MLP must therefore allocate much more capacity to the directional dependence.
Definition: Lensless Imaging in RF
In optical NeRF, a lens focuses light from a 3D point onto a single pixel, establishing a one-to-one correspondence between scene points and pixels (the pinhole camera model). In RF, there is no lens.
The measurement at each receiver is a superposition of contributions from all visible scene points:

$$y(\mathbf{r}_{\mathrm{rx}}) = \int_{\Omega} G(\mathbf{r}_{\mathrm{rx}}, \mathbf{r})\, \rho(\mathbf{r})\, G(\mathbf{r}, \mathbf{r}_{\mathrm{tx}})\, d\mathbf{r},$$

where $G$ is the Green's function, $\rho$ is the scene reflectivity, and $\Omega$ is the target region. Each measurement is a weighted integral over the entire scene. Reconstructing the scene from such measurements is an inverse problem --- precisely the problem studied in Chapters 1--23.
The point: NeRF replaces the explicit sensing matrix with a neural function, but the lensless measurement model is unchanged.
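The superposition can be sketched in discretised form, assuming a free-space scalar Green's function and two hypothetical point scatterers (positions and reflectivities are illustrative):

```python
import numpy as np

def greens(r_a, r_b, k):
    """Free-space scalar Green's function e^{-jkR} / (4*pi*R)."""
    R = np.linalg.norm(r_a - r_b)
    return np.exp(-1j * k * R) / (4 * np.pi * R)

k = 2 * np.pi / 0.10                       # wavenumber at 10 cm wavelength
tx = np.array([0.0, 0.0, 0.0])             # transmitter position (m)
rx = np.array([0.5, 0.0, 0.0])             # receiver position (m)
scene = [(np.array([1.0, 2.0, 0.0]), 1.0 + 0.5j),    # (point, reflectivity)
         (np.array([-1.0, 3.0, 0.0]), 0.3 - 0.2j)]

# One measurement = weighted sum over ALL scene points (no lens, no pixel
# correspondence): y = sum_i G(rx, r_i) * rho_i * G(r_i, tx)
y = sum(greens(rx, r, k) * rho * greens(r, tx, k) for r, rho in scene)
print(y)
```

Every receiver sees every scatterer; there is no operation here that focuses one scene point onto one measurement.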
Theorem: Coherent (Field) vs Incoherent (Intensity) Rendering
The optical volume rendering integral

$$C = \int_{t_n}^{t_f} T(t)\, \sigma(t)\, c(t)\, dt, \qquad T(t) = \exp\!\left(-\int_{t_n}^{t} \sigma(s)\, ds\right),$$

sums incoherent intensities (non-negative, real-valued colours). The RF rendering integral

$$S = \int_{t_n}^{t_f} T(t)\, \sigma(t)\, \rho(t)\, e^{-j2kt}\, dt$$

sums coherent complex fields (amplitude and phase). Consequently:
- Constructive and destructive interference can occur between samples --- the rendered signal is not the sum of individual sample powers.
- The loss function must operate on complex-valued predictions (or on derived quantities like power or channel frequency response).
- The MLP must output complex-valued reflectivities, requiring either two real outputs ($\rho = \rho_{\mathrm{re}} + j\rho_{\mathrm{im}}$) or a magnitude-phase parameterisation ($\rho = \lvert\rho\rvert e^{j\phi}$).
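The two output parameterisations in the last bullet can be sketched with numpy stand-ins for the MLP head (the function names and the log-magnitude choice are illustrative):

```python
import numpy as np

def head_to_complex_cartesian(out):
    """Interpret the last two network outputs as (real, imag)."""
    return out[..., 0] + 1j * out[..., 1]

def head_to_complex_polar(out):
    """Interpret the outputs as (log-magnitude, phase); exp keeps |rho| > 0."""
    return np.exp(out[..., 0]) * np.exp(1j * out[..., 1])

raw = np.array([0.0, np.pi / 2])            # hypothetical head output
rho_cart = head_to_complex_cartesian(raw)   # real = 0, imag = pi/2
rho_polar = head_to_complex_polar(raw)      # magnitude 1, phase pi/2 -> ~1j
print(rho_cart, rho_polar)
```

The Cartesian form is unconstrained and easy to optimise; the polar form guarantees a positive magnitude but makes the loss landscape periodic in the phase output.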
Interference example
Consider two samples with equal power $P$ and weights $w_1 = w_2 = 1$. In optical rendering: $P_{\mathrm{total}} = P + P = 2P$ (incoherent sum). In RF rendering: $P_{\mathrm{total}} = \lvert \sqrt{P} e^{j\phi_1} + \sqrt{P} e^{j\phi_2} \rvert^2 = 2P(1 + \cos\Delta\phi)$, which ranges from $0$ (destructive) to $4P$ (constructive). The phase difference $\Delta\phi$ depends on the sample spacing relative to the wavelength.
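The phasor arithmetic, checked numerically for unit power and a few assumed phase differences:

```python
import numpy as np

P = 1.0                                  # per-sample power
incoherent = P + P                       # optical: always 2P, phase-blind

phases = [0.0, np.pi / 2, np.pi]         # assumed phase differences
# coherent power = |sqrt(P) + sqrt(P) e^{j dphi}|^2 = 2P (1 + cos dphi)
coherent = [abs(np.sqrt(P) + np.sqrt(P) * np.exp(1j * d)) ** 2 for d in phases]

for d, p in zip(phases, coherent):
    print(f"dphi = {d:.2f} rad: incoherent = {incoherent}, coherent = {p:.3f}")
```

The coherent sum sweeps from $4P$ down to $0$ while the incoherent sum never moves from $2P$.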
Optical vs RF Volume Rendering Comparison
Compare optical (incoherent, real-valued) and RF (coherent, complex-valued) volume rendering along a single ray through a simple scene with two scatterers. In optical mode, the rendered value is always non-negative and increases monotonically with scatterer density. In RF mode, interference fringes appear as the frequency changes, and the rendered power can be less than either scatterer's individual contribution due to destructive interference.
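A minimal numerical sketch of this comparison, assuming two equal scatterers on one ray (depths, weights, and frequencies are illustrative):

```python
import numpy as np

c0 = 3e8                       # speed of light (m/s)
t = np.array([2.0, 2.5])       # scatterer depths along the ray (m)
w = np.array([0.5, 0.5])       # rendering weights T(t)*sigma(t)*dt (assumed)
rho = np.array([1.0, 1.0])     # reflectivity magnitudes

# Optical (incoherent): real power sum, independent of frequency.
optical = np.sum(w * rho**2)

# RF (coherent): complex field sum with two-way propagation phase e^{-j 2kt}.
freqs = [3.00e9, 3.075e9, 3.15e9]
rf_power = []
for f in freqs:
    k = 2 * np.pi * f / c0
    field = np.sum(w * rho * np.exp(-1j * 2 * k * t))
    rf_power.append(abs(field) ** 2)

print(optical)     # constant with frequency
print(rf_power)    # fringes; can drop below either scatterer's own 0.5
```

With 0.5 m spacing, sweeping the frequency walks the two-way phase difference through a full fringe: the coherent power goes from fully constructive to fully destructive while the incoherent sum stays fixed.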
Example: Frequency-Dependent Attenuation Through a Wall
A concrete wall of thickness $d = 20$ cm has attenuation coefficient $\alpha(f) = 5 f^{1/3}$ Np/m (with $f$ in GHz). Compute the one-way power attenuation at $f = 3.5$ GHz and $f = 28$ GHz. Explain why an RF-NeRF that ignores frequency dependence will produce biased reconstructions.
Attenuation at 3.5 GHz (sub-6)
$\alpha = 5 \cdot 3.5^{1/3} \approx 7.6$ Np/m. One-way power loss: $e^{-2\alpha d} = e^{-3.04} \approx 0.048$ ($\approx 13$ dB).
Attenuation at 28 GHz (mmWave)
$\alpha = 5 \cdot 28^{1/3} \approx 15.2$ Np/m. One-way power loss: $e^{-2\alpha d} = e^{-6.07} \approx 0.0023$ ($\approx 26$ dB).
Impact on RF-NeRF
The wall causes nearly twice the loss (in dB) at 28 GHz compared to 3.5 GHz. A frequency-independent RF-NeRF would learn an averaged attenuation, under-predicting loss at high frequencies and over-predicting at low frequencies. Wideband or multi-band systems require frequency-dependent attenuation modelling (Section 24.4).
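A sketch of the calculation under an assumed attenuation model, $\alpha(f) = \alpha_0 f^{1/3}$ Np/m with $\alpha_0 = 5$ and a 20 cm wall (the coefficients are illustrative, not measured values):

```python
def one_way_loss_db(f_ghz, d=0.20, alpha0=5.0):
    """One-way power loss (dB) through a wall of thickness d metres.

    Assumed field-attenuation model: alpha(f) = alpha0 * f**(1/3) Np/m, f in GHz.
    Power goes as e^{-2*alpha*d}, so loss in dB = 8.686 * alpha * d.
    """
    alpha = alpha0 * f_ghz ** (1.0 / 3.0)
    return 8.686 * alpha * d

loss_sub6 = one_way_loss_db(3.5)     # ~13 dB
loss_mmw = one_way_loss_db(28.0)     # ~26 dB
print(loss_sub6, loss_mmw, loss_mmw / loss_sub6)
```

Any single learned attenuation value must sit between these two, biasing predictions in opposite directions at the band edges.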
Optical NeRF vs RF-NeRF Architecture Comparison
| Component | Optical NeRF | RF-NeRF |
|---|---|---|
| Scene function output | Density + RGB colour $(\sigma, \mathbf{c})$ | Density + complex reflectivity $(\sigma, \rho)$, $\rho \in \mathbb{C}$ |
| Rendering output | Real RGB pixel colour | Complex baseband signal |
| Loss function | $\lVert \hat{C} - C \rVert^2$ (photometric MSE) | $\lVert \hat{S} - S \rVert^2$ or $\lVert\, \lvert\hat{S}\rvert^2 - \lvert S\rvert^2 \,\rVert^2$ |
| View dependence | Mild (diffuse + specular highlights) | Strong (specular-dominated RF scattering) |
| Frequency dependence | None (narrowband visible light) | Critical (attenuation and phase vary with $f$) |
| Multipath | Single-bounce sufficient | Multi-bounce required for accuracy |
| Data density | $\sim 10^6$ pixels per image | $\sim 10^2$--$10^4$ RSS/CSI measurements |
Common Mistake: Treating RF as Real-Valued Rendering
Mistake:
Applying the optical NeRF rendering equation directly to RF signals, treating the received signal as a real-valued intensity.
Correction:
RF signals are complex-valued (amplitude + phase). Ignoring the phase discards half the information and prevents the model from capturing interference effects. The MLP must output complex reflectivities, and the volume rendering sum must be complex. If only power measurements are available, the loss operates on $\lvert S \rvert^2$, but the internal rendering must remain complex.
Why This Matters: RF Rendering as a Neural Forward Model
The RF volume rendering equation is fundamentally a neural approximation to the Born forward model. In Chapter 6, we derived the forward model $\mathbf{y} = \mathbf{A}\boldsymbol{\rho}$, where $\mathbf{A}$ is the sensing matrix determined by the array geometry and the free-space Green's function. The RF-NeRF replaces the discrete reflectivity vector $\boldsymbol{\rho}$ with a continuous neural function $\rho_\theta(\mathbf{r})$, and replaces the matrix-vector product with a differentiable rendering integral.
The advantage: the continuous representation naturally handles resolution, interpolation, and regularisation. The cost: the physics must be baked into the rendering equation, not into a pre-computed matrix.
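One way to see the equivalence: evaluate a stand-in for the neural field on the same quadrature grid that the sensing matrix discretises, and the two forward models coincide (a random matrix is used here as a placeholder for the physics kernel):

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 8, 16                                 # measurements x grid points

# Discrete Born model y = A @ rho (random stand-in for the sensing matrix).
A = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
rho_grid = rng.standard_normal(N) + 1j * rng.standard_normal(N)
y_matrix = A @ rho_grid

def rho_theta(n):
    """Stand-in for a continuous neural field queried at quadrature point n."""
    return rho_grid[n]

# "Rendering": accumulate the field over quadrature points, weighted by the
# physics kernel -- identical to the matrix-vector product on the same grid.
y_render = np.array([sum(A[m, n] * rho_theta(n) for n in range(N))
                     for m in range(M)])
print(np.allclose(y_matrix, y_render))
```

The neural field's advantage appears when the query points do *not* coincide with a fixed grid: the same $\rho_\theta$ can be sampled at any resolution without rebuilding $\mathbf{A}$.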
See full treatment in Chapter 6, Section def-born-forward-model
Quick Check
Which of the following is NOT a key difference between optical and RF rendering in the NeRF framework?
- RF uses complex-valued fields instead of real intensities
- RF wavelengths require modelling diffraction effects
- RF scattering is predominantly diffuse (Lambertian)
- RF imaging has no focusing lens
Correct --- RF scattering is predominantly specular, not diffuse. Surfaces that appear rough at optical wavelengths are smooth relative to RF wavelengths.
Fresnel Number
The dimensionless quantity $N_F = a^2 / (\lambda d)$ that determines whether ray optics ($N_F \gg 1$) or wave optics ($N_F \lesssim 1$) governs propagation through an aperture of size $a$ at distance $d$ and wavelength $\lambda$.
Related: Volume Density
Key Takeaway
Adapting NeRF from optics to RF requires five structural changes: (1) complex-valued outputs for coherent field rendering, (2) frequency-dependent attenuation and phase, (3) strong view-dependent reflectivity for specular scattering, (4) a lensless measurement model where each observation integrates the entire scene, and (5) accounting for diffraction at wavelength-scale features. The RF rendering equation is a neural implementation of the Born forward model from Chapters 5--6.