From Optical to RF Rendering

The Translation Problem

NeRF was designed for optical wavelengths ($\lambda \sim 500$ nm) where ray optics is an excellent approximation. Translating it to RF ($\lambda \sim 1$--$30$ cm) is not a matter of changing parameters --- it requires rethinking the physics. This section identifies the five fundamental differences between optical and RF rendering, each of which demands a structural modification to the NeRF framework.

Definition: Five Key Differences Between Optical and RF Rendering

The following table summarises the physics that must change when adapting NeRF from optics to RF:

| Property | Optical NeRF | RF NeRF |
| --- | --- | --- |
| Wavelength | $\sim 500$ nm | $\sim 1$--$30$ cm |
| Propagation regime | Ray optics (geometric) | Wave optics (diffraction dominates) |
| Scattering | Predominantly diffuse (Lambertian) | Predominantly specular (mirror-like) |
| Imaging element | Lens (pinhole camera model) | No lens (lensless imaging) |
| Signal type | Real-valued intensity (RGB) | Complex-valued field (amplitude + phase) |
| Rendering integral | Integrates incoherent intensities | Integrates coherent complex fields |

Theorem: Diffraction Dominance at RF Wavelengths

For an object of characteristic size $D$, the transition from ray optics to wave optics occurs when the Fresnel number

$$\mathcal{F} = \frac{D^2}{\lambda R}$$

satisfies $\mathcal{F} \lesssim 1$, where $R$ is the propagation distance. For RF imaging at $\lambda = 1$ cm, a wall segment of width $D = 1$ m at range $R = 10$ m gives $\mathcal{F} = 10$ (ray optics marginal). For a corner reflector with $D = 10$ cm, $\mathcal{F} = 0.1$ (wave optics required).

Consequence: RF scenes contain a mixture of ray-optics-valid regions (large flat surfaces) and diffraction-dominated regions (edges, corners, apertures). An RF-NeRF must handle both regimes.
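As a quick sanity check, the Fresnel-number threshold can be computed directly; the sketch below reproduces the two worked cases from the theorem.

```python
def fresnel_number(D, wavelength, R):
    """Fresnel number F = D^2 / (lambda * R).

    F >> 1: ray optics is a good approximation.
    F <~ 1: wave optics (diffraction) must be modelled.
    All arguments in metres.
    """
    return D**2 / (wavelength * R)

# Wall segment: D = 1 m, lambda = 1 cm, R = 10 m -> F = 10 (ray optics marginal)
print(fresnel_number(1.0, 0.01, 10.0))
# Corner reflector: D = 10 cm -> F = 0.1 (wave optics required)
print(fresnel_number(0.1, 0.01, 10.0))
```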

Specular vs Diffuse: Why RF Scattering Is Different

At optical wavelengths, most surfaces appear rough relative to the wavelength (surface roughness $\gg 500$ nm), producing approximately Lambertian (diffuse) scattering. At RF wavelengths, the same surfaces appear smooth (roughness $\ll$ several cm), producing specular (mirror-like) reflections governed by the law of reflection (angle of incidence equals angle of reflection).

This has a profound consequence for NeRF: optical NeRF's view-dependent colour $\mathbf{c}(\mathbf{x}, \mathbf{d})$ models mild specular highlights on mostly-diffuse surfaces. RF-NeRF's reflectivity $\rho(\mathbf{x}, \mathbf{d}, f)$ must model strongly view-dependent reflections, since the scattered field changes dramatically with angle. The MLP must therefore allocate much more capacity to the directional dependence.
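To make the contrast concrete, here is a small hypothetical sketch (not from the text) comparing a broad Lambertian lobe with a narrow specular lobe centred on the mirror direction; the `sharpness` parameter is an arbitrary illustrative choice.

```python
import numpy as np

def mirror_direction(d, n):
    """Specular (mirror) reflection of incident direction d about unit normal n."""
    d, n = np.asarray(d, float), np.asarray(n, float)
    return d - 2.0 * np.dot(d, n) * n

def lambertian_gain(out_dir, n):
    """Broad diffuse lobe: proportional to cos(theta), independent of incidence."""
    return max(0.0, float(np.dot(out_dir, n)))

def specular_gain(out_dir, d_in, n, sharpness=200.0):
    """Narrow lobe peaking at the mirror direction (illustrative, not calibrated)."""
    r = mirror_direction(d_in, n)
    return float(np.exp(sharpness * (np.dot(out_dir, r) - 1.0)))

d_in = np.array([0.0, 0.0, -1.0])   # normal incidence on a z-up surface
n = np.array([0.0, 0.0, 1.0])
print(mirror_direction(d_in, n))    # reflects straight back: [0. 0. 1.]
```

A network that must represent `specular_gain`-like behaviour needs far more directional capacity than one representing `lambertian_gain`, which is the point made above.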

Definition: Lensless Imaging in RF

In optical NeRF, a lens focuses light from a 3D point onto a single pixel, establishing a one-to-one correspondence between scene points and pixels (the pinhole camera model). In RF, there is no lens.

The measurement at each receiver is a superposition of contributions from all visible scene points:

$$y_j = \int_\Omega \rho(\mathbf{p})\, G(\mathbf{p}, \mathbf{r}_j)\, d\mathbf{p} + w_j,$$

where $G$ is the Green's function and $\Omega$ is the target region. Each measurement is a weighted integral over the entire scene. Reconstructing the scene from such measurements is an inverse problem --- precisely the problem studied in Chapters 1--23.

The point: NeRF replaces the explicit sensing matrix $\mathbf{A}$ with a neural function, but the lensless measurement model is unchanged.
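A discretised version of this measurement model can be sketched in a few lines. The Green's function normalisation below is one common convention, and the scene geometry and reflectivities are made up purely for illustration.

```python
import numpy as np

def greens_fn(p, r, k):
    """Free-space scalar Green's function exp(-j k |p - r|) / (4 pi |p - r|).

    Sign and normalisation conventions vary; this is one common choice.
    """
    dist = np.linalg.norm(p - r, axis=-1)
    return np.exp(-1j * k * dist) / (4 * np.pi * dist)

rng = np.random.default_rng(0)
k = 2 * np.pi / 0.01                      # wavenumber at lambda = 1 cm

# Discretised target region Omega: scene points with complex reflectivities rho.
points = rng.uniform(0.0, 1.0, size=(200, 3)) + np.array([0.0, 0.0, 5.0])
rho = rng.standard_normal(200) + 1j * rng.standard_normal(200)

# Receivers: a small linear array at z = 0. With no lens, each measurement y_j
# is a weighted sum over the ENTIRE scene, not a single point.
receivers = np.stack([np.linspace(-0.5, 0.5, 8), np.zeros(8), np.zeros(8)], axis=1)
y = np.array([np.sum(rho * greens_fn(points, r, k)) for r in receivers])
print(y.shape)   # one complex measurement per receiver
```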

Theorem: Coherent (Field) vs Incoherent (Intensity) Rendering

The optical volume rendering integral

$$\hat{C} = \sum_{i=1}^{N} w_i\, \mathbf{c}_i, \qquad w_i \in \mathbb{R}_+$$

sums incoherent intensities (non-negative, real-valued colours). The RF rendering integral

$$\hat{S} = \sum_{i=1}^{N} w_i\, \rho_i\, e^{-j\kappa \cdot 2 t_i}, \qquad \rho_i \in \mathbb{C}$$

sums coherent complex fields (amplitude and phase). Consequently:

  1. Constructive and destructive interference can occur between samples --- the rendered power $|\hat{S}|^2$ is not the sum of the individual sample powers.
  2. The loss function must operate on complex-valued predictions (or on derived quantities like power or channel frequency response).
  3. The MLP must output complex-valued reflectivities, requiring either two real outputs (real and imaginary parts) or a magnitude-phase parameterisation.
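The two output parameterisations mentioned in point 3 can be sketched as follows; the raw outputs `a`, `b`, `m_raw`, `phi` are hypothetical placeholders for real-valued MLP heads, not names from the text.

```python
import numpy as np

def complex_from_re_im(a, b):
    """Real/imaginary parameterisation: rho = a + j b."""
    return a + 1j * b

def complex_from_mag_phase(m_raw, phi):
    """Magnitude-phase parameterisation; softplus keeps the magnitude non-negative."""
    mag = np.log1p(np.exp(m_raw))        # softplus
    return mag * np.exp(1j * phi)

def render_coherent(w, rho, kappa, t):
    """Coherent rendering sum: S = sum_i w_i rho_i exp(-j kappa 2 t_i)."""
    return np.sum(w * rho * np.exp(-1j * kappa * 2.0 * t))
```

Either parameterisation works; magnitude-phase can ease interpretation (attenuation vs delay), while real/imaginary avoids phase-wrapping issues in the loss.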

Optical vs RF Volume Rendering Comparison

Compare optical (incoherent, real-valued) and RF (coherent, complex-valued) volume rendering along a single ray through a simple scene with two scatterers. In optical mode, the rendered value is always non-negative and increases monotonically with scatterer density. In RF mode, interference fringes appear as the frequency changes, and the rendered power can be less than either scatterer's individual contribution due to destructive interference.

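The behaviour described above can be reproduced offline with a minimal two-scatterer sketch; the scatterer positions and frequency sweep are illustrative choices, not the demo's actual parameters.

```python
import numpy as np

c = 3e8
t = np.array([5.00, 5.03])            # two scatterers on one ray (metres)
w = np.array([1.0, 1.0])              # unit rendering weights
rho = np.array([1.0 + 0j, 1.0 + 0j])  # unit complex reflectivities

def rf_power(f_hz):
    """Coherent RF rendering: P = |sum_i w_i rho_i exp(-j kappa 2 t_i)|^2."""
    kappa = 2 * np.pi * f_hz / c
    s = np.sum(w * rho * np.exp(-1j * kappa * 2 * t))
    return np.abs(s) ** 2

# Optical (incoherent) mode: frequency-independent sum of sample powers = 2.
optical = np.sum(w * np.abs(rho) ** 2)

# RF mode: sweeping frequency traces interference fringes; destructive
# interference drives the power below either scatterer's contribution of 1.
freqs = np.linspace(1e9, 6e9, 201)
powers = np.array([rf_power(f) for f in freqs])
print(powers.min(), powers.max())   # near 0 (destructive) up to ~4 (constructive)
```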

Example: Frequency-Dependent Attenuation Through a Wall

A concrete wall of thickness $d = 20$ cm has attenuation coefficient $\alpha(f) = 0.5 + 0.02 f$ Np/m (with $f$ in GHz). Compute the one-way power attenuation at $f = 3.5$ GHz and $f = 28$ GHz. Explain why an RF-NeRF that ignores frequency dependence will produce biased reconstructions.
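A hedged worked solution, assuming the Np/m coefficient applies to the field amplitude, so the one-way power loss in dB is $(20 \log_{10} e) \approx 8.686$ times the total attenuation in Nepers:

```python
import math

def wall_attenuation_db(f_ghz, d_m=0.20):
    """One-way power attenuation (dB) through the wall of the example.

    Assumes alpha(f) = 0.5 + 0.02 f Np/m acts on the field amplitude:
    loss_dB = (20 log10 e) * alpha(f) * d.
    """
    alpha = 0.5 + 0.02 * f_ghz           # Np/m
    return 20 * math.log10(math.e) * alpha * d_m

print(round(wall_attenuation_db(3.5), 2))    # ~0.99 dB at 3.5 GHz
print(round(wall_attenuation_db(28.0), 2))   # ~1.84 dB at 28 GHz
```

A frequency-blind RF-NeRF must explain this roughly 0.85 dB spread per wall with a single reflectivity, biasing the reconstruction wherever wideband data mixes frequencies.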

Optical NeRF vs RF-NeRF Architecture Comparison

| Component | Optical NeRF | RF-NeRF |
| --- | --- | --- |
| Scene function output | $(\sigma, \mathbf{c}) \in \mathbb{R}_+ \times [0,1]^3$ | $(\sigma, \rho) \in \mathbb{R}_+ \times \mathbb{C}$ |
| Rendering output | Real RGB pixel colour | Complex baseband signal |
| Loss function | $\lVert \hat{C} - C^* \rVert_2^2$ (photometric MSE) | $\lvert \hat{P}^{(\mathrm{dB})} - P^{*(\mathrm{dB})} \rvert^2$ or $\lVert \hat{H} - H^* \rVert_F^2$ |
| View dependence | Mild (diffuse + specular highlights) | Strong (specular-dominated RF scattering) |
| Frequency dependence | None (narrowband visible light) | Critical (attenuation and phase vary with $f$) |
| Multipath | Single-bounce sufficient | Multi-bounce required for accuracy |
| Data density | $10^6$ pixels per image | $10^2$--$10^3$ RSS/CSI measurements |

Common Mistake: Treating RF as Real-Valued Rendering

Mistake:

Applying the optical NeRF rendering equation directly to RF signals, treating the received signal as a real-valued intensity.

Correction:

RF signals are complex-valued (amplitude + phase). Ignoring the phase discards half the information and prevents the model from capturing interference effects. The MLP must output complex reflectivities, and the volume rendering sum must be complex. If only power measurements are available, the loss operates on $|\hat{S}|^2$, but the internal rendering must remain complex.

Why This Matters: RF Rendering as a Neural Forward Model

The RF volume rendering equation is fundamentally a neural approximation to the Born forward model. In Chapter 6, we derived the forward model $\mathbf{y} = \mathbf{A}\mathbf{c} + \mathbf{w}$, where $\mathbf{A}$ is the sensing matrix determined by the array geometry and the free-space Green's function. The RF-NeRF replaces the discrete reflectivity vector $\mathbf{c}$ with a continuous neural function $\rho_\theta(\mathbf{x}, \mathbf{d}, f)$, and replaces the matrix-vector product with a differentiable rendering integral.

The advantage: the continuous representation naturally handles resolution, interpolation, and regularisation. The cost: the physics must be baked into the rendering equation, not into a pre-computed matrix.
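The structural correspondence can be sketched as follows. Here `rho_theta` is a stand-in closed-form function, not a trained network, and reusing the rows of `A` as quadrature weights is purely illustrative of the shared structure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Discrete Born forward model: y = A c, with a pre-computed sensing matrix A.
A = rng.standard_normal((8, 100)) + 1j * rng.standard_normal((8, 100))
c = rng.standard_normal(100) + 1j * rng.standard_normal(100)
y_discrete = A @ c

# Neural-field version: a continuous reflectivity rho_theta(x) is queried at
# quadrature nodes, and the rendering integral becomes a weighted sum.
def rho_theta(x):
    """Illustrative smooth complex field standing in for a trained MLP."""
    return np.exp(-np.sum(x**2, axis=-1)) * (1.0 + 0.5j)

xs = rng.uniform(-1, 1, size=(100, 3))    # quadrature nodes in the scene
y_neural = A @ rho_theta(xs)              # same matrix structure, continuous field
print(y_discrete.shape, y_neural.shape)   # both (8,)
```

The computation is identical in shape; what changes is that `rho_theta` is differentiable in its parameters, so the forward model can be fitted end-to-end.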

See full treatment in Chapter 6, Section def-born-forward-model.


Quick Check

Which of the following is NOT a key difference between optical and RF rendering in the NeRF framework?

RF uses complex-valued fields instead of real intensities

RF wavelengths require modelling diffraction effects

RF scattering is predominantly diffuse (Lambertian)

RF imaging has no focusing lens

Fresnel Number

The dimensionless quantity $\mathcal{F} = D^2/(\lambda R)$ that determines whether ray optics ($\mathcal{F} \gg 1$) or wave optics ($\mathcal{F} \lesssim 1$) governs propagation through an aperture of size $D$ at distance $R$ and wavelength $\lambda$.

Related: Volume Density

Key Takeaway

Adapting NeRF from optics to RF requires five structural changes: (1) complex-valued outputs for coherent field rendering, (2) frequency-dependent attenuation and phase, (3) strong view-dependent reflectivity for specular scattering, (4) a lensless measurement model where each observation integrates the entire scene, and (5) accounting for diffraction at edges, corners, and small apertures. The RF rendering equation is a neural implementation of the Born forward model from Chapters 5--6.