Signed Distance Functions for RF

Why Signed Distance Functions for RF Imaging?

Classical RF imaging (Chapters 13--14) reconstructs the reflectivity $c(\mathbf{p})$ on a discrete voxel grid. This approach has two fundamental limitations: (1) the memory cost scales as $O(N^3)$ for a 3D grid of side $N$, and (2) the representation cannot express sub-voxel geometry.

Signed distance functions offer an alternative: a continuous scalar field whose zero level set defines the surface, independent of any grid. When parameterised by a neural network, the SDF can represent complex 3D shapes with far fewer parameters than a voxel grid. More importantly for RF, the SDF provides surface normals (via the gradient) and interior/exterior classification --- both essential for physically-grounded scattering models.

This section develops the SDF formalism and its neural parameterisation as preparation for GeRaF (Section 25.2), which applies SDFs to mmWave radar reconstruction.

Definition: Signed Distance Function

A signed distance function (SDF) $f \colon \mathbb{R}^3 \to \mathbb{R}$ assigns to each point $\mathbf{p} \in \mathbb{R}^3$ the signed distance to the nearest surface $\mathcal{S}$:

$$f(\mathbf{p}) = \begin{cases} -d(\mathbf{p}, \mathcal{S}) & \text{if } \mathbf{p} \text{ is inside the object,} \\ 0 & \text{if } \mathbf{p} \in \mathcal{S}, \\ +d(\mathbf{p}, \mathcal{S}) & \text{if } \mathbf{p} \text{ is outside,} \end{cases}$$

where $d(\mathbf{p}, \mathcal{S}) = \min_{\mathbf{q} \in \mathcal{S}} \|\mathbf{p} - \mathbf{q}\|$.

The surface is the zero level set: $\mathcal{S} = \{\mathbf{p} : f(\mathbf{p}) = 0\}$.
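To make the sign convention concrete, here is a minimal Python sketch for the simplest primitive, a sphere centred at the origin (the name `sdf_sphere` is ours, purely illustrative):

```python
import numpy as np

def sdf_sphere(p, R=1.0):
    """Exact SDF of a sphere of radius R centred at the origin:
    negative inside, zero on the surface, positive outside."""
    return np.linalg.norm(p) - R

# Sign convention at three representative points.
print(sdf_sphere(np.array([0.0, 0.0, 0.0])))  # centre (inside):  -1.0
print(sdf_sphere(np.array([1.0, 0.0, 0.0])))  # on the surface:    0.0
print(sdf_sphere(np.array([3.0, 0.0, 0.0])))  # outside:           2.0
```

The zero level set is exactly the set of points where the printed value vanishes; no grid is involved.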

Theorem: The Eikonal Equation

If $f$ is a valid signed distance function, then the gradient norm equals unity almost everywhere:

$$\|\nabla f(\mathbf{p})\| = 1, \quad \text{a.e. } \mathbf{p} \in \mathbb{R}^3.$$

This is the Eikonal equation. At any point on the surface $\mathcal{S}$, the normalised gradient $\nabla f(\mathbf{p})/\|\nabla f(\mathbf{p})\|$ gives the outward-pointing surface normal.

The SDF measures distance; moving one unit in any direction changes the distance by at most one unit (Lipschitz constant $= 1$). Furthermore, the gradient points in the direction of steepest ascent (away from the surface), and by the distance property the rate of increase is exactly $1$.
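The unit-gradient property is easy to verify numerically. The sketch below (our own illustration, not from the text) estimates $\|\nabla f\|$ for the sphere SDF by central finite differences at arbitrary points:

```python
import numpy as np

def sdf_sphere(p, R=1.0):
    return np.linalg.norm(p) - R

def grad_norm(f, p, h=1e-5):
    """Central-difference estimate of the gradient norm of f at p."""
    g = np.zeros(3)
    for i in range(3):
        e = np.zeros(3)
        e[i] = h
        g[i] = (f(p + e) - f(p - e)) / (2 * h)
    return np.linalg.norm(g)

# Away from the centre (where the SDF is non-differentiable),
# the Eikonal property holds at every point we probe.
for p in [np.array([0.5, 0.2, -0.1]), np.array([2.0, -1.0, 3.0])]:
    print(grad_norm(sdf_sphere, p))  # approximately 1.0 in both cases
```

The exception set ("almost everywhere") is visible here: at the sphere's centre the nearest-point map is ambiguous and the gradient is undefined.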


Definition: Neural SDF (DeepSDF)

A neural SDF parameterises ff as a multi-layer perceptron (MLP):

$$f_\theta(\mathbf{p}) = \text{MLP}_\theta(\gamma(\mathbf{p})),$$

where $\gamma(\mathbf{p})$ is a positional encoding:

$$\gamma(\mathbf{p}) = \bigl(\sin(2^0 \pi \mathbf{p}),\, \cos(2^0 \pi \mathbf{p}),\, \ldots,\, \sin(2^{L-1} \pi \mathbf{p}),\, \cos(2^{L-1} \pi \mathbf{p})\bigr).$$

The positional encoding maps the 3D coordinate to a higher-dimensional space, enabling the MLP to represent high-frequency spatial detail despite its inherent spectral bias toward smooth functions.

Without positional encoding, standard MLPs with smooth activations (ReLU, Softplus) converge slowly to high-frequency targets. With $L = 10$, the encoding provides frequencies up to $2^9 \approx 500$ cycles per unit length, sufficient for sub-wavelength RF imaging detail.
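The encoding itself is only a few lines. A minimal sketch, applying the formulation above elementwise to the 3D coordinate (function name ours):

```python
import numpy as np

def positional_encoding(p, L=10):
    """NeRF-style encoding gamma(p): for each frequency 2^k * pi
    (k = 0..L-1), append sin and cos of the scaled coordinates."""
    feats = []
    for k in range(L):
        feats.append(np.sin(2.0**k * np.pi * p))
        feats.append(np.cos(2.0**k * np.pi * p))
    return np.concatenate(feats)

p = np.array([0.1, -0.4, 0.7])
print(positional_encoding(p).shape)  # (60,): 3 coords x 2 functions x L=10 levels
```

The MLP then consumes this 60-dimensional vector instead of the raw 3D coordinate, which is what lets it fit wavelength-scale detail.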

Definition: Sphere Tracing

Sphere tracing finds the intersection of a ray $\mathbf{r}(t) = \mathbf{o} + t\mathbf{d}$ with the zero level set of an SDF $f$. The algorithm iterates:

$$t_{k+1} = t_k + f(\mathbf{r}(t_k)), \qquad t_0 = t_{\min}.$$

At each step, the SDF value $f(\mathbf{r}(t_k))$ gives the distance to the nearest surface. Since $\|\nabla f\| = 1$, the ball of radius $f(\mathbf{r}(t_k))$ is guaranteed to be surface-free, so the step never overshoots the surface.

Termination: When $|f(\mathbf{r}(t_k))| < \varepsilon$ (surface hit) or $t_k > t_{\max}$ (ray missed).

Sphere tracing is much more efficient than dense ray marching because it takes large steps in empty regions and small steps near surfaces. For RF imaging, sphere tracing enables efficient computation of ray-surface intersections needed for differentiable rendering (Section 25.2).

Sphere Tracing Algorithm

Complexity: Convergence is linear for rays that strike the surface transversally, reaching tolerance $\varepsilon$ in $O(\log(1/\varepsilon))$ steps; rays that graze the surface converge more slowly.
Input: Ray origin $\mathbf{o}$, direction $\mathbf{d}$, SDF $f$, tolerance $\varepsilon$, maximum distance $t_{\max}$
Output: Intersection distance $t^*$ or NO_HIT
1. $t \leftarrow 0$
2. while $t < t_{\max}$ do
3. $\quad \mathbf{p} \leftarrow \mathbf{o} + t \cdot \mathbf{d}$
4. $\quad d_{\text{sdf}} \leftarrow f(\mathbf{p})$
5. $\quad$ if $|d_{\text{sdf}}| < \varepsilon$ then
6. $\quad\quad$ return $t$
7. $\quad$ end if
8. $\quad t \leftarrow t + d_{\text{sdf}}$
9. end while
10. return NO_HIT

For neural SDFs where $\|\nabla f_\theta\| \neq 1$ exactly, a safety factor $\alpha \in (0, 1)$ can be used: $t_{k+1} = t_k + \alpha \cdot f_\theta(\mathbf{r}(t_k))$.
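Putting the iteration, the termination tests, and the safety factor together, here is a minimal Python sketch of the algorithm (illustrative names and iteration budget; $\alpha = 1$ recovers the exact-SDF version above):

```python
import numpy as np

def sphere_trace(f, o, d, t_max=100.0, eps=1e-6, alpha=1.0, max_iter=256):
    """Sphere-trace the ray o + t*d against the zero level set of f.
    alpha < 1 is the safety factor for approximate (e.g. neural) SDFs."""
    t = 0.0
    for _ in range(max_iter):
        dist = f(o + t * d)
        if abs(dist) < eps:
            return t             # surface hit
        t += alpha * dist        # step by (scaled) distance to nearest surface
        if t > t_max:
            break
    return None                  # ray missed, or iteration budget exhausted

sdf = lambda p: np.linalg.norm(p) - 1.0   # unit sphere at the origin
o = np.array([-3.0, 0.0, 0.0])
d = np.array([1.0, 0.0, 0.0])             # unit direction along +x
print(sphere_trace(sdf, o, d))            # 2.0: hits the sphere at x = -1
```

For this axis-aligned ray the first step covers the full empty gap (SDF value $2$) and the second lands exactly on the surface, illustrating why empty space is traversed so cheaply.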

2D Signed Distance Field Visualization

Visualise the signed distance field for different 2D shapes. Blue regions are outside (positive SDF), red regions are inside (negative SDF), and the white contour marks the zero level set (the surface). Observe how $\|\nabla f\| = 1$ everywhere for exact SDFs.


Sphere Tracing in 2D

Watch sphere tracing find the ray-surface intersection for a 2D SDF. Each step advances by the SDF value at the current position (shown as a circle). In empty regions the steps are large; near the surface they shrink to zero.


Example: Analytical SDFs for RF-Relevant Primitives

Write the SDF for (a) a sphere of radius $R$ centred at the origin, (b) an infinite half-space (modelling a wall), and (c) a cylinder of radius $R$ along the $z$-axis (modelling a pipe or column). Verify the Eikonal equation for each.
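One possible set of answers, sketched in Python. The half-space is parameterised here by an assumed unit outward normal $\mathbf{n}$ and offset $h$ (our convention; the exercise leaves it open):

```python
import numpy as np

def sdf_sphere(p, R):
    """(a) Sphere of radius R at the origin."""
    return np.linalg.norm(p) - R

def sdf_halfspace(p, n, h=0.0):
    """(b) Wall: plane with unit outward normal n at offset h;
    the solid occupies the side where dot(p, n) < h."""
    return np.dot(p, n) - h

def sdf_cylinder(p, R):
    """(c) Infinite cylinder of radius R along the z-axis."""
    return np.linalg.norm(p[:2]) - R

# Eikonal check, analytically:
#   sphere:     grad = p / |p|              -> unit norm away from the centre
#   half-space: grad = n                    -> constant unit vector
#   cylinder:   grad = (x, y, 0) / r_xy     -> unit norm away from the axis
print(sdf_cylinder(np.array([3.0, 4.0, 10.0]), R=2.0))  # 5 - 2 = 3.0
```

Note that the cylinder SDF ignores the $z$ coordinate entirely, which is exactly why it models an infinitely long pipe or column.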

Example: Constructive Solid Geometry via SDF Operations

Given SDFs $f_A$ and $f_B$ for two objects, construct the SDF for (a) the union $A \cup B$ and (b) the intersection $A \cap B$. Explain why these operations do not preserve the exact distance property.
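The standard min/max construction can be sketched as follows (illustrative code; the closing comment gestures at why exactness is lost):

```python
import numpy as np

def sdf_union(fa, fb):
    """(a) A point is inside the union if it is inside either object."""
    return lambda p: min(fa(p), fb(p))

def sdf_intersection(fa, fb):
    """(b) A point is inside the intersection only if inside both."""
    return lambda p: max(fa(p), fb(p))

sphere_a = lambda p: np.linalg.norm(p) - 1.0
sphere_b = lambda p: np.linalg.norm(p - np.array([1.5, 0.0, 0.0])) - 1.0

union = sdf_union(sphere_a, sphere_b)
inter = sdf_intersection(sphere_a, sphere_b)

p = np.array([0.75, 0.0, 0.0])     # in the overlap of the two spheres
print(union(p) < 0, inter(p) < 0)  # True True: inside both combined shapes

# The sign is always correct, but the magnitude is only a bound:
# inside the overlap, min() reports the distance to the nearer
# *individual* surface, which may lie in the interior of the other
# object and hence not on the union's actual boundary.
```

Such fields are therefore valid for sphere tracing only as conservative distance bounds, not as exact SDFs.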

SDF vs. Voxel Grid: Memory and Resolution

A voxel grid at resolution $N$ requires $O(N^3)$ storage. For $N = 512$, this is $\sim 134$ million voxels --- prohibitive for many real-time applications. A neural SDF with a modest MLP ($\sim 10^5$ parameters) can represent the same geometry continuously, with the surface extractable at arbitrary resolution via marching cubes.

For RF imaging, the practical benefit is even more pronounced: the wavelength $\lambda$ at $60$ GHz is $\sim 5$ mm, so $\lambda$-scale voxels over a $5 \times 5 \times 3$ m room require $\sim 10^9$ voxels. A neural SDF bypasses this entirely.
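The arithmetic behind these figures is easy to check (float32 storage and a $\sim 10^5$-parameter MLP are our illustrative assumptions):

```python
# Voxel grid at N = 512 versus a small MLP, assuming float32 (4 bytes).
N = 512
voxels = N**3                  # 134,217,728 cells (~134 million)
voxel_bytes = voxels * 4       # 512 MiB for the grid
mlp_bytes = 100_000 * 4        # ~0.4 MiB of MLP weights
print(voxels, voxel_bytes / 2**20, mlp_bytes / 2**20)

# Wavelength-scale grid for a 5 m x 5 m x 3 m room at 60 GHz
# (lambda ~ 5 mm, i.e. 5e-3 m voxels):
room_voxels = round((5 / 0.005) * (5 / 0.005) * (3 / 0.005))
print(f"{room_voxels:.1e}")    # 6.0e+08, i.e. order 10^9
```

The three-orders-of-magnitude gap between the grid and the MLP is what makes the neural representation attractive at mmWave resolutions.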

Historical Note: The Eikonal Equation: From Optics to Neural Geometry

1828--2020

The Eikonal equation $\|\nabla f\| = 1$ originates in geometrical optics, where it describes the propagation of wavefronts in media with unit refractive index. Hamilton (1828) and later Bruns (1895) developed the theory in the context of ray optics. The equation reappeared in computational geometry when Sethian (1996) used it as the foundation of the fast marching method for computing distance functions on grids. Park et al. (2019) brought signed distance learning into deep networks with DeepSDF, and Gropp et al. (2020) introduced the Eikonal equation as a training regulariser, showing that Eikonal regularization alone --- without ground-truth distance supervision --- suffices to learn valid SDFs from raw point clouds.

Historical Note: DeepSDF and the Rise of Neural Implicit Representations

2019--2020

Park et al. introduced DeepSDF at CVPR 2019, demonstrating that a single MLP can represent hundreds of 3D shapes via a learned latent code. The auto-decoder architecture (no encoder; optimise the latent code directly) proved both simple and powerful. Combined with Mildenhall et al.'s NeRF (2020), DeepSDF sparked the "neural implicit revolution" in computer vision --- a shift from explicit discrete representations (meshes, voxels) to continuous, differentiable function approximations. For RF imaging, this shift is particularly natural: electromagnetic fields are themselves continuous functions of space, and neural implicits provide a native representation.

Quick Check

The Eikonal equation $\|\nabla f(\mathbf{p})\| = 1$ implies which of the following?

The SDF is a convex function.

The SDF is Lipschitz continuous with constant $1$.

The surface normal is always vertical.

The SDF has no local minima.

Quick Check

In sphere tracing, what determines the step size at iteration $k$?

A fixed step size chosen before tracing.

The SDF value $f(\mathbf{r}(t_k))$ at the current position.

The gradient magnitude $\|\nabla f\|$.

The dot product of the ray direction and the surface normal.

Common Mistake: Approximate SDFs Break Sphere Tracing

Mistake:

Treating a neural network output as an exact SDF and using full-step sphere tracing: $t_{k+1} = t_k + f_\theta(\mathbf{r}(t_k))$.

Correction:

Neural SDFs satisfy $\|\nabla f_\theta\| \approx 1$ but not exactly. If $\|\nabla f_\theta\| > 1$ locally, the network overestimates the distance to the surface and the ray may overshoot it. Use a safety factor $\alpha \in (0.5, 0.9)$ or clip the step size: $t_{k+1} = t_k + \min(f_\theta(\mathbf{r}(t_k)),\, \delta_{\max})$.

Common Mistake: Spectral Bias Without Positional Encoding

Mistake:

Training an MLP directly on 3D coordinates $\mathbf{p}$ without positional encoding, expecting it to learn sharp features at the wavelength scale.

Correction:

Standard MLPs are biased toward low-frequency functions. The positional encoding $\gamma(\mathbf{p})$ lifts the input into a high-dimensional space where high-frequency targets become smooth, enabling the MLP to represent sub-wavelength detail. For RF imaging at $f_0 = 60$ GHz ($\lambda \approx 5$ mm), features at the wavelength scale require $L \geq 10$ encoding levels.

Signed Distance Function (SDF)

A scalar field $f(\mathbf{p})$ that returns the signed distance from a point $\mathbf{p}$ to the nearest surface: negative inside, zero on the surface, positive outside.

Related: Eikonal Equation, Zero Level Set

Eikonal Equation

The constraint $\|\nabla f(\mathbf{p})\| = 1$ that characterises valid signed distance functions. Used as a regularisation loss during neural SDF training.

Related: Signed Distance Function (SDF)

Zero Level Set

The surface $\mathcal{S} = \{\mathbf{p} : f(\mathbf{p}) = 0\}$ implicitly defined by a signed distance function.

Sphere Tracing

A ray-marching algorithm that exploits the distance property of SDFs to take variable-size steps along a ray, reaching the surface in $O(\log(1/\varepsilon))$ steps for smooth geometry.

Related: Signed Distance Function (SDF)

Key Takeaway

Signed distance functions provide a continuous, memory-efficient representation of 3D geometry. The Eikonal equation $\|\nabla f\| = 1$ characterises valid SDFs and serves as a regulariser for neural parameterisations. Sphere tracing enables efficient ray-surface intersection. For RF imaging, neural SDFs bypass the $O(N^3)$ voxel grid bottleneck while providing surface normals essential for physically-grounded scattering models.