Signed Distance Functions for RF
Why Signed Distance Functions for RF Imaging?
Classical RF imaging (Chapters 13--14) reconstructs the reflectivity on a discrete voxel grid. This approach has two fundamental limitations: (1) the memory cost scales as $O(N^3)$ for a 3D grid of side $N$, and (2) the representation cannot express sub-voxel geometry.
Signed distance functions offer an alternative: a continuous scalar field $f: \mathbb{R}^3 \to \mathbb{R}$ whose zero level set defines the surface, independent of any grid. When parameterised by a neural network, the SDF can represent complex 3D shapes with far fewer parameters than a voxel grid. More importantly for RF, the SDF provides surface normals (via the gradient $\nabla f$) and interior/exterior classification --- both essential for physically-grounded scattering models.
This section develops the SDF formalism and its neural parameterisation as preparation for GeRaF (Section 25.2), which applies SDFs to mmWave radar reconstruction.
Definition: Signed Distance Function
A signed distance function (SDF) $f: \mathbb{R}^3 \to \mathbb{R}$ assigns to each point $\mathbf{p} \in \mathbb{R}^3$ the signed distance to the nearest surface $\mathcal{S}$:
$$f(\mathbf{p}) = s(\mathbf{p}) \, \min_{\mathbf{q} \in \mathcal{S}} \|\mathbf{p} - \mathbf{q}\|,$$
where $s(\mathbf{p}) = -1$ if $\mathbf{p}$ is inside the object and $s(\mathbf{p}) = +1$ otherwise.
The surface is the zero level set: $\mathcal{S} = \{\mathbf{p} \in \mathbb{R}^3 : f(\mathbf{p}) = 0\}$.
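A quick numerical check of this definition, as a minimal Python sketch (the unit-sphere setup and sample count are illustrative assumptions): the brute-force minimum over sampled surface points should agree with the closed-form $\|\mathbf{p}\| - R$.

```python
import numpy as np

# Brute-force check of the definition for a unit sphere (R = 1):
# f(p) should equal s(p) * min_q ||p - q|| over surface points q.
rng = np.random.default_rng(0)
q = rng.normal(size=(100_000, 3))
q /= np.linalg.norm(q, axis=1, keepdims=True)   # samples with ||q|| = 1

p = np.array([0.3, -0.2, 0.1])                  # a point inside the sphere
unsigned = np.min(np.linalg.norm(q - p, axis=1))
sign = -1.0 if np.linalg.norm(p) < 1.0 else 1.0

print(sign * unsigned)            # ~ -0.626 (up to sampling error)
print(np.linalg.norm(p) - 1.0)    # closed form: ||p|| - R = -0.626
```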
Theorem: The Eikonal Equation
If $f$ is a valid signed distance function, then the gradient norm equals unity almost everywhere:
$$\|\nabla f(\mathbf{p})\| = 1 \quad \text{a.e.}$$
This is the Eikonal equation. At any point $\mathbf{p}$ on the surface $\mathcal{S}$, the gradient $\nabla f(\mathbf{p})$ gives the outward-pointing surface normal.
The SDF measures distance; moving one unit in any direction changes the distance by at most one unit (Lipschitz constant $1$). Furthermore, the gradient points in the direction of steepest ascent (directly away from the surface), and by the distance property the rate of increase in that direction is exactly $1$.
Lipschitz property
By the triangle inequality, for any $\mathbf{p}, \mathbf{q} \in \mathbb{R}^3$: $|f(\mathbf{p}) - f(\mathbf{q})| \le \|\mathbf{p} - \mathbf{q}\|$. Hence $f$ is Lipschitz with constant $1$, and by Rademacher's theorem it is differentiable a.e. with $\|\nabla f\| \le 1$.
Lower bound on the gradient
Let $\mathbf{q}^*$ be the surface point closest to $\mathbf{p}$. Along the segment from $\mathbf{p}$ to $\mathbf{q}^*$, the function decreases from $|f(\mathbf{p})|$ to $0$ over distance $\|\mathbf{p} - \mathbf{q}^*\| = |f(\mathbf{p})|$. The directional derivative in this direction has magnitude $1$, so $\|\nabla f\| \ge 1$ wherever the gradient exists.
Combining
From the two bounds, $\|\nabla f\| = 1$ wherever $f$ is differentiable. By Rademacher's theorem, this holds almost everywhere.
Definition: Neural SDF (DeepSDF)
A neural SDF parameterises $f$ as a multi-layer perceptron (MLP) with weights $\theta$:
$$f_\theta(\mathbf{p}) = \mathrm{MLP}_\theta(\gamma(\mathbf{p})),$$
where $\gamma$ is a positional encoding:
$$\gamma(\mathbf{p}) = \bigl(\mathbf{p},\ \sin(2^0 \pi \mathbf{p}),\ \cos(2^0 \pi \mathbf{p}),\ \ldots,\ \sin(2^{L-1} \pi \mathbf{p}),\ \cos(2^{L-1} \pi \mathbf{p})\bigr).$$
The positional encoding maps the 3D coordinate to a higher-dimensional space, enabling the MLP to represent high-frequency spatial detail despite its inherent spectral bias toward smooth functions.
Without positional encoding, MLPs with common activations (ReLU, Softplus) converge slowly to high-frequency targets. With $L$ frequency bands, the encoding provides frequencies up to $2^{L-1}$ cycles per unit length, sufficient for sub-wavelength RF imaging detail.
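The encoding is straightforward to implement. Below is a minimal numpy sketch; the function name `positional_encoding` and the default $L = 10$ are illustrative assumptions, not taken from the text.

```python
import numpy as np

def positional_encoding(p: np.ndarray, L: int = 10) -> np.ndarray:
    """Map 3D points to (3 + 6L)-dim features: (p, sin(2^k pi p), cos(2^k pi p)).

    p : (..., 3) array of coordinates, assumed scaled to roughly [-1, 1].
    L : number of frequency bands; highest frequency is 2^(L-1) cycles/unit.
    """
    features = [p]
    for k in range(L):
        freq = (2.0 ** k) * np.pi
        features.append(np.sin(freq * p))
        features.append(np.cos(freq * p))
    return np.concatenate(features, axis=-1)

# A single point maps from 3 dimensions to 3 + 6*10 = 63.
gamma = positional_encoding(np.array([0.1, -0.4, 0.7]), L=10)
print(gamma.shape)  # (63,)
```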
Definition: Sphere Tracing
Sphere tracing finds the intersection of a ray $\mathbf{r}(t) = \mathbf{o} + t\mathbf{d}$ (with $\|\mathbf{d}\| = 1$) with the zero level set of an SDF $f$. The algorithm iterates:
$$t_{k+1} = t_k + f(\mathbf{r}(t_k)), \qquad t_0 = 0.$$
At each step, the SDF value $f(\mathbf{r}(t_k))$ gives the distance to the nearest surface. Since $f$ is $1$-Lipschitz, the ball of radius $f(\mathbf{r}(t_k))$ around the current point is guaranteed to be surface-free, so the step never overshoots the surface.
Termination: When $f(\mathbf{r}(t_k)) < \varepsilon$ (surface hit) or $t_k > t_{\max}$ (ray missed).
Sphere tracing is much more efficient than dense ray marching because it takes large steps in empty regions and small steps near surfaces. For RF imaging, sphere tracing enables efficient computation of ray-surface intersections needed for differentiable rendering (Section 25.2).
Sphere Tracing Algorithm
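The iteration can be stated compactly in code. The sketch below is illustrative (the name `sphere_trace` and the default tolerances are assumptions); the `alpha` parameter anticipates the safety factor for neural SDFs discussed under Complexity.

```python
import numpy as np

def sphere_trace(sdf, origin, direction, alpha=1.0,
                 eps=1e-4, t_max=100.0, max_steps=256):
    """March t_{k+1} = t_k + alpha * sdf(r(t_k)) until |f| < eps or t > t_max.

    alpha < 1 is the safety factor for approximate (e.g. neural) SDFs.
    Returns (t, hit): distance along the ray and whether the surface was hit.
    """
    direction = direction / np.linalg.norm(direction)
    t = 0.0
    for _ in range(max_steps):
        d = sdf(origin + t * direction)
        if d < eps:          # surface hit
            return t, True
        t += alpha * d       # this step is surface-free by the distance property
        if t > t_max:        # ray escaped the scene
            break
    return t, False

# Example: ray from (-3, 0.5, 0) along +x toward a unit sphere at the origin.
sphere = lambda p: np.linalg.norm(p) - 1.0
t, hit = sphere_trace(sphere, np.array([-3.0, 0.5, 0.0]),
                      np.array([1.0, 0.0, 0.0]))
print(hit, t)  # True, ~2.134  (= 3 - sqrt(0.75))
```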
Complexity: Convergence is linear: near the surface, the error contracts at a constant rate for convex surfaces. For neural SDFs where $\|\nabla f_\theta\| = 1$ does not hold exactly, a safety factor $\alpha < 1$ can be used: $t_{k+1} = t_k + \alpha\, f_\theta(\mathbf{r}(t_k))$.
2D Signed Distance Field Visualization
Visualise the signed distance field for different 2D shapes. Blue regions are outside (positive SDF), red regions are inside (negative SDF), and the white contour marks the zero level set (the surface). Observe how $\|\nabla f\| = 1$ holds everywhere for exact SDFs.
Sphere Tracing in 2D
Watch sphere tracing find the ray-surface intersection for a 2D SDF. Each step advances by the SDF value at the current position (shown as a circle). In empty regions the steps are large; near the surface they shrink to zero.
Example: Analytical SDFs for RF-Relevant Primitives
Write the SDF for (a) a sphere of radius $R$ centred at the origin, (b) an infinite half-space (modelling a wall), and (c) an infinite cylinder of radius $R$ along the $z$-axis (modelling a pipe or column). Verify the Eikonal equation for each.
Sphere
$f(\mathbf{p}) = \|\mathbf{p}\| - R$. The gradient is $\nabla f = \mathbf{p}/\|\mathbf{p}\|$, so $\|\nabla f\| = 1$ for all $\mathbf{p} \neq \mathbf{0}$.
Half-space (wall at $x = 0$)
$f(\mathbf{p}) = p_x$. The gradient is $\nabla f = (1, 0, 0)^T$, so $\|\nabla f\| = 1$ everywhere, including on the surface $x = 0$.
Cylinder (radius $R$, axis along $z$)
$f(\mathbf{p}) = \sqrt{p_x^2 + p_y^2} - R$. The gradient is $\nabla f = (p_x, p_y, 0)^T / \sqrt{p_x^2 + p_y^2}$, so $\|\nabla f\| = 1$ everywhere except on the $z$-axis.
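The derivations above can be double-checked numerically. A sketch using a central-difference gradient estimate; the helper names and sampling ranges are illustrative assumptions.

```python
import numpy as np

def sdf_sphere(p, R=1.0):
    return np.linalg.norm(p) - R

def sdf_halfspace(p):          # wall occupying x < 0
    return p[0]

def sdf_cylinder(p, R=0.5):    # infinite cylinder along the z-axis
    return np.hypot(p[0], p[1]) - R

def grad_norm(f, p, h=1e-5):
    """Central-difference estimate of ||grad f|| at p."""
    g = [(f(p + h * e) - f(p - h * e)) / (2 * h) for e in np.eye(3)]
    return np.linalg.norm(g)

rng = np.random.default_rng(0)
for f in (sdf_sphere, sdf_halfspace, sdf_cylinder):
    norms = [grad_norm(f, rng.uniform(-2, 2, 3)) for _ in range(1000)]
    # ~0 everywhere except (with vanishing probability) near singular sets
    print(f.__name__, np.max(np.abs(np.array(norms) - 1.0)))
```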
Example: Constructive Solid Geometry via SDF Operations
Given SDFs $f_1$ and $f_2$ for two objects $A$ and $B$, construct the SDF for (a) the union $A \cup B$ and (b) the intersection $A \cap B$. Explain why these operations do not preserve the exact distance property.
Union and intersection
- Union: $f_{A \cup B}(\mathbf{p}) = \min(f_1(\mathbf{p}), f_2(\mathbf{p}))$.
- Intersection: $f_{A \cap B}(\mathbf{p}) = \max(f_1(\mathbf{p}), f_2(\mathbf{p}))$.
The zero level sets are correct: a point is inside the union iff $\min(f_1, f_2) < 0$, and inside the intersection iff $\max(f_1, f_2) < 0$.
Distance property violation
Consider two overlapping spheres. For the intersection, take a point just outside the lens-shaped region $A \cap B$, near the seam where the two surfaces meet: the nearest point on each full sphere lies outside the other sphere, so it is not on the intersection surface, and $\max(f_1, f_2)$ underestimates the true distance. Formally, $\max(f_1(\mathbf{p}), f_2(\mathbf{p})) \le d(\mathbf{p}, \partial(A \cap B))$, with strict inequality near the seam; min/max yield valid bounds (still safe for sphere tracing) but not exact distances. For neural SDFs, the Eikonal regulariser (Section 25.4) corrects this during training.
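A small numerical illustration of the seam effect, under assumed geometry (two unit spheres offset along $x$): comparing $\max(f_1, f_2)$ with a brute-force distance to the true intersection surface exposes the underestimate.

```python
import numpy as np

c1, c2, R = np.array([-0.5, 0, 0]), np.array([0.5, 0, 0]), 1.0
f1 = lambda p: np.linalg.norm(p - c1) - R
f2 = lambda p: np.linalg.norm(p - c2) - R
f_intersection = lambda p: max(f1(p), f2(p))

# Query point outside the lens A ∩ B, close to the seam circle.
q = np.array([0.0, 1.5, 0.0])

# Brute force: sample the true intersection surface (each sphere's
# surface restricted to the inside of the other sphere).
rng = np.random.default_rng(0)
dirs = rng.normal(size=(200_000, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
surf = np.vstack([c1 + R * dirs, c2 + R * dirs])
on_lens = surf[(np.linalg.norm(surf - c1, axis=1) <= R + 1e-9) &
               (np.linalg.norm(surf - c2, axis=1) <= R + 1e-9)]
true_dist = np.min(np.linalg.norm(on_lens - q, axis=1))

print(f"max(f1, f2)   = {f_intersection(q):.3f}")  # ~0.581
print(f"true distance = {true_dist:.3f}")          # ~0.634: max() underestimates
```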
SDF vs. Voxel Grid: Memory and Resolution
A voxel grid at resolution $N^3$ requires $O(N^3)$ storage. For $N = 512$, this is $\approx 134$ million voxels --- prohibitive for many real-time applications. A neural SDF with a modest MLP (on the order of $10^5$--$10^6$ parameters) can represent the same geometry continuously, with the surface extractable at arbitrary resolution via marching cubes.
For RF imaging, the practical benefit is even more pronounced: the wavelength at mmWave frequencies (e.g., 77 GHz) is a few millimetres, so millimetre-scale voxels over a room-sized volume require on the order of $10^{10}$--$10^{12}$ voxels. A neural SDF bypasses this entirely.
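Back-of-envelope arithmetic for the comparison above, assuming float32 storage and an illustrative 8-layer, width-256 MLP (weights only):

```python
# Voxel grid vs. neural SDF, float32 throughout (illustrative numbers).
N = 512
voxels = N ** 3                  # 134,217,728 cells
voxel_bytes = voxels * 4         # ~537 MB for one scalar per voxel

mlp_params = 8 * 256 * 256       # 8 hidden layers of width 256
mlp_bytes = mlp_params * 4       # ~2.1 MB

print(f"voxel grid: {voxel_bytes / 1e6:.0f} MB, MLP: {mlp_bytes / 1e6:.1f} MB")
```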
Historical Note: The Eikonal Equation: From Optics to Neural Geometry
1828--2020. The Eikonal equation originates in geometrical optics, where it describes the propagation of wavefronts in media with unit refractive index. Hamilton (1828) and later Bruns (1895) developed the theory in the context of ray optics. The equation reappeared in computational geometry when Sethian (1996) used it as the foundation of the fast marching method for computing distance functions on grids. Park et al. (2019) brought signed distance functions into deep learning with DeepSDF, and Gropp et al. (2020) brought in the Eikonal equation itself, showing that Eikonal regularisation alone --- without ground-truth distance supervision --- suffices to learn valid SDFs from raw point clouds.
Historical Note: DeepSDF and the Rise of Neural Implicit Representations
2019--2020. Park et al. introduced DeepSDF at CVPR 2019, demonstrating that a single MLP can represent hundreds of 3D shapes via a learned latent code. The auto-decoder architecture (no encoder; optimise the latent code directly) proved both simple and powerful. Combined with Mildenhall et al.'s NeRF (2020), DeepSDF sparked the "neural implicit revolution" in computer vision --- a shift from explicit discrete representations (meshes, voxels) to continuous, differentiable function approximations. For RF imaging, this shift is particularly natural: electromagnetic fields are themselves continuous functions of space, and neural implicits provide a native representation.
Quick Check
The Eikonal equation $\|\nabla f\| = 1$ implies which of the following?
The SDF is a convex function.
The SDF is Lipschitz continuous with constant $1$.
The surface normal is always vertical.
The SDF has no local minima.
$\|\nabla f\| = 1$ almost everywhere implies $|f(\mathbf{p}) - f(\mathbf{q})| \le \|\mathbf{p} - \mathbf{q}\|$, which is exactly Lipschitz continuity with constant $1$.
Quick Check
In sphere tracing, what determines the step size at iteration $k$?
A fixed step size chosen before tracing.
The SDF value $f(\mathbf{r}(t_k))$ at the current position.
The gradient magnitude $\|\nabla f(\mathbf{r}(t_k))\|$.
The dot product of the ray direction and the surface normal.
The SDF value gives the distance to the nearest surface, so stepping by $f(\mathbf{r}(t_k))$ is the largest step that is guaranteed not to overshoot.
Common Mistake: Approximate SDFs Break Sphere Tracing
Mistake:
Treating a neural network output $f_\theta$ as an exact SDF and using full-step sphere tracing: $t_{k+1} = t_k + f_\theta(\mathbf{r}(t_k))$.
Correction:
Neural SDFs satisfy $\|\nabla f_\theta\| \approx 1$ but not exactly. If $\|\nabla f_\theta\| > 1$ locally, the network overestimates the distance and the ray may overshoot the surface. Use a safety factor or clip the step size: $t_{k+1} = t_k + \alpha\, f_\theta(\mathbf{r}(t_k))$ with $\alpha < 1$.
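A minimal demonstration of the failure and the fix, using a synthetic SDF deliberately scaled to violate the Eikonal bound (the 30% overestimate and the tolerances are illustrative assumptions):

```python
import numpy as np

# A deliberately bad "SDF" that overestimates true distance by 30%,
# mimicking a neural SDF whose gradient norm locally exceeds 1.
bad_sdf = lambda p: 1.3 * (np.linalg.norm(p) - 1.0)

origin = np.array([-3.0, 0.0, 0.0])
direction = np.array([1.0, 0.0, 0.0])   # true hit at t = 2.0

for alpha in (1.0, 0.7):                # full step vs. safety factor
    t = 0.0
    for _ in range(100):
        d = bad_sdf(origin + t * direction)
        if abs(d) < 1e-4 or d < 0:      # stop on hit or on overshoot
            break
        t += alpha * d
    # alpha=1.0 overshoots to t=2.6 (inside the object);
    # alpha=0.7 <= 1/1.3 keeps every step conservative: t~2.0000.
    print(f"alpha={alpha}: stopped at t={t:.4f}")
```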
Common Mistake: Spectral Bias Without Positional Encoding
Mistake:
Training an MLP directly on 3D coordinates without positional encoding, expecting it to learn sharp features at the wavelength scale.
Correction:
Standard MLPs are biased toward low-frequency functions. The positional encoding lifts the input into a high-dimensional space where high-frequency targets become smooth, enabling the MLP to represent sub-wavelength detail. For RF imaging at mmWave frequencies ($\lambda$ of a few millimetres), features at the wavelength scale in a room-sized scene require roughly a dozen encoding levels; a worked estimate follows.
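As a rough worked estimate (the 5 m scene extent and the 77 GHz carrier, $\lambda \approx 3.9$ mm, are illustrative assumptions): normalising a scene of extent $D$ to the unit cube, wavelength-scale detail needs about $D/\lambda$ cycles per unit length, and the encoding reaches $2^{L-1}$, so
$$L \;\ge\; 1 + \log_2 \frac{D}{\lambda} \;=\; 1 + \log_2 \frac{5\ \mathrm{m}}{3.9\ \mathrm{mm}} \;\approx\; 11.3,$$
i.e. about twelve frequency bands.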
Signed Distance Function (SDF)
A scalar field that returns the signed distance from a point to the nearest surface: negative inside, zero on the surface, positive outside.
Related: Eikonal Equation, Zero Level Set
Eikonal Equation
The constraint $\|\nabla f\| = 1$ that characterises valid signed distance functions. Used as a regularisation loss during neural SDF training.
Related: Signed Distance Function (SDF)
Zero Level Set
The surface $\{\mathbf{p} : f(\mathbf{p}) = 0\}$ implicitly defined by a signed distance function.
Sphere Tracing
A ray-marching algorithm that exploits the distance property of SDFs to take variable-size steps along a ray, reaching the surface in $O(\log(1/\varepsilon))$ steps for smooth geometry.
Related: Signed Distance Function (SDF)
Key Takeaway
Signed distance functions provide a continuous, memory-efficient representation of 3D geometry. The Eikonal equation characterises valid SDFs and serves as a regulariser for neural parameterisations. Sphere tracing enables efficient ray-surface intersection. For RF imaging, neural SDFs bypass the voxel grid bottleneck while providing surface normals essential for physically-grounded scattering models.