Eikonal Regularization and Material Estimation

Beyond Geometry: Estimating What Surfaces Are Made Of

Sections 25.1--25.2 focused on reconstructing where objects are (the geometry). But for RF propagation modelling, we also need to know what the surfaces are made of: concrete walls reflect most of the incident power; glass windows transmit much of it; metallic surfaces are near-perfect reflectors. These material properties --- reflectivity, roughness, permittivity --- determine how the electromagnetic wave interacts with the scene and are essential for accurate channel prediction.

This section develops the joint estimation framework: a neural SDF for geometry, coupled with material property networks, all trained end-to-end from multi-view RF measurements with Eikonal regularisation enforcing geometric validity.


Definition: Eikonal Regularization Loss

The Eikonal regularization loss penalises deviations of the gradient norm from unity:

$$\mathcal{L}_{\text{eik}}(\theta) = \mathbb{E}_{\mathbf{p} \sim \mathcal{U}(\Omega)} \bigl[\bigl(\|\nabla_{\mathbf{p}} f_\theta(\mathbf{p})\| - 1\bigr)^2\bigr],$$

where $\Omega$ is the scene volume and the expectation is approximated by sampling random points. In practice, the sample set $\mathcal{P}$ is a mixture:

  • Uniform samples in $\Omega$ (enforce the Eikonal equation globally).
  • Near-surface samples drawn within a band $\{\mathbf{p} : |f_\theta(\mathbf{p})| < \delta\}$ (enforce the equation where it matters most).
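
The mixed sampling strategy and the loss itself are easy to sketch. Below is a minimal NumPy illustration (the function names are our own, and the finite-difference gradient is a stand-in for autodiff): a true SDF such as the unit sphere attains near-zero Eikonal loss, while a rescaled copy does not.

```python
import numpy as np

def eikonal_loss(sdf, points, eps=1e-4):
    """Monte-Carlo Eikonal loss E[(||grad f|| - 1)^2] via central finite differences."""
    grads = np.zeros_like(points)
    for k in range(points.shape[1]):
        step = np.zeros(points.shape[1])
        step[k] = eps
        grads[:, k] = (sdf(points + step) - sdf(points - step)) / (2 * eps)
    return np.mean((np.linalg.norm(grads, axis=1) - 1.0) ** 2)

def sample_mixture(sdf, n_uniform, n_near, bounds=2.0, delta=0.1, rng=None):
    """Mixed batch: uniform points in the volume plus points kept within |f| < delta."""
    rng = rng or np.random.default_rng(0)
    uniform = rng.uniform(-bounds, bounds, size=(n_uniform, 3))
    candidates = rng.uniform(-bounds, bounds, size=(20 * n_near, 3))
    near = candidates[np.abs(sdf(candidates)) < delta][:n_near]
    return np.vstack([uniform, near])

# A true SDF (unit sphere) has ~0 Eikonal loss; a squashed copy has gradient
# norm 0.5 everywhere, giving loss (0.5 - 1)^2 = 0.25.
sphere = lambda p: np.linalg.norm(p, axis=1) - 1.0
pts = sample_mixture(sphere, 512, 256)
print(eikonal_loss(sphere, pts))                      # ~0
print(eikonal_loss(lambda p: 0.5 * sphere(p), pts))   # ~0.25
```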

Gropp et al. (2020) showed that Eikonal regularisation alone --- without any ground-truth distance supervision --- can learn valid SDFs from unoriented point clouds. This "Implicit Geometric Regularization" (IGR) result means that for RF imaging, we do not need ground-truth 3D meshes; the radar measurements plus the Eikonal constraint suffice.

Theorem: Eikonal Self-Supervision for SDF Learning

Let $\mathcal{X} = \{\mathbf{p}_{i}\}$ be a set of points on or near a surface $\mathcal{S}$, and let $f_\theta$ minimise the loss

$$\mathcal{L}(\theta) = \frac{1}{|\mathcal{X}|}\sum_{i}|f_\theta(\mathbf{p}_{i})|^2 + \lambda\, \mathbb{E}_{\mathbf{p} \sim \mathcal{U}(\Omega)} \bigl[(\|\nabla f_\theta(\mathbf{p})\| - 1)^2\bigr].$$

If $f_\theta$ is expressive enough, the minimiser satisfies $f_\theta(\mathbf{p}) \approx \pm d(\mathbf{p}, \mathcal{S})$: the network learns the signed distance function from surface points alone, without distance labels.

The first term forces $f_\theta = 0$ at the surface points. The Eikonal term forces $\|\nabla f_\theta\| = 1$ everywhere. The only smooth function satisfying both constraints is the signed distance function (up to a global sign flip).
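
The argument can be checked numerically on a toy family of candidates. The sketch below (an illustrative construction of ours, not from the IGR paper) restricts the "network" to $f(\mathbf{p}) = \alpha(\|\mathbf{p}\| - 1)$ fitted to a unit-sphere point cloud; the combined loss is minimised only at $|\alpha| = 1$, matching the sign-flip caveat.

```python
import numpy as np

# Points on the unit sphere: the surface term vanishes for every alpha, and
# ||grad f|| = |alpha| everywhere, so only the Eikonal term selects alpha.
rng = np.random.default_rng(1)
surface = rng.normal(size=(200, 3))
surface /= np.linalg.norm(surface, axis=1, keepdims=True)

def loss(alpha, lam=0.1):
    f_surf = alpha * (np.linalg.norm(surface, axis=1) - 1.0)  # ~0 on the surface
    eik = (abs(alpha) - 1.0) ** 2                             # ||grad f|| = |alpha|
    return np.mean(f_surf ** 2) + lam * eik

alphas = np.linspace(-2, 2, 401)
best = alphas[np.argmin([loss(a) for a in alphas])]
print(best)   # |best| = 1: the SDF, up to a global sign flip
```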

Definition: Material Property Networks

The joint scene model augments the neural SDF with material property networks:

  1. Reflectivity: $\Gamma_\phi(\mathbf{p}) \in [0, 1]$, the fraction of incident power reflected at the surface.
  2. Roughness: $\rho_\psi(\mathbf{p}) \in [0, 1]$, parameterising the scattering lobe width (smooth metal $\to 0$, rough concrete $\to 1$).
  3. Relative permittivity: $\epsilon_{r,\xi}(\mathbf{p}) \in [1, \infty)$, governing penetration depth and Fresnel reflection coefficients.

All networks share the backbone feature extractor and positional encoding, with separate output heads. The total set of learnable parameters is $(\theta, \phi, \psi, \xi)$.
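
A minimal sketch of the shared-backbone architecture, in NumPy with random (untrained) weights; the layer sizes, the sigmoid/softplus output squashing, and all names are illustrative assumptions, not a prescribed design. The point is that one feature extractor feeds four heads whose activations enforce the ranges listed above.

```python
import numpy as np

rng = np.random.default_rng(0)

def positional_encoding(p, n_freqs=4):
    """Sin/cos encoding of 3D points, shared by all heads."""
    freqs = 2.0 ** np.arange(n_freqs) * np.pi
    angles = p[:, :, None] * freqs                       # (N, 3, n_freqs)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1).reshape(len(p), -1)

class JointSceneModel:
    """Shared backbone with per-quantity output heads (illustrative, untrained)."""
    def __init__(self, d_in, d_hidden=64):
        self.W0 = rng.normal(scale=0.1, size=(d_in, d_hidden))
        self.heads = {name: rng.normal(scale=0.1, size=(d_hidden, 1))
                      for name in ("sdf", "reflectivity", "roughness", "permittivity")}

    def __call__(self, p):
        h = np.tanh(positional_encoding(p) @ self.W0)    # shared features
        raw = {k: (h @ W).ravel() for k, W in self.heads.items()}
        sigmoid = lambda x: 1 / (1 + np.exp(-x))
        return {
            "sdf": raw["sdf"],                                            # unconstrained (theta)
            "reflectivity": sigmoid(raw["reflectivity"]),                 # [0, 1] (phi)
            "roughness": sigmoid(raw["roughness"]),                       # [0, 1] (psi)
            "permittivity": 1.0 + np.log1p(np.exp(raw["permittivity"])),  # [1, inf) (xi)
        }

model = JointSceneModel(d_in=3 * 2 * 4)   # 3 coords x (sin, cos) x 4 frequencies
out = model(rng.uniform(-1, 1, size=(5, 3)))
```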


Definition: Joint Geometry-Material Training Loss

The full training objective combines data fidelity, Eikonal regularisation, and optional material priors:

$$\mathcal{L} = \underbrace{\frac{1}{QV}\sum_{v=1}^{V}\sum_{q=1}^{Q} \bigl(P_{\text{MF}}^{(v)}(\mathbf{p}_{q}) - \hat{P}_{\text{MF}}^{(v)}(\mathbf{p}_{q})\bigr)^2}_{\text{multi-view data fidelity}} + \lambda_{\text{eik}}\,\mathcal{L}_{\text{eik}} + \lambda_{\Gamma}\,\mathcal{L}_{\Gamma} + \lambda_{\epsilon}\,\mathcal{L}_{\epsilon},$$

where:

  • The sum over views $v = 1, \ldots, V$ uses MF power images from different radar positions.
  • $\mathcal{L}_{\Gamma} = \|\Gamma_\phi\|_{\text{TV}}$ encourages piecewise-constant reflectivity (walls have uniform material).
  • $\mathcal{L}_{\epsilon} = \sum_{\mathbf{p}} (\epsilon_{r,\xi}(\mathbf{p}) - \bar{\epsilon}_r)^2$ softly regularises permittivity toward typical building-material values.
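
Both regularisers are one-liners. The sketch below is a hedged illustration: it assumes the reflectivity is evaluated on a 2D grid of surface samples, uses the anisotropic variant of the TV norm, and picks ε̄_r = 5 as a plausible default for concrete-like materials.

```python
import numpy as np

def tv_loss(gamma_map):
    """Anisotropic total variation of a reflectivity map: sum of |neighbour differences|."""
    return np.abs(np.diff(gamma_map, axis=0)).sum() + np.abs(np.diff(gamma_map, axis=1)).sum()

def permittivity_prior(eps_r, eps_bar=5.0):
    """Quadratic pull toward a typical building-material value (eps_bar is an assumed default)."""
    return np.sum((eps_r - eps_bar) ** 2)

# A piecewise-constant map (concrete wall, glass window) is cheap under TV;
# the same values in scrambled order are expensive.
wall = np.full((8, 8), 0.7)
wall[:, 4:] = 0.2                               # window occupies the right half
scrambled = np.random.default_rng(0).permutation(wall.ravel()).reshape(8, 8)
print(tv_loss(wall), tv_loss(scrambled))        # TV(wall) = 4.0; scrambled is far larger
```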

Example: Multi-View Estimation of Room Geometry and Materials

A mmWave MIMO radar is mounted on a mobile robot that collects measurements from $V = 8$ positions in an indoor room with concrete walls ($\epsilon_r \approx 5$, $\Gamma \approx 0.7$) and a glass window ($\epsilon_r \approx 6$, $\Gamma \approx 0.2$). Describe the joint estimation procedure and what each view contributes.

Eikonal Loss During Training

Observe how the Eikonal loss, data-fidelity loss, and surface quality evolve during training. Increasing $\lambda_{\text{eik}}$ produces smoother surfaces (enforcing $\|\nabla f\| = 1$) but may slow convergence of the data-fidelity term.


Joint Geometry and Material Estimation

Visualise the estimated reflectivity map overlaid on the reconstructed SDF surface. Different materials appear as different colours on the surface: high reflectivity (metal, concrete) in red, low reflectivity (glass, drywall) in blue.


Theorem: Permittivity Identifiability from Multi-Angle RF Data

Consider a planar dielectric interface with relative permittivity $\epsilon_r$. The Fresnel reflection coefficient for TE polarisation is

$$R_{\perp}(\theta_i) = \frac{\cos\theta_i - \sqrt{\epsilon_r - \sin^2\theta_i}}{\cos\theta_i + \sqrt{\epsilon_r - \sin^2\theta_i}},$$

where $\theta_i$ is the angle of incidence. Given noise-free measurements of $|R_{\perp}(\theta_i)|^2$ at two or more distinct angles $\theta_1 \neq \theta_2$, the permittivity $\epsilon_r$ is uniquely determined.

The Fresnel coefficient is a nonlinear function of $\epsilon_r$ and $\theta_i$. A single measurement at one angle leaves a one-parameter family of $(\epsilon_r, \Gamma)$ pairs consistent with the data; a second angle breaks this degeneracy because the angular dependence of $R_\perp$ is a signature of $\epsilon_r$.
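
The degeneracy-breaking argument can be made concrete: if each measurement is scaled by an unknown reflectivity gain, the ratio of the measurements at two angles cancels the gain and leaves a function of ε_r alone. A small NumPy sketch (grid-search inversion is our choice of solver; a real estimator would do this inside the training loss):

```python
import numpy as np

def R_perp_sq(eps_r, theta):
    """TE Fresnel power reflectance at a dielectric interface."""
    c, s2 = np.cos(theta), np.sin(theta) ** 2
    root = np.sqrt(eps_r - s2)
    return ((c - root) / (c + root)) ** 2

# Simulate measurements Gamma * |R|^2 at two angles with an unknown gain Gamma.
eps_true, gamma_true = 5.0, 0.8
thetas = np.radians([20.0, 60.0])
meas = gamma_true * R_perp_sq(eps_true, thetas)

# One angle is ambiguous in (eps_r, Gamma), but the RATIO of two angles
# cancels Gamma and depends only on eps_r: grid-search it.
grid = np.linspace(1.5, 10.0, 2000)
ratios = R_perp_sq(grid, thetas[0]) / R_perp_sq(grid, thetas[1])
eps_hat = grid[np.argmin(np.abs(ratios - meas[0] / meas[1]))]
gamma_hat = meas[0] / R_perp_sq(eps_hat, thetas[0])
print(eps_hat, gamma_hat)   # recovers ~5.0 and ~0.8
```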

Common Mistake: Choosing the Eikonal Weight $\lambda_{\text{eik}}$

Mistake:

Setting $\lambda_{\text{eik}}$ too large (e.g., $\lambda_{\text{eik}} = 10$), causing the Eikonal term to dominate and the network to learn a trivial solution $f_\theta(\mathbf{p}) \approx \|\mathbf{p}\|$ (a sphere) that perfectly satisfies the Eikonal equation but ignores the data.

Correction:

Start with $\lambda_{\text{eik}} \in [0.01, 0.1]$ and tune via validation. A good heuristic: set $\lambda_{\text{eik}}$ so that the Eikonal loss and data-fidelity loss are of comparable magnitude after the first few training epochs. Some implementations use a warm-up schedule, starting with $\lambda_{\text{eik}} = 0$ and gradually increasing it.
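
A possible warm-up schedule and the comparable-magnitude heuristic, as one hedged sketch (the linear ramp and its default values are illustrative, not prescriptive):

```python
def eikonal_weight(step, target=0.1, warmup_steps=1000):
    """Linear warm-up: lambda_eik = 0 at the start, ramping to the target value."""
    return target * min(step / warmup_steps, 1.0)

def balance_heuristic(data_loss, eik_loss):
    """Pick lambda_eik so the two loss terms are of comparable magnitude."""
    return data_loss / max(eik_loss, 1e-12)

# e.g. after a few epochs: data loss 0.2, raw Eikonal loss 2.0 -> lambda_eik = 0.1
print(eikonal_weight(0), eikonal_weight(500), balance_heuristic(0.2, 2.0))
```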

Joint Geometry-Material Training Algorithm

Complexity: per iteration, $O(|\mathcal{P}| \cdot d)$ for the MLP forward and backward passes, where $d$ is the MLP depth; total $O(T \cdot |\mathcal{P}| \cdot d)$.
Input: MF power images $\{P_{\text{MF}}^{(v)}\}_{v=1}^V$, weights $\lambda_{\text{eik}}, \lambda_\Gamma, \lambda_\epsilon$
Output: trained parameters $(\theta^*, \phi^*, \psi^*, \xi^*)$
1. Initialise $f_\theta$ with geometric initialisation (large sphere)
2. Initialise $\Gamma_\phi, \rho_\psi, \epsilon_{r,\xi}$ randomly
3. for $t = 1, \ldots, T$ do
4.     Sample a batch of 3D points $\mathcal{P}$ (uniform + near-surface)
5.     Compute SDF values $f_\theta(\mathbf{p})$ for all $\mathbf{p} \in \mathcal{P}$
6.     Compute materials $\Gamma_\phi, \rho_\psi, \epsilon_{r,\xi}$ at surface points
7.     Render MF power $\hat{P}_{\text{MF}}^{(v)}$ for each view $v$
8.     Compute $\mathcal{L}_{\text{data}} + \lambda_{\text{eik}} \mathcal{L}_{\text{eik}} + \lambda_\Gamma \mathcal{L}_\Gamma + \lambda_\epsilon \mathcal{L}_\epsilon$
9.     Update $(\theta, \phi, \psi, \xi)$ via Adam
10. end for
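
The loop structure can be exercised end to end on a toy problem. In the runnable miniature below, the neural SDF is replaced by a parametric sphere (so the Eikonal constraint holds exactly by construction), MF rendering and the material heads are omitted, and Adam is replaced by plain gradient descent on a numerical gradient; only the sample/evaluate/update skeleton of steps 3-10 is kept.

```python
import numpy as np

# "Network": a parametric sphere SDF f(p) = ||p - c|| - r.
# "Data": noise-free surface points, a stand-in for matched-filter detections.
rng = np.random.default_rng(0)
c_true, r_true = np.array([0.5, -0.3, 0.2]), 1.5
dirs = rng.normal(size=(256, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
surface_pts = c_true + r_true * dirs

def sdf(params, p):
    c, r = params[:3], params[3]
    return np.linalg.norm(p - c, axis=1) - r

def data_loss(params):
    """Surface term of the IGR loss: f should vanish at the detected points."""
    return np.mean(sdf(params, surface_pts) ** 2)

params = np.array([0.0, 0.0, 0.0, 1.0])        # geometric initialisation: unit sphere
for t in range(500):
    grad = np.zeros(4)
    for k in range(4):                          # numerical gradient (autodiff stand-in)
        e = np.zeros(4)
        e[k] = 1e-5
        grad[k] = (data_loss(params + e) - data_loss(params - e)) / 2e-5
    params -= 0.3 * grad                        # plain gradient step in place of Adam
print(params)   # converges to ~[0.5, -0.3, 0.2, 1.5]
```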
🔧 Engineering Note

Building Material Databases for RF Estimation

The regulariser $\mathcal{L}_\epsilon$ requires knowledge of typical permittivity values. Common building materials at mmWave frequencies:

Material | $\epsilon_r$ (60 GHz) | $\Gamma$ (normal incidence)
Concrete | 5.0--6.5 | 0.6--0.8
Glass | 5.5--7.0 | 0.2--0.4
Drywall | 2.5--3.0 | 0.3--0.5
Metal | $\gg 1$ (conductor) | $\approx 1.0$
Wood | 2.0--3.0 | 0.2--0.4

These values can initialise the permittivity prior or serve as discrete classes for a classification-based material estimator.

Primitive-Based SDF Representations

An alternative to fully neural SDFs is to represent scenes as combinations of simple analytic primitives (planes, cylinders, spheres) whose SDFs are known in closed form. Boolean operations (union, intersection) compose primitives into complex scenes. This approach has several advantages for RF:

  • Surface normals are analytically available (no autodiff needed).
  • Only surface voxels contribute to scattering --- highly sparse.
  • Wavelength-scale ($\lambda$-scale) discretisation is feasible without excessive memory.
  • Interpretability: the scene is parameterised by a small number of geometric parameters (wall positions, column radii).

The cost is reduced expressiveness: scenes that do not decompose into simple primitives (e.g., furniture, vegetation) require the fully neural approach.
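
Primitive composition is compact enough to show in full. A NumPy sketch with closed-form plane and sphere SDFs, min/max booleans, and an analytic sphere normal (the scene and point values are our own toy example; note that min-union is the exact union SDF outside the shapes but only a bound deep inside overlaps):

```python
import numpy as np

def sd_plane(p, n, d):
    """Signed distance to the half-space n.p <= d (n must be unit length)."""
    return p @ n - d

def sd_sphere(p, c, r):
    return np.linalg.norm(p - c, axis=-1) - r

def sphere_normal(p, c):
    """Analytic surface normal of the sphere SDF (its exact gradient, no autodiff)."""
    v = p - c
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

union = lambda a, b: np.minimum(a, b)          # min of SDFs = union of shapes
intersection = lambda a, b: np.maximum(a, b)   # max of SDFs = intersection

# A toy "wall with a column": half-space x <= 2 unioned with a small sphere.
n = np.array([1.0, 0.0, 0.0])
scene = lambda p: union(sd_plane(p, n, 2.0), sd_sphere(p, np.zeros(3), 0.5))

pts = np.array([[0.0, 0.0, 0.0], [2.0, 5.0, 0.0], [1.0, 0.0, 0.0]])
print(scene(pts))   # union SDF values: [-2, 0, -1]
```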

Quick Check

What is the primary purpose of the Eikonal regulariser $\mathcal{L}_{\text{eik}} = \mathbb{E}[(\|\nabla f_\theta\| - 1)^2]$ in neural SDF training?

  • To make the network converge faster.
  • To ensure the network output is a valid signed distance function.
  • To reduce the number of training parameters.
  • To prevent overfitting to noise.

Eikonal Regularization

A training-loss term $\mathcal{L}_{\text{eik}} = \mathbb{E}[(\|\nabla f_\theta\| - 1)^2]$ that encourages a neural network to output a valid signed distance function by enforcing the Eikonal equation.

Related: Eikonal Equation, Signed Distance Function (SDF)

Relative Permittivity

The ratio $\epsilon_r = \epsilon / \epsilon_0$ of a material's permittivity to the vacuum permittivity. Governs the Fresnel reflection coefficient at a dielectric interface.

Related: Surface Reflectivity

Surface Reflectivity

The fraction $\Gamma(\mathbf{p}) \in [0, 1]$ of incident electromagnetic power reflected at a surface point $\mathbf{p}$. Depends on material properties and incidence angle.

Historical Note: Implicit Geometric Regularization

2020

Gropp, Yariv, Haim, Atzmon, and Lipman at the Weizmann Institute introduced Implicit Geometric Regularization (IGR) in 2020, demonstrating that the Eikonal loss alone can supervise SDF learning from raw point clouds without distance labels. This result was surprising: the community had assumed that explicit distance supervision was necessary. IGR opened the door to learning SDFs from any data source that provides surface point locations --- including radar returns, which give approximate surface points via matched filtering.

Key Takeaway

Eikonal regularisation enforces valid SDF geometry without requiring ground-truth distance labels. Joint estimation of geometry, reflectivity, roughness, and permittivity from multi-view RF data is feasible because the angular dependence of the Fresnel reflection coefficient provides material-discriminating information. The full training pipeline optimises all parameters end-to-end, with the Eikonal constraint serving as the geometric "anchor" that prevents the optimisation from converging to physically meaningless solutions.