Differentiable Rendering
The Rendering Equation as a Forward Model
In Chapter 1 we introduced the forward operator abstractly. For optical imaging, the forward operator is precisely the rendering equation: given a 3D scene (geometry, materials, lighting), it computes the 2D image observed by a camera. Making this renderer differentiable means we can solve the inverse problem, inferring geometry, materials, and lighting from images, by gradient descent.
This section introduces the rendering equation and its inverse; Section 28.3 adapts both to RF wave propagation.
Definition: The Rendering Equation
The Rendering Equation
The rendering equation (Kajiya, 1986) describes how light interacts with surfaces:

$$L_o(\mathbf{x}, \omega_o) = L_e(\mathbf{x}, \omega_o) + \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\, L_i(\mathbf{x}, \omega_i)\, (\omega_i \cdot \mathbf{n})\, d\omega_i$$

where:
- $L_o(\mathbf{x}, \omega_o)$: outgoing radiance at surface point $\mathbf{x}$ in direction $\omega_o$.
- $L_e(\mathbf{x}, \omega_o)$: emitted radiance (light sources).
- $f_r(\mathbf{x}, \omega_i, \omega_o)$: BRDF (bidirectional reflectance distribution function) describing the surface material.
- $L_i(\mathbf{x}, \omega_i)$: incoming radiance from direction $\omega_i$.
- $\mathbf{n}$: surface normal at $\mathbf{x}$.
- $\Omega$: hemisphere above the surface.
The integral sums light arriving from all directions, weighted by the BRDF and the cosine foreshortening factor $(\omega_i \cdot \mathbf{n})$.
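As a quick numerical check of the equation above, the hemisphere integral can be estimated by Monte Carlo sampling. The sketch below (plain NumPy, illustrative values) assumes a Lambertian BRDF $f_r = \rho/\pi$ under uniform incident radiance, for which the integral evaluates to $\rho L_i$ because $\int_\Omega (\omega_i \cdot \mathbf{n})\, d\omega_i = \pi$:

```python
import numpy as np

# Monte Carlo estimate of the rendering-equation integral for a
# Lambertian surface under uniform incident radiance (illustrative values).
rng = np.random.default_rng(0)
N = 200_000
rho, L_i, L_e = 0.8, 1.0, 0.1    # albedo, incident radiance, emission

# Uniform sampling of the hemisphere: cos(theta) is uniform on [0, 1],
# and the pdf over solid angle is 1 / (2*pi).
cos_theta = rng.random(N)
pdf = 1.0 / (2.0 * np.pi)

f_r = rho / np.pi                 # Lambertian BRDF
integral = np.mean(f_r * L_i * cos_theta / pdf)
L_o = L_e + integral

print(round(L_o, 2))   # ≈ L_e + rho * L_i = 0.9, since the cosine integral is pi
```

The same estimator works for any BRDF; only the `f_r` term changes.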
Definition: Bidirectional Reflectance Distribution Function (BRDF)
Bidirectional Reflectance Distribution Function (BRDF)
The BRDF characterises how a surface reflects light:

$$f_r(\mathbf{x}, \omega_i, \omega_o) = \frac{dL_o(\mathbf{x}, \omega_o)}{L_i(\mathbf{x}, \omega_i)\,(\omega_i \cdot \mathbf{n})\, d\omega_i}$$

Physically valid BRDFs satisfy reciprocity ($f_r(\mathbf{x}, \omega_i, \omega_o) = f_r(\mathbf{x}, \omega_o, \omega_i)$) and energy conservation ($\int_\Omega f_r(\mathbf{x}, \omega_i, \omega_o)\,(\omega_o \cdot \mathbf{n})\, d\omega_o \le 1$).
The Lambertian model ($f_r = \rho/\pi$, constant) is the simplest; the Cook-Torrance microfacet model is the standard in physically based rendering. In RF, the "BRDF" is replaced by the bistatic scattering cross-section $\sigma_b(\omega_i, \omega_o)$.
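The validity conditions can be checked numerically for the Lambertian model. With uniform hemisphere sampling, the directional-hemispherical reflectance $\int_\Omega f_r\,(\omega_o \cdot \mathbf{n})\, d\omega_o$ comes out to $\rho$, so energy is conserved whenever $\rho \le 1$ (a sketch with illustrative albedos):

```python
import numpy as np

# Energy conservation for the Lambertian BRDF f_r = rho / pi:
# the reflected fraction of incident energy should equal rho (<= 1).
rng = np.random.default_rng(1)
cos_theta = rng.random(100_000)      # uniform hemisphere, pdf = 1/(2*pi)
pdf = 1.0 / (2.0 * np.pi)

for rho in (0.2, 0.5, 1.0):
    f_r = rho / np.pi                # constant, hence trivially reciprocal
    reflectance = np.mean(f_r * cos_theta / pdf)
    print(rho, round(reflectance, 2))   # reflectance ≈ rho for each albedo
```

A non-constant BRDF would need the same check for every incident direction, not just one.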
Definition: Differentiable Rendering
Differentiable Rendering
A differentiable renderer computes $\partial I / \partial \theta$, the gradient of the rendered image $I$ with respect to scene parameters $\theta$, enabling gradient-based optimisation of those parameters. The key challenge:
- Visibility discontinuities: when a surface edge moves, pixels transition between seeing different surfaces. The rendering function has discontinuities at silhouette edges.
Solutions:
- Soft rasterisation (SoftRas): Replace hard visibility with smooth sigmoid-based occupancy.
- Edge sampling: Explicitly detect and integrate over silhouette edges.
- Volume rendering (NeRF): Avoids hard visibility entirely by using a continuous density field.
- 3DGS: Gaussian alpha-compositing is inherently smooth.
Definition: Inverse Rendering
Inverse Rendering
Inverse rendering recovers scene properties (geometry, materials, lighting) from observed images by inverting the rendering equation. Given observed images $I_{\text{obs}}$ and a differentiable renderer $R$:

$$\hat{\theta} = \arg\min_{\theta}\; \|R(\theta) - I_{\text{obs}}\|^2 + \lambda\, \mathcal{R}(\theta)$$

where $\theta$ collects geometry, material, and lighting parameters and $\mathcal{R}(\theta)$ provides regularisation.
This is an ill-posed inverse problem: different combinations of geometry, materials, and lighting can produce the same image.
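A minimal 1D illustration of both the optimisation and the ill-posedness: a single Lambertian pixel rendered as `render(rho, light) = rho * light`. Only the product is observed, so albedo and light intensity are ambiguous; a quadratic prior on the light picks out one solution. The setup and all values are hypothetical, with hand-derived gradients standing in for autodiff:

```python
import numpy as np

# Toy inverse rendering: a single Lambertian pixel under a head-on light,
#   render(rho, light) = rho * light.
# Only the product is observed, so (rho, light) is ambiguous (ill-posed);
# a quadratic prior pulling light toward 1 selects one solution.
I_obs, lam = 0.5, 0.1
rho, light = 0.9, 0.2            # initial guess
lr = 0.1

for _ in range(2000):
    resid = rho * light - I_obs
    g_rho = 2 * resid * light                            # data term only
    g_light = 2 * resid * rho + 2 * lam * (light - 1.0)  # data + prior
    rho -= lr * g_rho
    light -= lr * g_light

print(round(rho * light, 3))   # ≈ 0.5: the data term is satisfied
print(round(light, 3))         # ≈ 1.0: pulled there by the regulariser
```

Without the prior ($\lambda = 0$), any pair with $\rho\,\ell = 0.5$ is a global minimum, which is exactly the ambiguity the text describes.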
Theorem: Volume Rendering Integral
The colour of a pixel corresponding to camera ray $\mathbf{r}(t) = \mathbf{o} + t\mathbf{d}$ is:

$$C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\, \sigma(\mathbf{r}(t))\, \mathbf{c}(\mathbf{r}(t), \mathbf{d})\, dt$$

where $\sigma$ is the volume density, $\mathbf{c}$ is the emitted colour, and the transmittance is:

$$T(t) = \exp\!\left(-\int_{t_n}^{t} \sigma(\mathbf{r}(s))\, ds\right)$$

The discrete (quadrature) approximation used in NeRF is:

$$\hat{C} = \sum_{i} T_i\, \bigl(1 - e^{-\sigma_i \delta_i}\bigr)\, \mathbf{c}_i, \qquad T_i = \exp\!\Bigl(-\sum_{j<i} \sigma_j \delta_j\Bigr)$$

where $\delta_i = t_{i+1} - t_i$ and the sum runs over samples along the ray.
Volume rendering accumulates colour along a ray, weighting each point by how much light it emits ($\sigma(t)\,\mathbf{c}(t)$) and how much of the ray has not been absorbed yet ($T(t)$). This is analogous to Beer-Lambert absorption, and to the RF matched-filter integral along a range cell.
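The discrete quadrature is a few lines of NumPy. The sketch below (illustrative density and colour values, not any real scene) renders a ray that passes through empty space and then a dense red slab; the weights telescope so that their sum equals $1 - e^{-\sum_i \sigma_i \delta_i}$:

```python
import numpy as np

def volume_render(sigma, color, delta):
    """Discrete NeRF quadrature: C = sum_i T_i * (1 - exp(-sigma_i delta_i)) * c_i."""
    alpha = 1.0 - np.exp(-sigma * delta)                             # per-sample opacity
    T = np.concatenate([[1.0], np.cumprod(np.exp(-sigma * delta))[:-1]])
    weights = T * alpha                                              # sum to <= 1
    return (weights[:, None] * color).sum(axis=0), weights

# A ray through empty space, then a red slab of density 8 on (2, 3):
t = np.linspace(0.0, 4.0, 129)
delta = np.diff(t)
mid = 0.5 * (t[:-1] + t[1:])                 # sample at interval midpoints
sigma = np.where((mid > 2.0) & (mid < 3.0), 8.0, 0.0)
color = np.tile([1.0, 0.0, 0.0], (len(mid), 1))

C, w = volume_render(sigma, color, delta)
print(C[0])   # red channel ≈ 1 - e^{-8} ≈ 0.99966: the slab is nearly opaque
```

Because every operation is smooth in `sigma` and `color`, this function is differentiable end-to-end, which is the whole point for inverse rendering.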
Emission-absorption model
Let $L(t)$ be the radiance passing the point $\mathbf{r}(t)$ towards the camera. The change in radiance along the ray is governed by:

$$\frac{dL}{dt} = \sigma(t)\, L(t) - \sigma(t)\, \mathbf{c}(t)$$

where the first term is absorption and the second is emission (the signs reflect that the light travels in the direction of decreasing $t$).
Integrating factor
Multiply both sides by the integrating factor $T(t) = \exp\!\left(-\int_{t_n}^{t} \sigma(s)\, ds\right)$:

$$\frac{d}{dt}\bigl[T(t)\, L(t)\bigr] = -T(t)\, \sigma(t)\, \mathbf{c}(t)$$

Integration
Integrating from $t_n$ to $t_f$ with $L(t_f) = 0$ (background black), and using $T(t_n) = 1$, gives:

$$C(\mathbf{r}) = L(t_n) = \int_{t_n}^{t_f} T(t)\, \sigma(t)\, \mathbf{c}(t)\, dt$$
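The result can be sanity-checked numerically: for constant $\sigma$ and $\mathbf{c}$, the integral has the closed form $c\,(1 - e^{-\sigma (t_f - t_n)})$. A sketch with illustrative constants:

```python
import numpy as np

# Verify the derivation's closed form for constant sigma and c:
#   C = integral of T(t) * sigma * c dt,  T(t) = exp(-sigma * (t - t_n)),
# should equal c * (1 - exp(-sigma * (t_f - t_n))).
sigma, c, t_n, t_f = 2.0, 0.7, 0.0, 3.0

t = np.linspace(t_n, t_f, 100_001)
integrand = np.exp(-sigma * (t - t_n)) * sigma * c
numeric = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))  # trapezoid rule
analytic = c * (1.0 - np.exp(-sigma * (t_f - t_n)))
print(round(numeric, 6), round(analytic, 6))   # both ≈ 0.698265
```

As $\sigma \to \infty$ the bracket tends to 1 and the ray saturates at the emitted colour, matching the Beer-Lambert intuition above.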
Differentiable Volume Rendering: 1D Ray Marching
Visualise the 1D volume rendering integral along a single ray. The top panel shows the density $\sigma(t)$ and colour $\mathbf{c}(t)$ along the ray; the bottom panel shows the transmittance $T(t)$ and the accumulated colour. Observe how increasing the number of samples improves the quadrature approximation, and how high-density regions absorb light rapidly, causing $T(t)$ to drop.
Example: NeuS: Neural Implicit Surface Rendering
NeuS represents a scene as a neural SDF (Chapter 24) and renders it via volume rendering. Explain how the SDF value is converted to a volume density for the rendering integral, and why this is preferable to NeRF's unconstrained density field.
SDF to density conversion
NeuS defines a density-like weight via the logistic density applied to the SDF $f$:

$$\phi_s(f(\mathbf{x})) = \frac{s\, e^{-s f(\mathbf{x})}}{\bigl(1 + e^{-s f(\mathbf{x})}\bigr)^2}$$

where $\phi_s$ is the derivative of the sigmoid $\Phi_s(x) = (1 + e^{-sx})^{-1}$ with learnable inverse width $s$. The weight peaks at the zero-level set of the SDF (the surface) and decays away from it.
Rendering
The volume rendering integral becomes:

$$C(\mathbf{r}) = \int_{t_n}^{t_f} w(t)\, \mathbf{c}(t)\, dt$$

with weights $w(t)$ derived from $\phi_s(f(\mathbf{r}(t)))$ so that they remain occlusion-aware and peak at the surface. As $s \to \infty$, $\phi_s$ converges to a Dirac delta at the zero-level set, recovering surface rendering. During training, $s$ is gradually increased (annealed) to transition from volume to surface rendering.
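To make the conversion concrete, the sketch below evaluates the logistic density along a ray that crosses a planar surface at $t = 2$, so the SDF restricted to the ray is $f(t) = 2 - t$ (an illustrative setup, not the full NeuS weight construction). The weight peaks exactly at the zero crossing, and larger $s$ concentrates it there:

```python
import numpy as np

def phi_s(x, s):
    """Logistic density (derivative of the sigmoid); peak height s/4 at x = 0."""
    e = np.exp(-s * x)
    return s * e / (1.0 + e) ** 2

# SDF along a ray hitting a plane at t = 2: f(t) = 2 - t
t = np.linspace(0.0, 4.0, 401)
f = 2.0 - t

for s in (1.0, 4.0, 32.0):
    w = phi_s(f, s)
    print(s, t[np.argmax(w)])   # the weight peaks at the surface, t = 2.0
```

Annealing $s$ upward during training corresponds to sharpening this bump until it behaves like a surface intersection.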
Advantage over NeRF
NeRF's density field has no geometric interpretation: the density can be nonzero everywhere, producing "floaters" and fuzzy surfaces. NeuS constrains the geometry to an SDF, guaranteeing a well-defined surface with exact normals ($\mathbf{n} = \nabla f$, since $\|\nabla f\| = 1$ for an SDF), which is critical for downstream tasks like relighting and material estimation.
Common Mistake: Visibility Discontinuities in Differentiable Rendering
Mistake:
Naively differentiating a hard rasteriser with respect to mesh vertex positions, obtaining zero gradients almost everywhere and undefined gradients at silhouette edges.
Correction:
Hard rasterisation is a step function in object coordinates: a pixel is either inside or outside a triangle. The gradient is zero (no information) in the interior and undefined at edges.
Solutions: (i) Soft rasterisation (SoftRas) replaces the step with a sigmoid: each pixel gets a soft "occupancy" from nearby triangles. (ii) Volume rendering (NeRF/NeuS) avoids hard visibility altogether. (iii) 3DGS uses smooth Gaussian splatting. All three approaches provide informative gradients for inverse rendering.
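A 1D sketch of the failure and the fix (a hypothetical toy setup, not a real rasteriser): a pixel at position `x` is covered when `x < theta`, where `theta` is the edge position. Finite differences show the hard coverage carries no gradient signal, while the sigmoid relaxation does:

```python
import numpy as np

# Hard coverage is a step in theta: zero gradient almost everywhere.
def hard_coverage(x, theta):
    return (x < theta).astype(float)

# SoftRas-style relaxation: sigmoid of the signed distance to the edge,
# with temperature tau controlling the blur width.
def soft_coverage(x, theta, tau=0.1):
    return 1.0 / (1.0 + np.exp(-(theta - x) / tau))

x = np.array([0.3])
theta, eps = 0.5, 1e-4

g_hard = (hard_coverage(x, theta + eps) - hard_coverage(x, theta - eps)) / (2 * eps)
g_soft = (soft_coverage(x, theta + eps) - soft_coverage(x, theta - eps)) / (2 * eps)
print(g_hard[0], round(g_soft[0], 3))   # prints 0.0 and ≈ 1.05
```

Shrinking `tau` recovers the hard rasteriser (and its useless gradients), so in practice `tau` trades off image sharpness against gradient quality.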
BRDF (Bidirectional Reflectance Distribution Function)
A function that describes the ratio of reflected radiance to incident irradiance at a surface point, characterising the material's reflective properties.
Related: The Rendering Equation
Inverse Rendering
The problem of recovering scene geometry, materials, and lighting from observed images, typically solved via gradient-based optimisation through a differentiable renderer.
Related: Inverse Rendering
Quick Check
In the volume rendering integral, what does the transmittance $T(t)$ represent physically?
The probability that the ray has not been absorbed by depth $t$
The total emitted light up to depth $t$
The surface normal at depth $t$
The accumulated density along the ray
$T(t)$ is the probability that a photon travelling along the ray reaches depth $t$ without being absorbed. It decreases monotonically from $T(t_n) = 1$ (no absorption yet) as the ray passes through regions of nonzero density.
Key Takeaway
The rendering equation is the forward model for optical imaging. Differentiable renderers (SoftRas for meshes, volume rendering for NeRF/NeuS, Gaussian splatting for 3DGS) enable gradient-based inverse rendering: estimating geometry, materials, and lighting from images. The core challenge is handling visibility discontinuities smoothly so that gradients are informative.