RadarSplat and Automotive Applications

Gaussian Splatting Meets Automotive Radar

Automotive radar presents unique challenges for neural scene representations: the sensor outputs sparse, noisy 3D point clouds (not images), operates with FMCW waveforms that impose specific range-Doppler processing, and must be fused with camera and LiDAR for reliable perception. RadarSplat (Niedermayr et al., 2024) and GSpaRC (Dong et al., 2024) adapt 3DGS to this automotive setting, enabling radar point cloud synthesis, augmentation for autonomous driving, and fast channel reconstruction.

Definition:

RadarSplat

RadarSplat represents an automotive scene as a set of 3D Gaussians where each primitive carries radar-specific attributes:

$$(\boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k, \alpha_k, \sigma_{\text{RCS},k}, v_k, \boldsymbol{\psi}_k),$$

where:

  • $\sigma_{\text{RCS},k} \in \mathbb{R}_+$ is the radar cross-section (RCS),
  • $v_k \in \mathbb{R}$ is the radial velocity (Doppler),
  • $\boldsymbol{\psi}_k$ encodes additional features (e.g., material class).

The rendering process models the FMCW radar pipeline: each Gaussian generates a response in the range-Doppler-angle domain, and the rendered "image" is a synthetic radar point cloud.

Definition:

FMCW-Aware Gaussian Rendering

In RadarSplat, the rendering equation is adapted for FMCW radar by incorporating the range-Doppler response of each Gaussian. For a Gaussian at position $\boldsymbol{\mu}_k$, the beat-signal contribution at the receiver position $\mathbf{p}_{\text{rx}}$ is:

$$s_k(t) = \sigma_{\text{RCS},k}\,\alpha_k\,G_k(\mathbf{p}_{\text{rx}})\,T_k\,\exp\!\left(j 2\pi\left(\frac{2 f_0}{c} v_k\, t + \frac{W}{T_{\text{chirp}}}\frac{2 R_k}{c}\, t\right)\right),$$

where $R_k = \|\boldsymbol{\mu}_k - \mathbf{p}_{\text{rx}}\|$ is the range, $T_{\text{chirp}}$ is the chirp duration, and $W$ is the sweep bandwidth. After range-Doppler FFT processing, the Gaussian appears as a peak in the range-Doppler map at $(R_k, v_k)$ with amplitude proportional to $\sigma_{\text{RCS},k}\alpha_k$.

The total radar response is the coherent sum over all Gaussians:

$$s(t) = \sum_{k=1}^{N} s_k(t).$$
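The coherent sum above can be checked numerically. The snippet below is a minimal illustration with assumed FMCW parameters (77 GHz carrier, 1 GHz sweep, 50 Β΅s chirp), not the RadarSplat implementation: it synthesises the beat signals of two point-like Gaussians and verifies that the range FFT places the strongest peak at the expected range.

```python
import numpy as np

# Illustrative FMCW parameters (assumed, typical of a 77 GHz automotive radar)
c = 3e8
f0 = 77e9          # carrier frequency
W = 1e9            # sweep bandwidth -> range resolution c/(2W) = 0.15 m
T_chirp = 50e-6    # chirp duration
n_samples = 512
t = np.linspace(0.0, T_chirp, n_samples, endpoint=False)

# Two Gaussians: (range R_k [m], radial velocity v_k [m/s], amplitude sigma_RCS * alpha)
scatterers = [(10.0, 0.0, 1.0), (25.0, 5.0, 0.5)]

# Coherent sum of per-Gaussian beat signals: range term + (small) Doppler term
s = np.zeros(n_samples, dtype=complex)
for R_k, v_k, amp in scatterers:
    f_beat = (W / T_chirp) * (2 * R_k / c) + (2 * f0 / c) * v_k
    s += amp * np.exp(1j * 2 * np.pi * f_beat * t)

# Range FFT: each Gaussian appears as a peak at its range bin
spectrum = np.abs(np.fft.fft(s))
range_axis = np.fft.fftfreq(n_samples, d=T_chirp / n_samples) * c * T_chirp / (2 * W)
peak_range = range_axis[np.argmax(spectrum[: n_samples // 2])]
print(f"strongest peak at ~{peak_range:.1f} m")  # close to 10 m (the stronger scatterer)
```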

Theorem: Gaussian Size and Radar Resolution

For an FMCW radar with bandwidth $W$ and coherent processing interval $T_{\text{CPI}}$, the minimum Gaussian scale $s_{\min}$ that can be resolved is:

$$s_{\min} \geq \max\!\left(\frac{c}{2W},\; \frac{\lambda}{2 N_a}\right),$$

where the first term is the range resolution and the second is the angular resolution for an array of $N_a$ elements. Gaussians smaller than $s_{\min}$ are indistinguishable from point scatterers and should be parameterised as isotropic ($\boldsymbol{\Sigma}_k = s_{\min}^2 \mathbf{I}$).

The radar cannot "see" structure finer than its resolution cells. Making Gaussians smaller than the resolution cell adds parameters without adding information, leading to overfitting. This is the RF analog of the optical resolution limit that constrains Gaussian size in optical 3DGS.
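As a quick sanity check of the bound, the two terms can be evaluated directly. The parameters below (77 GHz carrier, 1 GHz sweep, an 8-element array) are assumptions for illustration; with them the range-resolution term dominates.

```python
c = 3e8
f0 = 77e9        # carrier frequency (assumed)
W = 1e9          # sweep bandwidth (assumed)
N_a = 8          # array elements (assumed)
lam = c / f0     # wavelength, ~3.9 mm at 77 GHz

range_term = c / (2 * W)        # range resolution: 0.15 m
angular_term = lam / (2 * N_a)  # angular resolution term from the theorem

s_min = max(range_term, angular_term)
print(f"range term {range_term:.3f} m, s_min {s_min:.3f} m")
```

Gaussians with scale below `s_min` (here 15 cm) fall inside a single resolution cell and should be clamped to the isotropic parameterisation from the theorem.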

Definition:

GSpaRC --- Gaussian Splatting with Physics-Augmented Rendering for Compact Scenes

GSpaRC (Dong et al., 2024) extends RadarSplat by incorporating physics-based propagation models into the rendering pipeline:

  1. Free-space path loss: Each Gaussian's contribution is attenuated by $(\lambda/(4\pi R_k))^2$, the radar-equation path loss.
  2. Multi-bounce rendering: Gaussians can act as secondary sources, modelling the double-bounce scattering common in urban environments (e.g., ground-wall dihedral reflections).
  3. Compact representation: A sparsity-promoting loss $\mathcal{L}_{\text{sparse}} = \lambda_s \sum_k \alpha_k$ encourages removing unnecessary Gaussians, achieving $5$--$10\times$ fewer primitives than vanilla RadarSplat.

The physics augmentation improves generalisation to novel viewpoints and reduces the required training data by encoding known propagation laws rather than learning them from data.
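The path-loss and sparsity ingredients can be sketched in a few lines. Everything below is an illustrative assumption (random scene, loss weight $\lambda_s$, pruning threshold), not the GSpaRC code, and multi-bounce rendering is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000
lam = 3.9e-3                        # wavelength (assumed, 77 GHz)
R = rng.uniform(5.0, 100.0, N)      # per-Gaussian ranges [m]
alpha = rng.uniform(0.0, 1.0, N)    # opacities
rcs = rng.uniform(0.1, 10.0, N)     # RCS values

# 1. Free-space path loss: radar-equation attenuation (lambda / (4 pi R))^2
path_loss = (lam / (4 * np.pi * R)) ** 2
amplitude = rcs * alpha * path_loss   # physically attenuated contribution

# 3. Sparsity-promoting l1 loss on opacities (illustrative weight)
lambda_s = 1e-3
L_sparse = lambda_s * np.sum(alpha)

# Pruning: drop Gaussians whose opacity falls below a threshold (assumed value)
keep = alpha > 0.05
print(f"kept {keep.sum()} of {N} Gaussians, L_sparse = {L_sparse:.3f}")
```

During training, `L_sparse` is added to the rendering loss, so gradient descent pushes low-value opacities toward zero and the pruning step then removes them.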

Example: Synthetic Radar Point Cloud for Training Data Augmentation

An autonomous driving team has 10 hours of camera + LiDAR data but only 2 hours of co-registered radar data (due to a sensor failure). They need balanced multi-modal training data. Describe how RadarSplat generates synthetic radar point clouds for the missing 8 hours.


Gaussian RCS and Opacity Visualisation

Visualise a collection of 2D Gaussians with radar cross-section encoded as colour intensity and opacity controlling visibility. Adjust parameters to see how the representation changes.


Common Mistake: Coherent vs Incoherent Summation in Radar Splatting

Mistake:

Treating the radar splatting as an incoherent power summation (as in the optical case) rather than a coherent field summation.

Correction:

In optical 3DGS, alpha-compositing sums intensities (powers) because camera pixels detect intensity. Radar, however, operates on complex-valued signals: the IF signal $s(t)$ is the coherent sum of contributions from all Gaussians. After range-Doppler FFT processing, nearby Gaussians can interfere constructively or destructively depending on their phase relationship.

RadarSplat must track the phase $\phi_k = 2\kappa R_k$ of each Gaussian, where $\kappa = 2\pi/\lambda$ is the wavenumber, and perform coherent summation in the complex domain before computing the detected power $|s|^2$. Ignoring phase leads to systematic overestimation of radar returns, since all contributions are treated as constructive.
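A two-scatterer example makes the difference concrete: two equal returns whose two-way path lengths differ by half a wavelength cancel coherently, while an incoherent power sum reports twice the single-scatterer power. The wavelength is an assumed 77 GHz value.

```python
import numpy as np

lam = 3.9e-3                  # wavelength (assumed, 77 GHz)
kappa = 2 * np.pi / lam       # wavenumber

# Two equal scatterers separated by lam/4 in range, i.e. lam/2 in two-way
# path length, so their returns are pi out of phase and nearly cancel.
R = np.array([10.0, 10.0 + lam / 4])
amps = np.array([1.0, 1.0])

coherent_power = np.abs(np.sum(amps * np.exp(1j * 2 * kappa * R))) ** 2
incoherent_power = np.sum(amps ** 2)

print(coherent_power, incoherent_power)  # ~0 vs 2
```

The incoherent sum misses the destructive interference entirely, which is exactly the overestimation described above.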

⚠️ Engineering Note

Timing Constraints for Automotive Radar Synthesis

Automotive perception systems require sensor data at specific rates:

  • Radar: 10--20 frames per second (50--100 ms per frame).
  • Camera: 10--30 FPS.
  • LiDAR: 10--20 FPS.

For RadarSplat to be useful in simulation or hardware-in-the-loop testing, the rendering time must be $< 50$ ms per frame. With $N \sim 10^4$ Gaussians (typical for a 100 m road segment), the tile-based rasteriser achieves $\sim 5$ ms per frame on an A100 GPU, well within the timing budget. This sub-microsecond-per-Gaussian rendering is a key advantage of splatting over ray-tracing-based simulators (e.g., Sionna RT), which require $\sim 100$ ms per frame for comparable scene complexity.

Practical Constraints

  • Rendering latency must be $< 50$ ms for real-time simulation.
  • GPU memory scales linearly with $N$ (approx. 200 bytes per Gaussian).

RadarSplat vs GSpaRC

| Property | RadarSplat | GSpaRC |
|---|---|---|
| Path loss model | Learned (implicit) | Physics-based ($\lambda^2/(4\pi R)^2$) |
| Multi-bounce | No (single-bounce only) | Yes (double-bounce rendering) |
| Typical Gaussians | $\sim 50{,}000$ | $\sim 5{,}000$--$10{,}000$ |
| Sparsity loss | Standard pruning | Explicit $\ell_1$ on opacity |
| Novel-view generalisation | Good ($< 0.5$ m Chamfer) | Better ($< 0.3$ m Chamfer) |
| Training time | $\sim 30$ min | $\sim 45$ min |
| Rendering speed | $> 100$ FPS | $\sim 80$ FPS |
πŸŽ“ CommIT Contribution (2026)

Gaussian Splatting for RF Scene Reconstruction

A. Rezaei, G. Caire β€” CommIT Group, TU Berlin (in preparation)

The CommIT group is investigating the application of 3DGS to the unified forward model $\mathbf{y} = \mathbf{A}\mathbf{c} + \mathbf{w}$ developed in Chapter 7. The key research question is whether representing the reflectivity $\mathbf{c}$ as a set of Gaussian primitives (instead of a voxel grid or neural field) can simultaneously improve reconstruction quality and enable real-time rendering for digital twin applications.

Preliminary results show that the Kronecker structure of $\mathbf{A}$ (Chapter 7) can be exploited to accelerate the Gaussian rendering step: the spatial and frequency components of the forward model factorise, allowing separate 2D splatting operations rather than a single 3D operation. This reduces the per-iteration cost from $O(N \cdot K \cdot Q)$ to $O(N \cdot K + N \cdot Q)$, where $K$ is the number of subcarriers and $Q$ the number of spatial samples.
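The factorisation idea can be illustrated with a single separable Gaussian; this is a sketch of the cost argument under the assumption of separable profiles, not the CommIT implementation. Evaluating the joint $K \times Q$ kernel needs $K \cdot Q$ exponentials per primitive, while evaluating two 1D profiles and taking their outer product needs only $K + Q$.

```python
import numpy as np

K, Q = 64, 256                      # subcarriers, spatial samples (illustrative)
f = np.linspace(0.0, 1.0, K)        # normalised frequency grid
x = np.linspace(0.0, 1.0, Q)        # normalised spatial grid

# One Gaussian primitive with separable frequency/spatial profiles (assumed form)
mu_f, s_f, mu_x, s_x = 0.5, 0.1, 0.3, 0.05

# Joint evaluation: K*Q exponentials
joint = np.exp(-((f[:, None] - mu_f) ** 2) / (2 * s_f ** 2)
               - ((x[None, :] - mu_x) ** 2) / (2 * s_x ** 2))

# Factorised evaluation: K + Q exponentials, then a rank-1 outer product
prof_f = np.exp(-((f - mu_f) ** 2) / (2 * s_f ** 2))
prof_x = np.exp(-((x - mu_x) ** 2) / (2 * s_x ** 2))
factored = np.outer(prof_f, prof_x)

print(np.allclose(joint, factored))  # True
```

Summing such rank-1 contributions over all $N$ primitives gives the $O(N \cdot K + N \cdot Q)$ kernel-evaluation cost quoted above.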

Tags: 3dgs, rf-imaging, forward-model, kronecker

FMCW-Aware Rendering

A rendering process that models the frequency-modulated continuous wave (FMCW) radar signal processing chain: each scene primitive generates a beat signal contribution with range- and Doppler-dependent phase, and the rendered output is the range-Doppler map after FFT processing (rather than an image).

Related: Radio Radiance Field

Radar Cross-Section (RCS)

A measure of how much electromagnetic energy a target reflects back toward the radar, defined as $\sigma_{\text{RCS}} = \lim_{R \to \infty} 4\pi R^2\, |E_s|^2 / |E_i|^2$, where $E_s$ and $E_i$ are the scattered and incident field amplitudes. In RadarSplat, each Gaussian primitive carries an RCS attribute that determines its radar reflectivity.

Related: FMCW-Aware Rendering

Key Takeaway

RadarSplat and GSpaRC adapt 3D Gaussian Splatting for automotive radar by augmenting each Gaussian with radar cross-section and Doppler velocity attributes, and replacing the optical rasteriser with an FMCW-aware rendering pipeline that produces range-Doppler maps. Physics-augmented rendering (GSpaRC) improves generalisation and compactness. A critical difference from the optical setting is the need for coherent (complex-valued) summation to correctly model radar interference patterns.