References & Further Reading

References

  1. B. Kerbl, G. Kopanas, T. Leimkühler, and G. Drettakis, '3D Gaussian Splatting for real-time radiance field rendering,' ACM Trans. Graphics, vol. 42, no. 4, 2023

    The foundational 3DGS paper. Introduces explicit Gaussian primitives with differentiable tile-based rasterisation, adaptive density control, and spherical harmonics for view-dependent appearance. Achieves real-time rendering at quality comparable to NeRF. Section 26.1 is based on this work.
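
    The tile-based rasterisation described above reduces, per pixel, to front-to-back alpha compositing of depth-sorted Gaussians. The sketch below illustrates that core loop only; the function name and the early-termination threshold are illustrative, not taken from the paper.

```python
import numpy as np

def composite(colors, alphas):
    """Front-to-back alpha compositing of depth-sorted Gaussians.

    colors : (N, 3) per-Gaussian colour (after SH evaluation)
    alphas : (N,)   per-Gaussian opacity times the 2D Gaussian falloff
    """
    T = 1.0                 # accumulated transmittance
    out = np.zeros(3)
    for c, a in zip(colors, alphas):
        out += T * a * c    # this Gaussian's contribution
        T *= (1.0 - a)      # light remaining for Gaussians behind it
        if T < 1e-4:        # early termination, as in tile-based rasterisers
            break
    return out
```

    The sort-then-composite structure is what makes the renderer differentiable end to end: gradients flow through `out` to every colour and opacity.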

  2. H. Zhang, S. Fang, Y. Xiong, and X. Wang, 'RF-3DGS: 3D Gaussian Splatting for radio radiance field reconstruction,' 2024

    Adapts 3DGS for RF propagation by replacing optical colour with dB-scale received power. Introduces image-based initialisation and dB-scale loss functions. Section 26.2 is based on this work.
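
    As a concrete illustration of fitting in dB scale rather than linear watts, here is a minimal sketch. The function names, the clipping floor, and the choice of an L1 penalty are assumptions for illustration, not RF-3DGS's exact formulation.

```python
import numpy as np

def watts_to_db(p_watts, eps=1e-12):
    """Convert linear received power to dB, guarding against log(0)."""
    return 10.0 * np.log10(np.maximum(p_watts, eps))

def db_loss(pred_db, meas_db, floor_db=-120.0):
    """L1 loss computed directly in dB (log) scale.

    Working in dB keeps weak multipath components from being
    swamped by the strongest path, which dominates a linear loss.
    """
    pred = np.clip(pred_db, floor_db, None)
    meas = np.clip(meas_db, floor_db, None)
    return np.mean(np.abs(pred - meas))
```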

  3. S. Chen, W. Jiang, Y. Li, and Z. Zhang, 'RFCanvas: multi-modal fusion for radio field reconstruction with Gaussian splatting,' 2024

    Fuses visual priors (LiDAR + camera) with few-shot RF measurements via tensorial RF fields and spherical harmonics. Demonstrates a 5× reduction in RF data requirements. Section 26.3 is based on this work.

  4. S. Niedermayr, R. Quast, and F. Shroff, 'RadarSplat: 3D Gaussian splatting for automotive radar point cloud synthesis,' 2024

    Extends 3DGS to automotive radar with FMCW-aware rendering and radar cross-section attributes. Section 26.4 draws heavily on this work.

  5. Y. Dong, Z. Li, and A. Geiger, 'GSpaRC: Gaussian splatting with physics-augmented rendering for compact radar scenes,' 2024

    Adds physics-based path loss and multi-bounce rendering to radar Gaussian splatting, achieving more compact representations. Discussed in Section 26.4.
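
    The simplest instance of the physics-based path loss mentioned here is the textbook Friis free-space formula; the sketch below shows its dB form as a generic reference point, not GSpaRC's actual implementation.

```python
import numpy as np

def free_space_path_loss_db(d_m, freq_hz):
    """Friis free-space path loss: 20 * log10(4 * pi * d / lambda)."""
    c = 299_792_458.0           # speed of light, m/s
    lam = c / freq_hz           # wavelength, m
    return 20.0 * np.log10(4.0 * np.pi * d_m / lam)
```

    Note the 20 dB-per-decade slope in distance: baking this factor into the renderer frees the Gaussians from having to encode it, which is one route to the more compact representations the paper reports.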

  6. B. Mildenhall, P. P. Srinivasan, M. Tancik, J. T. Barron, R. Ramamoorthi, and R. Ng, 'NeRF: Representing scenes as neural radiance fields for view synthesis,' Proc. ECCV, 2020

    The original NeRF paper establishing implicit neural scene representations via volume rendering. 3DGS is positioned as the explicit counterpart to NeRF.

  7. Y. Zhao, Z. Wu, M. Guo, and F. Gao, 'NeRF2: Neural radio-radiance fields,' 2024

    Adapts NeRF for RF propagation prediction. The RF-NeRF baseline against which RF-3DGS is compared in Section 26.5.

  8. M. Zwicker, H. Pfister, J. van Baar, and M. Gross, 'EWA volume splatting,' Proc. IEEE Visualization, 2001

    Foundational work on EWA (elliptical weighted average) splatting that 3DGS builds upon. Establishes the projection of 3D ellipsoids to 2D ellipses.
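
    The 3D-to-2D covariance projection at the heart of EWA splatting is Σ′ = J W Σ Wᵀ Jᵀ, where W is the world-to-camera rotation and J is the Jacobian of the perspective projection at the Gaussian's camera-space mean (the standard local affine approximation). A minimal sketch, with illustrative names:

```python
import numpy as np

def perspective_jacobian(mean_cam, fx, fy):
    """Jacobian of the pinhole projection at a camera-space point (x, y, z)."""
    x, y, z = mean_cam
    return np.array([[fx / z, 0.0, -fx * x / z**2],
                     [0.0, fy / z, -fy * y / z**2]])

def project_covariance(cov3d, W, J):
    """EWA projection of a 3D ellipsoid to a 2D image-plane ellipse."""
    return J @ W @ cov3d @ W.T @ J.T
```

    The result is a 2×2 covariance describing the projected ellipse's extent and orientation; 3DGS evaluates exactly this quantity per Gaussian before rasterisation.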

  9. G. Caire, 'On the illumination and sensing model for RF imaging,' 2026

    The unified forward model that connects RF imaging to both diffraction tomography and radar signal processing. The Kronecker structure exploited in Section 26.4 originates here.

  10. T. Müller, A. Evans, C. Schied, and A. Keller, 'Instant neural graphics primitives with a multiresolution hash encoding,' ACM Trans. Graphics, vol. 41, no. 4, 2022

    Instant-NGP accelerates NeRF training to minutes using multi-resolution hash encoding. Provides the speed baseline for NeRF comparisons in Section 26.1.

  11. J. Luiten, G. Kopanas, B. Leibe, and D. Ramanan, 'Dynamic 3D Gaussians: Tracking by persistent dynamic view synthesis,' Proc. 3DV, 2024

    Extends 3DGS to dynamic scenes by adding per-Gaussian motion parameters. Relevant to the open question of dynamic RF environments in Section 26.5.

Further Reading

The following resources cover specific topics from this chapter in greater depth.

  • 3DGS extensions and survey

    G. Chen and W. Wang, 'A survey on 3D Gaussian Splatting,' arXiv:2401.03890, 2024

    Comprehensive survey covering 3DGS extensions for dynamic scenes, relighting, editing, and generation. Provides the broader computer vision context for the RF adaptations in this chapter.

  • Neural radiance fields for wireless channels

    Y. Zhao et al., 'NeRF2: Neural radio-radiance fields,' IEEE Trans. Wireless Commun., 2024

    The NeRF-based approach to RF modelling that RF-3DGS improves upon in rendering speed. Useful for understanding the NeRF-to-3DGS transition in the RF domain.

  • Multi-modal sensor fusion for autonomous driving

    Y. Li et al., 'BEVFusion: Multi-task multi-sensor fusion with unified bird's-eye view representation,' Proc. ICRA, 2023

    The multi-modal fusion framework that motivates RadarSplat's integration of radar with camera and LiDAR in Section 26.4.

  • Differentiable rendering for inverse problems

    T. Li, M. Aittala, F. Durand, and J. Lehtinen, 'Differentiable Monte Carlo ray tracing through edge sampling,' ACM Trans. Graphics, vol. 37, no. 6, 2018

    Establishes differentiable rendering as a tool for inverse problems. Provides the theoretical foundation connecting Chapter 25's differentiable rendering to the 3DGS approach.