Chapter Summary

Key Points

  1. VAEs provide principled probabilistic generation. The ELBO loss balances reconstruction and regularisation, the reparameterisation trick enables end-to-end training, and posterior collapse is the main failure mode to watch for.

  2. GANs produce sharp samples but are hard to train. Use spectral normalisation, WGAN-GP, or progressive growing for stability, and monitor FID/IS for quality rather than relying on visual inspection alone.

  3. DDPM training is just denoising at random noise levels: add noise, predict the noise, minimise the MSE. The iterative reverse (sampling) process produces high-quality samples but is slow.

  4. Flow matching simplifies continuous-time generative modelling: learn a velocity field along straight interpolation paths. It is simpler than score-based SDEs and needs fewer sampling steps.

  5. Choose the generative model to match the task: VAEs for latent representations and fast sampling, GANs for sharpness, diffusion and flow models for the best sample quality. All of them can generate wireless channel realisations.
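The VAE machinery in point 1 fits in a few lines. The following is a minimal NumPy sketch (function names are illustrative, not from the chapter) of the reparameterisation trick and the closed-form KL term that forms the regularisation half of the ELBO:

```python
import numpy as np

def reparameterise(mu, log_var, rng):
    # z = mu + sigma * eps with eps ~ N(0, I); the randomness is moved
    # into eps, so gradients can flow through mu and log_var during training
    eps = rng.standard_normal(np.shape(mu))
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    # Closed-form KL(q(z|x) || N(0, I)), summed over latent dimensions --
    # the regularisation term of the ELBO
    return -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var), axis=-1)
```

The full (negative) ELBO adds a reconstruction term, e.g. MSE between the decoder output and the input; posterior collapse shows up as the KL term being driven to zero.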
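For the GAN stabilisation in point 2, the core operation behind spectral normalisation is estimating a weight matrix's largest singular value by power iteration and rescaling by it. A minimal NumPy sketch of that operation (applied per layer in a real discriminator; names are illustrative):

```python
import numpy as np

def spectral_normalise(W, n_iter=100, seed=0):
    # Power iteration: alternately apply W and W.T to converge on the top
    # singular vectors, then rescale W so its spectral norm is ~1
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(W.shape[0])
    for _ in range(n_iter):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v  # estimate of the largest singular value
    return W / sigma
```

In practice libraries keep `u` as a persistent buffer and run only one iteration per training step, which amortises the cost.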
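The DDPM training recipe in point 3 ("add noise, predict noise, minimise MSE") relies on the closed-form forward process. A minimal NumPy sketch of how one training pair is built (function name is illustrative):

```python
import numpy as np

def ddpm_training_pair(x0, alpha_bar_t, rng):
    # Closed-form forward process at noise level alpha_bar_t in [0, 1]:
    #   x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps
    eps = rng.standard_normal(np.shape(x0))
    x_t = np.sqrt(alpha_bar_t) * x0 + np.sqrt(1.0 - alpha_bar_t) * eps
    # The network eps_theta(x_t, t) is then trained to predict eps
    # with the simple loss ||eps - eps_theta(x_t, t)||^2
    return x_t, eps
```

Sampling runs the learned denoiser iteratively from pure noise back to data, which is why generation is slow relative to a single forward pass.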
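The straight interpolation paths in point 4 make the flow-matching regression target especially simple: along a linear path the velocity is constant. A minimal sketch, assuming `x0` is noise and `x1` is data (names are illustrative):

```python
import numpy as np

def flow_matching_pair(x0, x1, t):
    # Straight-line interpolation between noise x0 and data x1 at time t:
    #   x_t = (1 - t) * x0 + t * x1
    x_t = (1.0 - t) * x0 + t * x1
    # Along this path the velocity dx_t/dt = x1 - x0 is constant, and it is
    # the regression target for the learned velocity field v_theta(x_t, t)
    v_target = x1 - x0
    return x_t, v_target
```

Because the learned field approximates nearly straight trajectories, an ODE solver can integrate from noise to data in few steps.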

Looking Ahead

Chapter 33 shows how to use pre-trained generative models (denoisers) as plug-and-play priors for inverse problems in wireless systems.