Chapter Summary
Key Points
1. Pre-trained models are the starting point. Use torchvision or HuggingFace models: feature extraction for tiny datasets, full fine-tuning for medium-sized ones, and LoRA for large models with limited compute (see the first sketch after this list).
2. Plug-and-play (PnP) denoisers solve inverse problems without task-specific training. A pre-trained DRUNet acts as a universal prior: alternate between a data-fidelity step and a denoising step, controlling the denoising strength via the noise-map input (second sketch below).
3. Domain adaptation bridges simulation and reality. Train on cheap simulated data, then adapt to real measurements; domain-adversarial training learns domain-invariant features (third sketch below).
4. Export models for deployment: TorchScript for PyTorch-native inference, ONNX for cross-framework portability, TensorRT for maximum GPU speed. INT8 quantisation typically reduces latency by 2-4x (fourth sketch below).
5. Always use the correct preprocessing. Pre-trained models expect specific normalisation, and wrong preprocessing silently degrades performance (final sketch below).
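A minimal sketch of the feature-extraction recipe from point 1: freeze an ImageNet-pre-trained backbone and train only a new head. The ResNet-18 choice and the 10-class head are placeholder assumptions, not values from this chapter.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pre-trained backbone (ResNet-18 as an example).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Feature extraction: freeze every pre-trained parameter.
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head; only this layer will be trained.
model.fc = nn.Linear(model.fc.in_features, 10)  # 10 classes is a placeholder

# Optimise only the parameters that still require gradients.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)
```

For medium-sized datasets, the same skeleton applies with the freezing loop removed (full fine-tuning, usually at a lower learning rate).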
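For point 2, a schematic PnP proximal-gradient loop. It assumes a linear forward model y = A x + noise with `A` given as a matrix, and a pre-trained Gaussian denoiser exposed as a callable `denoiser(x, sigma)`; both names are illustrative, not a specific library API.

```python
import torch

def pnp_pgd(y, A, denoiser, sigma=0.05, step=1.0, iters=50):
    """Schematic PnP loop for y = A x + noise (illustrative names).

    `denoiser(x, sigma)` stands in for a pre-trained denoiser such as
    DRUNet, which takes the noise level as an extra input channel.
    """
    x = A.T @ y  # crude initialisation from the adjoint
    for _ in range(iters):
        # Data-fidelity step: gradient descent on ||A x - y||^2 / 2.
        x = x - step * (A.T @ (A @ x - y))
        # Prior step: the denoiser replaces the proximal operator.
        x = denoiser(x, sigma)
    return x
```

Raising `sigma` strengthens the prior (smoother reconstructions); lowering it trusts the data-fidelity term more.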
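Point 3's domain-adversarial training hinges on a gradient-reversal layer: features pass through unchanged on the forward pass, but gradients from the domain classifier are negated, so the feature extractor learns to fool it. A minimal PyTorch sketch, with `lamb` as the usual adversarial trade-off weight:

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity forward; negated (and scaled) gradient backward."""

    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None

def grad_reverse(x, lamb=1.0):
    return GradReverse.apply(x, lamb)

# Usage sketch: features -> task head as usual, and
# features -> grad_reverse(features) -> domain classifier
# for the adversarial domain loss.
```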
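For point 4, the two lightest export paths plus dynamic INT8 quantisation. The model and input shape are placeholders; `torch.onnx.export` traces the model with the example input.

```python
import torch
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
example = torch.randn(1, 3, 224, 224)  # placeholder input shape

# TorchScript: trace the model for PyTorch-native deployment.
scripted = torch.jit.trace(model, example)
scripted.save("model_ts.pt")

# ONNX: export for cross-framework runtimes (ONNX Runtime, TensorRT, ...).
torch.onnx.export(model, example, "model.onnx",
                  input_names=["input"], output_names=["logits"],
                  dynamic_axes={"input": {0: "batch"}})

# Dynamic INT8 quantisation of the Linear layers (CPU inference).
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
```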
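Finally, point 5 in practice: torchvision weight enums bundle the matching preprocessing, which is the safest way to avoid silent mismatches; the manual pipeline below is the equivalent spelled out with the standard ImageNet statistics.

```python
from torchvision import models, transforms

# Preferred: use the transforms shipped with the weights.
weights = models.ResNet18_Weights.DEFAULT
preprocess = weights.transforms()

# Equivalent manual pipeline with the standard ImageNet statistics.
manual = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
```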
Looking Ahead
Part VI is now complete. You have the tools to build, train, and deploy neural networks for scientific and wireless applications, from basic MLPs through transformers and generative models.