Exercises

ex-sp-ch27-01

Easy

Compute the output size of Conv2d(3, 64, 5, stride=2, padding=2) applied to a (1, 3, 224, 224) input. Verify with PyTorch.
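One way to sanity-check the arithmetic before reaching for PyTorch is the standard output-size formula, out = floor((in + 2·pad − kernel) / stride) + 1:

```python
# Conv2d output-size formula: out = floor((in + 2*pad - k) / stride) + 1.
def conv_out(size, kernel, stride, padding):
    return (size + 2 * padding - kernel) // stride + 1

h = conv_out(224, kernel=5, stride=2, padding=2)  # (224 + 4 - 5) // 2 + 1 = 112
print((1, 64, h, h))  # expected output shape: (1, 64, 112, 112)
```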

ex-sp-ch27-02

Easy

Implement a Conv-BN-ReLU block and verify it preserves spatial dimensions when using kernel_size=3, padding=1, stride=1.
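A minimal sketch of such a block (the helper name `conv_bn_relu` is illustrative); with kernel_size=3, padding=1, stride=1, each side grows by 2·1, shrinks by 3−1, and ends unchanged:

```python
import torch
import torch.nn as nn

# Conv-BN-ReLU block; k=3, p=1, s=1 leaves H and W unchanged.
def conv_bn_relu(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, stride=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

block = conv_bn_relu(3, 64)
y = block(torch.randn(1, 3, 32, 32))
print(y.shape)  # torch.Size([1, 64, 32, 32]) -- spatial size preserved
```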

ex-sp-ch27-03

Easy

Count the parameters in Conv2d(64, 128, 3, bias=False) and Conv2d(64, 128, 3, bias=True). Explain the difference.
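The count can be done by hand: a Conv2d weight tensor has out_ch · in_ch · kH · kW entries, and `bias=True` adds one scalar per output channel:

```python
# Conv2d parameters: out_ch * in_ch * kH * kW weights, plus out_ch biases.
def conv2d_params(in_ch, out_ch, k, bias):
    return out_ch * in_ch * k * k + (out_ch if bias else 0)

print(conv2d_params(64, 128, 3, bias=False))  # 73728
print(conv2d_params(64, 128, 3, bias=True))   # 73856 (one bias per output channel)
```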

ex-sp-ch27-04

Easy

Implement depthwise separable convolution and compare its parameter count to a standard Conv2d(64, 128, 3).
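The parameter comparison can be sketched with bias-free counts: the depthwise stage uses one k×k filter per input channel (groups = in_ch), and the pointwise stage is a 1x1 conv that mixes channels:

```python
# Standard conv vs. depthwise separable conv, parameter counts (no biases).
def standard_params(in_ch, out_ch, k):
    return out_ch * in_ch * k * k

def separable_params(in_ch, out_ch, k):
    depthwise = in_ch * k * k        # groups = in_ch: one k x k filter per channel
    pointwise = out_ch * in_ch       # 1x1 conv mixing channels
    return depthwise + pointwise

print(standard_params(64, 128, 3))   # 73728
print(separable_params(64, 128, 3))  # 576 + 8192 = 8768, roughly 8.4x fewer
```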

ex-sp-ch27-05

Easy

Compute the receptive field of a network with 5 layers of 3x3 convolutions (stride=1). Then repeat with 5x5 kernels.
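With stride 1 throughout, each layer adds (k − 1) to the receptive field; the general recurrence (tracking the cumulative stride, or "jump") is sketched below:

```python
# Receptive field of stacked convs: r grows by (k - 1) * jump per layer,
# where jump is the product of all earlier strides.
def receptive_field(layers):
    r, jump = 1, 1
    for k, s in layers:
        r += (k - 1) * jump
        jump *= s
    return r

print(receptive_field([(3, 1)] * 5))  # 11: five 3x3 layers, stride 1
print(receptive_field([(5, 1)] * 5))  # 21: five 5x5 layers, stride 1
```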

ex-sp-ch27-06

Medium

Implement a ResidualBlock with a 1x1 skip connection for the case where in_channels != out_channels or stride > 1.
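One possible shape for the block (a sketch, not the only valid design): the 1x1 projection on the skip path is applied exactly when the main path changes channel count or spatial resolution, so the two branches can be summed:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic residual block; a 1x1 projection on the skip path handles
    channel or stride mismatches."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        if stride != 1 or in_ch != out_ch:
            self.skip = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False),
                nn.BatchNorm2d(out_ch),
            )
        else:
            self.skip = nn.Identity()

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + self.skip(x))

y = ResidualBlock(64, 128, stride=2)(torch.randn(2, 64, 32, 32))
print(y.shape)  # torch.Size([2, 128, 16, 16])
```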

ex-sp-ch27-07

Medium

Build a simple CNN classifier for CIFAR-10 (32x32 RGB, 10 classes) with 3 stages of 2 residual blocks each, channels [64, 128, 256]. Train for 20 epochs and report accuracy.

ex-sp-ch27-08

Medium

Implement a 3-level U-Net and verify it produces output with the same spatial dimensions as the input for any even-sized input.

ex-sp-ch27-09

Medium

Implement DnCNN with 17 layers and train it to denoise images corrupted with Gaussian noise (σ = 25/255). Use PSNR to evaluate quality.
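A sketch of the architecture (training loop omitted): first and last layers without BatchNorm, Conv-BN-ReLU in between, and the network trained to predict the noise residual, which is then subtracted from the input. The `psnr` helper below is a straightforward definition for images in [0, 1]:

```python
import torch
import torch.nn as nn

def dncnn(depth=17, channels=64):
    """DnCNN-style denoiser for single-channel images: the network
    predicts the noise residual rather than the clean image."""
    layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
    for _ in range(depth - 2):
        layers += [nn.Conv2d(channels, channels, 3, padding=1, bias=False),
                   nn.BatchNorm2d(channels),
                   nn.ReLU(inplace=True)]
    layers.append(nn.Conv2d(channels, 1, 3, padding=1))
    return nn.Sequential(*layers)

def psnr(x, y, max_val=1.0):
    mse = torch.mean((x - y) ** 2)
    return 10 * torch.log10(max_val ** 2 / mse)

net = dncnn()
noisy = torch.rand(1, 1, 40, 40)
denoised = noisy - net(noisy)  # residual learning: subtract predicted noise
print(denoised.shape)
```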

ex-sp-ch27-10

Medium

Replace all BatchNorm layers in a ResNet with GroupNorm (32 groups) and compare training with batch size 4. Which normalisation performs better at small batch sizes?
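The swap can be done by recursive module surgery rather than rewriting the model; the sketch below (helper name `bn_to_gn` is illustrative) assumes all channel counts are divisible by the group count. GroupNorm computes statistics per sample, so it is unaffected by batch size:

```python
import torch.nn as nn

def bn_to_gn(module, num_groups=32):
    """Recursively replace every BatchNorm2d with GroupNorm.
    Assumes each layer's channel count is divisible by num_groups."""
    for name, child in module.named_children():
        if isinstance(child, nn.BatchNorm2d):
            setattr(module, name, nn.GroupNorm(num_groups, child.num_features))
        else:
            bn_to_gn(child, num_groups)
    return module

net = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU())
net = bn_to_gn(net)
print(net[1])  # now a GroupNorm(32, 64) layer
```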

ex-sp-ch27-11

Hard

Implement the Bottleneck block from ResNet-50 and build a 4-stage ResNet with [3, 4, 6, 3] bottleneck blocks. Report total parameters.
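A sketch of the bottleneck design: 1x1 reduce, 3x3 at the reduced width, 1x1 expand by a factor of 4, with a projection skip whenever the shape changes:

```python
import torch
import torch.nn as nn

class Bottleneck(nn.Module):
    """ResNet-50 style bottleneck: 1x1 reduce, 3x3, 1x1 expand (4x)."""
    expansion = 4

    def __init__(self, in_ch, mid_ch, stride=1):
        super().__init__()
        out_ch = mid_ch * self.expansion
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, 1, bias=False),
            nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, mid_ch, 3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
        )
        self.skip = nn.Identity()
        if stride != 1 or in_ch != out_ch:
            self.skip = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False),
                nn.BatchNorm2d(out_ch),
            )

    def forward(self, x):
        return torch.relu(self.body(x) + self.skip(x))

y = Bottleneck(64, 64)(torch.randn(1, 64, 56, 56))
print(y.shape)  # torch.Size([1, 256, 56, 56])
```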

ex-sp-ch27-12

Hard

Implement DRUNet with a noise level map input. Train on noise levels σ ∈ [5, 50]/255 and evaluate on unseen noise levels.

ex-sp-ch27-13

Hard

Implement dilated convolutions with rates [1, 2, 4, 8] and compute the effective receptive field. Compare to standard convolutions with the same number of layers.
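For the receptive-field part of the exercise: a kernel of size k with dilation d covers d·(k − 1) + 1 pixels, so each stride-1 layer adds d·(k − 1) to the receptive field. A quick check:

```python
# Receptive field of stacked stride-1 dilated convs:
# each layer with dilation d adds d * (k - 1).
def dilated_rf(kernel, dilations):
    r = 1
    for d in dilations:
        r += d * (kernel - 1)
    return r

print(dilated_rf(3, [1, 2, 4, 8]))  # 31: dilation rates double per layer
print(dilated_rf(3, [1, 1, 1, 1]))  # 9: standard convs, same depth
```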

ex-sp-ch27-14

Hard

Visualise learned filters of the first Conv layer of a trained CNN. Do they resemble classical edge detectors?

ex-sp-ch27-15

Challenge

Implement a CNN that takes an OFDM resource grid (subcarriers x symbols x 2 for real/imag) and estimates the channel at all positions. Compare to least-squares interpolation.

ex-sp-ch27-16

Challenge

Implement a ConvNeXt block (depthwise 7x7 conv, LayerNorm, 1x1 conv with GELU) and compare performance to a standard ResNet block on CIFAR-10.
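One possible sketch of the block, following the structure named in the exercise: the pointwise 1x1 convs are expressed as `nn.Linear` layers in channels-last layout, which is why the forward pass permutes dimensions around the MLP:

```python
import torch
import torch.nn as nn

class ConvNeXtBlock(nn.Module):
    """ConvNeXt-style block: depthwise 7x7 conv, LayerNorm in channels-last
    layout, then a 1x1 MLP (expand 4x, GELU, project back) with a residual."""
    def __init__(self, dim):
        super().__init__()
        self.dwconv = nn.Conv2d(dim, dim, 7, padding=3, groups=dim)
        self.norm = nn.LayerNorm(dim)
        self.pwconv1 = nn.Linear(dim, 4 * dim)  # 1x1 conv as Linear (channels last)
        self.act = nn.GELU()
        self.pwconv2 = nn.Linear(4 * dim, dim)

    def forward(self, x):
        residual = x
        x = self.dwconv(x)
        x = x.permute(0, 2, 3, 1)              # (N, C, H, W) -> (N, H, W, C)
        x = self.pwconv2(self.act(self.pwconv1(self.norm(x))))
        x = x.permute(0, 3, 1, 2)              # back to (N, C, H, W)
        return residual + x

y = ConvNeXtBlock(64)(torch.randn(1, 64, 32, 32))
print(y.shape)  # torch.Size([1, 64, 32, 32])
```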