Prerequisites & Notation

Before You Begin

This chapter sits at the intersection of three communities: (i) the massive MIMO system-design literature of Parts I-IV of this book, (ii) the model-based estimation and inference theory of Book FSI, and (iii) the modern deep-learning toolbox. The reader is expected to be comfortable with all three viewpoints and, crucially, with the tension between them: when a closed-form MMSE estimator exists, it beats any black-box network; when it does not, the learned alternative is the best tool we have.

  • Linear MMSE channel estimation with known spatial covariance (Review ch03)

    Self-check: Can you write the MMSE estimator $\hat{\mathbf{H}} = \boldsymbol{\Sigma}_{\ntn{ch}} (\boldsymbol{\Sigma}_{\ntn{ch}} + (\text{SNR})^{-1}\mathbf{I})^{-1} \mathbf{y}_p$ and identify the two ingredients that make it optimal?

  • FDD CSI feedback overhead, Type I / Type II codebooks, JSDM (Review ch08)

    Self-check: Can you explain why FDD massive MIMO feedback overhead scales with the product of the number of antennas and the number of quantization bits per coefficient?

  • Deep unfolding: turning an iterative algorithm into a trainable network by making each iteration a layer (Review ch18)

    Self-check: Can you describe how unrolling $K$ iterations of ISTA produces a $K$-layer network with learnable step sizes and soft-threshold parameters?

  • Approximate message passing (AMP) and orthogonal AMP (OAMP) (Review ch20)

    Self-check: Can you state the AMP iteration $\mathbf{x}^{t+1} = \eta_t(\mathbf{x}^t + \mathbf{A}^H \mathbf{r}^t)$ and explain the role of the Onsager correction term?

  • Mutual information and rate-distortion as information-theoretic primitives (Review ch13)

    Self-check: Can you write the rate-distortion function $R(D) = \min_{p(\hat{x} \mid x):\, \mathbb{E}\, d(x, \hat{x}) \leq D} I(X; \hat{X})$ and identify the optimization variable?

  • Stochastic gradient descent, backpropagation, standard DL architectures (MLP, CNN, Transformer)

    Self-check: Can you implement a minimal training loop in PyTorch or JAX with a custom loss, without relying on a high-level trainer?

  • Markov decision processes, Bellman equation, policy gradients

    Self-check: Can you write the Bellman equation $V^{\pi}(s) = \mathbb{E}[r + \gamma V^{\pi}(s^\prime) \mid s, \pi]$ and explain why policy gradient methods optimize a stochastic policy rather than a deterministic one?
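The MMSE self-check above can be exercised numerically. The sketch below builds a hypothetical spatial covariance (an exponential correlation model with illustrative parameters `M`, `rho`, `snr` chosen for this example, not taken from the chapter) and applies the estimator $\boldsymbol{\Sigma} (\boldsymbol{\Sigma} + \text{SNR}^{-1}\mathbf{I})^{-1} \mathbf{y}_p$ to a noisy pilot observation:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 8        # number of BS antennas (illustrative size)
snr = 10.0   # per-user SNR, linear scale

# Hypothetical spatial covariance: exponential correlation model
rho = 0.9
Sigma = rho ** np.abs(np.subtract.outer(np.arange(M), np.arange(M)))

# Draw a correlated channel and a noisy pilot observation y_p = h + n
L = np.linalg.cholesky(Sigma)
h = L @ rng.standard_normal(M)
y_p = h + rng.standard_normal(M) / np.sqrt(snr)

# LMMSE filter: Sigma (Sigma + SNR^{-1} I)^{-1}, applied to y_p
W = Sigma @ np.linalg.inv(Sigma + np.eye(M) / snr)
h_hat = W @ y_p

# NMSE of the estimate (single realization)
nmse = np.sum((h - h_hat) ** 2) / np.sum(h ** 2)
```

The two ingredients the self-check asks for are visible in the code: the known covariance `Sigma` and the known noise level `1/snr`.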
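The deep-unfolding self-check can likewise be made concrete. The sketch below runs $K$ ISTA iterations as an explicit loop; in a deep-unfolded (LISTA-style) network, each loop iteration becomes a layer and the per-iteration step size and threshold become trainable parameters. All problem sizes and parameter values are illustrative assumptions:

```python
import numpy as np

def soft_threshold(x, tau):
    """Elementwise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def unrolled_ista(y, A, steps, taus):
    """Run K ISTA iterations. In a deep-unfolded network, each (step, tau)
    pair is a per-layer trainable parameter instead of a fixed constant."""
    x = np.zeros(A.shape[1])
    for mu, tau in zip(steps, taus):          # one loop iteration = one layer
        x = soft_threshold(x + mu * A.T @ (y - A @ x), tau)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 40)) / np.sqrt(20)   # underdetermined system
x_true = np.zeros(40)
x_true[[3, 17, 29]] = [1.0, -2.0, 1.5]            # sparse ground truth
y = A @ x_true

K = 15
x_hat = unrolled_ista(y, A, steps=[0.1] * K, taus=[0.05] * K)
```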
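For the rate-distortion self-check, the one closed-form case worth memorizing is the Gaussian source under squared-error distortion, $R(D) = \max\{0, \tfrac{1}{2}\log_2(\sigma^2/D)\}$ bits per sample. A minimal sketch (the function name is ours, not from the chapter):

```python
import numpy as np

def gaussian_rate_distortion(var, D):
    """R(D) for a Gaussian source with variance var under squared-error
    distortion: R(D) = max(0, 0.5 * log2(var / D)) bits per sample."""
    return max(0.0, 0.5 * np.log2(var / D))
```

For example, halving the distortion from `var/2` to `var/4` costs exactly half a bit more per sample; once `D >= var`, zero bits suffice (describe the source by its mean).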
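The training-loop self-check asks for PyTorch or JAX; the framework-agnostic sketch below shows the same loop structure in plain NumPy with the gradient written by hand, using an NMSE-style loss of the kind this chapter trains with. The toy linear-regression task and all hyperparameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: learn a linear map W_true from noisy observations
W_true = rng.standard_normal((4, 4))
X = rng.standard_normal((256, 4))
Y = X @ W_true.T + 0.01 * rng.standard_normal((256, 4))

W = np.zeros((4, 4))   # model parameters theta
lr = 1.0               # learning rate

for step in range(200):                        # the training loop
    Y_hat = X @ W.T                            # forward pass
    err = Y_hat - Y
    loss = np.sum(err**2) / np.sum(Y**2)       # custom NMSE-style loss
    grad = 2.0 * err.T @ X / np.sum(Y**2)      # dL/dW, backprop by hand
    W -= lr * grad                             # SGD parameter update
```

In PyTorch the manual `grad` line is replaced by `loss.backward()` and an optimizer step; the loop skeleton is otherwise identical.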
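Finally, the Bellman self-check: for a finite MDP under a fixed policy, $V^{\pi} = \mathbf{r} + \gamma \mathbf{P} V^{\pi}$ is a linear system, so policy evaluation is a single solve. A two-state sketch with hypothetical transition probabilities and rewards:

```python
import numpy as np

# Two-state MDP under a fixed policy pi (hypothetical numbers)
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])   # P[s, s'] : transition probabilities under pi
r = np.array([1.0, 0.0])     # expected one-step reward in each state
gamma = 0.9                  # discount factor

# Bellman equation V = r + gamma P V  =>  solve (I - gamma P) V = r
V = np.linalg.solve(np.eye(2) - gamma * P, r)
```

Policy gradient methods sidestep this model-based solve: a stochastic policy $\pi_\phi(a \mid s)$ makes the expected return differentiable in $\phi$ via the log-likelihood trick, which a deterministic argmax policy does not allow.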

Notation for This Chapter

Symbols introduced or specialized in this chapter. Customizable symbols use \ntn{} tokens. Machine-learning-specific symbols (network weights, loss functions, learning rate) are written in raw LaTeX because they are not part of the massive MIMO notation registry. See the Global Notation Table for the master table.

| Symbol | Meaning | Introduced |
| --- | --- | --- |
| $\mathbf{H}$ | True channel matrix (target of estimation / feedback / prediction) | s01 |
| $\hat{\mathbf{H}}$ | Estimated or reconstructed channel matrix (output of the learned network) | s01 |
| $\mathbf{S}_{i,k}$ | Pilot matrix used for uplink training | s01 |
| $\mathbf{y}_p$ | Pilot observation vector at the BS | s01 |
| $f_\theta(\cdot)$ | Neural network with trainable parameters $\theta$ | s01 |
| $\mathcal{L}(\theta)$ | Training loss (NMSE, cross-entropy, negative log-likelihood, etc.) | s01 |
| $\text{NMSE}$ | Normalized mean-squared error $\mathbb{E} \|\mathbf{H} - \hat{\mathbf{H}}\|_F^2 / \mathbb{E} \|\mathbf{H}\|_F^2$ | s01 |
| $\mathbf{z}$ | Latent code in the CSI feedback encoder-decoder | s02 |
| $B$ | Feedback payload size in bits per channel instance | s02 |
| $R(D)$ | Rate-distortion function: minimum bits per channel sample to achieve NMSE $\leq D$ | s02 |
| $i^{\star}_t$ | Optimal beam index at time slot $t$ within a predefined codebook | s03 |
| $s_t$, $a_t$, $r_t$ | MDP state, action, and reward at time $t$ | s04 |
| $\pi_\phi(a \mid s)$ | Stochastic policy parametrized by $\phi$ (PPO actor network) | s04 |
| $V^{\pi}(s)$ | Value function of policy $\pi$ at state $s$ | s04 |
| $\text{SNR}$ | Per-user SNR (linear scale) | s01 |
| $K$ | Number of users sharing the resource block | s04 |
| $N_t$ | Number of BS transmit antennas | s01 |