Prerequisites & Notation

Before You Begin

This chapter assumes you have already studied MIMO fundamentals from the Telecom book (Chapters 15–18) and multi-user capacity from the ITA book (Chapters 13–16). The following specific tools are used without re-derivation.

  • MIMO channel model: $\mathbf{y} = \mathbf{H}\mathbf{x} + \mathbf{w}$, singular value decomposition, water-filling capacity (Review ch15)

    Self-check: Can you write the ergodic capacity of an $n_r \times n_t$ MIMO channel in terms of singular values and explain the water-filling solution?
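If the self-check feels rusty, the following minimal sketch (not from the book; matrix size, seed, and power budget are illustrative) computes the capacity of one fixed channel realization by water-filling over the singular values of $\mathbf{H}$:

```python
import numpy as np

def waterfill_capacity(H, P_total, sigma2=1.0):
    """Capacity of y = Hx + w for fixed H, via SVD + water-filling."""
    s = np.linalg.svd(H, compute_uv=False)   # singular values, descending
    g = s**2 / sigma2                        # per-eigenmode channel gains
    # Find the water level mu with k active modes: sum(mu - 1/g_i) = P_total.
    for k in range(len(g), 0, -1):
        mu = (P_total + np.sum(1.0 / g[:k])) / k
        p = mu - 1.0 / g[:k]                 # candidate power allocation
        if np.all(p > 0):                    # all k modes get positive power
            break
    return float(np.sum(np.log2(1.0 + g[:k] * p)))

rng = np.random.default_rng(0)
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
C = waterfill_capacity(H, P_total=10.0)      # bits per channel use
```

By construction, the result is never below what equal power allocation over the same eigenmodes would give.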

  • Multi-user MIMO uplink (MAC) and downlink (BC) capacity regions; dirty-paper coding is optimal but complex (Review ch16)

    Self-check: Can you sketch the sum-rate capacity of a $K$-user AWGN MAC and explain why treating interference as noise is suboptimal in general?
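A quick numerical version of this self-check for a two-user scalar AWGN MAC (powers and noise variance are illustrative values of my choosing, not from the book):

```python
import numpy as np

# Two-user scalar AWGN MAC: y = x1 + x2 + w (illustrative parameters).
P1, P2, sigma2 = 1.0, 1.0, 0.5

# Sum capacity, achievable e.g. by successive interference cancellation.
C_sum = np.log2(1.0 + (P1 + P2) / sigma2)

# Treating interference as noise: each user sees the other's power as noise.
R1_tin = np.log2(1.0 + P1 / (sigma2 + P2))
R2_tin = np.log2(1.0 + P2 / (sigma2 + P1))
R_tin = R1_tin + R2_tin                     # strictly below C_sum here
```

At these parameters the gap between `C_sum` and `R_tin` is already large, which is the point of the self-check: treating interference as noise leaves rate on the table whenever the interference is decodable.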

  • Law of large numbers and concentration inequalities: $\frac{1}{n}\sum_i X_i \to \mathbb{E}[X]$ a.s. as $n \to \infty$

    Self-check: Can you state the strong law of large numbers and give a rough argument for why sample means concentrate around their expectations?
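A one-screen empirical check of the concentration claim (distribution, seed, and sample sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.exponential(scale=2.0, size=(5, 100_000))   # E[X] = 2, five trials

# Running sample means along each row: (1/n) * sum_{i<=n} X_i.
partial_means = np.cumsum(X, axis=1) / np.arange(1, X.shape[1] + 1)

# Worst-case deviation from E[X] across the five trials, at n=100 vs n=100000.
err_small = np.abs(partial_means[:, 99] - 2.0).max()
err_large = np.abs(partial_means[:, -1] - 2.0).max()
```

The deviation shrinks roughly like $1/\sqrt{n}$, which is the behavior the massive-MIMO "channel hardening" arguments later in the chapter rely on.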

  • Linear algebra: trace, determinant, Frobenius norm, $\text{tr}(\mathbf{AB}) = \text{tr}(\mathbf{BA})$, Woodbury identity (Review ch01)

    Self-check: Can you compute $\text{tr}(\mathbf{A}^H\mathbf{A})$ and explain why it equals $\|\mathbf{A}\|_F^2$?
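Both identities are easy to verify numerically; this sketch (random matrices of my choosing) checks the cyclic trace property and the trace/Frobenius-norm identity:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 5)) + 1j * rng.standard_normal((3, 5))
B = rng.standard_normal((5, 3)) + 1j * rng.standard_normal((5, 3))

tr_AB = np.trace(A @ B)
tr_BA = np.trace(B @ A)                  # cyclic property: equal (up to roundoff)

tr_AhA = np.trace(A.conj().T @ A).real   # sum over |a_ij|^2, so real
fro_sq = np.linalg.norm(A, 'fro')**2     # Frobenius norm squared
```

Note the cyclic property holds even though $\mathbf{A}\mathbf{B}$ is $3 \times 3$ and $\mathbf{B}\mathbf{A}$ is $5 \times 5$.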

  • Channel estimation via least squares and MMSE; pilot-based training sequences (Review ch07)

    Self-check: Given $\mathbf{y} = \mathbf{h}s + \mathbf{w}$, can you derive the MMSE estimate of $\mathbf{h}$ from a training signal $s$?
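As a sketch of the answer (the prior $\mathbf{h} \sim \mathcal{CN}(\mathbf{0}, \beta\mathbf{I})$ and all numeric values are illustrative assumptions, not the book's setup): with a Gaussian prior the LMMSE estimator is the MMSE estimator, and it shrinks the least-squares estimate toward zero.

```python
import numpy as np

rng = np.random.default_rng(3)
Nt, beta, sigma2, s = 8, 1.0, 0.1, 1.0 + 0.0j   # illustrative parameters

def crandn(*shape):
    """Circularly symmetric complex Gaussian, unit variance per entry."""
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

h = np.sqrt(beta) * crandn(Nt)       # true channel, h ~ CN(0, beta I)
w = np.sqrt(sigma2) * crandn(Nt)     # noise, w ~ CN(0, sigma2 I)
y = h * s + w                        # received training observation

# LMMSE (= MMSE under the Gaussian prior): scale y by beta s* / (beta|s|^2 + sigma2).
h_hat = (beta * np.conj(s) / (beta * abs(s)**2 + sigma2)) * y

# Least-squares estimate for comparison: simply invert the known pilot.
h_ls = y / s
```

The scaling factor is strictly less than $1/|s|$, so the MMSE estimate is a shrunk version of the LS estimate; the shrinkage reflects the prior knowledge that $\mathbf{h}$ has bounded average energy.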

Notation for This Chapter

Symbols introduced in this chapter. See also the master Global Notation Table in the front matter. All customizable symbols use \ntn{} tokens; the values shown are the defaults from the notation registry.

| Symbol | Meaning | Introduced |
|---|---|---|
| $N_t$ | Number of base station antennas (also written $M$ in some literature) | s01 |
| $K$ | Number of single-antenna users | s01 |
| $\mathbf{H}$ | Uplink channel matrix, $N_t \times K$ (columns = per-user channel vectors $\mathbf{h}_k$) | s01 |
| $\mathbf{h}_k$ | $k$-th column of $\mathbf{H}$; uplink channel vector from user $k$ to the BS, $N_t \times 1$ | s01 |
| $\mathbf{w}$ | Additive white Gaussian noise vector, $\sim \mathcal{CN}(\mathbf{0}, \sigma^2\mathbf{I})$ | s01 |
| $\sigma^2$ | Noise variance per antenna | s01 |
| $\text{SNR}$ | Per-user transmit SNR, $= P_k / \sigma^2$ | s01 |
| $\mathbf{W}$ | Precoding / receive combining matrix, $N_t \times K$ | s04 |
| $\mathbf{v}$ | Beamforming / combining vector for a single user, $N_t \times 1$ | s04 |
| $\beta$ | Large-scale fading (path-loss + shadowing) coefficient for user $k$ | s02 |
| $\mathbf{R}_k$ | Spatial covariance matrix of user $k$, $\mathbb{E}[\mathbf{h}_k \mathbf{h}_k^H]$, size $N_t \times N_t$ | s03 |
| $\tau_p$ | Pilot sequence length (symbols per coherence block allocated to uplink training) | s05 |
| $\tau_c$ | Coherence block length (symbols), $\tau_c = B_c \cdot T_c$ | s05 |
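To tie several table entries together, here is a sketch (the normalization $\text{tr}(\mathbf{R}_k) = \beta N_t$ and all numeric values are my own illustrative assumptions) of drawing one correlated channel realization $\mathbf{h}_k \sim \mathcal{CN}(\mathbf{0}, \mathbf{R}_k)$ via a matrix square root of the spatial covariance:

```python
import numpy as np

rng = np.random.default_rng(4)
Nt, beta_k = 16, 0.5                     # illustrative antenna count and fading

# Build a valid (Hermitian PSD) spatial covariance, scaled so tr(R_k) = beta_k * Nt.
G = rng.standard_normal((Nt, Nt)) + 1j * rng.standard_normal((Nt, Nt))
R = G @ G.conj().T
R_k = R * (beta_k * Nt / np.trace(R).real)

# Hermitian square root of R_k via its eigendecomposition.
vals, vecs = np.linalg.eigh(R_k)
R_sqrt = vecs @ np.diag(np.sqrt(np.clip(vals, 0.0, None))) @ vecs.conj().T

# One channel realization: h_k = R_k^{1/2} g with g ~ CN(0, I).
g = (rng.standard_normal(Nt) + 1j * rng.standard_normal(Nt)) / np.sqrt(2)
h_k = R_sqrt @ g
```

Under this normalization, the per-antenna average channel gain is exactly $\beta_k$, which matches the role of the large-scale fading coefficient in the table.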