Prerequisites & Notation

Before You Begin

This chapter takes the lattice machinery of Chapters 15-16 — point-group lattices with their generator matrices and fundamental regions, nested codebooks $\Lambda_c \supset \Lambda_s$ on the AWGN channel, the Erez-Zamir mod-$\Lambda$ scheme with MMSE scaling that attains the Gaussian capacity — and shows that the same primitives give a dramatic gain in network settings. The Nazer-Gastpar compute-and-forward framework (2011) is the flagship application: instead of decoding individual users at a relay, the relay decodes a linear combination $\sum_k a_k \mathbf{u}_k$ directly from the analog superposition $\mathbf{y} = \sum_k h_k \mathbf{x}_k + \mathbf{w}$. Lattice codes make this possible because $\Lambda$ is closed under integer linear combinations. Readers need the Chapter 15 definition of a lattice and its fundamental region, the Chapter 16 construction of nested Erez-Zamir codebooks and the MMSE-scaling achievability of $C_{\rm AWGN}$, and a working knowledge of $K$-user Gaussian multiple-access channels. Familiarity with network coding (the XOR of packets in routing) helps contextualise §4 but is not required; the lattice derivation is self-contained.
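The closure property is worth seeing once in coordinates: since every lattice point is $\mathbf{G}\mathbf{z}$ for integer $\mathbf{z}$, any integer combination $a_1\mathbf{u}_1 + a_2\mathbf{u}_2 = \mathbf{G}(a_1\mathbf{z}_1 + a_2\mathbf{z}_2)$ is again a lattice point. A minimal sketch, with a hypothetical generator matrix and coefficients chosen purely for illustration:

```python
# Sketch of lattice closure under integer combinations; G, z1, z2, a1, a2
# are hypothetical values chosen for illustration.
G = [[2.0, 1.0], [0.0, 3.0]]  # generator matrix (columns are basis vectors)

def lattice_point(z):
    """Return the lattice point G z for an integer vector z."""
    return [sum(G[i][j] * z[j] for j in range(len(z))) for i in range(len(G))]

z1, z2 = [1, -2], [3, 1]
u1, u2 = lattice_point(z1), lattice_point(z2)   # two points of the lattice
a1, a2 = 2, -3                                  # integer coefficients
combo = [a1 * u1[i] + a2 * u2[i] for i in range(2)]
# The same point is reached from inside Z^n: a1*u1 + a2*u2 = G (a1*z1 + a2*z2).
assert combo == lattice_point([a1 * z1[j] + a2 * z2[j] for j in range(2)])
```

This is exactly the property the relay exploits: the combination it decodes is guaranteed to be a valid codeword-lattice point.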

  • Lattices, generator matrices, and fundamental regions (Review ch15)

    Self-check: Can you state that a lattice $\Lambda \subset \mathbb{R}^n$ is a discrete additive subgroup of full rank, write it as $\Lambda = \{\mathbf{G}\mathbf{z} : \mathbf{z} \in \mathbb{Z}^n\}$ for some invertible generator matrix $\mathbf{G}$, and identify the fundamental Voronoi region $\mathcal{V}(\Lambda)$ as the nearest-neighbour cell around $\mathbf{0}$?

  • Nested lattices and Erez-Zamir mod-$\Lambda$ coding (Review ch16)

    Self-check: Can you describe the nested construction $\Lambda_s \subset \Lambda_c$, where $\Lambda_c$ is the fine (coding) lattice and $\Lambda_s$ is the coarse (shaping) lattice, define the coset codebook $\mathcal{C} = \Lambda_c \cap \mathcal{V}(\Lambda_s)$, and explain why its rate is $R = \tfrac{1}{n}\log_2|\mathcal{C}| = \tfrac{1}{n}\log_2\bigl(\mathrm{Vol}(\mathcal{V}(\Lambda_s))/\mathrm{Vol}(\mathcal{V}(\Lambda_c))\bigr)$?

  • MMSE-scaled mod-$\Lambda$ and AWGN capacity (Review ch16)

    Self-check: Can you reproduce the Erez-Zamir achievability of $\tfrac{1}{2}\log_2(1 + \mathrm{SNR})$: at the receiver, form $\alpha^* \mathbf{y} \bmod \Lambda_s$ with $\alpha^* = \mathrm{SNR}/(1 + \mathrm{SNR})$, decode to the nearest $\Lambda_c$ point in $\mathcal{V}(\Lambda_s)$, and bound the effective noise variance by $\mathrm{SNR}/(1 + \mathrm{SNR})$?

  • $K$-user Gaussian MAC and its capacity region (Review ch07)

    Self-check: Can you state that the capacity region of the $K$-user scalar Gaussian MAC $y = \sum_{k=1}^K h_k x_k + w$ with per-user power $P$ is the polytope $\sum_{k \in S} R_k \le \tfrac{1}{2}\log_2\bigl(1 + P\sum_{k \in S} h_k^2\bigr)$ for every $S \subseteq \{1, \ldots, K\}$, and recall that the sum-rate corner is achieved by successive interference cancellation?

  • Relay channels and cut-set bounds

    Self-check: Can you state the three basic relay strategies — amplify-and-forward (relay retransmits a scaled $y$), decode-and-forward (relay decodes the message and re-encodes), and compress-and-forward (relay quantises $y$ and forwards the description) — and recall that the cut-set bound on the relay channel is tight for the degraded case (Cover-El Gamal 1979)?

  • Integer lattices $\mathbb{Z}^n$ and Gaussian integers $\mathbb{Z}[j]$ (Review ch15)

    Self-check: Can you recognise $\mathbb{Z}^n$ as the simplest lattice, state that its fundamental region is the unit cube, and identify $\mathbb{Z}[j] = \mathbb{Z} + j\mathbb{Z}$ as the Gaussian integers — the natural "integer" lattice in $\mathbb{C} \cong \mathbb{R}^2$ where the complex-valued integer coefficients $a_k \in \mathbb{Z}[j]$ of the Nazer-Gastpar framework live?

  • Shortest-vector problem and lattice reduction (Review ch15)

    Self-check: Can you state the Shortest-Vector Problem (SVP): given a lattice $\Lambda$ via its generator matrix, find $\mathbf{v}^* = \arg\min_{\mathbf{v} \in \Lambda \setminus \{\mathbf{0}\}} \|\mathbf{v}\|$? And recall that SVP is NP-hard in general but is approximated in polynomial time (to within an exponential factor) by the LLL algorithm (Lenstra-Lenstra-Lovász 1982)?
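The SVP definition in the last self-check can be made concrete in two dimensions, where exhaustive search over small integer coordinates suffices. The skewed basis below is an assumed example, not from the text; real instances need LLL or stronger reductions:

```python
# Brute-force SVP over small integer coordinates; b1, b2 form an assumed
# skewed basis whose shortest nonzero vector is b2 - b1 = (-0.1, 0.1).
import itertools

b1, b2 = (1.0, 0.0), (0.9, 0.1)

def norm2(v):
    """Squared Euclidean norm of a 2-D vector."""
    return v[0] ** 2 + v[1] ** 2

best = min(
    ((z1 * b1[0] + z2 * b2[0], z1 * b1[1] + z2 * b2[1])
     for z1, z2 in itertools.product(range(-4, 5), repeat=2)
     if (z1, z2) != (0, 0)),
    key=norm2,
)
# best has squared norm 0.02, far shorter than either basis vector.
```

The point of the example: neither basis vector is short, yet a small integer combination is. Exactly this search (in dimension $K$, on the $\alpha^*$-scaled channel lattice) is what the optimiser $\mathbf{a}^*(\mathbf{h})$ in the notation table requires.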

Notation for This Chapter

Symbols specific to compute-and-forward and its network-coding applications. The Chapter 15-16 lattice notation (lattice $\Lambda$, generator matrix $\mathbf{G}$, fundamental region $\mathcal{V}(\Lambda)$, nested codebook $\Lambda_c \supset \Lambda_s$, MMSE coefficient $\alpha^*$) continues to apply and is not repeated here. The Chapter 1-7 single-user/MAC notation ($\mathrm{SNR}$, noise $\mathbf{w}$, Gaussian noise variance $\sigma^2$) also carries over.

Each entry gives the symbol, its meaning, and the section of this chapter where it is introduced.

  • $K$: Number of users (transmitters) sharing the channel to a single relay. (§1)

  • $\mathbf{h} \in \mathbb{R}^K$: Channel vector from the $K$ transmitters to the relay; entry $h_k$ is the gain from user $k$ to the relay. Assumed real and known at the relay; the complex extension is straightforward. (§1)

  • $\mathbf{x}_k \in \mathbb{R}^n$: Lattice codeword transmitted by user $k$; drawn from a nested Erez-Zamir codebook $\Lambda_c \cap \mathcal{V}(\Lambda_s)$ and satisfying the power constraint $\mathbb{E}[\|\mathbf{x}_k\|^2] \le nP$. (§1)

  • $\mathbf{u}_k \in \Lambda_c$: Lattice point (message) of user $k$; related to the transmitted codeword by dithering: $\mathbf{x}_k = [\mathbf{u}_k - \mathbf{d}_k] \bmod \Lambda_s$. (§1)

  • $\mathbf{a} \in \mathbb{Z}^K$: Integer coefficient vector that defines the linear combination the relay wants to decode: $\sum_{k=1}^K a_k \mathbf{u}_k \bmod \Lambda_s$. (§1)

  • $R(\mathbf{h}, \mathbf{a})$: Nazer-Gastpar computation rate: the rate at which the relay can reliably decode $\sum_k a_k \mathbf{u}_k \bmod \Lambda_s$ for channel $\mathbf{h}$ and integer coefficient vector $\mathbf{a}$. (§2)

  • $\alpha^*(\mathbf{h}, \mathbf{a})$: MMSE scaling coefficient at the relay: $\alpha^* = \mathrm{SNR}\,\mathbf{h}^T\mathbf{a}/(1 + \mathrm{SNR}\|\mathbf{h}\|^2)$; minimises the effective noise variance of $\alpha \mathbf{y} - \sum_k a_k \mathbf{x}_k$. (§2)

  • $\log_2^+(x)$: Truncated logarithm: $\log_2^+(x) = \max(\log_2 x, 0)$; the Nazer-Gastpar rate is $0$ when the argument of the $\log$ falls below $1$. (§2)

  • PNC: Physical-layer network coding (Zhang-Liew-Lam 2006): the integer-coefficient $(1,1)$ special case of the Nazer-Gastpar framework applied to the two-way relay channel. (§4)

  • TWRC: Two-way relay channel: two users $A, B$ exchange messages via a single relay $R$; the canonical $K = 2$ application of compute-and-forward. (§3)

  • $\mathbf{a}^*(\mathbf{h})$: Integer-coefficient optimiser: $\mathbf{a}^*(\mathbf{h}) = \arg\max_{\mathbf{a} \in \mathbb{Z}^K \setminus \{\mathbf{0}\}} R(\mathbf{h}, \mathbf{a})$; formulated as a shortest-vector problem on an $\alpha^*$-scaled channel lattice. (§5)

  • $N_{\rm eff}$: Effective noise variance at the relay after MMSE scaling and subtraction: $N_{\rm eff} = \|\mathbf{a}\|^2 - \mathrm{SNR}\,(\mathbf{h}^T\mathbf{a})^2/(1 + \mathrm{SNR}\|\mathbf{h}\|^2)$; controls the computation rate via $R = \tfrac{1}{2}\log_2^+(1/N_{\rm eff})$. (§2)
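The notation above is already enough to compute rates numerically. The following sketch (my own illustration, assuming a real-valued channel, unit noise power, and a small exhaustive search radius) evaluates $R(\mathbf{h}, \mathbf{a}) = \tfrac{1}{2}\log_2^+(1/N_{\rm eff})$ and brute-forces $\mathbf{a}^*(\mathbf{h})$:

```python
# Computation rate R(h, a) = (1/2) log2+(1 / N_eff) and a brute-force
# search for a*(h); the example channel and search radius are assumptions.
import itertools
import math

def computation_rate(h, a, snr):
    ha = sum(hi * ai for hi, ai in zip(h, a))
    hh = sum(hi * hi for hi in h)
    aa = sum(ai * ai for ai in a)
    # N_eff = ||a||^2 - SNR (h.a)^2 / (1 + SNR ||h||^2); strictly positive
    # for any integer a != 0 by Cauchy-Schwarz.
    n_eff = aa - snr * ha ** 2 / (1.0 + snr * hh)
    return max(0.5 * math.log2(1.0 / n_eff), 0.0)  # log2+ truncation

def best_coeffs(h, snr, radius=3):
    """Exhaustive a*(h) over nonzero integer vectors with entries in [-radius, radius]."""
    cands = (a for a in itertools.product(range(-radius, radius + 1), repeat=len(h))
             if any(a))
    return max(cands, key=lambda a: computation_rate(h, a, snr))

h, snr = (1.0, 1.0), 10.0          # symmetric two-user channel
a_star = best_coeffs(h, snr)       # a and -a tie, so expect +/-(1, 1)
```

For this symmetric channel, $N_{\rm eff}$ at $\mathbf{a} = (1,1)$ is $2 - 40/21 = 2/21$, giving $R = \tfrac{1}{2}\log_2(10.5) \approx 1.70$ bits, whereas decoding either user alone ($\mathbf{a} = (1,0)$) manages only about $0.47$ bits; that gap is the gain compute-and-forward trades on.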