Exercises
ex-ch18-01
Easy: State the Nazer-Gastpar computation rate $R(\mathbf{h},\mathbf{a})$ as a function of the channel vector $\mathbf{h}$, the integer coefficient vector $\mathbf{a}$, and the SNR. Identify the MMSE scaling coefficient $\alpha^*$ that achieves it.
Use Theorem TNazer-Gastpar Computation Rate.
The rate formula uses the truncated logarithm $\log^+(x) \triangleq \max(\log x,\, 0)$ (truncation to non-negative).
The rate formula
$$R(\mathbf{h},\mathbf{a}) = \frac{1}{2}\log_2^+\!\left(\frac{P}{N_{\rm eff}(\alpha^*)}\right) = \frac{1}{2}\log_2^+\!\left(\left(\|\mathbf{a}\|^2 - \frac{\mathrm{SNR}\,(\mathbf{h}^\top\mathbf{a})^2}{1+\mathrm{SNR}\,\|\mathbf{h}\|^2}\right)^{\!-1}\right),$$
where $N_{\rm eff}(\alpha) = \alpha^2\sigma^2 + P\,\|\alpha\mathbf{h}-\mathbf{a}\|^2$ and $\mathrm{SNR} = P/\sigma^2$. The truncation is necessary because $N_{\rm eff}$ can exceed $P$ (low SNR or strong misalignment), giving a negative raw rate.
The MMSE coefficient
$$\alpha^* = \frac{P\,\mathbf{h}^\top\mathbf{a}}{\sigma^2 + P\,\|\mathbf{h}\|^2}.$$
Scalar; minimises the effective-noise variance $N_{\rm eff}(\alpha)$.
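As a numerical cross-check of the two formulas above, a minimal sketch assuming unit noise variance ($\sigma^2 = 1$, so $P = \mathrm{SNR}$); the helper names `mmse_alpha` and `computation_rate` are ours:

```python
import math

def mmse_alpha(h, a, snr):
    """MMSE scaling alpha* = SNR * h^T a / (1 + SNR * ||h||^2), with sigma^2 = 1, P = SNR."""
    ha = sum(hi * ai for hi, ai in zip(h, a))
    return snr * ha / (1.0 + snr * sum(hi * hi for hi in h))

def computation_rate(h, a, snr):
    """Nazer-Gastpar rate 0.5 * log2^+(P / N_eff(alpha*)) with P = SNR, sigma^2 = 1."""
    alpha = mmse_alpha(h, a, snr)
    n_eff = alpha**2 + snr * sum((alpha * hi - ai) ** 2 for hi, ai in zip(h, a))
    if n_eff >= snr:          # log^+ truncation: raw rate would be negative
        return 0.0
    return 0.5 * math.log2(snr / n_eff)

# Symmetric TWRC h = a = [1, 1] at SNR = 10: reproduces 0.5 * log2(1/2 + 10)
print(computation_rate([1.0, 1.0], [1, 1], 10.0))
```

On the symmetric TWRC this matches the closed form $\frac12\log_2(\frac12+\mathrm{SNR})$ used in the later exercises.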
ex-ch18-02
Easy: Verify that the Nazer-Gastpar formula reduces to the single-user AWGN capacity when $k=1$ (one user, channel gain $h$, integer coefficient $a=1$).
Compute the formula components
$\|\mathbf{a}\|^2 = 1$; $\mathbf{h}^\top\mathbf{a} = h$; $\|\mathbf{h}\|^2 = h^2$. Then $\|\mathbf{a}\|^2 - \dfrac{\mathrm{SNR}\,(\mathbf{h}^\top\mathbf{a})^2}{1+\mathrm{SNR}\,\|\mathbf{h}\|^2} = 1 - \dfrac{\mathrm{SNR}\,h^2}{1+\mathrm{SNR}\,h^2} = \dfrac{1}{1+\mathrm{SNR}\,h^2}$.
Substitute into $R$
$R = \frac{1}{2}\log_2\!\left(1 + h^2\,\mathrm{SNR}\right)$ — the single-user AWGN capacity at effective SNR $h^2\,\mathrm{SNR}$.
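The reduction can be spot-checked numerically; a sketch using the equivalent closed form of the rate (the helper name `cf_rate` is ours):

```python
import math

def cf_rate(h, a, snr):
    # equivalent closed form: R = 0.5 * log2^+( 1 / (||a||^2 - SNR (h.a)^2 / (1 + SNR ||h||^2)) )
    ha = sum(x * y for x, y in zip(h, a))
    denom = sum(y * y for y in a) - snr * ha**2 / (1.0 + snr * sum(x * x for x in h))
    return max(0.0, 0.5 * math.log2(1.0 / denom))

# k = 1, a = 1: the CF rate should equal the AWGN capacity 0.5 * log2(1 + h^2 * SNR)
for h, snr in ((1.0, 4.0), (0.5, 10.0), (2.0, 100.0)):
    print(cf_rate([h], [1], snr), 0.5 * math.log2(1.0 + h * h * snr))
```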
ex-ch18-03
Easy: Compute $R(\mathbf{h},\mathbf{a})$ for $\mathbf{h} = [1,1]^\top$, $\mathbf{a} = [1,1]^\top$ at the given SNR. This is the symmetric TWRC case with integer-coefficient $\mathbf{a} = [1,1]^\top$.
Compute norms and inner product
$\|\mathbf{h}\|^2 = 2$; $\|\mathbf{a}\|^2 = 2$; $\mathbf{h}^\top\mathbf{a} = 2$.
Compute $N_{\rm eff}$
$\|\mathbf{a}\|^2 - \dfrac{\mathrm{SNR}\,(\mathbf{h}^\top\mathbf{a})^2}{1+\mathrm{SNR}\,\|\mathbf{h}\|^2} = 2 - \dfrac{4\,\mathrm{SNR}}{1+2\,\mathrm{SNR}} = \dfrac{2}{1+2\,\mathrm{SNR}}$, so $N_{\rm eff}(\alpha^*) = \dfrac{2P}{1+2\,\mathrm{SNR}}$.
Compute $R$
$R = \frac{1}{2}\log_2\!\left(\frac{1}{2} + \mathrm{SNR}\right)$ bits per channel use per user. Compare to Thm. TNam-Chung-Lee: CF Is Within 1/2 Bit of TWRC Capacity: the rate expression $\frac{1}{2}\log_2(\frac{1}{2}+\mathrm{SNR})$ there is identical.
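Since the exercise's numeric SNR was not preserved above, here is a sketch that checks the closed form at an assumed SNR of 10 (i.e., 10 dB); `cf_rate` is our helper:

```python
import math

def cf_rate(h, a, snr):
    ha = sum(x * y for x, y in zip(h, a))
    denom = sum(y * y for y in a) - snr * ha**2 / (1.0 + snr * sum(x * x for x in h))
    return max(0.0, 0.5 * math.log2(1.0 / denom))

snr = 10.0   # assumed: 10 dB corresponds to linear SNR 10
print(cf_rate([1.0, 1.0], [1, 1], snr), 0.5 * math.log2(0.5 + snr))
```

The general formula and the symmetric closed form $\frac12\log_2(\frac12+\mathrm{SNR})$ agree at every SNR.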
ex-ch18-04
Medium: For the given asymmetric channel vector, enumerate candidate integer coefficient vectors and identify which gives the highest CF rate at the stated SNR.
Build a table of $R(\mathbf{h},\mathbf{a})$ over the candidates.
Tabulate
, denominator .
Identify the winner
The winner is the vector that exactly matches $\mathbf{h}$. A competing vector has the same norm but much worse alignment, so its rate is far lower.
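The enumeration can be automated; a sketch with a hypothetical asymmetric channel $\mathbf{h}=[1,2]^\top$ at $\mathrm{SNR}=100$ (the exercise's own numbers were not preserved):

```python
import math
from itertools import product

def cf_rate(h, a, snr):
    ha = sum(x * y for x, y in zip(h, a))
    denom = sum(y * y for y in a) - snr * ha**2 / (1.0 + snr * sum(x * x for x in h))
    return max(0.0, 0.5 * math.log2(1.0 / denom))

h, snr = [1.0, 2.0], 100.0   # hypothetical asymmetric channel and SNR
rows = []
for a in product(range(-2, 3), repeat=2):
    if a != (0, 0):
        rows.append((cf_rate(h, list(a), snr), a))
rows.sort(reverse=True)
for r, a in rows[:5]:
    print(f"a = {a}:  R = {r:.3f} bits/user")
```

The winner is $\mathbf{a}=(1,2)$, exactly matching $\mathbf{h}$, consistent with the alignment argument above.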
ex-ch18-05
Easy: Compute the MMSE scaling coefficient $\alpha^*$ for the given $\mathbf{h}$, $\mathbf{a}$ at the stated SNR. Verify it is MMSE-optimal by checking $\left.\frac{dN_{\rm eff}}{d\alpha}\right|_{\alpha=\alpha^*} = 0$.
$\alpha^* = \dfrac{P\,\mathbf{h}^\top\mathbf{a}}{\sigma^2 + P\,\|\mathbf{h}\|^2}$.
Compute the derivative and set to zero.
Formula substitution
.
Derivative check
Differentiate the effective-noise variance. With per-dimension signal power $P$ and noise variance $\sigma^2$, $N_{\rm eff}(\alpha) = \alpha^2\sigma^2 + P\,\|\alpha\mathbf{h}-\mathbf{a}\|^2$, so $\dfrac{dN_{\rm eff}}{d\alpha} = 2\alpha\sigma^2 + 2P\left(\alpha\,\|\mathbf{h}\|^2 - \mathbf{h}^\top\mathbf{a}\right)$. Setting this to zero gives $\alpha^* = \dfrac{P\,\mathbf{h}^\top\mathbf{a}}{\sigma^2 + P\,\|\mathbf{h}\|^2}$, confirming the MMSE formula.
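A finite-difference check of the stationarity condition, at assumed values $\mathbf{h}=\mathbf{a}=[1,1]^\top$, $\mathrm{SNR}=10$, $\sigma^2=1$ (the exercise's own numbers were not preserved):

```python
import math

def n_eff(alpha, h, a, snr):
    # effective-noise variance with sigma^2 = 1, per-dimension power P = SNR
    return alpha**2 + snr * sum((alpha * hi - ai) ** 2 for hi, ai in zip(h, a))

def mmse_alpha(h, a, snr):
    ha = sum(hi * ai for hi, ai in zip(h, a))
    return snr * ha / (1.0 + snr * sum(hi * hi for hi in h))

h, a, snr = [1.0, 1.0], [1, 1], 10.0   # assumed values
alpha = mmse_alpha(h, a, snr)
eps = 1e-6
deriv = (n_eff(alpha + eps, h, a, snr) - n_eff(alpha - eps, h, a, snr)) / (2 * eps)
print(alpha, deriv)   # the derivative vanishes at alpha*
```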
ex-ch18-06
Medium: Prove that $\alpha^*$ is the unique MMSE coefficient in Thm. TNazer-Gastpar Computation Rate's proof step 3, by showing that $N_{\rm eff}(\alpha)$ is strictly convex in $\alpha$.
$N_{\rm eff}(\alpha)$ is quadratic in $\alpha$.
A quadratic with positive leading coefficient is strictly convex.
Expand $N(\alpha)$
$N(\alpha) = \alpha^2\sigma^2 + P\,\|\alpha\mathbf{h}-\mathbf{a}\|^2 = \left(\sigma^2 + P\,\|\mathbf{h}\|^2\right)\alpha^2 - 2P\,(\mathbf{h}^\top\mathbf{a})\,\alpha + P\,\|\mathbf{a}\|^2.$
Second derivative and convexity
$N''(\alpha) = 2\left(\sigma^2 + P\,\|\mathbf{h}\|^2\right) > 0$ for any $\mathbf{h}$ and any $P, \sigma^2 > 0$. Hence $N(\alpha)$ is strictly convex and has a unique minimiser.
Solve for $\alpha^*$
$N'(\alpha^*) = 0 \;\Rightarrow\; \alpha^* = \dfrac{P\,\mathbf{h}^\top\mathbf{a}}{\sigma^2 + P\,\|\mathbf{h}\|^2}.$
ex-ch18-07
Medium: On the symmetric Gaussian TWRC at the stated SNR, compute the gap between the compute-and-forward rate (using $\mathbf{a} = [1,1]^\top$) and the cut-set upper bound $\frac{1}{2}\log_2(1+\mathrm{SNR})$. Verify numerically that the gap is below $\frac{1}{2}$ bit.
CF rate
$R_{\rm CF} = \frac{1}{2}\log_2\!\left(\frac{1}{2}+\mathrm{SNR}\right)$ bits per user.
Cut-set upper bound
$C_{\rm cut} = \frac{1}{2}\log_2\!\left(1+\mathrm{SNR}\right)$ bits per user.
Gap
$\Delta = C_{\rm cut} - R_{\rm CF} = \frac{1}{2}\log_2\dfrac{1+\mathrm{SNR}}{\frac{1}{2}+\mathrm{SNR}}$ bits. Gap $< \frac{1}{2}$ bit for all $\mathrm{SNR} > 0$.
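A numerical sweep confirming the sub-$\frac12$-bit gap (a minimal sketch; `gap` is our helper):

```python
import math

def gap(snr):
    # cut-set bound minus CF rate on the symmetric TWRC
    return 0.5 * math.log2(1.0 + snr) - 0.5 * math.log2(0.5 + snr)

for db in (0, 5, 10, 20, 40):
    snr = 10 ** (db / 10)
    print(f"{db:>2} dB: gap = {gap(snr):.4f} bits")
```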
ex-ch18-08
Medium: Derive the Nam-Chung-Lee $\frac{1}{2}$-bit gap explicitly: show that $\Delta(\mathrm{SNR}) = C_{\rm cut} - R_{\rm CF}$ is bounded above by $\frac{1}{2}$ for all $\mathrm{SNR} \ge 0$, with equality at $\mathrm{SNR} = 0$ and the limit $0$ approached only as $\mathrm{SNR} \to \infty$.
Write $\Delta = \frac{1}{2}\log_2\dfrac{1+\mathrm{SNR}}{\frac{1}{2}+\mathrm{SNR}}$.
Show that the argument of the log is monotonically decreasing in $\mathrm{SNR}$.
Simplify $\Delta$
$\Delta = \frac{1}{2}\log_2(1+\mathrm{SNR}) - \frac{1}{2}\log_2\!\left(\frac{1}{2}+\mathrm{SNR}\right) = \frac{1}{2}\log_2\dfrac{1+\mathrm{SNR}}{\frac{1}{2}+\mathrm{SNR}}.$
Monotonicity
The argument $\dfrac{1+\mathrm{SNR}}{\frac{1}{2}+\mathrm{SNR}}$ is strictly decreasing in $\mathrm{SNR}$: at $\mathrm{SNR}=0$ it equals $2$; as $\mathrm{SNR}\to\infty$ it tends to $1$.
Bounds
At $\mathrm{SNR}=0$: $\Delta = \frac{1}{2}$ bit. As $\mathrm{SNR}\to\infty$: $\Delta \to 0$. By strict monotonicity, $\Delta \in (0, \frac{1}{2}]$ for all $\mathrm{SNR} \ge 0$, with $\frac{1}{2}$ attained and $0$ only asymptotic.
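The monotonicity claim can be verified over an SNR grid:

```python
import math

def delta(snr):
    return 0.5 * math.log2((1.0 + snr) / (0.5 + snr))

snrs = [0.0] + [10 ** (db / 10) for db in range(-20, 41, 2)]
vals = [delta(s) for s in sorted(snrs)]
print(round(vals[0], 4), round(vals[-1], 4))        # 1/2 bit at SNR = 0, near zero at 40 dB
print(all(x > y for x, y in zip(vals, vals[1:])))   # strictly decreasing
```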
ex-ch18-09
Medium: State the integer-coefficient search as a shortest-vector problem (SVP) on an integer lattice. Specify the Gram matrix.
Re-express the rate denominator as a quadratic form in $\mathbf{a}$.
Use $(\mathbf{h}^\top\mathbf{a})^2 = \mathbf{a}^\top\mathbf{h}\mathbf{h}^\top\mathbf{a}$.
Quadratic form
$\|\mathbf{a}\|^2 - \dfrac{\mathrm{SNR}\,(\mathbf{h}^\top\mathbf{a})^2}{1+\mathrm{SNR}\,\|\mathbf{h}\|^2} = \dfrac{\mathbf{a}^\top\!\left[\left(1+\mathrm{SNR}\,\|\mathbf{h}\|^2\right)\mathbf{I} - \mathrm{SNR}\,\mathbf{h}\mathbf{h}^\top\right]\mathbf{a}}{1+\mathrm{SNR}\,\|\mathbf{h}\|^2}.$
SVP formulation
Define $\mathbf{G} = \left(1+\mathrm{SNR}\,\|\mathbf{h}\|^2\right)\mathbf{I} - \mathrm{SNR}\,\mathbf{h}\mathbf{h}^\top$. Then $R(\mathbf{h},\mathbf{a}) = \frac{1}{2}\log_2^+\dfrac{1+\mathrm{SNR}\,\|\mathbf{h}\|^2}{\mathbf{a}^\top\mathbf{G}\,\mathbf{a}}$. Maximising $R$ is equivalent to minimising $\mathbf{a}^\top\mathbf{G}\,\mathbf{a}$ over $\mathbf{a}\in\mathbb{Z}^k\setminus\{\mathbf{0}\}$ — the shortest-vector problem (SVP) on the integer lattice under the Gram matrix $\mathbf{G}$.
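A sketch verifying the quadratic-form identity and the positive-definiteness of $\mathbf{G}$ (the helper name `gram` is ours; the test channel is arbitrary):

```python
import numpy as np

def gram(h, snr):
    # G = (1 + SNR*||h||^2) I - SNR * h h^T  (positive definite for any real h)
    h = np.asarray(h, float)
    return (1.0 + snr * h @ h) * np.eye(len(h)) - snr * np.outer(h, h)

h, snr = np.array([1.0, -0.7, 2.3]), 25.0
G = gram(h, snr)
a = np.array([1.0, -2.0, 3.0])
lhs = a @ a - snr * (h @ a) ** 2 / (1.0 + snr * h @ h)   # rate denominator, direct form
rhs = a @ G @ a / (1.0 + snr * h @ h)                    # quadratic form under G
print(lhs, rhs, np.linalg.eigvalsh(G))
```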
ex-ch18-10
Medium: Verify that on the symmetric TWRC ($\mathbf{h} = [1,1]^\top$) the Gram matrix for the integer-coefficient search is $\mathbf{G} = (1+2\,\mathrm{SNR})\,\mathbf{I} - \mathrm{SNR}\,\mathbf{1}\mathbf{1}^\top$, and compute its spectrum.
$\mathbf{h} = \mathbf{1} = [1,1]^\top$, so $\mathbf{h}\mathbf{h}^\top = \mathbf{1}\mathbf{1}^\top$.
$\mathbf{1}\mathbf{1}^\top$ is a rank-1 matrix with nonzero eigenvalue $2$.
Gram matrix
$\mathbf{G} = (1+2\,\mathrm{SNR})\,\mathbf{I} - \mathrm{SNR}\,\mathbf{1}\mathbf{1}^\top$ with $\mathbf{1} = [1,1]^\top$.
Eigendecomposition
$\mathbf{1}\mathbf{1}^\top$ has eigenvalues $\{2, 0\}$ with eigenvectors $\mathbf{u}_1 = \frac{1}{\sqrt{2}}[1,1]^\top$, $\mathbf{u}_2 = \frac{1}{\sqrt{2}}[1,-1]^\top$. Hence $\mathbf{G}$ has eigenvalues $\{(1+2\,\mathrm{SNR}) - 2\,\mathrm{SNR},\; 1+2\,\mathrm{SNR}\} = \{1,\; 1+2\,\mathrm{SNR}\}$ at every SNR.
Interpretation
The small eigenvalue $1$ corresponds to the direction $\mathbf{u}_1 \propto [1,1]^\top$ — integer vectors nearly parallel to $\mathbf{h}$ have tiny $\mathbf{a}^\top\mathbf{G}\,\mathbf{a}$ and large $R$. The eigenvalue $1+2\,\mathrm{SNR}$ corresponds to the perpendicular direction $[1,-1]^\top$ — no lattice-alignment gain. The SVP solution will be an integer vector in the $[1,1]$-direction, which for $\mathbf{h} = [1,1]^\top$ is exactly $\mathbf{a}^* = [1,1]^\top$.
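The spectrum $\{1,\, 1+2\,\mathrm{SNR}\}$ is easy to confirm numerically (here at SNR $= 10$; any SNR works):

```python
import numpy as np

snr = 10.0
h = np.array([1.0, 1.0])
G = (1.0 + snr * h @ h) * np.eye(2) - snr * np.outer(h, h)
eigvals, eigvecs = np.linalg.eigh(G)   # eigh returns eigenvalues in ascending order
print(eigvals)                          # [1., 21.] at SNR = 10
print(eigvecs[:, 0])                    # small-eigenvalue direction ~ [1, 1]/sqrt(2), up to sign
```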
ex-ch18-11
Hard: Prove the MAC sum-rate constraint for a DF relay on a symmetric Gaussian TWRC: the relay can jointly decode if $R_1 + R_2 \le \frac{1}{2}\log_2(1+2\,\mathrm{SNR})$. Hence the per-user rate is bounded by $\frac{1}{4}\log_2(1+2\,\mathrm{SNR})$. Show that this is asymptotically half the CF rate at high SNR.
Apply the MAC-sum capacity bound (Chapter 7).
$\frac{1}{4}\log_2(1+2\,\mathrm{SNR}) \approx \frac{1}{4}\log_2\mathrm{SNR}$ at high $\mathrm{SNR}$.
MAC sum-capacity
Gaussian MAC $Y = X_1 + X_2 + Z$, per-user power $P$, noise variance $\sigma^2$. Joint-decoding sum-rate cap: $R_1 + R_2 \le \frac{1}{2}\log_2(1+2\,\mathrm{SNR})$.
DF per-user rate
DF splits the rates symmetrically: $R_1 = R_2 = R$. Sum: $2R \le \frac{1}{2}\log_2(1+2\,\mathrm{SNR})$, hence $R \le \frac{1}{4}\log_2(1+2\,\mathrm{SNR})$.
CF rate
$R_{\rm CF} = \frac{1}{2}\log_2\!\left(\frac{1}{2}+\mathrm{SNR}\right)$ per user.
Asymptotic comparison
At high $\mathrm{SNR}$: $R_{\rm DF} \approx \frac{1}{4}\log_2\mathrm{SNR}$; $R_{\rm CF} \approx \frac{1}{2}\log_2\mathrm{SNR}$. The ratio $R_{\rm DF}/R_{\rm CF} \to \frac{1}{2}$ as $\mathrm{SNR}\to\infty$. DF gets half the CF rate at high SNR because it decodes two messages where CF decodes only one (the sum).
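The DF/CF ratio can be tabulated; it approaches $\frac12$ from above:

```python
import math

def df_rate(snr):   # per-user DF bound: (1/4) log2(1 + 2 SNR)
    return 0.25 * math.log2(1.0 + 2.0 * snr)

def cf_rate(snr):   # per-user CF rate: (1/2) log2(1/2 + SNR)
    return 0.5 * math.log2(0.5 + snr)

for db in (10, 30, 60, 90):
    snr = 10 ** (db / 10)
    print(f"{db} dB: DF/CF = {df_rate(snr) / cf_rate(snr):.3f}")
```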
ex-ch18-12
Medium: Compute the PNC (BPSK-XOR) error probability on the symmetric Gaussian TWRC at the stated SNR. Compare to the corresponding lattice-CF rate achievable at the same SNR.
PNC analog-sum decision thresholds at $\pm 1$.
Error when noise drives the sum across a threshold; use the Gaussian tail function $Q(\cdot)$.
Noise and threshold
$Z \sim \mathcal{N}(0,\sigma^2)$ with $\sigma^2 = 1/\mathrm{SNR}$ for unit-power BPSK. Decision thresholds at $\pm 1$; mean sum values $0$ and $\pm 2$.
Error probability
For mean $+2$ (or $-2$): error when the noise drives the sum inside $(-1,+1)$, probability $\approx Q(1/\sigma)$. For mean $0$: error when $|Z| > 1$, probability $2\,Q(1/\sigma)$. Averaging over the equally likely bit pairs: $P_e \approx \frac{3}{2}\,Q(1/\sigma)$ (dominated by the mean-0 case, which occurs with probability $\frac{1}{2}$).
PNC rate
$R_{\rm PNC} \approx 1 - H_2(P_e)$ bits per user. Very close to the BPSK cap of $1$ bit.
Lattice-CF rate
$R_{\rm CF} = \frac{1}{2}\log_2\!\left(\frac{1}{2}+\mathrm{SNR}\right)$ bits per user. Lattice-CF wins by a clear margin at this SNR.
Conclusion
At the stated SNR, lattice-CF dominates PNC-BPSK in rate. The advantage grows at higher SNR because PNC is rate-capped at $1$ bit/use while lattice-CF is not.
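A sketch of the error-probability and rate comparison, at an assumed 10 dB since the exercise's SNR was not preserved; the $\frac32 Q(1/\sigma)$ approximation and the BSC-style bound $1-H_2(P_e)$ follow the averaging argument above:

```python
import math

def qfunc(x):
    # Gaussian tail Q(x) via the complementary error function
    return 0.5 * math.erfc(x / math.sqrt(2))

def h2(p):
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

snr_db = 10.0                      # assumed
snr = 10 ** (snr_db / 10)
sigma = 1.0 / math.sqrt(snr)       # unit-power BPSK
# mean-0 case (prob 1/2): error prob 2*Q(1/sigma); mean +-2 case (prob 1/2): ~Q(1/sigma)
pe = 0.5 * 2 * qfunc(1.0 / sigma) + 0.5 * qfunc(1.0 / sigma)
r_pnc = 1.0 - h2(pe)               # BSC-style rate for the XOR bit
r_cf = 0.5 * math.log2(0.5 + snr)  # lattice-CF rate at the same SNR
print(pe, r_pnc, r_cf)
```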
ex-ch18-13
Medium: State the integer-forcing linear receiver's decoding rule and verify that the overall sum-rate equals the sum of the per-row CF rates.
Each row of $\mathbf{A}$ is decoded independently as a CF problem.
After decoding all rows, invert $\mathbf{A}$ to recover the streams.
Receiver structure
For each row $m$: apply a row-specific equaliser, then run CF decoding with the effective channel and the integer vector $\mathbf{a}_m$ (the $m$-th row of $\mathbf{A}$). The decoded row is the lattice combination with coefficients $\mathbf{a}_m$.
Joint recovery via matrix inversion
Since $\mathbf{A}$ is chosen invertible, $\mathbf{A}^{-1}$ exists (over $\mathbb{Q}$; the modular inverse exists in $\mathbb{Z}_p$ when $\det\mathbf{A} \not\equiv 0 \pmod p$). Applying $\mathbf{A}^{-1}$ to the decoded combinations recovers the individual streams.
Sum-rate
Each row-$m$ CF rate is independent — the lattice points are decoded in parallel. The total rate (summing over the users' message load) is $\sum_m R(\mathbf{H},\mathbf{a}_m)$. The optimisation chooses $\mathbf{A}$ jointly to maximise this sum.
ex-ch18-14
Hard: For the given $2\times 2$ MIMO channel at the stated SNR, compute the zero-forcing sum-rate, the integer-forcing sum-rate with $\mathbf{A} = \mathbf{I}$ (the trivial choice), and the integer-forcing sum-rate with an LLL-reduced $\mathbf{A}$. Compare.
ZF rate per stream: $\frac{1}{2}\log_2(1+\mathrm{SNR}_m^{\rm ZF})$ with $\mathrm{SNR}_m^{\rm ZF} = \mathrm{SNR}/[(\mathbf{H}^\top\mathbf{H})^{-1}]_{mm}$; the noise enhancement is governed by the eigenvalues of $\mathbf{H}^\top\mathbf{H}$.
For $\mathbf{A} = \mathbf{I}$, row $m$ is just stream $m$ decoded independently.
LLL may give $\mathbf{A} \ne \mathbf{I}$ for this channel.
Channel eigenvalues
. Eigenvalues: . Condition number: .
Zero-forcing rate
Per-stream SNR: . . Per-stream SNRs: and . Rate: bits.
IF with $\mathbf{A} = \mathbf{I}$
This is identical to ZF (each stream decoded independently). IF sum-rate: bits.
LLL-reduced $\mathbf{A}$
Try . Row 1: — CF decodes . Row 2: — CF decodes . The effective channel is , . CF rate for row 2: optimised over . Numerical optimisation gives bits. Row 1: . IF sum-rate: bits.
Conclusion
IF with LLL-reduced $\mathbf{A}$ beats ZF. The gain comes from avoiding the noise enhancement of inverting the ill-conditioned $\mathbf{H}$.
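A sketch of the three-way comparison with a hypothetical ill-conditioned channel at 20 dB (the exercise's matrix was not preserved). We use the MMSE-regularised integer-forcing row rate $\frac12\log_2^+\!\big(1/(\mathbf{a}^\top(\mathbf{I}+\mathrm{SNR}\,\mathbf{H}^\top\mathbf{H})^{-1}\mathbf{a})\big)$, so $\mathbf{A}=\mathbf{I}$ behaves like a per-stream linear MMSE receiver, and a brute-force search over small integer matrices stands in for LLL reduction:

```python
import numpy as np
from itertools import product

snr = 100.0                               # 20 dB, assumed
H = np.array([[1.0, 0.95],
              [0.95, 1.0]])               # hypothetical ill-conditioned channel

# Zero-forcing: per-stream SNR = SNR / [(H^T H)^{-1}]_mm
HtH_inv = np.linalg.inv(H.T @ H)
r_zf = sum(0.5 * np.log2(1.0 + snr / HtH_inv[m, m]) for m in range(2))

# Integer-forcing row rate in MMSE form
M = np.linalg.inv(np.eye(2) + snr * H.T @ H)
def row_rate(a):
    q = np.asarray(a, float) @ M @ np.asarray(a, float)
    return max(0.0, 0.5 * np.log2(1.0 / q))

r_if_identity = row_rate([1, 0]) + row_rate([0, 1])

# brute-force search over small invertible integer matrices (stand-in for LLL)
best_rate, best_A = -1.0, None
for a1 in product(range(-3, 4), repeat=2):
    for a2 in product(range(-3, 4), repeat=2):
        A = np.array([a1, a2])
        if abs(round(np.linalg.det(A))) >= 1:     # A must be invertible
            r = row_rate(a1) + row_rate(a2)
            if r > best_rate:
                best_rate, best_A = r, A

print(f"ZF: {r_zf:.3f}  IF(A=I): {r_if_identity:.3f}  IF(best A): {best_rate:.3f}")
print(best_A)
```

The best $\mathbf{A}$ pairs a row aligned with the dominant channel direction $[1,1]$ with a unimodular companion row, recovering most of the alignment gain the ZF inverse throws away.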
ex-ch18-15
Medium: Show that PNC (BPSK, XOR decoding) is equivalent to compute-and-forward with the binary nested lattice pair $\Lambda_c = \mathbb{Z}$, $\Lambda_s = 2\mathbb{Z}$. What is the codebook cardinality? Verify the rate bound of $1$ bit per use.
BPSK constellation.
The codebook is $\Lambda_c \cap \mathcal{V}(\Lambda_s)$ for nested one-dimensional lattices.
Codebook
$\mathbb{Z}$ modulo $2\mathbb{Z}$ gives cosets $\{2\mathbb{Z},\; 1+2\mathbb{Z}\}$; representatives $\{0, 1\}$ (or $\{\pm 1\}$ for the BPSK-symmetric version). Codebook size: $2$.
Rate
$R = \log_2 2 = 1$ bit per channel use. The hard-decision decoding rule matches the XOR logic.
Equivalence
With $\mathbf{a} = [1,1]^\top$ and BPSK $x_i \in \{\pm 1\}$, the relay computes $x_1 + x_2$ and rounds to the nearest coset of $2\mathbb{Z}$ — yielding $b_1 \oplus b_2$ (for bits $b_i \in \{0,1\}$). This is the Zhang-Liew-Lam XOR rule. The Nazer-Gastpar MMSE scaling $\alpha^* \to 1$ at high SNR matches the "no scaling" assumption of PNC.
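The noiseless equivalence can be checked exhaustively over the four bit pairs; the coset re-labelling in `pnc_relay` is our choice of representative:

```python
def pnc_relay(s):
    # map the noiseless BPSK sum s in {-2, 0, +2} to its coset of 2Z,
    # re-labelled as a bit: 0 for s = +-2 (equal bits), 1 for s = 0 (unequal bits)
    return ((2 - s) // 2) % 2

for b1 in (0, 1):
    for b2 in (0, 1):
        x1, x2 = 1 - 2 * b1, 1 - 2 * b2      # BPSK map b -> x = 1 - 2b
        print(b1, b2, x1 + x2, pnc_relay(x1 + x2), b1 ^ b2)
```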
ex-ch18-16
Hard: The Nazer-Gastpar rate on the $K$-user Gaussian interference channel: each receiver treats the transmitters as a MAC and decodes with a receiver-specific integer vector $\mathbf{a}_k$. Write down the sum-rate expression and explain why this gives a within-constant-gap bound on the symmetric sum-capacity.
$\mathbf{h}_k$ is the channel vector from all transmitters to receiver $k$.
The gap to the Etkin-Tse-Wang upper bound is bounded by a constant depending only on $K$, not on SNR or $\mathbf{H}$.
Sum-rate
$R_{\rm sum} = \sum_{k=1}^{K} R(\mathbf{h}_k, \mathbf{a}_k)$, where $\mathbf{h}_k$ is the $k$-th row of the interference-channel coupling matrix $\mathbf{H}$, and $\mathbf{a}_k$ is the best integer coefficient vector for receiver $k$.
Within-constant-gap
Ordentlich-Erez-Nazer 2014 proves $R_{\rm sum} \ge C_{\rm sum} - c(K)$, where $C_{\rm sum}$ is the Etkin-Tse-Wang sum-capacity upper bound and $c(K)$ is a constant independent of the channel matrix and SNR.
Why it works
At each receiver $k$, CF decodes a lattice combination of the transmitted codewords with coefficients $\mathbf{a}_k$. Since the combinations are chosen via the integer-coefficient search (SVP on the channel Gram matrix), they align with the channel structure and absorb the interference constructively. The constant-gap term reflects the discretisation loss from restricting to integer coefficients.
ex-ch18-17
Easy: Identify the three key ingredients of the Nazer-Gastpar proof from §2 (nested codebook, MMSE scaling, mod-$\Lambda_s$ decoding) and explain how each contributes to achieving the computation rate.
Review Thm. TNazer-Gastpar Computation Rate's proof_steps.
Each ingredient has an Erez-Zamir Ch. 16 counterpart.
Nested codebook $\Lambda_c \supset \Lambda_s$
Provides the discrete hypothesis space and the "lattice closure under integer linear combination" property. All users share the codebook, so membership of the decoded integer combination in the codebook is automatic.
MMSE scaling $\alpha^*$
Minimises the effective-noise variance $N_{\rm eff}(\alpha)$. At $\alpha = \alpha^*$, the residual signal mismatch $P\,\|\alpha\mathbf{h}-\mathbf{a}\|^2$ is traded off against the scaled noise $\alpha^2\sigma^2$, giving the smallest residual.
Mod-$\Lambda_s$ decoding
Reduces the ambient space to the Voronoi region $\mathcal{V}(\Lambda_s)$, ensuring the decoded lattice point is a legal codeword. Combined with a Poltyrev-good $\Lambda_c$, the decoding error probability vanishes whenever the noise variance is below the capacity-achieving threshold.
Together
The combined protocol achieves $R(\mathbf{h},\mathbf{a}) = \frac{1}{2}\log_2^+\!\left(P/N_{\rm eff}(\alpha^*)\right)$. Each ingredient is inherited from Erez-Zamir Ch. 16; the new flavour is that the users share the codebook and the relay targets an integer-coefficient combination.
ex-ch18-18
Medium: Compute the Nazer-Gastpar rate for the given $\mathbf{h}$ at the stated SNR and find the best integer vector from the candidate set.
Tabulate $R(\mathbf{h},\mathbf{a})$ over the candidates and choose the largest.
Set up
Denominator .
Tabulate
Best choice
$\mathbf{a}^*$ attains the largest tabulated rate. Although a competing vector gives better alignment, its $\|\mathbf{a}\|^2$ penalty dominates.
ex-ch18-19
Hard: Suppose the two-way relay channel is asymmetric, with an integer-valued channel vector and users at per-user SNR 10. The optimal integer vector is $\mathbf{a} = \mathbf{h}$. Compare the CF rate to the MAC sum-capacity bound — and discuss the implication for the "within-1/2-bit" Nam-Chung-Lee bound.
When $\mathbf{a} = \mathbf{h}$ exactly, the Diophantine mismatch is zero.
The MAC sum-capacity is $\frac{1}{2}\log_2\!\left(1 + \mathrm{SNR}\,\|\mathbf{h}\|^2\right)$.
CF rate with $\mathbf{a} = \mathbf{h}$
With $\mathbf{a} = \mathbf{h}$: $\mathbf{h}^\top\mathbf{a} = \|\mathbf{h}\|^2$, so the rate denominator is $\|\mathbf{h}\|^2 - \dfrac{\mathrm{SNR}\,\|\mathbf{h}\|^4}{1+\mathrm{SNR}\,\|\mathbf{h}\|^2} = \dfrac{\|\mathbf{h}\|^2}{1+\mathrm{SNR}\,\|\mathbf{h}\|^2}$, giving $R_{\rm CF} = \frac{1}{2}\log_2\dfrac{1+\mathrm{SNR}\,\|\mathbf{h}\|^2}{\|\mathbf{h}\|^2}$ bits per user.
MAC sum-capacity
Sum-cap: $\frac{1}{2}\log_2\!\left(1+\mathrm{SNR}\,\|\mathbf{h}\|^2\right)$ bits. Per user (symmetric split): $\frac{1}{4}\log_2\!\left(1+\mathrm{SNR}\,\|\mathbf{h}\|^2\right)$ bits.
CF vs. per-user sum-cap
$R_{\rm CF}$ exceeds the per-user sum-cap split at the stated SNR. This is possible because CF is not bound by the MAC sum-cap: the relay decodes only the sum lattice point, not the individual messages, avoiding the MAC bottleneck.
Implication for Nam-Chung-Lee
The within-$\frac{1}{2}$-bit statement of Thm. TNam-Chung-Lee: CF Is Within 1/2 Bit of TWRC Capacity compares to the cut-set bound, not the MAC sum-split. The cut-set gap can exceed $\frac{1}{2}$ bit per user in this asymmetric case. The $\frac{1}{2}$-bit result only holds for the symmetric TWRC ($h_1 = h_2$).
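A sketch with a hypothetical integer channel $\mathbf{h}=[1,2]^\top$ (the exercise's vector was not preserved) at per-user SNR 10, confirming that CF with $\mathbf{a}=\mathbf{h}$ beats the symmetric MAC sum-cap split:

```python
import math

h = [1.0, 2.0]          # hypothetical integer-valued asymmetric channel
snr = 10.0              # per-user SNR 10, as stated
a = [1, 2]              # a = h: zero Diophantine mismatch

ha = sum(x * y for x, y in zip(h, a))
nh2 = sum(x * x for x in h)
denom = sum(y * y for y in a) - snr * ha**2 / (1.0 + snr * nh2)
r_cf = max(0.0, 0.5 * math.log2(1.0 / denom))       # = 0.5 * log2((1 + SNR ||h||^2) / ||h||^2)

r_mac_split = 0.25 * math.log2(1.0 + snr * nh2)     # symmetric split of the MAC sum-cap
print(r_cf, r_mac_split, r_cf > r_mac_split)
```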
ex-ch18-20
Challenge: Prove that the integer-coefficient search problem is NP-hard under randomised reductions for a generic positive-definite Gram matrix $\mathbf{G}$. Cite the classical result.
This is the Shortest-Vector Problem (SVP).
Ajtai proved NP-hardness of SVP under randomised reductions (1998), building on his 1996 worst-case-to-average-case connection.
SVP formulation
The integer-coefficient search is identical to SVP on the integer lattice with Gram matrix $\mathbf{G}$: find the shortest non-zero $\mathbf{a} \in \mathbb{Z}^k$ under the norm $\|\mathbf{a}\|_{\mathbf{G}}^2 = \mathbf{a}^\top\mathbf{G}\,\mathbf{a}$.
NP-hardness under randomised reductions
Ajtai (1998) proved that SVP is NP-hard under randomised polynomial-time reductions. His earlier (1996) worst-case-to-average-case connection shows that solving random instances of certain lattice problems would imply solving them in the worst case.
Practical consequences
For a generic $\mathbf{G}$, no polynomial-time algorithm is known to find the exact minimiser. LLL reduction gives a $2^{(k-1)/2}$-approximation in polynomial time (Lenstra-Lenstra-Lovász 1982); block Korkine-Zolotarev (BKZ) reduction improves the approximation factor at exponential cost in the block size. For the specific Gram matrix arising in CF, the structure (rank-one perturbation of identity) might enable faster specialised algorithms — this is an active research area.
ex-ch18-21
Challenge: For the $K$-user interference channel whose channel matrix is built from the all-ones matrix $\mathbf{J}$ as given, with per-user SNR of 15 dB, find the best integer vector for receiver 1. Compute the corresponding CF rate.
$\mathbf{h}_1$ is the first row of $\mathbf{H}$.
Enumerate candidate vectors $\mathbf{a}$ and select the rate maximiser.
Set up
; . . Denominator .
Enumerate candidate vectors
Try candidate vectors $\mathbf{a}$ for receiver 1 and tabulate $R(\mathbf{h}_1,\mathbf{a})$ for each; vectors related by a sign flip give the same rate.
Best choice
The winner decodes only user 1. The channel from users 2, 3 into receiver 1 is too weak for integer combinations involving them to beat simple user-1 decoding. This illustrates an important principle: the CF receiver can mix and match between CF and "decode-one-user" as a function of the channel strengths.
ex-ch18-22
Hard: Analyse the sensitivity of the CF rate to channel estimation error. Suppose the relay's estimate $\hat{\mathbf{h}}$ differs from the true channel $\mathbf{h}$ by $\boldsymbol{\delta} = \hat{\mathbf{h}} - \mathbf{h}$. Estimate the rate penalty.
The relay computes $\alpha^*$ and $\mathbf{a}$ using $\hat{\mathbf{h}}$, but the actual rate is determined by the true channel $\mathbf{h}$ with the integer vector chosen for $\hat{\mathbf{h}}$.
Linearise around the true channel.
Effect of channel mismatch
The relay picks $\mathbf{a}(\hat{\mathbf{h}})$, but the true channel calls for $\mathbf{a}(\mathbf{h})$. Since $\mathbf{a}$ is integer and the integer search is discrete, small channel errors usually don't change $\mathbf{a}$ — until $\hat{\mathbf{h}}$ crosses a decision boundary between neighbouring integer vectors.
Worst-case small error
If $\mathbf{a}$ does not change, the only loss comes from the mismatched MMSE scaling $\alpha^*(\hat{\mathbf{h}})$; since $\alpha^*$ minimises $N_{\rm eff}$, the penalty is second-order in $\|\boldsymbol{\delta}\|$ (envelope argument), so a small relative channel error gives only a small relative rate loss.
Worst case: boundary crossing
If $\mathbf{h}$ is near a decision boundary between two candidate vectors $\mathbf{a}_1$ and $\mathbf{a}_2$, a tiny error in $\hat{\mathbf{h}}$ can flip the relay's choice to a sub-optimal $\mathbf{a}$, and the rate loss at the boundary can be large. This is why practical CF requires an integer-coefficient-search margin (a "second-best" fallback vector).
Engineering implication
CF requires accurate channel estimation — pilot SNR a few dB above the data SNR. In a 5G NR pilot design, this translates to a higher pilot resource overhead for CF than for standard linear detection. A practical obstacle.
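Both regimes (smooth penalty vs. boundary flip) can be demonstrated with a hypothetical true channel $\mathbf{h}=[1,1]^\top$ at $\mathrm{SNR}=100$; `best_a` is our brute-force stand-in for the relay's integer search:

```python
import math
from itertools import product

def cf_rate(h, a, snr):
    ha = sum(x * y for x, y in zip(h, a))
    denom = sum(y * y for y in a) - snr * ha**2 / (1.0 + snr * sum(x * x for x in h))
    return max(0.0, 0.5 * math.log2(1.0 / denom))

def best_a(h, snr, rad=3):
    # brute-force integer-coefficient search, sign-normalised for a deterministic answer
    cand = max((a for a in product(range(-rad, rad + 1), repeat=len(h)) if any(a)),
               key=lambda a: cf_rate(h, list(a), snr))
    return tuple(-c for c in cand) if (cand[0] < 0 or (cand[0] == 0 and cand[1] < 0)) else cand

h_true, snr = [1.0, 1.0], 100.0                   # hypothetical true channel and SNR
for err in (0.0, 0.1, 0.5):
    h_est = [h_true[0], h_true[1] + err]          # estimation error on one coefficient
    a_hat = best_a(h_est, snr)                    # relay commits to a from the estimate
    achieved = cf_rate(h_true, list(a_hat), snr)  # but the true channel sets the rate
    print(err, a_hat, round(achieved, 3))
```

For small errors the chosen vector stays $[1,1]$ and the rate is unchanged; once the estimate drifts far enough to flip the search to a different integer vector, the achieved rate collapses, illustrating the boundary-crossing regime.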