Exercises
ex-ch24-01
Easy: Prove that feedback does not increase the capacity of a point-to-point DMC. Specifically, show that $C_{\text{FB}} = C$.
Use the chain rule and the memoryless property: $p(y_i \mid x^i, y^{i-1}) = p(y_i \mid x_i)$.
Show that $I(W; Y^n) \le \sum_{i=1}^n I(X_i; Y_i) \le nC$.
Apply Fano's inequality
$H(W \mid Y^n) \le 1 + nR\,P_e^{(n)}$, so $nR = H(W) \le I(W; Y^n) + 1 + nR\,P_e^{(n)}$.
Expand the mutual information
$I(W; Y^n) = \sum_{i=1}^n I(W; Y_i \mid Y^{i-1}) = \sum_{i=1}^n \left[ H(Y_i \mid Y^{i-1}) - H(Y_i \mid W, Y^{i-1}) \right] \le \sum_{i=1}^n \left[ H(Y_i) - H(Y_i \mid X_i) \right] = \sum_{i=1}^n I(X_i; Y_i)$, where the last step uses: (a) $X_i$ is a function of $(W, Y^{i-1})$, and (b) memorylessness $p(y_i \mid w, x^i, y^{i-1}) = p(y_i \mid x_i)$, so $H(Y_i \mid W, Y^{i-1}) = H(Y_i \mid W, Y^{i-1}, X_i) = H(Y_i \mid X_i)$.
Bound by capacity
$\sum_{i=1}^n I(X_i; Y_i) \le nC$, so $R \le C + \epsilon_n$ with $\epsilon_n \to 0$, hence $C_{\text{FB}} \le C$. Since direct coding achieves $C$ without feedback: $C_{\text{FB}} = C$.
ex-ch24-02
Easy: For the Gaussian MAC with $P_1 = P_2 = P$ and noise variance $N$, compute the sum-rate with feedback as a function of the input correlation $\rho$ and verify that the maximum sum-rate exceeds the no-feedback sum-rate.
Sum-rate with correlation $\rho$: $\frac{1}{2}\log\left(1 + \frac{2P(1+\rho)}{N}\right)$.
No-feedback sum-rate ($\rho = 0$): $\frac{1}{2}\log\left(1 + \frac{2P}{N}\right)$.
Sum-rate formula
With symmetric powers $P_1 = P_2 = P$ and correlation $\rho$: $R_1 + R_2 \le \frac{1}{2}\log\left(1 + \frac{2P(1+\rho)}{N}\right)$.
Maximize over $\rho$
The sum-rate is maximized at $\rho = 1$: $\frac{1}{2}\log\left(1 + \frac{4P}{N}\right)$. But at $\rho = 1$, the individual rate constraints (which scale with $1 - \rho^2$) are zero. For positive individual rates, the Kramer-Gastpar result characterizes the sum-rate achievable with a Schalkwijk-Kailath extension.
Comparison
No-feedback: $\frac{1}{2}\log\left(1 + \frac{2P}{N}\right)$. With feedback: $\frac{1}{2}\log\left(1 + \frac{4P}{N}\right)$. For $P/N = 100$: no-feedback $\approx 3.83$ bits, feedback $\approx 4.32$ bits. Gain: about 0.5 bits.
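A quick numeric check of these formulas (a minimal sketch; the values $P = 100$, $N = 1$ match the comparison above and are otherwise illustrative):

```python
import numpy as np

P, N = 100.0, 1.0
rho = np.linspace(0.0, 1.0, 11)
# feedback sum-rate as a function of the input correlation rho (in bits)
sum_rate = 0.5 * np.log2(1.0 + 2.0 * P * (1.0 + rho) / N)

print(sum_rate[0])    # rho = 0: no-feedback value, 0.5*log2(1 + 2P/N) ~ 3.83
print(sum_rate[-1])   # rho = 1: maximum,           0.5*log2(1 + 4P/N) ~ 4.32
assert sum_rate[-1] > sum_rate[0]
```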
ex-ch24-03
Easy: Explain why feedback does not enlarge the capacity region of the degraded broadcast channel.
In a degraded BC, the stronger receiver's output is a sufficient statistic for the weaker receiver's message.
The capacity is achieved by superposition coding, which does not benefit from feedback.
Degraded structure
In a degraded BC with $X \to Y_1 \to Y_2$ (user 1 is stronger), user 1's output contains all the information about $X$ that user 2's output has, plus more.
Feedback is redundant
The encoder already knows $X^n$, which determines $Y_1^n$ and $Y_2^n$ (up to noise). Feedback of $(Y_1, Y_2)$ provides no information beyond what the encoder already knows about the channel state (since the noise is independent of $X$). More formally: the capacity-achieving scheme (superposition coding) does not use feedback, and the converse does not improve with feedback because the degraded structure preserves the Markov chain $U \to X \to Y_1 \to Y_2$, so the bounds $R_2 \le I(U; Y_2)$ and $R_1 \le I(X; Y_1 \mid U)$ still hold, which is already the no-feedback region.
ex-ch24-04
Easy: For the Gaussian two-way channel $Y_1 = X_2 + Z_1$, $Y_2 = X_1 + Z_2$, show that user 1 can perfectly cancel its own interference, leaving an equivalent point-to-point channel $Y_1 = X_2 + Z_1$ from user 2 to user 1.
User 1 knows $X_1$ and hence could subtract it from $Y_1$. But $Y_1$ does not contain $X_1$.
Channel structure
$Y_1 = X_2 + Z_1$. Notice that $Y_1$ does not depend on $X_1$ at all! User 1's own transmission does not appear in its received signal (additive model with separate channels in each direction).
No cancellation needed
Since $Y_1$ is independent of $X_1$, there is nothing to cancel. The channel from user 2 to user 1 is simply an AWGN channel with capacity $\frac{1}{2}\log\left(1 + \frac{P_2}{N_1}\right)$, where $N_1$ is the variance of $Z_1$.
Both directions
Similarly, $Y_2 = X_1 + Z_2$ gives user 2 a clean AWGN channel. Both directions achieve full point-to-point capacity simultaneously. This is why the Gaussian additive two-way channel decomposes.
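For concreteness, the two decoupled capacities can be computed directly (a minimal sketch; the powers and noise variances are illustrative, since the exercise does not fix them):

```python
import math

P1, P2 = 5.0, 3.0   # transmit powers of user 1 and user 2 (assumed values)
N1, N2 = 1.0, 1.0   # noise variances at user 1 and user 2 (assumed values)

C_2_to_1 = 0.5 * math.log2(1.0 + P2 / N1)   # user 2 -> user 1 direction
C_1_to_2 = 0.5 * math.log2(1.0 + P1 / N2)   # user 1 -> user 2 direction
print(C_2_to_1, C_1_to_2)   # both directions run at full point-to-point capacity
```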
ex-ch24-05
Medium: For the binary erasure MAC with erasure probabilities $\epsilon_1$ and $\epsilon_2$, show that feedback strictly enlarges the capacity region. Specifically, find a rate pair that is achievable with feedback but not without.
Without feedback, the sum-rate is limited by the joint capacity.
With feedback, both encoders can use the known erasure pattern to coordinate retransmissions.
No-feedback MAC capacity
For the erasure MAC where $Y = X_1 + X_2$ when not erased (and $Y = e$ when erased): $R_1 \le I(X_1; Y \mid X_2)$, $R_2 \le I(X_2; Y \mid X_1)$, $R_1 + R_2 \le I(X_1, X_2; Y)$, all evaluated with independent inputs $p(x_1)\,p(x_2)$.
Feedback-enabled cooperation
With feedback, both encoders learn the erasure pattern. When user 1's symbol is erased but user 2's is received, both encoders know this. User 1 can retransmit its erased symbol while user 2 stays silent, avoiding collision. This scheduling is impossible without feedback because the encoders do not know the erasure pattern.
Achievable rate with feedback
With perfect scheduling: $R_k \to 1 - \epsilon_k$ for each user (full individual capacity). The sum-rate becomes $(1 - \epsilon_1) + (1 - \epsilon_2)$, which exceeds the no-feedback sum-rate when the independent-input bound $\max I(X_1, X_2; Y)$ is strictly smaller, which holds for all $\epsilon_1, \epsilon_2 \in (0, 1)$.
ex-ch24-06
Medium: In the Cover-Leung scheme, the auxiliary variable $U$ represents common information extracted from feedback. Show that setting $U = \emptyset$ (no common information) recovers the standard MAC capacity region without feedback.
With $U = \emptyset$, the conditioning on $U$ disappears from all mutual information terms.
Substitute $U = \emptyset$
The Cover-Leung region becomes: $R_1 \le I(X_1; Y \mid X_2)$, $R_2 \le I(X_2; Y \mid X_1)$, $R_1 + R_2 \le I(X_1, X_2; Y)$, with the input distributions $p(x_1)$ and $p(x_2)$ (independent of each other).
This is the standard MAC region
With $U = \emptyset$, the inputs are independent ($p(x_1, x_2) = p(x_1)\,p(x_2)$), and the rate constraints are exactly the no-feedback MAC capacity region. The Cover-Leung bound generalizes the standard result by allowing correlated inputs through a non-trivial $U$.
ex-ch24-07
Medium: For the Gaussian MAC with feedback, the Ozarow capacity region uses correlation $\rho$ between the inputs. Show that the individual rate constraints decrease monotonically in $\rho$, while the sum-rate constraint increases in $\rho$ for $\rho \in [0, 1]$.
For the individual rate: $1 - \rho^2$ decreases as $\rho$ increases.
For the sum-rate: $2\rho\sqrt{P_1 P_2}$ increases in $\rho$.
Individual rates
$R_k \le \frac{1}{2}\log\left(1 + \frac{P_k(1 - \rho^2)}{N}\right)$. Since $1 - \rho^2$ decreases as $\rho$ increases (from 1 at $\rho = 0$ to 0 at $\rho = 1$), the individual rate constraints become more restrictive with increasing $\rho$. At $\rho = 1$: $R_k \le 0$.
Sum-rate
$R_1 + R_2 \le \frac{1}{2}\log\left(1 + \frac{P_1 + P_2 + 2\rho\sqrt{P_1 P_2}}{N}\right)$. For $\rho \in [0, 1]$: $\frac{d}{d\rho}\left(2\rho\sqrt{P_1 P_2}\right) = 2\sqrt{P_1 P_2} > 0$, so the argument of the log increases. The sum-rate constraint relaxes with increasing $\rho$.
Trade-off
The optimal $\rho$ trades off individual rates (which decrease with $\rho$) against sum-rate (which increases with $\rho$). The capacity region is the union over all $\rho \in [0, 1]$, capturing both extremes.
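A quick numeric check of both monotonicity claims (a minimal sketch; the powers $P_1 = 4$, $P_2 = 2$, $N = 1$ are illustrative):

```python
import numpy as np

P1, P2, N = 4.0, 2.0, 1.0
rho = np.linspace(0.0, 1.0, 6)

# individual constraint for user 1 and the sum-rate constraint, both in bits
R1_max   = 0.5 * np.log2(1.0 + P1 * (1.0 - rho**2) / N)
Rsum_max = 0.5 * np.log2(1.0 + (P1 + P2 + 2.0 * rho * np.sqrt(P1 * P2)) / N)

assert np.all(np.diff(R1_max) < 0)     # strictly decreasing in rho
assert np.all(np.diff(Rsum_max) > 0)   # strictly increasing in rho
```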
ex-ch24-08
Medium: For the binary erasure BC with $\epsilon_1 = 0.2$ and $\epsilon_2 = 0.4$: (a) Compute the no-feedback capacity region. (b) Compute the sum-rate improvement from the XOR trick.
No-feedback sum-rate: $1 - \epsilon_1\epsilon_2 = 0.92$.
XOR gain: $\min\{(1-\epsilon_1)\epsilon_2,\ (1-\epsilon_2)\epsilon_1\} = 0.12$.
(a) No-feedback region
$R_1 \le 1 - \epsilon_1 = 0.8$, $R_2 \le 1 - \epsilon_2 = 0.6$. Sum-rate: $R_1 + R_2 \le 1 - \epsilon_1\epsilon_2 = 0.92$.
(b) XOR improvement
Bits in the set where user 1 received and user 2 erased: fraction $(1-\epsilon_1)\epsilon_2 = 0.32$. Bits in the set where user 2 received and user 1 erased: fraction $(1-\epsilon_2)\epsilon_1 = 0.12$. XOR gain: $\min(0.32, 0.12) = 0.12$. Sum-rate with feedback: $0.92 + 0.12 = 1.04$.
Interpretation
The XOR trick improves the sum-rate from 0.92 to 1.04, a 13% gain. The gain is limited by $(1-\epsilon_2)\epsilon_1 = 0.12$, the smaller of the two "XOR-able" sets.
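These numbers are easy to verify (a minimal sketch using the erasure probabilities assumed above):

```python
eps1, eps2 = 0.2, 0.4   # per-user erasure probabilities assumed in this exercise

R1_max    = 1.0 - eps1            # 0.80
R2_max    = 1.0 - eps2            # 0.60
sum_no_fb = 1.0 - eps1 * eps2     # 0.92  (a bit is lost only if both links erase)

set_1_rx_2_er = (1.0 - eps1) * eps2   # 0.32  user 1 received, user 2 erased
set_2_rx_1_er = (1.0 - eps2) * eps1   # 0.12  user 2 received, user 1 erased
xor_gain = min(set_1_rx_2_er, set_2_rx_1_er)   # 0.12, limited by the smaller set

print(sum_no_fb, sum_no_fb + xor_gain)   # 0.92 -> 1.04
```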
ex-ch24-09
Medium: Show that for the symmetric Gaussian MAC ($P_1 = P_2 = P$, noise variance $N$), the feedback gain in sum-rate satisfies $\Delta \le \frac{1}{2}$ bit, and compute this gain for $P/N = 0$, $10$, and $20$ dB.
Feedback sum-rate: $\frac{1}{2}\log\left(1 + \frac{4P}{N}\right)$. No-feedback: $\frac{1}{2}\log\left(1 + \frac{2P}{N}\right)$.
Sum-rate gain formula
$\Delta = \frac{1}{2}\log\left(1 + \frac{4P}{N}\right) - \frac{1}{2}\log\left(1 + \frac{2P}{N}\right) = \frac{1}{2}\log\frac{1 + 4P/N}{1 + 2P/N} \le \frac{1}{2}\log 2 = \frac{1}{2}$ bit.
Numerical values
| $P/N$ (dB) | $P/N$ (linear) | No-FB sum-rate (bits) | FB sum-rate (bits) | Gain (bits) |
|---|---|---|---|---|
| 0 | 1 | 0.79 | 1.16 | 0.37 |
| 10 | 10 | 2.20 | 2.68 | 0.48 |
| 20 | 100 | 3.83 | 4.32 | 0.50 |
High-SNR limit
As $P/N \to \infty$: $\Delta \to \frac{1}{2}\log 2 = 0.5$ bits. The feedback gain saturates at 0.5 bits, the beamforming gain from coherent combining.
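The table can be reproduced directly (a short sketch, with rates in bits, i.e. base-2 logarithms):

```python
import math

for snr_db in (0, 10, 20):
    snr = 10.0 ** (snr_db / 10.0)
    no_fb = 0.5 * math.log2(1.0 + 2.0 * snr)   # no-feedback sum-rate
    fb    = 0.5 * math.log2(1.0 + 4.0 * snr)   # feedback sum-rate (rho = 1)
    print(f"{snr_db:>2} dB: no-FB {no_fb:.2f}, FB {fb:.2f}, gain {fb - no_fb:.2f}")
```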
ex-ch24-10
Medium: The Schalkwijk-Kailath scheme for the Gaussian channel achieves doubly exponential error decay: $P_e \le \exp\left(-c_1 e^{c_2 n}\right)$ for some constants $c_1, c_2 > 0$. Sketch the key steps of the scheme and explain why feedback enables this dramatic improvement over the ordinary exponential error decay $P_e \approx e^{-nE(R)}$.
The encoder transmits the MMSE estimation error in each round.
The estimation error variance decreases geometrically with each round.
The scheme
- Map the message $W \in \{1, \dots, 2^{nR}\}$ to a point $\theta$ on a uniform grid in $[0, 1]$.
- Round 1: transmit a scaled version of $\theta$. The receiver forms an initial estimate $\hat\theta_1$ from $Y_1$.
- Round $k$: the encoder computes the estimation error $\epsilon_{k-1} = \hat\theta_{k-1} - \theta$ (known via feedback) and transmits $X_k = \alpha_k\,\epsilon_{k-1}$, where $\alpha_k$ scales the error to meet the power constraint.
- Receiver updates: $\hat\theta_k = \hat\theta_{k-1} - \gamma_k Y_k$, the MMSE correction of its running estimate.
Error variance reduction
After each round, $\mathrm{Var}(\epsilon_k) = \frac{\mathrm{Var}(\epsilon_{k-1})}{1 + P/N}$. After $n$ rounds: $\mathrm{Var}(\epsilon_n) = \frac{\mathrm{Var}(\epsilon_0)}{(1 + P/N)^n}$. The error probability is approximately $2\,Q\!\left(\frac{2^{-nR}}{2\sqrt{\mathrm{Var}(\epsilon_n)}}\right)$: a Gaussian tail whose argument grows exponentially in $n$, hence doubly exponential decay.
Why feedback enables this
Without feedback, the encoder cannot adapt to the specific noise realization. With feedback, the encoder "zooms in" on the receiver's current estimate, transmitting increasingly refined corrections. Each round multiplies the estimation SNR by $(1 + P/N)$, so the final SNR grows exponentially in $n$; the Gaussian tail then gives doubly exponential error decay.
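A numeric illustration of the decay (a minimal sketch, assuming $2^{nR}$ equally spaced message points on $[0, 1]$, $\mathrm{Var}(\epsilon_0) = 1/12$, and the per-round variance reduction derived above; the SNR and rate values are illustrative):

```python
import math

def sk_error_probability(snr, rate, n):
    """Gaussian-tail error estimate after n rounds of the SK recursion."""
    var_n = (1.0 / 12.0) / (1.0 + snr) ** n   # estimation-error variance after n rounds
    spacing = 2.0 ** (-n * rate)              # distance between adjacent message points
    z = (spacing / 2.0) / math.sqrt(var_n)    # normalized distance to the decision boundary
    return math.erfc(z / math.sqrt(2.0))      # about 2*Q(z)

for n in (5, 10, 15, 20):
    print(n, sk_error_probability(snr=1.0, rate=0.3, n=n))
```

The error probability falls from roughly $5 \times 10^{-4}$ at $n = 5$ to below $10^{-40}$ by $n = 15$, far faster than any single exponential $e^{-nE}$.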
ex-ch24-11
Medium: In the Shayevitz-Wigger scheme for the Gaussian BC with feedback, the transmitter sends a linear combination of the two receivers' estimation errors in the retransmission phase. Explain why dirty-paper coding (DPC) is needed in this step.
The retransmission includes information about both receivers' errors.
From receiver 1's perspective, receiver 2's error component is 'known interference' at the transmitter.
The retransmission signal
The transmitter sends $X = \alpha_1 \tilde{E}_1 + \alpha_2 \tilde{E}_2$, where $\tilde{E}_k$ is receiver $k$'s estimation error (known to the transmitter via feedback).
DPC for interference management
From receiver 1's perspective: it wants to decode the $\tilde{E}_1$ component (to use as side information for refining its estimate). The component $\alpha_2 \tilde{E}_2$ is interference, but $\tilde{E}_2$ is known at the transmitter! By DPC (Costa's writing on dirty paper), the transmitter can pre-subtract its effect without rate loss, so receiver 1 sees an effective channel with only $\tilde{E}_1$ plus noise.
Mutual benefit
Similarly, receiver 2 gets DPC-cleaned access to $\tilde{E}_2$. Each receiver thus decodes its own error term free of the other's, and uses it to refine its message estimate. DPC is essential because without it, the interference from the other user's error would reduce the effective rate.
ex-ch24-12
Hard: Derive the Ozarow capacity region for the symmetric Gaussian MAC ($P_1 = P_2 = P$, noise variance $N$) with feedback by finding the optimal auxiliary variable $U$ in the Cover-Leung bound and showing that it reduces to the $\rho$-parameterized region.
Set $X_1 = \sqrt{\rho P}\,U + \sqrt{(1-\rho)P}\,V_1$ and $X_2 = \sqrt{\rho P}\,U + \sqrt{(1-\rho)P}\,V_2$ where $U, V_1, V_2 \sim \mathcal{N}(0, 1)$ are independent.
The correlation is $\mathbb{E}[X_1 X_2]/P = \rho$.
Gaussian auxiliary variable
Let $X_1 = \sqrt{\rho P}\,U + \sqrt{(1-\rho)P}\,V_1$ and $X_2 = \sqrt{\rho P}\,U + \sqrt{(1-\rho)P}\,V_2$ with $U, V_1, V_2 \sim \mathcal{N}(0, 1)$ independent of the noise $Z$ and of each other. Then $\mathbb{E}[X_1 X_2] = \rho P$, so the input correlation coefficient is $\rho$.
Individual rate
$R_1 \le I(X_1; Y \mid X_2)$ (using the jointly Gaussian correlation structure). More precisely: $\mathrm{Var}(X_1 \mid X_2) = P(1 - \rho^2)$ and $Y - X_2 = X_1 + Z$ with effective noise $Z \sim \mathcal{N}(0, N)$, giving $R_1 \le \frac{1}{2}\log\left(1 + \frac{P(1 - \rho^2)}{N}\right)$.
Sum-rate
With $\mathbb{E}[(X_1 + X_2)^2] = 2P(1 + \rho)$: $R_1 + R_2 \le I(X_1, X_2; Y) = \frac{1}{2}\log\left(1 + \frac{2P(1 + \rho)}{N}\right)$. The total sum-rate (including the common part carried by $U$) is therefore bounded by this same expression.
Union over $\rho$
Taking the union over $\rho \in [0, 1]$ (by symmetry, $\rho \ge 0$ suffices for the symmetric case) and verifying that a Gaussian $U$ is optimal completes the derivation.
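A small Monte Carlo check of the auxiliary-variable construction (a sketch; $P = 1$, $\rho = 0.4$ are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
P, rho, n = 1.0, 0.4, 200_000

# common part U and private parts V1, V2: standard normal and mutually independent
U  = rng.standard_normal(n)
V1 = rng.standard_normal(n)
V2 = rng.standard_normal(n)
X1 = np.sqrt(rho * P) * U + np.sqrt((1.0 - rho) * P) * V1
X2 = np.sqrt(rho * P) * U + np.sqrt((1.0 - rho) * P) * V2

print(np.mean(X1 * X1))   # ~ P      (power constraint met)
print(np.mean(X1 * X2))   # ~ rho*P  (desired input correlation)
```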
ex-ch24-13
Hard: Consider a two-user MAC where $Y = X_1 \oplus X_2$ (binary XOR channel, no noise). Without feedback, the sum-rate is $1$ bit per channel use. Show that with feedback, the capacity region is the full rectangle $\{(R_1, R_2): R_1 \le 1,\ R_2 \le 1\}$, so feedback doubles the sum-rate.
With feedback, encoder 2 learns $Y = X_1 \oplus X_2$. Since it knows $X_2$, it can compute $X_1 = Y \oplus X_2$.
Once encoder 2 knows $X_1$ (user 1's message), it can cooperate.
No-feedback capacity
$\max_{p(x_1, x_2)} I(X_1, X_2; Y) \le H(Y) \le 1$ bit. So $R_1 + R_2 \le 1$ (both users share a single output bit).
Feedback: round 1
Round 1: User 1 sends $a_1$ (its first message bit). User 2 sends $0$. $Y = a_1$. The receiver decodes $a_1$. With feedback, encoder 2 also learns $a_1$.
Feedback: round 2
Round 2: User 2 sends $b_1$. User 1 sends $0$. $Y = b_1$. The receiver decodes $b_1$.
Generalization
By alternating: each user sends 1 bit per 2 channel uses, giving $R_1 = R_2 = \frac{1}{2}$. But with a more clever scheme: User 1 sends $a_1$ in round 1, and both the receiver and encoder 2 learn it via feedback. In later rounds user 2 XORs its own bit with the bits it has learned from feedback to "piggyback" on user 1's transmissions. With $n$ rounds, both users achieve rate 1 bit per channel use, and the full rectangle $\{(R_1, R_2): R_1 \le 1,\ R_2 \le 1\}$ is achievable.
ex-ch24-14
Hard: For the binary multiplying two-way channel $Y_1 = Y_2 = X_1 \cdot X_2$, prove that the Shannon inner bound (independent coding) gives the capacity region and that interaction cannot improve upon it.
Show that the outer bound also gives sum-rate $1$ bit per channel use.
The output entropy is at most $1$ bit per channel use regardless of the joint input distribution.
Inner bound
With $X_1 \sim \mathrm{Bern}(1/2)$ and $X_2 = 1$ (deterministic): $Y = X_1$, and $R_1 = 1$, $R_2 = 0$. By time-sharing with the roles reversed, every pair with $R_1 + R_2 \le 1$ is achievable.
Outer bound
$nR_1 \le I(X_1^n; Y^n \mid X_2^n) \le H(Y^n)$, and similarly for $R_2$. Since $H(Y^n) \le n$: $R_1, R_2 \le 1$. But more tightly: $R_1 + R_2 \le \frac{1}{n} H(Y^n) \le 1$. (Because $Y^n$ has at most $n$ bits of entropy.)
Tightness
Inner bound = outer bound = 1 bit sum-rate. Interaction cannot help because the output has only 1 bit per channel use regardless of the input distribution.
ex-ch24-15
Hard: Show that for the Gaussian MAC with feedback, the Kramer-Gastpar sum-rate is tight (equals the capacity) by deriving a matching outer bound.
Use the entropy power inequality (EPI).
The key is to show that correlated Gaussian inputs maximize the output entropy $h(Y)$.
Outer bound via EPI
$n(R_1 + R_2) \le I(X_1^n, X_2^n; Y^n) = h(Y^n) - h(Z^n)$. By the EPI: $2^{2h(Y^n)/n} \ge 2^{2h(X_1^n + X_2^n)/n} + 2^{2h(Z^n)/n}$. But we need an upper bound on $h(Y^n)$.
Maximum entropy under power constraint
$\mathrm{Var}(X_1 + X_2) = P_1 + P_2 + 2\,\mathbb{E}[X_1 X_2] \le P_1 + P_2 + 2\sqrt{P_1 P_2}$. The inequality follows from Cauchy-Schwarz: $\mathbb{E}[X_1 X_2] \le \sqrt{P_1 P_2}$. Maximum entropy under this variance constraint: $Y$ Gaussian with variance $P_1 + P_2 + 2\sqrt{P_1 P_2} + N$.
Conclude
$R_1 + R_2 \le \frac{1}{2}\log\left(1 + \frac{P_1 + P_2 + 2\sqrt{P_1 P_2}}{N}\right)$. This matches the Kramer-Gastpar achievable rate, proving tightness.
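A quick sanity check that the $\rho$-parameterized achievable sum-rate meets this outer bound at $\rho = 1$ (a sketch; $P_1 = 3$, $P_2 = 1$, $N = 1$ are illustrative):

```python
import numpy as np

P1, P2, N = 3.0, 1.0, 1.0
rho = np.linspace(0.0, 1.0, 101)

achievable = 0.5 * np.log2(1.0 + (P1 + P2 + 2.0 * rho * np.sqrt(P1 * P2)) / N)
outer      = 0.5 * np.log2(1.0 + (P1 + P2 + 2.0 * np.sqrt(P1 * P2)) / N)

print(achievable.max(), outer)   # equal: both attained at rho = 1
```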
ex-ch24-16
Challenge: Consider a "partially interactive" two-way channel where user 1 can adapt to past observations (interactive encoding) but user 2 uses non-adaptive encoding (one-shot). Show that this asymmetric model can have a capacity region strictly between the one-shot (Shannon inner bound) and the fully interactive capacity region.
User 1's rate can benefit from observing $Y_1^{i-1}$; user 2's rate is limited to $I(X_2; Y_1 \mid X_1)$ with a fixed $p(x_2)$.
User 2's constraint
Since user 2 uses non-adaptive encoding: $X_{2,i}$ does not depend on $Y_2^{i-1}$. The rate is limited to $R_2 \le I(X_2; Y_1 \mid X_1)$ for some fixed $p(x_2)$.
User 1's advantage
User 1 can adapt: $X_{1,i} = f_i(W_1, Y_1^{i-1})$. Since $Y_1^{i-1}$ depends on $X_2^{i-1}$ (which is fixed by user 2's one-shot code), user 1 can learn about $X_2$ through $Y_1^{i-1}$ and adapt its encoding to create useful correlation.
Capacity region
User 1's rate can exceed the Shannon inner bound because it can create input correlation through adaptation. But user 2 cannot do the same. The resulting region is asymmetric: user 1's achievable rate is larger than in the one-shot case, while user 2's rate is unchanged. This is strictly between the one-shot and fully interactive regions for channels where both directions benefit from interaction.
ex-ch24-17
Challenge: Consider the Gaussian MAC with noisy feedback: the feedback link has capacity $C_{\text{FB}}$. Show that as $C_{\text{FB}} \to \infty$, the capacity region approaches Ozarow's perfect-feedback region, and as $C_{\text{FB}} \to 0$, it reduces to the no-feedback region. Derive a lower bound on the sum-rate as a function of $C_{\text{FB}}$.
Noisy feedback limits the correlation that can be achieved.
The maximum achievable correlation scales with the feedback rate.
Feedback rate limits correlation
With feedback capacity $C_{\text{FB}}$, the encoders can exchange at most $C_{\text{FB}}$ bits per channel use of common information through the feedback. The maximum correlation coefficient scales as $\rho_{\max} \approx 1 - 2^{-2C_{\text{FB}}}$ (since $C_{\text{FB}}$ bits of feedback allow refining the common observation to $C_{\text{FB}}$ bits of precision).
Sum-rate lower bound
The sum-rate is at least $\frac{1}{2}\log\left(1 + \frac{2P(1 + \rho_{\max})}{N}\right)$ with $\rho_{\max} = 1 - 2^{-2C_{\text{FB}}}$. As $C_{\text{FB}} \to \infty$: $\rho_{\max} \to 1$, recovering Ozarow. As $C_{\text{FB}} \to 0$: $\rho_{\max} \to 0$, recovering the no-feedback sum-rate.
Practical insight
Even limited feedback provides a significant fraction of the full feedback gain. With $C_{\text{FB}} = 1$ bit per channel use, $\rho_{\max} \approx 0.75$, capturing most of the coherent combining gain. This justifies the use of low-rate feedback in practical wireless systems.
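A short sketch of this interpolation (assuming the heuristic $\rho_{\max} = 1 - 2^{-2C_{\text{FB}}}$ above; $P = 10$, $N = 1$ are illustrative):

```python
import math

P, N = 10.0, 1.0

def sum_rate_lower_bound(c_fb):
    """Sum-rate lower bound with correlation limited by the feedback rate c_fb."""
    rho_max = 1.0 - 2.0 ** (-2.0 * c_fb)
    return 0.5 * math.log2(1.0 + 2.0 * P * (1.0 + rho_max) / N)

for c_fb in (0.0, 0.5, 1.0, 2.0, 8.0):
    print(c_fb, round(sum_rate_lower_bound(c_fb), 3))
# c_fb = 0 gives the no-feedback value 0.5*log2(1 + 2P/N);
# large c_fb approaches the rho = 1 value 0.5*log2(1 + 4P/N).
```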
ex-ch24-18
Challenge: The general two-way channel capacity is unknown. Consider the modular additive two-way channel: $Y_1 = X_1 \oplus X_2$, $Y_2 = X_1 \oplus X_2$ (both outputs are the XOR) with binary inputs. Determine the capacity region and prove it is tight. Does interaction help?
Each user observes $X_1 \oplus X_2$ and knows its own input.
User 1 can compute the other user's input from $Y_1$ and $X_1$: $X_2 = Y_1 \oplus X_1$ (and symmetrically for user 2).
Without interaction: both users communicate at rate 1 per channel use.
Self-interference cancellation
User 1 observes $Y_1 = X_1 \oplus X_2$ and knows $X_1$. It computes $Y_1 \oplus X_1 = X_2$. User 1 perfectly recovers $X_2$ in one channel use. Similarly, user 2 perfectly recovers $X_1$.
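A few lines of Python make the cancellation concrete (a minimal sketch with random bit vectors):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16
X1 = rng.integers(0, 2, n)   # user 1's bits
X2 = rng.integers(0, 2, n)   # user 2's bits
Y = X1 ^ X2                  # both users observe the XOR

X2_hat = Y ^ X1              # user 1 removes its own input
X1_hat = Y ^ X2              # user 2 does the same
assert np.array_equal(X2_hat, X2) and np.array_equal(X1_hat, X1)
```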
Capacity region
Each user can transmit 1 bit per channel use and the other user decodes it perfectly. The capacity region is $\{(R_1, R_2): R_1 \le 1,\ R_2 \le 1\}$.
Converse
$R_1 \le \log_2|\mathcal{X}_1| = 1$ bit per channel use. Similarly for $R_2$. This matches the inner bound.
Interaction does not help
The capacity region is already the full rectangle $[0, 1] \times [0, 1]$, achieved without interaction. Interaction cannot improve beyond 1 bit per user per channel use (the alphabet size constraint). The modular additive two-way channel is one of the rare cases where the capacity is known exactly and equals the Shannon inner bound.