Channels with State Known at the Decoder
Side Information at the Decoder
What if the state is known at the decoder instead of (or in addition to) the encoder? This scenario arises naturally in fading channels where the receiver estimates the channel (CSIR) through pilot symbols.
The answer turns out to be much simpler than the encoder-side case: the decoder simply uses the state as additional observations, and the capacity is $C = \max_{p(x)} I(X; Y, S) = \max_{p(x)} I(X; Y \mid S)$. There is no need for binning or auxiliary variables: the state directly improves the decoder's ability to distinguish codewords.
The asymmetry between encoder-side and decoder-side state information is one of the recurring themes of multiuser information theory.
Theorem: Capacity with State Known at the Decoder
The capacity of a DMC $p(y \mid x, s)$ with i.i.d. state $S \sim p(s)$, where the state is known at the decoder (but not at the encoder), is
$$C = \max_{p(x)} I(X; Y, S) = \max_{p(x)} I(X; Y \mid S),$$
where the pair $(Y, S)$ plays the role of the channel output and the second equality follows from the independence of $X$ and $S$.
The decoder knows $s^n$ and uses both $y^n$ and $s^n$ for decoding. Since the encoder does not know $S$, the best it can do is choose $p(x)$ to maximize the mutual information as if $S$ were part of the channel output. The equality $I(X; Y, S) = I(X; Y \mid S)$ holds because $I(X; S) = 0$: by the chain rule, $I(X; Y, S) = I(X; S) + I(X; Y \mid S)$.
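To make the theorem concrete, here is a small numerical sketch (not from the text; the two-state BSC and all numbers are illustrative assumptions) that grid-searches over $p(x)$ for a binary channel whose crossover probability depends on the state, comparing the decoder-CSI capacity $\max_{p(x)} I(X; Y \mid S)$ with the no-CSI capacity $\max_{p(x)} I(X; Y)$:

```python
import numpy as np

def mutual_info(p_x, W):
    """I(X; Y) in bits for input distribution p_x and channel matrix W[x, y]."""
    p_xy = p_x[:, None] * W                     # joint p(x, y)
    p_y = p_xy.sum(axis=0)
    mask = p_xy > 0
    return float((p_xy[mask] * np.log2(p_xy[mask] /
                 (p_x[:, None] * p_y[None, :])[mask])).sum())

# Toy channel (assumed for illustration): a BSC whose crossover
# probability depends on the state S ~ Bernoulli(1/2).
p_s = np.array([0.5, 0.5])
W_s = {0: np.array([[0.9, 0.1], [0.1, 0.9]]),   # S = 0: crossover 0.1
       1: np.array([[0.6, 0.4], [0.4, 0.6]])}   # S = 1: crossover 0.4

best_csir, best_nocsi = 0.0, 0.0
for a in np.linspace(0.01, 0.99, 99):           # grid over p(x) = (a, 1 - a)
    p_x = np.array([a, 1 - a])
    # Decoder knows S: I(X; Y | S) = sum_s p(s) I(X; Y | S = s)
    i_csir = sum(p_s[s] * mutual_info(p_x, W_s[s]) for s in (0, 1))
    # No CSI: the effective channel is the state-averaged matrix
    i_nocsi = mutual_info(p_x, p_s[0] * W_s[0] + p_s[1] * W_s[1])
    best_csir, best_nocsi = max(best_csir, i_csir), max(best_nocsi, i_nocsi)

print(f"C (state at decoder) ~ {best_csir:.4f} bits")
print(f"C (no state info)    ~ {best_nocsi:.4f} bits")
assert best_csir >= best_nocsi   # decoder-side state never hurts
```

The decoder's state knowledge splits the averaged channel back into its clean and noisy components, so the CSIR value comes out strictly larger here.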
Achievability
Standard random coding: generate $2^{nR}$ codewords $x^n(m) \sim \prod_{i=1}^n p(x_i)$. The decoder finds the unique $\hat{m}$ such that $(x^n(\hat{m}), y^n, s^n)$ is jointly typical. The error probability vanishes if $R < I(X; Y, S) = I(X; Y \mid S)$.
Converse
By Fano's inequality: $nR \le I(M; Y^n, S^n) + n\epsilon_n$ with $\epsilon_n \to 0$. Since the channel is memoryless and $S^n$ is independent of the message $M$: $I(M; Y^n, S^n) \le \sum_{i=1}^n I(X_i; Y_i \mid S_i) \le n \max_{p(x)} I(X; Y \mid S)$.
Example: Gaussian Channel with State at the Decoder
For $Y = X + S + Z$ with $S \sim \mathcal{N}(0, Q)$, $Z \sim \mathcal{N}(0, N)$, input power constraint $\mathbb{E}[X^2] \le P$, and state known at the decoder only, compute the capacity.
Compute $I(X; Y \mid S)$
Given $S = s$, the channel becomes $Y - s = X + Z$, which is a standard AWGN channel with signal power $P$ and noise power $N$. Therefore:
$$C = \max_{p(x)} I(X; Y \mid S) = \frac{1}{2}\log\left(1 + \frac{P}{N}\right).$$
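As a quick numerical sanity check (the power values below are illustrative assumptions), the capacity can be evaluated directly; the point is that the result does not depend on the interference power $Q$ at all:

```python
import math

def awgn_capacity_bits(P, N):
    """Capacity 0.5 * log2(1 + P/N) of a real AWGN channel, in bits per use."""
    return 0.5 * math.log2(1 + P / N)

# Assumed illustrative numbers: signal power P = 10, noise power N = 1.
P, N = 10.0, 1.0
for Q in (0.0, 10.0, 1000.0):   # interference power never enters the formula
    print(f"Q = {Q:6.1f}:  C = {awgn_capacity_bits(P, N):.4f} bits/use")
```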
Compare with other cases
- No state information: $C = \frac{1}{2}\log\left(1 + \frac{P}{N+Q}\right)$ (the state acts as extra noise).
- State at decoder only: $C = \frac{1}{2}\log\left(1 + \frac{P}{N}\right)$.
- State at encoder only (non-causal, DPC): $C = \frac{1}{2}\log\left(1 + \frac{P}{N}\right)$.
Remarkably, for the Gaussian channel, non-causal encoder state information and decoder state information give the same capacity! This is not true in general: for discrete channels, encoder-side non-causal state can give higher capacity than decoder-side state.
Capacity with Different State Information Configurations
| Configuration | Capacity (Gaussian) | Comment |
|---|---|---|
| No state info | $\frac{1}{2}\log\left(1 + \frac{P}{N+Q}\right)$ | Interference treated as noise |
| Causal at encoder | between $\frac{1}{2}\log\left(1 + \frac{P}{N+Q}\right)$ and $\frac{1}{2}\log\left(1 + \frac{P}{N}\right)$ | Partial cancellation possible |
| Non-causal at encoder (DPC) | $\frac{1}{2}\log\left(1 + \frac{P}{N}\right)$ | Costa: interference eliminated |
| At decoder only (CSIR) | $\frac{1}{2}\log\left(1 + \frac{P}{N}\right)$ | State subtracted at decoder |
| At both (full CSI) | $\frac{1}{2}\log\left(1 + \frac{P}{N}\right)$ | Same as CSIR here: the decoder subtracts $S$ regardless |
The Asymmetry of Side Information
The comparison table reveals a subtle asymmetry:
- Decoder state never hurts (it can only improve, or at worst leave unchanged, the decoder's inference).
- Encoder state (causal) helps partially but not completely.
- Encoder state (non-causal) achieves the same as decoder state for the Gaussian channel, but via a completely different mechanism (binning vs. direct observation).
- Full CSI (state at both) is never worse than either alone, and is strictly better for channels where the encoder gains by adapting its input distribution to the state (e.g., power adaptation across fading states). For the additive Gaussian model it coincides with decoder-only CSI, since the decoder can subtract the state no matter what the encoder does.
The Gaussian case is special because DPC and CSIR give the same result. For discrete channels, the Gel'fand-Pinsker capacity can exceed the CSIR capacity: the encoder can exploit the state in ways that simply observing it at the decoder cannot.
Capacity Under Different State Information Configurations
Compare the channel capacity of the Gaussian state channel under different CSI configurations, and note how the gap between them changes with the interference power $Q$.
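As a non-interactive stand-in for this comparison, the sketch below (signal and noise powers are assumed illustrative values) tabulates the Gaussian capacities of each configuration as $Q$ grows; only the no-CSI row degrades:

```python
import math

P, N = 10.0, 1.0    # assumed signal and noise powers (illustrative)

def c_bits(snr):
    """AWGN capacity 0.5 * log2(1 + SNR) in bits per channel use."""
    return 0.5 * math.log2(1 + snr)

print(f"{'Q':>8} {'no CSI':>8} {'CSIR':>8} {'DPC':>8} {'full CSI':>9}")
for Q in (0.1, 1.0, 10.0, 100.0):
    c_none = c_bits(P / (N + Q))   # interference treated as noise
    c_csir = c_bits(P / N)         # decoder subtracts S
    c_dpc  = c_bits(P / N)         # Costa: interference pre-cancelled
    c_full = c_bits(P / N)         # same as CSIR for this additive model
    print(f"{Q:8.1f} {c_none:8.4f} {c_csir:8.4f} {c_dpc:8.4f} {c_full:9.4f}")
```

Only the no-CSI configuration depends on $Q$, so the gap between it and the other rows widens without bound as the interference grows.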
Common Mistake: Coherent Alignment Does Not Increase Capacity
Mistake:
Assuming that when the state is known at both encoder and decoder, the encoder can coherently align $X$ with $S$ and raise the capacity to $\frac{1}{2}\log\left(1 + \frac{(\sqrt{P} + \sqrt{Q})^2}{N}\right)$.
Correction:
Alignment does raise the received power to $(\sqrt{P} + \sqrt{Q})^2$, but not the information rate. The decoder already knows $S$ and subtracts it, so any power spent on the conditional mean $\mathbb{E}[X \mid S]$ is wasted for communication, and the capacity remains $\frac{1}{2}\log\left(1 + \frac{P}{N}\right)$, the same as with decoder-only CSI for this additive model. Treating $S$ as a free power source is the right intuition when delivered power is the objective, which is the principle behind energy harvesting and simultaneous wireless information and power transfer, but not when maximizing rate. For general channels (e.g., fading), adapting the input to the state can make full CSI strictly better than decoder-only CSI.
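A short calculation makes the rate-versus-power tradeoff explicit. Suppose (illustrative powers assumed below) the encoder splits its transmission as $X = \beta S + X'$ with message power $\mathbb{E}[X'^2] = P - \beta^2 Q$. Since the decoder subtracts $S$, only $X'$ carries information, so increasing the alignment coefficient $\beta$ raises the delivered power $\mathbb{E}[(X+S)^2]$ toward $(\sqrt{P} + \sqrt{Q})^2$ while lowering the rate:

```python
import math

P, Q, N = 4.0, 16.0, 1.0     # assumed input, state, and noise powers
beta_max = math.sqrt(P / Q)  # largest alignment with E[X^2] <= P

for frac in (0.0, 0.5, 1.0):
    beta = frac * beta_max
    info_power = P - beta**2 * Q                 # power left for the message
    rate = 0.5 * math.log2(1 + info_power / N)   # decoder subtracts S
    delivered = P + Q + 2 * beta * Q             # E[(X + S)^2]
    print(f"beta = {beta:.3f}: rate = {rate:.4f} bits/use, "
          f"received power = {delivered:.2f}")

# beta = 0 maximizes rate; beta = sqrt(P/Q) maximizes received power:
assert math.isclose(P + Q + 2 * beta_max * Q,
                    (math.sqrt(P) + math.sqrt(Q))**2)
```

At $\beta = 0$ the rate is maximal and the received power is $P + Q$; at $\beta = \sqrt{P/Q}$ the received power peaks at $(\sqrt{P}+\sqrt{Q})^2$ but the rate drops to zero.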
Quick Check
For $Y = X + S + Z$ with $S$ known only at the decoder, input power constraint $P$, $Z \sim \mathcal{N}(0, N)$, and $S \sim \mathcal{N}(0, Q)$, the capacity is:
$\frac{1}{2}\log\left(1 + \frac{P}{N+Q}\right)$ bits
$\frac{1}{2}\log\left(1 + \frac{P}{N}\right)$ bits
$\frac{1}{2}\log\left(1 + \frac{P+Q}{N}\right)$ bits
Depends on $Q$
When the decoder knows $S$, it simply subtracts it: $Y - S = X + Z$. The capacity is $\frac{1}{2}\log\left(1 + \frac{P}{N}\right)$ bits per channel use. The interference power $Q$ is irrelevant: the decoder removes the state perfectly.