Source Coding for the Multiple Access Channel
Why Combine Source and Channel Coding over the MAC?
In Chapters 5–8, we studied source coding in isolation: compress first, then transmit over a perfect channel. In Chapters 9–14, we studied channel coding in isolation: transmit independent messages over a noisy channel. But in many practical systems, multiple users observe correlated data and transmit it over a shared channel.
The natural question is: can we design the source compression and the channel coding independently, or must they be designed jointly? For the point-to-point case, Shannon's separation theorem tells us that separate design is optimal — we lose nothing by compressing first and then coding for the channel. But for multi-terminal systems, the answer is far more nuanced, and separation can fail in surprising ways.
Transmissible source–channel pair
A source pair is transmissible over a channel if there exists a sequence of joint source–channel codes achieving vanishing probability of error.
Theorem: Sufficient Condition for Separation over the MAC
If the Slepian–Wolf rate region for the source pair $(U_1, U_2)$ intersects the interior of the MAC capacity region, i.e., if there exist rates $(R_1, R_2)$ in the interior of the MAC capacity region such that $R_1 > H(U_1 \mid U_2)$, $R_2 > H(U_2 \mid U_1)$, and $R_1 + R_2 > H(U_1, U_2)$, then separate source and channel coding suffices: the source pair is transmissible over the MAC.
The idea is straightforward: compress each source at its Slepian–Wolf rate (exploiting the correlation at the decoder), then transmit the compressed bits reliably over the MAC. Since the compressed rates fall inside the MAC capacity region, reliable transmission is possible. The key insight is that the decoder can exploit the source correlation during decompression without the encoders needing to coordinate.
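The feasibility test in Theorem 19.1 reduces to comparing corner quantities, since both regions are described by individual- and sum-rate bounds. The sketch below is illustrative (the helper names `binary_entropy` and `separation_sufficient` are mine, not from the text):

```python
import math

def binary_entropy(p):
    """Binary entropy h(p) in bits, with h(0) = h(1) = 0."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def separation_sufficient(h1_g2, h2_g1, h12, c1, c2, c_sum):
    """True if some (R1, R2) meets the Slepian-Wolf lower bounds
    (R1 >= h1_g2, R2 >= h2_g1, R1 + R2 >= h12) while staying inside
    a MAC region of the form R1 <= c1, R2 <= c2, R1 + R2 <= c_sum.
    Both regions are axis-aligned polytopes, so comparing the
    bounding quantities is enough."""
    return h1_g2 < c1 and h2_g1 < c2 and h12 < min(c_sum, c1 + c2)

# Doubly symmetric binary sources over the binary adder MAC
# (individual capacities 1 bit, sum capacity 1.5 bits):
for p in (0.05, 0.2):
    hp = binary_entropy(p)
    print(p, separation_sufficient(hp, hp, 1 + hp, 1.0, 1.0, 1.5))
# p = 0.05: True  (1 + h(0.05) ≈ 1.29 < 1.5)
# p = 0.20: False (1 + h(0.20) ≈ 1.72 > 1.5)
```

The second case previews the example later in this section: once the joint entropy exceeds the sum capacity, the sufficient condition fails.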
Slepian–Wolf compression
By the Slepian–Wolf theorem (Chapter 7), there exist distributed source codes at rates $(R_1, R_2)$ satisfying $R_1 > H(U_1 \mid U_2)$, $R_2 > H(U_2 \mid U_1)$, and $R_1 + R_2 > H(U_1, U_2)$ such that the decoder can reconstruct $(U_1^n, U_2^n)$ from the compressed indices with $P_e^{\mathrm{SW}} \to 0$ as $n \to \infty$.
MAC channel coding
Since $(R_1, R_2)$ lies in the interior of the MAC capacity region, the MAC coding theorem (Chapter 14) guarantees the existence of channel codes that transmit the two compressed index streams reliably over the MAC with $P_e^{\mathrm{MAC}} \to 0$.
Combining via union bound
The overall error probability satisfies
$$P_e \le P_e^{\mathrm{SW}} + P_e^{\mathrm{MAC}} \to 0.$$
This completes the proof.
Sufficient but Not Necessary
The condition in Theorem 19.1 is sufficient for transmissibility, but it is not necessary in general. The reason is subtle: joint source–channel coding can sometimes exploit the correlation structure between the sources and the channel in ways that separate coding cannot. The sources' correlation provides a form of implicit cooperation between the encoders, even though they do not communicate directly.
Example: Doubly Symmetric Binary Sources over a Binary Adder MAC
Let $U_1 \sim \mathrm{Bern}(1/2)$ and $U_2 = U_1 \oplus Z$, where $Z \sim \mathrm{Bern}(p)$ is independent of $U_1$, so $H(U_2 \mid U_1) = h(p)$, where $h(\cdot)$ is the binary entropy function. The channel is a binary adder MAC: $Y = X_1 + X_2 \in \{0, 1, 2\}$, with real (not mod-2) addition.
Determine whether separate source–channel coding is optimal.
Slepian–Wolf region
The Slepian–Wolf rate region requires:
$$R_1 \ge H(U_1 \mid U_2) = h(p), \quad R_2 \ge H(U_2 \mid U_1) = h(p), \quad R_1 + R_2 \ge H(U_1, U_2) = 1 + h(p).$$
MAC capacity region
For the binary adder MAC with uniform inputs, the capacity region is:
$$R_1 \le 1, \quad R_2 \le 1, \quad R_1 + R_2 \le H(Y) = H\!\left(\tfrac14, \tfrac12, \tfrac14\right) = \tfrac32.$$
The sum-rate constraint $R_1 + R_2 \le 3/2$ is the binding one.
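As a quick numerical check of the sum-rate bound (my own sketch): with i.i.d. uniform inputs, $Y = X_1 + X_2$ takes the values $0, 1, 2$ with probabilities $1/4, 1/2, 1/4$, and because the channel is deterministic the sum rate equals $H(Y)$.

```python
import math

# Y = X1 + X2 with X1, X2 i.i.d. uniform on {0, 1}:
# P(Y=0) = 1/4, P(Y=1) = 1/2, P(Y=2) = 1/4.
p_y = [0.25, 0.5, 0.25]

# Deterministic channel, so I(X1, X2; Y) = H(Y).
sum_rate = -sum(q * math.log2(q) for q in p_y)
print(sum_rate)  # 1.5
```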
Comparison
The Slepian–Wolf sum rate $1 + h(p)$ exceeds the MAC sum capacity $3/2$ whenever $h(p) > 1/2$, i.e., for all $p \in (p_0, 1 - p_0)$, where $h(p_0) = 1/2$ gives $p_0 \approx 0.11$. In this range, the sufficient condition for separation fails.
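To pin down the range where the condition fails, one can solve $h(p_0) = 1/2$ numerically; a bisection sketch (my own code, exploiting that $h$ is strictly increasing on $(0, 1/2)$):

```python
import math

def h(p):
    """Binary entropy in bits (assumes 0 < p < 1)."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Separation's sufficient condition fails iff 1 + h(p) > 3/2, i.e. h(p) > 1/2.
# Bisect on (0, 1/2) for the threshold p0 with h(p0) = 1/2.
lo, hi = 1e-9, 0.5
for _ in range(60):
    mid = (lo + hi) / 2
    if h(mid) < 0.5:
        lo = mid
    else:
        hi = mid
p0 = (lo + hi) / 2
print(f"{p0:.4f}")  # 0.1100: the condition fails for p in (p0, 1 - p0)
```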
However, consider the uncoded joint scheme: each encoder transmits its source bit directly, i.e., $X_i = U_i$. The receiver observes $Y = U_1 + U_2 \in \{0, 1, 2\}$ and can compute $Y \bmod 2 = U_1 \oplus U_2 = Z$ perfectly. Since $U_2 = U_1 \oplus Z$, observing $Y \in \{0, 2\}$ reveals $U_1$ and hence $U_2$. But when $Y = 1$, the receiver cannot recover $U_1$ and $U_2$ individually.
This shows that separate coding fails in this range, while a joint design can still exploit the algebraic structure: uncoded transmission recovers the mod-2 sum $Z$ exactly, which suffices whenever the receiver needs only a function of the sources rather than both sources individually.
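A small Monte Carlo sketch (my own code, assuming $p = 0.2$ and the uncoded mapping $X_i = U_i$) confirms both halves of this observation: $Z = Y \bmod 2$ is recovered without error, while the symbols with $Y = 1$, a fraction of roughly $p$, leave $(U_1, U_2)$ ambiguous.

```python
import random

random.seed(0)
p = 0.2
n = 100_000
ambiguous = 0
z_errors = 0
for _ in range(n):
    u1 = random.randint(0, 1)
    z = 1 if random.random() < p else 0
    u2 = u1 ^ z
    y = u1 + u2      # uncoded transmission over the binary adder MAC
    if (y % 2) != z: # the mod-2 sum is a deterministic function of Y
        z_errors += 1
    if y == 1:       # (u1, u2) could be (0,1) or (1,0): ambiguous
        ambiguous += 1
print(z_errors, ambiguous / n)  # 0 errors; ambiguous fraction near p
```

Note that $Y = 1$ occurs exactly when $Z = 1$, so the ambiguous fraction concentrates around $p$.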
Common Mistake: Separation Always Works — Wrong for Multiuser
Mistake:
Assuming that Shannon's separation theorem (optimal for point-to-point) extends to all multi-terminal settings.
Correction:
Separation is optimal for the point-to-point case and for some specific multi-terminal settings (e.g., sending independent sources over a MAC when the Slepian–Wolf region fits inside the capacity region). But in general, joint source–channel coding can strictly outperform separate coding in multi-terminal networks. The source correlation provides implicit coordination that separate coding discards.
Historical Note: Cover, El Gamal, and Salehi (1980)
The problem of sending correlated sources over a MAC was formulated by Cover, El Gamal, and Salehi in their 1980 paper. They showed that separation is not always optimal for the MAC and gave the first sufficient conditions for transmissibility that go beyond the separation approach. Their key insight was that the correlation between sources can serve as a form of implicit cooperation between the encoders. This paper opened a rich line of research in joint source–channel coding for multi-terminal networks.
Definition: Source–Channel Bandwidth Ratio
Source–Channel Bandwidth Ratio
When the source produces $k$ symbols in the time that the channel allows $n$ channel uses, the bandwidth ratio (or bandwidth expansion factor) is $\kappa = n/k$ channel uses per source symbol. The case $\kappa = 1$ corresponds to one channel use per source symbol. When $\kappa > 1$, the channel has excess bandwidth; when $\kappa < 1$, the source rate exceeds the channel bandwidth.
The transmissibility condition for correlated sources over a MAC with bandwidth ratio $\kappa$ requires the Slepian–Wolf rates, scaled by $1/\kappa$ so that they are measured per channel use, to fit inside the MAC capacity region.
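Under this definition, the scaled sufficiency test is a one-line change to the corner comparison used for Theorem 19.1; the helper below is an illustrative sketch (the names are mine):

```python
import math

def h(p):
    """Binary entropy in bits, with h(0) = h(1) = 0."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def transmissible_by_separation(h1_g2, h2_g1, h12, c1, c2, c_sum, kappa):
    """Separation-based sufficient condition with bandwidth ratio
    kappa = n/k: the Slepian-Wolf entropies, divided by kappa so they
    are measured per channel use, must fit inside the MAC region."""
    return (h1_g2 / kappa < c1 and h2_g1 / kappa < c2
            and h12 / kappa < min(c_sum, c1 + c2))

# DSBS with p = 0.2 over the binary adder MAC: separation fails at
# kappa = 1 but succeeds with 20% excess bandwidth.
hp = h(0.2)
print(transmissible_by_separation(hp, hp, 1 + hp, 1, 1, 1.5, 1.0))  # False
print(transmissible_by_separation(hp, hp, 1 + hp, 1, 1, 1.5, 1.2))  # True
```

Excess bandwidth thus relaxes the sum-rate bottleneck: $1 + h(0.2) \approx 1.72$ exceeds $1.5$ per channel use at $\kappa = 1$, but $1.72/1.2 \approx 1.43$ fits.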
Source–channel bandwidth ratio
The ratio of channel uses to source symbols, determining how much channel resource is available per source symbol. Also called the bandwidth expansion factor.
Quick Check
Two users observe independent sources ($I(U_1; U_2) = 0$) and transmit over a MAC. Is separate source and channel coding optimal?
Yes — when sources are independent, the Slepian–Wolf rates equal the marginal entropies, and separate coding is optimal if these rates fit in the MAC region
No — joint source–channel coding is always better over a MAC
It depends on the channel
When the sources are independent, $H(U_1 \mid U_2) = H(U_1)$ and $H(U_2 \mid U_1) = H(U_2)$, so the Slepian–Wolf region reduces to independent compression at the marginal entropies. Separation is then optimal because there is no correlation to exploit jointly.
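The entropy identities behind this answer can be verified directly from a joint pmf; a minimal sketch (my own code) for two independent Bern(1/2) sources:

```python
import math

def H(dist):
    """Shannon entropy in bits of a probability vector."""
    return -sum(q * math.log2(q) for q in dist if q > 0)

# Independent Bern(1/2) sources: joint pmf is the product of marginals.
joint = [0.25, 0.25, 0.25, 0.25]  # p(u1, u2) over (0,0), (0,1), (1,0), (1,1)
H12 = H(joint)                    # H(U1, U2) = 2 bits
H1 = H([0.5, 0.5])                # H(U1) = 1 bit
H2 = H([0.5, 0.5])                # H(U2) = 1 bit

# Chain rule: H(U1|U2) = H(U1,U2) - H(U2), which equals H(U1)
# exactly when the sources are independent.
print(H12 - H2 == H1, H12 - H1 == H2)
```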
Slepian–Wolf Region vs. MAC Capacity Region
Compare the Slepian–Wolf rate region for correlated binary sources with the capacity region of a Gaussian MAC. When the SW region fits inside the MAC region, separation is optimal. Adjust the source correlation and channel SNR to see when separation fails.
Parameters: source correlation parameter, channel SNR in dB, ratio of user powers.
Key Takeaway
For correlated sources over a MAC, separate source and channel coding is sufficient when the Slepian–Wolf rate region fits inside the MAC capacity region. However, this condition is not necessary — joint source–channel coding can exploit source correlation as implicit encoder cooperation, potentially succeeding where separation fails.
Why This Matters: Correlated Sensor Data over Uplink Channels
In IoT and sensor networks, multiple devices often observe correlated physical phenomena (temperature, humidity, vibration) and transmit over a shared uplink channel. The theory in this section tells us that when the correlation is strong and the channel capacity is limited, joint source–channel coding can provide significant gains over the standard compress-then-transmit approach used in current systems. This insight is driving research in joint compression and transmission for massive machine-type communication (mMTC) in 5G and beyond.