Source Coding for the Multiple Access Channel

Why Combine Source and Channel Coding over the MAC?

In Chapters 5–8, we studied source coding in isolation: compress first, then transmit over a perfect channel. In Chapters 9–14, we studied channel coding in isolation: transmit independent messages over a noisy channel. But in many practical systems, multiple users observe correlated data and transmit it over a shared channel.

The natural question is: can we design the source compression and the channel coding independently, or must they be designed jointly? For the point-to-point case, Shannon's separation theorem tells us that separate design is optimal — we lose nothing by compressing first and then coding for the channel. But for multi-terminal systems, the answer is far more nuanced, and separation can fail in surprising ways.

Definition:

Correlated Sources over a MAC

Consider two users observing correlated source sequences $(S_1^n, S_2^n)$ drawn i.i.d. from a joint distribution $P_{S_1 S_2}$. User $k$ observes $S_k^n$ and must communicate it to a common receiver through a discrete memoryless multiple access channel $P_{Y|X_1 X_2}$.

A joint source–channel code of blocklength $n$ consists of:

  • Encoder $k$: a mapping $f_k: \mathcal{S}_k^n \to \mathcal{X}_k^n$ for $k = 1, 2$
  • Decoder: a mapping $g: \mathcal{Y}^n \to \mathcal{S}_1^n \times \mathcal{S}_2^n$

The probability of error is $P_e^{(n)} = \Pr\bigl[(S_1^n, S_2^n) \neq g(Y^n)\bigr]$. We say the source pair $(S_1, S_2)$ is transmissible over the MAC if there exists a sequence of codes with $P_e^{(n)} \to 0$.

Notice that the encoders do not communicate with each other — each encoder sees only its own source. This is the distributed nature of the problem.

Transmissible source–channel pair

A source pair $(S_1, S_2)$ is transmissible over a channel if there exists a sequence of joint source–channel codes achieving vanishing probability of error.

Theorem: Sufficient Condition for Separation over the MAC

If the Slepian–Wolf rate region of the source $(S_1, S_2)$ intersects the MAC capacity region, i.e., if there exist rates $(R_1, R_2) \in C_{\text{MAC}}$ satisfying

$$R_1 \geq H(S_1 \mid S_2), \qquad R_2 \geq H(S_2 \mid S_1), \qquad R_1 + R_2 \geq H(S_1, S_2),$$

then separate source and channel coding suffices: the source pair is transmissible over the MAC.

The idea is straightforward: compress each source at its Slepian–Wolf rate (exploiting the correlation at the decoder), then transmit the compressed bits reliably over the MAC. Since the compressed rates fall inside the MAC capacity region, reliable transmission is possible. The key insight is that the decoder can exploit the source correlation during decompression without the encoders needing to coordinate.
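As a sketch, this separation check can be automated. The interface below is illustrative, not from any library: it assumes the MAC region is summarized by a single pentagon (per-user limits plus a sum-rate limit with $R_{\text{sum}}^{\max} \le R_1^{\max} + R_2^{\max}$; general MAC regions are unions of such pentagons, so this is an approximation). Under that assumption, a nonempty intersection with the Slepian–Wolf region reduces to three comparisons.

```python
def separation_sufficient(h1_given_2, h2_given_1, h_joint,
                          r1_max, r2_max, rsum_max):
    """Test whether the Slepian-Wolf region {R1 >= H(S1|S2), R2 >= H(S2|S1),
    R1+R2 >= H(S1,S2)} meets a pentagon-shaped MAC region {R1 <= r1_max,
    R2 <= r2_max, R1+R2 <= rsum_max}, assuming rsum_max <= r1_max + r2_max.
    Because both regions are cut out by these linear constraints, the
    intersection is nonempty exactly when each Slepian-Wolf lower bound
    clears the matching MAC upper bound."""
    return (h1_given_2 <= r1_max and
            h2_given_1 <= r2_max and
            h_joint <= rsum_max)

# Illustrative numbers: SW bounds (0.3, 0.3, 1.3) fit inside a pentagon
# with per-user limits 1.0 and sum limit 1.5, so separation works there.
print(separation_sufficient(0.3, 0.3, 1.3, 1.0, 1.0, 1.5))  # True
print(separation_sufficient(0.3, 0.3, 1.7, 1.0, 1.0, 1.5))  # False
```

The three-comparison shortcut is valid because $H(S_1,S_2) \ge H(S_1|S_2) + H(S_2|S_1)$ always holds, so a feasible rate point can be constructed on the sum-rate face whenever the three inequalities pass.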

Sufficient but Not Necessary

The condition in Theorem 19.1 is sufficient for transmissibility, but it is not necessary in general. The reason is subtle: joint source–channel coding can sometimes exploit the correlation structure between the sources and the channel in ways that separate coding cannot. The sources' correlation provides a form of implicit cooperation between the encoders, even though they do not communicate directly.

Example: Doubly Symmetric Binary Sources over a Binary Adder MAC

Let $S_1 \sim \text{Bern}(1/2)$ and $S_2 = S_1 \oplus N$, where $N \sim \text{Bern}(p)$ is independent of $S_1$, so $H(S_1, S_2) = 1 + h(p)$, where $h(\cdot)$ is the binary entropy function. The channel is the binary adder MAC $Y = X_1 + X_2$, where the addition is over the reals, so $Y \in \{0, 1, 2\}$.

Determine whether separate source–channel coding is optimal.
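A small numerical sketch makes the separation side of this question concrete. It uses the standard facts for the noiseless binary adder MAC: per-user rates are at most 1 bit, and the sum rate is at most $H(Y) = 1.5$ bits per channel use (independent uniform inputs give $Y \sim (1/4, 1/2, 1/4)$). The separation condition then reads $1 + h(p) \le 1.5$, i.e. $h(p) \le 1/2$, and bisection locates the threshold on $p$.

```python
import math

def h(p):
    """Binary entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Adder MAC Y = X1 + X2: sum capacity is 1.5 bits per use, so separation
# requires H(S1, S2) = 1 + h(p) <= 1.5.
def separation_ok(p):
    return 1 + h(p) <= 1.5

# Bisect h(p) = 1/2 on (0, 1/2), where h is increasing:
lo, hi = 1e-12, 0.5
for _ in range(80):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if h(mid) < 0.5 else (lo, mid)
print(round(hi, 4))  # 0.11
```

So separation is guaranteed to work only for $p$ below roughly $0.11$ (or above $0.89$, by symmetry). In the gap, the Slepian–Wolf region misses the adder MAC region and the separation theorem gives no guarantee; whether joint coding still succeeds there is exactly what the example asks you to decide.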

Common Mistake: Separation Always Works — Wrong for Multiuser

Mistake:

Assuming that Shannon's separation theorem (optimal for point-to-point) extends to all multi-terminal settings.

Correction:

Separation is optimal for the point-to-point case and for some specific multi-terminal settings (e.g., sending independent sources over a MAC when the Slepian–Wolf region fits inside the capacity region). But in general, joint source–channel coding can strictly outperform separate coding in multi-terminal networks. The source correlation provides implicit coordination that separate coding discards.

Historical Note: Cover, El Gamal, and Salehi (1980)


The problem of sending correlated sources over a MAC was formulated by Cover, El Gamal, and Salehi in their 1980 paper. They showed that separation is not always optimal for the MAC and gave the first sufficient conditions for transmissibility that go beyond the separation approach. Their key insight was that the correlation between sources can serve as a form of implicit cooperation between the encoders. This paper opened a rich line of research in joint source–channel coding for multi-terminal networks.

Definition:

Source–Channel Bandwidth Ratio

When the source produces $k$ symbols per unit time and the channel allows $n$ channel uses per unit time, the bandwidth ratio (or bandwidth expansion factor) is $\kappa = n/k$. The case $\kappa = 1$ corresponds to one channel use per source symbol. When $\kappa > 1$, the channel has excess bandwidth; when $\kappa < 1$, the source rate exceeds the channel bandwidth.

The transmissibility condition for correlated sources over a MAC with bandwidth ratio $\kappa$ requires the Slepian–Wolf rates scaled by $1/\kappa$ to fit inside the MAC capacity region.

Source–channel bandwidth ratio

The ratio $\kappa = n/k$ of channel uses to source symbols, determining how much channel resource is available per source symbol. Also called the bandwidth expansion factor.
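In code, the bandwidth ratio enters the separation check as a simple rescaling. This is a sketch with an illustrative interface: the MAC region is again summarized by per-user and sum-rate limits (one pentagon, an approximation), Slepian–Wolf rates are in bits per source symbol, and the MAC limits are in bits per channel use.

```python
def transmissible_by_separation(h1_given_2, h2_given_1, h_joint, kappa,
                                r1_max, r2_max, rsum_max):
    """Separation check with bandwidth ratio kappa = n/k.  Dividing the
    Slepian-Wolf rates (bits per source symbol) by kappa (channel uses
    per source symbol) converts them to bits per channel use, which is
    the unit the MAC limits constrain."""
    return (h1_given_2 / kappa <= r1_max and
            h2_given_1 / kappa <= r2_max and
            h_joint / kappa <= rsum_max)

# Doubling the channel uses per source symbol (kappa = 2) halves the
# per-channel-use source rates, so a source pair that misses the region
# at kappa = 1 can fit once excess bandwidth is available:
print(transmissible_by_separation(0.5, 0.5, 1.8, 1, 1.0, 1.0, 1.5))  # False
print(transmissible_by_separation(0.5, 0.5, 1.8, 2, 1.0, 1.0, 1.5))  # True
```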

Quick Check

Two users observe independent sources ($I(S_1; S_2) = 0$) and transmit over a MAC. Is separate source and channel coding optimal?

Yes — when sources are independent, the Slepian–Wolf rates equal the marginal entropies, and separate coding is optimal if these rates fit in the MAC region

No — joint source–channel coding is always better over a MAC

It depends on the channel

Slepian–Wolf Region vs. MAC Capacity Region

Compare the Slepian–Wolf rate region for correlated binary sources with the capacity region of a Gaussian MAC. When the SW region fits inside the MAC region, separation is optimal. Adjust the source correlation and channel SNR to see when separation fails.

Parameters: source correlation parameter (0.1), channel SNR in dB (10), ratio of user powers (1).
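The comparison the figure makes can also be run numerically. The sketch below assumes a real Gaussian MAC with unit noise power and the standard pentagon $R_1 \le \tfrac{1}{2}\log_2(1+\mathrm{snr}_1)$, $R_2 \le \tfrac{1}{2}\log_2(1+\mathrm{snr}_2)$, $R_1 + R_2 \le \tfrac{1}{2}\log_2(1+\mathrm{snr}_1+\mathrm{snr}_2)$, with the doubly symmetric binary source pair of parameter $p$ from the earlier example; the function name is illustrative.

```python
import math

def h(p):
    """Binary entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def awgn(snr):
    """Capacity of a real AWGN link in bits per channel use."""
    return 0.5 * math.log2(1 + snr)

def separation_ok_gaussian(p, snr1_db, snr2_db):
    """Does the Slepian-Wolf region of a DSBS(p) pair, which is symmetric
    with H(S1|S2) = H(S2|S1) = h(p) and H(S1,S2) = 1 + h(p), intersect
    the Gaussian MAC pentagon with the given per-user SNRs?"""
    s1, s2 = 10 ** (snr1_db / 10), 10 ** (snr2_db / 10)
    return (h(p) <= awgn(s1) and
            h(p) <= awgn(s2) and
            1 + h(p) <= awgn(s1 + s2))

print(separation_ok_gaussian(0.1, 10, 10))  # True: strong channel, separation works
print(separation_ok_gaussian(0.1, -5, -5))  # False: weak channel, SW region too large
```

Sweeping $p$ toward $1/2$ grows the Slepian–Wolf region while lowering the SNR shrinks the pentagon, reproducing the failure boundary the interactive figure displays.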

Key Takeaway

For correlated sources over a MAC, separate source and channel coding is sufficient when the Slepian–Wolf rate region fits inside the MAC capacity region. However, this condition is not necessary — joint source–channel coding can exploit source correlation as implicit encoder cooperation, potentially succeeding where separation fails.

Why This Matters: Correlated Sensor Data over Uplink Channels

In IoT and sensor networks, multiple devices often observe correlated physical phenomena (temperature, humidity, vibration) and transmit over a shared uplink channel. The theory in this section tells us that when the correlation is strong and the channel capacity is limited, joint source–channel coding can provide significant gains over the standard compress-then-transmit approach used in current systems. This insight is driving research in joint compression and transmission for massive machine-type communication (mMTC) in 5G and beyond.

Correlated Sources over a MAC: When Does Separation Fail?

Animates the Slepian–Wolf rate region and the MAC capacity region, showing how source correlation determines whether separate coding suffices or joint source–channel coding is needed.