The Nair-El Gamal Outer Bound

The Need for Tight Outer Bounds

Marton's inner bound tells us what rates are achievable, but how do we know whether we are leaving performance on the table? For the degraded BC, the converse was relatively straightforward (in the Gaussian case, Bergmans' argument using the entropy power inequality). For the general BC, the converse is much harder because we cannot appeal to a degradation ordering.

The strongest known outer bound for the general broadcast channel is due to Nair and El Gamal (2007). It tightens the earlier UV outer bound by introducing an auxiliary random variable that captures the correlation structure between the two receivers' observations. When this outer bound matches Marton's inner bound, we have the capacity region.

Definition: The UV Outer Bound

For any achievable rate pair $(R_1, R_2)$ on the two-user BC $P_{Y_1, Y_2 \mid X}$, there exist auxiliary random variables $U, V$ such that $U \to V \to X \to (Y_1, Y_2)$ forms a Markov chain and:

$$R_1 \leq I(V; Y_1 \mid U)$$
$$R_2 \leq I(U; Y_2)$$
$$R_1 + R_2 \leq I(V; Y_1 \mid U) + I(U; Y_2)$$

This is essentially the converse of superposition coding: it bounds the rates by what a "layered" encoding could achieve.

The UV outer bound is not tight in general. It does not capture the gains that binning (Marton coding) can provide for non-degraded channels.
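The mutual-information expressions in the UV bound can be evaluated numerically for any concrete choice of $p(u)\,p(v|u)\,p(x|v)$ and channel law. A minimal sketch (the binary alphabets and BSC components below are illustrative assumptions, not from the text):

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits; treats 0*log(0) as 0."""
    p = np.asarray(p).ravel()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def mi(pxy):
    """I(X;Y) from a 2-D joint pmf."""
    return entropy(pxy.sum(1)) + entropy(pxy.sum(0)) - entropy(pxy)

def cmi(pxyz):
    """I(X;Y|Z) = H(X,Z) + H(Y,Z) - H(X,Y,Z) - H(Z), from a 3-D joint pmf."""
    return (entropy(pxyz.sum(1)) + entropy(pxyz.sum(0))
            - entropy(pxyz) - entropy(pxyz.sum((0, 1))))

def bsc(eps):
    """Transition matrix of a binary symmetric channel."""
    return np.array([[1 - eps, eps], [eps, 1 - eps]])

# Illustrative Markov chain U -> V -> X -> (Y1, Y2):
pu = np.array([0.5, 0.5])
pv_u = bsc(0.1)      # p(v|u): assumed BSC(0.1)
px_v = np.eye(2)     # X = V (deterministic)
py1_x = bsc(0.05)    # stronger receiver
py2_x = bsc(0.20)    # weaker receiver

# Full joint pmf p(u, v, x, y1, y2)
p = np.einsum('u,uv,vx,xa,xb->uvxab', pu, pv_u, px_v, py1_x, py2_x)

R1_bound = cmi(p.sum((2, 4)).transpose(1, 2, 0))  # I(V; Y1 | U)
R2_bound = mi(p.sum((1, 2, 3)))                   # I(U; Y2)
print(f"R1 <= {R1_bound:.4f}, R2 <= {R2_bound:.4f}, "
      f"R1+R2 <= {R1_bound + R2_bound:.4f}")
```

Optimizing these expressions over all distributions (and cardinality-bounded auxiliaries) traces out the full outer-bound region; the snippet only evaluates one point of the search.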

Theorem: The Nair-El Gamal Outer Bound

For any achievable rate pair $(R_1, R_2)$ on the two-user BC $P_{Y_1, Y_2 \mid X}$, there exist auxiliary random variables $U, V$ with $U \to V \to X \to (Y_1, Y_2)$ such that:

$$R_1 \leq I(V; Y_1 \mid U)$$
$$R_2 \leq I(U; Y_2)$$
$$R_1 + R_2 \leq I(V; Y_1 \mid U) + I(U; Y_2)$$

More precisely, the Nair-El Gamal bound introduces an additional constraint:

$$R_1 + R_2 \leq I(X; Y_1) + I(U; Y_2 \mid Y_1)$$

This "sum-rate tightening" uses the fact that the total information extractable from $(Y_1, Y_2)$ jointly cannot exceed $I(X; Y_1, Y_2)$, combined with a careful single-letterization argument.

The key insight is that receiver 1's observation $Y_1$ constrains what receiver 2 can learn about the common message $U$. The term $I(U; Y_2 \mid Y_1)$ captures how much "new" information $Y_2$ provides about $U$ beyond what $Y_1$ already reveals. This is tighter than simply bounding $R_2$ by $I(U; Y_2)$.
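To see the tightening in action, both sum-rate expressions can be computed for a concrete distribution and compared; the achievable sum rate is limited by the smaller of the two. A sketch under illustrative assumptions (binary alphabets, BSC components, $X = V$; none of these parameters come from the text):

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits; treats 0*log(0) as 0."""
    p = np.asarray(p).ravel()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def mi(pxy):
    """I(X;Y) from a 2-D joint pmf."""
    return entropy(pxy.sum(1)) + entropy(pxy.sum(0)) - entropy(pxy)

def cmi(pxyz):
    """I(X;Y|Z) = H(X,Z) + H(Y,Z) - H(X,Y,Z) - H(Z), from a 3-D joint pmf."""
    return (entropy(pxyz.sum(1)) + entropy(pxyz.sum(0))
            - entropy(pxyz) - entropy(pxyz.sum((0, 1))))

def bsc(eps):
    """Transition matrix of a binary symmetric channel."""
    return np.array([[1 - eps, eps], [eps, 1 - eps]])

# Assumed chain U -> V -> X -> (Y1, Y2): p(v|u)=BSC(0.1), X=V,
# Y1 through BSC(0.05), Y2 through BSC(0.20).
p = np.einsum('u,uv,vx,xa,xb->uvxab',
              np.array([0.5, 0.5]), bsc(0.1), np.eye(2),
              bsc(0.05), bsc(0.20))

# UV sum-rate bound: I(V; Y1 | U) + I(U; Y2)
s_uv = (cmi(p.sum((2, 4)).transpose(1, 2, 0))     # I(V; Y1 | U)
        + mi(p.sum((1, 2, 3))))                   # I(U; Y2)

# Nair-El Gamal tightening: I(X; Y1) + I(U; Y2 | Y1)
s_ne = (mi(p.sum((0, 1, 4)))                      # I(X; Y1)
        + cmi(p.sum((1, 2)).transpose(0, 2, 1)))  # I(U; Y2 | Y1)

print(f"UV sum bound = {s_uv:.4f}, NE sum bound = {s_ne:.4f}, "
      f"effective = {min(s_uv, s_ne):.4f}")
```

For this particular distribution the UV constraint happens to be the binding one; for other channels and distributions the additional NE constraint can be the one that bites, which is exactly why it tightens the region.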


Example: Tightness for the Binary-Input Broadcast Channel

Consider a two-user BC with binary input $X \in \{0,1\}$ and arbitrary output alphabets $\mathcal{Y}_1, \mathcal{Y}_2$. Show that the Nair-El Gamal outer bound matches Marton's inner bound, establishing the capacity region.

Historical Note: The Quest for the BC Capacity Region

1972-2007

The broadcast channel capacity problem has a fascinating history. Cover introduced the problem in 1972 and conjectured that superposition coding is optimal for the general BC; this conjecture turned out to be false in general. Marton's 1979 inner bound showed that binning can strictly improve on superposition. El Gamal and Van der Meulen, and later Nair and El Gamal (2007), developed increasingly tight outer bounds.

Despite nearly 50 years of effort, the capacity region of the general discrete memoryless broadcast channel remains unknown. The problem is listed among the major open problems in network information theory by El Gamal and Kim. The Gaussian MIMO case, however, was resolved by Weingarten, Steinberg, and Shamai (2006), building on the work of Caire and Shamai on dirty-paper coding.

Common Mistake: The Outer Bound Is Not a Converse for Every Channel

Mistake:

Claiming that the Nair-El Gamal outer bound provides a tight converse for all broadcast channels.

Correction:

The outer bound is the best known, but it is not proven tight for all channels. For some channels there may be a gap between Marton's inner bound and the Nair-El Gamal outer bound; whether such a gap exists, and how large it is, is not yet fully understood.

Mrs. Gerber's Lemma

An information-theoretic inequality relating the entropy of the output of a binary symmetric channel to the entropy of its input. Formally: if $X \sim \text{Bernoulli}(p)$ and $Y = X \oplus Z$ with $Z \sim \text{Bernoulli}(\delta)$ independent of $X$, then $h_b^{-1}(H(Y)) \geq h_b^{-1}(H(X)) * \delta$, where $a * \delta = a(1-\delta) + (1-a)\delta$ denotes binary convolution and $h_b^{-1}$ is the inverse of the binary entropy function on $[0, 1/2]$.

Related: Binary entropy function, Data Processing Inequality
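The lemma is easy to probe numerically. In the sketch below (parameter values are my own illustrative choices), $h_b^{-1}$ is computed by bisection on $[0, 1/2]$. For a single Bernoulli source the inequality holds with equality; in a conditional form, averaging over a side variable $W$ that mixes two Bernoulli parameters, it becomes strict:

```python
import math

def hb(p):
    """Binary entropy function in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def hb_inv(y):
    """Inverse of hb on [0, 1/2], by bisection (hb is increasing there)."""
    lo, hi = 0.0, 0.5
    for _ in range(60):
        mid = (lo + hi) / 2
        if hb(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def conv(a, d):
    """Binary convolution a * d = a(1-d) + (1-a)d."""
    return a * (1 - d) + (1 - a) * d

delta = 0.2

# Unconditional case: X ~ Bern(0.11).  H(Y) = hb(p * delta), so both
# sides of the lemma reduce to p * delta and the bound is met with equality.
p = 0.11
lhs = hb_inv(hb(conv(p, delta)))
rhs = conv(hb_inv(hb(p)), delta)
print(f"scalar: lhs = {lhs:.6f}, rhs = {rhs:.6f}")

# Conditional case: X|W=0 ~ Bern(0.1), X|W=1 ~ Bern(0.4), W uniform.
# Averaging the entropies makes the inequality strict.
H_x_w = 0.5 * hb(0.1) + 0.5 * hb(0.4)
H_y_w = 0.5 * hb(conv(0.1, delta)) + 0.5 * hb(conv(0.4, delta))
lhs_c = hb_inv(H_y_w)
rhs_c = conv(hb_inv(H_x_w), delta)
print(f"conditional: lhs = {lhs_c:.6f} >= rhs = {rhs_c:.6f}")
```

The conditional form is the one actually used in converse proofs (e.g., Bergmans' converse for the binary symmetric BC), which is why the statement is an inequality rather than an identity.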

Quick Check

For a degraded broadcast channel, the Nair-El Gamal outer bound reduces to which known result?

The superposition coding capacity region (Bergmans-Gallager)

The MAC capacity region with time-sharing

Shannon's point-to-point capacity

Key Takeaway

The Nair-El Gamal outer bound is the tightest known general converse for the broadcast channel. It matches Marton's inner bound for degraded, deterministic, and binary-input BCs, establishing the capacity region in these cases. For the general discrete memoryless BC, the capacity region remains open; it is one of the few fundamental single-hop channels for which this is the case.