Channel Model and Capacity Region
From Single-User to Multiuser
So far in this book, we have studied communication between a single transmitter and a single receiver. But real communication systems (cellular uplink, Wi-Fi, satellite return links) involve multiple transmitters sending independent messages to a common receiver. The fundamental question becomes: what is the set of rate pairs $(R_1, R_2)$ that can be simultaneously achieved with vanishing error probability?
The answer is no longer a single number (the capacity $C$) but a region in the rate plane. This is the first instance of a pattern that will recur throughout multiuser information theory: the fundamental limit is a convex set of rate tuples, not a scalar.
The multiple access channel (MAC) is the simplest multiuser channel and also the one where the theory is most complete. We will see that:
- The capacity region is a pentagon (for two users) determined by three mutual information inequalities.
- Every point in this region is achievable: there is no gap between inner and outer bounds.
- The result generalizes beautifully to more than two users, Gaussian channels, MIMO, and fading.
Definition: Discrete Memoryless Multiple Access Channel (DM-MAC)
A two-user discrete memoryless multiple access channel (DM-MAC) consists of:
- Two finite input alphabets $\mathcal{X}_1$ and $\mathcal{X}_2$,
- A finite output alphabet $\mathcal{Y}$,
- A conditional PMF $p(y \mid x_1, x_2)$ for each $(x_1, x_2) \in \mathcal{X}_1 \times \mathcal{X}_2$.
The channel is memoryless: when used $n$ times,
$$p(y^n \mid x_1^n, x_2^n) = \prod_{i=1}^{n} p(y_i \mid x_{1i}, x_{2i}).$$
User $k$ ($k \in \{1, 2\}$) has a message $M_k$ uniformly distributed over $\{1, \dots, 2^{nR_k}\}$, independent of the other user's message. User $k$'s encoder maps $M_k$ to a codeword $X_k^n(M_k)$. The decoder observes $Y^n$ and produces estimates $(\hat{M}_1, \hat{M}_2)$.
The key feature is independence: the two encoders cannot cooperate. Each encoder knows only its own message. This constraint is what makes the MAC fundamentally different from a single-user channel with a larger input alphabet $\mathcal{X}_1 \times \mathcal{X}_2$.
Multiple Access Channel (MAC)
A multiuser channel model with two or more independent transmitters sending messages to a single receiver. The capacity region is the closure of all simultaneously achievable rate tuples.
Related: Capacity Region, Time-Sharing
Definition: Achievable Rate Pair and Capacity Region
A rate pair $(R_1, R_2)$ is achievable for the DM-MAC if there exists a sequence of $(2^{nR_1}, 2^{nR_2}, n)$ codes such that the average probability of error
$$P_e^{(n)} = \Pr\left\{(\hat{M}_1, \hat{M}_2) \neq (M_1, M_2)\right\}$$
tends to zero as $n \to \infty$.
The capacity region of the MAC is the closure of the set of all achievable rate pairs.
Notice that we require both messages to be decoded correctly. A single error event, whether in user 1's message or user 2's, counts as a decoding failure.
Capacity Region
The closure of all rate tuples that are simultaneously achievable with vanishing probability of error. For the MAC, it is a convex region in $\mathbb{R}_+^2$ described by mutual information inequalities.
Theorem: Capacity Region of the Two-User DM-MAC
The capacity region of the two-user DM-MAC is the closure of the convex hull of all rate pairs $(R_1, R_2)$ satisfying
$$R_1 \leq I(X_1; Y \mid X_2), \qquad R_2 \leq I(X_2; Y \mid X_1), \qquad R_1 + R_2 \leq I(X_1, X_2; Y)$$
for some product distribution $p(x_1)\,p(x_2)$ on $\mathcal{X}_1 \times \mathcal{X}_2$.
The three inequalities have natural interpretations:
- The first says: even if user 2's signal is known (i.e., can be subtracted), user 1 cannot exceed $I(X_1; Y \mid X_2)$, the single-user capacity treating user 2's input as side information.
- The second is the symmetric statement for user 2.
- The third says: the total throughput cannot exceed $I(X_1, X_2; Y)$, the mutual information when we view both users as a single "super-user."
The product distribution constraint reflects the independence of the encoders: they cannot coordinate their inputs. The convex hull (over input distributions) is achieved by time-sharing.
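To make the three bounds concrete, here is a small Python sketch (not from the text; the function names and the toy channel are illustrative) that evaluates $I(X_1; Y \mid X_2)$, $I(X_2; Y \mid X_1)$, and $I(X_1, X_2; Y)$ for any finite DM-MAC and product input distribution:

```python
from math import log2

def H(dist):
    """Entropy in bits of an iterable of probabilities."""
    return -sum(p * log2(p) for p in dist if p > 0)

def mac_bounds(py, p1, p2):
    """Return (I(X1;Y|X2), I(X2;Y|X1), I(X1,X2;Y)) for a DM-MAC.
    py[(x1, x2)] is the output PMF (a list over y) given the inputs;
    p1, p2 are the independent input marginals."""
    ny = len(next(iter(py.values())))
    # Joint distribution p(x1, x2, y) under the product input law.
    joint = {(a, b, y): p1[a] * p2[b] * py[(a, b)][y]
             for (a, b) in py for y in range(ny)}
    def marg(keep):  # marginalize p(x1, x2, y) onto the kept coordinates
        m = {}
        for k, v in joint.items():
            kk = tuple(k[i] for i in keep)
            m[kk] = m.get(kk, 0.0) + v
        return m.values()
    h_y = H(marg([2]))
    h_y_given_12 = H(joint.values()) - H(marg([0, 1]))  # H(Y|X1,X2)
    h_y_given_2 = H(marg([1, 2])) - H(marg([1]))        # H(Y|X2)
    h_y_given_1 = H(marg([0, 2])) - H(marg([0]))        # H(Y|X1)
    return (h_y_given_2 - h_y_given_12,   # I(X1;Y|X2)
            h_y_given_1 - h_y_given_12,   # I(X2;Y|X1)
            h_y - h_y_given_12)           # I(X1,X2;Y)

# Illustrative channel: the noiseless XOR MAC, Y = X1 XOR X2.
xor = {(a, b): [1.0 if y == (a ^ b) else 0.0 for y in (0, 1)]
       for a in (0, 1) for b in (0, 1)}
print(mac_bounds(xor, [0.5, 0.5], [0.5, 0.5]))  # -> (1.0, 1.0, 1.0)
```

For the uniform XOR MAC all three bounds equal 1 bit, so the sum-rate constraint is strictly binding and the region degenerates from a pentagon to a triangle.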
Converse (outer bound)
We prove that any achievable rate pair must satisfy the three inequalities. Suppose $(R_1, R_2)$ is achievable. By Fano's inequality, for any achievable code:
$$H(M_1, M_2 \mid Y^n) \leq 1 + n(R_1 + R_2) P_e^{(n)} =: n\epsilon_n,$$
where $\epsilon_n \to 0$. Now we bound $nR_1$ conditioned on $M_2$ (which determines $X_2^n$):
$$nR_1 = H(M_1) = H(M_1 \mid M_2) \leq I(M_1; Y^n \mid M_2) + n\epsilon_n \leq \sum_{i=1}^{n} I(X_{1i}; Y_i \mid X_{2i}) + n\epsilon_n,$$
where the last step uses the memoryless property and the fact that $X_k^n$ is a function of $M_k$. Since $M_1$ and $M_2$ are independent, $H(M_1 \mid M_2) = H(M_1)$. Introducing a time-sharing variable $Q$ uniform on $\{1, \dots, n\}$ and setting $X_1 = X_{1Q}$, $X_2 = X_{2Q}$, $Y = Y_Q$:
$$R_1 \leq I(X_1; Y \mid X_2, Q) + \epsilon_n.$$
The sum-rate bound follows similarly from Fano's inequality applied to $(M_1, M_2)$ jointly.
Achievability (sketch)
We defer the full achievability proof to Section 14.2. The key idea is random coding with joint typicality decoding: generate $2^{nR_1}$ codewords $x_1^n(m_1)$ for user 1 and $2^{nR_2}$ codewords $x_2^n(m_2)$ for user 2, independently according to their respective input distributions. The decoder looks for a unique pair $(\hat{m}_1, \hat{m}_2)$ such that $(x_1^n(\hat{m}_1), x_2^n(\hat{m}_2), y^n)$ is jointly typical. The error analysis shows that the three inequalities are sufficient to drive the error probability to zero.
Convex hull and time-sharing
For a fixed product distribution $p(x_1)\,p(x_2)$, the achievable region is a pentagon (the intersection of three half-planes with the nonnegative quadrant). Different input distributions give different pentagons. The full capacity region is the convex hull of the union of all such pentagons. Time-sharing between different input distributions achieves points in the convex hull.
Formally, if $(R_1, R_2)$ is achievable with input distribution $p(x_1)\,p(x_2)$ and $(R_1', R_2')$ with $p'(x_1)\,p'(x_2)$, then for any $\lambda \in [0, 1]$, the pair $(\lambda R_1 + (1-\lambda) R_1', \, \lambda R_2 + (1-\lambda) R_2')$ is achievable by using the first scheme for a fraction $\lambda$ of the time and the second for the rest.
Definition: The Pentagonal Region
For a fixed product input distribution $p(x_1)\,p(x_2)$, define the rate region $\mathcal{R}(p_1, p_2)$ as the set of all $(R_1, R_2) \geq 0$ satisfying:
$$R_1 \leq I(X_1; Y \mid X_2), \qquad R_2 \leq I(X_2; Y \mid X_1), \qquad R_1 + R_2 \leq I(X_1, X_2; Y).$$
This is the intersection of three half-planes in the first quadrant. When $I(X_1, X_2; Y) < I(X_1; Y \mid X_2) + I(X_2; Y \mid X_1)$ (the typical case), the region is a pentagon with five vertices:
$$(0, 0), \quad (I(X_1; Y \mid X_2), \, 0), \quad (I(X_1; Y \mid X_2), \, I(X_2; Y)), \quad (I(X_1; Y), \, I(X_2; Y \mid X_1)), \quad (0, \, I(X_2; Y \mid X_1)).$$
The segment from $(I(X_1; Y), \, I(X_2; Y \mid X_1))$ to $(I(X_1; Y \mid X_2), \, I(X_2; Y))$ is the dominant face: it lies on the line $R_1 + R_2 = I(X_1, X_2; Y)$.
When $I(X_1, X_2; Y) = I(X_1; Y \mid X_2) + I(X_2; Y \mid X_1)$, the sum-rate constraint is inactive and the region is the rectangle $[0, I(X_1; Y \mid X_2)] \times [0, I(X_2; Y \mid X_1)]$. This happens when the users' inputs provide "complementary" information about the output. In most cases of practical interest, the inequality is strict and the pentagon is strictly inside the rectangle.
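The vertex list can be computed directly from the three mutual-information values. This is an illustrative sketch (names are mine), with $a = I(X_1; Y \mid X_2)$, $b = I(X_2; Y \mid X_1)$, and $c = I(X_1, X_2; Y)$:

```python
def pentagon_vertices(a, b, c):
    """Vertices of R(p1, p2), listed counterclockwise from the origin,
    given a = I(X1;Y|X2), b = I(X2;Y|X1), c = I(X1,X2;Y).
    Assumes max(a, b) <= c <= a + b, which the chain rule guarantees
    for any product input distribution."""
    if c >= a + b:  # sum-rate constraint inactive: the region is a rectangle
        return [(0, 0), (a, 0), (a, b), (0, b)]
    # Corner A = (a, c - a) and corner B = (c - b, b) span the dominant face.
    return [(0, 0), (a, 0), (a, c - a), (c - b, b), (0, b)]

# Numbers matching the binary adder MAC with uniform inputs:
print(pentagon_vertices(1.0, 1.0, 1.5))
# -> [(0, 0), (1.0, 0), (1.0, 0.5), (0.5, 1.0), (0, 1.0)]
```

Note that $c - a = I(X_2; Y)$ and $c - b = I(X_1; Y)$ by the chain rule, so this reproduces the vertex coordinates stated above.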
Pentagonal Region
The MAC capacity region for a fixed input distribution, shaped as a pentagon in the plane. The five vertices arise from the intersection of three half-plane constraints: individual rate bounds and a sum-rate bound.
Related: Capacity Region, Dominant Face
Dominant Face
The segment of the MAC capacity region boundary where the sum-rate constraint is active. Points on the dominant face achieve the maximum sum rate.
Related: Pentagonal Region
The Corner Points and Decoding Order
The two corner points of the dominant face have a beautiful operational interpretation:
Corner point A: $(I(X_1; Y \mid X_2), \, I(X_2; Y))$. Here user 2 is decoded first (treating user 1 as noise), achieving rate $I(X_2; Y)$. Then user 2's signal is subtracted, and user 1 is decoded interference-free at rate $I(X_1; Y \mid X_2)$.
Corner point B: $(I(X_1; Y), \, I(X_2; Y \mid X_1))$. Now user 1 is decoded first (treating user 2 as noise), achieving rate $I(X_1; Y)$, and then user 2 is decoded interference-free at rate $I(X_2; Y \mid X_1)$.
The point is that successive cancellation decoding (decode one user, subtract, decode the other) naturally achieves the corner points. The entire dominant face is then achieved by time-sharing between the two decoding orders. We will make this precise in Section 14.2.
Definition: Time-Sharing
Time-sharing is a technique to achieve rate pairs in the convex hull of a rate region. Given two achievable rate pairs $(R_1, R_2)$ and $(R_1', R_2')$, a time-sharing scheme divides the $n$ channel uses into two groups: a fraction $\lambda$ of the time uses the first coding scheme, and the remaining fraction $1 - \lambda$ uses the second. The resulting rate pair is
$$(\lambda R_1 + (1 - \lambda) R_1', \; \lambda R_2 + (1 - \lambda) R_2').$$
Time-sharing requires a shared "clock" between transmitters and receiver so that all parties agree on which scheme is active. Formally, a time-sharing random variable $Q$ is introduced, known to all parties.
Time-sharing is a somewhat crude technique: it convexifies the region but does not exploit the channel's structure. For the Gaussian MAC (Section 14.3), we will see that a single Gaussian input distribution suffices and no time-sharing is needed to achieve the full capacity region.
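The convex combination is simple enough to compute directly. A minimal sketch (names are illustrative), applied to two hypothetical corner points of a dominant face:

```python
def time_share(pair_a, pair_b, lam):
    """Rate pair achieved by running scheme A for a fraction lam of the
    block and scheme B for the remaining fraction 1 - lam."""
    return tuple(lam * x + (1 - lam) * y for x, y in zip(pair_a, pair_b))

# Hypothetical corner points (the binary adder MAC numbers):
corner_a = (1.0, 0.5)   # decode user 2 first, then user 1
corner_b = (0.5, 1.0)   # opposite decoding order
mid = time_share(corner_a, corner_b, 0.5)
print(mid, sum(mid))    # -> (0.75, 0.75) 1.5
```

The mixed point keeps the same sum rate as the two corners, so it still lies on the dominant face, illustrating how time-sharing between the two decoding orders sweeps out the whole face.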
Time-Sharing
A scheme that alternates between two or more coding strategies over different fractions of the block length, achieving any convex combination of the corresponding rate tuples.
Related: Capacity Region
Example: The Binary Adder MAC
Consider the binary adder MAC where $\mathcal{X}_1 = \mathcal{X}_2 = \{0, 1\}$, $\mathcal{Y} = \{0, 1, 2\}$, and $Y = X_1 + X_2$ (integer addition, no noise). Find the capacity region.
Compute the mutual information quantities
Let $X_1 \sim \mathrm{Bern}(p_1)$ and $X_2 \sim \mathrm{Bern}(p_2)$ independently. Since the channel is noiseless ($Y$ is a deterministic function of $(X_1, X_2)$), we have $H(Y \mid X_1, X_2) = 0$, so
$$I(X_1, X_2; Y) = H(Y).$$
Also, $I(X_1; Y \mid X_2) = H(Y \mid X_2) = H(X_1)$, since given $X_2$, knowing $Y$ is the same as knowing $X_1$. Similarly, $I(X_2; Y \mid X_1) = H(X_2)$.
Evaluate $H(Y)$
The output $Y$ takes values $0, 1, 2$ with probabilities:
$$\Pr\{Y = 0\} = (1 - p_1)(1 - p_2), \quad \Pr\{Y = 1\} = p_1(1 - p_2) + (1 - p_1) p_2, \quad \Pr\{Y = 2\} = p_1 p_2.$$
The maximum of $H(Y)$ over $(p_1, p_2)$ is achieved at $p_1 = p_2 = 1/2$, giving $H(Y) = H(1/4, 1/2, 1/4) = 3/2$ bits.
Describe the capacity region
For $p_1 = p_2 = 1/2$: $I(X_1; Y \mid X_2) = I(X_2; Y \mid X_1) = 1$ bit and $I(X_1, X_2; Y) = 3/2$ bits. The pentagon has corner points $(1, 1/2)$ and $(1/2, 1)$ on the dominant face.
The capacity region is the convex hull over all $(p_1, p_2)$. Since the pentagon for $p_1 = p_2 = 1/2$ already contains the pentagons for all other input distributions (by concavity of entropy), the capacity region is exactly this pentagon:
$$\{(R_1, R_2) : R_1 \leq 1, \; R_2 \leq 1, \; R_1 + R_2 \leq 3/2\}.$$
The sum rate of $3/2$ bits is strictly less than the $2$ bits the two users could send over interference-free channels. Intuitively, the "collision" when both users send 1 creates ambiguity: the output $Y = 1$ does not reveal which user sent the 1.
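The optimization over $(p_1, p_2)$ can be checked numerically. A small sketch (function names are mine, not the book's) that sweeps a grid of input distributions:

```python
from math import log2

def H(ps):
    """Entropy in bits of an iterable of probabilities."""
    return -sum(p * log2(p) for p in ps if p > 0)

def adder_sum_rate(p1, p2):
    """I(X1,X2;Y) = H(Y) for the noiseless binary adder Y = X1 + X2
    with independent Bern(p1), Bern(p2) inputs."""
    py = [(1 - p1) * (1 - p2), p1 * (1 - p2) + (1 - p1) * p2, p1 * p2]
    return H(py)

# Sweep a grid: the sum rate peaks at p1 = p2 = 1/2 with value 3/2 bits.
grid = [i / 100 for i in range(101)]
best = max(((adder_sum_rate(p, q), p, q) for p in grid for q in grid))
print(best)  # -> (1.5, 0.5, 0.5)
```

The grid search confirms that uniform inputs maximize $H(Y)$ over product distributions, giving the $3/2$-bit sum rate derived above (the unconstrained maximum $\log_2 3 \approx 1.585$ bits is not attainable with independent inputs).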
MAC Capacity Region Pentagon
Explore the pentagonal capacity region of the two-user DM-MAC. Adjust the three mutual information quantities $I(X_1; Y \mid X_2)$, $I(X_2; Y \mid X_1)$, and $I(X_1, X_2; Y)$ to see how the shape of the pentagon changes. The corner points of the dominant face and the sum-rate line are highlighted.
Parameters
User 1 single-user rate $I(X_1; Y \mid X_2)$ (with user 2 known)
User 2 single-user rate $I(X_2; Y \mid X_1)$ (with user 1 known)
Sum mutual information $I(X_1, X_2; Y)$
Quick Check
For a two-user MAC with $I(X_1; Y \mid X_2) = 1$ bit, $I(X_2; Y \mid X_1) = 1$ bit, and $I(X_1, X_2; Y) = 1.5$ bits, what is the maximum achievable sum rate $R_1 + R_2$?
$2$ bits per channel use
$1.5$ bits per channel use
$1$ bit per channel use
$0.5$ bits per channel use
The sum rate is bounded by $I(X_1, X_2; Y) = 1.5$ bits, which is less than $I(X_1; Y \mid X_2) + I(X_2; Y \mid X_1) = 2$ bits. The sum-rate constraint is active.
Quick Check
Why does the MAC capacity region theorem require a product distribution $p(x_1)\,p(x_2)$ rather than an arbitrary joint distribution $p(x_1, x_2)$?
Product distributions are easier to analyze mathematically
The encoders operate independently and cannot coordinate their inputs
Joint distributions would make the capacity region larger than achievable
Product distributions maximize mutual information
Each encoder knows only its own message and has no knowledge of the other encoder's message or input. Therefore, the inputs are necessarily independent: $p(x_1, x_2) = p(x_1)\,p(x_2)$, which for i.i.d. codebooks gives a product distribution per symbol.
Historical Note: Ahlswede and Liao: The MAC Capacity Region
1971: The capacity region of the multiple access channel was independently determined by Rudolf Ahlswede (1971) and Henry Liao (1972). Ahlswede's proof was for the two-user DM-MAC, while Liao's PhD thesis (under the supervision of Andrew Viterbi at UCLA) treated the general case.
The result was remarkable for its time: it showed that for the MAC, random coding arguments could be extended to multiuser settings, and the resulting capacity region was a convex polytope determined by mutual information quantities. The proof that time-sharing suffices to convexify the region (rather than more sophisticated coding techniques) was established by careful analysis of error events in joint typicality decoding.
Interestingly, the MAC capacity region was determined in the early 1970s, while the broadcast channel capacity region for general channels remained open (and is still open today for general non-degraded broadcast channels). The MAC is "easier" because the independent encoding constraint naturally leads to a product distribution, while the broadcast channel requires the more subtle technique of superposition coding.
Common Mistake: The MAC Is Not a Point-to-Point Channel with Larger Input
Mistake:
Treating the MAC as a single-user channel with input alphabet $\mathcal{X}_1 \times \mathcal{X}_2$ and computing $C = \max_{p(x_1, x_2)} I(X_1, X_2; Y)$.
Correction:
The MAC has independent encoders, so only product distributions $p(x_1)\,p(x_2)$ are feasible. The capacity is not a single number but a region of rate pairs. The maximum sum rate is $\max_{p(x_1)\,p(x_2)} I(X_1, X_2; Y)$, which may be strictly less than $\max_{p(x_1, x_2)} I(X_1, X_2; Y)$.
Common Mistake: Sum Rate Does Not Characterize the Capacity Region
Mistake:
Describing the MAC capacity by its maximum sum rate alone, e.g., "the MAC capacity is $\max_{p(x_1)\,p(x_2)} I(X_1, X_2; Y)$."
Correction:
The sum rate is only one of three constraints. The individual rate constraints $R_1 \leq I(X_1; Y \mid X_2)$ and $R_2 \leq I(X_2; Y \mid X_1)$ also limit the region. Two MACs with the same sum rate can have very different capacity regions (e.g., one symmetric, one highly asymmetric).
Key Takeaway
The MAC capacity region is a convex pentagon in the rate plane, described by three mutual information inequalities (individual rates plus sum rate). The corner points of the dominant face correspond to successive cancellation decoding with different user orderings. Every point in the region is achievable: there is no gap between achievability and converse.