The MAC with More Than Two Users
Beyond Two Users
Real systems serve many users simultaneously: a 5G cell may have tens or even hundreds of active users. The two-user MAC theory generalizes naturally to $K$ users, but the structure of the capacity region becomes richer and more complex. Instead of three inequalities, we need $2^K - 1$: one for every non-empty subset of users. The resulting capacity region is a polymatroid, a beautiful combinatorial-geometric object that connects information theory to discrete optimization.
Definition: The $K$-User Discrete Memoryless MAC
The $K$-User Discrete Memoryless MAC
The $K$-user DM-MAC consists of $K$ encoders with input alphabets $\mathcal{X}_1, \dots, \mathcal{X}_K$, an output alphabet $\mathcal{Y}$, and a conditional PMF $p(y \mid x_1, \dots, x_K)$. Each encoder $k$ maps its message $M_k \in \{1, \dots, 2^{nR_k}\}$ to a codeword $X_k^n(M_k)$. The messages are independent. The decoder produces estimates $(\hat{M}_1, \dots, \hat{M}_K)$ from $Y^n$.
A rate tuple $(R_1, \dots, R_K)$ is achievable if the average error probability $P_e^{(n)} \to 0$ as $n \to \infty$.
Theorem: Capacity Region of the $K$-User DM-MAC
The capacity region of the $K$-user DM-MAC is the closure of the convex hull of all rate tuples $(R_1, \dots, R_K)$ satisfying
$$\sum_{k \in \mathcal{S}} R_k \le I\bigl(X(\mathcal{S}); Y \mid X(\mathcal{S}^c)\bigr) \quad \text{for all non-empty } \mathcal{S} \subseteq \{1, \dots, K\}$$
for some product distribution $p(x_1) p(x_2) \cdots p(x_K)$, where $X(\mathcal{S}) = (X_k : k \in \mathcal{S})$ and $\mathcal{S}^c = \{1, \dots, K\} \setminus \mathcal{S}$.
Each inequality constrains the total rate of a subset $\mathcal{S}$ of users. The bound says: even if all users outside $\mathcal{S}$ are perfectly decoded and removed, the users in $\mathcal{S}$ cannot collectively exceed the mutual information of their combined input with the output.
For $K = 2$, this gives three inequalities ($R_1 \le I(X_1; Y \mid X_2)$, $R_2 \le I(X_2; Y \mid X_1)$, $R_1 + R_2 \le I(X_1, X_2; Y)$), recovering the pentagon of Section 14.1.
For $K = 3$, we get $2^3 - 1 = 7$ inequalities: three individual, three pairwise, and one triple sum-rate.
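The subset bookkeeping is easy to get wrong by hand, so a short enumeration helps. The sketch below (the helper name `mac_constraints` is ours, not standard notation) lists the $2^K - 1$ constraints symbolically for any $K$:

```python
from itertools import combinations

def mac_constraints(K):
    """Enumerate the 2^K - 1 subset rate constraints of the K-user MAC."""
    users = range(1, K + 1)
    constraints = []
    for m in range(1, K + 1):
        for S in combinations(users, m):
            comp = [u for u in users if u not in S]
            lhs = " + ".join(f"R{k}" for k in S)
            cond = f" | X({','.join(map(str, comp))})" if comp else ""
            constraints.append(f"{lhs} <= I(X({','.join(map(str, S))}); Y{cond})")
    return constraints

for c in mac_constraints(3):
    print(c)  # 7 lines: 3 individual, 3 pairwise, 1 sum-rate
```

For $K = 3$ this prints the seven inequalities named above, with the full-set constraint `R1 + R2 + R3 <= I(X(1,2,3); Y)` last.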
Achievability
The achievability proof extends the joint typicality argument of Section 14.2. Generate $K$ independent random codebooks, one per user. The decoder searches for a unique $K$-tuple of messages $(m_1, \dots, m_K)$ such that all $K$ codewords and the received sequence are jointly typical.
The error analysis now has $2^K - 1$ error events, one per non-empty subset $\mathcal{S}$: the event that the codewords in $\mathcal{S}$ are wrong while those in $\mathcal{S}^c$ are correct. Each such event contributes a probability bounded by
$$P(E_{\mathcal{S}}) \le 2^{n \sum_{k \in \mathcal{S}} R_k} \cdot 2^{-n \left( I(X(\mathcal{S}); Y \mid X(\mathcal{S}^c)) - \delta(\epsilon) \right)},$$
where we used the independence of the codebooks. Each term vanishes as $n \to \infty$ when the corresponding subset inequality is strict.
Converse
For each subset $\mathcal{S}$, apply Fano's inequality to the messages $(M_k : k \in \mathcal{S})$ conditioned on the messages $(M_k : k \in \mathcal{S}^c)$:
$$n \sum_{k \in \mathcal{S}} R_k \le I\bigl(X^n(\mathcal{S}); Y^n \mid X^n(\mathcal{S}^c)\bigr) + n \epsilon_n$$
By the chain rule and the memoryless property, the mutual information term is bounded by $n \, I\bigl(X(\mathcal{S}); Y \mid X(\mathcal{S}^c)\bigr)$ for some product distribution, completing the converse.
Definition: Polymatroid
Polymatroid
A polymatroid is a polytope defined by a submodular set function $f$ with $f(\emptyset) = 0$:
$$\mathcal{P}(f) = \Bigl\{ (R_1, \dots, R_K) : R_k \ge 0, \ \sum_{k \in \mathcal{S}} R_k \le f(\mathcal{S}) \text{ for all } \mathcal{S} \subseteq \{1, \dots, K\} \Bigr\}$$
A set function $f$ is submodular if for all $\mathcal{S}, \mathcal{T} \subseteq \{1, \dots, K\}$:
$$f(\mathcal{S}) + f(\mathcal{T}) \ge f(\mathcal{S} \cup \mathcal{T}) + f(\mathcal{S} \cap \mathcal{T})$$
Submodularity captures the "diminishing returns" property.
For a fixed input distribution, the MAC capacity region is a polymatroid with $f(\mathcal{S}) = I\bigl(X(\mathcal{S}); Y \mid X(\mathcal{S}^c)\bigr)$. The submodularity of this function follows from the submodularity of entropy: for independent inputs, $\mathcal{S} \mapsto H\bigl(Y \mid X(\mathcal{S}^c)\bigr)$ is a submodular function of $\mathcal{S}$.
Polymatroid
A polytope in $\mathbb{R}^K_{\ge 0}$ defined by $2^K - 1$ inequalities indexed by subsets, where the bounding function is submodular. The MAC capacity region for a fixed input distribution has this structure.
Related: Capacity Region
Theorem: Submodularity of the MAC Rate Function
For a fixed product input distribution $p(x_1) \cdots p(x_K)$, the function
$$f(\mathcal{S}) = I\bigl(X(\mathcal{S}); Y \mid X(\mathcal{S}^c)\bigr)$$
is a monotone submodular set function. That is, for all $\mathcal{S} \subseteq \mathcal{T} \subseteq \{1, \dots, K\}$:
- Monotonicity: $f(\mathcal{S}) \le f(\mathcal{T})$.
- Submodularity: For any user $k \notin \mathcal{T}$, $f(\mathcal{S} \cup \{k\}) - f(\mathcal{S}) \ge f(\mathcal{T} \cup \{k\}) - f(\mathcal{T})$.
Monotonicity says that adding more users to the subset increases the mutual information (more users carry more information). Submodularity says that the marginal contribution of a new user decreases as the existing subset grows: adding user $k$ to a large group helps less than adding it to a small group. This "diminishing returns" property arises because the existing users already carry some of the information that user $k$ provides.
Express $f$ in terms of entropy
Since the inputs are independent:
$$f(\mathcal{S}) = I\bigl(X(\mathcal{S}); Y \mid X(\mathcal{S}^c)\bigr) = H\bigl(Y \mid X(\mathcal{S}^c)\bigr) - H(Y \mid X_1, \dots, X_K)$$
Let $g(\mathcal{S}) = H\bigl(Y \mid X(\mathcal{S}^c)\bigr)$. Then $f(\mathcal{S}) = g(\mathcal{S}) - H(Y \mid X_1, \dots, X_K)$, and it suffices to show that $g$ is monotone submodular.
Submodularity from conditioning
For $\mathcal{S} \subseteq \mathcal{T}$ we have $\mathcal{T}^c \subseteq \mathcal{S}^c$, so "conditioning reduces entropy" gives $H\bigl(Y \mid X(\mathcal{S}^c)\bigr) \le H\bigl(Y \mid X(\mathcal{T}^c)\bigr)$, i.e., $g(\mathcal{S}) \le g(\mathcal{T})$ (monotonicity).
For submodularity, consider a user $k \notin \mathcal{T}$. Since $(\mathcal{S} \cup \{k\})^c = \mathcal{S}^c \setminus \{k\}$, the marginal gain is
$$g(\mathcal{S} \cup \{k\}) - g(\mathcal{S}) = H\bigl(Y \mid X(\mathcal{S}^c \setminus \{k\})\bigr) - H\bigl(Y \mid X(\mathcal{S}^c)\bigr) = I\bigl(X_k; Y \mid X(\mathcal{S}^c \setminus \{k\})\bigr),$$
and similarly for $\mathcal{T}$. Conditioning on more variables can in general increase or decrease mutual information, but here the conditioning variables are inputs independent of $X_k$. For any set $\mathcal{A}$ and user $j$ with $j, k \notin \mathcal{A}$, expanding $I\bigl(X_k; Y, X_j \mid X(\mathcal{A})\bigr)$ by the chain rule in both orders and using $I\bigl(X_k; X_j \mid X(\mathcal{A})\bigr) = 0$ gives
$$I\bigl(X_k; Y \mid X(\mathcal{A}), X_j\bigr) = I\bigl(X_k; Y \mid X(\mathcal{A})\bigr) + I\bigl(X_k; X_j \mid X(\mathcal{A}), Y\bigr) \ge I\bigl(X_k; Y \mid X(\mathcal{A})\bigr).$$
So conditioning on more of the other independent inputs can only increase $I(X_k; Y \mid \cdot)$. Since $\mathcal{T}^c \setminus \{k\} \subseteq \mathcal{S}^c \setminus \{k\}$, the marginal gain at $\mathcal{S}$ is at least the marginal gain at $\mathcal{T}$, which is exactly submodularity.
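The theorem can also be sanity-checked numerically. The sketch below (helper names `mac_f` and `cond_entropy` are ours; the binary-input channel is a randomly drawn illustration, not a standard example) computes $f(\mathcal{S}) = I(X(\mathcal{S}); Y \mid X(\mathcal{S}^c))$ for every subset of a three-user DM-MAC:

```python
import itertools
import math
import random

def mac_f(K, px, ch):
    """f(S) = I(X(S); Y | X(S^c)) for every subset S of {0, ..., K-1}.

    px: per-user input pmfs (inputs are independent)
    ch: dict mapping each input tuple (x_1, ..., x_K) to a pmf over Y
    """
    # Joint distribution p(x_1, ..., x_K, y) under the product input law.
    joint = {}
    for xs in itertools.product(*[range(len(p)) for p in px]):
        p_in = math.prod(px[k][xs[k]] for k in range(K))
        for y, p_out in enumerate(ch[xs]):
            joint[xs + (y,)] = p_in * p_out

    def cond_entropy(A):
        """H(Y | X(A)) in bits, computed from the joint distribution."""
        pxy = {}  # p(x_A, y)
        for key, p in joint.items():
            xa = tuple(key[k] for k in A)
            pxy[xa, key[-1]] = pxy.get((xa, key[-1]), 0.0) + p
        pa = {}   # p(x_A)
        for (xa, _y), p in pxy.items():
            pa[xa] = pa.get(xa, 0.0) + p
        return -sum(p * math.log2(p / pa[xa])
                    for (xa, _y), p in pxy.items() if p > 0)

    full = tuple(range(K))
    h_full = cond_entropy(full)
    return {S: cond_entropy(tuple(k for k in full if k not in S)) - h_full
            for m in range(K + 1)
            for S in itertools.combinations(range(K), m)}

# Illustrative 3-user MAC: binary inputs, random transition law.
random.seed(0)
K = 3
px = [[0.3, 0.7], [0.5, 0.5], [0.2, 0.8]]
ch = {}
for xs in itertools.product(range(2), repeat=K):
    w = [random.random() + 0.01 for _ in range(2)]
    ch[xs] = [v / sum(w) for v in w]
f = mac_f(K, px, ch)
```

Whatever random channel is drawn, the resulting `f` must satisfy $f(\emptyset) = 0$, monotonicity, and the union/intersection form of submodularity, since the inputs are independent by construction.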
SIC Orderings and Corner Points for $K$ Users
For the $K$-user MAC, the polymatroidal capacity region has $K!$ corner points on the dominant face, one for each permutation of the user indices.
The corner point corresponding to permutation $\pi$ is achieved by SIC with decoding order $\pi(1), \pi(2), \dots, \pi(K)$:
- User $\pi(1)$ is decoded first (treating all others as noise) at rate $R_{\pi(1)} = I(X_{\pi(1)}; Y)$.
- User $\pi(2)$ is decoded second (after removing $X_{\pi(1)}^n$) at rate $R_{\pi(2)} = I(X_{\pi(2)}; Y \mid X_{\pi(1)})$.
- User $\pi(K)$ is decoded $K$-th at rate $R_{\pi(K)} = I(X_{\pi(K)}; Y \mid X_{\pi(1)}, \dots, X_{\pi(K-1)})$.
The sum of all $K$ rates equals $I(X_1, \dots, X_K; Y)$ regardless of the permutation (by the chain rule).
The entire dominant face is the convex hull of these $K!$ corner points. For $K = 3$, this gives $3! = 6$ corner points; the count grows factorially with $K$.
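The corner-point structure can be made concrete for a Gaussian MAC. The sketch below (the powers, noise level, and function name `sic_rates` are illustrative choices of ours) computes the SIC rate tuple for each of the $3! = 6$ decoding orders and checks that every order attains the same sum rate:

```python
from itertools import permutations
from math import log2, isclose

def sic_rates(powers, N, order):
    """Rates of the SIC corner point of a Gaussian MAC.

    order[0] is decoded first, treating all not-yet-decoded users as noise;
    each decoded signal is subtracted before the next user is decoded.
    """
    rates = {}
    pending = list(order)
    for k in order:
        pending.remove(k)                       # users still treated as noise
        interference = sum(powers[j] for j in pending)
        rates[k] = 0.5 * log2(1 + powers[k] / (N + interference))
    return rates

powers = {1: 1.0, 2: 2.0, 3: 4.0}   # illustrative received powers
N = 1.0                              # noise power
sum_capacity = 0.5 * log2(1 + sum(powers.values()) / N)
for order in permutations(powers):
    r = sic_rates(powers, N, order)
    assert isclose(sum(r.values()), sum_capacity, rel_tol=1e-9)
```

The telescoping of the $\frac{1}{2}\log_2(1 + \cdot)$ terms is exactly the chain rule argument from the text: the sum rate is permutation-invariant even though the individual rates are not.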
Example: Three-User Symmetric Gaussian MAC
Consider a three-user Gaussian MAC: $Y = X_1 + X_2 + X_3 + Z$, with $Z \sim \mathcal{N}(0, N)$ independent of the inputs and a per-user power constraint $\mathbb{E}[X_k^2] \le P$. Write down all $2^3 - 1 = 7$ capacity region constraints and identify the symmetric rate point on the dominant face.
Individual rate constraints ($|\mathcal{S}| = 1$)
For any single user, with the other two users' codewords known and subtracted:
$$R_k \le \frac{1}{2} \log_2\left(1 + \frac{P}{N}\right), \quad k = 1, 2, 3$$
Pairwise constraints ($|\mathcal{S}| = 2$)
For any pair $\{i, j\}$, with the third user's codeword known and subtracted:
$$R_i + R_j \le \frac{1}{2} \log_2\left(1 + \frac{2P}{N}\right)$$
Sum-rate constraint ($|\mathcal{S}| = 3$)
$$R_1 + R_2 + R_3 \le \frac{1}{2} \log_2\left(1 + \frac{3P}{N}\right)$$
Symmetric rate point
By symmetry, the fair operating point is $R_1 = R_2 = R_3 = R$. The sum-rate constraint is the binding one. Setting $3R = \frac{1}{2} \log_2(1 + 3P/N)$:
$$R_{\text{sym}} = \frac{1}{6} \log_2\left(1 + \frac{3P}{N}\right)$$
We must verify that this satisfies the individual and pairwise constraints. Writing $\gamma = P/N$, the individual constraint $R_{\text{sym}} \le \frac{1}{2}\log_2(1 + \gamma)$ is equivalent to $1 + 3\gamma \le (1 + \gamma)^3$, which holds for all $\gamma \ge 0$; the pairwise constraint $2 R_{\text{sym}} \le \frac{1}{2}\log_2(1 + 2\gamma)$ is equivalent to $(1 + 3\gamma)^2 \le (1 + 2\gamma)^3$, which also always holds. So the symmetric point lies on the dominant face, with the sum-rate constraint tight and the others slack.
Note the "fairness cost" of sharing: each user gets only $\frac{1}{6}\log_2(1 + 3\gamma)$, which at high SNR is roughly one third of the rate $\frac{1}{2}\log_2(1 + \gamma)$ the same user would achieve with the channel to itself.
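A quick numeric check confirms the constraint verification above; the SNR value $\gamma = P/N = 4$ is an arbitrary illustration, and `C` is our shorthand for the Gaussian capacity formula:

```python
from math import log2

def C(snr):
    """Gaussian capacity formula (1/2) log2(1 + SNR), in bits."""
    return 0.5 * log2(1 + snr)

g = 4.0                       # illustrative SNR gamma = P/N
R_sym = C(3 * g) / 3          # symmetric rate from the sum constraint
assert R_sym <= C(g)          # individual constraint (|S| = 1) is slack
assert 2 * R_sym <= C(2 * g)  # pairwise constraint (|S| = 2) is slack
assert abs(3 * R_sym - C(3 * g)) < 1e-12  # sum-rate constraint is tight
```

Replacing `g` with any non-negative value leaves all three assertions true, matching the algebraic argument that the sum-rate bound is always the binding one at the symmetric point.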
Massive Random Access and the Many-User MAC
The $K$-user MAC framework is particularly relevant for massive random access in IoT scenarios, where $K$ can be very large (thousands to millions of devices). In this regime:
- The $2^K - 1$ constraints make the capacity region description exponentially complex.
- Practical interest shifts to the symmetric sum rate and the per-user rate as $K \to \infty$.
- New paradigms like unsourced random access (Polyanskiy, 2017) sidestep the exponential complexity by treating all users as identical and only requiring the decoder to recover the set of transmitted messages (without user identification).
The connection between the classical $K$-user MAC and modern massive access is an active area of research, with links to compressed sensing, approximate message passing, and coded random access.
Exponential Growth of MAC Constraints
The $K$-user MAC capacity region requires $2^K - 1$ inequalities. For practical system design with tens of users, this exponential description is intractable. Practical approaches include:
- Greedy SIC: Choose a specific decoding order (typically by decreasing received power) and operate at the corresponding corner point. This requires only $K$ rate constraints.
- Equal-rate operation: Operate at the symmetric rate point, which is determined by a single constraint (the tightest among all subsets).
- Clustering: Group users into small clusters and apply SIC within each cluster while treating inter-cluster interference as noise.
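The greedy approach can be sketched in a few lines (the function name `greedy_sic_rates` and the Gaussian setting are our illustrative choices): sort users by received power, decode strongest first, and read off the $K$ corner-point rates without touching the remaining $2^K - 1 - K$ constraints.

```python
from math import log2

def greedy_sic_rates(powers, N):
    """Corner point reached by SIC in decreasing-power order (Gaussian MAC).

    Only K single-user rate expressions are evaluated, instead of the
    2^K - 1 subset constraints of the full capacity region.
    """
    order = sorted(powers, key=powers.get, reverse=True)
    interference = sum(powers.values())
    rates = {}
    for k in order:
        interference -= powers[k]  # k is decoded; weaker users remain as noise
        rates[k] = 0.5 * log2(1 + powers[k] / (N + interference))
    return rates
```

Decoding the strongest user first means it faces the most residual interference, but its high power compensates; the weakest user, decoded last, sees an interference-free channel.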
- Full capacity region requires an exponential number of constraints
- SIC decoding order affects fairness among users
- Scheduling algorithms must balance throughput and fairness
Quick Check
For a 4-user MAC, how many subset inequalities define the capacity region?
4
7
15
24
$2^4 - 1 = 15$: 4 singletons, 6 pairs, 4 triples, 1 quadruple.
Key Takeaway
The $K$-user MAC capacity region is described by $2^K - 1$ mutual information inequalities, one per non-empty user subset. The region has a polymatroidal structure characterized by the submodularity of the mutual information set function. SIC with a specific decoding order achieves one of $K!$ corner points on the dominant face. The exponential growth of constraints motivates practical approaches like greedy SIC and user clustering.