Multiple Descriptions
Why Multiple Descriptions?
Imagine streaming video over the internet via two independent paths. If both packets arrive, you want high-quality video. If one packet is lost, you still want a watchable (though degraded) picture. This is the multiple descriptions problem: encode a single source into two descriptions such that each description alone provides a coarse reconstruction, while both together provide a fine one.
Unlike the Slepian-Wolf or Berger-Tung setups where the decoder always receives all encodings, in the multiple descriptions problem the decoder may receive only a subset. This creates a tension between redundancy (for robustness) and efficiency (for quality when both arrive).
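One classic way to realize this tension concretely (an illustrative sketch, not from the text above) is a pair of staggered uniform scalar quantizers: each description is the source's cell index on a grid of width $\Delta$, with the second grid offset by $\Delta/2$. Either index alone pins the source to a width-$\Delta$ cell; both together pin it to width $\Delta/2$, gaining only one extra bit of joint precision even though two descriptions were spent.

```python
import random

DELTA = 0.25  # cell width of each side quantizer

def encode(x, offset):
    """Cell index of x on a uniform grid shifted by `offset`."""
    return int((x - offset) // DELTA)

def decode_side(i, offset):
    """Midpoint of cell i on the shifted grid."""
    return offset + (i + 0.5) * DELTA

def decode_joint(i1, i2):
    """Intersect the two staggered cells: x lies in a width-DELTA/2 interval."""
    lo = max(i1 * DELTA, i2 * DELTA + DELTA / 2)
    hi = min((i1 + 1) * DELTA, (i2 + 1) * DELTA + DELTA / 2)
    return (lo + hi) / 2

# Monte Carlo estimate of side and joint distortion for X ~ Uniform[0, 1)
random.seed(0)
n = 200_000
mse_side = mse_joint = 0.0
for _ in range(n):
    x = random.random()
    i1, i2 = encode(x, 0.0), encode(x, DELTA / 2)
    mse_side += (x - decode_side(i1, 0.0)) ** 2
    mse_joint += (x - decode_joint(i1, i2)) ** 2
mse_side /= n    # ~ DELTA^2 / 12 (one description)
mse_joint /= n   # ~ DELTA^2 / 48 (both descriptions)
```

Note the built-in inefficiency: a single quantizer spending the same total rate would have cells of width $\Delta/4$ and distortion $\Delta^2/192$; the gap between $\Delta^2/48$ and $\Delta^2/192$ is the price of redundancy bought for robustness.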
Definition: The Multiple Descriptions Problem
The Multiple Descriptions Problem
A source $X^n$ is encoded into two descriptions at rates $R_1$ and $R_2$:
$$f_1: \mathcal{X}^n \to \{1, \dots, 2^{nR_1}\}, \qquad f_2: \mathcal{X}^n \to \{1, \dots, 2^{nR_2}\}$$
There are three decoders:
- Decoder 0 (both descriptions): $\hat{X}_0^n = g_0(f_1(X^n), f_2(X^n))$, with distortion constraint $\frac{1}{n}\mathbb{E}\,d(X^n, \hat{X}_0^n) \le D_0$
- Decoder 1 (description 1 only): $\hat{X}_1^n = g_1(f_1(X^n))$, with distortion constraint $\frac{1}{n}\mathbb{E}\,d(X^n, \hat{X}_1^n) \le D_1$
- Decoder 2 (description 2 only): $\hat{X}_2^n = g_2(f_2(X^n))$, with distortion constraint $\frac{1}{n}\mathbb{E}\,d(X^n, \hat{X}_2^n) \le D_2$
A rate-distortion tuple $(R_1, R_2, D_0, D_1, D_2)$ is achievable if all three distortion constraints can be simultaneously met.
Multiple descriptions
A source coding problem where a single source is encoded into two (or more) descriptions. A decoder receiving any subset of descriptions can produce a reconstruction, with quality improving as more descriptions are available. The key challenge is balancing individual description quality against joint quality.
Related: Rate-distortion theory and the information bottleneck, Diversity Order, Successive Refinement
Theorem: El Gamal-Cover Achievable Region
The following rate-distortion tuples are achievable for the multiple descriptions problem. For any $(U_1, U_2)$ jointly distributed with $X$ such that $U_1 \to X \to U_2$ (Markov chain: given the source, the descriptions are conditionally independent) and reconstruction functions $\hat{x}_1(u_1)$, $\hat{x}_2(u_2)$, $\hat{x}_0(u_1, u_2)$:
$$R_1 \ge I(X; U_1), \qquad R_2 \ge I(X; U_2), \qquad R_1 + R_2 \ge I(X; U_1, U_2) + I(U_1; U_2)$$
subject to:
$$\mathbb{E}\,d(X, \hat{x}_j) \le D_j, \quad j = 0, 1, 2.$$
The sum-rate constraint reveals the fundamental tension. The term $I(X; U_1, U_2)$ is the minimum total rate to describe $X$ through $(U_1, U_2)$. The extra term $I(U_1; U_2)$ is the cost of redundancy: the overlap between the two descriptions. More redundancy means better individual descriptions but higher total rate.
When $I(U_1; U_2) = 0$, the descriptions carry independent information and we recover the rate-distortion region. When $I(U_1; U_2)$ is large, there is significant overlap, giving good individual quality at the expense of total rate.
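As a concrete (hypothetical) choice of auxiliaries, take $X \sim \mathcal{N}(0, 1)$ and $U_j = X + N_j$ with independent noises $N_j \sim \mathcal{N}(0, s)$, which satisfies the conditional-independence condition; every mutual information then has a closed form. The function name and the noise variance $s$ are illustrative assumptions:

```python
from math import log2

def md_gaussian_rates(s):
    """Rate bounds for X ~ N(0,1) and U_j = X + N_j, N_j ~ N(0, s) independent.

    Returns I(X; U_j), I(X; U1, U2), and the redundancy I(U1; U2), in bits.
    """
    indiv = 0.5 * log2(1 + 1 / s)           # I(X; U_j), individual-rate bound
    joint = 0.5 * log2(1 + 2 / s)           # I(X; U1, U2), joint description cost
    rho = 1 / (1 + s)                       # correlation coefficient of (U1, U2)
    redundancy = -0.5 * log2(1 - rho ** 2)  # I(U1; U2), the redundancy penalty
    return indiv, joint, redundancy

indiv, joint, redundancy = md_gaussian_rates(0.5)
sum_rate = joint + redundancy  # minimum R1 + R2 in the achievable region
```

For this particular test channel, the sum-rate bound $I(X; U_1, U_2) + I(U_1; U_2)$ works out to exactly $2\,I(X; U_1)$: the redundancy term consumes precisely the slack between the individual and joint description costs.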
Codebook generation
Generate codebooks for $U_1$ and $U_2$ independently, using i.i.d. sequences drawn from $p(u_1)$ and $p(u_2)$ respectively. The codebook for $U_j$ contains $2^{nR_j}$ sequences $u_j^n(m_j)$, indexed by $m_j \in \{1, \dots, 2^{nR_j}\}$.
Encoding
Given $x^n$, Encoder 1 finds $u_1^n(m_1)$ jointly typical with $x^n$ (covering lemma: requires $R_1 > I(X; U_1)$). Encoder 2 similarly finds $u_2^n(m_2)$ jointly typical with $x^n$.
Since $u_1^n$ and $u_2^n$ are generated independently but are each correlated with $x^n$, a pair $(u_1^n(m_1), u_2^n(m_2))$ that is jointly typical together with $x^n$ exists with high probability, provided the rates satisfy the sum-rate constraint (mutual covering lemma).
Each encoder sends the index of the chosen sequence (not a bin index: individual decoders need the full index).
Decoding
Decoder $j$ (receiving only description $j$): reconstructs symbol by symbol using $\hat{x}_{ji} = \hat{x}_j(u_{ji}(m_j))$.
Decoder 0 (receiving both): the decoder has indices $m_1$ and $m_2$ and reconstructs $\hat{x}_{0i} = \hat{x}_0(u_{1i}(m_1), u_{2i}(m_2))$.
The sum-rate constraint arises because the encoders must find a pair $(u_1^n, u_2^n)$ that is jointly typical with $x^n$ and with each other. The mutual information $I(U_1; U_2)$ quantifies the excess rate needed to ensure this joint typicality.
Definition: No Excess Rate Multiple Descriptions
No Excess Rate Multiple Descriptions
A multiple descriptions code has no excess rate if $R_1 + R_2 = R(D_0)$, meaning the total rate equals the minimum rate to achieve distortion $D_0$ with a single encoder. In the no-excess-rate regime, there is no rate penalty from splitting the encoding into two descriptions: all the "redundancy cost" is zero.
The no-excess-rate condition is achievable when the source is successively refinable: $R_1 = R(D_1)$, $R_1 + R_2 = R(D_0)$, and the nested structure allows the two descriptions to combine without waste.
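For a Gaussian source under squared error, which is successively refinable, this can be checked numerically with $R(D) = \tfrac{1}{2}\log_2(\sigma^2/D)$ (a sketch; the distortion values are arbitrary illustrations):

```python
from math import log2

def rate_distortion(D, var=1.0):
    """Gaussian rate-distortion function R(D) = max(0.5*log2(var/D), 0), in bits."""
    return max(0.5 * log2(var / D), 0.0)

# No-excess-rate point: description 1 achieves the coarse distortion D1 at
# exactly R(D1); description 2 carries the refinement down to D0.
D1, D0 = 0.25, 0.05
R1 = rate_distortion(D1)                        # coarse layer
R2 = rate_distortion(D0) - rate_distortion(D1)  # refinement layer
# R1 + R2 == R(D0): no rate is wasted by splitting into two descriptions.
```

One caveat: at this corner, description 2 alone conveys nothing useful, so it only meets the trivial constraint $D_2 = \sigma^2$; demanding a nontrivial $D_2$ as well generally forces excess rate.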
Example: Gaussian Multiple Descriptions
Let $X \sim \mathcal{N}(0, \sigma^2)$ with squared-error distortion. Characterize the achievable distortion triple $(D_0, D_1, D_2)$ for symmetric rates $R_1 = R_2 = R$.
The Ozarow region
For the Gaussian source with symmetric rates $R_1 = R_2 = R$, Ozarow (1980) showed that the achievable distortion region is:
$$D_1, D_2 \ge \sigma^2 2^{-2R}, \qquad D_0 \ge \frac{\sigma^2 2^{-4R}}{1 - \left(\sqrt{\Pi} - \sqrt{\Delta}\right)^2}$$
where $\Pi = (1 - D_1/\sigma^2)(1 - D_2/\sigma^2)$ and $\Delta = D_1 D_2/\sigma^4 - 2^{-4R}$ depend on the rates and side distortions (when $\Pi \le \Delta$, the bound reduces to $D_0 \ge \sigma^2 2^{-4R}$). The key tradeoff: making $D_1$ and $D_2$ smaller (better individual descriptions) forces $D_0$ larger (worse joint description), because more rate is spent on redundancy.
Extreme cases
Maximum redundancy ($D_1 = D_2 = \sigma^2 2^{-2R}$, each description is as good as possible): the joint distortion is at best $D_0 = D_1/(2 - D_1/\sigma^2) \approx D_1/2$. The second description adds almost nothing: a factor-of-two gain instead of the $\sigma^2 2^{-4R}$ a single $2R$-bit encoder would achieve.
No redundancy ($D_0 = \sigma^2 2^{-4R}$, best possible joint distortion): the individual distortions are $D_1 = D_2 = \sigma^2 (1 + 2^{-4R})/2 \approx \sigma^2/2$. Each description alone is nearly useless (all rate goes to complementary information).
The practical operating point lies between these extremes, trading off individual and joint quality.
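Ozarow's symmetric-rate bound, $D_0 \ge \sigma^2 2^{-4R} / (1 - (\sqrt{\Pi} - \sqrt{\Delta})^2)$ with $\Pi = (1 - D_1/\sigma^2)(1 - D_2/\sigma^2)$ and $\Delta = D_1 D_2/\sigma^4 - 2^{-4R}$, can be evaluated directly to verify both extremes (a sketch; the function name and regime handling are mine):

```python
from math import sqrt

def ozarow_d0_bound(R, d1, d2, var=1.0):
    """Ozarow's lower bound on the joint distortion D0 for a Gaussian source
    of variance `var`, with two descriptions at rate R bits/sample each and
    side distortions d1, d2."""
    d1, d2 = d1 / var, d2 / var       # normalize by the source variance
    base = 2.0 ** (-4 * R)            # sigma^2 * 2^{-2(R1+R2)}, normalized
    pi_ = (1 - d1) * (1 - d2)
    delta = d1 * d2 - base
    if delta < 0 or pi_ <= delta:     # low-redundancy regime: bound is just base
        return var * base
    return var * base / (1 - (sqrt(pi_) - sqrt(delta)) ** 2)

R = 1.0
d_best = 2.0 ** (-2 * R)              # best possible individual distortion
# Maximum redundancy: D1 = D2 = 2^{-2R}; joint gains at most a factor ~2.
d0_max_red = ozarow_d0_bound(R, d_best, d_best)
# No redundancy: D0 reaches 2^{-4R}, but D1 = D2 = (1 + 2^{-4R})/2.
d_useless = (1 + 2.0 ** (-4 * R)) / 2
d0_no_red = ozarow_d0_bound(R, d_useless, d_useless)
```

Sweeping $D_1$ between these two endpoints and plotting the resulting $D_0$ bound traces out the redundancy-quality tradeoff curve explored in the interactive below.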
Multiple Descriptions: Redundancy-Quality Tradeoff
Explore the tradeoff between individual description quality and joint quality in the Gaussian multiple descriptions problem with symmetric rates.
Parameters
- $R$: rate allocated to each description (bits per sample)
- $\sigma^2$: variance of the Gaussian source
Historical Note: The Multiple Descriptions Problem: A Long-Standing Challenge
1980-1982: The multiple descriptions problem was first posed by Witsenhausen in 1980 and independently by Wolf, Wyner, and Ziv, motivated by packet communication over unreliable networks. Ozarow solved the Gaussian case in 1980, and El Gamal and Cover provided the first general achievable region in 1982.
What makes this problem fascinating is that the general characterization remains open after more than 40 years. Even the two-description case for a Gaussian source with asymmetric rates and general distortion constraints is not fully resolved. The problem has deep connections to the Korner-Marton coding problem and to questions about the structure of optimal quantizers.
Quick Check
In the multiple descriptions problem, if we want each individual description to provide a good reconstruction ($D_1$ and $D_2$ small), what happens to the joint reconstruction quality $D_0$?
$D_0$ also improves (gets smaller)
$D_0$ gets worse (larger) for a fixed total rate
$D_0$ is unaffected by the individual distortion requirements
Correct! More redundancy means better individual quality but less new information when both descriptions are combined. This is the core tradeoff in multiple descriptions coding.
Key Takeaway
The multiple descriptions problem captures the fundamental tradeoff between redundancy and efficiency when a source must be robustly described over unreliable links. The El Gamal-Cover region gives an achievable set of rate-distortion tuples (the complete characterization remains open). The sum-rate penalty $I(U_1; U_2)$ quantifies the cost of redundancy. The Gaussian case (Ozarow) reveals a clean tradeoff between individual and joint quality.