Chapter Summary
Key Points
1. Set partitioning is the geometric core of TCM. An Ungerboeck partition of a $2^{m+1}$-point constellation is a chain of successive binary splits in which each split is chosen to maximize the intra-subset minimum squared Euclidean distance $\Delta_i^2$. For unit-energy 8-PSK: $\Delta_0^2 = 0.586 \to \Delta_1^2 = 2 \to \Delta_2^2 = 4$; for 16-QAM: $\Delta_0^2 \to 2\Delta_0^2 \to 4\Delta_0^2 \to 8\Delta_0^2$ (clean doubling from the 2D lattice structure). Every level of splitting at least doubles the intra-subset MSED, and this geometric amplification is the source of TCM's coding gain (see the first code sketch after this list).
2. Mapping by set partitioning separates coded from uncoded bits. A TCM scheme uses a rate-$\tilde m/(\tilde m+1)$ convolutional code to select a level-$(\tilde m+1)$ coset, then uses the remaining $m-\tilde m$ uncoded bits to select a point inside the coset. The coset-selecting bits are the ones that need coding protection; the uncoded bits ride the large intra-subset MSED for free. The result is coding gain without any bandwidth expansion: the constellation size doubles from $2^m$ to $2^{m+1}$ at fixed baud rate and fixed bit rate, absorbing the code's redundancy (see the second sketch after this list).
3. Free Euclidean distance obeys a two-term lower bound. For any Ungerboeck TCM, $d_{\mathrm{free}}^2 \ge \min\{\Delta_{\tilde m+1}^2,\; d_{H}^{\mathrm{free}}\,\Delta_1^2\}$: the first term bounds single-symbol parallel-transition errors, the second bounds longer error events via the free Hamming distance of the convolutional code times the level-1 MSED. A good design balances the two terms; in the celebrated 4-state 8-PSK TCM the parallel-transition distance is $4$ and the shortest detour through the trellis has distance $4.59$, so $d_{\mathrm{free}}^2 = 4$, giving $10\log_{10}(4/2) \approx 3$ dB over uncoded QPSK ($d_{\min}^2 = 2$; see the third sketch after this list).
4. The three Ungerboeck design rules pick the label assignment. (R1) Parallel transitions carry deepest-level coset points (same level-$(\tilde m+1)$ coset); (R2) transitions from or into a single state carry next-level coset points (same level-1 coset); (R3) uniform constellation usage. These rules jointly underpin the two-term distance bound of point 3 and are the direct outputs of the computer search that produced Ungerboeck's 1982 Tables I–III.
5. Viterbi + subset decoding is the ML decoder. For each received sample and each level-$(\tilde m+1)$ coset, the subset decoder finds the nearest in-coset point and records its squared Euclidean distance. Viterbi then runs over the code trellis using those coset-level branch metrics. With $2^\nu$ states, the per-symbol complexity is $2^{\nu+\tilde m}$ ACS operations plus $2^{m+1}$ point-distance evaluations. A traceback depth of several constraint lengths keeps the bit-error floor tight (see the fourth sketch after this list).
6. Asymptotic coding gain is $\gamma = 10\log_{10}\!\big[(d_{\mathrm{free}}^2/\bar E_{\mathrm{coded}})\,/\,(d_{\min,\mathrm{uncoded}}^2/\bar E_{\mathrm{uncoded}})\big]$. At high SNR, the BER curve shifts horizontally by $\gamma$ dB relative to the uncoded baseline at the same spectral efficiency (the third sketch after this list works the 8-PSK numbers). For 2D TCM this gain is 3–6 dB (the upper end reached by 64- to 256-state trellises); larger gains require higher-dimensional constellations, as in V.34's 4D TCM. Shaping gain, up to 1.53 dB (see the Shaping Gain preview), is a separate quantity, added via constellation-shaping techniques in Chapter 4.
7. V.32/V.34 modems were TCM made flesh. V.32 (1984, 9.6 kbit/s) used 8-state 32-CROSS TCM for 4 dB of gain over uncoded 16-QAM. V.34 (1994; 28.8 kbit/s, extended to 33.6 kbit/s in 1996) combined 16-state 4D TCM with shell-mapping shaping to reach within 1 dB of Shannon capacity on a typical telephone line. The V.34 modem was, for a decade, the most sophisticated commercial communications system in the world, and Ungerboeck's 1982 paper was the single most influential theoretical input to its design.
8. BICM supplanted TCM in wireless; Ungerboeck's partition idea lives on in probabilistic shaping. The interleaver in BICM converts a code's Hamming distance into diversity order, which beats TCM on fading channels (Caire–Taricco–Biglieri 1998). TCM remains optimal on AWGN-dominant links (satellite, optical), and the set-partitioning idea is alive in modern probabilistic amplitude shaping (PAS) for DVB-S2X and high-capacity optical transponders: the partition now carries the shaping gain instead of the coding gain, but the geometric insight is the same.
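The code sketches below are not from the chapter; they are small, self-contained Python illustrations of the points above, with every constellation, generator polynomial, and noise level chosen for illustration only. The first sketch reproduces the distance chains of point 1 by exhaustively searching the balanced binary splits that maximize intra-subset MSED, assuming unit-energy 8-PSK and the $\{\pm1,\pm3\}$ 16-QAM grid (so $\Delta_0^2 = 4$ for 16-QAM).

```python
# Greedy Ungerboeck-style partitioning: split every subset into two equal halves
# so that the intra-subset minimum squared Euclidean distance (MSED) is maximized.
from itertools import combinations
import numpy as np

def msed(points):
    """Minimum squared Euclidean distance within a set of complex points."""
    if len(points) < 2:
        return float("inf")
    return min(abs(a - b) ** 2 for a, b in combinations(points, 2))

def best_split(points):
    """Exhaustively find the balanced bipartition maximizing the worse half's MSED."""
    pts, best = list(points), None
    for half in combinations(range(len(pts)), len(pts) // 2):
        a = [pts[i] for i in half]
        b = [pts[i] for i in range(len(pts)) if i not in half]
        score = min(msed(a), msed(b))
        if best is None or score > best[0]:
            best = (score, a, b)
    return best[1], best[2]

def msed_chain(points, levels):
    """Intra-subset MSED after 0, 1, ..., `levels` binary splits."""
    subsets = [list(points)]
    chain = [min(msed(s) for s in subsets)]
    for _ in range(levels):
        subsets = [half for s in subsets for half in best_split(s)]
        chain.append(min(msed(s) for s in subsets))
    return chain

psk8 = [np.exp(2j * np.pi * k / 8) for k in range(8)]                    # unit energy
qam16 = [complex(i, q) for i in (-3, -1, 1, 3) for q in (-3, -1, 1, 3)]  # Delta_0^2 = 4
print("8-PSK :", [round(d, 3) for d in msed_chain(psk8, 2)])   # ~[0.586, 2.0, 4.0]
print("16-QAM:", [round(d, 3) for d in msed_chain(qam16, 3)])  # [4.0, 8.0, 16.0, 32.0]
```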
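The second sketch illustrates the coded/uncoded split of point 2 for the $m = 2$, $\tilde m = 1$ (8-PSK) case. The rate-1/2 feedforward encoder (octal generators 7, 5) is an illustrative stand-in, not Ungerboeck's exact code, and the labeling assumes the natural set-partition map in which the level-2 cosets of 8-PSK are the antipodal pairs $\{k, k+4\}$.

```python
# Mapping by set partitioning, one 8-PSK symbol per step: the coded bit drives a
# rate-1/2 convolutional encoder whose two output bits pick a level-2 coset
# {k, k+4}; the uncoded bit picks the point inside that coset.
GEN = (0b111, 0b101)          # illustrative rate-1/2 generators (octal 7, 5)

def tcm_encode(pairs, state=0):
    """pairs: list of (coded_bit, uncoded_bit); returns 8-PSK point indices 0..7."""
    out = []
    for coded_bit, uncoded_bit in pairs:
        reg = (coded_bit << 2) | state                     # 3-bit encoder window
        c0, c1 = (bin(reg & g).count("1") & 1 for g in GEN)
        coset = (c0 << 1) | c1                             # one of 4 level-2 cosets
        out.append(coset + 4 * uncoded_bit)                # point within {coset, coset+4}
        state = reg >> 1                                   # 4-state encoder memory
    return out

print(tcm_encode([(1, 0), (0, 1), (1, 1), (0, 0)]))
```

Two information bits enter per symbol and three label bits leave, which is exactly the doubling from $2^m = 4$ to $2^{m+1} = 8$ points that absorbs the code's redundancy at a fixed baud rate.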
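The third sketch works out the distance accounting of points 3 and 6 for 4-state 8-PSK TCM, under the standard assumption that the shortest non-parallel error event spans three branches: the diverging and merging branches each contribute at least the level-1 MSED, and the middle branch contributes at least the full-constellation MSED.

```python
# Two contributions to d_free^2 for 4-state 8-PSK TCM (unit-energy constellation).
import numpy as np

delta0 = 2 - np.sqrt(2)            # 8-PSK MSED, 4*sin^2(pi/8) ~= 0.586
delta1 = 2.0                       # level-1 (QPSK subset) MSED
delta2 = 4.0                       # level-2 (antipodal pair) MSED

parallel = delta2                  # single-symbol parallel-transition error
detour = delta1 + delta0 + delta1  # shortest three-branch error event, ~4.586
d_free_sq = min(parallel, detour)

# Asymptotic coding gain vs. uncoded QPSK (d_min^2 = 2 at the same unit energy).
gain_db = 10 * np.log10(d_free_sq / 2.0)
print(round(parallel, 3), round(detour, 3), round(gain_db, 2))   # 4.0 4.586 3.01
```

Both the coded 8-PSK constellation and the uncoded QPSK reference have unit average energy here, so the energy normalization in the coding-gain formula of point 6 drops out.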
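The fourth sketch implements the subset-decoding Viterbi recursion of point 5 for the same stand-in code: per received sample it keeps only the nearest in-coset point and its squared distance for each of the four level-2 cosets, and the trellis search then runs entirely on those coset metrics. The block length and noise level are illustrative.

```python
# Viterbi + subset decoding for the stand-in 4-state 8-PSK TCM sketched above.
import numpy as np

GEN = (0b111, 0b101)
PSK8 = np.exp(2j * np.pi * np.arange(8) / 8)

def branch(state, coded_bit):
    """Next state and selected level-2 coset for one trellis branch."""
    reg = (coded_bit << 2) | state
    c0, c1 = (bin(reg & g).count("1") & 1 for g in GEN)
    return reg >> 1, (c0 << 1) | c1

def viterbi_decode(rx):
    """ML sequence of (coded_bit, uncoded_bit) pairs, assuming the encoder starts in state 0."""
    metrics = np.full(4, np.inf); metrics[0] = 0.0
    paths = {0: []}
    for r in rx:
        # Subset decoding: nearest in-coset point (index 0 or 1) and its metric.
        best = [(np.min(d), int(np.argmin(d)))
                for d in (np.abs(r - PSK8[[k, k + 4]]) ** 2 for k in range(4))]
        new_m = np.full(4, np.inf); new_p = {}
        for s in range(4):
            if not np.isfinite(metrics[s]):
                continue
            for b in (0, 1):                               # add-compare-select
                ns, coset = branch(s, b)
                m, u = best[coset]
                if metrics[s] + m < new_m[ns]:
                    new_m[ns] = metrics[s] + m
                    new_p[ns] = paths[s] + [(b, u)]
        metrics, paths = new_m, new_p
    return paths[int(np.argmin(metrics))]

# Encode a short block with the stand-in trellis, add light noise, decode.
tx_pairs = [(1, 0), (0, 1), (1, 1), (0, 0)]
state, idx = 0, []
for cb, ub in tx_pairs:
    state, coset = branch(state, cb)
    idx.append(coset + 4 * ub)
rng = np.random.default_rng(1)
rx = PSK8[idx] + 0.05 * (rng.standard_normal(4) + 1j * rng.standard_normal(4))
print(viterbi_decode(rx) == tx_pairs)   # True with overwhelming probability at this SNR
```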
Looking Ahead
Chapter 3 generalizes from a single trellis code on a partitioned constellation to multilevel coding and multistage decoding (MLC/MSD), Imai and Hirakawa's 1977 scheme, which uses a different code at each partition level and decodes them sequentially. MLC reveals the "capacity rule" for rate allocation across levels: the optimal coding rate at level $i$ equals the capacity of the binary-input channel induced by the partition at that level (made precise below). Multistage decoding is low-complexity (each stage is a binary decoding problem), and under the capacity rule it achieves the same capacity as joint (coded-modulation) decoding, an information-theoretic equivalence that will bear on our comparison with BICM in Chapter 5. Chapter 4 then reinterprets the partition as a chain of sublattice cosets, which unlocks the shaping gain and the lattice-coset-code framework of Forney (1988).
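A compact way to state the capacity rule and the equivalence claim, in notation of our own ($X_i$ is the label bit at partition level $i$, $i = 1,\dots,L$, and $Y$ is the channel output), is

$$R_i = I\big(X_i;\,Y \,\big|\, X_1,\dots,X_{i-1}\big), \qquad \sum_{i=1}^{L} R_i = I\big(X_1,\dots,X_L;\,Y\big) = I(X;Y),$$

where the first equality in the sum is the chain rule of mutual information and the second holds because the label bits jointly determine the transmitted point; under the capacity rule, multistage decoding therefore gives away nothing relative to joint decoding of the full coded-modulation scheme.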