The DD Channel Matrix

From a Sum to a Matrix

The input-output relation of Section 2 can be written as $\mathbf{y}_{DD} = \mathbf{H}_{DD}\,\mathbf{x}_{DD} + \mathbf{w}_{DD}$, where $\mathbf{H}_{DD} \in \mathbb{C}^{MN \times MN}$ is the DD channel matrix. This matrix form is what every linear detector (MMSE, ZF, matched filtering) takes as input, so we now write it out explicitly.

The point is that $\mathbf{H}_{DD}$ is not a generic $MN \times MN$ complex matrix: it has two structural properties that detection algorithms exploit, sparsity (at most $P \cdot MN$ nonzero entries, not $M^2 N^2$) and block-circulant structure (a consequence of the doubly-circular 2D convolution).

Definition: Vectorized DD Grid

The DD grid matrix $X_{DD}[\ell, k]$ of size $M \times N$ is vectorized column-by-column:
$$\mathbf{x}_{DD} = \text{vec}(X_{DD}) = \left[X_{DD}[0, 0], X_{DD}[1, 0], \ldots, X_{DD}[M-1, N-1]\right]^T \in \mathbb{C}^{MN}.$$
Similarly $\mathbf{y}_{DD} = \text{vec}(Y_{DD})$ and $\mathbf{w}_{DD} = \text{vec}(W_{DD})$. The index mapping is $(\ell, k) \leftrightarrow j = k\,M + \ell$ for $\ell \in \{0, \ldots, M-1\}$ and $k \in \{0, \ldots, N-1\}$.
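The column-major convention and the index map can be sanity-checked in a few lines (a sketch assuming NumPy; the variable names are illustrative):

```python
import numpy as np

# Column-major (Fortran-order) vectorization of an M x N DD grid,
# and the index map (ell, k) <-> j = k*M + ell from the definition.
M, N = 4, 2
rng = np.random.default_rng(0)
X_DD = rng.standard_normal((M, N))

x_DD = X_DD.flatten(order="F")  # vec(X_DD): stack columns top to bottom

for k in range(N):
    for ell in range(M):
        assert x_DD[k * M + ell] == X_DD[ell, k]
```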

Theorem: Structure of the DD Channel Matrix

With the column-major vectorization of the DD grid, the channel matrix $\mathbf{H}_{DD} \in \mathbb{C}^{MN \times MN}$ is a sum of $P$ weighted shift-phase matrices:
$$\mathbf{H}_{DD} = \sum_{i=1}^{P} h_i\,(\boldsymbol{\Pi}_N^{k_i}\,\boldsymbol{\Delta}_N^{\ell_i}) \otimes \boldsymbol{\Pi}_M^{\ell_i},$$
where $\otimes$ is the Kronecker product, $\boldsymbol{\Pi}_K$ is the $K \times K$ circular-shift (forward) permutation matrix, and $\boldsymbol{\Delta}_N = \text{diag}(1, e^{j 2\pi/N}, \ldots, e^{j 2\pi(N-1)/N})$ is the diagonal modulation matrix encoding the Zak twist.

The matrix is:

  1. Sparse: exactly $P \cdot MN$ non-zero entries out of $M^2 N^2$.
  2. Block circulant with circulant blocks (for integer-Doppler paths).
  3. Diagonalized by the 2D DFT: $\mathbf{H}_{DD} = (\mathbf{F}_N \otimes \mathbf{F}_M)^H\,\boldsymbol{\Lambda}\,(\mathbf{F}_N \otimes \mathbf{F}_M)$ with $\boldsymbol{\Lambda}$ diagonal; this is the ISFFT/SFFT structure of Chapter 3.

The Kronecker product decomposes $\mathbf{H}_{DD}$ into a "Doppler part" $\boldsymbol{\Pi}_N^{k_i}\boldsymbol{\Delta}_N^{\ell_i}$ acting on the $N$-dimensional Doppler axis, and a "delay part" $\boldsymbol{\Pi}_M^{\ell_i}$ acting on the $M$-dimensional delay axis. Each axis is shifted circularly by the path index; the Doppler part also carries the Zak-twist phase because the delay shift crosses block boundaries of length $M$ in the vectorized representation.
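As a concrete check of the theorem, the Kronecker form can be assembled directly (a minimal sketch assuming NumPy; the path gains and offsets are made up for illustration):

```python
import numpy as np

def cyclic_shift(K):
    """K x K forward circular-shift permutation matrix Pi_K (e_j -> e_{j+1})."""
    return np.roll(np.eye(K), 1, axis=0)

def dd_channel_matrix(M, N, paths):
    """Build H_DD = sum_i h_i (Pi_N^k_i Delta_N^ell_i) (x) Pi_M^ell_i."""
    Pi_M, Pi_N = cyclic_shift(M), cyclic_shift(N)
    H = np.zeros((M * N, M * N), dtype=complex)
    for h, ell, k in paths:
        Delta_ell = np.diag(np.exp(2j * np.pi * ell * np.arange(N) / N))
        doppler = np.linalg.matrix_power(Pi_N, k) @ Delta_ell  # N x N Doppler part
        delay = np.linalg.matrix_power(Pi_M, ell)              # M x M delay part
        H += h * np.kron(doppler, delay)
    return H

# Illustrative 4-path channel on a 16 x 16 grid (distinct (ell, k) offsets)
M, N = 16, 16
paths = [(1.0, 0, 0), (0.7, 3, 1), (0.5, 5, 3), (0.3, 2, 7)]
H = dd_channel_matrix(M, N, paths)
print(np.count_nonzero(H))  # P * MN = 4 * 256 = 1024 non-zeros
```

Since distinct $(\ell_i, k_i)$ offsets produce disjoint non-zero supports, the count confirms the $P \cdot MN$ sparsity claim.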


Sparsity Pattern of the DD Channel Matrix

Sparsity pattern of $\mathbf{H}_{DD}$ for an $M = 16$, $N = 16$ grid with $P = 4$ paths. Each path contributes a diagonal band of $MN = 256$ non-zeros, for a total of $P \cdot MN = 1024$ non-zeros out of $(MN)^2 = 65{,}536$, a density of $P/(MN) = 1.6\%$. For large frames ($M = 512$, $N = 16$, $P = 8$) the density is $\sim 0.1\%$.

DD Channel Matrix Sparsity vs Channel Parameters

Visualize $|\mathbf{H}_{DD}[j, j']|$ as a heatmap for a random $P$-tap channel. Observe: (i) the matrix is sparse with $P$ diagonal bands, (ii) the bands correspond to path $(\ell_i, k_i)$ offsets, (iii) the sparsity density equals $P/(MN)$. Vary $P$ and watch the density scale linearly.
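The linear density scaling can also be checked without forming the matrix at all, by enumerating non-zero positions directly from the $(\ell_i, k_i)$ offsets (a sketch with made-up offsets; each path maps input cell $(\ell', k')$ to output cell $((\ell'+\ell_i) \bmod M, (k'+k_i) \bmod N)$):

```python
def nonzero_positions(M, N, offsets):
    """Non-zero (row, col) positions of H_DD: one entry per path per DD cell."""
    pos = set()
    for ell, k in offsets:
        for kp in range(N):
            for lp in range(M):
                col = kp * M + lp                          # input cell j = k*M + ell
                row = ((kp + k) % N) * M + (lp + ell) % M  # circularly shifted output
                pos.add((row, col))
    return pos

M, N = 16, 16
for P in (1, 2, 4, 8):
    offsets = [(p + 1, p) for p in range(P)]  # P distinct (ell, k) pairs
    density = len(nonzero_positions(M, N, offsets)) / (M * N) ** 2
    print(P, density)  # density = P / (MN): 1/256, 2/256, 4/256, 8/256
```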


Example: Explicit (M,N)=(4,2)(M, N) = (4, 2) Channel Matrix

For $M = 4$, $N = 2$ (so the DD grid has 8 cells), construct the DD channel matrix for a single path $(h_1, \ell_1, k_1) = (1, 1, 0)$. Ignore the Zak twist (since $\ell_1 < M$ and no boundary crossing occurs in this simple example).
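A sketch of this worked example (assuming NumPy): with $k_1 = 0$ and the twist dropped, the theorem's formula reduces to $\mathbf{I}_2 \otimes \boldsymbol{\Pi}_4$, two $4 \times 4$ circular-shift blocks on the diagonal.

```python
import numpy as np

M, N = 4, 2
Pi_M = np.roll(np.eye(M), 1, axis=0)  # 4 x 4 circular delay shift
H = np.kron(np.eye(N), Pi_M)          # path (h, ell, k) = (1, 1, 0), twist dropped
print(H.astype(int))

# Each DD cell (ell', k') is moved to ((ell' + 1) mod 4, k'):
e = np.zeros(M * N); e[0] = 1         # impulse at cell (0, 0) -> j = 0
assert np.argmax(H @ e) == 1          # lands at cell (1, 0) -> j = 1
```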

How Detectors Exploit This Structure

The structured form $\mathbf{H}_{DD} = \sum_i h_i (\boldsymbol{\Pi}_N^{k_i}\boldsymbol{\Delta}_N^{\ell_i}) \otimes \boldsymbol{\Pi}_M^{\ell_i}$ is exactly what enables efficient OTFS detection (Chapter 8):

  • Matched filter: $\mathbf{H}_{DD}^{H}\,\mathbf{y}$ can be computed as $P$ sparse back-projections, each costing $O(MN)$, for a total of $O(PMN)$.
  • MMSE: $(\mathbf{H}_{DD}^{H}\mathbf{H}_{DD} + \sigma^2\mathbf{I})^{-1}$ is itself block-circulant, diagonalizable by the 2D DFT. The inverse can be computed in $O(MN \log(MN))$ via the SFFT.
  • Message passing: the factor graph has $O(MN)$ variable nodes and $O(MN)$ factor nodes, with each factor having at most $P$ incident variables. Belief-propagation iterations cost $O(PMN)$ per sweep.
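The matched-filter case can be sketched on the grid without ever forming $\mathbf{H}_{DD}$: each path contributes one circular back-shift plus a phase correction, $O(MN)$ apiece. An illustrative sketch assuming NumPy, using the adjoint identity $\langle \mathbf{H}\mathbf{x}, \mathbf{y}\rangle = \langle \mathbf{x}, \mathbf{H}^H\mathbf{y}\rangle$ as a correctness check:

```python
import numpy as np

def apply_channel(X, paths):
    """Forward channel on the M x N grid: P twisted, shifted copies, O(P*MN)."""
    M, N = X.shape
    n = np.arange(N)
    Y = np.zeros_like(X, dtype=complex)
    for h, ell, k in paths:
        twisted = X * np.exp(2j * np.pi * ell * n / N)[None, :]  # Zak twist
        Y += h * np.roll(np.roll(twisted, ell, axis=0), k, axis=1)
    return Y

def matched_filter(Y, paths):
    """H_DD^H y as P sparse back-projections, each O(MN)."""
    M, N = Y.shape
    n = np.arange(N)
    Xhat = np.zeros_like(Y, dtype=complex)
    for h, ell, k in paths:
        shifted = np.roll(np.roll(Y, -ell, axis=0), -k, axis=1)  # undo the shifts
        Xhat += np.conj(h) * np.exp(-2j * np.pi * ell * n / N)[None, :] * shifted
    return Xhat

rng = np.random.default_rng(1)
M, N = 16, 16
paths = [(0.9 + 0.1j, 2, 1), (0.4, 5, 3)]  # illustrative taps
X, Y = rng.standard_normal((M, N)), rng.standard_normal((M, N))
# Adjoint check: <Hx, y> == <x, H^H y>
assert np.isclose(np.vdot(apply_channel(X, paths), Y),
                  np.vdot(X, matched_filter(Y, paths)))
```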

All three scale linearly (up to log factors) with the grid size, a direct consequence of the sparsity. A dense $MN \times MN$ matrix would make OTFS impractical at typical 5G frame sizes ($MN \sim 10^4$); the structure brings detection down to real time.
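Under the approximation where the Zak-twist phase is neglected (all $\boldsymbol{\Delta}_N^{\ell_i} \approx \mathbf{I}$), the channel is a plain 2D circular convolution and the MMSE equalizer runs in $O(MN \log MN)$ via the 2D FFT. A hedged sketch of that special case, assuming NumPy:

```python
import numpy as np

def mmse_equalize(Y, paths, sigma2):
    """MMSE in O(MN log MN); valid when the Zak twist is neglected."""
    M, N = Y.shape
    kernel = np.zeros((M, N), dtype=complex)
    for h, ell, k in paths:
        kernel[ell, k] += h                   # sparse 2D impulse response
    Lam = np.fft.fft2(kernel)                 # channel eigenvalues under the 2D DFT
    Xf = np.conj(Lam) * np.fft.fft2(Y) / (np.abs(Lam) ** 2 + sigma2)
    return np.fft.ifft2(Xf)

# Check: with tiny noise variance, the equalizer inverts the twist-free channel
rng = np.random.default_rng(2)
M, N = 16, 16
paths = [(1.0, 0, 0), (0.3, 2, 1)]            # dominant tap keeps Lam away from 0
X = rng.standard_normal((M, N)) + 0j
Y = sum(h * np.roll(np.roll(X, ell, axis=0), k, axis=1) for h, ell, k in paths)
Xhat = mmse_equalize(Y, paths, sigma2=1e-12)
assert np.allclose(Xhat, X, atol=1e-6)
```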

Key Takeaway

The DD channel matrix is sparse and structured; this is the enabler. Each of the $P$ physical paths contributes a Kronecker product of circular-shift and diagonal-phase matrices. Their sum is sparse with density $P/(MN) \ll 1$, block-circulant with circulant blocks, and diagonalized by the 2D DFT. Every practical OTFS detector (matched filter, MMSE, message passing) exploits at least one of these three properties. Without them, detection on an $MN \geq 10^4$ grid would be computationally infeasible.

Computational Complexity: Dense vs Structured DD Matrix

| Operation | Dense matrix | Sparse DD structure |
| --- | --- | --- |
| Matrix-vector product $\mathbf{H}_{DD}\mathbf{x}$ | $O(M^2 N^2)$ | $O(P \cdot MN)$ |
| Matched filter $\mathbf{H}_{DD}^{H}\mathbf{y}$ | $O(M^2 N^2)$ | $O(P \cdot MN)$ |
| Solve $\mathbf{H}_{DD}\mathbf{x} = \mathbf{y}$ (direct) | $O((MN)^3)$ | $O(MN \log(MN))$ via 2D FFT |
| MMSE: $(\mathbf{H}_{DD}^{H}\mathbf{H}_{DD} + \sigma^2\mathbf{I})^{-1}\mathbf{y}$ | $O((MN)^3)$ | $O(MN \log(MN))$ |
| Storage | $O((MN)^2)$ | $O(P)$ parameters |