Exercises
ex-otfs-ch08-01
Easy. For QPSK ($Q = 4$) and frame size $MN$, compute the ML search space size $Q^{MN}$. Express it in scientific notation.
The search space is $Q^{MN}$ with $Q = 4$.
Compute
$Q^{MN} = 4^{MN} = 2^{2MN}$ candidate frames.
Interpret
$4^{MN}$ candidates per frame. With $10^{18}$ ops/sec (an EFLOPS machine), exhaustively evaluating them would take more seconds per frame than the age of the universe ($\sim 4 \times 10^{17}$ s). ML is infeasible.
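The arithmetic can be checked in a few lines; the frame size below ($M = N = 16$) is an illustrative assumption, not a value from the exercise:

```python
import math

# Hypothetical frame size for illustration: M = N = 16 gives MN = 256
# QPSK symbols per frame.
Q, M, N = 4, 16, 16
candidates = Q ** (M * N)            # ML search space size

machine_rate = 1e18                  # EFLOPS machine, ops/sec
seconds = candidates / machine_rate  # optimistic: one op per candidate

age_of_universe_s = 4.35e17
print(f"search space ~ 10^{math.log10(candidates):.0f}")
print(f"time: {seconds:.2e} s vs universe age: {age_of_universe_s:.2e} s")
```

Even at one operation per candidate, the search takes on the order of $10^{136}$ seconds for this modest frame.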
ex-otfs-ch08-02
Easy. Write down the single-iteration complexity of MMSE detection for an $M \times N$ frame. What is the key dominant term?
MMSE = 2 × 2D-FFT + element-wise.
Components
2D FFT: $O(MN \log MN)$ ops each. Element-wise Wiener weighting: $O(MN)$ ops. Total: $O(MN \log MN)$ ops.
Dominant term
The two 2D FFTs dominate; the element-wise stage is roughly a tenth of the FFT cost. MMSE is essentially "the cost of two 2D FFTs", comparable to the cost of the OTFS demodulator itself.
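A quick flop-count sketch, under the usual $5\,MN\log_2 MN$ real-flop estimate for an FFT (the constants here are assumptions, not values from the chapter):

```python
import math

def mmse_flops(M, N):
    """Rough flop count for one-shot DD-domain MMSE: two 2D FFTs at
    ~5*MN*log2(MN) real flops each (assumed split-radix estimate), plus
    ~6*MN flops for the element-wise Wiener weighting."""
    mn = M * N
    fft = 5 * mn * math.log2(mn)
    return 2 * fft + 6 * mn

M, N = 32, 32                         # hypothetical frame size
total = mmse_flops(M, N)
fft_share = 2 * 5 * M * N * math.log2(M * N) / total
print(f"{total:.2e} flops, FFT share {fft_share:.0%}")
```

For this frame size the two FFTs account for well over 90% of the total, matching the "cost of two 2D FFTs" characterization.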
ex-otfs-ch08-03
Medium. Derive the DD-domain LMMSE expression from the general LMMSE formula, using the diagonalization of $\mathbf{H}_{\mathrm{eff}}$.
Use the diagonalization $\mathbf{H}_{\mathrm{eff}} = \mathbf{F}\,\boldsymbol{\Lambda}\,\mathbf{F}^H$ with $\boldsymbol{\Lambda}$ diagonal.
Commute $\mathbf{F}$ with the diagonal factors.
Start from LMMSE
$\hat{\mathbf{x}} = \mathbf{H}^H \left(\mathbf{H}\mathbf{H}^H + \sigma^2 \mathbf{I}\right)^{-1} \mathbf{y}$, writing $\mathbf{H} \equiv \mathbf{H}_{\mathrm{eff}}$.
Substitute diagonalization
$\mathbf{H}\mathbf{H}^H = \mathbf{F}\boldsymbol{\Lambda}\mathbf{F}^H \mathbf{F}\boldsymbol{\Lambda}^H\mathbf{F}^H = \mathbf{F}\,|\boldsymbol{\Lambda}|^2\,\mathbf{F}^H$. Also $\sigma^2\mathbf{I} = \mathbf{F}\,(\sigma^2\mathbf{I})\,\mathbf{F}^H$. So $\mathbf{H}\mathbf{H}^H + \sigma^2\mathbf{I} = \mathbf{F}\left(|\boldsymbol{\Lambda}|^2 + \sigma^2\mathbf{I}\right)\mathbf{F}^H$.
Inverse
Its inverse is $\mathbf{F}\left(|\boldsymbol{\Lambda}|^2 + \sigma^2\mathbf{I}\right)^{-1}\mathbf{F}^H$, where the inverse in parentheses is element-wise (diagonal).
Combine
$\hat{\mathbf{x}} = \mathbf{F}\boldsymbol{\Lambda}^H\mathbf{F}^H \, \mathbf{F}\left(|\boldsymbol{\Lambda}|^2 + \sigma^2\mathbf{I}\right)^{-1}\mathbf{F}^H \mathbf{y} = \mathbf{F}\,\frac{\boldsymbol{\Lambda}^*}{|\boldsymbol{\Lambda}|^2 + \sigma^2}\,\mathbf{F}^H \mathbf{y}$: an FFT, an element-wise Wiener weighting, and an inverse FFT.
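The derivation can be verified numerically: for a channel diagonalized by the DFT, the general matrix LMMSE and the FFT-weight-iFFT form agree to machine precision. A minimal sketch, using an assumed circulant stand-in for $\mathbf{H}_{\mathrm{eff}}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma2 = 16, 0.1

# Channel diagonalized by the unitary DFT: H = F^H diag(lam) F.
lam = rng.normal(size=n) + 1j * rng.normal(size=n)
F = np.fft.fft(np.eye(n)) / np.sqrt(n)          # unitary DFT matrix
H = F.conj().T @ np.diag(lam) @ F

y = rng.normal(size=n) + 1j * rng.normal(size=n)

# General LMMSE: H^H (H H^H + sigma^2 I)^{-1} y
x_gen = H.conj().T @ np.linalg.solve(H @ H.conj().T + sigma2 * np.eye(n), y)

# Diagonalized form: FFT -> element-wise Wiener weight -> inverse FFT
w = lam.conj() / (np.abs(lam) ** 2 + sigma2)
x_dd = F.conj().T @ (w * (F @ y))

print("max |difference| =", np.abs(x_gen - x_dd).max())
```

The element-wise form replaces an $O(n^3)$ matrix inverse with two FFTs and a pointwise division.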
ex-otfs-ch08-04
Medium. Show that for a single-path channel ($P = 1$), MMSE in the DD domain reduces to element-wise Wiener filtering with constant (equal to $|h_1|$) magnitude of the channel eigenvalues $\lambda_k$.
A single path ($P = 1$) gives a single delay-Doppler shift in $\mathbf{H}_{\mathrm{eff}}$.
Eigenvalues of a permutation are unit-modulus roots of unity.
Single path
$\mathbf{H}_{\mathrm{eff}}$ is a permutation-plus-diagonal-phase structure times the single tap $h_1$. Its eigenvalues are $\lambda_k = h_1 e^{j\theta_k}$ with unit-modulus phase factors $e^{j\theta_k}$.
Magnitude
$|\lambda_k| = |h_1|$ for all $k$: constant.
MMSE
Element-wise multiply by $\lambda_k^{*}/(|h_1|^2 + \sigma^2)$. The Wiener gain magnitude is the same for all cells.
Interpretation
Single-path channel: all DD cells experience the same SNR, so MMSE is a uniform Wiener filter and the BER is the same across cells. The multi-path case has cell-dependent $|\lambda_k|$, creating per-cell SINR variability.
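A minimal numerical check, modeling the single-path DD channel as a tap times a cyclic shift (an assumed stand-in for the exact permutation structure):

```python
import numpy as np

n = 8
h1 = 0.7 - 0.3j                       # single complex tap (hypothetical)

# Single-path channel modeled as tap times a cyclic-shift permutation.
Pi = np.roll(np.eye(n), 1, axis=1)
H = h1 * Pi

lam = np.linalg.eigvals(H)
print(np.abs(lam))                    # every eigenvalue has modulus |h1|

# The element-wise Wiener gain magnitude is identical in every cell:
sigma2 = 0.1
gain = np.abs(lam.conj() / (np.abs(lam) ** 2 + sigma2))
print(gain)
```

The eigenvalues of a permutation are roots of unity, so scaling by $h_1$ makes every modulus exactly $|h_1|$.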
ex-otfs-ch08-05
Medium. An OTFS system uses an $M \times N$ frame, a $P$-path channel, and 5 LCD iterations. Compute the total flop count for one frame's detection.
Per iteration: 2 × 2D FFT + sparse mult + soft-quantize.
Per iteration
2D FFT + iFFT: $O(MN \log MN)$ ops. Sparse multiply ($P$ taps per cell): $O(PMN)$ ops. Soft-quantize ($Q = 4$, QPSK): $O(MN)$ ops. Per iteration: $O(MN(\log MN + P))$ ops.
Total
5 iterations: $5 \cdot O(MN(\log MN + P))$ ops.
Compare
MMSE alone: $O(MN \log MN)$ ops (one "iteration"). LCD (5 iterations) costs roughly $5\times$ MMSE. MP (10 iterations) is comparable. LCD and MP are in the same ballpark; MMSE is 3-5× faster. LCD is typically chosen for predictable latency.
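A budget sketch with hypothetical parameters and assumed per-stage constants (none of the numbers below are from the chapter):

```python
import math

def lcd_flops(M, N, P, iters, fft_c=5, tap_c=8, quant_c=12):
    """Per-frame LCD flop estimate. The per-stage constants (fft_c per
    point per log2 factor, tap_c per path per cell, quant_c per cell)
    are illustrative assumptions."""
    mn = M * N
    per_iter = 2 * fft_c * mn * math.log2(mn) + tap_c * P * mn + quant_c * mn
    return iters * per_iter

M, N, P = 32, 32, 6                       # hypothetical system parameters
total = lcd_flops(M, N, P, iters=5)
mmse = 2 * 5 * M * N * math.log2(M * N)   # one-shot MMSE: two 2D FFTs
print(f"LCD(5): {total:.2e} flops, ~{total / mmse:.1f}x one-shot MMSE")
```

With these constants the LCD-to-MMSE ratio comes out somewhat above $5\times$ because the sparse multiply grows with $P$; for small $P$ it approaches the $5\times$ figure in the text.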
ex-otfs-ch08-06
Medium. A channel has $P$ paths with i.i.d. gains. Show that the post-MMSE SINR at any DD cell has expected value equal to the SNR (unit channel power), independent of $P$.
Normalize $\mathbb{E}[|h_p|^2] = 1/P$. The total channel power is $\sum_p \mathbb{E}[|h_p|^2] = 1$.
Check how the eigenvalue statistics depend on $P$.
Channel power
$\sum_p |h_p|^2$ is a sum of $P$ i.i.d. exponentials, each with mean $1/P$. Mean of the total: 1. Variance of the total: $1/P$.
Eigenvalue magnitude squared
$\lambda_k = \sum_p h_p e^{j\phi_{k,p}}$, a sum of $P$ independent complex Gaussian variables with total variance 1: $\lambda_k \sim \mathcal{CN}(0, 1)$. Hence $|\lambda_k|^2$ is exponential with mean 1.
MMSE SINR
$\mathrm{SINR}_k = |\lambda_k|^2/\sigma^2$. Expected value: $\mathbb{E}[\mathrm{SINR}_k] = 1/\sigma^2 = \mathrm{SNR}$. Same as single-path, but with Rayleigh fading (diversity 1 regardless of $P$).
Interpretation
MMSE sees an effective single-tap Rayleigh channel, regardless of $P$. The "diversity $P$" advantage of OTFS is entirely in the detector's nonlinear processing (MP or LCD iterations), not in the channel's post-MMSE statistics.
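A Monte Carlo check that $|\lambda_k|^2$ is Exp(1) regardless of $P$; the per-cell model below (independent uniform phase per path) is a simplifying assumption:

```python
import numpy as np

rng = np.random.default_rng(1)
trials = 200_000

def eig_gain_sq(P):
    """|lambda|^2 at one DD cell: P i.i.d. CN(0, 1/P) taps, each rotated
    by an independent uniform phase (simplified per-cell model)."""
    h = rng.normal(size=(trials, P)) + 1j * rng.normal(size=(trials, P))
    h *= np.sqrt(1 / (2 * P))                      # E|h_p|^2 = 1/P
    phase = np.exp(1j * rng.uniform(0, 2 * np.pi, size=(trials, P)))
    lam = (h * phase).sum(axis=1)
    return np.abs(lam) ** 2

for P in (1, 2, 8):
    g = eig_gain_sq(P)
    print(f"P={P}: mean={g.mean():.3f}  var={g.var():.3f}")
```

Mean and variance both come out near 1 for every $P$, consistent with the Exp(1) claim.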
ex-otfs-ch08-07
Hard. Derive the Gaussian BP message update for the DD factor graph. At a factor node (a DD observation $y_k$), given incoming messages $m_{j \to k}(x_j)$ from the other variables, compute the factor-to-variable message $m_{k \to i}(x_i)$ for variable $x_i$.
Gaussian messages: $m_{j \to k}(x_j) = \mathcal{CN}(x_j;\, \mu_{j \to k}, \nu_{j \to k})$.
Integrate over variables, keep the one of interest.
Factor node likelihood
$f_k(\{x_j\}) \propto \exp\!\left(-\frac{1}{\sigma^2}\Big|y_k - \sum_j H_{k,j}\, x_j\Big|^2\right)$.
Marginalize over $\{x_j\}_{j \neq i}$
$m_{k \to i}(x_i) \propto \int f_k(\{x_j\}) \prod_{j \neq i} m_{j \to k}(x_j)\, dx_j$. Each incoming message is Gaussian: $m_{j \to k}(x_j) = \mathcal{CN}(x_j;\, \mu_{j \to k}, \nu_{j \to k})$.
Gaussian integral
The integrand is jointly Gaussian in $\{x_j\}_{j \neq i}$; integrating gives another Gaussian. The result is $m_{k \to i}(x_i) = \mathcal{CN}(x_i;\, \mu_{k \to i}, \nu_{k \to i})$ with $\mu_{k \to i} = \frac{1}{H_{k,i}}\big(y_k - \sum_{j \neq i} H_{k,j}\,\mu_{j \to k}\big)$ and $\nu_{k \to i} = \frac{1}{|H_{k,i}|^2}\big(\sigma^2 + \sum_{j \neq i} |H_{k,j}|^2\,\nu_{j \to k}\big)$.
Interpretation
The factor says: "Given the messages from my other neighbors, the variable should be $\mu_{k \to i}$ with uncertainty $\nu_{k \to i}$." The variable node aggregates this with the messages from its other incident factors.
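The derived update transcribes directly into code; the example values below are hand-checkable (symbol names follow the derivation, not any chapter code):

```python
def factor_to_variable(y_k, H_row, mu_in, nu_in, sigma2, i):
    """Gaussian BP message from factor y_k to variable i.
    H_row: channel coefficients H[k, j]; mu_in/nu_in: incoming
    variable-to-factor message means and variances (entry i unused)."""
    interf = sum(H_row[j] * mu_in[j] for j in range(len(H_row)) if j != i)
    var = sum(abs(H_row[j]) ** 2 * nu_in[j] for j in range(len(H_row)) if j != i)
    mu = (y_k - interf) / H_row[i]
    nu = (sigma2 + var) / abs(H_row[i]) ** 2
    return mu, nu

# Hand-checkable example: y = x0 + x1 + noise, message about x0.
# Expected: mu = (1.0 - 0.2)/1 = 0.8, nu = (0.05 + 0.1)/1 = 0.15.
mu, nu = factor_to_variable(1.0, [1.0, 1.0], [0.0, 0.2], [0.0, 0.1],
                            sigma2=0.05, i=0)
print(mu, nu)
```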
ex-otfs-ch08-08
Hard. The MP algorithm converges to a fixed point. Show that the fixed-point equation is $\mathbf{x}^{\star} = \eta\big(\mathbf{x}^{\star} + \mathbf{H}^H(\mathbf{y} - \mathbf{H}\mathbf{x}^{\star})\big)$, where $\eta(\cdot)$ is a soft-thresholding operator; this is precisely the AMP iteration.
At convergence, messages stabilize: $m^{(t+1)}_{j \to k} = m^{(t)}_{j \to k}$.
AMP's update is matched-filter + soft-threshold.
Converged messages
At the fixed point, $\mu^{(t+1)}_{j \to k} = \mu^{(t)}_{j \to k}$ for all incident factors. Equivalently, the posterior at each variable equals the product of all its factor messages.
AMP form
The converged update becomes $\mathbf{x}^{(t+1)} = \eta\big(\mathbf{x}^{(t)} + \mathbf{H}^H \mathbf{z}^{(t)}\big)$ with residual $\mathbf{z}^{(t)} = \mathbf{y} - \mathbf{H}\mathbf{x}^{(t)} + b_t\, \mathbf{z}^{(t-1)}$, where $\eta(\cdot)$ is a soft-thresholding (or QAM-mapping) operator and $b_t\, \mathbf{z}^{(t-1)}$ is the "Onsager" correction term accounting for self-feedback.
Interpretation
MP is essentially AMP (approximate message passing) specialized to the DD-channel structure. AMP has been extensively studied in compressive sensing and MIMO detection; OTFS detection is a special case with structured (sparse) matrices.
Practical impact
Recognizing this connection gives access to AMP's convergence analysis (state evolution) and its asymptotic optimality results. The OTFS-specific convergence bounds in Raviteja et al. (2018) are AMP results specialized to the DD factor graph.
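A compact sketch of the AMP form above, using an i.i.d. Gaussian matrix as a stand-in for the structured DD channel and tanh as the soft-threshold $\eta$ for a BPSK prior (all parameters illustrative, not from the text):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, sigma = 64, 128, 0.05

A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = rng.choice([-1.0, 1.0], size=n)
y = A @ x_true + sigma * rng.normal(size=m)

x = np.zeros(n)
z = y.copy()
for _ in range(20):
    tau2 = np.sum(z ** 2) / m          # empirical effective-noise variance
    r = x + A.T @ z                    # matched filter + current estimate
    x = np.tanh(r / tau2)              # eta: posterior mean for BPSK prior
    onsager = (n / m) * np.mean((1 - x ** 2) / tau2) * z
    z = y - A @ x + onsager            # residual with Onsager correction

errors = int(np.sum(np.sign(x) != x_true))
print("sign errors after 20 iterations:", errors)
```

The Onsager term uses the average denoiser derivative, exactly the $b_t$ correction in the fixed-point form; at this high SNR the iteration settles onto the correct signs.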
ex-otfs-ch08-09
Medium. A receiver uses LCD with 2 iterations. At SNR = 15 dB, with diversity order $P$, a certain small BER is measured. What BER should be expected at SNR = 10 dB? (Use the diversity argument.)
Diversity $P$: BER $\propto \mathrm{SNR}^{-P}$.
Drop in SNR: 5 dB = a factor of $10^{0.5} \approx 3.16$.
Diversity scaling
BER $\propto \mathrm{SNR}^{-P}$, so a $3.16\times$ reduction in SNR multiplies the BER by $3.16^{P} = 10^{P/2}$.
Compute
The naive extrapolation can exceed the physical maximum; BER is bounded by 1/2 for binary symbols, so the extrapolation breaks down.
Real scaling
The simple scaling holds only in the high-SNR regime. At low SNR, the BER saturates and the true value lies below the naive extrapolation: the detector is noise-limited, not diversity-limited.
Moral
Diversity only helps in the high-SNR regime where BER is small. At low SNR, outer coding is essential (not just detector diversity). This is why IDD is the standard for low-SNR operation.
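The extrapolation arithmetic, with hypothetical values for the elided BER and diversity order:

```python
# Hypothetical numbers: BER 1e-5 at 15 dB and diversity order P = 4
# (illustrative values, not from the text).
ber_15dB = 1e-5
P = 4

snr_drop = 10 ** (5 / 10)                  # 5 dB = factor of ~3.16 in SNR
ber_10dB_naive = ber_15dB * snr_drop ** P  # BER ~ SNR^-P: multiply by 10^(P/2)

print(f"naive extrapolation at 10 dB: {ber_10dB_naive:.4f}")
# A further 10 dB drop would multiply this by 10^P = 1e4, giving 10 > 1/2:
# the power-law extrapolation has clearly left its domain of validity.
```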
ex-otfs-ch08-10
Medium. Argue that MP and LCD produce different outputs under finite SNR, even though both asymptotically approach ML. At what SNR is the gap between MP and LCD typically < 0.5 dB?
LCD's soft quantization discards extrinsic information.
MP preserves full posterior.
Finite-SNR behavior
MP's Gaussian messages retain full information. LCD's soft-quantization reduces messages to QAM expected values, losing some uncertainty information between iterations.
Gap at high SNR
At high SNR, both methods converge to ML and the gap vanishes. The asymptotic diversity order $P$ is identical for both.
Gap at moderate SNR
At moderate SNR, MP is typically about 0.5 dB better than LCD. At low SNR, the gap can be 1-2 dB.
Practical choice
For high-SNR operation (eMBB at 10+ dB), LCD's simpler structure makes it the preferred choice. For low-SNR operation (mMTC at 0 dB), MP offers measurable gains. In practice, both are accompanied by outer LDPC coding which largely closes the gap via iterative decoding.
ex-otfs-ch08-11
Hard. Show that the MP algorithm's BER in the high-SNR regime scales as $\mathrm{SNR}^{-P}$. Provide a sketch of the proof using pairwise error probability.
Pairwise error: probability that one specific wrong symbol is preferred.
MP's PEP matches ML.
PEP
Pairwise error probability at a given SNR: $\mathrm{PEP}(\mathbf{x} \to \mathbf{x}') = \mathbb{E}_{\mathbf{H}}\!\left[Q\!\left(\sqrt{\tfrac{\mathrm{SNR}}{2}\,\|\mathbf{H}(\mathbf{x} - \mathbf{x}')\|^2}\right)\right]$.
Conditioned on channel
$\mathrm{PEP}(\mathbf{H}) = Q\!\left(\sqrt{\tfrac{\mathrm{SNR}}{2}\,\|\mathbf{H}(\mathbf{x} - \mathbf{x}')\|^2}\right)$. This depends on $\|\mathbf{H}(\mathbf{x} - \mathbf{x}')\|^2$, which is a weighted combination of the path gains in the 2D DFT basis.
Channel averaging
For $P$ i.i.d. complex Gaussian paths, $\|\mathbf{H}(\mathbf{x} - \mathbf{x}')\|^2$ is chi-squared with $2P$ degrees of freedom. Averaging the $Q$-function over this distribution: $\mathrm{PEP} \propto \mathrm{SNR}^{-P}$ at large SNR.
Union bound
BER $\leq \sum_{\mathbf{x}' \neq \mathbf{x}} \mathrm{PEP}(\mathbf{x} \to \mathbf{x}')$. Since the minimum-distance pairs dominate the sum for large SNR, BER $\propto \mathrm{SNR}^{-P}$.
MP matching ML
At convergence, MP's decisions match ML's up to tie-breaking effects. The asymptotic BER slope is the same $\mathrm{SNR}^{-P}$.
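The averaging step can be checked numerically: integrate the conditional $Q$-function against a unit-mean chi-squared fade with $2P$ degrees of freedom and measure the log-log slope between two high SNRs (parameters are illustrative):

```python
import math
import numpy as np

P = 2
g = np.linspace(0.0, 8.0, 400_001)               # fade power grid
h = g[1] - g[0]
# Gamma(shape=P, scale=1/P) density: unit-mean chi-squared with 2P DoF.
pdf = g ** (P - 1) * np.exp(-P * g) * P ** P / math.factorial(P - 1)

erfc = np.vectorize(math.erfc)

def pep(snr):
    f = 0.5 * erfc(np.sqrt(snr * g)) * pdf       # Q(sqrt(2*snr*g)) * pdf
    return h * (f.sum() - 0.5 * (f[0] + f[-1]))  # trapezoid rule

slope = math.log10(pep(100) / pep(1000))         # BER decades per SNR decade
print(f"measured diversity slope: {slope:.2f} (expect ~{P})")
```

Note $Q(x) = \tfrac{1}{2}\mathrm{erfc}(x/\sqrt{2})$, so $Q(\sqrt{2\,\mathrm{snr}\,g}) = \tfrac{1}{2}\mathrm{erfc}(\sqrt{\mathrm{snr}\,g})$; the measured slope approaches $P$ as both SNRs grow.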
ex-otfs-ch08-12
Medium. An IDD scheme uses 3 outer iterations. The detector (LCD) runs 3 iterations per pass; the LDPC decoder runs 5 iterations per pass. For an $M \times N$ frame, what is the total compute per frame?
Per outer iter: LCD pass + LDPC pass.
Per outer iter
LCD (3 inner iterations): $3 \cdot O(MN(\log MN + P))$ ops. LDPC (5 inner iterations on a rate-1/2 code): proportional to the number of graph edges (per-bit variable-to-check messages). Total per outer iteration: the sum of the two.
3 outer iterations
3 outer iterations: $3\times$ the per-outer-iteration cost, in ops per frame.
Assessment
The total ops per frame including IDD, at 100 frames/sec, gives an ops-per-second figure well within a modern SoC. IDD is realtime-feasible for 5G NR.
ex-otfs-ch08-13
Hard. Show that for a $P$-path channel with i.i.d. taps, the ergodic channel capacity is $\mathbb{E}\!\left[\log_2(1 + |\lambda|^2\,\mathrm{SNR})\right]$ with $|\lambda|^2 \sim \mathrm{Exp}(1)$: the same as a single-tap channel.
Capacity of $y = hx + w$ with $w \sim \mathcal{CN}(0, \sigma^2)$: $C = \log_2\!\left(1 + |h|^2/\sigma^2\right)$.
For OTFS, the per-cell effective channel is the eigenvalue $\lambda_k$.
Per-cell capacity
Each DD cell has effective gain $|\lambda_k|^2$ with $\mathbb{E}[|\lambda_k|^2] = 1$ (since the total channel power is 1). Per-cell capacity: $C_k = \log_2\!\left(1 + |\lambda_k|^2\,\mathrm{SNR}\right)$.
Frame capacity
Total frame capacity: $C = \sum_k \log_2\!\left(1 + |\lambda_k|^2\,\mathrm{SNR}\right)$. Per cell, per DoF: $\mathbb{E}[C_k] = \mathbb{E}\!\left[\log_2(1 + |\lambda_k|^2\,\mathrm{SNR})\right]$ with $|\lambda_k|^2 \sim \mathrm{Exp}(1)$.
Compare single-tap
Single-tap Rayleigh: same formula. Identical capacity.
Interpretation
OTFS does not improve ergodic capacity over OFDM. Both have the same total rate on the same channel. What OTFS improves is outage capacity: the fraction of realizations with rate above a threshold. For rare deep fades, OTFS's diversity-$P$ reliability wins without changing the average.
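A Monte Carlo check that the ergodic capacity is the same for $P = 1$ and $P = 8$ under the per-cell model above (the SNR value is illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
trials, snr = 400_000, 10.0            # snr = 10 corresponds to 10 dB

def ergodic_capacity(P):
    """E[log2(1 + |lambda|^2 * snr)] with lambda the sum of P i.i.d.
    CN(0, 1/P) taps (unit total power), per the derivation above."""
    h = rng.normal(size=(trials, P)) + 1j * rng.normal(size=(trials, P))
    lam = h.sum(axis=1) * np.sqrt(1 / (2 * P))
    return np.mean(np.log2(1 + np.abs(lam) ** 2 * snr))

c1, c8 = ergodic_capacity(1), ergodic_capacity(8)
print(f"P=1: {c1:.3f} bits, P=8: {c8:.3f} bits")
```

Both estimates agree to Monte Carlo precision: the eigenvalue distribution, and hence the ergodic capacity, does not depend on $P$.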
ex-otfs-ch08-14
Medium. A receiver uses sphere decoding over a small OTFS frame (small $MN$, QPSK). Estimate the expected number of candidates examined at SNR = 20 dB.
Sphere decoding examines only the candidates within a sphere of radius $r$ around the received point.
At high SNR, fewer candidates survive.
Brute-force size
Brute force: $4^{MN}$ candidates for a QPSK frame of $MN$ symbols.
Sphere reduction at high SNR
At 20 dB, only a small fraction of the candidates falls within the sphere. Empirically, sphere decoding visits orders of magnitude fewer candidates than brute force.
Operational regime
Sphere decoding is feasible for small frames but not at practical OTFS sizes. For OTFS it is a niche choice: typical OTFS frames are larger than the sphere-decoding sweet spot.
ex-otfs-ch08-15
Challenge. (Research open problem.) Derive a MAP detector for OTFS that exploits both the sparse channel structure AND a known prior on the data (e.g., from a low-rate outer code). What is the complexity, and what is the BER benefit over LCD?
MAP = detection with data prior.
Can use the prior in the message-passing update.
MAP formulation
$\hat{\mathbf{x}}_{\mathrm{MAP}} = \arg\max_{\mathbf{x}}\, p(\mathbf{y} \mid \mathbf{x})\, p(\mathbf{x})$. With a data prior (e.g., encoding constraints), the MAP problem is a constrained ML.
Integration with MP
Use $p(x_i)$ as a prior message at the variable nodes. Per-variable belief: $b(x_i) \propto p(x_i) \prod_k m_{k \to i}(x_i)$. Convergence conditions depend on the code structure.
BER benefit
For a rate-1/2 LDPC outer code with MAP detection, BER typically improves by 1-2 dB over LCD at moderate SNR. At low SNR, the MAP benefit is larger (several dB) because the prior corrects cells that the channel alone cannot resolve.
Complexity
MAP detection integrated with LDPC decoding has the same order of complexity as IDD. The difference is that MAP uses the marginal posterior at each bit, while IDD uses the extrinsic LLR; EXIT analysis shows that the two converge to the same fixed point asymptotically.
Open problem
A fully joint MAP detector-decoder that exploits both the DD channel sparsity AND the LDPC graph structure remains an active research topic. Current implementations use the simpler IDD loop (detector and decoder operate independently with message exchange), which is sub-optimal but close.
ex-otfs-ch08-16
Medium. The cross-domain detector fuses TF-domain and DD-domain outputs via reliability-weighted LLR averaging. Write down the combination rule for an LLR $L_{\mathrm{TF}}$ from the TF detector and $L_{\mathrm{DD}}$ from the DD detector.
If the LLRs are Gaussian with variances $\sigma_{\mathrm{TF}}^2, \sigma_{\mathrm{DD}}^2$, the MAP combination is an MMSE (inverse-variance) weighting.
LLR combination
Treating each LLR as a noisy observation of the true bit $b \in \{\pm 1\}$ under the Gaussian approximation: $L = L_{\mathrm{TF}} + L_{\mathrm{DD}}$ (independent observations combine additively in log-likelihood).
With reliability weighting
If the LLR magnitudes are unreliable (possibly biased), reweight: $L = w_{\mathrm{TF}} L_{\mathrm{TF}} + w_{\mathrm{DD}} L_{\mathrm{DD}}$ with $w_i \propto 1/\sigma_i^2$.
Interpretation
Cross-domain combination preserves the information from both detectors, weighted by their reliability. When one domain fails (e.g., TF in a deep fade), its LLR is suppressed by the small weight, and the other domain carries the decision.
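Both combination rules in a minimal sketch (function and variable names are illustrative, not from the text):

```python
def combine_llrs(llr_tf, llr_dd):
    """Independent, well-calibrated LLRs combine additively."""
    return llr_tf + llr_dd

def combine_llrs_weighted(llr_tf, llr_dd, var_tf, var_dd):
    """Reliability weighting: down-weight the noisier (higher-variance) LLR."""
    w_tf = (1 / var_tf) / (1 / var_tf + 1 / var_dd)
    w_dd = 1 - w_tf
    return w_tf * llr_tf + w_dd * llr_dd

# Additive rule: 2.0 + (-0.5) = 1.5
print(combine_llrs(2.0, -0.5))
# TF detector in a deep fade (variance 10) barely moves the decision,
# so the combined LLR stays close to the DD value of 2.0:
print(combine_llrs_weighted(-0.5, 2.0, 10.0, 0.5))
```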