MMSE (Regularized ZF) Receiver

The Best of Both Worlds

MRC maximizes signal power but ignores interference. ZF eliminates interference but amplifies noise. The MMSE receiver — also called regularized ZF — strikes the optimal balance: it minimizes the total mean squared error $\mathbb{E}[\|\hat{\mathbf{x}} - \mathbf{x}\|^2]$, jointly accounting for both interference and noise.

The MMSE filter reduces to MRC at low SNR (where noise dominates) and to ZF at high SNR (where interference dominates), smoothly interpolating between the two extremes. This makes it the uniformly best linear receiver across all operating regimes.

Definition:

MMSE (Regularized ZF) Receiver

The MMSE receiver for the uplink model $\mathbf{y} = \mathbf{H}\mathbf{x} + \mathbf{w}$ with $\mathbb{E}[\mathbf{x}\mathbf{x}^H] = P\mathbf{I}$ is

$$\mathbf{G}^{\text{MMSE}} = \left(\mathbf{H}\mathbf{H}^{H} + \frac{\sigma^2}{P}\mathbf{I}_{N_t}\right)^{-1}\mathbf{H},$$

or equivalently, using the matrix inversion lemma:

$$\mathbf{G}^{\text{MMSE}} = \mathbf{H}\left(\mathbf{H}^{H}\mathbf{H} + \frac{\sigma^2}{P}\mathbf{I}_{K}\right)^{-1}.$$

The soft estimate is $\hat{\mathbf{x}}^{\text{MMSE}} = (\mathbf{G}^{\text{MMSE}})^H \mathbf{y}$.

The second form is computationally preferred when $K < N_t$: it inverts a $K \times K$ matrix rather than an $N_t \times N_t$ one. The regularization term $\frac{\sigma^2}{P}\mathbf{I}$ is precisely what distinguishes MMSE from ZF — it prevents the noise enhancement that arises from inverting a near-singular Gram matrix.
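The equivalence of the two forms follows from the push-through identity behind the matrix inversion lemma. A quick NumPy check confirms it numerically (the dimensions, powers, and random seed here are illustrative assumptions, not values from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
Nt, K = 8, 4           # base-station antennas and users (illustrative)
P, sigma2 = 1.0, 0.1   # per-user power and noise power (assumed)
reg = sigma2 / P

# i.i.d. complex Gaussian channel; H is Nt x K
H = (rng.standard_normal((Nt, K)) + 1j * rng.standard_normal((Nt, K))) / np.sqrt(2)

# Form 1: invert the Nt x Nt matrix H H^H + (sigma^2/P) I
G1 = np.linalg.solve(H @ H.conj().T + reg * np.eye(Nt), H)
# Form 2: invert the K x K matrix H^H H + (sigma^2/P) I
G2 = H @ np.linalg.inv(H.conj().T @ H + reg * np.eye(K))

print(np.allclose(G1, G2))  # True: the two forms coincide
```

With $K = 4$ and $N_t = 8$ the saving is modest, but in massive MIMO ($N_t$ in the hundreds) inverting the $K \times K$ Gram matrix is the only practical option.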

Theorem: MMSE Is the Optimal Linear Receiver

Among all linear receivers $\hat{\mathbf{x}} = \mathbf{A}^H \mathbf{y}$, the MMSE receiver minimizes the mean squared error:

$$\mathbf{G}^{\text{MMSE}} = \arg\min_{\mathbf{A}} \mathbb{E}\left[\|\mathbf{A}^H \mathbf{y} - \mathbf{x}\|^2\right].$$

The minimum MSE for user $k$ is

$$\text{MMSE}_k = P_k - P_k^2\, \mathbf{h}_k^H \left(\mathbf{H} \mathbf{P} \mathbf{H}^{H} + \sigma^2 \mathbf{I}\right)^{-1} \mathbf{h}_k,$$

where $\mathbf{P} = \text{diag}(P_1, \ldots, P_K)$; in the equal-power case, $P_k = P$ for all $k$.

The MMSE receiver is the linear MMSE (LMMSE) estimator from estimation theory applied to the linear model $\mathbf{y} = \mathbf{H}\mathbf{x} + \mathbf{w}$. The connection to FSI Ch. 12 is direct: $\mathbf{H}$ plays the role of the observation matrix, $\mathbf{x}$ is the unknown parameter vector, and the LMMSE solution uses the prior covariance $\mathbb{E}[\mathbf{x}\mathbf{x}^H] = P\mathbf{I}$.
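In the equal-power case, the theorem's per-user MSE can be cross-checked against the diagonal of the standard LMMSE error covariance $\sigma^2(\mathbf{H}^H\mathbf{H} + \frac{\sigma^2}{P}\mathbf{I})^{-1}$. A numerical sketch (dimensions and seed are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
Nt, K = 8, 4           # antennas and users (illustrative)
P, sigma2 = 1.0, 0.1   # equal per-user power, noise power (assumed)
H = (rng.standard_normal((Nt, K)) + 1j * rng.standard_normal((Nt, K))) / np.sqrt(2)

# Per-user MMSE from the theorem, with P_k = P for all k
Cy_inv = np.linalg.inv(P * H @ H.conj().T + sigma2 * np.eye(Nt))
mmse = np.array([P - P**2 * (H[:, k].conj() @ Cy_inv @ H[:, k]).real
                 for k in range(K)])

# Same quantity as the diagonal of the LMMSE error covariance
Ce = sigma2 * np.linalg.inv(H.conj().T @ H + (sigma2 / P) * np.eye(K))
print(np.allclose(mmse, np.diag(Ce).real))  # True
```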

Theorem: MMSE SINR Expression

With equal per-user power $P$, the post-detection SINR for user $k$ with the MMSE receiver is

$$\text{SINR}_k^{\text{MMSE}} = P\, \mathbf{h}_k^H \left(\sum_{j \neq k} P\, \mathbf{h}_j \mathbf{h}_j^H + \sigma^2 \mathbf{I}\right)^{-1} \mathbf{h}_k.$$

Equivalently,

$$\text{SINR}_k^{\text{MMSE}} = \frac{1}{\frac{\sigma^2}{P}\left[\left(\mathbf{H}^{H}\mathbf{H} + \frac{\sigma^2}{P}\mathbf{I}\right)^{-1}\right]_{kk}} - 1.$$

The MMSE SINR has a beautiful structure: the interference-plus-noise covariance $\sum_{j \neq k} P \mathbf{h}_j \mathbf{h}_j^H + \sigma^2\mathbf{I}$ is inverted and then the signal is projected through it. This is precisely the Capon beamformer applied to the detection problem — it steers a "spatial null" toward the interference while collecting the signal.
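The two SINR expressions — the Capon-style form and the diagonal of the regularized Gram inverse (via $\text{SINR}_k = P/\text{MMSE}_k - 1$) — can be checked against each other numerically. A sketch with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(2)
Nt, K = 8, 4
P, sigma2 = 1.0, 0.1
H = (rng.standard_normal((Nt, K)) + 1j * rng.standard_normal((Nt, K))) / np.sqrt(2)

# Form 1: project the signal through the inverted interference-plus-noise covariance
sinr1 = np.empty(K)
for k in range(K):
    Hk = np.delete(H, k, axis=1)                       # interfering users' channels
    R = P * Hk @ Hk.conj().T + sigma2 * np.eye(Nt)     # interference + noise covariance
    sinr1[k] = P * (H[:, k].conj() @ np.linalg.inv(R) @ H[:, k]).real

# Form 2: diagonal of the regularized Gram inverse
Ainv = np.linalg.inv(H.conj().T @ H + (sigma2 / P) * np.eye(K))
sinr2 = 1.0 / ((sigma2 / P) * np.diag(Ainv).real) - 1.0

print(np.allclose(sinr1, sinr2))  # True
```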

Key Takeaway

MMSE interpolates between MRC and ZF. At low SNR ($P/\sigma^2 \to 0$), the regularization term dominates and MMSE reduces to MRC (matched filter). At high SNR ($P/\sigma^2 \to \infty$), the regularization vanishes and MMSE reduces to ZF. The MMSE receiver is never worse than either.
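The limiting behavior is easy to see numerically: up to a per-column scaling (which does not affect SINR), the MMSE filter's columns align with MRC as the regularization $\sigma^2/P$ grows and with ZF as it vanishes. A sketch under illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
Nt, K = 8, 4
H = (rng.standard_normal((Nt, K)) + 1j * rng.standard_normal((Nt, K))) / np.sqrt(2)

def mmse_filter(H, reg):
    """Regularized-ZF filter H (H^H H + reg I)^{-1} for a given reg = sigma^2/P."""
    return H @ np.linalg.inv(H.conj().T @ H + reg * np.eye(H.shape[1]))

def direction(G):
    """Normalize each column; filters that differ only by scaling compare equal."""
    return G / np.linalg.norm(G, axis=0)

G_mrc = H
G_zf = H @ np.linalg.inv(H.conj().T @ H)

# Low SNR (huge regularization): MMSE direction approaches MRC
print(np.allclose(direction(mmse_filter(H, 1e8)), direction(G_mrc), atol=1e-4))   # True
# High SNR (vanishing regularization): MMSE direction approaches ZF
print(np.allclose(direction(mmse_filter(H, 1e-8)), direction(G_zf), atol=1e-4))   # True
```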

Example: MMSE vs. ZF for Two Correlated Users

Consider $N_t = 4$, $K = 2$ with channel vectors $\mathbf{h}_1 = [1, 1, 0, 0]^T$ and $\mathbf{h}_2 = [1, 0.9, 0.1, 0]^T$ (highly correlated). Compare the SINR of ZF and MMSE at $\text{SNR} = P/\sigma^2 = 10$ dB.
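A sketch of this example in NumPy, taking $P = 1$ and $\sigma^2 = 0.1$ so that $P/\sigma^2 = 10$ dB:

```python
import numpy as np

# Channels from the example: Nt = 4, K = 2, highly correlated columns
H = np.array([[1.0, 1.0],
              [1.0, 0.9],
              [0.0, 0.1],
              [0.0, 0.0]])
P, sigma2 = 1.0, 0.1

gram = H.T @ H
# ZF: SINR_k = P / (sigma^2 [(H^H H)^{-1}]_kk)
sinr_zf = P / (sigma2 * np.diag(np.linalg.inv(gram)))
# MMSE: SINR_k = 1 / ((sigma^2/P) [(H^H H + sigma^2/P I)^{-1}]_kk) - 1
Ainv = np.linalg.inv(gram + (sigma2 / P) * np.eye(2))
sinr_mmse = 1.0 / ((sigma2 / P) * np.diag(Ainv)) - 1.0

print(10 * np.log10(sinr_zf))    # about -7.8 dB and -8.2 dB
print(10 * np.log10(sinr_mmse))  # about  0.8 dB and  0.0 dB
```

Because the two channels are nearly parallel, the Gram matrix is near-singular and ZF pays a heavy noise-enhancement penalty; MMSE recovers roughly 8 dB per user in this example.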

SINR Comparison: MRC vs. ZF vs. MMSE

Compare the average per-user SINR of MRC, ZF, and MMSE receivers as a function of $N_t$. Observe that MMSE always dominates, ZF suffers at low $N_t/K$ ratios, and all three converge in the massive MIMO regime.


Comparison of Linear Receivers

| Property | MRC | ZF | MMSE |
|---|---|---|---|
| Combining matrix $\mathbf{G}$ | $\mathbf{H}$ | $\mathbf{H}(\mathbf{H}^{H}\mathbf{H})^{-1}$ | $\mathbf{H}(\mathbf{H}^{H}\mathbf{H} + \frac{\sigma^2}{P}\mathbf{I})^{-1}$ |
| Interference handling | Ignores | Nulls completely | Balances suppression and noise |
| Noise enhancement | None | Can be severe | Bounded (regularized) |
| Per-symbol complexity | $\mathcal{O}(N_t K)$ | $\mathcal{O}(N_t K)$ | $\mathcal{O}(N_t K)$ |
| One-time complexity | None | $\mathcal{O}(K^{3})$ | $\mathcal{O}(K^{3})$ |
| Optimal at | Low SNR, $N_t \gg K$ | High SNR, well-conditioned $\mathbf{H}$ | All regimes |
| Massive MIMO SINR | $N_t P \beta_k / \sigma^2$ | $(N_t - K) P \beta_k / \sigma^2$ | $N_t P \beta_k / \sigma^2$ |

Common Mistake: MMSE Is Not Just 'ZF with Diagonal Loading'

Mistake:

Students sometimes view MMSE as merely adding a small constant to the Gram matrix diagonal to fix numerical issues. This trivializes the fundamental statistical optimality of MMSE.

Correction:

The regularization $\sigma^2/P \cdot \mathbf{I}$ is not arbitrary — it is the exact ratio of noise power to signal power dictated by the Bayesian LMMSE estimator. Changing this ratio degrades performance. MMSE is the unique linear receiver that minimizes the MSE, and its regularization strength adapts to the SNR.
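That the loading level $\sigma^2/P$ is exactly optimal can be checked by sweeping it: the analytic MSE of the regularized-ZF filter $\mathbf{G} = \mathbf{H}(\mathbf{H}^H\mathbf{H} + \alpha\mathbf{I})^{-1}$ is minimized at $\alpha = \sigma^2/P$ and nowhere else. A sketch (all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
Nt, K = 8, 4
P, sigma2 = 1.0, 0.5   # assumed powers; reg sweep is relative to sigma^2/P
H = (rng.standard_normal((Nt, K)) + 1j * rng.standard_normal((Nt, K))) / np.sqrt(2)

def total_mse(reg):
    """Analytic MSE E||G^H y - x||^2 of the regularized-ZF filter with loading reg."""
    G = H @ np.linalg.inv(H.conj().T @ H + reg * np.eye(K))
    E = G.conj().T @ H - np.eye(K)   # residual signal/interference error
    return P * np.linalg.norm(E)**2 + sigma2 * np.linalg.norm(G)**2

# Sweep the loading around the LMMSE-dictated value sigma^2/P
factors = np.array([0.1, 0.5, 1.0, 2.0, 10.0])
mses = [total_mse(f * sigma2 / P) for f in factors]
print(np.argmin(mses))  # 2: the minimum sits at factor 1.0, i.e. reg = sigma^2/P
```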

MMSE Receiver

The linear minimum mean squared error receiver, which minimizes $\mathbb{E}[\|\hat{\mathbf{x}} - \mathbf{x}\|^2]$. Equivalent to the LMMSE estimator from Bayesian estimation theory and to the regularized ZF (or Wiener) filter.

Related: MMSE via Matrix Inversion Lemma, MMSE (Regularized ZF) Receiver, Wiener Filter

ZF vs MMSE: Noise Enhancement Tradeoff

Constellation diagrams for ZF and MMSE receivers under well-conditioned and ill-conditioned channels. When the channel is well-conditioned, both receivers produce tight constellations. When the condition number is large, ZF noise enhancement scatters the constellation widely (variance ∝(HHH)βˆ’1\propto (\mathbf{H}^{H}\mathbf{H})^{-1}), while MMSE trades a small bias toward the origin for dramatically reduced noise variance.