Direction Finding and Angular Resolution

From Beamforming to Super-Resolution

Classical beamforming (Telecom Ch 07) resolves sources separated by at least the Rayleigh limit \Delta\theta \approx \lambda/(N_a d). This is the diffraction limit of the array aperture. For imaging applications where targets are closely spaced or where the array is small, we need methods that go beyond this limit.

Subspace methods (MUSIC, ESPRIT) exploit the eigen-structure of the data covariance to achieve super-resolution — resolving sources separated by much less than the Rayleigh limit. These methods connect directly to the spectral analysis of \mathbf{A}^{H}\mathbf{A} in the imaging context (Ch 07.2, Ch 13.4).

Definition: Classical (Bartlett) Beamformer

For a ULA with N_a elements at spacing d, receiving K narrowband sources, the classical beamformer scans the steering vector \mathbf{a}(\theta) across angles:

P_{\text{BF}}(\theta) = \mathbf{a}^{H}(\theta)\,\hat{\mathbf{R}}\,\mathbf{a}(\theta),

where \hat{\mathbf{R}} = \frac{1}{L}\sum_{\ell=1}^{L}\mathbf{x}_\ell\mathbf{x}_\ell^H is the sample covariance from L snapshots.

The resolution is limited by the array factor: \Delta\theta \approx 0.886\lambda/(N_a d) (Rayleigh criterion for a uniform taper).
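The scan above is a few lines of NumPy. A minimal sketch; the simulation scenario (one 10 dB source at 10°, N_a = 16, d = λ/2) is an illustrative assumption, not from the text:

```python
import numpy as np

def steering(theta, n_a, d_over_lambda=0.5):
    """ULA steering vector for angle theta (radians from broadside)."""
    return np.exp(2j * np.pi * d_over_lambda * np.arange(n_a) * np.sin(theta))

def bartlett_spectrum(R, thetas, d_over_lambda=0.5):
    """P_BF(theta) = a^H(theta) R a(theta), scanned over a grid of angles."""
    n_a = R.shape[0]
    return np.array([np.real(steering(t, n_a, d_over_lambda).conj()
                             @ R @ steering(t, n_a, d_over_lambda))
                     for t in thetas])

# Illustrative scenario: one source at 10 deg, Na = 16, d = lambda/2, 10 dB SNR
rng = np.random.default_rng(0)
n_a, L = 16, 200
a0 = steering(np.deg2rad(10.0), n_a)
s = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) * np.sqrt(10 / 2)
noise = (rng.standard_normal((n_a, L)) + 1j * rng.standard_normal((n_a, L))) / np.sqrt(2)
X = np.outer(a0, s) + noise                    # snapshots x_l stacked as columns
R_hat = X @ X.conj().T / L                     # sample covariance
grid = np.deg2rad(np.linspace(-90, 90, 721))   # 0.25-degree scan grid
P_bf = bartlett_spectrum(R_hat, grid)
print(np.rad2deg(grid[np.argmax(P_bf)]))       # peak near 10 deg
```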

Definition: Capon (MVDR) Beamformer

The Capon beamformer minimizes the output power while maintaining unity gain in the look direction:

P_{\text{Capon}}(\theta) = \frac{1}{\mathbf{a}^{H}(\theta)\,\hat{\mathbf{R}}^{-1}\,\mathbf{a}(\theta)}.

Compared to the classical beamformer, Capon provides:

  • Narrower mainlobe (data-dependent resolution improvement).
  • Better sidelobe suppression (adapts to the interference).
  • Reduced masking of weak sources by strong nearby ones (better handling of large power differences).

However, Capon is biased: it underestimates the power of strong sources due to self-nulling, and it requires accurate covariance estimation (L \geq N_a snapshots).
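Capon differs from Bartlett only by the inverse covariance in the quadratic form. A sketch, with light diagonal loading added for numerical stability (a common practice, assumed here rather than taken from the text; the one-source scenario is also illustrative):

```python
import numpy as np

def capon_spectrum(R, thetas, d_over_lambda=0.5):
    """P_Capon(theta) = 1 / (a^H R^{-1} a), with light diagonal loading."""
    n_a = R.shape[0]
    load = 1e-4 * np.real(np.trace(R)) / n_a   # small loading ~ trace/Na
    R_inv = np.linalg.inv(R + load * np.eye(n_a))
    A = np.exp(2j * np.pi * d_over_lambda
               * np.outer(np.arange(n_a), np.sin(thetas)))
    # a^H R^{-1} a for every scan angle at once
    return 1.0 / np.real(np.einsum('it,ij,jt->t', A.conj(), R_inv, A))

# Illustrative: one 10 dB source at 10 deg, Na = 16, L = 200 snapshots
rng = np.random.default_rng(1)
n_a, L = 16, 200
a0 = np.exp(2j * np.pi * 0.5 * np.arange(n_a) * np.sin(np.deg2rad(10.0)))
s = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) * np.sqrt(10 / 2)
X = np.outer(a0, s) + (rng.standard_normal((n_a, L))
                       + 1j * rng.standard_normal((n_a, L))) / np.sqrt(2)
R_hat = X @ X.conj().T / L
grid = np.deg2rad(np.linspace(-90, 90, 721))
P_capon = capon_spectrum(R_hat, grid)
print(np.rad2deg(grid[np.argmax(P_capon)]))    # sharp peak near 10 deg
```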

Theorem: MUSIC (MUltiple SIgnal Classification) Algorithm

Given K narrowband sources (K < N_a) and the eigendecomposition of the covariance matrix:

\mathbf{R} = \sum_{i=1}^{N_a} \lambda_i \mathbf{e}_i \mathbf{e}_i^H = \mathbf{E}_s \boldsymbol{\Lambda}_s \mathbf{E}_s^H + \sigma^2\mathbf{E}_n\mathbf{E}_n^H,

where \mathbf{E}_s \in \mathbb{C}^{N_a \times K} spans the signal subspace and \mathbf{E}_n \in \mathbb{C}^{N_a \times (N_a - K)} spans the noise subspace, the MUSIC pseudo-spectrum is:

P_{\text{MUSIC}}(\theta) = \frac{1}{\mathbf{a}^{H}(\theta)\,\mathbf{E}_n\mathbf{E}_n^H\,\mathbf{a}(\theta)}.

P_{\text{MUSIC}}(\theta) \to \infty when \mathbf{a}(\theta) lies in the signal subspace — i.e., when \theta equals a source direction. The DOA estimates are the K peaks of the pseudo-spectrum.

MUSIC exploits the orthogonality between the signal and noise subspaces. The steering vector of a true source is (approximately) orthogonal to the noise subspace, making the denominator small. The sharpness of the peaks is not limited by the array aperture — hence super-resolution.
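A sketch of the pseudo-spectrum computation, applied to an illustrative two-source scene at ±5° (the scene parameters are assumptions for the demo, chosen to match the worked example below):

```python
import numpy as np

def music_spectrum(R, K, thetas, d_over_lambda=0.5):
    """MUSIC pseudo-spectrum: 1 / ||E_n^H a(theta)||^2."""
    n_a = R.shape[0]
    _, V = np.linalg.eigh(R)              # eigenvalues in ascending order
    E_n = V[:, : n_a - K]                 # noise subspace: Na - K smallest
    A = np.exp(2j * np.pi * d_over_lambda
               * np.outer(np.arange(n_a), np.sin(thetas)))
    return 1.0 / np.sum(np.abs(E_n.conj().T @ A) ** 2, axis=0)

# Two uncorrelated 10 dB sources at -5 and +5 deg, Na = 16, L = 100
rng = np.random.default_rng(2)
n_a, L = 16, 100
doas = np.deg2rad([-5.0, 5.0])
A_true = np.exp(2j * np.pi * 0.5 * np.outer(np.arange(n_a), np.sin(doas)))
S = (rng.standard_normal((2, L)) + 1j * rng.standard_normal((2, L))) * np.sqrt(10 / 2)
X = A_true @ S + (rng.standard_normal((n_a, L))
                  + 1j * rng.standard_normal((n_a, L))) / np.sqrt(2)
R_hat = X @ X.conj().T / L
grid = np.deg2rad(np.linspace(-30, 30, 1201))          # 0.05-degree grid
P = music_spectrum(R_hat, K=2, thetas=grid)
peaks = [i for i in range(1, len(P) - 1) if P[i - 1] < P[i] > P[i + 1]]
top2 = sorted(sorted(peaks, key=lambda i: P[i])[-2:])  # two largest local maxima
print(np.rad2deg(grid[top2]))                          # peaks near -5 and +5 deg
```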

Definition: ESPRIT Algorithm

ESPRIT (Estimation of Signal Parameters via Rotational Invariance Techniques) exploits the shift-invariance structure of a ULA to estimate DOAs without a spectral search.

Partition the ULA into two overlapping subarrays of N_a - 1 elements (elements 1, \ldots, N_a - 1 and 2, \ldots, N_a). The signal subspaces of the two subarrays satisfy:

\mathbf{E}_{s,2} = \mathbf{E}_{s,1}\,\boldsymbol{\Psi},

where \boldsymbol{\Psi} is related by a similarity transform to \boldsymbol{\Phi} = \text{diag}(e^{j2\pi d\sin\theta_1/\lambda}, \ldots, e^{j2\pi d\sin\theta_K/\lambda}), so the two matrices share the same eigenvalues.

The DOA estimates are obtained from the eigenvalues \phi_k of the least-squares solution \boldsymbol{\Psi} = \mathbf{E}_{s,1}^\dagger\mathbf{E}_{s,2}:

\hat{\theta}_k = \arcsin\!\left(\frac{\lambda}{2\pi d}\,\arg(\phi_k)\right).

ESPRIT avoids the grid search of MUSIC and has better statistical efficiency for closely spaced sources.
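The whole algorithm is a handful of matrix operations. A least-squares ESPRIT sketch (the two-source simulation parameters are illustrative assumptions):

```python
import numpy as np

def esprit_doa(R, K, d_over_lambda=0.5):
    """LS-ESPRIT: DOAs (radians) from the shift invariance of the signal subspace."""
    n_a = R.shape[0]
    _, V = np.linalg.eigh(R)
    E_s = V[:, n_a - K:]                  # signal subspace: K largest eigenvectors
    E1, E2 = E_s[:-1, :], E_s[1:, :]      # the two shifted subarrays
    Psi = np.linalg.pinv(E1) @ E2         # LS solution of E2 = E1 Psi
    phis = np.linalg.eigvals(Psi)         # estimates of exp(j 2 pi d sin(theta)/lambda)
    return np.arcsin(np.angle(phis) / (2 * np.pi * d_over_lambda))

# Illustrative: two uncorrelated 10 dB sources at -10 and +20 deg, Na = 16, L = 200
rng = np.random.default_rng(3)
n_a, L = 16, 200
doas = np.deg2rad([-10.0, 20.0])
A = np.exp(2j * np.pi * 0.5 * np.outer(np.arange(n_a), np.sin(doas)))
S = (rng.standard_normal((2, L)) + 1j * rng.standard_normal((2, L))) * np.sqrt(10 / 2)
X = A @ S + (rng.standard_normal((n_a, L))
             + 1j * rng.standard_normal((n_a, L))) / np.sqrt(2)
R_hat = X @ X.conj().T / L
print(np.sort(np.rad2deg(esprit_doa(R_hat, K=2))))   # near [-10, 20], no grid search
```

Note that no scan grid appears anywhere: the estimates come directly from the eigenvalues of Psi.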

Theorem: Cramér-Rao Bound for DOA Estimation

For a single source at angle \theta_0 observed by a ULA with N_a elements, spacing d, L snapshots, and per-element \text{SNR}, the CRB for the DOA estimate is:

\text{CRB}(\theta_0) = \frac{6\lambda^{2}}{(2\pi d \cos\theta_0)^2 \cdot 2L \cdot \text{SNR} \cdot N_a(N_a^2 - 1)}.

The key scaling is:

  • CRB \propto N_a^{-3} (cubic improvement with array size).
  • CRB \propto 1/(L \cdot \text{SNR}) (standard estimation scaling).
  • CRB \propto 1/\cos^2\theta_0 (degradation at endfire).

The N_a^{-3} scaling reflects three sources of information: N_a elements each contribute one phase measurement, the phase sensitivity scales with element index (\propto N_a), and the effective aperture scales with N_a.
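The scaling can be checked numerically. A sketch evaluating the CRB formula above and the RMSE gain from doubling the array (the helper name is ours; the wavelength cancels once spacing is expressed as d/λ):

```python
import numpy as np

def crb_doa(n_a, snr_lin, L, theta0, d_over_lambda=0.5):
    """Single-source DOA CRB in rad^2, per the formula above."""
    return 6.0 / ((2 * np.pi * d_over_lambda * np.cos(theta0)) ** 2
                  * 2 * L * snr_lin * n_a * (n_a ** 2 - 1))

# RMSE improvement from Na = 8 to Na = 16 (all else equal)
r = crb_doa(8, 10.0, 100, 0.0) / crb_doa(16, 10.0, 100, 0.0)
print(f"{10 * np.log10(r):.2f} dB")   # ~9.1 dB, close to the ideal Na^-3 value of 9 dB
```

The small excess over 9 dB comes from the (N_a^2 - 1) factor, which only approaches N_a^2 for large arrays.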

Example: MUSIC DOA Estimation with Two Sources

A ULA with N_a = 16, d = \lambda/2, receives two equal-power uncorrelated sources at \theta_1 = -5° and \theta_2 = +5° with per-element \text{SNR} = 10 dB. Using L = 100 snapshots: (a) Can the classical beamformer resolve these sources? (b) Can MUSIC resolve them? (c) What is the CRB for each DOA estimate?
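A quick numeric check for parts (a) and (c), plugging the example's parameters into the formulas above (the single-source CRB is used as an approximation, which is reasonable when the sources are well separated relative to the beamwidth):

```python
import numpy as np

n_a, d_ol, L = 16, 0.5, 100
snr = 10 ** (10.0 / 10)                  # 10 dB per element, linear scale
sep_deg = 10.0                           # |theta2 - theta1| = 10 deg

# (a) Rayleigh limit for the uniform ULA: 0.886 * lambda / (Na * d)
rayleigh_deg = np.rad2deg(0.886 / (n_a * d_ol))
print(f"Rayleigh limit {rayleigh_deg:.2f} deg vs separation {sep_deg} deg")

# (c) single-source CRB at theta0 = +/-5 deg
theta0 = np.deg2rad(5.0)
crb = 6 / ((2 * np.pi * d_ol * np.cos(theta0)) ** 2
           * 2 * L * snr * n_a * (n_a ** 2 - 1))
print(f"CRB RMSE ~ {np.rad2deg(np.sqrt(crb)):.4f} deg")
```

With these numbers the 10° separation exceeds the roughly 6.3° Rayleigh limit, so even the classical beamformer is expected to resolve the pair, with MUSIC doing so far more sharply; the CRB allows accuracy on the order of hundredths of a degree.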

MUSIC Pseudo-Spectrum vs. Snapshots and SNR

Compare the classical beamformer, Capon, and MUSIC spectra for two closely spaced sources. Adjust the source separation, SNR, and number of snapshots to see when each method resolves the sources.


Comparison of DOA Estimation Methods

| Property | Bartlett | Capon (MVDR) | MUSIC | ESPRIT |
| --- | --- | --- | --- | --- |
| Resolution limit | Rayleigh (\sim\lambda/(N_a d)) | Better than Rayleigh | Super-resolution | Super-resolution |
| Requires K known? | No | No | Yes | Yes |
| Computation | O(N_a^2) per angle | O(N_a^3) once | O(N_a^3) + search | O(N_a^3) (no search) |
| Snapshot requirement | Low | L \geq N_a | L \geq N_a | L \geq N_a |
| Correlated sources | OK | Degraded | Fails (need spatial smoothing) | Fails |
| Calibration errors | Robust | Moderate | Sensitive | Less sensitive |
| Amplitude estimation | Biased (high) | Biased (low) | No | No |

Historical Note: The Subspace Revolution

1979-1989

MUSIC was introduced by Ralph Schmidt in 1979 (published 1986) while at ESL, Inc., revolutionizing direction finding by breaking the Rayleigh resolution barrier. ESPRIT followed in 1989 from Roy and Kailath at Stanford, eliminating the need for a spectral search. These algorithms emerged from the signal processing community but quickly influenced radar, sonar, and telecommunications.

The subspace idea — decomposing the data covariance into signal and noise components — is the same principle that underlies principal component analysis, sparse recovery, and the low-rank structure exploited in modern imaging algorithms.

Common Mistake: Wrong Number of Sources in MUSIC

Mistake:

Applying MUSIC with an incorrect estimate of the number of sources K, expecting correct DOA estimates.

Correction:

If \hat{K} < K, some sources are invisible (projected onto the noise subspace). If \hat{K} > K, spurious peaks appear. Use information-theoretic criteria (AIC, MDL/BIC) or sequential hypothesis testing to estimate K from the eigenvalue spectrum. MDL is preferred in radar as it is consistent (selects the correct K as L \to \infty).
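The MDL criterion is straightforward to implement from the sample eigenvalues. A sketch of the Wax-Kailath form (the two-source simulation is an illustrative assumption):

```python
import numpy as np

def mdl_order(eigvals, L):
    """Estimate the source count K by the MDL criterion (Wax & Kailath, 1985)."""
    lam = np.sort(np.asarray(eigvals))[::-1]   # eigenvalues, descending
    n_a = len(lam)
    costs = []
    for k in range(n_a):
        tail = lam[k:]                         # candidate noise eigenvalues
        geo = np.exp(np.mean(np.log(tail)))    # geometric mean
        arith = np.mean(tail)                  # arithmetic mean
        costs.append(-L * (n_a - k) * np.log(geo / arith)
                     + 0.5 * k * (2 * n_a - k) * np.log(L))
    return int(np.argmin(costs))

# Illustrative: two well-separated 10 dB sources, Na = 16, L = 200
rng = np.random.default_rng(1)
n_a, L = 16, 200
A = np.exp(2j * np.pi * 0.5
           * np.outer(np.arange(n_a), np.sin(np.deg2rad([-10.0, 15.0]))))
S = (rng.standard_normal((2, L)) + 1j * rng.standard_normal((2, L))) * np.sqrt(10 / 2)
X = A @ S + (rng.standard_normal((n_a, L))
             + 1j * rng.standard_normal((n_a, L))) / np.sqrt(2)
lam = np.linalg.eigvalsh(X @ X.conj().T / L)
print(mdl_order(lam, L))                       # selects K for this scene
```

For equal (pure-noise) eigenvalues the data term vanishes and the penalty alone drives the choice to K = 0, which is the consistency property mentioned above.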

MUSIC

MUltiple SIgnal Classification — a subspace-based DOA estimation algorithm that achieves super-resolution by projecting the steering vector onto the noise subspace. The pseudo-spectrum peaks at true source directions.

Quick Check

The CRB for DOA estimation scales as N_a^{-3}. If a radar doubles its array from 8 to 16 elements (all else equal), by how many dB does the minimum DOA estimation error (RMSE) decrease?

  • 4.5 dB
  • 3 dB
  • 9 dB
  • 6 dB

CommIT Contribution (2026)

Caire's Unified Illumination and Sensing Model

G. Caire, internal note, TU Berlin CommIT group

Caire's note unifies two communities: the imaging/diffraction tomography view (where the forward model is a Fourier-domain restriction operator on the Ewald sphere) and the radar/wireless view (where the forward model is a product of array steering vectors and frequency responses). Both views lead to the same sensing matrix \mathbf{A} with Kronecker structure.

The radar signal processing tools of this chapter — matched filtering (the adjoint \mathbf{A}^{H}), range-Doppler processing (the 2D FFT of the Kronecker components), and STAP (the MVDR applied column-by-column to \mathbf{A}) — are the building blocks that Caire's model connects to the imaging algorithms of Part IV.

The key insight: the point-spread function of \mathbf{A}^{H}\mathbf{A} for a physical (structured) sensing matrix has dramatically different sidelobe structure than for a random matrix. This structure determines which reconstruction algorithms succeed and which fail.

Tags: rf-imaging, forward-model, sensing-operator

Key Takeaway

MUSIC and ESPRIT achieve super-resolution DOA estimation by exploiting the signal/noise subspace decomposition of the data covariance. The CRB scales as N_a^{-3} — much faster than the Rayleigh limit improvement with aperture. These subspace methods generalize directly to the imaging context, where they become adaptive beamforming for image reconstruction (Ch 13.4).