The Continuous-Time Matched Filter

Lifting Discrete Results to $L^2$

Real receivers operate on continuous-time waveforms $y(t)$ before sampling. The discrete vector-space story of Sections 2.1--2.3 translates verbatim to the Hilbert space $L^2([0,T])$ of square-integrable functions once we install an appropriate inner product. The payoff is a geometric picture of every digital modulation scheme as a constellation of signal vectors, with optimal detection performed by projection onto the span of those vectors --- the signal-space view that underlies all of coherent communications.

Definition:

The $L^2$ Inner Product

For real-valued signals $u(t), v(t)$ defined on the interval $[0,T]$, the $L^2$ inner product is

$$\langle u, v\rangle = \int_0^T u(t)\, v(t)\,dt,$$

with induced norm $\|u\| = \sqrt{\langle u, u\rangle}$. The space of functions with finite norm is $L^2([0,T])$ --- a Hilbert space.

All linear-algebra identities established for $\mathbb{R}^n$ --- Cauchy--Schwarz, projection onto a subspace, Gram--Schmidt --- transfer to $L^2([0,T])$ once sums over $i$ are replaced by integrals.
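The transfer is easy to verify numerically: on a fine grid, the $L^2$ inner product is well approximated by a Riemann sum, and identities such as Cauchy--Schwarz hold for the approximation. A minimal Python sketch (the grid size and test signals are illustrative choices, not from the text):

```python
import numpy as np

# Approximate the L2 inner product on [0, T] by a Riemann sum on a fine grid.
T = 1.0
N = 10_000
t = np.linspace(0.0, T, N, endpoint=False)
dt = T / N

def inner(u, v):
    """Riemann-sum approximation of <u, v> = integral_0^T u(t) v(t) dt."""
    return float(np.sum(u * v) * dt)

u = np.sin(2 * np.pi * t)   # example signals on [0, 1]
v = np.cos(2 * np.pi * t)

# Cauchy-Schwarz: |<u, v>| <= ||u|| ||v||
lhs = abs(inner(u, v))
rhs = np.sqrt(inner(u, u)) * np.sqrt(inner(v, v))
assert lhs <= rhs
# sin and cos over a full period are (numerically) orthogonal: lhs is near 0,
# while ||u||^2 = integral of sin^2 = 1/2.
```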

Definition:

Continuous-Time Detection in AWGN

Let $s(t)$ be a known deterministic waveform on $[0,T]$ with finite energy $E_s = \int_0^T s^2(t)\,dt$. Let $w(t)$ be a zero-mean white Gaussian noise process with autocorrelation $\mathbb{E}[w(t)w(\tau)] = \tfrac{N_0}{2}\delta(t-\tau)$. The continuous-time detection problem is

$$\mathcal{H}_0: y(t) = w(t), \qquad \mathcal{H}_1: y(t) = s(t) + w(t), \qquad t\in[0,T].$$

Definition:

Continuous-Time Matched Filter

The matched filter for signal $s(t)$ is the linear time-invariant filter with impulse response

$$h(t) = s(T - t), \qquad t\in[0,T].$$

The output of the filter when driven by $y(t)$ is $z(t) = (y*h)(t) = \int_0^T y(\tau)\,h(t-\tau)\,d\tau$.

Sampling the output at $t = T$ gives $z(T) = \int_0^T y(\tau)\,s(\tau)\,d\tau = \langle y, s\rangle$ --- the $L^2$ correlator. The time-reversal $s(T-t)$ is precisely what makes the convolution reproduce a correlation at the sampling instant.
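The equality $z(T) = \langle y, s\rangle$ can be checked on a discrete grid: convolving with the time-reversed signal and sampling at $t=T$ gives exactly the correlator sum. A sketch (signal choice and noise level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# On a discrete grid, the matched filter h(t) = s(T - t), convolved with y
# and sampled at t = T, equals the correlator <y, s>.
N = 500                      # grid points on [0, T]
dt = 1.0 / N
t = np.arange(N) * dt
s = np.sin(2 * np.pi * 3 * t)           # example signal s(t)
y = s + 0.5 * rng.standard_normal(N)    # noisy observation under H1

h = s[::-1]                  # matched filter: time-reversed signal
z = np.convolve(y, h) * dt   # filter output z(t) on [0, 2T]
z_T = z[N - 1]               # sample at t = T (index N-1 of the full convolution)

correlator = np.sum(y * s) * dt         # <y, s> computed directly
assert np.isclose(z_T, correlator)
```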

Theorem: The $L^2$ Correlator Is the Continuous-Time Sufficient Statistic

For the continuous-time problem in the definition of Continuous-Time Detection in AWGN above, the sufficient statistic for testing $\mathcal{H}_0$ against $\mathcal{H}_1$ is

$$T(y) = \langle y, s\rangle = \int_0^T y(t)\, s(t)\,dt.$$

Under $\mathcal{H}_0$, $T\sim\mathcal{N}(0, \tfrac{N_0}{2}E_s)$; under $\mathcal{H}_1$, $T\sim\mathcal{N}(E_s, \tfrac{N_0}{2}E_s)$. The detection performance satisfies

$$P_d = Q\!\Bigl(Q^{-1}(P_f) - \sqrt{2E_s/N_0}\Bigr).$$

This statement is identical to the discrete case, with the deflection $d^2 = 2E_s/N_0$ once again emerging as the only quantity that matters. Sampling resolution, oversampling, and the particular discretisation grid all fall away.
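Because the statistic is a scalar Gaussian under both hypotheses, the $P_d$ formula can be verified by direct Monte Carlo simulation. A sketch, with illustrative values for $E_s$, $N_0$, and $P_f$ (all assumptions, not from the text):

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)

def Q(x):
    """Gaussian tail probability Q(x) = P(Z > x)."""
    return 0.5 * (1.0 - erf(x / sqrt(2.0)))

Es, N0 = 1.0, 0.5
sigma = sqrt(N0 / 2 * Es)     # std of T under both hypotheses
Pf = 0.01

# Threshold for the desired false-alarm rate: P(T > eta | H0) = Pf,
# so eta = Q^{-1}(Pf) * sigma; Q^{-1}(0.01) ~ 2.3263.
eta = 2.3263 * sigma

M = 200_000
T0 = sigma * rng.standard_normal(M)        # statistic under H0
T1 = Es + sigma * rng.standard_normal(M)   # statistic under H1

Pf_hat = np.mean(T0 > eta)
Pd_hat = np.mean(T1 > eta)
Pd_theory = Q(2.3263 - sqrt(2 * Es / N0))  # Q(Q^{-1}(Pf) - sqrt(2 Es / N0))
# Pd_hat should match Pd_theory to within Monte Carlo error.
```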

Theorem: The Continuous-Time Matched Filter Maximises Output SNR

Among all linear time-invariant filters with impulse response $h(t)$ of finite energy, the filter $h^\star(t) = s(T-t)$ maximises the output signal-to-noise ratio at the sampling instant $t=T$:

$$\mathrm{SNR}_{\mathrm{out}}(h) = \frac{|(h*s)(T)|^2}{(N_0/2)\|h\|^2} \;\leq\; \frac{2E_s}{N_0},$$

with equality iff $h(t) = c\,s(T-t)$ for some nonzero constant $c$.
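The bound is easy to probe numerically: the matched filter attains $2E_s/N_0$ on the grid, while any mismatched filter falls strictly below it. A sketch (the example signal, $N_0$, and the rectangular competitor are illustrative choices):

```python
import numpy as np

# Compare the output SNR of the matched filter h(t) = s(T - t) against a
# mismatched filter, and check both respect the bound 2 Es / N0.
N = 1000
dt = 1.0 / N
t = np.arange(N) * dt
s = np.exp(-5 * t) * np.sin(2 * np.pi * 4 * t)   # example signal
Es = np.sum(s**2) * dt
N0 = 0.2

def out_snr(h):
    """|(h*s)(T)|^2 / ((N0/2) ||h||^2) on the grid."""
    peak = np.sum(h[::-1] * s) * dt              # (h*s)(T) as a Riemann sum
    energy = np.sum(h**2) * dt                   # ||h||^2
    return peak**2 / ((N0 / 2) * energy)

snr_matched = out_snr(s[::-1])                   # matched: s(T - t)
snr_mismatched = out_snr(np.ones(N))             # mismatched: rectangular filter

bound = 2 * Es / N0
assert snr_mismatched < snr_matched <= bound + 1e-9
# The matched filter attains the bound (up to floating-point error).
```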

Theorem: Signal-Space Representation via Gram--Schmidt

Let $\{s_1(t),\dots,s_M(t)\}\subset L^2([0,T])$ be $M$ waveforms. The Gram--Schmidt procedure produces an orthonormal basis $\{\phi_1,\dots,\phi_N\}$ with $N\leq M$ and coefficients $s_m(t) = \sum_{k=1}^N s_{mk}\,\phi_k(t)$, $s_{mk} = \langle s_m,\phi_k\rangle$. Under AWGN, the sufficient statistic for detecting which $s_m$ was transmitted is the projection vector $\mathbf{y} = (\langle y,\phi_1\rangle,\dots,\langle y,\phi_N\rangle)^{\mathsf{T}}\in\mathbb{R}^N$, and the ML detector chooses $\widehat{m} = \arg\min_m \|\mathbf{y}-\mathbf{s}_m\|$.

Any set of $M$ waveforms lives in at most an $M$-dimensional subspace of $L^2$. Gram--Schmidt finds that subspace, and the detector only needs the $N$ coordinates of $y(t)$ inside it. Everything else is noise orthogonal to every hypothesis --- informationally irrelevant.
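The procedure can be sketched on sampled waveforms, with the Riemann-sum inner product standing in for the $L^2$ integral. When one waveform is a linear combination of the others, its residual vanishes and the basis stays smaller than $M$ (waveform choices below are illustrative assumptions):

```python
import numpy as np

# Gram-Schmidt on sampled waveforms using the Riemann-sum inner product.
N_grid = 2000
dt = 1.0 / N_grid
t = np.arange(N_grid) * dt

def inner(u, v):
    return np.sum(u * v) * dt

def gram_schmidt(signals, tol=1e-9):
    """Return an orthonormal basis for the span of the given waveforms."""
    basis = []
    for s in signals:
        r = s - sum(inner(s, phi) * phi for phi in basis)  # residual
        nrm = np.sqrt(inner(r, r))
        if nrm > tol:                 # skip linearly dependent waveforms
            basis.append(r / nrm)
    return basis

# Three waveforms, but s3 = s1 + s2, so the span is only 2-dimensional.
s1 = np.sin(2 * np.pi * t)
s2 = np.cos(2 * np.pi * t)
s3 = s1 + s2
basis = gram_schmidt([s1, s2, s3])   # N = 2 < M = 3
```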

Example: QPSK as a Two-Dimensional Constellation

The four QPSK waveforms are $s_m(t) = A\cos(2\pi f_c t + m\pi/2 + \pi/4)$ for $m\in\{0,1,2,3\}$ over $[0,T]$, with $f_c T$ a large integer. Construct the signal-space representation and identify $N$.
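One way to check the answer numerically is to sample the four waveforms and compute the dimension of their span as a matrix rank. A sketch; the values $A=1$, $f_c=10$, $T=1$ and the grid size are illustrative assumptions:

```python
import numpy as np

# Sample the four QPSK waveforms and count the dimension of their span.
A, fc, T = 1.0, 10, 1.0      # fc * T is an integer, as the example requires
Ng = 20_000
dt = T / Ng
t = np.arange(Ng) * dt

waveforms = [A * np.cos(2 * np.pi * fc * t + m * np.pi / 2 + np.pi / 4)
             for m in range(4)]

# Stack as rows; the matrix rank is the signal-space dimension N.
S = np.array(waveforms)
N = np.linalg.matrix_rank(S, tol=1e-6)
# N = 2: expanding the cosine shows every waveform is a combination of
# cos(2 pi fc t) and sin(2 pi fc t), the in-phase and quadrature basis.
```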

Example: Matched Filter for a Rectangular Pulse

Compute the matched-filter impulse response and plot its output when driven by $y(t)=s(t)+w(t)$ for the rectangular pulse $s(t) = \mathbf{1}_{[0,T]}(t)$.
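For this pulse, $h(t) = s(T-t)$ is again the rectangle, and the noiseless output $(s*h)(t)$ is a triangle on $[0, 2T]$ peaking at $t=T$ with value $E_s = T$. A numerical sketch of the noiseless case (grid size and $T=1$ are illustrative):

```python
import numpy as np

# Rectangular pulse: s(t) = 1 on [0, T], so h(t) = s(T - t) is also the
# rectangle, and the noiseless matched-filter output is a triangle.
T = 1.0
Ng = 1000
dt = T / Ng
s = np.ones(Ng)              # s(t) = 1 on [0, T]
h = s[::-1]                  # matched filter (identical here by symmetry)

z = np.convolve(s, h) * dt   # output on [0, 2T], length 2*Ng - 1
peak_idx = int(np.argmax(z))

assert peak_idx == Ng - 1            # triangle peaks at t = T
assert np.isclose(z[peak_idx], T)    # peak value is Es = integral of s^2 = T
```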

Common Mistake: Causality of the Matched Filter

Mistake:

"The matched filter h(t)=s(Tt)h(t) = s(T-t) looks non-causal because s(t)s(-t) would be non-causal."

Correction:

On the interval $[0,T]$, $s(T-t)$ is the time-reversed signal shifted by $T$ so that it occupies $[0,T]$ again. This built-in delay is precisely what makes the filter causal: the output at time $t=T$ depends only on past values of $y$ during $[0,T]$.

Common Mistake: Sample at $t=T$, Not at the Peak

Mistake:

"To maximise SNR I should sample the matched-filter output at its empirical peak."

Correction:

Peak-picking is a separate detector (the max-correlator for unknown delay --- a GLRT!) and has a different, generally worse, false-alarm behaviour than sampling at the known delay $t=T$. The optimality of $2E_s/N_0$ is tied to sampling at the known delay.

Quick Check

For $M$ distinct signals in $L^2([0,T])$, the signal-space dimension $N$ satisfies:

$N = M$ always.

$N \leq M$, with equality iff the signals are linearly independent.

$N \geq M$.

$N$ equals the number of orthogonal signals in the set.

Quick Check

In the AWGN autocorrelation $\mathbb{E}[w(t)w(\tau)]=\tfrac{N_0}{2}\delta(t-\tau)$, what is the physical interpretation of $N_0/2$?

Noise power in watts.

Two-sided noise PSD in W/Hz.

Noise energy.

Noise sample variance.

Key Takeaway

Detecting any waveform in any finite-dimensional set against AWGN reduces to detecting a vector in $\mathbb{R}^N$ against white Gaussian noise, where $N$ is the dimension of the signals' span. Every digital modulation --- BPSK, QPSK, $M$-QAM, $M$-PSK, pulse-position modulation, orthogonal frequency-shift keying --- is an instance of this reduction.

Why This Matters: Signal Space and Modulation Taxonomy

The signal-space dimension $N$ determines a modulation's geometry: $N=1$ for $M$-PAM, $N=2$ for $M$-PSK / $M$-QAM, $N=M$ for orthogonal $M$-FSK, and $N=T/T_c$ for CDMA (where $T_c$ is the chip period). The minimum distance $d_{\min}$ within the constellation, combined with the AWGN deflection result, determines the symbol-error probability via the union bound --- a topic deferred to Chapter 3.

See full treatment in Chapter 3

Historical Note: Wozencraft and Jacobs (1965)

1960s

The signal-space view of digital modulation was systematised in Wozencraft & Jacobs' 1965 textbook Principles of Communication Engineering. They took the geometric picture --- inherited from the Karhunen--Loeve expansion and from Shannon's 1948 geometric capacity argument --- and made it the pedagogical centrepiece of modulation theory. Every subsequent digital-communications textbook has followed their lead; ours is no exception.