Band-Limited Channels and the Sampling Theorem
Connecting Continuous-Time to Discrete-Time
So far we have analyzed the discrete-time AWGN channel. But real communication happens in continuous time: a transmitter sends a waveform $x(t)$, it propagates through a channel, and the receiver observes $y(t)$. How do we bridge the gap?
The answer is the Nyquist-Shannon sampling theorem, which shows that a band-limited signal of bandwidth $W$ Hz is completely described by $2W$ samples per second. This converts the continuous-time channel into a discrete-time one, and the capacity per second becomes $C = W \log_2\left(1 + \frac{P}{N_0 W}\right)$ bits/s. This section makes this connection rigorous.
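As a numerical sanity check of the sampling claim, here is a small sketch (all parameters are illustrative assumptions): a tone below the band edge, sampled at the Nyquist rate $2W$, is recovered between sample instants by truncated sinc interpolation.

```python
import math

# Toy illustration (assumed parameters): a tone at f0 < W, sampled at the
# Nyquist rate fs = 2W, is recovered off-grid by truncated sinc interpolation.
W, f0 = 4.0, 1.0                    # Hz
fs = 2 * W                          # Nyquist rate: 2W samples per second
N = 400                             # samples kept on each side of t = 0
samples = [math.sin(2 * math.pi * f0 * n / fs) for n in range(-N, N + 1)]

def reconstruct(t):
    """x(t) = sum_n x[n] * sinc(fs*t - n), truncated to the stored samples."""
    total = 0.0
    for i, xn in enumerate(samples):
        arg = fs * t - (i - N)
        total += xn * (1.0 if arg == 0 else
                       math.sin(math.pi * arg) / (math.pi * arg))
    return total

t = 0.3                             # a time between sample instants
print(abs(reconstruct(t) - math.sin(2 * math.pi * f0 * t)) < 1e-2)  # True
```

The truncation to $\pm 400$ samples leaves a small residual error; an infinite sinc sum would be exact.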
Definition: Continuous-Time Band-Limited AWGN Channel
The continuous-time AWGN channel with single-sided bandwidth $W$ Hz is
$$y(t) = x(t) + n(t),$$
where:
- $x(t)$ is the transmitted waveform, band-limited to $|f| \le W$, with average power $P$,
- $n(t)$ is white Gaussian noise with two-sided power spectral density $N_0/2$.
The noise power in the band is $N = N_0 W$ (for a real baseband channel with bandwidth $W$).
Theorem: Capacity of the Band-Limited AWGN Channel
The capacity of the continuous-time band-limited AWGN channel with bandwidth $W$ Hz, signal power $P$, and noise power spectral density $N_0/2$ is
$$C = W \log_2\left(1 + \frac{P}{N_0 W}\right) \text{ bits/s}.$$
This is the celebrated Shannon-Hartley formula.
By the sampling theorem, the band-limited channel produces $2W$ independent real samples per second. Each sample sees an i.i.d. AWGN channel with signal power $P/(2W)$ and noise power $N_0/2$, giving capacity $\frac{1}{2}\log_2\left(1 + \frac{P}{N_0 W}\right)$ bits per sample. Multiplying by $2W$ samples per second yields the result.
Apply the sampling theorem
By the Nyquist-Shannon theorem, a signal band-limited to $W$ Hz is completely described by samples taken at rate $2W$ per second. Over a time interval $T$, this gives $2WT$ samples.
Convert to discrete-time
The sampled channel is $Y_i = X_i + Z_i$, where $Z_i \sim \mathcal{N}(0, N_0/2)$ are i.i.d. The signal power per sample is $P/(2W)$, so the per-sample SNR is $\frac{P/(2W)}{N_0/2} = \frac{P}{N_0 W}$.
Compute the capacity in bits/s
The per-sample capacity is $\frac{1}{2}\log_2\left(1 + \frac{P}{N_0 W}\right)$ bits per real sample. With $2W$ samples per second:
$$C = 2W \cdot \frac{1}{2}\log_2\left(1 + \frac{P}{N_0 W}\right) = W \log_2\left(1 + \frac{P}{N_0 W}\right) \text{ bits/s}.$$
Shannon-Hartley formula
The capacity of the band-limited AWGN channel: $C = W \log_2\left(1 + \frac{P}{N_0 W}\right)$ bits/s. Named for Shannon (information-theoretic derivation) and Hartley (earlier work on the logarithmic relationship between bandwidth and capacity).
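The formula is simple enough to evaluate directly. A minimal Python sketch, with illustrative numbers chosen so the SNR works out to exactly 15 (all values assumed):

```python
import math

def shannon_capacity(W, P, N0):
    """Shannon-Hartley capacity in bits/s: C = W * log2(1 + P / (N0 * W))."""
    return W * math.log2(1 + P / (N0 * W))

W = 1e6            # bandwidth, Hz (assumed)
N0 = 4.0e-21       # noise PSD, W/Hz (roughly kT at 290 K)
P = 15 * N0 * W    # power chosen so that SNR = P / (N0 * W) = 15

print(shannon_capacity(W, P, N0) / 1e6)  # log2(1 + 15) = 4, so about 4.0 Mbit/s
```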
Related: AWGN channel, Spectral efficiency
Definition: Passband Channel Capacity
For a passband channel centered at carrier frequency $f_c$ with single-sided bandwidth $W$, the complex baseband equivalent occupies $|f| \le W/2$ and has sample rate $W$ complex samples per second.
Each complex sample sees the channel $Y_i = X_i + Z_i$, $Z_i \sim \mathcal{CN}(0, N_0)$ i.i.d., with power constraint $\mathbb{E}[|X_i|^2] \le P/W$ and SNR $\frac{P/W}{N_0} = \frac{P}{N_0 W}$.
The capacity per complex symbol is $\log_2\left(1 + \frac{P}{N_0 W}\right)$ bits, and the capacity in bits/s is
$$C = W \log_2\left(1 + \frac{P}{N_0 W}\right),$$
which is the same as the real baseband result — both conventions give the same bits/s.
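A quick consistency check of the two conventions, with assumed numbers:

```python
import math

# Real-baseband convention: 2W real samples/s, (1/2)*log2(1+SNR) bits each.
# Complex-passband convention: W complex samples/s, log2(1+SNR) bits each.
# Both give the same capacity in bits/s (illustrative, assumed numbers).
W, P, N0 = 20e6, 1e-12, 4.0e-21
snr = P / (N0 * W)

real_bps = 2 * W * 0.5 * math.log2(1 + snr)
complex_bps = W * math.log2(1 + snr)
print(math.isclose(real_bps, complex_bps))  # True
```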
Example: Shannon Capacity of a 5G NR Channel
A 5G NR cell operates at carrier frequency $f_c$ GHz with bandwidth $W$ MHz and received SNR $\gamma$ dB. What is the maximum achievable data rate?
Convert SNR to linear
$\gamma_{\mathrm{lin}} = 10^{\gamma/10}$.
Apply the Shannon-Hartley formula
$$C = W \log_2\left(1 + \gamma_{\mathrm{lin}}\right) \text{ bits/s}.$$
Compare with practical throughput
In practice, 5G NR achieves about 60-80% of the Shannon limit due to cyclic prefix overhead (~7%), reference signal overhead (~15-20%), control channel overhead, and the gap to capacity of practical coding. A realistic throughput is therefore roughly $0.6\,C$ to $0.8\,C$ for this configuration.
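A hedged numeric illustration: the figures below (100 MHz bandwidth, 20 dB SNR) are assumed for the sake of the computation, not taken from a specific deployment.

```python
import math

# Hypothetical mid-band 5G NR configuration (assumed numbers):
W = 100e6                          # bandwidth: 100 MHz
snr_db = 20.0                      # received SNR in dB
snr = 10 ** (snr_db / 10)          # 20 dB -> linear SNR of 100

C = W * math.log2(1 + snr)         # Shannon limit in bits/s
print(round(C / 1e6, 1))           # -> 665.8 (Mbit/s)

# Practical throughput at 60-80% of the Shannon limit:
lo, hi = 0.6 * C, 0.8 * C
print(round(lo / 1e6), round(hi / 1e6))  # -> 399 533
```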
Bandwidth vs. Power Tradeoff
Capacity vs. Bandwidth
Explore how capacity scales with bandwidth $W$ for fixed power $P$. At small bandwidth, capacity grows almost linearly with $W$ (bandwidth-limited regime). At large bandwidth, capacity saturates at $\frac{P}{N_0} \log_2 e$ (power-limited regime).
Two Operating Regimes
The Shannon-Hartley formula reveals two fundamentally different operating regimes:
Bandwidth-limited regime ($W$ small, $\frac{P}{N_0 W}$ high): Capacity grows approximately as $W \log_2 \frac{P}{N_0 W}$. Adding more bandwidth is very valuable. This is the regime of fiber optics and high-SNR short-range wireless links.
Power-limited regime ($W$ large, $\frac{P}{N_0 W}$ small): Capacity saturates at $\frac{P}{N_0} \log_2 e \approx 1.44\,\frac{P}{N_0}$ regardless of bandwidth. The bottleneck is energy, not spectrum. This is the regime of satellite communication, deep-space links, and IoT devices.
Most practical systems operate between these extremes, and the spectral efficiency $\eta = C/W$ (bits/s/Hz) indicates which regime dominates: $\eta \gg 1$ means bandwidth-limited, $\eta \ll 1$ means power-limited.
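These regimes are easy to see numerically. A sketch with assumed $P$ and $N_0$, sweeping the bandwidth toward the power-limited ceiling $\frac{P}{N_0}\log_2 e$:

```python
import math

# Capacity C(W) = W * log2(1 + P/(N0*W)) for fixed P and N0 (assumed values).
# As W grows, C approaches the power-limited ceiling (P/N0) * log2(e).
P, N0 = 1e-12, 4.0e-21
ceiling = (P / N0) * math.log2(math.e)

for W in (1e6, 1e7, 1e8, 1e9):
    C = W * math.log2(1 + P / (N0 * W))
    print(f"W = {W:8.0e} Hz  C = {C / 1e6:7.1f} Mbit/s  "
          f"({100 * C / ceiling:5.1f}% of ceiling)")
```

At small $W$ the capacity is a tiny fraction of the ceiling and nearly doubles with each doubling of bandwidth; at large $W$ it creeps toward the ceiling.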
Thermal Noise Floor
The noise spectral density $N_0$ has a fundamental physical origin: at temperature $T$ Kelvin, the thermal noise power spectral density is $N_0 = k_B T$, where $k_B \approx 1.38 \times 10^{-23}$ J/K is Boltzmann's constant.
At room temperature ($T = 290$ K): $N_0 = k_B T \approx 4.0 \times 10^{-21}$ W/Hz $\approx -174$ dBm/Hz.
This is the ultimate noise floor for any receiver operating at room temperature. In practice, the receiver noise figure $F$ (typically 3-8 dB) increases the effective noise spectral density to $N_0 F$ (in dB: $-174 + F_{\mathrm{dB}}$ dBm/Hz).
- Thermal noise floor at 290 K: $-174$ dBm/Hz
- Receiver noise figure adds 3-8 dB in practice
- Cryogenic receivers (radio astronomy) can achieve noise figures below 1 dB
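The $-174$ dBm/Hz figure follows directly from $N_0 = k_B T$; a quick check (the 5 dB noise figure below is an assumed example):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K (exact SI value)
T = 290.0            # standard reference temperature, K

N0 = k_B * T                           # thermal noise PSD, W/Hz
N0_dBm_Hz = 10 * math.log10(N0 / 1e-3)
print(round(N0_dBm_Hz, 1))             # -> -174.0 (dBm/Hz)

# A receiver noise figure of, say, 5 dB raises the effective floor:
print(round(N0_dBm_Hz + 5, 1))         # -> -169.0 (dBm/Hz)
```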
Historical Note: From Hartley to Shannon
Ralph Hartley published "Transmission of Information" in 1928, establishing the logarithmic relationship between the number of distinguishable signal levels and the amount of information. His formula $C = 2W \log_2 M$ (for $M$ signal levels) captures the bandwidth dependence but misses the noise.
Shannon's 1948 breakthrough was to incorporate noise into the picture, yielding $C = W \log_2(1 + \mathrm{SNR})$. The "+1" inside the logarithm — which seems like a minor detail — is actually the crucial difference: it says that even with infinite bandwidth and infinite signal levels, noise limits what you can communicate. Hartley's formula gives $C \to \infty$ as $M \to \infty$; Shannon's formula correctly gives a finite limit.
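To make the contrast concrete, a small sketch (assumed numbers) comparing Hartley's noise-free rate with the Shannon limit at a fixed SNR:

```python
import math

# Hartley's noise-free rate 2W*log2(M) grows without bound as the number of
# signal levels M grows; Shannon's limit stays fixed for a given SNR.
# (Assumed illustrative numbers.)
W, snr = 1e6, 100.0
shannon = W * math.log2(1 + snr)

for M in (2, 8, 32, 1024):
    hartley = 2 * W * math.log2(M)
    print(f"M={M:5d}: Hartley {hartley / 1e6:5.1f} Mbit/s "
          f"(Shannon limit: {shannon / 1e6:.2f} Mbit/s)")
```

For large $M$, Hartley's rate sails past the Shannon limit: those extra levels are no longer distinguishable in noise.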
Quick Check
A system with bandwidth $W$ and power $P$ achieves capacity $C_1 = W \log_2\left(1 + \frac{P}{N_0 W}\right)$. If the bandwidth is doubled to $2W$ (keeping power fixed), the new capacity $C_2$ satisfies:
- $C_2 = 2C_1$
- $C_1 < C_2 < 2C_1$
- $C_2 = C_1$ (bandwidth does not help)
Doubling bandwidth doubles the noise power ($N_0 \cdot 2W = 2N_0 W$) while keeping signal power fixed, so the SNR halves. By the concavity of $\log_2(1 + x)$, we get $C_1 < C_2 < 2C_1$. The capacity increases but by less than a factor of two — this is the diminishing returns of bandwidth in the power-limited regime.
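A numeric spot check of this conclusion, with assumed parameter values:

```python
import math

def capacity(W, P, N0):
    """Shannon-Hartley capacity in bits/s."""
    return W * math.log2(1 + P / (N0 * W))

# Doubling bandwidth with fixed power (assumed illustrative numbers):
W, P, N0 = 1e6, 1e-13, 4.0e-21
C1 = capacity(W, P, N0)
C2 = capacity(2 * W, P, N0)
print(C1 < C2 < 2 * C1)  # True: capacity grows, but sublinearly
```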