Exercises
ex-sp-ch20-01
Easy: Generate a BPSK constellation with $E_b = 1$ and compute its minimum Euclidean distance. Verify that $d_{\min} = 2\sqrt{E_b}$.
BPSK symbols are $\pm\sqrt{E_b}$.
Implementation
import numpy as np
Eb = 1.0
bpsk = np.array([np.sqrt(Eb), -np.sqrt(Eb)])
dmin = np.abs(bpsk[0] - bpsk[1])
print(f"dmin = {dmin:.4f}, 2*sqrt(Eb) = {2*np.sqrt(Eb):.4f}")
ex-sp-ch20-02
Easy: Write a function that generates an $M$-PSK constellation for any $M$ and verify that all symbols have unit energy.
$s_m = e^{j 2\pi m/M}$ for $m = 0, 1, \dots, M-1$.
Implementation
def psk(M):
    return np.exp(1j * 2 * np.pi * np.arange(M) / M)

for M in [2, 4, 8]:
    c = psk(M)
    print(f"{M}-PSK: |s|^2 = {np.abs(c)**2}")  # all ones
ex-sp-ch20-03
Easy: Compute the theoretical BER for BPSK at $E_b/N_0 \in \{0, 5, 10, 15\}$ dB using the $Q$-function.
$P_b = Q(\sqrt{2 E_b/N_0})$ and $Q(x) = \tfrac{1}{2}\operatorname{erfc}(x/\sqrt{2})$.
Implementation
from scipy.special import erfc

Q = lambda x: 0.5 * erfc(x / np.sqrt(2))
for EbN0_dB in [0, 5, 10, 15]:
    EbN0 = 10**(EbN0_dB / 10)
    print(f"Eb/N0 = {EbN0_dB} dB: BER = {Q(np.sqrt(2*EbN0)):.2e}")
ex-sp-ch20-04
Easy: Convert $E_b/N_0 = 12$ dB to $E_s/N_0$ for QPSK, 16-QAM, and 64-QAM. Recall that $E_s/N_0 = E_b/N_0 + 10\log_{10}(\log_2 M)$ in dB.
Implementation
EbN0_dB = 12
for M in [4, 16, 64]:
    k = np.log2(M)
    EsN0_dB = EbN0_dB + 10*np.log10(k)
    print(f"{M}-QAM: Es/N0 = {EsN0_dB:.2f} dB")
ex-sp-ch20-05
Easy: Implement ML detection for QPSK using only sign decisions on the I and Q components (no distance computation).
Implementation
# Gray-mapped QPSK: the first bit sets the I sign, the second bit the Q sign,
# so each bit can be detected independently from its own axis.
qpsk = np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2)
N = 10000
bits = np.random.randint(0, 2, (N, 2))
idx = bits[:, 0] * 2 + bits[:, 1]
tx = qpsk[idx]
rx = tx + 0.3*(np.random.randn(N) + 1j*np.random.randn(N))
det_bits = np.column_stack([(np.real(rx) < 0).astype(int),
                            (np.imag(rx) < 0).astype(int)])
ber = np.mean(bits != det_bits)
print(f"BER = {ber:.5f}")
ex-sp-ch20-06
Medium: Implement a complete 16-QAM transmitter-receiver chain with Gray coding. Simulate the BER over a range of $E_b/N_0$ values, collecting at least 100 errors per point. Compare with the theoretical formula.
Use separate Gray codes for the I and Q axes.
Implementation
# See code supplement ch20/python/qam_simulation.py
# for the complete implementation with Gray coding.
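The supplement file is not reproduced here; as a rough self-contained sketch (per-axis Gray mapping as the hint suggests, one illustrative $E_b/N_0$ point, and the standard Gray-coded approximation $P_b \approx \tfrac{3}{4}Q(\sqrt{0.8\,E_b/N_0})$ for the comparison):

```python
import numpy as np
from scipy.special import erfc

Q = lambda x: 0.5 * erfc(x / np.sqrt(2))
gray2 = np.array([0, 1, 3, 2])                 # Gray label of each 4-PAM level
inv_gray = np.argsort(gray2)                   # bit-pair value -> level index
amp = np.array([-3, -1, 1, 3]) / np.sqrt(10)   # unit-energy 4-PAM levels

rng = np.random.default_rng(0)
N, EbN0_dB = 200000, 8
bits = rng.integers(0, 2, (N, 4))
tx = (amp[inv_gray[bits[:, 0]*2 + bits[:, 1]]]
      + 1j*amp[inv_gray[bits[:, 2]*2 + bits[:, 3]]])

EbN0 = 10**(EbN0_dB/10)
sigma = np.sqrt(1/(2*4*EbN0))                  # Es = 1, k = 4 bits/symbol
rx = tx + sigma*(rng.standard_normal(N) + 1j*rng.standard_normal(N))

def detect(y):                                 # per-axis minimum-distance slicer
    idx = np.clip(np.round((y*np.sqrt(10) + 3)/2), 0, 3).astype(int)
    return gray2[idx]                          # detected level -> bit-pair value

vI, vQ = detect(rx.real), detect(rx.imag)
det = np.column_stack([vI >> 1, vI & 1, vQ >> 1, vQ & 1])
ber = np.mean(bits != det)
ber_theory = 0.75 * Q(np.sqrt(0.8*EbN0))       # Gray-coded 16-QAM approximation
print(f"simulated BER = {ber:.4f}, theory ~ {ber_theory:.4f}")
```

A full BER-vs-$E_b/N_0$ sweep just wraps the noisy part of this in a loop that runs until 100 errors have been counted at each point.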
ex-sp-ch20-07
Medium: Compute the union bound on the SER for 16-QAM and compare with the nearest-neighbor approximation. At what SNR does the bound become tight (within 0.5 dB)?
Analysis
from itertools import combinations

# qam_constellation(16) is assumed to return a unit-energy 16-QAM grid
const = qam_constellation(16)
N_sym = len(const)
d_all = [abs(const[i] - const[j]) for i, j in combinations(range(N_sym), 2)]
dmin = min(d_all)
EbN0_dB = np.arange(0, 20, 0.5)
for snr_db in EbN0_dB:
    snr = 10**(snr_db/10)
    sigma = np.sqrt(1/(2*4*snr))           # per-dimension noise std, k = 4 bits
    # full union bound: average over all ordered symbol pairs
    ser_ub = sum(2*Q(d/(2*sigma)) for d in d_all) / N_sym
    # nearest-neighbor approximation: 3 neighbors on average for 16-QAM
    ser_nn = 3*Q(dmin/(2*sigma))
ex-sp-ch20-08
Medium: Implement a BPSK matched-filter receiver with root-raised-cosine pulse shaping. Measure the output SNR and verify it equals $2E_b/N_0$.
Root-raised-cosine at TX and RX gives overall raised-cosine (ISI-free).
Implementation
from scipy.signal import fftconvolve
# See code supplement ch20/python/matched_filter.py
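The supplement is not reproduced here; one possible self-contained sketch, using a truncated unit-energy RRC pulse (the roll-off, filter span, and oversampling values are illustrative assumptions):

```python
import numpy as np
from scipy.signal import fftconvolve

def rrc_pulse(beta, sps, span):
    """Truncated root-raised-cosine pulse, unit energy, roll-off beta."""
    t = np.arange(-span*sps, span*sps + 1) / sps   # time in symbol periods
    h = np.zeros_like(t)
    for i, ti in enumerate(t):
        if np.isclose(ti, 0.0):
            h[i] = 1 - beta + 4*beta/np.pi
        elif beta > 0 and np.isclose(abs(ti), 1/(4*beta)):
            h[i] = (beta/np.sqrt(2))*((1 + 2/np.pi)*np.sin(np.pi/(4*beta))
                                      + (1 - 2/np.pi)*np.cos(np.pi/(4*beta)))
        else:
            h[i] = (np.sin(np.pi*ti*(1 - beta))
                    + 4*beta*ti*np.cos(np.pi*ti*(1 + beta))) \
                   / (np.pi*ti*(1 - (4*beta*ti)**2))
    return h / np.sqrt(np.sum(h**2))

sps, span, beta = 8, 8, 0.35
h = rrc_pulse(beta, sps, span)
rng = np.random.default_rng(0)
Nsym, Eb, N0 = 5000, 1.0, 0.25
syms = np.sqrt(Eb)*(1 - 2*rng.integers(0, 2, Nsym))
up = np.zeros(Nsym*sps); up[::sps] = syms
tx = fftconvolve(up, h)                        # RRC shaping at the transmitter
noise = np.sqrt(N0/2)*rng.standard_normal(tx.size)
mf_sig = fftconvolve(tx, h)                    # matched filter, signal path
mf_noise = fftconvolve(noise, h)               # matched filter, noise path
delay = 2*span*sps                             # total TX+RX filter delay
samp = np.arange(Nsym)*sps + delay             # symbol-rate sampling instants
snr_out = np.mean(mf_sig[samp]**2) / np.var(mf_noise[samp])
print(f"output SNR = {snr_out:.2f}, 2*Eb/N0 = {2*Eb/N0:.2f}")
```

Because the cascaded RRC filters form a Nyquist raised-cosine pulse, the symbol-spaced samples are ISI-free and the noise samples are uncorrelated, so the measured ratio should match $2E_b/N_0$.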
ex-sp-ch20-09
Medium: Compare BPSK and DBPSK (differential BPSK) BER over AWGN. Show that DBPSK has a 1-2 dB penalty.
Analysis
# DBPSK: BER = 0.5*exp(-Eb/N0);  BPSK: BER = Q(sqrt(2*Eb/N0))
for EbN0_dB in range(0, 16, 2):
    EbN0 = 10**(EbN0_dB/10)
    ber_bpsk = Q(np.sqrt(2*EbN0))
    ber_dbpsk = 0.5 * np.exp(-EbN0)
    # Eb/N0 (dB) at which DBPSK reaches the BPSK BER, minus the BPSK Eb/N0
    penalty = 10*np.log10(-np.log(2*ber_bpsk)) - EbN0_dB
    print(f"Eb/N0={EbN0_dB}: BPSK={ber_bpsk:.2e}, DBPSK={ber_dbpsk:.2e}, "
          f"penalty={penalty:.2f} dB")
ex-sp-ch20-10
Medium: Implement error vector magnitude (EVM) measurement for 16-QAM. Show the relationship between EVM and SNR.
Implementation
def measure_evm(tx_symbols, rx_symbols):
    error = rx_symbols - tx_symbols
    evm_rms = np.sqrt(np.mean(np.abs(error)**2) / np.mean(np.abs(tx_symbols)**2))
    evm_dB = 20 * np.log10(evm_rms)
    return evm_rms, evm_dB
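As a quick check of the EVM-SNR relationship the exercise asks for: with unit-energy symbols and total noise power $1/\mathrm{SNR}$, the RMS EVM should land near $1/\sqrt{\mathrm{SNR}}$. A self-contained sketch (the function is repeated so the snippet runs standalone; the 20 dB SNR is an illustrative assumption):

```python
import numpy as np

def measure_evm(tx_symbols, rx_symbols):       # as defined above
    error = rx_symbols - tx_symbols
    evm_rms = np.sqrt(np.mean(np.abs(error)**2) / np.mean(np.abs(tx_symbols)**2))
    return evm_rms, 20*np.log10(evm_rms)

rng = np.random.default_rng(0)
levels = np.array([-3, -1, 1, 3]) / np.sqrt(10)     # unit-energy 16-QAM axes
tx = rng.choice(levels, 50000) + 1j*rng.choice(levels, 50000)
snr_dB = 20                                         # assumed SNR for the check
sigma = np.sqrt(10**(-snr_dB/10) / 2)               # total noise power = 1/SNR
rx = tx + sigma*(rng.standard_normal(50000) + 1j*rng.standard_normal(50000))
evm_rms, evm_dB = measure_evm(tx, rx)
print(f"EVM = {100*evm_rms:.2f}% ({evm_dB:.2f} dB), expected {-snr_dB} dB")
```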
ex-sp-ch20-11
Hard: Implement 8-PSK with set partitioning (Ungerboeck labeling) instead of Gray coding. Compare BER with Gray-coded 8-PSK. Why is set partitioning used in trellis-coded modulation?
Analysis
Set partitioning maximizes $d_{\min}$ within each subset at every level of the partition tree, enabling trellis-coded modulation to achieve coding gain. Without the trellis code, it performs worse than Gray coding.
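The growth of the intra-subset minimum distance can be checked numerically; a minimal sketch, assuming the standard Ungerboeck split of 8-PSK by index parity at each level:

```python
import numpy as np

pts = np.exp(1j*2*np.pi*np.arange(8)/8)        # unit-energy 8-PSK

def dmin(s):
    d = np.abs(s[:, None] - s[None, :])
    return d[d > 1e-12].min()

# each partition step keeps every other point, doubling the angular spacing
level0 = [pts]                                 # full constellation
level1 = [pts[0::2], pts[1::2]]                # two QPSK subsets
level2 = [pts[0::4], pts[1::4], pts[2::4], pts[3::4]]  # four antipodal pairs
for lev, subsets in enumerate([level0, level1, level2]):
    print(f"level {lev}: intra-subset dmin = {dmin(subsets[0]):.3f}")
```

The distances come out as $2\sin(\pi/8) \approx 0.765$, then $\sqrt{2}$, then $2$, which is the widening-distance ladder a trellis code exploits.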
ex-sp-ch20-12
Hard: Design a 32-cross-QAM constellation (the "cross" shape used when $\log_2 M$ is odd). Normalize to unit energy and compute $d_{\min}$ and the average number of nearest neighbors.
Implementation
# 32-cross: start with a 6x6 grid, remove the 4 corner points
pam6 = np.arange(6) - 2.5
Ic, Qc = np.meshgrid(pam6, pam6)   # avoid shadowing the Q-function
all_pts = (Ic + 1j*Qc).flatten()
# remove the corners where |I| = |Q| = 2.5
mask = ~((np.abs(Ic.flatten()) == 2.5) & (np.abs(Qc.flatten()) == 2.5))
cross32 = all_pts[mask]
cross32 /= np.sqrt(np.mean(np.abs(cross32)**2))
# minimum distance and average nearest-neighbor count after normalization
d = np.abs(cross32[:, None] - cross32[None, :])
dmin = d[d > 1e-9].min()
avg_nn = np.mean(np.sum(np.isclose(d, dmin), axis=1))
print(f"dmin = {dmin:.4f}, average nearest neighbors = {avg_nn:.2f}")
ex-sp-ch20-13
Hard: Implement a soft-decision demapper for 16-QAM that outputs log-likelihood ratios (LLRs) for each bit. Show that soft decisions improve BER when fed to a convolutional decoder.
LLR computation
def compute_llr(rx, constellation, bit_labels, N0):
    k = int(np.log2(len(constellation)))
    llrs = np.zeros((len(rx), k))
    for b in range(k):
        s0 = constellation[bit_labels[:, b] == 0]
        s1 = constellation[bit_labels[:, b] == 1]
        d0 = np.min(np.abs(rx[:, None] - s0[None, :])**2, axis=1)
        d1 = np.min(np.abs(rx[:, None] - s1[None, :])**2, axis=1)
        llrs[:, b] = (d1 - d0) / N0  # max-log approximation
    return llrs
ex-sp-ch20-14
Hard: Implement a complete BER simulation with importance sampling for BPSK at high $E_b/N_0$, where the error probability is far too small for standard Monte Carlo. Show that IS achieves the same accuracy as standard MC with 1000x fewer samples.
Importance sampling
# Shift the noise distribution toward the error boundary
# See variance_reduction.py from Chapter 9
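The referenced file is not reproduced here; a minimal mean-translation importance-sampling sketch (the SNR value is an illustrative assumption, chosen low enough that the estimate can still be checked against the closed-form BER):

```python
import numpy as np
from math import erfc, sqrt

Q = lambda x: 0.5*erfc(x/sqrt(2))
rng = np.random.default_rng(0)
Eb, EbN0_dB = 1.0, 10.0
EbN0 = 10**(EbN0_dB/10)
sigma = np.sqrt(Eb/(2*EbN0))          # BPSK noise std: sqrt(N0/2)
N = 100000

# biased density g: same variance, mean shifted to the decision boundary,
# so roughly half the samples land in the error region
mu = -np.sqrt(Eb)
n = rng.normal(mu, sigma, N)
w = np.exp((mu**2 - 2*mu*n)/(2*sigma**2))   # likelihood ratio f(n)/g(n)
ber_is = np.mean((n < -np.sqrt(Eb)) * w)    # weighted error indicator
ber_theory = Q(np.sqrt(2*EbN0))
print(f"IS estimate = {ber_is:.3e}, theory = {ber_theory:.3e}")
```

The same estimator applied at the exercise's target SNR needs no change; only the closed-form cross-check stops being numerically comparable by plain Monte Carlo.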
ex-sp-ch20-15
Challenge: Design an optimal 8-ary constellation for AWGN that minimizes the SER at a fixed SNR. Compare rectangular 8-QAM, 8-PSK, and your optimized constellation. Use gradient descent to optimize the symbol positions.
Approach
Use the union bound as the objective function. Initialize with
8-PSK, then use scipy.optimize.minimize to move symbols. The
optimal constellation should be close to a hexagonal packing.
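A minimal sketch of the optimization step, assuming an illustrative $E_s/N_0$ of 10 dB and scipy's BFGS solver in place of hand-rolled gradient descent (the union bound is smooth, so any gradient-based method applies):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import erfc

Q = lambda x: 0.5*erfc(x/np.sqrt(2))
EsN0 = 10**(10/10)                     # assumed Es/N0 = 10 dB
sigma = np.sqrt(1/(2*EsN0))            # per-dimension noise std with Es = 1

def union_bound_ser(x):
    s = x[:8] + 1j*x[8:]
    s = s / np.sqrt(np.mean(np.abs(s)**2))     # enforce unit average energy
    d = np.abs(s[:, None] - s[None, :])
    i, j = np.triu_indices(8, k=1)
    return 2*np.sum(Q(d[i, j]/(2*sigma)))/8    # averaged pairwise union bound

theta = 2*np.pi*np.arange(8)/8
x0 = np.concatenate([np.cos(theta), np.sin(theta)])    # start from 8-PSK
rng = np.random.default_rng(1)
res = minimize(union_bound_ser, x0 + 0.05*rng.standard_normal(16), method="BFGS")
print(f"8-PSK union bound: {union_bound_ser(x0):.3e}")
print(f"optimized bound:   {res.fun:.3e}")
```

The small random perturbation of the start breaks the symmetry of 8-PSK so the solver is free to drift toward the hexagonal-like packing the exercise anticipates.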
ex-sp-ch20-16
Challenge: Implement a probabilistic constellation shaping (PCS) scheme for 64-QAM where inner constellation points are transmitted more frequently. Show the shaping gain by comparing with uniform 64-QAM at the same average bit rate.
Use a Maxwell-Boltzmann distribution: $P(s_i) = \dfrac{e^{-\lambda |s_i|^2}}{\sum_j e^{-\lambda |s_j|^2}}$.
Approach
Maxwell-Boltzmann shaping approximates the capacity-achieving Gaussian input distribution. The shaping gain is up to 1.53 dB.
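A minimal sketch of the shaping step under stated assumptions (the 5 bits/symbol target rate is chosen for illustration): it finds $\lambda$ by bisection so the shaped entropy hits the target, then reports the average-energy saving relative to uniform signaling, which is where the shaping gain comes from.

```python
import numpy as np

pam8 = np.arange(8) - 3.5
Ic, Qc = np.meshgrid(pam8, pam8)
qam64 = (Ic + 1j*Qc).ravel()
E = np.abs(qam64)**2

def mb(lam):
    """Maxwell-Boltzmann distribution over the 64-QAM points."""
    p = np.exp(-lam*E); p /= p.sum()
    H = -np.sum(p*np.log2(p))            # entropy in bits/symbol
    return p, H, np.sum(p*E)             # distribution, rate, mean energy

# bisect lambda so the shaped entropy equals the target rate (5 bits/symbol);
# entropy decreases monotonically from 6 bits (lam = 0) as lam grows
lo, hi = 0.0, 1.0
for _ in range(60):
    lam = 0.5*(lo + hi)
    _, H, _ = mb(lam)
    lo, hi = (lam, hi) if H > 5.0 else (lo, lam)
p, H, Es_shaped = mb(lam)
Es_uniform = E.mean()
print(f"H = {H:.3f} bits, shaped Es = {Es_shaped:.2f}, uniform Es = {Es_uniform:.2f}")
```

A complete shaping-gain comparison at equal rate would also account for the uniform reference operating at the same 5 bits/symbol; this sketch only demonstrates the energy reduction that the Maxwell-Boltzmann weighting buys.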