Monte Carlo Simulation Methodology

Why Monte Carlo Simulation Is the Workhorse of Wireless Research

Most wireless systems are too complex for closed-form BER analysis: coded OFDM with channel estimation errors, MIMO detection with imperfect CSI, or NOMA with successive interference cancellation. Monte Carlo simulation is the only universal tool — generate random bits, pass them through the system model, count errors, repeat.

This section gives you the methodology to write Monte Carlo simulations that are correct, efficient, reproducible, and statistically rigorous.

Definition:

Monte Carlo Estimator

To estimate $\theta = E[g(X)]$ where $X \sim f$, the Monte Carlo estimator is:

$$\hat{\theta}_N = \frac{1}{N} \sum_{i=1}^{N} g(X_i), \quad X_i \sim f \text{ i.i.d.}$$

Properties:

  • Unbiased: $E[\hat{\theta}_N] = \theta$
  • Consistent: $\hat{\theta}_N \xrightarrow{a.s.} \theta$ (law of large numbers)
  • CLT: $\sqrt{N}(\hat{\theta}_N - \theta) \xrightarrow{d} \mathcal{N}(0, \sigma_g^2)$

For BER estimation: $g(X_i) = \mathbf{1}[\text{bit } i \text{ is in error}]$ and $\hat{P}_b = k/N$, where $k$ is the error count.
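As a quick illustration of the estimator (a minimal sketch, not from the text): estimate $\theta = E[X^2]$ for $X \sim \mathcal{N}(0,1)$, whose true value is 1, and watch the estimate tighten as $N$ grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_estimate(n):
    # theta_hat_N = (1/N) sum g(X_i) with g(x) = x^2, X ~ N(0, 1)
    x = rng.standard_normal(n)
    return np.mean(x ** 2)

est_small = mc_estimate(1_000)      # rough: std ~ sqrt(2/1e3) ~ 0.045
est_large = mc_estimate(1_000_000)  # tight: std ~ sqrt(2/1e6) ~ 0.0014
```

Both estimates are unbiased; only their spread differs, exactly as the CLT predicts.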

Definition:

Standard BER Simulation Loop

The canonical BER simulation has this structure:

import numpy as np

def simulate_ber(snr_db, n_bits, rng=None):
    snr_lin = 10 ** (snr_db / 10)
    if rng is None:
        rng = np.random.default_rng(42)

    # 1. Generate random bits
    bits = rng.integers(0, 2, size=n_bits)

    # 2. Modulate: BPSK -> {-1, +1}
    x = 2 * bits - 1

    # 3. Channel: AWGN
    noise_std = 1 / np.sqrt(2 * snr_lin)
    y = x + noise_std * rng.standard_normal(n_bits)

    # 4. Detect: hard decision
    bits_hat = (y > 0).astype(int)

    # 5. Count errors
    n_errors = np.sum(bits != bits_hat)
    ber = n_errors / n_bits
    return ber, n_errors

Key rules:

  • Process bits in large blocks (vectorized), not one at a time
  • Use rng objects, not np.random.seed()
  • Track error count separately for confidence intervals

For low-BER points, use early stopping or loop-based simulation that runs until a minimum number of errors is reached.

Definition:

Monte Carlo Stopping Rules

Two common stopping criteria for BER simulations:

Fixed-count stopping: Run until $k_{\min}$ errors are observed:

n_errors = 0
n_bits_total = 0
while n_errors < k_min:
    ber_block, errors_block = simulate_block(snr_db, block_size)
    n_errors += errors_block
    n_bits_total += block_size
ber = n_errors / n_bits_total

Precision-based stopping: Run until the relative CI width is $< \epsilon$:

while ci_width / ber_hat > epsilon:
    # ... run more blocks

Typical choices: $k_{\min} = 100$ errors, or $\epsilon = 10\%$ relative precision at 95% confidence.
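The precision-based stub above can be fleshed out into a runnable sketch, under some assumptions not fixed by the text: BPSK over AWGN as the system model, a normal-approximation CI, and a `min_errors` guard so the loop cannot stop on a fluke before the CI is meaningful.

```python
import numpy as np

def simulate_block(snr_db, n_bits, rng):
    # Minimal BPSK/AWGN block, mirroring the loop earlier in the section.
    snr_lin = 10 ** (snr_db / 10)
    bits = rng.integers(0, 2, size=n_bits)
    y = (2 * bits - 1) + rng.standard_normal(n_bits) / np.sqrt(2 * snr_lin)
    return int(np.sum(bits != (y > 0)))

def ber_until_precision(snr_db, epsilon=0.1, z=1.96, block_size=100_000,
                        min_errors=10, seed=0):
    # Run blocks until the normal-approximation CI half-width drops
    # below epsilon * ber_hat.
    rng = np.random.default_rng(seed)
    n_err = n_tot = 0
    while True:
        n_err += simulate_block(snr_db, block_size, rng)
        n_tot += block_size
        if n_err < min_errors:
            continue
        ber_hat = n_err / n_tot
        half_width = z * np.sqrt(ber_hat * (1 - ber_hat) / n_tot)
        if half_width < epsilon * ber_hat:
            return ber_hat, n_err, n_tot

ber, k, n = ber_until_precision(4.0)  # Eb/N0 = 4 dB, theory ~1.25e-2
```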

Definition:

$E_b/N_0$ vs SNR Conversion

The bit energy to noise ratio $E_b/N_0$ and the SNR per symbol are related by:

$$\frac{E_b}{N_0} = \frac{\mathrm{SNR}}{R \cdot \log_2 M}$$

where $R$ is the code rate and $M$ is the modulation order.

| Modulation | $\log_2 M$ | SNR vs $E_b/N_0$ when $R = 1$ |
| --- | --- | --- |
| BPSK | 1 | SNR $= E_b/N_0$ |
| QPSK | 2 | SNR $= 2\,E_b/N_0$ |
| 16-QAM | 4 | SNR $= 4\,E_b/N_0$ |

In decibels, the conversion is an additive offset:

def ebno_to_snr(ebno_db, bits_per_symbol, code_rate=1.0):
    return ebno_db + 10 * np.log10(bits_per_symbol * code_rate)
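A self-contained sanity check of the dB-domain conversion, restating the helper above plus an assumed inverse (`snr_to_ebno` is illustrative, not from the text):

```python
import numpy as np

def ebno_to_snr(ebno_db, bits_per_symbol, code_rate=1.0):
    # Restated from above so this snippet runs on its own.
    return ebno_db + 10 * np.log10(bits_per_symbol * code_rate)

def snr_to_ebno(snr_db, bits_per_symbol, code_rate=1.0):
    # Assumed inverse helper: subtract the same dB offset.
    return snr_db - 10 * np.log10(bits_per_symbol * code_rate)

# Uncoded QPSK: SNR sits 10*log10(2) ~ 3.01 dB above Eb/N0.
snr_qpsk = ebno_to_snr(10.0, 2)
ebno_back = snr_to_ebno(snr_qpsk, 2)
```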

Definition:

Parallelizing with multiprocessing

Monte Carlo is embarrassingly parallel: each trial is independent. Use multiprocessing.Pool or concurrent.futures.ProcessPoolExecutor to distribute SNR points across CPU cores:

import numpy as np
from concurrent.futures import ProcessPoolExecutor
from numpy.random import SeedSequence

def worker(args):
    snr_db, seed = args
    rng = np.random.default_rng(seed)
    return simulate_ber(snr_db, n_bits=1_000_000, rng=rng)

snr_range = np.arange(0, 12, 2)  # SNR grid in dB (example values)
ss = SeedSequence(42)
seeds = ss.spawn(len(snr_range))
with ProcessPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(worker, zip(snr_range, seeds)))

Use SeedSequence.spawn() to ensure each worker gets statistically independent random streams. Never share the same seed across workers.

Theorem: Monte Carlo Convergence Rate

For the Monte Carlo estimator $\hat{\theta}_N = \frac{1}{N}\sum g(X_i)$ with $\mathrm{Var}[g(X)] = \sigma^2 < \infty$:

$$\mathrm{MSE}[\hat{\theta}_N] = \frac{\sigma^2}{N}$$

The root-mean-square error decreases as $O(1/\sqrt{N})$, regardless of the dimension of $X$. This dimension-independence is the key advantage of Monte Carlo over deterministic quadrature.

Each sample contributes independently to the estimate. Doubling the samples halves the variance but only reduces the RMSE by $\sqrt{2} \approx 1.41$.
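The $O(1/\sqrt{N})$ rate is easy to check empirically. The sketch below (sample sizes and replication count are arbitrary choices) measures the spread of the estimator across independent replications; growing $N$ by 100x should shrink the spread by about $\sqrt{100} = 10$.

```python
import numpy as np

rng = np.random.default_rng(1)

def estimator_std(n, reps=1000):
    # Std of the MC estimate of E[X] for X ~ Exp(1) (true mean 1,
    # variance 1), measured across `reps` independent replications.
    samples = rng.exponential(1.0, size=(reps, n))
    return samples.mean(axis=1).std()

std_small = estimator_std(100)     # theory: 1/sqrt(100)    = 0.1
std_large = estimator_std(10_000)  # theory: 1/sqrt(10_000) = 0.01
ratio = std_small / std_large      # theory: 10
```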

Theorem: BPSK BER over AWGN

For BPSK modulation over an AWGN channel, the exact BER is:

$$P_b = Q\!\left(\sqrt{2 E_b/N_0}\right) = \frac{1}{2}\mathrm{erfc}\!\left(\sqrt{E_b/N_0}\right)$$

This serves as the fundamental benchmark for validating Monte Carlo BER simulations.

BPSK maps bits to $\pm\sqrt{E_b}$, and the hard decision compares the received sample against the boundary at the origin. An error occurs when noise pushes the sample across that boundary, which happens with probability $Q(\sqrt{2E_b/N_0})$.
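The benchmark curve needs no external dependencies: a sketch using the identity $Q(x) = \tfrac{1}{2}\mathrm{erfc}(x/\sqrt{2})$ and the standard library's `math.erfc`:

```python
import math

def bpsk_ber_theory(ebno_db):
    # P_b = Q(sqrt(2 Eb/N0)) = 0.5 * erfc(sqrt(Eb/N0))
    ebno_lin = 10 ** (ebno_db / 10)
    return 0.5 * math.erfc(math.sqrt(ebno_lin))

ber_0db = bpsk_ber_theory(0.0)    # ~7.86e-2
ber_10db = bpsk_ber_theory(10.0)  # ~3.87e-6
```

Any simulated BPSK/AWGN point should land on this curve to within its confidence interval; if it does not, the simulator is wrong.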

Theorem: Minimum Trials for Target BER Precision

To estimate $P_b$ with relative precision $\epsilon$ at confidence level $1-\alpha$:

$$N \ge \frac{z_{\alpha/2}^2 (1-P_b)}{P_b \cdot \epsilon^2} \approx \frac{z_{\alpha/2}^2}{P_b \cdot \epsilon^2}$$

For $P_b = 10^{-5}$, 95% confidence, 10% relative precision: $N \ge 1.96^2 / (10^{-5} \times 0.01) \approx 3.84 \times 10^7$.

Lower BER requires more trials because errors are rarer events. Every decade decrease in target BER requires a decade increase in simulation length.
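The bound translates directly into a planning helper (a sketch; the function name is illustrative):

```python
import math

def min_trials(ber_target, rel_precision=0.1, z=1.96):
    # N >= z^2 (1 - P) / (P * eps^2); at low BER the (1 - P)
    # factor is essentially 1.
    return math.ceil(z**2 * (1 - ber_target)
                     / (ber_target * rel_precision**2))

n_needed = min_trials(1e-5)  # ~3.84e7, matching the worked example
```

Each decade drop in `ber_target` multiplies the required trial count by ten, which is the rule of thumb stated above.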

Example: Complete BPSK BER Simulation with Validation

Write a complete Monte Carlo BER simulation for BPSK over AWGN, compare with theory, and compute confidence intervals.
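One possible solution sketch, using normal-approximation CIs; the seed, bit count, and SNR grid are arbitrary choices:

```python
import math
import numpy as np

def bpsk_awgn_trial(ebno_db, n_bits, rng):
    # One vectorized BPSK/AWGN pass; returns the error count.
    ebno_lin = 10 ** (ebno_db / 10)
    bits = rng.integers(0, 2, size=n_bits)
    y = (2 * bits - 1) + rng.standard_normal(n_bits) / math.sqrt(2 * ebno_lin)
    return int(np.sum(bits != (y > 0)))

rng = np.random.default_rng(7)
n_bits = 2_000_000
results = []
for ebno_db in [0, 2, 4, 6]:
    k = bpsk_awgn_trial(ebno_db, n_bits, rng)
    ber_hat = k / n_bits
    # Normal-approximation 95% CI half-width.
    hw = 1.96 * math.sqrt(ber_hat * (1 - ber_hat) / n_bits)
    theory = 0.5 * math.erfc(math.sqrt(10 ** (ebno_db / 10)))
    results.append((ebno_db, ber_hat, hw, theory))
```

Validation passes when the theoretical value falls inside each simulated point's confidence interval.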

Example: Adaptive Simulation with Error-Count Stopping

Implement a BER simulation that runs until at least 100 errors are observed at each SNR point.
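A possible solution sketch, assuming BPSK over AWGN and adding a `max_bits` safety cap (an assumption, not in the prompt) so very low-BER points still terminate:

```python
import numpy as np

def ber_min_errors(snr_db, k_min=100, block_size=100_000,
                   max_bits=10**8, seed=3):
    # Accumulate blocks until at least k_min errors are observed,
    # or until the max_bits cap is reached.
    rng = np.random.default_rng(seed)
    snr_lin = 10 ** (snr_db / 10)
    n_err = n_tot = 0
    while n_err < k_min and n_tot < max_bits:
        bits = rng.integers(0, 2, size=block_size)
        y = (2 * bits - 1) + rng.standard_normal(block_size) / np.sqrt(2 * snr_lin)
        n_err += int(np.sum(bits != (y > 0)))
        n_tot += block_size
    return n_err / n_tot, n_err, n_tot

ber, k, n = ber_min_errors(8.0)  # Eb/N0 = 8 dB, theory ~1.9e-4
```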

Example: Parallelizing BER Simulation Across SNR Points

Use ProcessPoolExecutor to run BER simulations for different SNR points in parallel.

Monte Carlo BER Simulator

Run a live BPSK BER simulation and compare with theory. Adjust the number of bits and watch the BER curve converge to the theoretical result.


Monte Carlo Simulation (Python)

Complete BER simulation with stopping rules, confidence intervals, and parallel execution: see ch09/python/monte_carlo.py.

Quick Check

You are simulating BER at $P_b = 10^{-4}$. Approximately how many bits must you transmit to observe 100 errors?

  • $10^4$
  • $10^6$
  • $10^8$
  • 100

Common Mistake: Python Loop for Each Bit

Mistake:

Processing bits one at a time in a Python for-loop:

for i in range(n_bits):
    bit = rng.integers(0, 2)
    x = 2 * bit - 1
    y = x + noise[i]
    if (y > 0) != bit:
        errors += 1

Correction:

Vectorize the entire operation using NumPy arrays:

bits = rng.integers(0, 2, size=n_bits)
x = 2 * bits - 1
y = x + noise
errors = np.sum(bits != (y > 0))

The vectorized version is 100-1000x faster.
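A rough way to verify the speedup claim on your own machine (timings vary by hardware; `n_bits` is an arbitrary choice):

```python
import time
import numpy as np

rng = np.random.default_rng(0)
n_bits = 200_000
bits = rng.integers(0, 2, size=n_bits)
noise = rng.standard_normal(n_bits)

# Per-bit Python loop (the mistake above).
t0 = time.perf_counter()
errors_loop = 0
for i in range(n_bits):
    y = (2 * bits[i] - 1) + noise[i]
    if int(y > 0) != bits[i]:
        errors_loop += 1
t_loop = time.perf_counter() - t0

# Vectorized equivalent, on the same bits and noise.
t0 = time.perf_counter()
y = (2 * bits - 1) + noise
errors_vec = int(np.sum(bits != (y > 0)))
t_vec = time.perf_counter() - t0

speedup = t_loop / t_vec
```

Both paths compute the identical error count, so this also doubles as a correctness check for the vectorized rewrite.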

Key Takeaway

The three golden rules of Monte Carlo BER simulation: (1) Use vectorized NumPy operations, not Python loops. (2) Run until at least 100 errors (or use precision-based stopping). (3) Always validate against a known theoretical result (e.g., BPSK AWGN) before simulating complex systems.

Historical Note: The Name 'Monte Carlo'


The Monte Carlo method was named by Stanislaw Ulam and John von Neumann during the Manhattan Project (1940s), after the Monte Carlo Casino in Monaco. Ulam was playing solitaire while recovering from illness and wondered about the probability of winning — he realized that random sampling was more practical than combinatorial analysis. Von Neumann programmed the first Monte Carlo simulations on ENIAC in 1947.

Monte Carlo Stopping Strategies

| Strategy | Criterion | Pros | Cons |
| --- | --- | --- | --- |
| Fixed bits | $N = N_{\max}$ | Simple, predictable runtime | May waste time at high SNR or undercount at low SNR |
| Fixed errors | $k \ge k_{\min}$ | Uniform relative precision | Unknown runtime; may never stop at very low BER |
| Precision-based | CI width $< \epsilon \hat{P}_b$ | Rigorous precision guarantee | Requires CI computation overhead |
| Time-limited | $t < t_{\max}$ | Bounded wall-clock time | No precision guarantee |

Why This Matters: Monte Carlo in Industry Standards

3GPP validation of 5G NR physical layer implementations requires Monte Carlo BER/BLER simulations at specific SNR points with defined confidence levels. The methodology in this section — vectorized simulation, error-count stopping, confidence intervals — is exactly what is used in standards-compliant link-level simulators.

See full treatment in Variance Reduction Techniques

Monte Carlo Method

A computational technique that uses random sampling to estimate mathematical quantities, particularly expectations and probabilities.

Bit Error Rate (BER)

The ratio of erroneously received bits to total transmitted bits, $P_b = k/N$.

Related: Monte Carlo Method

$E_b/N_0$

Bit energy to noise spectral density ratio, the standard SNR metric for comparing modulation schemes on a per-bit basis.