Monte Carlo Simulation Methodology
Why Monte Carlo Simulation Is the Workhorse of Wireless Research
Most wireless systems are too complex for closed-form BER analysis: coded OFDM with channel estimation errors, MIMO detection with imperfect CSI, or NOMA with successive interference cancellation. Monte Carlo simulation is the only universal tool — generate random bits, pass them through the system model, count errors, repeat.
This section gives you the methodology to write Monte Carlo simulations that are correct, efficient, reproducible, and statistically rigorous.
Definition: Monte Carlo Estimator
To estimate $\mu = \mathbb{E}[g(X)]$ where $X \sim p(x)$, the Monte Carlo estimator from $N$ i.i.d. samples $x_1, \dots, x_N$ is:

$$\hat{\mu}_N = \frac{1}{N} \sum_{n=1}^{N} g(x_n)$$

Properties:
- Unbiased: $\mathbb{E}[\hat{\mu}_N] = \mu$
- Consistent: $\hat{\mu}_N \to \mu$ almost surely as $N \to \infty$ (law of large numbers)
- CLT: $\sqrt{N}(\hat{\mu}_N - \mu) \xrightarrow{d} \mathcal{N}(0, \sigma^2)$ with $\sigma^2 = \mathrm{Var}[g(X)]$
For BER estimation: $g$ is the bit-error indicator and $\hat{P}_b = N_e / N$, where $N_e$ is the error count.
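As a quick illustration of these properties (a sanity check of our own, not from the text), the estimator can be applied to a tail probability with a known answer:

```python
import numpy as np
from scipy.special import erfc

# Estimate mu = P(Z > 2) for Z ~ N(0, 1) and compare with the exact
# value Q(2) = 0.5 * erfc(2 / sqrt(2)). Seed and N are arbitrary choices.
rng = np.random.default_rng(0)
N = 1_000_000
z = rng.standard_normal(N)
g = (z > 2.0).astype(float)          # indicator g(x) = 1{x > 2}
mu_hat = g.mean()                    # Monte Carlo estimate (unbiased)
se = g.std(ddof=1) / np.sqrt(N)      # CLT standard error, sigma/sqrt(N)
q2 = 0.5 * erfc(2.0 / np.sqrt(2.0))  # exact Q(2), about 0.02275
print(f"estimate {mu_hat:.5f} +/- {1.96 * se:.5f}, exact {q2:.5f}")
```

The printed 95% interval should bracket the exact value in roughly 19 runs out of 20, and shrinks as $1/\sqrt{N}$.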
Definition: Standard BER Simulation Loop
The canonical BER simulation has this structure:
```python
import numpy as np

def simulate_ber(snr_db, n_bits, modulation='bpsk'):
    snr_lin = 10 ** (snr_db / 10)
    rng = np.random.default_rng(42)
    # 1. Generate random bits
    bits = rng.integers(0, 2, size=n_bits)
    # 2. Modulate: BPSK -> {-1, +1}
    x = 2 * bits - 1
    # 3. Channel: AWGN
    noise_std = 1 / np.sqrt(2 * snr_lin)
    y = x + noise_std * rng.standard_normal(n_bits)
    # 4. Detect: hard decision
    bits_hat = (y > 0).astype(int)
    # 5. Count errors
    n_errors = np.sum(bits != bits_hat)
    ber = n_errors / n_bits
    return ber, n_errors
```
Key rules:
- Process bits in large blocks (vectorized), not one at a time
- Use `rng` objects created with `np.random.default_rng()`, not the legacy global `np.random.seed()`
- Track the error count separately for confidence intervals
For low-BER points, use early stopping or loop-based simulation that runs until a minimum number of errors is reached.
Definition: Monte Carlo Stopping Rules
Two common stopping criteria for BER simulations:
Fixed-count stopping: Run until at least $k_{\min}$ errors are observed:
```python
n_errors = 0
n_bits_total = 0
while n_errors < k_min:
    ber_block, errors_block = simulate_block(snr_db, block_size)
    n_errors += errors_block
    n_bits_total += block_size
ber = n_errors / n_bits_total
```
Precision-based stopping: Run until the relative confidence-interval width satisfies $w_{\mathrm{CI}} / \hat{P}_b \le \epsilon$:

```python
while ci_width / ber_hat > epsilon:
    # ... run more blocks
```
Typical choices: $k_{\min} = 100$ errors, or $\epsilon = 0.1$ (10% relative precision) at 95% confidence.
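The precision-based criterion can be fleshed out into a runnable sketch. The function name and defaults below are our own choices, assuming the same BPSK-over-AWGN model used throughout this section:

```python
import numpy as np

def simulate_until_precision(ebno_db, epsilon=0.1, z=1.96,
                             block_size=100_000, max_bits=10**9,
                             seed=0):
    """Run BPSK/AWGN blocks until the normal-approximation CI
    half-width drops below epsilon times the BER estimate."""
    rng = np.random.default_rng(seed)
    ebno = 10 ** (ebno_db / 10)
    n_err, n_bits = 0, 0
    while n_bits < max_bits:
        bits = rng.integers(0, 2, size=block_size)
        x = 2.0 * bits - 1.0
        y = x + rng.standard_normal(block_size) / np.sqrt(2 * ebno)
        n_err += np.sum(bits != (y > 0))
        n_bits += block_size
        ber = n_err / n_bits
        if n_err >= 10:  # need a few errors before the CI is meaningful
            half_width = z * np.sqrt(ber * (1 - ber) / n_bits)
            if half_width <= epsilon * ber:
                break
    return ber, n_err, n_bits
```

At moderate SNR a single block already meets the 10% target; near the waterfall region the loop keeps accumulating blocks until the CI tightens.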
Definition: Eb/N0 vs SNR Conversion
The bit energy to noise ratio $E_b/N_0$ and the SNR per symbol are related by:

$$\mathrm{SNR} = \frac{E_b}{N_0} \cdot R \log_2 M \quad\Longleftrightarrow\quad \mathrm{SNR}_{\mathrm{dB}} = \left(\tfrac{E_b}{N_0}\right)_{\mathrm{dB}} + 10 \log_{10}(R \log_2 M)$$

where $R$ is the code rate and $M$ is the modulation order.
| Modulation | $\log_2 M$ | SNR when $R = 1$ |
|---|---|---|
| BPSK | 1 | SNR = $E_b/N_0$ |
| QPSK | 2 | SNR = $E_b/N_0$ + 3.01 dB |
| 16-QAM | 4 | SNR = $E_b/N_0$ + 6.02 dB |
```python
def ebno_to_snr(ebno_db, bits_per_symbol, code_rate=1.0):
    return ebno_db + 10 * np.log10(bits_per_symbol * code_rate)
```
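As a quick check of the conversion (the helper is repeated here so the snippet stands alone), the QPSK row of the table comes out as expected:

```python
import numpy as np

def ebno_to_snr(ebno_db, bits_per_symbol, code_rate=1.0):
    return ebno_db + 10 * np.log10(bits_per_symbol * code_rate)

# QPSK (2 bits/symbol), uncoded: SNR = Eb/N0 + 10*log10(2) ~ +3.01 dB
print(ebno_to_snr(5.0, 2))        # ~8.01 dB
# 16-QAM with a rate-1/2 code: 4 * 0.5 = 2 info bits/symbol, same offset
print(ebno_to_snr(5.0, 4, 0.5))   # ~8.01 dB
```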
Definition: Parallelizing with multiprocessing
Monte Carlo is embarrassingly parallel: each trial is independent.
Use multiprocessing.Pool or concurrent.futures.ProcessPoolExecutor
to distribute SNR points across CPU cores:
```python
from concurrent.futures import ProcessPoolExecutor
from numpy.random import SeedSequence

def worker(args):
    snr_db, seed = args
    rng = np.random.default_rng(seed)
    # assumes a simulate_ber variant that accepts an rng argument
    return simulate_ber(snr_db, n_bits=1_000_000, rng=rng)

ss = SeedSequence(42)
seeds = ss.spawn(len(snr_range))
with ProcessPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(worker, zip(snr_range, seeds)))
```
Use SeedSequence.spawn() to ensure each worker gets statistically
independent random streams. Never share the same seed across workers.
Theorem: Monte Carlo Convergence Rate
For the Monte Carlo estimator with $\sigma^2 = \mathrm{Var}[g(X)] < \infty$:

$$\mathrm{RMSE}(\hat{\mu}_N) = \sqrt{\mathbb{E}\big[(\hat{\mu}_N - \mu)^2\big]} = \frac{\sigma}{\sqrt{N}}$$

The root-mean-square error decreases as $O(1/\sqrt{N})$, regardless of the dimension of $X$. This dimension-independence is the key advantage of Monte Carlo over deterministic quadrature.
Each sample contributes independently to the estimate. Doubling the samples halves the variance but only reduces the RMSE by a factor of $\sqrt{2}$.
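The $O(1/\sqrt{N})$ rate is easy to verify empirically. This small experiment is our own illustration: it estimates $\mu = \mathbb{E}[Z^2] = 1$ for $Z \sim \mathcal{N}(0,1)$ at two sample sizes and compares the RMSE over many repetitions:

```python
import numpy as np

rng = np.random.default_rng(0)

def rmse(n_samples, n_reps=2000):
    # One Monte Carlo estimate of E[Z^2] per row; RMSE across rows.
    z = rng.standard_normal((n_reps, n_samples))
    estimates = (z ** 2).mean(axis=1)
    return np.sqrt(np.mean((estimates - 1.0) ** 2))

r1, r4 = rmse(1_000), rmse(4_000)
print(f"RMSE(N=1000)={r1:.4f}, RMSE(N=4000)={r4:.4f}, ratio={r1/r4:.2f}")
# 4x the samples should roughly halve the RMSE (ratio close to 2)
```

Here $\sigma^2 = \mathrm{Var}[Z^2] = 2$, so theory predicts $\mathrm{RMSE}(1000) \approx 0.0447$ and a ratio of exactly 2.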
Theorem: BPSK BER over AWGN
For BPSK modulation over an AWGN channel, the exact BER is:

$$P_b = Q\!\left(\sqrt{\frac{2E_b}{N_0}}\right) = \frac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{\frac{E_b}{N_0}}\right)$$
This serves as the fundamental benchmark for validating Monte Carlo BER simulations.
BPSK maps bits to $\pm\sqrt{E_b}$. An error occurs when the noise pushes the received sample across the decision boundary at the origin, which happens with probability $Q(\sqrt{2E_b/N_0})$.
Theorem: Minimum Trials for Target BER Precision
To estimate $P_b$ with relative precision $\epsilon$ at confidence level $1 - \alpha$:

$$N \ge \frac{z_{1-\alpha/2}^2\,(1 - P_b)}{\epsilon^2 P_b} \approx \frac{z_{1-\alpha/2}^2}{\epsilon^2 P_b}$$

For 95% confidence ($z \approx 1.96$) and 10% relative precision, $N \approx 384 / P_b$; at $P_b = 10^{-5}$ this gives $N \approx 3.8 \times 10^7$.
Lower BER requires more trials because errors are rarer events. Every decade decrease in target BER requires a decade increase in simulation length.
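The bound above is easy to turn into a planning helper (the function name is ours, not from the text):

```python
import numpy as np

def min_trials(p_target, epsilon=0.1, z=1.96):
    """Bits needed so the CI half-width z*sqrt(p(1-p)/N) is at
    most epsilon * p, per the bound above."""
    return int(np.ceil(z**2 * (1 - p_target) / (epsilon**2 * p_target)))

for p in (1e-3, 1e-4, 1e-5):
    print(f"P_b={p:.0e}: N >= {min_trials(p):.3e}")
# Each decade drop in target BER costs a decade more simulated bits.
```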
Example: Complete BPSK BER Simulation with Validation
Write a complete Monte Carlo BER simulation for BPSK over AWGN, compare with theory, and compute confidence intervals.
Simulation code
```python
import numpy as np
from scipy.special import erfc

def bpsk_ber_theory(ebno_db):
    ebno = 10 ** (ebno_db / 10)
    return 0.5 * erfc(np.sqrt(ebno))

def simulate_bpsk(ebno_db_range, n_bits=1_000_000):
    rng = np.random.default_rng(42)
    results = []
    for ebno_db in ebno_db_range:
        ebno = 10 ** (ebno_db / 10)
        bits = rng.integers(0, 2, size=n_bits)
        x = 2.0 * bits - 1.0
        noise = rng.standard_normal(n_bits) / np.sqrt(2 * ebno)
        y = x + noise
        errors = np.sum(bits != (y > 0).astype(int))
        results.append((ebno_db, errors / n_bits, errors))
    return results

ebno_range = np.arange(0, 11, 1.0)
results = simulate_bpsk(ebno_range)
for ebno, ber, nerr in results:
    theory = bpsk_ber_theory(ebno)
    print(f"Eb/N0={ebno:5.1f} dB: BER={ber:.4e}, "
          f"Theory={theory:.4e}, Errors={nerr}")
```
Example: Adaptive Simulation with Error-Count Stopping
Implement a BER simulation that runs until at least 100 errors are observed at each SNR point.
Implementation
```python
import numpy as np

def simulate_until_errors(ebno_db, min_errors=100,
                          block_size=100_000, max_bits=1e9):
    rng = np.random.default_rng(42)
    ebno = 10 ** (ebno_db / 10)
    total_errors = 0
    total_bits = 0
    while total_errors < min_errors and total_bits < max_bits:
        bits = rng.integers(0, 2, size=block_size)
        x = 2.0 * bits - 1.0
        noise = rng.standard_normal(block_size) / np.sqrt(2 * ebno)
        y = x + noise
        total_errors += np.sum(bits != (y > 0).astype(int))
        total_bits += block_size
    ber = total_errors / total_bits
    # 95% CI half-width (normal approximation)
    ci = 1.96 * np.sqrt(ber * (1 - ber) / total_bits)
    return ber, total_errors, total_bits, ci
```
Example: Parallelizing BER Simulation Across SNR Points
Use ProcessPoolExecutor to run BER simulations for different
SNR points in parallel.
Parallel implementation
```python
from concurrent.futures import ProcessPoolExecutor
from numpy.random import SeedSequence
import numpy as np

def worker(args):
    ebno_db, seed = args
    rng = np.random.default_rng(seed)
    # ... simulation code using rng ...
    return ebno_db, ber, n_errors

ss = SeedSequence(42)
ebno_range = np.arange(0, 13, 1.0)
seeds = ss.spawn(len(ebno_range))
with ProcessPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(worker, zip(ebno_range, seeds)))
```
Monte Carlo BER Simulator
Run a live BPSK BER simulation and compare with theory. Adjust the number of bits and watch the BER curve converge to the theoretical result.
Quick Check
You are simulating BER at a target $P_b$. Approximately how many bits must you transmit to observe 100 errors?

On average, $N \approx 100 / P_b$ bits are needed to expect 100 errors (e.g., $10^8$ bits at $P_b = 10^{-6}$).
Common Mistake: Python Loop for Each Bit
Mistake:
Processing bits one at a time in a Python for-loop:
```python
for i in range(n_bits):
    bit = rng.integers(0, 2)
    x = 2 * bit - 1
    y = x + noise[i]
    if (y > 0) != bit:
        errors += 1
```
Correction:
Vectorize the entire operation using NumPy arrays:
```python
bits = rng.integers(0, 2, size=n_bits)
x = 2 * bits - 1
y = x + noise
errors = np.sum(bits != (y > 0))
```
The vectorized version is 100-1000x faster.
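A rough way to check the speedup claim on your own machine (this benchmark is our own; absolute timings vary widely):

```python
import time
import numpy as np

n_bits = 200_000
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=n_bits)
noise = 0.5 * rng.standard_normal(n_bits)

t0 = time.perf_counter()
errors_loop = 0
for i in range(n_bits):                 # per-bit Python loop
    yi = (2 * bits[i] - 1) + noise[i]
    if (yi > 0) != bits[i]:
        errors_loop += 1
t_loop = time.perf_counter() - t0

t0 = time.perf_counter()
y = 2 * bits - 1 + noise                # one vectorized pass
errors_vec = int(np.sum(bits != (y > 0)))
t_vec = time.perf_counter() - t0

print(f"loop {t_loop:.3f}s, vectorized {t_vec:.4f}s, "
      f"speedup ~{t_loop / t_vec:.0f}x")
```

Both paths count the same errors; only the runtime differs.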
Key Takeaway
The three golden rules of Monte Carlo BER simulation: (1) Use vectorized NumPy operations, not Python loops. (2) Run until at least 100 errors (or use precision-based stopping). (3) Always validate against a known theoretical result (e.g., BPSK AWGN) before simulating complex systems.
Historical Note: The Name 'Monte Carlo'
The Monte Carlo method was named by Stanislaw Ulam and John von Neumann during the Manhattan Project in the 1940s, after the Monte Carlo Casino in Monaco. Ulam was playing solitaire while recovering from illness and wondered about the probability of winning — he realized that random sampling was more practical than combinatorial analysis. Von Neumann programmed the first Monte Carlo simulations on ENIAC in 1947.
Monte Carlo Stopping Strategies
| Strategy | Criterion | Pros | Cons |
|---|---|---|---|
| Fixed bits | Stop after $N$ bits | Simple, predictable runtime | May waste time at high SNR or undercount at low SNR |
| Fixed errors | Stop after $k_{\min}$ errors | Uniform relative precision | Unknown runtime; may never stop at very low BER |
| Precision-based | CI width $\le \epsilon \hat{P}_b$ | Rigorous precision guarantee | Requires CI computation overhead |
| Time-limited | Stop at a wall-clock budget | Bounded wall-clock time | No precision guarantee |
Why This Matters: Monte Carlo in Industry Standards
3GPP validation of 5G NR physical layer implementations requires Monte Carlo BER/BLER simulations at specific SNR points with defined confidence levels. The methodology in this section — vectorized simulation, error-count stopping, confidence intervals — is exactly what is used in standards-compliant link-level simulators.
See full treatment in Variance Reduction Techniques
Monte Carlo Method
A computational technique that uses random sampling to estimate mathematical quantities, particularly expectations and probabilities.
Bit Error Rate (BER)
The ratio of erroneously received bits to total transmitted bits, $P_b = N_e / N$.
Related: Monte Carlo Method
Eb/N0
Bit energy to noise spectral density ratio, the standard SNR metric for comparing modulation schemes on a per-bit basis.