Classes and Inheritance
When to Use Classes vs. Functions in Scientific Code
Not every piece of scientific code needs a class. Use a function when you
have a stateless transformation (e.g., normalize(x)). Use a class when you
need to bundle state with behavior: for instance, a solver that maintains
internal buffers across iterations, or a configuration object that holds dozens
of parameters.
The rule of thumb: if you find yourself passing the same five arguments to every function in a module, those arguments want to be an object.
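As a quick sketch of that refactor (function and field names here are hypothetical, chosen only to illustrate the pattern):

```python
from dataclasses import dataclass

# Before: the same parameters threaded through every helper (hypothetical names).
def build_matrix(n_measurements, n_features, random_seed): ...
def make_signal(n_features, sparsity, random_seed): ...

# After: one object carries the shared state.
@dataclass
class ExperimentParams:
    n_measurements: int = 100
    n_features: int = 500
    sparsity: int = 10
    random_seed: int = 42

def run(params: ExperimentParams) -> None:
    ...  # each helper now takes `params` instead of loose arguments
```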
Definition: Class
Class
A class is a blueprint for creating objects that bundle state (attributes)
and behavior (methods). In Python, every class implicitly inherits from object:
import numpy as np

class Signal:
    """A discrete-time signal with metadata."""

    def __init__(self, data: np.ndarray, sample_rate: float = 1.0):
        self.data = data
        self.sample_rate = sample_rate

    def duration(self) -> float:
        return len(self.data) / self.sample_rate

    def __repr__(self) -> str:
        return f"Signal(n={len(self.data)}, fs={self.sample_rate})"
The __init__ method initializes instance attributes; self refers to the
instance being created.
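Assuming the Signal class above, a minimal usage example:

```python
import numpy as np

class Signal:
    """A discrete-time signal with metadata."""
    def __init__(self, data: np.ndarray, sample_rate: float = 1.0):
        self.data = data
        self.sample_rate = sample_rate

    def duration(self) -> float:
        return len(self.data) / self.sample_rate

    def __repr__(self) -> str:
        return f"Signal(n={len(self.data)}, fs={self.sample_rate})"

# One second of silence sampled at 44.1 kHz
s = Signal(np.zeros(44_100), sample_rate=44_100.0)
print(s.duration())  # 1.0
print(repr(s))       # Signal(n=44100, fs=44100.0)
```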
Definition: Dataclass
Dataclass
A dataclass (from dataclasses) auto-generates __init__, __repr__,
and __eq__ from annotated class attributes. This eliminates boilerplate
for classes that are primarily data containers:
from dataclasses import dataclass, field

@dataclass
class SimulationConfig:
    """All parameters for a compressed sensing simulation."""
    n_measurements: int = 100
    n_features: int = 500
    sparsity: int = 10
    snr_db: float = 20.0
    algorithm: str = "lasso"
    max_iterations: int = 1000
    tolerance: float = 1e-6
    random_seed: int = 42
    tags: list[str] = field(default_factory=list)
The field(default_factory=list) avoids the mutable default argument trap.
Dataclasses support frozen=True for immutable configs and slots=True
(Python 3.10+) for memory-efficient storage.
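A small sketch of the frozen behavior (Point is a hypothetical example class, not part of the simulation code):

```python
from dataclasses import dataclass, FrozenInstanceError

# Point is a hypothetical example; slots=True (Python 3.10+) could be added
# for memory-efficient instances.
@dataclass(frozen=True)
class Point:
    x: float = 0.0
    y: float = 0.0

p = Point(1.0, 2.0)
try:
    p.x = 5.0  # frozen: any attribute assignment raises
except FrozenInstanceError:
    print("immutable")

cache = {p: "expensive result"}  # frozen dataclasses are hashable...
print(cache[Point(1.0, 2.0)])    # ...and equal fields hash equally
```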
Definition: Inheritance
Inheritance
Inheritance lets a child class reuse and extend a parent class. The child inherits all attributes and methods, and can override or extend them:
class Solver:
    """Base class for iterative solvers."""

    def __init__(self, config: SimulationConfig):
        self.config = config
        self.history: list[float] = []

    def solve(self, A: np.ndarray, y: np.ndarray) -> np.ndarray:
        raise NotImplementedError("Subclasses must implement solve()")

    def log_iteration(self, cost: float) -> None:
        self.history.append(cost)

class LASSOSolver(Solver):
    """LASSO via ISTA (Iterative Shrinkage-Thresholding)."""

    def solve(self, A: np.ndarray, y: np.ndarray) -> np.ndarray:
        x = np.zeros(A.shape[1])
        lam = 1.0 / self.config.snr_db
        for i in range(self.config.max_iterations):
            gradient = A.T @ (A @ x - y)
            x = self._soft_threshold(x - 0.01 * gradient, lam * 0.01)
            cost = 0.5 * np.linalg.norm(A @ x - y)**2 + lam * np.linalg.norm(x, 1)
            self.log_iteration(cost)
        return x

    @staticmethod
    def _soft_threshold(x: np.ndarray, threshold: float) -> np.ndarray:
        return np.sign(x) * np.maximum(np.abs(x) - threshold, 0)
LASSOSolver inherits __init__, log_iteration, and history from
Solver, and provides its own solve implementation.
Definition: Method Resolution Order (MRO)
Method Resolution Order (MRO)
The Method Resolution Order is the sequence in which Python searches classes when looking up a method. Python uses the C3 linearization algorithm to compute a consistent ordering for multiple inheritance:
class A:
    def method(self): return "A"

class B(A):
    def method(self): return "B"

class C(A):
    def method(self): return "C"

class D(B, C):
    pass

print(D.__mro__)
# (D, B, C, A, object)
print(D().method())  # "B": B comes before C in the MRO
Use super() to delegate to the next class in the MRO, not just the
parent. This ensures cooperative multiple inheritance works correctly.
Definition: The super() Function
The super() Function
super() returns a proxy object that delegates method calls to the next
class in the MRO. It is essential for cooperative multiple inheritance:
class Solver:
    def __init__(self, config):
        self.config = config

class LoggingMixin:
    def __init__(self, *args, verbose=False, **kwargs):
        super().__init__(*args, **kwargs)
        self.verbose = verbose

class VerboseLASSOSolver(LoggingMixin, LASSOSolver):
    pass  # Gets both LoggingMixin.__init__ and LASSOSolver.solve
Always use super() instead of hardcoding the parent class name. This
makes refactoring and mixin composition safe.
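A minimal, self-contained sketch of the cooperative pattern above (stand-alone versions of the classes, so the snippet runs by itself):

```python
class Solver:
    def __init__(self, config):
        self.config = config

class LoggingMixin:
    def __init__(self, *args, verbose=False, **kwargs):
        super().__init__(*args, **kwargs)  # delegate to the next class in the MRO
        self.verbose = verbose

class VerboseSolver(LoggingMixin, Solver):
    pass

s = VerboseSolver({"tolerance": 1e-6}, verbose=True)
print(s.verbose, s.config)  # True {'tolerance': 1e-06}
print([c.__name__ for c in VerboseSolver.__mro__])
# ['VerboseSolver', 'LoggingMixin', 'Solver', 'object']
```

Because LoggingMixin calls super().__init__ rather than a hardcoded parent, the positional arguments flow through the MRO to Solver, and the verbose keyword is consumed by the mixin.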
Historical Note: Old-Style vs New-Style Classes
Python 2 to 3 transition (2008-2020)
In Python 2, classes that did not explicitly inherit from object were
"old-style classes" with a different (broken) MRO. Python 3 eliminated
this distinction: all classes are new-style and inherit from object
implicitly. This is why you never need to write class Foo(object): in
Python 3; plain class Foo: is equivalent.
Example: The SimulationConfig Pattern
Design a dataclass SimulationConfig that holds all parameters for a
compressed sensing experiment. Show how to use it to avoid passing
many arguments to functions.
Define the config dataclass
from dataclasses import dataclass, field, asdict
import json

@dataclass(frozen=True)
class SimulationConfig:
    n_measurements: int = 100
    n_features: int = 500
    sparsity: int = 10
    snr_db: float = 20.0
    algorithm: str = "lasso"
    max_iterations: int = 1000
    tolerance: float = 1e-6
    random_seed: int = 42
    tags: tuple[str, ...] = ()  # tuple for frozen compatibility
Using frozen=True makes the config immutable and hashable, so you can use
it as a dictionary key for caching results.
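A sketch of exactly that caching use, here via functools.lru_cache (the trimmed-down Config fields and the stand-in computation are assumptions for the example):

```python
from dataclasses import dataclass
from functools import lru_cache

# Trimmed-down config; fields are an assumption for this sketch.
@dataclass(frozen=True)
class Config:
    snr_db: float = 20.0
    sparsity: int = 10

@lru_cache(maxsize=None)
def expensive_experiment(config: Config) -> float:
    # Stand-in for a long simulation; hashability is what makes caching work.
    return config.snr_db * config.sparsity

expensive_experiment(Config(15.0, 5))  # computed
expensive_experiment(Config(15.0, 5))  # cache hit: equal configs hash equally
print(expensive_experiment.cache_info().hits)  # 1
```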
Use the config to parametrize experiments
# Registry mapping algorithm names to solver classes
SOLVERS = {"lasso": LASSOSolver}

def run_experiment(config: SimulationConfig) -> dict:
    rng = np.random.default_rng(config.random_seed)
    A = rng.standard_normal((config.n_measurements, config.n_features))
    x_true = np.zeros(config.n_features)
    support = rng.choice(config.n_features, config.sparsity, replace=False)
    x_true[support] = rng.standard_normal(config.sparsity)
    noise_std = np.linalg.norm(A @ x_true) * 10 ** (-config.snr_db / 20)
    y = A @ x_true + noise_std * rng.standard_normal(config.n_measurements)
    solver = SOLVERS[config.algorithm](config)
    x_hat = solver.solve(A, y)
    return {
        "nmse_db": 10 * np.log10(np.linalg.norm(x_hat - x_true)**2 /
                                 np.linalg.norm(x_true)**2),
        "config": asdict(config),
    }

# Sweep over parameters
configs = [SimulationConfig(snr_db=snr) for snr in range(0, 30, 5)]
results = [run_experiment(c) for c in configs]
Serialize configs for reproducibility
# Save config to JSON
config = SimulationConfig(snr_db=15.0, algorithm="oamp")
with open("experiment_config.json", "w") as f:
    json.dump(asdict(config), f, indent=2)

# Load config from JSON
with open("experiment_config.json") as f:
    loaded = SimulationConfig(**json.load(f))
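One caveat: JSON has no tuple type, so a tags field stored as a tuple comes back as a list, and the round-tripped config no longer compares equal. A minimal sketch of the coercion, using a trimmed-down config:

```python
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class Config:
    snr_db: float = 20.0
    tags: tuple[str, ...] = ()

original = Config(snr_db=15.0, tags=("baseline", "v2"))
blob = json.dumps(asdict(original))

# JSON arrays deserialize as lists; coerce tags back to a tuple.
raw = json.loads(blob)
raw["tags"] = tuple(raw["tags"])
restored = Config(**raw)
print(restored == original)  # True
```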
Example: Building a Solver Hierarchy
Implement a Solver base class and two subclasses (LASSOSolver and
OAMPSolver) that share common iteration tracking but differ in their
core algorithm.
Base class with shared infrastructure
import time
import numpy as np

class Solver:
    name: str = "base"

    def __init__(self, config: SimulationConfig):
        self.config = config
        self.history: list[float] = []
        self._elapsed: float = 0.0

    def solve(self, A: np.ndarray, y: np.ndarray) -> np.ndarray:
        start = time.perf_counter()
        result = self._iterate(A, y)
        self._elapsed = time.perf_counter() - start
        return result

    def _iterate(self, A, y):
        raise NotImplementedError

    def log_iteration(self, cost: float) -> None:
        self.history.append(cost)

    def convergence_report(self) -> str:
        return (f"{self.name}: {len(self.history)} iters, "
                f"final cost={self.history[-1]:.6f}, "
                f"time={self._elapsed:.3f}s")
LASSO subclass (ISTA algorithm)
class LASSOSolver(Solver):
    name = "LASSO-ISTA"

    def _iterate(self, A, y):
        n = A.shape[1]
        x = np.zeros(n)
        step_size = 1.0 / np.linalg.norm(A, ord=2)**2
        lam = 0.1
        for _ in range(self.config.max_iterations):
            grad = A.T @ (A @ x - y)
            x = np.sign(x - step_size * grad) * np.maximum(
                np.abs(x - step_size * grad) - lam * step_size, 0
            )
            cost = 0.5 * np.linalg.norm(A @ x - y)**2 + lam * np.linalg.norm(x, 1)
            self.log_iteration(cost)
            if len(self.history) > 1 and abs(self.history[-1] - self.history[-2]) < self.config.tolerance:
                break
        return x
OAMP subclass
class OAMPSolver(Solver):
    name = "OAMP"

    def _iterate(self, A, y):
        m, n = A.shape
        x = np.zeros(n)
        W = np.linalg.pinv(A)  # LMMSE-style linear estimator
        for _ in range(self.config.max_iterations):
            # Linear estimation step
            r = x + W @ (y - A @ x)
            # Denoising step (soft thresholding)
            tau = np.std(r - x) if np.any(x) else 1.0
            x_new = np.sign(r) * np.maximum(np.abs(r) - tau, 0)
            cost = np.linalg.norm(A @ x_new - y)**2
            self.log_iteration(cost)
            if np.linalg.norm(x_new - x) < self.config.tolerance:
                break
            x = x_new
        return x
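A condensed, self-contained smoke test of the hierarchy (minimal restatements of the base class and the LASSO subclass so the snippet runs on its own; the test problem is an assumption):

```python
import numpy as np
from types import SimpleNamespace

class Solver:
    name = "base"
    def __init__(self, config):
        self.config = config
        self.history: list[float] = []
    def solve(self, A, y):
        return self._iterate(A, y)
    def _iterate(self, A, y):
        raise NotImplementedError
    def log_iteration(self, cost):
        self.history.append(cost)

class LASSOSolver(Solver):
    name = "LASSO-ISTA"
    def _iterate(self, A, y):
        x = np.zeros(A.shape[1])
        step = 1.0 / np.linalg.norm(A, ord=2) ** 2
        lam = 0.1
        for _ in range(self.config.max_iterations):
            z = x - step * (A.T @ (A @ x - y))
            x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0)
            self.log_iteration(0.5 * np.linalg.norm(A @ x - y) ** 2
                               + lam * np.abs(x).sum())
            if (len(self.history) > 1
                    and abs(self.history[-1] - self.history[-2]) < self.config.tolerance):
                break
        return x

# Tiny noiseless test problem
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 80))
x_true = np.zeros(80)
x_true[:4] = 1.0
y = A @ x_true

config = SimpleNamespace(max_iterations=500, tolerance=1e-8)
solver = LASSOSolver(config)
x_hat = solver.solve(A, y)
print(f"{solver.name}: {len(solver.history)} iterations, "
      f"final cost {solver.history[-1]:.4f}")
```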
Theorem: Liskov Substitution Principle (LSP)
If S is a subclass of T, then objects of type T may be replaced with
objects of type S without altering any desirable property of the program
(correctness, task performed, etc.).
Formally: if φ(x) is a property provable about objects x of type T, then φ(y) should be true for objects y of type S, where S is a subtype of T.
In our solver hierarchy, any code that works with a Solver reference should
work identically with a LASSOSolver or OAMPSolver. This means subclasses
must accept the same inputs, return compatible outputs, and not strengthen
preconditions or weaken postconditions.
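A hypothetical violation makes this concrete: a subclass that strengthens the precondition breaks callers written against the base-class contract (SquareOnlySolver is invented for illustration):

```python
import numpy as np

class Solver:
    """Base contract: accepts any (m, n) system, returns a length-n estimate."""
    def solve(self, A: np.ndarray, y: np.ndarray) -> np.ndarray:
        return np.zeros(A.shape[1])

# Hypothetical LSP violation: the subclass rejects inputs the base accepts.
class SquareOnlySolver(Solver):
    def solve(self, A, y):
        if A.shape[0] != A.shape[1]:
            raise ValueError("only square systems supported")
        return np.linalg.solve(A, y)

def evaluate(solver: Solver, A, y):
    return solver.solve(A, y)  # written against the base-class contract

A = np.zeros((40, 80))
y = np.zeros(40)
evaluate(Solver(), A, y)              # fine
# evaluate(SquareOnlySolver(), A, y)  # raises ValueError: substitution fails
```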
Common Mistake: Mutable Default Arguments in __init__
Mistake:
@dataclass
class Config:
    tags: list[str] = []  # BUG: dataclasses raise ValueError here, precisely
                          # because a bare list default would be shared by all instances
Correction:
@dataclass
class Config:
    tags: list[str] = field(default_factory=list)  # Each instance gets its own list
For non-dataclass classes, use None as default and create the mutable
object inside __init__:
def __init__(self, tags=None):
    self.tags = tags if tags is not None else []
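The same trap exists for plain function defaults, since Python evaluates the default expression once, at definition time:

```python
def append_tag(tag, tags=[]):  # BUG: the default list is created once, at def time
    tags.append(tag)
    return tags

print(append_tag("a"))  # ['a']
print(append_tag("b"))  # ['a', 'b'] -- state leaks between calls

def append_tag_fixed(tag, tags=None):
    tags = [] if tags is None else tags
    tags.append(tag)
    return tags

print(append_tag_fixed("a"))  # ['a']
print(append_tag_fixed("b"))  # ['b']
```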
Common Mistake: Forgetting super().__init__() in Subclasses
Mistake:
class LASSOSolver(Solver):
    def __init__(self, config, regularization=0.1):
        self.regularization = regularization
        # Forgot super().__init__(config)!
        # self.config, self.history are missing
Correction:
class LASSOSolver(Solver):
    def __init__(self, config, regularization=0.1):
        super().__init__(config)  # Initialize parent state
        self.regularization = regularization
Quick Check
What does @dataclass(frozen=True) provide that a regular dataclass does not?
It makes the class abstract
It makes instances immutable and hashable
It prevents subclassing
It adds slots automatically
Frozen dataclasses raise FrozenInstanceError on attribute assignment and auto-generate __hash__, making them usable as dict keys and set members.
Quick Check
Given class D(B, C) where both B and C inherit from A, what is the MRO of D?
D, B, A, C, A, object
D, B, C, A, object
D, A, B, C, object
D, C, B, A, object
C3 linearization produces D -> B -> C -> A -> object, respecting local precedence order (B before C) and monotonicity.
Class
A blueprint for creating objects that bundles state (attributes) and behavior (methods). Defined with the class keyword.
Related: Instance, Inheritance
Instance
A concrete object created from a class via ClassName(). Each instance has its own attribute namespace (__dict__) unless __slots__ is used.
Related: Class
Inheritance
A mechanism where a child class automatically acquires the attributes and methods of a parent class, enabling code reuse and specialization.
Related: Method Resolution Order (MRO), Composition
Method Resolution Order (MRO)
The order in which Python searches classes for a method or attribute, computed using the C3 linearization algorithm. Viewable via ClassName.__mro__.
Related: Inheritance
Dataclass
A class decorated with @dataclass that auto-generates __init__, __repr__, and __eq__ from annotated fields. Supports frozen, slots, and order options.
Related: Class
# Code from: ch03/python/classes_and_inheritance.py
Why This Matters: Solver Hierarchies in Wireless Communications
The Solver base class pattern maps directly to signal processing in
wireless communications. In MIMO detection, you might have a Detector
base class with subclasses like MMSEDetector, ZFDetector, and
MLDetector. Each shares the same interface (detect(y, H) -> x_hat)
but uses a different algorithm internally.
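An illustrative sketch of such a detector hierarchy (class names and formulas here are standard textbook forms, not the book's Chapter 14 code; the ML detector is omitted for brevity):

```python
import numpy as np

class Detector:
    """Shared interface: detect(y, H) -> x_hat."""
    def detect(self, y: np.ndarray, H: np.ndarray) -> np.ndarray:
        raise NotImplementedError

class ZFDetector(Detector):
    """Zero-forcing: invert the channel via the pseudo-inverse."""
    def detect(self, y, H):
        return np.linalg.pinv(H) @ y

class MMSEDetector(Detector):
    """Linear MMSE: regularize the inversion by the noise variance."""
    def __init__(self, noise_var: float):
        self.noise_var = noise_var
    def detect(self, y, H):
        G = np.linalg.inv(H.conj().T @ H
                          + self.noise_var * np.eye(H.shape[1])) @ H.conj().T
        return G @ y

# Same interface, different algorithm internally
rng = np.random.default_rng(0)
H = rng.standard_normal((8, 4))
x = rng.standard_normal(4)
y = H @ x  # noiseless channel for the sketch
for det in (ZFDetector(), MMSEDetector(noise_var=0.01)):
    x_hat = det.detect(y, H)
```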
In Chapter 14 (Compressed Sensing), we will build exactly this pattern
with AMP, OAMP, and VAMP solvers that all conform to the same
Solver protocol.
See full treatment in Chapter 14
Key Takeaway
Use functions for stateless transformations and classes when you need
to bundle state with behavior. The @dataclass decorator eliminates boilerplate
for parameter containers like SimulationConfig. Use frozen=True for
immutable, hashable configs.