Chapter Summary

Key Points

  1. Environment isolation is non-negotiable. Every project gets its own virtual environment (venv) or conda environment with pinned dependencies. System Python is for the OS, not for your research. Conda is preferred for scientific stacks with GPU dependencies.

  2. The data model is the foundation. Every Python operator dispatches to a dunder method (__add__, __matmul__, __getitem__). NumPy arrays, PyTorch tensors, and SciPy sparse matrices all work because they implement these protocols. Understanding this mechanism lets you read library source code and write interoperable custom classes (see the dunder-method sketch after this list).

  3. Match the data structure to the access pattern. Lists for ordered sequences (O(1) append), dicts and sets for O(1) lookup, deques for double-ended access, dataclasses for typed parameter containers. Using a list where a set is needed turns O(1) into O(n) (see the data-structure sketch below).

  4. Generators provide lazy evaluation. Generator functions (yield) and generator expressions produce values on demand with O(1) memory, regardless of dataset size. Use itertools.product for parameter sweeps, functools.partial for currying, and lru_cache for memoization (see the generator sketch below).

  5. f-strings with format specs are the standard. Use .2e for scientific notation (BER values), .1f for fixed-point (SNR in dB), and aligned columns for result tables. Use the logging module instead of print() for production code. Use YAML for configs, CSV for tabular results, and HDF5 for large numerical arrays (see the formatting sketch below).
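
To make the dispatch mechanism in point 2 concrete, here is a minimal sketch of a custom class whose operators forward to NumPy. The Signal class and its behavior are illustrative only, not code from the chapter.

    import numpy as np

    class Signal:
        """Thin wrapper around a 1-D NumPy array of samples."""

        def __init__(self, samples):
            self.samples = np.asarray(samples, dtype=float)

        def __add__(self, other):        # s1 + s2   ->  s1.__add__(s2)
            return Signal(self.samples + np.asarray(other))

        def __matmul__(self, other):     # s1 @ s2   ->  s1.__matmul__(s2)
            return float(self.samples @ np.asarray(other))

        def __getitem__(self, idx):      # s[1:], s[2]  ->  s.__getitem__(idx)
            return self.samples[idx]

        def __repr__(self):
            return f"Signal({self.samples!r})"

    s = Signal([1.0, 2.0, 3.0])
    print(s + [0.5, 0.5, 0.5])   # dispatches to __add__
    print(s @ [1.0, 0.0, 1.0])   # dispatches to __matmul__ -> 4.0
    print(s[1:])                 # dispatches to __getitem__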
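
The cost differences in point 3 can be seen directly in a short sketch; the SimParams fields and the sizes used here are made up for the example.

    from collections import deque
    from dataclasses import dataclass

    @dataclass
    class SimParams:             # typed parameter container
        snr_db: float
        n_bits: int = 10_000

    candidates = list(range(100_000))
    lookup = set(candidates)      # O(1) membership vs. O(n) for the list

    print(99_999 in lookup)       # hash lookup: O(1)
    print(99_999 in candidates)   # linear scan: O(n)

    window = deque(maxlen=3)      # O(1) append/pop at both ends
    for x in (1, 2, 3, 4):
        window.append(x)          # oldest element is evicted automatically
    print(window)                 # deque([2, 3, 4], maxlen=3)

    params = SimParams(snr_db=5.0)
    print(params)                 # SimParams(snr_db=5.0, n_bits=10000)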
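
Point 4's helpers fit together as in the sketch below. ber_estimate() and its formula are hypothetical placeholders, not the chapter's code; only the use of yield, itertools.product, functools.partial, and lru_cache is the pattern being illustrated.

    from functools import lru_cache, partial
    from itertools import product

    def snr_sweep(start_db, stop_db, step_db):
        """Generator function: yields one SNR value at a time, O(1) memory."""
        snr = start_db
        while snr <= stop_db:
            yield snr
            snr += step_db

    @lru_cache(maxsize=None)              # memoize repeated (snr, rate) evaluations
    def ber_estimate(snr_db, code_rate):
        return 0.5 * 10 ** (-snr_db * code_rate / 10)   # placeholder formula

    fixed_rate = partial(ber_estimate, code_rate=0.5)    # curry the code rate

    # itertools.product yields parameter combinations on demand
    for snr_db, rate in product(snr_sweep(0, 10, 5), (0.5, 0.75)):
        print(f"SNR={snr_db:>4.1f} dB  rate={rate}  BER={ber_estimate(snr_db, rate):.2e}")

    print(fixed_rate(8.0))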
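
Point 5's format specs look like this in practice; the SNR/BER values are invented for illustration.

    import logging

    logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
    log = logging.getLogger(__name__)

    results = [(0.0, 7.9e-2), (5.0, 6.0e-3), (10.0, 3.9e-6)]

    print(f"{'SNR (dB)':>8} | {'BER':>10}")        # aligned column headers
    for snr_db, ber in results:
        print(f"{snr_db:>8.1f} | {ber:>10.2e}")    # .1f fixed-point, .2e scientific

    log.info("sweep finished: %d points", len(results))   # logging instead of print()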

Looking Ahead

With the Python toolchain in place, Chapter 2 goes deeper into the language: functions as first-class objects, closures, decorators, and context managers. These patterns are the building blocks of clean, reusable scientific code — from @timer decorators that profile your algorithms to context managers that track GPU memory.