Prerequisites & Notation

Before You Begin

This chapter assumes proficiency with NumPy array operations (Chapter 5), basic understanding of linear algebra computations (Chapter 6), and familiarity with Python functions and decorators (Chapter 2). Some sections reference GPU computing concepts from Chapter 10.

  • NumPy arrays, broadcasting, and vectorization (Chapter 5; review ch05)

    Self-check: Can you write a fully vectorized NumPy function that avoids Python loops?

  • Linear algebra with NumPy/SciPy (Chapter 6; review ch06)

    Self-check: Can you solve a linear system and compute matrix decompositions with NumPy?

  • Functions, decorators, and closures (Chapter 2; review ch02)

    Self-check: Do you understand how @decorator syntax transforms a function?

  • Basic GPU computing concepts (Chapter 10; review ch10)

    Self-check: Do you understand the concept of host vs. device memory and kernel launches?
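
If you want to confirm the first three self-checks in code, the sketch below touches each one: a fully vectorized NumPy function, a linear solve plus a matrix decomposition, and a small decorator. The function and variable names here (`sq_distances`, `timed`, `demo`) are illustrative, not from the chapters themselves.

```python
import functools
import time

import numpy as np

# Self-check 1 (ch05): a vectorized function with no Python loops.
# Squared Euclidean distance from each row of X (shape (n, d)) to a
# point p (shape (d,)), using broadcasting instead of iteration.
def sq_distances(X, p):
    diff = X - p                    # (n, d) - (d,) broadcasts to (n, d)
    return np.sum(diff**2, axis=1)  # reduce over the feature axis

# Self-check 2 (ch06): solve a linear system and compute a decomposition.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([9.0, 8.0])
x = np.linalg.solve(A, b)   # solves A @ x = b without forming A^{-1}
q, r = np.linalg.qr(A)      # QR decomposition: A == q @ r

# Self-check 3 (ch02): @timed replaces demo with wrapper, which records
# the elapsed wall-clock time of each call on the wrapper itself.
def timed(f):
    @functools.wraps(f)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = f(*args, **kwargs)
        wrapper.last_elapsed = time.perf_counter() - start
        return result
    return wrapper

@timed
def demo(n):
    X = np.arange(n * 2, dtype=float).reshape(n, 2)
    return sq_distances(X, np.zeros(2))
```

If any of these three pieces looks unfamiliar, review the indicated chapter before continuing.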

Notation for This Chapter

Symbols and conventions introduced in this chapter.

Symbol    Meaning                                                                 Introduced
T_JIT     Execution time after JIT compilation (excluding compilation overhead)   s01
S_p       Parallel speedup with p workers: S_p = T_1 / T_p                        s03
f_s       Serial fraction of a program (Amdahl's Law)                             s03
η_p       Parallel efficiency: η_p = S_p / p                                      s03
∇f        Gradient of f (computed via automatic differentiation in JAX)           s02
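
The parallel-performance symbols above can be made concrete with a few one-line functions. The timing numbers below are illustrative placeholders, not measurements from the chapter:

```python
def measured_speedup(t1, tp):
    """S_p = T_1 / T_p from measured single-worker and p-worker times."""
    return t1 / tp

def parallel_efficiency(s_p, p):
    """eta_p = S_p / p; 1.0 means perfect linear scaling."""
    return s_p / p

def amdahl_speedup(f_s, p):
    """Amdahl's Law: speedup bound with serial fraction f_s and p workers."""
    return 1.0 / (f_s + (1.0 - f_s) / p)

# Illustrative numbers: suppose T_1 = 10 s and T_8 = 2 s on 8 workers.
s8 = measured_speedup(10.0, 2.0)    # S_8 = 5.0
eta8 = parallel_efficiency(s8, 8)   # eta_8 = 0.625
bound = amdahl_speedup(0.05, 8)     # bound with 5% serial code, about 5.93
```

Note that even with only 5% serial code, Amdahl's Law caps the 8-worker speedup well below 8, which is why η_p typically falls as p grows.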