Matrices and Linear Maps
Why Matrices Are Central to Telecommunications
Every linear operation in a wireless communication system can be represented as a matrix acting on a vector. The MIMO channel relates the transmit signal vector $\mathbf{x} \in \mathbb{C}^{N_t}$ to the received signal vector $\mathbf{y} \in \mathbb{C}^{N_r}$ via $\mathbf{y} = \mathbf{H}\mathbf{x} + \mathbf{n}$,
where $\mathbf{H} \in \mathbb{C}^{N_r \times N_t}$ is the channel matrix and $\mathbf{n}$ is additive noise. A precoder at the transmitter is a matrix $\mathbf{F}$ applied before transmission; a combiner at the receiver is a matrix $\mathbf{W}$ applied after reception. The effective input–output relation becomes $\tilde{\mathbf{y}} = \mathbf{W}^H \mathbf{H} \mathbf{F}\, \mathbf{s} + \mathbf{W}^H \mathbf{n}$.
Understanding the algebraic properties of these matrices — their rank, range, null space, and special structure (Hermitian, unitary, positive definite) — is therefore not an abstract exercise but the foundation upon which transceiver design, capacity analysis, and signal processing algorithms rest.
Definition: Linear Map
Let $V$ and $W$ be vector spaces over the same field $\mathbb{F}$ (typically $\mathbb{R}$ or $\mathbb{C}$). A function $T: V \to W$ is a linear map (or linear transformation) if for all $\mathbf{u}, \mathbf{v} \in V$ and all $\alpha \in \mathbb{F}$:
- Additivity: $T(\mathbf{u} + \mathbf{v}) = T(\mathbf{u}) + T(\mathbf{v})$.
- Homogeneity: $T(\alpha \mathbf{v}) = \alpha\, T(\mathbf{v})$.
Equivalently, $T$ is linear if and only if $T(\alpha \mathbf{u} + \beta \mathbf{v}) = \alpha\, T(\mathbf{u}) + \beta\, T(\mathbf{v})$ for all $\mathbf{u}, \mathbf{v} \in V$ and $\alpha, \beta \in \mathbb{F}$.
Once bases are chosen for $V$ and $W$, every linear map $T: V \to W$ is represented uniquely by a matrix $\mathbf{A} \in \mathbb{F}^{m \times n}$ (where $n = \dim V$, $m = \dim W$) such that the coordinate vector of $T(\mathbf{v})$ equals $\mathbf{A}$ times the coordinate vector of $\mathbf{v}$. Conversely, every matrix defines a linear map. This correspondence is an isomorphism of vector spaces: $\mathcal{L}(V, W) \cong \mathbb{F}^{m \times n}$.
Definition: Matrix–Vector and Matrix–Matrix Products
Let $\mathbf{A} \in \mathbb{C}^{m \times n}$ and $\mathbf{B} \in \mathbb{C}^{n \times p}$.
Matrix–vector product. For $\mathbf{x} \in \mathbb{C}^n$, the product $\mathbf{A}\mathbf{x} \in \mathbb{C}^m$ has entries $(\mathbf{A}\mathbf{x})_i = \sum_{j=1}^{n} a_{ij} x_j$. Equivalently, $\mathbf{A}\mathbf{x}$ is a linear combination of the columns of $\mathbf{A}$: if $\mathbf{a}_1, \ldots, \mathbf{a}_n$ are the columns, then $\mathbf{A}\mathbf{x} = \sum_{j=1}^{n} x_j \mathbf{a}_j$.
Matrix–matrix product. The product $\mathbf{C} = \mathbf{A}\mathbf{B} \in \mathbb{C}^{m \times p}$ has entries $c_{ik} = \sum_{j=1}^{n} a_{ij} b_{jk}$. Column $k$ of $\mathbf{C}$ equals $\mathbf{A}\mathbf{b}_k$, where $\mathbf{b}_k$ is column $k$ of $\mathbf{B}$.
The column-view of the matrix–vector product is fundamental: it shows that the output $\mathbf{A}\mathbf{x}$ always lies in the column space of $\mathbf{A}$, no matter what $\mathbf{x}$ is. This observation is the key to understanding range and rank.
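The column view is easy to confirm numerically. The sketch below (NumPy, with an arbitrary illustrative matrix) compares `A @ x` against the explicit linear combination of columns:

```python
import numpy as np

# Column view of the matrix-vector product: A @ x equals the linear
# combination sum_j x_j * a_j of the columns of A.
# The matrix and vector are arbitrary illustration data.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])
x = np.array([2.0, -1.0, 0.5])

direct = A @ x
column_combo = sum(x[j] * A[:, j] for j in range(A.shape[1]))

print(direct)                             # [0.  0.5]
print(np.allclose(direct, column_combo))  # True: both views agree
```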
Definition: Range (Column Space)
The range (or column space) of a matrix $\mathbf{A} \in \mathbb{C}^{m \times n}$ is $\mathcal{R}(\mathbf{A}) = \{\mathbf{A}\mathbf{x} : \mathbf{x} \in \mathbb{C}^n\} = \operatorname{span}\{\mathbf{a}_1, \ldots, \mathbf{a}_n\}$, where $\mathbf{a}_1, \ldots, \mathbf{a}_n$ are the columns of $\mathbf{A}$.
$\mathcal{R}(\mathbf{A})$ is a subspace of $\mathbb{C}^m$. It is the image of the linear map $\mathbf{x} \mapsto \mathbf{A}\mathbf{x}$: it tells you which output vectors are achievable. In a MIMO system, $\mathcal{R}(\mathbf{H})$ is the subspace of receive-signal space that the channel can excite.
Definition: Null Space (Kernel)
The null space (or kernel) of a matrix $\mathbf{A} \in \mathbb{C}^{m \times n}$ is $\mathcal{N}(\mathbf{A}) = \{\mathbf{x} \in \mathbb{C}^n : \mathbf{A}\mathbf{x} = \mathbf{0}\}$.
$\mathcal{N}(\mathbf{A})$ is a subspace of $\mathbb{C}^n$ (the domain, not the codomain). It consists of all inputs that the linear map "kills." A nonzero null space means the map is not injective. In MIMO, any transmit vector $\mathbf{x} \in \mathcal{N}(\mathbf{H})$ produces zero received signal — the channel is "blind" in those directions.
Definition: Rank of a Matrix
The rank of $\mathbf{A} \in \mathbb{C}^{m \times n}$ is $\operatorname{rank}(\mathbf{A}) = \dim \mathcal{R}(\mathbf{A})$. Equivalently, $\operatorname{rank}(\mathbf{A})$ equals:
- the maximum number of linearly independent columns of $\mathbf{A}$, or
- the maximum number of linearly independent rows of $\mathbf{A}$ (row rank equals column rank), or
- the number of nonzero singular values of $\mathbf{A}$.
Since $\operatorname{rank}(\mathbf{A}) \le \min(m, n)$, a matrix is called full rank when equality holds. In MIMO, the rank of the channel matrix determines the maximum number of independent data streams (spatial multiplexing gain) that can be transmitted.
Definition: Invertibility
A square matrix $\mathbf{A} \in \mathbb{C}^{n \times n}$ is invertible (or nonsingular) if there exists a matrix $\mathbf{A}^{-1}$ such that $\mathbf{A}\mathbf{A}^{-1} = \mathbf{A}^{-1}\mathbf{A} = \mathbf{I}_n$. The following are equivalent:
- $\mathbf{A}$ is invertible.
- $\operatorname{rank}(\mathbf{A}) = n$.
- $\mathcal{N}(\mathbf{A}) = \{\mathbf{0}\}$.
- $\det(\mathbf{A}) \neq 0$.
- All eigenvalues of $\mathbf{A}$ are nonzero.
Invertibility means the linear map is a bijection: every output has a unique pre-image. When the channel matrix is square and invertible, zero-forcing detection amounts to multiplying the received signal by $\mathbf{H}^{-1}$.
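A minimal numerical sketch of square-channel zero-forcing, assuming a random invertible channel and noiseless reception (both illustrative assumptions):

```python
import numpy as np

# Zero-forcing detection for a square invertible channel: recover x by
# applying H^{-1} to the received signal. Channel and symbols are made-up
# illustration data; reception is noiseless, so recovery is exact.
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
x = np.array([1, -1, 1, 1], dtype=complex)   # transmitted symbols
y = H @ x                                    # noiseless received signal

x_hat = np.linalg.solve(H, y)   # numerically preferable to np.linalg.inv(H) @ y
print(np.allclose(x_hat, x))    # True: ZF inverts the channel exactly
```

Using `solve` instead of forming `inv(H)` explicitly is the standard numerical practice: it avoids computing and storing an inverse that is never needed as a matrix.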
Definition: Change of Basis
Let $\mathcal{B} = \{\mathbf{b}_1, \ldots, \mathbf{b}_n\}$ and $\mathcal{B}' = \{\mathbf{b}'_1, \ldots, \mathbf{b}'_n\}$ be two ordered bases of a vector space $V$. The change-of-basis matrix $\mathbf{P}$ from $\mathcal{B}$ to $\mathcal{B}'$ is the unique invertible matrix whose $j$-th column contains the coordinates of $\mathbf{b}_j$ in basis $\mathcal{B}'$. If $[\mathbf{v}]_{\mathcal{B}}$ denotes the coordinate vector of $\mathbf{v}$ in basis $\mathcal{B}$, then $[\mathbf{v}]_{\mathcal{B}'} = \mathbf{P}\,[\mathbf{v}]_{\mathcal{B}}$. The matrix representation of a linear map $V \to V$ transforms as $\mathbf{A}' = \mathbf{P}\mathbf{A}\mathbf{P}^{-1}$.
In wireless systems, change of basis is used constantly. For example, switching between the antenna-domain representation and the beamspace (angular) representation of a channel is a change of basis performed by the DFT matrix. Eigendecomposition and SVD can also be viewed as finding particularly revealing bases.
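The antenna-to-beamspace change of basis can be sketched as follows. The steering-vector channel model and all parameter values (array size, spatial frequency) are illustrative assumptions:

```python
import numpy as np

# Change of basis with the unitary DFT matrix: an antenna-domain steering
# vector becomes sparse (one active beam) in the beamspace representation.
N = 16
n = np.arange(N)
theta = 0.25                                 # normalized spatial frequency -> beam 4
h_antenna = np.exp(2j * np.pi * theta * n)   # array steering vector (illustrative)

F = np.fft.fft(np.eye(N)) / np.sqrt(N)       # unitary DFT matrix
h_beam = F @ h_antenna                       # coordinates in the DFT basis

print(np.allclose(F.conj().T @ F, np.eye(N)))   # True: F is unitary
print(int(np.argmax(np.abs(h_beam))))           # 4: energy concentrated in one beam
```

Because the DFT basis is unitary, the change of basis preserves the signal energy while exposing the angular sparsity of the channel.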
Definition: Hermitian Matrix
A matrix $\mathbf{A} \in \mathbb{C}^{n \times n}$ is Hermitian (or self-adjoint) if $\mathbf{A} = \mathbf{A}^H$, where $(\cdot)^H$ denotes the conjugate transpose.
Hermitian matrices are the complex generalization of real symmetric matrices. Every covariance matrix in wireless communications is Hermitian positive semidefinite.
Definition: Unitary Matrix
A matrix $\mathbf{U} \in \mathbb{C}^{n \times n}$ is unitary if $\mathbf{U}^H \mathbf{U} = \mathbf{U}\mathbf{U}^H = \mathbf{I}_n$.
Unitary matrices preserve inner products and norms: $\langle \mathbf{U}\mathbf{x}, \mathbf{U}\mathbf{y} \rangle = \langle \mathbf{x}, \mathbf{y} \rangle$ and $\|\mathbf{U}\mathbf{x}\| = \|\mathbf{x}\|$. In wireless, DFT matrices and beamforming codebooks are often unitary.
Definition: Positive Definite and Positive Semidefinite Matrices
A Hermitian matrix $\mathbf{A} \in \mathbb{C}^{n \times n}$ is:
- Positive definite ($\mathbf{A} \succ 0$) if $\mathbf{x}^H \mathbf{A} \mathbf{x} > 0$ for all nonzero $\mathbf{x} \in \mathbb{C}^n$.
- Positive semidefinite ($\mathbf{A} \succeq 0$) if $\mathbf{x}^H \mathbf{A} \mathbf{x} \ge 0$ for all $\mathbf{x} \in \mathbb{C}^n$.
Every covariance matrix is positive semidefinite. The MIMO capacity formula requires a positive semidefinite transmit covariance $\mathbf{Q} \succeq 0$.
Theorem: Rank–Nullity Theorem
For any matrix $\mathbf{A} \in \mathbb{C}^{m \times n}$: $\operatorname{rank}(\mathbf{A}) + \dim \mathcal{N}(\mathbf{A}) = n$.
The domain splits into two complementary pieces: the directions that $\mathbf{A}$ maps to nonzero outputs (contributing to the rank) and the directions that $\mathbf{A}$ collapses to zero (the null space). Together they must account for every dimension of the domain. In a MIMO channel with $N_t$ transmit antennas, if the channel has rank $r$, then $r$ independent streams survive, while $N_t - r$ transmit directions are "wasted" (they lie in the null space of $\mathbf{H}$).
Construct a basis for $\mathcal{N}(\mathbf{A})$ and extend it to a basis of $\mathbb{C}^n$.
Show that $\mathbf{A}$ maps the extension vectors to linearly independent vectors in $\mathcal{R}(\mathbf{A})$.
Conclude by counting dimensions.
Step 1: Set up notation
Let $\mathbf{A} \in \mathbb{C}^{m \times n}$ and let $k = \dim \mathcal{N}(\mathbf{A})$. We must show $\operatorname{rank}(\mathbf{A}) = n - k$.
Choose a basis $\{\mathbf{v}_1, \ldots, \mathbf{v}_k\}$ for $\mathcal{N}(\mathbf{A})$. Since $\mathcal{N}(\mathbf{A})$ is a subspace of $\mathbb{C}^n$, by the basis extension theorem we can extend this to a basis of $\mathbb{C}^n$: $\{\mathbf{v}_1, \ldots, \mathbf{v}_k, \mathbf{w}_1, \ldots, \mathbf{w}_{n-k}\}$.
Step 2: Show the images of the extension vectors span $\mathcal{R}(\mathbf{A})$
We claim that $\{\mathbf{A}\mathbf{w}_1, \ldots, \mathbf{A}\mathbf{w}_{n-k}\}$ spans $\mathcal{R}(\mathbf{A})$.
Take any $\mathbf{y} \in \mathcal{R}(\mathbf{A})$. Then $\mathbf{y} = \mathbf{A}\mathbf{x}$ for some $\mathbf{x} \in \mathbb{C}^n$. Since $\{\mathbf{v}_1, \ldots, \mathbf{v}_k, \mathbf{w}_1, \ldots, \mathbf{w}_{n-k}\}$ is a basis of $\mathbb{C}^n$, we can write $\mathbf{x} = \sum_{i=1}^{k} \alpha_i \mathbf{v}_i + \sum_{j=1}^{n-k} \beta_j \mathbf{w}_j$. Applying $\mathbf{A}$ and using $\mathbf{A}\mathbf{v}_i = \mathbf{0}$: $\mathbf{y} = \mathbf{A}\mathbf{x} = \sum_{j=1}^{n-k} \beta_j \mathbf{A}\mathbf{w}_j$. Thus every vector in $\mathcal{R}(\mathbf{A})$ is a linear combination of $\mathbf{A}\mathbf{w}_1, \ldots, \mathbf{A}\mathbf{w}_{n-k}$, so these vectors span $\mathcal{R}(\mathbf{A})$.
Step 3: Show the images of the extension vectors are linearly independent
Suppose $\sum_{j=1}^{n-k} \gamma_j \mathbf{A}\mathbf{w}_j = \mathbf{0}$. Then $\mathbf{A}\big(\sum_{j=1}^{n-k} \gamma_j \mathbf{w}_j\big) = \mathbf{0}$, which means $\sum_{j=1}^{n-k} \gamma_j \mathbf{w}_j \in \mathcal{N}(\mathbf{A})$.
Since $\{\mathbf{v}_1, \ldots, \mathbf{v}_k\}$ is a basis of $\mathcal{N}(\mathbf{A})$, there exist scalars $\delta_1, \ldots, \delta_k$ such that $\sum_{j=1}^{n-k} \gamma_j \mathbf{w}_j = \sum_{i=1}^{k} \delta_i \mathbf{v}_i$. Rearranging: $\sum_{j=1}^{n-k} \gamma_j \mathbf{w}_j - \sum_{i=1}^{k} \delta_i \mathbf{v}_i = \mathbf{0}$. But $\{\mathbf{v}_1, \ldots, \mathbf{v}_k, \mathbf{w}_1, \ldots, \mathbf{w}_{n-k}\}$ is a basis of $\mathbb{C}^n$, hence linearly independent. Therefore all coefficients must be zero: $\gamma_j = 0$ for all $j$ and $\delta_i = 0$ for all $i$.
This proves $\{\mathbf{A}\mathbf{w}_1, \ldots, \mathbf{A}\mathbf{w}_{n-k}\}$ is linearly independent.
Step 4: Conclude
From Steps 2 and 3, $\{\mathbf{A}\mathbf{w}_1, \ldots, \mathbf{A}\mathbf{w}_{n-k}\}$ is a basis of $\mathcal{R}(\mathbf{A})$. Therefore $\operatorname{rank}(\mathbf{A}) = \dim \mathcal{R}(\mathbf{A}) = n - k$. Rearranging: $\operatorname{rank}(\mathbf{A}) + \dim \mathcal{N}(\mathbf{A}) = n$. $\blacksquare$
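The theorem is easy to sanity-check numerically. The sketch below builds a matrix of known rank (an illustrative construction: the product of a $3 \times 2$ and a $2 \times 5$ Gaussian factor) and counts dimensions via the SVD:

```python
import numpy as np

# Numerical check of rank-nullity: rank(A) + dim N(A) = n.
# A = B @ C has rank 2 by construction (illustrative random factors).
rng = np.random.default_rng(1)
B = rng.standard_normal((3, 2))
C = rng.standard_normal((2, 5))
A = B @ C                                  # 3 x 5, rank 2

n = A.shape[1]
rank = np.linalg.matrix_rank(A)

# Nullity = number of columns minus the number of "large" singular values
s = np.linalg.svd(A, compute_uv=False)
nullity = n - int(np.count_nonzero(s > 1e-10 * s.max()))

print(rank, nullity, rank + nullity == n)  # 2 3 True
```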
Theorem: Properties of Hermitian Matrices
Let $\mathbf{A} \in \mathbb{C}^{n \times n}$ be Hermitian. Then:
- All eigenvalues of $\mathbf{A}$ are real.
- Eigenvectors corresponding to distinct eigenvalues are orthogonal.
- The diagonal entries $a_{ii}$ are real for all $i$.
A Hermitian matrix is "the same as its own adjoint," so it interacts symmetrically with inner products. This forces eigenvalues to be real (no imaginary component can survive the symmetry) and eigenvectors from different eigenvalues to be orthogonal.
For (1): compute $\mathbf{x}^H \mathbf{A} \mathbf{x}$ and use $\mathbf{A} = \mathbf{A}^H$.
For (2): compute $\mathbf{y}^H \mathbf{A} \mathbf{x}$ two different ways.
Proof of (1): Eigenvalues are real
Let $\mathbf{A}\mathbf{x} = \lambda \mathbf{x}$ with $\mathbf{x} \neq \mathbf{0}$. Compute the inner product $\mathbf{x}^H \mathbf{A} \mathbf{x}$: $\mathbf{x}^H \mathbf{A} \mathbf{x} = \lambda\, \mathbf{x}^H \mathbf{x} = \lambda \|\mathbf{x}\|^2$. On the other hand, since $\mathbf{A} = \mathbf{A}^H$: $(\mathbf{x}^H \mathbf{A} \mathbf{x})^H = \mathbf{x}^H \mathbf{A}^H \mathbf{x} = \mathbf{x}^H \mathbf{A} \mathbf{x}$, so the scalar $\mathbf{x}^H \mathbf{A} \mathbf{x}$ equals its own conjugate and is real; hence $\bar{\lambda} \|\mathbf{x}\|^2 = \lambda \|\mathbf{x}\|^2$. Since $\|\mathbf{x}\|^2 \neq 0$, we conclude $\bar{\lambda} = \lambda$, so $\lambda \in \mathbb{R}$.
Proof of (2): Orthogonality of eigenvectors
Let $\mathbf{A}\mathbf{x} = \lambda \mathbf{x}$ and $\mathbf{A}\mathbf{y} = \mu \mathbf{y}$ with $\lambda \neq \mu$. Then $\lambda\, \mathbf{y}^H \mathbf{x} = \mathbf{y}^H \mathbf{A} \mathbf{x} = (\mathbf{A}\mathbf{y})^H \mathbf{x} = \bar{\mu}\, \mathbf{y}^H \mathbf{x} = \mu\, \mathbf{y}^H \mathbf{x}$, where the last step uses $\mu \in \mathbb{R}$ from part (1). Thus $(\lambda - \mu)\, \mathbf{y}^H \mathbf{x} = 0$. Since $\lambda \neq \mu$, we must have $\mathbf{y}^H \mathbf{x} = 0$.
Proof of (3): Real diagonal entries
By definition, $\mathbf{A} = \mathbf{A}^H$ means $a_{ij} = \overline{a_{ji}}$ for all $i, j$. Setting $i = j$: $a_{ii} = \overline{a_{ii}}$, so each diagonal entry is real. $\blacksquare$
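All three properties can be verified numerically. The matrix below is an arbitrary Hermitian example built as $(\mathbf{M} + \mathbf{M}^H)/2$:

```python
import numpy as np

# Numerical check of the three Hermitian properties on an illustrative
# example: (M + M^H)/2 is Hermitian for any square M.
rng = np.random.default_rng(2)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (M + M.conj().T) / 2

w = np.linalg.eigvals(A)                       # general solver, complex output
print(np.allclose(w.imag, 0))                  # True: eigenvalues are real

_, V = np.linalg.eigh(A)                       # eigh exploits Hermitian structure
print(np.allclose(V.conj().T @ V, np.eye(4)))  # True: orthonormal eigenvectors
print(np.allclose(np.diag(A).imag, 0))         # True: real diagonal
```

Using the general `eigvals` for the first check (rather than `eigh`, which returns real eigenvalues by construction) makes the realness claim a genuine observation rather than a tautology.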
Theorem: Unitary Matrices Preserve Inner Products and Norms
Let $\mathbf{U} \in \mathbb{C}^{n \times n}$ be unitary. Then for all $\mathbf{x}, \mathbf{y} \in \mathbb{C}^n$:
- $\langle \mathbf{U}\mathbf{x}, \mathbf{U}\mathbf{y} \rangle = \langle \mathbf{x}, \mathbf{y} \rangle$ (inner product preservation).
- $\|\mathbf{U}\mathbf{x}\| = \|\mathbf{x}\|$ (norm / energy preservation).
Unitary transformations are "rigid rotations" (possibly with reflections) of complex space: they rotate and relabel axes without stretching or compressing any direction. This is why they preserve lengths and angles.
Expand $\langle \mathbf{U}\mathbf{x}, \mathbf{U}\mathbf{y} \rangle = \mathbf{y}^H \mathbf{U}^H \mathbf{U} \mathbf{x}$ and use $\mathbf{U}^H \mathbf{U} = \mathbf{I}$.
Proof
$\langle \mathbf{U}\mathbf{x}, \mathbf{U}\mathbf{y} \rangle = \mathbf{y}^H \mathbf{U}^H \mathbf{U} \mathbf{x} = \mathbf{y}^H \mathbf{x} = \langle \mathbf{x}, \mathbf{y} \rangle$, which proves inner product preservation. Setting $\mathbf{y} = \mathbf{x}$: $\|\mathbf{U}\mathbf{x}\|^2 = \langle \mathbf{U}\mathbf{x}, \mathbf{U}\mathbf{x} \rangle = \langle \mathbf{x}, \mathbf{x} \rangle = \|\mathbf{x}\|^2$, hence $\|\mathbf{U}\mathbf{x}\| = \|\mathbf{x}\|$. $\blacksquare$
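A quick numerical check, using a random unitary matrix obtained from a QR decomposition (an illustrative construction):

```python
import numpy as np

# Verify inner-product and norm preservation for a random unitary matrix.
# The Q factor of a QR decomposition of a complex matrix is unitary.
rng = np.random.default_rng(3)
M = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
U, _ = np.linalg.qr(M)

x = rng.standard_normal(5) + 1j * rng.standard_normal(5)
y = rng.standard_normal(5) + 1j * rng.standard_normal(5)

# np.vdot conjugates its first argument, i.e., vdot(a, b) = a^H b
print(np.allclose(np.vdot(U @ y, U @ x), np.vdot(y, x)))      # True
print(np.allclose(np.linalg.norm(U @ x), np.linalg.norm(x)))  # True
```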
Special Matrix Classes
| Property | Hermitian ($\mathbf{A} = \mathbf{A}^H$) | Unitary ($\mathbf{U}^H\mathbf{U} = \mathbf{I}$) | Positive Definite ($\mathbf{A} \succ 0$) |
|---|---|---|---|
| Eigenvalues | Real | On unit circle ($|\lambda| = 1$) | Real and positive |
| Diagonalizable? | Always (spectral theorem) | Always | Always (since Hermitian) |
| Determinant | Real | On unit circle ($|\det \mathbf{U}| = 1$) | Real and positive |
| Inverse | $\mathbf{A}^{-1}$ is Hermitian (if it exists) | $\mathbf{U}^{-1} = \mathbf{U}^H$ (always exists) | $\mathbf{A}^{-1}$ is also positive definite |
| Preserves norms? | Not in general | Yes: $\|\mathbf{U}\mathbf{x}\| = \|\mathbf{x}\|$ | Not in general |
| Quadratic form $\mathbf{x}^H\mathbf{A}\mathbf{x}$ | Always real | Complex in general | Always real and positive (for $\mathbf{x} \neq \mathbf{0}$) |
| Telecom example | Covariance matrix | DFT matrix, beamforming codebook | Noise covariance |
| Requires Hermitian? | By definition | No | Yes, by definition |
| Closure under addition? | Yes | No | Yes (cone) |
| Closure under multiplication? | No (product of two Hermitian matrices is Hermitian iff they commute) | Yes | No in general |
Example: Computing Rank, Range, and Null Space
Let $\mathbf{A} = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \\ 3 & 6 & 9 \end{pmatrix}$. Find $\operatorname{rank}(\mathbf{A})$, $\mathcal{R}(\mathbf{A})$, and $\mathcal{N}(\mathbf{A})$. Verify the rank–nullity theorem.
Step 1: Row reduce $\mathbf{A}$
Apply elementary row operations ($R_2 \leftarrow R_2 - 2R_1$, $R_3 \leftarrow R_3 - 3R_1$): $\begin{pmatrix} 1 & 2 & 3 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$. The row echelon form has one pivot (in column 1).
Step 2: Read off rank
The number of pivots is 1, so $\operatorname{rank}(\mathbf{A}) = 1$.
Step 3: Find the range $\mathcal{R}(\mathbf{A})$
The pivot is in column 1, so column 1 of the original matrix forms a basis for the column space: $\mathcal{R}(\mathbf{A}) = \operatorname{span}\{(1, 2, 3)^T\}$. Indeed, column 2 is $2\times$ column 1, and column 3 equals $3\times$ column 1.
Step 4: Find the null space $\mathcal{N}(\mathbf{A})$
Solve $\mathbf{A}\mathbf{x} = \mathbf{0}$ using the row echelon form. The system reduces to $x_1 + 2x_2 + 3x_3 = 0$, with $x_2$ and $x_3$ as free variables. Setting $(x_2, x_3) = (1, 0)$ gives $(-2, 1, 0)^T$; setting $(x_2, x_3) = (0, 1)$ gives $(-3, 0, 1)^T$. Thus: $\mathcal{N}(\mathbf{A}) = \operatorname{span}\{(-2, 1, 0)^T, (-3, 0, 1)^T\}$.
Step 5: Verify rank–nullity
$\operatorname{rank}(\mathbf{A}) + \dim \mathcal{N}(\mathbf{A}) = 1 + 2 = 3 = n$.
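As a cross-check of this style of computation, the sketch below applies NumPy's SVD-based tools to a rank-1 matrix whose columns are multiples of the first (an illustrative matrix of that structure):

```python
import numpy as np

# Rank, range, and null space for a rank-1 matrix whose columns are
# multiples of the first column (illustrative example).
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [3.0, 6.0, 9.0]])

rank = np.linalg.matrix_rank(A)
U, s, Vh = np.linalg.svd(A)
null_basis = Vh[rank:].conj().T        # columns span N(A)

print(rank)                                       # 1
print(np.allclose(A @ null_basis, 0))             # True: genuine null vectors
print(rank + null_basis.shape[1] == A.shape[1])   # True: rank-nullity holds
```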
Example: The DFT Matrix Is Unitary
Let $N = 2$. The DFT matrix is $\mathbf{F}_2 = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}$. Verify that $\mathbf{F}_2$ is unitary.
Compute $\mathbf{F}_2^H \mathbf{F}_2$
Since $\mathbf{F}_2$ is real in this case, $\mathbf{F}_2^H = \mathbf{F}_2^T$. Then $\mathbf{F}_2^H \mathbf{F}_2 = \frac{1}{2} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} = \frac{1}{2} \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix} = \mathbf{I}_2$. Since $\mathbf{F}_2^H \mathbf{F}_2 = \mathbf{I}_2$, the matrix is unitary. (For a square matrix, $\mathbf{F}^H \mathbf{F} = \mathbf{I}$ implies $\mathbf{F}\mathbf{F}^H = \mathbf{I}$ as well.)
Example: Checking Positive Definiteness
Show that the matrix $\mathbf{A} = \begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix}$ is positive definite.
Method 1: Eigenvalue test
The characteristic polynomial is $\det(\mathbf{A} - \lambda \mathbf{I}) = (2 - \lambda)^2 - 1 = (\lambda - 1)(\lambda - 3)$. The eigenvalues are $\lambda_1 = 1$ and $\lambda_2 = 3$. Since $\mathbf{A}$ is symmetric (hence Hermitian over $\mathbb{R}$) and all eigenvalues are strictly positive, $\mathbf{A} \succ 0$.
Method 2: Direct quadratic form
For any $\mathbf{x} = (x_1, x_2)^T \neq \mathbf{0}$: $\mathbf{x}^T \mathbf{A} \mathbf{x} = 2x_1^2 - 2x_1 x_2 + 2x_2^2 = x_1^2 + x_2^2 + (x_1 - x_2)^2$. This is a sum of squares, and at least one of $x_1, x_2$ is nonzero, so $x_1^2 + x_2^2 > 0$. Adding the nonnegative term $(x_1 - x_2)^2$, the entire expression is strictly positive. Thus $\mathbf{A} \succ 0$.
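Both tests translate directly into NumPy for the $2 \times 2$ symmetric matrix $\begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix}$; the sketch also uses the Cholesky factorization, which succeeds exactly when the matrix is positive definite:

```python
import numpy as np

# Two numerical tests of positive definiteness:
# (a) all eigenvalues positive; (b) Cholesky factorization succeeds
# (np.linalg.cholesky raises LinAlgError iff the matrix is not PD).
A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])

eigvals = np.linalg.eigvalsh(A)    # eigvalsh: for symmetric/Hermitian input
print(eigvals)                     # [1. 3.]
print(bool(np.all(eigvals > 0)))   # True

L = np.linalg.cholesky(A)          # no exception: A is positive definite
print(np.allclose(L @ L.T, A))     # True: A = L L^T
```

In practice the Cholesky test is the cheapest reliable PD check, since it costs a third of an eigendecomposition.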
Rank
The rank of a matrix $\mathbf{A} \in \mathbb{C}^{m \times n}$ is the dimension of its column space $\mathcal{R}(\mathbf{A})$, equivalently the number of linearly independent columns (or rows). It satisfies $\operatorname{rank}(\mathbf{A}) \le \min(m, n)$.
Related: Rank of a Matrix, Range (Column Space), Rank–Nullity Theorem
Null Space (Kernel)
The null space of $\mathbf{A} \in \mathbb{C}^{m \times n}$ is the set of all vectors $\mathbf{x}$ satisfying $\mathbf{A}\mathbf{x} = \mathbf{0}$. It is a subspace of $\mathbb{C}^n$ with dimension $n - \operatorname{rank}(\mathbf{A})$.
Related: Null Space (Kernel), Rank–Nullity Theorem, Rank
Hermitian Matrix
A square matrix satisfying $\mathbf{A} = \mathbf{A}^H$. Hermitian matrices have real eigenvalues, orthogonal eigenvectors, and are diagonalizable by a unitary similarity transformation (spectral theorem).
Related: Hermitian Matrix, Properties of Hermitian Matrices, Rank
Unitary Matrix
A square matrix satisfying $\mathbf{U}^H \mathbf{U} = \mathbf{U}\mathbf{U}^H = \mathbf{I}$. Unitary matrices preserve inner products and norms, have eigenvalues on the unit circle, and their inverse is simply $\mathbf{U}^H$.
Related: Unitary Matrix, Unitary Matrices Preserve Inner Products and Norms
Positive Definite Matrix
A Hermitian matrix $\mathbf{A}$ such that $\mathbf{x}^H \mathbf{A} \mathbf{x} > 0$ for every nonzero vector $\mathbf{x}$. Equivalently, all eigenvalues are strictly positive. Positive semidefinite relaxes "$>$" to "$\geq$".
Related: Positive Definite and Positive Semidefinite Matrices, Hermitian Matrix
Quick Check
Let $\mathbf{A} \in \mathbb{C}^{m \times n}$ have $\operatorname{rank}(\mathbf{A}) = r$. What is $\dim \mathcal{N}(\mathbf{A})$?
By the rank–nullity theorem, $\dim \mathcal{N}(\mathbf{A}) = n - \operatorname{rank}(\mathbf{A}) = n - r$.
Quick Check
A matrix $\mathbf{H}$ represents a MIMO channel with 4 transmit and 8 receive antennas. If $\mathcal{N}(\mathbf{H}) = \{\mathbf{0}\}$, what is $\operatorname{rank}(\mathbf{H})$?
$\mathbf{H} \in \mathbb{C}^{8 \times 4}$ and $\dim \mathcal{N}(\mathbf{H}) = 0$, so $\operatorname{rank}(\mathbf{H}) = 4 - 0 = 4$. The channel is full column rank, meaning all 4 transmit streams can be recovered.
Quick Check
Which of the following is always true for a unitary matrix $\mathbf{U}$?
$\mathbf{U}$ is Hermitian
$\mathbf{U}$ is positive definite
$\det \mathbf{U} = 1$
$|\det \mathbf{U}| = 1$
From $\mathbf{U}^H \mathbf{U} = \mathbf{I}$, taking determinants: $\overline{\det \mathbf{U}} \cdot \det \mathbf{U} = 1$, so $|\det \mathbf{U}|^2 = 1$, giving $|\det \mathbf{U}| = 1$. The determinant itself can be any complex number on the unit circle (e.g., $\det \mathbf{U} = -1$ for reflection matrices), so $\det \mathbf{U} = 1$ is not always true.
Quick Check
Let $\mathbf{A}$ be Hermitian. Which statement is false?
All eigenvalues of $\mathbf{A}$ are real.
All diagonal entries of $\mathbf{A}$ are real.
All entries of $\mathbf{A}$ are real.
$\mathbf{A}$ is diagonalizable.
A Hermitian matrix satisfies $\mathbf{A} = \mathbf{A}^H$. The off-diagonal entries need not be real; they satisfy $a_{ij} = \overline{a_{ji}}$ (conjugate symmetry). For example, $\begin{pmatrix} 1 & j \\ -j & 2 \end{pmatrix}$ is Hermitian but has complex off-diagonal entries.
Quick Check
If $\mathbf{A} \succ 0$ and $\mathbf{B} \succ 0$, is $\mathbf{A} + \mathbf{B} \succ 0$?
Yes, always
Only if and commute
Not necessarily
For any nonzero $\mathbf{x}$: $\mathbf{x}^H (\mathbf{A} + \mathbf{B}) \mathbf{x} = \mathbf{x}^H \mathbf{A} \mathbf{x} + \mathbf{x}^H \mathbf{B} \mathbf{x} > 0$. The set of positive definite matrices forms an open convex cone.
Common Mistake: Transpose vs. Conjugate Transpose for Complex Matrices
Mistake:
"I need the adjoint of my channel matrix, so I'll just transpose it: $\mathbf{H}^T$."
Correction:
For complex matrices, the adjoint (Hermitian transpose) is the conjugate transpose $\mathbf{H}^H = \overline{\mathbf{H}}^T$, not the plain transpose $\mathbf{H}^T$. Using $\mathbf{H}^T$ instead of $\mathbf{H}^H$ will give wrong results whenever $\mathbf{H}$ has complex entries.
For example, let $\mathbf{x} = (j, 1)^T$. Then $\mathbf{x}^H \mathbf{x} = 2$, but $\mathbf{x}^T \mathbf{x} = 0$.
The transpose gives a bilinear form; the conjugate transpose gives a sesquilinear (inner product) form. Only the latter guarantees $\mathbf{x}^H \mathbf{x} = \|\mathbf{x}\|^2 \geq 0$ with equality iff $\mathbf{x} = \mathbf{0}$.
Tip: Rule of thumb: in complex-valued signal processing and wireless communications, always use $(\cdot)^H$ unless you have a specific algebraic reason to use $(\cdot)^T$ (e.g., when working with the $\operatorname{vec}(\cdot)$ operator or Kronecker products where the transpose is genuinely intended).
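The pitfall is visible in a few lines of NumPy; the vector below is an arbitrary illustration:

```python
import numpy as np

# Transpose vs conjugate transpose: x^T x can vanish for a nonzero complex
# vector, while x^H x is always the (positive) squared norm.
x = np.array([1j, 1.0])

print(x.T @ x)          # 0j:     bilinear form, no conjugation
print(x.conj().T @ x)   # (2+0j): sesquilinear form, equals ||x||^2
print(np.vdot(x, x))    # (2+0j): np.vdot conjugates its first argument
```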
Common Mistake: Rank Is Not the Number of Nonzero Rows or Columns
Mistake:
"My matrix has 3 nonzero rows, so its rank is 3."
Correction:
Rank is the number of linearly independent rows (or columns), not the number of nonzero ones. For example, $\begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix}$ has 2 nonzero rows but $\operatorname{rank} = 1$ because row 2 is $2\times$ row 1. Always row-reduce (or compute the SVD) to determine rank.
Tip: The most numerically reliable way to compute rank is via the SVD: count the number of singular values that exceed a suitable tolerance.
Numerical Invertibility and Condition Numbers
The theoretical invertibility criterion $\det(\mathbf{A}) \neq 0$ is useless in floating-point arithmetic. A matrix whose determinant is tiny but nonzero is mathematically invertible yet can be numerically singular. The practical criterion is the condition number $\kappa(\mathbf{A}) = \sigma_{\max}(\mathbf{A}) / \sigma_{\min}(\mathbf{A})$:
- $\kappa(\mathbf{A})$ small (up to roughly $10^3$): well-conditioned, safe to invert.
- $\kappa(\mathbf{A})$ large: ill-conditioned, use regularization (Tikhonov, diagonal loading).
- $\kappa(\mathbf{A}) \gtrsim 10^{15}$ (in 64-bit): effectively singular, do not invert. In MIMO detection, computing $\mathbf{H}^{-1}\mathbf{y}$ (zero-forcing) fails when the channel is ill-conditioned. Use MMSE regularization instead.
- IEEE 754 double precision: ~15 significant digits, so matrices with $\kappa \gtrsim 10^{15}$ lose all precision
- 32-bit (single precision): only ~7 digits; $\kappa \gtrsim 10^{7}$ is dangerous
- NumPy: np.linalg.cond(H) computes the condition number; never use np.linalg.inv(H) without checking it
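A sketch of this workflow, with an artificially ill-conditioned channel (the nearly dependent columns are an illustrative construction, as is the loading level):

```python
import numpy as np

# Check conditioning before inverting; fall back to diagonal loading
# (Tikhonov / MMSE-style) when the channel is ill-conditioned.
rng = np.random.default_rng(4)
H = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H[:, 1] = H[:, 0] + 1e-12 * rng.standard_normal(4)   # nearly dependent columns

kappa = np.linalg.cond(H)
print(kappa > 1e8)                 # True: plain inversion is unsafe

# Regularized (diagonally loaded) equalizer: (H^H H + sigma2 I)^{-1} H^H
sigma2 = 0.1                       # illustrative loading level
A_reg = H.conj().T @ H + sigma2 * np.eye(4)
G = np.linalg.solve(A_reg, H.conj().T)
print(np.linalg.cond(A_reg) < 1e6) # True: loading restores conditioning
```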
Computing Rank in Practice: SVD Thresholding
The theoretical rank (number of nonzero singular values) is never exactly computable in floating-point. In practice, rank is determined by counting singular values above a tolerance: $\operatorname{rank}_\tau(\mathbf{A}) = \#\{i : \sigma_i > \tau\}$,
where $\tau$ is typically $\tau = \max(m, n) \cdot \epsilon \cdot \sigma_{\max}$ (with $\epsilon \approx 2.2 \times 10^{-16}$ for double precision). This is the default used by numpy.linalg.matrix_rank.
- SVD cost: $O(mn \min(m, n))$ flops for an $m \times n$ matrix
- For large MIMO systems with an $N \times N$ channel, the SVD of $\mathbf{H}$ costs on the order of $N^3$ flops
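The thresholding rule can be reproduced directly; the exactly rank-2 test matrix below is an illustrative construction (product of Gaussian factors):

```python
import numpy as np

# Rank via SVD thresholding, reproducing the default tolerance used by
# numpy.linalg.matrix_rank: tol = max(m, n) * eps * sigma_max.
rng = np.random.default_rng(5)
A = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 5))  # rank 2

s = np.linalg.svd(A, compute_uv=False)
tol = max(A.shape) * np.finfo(A.dtype).eps * s.max()
manual_rank = int(np.sum(s > tol))

print(manual_rank)                              # 2
print(manual_rank == np.linalg.matrix_rank(A))  # True: same rule, same answer
```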
Key Takeaway
For $\mathbf{A} \in \mathbb{C}^{m \times n}$, the domain decomposes as $\mathbb{C}^n = \mathcal{R}(\mathbf{A}^H) \oplus \mathcal{N}(\mathbf{A})$ (an orthogonal direct sum). The rank–nullity theorem is the dimension count of this decomposition. In MIMO, this tells us exactly how many independent data streams the channel supports.
Key Takeaway
Three matrix classes appear everywhere in telecommunications:
- Hermitian ($\mathbf{A} = \mathbf{A}^H$): covariance matrices, Fisher information matrices, optimization cost matrices. Key property: real eigenvalues.
- Unitary ($\mathbf{U}^H\mathbf{U} = \mathbf{I}$): DFT matrices, beamforming codebooks, precoder/combiner matrices in capacity-achieving schemes. Key property: norm preservation.
- Positive (semi)definite ($\mathbf{A} \succ 0$, $\mathbf{A} \succeq 0$): covariance matrices, Gram matrices $\mathbf{H}^H\mathbf{H}$. Key property: nonnegative eigenvalues guarantee meaningful power/energy interpretations.
The spectral theorem unifies them: every Hermitian matrix is unitarily diagonalizable with real eigenvalues, and it is positive (semi)definite iff those eigenvalues are all positive (nonnegative).
Key Takeaway
Many wireless signal processing operations are changes of basis in disguise. The DFT transforms from antenna domain to beamspace (angular domain). The SVD of a channel matrix provides the optimal transmit and receive bases that diagonalize the channel into independent parallel sub-channels. Recognizing an operation as a change of basis often reveals the underlying structure and suggests efficient implementations.
Why This Matters: The Channel Matrix Maps Transmit to Receive Signal Space
In a narrowband MIMO system with $N_t$ transmit antennas and $N_r$ receive antennas, the channel is described by $\mathbf{y} = \mathbf{H}\mathbf{x} + \mathbf{n}$, where $\mathbf{H} \in \mathbb{C}^{N_r \times N_t}$ is the channel matrix. The concepts of this section directly govern system performance:
- Range: $\mathcal{R}(\mathbf{H})$ is the set of noiseless received signals achievable by varying $\mathbf{x}$. Only the projection of $\mathbf{y}$ onto $\mathcal{R}(\mathbf{H})$ matters for detection; noise in $\mathcal{R}(\mathbf{H})^\perp$ is irrelevant.
- Null space: $\mathcal{N}(\mathbf{H})$ consists of transmit directions that produce zero received signal. In multiuser MIMO, one user's precoder is deliberately chosen in the null space of the interfered user's channel (zero-forcing).
- Rank: $\operatorname{rank}(\mathbf{H})$ is the number of independent spatial streams the channel supports. In rich scattering, $\operatorname{rank}(\mathbf{H}) = \min(N_t, N_r)$ with probability 1 (full rank). In line-of-sight channels, $\operatorname{rank}(\mathbf{H}) \approx 1$ (rank-deficient), and spatial multiplexing is impossible.
- Hermitian structure: $\mathbf{H}^H\mathbf{H}$ and $\mathbf{H}\mathbf{H}^H$ are Hermitian positive semidefinite. Their eigenvalues are the squared singular values of $\mathbf{H}$, which determine the SNR on each spatial sub-channel.
- Unitary precoders/combiners: Capacity-achieving transmission on the MIMO channel uses the SVD $\mathbf{H} = \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^H$. The transmit precoder $\mathbf{V}$ and receive combiner $\mathbf{U}^H$ are unitary, ensuring no energy is wasted in the transformation.
Why This Matters: Null Space and Zero-Forcing in Multiuser MIMO
Consider a base station with $M$ antennas serving two single-antenna users with channel vectors $\mathbf{h}_1, \mathbf{h}_2 \in \mathbb{C}^M$. To eliminate inter-user interference, the zero-forcing precoder $\mathbf{w}_1$ for user 1 must satisfy $\mathbf{h}_2^H \mathbf{w}_1 = 0$, i.e., $\mathbf{w}_1 \in \mathcal{N}(\mathbf{h}_2^H)$.
By the rank–nullity theorem, $\dim \mathcal{N}(\mathbf{h}_2^H) = M - \operatorname{rank}(\mathbf{h}_2^H) = M - 1$ (assuming $\mathbf{h}_2 \neq \mathbf{0}$). Thus the precoder has $M - 1$ degrees of freedom: enough to also maximize the signal power subject to the null-space constraint. This is the foundation of zero-forcing beamforming, one of the most widely used multiuser MIMO techniques.
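For a rank-one constraint like this, the null-space projection has a closed form, $\mathbf{P} = \mathbf{I} - \mathbf{h}_2 \mathbf{h}_2^H / \|\mathbf{h}_2\|^2$. The sketch below applies it to random illustrative channels:

```python
import numpy as np

# Zero-forcing precoder for user 1: project user 1's channel onto the
# null space of h2^H, then normalize. Channels are random illustrative data.
rng = np.random.default_rng(6)
M_ant = 4
h1 = rng.standard_normal(M_ant) + 1j * rng.standard_normal(M_ant)
h2 = rng.standard_normal(M_ant) + 1j * rng.standard_normal(M_ant)

# Orthogonal projector onto N(h2^H), the complement of span{h2}
P = np.eye(M_ant) - np.outer(h2, h2.conj()) / np.vdot(h2, h2)
w1 = P @ h1
w1 = w1 / np.linalg.norm(w1)               # unit-power ZF precoder

print(abs(np.vdot(h2, w1)) < 1e-10)        # True: user 2 sees no interference
print(abs(np.vdot(h1, w1)) > 0)            # True: user 1 still receives signal
```

Projecting $\mathbf{h}_1$ (rather than an arbitrary null-space vector) also maximizes user 1's signal power within the null-space constraint, matching the "degrees of freedom" remark above.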