Matrices and Linear Maps

Why Matrices Are Central to Telecommunications

Every linear operation in a wireless communication system can be represented as a matrix acting on a vector. The MIMO channel relates the transmit signal vector $\mathbf{x} \in \mathbb{C}^{n_t}$ to the received signal vector $\mathbf{y} \in \mathbb{C}^{n_r}$ via

$$\mathbf{y} = \mathbf{H}\mathbf{x} + \mathbf{n},$$

where $\mathbf{H} \in \mathbb{C}^{n_r \times n_t}$ is the channel matrix and $\mathbf{n}$ is additive noise. A precoder at the transmitter is a matrix $\mathbf{F}$ applied before transmission; a combiner at the receiver is a matrix $\mathbf{W}^H$ applied after reception. The effective input–output relation becomes $\tilde{\mathbf{y}} = \mathbf{W}^H \mathbf{H} \mathbf{F} \mathbf{s} + \mathbf{W}^H \mathbf{n}$.
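A minimal NumPy sketch of this input–output relation, with an SVD-based precoder and combiner chosen purely for illustration (the algebra holds for any $\mathbf{F}$ and $\mathbf{W}$); the dimensions, seed, and noise level are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
nt, nr, ns = 4, 4, 2  # transmit antennas, receive antennas, data streams (illustrative)

# Rayleigh-fading channel: i.i.d. complex Gaussian entries
H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)

# One common choice of F and W: leading singular vectors of H
U, s, Vh = np.linalg.svd(H)
F = Vh.conj().T[:, :ns]   # precoder: first ns right singular vectors
W = U[:, :ns]             # combiner columns: first ns left singular vectors

s_sym = (rng.standard_normal(ns) + 1j * rng.standard_normal(ns)) / np.sqrt(2)  # data symbols
n = 0.01 * (rng.standard_normal(nr) + 1j * rng.standard_normal(nr))            # additive noise

y = H @ (F @ s_sym) + n       # channel output y = H F s + n
y_tilde = W.conj().T @ y      # effective relation: W^H H F s + W^H n
print(np.round(y_tilde, 3))
```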

Understanding the algebraic properties of these matrices — their rank, range, null space, and special structure (Hermitian, unitary, positive definite) — is therefore not an abstract exercise but the foundation upon which transceiver design, capacity analysis, and signal processing algorithms rest.

Definition: Linear Map

Let $V$ and $W$ be vector spaces over the same field $\mathbb{F}$ (typically $\mathbb{R}$ or $\mathbb{C}$). A function $T : V \to W$ is a linear map (or linear transformation) if for all $\mathbf{x}, \mathbf{y} \in V$ and all $\alpha \in \mathbb{F}$:

  1. Additivity: $T(\mathbf{x} + \mathbf{y}) = T(\mathbf{x}) + T(\mathbf{y})$.
  2. Homogeneity: $T(\alpha \mathbf{x}) = \alpha\, T(\mathbf{x})$.

Equivalently, $T$ is linear if and only if $T(\alpha \mathbf{x} + \beta \mathbf{y}) = \alpha\, T(\mathbf{x}) + \beta\, T(\mathbf{y})$ for all $\alpha, \beta \in \mathbb{F}$ and $\mathbf{x}, \mathbf{y} \in V$.

Once bases are chosen for $V$ and $W$, every linear map $T$ is represented uniquely by a matrix $\mathbf{A} \in \mathbb{F}^{m \times n}$ (where $n = \dim V$, $m = \dim W$) such that $T(\mathbf{x}) = \mathbf{A}\mathbf{x}$. Conversely, every matrix defines a linear map. This correspondence is an isomorphism of vector spaces: $\mathcal{L}(V, W) \cong \mathbb{F}^{m \times n}$.

Definition: Matrix–Vector and Matrix–Matrix Products

Let $\mathbf{A} = [a_{ij}] \in \mathbb{C}^{m \times n}$ and $\mathbf{B} = [b_{jk}] \in \mathbb{C}^{n \times p}$.

Matrix–vector product. For $\mathbf{x} \in \mathbb{C}^n$, the product $\mathbf{y} = \mathbf{A}\mathbf{x} \in \mathbb{C}^m$ has entries

$$y_i = \sum_{k=1}^{n} a_{ik}\, x_k, \qquad i = 1, \ldots, m.$$

Equivalently, $\mathbf{A}\mathbf{x}$ is a linear combination of the columns of $\mathbf{A}$: if $\mathbf{a}_1, \ldots, \mathbf{a}_n$ are the columns, then $\mathbf{A}\mathbf{x} = x_1 \mathbf{a}_1 + \cdots + x_n \mathbf{a}_n$.

Matrix–matrix product. The product $\mathbf{C} = \mathbf{A}\mathbf{B} \in \mathbb{C}^{m \times p}$ has entries

$$c_{ik} = \sum_{j=1}^{n} a_{ij}\, b_{jk}, \qquad i = 1,\ldots,m,\; k = 1,\ldots,p.$$

Column $k$ of $\mathbf{C}$ equals $\mathbf{A}\mathbf{b}_k$, where $\mathbf{b}_k$ is column $k$ of $\mathbf{B}$.

The column view of the matrix–vector product is fundamental: it shows that the output $\mathbf{y} = \mathbf{A}\mathbf{x}$ always lies in the column space of $\mathbf{A}$, no matter what $\mathbf{x}$ is. This observation is the key to understanding range and rank.
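A quick numerical check of the two views, using a small matrix invented for illustration:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])   # 3x2 example matrix
x = np.array([2.0, -1.0])

# Entrywise view: y_i = sum_k a_ik x_k
y_rows = A @ x

# Column view: Ax = x_1 a_1 + x_2 a_2, a combination of the columns of A
y_cols = x[0] * A[:, 0] + x[1] * A[:, 1]

print(np.allclose(y_rows, y_cols))  # True: both views give the same vector
```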

Definition: Range (Column Space)

The range (or column space) of a matrix $\mathbf{A} \in \mathbb{C}^{m \times n}$ is

$$\mathcal{R}(\mathbf{A}) = \{\mathbf{A}\mathbf{x} : \mathbf{x} \in \mathbb{C}^n\} = \text{span}(\mathbf{a}_1, \ldots, \mathbf{a}_n) \subseteq \mathbb{C}^m,$$

where $\mathbf{a}_1, \ldots, \mathbf{a}_n$ are the columns of $\mathbf{A}$.

$\mathcal{R}(\mathbf{A})$ is a subspace of $\mathbb{C}^m$. It is the image of the linear map $\mathbf{x} \mapsto \mathbf{A}\mathbf{x}$: it tells you which output vectors are achievable. In a MIMO system, $\mathcal{R}(\mathbf{H})$ is the subspace of receive-signal space that the channel can excite.

Definition: Null Space (Kernel)

The null space (or kernel) of a matrix $\mathbf{A} \in \mathbb{C}^{m \times n}$ is

$$\mathcal{N}(\mathbf{A}) = \{\mathbf{x} \in \mathbb{C}^n : \mathbf{A}\mathbf{x} = \mathbf{0}\} \subseteq \mathbb{C}^n.$$

$\mathcal{N}(\mathbf{A})$ is a subspace of $\mathbb{C}^n$ (the domain, not the codomain). It consists of all inputs that the linear map "kills." A nonzero null space means the map is not injective. In MIMO, any transmit vector $\mathbf{x} \in \mathcal{N}(\mathbf{H})$ produces zero received signal — the channel is "blind" in those directions.

Definition: Rank of a Matrix

The rank of $\mathbf{A} \in \mathbb{C}^{m \times n}$ is

$$\text{rank}(\mathbf{A}) = \dim\!\big(\mathcal{R}(\mathbf{A})\big).$$

Equivalently, $\text{rank}(\mathbf{A})$ equals:

  • the maximum number of linearly independent columns of $\mathbf{A}$, or
  • the maximum number of linearly independent rows of $\mathbf{A}$ (row rank equals column rank), or
  • the number of nonzero singular values of $\mathbf{A}$.

Since $\text{rank}(\mathbf{A}) \leq \min(m, n)$, a matrix is called full rank when equality holds. In MIMO, the rank of the channel matrix $\mathbf{H}$ determines the maximum number of independent data streams (spatial multiplexing gain) that can be transmitted.

Definition: Invertibility

A square matrix $\mathbf{A} \in \mathbb{C}^{n \times n}$ is invertible (or nonsingular) if there exists a matrix $\mathbf{A}^{-1} \in \mathbb{C}^{n \times n}$ such that

$$\mathbf{A}\mathbf{A}^{-1} = \mathbf{A}^{-1}\mathbf{A} = \mathbf{I}_n.$$

The following are equivalent:

  1. $\mathbf{A}$ is invertible.
  2. $\text{rank}(\mathbf{A}) = n$.
  3. $\mathcal{N}(\mathbf{A}) = \{\mathbf{0}\}$.
  4. $\det(\mathbf{A}) \neq 0$.
  5. All eigenvalues of $\mathbf{A}$ are nonzero.

Invertibility means the linear map $\mathbf{x} \mapsto \mathbf{A}\mathbf{x}$ is a bijection: every output has a unique pre-image. When the channel matrix is square and invertible, zero-forcing detection amounts to multiplying the received signal by $\mathbf{H}^{-1}$.
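A sketch of zero-forcing detection on a random square channel; the sizes, symbol alphabet, and noise level are illustrative assumptions, and `np.linalg.solve` is used rather than forming $\mathbf{H}^{-1}$ explicitly, which is numerically preferable:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4  # square channel: equal numbers of transmit and receive antennas

H = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
x = np.sign(rng.standard_normal(n)) + 1j * np.sign(rng.standard_normal(n))  # QPSK-like symbols
noise = 0.01 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
y = H @ x + noise

# Zero-forcing: apply H^{-1} to the received signal (sensible only if H is well-conditioned)
x_hat = np.linalg.solve(H, y)
print(np.round(x_hat - x, 3))  # residual error comes entirely from the H^{-1} n term
```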

Definition: Change of Basis

Let $\mathcal{B} = \{\mathbf{b}_1, \ldots, \mathbf{b}_n\}$ and $\mathcal{B}' = \{\mathbf{b}'_1, \ldots, \mathbf{b}'_n\}$ be two ordered bases of a vector space $V$. The change of basis from $\mathcal{B}$ to $\mathcal{B}'$ is governed by the unique invertible matrix $\mathbf{P} \in \mathbb{C}^{n \times n}$ whose $k$-th column contains the coordinates of $\mathbf{b}'_k$ in basis $\mathcal{B}$:

$$\mathbf{b}'_k = \sum_{i=1}^{n} p_{ik}\, \mathbf{b}_i, \qquad k = 1,\ldots,n.$$

If $[\mathbf{x}]_{\mathcal{B}}$ denotes the coordinate vector of $\mathbf{x}$ in basis $\mathcal{B}$, then

$$[\mathbf{x}]_{\mathcal{B}'} = \mathbf{P}^{-1} [\mathbf{x}]_{\mathcal{B}}.$$

The matrix representation of a linear map $T$ transforms as

$$[\mathbf{A}]_{\mathcal{B}'} = \mathbf{P}^{-1} [\mathbf{A}]_{\mathcal{B}}\, \mathbf{P}.$$

In wireless systems, change of basis is used constantly. For example, switching between the antenna-domain representation and the beamspace (angular) representation of a channel is a change of basis performed by the DFT matrix. Eigendecomposition and SVD can also be viewed as finding particularly revealing bases.
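A sketch of the antenna-domain-to-beamspace change of basis for a single line-of-sight path; the array size, steering-vector model, and angle below are illustrative assumptions:

```python
import numpy as np

n = 8  # uniform linear array with 8 antennas (illustrative)

# Unitary DFT matrix: its columns are the orthonormal "beam" basis vectors
F = np.fft.fft(np.eye(n)) / np.sqrt(n)

# Steering vector of a single path arriving from angle theta (half-wavelength spacing)
theta = 0.3
a = np.exp(-1j * np.pi * np.arange(n) * np.sin(theta)) / np.sqrt(n)

h_beam = F.conj().T @ a   # change of basis: antenna domain -> beamspace (angular domain)

# For a single-path channel the energy concentrates in a few beamspace coefficients
print(np.round(np.abs(h_beam), 2))
print(np.isclose(np.linalg.norm(h_beam), np.linalg.norm(a)))  # unitary: energy preserved
```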

Definition: Hermitian Matrix

A matrix $\mathbf{A} \in \mathbb{C}^{n \times n}$ is Hermitian (or self-adjoint) if $\mathbf{A}^H = \mathbf{A}$, where $(\cdot)^H$ denotes the conjugate transpose.

Hermitian matrices are the complex generalization of real symmetric matrices. Every covariance matrix in wireless communications is Hermitian positive semidefinite.

Definition: Unitary Matrix

A matrix $\mathbf{U} \in \mathbb{C}^{n \times n}$ is unitary if $\mathbf{U}^H \mathbf{U} = \mathbf{U} \mathbf{U}^H = \mathbf{I}_n$.

Unitary matrices preserve inner products and norms: $\langle \mathbf{U}\mathbf{x}, \mathbf{U}\mathbf{y} \rangle = \langle \mathbf{x}, \mathbf{y} \rangle$. In wireless, DFT matrices and beamforming codebooks are often unitary.

Definition: Positive Definite and Positive Semidefinite Matrices

A Hermitian matrix $\mathbf{A} \in \mathbb{C}^{n \times n}$ is:

  • Positive definite ($\mathbf{A} \succ 0$) if $\mathbf{x}^H \mathbf{A} \mathbf{x} > 0$ for all nonzero $\mathbf{x} \in \mathbb{C}^n$.
  • Positive semidefinite ($\mathbf{A} \succeq 0$) if $\mathbf{x}^H \mathbf{A} \mathbf{x} \geq 0$ for all $\mathbf{x} \in \mathbb{C}^n$.

Every covariance matrix $\mathbf{R} = \mathbb{E}[\mathbf{x}\mathbf{x}^H]$ is positive semidefinite. The MIMO capacity formula $\log\det(\mathbf{I} + \text{SNR} \cdot \mathbf{H}\mathbf{H}^H)$ is well defined because $\mathbf{H}\mathbf{H}^H \succeq 0$, which makes the matrix inside the determinant positive definite.
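A sketch that forms the Gram matrix $\mathbf{H}\mathbf{H}^H$, confirms it is positive semidefinite, and evaluates the capacity expression through its eigenvalues; the sizes, seed, and SNR are assumptions, and base-2 logarithms are used to obtain bits:

```python
import numpy as np

rng = np.random.default_rng(2)
nr, nt, snr = 4, 4, 10.0   # illustrative dimensions and linear-scale SNR

H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
G = H @ H.conj().T          # Gram matrix: Hermitian PSD by construction

eigvals = np.linalg.eigvalsh(G)      # eigvalsh exploits the Hermitian structure
print(np.all(eigvals >= -1e-12))     # True: nonnegative up to roundoff

# log2 det(I + SNR * H H^H) = sum_i log2(1 + SNR * lambda_i)
capacity = np.sum(np.log2(1.0 + snr * eigvals))
print(capacity)
```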

Theorem: Rank–Nullity Theorem

For any matrix $\mathbf{A} \in \mathbb{C}^{m \times n}$:

$$\text{rank}(\mathbf{A}) + \dim(\mathcal{N}(\mathbf{A})) = n.$$

The domain $\mathbb{C}^n$ splits into two complementary pieces: the directions that $\mathbf{A}$ maps to nonzero outputs (contributing to the rank) and the directions that $\mathbf{A}$ collapses to zero (the null space). Together they must account for every dimension of the domain. In a MIMO channel with $n_t$ transmit antennas, if the channel has rank $r$, then $r$ independent streams survive, while $n_t - r$ transmit directions are "wasted" (they lie in the null space of $\mathbf{H}$).

Theorem: Properties of Hermitian Matrices

Let $\mathbf{A} \in \mathbb{C}^{n \times n}$ be Hermitian. Then:

  1. All eigenvalues of $\mathbf{A}$ are real.
  2. Eigenvectors corresponding to distinct eigenvalues are orthogonal.
  3. The diagonal entries $a_{ii}$ are real for all $i$.

A Hermitian matrix is "the same as its own adjoint," so it interacts symmetrically with inner products. This forces eigenvalues to be real (no imaginary component can survive the symmetry) and eigenvectors from different eigenvalues to be orthogonal.
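A numerical illustration of these properties; the symmetrization $(\mathbf{B} + \mathbf{B}^H)/2$ is one standard way to manufacture a Hermitian matrix from a random one:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (B + B.conj().T) / 2                       # Hermitian by construction

w, V = np.linalg.eigh(A)                       # eigh is the Hermitian eigensolver
print(w)                                       # eigenvalues: a real array
print(np.allclose(V.conj().T @ V, np.eye(n)))  # eigenvectors form an orthonormal set
print(np.allclose(np.diag(A).imag, 0))         # diagonal entries are real
```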

Theorem: Unitary Matrices Preserve Inner Products and Norms

Let $\mathbf{U} \in \mathbb{C}^{n \times n}$ be unitary. Then for all $\mathbf{x}, \mathbf{y} \in \mathbb{C}^n$:

  1. $\langle \mathbf{U}\mathbf{x}, \mathbf{U}\mathbf{y} \rangle = \langle \mathbf{x}, \mathbf{y} \rangle$ (inner product preservation).
  2. $\|\mathbf{U}\mathbf{x}\| = \|\mathbf{x}\|$ (norm / energy preservation).

Unitary transformations are "rigid rotations" (possibly with reflections) of complex space: they rotate and relabel axes without stretching or compressing any direction. This is why they preserve lengths and angles.

Special Matrix Classes

| Property | Hermitian ($\mathbf{A}^H = \mathbf{A}$) | Unitary ($\mathbf{U}^H\mathbf{U} = \mathbf{I}$) | Positive Definite ($\mathbf{A} \succ 0$) |
|---|---|---|---|
| Eigenvalues | Real | On unit circle ($\lvert\lambda\rvert = 1$) | Real and positive |
| Diagonalizable? | Always (spectral theorem) | Always | Always (since Hermitian) |
| Determinant | Real | $\lvert\det\rvert = 1$ | Real and positive |
| Inverse | $\mathbf{A}^{-1}$ is Hermitian (if it exists) | $\mathbf{U}^{-1} = \mathbf{U}^H$ (always exists) | $\mathbf{A}^{-1}$ is also positive definite |
| Preserves norms? | Not in general | Yes: $\lVert\mathbf{U}\mathbf{x}\rVert = \lVert\mathbf{x}\rVert$ | Not in general |
| Quadratic form $\mathbf{x}^H \mathbf{A} \mathbf{x}$ | Always real | Complex in general | Always real and positive (for $\mathbf{x} \neq \mathbf{0}$) |
| Telecom example | Covariance matrix $\mathbf{R}_{\mathbf{x}}$ | DFT matrix, beamforming codebook | Noise covariance $\mathbf{R}_{\mathbf{n}} \succ 0$ |
| Requires Hermitian? | By definition | No | Yes, by definition |
| Closure under addition? | Yes | No | Yes (cone) |
| Closure under multiplication? | No (Hermitian iff the factors commute) | Yes | No in general |

Example: Computing Rank, Range, and Null Space

Let

$$\mathbf{A} = \begin{bmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \\ 3 & 6 & 3 \end{bmatrix} \in \mathbb{R}^{3 \times 3}.$$

Find $\text{rank}(\mathbf{A})$, $\mathcal{R}(\mathbf{A})$, and $\mathcal{N}(\mathbf{A})$. Verify the rank–nullity theorem.
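A NumPy check of this example; the rank comes from `matrix_rank` and an orthonormal null-space basis from the trailing right singular vectors (here $\text{rank}(\mathbf{A}) = 1$, since every row is a multiple of the first):

```python
import numpy as np

A = np.array([[1., 2., 1.],
              [2., 4., 2.],
              [3., 6., 3.]])

r = np.linalg.matrix_rank(A)        # 1: all rows are multiples of [1, 2, 1]
U, s, Vh = np.linalg.svd(A)
range_basis = U[:, :r]              # spans R(A) = span([1, 2, 3]^T)
null_basis = Vh[r:].conj().T        # columns: orthonormal basis of N(A)

print(r + null_basis.shape[1])      # rank + nullity = 1 + 2 = 3 = n  (rank-nullity)
print(np.allclose(A @ null_basis, 0))  # A annihilates the null-space basis
```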

Example: The $2 \times 2$ DFT Matrix Is Unitary

Let $\omega = e^{-j 2\pi / 2} = e^{-j\pi} = -1$. The $2 \times 2$ DFT matrix is

$$\mathbf{F}_2 = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ 1 & \omega \end{bmatrix} = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}.$$

Verify that $\mathbf{F}_2$ is unitary.
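A short numerical verification (the test vector is arbitrary):

```python
import numpy as np

F2 = np.array([[1, 1],
               [1, -1]]) / np.sqrt(2)

print(np.allclose(F2.conj().T @ F2, np.eye(2)))   # U^H U = I

x = np.array([3.0 + 1j, -2.0])
print(np.isclose(np.linalg.norm(F2 @ x), np.linalg.norm(x)))  # norm preserved
```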

Example: Checking Positive Definiteness

Show that the matrix $\mathbf{A} = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}$ is positive definite.
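A numerical check via the eigenvalue criterion, plus the Cholesky factorization, which succeeds exactly when a Hermitian matrix is positive definite:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

print(np.linalg.eigvalsh(A))   # [1. 3.]: both eigenvalues strictly positive

# Cholesky succeeds iff the matrix is positive definite
try:
    L = np.linalg.cholesky(A)
    print("positive definite")
except np.linalg.LinAlgError:
    print("not positive definite")
```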

Rank

The rank of a matrix $\mathbf{A} \in \mathbb{C}^{m \times n}$ is the dimension of its column space $\mathcal{R}(\mathbf{A})$, equivalently the number of linearly independent columns (or rows). It satisfies $0 \leq \text{rank}(\mathbf{A}) \leq \min(m, n)$.

Related: Rank of a Matrix, Range (Column Space), Rank–Nullity Theorem

Null Space (Kernel)

The null space of $\mathbf{A} \in \mathbb{C}^{m \times n}$ is the set of all vectors $\mathbf{x} \in \mathbb{C}^n$ satisfying $\mathbf{A}\mathbf{x} = \mathbf{0}$. It is a subspace of $\mathbb{C}^n$ with dimension $n - \text{rank}(\mathbf{A})$.

Related: Null Space (Kernel), Rank–Nullity Theorem, Rank

Hermitian Matrix

A square matrix $\mathbf{A}$ satisfying $\mathbf{A}^H = \mathbf{A}$. Hermitian matrices have real eigenvalues, orthogonal eigenvectors, and are diagonalizable by a unitary similarity transformation (spectral theorem).

Related: Hermitian Matrix, Properties of Hermitian Matrices, Rank

Unitary Matrix

A square matrix $\mathbf{U}$ satisfying $\mathbf{U}^H \mathbf{U} = \mathbf{I}$. Unitary matrices preserve inner products and norms, have eigenvalues on the unit circle, and their inverse is simply $\mathbf{U}^H$.

Related: Unitary Matrix, Unitary Matrices Preserve Inner Products and Norms

Positive Definite Matrix

A Hermitian matrix $\mathbf{A}$ such that $\mathbf{x}^H \mathbf{A} \mathbf{x} > 0$ for every nonzero vector $\mathbf{x}$. Equivalently, all eigenvalues are strictly positive. Positive semidefinite relaxes "$> 0$" to "$\geq 0$."

Related: Positive Definite and Positive Semidefinite Matrices, Hermitian Matrix

Quick Check

Let $\mathbf{A} \in \mathbb{C}^{4 \times 6}$ with $\text{rank}(\mathbf{A}) = 3$. What is $\dim(\mathcal{N}(\mathbf{A}))$?

1

3

4

6

Quick Check

A matrix $\mathbf{H} \in \mathbb{C}^{8 \times 4}$ represents a MIMO channel with 4 transmit and 8 receive antennas. If $\mathcal{N}(\mathbf{H}) = \{\mathbf{0}\}$, what is $\text{rank}(\mathbf{H})$?

4

8

12

Cannot be determined

Quick Check

Which of the following is always true for a unitary matrix $\mathbf{U}$?

$\mathbf{U}$ is Hermitian

$\det(\mathbf{U}) = 1$

$|\det(\mathbf{U})| = 1$

$\mathbf{U}$ is positive definite

Quick Check

Let $\mathbf{A}$ be Hermitian. Which statement is false?

All eigenvalues of $\mathbf{A}$ are real.

All diagonal entries of $\mathbf{A}$ are real.

All entries of $\mathbf{A}$ are real.

$\mathbf{A}$ is diagonalizable.

Quick Check

If $\mathbf{A} \succ 0$ and $\mathbf{B} \succ 0$, is $\mathbf{A} + \mathbf{B} \succ 0$?

Yes, always

Only if A\mathbf{A} and B\mathbf{B} commute

Not necessarily

Common Mistake: Transpose vs. Conjugate Transpose for Complex Matrices

Mistake:

"I need the adjoint of my channel matrix, so I'll just transpose it: HT\mathbf{H}^T."

Correction:

For complex matrices, the adjoint (Hermitian transpose) is the conjugate transpose $\mathbf{H}^H = \overline{\mathbf{H}}^T$, not the plain transpose $\mathbf{H}^T$. Using $\mathbf{H}^T$ instead of $\mathbf{H}^H$ will give wrong results whenever $\mathbf{H}$ has complex entries.

For example, let $\mathbf{h} = \begin{bmatrix} 1 \\ j \end{bmatrix}$. Then $\|\mathbf{h}\|^2 = \mathbf{h}^H \mathbf{h} = 1 + 1 = 2$, but $\mathbf{h}^T \mathbf{h} = 1 + j^2 = 1 - 1 = 0$.

The transpose gives a bilinear form; the conjugate transpose gives a sesquilinear (inner product) form. Only the latter guarantees $\mathbf{x}^H \mathbf{x} \geq 0$ with equality iff $\mathbf{x} = \mathbf{0}$.

Tip: Rule of thumb: in complex-valued signal processing and wireless communications, always use $(\cdot)^H$ unless you have a specific algebraic reason to use $(\cdot)^T$ (e.g., when working with the $\text{vec}$ operator or Kronecker products where the transpose is genuinely intended).
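In NumPy the same pitfall appears as `.T` versus `.conj().T`; the snippet below reproduces the example above (note that `.T` is a no-op on 1-D arrays, so `h.T @ h` is simply the bilinear form):

```python
import numpy as np

h = np.array([1.0, 1j])

print(h.T @ h)               # transpose only: 1 + j^2 = 0  (a useless "norm")
print(h.conj().T @ h)        # conjugate transpose: 2  (the true squared norm)
print(np.vdot(h, h))         # vdot conjugates its first argument: also 2
print(np.linalg.norm(h)**2)  # consistent with the inner-product definition
```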

Common Mistake: Rank Is Not the Number of Nonzero Rows or Columns

Mistake:

"My matrix has 3 nonzero rows, so its rank is 3."

Correction:

Rank is the number of linearly independent rows (or columns), not the number of nonzero ones. For example, $\mathbf{A} = \begin{bmatrix} 1 & 2 \\ 2 & 4 \end{bmatrix}$ has 2 nonzero rows but $\text{rank}(\mathbf{A}) = 1$ because row 2 is $2 \times$ row 1. Always row-reduce (or compute the SVD) to determine rank.

Tip: The most numerically reliable way to compute rank is via the SVD: count the number of singular values that exceed a suitable tolerance.
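A two-line NumPy confirmation of the example above (and of the tip: `matrix_rank` uses the SVD internally):

```python
import numpy as np

A = np.array([[1., 2.],
              [2., 4.]])   # two nonzero rows, but row 2 = 2 * row 1

print(np.count_nonzero(A.any(axis=1)))  # nonzero rows: 2
print(np.linalg.matrix_rank(A))         # rank: 1
```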

⚠️ Engineering Note

Numerical Invertibility and Condition Numbers

The theoretical invertibility criterion $\det(\mathbf{A}) \neq 0$ is useless in floating-point arithmetic. A matrix with $\det(\mathbf{A}) = 10^{-15}$ may be mathematically invertible yet numerically singular; the determinant alone does not measure how close a matrix is to singularity. The practical criterion is the condition number $\kappa(\mathbf{A}) = \sigma_{\max}/\sigma_{\min}$:

  • $\kappa < 10^3$: well-conditioned, safe to invert.
  • $10^3 < \kappa < 10^{10}$: ill-conditioned, use regularization (Tikhonov, diagonal loading).
  • $\kappa > 10^{10}$ (in 64-bit): effectively singular, do not invert.

In MIMO detection, computing $\mathbf{H}^{-1}$ (zero-forcing) fails when the channel is ill-conditioned. Use MMSE regularization $(\mathbf{H}^H\mathbf{H} + \alpha\mathbf{I})^{-1}\mathbf{H}^H$ instead.
Practical Constraints
  • IEEE 754 double precision: ~15 significant digits, so matrices with $\kappa > 10^{15}$ lose all precision

  • 32-bit (single precision): only ~7 digits; $\kappa > 10^7$ is dangerous

  • NumPy: np.linalg.cond(H) computes the condition number; never use np.linalg.inv(H) without checking it first
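A sketch contrasting zero-forcing with MMSE-style regularization on a deliberately ill-conditioned channel; the singular-value profile, noise level, and regularization weight $\alpha$ are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4

# Build an ill-conditioned channel by forcing one tiny singular value
U, _, Vh = np.linalg.svd(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
H = U @ np.diag([1.0, 1.0, 1.0, 1e-7]) @ Vh

print(np.linalg.cond(H))   # ~1e7: check before inverting

x = np.ones(n, dtype=complex)
noise = 1e-3 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
y = H @ x + noise

x_zf = np.linalg.solve(H, y)  # zero-forcing: noise amplified ~1e7x along the weak direction

alpha = 1e-3                  # diagonal loading / MMSE-style regularization
x_mmse = np.linalg.solve(H.conj().T @ H + alpha * np.eye(n), H.conj().T @ y)

print(np.linalg.norm(x_zf - x), np.linalg.norm(x_mmse - x))  # regularized error is far smaller
```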

🔧 Engineering Note

Computing Rank in Practice: SVD Thresholding

The theoretical rank (number of nonzero singular values) is never exactly computable in floating point. In practice, rank is determined by counting singular values above a tolerance:

$$\text{rank}_\epsilon(\mathbf{A}) = |\{i : \sigma_i > \epsilon\}|,$$

where $\epsilon$ is typically $\epsilon = \max(m,n) \cdot \sigma_1 \cdot \varepsilon_{\text{mach}}$ (with $\varepsilon_{\text{mach}} \approx 2.2 \times 10^{-16}$ for double precision). This is the default used by numpy.linalg.matrix_rank.
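A sketch reproducing this thresholding rule by hand on a matrix of known rank and comparing it against `numpy.linalg.matrix_rank` (the sizes and seed are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
m, n, r = 6, 5, 2
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # exact rank 2 by construction

s = np.linalg.svd(A, compute_uv=False)           # singular values, descending
tol = max(m, n) * s[0] * np.finfo(A.dtype).eps   # the tolerance described above
rank_manual = int(np.sum(s > tol))

print(s)                                         # two large values, the rest at roundoff level
print(rank_manual, np.linalg.matrix_rank(A))     # both report 2
```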

Practical Constraints
  • SVD cost: $O(mn \min(m,n))$ for an $m \times n$ matrix

  • For large MIMO systems ($n_t = 64$), the SVD of $\mathbf{H} \in \mathbb{C}^{64 \times 64}$ costs ~$10^6$ flops

Key Takeaway

For $\mathbf{A} \in \mathbb{C}^{m \times n}$, the domain $\mathbb{C}^n$ decomposes as $\mathbb{C}^n = \mathcal{R}(\mathbf{A}^H) \oplus \mathcal{N}(\mathbf{A})$ (an orthogonal direct sum). The rank–nullity theorem $\text{rank}(\mathbf{A}) + \dim(\mathcal{N}(\mathbf{A})) = n$ is the dimension count of this decomposition. In MIMO, this tells us exactly how many independent data streams the channel supports.

Key Takeaway

Three matrix classes appear everywhere in telecommunications:

  • Hermitian ($\mathbf{A}^H = \mathbf{A}$): covariance matrices, Fisher information matrices, optimization cost matrices. Key property: real eigenvalues.
  • Unitary ($\mathbf{U}^H \mathbf{U} = \mathbf{I}$): DFT matrices, beamforming codebooks, precoder/combiner matrices in capacity-achieving schemes. Key property: norm preservation.
  • Positive (semi)definite ($\mathbf{A} \succeq 0$): covariance matrices, Gram matrices $\mathbf{H}^H \mathbf{H}$. Key property: nonnegative eigenvalues guarantee meaningful power/energy interpretations.

The spectral theorem unifies them: every Hermitian matrix is unitarily diagonalizable with real eigenvalues, and it is positive (semi)definite iff those eigenvalues are all positive (nonnegative).

Key Takeaway

Many wireless signal processing operations are changes of basis in disguise. The DFT transforms from antenna domain to beamspace (angular domain). The SVD of a channel matrix provides the optimal transmit and receive bases that diagonalize the channel into independent parallel sub-channels. Recognizing an operation as a change of basis often reveals the underlying structure and suggests efficient implementations.

Why This Matters: The Channel Matrix Maps Transmit to Receive Signal Space

In a narrowband MIMO system with $n_t$ transmit antennas and $n_r$ receive antennas, the channel is described by $\mathbf{y} = \mathbf{H}\mathbf{x} + \mathbf{n}$, where $\mathbf{H} \in \mathbb{C}^{n_r \times n_t}$ is the channel matrix. The concepts of this section directly govern system performance:

  • Range: $\mathcal{R}(\mathbf{H}) \subseteq \mathbb{C}^{n_r}$ is the set of noiseless received signals achievable by varying $\mathbf{x}$. Only the projection of $\mathbf{n}$ onto $\mathcal{R}(\mathbf{H})$ matters for detection; noise in $\mathcal{R}(\mathbf{H})^\perp$ is irrelevant.

  • Null space: $\mathcal{N}(\mathbf{H}) \subseteq \mathbb{C}^{n_t}$ consists of transmit directions that produce zero received signal. In multiuser MIMO, one user's precoder is deliberately chosen in $\mathcal{N}(\mathbf{H}_k)$ of the interfered user $k$ (zero-forcing).

  • Rank: $\text{rank}(\mathbf{H})$ is the number of independent spatial streams the channel supports. In rich scattering, $\text{rank}(\mathbf{H}) = \min(n_t, n_r)$ with probability 1 (full rank). In a pure line-of-sight channel with a single dominant path, $\text{rank}(\mathbf{H}) = 1$ (rank-deficient), and spatial multiplexing is impossible.

  • Hermitian structure: $\mathbf{H}^H \mathbf{H} \in \mathbb{C}^{n_t \times n_t}$ and $\mathbf{H}\mathbf{H}^H \in \mathbb{C}^{n_r \times n_r}$ are Hermitian positive semidefinite. Their nonzero eigenvalues are the squared singular values of $\mathbf{H}$, which determine the SNR on each spatial sub-channel.

  • Unitary precoders/combiners: Capacity-achieving transmission on the MIMO channel uses the SVD $\mathbf{H} = \mathbf{U}\mathbf{\Sigma}\mathbf{V}^H$. The transmit precoder $\mathbf{V}$ and receive combiner $\mathbf{U}^H$ are unitary, ensuring no energy is wasted in the transformation; the sketch after this list checks the resulting diagonalization numerically.
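A minimal check that SVD precoding and combining turn a random channel into independent parallel sub-channels (dimensions and seed are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
nr, nt = 4, 4
H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)

U, s, Vh = np.linalg.svd(H)
H_eff = U.conj().T @ H @ Vh.conj().T    # combine with U^H, precode with V

print(np.allclose(H_eff, np.diag(s)))  # True: the effective channel is diagonal
```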

Why This Matters: Null Space and Zero-Forcing in Multiuser MIMO

Consider a base station with $n_t$ antennas serving two single-antenna users with channel vectors $\mathbf{h}_1^H, \mathbf{h}_2^H \in \mathbb{C}^{1 \times n_t}$. To eliminate inter-user interference, the zero-forcing precoder for user 1 must satisfy $\mathbf{h}_2^H \mathbf{f}_1 = 0$, i.e., $\mathbf{f}_1 \in \mathcal{N}(\mathbf{h}_2^H)$.

By the rank–nullity theorem, $\dim(\mathcal{N}(\mathbf{h}_2^H)) = n_t - \text{rank}(\mathbf{h}_2^H) = n_t - 1$ (assuming $\mathbf{h}_2 \neq \mathbf{0}$). Thus the precoder has $n_t - 1$ degrees of freedom: enough to also maximize the signal power $|\mathbf{h}_1^H \mathbf{f}_1|^2$ subject to the null-space constraint. This is the foundation of zero-forcing beamforming, one of the most widely used multiuser MIMO techniques.
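A sketch of this construction for random illustrative channels: project $\mathbf{h}_1$ onto the orthogonal complement of $\mathbf{h}_2$ (which is exactly $\mathcal{N}(\mathbf{h}_2^H)$) and normalize; that projection maximizes $|\mathbf{h}_1^H \mathbf{f}_1|$ among unit-norm vectors in the null space:

```python
import numpy as np

rng = np.random.default_rng(7)
nt = 4
h1 = rng.standard_normal(nt) + 1j * rng.standard_normal(nt)
h2 = rng.standard_normal(nt) + 1j * rng.standard_normal(nt)

# Orthogonal projector onto N(h2^H), i.e., onto the complement of span(h2)
P = np.eye(nt) - np.outer(h2, h2.conj()) / np.linalg.norm(h2)**2

f1 = P @ h1
f1 /= np.linalg.norm(f1)           # unit-power zero-forcing precoder for user 1

print(np.abs(h2.conj() @ f1))      # ~0: no interference leaks to user 2
print(np.abs(h1.conj() @ f1)**2)   # signal power delivered to user 1
```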