Vector Spaces and Subspaces
Why Vector Spaces for Wireless Communications?
Every modern wireless system transmits and receives vectors. When a base station equipped with $N_t$ antennas sends a symbol vector $\mathbf{x} \in \mathbb{C}^{N_t}$, a receiver with $N_r$ antennas observes

$$\mathbf{y} = \mathbf{H}\mathbf{x} + \mathbf{n},$$

where $\mathbf{H} \in \mathbb{C}^{N_r \times N_t}$ is the channel matrix and $\mathbf{n} \in \mathbb{C}^{N_r}$ is additive noise. The entirety of MIMO theory --- beamforming, spatial multiplexing, interference alignment --- reduces to geometric operations inside complex vector spaces: projections, subspace decompositions, and changes of basis.
A rigorous command of vector-space fundamentals is therefore not mathematical overhead; it is the language in which capacity-achieving schemes are conceived and analysed. This section builds that language from the axioms up, with $\mathbb{C}^n$ always in view as the workhorse space of communications engineering.
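The input-output relation above is easy to simulate. The following NumPy sketch draws a random channel and noise realization; the antenna counts and noise level are illustrative choices, not values from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
n_t, n_r = 2, 4   # illustrative transmit / receive antenna counts

# Symbol vector x in C^{n_t}, channel matrix H in C^{n_r x n_t}, noise in C^{n_r}
x = rng.standard_normal(n_t) + 1j * rng.standard_normal(n_t)
H = rng.standard_normal((n_r, n_t)) + 1j * rng.standard_normal((n_r, n_t))
noise = 0.01 * (rng.standard_normal(n_r) + 1j * rng.standard_normal(n_r))

# The MIMO input-output relation y = Hx + n
y = H @ x + noise
print(y.shape)   # (4,) -- the receiver observes a vector in C^{n_r}
```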
Definition: Field
Field
A field is a set $\mathbb{F}$ together with two binary operations, addition ($+$) and multiplication ($\cdot$), satisfying the usual axioms of commutativity, associativity, distributivity, existence of an additive identity $0$, a multiplicative identity $1$, additive inverses, and multiplicative inverses for every nonzero element.
Throughout this text the relevant fields are:
- $\mathbb{R}$, the real numbers, and
- $\mathbb{C}$, the complex numbers (where $j^2 = -1$).

Engineering convention uses $j$ for the imaginary unit (reserving $i$ for current). We follow this convention throughout.
Definition: Vector Space
Vector Space
Let $\mathbb{F}$ be a field. A vector space over $\mathbb{F}$ is a non-empty set $V$ equipped with two operations --- vector addition $\mathbf{u} + \mathbf{v}$ and scalar multiplication $a\mathbf{v}$ --- satisfying, for all $\mathbf{u}, \mathbf{v}, \mathbf{w} \in V$ and all $a, b \in \mathbb{F}$:
| # | Axiom | Statement |
|---|---|---|
| A1 | Additive closure | $\mathbf{u} + \mathbf{v} \in V$ |
| A2 | Additive commutativity | $\mathbf{u} + \mathbf{v} = \mathbf{v} + \mathbf{u}$ |
| A3 | Additive associativity | $(\mathbf{u} + \mathbf{v}) + \mathbf{w} = \mathbf{u} + (\mathbf{v} + \mathbf{w})$ |
| A4 | Existence of zero vector | There exists $\mathbf{0} \in V$ such that $\mathbf{v} + \mathbf{0} = \mathbf{v}$ |
| A5 | Existence of additive inverse | For each $\mathbf{v} \in V$ there exists $-\mathbf{v} \in V$ with $\mathbf{v} + (-\mathbf{v}) = \mathbf{0}$ |
| S1 | Scalar-multiplication closure | $a\mathbf{v} \in V$ |
| S2 | Compatibility of scalar multiplication | $a(b\mathbf{v}) = (ab)\mathbf{v}$ |
| S3 | Multiplicative identity | $1\mathbf{v} = \mathbf{v}$ |
| D1 | Distributivity over vector addition | $a(\mathbf{u} + \mathbf{v}) = a\mathbf{u} + a\mathbf{v}$ |
| D2 | Distributivity over scalar addition | $(a + b)\mathbf{v} = a\mathbf{v} + b\mathbf{v}$ |
The axioms are not independent (e.g., additive closure is sometimes folded into the definition of the addition map), but listing all eight properties plus the two closure conditions makes verification of concrete examples systematic.
Definition: The Space $\mathbb{C}^n$
The Space $\mathbb{C}^n$
For a positive integer $n$, define

$$\mathbb{C}^n = \{\mathbf{x} = (x_1, x_2, \dots, x_n) : x_i \in \mathbb{C}\}$$

with component-wise addition and scalar multiplication:

$$\mathbf{x} + \mathbf{y} = (x_1 + y_1, \dots, x_n + y_n), \qquad a\mathbf{x} = (ax_1, \dots, ax_n), \quad a \in \mathbb{C}.$$

Under these operations $\mathbb{C}^n$ is a vector space over $\mathbb{C}$. The zero vector is $\mathbf{0} = (0, 0, \dots, 0)$.
In wireless communications $n$ usually equals the number of antenna elements at one end of the link. A transmit vector $\mathbf{x} \in \mathbb{C}^n$ assigns a complex baseband signal to each of $n$ antennas.
Definition: Subspace
Subspace
Let $V$ be a vector space over $\mathbb{F}$. A non-empty subset $W \subseteq V$ is a subspace of $V$ if and only if the following three conditions hold:

- Contains the zero vector: $\mathbf{0} \in W$.
- Closed under addition: $\mathbf{u}, \mathbf{v} \in W \implies \mathbf{u} + \mathbf{v} \in W$.
- Closed under scalar multiplication: $a \in \mathbb{F}, \mathbf{v} \in W \implies a\mathbf{v} \in W$.

Equivalently, $W$ is a subspace if and only if $a\mathbf{u} + b\mathbf{v} \in W$ for all $\mathbf{u}, \mathbf{v} \in W$ and all $a, b \in \mathbb{F}$ (closure under linear combinations).
Condition 1 can be replaced by the requirement that $W$ is non-empty, since taking $a = 0$ in condition 3 then gives $\mathbf{0} = 0\mathbf{v} \in W$. We list it explicitly for clarity.
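To make the three conditions concrete, the sketch below numerically spot-checks them for a hypothetical subset $W = \{\mathbf{x} \in \mathbb{C}^2 : x_1 + x_2 = 0\}$; this particular $W$, its members, and the scalars are our own illustrative choices:

```python
import numpy as np

def in_W(x, tol=1e-9):
    """Membership test for the illustrative subset W = {x in C^2 : x[0] + x[1] = 0}."""
    return abs(x[0] + x[1]) < tol

u = np.array([1 + 2j, -(1 + 2j)])   # two members of W
v = np.array([3 - 1j, -(3 - 1j)])
a, b = 2 - 0.5j, -1 + 1j            # arbitrary complex scalars

assert in_W(np.zeros(2))       # condition 1: contains the zero vector
assert in_W(u + v)             # condition 2: closed under addition
assert in_W(a * u)             # condition 3: closed under scalar multiplication
assert in_W(a * u + b * v)     # equivalent form: closed under linear combinations
print("all subspace conditions hold for W")
```

A spot-check is not a proof, of course; the algebraic argument (a sum or scaling of solutions of $x_1 + x_2 = 0$ again solves it) is what establishes the subspace property.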
Definition: Linear Combination
Linear Combination
Let $V$ be a vector space over $\mathbb{F}$ and let $\mathbf{v}_1, \dots, \mathbf{v}_k \in V$. A vector $\mathbf{v} \in V$ is a linear combination of $\mathbf{v}_1, \dots, \mathbf{v}_k$ if there exist scalars $a_1, \dots, a_k \in \mathbb{F}$ such that

$$\mathbf{v} = a_1\mathbf{v}_1 + a_2\mathbf{v}_2 + \cdots + a_k\mathbf{v}_k.$$
Definition: Span
Span
Let $S \subseteq V$. The span of $S$ is the set of all linear combinations of vectors in $S$:

$$\operatorname{span}(S) = \left\{\sum_{i=1}^{k} a_i\mathbf{v}_i : k \in \mathbb{N},\ \mathbf{v}_i \in S,\ a_i \in \mathbb{F}\right\}.$$

By convention $\operatorname{span}(\emptyset) = \{\mathbf{0}\}$.
$\operatorname{span}(S)$ is always a subspace of $V$ --- in fact it is the smallest subspace containing $S$. If $\operatorname{span}(S) = V$ we say $S$ spans (or generates) $V$.
Definition: Linear Independence
Linear Independence
A set $\{\mathbf{v}_1, \dots, \mathbf{v}_k\} \subseteq V$ is linearly independent if the equation

$$a_1\mathbf{v}_1 + a_2\mathbf{v}_2 + \cdots + a_k\mathbf{v}_k = \mathbf{0}$$

implies $a_1 = a_2 = \cdots = a_k = 0$.
A set that is not linearly independent is called linearly dependent; equivalently, at least one vector in the set can be written as a linear combination of the others.
Definition: Basis
Basis
A set $B \subseteq V$ is a basis for $V$ if:

- $B$ is linearly independent, and
- $\operatorname{span}(B) = V$.

Equivalently, $B = \{\mathbf{b}_1, \dots, \mathbf{b}_n\}$ is a basis if and only if every vector $\mathbf{v} \in V$ can be written uniquely as

$$\mathbf{v} = a_1\mathbf{b}_1 + a_2\mathbf{b}_2 + \cdots + a_n\mathbf{b}_n.$$

The standard (canonical) basis for $\mathbb{C}^n$ is $\{\mathbf{e}_1, \dots, \mathbf{e}_n\}$, where $\mathbf{e}_i$ has a $1$ in position $i$ and $0$ elsewhere.
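The unique-expansion property is exactly what solving a linear system computes: the coordinates of $\mathbf{v}$ in the basis $\{\mathbf{b}_1, \dots, \mathbf{b}_n\}$ solve $B\mathbf{a} = \mathbf{v}$ with $B = [\mathbf{b}_1 \cdots \mathbf{b}_n]$. A sketch with a hypothetical basis of $\mathbb{C}^3$ (the matrix and vector are illustrative):

```python
import numpy as np

# Columns b_1, b_2, b_3 form an (illustrative) invertible basis matrix for C^3
B = np.array([[1, 0, 1],
              [0, 1, 1j],
              [0, 0, 1]], dtype=complex)
v = np.array([2, 1j, 1], dtype=complex)

# The unique coordinates a with v = a_1 b_1 + a_2 b_2 + a_3 b_3
a = np.linalg.solve(B, v)
print(np.allclose(B @ a, v))   # True -- the expansion reconstructs v exactly
```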
Definition: Dimension
Dimension
The dimension of a vector space $V$, denoted $\dim V$, is the number of vectors in any basis for $V$.
This is well-defined because all bases of $V$ have the same cardinality (see the theorem Uniqueness of Dimension (Invariance of Basis Cardinality) below).
In particular, $\dim \mathbb{C}^n = n$ (over $\mathbb{C}$).
Theorem: Uniqueness of Dimension (Invariance of Basis Cardinality)
Let $V$ be a vector space over a field $\mathbb{F}$. If $V$ has a finite basis, then every basis of $V$ is finite and all bases have the same number of elements.
A basis is a "maximally efficient coordinate system" for the space. Because every vector has a unique expansion in a given basis, no basis can be "shorter" than another (it would miss some direction) or "longer" (it would contain a redundancy). The Steinitz exchange argument formalises this by showing that independent vectors can always be swapped into a spanning set one-by-one without losing the spanning property.
Use the Steinitz Exchange Lemma: if a set of $m$ vectors is linearly independent and another set of $n$ vectors spans the space, then $m \le n$.
Apply the lemma twice, once in each direction.
Steinitz Exchange Lemma
We first establish the following key lemma.
Lemma (Steinitz Exchange). Let $\{\mathbf{u}_1, \dots, \mathbf{u}_m\}$ be a linearly independent set in $V$ and let $\{\mathbf{w}_1, \dots, \mathbf{w}_n\}$ be a set that spans $V$. Then $m \le n$, and there exists a subset of $\{\mathbf{w}_1, \dots, \mathbf{w}_n\}$ of size $n - m$ such that its union with $\{\mathbf{u}_1, \dots, \mathbf{u}_m\}$ still spans $V$.
Proof of the lemma. We proceed by induction on $m$.
Base case ($m = 0$). The empty set is trivially linearly independent, $0 \le n$ is obvious, and $\{\mathbf{w}_1, \dots, \mathbf{w}_n\}$ itself spans $V$.
Inductive step. Assume the result holds for $m - 1$. By the inductive hypothesis there exists a re-indexing of $\{\mathbf{w}_1, \dots, \mathbf{w}_n\}$ such that

$$\{\mathbf{u}_1, \dots, \mathbf{u}_{m-1}, \mathbf{w}_m, \dots, \mathbf{w}_n\}$$

spans $V$ (and $m - 1 \le n$). Since this set spans $V$, write

$$\mathbf{u}_m = \sum_{i=1}^{m-1} a_i\mathbf{u}_i + \sum_{j=m}^{n} b_j\mathbf{w}_j.$$

Not all $b_j$ can be zero, for otherwise $\mathbf{u}_m$ would be a linear combination of $\mathbf{u}_1, \dots, \mathbf{u}_{m-1}$, contradicting the linear independence of $\{\mathbf{u}_1, \dots, \mathbf{u}_m\}$. Hence $m \le n$ (there is at least one term $b_j\mathbf{w}_j$) and we may choose some index $j^*$ with $b_{j^*} \ne 0$. Solving for $\mathbf{w}_{j^*}$ and substituting shows that

$$\{\mathbf{u}_1, \dots, \mathbf{u}_m, \mathbf{w}_{m+1}, \dots, \mathbf{w}_n\}$$

(after re-indexing so that $\mathbf{w}_{j^*}$ becomes $\mathbf{w}_m$) still spans $V$. This completes the induction and proves the lemma.
Applying the lemma to two bases
Now let $B_1 = \{\mathbf{v}_1, \dots, \mathbf{v}_p\}$ and $B_2 = \{\mathbf{w}_1, \dots, \mathbf{w}_q\}$ be two bases of $V$.

- $B_1$ is linearly independent and $B_2$ spans $V$, so the Steinitz Exchange Lemma gives $p \le q$.
- $B_2$ is linearly independent and $B_1$ spans $V$, so the Steinitz Exchange Lemma gives $q \le p$.

Therefore $p = q$. Every basis of $V$ has the same cardinality, and the dimension is well-defined.
Theorem: Extension of Linearly Independent Sets to a Basis
Let $V$ be a finite-dimensional vector space with $\dim V = n$. Every linearly independent set $\{\mathbf{v}_1, \dots, \mathbf{v}_k\} \subseteq V$ with $k \le n$ can be extended to a basis for $V$. In particular, every linearly independent set in $V$ contains at most $n$ vectors.
If a linearly independent set does not yet span the whole space, there is some direction it "misses." We can always adjoin a vector from that missing direction without destroying independence, and repeat until we have $n$ vectors --- at which point the Steinitz argument guarantees we span the space.
Iteratively adjoin vectors that lie outside the current span.
Use the Steinitz Exchange Lemma to show the process terminates.
Iterative extension procedure
Let $S = \{\mathbf{v}_1, \dots, \mathbf{v}_k\}$ be linearly independent with $k \le n$. Fix a basis $\{\mathbf{b}_1, \dots, \mathbf{b}_n\}$ for $V$.
If $\operatorname{span}(S) = V$ then $S$ is already a basis and $k = n$ by the theorem Uniqueness of Dimension (Invariance of Basis Cardinality).
Otherwise, there exists some $\mathbf{b}_j$ with $\mathbf{b}_j \notin \operatorname{span}(S)$: if every $\mathbf{b}_j$ lay in $\operatorname{span}(S)$, then $\operatorname{span}(S)$ would contain $\operatorname{span}(\mathbf{b}_1, \dots, \mathbf{b}_n) = V$. Set $S' = S \cup \{\mathbf{b}_j\}$.
Preservation of independence
We claim $S' = \{\mathbf{v}_1, \dots, \mathbf{v}_k, \mathbf{b}_j\}$ is linearly independent. Suppose

$$a_1\mathbf{v}_1 + \cdots + a_k\mathbf{v}_k + c\,\mathbf{b}_j = \mathbf{0}.$$

If $c \ne 0$ we could solve for $\mathbf{b}_j$ as a linear combination of $\mathbf{v}_1, \dots, \mathbf{v}_k$, contradicting $\mathbf{b}_j \notin \operatorname{span}(S)$. Hence $c = 0$, and then all $a_i = 0$ by the independence of $S$.
Termination
Repeat the extension: at each stage the cardinality increases by one and the set remains linearly independent. By the Steinitz Exchange Lemma, a linearly independent set in $V$ has at most $n$ elements, so the process terminates after at most $n - k$ additions. The resulting set has $n$ linearly independent vectors in an $n$-dimensional space, hence it spans $V$ (otherwise one could adjoin yet another vector, producing $n + 1$ independent vectors, contradicting the Steinitz bound). Therefore it is a basis.
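The iterative procedure in this proof is directly implementable: greedily adjoin basis vectors whose inclusion raises the rank of the collected set. A NumPy sketch, with illustrative starting vectors in $\mathbb{C}^4$:

```python
import numpy as np

def extend_to_basis(S, B):
    """Greedily extend the columns of S (assumed linearly independent) to a basis
    of C^n, drawing candidate vectors from the columns of the basis matrix B."""
    current = list(S.T)
    for b in B.T:
        candidate = np.column_stack(current + [b])
        # adjoin b only if it lies outside span(current), i.e. the rank grows
        if np.linalg.matrix_rank(candidate) > len(current):
            current.append(b)
    return np.column_stack(current)

n = 4
S = np.array([[1, 0], [0, 1], [1, 1], [0, 0]], dtype=complex).reshape(4, 2)  # two independent vectors
B = np.eye(n, dtype=complex)                                                 # standard basis e_1..e_4

basis = extend_to_basis(S, B)
print(basis.shape, np.linalg.matrix_rank(basis))   # (4, 4) 4 -> a full basis
```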
Example: Linear Independence and Basis Verification in $\mathbb{C}^3$
Consider three vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3 \in \mathbb{C}^3$.
(a) Determine whether $\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ is linearly independent over $\mathbb{C}$.
(b) If it is, verify that it forms a basis for $\mathbb{C}^3$.
Set up the dependence equation
We require all solutions of

$$c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + c_3\mathbf{v}_3 = \mathbf{0}.$$

Writing this component-wise gives three linear equations in the unknowns $c_1, c_2, c_3$.
Row-reduce the coefficient matrix
Form the coefficient matrix $V = [\mathbf{v}_1 \ \mathbf{v}_2 \ \mathbf{v}_3]$, whose columns are the three vectors, and row-reduce:
Step 1. Eliminate the entries below the first pivot via $R_2 \leftarrow R_2 - \tfrac{v_{21}}{v_{11}}R_1$ and $R_3 \leftarrow R_3 - \tfrac{v_{31}}{v_{11}}R_1$, valid since $v_{11} \ne 0$.
Step 2. Eliminate the entry below the second pivot via $R_3 \leftarrow R_3 - \tfrac{\tilde{v}_{32}}{\tilde{v}_{22}}R_2$.
Since all three pivots are nonzero, the matrix has rank $3$.
Conclude independence and basis
(a) The only solution is $c_1 = c_2 = c_3 = 0$, so the set is linearly independent over $\mathbb{C}$.
(b) Because $\dim \mathbb{C}^3 = 3$ and we have three linearly independent vectors in $\mathbb{C}^3$, the set is a basis for $\mathbb{C}^3$.
Alternatively, one can compute $\det V$; a nonzero determinant confirms invertibility and hence that the columns span $\mathbb{C}^3$.
Remark: In practice, checking $\det V \ne 0$ is the fastest test by hand, but row reduction reveals the structure more clearly and is numerically more stable for large systems.
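Both tests in the example --- rank via elimination and the determinant --- are one-liners in NumPy. The three vectors below are hypothetical stand-ins chosen only to illustrate the procedure:

```python
import numpy as np

# Hypothetical concrete vectors standing in for v1, v2, v3 in the worked example
v1 = np.array([1, 0, 1], dtype=complex)
v2 = np.array([1j, 1, 0], dtype=complex)
v3 = np.array([0, 1j, 2], dtype=complex)

V = np.column_stack([v1, v2, v3])   # coefficient matrix with the vectors as columns

rank = np.linalg.matrix_rank(V)
det = np.linalg.det(V)

print(rank)              # 3 -> the columns are linearly independent
print(abs(det) > 1e-12)  # True -> V is invertible, so the columns form a basis of C^3
```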
Vector space
A set $V$ over a field $\mathbb{F}$ with addition and scalar multiplication operations satisfying the ten axioms (A1--A5, S1--S3, D1--D2) listed in Definition: Vector Space.
Related: Vector Space, Subspace, Basis
Dimension
The cardinality of any basis for a vector space $V$. All bases share the same cardinality by the theorem Uniqueness of Dimension (Invariance of Basis Cardinality). Notation: $\dim V$.
Related: Dimension, Uniqueness of Dimension (Invariance of Basis Cardinality), Basis
Span
The set of all linear combinations of a given collection of vectors. $\operatorname{span}(S)$ is the smallest subspace containing $S$.
Related: Span, Linear Combination, Subspace
Quick Check
What is $\dim_{\mathbb{R}} \mathbb{C}^n$, the dimension of $\mathbb{C}^n$ when it is viewed as a vector space over $\mathbb{R}$ (rather than over $\mathbb{C}$)?
Over $\mathbb{R}$, every complex component requires two real coordinates. A real basis for $\mathbb{C}^n$ is $\{\mathbf{e}_1, j\mathbf{e}_1, \dots, \mathbf{e}_n, j\mathbf{e}_n\}$, which has $2n$ elements. Hence $\dim_{\mathbb{R}} \mathbb{C}^n = 2n$.
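This count can be checked numerically by identifying $\mathbb{C}^n$ with $\mathbb{R}^{2n}$ (stacking real and imaginary parts) and computing the rank of the realified basis; the helper below is our own construction:

```python
import numpy as np

def realify(x):
    """Map a vector in C^n to R^{2n} by stacking real and imaginary parts."""
    return np.concatenate([x.real, x.imag])

n = 3
# The 2n-element real basis {e_1, j e_1, ..., e_n, j e_n} of C^n viewed over R
basis_C = [np.eye(n, dtype=complex)[k] * s for k in range(n) for s in (1, 1j)]
M = np.column_stack([realify(v) for v in basis_C])   # 2n x 2n real matrix

print(np.linalg.matrix_rank(M))   # 6 -> 2n independent directions over R
```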
Quick Check
Which of the following subsets of $\mathbb{C}^2$ is a subspace (over $\mathbb{C}$)?

- the solution set of a homogeneous linear equation in $x_1, x_2$
- $\{\mathbf{x} \in \mathbb{C}^2 : \|\mathbf{x}\| \le 1\}$ (the closed unit ball)
- $\{\mathbf{v}\}$ for a fixed $\mathbf{v} \ne \mathbf{0}$ (a single nonzero vector)

Only the first set is a subspace: the solution space of a homogeneous linear equation is closed under addition and scalar multiplication and contains the zero vector. The unit ball fails closure under scalar multiplication (scaling by $a$ with $|a| > 1$ leaves the ball), and the singleton $\{\mathbf{v}\}$ does not even contain $\mathbf{0}$.
Why This Matters: $\mathbb{C}^n$ and Antenna Arrays
In a MIMO (Multiple-Input Multiple-Output) system, the baseband signal at a uniform linear array (ULA) of $n$ antenna elements is naturally represented as a vector $\mathbf{x} \in \mathbb{C}^n$. The $i$-th component $x_i$ is the complex baseband signal --- amplitude and phase --- fed to the $i$-th element.
The array steering vector for a plane wave arriving at angle $\theta$ takes the form

$$\mathbf{a}(\theta) = \begin{bmatrix} 1 & e^{-j2\pi \frac{d}{\lambda}\sin\theta} & \cdots & e^{-j2\pi (n-1)\frac{d}{\lambda}\sin\theta} \end{bmatrix}^{T},$$

where $d$ is the inter-element spacing and $\lambda$ is the carrier wavelength.
Key link to this section: The set $\{\mathbf{a}(\theta_1), \dots, \mathbf{a}(\theta_K)\}$ for $K$ distinct directions of arrival is linearly independent (for almost all choices of $d/\lambda$) whenever $K \le n$. This is precisely the linear-independence concept of Definition: Linear Independence applied to $\mathbb{C}^n$. A receiver with $n$ antennas can therefore resolve up to $n$ spatial paths --- a fact that underpins spatial multiplexing and beamforming gain.
See full treatment in Chapter 3, Section 2.
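The independence claim is easy to probe numerically with the steering-vector formula above; the parameters here ($n = 4$, $d/\lambda = 0.5$, three arrival angles) are illustrative:

```python
import numpy as np

def steering(theta, n, d_over_lambda=0.5):
    """ULA steering vector a(theta) in C^n for element spacing d and wavelength lambda."""
    k = np.arange(n)
    return np.exp(-2j * np.pi * k * d_over_lambda * np.sin(theta))

n = 4                                      # antenna elements
thetas = np.deg2rad([-30.0, 10.0, 45.0])   # three distinct directions of arrival

A = np.column_stack([steering(t, n) for t in thetas])   # n x K steering matrix
print(np.linalg.matrix_rank(A))   # 3 -> the K = 3 steering vectors are independent
```

For a ULA the steering matrix has Vandermonde structure, which is why distinct angles (with distinct $\frac{d}{\lambda}\sin\theta$ modulo 1) give independent columns.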
Why This Matters: Signal and Noise Subspaces
After receiving $T$ snapshots $\mathbf{y}_1, \dots, \mathbf{y}_T$ from $n$ antennas, the sample covariance matrix $\hat{\mathbf{R}} = \frac{1}{T}\sum_{t=1}^{T}\mathbf{y}_t\mathbf{y}_t^{H}$ can be eigen-decomposed. Its column space splits into a signal subspace (spanned by eigenvectors corresponding to the largest eigenvalues) and a noise subspace (spanned by the remaining eigenvectors). These are complementary subspaces of $\mathbb{C}^n$ in the sense of Definition: Subspace. Algorithms such as MUSIC and ESPRIT exploit this decomposition for high-resolution direction-of-arrival estimation.
See full treatment in Chapter 7, Section 3.
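A minimal sketch of the subspace split, assuming a toy model with $K$ random sources mixed by a random matrix plus white noise (all parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, K, T = 6, 2, 2000   # antennas, sources, snapshots (illustrative)

# K sources mixed by a random n x K matrix, plus white complex noise
A = rng.standard_normal((n, K)) + 1j * rng.standard_normal((n, K))
S = rng.standard_normal((K, T)) + 1j * rng.standard_normal((K, T))
Y = A @ S + 0.1 * (rng.standard_normal((n, T)) + 1j * rng.standard_normal((n, T)))

R_hat = (Y @ Y.conj().T) / T            # sample covariance, n x n Hermitian
eigval, eigvec = np.linalg.eigh(R_hat)  # eigenvalues in ascending order

E_s = eigvec[:, -K:]    # signal subspace: K dominant eigenvectors
E_n = eigvec[:, :-K]    # noise subspace: remaining n - K eigenvectors

# The two subspaces are orthogonal complements in C^n
print(np.allclose(E_s.conj().T @ E_n, 0))   # True
print(E_s.shape[1] + E_n.shape[1])          # 6 = dim C^n
```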
Common Mistake: Confusing $\dim_{\mathbb{C}} \mathbb{C}^n$ with $\dim_{\mathbb{R}} \mathbb{C}^n$
Mistake:
Treating $\dim_{\mathbb{C}}$ and $\dim_{\mathbb{R}}$ as interchangeable. A common error is to claim that a set of $2n$ vectors in $\mathbb{C}^n$ can be linearly independent (over $\mathbb{C}$), reasoning that $\mathbb{C}^n$ "has $2n$ real degrees of freedom."
Correction:
$\mathbb{C}^n$ is $n$-dimensional over $\mathbb{C}$ and $2n$-dimensional over $\mathbb{R}$. The dimension depends on the scalar field.
- Over $\mathbb{C}$: at most $n$ vectors can be linearly independent.
- Over $\mathbb{R}$: the same space has dimension $2n$, and up to $2n$ vectors can be $\mathbb{R}$-linearly independent.
In wireless communications the scalar field is almost always $\mathbb{C}$ (complex baseband representation), so the relevant dimension is $n$.
Example: In $\mathbb{C}^1$, the vectors $1$ and $j$ are $\mathbb{R}$-linearly independent but $\mathbb{C}$-linearly dependent (since $j = j \cdot 1$).
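The $1$-versus-$j$ example can be verified by computing ranks over each field; over $\mathbb{R}$ we identify $\mathbb{C}^1$ with $\mathbb{R}^2$ via $x \mapsto (\operatorname{Re} x, \operatorname{Im} x)$:

```python
import numpy as np

# In C^1, the "vectors" 1 and j are the length-1 arrays below
v1 = np.array([1.0 + 0j])
v2 = np.array([0.0 + 1j])

# Over C: stack as columns of a 1 x 2 complex matrix -> rank 1 (dependent, j = j * 1)
rank_C = np.linalg.matrix_rank(np.column_stack([v1, v2]))

# Over R: realify each vector into R^2 -> rank 2 (independent)
realify = lambda x: np.concatenate([x.real, x.imag])
rank_R = np.linalg.matrix_rank(np.column_stack([realify(v1), realify(v2)]))

print(rank_C, rank_R)   # 1 2
```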
Common Mistake: Forgetting that $\operatorname{span}(\emptyset) = \{\mathbf{0}\}$
Mistake:
Assuming $\operatorname{span}(\emptyset)$ is undefined or equals the empty set. This sometimes leads to incorrect conclusions about the dimension of the trivial subspace.
Correction:
By convention $\operatorname{span}(\emptyset) = \{\mathbf{0}\}$, which is the trivial subspace of any vector space. Its dimension is $0$, and the empty set serves as its (unique) basis. This convention ensures that "every subspace has a basis" holds without exception.
Key Takeaway
The core message of this section in three bullets:
- $\mathbb{C}^n$ is the stage. Virtually every signal, channel, and noise vector in wireless communications lives in $\mathbb{C}^n$ for some $n$. Mastering this space is non-negotiable.
- Dimension is the fundamental invariant. It tells you how many independent directions a (sub)space has --- equivalently, how many spatial streams a MIMO system can support or how many parameters a signal model requires.
- Bases are coordinate systems. Choosing a good basis (eigenvectors, steering vectors, DFT columns) is the single most recurring design step in communications theory. All bases for a given subspace have the same size, but their structure profoundly affects algorithm performance.