Kronecker Products and Vec Operator
Why Kronecker Products and the Vec Operator Matter
The Kronecker product and vec operator are the workhorses of structured multi-dimensional linear algebra. They appear whenever a problem has inherent matrix structure that must be reshaped into a standard form.
In telecommunications the most prominent application is the Kronecker channel model for spatially correlated MIMO channels. When transmit and receive correlations are separable, the full channel covariance matrix factors as a Kronecker product of the much smaller transmit and receive correlation matrices. This factorization
- Reduces parameter count from $N_r^2 N_t^2$ to $N_r^2 + N_t^2$, essential for estimation and feedback.
- Enables closed-form capacity analysis by decoupling transmit and receive spatial structure.
- Allows vectorization of matrix equations via the vec operator and the identity $\mathrm{vec}(\mathbf{A}\mathbf{X}\mathbf{B}) = (\mathbf{B}^\top \otimes \mathbf{A})\,\mathrm{vec}(\mathbf{X})$, converting matrix-valued least-squares problems, Lyapunov equations, and Sylvester equations into standard linear systems.
Mastering these tools is prerequisite to understanding channel estimation, covariance feedback, and capacity analysis in modern MIMO systems.
Definition: Kronecker Product
Kronecker Product
Let $\mathbf{A} \in \mathbb{C}^{m \times n}$ and $\mathbf{B} \in \mathbb{C}^{p \times q}$. The Kronecker product (or tensor product) $\mathbf{A} \otimes \mathbf{B} \in \mathbb{C}^{mp \times nq}$ is the block matrix defined by
$$\mathbf{A} \otimes \mathbf{B} = \begin{pmatrix} a_{11}\mathbf{B} & \cdots & a_{1n}\mathbf{B} \\ \vdots & \ddots & \vdots \\ a_{m1}\mathbf{B} & \cdots & a_{mn}\mathbf{B} \end{pmatrix}.$$
Equivalently, the $(i,j)$-th block of size $p \times q$ is $a_{ij}\mathbf{B}$, so the entry in global row $(i-1)p + k$ and global column $(j-1)q + \ell$ is $a_{ij}\,b_{k\ell}$.
The Kronecker product is bilinear: linear in each factor when the other is held fixed. It is also associative, $(\mathbf{A} \otimes \mathbf{B}) \otimes \mathbf{C} = \mathbf{A} \otimes (\mathbf{B} \otimes \mathbf{C})$, but it is not commutative in general: $\mathbf{A} \otimes \mathbf{B} \neq \mathbf{B} \otimes \mathbf{A}$ unless special conditions hold (e.g., both factors are scalars).
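The block structure in the definition can be checked numerically. The sketch below uses hypothetical $2 \times 2$ matrices; NumPy's `np.kron` implements exactly this block construction.

```python
import numpy as np

# Hypothetical small matrices, chosen only to illustrate the block structure.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])          # m x n = 2 x 2
B = np.array([[0.0, 5.0],
              [6.0, 7.0]])          # p x q = 2 x 2

K = np.kron(A, B)                   # (mp) x (nq) = 4 x 4

# The (i,j)-th p x q block of A ⊗ B equals a_ij * B.
m, n = A.shape
p, q = B.shape
for i in range(m):
    for j in range(n):
        block = K[i*p:(i+1)*p, j*q:(j+1)*q]
        assert np.allclose(block, A[i, j] * B)

# Entry-level check (0-based): K[i*p + k, j*q + l] == a_ij * b_kl.
assert np.isclose(K[1*p + 0, 0*q + 1], A[1, 0] * B[0, 1])
```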
Definition: Vec Operator
Vec Operator
Let $\mathbf{X} \in \mathbb{C}^{m \times n}$ with columns $\mathbf{x}_1, \dots, \mathbf{x}_n$. The vec operator stacks the columns of $\mathbf{X}$ into a single column vector:
$$\mathrm{vec}(\mathbf{X}) = \begin{pmatrix} \mathbf{x}_1 \\ \vdots \\ \mathbf{x}_n \end{pmatrix} \in \mathbb{C}^{mn}.$$
The vec operator is linear: $\mathrm{vec}(\alpha\mathbf{X} + \beta\mathbf{Y}) = \alpha\,\mathrm{vec}(\mathbf{X}) + \beta\,\mathrm{vec}(\mathbf{Y})$.
The entry of $\mathrm{vec}(\mathbf{X})$ at position $(j-1)m + i$ equals $x_{ij}$ (the element in row $i$, column $j$ of $\mathbf{X}$).
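In NumPy, the vec operator corresponds to column-major (Fortran-order) flattening. A minimal sketch with hypothetical entries:

```python
import numpy as np

# vec(X) stacks columns; in NumPy this is Fortran-order flattening.
X = np.array([[1, 4],
              [2, 5],
              [3, 6]])             # m x n = 3 x 2, columns [1,2,3] and [4,5,6]

vecX = X.flatten(order='F')        # stacks the columns left to right
assert np.array_equal(vecX, np.array([1, 2, 3, 4, 5, 6]))

# 0-based version of the indexing rule: vec(X)[j*m + i] == X[i, j].
m, n = X.shape
for i in range(m):
    for j in range(n):
        assert vecX[j*m + i] == X[i, j]

# Linearity: vec(2X + 3Y) = 2 vec(X) + 3 vec(Y).
Y = np.arange(6).reshape(3, 2)
assert np.array_equal((2*X + 3*Y).flatten(order='F'),
                      2*vecX + 3*Y.flatten(order='F'))
```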
The vec operator provides the bridge between matrix equations and ordinary linear systems. It is the key ingredient in converting Lyapunov equations, Sylvester equations, and matrix least-squares problems into standard form.
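As an illustration of this bridge, the sketch below solves a small Sylvester equation $\mathbf{A}\mathbf{X} + \mathbf{X}\mathbf{B} = \mathbf{C}$ by vectorizing it into the linear system $(\mathbf{I}_p \otimes \mathbf{A} + \mathbf{B}^\top \otimes \mathbf{I}_n)\,\mathrm{vec}(\mathbf{X}) = \mathrm{vec}(\mathbf{C})$. The random matrices are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 3, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((p, p))
C = rng.standard_normal((n, p))

# AX + XB = C  <=>  (I_p ⊗ A + B^T ⊗ I_n) vec(X) = vec(C)
M = np.kron(np.eye(p), A) + np.kron(B.T, np.eye(n))
x = np.linalg.solve(M, C.flatten(order='F'))
X = x.reshape((n, p), order='F')

# The recovered X solves the original matrix equation.
assert np.allclose(A @ X + X @ B, C)
```

For large problems one would use a dedicated Sylvester solver (e.g. `scipy.linalg.solve_sylvester`) rather than forming the $np \times np$ system explicitly; the point here is the reduction to standard form.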
Theorem: Mixed-Product Property of the Kronecker Product
Let $\mathbf{A} \in \mathbb{C}^{m \times n}$, $\mathbf{B} \in \mathbb{C}^{p \times q}$, $\mathbf{C} \in \mathbb{C}^{n \times r}$, and $\mathbf{D} \in \mathbb{C}^{q \times s}$. Then
$$(\mathbf{A} \otimes \mathbf{B})(\mathbf{C} \otimes \mathbf{D}) = (\mathbf{A}\mathbf{C}) \otimes (\mathbf{B}\mathbf{D}).$$
The Kronecker product "separates" two independent matrix operations. Multiplying two Kronecker products corresponds to performing the underlying multiplications independently in each factor space. Think of it as: the $\mathbf{A}$, $\mathbf{C}$ part and the $\mathbf{B}$, $\mathbf{D}$ part do not interact; each evolves according to its own multiplication.
Verify that the dimensions are compatible on both sides.
Write out the $(i,j)$-th block of each side and show they are equal.
Each block of the left-hand side is a sum over products of blocks from the two factors.
Step 1: Verify dimensions
The left-hand side has dimensions: $\mathbf{A} \otimes \mathbf{B} \in \mathbb{C}^{mp \times nq}$ and $\mathbf{C} \otimes \mathbf{D} \in \mathbb{C}^{nq \times rs}$, so the product is in $\mathbb{C}^{mp \times rs}$.
The right-hand side: $\mathbf{A}\mathbf{C} \in \mathbb{C}^{m \times r}$ and $\mathbf{B}\mathbf{D} \in \mathbb{C}^{p \times s}$, so $(\mathbf{A}\mathbf{C}) \otimes (\mathbf{B}\mathbf{D}) \in \mathbb{C}^{mp \times rs}$.
The dimensions match.
Step 2: Compute the $(i,j)$-th block of the left-hand side
Partition $\mathbf{A} \otimes \mathbf{B}$ into blocks $a_{ik}\mathbf{B}$ of size $p \times q$, and $\mathbf{C} \otimes \mathbf{D}$ into blocks $c_{kj}\mathbf{D}$ of size $q \times s$.
The $(i,j)$-th block (of size $p \times s$) of the product is obtained by multiplying the $i$-th block row of $\mathbf{A} \otimes \mathbf{B}$ with the $j$-th block column of $\mathbf{C} \otimes \mathbf{D}$:
$$\sum_{k=1}^{n} (a_{ik}\mathbf{B})(c_{kj}\mathbf{D}) = \left(\sum_{k=1}^{n} a_{ik}c_{kj}\right)\mathbf{B}\mathbf{D}.$$
Step 3: Identify with the right-hand side
The scalar factor $\sum_{k=1}^{n} a_{ik}c_{kj}$ is precisely the $(i,j)$-th entry of $\mathbf{A}\mathbf{C}$. Therefore the $(i,j)$-th block of the product equals $(\mathbf{A}\mathbf{C})_{ij}\,\mathbf{B}\mathbf{D}$, which is exactly the $(i,j)$-th block of $(\mathbf{A}\mathbf{C}) \otimes (\mathbf{B}\mathbf{D})$ (by the definition of the Kronecker product).
Since all blocks agree, we conclude $(\mathbf{A} \otimes \mathbf{B})(\mathbf{C} \otimes \mathbf{D}) = (\mathbf{A}\mathbf{C}) \otimes (\mathbf{B}\mathbf{D})$.
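The theorem is easy to sanity-check numerically with generic rectangular factors; the dimensions below are hypothetical, chosen only to be compatible.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, p, q, r, s = 2, 3, 4, 2, 5, 3   # arbitrary compatible sizes
A = rng.standard_normal((m, n))
B = rng.standard_normal((p, q))
C = rng.standard_normal((n, r))
D = rng.standard_normal((q, s))

# (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD), with dimensions mp x rs on both sides.
lhs = np.kron(A, B) @ np.kron(C, D)
rhs = np.kron(A @ C, B @ D)
assert lhs.shape == (m * p, r * s)
assert np.allclose(lhs, rhs)
```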
Theorem: Vec Identity for Matrix Triple Products
Let $\mathbf{A} \in \mathbb{C}^{m \times n}$, $\mathbf{X} \in \mathbb{C}^{n \times p}$, and $\mathbf{B} \in \mathbb{C}^{p \times q}$. Then
$$\mathrm{vec}(\mathbf{A}\mathbf{X}\mathbf{B}) = (\mathbf{B}^\top \otimes \mathbf{A})\,\mathrm{vec}(\mathbf{X}).$$
The vec identity says that the linear map $\mathbf{X} \mapsto \mathbf{A}\mathbf{X}\mathbf{B}$, which acts on matrices, can be represented as ordinary matrix-vector multiplication once we vectorize $\mathbf{X}$. The "coefficient matrix" of this linear map is $\mathbf{B}^\top \otimes \mathbf{A}$, which encodes the left multiplication by $\mathbf{A}$ and the right multiplication by $\mathbf{B}$ simultaneously.
Start with the special case $\mathbf{B} = \mathbf{I}_p$ to get $\mathrm{vec}(\mathbf{A}\mathbf{X}) = (\mathbf{I}_p \otimes \mathbf{A})\,\mathrm{vec}(\mathbf{X})$.
Use column-by-column reasoning and the definition of the Kronecker product.
For general $\mathbf{B}$, express each column of $\mathbf{A}\mathbf{X}\mathbf{B}$ as a linear combination of the columns of $\mathbf{A}\mathbf{X}$.
Step 1: Preliminary: vec of a product $\mathbf{A}\mathbf{X}$
Write $\mathbf{X} = [\mathbf{x}_1, \dots, \mathbf{x}_p]$. Then $\mathbf{A}\mathbf{X} = [\mathbf{A}\mathbf{x}_1, \dots, \mathbf{A}\mathbf{x}_p]$, so
$$\mathrm{vec}(\mathbf{A}\mathbf{X}) = \begin{pmatrix} \mathbf{A}\mathbf{x}_1 \\ \vdots \\ \mathbf{A}\mathbf{x}_p \end{pmatrix} = (\mathbf{I}_p \otimes \mathbf{A})\,\mathrm{vec}(\mathbf{X}).$$
This establishes the identity for the special case $\mathbf{B} = \mathbf{I}_p$.
Step 2: Right multiplication by $\mathbf{B}$ (column analysis)
Let $\mathbf{Y} = \mathbf{A}\mathbf{X}\mathbf{B}$. The $j$-th column of $\mathbf{Y}$ is
$$\mathbf{y}_j = \mathbf{A}\mathbf{X}\mathbf{b}_j = \sum_{k=1}^{p} b_{kj}\,\mathbf{A}\mathbf{x}_k,$$
where $\mathbf{b}_j$ is the $j$-th column of $\mathbf{B}$ and $b_{kj}$ are its entries.
Step 3: Vectorize and identify the Kronecker structure
Stacking all $q$ columns:
$$\mathrm{vec}(\mathbf{Y}) = \begin{pmatrix} \sum_k b_{k1}\,\mathbf{A}\mathbf{x}_k \\ \vdots \\ \sum_k b_{kq}\,\mathbf{A}\mathbf{x}_k \end{pmatrix} = \begin{pmatrix} b_{11}\mathbf{A} & \cdots & b_{p1}\mathbf{A} \\ \vdots & & \vdots \\ b_{1q}\mathbf{A} & \cdots & b_{pq}\mathbf{A} \end{pmatrix} \begin{pmatrix} \mathbf{x}_1 \\ \vdots \\ \mathbf{x}_p \end{pmatrix}.$$
The block matrix has $(j,k)$-th block equal to $b_{kj}\mathbf{A}$. Recalling that $(\mathbf{B}^\top)_{jk} = b_{kj}$, this block matrix is precisely $\mathbf{B}^\top \otimes \mathbf{A}$ (by the definition of the Kronecker product).
The column vector on the right is $\mathrm{vec}(\mathbf{X})$. Therefore
$$\mathrm{vec}(\mathbf{A}\mathbf{X}\mathbf{B}) = (\mathbf{B}^\top \otimes \mathbf{A})\,\mathrm{vec}(\mathbf{X}).$$
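The identity can be verified numerically for arbitrary compatible shapes (the dimensions below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, p, q = 2, 3, 4, 5
A = rng.standard_normal((m, n))
X = rng.standard_normal((n, p))
B = rng.standard_normal((p, q))

# vec(AXB) = (B^T ⊗ A) vec(X); vec is Fortran-order flattening in NumPy.
lhs = (A @ X @ B).flatten(order='F')
rhs = np.kron(B.T, A) @ X.flatten(order='F')
assert np.allclose(lhs, rhs)
```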
Theorem: Eigenvalues of a Kronecker Product
Let $\mathbf{A} \in \mathbb{C}^{n \times n}$ have eigenvalues $\lambda_1, \dots, \lambda_n$ and let $\mathbf{B} \in \mathbb{C}^{m \times m}$ have eigenvalues $\mu_1, \dots, \mu_m$. Then the eigenvalues of $\mathbf{A} \otimes \mathbf{B}$ are exactly the products
$$\lambda_i \mu_j, \qquad i = 1, \dots, n, \quad j = 1, \dots, m.$$
If $\mathbf{v}_i$ is an eigenvector of $\mathbf{A}$ for $\lambda_i$ and $\mathbf{u}_j$ is an eigenvector of $\mathbf{B}$ for $\mu_j$, then $\mathbf{v}_i \otimes \mathbf{u}_j$ is an eigenvector of $\mathbf{A} \otimes \mathbf{B}$ for the eigenvalue $\lambda_i \mu_j$.
The Kronecker product combines two independent linear maps, one acting on each factor of a tensor product space. An eigenvector of the combined map is simply an eigenvector for each factor, and the eigenvalue is the product, just as the joint probability of independent events is the product of the individual probabilities.
Use the mixed-product property with rank-1 matrices.
The key identity is $(\mathbf{A} \otimes \mathbf{B})(\mathbf{v} \otimes \mathbf{u}) = (\mathbf{A}\mathbf{v}) \otimes (\mathbf{B}\mathbf{u})$.
Step 1: Eigenvector verification
Let $\mathbf{A}\mathbf{v}_i = \lambda_i \mathbf{v}_i$ and $\mathbf{B}\mathbf{u}_j = \mu_j \mathbf{u}_j$. Apply the mixed-product property (Mixed-Product Property of the Kronecker Product) with $\mathbf{C} = \mathbf{v}_i$ (an $n \times 1$ matrix) and $\mathbf{D} = \mathbf{u}_j$ (an $m \times 1$ matrix):
$$(\mathbf{A} \otimes \mathbf{B})(\mathbf{v}_i \otimes \mathbf{u}_j) = (\mathbf{A}\mathbf{v}_i) \otimes (\mathbf{B}\mathbf{u}_j) = (\lambda_i \mathbf{v}_i) \otimes (\mu_j \mathbf{u}_j) = \lambda_i \mu_j\,(\mathbf{v}_i \otimes \mathbf{u}_j).$$
Therefore $\lambda_i \mu_j$ is an eigenvalue of $\mathbf{A} \otimes \mathbf{B}$ with eigenvector $\mathbf{v}_i \otimes \mathbf{u}_j$.
Step 2: All eigenvalues are accounted for
We have exhibited $nm$ eigenpairs $(\lambda_i \mu_j,\ \mathbf{v}_i \otimes \mathbf{u}_j)$ for $i = 1, \dots, n$, $j = 1, \dots, m$. Since $\mathbf{A} \otimes \mathbf{B} \in \mathbb{C}^{nm \times nm}$, its characteristic polynomial has degree $nm$, so it has exactly $nm$ eigenvalues (counted with algebraic multiplicity).
It remains to show that the vectors $\mathbf{v}_i \otimes \mathbf{u}_j$ span $\mathbb{C}^{nm}$. If $\mathbf{v}_1, \dots, \mathbf{v}_n$ are linearly independent (which holds when $\mathbf{A}$ is diagonalizable) and $\mathbf{u}_1, \dots, \mathbf{u}_m$ are linearly independent (which holds when $\mathbf{B}$ is diagonalizable), then $\{\mathbf{v}_i \otimes \mathbf{u}_j\}$ forms a basis of $\mathbb{C}^{nm}$ (a standard result in multilinear algebra; the dimension count gives $n \cdot m = nm$). Thus we have found all eigenvalues.
Step 3: General case (non-diagonalizable matrices)
For the general case (including non-diagonalizable matrices), we use a continuity/density argument. The set of diagonalizable matrices is dense in $\mathbb{C}^{n \times n}$ (any matrix can be approximated arbitrarily closely by one with distinct eigenvalues, which is necessarily diagonalizable).
The characteristic polynomial of $\mathbf{A} \otimes \mathbf{B}$ is a polynomial (hence continuous) function of the entries of $\mathbf{A}$ and $\mathbf{B}$. Since the result holds on the dense set of diagonalizable pairs and eigenvalues are continuous functions of matrix entries, the result extends to all square matrices.
Alternatively, one can verify the identity on characteristic polynomials directly:
$$\det(\mathbf{A} \otimes \mathbf{B} - \lambda \mathbf{I}_{nm}) = \prod_{i=1}^{n} \prod_{j=1}^{m} (\lambda_i \mu_j - \lambda),$$
which confirms that the roots are exactly $\lambda_i \mu_j$.
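The eigenvalue factorization is easy to check numerically; the random matrices below are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((2, 2))

lam = np.linalg.eigvals(A)    # λ_1, ..., λ_n
mu = np.linalg.eigvals(B)     # μ_1, ..., μ_m

products = np.array([l * u for l in lam for u in mu])   # all λ_i μ_j
eigs = np.linalg.eigvals(np.kron(A, B))                 # nm eigenvalues

# Each pairwise product matches an eigenvalue of A ⊗ B, and counts agree.
assert len(eigs) == len(products) == 6
assert all(np.min(np.abs(eigs - p)) < 1e-8 for p in products)
```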
Example: Kronecker Product Computation and Mixed-Product Verification
Let
(a) Compute $\mathbf{A} \otimes \mathbf{B}$.
(b) Verify the mixed-product property by computing both sides of $(\mathbf{A} \otimes \mathbf{B})^2 = \mathbf{A}^2 \otimes \mathbf{B}^2$.
Part (a): Compute $\mathbf{A} \otimes \mathbf{B}$
By definition, the $(i,j)$-th block is $a_{ij}\mathbf{B}$:
Part (b): Left-hand side (direct multiplication)
First compute $(\mathbf{A} \otimes \mathbf{B})^2$ by multiplying the $4 \times 4$ matrix $\mathbf{A} \otimes \mathbf{B}$ by itself:
Computing row by row:
- Row 1:
- Row 2:
- Row 3:
- Row 4:
So .
Part (b): Right-hand side (Kronecker of squares)
Compute $\mathbf{A}^2$ and $\mathbf{B}^2$ separately:
Therefore
Both sides are equal: $(\mathbf{A} \otimes \mathbf{B})^2 = \mathbf{A}^2 \otimes \mathbf{B}^2$, as the mixed-product property guarantees.
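The same verification can be scripted. The $2 \times 2$ matrices below are hypothetical stand-ins (the example's original entries are not reproduced here); the computation is identical.

```python
import numpy as np

# Hypothetical 2x2 matrices, used only to rerun the example's verification.
A = np.array([[1.0, 2.0],
              [0.0, 1.0]])
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])

K = np.kron(A, B)                  # part (a): the 4x4 Kronecker product

# Part (b): mixed-product property with C = A, D = B gives
# (A ⊗ B)^2 = (A^2) ⊗ (B^2).
lhs = K @ K
rhs = np.kron(A @ A, B @ B)
assert np.allclose(lhs, rhs)
```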
Kronecker Product Structure Visualization
See how $\mathbf{A} \otimes \mathbf{B}$ inherits block structure from $\mathbf{A}$, with each block scaled by the corresponding entry of $\mathbf{A}$ and shaped like $\mathbf{B}$.
Kronecker Product vs Hadamard (Element-wise) Product
| Property | Kronecker product ($\mathbf{A} \otimes \mathbf{B}$) | Hadamard product ($\mathbf{A} \odot \mathbf{B}$) |
|---|---|---|
| Definition | $(i,j)$-th block is $a_{ij}\mathbf{B}$; defined for any $\mathbf{A} \in \mathbb{C}^{m \times n}$, $\mathbf{B} \in \mathbb{C}^{p \times q}$ | $(\mathbf{A} \odot \mathbf{B})_{ij} = a_{ij}b_{ij}$; requires same dimensions |
| Output dimensions | $mp \times nq$ (dimensions multiply) | $m \times n$ (dimensions unchanged) |
| Associativity | Yes: $(\mathbf{A} \otimes \mathbf{B}) \otimes \mathbf{C} = \mathbf{A} \otimes (\mathbf{B} \otimes \mathbf{C})$ | Yes: $(\mathbf{A} \odot \mathbf{B}) \odot \mathbf{C} = \mathbf{A} \odot (\mathbf{B} \odot \mathbf{C})$ |
| Mixed-product property | Yes: $(\mathbf{A} \otimes \mathbf{B})(\mathbf{C} \otimes \mathbf{D}) = (\mathbf{A}\mathbf{C}) \otimes (\mathbf{B}\mathbf{D})$ | No analogous property; $(\mathbf{A} \odot \mathbf{B})(\mathbf{C} \odot \mathbf{D}) \neq (\mathbf{A}\mathbf{C}) \odot (\mathbf{B}\mathbf{D})$ in general |
| Relation to eigenvalues | Eigenvalues are all pairwise products $\lambda_i \mu_j$ | No simple relation; Schur product theorem: if $\mathbf{A} \succeq 0$ and $\mathbf{B} \succeq 0$ then $\mathbf{A} \odot \mathbf{B} \succeq 0$ |
| Commutativity | Not commutative: $\mathbf{A} \otimes \mathbf{B} \neq \mathbf{B} \otimes \mathbf{A}$ in general (same overall size, permuted entries) | Commutative: $\mathbf{A} \odot \mathbf{B} = \mathbf{B} \odot \mathbf{A}$ |
| Typical wireless application | Kronecker channel model: $\mathbf{R} = \mathbf{R}_t^\top \otimes \mathbf{R}_r$; covariance structuring | Element-wise channel gain masking; per-subcarrier power allocation in OFDM; Schur product bounds |
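A few of the table's contrasts, checked numerically. The PSD matrices below are hypothetical; in NumPy, `*` is the Hadamard product and `np.kron` the Kronecker product.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])   # symmetric positive semidefinite
B = np.array([[2.0, 0.5],
              [0.5, 1.0]])   # symmetric positive semidefinite

# Dimensions: Kronecker multiplies sizes, Hadamard preserves them.
assert np.kron(A, B).shape == (4, 4)
assert (A * B).shape == (2, 2)

# Commutativity: Hadamard commutes, Kronecker does not.
assert np.array_equal(A * B, B * A)
assert not np.allclose(np.kron(A, B), np.kron(B, A))

# Schur product theorem: the Hadamard product of PSD matrices is PSD.
assert np.all(np.linalg.eigvalsh(A * B) >= -1e-12)
```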
Why This Matters: The Kronecker MIMO Channel Model
A widely used model for spatially correlated MIMO channels assumes that transmit and receive correlations are separable. Let $\mathbf{H} \in \mathbb{C}^{N_r \times N_t}$ be the channel matrix. The Kronecker model posits:
$$\mathbf{H} = \mathbf{R}_r^{1/2}\,\mathbf{H}_w\,\mathbf{R}_t^{1/2},$$
where:
- $\mathbf{R}_r \in \mathbb{C}^{N_r \times N_r}$ is the receive spatial correlation matrix (often normalized so that $\mathrm{tr}(\mathbf{R}_r) = N_r$).
- $\mathbf{R}_t \in \mathbb{C}^{N_t \times N_t}$ is the transmit spatial correlation matrix (similarly normalized).
- $\mathbf{H}_w \in \mathbb{C}^{N_r \times N_t}$ has i.i.d. entries $\mathcal{CN}(0, 1)$.
- $\mathbf{R}_r^{1/2}$ and $\mathbf{R}_t^{1/2}$ are any matrix square roots (e.g., Cholesky factors or Hermitian square roots).
Connection to the Kronecker product and vec operator. Using the vec identity (Vec Identity for Matrix Triple Products), vectorize both sides:
$$\mathrm{vec}(\mathbf{H}) = \left((\mathbf{R}_t^{1/2})^\top \otimes \mathbf{R}_r^{1/2}\right)\mathrm{vec}(\mathbf{H}_w).$$
Since $\mathbb{E}\big[\mathrm{vec}(\mathbf{H}_w)\,\mathrm{vec}(\mathbf{H}_w)^H\big] = \mathbf{I}_{N_r N_t}$, the covariance of $\mathrm{vec}(\mathbf{H})$ is
$$\mathbf{R} = \left((\mathbf{R}_t^{1/2})^\top \otimes \mathbf{R}_r^{1/2}\right)\left((\mathbf{R}_t^{1/2})^\top \otimes \mathbf{R}_r^{1/2}\right)^H.$$
Applying the mixed-product property and choosing Hermitian square roots ($(\mathbf{R}^{1/2})^H = \mathbf{R}^{1/2}$):
$$\mathbf{R} = \left((\mathbf{R}_t^{1/2})^\top (\mathbf{R}_t^{1/2})^*\right) \otimes \left(\mathbf{R}_r^{1/2}\,\mathbf{R}_r^{1/2}\right) = \mathbf{R}_t^\top \otimes \mathbf{R}_r.$$
Therefore
$$\mathbb{E}\big[\mathrm{vec}(\mathbf{H})\,\mathrm{vec}(\mathbf{H})^H\big] = \mathbf{R}_t^\top \otimes \mathbf{R}_r.$$
Why this matters:
- The full covariance $\mathbf{R}$ (of size $N_r N_t \times N_r N_t$) is completely determined by the two smaller matrices $\mathbf{R}_t$ and $\mathbf{R}_r$: $N_t^2 + N_r^2$ parameters instead of $N_t^2 N_r^2$, a dramatic reduction in the number of parameters to estimate and feed back.
- The eigenvalues of $\mathbf{R}$ are all pairwise products $\lambda_i(\mathbf{R}_t)\,\lambda_j(\mathbf{R}_r)$ (by Eigenvalues of a Kronecker Product), enabling closed-form capacity and outage analysis.
- This model is accurate when the scattering environments at the transmitter and the receiver are physically separated, which is common in macro-cellular deployments.
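The covariance factorization can be verified by simulation. The sketch below uses hypothetical exponential-correlation matrices (a common modeling choice, not taken from the text) and real-valued Gaussians for simplicity.

```python
import numpy as np

rng = np.random.default_rng(3)
Nr, Nt = 2, 3

def exp_corr(n, rho):
    # Hypothetical exponential correlation model: [R]_ij = rho^|i-j|.
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

Rr, Rt = exp_corr(Nr, 0.7), exp_corr(Nt, 0.5)

def sqrtm_psd(R):
    # Hermitian square root via eigendecomposition (real symmetric here).
    w, V = np.linalg.eigh(R)
    return V @ np.diag(np.sqrt(w)) @ V.T

Rr_h, Rt_h = sqrtm_psd(Rr), sqrtm_psd(Rt)

# Draw many channels H = Rr^(1/2) Hw Rt^(1/2) and estimate Cov(vec(H)).
T = 200_000
Hw = rng.standard_normal((T, Nr, Nt))
H = Rr_h @ Hw @ Rt_h                          # batched matrix products
V = H.transpose(0, 2, 1).reshape(T, Nr * Nt)  # row k = vec(H_k), column-major
R_hat = V.T @ V / T

# Theory: Cov(vec(H)) = Rt^T ⊗ Rr (transpose is cosmetic: Rt symmetric here).
assert np.allclose(R_hat, np.kron(Rt.T, Rr), atol=0.05)
```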
See full treatment in Uplink and Downlink Processing
Efficient Kronecker Matrix-Vector Products
A naive matrix-vector product with $\mathbf{A} \otimes \mathbf{B}$ for $\mathbf{A}, \mathbf{B} \in \mathbb{C}^{n \times n}$ costs $\mathcal{O}(n^4)$ flops if the Kronecker product is formed explicitly, a waste of both memory and computation.
The identity $(\mathbf{A} \otimes \mathbf{B})\,\mathrm{vec}(\mathbf{X}) = \mathrm{vec}(\mathbf{B}\mathbf{X}\mathbf{A}^\top)$ reduces this to two smaller matrix multiplications costing $\mathcal{O}(n^3)$, a factor of $\mathcal{O}(n)$ cheaper.
This trick is critical in:
- MIMO channel covariance operations: applying $(\mathbf{R}_t^\top \otimes \mathbf{R}_r)\,\mathrm{vec}(\mathbf{X})$ for spatial filtering.
- RF imaging (Book RFI, Ch 8.6): The sensing matrix has Kronecker structure, and all image reconstruction algorithms exploit this to avoid forming the full operator.
- OAMP/VAMP estimation (Book FSI, Ch 17-21): The LMMSE step uses Kronecker-factored matrix inverses.
- Explicit Kronecker product of two $n \times n$ matrices: $n^4$ entries; do not store it.
- Factored product: $\mathcal{O}(n^3)$ flops, 32x cheaper (the speedup scales as $n/2$; here $n = 64$).
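A sketch of the trick; the size $n$ is illustrative, and the two computations agree to machine precision.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 32
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
X = rng.standard_normal((n, n))
x = X.flatten(order='F')                      # vec(X)

# Naive: form the n^2 x n^2 Kronecker product explicitly (n^4 entries).
y_naive = np.kron(A, B) @ x

# Factored: (A ⊗ B) vec(X) = vec(B X A^T), two n x n multiplications.
y_fast = (B @ X @ A.T).flatten(order='F')

assert np.allclose(y_naive, y_fast)
```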
Kronecker Products Across the Library
The Kronecker product structure introduced here reappears as a central computational tool in three specialized books:
- Book MIMO (Ch 3, 7): Spatial correlation modeling and JSDM precoding exploit $\mathbf{R} = \mathbf{R}_t^\top \otimes \mathbf{R}_r$.
- Book RFI (Ch 8.6): The multi-static RF imaging sensing operator factors as a Kronecker product, enabling efficient backpropagation and OAMP reconstruction.
- Book FSI (Ch 17-21): Message-passing algorithms (AMP, OAMP, VAMP) exploit Kronecker structure in the LMMSE step for computational efficiency.
Key Takeaway
The Kronecker product and vec operator together provide a systematic way to convert matrix equations into vector equations. The identity $\mathrm{vec}(\mathbf{A}\mathbf{X}\mathbf{B}) = (\mathbf{B}^\top \otimes \mathbf{A})\,\mathrm{vec}(\mathbf{X})$ is the central bridge between the two representations.
In MIMO wireless communications, the Kronecker structure encodes separable spatial correlation: the full channel covariance factors as $\mathbf{R} = \mathbf{R}_t^\top \otimes \mathbf{R}_r$, reducing the description from $N_t^2 N_r^2$ to $N_t^2 + N_r^2$ parameters. The mixed-product property and eigenvalue factorization underpin closed-form capacity expressions and efficient covariance estimation algorithms.
Common Mistake: Kronecker Product Is Not Commutative
Mistake:
Assuming $\mathbf{A} \otimes \mathbf{B} = \mathbf{B} \otimes \mathbf{A}$. This is a natural but incorrect extension of scalar multiplication commutativity to the Kronecker product.
Correction:
In general, $\mathbf{A} \otimes \mathbf{B} \neq \mathbf{B} \otimes \mathbf{A}$. If $\mathbf{A} \in \mathbb{C}^{m \times n}$ and $\mathbf{B} \in \mathbb{C}^{p \times q}$, then $\mathbf{A} \otimes \mathbf{B} \in \mathbb{C}^{mp \times nq}$ and $\mathbf{B} \otimes \mathbf{A} \in \mathbb{C}^{pm \times qn}$: the overall sizes coincide, but the internal block structures differ ($p \times q$ blocks scaled by entries of $\mathbf{A}$ versus $m \times n$ blocks scaled by entries of $\mathbf{B}$), so the entries appear in a different order.
Even when both factors are square of the same size, the block structures differ, and the two products generally disagree entry by entry.
However, the two products are always related by a permutation: there exist permutation matrices $\mathbf{P}$ and $\mathbf{Q}$ such that $\mathbf{B} \otimes \mathbf{A} = \mathbf{P}(\mathbf{A} \otimes \mathbf{B})\mathbf{Q}$. In the Kronecker channel model, confusing the order swaps the transmit and receive correlations, a subtle but consequential error.
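Both the non-commutativity and the permutation relation can be demonstrated numerically. The matrices below and the perfect-shuffle construction of $\mathbf{P}$ are illustrative.

```python
import numpy as np

# Hypothetical 2x2 matrices; any generic pair shows non-commutativity.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])

AB = np.kron(A, B)
BA = np.kron(B, A)

# Same overall size, different entries.
assert AB.shape == BA.shape
assert not np.allclose(AB, BA)

# For square factors of sizes m and p, a perfect-shuffle (commutation)
# permutation P satisfies  B ⊗ A = P (A ⊗ B) P^T.
m, p = A.shape[0], B.shape[0]
P = np.zeros((m * p, m * p))
for i in range(m):
    for k in range(p):
        P[k * m + i, i * p + k] = 1.0   # maps index (i,k) to (k,i)
assert np.allclose(BA, P @ AB @ P.T)
```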
Quick Check
Let $\mathbf{A} \in \mathbb{C}^{m \times n}$ and $\mathbf{B} \in \mathbb{C}^{p \times q}$. What are the dimensions of $\mathbf{A} \otimes \mathbf{B}$?
By definition, $\mathbf{A} \otimes \mathbf{B} \in \mathbb{C}^{mp \times nq}$: the row dimensions multiply and the column dimensions multiply.
Quick Check
Let $\mathbf{A}$ have eigenvalues $\lambda_1, \dots, \lambda_n$ and $\mathbf{B}$ have eigenvalues $\mu_1, \dots, \mu_m$. Which of the following is the set of eigenvalues of $\mathbf{A} \otimes \mathbf{B}$?
Kronecker Product
The Kronecker product $\mathbf{A} \otimes \mathbf{B}$ of an $m \times n$ matrix $\mathbf{A}$ and a $p \times q$ matrix $\mathbf{B}$ is the $mp \times nq$ block matrix whose $(i,j)$-th block (of size $p \times q$) equals $a_{ij}\mathbf{B}$. It satisfies the mixed-product property, has eigenvalues that are pairwise products of the factors' eigenvalues, and is the primary tool for encoding separable correlation structure in MIMO channels.
Related: Vec Operator, mixed-product property, The Kronecker MIMO Channel Model
Vec Operator
The vec operator maps an $m \times n$ matrix $\mathbf{X}$ to an $mn \times 1$ column vector by stacking the columns of $\mathbf{X}$ from left to right. It is linear and satisfies the fundamental identity $\mathrm{vec}(\mathbf{A}\mathbf{X}\mathbf{B}) = (\mathbf{B}^\top \otimes \mathbf{A})\,\mathrm{vec}(\mathbf{X})$, which converts matrix equations into standard linear systems.
Related: Kronecker Product, matrix vectorization, Lyapunov equation