Kronecker Product Structure of the Sensing Matrix
Why Kronecker Structure Is the Key to Scalable RF Imaging
In Chapter 6 we derived the sensing matrix $\mathbf{A} \in \mathbb{C}^{M \times N}$ that maps the discretized reflectivity $\mathbf{x}$ to measurements $\mathbf{y} = \mathbf{A}\mathbf{x}$. For a typical 3D imaging scenario with $N_{\mathrm{tx}}$ transmitters, $N_{\mathrm{rx}}$ receivers, $N_f$ subcarriers, and a $G_{\mathrm{rx}} \times G_{\mathrm{tx}} \times G_r$ voxel grid, the full matrix has $MN = N_{\mathrm{rx}} N_{\mathrm{tx}} N_f \cdot G_{\mathrm{rx}} G_{\mathrm{tx}} G_r$ complex entries --- over 2 GB of storage at realistic system sizes. Forming this matrix explicitly, let alone inverting it, is computationally prohibitive.
The central insight of this section is that under physically reasonable assumptions, $\mathbf{A}$ decomposes as a Kronecker product of three much smaller factor matrices. This factorization reduces storage by five orders of magnitude and enables matrix-vector products that are fast enough for iterative reconstruction in real time. Every reconstruction algorithm in Parts IV--VI exploits this structure.
Definition: Kronecker Product
Kronecker Product
The Kronecker product of matrices $\mathbf{A} \in \mathbb{C}^{m \times n}$ and $\mathbf{B} \in \mathbb{C}^{p \times q}$ is the $mp \times nq$ block matrix
$$\mathbf{A} \otimes \mathbf{B} = \begin{bmatrix} a_{11}\mathbf{B} & \cdots & a_{1n}\mathbf{B} \\ \vdots & \ddots & \vdots \\ a_{m1}\mathbf{B} & \cdots & a_{mn}\mathbf{B} \end{bmatrix}.$$
The Kronecker product satisfies the fundamental vec identity:
$$\operatorname{vec}(\mathbf{A}\mathbf{X}\mathbf{B}) = (\mathbf{B}^\top \otimes \mathbf{A}) \operatorname{vec}(\mathbf{X})$$
for any conformable matrix $\mathbf{X}$.
The vec identity is the computational engine behind every fast algorithm in this chapter. It converts a single large matrix-vector product into a sequence of smaller matrix multiplications.
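The vec identity is easy to check numerically. A minimal NumPy sketch (column-stacking vec corresponds to Fortran-order flattening; all dimensions here are arbitrary toy values):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4)) + 1j * rng.standard_normal((3, 4))
B = rng.standard_normal((5, 2)) + 1j * rng.standard_normal((5, 2))
X = rng.standard_normal((4, 5)) + 1j * rng.standard_normal((4, 5))

# vec(A X B) computed two ways: one big Kronecker matvec
# versus two small matrix products.
lhs = np.kron(B.T, A) @ X.flatten(order="F")   # (B^T ⊗ A) vec(X)
rhs = (A @ X @ B).flatten(order="F")           # vec(A X B)
assert np.allclose(lhs, rhs)
```

The left-hand side forms a $15 \times 20$ matrix; the right-hand side touches only the small factors, which is exactly the saving the fast algorithms below scale up.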
Kronecker product
A matrix operation that replaces each element $a_{ij}$ of $\mathbf{A}$ with the block $a_{ij}\mathbf{B}$, producing an $mp \times nq$ matrix from factors of sizes $m \times n$ and $p \times q$.
Related: {{Ref:Gloss Vec Operator}}, {{Ref:Telecom:Ch01:Gloss Kronecker}}
Vectorization operator
The operator $\operatorname{vec}(\cdot)$ that stacks the columns of a matrix $\mathbf{X} \in \mathbb{C}^{m \times n}$ into a single vector in $\mathbb{C}^{mn}$. The inverse operation reshapes a vector back into a matrix of specified dimensions.
Related: {{Ref:Gloss Kronecker Product}}
Definition: Algebraic Properties of the Kronecker Product
Algebraic Properties of the Kronecker Product
The Kronecker product satisfies the following identities, each of which we exploit for computational gain:
Mixed-product rule: $(\mathbf{A} \otimes \mathbf{B})(\mathbf{C} \otimes \mathbf{D}) = (\mathbf{A}\mathbf{C}) \otimes (\mathbf{B}\mathbf{D})$ for conformable factors.
Transpose: $(\mathbf{A} \otimes \mathbf{B})^\top = \mathbf{A}^\top \otimes \mathbf{B}^\top$, and likewise $(\mathbf{A} \otimes \mathbf{B})^H = \mathbf{A}^H \otimes \mathbf{B}^H$.
Inverse (when both factors are invertible): $(\mathbf{A} \otimes \mathbf{B})^{-1} = \mathbf{A}^{-1} \otimes \mathbf{B}^{-1}$.
Pseudoinverse: $(\mathbf{A} \otimes \mathbf{B})^{+} = \mathbf{A}^{+} \otimes \mathbf{B}^{+}$.
Eigenvalues: If $\mathbf{A}$ has eigenvalues $\lambda_i$ and $\mathbf{B}$ has eigenvalues $\mu_j$, then $\mathbf{A} \otimes \mathbf{B}$ has eigenvalues $\lambda_i \mu_j$ for all pairs $(i, j)$.
The mixed-product rule is particularly powerful: it means that $\mathbf{A}^H \mathbf{A}$ inherits the Kronecker structure of $\mathbf{A}$, so the Gram matrix can be analyzed factor by factor.
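These identities can all be confirmed on small random matrices. A sketch (symmetric factors are chosen so the eigenvalues are real and easy to compare after sorting):

```python
import numpy as np

rng = np.random.default_rng(1)
# Symmetrize A and B so all eigenvalues are real
A = rng.standard_normal((3, 3)); A = A + A.T
B = rng.standard_normal((4, 4)); B = B + B.T
C, D = rng.standard_normal((3, 3)), rng.standard_normal((4, 4))

# Mixed-product rule: (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD)
assert np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D))

# Inverse: (A ⊗ B)^{-1} = A^{-1} ⊗ B^{-1} (random factors are invertible a.s.)
assert np.allclose(np.linalg.inv(np.kron(A, B)),
                   np.kron(np.linalg.inv(A), np.linalg.inv(B)))

# Spectrum of A ⊗ B is every pairwise product λ_i(A) · μ_j(B)
pair_products = np.sort(np.outer(np.linalg.eigvalsh(A),
                                 np.linalg.eigvalsh(B)).ravel())
assert np.allclose(np.sort(np.linalg.eigvalsh(np.kron(A, B))), pair_products)
```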
Theorem: Kronecker Factorization of the Sensing Matrix
Consider a multi-static RF imaging system with $N_{\mathrm{tx}}$ transmitters, $N_{\mathrm{rx}}$ receivers, and $N_f$ frequency bins, operating in the far field with the scene discretized on a separable Cartesian grid of dimensions $G_{\mathrm{rx}} \times G_{\mathrm{tx}} \times G_r$. Under these assumptions, the sensing matrix factors as:
$$\mathbf{A} = \mathbf{A}_{\mathrm{rx}} \otimes \mathbf{A}_{\mathrm{tx}} \otimes \mathbf{A}_f,$$
where:
- $\mathbf{A}_{\mathrm{rx}} \in \mathbb{C}^{N_{\mathrm{rx}} \times G_{\mathrm{rx}}}$: receive spatial factor (columns are Rx steering vectors evaluated at grid angles),
- $\mathbf{A}_{\mathrm{tx}} \in \mathbb{C}^{N_{\mathrm{tx}} \times G_{\mathrm{tx}}}$: transmit spatial factor (columns are Tx steering vectors evaluated at grid angles),
- $\mathbf{A}_f \in \mathbb{C}^{N_f \times G_r}$: frequency (range) factor (a partial DFT matrix sampling the delay-frequency relationship).
The total matrix $\mathbf{A} \in \mathbb{C}^{M \times N}$ with $M = N_{\mathrm{rx}} N_{\mathrm{tx}} N_f$ and $N = G_{\mathrm{rx}} G_{\mathrm{tx}} G_r$ is specified by only $N_{\mathrm{rx}} G_{\mathrm{rx}} + N_{\mathrm{tx}} G_{\mathrm{tx}} + N_f G_r$ parameters instead of $MN$.
The physical reason for separability is that in the far field, the round-trip phase from transmitter $m$ through voxel $\mathbf{p}$ to receiver $n$ at frequency $f_k$ separates into three independent terms: one depending only on the Rx-angle pair $(n, i)$, one on the Tx-angle pair $(m, j)$, and one on the frequency-range pair $(k, \ell)$. Since the sensing matrix entry is the product of these three phase terms, the full matrix is their Kronecker product.
Separability of the far-field phase
From the Born approximation (Ch 06, Eq. 21), the entry of $\mathbf{A}$ for measurement index $(n, m, k)$ and voxel $\mathbf{p}$ involves the phase
$$\phi_{n,m,k}(\mathbf{p}) = -2\pi f_k \, \frac{\|\mathbf{p}_{\mathrm{tx},m} - \mathbf{p}\| + \|\mathbf{p} - \mathbf{p}_{\mathrm{rx},n}\|}{c}.$$
For a separable grid and transmit/receive arrays oriented along orthogonal axes, the far-field Taylor expansion of Ch 06 gives:
$$\|\mathbf{p}_{\mathrm{tx},m} - \mathbf{p}\| \approx r(\mathbf{p}) - \mathbf{u}_{\mathrm{tx}}(\mathbf{p})^\top \mathbf{d}_m, \qquad \|\mathbf{p} - \mathbf{p}_{\mathrm{rx},n}\| \approx r(\mathbf{p}) - \mathbf{u}_{\mathrm{rx}}(\mathbf{p})^\top \mathbf{d}_n,$$
with $\mathbf{d}_m$, $\mathbf{d}_n$ the element positions relative to the array centers. The gradient of the phase separates into components along the grid axes.
Three independent phase factors
After the Taylor expansion, the phase factors as:
$$e^{j\phi} = \underbrace{e^{j\frac{2\pi}{\lambda}\mathbf{u}_{\mathrm{rx}}^\top \mathbf{d}_n}}_{\text{Rx factor}} \cdot \underbrace{e^{j\frac{2\pi}{\lambda}\mathbf{u}_{\mathrm{tx}}^\top \mathbf{d}_m}}_{\text{Tx factor}} \cdot \underbrace{e^{-j2\pi f_k \tau_\ell}}_{\text{frequency factor}},$$
where $\mathbf{u}_{\mathrm{tx}}$ and $\mathbf{u}_{\mathrm{rx}}$ are unit direction vectors from the arrays to the voxel, and $\tau_\ell = 2 r_\ell / c$ is the round-trip delay for range bin $\ell$.
Kronecker product identification
Since the entry of $\mathbf{A}$ at row $(n, m, k)$ and column $(i, j, \ell)$ is the product $[\mathbf{A}_{\mathrm{rx}}]_{n i} \, [\mathbf{A}_{\mathrm{tx}}]_{m j} \, [\mathbf{A}_f]_{k \ell}$, with each factor depending on a single index pair, this is precisely the definition of the Kronecker product:
$$\mathbf{A} = \mathbf{A}_{\mathrm{rx}} \otimes \mathbf{A}_{\mathrm{tx}} \otimes \mathbf{A}_f.$$
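The identification can be checked directly: build $\mathbf{A}$ entry by entry from the three phase factors and compare against the triple Kronecker product. A sketch with assumed toy dimensions and randomly drawn angle/delay grids (not the book's simulator code):

```python
import numpy as np

# Assumed toy sizes: N_rx x N_tx x N_f measurements, G_rx x G_tx x G_r voxels
N_rx, N_tx, N_f = 4, 3, 5
G_rx, G_tx, G_r = 6, 5, 7
rng = np.random.default_rng(2)

# Factor matrices: half-wavelength ULA steering vectors and a delay-frequency factor
sin_rx, sin_tx = rng.uniform(-1, 1, G_rx), rng.uniform(-1, 1, G_tx)  # angle grids
tau = rng.uniform(0, 1e-7, G_r)                 # round-trip delays [s]
f = 1e6 * np.arange(N_f)                        # baseband subcarrier frequencies [Hz]
A_rx = np.exp(1j * np.pi * np.outer(np.arange(N_rx), sin_rx))
A_tx = np.exp(1j * np.pi * np.outer(np.arange(N_tx), sin_tx))
A_f  = np.exp(-2j * np.pi * np.outer(f, tau))

# Entrywise construction: each entry is the product of three phase terms,
# receive index slowest and frequency fastest on both axes
A_full = np.empty((N_rx * N_tx * N_f, G_rx * G_tx * G_r), dtype=complex)
for n in range(N_rx):
    for m in range(N_tx):
        for k in range(N_f):
            row = (n * N_tx + m) * N_f + k
            A_full[row] = np.einsum("i,j,l->ijl", A_rx[n], A_tx[m], A_f[k]).ravel()

# The separable-phase matrix is exactly the triple Kronecker product
assert np.allclose(A_full, np.kron(A_rx, np.kron(A_tx, A_f)))
```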
On the Factor Ordering Convention
The ordering $\mathbf{A}_{\mathrm{rx}} \otimes \mathbf{A}_{\mathrm{tx}} \otimes \mathbf{A}_f$ corresponds to the measurement index ordering $(n, m, k)$ --- receive antenna varies slowest, frequency varies fastest --- and scene index ordering $(i, j, \ell)$. Different stacking conventions lead to permuted Kronecker factors. The CommIT simulator uses the convention above, matching Caire's note (Eq. 22--23). When reading other references, verify the index ordering before comparing expressions.
Definition: Storage and Computational Savings
Storage and Computational Savings
For a Kronecker-factored sensing matrix with factor sizes $m_i \times n_i$, $i = 1, 2, 3$:

| Quantity | Full $\mathbf{A}$ | Kronecker factors |
|---|---|---|
| Storage | $\prod_i m_i n_i$ complex numbers | $\sum_i m_i n_i$ complex numbers |
| Matvec | $\prod_i m_i \cdot \prod_i n_i$ multiplications | $m_3 n_3 n_1 n_2 + m_2 n_2 n_1 m_3 + m_1 n_1 m_2 m_3$ multiplications |
For representative system parameters --- arrays of tens of elements, hundreds of subcarriers, and a voxel grid of order $10^5$ points --- the full matrix occupies gigabytes while the factors fit in tens of kilobytes, a storage ratio of roughly five orders of magnitude, with a comparable ratio in matvec flops.
The Kronecker factorization makes iterative reconstruction feasible for 3D imaging problems that would otherwise require supercomputer-scale resources.
Theorem: Fast Matrix-Vector Product via Kronecker Structure
For $\mathbf{A} = \mathbf{A}_1 \otimes \mathbf{A}_2 \otimes \mathbf{A}_3$ with factors of size $m_i \times n_i$ ($i = 1, 2, 3$), the product $\mathbf{y} = \mathbf{A}\mathbf{x}$ can be computed in
$$m_3 n_3 \, n_1 n_2 + m_2 n_2 \, n_1 m_3 + m_1 n_1 \, m_2 m_3$$
operations, compared to $\prod_i m_i \cdot \prod_i n_i$ for the naive product. For balanced factors ($m_i = n_i = n$), this reduces the cost from $O(n^6)$ to $O(n^4)$.
Instead of applying the giant matrix $\mathbf{A}$ all at once, we reshape the reflectivity vector $\mathbf{x}$ into a 3D tensor and apply each factor matrix along one mode at a time --- three small multiplications instead of one enormous one.
Reshape into a tensor
Reshape $\mathbf{x} \in \mathbb{C}^{n_1 n_2 n_3}$ into a 3D tensor $\mathcal{X} \in \mathbb{C}^{n_1 \times n_2 \times n_3}$, with the index of $\mathbf{A}_1$ varying slowest and that of $\mathbf{A}_3$ fastest.
Sequential mode products
Apply each factor along one tensor mode:
- $\mathcal{X}^{(1)} = \mathcal{X} \times_3 \mathbf{A}_3$: cost $m_3 n_3 \, n_1 n_2$.
- $\mathcal{X}^{(2)} = \mathcal{X}^{(1)} \times_2 \mathbf{A}_2$: cost $m_2 n_2 \, n_1 m_3$.
- $\mathcal{X}^{(3)} = \mathcal{X}^{(2)} \times_1 \mathbf{A}_1$: cost $m_1 n_1 \, m_2 m_3$.
The result $\mathcal{X}^{(3)} \in \mathbb{C}^{m_1 \times m_2 \times m_3}$, vectorized, equals $\mathbf{y} = \mathbf{A}\mathbf{x}$.
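The three steps above can be sketched in a few lines of NumPy; `kron_matvec` is an illustrative helper (not a library function), with each `einsum` subscript naming one mode product:

```python
import numpy as np

def kron_matvec(A1, A2, A3, x):
    """Compute y = (A1 ⊗ A2 ⊗ A3) x by sequential mode products,
    without ever forming the full Kronecker product."""
    n1, n2, n3 = A1.shape[1], A2.shape[1], A3.shape[1]
    X = x.reshape(n1, n2, n3)            # index of A1 slowest, A3 fastest
    X = np.einsum("pl,ijl->ijp", A3, X)  # mode 3: cost m3*n3 * n1*n2
    X = np.einsum("qj,ijp->iqp", A2, X)  # mode 2: cost m2*n2 * n1*m3
    X = np.einsum("ri,iqp->rqp", A1, X)  # mode 1: cost m1*n1 * m2*m3
    return X.ravel()

# Verify against the explicit Kronecker product on a small instance
rng = np.random.default_rng(3)
A1 = rng.standard_normal((4, 5)) + 1j * rng.standard_normal((4, 5))
A2 = rng.standard_normal((3, 6)) + 1j * rng.standard_normal((3, 6))
A3 = rng.standard_normal((2, 7)) + 1j * rng.standard_normal((2, 7))
x = rng.standard_normal(5 * 6 * 7) + 1j * rng.standard_normal(5 * 6 * 7)

y_fast = kron_matvec(A1, A2, A3, x)
y_full = np.kron(A1, np.kron(A2, A3)) @ x
assert np.allclose(y_fast, y_full)
```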
Example: Kronecker Structure for 2D Imaging with ULAs
A monostatic system uses an $N_{\mathrm{tx}}$-element Tx ULA and a co-located $N_{\mathrm{rx}}$-element Rx ULA, both with half-wavelength spacing, transmitting on $N_f$ subcarriers with spacing $\Delta f$ centered at carrier frequency $f_c$. The scene is a 2D grid of $G_\theta$ cross-range (angle) bins and $G_r$ range bins.
(a) Write the factor matrices explicitly. (b) Compute the storage saving. (c) Determine the computational saving for one matvec.
Factor matrices
Since this is a 2D problem, $\mathbf{A} = \mathbf{A}_{\mathrm{ang}} \otimes \mathbf{A}_f$, where the angular factor combines Tx and Rx via the virtual array (Ch 06.7). The virtual array has $N_{\mathrm{v}} = N_{\mathrm{tx}} N_{\mathrm{rx}}$ elements.
So $[\mathbf{A}_{\mathrm{ang}}]_{v, i} = e^{j\pi v \sin\theta_i}$ for virtual element $v = 0, \dots, N_{\mathrm{v}} - 1$, and $[\mathbf{A}_f]_{k, \ell} = e^{-j2\pi k \Delta f \, \tau_\ell}$ with $\tau_\ell = 2 r_\ell / c$, giving $\mathbf{A}_{\mathrm{ang}} \in \mathbb{C}^{N_{\mathrm{v}} \times G_\theta}$ and $\mathbf{A}_f \in \mathbb{C}^{N_f \times G_r}$.
Storage
Full matrix: $N_{\mathrm{v}} N_f \cdot G_\theta G_r$ complex entries (at 16 bytes per complex double).
Kronecker factors: $N_{\mathrm{v}} G_\theta + N_f G_r$ entries. Saving: a factor of $N_{\mathrm{v}} N_f G_\theta G_r / (N_{\mathrm{v}} G_\theta + N_f G_r)$.
Matvec cost
Full matvec: $N_{\mathrm{v}} N_f \cdot G_\theta G_r$ complex multiplications.
Kronecker matvec: reshape $\mathbf{x}$ into $\mathbf{X} \in \mathbb{C}^{G_r \times G_\theta}$; by the vec identity, $\mathbf{y} = \operatorname{vec}(\mathbf{A}_f \mathbf{X} \mathbf{A}_{\mathrm{ang}}^\top)$. Computing $\mathbf{A}_f \mathbf{X}$ costs $N_f G_r G_\theta$ multiplications and $(\mathbf{A}_f \mathbf{X}) \mathbf{A}_{\mathrm{ang}}^\top$ costs $N_f G_\theta N_{\mathrm{v}}$. Total: $N_f G_\theta (G_r + N_{\mathrm{v}})$.
When the factors are DFT matrices (uniform angle grid, uniform frequency spacing), each product becomes an FFT: $O(n \log n)$ per mode instead of $O(n^2)$.
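A sketch of this special case: when the factors are full (unnormalized) DFT matrices, the Kronecker matvec collapses to a multidimensional FFT of the reshaped vector --- here checked with `np.fft.fft2` on assumed toy sizes:

```python
import numpy as np

n1, n2 = 8, 16
rng = np.random.default_rng(4)
x = rng.standard_normal(n1 * n2) + 1j * rng.standard_normal(n1 * n2)

# Unnormalized DFT factor matrices, as arise for uniform grids / subcarriers
F1 = np.fft.fft(np.eye(n1), axis=0)
F2 = np.fft.fft(np.eye(n2), axis=0)

# (F1 ⊗ F2) @ x  ==  2D FFT of x reshaped to an n1 x n2 tensor
y_kron = np.kron(F1, F2) @ x
y_fft = np.fft.fft2(x.reshape(n1, n2)).ravel()
assert np.allclose(y_kron, y_fft)
```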
Kronecker Structure Visualization
Visualizes the magnitude of the sensing matrix and its Kronecker factors.
Top row: Magnitude plots of the three factor matrices $\mathbf{A}_{\mathrm{rx}}$, $\mathbf{A}_{\mathrm{tx}}$, $\mathbf{A}_f$.
Bottom left: Magnitude of the full sensing matrix $\mathbf{A}$, showing the characteristic block-repetitive pattern.
Bottom right: Singular values of $\mathbf{A}$ (blue) compared with products of factor singular values (red circles), confirming the multiplicative SVD property.
The Gram Matrix Inherits Kronecker Structure
Since $\mathbf{A} = \mathbf{A}_{\mathrm{rx}} \otimes \mathbf{A}_{\mathrm{tx}} \otimes \mathbf{A}_f$, the Gram matrix factors as
$$\mathbf{A}^H \mathbf{A} = (\mathbf{A}_{\mathrm{rx}}^H \mathbf{A}_{\mathrm{rx}}) \otimes (\mathbf{A}_{\mathrm{tx}}^H \mathbf{A}_{\mathrm{tx}}) \otimes (\mathbf{A}_f^H \mathbf{A}_f).$$
This means the point-spread function (PSF) --- the response of the imaging system to a point scatterer --- can be analyzed factor by factor. The overall PSF is the Kronecker product of the PSFs in the receive-angle, transmit-angle, and range dimensions. A narrow main lobe in one dimension cannot compensate for wide sidelobes in another: the imaging resolution is determined by the worst factor.
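The Gram factorization follows from the mixed-product rule and is easy to verify numerically (toy dimensions assumed):

```python
import numpy as np

rng = np.random.default_rng(5)
A_rx = rng.standard_normal((4, 6)) + 1j * rng.standard_normal((4, 6))
A_tx = rng.standard_normal((3, 5)) + 1j * rng.standard_normal((3, 5))
A_f  = rng.standard_normal((5, 7)) + 1j * rng.standard_normal((5, 7))

A = np.kron(A_rx, np.kron(A_tx, A_f))

# Gram matrix of a Kronecker product == Kronecker product of factor Grams
G = A.conj().T @ A
G_fact = np.kron(A_rx.conj().T @ A_rx,
                 np.kron(A_tx.conj().T @ A_tx, A_f.conj().T @ A_f))
assert np.allclose(G, G_fact)
```

Each column of the factor Grams is a 1D point-spread function, so the full PSF (a column of `G`) is the outer product of the three 1D PSFs, which is why no single well-designed dimension can rescue a poor one.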
Common Mistake: When Kronecker Structure Breaks Down
Mistake:
Assuming exact Kronecker decomposition in all operating conditions. The factorization requires:
- Far-field approximation --- spherical wavefronts in the near field break separability.
- Separable Cartesian grid --- polar or adaptive grids prevent clean factorization.
- Narrowband per subcarrier --- wideband effects within a single subcarrier couple range and angle.
- No mutual coupling --- antenna interactions create non-separable effects.
Correction:
When these assumptions are only approximately satisfied, $\mathbf{A} = \mathbf{A}_{\mathrm{rx}} \otimes \mathbf{A}_{\mathrm{tx}} \otimes \mathbf{A}_f + \mathbf{E}$, where $\mathbf{E}$ is a perturbation. Low-rank approximations of $\mathbf{E}$ or iterative refinement can recover most of the computational benefit. For near-field scenarios, see the extended Fresnel-zone corrections in Ch 08.4.
Historical Note: Leopold Kronecker and the Product That Bears His Name
The Kronecker product was introduced by the German mathematician Leopold Kronecker (1823--1891), though it was also independently developed by Johann Georg Zehfuss in 1858. The operation was initially a curiosity in pure algebra. Its computational significance was recognized much later, when Van Loan and Pitsianis (1993) showed that Kronecker product approximation provides optimal low-rank factorizations for structured matrices --- precisely the setting we encounter in imaging. In the radar and signal processing community, the Kronecker structure was exploited for MIMO radar by Li and Stoica (2007), and Caire's framework extends it to the multi-static, multi-frequency RF imaging context.
Separable grid
A discretization of the target region where the voxel positions form a Cartesian product: $\{x_i\}_{i=1}^{N_x} \times \{y_j\}_{j=1}^{N_y} \times \{z_k\}_{k=1}^{N_z}$. This separability is what enables the Kronecker factorization of $\mathbf{A}$.
Related: {{Ref:Gloss Kronecker Product}}
GPU Implementation of Kronecker Matvec
The sequential mode-product algorithm maps naturally to GPU computation. Each mode product is a batched matrix multiplication:
- Mode 1 ($\mathbf{A}_f$ along range): batch of $G_{\mathrm{rx}} G_{\mathrm{tx}}$ matrix-vector products of size $N_f \times G_r$.
- Mode 2 ($\mathbf{A}_{\mathrm{tx}}$ along Tx-angle): batch of $G_{\mathrm{rx}} N_f$ products of size $N_{\mathrm{tx}} \times G_{\mathrm{tx}}$.
- Mode 3 ($\mathbf{A}_{\mathrm{rx}}$ along Rx-angle): batch of $N_{\mathrm{tx}} N_f$ products of size $N_{\mathrm{rx}} \times G_{\mathrm{rx}}$.
Using CuPy or PyTorch, each mode product is a single `torch.einsum` or `cupy.tensordot` call, achieving near-peak GPU throughput. The CommIT simulator implements this pattern, enabling real-time imaging at millimeter-wave frequencies.
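The batched-GEMM shape of one mode product can be sketched in NumPy (on GPU, the same array logic runs under CuPy or PyTorch; `mode_product_batched` is an illustrative helper, not CommIT code):

```python
import numpy as np

def mode_product_batched(A, T, mode):
    """Apply matrix A along one mode of tensor T as a single batched
    matrix product --- the shape that maps to one GEMM kernel on a GPU."""
    T = np.moveaxis(T, mode, -1)          # bring the contracted mode last
    lead = T.shape[:-1]                   # batch dimensions
    Y = T.reshape(-1, T.shape[-1]) @ A.T  # batch of matvecs == one GEMM
    return np.moveaxis(Y.reshape(*lead, A.shape[0]), -1, mode)

# Apply the three factors mode by mode and check against the full product
rng = np.random.default_rng(6)
A_rx, A_tx, A_f = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
                   for m, n in [(2, 4), (3, 5), (4, 6)])
X = rng.standard_normal((4, 5, 6)) + 1j * rng.standard_normal((4, 5, 6))

Y = mode_product_batched(A_f, X, 2)   # mode 1 of the list above: range
Y = mode_product_batched(A_tx, Y, 1)  # mode 2: Tx-angle
Y = mode_product_batched(A_rx, Y, 0)  # mode 3: Rx-angle
assert np.allclose(Y.ravel(), np.kron(A_rx, np.kron(A_tx, A_f)) @ X.ravel())
```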
Key Takeaway
Under far-field and separable-grid assumptions, the sensing matrix factors as $\mathbf{A} = \mathbf{A}_{\mathrm{rx}} \otimes \mathbf{A}_{\mathrm{tx}} \otimes \mathbf{A}_f$. This reduces storage from $MN$ to $N_{\mathrm{rx}} G_{\mathrm{rx}} + N_{\mathrm{tx}} G_{\mathrm{tx}} + N_f G_r$ complex numbers (a factor of roughly $10^5$ for typical parameters) and replaces the $O(MN)$ matrix-vector product with sequential mode products costing $O(n^4)$ for balanced factors, or $O(N \log N)$ when the factors are DFT matrices. Every reconstruction algorithm in this book exploits this structure.