Exercises
ex01-kronecker-basic
Easy. Let $\mathbf{B}$ and $\mathbf{C}$ be given $2 \times 2$ matrices.
(a) Compute $\mathbf{B} \otimes \mathbf{C}$. (b) Compute $\mathbf{C} \otimes \mathbf{B}$. (c) Verify that $\mathbf{B} \otimes \mathbf{C} \neq \mathbf{C} \otimes \mathbf{B}$. (d) Compute the eigenvalues of both products and verify the eigenvalue product property.
Replace each entry $b_{ij}$ of $\mathbf{B}$ with the block $b_{ij}\mathbf{C}$.
The eigenvalues of $\mathbf{B} \otimes \mathbf{C}$ are the pairwise products $\lambda_i(\mathbf{B})\,\mu_j(\mathbf{C})$.
The eigenvalues of $\mathbf{C} \otimes \mathbf{B}$ are the same set of products $\mu_j(\mathbf{C})\,\lambda_i(\mathbf{B})$.
$\mathbf{B} \otimes \mathbf{C}$: apply the block rule, replacing each $b_{ij}$ with $b_{ij}\mathbf{C}$.
$\mathbf{C} \otimes \mathbf{B}$: apply the block rule, replacing each $c_{ij}$ with $c_{ij}\mathbf{B}$.
These are different matrices (the blocks are in different positions), confirming non-commutativity.
Eigenvalue verification
Eigenvalues of $\mathbf{B}$: $\lambda_1$, $\lambda_2$. Eigenvalues of $\mathbf{C}$: $\mu_1$, $\mu_2$. Products: $\lambda_i \mu_j$ for $i, j \in \{1, 2\}$. Both $\mathbf{B} \otimes \mathbf{C}$ and $\mathbf{C} \otimes \mathbf{B}$ have the same eigenvalues (they are similar).
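A minimal numpy sketch of (a)--(d); the $2 \times 2$ matrices below are hypothetical stand-ins, since the exercise's $\mathbf{B}$ and $\mathbf{C}$ are not reproduced here:

```python
import numpy as np

# Hypothetical stand-ins for the exercise's B and C.
B = np.array([[2.0, 1.0],
              [0.0, 3.0]])
C = np.array([[1.0, 0.0],
              [1.0, 4.0]])

BC = np.kron(B, C)   # replace each b_ij with the block b_ij * C
CB = np.kron(C, B)
print(np.allclose(BC, CB))   # False: the Kronecker product does not commute

# Eigenvalues of both products are all pairwise products lambda_i(B) * mu_j(C).
pairwise = np.sort(np.outer(np.linalg.eigvals(B), np.linalg.eigvals(C)).ravel())
print(np.allclose(pairwise, np.sort(np.linalg.eigvals(BC))))  # True
print(np.allclose(pairwise, np.sort(np.linalg.eigvals(CB))))  # True: same spectrum
```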
ex02-vec-identity
Easy. Verify the vec identity $\operatorname{vec}(\mathbf{A}\mathbf{X}\mathbf{B}^{\mathsf{T}}) = (\mathbf{B} \otimes \mathbf{A})\operatorname{vec}(\mathbf{X})$ for given matrices $\mathbf{A}$, $\mathbf{B}$, and $\mathbf{X}$.
$\operatorname{vec}(\cdot)$ stacks the columns of its argument (column-stacking).
Compute both sides independently and check equality.
Left-hand side
Compute $\mathbf{A}\mathbf{X}$, then $(\mathbf{A}\mathbf{X})\mathbf{B}^{\mathsf{T}}$. Product: stack the columns to obtain $\operatorname{vec}(\mathbf{A}\mathbf{X}\mathbf{B}^{\mathsf{T}})$.
Right-hand side
Form $\mathbf{B} \otimes \mathbf{A}$ and $\operatorname{vec}(\mathbf{X})$, then multiply. The two vectors match entry by entry.
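The identity is easy to check numerically; a sketch with random matrices of hypothetical sizes (numpy's `order="F"` flatten implements the column-stacking vec):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical sizes; the exercise's specific A, B, X are not reproduced here.
A = rng.normal(size=(3, 4))
B = rng.normal(size=(5, 2))
X = rng.normal(size=(4, 2))

def vec(M):
    return M.flatten(order="F")   # column-stacking convention

lhs = vec(A @ X @ B.T)            # vec(A X B^T)
rhs = np.kron(B, A) @ vec(X)      # (B (x) A) vec(X)
print(np.allclose(lhs, rhs))      # True
```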
ex03-storage-saving
Easy. A 3D imaging system has per-factor measurement dimensions $M_1$, $M_2$, $M_3$, and the scene grid is $N_1 \times N_2 \times N_3$.
(a) Compute $M = M_1 M_2 M_3$ (number of measurements) and $N = N_1 N_2 N_3$ (number of voxels). (b) Compute the storage for the full $M \times N$ matrix in MB (16 bytes per complex double). (c) Compute the storage for the Kronecker factors in KB. (d) What is the storage ratio?
Full storage scales as $MN$ entries; factored storage scales as $\sum_i M_i N_i$ entries.
Factor sizes: $M_1 \times N_1$, $M_2 \times N_2$, $M_3 \times N_3$.
Dimensions
$M = M_1 M_2 M_3$. $N = N_1 N_2 N_3$.
Full storage
$16\,MN$ bytes, i.e. $16\,MN / 2^{20}$ MB.
Kronecker storage
$\sum_{i=1}^{3} M_i N_i$ entries. $16 \sum_i M_i N_i$ bytes, reported in KB.
Ratio
The ratio is $MN / \sum_i M_i N_i$, typically several orders of magnitude of reduction.
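A small calculator for (a)--(d); the factor dimensions below are hypothetical placeholders for the exercise's values:

```python
from math import prod

# Hypothetical factor dimensions standing in for the exercise's values.
M = [64, 64, 32]     # measurement dims M_1, M_2, M_3
N = [128, 128, 64]   # grid dims N_1, N_2, N_3

bytes_per_entry = 16  # complex double
full = bytes_per_entry * prod(M) * prod(N)                     # M*N entries
factored = bytes_per_entry * sum(m * n for m, n in zip(M, N))  # sum of M_i*N_i

print(f"full matrix:  {full / 2**20:.1f} MB")
print(f"Kron factors: {factored / 2**10:.1f} KB")
print(f"ratio:        {full / factored:.2e}")
```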
ex04-adjoint-kronecker
Easy. Show that the adjoint of a Kronecker product satisfies $(\mathbf{A} \otimes \mathbf{B})^{\mathsf{H}} = \mathbf{A}^{\mathsf{H}} \otimes \mathbf{B}^{\mathsf{H}}$ directly from the definition.
Use the entry-wise definition: $(\mathbf{A} \otimes \mathbf{B})_{(i-1)p+k,\,(j-1)q+\ell} = a_{ij}\,b_{k\ell}$ for $\mathbf{B} \in \mathbb{C}^{p \times q}$.
Entry-wise computation
The $((i-1)p+k,\,(j-1)q+\ell)$ entry of $\mathbf{A} \otimes \mathbf{B}$ is $a_{ij}\,b_{k\ell}$. The conjugate transpose entry is $\overline{a_{ij}}\,\overline{b_{k\ell}} = (\mathbf{A}^{\mathsf{H}})_{ji}\,(\mathbf{B}^{\mathsf{H}})_{\ell k}$, which is exactly the corresponding entry of $\mathbf{A}^{\mathsf{H}} \otimes \mathbf{B}^{\mathsf{H}}$.
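A one-line numerical check of the identity on random complex matrices (sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.normal(size=(3, 2)) + 1j * rng.normal(size=(3, 2))
B = rng.normal(size=(2, 4)) + 1j * rng.normal(size=(2, 4))
# (A (x) B)^H equals A^H (x) B^H entry for entry.
print(np.allclose(np.kron(A, B).conj().T, np.kron(A.conj().T, B.conj().T)))  # True
```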
ex05-condition-product
Medium. Let $\mathbf{A}_1$ and $\mathbf{A}_2$ be given with $\kappa(\mathbf{A}_1) = 12$ and $\kappa(\mathbf{A}_2) = 5$.
(a) What is $\kappa(\mathbf{A}_1 \otimes \mathbf{A}_2)$? (b) If we add a third factor $\mathbf{A}_3$ with $\kappa(\mathbf{A}_3) = 3$, what is the overall condition number? (c) Which factor should be improved first to most effectively reduce the overall conditioning?
The condition number of a Kronecker product is the product of factor condition numbers.
Improving the worst-conditioned factor has the largest absolute effect.
Two-factor case
$\kappa(\mathbf{A}_1 \otimes \mathbf{A}_2) = \kappa(\mathbf{A}_1)\,\kappa(\mathbf{A}_2) = 12 \times 5 = 60$.
Three-factor case
$\kappa(\mathbf{A}_1 \otimes \mathbf{A}_2 \otimes \mathbf{A}_3) = 12 \times 5 \times 3 = 180$.
Which to improve
Reducing $\kappa(\mathbf{A}_1)$ from 12 to 6 would reduce the overall condition number from 180 to 90 (saving 90). Reducing $\kappa(\mathbf{A}_2)$ from 5 to 2.5 would give $12 \times 2.5 \times 3 = 90$ (also saving 90). Reducing $\kappa(\mathbf{A}_3)$ from 3 to 1.5 gives $12 \times 5 \times 1.5 = 90$ (saving 90).
In terms of relative improvement, halving any factor halves the total. The key insight is that the multiplicative structure means all factors contribute proportionally --- improving the worst-conditioned factor by a given multiplicative ratio has the same effect as improving any other.
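The multiplicative property itself is easy to confirm numerically; a sketch with random factors (the singular values of a Kronecker product are pairwise products, so 2-norm condition numbers multiply):

```python
import numpy as np

rng = np.random.default_rng(1)
A1 = rng.normal(size=(4, 4))
A2 = rng.normal(size=(3, 3))
k1, k2 = np.linalg.cond(A1), np.linalg.cond(A2)
# kappa(A1 (x) A2) = kappa(A1) * kappa(A2) in the 2-norm.
print(np.isclose(np.linalg.cond(np.kron(A1, A2)), k1 * k2))  # True
```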
ex06-svd-kronecker
Medium. Let $\ntn{sens}_1$ and $\ntn{sens}_2$ be given $2 \times 2$ matrices, with $\ntn{sens}_2$ diagonal.
(a) Compute the singular values of $\ntn{sens}_1$ and $\ntn{sens}_2$. (b) Compute all singular values of $\ntn{sens}_1 \otimes \ntn{sens}_2$. (c) Verify by computing the SVD of the full matrix.
Singular values are square roots of eigenvalues of $\mathbf{A}^{\mathsf{H}}\mathbf{A}$.
$\ntn{sens}_2$ is diagonal, so its singular values are the absolute values of its diagonal entries.
SVD of $\ntn{sens}_1$
Form $\ntn{sens}_1^{\mathsf{H}}\ntn{sens}_1$ and compute its eigenvalues; the singular values $\sigma_1 \geq \sigma_2$ are their square roots.
SVD of $\ntn{sens}_2$
$\ntn{sens}_2$ is diagonal: its singular values are read directly from the diagonal.
Kronecker product singular values
All pairwise products $\sigma_i(\ntn{sens}_1)\,\sigma_j(\ntn{sens}_2)$ for $i, j \in \{1, 2\}$, sorted in decreasing order, give the four singular values of the Kronecker product; $\kappa$ is the ratio of the largest to the smallest.
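A numpy sketch of (a)--(c); the factors below are hypothetical (with the second factor diagonal, as in the exercise):

```python
import numpy as np

rng = np.random.default_rng(2)
A1 = rng.normal(size=(2, 2))    # hypothetical stand-in for the first factor
A2 = np.diag([3.0, 0.5])        # diagonal factor: singular values 3 and 0.5

s1 = np.linalg.svd(A1, compute_uv=False)
s2 = np.linalg.svd(A2, compute_uv=False)

# Singular values of the Kronecker product = all pairwise products, sorted.
pairwise = np.sort(np.outer(s1, s2).ravel())[::-1]
full = np.linalg.svd(np.kron(A1, A2), compute_uv=False)
print(np.allclose(pairwise, full))            # True
print("kappa =", pairwise[0] / pairwise[-1])  # ratio of extreme products
```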
ex07-coherence-ula
Medium. A ULA with $M$ elements at half-wavelength spacing observes a scene discretized into $N$ angular bins uniformly spaced in $\sin\theta$.
(a) Write the $(m, n)$ entry of the sensing matrix. (b) Compute the coherence between adjacent bins. (c) Compute the Welch bound and compare. (d) How many bins must be removed (from the edges) to reduce the coherence below 0.5?
Adjacent bins are separated by the grid spacing $\Delta$ in $\sin\theta$.
The inner product of two steering vectors is a geometric series in the phase difference.
Sensing matrix entries
$a_{mn} = \frac{1}{\sqrt{M}}\,e^{j\pi m \sin\theta_n}$ for $m = 0, \dots, M-1$ and $n = 1, \dots, N$.
Adjacent-bin coherence
Summing the geometric series over $m$ gives $\mu = \frac{1}{M}\left|\frac{\sin(M\pi\Delta/2)}{\sin(\pi\Delta/2)}\right|$, where $\Delta$ is the adjacent-bin spacing in $\sin\theta$; for the stated grid this evaluates to 0.64.
Welch bound
The Welch bound for an $M \times N$ matrix is $\mu_{\min} = \sqrt{\frac{N-M}{M(N-1)}}$, which evaluates to 0.18 here.
The physical matrix coherence (0.64) greatly exceeds the Welch bound (0.18).
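A sketch that builds the ULA sensing matrix and evaluates both quantities; `M` and `N` below are hypothetical values, not the exercise's:

```python
import numpy as np

M, N = 16, 64                                   # hypothetical element/bin counts
m = np.arange(M)
sin_theta = np.linspace(-1, 1, N, endpoint=False)             # uniform in sin(theta)
A = np.exp(1j * np.pi * np.outer(m, sin_theta)) / np.sqrt(M)  # half-wavelength ULA

G = np.abs(A.conj().T @ A)       # column inner-product magnitudes
np.fill_diagonal(G, 0.0)
mu = G.max()                     # mutual coherence (attained by adjacent bins)
welch = np.sqrt((N - M) / (M * (N - 1)))
print(f"coherence = {mu:.3f}, Welch bound = {welch:.3f}")
```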
ex08-preconditioned-convergence
Medium. A Tikhonov problem with condition number $\kappa$ is solved by CG.
(a) Estimate the number of CG iterations to reduce the residual by a factor of $\epsilon$ without preconditioning. (b) A Kronecker preconditioner reduces the effective condition number to 25. How many iterations are needed now? (c) If each preconditioned iteration costs a constant factor more than an unpreconditioned iteration, is preconditioning worthwhile?
CG convergence: $\|\mathbf{e}_k\|_{\mathbf{A}} \leq 2\left(\frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}\right)^{k} \|\mathbf{e}_0\|_{\mathbf{A}}$, so $k \approx \tfrac{1}{2}\sqrt{\kappa}\,\ln(2/\epsilon)$ iterations suffice.
Unpreconditioned
$k \approx \tfrac{1}{2}\sqrt{\kappa}\,\ln(2/\epsilon)$ iterations with the original $\kappa$.
Preconditioned
With $\kappa_{\text{eff}} = 25$, $\sqrt{\kappa_{\text{eff}}} = 5$, so roughly $k \approx \tfrac{5}{2}\ln(2/\epsilon)$ iterations.
Cost comparison
Unpreconditioned total cost: (many iterations) $\times$ (cost per matvec). Preconditioned total cost: (far fewer iterations) $\times$ (matvec cost plus the preconditioner application). Speedup: roughly the ratio of iteration counts, $\sqrt{\kappa / \kappa_{\text{eff}}}$, divided by the per-iteration overhead factor. Preconditioning is highly worthwhile.
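A small helper implementing the CG iteration estimate from the hint; the condition numbers and tolerance are hypothetical stand-ins:

```python
import numpy as np

def cg_iters(kappa, eps):
    """Smallest k with 2 * ((sqrt(kappa)-1)/(sqrt(kappa)+1))**k <= eps."""
    rho = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)
    return int(np.ceil(np.log(eps / 2) / np.log(rho)))

# Hypothetical numbers: original kappa = 1e4, preconditioned kappa_eff = 25.
print(cg_iters(1e4, 1e-6))   # hundreds of iterations
print(cg_iters(25, 1e-6))    # a few tens of iterations
```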
ex09-mode-product
Medium. A tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$ and a matrix $\mathbf{U} \in \mathbb{R}^{J \times I_2}$.
(a) What is the size of $\mathcal{X} \times_2 \mathbf{U}$? (b) Express the mode-2 product in terms of a standard matrix multiplication using unfolding. (c) Compute the total number of multiplications.
Mode-2 product replaces the second dimension $I_2$ with $J$.
Unfold along mode 2 to get an $I_2 \times I_1 I_3$ matrix.
Output size
$\mathcal{X} \times_2 \mathbf{U} \in \mathbb{R}^{I_1 \times J \times I_3}$.
Unfolding
The mode-2 unfolding $\mathbf{X}_{(2)} \in \mathbb{R}^{I_2 \times I_1 I_3}$ stacks the mode-2 fibers as columns. The mode product is $\mathbf{Y}_{(2)} = \mathbf{U}\mathbf{X}_{(2)}$. Refolding $\mathbf{Y}_{(2)}$ gives $\mathcal{X} \times_2 \mathbf{U}$.
Multiplication count
$J \cdot I_2 \cdot I_1 I_3$ multiplications. Compare to the full Kronecker product approach, which would involve a matrix of size $I_1 J I_3 \times I_1 I_2 I_3$.
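A numpy sketch of the unfold-multiply-refold recipe, cross-checked against a direct einsum contraction (tensor sizes are hypothetical):

```python
import numpy as np

def mode2_product(X, U):
    """Y = X x_2 U via the mode-2 unfolding: Y_(2) = U @ X_(2)."""
    I1, I2, I3 = X.shape
    X2 = np.moveaxis(X, 1, 0).reshape(I2, I1 * I3)            # I2 x (I1*I3) unfolding
    Y2 = U @ X2                                               # standard matrix product
    return np.moveaxis(Y2.reshape(U.shape[0], I1, I3), 0, 1)  # refold to I1 x J x I3

rng = np.random.default_rng(3)
X = rng.normal(size=(4, 5, 6))   # hypothetical I1 x I2 x I3
U = rng.normal(size=(7, 5))      # hypothetical J x I2
Y = mode2_product(X, U)
print(Y.shape)                                         # (4, 7, 6)
print(np.allclose(Y, np.einsum("ijk,lj->ilk", X, U)))  # True
```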
ex10-psf-analysis
Medium. For a 2D imaging system with $\mathbf{A} = \mathbf{A}_1 \otimes \mathbf{A}_2$, where $\mathbf{A}_1$ is a partial DFT and $\mathbf{A}_2$ is a partial DFT:
(a) Compute the Gram matrix $\mathbf{A}^{\mathsf{H}}\mathbf{A}$ as a Kronecker product. (b) Show that the diagonal of $\mathbf{A}^{\mathsf{H}}\mathbf{A}$ is constant (all columns have equal norm) for a DFT matrix. (c) Derive the PSF main-lobe width in the angular and range dimensions.
For a partial DFT $\mathbf{F}_\Omega$, the Gram $\mathbf{F}_\Omega^{\mathsf{H}}\mathbf{F}_\Omega$ has a constant diagonal plus off-diagonal leakage terms.
The PSF main-lobe width is inversely proportional to the number of retained samples in the DFT domain.
Gram matrix factorization
$\mathbf{A}^{\mathsf{H}}\mathbf{A} = (\mathbf{A}_1^{\mathsf{H}}\mathbf{A}_1) \otimes (\mathbf{A}_2^{\mathsf{H}}\mathbf{A}_2)$. Each factor Gram matrix is Hermitian positive semi-definite.
Constant diagonal
For a DFT-like matrix with columns $\mathbf{f}_n$, $\|\mathbf{f}_n\|_2^2$ is the same for every $n$ (normalized columns). Since both factors have unit-norm columns, $(\mathbf{A}^{\mathsf{H}}\mathbf{A})_{nn} = 1$ for all voxels $n$.
Main-lobe width
Angular PSF: main-lobe width inversely proportional to the number of retained rows of $\mathbf{A}_1$, in $\sin\theta$. Range PSF: width inversely proportional to the number of retained rows of $\mathbf{A}_2$, in normalized delay. The 2D PSF is the outer product of these 1D PSFs.
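A sketch verifying the constant Gram diagonal and extracting a 2D PSF for partial DFT factors (the retained row sets and grid sizes are hypothetical):

```python
import numpy as np

def partial_dft(N, rows):
    F = np.fft.fft(np.eye(N)) / np.sqrt(N)   # unitary DFT
    return F[rows]

A1 = partial_dft(16, np.arange(6))    # hypothetical angular factor
A2 = partial_dft(32, np.arange(10))   # hypothetical range factor

G = np.kron(A1.conj().T @ A1, A2.conj().T @ A2)   # Gram = Kron of factor Grams
d = np.real(np.diag(G))
print(np.allclose(d, d[0]))   # True: equal column norms, constant diagonal

# One column of |G|, reshaped to the grid, is the 2D PSF for that voxel;
# it is the outer product of the two 1D factor PSFs.
psf = np.abs(G[:, G.shape[1] // 2]).reshape(16, 32)
print(psf.shape)
```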
ex11-rip-kronecker-bound
Hard. Let $\mathbf{A}_1$ satisfy the RIP of order $s$ with constant $\delta_1$, and $\mathbf{A}_2$ satisfy the RIP of order $s$ with constant $\delta_2$.
(a) Using the Jokar-Mehta bound, compute the RIP constant of $\mathbf{A}_1 \otimes \mathbf{A}_2$ at order $s$. (b) Is this sufficient for LASSO recovery (requires $\delta < \sqrt{2} - 1$)? (c) What is the maximum per-factor RIP constant that still guarantees LASSO recovery for the product?
$\delta(\mathbf{A}_1 \otimes \mathbf{A}_2) \leq (1 + \delta_1)(1 + \delta_2) - 1$.
Solve $(1 + \delta)^2 - 1 = \sqrt{2} - 1$ for $\delta$.
Kronecker RIP bound
$\delta(\mathbf{A}_1 \otimes \mathbf{A}_2) \leq (1 + \delta_1)(1 + \delta_2) - 1$; plug in the given per-factor constants.
LASSO feasibility
The resulting bound is below $\sqrt{2} - 1 \approx 0.414$, so the RIP condition for LASSO recovery is satisfied.
Maximum per-factor constant
With equal constants $\delta_1 = \delta_2 = \delta$: $(1 + \delta)^2 - 1 \leq \sqrt{2} - 1$, i.e. $(1 + \delta)^2 \leq \sqrt{2}$. Solving: $\delta \leq 2^{1/4} - 1 \approx 0.189$. Each factor can have RIP constant up to 0.189.
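The arithmetic in a few lines (the $\sqrt{2} - 1$ LASSO threshold is the one quoted in the exercise):

```python
import numpy as np

thresh = np.sqrt(2) - 1                     # LASSO recovery threshold

def kron_rip(d1, d2):
    """Jokar-Mehta bound: delta(A1 (x) A2) <= (1 + d1)(1 + d2) - 1."""
    return (1 + d1) * (1 + d2) - 1

# Equal per-factor constants: (1 + d)^2 <= sqrt(2)  =>  d <= 2**0.25 - 1.
d_max = 2 ** 0.25 - 1
print(f"per-factor maximum: {d_max:.3f}")          # 0.189
print(kron_rip(d_max, d_max) <= thresh + 1e-12)    # True at the boundary
```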
ex12-near-field-correction
Hard. A Tx ULA with $N_{\mathrm{tx}}$ elements at half-wavelength spacing at 28 GHz images a scene at distance $R$ m.
(a) Compute the Fraunhofer distance $d_F = 2D^2/\lambda$, where $D$ is the array aperture. (b) Is the far-field assumption valid? (c) Write the exact (near-field) phase for each element and quantify the deviation from the far-field Kronecker model. (d) Propose a first-order correction to maintain approximate Kronecker structure.
At 28 GHz, $\lambda \approx 10.7$ mm. Aperture $D = (N_{\mathrm{tx}} - 1)\lambda/2$.
The near-field phase includes a quadratic term in the element position.
Fraunhofer distance
$\lambda = c/f \approx 10.7$ mm. $D = (N_{\mathrm{tx}} - 1)\lambda/2$. $d_F = 2D^2/\lambda$.
Far-field validity
$d_F < R$, so the far-field assumption is valid for this small array. For a larger array (more elements, hence a larger aperture $D$), $d_F$ grows quadratically with $D$ and eventually exceeds $R$, and the near-field correction becomes essential.
Near-field phase deviation
The exact distance from element $n$ at position $x_n$ to a target at range $R$ and angle $\theta$ is $r_n = \sqrt{R^2 - 2 R x_n \sin\theta + x_n^2} \approx R - x_n \sin\theta + \frac{x_n^2 \cos^2\theta}{2R}$. The quadratic term breaks the linear phase assumption and hence the Kronecker factorization.
First-order correction
Include the quadratic phase as a diagonal correction: $\mathbf{A} \approx \mathbf{D}\,(\mathbf{A}_1 \otimes \mathbf{A}_2)$ with $\mathbf{D}$ diagonal carrying the quadratic-phase entries $e^{-jk x_n^2/(2R)}$. This preserves the computational structure (Kronecker + diagonal) while correcting the near-field phase.
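A numeric sketch of (a)--(c); the element count and range below are hypothetical values chosen for illustration:

```python
import numpy as np

c = 299_792_458.0
lam = c / 28e9                    # ~10.7 mm at 28 GHz

N_tx, R = 16, 5.0                 # hypothetical element count and range (m)
x = (np.arange(N_tx) - (N_tx - 1) / 2) * lam / 2   # element positions
D = x[-1] - x[0]                  # aperture

d_F = 2 * D**2 / lam
print(f"D = {D*1e3:.1f} mm, d_F = {d_F:.3f} m, far-field valid: {R > d_F}")

# Worst-case quadratic (Fresnel) phase deviation at broadside: k x_n^2 / (2R).
k = 2 * np.pi / lam
print(f"max quadratic phase error = {k * x.max()**2 / (2 * R):.4f} rad")
```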
ex13-optimal-precond
Hard. For a 2D Kronecker system with $\mathbf{A} = \mathbf{A}_1 \otimes \mathbf{A}_2$:
(a) Show that the optimal (in spectral norm) diagonal preconditioner is $\mathbf{D} = \operatorname{diag}(\mathbf{A}^{\mathsf{H}}\mathbf{A})$. (b) Compute the effective condition number after diagonal preconditioning for a specific example of $\mathbf{A}_1$ and $\mathbf{A}_2$. (c) Compare to the Kronecker preconditioner.
The diagonal of a Kronecker product is the Kronecker product of the diagonals.
The effective condition number is $\kappa(\mathbf{M}^{-1}\mathbf{A}^{\mathsf{H}}\mathbf{A})$ for preconditioner $\mathbf{M}$.
Diagonal preconditioner
$\operatorname{diag}(\mathbf{A}^{\mathsf{H}}\mathbf{A}) = \operatorname{diag}(\mathbf{A}_1^{\mathsf{H}}\mathbf{A}_1) \otimes \operatorname{diag}(\mathbf{A}_2^{\mathsf{H}}\mathbf{A}_2)$. For DFT-like matrices with equal column norms, this diagonal is a multiple of the identity, and the diagonal preconditioner does nothing.
Effective condition number
Full singular values: all pairwise products $\sigma_i(\mathbf{A}_1)\,\sigma_j(\mathbf{A}_2)$; the effective condition number follows from the extreme products. For unequal column norms (non-DFT case), diagonal preconditioning corrects the norm variation but not the off-diagonal coherence, so the condition number decreases but the improvement is limited.
Kronecker preconditioner
$\mathbf{M} = (\mathbf{A}_1^{\mathsf{H}}\mathbf{A}_1) \otimes (\mathbf{A}_2^{\mathsf{H}}\mathbf{A}_2)$ gives $\mathbf{M}^{-1}(\mathbf{A}^{\mathsf{H}}\mathbf{A}) = \mathbf{I}$, i.e. $\kappa = 1$ (perfect preconditioning when the factors are invertible). The Kronecker preconditioner is strictly better than diagonal.
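A sketch comparing the two preconditioners on a small Gram matrix; the factors are random with deliberately unequal column norms (the non-DFT case):

```python
import numpy as np

rng = np.random.default_rng(4)
A1 = rng.normal(size=(6, 4)) * np.array([1.0, 2.0, 0.5, 1.5])   # scaled columns
A2 = rng.normal(size=(5, 3)) * np.array([1.0, 3.0, 0.7])

G1, G2 = A1.T @ A1, A2.T @ A2
G = np.kron(G1, G2)                               # Gram of A1 (x) A2

D_inv = np.diag(1.0 / np.diag(G))                 # diagonal (Jacobi) preconditioner
M_inv = np.kron(np.linalg.inv(G1), np.linalg.inv(G2))   # Kronecker preconditioner

print("unpreconditioned:", np.linalg.cond(G))
print("diagonal:        ", np.linalg.cond(D_inv @ G))   # limited improvement
print("Kronecker:       ", np.linalg.cond(M_inv @ G))   # ~1: exact inverse
```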
ex14-hutchinson-trace
Hard. The OAMP algorithm (Ch 17) requires computing the divergence of the linear estimator, which involves the trace of the LMMSE matrix.
(a) Show that this trace can be computed exactly, at the cost of per-factor eigendecompositions plus an $O(N)$ sum, using the Kronecker eigendecomposition. (b) Compare to the Hutchinson stochastic trace estimator. (c) Under what conditions is the exact computation preferable?
The eigenvalues of $\mathbf{A}^{\mathsf{H}}\mathbf{A} = \bigotimes_i \mathbf{A}_i^{\mathsf{H}}\mathbf{A}_i$ are products of per-factor eigenvalues.
The trace of a diagonal matrix is the sum of diagonal entries.
Exact trace via eigendecomposition
Using the factored eigendecomposition, the eigenvalues of $\mathbf{A}^{\mathsf{H}}\mathbf{A}$ are the triple products $\lambda_i^{(1)}\lambda_j^{(2)}\lambda_k^{(3)}$ of the per-factor Gram eigenvalues.
The trace is $\sum_{i,j,k} g\big(\lambda_i^{(1)}\lambda_j^{(2)}\lambda_k^{(3)}\big)$, where $g$ is the scalar LMMSE gain applied to each eigenvalue.
This triple sum has $N = N_1 N_2 N_3$ terms. But if we precompute the per-factor eigenvalues ($N_1 + N_2 + N_3$ values), evaluating the sum costs $O(N)$ in the worst case.
Hutchinson estimator
The Hutchinson estimator uses $K$ random probe vectors: $\operatorname{tr}(\mathbf{W}) \approx \frac{1}{K}\sum_{k=1}^{K} \mathbf{z}_k^{\mathsf{H}}\mathbf{W}\mathbf{z}_k$ with $\mathbb{E}[\mathbf{z}_k\mathbf{z}_k^{\mathsf{H}}] = \mathbf{I}$. Each probe requires one matvec (cheap with Kronecker structure) plus one LMMSE solve (an inner iterative loop). Total: $K$ times the per-probe cost.
Comparison
The exact trace costs one $O(N)$ sum after the per-factor eigendecompositions, which is asymptotically faster than Hutchinson's repeated probe solves. The exact computation is preferable whenever the Kronecker eigendecomposition is available (precomputed once per geometry). Hutchinson is useful when the matrix does not admit exact Kronecker factorization.
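A sketch contrasting the two estimators on a small two-factor example; the gain $g(\lambda) = \lambda/(\lambda + \sigma^2)$ is one common LMMSE form, used here for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)
A1 = rng.normal(size=(8, 6))
A2 = rng.normal(size=(7, 5))
sigma2 = 0.1

# Exact: eigenvalues of G1 (x) G2 are pairwise products of factor eigenvalues.
l1 = np.linalg.eigvalsh(A1.T @ A1)
l2 = np.linalg.eigvalsh(A2.T @ A2)
lam = np.outer(l1, l2).ravel()
trace_exact = np.sum(lam / (lam + sigma2))

# Hutchinson with Rademacher probes on the dense matrix (validation only).
G = np.kron(A1.T @ A1, A2.T @ A2)
W = G @ np.linalg.inv(G + sigma2 * np.eye(G.shape[0]))
z = rng.choice([-1.0, 1.0], size=(G.shape[0], 200))
trace_hutch = np.mean(np.sum(z * (W @ z), axis=0))

print(trace_exact, trace_hutch)   # close, but the exact value is noise-free
```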
ex15-kspace-coverage
Hard. A multi-static system has 3 Tx arrays and 4 Rx arrays arranged around a target at distance $R$ m, operating with 32 subcarriers spanning a bandwidth of $B$ MHz centered at a carrier of $f_c$ GHz.
(a) Compute the wavenumber-domain points sampled by each Tx-Rx pair. (b) Sketch the k-space coverage for the full system. (c) Identify the direction with poorest coverage and predict which factor's condition number is largest. (d) Compute the approximate condition number in each dimension.
Each Tx-Rx-frequency triple samples the wavenumber $\mathbf{k} = \frac{2\pi f}{c}\,(\hat{\mathbf{u}}_{\mathrm{tx}} + \hat{\mathbf{u}}_{\mathrm{rx}})$.
Range coverage is determined by bandwidth; angular coverage by the bistatic angle spread.
Wavenumber points
For each subcarrier at frequency $f$, the spatial frequency sampled is $\mathbf{k} = \frac{2\pi f}{c}\,(\hat{\mathbf{u}}_{\mathrm{tx}} + \hat{\mathbf{u}}_{\mathrm{rx}})$. With 3 Tx and 4 Rx, there are $3 \times 4 = 12$ Tx-Rx pairs, each contributing 32 frequency points: 384 total k-space samples.
Coverage analysis
The radial (range) extent in k-space is $\Delta k_r = 4\pi B/c$ rad/m, set by the bandwidth $B$. The angular extent depends on the array placement geometry.
Conditioning prediction
If the arrays are clustered on one side of the target (limited angular diversity), the angular k-space coverage is narrow, producing a large condition number for the angular factor. The frequency factor is a well-conditioned partial DFT. The overall conditioning is dominated by the angular factor.
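A sketch enumerating the sampled k-space points for a hypothetical geometry (carrier, bandwidth, and array angles below are illustrative, not the exercise's elided values):

```python
import numpy as np

c = 299_792_458.0
fc, B, n_sub = 10e9, 100e6, 32                    # hypothetical waveform
f = fc + np.linspace(-B / 2, B / 2, n_sub)

tx = np.deg2rad([-30.0, 0.0, 30.0])               # 3 hypothetical Tx directions
rx = np.deg2rad([-45.0, -15.0, 15.0, 45.0])       # 4 hypothetical Rx directions

ks = []
for th_t in tx:
    for th_r in rx:
        u = np.array([np.cos(th_t) + np.cos(th_r), np.sin(th_t) + np.sin(th_r)])
        ks.append(np.outer(2 * np.pi * f / c, u))  # k = (2 pi f / c)(u_tx + u_rx)
ks = np.concatenate(ks)
print(ks.shape)                                    # (384, 2): 12 pairs x 32 subcarriers
print("radial extent ~", 4 * np.pi * B / c, "rad/m")
```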
ex16-nkp-approximation
Challenge. When the sensing matrix is only approximately Kronecker, $\mathbf{A} \approx \mathbf{A}_1 \otimes \mathbf{A}_2$, the Nearest Kronecker Product (NKP) problem seeks $\min_{\mathbf{A}_1, \mathbf{A}_2} \|\mathbf{A} - \mathbf{A}_1 \otimes \mathbf{A}_2\|_F$:
(a) Show that this can be solved via the SVD of a rearranged matrix $\mathcal{R}(\mathbf{A})$. (b) Define the rearrangement operator $\mathcal{R}$ explicitly. (c) Extend to the rank-$R$ approximation $\mathbf{A} \approx \sum_{r=1}^{R} \mathbf{A}_1^{(r)} \otimes \mathbf{A}_2^{(r)}$. (d) For a near-field correction term with rank 3, estimate the computational overhead of using the NKP-3 approximation vs. the exact Kronecker structure.
The rearrangement reshapes the $M_1 M_2 \times N_1 N_2$ matrix $\mathbf{A}$ into an $M_1 N_1 \times M_2 N_2$ matrix by reordering indices.
The rank-1 SVD of $\mathcal{R}(\mathbf{A})$ gives the best single Kronecker factor pair.
Rearrangement operator
Define $\mathcal{R}(\mathbf{A})$ so that its row indexed by $(i, j)$ is the vec of the $(i, j)$ block of $\mathbf{A}$ of size $M_2 \times N_2$. This rearranges the matrix into an $M_1 N_1 \times M_2 N_2$ matrix whose rank-1 SVD gives the NKP.
SVD solution
Let $\mathcal{R}(\mathbf{A}) = \sum_k \sigma_k \mathbf{u}_k \mathbf{v}_k^{\mathsf{H}}$ be the SVD. The best rank-1 approximation gives $\operatorname{vec}(\mathbf{A}_1) = \sqrt{\sigma_1}\,\mathbf{u}_1$, $\operatorname{vec}(\mathbf{A}_2) = \sqrt{\sigma_1}\,\overline{\mathbf{v}}_1$.
Rank-$R$ extension and overhead
$\mathbf{A} \approx \sum_{r=1}^{R} \mathbf{A}_1^{(r)} \otimes \mathbf{A}_2^{(r)}$, keeping the top $R$ singular triplets. Each matvec requires $R$ Kronecker matvecs: cost $R$ times that of a single factored matvec. For $R = 3$: a $3\times$ overhead compared to exact Kronecker, but still far faster than the full dense matvec.
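A compact implementation of the Van Loan-Pitsianis rearrangement and rank-$R$ NKP, with a sanity check that an exact Kronecker product is recovered at rank 1 (function and variable names are ours):

```python
import numpy as np

def nkp(A, shape1, shape2, R=1):
    """Rank-R nearest Kronecker product: A ~ sum_r kron(B_r, C_r)."""
    (M1, N1), (M2, N2) = shape1, shape2
    # Rearrangement: row (i, j) of R(A) is the vec of the (i, j) block (M2 x N2).
    RA = A.reshape(M1, M2, N1, N2).transpose(0, 2, 1, 3).reshape(M1 * N1, M2 * N2)
    U, s, Vh = np.linalg.svd(RA, full_matrices=False)
    Bs = [np.sqrt(s[r]) * U[:, r].reshape(M1, N1) for r in range(R)]
    Cs = [np.sqrt(s[r]) * Vh[r].conj().reshape(M2, N2) for r in range(R)]
    return Bs, Cs

rng = np.random.default_rng(6)
B = rng.normal(size=(3, 4))
C = rng.normal(size=(5, 2))
[B1], [C1] = nkp(np.kron(B, C), (3, 4), (5, 2), R=1)
print(np.allclose(np.kron(B1, C1), np.kron(B, C)))   # True (scale split is immaterial)
```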
ex17-randomized-kron-precond
Challenge. Design a preconditioner for the case when $\mathbf{A}$ does not admit an exact Kronecker factorization. Consider the approach:
- Compute the NKP approximation $\tilde{\mathbf{A}} = \mathbf{A}_1 \otimes \mathbf{A}_2$.
- Use $(\tilde{\mathbf{A}}^{\mathsf{H}}\tilde{\mathbf{A}} + \lambda\mathbf{I})^{-1}$ as a preconditioner for the exact system.
(a) Derive a bound on the effective condition number in terms of the NKP approximation error $\epsilon = \|\mathbf{A} - \tilde{\mathbf{A}}\|_2$. (b) For what values of $\epsilon$ does this preconditioner improve over unpreconditioned CG? (c) Implement this for a simulated near-field scenario and compare convergence.
Use the perturbation theory for eigenvalues: Weyl's inequality, $|\lambda_k(\mathbf{G} + \mathbf{E}) - \lambda_k(\mathbf{G})| \leq \|\mathbf{E}\|_2$.
The preconditioner is effective when $\epsilon$ is small relative to the spectrum, i.e., when the matrix is close to Kronecker.
Condition number bound
Let $\mathbf{G} = \mathbf{A}^{\mathsf{H}}\mathbf{A}$ and $\tilde{\mathbf{G}} = \tilde{\mathbf{A}}^{\mathsf{H}}\tilde{\mathbf{A}}$. Then $\|\mathbf{G} - \tilde{\mathbf{G}}\|_2 \leq 2\epsilon\|\mathbf{A}\|_2 + \epsilon^2$. By Weyl's inequality, the eigenvalues of the preconditioned system deviate from 1 by at most $\|\mathbf{G} - \tilde{\mathbf{G}}\|_2 / \lambda$, so the effective condition number depends on $\epsilon$ and the regularization $\lambda$.
Effectiveness threshold
The NKP preconditioner improves over unpreconditioned CG when $\kappa\big((\tilde{\mathbf{G}} + \lambda\mathbf{I})^{-1}(\mathbf{G} + \lambda\mathbf{I})\big) < \kappa(\mathbf{G} + \lambda\mathbf{I})$. This holds when the NKP approximation error is smaller than the spectral spread of the Gram matrix.
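A small numerical illustration of the threshold; here the known exact Kronecker part stands in for the computed NKP (a simplifying assumption), and `eps` and `lam` are illustrative:

```python
import numpy as np

rng = np.random.default_rng(8)
B = rng.normal(size=(6, 5))
C = rng.normal(size=(7, 4))
eps = 0.05
A = np.kron(B, C) + eps * rng.normal(size=(42, 20))   # near-Kronecker matrix

lam = 1e-2
G = A.T @ A + lam * np.eye(20)                        # exact regularized Gram
G_tilde = np.kron(B, C).T @ np.kron(B, C) + lam * np.eye(20)  # NKP surrogate

print("raw kappa:           ", np.linalg.cond(G))
print("preconditioned kappa:", np.linalg.cond(np.linalg.inv(G_tilde) @ G))
```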
ex18-complexity-comparison
Challenge. Consider a 3D imaging problem with factor sizes $M_i \times N_i$ and grid size $N_1 \times N_2 \times N_3$. Compare the total computational cost (in terms of $M = \prod_i M_i$ and $N = \prod_i N_i$) for reconstructing the image using:
(a) Matched filter (single application of $\mathbf{A}^{\mathsf{H}}$). (b) Tikhonov via direct solve. (c) ISTA with $K$ iterations. (d) OAMP with $K$ iterations (LMMSE step + denoiser). (e) Learned unrolled OAMP with $L$ layers (forward + backward pass).
For each, give the cost with and without Kronecker exploitation, and determine the crossover point where learned OAMP becomes cheaper than classical OAMP.
The denoiser (CNN) costs $O(N)$ per application.
The backward pass through a linear layer costs the same as the forward pass.
Classical OAMP needs an iteration count that grows with the condition number, while learned OAMP uses a small fixed number of layers.
Cost table
| Method | Naive | Kronecker |
|---|---|---|
| MF | $O(MN)$ | three mode products (cf. ex09), far below $MN$ |
| Tikhonov direct | $O(N^3)$ precompute + $O(N^2)$ apply | per-factor factorizations, $O(\sum_i N_i^3)$ precompute |
| ISTA ($K$ iter) | $O(KMN)$ | $K$ pairs of factored matvecs |
| OAMP ($K$ iter) | $O(KN^3)$ (LMMSE solve per iteration) | one-time factor eigendecompositions + $O(KN)$ eigendomain LMMSE (cf. ex14) |
| Learned OAMP ($L$ layers, train) | $O(LMN)$ per sample | $L$ factored matvecs + $O(LN)$ denoiser per sample |
Crossover analysis
Classical OAMP: an iteration count that grows with the condition number. Learned OAMP: a small fixed number of layers $L$. Per-image cost ratio: (classical iterations)$/L$. Once the classical iteration count exceeds $L$, learned OAMP is faster; the advantage grows with the condition number, which is precisely the regime where learned methods shine.
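A toy crossover calculator under the heuristic that the classical iteration count grows like $\sqrt{\kappa}$ (an assumption for illustration), with a hypothetical fixed depth `L`:

```python
import numpy as np

L = 10   # hypothetical fixed depth of the unrolled network
for kappa in [1e2, 1e4, 1e6]:
    classical_iters = np.sqrt(kappa)   # heuristic CG-like scaling
    ratio = classical_iters / L
    print(f"kappa = {kappa:.0e}: classical/learned per-image cost ~ {ratio:.1f}x")
```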