Wavefield Networked Sensing
Imaging as a Network Service
Previous chapters considered imaging from a single platform (SAR, Chapter 12) or a collocated array (MIMO radar, Chapter 11). The 5G/6G wireless infrastructure creates a fundamentally new opportunity: networked imaging, where multiple spatially distributed nodes (base stations, vehicles, drones) cooperatively image the environment by sharing their measurements or partial reconstructions.
The key insight is that each node sees only a slice of k-space: a partial view of the scene's spatial frequency content. Combining these slices fills k-space more completely, improving both resolution and conditioning. This is the k-space tessellation concept from Section 6.5, now realised by a distributed network.
Definition: Networked Sensing Architecture
A networked sensing system consists of $N$ nodes connected by a communication graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$:
- Nodes $\mathcal{V}$: each node $i$ has a local sensing matrix $A_i$ and collects measurements $y_i = A_i x + n_i$.
- Edges $\mathcal{E}$: communication links between nodes. The link capacity $C_{ij}$ constrains how much data can be shared.
The global imaging problem is:

$$\hat{x} = \arg\min_{x} \sum_{i=1}^{N} \|y_i - A_i x\|_2^2 + \lambda R(x)$$

Centralised processing (sending all $y_i$ to a fusion centre) achieves optimal performance but requires enormous backhaul. Distributed algorithms that operate locally with limited communication are essential for practical deployment.
Definition: k-Space Tessellation by the Network
Each Tx-Rx pair at frequency $f$ contributes a single point in k-space at the combined wavenumber $\mathbf{k} = \frac{2\pi f}{c}\,(\hat{\mathbf{u}}_{\mathrm{tx}} + \hat{\mathbf{u}}_{\mathrm{rx}})$, where $\hat{\mathbf{u}}_{\mathrm{tx}}$ and $\hat{\mathbf{u}}_{\mathrm{rx}}$ are the unit vectors from the scene to the transmitter and receiver.
A distributed network of $M_t$ transmitters and $M_r$ receivers sampling $N_f$ frequencies produces $M_t M_r N_f$ k-space samples. The k-space tessellation is the set:

$$\mathcal{K} = \left\{ \tfrac{2\pi f}{c}\left(\hat{\mathbf{u}}_{\mathrm{tx},m} + \hat{\mathbf{u}}_{\mathrm{rx},n}\right) : m = 1,\dots,M_t,\; n = 1,\dots,M_r,\; f \in \left[f_c - \tfrac{B}{2},\, f_c + \tfrac{B}{2}\right] \right\}$$

The imaging resolution is determined by the extent of $\mathcal{K}$ (larger extent $\Rightarrow$ finer resolution), and the conditioning is determined by the uniformity of the coverage (uniform coverage $\Rightarrow$ lower condition number $\Rightarrow$ more stable reconstruction).
A single monostatic node provides k-space coverage along a narrow cone. Adding nodes at different angles fills in the gaps. This is the distributed analogue of MIMO radar virtual aperture extension (Chapter 11).
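The wedge-filling behaviour can be illustrated numerically. The sketch below is illustrative: node geometry, angular aperture, and bandwidth are assumed values, not figures from the text. Each monostatic node contributes k-space samples along its look directions, scaled by the round-trip factor of 2.

```python
import numpy as np

def kspace_samples(node_angles_deg, f_lo, f_hi, aperture_deg=10.0,
                   n_freq=32, n_ang=16, c=3e8):
    """k-space samples contributed by monostatic nodes viewing the scene.

    Each node spans a small angular aperture; every (look angle, frequency)
    pair yields one sample at 2 * (2*pi*f/c) * u(phi) (factor 2: round trip).
    """
    freqs = np.linspace(f_lo, f_hi, n_freq)
    samples = []
    for theta in node_angles_deg:
        looks = np.deg2rad(np.linspace(theta - aperture_deg / 2,
                                       theta + aperture_deg / 2, n_ang))
        for phi in looks:
            u = np.array([np.cos(phi), np.sin(phi)])
            for f in freqs:
                samples.append(2 * (2 * np.pi * f / c) * u)
    return np.array(samples)

# One node covers a narrow wedge; four nodes fill all four quadrants.
one = kspace_samples([0], 27.8e9, 28.2e9)
four = kspace_samples([0, 90, 180, 270], 27.8e9, 28.2e9)
```

Scatter-plotting `four` shows four wedges at right angles: the distributed analogue of the MIMO virtual aperture.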
k-Space Coverage by Distributed Network
Visualise how the k-space coverage changes as nodes are added to the network. Each node contributes a cluster of k-space samples determined by its position and bandwidth.
Observe that a single node covers a narrow angular wedge; adding nodes at different positions fills in the spatial frequency plane, improving resolution in all directions.
Wavefield Networked Sensing
Manzoni, Tebaldini, and Caire introduced wavefield networked sensing: a framework where multiple distributed access points (APs) cooperatively image a scene by combining their k-space contributions.
The key contributions are:
- Per-AP diffraction tomography: Each AP performs local imaging using the diffraction tomography framework of Chapter 15, producing a partial image from its own k-space slice.
- k-space tessellation: The network geometry determines which spatial frequencies each AP can measure. The paper characterises the k-space coverage as a function of network topology and shows that distributed nodes provide more uniform coverage than collocated arrays.
- Back-Projection Algorithm in Time (BPAT): An efficient distributed reconstruction algorithm where each AP computes a local backprojection, and the results are coherently combined. This avoids centralised processing while achieving near-optimal resolution.
The framework establishes RF imaging as a network-level service rather than a single-node capability: a paradigm shift for 6G sensing infrastructure.
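The coherent-combination idea behind per-AP backprojection can be sketched numerically. This is not the paper's exact BPAT algorithm (which operates in the time domain); it is a minimal frequency-domain matched-filter sketch with illustrative AP positions, band, and target.

```python
import numpy as np

c = 3e8
freqs = np.linspace(27.8e9, 28.2e9, 64)        # 400 MHz band at 28 GHz
k = 2 * np.pi * freqs / c

aps = [np.array([-50.0, 0.0]), np.array([0.0, -50.0])]   # two access points
target = np.array([3.0, 2.0])                  # single point scatterer

xs = np.linspace(-5.0, 5.0, 21)                # 0.5 m pixel grid
gx, gy = np.meshgrid(xs, xs, indexing="ij")
grid = np.stack([gx.ravel(), gy.ravel()], axis=1)

def measure(ap):
    """Monostatic frequency response of the scatterer seen from one AP."""
    d = np.linalg.norm(target - ap)
    return np.exp(-2j * k * d)                  # round-trip phase

def local_backprojection(ap, y):
    """One AP's partial image: matched-filter backprojection onto the grid."""
    d = np.linalg.norm(grid - ap, axis=1)
    return (y[:, None] * np.exp(2j * k[:, None] * d[None, :])).sum(axis=0)

partials = [local_backprojection(ap, measure(ap)) for ap in aps]
image = np.abs(sum(partials))                   # coherent combination
peak = grid[np.argmax(image)]                   # lands on the target
```

Each AP alone localises the target only in range (a ring in its partial image); the coherent sum peaks where the two rings intersect.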
Definition: Consensus ADMM for Distributed Imaging
The global imaging problem is decomposed using consensus ADMM. Introduce a global variable $z$ that all nodes agree on:

$$\min_{x_1,\dots,x_N,\,z} \; \sum_{i=1}^{N} \|y_i - A_i x_i\|_2^2 + \lambda R(z) \quad \text{s.t.} \quad x_i = z, \; i = 1,\dots,N$$

where $x_i$ is node $i$'s local copy of the image and $z$ is the global consensus image.
ADMM updates:
- Local (parallel): $x_i^{k+1} = \arg\min_{x_i} \|y_i - A_i x_i\|_2^2 + \tfrac{\rho}{2}\|x_i - z^k + u_i^k\|_2^2$
- Global (averaging + prox): $z^{k+1} = \mathrm{prox}_{\lambda R/(N\rho)}\!\big(\tfrac{1}{N}\sum_{i=1}^{N}(x_i^{k+1} + u_i^k)\big)$
- Dual: $u_i^{k+1} = u_i^k + x_i^{k+1} - z^{k+1}$
Each local update requires only the node's own measurements. The global update requires averaging local estimates: only image vectors are exchanged, not raw measurements. This scales to hundreds of nodes.
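The three updates are short enough to sketch end to end. This is a minimal sketch with random sensing matrices and no regulariser (so the prox is the identity and the global step is plain averaging); all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, N = 8, 10, 4                   # pixels, measurements per node, nodes
A = [rng.standard_normal((m, n)) for _ in range(N)]
x_true = rng.standard_normal(n)
y = [Ai @ x_true for Ai in A]        # noiseless measurements for clarity

# Centralised least-squares solution: the convergence target
x_star = np.linalg.solve(sum(Ai.T @ Ai for Ai in A),
                         sum(Ai.T @ yi for Ai, yi in zip(A, y)))

rho = 1.0
z = np.zeros(n)
u = [np.zeros(n) for _ in range(N)]
for _ in range(2000):
    # Local update (runs in parallel at each node): regularised least squares
    x = [np.linalg.solve(Ai.T @ Ai + rho * np.eye(n),
                         Ai.T @ yi + rho * (z - ui))
         for Ai, yi, ui in zip(A, y, u)]
    # Global update: plain averaging (prox of R = 0 is the identity)
    z = np.mean([xi + ui for xi, ui in zip(x, u)], axis=0)
    # Dual update
    u = [ui + xi - z for ui, xi in zip(u, x)]
```

Note that only the image-sized vectors $x_i + u_i$ cross the network each iteration; the measurements $y_i$ never leave their node.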
Theorem: Convergence of Distributed Imaging
For a connected graph with spectral gap $1 - \lambda_2$ (where $\lambda_2$ is the second-largest eigenvalue modulus of the doubly stochastic averaging matrix $W$) and consensus ADMM with penalty $\rho > 0$, the distributed image estimate $z^k$ converges to the centralised solution at rate:

$$\|z^k - x^\star\| \le C \, \lambda_2^{\,k}$$

where $x^\star$ is the centralised solution and $C$ depends on $\rho$ and the sensing matrices.
Each consensus round averages neighbouring estimates, gradually propagating information across the network. Better-connected graphs (larger spectral gap) propagate faster. A fully connected graph ($\lambda_2 = 0$) achieves instant consensus; a ring graph ($\lambda_2 = 1 - O(1/N^2)$) converges slowly.
Consensus iteration
The averaging step with doubly stochastic weights $W$ contracts the disagreement: $\|Wx - \bar{x}\mathbf{1}\| \le \lambda_2 \|x - \bar{x}\mathbf{1}\|$, where $\bar{x}$ is the network average.
ADMM convergence
The ADMM penalty term drives each local estimate toward the global consensus. Combined with the contraction from averaging, the overall error shrinks by a factor of $\lambda_2$ per iteration.
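The contraction factor $\lambda_2$ can be computed directly for small graphs. A sketch using Metropolis weights (a standard doubly stochastic construction; the graph size is illustrative):

```python
import numpy as np

def metropolis_weights(adj):
    """Doubly stochastic averaging matrix W from an adjacency matrix."""
    deg = adj.sum(axis=1)
    N = len(adj)
    W = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            if adj[i, j]:
                W[i, j] = 1.0 / (1 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()
    return W

def contraction_factor(W):
    """lambda_2: second-largest eigenvalue modulus = per-round decay."""
    ev = np.sort(np.abs(np.linalg.eigvals(W)))[::-1]
    return ev[1]

N = 12
ring = np.zeros((N, N), dtype=int)
for i in range(N):
    ring[i, (i + 1) % N] = ring[i, (i - 1) % N] = 1
complete = np.ones((N, N), dtype=int) - np.eye(N, dtype=int)

lam_ring = contraction_factor(metropolis_weights(ring))      # ~0.91: slow
lam_complete = contraction_factor(metropolis_weights(complete))  # 0: one shot
```

For the complete graph the Metropolis matrix is exactly the averaging matrix, so disagreement vanishes in a single round; the ring barely shrinks it per round.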
Distributed ADMM Convergence
Watch how consensus ADMM iteratively combines local images from distributed nodes. The plot shows the reconstruction NMSE vs. ADMM iteration for different network topologies.
Observe that the fully connected network converges in only a handful of iterations, while the ring network requires many more. The centralised solution (dashed line) is the convergence target.
Consensus ADMM for Networked Imaging
Complexity: $O(n^3)$ per node, where $n$ is the image dimension. Step 4 is the bottleneck: solving a linear system of size $n \times n$. When the sensing matrix has Kronecker structure ($A_i = B_i \otimes C_i$), this reduces to two much smaller solves (Chapter 7).
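The Kronecker shortcut can be verified on a toy system. This sketches the pure Kronecker solve $(B \otimes C)x = b$; handling the $+\rho I$ shift in the ADMM local step additionally requires eigendecompositions of the factors. Sizes and matrices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
p, q = 5, 7                                       # image is p x q, n = p*q
B = rng.standard_normal((p, p)) + 3 * np.eye(p)   # well-conditioned factors
C = rng.standard_normal((q, q)) + 3 * np.eye(q)
b = rng.standard_normal(p * q)

# Direct solve: one O((pq)^3) system
x_direct = np.linalg.solve(np.kron(B, C), b)

# Kronecker solve: with row-major vec, kron(B, C) @ vec(X) = vec(B @ X @ C.T),
# so solving kron(B, C) x = b means solving B @ X @ C.T = reshape(b)
# via two small solves, O(p^3) and O(q^3).
Y = np.linalg.solve(B, b.reshape(p, q))   # left solve:  Y = B^{-1} M
X = np.linalg.solve(C, Y.T).T             # right solve: X = Y C^{-T}
x_kron = X.reshape(-1)
```

The two results agree to machine precision while the Kronecker path never forms the $pq \times pq$ matrix.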
Example: Resolution Gain from Networked Imaging
A single 5G base station at 28 GHz with a 64-element ULA has angular resolution $\approx 1.8°$. Three additional base stations are placed at $90°$, $180°$, and $270°$ around the target area. Compute the effective resolution and conditioning improvement.
Single-node resolution
$\theta \approx \lambda/D = 2/N = 0.031$ rad $\approx 1.8°$. Cross-range resolution at 20 m: $0.031 \times 20 \approx 0.63$ m. A single node resolves targets along one direction only.
Four-node resolution
Four nodes at $0°$, $90°$, $180°$, $270°$ provide angular coverage from all directions. The combined system achieves $\approx 0.63$ m resolution from each direction, giving isotropic resolution of roughly $0.6$ m.
Conditioning improvement
A single node covers only a narrow k-space wedge, so the sensing operator is poorly conditioned. Four nodes with diverse viewing geometries fill k-space far more uniformly, reducing the condition number substantially.
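The single-node arithmetic can be checked directly (half-wavelength element spacing assumed):

```python
c = 3e8
f = 28e9
lam = c / f                   # wavelength ~ 10.7 mm
N = 64
D = N * lam / 2               # half-wavelength ULA aperture ~ 0.34 m
theta = lam / D               # angular resolution = 2/N ~ 0.031 rad (~1.8 deg)
cross_range = theta * 20.0    # cross-range cell at 20 m ~ 0.63 m
```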
Data Fusion Levels for Networked Imaging
| Fusion Level | What is Shared | Bandwidth per Node | Image Quality |
|---|---|---|---|
| Raw data | Measurements $y_i$ | High | Optimal |
| Image-level | Local images $x_i$ | Moderate | Near-optimal (small dB loss) |
| Feature-level | Detected targets | Low | Good for detection, poor for reconstruction |
| Decision-level | Binary decisions | Minimal | Lowest quality |
Backhaul Budget for Networked Imaging
For a 4-node network at 28 GHz, each with 64 antennas and 400 MHz bandwidth, the backhaul requirements are:
- Raw fusion: forwarding every measurement sample at a 100 Hz snapshot rate requires multi-Gbps rates per node. Exceeds typical 5G backhaul.
- Image fusion (consensus ADMM): one complex image per ADMM iteration, 20 iterations per snapshot; at 100 Hz this stays in the low-Gbps range. Feasible with 5G backhaul.
- Feature fusion: detected targets at roughly 40 bytes each, of order a kilobyte per snapshot. Negligible bandwidth.
The practical choice is image-level fusion for imaging applications and feature-level fusion for detection-only applications.
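These budgets can be sanity-checked with a back-of-envelope script. The dwell time, sample rate, bit depths, image grid, and target count below are assumptions for illustration, not figures from the text:

```python
snapshot_hz = 100
bits_iq = 2 * 16                          # assumed 16-bit I + 16-bit Q

# Raw fusion: 64 antennas, assumed ~1 ms dwell sampled at 400 Msps
raw_bits = 64 * 400_000 * bits_iq         # bits per node per snapshot
raw_gbps = raw_bits * snapshot_hz / 1e9   # far beyond 5G backhaul

# Image fusion: assumed 128 x 128 complex image, 20 ADMM iterations
img_bits = 128 * 128 * bits_iq * 20
img_gbps = img_bits * snapshot_hz / 1e9   # low-Gbps range: feasible

# Feature fusion: assumed 25 targets at 40 bytes each
feat_kbps = 25 * 40 * 8 * snapshot_hz / 1e3   # sub-Mbps: negligible
```

Even with generous assumptions, the three fusion levels are separated by orders of magnitude, which is what makes image-level fusion the practical sweet spot.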
- 5G Xn interface: 10 Gbps theoretical, 2-5 Gbps practical
- V2X (PC5): 10-50 Mbps; feature-level fusion only
- Latency: ADMM iterations add 10-50 ms per convergence cycle
Common Mistake: Stale Information in Distributed Imaging
Mistake:
Running consensus ADMM with asynchronous updates where some nodes have significantly outdated estimates (e.g., due to communication delays or node failures).
Correction:
Stale estimates cause oscillation or divergence. Mitigation:
- Bounded delay: accept only estimates at most $\tau$ iterations old.
- Weighted averaging: weight each neighbour's estimate by inverse delay, e.g. $w_j \propto 1/(1+\tau_j)$.
- Redundancy: if a node fails, neighbours continue with reduced spectral gap.
- Gossip protocol: random pairwise averaging per iteration, robust to node failures.
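The gossip mitigation is easy to simulate. A minimal sketch (network size and round count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 10
vals = rng.standard_normal(N)     # each node's local scalar estimate
target = vals.mean()              # consensus value to be reached

# Gossip: each round one random pair exchanges and averages its values.
# Pairwise averaging preserves the network mean, so the protocol reaches
# the true average without any global coordination or fixed schedule.
for _ in range(2000):
    i, j = rng.choice(N, size=2, replace=False)
    vals[i] = vals[j] = 0.5 * (vals[i] + vals[j])
```

Unlike synchronous averaging over a fixed graph, gossip keeps making progress when individual nodes miss rounds or drop out temporarily.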
Quick Check
Adding a second sensing node at $90°$ from the first primarily improves:
Cross-range resolution (fills orthogonal k-space region)
Range resolution (doubles the bandwidth)
Signal-to-noise ratio (doubles the received energy)
Each node covers a k-space wedge aligned with its viewing direction. A $90°$ offset fills the orthogonal wedge, improving resolution in the perpendicular direction.
k-Space Tessellation
The partitioning of the spatial frequency (k-space) plane into regions covered by different sensing nodes in a distributed imaging network. Each Tx-Rx-frequency combination contributes one k-space sample; the union of all samples determines the achievable imaging resolution.
Related: {{Ref:Def Kspace Tessellation}}
Consensus ADMM
A distributed optimisation algorithm that decomposes a global problem into parallel local sub-problems with consensus constraints. Each node solves its local problem, then nodes average their solutions to reach agreement. Convergence rate depends on the graph spectral gap.
Related: {{Ref:Def Consensus Admm}}
Historical Note: From Radar Networks to 6G Imaging
Distributed radar networks date back to Cold War over-the-horizon radar systems in the 1960s. The 2000s saw renewed interest with multi-static radar for air defence (e.g., the British Celldar system using cellular base station emissions as illumination sources). The 2010s brought cooperative perception for autonomous vehicles (V2V, V2X).
The 2020s are seeing the convergence of these strands with cellular infrastructure: 5G/6G base stations become sensing nodes, and the communication network backbone serves as the backhaul for distributed imaging. Manzoni et al. (2025) formalised this vision as "wavefield networked sensing," connecting it to the diffraction tomography framework of this book.
Key Takeaway
Networked sensing transforms RF imaging from a single-platform capability to a network-level service. Each node contributes a k-space slice; combining them via consensus ADMM achieves near-optimal image quality with manageable backhaul. The convergence rate depends on the graph spectral gap, and the optimal fusion level depends on the available backhaul bandwidth.