Wavefield Networked Sensing

Imaging as a Network Service

Previous chapters considered imaging from a single platform (SAR, Chapter 12) or a collocated array (MIMO radar, Chapter 11). The 5G/6G wireless infrastructure creates a fundamentally new opportunity: networked imaging, where multiple spatially distributed nodes (base stations, vehicles, drones) cooperatively image the environment by sharing their measurements or partial reconstructions.

The key insight is that each node sees only a slice of k-space: a partial view of the scene's spatial frequency content. Combining these slices fills k-space more completely, improving both resolution and conditioning. This is the k-space tessellation concept from Chapter 6.5, now realised by a distributed network.

Definition: Networked Sensing Architecture

A networked sensing system consists of $N_{\mathrm{nodes}}$ nodes connected by a communication graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$:

  • Nodes $\mathcal{V} = \{1, \ldots, N_{\mathrm{nodes}}\}$: each node $g$ has a local sensing matrix $\mathbf{A}_{g}$ and collects measurements $\mathbf{y}_{g} = \mathbf{A}_{g} \boldsymbol{\sigma} + \mathbf{w}_{g}$.

  • Edges $\mathcal{E}$: communication links between nodes. The link capacity constrains how much data can be shared.

The global imaging problem is:

$$\hat{\boldsymbol{\sigma}} = \arg\min_{\boldsymbol{\sigma}} \sum_{g=1}^{N_{\mathrm{nodes}}} \frac{1}{2}\|\mathbf{y}_{g} - \mathbf{A}_{g}\boldsymbol{\sigma}\|_2^2 + \lambda \, R(\boldsymbol{\sigma}).$$

Centralised processing (sending all $\mathbf{y}_{g}$ to a fusion centre) achieves optimal performance but requires enormous backhaul. Distributed algorithms that operate locally with limited communication are essential for practical deployment.
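To see why sharing raw measurements is not strictly necessary, consider the unregularised case ($R = 0$). The centralised normal equations decompose into per-node sufficient statistics:

$$\Bigl(\sum_{g=1}^{N_{\mathrm{nodes}}} \mathbf{A}_{g}^{H}\mathbf{A}_{g}\Bigr)\hat{\boldsymbol{\sigma}} = \sum_{g=1}^{N_{\mathrm{nodes}}} \mathbf{A}_{g}^{H}\mathbf{y}_{g},$$

so each node could share only its Gram matrix $\mathbf{A}_{g}^{H}\mathbf{A}_{g}$ and matched-filter output $\mathbf{A}_{g}^{H}\mathbf{y}_{g}$ rather than its raw measurements. The consensus ADMM scheme introduced below goes further and exchanges only image-sized vectors per iteration.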


Definition: k-Space Tessellation by the Network

Each Tx-Rx pair $(i, j)$ at frequency $f_k$ contributes a single point in k-space at the combined wavenumber $\kappa_{\mathbf{s},\mathbf{r}} = \kappa_{\mathbf{s}_i} + \kappa_{\mathbf{r}_j}$, where $\mathbf{s}_i$ and $\mathbf{r}_j$ denote the transmitter and receiver positions.

A distributed network of $N_T$ transmitters and $N_R$ receivers, each probing $N_f$ frequencies, produces $N_T \times N_R \times N_f$ k-space samples. The k-space tessellation is the set:

$$\mathcal{K} = \{\kappa_{\mathbf{s},\mathbf{r}}^{(i,j,k)} : i \in [N_T], \, j \in [N_R], \, k \in [N_f]\}.$$

The imaging resolution is determined by the extent of $\mathcal{K}$ (larger extent $\to$ finer resolution), and the conditioning by the uniformity of the coverage (uniform coverage $\to$ lower condition number $\to$ more stable reconstruction).

A single monostatic node provides k-space coverage along a narrow cone. Adding nodes at different angles fills in the gaps. This is the distributed analogue of MIMO radar virtual aperture extension (Chapter 11).
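A minimal numerical sketch of this effect, under assumptions of my own (point-like monostatic nodes at 28 GHz with 400 MHz of bandwidth, placed at up to four bearings around the scene), builds the sample set $\mathcal{K}$ and reports its extent along each axis:

```python
import numpy as np

# Illustrative sketch (assumptions mine: point-like monostatic nodes, 28 GHz
# centre frequency, 400 MHz bandwidth): build the k-space sample set K and
# report its extent along kx and ky, which bounds the achievable resolution
# (resolution ~ 2*pi / extent along that axis).

c = 3e8                                        # speed of light [m/s]
freqs = np.linspace(27.8e9, 28.2e9, 32)        # assumed 400 MHz band
node_angles = np.deg2rad([0, 90, 180, 270])    # node bearings around the scene

def kspace_samples(angles, freqs):
    """Monostatic node: tx and rx colocated, so kappa = 2 * (2*pi*f/c) * u(angle)."""
    samples = []
    for th in angles:
        u = np.array([np.cos(th), np.sin(th)])  # look direction of this node
        for f in freqs:
            samples.append(2.0 * (2.0 * np.pi * f / c) * u)
    return np.array(samples)

for n in (1, 2, 4):
    K = kspace_samples(node_angles[:n], freqs)
    extent = K.max(axis=0) - K.min(axis=0)       # k-space coverage along (kx, ky)
    print(f"{n} node(s): extent (kx, ky) = {np.round(extent, 1)} rad/m")
```

In this simplified model a single node's coverage collapses onto a line (its cross-range resolution then comes only from its array aperture, which is not modelled here); each additional bearing opens up coverage, and hence resolution, in a new direction.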


k-Space Coverage by Distributed Network

Visualise how the k-space coverage changes as nodes are added to the network. Each node contributes a cluster of k-space samples determined by its position and bandwidth.

Observe that a single node covers a narrow angular wedge; adding nodes at different positions fills in the spatial frequency plane, improving resolution in all directions.

CommIT Contribution (2025)

Wavefield Networked Sensing

M. Manzoni, S. Tebaldini, G. Caire, IEEE Open Journal of the Communications Society, vol. 6, pp. 181-197, 2025

Manzoni, Tebaldini, and Caire introduced wavefield networked sensing: a framework where multiple distributed access points (APs) cooperatively image a scene by combining their k-space contributions.

The key contributions are:

  1. Per-AP diffraction tomography: Each AP performs local imaging using the diffraction tomography framework of Chapter 15, producing a partial image from its own k-space slice.

  2. k-space tessellation: The network geometry determines which spatial frequencies each AP can measure. The paper characterises the k-space coverage as a function of network topology and shows that distributed nodes provide more uniform coverage than collocated arrays.

  3. Back-Projection Algorithm in Time (BPAT): An efficient distributed reconstruction algorithm where each AP computes a local backprojection, and the results are coherently combined. This avoids centralised processing while achieving near-optimal resolution.

The framework establishes RF imaging as a network-level service rather than a single-node capability, a paradigm shift for 6G sensing infrastructure.
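The following is an illustrative sketch of the local-backprojection-plus-coherent-combination idea in item 3 above, not the paper's BPAT implementation; the geometry, band, and single point scatterer are assumptions chosen only to make the example self-contained. Each access point forms a partial image from its own echoes, and fusion is a complex sum over access points.

```python
import numpy as np

# Illustrative sketch of per-AP backprojection followed by coherent combination.
# Geometry, band, and the point-scatterer scene are assumptions for this example.

c = 3e8
freqs = np.linspace(27.8e9, 28.2e9, 64)                    # assumed 400 MHz band
ap_positions = np.array([[-20., 0.], [0., -20.], [20., 0.], [0., 20.]])  # APs [m]

xs = ys = np.linspace(-2, 2, 81)                            # shared imaging grid [m]
X, Y = np.meshgrid(xs, ys)

def local_backprojection(ap, y_f):
    """Per-AP partial image: match the echoes y(f) to each pixel's round-trip phase."""
    R = np.hypot(X - ap[0], Y - ap[1])                      # AP-to-pixel distance
    phase = np.exp(1j * 4 * np.pi * freqs[:, None, None] * R[None] / c)
    return np.sum(y_f[:, None, None] * phase, axis=0)       # coherent sum over frequency

target = np.array([0.3, -0.2])                               # one point scatterer
partial_images = []
for ap in ap_positions:
    R_t = np.hypot(*(target - ap))
    y_f = np.exp(-1j * 4 * np.pi * freqs * R_t / c)          # ideal monostatic echo
    partial_images.append(local_backprojection(ap, y_f))

fused = np.sum(partial_images, axis=0)                       # coherent network-level fusion
row, col = np.unravel_index(np.argmax(np.abs(fused)), fused.shape)
print("fused image peaks near (x, y) =", (round(xs[col], 2), round(ys[row], 2)))
```

Each partial image on its own is a set of range arcs (poor cross-range localisation from a point-like AP); it is the coherent sum across APs that localises the scatterer, the network-level analogue of the fusion described above.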

Keywords: networked sensing, k-space tessellation, distributed imaging, 6G

Definition: Consensus ADMM for Distributed Imaging

The global imaging problem is decomposed using consensus ADMM. Introduce a global variable $\boldsymbol{\sigma}$ that all nodes agree on:

$$\min_{\{\boldsymbol{\sigma}_g\}, \boldsymbol{\sigma}} \; \sum_{g=1}^{N_{\mathrm{nodes}}} f_g(\boldsymbol{\sigma}_g) + \lambda \, R(\boldsymbol{\sigma}) \quad \text{s.t.} \quad \boldsymbol{\sigma}_g = \boldsymbol{\sigma} \;\; \forall g$$

where $f_g(\boldsymbol{\sigma}_g) = \frac{1}{2}\|\mathbf{y}_{g} - \mathbf{A}_{g}\boldsymbol{\sigma}_g\|^2$.

ADMM updates:

  1. Local (parallel): $\boldsymbol{\sigma}_g^{(t+1)} = \arg\min_{\boldsymbol{\sigma}_g} f_g(\boldsymbol{\sigma}_g) + \frac{\rho}{2}\|\boldsymbol{\sigma}_g - \boldsymbol{\sigma}^{(t)} + \mathbf{u}_g^{(t)}\|^2$

  2. Global (averaging + prox): $\boldsymbol{\sigma}^{(t+1)} = \operatorname{prox}_{\lambda R/(\rho N_{\mathrm{nodes}})}\!\left(\bar{\boldsymbol{\sigma}}^{(t+1)} + \bar{\mathbf{u}}^{(t)}\right)$

  3. Dual: $\mathbf{u}_g^{(t+1)} = \mathbf{u}_g^{(t)} + \boldsymbol{\sigma}_g^{(t+1)} - \boldsymbol{\sigma}^{(t+1)}$

Each local update requires only the node's own measurements. The global update requires averaging local estimates: only image vectors are exchanged, not raw measurements. This scales to hundreds of nodes.

Theorem: Convergence of Distributed Imaging

For a connected graph $\mathcal{G}$ with spectral gap $\gamma$ and consensus ADMM with penalty $\rho$, the distributed image estimate converges to the centralised solution at rate:

$$\|\boldsymbol{\sigma}_g^{(t)} - \boldsymbol{\sigma}^*\|_2 \leq C \cdot (1 - \gamma)^t \cdot \|\boldsymbol{\sigma}_g^{(0)} - \boldsymbol{\sigma}^*\|_2$$

where $\boldsymbol{\sigma}^*$ is the centralised solution and $C$ depends on $\rho$ and the sensing matrices.

Each consensus round averages neighbouring estimates, gradually propagating information across the network. Better-connected graphs (larger spectral gap) propagate faster. A fully connected graph ($\gamma = 1$) achieves instant consensus; a ring graph ($\gamma \propto 1/N_{\mathrm{nodes}}^2$) converges slowly.
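A quick way to see the topology dependence numerically is to compute the consensus spectral gap $\gamma = 1 - |\lambda_2(\mathbf{W})|$ of an averaging matrix $\mathbf{W}$ for each graph. The sketch below is illustrative (Metropolis-style weights and an eight-node network are my assumptions), not tied to any particular deployment:

```python
import numpy as np

# Illustrative sketch: compare the consensus spectral gap gamma = 1 - |lambda_2(W)|
# of a fully connected vs. a ring topology, using Metropolis-style averaging weights.

def averaging_matrix(adj):
    """Symmetric, doubly stochastic averaging matrix with Metropolis weights."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                W[i, j] = 1.0 / (max(deg[i], deg[j]) + 1.0)
        W[i, i] = 1.0 - W[i].sum()               # self-weight keeps rows summing to 1
    return W

def spectral_gap(W):
    eig = np.sort(np.abs(np.linalg.eigvalsh(W)))[::-1]
    return 1.0 - eig[1]                           # 1 minus second-largest |eigenvalue|

N = 8
full = np.ones((N, N)) - np.eye(N)                # fully connected graph
ring = np.zeros((N, N))
for i in range(N):
    ring[i, (i + 1) % N] = ring[(i + 1) % N, i] = 1.0

print("gamma, fully connected:", round(spectral_gap(averaging_matrix(full)), 3))
print("gamma, ring:           ", round(spectral_gap(averaging_matrix(ring)), 3))
```

Under these weights the fully connected graph attains $\gamma = 1$, matching the instant-consensus case above, while the ring's gap shrinks roughly as $1/N_{\mathrm{nodes}}^2$.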


Distributed ADMM Convergence

Watch how consensus ADMM iteratively combines local images from distributed nodes. The plot shows the reconstruction NMSE vs. ADMM iteration for different network topologies.

Observe that the fully connected network converges in $\sim 5$ iterations, while a ring network requires $\sim 30$. The centralised solution (dashed line) is the convergence target.


Consensus ADMM for Networked Imaging

Complexity: $O(T \cdot N \cdot Q^2)$ per node, where $Q$ is the image dimension
Input: Local measurements $\{\mathbf{y}_{g}\}_{g=1}^{N}$, sensing matrices $\{\mathbf{A}_{g}\}$, penalty $\rho$, regulariser $R(\cdot)$
Output: Reconstructed image $\hat{\boldsymbol{\sigma}}$
1. Initialise: $\boldsymbol{\sigma}_g^{(0)} = \mathbf{A}_{g}^{H} \mathbf{y}_{g}$, $\mathbf{u}_g^{(0)} = \mathbf{0}$, $\boldsymbol{\sigma}^{(0)} = \frac{1}{N}\sum_g \boldsymbol{\sigma}_g^{(0)}$
2. for $t = 0, 1, \ldots, T-1$ do
3.   for $g = 1, \ldots, N$ in parallel do
4.     $\boldsymbol{\sigma}_g^{(t+1)} = (\mathbf{A}_{g}^{H}\mathbf{A}_{g} + \rho\mathbf{I})^{-1}(\mathbf{A}_{g}^{H}\mathbf{y}_{g} + \rho(\boldsymbol{\sigma}^{(t)} - \mathbf{u}_g^{(t)}))$
5.   end for
6.   $\boldsymbol{\sigma}^{(t+1)} = \operatorname{prox}_{\lambda R/(\rho N)}\!\left(\frac{1}{N}\sum_g (\boldsymbol{\sigma}_g^{(t+1)} + \mathbf{u}_g^{(t)})\right)$
7.   for $g = 1, \ldots, N$ in parallel do
8.     $\mathbf{u}_g^{(t+1)} = \mathbf{u}_g^{(t)} + \boldsymbol{\sigma}_g^{(t+1)} - \boldsymbol{\sigma}^{(t+1)}$
9.   end for
10. end for
11. return $\boldsymbol{\sigma}^{(T)}$

Step 4 is the bottleneck: solving a linear system of size $Q$. When the sensing matrix has Kronecker structure ($\mathbf{A}_{g} = \mathbf{A}_{g}^{(x)} \otimes \mathbf{A}_{g}^{(y)}$), this reduces to two smaller solves (Chapter 7).
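As a concrete illustration, here is a minimal NumPy sketch of the loop above on a synthetic problem. The problem sizes, the random sensing matrices, and the choice of an $\ell_1$ regulariser (so the prox step is complex soft-thresholding) are assumptions of this example, not prescriptions; step 4 reuses a cached Cholesky factorisation per node.

```python
import numpy as np

rng = np.random.default_rng(0)
N_nodes, M_g, Q = 4, 80, 64              # nodes, measurements per node, image pixels
rho, lam, T = 1.0, 0.05, 50              # ADMM penalty, l1 weight, iterations

# synthetic sparse scene and per-node random sensing operators (illustrative only)
sigma_true = np.zeros(Q, complex)
idx = rng.choice(Q, 5, replace=False)
sigma_true[idx] = rng.standard_normal(5) + 1j * rng.standard_normal(5)
A = [(rng.standard_normal((M_g, Q)) + 1j * rng.standard_normal((M_g, Q))) / np.sqrt(M_g)
     for _ in range(N_nodes)]
y = [Ag @ sigma_true + 0.01 * (rng.standard_normal(M_g) + 1j * rng.standard_normal(M_g))
     for Ag in A]

def prox_l1(v, tau):
    """Complex soft-thresholding: prox of tau * ||.||_1."""
    mag = np.abs(v)
    return np.where(mag > tau, (1.0 - tau / np.maximum(mag, 1e-12)) * v, 0.0)

# cache per-node Cholesky factors of (A^H A + rho I) for step 4
chol = [np.linalg.cholesky(Ag.conj().T @ Ag + rho * np.eye(Q)) for Ag in A]
Ahy = [Ag.conj().T @ yg for Ag, yg in zip(A, y)]

sigma = np.mean(Ahy, axis=0)                          # initialisation (step 1)
sig_loc = [sigma.copy() for _ in range(N_nodes)]
u = [np.zeros(Q, complex) for _ in range(N_nodes)]

for t in range(T):
    for g in range(N_nodes):                          # local updates (steps 3-5)
        b = Ahy[g] + rho * (sigma - u[g])
        sig_loc[g] = np.linalg.solve(chol[g].conj().T, np.linalg.solve(chol[g], b))
    sigma = prox_l1(                                  # global averaging + prox (step 6)
        np.mean([sig_loc[g] + u[g] for g in range(N_nodes)], axis=0),
        lam / (rho * N_nodes))
    for g in range(N_nodes):                          # dual updates (steps 7-9)
        u[g] += sig_loc[g] - sigma

print("relative error:", np.linalg.norm(sigma - sigma_true) / np.linalg.norm(sigma_true))
```

In a real deployment the loop over $g$ would run on the nodes themselves, and only the image-sized vectors would cross the backhaul.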

Example: Resolution Gain from Networked Imaging

A single 5G base station at 28 GHz with a 64-element ULA has angular resolution $\Delta\theta = 1.8^\circ$. Three additional base stations are placed at $90^\circ$, $180^\circ$, and $270^\circ$ around the target area. Compute the effective resolution and conditioning improvement.
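As a starting point (a sketch of the single-node numbers only, not the full worked solution; half-wavelength element spacing is assumed, and the 400 MHz bandwidth is taken from the engineering note below), the aperture and bandwidth give

$$\lambda = \frac{c}{f} \approx 1.07\ \text{cm}, \qquad D = 63\,\frac{\lambda}{2} \approx 0.34\ \text{m}, \qquad \Delta\theta \approx \frac{\lambda}{D} = \frac{2}{63}\ \text{rad} \approx 1.8^\circ, \qquad \Delta r = \frac{c}{2B} \approx 0.38\ \text{m}.$$

With the additional stations at $90^\circ$, $180^\circ$, and $270^\circ$, every scene direction lies near the fine (range or cross-range) axis of at least one node, so the k-space coverage becomes roughly isotropic; the quantitative resolution and condition-number gains follow from the extent and uniformity of $\mathcal{K}$ as in the tessellation definition above.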

Data Fusion Levels for Networked Imaging

| Fusion Level | What is Shared | Bandwidth per Node | Image Quality |
|---|---|---|---|
| Raw data | Measurements $\mathbf{y}_{g}$ | $O(M_g)$ (high) | Optimal |
| Image-level | Local images $\hat{\boldsymbol{\sigma}}_g$ | $O(Q)$ (moderate) | Near-optimal ($< 1$ dB loss) |
| Feature-level | Detected targets | $O(K_g)$ (low) | Good for detection, poor for reconstruction |
| Decision-level | Binary decisions | $O(1)$ (minimal) | Lowest quality |
Engineering Note

Backhaul Budget for Networked Imaging

For a 4-node network at 28 GHz, each with 64 antennas and 400 MHz bandwidth, the backhaul requirements are:

  • Raw fusion: $4 \times 64 \times 3300 \times 16$ bits $\approx 135$ Mbit per snapshot; at 100 Hz this is $13.5$ Gbps, which exceeds typical 5G backhaul.

  • Image fusion (consensus ADMM): $200 \times 200$ complex image $\approx 640$ KB per iteration, 20 iterations per snapshot $\approx 12.8$ MB per snapshot. At 100 Hz: $\approx 1.3$ Gbps. Feasible with 5G backhaul.

  • Feature fusion: $\sim 50$ detected targets $\times$ 40 bytes $= 2$ KB per snapshot. Negligible bandwidth.

The practical choice is image-level fusion for imaging applications and feature-level fusion for detection-only applications.

Practical Constraints
  • 5G Xn interface: 10 Gbps theoretical, 2-5 Gbps practical

  • V2X (PC5): 10-50 Mbps, feature-level fusion only

  • Latency: ADMM iterations add 10-50 ms per convergence cycle

Common Mistake: Stale Information in Distributed Imaging

Mistake:

Running consensus ADMM with asynchronous updates where some nodes have significantly outdated estimates (e.g., due to communication delays or node failures).

Correction:

Stale estimates cause oscillation or divergence. Mitigation:

  1. Bounded delay: accept only estimates $\leq D$ iterations old.
  2. Weighted averaging: weight by inverse delay, $w_{gj} \propto 1/(1 + d_{gj})$ (see the sketch after this list).
  3. Redundancy: if a node fails, neighbours continue with reduced spectral gap.
  4. Gossip protocol: random pairwise averaging per iteration, robust to node failures.
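A minimal sketch of mitigations 1 and 2, with the function name and toy numbers entirely my own, might look like this: a node fuses only sufficiently fresh neighbour estimates and down-weights them by their delay before averaging.

```python
import numpy as np

# Illustrative sketch of the bounded-delay and inverse-delay-weighting mitigations:
# a node fuses only sufficiently fresh neighbour estimates, down-weighted by delay.

def robust_consensus_average(own, neighbours, max_delay=5):
    """own: this node's current image estimate.
    neighbours: list of (estimate, delay_in_iterations) tuples received from peers."""
    estimates = [own]
    weights = [1.0]                            # own estimate has zero delay
    for est, delay in neighbours:
        if delay <= max_delay:                 # bounded-delay rule: discard stale data
            estimates.append(est)
            weights.append(1.0 / (1.0 + delay))
    weights = np.asarray(weights) / np.sum(weights)
    return sum(w * e for w, e in zip(weights, estimates))

# toy usage: three neighbours, the last one too stale to be trusted
Q = 4
own = np.ones(Q)
received = [(np.full(Q, 2.0), 1), (np.full(Q, 3.0), 2), (np.full(Q, 9.0), 12)]
print(robust_consensus_average(own, received, max_delay=5))
```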

Quick Check

Adding a second sensing node at $90^\circ$ from the first primarily improves:

  • Cross-range resolution (fills orthogonal k-space region)

  • Range resolution (doubles the bandwidth)

  • Signal-to-noise ratio (doubles the received energy)

k-Space Tessellation

The partitioning of the spatial frequency (k-space) plane into regions covered by different sensing nodes in a distributed imaging network. Each Tx-Rx-frequency combination contributes one k-space sample; the union of all samples determines the achievable imaging resolution.


Consensus ADMM

A distributed optimisation algorithm that decomposes a global problem into parallel local sub-problems with consensus constraints. Each node solves its local problem, then nodes average their solutions to reach agreement. Convergence rate depends on the graph spectral gap.


Historical Note: From Radar Networks to 6G Imaging

Distributed radar networks date back to Cold War over-the-horizon radar systems in the 1960s. The 2000s saw renewed interest with multi-static radar for air defence (e.g., the British Celldar system using cellular base station emissions as illumination sources). The 2010s brought cooperative perception for autonomous vehicles (V2V, V2X).

The 2020s are seeing the convergence of these strands with cellular infrastructure: 5G/6G base stations become sensing nodes, and the communication network backbone serves as the backhaul for distributed imaging. Manzoni et al. (2025) formalised this vision as "wavefield networked sensing," connecting it to the diffraction tomography framework of this book.


Key Takeaway

Networked sensing transforms RF imaging from a single-platform capability to a network-level service. Each node contributes a k-space slice; combining them via consensus ADMM achieves near-optimal image quality with manageable backhaul. The convergence rate depends on the graph spectral gap, and the optimal fusion level depends on the available backhaul bandwidth.