Position Estimation Algorithms

From Measurements to Position Estimates

Section 30.1 established that each measurement type --- TOA, TDOA, AOA, RSSI --- defines a geometric locus (circle, hyperbola, bearing line) in the plane. In the absence of noise, the UE position is simply the intersection of these loci. In practice, noise corrupts every measurement, the loci do not intersect at a single point, and we must formulate and solve an estimation problem.

This section develops the complete estimation pipeline: (1) linearised least-squares for a fast closed-form initialisation; (2) Gauss-Newton iteration for refined nonlinear least-squares; (3) maximum likelihood for statistically optimal estimation; (4) the Cramer-Rao bound and position error bound (PEB) to quantify the fundamental accuracy limits. The section concludes with NLOS detection and mitigation, which is the dominant practical challenge in real-world positioning.

Definition: Linearised Least-Squares Positioning (TOA)

Given $N$ TOA range measurements $\hat{d}_i$ from BSs at known positions $\mathbf{p}_i$, the range equations are:

$$\hat{d}_i^2 = (x - x_i)^2 + (y - y_i)^2, \qquad i = 1, \ldots, N$$

Expanding and subtracting the equation for a reference BS (say BS 1) eliminates the quadratic terms $x^2 + y^2$:

$$\hat{d}_i^2 - \hat{d}_1^2 = -2(x_i - x_1)x - 2(y_i - y_1)y + (x_i^2 + y_i^2) - (x_1^2 + y_1^2)$$

This yields the linear system $\mathbf{A}\mathbf{p} = \mathbf{b}$:

$$\mathbf{A} = -2\begin{bmatrix} x_2 - x_1 & y_2 - y_1 \\ x_3 - x_1 & y_3 - y_1 \\ \vdots & \vdots \\ x_N - x_1 & y_N - y_1 \end{bmatrix}, \qquad \mathbf{b} = \begin{bmatrix} \hat{d}_2^2 - \hat{d}_1^2 - (x_2^2 + y_2^2 - x_1^2 - y_1^2) \\ \hat{d}_3^2 - \hat{d}_1^2 - (x_3^2 + y_3^2 - x_1^2 - y_1^2) \\ \vdots \\ \hat{d}_N^2 - \hat{d}_1^2 - (x_N^2 + y_N^2 - x_1^2 - y_1^2) \end{bmatrix}$$

The ordinary least-squares estimate is:

$$\hat{\mathbf{p}}_{\mathrm{LS}} = (\mathbf{A}^T \mathbf{A})^{-1} \mathbf{A}^T \mathbf{b}$$

For TDOA, a similar linearisation applies: substituting $\hat{d}_i = \hat{d}_{i1} + d_1$ with $\hat{d}_{i1} = \hat{d}_i - \hat{d}_1$ and squaring yields equations that are linear in the augmented unknown $[x, y, d_1]^T$, so the reference range $d_1$ is estimated jointly with the position.

The linearised LS solution is not statistically efficient (it does not achieve the CRB) because squaring the range measurements distorts the noise distribution and the reference-BS subtraction introduces correlated errors. However, it is an excellent initialiser for iterative methods.
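As a concrete sketch of the construction above (function and variable names are illustrative, not from the text), the linear system $\mathbf{A}\mathbf{p} = \mathbf{b}$ can be assembled and solved in a few lines of NumPy:

```python
import numpy as np

def linearised_ls_position(bs, d_hat):
    """Closed-form TOA fix from the linearised system A p = b.

    bs    : (N, 2) array of known BS positions [x_i, y_i]
    d_hat : (N,) array of measured ranges (metres); row 0 is the reference BS
    """
    x, y = bs[:, 0], bs[:, 1]
    # A = -2 [x_i - x_1, y_i - y_1] for i = 2..N (row 0 is the reference)
    A = -2.0 * np.column_stack([x[1:] - x[0], y[1:] - y[0]])
    k = x**2 + y**2                                  # squared BS norms
    b = d_hat[1:]**2 - d_hat[0]**2 - (k[1:] - k[0])
    # lstsq is numerically preferable to forming (A^T A)^{-1} explicitly
    p_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p_hat
```

With noise-free ranges this recovers the position exactly; with noisy ranges it serves as the initialiser for iterative refinement.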

Gauss-Newton Algorithm for Position Estimation

Input: BS positions $\{\mathbf{p}_i\}_{i=1}^N$; measurements $\hat{\mathbf{d}} = [\hat{d}_1, \ldots, \hat{d}_N]^T$; noise covariance $\mathbf{C}$; initial estimate $\hat{\mathbf{p}}^{(0)}$ (e.g., from linearised LS); tolerance $\epsilon$; max iterations $K$
Output: Refined position estimate $\hat{\mathbf{p}}^*$
1. For $k = 0, 1, \ldots, K-1$:
   a. Compute predicted ranges: $d_i^{(k)} = \|\hat{\mathbf{p}}^{(k)} - \mathbf{p}_i\|$, $i = 1, \ldots, N$
   b. Compute residual vector: $\mathbf{r}^{(k)} = \hat{\mathbf{d}} - \mathbf{d}^{(k)}$
   c. Compute Jacobian matrix $\mathbf{H}^{(k)} \in \mathbb{R}^{N \times 2}$ with entries $H_{i1}^{(k)} = (x^{(k)} - x_i)/d_i^{(k)}$, $H_{i2}^{(k)} = (y^{(k)} - y_i)/d_i^{(k)}$ (each row is the unit vector from BS $i$ towards the current estimate)
   d. Compute the weighted Gauss-Newton update: $\Delta\mathbf{p}^{(k)} = \bigl(\mathbf{H}^{(k)T} \mathbf{C}^{-1} \mathbf{H}^{(k)}\bigr)^{-1} \mathbf{H}^{(k)T} \mathbf{C}^{-1} \mathbf{r}^{(k)}$
   e. Update: $\hat{\mathbf{p}}^{(k+1)} = \hat{\mathbf{p}}^{(k)} + \Delta\mathbf{p}^{(k)}$
   f. If $\|\Delta\mathbf{p}^{(k)}\| < \epsilon$: return $\hat{\mathbf{p}}^{(k+1)}$
2. Return $\hat{\mathbf{p}}^{(K)}$ (maximum iterations reached)
Complexity: Each iteration costs $O(N)$ for the Jacobian and $O(1)$ for the $2 \times 2$ matrix inversion.
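The boxed algorithm translates almost line-for-line into NumPy. The sketch below (names are illustrative) implements steps a--f with the weighted update:

```python
import numpy as np

def gauss_newton_position(bs, d_hat, C, p0, eps=1e-6, K=20):
    """Weighted Gauss-Newton refinement of a TOA position fix.

    bs: (N, 2) BS positions; d_hat: (N,) measured ranges;
    C: (N, N) noise covariance; p0: initial estimate (e.g. linearised LS).
    """
    Cinv = np.linalg.inv(C)
    p = np.asarray(p0, dtype=float)
    for _ in range(K):
        diff = p - bs                        # vectors from each BS to estimate
        d = np.linalg.norm(diff, axis=1)     # predicted ranges d_i^{(k)}
        r = d_hat - d                        # residual vector r^{(k)}
        H = diff / d[:, None]                # Jacobian: unit vectors as rows
        # Weighted normal equations: (H^T C^-1 H) dp = H^T C^-1 r
        dp = np.linalg.solve(H.T @ Cinv @ H, H.T @ Cinv @ r)
        p = p + dp
        if np.linalg.norm(dp) < eps:         # stopping test on the step size
            break
    return p
```

For noise-free measurements the residual vanishes at the solution, and the iteration typically converges in a handful of steps from a linearised-LS start.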

Maximum Likelihood Position Estimation

Under the assumption of independent Gaussian measurement errors, the ML estimator for the UE position minimises the negative log-likelihood:

$$\hat{\mathbf{p}}_{\mathrm{ML}} = \arg\min_{\mathbf{p}} \sum_{i=1}^{N} \frac{(\hat{d}_i - \|\mathbf{p} - \mathbf{p}_i\|)^2}{\sigma_{r,i}^2}$$

This is precisely the weighted nonlinear least-squares (WNLS) problem solved by the Gauss-Newton algorithm above with $\mathbf{C} = \mathrm{diag}(\sigma_{r,1}^2, \ldots, \sigma_{r,N}^2)$.

For TDOA, the ML problem becomes:

$$\hat{\mathbf{p}}_{\mathrm{ML}} = \arg\min_{\mathbf{p}} \sum_{i=2}^{N} \frac{\bigl(\hat{d}_{i1} - (\|\mathbf{p} - \mathbf{p}_i\| - \|\mathbf{p} - \mathbf{p}_1\|)\bigr)^2}{\sigma_{i1}^2}$$

For hybrid measurements (e.g., TOA from some BSs, AOA from others), the log-likelihood simply sums the individual terms, leading to a data fusion framework.

Under the Gaussian noise model, the ML estimator is asymptotically efficient: it achieves the Cramer-Rao bound as the number of measurements or the SNR grows.
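To make the hybrid fusion idea concrete, the sketch below (pure NumPy; the function name and test geometry are illustrative) builds the TOA+AOA weighted cost as a single function that any solver --- the Gauss-Newton iteration above, a grid search, or a generic optimiser --- can minimise:

```python
import numpy as np

def hybrid_nll(p, bs_toa, d_hat, sig_r, bs_aoa, th_hat, sig_th):
    """Negative log-likelihood (up to constants) fusing TOA and AOA terms.

    Each measurement type contributes its own weighted squared residual,
    so the hybrid ML estimate is the minimiser of the summed cost.
    """
    p = np.asarray(p, dtype=float)
    # TOA terms: weighted squared range residuals
    d = np.linalg.norm(p - bs_toa, axis=1)
    toa = np.sum((d_hat - d) ** 2 / sig_r ** 2)
    # AOA terms: bearings predicted from the candidate position
    th = np.arctan2(p[1] - bs_aoa[:, 1], p[0] - bs_aoa[:, 0])
    # wrap angular residuals into (-pi, pi] so the cost stays continuous
    res = np.angle(np.exp(1j * (th_hat - th)))
    aoa = np.sum(res ** 2 / sig_th ** 2)
    return toa + aoa
```

The angle-wrapping step matters in practice: without it, a bearing residual near $\pm\pi$ would produce a spurious discontinuity in the cost surface.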

Definition: Fisher Information Matrix for Position Estimation

The Fisher information matrix (FIM) for estimating the 2D position $\mathbf{p} = [x, y]^T$ from $N$ independent range measurements with Gaussian errors $n_i \sim \mathcal{N}(0, \sigma_{r,i}^2)$ is:

$$\mathbf{J}(\mathbf{p}) = \sum_{i=1}^{N} \frac{1}{\sigma_{r,i}^2} \mathbf{u}_i \mathbf{u}_i^T$$

where $\mathbf{u}_i = (\mathbf{p} - \mathbf{p}_i)/\|\mathbf{p} - \mathbf{p}_i\|$ is the unit direction vector from BS $i$ to the UE.

Each term $\mathbf{u}_i \mathbf{u}_i^T$ is a rank-1 matrix that contributes information only along the radial direction from BS $i$. The FIM is the weighted sum of these directional contributions: it encodes how the geometric arrangement of base stations and their individual measurement accuracies combine to determine the overall positioning precision.

For AOA measurements with errors $\eta_i \sim \mathcal{N}(0, \sigma_{\theta,i}^2)$, the FIM contribution from BS $i$ is:

$$\mathbf{J}_i^{\mathrm{AOA}} = \frac{1}{\sigma_{\theta,i}^2 d_i^2} \mathbf{u}_i^{\perp} (\mathbf{u}_i^{\perp})^T$$

where $\mathbf{u}_i^{\perp} = [-\sin\theta_i, \cos\theta_i]^T$ is the unit vector perpendicular to the radial direction. Notably, AOA information is orthogonal to TOA information from the same BS, and scales inversely with the square of the distance.

The FIM has an elegant geometric interpretation. Defining the information ellipse as $\{\mathbf{p}: \mathbf{p}^T \mathbf{J} \mathbf{p} = 1\}$, its semi-axis along each eigenvector of $\mathbf{J}$ has length $1/\sqrt{\lambda}$, so the ellipse's shape reveals the directional distribution of positioning information. A circular ellipse indicates uniform accuracy in all directions; an elongated ellipse indicates good accuracy along its short minor axis but poor accuracy along its long major axis (the low-information direction) --- which occurs when all BSs are clustered in one direction.
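The rank-1 structure and the TOA/AOA orthogonality can be checked numerically. A minimal sketch (function names are mine, not from the text):

```python
import numpy as np

def fim_toa(p, bs, sig_r):
    """2x2 FIM for TOA ranging: J = sum_i u_i u_i^T / sigma_{r,i}^2."""
    diff = p - bs
    u = diff / np.linalg.norm(diff, axis=1)[:, None]   # unit vectors u_i
    return (u / np.asarray(sig_r)[:, None] ** 2).T @ u

def fim_aoa(p, bs, sig_th):
    """2x2 FIM for AOA: J = sum_i u_perp u_perp^T / (sigma_{th,i}^2 d_i^2)."""
    diff = p - bs
    d = np.linalg.norm(diff, axis=1)
    u = diff / d[:, None]
    u_perp = np.column_stack([-u[:, 1], u[:, 0]])      # 90-degree rotation of u_i
    w = 1.0 / (np.asarray(sig_th) ** 2 * d ** 2)       # 1/d^2 information scaling
    return (u_perp * w[:, None]).T @ u_perp
```

For a single BS due east of the UE, `fim_toa` is rank-1 along the x-axis and `fim_aoa` rank-1 along the y-axis, making the orthogonality of the two information types explicit.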


Theorem: Position Error Bound (PEB)

For any unbiased estimator p^\hat{\mathbf{p}} of the 2D position p\mathbf{p}, the mean squared position error is lower-bounded by:

$$\mathbb{E}\bigl[\|\hat{\mathbf{p}} - \mathbf{p}\|^2\bigr] \geq \mathrm{tr}\bigl(\mathbf{J}^{-1}(\mathbf{p})\bigr)$$

The position error bound is defined as:

$$\mathrm{PEB} = \sqrt{\mathrm{tr}\bigl(\mathbf{J}^{-1}(\mathbf{p})\bigr)} = \sqrt{\frac{1}{\lambda_1} + \frac{1}{\lambda_2}}$$

where $\lambda_1, \lambda_2$ are the eigenvalues of $\mathbf{J}$. The PEB represents the minimum achievable root-mean-square (RMS) position error, analogous to the GDOP (geometric dilution of precision) in GNSS.

For the special case of $N$ BSs with equal ranging accuracy $\sigma_r$, we have $\mathbf{J} = \frac{1}{\sigma_r^2}\sum_i \mathbf{u}_i \mathbf{u}_i^T$. Letting $\mu_1 \geq \mu_2$ denote the eigenvalues of the geometry matrix $\sum_i \mathbf{u}_i \mathbf{u}_i^T$, the PEB simplifies to:

$$\mathrm{PEB} = \frac{\sigma_r}{\sqrt{\mu_1}} \cdot \sqrt{1 + \frac{\mu_1}{\mu_2}}$$

where the ratio $\mu_1/\mu_2$ captures the geometry. Since each $\mathbf{u}_i \mathbf{u}_i^T$ has unit trace, $\mu_1 + \mu_2 = N$ is fixed, and equal eigenvalues (isotropic geometry) give the smallest PEB for a given total information.

The PEB is the positioning analogue of the standard error in 1D estimation. It tells you: "given these BSs at these locations with these measurement accuracies, the best possible RMS position error is PEB metres." Poor geometry (e.g., all BSs in a line) makes one eigenvalue of $\mathbf{J}$ small, inflating the PEB. Adding a BS in a different direction (increasing angular diversity) is far more beneficial than adding one near an existing BS.
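This geometry sensitivity is easy to verify numerically. The sketch below (illustrative positions, $\sigma_r = 1$ m) compares a fifth BS placed deeper inside an eastward cluster against one placed orthogonally to it:

```python
import numpy as np

def peb(p, bs, sig_r):
    """PEB = sqrt(tr(J^{-1})) for N equal-accuracy TOA ranges."""
    diff = np.asarray(p) - np.asarray(bs)
    u = diff / np.linalg.norm(diff, axis=1)[:, None]
    J = (u.T @ u) / sig_r**2            # J = sum_i u_i u_i^T / sigma_r^2
    return float(np.sqrt(np.trace(np.linalg.inv(J))))

ue = np.array([0.0, 0.0])
# Four BSs clustered towards the east, 500 m away
ang = np.deg2rad([-10.0, -5.0, 5.0, 10.0])
cluster = 500.0 * np.column_stack([np.cos(ang), np.sin(ang)])

east = np.vstack([cluster, [500.0, 0.0]])    # fifth BS deeper in the cluster
north = np.vstack([cluster, [0.0, 500.0]])   # fifth BS orthogonal to the cluster
```

Evaluating `peb(ue, north, 1.0)` against `peb(ue, east, 1.0)` shows the orthogonal placement reduces the bound by roughly a factor of three, while the extra eastward BS barely helps: the small eigenvalue, not the BS count, dominates the PEB.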


Gauss-Newton Iteration Convergence

Watch the Gauss-Newton algorithm converge from a coarse initial estimate (linearised LS) to the refined position. Each frame shows one iteration: the current estimate, the linearised range residuals, and the update step. Observe the rapid, near-quadratic convergence close to the solution and how increased noise slows convergence.


Position Error Bound (PEB) Contour Map

Visualise the PEB as a function of the UE position across a 2D area containing $N$ base stations. The contour map reveals how BS geometry affects positioning accuracy: the PEB is lowest inside the convex hull of the BS locations and increases rapidly outside. Increase the number of BSs to observe the improvement in coverage. Reduce $\sigma_r$ (which decreases with bandwidth) to see the PEB shrink uniformly. The bandwidth parameter converts to $\sigma_r$ via the CRB on delay estimation at a reference SNR of 10 dB.


The NLOS Problem: The Dominant Error Source in Practice

In real-world environments, the direct (line-of-sight) path between UE and BS is frequently blocked by buildings, walls, or other obstacles. The signal arrives via a reflected or diffracted non-line-of-sight (NLOS) path that is longer than the geometric distance, introducing a positive range bias:

$$\hat{d}_i = d_i + b_i + n_i, \qquad b_i \geq 0$$

where $b_i$ is the NLOS excess delay (often tens to hundreds of metres in range-equivalent terms). This bias is not captured by the Gaussian noise model and can cause positioning errors of tens of metres even with high-bandwidth signals.

NLOS mitigation strategies fall into three categories:

  1. Detection and exclusion: Identify NLOS measurements and remove them before positioning. Methods include residual analysis (measurements with large residuals after an initial LS fit are flagged as NLOS), hypothesis testing on the ranging error distribution, and machine learning classifiers trained on channel impulse response features.

  2. Robust estimation: Use estimators that are resilient to outliers, such as M-estimators (replacing the squared loss with a Huber or Tukey loss function), RANSAC-based approaches, or constrained optimisation that enforces $b_i \geq 0$.

  3. NLOS modelling: Treat the NLOS bias as a random variable with a known prior distribution (e.g., exponential) and incorporate it into a Bayesian estimation framework.
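As an illustration of category 2, the sketch below (names and the Huber threshold are illustrative choices) implements an M-estimator via iteratively reweighted Gauss-Newton: residuals beyond $\delta$ receive weight $\delta/|r|$ instead of 1, so a single NLOS-biased link no longer dominates the fit.

```python
import numpy as np

def huber_weights(r, delta):
    """IRLS weights for the Huber loss: quadratic core, linear tails."""
    a = np.abs(r)
    return np.where(a <= delta, 1.0, delta / np.maximum(a, 1e-12))

def robust_position(bs, d_hat, p0, delta=5.0, iters=30):
    """Huber M-estimation of position by iteratively reweighted Gauss-Newton."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        diff = p - bs
        d = np.linalg.norm(diff, axis=1)
        r = d_hat - d                       # range residuals
        H = diff / d[:, None]               # Jacobian (unit-vector rows)
        W = np.diag(huber_weights(r, delta))  # down-weight large residuals
        dp = np.linalg.solve(H.T @ W @ H, H.T @ W @ r)
        p = p + dp
        if np.linalg.norm(dp) < 1e-9:
            break
    return p
```

Setting `delta` very large recovers plain (non-robust) Gauss-Newton, which makes it easy to compare the two behaviours on the same data.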

NLOS Detection and Mitigation

Monte Carlo simulation comparing positioning accuracy with and without NLOS mitigation. A fraction $P_{\mathrm{NLOS}}$ of the BS links are affected by NLOS bias. The plot shows the CDF of the position error for: (1) naive LS ignoring NLOS, (2) residual-based NLOS detection and exclusion, and (3) robust M-estimation. Observe how increasing $P_{\mathrm{NLOS}}$ or the NLOS bias dramatically degrades naive LS but is partially mitigated by the robust approaches.


Common Mistake: NLOS Errors Are Not Gaussian

Mistake:

Modelling NLOS range errors as zero-mean Gaussian and applying standard least-squares positioning.

Correction:

NLOS introduces a positive bias ($b_i \geq 0$) because the reflected path is always longer than the direct path. Standard LS/ML estimators assume zero-mean noise and will systematically push the position estimate away from the NLOS BSs, potentially causing errors of tens of metres.

Proper handling requires either: (1) detecting and excluding NLOS measurements (residual analysis, hypothesis testing), (2) robust estimation (M-estimators with asymmetric loss), or (3) explicit bias modelling (e.g., an exponential prior on $b_i$).

Common Mistake: More BSs Do Not Help If Geometry Is Poor

Mistake:

Adding more base stations in the same direction from the UE, expecting significant PEB improvement.

Correction:

The PEB depends on the eigenvalues of the FIM, which in turn depend on the angular diversity of BS directions. If all $N$ BSs are clustered in one direction, $\mathbf{H}^T\mathbf{H}$ has one large and one small eigenvalue; adding more BSs in the same direction improves the large eigenvalue but not the small one.

The PEB is dominated by the weakest direction ($1/\lambda_{\min}$). A single BS placed orthogonally to the cluster is worth more than doubling the cluster size. This is why positioning-aware network planning considers angular diversity, not just signal coverage.

Key Takeaway

The PEB captures the fundamental accuracy limit for any positioning method, given the available measurements. It is determined by two independent factors: measurement quality ($\sigma_r$, $\sigma_\theta$) and geometry (GDOP). Improving either factor helps, but neither alone is sufficient --- a system with excellent ranging but poor geometry (all BSs in a line) cannot position accurately in the perpendicular direction, no matter how precise the individual measurements.

⚠️ Engineering Note

Clock Synchronisation Requirements for Positioning

Different positioning methods impose different synchronisation requirements, which have significant infrastructure implications:

  • TOA (one-way): Requires UE-to-BS synchronisation with accuracy $\Delta t < \sigma_r / c$. For $\sigma_r = 1$ m, this means $\Delta t < 3.3$ ns --- achievable only with GPS-disciplined oscillators, not standard crystal oscillators.
  • TDOA: Requires BS-to-BS synchronisation at the same level. 5G gNBs typically achieve $< 50$ ns via PTP (IEEE 1588) or GPS timing, translating to $< 15$ m synchronisation error.
  • RTT (round-trip): Eliminates clock sync entirely by measuring two-way delay. The price is that the UE must transmit (uplink overhead) and processing delays $T_{\mathrm{UE}}$, $T_{\mathrm{gNB}}$ must be calibrated.
  • AOA: No timing requirements --- purely spatial. But requires calibrated antenna arrays.

In practice, Multi-RTT is preferred for wide-area deployments because it avoids the stringent synchronisation requirements of TDOA, at the cost of additional UL signalling.

Practical Constraints
  • PTP synchronisation accuracy is 50--100 ns in typical 5G deployments
  • GPS-disciplined oscillators add about $500 per BS for timing
  • UE crystal oscillator drift is about 1 ppm, too coarse for one-way TOA

Example: AOA-Based Positioning (Triangulation)

Two base stations with large antenna arrays are located at $\mathbf{p}_1 = [0, 0]^T$ and $\mathbf{p}_2 = [200, 0]^T$ (metres). The estimated angles of arrival from the UE are $\hat{\theta}_1 = 50^\circ$ and $\hat{\theta}_2 = 130^\circ$ (measured from the positive $x$-axis).

(a) Write the bearing-line equations. (b) Find the intersection point (UE position). (c) If $\sigma_\theta = 2^\circ$, estimate the position uncertainty.
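One way to set up part (b) numerically (a sketch, not the text's official solution): each bearing defines a ray $\mathbf{p}_i + t_i [\cos\hat{\theta}_i, \sin\hat{\theta}_i]^T$ with $t_i \geq 0$, and the intersection follows from a $2 \times 2$ linear solve.

```python
import numpy as np

# BS positions and measured bearings (from the positive x-axis)
p1, p2 = np.array([0.0, 0.0]), np.array([200.0, 0.0])
th1, th2 = np.deg2rad(50.0), np.deg2rad(130.0)

u1 = np.array([np.cos(th1), np.sin(th1)])   # bearing direction from BS 1
u2 = np.array([np.cos(th2), np.sin(th2)])   # bearing direction from BS 2

# Intersection: p1 + t1*u1 = p2 + t2*u2  =>  [u1, -u2] [t1, t2]^T = p2 - p1
t = np.linalg.solve(np.column_stack([u1, -u2]), p2 - p1)
ue = p1 + t[0] * u1     # ue is approximately [100.0, 119.2] by symmetry
```

Since $130^\circ = 180^\circ - 50^\circ$, the geometry is symmetric about $x = 100$, giving $y = 100\tan 50^\circ \approx 119.2$ m. For part (c), each bearing contributes a cross-range uncertainty of roughly $d_i \sigma_\theta \approx 155.6 \times 0.035 \approx 5.4$ m.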

Quick Check

Four base stations are placed at the corners of a square of side length $L$, and all have the same ranging accuracy $\sigma_r$. A fifth BS is to be added to minimise the PEB at the centre of the square. Where should it be placed?

At the centre of the square, co-located with the UE

At any corner --- adding a fifth corner-BS is always best

At any location on the boundary of the square --- the FIM is insensitive to the fifth BS placement

At any location that is not collinear with existing BSs; the exact position does not matter much since the four-corner geometry is already isotropic