Sensors
  • Article
  • Open Access

15 January 2026

Reconstructing Spatial Localization Error Maps via Physics-Informed Tensor Completion for Passive Sensor Systems

1 Department of Electronic Engineering, Tsinghua University, Beijing 100084, China
2 Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing 100084, China
3 Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China
4 Institute of Remote Sensing and Digital Earth, Beijing 100094, China
This article belongs to the Special Issue Multi-Agent Sensors Systems and Their Applications

Abstract

Accurate mapping of localization error distribution is essential for assessing passive sensor systems and guiding sensor placement. However, conventional analytical methods like the Geometric Dilution of Precision (GDOP) rely on idealized error models, failing to capture the complex, heterogeneous error distributions typical of real-world environments. To overcome this challenge, we propose a novel data-driven framework that reconstructs high-fidelity localization error maps from sparse observations in TDOA-based systems. Specifically, we model the error distribution as a tensor and formulate the reconstruction as a tensor completion problem. A key innovation is our physics-informed regularization strategy, which incorporates prior knowledge from the analytical error covariance matrix into the tensor factorization process. This allows for robust recovery of the complete error map even from highly incomplete data. Experiments on a real-world dataset validate the superiority of our approach, showing an accuracy improvement of at least 27.96% over state-of-the-art methods.

1. Introduction

Passive localization is ubiquitous in fields ranging from wireless sensor networks (WSNs) to autonomous driving [1,2,3]. The reliability of these systems hinges on a thorough understanding of their performance across the operational domain. Consequently, a high-fidelity spatial localization error map is an indispensable tool, providing crucial insights for system evaluation, operational planning, and strategic sensor deployment. In practice, however, acquiring a complete and dense error map is often infeasible due to cost and logistical constraints, typically yielding only sparse measurements at discrete locations. Therefore, robustly reconstructing the full error distribution from limited data remains a significant challenge in the field.
Currently, the Geometric Dilution of Precision (GDOP) is the most widely adopted metric for performance evaluation [4,5]. The underlying principle is that measurement errors (e.g., in Time-Difference-of-Arrival, TDOA) are amplified by the geometric arrangement of the sensors, where GDOP provides a scalar metric to quantify this amplification factor [6,7]. Typically, unknown model parameters, such as measurement error variances, are estimated statistically from limited emitters and then extrapolated to unobserved regions. To enhance model fidelity in complex environments, numerous extensions to the basic GDOP model have been proposed. These include Weighted GDOP (WGDOP) variants accounting for heterogeneous noise in satellite navigation [8,9,10], as well as adaptations for indoor spaces [11,12], urban canyons [13], UAV-assisted networks [1,14,15], and WSNs [5,16].
Despite these refinements, model-driven methods share a fundamental limitation: they are predicated on idealized assumptions and oversimplified error models. They often fail to capture the complex coupling of error sources in real-world environments, such as non-line-of-sight (NLOS) propagation, multipath fading, and clock drift, leading to an unavoidable mismatch between predicted and actual system performance.
To mitigate this model mismatch, data-driven paradigms have emerged as a promising alternative. By learning directly from measurement data, these methods capture fine-grained environmental effects and complex error structures without relying on rigid physical models. Existing data-driven strategies for spatial map reconstruction generally fall into three categories. The first includes classical spatial interpolation techniques, such as Kriging [17,18] and kernel-based methods [19,20,21,22], which estimate values at unobserved locations based on spatial correlation. The second category comprises low-rank matrix and tensor completion methods, including compressed sensing [23], singular value thresholding [24,25,26], and tensor decomposition algorithms [27,28,29]. For instance, Zhang et al. [28] utilized block term decomposition (BTD) to reconstruct electromagnetic maps, while Sun et al. [30] improved accuracy by combining local interpolation with nuclear norm minimization (NNM-T). The third category encompasses machine learning approaches, such as RadioUNet [31], autoencoders [32], and Vision Transformer (ViT)-based methods [33]. While powerful, the efficacy of these purely data-driven methods is critically contingent on the availability of dense and uniformly distributed data [34,35]. In practical scenarios with sparse measurements, their performance degrades sharply, often failing to converge to physically meaningful solutions.
To overcome the limitations of inaccurate physical models and insufficient observational data, this paper proposes a framework that integrates model-based insights with the flexibility of data-driven approaches. First, we model the three-dimensional (3-D) spatial distribution of localization error as a third-order tensor, termed Tensorized GDOP (TGDOP). This representation offers a powerful mathematical tool to describe complex spatial error distributions, capturing intrinsic multi-dimensional structures and inherent anisotropy [36] that scalar metrics like GDOP fail to express. Subsequently, we formulate the reconstruction of the complete error map as a tensor completion problem. To solve this ill-posed problem under sparse measurement conditions, we develop a physics-informed regularization strategy. Specifically, we incorporate prior knowledge from the analytical error covariance matrix directly into the tensor factorization process by imposing polynomial constraints on the factor matrices. This physics-based constraint guides the reconstruction, ensuring the solution adheres to the underlying geometric principles of localization. It effectively compensates for missing observations, dramatically enhancing reconstruction accuracy and robustness without relying on simplified environmental models. Furthermore, this approach is not scenario-dependent and holds potential for extension to complex multi-system fusion scenarios.
Notably, while the concept of physics-informed learning is prominent in Physics-Informed Neural Networks (PINNs) [37], our usage differs fundamentally. PINNs typically integrate partial differential equations (PDEs) into a loss function and require training on large datasets to learn latent physical laws. In contrast, the proposed approach embeds these laws directly into the tensor structure as constraints for decomposition. Consequently, our method operates in a single-shot manner, which is training-free and requires no historical data. Experimental results on a real-world dataset demonstrate that, even with only 1% of observation data available, the proposed framework improves reconstruction accuracy by at least 27.96% compared to state-of-the-art baselines.
The main contributions of this paper are summarized as follows:
  • Tensor-Based Spatial Error Modeling (TGDOP): We propose a novel framework that models the spatial distribution of positioning errors as a third-order tensor. Unlike conventional scalar GDOP metrics, this tensor representation explicitly captures the anisotropic characteristics and complex coupling of error sources in real-world 3-D environments.
  • Physics-Informed Sparse Reconstruction Algorithm: We develop a robust tensor completion algorithm tailored for extremely sparse observational data. By deriving spatial properties from the theoretical error covariance matrix, we introduce polynomial constraints to the factor matrices during tensor decomposition.
  • Training-Free and Model-Robust Performance: The proposed method operates as a single-shot, data-driven approach that does not require historical training data or idealized channel assumptions, validated on both simulated and real-world datasets.
The remainder of this paper is organized as follows: Section 2 describes the problem formulation, and Section 3 provides the preliminaries. The definition and properties of TGDOP are detailed in Section 4, followed by the sparse reconstruction algorithm in Section 5. Section 6 presents simulation and real-data experiments, and conclusions are drawn in Section 7.

2. Problem Statement

Consider a 3-D space discretized into a grid of $N_1 \times N_2 \times N_3$ cells. The coordinate corresponding to a grid cell with index $(i, j, k) \in \Omega$ is denoted by $\mathbf{u}_{ijk}$, where $\Omega$ represents the set of all such indices, with cardinality $|\Omega| = N_1 N_2 N_3$. Let $\Omega_e$ be the set of indices for grid cells containing emitters.
The characterization of positioning error begins with understanding its statistical nature at a specific location $\mathbf{u}_{ijk}$. Assume there are $N$ independent positioning results $\hat{\mathbf{u}}_{ijk}^{n}$ (for $n = 1, \ldots, N$) for an emitter at a true location $\mathbf{u}_{ijk}$; then the positioning error for each result is given by $\mathrm{d}\mathbf{u}_{ijk}^{n} = \hat{\mathbf{u}}_{ijk}^{n} - \mathbf{u}_{ijk}$. The covariance matrix of the positioning error at $\mathbf{u}_{ijk}$ is then calculated by
$\hat{\mathbf{P}}_{ijk} = \mathbb{E}\left[\mathrm{d}\mathbf{u}_{ijk}\,\mathrm{d}\mathbf{u}_{ijk}^{T}\right], \quad (1)$
where $\mathrm{d}\mathbf{u}_{ijk} = \left[\mathrm{d}\mathbf{u}_{ijk}^{1}, \ldots, \mathrm{d}\mathbf{u}_{ijk}^{N}\right]$. Suppose the positioning error measurements are available for every grid cell; then a heatmap of the spatial distribution of localization error can be generated by calculating the Root Mean Square Error (RMSE) value at each grid point, $\mathrm{RMSE}(\mathbf{u}_{ijk}) = \sqrt{\mathrm{tr}(\hat{\mathbf{P}}_{ijk})}$, as depicted in Figure 1.
Figure 1. Schematic heatmap illustrating the magnitude of positioning error in a representative 3-D TDOA positioning system.
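To make (1) concrete, the following minimal Python sketch (function and variable names are ours, not from the paper) estimates the sample covariance and the RMSE value at a single grid cell from a batch of positioning results:

```python
import numpy as np

def rmse_at_cell(u_true, u_hats):
    """Sample covariance and RMSE at one grid cell, per Eq. (1).

    u_true: (3,) true emitter position; u_hats: (N, 3) positioning results.
    """
    du = u_hats - u_true             # error samples du^n, n = 1, ..., N
    P_hat = du.T @ du / len(du)      # sample estimate of E[du du^T]
    return np.sqrt(np.trace(P_hat))  # RMSE(u) = sqrt(tr(P_hat))

# e.g., 200 noisy fixes scattered around a hypothetical true location:
rng = np.random.default_rng(1)
u_true = np.array([100.0, 50.0, 10.0])
u_hats = u_true + rng.normal(scale=[3.0, 2.0, 5.0], size=(200, 3))
print(rmse_at_cell(u_true, u_hats))  # approx sqrt(3^2 + 2^2 + 5^2)
```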
In real-world applications, however, the acquisition of valid measurements is restricted to a very limited number of locations equipped with emitters. This inherent limitation means that statistical approaches can only characterize the positioning error locally and are incapable of perceiving the error map across the entire space. To overcome this, conventional methods use a parameterized empirical model for $\mathbf{P}_{ijk}$ instead of the statistical one in (1), allowing extrapolation to emitter-free locations. These models typically define a few unknown parameters based on key error sources, which are then solved for using the limited available measurements. A typical example is the error map derived by the GDOP model for TDOA systems, which primarily accounts for TDOA measurement noise $\mathcal{N}(0, \sigma_{\Delta t_l}^2)$ and sensor position uncertainties $\mathcal{N}(0, \sigma_s^2)$:
$\mathrm{GDOP}(\mathbf{u}_{ijk}) = \sqrt{\mathrm{tr}(\mathbf{P}_{ijk})}, \quad \text{where } \mathbf{P}_{ijk} = \mathbf{C}_{ijk}\,\tilde{\mathbf{P}}\left(\{\sigma_{\Delta t_l}^2\}, \sigma_s^2\right)\mathbf{C}_{ijk}^{T}. \quad (2)$
Here, the specific forms of the geometry matrix $\mathbf{C}_{ijk}$ and the intermediate parameter error covariance $\tilde{\mathbf{P}}$ are detailed in Section 4.2. The variance of the $l$-th TDOA measurement, $\sigma_{\Delta t_l}^2$, can be further expressed based on signal parameters [38] as
$\sigma_{\Delta t_l} = \frac{1}{\beta}\sqrt{\frac{1}{B\,T\,\mathrm{SNR}_l}}, \quad (3)$
where $\beta$ is the root mean square (RMS) signal bandwidth, $B$ is the signal bandwidth, $T$ is the integration time, and $\mathrm{SNR}_l$ is the effective signal-to-noise ratio for the $l$-th TDOA calculation.
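As a quick numeric illustration of (3) (all parameter values below are hypothetical, not taken from the paper's experiments):

```python
import numpy as np

beta = 2 * np.pi * 1e6        # RMS signal bandwidth (rad/s), illustrative
B, T = 2e6, 1e-3              # signal bandwidth (Hz) and integration time (s)
snr_l = 10 ** (20 / 10)       # 20 dB effective SNR as a linear ratio

# Eq. (3): sigma_dt = (1 / beta) * sqrt(1 / (B * T * SNR_l))
sigma_dt = (1.0 / beta) * np.sqrt(1.0 / (B * T * snr_l))
print(f"TDOA standard deviation: {sigma_dt * 1e9:.3f} ns")
```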
The parametric models can align with statistical estimates calculated by (1) if the underlying model assumptions are accurate. However, real-world scenarios involve complex propagation channels and diverse, often unmodeled, error sources. For instance, in typical NLOS scenarios, while the factors detailed in (3) (such as signal bandwidth and SNR) influence the standard deviation of TDOA measurements, the error component arising from NLOS propagation often plays a more dominant and decisive role; the GDOP model is then highly inaccurate because it does not account for NLOS errors. Moreover, the need to design distinct models for $\mathbf{P}_{ijk}$ tailored to different localization systems and specific application scenarios significantly curtails the practical utility and generalizability of conventional models.
To address these limitations, we adopt a tensor-based framework that surpasses conventional models by learning complex spatial distributions directly from data, independent of empirical models. We define the tensor representation of the 3-D positioning error map as $\underline{\mathbf{G}} \in \mathbb{R}^{N_1 \times N_2 \times N_3}$. Its components along the Cartesian axes (x, y, and z) are denoted as $\underline{\mathbf{G}}_1$, $\underline{\mathbf{G}}_2$, and $\underline{\mathbf{G}}_3$, respectively. Then, the relationship between $\underline{\mathbf{G}}$ and its components is given by
$\underline{\mathbf{G}} = \sum_{r=1}^{3} \underline{\mathbf{G}}_r. \quad (4)$
In practical scenarios with spatially sparse emitters, the error statistics can only be measured at indices $(i, j, k) \in \Omega_e$. The measurement model for an element at index $(i, j, k)$ is
$\hat{\underline{\mathbf{G}}}(i,j,k) = \mathrm{tr}(\hat{\mathbf{P}}_{ijk}) + \underline{\mathbf{N}}(i,j,k), \quad \hat{\underline{\mathbf{G}}}_r(i,j,k) = \hat{\mathbf{P}}_{ijk}(r,r) + \underline{\mathbf{N}}_r(i,j,k), \quad r \in \{1,2,3\}, \quad (5)$
where $\underline{\mathbf{N}}(i,j,k)$ and $\underline{\mathbf{N}}_r(i,j,k)$ represent the observation errors associated with $\underline{\mathbf{G}}$ and $\underline{\mathbf{G}}_r$, respectively.
The learnable parameters in traditional models are limited to a predefined set of error sources from a prior model. In contrast, the tensor model’s parameters consist of the factor matrices and the core tensor (detailed in Section 4.1), where the factor matrices characterize the axial trends of the error distribution, and the core tensor represents the coupling relationships among these axial components. Thus, the tensor model can better adapt to complex real-world environments for inference in regions with no observations.
In this context, our primary problem is to reconstruct the complete underlying error component tensors $\underline{\mathbf{G}}_r$ (and, consequently, the total error tensor $\underline{\mathbf{G}}$) across the entire spatial domain $\Omega$ from sparse and noisy measurements $\hat{\underline{\mathbf{G}}}_r(i,j,k)$ (and $\hat{\underline{\mathbf{G}}}(i,j,k)$) available only at indices $(i,j,k) \in \Omega_e$. It is typically the case that the number of observed points is much smaller than the total number of grid points, i.e., $|\Omega_e| \ll |\Omega|$, and the observed locations $\Omega_e$ are often randomly distributed.
Notation: In this paper, we follow the established convention in signal processing. We denote scalars, vectors, matrices, and tensors by lowercase letters ($a$), boldface lowercase ($\mathbf{a}$), boldface capitals ($\mathbf{A}$), and underlined boldface capitals ($\underline{\mathbf{A}}$), respectively. $\underline{\mathbf{A}}(i,j,k)$ denotes the $(i,j,k)$-th element of a third-order tensor. The colon notation indicates the sub-tensors of a given tensor. Italic capitals are also used to denote the index upper bounds. We use the superscripts $\mathbf{A}^{-1}$ and $\mathbf{A}^{\dagger}$ to represent the inverse and the pseudo-inverse of a matrix, respectively. In addition, we use $\|\underline{\mathbf{A}}\|_F = \sqrt{\sum_{i,j,k} a_{ijk}^2}$ to denote the Frobenius norm of a tensor. The operator $\mathrm{diag}(\cdot)$ stacks its scalar arguments in a square diagonal matrix, $\mathrm{cov}(\mathbf{a}, \mathbf{b})$ is the covariance of $\mathbf{a}$ and $\mathbf{b}$, and $\mathrm{tr}(\cdot)$ computes the trace of a matrix. The $\mathrm{blockdiag}(\cdot)$ operator is defined as $\mathrm{blockdiag}(\mathbf{A}_1, \ldots, \mathbf{A}_R) = \begin{bmatrix}\mathbf{A}_1 & & \mathbf{0} \\ & \ddots & \\ \mathbf{0} & & \mathbf{A}_R\end{bmatrix}$. The outer product satisfies $(\mathbf{a} \circ \mathbf{b})_{ij} = a_i b_j$. The Hadamard product satisfies $(\mathbf{A} \ast \mathbf{B})_{ij} = a_{ij} b_{ij}$. The Kronecker product is given by $\mathbf{A} \otimes \mathbf{B} = [a_{ij}\mathbf{B}]$, and its capital symbol $\bigotimes_i \mathbf{A}_i$ is used to denote the multiplicative form. The Khatri–Rao product is then defined as the block-wise Kronecker product: $\mathbf{A} \odot_b \mathbf{B} = [\mathbf{A}_1 \otimes \mathbf{B}_1, \ldots, \mathbf{A}_R \otimes \mathbf{B}_R]$. $\delta_{ij}$ denotes the Kronecker delta. The $(N \times N)$ identity matrix is represented by $\mathbf{I}_{N \times N}$, the $(M \times N)$ zero matrix is denoted by $\mathbf{0}_{M \times N}$, and $\mathbf{1}_N$ is a column vector of all ones of length $N$.

3. Preliminaries

We begin with the fundamental tensor analysis concepts needed to grasp the approach outlined in this paper. The problem discussed in this paper aligns with BTD, so the emphasis is on introducing the key concepts of BTD.

3.1. Mode-N Unfolding and Mode Product

For an $N$-th order tensor $\underline{\mathbf{X}} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$, the mode-$n$ unfolding of $\underline{\mathbf{X}}$, denoted as $\mathbf{X}_{(n)}$, arranges the mode-$n$ fibers as the columns of the resulting matrix. It maps the tensor element $x = \underline{\mathbf{X}}(i_1, i_2, \ldots, i_N)$ to the matrix element $(i_n, j)$, where
$j = 1 + \sum_{k=1,\, k \neq n}^{N} (i_k - 1) J_k, \quad \text{with } J_k = \prod_{l=1,\, l \neq n}^{k-1} I_l. \quad (6)$
The mode-$n$ product of $\underline{\mathbf{X}} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ with a matrix $\mathbf{U} \in \mathbb{R}^{J \times I_n}$ is denoted as $\underline{\mathbf{X}} \times_n \mathbf{U}$, and it follows that
$\left(\underline{\mathbf{X}} \times_n \mathbf{U}\right)_{i_1 \cdots i_{n-1}\, j\, i_{n+1} \cdots i_N} = \sum_{i_n = 1}^{I_n} x_{i_1 i_2 \cdots i_N}\, u_{j i_n}. \quad (7)$
Its matrix representation is expressed as
$\underline{\mathbf{Y}} = \underline{\mathbf{X}} \times_n \mathbf{U}, \quad \text{with } \mathbf{Y}_{(n)} = \mathbf{U}\mathbf{X}_{(n)}. \quad (8)$
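The following numpy sketch implements the mode-$n$ unfolding and mode-$n$ product; it uses Fortran-order reshapes so the column ordering matches the index map in (6), and the helper names are ours:

```python
import numpy as np

def unfold(X, n):
    """Mode-n unfolding: mode-n fibers become columns, per the map in (6)."""
    return np.reshape(np.moveaxis(X, n, 0), (X.shape[n], -1), order="F")

def fold(Xn, n, shape):
    """Inverse of unfold for a tensor of the given full shape."""
    rest = [s for i, s in enumerate(shape) if i != n]
    return np.moveaxis(np.reshape(Xn, [shape[n]] + rest, order="F"), 0, n)

def mode_product(X, U, n):
    """Mode-n product X x_n U, computed via its matrix form (8): Y_(n) = U X_(n)."""
    shape = list(X.shape)
    shape[n] = U.shape[0]
    return fold(U @ unfold(X, n), n, shape)

X = np.random.randn(4, 5, 6)
U = np.random.randn(3, 5)
Y = mode_product(X, U, 1)                            # result has shape (4, 3, 6)
assert np.allclose(unfold(Y, 1), U @ unfold(X, 1))   # consistency with (8)
```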

3.2. Block Tensor Decomposition in Multilinear Rank-(L,M,N) Terms

Unlike matrices, tensor rank determination is an NP-hard problem, leading to varied rank definitions among algorithms. The concept of multilinear rank is proposed in BTD. The tensor's mode-$n$ rank is defined as follows:
Definition 1
(Mode-$n$ rank). The mode-$n$ rank of a tensor $\underline{\mathbf{X}}$ is the dimension of the subspace spanned by its mode-$n$ vectors.
Then, a third-order tensor's multilinear rank is rank-$(L, M, N)$ if its mode-1 rank, mode-2 rank, and mode-3 rank are equal to $L$, $M$, and $N$, respectively.
A decomposition of a tensor $\underline{\mathbf{X}} \in \mathbb{R}^{I \times J \times K}$ into a sum of rank-$(L,M,N)$ terms can be written as
$\underline{\mathbf{X}} = \sum_{r=1}^{R} \underline{\mathbf{S}}_r \times_1 \mathbf{U}_r \times_2 \mathbf{V}_r \times_3 \mathbf{W}_r, \quad (9)$
where $\underline{\mathbf{S}}_r \in \mathbb{R}^{L \times M \times N}$ is rank-$(L,M,N)$, and $\mathbf{U}_r \in \mathbb{R}^{I \times L}$, $\mathbf{V}_r \in \mathbb{R}^{J \times M}$, and $\mathbf{W}_r \in \mathbb{R}^{K \times N}$ are matrices with full column rank. Typically, $\underline{\mathbf{S}}_r$ is referred to as the core tensor, while the matrices $\mathbf{U}_r$, $\mathbf{V}_r$, and $\mathbf{W}_r$ are denoted as the factor matrices. Calculating the mode-$n$ unfoldings of (9), we have
$\mathbf{X}_{(1)} = \mathbf{U}\,\mathrm{blockdiag}\left((\underline{\mathbf{S}}_1)_{(1)}, \ldots, (\underline{\mathbf{S}}_R)_{(1)}\right)(\mathbf{W} \odot_b \mathbf{V})^T,$
$\mathbf{X}_{(2)} = \mathbf{V}\,\mathrm{blockdiag}\left((\underline{\mathbf{S}}_1)_{(2)}, \ldots, (\underline{\mathbf{S}}_R)_{(2)}\right)(\mathbf{W} \odot_b \mathbf{U})^T,$
$\mathbf{X}_{(3)} = \mathbf{W}\,\mathrm{blockdiag}\left((\underline{\mathbf{S}}_1)_{(3)}, \ldots, (\underline{\mathbf{S}}_R)_{(3)}\right)(\mathbf{V} \odot_b \mathbf{U})^T, \quad (10)$
where $\mathbf{U} = [\mathbf{U}_1, \ldots, \mathbf{U}_R]$, $\mathbf{V} = [\mathbf{V}_1, \ldots, \mathbf{V}_R]$, and $\mathbf{W} = [\mathbf{W}_1, \ldots, \mathbf{W}_R]$.
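As a self-contained sketch (all dimensions and the seed are arbitrary), a sum of rank-$(L,M,N)$ terms can be synthesized per (9) and its mode-1 unfolding checked against (10):

```python
import numpy as np
from scipy.linalg import block_diag

R, (I, J, K), (L, M, N) = 3, (10, 11, 12), (2, 2, 2)
rng = np.random.default_rng(0)
cores = [rng.standard_normal((L, M, N)) for _ in range(R)]
Us = [rng.standard_normal((I, L)) for _ in range(R)]
Vs = [rng.standard_normal((J, M)) for _ in range(R)]
Ws = [rng.standard_normal((K, N)) for _ in range(R)]

# Eq. (9): X = sum_r S_r x1 U_r x2 V_r x3 W_r
X = sum(np.einsum("lmn,il,jm,kn->ijk", S, U, V, W)
        for S, U, V, W in zip(cores, Us, Vs, Ws))

# Eq. (10), mode 1: X_(1) = U blockdiag((S_r)_(1)) (W kr_b V)^T
X1 = X.reshape(I, -1, order="F")                          # mode-1 unfolding
Ublk = np.hstack(Us)
Sblk = block_diag(*[S.reshape(L, -1, order="F") for S in cores])
KRb = np.hstack([np.kron(W, V) for V, W in zip(Vs, Ws)])  # block-wise Khatri-Rao
assert np.allclose(X1, Ublk @ Sblk @ KRb.T)
```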

4. TGDOP and Its Properties

In this section, we introduce the TGDOP to characterize the distribution of localization errors and investigate the potential properties of TGDOP by analyzing the covariance matrix of positioning errors. Initially, we present a decomposed formulation of TGDOP, which explicitly accounts for the anisotropic characteristics inherent in localization errors. Subsequently, we analyze the general expressions for the error covariance matrix under various localization schemes. This analysis aims to establish the universality of the TGDOP model across different localization methodologies. Finally, by mapping the properties of the covariance matrix to the fundamental characteristics of the factor matrices within the TGDOP tensor space, we discuss the theoretical underpinnings that enable data-driven reconstruction using this model.

4.1. Tensor Model of Positioning Error Distribution

In real-world applications, conventional positioning error models suffer from model mismatch because their constrained parametric form limits their ability to capture the multiplicity of error sources found in complex environments. In contrast, tensor models possess far greater expressive power, allowing them to be effectively driven by data to construct models that are sufficiently accurate and adaptable for practical applications.
The expressive power of tensor models stems from their core tensors and factor matrices. Analogous to the singular value decomposition (SVD) in the 2-D case, the 3-D tensor $\underline{\mathbf{G}}_r$ ($r \in \{1,2,3\}$) can be characterized by eigenspaces defined by three sets of eigenvectors. We denote these eigenspaces as $\mathbf{V}_{1r} \in \mathbb{R}^{N_1 \times M_1}$, $\mathbf{V}_{2r} \in \mathbb{R}^{N_2 \times M_2}$, and $\mathbf{V}_{3r} \in \mathbb{R}^{N_3 \times M_3}$, corresponding to the factor matrices on the x, y, and z dimensions, respectively. Consequently, we have
$\underline{\mathbf{G}}_r = \underline{\mathbf{S}}_r \times_1 \mathbf{V}_{1r} \times_2 \mathbf{V}_{2r} \times_3 \mathbf{V}_{3r}, \quad r \in \{1, 2, 3\}, \quad (11)$
where $\underline{\mathbf{S}}_r \in \mathbb{R}^{M_1 \times M_2 \times M_3}$ is the core tensor of $\underline{\mathbf{G}}_r$. Substituting (11) into (4), $\underline{\mathbf{G}}$ can be expressed as
$\underline{\mathbf{G}} = \sum_{r=1}^{3} \underline{\mathbf{S}}_r \times_1 \mathbf{V}_{1r} \times_2 \mathbf{V}_{2r} \times_3 \mathbf{V}_{3r}. \quad (12)$
The theoretical TGDOP model is presented in (12). Employing a BTD framework, this model effectively maps the spatial distribution of the diagonal components of the covariance matrix $\mathbf{P}$ onto a set of latent core tensors $\underline{\mathbf{S}}_r$. When this formulation is integrated with the expression for TGDOP measurements given in (5), it yields $\hat{\underline{\mathbf{G}}} = \underline{\mathbf{G}} + \underline{\mathbf{N}}$. The probability distribution of the measurement noise $\underline{\mathbf{N}}$ here satisfies Theorem 1.
Theorem 1
(Measurement Noise Distribution). Consider a 3-D scenario where an emitter is located at $\mathbf{u}$, and there are $N$ positioning results for this emitter. Let $\mathbf{P}$ represent the covariance matrix of the positioning error. When $N$ is sufficiently large relative to the diagonal elements of $\mathbf{P}$, the observation noise of $\hat{\underline{\mathbf{G}}}$ and $\hat{\underline{\mathbf{G}}}_r$ at $\mathbf{u}$ satisfies
$\underline{\mathbf{N}}_r(\mathbf{u}) \sim \mathcal{N}\!\left(0, \frac{2}{N}\mathbf{P}^2(r,r)\right) \;\; (r \in \{1,2,3\}), \qquad \underline{\mathbf{N}}(\mathbf{u}) \sim \mathcal{N}\!\left(0, \frac{2}{N}\sum_{r=1}^{3}\mathbf{P}^2(r,r)\right). \quad (13)$
The proof is relegated to Appendix A. Theorem 1 establishes the condition under which measurements conform to a Gaussian distribution.
Notably, the TGDOP model presented in (12) involves not only the tensor $\underline{\mathbf{G}}$ but also all its constituent core tensors and factor matrices. These elements enable the reconstruction of both $\underline{\mathbf{G}}$ and its components $\underline{\mathbf{G}}_r$. In essence, TGDOP can be represented by a higher-order tensor $\underline{\mathbf{X}}$ satisfying $\underline{\mathbf{X}}(:,:,:,r) = \underline{\mathbf{G}}_r$. We demonstrate that $\underline{\mathbf{G}}$, which lacks directional information, is a collapse of the directionally informed $\underline{\mathbf{X}}$ along different dimensions. The relationship between $\underline{\mathbf{G}}$ and $\underline{\mathbf{X}}$ is further elucidated in Appendix C through the definition of a tensor operation.

4.2. The Covariance Matrix of Positioning Error

The covariance matrix of the positioning error is directly related to the TGDOP measurements, so the properties of the covariance matrix are the theoretical basis for designing the TGDOP factor matrix. Many documents have derived the covariance matrices of positioning errors under different positioning systems [36,39,40,41,42] in 2-D cases. This subsection proposes the covariance matrices of positioning errors for the TDOA positioning system in a 3-D Cartesian coordinate system.
Now, by defining the covariance matrix of the positioning error as $\mathbf{P} \in \mathbb{R}^{I \times J}$, we have $I = J = 3$ for a 3-D scenario. Assume there are $N$ sensors for TDOA positioning, where the main sensor and the auxiliary sensors are located at $\mathbf{u}_0 = [x_0, y_0, z_0]^T$ and $\mathbf{u}_i = [x_i, y_i, z_i]^T$ ($i = 1, 2, \ldots, N-1$), respectively, and the radiation source is located at $\mathbf{u} = [x, y, z]^T$. Let $\Delta t_i$ denote the TDOA measurement for the $i$-th group of sensors, $\sigma_{\Delta t_i}^2$ denote the variance of $\Delta t_i$, and $\sigma_s^2$ denote the variance of the sensor position error. The $i$-th row of the direction cosine matrix is given by
$\mathbf{F}(i,:) = \frac{\partial\left(\|\mathbf{u} - \mathbf{u}_i\|_2 - \|\mathbf{u} - \mathbf{u}_0\|_2\right)}{\partial \mathbf{u}} = \left[\frac{x - x_i}{r_{i,xyz}} - \frac{x - x_0}{r_{0,xyz}},\;\; \frac{y - y_i}{r_{i,xyz}} - \frac{y - y_0}{r_{0,xyz}},\;\; \frac{z - z_i}{r_{i,xyz}} - \frac{z - z_0}{r_{0,xyz}}\right],$
$r_{i,xyz} = \sqrt{(x - x_i)^2 + (y - y_i)^2 + (z - z_i)^2}. \quad (14)$
Let $c$ denote the speed of light; then the covariance matrix of the positioning error satisfies $\mathbf{P} = \mathbf{C}(\mathbf{P}_\Theta + \mathbf{P}_s)\mathbf{C}^T$, where
$\mathbf{C} = (\mathbf{F}^T\mathbf{F})^{-1}\mathbf{F}^T, \quad \mathbf{P}_\Theta(i,j) = \begin{cases} c^2\sigma_{\Delta t_i}^2, & i = j, \\ c^2\eta_{ij}\sigma_{\Delta t_i}\sigma_{\Delta t_j}, & i \neq j, \end{cases} \quad \mathbf{P}_s(i,j) = \begin{cases} 2\sigma_s^2, & i = j, \\ \sigma_s^2, & i \neq j. \end{cases}$
Next, by defining $\tilde{\mathbf{P}} = \mathbf{P}_\Theta + \mathbf{P}_s$ to represent the sum of the covariance matrices of all possible estimation errors, we have
$\mathbf{P} = \mathbf{C}\tilde{\mathbf{P}}\mathbf{C}^T. \quad (15)$
It is found that under different localization regimes, the error covariance matrix $\mathbf{P}$ can consistently be derived from $\tilde{\mathbf{P}}$ and $\mathbf{C}$ in a form analogous to (15). This is because (15) elucidates the principal sources of localization error. Specifically, $\tilde{\mathbf{P}}$ represents the inherent estimation errors within the localization system, while $\mathbf{C}$ quantifies the anisotropic amplification effect on these errors attributable to the station geometry. Therefore, (15) can be considered a general expression for the localization error covariance matrix. Moreover, according to (5), the TGDOP measurements also incorporate this universal expressive capability, which explains their applicability across various localization systems.
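The construction in (14) and (15) can be sketched in a few lines of Python. The function below is our own illustration, not the paper's code; the emitter location is arbitrary, while the sensor coordinates and noise parameters are borrowed from the experimental setup in Section 6:

```python
import numpy as np

C_LIGHT = 299_792_458.0  # speed of light (m/s)

def error_covariance(u, sensors, sigma_dt, corr, sigma_s):
    """P = C (P_Theta + P_s) C^T per (14)-(15); a sketch, not a reference implementation.

    u: (3,) emitter position; sensors: (N, 3), row 0 is the main sensor;
    sigma_dt: (N-1,) TDOA std devs (s); corr: (N-1, N-1) TDOA error
    correlation matrix (ones on the diagonal); sigma_s: sensor position std (m).
    """
    r = np.linalg.norm(u - sensors, axis=1)                        # ranges r_i
    F = (u - sensors[1:]) / r[1:, None] - (u - sensors[0]) / r[0]  # Eq. (14)
    C = np.linalg.pinv(F)                                          # (F^T F)^{-1} F^T
    P_theta = C_LIGHT**2 * corr * np.outer(sigma_dt, sigma_dt)
    n = len(sigma_dt)
    P_s = sigma_s**2 * (np.ones((n, n)) + np.eye(n))               # 2*sigma_s^2 on diag
    return C @ (P_theta + P_s) @ C.T                               # Eq. (15)

sensors = np.array([[0.0, 0, 0], [640, 1070, 35], [900, 180, 27], [1000, 660, 35]])
sigma_dt = np.array([18e-9, 20e-9, 25e-9])
corr = np.array([[1.0, 0.3, 0.5], [0.3, 1.0, 0.2], [0.5, 0.2, 1.0]])
P = error_covariance(np.array([3000.0, 2500.0, 1200.0]), sensors, sigma_dt, corr, 0.5)
print(np.sqrt(np.trace(P)))  # GDOP-style scalar map value, Eq. (2)
```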

4.3. Properties of TGDOP Derived from Error Covariance Matrix

A thorough understanding of the factor matrices' properties is paramount to enable effective reconstruction of the TGDOP model introduced in (12). This subsection investigates these properties by first examining the positioning error covariance matrix. We then translate the characteristics of this covariance matrix to the factor matrices in the tensor space, thereby establishing the smoothness, non-negativity, and low-rank properties of the proposed TGDOP model.

4.3.1. Spatial Smoothness

The mathematical notation $\mathcal{C}^k$ is commonly used to describe the smoothness of a function. Specifically, $\mathcal{C}^0$ indicates that a function is continuous over its domain, while $\mathcal{C}^1$ signifies that its first derivative exists and is also continuous over its domain (i.e., the function is continuously differentiable). Theorem 2 outlines the conditions for spatial continuity when the covariance matrix is treated as a matrix function of spatial position. A detailed derivation is available in Appendix B.
Theorem 2 (Continuity of Derivatives).
$\mathbf{P}(x, y, z) \in \mathcal{C}^1(I)$ when $\mathbf{F}: I \to \mathbb{R}^{m \times n}$ satisfies the following conditions:
  • $\mathbf{F} \in \mathcal{C}^1(I)$;
  • $\forall (x, y, z) \in I$, $\mathrm{rank}(\mathbf{F}(x, y, z)) = n$.
Here, $(x, y, z)$ represent the position variables of the radiation source in the Cartesian coordinate system.
Theorem 2 illustrates that the spatial continuity of $\mathbf{F}(x, y, z)$ plays a crucial role in the TGDOP model. Subsequently, we analyze the continuity conditions for $\mathbf{F}(x, y, z)$ in the TDOA localization system. Calculating $\partial_x \mathbf{F}(i,:)$ gives:
$\frac{\partial \mathbf{F}(i,1)}{\partial x} = \frac{1}{r_{i,xyz}} - \frac{1}{r_{0,xyz}} + \frac{(x - x_0)^2}{r_{0,xyz}^3} - \frac{(x - x_i)^2}{r_{i,xyz}^3},$
$\frac{\partial \mathbf{F}(i,2)}{\partial x} = \frac{(x - x_0)(y - y_0)}{r_{0,xyz}^3} - \frac{(x - x_i)(y - y_i)}{r_{i,xyz}^3},$
$\frac{\partial \mathbf{F}(i,3)}{\partial x} = \frac{(x - x_0)(z - z_0)}{r_{0,xyz}^3} - \frac{(x - x_i)(z - z_i)}{r_{i,xyz}^3}, \quad (16)$
where $r_{i,xyz}$ is defined in (14). It can be readily observed that for an interval $I$ on which the condition $r_{i,xyz} \neq 0$ holds, the partial derivative $\partial_x \mathbf{F}$ is continuous on $I$ (i.e., $\partial_x \mathbf{F} \in \mathcal{C}^0(I)$). The forms of $\partial_y \mathbf{F}$ and $\partial_z \mathbf{F}$ are analogous to that of $\partial_x \mathbf{F}$, and thus they adhere to the same continuity properties. Furthermore, $\mathbf{F}$ itself is continuous on $I$ (i.e., $\mathbf{F} \in \mathcal{C}^0(I)$). Additionally, under a typical sensor configuration, $\mathbf{F}$ possesses full column rank. Consequently, $\mathbf{P}(x, y, z) \in \mathcal{C}^1(I)$ follows directly from Theorem 2.
According to (5) and (12), the property that P ( x , y , z ) C 1 ( I ) can be mapped to the tensor space. This implies that the dimension of the factor matrices’ solution space in (12) can be reduced through structured constraints, which is important for subsequent sparse reconstruction algorithms.

4.3.2. Non-Negativity

Theorem 3 states that the covariance matrix $\mathbf{P}$ is symmetric and positive semi-definite. Let $\mathbf{t} = [1\; 0\; 0]^T$. Then, for all $\mathbf{u}_{ijk} \in \Omega$, we have
$\mathbf{t}^T \mathbf{P}_{ijk}\,\mathbf{t} = \mathbf{P}_{ijk}(1,1) \geq 0. \quad (17)$
Considering the definition of $\underline{\mathbf{G}}_r$ for $r \in \{1,2,3\}$ in (5), it follows that $\underline{\mathbf{G}}_1 \geq 0$. By defining $\mathbf{t} = [0\; 1\; 0]^T$ and $\mathbf{t} = [0\; 0\; 1]^T$, respectively, similar conclusions can be drawn for $\underline{\mathbf{G}}_2 \geq 0$ and $\underline{\mathbf{G}}_3 \geq 0$. Leveraging this property, non-negativity constraints are applied to the factor matrices of (12) in the subsequent algorithm. This confines the solution space to the domain of non-negative real numbers, reducing the variable dimensions in parameter optimization.
Theorem 3 (Symmetric Positive Semi-definite).
Let $\mathbf{P} = \mathbb{E}[\mathrm{d}\mathbf{u}\,\mathrm{d}\mathbf{u}^T]$, where $\mathrm{d}\mathbf{u}$ denotes the positioning error vector. Then, $\mathbf{P}$ satisfies the following:
  • $\mathbf{P}^T = \mathbf{P}$;
  • $\forall \mathbf{t} \in \mathbb{R}^n \setminus \{\mathbf{0}\}$, $\mathbf{t}^T\mathbf{P}\,\mathbf{t} \geq 0$.
Proof. 
$\mathbf{P}^T = \mathbb{E}\left[(\mathrm{d}\mathbf{u}\,\mathrm{d}\mathbf{u}^T)^T\right] = \mathbf{P}$, and $\mathbf{t}^T\mathbf{P}\,\mathbf{t} = \mathbf{t}^T\mathbb{E}[\mathrm{d}\mathbf{u}\,\mathrm{d}\mathbf{u}^T]\,\mathbf{t} = \mathbb{E}[\mathbf{t}^T\mathrm{d}\mathbf{u}\,\mathrm{d}\mathbf{u}^T\mathbf{t}] = \mathbb{E}\left[(\mathbf{t}^T\mathrm{d}\mathbf{u})^2\right] \geq 0.$
   □

4.3.3. Low Rank

To investigate the hypothesized low-rank property of the proposed tensor model $\underline{\mathbf{G}}$, which is constructed from individual covariance matrices $\mathbf{P}$ as per (5), our approach is to analyze the rank characteristics of its constituent slices using SVD. The methodology involves simulating a scenario with high variability in the underlying error parameters to assess whether low-rank structures persist even under such complex conditions.
The positioning error covariance matrix $\mathbf{P}$ at a location $\mathbf{u}_{ijk}$ is generally expressed from (15) as $\mathbf{P}(\mathbf{u}_{ijk}) = \mathbf{C}(\mathbf{u}_{ijk})\tilde{\mathbf{P}}\mathbf{C}(\mathbf{u}_{ijk})^T$. Here, $\mathbf{C}(\mathbf{u}_{ijk})$ is a geometry-dependent transformation matrix, and $\tilde{\mathbf{P}}$ is a symmetric positive semi-definite matrix encapsulating the statistics of underlying sensor measurements or intermediate error parameters, influenced by environmental factors $\Theta$. Performing a Cholesky decomposition on $\tilde{\mathbf{P}}$ yields $\tilde{\mathbf{P}} = \mathbf{L}\mathbf{L}^T$, where $\mathbf{L}$ is a lower triangular matrix. Consequently, the covariance matrix can be written as
$\mathbf{P}(\mathbf{u}_{ijk}) = \left(\mathbf{C}(\mathbf{u}_{ijk})\mathbf{L}\right)\left(\mathbf{C}(\mathbf{u}_{ijk})\mathbf{L}\right)^T. \quad (18)$
The matrix $\mathbf{C}(\mathbf{u}_{ijk})$ is determined by the fixed station configuration. Thus, the variability and specific structure of the positioning error map across different environments are primarily determined by $\mathbf{L}$.
Assuming there are $N_{\mathrm{var}}$ independent intermediate variables contributing to $\tilde{\mathbf{P}}$ (e.g., if $N$ is the number of variables per sensor and $M$ is the number of sensors or total measurements, then $N_{\mathrm{var}}$ could be $M \times N$), $\tilde{\mathbf{P}}$ is an $N_{\mathrm{var}} \times N_{\mathrm{var}}$ matrix, and thus $\mathbf{L} \in \mathbb{R}^{N_{\mathrm{var}} \times N_{\mathrm{var}}}$. We populate the non-zero elements of $\mathbf{L}$ with random values to simulate a complex error environment lacking strong prior structural information. The tensor $\underline{\mathbf{G}}$ is then constructed using these generated $\mathbf{P}(\mathbf{u}_{ijk})$ matrices. We then perform SVD on matrix slices of $\underline{\mathbf{G}}$. For instance, if a slice $\underline{\mathbf{G}}(:,:,k)$ yields a matrix $\mathbf{G}_k \in \mathbb{R}^{D_1 \times D_2}$ (for $k = 1, \ldots, K$), the normalized cumulative energy is defined as $\zeta_{i,k} = \sum_{j=1}^{i}\lambda_{j,k} \big/ \sum_{j=1}^{J}\lambda_{j,k}$, where $\lambda_{j,k}$ is the $j$-th singular value of $\mathbf{G}_k$ in descending order and $J = \min(D_1, D_2)$. Then, the average cumulative energy over the $K$ slices is defined as $\zeta_i = \frac{1}{K}\sum_{k=1}^{K}\zeta_{i,k}$.
Figure 2 plots $\zeta_i$ versus $i$ for such an analysis, where the considered tensor slice dimensions result in $J = 80$ singular values. It is found from Figure 2 that the first five singular values account for the majority of the energy in the analyzed tensor. This rapid decay of singular values suggests that the matrix representation of the positioning error distribution, even under these simulated complex conditions, exhibits a strong low-rank characteristic. This finding provides empirical support for the premise that the spatial distribution of positioning error can be effectively modeled by low-rank structures, offering a foundation for the low-rank reconstruction algorithm developed in this paper. While this analysis is based on a simulated scenario, the results suggest a general tendency that is further exploited by our proposed method.
Figure 2. The plot of normalized cumulative singular value energy $\zeta_i$ versus singular value index $i$ for the TGDOP, with $\mathbf{L}$ generated randomly. This indicates the potential low-rank property of the TGDOP.
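A numeric version of this slice-wise analysis is straightforward (the function name is ours):

```python
import numpy as np

def avg_cumulative_energy(G):
    """Average normalized cumulative singular-value energy over mode-3 slices.

    Returns zeta_i for i = 1, ..., min(D1, D2); a fast rise toward 1 within the
    first few indices indicates an approximately low-rank tensor, as in Figure 2.
    """
    zetas = []
    for k in range(G.shape[2]):
        s = np.linalg.svd(G[:, :, k], compute_uv=False)  # singular values, descending
        zetas.append(np.cumsum(s) / np.sum(s))           # zeta_{i,k} for slice k
    return np.mean(zetas, axis=0)                        # zeta_i averaged over slices
```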

5. Reconstruction Algorithm for TGDOP

In this section, we propose the sparse reconstruction algorithm for TGDOP. In cases where the positioning error sources are unidentified and limited emitters are available, we propose a BTD-based algorithm to enhance reconstruction performance under sparse measurements by constraining the factor matrix. In addition, we extend the algorithm when the directional measurements of positioning error are available. Specifically, we constrain the corresponding factor matrices with the directional measurements of positioning error in the objective function and reconstruct the optimized factor matrices into TGDOP.

5.1. BTD-Based Sparse Reconstruction Algorithm for TGDOP

We begin by outlining the context for the proposed algorithm. In this paper, a “sparse observational scenario” signifies a situation where calibration emitters are present in only a limited number of grids within the target area. Sensors receive electromagnetic signals from these emitters. Subsequent statistical analysis of multiple positioning results at each grid yields positioning errors; the squared magnitudes of these errors constitute the observational values, as detailed in (5).
As outlined in (12), the primary objective is the reconstruction of the complete positioning error tensor. This goal is achieved by solving the following optimization problem, which models the error tensor as a sum of R BTD components:
$\min_{\{\underline{\mathbf{S}}_r, \mathbf{V}_{1r}, \mathbf{V}_{2r}, \mathbf{V}_{3r}\}_{r=1}^{R}} \frac{1}{2}\left\|\hat{\underline{\mathbf{G}}} - \sum_{r=1}^{R}\left(\underline{\mathbf{S}}_r \times_1 \mathbf{V}_{1r} \times_2 \mathbf{V}_{2r} \times_3 \mathbf{V}_{3r}\right)\right\|_F^2. \quad (19)$
In this formulation, $\hat{\underline{\mathbf{G}}}$ denotes the measurement tensor. Each term in the summation, $\underline{\mathbf{S}}_r \times_1 \mathbf{V}_{1r} \times_2 \mathbf{V}_{2r} \times_3 \mathbf{V}_{3r}$, represents a BTD component comprising a core tensor $\underline{\mathbf{S}}_r$ and associated factor matrices $\mathbf{V}_{1r} \in \mathbb{R}^{N_1 \times M_1}$, $\mathbf{V}_{2r} \in \mathbb{R}^{N_2 \times M_2}$, and $\mathbf{V}_{3r} \in \mathbb{R}^{N_3 \times M_3}$. In accordance with (12), the number of BTD blocks is set to $R = 3$, representing the three directional components x, y, and z. The dimensions satisfy $M_k \leq N_k$ (for $k \in \{1,2,3\}$), and these factor matrices are constrained to possess full column rank, a common requirement for model identifiability.

5.1.1. Designing the Structure of Factor Matrices

Section 4.3 analyzes the intrinsic properties of the positioning error tensor G ̲ . The factor matrices in the tensor decomposition represent projections of G ̲ ’s features along its different modes. Consequently, the  characteristics of G ̲ should inform the structure of these factor matrices. Theorem 2 reveals the spatial smoothness inherent in positioning errors. This implies that the column vectors of the factor matrices (which capture variations along spatial dimensions) can effectively represent most of the information using low-order polynomials. This prior knowledge is particularly valuable in scenarios with sparse observational data. Standard random initialization of factor matrices fails to incorporate this fundamental prior, potentially leading to models that are poorly constrained by sparse measurements and thus increasing the risk of convergence to suboptimal local minima.
Theorem 2 establishes the continuity of elements within the tensor of positioning error distribution. Here, we relate this property to the factor matrices of the tensor via Theorem 4, the detailed proof of which is provided in Appendix E.
Theorem 4
(Vandermonde-like structure). Given that the error distribution tensor $\underline{\mathbf{G}}(x, y, z)$ exhibits $\mathcal{C}^1$ continuity with respect to spatial position and possesses a low-rank structure, it follows that the column vectors of its factor matrices reside in a subspace generated by low-order polynomials.
Based on Theorem 4, we propose structuring the column vectors of each factor matrix using polynomial generators. The optimization then focuses on two main steps. First, a  polynomial coefficient matrix is randomly initialized. Second, these coefficients are used to construct the columns of the actual factor matrix. Subsequent optimization iterations refine these polynomial coefficients rather than directly adjusting the elements of the factor matrices. This methodology explicitly incorporates the prior knowledge of spatial smoothness, promoting spatial continuity in the reconstructed tensor. Furthermore, by embedding this structural prior, the approach reduces the search space and guides the optimization, thereby decreasing the likelihood of converging to undesirable local optima.
Consider optimizing $\mathbf{V}_{1r}$ in (19) as an illustrative example. We first randomly initialize a polynomial coefficient matrix $\boldsymbol{\Phi}_{1r} \in \mathbb{R}^{M_1 \times m}$, where $m$ is the number of coefficients used to define a polynomial of degree $m-1$. Each of the $M_1$ columns of $\mathbf{V}_{1r}$ is generated using a distinct set of $m$ polynomial coefficients. The initial factor matrix $\mathbf{V}_{1r}^{(1)}$ is then constructed as
$\mathbf{V}_{1r}^{(1)} = \begin{bmatrix} 1 & 1 & \cdots & 1 \\ 1 & 2 & \cdots & 2^{m-1} \\ \vdots & \vdots & & \vdots \\ 1 & N_1 & \cdots & N_1^{m-1} \end{bmatrix}\boldsymbol{\Phi}_{1r}^T = \mathbf{Q}\boldsymbol{\Phi}_{1r}^T, \quad (20)$
where $\mathbf{Q} \in \mathbb{R}^{N_1 \times m}$ is the Vandermonde matrix whose columns form a basis for polynomials up to degree $m-1$. Since $\mathbf{Q}$ has full column rank (given $m \leq N_1$), optimizing the objective function with respect to $\mathbf{V}_{1r}^{(1)}$ is equivalent to optimizing with respect to $\boldsymbol{\Phi}_{1r}$. The optimization task thus shifts from determining $\mathbf{V}_{1r}$ (with $M_1 N_1$ parameters) to determining $\boldsymbol{\Phi}_{1r}$ (with $M_1 m$ parameters), significantly reducing the dimension of the solution space when $m \ll N_1$. This reduction effectively smooths the optimization landscape and prevents the algorithm from getting trapped in bad local minima associated with high-frequency noise, thereby improving robustness to random initialization.
To ensure the non-negativity of the factor matrices indicated by Theorem 3, we define the final factor matrix $\mathbf{V}_{1r}^{(2)}$ through an element-wise squaring operation:
$\mathbf{V}_{1r}^{(2)} = \mathbf{V}_{1r}^{(1)} \ast \mathbf{V}_{1r}^{(1)}. \quad (21)$
Upon substituting (20) and (21) into the objective function (denoted as $H$) defined in (19), the optimization problem concerning the factor matrix $\mathbf{V}_{1r}$ can be re-expressed as
$\min_{\mathbf{V}_{1r}^{(2)}} H\left(\mathbf{V}_{1r}^{(2)}\right) = \min_{\boldsymbol{\Phi}_{1r}} H\left(\left(\mathbf{Q}\boldsymbol{\Phi}_{1r}^T\right) \ast \left(\mathbf{Q}\boldsymbol{\Phi}_{1r}^T\right)\right). \quad (22)$
Consequently, optimizing the objective function $H$ from (19) with respect to the factor matrix $\mathbf{V}_{1r}$ (now denoted $\mathbf{V}_{1r}^{(2)}$ to reflect its construction) becomes equivalent to optimizing the polynomial coefficient matrix $\boldsymbol{\Phi}_{1r}$. For brevity, any subsequent reference to optimizing a factor matrix implies the optimization of its underlying polynomial coefficient matrix $\boldsymbol{\Phi}$.
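A minimal sketch of this construction (the helper name and the sizes in the usage line are ours):

```python
import numpy as np

def poly_factor(N, M, m, rng):
    """Non-negative factor matrix generated from polynomial coefficients.

    N: grid size along the mode, M: number of columns (mode rank),
    m: number of polynomial coefficients (degree m - 1), following (20)-(21).
    """
    Q = np.vander(np.arange(1, N + 1), m, increasing=True)  # Vandermonde basis Q
    Phi = rng.standard_normal((M, m))                       # coefficient matrix Phi
    V1 = Q @ Phi.T                                          # Eq. (20)
    return V1 * V1                                          # Eq. (21): elementwise square

rng = np.random.default_rng(0)
V = poly_factor(N=81, M=5, m=4, rng=rng)  # optimize the 5x4 Phi, not the 81x5 V
```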

5.1.2. Sparse Formulation of the Objective Function

In practical applications, tensor reconstruction often addresses scenarios with sparse measurements, where the measured data $\hat{\underline{\mathbf{G}}}$ covers only a minor portion of the entire target area. To model this, we introduce a binary sampling tensor $\underline{\mathbf{W}}$, where $\underline{\mathbf{W}}(i,j,k) = 1$ if the element $(i,j,k)$ is observed (i.e., $(i,j,k) \in \Omega_e$), and $0$ otherwise. Assuming the unobserved entries in $\hat{\underline{\mathbf{G}}}$ are represented as zeros, the optimization problem for sparse data is formulated as
$\min_{\underline{\mathbf{S}}_r, \mathbf{V}_{1r}, \mathbf{V}_{2r}, \mathbf{V}_{3r}} \frac{1}{2}\left\|\hat{\underline{\mathbf{G}}} - \tilde{\underline{\mathbf{G}}}\right\|_F^2, \quad \text{with } \tilde{\underline{\mathbf{G}}} = \sum_{r=1}^{R} \underline{\mathbf{W}} \ast \left(\underline{\mathbf{S}}_r \times_1 \mathbf{V}_{1r} \times_2 \mathbf{V}_{2r} \times_3 \mathbf{V}_{3r}\right). \quad (23)$

5.1.3. Solving Equation (23) Using Block Coordinate Descent

Now, we employ a Block Coordinate Descent (BCD) strategy to solve (23), which involves iteratively minimizing the objective function with respect to one block of variables while keeping the others fixed. This leads to the following four sub-problems, derived from the mode-n matricized forms of the objective function:
$\min_{\mathbf{V}_1}\left\|\hat{\mathbf{G}}_{(1)} - \mathbf{W}_{(1)} \ast \left(\mathbf{V}_1\,\mathrm{blockdiag}(\underline{\mathbf{S}}_{1:R})_{(1)}(\mathbf{V}_3 \odot_b \mathbf{V}_2)^T\right)\right\|_F^2,$
$\min_{\mathbf{V}_2}\left\|\hat{\mathbf{G}}_{(2)} - \mathbf{W}_{(2)} \ast \left(\mathbf{V}_2\,\mathrm{blockdiag}(\underline{\mathbf{S}}_{1:R})_{(2)}(\mathbf{V}_3 \odot_b \mathbf{V}_1)^T\right)\right\|_F^2,$
$\min_{\mathbf{V}_3}\left\|\hat{\mathbf{G}}_{(3)} - \mathbf{W}_{(3)} \ast \left(\mathbf{V}_3\,\mathrm{blockdiag}(\underline{\mathbf{S}}_{1:R})_{(3)}(\mathbf{V}_2 \odot_b \mathbf{V}_1)^T\right)\right\|_F^2,$
$\min_{\{\underline{\mathbf{S}}_r\}}\left\|\mathrm{vec}(\hat{\underline{\mathbf{G}}}) - \mathrm{vec}(\underline{\mathbf{W}}) \ast \left((\mathbf{V}_3 \odot_b \mathbf{V}_2 \odot_b \mathbf{V}_1)\,\mathrm{vec}(\underline{\mathbf{S}}_{1:R})\right)\right\|_F^2, \quad (24)$
where
$\mathbf{V}_k = [\mathbf{V}_{k1}, \ldots, \mathbf{V}_{kR}], \;\; k = 1, 2, 3, \qquad \mathrm{blockdiag}(\underline{\mathbf{S}}_{1:R})_{(i)} = \mathrm{blockdiag}\left\{(\underline{\mathbf{S}}_1)_{(i)}, \ldots, (\underline{\mathbf{S}}_R)_{(i)}\right\}, \qquad \mathrm{vec}(\underline{\mathbf{S}}_{1:R}) = \left[\mathrm{vec}(\underline{\mathbf{S}}_1); \ldots; \mathrm{vec}(\underline{\mathbf{S}}_R)\right]. \quad (25)$
The variables to be optimized in (23) are the sets of factor matrices $\{\mathbf{V}_{1r}\}_{r=1}^{R}$, $\{\mathbf{V}_{2r}\}_{r=1}^{R}$, $\{\mathbf{V}_{3r}\}_{r=1}^{R}$ (collectively denoted $\mathbf{V}_1$, $\mathbf{V}_2$, $\mathbf{V}_3$) and the set of core tensors $\{\underline{\mathbf{S}}_r\}_{r=1}^{R}$. Denoting the objective function in (23) by $F(\mathbf{V}_1, \mathbf{V}_2, \mathbf{V}_3, \{\underline{\mathbf{S}}_r\})$, the BCD method iteratively solves the following:
$\mathbf{V}_1^{(t+1)} = \arg\min_{\mathbf{V}_1} F\left(\mathbf{V}_1, \mathbf{V}_2^{(t)}, \mathbf{V}_3^{(t)}, \{\underline{\mathbf{S}}_r^{(t)}\}\right),$
$\mathbf{V}_2^{(t+1)} = \arg\min_{\mathbf{V}_2} F\left(\mathbf{V}_1^{(t+1)}, \mathbf{V}_2, \mathbf{V}_3^{(t)}, \{\underline{\mathbf{S}}_r^{(t)}\}\right),$
$\mathbf{V}_3^{(t+1)} = \arg\min_{\mathbf{V}_3} F\left(\mathbf{V}_1^{(t+1)}, \mathbf{V}_2^{(t+1)}, \mathbf{V}_3, \{\underline{\mathbf{S}}_r^{(t)}\}\right),$
$\{\underline{\mathbf{S}}_r^{(t+1)}\} = \arg\min_{\{\underline{\mathbf{S}}_r\}} F\left(\mathbf{V}_1^{(t+1)}, \mathbf{V}_2^{(t+1)}, \mathbf{V}_3^{(t+1)}, \{\underline{\mathbf{S}}_r\}\right). \quad (26)$
Each sub-problem in (26) is a linear least squares problem concerning the optimized variable block and is therefore convex. Consequently, an Alternating Least Squares (ALS) approach, a specific type of BCD, can be employed, as detailed in Algorithm 1.
Algorithm 1 ALS algorithm for solving (23).
  • Initialize $\mathbf{V}_1$, $\mathbf{V}_2$, $\mathbf{V}_3$, $\{\underline{\mathbf{S}}_r\}_{r=1}^{R}$.
  • while not converged do
  •     for $i = 1, 2, 3$ do
  •         Update $\mathbf{V}_i = [\mathbf{V}_{i1}, \ldots, \mathbf{V}_{iR}]$:
  •          $\mathbf{B} \leftarrow \mathrm{blockdiag}(\underline{\mathbf{S}}_{1:R})_{(i)}\left(\odot_{b,\; j = 3,\, j \neq i}^{1}\, \mathbf{V}_j\right)^T$.
  •         for $j = 1, \ldots, N_i$ do
  •           $\mathbf{w} \leftarrow \mathbf{W}_{(i)}(j,:)$.
  •           $\mathbf{V}_i(j,:) \leftarrow \left(\hat{\mathbf{G}}_{(i)}(j,:) \ast \mathbf{w}\right)\mathbf{B}^T\left(\mathbf{B}\,\mathrm{diag}(\mathbf{w})\,\mathbf{B}^T\right)^{\dagger}$.
  •         end for
  •         for $r = 1, \ldots, R$ do
  •          Perform QR factorization: $\mathbf{V}_{ir} = \mathbf{Q}_{ir}\mathbf{R}_{ir}$.
  •           $\mathbf{V}_{ir} \leftarrow \mathbf{Q}_{ir}$, $\underline{\mathbf{S}}_r \leftarrow \underline{\mathbf{S}}_r \times_i \mathbf{R}_{ir}$.
  •         end for
  •     end for
  •     Update $\{\underline{\mathbf{S}}_r\}_{r=1}^{R}$:
  •          $\mathbf{A} \leftarrow \mathbf{V}_3 \odot_b \mathbf{V}_2 \odot_b \mathbf{V}_1$, $\mathbf{w} \leftarrow \mathrm{vec}(\underline{\mathbf{W}})$.
  •          $\mathrm{vec}(\underline{\mathbf{S}}_{1:R}) \leftarrow \left(\mathbf{A}^T\mathrm{diag}(\mathbf{w})\mathbf{A}\right)^{\dagger}\mathbf{A}^T\mathrm{diag}(\mathbf{w})\,\mathrm{vec}(\hat{\underline{\mathbf{G}}})$.
  • end while
  • Reconstruct $\underline{\mathbf{G}}_r$ and $\underline{\mathbf{G}}$ from the converged factors via (11) and (12), respectively.
  • Return $\underline{\mathbf{G}}$ and $\underline{\mathbf{G}}_r$.
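The heart of Algorithm 1 is the masked (weighted) least-squares row update; a minimal numpy sketch of that single step, assuming a binary mask as in the text, is:

```python
import numpy as np

def masked_row_update(g_row, w_row, B):
    """One row update from Algorithm 1.

    Solves min_v || (g_row - v @ B) * w_row ||_2 for a single row of a
    factor matrix, where w_row is that row's binary sampling pattern:
    v = (g * w) B^T (B diag(w) B^T)^+.
    """
    Bw = B * w_row[None, :]  # B diag(w); diag(w)^2 = diag(w) for binary w
    return (g_row * w_row) @ B.T @ np.linalg.pinv(Bw @ B.T)
```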
The focus of this study lies in tensor modeling methodology, so we employed a straightforward ALS-BCD optimization scheme without an exhaustive comparison of alternative reconstruction strategies. Figure 3 illustrates the average convergence curves of the proposed algorithm and the classical BTD algorithm in LOS and NLOS scenarios, based on 100 random initializations. In the figure, the x-axis represents the number of optimization steps performed by the ALS algorithm, and the y-axis represents the total squared Frobenius norm error defined in (23). This value is calculated based on the normalized measurement tensor and is used to illustrate the relative magnitude of the convergence trend. It can be observed that the proposed algorithm outperforms the classical BTD method in terms of both convergence speed and stability, and exhibits sufficient robustness to initial value selection. Nevertheless, to extend the algorithm to larger tensor spaces, advanced techniques such as momentum-based updates, stochastic optimization, or distributed computation can be incorporated to further boost convergence speed and robustness.
Figure 3. The average convergence curves of the proposed algorithm and the classical BTD algorithm in LOS and NLOS scenarios, with 100 random initializations.
For the aforementioned solution process, the derivation of its computational complexity is provided in Appendix D:
$C_{\mathrm{total}} \sim \mathcal{O}\!\left(|\Omega| \cdot R^2 \left(M_1 M_2 M_3\right)^2\right), \quad (27)$
where $|\Omega|$ here denotes the number of observed samples. It is evident that the complexity of the proposed algorithm is acceptable under the low-rank assumption. However, when the multilinear rank is large, reconstruction accuracy must be sacrificed to reduce time costs. Future research may overcome this technical bottleneck by modifying the optimization strategy.

5.2. Solution Approach with Available Directional Measurements

The rotational ambiguity of sub-tensors is a known challenge in tensor decomposition and is often handled with prior constraints, initialization, or post-processing [28]. This subsection explores an alternative solution strategy when directional components of the positioning error are available as measurements. It is emphasized that the proposed method does not decompose the tensor into three arbitrary blocks. Instead, it reconstructs three physically meaningful components grounded in the definition of the error covariance matrix. This ensures that the physical correspondence is guaranteed by the model’s design from the outset, rather than being an ambiguous interpretation of the decomposition results.
While existing research on positioning errors has largely concentrated on scalar distributions (e.g., GDOP-based models) that overlook directional characteristics, such information is not irretrievable. When a set of $N$ positioning measurements $\{\mathbf{u}_i\}_{i=1}^{N}$ is collected for an emitter at a known location $\mathbf{u}$, the directional error components $\hat{\underline{\mathbf{G}}}_r$ can be extracted through statistical analysis.
The availability of these directional measurements $\hat{\underline{\mathbf{G}}}_r$ enables a refined modeling approach. We can reformulate the objective function to model each of the $R$ observed directional tensors with a dedicated BTD component, leading to the expression:
$\min_{\underline{\mathbf{S}}_r, \mathbf{V}_{1r}, \mathbf{V}_{2r}, \mathbf{V}_{3r}} \frac{1}{2}\sum_{r=1}^{R}\left\|\hat{\underline{\mathbf{G}}}_r - \underline{\mathbf{W}} \ast \left(\underline{\mathbf{S}}_r \times_1 \mathbf{V}_{1r} \times_2 \mathbf{V}_{2r} \times_3 \mathbf{V}_{3r}\right)\right\|_F^2. \quad (28)$
This objective would then be decomposed into sub-problems:
$\min_{\mathbf{V}_{1r}} \sum_{r}\left\|(\hat{\mathbf{G}}_r)_{(1)} - \mathbf{W}_{(1)} \ast \left(\mathbf{V}_{1r}(\underline{\mathbf{S}}_r)_{(1)}\left(\mathbf{V}_{3r} \otimes \mathbf{V}_{2r}\right)^T\right)\right\|_F^2,$
$\min_{\mathbf{V}_{2r}} \sum_{r}\left\|(\hat{\mathbf{G}}_r)_{(2)} - \mathbf{W}_{(2)} \ast \left(\mathbf{V}_{2r}(\underline{\mathbf{S}}_r)_{(2)}\left(\mathbf{V}_{3r} \otimes \mathbf{V}_{1r}\right)^T\right)\right\|_F^2,$
$\min_{\mathbf{V}_{3r}} \sum_{r}\left\|(\hat{\mathbf{G}}_r)_{(3)} - \mathbf{W}_{(3)} \ast \left(\mathbf{V}_{3r}(\underline{\mathbf{S}}_r)_{(3)}\left(\mathbf{V}_{2r} \otimes \mathbf{V}_{1r}\right)^T\right)\right\|_F^2,$
$\min_{\underline{\mathbf{S}}_r} \sum_{r}\left\|\mathrm{vec}(\hat{\underline{\mathbf{G}}}_r) - \mathrm{vec}(\underline{\mathbf{W}}) \ast \left(\left(\mathbf{V}_{3r} \otimes \mathbf{V}_{2r} \otimes \mathbf{V}_{1r}\right)\mathrm{vec}(\underline{\mathbf{S}}_r)\right)\right\|_F^2. \quad (29)$
Utilizing directional measurements aims to provide richer constraints for the factor matrices and core tensors, potentially improving the accuracy of directional positioning error reconstruction. Similar to Algorithm 1, a BCD approach can be applied to solve these convex sub-problems; the detailed algorithmic steps are omitted here for brevity.

5.3. Lower Bound on Sample Complexity

In this section, we discuss the lower bound on the data requirement of the proposed algorithm under the random uniform sampling strategy. We consider the recovery of a $K$-th order tensor $\underline{\mathbf{G}} \in \mathbb{R}^{N_1 \times \cdots \times N_K}$, which is represented as a sum of $R$ low-rank Tucker tensors $\{\underline{\mathbf{T}}_r\}_{r=1}^{R}$: $\underline{\mathbf{G}} = \sum_{r=1}^{R}\underline{\mathbf{T}}_r$. Each component tensor $\underline{\mathbf{T}}_r$ has a Tucker decomposition:
$\underline{\mathbf{T}}_r = \underline{\mathbf{S}}_r \times_1 \mathbf{V}_{r,1} \times_2 \mathbf{V}_{r,2} \cdots \times_K \mathbf{V}_{r,K}, \quad (30)$
where $\underline{\mathbf{S}}_r$ is the core tensor and $\mathbf{V}_{r,k} \in \mathbb{R}^{N_k \times d_{rk}}$ is the factor matrix for component $r$ along mode $k$, with $d_{rk}$ being the mode-$k$ Tucker rank. The analysis results on Tucker decomposition [43,44] indicate that the lower bound on the sample complexity is determined by the most difficult-to-recover matrix unfolding, $\mathbf{T}_{(k)}$. The recovery difficulty of a matrix is lower-bounded by a function proportional to $\mathrm{rank} \times (\mathrm{dim}_1 + \mathrm{dim}_2)$; that is,
$z \geq \max_{k \in \{1, \ldots, K\}} \Omega\!\left(\mathrm{rank}\left(\mathbf{T}_{(k)}\right)\left(N_k + \prod_{j \neq k} N_j\right)\right). \quad (31)$
In Section 5.1.1, we assume that each column of every factor matrix $\mathbf{V}_{r,k}$ is a polynomial sampled at $N_k$ distinct points. This structural prior can be formulated as a factorization $\mathbf{V}_{r,k} = \mathbf{Q}_{r,k}\boldsymbol{\Phi}_{r,k}^T$, where $\mathbf{Q}_{r,k} \in \mathbb{R}^{N_k \times m_{r,k}}$ and $\boldsymbol{\Phi}_{r,k} \in \mathbb{R}^{d_{rk} \times m_{r,k}}$. Here, $d_{rk}$ represents the mode-$k$ rank of the $r$-th component of the tensor, and $m_{r,k}$ represents the order used for polynomial approximation. Each row of $\boldsymbol{\Phi}_{r,k}$ contains the coefficients for the corresponding column of $\mathbf{V}_{r,k}$ in the basis defined by $\mathbf{Q}_{r,k}$.
The mode-$k$ unfolding of the $r$-th component tensor, $(\underline{\mathbf{T}}_r)_{(k)}$, is given by
$(\underline{\mathbf{T}}_r)_{(k)} = \mathbf{V}_{r,k}(\underline{\mathbf{S}}_r)_{(k)}\left(\mathbf{V}_{r,K} \otimes \cdots \otimes \mathbf{V}_{r,k+1} \otimes \mathbf{V}_{r,k-1} \otimes \cdots \otimes \mathbf{V}_{r,1}\right)^T. \quad (32)$
Substituting the polynomial constraint $\mathbf{V}_{r,j} = \mathbf{Q}_{r,j}\boldsymbol{\Phi}_{r,j}^T$ for all modes $j = 1, \ldots, K$, we obtain
$(\underline{\mathbf{T}}_r)_{(k)} = \left(\mathbf{Q}_{r,k}\boldsymbol{\Phi}_{r,k}^T\right)(\underline{\mathbf{S}}_r)_{(k)}\left(\bigotimes_{j \neq k}\left(\mathbf{Q}_{r,j}\boldsymbol{\Phi}_{r,j}^T\right)\right)^T. \quad (33)$
The expression for $(\underline{\mathbf{T}}_r)_{(k)}$ shows that its column space is a subspace of the column space of the known matrix $\mathbf{Q}_{r,k}$:
$\mathrm{col}\left((\underline{\mathbf{T}}_r)_{(k)}\right) \subseteq \mathrm{col}\left(\mathbf{Q}_{r,k}\right). \quad (34)$
The search for the left singular vectors is thus restricted from the ambient space $\mathbb{R}^{N_k}$ to the $m_{r,k}$-dimensional subspace spanned by the columns of $\mathbf{Q}_{r,k}$. The effective row dimension is reduced from $N_k$ to $m_{r,k}$. Using the mixed-product property of the Kronecker product on the right-hand side of (33):
$\bigotimes_{j \neq k}\left(\mathbf{Q}_{r,j}\boldsymbol{\Phi}_{r,j}^T\right) = \left(\bigotimes_{j \neq k}\mathbf{Q}_{r,j}\right)\left(\bigotimes_{j \neq k}\boldsymbol{\Phi}_{r,j}^T\right). \quad (35)$
This implies that the row space of $(\underline{\mathbf{T}}_r)_{(k)}$ is contained within the column space of the matrix $\bigotimes_{j \neq k}\mathbf{Q}_{r,j}$, whose dimension is $\prod_{j \neq k} m_{r,j}$. The effective column dimension is therefore reduced from $\prod_{j \neq k} N_j$ to $\prod_{j \neq k} m_{r,j}$.
For the matrix recovery problem of $(\underline{\mathbf{T}}_r)_{(k)}$, we conclude that it is rank-$d_{rk}$ with effective row dimension $m_{r,k}$ and effective column dimension $\prod_{j \neq k} m_{r,j}$. Applying the standard lower bound for matrix recovery yields the bound for this specific unfolding:
$L_r^{(k)} = \Omega\!\left(d_{rk}\left(m_{r,k} + \prod_{j \neq k} m_{r,j}\right)\right). \quad (36)$
Theorem 5 gives the overall sample complexity lower bound for the BTD recovery problem, showing that the sample complexity is determined by the bottleneck among all R × K unfoldings.
Theorem 5.
For the recovery of a BTD tensor with polynomial factor matrix constraints, the required number of measurements $z$ has the following lower bound:
$z \geq \max_{r \in \{1, \ldots, R\},\; k \in \{1, \ldots, K\}} \Omega\!\left(d_{rk}\left(m_{r,k} + \prod_{j \neq k} m_{r,j}\right)\right), \quad (37)$
where $d_{rk}$ is the Tucker rank and $m_{r,k}$ is the number of degrees of freedom of the factor matrix for component $r$ along mode $k$.
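For intuition, the bottleneck term of Theorem 5 can be evaluated directly; all ranks and polynomial degrees in the usage line below are illustrative, and the function name is ours:

```python
import numpy as np

def btd_sample_bound(d, m):
    """Bottleneck term of Theorem 5, up to the big-Omega constant.

    d[r][k]: mode-k Tucker rank of component r; m[r][k]: polynomial
    degrees of freedom of its factor matrix along mode k.
    """
    R, K = len(d), len(d[0])
    return max(d[r][k] * (m[r][k] + int(np.prod([m[r][j] for j in range(K) if j != k])))
               for r in range(R) for k in range(K))

# R = 3 blocks, K = 3 modes, rank 4 and m = 6 coefficients everywhere:
print(btd_sample_bound([[4] * 3] * 3, [[6] * 3] * 3))  # 4 * (6 + 36) = 168
```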
Figure 4 shows the log-reconstruction error under different observation ratios. The x-axis represents a ratio between 0 and 1, indicating the proportion of observed data points relative to the total number of tensor elements. The y-axis denotes the size $N$ of the $N \times N \times N$ three-dimensional tensor. The colorbar of the heatmap represents the reconstruction error on a logarithmic scale. The performance of the conventional BTD algorithm and the proposed method is presented in Figure 4a and Figure 4b, respectively. The results indicate that under identical sampling conditions, our method exhibits superior performance by achieving a lower reconstruction error. This consequently reduces the sample lower bound for achieving high-fidelity tensor reconstruction.
Figure 4. Analysis of the impact of sample size on the performance of tensor reconstruction. (a) Reconstruction error of the conventional BTD model. (b) Reconstruction error of the BTD model with factor matrix prior constraints.

6. Results with Measurements and Simulations

To evaluate the performance and robustness of the proposed algorithm under diverse conditions, we conduct simulations on two typical TDOA-based positioning scenarios, denoted as $S_{\mathrm{TDOA}}$ and $S_{\mathrm{TDOA}}^{N}$, respectively. The experimental findings and conclusions can also be extended to Direction-of-Arrival (DOA) positioning scenarios. Nevertheless, specific results from DOA experiments are not further detailed in this paper for brevity.
The scenario $S_{\mathrm{TDOA}}$ utilizes a 3-D TDOA positioning system with four sensors. Three sets of TDOA measurements are the intermediate parameters. The standard deviations for these TDOA measurement errors are set to 18 ns, 20 ns, and 25 ns. The standard deviation of sensor position errors is modeled as 0.5 m. The sensor coordinates (in meters) are $[0, 0, 0]^T$, $[640, 1070, 35]^T$, $[900, 180, 27]^T$, and $[1000, 660, 35]^T$, forming a "Y"-shaped constellation. This configuration is chosen not only because it represents the most common and high-performing geometry in distributed positioning, but also to align with the experimental setup described later. Due to a shared reference sensor in TDOA calculations, correlations are introduced among the TDOA measurement errors. Let the TDOA error vector be $\boldsymbol{\epsilon}_{\mathrm{TDOA}} = [\epsilon_1, \epsilon_2, \epsilon_3]^T$. The correlation matrix $\mathbf{R}_\epsilon$ for these TDOA errors has $\rho_{12} = 0.3$, $\rho_{13} = 0.5$, and $\rho_{23} = 0.2$. The maximum simulated detection range extends to 8 km, with the target area's elevation spanning from 0.5 km to 5 km. This target volume is discretized into a grid of $810 \times 810 \times 110$ points. Scenario $S_{\mathrm{TDOA}}^{N}$ builds upon $S_{\mathrm{TDOA}}$ by incorporating multipath errors within a quarter of the region spanning elevations from 0.5 km to 1.4 km. The multipath error affecting the TDOA estimates is modeled using an exponential distribution with a mean of 100 ns.
To quantitatively assess the algorithm's performance, several error metrics were employed. Mean Frobenius Norm Error (MFNE) is used to evaluate the average reconstruction error of a tensor $\underline{\mathbf{G}}$ over multiple trials: $\mathrm{MFNE}_{\underline{\mathbf{G}}} = \frac{1}{M}\sum_{m=1}^{M}\left\|\underline{\mathbf{G}} - \hat{\underline{\mathbf{G}}}_m\right\|_F$, where $\underline{\mathbf{G}}$ is the true tensor, $\hat{\underline{\mathbf{G}}}_m$ is the tensor reconstructed in the $m$-th Monte Carlo trial, and $M$ is the total number of trials. Relative Frobenius Norm Error (RFNE) measures the reconstruction error relative to the magnitude of the true tensor: $\mathrm{RFNE}_{\underline{\mathbf{G}}} = \left\|\underline{\mathbf{G}} - \hat{\underline{\mathbf{G}}}_m\right\|_F \big/ \left\|\underline{\mathbf{G}}\right\|_F$. This metric is typically calculated for each trial or averaged over $M$ trials. The Signal-to-Noise Ratio (SNR) represents the ratio of the true signal power to the noise power in the measurements, defined in decibels (dB) as $\mathrm{SNR}_{\underline{\mathbf{G}}} = 10\log_{10}\frac{\|\underline{\mathbf{G}}\|_F}{\|\underline{\mathbf{G}}_{\mathrm{noisy}} - \underline{\mathbf{G}}\|_F}$, where $\underline{\mathbf{G}}$ is the true underlying (noise-free) tensor and $\underline{\mathbf{G}}_{\mathrm{noisy}}$ is the observed noisy tensor.
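These three metrics translate directly into code (function names are ours):

```python
import numpy as np

def mfne(G_true, G_recs):
    """Mean Frobenius norm error over M Monte Carlo reconstructions."""
    return np.mean([np.linalg.norm(G_true - G) for G in G_recs])

def rfne(G_true, G_rec):
    """Frobenius norm error relative to the magnitude of the true tensor."""
    return np.linalg.norm(G_true - G_rec) / np.linalg.norm(G_true)

def snr_db(G_true, G_noisy):
    """Observation SNR in dB, using the Frobenius-norm ratio defined above."""
    return 10 * np.log10(np.linalg.norm(G_true) / np.linalg.norm(G_noisy - G_true))
```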

6.1. Multilinear Rank Analysis

In this subsection, we employ the L-curve method (Note: The L-curve corner identifies the inflection point where further increases in model parameters yield diminishing returns in error reduction; the rank at this corner is selected as the optimal multilinear rank to balance accuracy and model simplicity.) to determine a suitable multilinear rank for the tensor model optimized by our proposed algorithm. Given that the concept of low rank is relative to a tensor’s dimensions, we estimate the multilinear rank for the target tensor.
To manage the computational load during rank analysis, especially for large tensors, we introduce a down-sampling factor $f_s$. The analysis is performed on a down-sampled version of the target data tensor, denoted as $\underline{\mathbf{G}} \in \mathbb{R}^{N_1/f_s \times N_2/f_s \times N_3/f_s}$. The L-curve method typically involves plotting a measure of reconstruction error against model complexity. The corner of this curve often indicates an optimal trade-off. We identify this region by conducting trials with various combinations of multilinear ranks and select the ranks near the L-curve's inflection point. This selection is further guided by the criterion that the RFNE between $\underline{\mathbf{G}}$ and its low-rank approximation $\hat{\underline{\mathbf{G}}}$ falls below a predefined threshold of $2 \times 10^{-3}$.
Figure 5 illustrates the estimation of the tensor's multilinear rank using the L-curve method at $f_s = 20$. The x-axis represents the compression ratio, defined as the percentage of parameters required for reconstruction relative to the total tensor size, while the y-axis denotes the RFNE. The selection of $f_s = 20$ provides an optimal trade-off, because it reduces the number of elements by a factor of $f_s^3$, enabling rapid iterative rank searching while maintaining the macro-scale spatial correlation of the error distribution as guaranteed by the continuity of the TGDOP model. Furthermore, additional experimental results reveal that across various down-sampling factors $f_s$ from 1 to 20, the estimated multilinear rank for each mode consistently remained below 10% of that mode's original dimension, underscoring the inherent low-rank nature of the data.
Figure 5. Multilinear rank estimation via the L-curve method at $f_s = 20$. Iterate through multilinear ranks and select the first point in the Pareto front set with an error below the predefined threshold as the L-curve corner. (a) The L-curve for scenario $S_{\mathrm{TDOA}}$. (b) The L-curve for scenario $S_{\mathrm{TDOA}}^{N}$.

6.2. Performance Under Sparse Measurements

This subsection evaluates the algorithm's reconstruction performance when the emitters are sparsely and randomly distributed within the target area. We define the missing emitter ratio ($\mathrm{MER}$) as the proportion of grid points in the target area lacking emitter data [45]. We conducted experiments with $\mathrm{MER}$ varying from 80% to 99% (corresponding to an observation ratio of 20% to 1%) to assess performance under increasing sparsity, where the observation ratio is defined as $100\% - \mathrm{MER}$. For these experiments, the down-sampling factor $f_s$ is set to 10, the SNR is maintained at 22 dB, and the number of Monte Carlo trials ($M$) is 50.
Figure 6 compares the reconstruction performance of the proposed algorithm against several baseline methods across different MER levels. The baseline algorithms included in the comparison encompass both classic reconstruction algorithms and deep learning (DL)-based generative algorithms. It is worth noting that the scenario addressed in this paper is a “single-shot reconstruction” problem, characterized by extremely sparse valid observations and the absence of a historical database. Traditional DL methods typically fail in such zero-shot and extremely sparse conditions due to the inability to acquire sufficient training data. To enable the neural networks to function in this context, we generated 50 sets of historical data to serve as prior information for each training session. Although this constitutes an unfair comparison (as it provides an advantage to the DL baselines), we include these results to verify the superiority of the proposed algorithm from another perspective.
Figure 6. Comparison of reconstruction performance between the proposed algorithm and baseline methods ($f_s = 10$, $M = 50$). (a,b) Reconstruction MFNE versus MER at $\mathrm{SNR} = 22\,\mathrm{dB}$ for $S_{\mathrm{TDOA}}$ and $S_{\mathrm{TDOA}}^{N}$. (c,d) Boxplots of reconstruction MFNE versus elevation with $\mathrm{MER} = 99\%$ (observation ratio = 1%) and $\mathrm{SNR} = 22\,\mathrm{dB}$ for $S_{\mathrm{TDOA}}$ and $S_{\mathrm{TDOA}}^{N}$. (e,f) Reconstruction MFNE versus SNR with $\mathrm{MER} = 80\%$ (observation ratio = 20%) for $S_{\mathrm{TDOA}}$ and $S_{\mathrm{TDOA}}^{N}$.
Figure 6a,b demonstrate that the proposed algorithm achieves the lowest MFNE, indicating superior reconstruction performance. Specifically, at a high sparsity level of MER = 99% (only a 1% observation ratio), the proposed algorithm exhibits a performance improvement ranging from 43.58% to 97.23% over the baseline methods, as detailed in Table 1. Here, the uplift is defined as $\text{Uplift}(\%) = \frac{\text{MFNE}_{\text{baseline}} - \text{MFNE}_{\text{proposed}}}{\text{MFNE}_{\text{baseline}}} \times 100\%$. Note that the GeneralBTD method represents the standard BTD without the proposed physics-based polynomial constraints, so the significant performance gap between the proposed algorithm and GeneralBTD quantifies the contribution of the physical constraints. Furthermore, comparative analysis indicates that the proposed algorithm attains comparable reconstruction accuracy while requiring significantly fewer emitters, a reduction that can translate into considerable operational cost savings.
Table 1. Reconstruction performance comparison ($f_s = 10$, SNR = 22 dB, MER = 99%, observation ratio = 1%, $M = 50$).
Figure 6c,d present boxplots illustrating the distribution of MFNE at different elevations for MER = 99%. As shown in Figure 6d, the introduction of NLOS error at altitudes between 0.5 km and 1.4 km leads to significant reconstruction errors for the WGDOP algorithm in this range; consequently, its MFNE in Figure 6b is much higher than that of the other algorithms. The boxplots summarize key statistics such as medians and quartiles: a smaller interquartile range and a more compact box indicate more consistent performance (i.e., higher robustness) across elevations. By this measure, the proposed algorithm exhibits superior consistency compared to the baseline methods.

6.3. Performance Under Noise

In this subsection, we investigate the impact of observational noise on the algorithm's reconstruction performance. We evaluate the reconstruction performance of the proposed and baseline schemes for SNR values ranging from 2 dB to 22 dB. For these experiments, the down-sampling factor $f_s$ was fixed at 10, the MER at 80%, and the number of Monte Carlo trials ($M$) at 50.
Figure 6e,f illustrate the influence of varying SNR levels on the reconstruction performance. Across the tested noise levels, the proposed algorithm consistently demonstrates superior reconstruction performance and exhibits greater resilience to noise compared to the other methods.
To quantitatively assess this noise resilience, we define a metric termed “Slope” as the average absolute rate of change of MFNE with respect to SNR:
$$\text{Slope} = \frac{1}{N_{\text{snr}} - 1} \sum_{n=1}^{N_{\text{snr}} - 1} \left| \frac{\text{MFNE}_{n+1} - \text{MFNE}_n}{\text{SNR}_{n+1} - \text{SNR}_n} \right|,$$
where $\text{SNR}_n$ and $\text{MFNE}_n$ denote the $n$-th smallest SNR value and its corresponding reconstruction error, respectively, and $N_{\text{snr}}$ is the total number of distinct SNR levels evaluated. A smaller Slope value indicates better noise robustness. Table 2 presents the Slope values for all algorithms in both scenarios. The improvement in this metric for the proposed algorithm relative to the baselines ranges from 7.11% to 95.92%, indicating superior noise adaptability. In scenario $S_{\text{TDOA}}^{N}$, the WGDOP method's low Slope value is notable, primarily because NLOS errors, rather than observational noise, dictate its reconstruction accuracy: Figure 6d shows high reconstruction errors for this method at elevations from 0.5 km to 1.4 km due to multipath effects. Thus, despite good noise robustness, its overall reconstruction performance is substantially inferior to that of the proposed algorithm, corroborating the conclusions in Table 2.
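For reference, the Slope metric defined above reduces to a few lines of NumPy; the SNR grid below matches the experiment's 2–22 dB range, while the MFNE readings are hypothetical placeholders.

```python
import numpy as np

def noise_slope(snr_db, mfne):
    """Average absolute rate of change of MFNE with respect to SNR (the Slope metric)."""
    snr_db, mfne = np.asarray(snr_db, float), np.asarray(mfne, float)
    order = np.argsort(snr_db)                      # sort by ascending SNR
    snr_db, mfne = snr_db[order], mfne[order]
    return np.mean(np.abs(np.diff(mfne) / np.diff(snr_db)))

# Hypothetical MFNE readings at five SNR levels; smaller Slope = more robust
print(noise_slope([2, 7, 12, 17, 22], [0.30, 0.21, 0.15, 0.12, 0.11]))
```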
Table 2. Noise resistance performance ($f_s = 10$, MER = 80%, $M = 50$).

6.4. Performance with Available Directional Measurements

In this subsection, we evaluate the proposed algorithm's performance in a scenario where directional measurements of positioning errors are available. This validates the extended algorithm introduced in Section 5.2, assessing its reconstruction accuracy for both the scalar ($\underline{G}$) and vector ($\underline{G}_r$) representations of the spatial distribution of positioning error.
Table 3 presents performance metrics under varying degrees of MER. Here, $\text{MFNE}_1$ quantifies the reconstruction error for $\underline{G}$, and $\text{MFNE}_2$ is the mean of the individual MFNEs of the three directional component tensors: $\text{MFNE}_2 = \frac{1}{3}\sum_{r=1}^{3}\text{MFNE}(\underline{G}_r)$. Comparing the results in Table 3 with those for the scalar error model in Figure 6a shows that incorporating directional information enhances the proposed algorithm's performance. This improvement is observed in the reconstruction of both scalar and vector distributions of positioning errors across the tested MER conditions.
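As a quick sketch of how the two scores could be computed, the snippet below assumes MFNE is a relative Frobenius-norm error per trial (consistent with the paper's notation); the helper names are ours.

```python
import numpy as np

def mfne(true_t, est_t):
    """Relative Frobenius-norm error between a true and an estimated tensor."""
    return np.linalg.norm(est_t - true_t) / np.linalg.norm(true_t)

def directional_scores(g_true, g_est, g_r_true, g_r_est):
    """MFNE_1 for the scalar map; MFNE_2 averaged over the three directional maps."""
    mfne1 = mfne(g_true, g_est)
    mfne2 = np.mean([mfne(t, e) for t, e in zip(g_r_true, g_r_est)])
    return mfne1, mfne2
```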
Table 3. Performance with directional measurements for various MER ($f_s = 10$, SNR = 22 dB, $M = 50$).
To demonstrate the reconstruction performance for $\underline{G}_r$ more intuitively, Figure 7 depicts heatmaps of the true and reconstructed directional component tensors in $S_{\text{TDOA}}$ and $S_{\text{TDOA}}^{N}$. For enhanced visualization, particularly where positioning errors are small, the heatmap values are presented on a dB scale, i.e., $10\log_{10}(\underline{G}_r)$ for $r \in \{1, 2, 3\}$.
Figure 7. Heatmaps of true and reconstructed directional distributions of positioning error (MER = 99%, $f_s = 10$, SNR = 22 dB). Values are shown on a dB scale for visualization. (a) True heatmaps of the positioning error distribution in $S_{\text{TDOA}}$. (b) Reconstructed heatmaps in $S_{\text{TDOA}}$. (c) True heatmaps in $S_{\text{TDOA}}^{N}$. (d) Reconstructed heatmaps in $S_{\text{TDOA}}^{N}$.

6.5. Real-Data Experiment

In this subsection, we further validate the proposed algorithm using a real-world dataset collected from a TDOA positioning system. The dataset covers an area of 2500 × 4500 m² and includes measurements across 120 frequency bands, ranging from 2.397 GHz to 2.519 GHz. The experimental setup comprises four sensors and an unmanned aerial vehicle (UAV) acting as the emitter. The sensors are deployed in a Y-shaped configuration, with baseline distances of 1 km, 1.1 km, and 1.3 km from the peripheral sensors to the central reference sensor, respectively. Figure 8 illustrates the experimental site, including the sensor layout and four sets of UAV flight trajectories.
Figure 8. Schematic diagram of the experimental site, sensor station layout, and UAV flight trajectories. (a) Trajectory $S_1$: flight trajectory in large-scale open terrain. (b) Trajectory $S_2$: flight trajectory in small-scale open terrain. (c) Trajectory $S_3$: low-altitude flight trajectory over a lake. (d) Trajectory $S_4$: low-altitude flight trajectory in an urban area.
A key challenge presented by this dataset is its extreme sparsity. For example, trajectory $S_4$ is sampled at 386 distinct locations, with ground-truth positions provided by the onboard GPS. To analyze the spatial distribution of positioning errors, the target area was discretized into a fine-grained 200 × 200 × 10 grid. The 386 measurement points therefore cover only a tiny fraction of the 400,000 grid points, yielding an MER of approximately 99.9%. This high-sparsity scenario serves as a stringent test of the algorithm's tensor completion capabilities. To evaluate performance under these conditions, we adopt a cross-validation approach: for each Monte Carlo trial, a subset of the observed data is randomly selected for training, with the remainder reserved for testing. The proportion of data used for training is denoted by $\rho$, and $M = 50$ Monte Carlo trials are conducted for each value of $\rho$.
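The cross-validation protocol is straightforward to reproduce; the sketch below (with hypothetical function names) generates the per-trial train/test splits of the sparse observations.

```python
import numpy as np

def monte_carlo_splits(obs_idx, rho, n_trials=50, seed=0):
    """Yield (train, test) index splits of the observed samples for each trial."""
    rng = np.random.default_rng(seed)
    n_train = int(round(rho * len(obs_idx)))
    for _ in range(n_trials):
        perm = rng.permutation(obs_idx)
        yield perm[:n_train], perm[n_train:]

# Example: rho = 0.8 on the 386 measured grid points of trajectory S4, M = 50 trials
for train_idx, test_idx in monte_carlo_splits(np.arange(386), rho=0.8):
    pass  # fit the completion model on train_idx, score MFNE on test_idx
```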
Table 4 presents the percentage performance improvement of the proposed algorithm over the comparison methods on each trajectory. The final column indicates whether an algorithm was capable of reconstructing the complete tensor (“Y” for yes, “N” for no), as some methods could only reconstruct partial slices or failed under extreme sparsity. The proposed algorithm performs comparably to the classical GDOP method under simple channel conditions (e.g., $S_2$), while showing clear advantages in complex channel environments (e.g., $S_4$). Table 4 also shows that under relatively ideal environmental conditions ($S_2$), the proposed algorithm yields only marginal improvements over simpler baselines (e.g., WGDOP), despite its significantly higher computational complexity. This raises the question of when its application is appropriate. In practice, however, error map reconstruction is typically performed offline to evaluate the station deployment of positioning systems, so time complexity is not the primary concern; in such cases, we recommend employing the proposed algorithm across all scenarios, trading higher computational complexity for a performance gain of approximately 30%. Conversely, for time-sensitive scenarios where computational efficiency is critical, we recommend restricting its use to complex terrains, such as urban areas or canyons.
Table 4. Performance improvement (%) of the Proposed Algorithm compared with the Comparison Algorithms in four trajectories.
We then take trajectory $S_4$ as an example for further analysis. Table 5 presents the average MFNE of the reconstructed positioning error tensor on the test sets for trajectory $S_4$, along with computation times.
Table 5. Performance on trajectory $S_4$, where MFNE is reported for the test set.
When comparing the proposed tensor-based modeling approach with WGDOP, we observe that our method further reduces the MFNE by over 27.96%, highlighting its ability to capture complex error distributions. In contrast, traditional data-driven interpolation methods such as RBF and Kriging struggle under such highly sparse conditions, particularly when entire rows or columns (or, more generally, large contiguous regions) of the tensor lack measurements, which causes them to fail to reconstruct the complete tensor. The NNM-T algorithm was also tested but is omitted from the table because it failed to converge reliably at this level of data scarcity. Table 5 also shows that the proposed algorithm incurs a longer computation time than some baselines; however, real-time inference is rarely required, since the positioning error map is typically analyzed offline as a post-processing step.
Figure 9 presents heatmaps of the positioning error map reconstructed from the real-world data using the proposed algorithm. The results suggest that the proposed method effectively models the underlying error characteristics and can provide valuable guidance for system analysis. For instance, examining the magnitudes of the reconstructed directional error components in Figure 9b–d, we observe that the error magnitudes typically follow the relationship $G_x \geq G_y \geq G_z$. This implies that the positioning reliability is highest along the z-axis and lowest along the x-axis for this specific setup, an observation consistent with statistical analysis of the raw positioning data.
Figure 9. Reconstructed positioning error map for trajectory $S_4$ ($\rho = 80\%$, $M = 50$). Values are presented on a dB scale for visualization. (a) Reconstructed heatmap of the spatial distribution of positioning error. (b–d) Reconstructed heatmaps of the spatial distribution of positioning error in the x, y, and z directions, respectively.

7. Conclusions

This paper tackles the critical challenge of high-fidelity reconstruction of spatial localization error maps from limited measurements. We propose a novel tensor-based framework that overcomes the limitations of conventional model-driven approaches, which are constrained by idealized assumptions, and purely data-driven methods that are dependent on dense observational data. Central to our approach is the TGDOP model, a novel representation that captures the complexity and anisotropic nature of localization errors. To address the practical issue of data scarcity, we developed a physics-informed tensor completion algorithm. By incorporating prior knowledge derived from the analytical properties of the error covariance matrix directly into the factorization process, our algorithm achieves robust reconstruction even from severely incomplete data. Comprehensive experiments on both simulated and real-world TDOA datasets demonstrate the superiority of the proposed method. Specifically, under extremely sparse conditions, our method improves reconstruction accuracy by at least 27.96% compared to state-of-the-art baselines. Future work will explore more efficient optimization strategies for large-scale scenarios and investigate advanced constraints on factor matrices to capture richer physical information.

Author Contributions

Conceptualization, Z.Z. and Z.H.; methodology, Z.Z. and Z.H.; software, Z.Z.; validation, Z.Z., C.W. and Q.J.; formal analysis, Z.Z. and Z.H.; investigation, Z.Z.; resources, Z.Z. and Z.H.; data curation, Z.Z. and Z.H.; writing—original draft preparation, Z.Z.; writing—review and editing, Z.Z., Z.H., C.W. and Q.J.; visualization, Z.Z., C.W. and Q.J.; supervision, Z.Z., C.W. and Q.J.; project administration, Z.Z., C.W. and Q.J.; funding acquisition, Z.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research is funded by the project “Electromagnetic Detection Payloads and Big Data Processing”, grant number 20232002290.

Data Availability Statement

The codes and datasets in this paper are available at https://github.com/gitZHZhang/Code_and_Data_for_TGDOP.git (accessed on 7 January 2026).

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Proof of Theorem 1

Consider a 3-D scenario with a single emitter at a true location $\mathbf{u} = [\mu_x, \mu_y, \mu_z]^T$. We have $N$ positioning results for this emitter, denoted as $\mathbf{u}_i = [x_i, y_i, z_i]^T$ for $i = 1, \ldots, N$. We assume the positioning errors are drawn from a distribution with mean $\mathbf{0}$ and covariance matrix $\mathbf{P}$, where the $(j,k)$-th element of $\mathbf{P}$ is estimated as $P(j,k) = \frac{1}{N}\sum_{i=1}^{N}\left(u_i(j) - u(j)\right)\left(u_i(k) - u(k)\right)$. The positioning results $\mathbf{u}_i$ are assumed to follow a multivariate normal distribution, and their spatial spread often forms an ellipsoid. The probability density function (PDF) of $\mathbf{u}_i$ is given by
$$f(\mathbf{u}_i) = \frac{1}{(2\pi)^{3/2}|\mathbf{P}|^{1/2}} \exp\left[-\frac{1}{2}(\mathbf{u}_i - \mathbf{u})^T \mathbf{P}^{-1}(\mathbf{u}_i - \mathbf{u})\right].$$
According to the properties of the multivariate normal distribution, a 3-D random variable that follows a joint Gaussian distribution has Gaussian marginal and conditional distributions. Thus, $x_i \sim \mathcal{N}(\mu_x, P_{11})$, $y_i \sim \mathcal{N}(\mu_y, P_{22})$, and $z_i \sim \mathcal{N}(\mu_z, P_{33})$. To simplify the analysis and intuitively assess the error magnitudes along each coordinate axis, this paper uses the diagonal elements of the covariance matrix to approximately characterize the extent of the positioning error, so $x_i$, $y_i$, and $z_i$ are treated as independent in the following analysis.
Applying this to the positioning results $\mathbf{u}_i$ relative to the true emitter location $\mathbf{u}$, each component of the $i$-th positioning result follows $u_i(r) \sim \mathcal{N}(u(r), P(r,r))$ for $r \in \{1,2,3\}$ and $i = 1, \ldots, N$. Suppose that $\hat{\underline{G}}$ and $\hat{\underline{G}}_r$ are calculated from the $N$ positioning results $\mathbf{u}_i$ relative to the true location $\mathbf{u}$. Since $\frac{u_i(r) - u(r)}{\sqrt{P(r,r)}} \sim \mathcal{N}(0,1)$, it follows that $\left(\frac{u_i(r) - u(r)}{\sqrt{P(r,r)}}\right)^2 \sim \chi^2(1)$ (a Chi-squared distribution with 1 degree of freedom). Thus, $\hat{\underline{G}}_r$ can be written as
$$\hat{\underline{G}}_r = \frac{P(r,r)}{N}\sum_{i=1}^{N}\left(\frac{u_i(r) - u(r)}{\sqrt{P(r,r)}}\right)^2 \sim \frac{P(r,r)}{N}\,\chi^2(N) \sim \frac{P(r,r)}{N}\,\text{Gamma}\!\left(\frac{N}{2}, \frac{1}{2}\right),$$
where $\text{Gamma}(\alpha, \beta)$ denotes a Gamma distribution with shape $\alpha$ and rate $\beta$ (equivalently, scale $1/\beta$); here, $\alpha = N/2$ and $\beta = 1/2$.
The total tensor is decomposed into the sum of three terms, that is, $\hat{\underline{G}} = \sum_r \hat{\underline{G}}_r$. Since $\hat{\underline{G}}_1$, $\hat{\underline{G}}_2$, and $\hat{\underline{G}}_3$ are independent sums of squared normals, their sum $\hat{\underline{G}}$ is a sum of scaled independent $\chi^2(N)$ variables, which generally has a complex form (a generalized Chi-squared distribution).
However, by the Central Limit Theorem (CLT), for large $N$ each $\hat{\underline{G}}_r$ can be approximated by a Gaussian distribution with $\mathbb{E}[\hat{\underline{G}}_r] = P(r,r)$ and $\text{Var}(\hat{\underline{G}}_r) = \frac{2P(r,r)^2}{N}$. Thus, for large $N$,
$$\hat{\underline{G}}_r \sim \mathcal{N}\!\left(P(r,r),\ \frac{2P(r,r)^2}{N}\right), \quad \text{for } r \in \{1,2,3\}.$$
For the sum $\hat{\underline{G}} = \sum_r \hat{\underline{G}}_r$, we have $\mathbb{E}[\hat{\underline{G}}] = \sum_r \mathbb{E}[\hat{\underline{G}}_r] = \sum_r P(r,r) = \text{tr}(\mathbf{P})$ and $\text{Var}(\hat{\underline{G}}) = \sum_r \text{Var}(\hat{\underline{G}}_r) = \sum_r \frac{2P(r,r)^2}{N}$. Then, for large $N$,
$$\hat{\underline{G}} \sim \mathcal{N}\!\left(\text{tr}(\mathbf{P}),\ \sum_{r=1}^{3}\frac{2P(r,r)^2}{N}\right).$$
Note that $N$, the number of samples, represents repeated localizations of an emitter at a specific position. Consequently, for a stationary emitter, acquiring a substantial sample size $N$ is typically feasible in practical applications.
Correspondingly, the estimation errors $\underline{N}_r = \hat{\underline{G}}_r - P(r,r)$ and $\underline{N} = \hat{\underline{G}} - \text{tr}(\mathbf{P})$ follow
$$\underline{N}_r \sim \mathcal{N}\!\left(0,\ \frac{2P(r,r)^2}{N}\right) \ \text{for } r \in \{1,2,3\}, \qquad \underline{N} \sim \mathcal{N}\!\left(0,\ \sum_{r\in\{1,2,3\}}\frac{2P(r,r)^2}{N}\right).$$
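The large-N Gaussian approximation derived above is easy to verify empirically. The following sketch, with an assumed diagonal covariance for illustration, compares the sample mean and variance of $\hat{\underline{G}}_r$ against the predicted $P(r,r)$ and $2P(r,r)^2/N$.

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.diag([4.0, 2.0, 1.0])        # assumed diagonal error covariance (m^2)
N, trials = 500, 20000

# Each trial: N positioning errors, then G_hat_r = mean of squared components
errs = rng.multivariate_normal(np.zeros(3), P, size=(trials, N))
g_hat_r = np.mean(errs**2, axis=1)  # shape (trials, 3)

for r in range(3):
    mean_pred, var_pred = P[r, r], 2 * P[r, r] ** 2 / N
    print(f"r={r}: mean {g_hat_r[:, r].mean():.3f} vs {mean_pred:.3f}, "
          f"var {g_hat_r[:, r].var():.5f} vs {var_pred:.5f}")
```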

Appendix B. Proof of Theorem 2

Considering the emitter's position $\mathbf{u} = [x, y, z]^T$ as the independent variable in a 3-D scenario, differentiating both sides of (15) with respect to $\mathbf{u}$ yields
$$\nabla_{\mathbf{u}}\mathbf{P} = \left[\partial_x \mathbf{P},\ \partial_y \mathbf{P},\ \partial_z \mathbf{P}\right], \quad \text{where } \partial_p \mathbf{P} = \partial_p \mathbf{C}\,\tilde{\mathbf{P}}\,\mathbf{C}^T + \mathbf{C}\,\tilde{\mathbf{P}}\,\partial_p \mathbf{C}^T, \quad p \in \{x, y, z\}.$$
To calculate $\partial_p \mathbf{C}$, we obtain
$$\partial_p \mathbf{C} = \underbrace{-(\mathbf{F}^T\mathbf{F})^{-1}\left(\mathbf{F}^T\partial_p\mathbf{F} + \partial_p\mathbf{F}^T\,\mathbf{F}\right)(\mathbf{F}^T\mathbf{F})^{-1}\mathbf{F}^T}_{\text{Term 1}} + \underbrace{(\mathbf{F}^T\mathbf{F})^{-1}\partial_p\mathbf{F}^T}_{\text{Term 2}}.$$
Then, we analyze the continuity of the two terms in $\partial_p \mathbf{C}$.
The continuity of Term 1 is contingent upon the following:
  • Continuity of $(\mathbf{F}^T\mathbf{F})^{-1}$: If $\mathbf{F}(p) \in C^1(I)$ (i.e., $\mathbf{F}(p)$ is continuously differentiable on the interval $I$), then its transpose $\mathbf{F}^T(p)$ is also $C^1(I)$, and the product $\mathbf{F}^T(p)\mathbf{F}(p)$ is consequently $C^1(I)$. The matrix inversion operation $\mathbf{A} \mapsto \mathbf{A}^{-1}$ is a smooth mapping on the set of invertible matrices. Therefore, provided $\mathbf{F}^T(p)\mathbf{F}(p)$ is invertible for all $p \in I$, $(\mathbf{F}^T(p)\mathbf{F}(p))^{-1}$ is also $C^1(I)$.
  • Continuity of the product terms $\mathbf{F}^T\partial_p\mathbf{F}$ and $\partial_p\mathbf{F}^T\,\mathbf{F}$: Since $\mathbf{F}(p) \in C^1(I)$, both $\mathbf{F}(p)$ and its derivative $\partial_p\mathbf{F}$ (and similarly $\partial_p\mathbf{F}^T$) are continuous on $I$ (i.e., $C^0(I)$). Consequently, their products $\mathbf{F}^T(p)\,\partial_p\mathbf{F}$ and $\partial_p\mathbf{F}^T\,\mathbf{F}(p)$ are also continuous on $I$.
Given these points, Term 1, being composed of sums and products of continuous matrix functions (specifically, $(\mathbf{F}^T\mathbf{F})^{-1} \in C^1(I)$, $\mathbf{F}^T \in C^1(I)$, $\partial_p\mathbf{F} \in C^0(I)$, and $\partial_p\mathbf{F}^T \in C^0(I)$), is itself continuous on $I$. We then turn to the continuity of Term 2. As established, $(\mathbf{F}^T(p)\mathbf{F}(p))^{-1} \in C^1(I)$ (and thus also $C^0(I)$). Since $\mathbf{F}(p) \in C^1(I)$, the derivative of its transpose, $\partial_p\mathbf{F}^T$, is continuous on $I$ (i.e., $C^0(I)$). The product of these two continuous matrix functions, $(\mathbf{F}^T(p)\mathbf{F}(p))^{-1}\partial_p\mathbf{F}^T$, is therefore continuous on $I$.
To sum up, we conclude that $\partial_p\mathbf{P} \in C^0(I)$ (that is, $\mathbf{P} \in C^1(I)$) under the following conditions:
  • $\mathbf{F}(p) \in C^1(I)$;
  • $\forall p \in I$, $\text{rank}(\mathbf{F}(p)) = n$ (where $n$ is the number of columns of $\mathbf{F}$, ensuring $\mathbf{F}^T\mathbf{F}$ is invertible).
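The derivative expression for $\partial_p \mathbf{C}$ used in this proof can be sanity-checked numerically. The snippet below is an illustrative finite-difference verification under a randomly drawn full-column-rank $\mathbf{F}$, not part of the algorithm itself; the minus sign in Term 1 follows the standard matrix-calculus identity for the derivative of an inverse.

```python
import numpy as np

def C(F):
    """The mapping C(F) = (F^T F)^{-1} F^T appearing in the proof."""
    return np.linalg.solve(F.T @ F, F.T)

def dC(F, dF):
    """Analytic directional derivative of C along dF (Term 1 + Term 2)."""
    FtF_inv = np.linalg.inv(F.T @ F)
    term1 = -FtF_inv @ (F.T @ dF + dF.T @ F) @ FtF_inv @ F.T
    term2 = FtF_inv @ dF.T
    return term1 + term2

rng = np.random.default_rng(0)
F, dF, eps = rng.normal(size=(6, 3)), rng.normal(size=(6, 3)), 1e-6
numeric = (C(F + eps * dF) - C(F - eps * dF)) / (2 * eps)  # central difference
print(np.max(np.abs(numeric - dC(F, dF))))                 # tiny: formulas agree
```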

Appendix C. Definition and Properties of a Tensor Operation

Analogous to how multiple systems of equations can be represented using matrix multiplication, we introduce a tensor operation, the mode-n transpose product, denoted as “$\times_n^T$”. This operation is designed to represent linear relationships between higher-order and lower-order tensors. A formal definition is provided below.
Definition 2
(Mode-n Transpose Product). Let $\underline{X} \in \mathbb{R}^{I_1 \times \cdots \times I_K}$ be a $K$th-order tensor and let $U = \{\mathbf{U}_1, \mathbf{U}_2, \ldots, \mathbf{U}_K\}$ be a cell array. Here, $\mathbf{U}_n$ is the weight matrix and the others are the mask matrices. The dimensions of these matrices are given by
$$\mathbf{U}_i \in \begin{cases} \mathbb{R}^{N_i \times N_i}, & i \neq n, \\ \mathbb{R}^{I_n \times 1}, & i = n, \end{cases}$$
where $N_i = \prod_{j=1,\, j \neq i}^{K-1} I_{v_j}$ and $v_i = \begin{cases} i, & i \neq n, \\ K, & i = n. \end{cases}$ Then, the mode-n transpose product of $\underline{X}$ and $U$ is defined as
$$\underline{Y} = \underline{X} \times_n^T U, \qquad \underline{Y}_{(i)} = \underline{X}_{(v_i)} \cdot (\mathbf{U}_n \otimes \mathbf{U}_i),$$
where $\otimes$ denotes the Kronecker product,
$$\mathbf{U}_n \otimes \mathbf{U}_i \in \mathbb{R}^{(I_n N_i) \times N_i} \ (i \neq n), \qquad \underline{Y} \in \mathbb{R}^{I_{v_1} \times I_{v_2} \times \cdots \times I_{v_{K-1}}}.$$
Compared with the mode-n product defined in (8), the mode-n product applies a linear transformation to the mode-n fibers without altering their fundamental vector structure. In contrast, the mode-n transpose product effectively compresses the tensor along its $n$-th dimension: each mode-n fiber, $\mathbf{f}_n = \underline{X}(i_1, \ldots, i_{n-1}, :, i_{n+1}, \ldots, i_K)$, is mapped to a scalar value in the resulting tensor $\underline{Y}$.
Using the operator “$\times_n^T$”, (12) can be expressed as
$$\underline{G} = \underline{X} \times_4^T \{\mathbf{I}_{N_2N_3}, \mathbf{I}_{N_1N_3}, \mathbf{I}_{N_1N_2}, \mathbf{1}_3\}.$$
Let us elaborate on this example to illustrate the fiber reorganization. We begin by noting the dimensions: $\underline{X} \in \mathbb{R}^{N_1 \times N_2 \times N_3 \times 3}$, $\mathbf{U}_4 \in \mathbb{R}^{3 \times 1}$, $\mathbf{U}_1 \in \mathbb{R}^{N_2N_3 \times N_2N_3}$, $\mathbf{U}_2 \in \mathbb{R}^{N_1N_3 \times N_1N_3}$, and $\mathbf{U}_3 \in \mathbb{R}^{N_1N_2 \times N_1N_2}$. According to (A8), we obtain
$$\underline{G}_{(1)} = \underline{X}_{(1)} \cdot (\mathbf{1}_3 \otimes \mathbf{I}_{N_2N_3}),$$
$$\underline{G}_{(2)} = \underline{X}_{(2)} \cdot (\mathbf{1}_3 \otimes \mathbf{I}_{N_1N_3}),$$
$$\underline{G}_{(3)} = \underline{X}_{(3)} \cdot (\mathbf{1}_3 \otimes \mathbf{I}_{N_1N_2}).$$
Suppose the fiber $\underline{X}(:, j, k, l)$ corresponds to the $m$-th column of $\underline{X}_{(1)}$, and $\underline{X}(:, j+\Delta j, k+\Delta k, l+\Delta l)$ corresponds to the $n$-th column. Based on (6), we have
$$n = m + \Delta j + \Delta k \cdot N_2 + \Delta l \cdot N_2 N_3.$$
In the matrix $\mathbf{1}_3 \otimes \mathbf{I}_{N_2N_3}$, each pair of non-zero elements in a column vector is separated by $N_2N_3$ zeros. Correspondingly, when $\Delta j = 0$, $\Delta k = 0$, and $\Delta l = 1$ in (A13), the difference between the fiber column indices in $\underline{X}_{(1)}$ is $N_2N_3$. According to the rules of matrix operations, (A10) is therefore equivalent to
$$\underline{G}(:, j, k) = \sum_{l=1}^{3} \underline{X}(:, j, k, l).$$
Similarly, analogous results can be derived for (A11) and (A12), that is,
$$\underline{G}(i, :, k) = \sum_{l=1}^{3} \underline{X}(i, :, k, l), \qquad \underline{G}(i, j, :) = \sum_{l=1}^{3} \underline{X}(i, j, :, l).$$
Next, we introduce the general physical meaning of the mode-n transpose product. Substituting $\mathbf{U}_4 = [\lambda_1, \lambda_2, \lambda_3]^T$ into (A8) and reorganizing $\underline{X}_{(1)}$ into block-matrix form as $\underline{X}_{(1)} = \left[\mathbf{S}^{(1,1)}, \mathbf{S}^{(2,1)}, \mathbf{S}^{(3,1)}\right]$, where $\mathbf{S}^{(i,j)} = \underline{X}(:,:,:,i)_{(j)}$, we have
$$\underline{G}_{(1)} = \underline{X}_{(1)} \cdot (\mathbf{U}_4 \otimes \mathbf{U}_1) = \sum_{i=1}^{3} \lambda_i \cdot \mathbf{S}^{(i,1)} \cdot \mathbf{U}_1.$$
From (A16), we conclude that the elements of $\mathbf{U}_4$ are the weights of the $\mathbf{S}^{(i,1)}$ (i.e., the mode-1 unfolding matrices of the sub-tensors), and $\mathbf{U}_1$ is the linear transformation matrix applied to them. The mode-n transpose product is therefore essentially a vectorized representation of linear operations on the sub-tensor slices of the tensor along the $n$-th dimension. Since the weight matrix and the mask matrices play the same role in the unfoldings $\underline{X}_{(2)}$ and $\underline{X}_{(3)}$, we can generalize (A16) as
$$\underline{G}_{(j)} = \sum_{i=1}^{3} \lambda_i \cdot \mathbf{S}^{(i,j)} \cdot \mathbf{U}_j.$$
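For concreteness, the mode-4 transpose product in this example can be realized as an ordinary matrix product with a Kronecker-structured mask. The sketch below adopts one particular unfolding convention (an assumption on our part; the paper's convention in (6) may order fibers differently) and confirms the slice-summation identity (A14) when $\mathbf{U}_4 = \mathbf{1}_3$.

```python
import numpy as np

def mode4_transpose_product(X, lam):
    """Mode-4 transpose product G_(1) = X_(1) (U4 kron U1) with U1 = I."""
    n1, n2, n3, L = X.shape
    X1 = X.transpose(0, 3, 2, 1).reshape(n1, L * n3 * n2)   # a mode-1 unfolding
    W = np.kron(lam, np.eye(n3 * n2))                       # U4 kron I
    return (X1 @ W).reshape(n1, n3, n2).transpose(0, 2, 1)  # fold back

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 5, 6, 3))
lam = np.ones((3, 1))                          # U4 = 1_3 recovers Eq. (A14)
G = mode4_transpose_product(X, lam)
print(np.allclose(G, X.sum(axis=3)))           # True: G(:,j,k) = sum_l X(:,j,k,l)
```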

Appendix D. Computational Complexity Analysis

We analyze the computational complexity of the proposed algorithm per iteration. Let $|\Omega|$ denote the number of sparse observation samples, $R$ the number of block terms, and $(M_1, M_2, M_3)$ the multilinear ranks of the three modes; let $m$ denote the degree of the polynomial basis functions used in the physics-informed constraints. The algorithm employs a BCD strategy, which iteratively updates the core tensors and the factor matrices, and the complexity is determined by the least squares (LS) solvers applied in each step. In the update of the core tensors $\{\underline{S}_r\}_{r=1}^{R}$, the algorithm solves for all elements within the core tensors simultaneously. The total number of optimization variables, $P_{\text{core}}$, is the sum of the elements in $R$ tensors of size $M_1 \times M_2 \times M_3$:
$$P_{\text{core}} = R \cdot (M_1 M_2 M_3).$$
The update involves solving a linear least squares problem with $|\Omega|$ equations. The computational cost is dominated by the LS solver, which is generally quadratic in the number of variables for a fixed number of observations:
$$C_{\text{core}} \approx \mathcal{O}\!\left(|\Omega| \cdot P_{\text{core}}^2\right) = \mathcal{O}\!\left(|\Omega| \cdot R^2 (M_1 M_2 M_3)^2\right).$$
When updating a factor matrix such as $\mathbf{V}_1$ for mode-1 under the polynomial constraints, the optimization targets the polynomial coefficient matrix $\mathbf{\Phi}_1 \in \mathbb{R}^{m \times M_1}$ rather than the full factor matrix. The number of variables for mode-1 across $R$ components is
$$P_{\text{factor}}^{(1)} = R \cdot M_1 \cdot m.$$
Consequently, the complexity of updating the factor matrices along all three modes is
$$C_{\text{factor}} \approx \sum_{k=1}^{3} \mathcal{O}\!\left(|\Omega| \cdot (R M_k m)^2\right) = \mathcal{O}\!\left(|\Omega|\, R^2 m^2 (M_1^2 + M_2^2 + M_3^2)\right).$$
Combining the costs of the core tensor and factor matrix updates, the total computational complexity per iteration is
$$C_{\text{total}} = \mathcal{O}\!\left(|\Omega|\, R^2 \left[(M_1 M_2 M_3)^2 + m^2 (M_1^2 + M_2^2 + M_3^2)\right]\right).$$
The complexity term associated with the core tensors, $(M_1 M_2 M_3)^2$, grows with the squared product of the ranks across all modes, whereas the term associated with the factor matrices, $m^2 M_k^2$, grows only with the squares of the individual mode ranks. In practical settings, the polynomial degree $m$ is a small constant (typically $m \ll M_1 M_2 M_3$), so the core tensor update dominates the computational load:
$$(M_1 M_2 M_3)^2 \gg m^2 (M_1^2 + M_2^2 + M_3^2).$$
As a result, the term involving $m$ is negligible in the asymptotic analysis, and the overall complexity simplifies to
$$C_{\text{total}} \approx \mathcal{O}\!\left(|\Omega| \cdot R^2 (M_1 M_2 M_3)^2\right).$$
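A back-of-the-envelope evaluation of (A22) illustrates this dominance; the observation count, block number, ranks, and polynomial degree below are hypothetical.

```python
def per_iteration_cost(n_obs, R, M, m):
    """Flop-count proxies for the core and factor updates from Eq. (A22)."""
    M1, M2, M3 = M
    core = n_obs * (R * M1 * M2 * M3) ** 2
    factor = n_obs * R**2 * m**2 * (M1**2 + M2**2 + M3**2)
    return core, factor

# Example: ~1% observations of a 200 x 200 x 10 grid, R = 3, ranks (8, 8, 4), m = 3
core, factor = per_iteration_cost(n_obs=4000, R=3, M=(8, 8, 4), m=3)
print(f"core/factor cost ratio: {core / factor:.0f}")  # the core term dominates
```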

Appendix E. Proof of Theorem 4

Let the positioning error distribution in 3-D space be described by a function $f: \Omega \subset \mathbb{R}^3 \to \mathbb{R}$. We construct the error tensor $\underline{G} \in \mathbb{R}^{I \times J \times K}$, whose elements are obtained by sampling on a spatial grid $(x_i, y_j, z_k)$:
$$G_{ijk} = f(x_i, y_j, z_k).$$
According to the high-dimensional generalization of the Schmidt decomposition, the low-rank property implies strong separability. We can approximate $f$ as a sum of products of separable univariate functions:
$$f(x, y, z) \approx \sum_{p=1}^{P}\sum_{q=1}^{Q}\sum_{r=1}^{R} g_{pqr}\,\phi_p(x)\,\psi_q(y)\,\zeta_r(z),$$
where $g_{pqr}$ are the weighting coefficients, and $\{\phi_p(x)\}$, $\{\psi_q(y)\}$, $\{\zeta_r(z)\}$ are the principal component basis functions along the $x$, $y$, and $z$ axes, respectively. Since the original function $f$ is $C^1$ continuous with respect to position, and the error field is typically governed by physical laws, integral operator theory dictates that the decomposed basis functions (e.g., $\phi_p(x)$) inherit the smoothness of $f$; thus, $\phi_p(x) \in C^1$. Consider an arbitrary basis function in the $x$-direction, $\phi_p(x)$. Since $\phi_p(x)$ is a $C^1$ continuous function defined on a closed interval, the Weierstrass Approximation Theorem applies: a smooth function determined by the low-rank principal components can be approximated to high precision by a low-order polynomial $P_p(x)$ of degree $d$:
$$\phi_p(x) \approx \sum_{n=0}^{d} w_{p,n}\,x^n = w_{p,0} + w_{p,1}x + \cdots + w_{p,d}x^d.$$
This implies that $\phi_p(x)$ approximately belongs to the polynomial function space spanned by $\{1, x, x^2, \ldots, x^d\}$. Returning to the discrete decomposition of the tensor $\underline{G}$:
$$\underline{G} \approx \underline{S} \times_1 \mathbf{U} \times_2 \mathbf{V} \times_3 \mathbf{W}, \qquad G_{xyz} \approx \sum_{p=1}^{P}\sum_{q=1}^{Q}\sum_{r=1}^{R} S_{pqr}\,u_{xp}\,v_{yq}\,w_{zr}.$$
Physically, each column $\mathbf{u}_p$ of the mode-1 factor matrix $\mathbf{U} \in \mathbb{R}^{I \times P}$ represents the discretized vector of the basis function $\phi_p(x)$ evaluated at the sampling points $x_1, \ldots, x_I$:
$$\mathbf{u}_p = \left[\phi_p(x_1),\ \phi_p(x_2),\ \ldots,\ \phi_p(x_I)\right]^T.$$
Combining this with the polynomial form in (A27), we expand $\mathbf{u}_p$ as
$$\mathbf{u}_p \approx \sum_{n=0}^{d} w_{p,n}\left[x_1^n,\ x_2^n,\ \ldots,\ x_I^n\right]^T.$$
Define the Vandermonde matrix $\mathbf{D}_x = [\mathbf{1}, \mathbf{x}, \mathbf{x}^2, \ldots, \mathbf{x}^d] \in \mathbb{R}^{I \times (d+1)}$; then the column vector $\mathbf{u}_p$ of the factor matrix can be expressed as $\mathbf{u}_p \approx \mathbf{D}_x \cdot \mathbf{w}_p$. Through the derivation above, we have shown that the column vectors of the factor matrix $\mathbf{U}$ (and similarly $\mathbf{V}$, $\mathbf{W}$) can be approximated as linear combinations of the columns of a Vandermonde matrix. Therefore, the column vectors of the factor matrices lie within the linear subspace generated by the low-order polynomial basis $\{1, x, \ldots, x^d\}$, that is,
$$\text{span}(\mathbf{U}) \subseteq \text{span}\left(\{\mathbf{1}, \mathbf{x}, \ldots, \mathbf{x}^d\}\right).$$
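This subspace relation is precisely what the physics-informed constraint exploits: each factor column is parameterized as $\mathbf{D}_x \mathbf{w}_p$ and only the low-dimensional coefficients $\mathbf{w}_p$ are optimized. The snippet below fits a smooth, hypothetical column with a low-order Vandermonde basis to illustrate the quality of the approximation.

```python
import numpy as np

I, d = 200, 5
x = np.linspace(-1.0, 1.0, I)
D_x = np.vander(x, d + 1, increasing=True)        # columns {1, x, ..., x^d}

u_p = np.exp(-x**2) * (1.0 + 0.5 * x)             # a smooth, hypothetical column
w_p, *_ = np.linalg.lstsq(D_x, u_p, rcond=None)   # least squares: u_p ~ D_x w_p
rel_err = np.linalg.norm(D_x @ w_p - u_p) / np.linalg.norm(u_p)
print(f"relative approximation error: {rel_err:.2e}")  # small for a smooth column
```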

References

1. Lyu, J.; Song, T.; He, S. Range-Only UWB 6-D Pose Estimator for Micro UAVs: Performance Analysis. IEEE Trans. Aerosp. Electron. Syst. 2025, 61, 5284–5301.
2. Jiang, S.; Zhao, C.; Zhu, Y.; Wang, C.; Du, Y. A Practical and Economical Ultra-wideband Base Station Placement Approach for Indoor Autonomous Driving Systems. J. Adv. Transp. 2022, 2022, 3815306.
3. Causa, F.; Fasano, G. Improving Navigation in GNSS-Challenging Environments: Multi-UAS Cooperation and Generalized Dilution of Precision. IEEE Trans. Aerosp. Electron. Syst. 2021, 57, 1462–1479.
4. Feng, D.; Wang, C.; He, C.; Zhuang, Y.; Xia, X.G. Kalman-Filter-Based Integration of IMU and UWB for High-Accuracy Indoor Positioning and Navigation. IEEE Internet Things J. 2020, 7, 3133–3146.
5. Mazuelas, S.; Lorenzo, R.M.; Bahillo, A.; Fernandez, P.; Prieto, J.; Abril, E.J. Topology Assessment Provided by Weighted Barycentric Parameters in Harsh Environment Wireless Location Systems. IEEE Trans. Signal Process. 2010, 58, 3842–3857.
6. Wang, D.; Qin, H.; Zhang, Y.; Yang, Y.; Lv, H. Fast Clustering Satellite Selection Based on Doppler Positioning GDOP Lower Bound for LEO Constellation. IEEE Trans. Aerosp. Electron. Syst. 2024, 60, 9401–9410.
7. Wang, Y.; Ho, K.C. TDOA Source Localization in the Presence of Synchronization Clock Bias and Sensor Position Errors. IEEE Trans. Signal Process. 2013, 61, 4532–4544.
8. Won, D.H.; Ahn, J.; Lee, S.W.; Lee, J.; Sung, S.; Park, H.W.; Park, J.P.; Lee, Y.J. Weighted DOP With Consideration on Elevation-Dependent Range Errors of GNSS Satellites. IEEE Trans. Instrum. Meas. 2012, 61, 3241–3250.
9. Chen, C.S. Weighted Geometric Dilution of Precision Calculations with Matrix Multiplication. Sensors 2015, 15, 803–817.
10. Won, D.H.; Lee, E.; Heo, M.; Lee, S.W.; Lee, J.; Kim, J.; Sung, S.; Lee, Y.J. Selective Integration of GNSS, Vision Sensor, and INS Using Weighted DOP Under GNSS-Challenged Environments. IEEE Trans. Instrum. Meas. 2014, 63, 2288–2298.
11. Wang, M.; Chen, Z.; Zhou, Z.; Fu, J.; Qiu, H. Analysis of the Applicability of Dilution of Precision in the Base Station Configuration Optimization of Ultrawideband Indoor TDOA Positioning System. IEEE Access 2020, 8, 225076–225087.
12. Liang, X.; Pan, S.; Du, S.; Yu, B.; Li, S. An Optimization Method for Indoor Pseudolites Anchor Layout Based on MG-MOPSO. Remote Sens. 2025, 17, 1909.
13. Hong, W.; Choi, K.; Lee, E.; Im, S.; Heo, M. Analysis of GNSS Performance Index Using Feature Points of Sky-View Image. IEEE Trans. Intell. Transp. Syst. 2014, 15, 889–895.
14. Rao, R.M.; Emenonye, D.R. Iterative RNDOP-Optimal Anchor Placement for Beyond Convex Hull ToA-Based Localization: Performance Bounds and Heuristic Algorithms. IEEE Trans. Veh. Technol. 2024, 73, 7287–7303.
15. Ding, Y.; Shen, D.; Pham, K.; Chen, G. Measurement Along the Path of Unmanned Aerial Vehicles for Best Horizontal Dilution of Precision and Geometric Dilution of Precision. Sensors 2025, 25, 3901.
16. Deng, Z.; Wang, H.; Zheng, X.; Yin, L. Base Station Selection for Hybrid TDOA/RTT/DOA Positioning in Mixed LOS/NLOS Environment. Sensors 2020, 20, 4132.
17. Peng, Y.; Niu, X.; Tang, J.; Mao, D.; Qian, C. Fast Signals of Opportunity Fingerprint Database Maintenance with Autonomous Unmanned Ground Vehicle for Indoor Positioning. Sensors 2018, 18, 3419.
18. Roger, S.; Botella, C.; Pérez-Solano, J.J.; Perez, J. Application of Radio Environment Map Reconstruction Techniques to Platoon-based Cellular V2X Communications. Sensors 2020, 20, 2440.
19. Hong, S.; Chae, J. Active Learning With Multiple Kernels. IEEE Trans. Neural Networks Learn. Syst. 2022, 33, 2980–2994.
20. Lyu, S.; Xiang, Y.; Soja, B.; Wang, N.; Yu, W.; Truong, T.K. Uncertainties of Interpolating Satellite-Specific Slant Ionospheric Delays and Impacts on PPP-RTK. IEEE Trans. Aerosp. Electron. Syst. 2024, 60, 490–505.
21. Bazerque, J.A.; Giannakis, G.B. Nonparametric Basis Pursuit via Sparse Kernel-Based Learning: A Unifying View with Advances in Blind Methods. IEEE Signal Process. Mag. 2013, 30, 112–125.
22. Wang, Q.; Shi, W.; Atkinson, P.M. Sub-pixel mapping of remote sensing images based on radial basis function interpolation. ISPRS J. Photogramm. Remote Sens. 2014, 92, 1–15.
23. Golbabaee, M.; Arberet, S.; Vandergheynst, P. Compressive Source Separation: Theory and Methods for Hyperspectral Imaging. IEEE Trans. Image Process. 2013, 22, 5096–5110.
24. Li, C.; Xie, H.B.; Fan, X.; Xu, R.Y.D.; Van Huffel, S.; Sisson, S.A.; Mengersen, K. Image Denoising Based on Nonlocal Bayesian Singular Value Thresholding and Stein's Unbiased Risk Estimator. IEEE Trans. Image Process. 2019, 28, 4899–4911.
25. Ye, J.; Xiong, F.; Zhou, J.; Qian, Y. Iterative Low-Rank Network for Hyperspectral Image Denoising. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5528015.
26. Candès, E.J.; Sing-Long, C.A.; Trzasko, J.D. Unbiased Risk Estimates for Singular Value Thresholding and Spectral Estimators. IEEE Trans. Signal Process. 2013, 61, 4643–4657.
27. De Lathauwer, L. Decompositions of a higher-order tensor in block terms—Part II: Definitions and uniqueness. SIAM J. Matrix Anal. Appl. 2008, 30, 1033–1066.
28. Zhang, G.; Fu, X.; Wang, J.; Zhao, X.L.; Hong, M. Spectrum cartography via coupled block-term tensor decomposition. IEEE Trans. Signal Process. 2020, 68, 3660–3675.
29. Chen, J.; Zhang, Y.; Yao, C.; Tu, G.; Li, J. Hermitian Toeplitz Covariance Tensor Completion With Missing Slices for Angle Estimation in Bistatic MIMO Radars. IEEE Trans. Aerosp. Electron. Syst. 2024, 60, 8401–8418.
30. Sun, H.; Chen, J. Propagation map reconstruction via interpolation assisted matrix completion. IEEE Trans. Signal Process. 2022, 70, 6154–6169.
31. Levie, R.; Yapar, Ç.; Kutyniok, G.; Caire, G. RadioUNet: Fast radio map estimation with convolutional neural networks. IEEE Trans. Wirel. Commun. 2021, 20, 4001–4015.
32. Teganya, Y.; Romero, D. Deep Completion Autoencoders for Radio Map Estimation. IEEE Trans. Wirel. Commun. 2022, 21, 1710–1724.
33. Yu, H.; She, C.; Yue, C.; Hou, Z.; Rogers, P.; Vucetic, B.; Li, Y. Distributed Split Learning for Map-Based Signal Strength Prediction Empowered by Deep Vision Transformer. IEEE Trans. Veh. Technol. 2024, 73, 2358–2373.
34. Specht, M. Statistical Distribution Analysis of Navigation Positioning System Errors—Issue of the Empirical Sample Size. Sensors 2020, 20, 7144.
35. Specht, M. Consistency analysis of global positioning system position errors with typical statistical distributions. J. Navig. 2021, 74, 1201–1218.
36. Paradowski, L. Uncertainty ellipses and their application to interval estimation of emitter position. IEEE Trans. Aerosp. Electron. Syst. 1997, 33, 126–133.
37. Lombardi, G.; Crivello, A.; Barsocchi, P.; Chessa, S.; Furfari, F. Reducing Training Data for Indoor Positioning through Physics-Informed Neural Networks. In Proceedings of the 2025 International Conference on Indoor Positioning and Indoor Navigation (IPIN), Tampere, Finland, 15–18 September 2025; pp. 1–6.
38. Stein, S. Algorithms for ambiguity function processing. IEEE Trans. Acoust. Speech Signal Process. 1981, 29, 588–599.
39. Shin, D.H.; Sung, T.K. Comparisons of error characteristics between TOA and TDOA positioning. IEEE Trans. Aerosp. Electron. Syst. 2002, 38, 307–311.
40. Urruela, A.; Sala, J.; Riba, J. Average performance analysis of circular and hyperbolic geolocation. IEEE Trans. Veh. Technol. 2006, 55, 52–66.
41. Liao, B.; Chan, S.C.; Tsui, K.M. Recursive steering vector estimation and adaptive beamforming under uncertainties. IEEE Trans. Aerosp. Electron. Syst. 2013, 49, 489–501.
42. Fan, J.; Ma, G. Characteristics of GPS positioning error with non-uniform pseudorange error. GPS Solut. 2014, 18, 615–623.
43. Tomioka, R.; Hayashi, K.; Kashima, H. Estimation of low-rank tensors via convex optimization. arXiv 2010, arXiv:1010.0789.
44. Mu, C.; Huang, B.; Wright, J.; Goldfarb, D. Square deal: Lower bounds and improved relaxations for tensor recovery. In Proceedings of the International Conference on Machine Learning, PMLR, Beijing, China, 22–24 June 2014; pp. 73–81.
45. Hoyer, P.O. Non-negative matrix factorization with sparseness constraints. J. Mach. Learn. Res. 2004, 5, 1457–1469.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
