Proceeding Paper

Toward an Interpretable Multipath Error Model from GNSS Observables Through the Application of Deep Learning †

Abbia GNSS Technologies, 31100 Toulouse, France
* Author to whom correspondence should be addressed.
Presented at the European Navigation Conference 2025 (ENC 2025), Wrocław, Poland, 21–23 May 2025.
Eng. Proc. 2026, 126(1), 14; https://doi.org/10.3390/engproc2026126014
Published: 14 February 2026
(This article belongs to the Proceedings of European Navigation Conference 2025)

Abstract

Multipath degradation of GNSS measurements is the main source of error in urban areas. Robust mitigation of this error source is still a challenge for standalone low-cost GNSS receivers. The complexity associated with the development of multipath degradation models requires the use of advanced methods such as Deep Learning. However, Deep Learning-based mitigation methods tend to be hard to deploy due to a general lack of trust in their predictions, stemming from their “black-box” behavior. This work tackles the notions of interpretability and generalization of multipath degradation models obtained using Auto-Encoders. We demonstrate the ability of Auto-Encoders to generate interpretable representations and to generalize to unseen situations.

1. Introduction

Global Navigation Satellite Systems (GNSS) are integral to a myriad of applications relying on precise and robust positioning. However, GNSS receiver measurements are affected by various sources of error, degrading the accuracy and reliability of the position estimation process. Several error correction techniques have been developed to tackle this issue, allowing for the mitigation of most error sources to enable precise positioning.
Nevertheless, Multipath (MP) error components remain a challenge for precise and robust positioning in urban environments [1]. While information from additional sensors (IMU, LiDAR, …) can improve resilience to MP degradation, standalone low-cost GNSS receivers still lack the capabilities to properly mitigate its impact [1]. MP is caused by the scattering, blocking, and shadowing of the line-of-sight (LOS) signals by objects lying in the immediate vicinity of the receiver. The error is therefore inherently linked to the surrounding environment of a receiver, making it correlated only over small areas [1]. Consequently, a receiver cannot rely on external information (correlation characteristics) to mitigate it; it has to rely on more advanced internal algorithms.
A wide range of methods has been developed to reduce the impact of MP interference. For instance, statistical methods detect outliers in the measurements, notably Non-LOS (NLOS) cases [2]. Additionally, robust estimation methods reduce MP impact on the positioning process [3]. These methods do not provide an isolated correction to the MP interference but mitigate all remaining error factors in the measurements as a single one. While such methods demonstrate promising results, the lack of decomposition of the error sources results in a lack of clear comprehension of the decisions of the algorithm.
An accurate method to estimate MP interference is to use raytracing with a 3D map [4]. However, this method requires significant computational resources and access to an up-to-date 3D map [4], limiting its usage. We focus on an alternative method based on the conception of a model-based approximation of MP degradation of the measurements from the measurements themselves. As the dependencies between the measurements’ degradation are complex, we need to rely on advanced modeling methods [5,6].
The Deep Learning (DL) paradigm has appeared as a promising modeling approach due to its ability to rely on examples to approximate complex functions [7]. DL-based algorithms offer state-of-the-art performance for different tasks related to the mitigation of MP degradation, such as LOS/NLOS classification [5,7] or uncertainty estimation [8]. However, the deployment of such methods is difficult due to a lack of trust in the predictions of the DL models [9]. This wariness stems from two main questions regarding the use of DL:
- DL models are trained on a finite set of examples, but there is a need to ensure that they operate properly for contexts outside of the training set (a context being the combination of receiver/antenna, environment, etc.).
- The complexity of DL model operations results in a lack of comprehension of their inner mechanisms; DL models are typically considered black boxes [9].
Driven by the need for transparency of the DL-Models, the field of Explainable AI has emerged with powerful methods to tackle the interpretability question [10]. Some methods have been applied to GNSS, such as LIME and SHAP [11], LRP [9], or the analysis of attention scores [5]. This work exploits the visual exploration of the latent space, a method previously implemented by the authors of [12] in the context of spoof detection.
Beyond the development of mitigation methods, another facet of the literature demonstrated a broader ability of Deep Neural Networks (DNNs) to capture environmental information when applied to MP interference [5,13], and, more globally, to constitute complex and comprehensive models of MP degradation [14].
For these reasons, this work studies the ability to capture correlations between Carrier-to-Noise density ratio (CN0) and Code-Minus-Carrier (CMC) measurements [15], using the Self-Supervised Auto-Encoder (AE) method [16]. The trained AE is our MP degradation model for the selected measurements. The employed DL approach provides intermediate results that can be interpreted through latent space exploration [17], as well as a metric to evaluate the generalization ability without additional information. This method and the analyses we performed lay the basis for confidence indicators regarding the generalization and interpretability qualities of a DNN-based MP degradation model.

2. Methods

2.1. Deep Learning

DNNs are parametric models producing predictions from inputs sampled from a distribution called the domain. DNNs extract features from raw data to perform a task (e.g., classification); they do not require previous feature extraction steps [6]. The parameters of a DNN are learned during a training phase, where the model’s predictions are evaluated against a supervisory signal (label) using a loss function. The lifecycle of a DNN is decomposed into two phases: a training phase used to determine the value of the different weights using a specific dataset, and an inference phase. The model is expected to generalize to samples drawn from the same domain but unseen during the training phase. However, a model able to operate only on its training domain is limiting in practice, as small variations in the domain frequently occur in real-world data [18]. Models implemented for deployment purposes need to be robust to slight domain changes.
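The consequence of such a domain shift can be sketched with a deliberately minimal, self-contained example (not taken from this work; all names and values are illustrative): a model fitted on one input domain sees its error grow sharply when evaluated on a shifted domain, even though the underlying relation is unchanged.

```python
# Least-squares fit of a 1-parameter linear model y ≈ w*x on a
# training domain; the true relation y = x^2 is nonlinear, so the
# fitted model is only locally valid.
train_x = [i / 10.0 for i in range(1, 11)]   # training domain: (0, 1]
train_y = [x * x for x in train_x]
w = sum(x * y for x, y in zip(train_x, train_y)) / sum(x * x for x in train_x)

def mse(xs, ys):
    """Mean squared error of the fitted model on a given dataset."""
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Evaluate on the training domain and on a shifted, unseen domain.
shift_x = [x + 2.0 for x in train_x]         # shifted domain: (2, 3]
shift_y = [x * x for x in shift_x]
in_domain_err = mse(train_x, train_y)
shifted_err = mse(shift_x, shift_y)
```

On the training domain the error stays small; on the shifted domain it grows by orders of magnitude, which is precisely the failure mode a deployed model must be robust against.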
A DNN relies on the features it can extract to produce its predictions. The activations of a given layer for a given sample contain the information extracted for this sample at this point, and are called representations of the sample by the model. As the model relies on its representations to produce its predictions, more informative representations enable more coherent predictions. Encoders are models able to extract robust features from a domain to produce information-packed representations [6,19]. Encoders can serve as a feature-extracting backbone for distinct, smaller, specialized task-fulfilling models [20].
As obtaining solid representations might be as important as completing a specific task, algorithms have been developed to focus the training phase of DL models on their feature extraction ability [19]. Self-Supervised Learning (SSL) is a training paradigm consisting of producing supervisory signals from the samples themselves to train the model [20]. This method circumvents the need to rely on labeled data to train a model. A popular approach is to train an Encoder using SSL before adapting the model to a specific task using labeled data [20]. The first step of this process is called pretraining, and the second step is called adaptation to a downstream task. An Encoder can be trained by linking a Decoder to it, forming an AE [16]. Both are trained jointly, such that: (1) the encoder produces a latent representation of the input; (2) the decoder reconstructs the input from the latent representation. The loss is evaluated as the discrepancy between the input and the prediction. A constraint is imposed on the training process for the Encoder to learn meaningful features, for instance a dimension bottleneck [16]. Figure 1a illustrates the operations of an AE.
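The joint Encoder/Decoder training can be sketched with a toy linear AE (a pure-Python sketch, not the architecture used in this work): a 1-dimensional bottleneck forced on 2-dimensional correlated inputs, trained by gradient descent on the reconstruction MSE. The data, dimensions, and hyperparameters below are illustrative.

```python
import random

random.seed(0)

# Toy dataset: two correlated "observables" (stand-ins for two input
# channels); the second channel is a noisy linear function of the first.
data = [(a, 2.0 * a + random.gauss(0.0, 0.05))
        for a in [random.uniform(-1.0, 1.0) for _ in range(200)]]

# Linear auto-encoder with a 1-D bottleneck:
# encoder: z = w[0]*x1 + w[1]*x2 ; decoder: (v[0]*z, v[1]*z)
w = [0.5, 0.5]   # encoder weights
v = [0.5, 0.5]   # decoder weights

def reconstruction_mse(w, v, samples):
    """Loss = mean squared discrepancy between input and reconstruction."""
    total = 0.0
    for x1, x2 in samples:
        z = w[0] * x1 + w[1] * x2
        r1, r2 = v[0] * z, v[1] * z
        total += (r1 - x1) ** 2 + (r2 - x2) ** 2
    return total / len(samples)

lr = 0.05
initial_loss = reconstruction_mse(w, v, data)
for _ in range(300):
    gw, gv = [0.0, 0.0], [0.0, 0.0]
    for x1, x2 in data:
        z = w[0] * x1 + w[1] * x2
        e1, e2 = v[0] * z - x1, v[1] * z - x2
        # dL/dv_i = 2*e_i*z ; dL/dw_j = 2*(e1*v0 + e2*v1)*x_j
        gv[0] += 2 * e1 * z
        gv[1] += 2 * e2 * z
        common = 2 * (e1 * v[0] + e2 * v[1])
        gw[0] += common * x1
        gw[1] += common * x2
    n = len(data)
    for i in range(2):
        w[i] -= lr * gw[i] / n
        v[i] -= lr * gv[i] / n

final_loss = reconstruction_mse(w, v, data)
```

Because the two input channels are correlated, the bottleneck can encode both with a single latent value and the reconstruction loss drops close to the noise floor; uncorrelated channels would not compress this way.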

2.2. Methodology

The objective of this work is to train a DNN to become a model of MP degradation of key GNSS observables. We implement an SSL training phase to obtain an encoder that enables the subsequent analyses. Our methodology consists of two main steps: the AE training using distinct datasets, and their subsequent evaluation.
First, we explain our motivation to train an Encoder. We then detail the training procedure, starting with an explanation of the composition of the datasets, before summarizing the trained AE models. Finally, we explain how we experimented with the different AE models through the distinct Generalization and Interpretability analysis.

2.2.1. Deep Learning Approach

Rather than developing an MP mitigation method, we implement Encoders able to capture dependencies between CN0 and CMC measurements. Those Encoders are our models of MP degradation of the selected measurements. This work implements the pretraining of an Encoder. The Encoders produce latent representations of the samples and consequently provide a means to observe how the models process the information and what they capture. Additionally, the AE method requires no labels to compute a reconstruction error metric. This metric is subsequently used to evaluate the generalization of the models and can therefore measure generalization beyond labeled datasets.

2.2.2. Data Collection and Training Phase

We collected data from 77 distinct stations of the IGS network [21] over the same 10-day period. These data exhibit almost no MP degradation. Additionally, we performed a 10 h data campaign in our facilities, where the measurements are affected by strong MP interference due to walls covering a large part of the sky-view. For the data campaign, we used a Septentrio AsteRx_SBI3 Pro + GNSS receiver (Septentrio NV, Leuven, Belgium) with a Septentrio PolaNt* MC.v2 antenna. All the MP mitigation features of the receiver were deactivated.
We selected CMC, as a proxy for MP, and CN0, due to its known sensitivity to MP effects, as inputs to the AE. Dual-frequency measurements from L1 and L2 were included. CMC values were computed following the method described in [15]. Furthermore, we used a post-processed CN0 metric for subsequent analysis. This metric is computed by removing from CN0 its correlation with elevation, modeled as a second-order polynomial [22]. The resulting metric is centered around 0, with variations indicating that CN0 and elevation are no longer correlated, which commonly indicates MP degradation.
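The elevation-detrending step behind the post-processed CN0 metric can be sketched as follows (a minimal sketch; the exact fitting procedure of [22] may differ, and the data below are synthetic): fit a second-order polynomial of CN0 versus elevation by ordinary least squares, then keep the residual.

```python
def detrend_cn0(elevations_deg, cn0_dbhz):
    """Remove the elevation-dependent trend of CN0, modeled as a
    2nd-order polynomial fitted by least squares (normal equations).
    Returns the residual, whose excursions hint at MP degradation."""
    # Build the 3x3 normal-equation system A c = b for c0 + c1*e + c2*e^2.
    S = [sum(e ** k for e in elevations_deg) for k in range(5)]
    b = [sum(c * e ** k for e, c in zip(elevations_deg, cn0_dbhz))
         for k in range(3)]
    A = [[S[i + j] for j in range(3)] for i in range(3)]
    # Gaussian elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):  # back substitution
        coeffs[r] = (b[r] - sum(A[r][c] * coeffs[c]
                                for c in range(r + 1, 3))) / A[r][r]
    c0, c1, c2 = coeffs
    return [c - (c0 + c1 * e + c2 * e * e)
            for e, c in zip(elevations_deg, cn0_dbhz)]

# Synthetic arc: CN0 follows a quadratic elevation trend plus one bump.
elev = [float(e) for e in range(5, 90, 5)]
cn0 = [30.0 + 0.4 * e - 0.002 * e * e for e in elev]
cn0[6] += 3.0  # a localized CN0 anomaly, e.g., multipath-induced
residual = detrend_cn0(elev, cn0)
```

In this synthetic arc, the residual stays near zero everywhere except at the injected anomaly, which stands out clearly once the elevation trend is removed.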
The AE revealed two profiles of contexts. Some of the contexts showed identical CN0 on their respective L1 and L2 measurements; we labeled them “EQ_CN0” contexts. Conversely, others showed different CN0 on their respective L1 and L2 measurements; we labeled them “DIF_CN0” contexts. Table 1 lists the different trained AE models exhaustively. Their only source of variation is the constitution of their respective training set. The number of samples forming the training set stays identical, independent of the number of contexts.
The architecture of the Encoder consists of a linear layer projecting the sample to 256 dimensions (up-projection), followed by 5 linear layers of 256 neurons each, separated by ReLU activation functions. The depth and width of the network were determined empirically to balance reconstruction accuracy with efficient training. Finally, a residual connection from the first projection is added to improve convergence speed and training stability; layer normalization is applied before a linear projection to 2 dimensions (down-projection). While providing a convenient space for interpretability, the down-projection reduces the representational capacity of the model; the reconstruction error is sufficiently low to make this reduction acceptable. The Decoder has the same architecture with inverted up- and down-projections, and without a residual connection.
The AE-models are trained using 2 million samples drawn randomly from the 10 days of data; the training consists of 100 epochs. We used the Adam optimizer with a learning rate of 1 × 10^−4 and a weight decay of 1 × 10^−3; the optimized loss function is the Mean Squared Error between the sample and the prediction of the Decoder. Samples are z-normalized before passing through the AE-models, using mean and standard deviation parameters computed over the pooled samples of every IGS station (100 k samples per station).
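The pooled z-normalization step can be sketched as follows (the values below are illustrative; in this work the parameters come from 100 k samples per IGS station, pooled across stations):

```python
from math import sqrt

def fit_znorm(samples):
    """Compute per-feature mean and (population) standard deviation
    over a pool of samples, as used to z-normalize AE inputs."""
    dim, n = len(samples[0]), len(samples)
    means = [sum(s[i] for s in samples) / n for i in range(dim)]
    stds = [sqrt(sum((s[i] - means[i]) ** 2 for s in samples) / n)
            for i in range(dim)]
    return means, stds

def znorm(sample, means, stds):
    """Apply z-normalization feature by feature."""
    return [(x - m) / s for x, m, s in zip(sample, means, stds)]

# Illustrative pool of 2-D samples (e.g., a CN0-like and a CMC-like value).
pool = [[45.0, 0.2], [40.0, -0.1], [50.0, 0.5], [41.0, 0.0]]
means, stds = fit_znorm(pool)
normed = [znorm(s, means, stds) for s in pool]
```

After normalization, each feature of the pool has zero mean and unit variance, so no single observable dominates the reconstruction loss.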

2.2.3. MP Model Analysis

The trained AE-models are evaluated to obtain latent representations and reconstruction error statistics for each model and for each context. Figure 1a summarizes the main experiments forming the Generalization and Interpretability analysis, relying, respectively, on reconstruction error statistics and latent representations.
The Generalization analysis relies on the reconstruction error of the AE-models on specific datasets, evaluated as the Mean Absolute Error (MAE). This metric is an indicator of the ability of an AE to rely on well-known features to produce its latent representations. A model maintaining a low reconstruction error across different unseen contexts is a model able to generalize to the tested datasets. The reconstruction error was computed between z-score normalized samples and the inherently normalized predictions.
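A minimal sketch of this evaluation metric follows; the cap value mirrors the error cap used in Figure 2 and is illustrative, and `generalizes` is a hypothetical helper rather than a function from this work.

```python
def mae(samples, reconstructions):
    """Mean Absolute Error between (z-normalized) samples and their
    reconstructions, averaged over samples and feature dimensions."""
    total, count = 0.0, 0
    for s, r in zip(samples, reconstructions):
        for a, b in zip(s, r):
            total += abs(a - b)
            count += 1
    return total / count

def generalizes(mae_per_context, cap=1.0):
    """Hypothetical decision rule: a model is taken to generalize if
    its reconstruction error stays below the cap for every context."""
    return all(m < cap for m in mae_per_context)

# Orders of magnitude comparable to Table 2 (illustrative numbers):
mixed_profile = [0.18, 0.20, 0.32]    # low error across contexts
single_profile = [0.11, 1.60, 2.28]   # fails on the unseen profile
```

Under this rule, the mixed-profile model would be flagged as generalizing while the single-profile model would not, matching the qualitative reading of the results.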
The Interpretability analysis relies on the visual exploration of the latent space to understand how features and correlations are captured by the Encoder. The latent representation of each sample corresponds to a 2D vector indicating a position in the 2D latent space of the evaluated model. We separately plot the latent representations of different datasets to explore the layout of the points. Additionally, the points are colored according to their respective values of a selected metric (e.g., CN0/CMC, Elevation, etc.). We can consequently observe the layout and evolution of the values of specific metrics in the latent space. We specifically look for clusters of latent representations indicating the isolation of specific features; and for locally smooth evolutions of latent representations indicating that smooth variations in metrics are translated into continuous and interpretable latent directions. Representative visualizations were selected to illustrate key findings.
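A programmatic analogue of this visual exploration (a sketch only; this work relies on visual inspection, and the grid size and ranges here are arbitrary) is to aggregate a per-sample metric over cells of the 2-D latent space and examine how the cell means evolve along a latent direction:

```python
def latent_grid_means(latents, metric, bins=4, lo=-1.0, hi=1.0):
    """Aggregate a per-sample metric (e.g., elevation) over a regular
    grid covering the 2-D latent space: one mean value per cell.
    A smooth progression of cell means along a latent direction is the
    kind of structure the visual exploration looks for."""
    sums = [[0.0] * bins for _ in range(bins)]
    counts = [[0] * bins for _ in range(bins)]
    step = (hi - lo) / bins
    for (x, y), m in zip(latents, metric):
        i = max(min(int((x - lo) / step), bins - 1), 0)
        j = max(min(int((y - lo) / step), bins - 1), 0)
        sums[j][i] += m
        counts[j][i] += 1
    return [[sums[j][i] / counts[j][i] if counts[j][i] else None
             for i in range(bins)] for j in range(bins)]

# Demo: latent points spread along the x-axis with a metric that
# increases along that same direction.
demo_latents = [(-0.9, 0.0), (-0.4, 0.0), (0.1, 0.0), (0.6, 0.0)]
demo_grid = latent_grid_means(demo_latents, [1.0, 2.0, 3.0, 4.0])
```

A monotonic progression of cell means along one axis corresponds to the "locally smooth evolution" described above; cells without samples are reported as None.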

3. Results and Discussion

3.1. Generalization Analysis

Table 2 presents the MAE for categories of AE-models and profiles of environments. The MAE of each AE-model on its respective training set is detailed in the last column, demonstrating that the reconstruction error remains within a reasonable interval with respect to the training loss. The HIGH_DEGR context corresponds to a DIF_CN0 profile, but is accounted for in a distinct column to focus on its specific statistics. Table 2 shows that the AE-models (except EQ_CN0_AE models) achieve convincing reconstruction errors for the highly degraded context, while they were trained using low-MP context data only. The underlying features might be inherently general.
Figure 2 depicts the MAE for each evaluated combination of AE-model and IGS station. The profiles of the contexts are a central element of the generalization ability of the AE-models. Indeed, models trained using a single context tend to generalize to their own profile, but struggle to generalize to the other, especially EQ_CN0_AE. 2xEQ_CN0_AE and 2xDIF_CN0_AE models show similar behavior, indicating that multiplying contexts of a single profile does not increase the generalization ability of the AE model.
However, mixing contexts of different profiles drastically improves the generalization abilities of the AE models. Consequently, the underlying features are compatible (i.e., an AE-model can accurately learn to encode more than one profile), and mixing the profiles is beneficial to the global generalization ability of the AE-models.
These findings demonstrate the efficiency of Self-Supervised Training to find general features from CMC and CN0 measurements, notably related to MP degradation.

3.2. Interpretability Analysis

Our Interpretability Analysis shows how the layout of the latent representations reveals information about the context itself. The selected latent representations are produced by the same MIX_4_AE model, which showed convincing generalization capabilities. While this model was also selected for the readability of its latent representation, we could draw identical conclusions from the great majority of the other MIX_AE models.
Figure 3 shows the Latent Space representations produced by the selected AE-model for, respectively, a mix of samples from DIF_CN0 contexts, and a mix of samples from EQ_CN0 contexts. While a common latent region exists, specific zones of the latent space are populated only by DIF_CN0 samples, explaining the ability of the MIX_AE models to handle both profiles. The latent space representation of several samples of the same context contains information regarding the context itself.
Figure 4 depicts the latent representations of the selected AE-model, colored with, respectively, CN0 measurements on L2 and Elevation values. Elevation is encoded in a smooth manner as it is correlated to CN0. The correlation of CN0 and elevation is due to the change in the distance between the receiver and the satellites, which is itself correlated to both measurements. The model, therefore, captures this underlying geometry and demonstrates it in its latent representations.
Having analyzed correlations driven by broader geometrical factors such as distance and atmospheric effects, we now turn to finer-grained geometrical cues. To this end, we consider the post-processed CN0 metric, which is sensitive to multipath and therefore linked to the local geometry of the receiver. Figure 5 shows the latent representations of the selected AE-model for two different contexts, colored with post-processed CN0 L1 values. The latent layout of this metric is similar for the two distinct contexts, although it is not an input of the Encoder. This indicates that similar underlying features were captured by the Encoder. Additionally, we found different categorizable layouts in the latent space, suggesting that the underlying captured features are also categorizable. Due to the link of this metric to MP degradation, and the link of MP degradation to the environment itself, we suggest that the captured features are related to the environment surrounding the receivers.
These findings demonstrate the potential of latent space analysis to understand the information extracted by the Encoder, but also to explore characteristics of the data itself. We suggest that environmental information might be encoded in the latent space representations already.

4. Conclusions

We implemented an approach enabling the evaluation of both the generalization and interpretability abilities of DL models. The AE Self-Supervised training phase was an efficient approach to capture general features of MP degradation of the selected measurements, as the AE models were globally able to generalize to unseen contexts. Additionally, latent space exploration revealed itself as a promising tool to explore the inner mechanisms of a model, as we found interpretations for the layout of the latent representations.
This method, combined with the subsequent possible generalization and interpretability analyses, may help increase the confidence one can have in the predictions of its DL models of MP degradation of the measurements. The implementation of mitigation methods relying on the representations produced by the Encoder is a logical next step to evaluate the practical applications of our approach and, therefore, its global efficiency.

Author Contributions

Conceptualization, T.B. and E.R.M.; methodology, T.B. and E.R.M.; software, T.B. and M.E.; validation, E.R.M. and T.B.; formal analysis, T.B. and E.R.M.; investigation, T.B. and M.E.; resources, B.E.; data curation, T.B.; writing—original draft preparation, T.B. and E.R.M.; writing—review and editing, T.B., E.R.M., B.E., J.C.; visualization, T.B.; supervision, B.E.; project administration, B.E.; funding acquisition, B.E. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data that support the findings in this study are available upon reasonable request from the authors. The raw data can be downloaded from the IGS site.

Conflicts of Interest

All authors were employed by Abbia GNSS Technologies and declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Matera, E.-R. A Multipath and Thermal Noise Joint Error Characterization and Exploitation for Low-Cost GNSS PVT Estimators in Urban Environment. Doctoral Dissertation, Institut National Polytechnique de Toulouse—INPT, Toulouse, France, 2022. [Google Scholar]
  2. Xia, X.; Hsu, L.-T.; Wen, W. Integrity-Constrained Factor Graph Optimization for GNSS Positioning. In Proceedings of the 2023 IEEE/ION Position, Location and Navigation Symposium (PLANS), Monterey, CA, USA, 24–27 April 2023; IEEE: New York, NY, USA, 2023; pp. 414–420. [Google Scholar]
  3. Medina, D.; Li, H.; Vilà-Valls, J.; Closas, P. Robust Statistics for GNSS Positioning under Harsh Conditions: A Useful Tool? Sensors 2019, 19, 5402. [Google Scholar] [CrossRef] [PubMed]
  4. Groves, P. It’s Time for 3D Mapping–Aided GNSS. Inside GNSS Magazine, 1 September 2016. [Google Scholar]
  5. Zheng, S.; Zeng, K.; Li, Z.; Wang, Q.; Xie, K.; Liu, M.; Xie, S. Improving the Prediction of GNSS Satellite Visibility in Urban Canyons Based on a Graph Transformer. J. Inst. Navig. 2024, 71, navi.676. [Google Scholar] [CrossRef]
  6. LeCun, Y.; Bengio, Y.; Hinton, G. Deep Learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
  7. Zhang, H.; Wang, Z.; Vallery, H. Learning-Based NLOS Detection and Uncertainty Prediction of GNSS Observations with Transformer-Enhanced LSTM Network. In Proceedings of the 2023 IEEE 26th International Conference on Intelligent Transportation Systems (ITSC), Bilbao, Spain, 24–28 September 2023; IEEE: New York, NY, USA, 2023; pp. 910–917. [Google Scholar]
  8. Li, H.; O’Keefe, K. Neural Network-Based GNSS Code Measurement De-Weighting for Multipath Mitigation. In Proceedings of the 37th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+ 2024), Baltimore, MD, USA, 16–20 September 2024; pp. 3757–3768. [Google Scholar]
  9. Wang, H.-S.; Jwo, D.-J.; Gao, Z.-H. Towards Explainable Artificial Intelligence for GNSS Multipath LSTM Training Models. Sensors 2025, 25, 978. [Google Scholar] [CrossRef] [PubMed]
  10. Mersha, M.; Lam, K.; Wood, J.; AlShami, A.K.; Kalita, J. Explainable Artificial Intelligence: A Survey of Needs, Techniques, Applications, and Future Direction. Neurocomputing 2024, 599, 128111. [Google Scholar] [CrossRef]
  11. Elango, A.; Landry, R., Jr. XAI GNSS—A Comprehensive Study on Signal Quality Assessment of GNSS Disruptions Using Explainable AI Technique. Sensors 2024, 24, 8039. [Google Scholar] [CrossRef] [PubMed]
  12. Iqbal, A.; Aman, M.N.; Sikdar, B. A Deep Learning Based Induced GNSS Spoof Detection Framework. IEEE Trans. Mach. Learn. Commun. Netw. 2024, 2, 457–478. [Google Scholar] [CrossRef]
  13. Li, Y.; Jiang, Z.; Qian, C.; Huang, W.; Yang, Z. A Deep-Learning Based GNSS Scene Recognition Method for Detailed Urban Static Positioning Task via Low-Cost Receivers. Remote Sens. 2024, 16, 3077. [Google Scholar] [CrossRef]
  14. Zhang, G.; Xu, P.; Xu, H.; Hsu, L.-T. Prediction on the Urban GNSS Measurement Uncertainty Based on Deep Learning Networks With Long Short-Term Memory. IEEE Sens. J. 2021, 21, 20563–20577. [Google Scholar] [CrossRef]
  15. García, V.P.; Woodhouse, N. Multipath Analysis Using Code-Minus-Carrier Technique in GNSS Antennas; Taoglas Ltd.: Daskroi, India, 2020; CorpusID: 220493648; Available online: https://api.semanticscholar.org/CorpusID:220493648 (accessed on 20 May 2025).
  16. Bank, D.; Koenigstein, N.; Giryes, R. Autoencoders. In Machine Learning for Data Science Handbook; Springer: Berlin/Heidelberg, Germany, 2021. [Google Scholar]
  17. Liu, Y.; Jun, E.; Li, Q.; Heer, J. Latent Space Cartography: Visual Analysis of Vector Space Embeddings. Comput. Graph. Forum 2019, 38, 67–78. [Google Scholar] [CrossRef]
  18. Wang, J.; Lan, C.; Liu, C.; Ouyang, Y.; Qin, T.; Lu, W.; Chen, Y.; Zeng, W.; Yu, P.S. Generalizing to Unseen Domains: A Survey on Domain Generalization. IEEE Trans. Knowl. Data Eng. 2023, 35, 8052–8072. [Google Scholar] [CrossRef]
  19. Bengio, Y.; Courville, A.; Vincent, P. Representation Learning: A Review and New Perspectives. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1798–1828. [Google Scholar] [CrossRef]
  20. Gui, J.; Chen, T.; Zhang, J.; Cao, Q.; Sun, Z.; Luo, H.; Tao, D. A Survey on Self-Supervised Learning: Algorithms, Applications, and Future Trends. IEEE Trans. Pattern Anal. Mach. Intell. 2024, 46, 9052–9071. [Google Scholar] [CrossRef] [PubMed]
  21. Johnston, G.; Riddell, A.; Hausler, G. The International GNSS Service. In Springer Handbook of Global Navigation Satellite Systems; Teunissen, P.J.G., Montenbruck, O., Eds.; Springer International Publishing: Cham, Switzerland, 2017; pp. 967–982. ISBN 978-3-319-42926-7. [Google Scholar]
  22. Li, Y.; Cai, C.; Xu, Z. A Combined Elevation Angle and C/N0 Weighting Method for GNSS PPP on Xiaomi MI8 Smartphones. Sensors 2022, 22, 2804. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Illustration of the employed methodology through an overview of the main steps and analysis. (a) Illustration of the AE method and our analyses; (b) Main steps of our methodology.
Figure 2. Mean Absolute Reconstruction error of each AE-model for each IGS station. Reconstruction error was capped at 1. Blue indicates low reconstruction error, while yellow indicates the maximal one. A model generalizes if its corresponding column tends to be blue for every row.
Figure 3. Latent space representations of the selected MIX_4_AE model. (a) Samples from 4 DIF_ENV are represented; (b) Samples from 4 EQ_ENV are represented.
Figure 4. Latent space representations of the selected MIX_4_AE model. Samples are colored with their respective (a) CN0 L2 values; (b) Elevation values.
Figure 5. Latent space of the selected MIX_4_AE model. Samples are colored with their respective post-processed CN0 L1 values. Latent representation for IGS station (a) KOKV; (b) MADR.
Table 1. Description of the different categories of trained Auto-Encoder models.

| AE-Model Category | Training Set Composition | Rationale | # of Models |
|---|---|---|---|
| EQ_CN0_AE | 1 EQ_CN0-profiled IGS station | Assess the ability of a model to generalize, depending on the station used as a training set | 27 |
| DIF_CN0_AE | 1 DIF_CN0-profiled IGS station | | 50 |
| 2xDIF_CN0_AE | 2 distinct DIF_CN0-profiled IGS stations | Assess the ability of a single model to capture the features of different contexts | 10 |
| 2xEQ_CN0_AE | 2 distinct EQ_CN0-profiled IGS stations | | 10 |
| MIX_2_AE | 1 EQ_CN0- + 1 DIF_CN0-profiled IGS station | | 20 |
| MIX_4_AE | 2 distinct EQ_CN0- + 2 distinct DIF_CN0-profiled IGS stations | | 20 |
| MIX_8_AE | 4 distinct EQ_CN0- + 4 distinct DIF_CN0-profiled IGS stations | | 20 |

# of Models refers to the number of AE models trained.
Table 2. Mean and standard deviation of the MAE of the different AE-models for the different contexts, per category of AE-model and per profile of contexts.

| AE-Model Category | EQ_CN0 Contexts (Mean / Std) | DIF_CN0 Contexts (Mean / Std) | HIGH_DEGR (Mean / Std) | Training Context (Mean / Std) |
|---|---|---|---|---|
| EQ_CN0_AE | 0.11 / 0.03 | 1.60 / 0.95 | 2.28 / 0.98 | 0.09 / 0.01 |
| DIF_CN0_AE | 0.33 / 0.08 | 0.18 / 0.08 | 0.31 / 0.12 | 0.11 / 0.02 |
| 2xDIF_CN0_AE | 0.35 / 0.09 | 0.18 / 0.07 | 0.27 / 0.06 | 0.11 / 0.01 |
| 2xEQ_CN0_AE | 0.17 / 0.11 | 1.53 / 0.71 | 1.97 / 1.43 | 0.10 / 0.02 |
| MIX_2_AE | 0.19 / 0.11 | 0.22 / 0.08 | 0.34 / 0.07 | 0.11 / 0.01 |
| MIX_4_AE | 0.18 / 0.10 | 0.20 / 0.06 | 0.32 / 0.04 | 0.13 / 0.01 |
| MIX_8_AE | 0.16 / 0.08 | 0.18 / 0.04 | 0.29 / 0.02 | 0.13 / 0.01 |

Share and Cite

MDPI and ACS Style

Barbero, T.; Matera, E.R.; Ekambi, B.; Chamard, J.; Ekambi, M. Toward an Interpretable Multipath Error Model from GNSS Observables Through the Application of Deep Learning. Eng. Proc. 2026, 126, 14. https://doi.org/10.3390/engproc2026126014

