Proceeding Paper

Deep Learning Assisted Composite Clock: Robust Timescale for GNSS Through Neural Network †

1 European Space Agency (ESA), 2201 AZ Noordwijk, The Netherlands
2 European Space Agency (ESA), 00044 Rome, Italy
* Author to whom correspondence should be addressed.
Presented at the European Navigation Conference 2025 (ENC 2025), Wrocław, Poland, 21–23 May 2025.
Eng. Proc. 2026, 126(1), 2; https://doi.org/10.3390/engproc2026126002
Published: 5 February 2026
(This article belongs to the Proceedings of European Navigation Conference 2025)

Abstract

This study introduces the Deep Learning Assisted Composite Clock (DLACC), which aims to improve the robustness of the GNSS timescale. While traditional Kalman filter-based composite clocks are used today in systems such as GPS and EGNOS, the non-linear, non-Gaussian, and non-stationary behavior of atomic clocks can degrade the performance of such model-based filtering. DLACC, built on the KalmanNet approach, enhances the Kalman filter by computing its gain through a neural network, so as to better model clock dynamics and manage ensemble clock reconfigurations. In particular, this study evaluates the method's performance against conventional filters, demonstrating its potential for more resilient and adaptive GNSS timescales.

1. Introduction

Clock synchronization is vital for navigation systems, which require a stable, available, and resilient common time reference to ensure positioning and timing performance for all users. The composite clock approach, adopted by systems such as GPS and EGNOS, addresses these needs by assembling multiple physical clocks through Kalman filters [1,2]. Such algorithms benefit from the availability of a large clock ensemble and can efficiently detect anomalies in individual clocks, adapting each clock's contribution to the overall system time. Yet, tuning such algorithms is a very complex task, as clock behavior can be non-linear, non-Gaussian, and non-stationary, while conventional filtering methods often assume the opposite [3,4]. Significant challenges also arise in the initialization of such filters and in the handling of clock ensemble reconfigurations [5].
In this context, this study introduces the concept of a Deep Learning Assisted Composite Clock (DLACC) for GNSS, leveraging supervised learning methods to dynamically estimate filter parameters in real time and in non-linear environments, while also handling partial information. Based on recent research [6], the proposed approach embeds a dedicated neural network within the existing Kalman filter, enhancing its ability to capture complex clock behaviors and ensemble reconfigurations.
Firstly, this paper introduces the general paradigm of GNSS composite clocks based on conventional Kalman filters, highlighting the challenges associated with atomic clock behavior prediction. Then, an evolution of the Kalman filter is presented, based on the work detailed in [6]. This KalmanNet system, which computes the Kalman gain from a recurrent neural network fed by observed and predicted clock states, is then evaluated against classic filtering methods.

2. Basics of the Composite Clock for GNSS

Precise timing is at the core of satellite navigation, as the associated services rely on precise knowledge of the satellite clocks' states. In order to synchronize these clocks and other system assets, and to compute precise satellite clock predictions for broadcasting in the navigation message, Global Navigation Satellite Systems (GNSS) typically establish an internal timescale: the system time. The work of K. Brown [1] presented a solution to compute the GNSS reference time (as used for satellite clock prediction) from a combination of clocks using a Kalman filter. This ensemble timescale, commonly called a composite clock, has obvious benefits for overall timescale availability and robustness, as well as for the monitoring of the contributing clocks for detection and isolation of anomalies.
In [1], the ensemble timescale itself is defined as the implicit Kalman mean, which is not directly observable, but the states of the individual clocks (phase, frequency, and frequency drift) are de facto estimated relative to it. The blocks of the process covariance matrix Q corresponding to clock i are defined as follows:
Q_i = \begin{pmatrix}
q_{i,1}\tau + q_{i,2}\frac{\tau^3}{3} + q_{i,3}\frac{\tau^5}{20} & q_{i,2}\frac{\tau^2}{2} + q_{i,3}\frac{\tau^4}{8} & q_{i,3}\frac{\tau^3}{6} \\
* & q_{i,2}\tau + q_{i,3}\frac{\tau^3}{3} & q_{i,3}\frac{\tau^2}{2} \\
* & * & q_{i,3}\tau
\end{pmatrix}   (1)
where * denotes the symmetric lower-triangular entries.
The coefficients q are related to the Allan variance of the corresponding clock, as shown in [7], where q_0, q_1, q_2, q_3 correspond to white phase modulation, white frequency modulation, random-walk frequency modulation, and random-run frequency modulation, respectively:
\sigma_y^2(\tau) = \frac{3 q_0}{\tau^2} + \frac{q_1}{\tau} + \frac{q_2\,\tau}{3} + \frac{q_3\,\tau^3}{20}   (2)
Important practical considerations regarding the computation and tuning of the clock covariances are presented in [7,8].
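As an illustration, the Q_i block and the Allan-variance relation above can be coded directly. The following is a minimal sketch; the function names and the noise coefficient values are purely illustrative and not taken from any operational implementation:

```python
import numpy as np

def clock_process_noise(q1: float, q2: float, q3: float, tau: float) -> np.ndarray:
    """Three-state (phase, frequency, drift) process-noise block Q_i for one clock,
    built from the coefficients q1 (white FM), q2 (random-walk FM), q3 (random-run FM)."""
    return np.array([
        [q1 * tau + q2 * tau**3 / 3 + q3 * tau**5 / 20,
         q2 * tau**2 / 2 + q3 * tau**4 / 8,
         q3 * tau**3 / 6],
        [q2 * tau**2 / 2 + q3 * tau**4 / 8,
         q2 * tau + q3 * tau**3 / 3,
         q3 * tau**2 / 2],
        [q3 * tau**3 / 6,
         q3 * tau**2 / 2,
         q3 * tau],
    ])

def allan_variance(q0: float, q1: float, q2: float, q3: float, tau: float) -> float:
    """Allan variance implied by the same noise coefficients."""
    return 3 * q0 / tau**2 + q1 / tau + q2 * tau / 3 + q3 * tau**3 / 20

# Illustrative coefficients for a 600 s update interval
Q = clock_process_noise(q1=1e-22, q2=1e-32, q3=0.0, tau=600.0)
```

The matrix is symmetric by construction, matching the starred entries of the expression above.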
Brown's composite clock implementation suffered from a problem common to early Kalman filter timescales: the uncontrolled growth of the covariance of the estimated states. This happens because the input measurements are de facto clock differences, while the model assumes estimation of the absolute clock states, which are not observable. In other words, the number of independent observations is always smaller than the number of clocks that the Kalman filter estimates. This leads to uncontrolled covariance growth, as shown in [9] (although the Kalman gain remains steady), and can result in numerical problems. In [1], the covariance growth is addressed through regular covariance reductions.
A solution to improve the system’s observability by adding constraints on the continuity of the estimated clock states was proposed by S. Stein in [10]. This approach was adopted in the timescale algorithm proposed for the International GPS Service (IGS, now International GNSS Service) [2,11].
Under an ESA contract for the Galileo Ground Mission Segment, Thales Alenia Space France (TAS-F) has investigated options to adapt the IGS timescale algorithm for Galileo [12]. The timescale computation is proposed to be implemented separately from the legacy orbit determination and time synchronization algorithm ODTS (see Figure 1).
While re-using the basic structure of the covariance matrix from (1), [12] proposed scaling factors Avail and Anom to manage the exclusion/re-introduction of the i-th clock due to data availability, or clock weight reductions due to an anomaly, similarly to the approach in [11]:
w_{i,\mathrm{phase}} = Avail_i \cdot Anom_i \left( q_{1,i}\,\tau + q_{2,i}\,\frac{\tau^3}{3} + q_{3,i}\,\frac{\tau^5}{20} \right)   (3)
w_{i,\mathrm{frequency}} = Avail_i \cdot Anom_i \left( q_{2,i}\,\frac{\tau^2}{2} + q_{3,i}\,\frac{\tau^4}{8} \right)   (4)
w_{i,\mathrm{drift}} = Avail_i \cdot Anom_i \cdot q_{3,i}\,\frac{\tau^2}{2}   (5)
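These weighting terms are straightforward to compute. The sketch below (function and argument names are illustrative) shows how the Avail_i and Anom_i factors scale a clock's contribution, with full exclusion when Avail_i = 0:

```python
def clock_weights(q1: float, q2: float, q3: float, tau: float,
                  avail: float = 1.0, anom: float = 1.0) -> dict:
    """Availability/anomaly-scaled weighting terms for one clock.
    avail and anom play the role of Avail_i and Anom_i
    (1 = nominal, 0 = clock fully excluded from the ensemble)."""
    s = avail * anom
    return {
        "phase":     s * (q1 * tau + q2 * tau**3 / 3 + q3 * tau**5 / 20),
        "frequency": s * (q2 * tau**2 / 2 + q3 * tau**4 / 8),
        "drift":     s * (q3 * tau**2 / 2),
    }

# Illustrative coefficients; an excluded clock carries zero weight on all states.
w_nominal  = clock_weights(q1=1e-22, q2=1e-32, q3=0.0, tau=600.0)
w_excluded = clock_weights(q1=1e-22, q2=1e-32, q3=0.0, tau=600.0, avail=0.0)
```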
In the context of a future operational implementation of such an algorithm in GNSS, one of the main challenges is refining and automating the dynamic estimation of the process covariance matrix. This is particularly important considering the relatively frequent reconfigurations of the system clock ensemble and the general non-stationarity of the process noises. The issue becomes even more critical when additional weighting factors managing clock introduction/exclusion are used, as proposed in [12]. Such tuning requires long-term operational experience and expert knowledge. To increase the level of automation of GNSS operations, and consequently reduce the dependence on expert knowledge and mitigate human error, we consider the application of neural networks and machine learning.

3. Machine Learning Assisted Kalman Filter

As introduced above and detailed in [12], the Kalman gain K_t is conventionally computed from the predicted covariance P̂_t and the observation noise covariance R, and thus requires full knowledge of the underlying model, which in exchange admits an efficient linear and recursive structure. In such a context, any mismatch with respect to the model can degrade the overall performance of the Kalman filter, as it can lead to a wrong Kalman gain. Yet, in experimental and operational conditions, such mismatches are easily encountered:
  • Any anomaly in a given clock's behavior (frequency and/or phase drift, data gaps) can affect the next state prediction and thus the computation of K_t.
  • Atomic clocks embedded in GNSS (ground segment and space vehicle clocks) behave as non-linear, non-Gaussian, and non-stationary systems, with which model-based methods cannot fully comply [3,4,5].
While a clock with anomalies can easily be removed from the ensemble by the pre-processing block in charge of detecting and isolating such anomalies, the clocks' behavior can still affect the Kalman filter process and therefore the composite clock's performance.
Based on the work described in [6], the limitations of the model-based approach can be bypassed through the use of a deep learning aided Kalman filter, KalmanNet, where the Kalman gain K_t is computed by a recurrent neural network (RNN) fed with a set of input features that capture the underlying statistics directly from the data stream. In the context of a composite clock, this KalmanNet solution has been adapted to use as inputs the observation difference \tilde{y}_t, the innovation difference \Delta y_t, the forward evolution difference \tilde{x}_{t-1}, and the forward update difference \Delta\hat{x}_{t-1}:
\tilde{y}_t = y_t - y_{t-1}   (6)
\Delta y_t = y_t - \hat{y}_{t|t-1}   (7)
\tilde{x}_{t-1} = \hat{x}_{t-1|t-1} - \hat{x}_{t-2|t-2}   (8)
\Delta\hat{x}_{t-1} = \hat{x}_{t-1|t-1} - \hat{x}_{t-1|t-2}   (9)
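These four differences are assembled from quantities already available in the filter loop. A minimal sketch follows (the function and argument names are illustrative; inputs can be scalars or state vectors):

```python
import numpy as np

def kalmannet_features(y_t, y_prev, y_pred, x_post_prev, x_post_prev2, x_prior_prev):
    """Build the four RNN input features from the filter's running quantities:
    y_t - y_prev               -> observation difference
    y_t - y_pred               -> innovation difference
    x_post_prev - x_post_prev2 -> forward evolution difference
    x_post_prev - x_prior_prev -> forward update difference"""
    feats = (y_t - y_prev, y_t - y_pred,
             x_post_prev - x_post_prev2, x_post_prev - x_prior_prev)
    return np.concatenate([np.atleast_1d(f) for f in feats])

# Toy numbers, scalar observation and one-dimensional state
f = kalmannet_features(1.0, 0.5, 0.8,
                       np.array([2.0]), np.array([1.5]), np.array([1.8]))
```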
Figure 2 presents the block diagram of this Deep Learning Assisted Composite Clock (DLACC) Kalman filter, introduced in [6] in a more general context. In this new approach, the prediction step is the same as in the model-based Kalman filter, except that only the first-order statistical moments are predicted (upper block, x̂_{t|t−1}). A prior estimate of the current observation, ŷ_{t|t−1}, is then computed from x̂_{t|t−1}. In the update step, the new observation y_t is used to compute the current state posterior x̂_t from the previously computed prior x̂_{t|t−1}, using the innovation difference Δy_t and the Kalman gain K_t. Here, however, K_t is not computed explicitly but is instead learned from data using an RNN, as per [6]. Thanks to its inherent memory, this RNN implicitly tracks the second-order statistical moments (in place of P̂_t) without requiring knowledge of the underlying noise statistics.
The advantage of such an implementation is that it has little impact on the general structure of composite clock algorithms such as the one proposed in [12]. Here, (x̂_t)_i represents the estimated state vector of phase, frequency, and frequency drift of a given clock CLK_i with respect to the composite clock CC:
(\hat{x}_t)_i = \begin{pmatrix} (CLK_i - CC)_{\mathrm{phase}} \\ (CLK_i - CC)_{\mathrm{frequency}} \\ (CLK_i - CC)_{\mathrm{drift}} \end{pmatrix}   (10)
Yet, even if the general paradigm remains unchanged, the main challenge is now to feed the recurrent neural network and to find the right balance between the number of neurons (in the linear input and output layers), the Gated Recurrent Unit (GRU) interfaces, the number of epochs, and the minimum stream duration required to reach a stable Root Mean Square Error (RMSE) between the training and test datasets. To do this, a dataset first needs to be created.

4. Dataset Construction, Recurrent Neural Network Settings and Performance Evaluation

4.1. Dataset Construction

As with any machine learning process, the neural network proposed in the previous section needs to be trained to be correctly parameterized. In the context of DLACC, the main challenge is to obtain a representative dataset of clock phases. This dataset should emulate the behavior of both ground and space clocks over a very long period, also allowing a nominal evaluation of the neural network's convergence time with respect to the time series length, as well as of its stability. While the use of real data extracted from actual GNSS systems or laboratories was initially discussed, it was preferred to generate emulated time series using the STABLE32 software (version 1.62) [13], as our goal at this stage was to evaluate KalmanNet's ability to model the process noise for different types of clocks. Besides making long-term time series possible, this approach avoids the data gaps commonly found in real datasets. Table 1 introduces the parameters used to generate the 11 ground and 11 space clock time series, and Figure 3 represents the different clock offsets with respect to the first ground (a) or space (b) clock time series generated (reference clock in red).
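As a simplified stand-in for the STABLE32 generation, a clock phase series with white frequency noise and a deterministic frequency drift can be emulated in a few lines. All names and values below are illustrative; flicker (FFM) noise, which requires a dedicated 1/f generator, is deliberately omitted here:

```python
import numpy as np

def simulate_clock_phase(wfm: float, drift: float, tau: float, n: int,
                         rng: np.random.Generator) -> np.ndarray:
    """Emulate a clock phase time series sampled every tau seconds:
    white frequency noise of level wfm plus a linear frequency drift.
    Phase (in seconds) is obtained by integrating fractional frequency."""
    t = np.arange(n) * tau
    freq = wfm * rng.standard_normal(n) + drift * t   # fractional frequency
    return np.cumsum(freq) * tau                      # integrate into phase

rng = np.random.default_rng(42)
phase = simulate_clock_phase(wfm=1.0e-12, drift=0.0, tau=600.0, n=1000, rng=rng)
```

With these settings, consecutive phase increments have a standard deviation of about wfm × tau, i.e., 6 × 10⁻¹⁰ s.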

4.2. Recurrent Neural Network Settings and Performance Evaluation

The heart of the KalmanNet filter introduced in [6] and shown in Figure 2 is the recurrent neural network (RNN). Like any neural network, it needs to be parameterized. This setting phase concerns the following:
  • n_neuron: the number of neurons embedded in the RNN. In the context of KalmanNet usage, and for convenience, n_neuron represents the number of neurons in each layer of the RNN (all layers share the same number of neurons).
  • n_epoch: the number of times the entire dataset is passed through the RNN.
  • n_sample: the number of samples injected to train the neural network.
  • l_batch: the batch size, i.e., the number of samples propagated through the network at the same time.
In addition, the overall architecture of the RNN has itself been considered as a parameter, in order to test both architectures introduced in [6]:
  • 1-GRU: an architecture with one Gated Recurrent Unit (GRU) between the linear input and output layers. This architecture can be considered fully connected, giving the RNN more independence in formulating the Kalman gain K_t.
  • 3-GRU: an architecture with three GRUs, each dedicated to tracking one covariance: the process covariance Q, the state estimate covariance P, and the residual covariance S. This architecture constrains the RNN more strongly, reducing the overall abstraction and the number of trainable parameters, and generally leading to better computation times. At t = 0, these covariances are initialized as in a traditional Kalman filter and are then adjusted at each RNN computation round.
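To make the data flow concrete, the 1-GRU variant can be sketched with a single hand-written GRU cell. This toy numpy version uses random, untrained weights purely to show how the input features are mapped to a Kalman gain; all class and variable names are illustrative, and in DLACC the weights would be trained end-to-end on the state estimation error, as in [6]:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class GainRNN:
    """Minimal sketch of the 1-GRU architecture: linear input layer, one GRU
    cell, and a linear output layer emitting the flattened Kalman gain K_t
    for an m-state, n-observation filter."""
    def __init__(self, n_features, n_hidden, m, n, rng):
        self.h = np.zeros(n_hidden)          # GRU memory (tracks 2nd moments)
        self.W_in = rng.standard_normal((n_hidden, n_features)) * 0.1
        # GRU gates: update z, reset r, candidate c - each sees input and state
        self.W_z = rng.standard_normal((n_hidden, 2 * n_hidden)) * 0.1
        self.W_r = rng.standard_normal((n_hidden, 2 * n_hidden)) * 0.1
        self.W_c = rng.standard_normal((n_hidden, 2 * n_hidden)) * 0.1
        self.W_out = rng.standard_normal((m * n, n_hidden)) * 0.1
        self.m, self.n = m, n

    def step(self, features):
        x = self.W_in @ features
        xh = np.concatenate([x, self.h])
        z = sigmoid(self.W_z @ xh)           # update gate
        r = sigmoid(self.W_r @ xh)           # reset gate
        c = np.tanh(self.W_c @ np.concatenate([x, r * self.h]))
        self.h = (1 - z) * self.h + z * c    # GRU state update
        return (self.W_out @ self.h).reshape(self.m, self.n)  # Kalman gain K_t

rng = np.random.default_rng(0)
net = GainRNN(n_features=4, n_hidden=40, m=3, n=1, rng=rng)
K = net.step(np.array([0.5, 0.2, 0.5, 0.2]))   # four features, as in Section 3
```

One step of the network turns the four input features into a gain of the right shape, while updating the internal GRU memory that persists across filter iterations.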
To evaluate the overall performance of KalmanNet with respect to the traditional Kalman filter, 6600 rounds of simulations have been performed, considering different values for the parameters described above and different clock ensembles. Table 2 summarizes these results for two clock ensembles and different configurations, the best configuration being the one minimizing the RMS of residuals (RMS_res,KNet) with respect to the computation time (time_c, i.e., the time needed to retrieve the Kalman gain K_t, once n_epoch is reached). The simulations were performed using Python 3.10.11 on a Windows 11 24H2 desktop computer equipped with an i9-12900K 3.20 GHz CPU and an Nvidia GeForce RTX 4090 GPU. Regarding n_sample and l_batch, their duration is expressed in number of points, where the time between two points represents 600 s. Note also that, although four different n_neuron values were tested, Table 2 only shows results for 40 neurons, as this value represents the best trade-off between computation time and performance.
As per Table 2, the configuration bringing the best performance is configuration #7. Figure 4 presents the results obtained with this configuration for the clock ensemble GROUND_001 and GROUND_003. In comparison with the traditional Kalman filter, performance appears to be better when using KalmanNet.

5. Outlook and Conclusions

The use of a neural network enhanced Kalman filter provides very promising results for estimating a clock's state. KalmanNet can provide real-time state estimation using continuous learning, which seems to overcome the model mismatches and non-linearities encountered with a conventional Kalman filter. Following these encouraging results, applying a KalmanNet filter to a composite clock is quite direct: as described in Section 3, it consists of replacing the Kalman filter generally deployed today with this new approach. In the GNSS context, this can consist of embedding the KalmanNet filter into the composite clock, as in [1] or [12]. On this basis, the Deep Learning Assisted Composite Clock (DLACC) can finally emerge; it will be the heart of the next studies performed with this new approach.
In this study, measurement noise and the integration of KalmanNet into a full composite clock have not been considered. Additional work is therefore planned to evaluate performance in a more realistic context with respect to measurements, which bring more uncertainty into state estimation. In addition, the clock datasets used here represent ideal behaviors without any clock anomalies (such as data gaps or sudden, unusual drifts); such anomalies need to be injected into KalmanNet to test its robustness. In parallel, the computational performance of such a filter needs to be further optimized: while the computation times shown in Table 2 for configuration #7 are satisfactory given that new measurements arrive every 10 min (τ = 600 s), clear optimizations of the current KalmanNet implementation are possible by better parallelizing the computations performed inside the RNN. Finally, the insertion of KalmanNet inside a real composite clock algorithm (including anomaly detection) will become the priority in the coming months, including tests with experimental clock data.

Author Contributions

Conceptualization, methodology, A.M., G.F., H.S. and A.C.; development, validation, G.F. and A.M.; writing—original draft preparation, G.F. and A.M.; writing—review and editing, A.M., G.F., H.S. and A.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data and documents used for this study are available upon request to the corresponding author of this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
BIPM: Bureau International des Poids et Mesures
CC: Composite Clock
DLACC: Deep Learning Assisted Composite Clock
DLR: Deutsches Zentrum für Luft- und Raumfahrt (German Aerospace Center)
EAL: Échelle Atomique Libre (free atomic timescale)
FFM: Flicker Frequency Modulation
IGS: International GNSS Service
ML: Machine Learning
NPL: National Physical Laboratory
RMS: Root Mean Square
RNN: Recurrent Neural Network
RWFM: Random-Walk Frequency Modulation
SI: Système International (international system)
UTC: Universal Time Coordinated
WFM: White Frequency Modulation

References

  1. Brown, K.R. The Theory of the GPS Composite Clock. In Proceedings of the 4th International Technical Meeting of the Satellite Division of the Institute of Navigation (ION GPS 1991), Albuquerque, NM, USA, 11–13 September 1991; pp. 223–243.
  2. Senior, K.; Koppang, P.; Ray, J. Developing an IGS Time Scale. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 2003, 50, 585–593.
  3. Zorzi, M. Robust Kalman filtering under model perturbations. IEEE Trans. Autom. Control 2016, 62, 2902–2907.
  4. Zorzi, M. On the robustness of the Bayes and Wiener estimators under model uncertainty. Automatica 2017, 83, 133–140.
  5. Longhini, A.; Perbellini, M.; Gottardi, S.; Yi, S.; Liu, H.; Zorzi, M. Learning the tuned liquid damper dynamics by means of a robust EKF. arXiv 2021, arXiv:2103.03520.
  6. Revach, G.; Shlezinger, N.; Ni, X.; Escoriza, A.L.; Van Sloun, R.J.; Eldar, Y.C. KalmanNet: Neural Network Aided Kalman Filtering for Partially Known Dynamics. IEEE Trans. Signal Process. 2022, 70, 1532–1547.
  7. Hutsell, S. Relating the Hadamard Variance to MCS Kalman filter clock estimation. In Proceedings of the 27th PTTI Systems and Applications Meeting, San Diego, CA, USA, 29 November–1 December 1995; pp. 291–302.
  8. Hutsell, S. Fine Tuning GPS Composite Clock Estimation in the MCS. In Proceedings of the 26th PTTI Systems and Applications Meeting, Reston, VA, USA, 6–8 December 1994; pp. 63–74.
  9. Satin, A.; Leondes, C. Ensembling Clocks of Global Positioning System. IEEE Trans. Aerosp. Electron. Syst. 1990, 26, 84–87.
  10. Stein, S. Time Scales Demystified. In Proceedings of the IEEE International Frequency Control Symposium and PDA Exhibition Jointly with the 17th European Frequency and Time Forum, Tampa, FL, USA, 4–8 May 2003; pp. 223–227.
  11. Senior, K.L.; Coleman, M.J. The Next Generation GPS Time. NAVIGATION J. Inst. Navig. 2017, 64, 411–426.
  12. Roldan, P.; Trilles, S.; Serena, X.; Tajdine, A. Novel Composite Clock Algorithm for the Generation of Galileo Robust Timescale. In Proceedings of the 35th International Technical Meeting of the Satellite Division of the Institute of Navigation (ION GNSS+ 2022), Denver, CO, USA, 19–23 September 2022; pp. 2790–2799.
  13. Hamilton Technical Services. STABLE32 User Manual; Hamilton Technical Services: Beaufort, SC, USA, 2008; Available online: http://www.stable32.com/Manual154.pdf (accessed on 1 February 2025).
Figure 1. Proposed integration of composite clock in Galileo, as per [12].
Figure 2. Kalman filter where gain is computed through recurrent neural network.
Figure 3. Dataset clocks offset with respect to reference clock GROUND_001: (a) one week, (b) one month.
Figure 4. KalmanNet performance evaluation with configuration #7 (clock ensemble GROUND_001—GROUND_003): (a) residuals for 30 days, (b) Allan deviation.
Table 1. Ground and space clocks' parameters.

Clock Type | Time Series Configuration
Ground | τ = 600 s, t = 100 years (5,259,600 points); RWFM = 0 (Random-Walk Frequency Modulation); FFM = 8.0 × 10⁻¹⁵ (Flicker Frequency Modulation); WFM = 4.5 × 10⁻¹² (White Frequency Modulation); f_drift = 0 Hz/s (frequency drift)
Space | τ = 600 s, t = 100 years (5,259,600 points); RWFM = 0 (Random-Walk Frequency Modulation); FFM = 1.0 × 10⁻¹⁴ (Flicker Frequency Modulation); WFM = 1.0 × 10⁻¹² (White Frequency Modulation); f_drift = 1.0 × 10⁻¹⁹ Hz/s (frequency drift)
Table 2. Performance evaluation of the KalmanNet filter in reproducing clock offsets in a GNSS context (worst value over 100 runs).

Clock ensemble GROUND_001–GROUND_003:
Config. | Arch. | n_neuron | n_epoch | n_sample | l_batch | time_c (s) | RMS_res,KNet | RMS_res,EKF
1 | 1-GRU | 40 | 25 | 1000 | 100 | 323 | 1.72489 × 10⁻¹⁰ | 1.60188 × 10⁻¹⁰
2 | 1-GRU | 40 | 25 | 4000 | 1000 | 931 | 1.56059 × 10⁻¹⁰ | 1.60188 × 10⁻¹⁰
3 | 1-GRU | 40 | 50 | 1000 | 100 | 524 | 1.39614 × 10⁻¹⁰ | 1.60188 × 10⁻¹⁰
4 | 1-GRU | 40 | 50 | 4000 | 1000 | 1738 | 1.24873 × 10⁻¹⁰ | 1.60188 × 10⁻¹⁰
5 | 3-GRU | 40 | 25 | 1000 | 100 | 253 | 1.39167 × 10⁻¹⁰ | 1.60188 × 10⁻¹⁰
6 | 3-GRU | 40 | 25 | 4000 | 1000 | 853 | 1.31466 × 10⁻¹⁰ | 1.60188 × 10⁻¹⁰
7 | 3-GRU | 40 | 50 | 1000 | 100 | 413 | 1.01084 × 10⁻¹⁰ | 1.60188 × 10⁻¹⁰
8 | 3-GRU | 40 | 50 | 4000 | 1000 | 1473 | 1.00498 × 10⁻¹⁰ | 1.60188 × 10⁻¹⁰

Clock ensemble GROUND_001–SV_007:
Config. | Arch. | n_neuron | n_epoch | n_sample | l_batch | time_c (s) | RMS_res,KNet | RMS_res,EKF
1 | 1-GRU | 40 | 25 | 1000 | 100 | 316 | 1.70944 × 10⁻¹⁰ | 1.20055 × 10⁻¹⁰
2 | 1-GRU | 40 | 25 | 4000 | 1000 | 953 | 1.44622 × 10⁻¹⁰ | 1.20055 × 10⁻¹⁰
3 | 1-GRU | 40 | 50 | 1000 | 100 | 523 | 1.33926 × 10⁻¹⁰ | 1.20055 × 10⁻¹⁰
4 | 1-GRU | 40 | 50 | 4000 | 1000 | 1697 | 1.18975 × 10⁻¹⁰ | 1.20055 × 10⁻¹⁰
5 | 3-GRU | 40 | 25 | 1000 | 100 | 259 | 1.38892 × 10⁻¹⁰ | 1.20055 × 10⁻¹⁰
6 | 3-GRU | 40 | 25 | 4000 | 1000 | 838 | 1.31793 × 10⁻¹⁰ | 1.20055 × 10⁻¹⁰
7 | 3-GRU | 40 | 50 | 1000 | 100 | 427 | 1.02095 × 10⁻¹⁰ | 1.20055 × 10⁻¹⁰
8 | 3-GRU | 40 | 50 | 4000 | 1000 | 1438 | 1.01168 × 10⁻¹⁰ | 1.20055 × 10⁻¹⁰

Share and Cite

Fayon, G.; Mudrak, A.; Sobreira, H.; Castillo, A. Deep Learning Assisted Composite Clock: Robust Timescale for GNSS Through Neural Network. Eng. Proc. 2026, 126, 2. https://doi.org/10.3390/engproc2026126002

