Prediction of GPS Satellite Clock Offset Based on an Improved Particle Swarm Algorithm Optimized BP Neural Network

Abstract: Satellite clock offset is an important factor affecting the accuracy of real-time precise point positioning (RT-PPP). When real-time service (RTS) products provided by the International GNSS Service (IGS) are missing or network faults occur, users may not obtain effective real-time corrections, making RT-PPP unavailable. Considering this issue, an improved back propagation (BP) neural network optimized by the heterogeneous comprehensive learning and dynamic multi-swarm particle swarm optimizer (HPSO-BP) is proposed for clock offset prediction. The new model uses the particle swarm optimizer to optimize the initial parameters of the BP neural network, which can avoid the instability and over-fitting problems of the traditional BP neural network. IGS RTS product data are selected for the experimental analysis; the results demonstrate that the average prediction precision of the HPSO-BP model for the 20-min and 60-min prediction periods is better than 0.15 ns, an improvement of approximately 85% over traditional models including the linear polynomial (LP) model, the quadratic polynomial (QP) model, the gray system model (GM(1,1)), and the ARMA time series model. This indicates that the HPSO-BP model has good practicability and stability in short-term satellite clock offset prediction, and its prediction performance is superior to that of traditional models. Therefore, in practical applications, the clock offset products predicted by the HPSO-BP model can meet the centimeter-level positioning accuracy requirements of RT-PPP.


Introduction
Precise point positioning (PPP) has developed rapidly since it was put forward [1,2], and real-time precise point positioning (RT-PPP) is a hot topic in current research. The key factors restricting the broad application of RT-PPP are the accuracy and real-time performance of satellite orbit and satellite clock offset products [3,4]. Since 2013, the International GNSS Service (IGS) has provided a real-time service (RTS), including high-precision orbit and clock products [5,6]. Through the daily comparison between the analysis centers and the IGS rapid solution, the accuracy of the RTS clock offset product can reach 0.1-0.15 ns [7]. However, there may be problems in actual use: RTS products are often missing, the clock offset data may contain gross errors and jumps, or the data may be unobtainable due to network faults [8,9]. Through short-term prediction, clock offset data with accuracy equivalent to that of RTS products can be used to achieve real-time high-precision positioning when RTS products are interrupted. Therefore, it is indispensable to establish a high-precision short-term clock offset prediction model.
At present, researchers have established many prediction models for the satellite clock offset, including the linear polynomial (LP) model, the quadratic polynomial (QP) model, the grey model (GM(1,1)) and so on. However, the LP model does not consider the influence of clock drift on the prediction of the clock offset. Noise in the QP model is regarded as an error obeying the normal distribution, so the prediction accuracy decreases with time. Grey model prediction requires the original function to be smooth and change exponentially, and the prediction accuracy is easily affected by the coefficients of the function. Considering the limitations of single prediction models, combined clock offset prediction models have been proposed [10-12]. The experimental results show that the combined model has better prediction accuracy and stability than the single model. However, the performance of a prediction model that fuses multiple models is affected to a certain extent by the individual models involved, and the optimal selection of the weights of the individual models in the combined model is also difficult. Since satellite clocks are easily affected by the external environment, noise is inevitable in the clock offset, resulting in periodic and random changes in the satellite clock offset [13,14]. In addition, some clocks have phase and frequency jumps, so there are many complexities in modelling atomic clocks [15-17].
The traditional models have shortcomings in expressing the nonlinear characteristics of the clock offset, and it is difficult to improve the prediction accuracy further. However, neural networks are more sensitive to nonlinear problems and can overcome the limitations of conventional models to deliver more accurate predictions [18,19]. The study in [20] used a wavelet neural network to predict the BeiDou satellite clock offset; the experimental results show that the 6-h prediction accuracy can reach 1~2 ns and the 24-h prediction accuracy can reach 2~4.6 ns, which is better than the traditional quadratic polynomial model and grey model. However, an improper selection of the wavelet basis function affects the prediction accuracy, and the initial parameters of the wavelet neural network are randomly selected; inappropriate parameter initialization can cause the network learning process not to converge, which affects the prediction results. The study in [21] used a quadratic polynomial model with additional periodic terms to predict the BeiDou satellite clock offset, with a BP neural network used to compensate for the nonlinear systematic error in the fitting residual. The experimental results show that the performance and accuracy of the improved model are better than those of the ISU-P and GBU-P products. However, the initial parameters of the BP neural network are randomly selected, so different initial values affect the prediction results, and a wrongly predicted nonlinear systematic error seriously degrades the prediction accuracy of the model. One study [22] used the BP neural network to predict the GPS satellite clock offset; the experimental results show that the prediction accuracy of the BP neural network is better than that of the grey model. However, the BP neural network has shortcomings such as slow convergence speed and a tendency to fall into local optima, which affect the final convergence accuracy [23].
At present, there have been few studies on using a particle-swarm-optimized BP neural network for clock offset prediction. In this contribution, we optimize the BP neural network using an improved particle swarm optimization (PSO) algorithm, so that the method is not affected by the random initial parameters of the BP neural network. The PSO algorithm is a population-based random search algorithm, which is fast and easy to implement, making it well suited to finding the optimal value of single- or multi-objective problems [24,25]. However, the disadvantage of the standard PSO algorithm is that it easily falls into a local optimum, resulting in premature convergence. Therefore, we chose the HCLDMS-PSO algorithm [26], whose training speed, convergence accuracy and reliability are better than those of the traditional PSO algorithm, to optimize the initial weights and thresholds of the BP neural network. In this way, the HCLDMS-PSO algorithm keeps the BP neural network from falling into a local optimum and improves the prediction accuracy of the network. We therefore apply the resulting HPSO-BP model to predict the satellite clock offset.
Due to the influence of the external environment, the satellite clock offset data present nonlinear characteristics, and traditional methods cannot accurately model the atomic clock. However, the neural network in the HPSO-BP model has strong nonlinear mapping ability and is more sensitive to nonlinear data. In addition, the neural network has strong self-learning and fault-tolerant capabilities, and individual erroneous samples do not affect the characteristics of the entire model. Through training and learning on historical data, a network model with high prediction accuracy can be obtained. The choice of the basis function of the wavelet neural network is more complicated, and additional initial parameters such as the scale factor are difficult to determine accurately, which affects the convergence speed of the network. Compared with the wavelet neural network prediction model, the BP neural network in the HPSO-BP model is simple, easy to master, and has strong network stability. In addition, the HPSO-BP model uses the particle swarm algorithm to optimize the initial parameters of the neural network, which can prevent the neural network from falling into a local optimum and obtain the optimal training network, thereby improving the prediction accuracy. Existing literature used the BP neural network to predict the fitting residual of the quadratic polynomial model, but this only compensates for the quadratic polynomial model; conversely, the HPSO-BP model directly processes the satellite clock offset data, which makes full use of the known information about the clock offset and can predict the clock offset data more accurately. Finally, whereas the performance of the traditional BP neural network model suffers from the random selection of its initial parameters, the HPSO-BP model optimizes the initial parameters through the particle swarm algorithm, which improves the convergence speed and yields clock offset data with better accuracy.
The prediction precision of the satellite clock offset can be significantly increased by combining the neural network with the "first differencing" processing method for modeling. The "first differencing" processing method is equivalent to converting to the frequency domain, because the frequency is the first difference divided by the time interval between data [27-29]. This method can meet real-time positioning requirements for short-term prediction in abnormal situations such as an interruption of RTS products. First, we use the "first differencing" method to process adjacent epochs of the original clock offset data. Second, we introduce a new model which uses the HCLDMS-PSO algorithm to optimize the initial parameters of the BP neural network to achieve the optimal estimation of the clock offset. Finally, we analyze the applicability of our model and method in clock offset forecasting through experiments.
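The "first differencing" step and its inverse can be illustrated by the following minimal Python sketch (the function names and sample values are our own, for illustration only): adjacent epochs are differenced before training, and the absolute offsets are restored from the predicted differences by cumulative summation.

```python
import numpy as np

def first_difference(clock_offsets):
    """Difference adjacent epochs of a clock offset series (seconds)."""
    return np.diff(clock_offsets)

def restore(last_known_offset, predicted_diffs):
    """Rebuild absolute clock offsets from predicted first differences."""
    return last_known_offset + np.cumsum(predicted_diffs)

# made-up offsets at a 30-s sampling interval
offsets = np.array([1.0e-4, 1.00003e-4, 1.00007e-4, 1.00012e-4])
diffs = first_difference(offsets)       # 3 differences from 4 epochs
recovered = restore(offsets[0], diffs)  # matches offsets[1:]
```

Dividing each difference by the 30-s sampling interval would give the fractional frequency, which is why this preprocessing is equivalent to working in the frequency domain.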
The rest of this article is organized as follows: in Section 2, the improved particle swarm optimization algorithm, named the heterogeneous comprehensive learning and the dynamic multi-swarm particle swarm optimizer (HCLDMS-PSO), is introduced. In Section 3, the HPSO-BP model for satellite clock offset prediction is introduced. In Section 4, the satellite clock offset products in the final precise ephemeris of RTS provided by IGS are selected for experiments. In Section 5, the discussion of the results is given. Finally, some conclusions and summaries are given in Section 6.

Heterogeneous Comprehensive Learning and the Dynamic Multi-Swarm Particle Swarm Optimizer (HCLDMS-PSO)
The particle swarm optimization (PSO) algorithm is a swarm intelligence algorithm based on a population-based search. By simulating the predation behavior of a flock of birds, the PSO algorithm iteratively optimizes the different particles randomly generated in the population. The randomly generated particles represent different solutions, and the optimal solution is obtained by adjusting the position and velocity of each particle during the iteration. The PSO algorithm is widely applied in various fields owing to its advantages of fast convergence and easy implementation. However, the PSO algorithm is prone to diversity loss and premature convergence [30]. The HCLDMS-PSO algorithm fully combines the advantages of the comprehensive learning particle swarm optimization (CLPSO) algorithm and the dynamic multi-swarm particle swarm optimizer (DMS-PSO) algorithm, achieving a good balance between exploration and exploitation. The training speed, convergence accuracy, and reliability of the HCLDMS-PSO algorithm are better than those of the traditional PSO algorithm [26].
Assume the vector X = (X_1, X_2, ..., X_N) represents a population of N particles in a D-dimensional space, where the position and velocity of the ith particle are X_i = (x_i1, x_i2, ..., x_iD) and V_i = (v_i1, v_i2, ..., v_iD), respectively. The historical optimal position of the ith particle and the best position of the entire population are pbest_i = (pbest_i1, pbest_i2, ..., pbest_iD) and gbest = (gbest_1, gbest_2, ..., gbest_D), respectively. Each particle updates its position and velocity in every evolution process after finding the individual optimal position and the global optimal position. The HCLDMS-PSO algorithm divides the entire population into two different sub-populations for exploration and exploitation, respectively. One sub-population adopts the comprehensive learning (CL) strategy, where particles learn from different historical experiences in different dimensions and also learn from the position gbest of the entire population. The velocity and position update formulas can be expressed as:

v_id = ω_1 v_id + c_1 r_1 (pbest_{f_i(d),d} − x_id) + c_2 r_2 (gbest_d − x_id),  x_id = x_id + v_id   (1)

where ω_1 is the inertia weight parameter, which decreases linearly from 0.99 to 0.2 with the number of iterations; d = 1, 2, ..., D are the dimensions of the parameters; i = 1, 2, ..., N; x_id and v_id are the position and velocity of the ith particle; f_i = [f_i(1), f_i(2), ..., f_i(D)] defines whose pbest the ith particle should follow in the dth dimension; c_1 and c_2 are the acceleration coefficients; and r_1 and r_2 are two random numbers in the range [0, 1]. The other sub-population adopts the dynamic multi-swarm (DMS) strategy. Its position update formula is the same as that in (1), and its velocity update formula can be expressed as:

v_id = ω_2 v_id + c_1 r_1 (pbest_id − x_id) + c_2 r_2 (lbest_d − x_id)   (2)

where lbest is the local best experience of the sub-swarm to which the particle belongs, and the remaining parameters are the same as above.
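The two update rules can be sketched in Python as follows; this is a minimal illustration in which the acceleration coefficients and the exemplar handling are simplified assumptions rather than the exact HCLDMS-PSO settings.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4                                   # illustrative parameter dimension

def cl_update(v, x, pbest_exemplar, gbest, w1, c1=1.49, c2=1.49):
    """CL strategy: each dimension follows an exemplar pbest and gbest (Eq. 1)."""
    r1, r2 = rng.random(D), rng.random(D)
    v_new = w1 * v + c1 * r1 * (pbest_exemplar - x) + c2 * r2 * (gbest - x)
    return v_new, x + v_new

def dms_update(v, x, pbest, lbest, w2, c1=1.49, c2=1.49):
    """DMS strategy: particles follow their own pbest and the sub-swarm lbest (Eq. 2)."""
    r1, r2 = rng.random(D), rng.random(D)
    v_new = w2 * v + c1 * r1 * (pbest - x) + c2 * r2 * (lbest - x)
    return v_new, x + v_new
```

Note that when a particle already sits at both attractors, the attraction terms vanish and only the inertia term remains, which is the behavior the formulas above describe.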
A nonlinear adaptive inertia weight ω_2 is constructed based on the sigmoid function to improve the exploration and exploitation capability of the particles in the DMS sub-population. The weight is adjusted according to the search performance of the different sub-swarms: ω_max and ω_min are the maximum and minimum values of the inertia weight, which are 0.99 and 0.20, respectively; t is the current evolution time, while T is the maximum evolution time; m_i and M are the average fitness values of sub-swarm i and the entire DMS sub-population, respectively; and C is a constant with the value 0.15. The adjusted weight ω_2 ranges between ω_min and ω_max, and its specific definition is given in the original text [26]. Introducing a non-uniform mutation operator into the DMS sub-population can effectively increase the diversity of the sub-population and enhance its capabilities of global search and local development to avoid premature convergence. Finally, to prevent the entire population from falling into the local optimum, a Gauss mutation operator is applied to the optimal position gbest.
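The gbest safeguard can be illustrated as follows. This sketch assumes a simple greedy variant, in which the Gaussian mutant replaces gbest only when it improves the fitness; the exact operator in [26] may differ in detail.

```python
import numpy as np

rng = np.random.default_rng(1)

def gauss_mutate_gbest(gbest, fitness, sigma=0.1):
    """Perturb gbest with zero-mean Gaussian noise; keep the mutant only if it
    lowers the fitness, so the safeguard can never make gbest worse."""
    candidate = gbest + sigma * rng.standard_normal(gbest.shape)
    return candidate if fitness(candidate) < fitness(gbest) else gbest

# illustrative fitness: the sphere function
sphere = lambda x: float(np.sum(x ** 2))
g = np.array([0.5, -0.3])
g2 = gauss_mutate_gbest(g, sphere)
```

Because the mutant is accepted only on improvement, the operator adds a chance to escape a local optimum without ever degrading the recorded global best.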

Construction of the HPSO-BP Model Used for Satellite Clock Offset Prediction
The BP neural network adjusts the weights and thresholds through repeated training to constantly make the output value approach the expected value. However, the selection of the initial weight and threshold of the network severely affects the convergence and accuracy of the BP neural network, and the final result obtained after the training easily falls into the local optimum. To avoid the BP neural network falling into the local minimum determined by the randomly initialized parameters, we apply the HCLDMS-PSO algorithm to optimize the initial weights and thresholds of the BP neural network.
Before forecasting the clock offset, we need to determine the neural network structure. In general, the number of neurons in the output layer is equal to the number of output quantities; since the output value is the clock offset, the number of neurons in the output layer is 1. The number of neurons in the hidden layer, however, is usually set according to experience, without a specific theoretical foundation [31]. Assuming there is a set of clock offset data {C_1, C_2, ..., C_N}, in the training process of the neural network we establish the mapping relationship between {C_1, C_2, ..., C_m} and {C_m+1}. Moreover, on the premise that the number of samples remains unchanged, we adopt the moving window idea and continuously use new prediction data to replace the previously known data to perform multi-epoch satellite clock offset prediction. The specific steps of using the proposed HPSO-BP model for satellite clock offset prediction are as follows:

1. Initialize the BP neural network parameters, including the network learning rate, the permissible error, and the maximum number of training iterations.

2. Divide the processed clock offset data into an input part and an expected output part. Assuming that N satellite clock offset data are involved in training, they can be divided into M groups. The network input can be expressed as X = [x(1), x(2), ..., x(j), ..., x(M)], and the corresponding network output as Y = [y(1), y(2), ..., y(j), ..., y(M)], y(j) ∈ R^N. It should be noted that before bringing a sample into the neural network training, the sample data should be normalized to the interval [−1, 1].

3. Use the HCLDMS-PSO algorithm to optimize the initial weights and thresholds of the BP neural network. Specifically, the steps are as follows:
(1) Initialize the parameters of the HCLDMS-PSO algorithm, including the number of particles, the dimension of the variables, the maximum number of iterations, and the initial positions and velocities of the particles.
(2) Divide the entire population into two different sub-populations, and update the positions and velocities of the particles with the comprehensive learning (CL) strategy and the dynamic multi-swarm (DMS) strategy, respectively.
(3) Set the mean square error (MSE) obtained from the neural network training as the fitness value. According to the fitness of each particle, continuously update the individual extreme value pbest, the global extreme value gbest, and the lbest of each sub-swarm in the DMS sub-population. Furthermore, introduce the Gauss mutation operator into the best position gbest, update the exemplar for the CL sub-population, and regroup the particles of the DMS sub-swarms.
(4) Check the stopping condition. If the fitness value reaches the optimum or the number of iterations reaches the maximum, stop the parameter optimization; otherwise, return to step (3) until the requirements are met. When the iteration ends, the initial weights and thresholds of the neural network are obtained.

4. Substitute the optimized initial weights and thresholds into the BP neural network, and train on the sample datasets prepared in step 2. The network propagates each input sample forward layer by layer; the actual output of the jth neuron in layer k can be written as

Out_k,j = f( Σ_i ω_ij Out_k−1,i − θ_k,j ),  k = 2, ..., n

where n represents the number of network layers, f is the activation function, ω_ij and θ_k,j represent the weights and thresholds of the corresponding layer, respectively, and Out_n,j represents the result of the output layer.

5. In the process of network training, calculate the system error between the actual output vector and the expected output vector. If the error is less than the allowable error value set in advance, or the number of training iterations has reached the preset value, terminate the training; otherwise, proceed to the next step.

6. According to the error between the actual output value and the expected output value in this training stage, use the gradient descent method to adjust the weights and thresholds in the network so that the error value becomes smaller. Then recalculate the output value using the adjusted parameters, and obtain the error between the actual and expected output values again. Repeat this process until the error meets the limit set in advance or the number of training iterations reaches the preset value.

7. Predict the data with the trained network, and perform the inverse normalization operation on the predicted values to obtain the final satellite clock offset data.
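The optimization stage of the steps above can be condensed into the following sketch. It uses a plain global-best PSO as a simplified stand-in for HCLDMS-PSO, a one-hidden-layer network, and a synthetic differenced series; all function names, hyper-parameters and data here are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(42)

def unpack(theta, n_in, n_hid):
    """Split a flat particle into weights/thresholds of a 1-hidden-layer net."""
    i = 0
    W1 = theta[i:i + n_in * n_hid].reshape(n_hid, n_in); i += n_in * n_hid
    b1 = theta[i:i + n_hid]; i += n_hid
    W2 = theta[i:i + n_hid].reshape(1, n_hid); i += n_hid
    b2 = theta[i:i + 1]
    return W1, b1, W2, b2

def forward(theta, X, n_in, n_hid):
    W1, b1, W2, b2 = unpack(theta, n_in, n_hid)
    H = np.tanh(X @ W1.T + b1)          # hidden layer
    return (H @ W2.T + b2).ravel()      # single linear output neuron

def mse(theta, X, y, n_in, n_hid):
    """Training MSE, used as the particle fitness."""
    return float(np.mean((forward(theta, X, n_in, n_hid) - y) ** 2))

def pso_init(X, y, n_in, n_hid, n_particles=20, iters=50):
    """Global-best PSO stand-in: the best particle supplies the net's
    initial weights and thresholds for subsequent gradient training."""
    dim = n_in * n_hid + n_hid + n_hid + 1
    pos = rng.uniform(-1, 1, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([mse(p, X, y, n_in, n_hid) for p in pos])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for t in range(iters):
        w = 0.99 - (0.99 - 0.2) * t / iters   # linearly decreasing inertia
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + 1.49 * r1 * (pbest - pos) + 1.49 * r2 * (gbest - pos)
        pos = pos + vel
        f = np.array([mse(p, X, y, n_in, n_hid) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest

# toy training set: windows of m = 3 differenced offsets -> next difference
m, n_hid = 3, 2 * 3 + 1                 # hidden nodes = 2*m + 1 (Kolmogorov rule)
series = np.sin(np.linspace(0, 4, 40))  # synthetic stand-in for differenced data
X = np.array([series[i:i + m] for i in range(len(series) - m)])
y = series[m:]
theta0 = pso_init(X, y, m, n_hid)
```

In the full method, theta0 would seed the BP network for gradient-descent refinement (steps 4-6) rather than being used directly for prediction.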
After completing the neural network training with the sample data, we use a moving window method to realize multi-step prediction in the forecasting process, according to the input and output vector dimensions of the network during the training phase. When the window moves, with the amount of data in the window fixed at m, we delete the first value in the data sequence and append the newly predicted epoch at the end of the sequence; the input layer of the neural network thus has m nodes, and the output layer has one. The details are shown in Table 1.

Table 1. Multi-step prediction of the neural network.

Input                              Output
C_1, C_2, ..., C_m                 C_m+1
C_2, C_3, ..., C_m+1               C_m+2
...                                ...
C_k, C_k+1, ..., C_k+m−1           C_k+m

The numbers of input layer and hidden layer nodes of the neural network are generally selected through experience, without a definite stipulation. Too many nodes will not only increase the computational demands, but may also affect the final convergence accuracy. Therefore, to ensure the training and prediction precision, the number of nodes should be kept as small as possible. The number of hidden layer nodes is selected according to the Kolmogorov theorem as twice the number of input layer nodes plus one [23]. To improve the data quality and the accuracy of clock offset prediction, the data sequence used for training and prediction is the clock offset data after "first differencing" processing. Finally, after obtaining the predicted values, the corresponding satellite clock offset prediction values can be obtained by restoring the differences. The flow chart of the HPSO-BP algorithm is shown in Figure 1.
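The moving-window scheme of Table 1 can be sketched as follows; the stand-in model and the values are hypothetical, standing in for the trained network.

```python
import numpy as np

def multi_step_predict(model, window, steps):
    """Moving-window multi-step prediction: feed the last m values, append the
    prediction, drop the oldest value, and repeat."""
    window = list(window)
    preds = []
    for _ in range(steps):
        nxt = model(np.array(window))
        preds.append(nxt)
        window = window[1:] + [nxt]     # slide the window by one epoch
    return preds

# hypothetical stand-in model: predicts the mean of the window
mean_model = lambda w: float(np.mean(w))
out = multi_step_predict(mean_model, [1.0, 2.0, 3.0], steps=2)
```

Note that from the second step onward, predicted values feed back into the window, which is why prediction errors can accumulate as the horizon grows.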
Figure 1. Flow chart of the HPSO-BP algorithm (CL is the comprehensive learning strategy; DMS is the dynamic multi-swarm strategy; X_i and V_i represent the position and velocity of the ith particle, respectively; pbest_i and gbest represent the historical optimal position of the ith particle and the best position of the entire population, respectively; lbest_i is the local best experience; fes is the number of iterations; HPSO-BP represents the BP neural network model optimized by the HCLDMS-PSO algorithm).

Experiment Results
Example 1
To verify the practicability of the algorithm, we selected the satellite clock offset products in the final precise ephemeris of the RTS provided by IGS for experimental analysis. The data are publicly available on the IGS website (https://cddis.nasa.gov/, accessed on 10 April 2022). Taking the satellite clock offset data on the fourth day of GPS week 2150 (24 March 2021) as an example, the sampling interval is 30 s. There are six types of clocks for GPS satellites: Block IIR Rb, Block IIR-M Rb, Block IIA Rb, Block IIA Cs, Block IIF Rb and Block IIF Cs. The selected satellites are marked in bold font in Table 2, covering the above six clock types. Taking the clock offset products of the RTS as the benchmark, we compared the predicted clock offset values against them. The root mean square (RMS) error is used to evaluate the reliability of the prediction results, and can be expressed as:

RMS = sqrt( (1/n) Σ_{i=1}^{n} (Ĉ_i − C_i)^2 )

where Ĉ_i is the predicted clock offset value, C_i is the RTS clock offset value, and n is the number of predicted clock offsets. To compare and analyze the prediction performance of the BP model and the HPSO-BP model for the satellite clock offset, we selected satellites of different clock types (PRN01, 02, 04, 10, 12, 24) for experiments. To comprehensively compare the prediction precision, we used the single-difference satellite clock offset data of the first 2 h of the day for modeling to independently forecast the next 10-min, 30-min and 60-min clock offset values five times. The prediction precision statistics of the BP model and the HPSO-BP model are shown in Tables 3-5. Tables 3-5 show that the multiple prediction precisions of the BP model and the HPSO-BP model change little for the different prediction periods of the six satellites, indicating that the selected network structure has a good prediction effect.
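The RMS statistic above can be computed as in this short sketch (the values are illustrative):

```python
import numpy as np

def rms_error(predicted, reference):
    """RMS of prediction errors against the RTS benchmark, per the formula above."""
    predicted, reference = np.asarray(predicted), np.asarray(reference)
    return float(np.sqrt(np.mean((predicted - reference) ** 2)))

# toy example: errors of 0, 0 and 2 give RMS = sqrt(4/3)
rms = rms_error([1.0, 2.0, 4.0], [1.0, 2.0, 2.0])
```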
In the three satellite clock offset prediction periods, the average RMS values of the five prediction results of the HPSO-BP model are all smaller than those of the BP model, and the prediction precision of the HPSO-BP model degrades less than that of the BP model as the prediction time extends. Because the initial weights and thresholds of the BP neural network are randomly selected, the BP neural network easily falls into a local optimum during training, increasing the network training error and consequently reducing the prediction precision. However, the initial network parameters of the HPSO-BP model are optimized by the HCLDMS-PSO algorithm, which prevents the model from being affected by local extremes during training and improves the accuracy and stability of the prediction results.

Example 2
To comprehensively verify the prediction performance of the HPSO-BP model, we used the data of the first 6 h of a day (24 March 2021) for modeling to predict the satellite clock offset data of the next 20-min and 60-min periods. We compared the prediction results of the HPSO-BP model with those of four common models: the linear polynomial (LP) model, the quadratic polynomial (QP) model, the gray system model (GM(1,1)), and the ARMA time series model. Figure 2 shows the prediction precision of the HPSO-BP model and the four common models in the two prediction periods for 18 satellites with different clock types. It can be seen from the figure that the prediction precision of the HPSO-BP model outperforms that of the other four prediction models.
For 18 satellites of different atomic clock types, the HPSO-BP model has the highest prediction precision in the two prediction periods among the five prediction models. Among them, the prediction precision of PRN24 was improved the most. The RMS value of the prediction results of the HPSO-BP model was smaller than that of the other four models, showing that the HPSO-BP model has good performance in the satellite clock offset prediction of different clock types. According to the statistics of the respective average prediction results of the satellite atomic clock types, the bar charts for the RMS of each atomic clock type in the two prediction periods are shown in Figures 3 and 4. The corresponding values of RMS are listed in Tables 6 and 7.
From the prediction results of the two periods in Tables 6 and 7, the prediction precision of the HPSO-BP model outperforms the four common models when forecasting the satellite clock offset of six different atomic clock types. In the two prediction periods, compared with the four models, the average prediction precision of the HPSO-BP model increases by approximately 85%. When the prediction period increases, the prediction precision of the HPSO-BP model changes less. Moreover, its prediction results are more stable than the four common models and the predicted clock offset is more in line with the actual value, which indicates the good stability of the model and the feasibility of neural network structure to a certain extent. The HPSO-BP model can obtain better prediction results depending on two aspects. First, the "first differencing" eliminates the systematic error in the original satellite clock offset data, making the nonlinear characteristics of the processed data more prominent. Second, the initial parameters of the BP neural network are optimized through HCLDMS-PSO to avoid the BP neural network falling into the local optimum and improve the global optimization ability of the model.  Tables 6 and 7).    1) and ARMA represent the linear polynomial mod quadratic polynomial model, the gray system model and the auto regression moving a model, respectively). To further verify the prediction performance of the HPSO-BP model, we randomly selected the satellite clock offset data of two days (30 January 2019 and 13 September 2020). As the above experiment, we used the data of the first 6-h for modeling and used the five models to predict the next 60-min satellite clock offset data. Figure 5 shows the prediction precision of each satellite by the five models within the two days. structure to a certain extent. The HPSO-BP model can obtain better prediction results d pending on two aspects. 
First, the "first differencing" eliminates the systematic error the original satellite clock offset data, making the nonlinear characteristics of the pr cessed data more prominent. Second, the initial parameters of the BP neural network a optimized through HCLDMS-PSO to avoid the BP neural network falling into the loc optimum and improve the global optimization ability of the model.
To further verify the prediction performance of the HPSO-BP model, we randomly selected the satellite clock offset data of two days (30 January 2019 and 13 September 2020). As in the above experiment, we used the data of the first 6 h for modeling and used the five models to predict the next 60-min satellite clock offset data. Figure 5 shows the prediction precision of each satellite by the five models within the two days.
From Figure 5, the prediction precision of each satellite by the four traditional models within the two days is lower than that of the HPSO-BP model, indicating that the HPSO-BP model has good applicability and stability. According to the average prediction results of different satellite clocks, Figures 6 and 7 show the bar charts for the RMS of each satellite over the two days. The corresponding RMS values are listed in Tables 8 and 9.
From Figures 6 and 7, we can clearly see the advantages of the HPSO-BP model over the other models. It can be seen from Tables 8 and 9 that the HPSO-BP model has the highest prediction precision compared to the four common models in the 60-min prediction within the two days. The prediction precision of the Block IIF Cs clock type is low, but it does not exceed 0.15 ns. Compared with the four common models, the average prediction precision of the HPSO-BP model has increased by approximately 85%. The prediction precision of the other atomic clock types is controlled within 0.1 ns. The HPSO-BP model has the same prediction precision for different satellite clock offset data within the two days, indicating that the model is applicable to satellite clock offset data of any date.
Taking PRN10 as an example, Figure 8 shows the variation curves of the prediction errors of the five models over time. From Figure 8, compared with the other four models, the prediction error curve of the HPSO-BP model is relatively stable as a whole. When the prediction time increases, the prediction precision of the HPSO-BP model changes less. Moreover, its prediction results are more stable than those of the four common models, and the predicted clock offset is more in line with the actual value. The fluctuation degree of the prediction error of the HPSO-BP model is small, and the error value fluctuates around 0, indicating that the model has good prediction performance.
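The precision statistic compared across models in Figures 6 and 7 and Tables 8 and 9 is an RMS over the prediction errors. A minimal sketch of such a computation (the function name and the toy error values below are hypothetical, not from the paper):

```python
import numpy as np

def rms(predicted, observed):
    """Root mean square of the prediction errors (ns); the precision
    statistic used to compare models (smaller is better)."""
    err = np.asarray(predicted, dtype=float) - np.asarray(observed, dtype=float)
    return float(np.sqrt(np.mean(err ** 2)))

# hypothetical 60-min predicted vs. observed clock offsets (ns)
predicted = [0.02, -0.05, 0.04, 0.01]
observed = [0.00, 0.00, 0.00, 0.00]
print(rms(predicted, observed))  # ≈ 0.034 ns
```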
Through the above experiments, it can be seen that the prediction precision of the HPSO-BP model is better than that of the four traditional models. As the forecasting time increases, the prediction precision of the four traditional models all decreases to a certain degree, while the prediction results of the HPSO-BP model change little. There are two main reasons why the HPSO-BP model obtains better prediction results regardless of the satellite clock type and data. One reason is that the neural network is more sensitive to nonlinear data, and the data is more nonlinear after the "first differencing" processing. Therefore, the accuracy of repeated predictions based on the BP neural network shows no obvious change, and the "first differencing" processing can effectively improve the accuracy of neural network clock offset prediction. The other reason is that we optimize the initial weights and thresholds of the BP neural network through the HCLDMS-PSO algorithm, which prevents the BP neural network from falling into a local optimum. In addition, it improves the convergence speed and prediction precision of the BP neural network and consequently enhances the stability of the BP model in clock offset prediction. In conclusion, compared with the common clock offset prediction models, the HPSO-BP model has better prediction performance.
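The second mechanism, optimizing the network's initial weights and thresholds by particle swarm before gradient training, can be sketched with a plain global-best PSO. This is a simplified stand-in, not the paper's HCLDMS-PSO: the heterogeneous comprehensive learning and dynamic multi-swarm components are omitted, and the network size, toy data, and all hyperparameters below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy regression target standing in for a differenced clock offset series
x = np.linspace(-1, 1, 40).reshape(-1, 1)
y = np.sin(2 * x)

H = 5                          # hidden neurons
DIM = 1 * H + H + H * 1 + 1    # flattened weights + thresholds (biases)

def unpack(v):
    """Split a flat parameter vector into layer weights and biases."""
    w1 = v[:H].reshape(1, H)
    b1 = v[H:2 * H]
    w2 = v[2 * H:3 * H].reshape(H, 1)
    b2 = v[3 * H:]
    return w1, b1, w2, b2

def mse(v):
    """Fitness: training MSE of a one-hidden-layer tanh network."""
    w1, b1, w2, b2 = unpack(v)
    hidden = np.tanh(x @ w1 + b1)
    return float(np.mean((hidden @ w2 + b2 - y) ** 2))

# plain global-best PSO over the flat parameter vector
N, ITER, W, C1, C2 = 30, 200, 0.7, 1.5, 1.5
pos = rng.uniform(-1, 1, (N, DIM))
vel = np.zeros((N, DIM))
pbest = pos.copy()
pcost = np.array([mse(p) for p in pos])
gbest = pbest[pcost.argmin()].copy()

for _ in range(ITER):
    r1, r2 = rng.random((N, DIM)), rng.random((N, DIM))
    vel = W * vel + C1 * r1 * (pbest - pos) + C2 * r2 * (gbest - pos)
    pos = pos + vel
    cost = np.array([mse(p) for p in pos])
    improved = cost < pcost
    pbest[improved], pcost[improved] = pos[improved], cost[improved]
    gbest = pbest[pcost.argmin()].copy()

print(mse(gbest))
```

The vector `gbest` would then seed the subsequent gradient-based BP training instead of a random initialization; this is what removes the run-to-run variability seen with the plain BP model.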

Results Discussion
The BP neural network shows good performance in satellite clock offset prediction due to its good adaptability and robustness. Meanwhile, whether the initial parameter selection is reasonable largely determines the ultimate accuracy of the clock offset prediction. In the first experiment, both the BP model and the HPSO-BP model obtained high prediction accuracy. However, since the initial parameters of the traditional BP neural network are randomly selected, the results of the five repeated experiments in Tables 3-5 differ, indicating that the initial parameters have a certain influence on the prediction results of the BP neural network. The prediction accuracy of the HPSO-BP model is the same across the five repeated experiments and better than that of the BP neural network, indicating that the initial parameters optimized by the HCLDMS-PSO algorithm can prevent the BP neural network from falling into a local optimum, reduce the network training error, and improve the prediction accuracy. These results imply that the HPSO-BP model has better prediction accuracy and stability.
In the 20-min and 60-min prediction experiments, the HPSO-BP model better reflects the variation trend of the clock offset. The prediction accuracy of the HPSO-BP model outperforms the four common models when forecasting the satellite clock offsets of six different clock types. Compared with the four models, the average prediction accuracy of the HPSO-BP model is improved by about 80%. Because the Block IIF Cs clock type performs worse than the other five types of atomic clocks, its prediction accuracy under the HPSO-BP model is improved by about 90%. When the prediction time increases, the prediction precision of the HPSO-BP model changes less. Moreover, its prediction results are more stable than those of the four common models, indicating that the model has better stability. We calculated the average operation time of the HPSO-BP model and the four traditional models: the LP, QP, and GM (1,1) models have the shortest operation time, about 0.001 s, owing to their simpler structure. The average operation times of the ARMA model and the HPSO-BP model are 1.65 s and 1.61 s, respectively. The HPSO-BP model finds the optimal value iteratively through particle swarm optimization, which takes a relatively long time, but this is still within an acceptable range. If the optimization speed of the particle swarm algorithm can be further improved, the operation time of the model is expected to shorten, further improving the speed of clock offset prediction.
Many factors affect the prediction precision of neural network clock offset models. For different satellites, the variation of the clock offset sequence after the "first differencing" processing is not the same, so the corresponding network structure needs further optimization and adjustment to improve the prediction precision for each satellite. Moreover, the quality of the satellite clock offset data is also an important factor. If an appropriate preprocessing method can effectively eliminate outliers in the data, the network will reach higher precision after training, which will in turn improve the prediction precision.

Conclusions
Due to the nonlinear characteristics of the satellite clock offset, when common models are used to predict the clock offset, the prediction error accumulates over time, and the accuracy of the result is unstable. Considering the problems of the BP neural network in clock offset prediction, such as slow convergence and easily falling into a local optimum during training, we propose the HPSO-BP model for satellite clock offset prediction. This model adopts the HCLDMS-PSO algorithm to obtain better initialization parameters, which improves the prediction precision of the model for the satellite clock offset. In addition, it can effectively prevent the BP neural network from falling into a local optimum and accelerate the convergence of the BP algorithm.
By analyzing the prediction precision of the five models, it can be seen that the HPSO-BP model is relatively simple and has good prediction precision and stability for short-term prediction of the clock offset. When the forecast time is extended, the precision changes little, and the prediction precision for satellites with different atomic clock types is kept within 0.15 ns. Compared with the traditional LP model, QP model, GM (1,1) model, and ARMA model, the prediction precision can be improved by more than 80%. In addition, the HPSO-BP model has strong adaptive capabilities, global searching capabilities, and global convergence. The model exploits the advantages of the HCLDMS-PSO algorithm in training speed, convergence accuracy, and reliability to obtain the globally optimal solution for the initial weights and thresholds of the BP neural network, which improves the accuracy and stability of the BP neural network in satellite clock offset prediction. The HPSO-BP model performs well in short-term prediction and has strong real-time performance. It can be used for high-precision prediction of the satellite clock offset and can substitute for real-time clock offset data when the latter is unavailable, thereby guaranteeing the positioning accuracy of RT-PPP.