Article

Complex Environmental Geomagnetic Matching-Assisted Navigation Algorithm Based on Improved Extreme Learning Machine

by Jian Huang, Zhe Hu and Wenjun Yi *
National Key Laboratory of Transient Physics, Nanjing University of Science and Technology, Nanjing 210094, China
* Author to whom correspondence should be addressed.
Sensors 2025, 25(14), 4310; https://doi.org/10.3390/s25144310
Submission received: 8 May 2025 / Revised: 5 July 2025 / Accepted: 8 July 2025 / Published: 10 July 2025
(This article belongs to the Section Navigation and Positioning)

Abstract

In complex environments where satellite signals may be interfered with, it is difficult to achieve precise positioning of high-speed aerial vehicles solely through the inertial navigation system. To overcome this challenge, this paper proposes an NGO-ELM geomagnetic matching-assisted navigation algorithm, in which the Northern Goshawk Optimization (NGO) algorithm is used to optimize the initial weights and biases of the Extreme Learning Machine (ELM). To enhance the matching performance of the NGO-ELM algorithm, three improvements are proposed to the NGO algorithm. The effectiveness of these improvements is validated using the CEC2005 benchmark function suite. Additionally, the IGRF-13 model is utilized to generate a geomagnetic matching dataset, followed by comparative testing of five geomagnetic matching models: INGO-ELM, NGO-ELM, ELM, INGO-XGBoost, and INGO-BP. The simulation results show that after the airborne equipment acquires the geomagnetic data, it takes only 0.27 µs to obtain the latitude, longitude, and altitude of the aerial vehicle through the INGO-ELM model. After unit conversion, the average absolute errors are approximately 6.38 m, 6.43 m, and 0.0137 m, respectively, significantly outperforming the other four models. Furthermore, when noise is introduced into the test set inputs, the positioning error of the INGO-ELM model remains within the same order of magnitude as before the noise was added, indicating that the model exhibits excellent robustness. It has been verified that the geomagnetic matching-assisted navigation algorithm proposed in this paper can achieve real-time, accurate, and stable positioning, even in the presence of observational errors from the magnetic sensor.

1. Introduction

In the navigation system of an aerial vehicle, it is crucial to obtain accurate and real-time position information. Currently, aerial vehicles primarily rely on a combined navigation system that integrates an Inertial Navigation System (INS) and Global Positioning System (GPS) for positioning [1]. INS tends to accumulate system errors over time, which hinders its ability to meet the demands for high-precision positioning. Therefore, it needs to be combined with GPS for navigation. However, GPS may experience signal loss or even fail to provide positioning in extreme environments [2,3,4,5]. Geomagnetic navigation technology, which utilizes the spatial variation of the geomagnetic field, offers advantages such as all-terrain applicability, all-weather capability, strong anti-jamming performance, and no error accumulation. Consequently, geomagnetic matching technology can provide stable and precise positioning for aerial vehicles. By integrating geomagnetic navigation with inertial navigation, high-precision autonomous navigation for aerial vehicles can be achieved [6].
The key technologies of geomagnetic navigation can be categorized into geomagnetic filtering and geomagnetic matching [7]. Geomagnetic filtering technology associates the measured magnetic field data with the INS data, and corrects the position output of the INS through filtering. Common geomagnetic filtering techniques include Kalman filtering, particle filtering, etc. Among the geomagnetic navigation systems based on Kalman filtering, the most representative one is the Sandia Inertial Terrain Aided Navigation (SITAN) [8,9]. The SITAN algorithm continuously processes the INS data through Kalman filtering and estimates the position information by combining terrain elevation. However, when the terrain information is linearized or the initial position accuracy is poor, the navigation accuracy deteriorates, resulting in poor robustness of the SITAN algorithm [10,11]. Therefore, the theory of nonlinear filtering has attracted increasing attention from relevant scholars. Stepanov and Toropov [12] formulated the map-aided navigation problem within the framework of Bayesian nonlinear filtering theory, providing a rigorous theoretical foundation for the design of corresponding nonlinear filtering algorithms. Particle filtering, a nonlinear filtering method leveraging the Monte Carlo approach, enables parameter estimation in nonlinear and non-Gaussian environments [13]. It approximates the posterior probability distribution through a large number of particles, leading to a significant increase in computational cost and insufficient real-time performance. Furthermore, particle filtering suffers from the issues of sample degradation and sample impoverishment [14,15]. In comparison to geomagnetic filtering, geomagnetic matching technology is more flexible and convenient, with lower sensitivity to initial position errors and no accumulation of errors [16].
Geomagnetic matching technology involves matching the measured geomagnetic data with a pre-established geomagnetic database to find similar geomagnetic sequences, thereby determining the current position. The real-time capability and accuracy of the positioning are predominantly determined by the performance of the geomagnetic matching algorithm [17]. Traditional geomagnetic matching algorithms encompass the Iterative Closest Contour Point (ICCP) algorithm and the Geomagnetic Contour Matching (MAGCOM) algorithm [18]. The MAGCOM algorithm rectifies position errors in the INS through translational search, but when there is a large heading deviation, this algorithm cannot provide accurate positioning [19]. The ICCP algorithm can correct both heading and position errors of the INS. However, the ICCP algorithm typically suffers from significant initial localization errors and tends to fall into local optima [20]. To optimize the ICCP algorithm, Xiao et al. [21] introduced the Probability Data Association (PDA) algorithm and the principle of incremental modulation, which enhanced the algorithm’s robustness to interference and improved positioning accuracy.
In recent years, in addition to traditional geomagnetic matching algorithms, artificial intelligence algorithms and intelligent optimization algorithms have also been applied to geomagnetic matching navigation. Xu et al. [22] presented the PSO-ICCP algorithm, optimizing the ICCP output by incorporating a multi-attribute decision-making mechanism. By combining Particle Swarm Optimization (PSO) algorithm with an improved particle initialization strategy, this method effectively reduces the effect of initial positioning errors with respect to the geomagnetic matching precision of ICCP. Chen et al. [23] introduced the fundamentals of pattern recognition into geomagnetic matching navigation and proposed a geomagnetic vector matching algorithm based on a two-stage neural network. This method cascades the non-fully connected neural network and the probabilistic neural network (PNN) to perform preliminary and refined screening on the geomagnetic vectors and their characteristic information, respectively. This approach achieves a high matching success rate and positioning accuracy under low-gradient conditions, addressing the issue of failure in traditional geomagnetic matching algorithms under the same conditions. Similarly, inspired by the fundamentals of pattern recognition, they later proposed a geomagnetic vector matching method based on PNN. By optimizing the smoothing parameters of the PNN using a genetic algorithm, this method significantly improves matching accuracy compared to traditional geomagnetic matching algorithms [24].
Although existing geomagnetic matching algorithms achieve high matching accuracy, accurately and rapidly determining the position through geomagnetic information in the high-speed dynamic environment remains a challenge. In real-world environments, geomagnetic information is uniquely associated with positional information, which aligns with the characteristics of a regression prediction task. Therefore, machine learning methods can be employed to fit this model, resulting in a trained geomagnetic matching model. Once the airborne equipment acquires the geomagnetic information, it can quickly match and obtain the positional information through this model, enabling accurate and real-time positioning.
The Extreme Learning Machine (ELM) algorithm, known for its efficiency as a machine learning approach, simplifies the training process of traditional neural networks by randomly generating input layer weights and biases, thereby eliminating iterative optimization steps and significantly enhancing training speed [25]. Therefore, the ELM algorithm has a significant advantage in applications that require large-scale data processing and emphasize real-time performance [26], making it suitable for geomagnetic matching tasks. However, this algorithm has certain limitations. Due to the random initialization of the weights and biases of the neural network, the performance of the ELM algorithm may exhibit instability across different datasets, which can affect its generalizability [27,28]. To address these shortcomings, Huang et al. [29] proposed the Kernel Extreme Learning Machine (KELM) to better handle linearly non-separable samples and improve robustness. Gao et al. [30] further employed the improved Dung Beetle Optimization (IDBO) algorithm for the optimization of the regularization coefficient and kernel parameters in KELM, which allowed for the accurate identification of projectile aerodynamic parameters. Wang et al. [31] employed the adaptive neuron clipping algorithm and the PSO algorithm to improve the ELM network. They then applied the adaptive boosting (AdaBoost) algorithm to iteratively train a series of weak learners, ultimately combining them to form a strong learner for achieving high-precision prediction. More studies have also employed intelligent optimization algorithms to optimize the hyperparameters of machine learning models [32,33]. These works demonstrate that such hybrid models can achieve more stable and accurate performance in regression prediction tasks.
In order to achieve efficient and stable geomagnetic matching, this paper uses an improved NGO (INGO) algorithm to optimize the initial weights and biases of the ELM model, and proposes the INGO-ELM model. To substantiate the results, the IGRF-13 model is used to simulate geomagnetic data, and the performance of the presented geomagnetic matching algorithm is evaluated on it. The innovations introduced in this paper are as follows:
  • To strengthen the optimization capability of the NGO algorithm, this paper proposes three improvement measures for the NGO algorithm.
  • This paper presents, for the first time, the INGO-ELM algorithm for geomagnetic matching-assisted navigation, where the INGO algorithm is used to optimize the initial weights and biases of the ELM, effectively enhancing the accuracy and stability of the ELM algorithm in geomagnetic matching positioning tasks.
  • The geomagnetic matching dataset is generated using the IGRF-13 model to validate the matching performance of the geomagnetic matching assisted navigation algorithm proposed in this paper, with noise added to the dataset to simulate the complexity of real-world environments. The simulation results demonstrate that the INGO-ELM algorithm presented in this paper exhibits superior robustness and is capable of accomplishing the positioning task in real time with high-accuracy matching performance, even in the presence of observational errors from the magnetic sensor.
The organization of this paper is as outlined below: The IGRF-13 model and the construction methodology of the geomagnetic matching dataset are introduced in Section 2. Section 3 discusses the three improvements made to the NGO algorithm and analyzes their effectiveness, which is followed by the introduction of the INGO-ELM model proposed in this paper. Section 4 presents a comparative analysis of the geomagnetic matching test results between the INGO-ELM model and four other models, further testing the geomagnetic matching performance of each model after the addition of noise. Section 5 presents the conclusions and outlook of this paper.
To ensure clarity and consistency in scientific communication, we have compiled a terminology list and an acronyms list, as shown in Table 1 and Table 2, respectively.

2. Dataset Construction and Data Preprocessing

The IGRF-13 (International Geomagnetic Reference Field, 13th Edition) is the internationally recognized global geomagnetic model, used to precisely describe the temporal variations and spatial distribution of the Earth’s magnetic field [34]. The model is based on integrated data from satellite observations, ground measurements, and airborne data. Through high-precision spherical harmonic expansion and numerical computation methods, it provides a detailed representation of the geomagnetic field from the ground to high altitudes. The IGRF-13 model covers a variety of geomagnetic field components, ranging from the Earth’s internal main field to external sources, effectively depicting the time-varying characteristics in the Earth’s magnetic field and contributing to the prediction of long-term magnetic field changes. This model finds extensive use in areas such as navigation, space weather forecasting, satellite orbit determination, and geophysical research, making it a critical tool in geomagnetic studies and applications. This paper constructs and preprocesses a dataset based on the IGRF-13 model for the purpose of training and testing geomagnetic matching algorithms.

2.1. Dataset Construction

The IGRF-13 model encompasses geomagnetic field data from 2020 to 2025, and the model demonstrates high accuracy across various altitudes, latitudes, longitudes, and time scales. In the IGRF-13 model, the spherical harmonic functions representing the geomagnetic potential are given by Equation (1):
$$V = a\sum_{n=1}^{N}\sum_{m=0}^{n}\left(\frac{a}{z}\right)^{n+1}\left(h_n^m \sin m\lambda + g_n^m \cos m\lambda\right)P_n^m(\cos\theta) \tag{1}$$
In the equation, the Earth's mean radius is taken as a = 6371.2 km; z denotes the distance from the Earth's center to the calculation point, where z = a + h, with h being the altitude above the Earth's surface. The angle $\theta$ is the colatitude, measured from the North Pole, given by $\theta = 90^{\circ} - \varphi$, where $\varphi$ is the geomagnetic latitude; the terms $h_n^m$ and $g_n^m$ are the normalized Schmidt spherical harmonic coefficients, also known as Gauss coefficients, with the expansion truncated at degree N; $\lambda$ denotes the longitude, measured eastward from Greenwich. $P_n^m(\cos\theta)$ is the Schmidt quasi-normalized associated Legendre function of degree n and order m, defined as follows:
$$P_n^m(\cos\theta) = \frac{1}{2^n n!}\sqrt{\frac{C_m (n-m)!}{(n+m)!}}\left(1-\cos^2\theta\right)^{\frac{m}{2}}\frac{d^{\,n+m}\left(\cos^2\theta - 1\right)^n}{d(\cos\theta)^{n+m}},\qquad C_m = \begin{cases}1, & m = 0\\ 2, & m \geq 1\end{cases} \tag{2}$$
The coordinate system O-XYZ shown in Figure 1 represents the North–East–Down (NED) coordinate system when the Earth is modeled as a sphere. By applying the principle of potential field transformation, the magnetic potential V is differentiated with respect to the north, east, and vertical directions. This allows the components of the magnetic field strength in the three directions, $B_x$, $B_y$, and $B_z$, to be obtained, as expressed by the following equations:
$$B_x = \frac{1}{z}\frac{\partial V}{\partial \theta} = \sum_{n=1}^{N}\sum_{m=0}^{n}\left(\frac{a}{z}\right)^{n+2}\left(h_n^m \sin m\lambda + g_n^m \cos m\lambda\right)\frac{d}{d\theta}P_n^m(\cos\theta)$$
$$B_y = -\frac{1}{z\sin\theta}\frac{\partial V}{\partial \lambda} = \sum_{n=1}^{N}\sum_{m=0}^{n}\left(\frac{a}{z}\right)^{n+2}\frac{m}{\sin\theta}\left(g_n^m \sin m\lambda - h_n^m \cos m\lambda\right)P_n^m(\cos\theta) \tag{3}$$
$$B_z = -\frac{\partial V}{\partial z} = \sum_{n=1}^{N}\sum_{m=0}^{n}\left(\frac{a}{z}\right)^{n+2}\left(n+1\right)\left(h_n^m \sin m\lambda + g_n^m \cos m\lambda\right)P_n^m(\cos\theta)$$
After obtaining the position information of a specific location, the components of the magnetic field strength along the three axes of the NED coordinate system at that location can be computed using Equation (1) through Equation (3). Therefore, this study first employs the IGRF-13 model to generate a dataset composed of coordinates, following the sampling approach outlined in Table 3.
After sampling, the geomagnetic information corresponding to all coordinates in the sampling area is calculated using the IGRF-13 model, resulting in a dataset composed of geomagnetic data. In the geomagnetic matching model proposed in this paper, the components of the total magnetic field intensity along the NED coordinate axes at the sampling points are used as input data, while the latitude, longitude, and altitude of the sampling points are the output data. These two datasets are paired one-to-one and will be allocated to the training, validation, and testing sets following an 8:1:1 proportion.
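As an illustration, the sampling-and-split pipeline above can be sketched as follows. The latitude, longitude, and altitude ranges and the `field_components` function below are placeholders (the paper's actual sampling ranges are given in Table 3, and the real inputs come from evaluating the IGRF-13 model); only the input/output pairing and the 8:1:1 partition mirror the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sampling ranges standing in for Table 3.
lat = rng.uniform(30.0, 35.0, size=10_000)    # degrees
lon = rng.uniform(110.0, 115.0, size=10_000)  # degrees
alt = rng.uniform(0.0, 10.0, size=10_000)     # km above the surface

def field_components(lat, lon, alt):
    """Stand-in for the IGRF-13 evaluation of (Bx, By, Bz); a real
    implementation would evaluate Equations (1)-(3)."""
    return np.stack([np.sin(np.radians(lat)) * 30_000,
                     np.cos(np.radians(lon)) * 5_000,
                     40_000 - 15.0 * alt], axis=1)

X = field_components(lat, lon, alt)    # inputs:  NED field components
Y = np.stack([lat, lon, alt], axis=1)  # outputs: latitude, longitude, altitude

# 8:1:1 split into training / validation / test sets.
idx = rng.permutation(len(X))
n_train, n_val = int(0.8 * len(X)), int(0.1 * len(X))
train, val, test = np.split(idx, [n_train, n_train + n_val])
```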

2.2. Data Preprocessing

In the computations based on the IGRF-13 model, the inputs are latitude, longitude, and altitude, while the outputs are the components of the total magnetic field strength along the three axes of the NED coordinate system. These features exhibit different scales and a wide range of values. Therefore, it is essential to normalize these features prior to subsequent model training. Data normalization is a widely used preprocessing technique, primarily aimed at transforming data with varying ranges or scales into a consistent scale, thereby preventing bias during model training due to discrepancies in feature scales. This research utilizes the min–max normalization method, which eliminates the scale differences between features by linearly transforming each feature value into the range of [0, 1]. The expression for this method is as follows:
$$Y' = \frac{Y - Y_{\min}}{Y_{\max} - Y_{\min}} \tag{4}$$
In the equation, $Y$ represents the original data; $Y_{\max}$ and $Y_{\min}$ represent the maximum and minimum values of this feature, respectively; and $Y'$ represents the normalized data.
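A minimal implementation of Equation (4), together with its inverse (which is needed when converting normalized model outputs back to physical units):

```python
import numpy as np

def min_max_normalize(Y):
    """Equation (4): linearly scale each feature (column) into [0, 1]."""
    y_min, y_max = Y.min(axis=0), Y.max(axis=0)
    return (Y - y_min) / (y_max - y_min), y_min, y_max

def min_max_denormalize(Yn, y_min, y_max):
    """Invert the transform to recover the original scale."""
    return Yn * (y_max - y_min) + y_min

# Two features with very different scales are mapped to the same range.
data = np.array([[1.0, 100.0], [2.0, 300.0], [3.0, 500.0]])
normed, lo, hi = min_max_normalize(data)
# normed[:, 0] → [0.0, 0.5, 1.0]
```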

3. Geomagnetic Matching Algorithm

The aerial vehicle can obtain the components of the magnetic field intensity along the three axes of the NED coordinate system through the magnetic sensor. If these components can be rapidly matched to determine the position of the aerial vehicle, the positioning errors accumulated over time by the inertial navigation system can be corrected. Therefore, designing an efficient geomagnetic matching algorithm can enhance the accuracy, real-time capabilities, and stability of the aerial vehicle positioning. To achieve this objective, this paper proposes a geomagnetic matching algorithm obtained by integrating the Extreme Learning Machine (ELM) with the improved Northern Goshawk Optimization (INGO) algorithm.

3.1. Extreme Learning Machine

Extreme Learning Machine is a learning algorithm based on a single hidden layer feedforward neural network. Unlike traditional methods for training neural networks, the key advantage of ELM lies in its approach of randomly generating the connection weights and biases of the hidden layer nodes, and then computing the output layer weights using only the least-squares method. This significantly improves learning efficiency. The process does not require iterative optimization, thus avoiding the multiple computations and adjustments required by the backpropagation algorithm, greatly reducing training time and computational overhead. In regression and prediction tasks, ELM, through its randomized hidden layer structure, can map the input data to a high-dimensional feature space, effectively capturing the nonlinear relationships within the data. Since the training process does not involve iterative optimization, ELM not only exhibits high training efficiency but also mitigates the overfitting problems often encountered by traditional neural networks, offering more stable prediction performance. ELM is particularly well-suited for handling large-scale datasets, as it can significantly enhance training efficiency while maintaining high accuracy [35,36,37].
Given N distinct training samples, with the activation function denoted as $g(x)$, let the ELM network have n input neurons, m output neurons, and l hidden-layer neurons. The ELM network can be mathematically represented as follows:
$$H\beta = T \tag{5}$$
$$H = \begin{bmatrix} g(w_1 \cdot x_1 + b_1) & \cdots & g(w_l \cdot x_1 + b_l)\\ \vdots & \ddots & \vdots\\ g(w_1 \cdot x_N + b_1) & \cdots & g(w_l \cdot x_N + b_l)\end{bmatrix}_{N\times l} \tag{6}$$
$$W = \begin{bmatrix} w_1\\ w_2\\ \vdots\\ w_l \end{bmatrix} = \begin{bmatrix} w_{11} & \cdots & w_{1n}\\ \vdots & \ddots & \vdots\\ w_{l1} & \cdots & w_{ln}\end{bmatrix}_{l\times n} \tag{7}$$
$$\beta = \begin{bmatrix} \beta_1\\ \beta_2\\ \vdots\\ \beta_l \end{bmatrix} = \begin{bmatrix} \beta_{11} & \cdots & \beta_{1m}\\ \vdots & \ddots & \vdots\\ \beta_{l1} & \cdots & \beta_{lm}\end{bmatrix}_{l\times m} \tag{8}$$
$$b = \begin{bmatrix} b_1 & b_2 & \cdots & b_l \end{bmatrix}^{\mathrm{T}} \tag{9}$$
In the equations, $H$ represents the output matrix of the hidden layer; $W$ denotes the weight matrix between the input layer and the hidden layer; $\beta$ represents the weight matrix between the hidden layer and the output layer; $T$ represents the matrix of expected outputs for the training samples; and $b$ refers to the bias vector of the hidden layer. The parameters of the hidden-layer neurons in $b$ and $W$ are randomly generated, and once the training samples are provided, the matrix $H$ is known. Solving for the matrix $\beta$ in Equation (5) then amounts to solving the linear system $H\beta = T$, whose minimum-norm least-squares solution $\hat{\beta}$ is:
$$\hat{\beta} = H^{+}T \tag{10}$$
In the equation, $H^{+}$ represents the Moore–Penrose generalized inverse of the hidden layer output matrix $H$.
Despite the numerous advantages of ELM, its main drawback lies in the random initialization of the hidden layer weights, which may limit the model’s stability and prediction accuracy.
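The training procedure described above (random hidden-layer weights $W$ and biases $b$, output weights from the Moore–Penrose pseudoinverse) can be sketched in a few lines. The sigmoid activation, layer sizes, and toy data here are illustrative choices, not the paper's configuration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ELM:
    """Minimal ELM sketch (Equations (5)-(10)): random hidden-layer
    parameters, output weights solved in closed form via pinv."""

    def __init__(self, n_in, n_hidden, rng=None):
        rng = np.random.default_rng(0) if rng is None else rng
        self.W = rng.uniform(-1, 1, size=(n_hidden, n_in))  # l x n
        self.b = rng.uniform(-1, 1, size=n_hidden)          # l

    def _hidden(self, X):
        return sigmoid(X @ self.W.T + self.b)               # H: N x l

    def fit(self, X, T):
        self.beta = np.linalg.pinv(self._hidden(X)) @ T     # beta = H+ T
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Toy regression task: two smooth targets from three inputs.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 3))
T = np.stack([X[:, 0] + X[:, 1] ** 2, X[:, 2]], axis=1)
model = ELM(n_in=3, n_hidden=50, rng=rng).fit(X, T)
err = np.mean(np.abs(model.predict(X) - T))  # mean absolute training error
```

Note that no iterative optimization occurs: `fit` is a single pseudoinverse solve, which is the source of ELM's speed advantage noted above.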

3.2. The Northern Goshawk Optimization Algorithm and Its Improvements

The Northern Goshawk Optimization (NGO) algorithm, introduced by Dehghani et al. [38], is a novel population-based optimization approach. Compared to other algorithms, the basic framework of this algorithm demonstrates better optimization capabilities; however, its search ability still has room for improvement. In this paper, three improvement measures for the NGO algorithm are proposed.

3.2.1. The Northern Goshawk Optimization Algorithm

The mathematical model established based on the different hunting stages of the NGO algorithm is presented as follows.
Stage 1: Prey identification (exploration process).
Prey is freely distributed in the environment, and its behavior is modeled by the following expression:
$$P_i = X_c,\quad i = 1, 2, \dots, N;\ c = 1, 2, \dots, N \tag{11}$$
$P_i$ represents the location of the target prey for the i-th Northern Goshawk; $X_c$ denotes the state of the c-th Northern Goshawk; c is a natural number chosen randomly from the interval [1, N]; and N is the total population of goshawks.
A prey is randomly chosen by the Northern Goshawk, which then launches a swift attack. The behavioral model of the goshawk is expressed by Equations (12) and (13).
$$X_{i,u}^{new,P1} = \begin{cases} x_{i,u} + s\left(p_{i,u} - I\,x_{i,u}\right), & F_{s_i} < F_i\\ x_{i,u} + s\left(x_{i,u} - p_{i,u}\right), & F_{s_i} \geq F_i \end{cases} \tag{12}$$
$$X_i = \begin{cases} X_i^{new,P1}, & F_i^{new,P1} < F_i\\ X_i, & F_i^{new,P1} \geq F_i \end{cases} \tag{13}$$
In the equations, $F_{s_i}$ represents the fitness value of the selected prey; $X_i^{new,P1}$ denotes the i-th Northern Goshawk's new state; $X_{i,u}^{new,P1}$ represents the i-th Northern Goshawk's new state in the u-th dimension; s is a random number in the interval [0, 1], used to generate the erratic behavior of the goshawk; I takes the value 1 or 2; and $F_i^{new,P1}$ is the fitness value associated with the i-th Northern Goshawk's new state.
Stage 2: Pursuit and evasion (development process).
During the hunting process of the Northern Goshawk, the prey attempts to escape. It is assumed that the Northern Goshawk pursues its prey within a range of radius R 1 . The mathematical expression for this process is as follows:
$$X_{i,u}^{new,P2} = x_{i,u} + R_1\left(2s - 1\right)x_{i,u} \tag{14}$$
$$R_1 = 0.02\left(1 - \frac{j}{J}\right) \tag{15}$$
$$X_i = \begin{cases} X_i^{new,P2}, & F_i^{new,P2} < F_i\\ X_i, & F_i^{new,P2} \geq F_i \end{cases} \tag{16}$$
In the equations, $F_i^{new,P2}$ represents the fitness value of the new state; $X_{i,u}^{new,P2}$ represents the i-th Northern Goshawk's new state in the u-th dimension during the second stage; j represents the current iteration number; J indicates the maximum iteration limit; and $X_i^{new,P2}$ represents the i-th Northern Goshawk's new state during the second stage.
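A compact sketch of the two NGO stages on a toy objective may make the update rules concrete. The boundary clipping and the sphere test function below are illustrative additions, not part of the paper's formulation:

```python
import numpy as np

def ngo_minimize(f, dim, n=30, iters=200, lb=-10.0, ub=10.0, seed=0):
    """Sketch of the two NGO stages (Equations (11)-(16)) minimizing f."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(n, dim))
    F = np.apply_along_axis(f, 1, X)
    for j in range(iters):
        for i in range(n):
            # Stage 1: prey identification (exploration), Eqs. (11)-(13).
            c = rng.integers(n)
            P, s, I = X[c], rng.random(dim), rng.integers(1, 3)
            if F[c] < F[i]:
                Xnew = X[i] + s * (P - I * X[i])
            else:
                Xnew = X[i] + s * (X[i] - P)
            Xnew = np.clip(Xnew, lb, ub)
            if (Fn := f(Xnew)) < F[i]:
                X[i], F[i] = Xnew, Fn
            # Stage 2: pursuit within shrinking radius R1, Eqs. (14)-(16).
            R1 = 0.02 * (1 - j / iters)
            Xnew = np.clip(X[i] + R1 * (2 * rng.random(dim) - 1) * X[i], lb, ub)
            if (Fn := f(Xnew)) < F[i]:
                X[i], F[i] = Xnew, Fn
    best = np.argmin(F)
    return X[best], F[best]

# Minimize the sphere function, a stand-in for any fitness function.
x_best, f_best = ngo_minimize(lambda x: np.sum(x ** 2), dim=5)
```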

3.2.2. Improvement Measures for the NGO Algorithm

The local development capability and global search ability of the NGO algorithm require further enhancement, and its stability, convergence speed, and accuracy are also insufficient. To address these issues, this paper proposes three improvement measures:
Improvement measure 1: Bernoulli shift mapping initialization strategy.
Enhancing the diversity of the initial population plays a key role in improving the global search capability of the algorithm, enabling it to explore a broader solution space and preventing the algorithm from prematurely becoming stuck in local optima. However, in the first stage of the NGO algorithm, the Northern Goshawks are randomly distributed in an uneven manner, resulting in uneven individual quality within the population and preventing the algorithm from obtaining the optimal initial positions. Therefore, this paper adopts the Bernoulli shift mapping initialization strategy [39] to optimize the NGO algorithm. The Bernoulli shift mapping, as an efficient nonlinear mapping, can be used to generate initialization populations with high randomness and chaotic characteristics. Its formula can be expressed as follows:
$$x_{k+1} = \begin{cases} \dfrac{x_k}{1 - \varepsilon}, & 0 < x_k \leq 1 - \varepsilon\\[4pt] \dfrac{x_k - 1 + \varepsilon}{\varepsilon}, & 1 - \varepsilon < x_k < 1 \end{cases} \tag{17}$$
In the equation, k denotes the number of mappings; $\varepsilon$ is the adjustment coefficient, set to 0.4 in this paper; $x_k$ is the sequence value generated by the k-th shift mapping; and $x_0$ denotes a random value in the interval (0, 1).
Figure 2a,b illustrate the population distributions generated by the random method and the Bernoulli shift mapping, respectively. Comparatively, it is evident that the population generated by the Bernoulli shift mapping exhibits a more uniform distribution throughout the entire space, with fewer local clusters.
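A sketch of this initialization, scaling the chaotic sequence from (0, 1) into the search bounds (the scaling step is an assumption here; the paper specifies only the map itself):

```python
import numpy as np

def bernoulli_population(n, dim, lb, ub, eps=0.4, seed=0):
    """Initialize a population of n individuals via the Bernoulli shift
    map (Equation (17)), then scale each iterate into [lb, ub]."""
    rng = np.random.default_rng(seed)
    pop = np.empty((n, dim))
    x = rng.random(dim)  # x0: random start in (0, 1)
    for i in range(n):
        # Apply the shift map elementwise to produce the next iterate.
        x = np.where(x <= 1 - eps, x / (1 - eps), (x - 1 + eps) / eps)
        pop[i] = lb + x * (ub - lb)
    return pop

pop = bernoulli_population(30, 5, lb=-10.0, ub=10.0)
```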
Improvement measure 2: Gaussian mutation strategy.
Since the NGO algorithm is prone to become stuck in local optima, the Gaussian mutation strategy is employed in this paper to optimize the algorithm. Gaussian mutation generates new solutions by introducing random perturbations to the individual’s state through random numbers drawn from a Gaussian distribution (normal distribution), thereby expanding the exploration range of the solution space, enhancing solution quality, and accelerating convergence [40]. Gaussian mutation provides strong support for solution diversity and global search, effectively improving the ability of the NGO algorithm to escape from local optimum. In the optimization process of the NGO algorithm, the Gaussian mutation strategy is applied when the fitness function value stays constant for five successive iterations. Its specific expression is as follows:
$$X_i' = X_i\left(1 + \gamma\right) \tag{18}$$
In the expression, $\gamma$ is a random number following a Gaussian distribution with a mean of 0 and a standard deviation of 1; $X_i$ represents the current state of the individual; and $X_i'$ denotes the state of the individual after applying Gaussian mutation.
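A minimal sketch of the mutation step, with $\gamma$ taken as standard normal (an assumption, as the original distribution parameters are not unambiguous in the text):

```python
import numpy as np

def gaussian_mutation(X, rng):
    """Equation (18): X' = X(1 + gamma), with gamma drawn elementwise
    from a standard normal distribution (assumed N(0, 1) here)."""
    return X * (1.0 + rng.standard_normal(X.shape))

rng = np.random.default_rng(0)
x = np.array([1.0, -2.0, 0.5])
# In INGO this is triggered only after the best fitness has stayed
# constant for five successive iterations.
x_mut = gaussian_mutation(x, rng)
```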
Improvement measure 3: Improved nonlinear convergence factor.
In the second stage of the NGO algorithm, the goshawk pursues its prey within an attack range of radius $R_1$, which is related to the current iteration number j and the maximum iteration number J, and gradually decreases as the iterations proceed. In the early stage of the goshawk's hunting, the value of $R_1$ is relatively large. To enhance the global search capability of the NGO algorithm, the decay rate of $R_1$ should be slowed down during this phase to maintain a larger search step size, thereby avoiding local optima. In the later stage of the goshawk's hunting, when $R_1$ becomes smaller and the search step size of the algorithm decreases, the decay rate of $R_1$ should be increased to ensure that it remains small in the latter part of the optimization process, improving the accuracy of the algorithm's search. However, as shown in Equation (15), $R_1$ changes linearly and does not adequately meet these requirements. Therefore, this paper improves the NGO algorithm by using a nonlinear convergence factor $R_2$, which is expressed as follows:
$$R_2 = 0.01\left[\cos\left(\frac{\pi j}{J}\right) + 1\right] \tag{19}$$
Figure 3 shows how the original convergence factor $R_1$ and the improved convergence factor $R_2$ vary across iterations. $R_2$ decays more slowly than $R_1$ in the early stage of the optimization and more rapidly in the later stage, which is consistent with the expected behavior.
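The two factors can be compared numerically; the 500-iteration budget below matches the benchmark setup used in Section 3.2.3:

```python
import numpy as np

J = 500                                   # maximum iteration limit
j = np.arange(J + 1)                      # current iteration number
R1 = 0.02 * (1 - j / J)                   # original linear factor, Eq. (15)
R2 = 0.01 * (np.cos(np.pi * j / J) + 1)   # improved nonlinear factor, Eq. (19)

# Both factors start at 0.02 and decay to 0, but R2 stays larger than R1
# early in the search (larger steps, stronger exploration) and smaller
# than R1 late in the search (finer steps, stronger exploitation).
```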

3.2.3. Analysis of the Effectiveness of the Improvement Measures

To verify the effectiveness of the three improvement measures in the NGO algorithm, performance tests were conducted on the improved algorithm, INGO. The results were compared with several state-of-the-art algorithms, including NGO, GWO [41], WOA [42], and PSO [43]. The parameter settings for these algorithms are provided in Table 4.
In this study, the CEC2005 benchmark function suite was used to assess the performance of these algorithms. As detailed in Table 5, the CEC2005 benchmark function suite includes 23 test functions. $F_1$–$F_7$ are unimodal test functions, each having a single global optimum within the given range, which test the accuracy and convergence rate of the algorithms. $F_8$–$F_{13}$ are multimodal test functions, characterized by a single global optimum but multiple local optima within the given interval. These functions are utilized to evaluate an algorithm's global search capability and its ability to avoid local optima. $F_{14}$–$F_{23}$ are fixed-dimension multimodal test functions, with lower dimensions and fewer extrema, designed to simultaneously test the accuracy, convergence rate, and global search capability of an algorithm. The performance of the different algorithms is compared based on the mean values and standard deviations across these three types of test functions. A lower mean value signifies greater convergence accuracy, whereas a smaller standard deviation reflects better stability.
In this test, each algorithm was executed 30 times independently, with the number of iterations limited to 500 and a population size of 30. The test results are presented in Table 6.
Based on the test results of the unimodal test functions $F_1$–$F_5$ and $F_7$, the multimodal test functions $F_9$–$F_{11}$ and $F_{13}$, and the fixed-dimension multimodal test functions $F_{14}$–$F_{20}$ and $F_{23}$, it is evident that the INGO algorithm consistently achieves higher convergence accuracy than the other four algorithms, and in most cases, it also demonstrates superior stability. In the test results of the remaining functions, the convergence accuracy of the INGO algorithm shows only a small difference compared to the best-performing algorithm, highlighting its excellent performance. Overall, the improvements proposed for the NGO algorithm in this paper effectively enhance the optimization capability and stability of the algorithm.
To provide a more intuitive comparison of the convergence and optimization processes between the INGO algorithm and the other algorithms, Figure 4 illustrates the convergence curves of the five algorithms on the 23 benchmark test functions. The dimension of functions $F_1$–$F_{13}$ is 30; the vertical axis corresponds to the fitness function values, and the horizontal axis corresponds to the iteration counts of the algorithms. As shown in Figure 4, although the INGO algorithm performs relatively poorly on the convergence curves of $F_6$ and $F_{12}$, it exhibits the best convergence speed and precision among the five algorithms on the other benchmark functions. In most cases, the INGO algorithm achieves better convergence accuracy with fewer iterations and demonstrates a powerful capability to avoid local optima. In particular, on the convergence curves of $F_1$–$F_4$, the INGO algorithm exhibits global optimization capability that significantly outperforms the other algorithms.
In conclusion, based on the test results of the five algorithms, the INGO algorithm shows significant advantages in terms of convergence rate, accuracy, and stability, thereby validating the effectiveness of the improvement measures proposed for the NGO algorithm in this paper.

3.3. INGO-ELM Geomagnetic Matching Algorithm

Despite the fast training speed and high prediction accuracy of ELM, the random initialization of the hidden layer weights and biases can lead to instability in the model’s predictive performance. This issue is particularly evident when the training data contains noise or outliers, which may compromise the accuracy and robustness of the model. To optimize the ELM algorithm, this paper utilizes the INGO algorithm to initialize the hidden layer weights and biases of the ELM model, thereby proposing the INGO-ELM geomagnetic matching algorithm. Figure 5 presents the flowchart of the algorithm.
Based on Figure 5, the INGO-ELM model is trained on the training set constructed in Section 2, resulting in the geomagnetic matching model.
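As a minimal sketch of the quantity the optimizer searches over, the ELM below is trained with externally supplied hidden-layer weights W and biases b (the parameters INGO initializes in this paper); the toy data, network size, and tanh activation are illustrative assumptions:

```python
import numpy as np

def elm_fit(X, T, W, b):
    """Train an ELM given hidden-layer weights W and biases b, the parameters
    that INGO optimizes in this paper. The output weights beta are the
    minimum-norm least-squares solution via the Moore-Penrose pseudo-inverse,
    as in the standard ELM formulation."""
    H = np.tanh(X @ W + b)        # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T  # beta = H^+ T
    return beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy data: 3 geomagnetic components in, 3 coordinates out (shapes only;
# the real model is trained on the dataset of Section 2).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
T = X @ rng.normal(size=(3, 3))  # synthetic targets
W = rng.normal(size=(3, 10))     # candidate weights, e.g. one NGO/INGO individual
b = rng.normal(size=(1, 10))
beta = elm_fit(X, T, W, b)
mae = np.mean(np.abs(elm_predict(X, W, b, beta) - T))
```

The optimizer's fitness for a candidate (W, b) would be an error such as `mae`; INGO's role is only to pick the (W, b) that minimizes it.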

4. Simulation Tests

The INGO-ELM geomagnetic matching model applies machine learning to form a one-to-one nonlinear mapping between geomagnetic information and position information for direct matching and positioning. By leveraging the advantages of machine learning, this method can not only perform matching based on existing geomagnetic databases but also accurately estimate the geomagnetic data between discrete points in the database, essentially constructing a continuous and complete geomagnetic matching database to achieve more precise matching and positioning.
Thus, to verify the geomagnetic matching performance of the INGO-ELM model, we construct geomagnetic matching models using XGBoost and BP neural networks (both machine learning algorithms) and compare them with the INGO-ELM model in simulation tests. To ensure a fair comparison with the INGO-ELM model, we use INGO to optimize the parameters of XGBoost and the BP neural network. Meanwhile, we build two geomagnetic matching models, NGO-ELM and ELM, to further validate the performance of INGO as an intelligent optimization algorithm.

4.1. Extreme Gradient Boosting

Extreme gradient boosting (XGBoost) [44], a classic algorithm in machine learning, builds on and refines the core design of the gradient boosting decision tree (GBDT). When training on large-scale datasets, XGBoost significantly accelerates training through efficient tree pruning and parallel computing mechanisms. Meanwhile, it introduces regularization terms to prevent overfitting, enabling the model to maintain excellent generalization performance on complex datasets. At the core of XGBoost, its objective function Obj is given by:
$Obj = \sum_{i=1}^{n} L(y_i, \hat{y}_i) + \Omega(\hat{f})$
$L(y_i, \hat{y}_i)$ denotes the loss function, which is employed to measure the discrepancy between the model's predicted value $\hat{y}_i$ and the true value $y_i$. $\Omega(\hat{f})$ represents the regularization term, serving to control the complexity of each tree. Its formula is
$\Omega(\hat{f}) = \gamma_1 T_1 + \frac{1}{2} \lambda_1 \sum_{j=1}^{m} w_j^2$
In the formula, $\hat{f}$ is the prediction function; $\gamma_1$ is the regularization parameter; $T_1$ is the number of leaf nodes in the regression tree; $\lambda_1$ is the penalty term for leaf node weights; $m$ is the number of features; and $w_j$ is the weight of the $j$-th leaf node.
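The objective above can be evaluated directly. The sketch below uses a squared-error loss as an illustrative choice of L, with toy leaf weights; the parameter values are assumptions for demonstration only:

```python
def omega(leaf_weights, gamma1=1.0, lambda1=1.0):
    """Regularization term of one tree: gamma1 * T1 + (1/2) * lambda1 * sum(w_j^2),
    where T1 is the number of leaf nodes and w_j are the leaf weights."""
    T1 = len(leaf_weights)
    return gamma1 * T1 + 0.5 * lambda1 * sum(w * w for w in leaf_weights)

def objective(y_true, y_pred, leaf_weights, gamma1=1.0, lambda1=1.0):
    """Obj = sum of per-sample losses + regularization (squared error chosen as L here)."""
    loss = sum((yt - yp) ** 2 for yt, yp in zip(y_true, y_pred))
    return loss + omega(leaf_weights, gamma1, lambda1)

obj = objective([1.0, 2.0], [1.5, 2.0], [0.5, -0.5, 1.0])
# loss = 0.25; omega = 1*3 + 0.5*(0.25 + 0.25 + 1.0) = 3.75; obj = 4.0
```

Increasing `gamma1` or `lambda1` raises the penalty on tree size and leaf magnitudes, which is how XGBoost discourages overfitting.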

4.2. Back Propagation Neural Network

The Back Propagation (BP) Neural Network [45], a feedforward neural network, exhibits strong nonlinear fitting capability and is widely applied. When constructing a geomagnetic matching model with a BP neural network in this paper, the input layer consists of three nodes, corresponding to the components of the total magnetic field intensity at the sampling point along the three axes of the NED coordinate system. The hidden layer is set with 10 nodes. The output layer contains three nodes, corresponding to the latitude, longitude, and altitude of the sampling point, respectively. The activation functions of neurons in the hidden layer and output layer are shown in Equations (22) and (23), respectively:
$f_1(x) = \dfrac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$

$f_2(x) = x$
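The 3-10-3 architecture with these activations amounts to the forward pass below; the weights are random placeholders, since training is not shown here:

```python
import numpy as np

def bp_forward(x, W1, b1, W2, b2):
    """Forward pass of the 3-10-3 BP network described above: tanh hidden
    activation f1 (Eq. (22)) and identity output activation f2 (Eq. (23))."""
    h = np.tanh(W1 @ x + b1)  # f1(x) = (e^x - e^-x) / (e^x + e^-x)
    return W2 @ h + b2        # f2(x) = x

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(10, 3)), rng.normal(size=10)  # input -> hidden
W2, b2 = rng.normal(size=(3, 10)), rng.normal(size=3)   # hidden -> output
x = np.array([0.2, -0.1, 0.4])     # three magnetic-field components (toy values)
y = bp_forward(x, W1, b1, W2, b2)  # predicted (latitude, longitude, altitude)
```

Note that `np.tanh` computes exactly $f_1$; the hidden and output dimensions match the node counts stated above.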

4.3. Geomagnetic Matching Simulation Test

To establish the control group, INGO is used to optimize the number of iterations, tree depth, and learning rate of XGBoost. Similarly, INGO is employed to optimize the weight matrix from the input layer to the hidden layer, the weight matrix from the hidden layer to the output layer, and the biases of the hidden layer and output layer for the BP neural network.
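For both control models, the optimizer needs a scalar fitness per candidate parameter vector. A minimal sketch of such a wrapper follows; `train_fn`, the three hyperparameters, and the trivial mean predictor are illustrative assumptions, not the paper's actual code:

```python
def make_fitness(train_fn, X_tr, y_tr, X_val, y_val):
    """Wrap model training into a scalar fitness for the optimizer: decode a
    candidate vector into hyperparameters, train, and score by validation MAE."""
    def fitness(candidate):
        n_rounds, depth, lr = int(candidate[0]), int(candidate[1]), candidate[2]
        model = train_fn(X_tr, y_tr, n_rounds=n_rounds, depth=depth, lr=lr)
        pred = model(X_val)
        return sum(abs(p - t) for p, t in zip(pred, y_val)) / len(y_val)
    return fitness

def mean_train(X, y, n_rounds, depth, lr):
    """Trivial stand-in 'model': always predicts the training mean."""
    m = sum(y) / len(y)
    return lambda X_new: [m] * len(X_new)

fitness = make_fitness(mean_train, [[0]] * 4, [1.0, 2.0, 3.0, 4.0], [[0]] * 2, [2.5, 2.5])
score = fitness([100, 3, 0.1])  # the optimizer minimizes this value
```

The same wrapper shape serves XGBoost (iterations, tree depth, learning rate) and the BP network (its weight matrices and biases flattened into one vector).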
Then, the geomagnetic matching dataset constructed in Section 2 is used to conduct comparative tests on five models, namely, INGO-ELM, NGO-ELM, ELM, INGO-XGBoost, and INGO-BP. The test results are shown in Table 7, Table 8 and Table 9, which present the error statistics of latitude, longitude, and height obtained by the five geomagnetic matching models, respectively.
In Table 7, Table 8 and Table 9, the INGO-ELM model exhibits the lowest mean absolute error (MAE), root mean square error (RMSE), and mean square error (MSE) among the five models, indicating that the average errors for latitude, longitude, and altitude obtained through this model are the smallest, with the fewest prediction outliers. Therefore, the geomagnetic matching accuracy and stability of this model are the highest. Furthermore, taking the MAE evaluation index as an example, the MAE values for latitude and longitude obtained using the INGO-ELM geomagnetic matching model are 5.728 × 10−5 degrees and 5.98 × 10−5 degrees, respectively, which correspond to approximately 6.38 m and 6.43 m after unit conversion. The MAE for altitude obtained by this model is 0.0137 m. These results demonstrate that the geomagnetic matching model based on INGO-ELM achieves extremely high matching accuracy.
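The degree-to-meter conversion used above can be reproduced with a simple spherical-Earth approximation; the radius value is an assumption, so the results land close to, but not exactly on, the reported figures:

```python
import math

# Spherical-Earth approximation for converting angular errors to meters.
EARTH_RADIUS_M = 6_371_000.0  # assumed mean radius

def lat_deg_to_m(err_deg):
    """One degree of latitude spans about (pi / 180) * R, roughly 111.2 km."""
    return err_deg * math.pi / 180.0 * EARTH_RADIUS_M

def lon_deg_to_m(err_deg, lat_deg):
    """A degree of longitude shrinks by cos(latitude)."""
    return err_deg * math.pi / 180.0 * EARTH_RADIUS_M * math.cos(math.radians(lat_deg))

lat_err_m = lat_deg_to_m(5.728e-5)       # ~6.37 m, close to the reported 6.38 m
lon_err_m = lon_deg_to_m(5.98e-5, 15.0)  # ~6.42 m, close to the reported 6.43 m
```

At the 15-degree latitudes of the sampling range, the cosine factor is about 0.966, which is why nearly equal angular errors give nearly equal metric errors here.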
The error evaluation indices of the NGO-ELM model are second only to those of the INGO-ELM model, remaining within the same order of magnitude. However, it is noteworthy that in the latitude matching results presented in Table 7, the maximum error of the INGO-ELM model is 6.156 × 10−4 degrees, which corresponds to approximately 68.53 m after unit conversion, while the maximum error of the NGO-ELM model is 9.554 × 10−4 degrees, equivalent to approximately 106.36 m after unit conversion, a difference of about 37.83 m. Similarly, in the longitude matching results shown in Table 8, the maximum errors of the two models differ by approximately 41.74 m after unit conversion. A positioning error of several tens of meters has a significant impact on the navigation of an aerial vehicle. Therefore, compared to the NGO-ELM algorithm, the INGO-ELM algorithm not only enhances matching accuracy but also greatly improves stability. The improvements proposed for the NGO algorithm in this paper are effective and have practical application value.
From Table 7, Table 8 and Table 9, it can be observed that the four error evaluation indices for latitude and altitude obtained through the INGO-BP model are similar to those of the INGO-ELM model. However, for longitude, the MAE value of the BP neural network model is 8.359 × 10−4, which differs by an order of magnitude from the MAE value of 5.98 × 10−5 for the INGO-ELM model, demonstrating relatively poor performance. This discrepancy may be attributed to the tendency of the BP neural network to become stuck in local optima during training, leading to lower overall geomagnetic matching accuracy. The error data of the INGO-XGBoost model in the three tables are mostly two orders of magnitude higher than those of the INGO-ELM model. This could be attributed to the fact that the matching results of XGBoost are determined by expected values based on the conditional distribution of features, lacking the ability to excavate deeper relationships between features. XGBoost cannot directly capture the complex feature relationships in the geomagnetic matching dataset, resulting in the poor geomagnetic matching performance of the INGO-XGBoost model. In Table 7 and Table 8, all error evaluation metrics for the ELM model are one order of magnitude higher than those of the INGO-ELM and NGO-ELM models, indicating that optimizing the initial weights and biases of the ELM model using the INGO or NGO algorithms significantly improves its matching performance.
Furthermore, in the geomagnetic matching dataset of this paper, the units of latitude and longitude are degrees, with small numerical values and minimal variations, while the unit of altitude is meters, featuring large numerical values and significant variations. However, the data obtained by the machine learning model is dimensionless, and the model itself does not inherently understand the specific physical meanings of each input and output parameter. Without considering dimensional units, the output data in Table 7, Table 8 and Table 9 show that the matching errors for latitude and longitude are several orders of magnitude lower than those for altitude. However, after converting the units of latitude and longitude from degrees to meters, their matching errors become higher than those of altitude. This indicates that the dimensional discrepancies in the geomagnetic matching dataset lead to higher positioning accuracy for altitude.
To provide a more intuitive comparison of the fitting performance among the five models, Figure 6a, Figure 7a and Figure 8a illustrate the predicted values of latitude, longitude, and altitude for the sampling points by different models, along with the corresponding true values. Enlarged sections of these three figures are shown in Figure 6b, Figure 7b, and Figure 8b.
As seen in Figure 6a and Figure 8a, the predicted values of the five models exhibit a good fit with the actual values at all sampling points. However, in Figure 7a, it is evident that the INGO-XGBoost and INGO-BP models exhibit poor fitting performance, while the other three algorithms still perform well. This may be attributed to the fact that within the sampling range of this study, the variation in longitude [15, 15.009] is much smaller than the variation in latitude [15, 16], while the sampling interval is the same for both at 0.001 degrees. As a result, the longitude sampling values lack sufficient diversity, and both the INGO-XGBoost and INGO-BP models fail to capture key features when processing this type of data, leading to poor prediction performance. As shown in Figure 6b, Figure 7b, and Figure 8b, the INGO-ELM model demonstrates the highest matching accuracy.
Based on the comprehensive analysis of Table 7, Table 8 and Table 9 and Figure 6, Figure 7, and Figure 8, the test results confirm the effectiveness of the three improvement measures proposed for NGO in this study. In addition, after obtaining the geomagnetic data of the aerial vehicle's location, the trained INGO-ELM model can accurately determine the aerial vehicle's geographic coordinates in just 0.27 µs. Therefore, the proposed INGO-ELM geomagnetic matching model significantly outperforms the NGO-ELM, ELM, INGO-XGBoost, and INGO-BP models, achieving stable, precise, and real-time positioning.
It is worth noting that achieving the positioning accuracy of the INGO-ELM geomagnetic matching model imposes stringent requirements on geomagnetic sensors. Taking latitude as an example for analysis, in the IGRF-13 model, when longitude and altitude remain unchanged, the geomagnetic field may change by only a few nanoteslas (nT) per 0.001 degrees of latitude (approximately 110 m). In the simulation results of this paper, the average absolute error in latitude matching is 6.38 m, implying that to achieve this positioning accuracy, the sensitivity of the geomagnetic sensor must be lower than 1 nT. However, as shown in References [46,47,48,49], with the development of atomic magnetometer technology, geomagnetic sensors will be capable of achieving picotesla (pT)-level or even femtotesla (fT)-level three-axis magnetic field measurements within a high dynamic range in noisy and physically demanding environments, meeting the performance requirements of the geomagnetic matching method proposed in this paper.
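The sensitivity requirement can be checked with back-of-envelope arithmetic; the 3 nT figure below is an assumed stand-in for "a few nanoteslas" per 0.001 degrees of latitude:

```python
# Back-of-envelope check of the sensor requirement stated in the text.
field_change_nT = 3.0  # assumed value for "a few nT" per 0.001 deg of latitude
step_m = 110.0         # ~0.001 deg of latitude in meters
gradient_nT_per_m = field_change_nT / step_m  # ~0.027 nT per meter
target_accuracy_m = 6.38                      # reported latitude MAE
required_sensitivity_nT = gradient_nT_per_m * target_accuracy_m
# Resolving 6.38 m therefore requires resolving well under 1 nT of field change.
```

Under this assumption the required resolution is roughly 0.2 nT, consistent with the sub-nanotesla requirement and with the pT- to fT-level magnetometers cited in References [46,47,48,49].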

4.4. Robustness Verification

Within the sampling range shown in Table 3, the interference of non-natural noise on the magnetic field is minimal. This study primarily focuses on the impact of observational errors from the geomagnetic sensor on the INGO-ELM geomagnetic matching model. To evaluate the robustness of the INGO-ELM geomagnetic matching model, Gaussian white noise with standard deviations of 1 nT and 5 nT was added to the input data of the test set, and the test results are presented in Table 10, Table 11 and Table 12.
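The noise-injection step of this robustness test can be sketched as follows; the field magnitudes are toy values, and only the noise standard deviations of 1 nT and 5 nT come from the text:

```python
import numpy as np

def add_sensor_noise(X, sigma_nT, rng):
    """Add zero-mean Gaussian white noise with standard deviation sigma_nT
    to the magnetic-field inputs, modeling the sensor's observational error."""
    return X + rng.normal(0.0, sigma_nT, size=X.shape)

# Toy stand-in for the test-set inputs (three field components per sample, in nT).
rng = np.random.default_rng(42)
X_test = rng.uniform(20_000.0, 50_000.0, size=(1000, 3))
X_noisy_1nT = add_sensor_noise(X_test, 1.0, rng)
X_noisy_5nT = add_sensor_noise(X_test, 5.0, rng)
```

The trained models are then evaluated on `X_noisy_1nT` and `X_noisy_5nT` instead of the clean inputs, with the targets left unchanged.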
In Table 10 and Table 11, the INGO-ELM model exhibits the smallest mean absolute error for geomagnetic matching, indicating that the model maintains the highest geomagnetic matching accuracy among the five models even after the introduction of Gaussian white noise. Furthermore, these two tables demonstrate that after adding noise, the geomagnetic matching mean absolute error of INGO-ELM and NGO-ELM is significantly better than that of the other three models. When the standard deviation of the Gaussian white noise is 1 nT, the matching accuracy of the two models is similar. However, when the standard deviation of the Gaussian white noise is 5 nT, the mean absolute error of the INGO-ELM model in matching latitude and longitude, after unit conversion, is approximately 12.41 m and 12.28 m lower than that of the NGO-ELM model, respectively. These test results suggest that the INGO-ELM model demonstrates significantly better anti-interference capability than the NGO-ELM model.
Table 12 further indicates that when Gaussian white noise with a standard deviation of 1 nT is added to the INGO-ELM geomagnetic matching model, the order of magnitude of most error data remains unchanged. However, when the standard deviation increases to 5 nT, the average absolute errors of the INGO-ELM model in matching latitude, longitude, and altitude, after unit conversion, become 15.83 m, 17.55 m, and 0.027 m, respectively, still maintaining a high level of geomagnetic matching accuracy. Meanwhile, the maximum errors, after unit conversion, become 213.63 m, 196.13 m, and 0.409 m, but their occurrence probability is extremely low, and abnormal values can be eliminated using filtering algorithms. Therefore, a comprehensive analysis of the data in Table 10, Table 11 and Table 12 reveals that the INGO-ELM model not only demonstrates excellent matching accuracy but also maintains robust performance in the presence of noise.

5. Conclusions

Geomagnetic matching-assisted navigation offers several advantages, including all-terrain, all-weather capabilities, strong anti-jamming performance, and no error accumulation, which can compensate for the shortcomings of the INS/GPS integrated navigation system. The INGO-ELM geomagnetic matching model proposed in this paper enables real-time, precise, and stable positioning for high-speed aerial vehicles. In this model, an improved NGO algorithm is used to optimize the initial weights and biases of the ELM model, and the effectiveness of the improvements is validated using the CEC2005 benchmark function suite. The results demonstrate that the INGO algorithm performs best overall in the tests, and the improvements effectively enhance the convergence rate, convergence accuracy, and stability of the NGO algorithm.
In addition, this paper employs the IGRF-13 model to generate a geomagnetic matching dataset and conducts comparative testing of several geomagnetic matching models, including INGO-ELM, NGO-ELM, ELM, INGO-XGBoost, and INGO-BP. The simulation results show that after the airborne equipment acquires the geomagnetic data, it only takes 0.27 µs to obtain the latitude, longitude, and altitude of the aerial vehicle through the INGO-ELM model. After unit conversion, the average absolute errors are approximately 6.38 m, 6.43 m, and 0.0137 m, respectively. When considering all types of geomagnetic matching error data comprehensively, the geomagnetic matching performance of the INGO-ELM model is significantly superior to that of the other four models. Furthermore, the INGO-ELM model maintained a high level of geomagnetic matching accuracy even after the addition of Gaussian white noise to the test set inputs, indicating that the model exhibits good robustness. Therefore, the INGO-ELM algorithm presented in this paper exhibits superior robustness and is capable of accomplishing the positioning task in real time with high-accuracy matching performance, even in the presence of observational errors from the magnetic sensor.
Future Outlook: Although the INGO-ELM model demonstrates exceptionally high geomagnetic matching accuracy, it requires precise geomagnetic data from the target area for training. A major challenge lies in the stable acquisition of such data in advance. Although the IGRF-13 model adopted in this study is a highly internationally recognized geomagnetic field reference model, it exhibits certain discrepancies compared to the real geomagnetic field. In fact, the geomagnetic matching algorithm proposed herein is equally applicable to datasets generated by other more accurate geomagnetic field models that establish one-to-one correspondences between geomagnetic information and location information. In future work, we will focus on the construction of regional geomagnetic field models. Additionally, geomagnetic matching-assisted navigation may be severely disrupted in special circumstances, such as geomagnetic storms. Therefore, it is crucial to explore how to integrate the INGO-ELM model with other navigation systems to effectively address such situations.

Author Contributions

Conceptualization, J.H.; project administration, Z.H. and W.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (grant number: 62203191).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data is contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Wei, X.; Lang, P.; Li, J.; Feng, K.; Zhan, Y. A hybrid optimization method based on extreme learning machine aided factor graph for ins/gps information fusion during gps outages. Aerosp. Sci. Technol. 2024, 152, 109326. [Google Scholar] [CrossRef]
  2. Wang, D.; Xu, X.; Zhu, Y. A novel hybrid of a fading filter and an extreme learning machine for gps/ins during gps outages. Sensors 2018, 18, 3863. [Google Scholar] [CrossRef] [PubMed]
  3. Lyu, D.; Wang, J.; He, Z.; Chen, Y.; Hou, B. Landmark-based inertial navigation system for autonomous navigation of missile platform. Sensors 2020, 20, 3083. [Google Scholar] [CrossRef] [PubMed]
  4. Shen, C.; Zhang, Y.; Tang, J.; Cao, H.; Liu, J. Dual-optimization for a mems-ins/gps system during gps outages based on the cubature kalman filter and neural networks. Mech. Syst. Signal Process. 2019, 133, 106222. [Google Scholar] [CrossRef]
  5. Hegazy, S.A.E.-H.; Kamel, A.M.; Arafa, I.I.; Elhalwagy, Y.Z. INS stochastic noise impact on circular error probability of ballistic missiles. Navig. J. Inst. Navig. 2022, 69, navi.523. [Google Scholar] [CrossRef]
  6. Xie, F.; Dong, M. An improved attitude compensation algorithm for sins/gns integrated navigation system. J. Sens. 2021, 2021, 5525481. [Google Scholar] [CrossRef]
  7. Sang, Y.; Ji, X.; Wei, D.; You, Y.; Zhang, W. Cramer-rao lower bound for geomagnetic matching and its approximation method. IEEE Trans. Aerosp. Electron. Syst. 2025; early access. [Google Scholar]
  8. Heda, Z.; Ning, Z.; Lei, X.; Penglong, L.; Yonglu, L.; Xu, L. Summary of research on geomagnetic navigation technology. IOP Conf. Ser. Earth Environ. Sci. 2021, 769, 032031. [Google Scholar]
  9. Li, M.M.; Lu, H.-Q.; Yin, H.; Huang, X.L. Novel algorithm for geomagnetic navigation. J. Cent. South Univ. Technol. 2011, 18, 791–799. [Google Scholar] [CrossRef]
  10. Zhang, J.; Zhang, T.; Zhang, C.; Yao, Y. An improved iccp-based underwater terrain matching algorithm for large initial position error. IEEE Sens. J. 2022, 22, 16381–16391. [Google Scholar] [CrossRef]
  11. Wei, E.; Dong, C.; Yang, Y.; Tang, S.; Liu, J.; Gong, G.; Deng, Z. A robust solution of integrated sitan with tercom algorithm: Weight-reducing iteration technique for underwater vehicles’ gravity-aided inertial navigation system. Navigation 2017, 64, 111–122. [Google Scholar] [CrossRef]
  12. Stepanov, O.A.; Toropov, A.B. Nonlinear filtering for map-aided navigation. part 1. an overview of algorithms. Gyroscopy Navig. 2015, 6, 324–337. [Google Scholar] [CrossRef]
  13. Arulampalam, M.; Maskell, S.; Gordon, N.; Clapp, T. A tutorial on particle filters for online nonlinear/non-gaussian bayesian tracking. IEEE Trans. Signal Process. 2002, 50, 174–188. [Google Scholar] [CrossRef]
  14. Rigatos, G.G. Particle filtering for state estimation in nonlinear industrial systems. IEEE Trans. Instrum. Meas. 2009, 58, 3885–3900. [Google Scholar] [CrossRef]
  15. Ahwiadi, M.; Wang, W. An adaptive particle filter technique for system state estimation and prognosis. IEEE Trans. Instrum. Meas. 2020, 69, 6756–6765. [Google Scholar] [CrossRef]
  16. Ji, C.; Song, C. An assisted navigation method based on geomagnetic matching. In Proceedings of the International Conference on Advanced Manufacturing Technology and Manufacturing Systems (ICAMTMS 2022), Shijiazhuang, China, 27–29 May 2022; Deng, Q., Ed.; International Society for Optics and Photonics (SPIE): Bellingham, WA, USA, 2022; Volume 12309, p. 123090U. [Google Scholar]
  17. Ji, C.; Chen, Q.; Song, C. Improved particle swarm optimization geomagnetic matching algorithm based on simulated annealing. IEEE Access 2020, 8, 226064–226073. [Google Scholar] [CrossRef]
  18. Chen, K.; Liang, W.C.; Liu, M.X.; Sun, H.Y. Comparison of geomagnetic aided navigation algorithms for hypersonic vehicles. J. Zhejiang Univ.-Sci. A 2020, 21, 673–683. [Google Scholar] [CrossRef]
  19. Tang, C.; Shi, H.; Zhang, L. Geomagnetic matching cooperative positioning method for unmanned boat cluster based on factor graph. Ocean. Eng. 2024, 296, 116901. [Google Scholar] [CrossRef]
  20. Ren, Y.; Wang, L.; Lin, K.; Ma, H.; Ma, M. Improved iterative closest contour point matching navigation algorithm based on geomagnetic vector. Electronics 2022, 11, 796. [Google Scholar] [CrossRef]
  21. Xiao, J.; Duan, X.; Qi, X.; Liu, Y. An improved iccp matching algorithm for use in an interference environment during geomagnetic navigation. J. Navig. 2019, 73, 56–74. [Google Scholar] [CrossRef]
  22. Xu, N.; Wang, L.; Wu, T.; Yao, Z. An innovative pso-iccp matching algorithm for geomagnetic navigation. Measurement 2022, 193, 110958. [Google Scholar] [CrossRef]
  23. Chen, Z.; Liu, Z.; Zhang, Q.; Chen, D.; Pan, M.; Xu, Y. A new geomagnetic vector navigation method based on a two-stage neural network. Electronics 2023, 12, 1975. [Google Scholar] [CrossRef]
  24. Chen, Z.; Liu, K.; Zhang, Q.; Liu, Z.; Chen, D.; Pan, M.; Hu, J.; Xu, Y. Geomagnetic vector pattern recognition navigation method based on probabilistic neural network. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5909608. [Google Scholar] [CrossRef]
  25. Ding, S.; Xu, X.; Nie, R. Extreme learning machine and its applications. Neural Comput. Appl. 2014, 25, 549–556. [Google Scholar] [CrossRef]
  26. Gao, Z.; Yi, W. Prediction of projectile impact points and launch conditions based on extreme learning machine. Measurement 2025, 252, 117308. [Google Scholar] [CrossRef]
  27. Li, Z.; Jiang, W.; Zhang, S.; Sun, Y.; Zhang, S. A hydraulic pump fault diagnosis method based on the modified ensemble empirical mode decomposition and wavelet kernel extreme learning machine methods. Sensors 2021, 21, 2599. [Google Scholar] [CrossRef]
  28. Shariati, M.; Mafipour, M.S.; Ghahremani, B.; Azarhomayun, F.; Ahmadi, M.; Trung, N.T.; Shariati, A. A novel hybrid extreme learning machine–grey wolf optimizer (elm-gwo) model to predict compressive strength of concrete with partial replacements for cement. Eng. Comput. 2020, 38, 757–779. [Google Scholar] [CrossRef]
  29. Huang, G.-B.; Zhou, H.; Ding, X.; Zhang, R. Extreme learning machine for regression and multiclass classification. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 2012, 42, 513–529. [Google Scholar] [CrossRef] [PubMed]
  30. Gao, Z.; Yi, W. Optimizing projectile aerodynamic parameter identification of kernel extreme learning machine based on improved dung beetle optimizer algorithm. Measurement 2025, 239, 115473. [Google Scholar] [CrossRef]
  31. Wang, Y.; Yu, H.; Zhang, L.; Li, G. Apo-elm model for improving azimuth correction of shipborne hfswr. Remote Sens. 2023, 15, 3818. [Google Scholar] [CrossRef]
  32. Subudhi, U.; Dash, S. Detection and classification of power quality disturbances using GWO ELM. J. Ind. Inf. Integr. 2021, 22, 100204. [Google Scholar] [CrossRef]
  33. Xu, F.; Liu, Y.; Wang, L. An improved elm-woa–based fault diagnosis for electric power. Front. Energy Res. 2023, 11, 1135741. [Google Scholar] [CrossRef]
  34. Alken, P.; Thébault, E.; Beggan, C.D.; Amit, H.; Aubert, J.; Baerenzung, J.; Bondar, T.N.; Brown, W.J.; Califf, S.; Chambodut, A.; et al. International geomagnetic reference field: The thirteenth generation. Earth Planets Space 2021, 73, 49. [Google Scholar] [CrossRef]
  35. Bacanin, N.; Stoean, C.; Zivkovic, M.; Jovanovic, D.; Antonijevic, M.; Mladenovic, D. Multi-swarm algorithm for extreme learning machine optimization. Sensors 2022, 22, 4204. [Google Scholar] [CrossRef]
  36. Li, X.; Ma, J. Domain adaptation based on semi-supervised cross-domain mean discriminative analysis and kernel transfer extreme learning machine. Sensors 2023, 23, 6102. [Google Scholar] [CrossRef]
  37. Bardhan, A.; Samui, P.; Ghosh, K.; Gandomi, A.H.; Bhattacharyya, S. Elm-based adaptive neuro swarm intelligence techniques for predicting the california bearing ratio of soils in soaked conditions. Appl. Soft Comput. 2021, 110, 107595. [Google Scholar] [CrossRef]
  38. Dehghani, M.; Hubálovský, Š.; Trojovský, P. Northern goshawk optimization: A new swarm-based algorithm for solving optimization problems. IEEE Access 2021, 9, 162059–162080. [Google Scholar] [CrossRef]
  39. Hu, G.; Zhong, J.; Wei, G. Sachba_pdn: Modified honey badger algorithm with multi-strategy for uav path planning. Expert Syst. Appl. 2023, 223, 119941. [Google Scholar] [CrossRef]
  40. Gupta, S.; Deep, K.; Mirjalili, S. An efficient equilibrium optimizer with mutation strategy for numerical optimization. Appl. Soft Comput. 2020, 96, 106542. [Google Scholar] [CrossRef]
  41. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  42. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  43. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  44. Qiu, Y.; Zhou, J.; Khandelwal, M.; Yang, H.; Yang, P.; Li, C. Performance evaluation of hybrid woa-xgboost, gwo-xgboost and bo-xgboost models to predict blast-induced ground vibration. Eng. Comput. 2021, 38 (Suppl. 5), 1–18. [Google Scholar] [CrossRef]
  45. Zhang, D.; Lou, S. The application research of neural network and bp algorithm in stock price pattern classification and prediction. Future Gener. Comput. Syst. 2021, 115, 872–879. [Google Scholar] [CrossRef]
  46. Bai, X.; Wen, K.; Peng, D.; Liu, S.; Luo, L. Atomic magnetometers and their application in industry. Front. Phys. 2023, 11, 1212368. [Google Scholar] [CrossRef]
  47. Tang, J.; Zhai, Y.; Cao, L.; Zhang, Y.; Li, L.; Zhao, B.; Zhou, B.; Han, B.; Liu, G. High-sensitivity operation of a single-beam atomic magnetometer for three-axis magnetic field measurement. Opt. Express 2021, 29, 15641–15652. [Google Scholar] [CrossRef] [PubMed]
  48. Wu, S.; Bao, G.; Guo, J.; Chen, J.; Du, W.; Shi, M.; Yang, P.; Chen, L.; Zhang, W. Quantum magnetic gradiometer with entangled twin light beams. Sci. Adv. 2023, 9, eadg1760. [Google Scholar] [CrossRef]
  49. Ma, Z.; Han, C.; Tan, Z.; He, H.; Shi, S.; Kang, X.; Wu, J.; Huang, J.; Lu, B.; Lee, C. Adaptive cold-atom magnetometry mitigating the trade-off between sensitivity and dynamic range. Sci. Adv. 2025, 11, eadt3938. [Google Scholar] [CrossRef]
Figure 1. The North–East–Down coordinate system.
Figure 2. Comparison of initial population distributions.
Figure 3. Comparison of convergence factors.
Figure 4. Convergence curves of various intelligent optimization algorithms on the CEC2005 benchmark function suite.
Figure 5. Flowchart of the INGO-ELM algorithm.
Figure 6. The latitude matching results of different models.
Figure 7. The longitude matching results of different models.
Figure 8. The altitude matching results of different models.
Table 1. Terminology summary.
Mathematical SymbolFull Name
VGeomagnetic potential
aThe Earth’s average radius
zThe distance from the Earth’s core to the calculation point
θ Cotangent latitude measured from the North Pole
φ Geomagnetic latitude
λ Longitude measured eastward from Greenwich
g n m , h n m Normalized Schmidt spherical harmonic coefficients
P n m cos θ Schmidt quasi-normalized associated Legendre function of degree n and order m
B x , B y , B z Magnetic field intensities in northward, eastward, and downward directions
HOutput matrix of the hidden layer in ELM
H + Moore–Penrose generalized inverse of H
WWeight matrix between the input layer and the hidden layer in ELM
β Weight matrix between the output layer and the hidden layer in ELM
TExpected output matrix of training samples in ELM
bThreshold of the hidden layer in ELM
β ^ Minimum norm least squares solution
P i Location of the target prey for the i-th Northern Goshawk
X c State of the Northern Goshawk
F s i Ideal fitness value of NGO algorithm
X i n e w , p 1 New state of the i -th Northern Goshawk at Stage 1 of NGO algorithm
X i , u n e w , p 1 New state of the i -th Northern Goshawk in the u-th dimension at Stage 1 of NGO algorithm
F i n e w , p 1 Fitness value of Northern Goshawk under the new state at Stage 1 of NGO algorithm
X i n e w , p 2 New state of the i -th Northern Goshawk at Stage 2 of NGO algorithm
X i , u n e w , p 2 New state of the i -th Northern Goshawk in the u-th dimension at Stage 2 of NGO algorithm
F i n e w , p 2 Fitness value of Northern Goshawk under the new state at Stage 2 of NGO algorithm
R 1 Convergence factor of stage 2 in NGO algorithm
R 2 Convergence factor of stage 2 in INGO algorithm
JMaximum iteration limit of stage 2 in NGO or INGO algorithm
ε Adjustment coefficient of the Bernoulli shift map
γ A random number that follows a Gaussian distribution
a 1 Linear decreasing convergence factor in GWO algorithm
b 1 Spiral shape parameter in WOA
c 1 , c 2 , w 1 Cognitive parameter, social parameter, and inertia weight in PSO algorithm
O b j Objective function in XGBoost
f ^ Prediction function in XGBoost
γ 1 Regularization parameter in XGBoost
T 1 Number of leaf nodes in the regression tree of XGBoost
λ 1 Penalty term for leaf node weights in XGBoost
y i ^ Model predicted values in XGBoost
y i True value
L y i , y i ^ Loss function in XGBoost
Ω f ^ Regularization term in XGBoost
f 1 x Activation functions of neurons in the hidden layers of BP neural network
f 2 x Activation functions of neurons in the output layer of BP neural network
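The ELM quantities in Table 1 ($W$, $b$, $H$, $T$, $\hat{\beta}$) combine in a single closed-form training step, $\hat{\beta} = H^{+}T$: the input-side weights and thresholds are drawn at random and frozen, and only the output weights are solved for. A minimal NumPy sketch, not the paper's implementation (the sigmoid activation, weight scale, and function names are illustrative assumptions):

```python
import numpy as np

def elm_train(X, T, n_hidden, seed=0):
    """Basic ELM training: random fixed W and b, closed-form output weights."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # input-to-hidden weights W (random, fixed)
    b = rng.standard_normal(n_hidden)                # hidden-layer thresholds b
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # hidden-layer output matrix H (sigmoid)
    beta = np.linalg.pinv(H) @ T                     # beta_hat = H^+ T, minimum-norm LSQ solution
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```

Because no gradient descent is involved, training reduces to one pseudo-inverse; the NGO/INGO optimizer's role in the paper is to pick better-than-random $W$ and $b$.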
Table 2. Acronyms summary.

| Acronym | Full Name |
|---|---|
| NGO | Northern Goshawk Optimization |
| INGO | Improved Northern Goshawk Optimization |
| ELM | Extreme Learning Machine |
| XGBoost | Extreme Gradient Boosting |
| BP | Back Propagation |
| PSO | Particle Swarm Optimization |
| GWO | Grey Wolf Optimizer |
| WOA | Whale Optimization Algorithm |
| INS | Inertial Navigation System |
| GPS | Global Positioning System |
| IGRF-13 | International Geomagnetic Reference Field, 13th Edition |
| SITAN | Sandia Inertial Terrain Aided Navigation |
| ICCP | Iterative Closest Contour Point |
| MAGCOM | Geomagnetic Contour Matching |
| PDA | Probability Data Association |
| PNN | Probabilistic Neural Network |
| KELM | Kernel Extreme Learning Machine |
| IDBO | Improved Dung Beetle Optimization |
| AdaBoost | Adaptive Boosting |
| NED | North–East–Down (coordinate system) |
| CEC | Congress on Evolutionary Computation |
| GBDT | Gradient Boosting Decision Tree |
Table 3. The sampling approach for latitude, longitude, and altitude.

| Sampling Parameter | Sampling Range | Sampling Interval |
|---|---|---|
| Latitude (°) | [15, 16] | 0.001 |
| Longitude (°) | [15, 15.009] | 0.001 |
| Altitude (m) | [2000, 12,000] | 80 |
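The sampling scheme in Table 3 defines a regular three-dimensional grid whose Cartesian product yields the candidate matching points. A sketch of one way to enumerate it (variable names are illustrative; the paper does not specify its grid-generation code):

```python
import numpy as np

# Axes per Table 3: inclusive [start, stop] with a fixed step.
# A tiny epsilon keeps the floating-point endpoint inside np.arange.
lat = np.arange(15.0, 16.0 + 1e-9, 0.001)      # latitude samples (deg)
lon = np.arange(15.0, 15.009 + 1e-9, 0.001)    # longitude samples (deg)
alt = np.arange(2000.0, 12000.0 + 1e-9, 80.0)  # altitude samples (m)

# Cartesian product of the three axes -> (N, 3) array of (lat, lon, alt).
grid = np.stack(np.meshgrid(lat, lon, alt, indexing="ij"), axis=-1).reshape(-1, 3)
```

Each grid point would then be fed to the IGRF-13 model to produce the corresponding geomagnetic-field label for the dataset.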
Table 4. Parameter settings of the intelligent optimization algorithms.

| Algorithm | Name of the Parameter | Value of the Parameter |
|---|---|---|
| INGO | $R_2$ | [0, 0.02] |
| NGO | $R_1$ | [0, 0.02] |
| GWO | $a_1$ | [0, 2] |
| WOA | $b_1$ | 1 |
| PSO | $c_1, c_2, w_1$ | 1.49618, 1.49618, 0.8 |
Table 5. The CEC2005 benchmark function suite.

| Function Expression | Dimension | Range | Optimal Value |
|---|---|---|---|
| $F_1(x)=\sum_{i=1}^{d} x_i^2$ | 30/100 | [−100, 100] | 0 |
| $F_2(x)=\sum_{i=1}^{d}\lvert x_i\rvert+\prod_{i=1}^{d}\lvert x_i\rvert$ | 30/100 | [−10, 10] | 0 |
| $F_3(x)=\sum_{i=1}^{d}\left(\sum_{j=1}^{i} x_j\right)^2$ | 30/100 | [−100, 100] | 0 |
| $F_4(x)=\max_i\{\lvert x_i\rvert,\ 1\le i\le d\}$ | 30/100 | [−100, 100] | 0 |
| $F_5(x)=\sum_{i=1}^{d-1}\left[\left(x_i-1\right)^2+100\left(x_{i+1}-x_i^2\right)^2\right]$ | 30/100 | [−30, 30] | 0 |
| $F_6(x)=\sum_{i=1}^{d}\left(\lfloor x_i+0.5\rfloor\right)^2$ | 30/100 | [−100, 100] | 0 |
| $F_7(x)=\sum_{i=1}^{d} i x_i^4+\mathrm{random}[0,1)$ | 30/100 | [−1.28, 1.28] | 0 |
| $F_8(x)=-\sum_{i=1}^{d} x_i \sin\left(\sqrt{\lvert x_i\rvert}\right)$ | 30/100 | [−500, 500] | −418.982 × d |
| $F_9(x)=\sum_{i=1}^{d}\left[x_i^2-10\cos(2\pi x_i)+10\right]$ | 30/100 | [−5.12, 5.12] | 0 |
| $F_{10}(x)=-20\exp\left(-0.2\sqrt{\tfrac{1}{d}\sum_{i=1}^{d}x_i^2}\right)-\exp\left(\tfrac{1}{d}\sum_{i=1}^{d}\cos(2\pi x_i)\right)+20+e$ | 30/100 | [−32, 32] | 0 |
| $F_{11}(x)=\tfrac{1}{4000}\sum_{i=1}^{d}x_i^2-\prod_{i=1}^{d}\cos\left(\tfrac{x_i}{\sqrt{i}}\right)+1$ | 30/100 | [−600, 600] | 0 |
| $F_{12}(x)=\tfrac{\pi}{d}\left\{10\sin^2(\pi y_1)+\sum_{i=1}^{d-1}\left(y_i-1\right)^2\left[1+10\sin^2(\pi y_{i+1})\right]+\left(y_d-1\right)^2\right\}+\sum_{i=1}^{d}u(x_i,10,100,4)$, where $y_i=1+\tfrac{x_i+1}{4}$ and $u(x_i,a,k,m)=\begin{cases}k\left(x_i-a\right)^m, & x_i>a\\ 0, & -a\le x_i\le a\\ k\left(-x_i-a\right)^m, & x_i<-a\end{cases}$ | 30/100 | [−50, 50] | 0 |
| $F_{13}(x)=0.1\left\{\sin^2(3\pi x_1)+\sum_{i=1}^{d-1}\left(x_i-1\right)^2\left[1+\sin^2(3\pi x_{i+1})\right]+\left(x_d-1\right)^2\left[1+\sin^2(2\pi x_d)\right]\right\}+\sum_{i=1}^{d}u(x_i,5,100,4)$ | 30/100 | [−50, 50] | 0 |
| $F_{14}(x)=\left(\tfrac{1}{500}+\sum_{j=1}^{25}\tfrac{1}{j+\sum_{i=1}^{2}\left(x_i-a_{ij}\right)^6}\right)^{-1}$ | 2 | [−65.53, 65.53] | 1 |
| $F_{15}(x)=\sum_{i=1}^{11}\left[a_i-\tfrac{x_1\left(b_i^2+b_i x_2\right)}{b_i^2+b_i x_3+x_4}\right]^2$ | 4 | [−5, 5] | 0.0003075 |
| $F_{16}(x)=4x_1^2-2.1x_1^4+\tfrac{1}{3}x_1^6+x_1 x_2-4x_2^2+4x_2^4$ | 2 | [−5, 5] | −1.0316285 |
| $F_{17}(x)=\left(x_2-\tfrac{5.1}{4\pi^2}x_1^2+\tfrac{5}{\pi}x_1-6\right)^2+10\left(1-\tfrac{1}{8\pi}\right)\cos x_1+10$ | 2 | [−5, 10] × [0, 15] | 0.398 |
| $F_{18}(x)=\left[1+\left(x_1+x_2+1\right)^2\left(19-14x_1+3x_1^2-14x_2+6x_1x_2+3x_2^2\right)\right]\times\left[30+\left(2x_1-3x_2\right)^2\left(18-32x_1+12x_1^2+48x_2-36x_1x_2+27x_2^2\right)\right]$ | 2 | [−5, 5] | 3 |
| $F_{19}(x)=-\sum_{i=1}^{4}c_i\exp\left(-\sum_{j=1}^{3}a_{ij}\left(x_j-p_{ij}\right)^2\right)$ | 3 | [0, 1] | −3.86 |
| $F_{20}(x)=-\sum_{i=1}^{4}c_i\exp\left(-\sum_{j=1}^{6}a_{ij}\left(x_j-p_{ij}\right)^2\right)$ | 6 | [0, 1] | −3.32 |
| $F_{21}(x)=-\sum_{i=1}^{5}\left[\left(X-a_i\right)\left(X-a_i\right)^T+c_i\right]^{-1}$ | 4 | [0, 10] | −10.1523 |
| $F_{22}(x)=-\sum_{i=1}^{7}\left[\left(X-a_i\right)\left(X-a_i\right)^T+c_i\right]^{-1}$ | 4 | [0, 10] | −10.4028 |
| $F_{23}(x)=-\sum_{i=1}^{10}\left[\left(X-a_i\right)\left(X-a_i\right)^T+c_i\right]^{-1}$ | 4 | [0, 10] | −10.5363 |
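A few of the Table 5 benchmarks written out in NumPy make the test protocol concrete: each function is evaluated on a candidate vector and the optimizer tries to drive the value toward the listed optimum. These three (sphere $F_1$, Rastrigin $F_9$, Ackley $F_{10}$) are illustrative re-implementations, not the paper's code:

```python
import numpy as np

def sphere(x):
    """F1: sum of squares; global minimum 0 at the origin."""
    return np.sum(x**2)

def rastrigin(x):
    """F9: sum(x_i^2 - 10 cos(2 pi x_i) + 10); global minimum 0 at the origin."""
    return np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

def ackley(x):
    """F10: global minimum 0 at the origin."""
    d = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / d))
            - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / d) + 20.0 + np.e)
```

Any population-based optimizer (NGO, INGO, GWO, WOA, PSO) can be benchmarked by minimizing these callables over the stated ranges and dimensions.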
Table 6. The test results of various intelligent optimization algorithms on the CEC2005 benchmark function suite.

| Function | Item | INGO | NGO | GWO | WOA | PSO | Rank |
|---|---|---|---|---|---|---|---|
| $F_1$ | AVG | 3.15 × 10^−262 | 1.75 × 10^−88 | 6 × 10^−33 | 2.76 × 10^−85 | 0.063392 | 1 |
| | STD | 0 | 3.77 × 10^−88 | 1.34 × 10^−32 | 9 × 10^−85 | 0.073493 | 1 |
| $F_2$ | AVG | 3 × 10^−130 | 9.80 × 10^−46 | 8.5 × 10^−20 | 4.33 × 10^−54 | 0.369031 | 1 |
| | STD | 1.1 × 10^−129 | 7.37 × 10^−46 | 7.67 × 10^−20 | 2.14 × 10^−53 | 1.823812 | 1 |
| $F_3$ | AVG | 5.2 × 10^−266 | 5.59 × 10^−23 | 1.68 × 10^−8 | 29,390.27 | 1304.786 | 1 |
| | STD | 0 | 2.37 × 10^−22 | 3.38 × 10^−8 | 11,871.37 | 1100.218 | 1 |
| $F_4$ | AVG | 1.3 × 10^−131 | 8.34 × 10^−38 | 2.07 × 10^−8 | 40.84204 | 5.535247 | 1 |
| | STD | 7.2 × 10^−131 | 6.36 × 10^−38 | 2.23 × 10^−8 | 29.04829 | 1.084562 | 1 |
| $F_5$ | AVG | 25.07017 | 25.48175 | 26.73398 | 27.43498 | 3147.53 | 1 |
| | STD | 0.197656 | 0.378691 | 0.666209 | 0.381211 | 16,412.4 | 1 |
| $F_6$ | AVG | 0.007508 | 9.45 × 10^−7 | 0.524527 | 0.111634 | 0.090861 | 2 |
| | STD | 0.038813 | 4.3 × 10^−7 | 0.354699 | 0.112065 | 0.082604 | 2 |
| $F_7$ | AVG | 0.000264 | 0.000497 | 0.00125 | 0.001825 | 0.030162 | 1 |
| | STD | 0.000139 | 0.000238 | 0.000657 | 0.001988 | 0.010857 | 1 |
| $F_8$ | AVG | −7641.21 | −6759.21 | −6090.95 | −10,843.1 | −8131.35 | 3 |
| | STD | 343.3791 | 569.1969 | 896.4646 | 1636.906 | 689.8551 | 1 |
| $F_9$ | AVG | 0 | 0 | 1.680008 | 0 | 43.41142 | 1 |
| | STD | 0 | 0 | 2.829896 | 0 | 13.16153 | 1 |
| $F_{10}$ | AVG | 4.44 × 10^−16 | 5.3 × 10^−15 | 4.31 × 10^−14 | 4.23 × 10^−15 | 0.162205 | 1 |
| | STD | 0 | 1.74 × 10^−15 | 5.36 × 10^−15 | 1.85 × 10^−15 | 0.255072 | 1 |
| $F_{11}$ | AVG | 0 | 0 | 0.000588 | 0 | 0.152681 | 1 |
| | STD | 0 | 0 | 0.002236 | 0 | 0.120431 | 1 |
| $F_{12}$ | AVG | 1.19 × 10^−5 | 2.68 × 10^−7 | 0.031227 | 0.006675 | 0.151436 | 2 |
| | STD | 5.33 × 10^−6 | 3.08 × 10^−7 | 0.019466 | 0.006805 | 0.293799 | 2 |
| $F_{13}$ | AVG | 0.023122 | 0.06525 | 0.335469 | 0.201898 | 0.093056 | 1 |
| | STD | 0.029073 | 0.090489 | 0.230345 | 0.154582 | 0.079277 | 1 |
| $F_{14}$ | AVG | 0.998004 | 0.998004 | 4.296612 | 1.786602 | 0.998004 | 1 |
| | STD | 0 | 0 | 4.047256 | 1.87817 | 1.17 × 10^−16 | 1 |
| $F_{15}$ | AVG | 0.000308 | 0.000308 | 0.003776 | 0.000696 | 0.001147 | 1 |
| | STD | 3.12 × 10^−7 | 8.79 × 10^−7 | 0.007551 | 0.000383 | 0.003647 | 1 |
| $F_{16}$ | AVG | −1.03163 | −1.03163 | −1.03163 | −1.03163 | −1.03163 | 1 |
| | STD | 6.78 × 10^−16 | 6.71 × 10^−16 | 1.88 × 10^−8 | 2.13 × 10^−10 | 6.32 × 10^−16 | 3 |
| $F_{17}$ | AVG | 0.397887 | 0.397887 | 0.397905 | 0.397892 | 0.397887 | 1 |
| | STD | 0 | 0 | 8.56 × 10^−5 | 1.38 × 10^−5 | 0 | 1 |
| $F_{18}$ | AVG | 3 | 3 | 3.000014 | 3.000003 | 3 | 1 |
| | STD | 9.4 × 10^−16 | 9.26 × 10^−16 | 1.68 × 10^−5 | 5.65 × 10^−6 | 1.07 × 10^−15 | 2 |
| $F_{19}$ | AVG | −3.86278 | −3.86278 | −3.86154 | −3.85961 | −3.86278 | 1 |
| | STD | 2.71 × 10^−15 | 2.7 × 10^−15 | 0.002591 | 0.002958 | 2.64 × 10^−15 | 3 |
| $F_{20}$ | AVG | −3.322 | −3.322 | −3.24887 | −3.23504 | −3.26037 | 1 |
| | STD | 1.41 × 10^−15 | 2.95 × 10^−14 | 0.07886 | 0.112313 | 0.063773 | 1 |
| $F_{21}$ | AVG | −10.1521 | −10.1532 | −9.30821 | −9.05871 | −6.30225 | 2 |
| | STD | 0.00399 | 1.87 × 10^−9 | 1.918351 | 2.531618 | 3.132277 | 2 |
| $F_{22}$ | AVG | −10.2257 | −10.4029 | −10.226 | −9.14426 | −8.5522 | 3 |
| | STD | 0.970414 | 2.7 × 10^−9 | 0.963122 | 2.521123 | 3.164457 | 3 |
| $F_{23}$ | AVG | −10.5364 | −10.5364 | −10.535 | −8.65884 | −7.92306 | 1 |
| | STD | 0.0001 | 1.32 × 10^−15 | 0.00079 | 2.769705 | 3.538868 | 2 |
Table 7. The latitude matching error (°) of different models.

| Evaluation Index | INGO-ELM | NGO-ELM | ELM | INGO-XGBoost | INGO-BP |
|---|---|---|---|---|---|
| MAE | 5.728 × 10^−5 | 6.516 × 10^−5 | 1.139 × 10^−4 | 1.697 × 10^−3 | 8.284 × 10^−5 |
| MSE | 5.754 × 10^−9 | 8.013 × 10^−9 | 2.316 × 10^−8 | 5.664 × 10^−6 | 1.17 × 10^−8 |
| MAX | 6.156 × 10^−4 | 9.554 × 10^−4 | 1.016 × 10^−3 | 3.519 × 10^−2 | 1.119 × 10^−3 |
| RMSE | 7.586 × 10^−5 | 8.951 × 10^−5 | 1.522 × 10^−4 | 2.38 × 10^−3 | 1.082 × 10^−4 |
Table 8. The longitude matching error (°) of different models.

| Evaluation Index | INGO-ELM | NGO-ELM | ELM | INGO-XGBoost | INGO-BP |
|---|---|---|---|---|---|
| MAE | 5.98 × 10^−5 | 6.761 × 10^−5 | 1.289 × 10^−4 | 2.234 × 10^−3 | 8.359 × 10^−4 |
| MSE | 6.609 × 10^−9 | 8.399 × 10^−9 | 3.086 × 10^−8 | 7.058 × 10^−6 | 1.29 × 10^−6 |
| MAX | 5.305 × 10^−4 | 9.187 × 10^−4 | 1.434 × 10^−3 | 5.893 × 10^−3 | 4.984 × 10^−3 |
| RMSE | 8.13 × 10^−5 | 9.165 × 10^−5 | 1.757 × 10^−4 | 2.657 × 10^−3 | 1.136 × 10^−3 |
Table 9. The altitude matching error (m) of different models.

| Evaluation Index | INGO-ELM | NGO-ELM | ELM | INGO-XGBoost | INGO-BP |
|---|---|---|---|---|---|
| MAE | 1.372 × 10^−2 | 1.635 × 10^−2 | 3.028 × 10^−2 | 0.104 | 5.11 × 10^−2 |
| MSE | 2.82 × 10^−4 | 4.467 × 10^−4 | 1.495 × 10^−3 | 3.16 × 10^−2 | 4.698 × 10^−3 |
| MAX | 0.104 | 0.202 | 0.324 | 19.099 | 0.314 |
| RMSE | 1.68 × 10^−2 | 2.114 × 10^−2 | 3.867 × 10^−2 | 0.178 | 6.854 × 10^−2 |
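The four evaluation indices used in Tables 7–9 (MAE, MSE, MAX, RMSE) are standard error statistics over the test set. A small helper, written here for reference (the function name and dict layout are illustrative, not from the paper):

```python
import numpy as np

def matching_errors(y_true, y_pred):
    """MAE, MSE, MAX and RMSE of the per-sample matching errors."""
    e = np.abs(np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float))
    mse = np.mean(e**2)
    return {"MAE": np.mean(e),      # mean absolute error
            "MSE": mse,             # mean squared error
            "MAX": np.max(e),       # worst-case absolute error
            "RMSE": np.sqrt(mse)}   # root mean squared error
```

Applied separately to the latitude, longitude, and altitude outputs, this reproduces the row structure of Tables 7–9.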
Table 10. Mean absolute errors under Gaussian white noise with a standard deviation of 1 nT.

| Coordinate Parameter | INGO-ELM | NGO-ELM | ELM | INGO-XGBoost | INGO-BP |
|---|---|---|---|---|---|
| Latitude (°) | 7.818 × 10^−5 | 8.583 × 10^−5 | 4.691 × 10^−4 | 2.128 × 10^−3 | 2.949 × 10^−4 |
| Longitude (°) | 9.416 × 10^−5 | 9.919 × 10^−5 | 6.763 × 10^−4 | 2.615 × 10^−3 | 9.091 × 10^−4 |
| Altitude (m) | 1.522 × 10^−2 | 1.827 × 10^−2 | 7.95 × 10^−2 | 0.162 | 0.159 |
Table 11. Mean absolute errors under Gaussian white noise with a standard deviation of 5 nT.

| Coordinate Parameter | INGO-ELM | NGO-ELM | ELM | INGO-XGBoost | INGO-BP |
|---|---|---|---|---|---|
| Latitude (°) | 1.422 × 10^−4 | 2.537 × 10^−4 | 7.557 × 10^−4 | 3.952 × 10^−3 | 6.889 × 10^−4 |
| Longitude (°) | 1.632 × 10^−4 | 2.774 × 10^−4 | 1.105 × 10^−3 | 3.498 × 10^−3 | 1.263 × 10^−3 |
| Altitude (m) | 2.736 × 10^−2 | 4.078 × 10^−2 | 0.123 | 0.568 | 0.372 |
Table 12. Matching errors of the INGO-ELM model under Gaussian white noise with different standard deviations.

| Coordinate Parameter | MAE (0 nT) | MAX (0 nT) | MAE (1 nT) | MAX (1 nT) | MAE (5 nT) | MAX (5 nT) |
|---|---|---|---|---|---|---|
| Latitude (°) | 5.728 × 10^−5 | 6.156 × 10^−4 | 7.818 × 10^−5 | 1.11 × 10^−3 | 1.422 × 10^−4 | 1.919 × 10^−3 |
| Longitude (°) | 5.98 × 10^−5 | 5.305 × 10^−4 | 9.416 × 10^−5 | 9.799 × 10^−4 | 1.632 × 10^−4 | 1.824 × 10^−3 |
| Altitude (m) | 1.372 × 10^−2 | 0.104 | 1.522 × 10^−2 | 0.171 | 2.736 × 10^−2 | 0.409 |
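The robustness test behind Tables 10–12 amounts to perturbing the magnetic-field inputs with zero-mean Gaussian white noise of standard deviation 1 nT or 5 nT before prediction. A minimal sketch of the perturbation step, assuming the field inputs are in nT; the commented `model.predict` call is a hypothetical API, not the paper's code:

```python
import numpy as np

def add_sensor_noise(B, sigma_nT, seed=0):
    """Perturb magnetic-field inputs (nT) with zero-mean Gaussian white noise."""
    rng = np.random.default_rng(seed)
    return B + rng.normal(0.0, sigma_nT, size=np.shape(B))

# Illustrative robustness loop (model and error computation are assumed):
# for sigma in (1.0, 5.0):
#     preds = model.predict(add_sensor_noise(B_test, sigma))
#     ...compare preds against the true latitude/longitude/altitude (MAE, MAX)
```

Only the test-set inputs are perturbed; the trained model is left unchanged, so the comparison isolates the effect of sensor observation error.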
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
