1. Introduction
Global navigation satellite systems (GNSSs) have become critical infrastructure that supports modern society, with applications that span aviation, maritime navigation, unmanned aerial vehicle autonomous navigation, intelligent transportation, precision agriculture, and many other fields, thus serving as a vital technological foundation for societal operations [
1]. However, as GNSS applications continue to expand, the security threats they face are becoming increasingly severe. In high-risk scenarios such as civil aviation takeoff and landing and drone swarm operations, malicious interference can cause navigation signal distortion or even service disruption, which can lead to major safety incidents [
2]. GNSS frequency band interference signals can disrupt service functions over vast geographical areas, thereby resulting in positioning errors, speed measurement inaccuracies, and timing anomalies. These issues can range from causing navigation errors to system crashes or even major safety incidents [
3]. This interference not only severely degrades system performance but also poses potential threats to low-altitude safety and public security [
4]. Therefore, research on the precise identification and classification of GNSS interference signals is highly important for ensuring system reliability, mitigating major risks, and maintaining safety and social stability.
Significantly, in advanced applications that rely on precise GNSS services, reliable signal detection and interference identification are not merely desirable for system robustness but are critical prerequisites for safe autonomous operation and decision-making. For instance, in multi-perspective unmanned aerial vehicle object detection tasks, accurate target localization and environmental perception depend critically on trustworthy navigation signals; the presence of GNSS interference can directly degrade the input quality to detection models, jeopardizing both detection accuracy and system safety [
5]. In the context of dispatch optimization for Shared Autonomous Vehicles, efficient vehicle positioning and route planning form the foundation for achieving system stability and maximizing overall performance; interference-induced positioning drift or loss can severely compromise the execution of optimal dispatch strategies [
6]. Similarly, in LiDAR-based Simultaneous Localization and Mapping systems enhanced by GNSS, the global position constraints provided by GNSS are vital for correcting odometry drift and enhancing map accuracy; interfering signals can significantly undermine the efficacy of GNSS-aided SLAM graph optimization [
7]. In these safety-sensitive applications, precise identification and classification of GNSS interference signals is therefore a prerequisite for reliable system operation, risk mitigation, and public safety.
In recent years, GNSS jamming technology has trended toward increasing complexity, diversification, and stealthiness in its forms. We can classify the interference into continuous wave interference (CWI), Chirp interference, pulse interference, frequency modulation noise interference (FMI), amplitude modulation noise interference (AMI), and spoofing interference [
8]. While traditional CWI effectively suppresses receiver acquisition and tracking loops because of its concentrated energy and ease of implementation, its steady-state characteristics make it relatively easy to detect. In contrast, chirp jamming exploits rapidly varying spectral characteristics and pulse interference exploits instantaneous peak power; both evade conventional filter designs and rapidly cause positioning errors or receiver lock loss. FMI and AMI employ wide-spectrum modulation techniques to mimic environmental noise, introducing uncertainty and complexity that raise the information entropy of the received signals and thereby significantly increase the probability of misclassification in interference detection. The most challenging type is spoofing interference, which achieves high-fidelity emulation across multiple signal dimensions (time–frequency, structure, power, and spatial), making it statistically very similar to authentic signals in terms of measured information entropy or mutual information and endowing it with extremely strong stealth and deception capabilities [
9]. The rapid evolution and diversification of interference techniques have produced highly complex, uncertain, and high-entropy signal environments, highlighting the urgent need for efficient detection and precise identification of interference signals, whether suppression-type or spoofing-type.
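For illustration, the difference between steady-state CWI and swept chirp jamming can be made concrete with a small baseband sketch; the sampling rate, tone frequency, and sweep range below are assumed values for illustration, not parameters from this paper:

```python
import numpy as np

def cwi(fs, duration, f0, amp=1.0):
    """Continuous-wave interference: a single steady tone at frequency f0."""
    t = np.arange(int(fs * duration)) / fs
    return amp * np.cos(2 * np.pi * f0 * t)

def chirp(fs, duration, f_start, f_end, amp=1.0):
    """Linear chirp interference sweeping from f_start to f_end over `duration`."""
    t = np.arange(int(fs * duration)) / fs
    k = (f_end - f_start) / duration              # sweep rate in Hz/s
    return amp * np.cos(2 * np.pi * (f_start * t + 0.5 * k * t ** 2))
```

In the frequency domain, the CWI tone concentrates its energy in a single bin, while the chirp smears the same energy across the swept band, which is why a fixed notch filter that removes the CWI leaves the chirp largely untouched.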
The continuous iteration of interference techniques (especially the parameter similarity between deceptive and legitimate signals) has created a twofold bottleneck for traditional detection methods. First, their classification capabilities are insufficient in complex, entropy-rich electromagnetic environments: conventional signal quality monitoring (SQM [
10]) technology has difficulty distinguishing between different types of interference in interference scenarios, and its role is often limited to detecting the presence of interference signals [
11,
12]. Second, there are blind spots in the detection of highly realistic deceptive signals: parameterized spoofing technology can circumvent detection mechanisms based on traditional metrics such as the carrier-to-noise ratio (C/N₀) and correlation peak symmetry [13].
Therefore, in the field of GNSS security protection, the intelligent identification and classification of interference signals have become the core research issues for overcoming the current bottlenecks. Robust classification models that are based on deep learning feature extraction can output high-confidence interference-type discrimination results that provide category prior information for adaptive notch filtering, spatiotemporal processing, and multidimensional detection algorithms, thereby enabling the real-time generation of optimal suppression strategies through adaptive parameter configuration. This technical approach achieves rapid and accurate perception and determination of complex interference categories, thus laying a foundation for the construction of a closed-loop defense framework. This approach holds decisive significance for enhancing the availability and continuity of GNSS services in complex electromagnetic environments and ensuring the safe operation of critical infrastructure.
Deep learning, as a powerful data-driven method, has shown great potential in GNSS interference signal classification tasks because of its excellent pattern learning and feature extraction capabilities. By deeply analyzing signal images (such as time–frequency diagrams and eye diagrams) and sequence data, it can extract deep features, identify the latent patterns of interference signals, and achieve efficient and reliable classification, providing a new route around the limitations of traditional methods.
Several machine learning-based studies on GNSS interference signal classification (especially spoofing detection) have been conducted. Zhu et al. [
14] used support vector machine (SVM) to analyze seven features (SQM, moving variance, moving average, early and late phases, etc.) to detect spoofing signals. Wang [
15] and Chen [
16] both adopted a method that combines multiple parameters with an SVM to improve spoofing detection performance. Li et al. [
17] extracted multiple features, including SQM, C/N₀, and pseudorange–Doppler consistency, and validated their performance on the open-source spoofing dataset TEXBAT. For interference pattern recognition, O’Shea et al. [
18] explored the application of a convolutional neural network (CNN) in the identification of complex radio signal modulations. Ferre et al. [
19] transformed the spectral plots of signals into images, thus converting the interference type classification problem into a spectral plot classification task, and compared the performance of an SVM and a CNN in distinguishing between continuous-wave, linear frequency modulation, multitone frequency modulation, pulse, and narrowband frequency modulation interference.
However, existing methods have significant limitations. First, methods that are based on manual features are highly dependent on feature engineering, lack flexibility, and are difficult to generalize. Second, existing machine learning and deep learning models, such as SVM, multilayer perceptron (MLP [
20]), CNN [
21], long short-term memory (LSTM [
22]) and bidirectional long short-term memory (BiLSTM [
23]), still have room for improvement in terms of classification accuracy, especially when dealing with multiple types of interference patterns.
To overcome the above limitations, a CNN–ResNet–BiLSTM hybrid model based on an improved coati optimization algorithm (ICOA–CNN–ResNet–BiLSTM) is proposed in this study for interference signal classification. The core strategy is to construct an eye diagram dataset of interference signals from the correlation values of GNSS receivers and thereby convert the interference signal classification problem into an eye diagram image classification problem. This transformation leverages the visual representation to encode the underlying statistical variations and entropy signatures induced by different jamming types.
At the model design level, single-structure deep learning methods face significant limitations. Traditional CNN models extract local spatial features effectively but struggle to model long-range global context dependencies. BiLSTM models can capture the temporal dependencies of sequence data, which helps in modeling time-varying information content, but they remain constrained by the vanishing gradient problem on extremely long sequences. Deep residual networks (ResNet [24]) alleviate the degradation of deep networks through skip connections, enhancing feature propagation and capturing multiscale informational structures, but their primary strength lies in spatial feature representation, and they are comparatively limited at capturing the complex dynamic patterns of signals with high temporal entropy. To overcome the inherent shortcomings of these single models, we designed a CNN–ResNet–BiLSTM hybrid architecture. In this architecture, a CNN acts as a front-end feature extractor that efficiently captures the eye diagram features of interference signals; a ResNet integrates and deepens the features passed from the CNN layers, exploiting its deep representation ability and the stability of residual learning to learn more robust and abstract discriminative patterns from multilevel features; and a BiLSTM network leverages its bidirectional temporal modeling capability to capture the temporal and spatial correlations of the signals and the characteristic long- and short-term dynamic evolution of spoofing signals. This design combines the spatial feature extraction advantages of the CNN, the deep feature abstraction and stability of the ResNet, and the bidirectional temporal modeling of the BiLSTM, thereby significantly improving classification performance on complex and diverse interference signals.
Despite the outstanding performance of deep learning models, their parameter optimization process still faces significant challenges. Traditional parameter optimization methods such as stochastic gradient descent (SGD) and adaptive moment estimation (Adam) are prone to becoming stuck in local optima and have limited convergence speeds in high-dimensional spaces. Researchers have proposed various novel optimization algorithms to address these issues, but their applications remain limited. For example, the coati optimization algorithm (COA) [
25], which was proposed by Dehghani, draws inspiration from the foraging behavior of coatis for search and demonstrates strong local search capabilities. However, it suffers from insufficient global convergence in high-dimensional multimodal problems and is easily constrained by initial population diversity, which makes it unable to find boundary optimal solutions. Braik’s chameleon swarm algorithm (CSA) [
26] incorporates the chameleon visual mechanism to enhance exploration, but it overly relies on individual information interaction, which leads to premature convergence and a decline in search activity in the later stages. Liu’s adaptive particle swarm optimization (APSO) [
27] improves upon traditional PSO by increasing convergence speed and global performance, but it still struggles to avoid particle diversity decay and faces the risk of premature convergence in nonconvex tasks. The gray wolf optimization algorithm (GWO) [
28] is based on the social hierarchy and hunting strategy (α/β/δ guidance) of gray wolves and has strong global search capabilities. However, population homogeneity increases in the later stages, thus limiting the ability to escape from local optima, which is a phenomenon that becomes more pronounced in high-dimensional problems. The whale optimization algorithm (WOA) [
29] simulates whale bubble net hunting, with efficient initial exploration but excessive focus on the current optimal solution during the development phase. The imbalance between exploration and exploitation leads to insufficient convergence stability when complex optimization surfaces are faced. Additionally, Yan optimized GRU hyperparameters via the simulated annealing (SA) algorithm [
30] as a single-point search strategy, which has low search efficiency and limited optimization capabilities in high-dimensional spaces.
In response to the limitations of the above algorithms, an improved COA (ICOA) is proposed in this study. The algorithm improves on the COA in three ways. First, it uses a logistic–tent chaotic mapping mechanism to generate the initial population, which overcomes the limitations of traditional random initialization and improves the diversity and uniformity of the population. Second, it uses an elite perturbation strategy in which controllable perturbations are periodically applied to the current optimal solution to introduce moderate randomness, breaking potential local stagnation while maintaining convergence toward the globally optimal region. Third, it introduces a dynamic weighting mechanism over the original exploration and exploitation stages, enhancing exploration in the early iterations and focusing on exploitation accuracy in the later iterations, thereby balancing global exploration and local exploitation. Together, these improvements enhance search efficiency and convergence speed and yield superior stability and generalization in avoiding premature convergence when optimizing in high-dimensional spaces.
In summary, the main contributions of this study are as follows: (1) A new method for constructing eye diagram datasets from GNSS receiver correlation values is proposed, which transforms the interference signal classification problem into an eye diagram image classification problem; this transformation efficiently captures the critical information-theoretic characteristics (such as signal uncertainty and distribution) of jamming. (2) A hybrid deep learning model (CNN–ResNet–BiLSTM) that integrates the advantages of CNNs, ResNets, and BiLSTMs to jointly model the spatial, temporal, and hierarchical informational entropy of interference signals was designed and verified for interference signal classification. (3) An improved coati optimization algorithm is proposed that effectively addresses the global convergence and local optimality problems in deep learning hyperparameter optimization. (4) Extensive experiments were conducted on interference signal datasets. The results show that the proposed ICOA–CNN–ResNet–BiLSTM hybrid model achieved a classification accuracy of over 98%, significantly better than the baseline models (e.g., CNN, LSTM, SVM, and CNN-LSTM [
31] models).
The remainder of this paper is organized as follows:
Section 2 introduces a mathematical model of interference signals and their eye diagram characteristics,
Section 3 details the ICOA–CNN–ResNet–BiLSTM model structure and optimization methods,
Section 4 evaluates the model performance on the basis of the interference signal dataset, and
Section 5 summarizes the paper.
3. ICOA–CNN–ResNet–BiLSTM Interference Classification Method
In this study, the classification problem for suppression and deception interference signals is addressed by proposing an ICOA–CNN–ResNet–BiLSTM hybrid model (
Figure 3). First, the original images are preprocessed by grayscaling, normalization, and scaling to extract the key features of the interference signals. Then, the processed data are divided into training, validation, and test sets for model training, hyperparameter tuning, and performance evaluation, respectively. In the model training stage, a hybrid architecture that combines a CNN, a ResNet, and BiLSTM is constructed to leverage their respective advantages in spatial feature extraction and temporal modeling. To improve model performance, key hyperparameters (such as the learning rate, number of neurons, and number of convolutional kernels) are iteratively optimized via the ICOA. The ICOA evaluates the difference between the prediction results and the actual labels on the validation set, calculates the fitness function, and dynamically adjusts the hyperparameter configuration to effectively avoid falling into local optima and enhance the model’s generalizability. Finally, the optimized hyperparameters are applied to the hybrid model to explore the deep spatial and temporal features of the interference signals fully. After the model training is complete, the test set is used to verify the performance, and the classification effect is evaluated in terms of metrics such as accuracy, recall, and the F1 score. The experimental results demonstrate that this method not only improves hyperparameter search efficiency but also effectively integrates spatial, temporal, and global features, thereby significantly enhancing the accuracy and robustness of deception interference signal recognition.
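The hyperparameter tuning loop described above can be sketched as a generic population-based search whose fitness is the validation error. The move rule below is a simplified decaying random walk standing in for the full ICOA operators of Section 3.3, and the population size, iteration budget, and step schedule are illustrative assumptions:

```python
import numpy as np

def icoa_search(fitness, lb, ub, pop_size=10, iters=30, seed=0):
    """Population-based hyperparameter search skeleton.

    `fitness` maps a hyperparameter vector (e.g. [learning_rate, n_neurons,
    n_kernels]) to a validation-set error; lower is better. The move rule is
    a simplified placeholder, not the full ICOA update."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    pop = rng.uniform(lb, ub, size=(pop_size, lb.size))
    fit = np.array([fitness(p) for p in pop])
    best, best_f = pop[fit.argmin()].copy(), float(fit.min())
    for t in range(iters):
        radius = 1.0 - t / iters                  # decaying exploration radius
        cand = np.clip(pop + radius * 0.1 * rng.normal(size=pop.shape) * (ub - lb),
                       lb, ub)
        cand_f = np.array([fitness(p) for p in cand])
        improved = cand_f < fit                   # greedy per-individual selection
        pop[improved], fit[improved] = cand[improved], cand_f[improved]
        if fit.min() < best_f:
            best, best_f = pop[fit.argmin()].copy(), float(fit.min())
    return best, best_f
```

In the actual pipeline, `fitness` would train the hybrid model with the candidate hyperparameters and return its validation error, so each fitness evaluation is expensive and the search budget matters.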
3.1. Image Preprocessing
To prepare the raw input images for efficient analysis by the subsequent deep learning model, a three-stage preprocessing pipeline is applied: (1) grayscale conversion, which removes color channel dependencies and reduces noise and complexity; (2) normalization, which standardizes the input intensity range to enhance model stability and sensitivity to subtle features; and (3) image resizing, which significantly reduces the spatial resolution to improve the computational efficiency of training and inference while preserving critical spatial patterns. Together, these steps ensure that the model receives clean, standardized, and computationally manageable inputs.
RGB images are transformed into single-channel grayscale representations via the luminosity method, which is defined as follows:

I_gray = 0.299·R + 0.587·G + 0.114·B

where R, G, and B denote the red, green, and blue channel intensities, respectively. This operation reduces the data dimensionality from three channels (H × W × 3) to one (H × W) and eliminates color interference while preserving the spatial relationships that are critical for signal characterization.
Pixel intensity values are scaled to the range [0, 1] via min–max normalization:

I_norm(x, y) = (I(x, y) − I_min) / (I_max − I_min)

where I_min and I_max represent the global extrema across the grayscale image. This step enhances contrast sensitivity for low-amplitude interference components.
Grayscale images are resampled to a standardized lower resolution via bilinear interpolation, in which each output pixel is a distance-weighted average of its four nearest input pixels:

I′(x, y) = (1 − α)(1 − β)·I(x₀, y₀) + α(1 − β)·I(x₁, y₀) + (1 − α)β·I(x₀, y₁) + αβ·I(x₁, y₁)

where (x₀, y₀), (x₁, y₀), (x₀, y₁), and (x₁, y₁) are the four neighboring input pixels and α and β are the fractional horizontal and vertical offsets. This spatial reduction (horizontal: ≈11.7× and vertical: ≈6.6×) optimizes computational efficiency while preserving the structural continuity of interference patterns through adaptive bilinear weighting.
The input is downsampled from RGB to grayscale, thereby achieving 98.7% compression and an improved signal-to-noise ratio for more effective feature extraction.
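As a concrete sketch of the three preprocessing stages, the following NumPy implementation applies the luminosity conversion, min–max normalization, and a hand-rolled bilinear resize. The 96 × 96 default target resolution is an illustrative assumption, not the paper's actual output size:

```python
import numpy as np

def to_grayscale(rgb):
    """Luminosity grayscale: weighted sum of the R, G, B channels."""
    return rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114

def minmax_normalize(img):
    """Scale pixel intensities to [0, 1] using the global extrema."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-12)

def resize_bilinear(img, out_h, out_w):
    """Bilinear resampling of a 2-D image to a fixed (out_h, out_w) grid."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

def preprocess(rgb, out_h=96, out_w=96):
    """Full pipeline: grayscale -> min-max normalize -> bilinear resize."""
    return resize_bilinear(minmax_normalize(to_grayscale(rgb)), out_h, out_w)
```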
3.2. CNN–ResNet–BiLSTM
The CNN–ResNet–BiLSTM hybrid model fully integrates the advantages of the three network structures in spatial feature extraction and temporal modeling to achieve synergistic enhancement. The model (
Figure 4) consists of the following core modules and functions.
The model first applies a two-dimensional convolution (2D-CNN) operation to the input GNSS eye diagram image to extract interference features with local spatial characteristics in the image. Convolution operations can effectively identify short-term interference patterns (such as short-term bursts and periodic textures) and can be combined with pooling operations (such as MaxPooling) to further compress the feature dimensions and enhance robustness.
To deepen feature representation, a residual module is introduced, which includes two convolution layers and a skip connection layer. By setting multiscale convolution kernels, the model can accommodate interference features at various scales, which improves its ability to recognize complex interference types. Moreover, the residual structure effectively alleviates the gradient disappearance and performance degradation problems in deep networks, thereby ensuring the stable transmission of features in deep networks.
After being processed by the residual module, the high-order features are first flattened into a sequence format via a max pooling layer and a flatten layer and then input into the BiLSTM layer. BiLSTM possesses forward and backward temporal dependency modeling capabilities, which enable it to capture dynamic changes in signals across the temporal dimension. This approach is particularly effective for identifying interference types with temporal evolution characteristics, thereby significantly enhancing the model’s recognition accuracy and robustness against dynamic interference.
The classification head consists of a fully connected (FC) layer and a SoftMax layer: the FC layer maps the BiLSTM output to the predefined interference category space, and the SoftMax layer outputs the predicted probability of each interference category, completing the final classification task.
To accelerate the training process, improve model stability, and prevent overfitting, multiple batch normalization (BN) layers and dropout layers are employed in the model. During training, the Adam optimizer is used in combination with a learning rate decay mechanism to further improve training stability and model generalizability. Additionally, ReLU activation functions are employed in all the network layers to enhance the model’s nonlinear expression capability and accelerate convergence.
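A minimal PyTorch sketch of this data flow is given below. The channel counts, hidden size, and the way the pooled feature map is unrolled into a sequence for the BiLSTM are illustrative assumptions, not the paper's tuned configuration:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two conv layers with an identity skip connection."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch),
        )
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.body(x) + x)

class CnnResNetBiLSTM(nn.Module):
    """CNN front end -> residual block -> BiLSTM -> FC head."""
    def __init__(self, n_classes=6, ch=16, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.res = ResidualBlock(ch)
        self.pool = nn.MaxPool2d(2)
        self.lstm = nn.LSTM(input_size=ch, hidden_size=hidden,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                      # x: (B, 1, H, W) eye diagram
        f = self.pool(self.res(self.cnn(x)))   # (B, C, H/4, W/4)
        b, c, h, w = f.shape
        seq = f.permute(0, 2, 3, 1).reshape(b, h * w, c)  # flatten to sequence
        out, _ = self.lstm(seq)                # (B, h*w, 2*hidden)
        return self.head(out[:, -1])           # logits; SoftMax applied in loss
```

In training, these logits would be fed to a cross-entropy loss (which applies the SoftMax internally) with the Adam optimizer and learning rate decay described above.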
3.3. ICOA
To address the inherent shortcomings of the COA in high-dimensional complex search spaces, such as becoming stuck in local optima and insufficient convergence accuracy, the proposed ICOA optimizes the existing COA through three core improvements: (1) in the population initialization stage, the traditional random initialization strategy is replaced with a logistic–tent chaotic mapping that combines the characteristics of the logistic and tent maps, which significantly improves the uniformity and coverage of the initial population and lays a foundation of diversity for the global search; (2) an elite perturbation strategy (EPS) is introduced, which applies periodic, adaptively decaying perturbations to the current optimal solution (the elite individual) and uses a greedy selection mechanism to inject controllable randomness and effectively escape local optima; and (3) a dynamically decaying weight coefficient is applied across the exploration and exploitation phases, adjusting the exploration intensity early on and replacing the original linear boundary contraction with a nonlinear power-law contraction guided by the elite during exploitation, thereby jointly enhancing global exploration capability and local optimization efficiency in complex search spaces. The process of the ICOA (
Figure 5) is as follows.
3.3.1. Logistic–Tent Chaotic Mapping
To address the issues of uneven population distribution and insufficient coverage of the solution space that may arise with traditional random initialization methods in the COA, an improved strategy—logistic–tent chaotic mapping initialization—is employed in this study. By combining the traversal characteristics of logistic mapping and tent mapping, this method significantly enhances the uniformity and diversity of the initial population, thereby effectively avoiding local convergence and population clustering during the search process. The mathematical expression for the logistic map is as follows:
x_{n+1} = μ·x_n·(1 − x_n)

where μ ∈ (0, 4] is the control parameter, n represents the number of mappings, and x_n represents the function value of the n-th mapping.
The mathematical expression for the tent map is as follows:
x_{n+1} = x_n / α,                0 ≤ x_n < α
x_{n+1} = (1 − x_n) / (1 − α),    α ≤ x_n ≤ 1

where α ∈ (0, 1) is the tent parameter, n represents the number of mappings, and x_n represents the function value of the n-th mapping.
The mathematical expression for the logistic–tent chaotic map is as follows:
x_{n+1} = [μ·x_n·(1 − x_n) + (4 − μ)·x_n / 2] mod 1,            x_n < 0.5
x_{n+1} = [μ·x_n·(1 − x_n) + (4 − μ)·(1 − x_n) / 2] mod 1,      x_n ≥ 0.5

The chaotic sequence is then mapped to the search space by

X_i = lb + x_i·(ub − lb)

where X_i represents the population individual obtained by mapping back to the individual search space and lb and ub represent the lower and upper bounds of the solution space.
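A compact sketch of the chaotic population initialization is shown below; the control parameter value μ = 3.9 and the burn-in length are assumptions chosen for illustration:

```python
import numpy as np

def logistic_tent(x, mu=3.9):
    """One step of the combined logistic-tent chaotic map on values in [0, 1)."""
    logi = mu * x * (1.0 - x)
    tent = np.where(x < 0.5, (4.0 - mu) * x / 2.0, (4.0 - mu) * (1.0 - x) / 2.0)
    return (logi + tent) % 1.0

def chaotic_init(pop_size, dim, lb, ub, mu=3.9, seed=0):
    """Initialize a population by iterating the map, then scale to [lb, ub]."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.05, 0.95, size=(pop_size, dim))  # avoid map fixed points
    for _ in range(50):                                 # burn-in iterations
        x = logistic_tent(x, mu)
    return lb + x * (ub - lb)
```

Compared with plain uniform sampling, the chaotic sequence spreads the initial individuals more evenly over the solution space, which is the diversity property the ICOA relies on.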
3.3.2. Elite Perturbation Strategy
In the improved coati optimization algorithm, we adopt the EPS to enhance the algorithm’s ability to escape from local optima. The core idea of this strategy is to periodically apply controllable perturbations to the current optimal solution during the iteration process. By introducing moderate randomness, the algorithm breaks out of any local stable state into which it may have fallen while maintaining the trend of converging to the globally optimal region. Elite perturbations are triggered periodically:

t mod T_p = 0

where t represents the current iteration number and T_p is the perturbation period. To balance exploration and exploitation in the optimization process, we propose an elite adaptive perturbation strategy in which the perturbation intensity decays dynamically over the iterations: in the early stages, large perturbations allow escape from local optima, and in the later stages the perturbations shrink to enable accurate optimization. The adaptive perturbation expression is as follows:

X_new = X_best + δ·(1 − t / T_max)·r

where X_best represents the current globally optimal solution, δ represents the basic disturbance coefficient (set to 0.1), T_max represents the maximum number of iterations, and r represents a random vector whose entries are uniformly distributed in the interval [0, 1]. We impose boundary constraints on the disturbance:

X_new = min(max(X_new, lb), ub)

In addition, we apply elite perturbation together with a greedy selection strategy for exploration and exploitation: after each perturbed solution is generated, if it is better than the current optimal solution, the optimal solution is replaced; otherwise, the original optimal solution is retained. The elite perturbation strategy injects vitality into the algorithm to escape from local optima while maintaining the convergence direction of the algorithm.
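The trigger, decaying perturbation, boundary clipping, and greedy selection steps can be sketched as follows; the perturbation period, the symmetric [−1, 1] direction, and the (ub − lb) scaling are illustrative assumptions rather than the paper's exact formulation:

```python
import numpy as np

def elite_perturb(best_x, best_f, fitness, t, t_max, lb, ub,
                  delta=0.1, period=10, rng=None):
    """Periodically perturb the elite; keep the trial only if fitness improves."""
    if t % period != 0:                      # perturbation is triggered periodically
        return best_x, best_f
    rng = rng or np.random.default_rng()
    decay = 1.0 - t / t_max                  # large moves early, fine moves late
    trial = best_x + delta * decay * rng.uniform(-1, 1, best_x.shape) * (ub - lb)
    trial = np.clip(trial, lb, ub)           # boundary constraint
    f = fitness(trial)
    if f < best_f:                           # greedy selection (minimization)
        return trial, f
    return best_x, best_f
```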
3.3.3. Exploration and Exploitation Optimization
We introduce a dynamic weighting mechanism into the original exploration and exploitation stages to balance global exploration and local exploitation. The weight decreases nonlinearly with the number of iterations, which enhances exploration in the early stages and focuses on exploitation accuracy in the later stages. The position update in the exploration stage is

X_i^{t+1} = X_i^t + s·r·w(t)·(X_best^t − X_i^t)

where r represents a uniformly distributed random number, s represents a randomly switched exploration direction coefficient, and w(t) is the following adaptive weighting function:

w(t) = w_min + (w_max − w_min)·(1 − t / T_max)^2

where w_max and w_min denote the initial and final values of the weight, respectively.
During the exploitation phase, we improved the original linear boundary contraction to a nonlinear power-law contraction and introduced an elite guidance mechanism to accelerate local convergence while preventing premature convergence. The nonlinear power-law contraction expression is as follows:

lb^t = lb·(1 − t / T_max)^γ,    ub^t = ub·(1 − t / T_max)^γ

where γ is the nonlinear contraction index. The elite guidance mechanism can be expressed as

X_i^{t+1} = X_best^t + r·(ub^t − lb^t),                         q < p
X_i^{t+1} = X_i^t + (1 − 2r)·(lb^t + r·(ub^t − lb^t)),          q ≥ p

where q represents an independent random number and p represents the probability of elite guidance selection.
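The two mechanisms can be sketched as small helper functions; the weight bounds, contraction index, and elite-guidance probability below are illustrative assumptions rather than the paper's settings:

```python
import numpy as np

def adaptive_weight(t, t_max, w_max=0.9, w_min=0.2):
    """Nonlinearly decaying weight: wide exploration early, fine search late."""
    return w_min + (w_max - w_min) * (1.0 - t / t_max) ** 2

def contracted_bounds(lb, ub, t, t_max, gamma=3.0):
    """Nonlinear power-law contraction of the local search bounds."""
    s = (1.0 - t / t_max) ** gamma     # shrinks faster than a linear schedule
    return lb * s, ub * s

def elite_guided_move(x, best, lb_t, ub_t, p=0.5, rng=None):
    """With probability p move around the elite; otherwise sample the local box."""
    rng = rng or np.random.default_rng()
    r, q = rng.uniform(), rng.uniform()
    if q < p:
        return best + r * (ub_t - lb_t)
    return x + (1.0 - 2.0 * r) * (lb_t + r * (ub_t - lb_t))
```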
3.3.4. Performance Analysis
Four classic optimization test functions (the Rastrigin, Rosenbrock, Ackley, and Schwefel functions) were used as benchmarks (
Table 2) to comprehensively evaluate the performance of the ICOA. These functions each have distinctive characteristics (such as strong oscillations, the coexistence of flat regions and steep valleys, and narrow flat valleys), which systematically probe an algorithm’s global search capability, convergence behavior, and solution accuracy. By comparing the ICOA with the APSO algorithm, CSA, SA algorithm, GWO algorithm, WOA, and COA, we analyzed its performance in terms of accuracy, speed, and stability. All the experiments were conducted under uniform conditions: a population size of 60, a maximum of 200 iterations, and 50 dimensions.
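For reference, the four benchmark functions have standard closed forms, sketched here in NumPy (each attains its global minimum of 0 at the origin, except Rosenbrock at the all-ones vector and Schwefel near x_i ≈ 420.9687):

```python
import numpy as np

def rastrigin(x):
    """Highly multimodal, strongly oscillating landscape."""
    return 10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

def rosenbrock(x):
    """Narrow flat valley; minimum at the all-ones vector."""
    return np.sum(100 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2)

def ackley(x):
    """Flat outer region with a steep central valley and many local optima."""
    n = x.size
    return (-20 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / n) + 20 + np.e)

def schwefel(x):
    """Deceptive multimodal landscape; minimum near x_i = 420.9687."""
    return 418.9829 * x.size - np.sum(x * np.sin(np.sqrt(np.abs(x))))
```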
The fitness curves of the optimization algorithms on the Rastrigin, Rosenbrock, Ackley, and Schwefel functions effectively demonstrate their performance differences, particularly in terms of global search capability, ability to escape from local optima, and convergence speed (
Figure 6). To further compare the performance of these algorithms in a more intuitive manner, we visualized the solutions of each algorithm via box plots (
Figure 7), which helped reveal the optimization effectiveness of the algorithms under various objective functions.
In the Rastrigin function optimization problem, the global search capability of the algorithm is crucial. The APSO algorithm, CSA, SA algorithm, and GWO algorithm easily became stuck in local optima during the optimization process, resulting in suboptimal search results, with their optimal solutions falling between 10^1 and 10^2. In contrast, the optimal solutions found by the WOA, COA, and ICOA were on the order of 10^−13. However, the WOA converged slowly, especially in the early stages, with a relatively slow decline in its fitness curve. The COA demonstrated strong global search capability, converging quickly to optimal solutions and thus proving highly effective on the Rastrigin problem. Furthermore, by improving the search strategy, the ICOA outperformed the COA in convergence speed and stability, converging in just 10 iterations. This indicates that its optimization performance was enhanced in multiple aspects, with a better balance between global exploration and local exploitation during the search, improving both solution accuracy and convergence speed and achieving a 44% improvement in convergence speed over the original algorithm.
In the optimization of the Rosenbrock function, there is only one globally optimal solution, but the function’s elongated valley requires the algorithm to search precisely along a narrow path. APSO and the CSA performed the worst, with optimal solutions above 500 that deviated significantly from the true optimum; SA also performed poorly, with optimal solutions fluctuating around 500; GWO and the WOA performed moderately, with optimal solutions at the 10^2 level; the COA became stuck in a local optimum, with its optimal solution also remaining at the 10^2 level; and the ICOA demonstrated the best performance, with its optimal solution approaching the 10^−9 level.
In the optimization of the Ackley function, which features flat regions, steep valleys, and multiple local optima, an algorithm needs strong global search capabilities. The ICOA performed the best, with its optimal solution approaching the 10^−16 order of magnitude, the closest to the ideal solution of 0; the COA’s optimal solution was of the same order of magnitude, but the COA required 17 more iterations to converge, so the ICOA achieved a 40% improvement in convergence speed; the WOA’s optimal solution was on the 10^−15 order of magnitude, but the WOA converged more slowly, requiring more than 150 iterations; and the CSA, APSO, and SA performed similarly, with optimal solutions between 5 and 10.
Owing to the complex multipeaked nature of the Schwefel function, optimization algorithms face greater challenges. The optimal solutions of the CSA, GWO algorithm, and COA all exceeded 10,000 and were far from the ideal solution of 0; the optimal solutions of the APSO algorithm, SA algorithm, and WOA remained in the 10^3–10^4 range; and the ICOA achieved an optimal solution on the order of 10^−4, thus performing the best.
In summary, the algorithms exhibited significant differences in global search capability and convergence performance across the optimization problems. On the Rastrigin function, most algorithms tended to become stuck in local optima during the global search, leading to suboptimal results, whereas the COA and ICOA, with their strong global search capabilities, quickly converged to optimal solutions; in particular, the ICOA’s improved search strategies yielded superior convergence speed and stability, with a more than 40% improvement in convergence speed over the COA. On the Rosenbrock function, although APSO and the CSA performed poorly, with optimal solutions far from the true optimum, the ICOA demonstrated an advantage in fine-grained search and obtained a solution close to the true optimum. On the Ackley function, the global search capability of the ICOA again played an important role, yielding the results closest to the ideal solution; other algorithms, such as the COA and WOA, obtained results of similar magnitude but converged more slowly. The Schwefel function, with its complex multipeaked landscape, is difficult to optimize, yet the ICOA still performed the best: it was the only algorithm to obtain an optimal solution on the order of 10^−4, while the other algorithms’ optimal solutions remained on the order of 10^3–10^4. In conclusion, the ICOA demonstrates strong global search capability and excellent convergence performance across the test functions, converging effectively with strong stability, which makes it a reliable optimization method.