Article

Optimizing Deep Learning Models with Improved BWO for TEC Prediction

1 Institute of Intelligent Emergency Information Processing, Institute of Disaster Prevention, Langfang 065201, China
2 Institute of Mineral Resources Research, China Metallurgical Geology Bureau, Beijing 101300, China
3 College of Computer Science and Technology, Jilin University, Changchun 130012, China
* Author to whom correspondence should be addressed.
Biomimetics 2024, 9(9), 575; https://doi.org/10.3390/biomimetics9090575
Submission received: 6 September 2024 / Revised: 18 September 2024 / Accepted: 19 September 2024 / Published: 22 September 2024

Abstract

The prediction of the total electron content (TEC) of the ionosphere is of great significance for space weather monitoring and wireless communication. Recently, deep learning models have become increasingly popular in TEC prediction. However, these deep learning models usually contain a large number of hyperparameters, and finding the optimal hyperparameters (a task known as hyperparameter optimization) remains a great challenge that directly affects the predictive performance of the models. The Beluga Whale Optimization (BWO) algorithm is a swarm intelligence optimization algorithm that can be used to optimize the hyperparameters of deep learning models, but it tends to fall into local minima. This paper analyzes the drawbacks of BWO and proposes an improved BWO algorithm named FAMBWO (Firefly Assisted Multi-strategy Beluga Whale Optimization). Our proposed FAMBWO was compared with 11 state-of-the-art swarm intelligence optimization algorithms on 30 benchmark functions, and the results showed that the improved algorithm converges faster and finds better solutions on almost all benchmark functions. We then propose an automated machine learning framework, FAMBWO-MA-BiLSTM, for TEC prediction, in which MA-BiLSTM performs the TEC prediction and FAMBWO optimizes its hyperparameters. We compared it with grid search, random search, the Bayesian optimization algorithm, and the beluga whale optimization algorithm. The results show that the MA-BiLSTM model optimized by FAMBWO is significantly better than the MA-BiLSTM models optimized by grid search, random search, Bayesian optimization, and BWO.

1. Introduction

The prediction of total electron content (TEC) in the ionosphere is of great significance for positioning and navigation, space weather monitoring, and wireless communication [1,2,3]. However, many factors affect the ionospheric TEC, such as local time, latitude, longitude, season, solar cycle, solar activity, and geomagnetic activity, which makes it very difficult to establish physical prediction models for ionospheric TEC [4]. Since the establishment of the International GNSS Service (IGS) in 1998, many analysis centers, such as the Center for Orbit Determination in Europe (CODE), the European Space Agency (ESA), the Jet Propulsion Laboratory (JPL) of the United States, and the Universitat Politècnica de Catalunya (UPC), have been providing users with Global Ionospheric Maps (GIMs), which provide rich data support for ionospheric TEC prediction using deep learning models. In recent years, research has shown that deep learning models outperform empirical and statistical models in ionospheric TEC prediction [5,6], and they have become the mainstream ionospheric TEC prediction technology [7,8]. These deep learning models often contain a large number of hyperparameters; Nils' research has shown that deep learning models can have more than 12 common hyperparameters [9], including the learning rate, batch size, number of hidden layer nodes, convolutional kernel size, etc. These hyperparameters govern the training of deep learning models and cannot be estimated by the models themselves [10,11,12,13]. Finding the optimal hyperparameter combination for a deep learning model, also known as hyperparameter optimization, directly affects the performance of the model. Researchers have shown that finding the best hyperparameters is the main challenge in training deep learning models and is even more important than selecting the model itself [11,14].
When optimizing hyperparameters, the first step is to define a search space that includes the hyperparameters to be optimized and the search range corresponding to them. Then, a heuristic algorithm needs to be defined to search for the best solution in the search space. Due to the wide range for each hyperparameter, there are a large number of hyperparameter combinations in the search space. Hyperparameter optimization requires evaluating all possible hyperparameter combinations to find the optimal one. Therefore, the cost of hyperparameter optimization is very expensive [11,15].
Recently, swarm intelligence optimization algorithms have been proven to be effective for automatic hyperparameter optimization [16]. The inspiration for swarm intelligence optimization algorithms comes from the collective behavior of various animals or humans. They are simple and flexible, so many researchers use them to quickly and accurately find the global optimal solution in complex optimization problems [17,18,19]. At present, common swarm intelligence optimization algorithms include Particle Swarm Optimization (PSO) [20], Moth Flame Optimization (MFO) [21], the Sine Cosine Algorithm (SCA) [22], the Salp Swarm Algorithm (SSA) [23], the Whale Optimization Algorithm (WOA) [24], the Seagull Optimization Algorithm (SOA) [25], the Grey Wolf Optimization (GWO) algorithm [26], the Dung Beetle Optimization (DBO) algorithm [27], and the War Strategy Optimization (WSO) algorithm [28]. The Beluga Whale Optimization (BWO) algorithm is a swarm intelligence optimization algorithm proposed in recent years that simulates the collaborative behavior of beluga whale populations [29]. Its performance has been shown to be superior to PSO, GWO, HHO, MFO, WOA, SOA, SSA, etc. However, the original BWO algorithm has two drawbacks: (1) the initial population lacks diversity, which limits the algorithm's search ability; and (2) the exploration and exploitation phases are imbalanced, making it easy to fall into local optima during optimization. To solve these two problems, we improve the BWO algorithm and propose the FAMBWO algorithm. Finally, we propose a deep learning model for TEC prediction and apply the improved FAMBWO algorithm to optimize its hyperparameters. The contributions of this paper are as follows:
  • To improve the diversity of the initial population, we used Cat Chaotic Mapping (CCM) to initialize the initial population of BWO;
  • To solve the problem of local optima caused by the imbalance between the exploration phase and the exploitation phase in the original BWO algorithm, we added Cauchy mutation & Tent chaotic mapping (CMT) strategy in the exploitation phase of BWO algorithm to enhance the algorithm’s ability to jump out of local optima; we added the Firefly Algorithm (FA) strategy to the exploration phase to enhance the randomness and diversity of exploration, enhancing the exploration ability of the algorithm;
  • We proposed an automated machine learning framework FAMBWO-MA-BiLSTM for ionospheric TEC prediction and optimization. In our framework, we first proposed a deep learning model for TEC prediction, named Multi-head Attentional Bidirectional Long Short-Term Memory (MA-BiLSTM). Then we use FAMBWO to optimize four hyperparameters of MA-BiLSTM, including learning rate, dropout ratio, batch size, and the number of neurons in MA-BiLSTM’s BiLSTM layer.
The paper is structured as follows. Section 2 reviews the literature on TEC prediction and hyperparameter optimization. Section 3 introduces the original BWO algorithm. Section 4 introduces the three strategies used to improve BWO and the resulting FAMBWO algorithm. Section 5 presents the experimental results and analysis. Section 6 introduces the FAMBWO-MA-BiLSTM framework for ionospheric TEC prediction and optimization. Section 7 summarizes the paper.

2. Literature Review

At present, deep learning models are the most popular tools in TEC prediction. The hyperparameter optimization methods for TEC prediction models mainly include manual setting and grid search.
The manual setting method means that researchers set hyperparameters by hand based on their own experience. For example, Maria Kaselimi et al. proposed an LSTM model for TEC prediction; their model consisted of two bidirectional LSTM layers, the numbers of neurons in the LSTM layers were manually set to 60 and 72, and the learning rate and batch size were manually set to 0.0001 and 28 [30]. Xu Lin et al. used a spatiotemporal network, ST-LSTM, to predict global ionospheric TEC; the number and size of the convolutional kernels were set to 64 and 5, and the initial learning rate was set to 0.001 [31]. Xia, G. et al. [32] proposed an ionospheric TEC map prediction model named CAiTST, in which the batch size and learning rate were manually set to 32 and 0.001, respectively, and the number and size of the convolutional kernels to 40 and 5. In [33], Xia, G. et al. proposed the ED-ConvLSTM model to predict global TEC maps, where the convolutional kernel size was manually set to 5, the batch size to 32, and the learning rate to 0.001. Xin Gao et al. proposed a TEC map prediction model based on multi-channel ConvLSTM; in their work, the batch size was manually set to 15, the learning rate decayed dynamically, and the decay rate of the learning rate was also manually set [34]. Huang, Z. et al. [35] applied an ANN to predict the vertical TEC of a single station in China, with hyperparameters such as the learning rate and crossover probability manually set to 0.1 and 0.4, respectively. Liu, L. et al. proposed a ConvLSTM model for storm-time high-latitude ionospheric TEC map prediction, in which the learning rate and batch size were manually set to 0.00003 and 14, respectively [36]. In [37], Liu, L. et al. proposed a ConvLSTM model to predict global ionospheric TEC, in which the dropout ratio, learning rate, and batch size were manually set to 0.2, 0.00005, and 72, respectively. Manual setting of hyperparameters is easily influenced by personal subjective opinions, since it is based on the experience and intuition of researchers. In particular, many hyperparameters are continuous, so manually chosen values are almost never the optimal ones; that is, a model with manually set hyperparameters cannot achieve its optimal performance.
The grid search method is an automatic hyperparameter optimization algorithm. It first discretizes each hyperparameter to form a discretized hyperparameter space and then exhaustively evaluates all possible hyperparameter combinations in this space to find the optimal one. The grid search method removes the excessive reliance on researchers' experience and has been applied to optimize TEC prediction models. For example, Tang J. et al. [38] proposed the CNN-LSTM-Attention model to predict ionospheric TEC, and hyperparameters such as the batch size, number of epochs, number of filters, and kernel size in their model were determined by the grid search method. Lei, D. et al. [39] proposed Attentional-BiGRU to predict ionospheric TEC; in their work, the range of the batch size was set to {16, 32, 64, 128}, the range of the learning rate was discretized to {0.1, 0.05, 0.01, 0.005, 0.001}, and the grid search method was then used to find the optimal hyperparameter combination within the given ranges. Tang J. et al. [40] proposed the BiConvGRU model to predict TEC over China, where the number of BiConvGRU layers, the convolutional kernel size, and the learning rate were determined by the grid search method. Although the grid search method can search for hyperparameters automatically, it also has shortcomings. On the one hand, it is an exhaustive method, so when there are many hyperparameters to optimize, the computational cost is very high [41]. On the other hand, continuous hyperparameters must be discretized to form a discrete search space, which does not contain all possible values of the continuous hyperparameters. Therefore, the grid search method can only obtain suboptimal results, and it is almost impossible for it to find the truly optimal hyperparameters [42].
In other application fields of deep learning, random search algorithms, Bayesian optimization methods, and swarm intelligence optimization techniques have also been applied to hyperparameter optimization. The random search algorithm [43] randomly generates solutions and evaluates them to find the best one. Its time complexity is lower than that of grid search, but because of the random selection the results may be unstable and important hyperparameter values may be missed; furthermore, random search cannot learn from past iterations. Bayesian optimization [44] uses a probabilistic model to learn from previous attempts and guides the search toward the optimal hyperparameter combination in the search space. Compared with the memoryless grid search and random search methods, Bayesian optimization can find better parameters in fewer iterations, but the surrogate function used in the probabilistic model must be chosen based on experience. Recently, swarm intelligence has been widely used for hyperparameter optimization, replacing the outdated manual setting and grid search methods. For example, Maroufpoor, S. et al. [43] applied the Grey Wolf Optimization (GWO) algorithm to optimize the hyperparameters of an artificial neural network (ANN) for reference evapotranspiration estimation; compared with manual optimization, GWO improved the ANN's prediction accuracy by 2.75%. Ofori-Ntow Jnr et al. [44] proposed a short-term load forecasting method based on an ANN and used Particle Swarm Optimization (PSO) to optimize its hyperparameters; the results showed that PSO optimization improved the ANN's performance by 7.3666%. P. Singh et al. [45] proposed a multi-layer particle swarm optimization (MPSO) algorithm to optimize the hyperparameters of convolutional neural networks (CNNs); their research showed that the MPSO-optimized model improved accuracy by 31.07% and 8.65% on the CIFAR-10 and CIFAR-100 data sets, respectively, compared with the manually optimized model. Ling Chen, H. et al. proposed an improved PSO algorithm (TVPSO) to optimize SVM; compared with manual optimization, TVPSO improved the SVM by 1.1% and 2.4% on the Wisconsin and German data sets, respectively [46]. Shah, H. et al. used the ant colony optimization (ACO) algorithm to optimize a BP neural network and reduced the MSE of the BP network by 5.42% compared with the manual method [47]. Swarm intelligence has thus made progress in many machine learning and deep learning applications, but there are no reports yet of its application to TEC prediction. At present, hyperparameter optimization in TEC prediction still relies on the primitive grid search and manual tuning methods, which greatly limits TEC prediction performance.

3. Overview of Original BWO

The Beluga Whale Optimization (BWO) algorithm [29] is a swarm intelligence algorithm for solving optimization problems. It imitates beluga whale behaviors such as swimming, preying, and whale fall. BWO includes exploration and exploitation phases. Beluga whales are used as search agents, and each beluga whale is a candidate solution (here, a hyperparameter combination) that is updated during the optimization. The beluga whale with the best fitness value corresponds to the optimal hyperparameter combination. The implementation of the original BWO algorithm is as follows.

3.1. Initialization

Suppose the entire population has $n$ individual beluga whales (i.e., $n$ candidate solutions) and the problem to be optimized is $d$-dimensional (i.e., the number of hyperparameters to be optimized is $d$). First, the matrix of search agent positions is initialized randomly by Equation (1).

$$X = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} x_{1,1} & x_{1,2} & \cdots & x_{1,d} \\ x_{2,1} & x_{2,2} & \cdots & x_{2,d} \\ x_{3,1} & x_{3,2} & \cdots & x_{3,d} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n,1} & x_{n,2} & \cdots & x_{n,d} \end{bmatrix} \tag{1}$$

where $x_i = [x_{i,1}, x_{i,2}, \ldots, x_{i,d}]$ ($i = 1, 2, \ldots, n$) represents the position of the $i$-th individual beluga whale, i.e., the $i$-th candidate hyperparameter combination, and $x_{i,j}$ represents the $j$-th hyperparameter to be optimized for the $i$-th beluga whale.
During the optimization process, a fitness function $F(x_i)$ (corresponding to the objective function of the model to be optimized) is used to estimate the fitness value of beluga whale $i$, and the fitness values of all beluga whales are collected and stored in the fitness matrix $F_X$, given by Equation (2).

$$F_X = \begin{bmatrix} F(x_{1,1}, x_{1,2}, \ldots, x_{1,d}) \\ F(x_{2,1}, x_{2,2}, \ldots, x_{2,d}) \\ \vdots \\ F(x_{n,1}, x_{n,2}, \ldots, x_{n,d}) \end{bmatrix} \tag{2}$$

All fitness values are sorted, and the position of the beluga whale with the minimum fitness corresponds to the optimal hyperparameter combination.
To balance exploration and exploitation, the BWO algorithm adopts a balance factor $B_f$, calculated as Equation (3).

$$B_f = B_0 \left(1 - \frac{T}{2\,T_{max}}\right) \tag{3}$$

where $T$ is the current iteration, $T_{max}$ is the maximum number of iterations, and $B_0$ is a random parameter between 0 and 1. When $B_f > 0.5$, the optimization algorithm enters the exploration phase, and when $B_f < 0.5$, it enters the exploitation phase.
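As a concrete illustration, the sketch below computes the balance factor of Equation (3); the function name and the use of NumPy's random generator are illustrative assumptions, not part of the original implementation.

```python
# Minimal sketch of the balance factor in Equation (3); B0 is redrawn each
# iteration, and B_f > 0.5 selects exploration, otherwise exploitation.
import numpy as np

def balance_factor(T, T_max, rng=np.random.default_rng()):
    B0 = rng.random()                        # random value in (0, 1)
    return B0 * (1.0 - T / (2.0 * T_max))
```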

3.2. Exploration

The Beluga Whale Optimization algorithm provides two position update formulas during the exploration phase, as shown in Equation (4).

$$x_{i,j}^{T+1} = \begin{cases} x_{i,p_j}^{T} + \left(x_{r,p_1}^{T} - x_{i,p_j}^{T}\right)\left(1 + r_1\right)\sin\left(2\pi r_2\right), & j \ \text{even} \\ x_{i,p_j}^{T} + \left(x_{r,p_1}^{T} - x_{i,p_j}^{T}\right)\left(1 + r_1\right)\cos\left(2\pi r_2\right), & j \ \text{odd} \end{cases} \tag{4}$$

where $T$ is the current iteration and $x_{i,j}^{T+1}$ is the new position of the $i$-th beluga whale in the $j$-th dimension at iteration $T+1$. $p_j$ ($j = 1, 2, 3, \ldots, d$) is a random integer between 1 and $d$, and $x_{i,p_j}^{T}$ denotes the position of the $i$-th beluga whale in dimension $p_j$ at the $T$-th iteration. $r$ is a random integer between 1 and $n$, and $r_1$ and $r_2$ are random numbers between 0 and 1.

3.3. Exploitation

In the exploitation phase, the Levy flight strategy is added to the BWO algorithm to accelerate its convergence speed and enhance its local search ability. The positions for beluga whales during the exploitation phase are updated as Equation (5).
$$x_i^{T+1} = r_3 x_{best}^{T} - r_4 x_i^{T} + C_1 \cdot LF \cdot \left(x_r^{T} - x_i^{T}\right) \tag{5}$$

where $x_i^{T+1}$ and $x_i^{T}$ represent the positions of the $i$-th beluga whale at iterations $T+1$ and $T$, respectively, $x_{best}^{T}$ is the best position at the $T$-th iteration, and $x_r^{T}$ is the position of a randomly selected beluga whale at the $T$-th iteration. $C_1 = 2 r_4 \left(1 - T/T_{max}\right)$ is the weight of the Levy flight, and $r_3$ and $r_4$ are random numbers in (0, 1). $LF$ is the Levy flight function, calculated by Equations (6) and (7).

$$LF = 0.05 \times \frac{u \times \sigma}{|v|^{1/\beta}} \tag{6}$$

$$\sigma = \left(\frac{\Gamma(1 + \beta) \times \sin(\pi\beta/2)}{\Gamma\left((1 + \beta)/2\right) \times \beta \times 2^{(\beta - 1)/2}}\right)^{1/\beta} \tag{7}$$

where $u$ and $v$ are random numbers following a normal distribution, and $\beta$ is a constant, set to 1.5 in the original BWO.
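A minimal sketch of this Levy flight term is given below, assuming NumPy and the standard library's Gamma function; the function name is illustrative and not from the original code.

```python
# Minimal sketch of the Levy flight step (Equations (6) and (7)), with beta = 1.5.
import math
import numpy as np

def levy_flight(d, beta=1.5, rng=np.random.default_rng()):
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)  # Eq. (7)
    u = rng.normal(0.0, 1.0, d)          # normally distributed numerator samples
    v = rng.normal(0.0, 1.0, d)          # normally distributed denominator samples
    return 0.05 * u * sigma / np.abs(v) ** (1 / beta)                                     # Eq. (6)
```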

3.4. Whale Fall

The whale fall phase simulates the process of a dead beluga whale falling into the seabed. Introducing this phase can enhance the algorithm’s ability to jump out of local optima. During the whale fall phase, the formula for updating the position of the beluga whale is defined as follows:
$$x_i^{T+1} = r_5 x_i^{T} - r_6 x_r^{T} + r_7 x_{step} \tag{8}$$

$$x_{step} = (ub - lb)\exp\left(-C_2\, T / T_{max}\right) \tag{9}$$

where $r_5$, $r_6$, and $r_7$ are random numbers in (0, 1), $ub$ and $lb$ represent the upper and lower boundaries of the optimized parameters, and $x_{step}$ is the whale fall step. $C_2 = 2 W_f \times n$ is a parameter related to the whale fall step, and $W_f$ is the probability of a whale fall, calculated using Equation (10).

$$W_f = 0.1 - 0.05\, T / T_{max} \tag{10}$$
The pseudo code of the original BWO optimization algorithm is shown in Algorithm 1.
Algorithm 1. Pseudocode of the original BWO
Input:Parameters of BWO, such as T m a x , the number of beluga whales n , the number of hyperparameters to be optimized d, and the upper and lower boundary of the parameters to be optimized, represented as u b and l b .
Output:The best solution P *.
1:Randomly initialize the population, calculate fitness values, and then find the current best solution.
2:while  T T m a x  do
3:  Calculate the current probability of whale fall W f through Equation (10), and the current balance factor B f through Equation (3).
4:for each candidate solution do
5:  if  B f i > 0.5 then
6:   // In the exploration phase of BWO
7:    Generate p j ( j = 1,2 , , d ) randomly
8:     Select a beluga whale x r randomly
9:     Update the position of the i-th beluga whale according to Equation (4)
10:    else if  B f i ≤ 0.5
11:     //In the Exploitation of BWO
12:     Calculate the weight of Levy flight C 1 ,then calculate the Levy
     flight function by Equations (6) and (7)
13:     Update the position of the i-th beluga whale according to Equation (5)
14:     end if
15:     Check the boundaries of new positions and evaluate the fitness
16:  end for
17:  for each candidate solution ( x i ) do
18:     // In whale fall of BWO
19:    if B f i ≤ W f
20:     Update the step factor C 2
21:     Calculate the whale fall step x s t e p by Equation (9)
22:     Update the position of the i-th beluga whale according to
     Equation (8)
23:     Calculate fitness based on the updated position of the beluga whale.
24:     end if
25:   end for
26:  Find the current best solution P *
27:  T = T + 1
28:end while
29:Output the best solution

4. Our Improved BWO

Although the BWO algorithm has achieved some results in machine learning and deep learning hyperparameter optimization, the original BWO still has shortcomings, such as insufficient initial population diversity and an imbalance between the exploration and exploitation phases, which make it easy for the algorithm to fall into local optima during hyperparameter optimization [48]. To solve these problems, this paper makes three improvements to the BWO algorithm:
  • Add cat chaotic mapping strategy (CCM) in the population initialization phase to increase population diversity;
  • Add firefly algorithm (FA) strategy in the exploration phase to help it find the global optimal solution more easily;
  • Add a CMT strategy (Cauchy mutation and Tent chaotic mapping) in the exploitation phase to enhance the algorithm’s ability to optimize nonlinear functions and jump out of local optima. We name the improved algorithm FAMBWO.
Next, we will elaborate on the principles of the strategies used in this paper.

4.1. Cat Chaotic Mapping Strategy (CCM)

Cat chaotic mapping has good chaotic characteristics [49]. To solve the problem of insufficient diversity in the original BWO, this paper applies the cat chaotic mapping strategy in place of random initialization during the population initialization phase. The steps for applying the cat chaotic mapping strategy to the initial population are as follows:
  • Firstly, randomly generate two d-dimensional vectors, $x_1 = [x_{1,1}, x_{1,2}, \ldots, x_{1,d}]$ and $y_1 = [y_{1,1}, y_{1,2}, \ldots, y_{1,d}]$, with each element between 0 and 1;
  • Calculate n chaotic variables through cat mapping in Equation (11);
$$\begin{bmatrix} x_{i+1} \\ y_{i+1} \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ 1 & 2 \end{bmatrix} \begin{bmatrix} x_i \\ y_i \end{bmatrix} \bmod 1, \quad i = 1, 2, \ldots, n \tag{11}$$

where $x_i \bmod 1 = x_i - \lfloor x_i \rfloor$ and $x_i = [x_{i,1}, x_{i,2}, \ldots, x_{i,d}]$ ($i = 1, 2, \ldots, n$).
  • Map the chaotic variables to the range of the parameters to be optimized using Equation (12).

$$x_i = lb + (ub - lb)\, x_i, \quad i = 1, 2, \ldots, n \tag{12}$$

where $ub$ and $lb$ represent the upper and lower boundaries of the parameters to be optimized.
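As a concrete illustration, the following sketch initializes a population with the cat chaotic map of Equations (11) and (12); NumPy and the function name `ccm_init` are assumptions made for illustration only.

```python
# Minimal sketch of cat-chaotic-map population initialization (Eqs. (11)-(12)).
import numpy as np

def ccm_init(n, d, lb, ub, rng=np.random.default_rng()):
    x = rng.random(d)                                  # random d-dimensional seed vectors
    y = rng.random(d)
    pop = np.empty((n, d))
    for i in range(n):
        x, y = (x + y) % 1.0, (x + 2.0 * y) % 1.0      # cat map, Equation (11)
        pop[i] = lb + (ub - lb) * x                    # map to search bounds, Equation (12)
    return pop
```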

4.2. Firefly Algorithm Strategy (FA)

The BWO optimization algorithm adopts two fixed position update formulas during the exploration phase, which limits its exploration performance. To solve this problem, we add an additional firefly algorithm step after the beluga whale's position update; this perturbation increases the diversity of position updates and improves the exploration ability of the algorithm. In [50], it was pointed out that FA can enhance the ability of optimization algorithms to find global optima by simulating the behavior of fireflies, which emit light to attract peers and transmit information. In FA, the spatial distance between two fireflies is calculated first, then the attraction between them is calculated based on this distance, and finally the firefly positions are updated according to the attraction. The spatial distance $r_{ir}$ between two fireflies $x_i$ and $x_r$ is calculated as in Equation (13).
$$r_{ir} = \|x_i - x_r\| = \sqrt{\sum_{k=1}^{d} \left(x_{i,k} - x_{r,k}\right)^2} \tag{13}$$
The attraction $\beta(r_{ir})$ between $x_i$ and $x_r$ ($i, r = 1, 2, \ldots, n$) is calculated as in Equation (14).

$$\beta(r_{ir}) = \beta_0 e^{-\gamma r_{ir}^2} \tag{14}$$

where $\beta_0$ is the attraction between two fireflies at a distance of 0.
When firefly $x_i$ is attracted to firefly $x_r$, the position of $x_i$ is updated according to Equation (15).

$$x_i = x_i + \beta(r_{ir})\left(x_r - x_i\right) + \alpha\left(r_8 - 0.5\right) \tag{15}$$

where $r_8$ is a random number within [0, 1] and $\alpha$ is a step factor between [0, 1].
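The following sketch applies this firefly perturbation to one beluga whale after its exploration update; NumPy is assumed, the function name is illustrative, and the light-absorption coefficient γ = 1 is an illustrative choice not specified in the text.

```python
# Minimal sketch of the firefly-attraction perturbation (Equations (13)-(15)).
import numpy as np

def firefly_move(x_i, x_r, beta0=2.0, gamma=1.0, alpha=0.2, rng=np.random.default_rng()):
    r_ir = np.linalg.norm(x_i - x_r)                       # spatial distance, Equation (13)
    beta = beta0 * np.exp(-gamma * r_ir ** 2)              # attraction, Equation (14)
    r8 = rng.random(x_i.shape)                             # random perturbation term
    return x_i + beta * (x_r - x_i) + alpha * (r8 - 0.5)   # position update, Equation (15)
```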

4.3. CMT Strategy

In order to improve the exploitation ability of BWO, we add a CMT strategy (Cauchy Mutation and Tent chaotic) in the exploitation phase. The CMT strategy is a combination of Cauchy mutation strategy and Tent chaotic mapping strategy.

4.3.1. Cauchy Mutation Strategy

The Cauchy distribution has a long tail. Adding variables that follow the Cauchy distribution to position updates is called Cauchy mutation; it can generate large jumps in the search space, helping the algorithm escape local minima and search globally. The inverse cumulative distribution function (ICDF) is used to generate random variables that follow the Cauchy distribution, and it is defined as Equation (16).
$$F^{-1}(p; x_0, \gamma) = x_0 + \gamma \tan\left(\pi\left(p - \tfrac{1}{2}\right)\right) \tag{16}$$
Inspired by the ICDF, we propose a position update formula for beluga whales based on Cauchy mutation, given by Equation (17).

$$x_i = x_i + x_i \cdot \Lambda \cdot \tan\left(\pi\left(r_9 - \tfrac{1}{2}\right)\right) \tag{17}$$

where $\Lambda$ is a spiral factor that adjusts the magnitude of the mutation and $r_9$ is a random number between [0, 1].

4.3.2. Tent Chaotic Mapping Strategy

The Tent chaotic mapping has traversal uniformity and a fast search speed. Using Tent chaotic mapping for optimization can improve the algorithm's ability to optimize nonlinear problems and improve its accuracy [51]. The Tent chaotic mapping is calculated as in Equation (18).

$$x_{n+1} = \begin{cases} 2 x_n, & 0 \le x_n \le 0.5 \\ 2\left(1 - x_n\right), & 0.5 < x_n \le 1 \end{cases} \tag{18}$$

After adding the Tent chaotic mapping, the position of the beluga whale is updated according to Equation (19).

$$x_i = x_i + x_i \cdot x_{n+1} \tag{19}$$

where $x_n$ is a random number between [0, 1].

4.3.3. CMT

The previous section described Cauchy mutation and tent chaotic mapping, and this section combines the two to propose a CMT strategy.
Let $F(x_i)$ be the fitness corresponding to the position of the $i$-th beluga whale and $F_{mean}$ be the average fitness of the population. In the CMT strategy, when $F(x_i) \le F_{mean}$, the positions of beluga whales are updated by the Cauchy mutation strategy in Equation (17) to enhance the algorithm's ability to jump out of local optima; otherwise, the positions are updated by the Tent chaotic mapping strategy in Equation (19) to increase the algorithm's ability to optimize nonlinear functions.
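Putting the two pieces together, a sketch of this CMT branch might look as follows; NumPy is assumed and the names are illustrative rather than the authors' implementation.

```python
# Minimal sketch of the CMT strategy: Cauchy mutation (Eq. (17)) when fitness is
# no worse than the population mean, Tent-chaos update (Eqs. (18)-(19)) otherwise.
import numpy as np

def cmt_update(x_i, fit_i, fit_mean, spiral=1.0, rng=np.random.default_rng()):
    if fit_i <= fit_mean:
        r9 = rng.random(x_i.shape)
        return x_i + x_i * spiral * np.tan(np.pi * (r9 - 0.5))   # Cauchy mutation, Eq. (17)
    x_n = rng.random()                                           # Tent chaotic seed
    x_next = 2 * x_n if x_n <= 0.5 else 2 * (1 - x_n)            # Equation (18)
    return x_i + x_i * x_next                                    # Equation (19)
```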

4.4. The Details of Our Proposed FAMBWO

The three strategies used in this paper were presented earlier. In this section, we will combine these three strategies with BWO and describe in detail our proposed FAMBWO algorithm.
Our FAMBWO consists of three phases: initialization phase, exploration phase, and exploitation phase. The pseudo code of the FAMBWO algorithm is shown in Algorithm 2, and the flowchart is shown in Figure 1, where the green parts are our improvements.
Initialization phase: In our FAMBWO algorithm, the CCM strategy introduced in Section 4.1 is used to initialize the population, increasing its diversity and improving search efficiency.
Exploration phase: update the population with the FA strategy introduced in Section 4.2, improving the algorithm’s exploration ability and helping the algorithm find the global optimal solution more easily.
Exploitation phase: Update the population with the CMT strategy introduced in Section 4.3, improving the algorithm’s ability to optimize nonlinear functions and jump out of local optima.
We balanced the exploitation and exploration capabilities of the algorithm by combining FA and CMT strategies.
The pseudocode for FAMBWO is as Algorithm 2.
Algorithm 2. Pseudocode for FAMBWO
Input:The initial parameters of FAMBWO, including T m a x , n , d , u b and l b .
Output:The best solution P *.
1:Initialize the population through Equations (11) and (12).
2:Calculate the fitness value and then find the location of the current best solution.
3:while  T T m a x  do
4:Calculate the current probability of whale fall W f by Equation (10) and the current balance factor B f by Equation (3).
5:Initialize parameters α , β 0   and   r 8 in the firefly algorithm
6:  for each beluga whale x i do
7:   if  B f i > 0.5 then
8:      // In exploration phase of FAMBWO
9:      Randomly generate p j (j = 1, 2, …, d)
10:      Randomly choose a beluga whale x r
11:      Update the position of i-th beluga whale by Equation (4)
12:      Calculate the spatial distance r i r between x i and x r by Equation (13) and the attraction β ( r i r ) between x i and x r by Equation (14)
13:      Update the position of i-th beluga whale by Equation (15)
14:   else if  B f i ≤ 0.5
15:      //In exploitation of FAMBWO
16:      Calculate the random jump intensity factor C 1 , and calculate
  Levy flight function by Equation (6)
17:      Update the position of i-th beluga whale by Equation (5)
18:       if  F x i ≤ F m e a n   then
19:        Randomly generate r 9
20:         Update the position of i-th beluga whale by Equation (17)
     // cauchy mutation
21:      else if  F x i > F m e a n
22:         Calculate x n + 1 by Equation (18)
23:         Update the position of i-th beluga whale by Equation (19)
24:       end if
25:    end if
26:   Check the updated position of the beluga whale and calculate its fitness.
27:  end for
28:  for each candidate solution do
29:   if  B f i > 0.5 then
30:      // In whale fall of FAMBWO
31:        Update the step factor C 2
32:        Calculate the whale fall step x s t e p by Equation (9)
33:        Update the position of i-th beluga whale by Equation (8)
34:        Check the updated position of the beluga whale and calculate
        its fitness.
35:       end if
36:  end for
37:  Find the best candidate solution for the current iteration P *
38:  T = T + 1
39:end while
40:Output the best solution

4.5. Computational Complexity

The time complexity of FAMBWO mainly comprises population initialization, fitness evaluation, and population update. The main parameters that affect the time complexity are the maximum number of iterations $T_{max}$, the problem dimension $d$, and the population size $n$. The time complexities of population initialization, fitness evaluation, and population update are $O(n \times d)$, $O(T_{max} \times n)$, and $O(T_{max} \times d \times n)$, respectively. Therefore, $O(\text{FAMBWO}) = O(\text{initialization}) + O(\text{fitness evaluation}) + O(\text{population update}) \approx O(n \times d) + O(T_{max} \times n) + O(T_{max} \times d \times n) = O(T_{max} \times d \times n)$.

5. Experimental Results and Discussion

The performance of our proposed FAMBWO is evaluated on 30 well-known benchmark problems, and the results are compared with other 11 metaheuristic algorithms. In this section, we first introduced the benchmark problems and experimental setup, followed by discussing the influence of the 3 strategies, and then compared the exploitation ability, exploration ability, and local optimal avoidance ability of our algorithm with the other 11 mainstream metaheuristic optimization algorithms. In addition, scalability analysis was conducted on 12 algorithms to compare their ability to handle high-dimensional optimization problems.

5.1. Benchmark Problems and Experimental Setup

To evaluate the performance of the proposed FAMBWO, 30 different benchmark problems were chosen for the comparative experiments, including 9 unimodal functions (F1–F9, as shown in Table A1 of Appendix A) for testing exploitation ability, 15 multimodal functions (F10–F24, as shown in Table 2) for testing exploration ability, and 6 composition functions (F25–F30, as shown in Table 3) for evaluating local optimum avoidance ability. In Table 1, Table 2, and Table 3, Range represents the bounds of the design variables, and $f_{min}$ is the optimal value.
The comparison algorithms are 11 mainstream metaheuristic optimization algorithms, including PSO [20], MFO [21], DE, SCA [22], SSA [23], WOA [24], SOA [25], GWO [26], DBO [27], WSO [28], and BWO [29]. The parameters of the comparison algorithms are shown in Table 4.
During the experiment, the population size of each algorithm was 50 and the maximum number of iterations was 200. To eliminate the influence of random factors, each algorithm was independently executed 30 times on each benchmark function. The Friedman test method was used to rank the fitness of all the algorithms on the benchmark functions to evaluate their performance [52].
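For reference, a small sketch of this Friedman-style ranking procedure is shown below, assuming SciPy is available and that `results[alg][fun]` holds the mean best fitness of each algorithm on each benchmark function (lower is better); all names are illustrative.

```python
# Minimal sketch of ranking the algorithms by average Friedman rank.
import numpy as np
from scipy import stats

def friedman_ranking(results):
    algs = list(results)
    funs = list(next(iter(results.values())))
    # Rank the algorithms on every benchmark function (rank 1 = lowest mean fitness).
    ranks = np.array([stats.rankdata([results[a][f] for a in algs]) for f in funs])
    avg_rank = ranks.mean(axis=0)                               # the "Avg" column
    chi2, p = stats.friedmanchisquare(*[[results[a][f] for f in funs] for a in algs])
    order = [algs[i] for i in np.argsort(avg_rank)]             # the final "Rank" order
    return dict(zip(algs, avg_rank)), order, p
```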
All algorithms were written in Python 3.7 and tested on a computer equipped with an Intel (R) Xeon (R) CPU E5-2686 v4 12 core processor and an NVIDIA GeForce RTX 3060 Ti graphics card with 8 GB VRAM.
Table 4. Algorithmic parameters for metaheuristics [53,54,55,56].

Algorithm | Parameters | Values
All algorithms | Population size, maximum iterative number, replication times | 50, 200, 30
PSO | Cognitive and social constants; inertia weight linearly decreased at interval | c1 = 2, c2 = 2; [0.9, 0.2]
MFO | Convergence constant; spiral factor | a = [−2, −1]; b = 1
DE | Scaling factor; crossover probability | 0.5; 0.5
SCA | Spiral factor | [0, 2]
SSA | Leader position update probability | 0.5
WOA | Probability of encircling mechanism; spiral factor | 0.5; 1
SOA | Control parameter f_c | [2, 0]; 2
GWO | Convergence parameter a decreased at interval | [2, 0]
DBO | Special parameters | K = 0.1, b = 0.3, S = 0.5
WSO | Convergence constant | 0.8
BWO | Probability of whale fall W_f decreased at interval | [0.1, 0.05]
FAMBWO | β0; step factor α; probability of whale fall W_f decreased at interval; spiral factor Λ | β0 = 2; α = 0.2; W_f = [0.1, 0.05]; Λ = [0, 2]

5.2. Influence of the Three Strategies

In this section, the CCM, FA, and CMT strategies are combined with BWO in different ways to analyze their impact on BWO performance. The details of these BWO variants are shown in Table 5, where '1' indicates that the strategy is added to BWO and '0' indicates that it is not.
Table A1 in Appendix A shows the results of these various BWOs on 30 benchmark problems, with Aver being the average and Std being the standard deviation.
According to the Aver values of each algorithm in Table A1, we performed a Friedman test and obtained the ranking of each algorithm shown in Table 6, with Rank indicating the algorithm's rank, Avg the average rank, and '+/−/=' the number of benchmark problems on which FAMBWO performs better than, worse than, or equal to the other algorithms, respectively.
From Table 6, the performance of the eight BWO variants, from best to worst, is FAMBWO > CMT_FA_BWO > CCM_CMT_BWO > CCM_FA_BWO > FA_BWO > CMT_BWO > BWO > CCM_BWO. The FAMBWO algorithm, which includes the CCM, FA, and CMT strategies, ranks first, indicating that adding these three strategies simultaneously to BWO significantly improves its performance.

5.3. Comparison with State-of-the-Art SI Algorithms

To evaluate the performance of our proposed FAMBWO, we compared it with 11 other mainstream swarm intelligence optimization algorithms on 30 benchmark problems. The comparison algorithms include PSO [20], MFO [21], DE, SCA [22], SSA [23], WOA [24], SOA [25], GWO [26], DBO [27], WSO [28], and BWO [29]. The comparative experiments are divided into five parts: Firstly, we analyzed the convergence behavior of FAMBWO; secondly, the exploitation abilities of all algorithms were compared on the unimodal function (F1–F9); thirdly, the exploration abilities of various algorithms were tested on multimodal functions (F10–F24); fourthly, the ability of local optimum avoidance was evaluated on composition functions (F25–F30); finally, scalability analysis was conducted on composition functions (F25–F30) of 100 dimensions to compare their ability to handle high-dimensional optimization problems. Below is a detailed discussion of these comparative experiments.

5.3.1. Convergence Behavior Analysis

To validate whether FAMBWO converges, we tested its convergence behaviors on 10 benchmark functions, including 4 unimodal functions (F1, F2, F4, F5) and 6 multimodal functions (F10, F11, F13, F14, F16, F19). Results of convergence behaviors are presented in Figure 2, including: (1) landscape of benchmark functions; (2) the search history of search agents; (3) the average fitness of search agents; and (4) the trajectory of the first dimension.
The benchmark functions in Figure 2 were used as search spaces for FAMBWO. The global minimum value on each benchmark function is the ultimate best solution of FAMBWO.
The search history in Figure 2 showed the distribution of search agents’ positions in the process of finding the global optimal solution, with the red dot denoting the globally optimal solution and the black ones indicating the search agents’ positions. From the search history, it is clearly seen that on unimodal functions (F1, F2, F4, F5), the search trajectory clustered near the global best solution. This indicates that FAMBWO can achieve fast convergence. On 4 multimodal functions (F11, F13, F14, F16), the search history of FAMBWO shows a nearly linear pattern, indicating FAMBWO can avoid local optima and ensure the global solution. On F10 and F19, the search trajectory is concentrated near the optimum solution and distributed throughout the search space, indicating that FAMBWO can effectively explore the search space.
The third column in Figure 2 shows the change in the average fitness of the search agent. It can be seen that average fitness rapidly decreases during the initial stage of the iteration, indicating that FAMBWO can converge quickly.
The fourth column in Figure 2 shows the trajectory of the first search agent in the first dimension. It represents the primary exploratory behavior of FAMBWO. The results show that it fluctuates sharply in the early stages of the iteration and gradually stabilizes in the later stages, ensuring that FAMBWO can eventually converge.

5.3.2. Exploitation Ability Analysis

To verify the exploitation capability of our FAMBWO, we compared it with 11 metaheuristic algorithms on 9 unimodal functions (F1–F9). We select 6 unimodal test functions to present the convergence curves of 12 optimization algorithms, as shown in Figure 3. It can be seen that, compared with the other 11 algorithms, FAMBWO’s fitness changes rapidly and converges earliest in the initial stage of iteration, indicating that FAMBWO requires the least number of iterations to find the optimal solution and has the fastest convergence speed. It can also be seen that the position of the optimal solution of FAMBWO is the lowest, indicating that the FAMBWO algorithm has the highest accuracy.
The quantitative statistical results of the optimal fitness of 12 algorithms on the unimodal benchmark function are shown in Table A2 of Appendix A, with Aver and STD being the mean and standard deviation of fitness. FAMBWO ranks first in terms of Aver and STD on all unimodal functions except F7 and is significantly superior to other comparison algorithms. On function F7, FAMBWO ranks second after WSO.
According to the Aver values in Table A2, we performed the Friedman test on the 12 algorithms, and the results are shown in Table 7, where Avg represents the average ranking of the algorithm in the test and Rank represents the final ranking. The smaller the Rank and Avg values, the better the performance of the algorithm. We can see from Table 7 that FAMBWO ranks first among the 12 algorithms.
Table 7. Friedman test results on unimodal functions (F1–F9, dim = 30).

Algorithm | Rank | Avg
FAMBWO | 1 | 1.1111
PSO | 9 | 8.4444
MFO | 11 | 10.5556
DE | 12 | 11.7778
SCA | 8 | 8.1111
SSA | 10 | 9.7778
WOA | 6 | 5.7778
SOA | 3 | 4.3333
GWO | 5 | 4.7778
DBO | 7 | 6.4444
WSO | 4 | 4.3333
BWO | 2 | 2.5556
Table A3 in Appendix A presents the Wilcoxon signed rank test results comparing the FAMBWO algorithm with the other algorithms. A p-value < 0.05 indicates that FAMBWO has a statistically significant advantage over the corresponding algorithm. From Table A3, it can be seen that on the 9 unimodal functions the vast majority of p-values are less than 0.05. Therefore, it can be concluded that FAMBWO is significantly superior to the other 11 algorithms, indicating that its exploitation ability is significantly better than those of the comparative algorithms.

5.3.3. Exploration Analysis

In the previous section, we evaluated the exploitation ability of the algorithms on unimodal functions. In this section, we compare their optimization capabilities on multimodal functions, which have many local optima and can therefore be used to evaluate exploration ability. We selected 15 multimodal functions (F10–F24) and conducted 30 independent experiments on each one. Table A4 in Appendix A shows the average optimal fitness values of the 12 algorithms on these multimodal functions. We present convergence curves on 12 multimodal functions to demonstrate the exploration ability, as shown in Figure 4. It can be seen that, compared with the other 11 optimization algorithms, FAMBWO has the fastest convergence speed and the lowest final value of the optimal solution. This indicates that FAMBWO explores the optimal solution fastest, and the solution found by FAMBWO is closest to the global optimum.
From Table A4 in Appendix A, we can see that FAMBWO ranks first on ten functions (F10–F11, F13–F19, and F21), second on one function (F20), third on three functions (F12, F22, and F23), and fourth on F24.
Table 8 shows the Friedman test results of Aver, indicating that FAMBWO ranks first among the 12 algorithms.
In addition, we conducted a Wilcoxon signed rank test based on the average fitness. Table A5 in Appendix A presents the Wilcoxon signed rank test results of the FAMBWO and other algorithms. Among them, the vast majority of p-values are less than 0.05. Therefore, it can be concluded that FAMBWO is significantly better than the other 11 comparison models, meaning that FAMBWO’s exploration ability is better than the comparison algorithms.

5.3.4. Local Optimal Avoidance Ability

Composition functions combine the characteristics of multiple basic functions, making them more complex compared to unimodal or multimodal functions. They are typically used to test algorithms’ ability to jump out of local optima. We conducted comparative experiments on 6 composition functions (F25–F30) and tested the local optimal avoidance ability of 12 algorithms.
Figure 5 shows the convergence curves of 12 optimization algorithms on 6 combination functions. Among them, we can see that the value of our FAMBWO’s optimal solution is smaller than those of other comparative models. This indicates that our algorithm outperforms the comparative algorithms in optimizing problems with multiple local optima. That is to say, our FAMBWO’s ability to avoid local minima exceeds that of the comparative algorithms.
The quantitative statistical results of the optimal fitness of 12 algorithms on the composition functions are shown in Table A6 of Appendix A. It can be seen that FAMBWO ranks first on all 6 composition functions.
We carried out the Friedman test according to the average fitness (Aver) in Table A6. The Friedman test results are shown in Table 9, from which it is easy to see that FAMBWO ranks first among the 12 algorithms.
We also conducted the Wilcoxon signed rank test according to the average fitness. Table A7 in Appendix A presents the Wilcoxon signed rank test results of FAMBWO and the other algorithms. On the 6 composition functions, the vast majority of p-values are less than 0.05. Therefore, it can be concluded that FAMBWO's ability to jump out of local optima is significantly better than that of the comparison algorithms.

5.3.5. Scalability Analysis

The benchmark functions used in the previous experiments were all 30 dimensions. To test the ability of FAMBWO to solve high-dimensional optimization problems, we conducted scalability analysis on 6 composite functions (F25–F30) in 100 dimensions. During the experiment, the population size of each algorithm was 50, and the maximum number of iterations was 1000. Meanwhile, to eliminate the influence of random factors, each algorithm was independently executed 30 times on each benchmark function. The experimental results are shown in Figure 6:
From Figure 6, the convergence speed and optimal solution of FAMBWO are significantly better than those of the comparison models on F26–F30, slightly inferior to MFO on F25. This indicates that our FAMBWO outperforms the comparative algorithms in solving high-dimensional optimization problems.
Table A8 in Appendix A shows the statistical results (mean and standard deviation) of 12 algorithms on 100-dimensional combination functions F25–F30.
From Table A8, FAMBWO ranks first in the five combination functions (F26–F30) of 100 dimensions, only slightly inferior to the MFO on F25. The Friedman test results for the average fitness (Aver) on F25–F30 are shown in Table 10, from where we can see that FAMBWO ranks first in the 100-dimensional composition functions (F25–F30).
Table A9 in Appendix A shows the p-values of FAMBWO compared to 11 other algorithms on F25–F30. Among them, most p-values are less than 0.05, indicating that FAMBWO is significantly superior to the other 11 comparison algorithms in high-dimensional function optimization.

6. Optimizing the Ionospheric TEC Prediction Model Using FAMBWO

The previous experiments were conducted on benchmark functions. In this section, we apply the proposed FAMBWO to a practical application. We propose a framework for ionospheric TEC prediction named FAMBWO-MA-BiLSTM. In this framework, we first propose a deep learning model based on multi-head attention and BiLSTM for TEC prediction, named MA-BiLSTM, and then use FAMBWO to optimize the hyperparameters of MA-BiLSTM. We compared FAMBWO-MA-BiLSTM with GS-MA-BiLSTM (MA-BiLSTM optimized by the grid search method), RS-MA-BiLSTM (optimized by random search), BOA-MA-BiLSTM (optimized by the Bayesian optimization algorithm), and BWO-MA-BiLSTM (optimized by BWO). The following subsections describe the TEC data set and data preprocessing, the MA-BiLSTM model, the FAMBWO-MA-BiLSTM framework, and the comparative experimental results.

6.1. Data Set and Data Preprocessing

The TEC data used in this paper are provided by the Center for Orbit Determination in Europe (CODE), with a time resolution of 2 h. We selected TEC data from UT 0:00 on 1 January 1999 to UT 12:00 on 30 April 2015 at the position (25° N, 105° E) for the experiment. Our data include 77,467 TEC values. The raw TEC data are non-stationary and cannot be modeled directly, so we applied a first-order difference to make them stationary and then normalized them with the min-max method to eliminate the impact of data scale on prediction performance. The raw TEC and the processed TEC are shown in Figure 7.
In this paper, 24 h of continuous TEC data are used as input to predict the TEC 2 h ahead. Thus, the input of a sample contains 12 TEC values and the output contains 1 TEC value. We adopted a sliding window method to generate samples, with the window sliding two hours at a time. The sample production process is shown in Figure 8, with the purple data as input and the blue data as output. In total, we obtained 77,454 samples; the samples from the first 14 years (1 January 1999 to 1 January 2013) were used for training, and the remaining samples (1 January 2013 to 30 April 2015) were used for testing.
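As an illustration of this segmentation, the sketch below builds (input, output) pairs from the processed TEC series with a 12-step window and a one-step (2 h) stride; NumPy and the function name are assumptions for illustration.

```python
# Minimal sketch of the sliding-window sample construction.
import numpy as np

def make_samples(tec, window=12):
    X, y = [], []
    for start in range(len(tec) - window):               # slide by one step (two hours)
        X.append(tec[start:start + window])               # 12 consecutive TEC values (one day)
        y.append(tec[start + window])                      # the TEC value 2 h ahead
    return np.asarray(X)[..., np.newaxis], np.asarray(y)   # shapes (N, 12, 1) and (N,)
```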

6.2. MA-BiLSTM

In this section, we propose a TEC prediction model named Multi-Head Attentional Bidirectional Long Short-Term Memory (MA-BiLSTM), which includes five modules: the input module, the encoder module, the decoder module, the multi-head attention module, and the output module. Its structure is shown in Figure 9.
Input module: used to receive samples. The input shape is (12,1), indicating 12 TEC values within 1 day.
Encoder module: This module contains a BiLSTM layer with $m$ units ($m$ is a hyperparameter to be optimized) and a Dropout layer with ratio $r$ ($r$ is a hyperparameter to be optimized). The encoder module extracts bidirectional temporal features from the input $X_i$, and its output $e_i$ is the bidirectional temporal feature vector corresponding to $X_i$.
Decoder module: This module consists of $2m$ LSTM units, with output $d_i$, used to assist in calculating the weights of the temporal features.
Multi-Head Attention module: This module contains three independent attention heads, which produce three weighted temporal features ($t_{i1}$, $t_{i2}$, $t_{i3}$) that are then concatenated to form the final weighted feature vector $f_i$. Each attention head receives $e_i$ from the encoder module and $d_i$ from the decoder module and calculates their similarity score. The similarity score of the $j$-th attention head, $score_j(e_i, d_i)$, is calculated by Equation (20).

$$score_j(e_i, d_i) = V_j^{T} \tanh\left(W_j e_i + U_j d_i\right) \quad (j = 1, 2, 3) \tag{20}$$
where $U_j$, $V_j$, and $W_j$ ($j = 1, 2, 3$) are parameters learned during training. After the attention score is obtained, it is normalized with the softmax function to obtain the attention distribution, as in Equation (21):

$$a_{ij} = \mathrm{softmax}\left(score_j(e_i, d_i)\right) = \frac{\exp\left(score_j(e_i, d_i)\right)}{\sum_{j}\exp\left(score_j(e_i, d_i)\right)} \quad (j = 1, 2, 3) \tag{21}$$

where $a_{ij}$ is the attention distribution value of the $j$-th attention head.
Then $a_{ij}$ is multiplied by $e_i$ to obtain the weighted feature of the $j$-th attention head, $t_{ij}$, as shown in Equation (22):

$$t_{ij} = a_{ij} \times e_i \quad (j = 1, 2, 3) \tag{22}$$

Finally, the three weighted feature vectors from the three attention heads are concatenated as the final weighted feature $f_i$, as shown in Equation (23).

$$f_i = [t_{i1}, t_{i2}, t_{i3}] \tag{23}$$

where $[\cdot]$ represents vector concatenation.
Output layer: This layer includes a fully connected (Dense) layer, which maps the weighted temporal feature $f_i$ to the predicted value and outputs it.
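The sketch below shows one plausible Keras implementation of the MA-BiLSTM structure described above, assuming the encoder returns the full BiLSTM sequence so that each additive attention head (Equation (20)) weights the time steps; the layer sizes, the function name `build_ma_bilstm`, and other details are illustrative assumptions rather than the authors' exact code.

```python
# Minimal Keras sketch of MA-BiLSTM (encoder, decoder, 3 attention heads, output).
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_ma_bilstm(m=64, dropout_rate=0.2, n_heads=3, seq_len=12):
    inputs = layers.Input(shape=(seq_len, 1))           # 12 TEC values of one day
    # Encoder: BiLSTM with m units + dropout -> temporal features e (batch, 12, 2m)
    e = layers.Bidirectional(layers.LSTM(m, return_sequences=True))(inputs)
    e = layers.Dropout(dropout_rate)(e)
    # Decoder: LSTM with 2m units -> auxiliary vector d (batch, 2m), repeated over time
    d = layers.LSTM(2 * m)(inputs)
    d = layers.RepeatVector(seq_len)(d)
    heads = []
    for _ in range(n_heads):
        # Additive score per time step: V^T tanh(W e + U d), cf. Equation (20)
        score = layers.Dense(1)(layers.Activation("tanh")(
            layers.Add()([layers.Dense(2 * m)(e), layers.Dense(2 * m)(d)])))
        a = layers.Softmax(axis=1)(score)                # attention weights, Equation (21)
        t = layers.Lambda(lambda z: tf.reduce_sum(z[0] * z[1], axis=1))([a, e])
        heads.append(t)                                  # weighted feature t_j, Equation (22)
    f = layers.Concatenate()(heads)                      # f = [t_1, t_2, t_3], Equation (23)
    outputs = layers.Dense(1)(f)                         # predicted TEC value 2 h ahead
    return Model(inputs, outputs)

model = build_ma_bilstm()
model.compile(optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.01), loss="mse")
```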

6.3. FAMBWO-MA-BiLSTM Framework

When MA-BiLSTM is used for TEC prediction, four important hyperparameters affect its prediction performance: the number of BiLSTM units, the dropout ratio, the batch size, and the learning rate. We used FAMBWO to optimize these four hyperparameters. First, the upper and lower boundaries of the four hyperparameters are specified to form the search space, shown in Table 11. Second, FAMBWO is initialized with a maximum number of evaluations $T_{max}$ of 200, a dimension $d$ of 4, and a population size $n$ of 30.
Then, the loss of MA-BiLSTM is used as the fitness function (in this paper, the loss function is MSE). The solver of MA-BiLSTM is set to AdaGrad. Finally, the FAMBWO algorithm is used to search for the optimal hyperparameters of MA-BiLSTM.
We name the entire framework for TEC modeling and optimization as FAMBWO-MA-BiLSTM, in which MA-BiLSTM is for TEC prediction and FAMBWO is for hyperparameters optimization. Figure 10 shows the flowchart of the entire FAMBWO-MA-BiLSTM framework.
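As an illustration of how the framework ties together, the sketch below wraps MA-BiLSTM training into a fitness function that FAMBWO can minimize; the search bounds, the epoch count, and the names (`build_ma_bilstm`, `X_train`, etc.) are placeholders borrowed from the earlier sketches, not the values given in Table 11.

```python
# Minimal sketch of a FAMBWO fitness function built on MA-BiLSTM's validation MSE.
import numpy as np
import tensorflow as tf

# Illustrative bounds for [BiLSTM units, dropout ratio, batch size, learning rate];
# the actual search space is defined in Table 11.
lb = np.array([16,  0.0, 16,  1e-4])
ub = np.array([256, 0.5, 128, 1e-1])

def fitness(position, X_train, y_train, X_val, y_val):
    m, dropout, batch, lr = int(position[0]), float(position[1]), int(position[2]), float(position[3])
    model = build_ma_bilstm(m=m, dropout_rate=dropout)           # sketch from Section 6.2
    model.compile(optimizer=tf.keras.optimizers.Adagrad(learning_rate=lr), loss="mse")
    model.fit(X_train, y_train, epochs=5, batch_size=batch, verbose=0)
    return model.evaluate(X_val, y_val, verbose=0)               # MSE; lower fitness is better
```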

6.4. Performance Metrics

The following metrics are used to quantitatively evaluate the predictive performance.
$$MSE = \frac{1}{N}\sum_{i=1}^{N}\left(Y_i - \hat{Y}_i\right)^2$$

$$RMSE = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(Y_i - \hat{Y}_i\right)^2}$$

$$MAE = \frac{1}{N}\sum_{i=1}^{N}\left|Y_i - \hat{Y}_i\right|$$

$$R^2 = 1 - \frac{\sum_{i=1}^{N}\left(Y_i - \hat{Y}_i\right)^2}{\sum_{i=1}^{N}\left(Y_i - \bar{Y}\right)^2}$$

$$\bar{Y} = \frac{1}{N}\sum_{i=1}^{N} Y_i$$

where $N$ is the number of samples in the test set, $Y_i$ is the true value of sample $i$, $\hat{Y}_i$ is the predicted value of the $i$-th sample, $MSE$ is the mean square error, $RMSE$ is the root mean square error, $MAE$ is the mean absolute error, and $R^2$ is the coefficient of determination.
MSE, RMSE, and MAE reflect the errors between the true and predicted values, indicating how far the predictions deviate from the truth; the smaller the error, the better the prediction performance of the model. $R^2$ describes the agreement between the predicted and true values; the larger the $R^2$, the better the predictions fit the true values.
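A minimal NumPy implementation of these metrics might look as follows (the function name is illustrative).

```python
# Minimal NumPy implementation of MSE, RMSE, MAE, and R^2 as defined above.
import numpy as np

def evaluate(y_true, y_pred):
    err = y_true - y_pred
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    mae = np.mean(np.abs(err))
    r2 = 1.0 - np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    return {"MSE": mse, "RMSE": rmse, "MAE": mae, "R2": r2}
```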

6.5. Comparison Results on TEC Prediction

We compared the MA-BiLSTM models optimized by four baseline methods, namely the grid search method, the random search method, the Bayesian optimization algorithm, and the beluga whale optimization algorithm, with our FAMBWO-optimized model. The results are shown in Figure 11, where RS-MA-BiLSTM, GS-MA-BiLSTM, BOA-MA-BiLSTM, and BWO-MA-BiLSTM represent the MA-BiLSTM model optimized by the random search method, the grid search method, the Bayesian optimization algorithm, and the beluga whale optimization algorithm, respectively. Compared with GS-MA-BiLSTM, our framework reduces MSE by 18.50%, RMSE by 9.72%, and MAE by 13.60%. Compared with RS-MA-BiLSTM, our framework reduces MSE by 15.38%, RMSE by 7.99%, and MAE by 10.05%. Compared with BOA-MA-BiLSTM, our framework reduces MSE by 12.57%, RMSE by 6.49%, and MAE by 8.37%. Compared with BWO-MA-BiLSTM, our framework reduces MSE by 5.98%, RMSE by 3.03%, and MAE by 4.37%. Table 12 presents the quantitative comparison results of these frameworks. Clearly, FAMBWO-MA-BiLSTM is significantly better than RS-MA-BiLSTM, GS-MA-BiLSTM, and BOA-MA-BiLSTM, and it also shows an obvious improvement over BWO-MA-BiLSTM. These results show that optimizing the hyperparameters alone can significantly improve the predictive performance of the model, indicating that hyperparameter optimization is even more important than model selection.

7. Conclusions

Deep learning is currently the state-of-the-art technology for TEC prediction, and hyperparameter optimization of deep learning models is a challenge that greatly affects their performance. This article proposed a TEC prediction and optimization framework, FAMBWO-MA-BiLSTM. We first analyzed the problems of the BWO algorithm, such as a lack of population diversity and an imbalance between the exploration and exploitation phases. We then proposed an improved algorithm, FAMBWO, by applying a Cat chaotic mapping strategy in the population initialization phase, adding a Firefly Algorithm strategy to the position update, and adding a Cauchy mutation and Tent chaotic mapping strategy to the exploitation phase. We validated the effectiveness of these three strategies through ablation experiments. We then compared FAMBWO with 11 other meta-heuristic algorithms on 30 benchmark functions, examining their exploration, exploitation, and local optima avoidance capabilities. The experimental results show that FAMBWO outperforms the comparison algorithms in exploration ability, exploitation ability, local optima avoidance, and the ability to solve high-dimensional optimization problems. Finally, we applied FAMBWO to the hyperparameter optimization of deep learning models for TEC prediction and proposed an automated machine learning framework, FAMBWO-MA-BiLSTM, in which MA-BiLSTM performs the TEC prediction and FAMBWO optimizes four of its hyperparameters. We compared FAMBWO-MA-BiLSTM with GS-MA-BiLSTM, RS-MA-BiLSTM, BOA-MA-BiLSTM, and BWO-MA-BiLSTM. The results indicate that the predictive performance of FAMBWO-MA-BiLSTM is far superior to that of GS-MA-BiLSTM, RS-MA-BiLSTM, and BOA-MA-BiLSTM, and clearly better than that of BWO-MA-BiLSTM.
This study provides a new solution for deep learning hyperparameter optimization in TEC prediction and can also serve as a reference for hyperparameter optimization in other deep learning application fields.
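As a purely illustrative sketch of the first of these strategies, chaotic-map-based initialization replaces uniform random sampling with a deterministic chaotic sequence. The version below assumes the standard two-dimensional Cat (Arnold) map; the exact map and parameters used in FAMBWO are defined in the methods section and may differ.

```python
import numpy as np

def cat_map_initialization(pop_size, dim, lower, upper, seed=0.7):
    """Initialize a population with a Cat (Arnold) chaotic map instead of uniform
    random sampling (illustrative sketch; FAMBWO's exact formulation may differ)."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    x, y = seed, 0.3                                      # chaotic state in [0, 1)
    population = np.empty((pop_size, dim))
    for i in range(pop_size):
        for j in range(dim):
            x, y = (x + y) % 1.0, (x + 2.0 * y) % 1.0     # one Cat map iteration
            population[i, j] = lower[j] + x * (upper[j] - lower[j])
    return population

# Example: 30 search agents in a 4-dimensional search space scaled to [0, 1]
# pop = cat_map_initialization(30, 4, lower=[0.0] * 4, upper=[1.0] * 4)
```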

Author Contributions

Y.C.: supervision, software, methodology, conceptualization, formal analysis, investigation, visualization, writing—review and editing. H.L.: supervision, methodology, conceptualization, investigation, visualization, formal analysis, writing—review and editing. W.S.: resources, conceptualization. Y.Y.: resources, formal analysis. L.X.: investigation, formal analysis, funding acquisition. H.W.: resources, investigation. K.Z.: resources, funding acquisition. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This article does not contain any studies with human participants or animals performed by any of the authors.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Acknowledgments

This work was supported by Special Fund of Fundamental Scientific Research Business Expense for Higher School of Central Government (ZY20180119), the Research and Development Program of Langfang Science and Technology (2023011054), and the Natural Science Foundation of Hebei Province (D2023512004). The authors extend their sincere gratitude to NASA’s CDDIS website. Additionally, they express their appreciation to the IGS and CODE teams for their ongoing efforts in maintaining and improving these valuable resources for the scientific community. The authors also acknowledge with gratitude the utilization of Tensorflow and Keras for the deep learning model employed in this study.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Supplementary Section

Table A1. Experimental results of strategy comparison on benchmark functions (F1–F30, dim = 30).
F1F2F3
AverStdAverStdAverStd
BWO6.0326E-971.1981E-962.0445E-493.6330E-496.5026E-661.4008E-65
CCM_BWO8.5037E-985.4601E-987.2947E-483.7601E-488.0273E-659.2481E-65
CMT_BWO4.8204E-1067.6152E-1055.2637E-568.3460E-564.2386E-693.471E-69
FA_BWO1.7365E-1023.1267E-1012.1898E-514.7816E-512.8723E-727.9173E-72
CCM_CMT_BWO3.5805E-1278.4561E-1276.6735E-657.2378E-651.0287E-865.2654E-86
CCM_FA_BWO6.6154E-1024.1783E-1013.3487E-518.2376E-516.2377E-737.0192E-73
CMT_FA_BWO4.8341E-1387.1623E-1388.2836-691.1687-693.1276E-897.2703E-89
FAMBWO2.1784E-1446.5303E-1442.6852E-735.8304E-735.2865E-891.5806E-88
F4F5F6
AverStdAverStdAverStd
BWO5.2629E-941.2195E-933.3304E-485.0039E-487.8603E-481.6408E-47
CCM_BWO5.3237E-974.3234E-967.1761E-521.0473E-528.1026E-526.3190E-51
CMT_BWO3.2031E-1186.1250E-1164.2308E-643.7832E-643.7657E-643.1236E-64
FA_BWO1.8723E-874.2134E-866.2167E-429.7912E-414.2386E-422.6124E-41
CCM_CMT_BWO2.9323E-1251.2987E-1258.1268E-685.9473E-686.3201E-681.2081E-67
CCM_FA_BWO7.1730E-1045.6245E-1033.0923E-572.3893E-572.5712E-578.6121E-56
CMT_FA_BWO8.7163E-1297.8160E-1295.4861E-704.4012E-691.1863E-705.6737E-69
FAMBWO1.8723E-1432.7901E-1433.9239E-711.0511E-703.0652E-715.9035E-71
F7F8F9
AverStdAverStdAverStd
BWO1.7754E+05.5606E-12.1725E-41.0442E-44.6357E-996.7786E-99
CCM_BWO1.5011E+07.3688E-12.3125E-41.2023E-48.3871E-1065.0871E-105
CMT_BWO8.3679E-15.1682E-17.0761E-38.3561E-36.0964E-1261.2944E-126
FA_BWO6.8671E-14.3061E-13.4123E-47.1098E-38.8713E-1143.4103E-114
CCM_CMT_BWO4.1264E-13.8735E-15.1276E-46.1263E-41.3760E-1334.2301E-133
CCM_FA_BWO5.3672E-18.3877E-19.9760E-44.3872E-49.1367E-1398.0122E-138
CMT_FA_BWO2.2378E-13.1265E-11.9937E-49.3012E-53.7650E-1301.2312E-139
FAMBWO1.0229E-14.0150E-21.8683E-48.1479E-59.2454E-1492.1089E-148
F10F11F12
AverStdAverStdAverStd
BWO-8.4003E+31.7763E+30.0000E+40.0000E+4-1.0891E+34.1222E+1
CCM_BWO-8.2371E+31.8761E+30.0000E+40.0000E+4-1.0880E+34.2613E+1
CMT_BWO-9.0364E+37.7801E-10.0000E+40.0000E+4-1.0923E+36.1876E+1
FA_BWO-8.1875E+31.0715E+20.0000E+40.0000E+4-1.0955E+32.1651E+1
CCM_CMT_BWO-2.7611E+43.9120E+10.0000E+40.0000E+4-1.1781E+37.3751E+0
CCM_FA_BWO-3.7013E+45.8761E-10.0000E+40.0000E+4-1.1211E+39.1256E+0
CMT_FA_BWO-2.1762E+49.7178E-10.0000E+40.0000E+4-1.0403E+36.82257E+0
FAMBWO-1.2569E+43.5252E-20.0000E+40.0000E+4-1.1687E+36.8210E+0
F13F14F15
AverStdAverStdAverStd
BWO0.0000E+40.0000E+44.4409E-160.0000E+40.0000E+40.0000E+4
CCM_BWO0.0000E+40.0000E+44.4409E-160.0000E+40.0000E+40.0000E+4
CMT_BWO0.0000E+40.0000E+44.4409E-160.0000E+40.0000E+40.0000E+4
FA_BWO0.0000E+40.0000E+44.4409E-160.0000E+40.0000E+40.0000E+4
CCM_CMT_BWO0.0000E+40.0000E+44.4409E-160.0000E+40.0000E+40.0000E+4
CCM_FA_BWO0.0000E+40.0000E+44.4409E-160.0000E+40.0000E+40.0000E+4
CMT_FA_BWO0.0000E+40.0000E+44.4409E-160.0000E+40.0000E+40.0000E+4
FAMBWO0.0000E+40.0000E+44.4409E-160.0000E+40.0000E+40.0000E+4
F16F17F18
AverStdAverStdAverStd
BWO-1.0000E+00.0000E+41.1166E-13.8210E-21.5109E-31.3678E-3
CCM_BWO-1.0000E+00.0000E+43.0762E-17.1267E-23.8730E-33.1287E-3
CMT_BWO-1.0000E+00.0000E+46.7098E-11.2655E-25.1236E-32.8613E-3
FA_BWO-1.0000E+00.0000E+47.7513E-24.6012E-21.1312E-32.0167E-3
CCM_CMT_BWO-1.0000E+00.0000E+41.8075E-39.2606E-24.3760E-48.3106E-4
CCM_FA_BWO-1.0000E+00.0000E+48.1338E-26.1763E-28.0983E-31.8971E-4
CMT_FA_BWO-1.0000E+00.0000E+44.5362E-33.2780E-34.2371E-47.5608E-4
FAMBWO-1.0000E+00.0000E+43.2652E-31.4095E-39.0821E-41.2501E-3
F19F20F21
AverStdAverStdAverStd
BWO9.9807E-11.3171E-44.6485E-45.2770E-4-1.0316E+07.7597E-10
CCM_BWO9.9807E-13.7072E-72.1901E-46.0981E-4-1.0316E+06.8103E-10
CMT_BWO9.9807E-17.9908E-45.1239E-49.8139E-4-1.0316E+08.2385E-10
FA_BWO9.9807E-12.8723E-109.1904E-41.2813E-4-1.0316E+04.1324E-10
CCM_CMT_BWO9.9807E-14.2831E-73.0736E-44.2874E-4-1.0316E+07.1104E-10
CCM_FA_BWO9.9807E-16.2903E-86.7938E-43.0982E-4-1.0316E+05.9012E-10
CMT_FA_BWO9.9807E-18.1308E-111.8341E-44.3703E-4-1.0316E+06.1095E-10
FAMBWO9.9807E-18.1593E-111.1780E-43.0644E-4-1.0316E+05.9411E-10
F22F23F24
AverStdAverStdAverStd
BWO-8.0985E+01.8323E+0-8.8582E+01.4560E+0-8.5132E+01.3851E+0
CCM_BWO-8.4208E+08.1983E-1-3.4913E+04.3219E+0-9.1278E+03.8902E-1
CMT_BWO-9.0671E+01.6370E-1-6.3098E+02.0344E-2-9.7612E+04.7898E-1
FA_BWO-6.7127E+07.6128E+0-8.1298E+09.4089E+0-1.0613E+18.3014E-2
CCM_CMT_BWO-3.9820E+09.2891E+0-1.4980E+13.4083E-1-7.1975E+06.0781E-1
CCM_FA_BWO-7.8087E+18.7087E-1-9.8019E+04.3909E-1-4.8035E+09.8776E-2
CMT_FA_BWO-2.9083E+14.2389E-1-4.8984E+05.9088E-1-1.2049E+15.1780E-2
FAMBWO-1.0119E+12.1270E-2-1.0373E+12.5349E-2-1.0489E+13.6462E-2
F25F26F27
AverStdAverStdAverStd
BWO4.0035E+34.0536E+25.9951E+48.5572E+33.7909E+44.3557E+3
CCM_BWO4.0071E+34.9981E+25.9451E+46.7213E+33.8417E+44.6121E+3
CMT_BWO4.0048E+35.9957E+25.8961E+47.7611E+33.7782E+43.6017E+3
FA_BWO3.9983E+36.8903E+25.9071E+49.6713E+33.7076E+45.7104E+3
CCM_CMT_BWO3.9008E+34.8482E+25.8601E+46.7613E+33.7892E+44.0913E+3
CCM_FA_BWO3.9992E+33.8870E+25.8023E+47.6134E+33.6426E+46.8619E+3
CMT_FA_BWO3.9601E+34.8611E+25.7761E+46.8731E+33.5716E+44.9125E+3
FAMBWO3.8719E+33.2185E+25.6212E+46.3241E+33.3417E+43.8474E+3
F28F29F30
AverStdAverStdAverStd
BWO8.7766E+31.1405E+31.0724E+42.1617E+34.2551E+31.5691E+2
CCM_BWO8.1880E+32.3927E+31.0246E+42.9831E+34.2671E+32.1013E+2
CMT_BWO8.9967E+31.7853E+31.0497E+47.8087E+34.2401E+31.9954E+2
FA_BWO8.9730E+31.1813E+31.0190E+44.1707E+34.1364E+32.0192E+2
CCM_CMT_BWO8.0830E+32.8794E+39.9177E+32.5812E+34.2398E+31.8096E+2
CCM_FA_BWO7.9884E+32.3971E+39.6019E+33.2991E+34.1043E+32.7913E+2
CMT_FA_BWO7.9498E+31.8766E+39.0178E+34.8018E+34.1012E+31.0383E+2
FAMBWO7.9028E+31.0457E+38.0399E+31.5892E+34.0977E+31.2075E+2
Table A2. Comparison results on unimodal functions (F1–F9, dim = 30).
FunMethodPSOMFODESCASSAWOASOAGWODBOWSOBWOFAMBWO
F1Aver6.8490E+06.7444E+33.4708E+42.2896E+25.0070E+34.0383E-71.4718E-831.8743E-111.0000E+32.4508E-136.0326E-972.1784E-144
STD1.7102E+03.4688E+34.1341E+32.0599E+23.0605E+31.1615E-62.6517E-831.3918E-113.0000E+35.1909E-131.1981E-966.5303E-144
F2Aver1.4574E+16.1632E+14.1621E+41.3758E+03.6748E+21.8221E-62.2915E-492.6769E-71.0000E+12.9476E-82.0445E-492.6852E-73
STD1.0815E+11.4123E+16.9389E+41.1781E+07.8082E+21.4165E-66.8722E-499.4134E-81.0000E+15.3270E-83.6330E-495.8304E-73
F3Aver3.2366E-26.8955E-37.7184E-11.1308E-21.4129E-27.7084E-264.5174E-758.2865E-364.8427E-211.2328E-186.5026E-665.2865E-89
STD3.2448E-24.7926E-32.5389E-12.3299E-21.0444E-21.3552E-741.3552E-741.1115E-351.1157E-202.1425E-181.4008E-651.5806E-88
F4Aver3.3500E+24.2556E+46.4232E+41.6679E+42.3164E+41.1590E+27.0551E+43.6043E-11.2141E+41.1838E-135.2629E-941.8723E-143
STD1.1339E+29.9362E+34.3655E+37.5642E+35.5044E+32.3903E+23.0165E+41.7346E-11.1663E+43.4892E-131.2195E-932.7901E-143
F5Aver3.9709E+07.4761E+18.6668E+14.8122E+15.0257E+12.0319E-12.2309E-126.9860E-31.2809E-96.3616E-83.3304E-483.9239E-71
STD1.2277E+06.1183E+02.7822E+07.3415E+05.4787E+01.9208E-15.6215E-123.7966E-31.9993E-91.7728E-75.0039E-481.0511E-70
F6Aver4.1819E+07.4350E+18.5470E+14.8395E+15.2017E+11.6344E-12.3028E-97.2349E-39.8127E-106.3104E-97.8603E-483.0652E-71
STD1.0668E+08.7207E+03.3904E+09.7793E+03.1908E+01.4722E-16.9018E-93.2801E-31.0469E-96.5178E-91.6408E-475.9035E-71
F7Aver7.3372E+06.7066E+33.5637E+49.1885E+22.2072E+31.2211E+01.2162E-16.5462E-11.0106E+34.1423E-21.7754E+01.0229E-1
STD2.8164E+02.9561E+37.0200E+38.4759E+21.0926E+34.2589E-11.3762E-13.1512E-13.0300E+33.7514E-25.5606E-14.0150E-2
F8Aver3.3204E+01.3878E+14.8039E+16.4379E-12.9030E+04.0584E-32.5258E-43.8932E-31.9174E-34.8033E-32.1725E-41.0442E-4
STD2.0680E+08.3744E+06.6963E+07.1711E-11.5040E+04.8341E-31.7940E-42.4039E-38.0869E-43.4241E-31.8683E-48.1479E-5
F9Aver1.0263E+22.3871E+22.3004E+26.3463E+02.2871E+13.5438E-71.1471E+02.9268E-121.0480E+19.9794E-144.6357E-999.2454E-149
STD1.7766E+26.8304E+14.1702E+13.8758E+01.1263E+13.5888E-73.4080E+03.0885E-123.1441E+12.9345E-136.7786E-992.1089E-148
Table A3. p-values of unimodal functions (F1–F9, dim = 30).
FunPSOMFODESCASSAWOASOAGWODBOWSOBWO
F11.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-04
F21.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-04
F31.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-043.1971E-031.5705E-041.5705E-041.5705E-041.5705E-04
F41.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-04
F51.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-04
F61.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-04
F71.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-044.4969E-011.9397E-031.5705E-048.151E-031.5705E-04
F81.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-045.8782E-021.5705E-041.5705E-041.5705E-041.3057E-01
F91.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-04
Table A4. Comparison results on multimodal functions (F10–F24, dim = 30).
FunMethodPSOMFODESCASSAWOASOAGWODBOWSOBWOFAMBWO
F10Aver-4.1990E+3-7.5453E+3-5.3132E+3-3.7296E+3-6.9819E+3-6.9421E+3-1.2529E+4-6.2970E+3-9.5608E+3-1.2568E+4-8.4003E+3-1.2569E+4
STD7.9674E+25.3018E+23.6783E+22.0742E+26.0326E+22.6357E+28.8831E+15.8555E+21.2744E+31.8889E+01.7763E+33.5252E-2
F11Aver6.1700E+04.3716E+07.0070E+07.5077E+02.3187E+08.2724E-10.0000E+41.4655E+03.7968E+01.2545E-100.0000E+40.0000E+4
STD8.6362E-14.5897E-13.1413E-15.8903E-16.8595E-15.5481E-10.0000E+43.3570E-11.3110E+02.2972E-100.0000E+40.0000E+4
F12Aver-8.6276E+2-9.7358E+2-5.3341E+2-5.8953E+2-9.1291E+2-7.5878E+2-1.1748E+3-9.3520E+2-8.1231E+2-1.1749E+3-1.0891E+3-1.1687E+3
STD6.8209E+12.7682E+15.2359E+13.9380E+15.9464E+19.2879E+12.6498E-14.9462E+19.0867E+11.3952E-14.1222E+16.8210E+0
F13Aver1.9278E+22.1788E+23.5404E+26.5252E+12.1911E+21.0446E-10.0000E+41.5567E+15.7849E+01.2417E-130.0000E+40.0000E+4
STD2.4352E+14.2181E+11.5667E+12.8560E+13.7967E+13.1334E-10.0000E+45.9490E+01.1570E+13.4605E-130.0000E+40.0000E+4
F14Aver4.0415E+01.9160E+11.9959E+11.5741E+11.6377E+18.8903E-54.4409E-161.1499E-64.1993E-111.2470E-84.4409E-164.4409E-16
STD5.6701E-12.3175E+03.4900E-36.8663E+02.7163E+05.7951E-50.0000E+43.9160E-75.3902E-112.0079E-80.0000E+40.0000E+4
F15Aver8.6986E-16.5211E+13.2148E+23.2698E+02.8432E+16.9398E-30.0000E+46.3183E-39.1704E-157.5551E-140.0000E+40.0000E+4
STD5.4028E-24.2311E+16.6860E+12.4180E+01.0714E+11.1575E-20.0000E+47.9812E-32.7511E-141.9832E-130.0000E+40.0000E+4
F16Aver9.0278E-122.3995E-113.3364E-93.5848E-102.7083E-113.1951E-12-1.0000E+06.1424E-155.1416E-12-9.9876E-1-1.0000E+0-1.0000E+0
STD1.5150E-112.2714E-111.8871E-91.8679E-104.2416E-111.7624E-120.0000E+48.9887E-152.4031E-123.4803E-30.0000E+40.0000E+4
F17Aver1.9888E+07.1808E+62.5231E+86.5496E+61.5142E+61.3017E-11.8572E-26.2936E-23.8577E-23.5579E-31.1166E-13.2652E-3
STD8.0797E-16.1358E+61.1172E+81.4997E+71.8196E+61.5138E-18.6800E-34.6080E-21.4023E-26.9370E-43.8210E-21.4095E-3
F18Aver1.4771E+07.5675E+21.1689E+31.0721E+14.9256E+21.1190E+05.7996E-23.5672E-13.5387E-11.3311E-21.5109E-31.3678E-3
STD5.2995E-13.0941E+21.7358E+23.8644E+01.7474E+22.0245E-13.6602E-21.6280E-11.1955E-12.7532E-29.0821E-41.2501E-3
F19Aver2.0867E+01.3952E+09.9800E-11.5957E+02.1257E+01.4478E+03.3479E+03.7370E+01.0974E+09.9800E-19.9800E-19.9800E-1
STD1.4936E+06.5841E-11.2162E-169.0764E-11.2421E+06.6106E-13.7865E+04.1036E+02.9821E-11.7189E-121.3171E-48.1593E-11
F20Aver1.4243E-31.1496E-35.1773E-41.1113E-34.5175E-34.6798E-41.5745E-34.4585E-33.4553E-34.1358E-44.6485E-44.2770E-4
STD6.8454E-43.1737E-43.5571E-44.2714E-44.1546E-32.7070E-41.1457E-37.9532E-33.2167E-32.7329E-41.1780E-43.0644E-4
F21Aver-1.0315E+0-1.0316E+0-1.0316E+0-1.0316E+0-1.0316E+0-1.0316E+0-1.0316E+0-1.0316E+0-1.0316E+0-1.0316E+0-1.0316E+0-1.0316E+0
STD1.5763E-41.4043E-167.8918E-158.0073E-51.2274E-134.8436E-84.0879E-64.8763E-86.2923E-83.7059E-147.7597E-105.9411E-10
F22Aver-1.0046E+1-5.6140E+0-1.0153E+1-2.9240E+0-5.4023E+0-3.7643E+0-8.8029E+0-9.6446E+0-7.0929E+0-1.0153E+1-8.0985E+0-1.0119E+1
STD1.3707E-12.4519E+01.6085E-61.3862E+03.2428E+01.9736E+02.4198E+01.5158E+02.4738E+01.5392E-51.8323E+02.1270E-2
F23Aver-1.0285E+1-7.6342E+0-1.0403E+1-3.4061E+0-5.7208E+0-3.9907E+0-7.0898E+0-9.8684E+0-9.3005E+0-1.0403E+1-8.8582E+0-1.0373E+1
STD7.5974E-23.4014E+07.9317E-61.2397E+03.1696E+01.4778E+02.9727E+01.5936E+02.0919E+08.8006E-61.4560E+02.5349E-2
F24Aver-1.0450E+1-5.3728E+0-1.0536E+1-3.5850E+0-6.3097E+0-4.0946E+0-5.0826E+0-1.0534E+1-7.7791E+0-1.0536E+1-8.5132E+0-1.0489E+1
STD9.2939E-22.7765E+03.1115E-61.3324E+03.5236E+01.8502E+02.7384E+01.3937E-32.7125E+02.0858E-51.3851E+03.6462E-2
Table A5. p-values of multimodal functions (F10–F24, dim = 30).
FunPSOMFODESCASSAWOASOAGWODBOWSOBWO
F101.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-044.4969E-011.5705E-042.4969E-031.5093E-011.5705E-04
F111.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041E+001.5705E-041.5705E-041.5705E-041E+00
F121.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-04
F131.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041E+001.5705E-045.8782E-025.8782E-021E+00
F141.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041E+001.5705E-041.5705E-041.5705E-041E+00
F151.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041E+001.5705E-047.0546E-011.3057E-011E+00
F161.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041E+001.5705E-041.5705E-041.5705E-041E+00
F171.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.9876E-011.5705E-04
F181.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-041.5705E-044.072E-036.5015E-01
F198.2099E-024.4969E-011.5705E-041.5705E-042.3342E-021.5705E-041.7362E-013.8106E-044.4969E-013.8106E-041.5705E-04
F201.4989E-036.5017E-032.2648E-018.151E-033.8106E-044.0568E-014.9366E-022.8992E-018.8074E-041.0165E-025.967E-01
F211.5705E-041.5705E-041.5705E-041.5705E-041.5705E-043.8106E-041.5705E-041.5705E-041.5705E-041.5705E-042.2648E-01
F225.8782E-022.3342E-021.5705E-041.5705E-041.3057E-011.5705E-041.5093E-011.0165E-023.4294E-021.5705E-041.5705E-04
F231.152E-034.4969E-011.5705E-041.5705E-041.3057E-011.5705E-042.4969E-032.4969E-034.9366E-021.5705E-041.5705E-04
F247.0546E-012.3342E-021.5705E-041.5705E-044.4969E-011.5705E-042.1218E-041.5705E-041.3057E-011.5705E-042.1218E-04
Table A6. Comparison results on composition functions (F25–F30, dim = 30).
FunMethodPSOMFODESCASSAWOASOAGWODBOWSOBWOFAMBWO
F25Aver4.2501E+34.3714E+34.2012E+34.1697E+34.2810E+34.1909E+34.2513E+34.1745E+34.1032E+34.3108E+34.0035E+33.8719E+3
STD3.9414E+24.1368E+23.9646E+25.5081E+24.4508E+24.7587E+24.1419E+23.2691E+24.3694E+24.8812E+24.0536E+23.2185E+2
F26Aver6.1497E+46.2600E+46.3064E+46.1810E+46.3679E+46.2453E+46.2085E+46.3440E+46.4170E+46.3717E+45.9951E+45.6212E+4
STD8.3180E+36.1200E+37.9024E+35.9720E+36.7052E+37.1979E+35.2581E+37.8692E+35.9357E+36.0638E+38.5572E+36.3241E+3
F27Aver3.7884E+43.7348E+43.7175E+43.7916E+43.8343E+43.8939E+44.0013E+43.8708E+43.7545E+43.9364E+43.7909E+43.3417E+4
STD4.8197E+34.8688E+35.3916E+34.4953E+33.8023E+33.7373E+34.5948E+33.8914E+34.0606E+34.4915E+34.3557E+33.8474E+3
F28Aver9.2863E+39.0139E+39.1491E+39.1018E+39.1074E+38.9788E+38.9313E+39.0204E+39.0009E+39.1428E+38.7766E+37.9028E+3
STD1.0664E+31.1657E+31.3429E+31.5160E+31.2147E+31.1470E+31.3252E+31.2400E+39.8165E+21.3555E+31.1405E+31.0457E+3
F29Aver1.0741E+41.1438E+41.0546E+41.1140E+41.1237E+41.0864E+41.1093E+41.1180E+41.0915E+41.0611E+41.0724E+48.0399E+3
STD1.2948E+31.9296E+31.6449E+31.4688E+31.4112E+31.8017E+31.8848E+31.5839E+31.8441E+31.8707E+32.1617E+31.5892E+3
F30Aver4.2832E+34.3256E+34.3162E+34.2181E+34.2981E+34.2168E+34.2989E+34.3055E+34.2065E+34.2300E+34.2551E+34.0977E+3
STD1.3042E+22.2106E+21.5269E+22.4449E+22.1250E+21.5210E+21.5466E+21.6424E+21.6123E+21.1989E+21.5691E+21.2075E+2
Table A7. p-values of composition functions (F25–F30, dim = 30).
FunDimPSOMFODESCASSAWOASOAGWODBOWSOBWO
F25304.8461E-41.3825E-51.2046E-31.9494E-24.1013E-44.9689E-37.9060E-41.4796E-31.9494E-21.7317E-41.8332E-1
F26301.7227E-33.0928E-41.5382E-41.1436E-37.8991E-51.2685E-33.8778E-45.4107E-42.3542E-54.7885E-55.9613E-3
F27302.6046E-42.4390E-33.2599E-33.0928E-44.7885E-53.7006E-62.2340E-61.2077E-54.1013E-45.6570E-67.8991E-5
F28303.2611E-58.3393E-49.2727E-47.4937E-48.3393E-47.4937E-44.1284E-31.8368E-43.8778E-41.0302E-32.5611E-3
F29304.4937E-43.0201E-42.9436E-32.2988E-41.3065E-48.4421E-49.5315E-41.5084E-41.2093E-37.7931E-33.6203E-3
F30301.5084E-31.5084E-31.7420E-41.0134E-16.5914E-31.3004E-11.5084E-32.6370E-34.0057E-21.2093E-27.4655E-3
Table A8. Comparison results on high-dimensional composition functions (F25–F30, dim = 100).
FunDimMethodPSOMFODESCASSAWOASOAGWODBOWSOBWOFAMBWO
F25100Aver1.0385E+49.7473E+31.0265E+41.0293E+41.0963E+41.0336E+41.0300E+41.0696E+41.0301E+41.0099E+41.0465E+49.8810E+3
STD1.3243E+31.2451E+39.1069E+26.4140E+21.2945E+31.1711E+37.4290E+28.3234E+21.2352E+35.9307E+21.4310E+37.6980E+2
F26100Aver1.4816E+51.4716E+51.4950E+51.4126E+51.4230E+51.4274E+51.4112E+51.4312E+51.4567E+51.4665E+51.4570E+51.3482E+5
STD7.3497E+36.5706E+31.0969E+41.8194E+41.0396E+48.3374E+31.2159E+49.3927E+31.3538E+41.3796E+48.5761E+31.0697E+4
F27100Aver9.3549E+48.8460E+49.1208E+48.7080E+49.4635E+49.5866E+48.9455E+48.7414E+49.0502E+49.3059E+48.9737E+48.2718E+4
STD6.4928E+38.8652E+36.4811E+37.4193E+37.0810E+35.2458E+36.6586E+31.1432E+48.6436E+38.0090E+38.0301E+36.9324E+3
F28100Aver3.5044E+43.5828E+43.4288E+43.4423E+43.5320E+43.4143E+43.4184E+43.4150E+43.5882E+43.5157E+43.5221E+43.0264E+4
STD4.4720E+33.2380E+33.4731E+34.1880E+33.0991E+33.9773E+33.3118E+34.8475E+33.8251E+34.2116E+34.8540E+33.2352E+3
F29100Aver4.2600E+43.8342E+44.0075E+43.9187E+44.0305E+44.0275E+43.9146E+44.2023E+44.3092E+44.2620E+43.7676E+43.4992E+4
STD3.9281E+35.9466E+38.1387E+37.0578E+38.8507E+38.2231E+37.5353E+36.1876E+36.8596E+35.1902E+35.5419E+34.0076E+3
F30100Aver5.8614E+35.7999E+35.7811E+35.9534E+35.8223E+35.7778E+35.9028E+35.8829E+35.9016E+35.7467E+35.8418E+35.5284E+3
STD3.1410E+23.0393E+22.8520E+21.7280E+23.5349E+24.4726E+21.4622E+23.6817E+23.1756E+22.3156E+22.8487E+22.0158E+2
Table A9. p-values of high-dimensional composition functions (F25–F30, dim = 100).
FunPSOMFODESCASSAWOASOAGWODBOWSOBWO
F254.0057E-29.8345E-32.3716E-23.0953E-35.1139E-31.6468E-31.4089E-31.2093E-35.6468E-26.0413E-23.6203E-3
F261.1298E-33.0201E-33.0201E-34.0057E-29.2984E-25.9126E-21.0134E-17.7931E-22.3787E-22.1333E-21.0744E-2
F272.8416E-43.2670E-23.9425E-31.0134E-21.7420E-43.0655E-51.2093E-25.3764E-26.5914E-31.7386E-36.5914E-3
F285.8104E-32.8416E-45.8104E-34.4937E-39.7547E-41.3589E-25.1139E-34.4253E-21.1298E-33.0201E-36.5914E-3
F297.4588E-51.9136E-23.2670E-24.4253E-22.3787E-21.1984E-22.2110E-21.5084E-31.5084E-32.4178E-41.5243E-2
F302.6370E-38.4421E-31.5247E-23.6742E-51.9103E-29.5315E-37.4588E-55.1139E-39.7547E-41.9103E-26.5914E-3

References

  1. Komjathy, A.; Sparks, L.; Wilson, B.D.; Mannucci, A.J. Automated daily processing of more than 1000 ground-based GPS receivers for studying intense ionospheric storms. Radio Sci. 2005, 40, 1–11. [Google Scholar] [CrossRef]
  2. Schunk, R.W.; Sojka, J. Ionosphere-thermosphere space weather issues. J. Atmos. Terr. Phys. 1996, 58, 1527–1574. [Google Scholar] [CrossRef]
  3. Zhang, X.; Ren, X.; Chen, J.; Zuo, X.; Mei, D.; Liu, W. Investigating GNSS PPP–RTK with external ionospheric constraints. Satell. Navig. 2022, 3, 6. [Google Scholar] [CrossRef]
  4. Li, L.; Liu, H.; Le, H.; Yuan, J.; Shan, W.; Han, Y.; Yuan, G.; Cui, C.; Wang, J. Spatiotemporal Prediction of Ionospheric Total Electron Content Based on ED-ConvLSTM. Remote Sens. 2023, 15, 3064. [Google Scholar] [CrossRef]
  5. Camporeale, E. The challenge of machine learning in space weather: Nowcasting and forecasting. Space Weather-Int. J. Res. Appl. 2019, 17, 1166–1207. [Google Scholar] [CrossRef]
  6. Ren, X.; Yang, P.; Liu, H.; Chen, J.; Liu, W. Deep learning for global ionospheric TEC forecasting: Different approaches and validation. Space Weather 2022, 20, e2021SW003011. [Google Scholar] [CrossRef]
  7. Shaikh, M.M.; Butt, R.A.; Khawaja, A. Forecasting total electron content (TEC) using CEEMDAN LSTM model. Adv. Space Res. 2023, 71, 4361–4373. [Google Scholar] [CrossRef]
  8. Akhoondzadeh, M. A MLP neural network as an investigator of TEC time series to detect seismo-ionospheric anomalies. Adv. Space Res. 2013, 51, 2048–2057. [Google Scholar] [CrossRef]
  9. Reimers, N.; Gurevych, I. Optimal hyperparameters for deep lstm-networks for sequence labeling tasks. arXiv 2017, arXiv:1707.06799. [Google Scholar]
  10. Lavesson, N.; Davidsson, P. Quantifying the Impact of Learning Algorithm Parameter Tuning. In Proceedings of the 21st National Conference on Artificial Intelligence—Volume 1 (AAAI’06), Boston, MA, USA, 16–20 July 2006; AAAI Press: Pomona, CA, USA, 2006; pp. 395–400. Available online: http://dl.acm.org/citation.cfm?id=1597538.1597602 (accessed on 1 July 2006).
  11. Mantovani, R.G.; Rossi, A.L.; Vanschoren, J.; Bischl, B.; Carvalho, A.C. To tune or not to tune: Recommending when to adjust SVM hyper-parameters via meta-learning. In Proceedings of the 2015 International Joint Conference on Neural Networks (IJCNN), Killarney, Ireland, 12–17 July 2015. [Google Scholar] [CrossRef]
  12. Probst, P.; Bischl, B.; Boulesteix, A.-L. Tunability: Importance of Hyperparameters of Machine Learning Algorithms. arXiv 2018, arXiv:1802.09596. [Google Scholar] [CrossRef]
  13. Prost, J. Hands on Hyperparameter Tuning with Keras Tuner. 2020. Available online: https://www.sicara.ai/blog/hyperparameter-tuning-keras-tuner (accessed on 12 July 2020).
  14. Weerts, H.J.; Mueller, A.C.; Vanschoren, J. Importance of tuning hyperparameters of machine learning algorithms. arXiv 2020, arXiv:2007.07588. [Google Scholar]
  15. Hutter, F.; Hoos, H.H.; Stützle, T. Automatic algorithm configuration based on local search. In Proceedings of the 22nd National Conference on Artificial Intelligence, Vancouver, BC, Canada, 22–26 July 2007; AAAI Press: Pomona, CA, USA, 2007; pp. 1152–1157. [Google Scholar]
  16. Lorenzo, P.R.; Nalepa, J.; Kawulok, M.; Ramos, L.S.; Pastor, J.R. Particle swarm optimization for hyper-parameter selection in deep neural networks. In Proceedings of the Genetic and Evolutionary Computation Conference, Berlin, Germany, 15–19 July 2017; pp. 481–488. [Google Scholar]
  17. Nadimi-Shahraki, M.H.; Taghian, S.; Mirjalili, S.; Zamani, H.; Bahreininejad, A. GGWO: Gaze cues learning-based grey wolf optimizer and its applications for solving engineering problems. J. Comput. Sci. 2022, 21, 101636. [Google Scholar] [CrossRef]
  18. Gharehchopogh, F.S.; Shayanfar, H.; Gholizadeh, H. A comprehensive survey on symbiotic organisms search algorithms. Artif. Intell. Rev. 2020, 53, 2265–2312. [Google Scholar] [CrossRef]
  19. Ghafori, S.; Gharehchopogh, F.S. Advances in spotted hyena optimizer: A comprehensive survey. Arch. Comput. Methods Eng. 2021, 29, 1569. [Google Scholar] [CrossRef]
  20. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995. [Google Scholar] [CrossRef]
  21. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl.-Based Syst. 2015, 89, 228–249. [Google Scholar] [CrossRef]
  22. Mirjalili, S. SCA: A Sine Cosine Algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  23. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191. [Google Scholar] [CrossRef]
  24. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  25. Dhiman, G.; Kumar, V. Seagull optimization algorithm: Theory and its applications for large-scale industrial engineering problems. Knowl.-Based Syst. 2019, 165, 169–196. [Google Scholar] [CrossRef]
  26. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  27. Xue, J.; Shen, B. Dung beetle optimizer: A new meta-heuristic algorithm for global optimization. J. Supercomput. 2023, 79, 7305–7336. [Google Scholar] [CrossRef]
  28. Ayyarao, T.S.; Ramakrishna, N.S.S.; Elavarasan, R.M.; Polumahanthi, N.; Rambabu, M.; Saini, G.; Khan, B.; Alatas, B. War strategy optimization algorithm: A new effective metaheuristic algorithm for global optimization. IEEE Access 2022, 10, 25073–25105. [Google Scholar] [CrossRef]
  29. Zhong, C.; Li, G.; Meng, Z. Beluga whale optimization: A novel nature-inspired metaheuristic algorithm. Knowl.-Based Syst. 2022, 251, 109215. [Google Scholar] [CrossRef]
  30. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  31. Liu, H.; Lei, D.; Yuan, J.; Yuan, G.; Cui, C.; Wang, Y.; Xue, W. Ionospheric TEC Prediction in China Based on the Multiple-Attention LSTM Model. Atmosphere 2022, 13, 1939. [Google Scholar] [CrossRef]
  32. Kaselimi, M.; Voulodimos, A.; Doulamis, N.; Doulamis, A.; Delikaraoglou, D. A causal long short-term memory sequence to sequence model for TEC prediction using GNSS observations. Remote Sens. 2020, 12, 1354. [Google Scholar] [CrossRef]
  33. Lin, X.; Wang, H.; Zhang, Q.; Yao, C.; Chen, C.; Cheng, L.; Li, Z. A spatiotemporal network model for global ionospheric TEC forecasting. Remote Sens. 2022, 14, 1717. [Google Scholar] [CrossRef]
  34. Xia, G.; Liu, M.; Zhang, F.; Zhou, C. CAiTST: Conv-attentional image time sequence transformer for ionospheric TEC maps forecast. Remote Sens. 2022, 14, 4223. [Google Scholar] [CrossRef]
  35. Xia, G.; Zhang, F.; Wang, C.; Zhou, C. ED-ConvLSTM: A Novel Global Ionospheric Total Electron Content Medium-Term Forecast Model. Space Weather 2022, 20, e2021SW002959. [Google Scholar] [CrossRef]
  36. Gao, X.; Yao, Y. A storm-time ionospheric TEC model with multichannel features by the spatiotemporal ConvLSTM network. J. Geod. 2023, 97, 9. [Google Scholar] [CrossRef]
  37. Huang, Z.; Li, Q.B.; Yuan, H. Forecasting of ionospheric vertical TEC 1-h ahead using a genetic algorithm and neural network. Adv. Space Res. 2015, 55, 1775–1783. [Google Scholar] [CrossRef]
  38. Liu, L.; Morton, Y.J.; Liu, Y. Machine Learning Prediction of Storm-Time High-Latitude Ionospheric Irregularities From GNSS-Derived ROTI Maps. Geophys. Res. Lett. 2021, 48, e2021GL095561. [Google Scholar] [CrossRef]
  39. Liu, L.; Morton, Y.J.; Liu, Y. ML prediction of global ionospheric TEC maps. Space Weather 2022, 20, e2022SW003135. [Google Scholar] [CrossRef]
  40. Tang, J.; Li, Y.; Ding, M.; Liu, H.; Yang, D.; Wu, X. An ionospheric TEC forecasting model based on a CNN-LSTM-attention mechanism neural network. Remote Sens. 2022, 14, 2433. [Google Scholar] [CrossRef]
  41. Lei, D.; Liu, H.; Le, H.; Huang, J.; Yuan, J.; Li, L.; Wang, Y. Ionospheric TEC Prediction Base on Attentional BiGRU. Atmosphere 2022, 13, 1039. [Google Scholar] [CrossRef]
  42. Tang, J.; Zhong, Z.; Hu, J.; Wu, X. Forecasting Regional Ionospheric TEC Maps over China Using BiConvGRU Deep Learning. Remote Sens. 2023, 15, 3405. [Google Scholar] [CrossRef]
  43. Shan, W.; Qiao, Z.; Heidari, A.A.; Chen, H.; Turabieh, H.; Teng, Y. Double adaptive weights for stabilization of moth flame optimizer: Balance analysis, engineering cases, and medical diagnosis. Knowl.-Based Syst. 2021, 214, 106728. [Google Scholar] [CrossRef]
  44. Braga, I.; Carmo, L.P.D.; Benatti, C.C.; Monard, M.C. A note on parameter selection for support vector machines. In Advances in Soft Computing and Its Applications; Castro, F., Gelbukh, A., Gonzalez, M., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; Volume 8266, pp. 233–244. [Google Scholar]
  45. Bergstra, J.; Bengio, Y. Random search for hyper-parameter optimization. J. Mach. Learn. Res. 2012, 13, 281–305. [Google Scholar]
  46. Frazier, P.I. A tutorial on Bayesian optimization. arXiv 2018, arXiv:1807.02811. [Google Scholar]
  47. Maroufpoor, S.; Bozorg-Haddad, O.; Maroufpoor, E. Reference evapotranspiration estimating based on optimal input combination and hybrid artificial intelligent model: Hybridization of artificial neural network with grey wolf optimizer algorithm. J. Hydrol. 2020, 588, 125060. [Google Scholar] [CrossRef]
  48. Ofori-Ntow Jnr, E.; Ziggah, Y.Y.; Relvas, S. Hybrid ensemble intelligent model based on wavelet transform, swarm intelligence and artificial neural network for electricity demand forecasting. Sustain. Cities Soc. 2021, 66, 102679. [Google Scholar] [CrossRef]
  49. Singh, P.; Chaudhury, S.; Panigrahi, B.K. Hybrid MPSO-CNN: Multi-level particle swarm optimized hyperparameters of convolutional neural network. Swarm Evol. Comput. 2021, 63, 100863. [Google Scholar] [CrossRef]
  50. Chen, H.; Yang, B.; Wang, S.; Wang, G.; Li, H.; Liu, W. Towards an optimal support vector machine classifier using a parallel particle swarm optimization strategy. Appl. Math. Comput. 2014, 239, 180–197. [Google Scholar] [CrossRef]
  51. Shah, H.; Ghazali, R.; Nawi, N.M. Hybrid ant bee colony algorithm for volcano temperature prediction. In Proceedings of the Emerging Trends and Applications in Information Communication Technologies: Second International Multi Topic Conference, IMTIC 2012, Jamshoro, Pakistan, 28–30 March 2012; Springer: Berlin/Heidelberg, Germany, 2012; pp. 453–465. [Google Scholar]
  52. Yuan, X.; Hu, G.; Zhong, J.; Wei, G. HBWO-JS: Jellyfish search boosted hybrid beluga whale optimization algorithm for engineering applications. J. Comput. Des. Eng. 2023, 10, 1615–1656. [Google Scholar] [CrossRef]
  53. Wang, H.; Wang, W.; Cui, Z.; Zhou, X.; Zhao, J.; Li, Y. A new dynamic firefly algorithm for demand estimation of water resources. Inf. Sci. 2018, 438, 95–106. [Google Scholar] [CrossRef]
  54. Amiri, M.H.; Mehrabi Hashjin, N.; Montazeri, M.; Mirjalili, S.; Khodadadi, N. Hippopotamus optimization algorithm: A novel nature-inspired optimization algorithm. Sci. Rep. 2024, 14, 5032. [Google Scholar] [CrossRef]
  55. Deng, X.; He, D.; Qu, L. A novel hybrid algorithm based on arithmetic optimization algorithm and particle swarm optimization for global optimization problems. J. Supercomput. 2024, 80, 8857–8897. [Google Scholar] [CrossRef]
  56. Li, K.; Huang, H.; Fu, S.; Ma, C.; Fan, Q.; Zhu, Y. A multi-strategy enhanced northern goshawk optimization algorithm for global optimization and engineering design problems. Comput. Methods Appl. Mech. Eng. 2023, 415, 116199. [Google Scholar] [CrossRef]
Figure 1. Flowchart of FAMBWO.
Figure 2. The convergence behavior of FAMBWO.
Figure 3. Convergence curves of different algorithms on the unimodal functions.
Figure 4. Convergence curves of different algorithms on the multimodal functions.
Figure 5. Convergence curves of different algorithms on composition functions with 30 dimensions.
Figure 6. Convergence curves of different algorithms on composition functions with 100 dimensions.
Figure 7. The raw and processed TEC: the upper shows the raw TEC, the middle shows the first-order difference, and the bottom shows the normalized TEC.
Figure 8. Schematic diagram of sample-making process.
Figure 9. MA-BiLSTM model structure.
Figure 10. Flowchart of FAMBWO-MA-BiLSTM.
Figure 11. Comparison of prediction errors among 4 frameworks.
Table 1. Details of benchmark problems for unimodal functions.
No. | Name | Function | Range | f_min
F1 | Sphere | $\sum_{i=1}^{D} x_i^2$ | [−100, 100] | 0
F2 | Schwefel's 2.22 | $\sum_{i=1}^{D}|x_i| + \prod_{i=1}^{D}|x_i|$ | [−10, 10] | 0
F3 | Powell Sum | $\sum_{i=1}^{D}|x_i|^{i+1}$ | [−1, 1] | 0
F4 | Schwefel's 1.2 | $\sum_{i=1}^{D}\left(\sum_{j=1}^{i} x_j\right)^2$ | [−100, 100] | 0
F5 | Schwefel's | $\max_i\left\{|x_i|,\ 1 \le i \le D\right\}$ | [−100, 100] | 0
F6 | Rosenbrock | $\sum_{i=1}^{D-1}\left[100\left(x_{i+1} - x_i^2\right)^2 + (x_i - 1)^2\right]$ | [−30, 30] | 0
F7 | Step | $\sum_{i=1}^{D}\left(\lfloor x_i + 0.5\rfloor\right)^2$ | [−100, 100] | 0
F8 | Quartic | $\sum_{i=1}^{D} i x_i^4 + \mathrm{random}[0, 1)$ | [−1.28, 1.28] | 0
F9 | Zakharov | $\sum_{i=1}^{D} x_i^2 + \left(\sum_{i=1}^{D} 0.5\, i\, x_i\right)^2 + \left(\sum_{i=1}^{D} 0.5\, i\, x_i\right)^4$ | [−5, 10] | 0
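To make these definitions concrete, the Sphere (F1) and Rosenbrock (F6) benchmarks from Table 1 could be implemented as in the following NumPy sketch (for illustration only; the experiments use the full benchmark suite):

```python
import numpy as np

def sphere(x):
    """F1 Sphere: sum of squared components, global minimum 0 at the origin."""
    x = np.asarray(x, dtype=float)
    return np.sum(x ** 2)

def rosenbrock(x):
    """F6 Rosenbrock: narrow curved valley, global minimum 0 at x = (1, ..., 1)."""
    x = np.asarray(x, dtype=float)
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1.0) ** 2)

# Example evaluation on a random 30-dimensional point within the stated ranges
# point = np.random.uniform(-30, 30, size=30)
# print(sphere(point), rosenbrock(point))
```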
Table 2. Details of benchmark problems for multimodal functions.
No. | Name | Function | Range | f_min
F10 | Schwefel | $-\sum_{i=1}^{D} x_i \sin\left(\sqrt{|x_i|}\right)$ | [−500, 500] | −418.98 × d
F11 | Periodic | $1 + \sum_{i=1}^{D}\sin^2(x_i) - \exp\left(-\sum_{i=1}^{D} x_i^2\right)$ | [−10, 10] | 0
F12 | Styblinski–Tang | $0.5\sum_{i=1}^{D}\left(x_i^4 - 16x_i^2 + 5x_i\right)$ | [−5, 5] | −39.116 × d
F13 | Rastrigin | $\sum_{i=1}^{D}\left[x_i^2 - 10\cos(2\pi x_i) + 10\right]$ | [−5.12, 5.12] | 0
F14 | Ackley | $-20\exp\left(-0.2\sqrt{\tfrac{1}{D}\sum_{i=1}^{D} x_i^2}\right) - \exp\left(\tfrac{1}{D}\sum_{i=1}^{D}\cos(2\pi x_i)\right) + 20 + e$ | [−32, 32] | 0
F15 | Griewank | $\tfrac{1}{4000}\sum_{i=1}^{D} x_i^2 - \prod_{i=1}^{D}\cos\left(\tfrac{x_i}{\sqrt{i}}\right) + 1$ | [−600, 600] | 0
F16 | Xin-She Yang N.4 | $\left(\sum_{i=1}^{D}\sin^2(x_i) - \exp\left(-\sum_{i=1}^{D} x_i^2\right)\right)\exp\left(-\sum_{i=1}^{D}\sin^2\sqrt{|x_i|}\right)$ | [−10, 10] | −1
F17 | Penalized | $\tfrac{\pi}{D}\left\{10\sin^2(\pi y_1) + \sum_{i=1}^{D-1}(y_i - 1)^2\left[1 + 10\sin^2(\pi y_{i+1})\right] + (y_D - 1)^2\right\} + \sum_{i=1}^{D} u(x_i, 10, 100, 4)$ | [−50, 50] | 0
F18 | Penalized2 | $0.1\left\{\sin^2(3\pi x_1) + \sum_{i=1}^{D-1}(x_i - 1)^2\left[1 + \sin^2(3\pi x_{i+1})\right] + (x_D - 1)^2\left[1 + \sin^2(2\pi x_D)\right]\right\} + \sum_{i=1}^{D} u(x_i, 5, 100, 4)$ | [−50, 50] | 0
F19 | Foxholes | $\left(\tfrac{1}{500} + \sum_{j=1}^{25}\tfrac{1}{j + \sum_{i=1}^{2}(x_i - a_{ij})^6}\right)^{-1}$ | [−65.536, 65.536] | 0.998
F20 | Kowalik | $\sum_{i=1}^{11}\left[a_i - \tfrac{x_1(b_i^2 + b_i x_2)}{b_i^2 + b_i x_3 + x_4}\right]^2$ | [−5, 5] | 0.000308
F21 | Six Hump Camel | $4x_1^2 - 2.1x_1^4 + \tfrac{1}{3}x_1^6 + x_1 x_2 - 4x_2^2 + 4x_2^4$ | [−5, 5] | −1.0316
F22 | Shekel 5 | $-\sum_{i=1}^{5}\left[(x - a_i)(x - a_i)^{T} + c_i\right]^{-1}$ | [0, 10] | −10.1532
F23 | Shekel 7 | $-\sum_{i=1}^{7}\left[(x - a_i)(x - a_i)^{T} + c_i\right]^{-1}$ | [0, 10] | −10.4028
F24 | Shekel 10 | $-\sum_{i=1}^{10}\left[(x - a_i)(x - a_i)^{T} + c_i\right]^{-1}$ | [0, 10] | −10.5364
Table 3. Details of benchmark problems for composition functions.
No. | Name | Function | Brief Expression | f_min
F25 | CEC2017-F21 | Composition Function 1 (N = 3) | $\sum_{i=1}^{N}\omega_i\left[\lambda_i g_i(x) + \mathrm{bias}_i\right] + F_{21}^{*}$, $g(x) = \{f_4, f_{11}, f_5\}$ | 2100
F26 | CEC2017-F22 | Composition Function 2 (N = 3) | $\sum_{i=1}^{N}\omega_i\left[\lambda_i g_i(x) + \mathrm{bias}_i\right] + F_{22}^{*}$, $g(x) = \{f_5, f_{15}, f_{10}\}$ | 2200
F27 | CEC2017-F23 | Composition Function 3 (N = 4) | $\sum_{i=1}^{N}\omega_i\left[\lambda_i g_i(x) + \mathrm{bias}_i\right] + F_{23}^{*}$, $g(x) = \{f_4, f_{13}, f_{10}, f_5\}$ | 2300
F28 | CEC2017-F24 | Composition Function 4 (N = 4) | $\sum_{i=1}^{N}\omega_i\left[\lambda_i g_i(x) + \mathrm{bias}_i\right] + F_{24}^{*}$, $g(x) = \{f_{13}, f_{11}, f_{15}, f_5\}$ | 2400
F29 | CEC2017-F25 | Composition Function 5 (N = 5) | $\sum_{i=1}^{N}\omega_i\left[\lambda_i g_i(x) + \mathrm{bias}_i\right] + F_{25}^{*}$, $g(x) = \{f_5, f_{17}, f_{13}, f_{12}, f_4\}$ | 2500
F30 | CEC2017-F26 | Composition Function 6 (N = 5) | $\sum_{i=1}^{N}\omega_i\left[\lambda_i g_i(x) + \mathrm{bias}_i\right] + F_{26}^{*}$, $g(x) = \{f_6, f_{10}, f_{15}, f_4, f_5\}$ | 2600
Table 5. Various BWOs from three strategies.
Method | CCM | FA | CMT
BWO | 0 | 0 | 0
CCM_BWO | 1 | 0 | 0
CMT_BWO | 0 | 0 | 1
FA_BWO | 0 | 1 | 0
CCM_CMT_BWO | 1 | 0 | 1
CCM_FA_BWO | 1 | 1 | 0
CMT_FA_BWO | 0 | 1 | 1
FAMBWO | 1 | 1 | 1
Table 6. Friedman test results on benchmark functions (F1–F30, dim = 30).
Method | Rank | +/−/= | Avg
FAMBWO | 1 | ~ | 1.4
BWO | 7 | 22/1/7 | 5.2
CCM_BWO | 8 | 23/0/7 | 5.2333
CMT_BWO | 6 | 23/0/7 | 4.5333
FA_BWO | 5 | 22/1/7 | 4.5333
CCM_CMT_BWO | 3 | 20/3/7 | 2.9
CCM_FA_BWO | 4 | 21/2/7 | 3.6333
CMT_FA_BWO | 2 | 20/3/7 | 1.9333
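For reference, average ranks of this kind are obtained by ranking the competing variants on each benchmark function and averaging the ranks across functions. A minimal sketch of such an analysis with SciPy (using a placeholder results matrix, not the data of this study) is:

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

# results[i, j] = mean error of algorithm j on benchmark function i (placeholder data)
results = np.random.rand(30, 8)

# Average rank of each algorithm across the 30 functions (lower error -> better rank)
avg_ranks = rankdata(results, axis=1).mean(axis=0)

# Friedman test: are the algorithms' rank distributions significantly different?
stat, p_value = friedmanchisquare(*results.T)
print(avg_ranks, stat, p_value)
```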
Table 8. Friedman test results on multimodal functions (F10–F24, dim = 30).
Method | Rank | Avg
FAMBWO | 1 | 1.6666
PSO | 9 | 8.0
MFO | 10 | 8.2
DE | 8 | 7.6667
SCA | 12 | 9.4667
SSA | 11 | 8.8
WOA | 7 | 7.1333
SOA | 4 | 4.2
GWO | 6 | 6.2
DBO | 5 | 5.8667
WSO | 2 | 2.4
BWO | 3 | 3.1333
Table 9. Friedman test results on composition functions (F25–F30, dim = 30).
Method | Rank | Avg
FAMBWO | 1 | 1.1667
PSO | 8 | 6.5
MFO | 5 | 6
DE | 3 | 4
SCA | 10 | 9.3333
SSA | 7 | 6.3333
WOA | 4 | 4.8333
SOA | 12 | 11.1667
GWO | 11 | 10.8333
DBO | 6 | 6.1167
WSO | 9 | 8.5
BWO | 2 | 3.1667
Table 10. Friedman test results on high-dimensional composition functions (F25–F30, dim = 100).
Method | Rank | Avg
FAMBWO | 1 | 1.1667
PSO | 8 | 9.1667
MFO | 5 | 5.6667
DE | 3 | 6.5
SCA | 10 | 5.5
SSA | 7 | 8.5
WOA | 4 | 6.1667
SOA | 12 | 5.3333
GWO | 11 | 6.8333
DBO | 6 | 9.1667
WSO | 9 | 7
BWO | 2 | 7
Table 11. Search space for the 4 hyperparameters in FAMBWO-MA-BiLSTM.
Hyperparameter | Description | Range
m | Units in the BiLSTM layer | 32~128
r | Parameter of the dropout layer, representing the proportion of dropout | 0.1~0.4
learning rate | Training hyperparameter that controls the step size of each parameter update | 0.05~0.2
batch size | Training hyperparameter representing the number of samples used in each iteration | 32~128
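As an illustration of how a FAMBWO search agent's position could be mapped onto the four hyperparameters in Table 11 (a sketch under assumed conventions: positions are taken to lie in [0, 1] and integer-valued hyperparameters are rounded; the actual encoding used in the framework may differ):

```python
import numpy as np

# Search space from Table 11: (lower bound, upper bound, is_integer)
SPACE = {
    "units":         (32,   128,  True),    # m: units in the BiLSTM layer
    "dropout":       (0.1,  0.4,  False),   # r: dropout proportion
    "learning_rate": (0.05, 0.2,  False),
    "batch_size":    (32,   128,  True),
}

def decode_position(position):
    """Map a position vector in [0, 1]^4 to a concrete hyperparameter setting."""
    hparams = {}
    for p, (name, (lo, hi, is_int)) in zip(position, SPACE.items()):
        value = lo + float(np.clip(p, 0.0, 1.0)) * (hi - lo)
        hparams[name] = int(round(value)) if is_int else value
    return hparams

# Example: decode one candidate position produced by the optimizer
# print(decode_position([0.5, 0.2, 0.1, 0.9]))
```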
Table 12. Comparison of prediction performance.
Model | MSE (TECU) | RMSE (TECU) | MAE (TECU) | R²
GS-MA-BiLSTM | 10.1761 | 3.19 | 2.28 | 0.9697
RS-MA-BiLSTM | 9.7969 | 3.13 | 2.19 | 0.9718
BOA-MA-BiLSTM | 9.4864 | 3.08 | 2.15 | 0.9725
BWO-MA-BiLSTM | 8.8209 | 2.97 | 2.06 | 0.9782
FAMBWO-MA-BiLSTM | 8.2944 | 2.88 | 1.97 | 0.9803
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

